Compare commits


1381 Commits

Author SHA1 Message Date
Gleb Natapov
64f1aa8d99 mutation_partition: correctly measure static row size when doing digest calculation
The code uses an incorrect output stream when only a digest is requested,
and thus computes an incorrect data size. Failing to correctly account
for the static row size while calculating the digest may cause a mismatch
between the digest and the data query.

Fixes #3753.

Message-Id: <20180905131219.GD2326@scylladb.com>
(cherry picked from commit 98092353df)
2018-09-06 16:51:31 +03:00
Eliran Sinvani
280e6eedb9 cql3: ensure repeated values in IN clauses don't return repeated rows
When the list of values in the IN clause of a single column contains
duplicates, multiple executors are activated, since the assumption
is that each value in the IN list corresponds to a different partition.
This results in the same row appearing in the result as many times
as the partition value is duplicated.

Added queries to the IN-restriction unit test and fixed a bad result check.

Fixes #2837
Tests: queries as in the use case from the GitHub issue, in both
prepared and plain form (using the Python driver); unit tests.

Signed-off-by: Eliran Sinvani <eliransin@scylladb.com>
Message-Id: <ad88b7218fa55466be7bc4303dc50326a3d59733.1534322238.git.eliransin@scylladb.com>
(cherry picked from commit d734d316a6)
2018-08-26 15:52:18 +03:00
Tomasz Grabiec
f80f15a6af Merge 'Fix multi-cell static list updates in the presence of ckeys' from Duarte
Fixes a regression introduced in
9e88b60ef5, which broke the lookup for
prefetched values of lists when a clustering key is specified.

This is the code that was removed from some list operations:

 std::experimental::optional<clustering_key> row_key;
 if (!column.is_static()) {
   row_key = clustering_key::from_clustering_prefix(*params._schema, prefix);
 }
 ...
 auto&& existing_list = params.get_prefetched_list(m.key().view(), row_key, column);

Put it back, in the form of common code in the update_parameters class.

Fixes #3703

* https://github.com/duarten/scylla cql-list-fixes/v1:
  tests/cql_query_test: Test multi-cell static list updates with ckeys
  cql3/lists: Fix multi-cell static list updates in the presence of ckeys
  keys: Add factory for an empty clustering_key_prefix_view

(cherry picked from commit 6937cc2d1c)
2018-08-21 17:37:36 +01:00
Duarte Nunes
d0eb0c0b90 cql3/query_options: Use _value_views in prepare()
_value_views is the authoritative data structure for the
client-specified values. Indeed, the ctor called by
transport::request::read_options() leaves _values completely empty.

In query_options::prepare(), however, we were using _values, not
_value_views, to associate values with the client-specified column
names. Fix this by using _value_views instead.

As for why we didn't see this bug earlier, I assume it's because very
few drivers set the 0x04 query options flag, so column names are
usually omitted. That is the right thing to do, since most drivers have
enough information to correctly position the values.

Fixes #3688

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20180814234605.14775-1-duarte@scylladb.com>
(cherry picked from commit a4355fe7e7)
2018-08-21 18:24:06 +03:00
Jesse Haber-Kucharsky
1427c4d428 auth: Don't use unsupported hashing algorithms
In previous versions of Fedora, the `crypt_r` function returned
`nullptr` when a requested hashing algorithm was not supported.

This is consistent with the documentation of the function in its man
page.

As of Fedora 28, the function's behavior changes so that the encrypted
text is not `nullptr` on error, but instead the string "*0".

The info pages for `crypt_r` clarify somewhat (and contradict the man
pages):

    Some implementations return `NULL` on failure, and others return an
    _invalid_ hashed passphrase, which will begin with a `*` and will
    not be the same as SALT.

Because of this change of behavior, users running Scylla on a Fedora 28
machine which was upgraded from a previous release would not be able to
authenticate: an unsupported hashing algorithm would be selected,
producing encrypted text that did not match the entry in the table.

With this change, unsupported algorithms are correctly detected and
users should be able to continue to authenticate themselves.

Fixes #3637.

Signed-off-by: Jesse Haber-Kucharsky <jhaberku@scylladb.com>
Message-Id: <bcd708f3ec195870fa2b0d147c8910fb63db7e0e.1533322594.git.jhaberku@scylladb.com>
(cherry picked from commit fce10f2c6e)
2018-08-05 10:30:58 +03:00
Gleb Natapov
034f2cb42d cache_hitrate_calculator: fix race when new table is added during calculations
The calculation consists of several parts with preemption points between
them, so a table can be added while a calculation is ongoing. Do not
assume that the table exists in the intermediate data structure.

Fixes #3636

Message-Id: <20180801093147.GD23569@scylladb.com>
(cherry picked from commit 44a6afad8c)
2018-08-01 14:30:58 +03:00
Amos Kong
e043a5c276 scylla_setup: fix conditional statement of silent mode
Commit 300af65555 introduced a problem in a conditional statement: the
script will always abort in silent mode, regardless of the return
value.

Fixes #3485

Signed-off-by: Amos Kong <amos@scylladb.com>
Message-Id: <1c12ab04651352964a176368f8ee28f19ae43c68.1528077114.git.amos@scylladb.com>
(cherry picked from commit 364c2551c8)
2018-07-25 12:34:11 +03:00
Takuya ASADA
5da9bd3a6e dist/common/scripts/scylla_setup: abort running script when one of setup failed in silent mode
The current script silently continues even when one of the setup steps
fails; it needs to abort instead.

Fixes #3433

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20180522180355.1648-1-syuu@scylladb.com>
(cherry picked from commit 300af65555)
2018-07-25 12:34:11 +03:00
Avi Kivity
3578027e2e Merge "row_cache: Fix violation of continuity on concurrent eviction and population" from Tomasz
"
The problem happens under the following circumstances:

  - we have a partially populated partition in cache, with a gap in the middle

  - a read with no clustering restrictions trying to populate that gap

  - eviction of the entry for the lower bound of the gap concurrent with population

The population may incorrectly mark the range before the gap as continuous.
This may result in temporary loss of writes in that clustering range. The
problem heals by clearing cache.

Caught by row_cache_test::test_concurrent_reads_and_eviction, which has been
failing sporadically.

The problem is in ensure_population_lower_bound(), which returns true if
the current clustering range covers all rows, meaning the populator has
the right to set the continuity flag to true on the row it inserts. This
is correct only if the current population range actually starts before
all clustering rows. Otherwise, we're populating since _last_row and
should consult it.

Fixes #3608.
"

* 'tgrabiec/fix-violation-of-continuity-on-concurrent-read-and-eviction' of github.com:tgrabiec/scylla:
  row_cache: Fix violation of continuity on concurrent eviction and population
  position_in_partition: Introduce is_before_all_clustered_rows()

(cherry picked from commit 31151cadd4)
2018-07-25 12:34:11 +03:00
Shlomi Livne
7d2150a057 release: prepare for 2.1.6
Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2018-07-01 22:35:26 +03:00
Avi Kivity
afd3c571cc Merge "Backport 'Disable sstable filtering based on min/max clustering key components' to 2.1" from Tomasz
"
Changes made:
  - switched the test to use do_with_cql_env_thread due to lack of SEASTAR_TEST_CASE_THREAD macro
  - imported make_local_key() from master, needed for the database_test to pass
"

* tag 'tgrabiec/disable-min-max-sstable-filtering-v1-branch-2.1' of github.com:tgrabiec/scylla:
  Merge "Disable sstable filtering based on min/max clustering key components" from Tomasz
  tests: simple_schema: Generate local keys from make_pkeys()
  tests: Import make_local_key() from master
2018-06-28 12:41:00 +03:00
Avi Kivity
093c8512db Merge "Disable sstable filtering based on min/max clustering key components" from Tomasz
"
With DateTiered and TimeWindow, there is a read optimization enabled
which excludes sstables based on overlap with recorded min/max values
of clustering key components. The problem is that it doesn't take into
account partition tombstones and static rows, which should still be
returned by the reader even if there is no overlap in the query's
clustering range. A read which returns no clustering rows can
mispopulate the cache, which will appear as a partition deletion or as
writes to the static row being lost, until the node restarts or the
partition entry is evicted.

There is also a bad interaction between cache population on read and
that optimization. When the clustering range of the query doesn't
overlap with any sstable, the reader will return no partition markers
for the read, which leads cache populator to assume there is no
partition in sstables and it will cache an empty partition. This will
cause later reads of that partition to miss prior writes to that
partition until it is evicted from cache or node is restarted.

Disable until a more elaborate fix is implemented.

Fixes #3552
Fixes #3553
"

* tag 'tgrabiec/disable-min-max-sstable-filtering-v1' of github.com:tgrabiec/scylla:
  tests: Add test for slicing a mutation source with date tiered compaction strategy
  tests: Check that database conforms to mutation source
  database: Disable sstable filtering based on min/max clustering key components

(cherry picked from commit e1efda8b0c)
2018-06-28 11:10:41 +02:00
Tomasz Grabiec
9c0b8ec736 tests: simple_schema: Generate local keys from make_pkeys()
Extracted from commit 2b0b703615
2018-06-28 11:10:41 +02:00
Tomasz Grabiec
1794b732b0 tests: Import make_local_key() from master
Imported from master at 8a25bd467c69df94ea3f3638b42d36beee20adf0
2018-06-28 11:10:41 +02:00
Avi Kivity
c1ac4fb8b0 Update seastar submodule
* seastar 2a2c1d2...c89c8b8 (1):
  > tests/test-utils: Add macro for running tests within a seastar thread

Needed for tests in the following patch.
2018-06-28 10:00:05 +03:00
Asias He
2e7e59fb50 gossip: Fix tokens assignment in assassinate_endpoint
The tokens vector is defined a few lines above and is needed outside
the if block.

Do not redefine it in the if block; otherwise the tokens will be empty.

Found by code inspection.

Fixes #3551.

Message-Id: <c7a06375c65c950e94236571127f533e5a60cbfd.1530002177.git.asias@scylladb.com>
(cherry picked from commit c3b5a2ecd5)
2018-06-27 12:00:58 +03:00
Vladimir Krivopalov
af29d4bed3 Fix Scylla compilation with Crypto++ v6.
In Crypto++ v6, the `byte` typedef has been moved from the global
namespace to the `CryptoPP::` namespace.

This fix brings in the CryptoPP namespace so that the `byte` typedef is
seen with both old and new versions of Crypto++.

Fixes #3252.

Signed-off-by: Vladimir Krivopalov <vladimir@scylladb.com>
Message-Id: <799d055be710231884d101a52c0be8ed8b0a9806.1520125889.git.vladimir@scylladb.com>
(cherry picked from commit 99bd5180ba)
2018-06-25 17:49:32 +03:00
Shlomi Livne
72494bbe05 release: prepare for 2.1.5
Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2018-06-19 09:05:55 +03:00
Avi Kivity
5784823888 Update scylla-ami submodule
* dist/ami/files/scylla-ami c5d9e96...0df779d (1):
  > scylla_install_ami: Update CentOS to latest version

Fixes #3523.
2018-06-17 12:12:21 +03:00
Takuya ASADA
a7633be1a9 Revert "dist/ami: update CentOS base image to latest version"
This reverts commit 69d226625a.
Since ami-4bf3d731 is a Marketplace AMI, it is not possible to publish a public AMI based on it.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20180523112414.27307-1-syuu@scylladb.com>
(cherry picked from commit 55d6be9254)
2018-06-17 11:33:55 +03:00
Takuya ASADA
e78ded74ce dist/debian: add --jobs <njobs> option just like build_rpm.sh
In some build environments we may want to limit the number of parallel
jobs: ninja-build runs ncpus jobs by default, which may be too many
since g++ consumes a huge amount of memory.
So support --jobs <njobs>, just like the rpm build script.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20180425205439.30053-1-syuu@scylladb.com>
(cherry picked from commit 782ebcece4)
2018-06-14 15:05:09 +03:00
Avi Kivity
6615c2a6a9 database: stop using incremental selectors
There is a bug in incremental_selector for partitioned_sstable_set, so
until it is found, stop using it.

This degrades scan performance of Leveled Compaction Strategy tables.

Fixes #3513. (as a workaround)
Introduced: 2.1
Message-Id: <20180613131547.19084-1-avi@scylladb.com>

(cherry picked from commit aeffbb6732)
2018-06-14 10:52:39 +03:00
Vlad Zolotarov
11500ccd3a locator::ec2_multi_region_snitch: don't call for ec2_snitch::gossiper_starting()
ec2_snitch::gossiper_starting() calls the base class (default) method,
which sets _gossip_started to true and thereby prevents the following
reconnectable_snitch_helper registration.

Fixes #3454

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1528208520-28046-1-git-send-email-vladz@scylladb.com>
(cherry picked from commit 2dde372ae6)
2018-06-14 10:52:39 +03:00
Shlomi Livne
955f3eeb56 release: prepare for 2.1.4
Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2018-06-06 11:27:01 +03:00
Avi Kivity
08bfd96774 Update seastar submodule
* seastar 675acd5...2a2c1d2 (1):
  > tls: Ensure handshake always drains output before return/throw

Fixes #3461.
2018-05-31 12:06:13 +03:00
Mika Eloranta
f6c4d558eb build: fix rpm build script --jobs N handling
Fixes argument misquoting in the $SRPM_OPTS expansion for the mock
commands and makes the --jobs argument work as intended.

Signed-off-by: Mika Eloranta <mel@aiven.io>
Message-Id: <20180113212904.85907-1-mel@aiven.io>
(cherry picked from commit 7266446227)
2018-05-27 10:25:26 +03:00
Avi Kivity
0040ff6de2 Update seastar submodule
* seastar 0e6dcd5...675acd5 (1):
  > net/tls: Wait for output to be sent when shutting down

Fixes #3459.
2018-05-24 12:03:10 +03:00
Glauber Costa
c238bc7a81 commitlog: don't move pointer to segment
We currently move the pointer we acquired to the segment into the
lambda in which we handle the cycle.

The problem is that we also use that same pointer inside the exception
handler. If an exception happens, we'll access it and crash.

Probably #3440.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <20180518125820.10726-1-glauber@scylladb.com>
(cherry picked from commit 596a525950)
2018-05-19 19:13:58 +03:00
Avi Kivity
3b984a4293 dist: redhat: get rid of raid0.devices_discard_performance
This parameter is not available on recent Red Hat kernels or on
non-Red Hat kernels (it was removed on 3.10.0-772.el7,
RHBZ 1455932). The presence of the parameter on kernels that don't
support it causes the module load to fail, with the result that the
storage is not available.

Fix by removing the parameter. For someone running an older Red Hat
kernel the effect will be that discard is disabled, but they can fix
that by updating the kernel. For someone running a newer kernel, the
effect will be that they can access their data.

Fixes #3437.
Message-Id: <20180516134913.6540-1-avi@scylladb.com>

(cherry picked from commit 3b8118d4e5)
2018-05-19 19:13:58 +03:00
Takuya ASADA
156761d77e dist/ami: update CentOS base image to latest version
Since we require an updated version of systemd, we need to update the
CentOS base image.

Fixes #3184

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1518118694-23770-1-git-send-email-syuu@scylladb.com>

Conflicts:
	dist/ami/build_ami.sh

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20180508083521.18661-1-syuu@scylladb.com>
2018-05-19 19:13:58 +03:00
Avi Kivity
8e33e80ad3 release: prepare for 2.1.3 2018-04-25 09:01:30 +03:00
Duarte Nunes
c35dd86c87 db/schema_tables: Only drop UDTs after merging tables
Dropping a user type requires that all tables using that type also be
dropped. However, a type may appear to be dropped at the same time as
a table, for instance due to the order in which a node receives schema
notifications, or when dropping a keyspace.

When dropping a table, if we build a schema in a shard through a
global_schema_pointer, then we'll check for the existence of any user
type the schema employs. We thus need to ensure types are only dropped
after tables, similarly to how it's done for keyspaces.

Fixes #3068

Tests: unit-tests (release)

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20180129114137.85149-1-duarte@scylladb.com>
(cherry picked from commit 1e3fae5bef)
2018-04-25 01:15:25 +03:00
Pekka Enberg
87cb8a1fa4 release: prepare for 2.1.2 2018-04-17 09:45:00 +03:00
Takuya ASADA
26f3340c32 dist/debian: use ~root as HOME to place .pbuilderrc
When 'always_set_home' is specified in /etc/sudoers, pbuilder won't read
.pbuilderrc from the current user's home directory, and we don't have a
way to change that behavior via a sudo command parameter.

So let's use ~root/.pbuilderrc and switch to HOME=/root when running
under sudo; this works in environments both with and without
always_set_home.

Fixes #3366

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1523926024-3937-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit ace44784e8)
2018-04-17 09:39:15 +03:00
Avi Kivity
aaba093371 Update seastar submodule
* seastar af1b789...0e6dcd5 (1):
  > tls: Ensure we always pass through semaphores on shutdown

Fixes #3358.
2018-04-14 20:52:02 +03:00
Gleb Natapov
a64c6e6be9 cql_server: fix a race between closing of a connection and notifier registration
There is a race between cql connection closure and notifier
registration. If a connection is closed before notification
registration is complete, a stale pointer to the connection will remain
in the notification list, since the attempt to unregister the
connection happens too early. The fix is to move notifier
unregistration to after the connection's gate is closed, which ensures
that there is no outstanding registration request. But this means that
a connection with a closed gate can now be in the notifier list, so
with_gate() may throw and abort the notifier loop. Fix that by
replacing with_gate() with a call to is_closed().

Fixes: #3355
Tests: unit(release)

Message-Id: <20180412134744.GB22593@scylladb.com>
(cherry picked from commit 1a9aaece3e)
2018-04-12 16:57:18 +03:00
Duarte Nunes
c83d2d0d77 db/view: Reject view entries with non-composite, empty partition key
Empty partition keys are not supported on normal tables - they cannot
be inserted or queried (surprisingly, the rules for composite
partition keys are different: all components are then allowed to be
empty). However, the (non-composite) partition key of a view could end
up being empty if that column is a base table regular column, a
base table clustering key column, or a base table partition key column
that is part of a composite key.

Fixes #3262
Refs CASSANDRA-14345

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20180403122244.10626-1-duarte@scylladb.com>
(cherry picked from commit ec8960df45)
2018-04-03 19:08:38 +03:00
Asias He
0aa49d0311 gossip: Relax generation max difference check
1. Start nodes 1, 2, and 3
2. Shut down node2
3. Shut down node1 and node3
4. Start node1 and node3
5. nodetool removenode node2
6. Clean up all Scylla data on node2
7. Bootstrap node2 as a new node

I saw that node2 could not bootstrap; it was stuck forever waiting for schema information to complete:

On node1, node3

    [shard 0] gossip - received an invalid gossip generation for peer 127.0.0.2; local generation = 2, received generation = 1521779704

On node2

    [shard 0] storage_service - JOINING: waiting for schema information to complete

This is because during the nodetool removenode operation, the generation of node2 was increased from 0 to 2.

   gossiper::advertise_removing () calls eps.get_heart_beat_state().force_newer_generation_unsafe();
   gossiper::advertise_token_removed() calls eps.get_heart_beat_state().force_newer_generation_unsafe();

Each force_newer_generation_unsafe increases the generation by 1.

Here is an example,

Before nodetool removenode:
```
curl -X GET --header "Accept: application/json" "http://127.0.0.1:10000/failure_detector/endpoints/" | python -mjson.tool
   {
   "addrs": "127.0.0.2",
   "generation": 0,
   "is_alive": false,
   "update_time": 1521778757334,
   "version": 0
   },
```

After nodetool removenode:
```
curl -X GET --header "Accept: application/json" "http://127.0.0.1:10000/failure_detector/endpoints/" | python -mjson.tool
 {
     "addrs": "127.0.0.2",
     "application_state": [
         {
             "application_state": 0,
             "value": "removed,146b52d5-dc94-4e35-b7d4-4f64be0d2672,1522038476246",
             "version": 214
         },
         {
             "application_state": 6,
             "value": "REMOVER,14ecc9b0-4b88-4ff3-9c96-38505fb4968a",
             "version": 153
            }
     ],
     "generation": 2,
     "is_alive": false,
     "update_time": 1521779276246,
     "version": 0
 },
```

In gossiper::apply_state_locally, we have this check:

```
if (local_generation != 0 && remote_generation > local_generation + MAX_GENERATION_DIFFERENCE) {
    // assume some peer has corrupted memory and is broadcasting an unbelievable generation about another peer (or itself)
  logger.warn("received an invalid gossip generation for peer {}; local generation = {}, received generation = {}",ep, local_generation, remote_generation);

}
```
to skip the gossip update.

To fix, we relax the generation max difference check to allow the
generation of a removed node.

After this patch, the removed node bootstraps successfully.

Tests: dtest:update_cluster_layout_tests.py
Fixes #3331

Message-Id: <678fb60f6b370d3ca050c768f705a8f2fd4b1287.1522289822.git.asias@scylladb.com>
(cherry picked from commit f539e993d3)
2018-04-03 19:08:38 +03:00
Shlomi Livne
cce455b1f5 release: prepare for 2.1.1
Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2018-03-25 09:32:02 +03:00
Avi Kivity
6772f3806b tests: mutation_source_test: fix scattering of partition tombstone
The partition tombstone is not part of a mutation_fragment in the old
streamed_mutation, so it was not scattered correctly by fragment_scatterer.
This causes test failures if the mutations to be scattered have a partition
tombstone.

Fix by calling consume(tombstone) directly. This isn't nice, but the code
is dead anyway.
2018-03-24 15:15:02 +03:00
Avi Kivity
6c9d699835 Merge "Fix abort during counter table read-on-delete" from Tomasz
"
This fixes an abort in an sstable reader when querying a partition with no
clustering ranges (happens on counter table mutation with no live rows) which
also doesn't have any static columns. In such case, the
sstable_mutation_reader will setup the data_consume_context such that it only
covers the static row of the partition, knowing that there is no need to read
any clustered rows. See partition.cc::advance_to_upper_bound(). Later when
the reader is done with the range for the static row, it will try to skip to
the first clustering range (missing in this case). If clustering_ranges_walker
tells us to skip to after_all_clustering_rows(), we will hit an assert inside
continuous_data_consumer::fast_forward_to() due to an attempt to skip past the
original data file range. If clustering_ranges_walker returns
before_all_clustering_rows() instead, all is fine because we're still at the
same data file position.

Fixes #3304.
"

* 'tgrabiec/fix-counter-read-no-static-columns' of github.com:scylladb/seastar-dev:
  tests: mutation_source_test: Test reads with no clustering ranges and no static columns
  tests: simple_schema: Allow creating schema with no static column
  clustering_ranges_walker: Stop after static row in case no clustering ranges

(cherry picked from commit 054854839a)
2018-03-23 10:47:23 +03:00
Vlad Zolotarov
a75e1632c8 test.py: limit the tests to run on 2 shards with 4GB of memory
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
(cherry picked from commit 57a6ed5aaa)
2018-03-22 12:45:25 +02:00
Jesse Haber-Kucharsky
c5718bf620 auth: Fix improper sharing of sharded service
This change is backported from 092f2e659c.

Previously, the sharded permissions cache was only accessible to the
implementation of `auth::service` in `auth/service.cc`. The intention
was that invoking `auth::service::get_permissions` on shard `k` would
query the cache on shard `k`, which would in turn depend on
`auth::service` on shard k to check for superuser status.

The problem is in `auth::service::start`.

`seastar::sharded<auth::permissions_cache>::start` is invoked with
`*this` of shard 0, causing all instances of the cache to reference the
same object.

I wasn't able to locally reproduce errors or crashes due to this bug
when I compiled a release build of Scylla. However, running a debug
build meant that the glorious `seastar::debug_shared_ptr_counter_type`
quickly saved the day with its checks that `seastar::shared_ptr` isn't
being misused.

To eliminate this problem, we move ownership of a single instance of
`auth::permissions_cache` to a single instance of `auth::service`. When
`auth::service` is sharded, so is the permissions cache.

I verified interactively that no assertions failed in debug mode with
this change.

Fixes #3296.

Tests: unit (debug, release)
Signed-off-by: Jesse Haber-Kucharsky <jhaberku@scylladb.com>
Message-Id: <280a889f551180db1c00d8a80eddf85b2ff0ac60.1521696176.git.jhaberku@scylladb.com>
2018-03-22 10:04:50 +02:00
Duarte Nunes
2315fcd6cf gms/gossiper: Synchronize endpoint state destruction
In gossiper::handle_major_state_change() we set the endpoint_state for
a particular endpoint and replicate the changes to other cores.

This is totally unsynchronized with the execution of
gossiper::evict_from_membership(), which can happen concurrently, and
can remove the very same endpoint from the map  (in all cores).

Replicating the changes to other cores in handle_major_state_change()
can interleave with replicating the changes to other cores in
evict_from_membership(), and result in an undefined final state.

Another issue happened in debug mode dtests, where a fiber executes
handle_major_state_change(), calls into the subscribers, of which
storage_service is one, and ultimately lands on
storage_service::update_peer_info(), which iterates over the
endpoint's application state with deferring points in between (to
update a system table). gossiper::evict_from_membership() was executed
concurrently by another fiber, which freed the state the first one is
iterating over.

Fixes #3299.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20180318123211.3366-1-duarte@scylladb.com>
(cherry picked from commit 810db425a5)
2018-03-18 14:55:32 +02:00
Asias He
8c5464d2fd range_streamer: Stream 10% of ranges instead of 10 ranges per time
If there are a lot of ranges, e.g., num_tokens=2048, 10 ranges per
stream plan will cause tons of stream plans to be created, each
carrying very little data. This gives each stream plan low transfer
bandwidth, so the total time to complete the streaming increases.

It makes more sense to send a percentage of the total ranges per stream
plan than a fixed number of ranges.

Here is an example of streaming a keyspace with 513 ranges in
total, 10 ranges vs. 10% of ranges:

Before:
[shard 0] range_streamer - Bootstrap with 127.0.0.1 for
keyspace=system_traces, 510 out of 513 ranges: ranges = 51
[shard 0] range_streamer - Bootstrap with ks for keyspace=127.0.0.1
succeeded, took 107 seconds

After:
[shard 0] range_streamer - Bootstrap with 127.0.0.1 for
keyspace=system_traces, 510 out of 513 ranges: ranges = 10
[shard 0] range_streamer - Bootstrap with ks for keyspace=127.0.0.1
succeeded, took 22 seconds

Message-Id: <a890b84fbac0f3c3cc4021e30dbf4cdf135b93ea.1520992228.git.asias@scylladb.com>
(cherry picked from commit 9b5585ebd5)
2018-03-14 10:13:01 +02:00
Asias He
346d2788e3 Revert "streaming: Do not abort session too early in idle detection"
This reverts commit f792c78c96.

With the "Use range_streamer everywhere" (7217b7ab36) series,
all users of streaming now stream relatively small ranges
and can retry streaming at a higher level.

This reduces the time-to-recover from 5 hours to 10 minutes per stream session.

Even if the 10-minute idle detection might cause more false positives,
it is fine, since we can retry the "small" stream session anyway. In the
long term, we should replace the whole idle-detection logic with:
whenever the stream initiator goes away, the stream slave goes away too.

Message-Id: <75f308baf25a520d42d884c7ef36f1aecb8a64b0.1520992219.git.asias@scylladb.com>
(cherry picked from commit ad7b132188)
2018-03-14 10:12:59 +02:00
Avi Kivity
4f68fede6d Merge "Make reader concurrency dual-restricted by count and memory" from Botond
"
Refs #2692
Fixes #3246

The current restricting algorithm [1] restricts the active-reader queue
based on the memory consumption of the existing active readers. When
this memory consumption is above the limit new readers are not admitted.
The inactive reader queue on the other hand has a fixed length.
This caused performance regressions on two workloads:
* read-only: since the inactive-reader queue length is severely limited
  (compared to the previous situation), reads will time out at loads
  comfortably handled before.
* mixed: since memory consumption is only checked at admission time
  (already-created active readers are not limited), memory consumption
  grew significantly, causing problems when compactions kicked in.

The solution is to reintroduce the old limit of 100 active concurrent
user-reads while still keeping the memory-based limit as well. For
workloads that don't consume a lot of memory or on large boxes with lots
of memory the count-based limit will be reached which is reverting to the
old well-known behaviour. For memory-hungry workloads or on small boxes
with little memory the memory based-limit will kick in sooner avoiding
memory overconsumption.

[1] introduced by bdbbfe9390
"

* 'restricted-reader-dual-limit/v3-backport-2.1' of https://github.com/denesb/scylla:
  Modify unit tests so that they test the dual-limits
  Use the reader_concurrency_semaphore to limit reader concurrency
  Add reader_concurrency_semaphore
  Add reader_resource_tracker param to mutation_source
  mv reader_resource_tracker.hh -> reader_concurrency_semaphore.hh
2018-03-08 19:10:06 +02:00
Botond Dénes
681f9e4f50 Modify unit tests so that they test the dual-limits 2018-03-08 18:54:16 +02:00
Botond Dénes
c503bc7693 Use the reader_concurrency_semaphore to limit reader concurrency 2018-03-08 18:54:15 +02:00
Botond Dénes
de7024251b Add reader_concurrency_semaphore
This semaphore implements the new dual, count- and memory-based active
reader limiting. As purely memory-based limiting proved to cause
problems on big boxes by admitting a large number of readers (more than
any disk could handle), the previous count-based limit is reintroduced
in addition to the existing memory-based limit.
When creating new readers, the count-based limit is checked first. If
that clears, the memory limit is checked before admitting the reader.
reader_concurrency_semaphore wraps the two semaphores that implement
these limits and enforces the correct order of limit checking.
This class also completely replaces the restricted_reader_config
struct; it encapsulates all the data and related functionality of the
latter, making client code simpler.
2018-03-08 18:54:15 +02:00
Botond Dénes
9a0eb2319c Add reader_resource_tracker param to mutation_source
Soon, reader_resource_tracker will only be constructible after the
reader has been admitted. This means that the resource tracker cannot be
preconstructed and just captured by the lambda stored in the mutation
source, and instead has to be passed in along with the other parameters.
2018-03-08 18:54:12 +02:00
Botond Dénes
9ef462449b mv reader_resource_tracker.hh -> reader_concurrency_semaphore.hh
In preparation to reader_concurrency_semaphore being added to the file.
The reader_resource_tracker is really only a helper class for
reader_concurrency_semaphore so the latter is better suited to provide
the name of the file.
2018-03-08 15:34:48 +02:00
Amnon Heiman
6271f30716 dist/docker: Add support for housekeeping
This patch takes a modified version of the Ubuntu 14.04 housekeeping
service script and uses it in Docker to validate the current version.

To disable the version validation, pass the --disable-version-check flag
when running the container.

Message-Id: <20180220161231.1630-1-amnon@scylladb.com>
(cherry picked from commit edcfab3262)
2018-03-07 16:17:13 +02:00
Takuya ASADA
8b64e80c88 dist/debian: install scylla-housekeeping upstart script correctly on Ubuntu 14.04
Since we split the scylla-housekeeping service into two different services for systemd, we no longer share the same service name between systemd and upstart.
So handle it independently for each distribution, and install
/etc/init/scylla-housekeeping.conf on Ubuntu 14.04.

Fixes #3239

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1519852659-10688-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit 101e909483)
2018-03-07 16:16:36 +02:00
Amnon Heiman
c5bffcaa68 scylla-housekeeping: need to support both debian/ubuntu variations
Debian and Ubuntu list files come in two variations.
The housekeeping should support both.

This patch changes the regexp that matches the OS in the repository file.
After the introduction of the second list variation, the OS name can be in the middle of the path, not only at the end.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <20180227092543.19538-1-amnon@scylladb.com>
(cherry picked from commit 57d46c6959)
2018-03-07 16:15:54 +02:00
Tomasz Grabiec
8aa0b60e91 tests: cache: Fix invalidate() not being waited for
Probably responsible for occasional failures of a subsequent assertion.
Didn't manage to reproduce.

Message-Id: <1520330967-584-1-git-send-email-tgrabiec@scylladb.com>
(cherry picked from commit d9f0c1f097)
2018-03-06 12:17:16 +02:00
Asias He
dccf762654 storage_service: Add missing return in pieces empty check
If pieces is empty, it is bogus to access pieces[0]:

   sstring move_name = pieces[0];

Fix by adding the missing return.

Spotted by Vlad Zolotarov <vladz@scylladb.com>

Fixes #3258
Message-Id: <bcb446f34f953bc51c3704d06630b53fda82e8d2.1520297558.git.asias@scylladb.com>

(cherry picked from commit 8900e830a3)
2018-03-06 09:58:21 +02:00
Tomasz Grabiec
e5344079d9 intrusive_set_external_comparator: Fix _header having undefined color on move
swap_tree() doesn't change the color of the header, and because the
header was not initialized, it is undefined (it can be either red or
black). One problem this causes is that algo::is_header() expects the
header to always be red. It is used by unlink(), which would
infinite-loop for trees that have a black header.

The fix is to initialize the header.

Fixes #3242.

Message-Id: <1519815091-13111-1-git-send-email-tgrabiec@scylladb.com>
(cherry picked from commit 30635510a2)
2018-02-28 13:57:33 +02:00
Paweł Dziepak
7bc8515c48 tests/cql3: increase TTL to avoid spurious failures
The test inserts some values with a TTL of 1 second and then
reads them back, expecting them not to be expired yet. That may not
always be the case if the machine is slow and we are running in
debug mode. Increasing the TTLs 100x should help avoid these
false positives.

Message-Id: <20180219133816.17452-1-pdziepak@scylladb.com>
(cherry picked from commit d97eebe82d)
2018-02-22 14:14:41 +00:00
Duarte Nunes
1228a41eaa cql3/query_processor: Remove prepared statements upon dropping a view
Fixes #3198

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20180209143652.31852-1-duarte@scylladb.com>
(cherry picked from commit d757c87107)
2018-02-22 14:11:08 +00:00
Tomasz Grabiec
58b90ceee0 tests: row_cache: Improve test for snapshot consistency on eviction
Reproduces https://github.com/scylladb/scylla/issues/3215.
Message-Id: <1518710592-21925-1-git-send-email-tgrabiec@scylladb.com>

(cherry picked from commit 9c3e56fb16)
2018-02-16 11:42:33 +01:00
Tomasz Grabiec
ef46067606 mvcc: Do not move unevictable snapshots to cache
Commit 6ccd317 introduced a bug in partition_entry::evict() where a
partition entry may be partially evicted if there are non-evictable
snapshots in it. Partially evicting some of the versions may violate
the consistency of a snapshot which includes evicted versions. For one,
continuity flags are interpreted relative to the merged view, not
within a version, so evicting from some of the versions may mark
ranges as continuous when before they were discontinuous. Also, range
tombstones of the snapshot are taken from all versions, so we can't
partially evict some of them without marking all affected ranges as
discontinuous.

The fix is to revert to full eviction and avoid moving
non-evictable snapshots to cache. When moving a whole partition entry to
cache, we first create a neutral empty partition entry and then merge
the memtable entry into it just like we would if the entry already
existed.

Fixes #3215.

Tests: unit (release)
Message-Id: <1518710592-21925-2-git-send-email-tgrabiec@scylladb.com>
(cherry picked from commit b0b57b8143)
2018-02-16 11:26:13 +01:00
Shlomi Livne
ffdd0f6392 release: prepare for 2.1.0
Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2018-02-14 15:17:43 +02:00
Paweł Dziepak
3ab1c8abff cql3/select_statement: do not capture stack variables by reference
Default capture by reference considered harmful in async code.

(cherry picked from commit b635fec9bf)
2018-02-08 17:54:00 +02:00
Amnon Heiman
d306c40507 database: correct the label creation for database reads
The labels in the database active_reads metrics were not defined correctly.

Labels should be created so that it is possible to select based on their
value.

The current implementation defines a label "class" with three instances:
user, streaming, system.

Fixes: #2770

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <20180123125206.23660-1-amnon@scylladb.com>
(cherry picked from commit a0a1961b6d)
2018-02-08 15:20:55 +02:00
Paweł Dziepak
b98d5b30de Merge "Do not evict from memtable snapshots" from Tomasz
"When moving whole partition entries from memtable to cache, we move
snapshots as well. It is incorrect to evict from such snapshots,
though, because associated readers would miss data.

The solution is to record the evictability of partition version references
(snapshots) and avoid eviction from non-evictable snapshots.

Could affect scanning reads if the reader uses a partition entry from
memtable, the partition is too large to fit in the reader's buffer,
that entry gets moved to cache (was absent in cache), and then
gets evicted (memory pressure). The reader will not see the remainder
of that entry. Found during code review.

Introduced in ca8e3c4, so affects 2.1+

Fixes #3186.

Tests: unit (release)"

* 'tgrabiec/do-not-evict-memtable-snapshots' of github.com:tgrabiec/scylla:
  tests: mvcc: Add test for eviction with non-evictable snapshots
  mutation_partition: Define + operator on tombstones
  tests: mvcc: Check that partition is fully discontinuous after eviction
  tests: row_cache: Add test for memtable readers surviving flush and eviction
  memtable: Make printable
  mvcc: Take partition_entry by const ref in operator<<()
  mvcc: Do not evict from non-evictable snapshots
  mvcc: Drop unnecessary assignment to partition_snapshot::_version
  tests: Use partition_entry::make_evictable() where appropriate
  mvcc: Encapsulate construction of evictable entries

(cherry picked from commit 6ccd317c38)
2018-02-06 19:29:56 +01:00
Tomasz Grabiec
85f5e57502 tests: Introduce mutation_partition_assertions
mutation_assertions are now delegating to mutation_partition_assertions.

(cherry picked from commit c7539f2ed0)
2018-02-06 19:29:56 +01:00
Tomasz Grabiec
19158f3401 mutation_partition: Make check_continuity() const-qualified
(cherry picked from commit bde050835f)
2018-02-06 19:29:56 +01:00
Tomasz Grabiec
a7e40d6acb mutation_partition: Make check_continuity() public
(cherry picked from commit f9257886cb)
2018-02-06 19:29:56 +01:00
Tomasz Grabiec
eedcfedd5a mutation_partition: Extract sliced() from mutation into mutation_partition
So that we can call it on mutation_partition.

(cherry picked from commit b3709047b0)
2018-02-06 19:29:56 +01:00
Tomasz Grabiec
b655fe262b mvcc: Add const-qualified partition_version_ref::operator*()
(cherry picked from commit a6e083ef6f)
2018-02-06 19:29:56 +01:00
Shlomi Livne
cbb3b959e3 release: prepare for 2.1.rc3
Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2018-02-06 12:12:31 +02:00
Raphael S. Carvalho
3dd282f7f0 sstables/compress: Fix race condition in segmented offset reading of shared sstable
The race condition was introduced by commit 028c7a0888, which introduces chunk offset
compression, because reading state is kept in the compress structure, which is
supposed to be immutable and can be shared among shards owning the same sstable.

So it may happen that shard A updates state while shard B relies on information
previously set, which leads to incorrect decompression, which in turn leads to
reads misbehaving.

We could serialize access to at(), which would lead to contention issues only for
shared sstables, but that can be avoided by moving the state out of the compress
structure, which is expected to be immutable after the sstable is loaded and fed
to the shards that own it. A sequential accessor (wrapping the state and a
reference to the segmented_offset) is added to keep the at() and push_back()
interfaces from being polluted.

Tests: release mode.

Fixes #3148.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20180205192432.23405-1-raphaelsc@scylladb.com>
(cherry picked from commit 09f4ee808f)
2018-02-06 12:10:29 +02:00
Tomasz Grabiec
574548e50f Merge 'Fixes for exception safety in memtable range reads' from Paweł
These patches deal with the remaining exception safety issues in the
memtable partition range readers. That includes moving the assignment
to iterator_reader::_last outside of the allocating section to avoid
problems caused by an exception-unsafe assignment operator. The memory
accounting code is also moved out of the retryable context to improve
the code's robustness and avoid potential problems in the future.

Fixes #3172.

 * https://github.com/pdziepak/scylla.git memtable-range-read-exception-safety-2.1/v1:
  memtable: do not update iterator_reader::_last in alloc section
  memtable: do not change accounting state in alloc section
  tests/memtable: add more reader exception safety tests
2018-02-05 20:51:26 +01:00
Paweł Dziepak
688d58f54a tests/memtable: add more reader exception safety tests 2018-02-05 15:11:55 +00:00
Paweł Dziepak
ea9b0bb4b0 memtable: do not change accounting state in alloc section
Allocating sections can be retried, so code that has side effects (like
updating flushed-bytes accounting) has no place there.
2018-02-05 15:11:54 +00:00
Paweł Dziepak
6a9b026601 memtable: do not update iterator_reader::_last in alloc section
iterator_reader::_last is a part of the state that survives allocating
section retries, therefore, it should not be modified in the retryable
context.
2018-02-05 15:11:53 +00:00
Amnon Heiman
adc1523aaa scylla_setup support private repo on debian during setup
Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <20170917145248.19677-1-amnon@scylladb.com>
(cherry picked from commit bc356a3c15)
2018-02-01 15:58:00 +02:00
Tomasz Grabiec
5444eead08 Merge "Make memtable reads exception safe" from Paweł
These patches change the memtable reader implementation (in particular
partition_snapshot_reader) so that the existing exception safety
problems are fixed, but also in a way that, hopefully, makes it
easier to reason about the error handling and avoid future bugs in that
area.

The main difficulty related to exception safety is that when an
exception is thrown out of an allocating section, that code is run again
with an increased memory reserve. If the retryable code has side effects,
it is very easy to get incorrect behaviour.

In addition to that, entering an allocating section is not exactly cheap,
which encourages doing so rarely and having large sections.

The approach taken by this series is to, first, make entering allocating
sections cheaper and then reduce the amount of logic that runs inside
them to a minimum.

This means that instead of entering a section once per call to
flat_mutation_reader::fill_buffer(), the allocating section is entered
once for each emitted row. The only state modified from within the
section is the cached iterators to the current row, which are dropped on
retry. Hopefully, this will make the reader code easier to reason
about.

The optimisations to the allocating sections and the managed_bytes
linearisation context have successfully eliminated any penalty caused by
the much more fine-grained allocating sections.

Fixes #3123.
Fixes #3133.

Tests: unit-tests (release)

BEFORE
test                                      iterations      median         mad         min         max
memtable.one_partition_one_row               1155362   869.139ns     0.282ns   868.465ns   873.253ns
memtable.one_partition_many_rows              127252     7.871us    15.252ns     7.851us     7.886us
memtable.many_partitions_one_row               58715    17.109us     2.765ns    17.013us    17.112us
memtable.many_partitions_many_rows              4839   206.717us   212.385ns   206.505us   207.448us

AFTER
test                                      iterations      median         mad         min         max
memtable.one_partition_one_row               1194453   839.223ns     0.503ns   834.952ns   842.841ns
memtable.one_partition_many_rows              133785     7.477us     4.492ns     7.473us     7.507us
memtable.many_partitions_one_row               60267    16.680us    18.027ns    16.592us    16.700us
memtable.many_partitions_many_rows              4975   201.048us   144.929ns   200.822us   201.699us

        ./before_sq  ./after_sq  diff
 read     337373.86   353694.24  4.8%
 write    388759.99   394135.78  1.4%

* https://github.com/pdziepak/scylla.git memtable-exception-safety-2.1/v1:
  flat_mutation_reader: add allocation point in push_mutation_fragment
  linearization_context: remove non-trivial operations from fast path
  lsa: split alloc section into reserving and reclamation-disabled parts
  lsa: optimise disabling reclamation and invalidation counter
  mutation_fragment: allow creating clustering row in place
  paratition_snapshot_reader: minimise amount of retryable code
  memtable: drop memtable_entry::read()
  tests/memtable: add test for reader exception safety
2018-02-01 10:54:35 +01:00
Paweł Dziepak
1e74362ec9 tests/memtable: add test for reader exception safety 2018-02-01 10:54:34 +01:00
Paweł Dziepak
72e52dafba memtable: drop memtable_entry::read() 2018-02-01 10:54:34 +01:00
Paweł Dziepak
29746e1e7b paratition_snapshot_reader: minimise amount of retryable code
Retryable code that has side effects is a recipe for bugs. This patch
reworks the snapshot reader so that the amount of logic run with
reclamation disabled is minimal and has very limited side effects.
2018-02-01 10:54:34 +01:00
Paweł Dziepak
13cd56774f mutation_fragment: allow creating clustering row in place
Moving a clustering_row is expensive due to the amount of data stored
internally. Adding a mutation_fragment constructor that builds a
clustering_row in place saves some of that moving.
2018-02-01 10:54:34 +01:00
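The saved move can be illustrated with stand-in types (row, fragment, and the in_place tag below are made up, not Scylla's classes); a move counter makes the difference observable:

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Sketch of in-place construction: forwarding constructor arguments
// builds the (expensive-to-move) row directly inside the fragment,
// instead of building it outside and moving it in.
struct row {
    std::vector<std::string> cells;
    static inline int moves = 0;   // counts row move-constructions
    explicit row(std::vector<std::string> c) : cells(std::move(c)) {}
    row(row&& o) noexcept : cells(std::move(o.cells)) { ++moves; }
};

struct fragment {
    struct in_place_t {};
    row r;
    // move-based construction: one extra move of the whole row
    explicit fragment(row&& rr) : r(std::move(rr)) {}
    // in-place construction: arguments forwarded to row's constructor
    template <typename... Args>
    fragment(in_place_t, Args&&... args) : r(std::forward<Args>(args)...) {}
};
```

With the forwarding constructor, only the vector's buffer pointer changes hands; the row itself is never moved.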
Paweł Dziepak
812018479b lsa: optimise disabling reclamation and invalidation counter
Most of the lsa gory details are hidden in utils/logalloc.cc. That
includes the actual implementation of a lsa region: region_impl.

However, there is code in the hot path that often accesses the
_reclaiming_enabled member as well as its base class
allocation_strategy.

In order to optimise those accesses another class is introduced:
basic_region_impl that inherits from allocation_strategy and is a base
of region_impl. It is defined in utils/logalloc.hh so that it is
publicly visible and its member functions are inlineable from anywhere
in the code. This class is supposed to be as small as possible, but
contain all members and functions that are accessed from the fast path
and should be inlined.
2018-02-01 10:54:34 +01:00
Paweł Dziepak
0ee2462811 lsa: split alloc section into reserving and reclamation-disabled parts
An allocating section reserves a certain amount of memory, then disables
reclamation and attempts to perform the given operation. If that fails
due to std::bad_alloc, the reserve is increased and the operation is
retried.

Reserving memory is expensive, while just disabling reclamation isn't.
Moreover, the code that runs inside the section needs to be safely
retryable. This means that we want the amount of logic running with
reclamation disabled to be as small as possible, even if it means
entering and leaving the section multiple times.

In order to reduce the performance penalty of such a solution, the
memory-reserving and reclamation-disabling parts of the allocating
sections are separated.
Paweł Dziepak
c8bc3a7053 linearization_context: remove non-trivial operations from fast path
Since linearization_context is thread_local, every time it is accessed
the compiler needs to emit code that checks whether it was already
constructed, and constructs it if it wasn't. Moreover, upon leaving the
context from the outermost scope the map needs to be cleared.

All these operations impose some performance overhead and aren't really
necessary if no buffers were linearised (the expected case). This patch
rearranges the code so that linearization_context is trivially
constructible and the map is cleared only if it was modified.
Paweł Dziepak
9f78799e80 flat_mutation_reader: add allocation point in push_mutation_fragment
Exception safety tests inject a failure at every allocation and verify
whether the error is handled properly.

push_mutation_fragment() adds a mutation fragment to a circular_buffer,
in theory any call to that function can result in a memory allocation,
but in practice that depends on the implementation details. In order to
improve the effectiveness of the exception safety tests this patch adds
an explicit allocation point in push_mutation_fragment().
2018-02-01 10:54:33 +01:00
Calle Wilund
5bba3856ca auth: Fix transitional auth for non-valid credentials
Fixes #3096

The credentials processing for transitional auth was broken
in ba6a41d, "auth: Switch to sharded service", which effectively removed
the "virtualization" of the underlying auth in the SASL challenge.

As a quick workaround, add the permissive exception handling to
sasl object as well.

Message-Id: <20180103102724.1083-1-calle@scylladb.com>
(cherry picked from commit 35b9ec868a)
2018-02-01 11:36:46 +02:00
Avi Kivity
63e92418dd Update seastar submodule
* seastar 8d254a1...af1b789 (3):
  > tls_test: Fix echo test not setting server trust store
  > tls: Do not restrict re-handshake to client
  > tls: Actually verify client certificate if requested

Fixes #3072
2018-01-28 13:59:04 +02:00
Paweł Dziepak
9eaa6f233e Update scylla-ami submodule
* scylla-ami 3366c93...c5d9e96 (1):
  > Update Amazon kernel packages release stream to 2017.09
2018-01-24 13:27:52 +00:00
Raphael S. Carvalho
6600317b2c sstables: fix wildly inaccurate sstable key estimation after dynamic index sampling
The reason the sstable key estimation is inaccurate is that it doesn't
account for the fact that index sampling is now dynamic.

The estimation is done as follows:
    uint64_t get_estimated_key_count() const {
        return ((uint64_t)_components->summary.header.size_at_full_sampling + 1) *
                _components->summary.header.min_index_interval;
    }

The biggest problem is that _components->summary.header.min_index_interval isn't
actually the minimum interval, but instead the default interval value set in the
schema.
So the estimation gets worse the larger the average partition, because the larger
the average partition the lower the index sampling interval.
One of the problems is that estimation has a big influence on bloom filter size,
and so for large partitions we were generating bigger filters than we had to.

From now on, size at full sampling is calculated as if sampling were static
(which was the case until commit 8726ee937d, which introduced size-based
sampling), using the minimum index interval as a strict sampling interval.

Tests: units (release)

Fixes #3113.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20180122233612.11147-1-raphaelsc@scylladb.com>
(cherry picked from commit 2c181b69c9)
2018-01-24 11:42:28 +02:00
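The estimate quoted in the commit can be reproduced standalone; the numbers below are made up for illustration:

```cpp
#include <cassert>
#include <cstdint>

// Number of summary entries (size_at_full_sampling + 1) times the
// sampling interval recorded in the header -- the formula from the
// commit above, extracted into a free function.
std::uint64_t estimated_key_count(std::uint64_t size_at_full_sampling,
                                  std::uint64_t min_index_interval) {
    return (size_at_full_sampling + 1) * min_index_interval;
}
```

With 1000 summary entries and a header interval of 128, the estimate is 128000 keys. If dynamic sampling actually used an interval of 32 (large partitions), the same summary covers only about 32000 keys, a 4x over-estimate, which is what inflated the bloom filters.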
Vladimir Krivopalov
807acb2dd9 main: Fix warnings when running "scylla --version"
Print Scylla version, if requested, before running Seastar application.

Fixes #3124

Signed-off-by: Vladimir Krivopalov <vladimir@scylladb.com>
Message-Id: <bbd0f303f612327446ce1f10ebd17ebed8d76048.1516144651.git.vladimir@scylladb.com>
(cherry picked from commit 73b6e9fbb1)
2018-01-17 16:59:28 +02:00
Takuya ASADA
5e44bf97f0 dist/debian: follow gcc-7.2 package naming changes on 3rdparty repo for Debian 9
Switch to renamed gcc-7.2 package on Debian 9, too.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1516191853-2562-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit f3c8574135)
2018-01-17 14:38:55 +02:00
Takuya ASADA
4003be40b3 dist/debian: fix package name typo on Debian 8
Correct package name is scylla-gcc72-g++-7, not scylla-g++-7.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1516189354-5880-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit 15e266eea4)
2018-01-17 13:45:39 +02:00
Takuya ASADA
cf059b6ee2 dist/debian: follow renaming of gcc-7.2 packages on Ubuntu 14.04/16.04
Now we applied our scylla-$(pkg)$(ver) style package naming on gcc-7.2,
so switch to it.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1516103292-26942-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit 912a14eb9b)
2018-01-17 13:38:56 +02:00
Shlomi Livne
d96c31ee4d release: prepare for 2.1.rc2
Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2018-01-16 16:22:56 +02:00
Avi Kivity
680ce234b0 Merge "Fix memory leak on zone reclaim" from Tomek
"_free_segments_in_zones is not adjusted by
segment_pool::reclaim_segments() for empty zones on reclaim under some
conditions. For instance, when some zone becomes empty due to a regular
free() and then reclaiming is called from the std allocator and is
satisfied from a zone after the one which is empty. This would result
in the free memory in such a zone appearing to be leaked due to a
corrupted free segment count, which may cause a later reclaim to fail.
This could result in bad_allocs.

The fix is to always collect such zones.

Fixes #3129
Refs #3119
Refs #3120"

* 'tgrabiec/fix-free_segments_in_zones-leak' of github.com:scylladb/seastar-dev:
  tests: lsa: Test _free_segments_in_zones is kept correct on reclaim
  lsa: Expose max_zone_segments for tests
  lsa: Expose tracker::non_lsa_used_space()
  lsa: Fix memory leak on zone reclaim

(cherry picked from commit 4ad212dc01)
2018-01-16 15:54:40 +02:00
Asias He
ad656b2c55 storage_service: Do not wait for restore_replica_count in handle_state_removing
The call chain is:

storage_service::on_change() -> storage_service::handle_state_removing()
-> storage_service::restore_replica_count() -> streamer->stream_async()

Listeners run as part of gossip message processing, which is serialized.
This means we won't be processing any gossip messages until streaming
completes.

In fact, there is no need to wait for restore_replica_count to complete,
which can take a long time: when it completes, this node will send a
notification to tell the removal_coordinator that the restore process is
finished on this node. This node will be removed from _replicating_nodes
on the removal_coordinator.

Tested with update_cluster_layout_tests.py

Fixes #2886

Message-Id: <8b4fe637dfea6c56167ddde3ca86fefb8438ce96.1516088237.git.asias@scylladb.com>
(cherry picked from commit 5107b6ad16)
2018-01-16 11:37:55 +02:00
Tomasz Grabiec
43101b6bff database: Invalidate only affected ranges from flush_streaming_mutations()
Invalidating the whole range causes larger latency spikes.

Regression from 2.0 introduced in d22fdf4261.

Refs #3119

Tests: units (release)

Message-Id: <1516046938-26855-1-git-send-email-tgrabiec@scylladb.com>
(cherry picked from commit b5d5bf5bc4)
2018-01-16 11:18:36 +02:00
Asias He
492a5c8886 storage_service: Set NORMAL status after token_metadata is replicated
Commit 2d5fb9d109 (gms/gossiper: Replicate changes incrementally to
other shards) changed the way we replicate _token_metadata and
endpoint_state_map. Before, they were replicated at the same time; after,
they are not any more. This means a shard in NORMAL status can still
have an empty _token_metadata.

We saw errors:

   [shard 12] token_metadata - sorted_tokens is empty in first_token_index!

during CorruptThenRepairNemesis.

Fix by setting the gossip status to NORMAL after replication of
_token_metadata, so that once a node is in NORMAL, we can do repair. The
commit 69c81bcc87 (repair: Do not allow repair until node is in NORMAL
status) prevents the early repair operation by checking if a node is in
NORMAL status.

Fixes #3121

Message-Id: <af6a223733d2e11351f1fa35f59eacfa7d65dd30.1516065564.git.asias@scylladb.com>
(cherry picked from commit 3c8ed255ac)
2018-01-16 09:41:34 +02:00
Raphael S. Carvalho
152747b8fd mutation_reader: Fix use-after-move
Problem introduced in 375ed938b4

Also remove redefinition of schema in dummy incremental selector
which is supposed to use the one in base class instead.

Following tests are fixed:
    ./build/release/tests/mutation_reader_test
    ./build/release/tests/sstable_test -- -c1
    ./build/release/tests/row_cache_test
    ./build/release/tests/cache_flat_mutation_reader_test
    ./build/release/tests/row_cache_stress_test

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20180111153831.17462-1-raphaelsc@scylladb.com>
2018-01-11 17:43:41 +02:00
Takuya ASADA
00c08519a7 dist/debian: make pbuilder work on Debian 9
On Debian 9, 'pbuilder create' fails because of lack of GPG key for
3rdparty repo, so we need --allow-untrusted on 'pbuilder create' and
'pbuilder update'.

Also, apt-key adv --fetch-keys does not work correctly on it, but we can use
"curl <URL> | apt-key add -" as a workaround.

Fixes #3088

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1513797714-18067-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit b68ee98310)
2018-01-11 15:03:49 +02:00
Takuya ASADA
5d47a39b7b dist/debian: follow renaming of gcc-7.2 packages on Debian 8
Now we applied our scylla-$(pkg)$(ver) style package naming on gcc-7.2,
so switch to it.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1515522920-8266-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit 420b61b466)
2018-01-11 15:03:47 +02:00
Takuya ASADA
4f8e8bdc04 dist/debian: rename boost1.63 to scylla-boost163 on Debian 8
We provided the "boost1.63" package for Debian 8 since we couldn't build the
"scylla-boost163" package which is available on Ubuntu 14/16, but I fixed the
problem and now we have it for Debian 8 too, so switch to it.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1514220163-25985-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit 51013f561d)
2018-01-11 15:03:44 +02:00
Paweł Dziepak
ef1dab4565 combined_reader: optimise for disjoint partition streams
The legacy mutation_reader/streamed_mutation design made it very easy
to skip the partition merging logic if there was only one underlying
reader that had emitted the partition.

That optimisation was lost after the conversion to flat mutation readers,
which impacted performance. This patch mostly recovers it by bypassing
most of the mutation_reader_merger logic when there is only a single
active reader for a given partition.

The performance regression was introduced in
8731c1bc66 "Flatten the implementation of
combined_mutation_reader".

perf_simple_query -c4 read results (medians of 60):

original regression
             before 8731c1     after 8731c1   diff
 read            326241.02        300244.09  -8.0%

this patch
                    before            after  diff
 read            313882.59        325148.05  3.6%
Message-Id: <20180103121019.764-1-pdziepak@scylladb.com>

(cherry picked from commit b4a4c04bab)
2018-01-11 10:33:31 +01:00
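The recovered optimisation can be sketched with sorted int vectors standing in for per-reader fragment streams (names hypothetical, and the merge deliberately naive): when only one stream contributes to the current partition, its output is forwarded directly, bypassing the merge logic entirely.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Combine sorted streams; single-active-stream case skips the merge.
std::vector<int> combine(const std::vector<std::vector<int>>& streams) {
    std::vector<const std::vector<int>*> active;
    for (const auto& s : streams) {
        if (!s.empty()) {
            active.push_back(&s);
        }
    }
    if (active.size() == 1) {
        return *active.front();      // fast path: no merging at all
    }
    std::vector<int> merged;         // slow path: k-way merge (naive here)
    for (const auto* s : active) {
        merged.insert(merged.end(), s->begin(), s->end());
    }
    std::sort(merged.begin(), merged.end());
    return merged;
}
```

Disjoint partition streams hit the fast path almost always, which is where the regression was measured.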
Tomasz Grabiec
3f602814ba mutation_reader: Move definition of combining mutation reader to source file
So that the whole world doesn't recompile when it changes.

(cherry picked from commit 60ed5d29c0)
2018-01-11 10:33:08 +01:00
Tomasz Grabiec
83d4e85e00 mutation_reader: Use make_combined_reader() to create combined reader
So that we can hide the definition of combined_mutation_reader. It's
also less verbose.

(cherry picked from commit 52285a9e73)
2018-01-11 10:33:06 +01:00
Asias He
857ffeefce streaming: Do not send failed message for uninitialized session
An uninitialized session has no peer associated with it yet. There is
no point sending the failed message when aborting the session. Sending the
failed message in this case will send to a peer with an uninitialized
dst_cpu_id, which will cause the receiver to pass a bogus shard id to
smp::submit_to, which causes a segfault.

In addition, to be safe, initialize dst_cpu_id to zero, so that an
uninitialized session will send messages to shard zero instead of a
random bogus shard id.

Fixes the segfault issue found by
repair_additional_test.py:RepairAdditionalTest.repair_abort_test

Fixes #3115
Message-Id: <9f0f7b44c7d6d8f5c60d6293ab2435dadc3496a9.1515380325.git.asias@scylladb.com>

(cherry picked from commit 774307b3a7)
2018-01-09 16:32:12 +02:00
Piotr Jastrzebski
a845e23702 Fix fast_forward_to(partition_range&) in forwardable flat reader.
Making sure fast_forward_to(const partition_range&) sets _current
correctly.

Fixes #3089

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <6c29cf273f191da0e21035bcbe1592042ecffc70.1515490058.git.piotr@scylladb.com>
(cherry picked from commit 945f45f490)
2018-01-09 14:57:53 +02:00
George Tavares
f9b14df3a3 db/view: Consume updated rows regardless of static row
With Materialized Views, if the base table has static columns
and an update to the base table mutates static and non-static rows,
the streamed_mutation is stopped before processing the non-static row.
The patch avoids stopping the streamed_mutation and adds a test case.

Message-Id: <20171220173434.25091-1-tavares.george@gmail.com>
(cherry picked from commit ceecd542cd)
2018-01-08 15:39:57 +01:00
Raphael S. Carvalho
ae47dfde7d sstables: cure our blindness on sstable read failure
After 611774b, we're blind again on which sstable caused a compaction
to fail, leaving us with a cryptic message like the following:
compaction_manager - compaction failed: std::runtime_error (compressed
chunk failed checksum)

After this change, both a read failure in compaction and a regular read
failure will report the guilty sstable, see:
compaction_manager - compaction failed: std::runtime_error (SSTable reader
found an exception when reading sstable ./data/.../keyspace1-standard1
ka-1-Data.db : std::runtime_error(compressed chunk failed checksum))

Fixes #3006.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20180102230752.14701-1-raphaelsc@scylladb.com>
(cherry picked from commit 4610e994e1)
2018-01-08 13:43:32 +02:00
Vladimir Krivopalov
cc15a13365 Use CharReaderBuilder/CharReader and StreamWriterBuilder from JsonCpp.
In version 1.8.3 of JsonCpp shipped with Fedora 27, old FastWriter and
Reader classes from JsonCpp have been deprecated in favour of
newer/better ones: CharReaderBuilder/CharReader and
StreamWriterBuilder/StreamWriter.
This fix uses the new classes where available or resorts to old ones for
older versions of the library.

Fixes #2989

Signed-off-by: Vladimir Krivopalov <vladimir@scylladb.com>
(cherry picked from commit 76775ddf26)
2018-01-07 14:48:54 +02:00
Avi Kivity
6e14dcb84c Merge "Fix potential infinite recursion in leveled compaction" from Raphael
'"The issue is triggered by compaction of sstables of level higher than 0.

The problem happens when interval map of partitioned sstable set stores
intervals such as follow:
[-9223362900961284625 : -3695961740249769322 ]
(-3695961740249769322 : -3695961103022958562 ]

When the selector is called for the first interval above, the exclusive
lower bound of the second interval is returned as the next token, but the
inclusiveness info is not returned.
So reader_selector was returning that there *were* new readers when
the current token was -3695961740249769322 because it was stored in
selector position field as inclusive, but it's actually exclusive.

This false positive was leading to infinite recursion in combined
reader because sstable set's incremental selector itself knew that
there were actually *no* new readers, and therefore *no* progress
could be made."

Fixes #2908.'

* 'high_level_compaction_infinite_recursion_fix_v4' of github.com:raphaelsc/scylla:
  tests: test for infinite recursion bug when doing high-level compaction
  Fix potential infinite recursion when combining mutations for leveled compaction
  dht: make it easier to create ring_position_view from token
  dht: introduce is_min/max for ring_position

(cherry picked from commit 375ed938b4)
2018-01-07 14:47:18 +02:00
Pekka Enberg
9ed64cc11c dist/docker: Switch to Scylla 2.1 repository 2018-01-05 10:43:29 +02:00
Shlomi Livne
d4c46afc50 release: prepare for 2.1.rc1
Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2018-01-03 10:48:35 +02:00
Paweł Dziepak
f371d17884 db/schema_tables: do not use moved from shared pointer
The shared pointer view is captured by two continuations, one of which
moves it away. Using do_with() solves the problem.

Fixes #3092.
Message-Id: <20171221111614.16208-1-pdziepak@scylladb.com>

(cherry picked from commit 4dfddc97c7)
2017-12-21 15:13:53 +01:00
Tomasz Grabiec
0a82a885a4 Merge "Remove memtable::make_reader" from Piotr
Migrate all the places that used memtable::make_reader to use
memtable::make_flat_reader and remove memtable::make_reader.

* seastar-dev.git haaawk/remove_memtable_make_reader_v2_rebased:
  Remove memtable::make_reader
  Stop using memtable::make_reader in row_cache_stress_test
  Stop using memtable::make_reader in row_cache_test
  Stop using memtable::make_reader in mutation_test
  Stop using memtable::make_reader in streamed_mutation_test
  Stop using memtable::make_reader in memtable_snapshot_source.hh
  Stop using memtable::make_reader in memtable::apply
  Add consume_partitions(flat_mutation_reader& reader, Consumer consumer)
  Add default parameter values in make_combined_reader
  Migrate test_virtual_dirty_accounting_on_flush to flat reader
  Migrate test_adding_a_column_during_reading_doesnt_affect_read_result
  Simplify flat_reader_assertions& produces(const mutation& m)
  Migrate test_partition_version_consistency_after_lsa_compaction_happens
  flat_mutation_reader: Allow setting buffer capacity
  Add next_mutation() to flat_mutation_reader_assertions
  cf::for_all_partitions::iteration_state: don't store schema_ptr
  read_mutation_from_flat_mutation_reader: don't take schema_ptr
  Migrate test_fast_forward_to_after_memtable_is_flushed to flat reader

(cherry picked from commit b0a56a91c2)
2017-12-21 14:10:31 +01:00
Tomasz Grabiec
17febfdb0e database: Move operator<<() overloads to appropriate source files
(cherry picked from commit fd7ab5fe99)
2017-12-21 14:10:24 +01:00
Vlad Zolotarov
830bf99528 tests: sstable_datafile_test: fix the compilation error on Power
'char' and int8_t ('signed char') are different types. The 'bytes' base type
is int8_t - use the correct type for casting.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
(cherry picked from commit 22ca5d2596)
2017-12-21 14:09:47 +01:00
Tomasz Grabiec
90000d9861 Merge "Fixes for multi_range_reader" from Paweł
The following patches contain fixes for skipping to the next partition
in multi_range_reader and completely disable support for fast
forwarding inside a single partition, which is not needed and would only
add unnecessary complexity.

* https://github.com/pdziepak/scylla.git fix-multi_range_reader/v1:
  flat_multi_range_mutation_reader: disallow
    streamed_mutation::forwarding
  flat_multi_range_mutation_reader: clear buffer on next_partition()
  tests/flat_multi_range_mutation_reader: test skipping to next
    partition

(cherry picked from commit 71cc63dfa6)
2017-12-21 14:07:15 +01:00
Asias He
46dae42dcd streaming: One cf per time on sender
When there is a large number of column families, the sender will
send all the column families in parallel. We allow 20% of shard memory
for streaming on the receiver, so each column family gets 1/N of that
memory for its memtable, where N is the number of in-flight column
families. A large N causes a lot of small sstables to be generated.

It is possible for multiple senders to stream to a single receiver, e.g.,
when a new node joins the cluster, in which case the maximum number of
in-flight column families is the number of peer nodes. The column families
are sent in order of cf_id. Since it is not guaranteed that all peers have
the same speed, they will not necessarily be sending the same cf_id at the
same time; still, there is a chance that some of the peers are.

Fixes #3065

Message-Id: <46961463c2a5e4f1faff232294dc485ac4f1a04e.1513159678.git.asias@scylladb.com>
(cherry picked from commit a9dab60b6c)
2017-12-20 17:07:39 +01:00
Tomasz Grabiec
d6395634ad range_tombstone_list: Fix insert_from()
end_bound was not updated in one of the cases in which end and
end_kind were changed; as a result, later merging decisions using
end_bound were incorrect: end_bound was using the new key, but the old
end_kind.

Fixes #3083.
Message-Id: <1513772083-5257-1-git-send-email-tgrabiec@scylladb.com>

(cherry picked from commit dfe48bbbc7)
2017-12-20 15:31:51 +01:00
Avi Kivity
d886b3def4 Merge "Fix read amplification in sstable reads" from Paweł
"4b9a34a85425d1279b471b2ff0b0f2462328929c "Merge sstable_data_source
into sstable_mutation_reader" has introduced unintentional changes, some
of them causing excessive read amplification during empty range reads.
The following patches restore the previous behaviour."

* tag 'fix-read-amplification/v1' of https://github.com/pdziepak/scylla:
  sstables: set _read_enabled to false if possible
  sstables: set _single_partition_read for single parititon reads

(cherry picked from commit 772d1f47d7)
2017-12-19 18:18:06 +02:00
Tomasz Grabiec
bcb06bb043 flat_mutation_reader: Fix make_nonforwardable()
It emitted end-of-stream prematurely if the buffer was full.
Message-Id: <1513697716-32634-1-git-send-email-tgrabiec@scylladb.com>

(cherry picked from commit 6a6bf58b98)
2017-12-19 16:01:21 +00:00
Tomasz Grabiec
4606300b25 row_cache: Fix single_partition_populating_reader not waiting on create_underlying() to resolve
Results in undefined behavior.
Message-Id: <1513691679-27081-1-git-send-email-tgrabiec@scylladb.com>

(cherry picked from commit 7b36c8423c)
2017-12-19 16:12:37 +02:00
Piotr Jastrzebski
282d93de99 Use row_cache::make_flat_reader in column_family::make_reader
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <ba1659ceed8676f45942ce6e7506158026947345.1513687259.git.piotr@scylladb.com>
(cherry picked from commit 570fc5afed)
2017-12-19 14:42:52 +02:00
Avi Kivity
52d3403cb0 Update scylla-ami submodule
* dist/ami/files/scylla-ami be90a3f...3366c93 (1):
  > scylla_install_ami: skip ec2_check while building AMI

Still tracking master.
2017-12-19 10:12:05 +02:00
Tomasz Grabiec
97f6073699 Merge "Migrate cache to use flat_mutation_reader" from Piotr
(cherry picked from commit 37b19ae6ba)
2017-12-18 20:51:09 +01:00
Glauber Costa
5454e6e168 conf: document listen_on_broadcast_address
That's a supported feature that is listed in our help message, but it
is not present in the yaml file.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <20171215011240.16027-1-glauber@scylladb.com>
(cherry picked from commit b8f49fcc14)
2017-12-18 17:00:46 +02:00
Vlad Zolotarov
498fb11c70 messaging_service: fix a mutli-NIC support
Don't enforce the outgoing connections from the 'listen_address'
interface only.

If 'local_address' is given to connect(), the connection is forced to use
a particular interface to connect from, even if the destination address
should be reached through a different interface. If we don't specify
'local_address', the source interface is chosen according to the
routing configuration.

Fixes #3066

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1513372688-21595-1-git-send-email-vladz@scylladb.com>
(cherry picked from commit be6f8be9cb)
2017-12-17 10:51:37 +02:00
Avi Kivity
a6b4881994 Merge "SSTable summary regeneration fixes" from Raphael
"Fixes #3057."

* 'summary_recreation_fixes_v2' of github.com:raphaelsc/scylla:
  tests: sstable summary recreation sanity test
  sstables: make loading of sstable without summary to work again
  sstables: fix summary generation with dynamic index sampling

(cherry picked from commit 11de20fc33)
2017-12-17 09:39:16 +02:00
Takuya ASADA
9848df6667 dist/common/systemd: specify correct repo file path for housekeeping service on Ubuntu/Debian
Currently scylla-housekeeping-daily.service/-restart.service hardcode
"--repo-files '/etc/yum.repos.d/scylla*.repo'" to specify the CentOS .repo
file, but we use the same .service files for Ubuntu/Debian.
This doesn't work correctly; we need to specify a .list file for Debian
variants.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1513385159-15736-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit c2e87f4677)
2017-12-16 22:03:42 +02:00
Piotr Jastrzebski
2090a5f8f6 Fix build by removing semicolon after concept
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <4504cf47be0a451c58052476bc8cc4f9cba59472.1513248094.git.piotr@scylladb.com>
(cherry picked from commit ac1d2f98e4)
2017-12-14 12:48:29 +02:00
Amos Kong
7634ed39eb Reset default cluster_name back to 'Test Cluster' for compatibility
Some users kept the original default cluster_name 'Test Cluster'; they
will fail to start the node because of the cluster_name change if they
use the new scylla.yaml.

'ScyllaDB Cluster' isn't any more beautiful than 'Test Cluster', so reset
it back to the original to avoid problems for users.

Fixes #3060

Signed-off-by: Amos Kong <amos@scylladb.com>
Message-Id: <8c9dab8a64d0f4ab3a5d6910b87af696c60e5076.1513072453.git.amos@scylladb.com>
(cherry picked from commit b07de93636)
2017-12-13 16:58:10 +02:00
Avi Kivity
fb9b15904a Merge "Convert sstable readers to flat streams" from Paweł
"While aa8c2cbc16 'Merge "Migrate sstables
to flat_mutation_reader" from Piotr' has converted the low-level sstable
reader to the new flat_mutation_reader interface there were still
multiple readers related to sstables that required converting,
including:
 - restricted reader
 - filtering reader
 - single partition sstable reader
This series completes their conversion to the flat stream interface."

* tag 'flat_mutation_reader-sstable-readers/v2' of https://github.com/pdziepak/scylla:
  db: convert single_key_sstalbe_reader to flat streams
  db: fully convert incremental_reader_selector to flat readers
  db: make make_range_sstable_reader() return flat reader
  db: make column_family::make_reader() return flat reader
  db: make column_family::make_sstable_reader() return a flat reader
  filtering_reader: switch to flat mutation fragment streams
  filtering_reader: pass a const dht::decorated_key& to the callback
  mutation_reader: drop make_restricted_reader()
  db: use make_restricted_flat_reader
  mutation_reader: convert restricted reader to flat streams

(cherry picked from commit 6cb3b29168)
2017-12-13 15:38:22 +02:00
Glauber Costa
4e11f05aa7 database: delete created SSTables if streaming writes fail
We have had an issue recently where failed SSTable writes left the
generated SSTables dangling in a potentially invalid state. If the write
had, for instance, started and generated tmp TOCs but not finished,
those files would be left for dead.

We had fixed this in commit b7e1575ad4,
but streaming memtables still have the same issue.

Note that we can't fix this in the common function
write_memtable_to_sstable because different flushers have different
retry policies.

Fixes #3062

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <20171213011741.8156-1-glauber@scylladb.com>
(cherry picked from commit 1aabbc75ab)
2017-12-13 10:09:43 +02:00
Jesse Haber-Kucharsky
516a1ae834 cql3: Add missing return
Since `return` is missing, the "else" branch is also taken and this
results in a user being created from scratch.

Fixes #3058.

Signed-off-by: Jesse Haber-Kucharsky <jhaberku@scylladb.com>
Message-Id: <bf3ca5907b046586d9bfe00f3b61b3ac695ba9c5.1512951084.git.jhaberku@scylladb.com>
(cherry picked from commit 7e3a344460)
2017-12-11 09:55:27 +02:00
Paweł Dziepak
be5127388d Merge "Fix range tombstone emitting which led to skipping over data" from Tomasz
"Fixes cache reader to not skip over data in some cases involving overlapping
range tombstones in different partition versions and discontinuous cache.

Introduced in 2.0

Fixes #3053."

* tag 'tgrabiec/fix-range-tombstone-slicing-v2' of github.com:scylladb/seastar-dev:
  tests: row_cache: Add reproducer for issue #3053
  tests: mvcc: Add test for partition_snapshot::range_tombstones()
  mvcc: Optimize partition_snapshot::range_tombstones() for single version case
  mvcc: Fix partition_snapshot::range_tombstones()
  tests: random_mutation_generator: Do not emit dummy entries at clustering row positions

(cherry picked from commit 051cbbc9af)
2017-12-08 13:03:32 +01:00
Tomasz Grabiec
6d0679ca72 mvcc: Extract partition_entry::add_version()
(cherry picked from commit 52cabe343c)
2017-12-08 12:33:49 +01:00
Avi Kivity
eb67b427b2 Merge "SSTable resharding fixes" from Raphael
"Didn't affect any release. Regression introduced in 301358e.

Fixes #3041"

* 'resharding_fix_v4' of github.com:raphaelsc/scylla:
  tests: add sstable resharding test to test.py
  tests: fix sstable resharding test
  sstables: Fix resharding by not filtering out mutation that belongs to other shard
  db: introduce make_range_sstable_reader
  rename make_range_sstable_reader to make_local_shard_sstable_reader
  db: extract sstable reader creation from incremental_reader_selector
  db: reuse make_range_sstable_reader in make_sstable_reader

(cherry picked from commit d934ca55a7)
2017-12-07 16:43:28 +02:00
Amos Kong
2931324b34 dist/debian: add scylla-tools-core to depends list
Signed-off-by: Amos Kong <amos@scylladb.com>
Message-Id: <db39cbda0e08e501633556ab238d816e357ad327.1512646123.git.amos@scylladb.com>
(cherry picked from commit 8fd5d27508)
2017-12-07 13:40:46 +02:00
Amos Kong
614519c4be dist/redhat: add scylla-tools-core to requires list
Fixes #3051

Signed-off-by: Amos Kong <amos@scylladb.com>
Message-Id: <f7013a4fbc241bb4429d855671fee4b845b255cd.1512646123.git.amos@scylladb.com>
(cherry picked from commit eb3b138ee2)
2017-12-07 13:40:46 +02:00
Botond Dénes
203b924c76 mutation_reader_merger: don't query the kind of moved-from fragment
Call mutation_fragment_kind() on the fragment *before* it's moved, as
there are no guarantees about the state of a moved-from object (other
than that it's in a valid one).

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <c47b1e22877bb9499f1fbb9d513093c29ef1901b.1512635422.git.bdenes@scylladb.com>
(cherry picked from commit 1ff65f41fd)
2017-12-07 11:41:04 +01:00
Botond Dénes
f4f957fa53 Add streamed mutation fast-forwarding unit test for the flat combined-reader
Test for the bug fixed by 9661769.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <fc917bae8e9c99f026bf7b366e6e9d39faf466af.1512630741.git.bdenes@scylladb.com>
(cherry picked from commit 9fce51f8a0)
2017-12-07 11:40:53 +01:00
Botond Dénes
39e614a444 combined_mutation_reader: fix fast-fowarding related row-skipping bug
When fast forwarding is enabled and all readers positioned inside the
current partition return EOS, return EOS from the combined-reader
too, instead of skipping to the next partition when there are idle
readers (positioned at some later partition) available. The old
behaviour caused rows to be skipped in some cases.

The fix is to distinguish EOS'd readers that are only halted (waiting
for a fast-forward) from those really out of data. To achieve this we
track the last fragment-kind the reader emitted. If that was a
partition-end then the reader is out of data, otherwise it might emit
more fragments after a fast-forward. Without this additional information
it is impossible to determine why a reader reached EOS and the code
later may make the wrong decision about whether the combined-reader as
a whole is at EOS or not.
Also when fast-forwarding between partition-ranges or calling
next_partition() we set the last fragment-kind of forwarded readers
because they should emit a partition-start, otherwise they are out of
data.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <6f0b21b1ec62e1197de6b46510d5508cdb4a6977.1512569218.git.bdenes@scylladb.com>
(cherry picked from commit 9661769313)
2017-12-06 16:42:06 +02:00
Paweł Dziepak
d8521d0fa2 Merge "Flatten combined_mutation_reader" from Botond
"Convert combined_mutation_reader into a flat_mutation_reader impl. For
now - in the name of incremental progress - all consumers are updated to
use the combined reader through the
mutation_reader_from_flat_mutation_reader adaptor. The combined reader also
uses all its sub mutation_readers through the
flat_mutation_reader_from_mutation_reader adaptor."

* 'bdenes/flatten-combined-reader-v8' of https://github.com/denesb/scylla:
  Add unit tests for the combined reader - selector interactions
  Add flat_mutation_reader overload of make_combined_reader
  Flatten the implementation of combined_mutation_reader
  Add mutation_fragment_merger
  mutation_fragment::apply(): handle partition start and end too
  Add non-const overload of partition_start::partition_tombstone()
  Make combined_mutation_reader a flat_mutation_reader
  Move the mutation merging logic to combined_mutation_reader
  Remove the unnecessary indirection of mutation_reader_merger::next()
  Move the implementation of combined_mutation_reader into mutation_reader_merger
  Remove unused mutation_and_reader::less_compare and operator<

(cherry picked from commit 046991b0b7)
2017-12-06 16:41:42 +02:00
Takuya ASADA
f60696b55f dist/debian: need apt-get update after installing GPG key for 3rdparty repo
We need to run apt-get update after installing the GPG key; otherwise we
still get an unauthenticated package error during the Debian package build.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1512556948-29398-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit aeb6ebce5a)
2017-12-06 12:43:42 +02:00
Avi Kivity
1b15a0926a Merge "Make sstable tests use flat_mutation_reader" from Paweł
"This series makes sstable tests use the flat stream interface. The main
motivation is to allow eventual removal of mutation_reader and
streamed_mutation and to ensure that the conversion between the
interfaces doesn't hide any bugs that would otherwise be found."

* tag 'flat_mutation_reader-sstable-tests/v1' of https://github.com/pdziepak/scylla:
  sstables: drop read_range_rows()
  tests/mutation_reader: stop using read_range_rows()
  incremental_reader_selector: do not use read_range_rows()
  tests/sstable: stop using read_range_rows()
  sstables: drop read_row()
  tests/sstables: use read_row_flat() instead of read_row()
  database: use read_row_flat() instead of read_row()
  tests/sstable_mutation_test: get flat_mutation_readers from mutation sources
  tests/sstables: make sstable_reader return flat_mutation_reader
  sstable: drop read_row() overload accepting sstable::key
  tests/sstable: stop using read_row() with sstable::key
  tests/flat_mutation_reader_assertions: add has_monotonic_positions()
  tests/flat_mutation_reader_assertions: add produces(Range)
  tests/flat_mutation_reader_assertions: add produces(mutation)
  tests/flat_mutation_reader_assertions: add produces(dht::decorated_key)
  tests/flat_mutation_reader_assertions: add produces(mutation_fragment::kind)
  tests/flat_mutation_reader_assertions: fix fast forwarding

(cherry picked from commit 601a03dda7)
2017-12-06 10:12:36 +02:00
Takuya ASADA
32efd3902c dist/debian: install CA certificates before install repo GPG key
Since the pbuilder chroot environment does not install CA certificates by
default, accessing https://download.opensuse.org causes a certificate
verification error, so we need to install them before installing the
3rdparty repo GPG key.

Also, checking for the existence of gpgkeys_curl is not needed, since it is
never installed: we are running the script in a clean chroot environment.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1512517001-27524-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit 8f02967a3b)
2017-12-06 10:12:17 +02:00
Avi Kivity
6b2f7f8c39 Merge "enable secure-apt for Ubuntu/Debian pbuilder" from Takuya
* 'debian-secure-apt-3rdparty-v3' of https://github.com/syuu1228/scylla:
  dist/debian: support Ubuntu 18.04LTS
  dist/debian: disable ALLOWUNTRUSTED
  dist/debian: enable secure-apt for Debian
  dist/debian: enable secure-apt for Ubuntu

(cherry picked from commit a25b5e30f8)
2017-12-04 14:47:23 +02:00
Takuya ASADA
370a6482e3 dist/debian: disable entire pybuild actions
Even after 25bc18b was committed, we still see a build error similar to
#3036 in some environments, but on dh_auto_test rather than
dh_auto_install (see #3039).

So we need to disable all pybuild actions, not just dh_auto_install.

Fixes #3039

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1512185097-23828-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit 8c403ea4e0)
2017-12-02 19:37:01 +02:00
Takuya ASADA
981644167b dist/debian: skip running dh_auto_install on pybuild
We are getting a package build error in dh_auto_install, which is invoked
by pybuild.
But since we handle all installation in debian/scylla-server.install, we
can simply skip running dh_auto_install.

Fixes #3036

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1512065117-15708-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit 25bc18b8ff)
2017-12-01 16:06:44 +02:00
Avi Kivity
6f669da227 Update seastar submodule
* seastar 78cd87f...8d254a1 (2):
  > fstream: do not ignore dma_write return value
  > Update dpdk submodule

Fixes dpdk build and missing file write error check.
2017-11-30 10:43:22 +02:00
Avi Kivity
bdf1173075 Point seastar submodule at scylla-seastar.git
This allows fixes to seastar to be cherry-picked into
scylla-seastar.git branch-2.1.
2017-11-30 10:40:51 +02:00
Duarte Nunes
106c69ad45 compound_compact: Change universal reference to const reference
The universal reference was introduced so we could bind an rvalue to
the argument, but it would have sufficed to make the argument a const
reference. This is also more consistent with the function's other
overload.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20171129132758.19654-1-duarte@scylladb.com>
(cherry picked from commit cda3ddd146)
2017-11-29 14:42:08 +01:00
Tomasz Grabiec
740fcc73b8 Merge "compact_storage serialization fixes" from Duarte
Fix two issues with serializing non-compound range tombstones as
compound: convert a non-compound clustering element to compound and
actually advertise the issue to other nodes.

* git@github.com:duarten/scylla.git  rt-compact-fixes/v1:
  compound_compact: Allow rvalues in size()
  sstables/sstables: Convert non-compound clustering element to compound
  tests/sstable_mutation_test: Verify we can write/read non-correct RTs
  service/storage_service: Export non-compound RT feature

(cherry picked from commit e9cce59b85)
2017-11-29 14:18:21 +01:00
Raphael S. Carvalho
cefbb0b999 sstables: fix data_consume_context's move operator and ctor
After 7f8b62bc0b, the move operator and ctor broke. That potentially
leads to errors because the data_consume_context dtor moves the sstable
ref into a continuation while waiting for in-flight reads from the input
stream. Otherwise, the sstable can be destroyed in the meantime and the
file descriptor would be invalid, leading to EBADF.

Fixes #3020.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20171129014917.11841-1-raphaelsc@scylladb.com>
(cherry picked from commit f699cf17ae)
2017-11-29 09:54:27 +01:00
Tomasz Grabiec
02f43f5e4c Merge "Convert memtable flush reader to flat streams" from Paweł
This series converts memtable flush reader to the new flat mutation
readers. Just like the scanning reader, flush reader concatenates
multiple partition snapshot readers in order to provide a stream
of all partitions in the memtable.

* https://github.com/pdziepak/scylla.git flat_mutation_reader-memtable-flush/v1
   tests/flat_mutation_reader_assertion: add produces_partition()
   memtable: make make_flush_reader() return flat_mutation_reader
   flat_mutation_reader: add optimised flat_mutation_reader_opt
   memtable: switch flush reader implementation to flat streams
   tests/memtable: add test for flush reader

(cherry picked from commit 04106b4c96)
2017-11-27 20:29:25 +01:00
Duarte Nunes
8850ef7c59 tests/sstable_mutation_test: Change make_reader to make_flat_reader
A merge conflict between 596ebaed1f and
bd1efbc25c caused the test to fail to
build.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
(cherry picked from commit 4a6ffa3f5c)
2017-11-27 09:59:56 +01:00
Duarte Nunes
8567723a7b tests: Initialize storage service for some tests
These tests now require having the storage service initialized, which
is needed to decide whether correct non-compound range tombstones
should be emitted or not.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20171126152921.5199-1-duarte@scylladb.com>
(cherry picked from commit 922f095f22)
2017-11-26 17:41:20 +02:00
Duarte Nunes
b0b7c73acd cql3/delete_statement: Allow non-range deletions on non-compound schemas
This patch fixes a regression introduced in
1c872e2ddc.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20171126102333.3736-1-duarte@scylladb.com>
(cherry picked from commit 15fbb8e1ca)
2017-11-26 12:29:27 +02:00
Takuya ASADA
eb82d66849 dist/debian: link libgcc dynamically
As we discussed on the thread (https://github.com/scylladb/scylla/issues/2941),
since we override symbols on libgcc, we need to link libgcc dynamically for
Ubuntu/Debian too (CentOS already does).

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1511542866-21486-2-git-send-email-syuu@scylladb.com>
(cherry picked from commit 7380a6088b)
2017-11-25 20:10:15 +02:00
Takuya ASADA
eb12fb3733 dist/debian: switch to our PPA verions of gcc-72
Now that we have gcc-7.2 on our PPA for Ubuntu 16.04/14.04, let's switch to it.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1511542866-21486-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit df6546d151)
2017-11-25 20:10:14 +02:00
Tomasz Grabiec
60d011c9c0 Merge "Convert sstable writers to flat mutation readers" from Paweł
The following patches convert sstable writers to use flat mutation
readers instead of the legacy mutation_reader interface.
Writers were already using flat consumer interface and used
consume_flattened_in_thread(), so most of the work was limited to
providing an appropriate equivalent for flat mutation readers.

* https://github.com/pdziepak/scylla.git flat_mutation_reader-sstable-write/v1:
  flat_mutation_reader: move consumer_adapter out of consume()
  flat_mutation_reader: introduce consume_in_thread()
  tests/flat_mutation_reader: test consume_in_thread()
  sstables: switch write_components() to flat_mutation_reader
  streamed_mutation: drop streamed_mutation_returning()
  sstables: convert compaction to flat_mutation_reader
  mutation_reader: drop consume_flattened_in_thread()

(cherry picked from commit 596ebaed1f)
2017-11-24 18:49:32 +01:00
Tomasz Grabiec
7c3390bde8 Merge "Fixes to sstable files for non-compound schemas" from Duarte
This series mainly fixes issues with the serialization of promoted
index entries for non-compound schemas and with the serialization of
range tombstones, also for non-compound schemas.

We lift the correct cell name writing code into its own function,
and direct all users to it. We also ensure backward compatibility with
incorrectly generated promoted indexes and range tombstones.

Fixes #2995
Fixes #2986
Fixes #2979
Fixes #2992
Fixes #2993

* git@github.com:duarten/scylla.git  promoted-index-serialization/v3:
  sstables/sstables: Unify column name writers
  sstables/sstables: Don't write index entry for a missing row maker
  sstables/sstables: Reuse write_range_tombstone() for row tombstones
  sstables/sstables: Lift index writing for row tombstones
  sstables/sstables: Leverage index code upon range tombstone consume
  sstables/sstables: Move out tombstone check in write_range_tombstone()
  sstables/sstables: A schema with static columns is always compound
  sstables/sstables: Lift column name writing logic
  sstables/sstables: Use schema-aware write_column_name() for
    collections
  sstables/sstables: Use schema-aware write_column_name() for row marker
  sstables/sstables: Use schema-aware write_column_name() for static row
  sstables/sstables: Writing promoted index entry leverages
    column_name_writer
  sstables/sstables: Add supported feature list to sstables
  sstables/sstables: Don't use incorrectly serialized promoted index
  cql3/single_column_primary_key_restrictions: Implement is_inclusive()
  cql3/delete_statement: Constrain range deletions for non-compound
    schemas
  tests/cql_query_test: Verify range deletion constraints
  sstables/sstables: Correctly deserialize range tombstones
  service/storage_service: Add feature for correct non-compound RTs
  tests/sstable_*: Start the storage service for some cases
  sstables/sstable_writer: Prepare to control range tombstone
    serialization
  sstables/sstables: Correctly serialize range tombstones
  tests/sstable_assertions: Fix monotonicity check for promoted indexes
  tests/sstable_assertions: Assert a promoted index is empty
  tests/sstable_mutation_test: Verify promoted index serializes
    correctly
  tests/sstable_mutation_test: Verify promoted index repeats tombstones
  tests/sstable_mutation_test: Ensure range tombstone serializes
    correctly
  tests/sstable_datafile_test: Add test for incorrect promoted index
  tests/sstable_datafile_test: Verify reading of incorrect range
    tombstones
  sstables/sstable: Rename schema-oblivious write_column_name() function
  sstables/sstables: No promoted index without clustering keys
  tests/sstable_mutation_test: Verify promoted index is not generated
  sstables/sstables: Optimize column name writing and indexing
  compound_compat: Don't assume compoundness

(cherry picked from commit bd1efbc25c)
2017-11-24 18:49:19 +01:00
Tomasz Grabiec
95b55a0e9d tests: sstable: Make tombstone_purge_test more reliable
A TTL of 1 second may cause the cell to expire right after we write it,
if the seconds component of the current time changes right afterwards.
Use a larger TTL to avoid spurious failures due to this.
Message-Id: <1511463392-1451-1-git-send-email-tgrabiec@scylladb.com>

(cherry picked from commit 35e404b1a2)
2017-11-24 18:49:16 +01:00
Amnon Heiman
7785d8f396 estimated_histogram: update the sum and count when merging
When merging histograms the count and the sum should be updated.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <20171122154822.23855-1-amnon@scylladb.com>
(cherry picked from commit 3f8d9a87ee)
2017-11-22 16:57:08 +01:00
Glauber Costa
b805e37d30 estimated_histogram: also fill up sum metric
Prometheus histograms have 3 embedded metrics: count, buckets, and sum.
Currently we fill up count and buckets but sum is left at 0. This is
particularly bad, since according to the prometheus documentation, the
best way to calculate histogram averages is to write:

  rate(metric_sum[5m]) / rate(metric_count[5m])

One way of keeping track of the sum is to add each value we sample,
every time we sample. However, the estimated histogram interface
has a method, add_nano(), that adds a metric while also adjusting the
count for missing metrics.

That makes accumulating a sum inaccurate, as we will have no values for
the points that were added. To overcome that, when we call add_nano(),
we pretend we are introducing new_count - _count metrics, all with the
same value.

Long term, doing away with sampling may help us provide more accurate
results.

After this patch, we are able to correctly calculate latency averages
through the data exported in prometheus.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <20171122144558.7575-1-glauber@scylladb.com>
(cherry picked from commit 6c4e8049a0)
2017-11-22 16:57:07 +01:00
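The sum adjustment this commit describes can be sketched as follows. This is a hypothetical, simplified histogram for illustration only; the names add_nano, _count and _sample_sum mirror the commit message but this is not Scylla's actual estimated_histogram.

```cpp
#include <cstdint>

// Simplified sketch of the sum-adjustment idea: when the caller forces
// the count to a larger value, pretend the missing samples all had the
// same value as the one being recorded, so sum stays consistent with count.
struct sketch_histogram {
    uint64_t _count = 0;
    uint64_t _sample_sum = 0;

    // Record one observed value and bump the count by one.
    void add(uint64_t value) {
        ++_count;
        _sample_sum += value;
    }

    // Record a value while forcing the total count to new_count:
    // account for the (new_count - _count) missing samples as if each
    // of them had the given value.
    void add_nano(uint64_t new_count, uint64_t value) {
        _sample_sum += (new_count - _count) * value;
        _count = new_count;
    }

    // Average as Prometheus computes it from the _sum and _count series.
    double average() const {
        return _count ? double(_sample_sum) / double(_count) : 0.0;
    }
};
```

With this, after `add(10)` followed by `add_nano(4, 20)` the histogram holds count 4 and sum 10 + 3*20 = 70, so the sum/count average is 17.5 rather than the 0 a never-updated sum would yield.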
Tomasz Grabiec
a790b8cd20 Merge "Remove sstable::read_rows" from Piotr
* seastar-dev.git haaawk/flat_reader_remove_read_rows:
  sstable_mutation_test: use read_rows_flat instead of read_rows
  perf_sstable: use read_rows_flat instead of read_rows
  Remove sstable::read_rows

(cherry picked from commit e9ffe36d65)
2017-11-22 16:11:31 +01:00
Tomasz Grabiec
a10ea80a63 Merge "Migrate sstables to flat_mutation_reader" from Piotr
Introduce sstable::read_row_flat and sstable::read_range_rows_flat methods
and use them in sstable::as_mutation_source.

* https://github.com/scylladb/seastar-dev/tree/haaawk/flat_reader_sstables_v3:
  Introduce conversion from flat_mutation_reader to streamed_mutation
  Add sstables::read_rows_flat and sstables::read_range_rows_flat
  Turn sstable_mutation_reader into a flat_mutation_reader
  sstable: add getter for filter_tracker
  Move mp_row_consumer methods implementations to the bottom
  Remove unused sstable_mutation_reader constructor
  Replace "sm" with "partition" in get_next_sm and on_sm_finished
  Move advance_to_upper_bound above sstable_mutation_reader
  Store sstable_mutation_reader pointer in mp_row_consumer
  Stop using streamed_mutation in consumer and reader
  Stop using streamed_mutation in sstable_data_source
  Delete sstable_streamed_mutation
  Introduce sstable::read_row_flat
  Migrate sstable::as_mutation_source to flat_mutation_reader
  Remove single_partition_reader_adaptor
  Merge data_consume_context::impl into data_consume_context
  Create data_consume_context_opt.
  Merge on_partition_finished into mark_partition_finished
  Check _partition_finished instead of _current_partition_key
  Merge sstable_data_source into sstable_mutation_reader
  Remove sstable_data_source
  Remove get_next_partition and partition_header

(cherry picked from commit aa8c2cbc16)
2017-11-22 15:49:22 +01:00
Takuya ASADA
91a5c9d20c dist/redhat: avoid hardcoding GPG key file path on scylla-epel-7-x86_64.cfg
Since we want to support cross building, we shouldn't hardcode the GPG key
file path, even though these files are provided in recent versions of mock.

This fixes a build error on some older build environments such as CentOS 7.2.

Fixes #3002

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1511277722-22917-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit c1b97d11ea)
2017-11-21 17:26:53 +02:00
Takuya ASADA
f846b897bf configure.py: suppress 'nonnull-compare' warning on antlr3
We get the following warning from an antlr3 header when we compile Scylla with gcc-7.2:
/opt/scylladb/include/antlr3bitset.inl: In member function 'antlr3::BitsetList<AllocatorType>::BitsetType* antlr3::BitsetList<AllocatorType>::bitsetLoad() [with ImplTraits = antlr3::TraitsBase<antlr3::CustomTraitsBase>]':
/opt/scylladb/include/antlr3bitset.inl:54:2: error: nonnull argument 'this' compared to NULL [-Werror=nonnull-compare]

To make it compilable we need to specify '-Wno-nonnull-compare' on cflags.

Message-Id: <1510952411-20722-2-git-send-email-syuu@scylladb.com>
(cherry picked from commit f26cde582f)
2017-11-21 17:26:53 +02:00
Takuya ASADA
8d7c34bf68 dist/debian: switch Debian 3rdparty packages to external build service
Switch Debian 3rdparty packages to our OBS repo
(https://build.opensuse.org/project/subprojects/home:scylladb).

We don't use 3rdparty packages in dist/debian/dep, so they were dropped.
We also switch Debian to gcc-7.2/boost-1.63 at the same time.

Due to packaging issues, the following packages don't follow our 3rdparty
package naming rule for now:
 - gcc-7: renamed as 'xxx-scylla72' instead of scylla-xxx-72.
 - boost1.63: not renamed, and its prefix was not changed to /opt/scylladb

Message-Id: <1510952411-20722-1-git-send-email-syuu@scylladb.com>
(cherry picked from commit ab9d7cdc65)
2017-11-21 17:26:53 +02:00
Duarte Nunes
7449586a26 thrift/server: Handle exception within gate
The exception handling code inspects server state, which could be
destroyed before the handle_exception() task runs since it runs after
exiting the gate. Move the exception handling inside the gate and
avoid scheduling another accept if the server has been stopped.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20171116122921.21273-1-duarte@scylladb.com>
(cherry picked from commit 34a0b85982)
2017-11-21 15:52:38 +02:00
Daniel Fiala
b601b9f078 utils/big_decimal: Fix compilation issue with conversion of cpp_int to uint64_t.
Signed-off-by: Daniel Fiala <daniel@scylladb.com>
Message-Id: <20171121134854.16278-1-daniel@scylladb.com>
(cherry picked from commit 21ea05ada1)
2017-11-21 15:52:01 +02:00
Tomasz Grabiec
1ec81cda37 Merge "Convert queries to flat mutation readers" from Paweł
These patches convert queries (data, mutation and counter) to flat
mutation readers. All of them already use consume_flattened() to
consume a flat stream of data, so the only major missing thing
was adding support for reversed partitions to
flat_mutation_reader::consume().

* pdziepak flat_mutation_reader-queries/v3-rebased:
  flat_mutation_reader: keep reference to decorated key valid
  flat_mutation_reader: support consuming reversed partitions
  tests/flat_mutation_reader: add test for
    flat_mutation_reader::consume()
  mutation_partition: convert queries to flat_mutation_readers
  tests/row_cache_stress_test: do not use consume_flattened()
  mutation_reader: drop consume_flattened()
  streamed_mutation: drop reverse_streamed_mutation()

(cherry picked from commit 6969a235f3)
2017-11-21 12:58:41 +01:00
Paweł Dziepak
e87a2bc9c0 streamed_mutation: make emit_range_tombstone() exception safe
For a time, a range tombstone that was already removed from the tree
is owned by a raw pointer. This doesn't end well if the creation of
a mutation fragment or a call to push_mutation_fragment() throws.
Message-Id: <20171121105749.16559-1-pdziepak@scylladb.com>

(cherry picked from commit 1b936876b7)
2017-11-21 12:35:47 +01:00
Tomasz Grabiec
b84d13d325 Merge "Fix reversed queries with range tombstones" from Paweł
This series reworks handling of range tombstones in reversed queries
so that they are applied to correct rows. Additionally, the concept
of flipped range tombstones is removed, since it only made it harder
to reason about the code.

Fixes #2982.

* https://github.com/pdziepak/scylla fix-reverse-query-range-tombstone/v2:
  streamed_mutation: fix reversing range tombstones
  range_tombstone: drop flip()
  tests/cql_query_test: test range tombstones and reverse queries
  tests/range_tombstone_list: add test for range_tombstone_accumulator

(cherry picked from commit cec5b0a5b8)
2017-11-21 12:35:37 +01:00
Botond Dénes
b5abf6541d Add fast-forwarding with no data test to mutation_source_test
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <9cb630bf9441e178b2040709f92767d4a740a875.1511180262.git.bdenes@scylladb.com>
(cherry picked from commit f059e71056)
2017-11-21 12:34:46 +01:00
Botond Dénes
8cf869cb37 flat_mutation_reader_assertions: add fast_forward_to(position_range)
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <7b530909cf188887377aec3985f9f8c0e3b9b1e8.1511180262.git.bdenes@scylladb.com>
(cherry picked from commit a1a0d445d6)
2017-11-21 12:34:43 +01:00
Botond Dénes
df509761b0 flat_mutation_reader_from_mutation_reader(): make ff more resilient
Currently flat_mutation_reader_from_mutation_reader()'s
converting_reader will throw std::runtime_error if fast_forward_to() is
called when its internal streamed_mutation_opt is disengaged. This can
create problems if this reader is a sub-reader of a combined reader as the
latter has no way to determine the source of a sub-reader EOS. A reader
can be in EOS either because it reached the end of the current
position_range or because it doesn't have any more data.
To avoid this, instead of throwing we just silently ignore the fact that
the streamed_mutation_opt is disengaged and set _end_of_stream to true
which is still correct.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <83d309b225950bdbbd931f1c5e7fb91c9929ba1c.1511180262.git.bdenes@scylladb.com>
(cherry picked from commit 8065dca4a1)
2017-11-21 12:34:40 +01:00
Vlad Zolotarov
b90e11264e cql_transport::cql_server: fix the distributed prepared statements cache population
Don't std::move() the "query" string inside the parallel_for_each() lambda.
parallel_for_each invokes the given callback object for each element of the range,
so the first invocation of the lambda, which std::move()s the "query", destroys it for
all subsequent calls.

Fixes #2998

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1511225744-1159-1-git-send-email-vladz@scylladb.com>
(cherry picked from commit 941aa20252)
2017-11-21 10:53:50 +02:00
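The pitfall this commit fixes is general C++, not specific to seastar's parallel_for_each: a lambda that std::move()s a captured variable leaves it empty for every later invocation. A minimal illustrative sketch (the names and the plain loop are hypothetical; the real fix was in cql_server's prepared-statement cache population):

```cpp
#include <string>
#include <utility>
#include <vector>

// The same lambda body runs once per element; if it moves out of a
// captured variable, only the first iteration sees the real value.
std::vector<std::string> run(bool buggy) {
    std::string query = "SELECT * FROM t";
    std::vector<std::string> seen;
    for (int i = 0; i < 3; ++i) {
        if (buggy) {
            seen.push_back(std::move(query)); // first call empties 'query'
        } else {
            seen.push_back(query);            // copy: every call sees it
        }
    }
    return seen;
}
```

In the buggy variant only the first element is guaranteed to hold the query text; the later iterations read a moved-from string whose contents are unspecified.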
Shlomi Livne
84b2bff0a6 release: prepare for 2.1.rc0
Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2017-11-19 18:53:20 +02:00
Tomasz Grabiec
2113299b61 sstables: index_reader: Reset lower bound for promoted index lookups from advance_to_next_partition()
_current_pi_idx was not reset from advance_to_next_partition(), which
is used when we skip to the next partition before fully consuming
it. As a result, if we try to skip to a clustering position which is
before the index block used by the last skip in the previous
partition, we would not skip assuming that the new position is in the
current block. This may result in more data being read from the
sstable than necessary.

Fixes #2984
Message-Id: <1510915793-20159-1-git-send-email-tgrabiec@scylladb.com>
2017-11-17 11:00:26 +00:00
Avi Kivity
6950389a3f Update seastar submodule
* seastar 11ad0b1...78cd87f (3):
  > Merge "http: Use output stream for files" from Amnon
  > tutorial: a section about when_all() and when_all_succeed()
  > Merge "Power8 related changes (what's left of them)" from Vlad
2017-11-16 16:31:46 +02:00
Avi Kivity
f18b3928d0 Merge 2017-11-16 14:55:45 +02:00
Avi Kivity
beffe469af index_entry: add move constructor, assignment operators
As can be seen in one of the traces in #2958, the copy constructor
of index_entry is called in response to std::vector<index_entry>::push_back(index_entry&&).

This is wasteful. Fix by providing the full suite of constructors/assignment operators.
Message-Id: <20171116121608.5580-1-avi@scylladb.com>
2017-11-16 13:54:05 +01:00
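The underlying C++ rule: a user-declared copy constructor suppresses the implicitly-declared move operations, so `push_back(std::move(x))` silently falls back to copying. A hypothetical sketch of both situations (not the real index_entry):

```cpp
#include <utility>
#include <vector>

// A user-declared copy constructor suppresses the implicit move
// constructor, so std::move() here degrades to a copy.
struct copy_only {
    int copies = 0;
    copy_only() = default;
    copy_only(const copy_only& o) : copies(o.copies + 1) {}
    copy_only& operator=(const copy_only&) = default;
};

// With the full suite of constructors/assignment operators declared,
// moves are taken and counted separately from copies.
struct movable {
    int copies = 0;
    int moves = 0;
    movable() = default;
    movable(const movable& o) : copies(o.copies + 1), moves(o.moves) {}
    movable(movable&& o) noexcept : copies(o.copies), moves(o.moves + 1) {}
    movable& operator=(const movable&) = default;
    movable& operator=(movable&&) noexcept = default;
};
```

Pushing a moved `copy_only` into a vector still invokes the copy constructor, which is exactly the waste the commit removes by declaring the missing move operations.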
Avi Kivity
bbcfc57cb4 Merge "Free auth and its use from global variables" from Jesse
"This patch series addresses #2929. The objective is to eliminate global
state from the implementation and use of all access-control functionality.

I've made every effort to make these patches logically independent and
incremental, but the final patch is big: this was necessary because
eliminating the global instances themselves is an atomic change."

* 'jhk/non_global_auth/v2' of https://github.com/hakuch/scylla:
  auth: Switch to sharded service
  tracing/trace_keyspace_helper: Use internal `client_state`
  auth: Make the QP an explicit dependency
  auth: Unify Java class name attributes
  auth: Make life-time control more consistent
  auth: Move metadata constants
  auth: Don't expose internal constant
  auth: Extract `permissions_cache`
  utils/loading_cache: Include necessary dependency
  auth: Fix static constant initialization
  auth: Extract `delayed_tasks` from `auth.cc`
2017-11-16 14:52:34 +02:00
Jesse Haber-Kucharsky
ba6a41d397 auth: Switch to sharded service
This change appears quite large, but is logically fairly simple.

Previously, the `auth` module was structured around global state in a
number of ways:

- There existed global instances for the authenticator and the
  authorizer, which were accessed pervasively throughout the system
  through `auth::authenticator::get()` and `auth::authorizer::get()`,
  respectively. These instances needed to be initialized before they
  could be used with `auth::authenticator::setup(sstring type_name)`
  and `auth::authorizer::setup(sstring type_name)`.

- The implementation of the `auth::auth` functions and the authenticator
  and authorizer depended on resources accessed globally through
  `cql3::get_local_query_processor()` and
  `service::get_local_migration_manager()`.

- CQL statements would check for access and manage users through static
  functions in `auth::auth`. These functions would access the global
  authenticator and authorizer instances and depended on the necessary
  systems being started before they were used.

This change eliminates global state from all of these.

The specific changes are:

- Move out `allow_all_authenticator` and `allow_all_authorizer` into
  their own files so that they're constructed like any other
  authenticator or authorizer.

- Delete `auth.hh` and `auth.cc`. Constants and helper functions useful
  for implementing functionality in the `auth` module have moved to
  `common.hh`.

- Remove silent global dependency in
  `auth::authenticated_user::is_super()` on the auth* service in favour
  of a new function `auth::is_super_user()` with an explicit auth*
  service argument.

- Remove global authenticator and authorizer instances, as well as the
  `setup()` functions.

- Expose dependency on the auth* service in
  `auth::authorizer::authorize()` and `auth::authorizer::list()`, which
  is necessary to check for superuser status.

- Add an explicit `service::migration_manager` argument to the
  authenticators and authorizers so they can announce metadata tables.

- The permissions cache now requires an auth* service reference instead
  of just an authorizer since authorizing also requires this.

- The permissions cache configuration can now easily be created from the
  DB configuration.

- Move the static functions in `auth::auth` to the new `auth::service`.
  Where possible, previously static resources like the `delayed_tasks`
  are now members.

- Validating `cql3::user_options` requires an authenticator, which was
  previously accessed globally.

- Instances of the auth* service are accessed through `external`
  instances of `client_state` instead of globally. This includes several
  CQL statements including `alter_user_statement`,
  `create_user_statement`, `drop_user_statement`, `grant_statement`,
  `list_permissions_statement`, `permissions_altering_statement`, and
  `revoke_statement`. For `internal` `client_state`, this is `nullptr`.

- Since the `cql_server` is responsible for instantiating connections
  and each connection gets a new `client_state`, the `cql_server` is
  instantiated with a reference to the auth* service.

- Similarly, the Thrift server is now also instantiated with a reference
  to the auth* service.

- Since the storage service is responsible for instantiating and
  starting the sharded servers, it is instantiated with the sharded
  auth* service which it threads through. All relevant factory functions
  have been updated.

- The storage service is still responsible for starting the auth*
  service it has been provided, and shutting it down.

- The `cql_test_env` is now instantiated with an instance of the auth*
  service, and can be accessed through a member function.

- All unit tests have been updated and pass.

Fixes #2929.
2017-11-15 23:22:42 -05:00
Jesse Haber-Kucharsky
1dd181bd7b tracing/trace_keyspace_helper: Use internal client_state 2017-11-15 23:19:18 -05:00
Jesse Haber-Kucharsky
41612ee577 auth: Make the QP an explicit dependency
Rather than have all uses of the QP in auth reference global variables,
we supply a QP reference to both the authenticator and authorizer on
construction.

The caller still references a global variable when constructing the
instances, but fixing this problem is a much larger task that is out of
scope of this change.
2017-11-15 23:19:13 -05:00
Jesse Haber-Kucharsky
157e22a4f0 auth: Unify Java class name attributes 2017-11-15 23:19:00 -05:00
Jesse Haber-Kucharsky
9aff5d9a77 auth: Make life-time control more consistent 2017-11-15 23:18:44 -05:00
Jesse Haber-Kucharsky
5825e37310 auth: Move metadata constants
This change is motivated partly by aesthetics, but more significantly
due to the future work to refactor `auth` into a sharded service. Since
doing so will require writing `auth::auth` from scratch, these
constants (and other common functionality) need a new home.
2017-11-15 23:18:42 -05:00
Jesse Haber-Kucharsky
22670cae82 auth: Don't expose internal constant 2017-11-15 23:17:52 -05:00
Jesse Haber-Kucharsky
20b7f92b9c auth: Extract permissions_cache
In addition to improving clarity, this makes the cache testable.

There shouldn't be any functional changes.
2017-11-15 23:17:41 -05:00
Jesse Haber-Kucharsky
6f4241574c utils/loading_cache: Include necessary dependency 2017-11-15 23:17:05 -05:00
Jesse Haber-Kucharsky
5c39a2cc15 auth: Fix static constant initialization
Using "Meyer's singletons" eliminates the problem of static constant
initialization order, because static variables inside functions are
initialized only the first time control flow passes over their
declaration.

Fixes #2966.
2017-11-15 23:16:52 -05:00
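The pattern the commit refers to, in minimal form. The function name and the constant value are hypothetical illustrations, not the actual auth constants:

```cpp
#include <string>

// Meyer's singleton: the static local is constructed on first use,
// sidestepping the cross-translation-unit static initialization
// order problem that affects namespace-scope constants.
const std::string& default_superuser_name() {
    static const std::string name = "cassandra"; // built on first call
    return name;
}
```

Callers anywhere, including other static initializers, can safely use `default_superuser_name()`: the standard guarantees the local is initialized exactly once, the first time control passes over its declaration.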
Jesse Haber-Kucharsky
507e1ef8d5 auth: Extract delayed_tasks from auth.cc
This simple task scheduler is used by the auth module to delay metadata
creation until the system is settled.

Extracting it out allows the `auth` module to be refactored into a
sharded service and for other components of `auth` to make use of it.

Fixes #2965.
2017-11-15 23:16:46 -05:00
Botond Dénes
7f60fb19b4 flat_mutation_reader::fast_forward_buffer_to: remove schema parameter
e7a0732f72 added the schema to
flat_mutation_reader::impl so the schema doesn't need to be provided
externally anymore.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <04933512d3485d85629a9945b8ecb211aa2aab50.1510732121.git.bdenes@scylladb.com>
2017-11-15 10:40:02 +01:00
Tomasz Grabiec
a061be688d Merge "Prepare sstables read path for flat_mutation_reader" from Piotr
This patchset prepares sstables read path for flat_mutation_reader.
It cuts some dependencies between classes and replaces
sstables::mutation_reader with ::mutation_reader.  This will make it
possible to gradually convert the code to flat_mutation_reader because
we have converters between flat_mutation_reader and ::mutation_reader.

* seastar-dev.git haaawk/flat_reader_prepare_sstables_rebased
  Reduce dependencies from mp_row_consumer to sstable_streamed_mutation
  Replace sstables::mutation_reader with ::mutation_reader
  Remove range_reader_adaptor
  Remove sstable_range_wrapping_reader
2017-11-15 10:40:02 +01:00
Piotr Jastrzebski
6cd4b6b09c Remove sstable_range_wrapping_reader
The wrapper is no longer needed because
read_range_rows returns ::mutation_reader instead of
sstables::mutation_reader and the reader returned from
it keeps the pointer to shared_sstable that was used to
create the reader.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-15 10:40:02 +01:00
Piotr Jastrzebski
6d85e4fb0c Remove range_reader_adaptor
The wrapper is no longer needed because
read_range_rows returns ::mutation_reader instead of
sstables::mutation_reader and the reader returned from
it keeps the pointer to shared_sstable that was used to
create the reader.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-15 10:40:02 +01:00
Piotr Jastrzebski
ea449c9cce Replace sstables::mutation_reader with ::mutation_reader
This will make migration to flat_mutation_reader much
easier and sstables::mutation_reader is going away with
this migration anyway.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-15 10:40:01 +01:00
Piotr Jastrzebski
228f0737f4 Reduce dependencies from mp_row_consumer to sstable_streamed_mutation
Before this patch mp_row_consumer was using sstable_streamed_mutation
in two ways:

1. Populate sstable_streamed_mutation's buffer with mutation_fragments
2. Advance sstable_streamed_mutation's sstable_data_source to new position.

We can easily reduce those dependencies only to the first one.
This will reduce the coupling between those classes and simplify
the flow of execution.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-15 10:40:01 +01:00
Takuya ASADA
07c191af41 dist/common/scripts/scylla_dev_mode_setup: include scylla_lib.sh
To use the verify_args function we require scylla_lib.sh, so include it.

Fixes #2945

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1510154173-18017-1-git-send-email-syuu@scylladb.com>
2017-11-15 11:31:14 +02:00
Avi Kivity
7caf3a543e Merge "Respect size-tiered options in strategies that rely on its functionality" from Raphael
"Otherwise, such strategies couldn't behave as expected when they need to do STCS."

* 'respecting_stcs_options_v2' of github.com:raphaelsc/scylla:
  tests: enable twcs test that relied on size-tiered properties
  twcs: respect stcs options by forwarding them to stcs method
  lcs: forward stcs options to respect them
  stcs: make most_interesting_bucket respect size-tiered options
  stcs: make most_interesting_bucket respect thresholds
  compaction: make size_tiered_most_interesting_bucket static method of stcs class
  stcs: introduce new ctor
  stcs: make header self contained
  stcs: inline function definition so as not to break one definition rule
2017-11-14 17:57:57 +02:00
Raphael S. Carvalho
1f478d5daa tests: enable twcs test that relied on size-tiered properties
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-11-14 13:27:27 -02:00
Raphael S. Carvalho
8165af1d08 twcs: respect stcs options by forwarding them to stcs method
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-11-14 13:27:27 -02:00
Raphael S. Carvalho
9cdc047a4c lcs: forward stcs options to respect them
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-11-14 13:27:27 -02:00
Raphael S. Carvalho
2b7f87474b stcs: make most_interesting_bucket respect size-tiered options
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-11-14 13:27:25 -02:00
Raphael S. Carvalho
d8ec913c34 stcs: make most_interesting_bucket respect thresholds
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-11-14 13:26:04 -02:00
Raphael S. Carvalho
cb6d060d8e compaction: make size_tiered_most_interesting_bucket static method of stcs class
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-11-14 13:24:03 -02:00
Raphael S. Carvalho
b69dbf8b99 stcs: introduce new ctor
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-11-14 13:21:59 -02:00
Avi Kivity
0dc888f963 Merge 2017-11-14 15:59:52 +02:00
Tomasz Grabiec
7323fe76db gossiper: Replicate endpoint_state::is_alive()
Broken in f570e41d18.

Not replicating this may cause the coordinator to treat a node which is
down as alive, or vice versa.

Fixes regression in dtest:

  consistency_test.py:TestAvailability.test_simple_strategy

which was expected to get "unavailable" exception but it was getting a
timeout.

Message-Id: <1510666967-1288-1-git-send-email-tgrabiec@scylladb.com>
2017-11-14 15:58:00 +02:00
Vlad Zolotarov
c6c41aa877 tests: loading_cache_test: make it more robust
Make sure loading_cache::stop() is always called where appropriate:
regardless of whether the test failed or there was an exception during the test.
Otherwise a false-alarm use-after-free error may occur.

Fixes #2955

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1510625736-3109-1-git-send-email-vladz@scylladb.com>
2017-11-14 11:35:49 +00:00
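The technique behind "always call stop(), even on failure or exception" is a scope guard that runs cleanup on every exit path. A generic sketch of the idea, with the guard standing in for the cache stop call; this is not seastar's or the test's actual API:

```cpp
#include <functional>
#include <stdexcept>
#include <utility>

// Run the cleanup on scope exit, whether the test body returned
// normally or threw; this is what prevents the use-after-free when
// a failing test would otherwise skip the stop() call.
class scope_guard {
    std::function<void()> _f;
public:
    explicit scope_guard(std::function<void()> f) : _f(std::move(f)) {}
    ~scope_guard() { if (_f) _f(); }
    scope_guard(const scope_guard&) = delete;
    scope_guard& operator=(const scope_guard&) = delete;
};

// Returns whether the cleanup ran, for both the normal and throwing paths.
bool run_test_with_cleanup(bool throw_in_body) {
    bool stopped = false;
    try {
        scope_guard g([&] { stopped = true; }); // stands in for cache.stop()
        if (throw_in_body) {
            throw std::runtime_error("simulated test failure");
        }
    } catch (...) {
        // the guard already ran during unwinding; swallow for the sketch
    }
    return stopped;
}
```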
Avi Kivity
09e730f9f2 Merge "Fix bugs in cache related to handling of bad_alloc" from Tomasz
"Fixes #2944."

* tag 'tgrabiec/cache-exception-safety-fixes-v2' of github.com:scylladb/seastar-dev:
  tests: row_cache: Add test for exception safety of multi-partition scans
  tests: row_cache: Add test for exception safety of single-partition reads
  tests: mutation_source_tests: Always print the seed
  tests: Disable alloc failure injection in test assertions
  tests: Avoid needless copies
  row_cache: Fix exception safety of cache_entry::read()
  row_cache: scanning_and_populating_reader: Fix exception unsafety causing read to skip data
  row_cache: partition_range_cursor: Extract valid() and advance_to() from refresh()
  cache_streamed_mutation: Add trace-level logging to cache_streamed_mutation
  mvcc: Lift noexcept off partition_snapshot_row_weakref assignment/constructors
  cache_streamed_mutation: Make advancing to the next range exception-safe
  cache_streamed_mutation: Make add_clustering_row_to_buffer() exception-safe
  cache_streamed_mutation: Make drain_tombstones() exception-safe
  cache_streamed_mutation: Return void from start_reading_from_underlying()
  cache_streamed_mutation: Document invariants related to exception-safety
  streamed_mutation: Add reserve_one()
  lsa: Guarantee invalidated references on allocating section retry
  mvcc: partition_snapshot_row_cursor: Mark allocation points
2017-11-14 11:42:13 +02:00
Raphael S. Carvalho
f6574412a3 stcs: make header self contained
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-11-13 18:07:31 -02:00
Raphael S. Carvalho
2b45aa3593 stcs: inline function definition so as not to break one definition rule
The goal is to allow the header to be included from multiple translation units
without violating the one definition rule.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-11-13 18:07:30 -02:00
Tomasz Grabiec
638d23025b tests: row_cache: Add test for exception safety of multi-partition scans 2017-11-13 20:55:14 +01:00
Tomasz Grabiec
084e1861c8 tests: row_cache: Add test for exception safety of single-partition reads 2017-11-13 20:55:14 +01:00
Tomasz Grabiec
a968a84ec5 tests: mutation_source_tests: Always print the seed
BOOST_TEST_MESSAGE() is not logged by default, and for some tests we
don't want to enable that because it's too noisy. But we need to know
the seed to reproduce a failure, so it is better to always print it.
2017-11-13 20:55:14 +01:00
Tomasz Grabiec
e868929faf tests: Disable alloc failure injection in test assertions
Injecting failures to assertions doesn't add much value but slows down
test execution by adding extra iterations.
2017-11-13 20:55:14 +01:00
Tomasz Grabiec
5cf7f9d1bb tests: Avoid needless copies 2017-11-13 20:55:14 +01:00
Tomasz Grabiec
1971332195 row_cache: Fix exception safety of cache_entry::read()
When we fail, we need to return streamed_mutation back, so that
the operation can be retried.

Causes SIGSEGV on nullptr otherwise on bad_alloc.
2017-11-13 20:55:14 +01:00
Tomasz Grabiec
11a195c403 row_cache: scanning_and_populating_reader: Fix exception unsafety causing read to skip data
If assignment to _lower_bound in the "_secondary_in_progress = true;"
case in do_read_from_primary() throws due to allocation failure, the
update section will be retried and we will take the not_moved path,
skipping the range which was discontinuous and was supposed to be read
from underlying.

Fix by redoing lookup using _lower_bound in case the section is
retried. When we retry, _primary.valid() will be false. We need to
ensure now that _lower_bound is always valid.

Fixes #2944.
2017-11-13 20:55:14 +01:00
Tomasz Grabiec
5dc1ee41e4 row_cache: partition_range_cursor: Extract valid() and advance_to() from refresh() 2017-11-13 20:55:14 +01:00
Tomasz Grabiec
09c49b2db3 cache_streamed_mutation: Add trace-level logging to cache_streamed_mutation 2017-11-13 20:55:14 +01:00
Tomasz Grabiec
f60cfa34f4 mvcc: Lift noexcept off partition_snapshot_row_weakref assignment/constructors
Assignment to _pos (position_in_partition) may throw. noexcept is a
remnant from the version which didn't have _pos.
2017-11-13 20:55:14 +01:00
Tomasz Grabiec
bd7b68f877 cache_streamed_mutation: Make advancing to the next range exception-safe
Changing _ck_ranges_curr and _lower_bound should be atomic, either
both fail or both succeed.  Currently it could happen that if
position_in_partition::for_range_start() fails, _ck_ranges_curr would
be advanced but _lower_bound not.
2017-11-13 20:55:14 +01:00
Tomasz Grabiec
081deec731 cache_streamed_mutation: Make add_clustering_row_to_buffer() exception-safe
We need to maintain the following invariants:
 (1) no fragment with position >= _lower_bound was pushed yet
 (2) If _lower_bound > mf.position(), mf was emitted

Before this patch (1) could be violated if drain_tombstones() failed
in the middle. (2) could be violated if push_mutation_fragment()
failed.
2017-11-13 20:55:14 +01:00
Tomasz Grabiec
d1b844737a cache_streamed_mutation: Make drain_tombstones() exception-safe
If push_mutation_fragment() failed, mfo which we got from get_next()
would be lost. Fix by making sure push_mutation_fragment() won't fail.
2017-11-13 20:55:14 +01:00
Tomasz Grabiec
875fc93956 cache_streamed_mutation: Return void from start_reading_from_underlying()
The return value is no longer used.
2017-11-13 20:55:14 +01:00
Tomasz Grabiec
5fb319bbb9 cache_streamed_mutation: Document invariants related to exception-safety 2017-11-13 20:55:14 +01:00
Tomasz Grabiec
53f4452b47 streamed_mutation: Add reserve_one() 2017-11-13 20:55:13 +01:00
Tomasz Grabiec
8d69d217af lsa: Guarantee invalidated references on allocating section retry
There is existing code (e.g. use of partition_snapshot_row_cursor in
cache_streamed_mutation) which assumes that references will be
invalidated when bad_alloc is thrown from allocating_section. That is
currently the case because on retry we will attempt memory reclamation
which will invalidate references either through compaction or
eviction. Make this guarantee explicit.
2017-11-13 20:55:13 +01:00
Tomasz Grabiec
6bf1c6014f mvcc: partition_snapshot_row_cursor: Mark allocation points
This marks places which may allocate but not always do as allocation
points to increase effectiveness of testing.
2017-11-13 20:55:13 +01:00
Raphael S. Carvalho
cfd2343689 sstables: fix report in integrity check file interposer
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20171113185842.11018-1-raphaelsc@scylladb.com>
2017-11-13 20:07:09 +01:00
Raphael S. Carvalho
cf8e12c760 checked_file_impl: remove unneeded variant of open_checked_file_dma
like in integrity_checked_file_impl, we don't need a variant of
open for default file open options.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20171113185412.10880-1-raphaelsc@scylladb.com>
2017-11-13 20:06:58 +01:00
Raphael S. Carvalho
564046a135 thrift: fix compilation error
thrift/server.cc:237:6:   required from here
thrift/server.cc:236:9: error: cannot call member function ‘void thrift_server::maybe_retry_accept(int, bool, std::__exception_ptr::exception_ptr)’ without object
         maybe_retry_accept(which, keepalive, std::move(ex));

gcc version: gcc (GCC) 6.3.1 20161221 (Red Hat 6.3.1-1)

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20171113184537.10472-1-raphaelsc@scylladb.com>
2017-11-13 20:05:33 +01:00
Avi Kivity
9cd5bc4eb8 Merge "Convert streaming to flat mutation readers" from Paweł
"The following patches convert streaming and repair code to the new
flat mutation reader interface. In particular this involves changing::
 - fragment_and_freeze() -- a consumer that fragments and freezes mutations
 - checksum computation for repair which until now was using two-level
   mutation_reader/streamed_mutation interface
 - multi_range_reader -- a mutation reader that automatically fast
   forwards between given partition ranges"

* tag 'flat_mutation_reader-streaming/v2' of https://github.com/pdziepak/scylla: (24 commits)
  mutation_reader: drop multi_range_reader
  db: convert make_streaming_reader() to flat_mutation_reader
  tests/flat_mutation_reader: add test for multi range reader
  tests/flat_mutation_reader_assertions: add fast_forward_to()
  tests/simple_schema: add to_ring_positions() helper
  flat_mutation_reader: convert flat_multi_range_mutation_reader
  flat_mutation_reader: add partition_range_forwarding
  flat_mutation_reader: make pop_mutation_fragment() public
  flat_mutation_reader: copy multi_range_mutation_reader
  streamed_mutation: drop mutation_hasher
  tests/flat_mutation_reader: add test for partition checksum
  repair: convert partition_checksum::compute_streamed() to flat streams
  repair: make partition_hasher consume flat mutation streams
  mutation_hasher: copy mutation_hasher to repair.cc
  partition_start: make partition_tombstone() const
  partition_checksum: introduce compute() for flat_mutation_reader
  db: drop single-range make_streaming_reader()
  fragment_and_freeze: drop streamed_mutation overload
  stream_transfer_task: switch to flat_mutation_reader
  tests/flat_mutation_reader: add test for fragment_and_freeze
  ...
2017-11-13 18:56:59 +02:00
Paweł Dziepak
97767963a0 mutation_reader: drop multi_range_reader 2017-11-13 16:49:52 +00:00
Paweł Dziepak
dca93bea23 db: convert make_streaming_reader() to flat_mutation_reader 2017-11-13 16:49:52 +00:00
Paweł Dziepak
98965add5b tests/flat_mutation_reader: add test for multi range reader
Based on mutation_reader.cc:test_multi_range_reader.
2017-11-13 16:49:52 +00:00
Paweł Dziepak
d23813cd41 tests/flat_mutation_reader_assertions: add fast_forward_to() 2017-11-13 16:49:52 +00:00
Paweł Dziepak
8fc9d250c5 tests/simple_schema: add to_ring_positions() helper
Based on mutation_reader_test.cc:to_ring_position()
2017-11-13 16:49:52 +00:00
Paweł Dziepak
a9ec01d5a5 flat_mutation_reader: convert flat_multi_range_mutation_reader 2017-11-13 16:49:52 +00:00
Paweł Dziepak
11e8866aee flat_mutation_reader: add partition_range_forwarding
flat_mutation_reader::partition_range_forwarding and
mutation_reader::forwarding are aliases of the same type. The change was
necessary in order to make mutation_reader::forwarding available in
flat_mutation_reader.hh even though it is included by mutation_reader.hh
2017-11-13 16:49:52 +00:00
Paweł Dziepak
009785a178 flat_mutation_reader: make pop_mutation_fragment() public
flat_mutation_reader's public interface already exposes the low-level
is_buffer_empty() and is_buffer_full(); adding pop_mutation_fragment()
will make the implementation of intermediate readers more straightforward.
2017-11-13 16:49:52 +00:00
Paweł Dziepak
d9a2b00d4a flat_mutation_reader: copy multi_range_mutation_reader
multi_range_mutation_reader for flat mutation readers is going to be
based on the original one.
2017-11-13 16:49:52 +00:00
Paweł Dziepak
7866e5b4a9 streamed_mutation: drop mutation_hasher 2017-11-13 16:49:52 +00:00
Paweł Dziepak
aa64b711d1 tests/flat_mutation_reader: add test for partition checksum
Based on streamed_mutation_test:test_mutation_hash
2017-11-13 16:49:52 +00:00
Paweł Dziepak
f690e2e80b repair: convert partition_checksum::compute_streamed() to flat streams 2017-11-13 16:49:52 +00:00
Paweł Dziepak
d71a14b943 repair: make partition_hasher consume flat mutation streams 2017-11-13 16:49:52 +00:00
Paweł Dziepak
2b774119a1 mutation_hasher: copy mutation_hasher to repair.cc
Repair is the exclusive user of mutation_hasher. Moving it there will
make integration with partition_checksum easier.
2017-11-13 16:49:52 +00:00
Paweł Dziepak
af4fa6152b partition_start: make partition_tombstone() const 2017-11-13 16:49:52 +00:00
Paweł Dziepak
f648f94464 partition_checksum: introduce compute() for flat_mutation_reader 2017-11-13 16:49:52 +00:00
Paweł Dziepak
37640f223b db: drop single-range make_streaming_reader() 2017-11-13 16:49:52 +00:00
Paweł Dziepak
e2481a89e1 fragment_and_freeze: drop streamed_mutation overload 2017-11-13 16:49:52 +00:00
Paweł Dziepak
6f1e0d3ed8 stream_transfer_task: switch to flat_mutation_reader 2017-11-13 16:49:52 +00:00
Paweł Dziepak
50a1d76c1f tests/flat_mutation_reader: add test for fragment_and_freeze
Based on streamed_mutation_test:test_fragmenting_and_freezing_streamed_mutations
2017-11-13 16:49:52 +00:00
Paweł Dziepak
f5c40e0861 flat_mutation_reader_from_mutations: take vector by value 2017-11-13 16:49:51 +00:00
Paweł Dziepak
9854b8a450 fragment_and_freeze: work on flat_mutation_readers 2017-11-13 16:49:47 +00:00
Paweł Dziepak
8bb672502d fragment_and_freeze: allow callback to stop iteration
There is a user of fragment_and_freeze() (streaming) that will need
to be able to break the loop. Right now, it does that between
streamed_mutations, but that won't be possible after we switch to flat
readers.
2017-11-13 16:44:33 +00:00
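The stop-aware callback above can be sketched in isolation; stop_iteration and the consumer signature here are simplified stand-ins for the actual seastar/scylla types, not the real fragment_and_freeze() code:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Simplified stand-in for seastar's stop_iteration.
enum class stop_iteration { no, yes };

// Feed fragments to the callback until it asks to stop; returns how many
// fragments were consumed before the loop was broken.
int consume_until_stop(const std::vector<int>& fragments,
                       const std::function<stop_iteration(int)>& callback) {
    int consumed = 0;
    for (int f : fragments) {
        ++consumed;
        if (callback(f) == stop_iteration::yes) {
            break;  // the consumer broke the loop, as streaming needs to
        }
    }
    return consumed;
}
```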
Paweł Dziepak
73b8f54cf4 test/mutation_source_test: generate sets of mutations 2017-11-13 16:42:56 +00:00
Tomasz Grabiec
3536d2156c tests: row_cache: Add reproducer for issue #2948
Message-Id: <1510229584-14398-2-git-send-email-tgrabiec@scylladb.com>
2017-11-13 15:20:21 +00:00
Tomasz Grabiec
8402728747 row_cache: Call open_version() under region's allocator
partition_entry::read() calls open_version() under standard allocator,
but it may allocate a new partition version if a snapshot already
exists which was created in an earlier phase. Versions are supposed to
be allocated using the region's allocator, since they will be freed using
the region's allocator. LSA will delegate free() to the standard allocator
correctly in this case, but it will subtract from its
_non_lsa_occupancy, assuming the allocation was done through it. This
will corrupt occupancy() for cache region.

Fixes #2948.
Message-Id: <1510229584-14398-1-git-send-email-tgrabiec@scylladb.com>
2017-11-13 15:20:08 +00:00
Avi Kivity
061f6830fa Merge "thrift/server: Ensure stop() waits for accepts" from Duarte
"Ensure stop() waits for the accept loop to complete to avoid crashes
during shutdown."

* 'thrift-server-stop/v4' of https://github.com/duarten/scylla:
  thrift/server: Restore code format
  thrift/server: Stopping the server waits for connection shutdown
  thrift/server: Abort listeners on stop()
  thrift/server: Avoid manual memory management
  thrift/server: Add move ctor for connection
  thrift/server: Extract retry logic
  thrift/server: Retry with backoff for some error types
  thrift/server: Retry accept in case of error
2017-11-13 12:48:05 +02:00
Duarte Nunes
049fbb58f3 thrift/server: Restore code format
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-11-13 11:21:54 +01:00
Duarte Nunes
7b25e3200a thrift/server: Stopping the server waits for connection shutdown
This patch ensures the future returned from stop() resolves only when
all connections and listeners are no longer in use.

Fixes #2657
Fixes #2942

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-11-13 11:21:53 +01:00
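The "stop() resolves only when all connections are gone" behavior follows the familiar gate pattern. A minimal single-threaded sketch (this is not seastar's actual gate, just the counting idea):

```cpp
#include <cassert>
#include <stdexcept>

// Minimal gate sketch: each live connection enters the gate; close() marks
// the gate closed, and stop() may only complete once every holder has left.
class gate {
    int _count = 0;
    bool _closed = false;
public:
    void enter() {
        if (_closed) {
            throw std::runtime_error("gate closed");  // no new connections
        }
        ++_count;
    }
    void leave() { --_count; }
    void close() { _closed = true; }
    // stop()'s future would resolve only once this becomes true.
    bool may_complete_close() const { return _closed && _count == 0; }
};
```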
Duarte Nunes
f523a0f845 thrift/server: Abort listeners on stop()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-11-13 11:19:44 +01:00
Duarte Nunes
8e0e2363e9 thrift/server: Avoid manual memory management
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-11-13 11:19:44 +01:00
Duarte Nunes
75d04be96f thrift/server: Add move ctor for connection 2017-11-13 11:19:44 +01:00
Duarte Nunes
9d3322ff1a thrift/server: Extract retry logic
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-11-13 11:19:43 +01:00
Duarte Nunes
b5cf1a152f thrift/server: Retry with backoff for some error types
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-11-13 11:19:19 +01:00
Duarte Nunes
f367dbe1ed thrift/server: Retry accept in case of error
In case of errors like ECONNABORTED, we want to retry accepting
connections. Right now we immediately retry the accept, but in
subsequent patches we introduce a backoff for other types of errors.

We also consider fatal errors like EBADFD, which should not trigger a
retry.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-11-13 11:19:03 +01:00
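The classification this series introduces can be illustrated standalone; which errnos land in which bucket is an assumption here (the commit mentions the Linux-specific EBADFD, while this sketch uses the portable EBADF):

```cpp
#include <cassert>
#include <cerrno>

// Three possible reactions to an accept() failure.
enum class accept_action { retry_now, retry_with_backoff, fatal };

// Classify an errno from accept(): transient errors retry immediately,
// resource exhaustion retries with backoff, programming errors are fatal.
accept_action classify_accept_error(int err) {
    switch (err) {
    case ECONNABORTED:   // peer aborted before we accepted: just retry
        return accept_action::retry_now;
    case EMFILE:
    case ENFILE:         // out of file descriptors: back off, then retry
        return accept_action::retry_with_backoff;
    case EBADF:          // our listening socket is broken: do not retry
    default:
        return accept_action::fatal;
    }
}
```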
Avi Kivity
d57395dce9 cql: prevent overflow when computing averages
Currently, we use the type of the column as the accumulator when we
average it. This can easily overflow, e.g. (2^31-1)+(3) = overflow.

Fix by using __int128 for the accumulator.  It's not standard, but
it's way more efficient and simpler than the alternatives.

Inspired by CASSANDRA-12417, but much simpler due to the availability
of __int128.
Message-Id: <20171112173529.30764-1-avi@scylladb.com>
2017-11-13 08:53:59 +01:00
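The accumulator idea can be shown with a small standalone sketch (not the actual aggregate-function code): sum into __int128, a GCC/Clang extension wide enough that summing int32 values cannot overflow, and divide once at the end.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Average int32 values without overflowing: accumulate into __int128
// instead of the column's own type.
int32_t average(const std::vector<int32_t>& values) {
    if (values.empty()) {
        return 0;
    }
    __int128 sum = 0;  // wide accumulator: cannot overflow on int32 inputs
    for (int32_t v : values) {
        sum += v;
    }
    return static_cast<int32_t>(sum / static_cast<__int128>(values.size()));
}
```

With an int32_t accumulator, (2^31-1)+3 would wrap around; the wide sum divides correctly.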
Piotr Jastrzebski
acfc6fef55 Simplify flat_mutation_reader wrappers
If a wrapper takes a flat_mutation_reader in a constructor
then it does not have to take schema_ptr because it can obtain
it from the inner flat_mutation_reader.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <88c3672df08d2ac465711e9138d426e43ae9c62b.1510331382.git.piotr@scylladb.com>
2017-11-13 08:53:34 +01:00
Avi Kivity
f8af4f507b Merge "Support for varint and decimal in aggregate functions" from Daniel
"This patch adds support for varint and decimal to aggregate functions.
Some other types (like byte or smallint) weren't supported and they are
supported by C*. So their aggregate functions were added as well.

To allow aggregate functions for big_decimal, following methods were added
to big_decimal type:
  * Division by int64_t that preserves the number of decimal digits.
  * Operator += .
  * Comparison operators.

Fixes #2842."

* 'danfiala/scylla-2842-send-002' of https://github.com/hagrid-the-developer/scylla:
  tests: Add tests for aggregate functions.
  tests: Add tests for big_decimal type.
  cql3/functions: Add aggregate functions for big_decimal.
  utils/big_decimal: Added necessary operators and methods for aggregate functions.
  cql3/functions: Add aggregate functions for types for which it is trivial.
2017-11-12 17:11:33 +02:00
Daniel Fiala
bc20484c47 tests: Add tests for aggregate functions.
Signed-off-by: Daniel Fiala <daniel@scylladb.com>
2017-11-12 15:53:22 +01:00
Daniel Fiala
ee1d69502b tests: Add tests for big_decimal type.
Signed-off-by: Daniel Fiala <daniel@scylladb.com>
2017-11-12 15:53:22 +01:00
Daniel Fiala
74c5f70b0a cql3/functions: Add aggregate functions for big_decimal.
Signed-off-by: Daniel Fiala <daniel@scylladb.com>
2017-11-12 15:53:13 +01:00
Daniel Fiala
ce2f010859 utils/big_decimal: Added necessary operators and methods for aggregate functions.
Signed-off-by: Daniel Fiala <daniel@scylladb.com>
2017-11-12 15:51:29 +01:00
Daniel Fiala
115668fe70 cql3/functions: Add aggregate functions for types for which it is trivial.
Signed-off-by: Daniel Fiala <daniel@scylladb.com>
2017-11-12 13:56:20 +01:00
Tomasz Grabiec
484dde692f Merge "make sure that cache updates don't overflow dirty memory" from Glauber
Since we started accounting virtual dirty memory we no longer have a cap
on real dirty memory. In most situations that is not needed, since real
dirty will just be at most twice as much as virtual dirty (current
flushing memtable plus new memtable).

However, due to things like cache updates and component flushing we can
end up having a lot of memtables that are virtually freed but not yet
fully released, leading real dirty memory to explode, using all the box's
memory.

This patch adds a cap on real dirty memory as well. Because of the
hierarchical nature of region_group, if the parent blocks due to memory
depletion, so will the child (virtual dirty region group).

After that is done, we need to make sure that dirty memory is not seen
as freed until the cache update is done. Until a particular partition is
moved to the cache it is not evictable. As a result we can OOM the
system if we have a lot of pending cache updates as the writes will not
be throttled and memory won't be made available.

This patch pins the memory used by the region as real dirty before the
cache update starts, and unpins it when it is over. In the meantime it
gradually releases memory of the partitions that are being moved to
cache.

I have verified in a couple of workloads that the amount of memory
accounted through this is the same amount of memory accounted through
the memtable flush procedure.

Fixes #1942

* git@github.com:glommer/scylla.git glommer/update-cache-v4:
  row_cache: modernize use of seastar threads
  mutation_partition: estimate size of partition
  memtable: factor out calculation of memtable_entry memory size
  memtable: add a method to export memtable's dirty memory manager
  dirty_memory_manager: block if we hit the real dirty limit
  dirty_memory_manager: add functions to manipulate real dirty
  partition: add method to calculate memory size of a partition
  row cache: pin real dirty during cache updates.
2017-11-10 13:55:12 +01:00
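The pin/unpin accounting the series adds can be modelled with a toy tracker (names and byte-count representation are hypothetical; the real dirty_memory_manager is considerably richer):

```cpp
#include <cassert>
#include <cstddef>

// Toy model of real-dirty accounting: memory is pinned as real dirty
// before a cache update starts and released gradually as partitions are
// moved into the cache, so writers stay throttled until memory is truly
// reclaimable.
class dirty_memory_tracker {
    size_t _real_dirty = 0;
    size_t _limit;
public:
    explicit dirty_memory_tracker(size_t limit) : _limit(limit) {}
    void pin(size_t bytes) { _real_dirty += bytes; }
    void unpin(size_t bytes) { _real_dirty -= bytes; }
    // Writes block while we are at or over the real dirty limit.
    bool should_throttle() const { return _real_dirty >= _limit; }
};
```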
Piotr Jastrzebski
e7a0732f72 Add schema_ptr to flat_mutation_reader
It is useful to have a schema inside a flat reader,
the same way we had a schema inside a streamed_mutation.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <b37e0dbf38810c00bd27fb876b69e1754c16a89f.1510312137.git.piotr@scylladb.com>
2017-11-10 13:54:55 +01:00
Pekka Enberg
0c192c835c cql3: Fix 'DROP INDEX' to also drop index view
This patch fixes 'DROP INDEX' CQL statement to also drop the underlying
index view automatically so that we don't leave unused materialized
views behind.
Message-Id: <1510303421-15945-1-git-send-email-penberg@scylladb.com>
2017-11-10 10:52:08 +01:00
Duarte Nunes
73f6c9a612 Merge seastar upstream
* seastar 8040cab...11ad0b1 (7):
  > alloc_failure_injector: Fix compilation error with gcc 7.1
  > core/gate: Add is_closed() function
  > doc: code formatting and fix function call
  > doc: tutoral code formatting
  > build: adjust -Wno-error=cpp for clang
  > build: don't error out on preprocessor #warning
  > Merge 'Enhancements of allocation failure injector' from Tomasz

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-11-09 14:42:06 +01:00
Takuya ASADA
f607a01cc5 dist/debian: link boost statically
Since we switched to scylla-boost163, which isn't provided by the
distribution repo, we need to link boost statically.

Fixes #2946

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1510229553-29801-1-git-send-email-syuu@scylladb.com>
2017-11-09 14:51:00 +02:00
Glauber Costa
1d7617723d row cache: pin real dirty during cache updates.
Right now, once a region is moved to the cache it is no longer visible to
the dirty memory system, neither as real dirty nor as virtual dirty.

The problem is that until a particular partition is moved to the cache
it is not evictable. As a result we can OOM the system if we have a lot
of pending cache updates as the writes will not be throttled and memory
won't be made available.

This patch pins the memory used by the region as real dirty before the
cache update starts, and unpins it when it is over. In the meantime it
gradually releases memory of the partitions that are being moved to
cache.

I have verified in a couple of workloads that the amount of memory
accounted through this is the same amount of memory accounted through
the memtable flush procedure.

Fixes #1942

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-11-08 19:46:36 -05:00
Glauber Costa
c2f49da609 partition: add method to calculate memory size of a partition
Once that is added, also add a method to a memtable entry to calculate
the entire size of a memtable entry. Right now we only have one method
to calculate the size minus rows.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-11-08 16:21:44 -05:00
Glauber Costa
b02ab991b9 dirty_memory_manager: add functions to manipulate real dirty
There are times in which we want to add and remove real dirty memory
without impacting virtual dirty. One such example is the cache update
process, where real dirty is the limiting factor.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-11-08 16:21:44 -05:00
Glauber Costa
a6b2226562 dirty_memory_manager: block if we hit the real dirty limit
Since we started accounting virtual dirty memory we no longer have a cap
on real dirty memory. In most situations that is not needed, since real
dirty will just be at most twice as much as virtual dirty (current
flushing memtable plus new memtable).

However, due to things like cache updates and component flushing we can
end up having a lot of memtables that are virtually freed but not yet
fully released, leading real dirty memory to explode, using all the box's
memory.

This patch adds a cap on real dirty memory as well. Because of the
hierarchical nature of region_group, if the parent blocks due to memory
depletion, so will the child (virtual dirty region group).

A next step is to add a controller that will increase the priority of
the tasks involved in releasing real dirty memory if we get dangerously
close to the threshold.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-11-08 16:21:44 -05:00
Glauber Costa
b98a48657e memtable: add a method to export memtable's dirty memory manager
It will be used by the cache update process to gradually return real
dirty memory to the manager.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-11-08 16:21:44 -05:00
Glauber Costa
ec36b9eddc memtable: factor out calculation of memtable_entry memory size
The total size is the sum of two components. Add a method that
does that sum so this code gets easier to reuse.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-11-08 16:21:44 -05:00
Glauber Costa
d49ecae201 mutation_partition: estimate size of partition
In the memtable flusher, we account for the size of a partition as we
read them. However, there are other points in the architecture where we
would like to calculate the size of a partition in a point in which we
are not reading it. One such example is the cache update process.

This patch enhances the mutation_partition adding a method that returns
the total size for this partition.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-11-08 16:21:44 -05:00
Glauber Costa
b836005555 row_cache: modernize use of seastar threads
For a while now we have had an async() function that simplifies the code by not
needing to issue an explicit join. This patch converts the row cache to use
async() as well, which most of our code already does. Doing so will make
it easier to make changes to update_cache.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-11-08 16:21:44 -05:00
Paweł Dziepak
b69f94fece Merge "Implement flat_mutation_reader::consume" from Piotr
"Implement flat_mutation_reader::consume and add tests for it.
For that implement flat_mutation_reader_from_mutations and
read_mutation_from_flat_mutation_reader."

* 'haaawk/flat_reader_consume_v3' of github.com:scylladb/seastar-dev:
  Add tests for flat_mutation_reader::consume
  Add tests for flat_mutation_reader utils
  Introduce read_mutation_from_flat_mutation_reader
  Make mutation_rebuilder streamed_mutation independent
  flat_mutation_reader_from_mutation: support multiple mutations
  Introduce flat_mutation_reader::consume
  Move FlattenedConsumer concept to flat_mutation_reader.hh
2017-11-08 15:08:47 +00:00
Paweł Dziepak
0373f357a8 Merge "Make memtable::make_reader return flat_mutation_reader" from Piotr
"This patchset introduces memtable::make_flat_reader that returns
flat_mutation_reader and converts internal memtable readers into
flat_mutation_readers.

It also introduces some utility methods like make_forwardable and
make_partition_snapshot_flat_reader."

* 'haaawk/flat_reader_memtable_v4' of github.com:scylladb/seastar-dev:
  Turn scanning_reader into flat_mutation_reader
  Change memtable_entry::read to return flat_mutation_reader
  Make iterator_reader independent from mutation_reader
  Introduce make_partition_snapshot_flat_reader
  Prepare partition_snapshot_flat_reader
  Introduce flat_mutation_reader_from_mutation
  Prepare flat_mutation_reader_from_mutation
  Introduce make_forwardable
  Prepare make_forwardable for flat_mutation_reader
  Introduce empty_flat_reader
  memtable: Introduce make_flat_reader
2017-11-08 14:24:26 +00:00
Piotr Jastrzebski
29d409de2f Add tests for flat_mutation_reader::consume
Make sure that flat_mutation_reader::consume stops
when asked to by the consumer.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 14:26:10 +01:00
Piotr Jastrzebski
d42e53982d Add tests for flat_mutation_reader utils
Test flat_mutation_reader_from_mutations and
read_mutation_from_flat_mutation_reader.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 14:26:10 +01:00
Piotr Jastrzebski
4b58a05053 Introduce read_mutation_from_flat_mutation_reader
This helper method reads a single mutation from
a flat_mutation_reader.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 14:26:10 +01:00
Piotr Jastrzebski
6718ecab82 Make mutation_rebuilder streamed_mutation independent
mutation_rebuilder will be used not only with streamed_mutations
but also with flat_mutation_readers so it's better for it to be
independent from streamed_mutation.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 14:26:10 +01:00
Piotr Jastrzebski
aa16cd7eef flat_mutation_reader_from_mutation: support multiple mutations
Rename flat_mutation_reader_from_mutation to
flat_mutation_reader_from_mutations.

Make it work with std::vector<mutation> instead of a single
mutation.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 14:26:10 +01:00
Piotr Jastrzebski
bcd5415413 Introduce flat_mutation_reader::consume
This is equivalent to consume_flattened for old
mutation_reader.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 14:25:28 +01:00
Piotr Jastrzebski
9233ee7309 Move FlattenedConsumer concept to flat_mutation_reader.hh
This concept will be used both in flat_mutation_reader.hh
and mutation_reader.hh. mutation_reader.hh includes
flat_mutation_reader.hh so we have to move the concept to
make it accessible in both files.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 14:14:51 +01:00
Piotr Jastrzebski
864d02e795 Turn scanning_reader into flat_mutation_reader
This will make memtable::make_reader more efficient.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 14:08:53 +01:00
Tomasz Grabiec
9e115fb7e2 Merge addition of mutation_source::make_flat_mutation_reader() from Piotr
Make it possible for a mutation_source to be created both for sources
that use old mutation_reader and new flat_mutation_reader.

Add tests for flat_mutation_reader::next_partition to run_mutation_source_tests.

* seastar-dev.git 'dev/haaawk/flat_reader_mutation_source_v3':
  Remove mutation_reader.hh dependency from flat_mutation_reader.hh
  Prepare mutation_source for more than one implementation
  Add flat reader mutation source implementation
  Add mutation_source::make_flat_mutation_reader
  Use mutation_source::make_flat_mutation_reader in tests
  Add flat_mutation_reader_assertions
  Add test for flat_mutation_reader::next_partition
2017-11-08 14:05:25 +01:00
Piotr Jastrzebski
68505a5065 Change memtable_entry::read to return flat_mutation_reader
This is the first step to move scanning_reader to
be flat_mutation_reader.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 13:52:09 +01:00
Piotr Jastrzebski
7b016527bf Make iterator_reader independent from mutation_reader
iterator_reader will be used also in flat_mutation_reader.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 13:52:09 +01:00
Piotr Jastrzebski
f499949645 Introduce make_partition_snapshot_flat_reader
This allows creation of flat_mutation_reader from MVCC.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 13:52:09 +01:00
Piotr Jastrzebski
ced64d7571 Prepare partition_snapshot_flat_reader
This commit creates a copy of partition_snapshot_reader
and names it partition_snapshot_flat_reader.
This new class will be turned into a flat_mutation_reader
in the next commit.

The purpose of this commit is to make it easier to review the next
commit.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 13:52:09 +01:00
Piotr Jastrzebski
ed074a4f56 Introduce flat_mutation_reader_from_mutation
This is a utility method that will be handy in conversion
from mutation_reader to flat_mutation_reader.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 13:52:09 +01:00
Piotr Jastrzebski
c3a4ce842a Prepare flat_mutation_reader_from_mutation
This commit copies streamed_mutation_from_mutation
from streamed_mutation to flat_mutation_reader
and renames it to streamed_mutation_from_mutation_copy.
This copy will be used as a base for
flat_mutation_reader_from_mutation.

The purpose of this commit is to make it easier to review the next
commit.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 13:52:09 +01:00
Piotr Jastrzebski
decefe6eaa Introduce make_forwardable
It will add the ability to fast_forward_to on position_range
to a flat_mutation_reader that does not have this ability.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 13:52:09 +01:00
Piotr Jastrzebski
6da8caf26f Prepare make_forwardable for flat_mutation_reader
This commit copies make_forwardable from streamed_mutation
to flat_mutation_reader and renames it to make_forwardable_copy.
This copy will be used as a base for make_forwardable implementation
for flat_mutation_reader.

The purpose of this commit is to make it easier to review the next
commit.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 13:52:09 +01:00
Piotr Jastrzebski
647dd7f86a Introduce empty_flat_reader
This is an implementation of flat_mutation_reader
that returns nothing.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 13:52:09 +01:00
Piotr Jastrzebski
0a9ab7ff80 memtable: Introduce make_flat_reader
This method creates a flat_mutation_reader
instead of mutation_reader. All users will be gradually
converted to the new interface. make_reader is implemented
using make_flat_reader and will be removed once all users
are migrated.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 13:52:09 +01:00
Piotr Jastrzebski
3661aca7ee Add test for flat_mutation_reader::next_partition
This is added to the run_mutation_source_tests suite.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 12:58:31 +01:00
Piotr Jastrzebski
1c9e4ba04c Add flat_mutation_reader_assertions
This will be useful in tests.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 12:58:31 +01:00
Piotr Jastrzebski
4bca2210bf Use mutation_source::make_flat_mutation_reader in tests
Use the new call in run_conversion_to_mutation_reader_tests.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 12:58:31 +01:00
Piotr Jastrzebski
6efda10790 Add mutation_source::make_flat_mutation_reader
This will be used as an intermediate state of migration
from mutation_reader to flat_mutation_reader.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 12:58:31 +01:00
Piotr Jastrzebski
93e8b43e7b Add flat reader mutation source implementation
This will be used by sources that are migrated to
flat_mutation_reader.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 12:41:12 +01:00
Piotr Jastrzebski
1a7936561e Prepare mutation_source for more than one implementation
There will be a second implementation that will be used by
sources that are converted to flat_mutation_reader.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 12:41:12 +01:00
Piotr Jastrzebski
e80007559b Remove mutation_reader.hh dependency from flat_mutation_reader.hh
It's not needed and causes a cyclic dependency when we need
flat_mutation_reader in mutation_source.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-11-08 12:41:12 +01:00
Duarte Nunes
f50b7c240f tests/view_schema_test: Wrap view queries in eventually()
...instead of wrapping the base table queries, since those will
immediately succeed. This fixes occasional failures.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20171107170221.4309-1-duarte@scylladb.com>
2017-11-08 09:20:43 +02:00
Duarte Nunes
328b908574 tests/view_schema_test: Avoid non-pk restrictions
We don't support non-PK restrictions correctly as explained in commit
3c90607 ("tests/cql_query_test: Fix view creation in
test_duration_restrictions()") and Apache Cassandra doesn't support them
for MVs either. Change some test cases to not rely on them.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20171107165138.3176-1-duarte@scylladb.com>
2017-11-08 09:20:11 +02:00
Duarte Nunes
cb9daec8fd thrift: Preserve query order for some verbs
f44131226a introduced a regression where for some verbs we would
return partitions in their natural sort order, but since thrift
partition ranges can wrap-around, what we need to preserve is query
order.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20171103201118.18175-1-duarte@scylladb.com>
2017-11-07 17:00:48 +00:00
Paweł Dziepak
5a4b46f555 Merge "Fix exception safety related to range tombstones in cache" from Tomasz
Fixes #2938.

* 'tgrabiec/fix-range-tombstone-list-exception-safety-v1' of github.com:scylladb/seastar-dev:
  tests: range_tombstone_list: Add test for exception safety of apply()
  tests: Introduce range_tombstone_list assertions
  cache: Make range tombstone merging exception-safe
  range_tombstone_list: Introduce apply_monotonically()
  range_tombstone_list: Make reverter::erase() exception-safe
  range_tombstone_list: Fix memory leaks in case of bad_alloc
  mutation_partition: Fix abort in case range tombstone copying fails
  managed_bytes: Declare copy constructor as allocation point
  Integrate with allocation failure injection framework
2017-11-07 15:30:52 +00:00
Pekka Enberg
b515cca5e2 tests/view_schema_test: Disable non-PK restriction tests
We don't support non-PK restrictions correctly as explained in commit
3c90607 ("tests/cql_query_test: Fix view creation in
test_duration_restrictions()") and Apache Cassandra doesn't support them
for MVs either. Disable the tests, but don't remove them because they
will be resurrected once CASSANDRA-13832 is fixed.

Message-Id: <1510052422-3478-1-git-send-email-penberg@scylladb.com>
2017-11-07 16:42:00 +02:00
Tomasz Grabiec
0123eb876c tests: range_tombstone_list: Add test for exception safety of apply() 2017-11-07 15:33:24 +01:00
Tomasz Grabiec
dedb8a6a15 tests: Introduce range_tombstone_list assertions 2017-11-07 15:33:24 +01:00
Tomasz Grabiec
bbca83d4c0 cache: Make range tombstone merging exception-safe
range_tombstone_list::apply() has no exception safety guarantees about
the logical state. The target mutation_partition in cache should be
assumed to be left in unspecified state. In particular, some of the
preexisting overlapping tombstones may be removed and not reinserted,
so the cache would be missing some of the range tombstone information
in case the whole allocating section fails.

Use apply_monotonically() which provides the needed guarantees.

Fixes #2938.
2017-11-07 15:33:24 +01:00
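The guarantee apply_monotonically() provides can be modelled with a toy merge: information only ever moves from source to destination, so a failure part-way leaves every element present in exactly one of the two containers, never lost. This sketch merges plain ints, not range tombstones:

```cpp
#include <cassert>
#include <vector>

// Merge src into dst monotonically: each step first copies an element into
// dst (which may throw) and only then removes it from src, so an exception
// mid-way can never lose information.
void apply_monotonically(std::vector<int>& dst, std::vector<int>& src) {
    while (!src.empty()) {
        dst.push_back(src.back()); // may throw; src still holds the element
        src.pop_back();            // only forget it once dst has it
    }
}
```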
Tomasz Grabiec
9c620e0246 range_tombstone_list: Introduce apply_monotonically() 2017-11-07 15:33:24 +01:00
Tomasz Grabiec
2fe53ac617 range_tombstone_list: Make reverter::erase() exception-safe
erase_undo_op() constructor takes ownership of *it, and destroys it
when it goes out of scope. If emplace_back() fails, *it would be
destroyed before being removed from its container (_dst._tombstones).
Fix by making sure _ops.emplace_back() won't fail.
2017-11-07 15:33:24 +01:00
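The "make emplace_back() unable to fail" fix follows a common exception-safety pattern: do the potentially-throwing allocation before taking ownership, so failure leaves the previous state intact. A hypothetical sketch with plain standard-library types:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Undo log that records owned resources without ever losing one.
struct undo_log {
    std::vector<std::unique_ptr<int>> ops;

    void record(std::unique_ptr<int> op) {
        // reserve() may throw, but 'op' is still safely held by the
        // unique_ptr parameter, so nothing leaks or is double-freed.
        ops.reserve(ops.size() + 1);
        ops.emplace_back(std::move(op)); // cannot throw: capacity reserved
    }
};
```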
Tomasz Grabiec
6190f9fc63 range_tombstone_list: Fix memory leaks in case of bad_alloc
If insert() fails, the allocated range_tombstone would not be freed.
Use alloc_strategy_unique_ptr.
2017-11-07 15:33:24 +01:00
Tomasz Grabiec
ca3e72266f mutation_partition: Fix abort in case range tombstone copying fails
If an exception is thrown from _row_tombstones.apply(), _rows will be
left uncleared. This will trigger an assertion in the bi::set_member_hook
destructor, which asserts that the hook is not linked.

Always clear _rows.
2017-11-07 15:33:24 +01:00
Tomasz Grabiec
5348d9f596 managed_bytes: Declare copy constructor as allocation point
Because of the small size optimization, not all copies will call the
allocator, so allocation failure injection may miss this site if the
value is not large enough. Make the testing more effective by marking
this place explicitly as an allocation point.
2017-11-07 15:33:24 +01:00
Tomasz Grabiec
34ccf234ea Integrate with allocation failure injection framework 2017-11-07 15:33:24 +01:00
Tomasz Grabiec
2d3f3ab2b8 Update seastar submodule
* seastar d71922c...8040cab (4):
  > util: Introduce support for allocation failure injection
  > Adding dpdk-port-index as a command line option with default value of 0
  > core/sharded: Introduce invoke_on_others()
  > noncopyable_function: improve support for capturing mutable lambdas
2017-11-07 15:29:45 +01:00
Paweł Dziepak
80cfcc357f Merge "Config fixes" from Calle
"Fixes #2933

Fixes regressions introduced by config restructuring.
Allows "base" config to handle errors by warning, while
other uses can opt otherwise."

[pdziepak: resolved merge conflict]

* 'calle/cfgfix' of github.com:scylladb/seastar-dev:
  config_test: Use error handler (ignore errors) + add error test
  config: Resurrect command line aliases that were lost
  main: Use error handler for config parse
  config_file: Add optional "error_handler" to yaml parse functions
2017-11-06 11:40:37 +00:00
Amos Kong
76ab8bf292 scylla_setup: parse --no-ec2-check option
The option was introduced by commit e645b0f ("dist/common/scripts: move
EC2 configuration verification to 'scylla_ec2_check'"), but the option
was never actually parsed.

Fixes #2934

Signed-off-by: Amos Kong <amos@scylladb.com>
Acked-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <df21356e528c6e161f73f4408a201fedef8f52d9.1509744954.git.amos@scylladb.com>
2017-11-06 12:06:54 +02:00
Calle Wilund
21b2c2e310 config_test: Use error handler (ignore errors) + add error test
Fixes #2933

Uses handler on main test, ignoring the invalid option present.
Also adds test to verify error handling works as expected.
2017-11-06 09:58:16 +00:00
Calle Wilund
959d729428 config: Resurrect command line aliases that were lost 2017-11-06 09:54:46 +00:00
Calle Wilund
f1dd698600 main: Use error handler for config parse
Treat all errors as loggable errors/warnings. Preserving previous
behaviour.
2017-11-06 09:54:09 +00:00
Calle Wilund
287b6fd8bd config_file: Add optional "error_handler" to yaml parse functions
Allowing parse errors / unknown options to be ignored.
2017-11-06 09:53:05 +00:00
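The shape of such an optional error handler can be sketched with a Python stand-in (not the actual config_file API): the parser reports each problem through a callback, and the caller decides whether to raise, warn, or ignore.

```python
def parse_options(pairs, known, error_handler=None):
    """Parse key=value pairs; unknown keys go through error_handler.

    With no handler, an unknown option raises; a handler can downgrade
    it to a warning (as main() does) or swallow it entirely (as the
    config test does).
    """
    result = {}
    for key, value in pairs:
        if key not in known:
            if error_handler is None:
                raise ValueError(f"unknown option: {key}")
            error_handler(key, value)   # caller-chosen policy
            continue
        result[key] = value
    return result

warnings = []
conf = parse_options(
    [("cluster_name", "test"), ("bogus_option", "1")],
    known={"cluster_name"},
    error_handler=lambda k, v: warnings.append(k),
)
assert conf == {"cluster_name": "test"}
assert warnings == ["bogus_option"]
```

The default-raising behaviour keeps strict uses strict, while the "base" config path can install a logging handler and keep going.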
Amos Kong
d9f16cf23c trivial: fix a typo in warning message
> std::invalid_argument: Option memtable_allocation_typeis not applicable

Signed-off-by: Amos Kong <amos@scylladb.com>
Message-Id: <37d293a4eadfb8a58acaf96f80b1d2e943530c6b.1509947604.git.amos@scylladb.com>
2017-11-06 09:41:07 +02:00
Duarte Nunes
d8e0b47e75 Merge 'CQL secondary index queries' from Pekka
"This patch series adds support for secondary index queries using the
backing index view that's created when CREATE INDEX statement is
executed.

Example:

  -- Create keyspace and table:

  CREATE KEYSPACE ks WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor' : 1};

  CREATE TABLE ks.users (
    userid uuid,
    name text,
    email text,
    country text,
    PRIMARY KEY (userid)
  );

  -- Create secondary indexes:

  CREATE INDEX ON ks.users (email);

  CREATE INDEX ON ks.users (country);

  -- Insert some data:

  INSERT INTO ks.users (userid, name, email, country) VALUES (uuid(), 'Bondie Easseby', 'beassebyv@house.gov', 'France');

  INSERT INTO ks.users (userid, name, email, country) VALUES (uuid(), 'Demetri Curror', 'dcurrorw@techcrunch.com', 'France');

  INSERT INTO ks.users (userid, name, email, country) VALUES (uuid(), 'Langston Paulisch', 'lpaulischm@reverbnation.com', 'United States');

  INSERT INTO ks.users (userid, name, email, country) VALUES (uuid(), 'Channa Devote', 'cdevote14@marriott.com', 'Denmark');

  -- Query on the secondary-index backed non-primary keys:

  SELECT * FROM ks.users WHERE email = 'beassebyv@house.gov';

   userid                               | country | email               | name
  --------------------------------------+---------+---------------------+----------------
   022238c8-5213-44b5-959e-4e3e1b032f85 | France  | beassebyv@house.gov | Bondie Easseby

  (1 rows)

  SELECT * FROM ks.users WHERE country = 'France';

   userid                               | country | email                   | name
  --------------------------------------+---------+-------------------------+----------------
   2152d85a-61f6-4eab-af4d-e7e7d0872319 |  France |     beassebyv@house.gov | Bondie Easseby
   59fddb6d-bfc9-4636-a9a0-85383fd815ee |  France | dcurrorw@techcrunch.com | Demetri Curror

Known limitations:

- Only regular column indexes return results. Indexing primary key
  components like clustering keys return empty result set because of
  index view query partition key serialization issues that will be fixed
  in subsequent patches.

- Secondary index queries are not paginated, which can cause problems
  for queries that return a large number of rows.

- Multiple restrictions don't work correctly if one of them is backed by
  a secondary-index.

- Only one secondary-indexed restriction per query is supported -- other
  restrictions are ignored.

- Compound partition keys are not supported.

- ALLOW FILTERING on non-primary key columns does not work correctly
  without secondary index (see issue #2200)."

* 'penberg/cql-2i-queries/v2' of github.com:penberg/scylla:
  tests/cql_query_test: Add test case for secondary index queries
  cql3: Secondary-index backed select statements
  index: Fix index view schema when primary key component is indexed
  tests/cql_query_test: Fix view creation in test_duration_restrictions()
  cql3/restrictions: Add statement_restrictions::index_restrictions() helper
  index: Implement index::supports_expression() for EQ operator
  cql3: Make operator_type class non-copyable
  index: Fix index::supports_expression() operator parameter type
  cql3: Implement statement_restriction index validation
2017-11-04 01:51:55 +01:00
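The query flow the series implements can be modeled abstractly: the restriction on the indexed column becomes a partition-key lookup on the backing index view, and the base-table primary keys it yields drive the real read. A simplified Python model (toy dict-based tables, not Scylla's data structures):

```python
# Toy model: base table keyed by userid; the index view maps an indexed
# column's value to the base-table primary keys holding that value.
base = {
    "u1": {"email": "a@x", "country": "France"},
    "u2": {"email": "b@x", "country": "France"},
    "u3": {"email": "c@x", "country": "Denmark"},
}

def build_index(table, column):
    """Materialize an index view for one column (what CREATE INDEX backs)."""
    view = {}
    for pk, row in table.items():
        view.setdefault(row[column], []).append(pk)
    return view

country_idx = build_index(base, "country")

def indexed_select(table, view, value):
    # Step 1: partition-key lookup on the index view yields base keys.
    # Step 2: fetch the matching base-table rows by primary key.
    return [dict(table[pk], userid=pk) for pk in view.get(value, [])]

rows = indexed_select(base, country_idx, "France")
assert {r["userid"] for r in rows} == {"u1", "u2"}
```

This mirrors why an indexed EQ restriction is efficient (two key lookups) while the listed limitations (pagination, multiple restrictions) concern how the two steps compose.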
Tomasz Grabiec
2e96069f2f tests: perf_cache_eviction: Switch to time-series like workload
Before the patch we appended and queried at the front. Insert at the
front instead, so that writes and reads overlap. Stresses eviction and
population more.
Message-Id: <1506369562-14892-1-git-send-email-tgrabiec@scylladb.com>
2017-11-03 13:45:41 +00:00
Tomasz Grabiec
92e3449d59 mutation_reader: Do not call fast_forward_to() on a reference to a capture
The range reference is supposed to be valid as long as the reader is
used, not just around fast_forward_to().

Introduced in a6b9186cab
Message-Id: <1509710642-12713-1-git-send-email-tgrabiec@scylladb.com>
2017-11-03 12:09:42 +00:00
Amos Kong
4762326f35 dist/redhat: Fix dependence version issue
The spec file requires two different versions, which causes a conflict.

The problem was introduced in commit
6893ad46b8 ("dist/redhat: Switch to
g++-7/boost-1.63 on CentOS7").

Fixes #2931

Signed-off-by: Amos Kong <amos@scylladb.com>
Message-Id: <f0d74c4ae0d325d7e2bd827f56a36330b9ef19eb.1509703504.git.amos@scylladb.com>
2017-11-03 13:40:22 +02:00
Paweł Dziepak
0de651d617 Merge "Mark whole query range continuous in cache" from Tomasz
"We currently can't insert row entries at any position_in_partition,
but only at full keys and after all keys. If a query range has bounds
such that we have to insert a dummy entry at non-representable position
then information about range continuity will not be fully populated.
In particular, single-row queries of a row which is not present in sstables
will miss when repeated again.

The series fixes the problem by marking the whole query range as continuous
by inserting dummy entries at boundaries when necessary.

Refs #2579."

* tag 'tgrabiec/cache-range-continuity-v2' of github.com:scylladb/seastar-dev:
  tests: row_cache: Add test for population of single rows
  tests: Add test for population of continuity
  tests: mutation_reader_assertions: Introduce produces_compacted()
  mutation: Introduce apply(mutation_fragment)
  cache: Document invariants of cache_streamed_mutation::_lower_bound
  cache_streamed_mutation: Special-case population for singular ranges
  query: Introduce is_single_row()
  cache_streamed_mutation: Increment mispopulation counter when can't populate due to eviction
  cache_streamed_mutation: Override continuity of older versions when populating
  cache_streamed_mutation: Mark whole query range as continuous
  tests: cache_streamed_mutation: Allow creating expected_row at any position_in_partition
  cache_streamed_mutation: Populate continuity when range adjacent to non-latest version rows
  cache_streamed_mutation: Avoid lookup in maybe_add_to_cache() in more cases
  row_cache: Make read_context::key() valid before reading from underlying starts
  mutation_partition: Allow creating rows_entry at any clustered position_in_partition
  position_in_partition: Do not use -2 and +2 weights
  clustering_ranges_walker: Make contains() drop range tombstones adjacent to query range
  mutation_partition: Remove delegating_compare()
  mvcc: Print iterators in operator<< for partition_snapshot_row_cursor
  mvcc: Introduce partition_snapshot_row_weakref
  mvcc: Make the null state of partition_snapshot::change_mark explicit
  mvcc: Add partition_snapshot::region() getter
  mvcc: Add partition_snapshot::schema() getter
  position_in_partition: Introduce before_key()
  position_in_partition: Introduce min()
  position_in_partition: Introduce for_static_row()
2017-11-03 11:05:49 +00:00
Paweł Dziepak
ab12981491 test.py: make sure that tests/memory_footprint is being run
While not being a real unit tests memory_footprint can be a quite useful
tool and running it among other tests will ensure that we will notice
when it gets broken.
Message-Id: <20171102160233.6756-2-pdziepak@scylladb.com>
2017-11-03 11:46:30 +01:00
Paweł Dziepak
4cda3170d6 tests/memory_footprint: do not create two cache instances
When created cache registers several metrics, since attempts to create
an already existing metrics result in an exception being thrown it is no
longer possible to have two cache instances at the same time. This is
exactly what happens in memory_footprint: one (useless) cache object is
created through a call to do_with_cql_env() and, then, memory_footprint
explicitly creates another one (not a useless one).

The test itself doesn't really need a full cql environment and the only
reason it was added is so that storage_service is initialised and various
code paths can query for the available cluster features. This can be
done in a much more lightweight way using storage_service_for_tests.

Fixes memory_footprint failure (until next time we decide there is
nothing wrong with globals).

Message-Id: <20171102160233.6756-1-pdziepak@scylladb.com>
2017-11-03 11:46:30 +01:00
Amos Kong
f2ff431b75 dist/redhat: Fix baseurl of 3rdparty repo
$basearch isn't parsed as expected, so the final baseurl is wrong.
We only have the x86_64 arch in the external 3rdparty repository, and
the conf file is only for x86_64, so it's fine to hardcode
x86_64.

The problem was introduced by commit
b5e83ebd94 ("dist/redhat: switch 3rdparty
packages to external build service").

Fixes #2930

Signed-off-by: Amos Kong <amos@scylladb.com>
Message-Id: <708f46a7c36623e86fee278462c80db1eff3b820.1509700430.git.amos@scylladb.com>
2017-11-03 11:51:34 +02:00
Pekka Enberg
aeea83172b tests/cql_query_test: Add test case for secondary index queries 2017-11-03 10:12:58 +02:00
Pekka Enberg
9048f741ad cql3: Secondary-index backed select statements
This patch adds support for secondary-index backed select statements.
Current select_statement class is split into two separate classes:
primary_key_select_statement that retains regular query behavior and
indexed_table_select_statement that introduces the new secondary-index
backed query logic. One of the two behaviors is selected at query
preparation time to minimize overhead for non-indexed queries.
2017-11-03 10:12:58 +02:00
Pekka Enberg
3150962cb7 index: Fix index view schema when primary key component is indexed
This fixes index view schema to exclude indexed column when a primary
key component like clustering key is indexed. This fixes a server crash
when CREATE INDEX statement is executed on a clustering key column.
2017-11-03 10:12:58 +02:00
Pekka Enberg
3c90607988 tests/cql_query_test: Fix view creation in test_duration_restrictions()
The materialized view created in test_duration_restriction() restricts
on a non-PK column. Since Scylla's ALLOW FILTERING and secondary index
validation path is broken, once we start to do secondary index queries,
query processor thinks there's a secondary index backing that non-PK
column and fails because it's unable to find such column.

Fix up the view to only trigger the duration type validation error we're
interested in here.
2017-11-03 10:12:58 +02:00
Pekka Enberg
c243a0c8fc cql3/restrictions: Add statement_restrictions::index_restrictions() helper 2017-11-03 09:10:43 +02:00
Pekka Enberg
678a6f6e2f index: Implement index::supports_expression() for EQ operator 2017-11-03 09:10:43 +02:00
Pekka Enberg
04b482146c cql3: Make operator_type class non-copyable
The operator_type class is really an enumeration, which is not supposed
to be copied.
2017-11-03 09:10:43 +02:00
Pekka Enberg
1ae9343f68 index: Fix index::supports_expression() operator parameter type
The cql3::operator_type is supposed to be passed around as const
reference, not by value; otherwise equality won't work.
2017-11-03 09:10:43 +02:00
Pekka Enberg
3e3c580f74 cql3: Implement statement_restriction index validation 2017-11-03 09:10:43 +02:00
Botond Dénes
ce03a4d2c7 test.py: print failed test summary if there are failed tests
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <3a913b111552276ab94dfb83738244699550f929.1507894597.git.bdenes@scylladb.com>
2017-11-02 11:49:14 +00:00
Tomasz Grabiec
16d6222a96 tests: row_cache: Add test for population of single rows 2017-11-02 12:16:17 +01:00
Tomasz Grabiec
bbf8ccb709 tests: Add test for population of continuity 2017-11-02 12:16:17 +01:00
Tomasz Grabiec
3ad5666098 tests: mutation_reader_assertions: Introduce produces_compacted() 2017-11-02 12:16:17 +01:00
Tomasz Grabiec
749f5770df mutation: Introduce apply(mutation_fragment) 2017-11-02 12:16:17 +01:00
Tomasz Grabiec
a76202df4f cache: Document invariants of cache_streamed_mutation::_lower_bound
(cherry picked from commit b52813279d30782270ac83856233f18787b28b7e)
2017-11-02 12:16:17 +01:00
Tomasz Grabiec
328faf695e cache_streamed_mutation: Special-case population for singular ranges
This is an optimization which avoids creating dummy entries around row
entry when populating a singular range.
2017-11-02 12:16:09 +01:00
Tomasz Grabiec
90796893ee query: Introduce is_single_row() 2017-11-02 11:05:19 +01:00
Tomasz Grabiec
0fd57cdff5 cache_streamed_mutation: Increment mispopulation counter when can't populate due to eviction 2017-11-02 11:05:19 +01:00
Tomasz Grabiec
8c41a3eb43 cache_streamed_mutation: Override continuity of older versions when populating
Fixes the case of continuity not being populated when the row which is
the upper bound of the population range belongs to a non-latest
version. In such case we wouldn't mark the range as continuous,
because we can't modify rows of non-latest versions. To fix this,
create an empty entry in latest version which will just override the
continuity flag of the old entry.
2017-11-02 11:05:19 +01:00
Tomasz Grabiec
65ed490e1c cache_streamed_mutation: Mark whole query range as continuous
Before this patch only ranges between returned row fragments were
marked as continuous.  In the extreme case, there could be no such
fragments, in which case next read would miss as well. To avoid this,
mark whole query range as continuous by inserting dummy entries when
necessary.

Refs #2579.
2017-11-02 11:05:19 +01:00
Tomasz Grabiec
552d7a683a tests: cache_streamed_mutation: Allow creating expected_row at any position_in_partition 2017-11-02 11:05:19 +01:00
Tomasz Grabiec
d4928eb1b7 cache_streamed_mutation: Populate continuity when range adjacent to non-latest version rows
Current code will not mark the range as continuous if the previous
entry does not come in the latest version. Fix that by switching
to partition_snapshot_row_pointer, which is capable of checking
in older versions as necessary.

Also, we avoid the key comparison if we know that the iterator
is still valid.
2017-11-02 11:05:19 +01:00
Tomasz Grabiec
835d17ee37 cache_streamed_mutation: Avoid lookup in maybe_add_to_cache() in more cases 2017-11-02 11:05:19 +01:00
Tomasz Grabiec
af4a9a4a30 row_cache: Make read_context::key() valid before reading from underlying starts
So that we can call cache_streamed_mutation::can_populate() before
we start reading from underlying. Will be needed in upcoming changes
which insert dummy entries when falling back to underlying.
2017-11-02 11:05:19 +01:00
Tomasz Grabiec
72028bb048 mutation_partition: Allow creating rows_entry at any clustered position_in_partition
In preparation for supporting setting continuity of arbitrary clustering range.
2017-11-02 11:05:19 +01:00
Tomasz Grabiec
9ac1b515e1 position_in_partition: Do not use -2 and +2 weights
::weight() is using those values for excl_end and excl_start in order
to be able to represent non-overlapping ranges. In their model the end
bound is inclusive. We don't need this, since position_range has end
bound exclusive.

This change makes it so that:

  position_in_partition::after_key(y)
    == position_in_partition::for_range_end(clustering_range::make({x}, {y}))
2017-11-02 11:05:19 +01:00
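The weight scheme can be checked with a tiny model: a position is a (key, weight) pair with weight -1 for "just before key", 0 for "at key", +1 for "just after key", ordered lexicographically. With position_range's end bound exclusive, after_key(y) and the end position of an inclusive-end range ending at y coincide, which is the equivalence stated in the commit message. A sketch, not the actual class:

```python
def before_key(k):  return (k, -1)   # sorts just before any row with key k
def at_key(k):      return (k, 0)
def after_key(k):   return (k, +1)   # sorts just after any row with key k

def for_range_start(start_key):      # inclusive start bound
    return before_key(start_key)

def for_range_end(end_key):          # inclusive end bound of the clustering
    return after_key(end_key)        # range; position_range is end-exclusive

# Ordering works with plain tuple comparison, no -2/+2 weights needed:
assert before_key(5) < at_key(5) < after_key(5) < before_key(6)
# The equivalence from the commit message:
assert after_key(7) == for_range_end(7)
```

Dropping the -2/+2 weights works precisely because exclusive end bounds make "just after key y" sufficient to terminate a range.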
Tomasz Grabiec
4b25fa1130 clustering_ranges_walker: Make contains() drop range tombstones adjacent to query range
position_range is end-exclusive. The reader might have returned a tombstone
which is not really relevant for the range.
2017-11-02 11:05:19 +01:00
Tomasz Grabiec
409adc045a mutation_partition: Remove delegating_compare()
It can't work with rows_entry at any position_in_partition,
so we need to drop it.
2017-11-02 11:05:19 +01:00
Tomasz Grabiec
b4954f55b9 mvcc: Print iterators in operator<< for partition_snapshot_row_cursor 2017-11-02 11:05:19 +01:00
Tomasz Grabiec
ad156b5986 mvcc: Introduce partition_snapshot_row_weakref 2017-11-02 11:05:19 +01:00
Tomasz Grabiec
967cabcaf2 mvcc: Make the null state of partition_snapshot::change_mark explicit 2017-11-02 11:05:19 +01:00
Tomasz Grabiec
4b7933543d mvcc: Add partition_snapshot::region() getter 2017-11-02 11:05:19 +01:00
Tomasz Grabiec
9cf30f19ae mvcc: Add partition_snapshot::schema() getter 2017-11-02 11:05:19 +01:00
Tomasz Grabiec
34cb13939f position_in_partition: Introduce before_key() 2017-11-02 11:05:19 +01:00
Tomasz Grabiec
cc06c328ef position_in_partition: Introduce min() 2017-11-02 11:05:19 +01:00
Tomasz Grabiec
d05f130b09 position_in_partition: Introduce for_static_row() 2017-11-02 11:05:19 +01:00
Calle Wilund
8c257c40b4 storage_service: Only replicate token metadata iff modified in on_change
Fixes #2869

Message-Id: <20171101105629.22104-1-calle@scylladb.com>
2017-11-01 14:56:55 +02:00
Jesse Haber-Kucharsky
da5c486e49 Add coding-style.md referencing Seastar
While it would be nice if we could reference the file corresponding to
the exact version of Seastar pinned as a Scylla submodule, GitHub does
not support this.

Signed-off-by: Jesse Haber-Kucharsky <jhaberku@scylladb.com>
Message-Id: <b7565acd1ccb8bce42b0edf00221922e78e1c9ef.1508274655.git.jhaberku@scylladb.com>
2017-10-30 16:52:29 -07:00
Duarte Nunes
74a4cf8bb1 thrift/handler: multiget_{slice, count} always returns queried keys
This patch changes the way the multiget_{slice, count} verbs return
their results, by ensuring a queried key that produced no results is
still present in the returned map, associated with an empty list.

This is not required by the thrift interface, and it is a performance
step back, but matches the behavior of Apache Cassandra.

Said behavior is relied upon by projects like JanusGraph, whose
integration with Scylla motivated this patch.

Fixes #2900

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20171019161104.22797-2-duarte@scylladb.com>
2017-10-30 16:48:58 -07:00
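The behavioral change can be illustrated with a small sketch: every queried key appears in the result map, mapped to an empty list when it produced no rows (a Python stand-in for the thrift handler's aggregation, not the actual code):

```python
def multiget_slice(storage, keys):
    """Return a result entry for every queried key, even when empty.

    Matching Apache Cassandra: a key with no data maps to [], rather
    than being absent from the result map, so callers like JanusGraph
    can iterate the result without checking key membership.
    """
    return {key: list(storage.get(key, [])) for key in keys}

storage = {"k1": ["col_a", "col_b"]}
result = multiget_slice(storage, ["k1", "k2"])
assert result == {"k1": ["col_a", "col_b"], "k2": []}
assert set(result) == {"k1", "k2"}  # queried keys always present
```

The performance cost mentioned above comes from materializing entries for keys that contributed nothing to the query.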
Duarte Nunes
f44131226a thrift/handler: Use map for column_visitor aggregation
Most common operations, like multiget_count and multiget_slice, return
maps. So, instead of keeping a vector internally in column_visitor
that we later transform into a map, keep a map that we transform into
a vector for the uncommon operations.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20171019161104.22797-1-duarte@scylladb.com>
2017-10-30 16:48:55 -07:00
Takuya ASADA
a56dd79d69 dist/redhat: moving gcc-7.1 to gcc-7.2
We mistakenly merged the patch which compiles Scylla using gcc-7.1; it needs
to be fixed to the correct version (gcc-7.2).

Message-Id: <1508875618-31659-1-git-send-email-syuu@scylladb.com>
2017-10-30 14:26:43 -07:00
Tomasz Grabiec
b4e3c0946a cache_streamed_mutation: Avoid copy of decorated_key
Message-Id: <1509060503-17483-1-git-send-email-tgrabiec@scylladb.com>
2017-10-26 16:51:27 -07:00
Pekka Enberg
9ba3920fd7 Merge "cql3/query_processor: Clean-up" from Jesse
"This series cleans-up the query processor header and source file,
 including deleting dead Java code.

 There are no functional or interface changes.

 I've run all unit tests and observed no failures."

* 'jhk/clean_up_qp/v2' of github.com:hakuch/scylla:
  cql3/query_processor: Fix formatting
  cql3/query_processor: Organize headers
  cql3/query_processor.hh: Consolidate `public` and `private` sections
  cql3/query_processor: Remove dead Java code
2017-10-21 21:48:06 +03:00
Jesse Haber-Kucharsky
66c4abe4fb cql3/query_processor: Fix formatting
Lines are now less than 120 columns and formatting conforms to the Seastar coding standards document.
2017-10-21 13:53:03 -04:00
Jesse Haber-Kucharsky
edb83c0014 cql3/query_processor: Organize headers 2017-10-21 13:53:03 -04:00
Jesse Haber-Kucharsky
ed6a3179a1 cql3/query_processor.hh: Consolidate public and private sections 2017-10-21 13:53:03 -04:00
Jesse Haber-Kucharsky
50cfa8a7b8 cql3/query_processor: Remove dead Java code 2017-10-21 13:53:03 -04:00
Avi Kivity
ef8587a910 Merge seastar upstream
* seastar 8babd1f...d71922c (11):
  > configure.py: add -Wno-sign-compare to compile Boost.Test with gcc-7
  > log: Print nested exceptions
  > reactor: do not account non idle activity for total idle time calculation
  > execution_stage: defer execution less aggressively
  > Fix -Wreturn-type warnings
  > cpu scheduler: make _reciprocal_shares_times_2_32 wider to avoid overflow problems
  > noncopyable_function add bool operator
  > execution_stage: make make_execution_stage return a named type
  > memory: support overriding the default allocator page size
  > memory: fix crash during startup with large page_size
  > core: io_destroy is missing when destructing reactor, which causes io_context leak
2017-10-21 16:38:32 +03:00
Takuya ASADA
bc76d34e34 dist/debian: handle python scripts correctly on package builder
We are failing to build the .deb package on pbuilder due to a lack of
build-time dependencies, so we need to add those packages to Build-Depends.
We also need to follow the Debian packaging style for a package that
contains python scripts.

Fixes #2918

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1508457215-11552-1-git-send-email-syuu@scylladb.com>
2017-10-21 16:38:20 +03:00
Takuya ASADA
6893ad46b8 dist/redhat: Switch to g++-7/boost-1.63 on CentOS7
Switch to g++-7/boost-1.63 on CentOS7, too.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1508457509-13122-2-git-send-email-syuu@scylladb.com>
2017-10-21 12:28:40 +03:00
Takuya ASADA
7f38634080 dist/debian: Switch to g++-7/boost-1.63 on Ubuntu 14.04/16.04
Switch to g++-7/boost-1.63 for Ubuntu 14.04/16.04 that newly provided via
our 3rdparty PPA.

To make Scylla compilable with boost-1.63/g++-7, we need to disable following
warnings:
 - misleading-indentation
 - overflow
 - noexcept-type
Compile error message:
https://gist.github.com/syuu1228/96acc640c56c3316df5ce6911d60beea

Seastar also has a similar problem; it needs to disable 'sign-compare',
details are in a patch for Seastar.

This update also fixes the current Ubuntu 14.04/16.04 compilation errors,
since they came from a too-old g++/boost.

Fixes #2902
Fixes #2903

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1508457509-13122-1-git-send-email-syuu@scylladb.com>
2017-10-21 12:28:38 +03:00
Avi Kivity
d6cd44a725 Revert "Merge 'Single key sstable reader optimization' from Botond"
This reverts commit 5e9cd128ad, reversing
changes made to 1f4e6759a7. Tomek found
some serious issues.
2017-10-19 12:47:21 +03:00
Botond Dénes
9bd4d7cbb2 Re-add execute permission to configure.py (removed by 05db87e06)
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <41d505277fe29b275aba65477cafc9275b393a64.1508395961.git.bdenes@scylladb.com>
2017-10-19 11:30:06 +03:00
Duarte Nunes
5e9cd128ad Merge 'Single key sstable reader optimization' from Botond
"When reading a single row it is possible that the read will be satisfied
by just reading from one of the data source candidates. To exploit this
an optimization is employed which sorts data source candidates by their
timestamp and reads mutations from the most recent to the oldest. When
all needed cells are present and their earliest timestamp is still
later than the latest one of the remaining data source the read can be
terminated early.
However, this optimization can also backfire: since the data sources
are read sequentially, if all of them have to be read eventually then
we will end up worse off than without it.
Thus the optimization can be disabled up-front or enabled to only run
until its efficiency degrades below a certain threshold.
Also counters are added to column-families to make it possible to
observe how well it performs.

Benchmarking

Benchmarking was done with disabled cache and at a constant op rate of
4k (1/3 of the max op rate on my box), against 3 sstables containing the
same 10000 rows.

1) Optimization turned off (all sstables read in parallel)
latency mean              : 1.3 [simple:1.3]
latency median            : 1.0 [simple:1.0]
latency 95th percentile   : 2.4 [simple:2.4]
latency 99th percentile   : 2.9 [simple:2.9]
latency 99.9th percentile : 8.0 [simple:8.0]
latency max               : 13.5 [simple:13.5]

2) Optimization turned on, best case (1 of 3 sstables read)
latency mean              : 0.6 [simple:0.6]
latency median            : 0.6 [simple:0.6]
latency 95th percentile   : 1.0 [simple:1.0]
latency 99th percentile   : 1.2 [simple:1.2]
latency 99.9th percentile : 4.4 [simple:4.4]
latency max               : 13.4 [simple:13.4]

3) Optimization turned on, best case, IN query (1 of 3 sstables read)
latency mean              : 0.7 [simple_in:0.7]
latency median            : 0.6 [simple_in:0.6]
latency 95th percentile   : 1.1 [simple_in:1.1]
latency 99th percentile   : 1.4 [simple_in:1.4]
latency 99.9th percentile : 5.4 [simple_in:5.4]
latency max               : 16.8 [simple_in:16.8]

4) Optimization turned on, worst case (3 of 3 sstables read sequentially)
latency mean              : 2.8 [simple:2.8]
latency median            : 2.3 [simple:2.3]
latency 95th percentile   : 5.4 [simple:5.4]
latency 99th percentile   : 6.5 [simple:6.5]
latency 99.9th percentile : 13.5 [simple:13.5]
latency max               : 19.2 [simple:19.2]

5) Optimization turned on, mid case (2 of 3 sstables read sequentially)
latency mean              : 1.4 [simple:1.4]
latency median            : 1.1 [simple:1.1]
latency 95th percentile   : 2.7 [simple:2.7]
latency 99th percentile   : 3.2 [simple:3.2]
latency 99.9th percentile : 7.7 [simple:7.7]
latency max               : 15.1 [simple:15.1]"

Ref #324

* 'bdenes/optimize_single_row_read_v6' of github.com:denesb/scylla:
  Add unit tests for single_key_sstable_reader
  Add counters for the single-key reader optimization
  Add single_key_parallel_scan_threshold option
  single_key_sstable_reader: optimize single-row queries
  single_key_sstable_reader: move reading code into it's own method
  Add selects_only_full_rows() and selects_only_full_rows_with_atomic_columns()
2017-10-18 16:38:53 +01:00
Duarte Nunes
1f4e6759a7 tests: Fix compile errors introduced in c468e5981
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1508337315-8224-1-git-send-email-duarte@scylladb.com>
2017-10-18 16:38:18 +01:00
Botond Dénes
c3bd89ad63 Add unit tests for single_key_sstable_reader 2017-10-18 17:24:03 +03:00
Botond Dénes
dfe312ca3a Add counters for the single-key reader optimization
Add two counters, one to determine how many of the reads fall into the
optimization, and a second one to determine its effectiveness.

The first one is single_key_reader_optimization_hit_rate. It contains
the rate of reads that the optimization applies to out of all the reads
that go into the single_key_sstable_reader.

The second one, single_key_reader_optimization_extra_read_proportion is
a histogram of the efficiency of the optimization. It contains the
proportion of extra data-sources read. It's a number between 0 and 1,
where 0 is the best case (only one data-source was read) and 1 is the
worst case (all data-sources were read eventually). This is the same
number that is used for the threshold option (see previous patch).
Each of the histogram's buckets cover a chunk of 0.1 from the [0, 1]
range.
Note that single_key_parallel_scan_threshold effectively provides an
upper bound for the proportion as the optimization is turned off as soon
as it goes above that number.

The counters are disabled if single_key_parallel_scan_threshold is set
to 0 disabling the optimization entirely.
2017-10-18 17:24:03 +03:00
Botond Dénes
08502f2d48 Add single_key_parallel_scan_threshold option
This option regulates when exactly the single-key optimization is
considered ineffective and turned off.
The threshold is the proportion of the extra data source candidates that
can be read before the optimization is considered ineffective and
disabled. The proportion is calculated as follows:
    (read_data_sources - 1) / (total_data_sources - 1)

We subtract 1 from read_data_sources and total_data_sources to
effectively measure the rate of *extra* data sources we read. This
makes sure that the proportion is meaningful even if, e.g., we only
have a total of 2 data-sources and we read only 1 (best case).

Whenever this number goes above the threshold the optimization is
disabled. The threshold is a number between 0 and 1; 0 forces the
optimization off and 1 forces it on. Increase the threshold to favor
throughput over latency for single-row reads, decrease it to
improve latency at the expense of throughput.

If the threshold is > 0 (it's not force disabled) and the optimization
is disabled due to a read crossing the threshold, we will issue
"probing" reads (every 100th read) to determine if the optimization is
worth re-enabling. Probing reads are allowed to run through the
optimization path and if they go below the threshold the optimization is
re-enabled.
2017-10-18 17:24:03 +03:00
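The proportion and the resulting decision can be written out directly (a sketch of the arithmetic described above, with illustrative function names, not the actual implementation):

```python
def extra_read_proportion(read_sources, total_sources):
    """(read - 1) / (total - 1): the rate of *extra* data sources read.

    0.0 is the best case (a single source answered the read) and 1.0 the
    worst case (every source had to be read sequentially anyway).
    """
    if total_sources <= 1:
        return 0.0
    return (read_sources - 1) / (total_sources - 1)

def optimization_stays_enabled(read_sources, total_sources, threshold):
    # threshold 0 force-disables the optimization, 1 forces it on
    return extra_read_proportion(read_sources, total_sources) <= threshold

assert extra_read_proportion(1, 2) == 0.0   # 1 of 2 read: best case
assert extra_read_proportion(3, 3) == 1.0   # all 3 read: worst case
assert optimization_stays_enabled(2, 3, threshold=0.6)       # 0.5 <= 0.6
assert not optimization_stays_enabled(3, 3, threshold=0.6)   # 1.0 > 0.6
```

The subtraction of 1 from numerator and denominator is what keeps the two-source case meaningful: reading 1 of 2 sources scores a clean 0.0 rather than 0.5.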
Botond Dénes
3c1fa3ecc1 single_key_sstable_reader: optimize single-row queries
For single-row queries that only query atomic cells one can put a lower
bound on the timestamps which may affect the query results and thus rule
out entire data sources. This allows the query to read only those
sstables that actually contribute to the result.
To do this we incrementally move through the sstables overlapping with
the query range, checking after each read mutation whether we already
have a value for all required cells and whether the lower-bound of their
timestamps is higher than the upper-bound of the timestamps of all the
remaining data-sources. When this condition is met we terminate the
read.
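The termination condition can be sketched like this. The helper and its parameters (`can_terminate`, `required_cells`, the timestamp vectors) are invented for the example; the real reader tracks these bounds incrementally as it moves through the overlapping sstables.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

using timestamp = int64_t;

// Can we stop reading further sstables? Yes, once every required cell has
// a candidate value and even the oldest of those values is newer than the
// newest data any remaining data source could possibly contain.
bool can_terminate(const std::vector<timestamp>& found_cell_timestamps,
                   size_t required_cells,
                   const std::vector<timestamp>& remaining_max_timestamps) {
    if (found_cell_timestamps.size() < required_cells) {
        return false; // some required cell still has no value at all
    }
    if (remaining_max_timestamps.empty()) {
        return true;  // nothing left to read anyway
    }
    timestamp lower = *std::min_element(found_cell_timestamps.begin(),
                                        found_cell_timestamps.end());
    timestamp upper = *std::max_element(remaining_max_timestamps.begin(),
                                        remaining_max_timestamps.end());
    return lower > upper; // no remaining source can override what we have
}
```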
2017-10-18 17:24:03 +03:00
Botond Dénes
5fc44c4307 single_key_sstable_reader: move reading code into its own method 2017-10-18 17:24:03 +03:00
Botond Dénes
6cdeca1846 Add selects_only_full_rows() and selects_only_full_rows_with_atomic_columns() 2017-10-18 17:24:03 +03:00
Botond Dénes
7aceb14395 Fix compile errors in tests/config_test.cc introduced by c468e5981
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <2700ac3987c3a229eb7083ce6f5d390012a3b66c.1508336217.git.bdenes@scylladb.com>
2017-10-18 15:20:45 +01:00
Paweł Dziepak
c28e31eac4 database: fix build (auto shards&) 2017-10-18 13:10:00 +01:00
Duarte Nunes
446e5f53db database: Avoid superfluous shards_for_this_sstable vector copies
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20171018112643.40411-1-duarte@scylladb.com>
2017-10-18 15:00:52 +03:00
Duarte Nunes
044b8deae4 Merge 'Solves problems related to gossip which can be observed in a large cluster' from Tomasz
"The main problem fixed is slow processing of application state changes.
This may lead to a bootstrapping node not having an up-to-date view of the
ring, and serving incorrect data.

Fixes #2855."

* tag 'tgrabiec/gossip-performance-v3' of github.com:scylladb/seastar-dev:
  gms/gossiper: Remove periodic replication of endpoint state map
  gossiper: Check for features in the change listener
  gms/gossiper: Replicate changes incrementally to other shards
  gms/gossiper: Document validity of endpoint_state properties
  storage_service: Update token_metadata after changing endpoint_state
  gms/gossiper: Process endpoints in parallel
  gms/gossiper: Serialize state changes and notifications for given node
  utils/loading_shared_values: Allow Loader to return non-future result
  gms/gossiper: Encapsulate lookup of endpoint_state
  storage_service: Batch token metadata and endpoint state replication
  utils/serialized_action: Introduce trigger_later()
  gossiper: Add and improve logging
  gms/gossiper: Don't fire change listeners when there is no change
  gms/gossiper: Allow parallel apply_state_locally()
  gms/gossiper: Avoid copies in endpoint_state::add_application_state()
  gms/failure_detector: Ignore short update intervals
2017-10-18 10:13:25 +01:00
Duarte Nunes
c468e59817 Merge 'Extract config file mechanism + allow additional' from Calle
"Extracts the yaml/boost-po aspects of the "self-describing" db::config
into an abstract type.

db::config is then reimplemented in said type, removing some of the
slightly cumbersome entanglement with seastar opts (log).

Adds a main hook for additional configuration files (options + file)"

* 'calle/config' of github.com:scylladb/seastar-dev:
  main/init: Add registerable configuration objects
  db::config: Re-implement on utils/config_file.
  utils::config_file: Abstract out config file to external type
2017-10-18 09:50:53 +01:00
Tomasz Grabiec
f570e41d18 gms/gossiper: Remove periodic replication of endpoint state map
For large clusters the map can be big and cause latency problems.
Since we now actively replicate changes, this is no longer needed.
2017-10-18 08:49:53 +02:00
Tomasz Grabiec
84c7b63c51 gossiper: Check for features in the change listener
In preparation for removal of periodic replication
2017-10-18 08:49:53 +02:00
Tomasz Grabiec
2d5fb9d109 gms/gossiper: Replicate changes incrementally to other shards
storage_service depends on endpoint states to be replicated to all
shards before token metadata is replicated. Currently this is taken
care of by storage_service::replicate_to_all_cores(), invoked from
storage_service's change listener. It copies whole endpoint state map,
which is expensive in large clusters. It's more efficient to replicate
only incremental changes, and only once, rather than for each
application state.
2017-10-18 08:49:53 +02:00
Tomasz Grabiec
28c9609370 gms/gossiper: Document validity of endpoint_state properties 2017-10-18 08:49:53 +02:00
Tomasz Grabiec
cf113ed295 storage_service: Update token_metadata after changing endpoint_state
There is a requirement that whatever is present in token_metadata,
should also be present in endpoint_state. Because of that, we should update
endpoint_state first (set_gossip_tokens).

Apache Cassandra switched to this order as well in commit
b39d984f7bd682c7638415d65dcc4ac9bcb74e5f.
2017-10-18 08:49:53 +02:00
Tomasz Grabiec
5cc83b9b3c gms/gossiper: Process endpoints in parallel
Makes state application faster due to increased parallelism.

Refs #2855.

Bootstrap of the 11th node, ignoring apply_state_locally() calls which complete instantly:

Before:

DEBUG 2017-10-06 15:24:04,213 [shard 0] gossip - apply_state_locally() took 1230 ms
DEBUG 2017-10-06 15:24:04,223 [shard 0] gossip - apply_state_locally() took 1421 ms
DEBUG 2017-10-06 15:24:04,225 [shard 0] gossip - apply_state_locally() took 607 ms
DEBUG 2017-10-06 15:24:04,288 [shard 0] gossip - apply_state_locally() took 488 ms
DEBUG 2017-10-06 15:24:04,408 [shard 0] gossip - apply_state_locally() took 1425 ms

After:

DEBUG 2017-10-06 16:24:13,130 [shard 0] gossip - apply_state_locally() took 814 ms
2017-10-18 08:49:53 +02:00
Tomasz Grabiec
8f01e08690 gms/gossiper: Serialize state changes and notifications for given node
It's possible that a change listener for a later state will run before
the change listener for the previous state completes, in which case the
node's state can be corrupted. For example, the previous change listener
may override system.peers with an old value.

This patch fixes the problem by serializing state changes and
listeners for each node.

The implementation uses loading_shared_values so that the lock remains
alive as long as there is anyone holding it. Using endpoint_state_map
for that doesn't seem appropriate, because entries can be removed from
it while listeners are still running. There is code in the gossiper
which anticipates, in some places, that the entry may be gone across
deferring points.
2017-10-18 08:49:53 +02:00
Tomasz Grabiec
f7a7e97095 utils/loading_shared_values: Allow Loader to return non-future result 2017-10-18 08:49:52 +02:00
Tomasz Grabiec
6fccf7f4d0 gms/gossiper: Encapsulate lookup of endpoint_state 2017-10-18 08:49:52 +02:00
Tomasz Grabiec
6263b0ebb6 storage_service: Batch token metadata and endpoint state replication
Replication needs to be serialized. We can batch replication requests
which are waiting to start. Use serialized_action, which does this.
2017-10-18 08:49:52 +02:00
Tomasz Grabiec
2e2ae4671e utils/serialized_action: Introduce trigger_later()
Can be used instead of trigger() to improve batching.
2017-10-18 08:49:52 +02:00
Tomasz Grabiec
41ffefd194 gossiper: Add and improve logging 2017-10-18 08:49:52 +02:00
Tomasz Grabiec
0ed84710d9 gms/gossiper: Don't fire change listeners when there is no change
apply_new_states() always fires change listeners for received values,
even if we already processed the state earlier. Some change listeners
are heavy-weight, e.g. storage_service::handle_state_normal().  We
should avoid calling them more than necessary.

Make sure that we always run the change listeners by putting them in a
defer() block. Otherwise, if an exception is thrown in the middle of state
application, the change listeners would not be run. Later we would not
detect the change for states which were already applied, and would not
run the change listeners.

Fixes #2867
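The "always fire the listeners, but only for real changes" pattern described above can be sketched as follows. This uses a plain scope guard in place of seastar's defer() and invented state types (`apply_new_states`, a string-to-int state map, a global `fired` log), so it is an illustration of the pattern, not the gossiper's actual code.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Simplified stand-in for seastar's defer(): runs a callback on scope exit,
// including exit via exception.
struct scope_guard {
    std::function<void()> f;
    ~scope_guard() { f(); }
};

std::vector<std::string> fired; // records which listeners actually ran

void apply_new_states(std::map<std::string, int>& local,
                      const std::map<std::string, int>& remote) {
    std::vector<std::string> changed;
    // Listeners run from the guard's destructor, so they fire even if
    // applying a later state throws mid-loop.
    scope_guard notify{[&] {
        for (auto& key : changed) {
            fired.push_back(key);
        }
    }};
    for (auto& [key, value] : remote) {
        auto it = local.find(key);
        if (it != local.end() && it->second == value) {
            continue; // state already applied earlier: don't re-fire listener
        }
        local[key] = value;
        changed.push_back(key);
    }
}
```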
2017-10-18 08:49:52 +02:00
Tomasz Grabiec
c780a74b58 gms/gossiper: Allow parallel apply_state_locally()
It has been serialized since e428d06f40. This causes a regression in the
performance of application state propagation due to reduced
parallelism.

Processing states for each node has high latency due to memtable
flushes triggered by update_tokens() and commitlog syncs done by
system.peers updates, if commitlog sync mode is set to "batch".  We
have high internal concurrency for these, so increasing parallelism
significantly reduces time to process all states.

Fixes #2855.
2017-10-18 08:49:52 +02:00
Tomasz Grabiec
f20a805eca gms/gossiper: Avoid copies in endpoint_state::add_application_state() 2017-10-18 08:49:52 +02:00
Tomasz Grabiec
a71624d58d gms/failure_detector: Ignore short update intervals
Failure detector decides that a node is down if it hasn't received a change of
its heartbeat for longer than ~11 times the average of past intervals between
updates.

If there are multiple incoming ACKs containing information about the
same node, we may detect and report a change for each of them. This
will cause failure_detector to establish that the average report
period is in milliseconds. After the update storm is over, it will
claim the node failure very soon, because report period will now be a
large multiple of the average.

Fix by not counting short updates into the calculation of average
arrival time.

Fixes #2861.
2017-10-18 08:49:52 +02:00
Calle Wilund
12a54805ea main/init: Add registerable configuration objects
Allowing plugging in command line arguments + "parse-points"
for configs outside db/config
2017-10-18 00:52:04 +00:00
Calle Wilund
4bd98f7296 db::config: Re-implement on utils/config_file.
Re-use the config abstraction, and de-couple the seastar logging
parts a little bit more.
2017-10-18 00:51:54 +00:00
Calle Wilund
05db87e068 utils::config_file: Abstract out config file to external type
Handling all the boost::commandline + YAML stuff.
This patch only provides an external version of these functions,
it does not modify the db::config object. That is for a follow-up
patch.
2017-10-18 00:51:41 +00:00
Pekka Enberg
ae92055b52 Merge "Bring histogram closer to what Prometheus expects" from Glauber
"Histograms are a native prometheus type, and there are many functions
available that operate on them. There is extensive documentation about
them at https://prometheus.io/docs/practices/histograms/

One example is the function histogram_quantile(), that can extract
useful quantiles from the histograms. Currently, those functions don't
work well.

The reasons are twofold:
1) We are only exporting 16 metrics, starting from 1usec. That means
that the highest latency we can differentiate is 4ms. After that,
everything falls into the same bin.

2) The format that prometheus expects is that each bin will contain
the total number of points seen *up until that bin*, while we
currently export the total number of points that fall between bins.
In other words, Prometheus expects a cumulative histogram.

About point two: granted, it is a bit hidden in their website, but it is
there. The following phrase about a caveat makes it clear:

"Note that we divide the sum of both buckets. The reason is that the
histogram buckets are cumulative. The le="0.3" bucket is also contained
in the le="1.2" bucket; dividing it by 2 corrects for that."

There is also no need to accumulate points that fall beyond the last bin:
the _count component of the histogram already accounts for them."
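The per-bin to cumulative conversion Prometheus expects can be illustrated with a small helper; `to_cumulative` is invented for this example and is not the actual estimated_histogram code.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Convert per-bin counts ("points between bins") into the cumulative form
// Prometheus expects: bucket le=X counts all points <= X. Overflow past the
// last bucket is deliberately not folded into an extra bin; the histogram's
// _count series accounts for it.
std::vector<uint64_t> to_cumulative(const std::vector<uint64_t>& per_bin) {
    std::vector<uint64_t> cumulative;
    cumulative.reserve(per_bin.size());
    uint64_t running = 0;
    for (auto count : per_bin) {
        running += count;            // each bucket includes all smaller buckets
        cumulative.push_back(running);
    }
    return cumulative;
}
```

This also matches the caveat quoted above: the le="0.3" bucket is contained in the le="1.2" bucket by construction.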

Acked-by: Amnon Heiman <amnon@scylladb.com>
Acked-by: Gleb Natapov <gleb@scylladb.com>

* 'prometheus-histograms' of github.com:glommer/scylla:
  storage_proxy: change reporting of estimated histograms
  estimated_histogram: bring histogram closer to what prometheus expects.
2017-10-17 20:23:10 +03:00
Takuya ASADA
3cab5557e5 dist/debian: fix Debian not to use new dependency package names
We renamed dependency packages, e.g. antlr3-c++-dev to
scylla-antlr35-c++-dev, when we moved to the PPA on Ubuntu, but
Debian still uses the old dist/debian/dep packages.
So keep using the old-style package names.

Fixes #2831

Message-Id: <1508245175-2184-1-git-send-email-syuu@scylladb.com>
2017-10-17 16:39:35 +03:00
Daniel Fiala
f5629b3a23 types: Use std::pair instead of std::tuple to avoid compile-time error with explicit constructor.
Fixes #2895.

Signed-off-by: Daniel Fiala <daniel@scylladb.com>
Message-Id: <20171017071316.2836-1-daniel@scylladb.com>
2017-10-17 12:32:43 +01:00
Duarte Nunes
baeec0935f Replace query::full_slice with schema::full_slice()
query::full_slice doesn't select any regular or static columns, which
is at odds with the expectations of its users. This patch replaces it
with the schema::full_slice() version.

Refs #2885

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1507732800-9448-2-git-send-email-duarte@scylladb.com>
2017-10-17 11:25:53 +02:00
Duarte Nunes
fbb4c9edda schema: Provide all-selecting partition slice
This patch introduces schema::full_slice(), which returns a
partition_slice selecting the full clustering range, as well as all
static and regular columns. No options aside from the default are
set in that partition_slice.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <1507732800-9448-1-git-send-email-duarte@scylladb.com>
2017-10-17 11:25:35 +02:00
Tomasz Grabiec
6d5a0f8a98 db: Add debug-level logging related to streaming
Message-Id: <1505896395-30203-1-git-send-email-tgrabiec@scylladb.com>
2017-10-16 18:49:10 +01:00
Paweł Dziepak
d9abb75bfa tests/perf_simple_query: fix counter update query
Message-Id: <20171016125334.4423-1-pdziepak@scylladb.com>
2017-10-16 19:41:31 +02:00
Calle Wilund
cc28cf838c password_auth: Return actual generated salt from gensalt
Fixes: 2898
Typo in gensalt(): it only returned the selected hash method, not
the random salt bytes. This does not prevent the hash function from
operating, but its strength is reduced.
Message-Id: <20171016130505.25593-2-calle@scylladb.com>
2017-10-16 14:07:46 +01:00
Calle Wilund
57c5f13166 password_auth: Keep crypt_data as thread local
Fixes: 2887
Speeds up password hashing ever so slightly.
Message-Id: <20171016130505.25593-1-calle@scylladb.com>
2017-10-16 14:07:42 +01:00
Paweł Dziepak
8c3b7fea81 Merge "Introduce new API and converters from/to old mutation_reader" from Piotr
"This changeset is the first step to flatten mutation_reader.
Then it introduces new mutation_fragment types for partition header and end of partition.
Using those a new flat_mutation_reader is defined.
Finally it introduces converters between new flat_mutation_reader and
old mutation_reader."

* 'haaawk/flattened_mutation_reader_v12' of github.com:scylladb/seastar-dev:
  Add tests for flat_mutation_reader
  Introduce conversion from flat_mutation_reader to mutation_reader
  Introduce conversion from mutation_reader to flat_mutation_reader
  Introduce flat_mutation_reader
  Extract FlattenedConsumer concept using GCC6_CONCEPT
  Introduce partition_end mutation_fragment
  Introduce a position for end of partition
  Introduce partition_start mutation_fragment
  Introduce FragmentConsumer
  Introduce a position for partition start
  streamed_mutation: Extract concepts using GCC6_CONCEPT macro
2017-10-16 12:14:23 +01:00
Gleb Natapov
bd09ce7cd4 gdb: Add new command task_histogram
The command scans a random set of objects in a small pool (or, optionally,
only objects of a certain size) for vptrs and builds a histogram, so that
the most often used vptrs can be easily found. The command is useful to find
"memory leaks" caused by creating too many tasks of a certain type,
which is usually a result of unlimited parallelism somewhere.

Message-Id: <20171015081634.GB21092@scylladb.com>
2017-10-15 12:12:42 +03:00
Piotr Jastrzebski
5f34559b78 Add tests for flat_mutation_reader
Those tests run mutation source test for all sources
using conversion to and from flat_mutation_reader.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-10-13 16:08:59 +02:00
Piotr Jastrzebski
31733a7eeb Introduce conversion from flat_mutation_reader to mutation_reader
This will be used in transition from mutation_reader
to flat_mutation_reader

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-10-13 16:08:59 +02:00
Piotr Jastrzebski
6a66bee788 Introduce conversion from mutation_reader to flat_mutation_reader
This will be used in transition from mutation_reader
to flat_mutation_reader

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-10-13 16:08:59 +02:00
Piotr Jastrzebski
748205ca75 Introduce flat_mutation_reader
This reader operates on mutation_fragments instead of
streamed_mutations.

Each partition starts with a partition_header fragment
and ends with an end_of_partition fragment.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-10-13 16:08:40 +02:00
Tomasz Grabiec
b74b06808e tests: row_cache: Add test for concurrent population of partition entries
Message-Id: <1507815478-20269-2-git-send-email-tgrabiec@scylladb.com>
2017-10-12 15:55:33 +01:00
Tomasz Grabiec
083b9cddef row_cache: Fix handling of concurrent partition population
This fixes a regression introduced in 27a3b4bca9 (master only).

partition_range_cursor assumes that as long as references are valid,
_end is valid as well. But if new entries were inserted before _end,
it may not, if the new entries fall after the query range. This may
result in reads returning partitions from outside the query range.
Message-Id: <1507815478-20269-1-git-send-email-tgrabiec@scylladb.com>
2017-10-12 15:55:20 +01:00
Tomasz Grabiec
68fe1a5bee utils/loading_cache: Fix compilation on older compilers
Message-Id: <1507728312-10585-1-git-send-email-tgrabiec@scylladb.com>
2017-10-12 14:55:34 +03:00
Raphael S. Carvalho
25a4f152cd sstables: remove dead sstable method
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20171012072905.12737-1-raphaelsc@scylladb.com>
2017-10-12 11:58:39 +02:00
Raphael S. Carvalho
16dd0d15fc sstables: make get_shards_for_this_sstable return const ref
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20171012072850.12681-1-raphaelsc@scylladb.com>
2017-10-12 11:58:23 +02:00
Pekka Enberg
1701fc2e50 Merge "gms/gossiper: Multiple cleanups" from Duarte
"Based on the functions get_endpoint_state_for_endpoint_ptr(),
get_application_state_ptr() and
endpoint_state::get_application_state_ptr(), this series
cleans up miscellaneous functions related to the gossiper.

It not only removes duplicated code, but also omits many copies.

All pointer usages have been audited for safety."

Acked-by: Asias He <asias@scylladb.com>
Acked-by: Tomasz Grabiec <tgrabiec@scylladb.com>

* 'gossiper-cleanup/v2' of github.com:duarten/scylla: (27 commits)
  gms/endpoint_state: Remove get_application_state()
  service/storage_service: Avoid copies in prepare_replacement_info()
  service/storage_service: Cleanup get_application_state_value()
  service/storage_service: Cleanup handle_state_removing()
  service/storage_service: Cleanup get_rpc_address()
  locator/reconnectable_snitch_helper: Avoid versioned_value copies
  locator/production_snitch_base: Cleanup get_endpoint_info()
  service/migration_manager: Avoid copies in is_ready_for_bootstrap()
  service/migration_manager: Cleanup has_compatible_schema_tables_version()
  service/migration_manager: Fix usages of get_application_state()
  cache_hit_rate: Avoid copies in get_hit_rate()
  gms/endpoint_state: Avoid copies in is_shutdown()
  service/load_broadcaster: Avoid copy in on_join()
  gms/gossiper: Cleanup get_supported_features()
  gms/gossiper: Cleanup get_gossip_status()
  gms/gossiper: Cleanup seen_any_seed()
  gms/gossiper: Cleanup get_host_id()
  gms/gossiper: Removed dead uses_vnodes() function
  gms/gossiper: Cleanup uses_host_id()
  gms/gossiper: Add get_application_state_ptr()
  ...
2017-10-11 13:45:36 +03:00
Duarte Nunes
f67a553b96 gms/endpoint_state: Remove get_application_state()
It is no longer used, as all callsites have moved to
get_application_state_ptr().

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
e9358c1c83 service/storage_service: Avoid copies in prepare_replacement_info()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
674f5d8eaf service/storage_service: Cleanup get_application_state_value()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
0ccb9211d7 service/storage_service: Cleanup handle_state_removing()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
bdee795876 service/storage_service: Cleanup get_rpc_address()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
2f05d7423a locator/reconnectable_snitch_helper: Avoid versioned_value copies
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
28d63a76df locator/production_snitch_base: Cleanup get_endpoint_info()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
03e6fc95ba service/migration_manager: Avoid copies in is_ready_for_bootstrap()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
72ca6b34ef service/migration_manager: Cleanup has_compatible_schema_tables_version()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
976324bbb8 service/migration_manager: Fix usages of get_application_state()
We were taking a reference to a temporary value in different places.
Fix them by using get_application_state_ptr(), which also avoids a copy.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
bb89b97cbb cache_hit_rate: Avoid copies in get_hit_rate()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
9d5c6e0c72 gms/endpoint_state: Avoid copies in is_shutdown()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
25b0654312 service/load_broadcaster: Avoid copy in on_join()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
92df519b91 gms/gossiper: Cleanup get_supported_features()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
39f71f7d12 gms/gossiper: Cleanup get_gossip_status()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
db660f1e08 gms/gossiper: Cleanup seen_any_seed()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
88dd97fe8e gms/gossiper: Cleanup get_host_id()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
95079795ce gms/gossiper: Removed dead uses_vnodes() function
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
7db7704edc gms/gossiper: Cleanup uses_host_id()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
2984bdab29 gms/gossiper: Add get_application_state_ptr()
This patch introduces the get_application_state_ptr() function, which
allows access to a versioned_value of a particular endpoint.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
f41748af81 gms/gossiper: Cleanup notify_failure_detector()
Now that we have get_endpoint_state_for_endpoint_ptr(), which does not
return a copy and allows mutating the actual state, we can use it
instead of repeating the lookup code.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
2210d10552 gms/gossiper: Cleanup is_alive()
Make it use get_endpoint_state_for_endpoint_ptr(), check if gossiper is
enabled, mark it as const, and have some callers use it instead of open
coding the logic.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:32 +01:00
Duarte Nunes
ceef45a6fe gms/gossiper: Const-qualify functions
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:31 +01:00
Duarte Nunes
955aee1588 gms/gossiper: Cleanup convict()
Have convict() use get_endpoint_state_for_endpoint_ptr(), simplify
logging, and also protect expensive operations by checking the log
level.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:31 +01:00
Duarte Nunes
cf99a41226 gms/gossiper: Add non-const get_endpoint_state_for_endpoint_ptr()
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:31 +01:00
Duarte Nunes
d0fba1a113 gms/failure_detector: Simplify alive/dead endpoint count
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:31 +01:00
Duarte Nunes
dc65cda1a3 gms/failure_detector: Fix if/else style to include braces
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-11 10:02:31 +01:00
Raphael S. Carvalho
67c5c8dc67 sstables: do not recompute shards for all tables after each compaction
For every finished compaction, we were calculating shards for all
existing tables. With ignore_msb set to 0, it's probably not a big
deal, but if ignore_msb is e.g. 12 and LCS is used (which may mean
thousands of tables), the operation may stall the reactor for a
considerable amount of time. This is fixed by caching the shards.

Fixes #2875.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20171011053424.22308-1-raphaelsc@scylladb.com>
2017-10-11 11:45:01 +03:00
Tomasz Grabiec
66a15ccd18 gms/gossiper: Introduce copy-less endpoint_state::get_application_state_ptr()
Message-Id: <1507642411-28680-3-git-send-email-tgrabiec@scylladb.com>
2017-10-10 18:27:43 +01:00
Gleb Natapov
36d9225e40 scylla-gdb: print number of allocated objects as an integer instead of float
Message-Id: <20171010151835.GT23527@scylladb.com>
2017-10-10 18:19:44 +03:00
Avi Kivity
4ad3900d8d Merge "gossiper: Optimize endpoint_state lookup" from Duarte
"gossiper::get_endpoint_state_for_endpoint() returns a copy of
endpoint_state, which we've seen can be very expensive. This
series introduces a function that returns a pointer and avoids
the copy.

Fixes #764"

* 'endpoint-state/v2' of https://github.com/duarten/scylla:
  gossiper: Avoid endpoint_state copies
  endpoint_state: const-qualify functions
  storage_service: Remove duplicate endpoint state check
2017-10-10 17:29:22 +03:00
Piotr Jastrzebski
f325fef362 Extract FlattenedConsumer concept using GCC6_CONCEPT
This concept will be used in flat_mutation_reader::consume

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-10-10 16:15:59 +02:00
Piotr Jastrzebski
46727f12e0 Introduce partition_end mutation_fragment
This type of mutation_fragment will be used in new mutation_reader
to signal the end of the current partition.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-10-10 16:15:59 +02:00
Piotr Jastrzebski
adffc80619 Introduce a position for end of partition
This position will be used for mutation fragment
that represents the end of partition.

This position sorts after all other mutation fragments.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-10-10 16:15:59 +02:00
Piotr Jastrzebski
2516b42752 Introduce partition_start mutation_fragment
This type of mutation_fragment will be used in new mutation_reader
to signal the beginning of the next partition.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-10-10 16:15:59 +02:00
Piotr Jastrzebski
1f4fb6dd4a Introduce FragmentConsumer
This concept helps define StreamedMutationConsumer.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-10-10 16:05:44 +02:00
Duarte Nunes
ceebbe14cc gossiper: Avoid endpoint_state copies
gossiper::get_endpoint_state_for_endpoint() returns a copy of
endpoint_state, which we've seen can be very expensive.

This patch adds a similar function which returns a pointer instead,
and changes the call sites where using the pointer-returning variant
is deemed safe (the pointer neither escapes the function, nor crosses
any defer point).

Fixes #764

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-10 13:48:02 +01:00
Duarte Nunes
bc976b4773 endpoint_state: const-qualify functions
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-10 13:30:28 +01:00
Duarte Nunes
198b1b76b5 storage_service: Remove duplicate endpoint state check
We already performed the check, so we don't need to do it again.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-10-10 13:25:34 +01:00
Avi Kivity
c0687a9761 sstables: replace naked new with make_lw_shared
Fallout from the sstables dependency reduction patches.
Message-Id: <20171010121134.26342-1-avi@scylladb.com>
2017-10-10 13:21:46 +01:00
Tomasz Grabiec
46c7e06e56 locator: Optimize token_metadata::is_member()
Currently it's linear in the number of tokens in the system in the
worst case. We can use the knowledge that _topology has to make it
O(1).

Fixes #2873.

Message-Id: <1507630182-13410-1-git-send-email-tgrabiec@scylladb.com>
2017-10-10 14:27:54 +03:00
Tomasz Grabiec
44faaafc29 cache_streamed_mutation: Read static row with cache region locked
_snp->static_row() allocates and needs reference stability.
Message-Id: <1507555031-11567-1-git-send-email-tgrabiec@scylladb.com>
2017-10-09 15:55:53 +01:00
Avi Kivity
8d81ec92f6 gdb: adjust 'scylla memory' command for fallback small pools
Seastar small pools can now fall back to smaller spans. Adjust
the 'scylla memory' command accordingly.
Message-Id: <20171005123935.13503-1-avi@scylladb.com>
2017-10-09 11:44:03 +02:00
Botond Dénes
dead2617ce mp_row_consumer: remove unnecessary _reasource_tracker member
Leftovers from a43901f84.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <88237d9cd97feeca47e12ec4af89c90f1a3a6bb5.1507535176.git.bdenes@scylladb.com>
2017-10-09 10:59:40 +03:00
Botond Dénes
af083d6507 Merge mutation_reader related test cases into mutation_reader_test
The following tests were merged:
* combined_mutation_reader_test
* restricted_reader_test

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <db6b5b3c2d30cfaa720fff07c859649a180cff95.1507299293.git.bdenes@scylladb.com>
2017-10-08 17:33:55 +03:00
Avi Kivity
fd1d35d4af Update seastar submodule
* seastar c62bbf9...8babd1f (9):
  > Enhanced support for Travis CI: build with and without DPDK support, use various compilers (GCC 5/6/7)
  > backtrace: Allow whitespace after the backtrace addresses
  > test.py: fix typo in noncopyable_function_test
  > utils: introduce noncopyable_function
  > Revert "utils: introduce noncopyable_function"
  > utils: introduce noncopyable_function
  > Add seastar-addr2line helper script to decode backtraces
  > execution_stage: pass scheduling_group to constructor
  > reactor: preempt tasks when a signal is received
2017-10-08 16:36:10 +03:00
Avi Kivity
98e69482bf Merge "Add support for CAST AS functions" from Daniel
"This series implements CAST AS functions in scylla.

It allows to use expressions of the form CAST(x AS type) in select statements.
Primary motivation for this functions came from aggregate functions, because
function avg(.) gives rounded results for interger columns. Now it is possible
to convert such column to float/double and obtain floating point results:

    SELECT ... avg(cast(x as double)), ...

Fixes #2280."

* 'danfiala/2280-patch-series-v2' of https://github.com/hagrid-the-developer/scylla:
  tests: Add test for CAST AS functions.
  cql3: Add support for CAST AS functions to ANTLR grammar.
  cql3/selectable: Add selectable::with_cast for CAST AS functions.
  cql3/functions: Add support for CAST AS functions.
  types:: Add support for CAST AS functions.
  types: Moved code that implements conversion of types' values to string.
2017-10-08 12:55:07 +03:00
Botond Dénes
046a1f9b05 sstables: Get rid of [[deprecated]] index_reader::get_index_entries()
Change test code (the only consumers) to read index by partitions.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <b6111e92b5e0729bfa2e76fd848215804174067a.1507297154.git.bdenes@scylladb.com>
2017-10-08 12:18:52 +03:00
Daniel Fiala
9e11bfe8fa tests: Add test for CAST AS functions.
Signed-off-by: Daniel Fiala <daniel@scylladb.com>
2017-10-07 21:05:53 +02:00
Daniel Fiala
4dd504b9ac cql3: Add support for CAST AS functions to ANTLR grammar.
Signed-off-by: Daniel Fiala <daniel@scylladb.com>
2017-10-07 21:04:40 +02:00
Daniel Fiala
7fe653f08c cql3/selectable: Add selectable::with_cast for CAST AS functions.
Signed-off-by: Daniel Fiala <daniel@scylladb.com>
2017-10-07 21:04:40 +02:00
Daniel Fiala
ca092a0b7d cql3/functions: Add support for CAST AS functions.
Signed-off-by: Daniel Fiala <daniel@scylladb.com>
2017-10-07 21:04:40 +02:00
Daniel Fiala
61570e4a73 types:: Add support for CAST AS functions.
Signed-off-by: Daniel Fiala <daniel@scylladb.com>
2017-10-07 21:04:40 +02:00
Daniel Fiala
e2c0a57ecf types: Moved code that implements conversion of types' values to string.
Signed-off-by: Daniel Fiala <daniel@scylladb.com>
2017-10-07 21:04:40 +02:00
Botond Dénes
a43901f842 row_consumer: de-virtualize io_priority() and resource_tracker()
Fixes #2830

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <448a1f739ab8c88a7a5562bce8dce5ae6efdf934.1507302530.git.bdenes@scylladb.com>
2017-10-06 18:50:12 +01:00
Botond Dénes
d2b294dc06 loading_cache: prepend this-> to method calls on captured this
To make gcc 6.3 happy.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <849402e20a1ffa6f603eff4fe295981a94b9ca79.1507282527.git.bdenes@scylladb.com>
2017-10-06 12:09:34 +02:00
Vlad Zolotarov
bc9d17963f test.py: add loading_cache_test
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1507137724-2408-3-git-send-email-vladz@scylladb.com>
2017-10-05 15:30:07 +01:00
Vlad Zolotarov
1394e781be utils + cql3: use a functor class instead of std::function
Define value_extractor_fn as a functor class instead of std::function.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1507137724-2408-2-git-send-email-vladz@scylladb.com>
2017-10-05 15:29:51 +01:00
Duarte Nunes
a011eb72c2 Merge branch 'CQL secondary index backing views' from Pekka
"This patch series adds backing materialized view for secondary indices.
When a new index is created with the 'CREATE INDEX' statement, a backing
materialized view is created automatically.

For example, assuming the following table:

  CREATE TABLE ks1.users (
    userid uuid,
    email text,
    PRIMARY KEY (userid)
  );

When the following index is created:

  CREATE INDEX user_email ON ks1.users (email);

The following materialized view is also created:

  cqlsh> DESCRIBE ks1.users;

  <snip>

  CREATE MATERIALIZED VIEW ks1.user_email_index AS
      SELECT email, userid
      FROM ks1.users
      WHERE email IS NOT NULL
      PRIMARY KEY (email, userid)
      WITH CLUSTERING ORDER BY (userid ASC)
      AND bloom_filter_fp_chance = 0.01
      AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
      AND comment = ''
      AND compaction = {'class': 'SizeTieredCompactionStrategy'}
      AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
      AND crc_check_chance = 1.0
      AND dclocal_read_repair_chance = 0.1
      AND default_time_to_live = 0
      AND gc_grace_seconds = 864000
      AND max_index_interval = 2048
      AND memtable_flush_period_in_ms = 0
      AND min_index_interval = 128
      AND read_repair_chance = 0.0
      AND speculative_retry = '99.0PERCENTILE';

CQL queries will use the backing materialized view as part of queries on
indexed columns to fetch the primary keys."

* 'penberg/cql-2i-backing-view/v3' of github.com:scylladb/seastar-dev:
  schema_tables: Create backing view for indices
  database: Kill obsolete secondary index manager stub
  cql3: Wire up secondary index manager
  cql3/restrictions: Add term_slice::is_supported_by() function
  index: Add secondary_index_manager::create_view_for_index()
  index: Add target_parser::parse() helper
  cql3/statements: Add index_target::from_sstring() helper
  index: Add secondary_index_manager::get_dependent_indices()
  index: Add secondary_index_manager::reload()
  index: Add secondary_index_manager::list_indexes()
  index: Add index class
  index: Pass column_family to secondary_index_manager constructor
  database: Make secondary index manager per-column family
2017-10-05 12:08:14 +01:00
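The mechanism the series describes — answering queries on indexed columns through the backing view — can be modelled with a toy structure. This is a sketch only; the class and method names are invented and this is not the Scylla implementation:

```python
class IndexBackedTable:
    """Toy model of a secondary index backed by a materialized view:
    the view is keyed by the indexed column and stores the base
    table's primary keys."""
    def __init__(self):
        self.base = {}   # userid -> email (the base table)
        self.view = {}   # email  -> set of userids (the backing view)

    def insert(self, userid, email):
        self.base[userid] = email
        self.view.setdefault(email, set()).add(userid)

    def select_by_email(self, email):
        # The indexed query first reads the view to find primary keys,
        # then fetches the matching rows from the base table.
        return [(uid, self.base[uid]) for uid in sorted(self.view.get(email, ()))]
```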
Pekka Enberg
4045e1ec09 schema_tables: Create backing view for indices
This patch wires calls to secondary index manager reload() in
merge_tables_and_views() and changes make_update_indices_mutations() to
also create mutations for the backing materialized view. After this
patch, "CREATE INDEX" CQL statement also creates a materialized view.
2017-10-05 10:07:44 +03:00
Pekka Enberg
5d30ad5e1a database: Kill obsolete secondary index manager stub 2017-10-05 10:07:44 +03:00
Pekka Enberg
3a27f2e812 cql3: Wire up secondary index manager 2017-10-05 10:07:44 +03:00
Pekka Enberg
feae924c8c cql3/restrictions: Add term_slice::is_supported_by() function 2017-10-05 10:07:44 +03:00
Pekka Enberg
ed4c96c025 index: Add secondary_index_manager::create_view_for_index()
This patch adds a create_view_for_index() function, which creates a
view_ptr for index_metadata.
2017-10-05 10:07:44 +03:00
Pekka Enberg
a809ea902e index: Add target_parser::parse() helper 2017-10-05 10:07:44 +03:00
Pekka Enberg
9f07af8224 cql3/statements: Add index_target::from_sstring() helper 2017-10-05 10:07:44 +03:00
Pekka Enberg
50943ce592 index: Add secondary_index_manager::get_dependent_indices() 2017-10-05 10:07:44 +03:00
Glauber Costa
189ef02596 storage_proxy: change reporting of estimated histograms
We are currently collapsing the histograms into 16 points, exponentially
increasing in value, starting from 1.

While reducing the number of points is a worthy goal, the current
configuration caps us at 4ms. Our latencies tend to be higher than this.

Starting from 1 is also a bit of an exaggeration: rarely are our
latencies in that range. This patch changes reporting so that we
report 20 points starting from 32.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-10-04 20:01:15 -04:00
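Assuming the points grow by a constant factor (the exact growth factor is not stated in the commit message; doubling is used here purely for illustration), the two configurations can be sketched as:

```python
def bucket_bounds(start, count, factor=2):
    # Exponentially increasing bucket bounds; the growth factor is an
    # assumption for illustration, not taken from the Scylla code.
    bounds = []
    value = start
    for _ in range(count):
        bounds.append(value)
        value *= factor
    return bounds

old_bounds = bucket_bounds(1, 16)   # 16 points starting from 1
new_bounds = bucket_bounds(32, 20)  # 20 points starting from 32
```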
Glauber Costa
fc4416abcc estimated_histogram: bring histogram closer to what prometheus expects.
Histograms are a native prometheus type, and there are many functions
available that operate on them. There is extensive documentation about
them at https://prometheus.io/docs/practices/histograms/

One example is the function histogram_quantile(), that can extract
useful quantiles from the histograms. Currently, those functions don't
work well.

The reasons are twofold:
1) We are only exporting 16 metrics, starting from 1usec. That means
   that the highest latency we can differentiate is 4ms. After that,
   everything falls into the same bin.

2) The format that prometheus expects is that each bin will contain
   the total number of points seen *up until that bin*, while we
   currently export the total number of points that fall between bins.
   IOW, it is a cumulative histogram.

About point two: granted, it is a bit hidden on their website, but it is
there. The following phrase about a caveat makes it clear:

 "Note that we divide the sum of both buckets. The reason is that the
  histogram buckets are cumulative. The le="0.3" bucket is also contained
  in the le="1.2" bucket; dividing it by 2 corrects for that."

It is also not needed to accumulate points that fall beyond the last bin:
the _count component of the histogram will already account for that.

This patch changes the histogram format to be more in line with what
prometheus expects.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-10-04 20:01:13 -04:00
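The conversion the patch describes — from per-bin counts to the cumulative buckets Prometheus expects — is a running sum. A minimal sketch in Python:

```python
from itertools import accumulate

def to_cumulative(per_bin_counts):
    # A Prometheus bucket with bound le=X counts every observation <= X,
    # so each bin's count is added to all following buckets.
    return list(accumulate(per_bin_counts))

print(to_cumulative([3, 5, 2]))  # prints [3, 8, 10]
```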
Avi Kivity
ab65b42bb6 size_estimates: remove ambiguity in call to std::ref()
The call to std::ref() is not namespace-qualified, and so can conflict
with seastar::ref().

Fix by naming std::ref() explicitly.
Message-Id: <20171004155250.4960-1-avi@scylladb.com>
2017-10-04 18:31:40 +02:00
Duarte Nunes
953888d0d0 Merge "Auth: pluggable auth + "transitional" auth-objects" from Calle
"Makes authorizer/authenticator actually pluggable (by class name)
and adds a "Transitional" type for both, conforming to the DSE
definition of the types.

The idea is to allow a rolling upgrade of a cluster to
authentication op by first making all clients provide credentials
(ignored by non-auth), then node by node enable auth with
transitional handlers, then ensure user DB is populated and
distributed, and finally rollingly enable strict auth for
each node. Pfew."

Fixes #2836

* auth: Transitional auth wrappers
  auth: Make authenticator/authorizer use actual name based lookup
2017-10-04 12:46:16 +02:00
Calle Wilund
611e00646b auth: Transitional auth wrappers
Similar to the DSE objects with similar names. Basically ignores
all authentication/authorization except "superuser" login. All other
sessions are treated as anonymous.
Note: like DSE counterparts, a client session must still _use_
authentication to be able to connect, even though the actual content of
the auth is mostly ignored.
2017-10-04 12:44:44 +02:00
Calle Wilund
b96a7ae656 auth: Make authenticator/authorizer use actual name based lookup
Allowing for pluggable auth objects.

Note: requires "class_registrator: Fix qualified name matching +
provider helpers" patch previously sent.
2017-10-04 12:44:44 +02:00
Calle Wilund
801ee44cb8 class_registrator: Fix qualified name matching + provider helpers
Should not assume namespace "org", nor should we allow "loose"
substring matching.
2017-10-04 12:43:42 +02:00
Calle Wilund
3c509e0333 class_registrator: Allow different return types
Allows registry to give back, for example, shared_ptr etc instead of
solely unique_ptr. If a registry is defined with seastar/std
shared/lw_shared/unique_ptr as "BaseType", the type will assume
this is the intended result type.
2017-10-04 12:43:42 +02:00
Avi Kivity
bdbbfe9390 Merge "Make restricting_mutation_reader more accurate" from Botond
"Currently restricting_mutation_reader restricts mutation_readers on a
count basis. This is inaccurate on multiple levels. The reader might be
a combined_mutation_reader, which might be composed of multiple
individual readers, whose number might change during the lifetime of the
reader. The memory consumption of the readers can vary and may change
during the lifetime of the reader as well.
To remedy this, make the restriction memory-consumption based. The
restricting semaphore is now configured with the amount of memory
(bytes) that its readers are allowed to consume in total. New readers
consume 128k units up-front to account for read-ahead buffers, and then
consume additional units for any buffer (returned
from input_stream<>::read()) they keep around.
Like before, readers already allowed to read will not be blocked;
instead, new readers will be blocked on their first read if all the
units are consumed.

Fixes #2692."

* 'bdenes/restricting_mutation_reader-v5' of https://github.com/denesb/scylla:
  Update reader restriction related metrics
  Add restricted_reader_test unit test
  restricted_mutation_reader: restrict based-on memory consumption
  mutation_reader.hh: Move restricted_reader related code
2017-10-04 12:43:58 +03:00
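The accounting scheme from the merge message — an up-front charge per reader plus per-buffer charges, released when the reader goes away — can be sketched as follows. The class and method names are invented, and this semaphore only tracks counts rather than actually blocking new readers:

```python
class MemorySemaphore:
    """Shared memory budget for readers. In the real code, new readers
    block when the budget is exhausted; existing readers keep running."""
    def __init__(self, budget_bytes):
        self.available = budget_bytes

    def take(self, units):
        # A real implementation would wait asynchronously here.
        self.available -= units

    def give(self, units):
        self.available += units

class ReaderPermit:
    """Tracks one reader's memory units against the shared budget."""
    UPFRONT = 128 * 1024  # units taken at creation for read-ahead buffers

    def __init__(self, semaphore):
        self._sem = semaphore
        self._held = 0
        self.consume(self.UPFRONT)

    def consume(self, units):
        # Charged for every buffer the reader keeps around.
        self._sem.take(units)
        self._held += units

    def release_all(self):
        # Return everything when the reader is destroyed.
        self._sem.give(self._held)
        self._held = 0
```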
Paweł Dziepak
fdfa6703c3 Merge "loading_shared_values and size limited and evicting prepared statements cache" from Vlad
"
The original motivation for the "utils: introduce a loading_shared_values" series was a hinted handoff work where
I needed an on-demand asynchronously loading key-value container (a replica address to a commitlog instance map).

It turned out that we already have the classes that do almost what I needed:
   - utils::loading_cache
   - sstables::shared_index_lists

Therefore it made sense to find a common ground, unify this functionality and reuse the code both in the classes above and in the
new hinted handoff code.

This series introduces the utils::loading_shared_values that generalizes the sstables::shared_index_lists
API on top of bi::unordered_set with the rehashing logic from the utils::loading_cache triggered by an addition
of an entry to the set (PATCH1).

Then it reworks the sstables::shared_index_lists and utils::loading_cache on top of the new class (PATCH2 and PATCH3).

PATCH4 optimizes the loading_cache for the long timer period use case.

But then we have discovered that we have another "customer" for the loading_cache. Apparently our prepared statements cache
had a birth flaw - it was unlimited in size: unless the corresponding keyspace and/or table is modified/dropped, the entries
are never evicted. We clearly need to limit its size, and it would also make sense to evict cache entries that haven't been
used for a long time.

This seems like a perfect match for a utils::loading_cache, except that prepared statements don't need to be reloaded after
they are created.

Patches starting from PATCH5 are dealing with adding the utils::loading_cache the missing functionality (like making the "reloading"
conditional and adding the synchronous methods like find(key)) and then transitioning the CQL and Thrift prepared statements
caches to utils::loading_cache.

This also fixes #2474."

* 'evict_unused_prepared-v5' of https://github.com/vladzcloudius/scylla:
  tests: loading_cache_test: initial commit
  cql3::query_processor: implement CQL and Thrift prepared statements caches using cql3::prepared_statements_cache
  cql3: prepared statements cache on top of loading_cache
  utils::loading_cache: make the size limitation more strict
  utils::loading_cache: added static_asserts for checking the callbacks signatures
  utils::loading_cache: add a bunch of standard synchronous methods
  utils::loading_cache: add the ability to create a cache that would not reload the values
  utils::loading_cache: add the ability to work with not-copy-constructable values
  utils::loading_cache: add EntrySize template parameter
  utils::loading_cache: rework on top of utils::loading_shared_values
  sstables::shared_index_list: use utils::loading_shared_values
  utils: introduce loading_shared_values
2017-10-04 09:13:32 +01:00
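The core idea — a size-limited cache that loads entries on demand and evicts the least recently used ones — can be sketched in a few lines. This is a toy model, not the utils::loading_cache interface, and it omits the asynchronous loading and timer-based expiry the real class provides:

```python
from collections import OrderedDict

class LoadingCache:
    """Minimal size-limited, on-demand loading cache with LRU eviction."""
    def __init__(self, max_size, loader):
        self._max_size = max_size
        self._loader = loader
        self._entries = OrderedDict()

    def get(self, key):
        if key in self._entries:
            self._entries.move_to_end(key)  # mark as most recently used
            return self._entries[key]
        value = self._loader(key)           # load on demand
        self._entries[key] = value
        if len(self._entries) > self._max_size:
            self._entries.popitem(last=False)  # evict least recently used
        return value
```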
Daniel Fiala
1133838b9f types: Add data_type_for for varint and decimal, data_value constructor for simple_date_type.
Signed-off-by: Daniel Fiala <daniel@scylladb.com>
Message-Id: <20171004044040.21631-1-daniel@scylladb.com>
2017-10-04 10:52:57 +03:00
Tomasz Grabiec
f506339582 tests: perf_fast_forward: Auto-create test directory
To avoid exception due to missing directory.

Message-Id: <1506081627-12933-1-git-send-email-tgrabiec@scylladb.com>
2017-10-03 15:36:37 +03:00
Botond Dénes
fea6214a0a Update reader restriction related metrics
Update description of existing reader count metrics, add memory
consumption metrics. Use labels to distinguish between system, user and
streaming reads related metrics.
2017-10-03 12:44:17 +03:00
Botond Dénes
3280fbc4d4 Add restricted_reader_test unit test 2017-10-03 12:44:17 +03:00
Botond Dénes
47e07b787e restricted_mutation_reader: restrict based-on memory consumption
Restrict readers based on their memory consumption, instead of the count
of the top-level readers. To do this an interposer is installed at the
input_stream level which tracks buffers emitted by the stream. This way
we can have an accurate picture of the readers' actual memory
consumption.
New readers will consume 16k units from the semaphore up-front. This is
to account for their own memory consumption, apart from the buffers they
will allocate. Creating the reader will be deferred to when there are
enough resources to create it. As before, only new readers will be
blocked on an exhausted semaphore; existing readers can continue to
work.
2017-10-03 12:44:12 +03:00
Botond Dénes
0a07e9e7c7 mutation_reader.hh: Move restricted_reader related code
In preparation of make_restricted_reader taking a mutation_source as
its argument.
2017-10-03 12:39:22 +03:00
Avi Kivity
78eae8bf48 Revert "Merge "Make restricting_mutation_reader more accurate" from Botond"
This reverts commit c6e5dcc556, reversing
changes made to 19b21a0ab2. Fails to build,
plus author has more changes.
2017-10-03 11:58:59 +03:00
Pekka Enberg
641f28da02 cql3/statements: Clean up select_statement class definition
We have some historical #ifdef'd code that really ought to be removed by now...

Message-Id: <1507015932-8165-1-git-send-email-penberg@scylladb.com>
2017-10-03 11:17:32 +03:00
Avi Kivity
c6e5dcc556 Merge "Make restricting_mutation_reader more accurate" from Botond
"Currently restricting_mutation_reader restricts mutation_readers on a
count basis. This is inaccurate on multiple levels. The reader might be
a combined_mutation_reader, which might be composed of multiple
individual readers, whose number might change during the lifetime of the
reader. The memory consumption of the readers can vary and may change
during the lifetime of the reader as well.
To remedy this, make the restriction memory-consumption based. The
restricting semaphore is now configured with the amount of memory
(bytes) that its readers are allowed to consume in total. New readers
consume 128k units up-front to account for read-ahead buffers, and then
consume additional units for any buffer (returned
from input_stream<>::read()) they keep around.
Like before, readers already allowed to read will not be blocked;
instead, new readers will be blocked on their first read if all the
units are consumed."

Fixes #2692.

* 'bdenes/restricting_mutation_reader-v4' of https://github.com/denesb/scylla:
  Update reader restriction related metrics
  Add restricted_reader_test unit test
  restricted_mutation_reader: restrict based-on memory consumption
  mutation_reader.hh: Move restricted_reader related code
2017-10-03 11:15:34 +03:00
Daniel Fiala
19b21a0ab2 types: Allow 'T' as a date-time separator in timestamps.
* Letter 'T' is specified in ISO 8601 and also in Cassandra
  documentation.

Signed-off-by: Daniel Fiala <daniel@scylladb.com>
Message-Id: <20171003073558.19257-1-daniel@scylladb.com>
2017-10-03 11:10:11 +03:00
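The two accepted separators can be modelled with a small parser sketch. The format strings here are chosen for illustration; the real parser accepts further variants (fractional seconds, time zones, and so on):

```python
from datetime import datetime

def parse_timestamp(text):
    # Accept both a space and 'T' (per ISO 8601) as the date-time separator.
    for fmt in ("%Y-%m-%d %H:%M:%S", "%Y-%m-%dT%H:%M:%S"):
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    raise ValueError("unrecognized timestamp: " + text)

print(parse_timestamp("2017-10-03T11:10:11"))
```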
Avi Kivity
3cc1c2c387 Merge seastar upstream
* seastar 899fc4e...c62bbf9 (6):
  > Merge "CPU Scheduler for seastar" from Avi
  > reactor: set SCHED_FIFO policy for timer thread
  > future: mark future::wait() as noexcept
  > shared_promise: Make get_shared_future() const-qualified
  > Remove pessimizing and redundant std::move()-s reported by Clang-tidy utility
  > Work around GCC 5 bug: scylladb/seastar#338, scylladb/seastar#339
2017-10-02 20:47:32 +03:00
Avi Kivity
dd5ab75d04 range: add missing include
Message-Id: <20171002144608.5032-1-avi@scylladb.com>
2017-10-02 16:49:24 +02:00
Avi Kivity
5ed6d1b176 dist: enable CAP_SYS_NICE
Allow scylla to use SCHED_FIFO for the timer thread for more accurate
scheduling.
Message-Id: <20171001121500.28318-1-avi@scylladb.com>
2017-10-02 16:32:00 +02:00
Avi Kivity
dbce5158a3 Update ami submodule
* dist/ami/files/scylla-ami 5ffa449...be90a3f (1):
  > amazon kernel: enable updates
2017-10-02 17:07:09 +03:00
Piotr Jastrzebski
83fd22face Add test to reproduce #2854
When memtable gets flushed, existing mutation_readers created
for it stop handling fast_forward_to correctly.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <f580ac59f3fcec53e7c78ad7a8b6374eb36958c6.1506690042.git.piotr@scylladb.com>
2017-09-29 15:17:53 +02:00
Piotr Jastrzebski
2583207d9d Fix memtable scanning_reader::fast_forward_to
If memtable is flushed then call fast_forward_to on _delegate.
Otherwise call iterator_reader::fast_forward_to.

Fixes #2854

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <6bf1c8bafce845ef945698ce4d722c3c8606e632.1506690042.git.piotr@scylladb.com>
2017-09-29 15:17:39 +02:00
Asias He
c0b965ee56 gossip: Better check for gossip stabilization on startup
This is a backport of Apache CASSANDRA-9401
(2b1e6aba405002ce86d5badf4223de9751bf867d)

It is better to check that the number of nodes in the endpoint_state_map
is not changing when waiting for gossip stabilization.

Fixes #2853
Message-Id: <e9f901ac9cadf5935c9c473433dd93e9d02cb748.1506666004.git.asias@scylladb.com>
2017-09-29 08:57:25 +02:00
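The stabilization criterion — the endpoint count staying unchanged across consecutive polls — can be sketched like this. The function and parameter names, and the poll counts, are illustrative rather than taken from the backported patch:

```python
def wait_for_gossip_to_settle(poll_node_count, required_stable_polls=3,
                              max_polls=100):
    """Return True once the node count has stayed unchanged for several
    consecutive polls; False if it never settles within max_polls."""
    stable = 0
    last = poll_node_count()
    for _ in range(max_polls):
        current = poll_node_count()
        if current == last:
            stable += 1
            if stable >= required_stable_polls:
                return True
        else:
            stable = 0  # membership changed; start counting again
            last = current
    return False
```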
Tomasz Grabiec
d75f243a8b Update seastar submodule
Fixes #2770.
Fixes #2819.

* seastar 92fdce2...899fc4e (14):
  > scollectd: increment the metadata iterator with the values
  > Enable Travis CI builds for Seastar.
  > tests: Fix httpd test compilation error caused by unconditionally explicit tuple constructor in GCC5: scylladb/seastar#326
  > core::shared_future: add available() and failed() methods
  > rpc: make sure that _write_buf stream is always properly closed
  > log: Fail on attempt to register logger with the same name twice
  > Merge "Make backtraces useful on ASLR-enabled machines as well" from Botond
  > reactor: add option to bypass fsync
  > future-util: modernize do_until() implementation
  > future-util: fix do_until() API to not have forwarding references
  > input_stream: add rvalue variant of input_stream::consume()
  > logger: remove extra spaces after timestamp
  > tutorial: lifetime management
  > Fix broken link for fsqual failure message
2017-09-28 15:27:34 +02:00
Piotr Jastrzebski
6069bab755 Cache single queries to non-existing partitions
This way we don't need to query sstables again
when the query is repeated.

Fixes #1533

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <8f8559ff19c534dbbb7c9ef6c28271cec607ba20.1506521461.git.piotr@scylladb.com>
2017-09-27 16:15:18 +02:00
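The idea of caching negative lookups can be sketched as follows. The names here are invented; the actual change records the absence of a partition in the row cache so repeated single-partition reads skip the sstables:

```python
_MISSING = object()  # sentinel: "we looked, the partition does not exist"

class NegativeLookupCache:
    """Caches both hits and misses, so a repeated query for a
    non-existing key never reaches the expensive backend again."""
    def __init__(self, backend_read):
        self._backend_read = backend_read  # returns a row, or None
        self._cache = {}

    def read(self, key):
        if key in self._cache:
            cached = self._cache[key]
            return None if cached is _MISSING else cached
        row = self._backend_read(key)
        # Cache misses too, so the next lookup is answered from cache.
        self._cache[key] = _MISSING if row is None else row
        return row
```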
Tomasz Grabiec
b704710954 migration_manager: Make sure schema pulls eventually happen when schema_tables_v3 is enabled
We don't pull schema during rolling upgrade, that is until
schema_tables_v3 feature is enabled on all nodes.

Because features are enabled from gossiper timer, there is a race
between feature enablement and processing of endpoint states which may
trigger schema pull.  It can happen that we first try to pull, but
only later enable the feature. In that case the schema pull will not
happen until the next schema change.

The fix is to ensure that pulls abandoned due to feature not being enabled
will be retried when it is enabled.

Fixes sporadic failure in dtest:

  repair_additional_test.py:RepairAdditionalTest.repair_schema_test
Message-Id: <1506428715-8182-2-git-send-email-tgrabiec@scylladb.com>
2017-09-27 12:00:07 +01:00
Tomasz Grabiec
7a58fb5767 gossiper: Allow waiting for feature to be enabled
Message-Id: <1506428715-8182-1-git-send-email-tgrabiec@scylladb.com>
2017-09-27 11:57:06 +01:00
Raphael S. Carvalho
63eb9f61c0 db: use correct dirty memory manager for system column families
The dirty memory manager for non-system column families was being used
when applying mutations to system cfs.
That previously led to a deadlock when updating history. Basically,
disabling writes waits on compaction, and compaction waits on a write
that would release dirty memory for updating compaction history.

Only using the correct dirty manager wouldn't solve this problem
if write is disabled for system cf, but the problem is completely
solved in addition to previous change which updates history
outside the sstable lock.

Refs #2769.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170918215238.9810-3-raphaelsc@scylladb.com>
2017-09-26 19:51:31 +02:00
Raphael S. Carvalho
e34c1db642 db: update compaction history outside the sstable write lock
The reason is that compaction can deadlock: if refresh
disables writes, it waits for compaction, and compaction in turn
waits for dirty memory[1] that would be released by a memtable write.

Dirty memory manager for non-system cfs was being used for system cfs,
which was useful for exposing this problem.

[1]: when updating compaction history.

Fixes #2769.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170918215238.9810-2-raphaelsc@scylladb.com>
2017-09-26 19:51:12 +02:00
Asias He
4b1034b9cd storage_service: Remove the stream_hints
Our hinted handoff implementation will not use the
db::system_keyspace::HINTS system table to store hints.
No need to stream them.

Acked-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <3b9190e250b54321ceb87767f4722c7458d41797.1506391500.git.asias@scylladb.com>
2017-09-26 19:05:21 +03:00
Paweł Dziepak
af1976bc30 Merge "Fix cache reader skipping rows in some cases" from Tomasz
"Fixes the problem of concurrent populations of clustering row ranges
leading to some readers skipping over some of the rows.
Spotted during code review.

Fixes #2834."

* tag 'tgrabiec/fix-cache-reader-skipping-rows-v2' of github.com:scylladb/seastar-dev:
  tests: mvcc: Add test for partition_snapshot_row_cursor
  tests: row_cache: Add test for concurrent population
  tests: row_cache: Make populate_range() accept partition_range
  tests: Add simple_schema::make_ckey_range()
  cache_streamed_mutation: Add missing _next_row.maybe_refresh() call
  mvcc: partition_snapshot_row_cursor: Fix cursor skipping over rows added after its position
  mvcc: partition_snapshot_row_cursor: Rename up_to_date() to iterators_valid()
  mvcc: Keep track of all iterators in partition_snapshot_row_cursor
  mvcc: Make partition_snapshot_row_cursor printable
2017-09-26 15:09:58 +01:00
Tomasz Grabiec
3eb251e3a4 tests: perf_fast_forward: Fail if ran with more than one shard
The test reads only from local shard, if ran with more shards,
current shard will miss some of the data.

Message-Id: <1506081609-12811-1-git-send-email-tgrabiec@scylladb.com>
2017-09-26 15:23:10 +03:00
Calle Wilund
dd2b8821a4 everywhere_strategy: Make get_natural_endpoints handle non-init state
Make get_natural_endpoints return the local address iff token metadata
is not yet set up (since that is the one address we already know of).

If a request has a consistency level requiring more endpoints, it
will still fail, but calls with, for example, CL=ONE will succeed at
startup and more or less act like the local strategy. Yet, further
down the line, data will be distributed as desired.

Acked-by: Gleb Natapov <gleb@scylladb.com>
Message-Id: <20170926113512.15707-1-calle@scylladb.com>
2017-09-26 15:21:30 +03:00
Asias He
98e9049820 gossip: Print SCHEMA_TABLES_VERSION correctly
Found this when debugging gossip with debug print. The application state
SCHEMA_TABLES_VERSION was printed as UNKNOWN.
Message-Id: <d7616920d2e6516b5470a758bcf9c88f3d857381.1506391495.git.asias@scylladb.com>
2017-09-26 08:38:28 +02:00
Tomasz Grabiec
e5e9886014 tests: mvcc: Add test for partition_snapshot_row_cursor 2017-09-25 11:21:58 +02:00
Tomasz Grabiec
e4adc9c600 tests: row_cache: Add test for concurrent population 2017-09-25 11:21:58 +02:00
Tomasz Grabiec
a3fb7ce660 tests: row_cache: Make populate_range() accept partition_range 2017-09-25 11:21:58 +02:00
Tomasz Grabiec
dd7af02251 tests: Add simple_schema::make_ckey_range() 2017-09-25 11:21:58 +02:00
Tomasz Grabiec
e83cd508f6 cache_streamed_mutation: Add missing _next_row.maybe_refresh() call
We were checking if the cursor is up_to_date(), but this is not enough
to guarantee that the cursor is valid, merely that its iterators are
valid. The cursor may be invalidated even if its iterators are valid
if there was an insertion after cursor's position.

Fixes #2834.
2017-09-25 11:21:58 +02:00
Tomasz Grabiec
2f8d91043d mvcc: partition_snapshot_row_cursor: Fix cursor skipping over rows added after its position
The cursor maintains a heap of iterators in all versions. If rows were
inserted before the latest version's iterator, cursor would not see
them. Fix by redoing the lookup for iterators not in the current row
in maybe_refresh().

Refs #2834.
2017-09-25 11:21:58 +02:00
Tomasz Grabiec
09d99b0358 mvcc: partition_snapshot_row_cursor: Rename up_to_date() to iterators_valid() 2017-09-25 11:21:58 +02:00
Tomasz Grabiec
4ee11641c0 mvcc: Keep track of all iterators in partition_snapshot_row_cursor
Will be needed when updating the iterator for latest version. Before
this change, such iterator could be neither in _current_row nor in
_heap.

Besides that, this will allow user to always access the iterator of
latest version, which enables some optimizations in the future of
avoiding unnecessary lookups. get_iterator_in_latest_version() is now
always valid.
2017-09-25 11:21:58 +02:00
Tomasz Grabiec
a8cbd34dde mvcc: Make partition_snapshot_row_cursor printable 2017-09-25 11:21:58 +02:00
Tomasz Grabiec
8e46d15f91 storage_service: Register features before joining
Since commit 8378fe190, we disable schema sync in a mixed cluster.
The detection is done using gossiper features. We need to make sure
the features are registered, and thus can be enabled, before the
bootstrapping of a non-seed node happens. Otherwise the bootstrap will
hang waiting on schema sync which will not happen.
Message-Id: <1505893837-27876-2-git-send-email-tgrabiec@scylladb.com>
2017-09-25 09:13:02 +01:00
Tomasz Grabiec
b92dcb0284 storage_service: Extract register_features()
Message-Id: <1505893837-27876-1-git-send-email-tgrabiec@scylladb.com>
2017-09-25 09:12:46 +01:00
Tomasz Grabiec
d11d696072 tests: mutation_source_tests: Fix use-after-scope on partition range
Message-Id: <1506096881-3076-1-git-send-email-tgrabiec@scylladb.com>
2017-09-22 19:13:47 +02:00
Botond Dénes
015ac042a8 combined_mutation_reader_test: remove unneeded includes
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <a388efa6fc93049f4d69c049764cc9225a04bce4.1506098363.git.bdenes@scylladb.com>
2017-09-22 18:45:04 +02:00
Botond Dénes
a7984a9908 combined_mutation_reader_test: remove leftover debug logging
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <96e61fcd2543ec84921f1b2188d7248e55e7efe0.1506097635.git.bdenes@scylladb.com>
2017-09-22 18:44:47 +02:00
Tomasz Grabiec
5def901a92 sstables: Don't register logger with the same name twice
There can be only one logger with a given name. This was causing
--logger-log-level sstable=trace to not work for the majority of log
points.

Message-Id: <1505902259-4561-1-git-send-email-tgrabiec@scylladb.com>
2017-09-20 16:40:06 +03:00
Piotr Jastrzebski
98c359d7de Introduce a position for partition start
This position will be used for mutation fragment
that represents the start of a partition.

This position sorts before static row.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-09-20 11:34:03 +02:00
Piotr Jastrzebski
e1f7d1f25d streamed_mutation: Extract concepts using GCC6_CONCEPT macro
It makes it easier to actually use those concepts.

Lambdas passed to mutation_fragment::visit have to declare
return type otherwise compiler fails with:

internal compiler error: Segmentation fault

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
2017-09-20 11:34:03 +02:00
Tomasz Grabiec
02d41864af Merge "Fix miss opportunity to update gossiper features" from Asias
The gossiper checks if features should be enabled from its timer
callback when it detects that endpoint_state_map changed, that is
different than shadow_endpoint_state_map.

shadow_endpoint_state_map is also assigned from endpoint_state_map in
storage_service::replicate_tm_and_ep_map(), called from
storage_service::on_change()

Call gossiper:maybe_enable_features() in replicate_tm_and_ep_map so
that we won't miss gossip feature update.

Fixes #2824

* git@github.com:scylladb/seastar-dev asias/gossip_miss_feature_update_v1:
  gossip: Move the _features_condvar signal code to
    maybe_enable_features
  gossip: Make maybe_enable_features public
  storage_service: Check gossip feature update in
    replicate_tm_and_ep_map
2017-09-20 11:16:37 +02:00
Asias He
ebc3bada12 storage_service: Check gossip feature update in replicate_tm_and_ep_map
This is another place we can update endpoint_state_map in addition to
gossiper::run().

Call the gossiper:maybe_enable_features() so that we won't miss gossip
feature update.
2017-09-20 16:58:33 +08:00
Asias He
6022b7423a gossip: Make maybe_enable_features public
It will be needed by storage_service.
2017-09-20 16:58:33 +08:00
Asias He
68c7a391b5 gossip: Move the _features_condvar signal code to maybe_enable_features
It is easier to call the features update logic from outside the gossiper.
2017-09-20 16:58:32 +08:00
Asias He
173cba67ba storage_service: Remove rpc client on all shards in on_dead
We should close connections to nodes that are down on all shards instead
of the shard which runs the on_dead gossip callback.

Found by Gleb.
Message-Id: <527a14105a07218066e9f1da943693d9de6993e5.1505894260.git.asias@scylladb.com>
2017-09-20 10:23:31 +02:00
Botond Dénes
43dba8f173 Update reader restriction related metrics
Update description of existing reader count metrics, add memory
consumption metrics.
2017-09-20 11:16:21 +03:00
Botond Dénes
b2db29dc65 Add restricted_reader_test unit test 2017-09-20 11:15:45 +03:00
Botond Dénes
33e97e7457 restricted_mutation_reader: restrict based-on memory consumption
Restrict readers based on their memory consumption, instead of the count
of top-level readers. To do this, an interposer is installed at the
input_stream level which tracks buffers emitted by the stream. This way
we have an accurate picture of the readers' actual memory
consumption.
New readers will consume 16k units from the semaphore up-front, to
account for their own memory consumption apart from the buffers they
will allocate. Creating the reader is deferred until there are
enough resources to create it. As before, only new readers are
blocked on an exhausted semaphore; existing readers can continue to
work.
2017-09-20 11:14:35 +03:00
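The admission scheme described above can be sketched as a plain-C++ model. All names here (`reader_concurrency_budget`, `try_admit`) are hypothetical; the real code uses a seastar semaphore and defers reader creation rather than failing:

```cpp
#include <cassert>
#include <cstddef>

// Simplified sketch of memory-based reader admission (hypothetical names).
class reader_concurrency_budget {
    std::size_t _available;
public:
    explicit reader_concurrency_budget(std::size_t total) : _available(total) {}
    // Each new reader pays a fixed up-front cost for its own footprint.
    static constexpr std::size_t up_front_cost = 16 * 1024;
    // Returns false when a new reader must wait (budget exhausted).
    bool try_admit() {
        if (_available < up_front_cost) {
            return false;
        }
        _available -= up_front_cost;
        return true;
    }
    // Called by the input_stream interposer as buffers are emitted.
    bool try_consume(std::size_t bytes) {
        if (_available < bytes) {
            return false;
        }
        _available -= bytes;
        return true;
    }
    void release(std::size_t bytes) { _available += bytes; }
    std::size_t available() const { return _available; }
};
```

Note the asymmetry the commit describes: only admission (`try_admit`) blocks on an exhausted budget; already-admitted readers keep consuming.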
Botond Dénes
e4a9e55e0d mutation_reader.hh: Move restricted_reader related code
In preparation of make_restricted_reader taking a mutation_source as
its argument.
2017-09-20 11:12:57 +03:00
Tomasz Grabiec
741ec61269 streaming: Fix streaming not streaming all ranges
It skipped one sub-range in each batch of 10 ranges, and
tried to access the range vector using the end() iterator.

Fixes sporadic failures of
update_cluster_layout_tests.py:TestUpdateClusterLayout.simple_add_node_1_test.

Message-Id: <1505848902-16734-1-git-send-email-tgrabiec@scylladb.com>
2017-09-20 10:33:59 +03:00
Avi Kivity
5b0cb28af9 Merge "row_cache: Call fast_forward_to() outside allocating section" from Tomasz
"On bad_alloc the section is retried. If the exception happened inside
fast_forward_to() on the underlying reader, that call will be
retried. However, the reader should not be used after exception is
thrown, since it is in unspecified state. Also, calling
fast_forward_to() with cache region locked increases the chances of it
failing to allocate.

We shouldn't call fast_forward_to() with the cache region locked.

Fixes #2791."

* 'tgrabiec/dont-ffwd-in-alloc-section' of github.com:scylladb/seastar-dev:
  cache_streamed_mutation: De-futurize cursor movement
  cache_streamed_mutation: Call fast_forward_to() outside allocating section
  cache_streamed_mutation: Switch from flags to explicit state machine
2017-09-19 17:11:22 +03:00
Botond Dénes
96c6d54a5c incremental_reader_selector: Remove unnecessary check for duplicated next_token
The next_token will never be the same as the current _selector_position,
unless they are both maximum_token, which is already handled.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <9c54ae07a18d201185027c9b533bcb5256bead8a.1505826102.git.bdenes@scylladb.com>
2017-09-19 16:42:02 +03:00
Avi Kivity
12c393dd16 build: default to gold linker
Better, faster.
Message-Id: <20170919115737.12084-1-avi@scylladb.com>
2017-09-19 14:02:31 +02:00
Avi Kivity
a31ade54e0 streamed_mutation: optimize merge_mutations() if only one mutation
If we read a partition from a single sstable (a fairly common case),
we can bypass mutation_merger and just return the input.
Message-Id: <20170918181418.14021-1-avi@scylladb.com>
2017-09-19 11:00:59 +01:00
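The fast path described above can be sketched with a minimal model. The `stream`/`merge_streams` names are hypothetical stand-ins (the real code operates on streamed_mutations), and concatenation stands in for the full merge:

```cpp
#include <cassert>
#include <utility>
#include <vector>

using stream = std::vector<int>;  // stand-in for a mutation stream

// Sketch of the single-input fast path: when a partition comes from one
// sstable only, return that input directly instead of building a merger.
stream merge_streams(std::vector<stream> inputs) {
    if (inputs.size() == 1) {
        return std::move(inputs[0]);  // bypass the merge machinery entirely
    }
    stream merged;  // concatenation as a stand-in for the real k-way merge
    for (auto& s : inputs) {
        merged.insert(merged.end(), s.begin(), s.end());
    }
    return merged;
}
```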
Botond Dénes
8cb953b58b incremental_reader_selector: don't create readers unconditionally on ff
When fast-forwarding check that the new position is past the selector
before attempting to create new readers. Also don't clear the set of
already created readers and don't overwrite the selector position.

Fixes #2807

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <514f69005eb29c2a3359f098d40abf588900b76f.1505811064.git.bdenes@scylladb.com>
2017-09-19 11:27:47 +02:00
Asias He
8f8273969d gossip: Do not wait for echo message in mark_alive
gossiper::apply_state_locally() calls handle_major_state_change() for
each endpoint, in a seastar thread, which calls mark_alive() for new
nodes, which calls ms().send_gossip_echo(id).get(). It thus synchronously
waits for each node to respond before moving on to the next entry. As
a result it may take a while before the whole state is processed.

Apache (tm) Cassandra (tm) sends echoes in the background.

In a large cluster, we see that by the time the joining node starts
streaming, it hasn't managed to apply all the endpoint_state for peer
nodes, so the joining node does not know some of the nodes yet and
fails to stream from some of the existing nodes.

Fixes #2787
Fixes #2797

Message-Id: <3760da2bef1a83f1b6a27702a67ca4170e74b92c.1505719669.git.asias@scylladb.com>
2017-09-19 10:49:00 +03:00
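The behavioural change can be modelled with a small sketch (hypothetical names; the real code uses seastar futures on messaging_service): issue every echo up front and handle each reply whenever it arrives, instead of blocking per node:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Sketch: send echo requests in the background and mark nodes alive from
// reply callbacks, rather than waiting on each reply before the next echo.
struct echo_log {
    std::vector<std::string> events;
    std::vector<std::function<void()>> pending;
    void send_echo_background(const std::string& node) {
        events.push_back("echo:" + node);
        // Reply handling is deferred; state processing continues immediately.
        pending.push_back([this, node] { events.push_back("alive:" + node); });
    }
    void deliver_replies() {  // replies may arrive much later
        for (auto& f : pending) {
            f();
        }
        pending.clear();
    }
};
```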
Raphael S. Carvalho
1524426deb sstables: Fix compaction correctness of higher-level tables
When incremental_reader_selector is used for compaction, it first
calls the incremental selector of the partitioned sstable set with
the minimum token, which results in the first interval being skipped
and therefore not everything being compacted. The interval is
skipped because the iterator is incorrectly advanced when the token
lies before it.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170918021446.15920-1-raphaelsc@scylladb.com>
2017-09-19 09:59:30 +03:00
Avi Kivity
55e0b63e65 storage_proxy: scan more nodes exponentially to achieve target result set size
The current sequential scan can take a long time on a small or empty table
with a large (nr_nodes * nr_vnodes) count, and can time out. Switching to
exponential scan reduces the time.

Fixes #1230.
Message-Id: <20170912173803.8277-1-avi@scylladb.com>
2017-09-18 15:15:15 +02:00
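A small model of the exponential fan-out (`rounds_to_fill` is a hypothetical function; the real code issues concurrent sub-queries): doubling the number of vnode ranges queried each round bounds the round count at O(log n) on an empty table, versus O(n) for the sequential scan:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Returns how many query rounds are needed to collect `target` rows,
// doubling the number of ranges scanned per round (exponential fan-out).
std::size_t rounds_to_fill(const std::vector<std::size_t>& rows_per_range,
                           std::size_t target) {
    std::size_t collected = 0, next = 0, batch = 1, rounds = 0;
    while (collected < target && next < rows_per_range.size()) {
        for (std::size_t i = 0; i < batch && next < rows_per_range.size(); ++i) {
            collected += rows_per_range[next++];
        }
        batch *= 2;  // exponential growth caps the round count at O(log n)
        ++rounds;
    }
    return rounds;
}
```

On an empty 1024-range table the scan finishes in 11 rounds rather than 1024, which is what prevents the timeout.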
Avi Kivity
e44517851e untyped_result_set: reduce dependencies
Forward-declare untyped_result_set and untyped_result_set_row, and remove
the include from query_processor.hh.
Message-Id: <20170916170859.27612-3-avi@scylladb.com>
2017-09-18 15:15:15 +02:00
Avi Kivity
0317746822 untyped_result_set: make untyped_result_set::row a namespace scope class
Makes it possible to forward-declare, with the aim of reducing dependencies.
Message-Id: <20170916170859.27612-2-avi@scylladb.com>
2017-09-18 15:15:15 +02:00
Pekka Enberg
9ebd8be82b index: Add secondary_index_manager::reload()
This patch adds a reload() function, which updates the secondary index
manager index map to match underlying column family indices.
2017-09-18 14:31:35 +03:00
Duarte Nunes
16d2e4e81b Merge 'reduce sstables.hh coupling' from Glauber
"sstables.hh is already too big, and it is soon to become bigger with the
inclusion of the read_monitor, to pair it with the write_monitor.

It's a good opportunity for us to reduce sstables.hh dependencies by
moving the write monitor to its own reader. One obvious caller is
already changed so we don't need to include sstables.hh anymore."

* 'progress-monitor' of https://github.com/glommer/scylla:
  sstables: do not include sstables.hh from memtable glue
  sstables: move write_monitor to its own header
2017-09-18 13:31:32 +02:00
Pekka Enberg
2ae6b141e5 index: Add secondary_index_manager::list_indexes() 2017-09-18 14:27:35 +03:00
Avi Kivity
a2f26f7b29 log_histogram: rename to log_heap
log_histogram is not really a histogram, it is a heap-like container.
Rename to log_heap in case we do want a log_histogram one day.
Message-Id: <20170916172137.30941-1-avi@scylladb.com>
2017-09-18 12:44:05 +02:00
Amnon Heiman
8d668a9dc0 API: storage_service repair_async_status to return proper error code
This patch changes the implementation of storage_service
repair_async_status to throw an exception, so that a 400 return code
is returned.

Fixes #2794

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <20170917080533.6612-1-amnon@scylladb.com>
2017-09-18 09:08:26 +03:00
Vlad Zolotarov
cea15486c4 tests: loading_cache_test: initial commit
Test utils::loading_shared_values and utils::loading_cache.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-09-15 22:19:15 -04:00
Vlad Zolotarov
66568be969 cql3::query_processor: implement CQL and Thrift prepared statements caches using cql3::prepared_statements_cache
- Transition the prepared statements caches for both CQL and Thrift to the cql3::prepared_statements_cache class.
   - Add the corresponding metrics to the query_processor:
      - Evictions count.
      - Current entries count.
      - Current memory footprint.

Fixes #2474

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-09-15 22:19:15 -04:00
Vlad Zolotarov
8f912b46b1 cql3: prepared statements cache on top of loading_cache
This is a template class that implements caching of prepared statements for a given ID type:
   - Each cache instance is given 1/256 of the total shard memory. If a new entry would overflow
     this memory limit, the least recently used entries are evicted so that the new entry can
     be added.
   - The memory consumption of a single prepared statement is defined by a cql3::prepared_cache_entry_size
     functor class that returns the number of bytes for a given prepared statement (currently returns 10000
     bytes for any statement).
   - A cache entry is evicted if not used for 60 minutes or more.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-09-15 22:19:11 -04:00
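The eviction policy described above can be sketched as a memory-bounded LRU. `lru_stmt_cache` is a hypothetical simplification; the real cache is utils::loading_cache, with asynchronous loading and a per-entry size functor:

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>

// Minimal sketch of a memory-bounded LRU cache (hypothetical names).
class lru_stmt_cache {
    struct entry { std::string key; std::size_t size; };
    std::size_t _max, _used = 0, _evictions = 0;
    std::list<entry> _lru;  // front = most recently used
    std::unordered_map<std::string, std::list<entry>::iterator> _index;
public:
    explicit lru_stmt_cache(std::size_t max_bytes) : _max(max_bytes) {}
    void put(const std::string& key, std::size_t size) {
        // Evict least recently used entries until the new one fits.
        while (_used + size > _max && !_lru.empty()) {
            _index.erase(_lru.back().key);
            _used -= _lru.back().size;
            _lru.pop_back();
            ++_evictions;
        }
        _lru.push_front({key, size});
        _index[key] = _lru.begin();
        _used += size;
    }
    bool touch(const std::string& key) {  // lookup marks entry recently used
        auto it = _index.find(key);
        if (it == _index.end()) {
            return false;
        }
        _lru.splice(_lru.begin(), _lru, it->second);
        return true;
    }
    std::size_t evictions() const { return _evictions; }
    std::size_t used() const { return _used; }
};
```

The `evictions()` and `used()` accessors correspond to the metrics the previous commit exports (evictions count, memory footprint).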
Vlad Zolotarov
9a43398d6a utils::loading_cache: make the size limitation more strict
Ensure that the size of the cache is never bigger than the "max_size".

Before this patch the cache could grow indefinitely bigger than
the requested value during the refresh period, which is clearly
undesirable behaviour.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-09-15 20:53:11 -04:00
Vlad Zolotarov
4e72a56310 utils::loading_cache: added static_asserts for checking the callbacks signatures
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-09-15 20:53:11 -04:00
Vlad Zolotarov
a13362e74b utils::loading_cache: add a bunch of standard synchronous methods
Add a few standard synchronous methods to the cache, e.g. find(), remove_if(), etc.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-09-15 20:53:11 -04:00
Vlad Zolotarov
fa2f8162a5 utils::loading_cache: add the ability to create a cache that would not reload the values
Sometimes we don't want the cached values to be periodically reloaded.
This patch adds the ability to control this using a ReloadEnabled template parameter.

If reloading is not needed, the "loading" function is not given to the constructor
but rather to the get_ptr(key, loader) method (currently this is the only method in use; we may add
the corresponding get(key, loader) method in the future when needed).

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-09-15 20:53:11 -04:00
Vlad Zolotarov
a60a77dfc8 utils::loading_cache: add the ability to work with not-copy-constructable values
The current get(...) interface restricts the cache to copy-constructible
values (it returns future<Tp>).
To make it work with non-copyable values we need to introduce an interface that
returns something like a reference to the cached value (as regular containers do).

We can't return future<Tp&> since the caller would have to somehow ensure that the underlying
value is still alive. A much safer and easier-to-use way is to return a shared_ptr-like
pointer to that value.

"Luckily" for us, the value we actually store in the cache is already wrapped in an lw_shared_ptr,
so we may simply return an object that behaves like a smart_pointer<Tp> while
keeping a "reference" to the object stored in the cache.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-09-15 20:53:11 -04:00
Vlad Zolotarov
c24d85f632 utils::loading_cache: add EntrySize template parameter
Allow a variable entry size.
Provide an EntrySize functor that returns the size of a
specific entry.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-09-15 20:53:11 -04:00
Vlad Zolotarov
6024014f92 utils::loading_cache: rework on top of utils::loading_shared_values
Get rid of the "proprietary" solution for asynchronous values on-demand loading.
Use utils::loading_shared_values instead.

We still need to maintain an intrusive set and list for efficient shrink and invalidate
operations, but their entries no longer contain the actual key and value;
instead they hold a loading_shared_values::entry_ptr, which is essentially a shared pointer
to a key-value pair.

In general, this adds another level of dereferencing to reach the key, but since
we use bi::store_hash<true> in the hook and bi::compare_hash<true> in the bi::unordered_set,
it should not translate into additional set lookup latency.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-09-15 20:53:11 -04:00
Vlad Zolotarov
d56684b1a5 sstables::shared_index_list: use utils::loading_shared_values
Since utils::loading_shared_values API is based on the original shared_index_list
this change is mostly a drop-in replacement of the corresponding parts.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-09-15 20:53:11 -04:00
Vlad Zolotarov
ec3fed5c4d utils: introduce loading_shared_values
This class implements a key-value container that is populated
using a provided asynchronous callback.

A value is loaded on demand and kept while there are active references to it for the given key.

The container ensures that only one load is in flight per key at any given time.

The returned value is a lw_shared_ptr to the actual value.

The value for a specific key is immediately evicted when there are no
more references to it.

The container is based on the boost::intrusive::unordered_set and is rehashed (grown) if needed
every time a new value is added (asynchronously loaded).

The container has a rehash() method that grows or shrinks the container as needed
to keep the load factor in the [0.25, 0.75] range.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-09-15 20:53:06 -04:00
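A synchronous sketch of these semantics (hypothetical class name; the real container is asynchronous and built on boost::intrusive::unordered_set): a weak_ptr map gives single-load-per-key behaviour and immediate eviction when the last reference drops:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <map>
#include <memory>
#include <string>

// Synchronous model of loading_shared_values (hypothetical simplification).
template <typename K, typename V>
class loading_shared_values_sketch {
    std::map<K, std::weak_ptr<V>> _map;
public:
    std::shared_ptr<V> get_or_load(const K& k,
                                   std::function<V(const K&)> loader) {
        if (auto sp = _map[k].lock()) {
            return sp;  // a live reference exists: no reload
        }
        auto sp = std::shared_ptr<V>(new V(loader(k)), [this, k](V* p) {
            _map.erase(k);  // last reference dropped: evict the entry
            delete p;
        });
        _map[k] = sp;
        return sp;
    }
    std::size_t size() const { return _map.size(); }
};
```

The real class additionally serializes concurrent async loads for the same key; this sketch only models the reference-counted lifetime.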
Glauber Costa
2227ae3f19 sstables: do not include sstables.hh from memtable glue
There is no need to include the whole sstables.hh file in
memtable-sstable.hh anymore. All we need is the shared_sstable
definition and the progress monitor.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-09-15 14:16:35 -04:00
Glauber Costa
51829f528d sstables: move write_monitor to its own header
Soon I am about to introduce a read monitor, and pairing infrastructure
to manage it. Having it all live in sstables.hh forces us to include it
every time, even in places that don't really need it.

Move to its own header.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-09-15 14:09:07 -04:00
Tomasz Grabiec
804722b6c8 tests: perf_fast_forward: Fix use-after-scope on partition range
Message-Id: <1505489249-16806-1-git-send-email-tgrabiec@scylladb.com>
2017-09-15 16:34:41 +01:00
Tomasz Grabiec
7b5b461067 cache_streamed_mutation: De-futurize cursor movement
start_reading_from_underlying() doesn't return future<>
any more, so we can simplify this.
2017-09-15 15:41:55 +02:00
Tomasz Grabiec
22019577cc cache_streamed_mutation: Call fast_forward_to() outside allocating section
On bad_alloc the section is retried. If the exception happened inside
fast_forward_to() on the underlying reader, that call will be
retried. However, the reader should not be used after exception is
thrown, since it is in unspecified state. Also, calling
fast_forward_to() with cache region locked increases the chances of it
failing to allocate.

We shouldn't call fast_forward_to() with the cache region locked.

Fixes #2791.
2017-09-15 15:41:55 +02:00
Tomasz Grabiec
3b790a1e80 cache_streamed_mutation: Switch from flags to explicit state machine
We're in one state at a time, so it's better to express it as a single
variable rather than N independent flags.

In preparation before adding more states.
2017-09-15 15:41:55 +02:00
Glauber Costa
eb93d5f8ad database: pass a monitor as a parameter to memtable writer
Right now we pass a permit to the memtable writer, and that permit is
used inside write_memtable_to_sstable to compose a write_monitor.

We would like to extend the write_monitor to include other things, that
right now are not available as parameters to write_memtable_to_sstable -
and which are possibly too specialized to be.

The solution for that is to pass the write_monitor instead of the permit
to the writer. Conceptually, that also makes sense because the
write_monitor is something the sstable writer is aware of. Permits, on
the other hand, are a database concept that is alien to the sstable
writer.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <20170915032836.21154-1-glauber@scylladb.com>
2017-09-15 12:26:56 +02:00
Duarte Nunes
8378fe190a Merge 'Fix schema version mismatch during rolling upgrade from 1.7' from Tomasz
"When there are at least 2 nodes upgraded to 2.0, and the two exchanged schema
for some reason, reads or writes which involve both 1.7 and 2.0 nodes may
start to fail with the following error logged:

    storage_proxy - Exception when communicating with 127.0.0.3: Failed to load schema version 58fc9b89-74ab-37ca-8640-8b38a1204f8d

The situation should heal after whole cluster is upgraded.

Table schema versions are calculated by 2.0 nodes differently than 1.7 nodes
due to change in the schema tables format. Mismatch is meant to be avoided by
having 2.0 nodes calculate the old digest on schema migration during upgrade,
and use that version until next time the table is altered. It is thus not
allowed to alter tables during the rolling upgrade.

Two 2.0 nodes may exchange schema, if they detect through gossip that their
schema versions don't match. They may not match temporarily during boot, until
the upgraded node completes the bootstrap and propagates its new schema
through gossip. One source of such temporary mismatch is construction of new
tracing tables, which didn't exist on 1.7. Such a schema pull will result in a
schema merge, which causes all tables to be altered and their schema versions to
be recalculated. The new schema will not match the one used by 1.7 nodes,
causing reads and writes to fail, because schema requests won't work during
a rolling upgrade from 1.7 to 2.0.

The main fix employed here is to hold schema pulls, even among 2.0 nodes,
until rolling upgrade is complete."

* 'tgrabiec/fix-schema-mismatch' of github.com:scylladb/seastar-dev:
  tests: schema_change_test: Add test_merging_does_not_alter_tables_which_didnt_change test case
  tests: cql_test_env: Enable all features in tests
  schema_tables: Make make_scylla_tables_mutation() visible
  migration_manager: Disable pulls during rolling upgrade from 1.7
  storage_service: Introduce SCHEMA_TABLES_V3 feature
  schema_tables: Don't alter tables which differ only in version
  schema_mutations: Use mutation_opt instead of stdx::optional<mutation>
2017-09-15 10:27:47 +02:00
Tomasz Grabiec
c657eec4cf tests: schema_change_test: Add test_merging_does_not_alter_tables_which_didnt_change test case 2017-09-14 20:26:31 +02:00
Tomasz Grabiec
f0fdf75e7c tests: cql_test_env: Enable all features in tests 2017-09-14 20:26:31 +02:00
Tomasz Grabiec
571cac95ed schema_tables: Make make_scylla_tables_mutation() visible
For tests.
2017-09-14 20:26:31 +02:00
Tomasz Grabiec
5a92c18e63 migration_manager: Disable pulls during rolling upgrade from 1.7
If there is a schema pull during rolling upgrade between two 2.0 nodes,
then schema merge will delete the persisted schema version. When the node
loads that table again, e.g. on restart, it will generate a version
which is different than the one which 1.7 nodes use. This will
cause reads and writes to fail.

To avoid this, disable pulls until all nodes are upgraded.

Fixes #2802.
2017-09-14 20:26:31 +02:00
Tomasz Grabiec
713d75fd51 storage_service: Introduce SCHEMA_TABLES_V3 feature 2017-09-14 20:26:31 +02:00
Tomasz Grabiec
f943d2efbf schema_tables: Don't alter tables which differ only in version
We apply deletion of scylla_tables.version to the incoming schema
mutations so that table schema version is recalculated after merge.
The mutations which we read from local schema tables may not have it
deleted in which case all tables would be considered as differing on
the presence of the version field. Avoid this by deleting the field
from old mutations as well.
2017-09-14 20:26:31 +02:00
Tomasz Grabiec
99272087e6 schema_mutations: Use mutation_opt instead of stdx::optional<mutation> 2017-09-14 20:26:31 +02:00
Takuya ASADA
7662271fc9 dist/ami: show correct message when scylla-ami-setup.service failed
After the service starts, its state may become
"failed", "active" or "activating".
But our script does not accept scylla-ami-setup.service entering the "failed" state,
and as a result it shows the wrong message.
Handle all three states correctly.

Fixes #2759

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1504589079-1986-1-git-send-email-syuu@scylladb.com>
2017-09-14 12:40:05 +03:00
Tomasz Grabiec
0911fbbdef row_cache: Fix row_cache::update_invalidating()
evict() doesn't guarantee that the whole partition is discontinuous.
In particular, partition tombstone cannot be marked as discontinuous.
The parts which are still continuous must be updated.

Broken after c78047fa5b.

Message-Id: <1505375684-28574-1-git-send-email-tgrabiec@scylladb.com>
2017-09-14 10:58:25 +03:00
Asias He
0ec574610d locator: Get rid of assert in token_metadata
In commit 69c81bcc87 (repair: Do not allow repair until node is in
NORMAL status), we saw a coredump due to an assert in
token_metadata::first_token_index.

Throw an exception instead of aborting the whole scylla process.
Message-Id: <c110645cee1ee3897e30a3ae1b7ab3f49c97412c.1504752890.git.asias@scylladb.com>
2017-09-14 10:33:02 +03:00
Gleb Natapov
31e803a36c storage_proxy: wire up percentile speculative read properly
Collect coordinator-side read statistics per CF and use them in the percentile
speculative read executor. Getting a percentile from an estimated_histogram
object is rather expensive, so cache it and recalculate it only once per
second (or when the requested percentile changes).

Fixes #2757

Message-Id: <20170911131752.27369-3-gleb@scylladb.com>
2017-09-14 10:31:26 +03:00
Gleb Natapov
0842faecef estimated_histogram: fix overflow error handling
Currently overflow values are stored in the wrong bucket (the last one
instead of the special "overflow" one), and the percentile() function throws
if an overflow value is present. This patch stores overflow values
in the corresponding bucket and makes percentile() take them into
account instead of throwing.

Message-Id: <20170911131752.27369-2-gleb@scylladb.com>
2017-09-14 10:31:21 +03:00
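The fix can be illustrated with a simplified histogram model (hypothetical flat layout; the real estimated_histogram uses exponentially spaced buckets): overflow values land in a dedicated bucket, and percentile() clamps to the largest known boundary instead of throwing:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Simplified histogram with a dedicated overflow bucket (hypothetical layout).
struct histogram_sketch {
    std::vector<int64_t> boundaries;  // upper bound of each bucket
    std::vector<int64_t> counts;      // counts.size() == boundaries.size() + 1
    void add(int64_t v) {
        std::size_t i = 0;
        while (i < boundaries.size() && v > boundaries[i]) {
            ++i;
        }
        ++counts[i];  // i == boundaries.size() means the overflow bucket
    }
    int64_t percentile(double p) const {
        int64_t total = 0;
        for (auto c : counts) {
            total += c;
        }
        if (total == 0) {
            return 0;
        }
        int64_t rank = static_cast<int64_t>(std::ceil(p * total));
        int64_t seen = 0;
        for (std::size_t i = 0; i < counts.size(); ++i) {
            seen += counts[i];
            if (seen >= rank) {
                // Overflow values are reported as the largest known boundary
                // instead of throwing.
                return boundaries[i < boundaries.size() ? i
                                                        : boundaries.size() - 1];
            }
        }
        return boundaries.back();
    }
};
```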
Asias He
5ff0b113c9 gossip: Fix indentation in apply_state_locally
Message-Id: <2bdefa8d982ad8da7452b41e894f41d865b83b0b.1505356245.git.asias@scylladb.com>
2017-09-14 10:09:50 +03:00
Tomasz Grabiec
65ca8eebb8 mutation_partition: Print rows_entry's position instead of key
For dummy rows, _key doesn't reflect the right position.

Message-Id: <1505317040-6783-1-git-send-email-tgrabiec@scylladb.com>
2017-09-13 20:49:28 +03:00
Avi Kivity
ca8e3c4a78 Merge "Evict from partition snapshots in cache" from Tomasz
"This series fixes the problem of active reads causing OOM due to the fact that
partition snapshots they hold are not evictable. In particular, a single scan
of a partition larger than memory will bad_alloc due to itself.

After this, when partition entry is evicted from cache, data in all the snapshots
is also evicted. We still don't have row-level eviction, but this series lays some
grounds for it by making cache readers prepared for the possibility of rows
being evicted.

Fixes #2775.
Fixes #2730."

* tag 'tgrabiec/snapshot-evicition-in-cache-v1' of github.com:scylladb/seastar-dev:
  tests: Add test for partition_entry::evict()
  mutation_partition: Introduce range continuity checking methods
  mutation_partition: Enable rows_entry::compare() on position_in_partition_views
  tests: Extract mvcc tests to separate file
  tests: row_cache: Add eviction tests
  tests: simple_schema: Add new_tombstone() helper
  tests: streamed_mutation_assertions: Introduce produces(mutation&)
  streamed_mutation: Allow setting buffer capacity
  row_cache: Evict partition snapshots
  mvcc: Introduce partition_entry::evict()
  row_cache: Handle eviction in partition reader
  tests: row_cache_test: Don't assume mvcc snapshots are not evictable
  row_cache: Reuse allocation_strategy::invalidate_references()
  row_cache: Don't invalidate references on insertion
  lsa: Move reclaim counter concept to allocation_strategy level
  mvcc: Ensure partition_snapshot always destroys versions using proper allocator
  mvcc: Encapsulate reference stability check in partition_snapshot
  mvcc: Store LSA region reference in partition_snapshot
2017-09-13 20:48:33 +03:00
Tomasz Grabiec
a45b1ef4bc sstables: Make atomic_deletion_manager logger static
So that it's visible to the framework at boot and --logger-log-level can
be used on it.

Message-Id: <1505286578-21904-1-git-send-email-tgrabiec@scylladb.com>
2017-09-13 20:35:41 +03:00
Tomasz Grabiec
b8f62e86de tests: Add test for partition_entry::evict() 2017-09-13 17:47:04 +02:00
Tomasz Grabiec
455a1b0d24 mutation_partition: Introduce range continuity checking methods 2017-09-13 17:47:04 +02:00
Tomasz Grabiec
abc489e99d mutation_partition: Enable rows_entry::compare() on position_in_partition_views
For full symmetry with existing overloads.
2017-09-13 17:47:04 +02:00
Tomasz Grabiec
d76b141b34 tests: Extract mvcc tests to separate file 2017-09-13 17:47:04 +02:00
Tomasz Grabiec
2dfb3b95a5 tests: row_cache: Add eviction tests 2017-09-13 17:47:03 +02:00
Tomasz Grabiec
204ec9c673 tests: simple_schema: Add new_tombstone() helper 2017-09-13 17:47:03 +02:00
Tomasz Grabiec
5b1adfa542 tests: streamed_mutation_assertions: Introduce produces(mutation&) 2017-09-13 17:47:03 +02:00
Tomasz Grabiec
cb16b038ef streamed_mutation: Allow setting buffer capacity
Needed in tests to limit amount of prefetching done by readers, so
that it's easier to test interleaving of various events.
2017-09-13 17:47:03 +02:00
Tomasz Grabiec
c78047fa5b row_cache: Evict partition snapshots
If snapshots are not evicted, they may pin an unbounded amount of memory
in cache for a long time, which may lead to OOM. Evict snapshots
together with the entry.

Fixes #2775.
Fixes #2730.
2017-09-13 17:47:03 +02:00
Tomasz Grabiec
b6ae5783cd mvcc: Introduce partition_entry::evict()
The operation frees as much memory as possible, marking affected
mutation elements as discontinuous.
2017-09-13 17:47:03 +02:00
Tomasz Grabiec
fa2c26342c row_cache: Handle eviction in partition reader 2017-09-13 17:38:08 +02:00
Tomasz Grabiec
99aa3d1964 tests: row_cache_test: Don't assume mvcc snapshots are not evictable
The test was not updating the underlying mutation source but still
expected to get the right data after calling invalidate(). If
snapshots are evictable, that's not guaranteed. Apply changes to the
underlying source as well, so data is read from it if necessary.
2017-09-13 17:38:08 +02:00
Tomasz Grabiec
adb159d51b row_cache: Reuse allocation_strategy::invalidate_references()
Modification count in the tracker is redundant, we can rely on
allocator's invalidation counter.
2017-09-13 17:38:08 +02:00
Tomasz Grabiec
27a3b4bca9 row_cache: Don't invalidate references on insertion
modification_count is currently only used to detect invalidation of
references, intended to be incremented on erasure.

Insertion into intrusive set doesn't invalidate references, so no
need to increment the counter.
2017-09-13 17:38:08 +02:00
Tomasz Grabiec
87be474c19 lsa: Move reclaim counter concept to allocation_strategy level
So that generic code can detect invalidation of references. Also, to
allow reusing the same mechanism for signalling external reference
invalidation.
2017-09-13 17:38:08 +02:00
Tomasz Grabiec
4053c801e2 mvcc: Ensure partition_snapshot always destroys versions using proper allocator
partition_snapshot is managed by lw_shared_ptr. Currently it is
assumed that maybe_merge_versions() is called on it before it dies,
which destroys it in the right allocator context. That's not very
safe. This patch improves safety by using the right allocator in
the snapshot's destructor.
2017-09-13 17:38:08 +02:00
Tomasz Grabiec
cda86abdbc mvcc: Encapsulate reference stability check in partition_snapshot 2017-09-13 17:38:08 +02:00
Tomasz Grabiec
2df6f356b1 mvcc: Store LSA region reference in partition_snapshot
Will be useful for improving encapsulation.
2017-09-13 17:38:08 +02:00
Tomasz Grabiec
4c920c9891 tests: cql_test_env: Use cancel_prior_atomic_deletions()
This fixes a failure in view_schema_test, which starts many instances
of single_node_cql_env. cancel_atomic_deletions() causes later
deletions to fail, which causes some of the test cases to fail.
Message-Id: <1505311250-3118-2-git-send-email-tgrabiec@scylladb.com>
2017-09-13 17:11:34 +03:00
Tomasz Grabiec
dc0860ac70 sstables: Introduce cancel_prior_atomic_deletions()
Like cancel_atomic_deletions() but doesn't fail later deletions.
Message-Id: <1505311250-3118-1-git-send-email-tgrabiec@scylladb.com>
2017-09-13 17:11:33 +03:00
Tomasz Grabiec
8a425cedc6 tests: cql_test_env: Cancel pending sstable deletions on shutdown
Fixes a hang on shutdown with --smp 2 in perf_fast_forward. The hang
is in sstables::await_background_jobs_on_all_shards(), which is
waiting on sstable deletions. Not all shards agree to delete certain
sstables, because e.g. not all shards decide to compact them
yet. Cancel those deletes after database is stopped on all shards,
like we do in main.cc

Fixes #2796.

Message-Id: <1505292239-26032-1-git-send-email-tgrabiec@scylladb.com>
2017-09-13 11:56:48 +03:00
Asias He
c84dcabb8f gossip: Use boost::copy_range in apply_state_locally
boost::copy_range is better because the vector is allocated with the
correct size instead of growing when the inserter is called.

[avi: also crashes less]

Message-Id: <b19ca92d56ad070fca1e848daa67c00c024e3a4d.1505291199.git.asias@scylladb.com>
2017-09-13 11:33:15 +03:00
Tomasz Grabiec
b3a8ba5af6 gdb: Introduce "scylla find" command
Finds live objects on the seastar heap of the current shard which contain
the given value. Prints results in 'scylla ptr' format.

Example:

  (gdb) scylla find 0x600005321900
  thread 1, small (size <= 512), live (0x6000000f3800 +48)
  thread 1, small (size <= 56), live (0x6000008a1230 +32)

Message-Id: <1505284614-19577-1-git-send-email-tgrabiec@scylladb.com>
2017-09-13 11:22:23 +03:00
Asias He
fa9d47c7f3 gossip: Fix a log message typo in compare_endpoint_startup
Message-Id: <c4958950e1108082b63e08ab81ee2177edc9b232.1505286843.git.asias@scylladb.com>
2017-09-13 09:54:56 +02:00
Glauber Costa
ecad1be161 compaction_strategy: add missing header
compaction_strategy.hh throws an exception, but it doesn't add the
exception header. It is working in-tree because of inclusion order,
but it broke one of my yet-out-of-tree changes.

In any case, it is best to add the headers we will need to the files,
and that is what this patch does.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <20170912233326.26114-1-glauber@scylladb.com>
2017-09-13 08:40:15 +02:00
Raphael S. Carvalho
ef18b1162b sstables/compaction_manager: rename and better explain reshard function
"submit" doesn't properly describe the function; also improve the explanation
of the relationship between the function itself and its job parameter.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170912032034.23043-1-raphaelsc@scylladb.com>
2017-09-12 12:25:17 +03:00
Avi Kivity
1bd207a306 sstables: merge filter.cc into sstables.cc
filter.cc has just two smallish functions, which are part of the sstable
class. Move them to sstables.cc where the rest of the class members are defined.
Message-Id: <20170912080541.7836-1-avi@scylladb.com>
2017-09-12 10:06:52 +02:00
Tomasz Grabiec
423142ec81 tests: row_cache_test: Fix abort in debug mode
The test used an apply() variant which assumed that it was invoked in a
seastar thread, which is no longer the case after commit d22fdf4. Fix
by copying outside the cache update, and use the non-deferring apply()
variant for the cache update.

Message-Id: <1505200142-3650-1-git-send-email-tgrabiec@scylladb.com>
2017-09-12 10:57:36 +03:00
Tomasz Grabiec
3f527e028d Merge "Reduce dependencies on sstables.hh" from Avi
This patchset reduces includes of sstables.hh, reducing compile time
by both reducing the amount of code compiled, and the amount of
needless recompiles caused by false dependencies.  It does so by
replacing lw_shared_ptr<sstable>, which requires a complete class,
with a new custom type shared_sstable, which allows an incomplete
sstable class definition.

* https://github.com/avikivity/scylla deps2/v2.1
  database: change truncate() to flush while compaction is disabled
  database: make run_with_compaction_disabled() a non-template
  database: add indirection to compaction_manager instance
  database: remove dependency on compaction.hh and compaction_manager.hh
  size_estimates_virtual_reader.hh: add missing include
  system_keyspace: add missing include
  main: add missing include
  storage_service: add missing include
  repair: add missing include
  compaction.hh: add missing include and forward declaration
  compaction_manager: add missing include
  shared_index_lists.hh: add missing include
  perf_fast_forward: add missing include
  sstable_mutation_test: add missing include
  sstables: extract version and format enum into a separate header file
  database.hh: add missing forward declaration for
    foreign_sstable_open_info
  cql_test_env: add forward declaration
  database: make column_family::disable_sstable_write() out-of-line
  sstables: introduce make_sstable() as a shortcut for
    make_lw_shared<sstable>
  treewide: use shared_sstable, make_sstable in place of
    lw_shared_ptr<sstable>
  sstables: use support for lw_shared_ptr with incomplete type for
    shared_sstable
  sstables: reduce dependencies
  streaming: remove unneeded includes
2017-09-12 09:56:46 +02:00
Tomasz Grabiec
ee1e7732a6 database: Create tables with continuous cache
When a table is created, it doesn't contain any data, so we can mark the whole
data range as continuous in cache. This way reads will immediately hit, and
flushes will populate. If sstables are later attached, the attaching process
is supposed to invalidate affected ranges (and it does).

Fixes #2536.

Message-Id: <1505200269-4031-1-git-send-email-tgrabiec@scylladb.com>
2017-09-12 10:53:07 +03:00
Avi Kivity
85a6a2b3cb streaming: remove unneeded includes 2017-09-12 10:43:39 +03:00
Avi Kivity
578bf55371 sstables: reduce dependencies
Use shared_sstable.hh instead of sstables.hh.
2017-09-12 10:43:36 +03:00
Avi Kivity
07feaf9c4c sstables: use support for lw_shared_ptr with incomplete type for shared_sstable
Use the lw_shared_ptr deleter support to define shared_sstable without
pulling the definition of class sstable, reducing compile time and
dependencies if only shared_sstable is needed.
2017-09-12 10:43:05 +03:00
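The general idea can be sketched with std::shared_ptr, which supports incomplete types through the same type-erased-deleter mechanism the commit relies on for lw_shared_ptr (the names below are illustrative, not Scylla's actual code):

```cpp
// A header can expose a shared pointer to a forward-declared class when the
// pointer type erases its deleter; the destructor only needs to be visible
// where the pointer is created, not where it is used.
#include <memory>

// --- "header" part: only a forward declaration is needed here ---
class sstable;
using shared_sstable = std::shared_ptr<sstable>;
shared_sstable make_sstable(int generation);

// --- "source" part: the full definition lives out of line ---
class sstable {
public:
    explicit sstable(int gen) : _generation(gen) {}
    int generation() const { return _generation; }
private:
    int _generation;
};

shared_sstable make_sstable(int generation) {
    // The deleter is bound here, where sstable is a complete type.
    return std::make_shared<sstable>(generation);
}
```

Any translation unit that only passes `shared_sstable` around can now include just the forward declaration, which is what cuts the compile-time dependencies.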
Avi Kivity
f7023501d6 treewide: use shared_sstable, make_sstable in place of lw_shared_ptr<sstable>
Since shared_sstable is going to be its own type soon, we can't use the old alias.
2017-09-12 10:43:05 +03:00
Avi Kivity
1a3cdffbc1 sstables: introduce make_sstable() as a shortcut for make_lw_shared<sstable>
shared_sstable will soon not be an alias for lw_shared_ptr<sstable>, so we need
another factory function.
2017-09-12 10:43:05 +03:00
Avi Kivity
88b91c84a1 database: make column_family::disable_sstable_write() out-of-line
Reduces dependencies.
2017-09-12 10:43:05 +03:00
Avi Kivity
02028df9b1 cql_test_env: add forward declaration
Not worthwhile to add a new #include for this.
2017-09-12 10:43:05 +03:00
Avi Kivity
02e5bf1c20 database.hh: add missing forward declaration for foreign_sstable_open_info
Supplied by an incidental include now, but it will be gone soon.
2017-09-12 10:43:05 +03:00
Avi Kivity
c4bafd912c sstables: extract version and format enum into a separate header file
This allows removing a dependency on sstables.hh later on.
2017-09-12 10:43:05 +03:00
Avi Kivity
5ebb15b9d4 sstable_mutation_test: add missing include 2017-09-12 10:43:05 +03:00
Avi Kivity
fdab47ab32 perf_fast_forward: add missing include 2017-09-12 10:43:05 +03:00
Avi Kivity
ca2d0b4efb shared_index_lists.hh: add missing include 2017-09-12 10:43:05 +03:00
Avi Kivity
eb62b2c00d compaction_manager: add missing include 2017-09-12 10:43:05 +03:00
Avi Kivity
0efa444a56 compaction.hh: add missing includes 2017-09-12 10:42:45 +03:00
Avi Kivity
7ca029c8f1 database_fwd.hh: add column_family forward declaration 2017-09-12 10:41:28 +03:00
Avi Kivity
4751402709 build: disable -fsanitize-address-use-after-scope on CqlParser.o
The parser generator somehow confuses the use-after-scope sanitizer, causing it
to use large amounts of stack space. Disable that sanitizer on that file.
Message-Id: <20170905110628.18047-1-avi@scylladb.com>
2017-09-11 19:42:26 +02:00
Avi Kivity
43a72254ff repair: add missing include 2017-09-11 20:09:45 +03:00
Avi Kivity
aebab377d9 storage_service: add missing include 2017-09-11 20:09:45 +03:00
Avi Kivity
a3b8089bd4 main: add missing include 2017-09-11 20:09:45 +03:00
Avi Kivity
0aaefe665b system_keyspace: add missing include 2017-09-11 20:09:45 +03:00
Avi Kivity
d3cde2e2be size_estimates_virtual_reader.hh: add missing include 2017-09-11 20:09:45 +03:00
Avi Kivity
9b540eccb0 database: remove dependency on compaction.hh and compaction_manager.hh 2017-09-11 20:09:45 +03:00
Avi Kivity
f9c8c1ddc2 database: add indirection to compaction_manager instance
Allows making it forward-declared later on, reducing dependencies.
2017-09-11 20:09:45 +03:00
Avi Kivity
9d0aaa941a database: make run_with_compaction_disabled() a non-template
Allows reducing dependencies down the line, and un-templating
non-performance-critical functions is a good thing.
2017-09-11 20:09:45 +03:00
Avi Kivity
6b5514a3df database: change truncate() to flush while compaction is disabled
In preparation to make run_with_compaction_disabled() a non-template,
we want to remove any non-copyable captures (so the function can be
an std::function, which requires copyability). Move the flush within
the compaction disabled region. This changes the behavior, but it shouldn't
matter.
2017-09-11 20:09:45 +03:00
Avi Kivity
14fd4168dc Merge seastar upstream
* seastar 31b925d...92fdce2 (3):
  > shared_ptr: allow incomplete classes in lw_shared_ptr<>
  > Update DPDK to 17.05
  > future: pass func as mutable to lambda arg of handle_exception[_type]
2017-09-11 20:09:04 +03:00
Tomasz Grabiec
95b3eaac97 debug: Allow running scylla_row_cache_report.stp script against a running process
Message-Id: <1504776359-16424-1-git-send-email-tgrabiec@scylladb.com>
2017-09-11 14:17:30 +03:00
Avi Kivity
fe019ad84d Merge "Refuse to load non-Scylla counter sstables" from Paweł
"These patches make Scylla refuse to load counter sstables that may
contain unsupported counter shards. They are recognised by the lack of
the Scylla component.

Fixes #2766."

* tag 'reject-non-scylla-counter-sstables/v1' of https://github.com/pdziepak/scylla:
  db: reject non-Scylla counter sstables in flush_upload_dir
  db: disallow loading non-Scylla counter sstables
  sstable: add has_scylla_component()
2017-09-11 13:28:44 +03:00
Tzach Livyatan
83eab5c8d7 Remove comment about Too high number of concurrent compactions from scylla_compaction_manager_compactions help
It should never happen, and it's not clear what "too high" stands for

Signed-off-by: Tzach Livyatan <tzach@scylladb.com>
Message-Id: <20170911085645.21222-1-tzach@scylladb.com>
2017-09-11 13:27:35 +03:00
Gleb Natapov
d0d8bdf615 storage_proxy: remove unused parameter from get_restricted_ranges() function
Message-Id: <20170911084653.GH24167@scylladb.com>
2017-09-11 11:58:44 +02:00
Gleb Natapov
f66e9377d4 storage_proxy: do not keep reference to a keyspace during write
A keyspace can be deleted while write is ongoing, so the object cannot
be used after defer point. The keyspace reference is only used to check
how many replies a write operation should wait for and this can be
precalculated during write handler creation.

Fixes #2777

Message-Id: <20170911084436.GG24167@scylladb.com>
2017-09-11 11:57:00 +02:00
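The precalculation described above can be sketched as follows; the names and the QUORUM formula are illustrative stand-ins, not the actual storage_proxy code:

```cpp
// Capture the number of required replies when the write handler is created,
// so the keyspace object need not be dereferenced after a defer point.
#include <cstddef>

struct write_handler {
    std::size_t block_for;  // replies to wait for; keyspace no longer needed
};

// e.g. QUORUM consistency: a majority of the replicas
write_handler make_quorum_write_handler(std::size_t replication_factor) {
    return write_handler{replication_factor / 2 + 1};
}
```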
Asias He
bb9dbc5ade storage_service: Do not use c_str() in the logger
Use logger.info("{}", msg) instead.

Message-Id: <d2f15007a54554b58e29fd05331c06ae030d582f.1504832296.git.asias@scylladb.com>
2017-09-10 18:10:24 +03:00
Botond Dénes
9ebeb9d5ce Fix --Wreturn-type warnings in tests: use abort() instead of assert(0)
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <95927f933411302e84d57d169ee0147def7bc643.1504890922.git.bdenes@scylladb.com>
2017-09-10 17:09:53 +03:00
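The reason abort() silences -Wreturn-type where assert(0) does not: assert compiles away under NDEBUG, so the compiler still sees a path falling off the end of a value-returning function, while abort() is declared [[noreturn]]. A minimal illustration:

```cpp
// Every path either returns a value or calls the [[noreturn]] abort(), so
// -Wreturn-type has nothing to warn about, in release builds included.
#include <cstdlib>

enum class op { add, sub };

int apply(op o, int a, int b) {
    switch (o) {
    case op::add: return a + b;
    case op::sub: return a - b;
    }
    abort();  // unreachable; unlike assert(0), survives NDEBUG
}
```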
Gleb Natapov
9137446109 api: uses correct statistics for storage proxy range histograms.
Message-Id: <20170910073458.GB1870@scylladb.com>
2017-09-10 16:18:36 +03:00
Pekka Enberg
d2632ddf1d Merge "gossip: optimize apply_state_locally for large cluster" from Asias
"This series tries to improve the bootstrap of a node in a large cluster by
improving how gossip applies the gossip node state. In #2404, the joining node
failed to bootstrap, because it did not see the seed node when
storage_service::bootstrap ran. After this series, we apply the whole gossip
state contained in the gossip ack/ack2 message before applying the next one,
and we apply the state of the seed node earlier than non-seed node so we can
have the seed node's state faster. We also add some randomness to the order of
applying gossip node state to prevent some of the nodes' state are always
applied earlier than the others.

This series improves apply_state_locally for large cluster:

 - Tune the order of applying endpoint_state
 - Serialize apply_state_locally
 - Avoid copying of the gossip state map

Fixes #2404"

* tag 'asias/gossip_issue_2404_v2' of github.com:scylladb/seastar-dev:
  gossip: Avoid copying with apply_state_locally
  gossip: Serialize apply_state_locally
  gossip: Tune the order of applying endpoint_state in apply_state_locally
  gossip: Introduce is_seed helper
  gossip: Pass const endpoint_state& in notify_failure_detector
  gossip: Pass reference in notify_failure_detector
2017-09-08 11:41:43 +03:00
Asias He
57dd3cb2c5 gossip: Do not use c_str() in the logger
Use logger.info("{}", msg) instead.

Message-Id: <52c24d7dfe082ee926f065a6268d83fcb31ddc28.1504832289.git.asias@scylladb.com>
2017-09-08 10:59:42 +03:00
Asias He
e98ce7887b gossip: Avoid copying with apply_state_locally
Move the std::map<inet_address, endpoint_state> map from the gossip
ack/ack2 message directly and move it around in apply_state_locally to
avoid copying the map.
2017-09-08 15:19:48 +08:00
Asias He
fd879b4e09 gossip: Serialize apply_state_locally
apply_state_locally will be called when gossip ack/ack2
message is received. It will use the std::map<inet_address,
endpoint_state>& map to update the endpoint state.

However, we can receive multiple such gossip ack/ack2 messages from
multiple peer nodes in parallel, and currently we process them in parallel.
It is better to apply all the states from one node and then move on to
another node's states, rather than interleaving, because it is more
important to have the state of the whole cluster than to have a bit
newer state from another peer (if it is newer), especially when the node
boots up and runs its first round of gossip exchange.

After this patch, we apply the whole gossip state contained in the
gossip ack/ack2 message before applying the next one.
2017-09-08 15:19:47 +08:00
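A minimal sketch of the serialization idea, using std::mutex and simplified stand-in types (Scylla itself uses Seastar primitives and real endpoint state):

```cpp
// Apply the whole state map from one ack/ack2 message before starting on the
// next, instead of interleaving entries from concurrently received messages.
#include <map>
#include <mutex>
#include <string>

struct endpoint_state { int version = 0; };

class gossiper {
    std::mutex _apply_lock;                       // one message's map at a time
    std::map<std::string, endpoint_state> _state; // local view of the cluster
public:
    void apply_state_locally(std::map<std::string, endpoint_state> remote) {
        std::lock_guard<std::mutex> guard(_apply_lock);
        for (auto& [ep, st] : remote) {
            auto it = _state.find(ep);
            if (it == _state.end() || it->second.version < st.version) {
                _state[ep] = st;  // first sighting, or strictly newer state
            }
        }
    }
    const std::map<std::string, endpoint_state>& state() const { return _state; }
};
```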
Asias He
9ccba950ba gossip: Tune the order of applying endpoint_state in apply_state_locally
We currently always apply the endpoint_state in the order of the
endpoint ip address. This is not good because some of the endpoint's
state is always applied earlier than the others.

In a large cluster the number of endpoints can be large, so it takes time
to apply all of them. To make the ordering fairer, we apply the
endpoint_state entries in random order.

Apply the seed node's state earlier because in bootstrap, we will check
if we have seen the seed node in storage_service::bootstrap. In #2404,
the bootstrap failed because the joining node hadn't applied the seed
node's state when storage_service::bootstrap ran.
2017-09-08 15:19:47 +08:00
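The ordering described above can be sketched as a small helper: seeds move to the front, the rest is shuffled. This is a hypothetical standalone function, not the actual gossiper code:

```cpp
// Order endpoints for state application: seed nodes first (so bootstrap sees
// their state early), the remainder shuffled so no endpoint is always last.
#include <algorithm>
#include <random>
#include <set>
#include <string>
#include <vector>

std::vector<std::string> order_endpoints(std::vector<std::string> endpoints,
                                         const std::set<std::string>& seeds) {
    // Partition seeds to the front, preserving their relative order.
    auto mid = std::stable_partition(endpoints.begin(), endpoints.end(),
        [&](const std::string& ep) { return seeds.count(ep) > 0; });
    // Randomize the non-seed tail.
    std::mt19937 rng(std::random_device{}());
    std::shuffle(mid, endpoints.end(), rng);
    return endpoints;
}
```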
Asias He
c5456ed38f gossip: Introduce is_seed helper
To check if an endpoint is a seed node.
2017-09-08 15:19:47 +08:00
Asias He
32edd95241 gossip: Pass const endpoint_state& in notify_failure_detector 2017-09-08 15:19:47 +08:00
Asias He
46e562cbfa gossip: Pass reference in notify_failure_detector
In a large cluster, the map can be large. Pass a reference to avoid copying.
2017-09-08 15:19:47 +08:00
Glauber Costa
db846326f8 compaction: remove dead code
This code has no more users. Bury it.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <20170908005305.29925-1-glauber@scylladb.com>
2017-09-08 08:17:15 +02:00
Tomasz Grabiec
57dc988475 Update seastar submodule
* seastar 85ca12d...31b925d (19):
  > net/byteorder: fix 64 bit ntohq and htonq on big endian machines
  > core, util: fix compilation on non-x86 processors
  > core/memory: Fix SIGSEGV in small_pool::add_more_objects()
  > log: remove debug leftovers
  > Merge "TLS state machine fixes" from Calle
  > logger: allow adjusting the timestamp style for stdout logs
  > thread: make thread_context::s_main portable
  > core: add seastar::cache_line_size constant
  > Add detach() to input_stream and output_stream
  > Install dependencies for Arch Linux.
  > tls: Guard non-established sockets in sesrefs + more explicit close + states
  > tls: Make vec_push fully exception safe
  > basic_sstring: resize uses sstring
  > Merge "Add and correct unit tests" from Jesse
  > tcp: enforce 1-byte maximum segment invariant with zero window
  > tcp: verify 1-byte maximum segment invariant during send with zero window
  > memory: reduce small_pool vulnerability to fragmentation further
  > Prometheus: avoid merging all metrics family
  > net: Fix possible NULL pointer dereference.
2017-09-07 10:34:27 +02:00
Avi Kivity
d9ee2ad9f0 chunked_vector: avoid boost::small_vector with old boost versions
Apparently older boost versions have a bug resulting in a double-free
in boost::container::small_vector. Use std::vector instead.

Fixes #2748.

Tested-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <20170903170207.21635-1-avi@scylladb.com>
2017-09-07 09:32:51 +03:00
Tomasz Grabiec
121cd8cb6c tests: Fix cql_query_test.cc::test_duration_restrictions
validate_request_failure() assumed that the future returned by execute_cql()
is always ready, which doesn't have to be the case, and caused aborts
in debug mode build.

Message-Id: <1504701342-13300-1-git-send-email-tgrabiec@scylladb.com>
2017-09-06 15:49:03 +03:00
Tomasz Grabiec
3986486cb3 tests: cql_test_env: Avoid exceptions to make debugging easier
Message-Id: <1504701375-13491-1-git-send-email-tgrabiec@scylladb.com>
2017-09-06 15:48:59 +03:00
Paweł Dziepak
e401d2d50b db: reject non-Scylla counter sstables in flush_upload_dir
Scylla already refuses to load counter sstables that do not have Scylla
component. However, if this happens because of the 'nodetool refresh'
command, the existing protection will trigger after sstables have been
moved to the data directory. This is too late, so an additional check
is added when the upload directory is scanned.
2017-09-06 12:04:26 +01:00
Paweł Dziepak
6a5e8bace1 db: disallow loading non-Scylla counter sstables
Scylla does not support local and remote counter shards. This means that
it is unsafe to directly load sstables that may contain them.
2017-09-06 12:03:58 +01:00
Paweł Dziepak
ebc538f4a3 sstable: add has_scylla_component()
has_scylla_component() is going to be used to verify that an sstable has
been generated by a recent version of Scylla. This would make it
possible to reject sstables that may be unsafe to load (e.g. sstables
containing legacy counter shards).
2017-09-06 12:03:45 +01:00
Avi Kivity
a59e375aad Merge "Support termination of repair jobs" from Asias
"This series implements the missing API to terminate all repairs.

For example:

$ curl -X POST  --header "Accept: application/json"
"http://127.0.0.1:10000/storage_service/force_terminate_repair"

With the new stream_plan::abort() API we can now abort the stream
session associated with the repair as well.

On top of this, we can support termination of a single repair job instead
of all jobs.

Fixes #2105"

* tag 'asisas/repair_abort_v4' of github.com:scylladb/seastar-dev:
  repair: Support termination of repair jobs
  repair: Track repair_info
  repair: Introduce repair id to repair_info map
  api: Add force_terminate_repair API
  streaming: Add abort to stream_plan
  streaming: Add abort_all_stream_sessions for stream_coordinator
  streaming: Introduce streaming::abort()
  streaming: Make stream_manager and coordinator message debug level
  streaming: Check if _stream_result is valid
  streaming: Log peer address in on_error
  streaming: Introduce received_failed_complete_message
2017-09-06 12:58:05 +03:00
Avi Kivity
31706ba989 Merge "Fix Scylla upgrades when counters are used" from Paweł
"Scylla 1.7.4 and older use incorrect ordering of counter shards, this
was fixed in 0d87f3dd7d ("utils::UUID:
operator< should behave as comparison of hex strings/bytes"). However,
that patch was not backported to 1.7 branch until very recently. This
means that versions 1.7.4 and older emit counter shards in an incorrect
order and expect them to be so. This is particularly bad when dealing
with imported correct sstables in which case some shards may become
duplicated.

The solution implemented in this patch is to allow any order of counter
shards and automatically merge all duplicates. The code is written in a
way so that the correct ordering is expected in the fast path in order
not to excessively punish unaffected deployments.

A new feature flag CORRECT_COUNTER_ORDER is introduced to allow seamless
upgrade from 1.7.4 to later Scylla versions. If that feature is not
available Scylla still writes sstables and sends on-wire counters using
the old ordering so that it can be correctly understood by 1.7.4, once
the flag becomes available Scylla switches to the correct order.

Fixes #2752."

* tag 'fix-upgrade-with-counters/v2' of https://github.com/pdziepak/scylla:
  tests/counter: verify counter_id ordering
  counter: check that utils::UUID uses int64_t
  mutation_partition_serializer: use old counter ordering if necessary
  mutation_partition_view: do not expect counter shards to be sorted
  sstables: write counter shards in the order expected by the cluster
  tests/sstables: add storage_service_for_tests to counter write test
  tests/sstables: add test for reading wrong-order counter cells
  sstables: do not expect counter shards to be sorted
  storage_service: introduce CORRECT_COUNTER_ORDER feature
  tests/counter: test 1.7.4 compatible shard ordering
  counters: add helper for retrieving shards in 1.7.4 order
  tests/counter: add tests for 1.7.4 counter shard order
  counters: add counter id comparator compatible with Scylla 1.7.4
  tests/counter: verify order of counter shards
  tests/counter: add test for sorting and deduplicating shards
  counters: add function for sorting and deduplicating counter cells
  counters: add counter_id::operator>
2017-09-05 14:20:55 +03:00
Paweł Dziepak
ed68a75b75 tests/counter: verify counter_id ordering 2017-09-05 10:52:54 +01:00
Paweł Dziepak
cdf7ba76f1 counter: check that utils::UUID uses int64_t 2017-09-05 10:46:03 +01:00
Paweł Dziepak
4aa72c6454 mutation_partition_serializer: use old counter ordering if necessary
Until the cluster is fully upgraded from a version that uses the
incorrect counter shard ordering it is essential to keep using it lest
the old nodes corrupt the data upon receiving mutations with a counter
shard ordering they do not expect.
2017-09-05 10:32:48 +01:00
Paweł Dziepak
b540516e5e mutation_partition_view: do not expect counter shards to be sorted 2017-09-05 10:32:48 +01:00
Paweł Dziepak
84edb5a1f2 sstables: write counter shards in the order expected by the cluster
If the feature signaling that we have switched to the correct ordering
of counter shards is not enabled it means that the user still can do a
rollback to a version that expects wrong ordering. In order to avoid any
disasters when that happens write sstables using the 1.7.4 order until
we know for sure that it is no longer needed.
2017-09-05 10:32:48 +01:00
Paweł Dziepak
2b614201a7 tests/sstables: add storage_service_for_tests to counter write test
Writing counters to an sstable is going to require cluster feature
information, which requires accessing some singletons.
2017-09-05 10:32:48 +01:00
Paweł Dziepak
5007c9290a tests/sstables: add test for reading wrong-order counter cells 2017-09-05 10:32:48 +01:00
Paweł Dziepak
3e1d09e71d sstables: do not expect counter shards to be sorted 2017-09-05 10:32:48 +01:00
Paweł Dziepak
ecd2bf128b storage_service: introduce CORRECT_COUNTER_ORDER feature
Scylla 1.7.4 used incorrect ordering of counter shards. In order to fix
this problem a new feature is introduced that will be used to determine
when nodes with that bug fixed can start sending counter shard in the
correct order.
2017-09-05 10:32:48 +01:00
Paweł Dziepak
1e03c4acbe tests/counter: test 1.7.4 compatible shard ordering 2017-09-05 10:32:47 +01:00
Paweł Dziepak
067e429881 counters: add helper for retrieving shards in 1.7.4 order 2017-09-05 10:32:47 +01:00
Paweł Dziepak
fd25a09db2 tests/counter: add tests for 1.7.4 counter shard order 2017-09-05 10:32:47 +01:00
Paweł Dziepak
a93e8ce185 counters: add counter id comparator compatible with Scylla 1.7.4 2017-09-05 10:32:47 +01:00
Paweł Dziepak
b0f67c1680 tests/counter: verify order of counter shards 2017-09-05 10:32:47 +01:00
Paweł Dziepak
27397b5dad tests/counter: add test for sorting and deduplicating shards 2017-09-05 10:32:47 +01:00
Paweł Dziepak
e0c2379f26 counters: add function for sorting and deduplicating counter cells
Due to a bug in an implementation of UUID less compare some Scylla
versions sort counter shards in an incorrect order. Moreover, when
dealing with imported correct data the inconsistencies in ordering
caused some counter shards to become duplicated.
2017-09-05 10:32:39 +01:00
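The sorting-and-deduplicating function described above can be sketched as follows; the shard layout here is a simplified stand-in, not Scylla's actual counter cell representation:

```cpp
// Order shards by id and, among duplicates of one id, keep the shard with
// the highest logical clock (the most recent update wins).
#include <algorithm>
#include <cstdint>
#include <vector>

struct counter_shard {
    uint64_t id;
    int64_t clock;
    int64_t value;
};

void sort_and_dedup(std::vector<counter_shard>& shards) {
    std::sort(shards.begin(), shards.end(),
        [](const counter_shard& a, const counter_shard& b) {
            // Within one id, put the highest clock first so unique() keeps it.
            return a.id != b.id ? a.id < b.id : a.clock > b.clock;
        });
    shards.erase(std::unique(shards.begin(), shards.end(),
        [](const counter_shard& a, const counter_shard& b) {
            return a.id == b.id;
        }), shards.end());
}
```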
Paweł Dziepak
74af818eaf counters: add counter_id::operator> 2017-09-04 18:25:47 +01:00
Avi Kivity
4b06a2e95d Merge "Fix exception safety in cache update related paths" from Tomasz
* 'tgrabiec/make-row-cache-update-exception-safe' of github.com:scylladb/seastar-dev:
  row_cache: Improve safety of cache updates
  row_cache: Extract invalidate_sync()
  memtable: Mark mark_flushed() as noexcept
  database: Add non-throwing try_trigger_compaction()
  database: Make add_sstable() have strong exception guarantees
  row_cache: Don't require presence checker to be supplied externally
  database: Supply presence checker in sstable snapshots
  mutation_source: Introduce mutation_source::make_partition_presence_checker()
  mutation_reader: Move definitions up in the header
  mutation_reader: Use constructor delegation to reduce code duplication
  row_cache: Make populate() preserve continuity
  row_cache: Allow marking as fully continuous on construction
  database: Add missing serialization of sstable set update and cache invalidation
2017-09-04 18:37:42 +03:00
Tomasz Grabiec
d22fdf4261 row_cache: Improve safety of cache updates
Cache imposes requirements on how updates to the on-disk mutation source
are made:
  1) each change to the on-disk mutation source must be followed
     by cache synchronization reflecting that change
  2) the two must be serialized with other synchronizations
  3) the synchronization must have strong failure guarantees (atomicity)

Because of that, sstable list update and cache synchronization must be
done under a lock, and cache synchronization cannot fail to synchronize.

Normally cache synchronization achieves this no-failure property by wiping
the cache (which is noexcept) when a failure is detected. There are some
setup steps, however, which cannot be skipped, e.g. taking a lock
followed by switching the cache to use the new snapshot; those truly cannot
fail. The lock inside cache synchronizers is redundant, since the
user needs to take it anyway around the combined operation.

In order to make ensuring strong exception guarantees easier, and
making the cache interface easier to use correctly, this patch moves
the control of the combined update into the cache. This is done by
having cache::update() et al accept a callback (external_updater)
which is supposed to perform modification of the underlying mutation
source when invoked.

This is in-line with the layering. Cache is layered on top of the
on-disk mutation source (it wraps it) and reading has to go through
cache. After the patch, modification also goes through cache. This way
more of cache's requirements can be confined to its implementation.

The failure semantics of update() and other synchronizers needed to
change due to the strong exception guarantees. Now if it fails, it means
the update was not performed, neither to the cache nor to the
underlying mutation source.

The database::_cache_update_sem goes away, serialization is done
internally by the cache.

The external_updater needs to have strong exception guarantees. This
requirement is not new. It is however currently violated in some
places. This patch marks those callbacks as noexcept and leaves a
FIXME. Those should be fixed, but that's not in the scope of this
patch. Aborting is still better than corrupting the state.

Fixes #2754.

Also fixes the following test failure:

  tests/row_cache_test.cc(949): fatal error: in "test_update_failure": critical check it->second.equal(*s, mopt->partition()) has failed

which started to trigger after commit 318423d50b. Thread stack
allocation may fail, in which case we did not do the necessary
invalidation.
2017-09-04 10:04:29 +02:00
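The external_updater flow described above can be sketched as follows. This is a deliberately simplified, hypothetical shape of the idea, not the real row_cache interface:

```cpp
// The cache serializes the combined operation, invokes the caller's update
// of the underlying mutation source, then synchronizes itself; if its own
// sync step fails, wiping the cache is the noexcept fallback.
#include <functional>
#include <mutex>

class cache {
    std::mutex _update_lock;  // serializes all synchronizing updates
    bool _populated = true;
public:
    void update(const std::function<void()>& external_updater) {
        std::lock_guard<std::mutex> guard(_update_lock);
        external_updater();  // must itself have strong exception guarantees
        try {
            synchronize();
        } catch (...) {
            _populated = false;  // wipe instead of serving stale data
        }
    }
    bool populated() const { return _populated; }
private:
    void synchronize() { /* reflect the change in cached entries */ }
};
```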
Tomasz Grabiec
b0f3efa577 row_cache: Extract invalidate_sync() 2017-09-04 10:04:29 +02:00
Tomasz Grabiec
673a22f8e1 memtable: Mark mark_flushed() as noexcept
Callers rely on that.
2017-09-04 10:04:29 +02:00
Tomasz Grabiec
bf75b882ae database: Add non-throwing try_trigger_compaction() 2017-09-04 10:04:29 +02:00
Tomasz Grabiec
116d4ae02b database: Make add_sstable() have strong exception guarantees
If insert() fails, we would leave the database with stats updated but
the sstable not attached.
2017-09-04 10:04:29 +02:00
Tomasz Grabiec
56e3ce05db row_cache: Don't require presence checker to be supplied externally
The API is simpler and safer this way.
2017-09-04 10:04:29 +02:00
Tomasz Grabiec
df787afe6a database: Supply presence checker in sstable snapshots 2017-09-04 10:04:29 +02:00
Tomasz Grabiec
8a9f0f86e7 mutation_source: Introduce mutation_source::make_partition_presence_checker()
Every mutation source can have a presence checker. By default all
answer "maybe contains".

Having this on mutation_source level will be useful for simplifying
cache update flow. The cache can ask the right snapshot for a presence
checker rather than relying on database to know when and how to make
the right one which preserves all invariants.

This will be especially useful once all updates of the underlying
mutation source of cache (e.g. sstable list) will have to go through
cache for safety reasons.
2017-09-04 10:04:29 +02:00
Tomasz Grabiec
065feb1b7b mutation_reader: Move definitions up in the header 2017-09-04 10:04:29 +02:00
Tomasz Grabiec
4e4839082b mutation_reader: Use constructor delegation to reduce code duplication 2017-09-04 10:04:29 +02:00
Tomasz Grabiec
1a2f17d42c row_cache: Make populate() preserve continuity 2017-09-04 10:04:29 +02:00
Tomasz Grabiec
bc3112a187 row_cache: Allow marking as fully continuous on construction
Will be needed in tests.
2017-09-04 10:04:29 +02:00
Tomasz Grabiec
ab8632b225 database: Add missing serialization of sstable set update and cache invalidation
Commit e3ad676433 missed a few places.

It is required to serialize sstable list update and cache synchronization
in order to preserve partition update isolation.

Fixes #2746.
2017-09-04 10:04:29 +02:00
Piotr Jastrzebski
dd5dc75605 Stop calling _local_cache.stop in at_exit.
This removes a race condition that was causing #2721

Fixes #2721

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <ad060fab43d63c17db9f811c421d7ab26e5e57c8.1503933021.git.piotr@scylladb.com>
2017-09-03 15:55:48 +03:00
Asias He
e14bb7b1d5 repair: Remove #if'ed code in repair_ranges
It is unlikely we will use the parallel_for_each version in repair_ranges.
Get rid of the dead code.

Message-Id: <31a9366adfe0262512a326ef9703aa0bba05e1fb.1503996138.git.asias@scylladb.com>
2017-09-03 11:13:02 +03:00
Avi Kivity
0524cbbd72 Merge db/config.cc cleanups from Jesse
* 'jhk/config_hygiene/v1' of https://github.com/hakuch/scylla:
  db/config.cc: Clarify documentation for `typed_value_ex`
  db/config.cc: Fix formatting and warnings
  db/config.cc: Remove unnecessary `mutable` on lambdas
  db/config.cc: Remove unused variables
2017-09-03 11:08:53 +03:00
Botond Dénes
a980ff6463 Use abort() instead of assert + throw in unreachable code
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <393c3730111dfe090c44d8fc2e31602956a7d008.1504022425.git.bdenes@scylladb.com>
2017-09-03 11:07:27 +03:00
Raphael S. Carvalho
22701346de sstables/stcs: avoid needless copy of bucket in get_buckets()
In addition, remove bucket by iterator which is faster.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170903000315.16338-1-raphaelsc@scylladb.com>
2017-09-03 10:46:48 +03:00
Avi Kivity
551eb75eb0 Update AMI submodule
* dist/ami/files/scylla-ami b41e5eb...5ffa449 (3):
  > amzn-main.repo: stick to Amazon Linux 2017.03 kernel (4.9.x)
  > Prevent dependency error on 'yum update'
  > scylla_create_devices: don't raise error when no disks found
2017-08-31 15:13:52 +03:00
Glauber Costa
e642aee3f7 database: wait for asynchronous operations to end before closing CF
This was part of "add gate for generic async operations to column family" but
somehow didn't make it into the final patch.

Add the missing piece.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <20170830164205.4497-1-glauber@scylladb.com>
2017-08-31 11:16:30 +03:00
Avi Kivity
23d3ca56a1 Merge "optional integrity checker of sstable component writes" from Raphael
"optional interposer that will check integrity of writes to sstable components.
The option name is enable_sstable_data_integrity_check, it's disabled by default
and can be enabled via config file.
It will provide enough details that will help to find the root of the issue.

If the disk failed, for example, we would have something like the following reported:
ERROR 2017-08-17 09:18:11,577 [shard 0] sstable - integrity check failed for ./data/data/system_schema/aggregates-924c55872e3a345bb10c12f37c1ba895/system_schema-aggregates-ka-111-Scylla.db, stage: read after write verification, write: 4096 bytes to offset 0, reason: data read from underlying storage isn't the same as written, mismatch at byte 0:
 data written sample:   10000001000000010000001a00000001
 date read sample:      00000000000000000000000000000000"

* 'integrity_check_interposer_v3' of github.com:raphaelsc/scylla:
  sstables: optionally check integrity of sstable component writes
  sstables: remove unneeded new_sstable_component_file variant
  db/config: add sstable_data_integrity_check option
  sstables: introduce file interposer for integrity check
2017-08-31 11:08:12 +03:00
Raphael S. Carvalho
a84fbde8c8 sstables: optionally check integrity of sstable component writes
If config file's sstable_data_integrity_check option is enabled,
new integrity check interposer will be used in addition to the
existing one. Performance is expected to drop because of all
the integrity checks for every write. This new interposer will
provide detailed info when an integrity check fails.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-08-31 02:27:50 -03:00
Raphael S. Carvalho
04ea4daa7e sstables: remove unneeded new_sstable_component_file variant
can get rid of it because file_open_options is optional in
reactor::open_file_dma()

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-08-31 02:17:34 -03:00
Raphael S. Carvalho
0218d6fd8f db/config: add sstable_data_integrity_check option
If enabled, an interposer for checking the integrity of sstable
component writes will be used.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-08-30 13:57:08 -03:00
Raphael S. Carvalho
f76b609cf5 sstables: introduce file interposer for integrity check
An optional interposer that will check integrity when writing to
sstable components. It will provide enough detail to help find
the root cause of the issue, which may come from lower-level
layers.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-08-30 11:55:36 -03:00
Jesse Haber-Kucharsky
eddf34d005 test.py: Add missing tests
Signed-off-by: Jesse Haber-Kucharsky <jhaberku@scylladb.com>
Message-Id: <6fc5e810495801e646ccc41c16b581c8eceeda22.1504030666.git.jhaberku@scylladb.com>
2017-08-30 09:58:12 +01:00
Asias He
471e8b341f repair: Support termination of repair jobs
This patch implements the missing API to terminate all repairs.

For example:

$ curl -X POST  --header "Accept: application/json"
"http://127.0.0.2:10000/storage_service/force_terminate_repair"

With the new stream_plan::abort() API we can now abort the stream
sessions associated with the repair as well.

Fixes #2105
2017-08-30 15:19:52 +08:00
Asias He
07d9dc03ec repair: Track repair_info
Make repair_info a shared pointer and store it in the _repairs map so we
can find it by repair id and access it later.
2017-08-30 15:19:52 +08:00
Asias He
5c9732c645 repair: Introduce repair id to repair_info map
The maps are stored in a vector. The vector has smp::count elements, each
element will be accessed by only one shard.

The add_repair_info, remove_repair_info and get_repair_info helpers
are added.
2017-08-30 15:19:51 +08:00
Asias He
6dc62c6215 api: Add force_terminate_repair API
The API /storage_service/force_terminate is supposed to be
/storage_service/force_terminate_repair.

scylla-jmx uses the /storage_service/force_terminate API.
So instead of renaming it, it is better to add a new name for it.
2017-08-30 15:19:51 +08:00
Asias He
9c8da2cc56 streaming: Add abort to stream_plan
It can be used by the user of stream_plan to abort the stream sessions.
Repair will be the first user when aborting the repair.
2017-08-30 15:19:51 +08:00
Asias He
475b7a7f1c streaming: Add abort_all_stream_sessions for stream_coordinator
It will abort all the sessions within the stream_coordinator.
It will be used by stream_plan soon.
2017-08-30 15:19:51 +08:00
Asias He
fad34801bf streaming: Introduce streaming::abort()
It will be used soon by stream_plan::abort() to abort a stream session.
2017-08-30 15:19:50 +08:00
Asias He
7fba7cca01 streaming: Make stream_manager and coordinator message debug level
When we abort a session, it is possible that:

node 1 aborts the session at the user's request;
node 1 sends the complete_message to node 2;
node 2 aborts the session upon receiving the complete_message;
node 1 sends one more stream message to node 2, and the stream_manager
for the session cannot be found.

It is fine for node 2 to be unable to find the stream_manager; make the
log on node 2 less verbose to avoid confusing users.
2017-08-30 15:19:50 +08:00
Asias He
be573bcafb streaming: Check if _stream_result is valid
If on_error() was called before init() was executed, the
_stream_result can be invalid.
2017-08-30 15:19:44 +08:00
Asias He
8a3f6acdd2 streaming: Log peer address in on_error 2017-08-30 15:18:27 +08:00
Asias He
eace5fc6e8 streaming: Introduce received_failed_complete_message
It is the handler for the failed complete message. Add a flag to
remember whether we received such a message from the peer; if so, do not
send the failed complete message back to the peer when running
close_session with failed status.
2017-08-30 15:18:27 +08:00
Asias He
cc18da5640 Revert "gossip: Make bootstrap more robust"
This reverts commit b56ba02335.

After commit 8fa35d6ddf (messaging_service: Get rid of timeout and retry
logic for streaming verb), the streaming verb in RPC does not check whether a
node is in gossip membership, since all the retry logic was removed.

Remove the extra wait before removing the joining node from gossip
membership.

Message-Id: <a416a735bb8aad533bbee190e3324e6b16799415.1504063598.git.asias@scylladb.com>
2017-08-30 10:14:11 +03:00
Avi Kivity
48b9e47f7d Revert "row_cache: Add missing handling for failures happening outside the updating thread"
This reverts commit f9feb310ab (requested by author).
2017-08-29 19:26:02 +03:00
Tomasz Grabiec
f9feb310ab row_cache: Add missing handling for failures happening outside the updating thread
Thread stack allocation may fail, in which case we did not do the
necessary invalidation. Fix by hoisting the scope of the cleanup function.

Also fixes the following test failure:

  tests/row_cache_test.cc(949): fatal error: in "test_update_failure": critical check it->second.equal(*s, mopt->partition()) has failed

which started to trigger after commit 318423d50b.

Message-Id: <1504023113-30374-2-git-send-email-tgrabiec@scylladb.com>
2017-08-29 19:17:22 +03:00
Tomasz Grabiec
5d2f2bc90b lsa: Mark region::merge() as noexcept
It seems to satisfy this, and row_cache::do_update() will rely on it to simplify
error handling.

Message-Id: <1504023113-30374-1-git-send-email-tgrabiec@scylladb.com>
2017-08-29 19:17:17 +03:00
Asias He
8fa35d6ddf messaging_service: Get rid of timeout and retry logic for streaming verb
With the "Use range_streamer everywhere" (7217b7ab36) series, all
users of streaming now stream relatively small ranges and
can retry streaming at a higher level.

There are problems with timeout and retry at RPC verb level in streaming:
1) Timeout can be false negative.
2) We can not cancel the send operations which are already called. When
user aborts the streaming, the retry logic keeps running for a long
time.

This patch removes all the timeout and retry logic for streaming verbs.
After this, the timeout is the job of TCP, the retry is the job of the
upper layer.

Message-Id: <df20303c1fa728dcfdf06430417cf2bd7a843b00.1503994267.git.asias@scylladb.com>
2017-08-29 17:20:00 +03:00
Botond Dénes
d1209c548a Fix -Wreturn-type warnings
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <99f7a006daaa78eb87720ac51c394093398bc868.1504013915.git.bdenes@scylladb.com>
2017-08-29 16:41:09 +03:00
Tomer Sandler
f1eb6a8de3 node_health_check: Various updates
- Removed text from the report's "PURPOSE" section, which was referring to the "MANUAL CHECK LIST" (not needed anymore).
- Removed the curl command (no longer using the api_address); using scylla --version instead
- Added the -v flag to the iptables command, for more verbosity
- Added support for OEL (Oracle Enterprise Linux) - minor fix
- Some minor text changes
- OEL support indentation fix + collecting all files under /etc/scylla
- Added line separation under the cp output message

Signed-off-by: Tomer Sandler <tomer@scylladb.com>
Message-Id: <20170828131429.4212-1-tomer@scylladb.com>
2017-08-29 15:15:10 +03:00
Paweł Dziepak
90c77c89ae test.py: add missing compress_test
Message-Id: <20170829105331.27078-1-pdziepak@scylladb.com>
2017-08-29 13:05:11 +02:00
Paweł Dziepak
d5fa07f6df Merge "sstables: switch from deque<> to a custom container" from Avi
Large deques require contiguous storage, which may not be available (or may
be expensive to obtain).  Switch to new custom container instead, which allocates
less contiguous storage.

Allocation problems were observed with the summary and compression info. While
there is work to reduce compression info contiguous space use, this solves
all std::deque problems (and should not conflict with that work).

Fixes #2708

* tag '2708/v6' of https://github.com/avikivity/scylla:
  sstables: switch std::deque to chunked_vector
  tests: add test for chunked_vector
  utils: add a new container type chunked_vector
2017-08-29 11:11:01 +01:00
Avi Kivity
5224ab9c92 Merge "Fix sstable reader not working for empty set of clustering ranges" from Tomasz
"Fixes #2734."

* 'tgrabiec/make-sstable-reader-work-with-empty-range-set' of github.com:scylladb/seastar-dev:
  tests: Introduce clustering_ranges_walker_test
  tests: simple_schema: Add missing include
  sstables: reader: Make clustering_ranges_walker work with empty range set
  clustering_ranges_walker: Make adjacency more accurate
2017-08-29 10:28:49 +03:00
Asias He
a36141843a gossip: Switch to seastar::lowres_system_clock
The newly added lowres_system_clock is good enough for gossip
resolution. Switch to use it.

Message-Id: <fe0e7a9ef1ea0caffaa8364afe5c78b6988613bf.1503971833.git.asias@scylladb.com>
2017-08-29 10:16:25 +03:00
Asias He
2701bfd1f8 gossip: Use unordered_map for _unreachable_endpoints and _shadow_unreachable_endpoints
The _unreachable_endpoints map will be accessed in the fast path soon by the
hinted handoff code.

Message-Id: <500d9cbb2117ab7b070fd1bd111c5590f46c3c3a.1503971826.git.asias@scylladb.com>
2017-08-29 10:15:55 +03:00
Tomasz Grabiec
05e0ca6546 tests: Introduce clustering_ranges_walker_test 2017-08-28 21:08:55 +02:00
Tomasz Grabiec
dcbc1282a9 tests: simple_schema: Add missing include 2017-08-28 21:00:06 +02:00
Tomasz Grabiec
48dabc8262 sstables: reader: Make clustering_ranges_walker work with empty range set
Such queries can be issued by counter updates which involve only the
static row.

Causes failure in test_query_only_static_row invoked from
sstable_mutation_test. See commit 6572f38, which fixed the problem in
cache reader.

Fixes #2734.
2017-08-28 21:00:06 +02:00
Tomasz Grabiec
071badce3b clustering_ranges_walker: Make adjacency more accurate
The current check considered some adjacent range tombstones as overlapping
with the ranges. Making this more accurate will become more important
once we rely on putting p_i_p::after_all_clustered_rows() in
_current_start in the out-of-range state.
2017-08-28 21:00:06 +02:00
Jesse Haber-Kucharsky
abf4c1688d db/config.cc: Clarify documentation for typed_value_ex 2017-08-28 10:08:29 -04:00
Jesse Haber-Kucharsky
7374f9d86f db/config.cc: Fix formatting and warnings 2017-08-28 10:08:29 -04:00
Jesse Haber-Kucharsky
90666e5744 db/config.cc: Remove unnecessary mutable on lambdas 2017-08-28 10:08:29 -04:00
Jesse Haber-Kucharsky
449bd60480 db/config.cc: Remove unused variables 2017-08-28 10:08:29 -04:00
Botond Dénes
eec451bcf8 segmented_offsets: use _current_bucket_segment_index consistently
Previously, _current_bucket_segment_index was used differently depending on
whether update_position_trackers() was used in a random or a sequential
access. In the former case it was used as the absolute index of the segment
(independent of the buckets), and in the latter as the relative index of
the segment within its bucket. This caused problems when there was a
switch between random and sequential access, meaning one could get different
results for an at() call depending on what the previous at() call was.
Fix this by consistently using _current_bucket_segment_index as - like its
name suggests - the bucket-relative segment index.

Ref #1946.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <7f68ac1d32c80e8dea6dfa11be02acaa961bce2a.1503924927.git.bdenes@scylladb.com>
2017-08-28 16:14:25 +03:00
Avi Kivity
fa8d0fe4d0 Revert "Revert "Revert "Revert "Merge "Compress in-memory compression-info" from Botond""""
This reverts commit 238877a0c6.  A fix was found and will be committed
shortly.
2017-08-28 16:14:13 +03:00
Tomer Sandler
83f249c15d node_health_check: added line separation under cp output message
Signed-off-by: Tomer Sandler <tomer@scylladb.com>
Message-Id: <20170828124307.2564-1-tomer@scylladb.com>
2017-08-28 15:44:13 +03:00
Tomasz Grabiec
16c1b0fb6b Merge "Reduce dependencies on types.hh" from Avi
* 'deps1/v1' of https://github.com/avikivity/scylla:
  types.hh: extract marshal_exception from types.hh into a new file
  utils: remove dependency on types.hh
  locator: add missing include "log.hh"
  supervisor: remove dependency on init.hh
  tracing: add missing include "log.hh"
  gms: remove unneeded #include "types.hh"
2017-08-28 13:58:46 +02:00
Avi Kivity
4e67bc9573 Merge "Fixes for skipping in sstable reader" from Tomasz
* 'tgrabiec/fix-fast-forwarding' of github.com:scylladb/seastar-dev:
  tests: mutation_source_test: Add more tests for fast forwarding across partitions
  sstables: Fix abort in mutation reader for certain skip pattern
  sstables: Fix reader returning partition past the query range in some cases
  sstables: Introduce data_consume_context::eof()
2017-08-28 12:48:02 +03:00
Tomasz Grabiec
3241018c79 tests: mutation_source_test: Add more tests for fast forwarding across partitions 2017-08-28 10:30:08 +02:00
Tomasz Grabiec
65e488c150 sstables: Fix abort in mutation reader for certain skip pattern
The problem happens for the following sequence of events:

 1) reader stops in the middle of some partition before it
    skips to another partition range

 2) reader is fast forwarded to a partition range which has no data in
    the sstable. There are some partitions between the previous
    partition range and the one we skip to

 3) the reader is asked for next partition

The problem was that mutation_reader::fast_forward_to() was putting
the reader in the _read_enabled == false state in step 2, but
data_consume_context was not fast forwarded to the range. When in step
3 we were asked for the next partition, we attempted to skip using the
index (because of 1). The result of the skip was some position
outside the current range of data_consume_context, which caused
it to abort. To fix, add a check for _read_enabled before we try to
skip.
2017-08-28 10:28:15 +02:00
Tomasz Grabiec
dc3c8863f3 sstables: Fix reader returning partition past the query range in some cases
If the index was used to skip to the next partition (because the current
partition wasn't consumed in full) and the reader's partition range ends
before the data file ends, we did not detect that we were out of range
before returning a streamed_mutation. Fix by checking _context.eof()
before doing that.

Refs #2733.
2017-08-28 10:16:27 +02:00
Tzach Livyatan
12fb975282 Fix typos in metrics description
Fixes #2658

Signed-off-by: Tzach Livyatan <tzach@scylladb.com>
Message-Id: <20170803121732.19640-1-tzach@scylladb.com>
2017-08-28 10:48:28 +03:00
Takuya ASADA
437931f499 dist/redhat: fix dependency package name typo
scylla-libstdc++-static53 -> scylla-libstdc++53-static

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1503306027-7316-1-git-send-email-syuu@scylladb.com>
2017-08-28 10:44:40 +03:00
Tomasz Grabiec
6baad2c2e6 sstables: Introduce data_consume_context::eof() 2017-08-28 09:19:43 +02:00
Avi Kivity
171fe67a64 gms: remove unneeded #include "types.hh" 2017-08-27 15:18:57 +03:00
Avi Kivity
a9f19e37b5 tracing: add missing include "log.hh"
It's currently made available via another include, which is going away.
2017-08-27 15:18:41 +03:00
Avi Kivity
471ae5b22b supervisor: remove dependency on init.hh
Replace with a simpler dependency on log.hh
2017-08-27 15:17:55 +03:00
Avi Kivity
27d3ab20a9 locator: add missing include "log.hh"
It's currently made available via another include, which is going away.
2017-08-27 15:17:05 +03:00
Avi Kivity
7234f0f0a0 utils: remove dependency on types.hh
Replace with dependency on much smaller marshal_exception.hh.
2017-08-27 15:16:21 +03:00
Avi Kivity
93317e2f4a types.hh: extract marshal_exception from types.hh into a new file
For better or worse, marshal_exception is used from utils/, and it's not good
to have utils/ depend on types.hh. Extract marshal_exception to make it possible
to remove the dependency.
2017-08-27 15:14:55 +03:00
Avi Kivity
238877a0c6 Revert "Revert "Revert "Merge "Compress in-memory compression-info" from Botond"""
This reverts commit 9d27455744. It's still broken.

To reproduce:

  ./tools/bin/cassandra-stress write -schema compression=LZ4Compressor

(on a clean database)

.0  0x00007ffff32aa69b in raise () from /lib64/libc.so.6
.1  0x00007ffff32ac4a0 in abort () from /lib64/libc.so.6
.2  0x000000000054a0e8 in seastar::memory::abort_on_underflow (size=<optimized out>) at core/memory.cc:1189
.3  seastar::memory::allocate_large (size=<optimized out>) at core/memory.cc:1194
.4  0x000000000054b305 in seastar::memory::allocate (size=size@entry=18446744073702885265) at core/memory.cc:1227
.5  0x000000000054b45e in malloc (n=n@entry=18446744073702885265) at core/memory.cc:1452
.6  0x00000000006013e4 in seastar::temporary_buffer<char>::temporary_buffer (this=0x6010195fc800, size=18446744073702885265) at /home/avi/urchin/seastar/core/temporary_buffer.hh:72
.7  0x0000000000a3908b in seastar::input_stream<char>::read_exactly (this=0x6010053d0248, n=18446744073702885265) at /home/avi/urchin/seastar/core/iostream-impl.hh:189
.8  0x0000000000a9c77f in compressed_file_data_source_impl::get (this=0x6010053d0240) at sstables/compress.cc:499
.9  0x0000000000aa1b01 in seastar::data_source::get (this=<optimized out>) at /home/avi/urchin/seastar/core/iostream.hh:63
.10 seastar::future<> seastar::input_stream<char>::consume<sstables::data_consume_rows_context>(sstables::data_consume_rows_context&)::{lambda()#1}::operator()() const (__closure=__closure@entry=0x6010195fcab0) at /home/avi/urchin/seastar/core/iostream-impl.hh:204
.11 0x0000000000aa22f0 in seastar::futurize<seastar::future<seastar::bool_class<seastar::stop_iteration_tag> > >::apply<seastar::future<> seastar::input_stream<char>::consume<sstables::data_consume_rows_context>(sstables::data_consume_rows_context&)::{lambda()#1}&>(sstables::data_consume_rows_context&&) (func=...) at /home/avi/urchin/seastar/core/future.hh:1312
.12 seastar::repeat<seastar::future<> seastar::input_stream<char>::consume<sstables::data_consume_rows_context>(sstables::data_consume_rows_context&)::{lambda()#1}>(sstables::data_consume_rows_context&&) (action=...) at /home/avi/urchin/seastar/core/future-util.hh:203
.13 0x0000000000a9e730 in seastar::input_stream<char>::consume<sstables::data_consume_rows_context> (consumer=..., this=<optimized out>) at /home/avi/urchin/seastar/core/iostream-impl.hh:237
.14 data_consumer::continuous_data_consumer<sstables::data_consume_rows_context>::consume_input<sstables::data_consume_rows_context> (c=..., this=<optimized out>) at sstables/consumer.hh:226
.15 sstables::data_consume_context::impl::read (this=<optimized out>) at sstables/row.cc:411
.16 sstables::data_consume_context::read (this=<optimized out>) at sstables/row.cc:437
.17 0x0000000000aafbae in sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#2}::operator()() const::{lambda()#1}::operator()() const (__closure=<optimized out>) at sstables/partition.cc:843
.18 seastar::apply_helper<sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#2}::operator()() const::{lambda()#1}, std::tuple<>&&, std::integer_sequence<unsigned long> >::apply({lambda()#2}&&, std::tuple) (args=..., func=...) at ./seastar/core/apply.hh:36
.19 seastar::apply<sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#2}::operator()() const::{lambda()#1}>(sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#2}::operator()() const::{lambda()#1}&&, std::tuple<>&&) (args=..., func=...)
    at ./seastar/core/apply.hh:44
.20 seastar::futurize<seastar::future<> >::apply<sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#2}::operator()() const::{lambda()#1}>(sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#2}::operator()() const::{lambda()#1}&&, std::tuple<>&&) (args=...,
    func=...) at ./seastar/core/future.hh:1302
.21 seastar::future<>::then<sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#2}::operator()() const::{lambda()#1}, seastar::future<> >(sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#2}::operator()() const::{lambda()#1}&&) (
    this=this@entry=0x6010195fcbb0, func=...) at ./seastar/core/future.hh:890
.22 0x0000000000ac273f in sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#2}::operator()() const (__closure=0x6010195fcc28) at sstables/partition.cc:843
.23 seastar::do_until_continued<sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#2}, sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#1}>(sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#1}&&, sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#2}&&, seastar::promise<>) (stop_cond=..., action=..., p=...) at /home/avi/urchin/seastar/core/future-util.hh:155
.24 0x0000000000ac29c3 in seastar::do_until<sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#2}, sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#1}>(sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#1}&&, sstables::sstable_streamed_mutation::fill_buffer()::{lambda()#2}&&) (action=..., stop_cond=..., this=<optimized out>) at /home/avi/urchin/seastar/core/future-util.hh:330
.25 sstables::sstable_streamed_mutation::fill_buffer (this=<optimized out>) at sstables/partition.cc:844
.26 0x0000000000ad3d2b in streamed_mutation::fill_buffer (this=0x6010195fcd10) at ./streamed_mutation.hh:489
.27 consume_flattened_in_thread<stable_flattened_mutations_consumer<compact_for_compaction<sstables::compacting_sstable_writer> >, std::function<bool (streamed_mutation const&)> >(mutation_reader&, stable_flattened_mutations_consumer<compact_for_compaction<sstables::compacting_sstable_writer> >&, std::function<bool (streamed_mutation const&)>&&) (

(gdb) p addr
$1 = {
  chunk_start = 13330037,
  chunk_len = 18446744073702885265,
  offset = 0
}
2017-08-27 13:32:37 +03:00
Avi Kivity
576e33149f Merge seastar upstream
* seastar 0083ee8...85ca12d (1):
  > Merge "Run-time logging configuration" from Jesse

Includes patch from Jesse:

"Switch to Seastar for logging option handling

In addition to updating the abstraction layer for Seastar logging in `log.hh`,
the configuration system (`db/config.{hh,cc}`) has been updated in two ways:

- The string-map type for Boost.program_options is now defined in Seastar.

- A configuration value can be marked as `UsedFromSeastar`. This is like `Used`,
  except the option is expected to be defined in the Boost.Program_options
  description for Seastar. If the option is not defined in Seastar, or it is
  defined with a different type, then a run-time exception is thrown early in
  Scylla's initialization. This is necessary because logging options which are
  now defined in Seastar were previously defined in Scylla and support for these
  options in the YAML file cannot be dropped. In order to be able to verify that
  options marked `UsedFromSeastar` are actually defined in Seastar, the
  interface for adding options to `db::config` has changed from taking a
  `boost::program_options::options_description_easy_init` (which is handle into
  a `boost::program_options::options_description` which only allows adding
  options) to taking a `boost::program_options::options_description`
  directly (which also allows querying existing options).

Scylla also fully defers to Seastar's support for run-time logging
configuration."

Signed-off-by: Jesse Haber-Kucharsky <jhaberku@scylladb.com>
Message-Id: <ef26cffb91bef1ae95d508187a6dd861a6c4fc84.1503344007.git.jhaberku@scylladb.com>
2017-08-27 13:11:33 +03:00
Avi Kivity
4f5b5bc8e6 Merge seastar upstream
* seastar b9f1eb7...0083ee8 (1):
  > http: Add MIME type support for JSON
2017-08-27 13:09:04 +03:00
Jesse Haber-Kucharsky
af95d3baa7 db/config.cc: Remove unused function
Signed-off-by: Jesse Haber-Kucharsky <jhaberku@scylladb.com>
Message-Id: <5a4e4e153c2d87e838d1cf6def7a494a92a72f63.1503344007.git.jhaberku@scylladb.com>
2017-08-27 13:08:19 +03:00
Vlad Zolotarov
9b9f19606f scylla_cpuset_setup: add the description near the perftune.yaml removing
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1503600250-25169-1-git-send-email-vladz@scylladb.com>
2017-08-27 12:51:12 +03:00
Asias He
68346f7e53 repair: Use with_semaphore for sp_parallelism_semaphore
Instead of calling semaphore.signal() manually.

Message-Id: <51b7ecdebac91763a2340fe00959742810614845.1503648936.git.asias@scylladb.com>
2017-08-27 12:50:38 +03:00
Avi Kivity
2b3ee4b0a7 Merge "make cf drop more robust" from Glauber
"We have recently found two problems with the drop_column_family code
that need addressing. The first is that exceptions in truncate() may
lead to stop() being skipped, which can cause Scylla to crash.

The other is that a truncate() issued before drop_column_family may get
the chance to execute only after the column family is already dropped
and also crash (That is issue 2726).

The second problem is the classic problem of asynchronous execution on
an object that may terminate, which we have been traditionally solving
with a gate. We add a gate to the column family that will be closed
during CF stop(), and we will require all asynchronous operations to
enter it.

The immediate fix is for truncate(), where we have seen a real, concrete
problem. But it would be good to audit other code paths to make sure
that they are sane.

The most obvious ones, flush, compaction and sstable deletion are
already sane, since they are waited on explicitly during stop()."

Fixes #2726.

* 'issue-2726-v2-master' of github.com:glommer/scylla:
  database: add gate for generic async operations to column family
  database: make sure that column family is always stopped when dropped
2017-08-27 12:42:20 +03:00
Avi Kivity
1f66940134 sstables: switch std::deque to chunked_vector
Reduce susceptibility to memory fragmentation.
2017-08-26 16:44:47 +03:00
Avi Kivity
204659ef40 tests: add test for chunked_vector 2017-08-26 16:44:47 +03:00
Avi Kivity
3ba2c0652d utils: add a new container type chunked_vector
We currently use std::deque<> for when we need large random-access containers,
but deque<> requires nr_items * sizeof(T) / 64 bytes of contiguous memory, which can
exceed our 256k fragmentation unit with large sstables.  The new
container, which is a cross between deque and vector, imposes much weaker
limitations.

Like deque, we allocate chunks of contiguous items, but they are
128k in size instead of 512. The last chunk can be smaller to avoid
allocating 128k for a really small vector.
2017-08-26 16:44:45 +03:00
Tomasz Grabiec
2ca99be27d ring_position_view: Print token instead of token pointer
Broken in e989d65539.
Message-Id: <1503667158-7544-1-git-send-email-tgrabiec@scylladb.com>
2017-08-25 14:25:21 +01:00
Glauber Costa
83323e155e database: add gate for generic async operations to column family
run_with_compaction_disabled(), which is called by truncate, has a
pretty large defer point in remove(). When the code gets to finally
execute, we can't guarantee that the column family will still be alive.

That is true in particular if we issued a drop table command following
truncate: by the time truncate gets to resume, the CF will be gone.
Before the column family is dropped, it will always call its stop()
method, which means we have an opportunity to do some waiting there. We
already wait for flushes and current compactions to end.

Traditionally, we have been solving similar problems by adding a gate
that will catch asynchronous operations and making sure that potentially
asynchronous operations will enter the gate before executing. Let's do
the same thing here. We will close() the gate during stop().

Fixes #2726

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-08-24 13:12:57 -04:00
Glauber Costa
d090e7be35 database: make sure that column family is always stopped when dropped
truncate can throw exceptions. If it does, cf->stop() will never be
called because it is contained in a .then clause instead of a finally clause.

One of the things that truncate does - in a finally block of its own -
is to initiate a final compaction. If it returns an exception, nobody will
wait for that compaction to finish (since cf->stop() is the one doing
that) and we'll crash.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-08-24 13:01:47 -04:00
Avi Kivity
40aeb00151 Merge "consider the pre-existing cpuset.conf when configuring networking mode" from Vlad
"Preserve the networking configuration mode during the upgrade by generating the /etc/scylla.d/perftune.yaml
file and using it."

Fixes #2725.

* 'dist_respect_cpuset_conf-v3' of https://github.com/vladzcloudius/scylla:
  scylla_prepare: respect the cpuset.conf when configuring the networking
  scylla_cpuset_setup: rm perftune.yaml
  scylla_cpuset_setup: add a missing "include" of scylla_lib.sh
2017-08-24 18:53:22 +03:00
Vlad Zolotarov
c72eb34b89 scylla_prepare: respect the cpuset.conf when configuring the networking
Choose the networking configuration mode according to the current contents of /etc/scylla.d/cpuset.conf.

If it doesn't exist, use the default mode.
If it exists, use the mode that was used to generate the CPU set.

Store the configuration in /etc/scylla.d/perftune.yaml.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-08-24 09:09:40 -04:00
Vlad Zolotarov
89285a13ac scylla_cpuset_setup: rm perftune.yaml
scylla_setup resets our configuration and perftune.yaml is part of it.
perftune.yaml is generated from the contents of cpuset.conf, therefore we should
reset these together.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-08-24 09:09:40 -04:00
Vlad Zolotarov
d0ccfe34b9 scylla_cpuset_setup: add a missing "include" of scylla_lib.sh
scylla_cpuset_setup uses a verify_args() function that is defined in scylla_lib.sh.

Fixes #2716

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-08-24 09:09:40 -04:00
Paweł Dziepak
1006a946e8 mvcc: allow invoking maybe_merge_versions() inside allocating section
Message-Id: <20170823083544.4225-1-pdziepak@scylladb.com>
2017-08-24 14:30:38 +02:00
Pekka Enberg
870de26e35 index: Add index class
Add a simple index class, which represents an instantiated index.
2017-08-24 14:00:02 +03:00
Pekka Enberg
d63a650b3f index: Pass column_family to secondary_index_manager constructor
We need column family for various secondary index manager operations.
2017-08-24 14:00:02 +03:00
Pekka Enberg
981e320d54 database: Make secondary index manager per-column family
Make the secondary index manager per-column family like in Apache
Cassandra to keep CQL front-end similar between the two codebases.
2017-08-24 14:00:02 +03:00
Botond Dénes
839d1db4d3 parse(compression): add missing reinterpret_cast<char*>
std::copy_n was using value as uint64_t*, smashing the stack.
Also remove unused variable.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <4e2d71fc74326965dfd98bed2347100fb6ebe43b.1503568210.git.bdenes@scylladb.com>
2017-08-24 13:38:03 +03:00
Avi Kivity
9d27455744 Revert "Revert "Merge "Compress in-memory compression-info" from Botond""
This reverts commit 9656fd79a0. A fix is now
available.
2017-08-24 13:37:35 +03:00
Tomasz Grabiec
9656fd79a0 Revert "Merge "Compress in-memory compression-info" from Botond"
This reverts commit ef85cf1cb3, reversing
changes made to de011ece52.

Vlad reports that this causes SIGSEGV on cluster restarts.

seastar::backtrace_buffer::append_backtrace() at /home/vladz/work/urchin/seastar/core/reactor.cc:274
 (inlined by) print_with_backtrace at /home/vladz/work/urchin/seastar/core/reactor.cc:289
seastar::print_with_backtrace(char const*) at /home/vladz/work/urchin/seastar/core/reactor.cc:296
sigsegv_action at /home/vladz/work/urchin/seastar/core/reactor.cc:3512
 (inlined by) operator() at /home/vladz/work/urchin/seastar/core/reactor.cc:3498
 (inlined by) _FUN at /home/vladz/work/urchin/seastar/core/reactor.cc:3494
?? ??:0
operator()<seastar::temporary_buffer<char> > at /home/vladz/work/urchin/sstables/sstables.cc:870
 (inlined by) apply at /home/vladz/work/urchin/seastar/core/apply.hh:36
 (inlined by) apply<sstables::parse(sstables::random_access_reader&, sstables::compression&)::<lambda()>::<lambda()>::<lambda()>::<lambda(auto:104)>, seastar::temporary_buffer<char> > at /home/vladz/work/urchin/seastar/core/apply.hh:44
 (inlined by) do_void_futurize_apply_tuple<sstables::parse(sstables::random_access_reader&, sstables::compression&)::<lambda()>::<lambda()>::<lambda()>::<lambda(auto:104)>, seastar::temporary_buffer<char> > at /home/vladz/work/urchin/seastar/core/future.hh:1270
 (inlined by) apply<sstables::parse(sstables::random_access_reader&, sstables::compression&)::<lambda()>::<lambda()>::<lambda()>::<lambda(auto:104)>, seastar::temporary_buffer<char> > at /home/vladz/work/urchin/seastar/core/future.hh:1290
 (inlined by) then<sstables::parse(sstables::random_access_reader&, sstables::compression&)::<lambda()>::<lambda()>::<lambda()>::<lambda(auto:104)> > at /home/vladz/work/urchin/seastar/core/future.hh:890
 (inlined by) operator() at /home/vladz/work/urchin/sstables/sstables.cc:873
 (inlined by) do_until_continued<sstables::parse(sstables::random_access_reader&, sstables::compression&)::<lambda()>::<lambda()>::<lambda()>, sstables::parse(sstables::random_access_reader&, sstables::compression&)::<lambda()>::<lambda()>::<lambda()>&> at /home/vladz/work/urchin/seastar/core/future-util.hh:155
do_until<sstables::parse(sstables::random_access_reader&, sstables::compression&)::<lambda()>::<lambda()>::<lambda()>, sstables::parse(sstables::random_access_reader&, sstables::compression&)::<lambda()>::<lambda()>::<lambda()>&> at /home/vladz/work/urchin/seastar/core/future-util.hh:330
 (inlined by) operator() at /home/vladz/work/urchin/sstables/sstables.cc:874
 (inlined by) apply at /home/vladz/work/urchin/seastar/core/apply.hh:36
 (inlined by) apply<sstables::parse(sstables::random_access_reader&, sstables::compression&)::<lambda()>::<lambda()> > at /home/vladz/work/urchin/seastar/core/apply.hh:44
 (inlined by) apply<sstables::parse(sstables::random_access_reader&, sstables::compression&)::<lambda()>::<lambda()> > at /home/vladz/work/urchin/seastar/core/future.hh:1302
then<sstables::parse(sstables::random_access_reader&, sstables::compression&)::<lambda()>::<lambda()> > at /home/vladz/work/urchin/seastar/core/future.hh:890
 (inlined by) operator() at /home/vladz/work/urchin/sstables/sstables.cc:875
 (inlined by) apply at /home/vladz/work/urchin/seastar/core/apply.hh:36
 (inlined by) apply<sstables::parse(sstables::random_access_reader&, sstables::compression&)::<lambda()> > at /home/vladz/work/urchin/seastar/core/apply.hh:44
 (inlined by) apply<sstables::parse(sstables::random_access_reader&, sstables::compression&)::<lambda()> > at /home/vladz/work/urchin/seastar/core/future.hh:1302
operator()<seastar::future_state<> > at /home/vladz/work/urchin/seastar/core/future.hh:900
 (inlined by) run at /home/vladz/work/urchin/seastar/core/future.hh:395
seastar::reactor::run_tasks(seastar::circular_buffer<std::unique_ptr<seastar::task, std::default_delete<seastar::task> >, std::allocator<std::unique_ptr<seastar::task, std::default_delete<seastar::task> > > >&) at /home/vladz/work/urchin/seastar/core/reactor.cc:2317
seastar::reactor::run() at /home/vladz/work/urchin/seastar/core/reactor.cc:2775
seastar::app_template::run_deprecated(int, char**, std::function<void ()>&&) at /home/vladz/work/urchin/seastar/core/app-template.cc:142
2017-08-24 11:44:14 +02:00
Alexys Jacob
a133290694 scylla_io_setup: migrate away from deprecated string.atoi
Python 2.0 deprecated string.atoi and we should move away from it
as stated here: https://docs.python.org/2/library/string.html#string.atoi

Signed-off-by: Alexys Jacob <ultrabug@gentoo.org>
Message-Id: <20170817134002.28124-1-ultrabug@gentoo.org>
2017-08-24 12:36:34 +03:00
Avi Kivity
dcac7125fe Merge seastar upstream
* seastar e96881a...b9f1eb7 (9):
  > httpd: indentation patch
  > httpd: handle exception when shutting down
  > stall-detector: Allow backtrace throttling to be configured
  > stall-detector: Fix messages about suppression not appearing
  > scripts: posix_net_conf.sh: allow passing a perftune.py configuration file as a parameter
  > scripts: perftune.py: add the possibility to pass the parameters in a configuration file and print the YAML file with the current configuration
  > scripts: perftune.py: actually use the number of Rx queues when comparing to the number of CPU threads
  > core: make current_backtrace() noexcept
  > memory: add large allocation detector stubs for default allocator
2017-08-24 11:35:28 +03:00
Piotr Jastrzebski
477068d2c3 Make streamed_mutation more exception safe
Make sure that push_mutation_fragment leaves
_buffer_size with a correct value if exception
is thrown from emplace_back.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <83398412aa78332d88d91336b79140aecc988602.1503474403.git.piotr@scylladb.com>
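The fix described above boils down to ordering the size update after the (possibly throwing) insertion. A minimal Python sketch of the pattern; the names (`FragmentBuffer`, `push_fragment`) are illustrative, not Scylla's actual API:

```python
class FragmentBuffer:
    """Toy model of the exception-safety fix: update the accounted size
    only after the append has succeeded, so an exception thrown by the
    append leaves _buffer_size consistent with the buffer's contents."""
    def __init__(self):
        self._buffer = []
        self._buffer_size = 0

    def push_fragment(self, fragment, size):
        self._buffer.append((fragment, size))  # may raise (e.g. OOM)
        self._buffer_size += size              # reached only on success
```

Had the size been incremented first, a failed append would leave the accounted size larger than the real buffer contents.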
2017-08-23 09:37:04 +01:00
Avi Kivity
2f41ed8493 Merge "repair: Do not allow repair until node is in NORMAL status" from Asias
Fixes #2723.

* tag 'asias/repair_issue_2723_v1' of github.com:cloudius-systems/seastar-dev:
  repair: Do not allow repair until node is in NORMAL status
  gossip: Add is_normal helper
2017-08-23 09:44:45 +03:00
Asias He
69c81bcc87 repair: Do not allow repair until node is in NORMAL status
The following backtrace was reported by user when running repair and keeping restarting the node at the same time.

 #0  0x00007eff077281d7 in raise () from /lib64/libc.so.6
 #1  0x00007eff07729a08 in abort () from /lib64/libc.so.6
 #2  0x00007eff07721146 in __assert_fail_base () from /lib64/libc.so.6
 #3  0x00007eff077211f2 in __assert_fail () from /lib64/libc.so.6
 #4  0x00000000010ef2c2 in locator::token_metadata::first_token_index (this=0x641000214e98, start=...) at locator/token_metadata.cc:133
 #5  0x00000000010ef2d9 in locator::token_metadata::first_token (this=0x641000214e98, start=...) at locator/token_metadata.cc:143
 #6  0x00000000010e329d in locator::abstract_replication_strategy::get_natural_endpoints (this=0x641000494000, search_token=...)
     at locator/abstract_replication_strategy.cc:66
 #7  0x0000000001481186 in get_neighbors (hosts=std::vector of length 0, capacity 0, data_centers=std::vector of length 0, capacity 0,
     range=<error reading variable: access outside bounds of object referenced via synthetic pointer>, ksname=..., db=...) at repair/repair.cc:196
 #8  repair_range<nonwrapping_range<dht::token> > (range=..., ri=...) at repair/repair.cc:781
 #9  <lambda(auto:99&)>::<lambda(auto:100&&)>::<lambda(auto:101&)>::<lambda()>::operator() (__closure=0x7efec07f7460) at repair/repair.cc:1005
 #10 futurize<future<bool_class<stop_iteration_tag> > >::apply<repair_ranges(repair_info)::<lambda(auto:99&)>::

It is reproduced with

1) while true; do curl -X POST --header "Content-Type: application/json" --header "Accept: application/json" "http://127.0.0.1:10000/storage_service/repair_async/ks3"; done

2) start node 127.0.0.1, stop node 127.0.0.1 in a loop

The problem is, during boot up, the token_metadata is not replicated to all shards until
the node goes into NORMAL status.

To fix, check until node is in NORMAL status before allowing repair.

Fixes #2723
2017-08-23 14:40:04 +08:00
Asias He
65912dd1ac gossip: Add is_normal helper
It will be used by repair to check if a node is in NORMAL status.
2017-08-23 14:40:04 +08:00
Amnon Heiman
abbd78367c Add configuration to disable per keyspace and column family metrics
The number of keyspace and column family metrics reported is
proportional to the number of shards times the number of keyspace/column
families.

This can cause a performance issue both on the reporting system and on
the collecting system.

This patch adds a configuration flag (set to false by default) to enable
or disable those metrics.

Fixes #2701

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <20170821113843.1036-1-amnon@scylladb.com>
2017-08-22 19:19:54 +03:00
Botond Dénes
4f42acc956 abstract_marker::raw::prepare: add missing return statement
The function doesn't return a value in the all-false branch.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <3c1976682ffc190d741c066d942b83be4463cae8.1503402721.git.bdenes@scylladb.com>
2017-08-22 15:06:18 +03:00
Paweł Dziepak
9d82a1ebfd abstract_read_executor: make make_requests() exception safe
Message-Id: <20170821162934.25386-5-pdziepak@scylladb.com>
2017-08-22 12:09:42 +02:00
Paweł Dziepak
31afc2f242 shared_index_lists: restore indentation
Message-Id: <20170821162934.25386-4-pdziepak@scylladb.com>
2017-08-22 12:09:42 +02:00
Paweł Dziepak
93eaa95378 sstables: make shared_index_lists::get_or_load exception safe
Message-Id: <20170821162934.25386-3-pdziepak@scylladb.com>
2017-08-22 12:09:42 +02:00
Avi Kivity
ef85cf1cb3 Merge "Compress in-memory compression-info" from Botond
"Overly large metadata can hog memory which especially hurts in setups
with bad disk/memory ratio. To ease the pain compress the in-memory
compression-info.

The compression is implemented based on Avi's idea which is to group n
offsets together into segments, where each segment stores a base
absolute offset into the file, the other offsets in the segments being
relative offsets (and thus of reduced size).  Also offsets are allocated
only just enough bits to store their maximum value. The offsets are thus
packed in a buffer like so:
    arrrarrrarrr...
where n is 4, a is an absolute offset and r are offsets relative to a.
This of course means that stored offsets will not be aligned, not even
on a byte boundary, but the size reduction is pretty convincing.
In addition, segments are stored in buckets, where each bucket has its
own base offset. Segments in a bucket are optimized to
address as large a chunk of the data as possible for a given chunk
size."

Ref #1946.

* 'bdenes/compress-compression-v3' of https://github.com/denesb/scylla:
  Add unit test for compress::offsets
  Optimise the storage of compression chunk offsets
  Add script to precompute segmented compression parameters
2017-08-22 10:30:58 +03:00
Botond Dénes
62c18da35c Add unit test for compress::offsets 2017-08-21 17:06:20 +03:00
Botond Dénes
028c7a0888 Optimise the storage of compression chunk offsets
To reduce the memory footprint of compression-info, n offsets are
grouped together into segments, where each segment stores a base
absolute offset into the file, the other offsets in the segments being
relative offsets (and thus of reduced size). Also offsets are
allocated only just enough bits to store their maximum value. The
offsets are thus packed in a buffer like so:
     arrrarrrarrr...
where n is 4, a is an absolute offset and r are offsets relative to a.

The optimal value of n can be calculated for a given file_size (f) and
chunk_size (c), by finding the minima of the following function:

f(n) = (f/c)/n * (log2(f) + (n - 1)*log2((n-1)*(c + 64)))

This is done in an empirical way, using a script (see below).

Furthermore segments are stored in buckets, where each bucket has its
own base offset. Each bucket therefore can address an equal chunk of the
file and furthermore each segment in a bucket can address an equal
sub-chunk of this area.
The value of a given offset i is thus:
    bucket_base_offset_for(i) + segment_base_offset_for(i) + offset(i)

To account for the bucketed storage we calculate a local_f, which is
optimized so that a bucketful of segmented offsets can address the
largest possible chunk of f. As value of this local_f only depends on
the bucket_size (b) and c the value of n can be made independent of f
and therefore only depend on one dynamic value, c. This makes life much
simpler as we don't need to know the size of the file up-front, we can
just append buckets to the storage on demand, while the required storage
is still less than a third [1] of the original storage requirements
(std::deque<uint64>).

The table with the minima(f(n)) for different f and c values is
pre-computed by gen_segmented_compress_params.py and
stored in sstables/segmented_compress_params.hh. This script also
creates a table with the best values of local_f for the given
bucket_size. At runtime we only select the best params based on c.

[1] This was calculated for c=4K and b=4K
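The `arrrarrr...` packing scheme can be illustrated with a small Python sketch that bit-packs groups of n offsets, one absolute plus n-1 relative, with no byte alignment. The group size and bit widths below are illustrative parameters, not the precomputed values from segmented_compress_params.hh:

```python
def pack_offsets(offsets, n, abs_bits, rel_bits):
    """Pack offsets as groups of n: one absolute offset (abs_bits wide)
    followed by n-1 offsets relative to it (rel_bits wide), packed
    back-to-back into one big integer used as a bit buffer."""
    buf, pos, base = 0, 0, 0
    for i, off in enumerate(offsets):
        if i % n == 0:
            base, width, value = off, abs_bits, off
        else:
            width, value = rel_bits, off - base
        assert 0 <= value < (1 << width), "offset does not fit its slot"
        buf |= value << pos
        pos += width
    return buf, pos  # packed bits and total bit count

def unpack_offsets(buf, count, n, abs_bits, rel_bits):
    out, pos, base = [], 0, 0
    for i in range(count):
        width = abs_bits if i % n == 0 else rel_bits
        value = (buf >> pos) & ((1 << width) - 1)
        pos += width
        if i % n == 0:
            base = value
            out.append(value)
        else:
            out.append(base + value)
    return out
```

With n=4 and 10-bit relative slots, a group of four offsets costs 20 + 3*10 = 50 bits instead of 4*64 bits for raw uint64 storage.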
2017-08-21 17:06:12 +03:00
Avi Kivity
de011ece52 main: deprecate non-murmur3 partitioners more forcefully
Some (most?) users don't read logs or release notes, so they won't notice
that the ByteOrdered and Random partitioners were deprecated in 2.0. Make
them notice by refusing to start with a deprecated partitioner, unless a
switch is explicitly enabled.
Message-Id: <20170820073424.8331-1-avi@scylladb.com>
2017-08-21 14:32:22 +02:00
Avi Kivity
9f415ef870 sstables: accurate summary entry size calculation
Calculate the summary entry size correctly, so we don't end up with oversize
summaries.
Message-Id: <20170819184255.14181-2-avi@scylladb.com>
2017-08-21 14:28:57 +02:00
Avi Kivity
17c372bf0e sstables: get rid of 64kB minimum index advance to generate summary
Limiting summary entry generation to at most one summary entry
per 64k of index data can lead to large index pages, with thousands
of index entries per summary entry. These are slow to parse, and there
is no real gain from the limit, since we already enforce a size
limit on the summary.

Remove the limit and allow summary entry generation based solely on
spanned data size.

Fixes #2711.
Message-Id: <20170819184255.14181-1-avi@scylladb.com>
2017-08-21 14:26:44 +02:00
Avi Kivity
81a33df25d dht: reduce split_range_to_single_shard contiguous memory demand
split_range_to_single_shard() returns a vector of size 4096, with
each element (a partition_range) of size 100. The total of 400k can
cause defragmentation if memory is fragmented.

Fix by using a deque.

Fixes #2707.
Message-Id: <20170819141017.28287-1-avi@scylladb.com>
2017-08-21 14:25:45 +02:00
Piotr Jastrzebski
c602ffd610 Make Scylla ttl expiration behave like in Cassandra
Fixes #2497

[tgrabiec: reworked the title]

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <2f5a99dce6ef11fe0ef135c9fa0592078fc9a056.1502886874.git.piotr@scylladb.com>
2017-08-21 14:25:45 +02:00
Botond Dénes
eae33a1f19 Add script to precompute segmented compression parameters
The script generates sstables/segmented_compress_params.hh which
contains a list with the optimal number of grouped offsets for
different data and chunk sizes as well as a list with the best
nominal data sizes for different chunk sizes, given a bucket size.
Data sizes are in the range of [2**4,2**50] and chunks in the
range of [2**4, 2**30]. Data sizes that are not used with the current
bucket_size are omitted.
See next commit for details of how the calculated values are used.
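The search the script performs can be sketched as a brute-force minimisation of the cost function f(n) quoted in the accompanying commit message; the ranges and function names here are assumptions for illustration:

```python
import math

def summary_storage_bits(n, f, c):
    # f(n) from the commit message: approximate bits needed to store the
    # f/c chunk offsets when grouped n at a time (one absolute offset of
    # log2(f) bits plus n-1 relative offsets per group).
    return (f / c) / n * (math.log2(f) + (n - 1) * math.log2((n - 1) * (c + 64)))

def best_group_size(f, c, n_max=64):
    # Empirical minimisation, as the script does: try every candidate n
    # and keep the one with the smallest storage cost. n starts at 2
    # because the relative-offset term is undefined for n = 1.
    return min(range(2, n_max + 1), key=lambda n: summary_storage_bits(n, f, c))
```

Evaluating `best_group_size` over a grid of data sizes f and chunk sizes c yields the kind of lookup table the generated header stores.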
2017-08-21 10:44:08 +03:00
Avi Kivity
5a2439e702 main: check for large allocations
Large allocations can require cache evictions to be satisfied, and can
therefore induce long latencies. Enable the seastar large allocation
warning so we can hunt them down and fix them.

Message-Id: <20170819135212.25230-1-avi@scylladb.com>
2017-08-21 10:25:40 +03:00
Pekka Enberg
318423d50b Merge seastar upstream
* seastar 2d16aca...e96881a (4):
  > memory: add detector for large allocations
  > memory: reduce large allocations for small pools
  > net: Fix potential NULL pointer dereference in udp.cc
  > Update dpdk submodule
2017-08-21 10:24:08 +03:00
Tomasz Grabiec
8f2ca52740 tests: Run test_query_only_static_row test case on all mutation sources
The test checks behavior common to all mutation readers, so it's
better to run it against all mutation sources rather than only for
cache reader.

Message-Id: <1503072333-17995-1-git-send-email-tgrabiec@scylladb.com>
2017-08-20 12:23:28 +03:00
Raphael S. Carvalho
10eaa2339e compaction: Make resharding go through compaction manager
Two reasons for this change:
1) every compaction should be multiplexed to manager which in turn
will make decision when to schedule. improvements on it will
immediately benefit every existing compaction type.
2) active tasks metric will now track ongoing reshard jobs.

Fixes #2671.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170817224334.6402-1-raphaelsc@scylladb.com>
2017-08-20 11:35:14 +03:00
Takuya ASADA
38b2ff617f dist/redhat: follow the change on libgcc/libstdc++ package name
Since we moved to an external 3rdparty repository, we added a '53' suffix to the gcc
packages, so follow the change.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20170819092039.1090-2-syuu@scylladb.com>
2017-08-19 16:01:28 +03:00
Takuya ASADA
f1b5401d1f dist/redhat: Change g++ command name on CentOS
We have added a '-5.3' suffix to the g++ command as of scylla-gcc53-c++-5.3.1-2.2;
follow the change in the scylla build script.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20170819092039.1090-1-syuu@scylladb.com>
2017-08-19 16:01:27 +03:00
Avi Kivity
e428805ba5 Merge "Optimize query result partition and row counts" from Duarte
"Now that range queries go through the normal digest path, we rely on
query::result::calculate_counts() to count the amount of partitions
and rows returned.

This series optimizes it, in case it is needed, and also changes the
result message to include the partition and row counts, avoiding the
calculation altogether."

* 'calculate-counts/v3' of github.com:duarten/scylla:
  query-result: Send row and partition count over the wire
  query::result: Optimize calculate_counts()
2017-08-17 13:41:21 +03:00
Alexys Jacob
e5ff8efea3 dist: Fix Gentoo Linux scylla-jmx and scylla-tools packages detection
These two admin related packages will be packaged under the "app-admin"
category and not the "dev-db" one.

This fixes the detection path of the packages for scylla_setup.

Signed-off-by: Alexys Jacob <ultrabug@gentoo.org>
Message-Id: <20170817094756.21550-1-ultrabug@gentoo.org>
2017-08-17 13:20:43 +03:00
Nadav Har'El
7832d8a883 get rid of unused part in configure.py
Scylla's configure.py contains stuff we copied from Seastar's
configure.py, but is no longer used. Let's get rid of some of it.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20170813150842.12603-1-nyh@scylladb.com>
2017-08-17 12:05:44 +03:00
Duarte Nunes
1e7f0eab82 memtable: Created readers should be fast forwardable by default
mutation_reader::forwarding defaults to yes.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170816180304.2121-1-duarte@scylladb.com>
2017-08-17 10:21:01 +03:00
Botond Dénes
e70cfc8f36 incremental_reader_selector: account for possibly disengaged lower bound
In addition to the constructor (fixed previously) the check for no
sstables on the first call to select() also has to be prepared for the
lower bound of the range being disengaged.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <4ab1296c71814fcd492996fa36fd00fd7bbbbc7f.1502949875.git.bdenes@scylladb.com>
2017-08-17 10:07:26 +03:00
Botond Dénes
af83b7f57b incremental_reader_selector: use lazy_deref instead of ternary operator
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <4f4b884c6a1f517bd654f3b27608d854b17a66e1.1502948635.git.bdenes@scylladb.com>
2017-08-17 08:45:46 +03:00
Botond Dénes
eb7eee510d combined_mutation_reader_test: use the global const objects directly
Instead of local ones.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <3ec1a70e4c0198c0563dff9688bbaa7fcfcace71.1502891190.git.bdenes@scylladb.com>
2017-08-16 16:56:42 +03:00
Paweł Dziepak
784dcbf1ca sstables: initialise index metrics on all shards
Fixes #2702.

Message-Id: <20170816085454.21554-1-pdziepak@scylladb.com>
2017-08-16 15:44:26 +03:00
Avi Kivity
d7e3fbc6fe Merge seastar upstream
* seastar 2a43102...2d16aca (1):
  > fstream: do not ignore unresolved future

Fixes #2697.
2017-08-16 15:09:59 +03:00
Botond Dénes
611774b1d9 Use the incremental reader for compaction
As leveled compaction strategy stands to gain the most from
incrementally opening sstables.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <292648d3fa4ea97376c0b4360754a20132194f63.1502822066.git.bdenes@scylladb.com>
2017-08-15 21:38:04 +03:00
Takuya ASADA
0f9b095867 dist/common/scripts: prevent ignoring a flag passed after another flag that requires a parameter
When the user mistakenly forgets to pass the parameter for a flag, our scripts
misparse the next flag as the parameter.
ex) Correct usage is '--ntp-domain <domain> --setup-nic', but
    '--ntp-domain --setup-nic' was passed.
As a result, the next flag is ignored by the scripts.
To prevent such behavior, reject any parameter that starts with '--'.

Fixes #2609

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20170815114751.6223-1-syuu@scylladb.com>
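The fix amounts to validating that the value consumed by a parameter-taking flag does not itself look like a flag. A Python sketch of the idea (the actual scripts are shell; the flag names are taken from the example above):

```python
def parse_flags(argv):
    """Toy re-creation of the fix: a flag that requires a parameter
    rejects a value starting with '--' instead of silently consuming
    the next flag as its parameter."""
    opts = {"ntp_domain": None, "setup_nic": False}
    args = list(argv)
    while args:
        arg = args.pop(0)
        if arg == "--ntp-domain":
            if not args or args[0].startswith("--"):
                raise SystemExit("--ntp-domain requires a parameter")
            opts["ntp_domain"] = args.pop(0)
        elif arg == "--setup-nic":
            opts["setup_nic"] = True
        else:
            raise SystemExit("unknown option: " + arg)
    return opts
```

Without the `startswith("--")` check, `--ntp-domain --setup-nic` would set the NTP domain to the literal string `--setup-nic` and drop the second flag.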
2017-08-15 18:27:32 +03:00
Duarte Nunes
c7aa3ea069 mutation_partition: Remove obsolete short read detection
When compacting a partition for querying we would read an extra row,
to include any tombstones between that one and the previous row.

This is no longer needed since we have a general mechanism to detect
short reads in the storage_proxy.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170811103031.22866-1-duarte@scylladb.com>
2017-08-15 12:01:55 +01:00
Avi Kivity
8df6dd1fa0 database: make incremental_reader_selector robust vs. full-range partition_range
incremental_reader_selector assumes the partition_range it receives has a lower
bound, but it was seen in mutation_test that this is not so.

Fix by checking whether the bound exists or not.
Message-Id: <20170815095852.14149-1-avi@scylladb.com>
2017-08-15 11:03:22 +01:00
Avi Kivity
a35bfb3ea9 Merge seastar upstream
* seastar 47b31f6...2a43102 (1):
  > Merge "Fix crash in rpc due to access to already destroyed server socket" from Gleb

Fixes #2690
2017-08-14 16:23:02 +03:00
Avi Kivity
e892a0082a Merge "Drop exhausted mutation_readers when possible" from Duarte
"Exhausted readers belonging to a combined_mutation_reader can be fast
forwarded, so we have to keep them around. However, if the reader is
not fast forwardable, then we can drop the contained readers and their
buffers."

* 'ff-reader/v2' of github.com:duarten/scylla:
  combined_mutation_reader: Drop exhausted readers if not in FF mode
  combined_mutation_reader: Remove superfluous mutation_readers list
  memtable_snapshot_source: Created readers should be fast forwardable
2017-08-14 16:20:38 +03:00
Duarte Nunes
7fb6a74302 combined_mutation_reader: Drop exhausted readers if not in FF mode
Exhausted readers can be fast forwarded, so we have to keep them
around. However, if the current reader is not fast forwardable, then
we can drop those readers and their buffers.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-08-14 14:37:27 +02:00
Duarte Nunes
0b53f88a42 combined_mutation_reader: Remove superfluous mutation_readers list
The _all_readers variable can do the same job.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-08-14 14:37:27 +02:00
Duarte Nunes
77477605c1 memtable_snapshot_source: Created readers should be fast forwardable
As they're used by the cache tests.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-08-14 14:37:27 +02:00
Avi Kivity
afff29bdb9 Merge seastar upstream
* seastar edb73ab...47b31f6 (1):
  > tls: Only recurse once in shutdown code

Fixes #2691.
2017-08-14 15:09:42 +03:00
Duarte Nunes
a17cef76b2 query-result-writer: Remove unneeded field
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170811102940.22747-1-duarte@scylladb.com>
2017-08-14 12:33:33 +01:00
Duarte Nunes
ec75eac37d ring_position_exponential_vector_sharder: Take ranges by rvalue
Avoids some copies.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170814093310.29200-1-duarte@scylladb.com>
2017-08-14 12:55:43 +03:00
Duarte Nunes
3b9a9b7321 query-result: Send row and partition count over the wire
To avoid calculating them on the coordinator side.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-08-14 10:29:06 +02:00
Duarte Nunes
d7bab684ea query::result: Optimize calculate_counts()
Now that range queries go through the normal digest path, we rely on
query::result::calculate_counts() to count the amount of partitions
and rows returned. This patch makes it a bit faster.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-08-14 10:28:29 +02:00
Avi Kivity
cb2c5016ea Merge seastar upstream
* seastar 7a49ae5...edb73ab (11):
  > scripts: perftune.py: change the network module mode auto selection heuristic
  > net/tls: explicitly ignore ready future during shutdown
  > Use python2 explicitly as an interpreter for Python v2 scripts
  > peering_sharded_service: prevent over-run the container
  > Add link to documentation to the README.md
  > Add guidelines for contributing to Seastar
  > sharded: fix move constructor for peering_sharded_service services
  > Provide a convenient way to lazy-convert to string the values of pointers
  > tutorial: overhaul semaphores section
  > simple-stream: Make fragmented::write_substream return simple if possible
  > simple-stream: Make simple/fragmented memory output stream top level
2017-08-14 10:29:27 +03:00
Raphael S. Carvalho
050a7019b8 sstables/index_reader: fix index reader for summary entry spanning lots of keys
The quantity parameter prevents index_reader from reading all index entries of a
summary entry that spans more than min_index_interval entries. That can happen
after the introduction of size-based sampling, and consequently the sstable will
not be able to return a key whose logical position in the summary entry is beyond
min_index_interval. It's ok not to use quantity because index_reader will read
all indexes until either the next summary entry or the end of file is reached.

Fixes test_sstable_conforms_to_mutation_source

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170812045821.25269-1-raphaelsc@scylladb.com>
2017-08-12 09:44:16 +03:00
Duarte Nunes
08e284a07e combined_mutation_reader: Don't drop mutation readers
This patch fixes a regression introduced in a6b9186ca.

We should keep the readers around in case a subsequent call to
fast_forward() will require them.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170811160444.12795-1-duarte@scylladb.com>
2017-08-11 19:17:29 +03:00
Duarte Nunes
44b6da2e90 test.py: Add combined_mutation_reader_test
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170811155017.9899-1-duarte@scylladb.com>
2017-08-11 18:54:11 +03:00
Avi Kivity
dbf8625ac9 Merge "size-based sampling for sstable summary" from Raphael
"Fixes #1842."

* 'size_based_sampling_v3' of github.com:raphaelsc/scylla:
  tests: test summary entry spanning more keys than min interval
  db/config: introduce sstable_summary_ratio option
  sstables: introduce size-based sampling for sstable summary
  sstables: make components_writer::offset const qualified and uint64_t
  sstables: make writer::offset const qualified and uint64_t
2017-08-11 18:41:45 +03:00
Duarte Nunes
e7d56884c0 list_reader_selector: Prevent infinite loop
In case the readers are empty.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170811153142.8926-1-duarte@scylladb.com>
2017-08-11 18:34:55 +03:00
Vladimir Krivopalov
003e8cf250 Use python2 explicitly as an interpreter for Python v2 scripts
Signed-off-by: Vladimir Krivopalov <vladimir.krivopalov@gmail.com>
Message-Id: <20170811032712.4362-1-vladimir.krivopalov@gmail.com>
2017-08-11 18:08:11 +03:00
Duarte Nunes
20337053ad Don't use literal lambdas
These are only available in C++17. Fixes the build after b5460c2.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-08-11 13:08:42 +02:00
Duarte Nunes
b5460c2990 Merge "Support duration type" from Jesse
"This patch series adds support for the `duration` type in CQL, which
was added to Cassandra in 3.10.

As part of this work, it was necessary also to add support for the
`vint` and `unsigned vint` types to the native protocol implementation,
which are part of v5 of the specification.

To test interactively, it is necessary to use cqlsh distributed with
Cassandra, as the version we distribute does not yet support the
duration type."

* 'jhk/duration_protocol/v5' of https://github.com/hakuch/scylla:
  Support `duration` CQL native type
  CQL native protocol: Add support for `vint` serialization
  duration_test.cc: Add test for printing zero duration
  duration.cc: Remove nop `const` qualifier on return type
  Change `const` qualifier declaration order for `duration`
  duration.cc: Simplify range checking
  Rename `duration` to `cql_duration`
2017-08-11 10:56:55 +01:00
Duarte Nunes
bcf21aacc2 storage_proxy: Directly call query_nonsingular_mutations_locally
Instead of duplicating the branch.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170811001559.25788-1-duarte@scylladb.com>
2017-08-11 09:06:01 +03:00
Duarte Nunes
a3ee99554b service/storage_proxy: Remove out of date comment
Now that we don't go directly to reconciliation for range queries, the
result isn't required to have the row and partition counts calculated
(we no longer transform a reconciled_result to a query::result).

Furthermore, this line was causing a lot of dtests to fail on account
of them not expecting an error line in the logs.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170810225351.12610-1-duarte@scylladb.com>
2017-08-11 09:04:23 +03:00
Raphael S. Carvalho
5124f94358 tests: test summary entry spanning more keys than min interval
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-08-11 01:37:06 -03:00
Raphael S. Carvalho
872412d31a db/config: introduce sstable_summary_ratio option
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-08-11 01:36:21 -03:00
Raphael S. Carvalho
8726ee937d sstables: introduce size-based sampling for sstable summary
Currently, a summary entry is added after min_index_interval index
entries were written. Not taking into account size of index entries
becomes a problem with large partitions which may create big index
entries due to promoted indexes. Read performance is affected as a
consequence because all index entries spanned by a summary entry are
read from disk to serve the request.

What we want to do is also add a summary entry after the index reaches
a size boundary. To deal with oversampling, we want to write 1 byte to
the summary for every 2000 bytes written to the data file (this will
eventually be made into an option in the config file).
Both conditions must be met to avoid under- or oversampling.
That way, the amount of data needed from the index file to satisfy the
request is drastically reduced.
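The dual-condition sampling can be modelled with a toy Python sketch; `min_index_interval` comes from the schema, while `bytes_interval` stands in for the size boundary derived from the 1:2000 summary-to-data ratio. Both thresholds and the class name here are illustrative, not Scylla's actual code:

```python
class SummarySampler:
    """Toy model: emit a summary entry only once BOTH the entry-count
    interval and the data-size interval have been exceeded since the
    last summary entry, avoiding over- and undersampling."""
    def __init__(self, min_index_interval, bytes_interval):
        self.min_index_interval = min_index_interval
        self.bytes_interval = bytes_interval
        self.entries_since = 0
        self.bytes_since = 0
        self.summary = []  # positions of sampled index entries

    def add_index_entry(self, position, data_bytes_spanned):
        self.entries_since += 1
        self.bytes_since += data_bytes_spanned
        if (self.entries_since >= self.min_index_interval and
                self.bytes_since >= self.bytes_interval):
            self.summary.append(position)
            self.entries_since = 0
            self.bytes_since = 0
```

With small index entries the count condition dominates; with large entries (e.g. promoted indexes) the size condition spaces samples out further, bounding how much index data any one summary entry spans.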

Fixes #1842.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-08-11 00:30:12 -03:00
Raphael S. Carvalho
da7489720b sstables: make components_writer::offset const qualified and uint64_t
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-08-10 21:48:11 -03:00
Raphael S. Carvalho
881c479be8 sstables: make writer::offset const qualified and uint64_t
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-08-10 21:46:39 -03:00
Jesse Haber-Kucharsky
509626fe08 Support duration CQL native type
`duration` is a new native type that was introduced in Cassandra 3.10 [1].

Support for parsing and the internal representation of the type was added in
8fa47b74e8.

Important note: The version of cqlsh distributed with Scylla does not have
support for durations included (it was added to Cassandra in [2]). To test this
change, you can use cqlsh distributed with Cassandra.

Duration types are useful when working with time-series tables, because they can
be used to manipulate date-time values in relative terms.

Two interesting applications are:

- Aggregation by time intervals [3]:

`SELECT * FROM my_table GROUP BY floor(time, 3h)`

- Querying on changes in date-times:

`SELECT ... WHERE last_heartbeat_time < now() - 3h`

(Note: neither of these is currently supported, though columns with duration
values are.)

Internally, durations are represented as three signed counters: one for months,
for days, and for nanoseconds. Each of these counters is serialized using a
variable-length encoding which is described in version 5 of the CQL native
protocol specification.

The representation of a duration as three counters means that a semantic
ordering on durations doesn't exist: Is `1mo` greater than `30d`? We cannot
know, because some months have more days than others. Durations can only have a
concrete absolute value when they are "attached" to absolute date-time
references. For example, `2015-04-30 at 12:00:00 + 1mo`.
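The anchoring point can be made concrete with a small sketch (Python for illustration; `add_months` is a hypothetical helper, not part of Scylla):

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    # Advance by whole calendar months, clamping the day-of-month
    # when the target month is shorter.
    m = d.month - 1 + months
    y, mo = d.year + m // 12, m % 12 + 1
    return date(y, mo, min(d.day, calendar.monthrange(y, mo)[1]))

# "1mo" covers a different number of days depending on the anchor date:
feb_span = (add_months(date(2015, 2, 1), 1) - date(2015, 2, 1)).days  # 28
mar_span = (add_months(date(2015, 3, 1), 1) - date(2015, 3, 1)).days  # 31
```

Because the span of "1mo" depends on where it is anchored, no total order on durations exists.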

That duration values are not comparable presents some difficulties for the
implementation, because most CQL types are comparable. As in Cassandra's
implementation [2], I adopted a strategy similar to the way restrictions on the
`counter` type are checked. A type "references" a duration if it is either a duration or it
contains a duration (like a `tuple<..., duration, ...>`, or a UDT with a
duration member).

The following restrictions apply on durations. Note that some of these contexts
are either experimental features (materialized views), or not currently
supported at run-time (though support exists in the parser and code, so it is
prudent to add the restrictions now):

- Durations cannot appear in any part of a primary key, either for tables or
  materialized views.

- Durations cannot be directly used as the element type of a `set`, nor can they
  be used as the key type of a `map`. Because internal ordering on durations is
  based on a byte-level comparison, this property of Cassandra was intended to
  help avoid user confusion around ordering of collection elements.

- Secondary indexes on durations are not supported.

- "Slice" relations (<=, <, >=, >) are not supported on durations with `WHERE`
   restrictions (like `SELECT ... WHERE span <= 3d`). Multi-column restrictions
   only work with clustering columns, which cannot be `duration` due to the
   first rule.

- "Slice" relations are not supported on durations with query conditions (like
  `UPDATE my_table ... IF span > 5us`).

Backwards incompatibility note:

As described in the documentation [4], duration literals take one of two
forms: either ISO 8601 formats (there are three), or a "standard" format. The ISO
8601 formats start with "P" (like "P5W"). Therefore, identifiers that have this
form are no longer supported.

Fixes #2240.

[1] https://issues.apache.org/jira/browse/CASSANDRA-11873

[2] bfd57d13b7

[3] https://issues.apache.org/jira/browse/CASSANDRA-11871

[4] http://cassandra.apache.org/doc/latest/cql/types.html#working-with-durations
2017-08-10 15:01:10 -04:00
Jesse Haber-Kucharsky
91dab1d998 CQL native protocol: Add support for vint serialization
Version 5 of the native protocol for CQL [1] adds the `vint` and `unsigned vint`
types.

An unsigned integer encoded as a `vint` has a variable size based on the
magnitude of the value. The first byte indicates the total number of bytes.

For signed integers, a "zig-zag" encoding scheme ensures that small-magnitude
values (negative or positive) are encoded as short-length `vint`s (0 -> 0,
-1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4, etc.).

[1] https://github.com/apache/cassandra/blob/trunk/doc/native_protocol_v5.spec
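A minimal model of the zig-zag mapping (Python for illustration; the byte-level length encoding follows the spec in [1] and is omitted here):

```python
def zigzag_encode(n: int) -> int:
    # Interleave negatives and positives so small magnitudes become small
    # unsigned values: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4, ...
    return (n << 1) ^ (n >> 63)  # assumes a 64-bit signed input

def zigzag_decode(z: int) -> int:
    # Undo the interleaving: the low bit selects the sign.
    return (z >> 1) ^ -(z & 1)
```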
2017-08-10 14:11:30 -04:00
Jesse Haber-Kucharsky
77489f843f duration_test.cc: Add test for printing zero duration
It's somewhat counter-intuitive, but Cassandra also formats zero-valued
duration values as an empty string.
2017-08-10 14:11:30 -04:00
Jesse Haber-Kucharsky
d9c027c2dd duration.cc: Remove nop const qualifier on return type
These have no effect according to the Clang static analyzer.
2017-08-10 14:11:30 -04:00
Jesse Haber-Kucharsky
54c3cf0201 Change const qualifier declaration order for duration
The vast majority of the code-base is written in left-`const` style, and
consistency is important.
2017-08-10 14:11:30 -04:00
Jesse Haber-Kucharsky
1889b036b1 duration.cc: Simplify range checking 2017-08-10 14:11:23 -04:00
Avi Kivity
301358e440 Merge "Optimize combined_mutation_reader for disjoint sstable ranges" from Botond
"sstables will sometimes have narrow/disjoint ranges (e.g. LCS L1+).
This can be exploited when reading from a range of sstables by opening
sstables on-demand thus saving memory, processing and potentially I/O.

To achieve this combined_mutation_reader is refactored such that the
reader selection logic is moved-out into a reader_selector class.
combined_mutation_reader now takes a reader_selector instance in its
constructor and asks it for new readers for the current ring position
on every call to operator()().

At the moment two specializations of reader_selector are provided:
* list_reader_selector which implements the current logic, that is using
    a provided mutation_reader list, and
* incremental_reader_selector which implements the on-demand opening
    logic discussed above.

Fixes #1935"

* 'bdenes/optimize_combined_reader-v6' of https://github.com/denesb/scylla:
  Add combined_mutation_reader_test unit test
  Remove range_sstable_reader
  Add incremental_reader_selector
  Add reader_selector to combined_mutation_reader
  sstable_set::incremental_selector: select() now returns a selection
2017-08-10 15:16:30 +03:00
Botond Dénes
9ee9988097 Add combined_mutation_reader_test unit test 2017-08-10 12:38:10 +03:00
Botond Dénes
3e97a5cd6b Remove range_sstable_reader
range_sstable_reader is replaced with combined_mutation_reader, using
the incremental_reader_selector.
2017-08-10 12:38:10 +03:00
Botond Dénes
bfc74f1312 Add incremental_reader_selector
incremental_reader_selector is a specialization of reader_selector for
the case when sstables have narrow and/or disjoint token ranges. To
exploit this it creates new readers on-demand when their sstable's
token range intersects with the current ring position.
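A toy model of the idea (Python; the class name, the `(first, last, name)` tuples, and the integer-token simplification are illustrative, not Scylla's actual interfaces):

```python
class IncrementalSelector:
    """Open readers on demand as the scan reaches each sstable's range."""

    def __init__(self, sstables):
        # sstables: iterable of (first_token, last_token, name);
        # keep them sorted by the first token they cover.
        self._pending = sorted(sstables)

    def select(self, position):
        """Return readers to open at `position`, plus a hint for the next
        token at which new sstables become relevant."""
        selected = []
        while self._pending and self._pending[0][0] <= position:
            selected.append(self._pending.pop(0)[2])
        next_token = self._pending[0][0] if self._pending else None
        return selected, next_token
```

Readers whose ranges start beyond the current position are never opened until (and unless) the scan actually reaches them, which is where the memory and I/O savings come from.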
2017-08-10 12:38:02 +03:00
Botond Dénes
a6b9186cab Add reader_selector to combined_mutation_reader
combined_mutation_reader now accepts as a constructor argument a
reader_selector instance whose task is to create new readers on
each call to operator()() if needed and possible.
This way it is possible to control how readers are created through
different specializations of reader_selector.

The previous logic is refactored into list_reader_selector which
is using a pre-provided mutation_reader list and forwards all of them to
combined_mutation_reader at once.
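The merging that combined_mutation_reader performs over the selected readers can be sketched as a k-way merge (Python; a simplification of what the real reader does with mutation fragments):

```python
import heapq

def combined_read(*readers):
    # Each reader yields (key, mutation) pairs in key order; the combined
    # reader merges them into a single stream ordered by key.
    return list(heapq.merge(*readers, key=lambda kv: kv[0]))
```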
2017-08-10 12:37:40 +03:00
Takuya ASADA
1cb0fff146 dist/common/scripts/scylla_raid_setup: handle '--disks' parameter correctly when disk list ends with ','
We should handle parameters correctly even if they are malformed.
Fixes #2402

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1499266239-27551-1-git-send-email-syuu@scylladb.com>
2017-08-10 11:42:33 +03:00
Takuya ASADA
8e115d69a9 dist/debian: append postfix '~DISTRIBUTION' to scylla package version
We are moving to aptly to release .deb packages, which requires changes to the
debian repository structure.
After the change, we will share the 'pool' directory between distributions.
However, our .deb package name for a specific release is exactly the same
across distributions, so the file names conflict.
To avoid the problem, we need to append the distribution name to the package version.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1502312935-22348-1-git-send-email-syuu@scylladb.com>
2017-08-10 10:53:56 +03:00
Vlad Zolotarov
1b4594b03a transport::server::process_prepare() don't ignore errors on other shards
If storing of the statement fails on any shard we should fail the whole PREPARE
request.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1502325392-31169-13-git-send-email-vladz@scylladb.com>
2017-08-10 10:32:37 +03:00
Jesse Haber-Kucharsky
352e9f60ba Rename duration to cql_duration
`std::chrono::duration` is a prolific enough name that it's best to
disambiguate.
2017-08-09 15:15:20 -04:00
Botond Dénes
94fc550e68 sstable_set::incremental_selector: select() now returns a selection
A selection contains - in addition to the list of sstables - a next_token
which is a hint as to what is the next best token to call select() with.
This should be the smallest token such that at the next call to
select() the least number of new sstables will be returned, without
skipping any.
2017-08-09 16:27:33 +03:00
Takuya ASADA
3077416ecc dist/debian: Backport scalability fix of _Unwind_Find_FDE to our gcc for Debian 8
Since we provide a custom-built gcc only for Debian 8, the fix does not apply to
Ubuntu/Debian 9.

Fixes #2646

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1502239191-12649-1-git-send-email-syuu@scylladb.com>
2017-08-09 12:19:52 +03:00
Avi Kivity
7217b7ab36 Merge "Use range_streamer everywhere" from Asias
"With this series, all the following cluster operations:

- bootstrap
- rebuild
- decommission
- removenode

will use the same code to do the streaming.

The range_streamer is now extended to support both fetching from and pushing
to a peer node. Another big change is that the range_streamer will now stream
fewer ranges at a time, and thus less data, per stream_plan, and the
range_streamer will remember which ranges failed to stream so it can retry them later.

The retry policy is very simple at the moment: it retries at most 5 times,
sleeping 1 minute, 1.5^2 minutes, 1.5^3 minutes, and so on.

Later, we can introduce an API for the user to decide when to stop retrying
and what the retry interval should be.

The benefits:

 - All the cluster operations share the same code to stream.
 - We can know the operation progress, e.g., the total number of ranges
   that need to be streamed and the number of ranges finished in
   bootstrap, decommission, etc.
 - All the cluster operations can survive a peer node going down during the
   operation, which usually takes a long time to complete. E.g., when adding
   a new node, currently if any of the existing nodes that stream data to
   the new node has an issue sending data, the whole bootstrap
   process will fail. After this patch, we can fix the problematic node
   and restart it, and the joining node will retry streaming from that node
   again.
 - We can fail streaming early, time out early, and retry less, because
   all the operations that use streaming can survive the failure of a single
   stream_plan. It is no longer that important to make a single
   stream_plan successful. Note that another user of streaming, repair, is now
   using small stream_plans as well and can rerun the repair for the
   failed ranges too.

This is one step closer to supporting the resumable add/remove node
operations."

* tag 'asias/use_range_streamer_everywhere_v4' of github.com:cloudius-systems/seastar-dev:
  storage_service: Use the new range_streamer interface for removenode
  storage_service: Use the new range_streamer interface for decommission
  storage_service: Use the new range_streamer interface for rebuild
  storage_service: Use the new range_streamer interface for bootstrap
  dht: Extend range_streamer interface
2017-08-09 10:00:25 +03:00
Takuya ASADA
98fc7b376d dist/redhat: install mdadm/xfsprogs on package install time
We experienced 'Constructing RAID volume...' taking too much time on some AMIs
because the setup script gets stuck at 'yum -y install mdadm xfsprogs'.
We don't have to install these packages at AMI startup time; we should
preinstall them at AMI creation time.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1502192796-21040-1-git-send-email-syuu@scylladb.com>
2017-08-09 09:10:34 +03:00
Piotr Jastrzebski
4137517cdc Check arguments of table_helper::setup_keyspace
to make sure all table helpers passed as arguments are
for the right keyspace.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <10edacd509880bb18180f13e8c28593d068c5c7b.1501688729.git.piotr@scylladb.com>
2017-08-08 15:55:06 +03:00
Piotr Jastrzebski
2d8a80f211 Make table_helper constructor safer
by taking keyspace name by value and storing it inside the object.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <a5dab41647348ae311e023fe5592aec650c6e32a.1501688729.git.piotr@scylladb.com>
2017-08-08 15:55:06 +03:00
Daniel Fiala
06089474c9 Print warning if user uses default cluster_name
* The cluster_name setting is commented out in the config file.
* The default value is set to the empty string; if the user does not override
  it, a warning is printed and the value is reset to "ScyllaDB Cluster".

Fixes #2648.

Message-Id: <20170808113322.9313-1-daniel@scylladb.com>
2017-08-08 14:47:17 +03:00
Avi Kivity
a71138fc84 config: mark column_index_size_in_kb as Used
Fixes #2681
Message-Id: <20170808100415.16296-1-avi@scylladb.com>
2017-08-08 11:08:00 +01:00
Ultrabug
2022da2405 Add overall python code QA and guidelines with flake8
ScyllaDB loves python & python loves ScyllaDB.

It would benefit the project to start enforcing some code guidelines
and basic QA with a linter, along with PEP8 compliance, thanks to flake8.

This patch adds a tox config to at least start with an assessment
of the work to be done on all .py files in the code base.

To reduce noise, checks for long lines (> 80 chars) are ignored
for now.

Signed-off-by: Ultrabug <ultrabug@gentoo.org>
Message-Id: <20170726134242.8927-1-ultrabug@gentoo.org>
2017-08-08 11:15:45 +03:00
Raphael S. Carvalho
dddbd34b52 sstables: close index file when sstable writer fails
The index file's output stream uses write-behind, but it is not closed
when the sstable write fails, which may lead to a crash.
The same happened before for the data file (which is obviously easier to
reproduce) and was fixed by 0977f4fdf8.

Fixes #2673.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170807171146.10243-1-raphaelsc@scylladb.com>
2017-08-08 09:53:14 +03:00
Asias He
49360992d9 storage_service: Use the new range_streamer interface for removenode
So that removenode operation will now stream small ranges at a time and
restream the failed ranges.
2017-08-07 16:31:48 +08:00
Asias He
6b8dc85f12 storage_service: Use the new range_streamer interface for decommission
So that decommission operation will now stream small ranges at a time and
restream the failed ranges.
2017-08-07 16:31:48 +08:00
Asias He
24584b8509 storage_service: Use the new range_streamer interface for rebuild
So that rebuild operation will now stream small ranges at a time and
restream the failed ranges.
2017-08-07 16:31:47 +08:00
Asias He
f239b11a84 storage_service: Use the new range_streamer interface for bootstrap
So that bootstrap operation will now stream small ranges at a time and
restream the failed ranges.
2017-08-07 16:31:47 +08:00
Asias He
6810031ba7 dht: Extend range_streamer interface
After this patch and the following patches to use the new
range_streamer interface, all the following cluster operations:

- bootstrap
- rebuild
- decommission
- removenode

will use the same code to do the streaming.

The range_streamer is now extended to support both fetching from and pushing
to a peer node. Another big change is that the range_streamer will now stream
fewer ranges at a time, and thus less data, per stream_plan, and the
range_streamer will remember which ranges failed to stream so it can retry them later.

The retry policy is very simple at the moment: it retries at most 5 times,
sleeping 1 minute, 1.5^2 minutes, 1.5^3 minutes, and so on.

Later, we can introduce an API for the user to decide when to stop retrying
and what the retry interval should be.

The benefits:

- All the cluster operations share the same code to stream.

- We can know the operation progress, e.g., the total number of ranges
  that need to be streamed and the number of ranges finished in
  bootstrap, decommission, etc.
- All the cluster operations can survive a peer node going down during the
  operation, which usually takes a long time to complete. E.g., when adding
  a new node, currently if any of the existing nodes that stream data to
  the new node has an issue sending data, the whole bootstrap
  process will fail. After this patch, we can fix the problematic node
  and restart it, and the joining node will retry streaming from that node
  again.
- We can fail streaming early, time out early, and retry less, because
  all the operations that use streaming can survive the failure of a single
  stream_plan. It is no longer that important to make a single
  stream_plan successful. Note that another user of streaming, repair, is now
  using small stream_plans as well and can rerun the repair for the
  failed ranges too.

This is one step closer to supporting the resumable add/remove node
operations.
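The retry schedule described above can be sketched as follows (Python; a hypothetical helper, not Scylla's actual code):

```python
def retry_delays_minutes(max_retries=5, factor=1.5):
    # First retry waits 1 minute; retry n (for n >= 2) waits factor**n
    # minutes, matching the "1, 1.5^2, 1.5^3, ..." schedule above.
    return [1.0] + [factor ** n for n in range(2, max_retries + 1)]
```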
2017-08-07 16:31:47 +08:00
Avi Kivity
86de6cc7fb Merge seastar upstream
* seastar f14d2a3...7a49ae5 (8):
  > sharded: improve support for cooperating sharded<> services
  > sharded: support for peer services
  > semaphore: add a version of with_semaphore that takes a duration timeout
  > scripts: perftune.py: fix the CPU mask generation for more than 64 CPUs
  > Revert "future-utils: make when_all() (vector variant) exception safe"
  > Revert "future-utils: fix gross compilation errors in when_all()"
  > future-utils: fix gross compilation errors in when_all()
  > future-utils: make when_all() (vector variant) exception safe

Includes change to batchlog_manager constructor to adapt it to
seastar::sharded::start() change.
2017-08-06 17:47:47 +03:00
Avi Kivity
3edec66903 Revert "repair: Make send_repair_checksum_range timeout"
This reverts commit 98757069a5. We have the
failure detector which will detect an unresponsive node and fail the RPC.
Adding a timeout can just introduce false positives.
2017-08-06 13:09:36 +03:00
Avi Kivity
621926d914 dist: debian: escape "$" character for make 2017-08-05 16:51:03 +03:00
Avi Kivity
a471851bf1 dist: debian: add /opt/scylladb/bin to PATH so antlr can be found 2017-08-05 15:46:58 +03:00
Avi Kivity
8bdc0dd471 dist: debian: search for libraries in /opt/scylladb/lib 2017-08-05 13:18:14 +03:00
Takuya ASADA
2ff3bdba5c dist/debian: switch Ubuntu 3rdparty packages to external build service
Switch Ubuntu to launchpad ppa:
https://launchpad.net/~scylladb/+archive/ubuntu/ppa/+packages

Since switching 3rdparty on Debian is not ready yet, keep them to use scylla
3rdparty repo, also keep --rebuild-dep option and dist/debian/dep.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1501866678-4922-1-git-send-email-syuu@scylladb.com>
2017-08-05 11:29:13 +03:00
Glauber Costa
4a911879a3 add active streaming reads metric
In commit f38e4ff3f, we have separated streaming reads from normal reads
for the purpose of determining the maximum number of reads going on.
However, we'll now be totally unaware of how many reads will be
happening on behalf of streaming and that can be important information
when debugging issues.

This patch adds this metric so we don't fly blind.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <1501909973-32519-1-git-send-email-glauber@scylladb.com>
2017-08-05 11:06:37 +03:00
Duarte Nunes
587b6be089 dirty_memory_manager: Add missing include
Allows tests/memory_footprint to build on Ubuntu 14.04.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-08-04 10:15:23 +02:00
Avi Kivity
4f12068e50 dist: re-add --rebuild-dep to build_rpm.sh
For compatibility with existing scripts; ignored.
2017-08-04 07:10:18 +03:00
Takuya ASADA
b5e83ebd94 dist/redhat: switch 3rdparty packages to external build service
Drop existing 3rdparty build script/3rdparty repo, switch to Fedora Copr
https://copr.fedorainfracloud.org/coprs/scylladb/scylla-3rdparty/packages/

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20170803110754.22152-1-syuu@scylladb.com>
2017-08-04 06:40:09 +03:00
Pekka Enberg
90872ffa1f docker: Disable stall detector
Fixes #2162

Message-Id: <1501759957-4380-1-git-send-email-penberg@scylladb.com>
2017-08-03 14:52:49 +03:00
Takuya ASADA
91ade1a660 dist/debian: check scylla user/group existance before adding them
To prevent the install from failing in an environment which already has a
scylla user/group, an existence check is needed.

Fixes #2389

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1495023805-14905-1-git-send-email-syuu@scylladb.com>
2017-08-03 13:01:18 +03:00
Takuya ASADA
6ac254fbcb dist: change nomerges=1 on block devices during fstrim execution
We have a problem running fstrim with nomerges=2, so we need to change
the parameter to 1 during fstrim execution.
To do this, this fix changes the following things:
 - revert dropping scylla_fstrim on Ubuntu 16.04/CentOS
 - disable the distribution-provided fstrim script
 - enable scylla_fstrim on all distributions
 - introduce --set-nomerges on scylla-blocktune
 - scylla_fstrim calls scylla-blocktune in the following order:
   - 'scylla-blocktune --set-nomerges 1'
   - 'fstrim' for each device
   - 'scylla-blocktune --set-nomerges 2'

Fixes #2649

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1501531393-21109-1-git-send-email-syuu@scylladb.com>
2017-08-03 13:00:34 +03:00
Botond Dénes
0b7ac01f0f Add QtCreator project file and .gdbinit to .gitignore
Message-Id: <ff662910fe1156cdde2bda4aa5bb9cfc45bddda9.1501752340.git.bdenes@scylladb.com>
2017-08-03 12:58:35 +03:00
Avi Kivity
f38e4ff3f9 database: prevent streaming reads from blocking normal reads
Streaming reads and normal reads share a semaphore, so if a bunch of
streaming reads use all available slots, no normal reads can proceed.

Fix by assigning streaming reads their own semaphore; they will compete
with normal reads once issued, and the I/O scheduler will determine the
winner.

Fixes #2663.
Message-Id: <20170802153107.939-1-avi@scylladb.com>
2017-08-03 10:23:01 +01:00
Avi Kivity
911536960a database: remove streaming read queue length limit
If we fail a streaming read due to queue overload, we will fail the entire repair.
Remove the limit for streaming, and trust the caller (repair) to have bounded
concurrency.

Fixes #2659.
Message-Id: <20170802143448.28311-1-avi@scylladb.com>
2017-08-03 10:21:07 +01:00
Avi Kivity
e9519ca8e5 Merge "make range selects more efficient by going through digest matching stage" from Gleb
"Currently scanning reads go to reconciliation stage directly which
requires asking for mutation data from all peers. This series makes
it to try matching digests first like a single partition read."

Fixes #2666.

* 'gleb/digest_scan' of github.com:cloudius-systems/seastar-dev:
  storage_proxy: make range_slice_read_executor go through digest matching state
  storage_proxy: add capability to read data/digest for non singular ranges
  storage_proxy: remove redundant parameter from never_speculating_read_executor constructor
2017-08-03 12:18:11 +03:00
Tzach Livyatan
d3d46a5eac Add comments on cluster_name in scylla.yaml
Fix #2316

Signed-off-by: Tzach Livyatan <tzach@scylladb.com>
Message-Id: <20170730082922.21884-1-tzach@scylladb.com>
2017-08-03 12:12:15 +03:00
Gleb Natapov
d2a2a6d471 storage_proxy: make range_slice_read_executor go through digest matching state
Currently scanning reads go to reconciliation stage directly which
requires asking for mutation data from all peers. This patch makes
it to try matching digests first like a single partition read.

The change requires internode protocol changes since currently it is not
possible to ask for multi partition data/digest over RPC. It means that
the capability has to be guarded by new gossip feature flag which the
patch also adds.
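The digest-first read path can be modeled roughly like this (Python; the md5 digest and the `(key, timestamp, value)` row format are illustrative only, not the actual wire format):

```python
import hashlib

def digest(rows):
    # Cheap fingerprint of a replica's result set.
    h = hashlib.md5()
    for row in rows:
        h.update(repr(row).encode())
    return h.digest()

def reconcile(replica_results):
    # Toy reconciliation: merge rows by key, newest timestamp wins.
    merged = {}
    for rows in replica_results:
        for key, ts, value in rows:
            if key not in merged or ts > merged[key][0]:
                merged[key] = (ts, value)
    return sorted((k, ts, v) for k, (ts, v) in merged.items())

def read(replica_results):
    digests = [digest(rows) for rows in replica_results]
    if all(d == digests[0] for d in digests[1:]):
        return replica_results[0]      # fast path: digests match
    return reconcile(replica_results)  # slow path: fetch and merge full data
```

Only the mismatch path pays for shipping and merging full mutation data, which is why extending digest reads to range scans helps.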
2017-08-03 11:37:03 +03:00
Tzach Livyatan
99b2232c5d docs/docker: Add hostname parameter to examples
Using --hostname to give the container a meaningful name is a good
practice, and makes the monitoring dashboard easier to understand.

Signed-off-by: Tzach Livyatan <tzach@scylladb.com>
Message-Id: <20170803081027.6675-1-tzach@scylladb.com>
2017-08-03 11:14:12 +03:00
Gleb Natapov
3b7d8c8767 storage_proxy: add capability to read data/digest for non singular ranges
Currently only mutation_data read supports non singular ranges. This
patch extends data/digest reads to support them too.
2017-08-03 10:35:09 +03:00
Gleb Natapov
c619ef258b storage_proxy: remove redundant parameter from never_speculating_read_executor constructor
never_speculating_read_executor always waits for all targets, so the
block_for parameter is always equal to targets.size(). No need to
pass it explicitly.
2017-08-03 10:08:44 +03:00
Duarte Nunes
4c9206ba2f tests/sstable_mutation_test: Don't use moved-from object
Fix a bug introduced in dbbb9e93d and exposed by gcc6 by not using a
moved-from object. Twice.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170802161033.4213-1-duarte@scylladb.com>
2017-08-03 09:45:49 +03:00
Asias He
763fa83232 repair: Fix build in repair_cf_range
The compiler does not like the mutable.
Message-Id: <83c5e8a944b72a095b8e29e9988986e6ca9cefc5.1501690749.git.asias@scylladb.com>
2017-08-02 18:57:32 +02:00
Asias He
5798625d73 repair: Signal parallelism_semaphore in case of error
If we throw after we take the semaphore and before the when_all
below runs, no one will increase the semaphore.

Fixes #2661
Message-Id: <49540ede4c8a6d84004e10e0f63690e3c21d72c7.1501686383.git.asias@scylladb.com>
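The invariant being restored is the usual acquire/try/finally pattern; a sketch with asyncio standing in for seastar (names are hypothetical):

```python
import asyncio

async def with_permit(sem: asyncio.Semaphore, work):
    # try/finally guarantees the permit is returned even if work() raises,
    # so a failure cannot leak permits and starve later range operations.
    await sem.acquire()
    try:
        return await work()
    finally:
        sem.release()
```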
2017-08-02 18:32:32 +03:00
Avi Kivity
ebff739a84 Merge "use paging for compaction history" from Amnon
"This series adds an option to use paging in internal query and use that for the
get compaction history function.

Internal paging will be done explicitly, to use paging, you first create a
state object (that contains the query as well) and use that state to get the
first page, the result will contain both the query result and a new state that
can be used to get the next page.

Fixes #2366"

* 'amnon/paged_compaction_history_v5' of github.com:cloudius-systems/seastar-dev:
  system_keyspace: Use paging for get compaction history
  Add paging for internal queries
  query_options: Allows creating query_options from query_options
2017-08-02 18:15:58 +03:00
Avi Kivity
ac31abf6a4 repair: don't lambda-capture repair_tracker
It is static, so it need not be captured, and some compilers complain.
2017-08-02 18:07:31 +03:00
Avi Kivity
ce60ef59f3 Revert "repair: Signal parallelism_semaphore in case of error"
This reverts commit a548eee28c. It releases
the semaphore too early (noted by Glauber).
2017-08-02 17:13:46 +03:00
Avi Kivity
b2753b0183 Merge "Fix possible repair stuck" from Asias
"This series tries to fix possible repair stuck."

Fixes #2660, #2661, #2662.

* tag 'asias/repair_stuck_v2.1' of github.com:cloudius-systems/seastar-dev:
  repair: Make send_repair_checksum_range timeout
  repair: Signal parallelism_semaphore in case of error
  repair: Fix repair_tracker done
2017-08-02 16:51:51 +03:00
Asias He
98757069a5 repair: Make send_repair_checksum_range timeout
If the verb never returns, the repair will hang forever. Make it use the
timeout version of send_message.

Fixes #2662
2017-08-02 21:41:50 +08:00
Asias He
a548eee28c repair: Signal parallelism_semaphore in case of error
If we throw after we take the semaphore and before the when_all
below runs, no one will increase the semaphore.

Fixes #2661
2017-08-02 21:41:45 +08:00
Asias He
abcff4c78e repair: Fix repair_tracker done
If it throws after repair_tracker.start and before the when_all below,
the repair_tracker.done will never be called for this repair id.

Fixes #2660
2017-08-02 21:40:29 +08:00
Pekka Enberg
78f68613ce dist/docker: Reduce number of layers
One of the best practices for Dockerfiles is to minimize the number of
layers because they increase the overall image size:

https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#minimize-the-number-of-layers

Consolidate our "yum install" commands to reduce the number of layers.

Suggested by Dean Hamstead.

Message-Id: <1501670572-8701-1-git-send-email-penberg@scylladb.com>
2017-08-02 15:21:05 +03:00
Takuya ASADA
ffbdacc1fa dist/debian: remove ant from prerequisite packages
These lines were mistakenly copied from scylla-tools; they are not needed for scylla-server.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1498029619-1928-1-git-send-email-syuu@scylladb.com>
2017-08-02 12:12:42 +03:00
Duarte Nunes
cec41f9de6 Merge seastar upstream
* seastar fc937b8...f14d2a3 (4):
  > configure.py: Ensure tmp directory exists when getting dpdk cflags
  > checked_ptr: fix hash() compilation
  > net: fix potential use after free in posix_server_socket::accept()
  > http: removed unneeded lamda captures

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-08-02 10:05:08 +02:00
Asias He
cf6f4a5185 gossip: Introduce the shadow_round_ms option
It specifies the maximum gossip shadow round time. It can be used to
reduce the gossip feature check time during node boot up.
For instance, when the first node in the cluster, which lists both
itself and other nodes as seeds in the yaml config, boots up, it will try
to talk to other seed nodes which are not started yet. The gossip shadow
round will be used to fetch the feature info of the cluster. Since there
is no other seed node in the cluster, the shadow round will fail. The user
can reduce the default shadow_round_ms option to reduce the boot time.

Fixes #2615
Message-Id: <10916ce9059f3c7f1a1fb465919ae57de3b67d59.1500540297.git.asias@scylladb.com>
2017-08-02 09:52:35 +03:00
Vlad Zolotarov
4b28ea216d utils::loading_cache: cancel the timer after closing the gate
The timer is armed inside the section guarded by the _timer_reads_gate,
so it has to be canceled after the gate is closed.

Otherwise we may end up with an armed timer after the stop() method has
returned a ready future.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1501603059-32515-1-git-send-email-vladz@scylladb.com>
2017-08-01 17:21:44 +01:00
Duarte Nunes
569bbf2edd sstables/sstables: Use per-cpu noop_write_monitor
We employ a thread-per-core architecture, so don't go about sharing
seastar::shared_ptrs across cpus.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170801144153.17354-1-duarte@scylladb.com>
2017-08-01 18:10:49 +03:00
Avi Kivity
db7329b1cb Merge "Ensure correct EOC for PI block cell names" from Duarte
"This series ensures we always write correct cell names to promoted
index cell blocks, taking into account the eoc of range tombstones.

Fixes #2333"

* 'pi-cell-name/v1' of github.com:duarten/scylla:
  tests/sstable_mutation_test: Test promoted index blocks are monotonic
  sstables: Consider eoc when flushing pi block
  sstables: Extract out converting bound_kind to eoc
2017-08-01 18:09:07 +03:00
Gleb Natapov
1da4d5c5ee cql transport: run accept loop in the foreground
It was meant to run in the foreground since it is waited upon during
stop(), but as it is now, from stop()'s perspective it completes
after the first connection is accepted.

Fixes #2652

Message-Id: <20170801125558.GS20001@scylladb.com>
2017-08-01 17:04:14 +03:00
Avi Kivity
1e8bb972b6 compaction: fix iteration in leveled compaction droppable tombstones loop
Since get_level_count() is unsigned, it will never be negative, and
the loop may never terminate.

Message-Id: <20170719133502.13316-1-avi@scylladb.com>
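The failure mode is the classic unsigned-wraparound loop (`i >= 0` is always true for an unsigned `i`); a model of the arithmetic, with Python standing in for C++ unsigned semantics:

```python
MASK = 2**64 - 1  # model uint64_t arithmetic

def dec_u64(i: int) -> int:
    # One unsigned decrement: 0 - 1 wraps around to the maximum value,
    # which is why a loop condition like `i >= 0` can never become false.
    return (i - 1) & MASK
```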
2017-08-01 13:40:36 +03:00
Avi Kivity
ba2e170e4b compaction: fix return in leveled compaction droppable tombstones loop
If the loop ever terminates, we need to return something.

Message-Id: <20170719133508.13374-1-avi@scylladb.com>
2017-08-01 13:33:02 +03:00
Takuya ASADA
a998b7b3eb dist/ami: follow scylla-tools package name change on RedHat variants
Since scylla-tools generates two .rpm packages, we need to copy them to our AMI.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20170722090002.9850-1-syuu@scylladb.com>
2017-07-31 18:57:12 +03:00
Avi Kivity
7c8dea088a Merge seastar upstream
* seastar 54e940f...fc937b8 (2):
  > configure.py: Always ensure tmp directory exists
  > coding-style.md: introduce
2017-07-31 18:06:09 +03:00
Duarte Nunes
a85232dd82 Fix compilation errors on GCC 6
GCC 6 inconsistently requires explicitly calling a member function
through "this->" for lambda functions capturing "this".

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170731143755.21970-1-duarte@scylladb.com>
2017-07-31 17:40:44 +03:00
Benoît Canet
b44ba11e4c transport: Count the number of unpaged queries
Queries with a page size equal to or smaller than
zero are unpaged queries.

Count these kinds of queries and expose the count as a metric,
since they can ruin the performance of the system.

Message-Id: <20170731130004.25807-2-benoit@scylladb.com>
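The check being counted is trivial; a sketch of the metric (Python; the class and counter names are illustrative, not Scylla's actual code):

```python
class QueryMetrics:
    def __init__(self):
        self.unpaged_queries = 0  # exposed as a monitoring counter

    def record(self, page_size: int) -> None:
        # A page size <= 0 means the client asked for all rows at once.
        if page_size <= 0:
            self.unpaged_queries += 1
```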
2017-07-31 16:01:45 +03:00
Avi Kivity
3fe6731436 Merge "Reduce the effect of the latency metrics" from Amnon
"This series reduces that effect in two ways:
1. Remove the latency counters from the system keyspaces
2. Reduce the histogram size by limiting the maximum number of buckets and
   stopping at the last bucket."

Fixes #2650.

* 'amnon/remove_cf_latency_v2' of github.com:cloudius-systems/seastar-dev:
  database: remove latency from the system table
  estimated histogram: return a smaller histogram
2017-07-31 15:58:30 +03:00
Paweł Dziepak
402799fcc0 mutation_reader: drop move_and_clear()
Since the discovery of std::exchange(x, {}), move_and_clear has become
obsolete. Besides, the name was wrong: it did not clear the vector but
recreated it, meaning that any allocated memory wasn't reused (not that
it mattered in the existing usages).

Message-Id: <20170731123549.10887-1-pdziepak@scylladb.com>
2017-07-31 15:51:19 +03:00
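The idiom that replaced move_and_clear() can be shown in a few lines; note that, as the message says, `std::exchange(x, {})` value-initializes the vector rather than clearing it, so the old capacity is not reused:

```cpp
#include <utility>
#include <vector>

// std::exchange moves the old value out and leaves v value-initialized
// (an empty vector), which is exactly what move_and_clear() did by hand.
std::vector<int> take(std::vector<int>& v) {
    return std::exchange(v, {});
}
```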
Gleb Natapov
87bc3f7e7f configure.py: use user provided compiler flags when checking for features
User-provided compiler flags may change the outcome of the test.

Message-Id: <20170724111520.GA18230@scylladb.com>
2017-07-31 15:33:06 +03:00
Avi Kivity
f4b2a1ef4e Merge "Optimise combined_mutation_reader" from Paweł
"These patches optimise combined_mutation_reader for cases where the majority
of mutation_readers are disjoint.

perf_fast_forward:
Results are medians of 3 of fragments/s as reported by perf_fast_forward.

Command:
perf_fast_forward -c1 --enable-cache

small: small-partition-skips (read=1, skip=0)
large: large-partition-skips (read=1, skip=0)

          before        after      diff
small     195753      238196       +22%
large    1244325     1359096        +9%

perf_simple_query:

Results are medians of 10 of reads/s as reported by perf_simple_query.

Command:
perf_simple_query -c1

before   98651.40
after   104554.85
diff          +6%"

* tag 'avoid-merge_mutations/v1' of https://github.com/pdziepak/scylla:
  combined_mutation_reader: avoid unnecessary merge_mutations()
  combined_mutation_reader: do not pop mutation with different key
2017-07-31 15:14:42 +03:00
Avi Kivity
178b54e790 Merge "memtable flush: Fixes and improvements" from Duarte
"This series ensures that when we retry a memtable flush, we re-acquire the
flush permit that was previously released. It also ensures we don't hold
the sstable read lock for the duration of the sleep leading to the retry.

To achieve that cleanly we refactor the way the permit lifecycle is managed
by employing a RAII-based approach.

We also improve the latency of writes blocked on virtual dirty by releasing
the flush permit before fsyncing the sstables. There are additional avenues
for performance improvements on top of this one."

* 'memtable-flush-additional-fixes/v4' of github.com:duarten/scylla:
  column_family: Re-acquire flush permit in case of error
  column_family: Don't hold sstable read lock when retrying flush
  sstables: Release the flush permit before fsyncing
  sstables: Introduce write_monitor
  database: Extract out dirty_memory_manager
  dirty_memory_manager: Refactor flush permit lifetime management
  dirty_memory_manager: Invert permit acquisition order
  memtable_list: Register different seal functions for each behaviour
2017-07-31 14:57:19 +03:00
Paweł Dziepak
2b53a560c8 combined_mutation_reader: avoid unnecessary merge_mutations()
Merging mutations is quite an expensive operation. The creation of
streamed mutation merger involves several allocations (mostly coming
from various std::vector) and then all mutation_fragments need to go
through a heap.

All this is completely unnecessary if there is only one mutation, so
let's skip a call to merge_mutations() in such cases. This also means
that we can reuse memory allocated by _current vector if merge is not
required.
2017-07-31 12:35:40 +01:00
Paweł Dziepak
f78f2b3c92 combined_mutation_reader: do not pop mutation with different key
Originally, the loop inside combined_mutation_reader::next() popped
mutations from the heap, and when it encountered one with a different
decorated key, pushed it back and merged and emitted the ones
accumulated so far. In other words, every time the reader progressed
to the next mutation it did a needless pop and push operation on the
heap.

This patch rearranges the code so that the key of the next mutation is
compared before it is popped from the heap.
2017-07-31 12:35:40 +01:00
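The rearrangement can be sketched with plain ints standing in for decorated keys (the real reader compares mutation keys): the top of the heap is inspected before popping, so an entry with a different key is never popped and pushed back.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Gather all entries sharing the current minimum key from a std::*_heap
// min-heap. An entry with a different key is only peeked at, never popped.
std::vector<int> pop_equal_min(std::vector<int>& heap) {
    auto cmp = std::greater<int>{};  // min-heap ordering
    std::vector<int> batch;
    if (heap.empty()) {
        return batch;
    }
    int key = heap.front();
    // Inspect heap.front() (the top) and only pop while the key matches.
    while (!heap.empty() && heap.front() == key) {
        std::pop_heap(heap.begin(), heap.end(), cmp);
        batch.push_back(heap.back());
        heap.pop_back();
    }
    return batch;
}
```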
Duarte Nunes
c81431ad16 column_family: Re-acquire flush permit in case of error
If we fail to flush an sstable after creating the flush_reader, we
will have released the flush permit by the time we retry the flush.
Ensure that when retrying, we re-acquire the flush permit.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-31 12:40:19 +02:00
Duarte Nunes
9162e016da column_family: Don't hold sstable read lock when retrying flush
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-31 12:40:19 +02:00
Duarte Nunes
1a33cc6847 sstables: Release the flush permit before fsyncing
This allows a queued flush to start while we fsync the current
sstable, which helps reduce the overall time new writes are blocked on
dirty memory.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-31 12:40:19 +02:00
Duarte Nunes
784a078e72 sstables: Introduce write_monitor
The write_monitor provides callbacks to inform an observer of the
state of the ongoing sstable write.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-31 12:40:19 +02:00
Duarte Nunes
d2b0a5a0a6 database: Extract out dirty_memory_manager
Needed so that the flush_permit can be propagated to the sstables layer.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-31 12:40:19 +02:00
Duarte Nunes
a2b732c156 dirty_memory_manager: Refactor flush permit lifetime management
This patch refactors how the flush permit lifetime is managed,
dropping the current hash table in favour of a RAII approach.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-31 12:40:19 +02:00
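A minimal sketch of the RAII idea, assuming a toy counting semaphore (the real code uses seastar semaphores and futures): the permit releases its unit on destruction, so ownership can simply be moved to whoever performs the flush, with no hash table needed to track outstanding permits.

```cpp
#include <utility>

// Toy semaphore standing in for the real (seastar) one.
class counting_semaphore {
    int _units;
public:
    explicit counting_semaphore(int units) : _units(units) {}
    bool try_acquire() { if (_units > 0) { --_units; return true; } return false; }
    void release() { ++_units; }
    int available() const { return _units; }
};

// Move-only RAII permit: releases its unit in the destructor, so the
// holder never has to remember to release it on every exit path.
class flush_permit {
    counting_semaphore* _sem;
public:
    explicit flush_permit(counting_semaphore& sem)
        : _sem(sem.try_acquire() ? &sem : nullptr) {}
    flush_permit(flush_permit&& o) noexcept : _sem(std::exchange(o._sem, nullptr)) {}
    flush_permit(const flush_permit&) = delete;
    ~flush_permit() { if (_sem) { _sem->release(); } }
    bool holds_permit() const { return _sem != nullptr; }
};
```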
Duarte Nunes
f647f5b14a dirty_memory_manager: Invert permit acquisition order
For an upcoming fix it is required to invert the permit acquisition
order: first we acquire the background work permit and then the single
flush permit.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-31 12:40:19 +02:00
Duarte Nunes
e371accac8 memtable_list: Register different seal functions for each behaviour
Instead of passing a flush_behaviour to the seal function, use two
different functions for each of the behaviours.

This will be important in the forthcoming patches, which will require
the signatures of those functions to differ.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-31 12:40:19 +02:00
Paweł Dziepak
e970630272 tests/serialized_action: add missing forced defers
serialized_action_tests depends on the fact that the first part of the
serialized_action is executed at certain points (at which it reads a
global variable that is later updated by the main thread).
This worked well in release mode, where ready continuations were
inlined and run immediately, but not in debug mode, since that
inlining was not happening and the main seastar::thread was missing
some yield points.
Message-Id: <20170731103013.26542-1-pdziepak@scylladb.com>
2017-07-31 11:35:24 +01:00
Duarte Nunes
4e3232fc29 utils/log_histogram: Fix typo when calculating number of buckets
We weren't correctly calculating the number of buckets due to
returning the wrong variable.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170731094733.7746-1-duarte@scylladb.com>
2017-07-31 12:49:11 +03:00
Avi Kivity
e855a28fae Revert "Merge "memtable flush: Fixes and improvements" from Duarte"
This reverts commit 733a64a1df, reversing
changes made to e11e66723a.

Breaks sstable_test and perf_fast_forward.
2017-07-31 12:44:28 +03:00
Avi Kivity
85056f3611 log_histogram: fix constexpr-ness of log_histogram_options
1. assert() is not constexpr.
2. can't use static_assert(), because the constructor may be called in a non-constexpr
   environment; moved to log_histogram
3. pow2_rank() uses count_leading_zeros() which is not constexpr; split
   into constexpr and non-constexpr versions
4. duplicated number_of_buckets() because bucket_of() can't be constexpr due to pow2_rank
Message-Id: <20170726105444.32698-1-avi@scylladb.com>
2017-07-31 09:11:40 +01:00
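Point 3 (a constexpr-friendly variant alongside the runtime one) can be sketched with a hypothetical loop-based pow2_rank, which computes floor(log2(n)) without relying on count_leading_zeros():

```cpp
#include <cstdint>

// Hypothetical constexpr variant of pow2_rank (floor(log2(n))); a runtime
// version would keep using the hardware count-leading-zeros path.
constexpr unsigned pow2_rank_constexpr(uint64_t n) {
    unsigned r = 0;
    while (n >>= 1) {
        ++r;
    }
    return r;
}

static_assert(pow2_rank_constexpr(1) == 0, "rank of 1 is 0");
static_assert(pow2_rank_constexpr(1024) == 10, "rank of 2^10 is 10");
```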
Avi Kivity
733a64a1df Merge "memtable flush: Fixes and improvements" from Duarte
"This series ensures that when we retry a memtable flush, we re-acquire the
flush permit that was previously released. It also ensures we don't hold
the sstable read lock for the duration of the sleep leading to the retry.

To achieve that cleanly we refactor the way the permit lifecycle is managed
by employing a RAII-based approach.

We also improve the latency of writes blocked on virtual dirty by releasing
the flush permit before fsyncing the sstables. There are additional avenues
for performance improvements on top of this one."

* 'memtable-flush-additional-fixes/v3' of github.com:duarten/scylla:
  column_family: Re-acquire flush permit in case of error
  column_family: Don't hold sstable read lock when retrying flush
  sstables: Release the flush permit before fsyncing
  sstables: Introduce write_monitor
  database: Extract out dirty_memory_manager
  dirty_memory_manager: Refactor flush permit lifetime management
  dirty_memory_manager: Invert permit acquisition order
  memtable_list: Register different seal functions for each behaviour
  main: Don't catch polymorphic exceptions by value
2017-07-31 10:32:26 +03:00
Duarte Nunes
e11e66723a main: Don't catch polymorphic exceptions by value
GCC trunk complains due to exception slicing.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170727163021.8000-1-duarte@scylladb.com>
2017-07-31 10:12:13 +03:00
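The slicing problem in a nutshell (a generic illustration, not the patched code): catching by value copies only the std::exception base subobject, losing the derived type's what(); catching by const reference preserves the dynamic type.

```cpp
#include <stdexcept>
#include <string>

// Slices: the copy in the catch clause is a plain std::exception, so the
// derived runtime_error's message is lost (what() returns an
// implementation-defined base-class string).
std::string what_by_value() {
    try {
        throw std::runtime_error("boom");
    } catch (std::exception e) {
        return e.what();
    }
}

// No slicing: the reference binds to the thrown runtime_error.
std::string what_by_reference() {
    try {
        throw std::runtime_error("boom");
    } catch (const std::exception& e) {
        return e.what();
    }
}
```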
Avi Kivity
fc683c3f3e Merge seastar upstream
* seastar a14d667...54e940f (8):
  > Merge "Prometheus to use output stream" from Amnon
  > http_test: Fix an http output stream test
  > build: harden try_compile_and_link output temporary file
  > configure: disable exception scalability hack on debug build
  > build: don't perform test compiles to /dev/null
  > Provide workaround for non scaleable c++ exception runtime
  > Merge "Add output stream to http message reply" from Amnon
  > configure.py: use user provided compiler flags when checking for features
2017-07-31 10:09:48 +03:00
Avi Kivity
c1718dd5e3 Update scylla-ami submodule
* dist/ami/files/scylla-ami 2bd1481...b41e5eb (1):
  > Fix incorrect scylla-server sysconfig file edit for i3 memflush controller
2017-07-31 09:41:24 +03:00
Takuya ASADA
714540cd4c dist/debian: refuse upgrade if current scylla < 1.7.3 && commitlog remains
Commitlog replay fails when upgrading from <1.7.3 to 2.0, so we need to refuse
to upgrade the package if the current scylla is < 1.7.3 and a commitlog remains.

Note: We have the problem on the scylla-server package, but to prevent the
scylla-conf package upgrade, %pretrans should be defined on scylla-conf.

Fixes #2551

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1501187555-4629-1-git-send-email-syuu@scylladb.com>
2017-07-31 09:08:40 +03:00
Avi Kivity
e5d2e28df9 Merge "Backport exception scalability fix from gcc-7" from Gleb
"This patch series backports a scalability fix for _Unwind_Find_FDE and modifies
our CentOS package to use our libgcc and libstdc++, which are needed to make
use of the fix, instead of the locally installed ones."

Ref #2646 (fixes on RHEL 7 and related only)

* 'gleb/exception-gcc-fix-v2' of github.com:cloudius-systems/seastar-dev:
  dist/redhat: Make scylla rpm depend on scylla-libgcc and scylla-libstdc++ and use it instead of locally installed one
  dist/redhat: Backport scalability fix of _Unwind_Find_FDE to our gcc
2017-07-30 19:31:03 +03:00
Gleb Natapov
8fe875cc79 dist/redhat: Make scylla rpm depend on scylla-libgcc and scylla-libstdc++ and use it instead of locally installed one 2017-07-30 16:03:25 +03:00
Gleb Natapov
1cf7e72c68 dist/redhat: Backport scalability fix of _Unwind_Find_FDE to our gcc 2017-07-30 16:03:10 +03:00
Paweł Dziepak
e62403190b Merge "Introduce perf_cache_eviction test" from Tomasz
Runs appending writes to a single partition, at full speed, and a reader
which selects the head of the partition, with 100ms delay between reads.
Prints latency percentiles and some stats.

Intended to test performance at the transition from non-evicting to
evicting modes.

Currently we can see that after the transition, whole partition gets
evicted and reads constantly miss.

Sample output:

    rd/s: 10, wr/s: 135947, ev/s: 0, pmerge/s: 1, miss/s: 0, cache: 708/778 [MB], LSA: 820/910 [MB], std free: 82 [MB]

    reads : min: 149   , 50%: 179   , 90%: 1331  , 99%: 1331  , 99.9%: 1331  , max: 6866   [us]
    writes: min: 3     , 50%: 4     , 90%: 4     , 99%: 5     , 99.9%: 258   , max: 51012  [us]

    rd/s: 7, wr/s: 93354, ev/s: 9, pmerge/s: 1, miss/s: 3, cache: 0/0 [MB], LSA: 107/128 [MB], std free: 82 [MB]

    reads : min: 179   , 50%: 179   , 90%: 73457 , 99%: 73457 , 99.9%: 73457 , max: 105778 [us]
    writes: min: 3     , 50%: 4     , 90%: 4     , 99%: 5     , 99.9%: 258   , max: 105778 [us]

* tag 'tgrabiec/row-eviction-perf-test' of github.com:scylladb/seastar-dev:
  tests: Introduce perf_cache_eviction
  tests: simple_schema: Add getter for DDL statement
  estimated_histogram: Implement percentile()
  utils: estimated_histogram: Make printable
2017-07-28 09:49:22 +01:00
Duarte Nunes
0f1bd81523 column_family: Re-acquire flush permit in case of error
If we fail to flush an sstable after creating the flush_reader, we
will have released the flush permit by the time we retry the flush.
Ensure that when retrying, we re-acquire the flush permit.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-27 21:09:18 +02:00
Duarte Nunes
2f4cffc7f6 column_family: Don't hold sstable read lock when retrying flush
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-27 21:09:18 +02:00
Duarte Nunes
5e64839e85 sstables: Release the flush permit before fsyncing
This allows a queued flush to start while we fsync the current
sstable, which helps reduce the overall time new writes are blocked on
dirty memory.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-27 21:09:18 +02:00
Duarte Nunes
a737577881 sstables: Introduce write_monitor
The write_monitor provides callbacks to inform an observer of the
state of the ongoing sstable write.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-27 21:09:18 +02:00
Duarte Nunes
121f967b30 database: Extract out dirty_memory_manager
Needed so that the flush_permit can be propagated to the sstables layer.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-27 21:09:18 +02:00
Duarte Nunes
ef1275e9dd dirty_memory_manager: Refactor flush permit lifetime management
This patch refactors how the flush permit lifetime is managed,
dropping the current hash table in favour of a RAII approach.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-27 21:09:18 +02:00
Duarte Nunes
cfc8fae33f dirty_memory_manager: Invert permit acquisition order
For an upcoming fix it is required to invert the permit acquisition
order: first we acquire the background work permit and then the single
flush permit.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-27 21:09:18 +02:00
Duarte Nunes
7e68e4677d memtable_list: Register different seal functions for each behaviour
Instead of passing a flush_behaviour to the seal function, use two
different functions for each of the behaviours.

This will be important in the forthcoming patches, which will require
the signatures of those functions to differ.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-27 21:09:18 +02:00
Duarte Nunes
7502401652 main: Don't catch polymorphic exceptions by value
GCC trunk complains due to exception slicing.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-27 21:09:18 +02:00
Duarte Nunes
143f4fd861 Merge "Prevent pull requests from accumulating" from Tomasz
If schema merging completes at a lower rate than incoming pull requests,
then merge processes will accumulate and needlessly request and hold schema mutations.

In rare cases, when there are constant schema changes, they may even
overflow memory. This was seen in dtest:

  concurrent_schema_changes_test.py:TestConcurrentSchemaChanges.create_lots_of_schema_churn_test

Allowing only one active and one queued pull request per remote
endpoint is enough.

* tag 'tgrabiec/dont-accumulate-schema-pulls-v2' of github.com:scylladb/seastar-dev:
  migration_manager: Log schema pulls
  migration_manager: Prevent pull requests from accumulating
  utils: Introduce serialized_action
2017-07-27 21:01:38 +02:00
Tomasz Grabiec
e09220dbff migration_manager: Log schema pulls 2017-07-27 20:08:25 +02:00
Tomasz Grabiec
350d98d4e1 migration_manager: Prevent pull requests from accumulating
If schema merging completes at a lower rate than incoming pull requests,
then merge processes will accumulate and needlessly request and hold schema mutations.

In rare cases, when there are constant schema changes, they may even
overflow memory. This was seen in dtest:

  concurrent_schema_changes_test.py:TestConcurrentSchemaChanges.create_lots_of_schema_churn_test

Allowing only one active and one queued pull request per remote
endpoint is enough.
2017-07-27 20:08:25 +02:00
Tomasz Grabiec
6a3703944b utils: Introduce serialized_action 2017-07-27 20:08:21 +02:00
Duarte Nunes
dbbb9e93da tests/sstable_mutation_test: Test promoted index blocks are monotonic
Reproduces #2333

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-27 18:23:58 +02:00
Duarte Nunes
06728bdfe9 sstables: Consider eoc when flushing pi block
When flushing a promoted index block using a range tombstone cell name
as a bound, use the right eoc value instead of always writing
composite::eoc::none.

Fixes #2333

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-27 18:23:58 +02:00
Duarte Nunes
718517ed91 sstables: Extract out converting bound_kind to eoc
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-27 18:23:58 +02:00
Paweł Dziepak
f02bef7917 streamed_mutation: do not call fill_buffer() ahead of time
consume_mutation_fragments_until() allows consuming mutation fragments
until a specified condition happens. This patch reorganises its
implementation so that we avoid situations when fill_buffer() is called
with stop condition being true.
Message-Id: <20170727122218.7703-1-pdziepak@scylladb.com>
2017-07-27 17:47:57 +02:00
Tomasz Grabiec
ac7e6ef1bc tests: Introduce perf_cache_eviction 2017-07-27 17:19:07 +02:00
Tomasz Grabiec
2d2e7ef6fb tests: simple_schema: Add getter for DDL statement 2017-07-27 17:19:07 +02:00
Tomasz Grabiec
5602be72fa estimated_histogram: Implement percentile() 2017-07-27 17:19:07 +02:00
Tomasz Grabiec
1bc305ed7b utils: estimated_histogram: Make printable 2017-07-27 17:19:03 +02:00
Takuya ASADA
91a75f141b dist/redhat: limit metapackage dependencies to specific version of scylla packages
When we install the scylla metapackage with a version (e.g. scylla-1.7.1),
it always installs the newest scylla-server/-jmx/-tools in the repo,
instead of installing the specified version of the packages.

To install the same version of the packages as the metapackage, limit the
dependencies to the current package version.

Fixes #2642

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20170726193321.7399-1-syuu@scylladb.com>
2017-07-27 14:21:35 +03:00
Takuya ASADA
11870e47ec dist/redhat: refuse upgrade if current scylla < 1.7.3 && commitlog remains
Commitlog replay fails when upgrading from <1.7.3 to 2.0, so we need to refuse
to upgrade the package if the current scylla is < 1.7.3 and a commitlog remains.

Note: We have the problem on the scylla-server package, but to prevent the
scylla-conf package upgrade, %pretrans should be defined on scylla-conf.

Fixes #2551

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20170727110730.613-1-syuu@scylladb.com>
2017-07-27 14:09:17 +03:00
Tomasz Grabiec
22948238b6 row_cache: Fix potential timeout or deadlock due to sstable read concurrency limit
database::make_sstable_reader() creates a reader which will need to
obtain a semaphore permit when invoked. Therefore, each read may
create at most one such reader in order to be guaranteed to make
progress. If the reader tries to create another reader, that may
deadlock (or, for non-system tables, time out) if a large enough
number of such readers try to do the same thing at the same time.

Avoid the problem by dropping the previous reader before creating a
new one.

Refs #2644.

Message-Id: <1501152454-4866-1-git-send-email-tgrabiec@scylladb.com>
2017-07-27 13:58:20 +03:00
Vlad Zolotarov
e98adb13d5 service::storage_service: initialize auth and tracing after we joined the ring
Initialize the system_auth and system_traces keyspaces and their tables after
the node joins the token ring, because as part of system_auth initialization
SELECT and possibly INSERT CQL statements are going to be issued.

This patch effectively reverts the d3b8b67 patch and brings the initialization order
to how it was before that patch.

Fixes #2273

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1500417217-16677-1-git-send-email-vladz@scylladb.com>
2017-07-27 10:54:36 +02:00
Amnon Heiman
a71b9e498a database: remove latency from the system table
This patch removes the latency histograms from the system table; it also
extends the already existing exclusion to all system keyspaces.

It also uses the new get_histogram API to set a minimal bucket size of
100 microseconds.
2017-07-27 11:41:15 +03:00
Amnon Heiman
1b05f23d12 estimated histogram: return a smaller histogram
The current histogram contains 91 buckets; this is a very high
resolution with a high upper limit.

To reduce the traffic passed between scylla and prometheus, this patch
generates a smaller histogram.

It limits the number of buckets (16 by default), sets a lower limit on
the lowest bucket, and uses 2 as the bucket coefficient.

The highest empty buckets will not be reported.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>

2017-07-27 11:41:10 +03:00
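The bucket layout described above can be sketched as follows, with illustrative names and under the assumption that bounds simply double from the lowest bucket:

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch: at most `max_buckets` bounds, a floor on the
// lowest bucket, and a coefficient of 2 between consecutive bounds.
std::vector<int64_t> bucket_bounds(int64_t lowest, unsigned max_buckets) {
    std::vector<int64_t> bounds;
    for (int64_t b = lowest; bounds.size() < max_buckets; b *= 2) {
        bounds.push_back(b);
    }
    return bounds;
}
```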
Tomasz Grabiec
e9fc0b0491 Merge "Some fixes for performance regressions in perf_fast_forward" from Paweł
These patches contain some minor fixes for performance regressions reported
by perf_fast_forward after the partial cache was merged. The solution is still
far from perfect; there is one case that still has a 30% degradation, but
there is some improvement, so there is no reason to hold these changes back.

Refs #2582.

Some numbers:
before - before cache changes were merged
(555621b537)

cache - at the commit that introduced the partial cache
(9b21a9bfb6)

after - recent master + this series
(based on e988121dbb)

Differences are shown relative to "before".

Testing effectiveness of caching of large partition, single-key slicing reads:
Large partitions, range [0, 500000], populating cache
  before      cache      after
 1636840    1013688    1234606
              -38%        -25%

Large partitions, range [0, 500000], reading from cache
  before      cache      after
 2012615    3076812    3035423
               +53%       +51%

Testing scanning small partitions with skips.
reading small partitions (skip 0)
 before      cache      after
 227060     165261     200639
              -27%       -11%

skipping small partitions (skip 1)
 before      cache      after
  29813      27312      38210
               -8%       +28%

Testing slicing small partitions:
slicing small partitions (offset 0, read 4096)
 before      cache      after
 195282     149695     180497
              -23%        -8%

* https://github.com/pdziepak/scylla.git perf_fast_forward-regression/v3:
  sstables: make sure that fill_buffer() actually fills buffer
  mutation_merger: improve handling of non-deferring fill_buffer()s
  partition_snapshot_row_cursor: avoid apply() in single-version cases
  sstables: introduce decorated_key_view
  ring_position_comparator: accept sstables::decorated_key_view
  sstable: keep a pre-computed token in summary_entry
  sstables: cache token in index entries
  index_reader: advance_and_check_if_present() use index_comparator
  ring_position_comparator: drop unused overloads
  cache_streamed_mutation: avoid moving clustering_row
  streamed_mutation: introduce consume_mutation_fragments_until()
  cache_streamed_mutation: use consumer based read_context reader
  rows_entry: make position() inlineable
  mutation_fragment: make destructor always_inline
  keys: introduce compound_wrapper::from_exploded_view()
  sstables: avoid copying key components
  compound_compat: explode: reserve some elements in a vector
  cache: short-circuit static row logic if there are no static columns
  cache: use equality comparators instead of tri_compare
  sstables: avoid indirect calls to abstract_type::is_multi_cell()
2017-07-27 10:14:35 +02:00
Pekka Enberg
b80504188a docs/docker-hub: Mark '--experimental' as 2.0 feature
The '--experimental' flag appears in 2.0 so mark it as such in the user
documentation on Docker Hub.

Message-Id: <1501137703-29706-1-git-send-email-penberg@scylladb.com>
2017-07-27 10:28:25 +03:00
Duarte Nunes
85e85ec72e Don't catch polymorphic exceptions by value
It makes gcc a very sad compiler.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170726172053.5639-2-duarte@scylladb.com>
2017-07-27 09:39:58 +03:00
Duarte Nunes
7536659cb5 CqlParser: Don't catch polymorphic exceptions by value
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170726172053.5639-1-duarte@scylladb.com>
2017-07-27 09:39:57 +03:00
Tzach Livyatan
ea97b87205 Adding Scylla restart instructions
Signed-off-by: Tzach Livyatan <tzach@scylladb.com>
Message-Id: <20170725064719.31109-1-tzach@scylladb.com>
2017-07-27 09:38:49 +03:00
Vlad Zolotarov
9adabd1bc4 utils::loading_cache: add stop() method
loading_cache invokes a timer that may issue asynchronous operations
(queries) that end up writing into its internal fields.

We have to ensure that these operations are over before we can destroy
the loading_cache object.

Fixes #2624

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1501096256-10949-1-git-send-email-vladz@scylladb.com>
2017-07-26 21:28:49 +02:00
Duarte Nunes
50ad0003c6 db/schema_tables: Drop dropped columns when dropping tables
Fixes #2633

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170726150228.2593-2-duarte@scylladb.com>
2017-07-26 18:41:28 +02:00
Duarte Nunes
3425403126 db/schema_tables: Store column_name in text form
As does Cassandra.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170726150228.2593-1-duarte@scylladb.com>
2017-07-26 18:41:12 +02:00
Duarte Nunes
787308a96c cql3/tuples: Don't catch polymorphic exception by value
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170726155740.3275-1-duarte@scylladb.com>
2017-07-26 19:28:35 +03:00
Asias He
515a744303 gossip: Fix nr_live_nodes calculation
We need to consider the _live_endpoints size. nr_live_nodes should
not be larger than the _live_endpoints size, otherwise the loop that
collects live nodes can run forever.

It is a regression introduced in commit 437899909d
(gossip: Talk to more live nodes in each gossip round).

Fixes #2637

Message-Id: <863ec3890647038ae1dfcffc73dde0163e29db20.1501026478.git.asias@scylladb.com>
2017-07-26 16:48:30 +03:00
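The clamp this implies can be shown in one line (illustrative names, not the actual gossip code): never ask the selection loop for more live nodes than _live_endpoints holds.

```cpp
#include <algorithm>
#include <cstddef>

// Cap the number of live nodes to collect at the number actually known,
// so the selection loop cannot spin forever looking for missing nodes.
std::size_t clamp_live_nodes(std::size_t wanted, std::size_t live_endpoints) {
    return std::min(wanted, live_endpoints);
}
```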
Paweł Dziepak
7b0f75c0d1 sstables: avoid indirect calls to abstract_type::is_multi_cell() 2017-07-26 14:38:27 +01:00
Paweł Dziepak
b4d1dea4a9 cache: use equality comparators instead of tri_compare
An equality comparator may be much cheaper than the fully fledged
trichotomic comparator, especially if the component types are byte-order
equal but not byte-order comparable.
2017-07-26 14:38:27 +01:00
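Why equality can be so much cheaper can be illustrated generically (not Scylla's actual comparators): for byte-order-equal types, equality can reject on a length mismatch and otherwise do a single memcmp, while a trichotomic compare of non-byte-order-comparable types must decode components.

```cpp
#include <cstring>
#include <string_view>

// Byte-order-equal equality check: a length mismatch short-circuits,
// and a match needs only one memcmp over the raw bytes.
bool bytes_equal(std::string_view a, std::string_view b) {
    return a.size() == b.size()
        && std::memcmp(a.data(), b.data(), a.size()) == 0;
}
```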
Paweł Dziepak
2780555968 cache: short-circuit static row logic if there are no static columns 2017-07-26 14:38:27 +01:00
Paweł Dziepak
4a0385e908 compound_compat: explode: reserve some elements in a vector
When we are exploding a compound key we already know that there is more
than one component, but we have no easy way of determining how many of
them are going to be there. Let's reserve space for a few elements so
that we avoid an excessive number of reallocations in case of
medium-sized keys.
2017-07-26 14:38:27 +01:00
Paweł Dziepak
28c105e4a7 sstables: avoid copying key components 2017-07-26 14:38:27 +01:00
Paweł Dziepak
6031b7e587 keys: introduce compound_wrapper::from_exploded_view() 2017-07-26 14:38:27 +01:00
Paweł Dziepak
c9ccd813ab mutation_fragment: make destructor always_inline
The mutation_fragment destructor was already made inline-friendly by moving
most of the logic to a separate function. However, the compiler is still
quite reluctant to inline it in certain cases, so let's give it a
stronger hint.
2017-07-26 14:38:27 +01:00
Paweł Dziepak
43cce6c2f4 rows_entry: make position() inlineable 2017-07-26 14:38:27 +01:00
Paweł Dziepak
c2ec43f70b cache_streamed_mutation: use consumer based read_context reader 2017-07-26 14:38:21 +01:00
Paweł Dziepak
2066354de3 streamed_mutation: introduce consume_mutation_fragments_until()
consume_mutation_fragments_until() is a consumer based interface that
avoids indirect calls and continuation overhead present in the naive
streamed_mutation::operator() approach.
2017-07-26 14:37:20 +01:00
Paweł Dziepak
9bc6038ff3 cache_streamed_mutation: avoid moving clustering_row
clustering_row can store quite a lot of data internally, which makes its
move constructor not exactly cheap.
If possible it is better to move the mutation_fragment around, as it keeps
everything externally. This also avoids some cases where a clustering_row
would be extracted from a mutation_fragment only to be used to create
another mutation_fragment later.
2017-07-26 14:36:37 +01:00
Paweł Dziepak
68e57a742f ring_position_comparator: drop unused overloads 2017-07-26 14:36:37 +01:00
Paweł Dziepak
960a140880 index_reader: advance_and_check_if_present() use index_comparator 2017-07-26 14:36:37 +01:00
Paweł Dziepak
dc7bad9a50 sstables: cache token in index entries
When an sstable reader is fast-forwarded, some index entries may be read
(and compared) multiple times. This patch makes sure that once a token
is computed we keep it around and reuse it if the entry is accessed again.
2017-07-26 14:36:37 +01:00
Paweł Dziepak
bfb7b56c74 sstable: keep a pre-computed token in summary_entry
Each sstable index lookup involves a binary search in the summary, and
each time the partition key of a summary entry is compared with anything,
its token needs to be calculated.
Since we keep the summary in memory all the time, it is better to also
keep the tokens around.
2017-07-26 14:36:36 +01:00
Paweł Dziepak
fe7eba7f06 ring_position_comparator: accept sstables::decorated_key_view
ring_position_comparator has overloads for comparing ring_positions as
well as sstables::key_view. In the case of the latter it needs to
compute the token of the key. However, the sstable layer could cache
some tokens so let's allow the comparator callers to provide it
directly.
2017-07-26 14:36:36 +01:00
Paweł Dziepak
31d7cfdefb sstables: introduce decorated_key_view 2017-07-26 14:36:36 +01:00
Paweł Dziepak
722c56f3f2 partition_snapshot_row_cursor: avoid apply() in single-version cases 2017-07-26 14:36:36 +01:00
Paweł Dziepak
e145ee6bb8 mutation_merger: improve handling of non-deferring fill_buffer()s
It is possible that a call to fill_buffer() will return an immediately
ready future. This patch avoids uncontrolled recursion in the case when
all merged streamed mutations do not defer in fill_buffer(), and also
optimises the non-deferring case by avoiding some of the logic.
2017-07-26 14:36:36 +01:00
Paweł Dziepak
e0a04cb7fe sstables: make sure that fill_buffer() actually fills buffer
streamed_mutation::impl::fill_buffer() is supposed to either push
mutation fragments to the buffer or set the EOS flag. However, it was
possible for mp_row_consumer to return proceed::no when a skip was
needed without satisfying either of these conditions.
2017-07-26 14:36:36 +01:00
Pekka Enberg
e66635a885 Merge "Developer documentation improvements" from Jesse
"This patch series addresses some feedback from the preliminary
 HACKING.md, adds some new content, and updates the README file with
 some quick-start information."

* 'jhk/better_hacking/v3' of github.com:hakuch/scylla:
  README.md: Add quick-start section and defer to `HACKING.md`
  HACKING.md: `CMakeLists.txt` for analysis works for other IDEs too
  HACKING.md: Add details and examples for unit tests
  HACKING.md: Add section for project dependencies
  HACKING.md: Describe releases and tags
  HACKING.md: Re-work "building" section, including memory needs
  HACKING.md: Update ccache recommendations
  HACKING.md: Update "Contributing" URL
2017-07-26 16:25:58 +03:00
Duarte Nunes
e988121dbb schema_builder: Replace type when re-dropping column
Fixes #2634

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170725183933.5311-1-duarte@scylladb.com>
2017-07-26 13:26:29 +02:00
Duarte Nunes
64fcf0c642 alter_table_statement: Allow collection columns to replace normal ones
Fixes #2632

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170725183811.5155-1-duarte@scylladb.com>
2017-07-26 13:24:03 +02:00
Duarte Nunes
1622847c1d perf/perf_fast_forward: Don't pass non-pod to varargs function
Passing a non-POD object to variadic functions is unsupported.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170726094756.22867-1-duarte@scylladb.com>
2017-07-26 11:48:22 +01:00
Duarte Nunes
9c831b4e97 schema: Remove unnecessary print
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170725174000.71061-1-duarte@scylladb.com>
2017-07-26 12:01:51 +02:00
Duarte Nunes
472f32fb06 tests/schema_change_test: Add test case for add+drop notification
Reproduces #2616

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170725170622.4380-2-duarte@scylladb.com>
2017-07-26 11:59:48 +02:00
Duarte Nunes
33e18a1779 db/schema_tables: Consider differing dropped columns
If a node is notified of a schema change where the schema's dropped
columns have changed, that node will miss the changes to the dropped
columns. A scenario where this can happen is where a column c is
dropped, then added back with a different type, and then dropped again,
with a node n having seen the first drop and being notified of the
subsequent add and drop.

Fixes #2616

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170725170622.4380-1-duarte@scylladb.com>
2017-07-26 11:59:34 +02:00
Jesse Haber-Kucharsky
d6c0138576 README.md: Add quick-start section and defer to HACKING.md 2017-07-25 17:58:00 -04:00
Jesse Haber-Kucharsky
d06bccf857 HACKING.md: CMakeLists.txt for analysis works for other IDEs too 2017-07-25 17:57:55 -04:00
Jesse Haber-Kucharsky
9c2390e1a4 HACKING.md: Add details and examples for unit tests 2017-07-25 17:57:43 -04:00
Jesse Haber-Kucharsky
488839dd15 HACKING.md: Add section for project dependencies 2017-07-25 17:57:43 -04:00
Jesse Haber-Kucharsky
14d03d7548 HACKING.md: Describe releases and tags 2017-07-25 17:57:43 -04:00
Jesse Haber-Kucharsky
6e8bfdbb3f HACKING.md: Re-work "building" section, including memory needs 2017-07-25 17:57:43 -04:00
Jesse Haber-Kucharsky
64acb41305 HACKING.md: Update ccache recommendations 2017-07-25 17:57:43 -04:00
Jesse Haber-Kucharsky
4fe767de31 HACKING.md: Update "Contributing" URL
The old page results in a 404 error.
2017-07-25 17:46:45 -04:00
Paweł Dziepak
295689d16f db: include counter writes on leader in metrics
The counter write path on the leader is completely different from that on
other replicas (non-leaders share the write path between counters and
regular columns). This patch makes sure that counter writes performed on
the leader are added to the appropriate metrics.
Message-Id: <20170725153346.31238-1-pdziepak@scylladb.com>
2017-07-25 18:31:43 +02:00
Tomasz Grabiec
18be42f71a Merge fixes related to row cache from Raphael
* git@github.com:raphaelsc/scylla.git row_cache_fixes:
  db: atomically synchronize cache with changes to the snapshot
  db: refresh row cache's underlying data source after compaction
2017-07-25 15:34:32 +02:00
Paweł Dziepak
79a1ad7a37 tests/row_cache: test queries with no clustering ranges
Reproducer for #2604.
Message-Id: <20170725131220.17467-3-pdziepak@scylladb.com>
2017-07-25 15:29:17 +02:00
Paweł Dziepak
1ea507d6ae tests: do not overload the meaning of empty clustering range
Empty clustering key range is perfectly valid and signifies that the
reader is not interested in anything but the static row. Let's not
make it mean anything else.
Message-Id: <20170725131220.17467-2-pdziepak@scylladb.com>
2017-07-25 15:28:12 +02:00
Paweł Dziepak
6572f38450 cache: fix aborts if no clustering range is specified
cache_streamed_mutation assumed that at least one clustering range was
specified. That was wrong since the readers are allowed to query just
for a static row (e.g. counter update that modifies only static
columns).

Fixes #2604.
Message-Id: <20170725131220.17467-1-pdziepak@scylladb.com>
2017-07-25 15:27:48 +02:00
Amnon Heiman
1f5a9ecc40 scylla-housekeeping: support patches releases
To support both minor and patch releases, the version server now returns
a patchversion parameter that includes the latest patch release of the
current minor version.

Housekeeping should return a separate message if the current version is
not the latest patch release of its minor version, and another message
if a newer minor version is available.

For example, if a user is running version 1.6.1, they should get a warning
to update if 1.6.2 is available and, in addition, a warning to upgrade if
version 1.7 is out.

Examples:
$ scylla-housekeeping version --version 1.6.2
Your current Scylla release is 1.6.2, while the latest patch release is 1.6.4, and the latest minor release is 1.7.2 (recommended)

$ scylla-housekeeping version --version 1.7.1
Your current Scylla release is 1.7.1 while the latest patch release 1.7.2 is available, update for the latest bug fixes

$ scylla-housekeeping version --version 1.7.1
Your current Scylla release is 1.7.1 while the latest patch release is 1.7.2, update for the latest bug fixes and improvements

Fixes #1972

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Acked-by: Tzach Livyatan <tzach@scylladb.com>
Message-Id: <20170725095455.6450-1-amnon@scylladb.com>
2017-07-25 13:12:18 +03:00
Raphael S. Carvalho
637f3bfa50 db: refresh row cache's underlying data source after compaction
The underlying data source in the row cache holds a reference to the
sstable set from before compaction, which isn't released until a memtable
flush. This means file descriptors of deleted sstables remain open,
wasting disk space.
The fix is to refresh the underlying data source in the row cache.

Fixes #2570.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-24 15:49:11 -03:00
Raphael S. Carvalho
e3ad676433 db: atomically synchronize cache with changes to the snapshot
Updates to the cache and the snapshot (i.e. the sstable set) aren't
synchronized, so it may happen that a cache update for a memtable flush
will use the wrong snapshot version, which violates the cache invariant
that each partition entry only reflects one snapshot.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-24 15:45:05 -03:00
Avi Kivity
c21bb5ae05 tests: fix sstable_datafile_test build with boost 1.55
Boost 1.55 accidentally removed support for "range for" on
recursive_directory_iterator (previous and latter versions do
support it). Use old-style iteration instead.

Message-Id: <20170724080128.8824-1-avi@scylladb.com>
2017-07-24 11:20:12 +03:00
Avi Kivity
f75b578607 Update ami submodule
* dist/ami/files/scylla-ami 5dfe42f...2bd1481 (1):
  > Enable support for experimental CPU controller in i3 instances
2017-07-24 10:26:52 +03:00
Tomasz Grabiec
60678f0e8a ring_position: Optimize construction from r-value references of decorated_key
Message-Id: <1500650171-26291-1-git-send-email-tgrabiec@scylladb.com>
2017-07-24 10:25:14 +03:00
Tomasz Grabiec
136d205855 mutation_partition: Always mark static row as continuous when no static columns
To avoid unnecessary cache misses after static columns are added.

Message-Id: <1500650057-26036-1-git-send-email-tgrabiec@scylladb.com>
2017-07-24 10:23:35 +03:00
Tomasz Grabiec
714d609605 database: Fix reversed order of keyspace and table names in a log message
Message-Id: <1500649623-25377-1-git-send-email-tgrabiec@scylladb.com>
2017-07-21 17:10:17 +02:00
Tomasz Grabiec
059779eea6 gdb: Fix 'scylla ptr' reporting large object pages as free
The 'free' attribute is not updated for all pages belonging to a large
object, so we can't use it to determine if the page is allocated or
not. More reliable way is to check if it belongs to any free span.
Message-Id: <1500648094-20039-1-git-send-email-tgrabiec@scylladb.com>
2017-07-21 16:56:41 +02:00
Tomasz Grabiec
29a82f5554 schema_registry: Keep unused entries around for 1 second
This is in order to avoid frequent misses which have a relatively high
cost. A miss means we need to fetch schema definition from another
node and in case of writes do a schema merge.

If the schema is kept alive only by the incoming request, then it
will be forgotten immediately when the request is done, and the next
request using the same schema version will miss again.

Refs #2608.
Message-Id: <1500632447-10104-1-git-send-email-tgrabiec@scylladb.com>
2017-07-21 16:56:37 +02:00
Tomasz Grabiec
ecc85988dd legacy_schema_migrator: Don't snapshot empty legacy tables
Otherwise we will create a new (empty) snapshot each time we boot.
Message-Id: <1500573920-31478-2-git-send-email-tgrabiec@scylladb.com>
2017-07-21 16:56:31 +02:00
Tomasz Grabiec
408cea66cd database: Allow disabling auto snapshots during drop/truncate
Message-Id: <1500573920-31478-1-git-send-email-tgrabiec@scylladb.com>
2017-07-21 16:56:29 +02:00
Duarte Nunes
937fe80a1a Merge 'Fix possible inconsistency of table schema version' from Tomasz
"Fixes issues uncovered in longevity test (#2608).

Main problem is that, due to time drift, the scylla_tables.version column
may not get deleted on all nodes doing the schema merge, which will
make some nodes come up with a different table schema version than others.

The inconsistency will not heal because scylla_tables doesn't
take part in the schema sync. This is fixed by the last patch.

This will cause nodes to constantly try to sync the schema, which under
some conditions triggers #2617."

* tag 'tgrabiec/fix-table-schema-version-inconsistency-v1' of github.com:scylladb/seastar-dev:
  schema_tables: Add scylla_tables to ALL
  schema: Make schema_mutations equality consistent with digest
  schema_tables: Extract compact_for_schema_digest()
  schema_tables: Always drop scylla_tables::version
2017-07-21 16:55:23 +02:00
Tomasz Grabiec
65c64614aa schema_registry: Ensure schema_ptr is always synced on the other core
global_schema_ptr ensures that schema object is replicated to other
cores on access. It was replicating the "synced" state as well, but
only when the shard didn't know about the schema. It could happen that
the other shard has the entry, but it's not yet synced, in which case
we would fail to replicate the "synced" state. This will result in
exception from mutate(), which rejects attempts to mutate using an
unsynced schema.

The fix is to always replicate the "synced" state. If the entry is
syncing, we will preemptively mark it as synced earlier. The syncing
code is already prepared for this.

Refs #2617.
Message-Id: <1500555224-15825-1-git-send-email-tgrabiec@scylladb.com>
2017-07-21 16:54:47 +02:00
Duarte Nunes
7eecda3a61 schema: Support compaction enabled attribute
Fixes #2547

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170721132206.3037-1-duarte@scylladb.com>
2017-07-21 15:38:45 +02:00
Amnon Heiman
e345d05ebe system_keyspace: Use paging for get compaction history
There could be a large number of entries when querying the compaction
history.

This patch changes the query to use paging. It still collects all results
before returning to the caller.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2017-07-20 18:17:49 +03:00
Vlad Zolotarov
9086c643a6 service::storage_proxy: add a trace points pair in the SELECT replica flow
Add two trace points: at the beginning and at the end of the replica flow on the
replica shard.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1499961542-16263-1-git-send-email-vladz@scylladb.com>
2017-07-20 16:44:25 +02:00
Amnon Heiman
08c81427b9 Add paging for internal queries
Usually, internal queries are used for short queries. Sometimes though,
like in the case of get compaction history, there could be a large
number of results. Without paging this would overload the system.

This patch adds the ability to use paging internally.

Paging is used explicitly: all the relevant information is stored in an
internal_query_state, which holds both the paging state and the query, so
that consecutive calls can be made.

To use paging, use the query method that takes a function.

Besides a statement and its parameters, the method takes a function that
is invoked for each of the returned rows.

For example if qp is a query_processor:

qp.query("SELECT * from system.compaction_history", [] (const cql3::untyped_result_set::row& row) {
  ....
  // do something with row
  ...
  return stop_iteration::no; // keep on reading
});

This will run the function on each row of the compaction history table.

To stop the iteration, the function can return stop_iteration::yes.
2017-07-20 17:43:51 +03:00
Tomasz Grabiec
2bc549f426 Merge perf_fast_forward enhancements from Paweł
* https://github.com/pdziepak/scylla.git perf_fast_forward_improvements/v1:
  perf_fast_forward: move global state to global scope
  perf_fast_forward: move tests groups to separate functions
  perf_fast_forward: allow running only selected test groups
  perf_fast_forward: use consumer interface for reading
    streamed_mutation
2017-07-20 16:41:29 +02:00
Tomasz Grabiec
ed2388da2c schema_tables: Add scylla_tables to ALL
So that scylla_tables takes part in the digest and in mutations sent
as part of schema sync. Otherwise inconsistencies in scylla_tables
will not heal.

Refs #2608.
2017-07-20 15:47:10 +02:00
Tomasz Grabiec
78ff728795 schema: Make schema_mutations equality consistent with digest
Digest only looks at live values, ignoring deletion
information. Equality should be consistent with that, so that schemas
considered equal do not trigger the alter path unnecessarily.
2017-07-20 15:47:10 +02:00
Tomasz Grabiec
6adbe61e2f schema_tables: Extract compact_for_schema_digest() 2017-07-20 15:47:10 +02:00
Tomasz Grabiec
1b85c316bf schema_tables: Always drop scylla_tables::version
It can happen that due to time drift between nodes, the incoming
"version" cell will have higher timestamp than api::new_timestamp().
In such case the column would not be dropped and would cause version
mismatch between nodes.

Ensure it's always covered by using max of current time and cell's
timestamp.

Refs #2608.
2017-07-20 15:47:10 +02:00
Takuya ASADA
2bf16c6e8a dist/debian: add --no-clean option to skip building pbuilder .tgz image
By default build_deb.sh destroys all previous build images to make sure we
don't have environment-dependent issues, but it takes time to build the
distribution root image (the .tgz in pbuilder) from scratch.

The --no-clean option skips the .tgz creation stage and uses the previously
built image, to make the build time shorter.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1500542094-12946-1-git-send-email-syuu@scylladb.com>
2017-07-20 15:37:00 +03:00
Calle Wilund
91f314e54c duration.cc: Fix static assert
static_assert(cond) is C++17 only
Message-Id: <1500373227-12025-1-git-send-email-calle@scylladb.com>
2017-07-20 13:14:51 +02:00
Paweł Dziepak
823fb5e9d8 perf_fast_forward: use consumer interface for reading streamed_mutation
Using streamed_mutation::operator() is undesirable as it introduces an
indirect call and a continuation overhead for each emitted mutation
fragment. The consumer interface is the preferred method of reading
streamed mutations.
2017-07-20 11:02:53 +01:00
Paweł Dziepak
d184508d7b perf_fast_forward: allow running only selected test groups 2017-07-20 11:02:31 +01:00
botond
884928c511 install-dependencies.sh: Fix ubuntu dependencies
Remove dependencies section from README.md, point to the
install-dependencies.sh script instead.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <7f51f17a743a82d68b7d4a279b066ffe55fe0379.1500540523.git.bdenes@scylladb.com>
2017-07-20 12:00:20 +03:00
Paweł Dziepak
a18a36c94b perf_fast_forward: move tests groups to separate functions 2017-07-20 09:26:42 +01:00
Paweł Dziepak
3fd4f9c1c7 perf_fast_forward: move global state to global scope
All perf_fast_forward test cases currently live in the main
function. This patch moves the state they rely on to a global scope
so that it will be easier to extract these tests to individual
functions.
2017-07-20 09:26:42 +01:00
Avi Kivity
c5ee62a6a4 Merge "restrict background writers with scheduling groups" from Glauber
"This patchset restricts background writers - such as compactions,
streaming flushes and memtable flushes to a maximum amount of CPU usage
through a seastar::thread_scheduling_group.

The said maximum is recommended to be set to 50% - it is disabled by
default, but can be adjusted through a configuration option until we
are able to auto-tune this.

The second patch in this series provides a preview on how such auto-tune
would look like. By implementing a simple controller we automatically
adjust the quota for the memtable writer processes, so that the rate at
which bytes come in is equal to the rates at which bytes are flushed.

Tail latencies are greatly reduced by this series, and heavy spikes that
previously appeared on CPU-bound workloads are no more."

* 'memtable-controller-v5' of https://github.com/glommer/scylla:
  simple controller for memtable/streaming writer shares.
  restrict background writers to 50 % of CPU.
2017-07-20 10:58:53 +03:00
Takuya ASADA
c441b1604a dist/redhat: use EPEL's ragel for CentOS
Since ragel was added to EPEL, drop the self-built package and use the EPEL one.

See #2441

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <20170719170500.18515-1-syuu@scylladb.com>
2017-07-20 10:36:57 +03:00
Calle Wilund
7a583585a2 system_keyspace: Make sure "system" is written to keyspaces (visible)
Fixes #2514

Bug in schema version 3 update: We failed to write "system" to the
schema tables. Only visible on an empty instance of course.

Message-Id: <1500469809-23546-2-git-send-email-calle@scylladb.com>
2017-07-19 16:18:56 +03:00
Calle Wilund
247c36e048 system_schema: Fix remaining places not handling two system keyspaces
Some places remained where code looked directly at
system_keyspace::NAME to determine if a ks is
considered special/system/protected, including
schema digest calculation.

Export "is_system_keyspace" and use accordingly.

Message-Id: <1500469809-23546-1-git-send-email-calle@scylladb.com>
2017-07-19 16:18:45 +03:00
Duarte Nunes
1daf1bc4bb Merge 'Revert back to 1.7 schema layout in memory' from Tomasz
"Fixes schema layout incompatibility in a mixed 1.7 and 2.0 cluster (#2555)
by reverting back to using the old layout in memory and thus also
in across-node requests. We still use the new v3 layout in schema
tables (needed by drivers and external tools). Translations happen
when converting to/from schema mutations."

* tag 'tgrabiec/use-v2-schema-layout-in-memory-v2' of github.com:scylladb/seastar-dev:
  schema: Revert back to the 1.7 layout of static compact tables in memory
  schema: Use v3 column layout when converting to/from schema mutations
  schema: Encapsulate column layout translations in the v3_columns class
2017-07-19 12:52:52 +02:00
Avi Kivity
d5aba779d4 Merge "streaming error handling improvement" from Asias
"This series improves the streaming error handling so that when one side of the
streaming failed, it will propagate the error to the other side and the peer
will close the failed session accordingly. This removes the unnecessary wait and
timeout time for the peer to discover the failed session and fail eventually.

Fix it by:

- Use the complete message to notify the peer node that the local session failed
- Listen on the shutdown gossip callback so that we can detect that the peer is
  shut down and close the session with the peer

Fixes #1743"

* tag 'asias/streaming/error_handling_v2' of github.com:cloudius-systems/seastar-dev:
  streaming: Listen on shutdown gossip callback
  gms: Add is_shutdown helper for endpoint_state class
  streaming: Send complete message with failed flag when session is failed
  streaming: Handle failed flag in complete message
  streaming: Do not fail the session when failed to send complete message
  streaming: Introduce send_failed_complete_message
  streaming: Do not send complete message when session is successful
  streaming: Introduce the failed parameter for complete message
  streaming: Remove unused session_failed function
  streaming: Less verbose in logging
  streaming: Better stats
2017-07-19 11:18:09 +03:00
Amos Kong
2bdcad5bc3 scylla_raid_setup: fix syntax error
/usr/lib/scylla/scylla_raid_setup: line 132: syntax error
near unexpected token `fi'

Fixes #2610

Signed-off-by: Amos Kong <amos@scylladb.com>
Message-Id: <af3a5bc77c5ba2b49a8f48a5aaa19afffb787886.1500430021.git.amos@scylladb.com>
2017-07-19 11:10:29 +03:00
Duarte Nunes
ab72132cb1 view_schema_test: Retry failed queries
Due to the asynchronous nature of view update propagation, results
might still be absent from views when we query them. To be able to
deterministically assert on view rows, this patch retries a query a
bounded number of times until it succeeds.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170718212646.2958-1-duarte@scylladb.com>
2017-07-19 09:59:44 +02:00
Duarte Nunes
115ff1095e db/view: Use view schema for view pk operations
Instead of base schema.

Fixes #2504

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170718190703.12972-1-duarte@scylladb.com>
2017-07-19 09:59:34 +02:00
Tomasz Grabiec
a9237c1666 schema: Revert back to the 1.7 layout of static compact tables in memory
We are using C* 3.x compatible layout in schema tables but want to
keep using the 1.7 layout in memory for compatibility during rolling
upgrade. This patch switches the schema and schema_builder classes
back to the old layout. Translation of layout happens when converting
to/from schema mutations.

Notable changes:

 1) Includes a revert of commit 6260f31e08
    "thrift: Update CQL mapping of static CFs".

 2) Brings back the "default_validation_class" schema attribute. In v3
    it can be dervied from column definitions, but in v2 it can't, so
    we have to store it.

 3) legacy_schema_migrator and schema_builder don't have to do
    conversions to v3, this is now handled by the v3_columns
    class. schema_builder works with the same layout as schema, that
    is v2.

 4) Includes a revert of commit 66991a7ccb
    "v3 schema test fixes"

Fixes #2555.
2017-07-19 09:52:15 +02:00
Tomasz Grabiec
dc2dc056a4 schema: Use v3 column layout when converting to/from schema mutations 2017-07-19 09:52:15 +02:00
Tomasz Grabiec
dc463ef644 schema: Encapsulate column layout translations in the v3_columns class 2017-07-19 09:52:15 +02:00
Avi Kivity
bfae5c7bac Merge "Time window compaction strategy support" from Raphael
"Time window strategy was introduced to address several limitations of
date tiered strategy. In addition, its options are much easier to reason
about, basically just window size and window unit.
TWCS will work to keep only one sstable in each window. So the only real
optimization needed is to align partition key to the window.
Size tiered strategy is used to reduce write amplification when compacting
the incoming window.

For more details: https://issues.apache.org/jira/browse/CASSANDRA-9666

Fixes #1432."

* 'twcs_v2' of github.com:raphaelsc/scylla:
  tests: add tests for time window compaction strategy
  compaction: wire up time window compaction strategy
  compaction/twcs: override default values with options in schema
  sstables: implement time window compaction strategy
  sstables: import TimeWindowCompactionStrategy.java
2017-07-19 10:22:53 +03:00
Duarte Nunes
3bfcf47cc6 types: Implement hash() for collections
This patch provides a rather trivial implementation of hash() for
collection types.

It is needed for view building, where we hold mutations in a map
indexed by partition keys (and frozen collection types can be part of
the key).

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170718192107.13746-1-duarte@scylladb.com>
2017-07-19 09:52:56 +03:00
Raphael S. Carvalho
c55c63f213 tests: add tests for time window compaction strategy
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-19 02:58:37 -03:00
Raphael S. Carvalho
7ecedac222 compaction: wire up time window compaction strategy
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-19 02:58:37 -03:00
Raphael S. Carvalho
01886c23a8 compaction/twcs: override default values with options in schema
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-19 02:58:37 -03:00
Raphael S. Carvalho
206d30c52a sstables: implement time window compaction strategy
For more details, https://issues.apache.org/jira/browse/CASSANDRA-9666

Fixes #1432.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-19 02:58:35 -03:00
Glauber Costa
c9a529ebee simple controller for memtable/streaming writer shares.
This patch introduces a simple controller that will adjust memtables CPU
shares, trying to keep it around the soft limit: if we start going below
it means we're too fast (unless we are idle) and shares are adjusted
downwards. If we start going above it means we're too slow and shares
are adjusted upwards.

I have tested this extensively in a single-CPU setup with various
CPU-bound workloads while tracking virtual dirty and the results are
good, with virtual dirty fluctuating only slightly, somewhere within the
desired range.

Exceptions to this are:
1) when the load is very light - the idle system goes faster, and that's
   ok
2) when the load is very high - as foreground requests dominate we can't
   flush fast enough and hit the hard limit. However, in such scenarios
   the memtable shares do hit its maximum, and the results are no worse
   than they are right now and this will only be fixed by CPU-limiting the
   actual requests.

This feature can be disabled with a config option - that is scheduled to
go away as we acquire more confidence in this. When the feature is
disabled, all background writers (streaming, compaction, memtables) will
share the same scheduling group, with static quotas.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-07-18 23:35:47 -04:00
Glauber Costa
4f01ec0910 restrict background writers to 50 % of CPU.
In scylla, we have foreground processes, which are latency sensitive and
need to be responded to as fast as possible in order to maintain good
latency profiles, and background process, which are less so.

The most important background processes we have during normal write
workload operations are memtable writes and sstable compactions. Those
processes are quite CPU-intensive, and left unchecked will easily
dominate the CPU. Lower values of task-quota usually help, as it will
force those processes to preempt more, but aren't enough to guarantee
good isolation. We have seen boxes with good NVMe storage having their
throughput reduced to less than half of the original baseline in a short
dive down for the duration of a compaction.

In the long run, our goal is to leverage the CPU scheduler to make sure
that those processes are balanced with respect to all the others.
However, the current state of affairs is causing grievances as this very
moment. Thankfully, those processes live in a seastar::thread, that
ships with its own rudimentary bandwidth control mechanism: the
scheduling group.

The goal of this patch is to wrap background processes together in a
scheduling group, and assign to such group 50 % of our CPU power; the
remainder being left to foreground processes.

While we pride ourselves in dynamically adjusting things to the
workload, we won't be able to do this properly before the CPU scheduler
lands - and let's face it, leaving background processes to run wild is not
adaptive either. Every workload would benefit most from a different
value for such shares, but 50% is as fair as it gets if we really need
static partitioning in the mean time.

As a defense against unforeseen consequences, we'll leave the actual
value as an option, but will do our best to hide it - as this is not a
tunable that we want to be part of a normal Scylla setup. The most
convenient place for this tunable is still db::config, so we can easily
pass it down to the database layer - but we will not document it in the
yaml, and will clearly note in the help string that it is not supposed
to be tuned.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2017-07-18 23:35:33 -04:00
Asias He
d6cebd1341 streaming: Listen on shutdown gossip callback
When a node shutdown itself, it will send a shutdown status to peer
nodes. When peer nodes receives the shtudown status update, they are
supposed to close all the sessions with that node becasue the node is
shutdown, no need to wait and timeout, then fail the session.

This change can speed up the closing of sessions.
2017-07-19 10:11:06 +08:00
Asias He
ed7e6974d5 gms: Add is_shutdown helper for endpoint_state class
It will be used by streaming manager to check if a node is in shutdown
status.
2017-07-19 10:11:05 +08:00
Asias He
aa87429e67 streaming: Send complete message with failed flag when session is failed
To notify peer node the session is failed.
2017-07-19 10:11:05 +08:00
Asias He
03b838705c streaming: Handle failed flag in complete message
Fail the current session if the failed flag is on in the complete
message handler.
2017-07-19 10:11:05 +08:00
Asias He
12d18cfab4 streaming: Do not fail the session when failed to send complete message
Since the complete message is not mandatory, there is no point in failing
the session if sending the complete message fails.
2017-07-19 10:11:04 +08:00
Asias He
ca5248cd58 streaming: Introduce send_failed_complete_message
Currently, send_complete_message is not used. We will use it shortly in
case the local session fails. Send a complete message with the failed
flag to notify the peer node that the session failed so that the peer can
close the session. This can speed up the closing of a failed session.

Also rename it to send_failed_complete_message.
2017-07-19 10:11:04 +08:00
Raphael S. Carvalho
2686e84792 sstables: import TimeWindowCompactionStrategy.java
It will later be converted to C++. Imported from the latest
scylla-tools-java repository. Checked that it doesn't lack anything.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-18 18:26:17 -03:00
Takuya ASADA
49b01e764a dist/common/scripts/scylla_prepare: stop running hugeadm when it's posix mode
A user reported that scylla-server.service is not able to run on their cloud instance because of hugeadm
(hugeadm says the kernel does not support huge pages).
We don't need it for posix mode, so only run it in dpdk mode.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1500367219-8728-1-git-send-email-syuu@scylladb.com>
2017-07-18 16:39:16 +03:00
Tomasz Grabiec
63caa58b70 Merge "Drop mutations that raced with truncate" from Duarte
Instead of retrying, just drop mutations that raced with a truncate.

* git@github.com:duarten/scylla.git truncate-reorder/v1:
  database: Rename replay_position_reordered_exception
  database: Drop mutations that raced with truncate
2017-07-18 12:53:36 +02:00
Asias He
f21cb75cdb streaming: Do not send complete message when session is successful
The complete_message is not needed and the handler of this rpc message
does nothing but return a ready future. The patch to remove it did not
make it into the Scylla 1.0 release, so it was left there.
2017-07-18 15:29:42 +08:00
Duarte Nunes
d9fa3bf322 thrift: Fail when mixed CFs are detected
Fixes #2588

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170717222612.7429-1-duarte@scylladb.com>
2017-07-18 10:21:33 +03:00
Asias He
0ba4e73068 streaming: Introduce the failed parameter for complete message
Use this flag to notify the peer that the session has failed so that the
peer can close the failed session more quickly.

The flag is passed as an rpc::optional so it is compatible with old
versions of the verb.
2017-07-18 11:24:31 +08:00
Asias He
7599c1524d streaming: Remove unused session_failed function
It is never used. Get rid of it.
2017-07-18 11:22:09 +08:00
Asias He
caad7ced23 streaming: Less verbose in logging
Now we will have a large number of small streaming sessions. Demote the
less important log messages to debug level.
2017-07-18 11:17:09 +08:00
Asias He
d0dffd7346 streaming: Better stats
Log the number of bytes streamed and a streaming bandwidth summary on the
same line as the session complete message.
2017-07-18 11:17:09 +08:00
Avi Kivity
64ef7aa5e4 Merge seastar upstream
* seastar 867b7c7...a14d667 (1):
  > tls: remove unneeded lambda captures
2017-07-17 19:30:59 +03:00
Duarte Nunes
6b464da67d schema: Get rid of regular_columns_by_name
They are unused.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170717103635.6473-2-duarte@scylladb.com>
2017-07-17 12:52:41 +02:00
Asias He
adc5f0bd21 gossip: Implement the missing fd_max_interval_ms and fd_initial_value_ms option
It is useful for larger clusters with larger gossip message latency. By
default fd_max_interval_ms is 2 seconds, which means the
failure_detector will ignore any gossip message update interval larger
than 2 seconds. However, in a larger cluster, the gossip message update
interval can be larger than 2 seconds.

Fixes #2603.

Message-Id: <49b387955fbf439e49f22e109723d3a19d11a1b9.1500278434.git.asias@scylladb.com>
2017-07-17 13:29:16 +03:00
Duarte Nunes
13caccf1cf Merge 'Fixes around migration to v3 schema tables' from Tomasz
branch 'tgrabiec/schema-migration-fixes' of github.com:scylladb/seastar-dev:
  schema: Use proper name comparator
  legacy_schema_migrator: Properly migrate non-UTF8 named columns
  schema_tables: Store column_name in text form
  legacy_schema_migrator: Migrate columns like Cassandra
  schema_builder: Add factory method for default_names
  legacy_schema_migrator: Simplify logic
  thrift: Don't set regular_column_name_type
  schema: Use proper column name type for static columns
  schema: Fix column_name_type() for static compact tables
  schema: Introduce clustering_column_at()
  thrift: Reuse cell_comparator::to_sstring() for obtaining comparator type
  partition_slice_builder: Use proper column's type instead of regular_column_name_type()
2017-07-17 11:16:52 +02:00
Tomasz Grabiec
34dae0588c schema: Use proper name comparator
This replaces column_definition::name_comparator, which incorrectly
assumes that names are always utf8, with name_compare moved from
schema::rebuild() and unifies usages.
2017-07-17 09:40:06 +02:00
Tomasz Grabiec
7e54290d38 legacy_schema_migrator: Properly migrate non-UTF8 named columns
Currently the migrator assumes all columns are utf8-named, which
doesn't have to be the case for static compact tables.

Refs #2597.

Due to #2573, we can assume that Scylla wasn't used with non-utf8
column names, and that old names are always in textual form.
2017-07-17 09:40:06 +02:00
Tomasz Grabiec
60a76efd37 schema_tables: Store column_name in text form
That's how it is stored by Cassandra.

Refs #2597.
2017-07-17 09:40:06 +02:00
Tomasz Grabiec
61229a7536 legacy_schema_migrator: Migrate columns like Cassandra
This fixes generation of synthetic columns for static compact tables.
Current code always generates synthetic clustering column with utf8
type and synthetic regular column with bytes type (in schema_builder).
That's fine when creating a new CQL table, but not when migrating
existing tables created via thrift API.

Fixes #2584.

This also migrates empty compact value columns like Cassandra
does. Such columns are present in compact tables without regular
columns, e.g.:

  create table test (k int, ck int, primary key (k, ck)) with compact storage;

They should be migrated to a synthetic regular column with
empty_type type and a non-empty name.
2017-07-17 09:40:06 +02:00
Tomasz Grabiec
49e21b3b8e schema_builder: Add factory method for default_names 2017-07-17 09:40:06 +02:00
Tomasz Grabiec
6dc299c27a legacy_schema_migrator: Simplify logic
The expression "is_dense.value_or(true)" is always true inside the if,
so drop it.

This allows us to drop the temporary calculated_is_dense.

We can also get rid of one of the if branches by extracting
builder.set_is_dense() outside.
2017-07-17 09:40:06 +02:00
Tomasz Grabiec
3987e9be31 thrift: Don't set regular_column_name_type
Regular columns are always utf8 after f5dae826ce.
2017-07-17 09:40:06 +02:00
Tomasz Grabiec
b919c50d21 schema: Use proper column name type for static columns
After f5dae826ce, static columns do not
always have utf8 column names. For static compact tables it's
determined by the cell name comparator type, which is equal to the
type of the synthetic clustering column.

This caused various errors with static thrift tables with a non-utf8
comparator.
2017-07-17 09:40:06 +02:00
Tomasz Grabiec
f685f7f8a1 schema: Fix column_name_type() for static compact tables
Introduced in f5dae826ce.
2017-07-17 09:40:06 +02:00
Tomasz Grabiec
84536a4a75 schema: Introduce clustering_column_at() 2017-07-17 09:40:06 +02:00
Tomasz Grabiec
9ed958a1eb thrift: Reuse cell_comparator::to_sstring() for obtaining comparator type 2017-07-17 09:40:06 +02:00
Tomasz Grabiec
9768036d61 partition_slice_builder: Use proper column's type instead of regular_column_name_type() 2017-07-17 09:40:06 +02:00
Avi Kivity
c51001b598 Merge seastar upstream
* seastar b812cee...867b7c7 (1):
  > rpc: start server's send loop only after protocol negotiation

Fixes #2600.
2017-07-16 19:36:31 +03:00
Avi Kivity
a5bd854019 Merge seastar upstream
* seastar 844bcfb...b812cee (1):
  > Update dpdk submodule

Fix #2595 (again).
2017-07-16 17:00:48 +03:00
Avi Kivity
d9c64ef737 tests: move tmpdir to /tmp
Reduces view_schema_test runtime to 5 seconds, from 53 seconds on an NVMe disk
with write-back cache, and forever on a spinning disk.
Message-Id: <20170716081653.10018-1-avi@scylladb.com>
2017-07-16 11:55:08 +02:00
Avi Kivity
9116dd91cb tests: copy the sstable with an unknown component to the data directory
We will be creating links to those sstables' files, and those don't work
if the data directory and the test sstable are on different devices.

Copying the files to the same directory fixes the problem.
Message-Id: <20170716090405.14307-1-avi@scylladb.com>
2017-07-16 11:55:00 +02:00
Duarte Nunes
2c711922cc database: Drop mutations that raced with truncate
Mutations that race with a truncate can just be dropped.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-16 00:08:05 +02:00
Duarte Nunes
0825c9c805 database: Rename replay_position_reordered_exception
Rename replay_position_reordered_exception to
mutation_reordered_with_truncate_exception for more precision, since
this is the only situation where this exception can be thrown.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-16 00:08:05 +02:00
Avi Kivity
e87ab54bfc Merge seastar upstream
* seastar ff34c42...844bcfb (1):
  > Update dpdk submodule

Fixes #2595.
2017-07-15 19:17:05 +03:00
Tomasz Grabiec
caa62f7f05 Merge "Fixes for memtable flushing and replay positions" from Duarte
We don't ensure mutations are applied in memory following the order of their
replay positions. A memtable can thus be flushed with replay position rp,
with the new one being at replay position rp', where rp' < rp. This breaks
an intrinsic assumption in the code, which this series addresses.

Fixes #2074

branch memtable-flush/v3 of git@github.com:duarten/scylla.git:
  commitlog: Always flush latest memtable
  column_family: More precise count of switched memtables
  column_family: Fix typo in pending_tasks metric name
  column_family: More precise count of pending flushes
  dirty_memory_manager: Remove unnecessary check from flush_one()
  column_family: Don't rely on flush_queue to guarantee flushes finished
  column_family: Don't bother closing the flush_queue on stop()
  column_family: Stop using flush_queue
  column_family: Remove outdated comment about the flush_queue
  memtable: Stop tracking the highest flushed rp
2017-07-14 11:39:37 +02:00
Avi Kivity
162d9aa85d tests: fix view_schema_test with clang
Clang is happy to create a vector<data_value> from a {}, a {1, 2}, but not a {1}.
No doubt it is correct, but sheesh.

Make the data_value explicit to humor it.
Message-Id: <20170713074315.9857-1-avi@scylladb.com>
2017-07-14 12:24:27 +03:00
Duarte Nunes
b8235f2e88 storage_proxy: Preserve replica order across mutations
In storage_proxy we arrange the mutations sent by the replicas in a
vector of vectors, such that each row corresponds to a partition key
and each column contains the mutation, possibly empty, as sent by a
particular replica.

There is reconciliation-related code that assumes that all the
mutations sent by a particular replica can be found in a single
column, but that isn't guaranteed by the way we initially arrange the
mutations.

This patch fixes this and enforces the expected order.

Fixes #2531
Fixes #2593

Signed-off-by: Gleb Natapov <gleb@scylladb.com>
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170713162014.15343-1-duarte@scylladb.com>
2017-07-14 12:11:22 +03:00
Duarte Nunes
5f24e9a4a5 memtable: Stop tracking the highest flushed rp
Since we no longer enforce that mutations are applied in memory
ordered by their replay_positions, the way the highest_flush_rp is
being tracked is no longer correct.

The invariant it was used to maintain no longer exists, so we can get
rid of it together with the assertion on the highest_flush_rp on
flush().

Fixes #2074

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-13 22:56:06 +02:00
Duarte Nunes
22a53a52a1 column_family: Remove outdated comment about the flush_queue
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-13 22:56:05 +02:00
Duarte Nunes
003941cd95 column_family: Stop using flush_queue
Since commitlog ordering requirements have been relaxed, we now keep
the set of replay_positions seen by a memtable in a set, which we then
use to clean up relevant segments in the commitlog. This means that
the guarantees provided by the flush_queue are no longer necessary.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-13 22:56:00 +02:00
Duarte Nunes
7e6fe5895e column_family: Don't bother closing the flush_queue on stop()
When stopping a column family we issue a flush(), for which we wait.
Since writes are supposed to have stopped coming in, and also new
flush requests, there's no need to call and wait for the flush_queue
to be closed.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-13 22:51:58 +02:00
Duarte Nunes
a1f4536ffb column_family: Don't rely on flush_queue to guarantee flushes finished
We now don't ensure mutations are applied in memory following the
order of their replay positions, so we can't rely on the replay
position to order memtable flushes. So, use a phased_barrier() to
ensure that calling flush() returns a future that completes when all
flushes up to that point have finished.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-13 22:51:58 +02:00
Duarte Nunes
1b320496e2 dirty_memory_manager: Remove unnecessary check from flush_one()
We don't need to check whether a memtable is empty in flush_one(), as
that must be checked later, during the actual sealing.

The condition itself is rare and is already checked after the potentially
contended semaphore has been acquired.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-13 22:51:57 +02:00
Duarte Nunes
59bdaed02b column_family: More precise count of pending flushes
This patch ensures we update the count of pending flushes in the same
place as we update the stats across column families, which is more
correct since it only accounts for actual flushes and not those of
empty memtables or that have been coalesced together.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-13 22:51:25 +02:00
Duarte Nunes
3e27c335a9 column_family: Fix typo in pending_tasks metric name
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-13 22:51:25 +02:00
Duarte Nunes
a11724c6e1 column_family: More precise count of switched memtables
The memtable_switch_count metric is supposed to count the number of
times a flush has resulted in the memtable being switched out, but we
were incrementing the count regardless of whether we tried to flush an
empty memtable or two or more flushes were coalesced into one. This
patch fixes this by moving the metric to where the memtable is
actually switched.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-13 22:51:25 +02:00
Duarte Nunes
bca1b19ce9 commitlog: Always flush latest memtable
We now don't ensure mutations are applied in memory following the
order of their replay positions, so we can't rely on the replay
position to order memtable flushes. When flushing commit log segments,
ensure we flush the latest memtable.

Refs #2074

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
2017-07-13 22:51:25 +02:00
Paweł Dziepak
ec689b2fe1 Merge "utils: minor fixes in the loading_cache class" from Vlad
"This series aims to fix the "serving invalid (old) values" issue in the
loading_cache (issue #2590) by arming the timer with a period that equals
min(expire, refresh).

We are still trying to optimize the main case where 'expire' is
significantly longer than 'refresh' period.

We don't want to add any additional logic in the fast path and this
series gives the immediate solution for the issue above while not adding
any additional CPU cycles to the fast path."

* 'loading_cache_short_expired-v2' of https://github.com/vladzcloudius/scylla:
  utils::loading_cache: arm the timer with a period equal to min(_expire, _update)
  utils::loading_cache: make a timer use a loading_cache_clock_type clock as a source
2017-07-13 16:58:53 +01:00
Vlad Zolotarov
45e23d8090 db::config: fix the permissions cache related parameters description
Make the descriptions of permissions_validity_in_ms, permissions_update_interval_in_ms
and permissions_cache_max_entries more readable and more related to what they really
do.

Mention the non-zero value requirement for the permissions_update_interval_in_ms and
the permissions_cache_max_entries when the permissions cache is enabled.

Adjust the parameters description in the scylla.yaml too.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1499957053-31792-1-git-send-email-vladz@scylladb.com>
2017-07-13 16:00:40 +01:00
Vlad Zolotarov
76ea74f3fd utils::loading_cache: arm the timer with a period equal to min(_expire, _update)
Arm the timer with a period that is not greater than either the permissions_validity_in_ms
or the permissions_update_interval_in_ms in order to ensure that we are not stuck with
the values older than permissions_validity_in_ms.

Fixes #2590

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-07-13 10:48:59 -04:00
Vlad Zolotarov
121e3c7b8f utils::loading_cache: make a timer use a loading_cache_clock_type clock as a source
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
2017-07-13 10:42:12 -04:00
Tomasz Grabiec
30ec4af949 legacy_schema_migrator: Fix calculation of is_dense
The current algorithm marked tables with regular columns not named
"value" as not dense, which doesn't have to be the case. It can be
either way.

It should be enough to look at clustering components. If there is a
clustering key, then table is dense if and only if all comparator
components belong to the clustering key.

If there is no clustering key, then if there are any regular columns
we're sure it's not dense.

Fixes #2587.

Message-Id: <1499877777-7083-1-git-send-email-tgrabiec@scylladb.com>
2017-07-13 17:28:09 +03:00
Jesse Haber-Kucharsky
8fa47b74e8 cql: Add definition of underlying type for durations
Cassandra 3.10 added the `duration` type [1], intended to manipulate date-time
values with offsets (for example, `now() - 2y3h`).

The full implementation of the `duration` type in Scylla requires support
for version 5 of the binary protocol, which is not yet available.

In the meantime, this patch adds the implementation of the underlying type
for the eventual `duration` type. Also included are the test suite ported
from the reference implementation and additional tests.

Related to #2240.

[1] https://issues.apache.org/jira/browse/CASSANDRA-11873

Signed-off-by: Jesse Haber-Kucharsky <jhaberku@scylladb.com>
Message-Id: <b1e481da103efee82106bf31f261c5a1f4f8d9ca.1499885803.git.jhaberku@scylladb.com>
2017-07-13 17:26:00 +03:00
Tomasz Grabiec
54953c8d27 gdb: Fix "scylla columnfamilies" command
Broken in 0e4d5bc2f3.

Message-Id: <1499951956-26206-1-git-send-email-tgrabiec@scylladb.com>
2017-07-13 16:33:32 +03:00
Amnon Heiman
45b3e8cd11 query_options: Allows creating query_options from query_options
A query_options object cannot be changed after it is created. For
internal uses, like internal query paging, it is necessary to create a
new object based on some of the data from an existing one together with
a new paging state.

This patch adds a constructor from a unique_ptr and a paging state.

Using a unique_ptr here behaves similarly to a move constructor.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2017-07-13 14:02:11 +03:00
Duarte Nunes
3df6777b9b database: Load views after loading tables
Since base tables no longer look for their views, we need to parse
base tables first so that when we add a view we can fetch and connect
it to its base table.

When announcing view table mutations to other nodes we always include
the base table mutations, so there's no need to expect a view being
added before its base table.

Found out while testing view building.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170712172115.2960-1-duarte@scylladb.com>
2017-07-13 11:14:02 +02:00
Avi Kivity
4704a78332 tests: remove bad constexpr in sstable_datafile_test
std::ceil() is not constexpr.

Found by clang.
2017-07-12 17:14:13 +03:00
Avi Kivity
67a5e10218 Merge seastar upstream
* seastar a2be7a4...ff34c42 (3):
  > tls: Wrap all IO in semaphore (Fixes #2575)
  > tests/lowres_clock_test.cc: Declare helper static
  > tests/lowres_clock_test.cc: fix compilation error for older GCC
2017-07-12 10:19:55 +03:00
Avi Kivity
a397889c81 Merge "Preserve table schema digest on schema tables migration" from Tomasz
"Currently new nodes calculate digests based on v3 schema mutations,
which are very different from v2 mutations. As a result they will
use schemas with a different table_schema_version than the old nodes.
The old nodes will not recognize the version and will try to request
its definition. That will fail, because old nodes don't understand
v3 schema mutations.

To fix this problem, let's preserve the digests during migration,
so that they're the same on new and old nodes. This will allow
requests to proceed as usual.

This does not solve the problem of schema being changed during
the rolling upgrade. This is not allowed, as it would bring the
same problem back.

Fixes #2549."

* tag 'tgrabiec/use-consistent-schema-table-digests-v2' of github.com:cloudius-systems/seastar-dev:
  tests: Add test for concurrent column addition
  legacy_schema_migrator: Set digest to one compatible with the old nodes
  schema_tables: Persist table_schema_version
  schema_tables: Introduce system_schema.scylla_tables
  schema_tables: Simplify read_table_mutations()
  schema_tables: Resurrect v2 read_table_mutations()
  system_keyspace: Forward-declare legacy schemas
  legacy_schema_migrator: Take storage_proxy as dependency
2017-07-11 17:22:42 +03:00
Raphael S. Carvalho
7dbfebb7dc lcs: remove conditional limit for partial sort
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170711140241.11023-2-raphaelsc@scylladb.com>
2017-07-11 17:18:32 +03:00
Raphael S. Carvalho
ebb5dafef0 lcs: remove useless filter for demotion procedure
There's no way an sstable from a level higher than N+1 will be in the
set of candidates that can be either level N or level N+1.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170711140241.11023-1-raphaelsc@scylladb.com>
2017-07-11 17:18:31 +03:00
Botond Dénes
33bc62a9cf Fix crash in the out-of order restrictions error msg composition
Use the name of the existing preceding column with a restriction
(last_column) instead of assuming that the column right after the
current column already has restrictions.
This will yield an error message that is different from that of
Cassandra, albeit still a correct one.

Fixes #2421

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <40335768a2c8bd6c911b881c27e9ea55745c442e.1499781685.git.bdenes@scylladb.com>
2017-07-11 17:15:33 +03:00
Gleb Natapov
f88723e739 storage_proxy: pass pending_endpoints by reference instead of by value
This makes the lifetime of the dead_endpoints object clearer, and move() also has its price.

Message-Id: <20170710084549.GX2324@scylladb.com>
2017-07-11 16:52:21 +03:00
Gleb Natapov
739dd878e3 consistency_level: report less live endpoints in Unavailable exception if there are pending nodes
DowngradingConsistencyRetryPolicy uses live replicas count from
Unavailable exception to adjust CL for retry, but when there are pending
nodes CL is increased internally by a coordinator and that may prevent
retried query from succeeding. Adjust live replica count in case of
pending node presence so that retried query will be able to proceed.

Fixes #2535

Message-Id: <20170710085238.GY2324@scylladb.com>
2017-07-11 16:51:56 +03:00
Avi Kivity
3b7fde18cf Merge "improvements for leveled strategy manifest" from Raphael
"most of changes are to improve maintainability of the strategy but
the ones that are introduced by the following patches:
lcs: do not check if level 0 can be promoted twice
lcs: remove quadratic behavior from L0 compaction
lcs: partially sort candidates that will be trimmed
lcs: only demote sstable from level higher than target one"

* 'lcs_improvements_2' of github.com:raphaelsc/scylla:
  lcs: only demote sstable from level higher than target one
  lcs: improve indentation for get_overlapping_starved_sstables
  lcs: improve indentation for get_compaction_candidates
  lcs: partially sort candidates that will be trimmed
  lcs: remove quadratic behavior from L0 compaction
  lcs: introduce private interface
  lcs: make some member functions static
  lcs: make some functions const qualified
  lcs: remove add method
  lcs: extract code for higher levels compaction from get_candidates_for
  lcs: simplify code to get candidates for higher levels
  lcs: extract round-robin heuristic for even distribution of keys into function
  lcs: update outdated comments for level 0 compaction
  lcs: improve worth_promoting_L0_candidates interface
  lcs: do not check if level 0 can be promoted twice
  lcs: extract code for level 0 compaction from get_candidates_for
2017-07-11 16:38:50 +03:00
Paweł Dziepak
5aa523aaf9 transport: send correct type id for counter columns
CQL reply may contain metadata that describes columns present in the
response including the information about their type.

However, Scylla incorrectly reports counter types as bigint. The
serialised format of counters and bigint is exactly the same, which
could explain why the problem hasn't been noticed earlier but it is a
bug nevertheless.

Fixes #2569.
Message-Id: <20170711130520.27603-1-pdziepak@scylladb.com>
2017-07-11 16:21:49 +03:00
Tomasz Grabiec
6d53cb7ab5 tests: Add test for concurrent column addition 2017-07-11 14:52:23 +02:00
Tomasz Grabiec
f5909ec515 legacy_schema_migrator: Set digest to one compatible with the old nodes
Calculate and set digest using v2 mutations so that digests are the
same before and after migration. This is needed so that no schema
definition exchange is required during rolling upgrade.

Fixes #2549.
2017-07-11 14:52:23 +02:00
Tomasz Grabiec
5b69d99bf8 schema_tables: Persist table_schema_version
When migrating schema tables from v2 to v3, mutations underlying
table schema will change, and so will their digest. However, we want
the digest to be the same on new nodes as on the old nodes, because
schema exchange is not possible between the two versions: if the digests
differed, the nodes would have to request schema definitions from each other.

The solution is to make the digest persistable, so that it sticks to
given table schema, surviving both migration and node restarts. On
migration from v2, the digest will be calculated from v2 mutations, so
it will be the same on new and old nodes.
2017-07-11 14:52:23 +02:00
Tomasz Grabiec
cdf5b67522 schema_tables: Introduce system_schema.scylla_tables
It will be used to store Scylla-specific table metadata. We cannot
store it in the standard "tables" table for compatibility reasons:
Cassandra will fail to read the schema if it encounters columns it is
not expecting.
2017-07-11 14:52:23 +02:00
Tomasz Grabiec
cdcdf4772f schema_tables: Simplify read_table_mutations() 2017-07-11 14:52:23 +02:00
Tomasz Grabiec
6e62bc77f1 schema_tables: Resurrect v2 read_table_mutations() 2017-07-11 14:52:23 +02:00
Tomasz Grabiec
4b5818a404 system_keyspace: Forward-declare legacy schemas 2017-07-11 14:52:23 +02:00
Tomasz Grabiec
8624edc0fa legacy_schema_migrator: Take storage_proxy as dependency
Will be needed to query for mutations.
2017-07-11 14:52:23 +02:00
Raphael S. Carvalho
6aa2e5be17 lcs: only demote sstable from level higher than target one
If we are compacting level 1 into level 2, we only want to demote
an sstable from level 3 or higher.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-11 09:35:42 -03:00
Raphael S. Carvalho
53b72b473e lcs: improve indentation for get_overlapping_starved_sstables
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-11 09:35:40 -03:00
Raphael S. Carvalho
3639b48d7b lcs: improve indentation for get_compaction_candidates
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-11 09:35:38 -03:00
Raphael S. Carvalho
5a8b8a6ccb lcs: partially sort candidates that will be trimmed
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-11 09:35:37 -03:00
Raphael S. Carvalho
8334086441 lcs: remove quadratic behavior from L0 compaction
L0 compaction triggers quadratic behavior when many newly created
sstables are needed for promotion due to their size being low relative
to the max sstable size parameter. So until L0 is worth promoting,
the strategy will compact every new sstable with all the existing
ones in L0. To fix it, let's do STCS on level 0 until it becomes
worth promoting.

Fixes #2432.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-11 09:35:35 -03:00
Raphael S. Carvalho
80f1dca328 lcs: introduce private interface
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-11 09:35:33 -03:00
Raphael S. Carvalho
bc71f97116 lcs: make some member functions static
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-11 09:35:32 -03:00
Raphael S. Carvalho
f4b733efe4 lcs: make some functions const qualified
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-11 09:35:28 -03:00
Raphael S. Carvalho
ede0ee16b2 lcs: remove add method
Its code can be inlined because no one besides create() calls it.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-11 09:35:26 -03:00
Raphael S. Carvalho
00ef528e5b lcs: extract code for higher levels compaction from get_candidates_for
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-11 09:35:25 -03:00
Raphael S. Carvalho
a46b73c401 lcs: simplify code to get candidates for higher levels
Get rid of the unneeded loop for dealing with suspect sstables and of
std::advance, because the vector allows random access.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-11 09:35:19 -03:00
Raphael S. Carvalho
e954af0f0f lcs: extract round-robin heuristic for even distribution of keys into function
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-11 09:35:15 -03:00
Raphael S. Carvalho
3c0028d921 lcs: update outdated comments for level 0 compaction
Some comments are no longer relevant, especially the ones that
talk about dealing with busy sstables due to parallel compaction,
which we don't do for lcs.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-11 09:35:07 -03:00
Raphael S. Carvalho
62607ba36a lcs: improve worth_promoting_L0_candidates interface
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-11 09:35:00 -03:00
Raphael S. Carvalho
c1e42f6528 lcs: do not check if level 0 can be promoted twice
The can_promote flag will be used to carry info about whether or not
level 0 can be promoted. That avoids an extra iteration over the higher
levels, which can contain tens of thousands of sstables.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-11 09:34:49 -03:00
Raphael S. Carvalho
887aab4ae7 lcs: extract code for level 0 compaction from get_candidates_for
I will split code for higher levels compaction into functions first
before putting it into its own function too.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-11 09:34:41 -03:00
Pekka Enberg
ed3c62704e transport/server: Kill unused functions
Message-Id: <1499773755-27920-1-git-send-email-penberg@scylladb.com>
2017-07-11 14:57:54 +03:00
Glauber Costa
780a6e4d2e change task quota's default
The default of 2ms is somewhat arbitrary. Now that we have a lot more
mileage deploying Scylla applications in production, it sounds not
only arbitrary, but high.

In particular, it is really hard to achieve 1ms latencies in the face of
CPU-heavy workloads with it.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <1499354495-27173-1-git-send-email-glauber@scylladb.com>
2017-07-11 13:50:39 +03:00
Avi Kivity
7147808797 Merge seastar upstream
* seastar 89cc97c...a2be7a4 (3):
  > configure.py: verifies boost version
  > pkg-config: Eliminate spaces in include path arguments
  > allow applications to override task-quota-ms
2017-07-11 13:50:06 +03:00
Botond Dénes
f18f724f1c Generate an error when CONTAINS is used on a non-collection column
Fixes #2255

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <517bb6268ac213aed9a1def231614c2e88f77c9f.1499764183.git.bdenes@scylladb.com>
2017-07-11 11:30:49 +02:00
Tomasz Grabiec
310d2a54d2 legacy_schema_migrator: Use separate joinpoint instance for each table
Otherwise we may deadlock, as explained in commit 5e8f0efc8:

Table drop starts with creating a snapshot on all shards. All shards
must use the same snapshot timestamp which, among other things, is
part of the snapshot name. The timestamp is generated using supplied
timestamp generating function (joinpoint object). The joinpoint object
will wait for all shards to arrive and then generate and return the
timestamp.

However, we drop tables in parallel, using the same joinpoint
instance. So joinpoint may be contacted by snapshotting shards of
tables A and B concurrently, generating timestamp t1 for some shards
of table A and some shards of table B. Later the remaining shards of
table A will get a different timestamp. As a result, different shards
may use different snapshot names for the same table. The snapshot
creation will never complete because the sealing fiber waits for all
shards to signal it, on the same name.
Message-Id: <1499762663-21967-1-git-send-email-tgrabiec@scylladb.com>
2017-07-11 11:21:45 +02:00
Avi Kivity
7b4412c3ce Revert "Merge "improvements for leveled strategy manifest" from Raphael"
This reverts commit 43a3e718e6, reversing
changes made to 3813e94b0a. It contains some
unrelated commits.
2017-07-11 11:12:53 +03:00
Avi Kivity
43a3e718e6 Merge "improvements for leveled strategy manifest" from Raphael
"most of the changes are to improve maintainability of the strategy, except for
the ones that are introduced by the following patches:
lcs: do not check if level 0 can be promoted twice
lcs: remove quadratic behavior from L0 compaction
lcs: partially sort candidates that will be trimmed
lcs: only demote sstable from level higher than target one"

* 'lcs_improvements' of github.com:raphaelsc/scylla: (21 commits)
  lcs: only demote sstable from level higher than target one
  lcs: improve indentation for get_overlapping_starved_sstables
  lcs: improve indentation for get_compaction_candidates
  lcs: partially sort candidates that will be trimmed
  lcs: remove quadratic behavior from L0 compaction
  lcs: introduce private interface
  lcs: make some member functions static
  lcs: make some functions const qualified
  lcs: remove add method
  lcs: extract code for higher levels compaction from get_candidates_for
  lcs: simplify code to get candidates for higher levels
  lcs: extract round-robin heuristic for even distribution of keys into function
  lcs: update outdated comments for level 0 compaction
  lcs: improve worth_promoting_L0_candidates interface
  lcs: do not check if level 0 can be promoted twice
  lcs: extract code for level 0 compaction from get_candidates_for
  dist/offline_installer: add --skip-setup option to offline installer
  dist/offline_installer/debian: install python-minimal package before installing scylla deps
  migration_manager: Give empty response to schema pulls from incompatible nodes
  migration_manager: Don't pull schema from incompatible nodes
  ...
2017-07-11 11:08:12 +03:00
Botond Dénes
3813e94b0a Add Cql.tokens and KDevelop project files to .gitignore
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <ae4935d2ac0c92287022f677c3e66757c0861e13.1499753032.git.bdenes@scylladb.com>
2017-07-11 10:21:00 +03:00
Botond Dénes
61c5c2a175 transport: Fix accept typo in debug log message
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <d2f9269f25ace6579a6fbe6b99f4da60a05beac8.1499753306.git.bdenes@scylladb.com>
2017-07-11 09:16:35 +03:00
Raphael S. Carvalho
8b9686e621 lcs: only demote sstable from level higher than target one
If we are compacting level 1 into level 2, we only want to demote
an sstable from level 3 or higher.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-10 16:10:42 -03:00
Raphael S. Carvalho
0d0699e06e lcs: improve indentation for get_overlapping_starved_sstables
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-10 16:01:31 -03:00
Raphael S. Carvalho
cda2b18f83 lcs: improve indentation for get_compaction_candidates
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-10 15:55:43 -03:00
Raphael S. Carvalho
ca1c6fd9ca lcs: partially sort candidates that will be trimmed
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-10 15:45:26 -03:00
Raphael S. Carvalho
28ebe1807f lcs: remove quadratic behavior from L0 compaction
L0 compaction triggers quadratic behavior when many newly created
sstables are needed before promotion because each is small relative
to the max sstable size parameter. So until L0 is worth promoting,
the strategy will compact every new sstable with all the existing
ones in L0. To fix it, let's do STCS on level 0 until it becomes
worth promoting.

Fixes #2432.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-10 15:42:28 -03:00
Raphael S. Carvalho
0392dc5d23 lcs: introduce private interface
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-10 15:13:56 -03:00
Raphael S. Carvalho
dd9c9341be lcs: make some member functions static
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-10 15:13:55 -03:00
Raphael S. Carvalho
408a7f902a lcs: make some functions const qualified
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-10 15:10:47 -03:00
Raphael S. Carvalho
7cba6548e2 lcs: remove add method
Its code can be inlined because no one besides create() calls it

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-10 15:10:47 -03:00
Raphael S. Carvalho
0a9fcc6202 lcs: extract code for higher levels compaction from get_candidates_for
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-10 15:10:47 -03:00
Raphael S. Carvalho
8709365d84 lcs: simplify code to get candidates for higher levels
Get rid of the unneeded loop for dealing with suspect sstables, and
of std::advance, because vector allows random access.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-10 15:10:47 -03:00
Raphael S. Carvalho
1a9fc835a0 lcs: extract round-robin heuristic for even distribution of keys into function
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-10 15:10:40 -03:00
Raphael S. Carvalho
258ed0afbd lcs: update outdated comments for level 0 compaction
Some comments are no longer relevant, especially the ones that
talk about dealing with busy sstables due to parallel compaction,
which we don't do for LCS.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-10 15:10:30 -03:00
Raphael S. Carvalho
97b5cf94d8 lcs: improve worth_promoting_L0_candidates interface
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-10 15:10:30 -03:00
Raphael S. Carvalho
8f418e9864 lcs: do not check if level 0 can be promoted twice
The can_promote flag will be used to carry info about whether or not
level 0 can be promoted. That avoids a full iteration over the higher
levels too, which can contain tens of thousands of sstables.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-10 15:10:30 -03:00
Raphael S. Carvalho
6785d83c02 lcs: extract code for level 0 compaction from get_candidates_for
I will split the code for higher-level compaction into functions
first, before putting it into its own function too.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-10 15:10:30 -03:00
Takuya ASADA
abd2b6bd6f dist/offline_installer: add --skip-setup option to offline installer
To use the offline installer in a non-interactive way, add an option to skip scylla_setup (which runs in interactive mode).

Fixes #2533

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
[ penberg: clean up diff noise ]
Message-Id: <1499387981-11814-1-git-send-email-syuu@scylladb.com>
2017-07-10 15:10:30 -03:00
Takuya ASADA
e1a2be28d2 dist/offline_installer/debian: install python-minimal package before installing scylla deps
To prevent a dependency error, we need to install python-minimal manually.

Fixes #2553

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
[ penberg: clean up diff noise ]
Message-Id: <1499387587-9032-1-git-send-email-syuu@scylladb.com>
2017-07-10 15:10:30 -03:00
Tomasz Grabiec
fa17c2a59b migration_manager: Give empty response to schema pulls from incompatible nodes
The old nodes which are still using v2 schema tables will fail to
apply our response, with error messages complaining about not being
able to locate schema of certain versions (new schema tables). This
change inhibits such errors by responding with an empty mutation list.
2017-07-10 15:10:30 -03:00
Tomasz Grabiec
8e8a26ef1b migration_manager: Don't pull schema from incompatible nodes
Currently it results in scary error messages in the logs about not
being able to find a schema of a given version. It's benign, but may
scare users. In the future, incompatibilities could result in more
subtle errors. Better to inhibit it completely.
2017-07-10 15:10:30 -03:00
Tomasz Grabiec
b2f52454b9 service: Advertise schema tables format version through gossip
Will be needed to inhibit schema exchange on a per-peer basis.
2017-07-10 15:10:30 -03:00
Takuya ASADA
bf49dd8aa1 dist/offline_installer: add --skip-setup option to offline installer
To use the offline installer in a non-interactive way, add an option to skip scylla_setup (which runs in interactive mode).

Fixes #2533

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
[ penberg: clean up diff noise ]
Message-Id: <1499387981-11814-1-git-send-email-syuu@scylladb.com>
2017-07-10 16:02:11 +03:00
Takuya ASADA
1f97e5b3f4 dist/offline_installer/debian: install python-minimal package before installing scylla deps
To prevent a dependency error, we need to install python-minimal manually.

Fixes #2553

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
[ penberg: clean up diff noise ]
Message-Id: <1499387587-9032-1-git-send-email-syuu@scylladb.com>
2017-07-10 16:01:26 +03:00
Avi Kivity
91221e020b Merge "Silence schema pull errors during upgrade from 1.7 to 2.0" from Tomasz
"Old and new nodes will advertise different schema version because
of different format of schema tables. This will result in attempts
to sync the schema by each of the node.

Currently this will result in scary error messages in the logs about
sync failing due to not being able to find a schema of a given version.
It's benign, but may scare users. In the future, incompatibilities
could result in more subtle errors. Better to inhibit it completely."

* 'tgrabiec/fix-schema-pull-errors-during-upgrade' of github.com:cloudius-systems/seastar-dev:
  migration_manager: Give empty response to schema pulls from incompatible nodes
  migration_manager: Don't pull schema from incompatible nodes
  service: Advertise schema tables format version through gossip
2017-07-10 14:04:04 +03:00
Pekka Enberg
8112d7c5c0 idl: Fix frozen_schema version numbers
The IDL changes will appear in 2.0 so fix up the version numbers.

Message-Id: <1499680669-6757-1-git-send-email-penberg@scylladb.com>
2017-07-10 14:02:20 +03:00
Avi Kivity
06b7ec6901 install-dependencies.sh: add snappy 2017-07-10 13:25:57 +03:00
Avi Kivity
7ddd322bce Add install-dependencies.sh
Easier to get started when a script installs all the build dependencies.
Message-Id: <20170710101657.12574-1-avi@scylladb.com>
2017-07-10 12:21:02 +02:00
Botond Dénes
e0d0f9f30c Make the CMakeLists.txt's IDE marker generic
To allow some other IDEs (e.g. KDevelop, QtCreator) to use the cmake
file in a convenient manner. Keep the existing CLIEN_IDE marker to
not break existing workflows.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <5ecf8c0e8a242cc8ebb0d803547bead4dadc38e2.1499667807.git.bdenes@scylladb.com>
2017-07-10 12:21:02 +02:00
Botond Dénes
66cbc45321 Add text(sstring) version of count, max and min functions
Fixes #2459

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <b6abb97f21c0caea8e36c7590b92a12d148195db.1499666251.git.bdenes@scylladb.com>
2017-07-10 09:06:15 +03:00
Tomasz Grabiec
72e01b7fe8 tests: commitlog: Check there are no segments left on disk after clean shutdown
Reproduces #2550.

Message-Id: <1499358825-17855-2-git-send-email-tgrabiec@scylladb.com>
2017-07-09 19:25:27 +03:00
Tomasz Grabiec
6555a2f50b commitlog: Discard active but unused segments on shutdown
So that they are not left on disk even though we did a clean shutdown.

First part of the fix is to ensure that closed segments are recognized
as not allocating (_closed flag). Not doing this prevents them from
being collected by discard_unused_segments(). Second part is to
actually call discard_unused_segments() on shutdown after all segments
were shut down, so that those whose positions are cleared can be
removed.

Fixes #2550.

Message-Id: <1499358825-17855-1-git-send-email-tgrabiec@scylladb.com>
2017-07-09 19:25:22 +03:00
Tomasz Grabiec
d33d29ad95 legacy_schema_migrator: Drop tables instead of truncate()+remove()
It achieves similar effect, but is safer than non-standard remove()
path. The latter was missing unregistration from compaction manager.

Fixes #2554.

Message-Id: <1499447165-30253-1-git-send-email-tgrabiec@scylladb.com>
2017-07-09 18:36:44 +03:00
Duarte Nunes
136accdbf6 database: Fix typos in metric descriptions
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170709145522.19534-1-duarte@scylladb.com>
2017-07-09 18:35:17 +03:00
Raphael S. Carvalho
7f7758fb6f tests/sstable: make sstable_expired_data_ratio more robust
This change stresses the histogram's ability to return a good
estimate after merging keys, such that it doesn't grow beyond the
size limit.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170708205713.5958-1-raphaelsc@scylladb.com>
2017-07-09 10:33:10 +03:00
Botond Dénes
4f6b2a1ff0 transport: Move "accept failed" message to the debug log
Fixes #2518

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <492ea8a916bb3b2427f6cc16a4f6eadadaa30b10.1499418234.git.bdenes@scylladb.com>
2017-07-08 10:59:03 +03:00
Takuya ASADA
09aeb2aabe dist/debian/pbuilderrc: merge Debian releases
Merge duplicated lines to simplify pbuilderrc.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1499454242-3716-1-git-send-email-syuu@scylladb.com>
2017-07-08 10:56:54 +03:00
Tomasz Grabiec
07ed512060 migration_manager: Give empty response to schema pulls from incompatible nodes
The old nodes which are still using v2 schema tables will fail to
apply our response, with error messages complaining about not being
able to locate schema of certain versions (new schema tables). This
change inhibits such errors by responding with an empty mutation list.
2017-07-07 19:09:57 +02:00
Tomasz Grabiec
5f613d0527 migration_manager: Don't pull schema from incompatible nodes
Currently it results in scary error messages in the logs about not
being able to find a schema of a given version. It's benign, but may
scare users. In the future, incompatibilities could result in more
subtle errors. Better to inhibit it completely.
2017-07-07 19:08:59 +02:00
Tomasz Grabiec
18a9e1762c service: Advertise schema tables format version through gossip
Will be needed to inhibit schema exchange on a per-peer basis.
2017-07-07 19:07:59 +02:00
Piotr Jastrzebski
a4b6cfe8f0 row_cache: use continuity info in single partition queries
If a query requests a single partition that is inside
a range that has already been queried, use the continuity info
and don't go to disk when it's not needed.

Fixes #2244.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <15bb3b5b03225e7402e3862da53b5e06d3f4fa74.1499345295.git.piotr@scylladb.com>
2017-07-07 10:29:19 +02:00
Piotr Jastrzebski
b950c59bbb row_cache: Fix wrong comment on continuity flag
This comment stated exactly the opposite of the truth, which is
very misleading.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <79062a061e22ef4c4add24cbdf723cbfb5cda060.1499345295.git.piotr@scylladb.com>
2017-07-07 10:29:19 +02:00
Piotr Jastrzebski
70f4b23876 row_cache_test: Add test to reproduce issue 2544
This test checks that the cache uses continuity information
for single-partition queries inside a range that has already been
queried.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <2ebd03ff5366e554d520f86da8054e0b9eff4178.1499345295.git.piotr@scylladb.com>
2017-07-07 10:29:19 +02:00
Avi Kivity
ecda97edeb Merge seastar upstream
* seastar c848486...89cc97c (2):
  > future-utils: fix do_for_each exception reporting
  > core/thread: Fix unwind information for seastar threads
2017-07-06 17:28:29 +03:00
Jesse Haber-Kucharsky
4f838a82e2 Add guide for getting started with development ("hacking")
This change adds the start of what will hopefully be a continually evolving and
improving document for helping developers and contributors to get started with
Scylla development.

The first part of the document is general advice and information that is broadly
applicable.

The second part is an opinionated example of a particular work-flow and set of
tools. This is intended to serve as a starting point and inspire contributors to
develop their own work-flow.

The section on branching is marked "TODO" for now, and will be addressed by a
subsequent change.

Signed-off-by: Jesse Haber-Kucharsky <jhaberku@scylladb.com>
Message-Id: <470a542a92aff20d6205fb94b3fb26168735ae6f.1499319310.git.jhaberku@scylladb.com>
2017-07-06 15:59:16 +03:00
Duarte Nunes
3dd0397700 wrapping_range: Fix lvalue transform()
Instead of copying and moving the bound, pass it by reference so the
transformer can decide whether it wants to copy or not. The only
caller so far doesn't want a copy and takes the value by reference,
which would be capturing a temporary value. Caught by the
view_schema_test with gcc7.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170705210255.29669-1-duarte@scylladb.com>
2017-07-06 15:47:49 +03:00
Raphael S. Carvalho
ff50b57761 dist: fix spelling mistakes in dev-mode.conf
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170705202054.4614-1-raphaelsc@scylladb.com>
2017-07-06 15:08:17 +03:00
Botond Dénes
b1082641f9 Make sure keyspace strategy class is stored in qualified form
Even when it's provided in unqualified (short) form.
Fixes #767

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <4379f8864843e64c097d432fd06129ce4025f100.1499322476.git.bdenes@scylladb.com>
2017-07-06 14:50:00 +03:00
Botond Dénes
c4277d6774 cql3: Add K_FROZEN and K_TUPLE to basic_unreserved_keyword
To allow the non-reserved keywords "frozen" and "tuple" to be used as
column names without double-quotes.

Fixes #2507

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <9ae17390662aca90c14ae695c9b4a39531c6cde6.1499329781.git.bdenes@scylladb.com>
2017-07-06 12:25:38 +03:00
Avi Kivity
a6d9cf09a7 build: fix excessive stack usage in CqlParser in debug mode
The state machines generated by antlr allocate many local variables per function.
In release mode, the stack space occupied by the variables is reused, but in debug
build, it is not, due to Address Sanitizer setting -fstack-reuse=none. This causes
a single function to take above 100k of stack space.

Fix by hacking the generated code to use just one variable.

Fixes #2546
Message-Id: <20170704135824.13225-1-avi@scylladb.com>
2017-07-05 23:05:26 +02:00
Duarte Nunes
d583ef6860 thrift/handler: Remove leftover debug artifacts
Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170705161156.2307-1-duarte@scylladb.com>
2017-07-05 19:57:07 +03:00
Takuya ASADA
6d0bd01e0f dist/offline_installer/redhat: enable EPEL repo before try to install makeself
To prevent a yum install error, we need to enable the EPEL repo
before installing makeself.

Fixes #2508

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1499196715-19710-1-git-send-email-syuu@scylladb.com>
2017-07-05 09:50:03 +03:00
Takuya ASADA
71624d7919 dist/common/scripts/scylla_raid_setup: prevent renaming MDRAID device after reboot
On Debian variants, mdadm.conf should be placed in /etc/mdadm instead of /etc.
Also, it seems we need update-initramfs to fix the renaming issue.

Fixes #2502

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1499179912-14125-1-git-send-email-syuu@scylladb.com>
2017-07-04 18:07:20 +03:00
Avi Kivity
b1a0e37fcb Merge "Adjust row cache metrics for row granularity" from Tomasz
* tag 'tgrabiec/row-cache-metrics-v2' of github.com:cloudius-systems/seastar-dev:
  row_cache: Switch _stats.hits/misses to row granularity
  row_cache: Rename num_entries() to partitions() for clarity
  row_cache: Track mispopulations also at row level
  row_cache: Track row insertions
  row_cache: Track row hits and misses
  row_cache: Make mispopulation counter also apply for continuity information
  row_cache: Add partition_ prefix to current counters
  misc_services: Switch to using reads_with[_no]_misses counters
  row_cache: Add metrics for operations on underlying reader
  row_cache: Add reader-related metrics
  row_cache: Remove dead code
2017-07-04 15:20:25 +03:00
Tomasz Grabiec
37d2b6b3c6 row_cache: Switch _stats.hits/misses to row granularity
Those are exported by the RESTful APIs called
"get_row_hits/get_row_misses" and reported by nodetool.
2017-07-04 13:55:06 +02:00
Tomasz Grabiec
62c76abf71 row_cache: Rename num_entries() to partitions() for clarity 2017-07-04 13:55:06 +02:00
Tomasz Grabiec
60c2a86192 row_cache: Track mispopulations also at row level 2017-07-04 13:55:06 +02:00
Tomasz Grabiec
94547db620 row_cache: Track row insertions 2017-07-04 13:55:06 +02:00
Tomasz Grabiec
a58f2c8640 row_cache: Track row hits and misses 2017-07-04 13:55:06 +02:00
Tomasz Grabiec
77b2a92ece row_cache: Make mispopulation counter also apply for continuity information 2017-07-04 13:55:06 +02:00
Tomasz Grabiec
a5fdff2ac2 row_cache: Add partition_ prefix to current counters
In preparation for adding per-row counters.
2017-07-04 13:55:06 +02:00
Tomasz Grabiec
ae4b24db06 misc_services: Switch to using reads_with[_no]_misses counters
They better approximate the intended meaning than hits/misses,
which, according to Gleb, indicate whether a read did any I/O or not.
2017-07-04 13:55:06 +02:00
Tomasz Grabiec
6a22cbceaf row_cache: Add metrics for operations on underlying reader 2017-07-04 13:55:06 +02:00
Tomasz Grabiec
5c7b6fc164 row_cache: Add reader-related metrics 2017-07-04 13:55:06 +02:00
Tomasz Grabiec
be2e89d596 row_cache: Remove dead code 2017-07-04 13:55:06 +02:00
Tomasz Grabiec
e720b317c9 row_cache: Restore update of concurrent_misses_same_key
It was lost in action in 6f6575f456.

Message-Id: <1499168837-5072-1-git-send-email-tgrabiec@scylladb.com>
2017-07-04 14:51:05 +03:00
Avi Kivity
66e56511d6 Merge "Use selective_token_range_sharder in repair" from Asias
"This series introduces selective_token_range_sharder and uses it in repair to
generate dht::token_range belongs to a specific shard."

* tag 'asias/repair-selective_token_range_sharder-v3' of github.com:cloudius-systems/seastar-dev:
  repair: Use selective_token_range_sharder
  tests: Add test_selective_token_range_sharder
  dht: Add selective_token_range_sharder
2017-07-04 14:14:33 +03:00
Asias He
b10e961a64 repair: Use selective_token_range_sharder
With this change, we ask all the shards to handle the ranges provided
by the user, and we use selective_token_range_sharder to split the
ranges and ignore those that do not belong to the current shard.
2017-07-04 18:46:19 +08:00
Asias He
2a794db61b tests: Add test_selective_token_range_sharder 2017-07-04 18:46:19 +08:00
Asias He
d835cf2748 dht: Add selective_token_range_sharder
It is like ring_position_range_sharder, but it works with
dht::token_range. This sharder will return the ranges that belong to a
selected shard.
2017-07-04 18:46:19 +08:00
Tomasz Grabiec
1d6fec0755 row_cache: Drop not very useful prefixes from metric names
This drops the "total_operations_" and "objects_" prefixes. There is no
convention of adding them in other parts of the system, and they don't
add much value.

Fixes scylladb/scylla-grafana-monitoring#169.

Message-Id: <1499160342-25865-1-git-send-email-tgrabiec@scylladb.com>
2017-07-04 13:37:12 +03:00
Nadav Har'El
d95f908586 Fix test to use non-wrapping range
The test put a wrapping range into a non-wrapping range variable.
This was harmless at the time this test was written, but newer code
may not be as forgiving, so it is better to use a non-wrapping range as
intended.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20170704103128.29689-1-nyh@scylladb.com>
2017-07-04 13:36:29 +03:00
Avi Kivity
07b8adce0e sstables: fix use-after-free in read_simple()
`r` is moved-from, and later captured in a different lambda. The compiler may
choose to move and perform the other capture later, resulting in a use-after-free.

Fix by copying `r` instead of moving it.

Discovered by sstable_test in debug mode.
Message-Id: <20170702082546.20570-1-avi@scylladb.com>
2017-07-04 10:24:07 +02:00
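The hazard fixed above can be reduced to a small standalone sketch (hypothetical names, not the actual read_simple() code): a value is moved into one lambda and also captured by value elsewhere, so the second capture can see a moved-from object. In the original bug the two captures sat in code with unspecified ordering, making the outcome compiler-dependent; copying sidesteps the problem.

```cpp
#include <string>
#include <utility>

// Bug pattern: `r` is moved into one lambda and then captured by value
// in a second lambda, which therefore sees a moved-from (typically
// empty) string.
std::string buggy_capture() {
    std::string r = "region";
    auto consume = [moved = std::move(r)] { return moved; };
    auto later = [r] { return r; };   // captures a moved-from string
    (void)consume;
    return later();                   // contents unspecified
}

// The fix from the commit: copy `r` instead of moving it, so every
// capture holds an independent, valid copy.
std::string fixed_capture() {
    std::string r = "region";
    auto consume = [copy = r] { return copy; };
    auto later = [r] { return r; };
    return consume() + ":" + later(); // both copies stay valid
}
```

With the copy, both lambdas return "region" regardless of construction order.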
Raphael S. Carvalho
7b777fe2e3 sstables/lcs: choose sstable with highest droppable tombstone ratio
Currently, lcs will choose, for tombstone compaction, sstable with
the lowest ratio from the ones which ratio is at least above threshold
(0.2 by default).

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170703185633.6644-1-raphaelsc@scylladb.com>
2017-07-04 10:25:10 +03:00
Avi Kivity
bcf7867ac9 Merge "small fixes and cleanup for leveled strategy (part 2)" from from Raphael
* 'lcs_improvements_part_2' of github.com:raphaelsc/scylla:
  lcs: Match estimated tasks arithmetic to score in LCS
  lcs: prevent leveled_compaction_strategy.hh from being included more than once
  lcs: use vector instead for storing a level of sstables
  compaction: keep only one variant of size_tiered_most_interesting_bucket
  lcs: get rid of unused code in leveled_manifest
2017-07-04 10:10:53 +03:00
Raphael S. Carvalho
7606ffd744 lcs: Match estimated tasks arithmetic to score in LCS
Contains fix for CASSANDRA-8904.

Added TARGET_SCORE to get rid of the magic number for the target
score, which is now used more than once.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-04 03:35:02 -03:00
Raphael S. Carvalho
dfb5463478 lcs: prevent leveled_compaction_strategy.hh from being included more than once
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-04 03:35:00 -03:00
Raphael S. Carvalho
db98ab6aaf lcs: use vector instead for storing a level of sstables
A list is no longer needed because LCS no longer moves an sstable that
breaks the invariant at its level to level 0. Now LCS incrementally
restores the invariant by compacting together the first set of
overlapping tables.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-04 03:34:57 -03:00
Raphael S. Carvalho
b350352e6c compaction: keep only one variant of size_tiered_most_interesting_bucket
Two variants of size_tiered_most_interesting_bucket existed to avoid a
copy, but subsequent work will make LCS use a vector for each level of
sstables, so let's keep only one variant.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-04 03:34:51 -03:00
Raphael S. Carvalho
5921600b95 lcs: get rid of unused code in leveled_manifest
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-07-04 03:34:34 -03:00
Nadav Har'El
d177ec05cb repair: further limit parallelism of checksum calculation
Repair today has a semaphore limiting the number of ongoing checksum
comparisons running in parallel (on one shard) to 100. We needed this
number to be fairly high, because a "checksum comparison" can involve
high latency operations - namely, sending an RPC request to another node
in a remote DC and waiting for it to calculate a checksum there, and while
waiting for a response we need to proceed calculating checksums in parallel.

But as a consequence, in the current code, we can end up with as many as
100 fibers all at the same stage of reading partitions to checksum from
sstables. This requires tons of memory, to hold at least 128K of buffer
(even more with read-ahead) for each of these fibers, plus partition data
for each. But doing 100 reads in parallel is pointless - one (or very few)
should be enough.

So this patch adds another semaphore to limit the number of checksum
*calculations* (including the read and checksum calculation) on each shard
to just 2. There may still be 100 ongoing checksum *comparisons*, in
other stages of the comparison (sending the checksum requests to other
nodes and waiting for them to return), but only 2 will ever be in the stage of
reading from disk and checksumming them.

The limit of 2 checksum calculations (per shard) applies to the repair
slave, not just the master: the slave may receive many checksum
requests in parallel, but will only actually work on 2 at a time.

Because the parallelism=100 now rate-limits operations which use very little
memory, in the future we can safely increase it even more, to support
situations where the disk is very fast but the link between nodes has
very high latency.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20170703151329.25716-1-nyh@scylladb.com>
2017-07-03 18:14:57 +03:00
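The two-level limit described above can be sketched in plain C++ (hypothetical structure and names, not the actual repair code, and a hand-rolled semaphore standing in for Seastar's): an outer semaphore bounds in-flight comparisons while an inner one bounds the expensive read+checksum stage.

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// Minimal counting semaphore (stand-in for Seastar's semaphore).
class semaphore {
    std::mutex _m;
    std::condition_variable _cv;
    int _count;
public:
    explicit semaphore(int count) : _count(count) {}
    void acquire() {
        std::unique_lock<std::mutex> lk(_m);
        _cv.wait(lk, [this] { return _count > 0; });
        --_count;
    }
    void release() {
        { std::lock_guard<std::mutex> lk(_m); ++_count; }
        _cv.notify_one();
    }
};

semaphore comparisons{100};  // whole comparison, incl. waiting on RPCs
semaphore calculations{2};   // only the local read + checksum stage

std::atomic<int> in_calc{0}, max_in_calc{0};

void compare_one_range() {
    comparisons.acquire();
    calculations.acquire();
    int cur = ++in_calc;              // record peak stage concurrency
    int prev = max_in_calc.load();
    while (prev < cur && !max_in_calc.compare_exchange_weak(prev, cur)) {}
    --in_calc;
    calculations.release();
    comparisons.release();
}

// Runs n "comparisons" concurrently; the peak number simultaneously
// inside the calculation stage never exceeds 2.
int run(int n) {
    std::vector<std::thread> ts;
    for (int i = 0; i < n; ++i) ts.emplace_back(compare_one_range);
    for (auto& t : ts) t.join();
    return max_in_calc.load();
}
```

Many fibers may hold the outer permit (e.g. while waiting on remote checksums), but disk reads stay bounded by the inner one.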
Piotr Jastrzebski
80f08921c4 Make table_helper independent from trace_keyspace_helper
table_helper is a generic helper that can easily be used in other places.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <11e46dbc1c90d0273a41c8144e6f6013e21efcdb.1499077818.git.piotr@scylladb.com>
2017-07-03 15:55:00 +03:00
Raphael S. Carvalho
972a0237ef database: restore indentation for cleanup_sstables
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170630035324.19881-2-raphaelsc@scylladb.com>
2017-07-03 12:48:54 +03:00
Raphael S. Carvalho
b9d0645199 database: fix potential use-after-free in sstable cleanup
When do_for_each is in its last iteration and with_semaphore defers
because there's an ongoing cleanup, the sstable object will be used
after being freed, because it was taken by reference and the container
it lives in was destroyed prematurely.

Let's fix it with do_with, also making the code nicer.

Fixes #2537.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170630035324.19881-1-raphaelsc@scylladb.com>
2017-07-03 12:48:53 +03:00
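The lifetime bug above follows a pattern that can be sketched in plain C++ (hypothetical names; this is not the actual Seastar code): a deferred continuation captures a reference into a container whose owner goes out of scope before the continuation runs. The do_with-style fix moves ownership into state that outlives the continuation.

```cpp
#include <functional>
#include <memory>
#include <string>
#include <vector>

// Bug pattern: the returned callable holds a reference into a local
// container, which is destroyed when the function returns.
std::function<std::string()> schedule_buggy() {
    std::vector<std::string> sstables{"sst-to-clean"};
    auto& sst = sstables.back();
    return [&sst] { return sst; };   // dangles once `sstables` dies
}

// do_with-style fix: move ownership of the container into state that
// lives as long as the continuation itself.
std::function<std::string()> schedule_fixed() {
    std::vector<std::string> sstables{"sst-to-clean"};
    auto held = std::make_shared<std::vector<std::string>>(std::move(sstables));
    return [held] { return held->back(); };
}
```

Invoking the result of schedule_fixed() is safe no matter how late it runs, because the shared_ptr keeps the container alive.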
Avi Kivity
5883e85da3 Merge "improve maintainability of compaction strategies" from Raphael
"compaction_strategy.cc keeps the full implementation of size tiered,
major, and null strategies, and partial implementation of leveled
and date tiered strategies. It's a mess. In the future, we will also
need space for time window strategy. The file is hard to read and
maintain.
My goal here is to improve maintainability of the strategies by
putting each of them into its own header.

NOTE: No semantic change is introduced here."

* 'improve_compaction_strategy_maintainability' of github.com:raphaelsc/scylla:
  compaction_strategy: move dtcs to its existing header
  compaction_strategy: move lcs implementation to its own header
  compaction_strategy: move stcs implementation to its own header
  compaction_strategy: move compaction_strategy_impl to its own header
2017-07-03 11:39:30 +03:00
Takuya ASADA
0c81974bc4 dist/common/systemd: move scylla-server.service to be after network-online.target instead of network.target
To make sure Scylla starts after the network is up, we need to move
from network.target to network-online.target.

Fixes #2337

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1493661832-9545-1-git-send-email-syuu@scylladb.com>
2017-07-03 10:01:21 +03:00
Asias He
b2a2fbcf73 repair: Do not store the failed ranges
The number of failed ranges can be large, so storing them can consume a
lot of memory. We already log the failed ranges. No need to store them
in memory.

Message-Id: <7a70c4732667c5c3a69211785e8efff0c222fc28.1498809367.git.asias@scylladb.com>
2017-07-03 10:00:25 +03:00
Takuya ASADA
1c35549932 dist/common/scripts/scylla_cpuscaling_setup: skip configuration when cpufreq driver doesn't loaded
Configuring the cpufreq service on VMs/IaaS causes an error because they don't support cpufreq.
To prevent the error, skip the whole configuration when the driver is not loaded.

Fixes #2051

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1498809504-27029-1-git-send-email-syuu@scylladb.com>
2017-07-03 09:59:56 +03:00
Takuya ASADA
e645b0fb13 dist/common/scripts: move EC2 configuration verification to 'scylla_ec2_check'
Currently we only have EC2 configuration verification on AMI, so move it to
/usr/lib/scylla and run it from scylla_setup, to make it usable for
non-AMI users.

Fixes #1997

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Message-Id: <1498811107-29135-1-git-send-email-syuu@scylladb.com>
2017-07-03 09:59:28 +03:00
Avi Kivity
6895f6e603 sstable_datafile_test: fix sstable_expired_data_ratio failure
A comment states that we want the file to be old enough, but sets
a timestamp of max(), which is in the future. This may have passed
because the conversion from numeric_limits<time_t>::max() to
db_clock::time_point is not well defined (their dynamic range is
different), so truncation may have converted the large number to a
low one.
Message-Id: <20170702082903.20879-1-avi@scylladb.com>
2017-07-02 20:22:51 +02:00
Avi Kivity
51b6066212 cql3: operation: correctly format error messages
Error messages incorrectly used the debug representation of the receiver,
rather than the text representation of the operation itself.

Fixes #113.
Message-Id: <20170701101325.3163-1-avi@scylladb.com>
2017-07-02 20:06:50 +02:00
Duarte Nunes
d157e4558a utils/log_histogram: Remove largest() function
It should never have existed in the first place, as there are no
legitimate callers and it can be misused.

Signed-off-by: Duarte Nunes <duarte@scylladb.com>
Message-Id: <20170630095939.2429-1-duarte@scylladb.com>
2017-07-02 14:29:17 +03:00
Gleb Natapov
d23111312f main: wait for wait_for_gossip_to_settle() to complete during boot
Boot should not continue until a future returned by
wait_for_gossip_to_settle() is resolved. Commit 991ec4a16 mistakenly
broke that, so restore it. Also fix calls to supervisor::notify()
to be in the right places.

Message-Id: <20170702082355.GQ14563@scylladb.com>
2017-07-02 11:32:36 +03:00
Avi Kivity
5bc13e4454 Revert "Make table_helper independent from trace_keyspace_helper"
This reverts commit db5bf363d0. Causes
errors of the sort

    Exiting on unhandled exception: exceptions::invalid_request_exception
    (Keyspace 'system_traces' does not exist)
2017-07-02 11:30:51 +03:00
Avi Kivity
7c809917b6 compaction_manager: fix debug mode build (periodic_compaction_submission_interval)
Turn static constexpr variable into a function.
2017-07-01 19:34:46 +03:00
Avi Kivity
c2c69e003f compaction: fix build on debug mode (DEFAULT_TOMBSTONE_COMPACTION_INTERVAL)
Debug mode wants to allocate storage for a constexpr variable for some
reason. Turn it into a function.
2017-07-01 19:26:22 +03:00
Avi Kivity
59f649e2bc Revert "cql_server::do_accepts: modernize loop"
This reverts commit 37af493f6e. Connections
are not accepted and ^C does not work anymore.
2017-07-01 12:54:23 +03:00
Jesse Haber-Kucharsky
1100bb8a5b cql: Eagerly throw lexing and parsing exceptions
Previously, lexing and parsing errors were aggregated while CQL queries were
evaluated. Afterwards, the first collected error (if present) would be thrown as
an exception.

The problem was that when parsing and lexing errors were aggregated this way,
the parser would continue even in spite of errors like "no viable alternative".
Semantic actions attached to grammar rules would still execute, though with
variables that had not yet been initialized. This would crash Scylla.

This change modifies the error-handling strategy of CQL parsing. Rather than
aggregate errors, we throw an exception on the first error we encounter. This
ensures that grammar actions never execute unless there is a precise match.

One possible issue with this approach is that the generated C++ code from the
ANTLR grammar may not be exception-safe. I compiled Scylla in debug-mode with
ASan support and executed several erroneous CQL queries with `cqlsh`. No memory
leaks were reported.

Fixes #2466.

Signed-off-by: Jesse Haber-Kucharsky <jhaberku@scylladb.com>
Message-Id: <db1f650a2bbb615b506d9015486eece45375a440.1498836703.git.jhaberku@scylladb.com>
2017-07-01 12:13:44 +03:00
Raphael S. Carvalho
69a9ad468c compaction_strategy: move dtcs to its existing header
Goal is to improve maintainability.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-30 03:50:09 -03:00
Raphael S. Carvalho
4d387475fe compaction_strategy: move lcs implementation to its own header
Goal is to improve maintainability.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-30 03:50:07 -03:00
Raphael S. Carvalho
4b46d286fd compaction_strategy: move stcs implementation to its own header
Goal is to improve maintainability.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-30 03:50:06 -03:00
Raphael S. Carvalho
0d9bb0da39 compaction_strategy: move compaction_strategy_impl to its own header
compaction_strategy.cc keeps the full implementation of size tiered,
major, and null strategies, and partial implementation of leveled
and date tiered strategies. It's a mess. In the future, we will also
need space for time window strategy. The file is hard to read and
maintain.
My goal here is to eventually improve maintainability of the
strategies by putting each of them into its own header.
This is the first step towards that goal.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-30 03:50:04 -03:00
Raphael S. Carvalho
9fa855e105 compaction_strategy: use duration type for default tombstone compaction interval
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170630041838.20604-1-raphaelsc@scylladb.com>
2017-06-30 08:56:22 +03:00
Piotr Jastrzebski
db5bf363d0 Make table_helper independent from trace_keyspace_helper
table_helper is a generic helper that can easily be used in other places.

Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Message-Id: <3e360a963d4a53de6d758ba8bada78fc572f001a.1498745600.git.piotr@scylladb.com>
2017-06-29 17:20:07 +03:00
Tomasz Grabiec
97005825bf row_cache: Fix compilation errors with gcc 5
Message-Id: <1498741526-27055-1-git-send-email-tgrabiec@scylladb.com>
2017-06-29 16:34:46 +03:00
Avi Kivity
6da9b6eb81 cql3: error_listener: add virtual destructor
Found by Eclipse.
Message-Id: <20170629063324.31309-1-avi@scylladb.com>
2017-06-29 10:51:20 +02:00
Avi Kivity
9298fea27b Merge seastar upstream
* seastar 0ab7ae5...c848486 (2):
  > build: export full cflags in pkgconfig file (Fixes #2439)
  > configure: Avoid putting tmp file on /tmp
2017-06-29 11:35:24 +03:00
Avi Kivity
fc966c0c4c Merge "tombstone removal compaction" from Raphael
"This feature is intended to make compaction more efficient at getting rid of
droppable tombstones and expired data wasting disk space. So far, people have
been dealing with it manually through major compaction.

With strategies other than date tiered, large sstables will be left untouched
for a long time even though all their data is expired. Date tiered suffers from
this when mixing data with different TTLs because it only includes sstables
that are fully expired for compaction.

sstables keep as metadata a histogram which allows us to easily estimate the
droppable data ratio from gc_before. sstables whose droppable data ratio is
above 20% (the default value for the tombstone_threshold option) will be
considered candidates for the operation.

Like in C*, we will only do tombstone removal compaction when there's nothing
to compact in the standard way. It would be interesting to trigger it also when
disk usage is above a given threshold, but I decided to leave this for later.

Fixes #2306."

* 'tombstone_removal_compaction_v4' of github.com:raphaelsc/scylla:
  tests: more testing for tombstone compaction options
  tests: basic tombstone compaction test for date tiered
  compaction/dtcs: add support for tombstone compaction
  tests: basic test of tombstone compaction with lcs
  compaction/lcs: add support for tombstone compaction
  tests: basic tombstone compaction test for size tiered
  compaction/stcs: add support for tombstone compaction
  tests: add test for estimation of droppable tombstone ratio
  sstables: introduce function to estimate droppable tombstone ratio
  compaction_manager: periodically submit cfs for compaction
  streaming_histogram: fix coding style
  tests: add streaming_histogram_test
  streaming_histogram: implement sum
  tests: add test for sstable with bad tombstone histogram
  sstables: discard bad streaming histogram for future use
  tests: add sstable tombstone histogram test
  streaming_histogram: fix update
  streaming_histogram: move it to utils
  streaming_histogram: do not limit it to be used by sstables
  sstables: update tombstone_histogram for cells with expiration time
2017-06-29 10:19:59 +03:00
Avi Kivity
1317c4a03e Update ami submodule
* dist/ami/files/scylla-ami f10db69...5dfe42f (1):
  > don't fetch perf from amazon repo
2017-06-29 09:38:48 +03:00
Raphael S. Carvalho
ab335c8085 tests: more testing for tombstone compaction options
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 02:43:08 -03:00
Raphael S. Carvalho
ce4dc15a20 tests: basic tombstone compaction test for date tiered
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 02:43:08 -03:00
Raphael S. Carvalho
f76ece5349 compaction/dtcs: add support for tombstone compaction
Unlike other strategies, dtcs has tombstone compaction disabled by
default due to:
- deletion shouldn't be used with DTCS; rather data is deleted through TTL.
- with time series workloads, it's usually better to wait for the whole sstable
to expire rather than compacting a single sstable when it's more than
20% (the default value) expired.
See CASSANDRA-9234 for more details.

For tombstone compaction, unworthy sstables are filtered out and the oldest
one is chosen because it's the one least likely to shadow data and it's also
relatively big.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 02:43:08 -03:00
Raphael S. Carvalho
c400bf97b9 tests: basic test of tombstone compaction with lcs
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 02:43:08 -03:00
Raphael S. Carvalho
70e54cfe6e compaction/lcs: add support for tombstone compaction
LCS will choose its candidate by starting from the highest level and
picking the sstable with the highest droppable tombstone ratio.
Unlike STCS, which needs to choose the oldest sstable from the biggest tier,
LCS can simply choose the one with the highest droppable tombstone ratio,
because sstables in a given level don't overlap.
An sstable picked for tombstone removal compaction won't be demoted
or promoted.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 02:43:08 -03:00
Raphael S. Carvalho
138fda468f tests: basic tombstone compaction test for size tiered
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 02:43:08 -03:00
Raphael S. Carvalho
8fd80ac22c compaction/stcs: add support for tombstone compaction
For larger sstables it is hard to find compaction peers, so they are
left uncompacted for a long time. Meanwhile, expired data and tombstones
which could be purged waste disk space.

Each sstable tracks droppable tombstones, from which a ratio can be calculated.
If the ratio is greater than the threshold (0.2 by default), the sstable will
be eligible for compaction. The oldest sstables from the biggest tiers are
preferable because droppable data in them is more likely to satisfy
the conditions for purging, like not shadowing data in another sstable.

Subsequent patches will add support in leveled and date tiered
strategies.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 02:43:08 -03:00
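The selection logic described in the commit above can be modeled roughly as follows. This is an illustrative Python sketch, not Scylla's actual C++ code; the `SSTable` fields and function name are hypothetical stand-ins:

```python
from dataclasses import dataclass

@dataclass
class SSTable:                 # illustrative stand-in for a real sstable
    droppable_ratio: float     # estimated droppable tombstone ratio
    max_timestamp: int         # newest data in the sstable (larger = newer)

def tombstone_compaction_candidate(sstables, threshold=0.2):
    """Pick one sstable for tombstone removal compaction, or None."""
    worthy = [s for s in sstables if s.droppable_ratio >= threshold]
    if not worthy:
        return None
    # the oldest sstable is preferred: its droppable data is the most
    # likely to satisfy the purge conditions (e.g. not shadowing data)
    return min(worthy, key=lambda s: s.max_timestamp)
```

As the merge message notes, this kicks in only when there is nothing to compact the standard way.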
Raphael S. Carvalho
ad24470972 tests: add test for estimation of droppable tombstone ratio
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 02:43:08 -03:00
Raphael S. Carvalho
eb6d17b748 sstables: introduce function to estimate droppable tombstone ratio
This function estimates the ratio of droppable tombstones.
A cell that expired before gc_before, or a regular tombstone older
than gc_before, is considered droppable.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 02:43:08 -03:00
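The estimate can be sketched as follows; an illustrative Python model, where the function name and the histogram's shape (a mapping from deletion/expiration timestamp to point count) are assumptions, not the real API:

```python
def estimate_droppable_ratio(histogram, gc_before):
    """Estimate the fraction of points droppable by tombstone compaction.

    histogram: dict mapping a deletion/expiration timestamp to a point
    count. A point is droppable when its timestamp is before gc_before.
    """
    total = sum(histogram.values())
    if total == 0:
        return 0.0
    droppable = sum(count for ts, count in histogram.items() if ts < gc_before)
    return droppable / total
```

An sstable whose estimate exceeds tombstone_threshold (0.2 by default) becomes a candidate for tombstone removal compaction.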
Raphael S. Carvalho
0d21129cc7 compaction_manager: periodically submit cfs for compaction
This is useful for a column family which isn't generating new content
and will have lots of expired data later on that can be purged.
Compaction submission is a no-op if there's nothing to do, so I think
it's reasonable to do it at an interval of 1 hour.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 02:43:03 -03:00
Raphael S. Carvalho
719dbf547d streaming_histogram: fix coding style
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 02:08:12 -03:00
Raphael S. Carvalho
6fb26d9f0c tests: add streaming_histogram_test
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 02:08:12 -03:00
Raphael S. Carvalho
a65b9eb8b4 streaming_histogram: implement sum
This function is used to estimate the number of points in the interval
[-inf, b]. It will be useful for estimating the droppable tombstone
ratio in a given sstable.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 02:08:12 -03:00
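One common formulation of a streaming-histogram `sum` interpolates between the two bucket centroids that straddle `b`. A minimal Python sketch of that idea (not Scylla's code), assuming `buckets` is a sorted list of `(centroid, count)` pairs:

```python
def histogram_sum(buckets, b):
    """Estimate the number of points in the interval [-inf, b]."""
    if not buckets or b < buckets[0][0]:
        return 0.0
    if b >= buckets[-1][0]:
        # at or past the last centroid: approximate with the full count
        return float(sum(m for _, m in buckets))
    # find the pair of centroids straddling b: p1 <= b < p2
    i = 0
    while buckets[i + 1][0] <= b:
        i += 1
    p1, m1 = buckets[i]
    p2, m2 = buckets[i + 1]
    weight = (b - p1) / (p2 - p1)
    mb = m1 + (m2 - m1) * weight                   # interpolated height at b
    total = (m1 + mb) * weight / 2.0               # trapezoid from p1 to b
    total += m1 / 2.0                              # left half of bucket i
    total += sum(m for p, m in buckets if p < p1)  # buckets fully below p1
    return total
```

For the droppable-tombstone use case, `b` would be `gc_before` and the result divided by the total point count gives the ratio.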
Raphael S. Carvalho
c01c659594 tests: add test for sstable with bad tombstone histogram
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 02:08:12 -03:00
Raphael S. Carvalho
06fabf9810 sstables: discard bad streaming histogram for future use
Detect bad histograms which had elements merged incorrectly due to the use
of an unordered map (their keys were unordered). A histogram whose size is
less than the maximum allowed will be correct because no entries needed to
be merged, so we can avoid discarding those.

This is important because histogram for tombstone will be used to
estimate droppable tombstone ratio. If it's incorrectly high for many
of existing sstables, we will needlessly compact lots of them.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 02:08:10 -03:00
Raphael S. Carvalho
7b532867ce tests: add sstable tombstone histogram test
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 01:17:28 -03:00
Raphael S. Carvalho
f35bd66da4 streaming_histogram: fix update
This bug was introduced when converting the Java code. The return value
of map::erase() was used as if it were the value of the removed
entry, but it's actually the number of removed entries.
update() also relies on ordered keys, so the histogram now uses
a map instead.
In addition, histograms will be written in sorted order (like C*
does) such that we can detect bad histograms, using disk_array.
disk_array is also used from now on to read histograms.
The conversion from array to map is fine because histograms for
sstables are limited to 100 elements.

A coming patch will detect bad histograms (generated only by us)
and discard them, because we can't rely on their information.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-29 01:17:26 -03:00
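A rough Python model of the update path (names are illustrative). The two essential points are that the merge step relies on ordered keys, and that the removed entry's count must be read before erasing it; the bug used C++ `map::erase()`'s return value (the number of erased entries) as if it were that count:

```python
def histogram_update(buckets, point, max_size):
    """Insert a point; if over capacity, merge the two closest buckets."""
    buckets[point] = buckets.get(point, 0) + 1
    if len(buckets) <= max_size:
        return
    keys = sorted(buckets)  # finding the closest pair needs ordered keys
    # index of the adjacent pair with the smallest gap
    i = min(range(len(keys) - 1), key=lambda j: keys[j + 1] - keys[j])
    k1, k2 = keys[i], keys[i + 1]
    # read the counts first, then remove the entries (the original bug
    # treated the erase call's return value as the removed entry's count)
    m1, m2 = buckets.pop(k1), buckets.pop(k2)
    merged = (k1 * m1 + k2 * m2) / (m1 + m2)  # count-weighted centroid
    buckets[merged] = m1 + m2
```

The 100-element cap mentioned above keeps this merge step cheap even though it scans all keys.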
Amnon Heiman
644868d816 api: remove reply creation
As a preparation for the HTTP stream support, creation of an empty reply
should be avoided.

This patch removes a line that cannot be reached but causes the compiler
to complain.

It has no effect aside from removing the reply creation.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <20170628130202.8132-1-amnon@scylladb.com>
2017-06-28 16:30:58 +03:00
Tomasz Grabiec
786e75dbf7 row_cache: Use continuity information to decide whether to populate
If the cache is missing a given key, but the range is marked as continuous,
it means the sstables don't have that entry and we can insert it without
asking the presence checker (bloom filter based). The latter is more
expensive and gives false positives. So this improves update
performance and hit ratio.

Another positive effect is that we don't have to clear continuity now.

Fixes #1999.

Message-Id: <1498643043-21117-1-git-send-email-tgrabiec@scylladb.com>
2017-06-28 13:32:48 +03:00
Tomasz Grabiec
3489c68a68 lsa: Fix performance regression in eviction and compact_on_idle
Region comparator, used by the two, calls region_impl::min_occupancy(),
which calls log_histogram::largest(). The latter is O(N) in terms of
the number of segments, and is supposed to be used only in tests.
We should call one_of_largest() instead, which is O(1).

This caused compact_on_idle() to take more CPU as the number of
segments grew (even when there was nothing to compact). Eviction
would see the same kind of slow down as well.

Introduced in 11b5076b3c.

Message-Id: <1498641973-20054-1-git-send-email-tgrabiec@scylladb.com>
2017-06-28 12:32:43 +03:00
Raphael S. Carvalho
a3a73899bc database: remove outdated FIXME comments
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20170621002253.29660-1-raphaelsc@scylladb.com>
2017-06-28 11:06:02 +02:00
Etienne Kruger
37af493f6e cql_server::do_accepts: modernize loop
Replace recursion in cql_server::do_accepts with more modern repeat()
from future-util.hh.

Fixes #2467.

Signed-off-by: Etienne Kruger <el@loadavg.io>
Message-Id: <20170628033130.19824-1-el@loadavg.io>
2017-06-28 10:25:22 +03:00
Raphael S. Carvalho
d90f46000d streaming_histogram: move it to utils
It's not specific to sstables. May be needed somewhere else in
the future.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-28 01:07:13 -03:00
Glauber Costa
f3742d1e38 disable defragment-memory-on-idle-by-default
It's been linked with various performance issues, either by causing
them or making them worse. One example is #1634, and also recently
I have investigated continuous performance degradation that was also
linked to defrag on idle activity.

Until we can figure out how to reduce its impact, we should disable it.

Signed-off-by: Glauber Costa <glauber@glauber.scylladb>
Message-Id: <20170627201109.10775-1-glauber@scylladb.com>
2017-06-28 00:21:11 +03:00
Raphael S. Carvalho
fb9bc609c6 streaming_histogram: do not limit it to be used by sstables
streaming histogram will later be placed in /utils, so we want
it to use std::unordered_map<> instead of disk_hash<>.
That also requires implementing serialization/deserialization
functions for it.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-27 16:51:52 -03:00
Raphael S. Carvalho
e224653d70 sstables: update tombstone_histogram for cells with expiration time
That tombstone_histogram is used to determine the droppable data ratio
for an sstable, and unlike C*, we were only updating it for
tombstones. We need to update it with the expiration time of cells too,
if any. Creation time (expiration - ttl) cannot be used because if
ttl > gc_grace_seconds, the resulting sstable could be considered
worth dropping by tombstone compaction before any data has actually
expired.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2017-06-27 16:50:38 -03:00
Avi Kivity
08488a75e0 dist: tolerate sysctl failures
sysctl may fail in a container environment if /proc is not virtualized
properly.

Fixes #1990
Message-Id: <20170625145930.31619-1-avi@scylladb.com>
2017-06-27 16:11:48 +02:00
Avi Kivity
ff7be8241f Merge "Fix compilation issues in older environments" from Tomasz
* 'tgrabiec/fix-compilation-issues' of github.com:cloudius-systems/seastar-dev:
  tests: streamed_mutation_test: Avoid using boost::size() on row ranges
  tests: row_cache: Remove unused method
2017-06-27 16:30:54 +03:00
Tomasz Grabiec
eb844a10e9 tests: streamed_mutation_test: Avoid using boost::size() on row ranges
Fails to compile with libboost 1.55.
2017-06-27 15:27:13 +02:00
Tomasz Grabiec
e68925595c tests: row_cache: Remove unused method 2017-06-27 14:10:37 +02:00
Vlad Zolotarov
6839a50677 db::commitlog: entry_writer add a virtual destructor
Add a virtual destructor for a base class commitlog::entry_writer.

Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Message-Id: <1498511180-18391-1-git-send-email-vladz@scylladb.com>
2017-06-27 10:17:10 +03:00
425 changed files with 27698 additions and 15717 deletions

.gitignore

@@ -9,3 +9,12 @@ dist/ami/files/*.rpm
dist/ami/variables.json
dist/ami/scylla_deploy.sh
*.pyc
Cql.tokens
.kdev4
*.kdev4
CMakeLists.txt.user
.cache
.tox
*.egg-info
__pycache__
.gdbinit

CMakeLists.txt

@@ -5,8 +5,8 @@
cmake_minimum_required(VERSION 3.7)
project(scylla)
if (NOT DEFINED ENV{CLION_IDE})
message(FATAL_ERROR "This CMakeLists.txt file is only valid for use in CLion")
if (NOT DEFINED FOR_IDE AND NOT DEFINED ENV{FOR_IDE} AND NOT DEFINED ENV{CLION_IDE})
message(FATAL_ERROR "This CMakeLists.txt file is only valid for use in IDEs, please define FOR_IDE to acknowledge this.")
endif()
# Default value. A more accurate list is populated through `pkg-config` below if `seastar.pc` is available.

HACKING.md

@@ -0,0 +1,233 @@
# Guidelines for developing Scylla
This document is intended to help developers and contributors to Scylla get started. The first part consists of general guidelines that make no assumptions about a development environment or tooling. The second part describes a particular environment and work-flow for exemplary purposes.
## Overview
This section covers some high-level information about the Scylla source code and work-flow.
### Getting the source code
Scylla uses [Git submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules) to manage its dependency on Seastar and other tools. Be sure that all submodules are correctly initialized when cloning the project:
```bash
$ git clone https://github.com/scylladb/scylla
$ cd scylla
$ git submodule update --init --recursive
```
### Dependencies
Scylla depends on the system package manager for its development dependencies.
Running `./install-dependencies.sh` (as root) installs the appropriate packages based on your Linux distribution.
### Build system
**Note**: Compiling Scylla requires, conservatively, 2 GB of memory per native thread, and up to 3 GB per native thread while linking.
Scylla is built with [Ninja](https://ninja-build.org/), a low-level rule-based system. A Python script, `configure.py`, generates a Ninja file (`build.ninja`) based on configuration options.
To build for the first time:
```bash
$ ./configure.py
$ ninja-build
```
Afterwards, it is sufficient to just execute Ninja.
The full suite of options for project configuration is available via
```bash
$ ./configure.py --help
```
The most important options are:
- `--mode={release,debug,all}`: Debug mode enables [AddressSanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizer) and allows for debugging with tools like GDB. Debugging builds are generally slower and generate much larger object files than release builds.
- `--{enable,disable}-dpdk`: [DPDK](http://dpdk.org/) is a set of libraries and drivers for fast packet processing. During development, it's not necessary to enable support even if it is supported by your platform.
Source files and build targets are tracked manually in `configure.py`, so the script needs to be updated when new files or targets are added or removed.
To save time -- for instance, to avoid compiling all unit tests -- you can also specify specific targets to Ninja. For example,
```bash
$ ninja-build build/release/tests/schema_change_test
```
### Unit testing
Unit tests live in the `/tests` directory. Like with application source files, test sources and executables are specified manually in `configure.py` and need to be updated when changes are made.
A test target can be any executable. A non-zero return code indicates test failure.
Most tests in the Scylla repository are built using the [Boost.Test](http://www.boost.org/doc/libs/1_64_0/libs/test/doc/html/index.html) library. Utilities for writing tests with Seastar futures are also included.
Run all tests through the test execution wrapper with
```bash
$ ./test.py --mode={debug,release}
```
The `--name` argument can be specified to run a particular test.
Alternatively, you can execute the test executable directly. For example,
```bash
$ build/release/tests/row_cache_test -- -c1 -m1G
```
The `-c1 -m1G` arguments limit this Seastar-based test to a single system thread and 1 GB of memory.
### Preparing patches
All changes to Scylla are submitted as patches to the public mailing list. Once a patch is approved by one of the maintainers of the project, it is committed to the maintainers' copy of the repository at https://github.com/scylladb/scylla.
Detailed instructions for formatting patches for the mailing list and advice on preparing good patches are available at the [ScyllaDB website](http://docs.scylladb.com/contribute/).
### Running Scylla
Once Scylla has been compiled, executing the (`debug` or `release`) target will start a running instance in the foreground:
```bash
$ build/release/scylla
```
The `scylla` executable requires a configuration file, `scylla.yaml`. By default, this is read from `$SCYLLA_HOME/conf/scylla.yaml`. A good starting point for development is located in the repository at `/conf/scylla.yaml`.
For development, a directory at `$HOME/scylla` can be used for all Scylla-related files:
```bash
$ mkdir -p $HOME/scylla $HOME/scylla/conf
$ cp conf/scylla.yaml $HOME/scylla/conf/scylla.yaml
$ # Edit configuration options as appropriate
$ SCYLLA_HOME=$HOME/scylla build/release/scylla
```
The `scylla.yaml` file in the repository by default writes all database data to `/var/lib/scylla`, which likely requires root access. Change the `data_file_directories` and `commitlog_directory` fields as appropriate.
Scylla has a number of requirements for the file-system and operating system to operate ideally and at peak performance. However, during development, these requirements can be relaxed with the `--developer-mode` flag.
Additionally, when running on under-powered platforms like portable laptops, the `--overprovisioned` flag is useful.
On a development machine, one might run Scylla as
```bash
$ SCYLLA_HOME=$HOME/scylla build/release/scylla --overprovisioned --developer-mode=yes
```
### Branches and tags
Multiple release branches are maintained on the Git repository at https://github.com/scylladb/scylla. Release 1.5, for instance, is tracked on the `branch-1.5` branch.
Similarly, tags are used to pin-point precise release versions, including hot-fix versions like 1.5.4. These are named `scylla-1.5.4`, for example.
Most development happens on the `master` branch. Release branches are cut from `master` based on time and/or features. When a patch against `master` fixes a serious issue like a node crash or data loss, it is backported to a particular release branch with `git cherry-pick` by the project maintainers.
## Example: development on Fedora 25
This section describes one possible work-flow for developing Scylla on a Fedora 25 system. It is presented as an example to help you to develop a work-flow and tools that you are comfortable with.
### Preface
This guide will be written from the perspective of a fictitious developer, Taylor Smith.
### Git work-flow
Having two Git remotes is useful:
- A public clone of Scylla (`"public"`)
- A private clone of Scylla (`"private"`) for in-progress work or work that is not yet ready to share
The first step to contributing a change to Scylla is to create a local branch dedicated to it. For example, a feature that fixes a bug in the CQL statement for creating tables could be called `ts/cql_create_table_error/v1`. The branch name is prefaced by the developer's initials and has a suffix indicating that this is the first version. The version suffix is useful when branches are shared publicly and changes are requested on the mailing list. Having a branch for each version of the patch (or patch set) shared publicly makes it easier to reference and compare the history of a change.
Setting the upstream branch of your development branch to `master` is a useful way to track your changes. You can do this with
```bash
$ git branch -u master ts/cql_create_table_error/v1
```
As a patch set is developed, you can periodically push the branch to the private remote to back-up work.
Once the patch set is ready to be reviewed, push the branch to the public remote and prepare an email to the `scylladb-dev` mailing list. Including a link to the branch on your public remote allows for reviewers to quickly test and explore your changes.
### Development environment and source code navigation
Scylla includes a [CMake](https://cmake.org/) file, `CMakeLists.txt`, for use only with development environments (not for building) so that they can properly analyze the source code.
[CLion](https://www.jetbrains.com/clion/) is a commercial IDE that offers reasonably good source code navigation and advice for code hygiene, though its C++ parser sometimes makes errors and flags false issues.
Other good options that directly parse CMake files are [KDevelop](https://www.kdevelop.org/) and [QtCreator](https://wiki.qt.io/Qt_Creator).
To use the `CMakeLists.txt` file with these programs, define the `FOR_IDE` CMake variable or shell environmental variable.
[Eclipse](https://eclipse.org/cdt/) is another open-source option. It doesn't natively work with CMake projects, and its C++ parser has many of the same issues as CLion's.
### Distributed compilation: `distcc` and `ccache`
Scylla's compilation times can be long. Two tools help somewhat:
- [ccache](https://ccache.samba.org/) caches compiled object files on disk and re-uses them when possible
- [distcc](https://github.com/distcc/distcc) distributes compilation jobs to remote machines
A reasonably-powered laptop acts as the coordinator for compilation. A second, more powerful, machine acts as a passive compilation server.
Having a direct wired connection between the machines ensures that object files can be transmitted quickly and limits the overhead of remote compilation.
The coordinator has been assigned the static IP address `10.0.0.1` and the passive compilation machine has been assigned `10.0.0.2`.
On Fedora, installing the `ccache` package places symbolic links for `gcc` and `g++` in the `PATH`. This allows normal compilation to transparently invoke `ccache` for compilation and cache object files on the local file-system.
Next, set `CCACHE_PREFIX` so that `ccache` is responsible for invoking `distcc` as necessary:
```bash
export CCACHE_PREFIX="distcc"
```
On each host, edit `/etc/sysconfig/distccd` to include the allowed coordinators and the total number of jobs that the machine should accept.
This example is for the laptop, which has 2 physical cores (4 logical cores with hyper-threading):
```
OPTIONS="--allow 10.0.0.2 --allow 127.0.0.1 --jobs 4"
```
`10.0.0.2` has 8 physical cores (16 logical cores) and 64 GB of memory.
As a rule of thumb, the number of jobs a machine is configured to accept should equal its number of native threads.
Restart the `distccd` service on all machines.
On the coordinator machine, edit `$HOME/.distcc/hosts` with the available hosts for compilation. Order of the hosts indicates preference.
```
10.0.0.2/16 localhost/2
```
In this example, `10.0.0.2` will be sent up to 16 jobs and the local machine will be sent up to 2. Allowing for two extra threads on the host machine for coordination, we run compilation with `16 + 2 + 2 = 20` jobs in total: `ninja-build -j20`.
When a compilation is in progress, the status of jobs on all remote machines can be visualized in the terminal with `distccmon-text` or graphically as a GTK application with `distccmon-gnome`.
One thing to keep in mind is that linking object files happens on the coordinating machine, which can be a bottleneck. See the next section for ways to speed up this process.
### Using the `gold` linker
Linking Scylla can be slow. The gold linker can replace GNU ld and often speeds up the linking process. On Fedora, you can switch the system linker using
```bash
$ sudo alternatives --config ld
```
### Testing changes in Seastar with Scylla
Sometimes Scylla development is closely tied with a feature being developed in Seastar. It can be useful to compile Scylla with a particular check-out of Seastar.
One way to do this is to create a local remote for the Seastar submodule in the Scylla repository:
```bash
$ cd $HOME/src/scylla
$ cd seastar
$ git remote add local /home/tsmith/src/seastar
$ git remote update
$ git checkout -t local/my_local_seastar_branch
```

README.md

@@ -1,29 +1,19 @@
# Scylla
## Building Scylla
## Quick-start
In addition to required packages by Seastar, the following packages are required by Scylla.
### Submodules
Scylla uses submodules, so make sure you pull the submodules first by doing:
```
git submodule init
git submodule update --init --recursive
```bash
$ git submodule update --init --recursive
$ sudo ./install-dependencies.sh
$ ./configure.py --mode=release
$ ninja-build -j4 # Assuming 4 system threads.
$ ./build/release/scylla
$ # Rejoice!
```
### Building and Running Scylla on Fedora
* Installing required packages:
Please see [HACKING.md](HACKING.md) for detailed information on building and developing Scylla.
```
sudo dnf install yaml-cpp-devel lz4-devel zlib-devel snappy-devel jsoncpp-devel thrift-devel antlr3-tool antlr3-C++-devel libasan libubsan gcc-c++ gnutls-devel ninja-build ragel libaio-devel cryptopp-devel xfsprogs-devel numactl-devel hwloc-devel libpciaccess-devel libxml2-devel python3-pyparsing lksctp-tools-devel protobuf-devel protobuf-compiler systemd-devel libunwind-devel
```
* Build Scylla
```
./configure.py --mode=release --with=scylla --disable-xen
ninja-build build/release/scylla -j2 # you can use more cpus if you have tons of RAM
```
## Running Scylla
* Run Scylla
```

View File

@@ -1,6 +1,6 @@
#!/bin/sh
VERSION=2.0.4
VERSION=2.1.6
if test -f version
then

View File

@@ -952,6 +952,22 @@
}
]
},
{
"path":"/storage_service/force_terminate_repair",
"operations":[
{
"method":"POST",
"summary":"Force terminate all repair sessions",
"type":"void",
"nickname":"force_terminate_all_repair_sessions_new",
"produces":[
"application/json"
],
"parameters":[
]
}
]
},
{
"path":"/storage_service/decommission",
"operations":[

View File

@@ -49,7 +49,7 @@ static std::unique_ptr<reply> exception_reply(std::exception_ptr eptr) {
throw bad_param_exception(ex.what());
}
// We never going to get here
return std::make_unique<reply>();
throw std::runtime_error("exception_reply");
}
future<> set_server_init(http_context& ctx) {

View File

@@ -20,6 +20,7 @@
*/
#include "compaction_manager.hh"
#include "sstables/compaction_manager.hh"
#include "api/api-doc/compaction_manager.json.hh"
#include "db/system_keyspace.hh"
#include "column_family.hh"

View File

@@ -397,7 +397,7 @@ void set_storage_proxy(http_context& ctx, routes& r) {
});
sp::get_range_estimated_histogram.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_timer_stats(ctx.sp, &proxy::stats::read);
return sum_timer_stats(ctx.sp, &proxy::stats::range);
});
sp::get_range_latency.set(r, [&ctx](std::unique_ptr<request> req) {

View File

@@ -34,6 +34,7 @@
#include "column_family.hh"
#include "log.hh"
#include "release.hh"
#include "sstables/compaction_manager.hh"
namespace api {
@@ -361,16 +362,22 @@ void set_storage_service(http_context& ctx, routes& r) {
try {
res = fut.get0();
} catch(std::runtime_error& e) {
return make_ready_future<json::json_return_type>(json_exception(httpd::bad_param_exception(e.what())));
throw httpd::bad_param_exception(e.what());
}
return make_ready_future<json::json_return_type>(json::json_return_type(res));
});
});
ss::force_terminate_all_repair_sessions.set(r, [](std::unique_ptr<request> req) {
//TBD
unimplemented();
return make_ready_future<json::json_return_type>(json_void());
return repair_abort_all(service::get_local_storage_service().db()).then([] {
return make_ready_future<json::json_return_type>(json_void());
});
});
ss::force_terminate_all_repair_sessions_new.set(r, [](std::unique_ptr<request> req) {
return repair_abort_all(service::get_local_storage_service().db()).then([] {
return make_ready_future<json::json_return_type>(json_void());
});
});
ss::decommission.set(r, [](std::unique_ptr<request> req) {

View File

@@ -269,7 +269,7 @@ public:
}
// Can be called on live and dead cells
bool has_expired(gc_clock::time_point now) const {
return is_live_and_has_ttl() && expiry() < now;
return is_live_and_has_ttl() && expiry() <= now;
}
bytes_view serialize() const {
return _data;

View File

@@ -0,0 +1,41 @@
/*
* Copyright (C) 2017 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "auth/allow_all_authenticator.hh"
#include "service/migration_manager.hh"
#include "utils/class_registrator.hh"
namespace auth {
const sstring& allow_all_authenticator_name() {
static const sstring name = meta::AUTH_PACKAGE_NAME + "AllowAllAuthenticator";
return name;
}
// To ensure correct initialization order, we unfortunately need to use a string literal.
static const class_registrator<
authenticator,
allow_all_authenticator,
cql3::query_processor&,
::service::migration_manager&> registration("org.apache.cassandra.auth.AllowAllAuthenticator");
}

View File

@@ -0,0 +1,97 @@
/*
* Copyright (C) 2017 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include <stdexcept>
#include "auth/authenticator.hh"
#include "auth/authenticated_user.hh"
#include "auth/common.hh"
namespace cql3 {
class query_processor;
}
namespace service {
class migration_manager;
}
namespace auth {
const sstring& allow_all_authenticator_name();
class allow_all_authenticator final : public authenticator {
public:
allow_all_authenticator(cql3::query_processor&, ::service::migration_manager&) {
}
future<> start() override {
return make_ready_future<>();
}
future<> stop() override {
return make_ready_future<>();
}
const sstring& qualified_java_name() const override {
return allow_all_authenticator_name();
}
bool require_authentication() const override {
return false;
}
option_set supported_options() const override {
return option_set();
}
option_set alterable_options() const override {
return option_set();
}
future<::shared_ptr<authenticated_user>> authenticate(const credentials_map& credentials) const override {
return make_ready_future<::shared_ptr<authenticated_user>>(::make_shared<authenticated_user>());
}
future<> create(sstring username, const option_map& options) override {
return make_ready_future();
}
future<> alter(sstring username, const option_map& options) override {
return make_ready_future();
}
future<> drop(sstring username) override {
return make_ready_future();
}
const resource_ids& protected_resources() const override {
static const resource_ids ids;
return ids;
}
::shared_ptr<sasl_challenge> new_sasl_challenge() const override {
throw std::runtime_error("Should not reach");
}
};
}

View File

@@ -0,0 +1,41 @@
/*
* Copyright (C) 2017 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "auth/allow_all_authorizer.hh"
#include "auth/common.hh"
#include "utils/class_registrator.hh"
namespace auth {
const sstring& allow_all_authorizer_name() {
static const sstring name = meta::AUTH_PACKAGE_NAME + "AllowAllAuthorizer";
return name;
}
// To ensure correct initialization order, we unfortunately need to use a string literal.
static const class_registrator<
authorizer,
allow_all_authorizer,
cql3::query_processor&,
::service::migration_manager&> registration("org.apache.cassandra.auth.AllowAllAuthorizer");
}

View File

@@ -0,0 +1,98 @@
/*
* Copyright (C) 2017 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include "authorizer.hh"
#include "exceptions/exceptions.hh"
#include "stdx.hh"
namespace cql3 {
class query_processor;
}
namespace service {
class migration_manager;
}
namespace auth {
class service;
const sstring& allow_all_authorizer_name();
class allow_all_authorizer final : public authorizer {
public:
allow_all_authorizer(cql3::query_processor&, ::service::migration_manager&) {
}
future<> start() override {
return make_ready_future<>();
}
future<> stop() override {
return make_ready_future<>();
}
const sstring& qualified_java_name() const override {
return allow_all_authorizer_name();
}
future<permission_set> authorize(service&, ::shared_ptr<authenticated_user>, data_resource) const override {
return make_ready_future<permission_set>(permissions::ALL);
}
future<> grant(::shared_ptr<authenticated_user>, permission_set, data_resource, sstring) override {
throw exceptions::invalid_request_exception("GRANT operation is not supported by AllowAllAuthorizer");
}
future<> revoke(::shared_ptr<authenticated_user>, permission_set, data_resource, sstring) override {
throw exceptions::invalid_request_exception("REVOKE operation is not supported by AllowAllAuthorizer");
}
future<std::vector<permission_details>> list(
service&,
::shared_ptr<authenticated_user> performer,
permission_set,
stdx::optional<data_resource>,
stdx::optional<sstring>) const override {
throw exceptions::invalid_request_exception("LIST PERMISSIONS operation is not supported by AllowAllAuthorizer");
}
future<> revoke_all(sstring dropped_user) override {
return make_ready_future();
}
future<> revoke_all(data_resource) override {
return make_ready_future();
}
const resource_ids& protected_resources() override {
static const resource_ids ids;
return ids;
}
future<> validate_configuration() const override {
return make_ready_future();
}
};
}

View File

@@ -1,384 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2016 ScyllaDB
*
* Modified by ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include <seastar/core/sleep.hh>
#include <seastar/core/distributed.hh>
#include "auth.hh"
#include "authenticator.hh"
#include "authorizer.hh"
#include "database.hh"
#include "cql3/query_processor.hh"
#include "cql3/statements/raw/cf_statement.hh"
#include "cql3/statements/create_table_statement.hh"
#include "db/config.hh"
#include "service/migration_manager.hh"
#include "utils/loading_cache.hh"
#include "utils/hash.hh"
const sstring auth::auth::DEFAULT_SUPERUSER_NAME("cassandra");
const sstring auth::auth::AUTH_KS("system_auth");
const sstring auth::auth::USERS_CF("users");
static const sstring USER_NAME("name");
static const sstring SUPER("super");
static logging::logger alogger("auth");
// TODO: configurable
using namespace std::chrono_literals;
const std::chrono::milliseconds auth::auth::SUPERUSER_SETUP_DELAY = 10000ms;
class auth_migration_listener : public service::migration_listener {
void on_create_keyspace(const sstring& ks_name) override {}
void on_create_column_family(const sstring& ks_name, const sstring& cf_name) override {}
void on_create_user_type(const sstring& ks_name, const sstring& type_name) override {}
void on_create_function(const sstring& ks_name, const sstring& function_name) override {}
void on_create_aggregate(const sstring& ks_name, const sstring& aggregate_name) override {}
void on_create_view(const sstring& ks_name, const sstring& view_name) override {}
void on_update_keyspace(const sstring& ks_name) override {}
void on_update_column_family(const sstring& ks_name, const sstring& cf_name, bool) override {}
void on_update_user_type(const sstring& ks_name, const sstring& type_name) override {}
void on_update_function(const sstring& ks_name, const sstring& function_name) override {}
void on_update_aggregate(const sstring& ks_name, const sstring& aggregate_name) override {}
void on_update_view(const sstring& ks_name, const sstring& view_name, bool columns_changed) override {}
void on_drop_keyspace(const sstring& ks_name) override {
auth::authorizer::get().revoke_all(auth::data_resource(ks_name));
}
void on_drop_column_family(const sstring& ks_name, const sstring& cf_name) override {
auth::authorizer::get().revoke_all(auth::data_resource(ks_name, cf_name));
}
void on_drop_user_type(const sstring& ks_name, const sstring& type_name) override {}
void on_drop_function(const sstring& ks_name, const sstring& function_name) override {}
void on_drop_aggregate(const sstring& ks_name, const sstring& aggregate_name) override {}
void on_drop_view(const sstring& ks_name, const sstring& view_name) override {}
};
static auth_migration_listener auth_migration;
namespace std {
template <>
struct hash<auth::data_resource> {
size_t operator()(const auth::data_resource & v) const {
return v.hash_value();
}
};
template <>
struct hash<auth::authenticated_user> {
size_t operator()(const auth::authenticated_user & v) const {
return utils::tuple_hash()(v.name(), v.is_anonymous());
}
};
}
class auth::auth::permissions_cache {
public:
typedef utils::loading_cache<std::pair<authenticated_user, data_resource>, permission_set, utils::loading_cache_reload_enabled::yes, utils::simple_entry_size<permission_set>, utils::tuple_hash> cache_type;
typedef typename cache_type::key_type key_type;
permissions_cache()
: permissions_cache(
cql3::get_local_query_processor().db().local().get_config()) {
}
permissions_cache(const db::config& cfg)
: _cache(cfg.permissions_cache_max_entries(), std::chrono::milliseconds(cfg.permissions_validity_in_ms()), std::chrono::milliseconds(cfg.permissions_update_interval_in_ms()), alogger,
[] (const key_type& k) {
alogger.debug("Refreshing permissions for {}", k.first.name());
return authorizer::get().authorize(::make_shared<authenticated_user>(k.first), k.second);
}) {}
future<> stop() {
return _cache.stop();
}
future<permission_set> get(::shared_ptr<authenticated_user> user, data_resource resource) {
return _cache.get(key_type(*user, std::move(resource)));
}
private:
cache_type _cache;
};
namespace std { // for ADL, yuch
std::ostream& operator<<(std::ostream& os, const std::pair<auth::authenticated_user, auth::data_resource>& p) {
os << "{user: " << p.first.name() << ", data_resource: " << p.second << "}";
return os;
}
}
static distributed<auth::auth::permissions_cache> perm_cache;
/**
* Poor mans job schedule. For maximum 2 jobs. Sic.
* Still does nothing more clever than waiting 10 seconds
* like origin, then runs the submitted tasks.
*
* Only difference compared to sleep (from which this
* borrows _heavily_) is that if tasks have not run by the time
* we exit (and do static clean up) we delete the promise + cont
*
* Should be abstracted to some sort of global server function
* probably.
*/
struct waiter {
promise<> done;
timer<> tmr;
waiter() : tmr([this] {done.set_value();})
{
tmr.arm(auth::auth::SUPERUSER_SETUP_DELAY);
}
~waiter() {
if (tmr.armed()) {
tmr.cancel();
done.set_exception(std::runtime_error("shutting down"));
}
alogger.trace("Deleting scheduled task");
}
void kill() {
}
};
typedef std::unique_ptr<waiter> waiter_ptr;
static std::vector<waiter_ptr> & thread_waiters() {
static thread_local std::vector<waiter_ptr> the_waiters;
return the_waiters;
}
void auth::auth::schedule_when_up(scheduled_func f) {
alogger.trace("Adding scheduled task");
auto & waiters = thread_waiters();
waiters.emplace_back(std::make_unique<waiter>());
auto* w = waiters.back().get();
w->done.get_future().finally([w] {
auto & waiters = thread_waiters();
auto i = std::find_if(waiters.begin(), waiters.end(), [w](const waiter_ptr& p) {
return p.get() == w;
});
if (i != waiters.end()) {
waiters.erase(i);
}
}).then([f = std::move(f)] {
alogger.trace("Running scheduled task");
return f();
}).handle_exception([](auto ep) {
return make_ready_future();
});
}
bool auth::auth::is_class_type(const sstring& type, const sstring& classname) {
if (type == classname) {
return true;
}
auto i = classname.find_last_of('.');
return classname.compare(i + 1, sstring::npos, type) == 0;
}
future<> auth::auth::setup() {
auto& db = cql3::get_local_query_processor().db().local();
auto& cfg = db.get_config();
future<> f = perm_cache.start();
if (is_class_type(cfg.authenticator(),
authenticator::ALLOW_ALL_AUTHENTICATOR_NAME)
&& is_class_type(cfg.authorizer(),
authorizer::ALLOW_ALL_AUTHORIZER_NAME)
) {
// just create the objects
return f.then([&cfg] {
return authenticator::setup(cfg.authenticator());
}).then([&cfg] {
return authorizer::setup(cfg.authorizer());
});
}
if (!db.has_keyspace(AUTH_KS)) {
std::map<sstring, sstring> opts;
opts["replication_factor"] = "1";
auto ksm = keyspace_metadata::new_keyspace(AUTH_KS, "org.apache.cassandra.locator.SimpleStrategy", opts, true);
// We use min_timestamp so that default keyspace metadata will loose with any manual adjustments. See issue #2129.
f = service::get_local_migration_manager().announce_new_keyspace(ksm, api::min_timestamp, false);
}
return f.then([] {
return setup_table(USERS_CF, sprint("CREATE TABLE %s.%s (%s text, %s boolean, PRIMARY KEY(%s)) WITH gc_grace_seconds=%d",
AUTH_KS, USERS_CF, USER_NAME, SUPER, USER_NAME,
90 * 24 * 60 * 60)); // 3 months.
}).then([&cfg] {
return authenticator::setup(cfg.authenticator());
}).then([&cfg] {
return authorizer::setup(cfg.authorizer());
}).then([] {
service::get_local_migration_manager().register_listener(&auth_migration); // again, only one shard...
// instead of once-timer, just schedule this later
schedule_when_up([] {
// setup default super user
return has_existing_users(USERS_CF, DEFAULT_SUPERUSER_NAME, USER_NAME).then([](bool exists) {
if (!exists) {
auto query = sprint("INSERT INTO %s.%s (%s, %s) VALUES (?, ?) USING TIMESTAMP 0",
AUTH_KS, USERS_CF, USER_NAME, SUPER);
cql3::get_local_query_processor().process(query, db::consistency_level::ONE, {DEFAULT_SUPERUSER_NAME, true}).then([](auto) {
alogger.info("Created default superuser '{}'", DEFAULT_SUPERUSER_NAME);
}).handle_exception([](auto ep) {
try {
std::rethrow_exception(ep);
} catch (exceptions::request_execution_exception&) {
alogger.warn("Skipped default superuser setup: some nodes were not ready");
}
});
}
});
});
});
}
future<> auth::auth::shutdown() {
// just make sure we don't have pending tasks.
// this is mostly relevant for test cases where
// db-env-shutdown != process shutdown
return smp::invoke_on_all([] {
thread_waiters().clear();
}).then([] {
return perm_cache.stop();
});
}
future<auth::permission_set> auth::auth::get_permissions(::shared_ptr<authenticated_user> user, data_resource resource) {
return perm_cache.local().get(std::move(user), std::move(resource));
}
static db::consistency_level consistency_for_user(const sstring& username) {
if (username == auth::auth::DEFAULT_SUPERUSER_NAME) {
return db::consistency_level::QUORUM;
}
return db::consistency_level::LOCAL_ONE;
}
static future<::shared_ptr<cql3::untyped_result_set>> select_user(const sstring& username) {
// Here was a thread local, explicit cache of prepared statement. In normal execution this is
// fine, but since we in testing set up and tear down system over and over, we'd start using
// obsolete prepared statements pretty quickly.
// Rely on query processing caching statements instead, and lets assume
// that a map lookup string->statement is not gonna kill us much.
return cql3::get_local_query_processor().process(
sprint("SELECT * FROM %s.%s WHERE %s = ?",
auth::auth::AUTH_KS, auth::auth::USERS_CF,
USER_NAME), consistency_for_user(username),
{ username }, true);
}
future<bool> auth::auth::is_existing_user(const sstring& username) {
return select_user(username).then(
[](::shared_ptr<cql3::untyped_result_set> res) {
return make_ready_future<bool>(!res->empty());
});
}
future<bool> auth::auth::is_super_user(const sstring& username) {
return select_user(username).then(
[](::shared_ptr<cql3::untyped_result_set> res) {
return make_ready_future<bool>(!res->empty() && res->one().get_as<bool>(SUPER));
});
}
future<> auth::auth::insert_user(const sstring& username, bool is_super) {
return cql3::get_local_query_processor().process(sprint("INSERT INTO %s.%s (%s, %s) VALUES (?, ?)",
AUTH_KS, USERS_CF, USER_NAME, SUPER),
consistency_for_user(username), { username, is_super }).discard_result();
}
future<> auth::auth::delete_user(const sstring& username) {
return cql3::get_local_query_processor().process(sprint("DELETE FROM %s.%s WHERE %s = ?",
AUTH_KS, USERS_CF, USER_NAME),
consistency_for_user(username), { username }).discard_result();
}
future<> auth::auth::setup_table(const sstring& name, const sstring& cql) {
auto& qp = cql3::get_local_query_processor();
auto& db = qp.db().local();
if (db.has_schema(AUTH_KS, name)) {
return make_ready_future();
}
::shared_ptr<cql3::statements::raw::cf_statement> parsed = static_pointer_cast<
cql3::statements::raw::cf_statement>(cql3::query_processor::parse_statement(cql));
parsed->prepare_keyspace(AUTH_KS);
::shared_ptr<cql3::statements::create_table_statement> statement =
static_pointer_cast<cql3::statements::create_table_statement>(
parsed->prepare(db, qp.get_cql_stats())->statement);
auto schema = statement->get_cf_meta_data();
auto uuid = generate_legacy_id(schema->ks_name(), schema->cf_name());
schema_builder b(schema);
b.set_uuid(uuid);
return service::get_local_migration_manager().announce_new_column_family(b.build(), false);
}
future<bool> auth::auth::has_existing_users(const sstring& cfname, const sstring& def_user_name, const sstring& name_column) {
auto default_user_query = sprint("SELECT * FROM %s.%s WHERE %s = ?", AUTH_KS, cfname, name_column);
auto all_users_query = sprint("SELECT * FROM %s.%s LIMIT 1", AUTH_KS, cfname);
return cql3::get_local_query_processor().process(default_user_query, db::consistency_level::ONE, { def_user_name }).then([=](::shared_ptr<cql3::untyped_result_set> res) {
if (!res->empty()) {
return make_ready_future<bool>(true);
}
return cql3::get_local_query_processor().process(default_user_query, db::consistency_level::QUORUM, { def_user_name }).then([all_users_query](::shared_ptr<cql3::untyped_result_set> res) {
if (!res->empty()) {
return make_ready_future<bool>(true);
}
return cql3::get_local_query_processor().process(all_users_query, db::consistency_level::QUORUM).then([](::shared_ptr<cql3::untyped_result_set> res) {
return make_ready_future<bool>(!res->empty());
});
});
});
}

View File

@@ -1,125 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2016 ScyllaDB
*
* Modified by ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include <chrono>
#include <seastar/core/sstring.hh>
#include <seastar/core/future.hh>
#include <seastar/core/shared_ptr.hh>
#include "exceptions/exceptions.hh"
#include "permission.hh"
#include "data_resource.hh"
#include "authenticated_user.hh"
namespace auth {
class auth {
public:
class permissions_cache;
static const sstring DEFAULT_SUPERUSER_NAME;
static const sstring AUTH_KS;
static const sstring USERS_CF;
static const std::chrono::milliseconds SUPERUSER_SETUP_DELAY;
static bool is_class_type(const sstring& type, const sstring& classname);
static future<permission_set> get_permissions(::shared_ptr<authenticated_user>, data_resource);
/**
* Checks if the username is stored in AUTH_KS.USERS_CF.
*
* @param username Username to query.
* @return whether or not Cassandra knows about the user.
*/
static future<bool> is_existing_user(const sstring& username);
/**
* Checks if the user is a known superuser.
*
* @param username Username to query.
* @return true is the user is a superuser, false if they aren't or don't exist at all.
*/
static future<bool> is_super_user(const sstring& username);
/**
* Inserts the user into AUTH_KS.USERS_CF (or overwrites their superuser status as a result of an ALTER USER query).
*
* @param username Username to insert.
* @param isSuper User's new status.
* @throws RequestExecutionException
*/
static future<> insert_user(const sstring& username, bool is_super);
/**
* Deletes the user from AUTH_KS.USERS_CF.
*
* @param username Username to delete.
* @throws RequestExecutionException
*/
static future<> delete_user(const sstring& username);
/**
* Sets up Authenticator and Authorizer.
*/
static future<> setup();
static future<> shutdown();
/**
* Set up table from given CREATE TABLE statement under system_auth keyspace, if not already done so.
*
* @param name name of the table
* @param cql CREATE TABLE statement
*/
static future<> setup_table(const sstring& name, const sstring& cql);
static future<bool> has_existing_users(const sstring& cfname, const sstring& def_user_name, const sstring& name_column_name);
// For internal use. Run function "when system is up".
typedef std::function<future<>()> scheduled_func;
static void schedule_when_up(scheduled_func);
};
}
std::ostream& operator<<(std::ostream& os, const std::pair<auth::authenticated_user, auth::data_resource>& p);

View File

@@ -41,7 +41,6 @@
#include "authenticated_user.hh"
#include "auth.hh"
const sstring auth::authenticated_user::ANONYMOUS_USERNAME("anonymous");
@@ -60,13 +59,6 @@ const sstring& auth::authenticated_user::name() const {
return _anon ? ANONYMOUS_USERNAME : _name;
}
future<bool> auth::authenticated_user::is_super() const {
if (is_anonymous()) {
return make_ready_future<bool>(false);
}
return auth::auth::is_super_user(_name);
}
bool auth::authenticated_user::operator==(const authenticated_user& v) const {
return _anon ? v._anon : _name == v._name;
}

View File

@@ -58,14 +58,6 @@ public:
const sstring& name() const;
/**
* Checks the user's superuser status.
* Only a superuser is allowed to perform CREATE USER and DROP USER queries.
* Im most cased, though not necessarily, a superuser will have Permission.ALL on every resource
* (depends on IAuthorizer implementation).
*/
future<bool> is_super() const;
/**
* If IAuthenticator doesn't require authentication, this method may return true.
*/

View File

@@ -41,13 +41,14 @@
#include "authenticator.hh"
#include "authenticated_user.hh"
#include "common.hh"
#include "password_authenticator.hh"
#include "auth.hh"
#include "cql3/query_processor.hh"
#include "db/config.hh"
#include "utils/class_registrator.hh"
const sstring auth::authenticator::USERNAME_KEY("username");
const sstring auth::authenticator::PASSWORD_KEY("password");
const sstring auth::authenticator::ALLOW_ALL_AUTHENTICATOR_NAME("org.apache.cassandra.auth.AllowAllAuthenticator");
auth::authenticator::option auth::authenticator::string_to_option(const sstring& name) {
if (strcasecmp(name.c_str(), "password") == 0) {
@@ -64,64 +65,3 @@ sstring auth::authenticator::option_to_string(option opt) {
throw std::invalid_argument(sprint("Unknown option {}", opt));
}
}
/**
* Authenticator is assumed to be a fully state-less immutable object (note all the const).
* We thus store a single instance globally, since it should be safe/ok.
*/
static std::unique_ptr<auth::authenticator> global_authenticator;
future<>
auth::authenticator::setup(const sstring& type) {
if (auth::auth::is_class_type(type, ALLOW_ALL_AUTHENTICATOR_NAME)) {
class allow_all_authenticator : public authenticator {
public:
const sstring& class_name() const override {
return ALLOW_ALL_AUTHENTICATOR_NAME;
}
bool require_authentication() const override {
return false;
}
option_set supported_options() const override {
return option_set();
}
option_set alterable_options() const override {
return option_set();
}
future<::shared_ptr<authenticated_user>> authenticate(const credentials_map& credentials) const override {
return make_ready_future<::shared_ptr<authenticated_user>>(::make_shared<authenticated_user>());
}
future<> create(sstring username, const option_map& options) override {
return make_ready_future();
}
future<> alter(sstring username, const option_map& options) override {
return make_ready_future();
}
future<> drop(sstring username) override {
return make_ready_future();
}
const resource_ids& protected_resources() const override {
static const resource_ids ids;
return ids;
}
::shared_ptr<sasl_challenge> new_sasl_challenge() const override {
throw std::runtime_error("Should not reach");
}
};
global_authenticator = std::make_unique<allow_all_authenticator>();
} else if (auth::auth::is_class_type(type, password_authenticator::PASSWORD_AUTHENTICATOR_NAME)) {
auto pwa = std::make_unique<password_authenticator>();
auto f = pwa->init();
return f.then([pwa = std::move(pwa)]() mutable {
global_authenticator = std::move(pwa);
});
} else {
throw exceptions::configuration_exception("Invalid authenticator type: " + type);
}
return make_ready_future();
}
auth::authenticator& auth::authenticator::get() {
assert(global_authenticator);
return *global_authenticator;
}

View File

@@ -69,7 +69,6 @@ class authenticator {
public:
static const sstring USERNAME_KEY;
static const sstring PASSWORD_KEY;
static const sstring ALLOW_ALL_AUTHENTICATOR_NAME;
/**
* Supported CREATE USER/ALTER USER options.
@@ -86,23 +85,14 @@ public:
using option_map = std::unordered_map<option, boost::any, enum_hash<option>>;
using credentials_map = std::unordered_map<sstring, sstring>;
/**
* Setup is called once upon system startup to initialize the IAuthenticator.
*
* For example, use this method to create any required keyspaces/column families.
* Note: Only call from main thread.
*/
static future<> setup(const sstring& type);
/**
* Returns the system authenticator. Must have called setup before calling this.
*/
static authenticator& get();
virtual ~authenticator()
{}
virtual const sstring& class_name() const = 0;
virtual future<> start() = 0;
virtual future<> stop() = 0;
virtual const sstring& qualified_java_name() const = 0;
/**
* Whether or not the authenticator requires explicit login.


@@ -41,23 +41,39 @@
#include "authorizer.hh"
#include "authenticated_user.hh"
#include "common.hh"
#include "default_authorizer.hh"
#include "auth.hh"
#include "cql3/query_processor.hh"
#include "db/config.hh"
#include "utils/class_registrator.hh"
const sstring auth::authorizer::ALLOW_ALL_AUTHORIZER_NAME("org.apache.cassandra.auth.AllowAllAuthorizer");
const sstring& auth::allow_all_authorizer_name() {
static const sstring name = meta::AUTH_PACKAGE_NAME + "AllowAllAuthorizer";
return name;
}
/**
* Authenticator is assumed to be a fully state-less immutable object (note all the const).
* We thus store a single instance globally, since it should be safe/ok.
*/
static std::unique_ptr<auth::authorizer> global_authorizer;
using authorizer_registry = class_registry<auth::authorizer, cql3::query_processor&>;
future<>
auth::authorizer::setup(const sstring& type) {
if (auth::auth::is_class_type(type, ALLOW_ALL_AUTHORIZER_NAME)) {
if (type == allow_all_authorizer_name()) {
class allow_all_authorizer : public authorizer {
public:
future<> start() override {
return make_ready_future<>();
}
future<> stop() override {
return make_ready_future<>();
}
const sstring& qualified_java_name() const override {
return allow_all_authorizer_name();
}
future<permission_set> authorize(::shared_ptr<authenticated_user>, data_resource) const override {
return make_ready_future<permission_set>(permissions::ALL);
}
@@ -86,16 +102,14 @@ auth::authorizer::setup(const sstring& type) {
};
global_authorizer = std::make_unique<allow_all_authorizer>();
} else if (auth::auth::is_class_type(type, default_authorizer::DEFAULT_AUTHORIZER_NAME)) {
auto da = std::make_unique<default_authorizer>();
auto f = da->init();
return f.then([da = std::move(da)]() mutable {
global_authorizer = std::move(da);
});
return make_ready_future();
} else {
throw exceptions::configuration_exception("Invalid authorizer type: " + type);
auto a = authorizer_registry::create(type, cql3::get_local_query_processor());
auto f = a->start();
return f.then([a = std::move(a)]() mutable {
global_authorizer = std::move(a);
});
}
return make_ready_future();
}
auth::authorizer& auth::authorizer::get() {


@@ -55,6 +55,8 @@
namespace auth {
class service;
class authenticated_user;
struct permission_details {
@@ -71,10 +73,14 @@ using std::experimental::optional;
class authorizer {
public:
static const sstring ALLOW_ALL_AUTHORIZER_NAME;
virtual ~authorizer() {}
virtual future<> start() = 0;
virtual future<> stop() = 0;
virtual const sstring& qualified_java_name() const = 0;
/**
* The primary Authorizer method. Returns a set of permissions of a user on a resource.
*
@@ -82,7 +88,7 @@ public:
* @param resource Resource for which the authorization is being requested. @see DataResource.
* @return Set of permissions of the user on the resource. Should never return empty. Use permission.NONE instead.
*/
virtual future<permission_set> authorize(::shared_ptr<authenticated_user>, data_resource) const = 0;
virtual future<permission_set> authorize(service&, ::shared_ptr<authenticated_user>, data_resource) const = 0;
/**
* Grants a set of permissions on a resource to a user.
@@ -126,7 +132,7 @@ public:
* @throws RequestValidationException
* @throws RequestExecutionException
*/
virtual future<std::vector<permission_details>> list(::shared_ptr<authenticated_user> performer, permission_set, optional<data_resource>, optional<sstring>) const = 0;
virtual future<std::vector<permission_details>> list(service&, ::shared_ptr<authenticated_user> performer, permission_set, optional<data_resource>, optional<sstring>) const = 0;
/**
* This method is called before deleting a user with DROP USER query so that a new user with the same
@@ -156,18 +162,6 @@ public:
* @throws ConfigurationException when there is a configuration error.
*/
virtual future<> validate_configuration() const = 0;
/**
* Setup is called once upon system startup to initialize the IAuthorizer.
*
* For example, use this method to create any required keyspaces/column families.
*/
static future<> setup(const sstring& type);
/**
* Returns the system authorizer. Must have called setup before calling this.
*/
static authorizer& get();
};
}

auth/common.cc Normal file

@@ -0,0 +1,70 @@
/*
* Copyright (C) 2017 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "auth/common.hh"
#include <seastar/core/shared_ptr.hh>
#include "cql3/query_processor.hh"
#include "cql3/statements/create_table_statement.hh"
#include "schema_builder.hh"
#include "service/migration_manager.hh"
namespace auth {
namespace meta {
const sstring DEFAULT_SUPERUSER_NAME("cassandra");
const sstring AUTH_KS("system_auth");
const sstring USERS_CF("users");
const sstring AUTH_PACKAGE_NAME("org.apache.cassandra.auth.");
}
future<> create_metadata_table_if_missing(
const sstring& table_name,
cql3::query_processor& qp,
const sstring& cql,
::service::migration_manager& mm) {
auto& db = qp.db().local();
if (db.has_schema(meta::AUTH_KS, table_name)) {
return make_ready_future<>();
}
auto parsed_statement = static_pointer_cast<cql3::statements::raw::cf_statement>(
cql3::query_processor::parse_statement(cql));
parsed_statement->prepare_keyspace(meta::AUTH_KS);
auto statement = static_pointer_cast<cql3::statements::create_table_statement>(
parsed_statement->prepare(db, qp.get_cql_stats())->statement);
const auto schema = statement->get_cf_meta_data();
const auto uuid = generate_legacy_id(schema->ks_name(), schema->cf_name());
schema_builder b(schema);
b.set_uuid(uuid);
return mm.announce_new_column_family(b.build(), false);
}
}

auth/common.hh Normal file

@@ -0,0 +1,74 @@
/*
* Copyright (C) 2017 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include <chrono>
#include <seastar/core/future.hh>
#include <seastar/core/reactor.hh>
#include <seastar/core/resource.hh>
#include <seastar/core/sstring.hh>
#include "delayed_tasks.hh"
#include "seastarx.hh"
namespace service {
class migration_manager;
}
namespace cql3 {
class query_processor;
}
namespace auth {
namespace meta {
extern const sstring DEFAULT_SUPERUSER_NAME;
extern const sstring AUTH_KS;
extern const sstring USERS_CF;
extern const sstring AUTH_PACKAGE_NAME;
}
template <class Task>
future<> once_among_shards(Task&& f) {
if (engine().cpu_id() == 0u) {
return f();
}
return make_ready_future<>();
}
template <class Task, class Clock>
void delay_until_system_ready(delayed_tasks<Clock>& ts, Task&& f) {
static const typename std::chrono::milliseconds delay_duration(10000);
ts.schedule_after(delay_duration, std::forward<Task>(f));
}
future<> create_metadata_table_if_missing(
const sstring& table_name,
cql3::query_processor&,
const sstring& cql,
::service::migration_manager&);
}


@@ -46,16 +46,19 @@
#include <seastar/core/reactor.hh>
#include "auth.hh"
#include "common.hh"
#include "default_authorizer.hh"
#include "authenticated_user.hh"
#include "permission.hh"
#include "cql3/query_processor.hh"
#include "cql3/untyped_result_set.hh"
#include "exceptions/exceptions.hh"
#include "log.hh"
const sstring auth::default_authorizer::DEFAULT_AUTHORIZER_NAME(
"org.apache.cassandra.auth.CassandraAuthorizer");
const sstring& auth::default_authorizer_name() {
static const sstring name = meta::AUTH_PACKAGE_NAME + "CassandraAuthorizer";
return name;
}
static const sstring USER_NAME = "username";
static const sstring RESOURCE_NAME = "resource";
@@ -64,28 +67,47 @@ static const sstring PERMISSIONS_CF = "permissions";
static logging::logger alogger("default_authorizer");
auth::default_authorizer::default_authorizer() {
// To ensure correct initialization order, we unfortunately need to use a string literal.
static const class_registrator<
auth::authorizer,
auth::default_authorizer,
cql3::query_processor&,
::service::migration_manager&> password_auth_reg("org.apache.cassandra.auth.CassandraAuthorizer");
auth::default_authorizer::default_authorizer(cql3::query_processor& qp, ::service::migration_manager& mm)
: _qp(qp)
, _migration_manager(mm) {
}
auth::default_authorizer::~default_authorizer() {
}
future<> auth::default_authorizer::init() {
sstring create_table = sprint("CREATE TABLE %s.%s ("
future<> auth::default_authorizer::start() {
static const sstring create_table = sprint("CREATE TABLE %s.%s ("
"%s text,"
"%s text,"
"%s set<text>,"
"PRIMARY KEY(%s, %s)"
") WITH gc_grace_seconds=%d", auth::auth::AUTH_KS,
") WITH gc_grace_seconds=%d", meta::AUTH_KS,
PERMISSIONS_CF, USER_NAME, RESOURCE_NAME, PERMISSIONS_NAME,
USER_NAME, RESOURCE_NAME, 90 * 24 * 60 * 60); // 3 months.
return auth::setup_table(PERMISSIONS_CF, create_table);
return auth::once_among_shards([this] {
return auth::create_metadata_table_if_missing(
PERMISSIONS_CF,
_qp,
create_table,
_migration_manager);
});
}
future<> auth::default_authorizer::stop() {
return make_ready_future<>();
}
future<auth::permission_set> auth::default_authorizer::authorize(
::shared_ptr<authenticated_user> user, data_resource resource) const {
return user->is_super().then([this, user, resource = std::move(resource)](bool is_super) {
service& ser, ::shared_ptr<authenticated_user> user, data_resource resource) const {
return auth::is_super_user(ser, *user).then([this, user, resource = std::move(resource)](bool is_super) {
if (is_super) {
return make_ready_future<permission_set>(permissions::ALL);
}
@@ -94,10 +116,9 @@ future<auth::permission_set> auth::default_authorizer::authorize(
* TODO: could create actual data type for permission (translating string<->perm),
* but this seems overkill right now. We still must store strings so...
*/
auto& qp = cql3::get_local_query_processor();
auto query = sprint("SELECT %s FROM %s.%s WHERE %s = ? AND %s = ?"
, PERMISSIONS_NAME, auth::AUTH_KS, PERMISSIONS_CF, USER_NAME, RESOURCE_NAME);
return qp.process(query, db::consistency_level::LOCAL_ONE, {user->name(), resource.name() })
, PERMISSIONS_NAME, meta::AUTH_KS, PERMISSIONS_CF, USER_NAME, RESOURCE_NAME);
return _qp.process(query, db::consistency_level::LOCAL_ONE, {user->name(), resource.name() })
.then_wrapped([=](future<::shared_ptr<cql3::untyped_result_set>> f) {
try {
auto res = f.get0();
@@ -120,11 +141,10 @@ future<> auth::default_authorizer::modify(
::shared_ptr<authenticated_user> performer, permission_set set,
data_resource resource, sstring user, sstring op) {
// TODO: why does this not check super user?
auto& qp = cql3::get_local_query_processor();
auto query = sprint("UPDATE %s.%s SET %s = %s %s ? WHERE %s = ? AND %s = ?",
auth::AUTH_KS, PERMISSIONS_CF, PERMISSIONS_NAME,
meta::AUTH_KS, PERMISSIONS_CF, PERMISSIONS_NAME,
PERMISSIONS_NAME, op, USER_NAME, RESOURCE_NAME);
return qp.process(query, db::consistency_level::ONE, {
return _qp.process(query, db::consistency_level::ONE, {
permissions::to_strings(set), user, resource.name() }).discard_result();
}
@@ -142,15 +162,14 @@ future<> auth::default_authorizer::revoke(
}
future<std::vector<auth::permission_details>> auth::default_authorizer::list(
::shared_ptr<authenticated_user> performer, permission_set set,
service& ser, ::shared_ptr<authenticated_user> performer, permission_set set,
optional<data_resource> resource, optional<sstring> user) const {
return performer->is_super().then([this, performer, set = std::move(set), resource = std::move(resource), user = std::move(user)](bool is_super) {
return auth::is_super_user(ser, *performer).then([this, performer, set = std::move(set), resource = std::move(resource), user = std::move(user)](bool is_super) {
if (!is_super && (!user || performer->name() != *user)) {
throw exceptions::unauthorized_exception(sprint("You are not authorized to view %s's permissions", user ? *user : "everyone"));
}
auto query = sprint("SELECT %s, %s, %s FROM %s.%s", USER_NAME, RESOURCE_NAME, PERMISSIONS_NAME, auth::AUTH_KS, PERMISSIONS_CF);
auto& qp = cql3::get_local_query_processor();
auto query = sprint("SELECT %s, %s, %s FROM %s.%s", USER_NAME, RESOURCE_NAME, PERMISSIONS_NAME, meta::AUTH_KS, PERMISSIONS_CF);
// Oh, look, it is a case where it does not pay off to have
// parameters to process in an initializer list.
@@ -158,15 +177,15 @@ future<std::vector<auth::permission_details>> auth::default_authorizer::list(
if (resource && user) {
query += sprint(" WHERE %s = ? AND %s = ?", USER_NAME, RESOURCE_NAME);
f = qp.process(query, db::consistency_level::ONE, {*user, resource->name()});
f = _qp.process(query, db::consistency_level::ONE, {*user, resource->name()});
} else if (resource) {
query += sprint(" WHERE %s = ? ALLOW FILTERING", RESOURCE_NAME);
f = qp.process(query, db::consistency_level::ONE, {resource->name()});
f = _qp.process(query, db::consistency_level::ONE, {resource->name()});
} else if (user) {
query += sprint(" WHERE %s = ?", USER_NAME);
f = qp.process(query, db::consistency_level::ONE, {*user});
f = _qp.process(query, db::consistency_level::ONE, {*user});
} else {
f = qp.process(query, db::consistency_level::ONE, {});
f = _qp.process(query, db::consistency_level::ONE, {});
}
return f.then([set](::shared_ptr<cql3::untyped_result_set> res) {
@@ -188,10 +207,9 @@ future<std::vector<auth::permission_details>> auth::default_authorizer::list(
}
future<> auth::default_authorizer::revoke_all(sstring dropped_user) {
auto& qp = cql3::get_local_query_processor();
auto query = sprint("DELETE FROM %s.%s WHERE %s = ?", auth::AUTH_KS,
auto query = sprint("DELETE FROM %s.%s WHERE %s = ?", meta::AUTH_KS,
PERMISSIONS_CF, USER_NAME);
return qp.process(query, db::consistency_level::ONE, { dropped_user }).discard_result().handle_exception(
return _qp.process(query, db::consistency_level::ONE, { dropped_user }).discard_result().handle_exception(
[dropped_user](auto ep) {
try {
std::rethrow_exception(ep);
@@ -202,17 +220,16 @@ future<> auth::default_authorizer::revoke_all(sstring dropped_user) {
}
future<> auth::default_authorizer::revoke_all(data_resource resource) {
auto& qp = cql3::get_local_query_processor();
auto query = sprint("SELECT %s FROM %s.%s WHERE %s = ? ALLOW FILTERING",
USER_NAME, auth::AUTH_KS, PERMISSIONS_CF, RESOURCE_NAME);
return qp.process(query, db::consistency_level::LOCAL_ONE, { resource.name() })
.then_wrapped([resource, &qp](future<::shared_ptr<cql3::untyped_result_set>> f) {
USER_NAME, meta::AUTH_KS, PERMISSIONS_CF, RESOURCE_NAME);
return _qp.process(query, db::consistency_level::LOCAL_ONE, { resource.name() })
.then_wrapped([this, resource](future<::shared_ptr<cql3::untyped_result_set>> f) {
try {
auto res = f.get0();
return parallel_for_each(res->begin(), res->end(), [&qp, res, resource](const cql3::untyped_result_set::row& r) {
return parallel_for_each(res->begin(), res->end(), [this, res, resource](const cql3::untyped_result_set::row& r) {
auto query = sprint("DELETE FROM %s.%s WHERE %s = ? AND %s = ?"
, auth::AUTH_KS, PERMISSIONS_CF, USER_NAME, RESOURCE_NAME);
return qp.process(query, db::consistency_level::LOCAL_ONE, { r.get_as<sstring>(USER_NAME), resource.name() })
, meta::AUTH_KS, PERMISSIONS_CF, USER_NAME, RESOURCE_NAME);
return _qp.process(query, db::consistency_level::LOCAL_ONE, { r.get_as<sstring>(USER_NAME), resource.name() })
.discard_result().handle_exception([resource](auto ep) {
try {
std::rethrow_exception(ep);
@@ -231,7 +248,7 @@ future<> auth::default_authorizer::revoke_all(data_resource resource) {
const auth::resource_ids& auth::default_authorizer::protected_resources() {
static const resource_ids ids({ data_resource(auth::AUTH_KS, PERMISSIONS_CF) });
static const resource_ids ids({ data_resource(meta::AUTH_KS, PERMISSIONS_CF) });
return ids;
}


@@ -41,26 +41,40 @@
#pragma once
#include <functional>
#include "authorizer.hh"
#include "cql3/query_processor.hh"
#include "service/migration_manager.hh"
namespace auth {
class default_authorizer : public authorizer {
public:
static const sstring DEFAULT_AUTHORIZER_NAME;
const sstring& default_authorizer_name();
default_authorizer();
class default_authorizer : public authorizer {
cql3::query_processor& _qp;
::service::migration_manager& _migration_manager;
public:
default_authorizer(cql3::query_processor&, ::service::migration_manager&);
~default_authorizer();
future<> init();
future<> start() override;
future<permission_set> authorize(::shared_ptr<authenticated_user>, data_resource) const override;
future<> stop() override;
const sstring& qualified_java_name() const override {
return default_authorizer_name();
}
future<permission_set> authorize(service&, ::shared_ptr<authenticated_user>, data_resource) const override;
future<> grant(::shared_ptr<authenticated_user>, permission_set, data_resource, sstring) override;
future<> revoke(::shared_ptr<authenticated_user>, permission_set, data_resource, sstring) override;
future<std::vector<permission_details>> list(::shared_ptr<authenticated_user>, permission_set, optional<data_resource>, optional<sstring>) const override;
future<std::vector<permission_details>> list(service&, ::shared_ptr<authenticated_user>, permission_set, optional<data_resource>, optional<sstring>) const override;
future<> revoke_all(sstring) override;


@@ -46,28 +46,42 @@
#include <seastar/core/reactor.hh>
#include "auth.hh"
#include "common.hh"
#include "password_authenticator.hh"
#include "authenticated_user.hh"
#include "cql3/query_processor.hh"
#include "cql3/untyped_result_set.hh"
#include "log.hh"
#include "service/migration_manager.hh"
#include "utils/class_registrator.hh"
const sstring auth::password_authenticator::PASSWORD_AUTHENTICATOR_NAME("org.apache.cassandra.auth.PasswordAuthenticator");
const sstring& auth::password_authenticator_name() {
static const sstring name = meta::AUTH_PACKAGE_NAME + "PasswordAuthenticator";
return name;
}
// name of the hash column.
static const sstring SALTED_HASH = "salted_hash";
static const sstring USER_NAME = "username";
static const sstring DEFAULT_USER_NAME = auth::auth::DEFAULT_SUPERUSER_NAME;
static const sstring DEFAULT_USER_PASSWORD = auth::auth::DEFAULT_SUPERUSER_NAME;
static const sstring DEFAULT_USER_NAME = auth::meta::DEFAULT_SUPERUSER_NAME;
static const sstring DEFAULT_USER_PASSWORD = auth::meta::DEFAULT_SUPERUSER_NAME;
static const sstring CREDENTIALS_CF = "credentials";
static logging::logger plogger("password_authenticator");
// To ensure correct initialization order, we unfortunately need to use a string literal.
static const class_registrator<
auth::authenticator,
auth::password_authenticator,
cql3::query_processor&,
::service::migration_manager&> password_auth_reg("org.apache.cassandra.auth.PasswordAuthenticator");
auth::password_authenticator::~password_authenticator()
{}
auth::password_authenticator::password_authenticator()
{}
auth::password_authenticator::password_authenticator(cql3::query_processor& qp, ::service::migration_manager& mm)
: _qp(qp)
, _migration_manager(mm) {
}
// TODO: blowfish
// Origin uses Java bcrypt library, i.e. blowfish salt
@@ -88,12 +102,10 @@ auth::password_authenticator::password_authenticator()
// and some old-fashioned random salt generation.
static constexpr size_t rand_bytes = 16;
static thread_local crypt_data tlcrypt = { 0, };
static sstring hashpw(const sstring& pass, const sstring& salt) {
// crypt_data is huge. should this be a thread_local static?
auto tmp = std::make_unique<crypt_data>();
tmp->initialized = 0;
auto res = crypt_r(pass.c_str(), salt.c_str(), tmp.get());
auto res = crypt_r(pass.c_str(), salt.c_str(), &tlcrypt);
if (res == nullptr) {
throw std::system_error(errno, std::system_category());
}
@@ -122,17 +134,16 @@ static sstring gensalt() {
sstring salt;
if (!prefix.empty()) {
return prefix + salt;
return prefix + input;
}
auto tmp = std::make_unique<crypt_data>();
tmp->initialized = 0;
// Try in order:
// blowfish 2011 fix, blowfish, sha512, sha256, md5
for (sstring pfx : { "$2y$", "$2a$", "$6$", "$5$", "$1$" }) {
salt = pfx + input;
if (crypt_r("fisk", salt.c_str(), tmp.get())) {
const char* e = crypt_r("fisk", salt.c_str(), &tlcrypt);
if (e && (e[0] != '*')) {
prefix = pfx;
return salt;
}
@@ -144,39 +155,52 @@ static sstring hashpw(const sstring& pass) {
return hashpw(pass, gensalt());
}
future<> auth::password_authenticator::init() {
gensalt(); // do this once to determine usable hashing
future<> auth::password_authenticator::start() {
return auth::once_among_shards([this] {
gensalt(); // do this once to determine usable hashing
sstring create_table = sprint(
"CREATE TABLE %s.%s ("
"%s text,"
"%s text," // salt + hash + number of rounds
"options map<text,text>,"// for future extensions
"PRIMARY KEY(%s)"
") WITH gc_grace_seconds=%d",
auth::auth::AUTH_KS,
CREDENTIALS_CF, USER_NAME, SALTED_HASH, USER_NAME,
90 * 24 * 60 * 60); // 3 months.
static const sstring create_table = sprint(
"CREATE TABLE %s.%s ("
"%s text,"
"%s text," // salt + hash + number of rounds
"options map<text,text>,"// for future extensions
"PRIMARY KEY(%s)"
") WITH gc_grace_seconds=%d",
meta::AUTH_KS,
CREDENTIALS_CF, USER_NAME, SALTED_HASH, USER_NAME,
90 * 24 * 60 * 60); // 3 months.
return auth::setup_table(CREDENTIALS_CF, create_table).then([this] {
// instead of once-timer, just schedule this later
auth::schedule_when_up([] {
return auth::has_existing_users(CREDENTIALS_CF, DEFAULT_USER_NAME, USER_NAME).then([](bool exists) {
if (!exists) {
cql3::get_local_query_processor().process(sprint("INSERT INTO %s.%s (%s, %s) VALUES (?, ?) USING TIMESTAMP 0",
auth::AUTH_KS,
CREDENTIALS_CF,
USER_NAME, SALTED_HASH
),
db::consistency_level::ONE, {DEFAULT_USER_NAME, hashpw(DEFAULT_USER_PASSWORD)}).then([](auto) {
plogger.info("Created default user '{}'", DEFAULT_USER_NAME);
});
}
return auth::create_metadata_table_if_missing(
CREDENTIALS_CF,
_qp,
create_table,
_migration_manager).then([this] {
auth::delay_until_system_ready(_delayed, [this] {
return has_existing_users().then([this](bool existing) {
if (!existing) {
return _qp.process(
sprint(
"INSERT INTO %s.%s (%s, %s) VALUES (?, ?) USING TIMESTAMP 0",
meta::AUTH_KS,
CREDENTIALS_CF,
USER_NAME, SALTED_HASH),
db::consistency_level::ONE,
{ DEFAULT_USER_NAME, hashpw(DEFAULT_USER_PASSWORD) }).then([](auto) {
plogger.info("Created default user '{}'", DEFAULT_USER_NAME);
});
}
return make_ready_future<>();
});
});
});
});
}
future<> auth::password_authenticator::stop() {
return make_ready_future<>();
}
db::consistency_level auth::password_authenticator::consistency_for_user(const sstring& username) {
if (username == DEFAULT_USER_NAME) {
return db::consistency_level::QUORUM;
@@ -184,8 +208,8 @@ db::consistency_level auth::password_authenticator::consistency_for_user(const s
return db::consistency_level::LOCAL_ONE;
}
const sstring& auth::password_authenticator::class_name() const {
return PASSWORD_AUTHENTICATOR_NAME;
const sstring& auth::password_authenticator::qualified_java_name() const {
return password_authenticator_name();
}
bool auth::password_authenticator::require_authentication() const {
@@ -218,9 +242,8 @@ future<::shared_ptr<auth::authenticated_user> > auth::password_authenticator::au
// Rely on query processing caching statements instead, and lets assume
// that a map lookup string->statement is not gonna kill us much.
return futurize_apply([this, username, password] {
auto& qp = cql3::get_local_query_processor();
return qp.process(sprint("SELECT %s FROM %s.%s WHERE %s = ?", SALTED_HASH,
auth::AUTH_KS, CREDENTIALS_CF, USER_NAME),
return _qp.process(sprint("SELECT %s FROM %s.%s WHERE %s = ?", SALTED_HASH,
meta::AUTH_KS, CREDENTIALS_CF, USER_NAME),
consistency_for_user(username), {username}, true);
}).then_wrapped([=](future<::shared_ptr<cql3::untyped_result_set>> f) {
try {
@@ -244,9 +267,8 @@ future<> auth::password_authenticator::create(sstring username,
try {
auto password = boost::any_cast<sstring>(options.at(option::PASSWORD));
auto query = sprint("INSERT INTO %s.%s (%s, %s) VALUES (?, ?)",
auth::AUTH_KS, CREDENTIALS_CF, USER_NAME, SALTED_HASH);
auto& qp = cql3::get_local_query_processor();
return qp.process(query, consistency_for_user(username), { username, hashpw(password) }).discard_result();
meta::AUTH_KS, CREDENTIALS_CF, USER_NAME, SALTED_HASH);
return _qp.process(query, consistency_for_user(username), { username, hashpw(password) }).discard_result();
} catch (std::out_of_range&) {
throw exceptions::invalid_request_exception("PasswordAuthenticator requires PASSWORD option");
}
@@ -257,9 +279,8 @@ future<> auth::password_authenticator::alter(sstring username,
try {
auto password = boost::any_cast<sstring>(options.at(option::PASSWORD));
auto query = sprint("UPDATE %s.%s SET %s = ? WHERE %s = ?",
auth::AUTH_KS, CREDENTIALS_CF, SALTED_HASH, USER_NAME);
auto& qp = cql3::get_local_query_processor();
return qp.process(query, consistency_for_user(username), { hashpw(password), username }).discard_result();
meta::AUTH_KS, CREDENTIALS_CF, SALTED_HASH, USER_NAME);
return _qp.process(query, consistency_for_user(username), { hashpw(password), username }).discard_result();
} catch (std::out_of_range&) {
throw exceptions::invalid_request_exception("PasswordAuthenticator requires PASSWORD option");
}
@@ -268,24 +289,24 @@ future<> auth::password_authenticator::alter(sstring username,
future<> auth::password_authenticator::drop(sstring username) {
try {
auto query = sprint("DELETE FROM %s.%s WHERE %s = ?",
auth::AUTH_KS, CREDENTIALS_CF, USER_NAME);
auto& qp = cql3::get_local_query_processor();
return qp.process(query, consistency_for_user(username), { username }).discard_result();
meta::AUTH_KS, CREDENTIALS_CF, USER_NAME);
return _qp.process(query, consistency_for_user(username), { username }).discard_result();
} catch (std::out_of_range&) {
throw exceptions::invalid_request_exception("PasswordAuthenticator requires PASSWORD option");
}
}
const auth::resource_ids& auth::password_authenticator::protected_resources() const {
static const resource_ids ids({ data_resource(auth::AUTH_KS, CREDENTIALS_CF) });
static const resource_ids ids({ data_resource(meta::AUTH_KS, CREDENTIALS_CF) });
return ids;
}
::shared_ptr<auth::authenticator::sasl_challenge> auth::password_authenticator::new_sasl_challenge() const {
class plain_text_password_challenge: public sasl_challenge {
const password_authenticator& _self;
public:
plain_text_password_challenge(const password_authenticator& a)
: _authenticator(a)
plain_text_password_challenge(const password_authenticator& self) : _self(self)
{}
/**
@@ -340,12 +361,58 @@ const auth::resource_ids& auth::password_authenticator::protected_resources() co
return _complete;
}
future<::shared_ptr<authenticated_user>> get_authenticated_user() const override {
return _authenticator.authenticate(_credentials);
return _self.authenticate(_credentials);
}
private:
const password_authenticator& _authenticator;
credentials_map _credentials;
bool _complete = false;
};
return ::make_shared<plain_text_password_challenge>(*this);
}
//
// Similar in structure to `auth::service::has_existing_users()`, but trying to generalize the pattern breaks all kinds
// of module boundaries and leaks implementation details.
//
future<bool> auth::password_authenticator::has_existing_users() const {
static const sstring default_user_query = sprint(
"SELECT * FROM %s.%s WHERE %s = ?",
meta::AUTH_KS,
CREDENTIALS_CF,
USER_NAME);
static const sstring all_users_query = sprint(
"SELECT * FROM %s.%s LIMIT 1",
meta::AUTH_KS,
CREDENTIALS_CF);
// This logic is borrowed directly from Apache Cassandra. By first checking for the presence of the default user, we
// can potentially avoid doing a range query with a high consistency level.
return _qp.process(
default_user_query,
db::consistency_level::ONE,
{ meta::DEFAULT_SUPERUSER_NAME },
true).then([this](auto results) {
if (!results->empty()) {
return make_ready_future<bool>(true);
}
return _qp.process(
default_user_query,
db::consistency_level::QUORUM,
{ meta::DEFAULT_SUPERUSER_NAME },
true).then([this](auto results) {
if (!results->empty()) {
return make_ready_future<bool>(true);
}
return _qp.process(
all_users_query,
db::consistency_level::QUORUM).then([](auto results) {
return make_ready_future<bool>(!results->empty());
});
});
});
}


@@ -42,19 +42,33 @@
#pragma once
#include "authenticator.hh"
#include "cql3/query_processor.hh"
#include "delayed_tasks.hh"
namespace service {
class migration_manager;
}
namespace auth {
class password_authenticator : public authenticator {
public:
static const sstring PASSWORD_AUTHENTICATOR_NAME;
const sstring& password_authenticator_name();
password_authenticator();
class password_authenticator : public authenticator {
cql3::query_processor& _qp;
::service::migration_manager& _migration_manager;
delayed_tasks<> _delayed{};
public:
password_authenticator(cql3::query_processor&, ::service::migration_manager&);
~password_authenticator();
future<> init();
future<> start() override;
const sstring& class_name() const override;
future<> stop() override;
const sstring& qualified_java_name() const override;
bool require_authentication() const override;
option_set supported_options() const override;
option_set alterable_options() const override;
@@ -67,6 +81,9 @@ public:
static db::consistency_level consistency_for_user(const sstring& username);
private:
future<bool> has_existing_users() const;
};
}

auth/permissions_cache.cc Normal file

@@ -0,0 +1,51 @@
/*
* Copyright (C) 2017 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "auth/permissions_cache.hh"
#include "auth/authorizer.hh"
#include "auth/common.hh"
#include "auth/service.hh"
#include "db/config.hh"
namespace auth {
permissions_cache_config permissions_cache_config::from_db_config(const db::config& dc) {
permissions_cache_config c;
c.max_entries = dc.permissions_cache_max_entries();
c.validity_period = std::chrono::milliseconds(dc.permissions_validity_in_ms());
c.update_period = std::chrono::milliseconds(dc.permissions_update_interval_in_ms());
return c;
}
permissions_cache::permissions_cache(const permissions_cache_config& c, service& ser, logging::logger& log)
: _cache(c.max_entries, c.validity_period, c.update_period, log, [&ser, &log](const key_type& k) {
log.debug("Refreshing permissions for {}", k.first.name());
return ser.underlying_authorizer().authorize(ser, ::make_shared<authenticated_user>(k.first), k.second);
}) {
}
future<permission_set> permissions_cache::get(::shared_ptr<authenticated_user> user, data_resource r) {
return _cache.get(key_type(*user, r));
}
}

auth/permissions_cache.hh

@@ -0,0 +1,99 @@
/*
* Copyright (C) 2017 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include <chrono>
#include <functional>
#include <iostream>
#include <utility>
#include <seastar/core/future.hh>
#include <seastar/core/shared_ptr.hh>
#include "auth/authenticated_user.hh"
#include "auth/data_resource.hh"
#include "auth/permission.hh"
#include "log.hh"
#include "utils/loading_cache.hh"
namespace std {
template <>
struct hash<auth::data_resource> final {
size_t operator()(const auth::data_resource & v) const {
return v.hash_value();
}
};
template <>
struct hash<auth::authenticated_user> final {
size_t operator()(const auth::authenticated_user & v) const {
return utils::tuple_hash()(v.name(), v.is_anonymous());
}
};
inline std::ostream& operator<<(std::ostream& os, const std::pair<auth::authenticated_user, auth::data_resource>& p) {
os << "{user: " << p.first.name() << ", data_resource: " << p.second << "}";
return os;
}
}
namespace db {
class config;
}
namespace auth {
class service;
struct permissions_cache_config final {
static permissions_cache_config from_db_config(const db::config&);
std::size_t max_entries;
std::chrono::milliseconds validity_period;
std::chrono::milliseconds update_period;
};
class permissions_cache final {
using cache_type = utils::loading_cache<
std::pair<authenticated_user, data_resource>,
permission_set,
utils::loading_cache_reload_enabled::yes,
utils::simple_entry_size<permission_set>,
utils::tuple_hash>;
using key_type = typename cache_type::key_type;
cache_type _cache;
public:
explicit permissions_cache(const permissions_cache_config&, service&, logging::logger&);
future<> stop() {
return _cache.stop();
}
future<permission_set> get(::shared_ptr<authenticated_user>, data_resource);
};
}

auth/service.cc

@@ -0,0 +1,355 @@
/*
* Copyright (C) 2017 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "auth/service.hh"
#include <map>
#include <seastar/core/future-util.hh>
#include <seastar/core/shared_ptr.hh>
#include "auth/allow_all_authenticator.hh"
#include "auth/allow_all_authorizer.hh"
#include "auth/common.hh"
#include "cql3/query_processor.hh"
#include "cql3/untyped_result_set.hh"
#include "db/config.hh"
#include "db/consistency_level.hh"
#include "exceptions/exceptions.hh"
#include "log.hh"
#include "service/migration_listener.hh"
#include "utils/class_registrator.hh"
namespace auth {
namespace meta {
static const sstring user_name_col_name("name");
static const sstring superuser_col_name("super");
}
static logging::logger log("auth_service");
class auth_migration_listener final : public ::service::migration_listener {
authorizer& _authorizer;
public:
explicit auth_migration_listener(authorizer& a) : _authorizer(a) {
}
private:
void on_create_keyspace(const sstring& ks_name) override {}
void on_create_column_family(const sstring& ks_name, const sstring& cf_name) override {}
void on_create_user_type(const sstring& ks_name, const sstring& type_name) override {}
void on_create_function(const sstring& ks_name, const sstring& function_name) override {}
void on_create_aggregate(const sstring& ks_name, const sstring& aggregate_name) override {}
void on_create_view(const sstring& ks_name, const sstring& view_name) override {}
void on_update_keyspace(const sstring& ks_name) override {}
void on_update_column_family(const sstring& ks_name, const sstring& cf_name, bool) override {}
void on_update_user_type(const sstring& ks_name, const sstring& type_name) override {}
void on_update_function(const sstring& ks_name, const sstring& function_name) override {}
void on_update_aggregate(const sstring& ks_name, const sstring& aggregate_name) override {}
void on_update_view(const sstring& ks_name, const sstring& view_name, bool columns_changed) override {}
void on_drop_keyspace(const sstring& ks_name) override {
_authorizer.revoke_all(auth::data_resource(ks_name));
}
void on_drop_column_family(const sstring& ks_name, const sstring& cf_name) override {
_authorizer.revoke_all(auth::data_resource(ks_name, cf_name));
}
void on_drop_user_type(const sstring& ks_name, const sstring& type_name) override {}
void on_drop_function(const sstring& ks_name, const sstring& function_name) override {}
void on_drop_aggregate(const sstring& ks_name, const sstring& aggregate_name) override {}
void on_drop_view(const sstring& ks_name, const sstring& view_name) override {}
};
static db::consistency_level consistency_for_user(const sstring& name) {
if (name == meta::DEFAULT_SUPERUSER_NAME) {
return db::consistency_level::QUORUM;
} else {
return db::consistency_level::LOCAL_ONE;
}
}
static future<::shared_ptr<cql3::untyped_result_set>> select_user(cql3::query_processor& qp, const sstring& name) {
// There used to be a thread-local, explicit cache of prepared statements here. In normal
// execution that is fine, but since in testing we set up and tear down the system over and
// over, we would start using obsolete prepared statements pretty quickly.
// Rely on the query processor's statement caching instead, and assume that a
// string->statement map lookup is not going to cost us much.
return qp.process(
sprint(
"SELECT * FROM %s.%s WHERE %s = ?",
meta::AUTH_KS,
meta::USERS_CF,
meta::user_name_col_name),
consistency_for_user(name),
{ name },
true);
}
service_config service_config::from_db_config(const db::config& dc) {
const qualified_name qualified_authorizer_name(meta::AUTH_PACKAGE_NAME, dc.authorizer());
const qualified_name qualified_authenticator_name(meta::AUTH_PACKAGE_NAME, dc.authenticator());
service_config c;
c.authorizer_java_name = qualified_authorizer_name;
c.authenticator_java_name = qualified_authenticator_name;
return c;
}
service::service(
permissions_cache_config c,
cql3::query_processor& qp,
::service::migration_manager& mm,
std::unique_ptr<authorizer> a,
std::unique_ptr<authenticator> b)
: _permissions_cache_config(std::move(c))
, _permissions_cache(nullptr)
, _qp(qp)
, _migration_manager(mm)
, _authorizer(std::move(a))
, _authenticator(std::move(b))
, _migration_listener(std::make_unique<auth_migration_listener>(*_authorizer)) {
}
service::service(
permissions_cache_config cache_config,
cql3::query_processor& qp,
::service::migration_manager& mm,
const service_config& sc)
: service(
std::move(cache_config),
qp,
mm,
create_object<authorizer>(sc.authorizer_java_name, qp, mm),
create_object<authenticator>(sc.authenticator_java_name, qp, mm)) {
}
bool service::should_create_metadata() const {
const bool null_authorizer = _authorizer->qualified_java_name() == allow_all_authorizer_name();
const bool null_authenticator = _authenticator->qualified_java_name() == allow_all_authenticator_name();
return !null_authorizer || !null_authenticator;
}
future<> service::create_metadata_if_missing() {
auto& db = _qp.db().local();
auto f = make_ready_future<>();
if (!db.has_keyspace(meta::AUTH_KS)) {
std::map<sstring, sstring> opts{{"replication_factor", "1"}};
auto ksm = keyspace_metadata::new_keyspace(
meta::AUTH_KS,
"org.apache.cassandra.locator.SimpleStrategy",
opts,
true);
// We use min_timestamp so that default keyspace metadata will lose to any manual adjustments.
// See issue #2129.
f = _migration_manager.announce_new_keyspace(ksm, api::min_timestamp, false);
}
return f.then([this] {
// 3 months.
static const auto gc_grace_seconds = 90 * 24 * 60 * 60;
static const sstring users_table_query = sprint(
"CREATE TABLE %s.%s (%s text, %s boolean, PRIMARY KEY (%s)) WITH gc_grace_seconds=%s",
meta::AUTH_KS,
meta::USERS_CF,
meta::user_name_col_name,
meta::superuser_col_name,
meta::user_name_col_name,
gc_grace_seconds);
return create_metadata_table_if_missing(
meta::USERS_CF,
_qp,
users_table_query,
_migration_manager);
}).then([this] {
delay_until_system_ready(_delayed, [this] {
return has_existing_users().then([this](bool existing) {
if (!existing) {
//
// Create default superuser.
//
static const sstring query = sprint(
"INSERT INTO %s.%s (%s, %s) VALUES (?, ?) USING TIMESTAMP 0",
meta::AUTH_KS,
meta::USERS_CF,
meta::user_name_col_name,
meta::superuser_col_name);
return _qp.process(
query,
db::consistency_level::ONE,
{ meta::DEFAULT_SUPERUSER_NAME, true }).then([](auto&&) {
log.info("Created default superuser '{}'", meta::DEFAULT_SUPERUSER_NAME);
}).handle_exception([](auto exn) {
try {
std::rethrow_exception(exn);
} catch (const exceptions::request_execution_exception&) {
log.warn("Skipped default superuser setup: some nodes were not ready");
}
}).discard_result();
}
return make_ready_future<>();
});
});
return make_ready_future<>();
});
}
future<> service::start() {
return once_among_shards([this] {
if (should_create_metadata()) {
return create_metadata_if_missing();
}
return make_ready_future<>();
}).then([this] {
return when_all_succeed(_authorizer->start(), _authenticator->start());
}).then([this] {
_permissions_cache = std::make_unique<permissions_cache>(_permissions_cache_config, *this, log);
}).then([this] {
return once_among_shards([this] {
_migration_manager.register_listener(_migration_listener.get());
return make_ready_future<>();
});
});
}
future<> service::stop() {
return once_among_shards([this] {
_delayed.cancel_all();
return make_ready_future<>();
}).then([this] {
return _permissions_cache->stop();
}).then([this] {
return when_all_succeed(_authorizer->stop(), _authenticator->stop());
});
}
future<bool> service::has_existing_users() const {
static const sstring default_user_query = sprint(
"SELECT * FROM %s.%s WHERE %s = ?",
meta::AUTH_KS,
meta::USERS_CF,
meta::user_name_col_name);
static const sstring all_users_query = sprint(
"SELECT * FROM %s.%s LIMIT 1",
meta::AUTH_KS,
meta::USERS_CF);
// This logic is borrowed directly from Apache Cassandra. By first checking for the presence of the default user, we
// can potentially avoid doing a range query with a high consistency level.
return _qp.process(
default_user_query,
db::consistency_level::ONE,
{ meta::DEFAULT_SUPERUSER_NAME },
true).then([this](auto results) {
if (!results->empty()) {
return make_ready_future<bool>(true);
}
return _qp.process(
default_user_query,
db::consistency_level::QUORUM,
{ meta::DEFAULT_SUPERUSER_NAME },
true).then([this](auto results) {
if (!results->empty()) {
return make_ready_future<bool>(true);
}
return _qp.process(
all_users_query,
db::consistency_level::QUORUM).then([](auto results) {
return make_ready_future<bool>(!results->empty());
});
});
});
}
future<bool> service::is_existing_user(const sstring& name) const {
return select_user(_qp, name).then([](auto results) {
return !results->empty();
});
}
future<bool> service::is_super_user(const sstring& name) const {
return select_user(_qp, name).then([](auto results) {
return !results->empty() && results->one().template get_as<bool>(meta::superuser_col_name);
});
}
future<> service::insert_user(const sstring& name, bool is_superuser) {
return _qp.process(
sprint(
"INSERT INTO %s.%s (%s, %s) VALUES (?, ?)",
meta::AUTH_KS,
meta::USERS_CF,
meta::user_name_col_name,
meta::superuser_col_name),
consistency_for_user(name),
{ name, is_superuser }).discard_result();
}
future<> service::delete_user(const sstring& name) {
return _qp.process(
sprint(
"DELETE FROM %s.%s WHERE %s = ?",
meta::AUTH_KS,
meta::USERS_CF,
meta::user_name_col_name),
consistency_for_user(name),
{ name }).discard_result();
}
future<permission_set> service::get_permissions(::shared_ptr<authenticated_user> u, data_resource r) const {
return _permissions_cache->get(std::move(u), std::move(r));
}
//
// Free functions.
//
future<bool> is_super_user(const service& ser, const authenticated_user& u) {
if (u.is_anonymous()) {
return make_ready_future<bool>(false);
}
return ser.is_super_user(u.name());
}
}

auth/service.hh

@@ -0,0 +1,133 @@
/*
* Copyright (C) 2017 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include <memory>
#include <seastar/core/future.hh>
#include <seastar/core/sstring.hh>
#include "auth/authenticator.hh"
#include "auth/authorizer.hh"
#include "auth/authenticated_user.hh"
#include "auth/permission.hh"
#include "auth/permissions_cache.hh"
#include "delayed_tasks.hh"
#include "seastarx.hh"
namespace cql3 {
class query_processor;
}
namespace db {
class config;
}
namespace service {
class migration_manager;
class migration_listener;
}
namespace auth {
class authenticator;
class authorizer;
struct service_config final {
static service_config from_db_config(const db::config&);
sstring authorizer_java_name;
sstring authenticator_java_name;
};
class service final {
permissions_cache_config _permissions_cache_config;
std::unique_ptr<permissions_cache> _permissions_cache;
cql3::query_processor& _qp;
::service::migration_manager& _migration_manager;
std::unique_ptr<authorizer> _authorizer;
std::unique_ptr<authenticator> _authenticator;
// Only one of these should be registered, so we end up with some unused instances. Not the end of the world.
std::unique_ptr<::service::migration_listener> _migration_listener;
delayed_tasks<> _delayed{};
public:
service(
permissions_cache_config,
cql3::query_processor&,
::service::migration_manager&,
std::unique_ptr<authorizer>,
std::unique_ptr<authenticator>);
service(
permissions_cache_config,
cql3::query_processor&,
::service::migration_manager&,
const service_config&);
future<> start();
future<> stop();
future<bool> is_existing_user(const sstring& name) const;
future<bool> is_super_user(const sstring& name) const;
future<> insert_user(const sstring& name, bool is_superuser);
future<> delete_user(const sstring& name);
future<permission_set> get_permissions(::shared_ptr<authenticated_user>, data_resource) const;
authenticator& underlying_authenticator() {
return *_authenticator;
}
const authenticator& underlying_authenticator() const {
return *_authenticator;
}
authorizer& underlying_authorizer() {
return *_authorizer;
}
const authorizer& underlying_authorizer() const {
return *_authorizer;
}
private:
future<bool> has_existing_users() const;
bool should_create_metadata() const;
future<> create_metadata_if_missing();
};
future<bool> is_super_user(const service&, const authenticated_user&);
}

auth/transitional.cc

@@ -0,0 +1,232 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright (C) 2017 ScyllaDB
*
* Modified by ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "authenticated_user.hh"
#include "authenticator.hh"
#include "authorizer.hh"
#include "password_authenticator.hh"
#include "default_authorizer.hh"
#include "permission.hh"
#include "db/config.hh"
#include "utils/class_registrator.hh"
namespace auth {
class service;
static const sstring PACKAGE_NAME("com.scylladb.auth.");
static const sstring& transitional_authenticator_name() {
static const sstring name = PACKAGE_NAME + "TransitionalAuthenticator";
return name;
}
static const sstring& transitional_authorizer_name() {
static const sstring name = PACKAGE_NAME + "TransitionalAuthorizer";
return name;
}
class transitional_authenticator : public authenticator {
std::unique_ptr<authenticator> _authenticator;
public:
static const sstring PASSWORD_AUTHENTICATOR_NAME;
transitional_authenticator(cql3::query_processor& qp, ::service::migration_manager& mm)
: transitional_authenticator(std::make_unique<password_authenticator>(qp, mm))
{}
transitional_authenticator(std::unique_ptr<authenticator> a)
: _authenticator(std::move(a))
{}
future<> start() override {
return _authenticator->start();
}
future<> stop() override {
return _authenticator->stop();
}
const sstring& qualified_java_name() const override {
return transitional_authenticator_name();
}
bool require_authentication() const override {
return true;
}
option_set supported_options() const override {
return _authenticator->supported_options();
}
option_set alterable_options() const override {
return _authenticator->alterable_options();
}
future<::shared_ptr<authenticated_user>> authenticate(const credentials_map& credentials) const override {
auto i = credentials.find(authenticator::USERNAME_KEY);
if ((i == credentials.end() || i->second.empty()) && (!credentials.count(PASSWORD_KEY) || credentials.at(PASSWORD_KEY).empty())) {
// return anon user
return make_ready_future<::shared_ptr<authenticated_user>>(::make_shared<authenticated_user>());
}
return make_ready_future().then([this, &credentials] {
return _authenticator->authenticate(credentials);
}).handle_exception([](auto ep) {
try {
std::rethrow_exception(ep);
} catch (exceptions::authentication_exception&) {
// return anon user
return make_ready_future<::shared_ptr<authenticated_user>>(::make_shared<authenticated_user>());
}
});
}
future<> create(sstring username, const option_map& options) override {
return _authenticator->create(username, options);
}
future<> alter(sstring username, const option_map& options) override {
return _authenticator->alter(username, options);
}
future<> drop(sstring username) override {
return _authenticator->drop(username);
}
const resource_ids& protected_resources() const override {
return _authenticator->protected_resources();
}
::shared_ptr<sasl_challenge> new_sasl_challenge() const override {
class sasl_wrapper : public sasl_challenge {
public:
sasl_wrapper(::shared_ptr<sasl_challenge> sasl)
: _sasl(std::move(sasl))
{}
bytes evaluate_response(bytes_view client_response) override {
try {
return _sasl->evaluate_response(client_response);
} catch (exceptions::authentication_exception&) {
_complete = true;
return {};
}
}
bool is_complete() const {
return _complete || _sasl->is_complete();
}
future<::shared_ptr<authenticated_user>> get_authenticated_user() const {
return futurize_apply([this] {
return _sasl->get_authenticated_user().handle_exception([](auto ep) {
try {
std::rethrow_exception(ep);
} catch (exceptions::authentication_exception&) {
// return anon user
return make_ready_future<::shared_ptr<authenticated_user>>(::make_shared<authenticated_user>());
}
});
});
}
private:
::shared_ptr<sasl_challenge> _sasl;
bool _complete = false;
};
return ::make_shared<sasl_wrapper>(_authenticator->new_sasl_challenge());
}
};
class transitional_authorizer : public authorizer {
std::unique_ptr<authorizer> _authorizer;
public:
transitional_authorizer(cql3::query_processor& qp, ::service::migration_manager& mm)
: transitional_authorizer(std::make_unique<default_authorizer>(qp, mm))
{}
transitional_authorizer(std::unique_ptr<authorizer> a)
: _authorizer(std::move(a))
{}
~transitional_authorizer()
{}
future<> start() override {
return _authorizer->start();
}
future<> stop() override {
return _authorizer->stop();
}
const sstring& qualified_java_name() const override {
return transitional_authorizer_name();
}
future<permission_set> authorize(service& ser, ::shared_ptr<authenticated_user> user, data_resource resource) const override {
return is_super_user(ser, *user).then([](bool s) {
static const permission_set transitional_permissions =
permission_set::of<permission::CREATE,
permission::ALTER, permission::DROP,
permission::SELECT, permission::MODIFY>();
return make_ready_future<permission_set>(s ? permissions::ALL : transitional_permissions);
});
}
future<> grant(::shared_ptr<authenticated_user> user, permission_set ps, data_resource r, sstring s) override {
return _authorizer->grant(std::move(user), std::move(ps), std::move(r), std::move(s));
}
future<> revoke(::shared_ptr<authenticated_user> user, permission_set ps, data_resource r, sstring s) override {
return _authorizer->revoke(std::move(user), std::move(ps), std::move(r), std::move(s));
}
future<std::vector<permission_details>> list(service& ser, ::shared_ptr<authenticated_user> user, permission_set ps, optional<data_resource> r, optional<sstring> s) const override {
return _authorizer->list(ser, std::move(user), std::move(ps), std::move(r), std::move(s));
}
future<> revoke_all(sstring s) override {
return _authorizer->revoke_all(std::move(s));
}
future<> revoke_all(data_resource r) override {
return _authorizer->revoke_all(std::move(r));
}
const resource_ids& protected_resources() override {
return _authorizer->protected_resources();
}
future<> validate_configuration() const override {
return _authorizer->validate_configuration();
}
};
}
//
// To ensure correct initialization order, we unfortunately need to use string literals.
//
static const class_registrator<
auth::authenticator,
auth::transitional_authenticator,
cql3::query_processor&,
::service::migration_manager&> transitional_authenticator_reg("com.scylladb.auth.TransitionalAuthenticator");
static const class_registrator<
auth::authorizer,
auth::transitional_authorizer,
cql3::query_processor&,
::service::migration_manager&> transitional_authorizer_reg("com.scylladb.auth.TransitionalAuthorizer");


@@ -0,0 +1,661 @@
/*
* Copyright (C) 2017 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include <vector>
#include "row_cache.hh"
#include "mutation_reader.hh"
#include "streamed_mutation.hh"
#include "partition_version.hh"
#include "utils/logalloc.hh"
#include "query-request.hh"
#include "partition_snapshot_reader.hh"
#include "partition_snapshot_row_cursor.hh"
#include "read_context.hh"
#include "flat_mutation_reader.hh"
namespace cache {
extern logging::logger clogger;
class cache_flat_mutation_reader final : public flat_mutation_reader::impl {
enum class state {
before_static_row,
// Invariants:
// - position_range(_lower_bound, _upper_bound) covers all not yet emitted positions from current range
// - if _next_row has valid iterators:
// - _next_row points to the nearest row in cache >= _lower_bound
// - _next_row_in_range = _next.position() < _upper_bound
// - if _next_row doesn't have valid iterators, it has no meaning.
reading_from_cache,
// Starts reading from underlying reader.
// The range to read is position_range(_lower_bound, min(_next_row.position(), _upper_bound)).
// Invariants:
// - _next_row_in_range = _next.position() < _upper_bound
move_to_underlying,
// Invariants:
// - Upper bound of the read is min(_next_row.position(), _upper_bound)
// - _next_row_in_range = _next.position() < _upper_bound
// - _last_row points at a direct predecessor of the next row which is going to be read.
// Used for populating continuity.
// - _population_range_starts_before_all_rows is set accordingly
reading_from_underlying,
end_of_stream
};
lw_shared_ptr<partition_snapshot> _snp;
position_in_partition::tri_compare _position_cmp;
query::clustering_key_filter_ranges _ck_ranges;
query::clustering_row_ranges::const_iterator _ck_ranges_curr;
query::clustering_row_ranges::const_iterator _ck_ranges_end;
lsa_manager _lsa_manager;
partition_snapshot_row_weakref _last_row;
// We need to be prepared that we may get overlapping and out of order
// range tombstones. We must emit fragments with strictly monotonic positions,
// so we can't just trim such tombstones to the position of the last fragment.
// To solve that, range tombstones are accumulated first in a range_tombstone_stream
// and emitted once we have a fragment with a larger position.
range_tombstone_stream _tombstones;
// Holds the lower bound of a position range which hasn't been processed yet.
// Only fragments with positions < _lower_bound have been emitted.
//
// It is assumed that !_lower_bound.is_clustering_row(). We depend on this when
// calling range_tombstone::trim_front() and when inserting dummy entries. Dummy
// entries are assumed to be only at !is_clustering_row() positions.
position_in_partition _lower_bound;
position_in_partition_view _upper_bound;
state _state = state::before_static_row;
lw_shared_ptr<read_context> _read_context;
partition_snapshot_row_cursor _next_row;
bool _next_row_in_range = false;
// True iff the current population interval, since the previous clustering row, starts before all clustered rows.
// We cannot just look at _lower_bound, because emission of range tombstones changes _lower_bound, and
// since we mark clustering intervals as continuous when consuming a clustering_row, relying on _lower_bound
// alone would prevent us from marking the interval as continuous.
// Valid when _state == reading_from_underlying.
bool _population_range_starts_before_all_rows;
future<> do_fill_buffer();
void copy_from_cache_to_buffer();
future<> process_static_row();
void move_to_end();
void move_to_next_range();
void move_to_range(query::clustering_row_ranges::const_iterator);
void move_to_next_entry();
// Emits all delayed range tombstones with positions smaller than upper_bound.
void drain_tombstones(position_in_partition_view upper_bound);
// Emits all delayed range tombstones.
void drain_tombstones();
void add_to_buffer(const partition_snapshot_row_cursor&);
void add_clustering_row_to_buffer(mutation_fragment&&);
void add_to_buffer(range_tombstone&&);
void add_to_buffer(mutation_fragment&&);
future<> read_from_underlying();
void start_reading_from_underlying();
bool after_current_range(position_in_partition_view position);
bool can_populate() const;
void maybe_update_continuity();
void maybe_add_to_cache(const mutation_fragment& mf);
void maybe_add_to_cache(const clustering_row& cr);
void maybe_add_to_cache(const range_tombstone& rt);
void maybe_add_to_cache(const static_row& sr);
void maybe_set_static_row_continuous();
void finish_reader() {
push_mutation_fragment(partition_end());
_end_of_stream = true;
_state = state::end_of_stream;
}
public:
cache_flat_mutation_reader(schema_ptr s,
dht::decorated_key dk,
query::clustering_key_filter_ranges&& crr,
lw_shared_ptr<read_context> ctx,
lw_shared_ptr<partition_snapshot> snp,
row_cache& cache)
: flat_mutation_reader::impl(std::move(s))
, _snp(std::move(snp))
, _position_cmp(*_schema)
, _ck_ranges(std::move(crr))
, _ck_ranges_curr(_ck_ranges.begin())
, _ck_ranges_end(_ck_ranges.end())
, _lsa_manager(cache)
, _tombstones(*_schema)
, _lower_bound(position_in_partition::before_all_clustered_rows())
, _upper_bound(position_in_partition_view::before_all_clustered_rows())
, _read_context(std::move(ctx))
, _next_row(*_schema, *_snp)
{
clogger.trace("csm {}: table={}.{}", this, _schema->ks_name(), _schema->cf_name());
push_mutation_fragment(partition_start(std::move(dk), _snp->partition_tombstone()));
}
cache_flat_mutation_reader(const cache_flat_mutation_reader&) = delete;
cache_flat_mutation_reader(cache_flat_mutation_reader&&) = delete;
virtual future<> fill_buffer() override;
virtual ~cache_flat_mutation_reader() {
maybe_merge_versions(_snp, _lsa_manager.region(), _lsa_manager.read_section());
}
virtual void next_partition() override {
clear_buffer_to_next_partition();
if (is_buffer_empty()) {
_end_of_stream = true;
}
}
virtual future<> fast_forward_to(const dht::partition_range&) override {
clear_buffer();
_end_of_stream = true;
return make_ready_future<>();
}
virtual future<> fast_forward_to(position_range pr) override {
throw std::bad_function_call();
}
};
inline
future<> cache_flat_mutation_reader::process_static_row() {
if (_snp->version()->partition().static_row_continuous()) {
_read_context->cache().on_row_hit();
row sr = _lsa_manager.run_in_read_section([this] {
return _snp->static_row();
});
if (!sr.empty()) {
push_mutation_fragment(mutation_fragment(static_row(std::move(sr))));
}
return make_ready_future<>();
} else {
_read_context->cache().on_row_miss();
return _read_context->get_next_fragment().then([this] (mutation_fragment_opt&& sr) {
if (sr) {
assert(sr->is_static_row());
maybe_add_to_cache(sr->as_static_row());
push_mutation_fragment(std::move(*sr));
}
maybe_set_static_row_continuous();
});
}
}
inline
future<> cache_flat_mutation_reader::fill_buffer() {
if (_state == state::before_static_row) {
auto after_static_row = [this] {
if (_ck_ranges_curr == _ck_ranges_end) {
finish_reader();
return make_ready_future<>();
}
_state = state::reading_from_cache;
_lsa_manager.run_in_read_section([this] {
move_to_range(_ck_ranges_curr);
});
return fill_buffer();
};
if (_schema->has_static_columns()) {
return process_static_row().then(std::move(after_static_row));
} else {
return after_static_row();
}
}
clogger.trace("csm {}: fill_buffer(), range={}, lb={}", this, *_ck_ranges_curr, _lower_bound);
return do_until([this] { return _end_of_stream || is_buffer_full(); }, [this] {
return do_fill_buffer();
});
}
inline
future<> cache_flat_mutation_reader::do_fill_buffer() {
if (_state == state::move_to_underlying) {
_state = state::reading_from_underlying;
_population_range_starts_before_all_rows = _lower_bound.is_before_all_clustered_rows(*_schema);
auto end = _next_row_in_range ? position_in_partition(_next_row.position())
: position_in_partition(_upper_bound);
return _read_context->fast_forward_to(position_range{_lower_bound, std::move(end)}).then([this] {
return read_from_underlying();
});
}
if (_state == state::reading_from_underlying) {
return read_from_underlying();
}
// assert(_state == state::reading_from_cache)
return _lsa_manager.run_in_read_section([this] {
auto next_valid = _next_row.iterators_valid();
clogger.trace("csm {}: reading_from_cache, range=[{}, {}), next={}, valid={}", this, _lower_bound,
_upper_bound, _next_row.position(), next_valid);
// We assume that if there was eviction, and thus the range may
// no longer be continuous, the cursor was invalidated.
if (!next_valid) {
auto adjacent = _next_row.advance_to(_lower_bound);
_next_row_in_range = !after_current_range(_next_row.position());
if (!adjacent && !_next_row.continuous()) {
_last_row = nullptr; // We could insert a dummy here, but this path is unlikely.
start_reading_from_underlying();
return make_ready_future<>();
}
}
_next_row.maybe_refresh();
clogger.trace("csm {}: next={}, cont={}", this, _next_row.position(), _next_row.continuous());
while (!is_buffer_full() && _state == state::reading_from_cache) {
copy_from_cache_to_buffer();
if (need_preempt()) {
break;
}
}
return make_ready_future<>();
});
}
inline
future<> cache_flat_mutation_reader::read_from_underlying() {
return consume_mutation_fragments_until(_read_context->underlying().underlying(),
[this] { return _state != state::reading_from_underlying || is_buffer_full(); },
[this] (mutation_fragment mf) {
_read_context->cache().on_row_miss();
maybe_add_to_cache(mf);
add_to_buffer(std::move(mf));
},
[this] {
_state = state::reading_from_cache;
_lsa_manager.run_in_update_section([this] {
auto same_pos = _next_row.maybe_refresh();
if (!same_pos) {
_read_context->cache().on_mispopulate(); // FIXME: Insert dummy entry at _upper_bound.
_next_row_in_range = !after_current_range(_next_row.position());
if (!_next_row.continuous()) {
start_reading_from_underlying();
}
return;
}
if (_next_row_in_range) {
maybe_update_continuity();
_last_row = _next_row;
add_to_buffer(_next_row);
try {
move_to_next_entry();
} catch (const std::bad_alloc&) {
// We cannot reenter the section, since we may have moved to the new range, and
// because add_to_buffer() should not be repeated.
_snp->region().allocator().invalidate_references(); // Invalidates _next_row
}
} else {
if (no_clustering_row_between(*_schema, _upper_bound, _next_row.position())) {
this->maybe_update_continuity();
} else if (can_populate()) {
rows_entry::compare less(*_schema);
auto& rows = _snp->version()->partition().clustered_rows();
if (query::is_single_row(*_schema, *_ck_ranges_curr)) {
with_allocator(_snp->region().allocator(), [&] {
auto e = alloc_strategy_unique_ptr<rows_entry>(
current_allocator().construct<rows_entry>(_ck_ranges_curr->start()->value()));
// Use _next_row iterator only as a hint, because there could be insertions after _upper_bound.
auto insert_result = rows.insert_check(_next_row.get_iterator_in_latest_version(), *e, less);
auto inserted = insert_result.second;
auto it = insert_result.first;
if (inserted) {
e.release();
auto next = std::next(it);
it->set_continuous(next->continuous());
clogger.trace("csm {}: inserted dummy at {}, cont={}", this, it->position(), it->continuous());
}
});
} else if (!_ck_ranges_curr->start() || _last_row.refresh(*_snp)) {
with_allocator(_snp->region().allocator(), [&] {
auto e = alloc_strategy_unique_ptr<rows_entry>(
current_allocator().construct<rows_entry>(*_schema, _upper_bound, is_dummy::yes, is_continuous::yes));
// Use _next_row iterator only as a hint, because there could be insertions after _upper_bound.
auto insert_result = rows.insert_check(_next_row.get_iterator_in_latest_version(), *e, less);
auto inserted = insert_result.second;
if (inserted) {
clogger.trace("csm {}: inserted dummy at {}", this, _upper_bound);
e.release();
} else {
clogger.trace("csm {}: mark {} as continuous", this, insert_result.first->position());
insert_result.first->set_continuous(true);
}
});
}
} else {
_read_context->cache().on_mispopulate();
}
try {
move_to_next_range();
} catch (const std::bad_alloc&) {
// We cannot reenter the section, since we may have moved to the new range
_snp->region().allocator().invalidate_references(); // Invalidates _next_row
}
}
});
return make_ready_future<>();
});
}
inline
void cache_flat_mutation_reader::maybe_update_continuity() {
if (can_populate() && (_population_range_starts_before_all_rows || _last_row.refresh(*_snp))) {
if (_next_row.is_in_latest_version()) {
clogger.trace("csm {}: mark {} continuous", this, _next_row.get_iterator_in_latest_version()->position());
_next_row.get_iterator_in_latest_version()->set_continuous(true);
} else {
// Cover entry from older version
with_allocator(_snp->region().allocator(), [&] {
auto& rows = _snp->version()->partition().clustered_rows();
rows_entry::compare less(*_schema);
auto e = alloc_strategy_unique_ptr<rows_entry>(
current_allocator().construct<rows_entry>(*_schema, _next_row.position(), is_dummy(_next_row.dummy()), is_continuous::yes));
auto insert_result = rows.insert_check(_next_row.get_iterator_in_latest_version(), *e, less);
auto inserted = insert_result.second;
if (inserted) {
clogger.trace("csm {}: inserted dummy at {}", this, e->position());
e.release();
}
});
}
} else {
_read_context->cache().on_mispopulate();
}
}
inline
void cache_flat_mutation_reader::maybe_add_to_cache(const mutation_fragment& mf) {
if (mf.is_range_tombstone()) {
maybe_add_to_cache(mf.as_range_tombstone());
} else {
assert(mf.is_clustering_row());
const clustering_row& cr = mf.as_clustering_row();
maybe_add_to_cache(cr);
}
}
inline
void cache_flat_mutation_reader::maybe_add_to_cache(const clustering_row& cr) {
if (!can_populate()) {
_last_row = nullptr;
_population_range_starts_before_all_rows = false;
_read_context->cache().on_mispopulate();
return;
}
clogger.trace("csm {}: populate({})", this, cr);
_lsa_manager.run_in_update_section_with_allocator([this, &cr] {
mutation_partition& mp = _snp->version()->partition();
rows_entry::compare less(*_schema);
auto new_entry = alloc_strategy_unique_ptr<rows_entry>(
current_allocator().construct<rows_entry>(cr.key(), cr.tomb(), cr.marker(), cr.cells()));
new_entry->set_continuous(false);
auto it = _next_row.iterators_valid() ? _next_row.get_iterator_in_latest_version()
: mp.clustered_rows().lower_bound(cr.key(), less);
auto insert_result = mp.clustered_rows().insert_check(it, *new_entry, less);
if (insert_result.second) {
_read_context->cache().on_row_insert();
new_entry.release();
}
it = insert_result.first;
rows_entry& e = *it;
if (!_ck_ranges_curr->start() || _last_row.refresh(*_snp)) {
clogger.trace("csm {}: set_continuous({})", this, e.position());
e.set_continuous(true);
} else {
_read_context->cache().on_mispopulate();
}
with_allocator(standard_allocator(), [&] {
_last_row = partition_snapshot_row_weakref(*_snp, it);
});
_population_range_starts_before_all_rows = false;
});
}
inline
bool cache_flat_mutation_reader::after_current_range(position_in_partition_view p) {
return _position_cmp(p, _upper_bound) >= 0;
}
inline
void cache_flat_mutation_reader::start_reading_from_underlying() {
clogger.trace("csm {}: start_reading_from_underlying(), range=[{}, {})", this, _lower_bound, _next_row_in_range ? _next_row.position() : _upper_bound);
_state = state::move_to_underlying;
}
inline
void cache_flat_mutation_reader::copy_from_cache_to_buffer() {
clogger.trace("csm {}: copy_from_cache, next={}, next_row_in_range={}", this, _next_row.position(), _next_row_in_range);
position_in_partition_view next_lower_bound = _next_row.dummy() ? _next_row.position() : position_in_partition_view::after_key(_next_row.key());
for (auto&& rts : _snp->range_tombstones(*_schema, _lower_bound, _next_row_in_range ? next_lower_bound : _upper_bound)) {
add_to_buffer(std::move(rts));
if (is_buffer_full()) {
return;
}
}
if (_next_row_in_range) {
_last_row = _next_row;
add_to_buffer(_next_row);
move_to_next_entry();
} else {
move_to_next_range();
}
}
inline
void cache_flat_mutation_reader::move_to_end() {
drain_tombstones();
finish_reader();
clogger.trace("csm {}: eos", this);
}
inline
void cache_flat_mutation_reader::move_to_next_range() {
auto next_it = std::next(_ck_ranges_curr);
if (next_it == _ck_ranges_end) {
move_to_end();
_ck_ranges_curr = next_it;
} else {
move_to_range(next_it);
}
}
inline
void cache_flat_mutation_reader::move_to_range(query::clustering_row_ranges::const_iterator next_it) {
auto lb = position_in_partition::for_range_start(*next_it);
auto ub = position_in_partition_view::for_range_end(*next_it);
_last_row = nullptr;
_lower_bound = std::move(lb);
_upper_bound = std::move(ub);
_ck_ranges_curr = next_it;
auto adjacent = _next_row.advance_to(_lower_bound);
_next_row_in_range = !after_current_range(_next_row.position());
clogger.trace("csm {}: move_to_range(), range={}, lb={}, ub={}, next={}", this, *_ck_ranges_curr, _lower_bound, _upper_bound, _next_row.position());
if (!adjacent && !_next_row.continuous()) {
// FIXME: We don't insert a dummy for singular range to avoid allocating 3 entries
// for a hit (before, at and after). If we supported the concept of an incomplete row,
// we could insert such a row for the lower bound if it's full instead, for both singular and
// non-singular ranges.
if (_ck_ranges_curr->start() && !query::is_single_row(*_schema, *_ck_ranges_curr)) {
// Insert dummy for lower bound
if (can_populate()) {
// FIXME: _lower_bound could be adjacent to the previous row, in which case we could skip this
clogger.trace("csm {}: insert dummy at {}", this, _lower_bound);
auto it = with_allocator(_lsa_manager.region().allocator(), [&] {
auto& rows = _snp->version()->partition().clustered_rows();
auto new_entry = current_allocator().construct<rows_entry>(*_schema, _lower_bound, is_dummy::yes, is_continuous::no);
return rows.insert_before(_next_row.get_iterator_in_latest_version(), *new_entry);
});
_last_row = partition_snapshot_row_weakref(*_snp, it);
} else {
_read_context->cache().on_mispopulate();
}
}
start_reading_from_underlying();
}
}
// _next_row must be inside the range.
inline
void cache_flat_mutation_reader::move_to_next_entry() {
clogger.trace("csm {}: move_to_next_entry(), curr={}", this, _next_row.position());
if (no_clustering_row_between(*_schema, _next_row.position(), _upper_bound)) {
move_to_next_range();
} else {
if (!_next_row.next()) {
move_to_end();
return;
}
_next_row_in_range = !after_current_range(_next_row.position());
clogger.trace("csm {}: next={}, cont={}, in_range={}", this, _next_row.position(), _next_row.continuous(), _next_row_in_range);
if (!_next_row.continuous()) {
start_reading_from_underlying();
}
}
}
inline
void cache_flat_mutation_reader::drain_tombstones(position_in_partition_view pos) {
while (true) {
reserve_one();
auto mfo = _tombstones.get_next(pos);
if (!mfo) {
break;
}
push_mutation_fragment(std::move(*mfo));
}
}
inline
void cache_flat_mutation_reader::drain_tombstones() {
while (true) {
reserve_one();
auto mfo = _tombstones.get_next();
if (!mfo) {
break;
}
push_mutation_fragment(std::move(*mfo));
}
}
inline
void cache_flat_mutation_reader::add_to_buffer(mutation_fragment&& mf) {
clogger.trace("csm {}: add_to_buffer({})", this, mf);
if (mf.is_clustering_row()) {
add_clustering_row_to_buffer(std::move(mf));
} else {
assert(mf.is_range_tombstone());
add_to_buffer(std::move(mf).as_range_tombstone());
}
}
inline
void cache_flat_mutation_reader::add_to_buffer(const partition_snapshot_row_cursor& row) {
if (!row.dummy()) {
_read_context->cache().on_row_hit();
add_clustering_row_to_buffer(row.row());
}
}
// Maintains the following invariants, also in case of exception:
// (1) no fragment with position >= _lower_bound was pushed yet
// (2) If _lower_bound > mf.position(), mf was emitted
inline
void cache_flat_mutation_reader::add_clustering_row_to_buffer(mutation_fragment&& mf) {
clogger.trace("csm {}: add_clustering_row_to_buffer({})", this, mf);
auto& row = mf.as_clustering_row();
auto key = row.key();
try {
drain_tombstones(row.position());
push_mutation_fragment(std::move(mf));
_lower_bound = position_in_partition::after_key(std::move(key));
} catch (...) {
// We may have emitted some of the range tombstones which start after the old _lower_bound
_lower_bound = position_in_partition::for_key(std::move(key));
throw;
}
}
inline
void cache_flat_mutation_reader::add_to_buffer(range_tombstone&& rt) {
clogger.trace("csm {}: add_to_buffer({})", this, rt);
// This guarantees that rt starts after any emitted clustering_row
if (!rt.trim_front(*_schema, _lower_bound)) {
return;
}
_lower_bound = position_in_partition(rt.position());
_tombstones.apply(std::move(rt));
drain_tombstones(_lower_bound);
}
inline
void cache_flat_mutation_reader::maybe_add_to_cache(const range_tombstone& rt) {
if (can_populate()) {
clogger.trace("csm {}: maybe_add_to_cache({})", this, rt);
_lsa_manager.run_in_update_section_with_allocator([&] {
_snp->version()->partition().row_tombstones().apply_monotonically(*_schema, rt);
});
} else {
_read_context->cache().on_mispopulate();
}
}
inline
void cache_flat_mutation_reader::maybe_add_to_cache(const static_row& sr) {
if (can_populate()) {
clogger.trace("csm {}: populate({})", this, sr);
_read_context->cache().on_row_insert();
_lsa_manager.run_in_update_section_with_allocator([&] {
_snp->version()->partition().static_row().apply(*_schema, column_kind::static_column, sr.cells());
});
} else {
_read_context->cache().on_mispopulate();
}
}
inline
void cache_flat_mutation_reader::maybe_set_static_row_continuous() {
if (can_populate()) {
clogger.trace("csm {}: set static row continuous", this);
_snp->version()->partition().set_static_row_continuous(true);
} else {
_read_context->cache().on_mispopulate();
}
}
inline
bool cache_flat_mutation_reader::can_populate() const {
return _snp->at_latest_version() && _read_context->cache().phase_of(_read_context->key()) == _read_context->phase();
}
} // namespace cache
inline flat_mutation_reader make_cache_flat_mutation_reader(schema_ptr s,
dht::decorated_key dk,
query::clustering_key_filter_ranges crr,
row_cache& cache,
lw_shared_ptr<cache::read_context> ctx,
lw_shared_ptr<partition_snapshot> snp)
{
return make_flat_mutation_reader<cache::cache_flat_mutation_reader>(
std::move(s), std::move(dk), std::move(crr), std::move(ctx), std::move(snp), cache);
}
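The reader above relies on a `range_tombstone_stream` to cope with overlapping, out-of-order range tombstones: they are accumulated first and only emitted once a fragment with a larger position is reached (see `drain_tombstones()`). A minimal standalone sketch of that buffering idea, simplified to integer positions — the names here are illustrative, not Scylla's actual API:

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <utility>
#include <vector>

// Simplified stand-in for range_tombstone_stream: tombstones may be
// applied out of order, but must be emitted in position order, and only
// after the reader has reached a fragment with a larger position.
struct tombstone_stream {
    std::multimap<int, int> pending; // start position -> end position

    void apply(int start, int end) { pending.emplace(start, end); }

    // Pop the next pending tombstone whose start is < upper_bound.
    std::optional<std::pair<int, int>> get_next(int upper_bound) {
        auto it = pending.begin();
        if (it == pending.end() || it->first >= upper_bound) {
            return std::nullopt;
        }
        auto rt = *it;
        pending.erase(it);
        return rt;
    }
};

// Mirrors drain_tombstones(pos): emit everything positioned before pos.
std::vector<std::pair<int, int>> drain(tombstone_stream& ts, int pos) {
    std::vector<std::pair<int, int>> out;
    while (auto rt = ts.get_next(pos)) {
        out.push_back(*rt);
    }
    return out;
}
```

Because the multimap keeps tombstones sorted by start position, emission order is monotonic even when `apply()` calls arrive out of order, which is the invariant the real stream maintains for mutation fragments.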


@@ -1,538 +0,0 @@
/*
* Copyright (C) 2017 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include <vector>
#include "row_cache.hh"
#include "mutation_reader.hh"
#include "streamed_mutation.hh"
#include "partition_version.hh"
#include "utils/logalloc.hh"
#include "query-request.hh"
#include "partition_snapshot_reader.hh"
#include "partition_snapshot_row_cursor.hh"
#include "read_context.hh"
namespace cache {
class lsa_manager {
row_cache& _cache;
public:
lsa_manager(row_cache& cache) : _cache(cache) { }
template<typename Func>
decltype(auto) run_in_read_section(const Func& func) {
return _cache._read_section(_cache._tracker.region(), [&func] () {
return with_linearized_managed_bytes([&func] () {
return func();
});
});
}
template<typename Func>
decltype(auto) run_in_update_section(const Func& func) {
return _cache._update_section(_cache._tracker.region(), [&func] () {
return with_linearized_managed_bytes([&func] () {
return func();
});
});
}
template<typename Func>
void run_in_update_section_with_allocator(Func&& func) {
return _cache._update_section(_cache._tracker.region(), [this, &func] () {
return with_linearized_managed_bytes([this, &func] () {
return with_allocator(_cache._tracker.region().allocator(), [this, &func] () mutable {
return func();
});
});
});
}
logalloc::region& region() { return _cache._tracker.region(); }
logalloc::allocating_section& read_section() { return _cache._read_section; }
};
class cache_streamed_mutation final : public streamed_mutation::impl {
enum class state {
before_static_row,
// Invariants:
// - position_range(_lower_bound, _upper_bound) covers all not yet emitted positions from current range
// - _next_row points to the nearest row in cache >= _lower_bound
// - _next_row_in_range = _next.position() < _upper_bound
reading_from_cache,
// Starts reading from underlying reader.
// The range to read is position_range(_lower_bound, min(_next_row.position(), _upper_bound)).
// Invariants:
// - _next_row_in_range = _next.position() < _upper_bound
move_to_underlying,
// Invariants:
// - Upper bound of the read is min(_next_row.position(), _upper_bound)
// - _next_row_in_range = _next.position() < _upper_bound
// - _last_row_key contains the key of last emitted clustering_row
reading_from_underlying,
end_of_stream
};
lw_shared_ptr<partition_snapshot> _snp;
position_in_partition::tri_compare _position_cmp;
query::clustering_key_filter_ranges _ck_ranges;
query::clustering_row_ranges::const_iterator _ck_ranges_curr;
query::clustering_row_ranges::const_iterator _ck_ranges_end;
lsa_manager _lsa_manager;
stdx::optional<clustering_key> _last_row_key;
// We need to be prepared that we may get overlapping and out of order
// range tombstones. We must emit fragments with strictly monotonic positions,
// so we can't just trim such tombstones to the position of the last fragment.
// To solve that, range tombstones are accumulated first in a range_tombstone_stream
// and emitted once we have a fragment with a larger position.
range_tombstone_stream _tombstones;
// Holds the lower bound of a position range which hasn't been processed yet.
// Only fragments with positions < _lower_bound have been emitted.
position_in_partition _lower_bound;
position_in_partition_view _upper_bound;
state _state = state::before_static_row;
lw_shared_ptr<read_context> _read_context;
partition_snapshot_row_cursor _next_row;
bool _next_row_in_range = false;
future<> do_fill_buffer();
void copy_from_cache_to_buffer();
future<> process_static_row();
void move_to_end();
void move_to_next_range();
void move_to_current_range();
void move_to_next_entry();
// Emits all delayed range tombstones with positions smaller than upper_bound.
void drain_tombstones(position_in_partition_view upper_bound);
// Emits all delayed range tombstones.
void drain_tombstones();
void add_to_buffer(const partition_snapshot_row_cursor&);
void add_clustering_row_to_buffer(mutation_fragment&&);
void add_to_buffer(range_tombstone&&);
void add_to_buffer(mutation_fragment&&);
future<> read_from_underlying();
future<> start_reading_from_underlying();
bool after_current_range(position_in_partition_view position);
bool can_populate() const;
void maybe_update_continuity();
void maybe_add_to_cache(const mutation_fragment& mf);
void maybe_add_to_cache(const clustering_row& cr);
void maybe_add_to_cache(const range_tombstone& rt);
void maybe_add_to_cache(const static_row& sr);
void maybe_set_static_row_continuous();
public:
cache_streamed_mutation(schema_ptr s,
dht::decorated_key dk,
query::clustering_key_filter_ranges&& crr,
lw_shared_ptr<read_context> ctx,
lw_shared_ptr<partition_snapshot> snp,
row_cache& cache)
: streamed_mutation::impl(std::move(s), dk, snp->partition_tombstone())
, _snp(std::move(snp))
, _position_cmp(*_schema)
, _ck_ranges(std::move(crr))
, _ck_ranges_curr(_ck_ranges.begin())
, _ck_ranges_end(_ck_ranges.end())
, _lsa_manager(cache)
, _tombstones(*_schema)
, _lower_bound(position_in_partition::before_all_clustered_rows())
, _upper_bound(position_in_partition_view::before_all_clustered_rows())
, _read_context(std::move(ctx))
, _next_row(*_schema, cache._tracker.region(), *_snp)
{ }
cache_streamed_mutation(const cache_streamed_mutation&) = delete;
cache_streamed_mutation(cache_streamed_mutation&&) = delete;
virtual future<> fill_buffer() override;
virtual ~cache_streamed_mutation() {
maybe_merge_versions(_snp, _lsa_manager.region(), _lsa_manager.read_section());
}
};
inline
future<> cache_streamed_mutation::process_static_row() {
if (_snp->version()->partition().static_row_continuous()) {
_read_context->cache().on_row_hit();
row sr = _lsa_manager.run_in_read_section([this] {
return _snp->static_row();
});
if (!sr.empty()) {
push_mutation_fragment(mutation_fragment(static_row(std::move(sr))));
}
return make_ready_future<>();
} else {
_read_context->cache().on_row_miss();
return _read_context->get_next_fragment().then([this] (mutation_fragment_opt&& sr) {
if (sr) {
assert(sr->is_static_row());
maybe_add_to_cache(sr->as_static_row());
push_mutation_fragment(std::move(*sr));
}
maybe_set_static_row_continuous();
});
}
}
inline
future<> cache_streamed_mutation::fill_buffer() {
if (_state == state::before_static_row) {
auto after_static_row = [this] {
if (_ck_ranges_curr == _ck_ranges_end) {
_end_of_stream = true;
_state = state::end_of_stream;
return make_ready_future<>();
}
_state = state::reading_from_cache;
_lsa_manager.run_in_read_section([this] {
move_to_current_range();
});
return fill_buffer();
};
if (_schema->has_static_columns()) {
return process_static_row().then(std::move(after_static_row));
} else {
return after_static_row();
}
}
return do_until([this] { return _end_of_stream || is_buffer_full(); }, [this] {
return do_fill_buffer();
});
}
inline
future<> cache_streamed_mutation::do_fill_buffer() {
if (_state == state::move_to_underlying) {
_state = state::reading_from_underlying;
auto end = _next_row_in_range ? position_in_partition(_next_row.position())
: position_in_partition(_upper_bound);
return _read_context->fast_forward_to(position_range{_lower_bound, std::move(end)}).then([this] {
return read_from_underlying();
});
}
if (_state == state::reading_from_underlying) {
return read_from_underlying();
}
// assert(_state == state::reading_from_cache)
return _lsa_manager.run_in_read_section([this] {
auto same_pos = _next_row.maybe_refresh();
// FIXME: If continuity changed anywhere between _lower_bound and _next_row.position()
// we need to redo the lookup with _lower_bound. There is no eviction yet, so not yet a problem.
assert(same_pos);
while (!is_buffer_full() && _state == state::reading_from_cache) {
copy_from_cache_to_buffer();
if (need_preempt()) {
break;
}
}
return make_ready_future<>();
});
}
inline
future<> cache_streamed_mutation::read_from_underlying() {
return consume_mutation_fragments_until(_read_context->get_streamed_mutation(),
[this] { return _state != state::reading_from_underlying || is_buffer_full(); },
[this] (mutation_fragment mf) {
_read_context->cache().on_row_miss();
maybe_add_to_cache(mf);
add_to_buffer(std::move(mf));
},
[this] {
_state = state::reading_from_cache;
_lsa_manager.run_in_update_section([this] {
auto same_pos = _next_row.maybe_refresh();
assert(same_pos); // FIXME: handle eviction
if (_next_row_in_range) {
maybe_update_continuity();
add_to_buffer(_next_row);
move_to_next_entry();
} else {
if (no_clustering_row_between(*_schema, _upper_bound, _next_row.position())) {
this->maybe_update_continuity();
} else {
// FIXME: Insert dummy entry at _upper_bound.
_read_context->cache().on_mispopulate();
}
move_to_next_range();
}
});
return make_ready_future<>();
});
}
inline
void cache_streamed_mutation::maybe_update_continuity() {
if (can_populate() && _next_row.is_in_latest_version()) {
if (_last_row_key) {
if (_next_row.previous_row_in_latest_version_has_key(*_last_row_key)) {
_next_row.set_continuous(true);
}
} else if (!_ck_ranges_curr->start()) {
_next_row.set_continuous(true);
}
} else {
_read_context->cache().on_mispopulate();
}
}
inline
void cache_streamed_mutation::maybe_add_to_cache(const mutation_fragment& mf) {
if (mf.is_range_tombstone()) {
maybe_add_to_cache(mf.as_range_tombstone());
} else {
assert(mf.is_clustering_row());
const clustering_row& cr = mf.as_clustering_row();
maybe_add_to_cache(cr);
}
}
inline
void cache_streamed_mutation::maybe_add_to_cache(const clustering_row& cr) {
if (!can_populate()) {
_read_context->cache().on_mispopulate();
return;
}
_lsa_manager.run_in_update_section_with_allocator([this, &cr] {
mutation_partition& mp = _snp->version()->partition();
rows_entry::compare less(*_schema);
// FIXME: If _next_row is up to date, but latest version doesn't have iterator in
// current row (could be far away, so we'd do this often), then this will do
// the lookup in mp. This is not necessary, because _next_row has iterators for
// next rows in each version, even if they're not part of the current row.
// They're currently buried in the heap, but you could keep a vector of
// iterators per each version in addition to the heap.
auto new_entry = alloc_strategy_unique_ptr<rows_entry>(
current_allocator().construct<rows_entry>(cr.key(), cr.tomb(), cr.marker(), cr.cells()));
new_entry->set_continuous(false);
auto it = _next_row.has_valid_row_from_latest_version()
? _next_row.get_iterator_in_latest_version() : mp.clustered_rows().lower_bound(cr.key(), less);
auto insert_result = mp.clustered_rows().insert_check(it, *new_entry, less);
if (insert_result.second) {
_read_context->cache().on_row_insert();
new_entry.release();
}
it = insert_result.first;
rows_entry& e = *it;
if (_last_row_key) {
if (it == mp.clustered_rows().begin()) {
// FIXME: check whether entry for _last_row_key is in older versions and if so set
// continuity to true.
_read_context->cache().on_mispopulate();
} else {
auto prev_it = it;
--prev_it;
clustering_key_prefix::equality eq(*_schema);
if (eq(*_last_row_key, prev_it->key())) {
e.set_continuous(true);
}
}
} else if (!_ck_ranges_curr->start()) {
e.set_continuous(true);
} else {
// FIXME: Insert dummy entry at _ck_ranges_curr->start()
_read_context->cache().on_mispopulate();
}
});
}
inline
bool cache_streamed_mutation::after_current_range(position_in_partition_view p) {
return _position_cmp(p, _upper_bound) >= 0;
}
inline
future<> cache_streamed_mutation::start_reading_from_underlying() {
_state = state::move_to_underlying;
return make_ready_future<>();
}
inline
void cache_streamed_mutation::copy_from_cache_to_buffer() {
position_in_partition_view next_lower_bound = _next_row.dummy() ? _next_row.position() : position_in_partition_view::after_key(_next_row.key());
for (auto&& rts : _snp->range_tombstones(*_schema, _lower_bound, _next_row_in_range ? next_lower_bound : _upper_bound)) {
add_to_buffer(std::move(rts));
if (is_buffer_full()) {
return;
}
}
if (_next_row_in_range) {
add_to_buffer(_next_row);
move_to_next_entry();
} else {
move_to_next_range();
}
}
inline
void cache_streamed_mutation::move_to_end() {
drain_tombstones();
_end_of_stream = true;
_state = state::end_of_stream;
}
inline
void cache_streamed_mutation::move_to_next_range() {
++_ck_ranges_curr;
if (_ck_ranges_curr == _ck_ranges_end) {
move_to_end();
} else {
move_to_current_range();
}
}
inline
void cache_streamed_mutation::move_to_current_range() {
_last_row_key = std::experimental::nullopt;
_lower_bound = position_in_partition::for_range_start(*_ck_ranges_curr);
_upper_bound = position_in_partition_view::for_range_end(*_ck_ranges_curr);
auto complete_until_next = _next_row.advance_to(_lower_bound) || _next_row.continuous();
_next_row_in_range = !after_current_range(_next_row.position());
if (!complete_until_next) {
start_reading_from_underlying();
}
}
// _next_row must be inside the range.
inline
void cache_streamed_mutation::move_to_next_entry() {
if (no_clustering_row_between(*_schema, _next_row.position(), _upper_bound)) {
move_to_next_range();
} else {
if (!_next_row.next()) {
move_to_end();
return;
}
_next_row_in_range = !after_current_range(_next_row.position());
if (!_next_row.continuous()) {
start_reading_from_underlying();
}
}
}
inline
void cache_streamed_mutation::drain_tombstones(position_in_partition_view pos) {
while (auto mfo = _tombstones.get_next(pos)) {
push_mutation_fragment(std::move(*mfo));
}
}
inline
void cache_streamed_mutation::drain_tombstones() {
while (auto mfo = _tombstones.get_next()) {
push_mutation_fragment(std::move(*mfo));
}
}
inline
void cache_streamed_mutation::add_to_buffer(mutation_fragment&& mf) {
if (mf.is_clustering_row()) {
add_clustering_row_to_buffer(std::move(mf));
} else {
assert(mf.is_range_tombstone());
add_to_buffer(std::move(mf).as_range_tombstone());
}
}
inline
void cache_streamed_mutation::add_to_buffer(const partition_snapshot_row_cursor& row) {
if (!row.dummy()) {
_read_context->cache().on_row_hit();
add_clustering_row_to_buffer(row.row());
}
}
inline
void cache_streamed_mutation::add_clustering_row_to_buffer(mutation_fragment&& mf) {
auto& row = mf.as_clustering_row();
drain_tombstones(row.position());
_last_row_key = row.key();
_lower_bound = position_in_partition::after_key(row.key());
push_mutation_fragment(std::move(mf));
}
inline
void cache_streamed_mutation::add_to_buffer(range_tombstone&& rt) {
// This guarantees that rt starts after any emitted clustering_row
if (!rt.trim_front(*_schema, _lower_bound)) {
return;
}
_lower_bound = position_in_partition(rt.position());
_tombstones.apply(std::move(rt));
drain_tombstones(_lower_bound);
}
inline
void cache_streamed_mutation::maybe_add_to_cache(const range_tombstone& rt) {
if (can_populate()) {
_lsa_manager.run_in_update_section_with_allocator([&] {
_snp->version()->partition().row_tombstones().apply_monotonically(*_schema, rt);
});
} else {
_read_context->cache().on_mispopulate();
}
}
inline
void cache_streamed_mutation::maybe_add_to_cache(const static_row& sr) {
if (can_populate()) {
_read_context->cache().on_row_insert();
_lsa_manager.run_in_update_section_with_allocator([&] {
_snp->version()->partition().static_row().apply(*_schema, column_kind::static_column, sr.cells());
});
} else {
_read_context->cache().on_mispopulate();
}
}
inline
void cache_streamed_mutation::maybe_set_static_row_continuous() {
if (can_populate()) {
_snp->version()->partition().set_static_row_continuous(true);
} else {
_read_context->cache().on_mispopulate();
}
}
inline
bool cache_streamed_mutation::can_populate() const {
return _snp->at_latest_version() && _read_context->cache().phase_of(_read_context->key()) == _read_context->phase();
}
} // namespace cache
inline streamed_mutation make_cache_streamed_mutation(schema_ptr s,
dht::decorated_key dk,
query::clustering_key_filter_ranges crr,
row_cache& cache,
lw_shared_ptr<cache::read_context> ctx,
lw_shared_ptr<partition_snapshot> snp)
{
return make_streamed_mutation<cache::cache_streamed_mutation>(
std::move(s), std::move(dk), std::move(crr), std::move(ctx), std::move(snp), cache);
}

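The `add_to_buffer` logic above relies on `range_tombstone::trim_front` to guarantee that an emitted tombstone never starts before the current `_lower_bound`. A minimal standalone sketch of that contract, using plain integer half-open ranges rather than the Scylla `range_tombstone` API (all names here are hypothetical stand-ins):

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical half-open range [start, end) standing in for a range_tombstone.
struct range {
    int start;
    int end;
};

// Drop the part of the range that lies before lower_bound. Returns false
// when nothing remains (the whole range was before the bound), mirroring
// how the caller above returns early when trim_front() fails.
bool trim_front(range& r, int lower_bound) {
    r.start = std::max(r.start, lower_bound);
    return r.start < r.end;
}
```

After a successful trim, the caller can safely advance its lower bound to the (possibly adjusted) start of the range, which is what `_lower_bound = position_in_partition(rt.position())` does in the hunk above.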
View File

@@ -130,7 +130,7 @@ inline file make_checked_file(const io_error_handler& error_handler, file f)
future<file>
inline open_checked_file_dma(const io_error_handler& error_handler,
sstring name, open_flags flags,
file_open_options options)
file_open_options options = {})
{
return do_io_check(error_handler, [&] {
return open_file_dma(name, flags, options).then([&] (file f) {
@@ -139,17 +139,6 @@ inline open_checked_file_dma(const io_error_handler& error_handler,
});
}
future<file>
inline open_checked_file_dma(const io_error_handler& error_handler,
sstring name, open_flags flags)
{
return do_io_check(error_handler, [&] {
return open_file_dma(name, flags).then([&] (file f) {
return make_ready_future<file>(make_checked_file(error_handler, f));
});
});
}
future<file>
inline open_checked_directory(const io_error_handler& error_handler,
sstring name)

View File

@@ -42,17 +42,6 @@ std::ostream& operator<<(std::ostream& out, const bound_kind k);
bound_kind invert_kind(bound_kind k);
int32_t weight(bound_kind k);
static inline bound_kind flip_bound_kind(bound_kind bk)
{
switch (bk) {
case bound_kind::excl_end: return bound_kind::excl_start;
case bound_kind::incl_end: return bound_kind::incl_start;
case bound_kind::excl_start: return bound_kind::excl_end;
case bound_kind::incl_start: return bound_kind::incl_end;
}
abort();
}
class bound_view {
public:
const static thread_local clustering_key empty_prefix;

View File

@@ -169,14 +169,14 @@ public:
bool contains_tombstone(position_in_partition_view start, position_in_partition_view end) const {
position_in_partition::less_compare less(_schema);
if (_trim && less(end, *_trim)) {
if (_trim && !less(*_trim, end)) {
return false;
}
auto i = _current;
while (i != _end) {
auto range_start = position_in_partition_view::for_range_start(*i);
if (less(end, range_start)) {
if (!less(range_start, end)) {
return false;
}
auto range_end = position_in_partition_view::for_range_end(*i);

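The `contains_tombstone` hunk above changes `less(end, *_trim)` to `!less(*_trim, end)` (and similarly for `range_start`). With a strict weak ordering, `less(a, b)` means strictly-before, while `!less(b, a)` means at-or-before, so the new condition also rejects ranges that merely touch the trim point. Illustrated with plain `int` and `std::less`, independent of the Scylla comparator:

```cpp
#include <cassert>
#include <functional>

// With a strict weak ordering `less`:
//   less(a, b)   -> a is strictly before b
//   !less(b, a)  -> a is at or before b (not strictly after)
// The fixed code uses the second form, so equality counts as "ends before".
bool ends_at_or_before(int end, int trim) {
    std::less<int> less;
    return !less(trim, end); // true when end <= trim
}
```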
coding-style.md Normal file
View File

@@ -0,0 +1,3 @@
# Scylla Coding Style
Please see the [Seastar style document](https://github.com/scylladb/seastar/blob/master/coding-style.md).

View File

@@ -21,6 +21,9 @@
#pragma once
#include "sstables/shared_sstable.hh"
#include "exceptions/exceptions.hh"
class column_family;
class schema;
using schema_ptr = lw_shared_ptr<const schema>;
@@ -33,6 +36,7 @@ enum class compaction_strategy_type {
size_tiered,
leveled,
date_tiered,
time_window,
};
class compaction_strategy_impl;
@@ -53,13 +57,13 @@ public:
compaction_strategy& operator=(compaction_strategy&&);
// Return a list of sstables to be compacted after applying the strategy.
compaction_descriptor get_sstables_for_compaction(column_family& cfs, std::vector<lw_shared_ptr<sstable>> candidates);
compaction_descriptor get_sstables_for_compaction(column_family& cfs, std::vector<shared_sstable> candidates);
std::vector<resharding_descriptor> get_resharding_jobs(column_family& cf, std::vector<lw_shared_ptr<sstable>> candidates);
std::vector<resharding_descriptor> get_resharding_jobs(column_family& cf, std::vector<shared_sstable> candidates);
// Some strategies may look at the compacted and resulting sstables to
// get some useful information for subsequent compactions.
void notify_completion(const std::vector<lw_shared_ptr<sstable>>& removed, const std::vector<lw_shared_ptr<sstable>>& added);
void notify_completion(const std::vector<shared_sstable>& removed, const std::vector<shared_sstable>& added);
// Return if parallel compaction is allowed by strategy.
bool parallel_compaction() const;
@@ -82,6 +86,8 @@ public:
return "LeveledCompactionStrategy";
case compaction_strategy_type::date_tiered:
return "DateTieredCompactionStrategy";
case compaction_strategy_type::time_window:
return "TimeWindowCompactionStrategy";
default:
throw std::runtime_error("Invalid Compaction Strategy");
}
@@ -100,6 +106,8 @@ public:
return compaction_strategy_type::leveled;
} else if (short_name == "DateTieredCompactionStrategy") {
return compaction_strategy_type::date_tiered;
} else if (short_name == "TimeWindowCompactionStrategy") {
return compaction_strategy_type::time_window;
} else {
throw exceptions::configuration_exception(sprint("Unable to find compaction strategy class '%s'", name));
}

View File

@@ -12,7 +12,9 @@
# The name of the cluster. This is mainly used to prevent machines in
# one logical cluster from joining another.
cluster_name: 'Test Cluster'
# It is recommended to change the default value when creating a new cluster.
# You can NOT modify this value for an existing cluster
#cluster_name: 'Test Cluster'
# This defines the number of tokens randomly assigned to this node on the ring
# The more tokens, relative to other nodes, the larger the proportion of data
@@ -85,6 +87,13 @@ listen_address: localhost
# Leaving this blank will set it to the same value as listen_address
# broadcast_address: 1.2.3.4
# When using multiple physical network interfaces, set this to true to listen on broadcast_address
# in addition to the listen_address, allowing nodes to communicate in both interfaces.
# Ignore this property if the network configuration automatically routes between the public and private networks such as EC2.
#
# listen_on_broadcast_address: false
# port for the CQL native transport to listen for clients on
# For security reasons, you should not expose this port to the internet. Firewall it if needed.
native_transport_port: 9042
@@ -270,17 +279,17 @@ batch_size_fail_threshold_in_kb: 50
# Validity period for permissions cache (fetching permissions can be an
# expensive operation depending on the authorizer, CassandraAuthorizer is
# one example). Defaults to 2000, set to 0 to disable.
# one example). Defaults to 10000, set to 0 to disable.
# Will be disabled automatically for AllowAllAuthorizer.
# permissions_validity_in_ms: 2000
# permissions_validity_in_ms: 10000
# Refresh interval for permissions cache (if enabled).
# After this interval, cache entries become eligible for refresh. Upon next
# access, an async reload is scheduled and the old value returned until it
# completes. If permissions_validity_in_ms is non-zero, then this must be
# also.
# Defaults to the same value as permissions_validity_in_ms.
# permissions_update_interval_in_ms: 1000
# completes. If permissions_validity_in_ms is non-zero, then this also must have
# a non-zero value. Defaults to 2000. It's recommended to set this value to
# be at least 3 times smaller than the permissions_validity_in_ms.
# permissions_update_interval_in_ms: 2000
# The partitioner is responsible for distributing groups of rows (by
# partition key) across nodes in the cluster. You should leave this

View File

@@ -34,7 +34,7 @@ for line in open('/etc/os-release'):
os_ids += value.split(' ')
# distribution "internationalization", converting package names.
# Fedora name is key, values is distro -> package name dict.
# Fedora name is key, values is distro -> package name dict.
i18n_xlat = {
'boost-devel': {
'debian': 'libboost-dev',
@@ -48,7 +48,7 @@ def pkgname(name):
for id in os_ids:
if id in dict:
return dict[id]
return name
return name
def get_flags():
with open('/proc/cpuinfo') as f:
@@ -86,7 +86,7 @@ def try_compile(compiler, source = '', flags = []):
with tempfile.NamedTemporaryFile() as sfile:
sfile.file.write(bytes(source, 'utf-8'))
sfile.file.flush()
return subprocess.call([compiler, '-x', 'c++', '-o', '/dev/null', '-c', sfile.name] + flags,
return subprocess.call([compiler, '-x', 'c++', '-o', '/dev/null', '-c', sfile.name] + args.user_cflags.split() + flags,
stdout = subprocess.DEVNULL,
stderr = subprocess.DEVNULL) == 0
@@ -167,7 +167,9 @@ modes = {
scylla_tests = [
'tests/mutation_test',
'tests/mvcc_test',
'tests/streamed_mutation_test',
'tests/flat_mutation_reader_test',
'tests/schema_registry_test',
'tests/canonical_mutation_test',
'tests/range_test',
@@ -186,7 +188,8 @@ scylla_tests = [
'tests/perf/perf_cql_parser',
'tests/perf/perf_simple_query',
'tests/perf/perf_fast_forward',
'tests/cache_streamed_mutation_test',
'tests/perf/perf_cache_eviction',
'tests/cache_flat_mutation_reader_test',
'tests/row_cache_stress_test',
'tests/memory_footprint',
'tests/perf/perf_sstable',
@@ -221,7 +224,7 @@ scylla_tests = [
'tests/murmur_hash_test',
'tests/allocation_strategy_test',
'tests/logalloc_test',
'tests/log_histogram_test',
'tests/log_heap_test',
'tests/managed_vector_test',
'tests/crc_test',
'tests/flush_queue_test',
@@ -238,7 +241,15 @@ scylla_tests = [
'tests/view_schema_test',
'tests/counter_test',
'tests/cell_locker_test',
'tests/streaming_histogram_test',
'tests/duration_test',
'tests/vint_serialization_test',
'tests/compress_test',
'tests/chunked_vector_test',
'tests/loading_cache_test',
'tests/castas_fcts_test',
'tests/big_decimal_test',
'tests/aggregate_fcts_test',
]
apps = [
@@ -324,6 +335,7 @@ scylla_core = (['database.cc',
'mutation_partition_view.cc',
'mutation_partition_serializer.cc',
'mutation_reader.cc',
'flat_mutation_reader.cc',
'mutation_query.cc',
'keys.cc',
'counters.cc',
@@ -331,11 +343,11 @@ scylla_core = (['database.cc',
'sstables/compress.cc',
'sstables/row.cc',
'sstables/partition.cc',
'sstables/filter.cc',
'sstables/compaction.cc',
'sstables/compaction_strategy.cc',
'sstables/compaction_manager.cc',
'sstables/atomic_deletion.cc',
'sstables/integrity_checked_file_impl.cc',
'transport/event.cc',
'transport/event_notifier.cc',
'transport/server.cc',
@@ -350,6 +362,7 @@ scylla_core = (['database.cc',
'cql3/sets.cc',
'cql3/maps.cc',
'cql3/functions/functions.cc',
'cql3/functions/castas_fcts.cc',
'cql3/statements/cf_prop_defs.cc',
'cql3/statements/cf_statement.cc',
'cql3/statements/authentication_statement.cc',
@@ -451,6 +464,7 @@ scylla_core = (['database.cc',
'utils/dynamic_bitset.cc',
'utils/managed_bytes.cc',
'utils/exceptions.cc',
'utils/config_file.cc',
'gms/version_generator.cc',
'gms/versioned_value.cc',
'gms/gossiper.cc',
@@ -510,20 +524,27 @@ scylla_core = (['database.cc',
'lister.cc',
'repair/repair.cc',
'exceptions/exceptions.cc',
'auth/auth.cc',
'auth/allow_all_authenticator.cc',
'auth/allow_all_authorizer.cc',
'auth/authenticated_user.cc',
'auth/authenticator.cc',
'auth/authorizer.cc',
'auth/common.cc',
'auth/default_authorizer.cc',
'auth/data_resource.cc',
'auth/password_authenticator.cc',
'auth/permission.cc',
'auth/permissions_cache.cc',
'auth/service.cc',
'auth/transitional.cc',
'tracing/tracing.cc',
'tracing/trace_keyspace_helper.cc',
'tracing/trace_state.cc',
'table_helper.cc',
'range_tombstone.cc',
'range_tombstone_list.cc',
'disk-error-handler.cc'
'disk-error-handler.cc',
'duration.cc',
'vint-serialization.cc',
]
+ [Antlr3Grammar('cql3/Cql.g')]
+ [Thrift('interface/cassandra.thrift', 'Cassandra')]
@@ -619,6 +640,12 @@ pure_boost_tests = set([
'tests/dynamic_bitset_test',
'tests/idl_test',
'tests/cartesian_product_test',
'tests/streaming_histogram_test',
'tests/duration_test',
'tests/vint_serialization_test',
'tests/compress_test',
'tests/chunked_vector_test',
'tests/big_decimal_test',
])
tests_not_using_seastar_test_framework = set([
@@ -632,6 +659,7 @@ tests_not_using_seastar_test_framework = set([
'tests/message',
'tests/perf/perf_simple_query',
'tests/perf/perf_fast_forward',
'tests/perf/perf_cache_eviction',
'tests/row_cache_stress_test',
'tests/memory_footprint',
'tests/gossip',
@@ -645,19 +673,20 @@ for t in tests_not_using_seastar_test_framework:
for t in scylla_tests:
deps[t] = [t + '.cc']
if t not in tests_not_using_seastar_test_framework:
deps[t] += scylla_tests_dependencies
deps[t] += scylla_tests_dependencies
deps[t] += scylla_tests_seastar_deps
else:
deps[t] += scylla_core + api + idls + ['tests/cql_test_env.cc']
deps['tests/sstable_test'] += ['tests/sstable_datafile_test.cc']
deps['tests/sstable_test'] += ['tests/sstable_datafile_test.cc', 'tests/sstable_utils.cc']
deps['tests/mutation_reader_test'] += ['tests/sstable_utils.cc']
deps['tests/bytes_ostream_test'] = ['tests/bytes_ostream_test.cc', 'utils/managed_bytes.cc', 'utils/logalloc.cc', 'utils/dynamic_bitset.cc']
deps['tests/input_stream_test'] = ['tests/input_stream_test.cc']
deps['tests/UUID_test'] = ['utils/UUID_gen.cc', 'tests/UUID_test.cc', 'utils/uuid.cc', 'utils/managed_bytes.cc', 'utils/logalloc.cc', 'utils/dynamic_bitset.cc']
deps['tests/murmur_hash_test'] = ['bytes.cc', 'utils/murmur_hash.cc', 'tests/murmur_hash_test.cc']
deps['tests/allocation_strategy_test'] = ['tests/allocation_strategy_test.cc', 'utils/logalloc.cc', 'utils/dynamic_bitset.cc']
deps['tests/log_histogram_test'] = ['tests/log_histogram_test.cc']
deps['tests/log_heap_test'] = ['tests/log_heap_test.cc']
deps['tests/anchorless_list_test'] = ['tests/anchorless_list_test.cc']
warnings = [
@@ -671,6 +700,10 @@ warnings = [
'-Wno-return-stack-address',
'-Wno-missing-braces',
'-Wno-unused-lambda-capture',
'-Wno-misleading-indentation',
'-Wno-overflow',
'-Wno-noexcept-type',
'-Wno-nonnull-compare'
]
warnings = [w
@@ -772,7 +805,8 @@ if args.alloc_failure_injector:
seastar_flags += ['--enable-alloc-failure-injector']
seastar_cflags = args.user_cflags + " -march=nehalem"
seastar_flags += ['--compiler', args.cxx, '--c-compiler', args.cc, '--cflags=%s' % (seastar_cflags)]
seastar_ldflags = args.user_ldflags
seastar_flags += ['--compiler', args.cxx, '--c-compiler', args.cc, '--cflags=%s' % (seastar_cflags), '--ldflags=%s' %(seastar_ldflags)]
status = subprocess.call([python, './configure.py'] + seastar_flags, cwd = 'seastar')
@@ -836,7 +870,7 @@ with open(buildfile, 'w') as f:
builddir = {outdir}
cxx = {cxx}
cxxflags = {user_cflags} {warnings} {defines}
ldflags = {user_ldflags}
ldflags = -fuse-ld=gold {user_ldflags}
libs = {libs}
pool link_pool
depth = {link_pool_depth}
@@ -893,7 +927,8 @@ with open(buildfile, 'w') as f:
&& sed -i -e 's/^\\( *\)\\(ImplTraits::CommonTokenType\\* [a-zA-Z0-9_]* = NULL;\\)$$/\\1const \\2/' $
-e '1i using ExceptionBaseType = int;' $
-e 's/^{{/{{ ExceptionBaseType\* ex = nullptr;/; $
s/ExceptionBaseType\* ex = new/ex = new/' $
s/ExceptionBaseType\* ex = new/ex = new/; $
s/exceptions::syntax_exception e/exceptions::syntax_exception\& e/' $
build/{mode}/gen/${{stem}}Parser.cpp
description = ANTLR3 $in
''').format(mode = mode, **modeval))
@@ -919,25 +954,13 @@ with open(buildfile, 'w') as f:
objs += dep.objects('$builddir/' + mode + '/gen')
if isinstance(dep, Antlr3Grammar):
objs += dep.objects('$builddir/' + mode + '/gen')
if binary.endswith('.pc'):
vars = modeval.copy()
vars.update(globals())
pc = textwrap.dedent('''\
Name: Seastar
URL: http://seastar-project.org/
Description: Advanced C++ framework for high-performance server applications on modern hardware.
Version: 1.0
Libs: -L{srcdir}/{builddir} -Wl,--whole-archive -lseastar -Wl,--no-whole-archive {dbgflag} -Wl,--no-as-needed {static} {pie} -fvisibility=hidden -pthread {user_ldflags} {libs} {sanitize_libs}
Cflags: -std=gnu++1y {dbgflag} {fpie} -Wall -Werror -fvisibility=hidden -pthread -I{srcdir} -I{srcdir}/{builddir}/gen {user_cflags} {warnings} {defines} {sanitize} {opt}
''').format(builddir = 'build/' + mode, srcdir = os.getcwd(), **vars)
f.write('build $builddir/{}/{}: gen\n text = {}\n'.format(mode, binary, repr(pc)))
elif binary.endswith('.a'):
if binary.endswith('.a'):
f.write('build $builddir/{}/{}: ar.{} {}\n'.format(mode, binary, mode, str.join(' ', objs)))
else:
if binary.startswith('tests/'):
local_libs = '$libs'
if binary not in tests_not_using_seastar_test_framework or binary in pure_boost_tests:
local_libs += ' ' + maybe_static(args.staticboost, '-lboost_unit_test_framework')
local_libs += ' ' + maybe_static(args.staticboost, '-lboost_unit_test_framework')
if has_thrift:
local_libs += ' ' + thrift_libs + ' ' + maybe_static(args.staticboost, '-lboost_system')
# Our code's debugging information is huge, and multiplied

View File

@@ -399,6 +399,7 @@ unaliasedSelector returns [shared_ptr<selectable::raw> s]
| K_WRITETIME '(' c=cident ')' { tmp = make_shared<selectable::writetime_or_ttl::raw>(c, true); }
| K_TTL '(' c=cident ')' { tmp = make_shared<selectable::writetime_or_ttl::raw>(c, false); }
| f=functionName args=selectionFunctionArgs { tmp = ::make_shared<selectable::with_function::raw>(std::move(f), std::move(args)); }
| K_CAST '(' arg=unaliasedSelector K_AS t=native_type ')' { tmp = ::make_shared<selectable::with_cast::raw>(std::move(arg), std::move(t)); }
)
( '.' fi=cident { tmp = make_shared<selectable::with_field_selection::raw>(std::move(tmp), std::move(fi)); } )*
{ $s = tmp; }
@@ -1167,6 +1168,7 @@ constant returns [shared_ptr<cql3::constants::literal> constant]
| t=INTEGER { $constant = cql3::constants::literal::integer(sstring{$t.text}); }
| t=FLOAT { $constant = cql3::constants::literal::floating_point(sstring{$t.text}); }
| t=BOOLEAN { $constant = cql3::constants::literal::bool_(sstring{$t.text}); }
| t=DURATION { $constant = cql3::constants::literal::duration(sstring{$t.text}); }
| t=UUID { $constant = cql3::constants::literal::uuid(sstring{$t.text}); }
| t=HEXNUMBER { $constant = cql3::constants::literal::hex(sstring{$t.text}); }
| { sign=""; } ('-' {sign = "-"; } )? t=(K_NAN | K_INFINITY) { $constant = cql3::constants::literal::floating_point(sstring{sign + $t.text}); }
@@ -1464,6 +1466,7 @@ native_type returns [shared_ptr<cql3_type> t]
| K_COUNTER { $t = cql3_type::counter; }
| K_DECIMAL { $t = cql3_type::decimal; }
| K_DOUBLE { $t = cql3_type::double_; }
| K_DURATION { $t = cql3_type::duration; }
| K_FLOAT { $t = cql3_type::float_; }
| K_INET { $t = cql3_type::inet; }
| K_INT { $t = cql3_type::int_; }
@@ -1569,6 +1572,7 @@ basic_unreserved_keyword returns [sstring str]
K_SELECT: S E L E C T;
K_FROM: F R O M;
K_AS: A S;
K_CAST: C A S T;
K_WHERE: W H E R E;
K_AND: A N D;
K_KEY: K E Y;
@@ -1649,6 +1653,7 @@ K_BOOLEAN: B O O L E A N;
K_COUNTER: C O U N T E R;
K_DECIMAL: D E C I M A L;
K_DOUBLE: D O U B L E;
K_DURATION: D U R A T I O N;
K_FLOAT: F L O A T;
K_INET: I N E T;
K_INT: I N T;
@@ -1778,6 +1783,20 @@ fragment EXPONENT
: E ('+' | '-')? DIGIT+
;
fragment DURATION_UNIT
: Y
| M O
| W
| D
| H
| M
| S
| M S
| U S
| '\u00B5' S
| N S
;
INTEGER
: '-'? DIGIT+
;
@@ -1802,6 +1821,13 @@ BOOLEAN
: T R U E | F A L S E
;
DURATION
: '-'? DIGIT+ DURATION_UNIT (DIGIT+ DURATION_UNIT)*
| '-'? 'P' (DIGIT+ 'Y')? (DIGIT+ 'M')? (DIGIT+ 'D')? ('T' (DIGIT+ 'H')? (DIGIT+ 'M')? (DIGIT+ 'S')?)? // ISO 8601 "format with designators"
| '-'? 'P' DIGIT+ 'W'
| '-'? 'P' DIGIT DIGIT DIGIT DIGIT '-' DIGIT DIGIT '-' DIGIT DIGIT 'T' DIGIT DIGIT ':' DIGIT DIGIT ':' DIGIT DIGIT // ISO 8601 "alternative format"
;
IDENT
: LETTER (LETTER | DIGIT | '_')*
;

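The first alternative of the `DURATION` lexer rule above accepts a sequence of value/unit pairs such as `12h30m` or `1y2mo`. A rough `std::regex` translation of just that alternative, as an illustrative sketch (it is not the ANTLR rule and ignores the ISO 8601 alternatives):

```cpp
#include <cassert>
#include <regex>
#include <string>

// Approximate regex for the "value+unit pairs" alternative of DURATION:
// an optional sign, then one or more <digits><unit> groups. The unit set
// mirrors DURATION_UNIT above (y, mo, w, d, h, m, s, ms, us, ns),
// case-insensitively; the U+00B5 micro-sign variant is omitted here.
bool looks_like_simple_duration(const std::string& s) {
    static const std::regex re(
        R"(-?(\d+(y|mo|w|d|h|m|s|ms|us|ns))+)",
        std::regex::icase);
    return std::regex_match(s, re);
}
```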
View File

@@ -79,6 +79,7 @@ abstract_marker::raw::raw(int32_t bind_index)
return ::make_shared<maps::marker>(_bind_index, receiver);
}
assert(0);
return shared_ptr<term>();
}
assignment_testable::test_result abstract_marker::raw::test_assignment(database& db, const sstring& keyspace, ::shared_ptr<column_specification> receiver) {

View File

@@ -79,7 +79,7 @@ int64_t attributes::get_timestamp(int64_t now, const query_options& options) {
}
try {
data_type_for<int64_t>()->validate(*tval);
} catch (marshal_exception e) {
} catch (marshal_exception& e) {
throw exceptions::invalid_request_exception("Invalid timestamp value");
}
return value_cast<int64_t>(data_type_for<int64_t>()->deserialize(*tval));
@@ -99,7 +99,7 @@ int32_t attributes::get_time_to_live(const query_options& options) {
try {
data_type_for<int32_t>()->validate(*tval);
}
catch (marshal_exception e) {
catch (marshal_exception& e) {
throw exceptions::invalid_request_exception("Invalid TTL value");
}

View File

@@ -40,11 +40,29 @@
*/
#include "cql3/column_condition.hh"
#include "statements/request_validations.hh"
#include "unimplemented.hh"
#include "lists.hh"
#include "maps.hh"
#include <boost/range/algorithm_ext/push_back.hpp>
namespace {
void validate_operation_on_durations(const abstract_type& type, const cql3::operator_type& op) {
using cql3::statements::request_validations::check_false;
if (op.is_slice() && type.references_duration()) {
check_false(type.is_collection(), "Slice conditions are not supported on collections containing durations");
check_false(type.is_tuple(), "Slice conditions are not supported on tuples containing durations");
check_false(type.is_user_type(), "Slice conditions are not supported on UDTs containing durations");
// We're a duration.
throw exceptions::invalid_request_exception(sprint("Slice conditions are not supported on durations"));
}
}
}
namespace cql3 {
bool
@@ -95,6 +113,7 @@ column_condition::raw::prepare(database& db, const sstring& keyspace, const colu
}
return column_condition::in_condition(receiver, std::move(terms));
} else {
validate_operation_on_durations(*receiver.type, _op);
return column_condition::condition(receiver, _value->prepare(db, keyspace, receiver.column_specification), _op);
}
}
@@ -129,6 +148,8 @@ column_condition::raw::prepare(database& db, const sstring& keyspace, const colu
| boost::adaptors::transformed(std::bind(&term::raw::prepare, std::placeholders::_1, std::ref(db), std::ref(keyspace), value_spec)));
return column_condition::in_condition(receiver, _collection_element->prepare(db, keyspace, element_spec), terms);
} else {
validate_operation_on_durations(*receiver.type, _op);
return column_condition::condition(receiver,
_collection_element->prepare(db, keyspace, element_spec),
_value->prepare(db, keyspace, value_spec),

View File

@@ -52,14 +52,15 @@ std::ostream&
operator<<(std::ostream&out, constants::type t)
{
switch (t) {
case constants::type::STRING: return out << "STRING";
case constants::type::INTEGER: return out << "INTEGER";
case constants::type::UUID: return out << "UUID";
case constants::type::FLOAT: return out << "FLOAT";
case constants::type::BOOLEAN: return out << "BOOLEAN";
case constants::type::HEX: return out << "HEX";
};
assert(0);
case constants::type::STRING: return out << "STRING";
case constants::type::INTEGER: return out << "INTEGER";
case constants::type::UUID: return out << "UUID";
case constants::type::FLOAT: return out << "FLOAT";
case constants::type::BOOLEAN: return out << "BOOLEAN";
case constants::type::HEX: return out << "HEX";
case constants::type::DURATION: return out << "DURATION";
}
abort();
}
bytes
@@ -145,6 +146,11 @@ constants::literal::test_assignment(database& db, const sstring& keyspace, ::sha
return assignment_testable::test_result::WEAKLY_ASSIGNABLE;
}
break;
case type::DURATION:
if (kind == cql3_type::kind_enum_set::prepare<cql3_type::kind::DURATION>()) {
return assignment_testable::test_result::EXACT_MATCH;
}
break;
}
return assignment_testable::test_result::NOT_ASSIGNABLE;
}

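The `operator<<` hunk above moves `DURATION` into the switch and replaces a trailing `assert(0)` with `abort()` after a fully covered switch. The idiom is worth noting: with no `default:` label, `-Wswitch` warns at compile time when a new enumerator lacks a case, and `abort()` only guards the formally unreachable fall-through. A self-contained sketch with a stand-in enum (not the real `constants::type`):

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Stand-in for constants::type, reduced to three enumerators.
enum class type { STRING, INTEGER, DURATION };

std::string to_string(type t) {
    switch (t) { // no default: -Wswitch flags any missing enumerator
    case type::STRING:   return "STRING";
    case type::INTEGER:  return "INTEGER";
    case type::DURATION: return "DURATION";
    }
    abort(); // unreachable when the switch is exhaustive
}
```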
View File

@@ -60,7 +60,7 @@ public:
#endif
public:
enum class type {
STRING, INTEGER, UUID, FLOAT, BOOLEAN, HEX
STRING, INTEGER, UUID, FLOAT, BOOLEAN, HEX, DURATION
};
/**
@@ -149,6 +149,10 @@ public:
return ::make_shared<literal>(type::HEX, text);
}
static ::shared_ptr<literal> duration(sstring text) {
return ::make_shared<literal>(type::DURATION, text);
}
virtual ::shared_ptr<term> prepare(database& db, const sstring& keyspace, ::shared_ptr<column_specification> receiver);
private:
bytes parsed_value(data_type validator);

View File

@@ -48,6 +48,10 @@ shared_ptr<cql3_type> cql3_type::raw::prepare(database& db, const sstring& keysp
}
}
bool cql3_type::raw::is_duration() const {
return false;
}
bool cql3_type::raw::references_user_type(const sstring& name) const {
return false;
}
@@ -78,6 +82,10 @@ public:
virtual sstring to_string() const {
return _type->to_string();
}
virtual bool is_duration() const override {
return _type->get_type()->equals(duration_type);
}
};
class cql3_type::raw_collection : public raw {
@@ -126,9 +134,15 @@ public:
if (_kind == &collection_type_impl::kind::list) {
return make_shared(cql3_type(to_string(), list_type_impl::get_instance(_values->prepare_internal(keyspace, user_types)->get_type(), !_frozen), false));
} else if (_kind == &collection_type_impl::kind::set) {
if (_values->is_duration()) {
throw exceptions::invalid_request_exception(sprint("Durations are not allowed inside sets: %s", *this));
}
return make_shared(cql3_type(to_string(), set_type_impl::get_instance(_values->prepare_internal(keyspace, user_types)->get_type(), !_frozen), false));
} else if (_kind == &collection_type_impl::kind::map) {
assert(_keys); // "Got null keys type for a collection";
if (_keys->is_duration()) {
throw exceptions::invalid_request_exception(sprint("Durations are not allowed as map keys: %s", *this));
}
return make_shared(cql3_type(to_string(), map_type_impl::get_instance(_keys->prepare_internal(keyspace, user_types)->get_type(), _values->prepare_internal(keyspace, user_types)->get_type(), !_frozen), false));
}
abort();
@@ -138,6 +152,10 @@ public:
return (_keys && _keys->references_user_type(name)) || _values->references_user_type(name);
}
bool is_duration() const override {
return false;
}
virtual sstring to_string() const override {
sstring start = _frozen ? "frozen<" : "";
sstring end = _frozen ? ">" : "";
@@ -329,6 +347,7 @@ thread_local shared_ptr<cql3_type> cql3_type::inet = make("inet", inet_addr_type
thread_local shared_ptr<cql3_type> cql3_type::varint = make("varint", varint_type, cql3_type::kind::VARINT);
thread_local shared_ptr<cql3_type> cql3_type::decimal = make("decimal", decimal_type, cql3_type::kind::DECIMAL);
thread_local shared_ptr<cql3_type> cql3_type::counter = make("counter", counter_type, cql3_type::kind::COUNTER);
thread_local shared_ptr<cql3_type> cql3_type::duration = make("duration", duration_type, cql3_type::kind::DURATION);
const std::vector<shared_ptr<cql3_type>>&
cql3_type::values() {
@@ -354,6 +373,7 @@ cql3_type::values() {
cql3_type::timeuuid,
cql3_type::date,
cql3_type::time,
cql3_type::duration,
};
return v;
}

View File

@@ -75,6 +75,7 @@ public:
virtual bool supports_freezing() const = 0;
virtual bool is_collection() const;
virtual bool is_counter() const;
virtual bool is_duration() const;
virtual bool references_user_type(const sstring&) const;
virtual std::experimental::optional<sstring> keyspace() const;
virtual void freeze();
@@ -102,7 +103,7 @@ private:
public:
enum class kind : int8_t {
ASCII, BIGINT, BLOB, BOOLEAN, COUNTER, DECIMAL, DOUBLE, EMPTY, FLOAT, INT, SMALLINT, TINYINT, INET, TEXT, TIMESTAMP, UUID, VARCHAR, VARINT, TIMEUUID, DATE, TIME
ASCII, BIGINT, BLOB, BOOLEAN, COUNTER, DECIMAL, DOUBLE, EMPTY, FLOAT, INT, SMALLINT, TINYINT, INET, TEXT, TIMESTAMP, UUID, VARCHAR, VARINT, TIMEUUID, DATE, TIME, DURATION
};
using kind_enum = super_enum<kind,
kind::ASCII,
@@ -125,7 +126,8 @@ public:
kind::VARINT,
kind::TIMEUUID,
kind::DATE,
kind::TIME>;
kind::TIME,
kind::DURATION>;
using kind_enum_set = enum_set<kind_enum>;
private:
std::experimental::optional<kind_enum_set::prepared> _kind;
@@ -154,6 +156,7 @@ public:
static thread_local shared_ptr<cql3_type> varint;
static thread_local shared_ptr<cql3_type> decimal;
static thread_local shared_ptr<cql3_type> counter;
static thread_local shared_ptr<cql3_type> duration;
static const std::vector<shared_ptr<cql3_type>>& values();
public:

View File

@@ -67,10 +67,6 @@ class error_collector : public error_listener<RecognizerType, ExceptionBaseType>
*/
const sstring_view _query;
/**
* The error messages.
*/
std::vector<sstring> _error_msgs;
public:
/**
@@ -81,7 +77,10 @@ public:
*/
error_collector(const sstring_view& query) : _query(query) {}
virtual void syntax_error(RecognizerType& recognizer, ANTLR_UINT8** token_names, ExceptionBaseType* ex) override {
/**
* Format and throw a new \c exceptions::syntax_exception.
*/
[[noreturn]] virtual void syntax_error(RecognizerType& recognizer, ANTLR_UINT8** token_names, ExceptionBaseType* ex) override {
auto hdr = get_error_header(ex);
auto msg = get_error_message(recognizer, ex, token_names);
std::stringstream result;
@@ -90,22 +89,15 @@ public:
if (recognizer instanceof Parser)
appendQuerySnippet((Parser) recognizer, builder);
#endif
_error_msgs.emplace_back(result.str());
}
virtual void syntax_error(RecognizerType& recognizer, const sstring& msg) override {
_error_msgs.emplace_back(msg);
throw exceptions::syntax_exception(result.str());
}
/**
* Throws the first syntax error found by the lexer or the parser if it exists.
*
* @throws SyntaxException the syntax error.
* Throw a new \c exceptions::syntax_exception.
*/
void throw_first_syntax_error() {
if (!_error_msgs.empty()) {
throw exceptions::syntax_exception(_error_msgs[0]);
}
[[noreturn]] virtual void syntax_error(RecognizerType&, const sstring& msg) override {
throw exceptions::syntax_exception(msg);
}
private:

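The `error_collector` hunk above switches from buffering messages (with a later `throw_first_syntax_error()`) to throwing `exceptions::syntax_exception` at the point where the error is reported, which is why both overrides become `[[noreturn]]`. A minimal sketch of the new shape, with a stand-in exception type rather than Scylla's:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Stand-in for exceptions::syntax_exception.
struct syntax_exception : std::runtime_error {
    using std::runtime_error::runtime_error;
};

// Throw immediately instead of appending to an _error_msgs vector;
// the caller sees the first error without a separate flush step.
[[noreturn]] void syntax_error(const std::string& msg) {
    throw syntax_exception(msg);
}
```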
View File

@@ -53,6 +53,7 @@ namespace cql3 {
template<typename RecognizerType, typename ExceptionBaseType>
class error_listener {
public:
virtual ~error_listener() = default;
/**
* Invoked when a syntax error occurs.

View File

@@ -41,6 +41,7 @@
#pragma once
#include "utils/big_decimal.hh"
#include "aggregate_function.hh"
#include "native_aggregate_function.hh"
@@ -111,9 +112,70 @@ make_sum_function() {
return make_shared<sum_function_for<Type>>();
}
template <typename Type>
class impl_div_for_avg {
public:
static Type div(const Type& x, const int64_t y) {
return x/y;
}
};
template <>
class impl_div_for_avg<big_decimal> {
public:
static big_decimal div(const big_decimal& x, const int64_t y) {
return x.div(y, big_decimal::rounding_mode::HALF_EVEN);
}
};
// We need a wider accumulator for average, since summing the inputs can overflow
// the input type
template <typename T>
struct accumulator_for;
template <>
struct accumulator_for<int8_t> {
using type = __int128;
};
template <>
struct accumulator_for<int16_t> {
using type = __int128;
};
template <>
struct accumulator_for<int32_t> {
using type = __int128;
};
template <>
struct accumulator_for<int64_t> {
using type = __int128;
};
template <>
struct accumulator_for<float> {
using type = float;
};
template <>
struct accumulator_for<double> {
using type = double;
};
template <>
struct accumulator_for<boost::multiprecision::cpp_int> {
using type = boost::multiprecision::cpp_int;
};
template <>
struct accumulator_for<big_decimal> {
using type = big_decimal;
};
template <typename Type>
class impl_avg_function_for final : public aggregate_function::aggregate {
Type _sum{};
typename accumulator_for<Type>::type _sum{};
int64_t _count = 0;
public:
virtual void reset() override {
@@ -121,9 +183,9 @@ public:
_count = 0;
}
virtual opt_bytes compute(cql_serialization_format sf) override {
Type ret = 0;
Type ret{};
if (_count) {
ret = _sum / _count;
ret = impl_div_for_avg<Type>::div(_sum, _count);
}
return data_type_for<Type>()->decompose(ret);
}

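The comment in the hunk above ("summing the inputs can overflow the input type") is easy to demonstrate: averaging two `int8_t` values of 100 in an `int8_t` accumulator wraps the sum before the division. A small sketch of the problem and the wide-accumulator fix, independent of the `accumulator_for` machinery (function names are illustrative):

```cpp
#include <cassert>
#include <cstdint>

// Buggy form: the running sum is kept in the input type and wraps.
int8_t avg_narrow(const int8_t* v, int n) {
    int8_t sum = 0; // 100 + 100 wraps to -56 here
    for (int i = 0; i < n; ++i) sum += v[i];
    return n ? static_cast<int8_t>(sum / n) : 0;
}

// Fixed form: a wider accumulator, as accumulator_for<> provides above.
int8_t avg_wide(const int8_t* v, int n) {
    int64_t sum = 0;
    for (int i = 0; i < n; ++i) sum += v[i];
    return n ? static_cast<int8_t>(sum / n) : 0;
}
```

The hunk goes further than plain widening: for `int8_t` through `int64_t` it uses `__int128`, and for `big_decimal` it pairs the wide sum with `HALF_EVEN` rounding in `impl_div_for_avg`.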
View File

@@ -0,0 +1,82 @@
/*
* Copyright (C) 2017 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "castas_fcts.hh"
#include "cql3/functions/native_scalar_function.hh"
namespace cql3 {
namespace functions {
namespace {
using bytes_opt = std::experimental::optional<bytes>;
class castas_function_for : public cql3::functions::native_scalar_function {
castas_fctn _func;
public:
castas_function_for(data_type to_type,
data_type from_type,
castas_fctn func)
: native_scalar_function("castas" + to_type->as_cql3_type()->to_string(), to_type, {from_type})
, _func(func) {
}
virtual bool is_pure() override {
return true;
}
virtual void print(std::ostream& os) const override {
os << "cast(" << _arg_types[0]->name() << " as " << _return_type->name() << ")";
}
virtual bytes_opt execute(cql_serialization_format sf, const std::vector<bytes_opt>& parameters) override {
auto from_type = arg_types()[0];
auto to_type = return_type();
auto&& val = parameters[0];
if (!val) {
return val;
}
auto val_from = from_type->deserialize(*val);
auto val_to = _func(val_from);
return to_type->decompose(val_to);
}
};
shared_ptr<function> make_castas_function(data_type to_type, data_type from_type, castas_fctn func) {
return ::make_shared<castas_function_for>(std::move(to_type), std::move(from_type), std::move(func));
}
} /* Anonymous Namespace */
shared_ptr<function> castas_functions::get(data_type to_type, const std::vector<shared_ptr<cql3::selection::selector>>& provided_args, schema_ptr s) {
if (provided_args.size() != 1) {
throw exceptions::invalid_request_exception("Invalid CAST expression");
}
auto from_type = provided_args[0]->get_type();
auto from_type_key = from_type;
if (from_type_key->is_reversed()) {
from_type_key = dynamic_cast<const reversed_type_impl&>(*from_type).underlying_type();
}
auto f = get_castas_fctn(to_type, from_type_key);
return make_castas_function(to_type, from_type, f);
}
}
}
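`castas_functions::get` above resolves a conversion function for a (target, source) type pair, unwrapping reversed types first, then wraps it in a scalar function. A minimal sketch of such a lookup table, using plain strings for type names (the key shape and the `get_castas_fctn` lookup body are assumptions):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <stdexcept>
#include <string>
#include <utility>

using castas_fctn = std::function<double(double)>;

// Conversion registry keyed on (to_type, from_type).
static const std::map<std::pair<std::string, std::string>, castas_fctn>& registry() {
    static const std::map<std::pair<std::string, std::string>, castas_fctn> r = {
        // double -> int truncates, then widens back for the demo's common return type.
        {{"int", "double"}, [](double v) { return static_cast<double>(static_cast<int>(v)); }},
        {{"double", "int"}, [](double v) { return v; }},
    };
    return r;
}

castas_fctn get_castas_fctn(const std::string& to, const std::string& from) {
    auto it = registry().find({to, from});
    if (it == registry().end()) {
        throw std::invalid_argument("no cast from " + from + " to " + to);
    }
    return it->second;
}
```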


@@ -0,0 +1,63 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Modified by ScyllaDB
*
* Copyright (C) 2017 ScyllaDB
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include <tuple>
#include <unordered_map>
#include "cql3/functions/function.hh"
#include "cql3/functions/abstract_function.hh"
#include "exceptions/exceptions.hh"
#include "core/print.hh"
#include "cql3/cql3_type.hh"
#include "cql3/selection/selector.hh"
namespace cql3 {
namespace functions {
class castas_functions {
public:
static shared_ptr<function> get(data_type to_type, const std::vector<shared_ptr<cql3::selection::selector>>& provided_args, schema_ptr s);
};
}
}


@@ -59,6 +59,14 @@ functions::init() {
declare(make_to_blob_function(type->get_type()));
declare(make_from_blob_function(type->get_type()));
}
declare(aggregate_fcts::make_count_function<int8_t>());
declare(aggregate_fcts::make_max_function<int8_t>());
declare(aggregate_fcts::make_min_function<int8_t>());
declare(aggregate_fcts::make_count_function<int16_t>());
declare(aggregate_fcts::make_max_function<int16_t>());
declare(aggregate_fcts::make_min_function<int16_t>());
declare(aggregate_fcts::make_count_function<int32_t>());
declare(aggregate_fcts::make_max_function<int32_t>());
declare(aggregate_fcts::make_min_function<int32_t>());
@@ -67,6 +75,14 @@ functions::init() {
declare(aggregate_fcts::make_max_function<int64_t>());
declare(aggregate_fcts::make_min_function<int64_t>());
declare(aggregate_fcts::make_count_function<boost::multiprecision::cpp_int>());
declare(aggregate_fcts::make_max_function<boost::multiprecision::cpp_int>());
declare(aggregate_fcts::make_min_function<boost::multiprecision::cpp_int>());
declare(aggregate_fcts::make_count_function<big_decimal>());
declare(aggregate_fcts::make_max_function<big_decimal>());
declare(aggregate_fcts::make_min_function<big_decimal>());
declare(aggregate_fcts::make_count_function<float>());
declare(aggregate_fcts::make_max_function<float>());
declare(aggregate_fcts::make_min_function<float>());
@@ -88,22 +104,22 @@ functions::init() {
declare(make_varchar_as_blob_fct());
declare(make_blob_as_varchar_fct());
declare(aggregate_fcts::make_sum_function<int8_t>());
declare(aggregate_fcts::make_sum_function<int16_t>());
declare(aggregate_fcts::make_sum_function<int32_t>());
declare(aggregate_fcts::make_sum_function<int64_t>());
declare(aggregate_fcts::make_sum_function<float>());
declare(aggregate_fcts::make_sum_function<double>());
#if 0
declare(AggregateFcts.sumFunctionForDecimal);
declare(AggregateFcts.sumFunctionForVarint);
#endif
declare(aggregate_fcts::make_sum_function<boost::multiprecision::cpp_int>());
declare(aggregate_fcts::make_sum_function<big_decimal>());
declare(aggregate_fcts::make_avg_function<int8_t>());
declare(aggregate_fcts::make_avg_function<int16_t>());
declare(aggregate_fcts::make_avg_function<int32_t>());
declare(aggregate_fcts::make_avg_function<int64_t>());
declare(aggregate_fcts::make_avg_function<float>());
declare(aggregate_fcts::make_avg_function<double>());
#if 0
declare(AggregateFcts.avgFunctionForVarint);
declare(AggregateFcts.avgFunctionForDecimal);
#endif
declare(aggregate_fcts::make_avg_function<boost::multiprecision::cpp_int>());
declare(aggregate_fcts::make_avg_function<big_decimal>());
// also needed for smp:
#if 0
@@ -342,7 +358,7 @@ function_call::execute_internal(cql_serialization_format sf, scalar_function& fu
fun.return_type()->validate(*result);
}
return result;
} catch (marshal_exception e) {
} catch (marshal_exception& e) {
throw runtime_exception(sprint("Return of function %s (%s) is not a valid value for its declared return type %s",
fun, to_hex(result),
*fun.return_type()->as_cql3_type()
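The one-line fix above (`catch (marshal_exception e)` to `catch (marshal_exception& e)`) avoids copying the exception object and, when a derived exception is thrown, avoids slicing it down to the base. A small illustration of the slicing problem (the class names here are hypothetical):

```cpp
#include <cassert>
#include <string>

struct base_error {
    virtual ~base_error() = default;
    virtual std::string what() const { return "base"; }
};

struct derived_error : base_error {
    std::string what() const override { return "derived"; }
};

std::string catch_by_value() {
    try {
        throw derived_error{};
    } catch (base_error e) {      // copies into a base object: the derived part is sliced off
        return e.what();
    }
    return "";
}

std::string catch_by_reference() {
    try {
        throw derived_error{};
    } catch (base_error& e) {     // binds to the thrown object: dynamic type preserved
        return e.what();
    }
    return "";
}
```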


@@ -46,15 +46,19 @@
namespace cql3 {
sstring
operation::set_element::to_string(const column_definition& receiver) const {
return format("{}[{}] = {}", receiver.name_as_text(), *_selector, *_value);
}
shared_ptr<operation>
operation::set_element::prepare(database& db, const sstring& keyspace, const column_definition& receiver) {
using exceptions::invalid_request_exception;
auto rtype = dynamic_pointer_cast<const collection_type_impl>(receiver.type);
if (!rtype) {
throw invalid_request_exception(sprint("Invalid operation (%s) for non collection column %s", receiver, receiver.name()));
throw invalid_request_exception(sprint("Invalid operation (%s) for non collection column %s", to_string(receiver), receiver.name()));
} else if (!rtype->is_multi_cell()) {
throw invalid_request_exception(sprint("Invalid operation (%s) for frozen collection column %s", receiver, receiver.name()));
throw invalid_request_exception(sprint("Invalid operation (%s) for frozen collection column %s", to_string(receiver), receiver.name()));
}
if (&rtype->_kind == &collection_type_impl::kind::list) {
@@ -67,7 +71,7 @@ operation::set_element::prepare(database& db, const sstring& keyspace, const col
return make_shared<lists::setter_by_index>(receiver, idx, lval);
}
} else if (&rtype->_kind == &collection_type_impl::kind::set) {
throw invalid_request_exception(sprint("Invalid operation (%s) for set column %s", receiver, receiver.name()));
throw invalid_request_exception(sprint("Invalid operation (%s) for set column %s", to_string(receiver), receiver.name()));
} else if (&rtype->_kind == &collection_type_impl::kind::map) {
auto key = _selector->prepare(db, keyspace, maps::key_spec_of(*receiver.column_specification));
auto mval = _value->prepare(db, keyspace, maps::value_spec_of(*receiver.column_specification));
@@ -83,6 +87,11 @@ operation::set_element::is_compatible_with(shared_ptr<raw_update> other) {
return !dynamic_pointer_cast<set_value>(std::move(other));
}
sstring
operation::addition::to_string(const column_definition& receiver) const {
return format("{} = {} + {}", receiver.name_as_text(), receiver.name_as_text(), *_value);
}
shared_ptr<operation>
operation::addition::prepare(database& db, const sstring& keyspace, const column_definition& receiver) {
auto v = _value->prepare(db, keyspace, receiver.column_specification);
@@ -90,11 +99,11 @@ operation::addition::prepare(database& db, const sstring& keyspace, const column
auto ctype = dynamic_pointer_cast<const collection_type_impl>(receiver.type);
if (!ctype) {
if (!receiver.is_counter()) {
throw exceptions::invalid_request_exception(sprint("Invalid operation (%s) for non counter column %s", receiver, receiver.name()));
throw exceptions::invalid_request_exception(sprint("Invalid operation (%s) for non counter column %s", to_string(receiver), receiver.name()));
}
return make_shared<constants::adder>(receiver, v);
} else if (!ctype->is_multi_cell()) {
throw exceptions::invalid_request_exception(sprint("Invalid operation (%s) for frozen collection column %s", receiver, receiver.name()));
throw exceptions::invalid_request_exception(sprint("Invalid operation (%s) for frozen collection column %s", to_string(receiver), receiver.name()));
}
if (&ctype->_kind == &collection_type_impl::kind::list) {
@@ -113,19 +122,24 @@ operation::addition::is_compatible_with(shared_ptr<raw_update> other) {
return !dynamic_pointer_cast<set_value>(other);
}
sstring
operation::subtraction::to_string(const column_definition& receiver) const {
return format("{} = {} - {}", receiver.name_as_text(), receiver.name_as_text(), *_value);
}
shared_ptr<operation>
operation::subtraction::prepare(database& db, const sstring& keyspace, const column_definition& receiver) {
auto ctype = dynamic_pointer_cast<const collection_type_impl>(receiver.type);
if (!ctype) {
if (!receiver.is_counter()) {
throw exceptions::invalid_request_exception(sprint("Invalid operation (%s) for non counter column %s", receiver, receiver.name()));
throw exceptions::invalid_request_exception(sprint("Invalid operation (%s) for non counter column %s", to_string(receiver), receiver.name()));
}
auto v = _value->prepare(db, keyspace, receiver.column_specification);
return make_shared<constants::subtracter>(receiver, v);
}
if (!ctype->is_multi_cell()) {
throw exceptions::invalid_request_exception(
sprint("Invalid operation (%s) for frozen collection column %s", receiver, receiver.name()));
sprint("Invalid operation (%s) for frozen collection column %s", to_string(receiver), receiver.name()));
}
if (&ctype->_kind == &collection_type_impl::kind::list) {
@@ -150,14 +164,19 @@ operation::subtraction::is_compatible_with(shared_ptr<raw_update> other) {
return !dynamic_pointer_cast<set_value>(other);
}
sstring
operation::prepend::to_string(const column_definition& receiver) const {
return format("{} = {} + {}", receiver.name_as_text(), *_value, receiver.name_as_text());
}
shared_ptr<operation>
operation::prepend::prepare(database& db, const sstring& keyspace, const column_definition& receiver) {
auto v = _value->prepare(db, keyspace, receiver.column_specification);
if (!dynamic_cast<const list_type_impl*>(receiver.type.get())) {
throw exceptions::invalid_request_exception(sprint("Invalid operation (%s) for non list column %s", receiver, receiver.name()));
throw exceptions::invalid_request_exception(sprint("Invalid operation (%s) for non list column %s", to_string(receiver), receiver.name()));
} else if (!receiver.type->is_multi_cell()) {
throw exceptions::invalid_request_exception(sprint("Invalid operation (%s) for frozen list column %s", receiver, receiver.name()));
throw exceptions::invalid_request_exception(sprint("Invalid operation (%s) for frozen list column %s", to_string(receiver), receiver.name()));
}
return make_shared<lists::prepender>(receiver, std::move(v));
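Each `raw_update` subclass above gains a `to_string(receiver)` so the invalid-request messages can print the operation itself (e.g. `col = col + value`) rather than formatting the receiver directly. A minimal sketch of the pattern (structs abbreviated and hypothetical):

```cpp
#include <cassert>
#include <string>

struct column { std::string name; };

struct addition {
    std::string value;
    // Renders "c = c + v", mirroring operation::addition::to_string above.
    std::string to_string(const column& receiver) const {
        return receiver.name + " = " + receiver.name + " + " + value;
    }
};

struct prepend {
    std::string value;
    // Prepend swaps the operand order: "c = v + c".
    std::string to_string(const column& receiver) const {
        return receiver.name + " = " + value + " + " + receiver.name;
    }
};
```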


@@ -203,6 +203,8 @@ public:
const shared_ptr<term::raw> _selector;
const shared_ptr<term::raw> _value;
const bool _by_uuid;
private:
sstring to_string(const column_definition& receiver) const;
public:
set_element(shared_ptr<term::raw> selector, shared_ptr<term::raw> value, bool by_uuid = false)
: _selector(std::move(selector)), _value(std::move(value)), _by_uuid(by_uuid) {
@@ -215,6 +217,8 @@ public:
class addition : public raw_update {
const shared_ptr<term::raw> _value;
private:
sstring to_string(const column_definition& receiver) const;
public:
addition(shared_ptr<term::raw> value)
: _value(value) {
@@ -227,6 +231,8 @@ public:
class subtraction : public raw_update {
const shared_ptr<term::raw> _value;
private:
sstring to_string(const column_definition& receiver) const;
public:
subtraction(shared_ptr<term::raw> value)
: _value(value) {
@@ -239,6 +245,8 @@ public:
class prepend : public raw_update {
shared_ptr<term::raw> _value;
private:
sstring to_string(const column_definition& receiver) const;
public:
prepend(shared_ptr<term::raw> value)
: _value(std::move(value)) {


@@ -71,7 +71,12 @@ private:
, _text(std::move(text))
{}
public:
operator_type(const operator_type&) = delete;
operator_type& operator=(const operator_type&) = delete;
const operator_type& reverse() const { return _reverse; }
bool is_slice() const {
return (*this == LT) || (*this == LTE) || (*this == GT) || (*this == GTE);
}
sstring to_string() const { return _text; }
bool operator==(const operator_type& other) const { return this == &other; }
bool operator!=(const operator_type& other) const { return this != &other; }
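`operator_type` compares by address (`this == &other`), so an accidental copy would silently break equality; the diff deletes the copy constructor and copy assignment to make that a compile error. A minimal sketch of the pattern:

```cpp
#include <cassert>

class op_type {
    const char* _text;
public:
    explicit op_type(const char* text) : _text(text) {}
    op_type(const op_type&) = delete;            // copies would break identity comparison
    op_type& operator=(const op_type&) = delete;
    bool operator==(const op_type& other) const { return this == &other; }
    bool operator!=(const op_type& other) const { return this != &other; }
};

// Each operator is a distinct, never-copied instance.
static const op_type LT{"<"};
static const op_type GT{">"};
```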


@@ -49,6 +49,23 @@ thread_local const query_options::specific_options query_options::specific_optio
thread_local query_options query_options::DEFAULT{db::consistency_level::ONE, std::experimental::nullopt,
std::vector<cql3::raw_value_view>(), false, query_options::specific_options::DEFAULT, cql_serialization_format::latest()};
query_options::query_options(db::consistency_level consistency,
std::experimental::optional<std::vector<sstring_view>> names,
std::vector<cql3::raw_value> values,
std::vector<cql3::raw_value_view> value_views,
bool skip_metadata,
specific_options options,
cql_serialization_format sf)
: _consistency(consistency)
, _names(std::move(names))
, _values(std::move(values))
, _value_views(value_views)
, _skip_metadata(skip_metadata)
, _options(std::move(options))
, _cql_serialization_format(sf)
{
}
query_options::query_options(db::consistency_level consistency,
std::experimental::optional<std::vector<sstring_view>> names,
std::vector<cql3::raw_value> values,
@@ -82,18 +99,29 @@ query_options::query_options(db::consistency_level consistency,
{
}
query_options::query_options(db::consistency_level cl, std::vector<cql3::raw_value> values)
query_options::query_options(db::consistency_level cl, std::vector<cql3::raw_value> values, specific_options options)
: query_options(
cl,
{},
std::move(values),
false,
query_options::specific_options::DEFAULT,
std::move(options),
cql_serialization_format::latest()
)
{
}
query_options::query_options(std::unique_ptr<query_options> qo, ::shared_ptr<service::pager::paging_state> paging_state)
: query_options(qo->_consistency,
std::move(qo->_names),
std::move(qo->_values),
std::move(qo->_value_views),
qo->_skip_metadata,
std::move(query_options::specific_options{qo->_options.page_size, paging_state, qo->_options.serial_consistency, qo->_options.timestamp}),
qo->_cql_serialization_format) {
}
query_options::query_options(std::vector<cql3::raw_value> values)
: query_options(
db::consistency_level::ONE, std::move(values))
@@ -181,19 +209,18 @@ void query_options::prepare(const std::vector<::shared_ptr<column_specification>
}
auto& names = *_names;
std::vector<cql3::raw_value> ordered_values;
std::vector<cql3::raw_value_view> ordered_values;
ordered_values.reserve(specs.size());
for (auto&& spec : specs) {
auto& spec_name = spec->name->text();
for (size_t j = 0; j < names.size(); j++) {
if (names[j] == spec_name) {
ordered_values.emplace_back(_values[j]);
ordered_values.emplace_back(_value_views[j]);
break;
}
}
}
_values = std::move(ordered_values);
fill_value_views();
_value_views = std::move(ordered_values);
}
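The `prepare` hunk above matches each bound name against the column specifications and emits the values in spec order; the patch also switches it to reorder the cheap views (`_value_views`) instead of the owning values. A minimal sketch of the reordering step, using strings for values (function and variable names here are illustrative):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Reorder `values` (parallel to `names`) into the order given by `specs`.
std::vector<std::string> reorder(const std::vector<std::string>& specs,
                                 const std::vector<std::string>& names,
                                 const std::vector<std::string>& values) {
    std::vector<std::string> ordered;
    ordered.reserve(specs.size());
    for (const auto& spec : specs) {
        for (size_t j = 0; j < names.size(); j++) {
            if (names[j] == spec) {
                ordered.push_back(values[j]);
                break;
            }
        }
    }
    return ordered;
}
```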
void query_options::fill_value_views()


@@ -108,6 +108,13 @@ public:
bool skip_metadata,
specific_options options,
cql_serialization_format sf);
explicit query_options(db::consistency_level consistency,
std::experimental::optional<std::vector<sstring_view>> names,
std::vector<cql3::raw_value> values,
std::vector<cql3::raw_value_view> value_views,
bool skip_metadata,
specific_options options,
cql_serialization_format sf);
explicit query_options(db::consistency_level consistency,
std::experimental::optional<std::vector<sstring_view>> names,
std::vector<cql3::raw_value_view> value_views,
@@ -140,7 +147,8 @@ public:
// forInternalUse
explicit query_options(std::vector<cql3::raw_value> values);
explicit query_options(db::consistency_level, std::vector<cql3::raw_value> values);
explicit query_options(db::consistency_level, std::vector<cql3::raw_value> values, specific_options options = specific_options::DEFAULT);
explicit query_options(std::unique_ptr<query_options>, ::shared_ptr<service::pager::paging_state> paging_state);
db::consistency_level get_consistency() const;
cql3::raw_value_view get_value_at(size_t idx) const;


@@ -38,19 +38,19 @@
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include <seastar/core/metrics.hh>
#define CRYPTOPP_ENABLE_NAMESPACE_WEAK 1
#include "cql3/query_processor.hh"
#include <cryptopp/md5.h>
#include <seastar/core/metrics.hh>
#include "cql3/CqlParser.hpp"
#include "cql3/error_collector.hh"
#include "cql3/statements/batch_statement.hh"
#include "cql3/util.hh"
#include "transport/messages/result_message.hh"
#define CRYPTOPP_ENABLE_NAMESPACE_WEAK 1
#include <cryptopp/md5.h>
namespace cql3 {
using namespace statements;
@@ -68,9 +68,8 @@ const std::chrono::minutes prepared_statements_cache::entry_expiry = std::chrono
class query_processor::internal_state {
service::query_state _qs;
public:
internal_state()
: _qs(service::client_state{service::client_state::internal_tag()})
{ }
internal_state() : _qs(service::client_state{service::client_state::internal_tag()}) {
}
operator service::query_state&() {
return _qs;
}
@@ -92,74 +91,102 @@ api::timestamp_type query_processor::next_timestamp() {
return _internal_state->next_timestamp();
}
query_processor::query_processor(distributed<service::storage_proxy>& proxy,
distributed<database>& db)
: _migration_subscriber{std::make_unique<migration_subscriber>(this)}
, _proxy(proxy)
, _db(db)
, _internal_state(new internal_state())
, _prepared_cache(prep_cache_log)
{
query_processor::query_processor(distributed<service::storage_proxy>& proxy, distributed<database>& db)
: _migration_subscriber{std::make_unique<migration_subscriber>(this)}
, _proxy(proxy)
, _db(db)
, _internal_state(new internal_state())
, _prepared_cache(prep_cache_log) {
namespace sm = seastar::metrics;
_metrics.add_group("query_processor", {
sm::make_derive("statements_prepared", _stats.prepare_invocations,
sm::description("Counts a total number of parsed CQL requests.")),
});
_metrics.add_group(
"query_processor",
{
sm::make_derive(
"statements_prepared",
_stats.prepare_invocations,
sm::description("Counts a total number of parsed CQL requests."))});
_metrics.add_group("cql", {
sm::make_derive("reads", _cql_stats.reads,
sm::description("Counts a total number of CQL read requests.")),
_metrics.add_group(
"cql",
{
sm::make_derive(
"reads",
_cql_stats.reads,
sm::description("Counts a total number of CQL read requests.")),
sm::make_derive("inserts", _cql_stats.inserts,
sm::description("Counts a total number of CQL INSERT requests.")),
sm::make_derive(
"inserts",
_cql_stats.inserts,
sm::description("Counts a total number of CQL INSERT requests.")),
sm::make_derive("updates", _cql_stats.updates,
sm::description("Counts a total number of CQL UPDATE requests.")),
sm::make_derive(
"updates",
_cql_stats.updates,
sm::description("Counts a total number of CQL UPDATE requests.")),
sm::make_derive("deletes", _cql_stats.deletes,
sm::description("Counts a total number of CQL DELETE requests.")),
sm::make_derive(
"deletes",
_cql_stats.deletes,
sm::description("Counts a total number of CQL DELETE requests.")),
sm::make_derive("batches", _cql_stats.batches,
sm::description("Counts a total number of CQL BATCH requests.")),
sm::make_derive(
"batches",
_cql_stats.batches,
sm::description("Counts a total number of CQL BATCH requests.")),
sm::make_derive("statements_in_batches", _cql_stats.statements_in_batches,
sm::description("Counts a total number of sub-statements in CQL BATCH requests.")),
sm::make_derive(
"statements_in_batches",
_cql_stats.statements_in_batches,
sm::description("Counts a total number of sub-statements in CQL BATCH requests.")),
sm::make_derive("batches_pure_logged", _cql_stats.batches_pure_logged,
sm::description("Counts a total number of LOGGED batches that were executed as LOGGED batches.")),
sm::make_derive(
"batches_pure_logged",
_cql_stats.batches_pure_logged,
sm::description(
"Counts a total number of LOGGED batches that were executed as LOGGED batches.")),
sm::make_derive("batches_pure_unlogged", _cql_stats.batches_pure_unlogged,
sm::description("Counts a total number of UNLOGGED batches that were executed as UNLOGGED batches.")),
sm::make_derive(
"batches_pure_unlogged",
_cql_stats.batches_pure_unlogged,
sm::description(
"Counts a total number of UNLOGGED batches that were executed as UNLOGGED "
"batches.")),
sm::make_derive("batches_unlogged_from_logged", _cql_stats.batches_unlogged_from_logged,
sm::description("Counts a total number of LOGGED batches that were executed as UNLOGGED batches.")),
sm::make_derive(
"batches_unlogged_from_logged",
_cql_stats.batches_unlogged_from_logged,
sm::description("Counts a total number of LOGGED batches that were executed as UNLOGGED "
"batches.")),
sm::make_derive("prepared_cache_evictions", [] { return prepared_statements_cache::shard_stats().prepared_cache_evictions; },
sm::description("Counts a number of prepared statements cache entries evictions.")),
sm::make_derive(
"prepared_cache_evictions",
[] { return prepared_statements_cache::shard_stats().prepared_cache_evictions; },
sm::description("Counts a number of prepared statements cache entries evictions.")),
sm::make_gauge("prepared_cache_size", [this] { return _prepared_cache.size(); },
sm::description("A number of entries in the prepared statements cache.")),
sm::make_gauge(
"prepared_cache_size",
[this] { return _prepared_cache.size(); },
sm::description("A number of entries in the prepared statements cache.")),
sm::make_gauge("prepared_cache_memory_footprint", [this] { return _prepared_cache.memory_footprint(); },
sm::description("Size (in bytes) of the prepared statements cache.")),
});
sm::make_gauge(
"prepared_cache_memory_footprint",
[this] { return _prepared_cache.memory_footprint(); },
sm::description("Size (in bytes) of the prepared statements cache."))});
service::get_local_migration_manager().register_listener(_migration_subscriber.get());
}
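The constructor above registers gauges via lambdas (`[this] { return _prepared_cache.size(); }`), so values are sampled lazily each time the metric is read rather than copied at registration. A minimal sketch of a callback-based gauge registry, not the Seastar API (names here are illustrative):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Gauges are stored as callbacks and invoked on every read,
// mirroring how make_gauge above captures `this` in a lambda.
class metrics_registry {
    std::map<std::string, std::function<double()>> _gauges;
public:
    void make_gauge(const std::string& name, std::function<double()> f) {
        _gauges[name] = std::move(f);
    }
    double read(const std::string& name) const { return _gauges.at(name)(); }
};
```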
query_processor::~query_processor()
{}
query_processor::~query_processor() {
}
future<> query_processor::stop()
{
future<> query_processor::stop() {
service::get_local_migration_manager().unregister_listener(_migration_subscriber.get());
return make_ready_future<>();
}
future<::shared_ptr<result_message>>
query_processor::process(const sstring_view& query_string, service::query_state& query_state, query_options& options)
{
query_processor::process(const sstring_view& query_string, service::query_state& query_state, query_options& options) {
log.trace("process: \"{}\"", query_string);
tracing::trace(query_state.get_trace_state(), "Parsing a statement");
auto p = get_statement(query_string, query_state.get_client_state());
@@ -179,14 +206,10 @@ query_processor::process(const sstring_view& query_string, service::query_state&
}
future<::shared_ptr<result_message>>
query_processor::process_statement(::shared_ptr<cql_statement> statement,
service::query_state& query_state,
const query_options& options)
{
#if 0
logger.trace("Process {} @CL.{}", statement, options.getConsistency());
#endif
query_processor::process_statement(
::shared_ptr<cql_statement> statement,
service::query_state& query_state,
const query_options& options) {
return statement->check_access(query_state.get_client_state()).then([this, statement, &query_state, &options]() {
auto& client_state = query_state.get_client_state();
@@ -210,38 +233,50 @@ query_processor::process_statement(::shared_ptr<cql_statement> statement,
}
future<::shared_ptr<cql_transport::messages::result_message::prepared>>
query_processor::prepare(sstring query_string, service::query_state& query_state)
{
query_processor::prepare(sstring query_string, service::query_state& query_state) {
auto& client_state = query_state.get_client_state();
return prepare(std::move(query_string), client_state, client_state.is_thrift());
}
future<::shared_ptr<cql_transport::messages::result_message::prepared>>
query_processor::prepare(sstring query_string, const service::client_state& client_state, bool for_thrift)
{
query_processor::prepare(sstring query_string, const service::client_state& client_state, bool for_thrift) {
using namespace cql_transport::messages;
if (for_thrift) {
return prepare_one<result_message::prepared::thrift>(std::move(query_string), client_state, compute_thrift_id, prepared_cache_key_type::thrift_id);
return prepare_one<result_message::prepared::thrift>(
std::move(query_string),
client_state,
compute_thrift_id, prepared_cache_key_type::thrift_id);
} else {
return prepare_one<result_message::prepared::cql>(std::move(query_string), client_state, compute_id, prepared_cache_key_type::cql_id);
return prepare_one<result_message::prepared::cql>(
std::move(query_string),
client_state,
compute_id,
prepared_cache_key_type::cql_id);
}
}
::shared_ptr<cql_transport::messages::result_message::prepared>
query_processor::get_stored_prepared_statement(const std::experimental::string_view& query_string,
const sstring& keyspace,
bool for_thrift)
{
query_processor::get_stored_prepared_statement(
const std::experimental::string_view& query_string,
const sstring& keyspace,
bool for_thrift) {
using namespace cql_transport::messages;
if (for_thrift) {
return get_stored_prepared_statement_one<result_message::prepared::thrift>(query_string, keyspace, compute_thrift_id, prepared_cache_key_type::thrift_id);
return get_stored_prepared_statement_one<result_message::prepared::thrift>(
query_string,
keyspace,
compute_thrift_id,
prepared_cache_key_type::thrift_id);
} else {
return get_stored_prepared_statement_one<result_message::prepared::cql>(query_string, keyspace, compute_id, prepared_cache_key_type::cql_id);
return get_stored_prepared_statement_one<result_message::prepared::cql>(
query_string,
keyspace,
compute_id,
prepared_cache_key_type::cql_id);
}
}
static bytes md5_calculate(const std::experimental::string_view& s)
{
static bytes md5_calculate(const std::experimental::string_view& s) {
constexpr size_t size = CryptoPP::Weak1::MD5::DIGESTSIZE;
CryptoPP::Weak::MD5 hash;
unsigned char digest[size];
@@ -253,13 +288,15 @@ static sstring hash_target(const std::experimental::string_view& query_string, c
return keyspace + query_string.to_string();
}
prepared_cache_key_type query_processor::compute_id(const std::experimental::string_view& query_string, const sstring& keyspace)
{
prepared_cache_key_type query_processor::compute_id(
const std::experimental::string_view& query_string,
const sstring& keyspace) {
return prepared_cache_key_type(md5_calculate(hash_target(query_string, keyspace)));
}
prepared_cache_key_type query_processor::compute_thrift_id(const std::experimental::string_view& query_string, const sstring& keyspace)
{
prepared_cache_key_type query_processor::compute_thrift_id(
const std::experimental::string_view& query_string,
const sstring& keyspace) {
auto target = hash_target(query_string, keyspace);
uint32_t h = 0;
for (auto&& c : hash_target(query_string, keyspace)) {
@@ -269,11 +306,7 @@ prepared_cache_key_type query_processor::compute_thrift_id(const std::experiment
}
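`compute_thrift_id` above folds the keyspace+query text into a 32-bit id with a running hash; the loop body is cut off by the diff, so the classic `h = 31 * h + c` fold shown here is an assumption:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// 31-multiplier string hash over the hash target (assumed fold).
uint32_t fold_hash(const std::string& target) {
    uint32_t h = 0;
    for (unsigned char c : target) {
        h = 31 * h + c;
    }
    return h;
}
```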
std::unique_ptr<prepared_statement>
query_processor::get_statement(const sstring_view& query, const service::client_state& client_state)
{
#if 0
Tracing.trace("Parsing {}", queryStr);
#endif
query_processor::get_statement(const sstring_view& query, const service::client_state& client_state) {
::shared_ptr<raw::parsed_statement> statement = parse_statement(query);
// Set keyspace for statement that require login
@@ -281,16 +314,12 @@ query_processor::get_statement(const sstring_view& query, const service::client_
if (cf_stmt) {
cf_stmt->prepare_keyspace(client_state);
}
#if 0
Tracing.trace("Preparing statement");
#endif
++_stats.prepare_invocations;
return statement->prepare(_db.local(), _cql_stats);
}
::shared_ptr<raw::parsed_statement>
query_processor::parse_statement(const sstring_view& query)
{
query_processor::parse_statement(const sstring_view& query) {
try {
auto statement = util::do_with_parser(query, std::mem_fn(&cql3_parser::CqlParser::query));
if (!statement) {
@@ -307,12 +336,14 @@ query_processor::parse_statement(const sstring_view& query)
}
}
query_options query_processor::make_internal_options(const statements::prepared_statement::checked_weak_ptr& p,
const std::initializer_list<data_value>& values,
db::consistency_level cl)
{
query_options query_processor::make_internal_options(
const statements::prepared_statement::checked_weak_ptr& p,
const std::initializer_list<data_value>& values,
db::consistency_level cl,
int32_t page_size) {
if (p->bound_names.size() != values.size()) {
throw std::invalid_argument(sprint("Invalid number of values. Expecting %d but got %d", p->bound_names.size(), values.size()));
throw std::invalid_argument(
sprint("Invalid number of values. Expecting %d but got %d", p->bound_names.size(), values.size()));
}
auto ni = p->bound_names.begin();
std::vector<cql3::raw_value> bound_values;
@@ -326,11 +357,19 @@ query_options query_processor::make_internal_options(const statements::prepared_
bound_values.push_back(cql3::raw_value::make_value(n->type->decompose(v)));
}
}
if (page_size > 0) {
::shared_ptr<service::pager::paging_state> paging_state;
db::consistency_level serial_consistency = db::consistency_level::SERIAL;
api::timestamp_type ts = api::missing_timestamp;
return query_options(
cl,
bound_values,
cql3::query_options::specific_options{page_size, std::move(paging_state), serial_consistency, ts});
}
return query_options(cl, bound_values);
}
statements::prepared_statement::checked_weak_ptr query_processor::prepare_internal(const sstring& query_string)
{
statements::prepared_statement::checked_weak_ptr query_processor::prepare_internal(const sstring& query_string) {
auto& p = _internal_statements[query_string];
if (p == nullptr) {
auto np = parse_statement(query_string)->prepare(_db.local(), _cql_stats);
@@ -341,33 +380,128 @@ statements::prepared_statement::checked_weak_ptr query_processor::prepare_intern
}
future<::shared_ptr<untyped_result_set>>
query_processor::execute_internal(const sstring& query_string,
const std::initializer_list<data_value>& values)
{
query_processor::execute_internal(const sstring& query_string, const std::initializer_list<data_value>& values) {
if (log.is_enabled(logging::log_level::trace)) {
log.trace("execute_internal: \"{}\" ({})", query_string, ::join(", ", values));
}
return execute_internal(prepare_internal(query_string), values);
}
struct internal_query_state {
sstring query_string;
std::unique_ptr<query_options> opts;
statements::prepared_statement::checked_weak_ptr p;
bool more_results = true;
};
::shared_ptr<internal_query_state> query_processor::create_paged_state(const sstring& query_string,
const std::initializer_list<data_value>& values, int32_t page_size) {
auto p = prepare_internal(query_string);
auto opts = make_internal_options(p, values, db::consistency_level::ONE, page_size);
::shared_ptr<internal_query_state> res = ::make_shared<internal_query_state>(
internal_query_state{
query_string,
std::make_unique<cql3::query_options>(std::move(opts)), std::move(p),
true});
return res;
}
bool query_processor::has_more_results(::shared_ptr<cql3::internal_query_state> state) const {
if (state) {
return state->more_results;
}
return false;
}
future<> query_processor::for_each_cql_result(
::shared_ptr<cql3::internal_query_state> state,
std::function<stop_iteration(const cql3::untyped_result_set::row&)>&& f) {
return do_with(seastar::shared_ptr<bool>(), [f, this, state](auto& is_done) mutable {
is_done = seastar::make_shared<bool>(false);
auto stop_when = [is_done]() {
return *is_done;
};
auto do_results = [is_done, state, f, this]() mutable {
return this->execute_paged_internal(
state).then([is_done, state, f, this](::shared_ptr<cql3::untyped_result_set> msg) mutable {
if (msg->empty()) {
*is_done = true;
} else {
if (!this->has_more_results(state)) {
*is_done = true;
}
for (auto& row : *msg) {
if (f(row) == stop_iteration::yes) {
*is_done = true;
break;
}
}
}
});
};
return do_until(stop_when, do_results);
});
}
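Stripped of futures, the loop in for_each_cql_result has this shape: fetch a page, visit its rows, and stop when a page comes back empty, the last page has been reached, or the callback asks to stop. The `page` type and `next_page` source below are hypothetical stand-ins for execute_paged_internal, not the tree's types:

```cpp
#include <cassert>
#include <functional>
#include <vector>

enum class stop_iteration { no, yes };

struct page {
    std::vector<int> rows;   // stand-in for untyped_result_set rows
    bool has_more = false;   // analogue of the HAS_MORE_PAGES flag
};

// Synchronous analogue of for_each_cql_result(): keep pulling pages
// until the source is exhausted, a page comes back empty, or the
// visitor asks to stop.
void for_each_row(std::function<page()> next_page,
                  std::function<stop_iteration(int)> f) {
    bool done = false;
    while (!done) {
        page p = next_page();
        if (p.rows.empty()) {
            break;
        }
        if (!p.has_more) {
            done = true;  // last page: still visit its rows below
        }
        for (int row : p.rows) {
            if (f(row) == stop_iteration::yes) {
                done = true;
                break;
            }
        }
    }
}
```

Note the same ordering as the original: the "more results" check happens before the rows of the current page are visited, so the final page is still delivered to the callback.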
future<::shared_ptr<untyped_result_set>>
query_processor::execute_internal(statements::prepared_statement::checked_weak_ptr p,
const std::initializer_list<data_value>& values)
{
auto opts = make_internal_options(p, values);
query_processor::execute_paged_internal(::shared_ptr<internal_query_state> state) {
return state->p->statement->execute_internal(_proxy, *_internal_state, *state->opts).then(
[state, this](::shared_ptr<cql_transport::messages::result_message> msg) mutable {
class visitor : public result_message::visitor_base {
::shared_ptr<internal_query_state> _state;
query_processor& _qp;
public:
visitor(::shared_ptr<internal_query_state> state, query_processor& qp) : _state(state), _qp(qp) {
}
virtual ~visitor() = default;
void visit(const result_message::rows& rmrs) override {
auto& rs = rmrs.rs();
if (rs.get_metadata().paging_state()) {
bool done = !rs.get_metadata().flags().contains<cql3::metadata::flag::HAS_MORE_PAGES>();
if (done) {
_state->more_results = false;
} else {
const service::pager::paging_state& st = *rs.get_metadata().paging_state();
shared_ptr<service::pager::paging_state> shrd = ::make_shared<service::pager::paging_state>(st);
_state->opts = std::make_unique<query_options>(std::move(_state->opts), shrd);
_state->p = _qp.prepare_internal(_state->query_string);
}
} else {
_state->more_results = false;
}
}
};
visitor v(state, *this);
if (msg != nullptr) {
msg->accept(v);
}
return make_ready_future<::shared_ptr<untyped_result_set>>(::make_shared<untyped_result_set>(msg));
});
}
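The visitor inside execute_paged_internal only cares about rows results, leaving every other message kind to the no-op base. The double-dispatch pattern it relies on, reduced to a minimal sketch (all type names here are hypothetical, not the tree's result_message hierarchy):

```cpp
#include <cassert>

struct rows_message;

// Minimal visitor-base pattern: derived visitors override only the
// message kinds they care about; everything else falls through to
// the empty default, as with result_message::visitor_base.
struct visitor_base {
    virtual ~visitor_base() = default;
    virtual void visit(const rows_message&) {}
};

struct message {
    virtual ~message() = default;
    virtual void accept(visitor_base& v) const = 0;
};

struct rows_message : message {
    bool has_more_pages = false;
    void accept(visitor_base& v) const override { v.visit(*this); }
};

// Analogue of the visitor in execute_paged_internal(): record whether
// another page should be fetched.
struct paging_visitor : visitor_base {
    bool more_results = true;
    void visit(const rows_message& m) override {
        more_results = m.has_more_pages;
    }
};
```

In the original, the visitor additionally copies the paging state into a fresh query_options so the next call resumes where the previous page ended; the sketch keeps only the dispatch and the flag.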
future<::shared_ptr<untyped_result_set>>
query_processor::execute_internal(
statements::prepared_statement::checked_weak_ptr p,
const std::initializer_list<data_value>& values) {
query_options opts = make_internal_options(p, values);
return do_with(std::move(opts), [this, p = std::move(p)](auto& opts) {
return p->statement->execute_internal(_proxy, *_internal_state, opts).then([stmt = p->statement](auto msg) {
return p->statement->execute_internal(
_proxy,
*_internal_state,
opts).then([&opts, stmt = p->statement](auto msg) {
return make_ready_future<::shared_ptr<untyped_result_set>>(::make_shared<untyped_result_set>(msg));
});
});
}
future<::shared_ptr<untyped_result_set>>
query_processor::process(const sstring& query_string,
db::consistency_level cl,
const std::initializer_list<data_value>& values,
bool cache)
{
query_processor::process(
const sstring& query_string,
db::consistency_level cl,
const std::initializer_list<data_value>& values,
bool cache) {
if (cache) {
return process(prepare_internal(query_string), cl, values);
} else {
@@ -379,10 +513,10 @@ query_processor::process(const sstring& query_string,
}
future<::shared_ptr<untyped_result_set>>
query_processor::process(statements::prepared_statement::checked_weak_ptr p,
db::consistency_level cl,
const std::initializer_list<data_value>& values)
{
query_processor::process(
statements::prepared_statement::checked_weak_ptr p,
db::consistency_level cl,
const std::initializer_list<data_value>& values) {
auto opts = make_internal_options(p, values, cl);
return do_with(std::move(opts), [this, p = std::move(p)](auto & opts) {
return p->statement->execute(_proxy, *_internal_state, opts).then([](auto msg) {
@@ -392,10 +526,10 @@ query_processor::process(statements::prepared_statement::checked_weak_ptr p,
}
future<::shared_ptr<cql_transport::messages::result_message>>
query_processor::process_batch(::shared_ptr<statements::batch_statement> batch,
service::query_state& query_state,
query_options& options)
{
query_processor::process_batch(
::shared_ptr<statements::batch_statement> batch,
service::query_state& query_state,
query_options& options) {
return batch->check_access(query_state.get_client_state()).then([this, &query_state, &options, batch] {
batch->validate();
batch->validate(_proxy, query_state.get_client_state());
@@ -403,101 +537,90 @@ query_processor::process_batch(::shared_ptr<statements::batch_statement> batch,
});
}
query_processor::migration_subscriber::migration_subscriber(query_processor* qp)
: _qp{qp}
{
query_processor::migration_subscriber::migration_subscriber(query_processor* qp) : _qp{qp} {
}
void query_processor::migration_subscriber::on_create_keyspace(const sstring& ks_name)
{
void query_processor::migration_subscriber::on_create_keyspace(const sstring& ks_name) {
}
void query_processor::migration_subscriber::on_create_column_family(const sstring& ks_name, const sstring& cf_name)
{
void query_processor::migration_subscriber::on_create_column_family(const sstring& ks_name, const sstring& cf_name) {
}
void query_processor::migration_subscriber::on_create_user_type(const sstring& ks_name, const sstring& type_name)
{
void query_processor::migration_subscriber::on_create_user_type(const sstring& ks_name, const sstring& type_name) {
}
void query_processor::migration_subscriber::on_create_function(const sstring& ks_name, const sstring& function_name)
{
void query_processor::migration_subscriber::on_create_function(const sstring& ks_name, const sstring& function_name) {
log.warn("{} event ignored", __func__);
}
void query_processor::migration_subscriber::on_create_aggregate(const sstring& ks_name, const sstring& aggregate_name)
{
void query_processor::migration_subscriber::on_create_aggregate(const sstring& ks_name, const sstring& aggregate_name) {
log.warn("{} event ignored", __func__);
}
void query_processor::migration_subscriber::on_create_view(const sstring& ks_name, const sstring& view_name)
{
void query_processor::migration_subscriber::on_create_view(const sstring& ks_name, const sstring& view_name) {
}
void query_processor::migration_subscriber::on_update_keyspace(const sstring& ks_name)
{
void query_processor::migration_subscriber::on_update_keyspace(const sstring& ks_name) {
}
void query_processor::migration_subscriber::on_update_column_family(const sstring& ks_name, const sstring& cf_name, bool columns_changed)
{
void query_processor::migration_subscriber::on_update_column_family(
const sstring& ks_name,
const sstring& cf_name,
bool columns_changed) {
// #1255: Ignoring columns_changed deliberately.
log.info("Column definitions for {}.{} changed, invalidating related prepared statements", ks_name, cf_name);
remove_invalid_prepared_statements(ks_name, cf_name);
}
void query_processor::migration_subscriber::on_update_user_type(const sstring& ks_name, const sstring& type_name)
{
void query_processor::migration_subscriber::on_update_user_type(const sstring& ks_name, const sstring& type_name) {
}
void query_processor::migration_subscriber::on_update_function(const sstring& ks_name, const sstring& function_name)
{
void query_processor::migration_subscriber::on_update_function(const sstring& ks_name, const sstring& function_name) {
}
void query_processor::migration_subscriber::on_update_aggregate(const sstring& ks_name, const sstring& aggregate_name)
{
void query_processor::migration_subscriber::on_update_aggregate(const sstring& ks_name, const sstring& aggregate_name) {
}
void query_processor::migration_subscriber::on_update_view(const sstring& ks_name, const sstring& view_name, bool columns_changed)
{
void query_processor::migration_subscriber::on_update_view(
const sstring& ks_name,
const sstring& view_name, bool columns_changed) {
}
void query_processor::migration_subscriber::on_drop_keyspace(const sstring& ks_name)
{
void query_processor::migration_subscriber::on_drop_keyspace(const sstring& ks_name) {
remove_invalid_prepared_statements(ks_name, std::experimental::nullopt);
}
void query_processor::migration_subscriber::on_drop_column_family(const sstring& ks_name, const sstring& cf_name)
{
void query_processor::migration_subscriber::on_drop_column_family(const sstring& ks_name, const sstring& cf_name) {
remove_invalid_prepared_statements(ks_name, cf_name);
}
void query_processor::migration_subscriber::on_drop_user_type(const sstring& ks_name, const sstring& type_name)
{
void query_processor::migration_subscriber::on_drop_user_type(const sstring& ks_name, const sstring& type_name) {
}
void query_processor::migration_subscriber::on_drop_function(const sstring& ks_name, const sstring& function_name)
{
void query_processor::migration_subscriber::on_drop_function(const sstring& ks_name, const sstring& function_name) {
log.warn("{} event ignored", __func__);
}
void query_processor::migration_subscriber::on_drop_aggregate(const sstring& ks_name, const sstring& aggregate_name)
{
void query_processor::migration_subscriber::on_drop_aggregate(const sstring& ks_name, const sstring& aggregate_name) {
log.warn("{} event ignored", __func__);
}
void query_processor::migration_subscriber::on_drop_view(const sstring& ks_name, const sstring& view_name)
{
void query_processor::migration_subscriber::on_drop_view(const sstring& ks_name, const sstring& view_name) {
remove_invalid_prepared_statements(ks_name, view_name);
}
void query_processor::migration_subscriber::remove_invalid_prepared_statements(sstring ks_name, std::experimental::optional<sstring> cf_name)
{
void query_processor::migration_subscriber::remove_invalid_prepared_statements(
sstring ks_name,
std::experimental::optional<sstring> cf_name) {
_qp->_prepared_cache.remove_if([&] (::shared_ptr<cql_statement> stmt) {
return this->should_invalidate(ks_name, cf_name, stmt);
});
}
bool query_processor::migration_subscriber::should_invalidate(sstring ks_name, std::experimental::optional<sstring> cf_name, ::shared_ptr<cql_statement> statement)
{
bool query_processor::migration_subscriber::should_invalidate(
sstring ks_name,
std::experimental::optional<sstring> cf_name,
::shared_ptr<cql_statement> statement) {
return statement->depends_on_keyspace(ks_name) && (!cf_name || statement->depends_on_column_family(*cf_name));
}
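should_invalidate above drops a prepared statement only if it depends on the changed keyspace and, when a table name was given, on that table as well. The predicate, sketched with std::optional in place of the tree's std::experimental::optional and plain strings standing in for the statement's dependency checks:

```cpp
#include <cassert>
#include <optional>
#include <string>

// Analogue of migration_subscriber::should_invalidate(): a statement is
// invalidated when it depends on the changed keyspace and, if a column
// family name was supplied, on that column family too.
bool should_invalidate(const std::string& stmt_ks, const std::string& stmt_cf,
                       const std::string& ks,
                       const std::optional<std::string>& cf) {
    return stmt_ks == ks && (!cf || stmt_cf == *cf);
}
```

An absent `cf` (as in on_drop_keyspace) therefore invalidates every statement in the keyspace.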


@@ -43,21 +43,22 @@
#include <experimental/string_view>
#include <unordered_map>
#include <seastar/core/metrics_registration.hh>
#include "core/shared_ptr.hh"
#include "exceptions/exceptions.hh"
#include <seastar/core/distributed.hh>
#include <seastar/core/metrics_registration.hh>
#include <seastar/core/shared_ptr.hh>
#include "cql3/prepared_statements_cache.hh"
#include "cql3/query_options.hh"
#include "cql3/statements/prepared_statement.hh"
#include "cql3/statements/raw/parsed_statement.hh"
#include "cql3/statements/raw/cf_statement.hh"
#include "cql3/untyped_result_set.hh"
#include "exceptions/exceptions.hh"
#include "log.hh"
#include "service/migration_manager.hh"
#include "service/query_state.hh"
#include "log.hh"
#include "core/distributed.hh"
#include "statements/prepared_statement.hh"
#include "transport/messages/result_message.hh"
#include "untyped_result_set.hh"
#include "prepared_statements_cache.hh"
namespace cql3 {
@@ -65,14 +66,22 @@ namespace statements {
class batch_statement;
}
class prepared_statement_is_too_big : public std::exception {
public:
static constexpr int max_query_prefix = 100;
class untyped_result_set;
class untyped_result_set_row;
private:
/*!
 * \brief internal state needed for paging; it is passed to the
 * execute statement between pages.
 */
struct internal_query_state;
class prepared_statement_is_too_big : public std::exception {
sstring _msg;
public:
static constexpr int max_query_prefix = 100;
prepared_statement_is_too_big(const sstring& query_string)
: _msg(seastar::format("Prepared statement is too big: {}", query_string.substr(0, max_query_prefix)))
{
@@ -107,15 +116,33 @@ private:
class internal_state;
std::unique_ptr<internal_state> _internal_state;
public:
query_processor(distributed<service::storage_proxy>& proxy, distributed<database>& db);
~query_processor();
prepared_statements_cache _prepared_cache;
// A map for prepared statements used internally (which we don't want to mix with user statements; in particular we
// don't bother with expiration on those).
std::unordered_map<sstring, std::unique_ptr<statements::prepared_statement>> _internal_statements;
public:
static const sstring CQL_VERSION;
static prepared_cache_key_type compute_id(
const std::experimental::string_view& query_string,
const sstring& keyspace);
static prepared_cache_key_type compute_thrift_id(
const std::experimental::string_view& query_string,
const sstring& keyspace);
static ::shared_ptr<statements::raw::parsed_statement> parse_statement(const std::experimental::string_view& query);
query_processor(distributed<service::storage_proxy>& proxy, distributed<database>& db);
~query_processor();
distributed<database>& db() {
return _db;
}
distributed<service::storage_proxy>& proxy() {
return _proxy;
}
@@ -124,125 +151,6 @@ public:
return _cql_stats;
}
#if 0
public static final QueryProcessor instance = new QueryProcessor();
#endif
private:
#if 0
private static final Logger logger = LoggerFactory.getLogger(QueryProcessor.class);
private static final MemoryMeter meter = new MemoryMeter().withGuessing(MemoryMeter.Guess.FALLBACK_BEST).ignoreKnownSingletons();
private static final long MAX_CACHE_PREPARED_MEMORY = Runtime.getRuntime().maxMemory() / 256;
private static EntryWeigher<MD5Digest, ParsedStatement.Prepared> cqlMemoryUsageWeigher = new EntryWeigher<MD5Digest, ParsedStatement.Prepared>()
{
@Override
public int weightOf(MD5Digest key, ParsedStatement.Prepared value)
{
return Ints.checkedCast(measure(key) + measure(value.statement) + measure(value.boundNames));
}
};
private static EntryWeigher<Integer, ParsedStatement.Prepared> thriftMemoryUsageWeigher = new EntryWeigher<Integer, ParsedStatement.Prepared>()
{
@Override
public int weightOf(Integer key, ParsedStatement.Prepared value)
{
return Ints.checkedCast(measure(key) + measure(value.statement) + measure(value.boundNames));
}
};
#endif
prepared_statements_cache _prepared_cache;
std::unordered_map<sstring, std::unique_ptr<statements::prepared_statement>> _internal_statements;
#if 0
// A map for prepared statements used internally (which we don't want to mix with user statement, in particular we don't
// bother with expiration on those.
private static final ConcurrentMap<String, ParsedStatement.Prepared> internalStatements = new ConcurrentHashMap<>();
// Direct calls to processStatement do not increment the preparedStatementsExecuted/regularStatementsExecuted
// counters. Callers of processStatement are responsible for correctly notifying metrics
public static final CQLMetrics metrics = new CQLMetrics();
private static final AtomicInteger lastMinuteEvictionsCount = new AtomicInteger(0);
static
{
preparedStatements = new ConcurrentLinkedHashMap.Builder<MD5Digest, ParsedStatement.Prepared>()
.maximumWeightedCapacity(MAX_CACHE_PREPARED_MEMORY)
.weigher(cqlMemoryUsageWeigher)
.listener(new EvictionListener<MD5Digest, ParsedStatement.Prepared>()
{
public void onEviction(MD5Digest md5Digest, ParsedStatement.Prepared prepared)
{
metrics.preparedStatementsEvicted.inc();
lastMinuteEvictionsCount.incrementAndGet();
}
}).build();
thriftPreparedStatements = new ConcurrentLinkedHashMap.Builder<Integer, ParsedStatement.Prepared>()
.maximumWeightedCapacity(MAX_CACHE_PREPARED_MEMORY)
.weigher(thriftMemoryUsageWeigher)
.listener(new EvictionListener<Integer, ParsedStatement.Prepared>()
{
public void onEviction(Integer integer, ParsedStatement.Prepared prepared)
{
metrics.preparedStatementsEvicted.inc();
lastMinuteEvictionsCount.incrementAndGet();
}
})
.build();
ScheduledExecutors.scheduledTasks.scheduleAtFixedRate(new Runnable()
{
public void run()
{
long count = lastMinuteEvictionsCount.getAndSet(0);
if (count > 0)
logger.info("{} prepared statements discarded in the last minute because cache limit reached ({} bytes)",
count,
MAX_CACHE_PREPARED_MEMORY);
}
}, 1, 1, TimeUnit.MINUTES);
}
public static int preparedStatementsCount()
{
return preparedStatements.size() + thriftPreparedStatements.size();
}
// Work around initialization dependency
private static enum InternalStateInstance
{
INSTANCE;
private final QueryState queryState;
InternalStateInstance()
{
ClientState state = ClientState.forInternalCalls();
try
{
state.setKeyspace(SystemKeyspace.NAME);
}
catch (InvalidRequestException e)
{
throw new RuntimeException();
}
this.queryState = new QueryState(state);
}
}
private static QueryState internalQueryState()
{
return InternalStateInstance.INSTANCE.queryState;
}
private QueryProcessor()
{
MigrationManager.instance.register(new MigrationSubscriber());
}
#endif
public:
statements::prepared_statement::checked_weak_ptr get_prepared(const prepared_cache_key_type& key) {
auto it = _prepared_cache.find(key);
if (it == _prepared_cache.end()) {
@@ -251,128 +159,69 @@ public:
return *it;
}
#if 0
public static void validateKey(ByteBuffer key) throws InvalidRequestException
{
if (key == null || key.remaining() == 0)
{
throw new InvalidRequestException("Key may not be empty");
}
future<::shared_ptr<cql_transport::messages::result_message>>
process_statement(
::shared_ptr<cql_statement> statement,
service::query_state& query_state,
const query_options& options);
// check that key can be handled by FBUtilities.writeShortByteArray
if (key.remaining() > FBUtilities.MAX_UNSIGNED_SHORT)
{
throw new InvalidRequestException("Key length of " + key.remaining() +
" is longer than maximum of " + FBUtilities.MAX_UNSIGNED_SHORT);
}
}
future<::shared_ptr<cql_transport::messages::result_message>>
process(
const std::experimental::string_view& query_string,
service::query_state& query_state,
query_options& options);
public static void validateCellNames(Iterable<CellName> cellNames, CellNameType type) throws InvalidRequestException
{
for (CellName name : cellNames)
validateCellName(name, type);
}
public static void validateCellName(CellName name, CellNameType type) throws InvalidRequestException
{
validateComposite(name, type);
if (name.isEmpty())
throw new InvalidRequestException("Invalid empty value for clustering column of COMPACT TABLE");
}
public static void validateComposite(Composite name, CType type) throws InvalidRequestException
{
long serializedSize = type.serializer().serializedSize(name, TypeSizes.NATIVE);
if (serializedSize > Cell.MAX_NAME_LENGTH)
throw new InvalidRequestException(String.format("The sum of all clustering columns is too long (%s > %s)",
serializedSize,
Cell.MAX_NAME_LENGTH));
}
#endif
public:
future<::shared_ptr<cql_transport::messages::result_message>> process_statement(::shared_ptr<cql_statement> statement,
service::query_state& query_state, const query_options& options);
#if 0
public static ResultMessage process(String queryString, ConsistencyLevel cl, QueryState queryState)
throws RequestExecutionException, RequestValidationException
{
return instance.process(queryString, queryState, QueryOptions.forInternalCalls(cl, Collections.<ByteBuffer>emptyList()));
}
#endif
future<::shared_ptr<cql_transport::messages::result_message>> process(const std::experimental::string_view& query_string,
service::query_state& query_state, query_options& options);
#if 0
public static ParsedStatement.Prepared parseStatement(String queryStr, QueryState queryState) throws RequestValidationException
{
return getStatement(queryStr, queryState.getClientState());
}
public static UntypedResultSet process(String query, ConsistencyLevel cl) throws RequestExecutionException
{
try
{
ResultMessage result = instance.process(query, QueryState.forInternalCalls(), QueryOptions.forInternalCalls(cl, Collections.<ByteBuffer>emptyList()));
if (result instanceof ResultMessage.Rows)
return UntypedResultSet.create(((ResultMessage.Rows)result).result);
else
return null;
}
catch (RequestValidationException e)
{
throw new RuntimeException(e);
}
}
private static QueryOptions makeInternalOptions(ParsedStatement.Prepared prepared, Object[] values)
{
if (prepared.boundNames.size() != values.length)
throw new IllegalArgumentException(String.format("Invalid number of values. Expecting %d but got %d", prepared.boundNames.size(), values.length));
List<ByteBuffer> boundValues = new ArrayList<ByteBuffer>(values.length);
for (int i = 0; i < values.length; i++)
{
Object value = values[i];
AbstractType type = prepared.boundNames.get(i).type;
boundValues.add(value instanceof ByteBuffer || value == null ? (ByteBuffer)value : type.decompose(value));
}
return QueryOptions.forInternalCalls(boundValues);
}
private static ParsedStatement.Prepared prepareInternal(String query) throws RequestValidationException
{
ParsedStatement.Prepared prepared = internalStatements.get(query);
if (prepared != null)
return prepared;
// Note: if 2 threads prepare the same query, we'll live so don't bother synchronizing
prepared = parseStatement(query, internalQueryState());
prepared.statement.validate(internalQueryState().getClientState());
internalStatements.putIfAbsent(query, prepared);
return prepared;
}
#endif
private:
query_options make_internal_options(const statements::prepared_statement::checked_weak_ptr& p, const std::initializer_list<data_value>&, db::consistency_level = db::consistency_level::ONE);
public:
future<::shared_ptr<untyped_result_set>> execute_internal(
const sstring& query_string,
const std::initializer_list<data_value>& = { });
future<::shared_ptr<untyped_result_set>>
execute_internal(const sstring& query_string, const std::initializer_list<data_value>& = { });
statements::prepared_statement::checked_weak_ptr prepare_internal(const sstring& query);
future<::shared_ptr<untyped_result_set>> execute_internal(
statements::prepared_statement::checked_weak_ptr p,
const std::initializer_list<data_value>& = { });
future<::shared_ptr<untyped_result_set>>
execute_internal(statements::prepared_statement::checked_weak_ptr p, const std::initializer_list<data_value>& = { });
/*!
 * \brief iterate over all CQL results using paging
 *
 * Create a statement with optional parameters and pass
 * a function that goes over the results.
 *
 * The passed function is called for every result row; return stop_iteration::yes
 * to stop the iteration.
 *
 * For example:
 return query("SELECT * from system.compaction_history",
 [&history] (const cql3::untyped_result_set::row& row) mutable {
 ....
 ....
 return stop_iteration::no;
 });
 * The query can contain placeholders; the statement is prepared only once.
 *
 * query_string - the CQL string, may contain placeholders
 * f - a function run on each row of the query result; returning stop_iteration::yes stops the iteration
 * args - arbitrary number of query parameters
 */
template<typename... Args>
future<> query(
const sstring& query_string,
std::function<stop_iteration(const cql3::untyped_result_set_row&)>&& f,
Args&&... args) {
return for_each_cql_result(
create_paged_state(query_string, { data_value(std::forward<Args>(args))... }), std::move(f));
}
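The query() helper above expands its trailing arguments into the bound-value list with a pack expansion. That packing step can be sketched on its own; `pack_values` and the string conversion below are hypothetical stand-ins for the `data_value(std::forward<Args>(args))...` expansion:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical analogue of the pack expansion in query(): convert each
// trailing argument into the bound-value list, preserving order.
template <typename... Args>
std::vector<std::string> pack_values(Args&&... args) {
    // Brace-init expands the pack left to right, mirroring
    // { data_value(std::forward<Args>(args))... }.
    return {std::to_string(std::forward<Args>(args))...};
}
```

The resulting list plays the role of the initializer_list handed to create_paged_state.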
future<::shared_ptr<untyped_result_set>> process(
const sstring& query_string,
db::consistency_level, const std::initializer_list<data_value>& = { }, bool cache = false);
const sstring& query_string,
db::consistency_level,
const std::initializer_list<data_value>& = { },
bool cache = false);
future<::shared_ptr<untyped_result_set>> process(
statements::prepared_statement::checked_weak_ptr p,
db::consistency_level, const std::initializer_list<data_value>& = { });
statements::prepared_statement::checked_weak_ptr p,
db::consistency_level,
const std::initializer_list<data_value>& = { });
/*
* This function provides a timestamp that is guaranteed to be higher than any timestamp
@@ -384,115 +233,110 @@ public:
*/
api::timestamp_type next_timestamp();
#if 0
public static UntypedResultSet executeInternalWithPaging(String query, int pageSize, Object... values)
{
try
{
ParsedStatement.Prepared prepared = prepareInternal(query);
if (!(prepared.statement instanceof SelectStatement))
throw new IllegalArgumentException("Only SELECTs can be paged");
SelectStatement select = (SelectStatement)prepared.statement;
QueryPager pager = QueryPagers.localPager(select.getPageableCommand(makeInternalOptions(prepared, values)));
return UntypedResultSet.create(select, pager, pageSize);
}
catch (RequestValidationException e)
{
throw new RuntimeException("Error validating query" + e);
}
}
/**
* Same than executeInternal, but to use for queries we know are only executed once so that the
* created statement object is not cached.
*/
public static UntypedResultSet executeOnceInternal(String query, Object... values)
{
try
{
ParsedStatement.Prepared prepared = parseStatement(query, internalQueryState());
prepared.statement.validate(internalQueryState().getClientState());
ResultMessage result = prepared.statement.executeInternal(internalQueryState(), makeInternalOptions(prepared, values));
if (result instanceof ResultMessage.Rows)
return UntypedResultSet.create(((ResultMessage.Rows)result).result);
else
return null;
}
catch (RequestExecutionException e)
{
throw new RuntimeException(e);
}
catch (RequestValidationException e)
{
throw new RuntimeException("Error validating query " + query, e);
}
}
public static UntypedResultSet resultify(String query, Row row)
{
return resultify(query, Collections.singletonList(row));
}
public static UntypedResultSet resultify(String query, List<Row> rows)
{
try
{
SelectStatement ss = (SelectStatement) getStatement(query, null).statement;
ResultSet cqlRows = ss.process(rows);
return UntypedResultSet.create(cqlRows);
}
catch (RequestValidationException e)
{
throw new AssertionError(e);
}
}
#endif
future<::shared_ptr<cql_transport::messages::result_message::prepared>>
prepare(sstring query_string, service::query_state& query_state);
future<::shared_ptr<cql_transport::messages::result_message::prepared>>
prepare(sstring query_string, const service::client_state& client_state, bool for_thrift);
static prepared_cache_key_type compute_id(const std::experimental::string_view& query_string, const sstring& keyspace);
static prepared_cache_key_type compute_thrift_id(const std::experimental::string_view& query_string, const sstring& keyspace);
future<> stop();
future<::shared_ptr<cql_transport::messages::result_message>>
process_batch(::shared_ptr<statements::batch_statement>, service::query_state& query_state, query_options& options);
std::unique_ptr<statements::prepared_statement> get_statement(
const std::experimental::string_view& query,
const service::client_state& client_state);
friend class migration_subscriber;
private:
query_options make_internal_options(
const statements::prepared_statement::checked_weak_ptr& p,
const std::initializer_list<data_value>&,
db::consistency_level = db::consistency_level::ONE,
int32_t page_size = -1);
/*!
* \brief creates a state object for paging
*
* When using paging internally a state object is needed.
*/
::shared_ptr<internal_query_state> create_paged_state(
const sstring& query_string,
const std::initializer_list<data_value>& = { },
int32_t page_size = 1000);
/*!
* \brief run a query using paging
*/
future<::shared_ptr<untyped_result_set>> execute_paged_internal(::shared_ptr<internal_query_state> state);
/*!
* \brief iterate over all results using paging
*/
future<> for_each_cql_result(
::shared_ptr<cql3::internal_query_state> state,
std::function<stop_iteration(const cql3::untyped_result_set_row&)>&& f);
/*!
* \brief checks, based on the state, whether there are additional results
* Users of the paging should not use the internal_query_state directly
*/
bool has_more_results(::shared_ptr<cql3::internal_query_state> state) const;
///
/// \tparam ResultMsgType type of the returned result message (CQL or Thrift)
/// \tparam PreparedKeyGenerator a function that generates the prepared statement cache key for given query and keyspace
/// \tparam IdGetter a function that returns the corresponding prepared statement ID (CQL or Thrift) for a given prepared statement cache key
/// \tparam PreparedKeyGenerator a function that generates the prepared statement cache key for a given query and
/// keyspace
/// \tparam IdGetter a function that returns the corresponding prepared statement ID (CQL or Thrift) for a given
/// prepared statement cache key
/// \param query_string
/// \param client_state
/// \param id_gen prepared ID generator, called before the first deferring
/// \param id_getter prepared ID getter, passed to deferred context by reference. The caller must ensure its liveness.
/// \param id_getter prepared ID getter, passed to deferred context by reference. The caller must ensure its
/// liveness.
/// \return
template <typename ResultMsgType, typename PreparedKeyGenerator, typename IdGetter>
future<::shared_ptr<cql_transport::messages::result_message::prepared>>
prepare_one(sstring query_string, const service::client_state& client_state, PreparedKeyGenerator&& id_gen, IdGetter&& id_getter) {
return do_with(id_gen(query_string, client_state.get_raw_keyspace()), std::move(query_string), [this, &client_state, &id_getter] (const prepared_cache_key_type& key, const sstring& query_string) {
prepare_one(
sstring query_string,
const service::client_state& client_state,
PreparedKeyGenerator&& id_gen,
IdGetter&& id_getter) {
return do_with(
id_gen(query_string, client_state.get_raw_keyspace()),
std::move(query_string),
[this, &client_state, &id_getter](const prepared_cache_key_type& key, const sstring& query_string) {
return _prepared_cache.get(key, [this, &query_string, &client_state] {
auto prepared = get_statement(query_string, client_state);
auto bound_terms = prepared->statement->get_bound_terms();
if (bound_terms > std::numeric_limits<uint16_t>::max()) {
throw exceptions::invalid_request_exception(sprint("Too many markers(?). %d markers exceed the allowed maximum of %d", bound_terms, std::numeric_limits<uint16_t>::max()));
throw exceptions::invalid_request_exception(
sprint("Too many markers(?). %d markers exceed the allowed maximum of %d",
bound_terms,
std::numeric_limits<uint16_t>::max()));
}
assert(bound_terms == prepared->bound_names.size());
prepared->raw_cql_statement = query_string;
return make_ready_future<std::unique_ptr<statements::prepared_statement>>(std::move(prepared));
}).then([&key, &id_getter] (auto prep_ptr) {
return make_ready_future<::shared_ptr<cql_transport::messages::result_message::prepared>>(::make_shared<ResultMsgType>(id_getter(key), std::move(prep_ptr)));
return make_ready_future<::shared_ptr<cql_transport::messages::result_message::prepared>>(
::make_shared<ResultMsgType>(id_getter(key), std::move(prep_ptr)));
}).handle_exception_type([&query_string] (typename prepared_statements_cache::statement_is_too_big&) {
return make_exception_future<::shared_ptr<cql_transport::messages::result_message::prepared>>(prepared_statement_is_too_big(query_string));
return make_exception_future<::shared_ptr<cql_transport::messages::result_message::prepared>>(
prepared_statement_is_too_big(query_string));
});
});
};
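prepare_one above asks the prepared-statement cache for the key and runs the expensive parse-and-prepare step only on a miss, so concurrent or repeated preparations of the same query reuse one entry. A minimal synchronous analogue (the map, key, and loader names here are hypothetical, not the tree's prepared_statements_cache API):

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical analogue of _prepared_cache.get(key, loader): return the
// cached statement for the key, invoking the loader only on a miss.
template <typename Loader>
const std::string& get_or_prepare(std::map<std::string, std::string>& cache,
                                  const std::string& key, Loader&& load) {
    auto it = cache.find(key);
    if (it == cache.end()) {
        it = cache.emplace(key, load()).first;  // prepare once, then reuse
    }
    return it->second;
}
```

The real cache also weighs entries and can reject statements that are too big, which is what the prepared_statement_is_too_big path above handles.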
template <typename ResultMsgType, typename KeyGenerator, typename IdGetter>
::shared_ptr<cql_transport::messages::result_message::prepared>
get_stored_prepared_statement_one(const std::experimental::string_view& query_string, const sstring& keyspace, KeyGenerator&& key_gen, IdGetter&& id_getter)
{
get_stored_prepared_statement_one(
const std::experimental::string_view& query_string,
const sstring& keyspace,
KeyGenerator&& key_gen,
IdGetter&& id_getter) {
auto cache_key = key_gen(query_string, keyspace);
auto it = _prepared_cache.find(cache_key);
if (it == _prepared_cache.end()) {
@@ -503,55 +347,15 @@ private:
}
::shared_ptr<cql_transport::messages::result_message::prepared>
get_stored_prepared_statement(const std::experimental::string_view& query_string, const sstring& keyspace, bool for_thrift);
#if 0
public ResultMessage processPrepared(CQLStatement statement, QueryState queryState, QueryOptions options)
throws RequestExecutionException, RequestValidationException
{
List<ByteBuffer> variables = options.getValues();
// Check to see if there are any bound variables to verify
if (!(variables.isEmpty() && (statement.getBoundTerms() == 0)))
{
if (variables.size() != statement.getBoundTerms())
throw new InvalidRequestException(String.format("there were %d markers(?) in CQL but %d bound variables",
statement.getBoundTerms(),
variables.size()));
// at this point there is a match in count between markers and variables that is non-zero
if (logger.isTraceEnabled())
for (int i = 0; i < variables.size(); i++)
logger.trace("[{}] '{}'", i+1, variables.get(i));
}
metrics.preparedStatementsExecuted.inc();
return processStatement(statement, queryState, options);
}
#endif
public:
future<::shared_ptr<cql_transport::messages::result_message>> process_batch(::shared_ptr<statements::batch_statement>,
service::query_state& query_state, query_options& options);
std::unique_ptr<statements::prepared_statement> get_statement(const std::experimental::string_view& query,
const service::client_state& client_state);
static ::shared_ptr<statements::raw::parsed_statement> parse_statement(const std::experimental::string_view& query);
#if 0
private static long measure(Object key)
{
return meter.measureDeep(key);
}
#endif
public:
future<> stop();
friend class migration_subscriber;
get_stored_prepared_statement(
const std::experimental::string_view& query_string,
const sstring& keyspace,
bool for_thrift);
};
class query_processor::migration_subscriber : public service::migration_listener {
query_processor* _qp;
public:
migration_subscriber(query_processor* qp);
@@ -575,9 +379,14 @@ public:
virtual void on_drop_function(const sstring& ks_name, const sstring& function_name) override;
virtual void on_drop_aggregate(const sstring& ks_name, const sstring& aggregate_name) override;
virtual void on_drop_view(const sstring& ks_name, const sstring& view_name) override;
private:
void remove_invalid_prepared_statements(sstring ks_name, std::experimental::optional<sstring> cf_name);
bool should_invalidate(sstring ks_name, std::experimental::optional<sstring> cf_name, ::shared_ptr<cql_statement> statement);
bool should_invalidate(
sstring ks_name,
std::experimental::optional<sstring> cf_name,
::shared_ptr<cql_statement> statement);
};
extern distributed<query_processor> _the_query_processor;


@@ -74,11 +74,9 @@ public:
get_delegate()->merge_with(restriction);
}
#if 0
virtual bool has_supporting_index(::shared_ptr<secondary_index_manager> index_manager) override {
virtual bool has_supporting_index(const secondary_index::secondary_index_manager& index_manager) const override {
return get_delegate()->has_supporting_index(index_manager);
}
#endif
virtual std::vector<bytes_opt> values(const query_options& options) const override {
return get_delegate()->values(options);


@@ -128,19 +128,18 @@ protected:
}
return str;
}
#if 0
@Override
public final boolean hasSupportingIndex(SecondaryIndexManager indexManager)
{
for (ColumnDefinition columnDef : columnDefs)
{
SecondaryIndex index = indexManager.getIndexForColumn(columnDef.name.bytes);
if (index != null && isSupportedBy(index))
virtual bool has_supporting_index(const secondary_index::secondary_index_manager& index_manager) const override {
for (const auto& index : index_manager.list_indexes()) {
if (is_supported_by(index))
return true;
}
return false;
}
virtual bool is_supported_by(const secondary_index::index& index) const = 0;
#if 0
/**
* Check if this type of restriction is supported for the specified column by the specified index.
* @param index the Secondary index
@@ -172,6 +171,15 @@ public:
return abstract_restriction::term_uses_function(_value, ks_name, function_name);
}
virtual bool is_supported_by(const secondary_index::index& index) const override {
for (auto* cdef : _column_defs) {
if (index.supports_expression(*cdef, cql3::operator_type::EQ)) {
return true;
}
}
return false;
}
virtual sstring to_string() const override {
return sprint("EQ(%s)", _value->to_string());
}
@@ -232,6 +240,15 @@ class multi_column_restriction::IN : public multi_column_restriction {
public:
using multi_column_restriction::multi_column_restriction;
virtual bool is_supported_by(const secondary_index::index& index) const override {
for (auto* cdef : _column_defs) {
if (index.supports_expression(*cdef, cql3::operator_type::IN)) {
return true;
}
}
return false;
}
virtual bool is_IN() const override {
return true;
}
@@ -379,6 +396,15 @@ public:
: slice(schema, defs, term_slice::new_instance(bound, inclusive, term))
{ }
virtual bool is_supported_by(const secondary_index::index& index) const override {
for (auto* cdef : _column_defs) {
if (_slice.is_supported_by(*cdef, index)) {
return true;
}
}
return false;
}
virtual bool is_slice() const override {
return true;
}

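The hunks above follow one pattern throughout the restrictions hierarchy: a shared `has_supporting_index` driver walks the table's index list, and each concrete restriction answers `is_supported_by` for a single index. A sketch of that dispatch with stand-in types (`sec_index` and the operator strings are illustrative, not Scylla's actual API):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Stand-in for secondary_index::index: a set of operators it can serve.
struct sec_index {
    std::vector<std::string> supported_ops; // e.g. {"EQ", "IN"}
    bool supports_expression(const std::string& op) const {
        for (const auto& o : supported_ops) {
            if (o == op) {
                return true;
            }
        }
        return false;
    }
};

struct restriction {
    virtual ~restriction() = default;
    // Per-restriction predicate: can this one index serve the restriction?
    virtual bool is_supported_by(const sec_index& idx) const = 0;
    // Shared driver: any index in the manager's list is enough.
    bool has_supporting_index(const std::vector<sec_index>& indexes) const {
        for (const auto& idx : indexes) {
            if (is_supported_by(idx)) {
                return true;
            }
        }
        return false;
    }
};

struct eq_restriction : restriction {
    bool is_supported_by(const sec_index& idx) const override {
        return idx.supports_expression("EQ");
    }
};
```

Keeping the loop in the base class means each new restriction kind (EQ, IN, slice, CONTAINS) only has to state which operator an index must support.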

@@ -87,6 +87,7 @@ public:
virtual std::vector<bounds_range_type> bounds_ranges(const query_options& options) const = 0;
using restrictions::uses_function;
using restrictions::has_supporting_index;
bool empty() const override {
return get_column_defs().empty();


@@ -43,6 +43,7 @@
#include <vector>
#include "index/secondary_index_manager.hh"
#include "cql3/query_options.hh"
#include "cql3/statements/bound.hh"
#include "types.hh"
@@ -107,15 +108,15 @@ public:
*/
virtual void merge_with(::shared_ptr<restriction> other) = 0;
#if 0
/**
* Check if the restriction is on indexed columns.
*
* @param indexManager the index manager
* @return <code>true</code> if the restriction is on indexed columns, <code>false</code>
*/
public boolean hasSupportingIndex(SecondaryIndexManager indexManager);
virtual bool has_supporting_index(const secondary_index::secondary_index_manager& index_manager) const = 0;
#if 0
/**
* Adds to the specified list the <code>IndexExpression</code>s corresponding to this <code>Restriction</code>.
*


@@ -47,6 +47,7 @@
#include "cql3/query_options.hh"
#include "types.hh"
#include "schema.hh"
#include "index/secondary_index_manager.hh"
namespace cql3 {
@@ -74,15 +75,15 @@ public:
*/
virtual bool uses_function(const sstring& ks_name, const sstring& function_name) const = 0;
#if 0
/**
* Check if the restriction is on indexed columns.
*
* @param index_manager the index manager
* @return <code>true</code> if the restriction is on indexed columns, <code>false</code>
*/
virtual bool has_supporting_index(::shared_ptr<secondary_index_manager> index_manager) const = 0;
virtual bool has_supporting_index(const secondary_index::secondary_index_manager& index_manager) const = 0;
#if 0
/**
* Adds to the specified list the <code>index_expression</code>s corresponding to this <code>Restriction</code>.
*


@@ -316,11 +316,11 @@ public:
fail(unimplemented::cause::LEGACY_COMPOSITE_KEYS); // not 100% correct...
}
#if 0
virtual bool hasSupportingIndex(SecondaryIndexManager indexManager) override {
return restrictions.hasSupportingIndex(indexManager);
virtual bool has_supporting_index(const secondary_index::secondary_index_manager& index_manager) const override {
return _restrictions->has_supporting_index(index_manager);
}
#if 0
virtual void addIndexExpressionTo(List<IndexExpression> expressions, QueryOptions options) override {
restrictions.addIndexExpressionTo(expressions, options);
}


@@ -82,14 +82,18 @@ public:
ByteBuffer value = validateIndexedValue(columnDef, values.get(0));
expressions.add(new IndexExpression(columnDef.name.bytes, Operator.EQ, value));
}
#endif
@Override
public boolean hasSupportingIndex(SecondaryIndexManager indexManager)
{
SecondaryIndex index = indexManager.getIndexForColumn(columnDef.name.bytes);
return index != null && isSupportedBy(index);
virtual bool has_supporting_index(const secondary_index::secondary_index_manager& index_manager) const override {
for (const auto& index : index_manager.list_indexes()) {
if (is_supported_by(index))
return true;
}
return false;
}
virtual bool is_supported_by(const secondary_index::index& index) const = 0;
#if 0
/**
* Check if this type of restriction is supported by the specified index.
*
@@ -129,6 +133,10 @@ public:
return abstract_restriction::term_uses_function(_value, ks_name, function_name);
}
virtual bool is_supported_by(const secondary_index::index& index) const override {
return index.supports_expression(_column_def, cql3::operator_type::EQ);
}
virtual bool is_EQ() const override {
return true;
}
@@ -178,6 +186,10 @@ public:
return true;
}
virtual bool is_supported_by(const secondary_index::index& index) const override {
return index.supports_expression(_column_def, cql3::operator_type::IN);
}
virtual void merge_with(::shared_ptr<restriction> r) override {
throw exceptions::invalid_request_exception(sprint(
"%s cannot be restricted by more than one relation if it includes a IN", _column_def.name_as_text()));
@@ -190,6 +202,14 @@ public:
const query_options& options,
gc_clock::time_point now) const override;
virtual std::vector<bytes_opt> values_raw(const query_options& options) const = 0;
virtual std::vector<bytes_opt> values(const query_options& options) const override {
std::vector<bytes_opt> ret = values_raw(options);
std::sort(ret.begin(), ret.end());
ret.erase(std::unique(ret.begin(), ret.end()), ret.end());
return ret;
}
#if 0
@Override
protected final boolean isSupportedBy(SecondaryIndex index)
@@ -212,7 +232,7 @@ public:
return abstract_restriction::term_uses_function(_values, ks_name, function_name);
}
virtual std::vector<bytes_opt> values(const query_options& options) const override {
virtual std::vector<bytes_opt> values_raw(const query_options& options) const override {
std::vector<bytes_opt> ret;
for (auto&& v : _values) {
ret.emplace_back(to_bytes_opt(v->bind_and_get(options)));
@@ -237,7 +257,7 @@ public:
return false;
}
virtual std::vector<bytes_opt> values(const query_options& options) const override {
virtual std::vector<bytes_opt> values_raw(const query_options& options) const override {
auto&& lval = dynamic_pointer_cast<multi_item_terminal>(_marker->bind(options));
if (!lval) {
throw exceptions::invalid_request_exception("Invalid null value for IN restriction");
@@ -264,6 +284,10 @@ public:
|| (_slice.has_bound(statements::bound::END) && abstract_restriction::term_uses_function(_slice.bound(statements::bound::END), ks_name, function_name));
}
virtual bool is_supported_by(const secondary_index::index& index) const override {
return _slice.is_supported_by(_column_def, index);
}
virtual bool is_slice() const override {
return true;
}
@@ -403,22 +427,21 @@ public:
target.add(new IndexExpression(columnDef.name.bytes, op, value));
}
}
#endif
virtual bool is_supported_by(SecondaryIndex index) override {
virtual bool is_supported_by(const secondary_index::index& index) const override {
bool supported = false;
if (numberOfValues() > 0)
supported |= index.supportsOperator(Operator.CONTAINS);
if (numberOfKeys() > 0)
supported |= index.supportsOperator(Operator.CONTAINS_KEY);
if (numberOfEntries() > 0)
supported |= index.supportsOperator(Operator.EQ);
if (number_of_values() > 0) {
supported |= index.supports_expression(_column_def, cql3::operator_type::CONTAINS);
}
if (number_of_keys() > 0) {
supported |= index.supports_expression(_column_def, cql3::operator_type::CONTAINS_KEY);
}
if (number_of_entries() > 0) {
supported |= index.supports_expression(_column_def, cql3::operator_type::EQ);
}
return supported;
}
#endif
uint32_t number_of_values() const {
return _values.size();

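The `values()`/`values_raw()` split above centralizes deduplication of IN-list values, which is the fix for #2837: each value in an IN list spawns a per-partition read, so duplicate literals produced duplicate rows. A minimal sketch of the sort-and-unique step, using plain ints where the real code operates on `std::vector<bytes_opt>`:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative only: Scylla applies this in
// single_column_restriction::IN::values(); ints keep the sketch short.
std::vector<int> dedup_in_values(std::vector<int> raw) {
    // std::unique only collapses adjacent duplicates, so sort first.
    std::sort(raw.begin(), raw.end());
    raw.erase(std::unique(raw.begin(), raw.end()), raw.end());
    return raw;
}
```

Without the erase step, a query like `WHERE pk IN (1, 1)` would activate two executors for the same partition and return the row twice.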

@@ -150,8 +150,7 @@ public:
}
}
#if 0
virtual bool has_supporting_index(::shared_ptr<secondary_index_manager> index_manager) const override {
virtual bool has_supporting_index(const secondary_index::secondary_index_manager& index_manager) const override {
for (auto&& e : _restrictions) {
if (e.second->has_supporting_index(index_manager)) {
return true;
@@ -159,7 +158,6 @@ public:
}
return false;
}
#endif
/**
* Returns the column after the specified one.


@@ -87,6 +87,9 @@ public:
uint32_t size() const override {
return 0;
}
virtual bool has_supporting_index(const secondary_index::secondary_index_manager& index_manager) const override {
return false;
}
sstring to_string() const override {
return "Initial restrictions";
}
@@ -198,16 +201,12 @@ statement_restrictions::statement_restrictions(database& db,
}
}
}
warn(unimplemented::cause::INDEXES);
#if 0
ColumnFamilyStore cfs = Keyspace.open(cfm.ks_name).getColumnFamilyStore(cfm.cfName);
secondary_index_manager secondaryIndexManager = cfs.index_manager;
#endif
bool has_queriable_clustering_column_index = false; /*_clustering_columns_restrictions->has_supporting_index(secondaryIndexManager);*/
bool has_queriable_index = false; /*has_queriable_clustering_column_index
|| _partition_key_restrictions->has_supporting_index(secondaryIndexManager)
|| nonprimary_key_restrictions->has_supporting_index(secondaryIndexManager);*/
auto& cf = db.find_column_family(schema);
auto& sim = cf.get_index_manager();
bool has_queriable_clustering_column_index = _clustering_columns_restrictions->has_supporting_index(sim);
bool has_queriable_index = has_queriable_clustering_column_index
|| _partition_key_restrictions->has_supporting_index(sim)
|| _nonprimary_key_restrictions->has_supporting_index(sim);
// At this point, the select statement is fully constructed, but we still have a few things to validate
process_partition_key_restrictions(has_queriable_index, for_view);
@@ -273,10 +272,7 @@ statement_restrictions::statement_restrictions(database& db,
}
if (_uses_secondary_indexing && !for_view) {
fail(unimplemented::cause::INDEXES);
#if 0
validate_secondary_index_selections(selects_only_static_columns);
#endif
}
}
@@ -307,6 +303,10 @@ bool statement_restrictions::uses_function(const sstring& ks_name, const sstring
|| _nonprimary_key_restrictions->uses_function(ks_name, function_name);
}
const std::vector<::shared_ptr<restrictions>>& statement_restrictions::index_restrictions() const {
return _index_restrictions;
}
void statement_restrictions::process_partition_key_restrictions(bool has_queriable_index, bool for_view) {
// If there is a queriable index, no special conditions are required on the other restrictions.
// But we still need to know 2 things:


@@ -124,6 +124,8 @@ private:
public:
bool uses_function(const sstring& ks_name, const sstring& function_name) const;
const std::vector<::shared_ptr<restrictions>>& index_restrictions() const;
/**
* Checks if the restrictions on the partition key is an IN restriction.
*


@@ -117,6 +117,21 @@ public:
}
}
bool is_supported_by(const column_definition& cdef, const secondary_index::index& index) const {
bool supported = false;
if (has_bound(statements::bound::START)) {
supported |= is_inclusive(statements::bound::START)
? index.supports_expression(cdef, cql3::operator_type::GTE)
: index.supports_expression(cdef, cql3::operator_type::GT);
}
if (has_bound(statements::bound::END)) {
supported |= is_inclusive(statements::bound::END)
? index.supports_expression(cdef, cql3::operator_type::LTE)
: index.supports_expression(cdef, cql3::operator_type::LT);
}
return supported;
}
sstring to_string() const {
static auto print_term = [] (::shared_ptr<term> t) -> sstring {
return t ? t->to_string() : "null";

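`term_slice::is_supported_by` above maps each present bound and its inclusivity to the operator the index must support: an inclusive start needs GTE, an exclusive start GT, and symmetrically LTE/LT for the end bound. The mapping is small enough to show directly (operator strings stand in for `cql3::operator_type`; names are illustrative):

```cpp
#include <cassert>
#include <string>

enum class slice_bound { start, end };

// Which operator must an index support to serve one bound of a slice?
// (start, inclusive) -> GTE, (start, exclusive) -> GT,
// (end, inclusive)   -> LTE, (end, exclusive)   -> LT.
std::string required_operator(slice_bound b, bool inclusive) {
    if (b == slice_bound::start) {
        return inclusive ? "GTE" : "GT";
    }
    return inclusive ? "LTE" : "LT";
}
```

A slice with both bounds is supported if the index can serve either bound's operator, which is why the real code ORs the two checks together.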

@@ -73,11 +73,11 @@ public:
return _column_definitions;
}
#if 0
bool has_supporting_index(::shared_ptr<secondary_index_manager> index_manager) const override {
virtual bool has_supporting_index(const secondary_index::secondary_index_manager& index_manager) const override {
return false;
}
#if 0
void add_index_expression_to(std::vector<::shared_ptr<index_expression>>& expressions,
const query_options& options) override {
throw exceptions::unsupported_operation_exception();


@@ -25,6 +25,7 @@
#include "writetime_or_ttl.hh"
#include "selector_factories.hh"
#include "cql3/functions/functions.hh"
#include "cql3/functions/castas_fcts.hh"
#include "abstract_function_selector.hh"
#include "writetime_or_ttl_selector.hh"
@@ -141,6 +142,30 @@ selectable::with_field_selection::raw::processes_selection() const {
return true;
}
shared_ptr<selector::factory>
selectable::with_cast::new_selector_factory(database& db, schema_ptr s, std::vector<const column_definition*>& defs) {
std::vector<shared_ptr<selectable>> args{_arg};
auto&& factories = selector_factories::create_factories_and_collect_column_definitions(args, db, s, defs);
auto&& fun = functions::castas_functions::get(_type->get_type(), factories->new_instances(), s);
return abstract_function_selector::new_factory(std::move(fun), std::move(factories));
}
sstring
selectable::with_cast::to_string() const {
return sprint("cast(%s as %s)", _arg->to_string(), _type->to_string());
}
shared_ptr<selectable>
selectable::with_cast::raw::prepare(schema_ptr s) {
return ::make_shared<selectable::with_cast>(_arg->prepare(s), _type);
}
bool
selectable::with_cast::raw::processes_selection() const {
return true;
}
std::ostream & operator<<(std::ostream &os, const selectable& s) {
return os << s.to_string();
}


@@ -45,6 +45,7 @@
#include "schema.hh"
#include "core/shared_ptr.hh"
#include "cql3/selection/selector.hh"
#include "cql3/cql3_type.hh"
#include "cql3/functions/function_name.hh"
namespace cql3 {
@@ -83,6 +84,8 @@ public:
class with_function;
class with_field_selection;
class with_cast;
};
std::ostream & operator<<(std::ostream &os, const selectable& s);
@@ -110,6 +113,29 @@ public:
};
};
class selectable::with_cast : public selectable {
::shared_ptr<selectable> _arg;
::shared_ptr<cql3_type> _type;
public:
with_cast(::shared_ptr<selectable> arg, ::shared_ptr<cql3_type> type)
: _arg(std::move(arg)), _type(std::move(type)) {
}
virtual sstring to_string() const override;
virtual shared_ptr<selector::factory> new_selector_factory(database& db, schema_ptr s, std::vector<const column_definition*>& defs) override;
class raw : public selectable::raw {
::shared_ptr<selectable::raw> _arg;
::shared_ptr<cql3_type> _type;
public:
raw(shared_ptr<selectable::raw> arg, ::shared_ptr<cql3_type> type)
: _arg(std::move(arg)), _type(std::move(type)) {
}
virtual shared_ptr<selectable> prepare(schema_ptr s) override;
virtual bool processes_selection() const override;
};
};
}
}


@@ -130,6 +130,10 @@ single_column_relation::to_receivers(schema_ptr schema, const column_definition&
}
}
if (is_contains() && !receiver->type->is_collection()) {
throw exceptions::invalid_request_exception(sprint("Cannot use CONTAINS on non-collection column \"%s\"", receiver->name));
}
if (is_contains_key()) {
if (!dynamic_cast<const map_type_impl*>(receiver->type.get())) {
throw exceptions::invalid_request_exception(sprint("Cannot use CONTAINS KEY on non-map column %s", receiver->name));


@@ -43,6 +43,7 @@
#include <vector>
#include "cql3/restrictions/single_column_restriction.hh"
#include "statements/request_validations.hh"
#include "core/shared_ptr.hh"
#include "to_string.hh"
@@ -157,6 +158,19 @@ protected:
statements::bound bound,
bool inclusive) override {
auto&& column_def = to_column_definition(schema, _entity);
if (column_def.type->references_duration()) {
using statements::request_validations::check_false;
const auto& ty = *column_def.type;
check_false(ty.is_collection(), "Slice restrictions are not supported on collections containing durations");
check_false(ty.is_tuple(), "Slice restrictions are not supported on tuples containing durations");
check_false(ty.is_user_type(), "Slice restrictions are not supported on UDTs containing durations");
// We're a duration.
throw exceptions::invalid_request_exception("Slice restrictions are not supported on duration columns");
}
auto term = to_term(to_receivers(schema, column_def), _value, db, schema->ks_name(), std::move(bound_names));
return ::make_shared<restrictions::single_column_restriction::slice>(column_def, bound, inclusive, std::move(term));
}

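The duration check above (and its twin in `create_index_statement::validate`) orders its tests so the error names the most specific offending shape: container types first, then the bare-duration case as the fall-through. A sketch of that dispatch, with a hypothetical `type_info` flattening the predicates used on `abstract_type`:

```cpp
#include <cassert>
#include <string>

// Hypothetical flattened view of the type predicates used above.
struct type_info {
    bool references_duration = false;
    bool is_collection = false;
    bool is_tuple = false;
    bool is_user_type = false;
};

// Returns the error text a slice restriction on this type would raise,
// or an empty string when the type is allowed.
std::string slice_duration_error(const type_info& ty) {
    if (!ty.references_duration) {
        return "";
    }
    if (ty.is_collection) {
        return "Slice restrictions are not supported on collections containing durations";
    }
    if (ty.is_tuple) {
        return "Slice restrictions are not supported on tuples containing durations";
    }
    if (ty.is_user_type) {
        return "Slice restrictions are not supported on UDTs containing durations";
    }
    // None of the container shapes matched: the column itself is a duration.
    return "Slice restrictions are not supported on duration columns";
}
```

The ordering matters because `references_duration()` is true for both a duration column and a `list<duration>`; checking the container predicates first keeps the message precise.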

@@ -138,7 +138,7 @@ static data_type validate_alter(schema_ptr schema, const column_definition& def,
return type;
}
static void validate_column_rename(const schema& schema, const column_identifier& from, const column_identifier& to)
static void validate_column_rename(database& db, const schema& schema, const column_identifier& from, const column_identifier& to)
{
auto def = schema.get_column_definition(from.name());
if (!def) {
@@ -154,8 +154,7 @@ static void validate_column_rename(const schema& schema, const column_identifier
}
if (!schema.indices().empty()) {
auto& sim = secondary_index::get_secondary_index_manager();
auto dependent_indices = sim.local().get_dependent_indices(*def);
auto dependent_indices = db.find_column_family(schema.id()).get_index_manager().get_dependent_indices(*def);
if (!dependent_indices.empty()) {
auto index_names = ::join(", ", dependent_indices | boost::adaptors::transformed([](const index_metadata& im) {
return im.name();
@@ -235,7 +234,8 @@ future<shared_ptr<cql_transport::event::schema_change>> alter_table_statement::a
// with the same name unless the types are compatible (see #6276).
auto& dropped = schema->dropped_columns();
auto i = dropped.find(column_name->text());
if (i != dropped.end() && !type->is_compatible_with(*i->second.type)) {
if (i != dropped.end() && i->second.type->is_collection() && i->second.type->is_multi_cell()
&& !type->is_compatible_with(*i->second.type)) {
throw exceptions::invalid_request_exception(sprint("Cannot add a collection with the name %s "
"because a collection with the same name and a different type has already been used in the past", column_name));
}
@@ -342,7 +342,7 @@ future<shared_ptr<cql_transport::event::schema_change>> alter_table_statement::a
auto from = entry.first->prepare_column_identifier(schema);
auto to = entry.second->prepare_column_identifier(schema);
validate_column_rename(*schema, *from, *to);
validate_column_rename(db, *schema, *from, *to);
cfm.with_column_rename(from->name(), to->name());
// If the view includes a renamed column, it must be renamed in the view table and the definition.
@@ -352,7 +352,7 @@ future<shared_ptr<cql_transport::event::schema_change>> alter_table_statement::a
auto view_from = entry.first->prepare_column_identifier(view);
auto view_to = entry.second->prepare_column_identifier(view);
validate_column_rename(*view, *view_from, *view_to);
validate_column_rename(db, *view, *view_from, *view_to);
builder.with_column_rename(view_from->name(), view_to->name());
auto new_where = util::rename_column_in_where_clause(


@@ -42,8 +42,8 @@
#include <boost/range/adaptor/map.hpp>
#include "alter_user_statement.hh"
#include "auth/auth.hh"
#include "auth/authenticator.hh"
#include "auth/service.hh"
cql3::statements::alter_user_statement::alter_user_statement(sstring username, ::shared_ptr<user_options> opts, std::experimental::optional<bool> superuser)
: _username(std::move(username))
@@ -52,7 +52,7 @@ cql3::statements::alter_user_statement::alter_user_statement(sstring username, :
{}
void cql3::statements::alter_user_statement::validate(distributed<service::storage_proxy>& proxy, const service::client_state& state) {
_opts->validate();
_opts->validate(state.get_auth_service()->underlying_authenticator());
if (!_superuser && _opts->empty()) {
throw exceptions::invalid_request_exception("ALTER USER can't be empty");
@@ -73,7 +73,10 @@ future<> cql3::statements::alter_user_statement::check_access(const service::cli
// my disgust.
throw exceptions::unauthorized_exception("You aren't allowed to alter your own superuser status");
}
return user->is_super().then([this, user](bool is_super) {
const auto& auth_service = *state.get_auth_service();
return auth::is_super_user(auth_service, *user).then([this, user, &auth_service](bool is_super) {
if (_superuser && !is_super) {
throw exceptions::unauthorized_exception("Only superusers are allowed to alter superuser status");
}
@@ -84,7 +87,7 @@ future<> cql3::statements::alter_user_statement::check_access(const service::cli
if (!is_super) {
for (auto o : _opts->options() | boost::adaptors::map_keys) {
if (!auth::authenticator::get().alterable_options().contains(o)) {
if (!auth_service.underlying_authenticator().alterable_options().contains(o)) {
throw exceptions::unauthorized_exception(sprint("You aren't allowed to alter {} option", o));
}
}
@@ -94,14 +97,17 @@ future<> cql3::statements::alter_user_statement::check_access(const service::cli
future<::shared_ptr<cql_transport::messages::result_message>>
cql3::statements::alter_user_statement::execute(distributed<service::storage_proxy>& proxy, service::query_state& state, const query_options& options) {
return auth::auth::is_existing_user(_username).then([this](bool exists) {
auto& client_state = state.get_client_state();
auto& auth_service = *client_state.get_auth_service();
return auth_service.is_existing_user(_username).then([this, &auth_service](bool exists) {
if (!exists) {
throw exceptions::invalid_request_exception(sprint("User %s doesn't exist", _username));
}
auto f = _opts->options().empty() ? make_ready_future() : auth::authenticator::get().alter(_username, _opts->options());
auto f = _opts->options().empty() ? make_ready_future() : auth_service.underlying_authenticator().alter(_username, _opts->options());
if (_superuser) {
f = f.then([this] {
return auth::auth::insert_user(_username, *_superuser);
f = f.then([this, &auth_service] {
return auth_service.insert_user(_username, *_superuser);
});
}
return f.then([] { return make_ready_future<::shared_ptr<cql_transport::messages::result_message>>(); });


@@ -47,6 +47,7 @@
#include "service/storage_service.hh"
#include "schema.hh"
#include "schema_builder.hh"
#include "request_validations.hh"
#include <boost/range/adaptor/transformed.hpp>
#include <boost/algorithm/string/join.hpp>
@@ -108,6 +109,18 @@ create_index_statement::validate(distributed<service::storage_proxy>& proxy, con
sprint("No column definition found for column %s", *target->column));
}
if (cd->type->references_duration()) {
using request_validations::check_false;
const auto& ty = *cd->type;
check_false(ty.is_collection(), "Secondary indexes are not supported on collections containing durations");
check_false(ty.is_tuple(), "Secondary indexes are not supported on tuples containing durations");
check_false(ty.is_user_type(), "Secondary indexes are not supported on UDTs containing durations");
// We're a duration.
throw exceptions::invalid_request_exception("Secondary indexes are not supported on duration columns");
}
// Origin TODO: we could lift that limitation
if ((schema->is_dense() || !schema->thrift().has_compound_comparator()) &&
cd->kind != column_kind::regular_column) {


@@ -219,6 +219,9 @@ std::unique_ptr<prepared_statement> create_table_statement::raw_statement::prepa
if (t->is_counter()) {
throw exceptions::invalid_request_exception(sprint("counter type is not supported for PRIMARY KEY part %s", alias->text()));
}
if (t->references_duration()) {
throw exceptions::invalid_request_exception(sprint("duration type is not supported for PRIMARY KEY part %s", alias->text()));
}
if (_static_columns.count(alias) > 0) {
throw exceptions::invalid_request_exception(sprint("Static column %s cannot be part of the PRIMARY KEY", alias->text()));
}
@@ -254,6 +257,9 @@ std::unique_ptr<prepared_statement> create_table_statement::raw_statement::prepa
if (at->is_counter()) {
throw exceptions::invalid_request_exception(sprint("counter type is not supported for PRIMARY KEY part %s", stmt->_column_aliases[0]));
}
if (at->references_duration()) {
throw exceptions::invalid_request_exception(sprint("duration type is not supported for PRIMARY KEY part %s", stmt->_column_aliases[0]));
}
stmt->_clustering_key_types.emplace_back(at);
} else {
std::vector<data_type> types;
@@ -263,6 +269,9 @@ std::unique_ptr<prepared_statement> create_table_statement::raw_statement::prepa
if (type->is_counter()) {
throw exceptions::invalid_request_exception(sprint("counter type is not supported for PRIMARY KEY part %s", t->text()));
}
if (type->references_duration()) {
throw exceptions::invalid_request_exception(sprint("duration type is not supported for PRIMARY KEY part %s", t->text()));
}
if (_static_columns.count(t) > 0) {
throw exceptions::invalid_request_exception(sprint("Static column %s cannot be part of the PRIMARY KEY", t->text()));
}


@@ -40,8 +40,8 @@
*/
#include "create_user_statement.hh"
#include "auth/auth.hh"
#include "auth/authenticator.hh"
#include "auth/service.hh"
cql3::statements::create_user_statement::create_user_statement(sstring username, ::shared_ptr<user_options> opts, bool superuser, bool if_not_exists)
: _username(std::move(username))
@@ -55,7 +55,7 @@ void cql3::statements::create_user_statement::validate(distributed<service::stor
throw exceptions::invalid_request_exception("Username can't be an empty string");
}
_opts->validate();
_opts->validate(state.get_auth_service()->underlying_authenticator());
// validate login here before checkAccess to avoid leaking user existence to anonymous users.
state.ensure_not_anonymous();
@@ -66,19 +66,22 @@ void cql3::statements::create_user_statement::validate(distributed<service::stor
future<::shared_ptr<cql_transport::messages::result_message>>
cql3::statements::create_user_statement::execute(distributed<service::storage_proxy>& proxy, service::query_state& state, const query_options& options) {
return state.get_client_state().user()->is_super().then([this](bool is_super) {
auto& client_state = state.get_client_state();
auto& auth_service = *client_state.get_auth_service();
return auth::is_super_user(auth_service, *client_state.user()).then([this, &auth_service](bool is_super) {
if (!is_super) {
throw exceptions::unauthorized_exception("Only superusers are allowed to perform CREATE USER queries");
}
return auth::auth::is_existing_user(_username).then([this](bool exists) {
return auth_service.is_existing_user(_username).then([this, &auth_service](bool exists) {
if (exists && !_if_not_exists) {
throw exceptions::invalid_request_exception(sprint("User %s already exists", _username));
}
if (exists && _if_not_exists) {
return make_ready_future<::shared_ptr<cql_transport::messages::result_message>>();
}
return auth::authenticator::get().create(_username, _opts->options()).then([this] {
return auth::auth::insert_user(_username, _superuser).then([] {
return auth_service.underlying_authenticator().create(_username, _opts->options()).then([this, &auth_service] {
return auth_service.insert_user(_username, _superuser).then([] {
return make_ready_future<::shared_ptr<cql_transport::messages::result_message>>();
});
});


@@ -116,6 +116,12 @@ static bool validate_primary_key(
throw exceptions::invalid_request_exception(sprint(
"Cannot use MultiCell column '%s' in PRIMARY KEY of materialized view", def->name_as_text()));
}
if (def->type->references_duration()) {
throw exceptions::invalid_request_exception(sprint(
"Cannot use Duration column '%s' in PRIMARY KEY of materialized view", def->name_as_text()));
}
if (def->is_static()) {
throw exceptions::invalid_request_exception(sprint(
"Cannot use Static column '%s' in PRIMARY KEY of materialized view", def->name_as_text()));


@@ -42,9 +42,9 @@
#include <boost/range/adaptor/map.hpp>
#include "drop_user_statement.hh"
#include "auth/auth.hh"
#include "auth/authenticator.hh"
#include "auth/authorizer.hh"
#include "auth/service.hh"
cql3::statements::drop_user_statement::drop_user_statement(sstring username, bool if_exists)
: _username(std::move(username))
@@ -65,12 +65,15 @@ void cql3::statements::drop_user_statement::validate(distributed<service::storag
future<::shared_ptr<cql_transport::messages::result_message>>
cql3::statements::drop_user_statement::execute(distributed<service::storage_proxy>& proxy, service::query_state& state, const query_options& options) {
return state.get_client_state().user()->is_super().then([this](bool is_super) {
auto& client_state = state.get_client_state();
auto& auth_service = *client_state.get_auth_service();
return auth::is_super_user(auth_service, *client_state.user()).then([this, &auth_service](bool is_super) {
if (!is_super) {
throw exceptions::unauthorized_exception("Only superusers are allowed to perform DROP USER queries");
}
return auth::auth::is_existing_user(_username).then([this](bool exists) {
return auth_service.is_existing_user(_username).then([this, &auth_service](bool exists) {
if (!_if_exists && !exists) {
throw exceptions::invalid_request_exception(sprint("User %s doesn't exist", _username));
}
@@ -79,9 +82,9 @@ cql3::statements::drop_user_statement::execute(distributed<service::storage_prox
}
// clean up permissions after the dropped user.
return auth::authorizer::get().revoke_all(_username).then([this] {
return auth::auth::delete_user(_username).then([this] {
return auth::authenticator::get().drop(_username);
return auth_service.underlying_authorizer().revoke_all(_username).then([this, &auth_service] {
return auth_service.delete_user(_username).then([this, &auth_service] {
return auth_service.underlying_authenticator().drop(_username);
});
}).then([] {
return make_ready_future<::shared_ptr<cql_transport::messages::result_message>>();


@@ -44,7 +44,10 @@
future<::shared_ptr<cql_transport::messages::result_message>>
cql3::statements::grant_statement::execute(distributed<service::storage_proxy>& proxy, service::query_state& state, const query_options& options) {
return auth::authorizer::get().grant(state.get_client_state().user(), _permissions, _resource, _username).then([] {
auto& client_state = state.get_client_state();
auto& auth_service = *client_state.get_auth_service();
return auth_service.underlying_authorizer().grant(client_state.user(), _permissions, _resource, _username).then([] {
return make_ready_future<::shared_ptr<cql_transport::messages::result_message>>();
});
}


@@ -59,6 +59,20 @@ sstring index_target::as_cql_string(schema_ptr schema) const {
return sprint("%s(%s)", to_sstring(type), column->to_cql_string());
}
index_target::target_type index_target::from_sstring(const sstring& s)
{
if (s == "keys") {
return index_target::target_type::keys;
} else if (s == "entries") {
return index_target::target_type::keys_and_values;
} else if (s == "values") {
return index_target::target_type::values;
} else if (s == "full") {
return index_target::target_type::full;
}
throw std::runtime_error(sprint("Unknown target type: %s", s));
}
sstring index_target::index_option(target_type type) {
switch (type) {
case target_type::keys: return secondary_index::index_keys_option_name;


@@ -68,6 +68,7 @@ struct index_target {
static sstring index_option(target_type type);
static target_type from_column_definition(const column_definition& cd);
static index_target::target_type from_sstring(const sstring& s);
class raw {
public:


@@ -44,7 +44,7 @@
#include "list_permissions_statement.hh"
#include "auth/authorizer.hh"
#include "auth/auth.hh"
#include "auth/common.hh"
#include "cql3/result_set.hh"
#include "transport/messages/result_message.hh"
@@ -64,7 +64,7 @@ void cql3::statements::list_permissions_statement::validate(distributed<service:
future<> cql3::statements::list_permissions_statement::check_access(const service::client_state& state) {
auto f = make_ready_future();
if (_username) {
f = auth::auth::is_existing_user(*_username).then([this](bool exists) {
f = state.get_auth_service()->is_existing_user(*_username).then([this](bool exists) {
if (!exists) {
throw exceptions::invalid_request_exception(sprint("User %s doesn't exist", *_username));
}
@@ -84,7 +84,7 @@ future<> cql3::statements::list_permissions_statement::check_access(const servic
future<::shared_ptr<cql_transport::messages::result_message>>
cql3::statements::list_permissions_statement::execute(distributed<service::storage_proxy>& proxy, service::query_state& state, const query_options& options) {
static auto make_column = [](sstring name) {
return ::make_shared<column_specification>(auth::auth::AUTH_KS, "permissions", ::make_shared<column_identifier>(std::move(name), true), utf8_type);
return ::make_shared<column_specification>(auth::meta::AUTH_KS, "permissions", ::make_shared<column_identifier>(std::move(name), true), utf8_type);
};
static thread_local const std::vector<::shared_ptr<column_specification>> metadata({
make_column("username"), make_column("resource"), make_column("permission")
@@ -104,7 +104,8 @@ cql3::statements::list_permissions_statement::execute(distributed<service::stora
}
return map_reduce(resources, [&state, this](opt_resource r) {
return auth::authorizer::get().list(state.get_client_state().user(), _permissions, std::move(r), _username);
auto& auth_service = *state.get_client_state().get_auth_service();
return auth_service.underlying_authorizer().list(auth_service, state.get_client_state().user(), _permissions, std::move(r), _username);
}, std::vector<auth::permission_details>(), [](std::vector<auth::permission_details> details, std::vector<auth::permission_details> pd) {
details.insert(details.end(), pd.begin(), pd.end());
return std::move(details);


@@ -42,7 +42,7 @@
#include "list_users_statement.hh"
#include "cql3/query_processor.hh"
#include "cql3/query_options.hh"
#include "auth/auth.hh"
#include "auth/common.hh"
void cql3::statements::list_users_statement::validate(distributed<service::storage_proxy>& proxy, const service::client_state& state) {
}
@@ -57,7 +57,7 @@ cql3::statements::list_users_statement::execute(distributed<service::storage_pro
auto is = std::make_unique<service::query_state>(service::client_state::for_internal_calls());
auto io = std::make_unique<query_options>(db::consistency_level::QUORUM, std::vector<cql3::raw_value>{});
auto f = get_local_query_processor().process(
sprint("SELECT * FROM %s.%s", auth::auth::AUTH_KS,
auth::auth::USERS_CF), *is, *io);
sprint("SELECT * FROM %s.%s", auth::meta::AUTH_KS,
auth::meta::USERS_CF), *is, *io);
return f.finally([is = std::move(is), io = std::move(io)] {});
}


@@ -41,11 +41,11 @@
#include <seastar/core/thread.hh>
#include "auth/service.hh"
#include "permission_altering_statement.hh"
#include "cql3/query_processor.hh"
#include "cql3/query_options.hh"
#include "cql3/selection/selection.hh"
#include "auth/auth.hh"
cql3::statements::permission_altering_statement::permission_altering_statement(
auth::permission_set permissions, auth::data_resource resource,
@@ -60,7 +60,7 @@ void cql3::statements::permission_altering_statement::validate(distributed<servi
}
future<> cql3::statements::permission_altering_statement::check_access(const service::client_state& state) {
return auth::auth::is_existing_user(_username).then([this, &state](bool exists) {
return state.get_auth_service()->is_existing_user(_username).then([this, &state](bool exists) {
if (!exists) {
throw exceptions::invalid_request_exception(sprint("User %s doesn't exist", _username));
}


@@ -44,7 +44,10 @@
future<::shared_ptr<cql_transport::messages::result_message>>
cql3::statements::revoke_statement::execute(distributed<service::storage_proxy>& proxy, service::query_state& state, const query_options& options) {
return auth::authorizer::get().revoke(state.get_client_state().user(), _permissions, _resource, _username).then([] {
auto& client_state = state.get_client_state();
auto& auth_service = *client_state.get_auth_service();
return auth_service.underlying_authorizer().revoke(client_state.user(), _permissions, _resource, _username).then([] {
return make_ready_future<::shared_ptr<cql_transport::messages::result_message>>();
});
}


@@ -51,6 +51,8 @@
#include "service/pager/query_pagers.hh"
#include <seastar/core/execution_stage.hh>
#include "view_info.hh"
#include "partition_slice_builder.hh"
#include "cql3/untyped_result_set.hh"
namespace cql3 {
@@ -341,6 +343,10 @@ select_statement::execute_internal(distributed<service::storage_proxy>& proxy,
service::query_state& state,
const query_options& options)
{
if (options.get_specific_options().page_size > 0) {
// need page, use regular execute
return do_execute(proxy, state, options);
}
int32_t limit = get_limit(options);
auto now = gc_clock::now();
auto command = ::make_lw_shared<query::read_command>(_schema->id(), _schema->version(),
@@ -397,6 +403,167 @@ select_statement::process_results(foreign_ptr<lw_shared_ptr<query::result>> resu
return _restrictions;
}
primary_key_select_statement::primary_key_select_statement(schema_ptr schema, uint32_t bound_terms,
::shared_ptr<parameters> parameters,
::shared_ptr<selection::selection> selection,
::shared_ptr<restrictions::statement_restrictions> restrictions,
bool is_reversed,
ordering_comparator_type ordering_comparator,
::shared_ptr<term> limit, cql_stats &stats)
: select_statement{schema, bound_terms, parameters, selection, restrictions, is_reversed, ordering_comparator, limit, stats}
{}
::shared_ptr<cql3::statements::select_statement>
indexed_table_select_statement::prepare(database& db,
schema_ptr schema,
uint32_t bound_terms,
::shared_ptr<parameters> parameters,
::shared_ptr<selection::selection> selection,
::shared_ptr<restrictions::statement_restrictions> restrictions,
bool is_reversed,
ordering_comparator_type ordering_comparator,
::shared_ptr<term> limit, cql_stats &stats)
{
auto index_opt = find_idx(db, schema, restrictions);
if (!index_opt) {
throw std::runtime_error("No index found.");
}
return ::make_shared<cql3::statements::indexed_table_select_statement>(
schema,
bound_terms,
parameters,
std::move(selection),
std::move(restrictions),
is_reversed,
std::move(ordering_comparator),
limit,
stats,
*index_opt);
}
stdx::optional<secondary_index::index> indexed_table_select_statement::find_idx(database& db,
schema_ptr schema,
::shared_ptr<restrictions::statement_restrictions> restrictions)
{
auto& sim = db.find_column_family(schema).get_index_manager();
for (::shared_ptr<cql3::restrictions::restrictions> restriction : restrictions->index_restrictions()) {
for (const auto& cdef : restriction->get_column_defs()) {
for (auto index : sim.list_indexes()) {
if (index.depends_on(*cdef)) {
return stdx::make_optional<secondary_index::index>(std::move(index));
}
}
}
}
return stdx::nullopt;
}
indexed_table_select_statement::indexed_table_select_statement(schema_ptr schema, uint32_t bound_terms,
::shared_ptr<parameters> parameters,
::shared_ptr<selection::selection> selection,
::shared_ptr<restrictions::statement_restrictions> restrictions,
bool is_reversed,
ordering_comparator_type ordering_comparator,
::shared_ptr<term> limit, cql_stats &stats,
const secondary_index::index& index)
: select_statement{schema, bound_terms, parameters, selection, restrictions, is_reversed, ordering_comparator, limit, stats}
, _index{index}
{}
future<shared_ptr<cql_transport::messages::result_message>>
indexed_table_select_statement::do_execute(distributed<service::storage_proxy>& proxy,
service::query_state& state,
const query_options& options)
{
tracing::add_table_name(state.get_trace_state(), keyspace(), column_family());
auto cl = options.get_consistency();
validate_for_read(_schema->ks_name(), cl);
int32_t limit = get_limit(options);
auto now = gc_clock::now();
++_stats.reads;
assert(_restrictions->uses_secondary_indexing());
return find_index_partition_ranges(proxy, state, options).then([limit, now, &state, &options, &proxy, this] (dht::partition_range_vector partition_ranges) {
auto command = ::make_lw_shared<query::read_command>(
_schema->id(),
_schema->version(),
make_partition_slice(options),
limit,
now,
tracing::make_trace_info(state.get_trace_state()),
query::max_partitions,
options.get_timestamp(state));
return this->execute(proxy, command, std::move(partition_ranges), state, options, now);
});
}
future<dht::partition_range_vector>
indexed_table_select_statement::find_index_partition_ranges(distributed<service::storage_proxy>& proxy,
service::query_state& state,
const query_options& options)
{
const auto& im = _index.metadata();
sstring index_table_name = sprint("%s_index", im.name());
tracing::add_table_name(state.get_trace_state(), keyspace(), index_table_name);
auto& db = proxy.local().get_db().local();
const auto& view = db.find_column_family(_schema->ks_name(), index_table_name);
dht::partition_range_vector partition_ranges;
for (const auto& entry : _restrictions->get_non_pk_restriction()) {
auto pk = partition_key::from_optional_exploded(*view.schema(), entry.second->values(options));
auto dk = dht::global_partitioner().decorate_key(*view.schema(), pk);
auto range = dht::partition_range::make_singular(dk);
partition_ranges.emplace_back(range);
}
auto now = gc_clock::now();
int32_t limit = get_limit(options);
partition_slice_builder partition_slice_builder{*view.schema()};
auto cmd = ::make_lw_shared<query::read_command>(
view.schema()->id(),
view.schema()->version(),
partition_slice_builder.build(),
limit,
now,
tracing::make_trace_info(state.get_trace_state()),
query::max_partitions,
options.get_timestamp(state));
return proxy.local().query(view.schema(),
cmd,
std::move(partition_ranges),
options.get_consistency(),
state.get_trace_state()).then([cmd, this, &options, now, &view] (foreign_ptr<lw_shared_ptr<query::result>> result) {
std::vector<const column_definition*> columns;
for (const column_definition& cdef : _schema->partition_key_columns()) {
columns.emplace_back(view.schema()->get_column_definition(cdef.name()));
}
auto selection = selection::selection::for_columns(view.schema(), columns);
cql3::selection::result_set_builder builder(*selection, now, options.get_cql_serialization_format());
query::result_view::consume(*result,
cmd->slice,
cql3::selection::result_set_builder::visitor(builder, *view.schema(), *selection));
auto rs = cql3::untyped_result_set(::make_shared<cql_transport::messages::result_message::rows>(std::move(builder.build())));
dht::partition_range_vector partition_ranges;
for (size_t i = 0; i < rs.size(); i++) {
const auto& row = rs.at(i);
for (const auto& column : row.get_columns()) {
auto blob = row.get_blob(column->name->to_cql_string());
auto pk = partition_key::from_exploded(*_schema, { blob });
auto dk = dht::global_partitioner().decorate_key(*_schema, pk);
auto range = dht::partition_range::make_singular(dk);
partition_ranges.emplace_back(range);
}
}
return make_ready_future<dht::partition_range_vector>(partition_ranges);
}).finally([cmd] {});
}
namespace raw {
select_statement::select_statement(::shared_ptr<cf_name> cf_name,
@@ -437,15 +604,31 @@ std::unique_ptr<prepared_statement> select_statement::prepare(database& db, cql_
check_needs_filtering(restrictions);
auto stmt = ::make_shared<cql3::statements::select_statement>(schema,
bound_names->size(),
_parameters,
std::move(selection),
std::move(restrictions),
is_reversed_,
std::move(ordering_comparator),
prepare_limit(db, bound_names),
stats);
::shared_ptr<cql3::statements::select_statement> stmt;
if (restrictions->uses_secondary_indexing()) {
stmt = indexed_table_select_statement::prepare(
db,
schema,
bound_names->size(),
_parameters,
std::move(selection),
std::move(restrictions),
is_reversed_,
std::move(ordering_comparator),
prepare_limit(db, bound_names),
stats);
} else {
stmt = ::make_shared<cql3::statements::primary_key_select_statement>(
schema,
bound_names->size(),
_parameters,
std::move(selection),
std::move(restrictions),
is_reversed_,
std::move(ordering_comparator),
prepare_limit(db, bound_names),
stats);
}
auto partition_key_bind_indices = bound_names->get_partition_key_bind_indexes(schema);


@@ -66,7 +66,8 @@ namespace statements {
class select_statement : public cql_statement {
public:
using parameters = raw::select_statement::parameters;
private:
using ordering_comparator_type = raw::select_statement::ordering_comparator_type;
protected:
static constexpr int DEFAULT_COUNT_PAGE_SIZE = 10000;
static thread_local const ::shared_ptr<parameters> _default_parameters;
schema_ptr _schema;
@@ -81,7 +82,6 @@ private:
using compare_fn = raw::select_statement::compare_fn<T>;
using result_row_type = raw::select_statement::result_row_type;
using ordering_comparator_type = raw::select_statement::ordering_comparator_type;
/**
* The comparator used to orders results when multiple keys are selected (using IN).
@@ -90,8 +90,8 @@ private:
query::partition_slice::option_set _opts;
cql_stats& _stats;
private:
future<::shared_ptr<cql_transport::messages::result_message>> do_execute(distributed<service::storage_proxy>& proxy,
protected:
virtual future<::shared_ptr<cql_transport::messages::result_message>> do_execute(distributed<service::storage_proxy>& proxy,
service::query_state& state, const query_options& options);
friend class select_statement_executor;
public:
@@ -126,54 +126,6 @@ public:
shared_ptr<cql_transport::messages::result_message> process_results(foreign_ptr<lw_shared_ptr<query::result>> results,
lw_shared_ptr<query::read_command> cmd, const query_options& options, gc_clock::time_point now);
#if 0
private ResultMessage.Rows pageAggregateQuery(QueryPager pager, QueryOptions options, int pageSize, long now)
throws RequestValidationException, RequestExecutionException
{
Selection.ResultSetBuilder result = _selection->resultSetBuilder(now);
while (!pager.isExhausted())
{
for (org.apache.cassandra.db.Row row : pager.fetchPage(pageSize))
{
// No columns match the query, skip
if (row.cf == null)
continue;
processColumnFamily(row.key.getKey(), row.cf, options, now, result);
}
}
return new ResultMessage.Rows(result.build(options.getProtocolVersion()));
}
static List<Row> readLocally(String keyspaceName, List<ReadCommand> cmds)
{
Keyspace keyspace = Keyspace.open(keyspaceName);
List<Row> rows = new ArrayList<Row>(cmds.size());
for (ReadCommand cmd : cmds)
rows.add(cmd.getRow(keyspace));
return rows;
}
public ResultMessage.Rows executeInternal(QueryState state, QueryOptions options) throws RequestExecutionException, RequestValidationException
{
int limit = getLimit(options);
long now = System.currentTimeMillis();
Pageable command = getPageableCommand(options, limit, now);
List<Row> rows = command == null
? Collections.<Row>emptyList()
: (command instanceof Pageable.ReadCommands
? readLocally(keyspace(), ((Pageable.ReadCommands)command).commands)
: ((RangeSliceCommand)command).executeLocally());
return processResults(rows, options, limit, now);
}
public ResultSet process(List<Row> rows) throws InvalidRequestException
{
QueryOptions options = QueryOptions.DEFAULT;
return process(rows, options, getLimit(options), System.currentTimeMillis());
}
#endif
const sstring& keyspace() const;
@@ -183,241 +135,60 @@ public:
::shared_ptr<restrictions::statement_restrictions> get_restrictions() const;
#if 0
private SliceQueryFilter sliceFilter(ColumnSlice slice, int limit, int toGroup)
{
return sliceFilter(new ColumnSlice[]{ slice }, limit, toGroup);
}
private SliceQueryFilter sliceFilter(ColumnSlice[] slices, int limit, int toGroup)
{
assert ColumnSlice.validateSlices(slices, _schema.comparator, _is_reversed) : String.format("Invalid slices: " + Arrays.toString(slices) + (_is_reversed ? " (reversed)" : ""));
return new SliceQueryFilter(slices, _is_reversed, limit, toGroup);
}
#endif
private:
protected:
int32_t get_limit(const query_options& options) const;
bool needs_post_query_ordering() const;
};
#if 0
private int updateLimitForQuery(int limit)
{
// Internally, we don't support exclusive bounds for slices. Instead, we query one more element if necessary
// and exclude it later (in processColumnFamily)
return restrictions.isNonCompositeSliceWithExclusiveBounds() && limit != Integer.MAX_VALUE
? limit + 1
: limit;
}
class primary_key_select_statement : public select_statement {
public:
primary_key_select_statement(schema_ptr schema,
uint32_t bound_terms,
::shared_ptr<parameters> parameters,
::shared_ptr<selection::selection> selection,
::shared_ptr<restrictions::statement_restrictions> restrictions,
bool is_reversed,
ordering_comparator_type ordering_comparator,
::shared_ptr<term> limit,
cql_stats &stats);
};
private SortedSet<CellName> getRequestedColumns(QueryOptions options) throws InvalidRequestException
{
// Note: getRequestedColumns don't handle static columns, but due to CASSANDRA-5762
// we always do a slice for CQL3 tables, so it's ok to ignore them here
assert !restrictions.isColumnRange();
SortedSet<CellName> columns = new TreeSet<CellName>(cfm.comparator);
for (Composite composite : restrictions.getClusteringColumnsAsComposites(options))
columns.addAll(addSelectedColumns(composite));
return columns;
}
class indexed_table_select_statement : public select_statement {
secondary_index::index _index;
public:
static ::shared_ptr<cql3::statements::select_statement> prepare(database& db,
schema_ptr schema,
uint32_t bound_terms,
::shared_ptr<parameters> parameters,
::shared_ptr<selection::selection> selection,
::shared_ptr<restrictions::statement_restrictions> restrictions,
bool is_reversed,
ordering_comparator_type ordering_comparator,
::shared_ptr<term> limit,
cql_stats &stats);
private SortedSet<CellName> addSelectedColumns(Composite prefix)
{
if (cfm.comparator.isDense())
{
return FBUtilities.singleton(cfm.comparator.create(prefix, null), cfm.comparator);
}
else
{
SortedSet<CellName> columns = new TreeSet<CellName>(cfm.comparator);
indexed_table_select_statement(schema_ptr schema,
uint32_t bound_terms,
::shared_ptr<parameters> parameters,
::shared_ptr<selection::selection> selection,
::shared_ptr<restrictions::statement_restrictions> restrictions,
bool is_reversed,
ordering_comparator_type ordering_comparator,
::shared_ptr<term> limit,
cql_stats &stats,
const secondary_index::index& index);
// We need to query the selected column as well as the marker
// column (for the case where the row exists but has no columns outside the PK)
// Two exceptions are "static CF" (non-composite non-compact CF) and "super CF"
// that don't have marker and for which we must query all columns instead
if (cfm.comparator.isCompound() && !cfm.isSuper())
{
// marker
columns.add(cfm.comparator.rowMarker(prefix));
private:
static stdx::optional<secondary_index::index> find_idx(database& db,
schema_ptr schema,
::shared_ptr<restrictions::statement_restrictions> restrictions);
// selected columns
for (ColumnDefinition def : selection.getColumns())
if (def.isRegular() || def.isStatic())
columns.add(cfm.comparator.create(prefix, def));
}
else
{
// We know that we're not composite so we can ignore static columns
for (ColumnDefinition def : cfm.regularColumns())
columns.add(cfm.comparator.create(prefix, def));
}
return columns;
}
}
virtual future<::shared_ptr<cql_transport::messages::result_message>> do_execute(distributed<service::storage_proxy>& proxy,
service::query_state& state, const query_options& options) override;
public List<IndexExpression> getValidatedIndexExpressions(QueryOptions options) throws InvalidRequestException
{
if (!restrictions.usesSecondaryIndexing())
return Collections.emptyList();
List<IndexExpression> expressions = restrictions.getIndexExpressions(options);
ColumnFamilyStore cfs = Keyspace.open(keyspace()).getColumnFamilyStore(columnFamily());
SecondaryIndexManager secondaryIndexManager = cfs.indexManager;
secondaryIndexManager.validateIndexSearchersForQuery(expressions);
return expressions;
}
private CellName makeExclusiveSliceBound(Bound bound, CellNameType type, QueryOptions options) throws InvalidRequestException
{
if (restrictions.areRequestedBoundsInclusive(bound))
return null;
return type.makeCellName(restrictions.getClusteringColumnsBounds(bound, options).get(0));
}
private Iterator<Cell> applySliceRestriction(final Iterator<Cell> cells, final QueryOptions options) throws InvalidRequestException
{
final CellNameType type = cfm.comparator;
final CellName excludedStart = makeExclusiveSliceBound(Bound.START, type, options);
final CellName excludedEnd = makeExclusiveSliceBound(Bound.END, type, options);
return Iterators.filter(cells, new Predicate<Cell>()
{
public boolean apply(Cell c)
{
// For dynamic CF, the column could be out of the requested bounds (because we don't support strict bounds internally (unless
// the comparator is composite that is)), filter here
return !((excludedStart != null && type.compare(c.name(), excludedStart) == 0)
|| (excludedEnd != null && type.compare(c.name(), excludedEnd) == 0));
}
});
}
private ResultSet process(List<Row> rows, QueryOptions options, int limit, long now) throws InvalidRequestException
{
Selection.ResultSetBuilder result = selection.resultSetBuilder(now);
for (org.apache.cassandra.db.Row row : rows)
{
// No columns match the query, skip
if (row.cf == null)
continue;
processColumnFamily(row.key.getKey(), row.cf, options, now, result);
}
ResultSet cqlRows = result.build(options.getProtocolVersion());
orderResults(cqlRows);
// Internal calls always return columns in the comparator order, even when reverse was set
if (isReversed)
cqlRows.reverse();
// Trim result if needed to respect the user limit
cqlRows.trim(limit);
return cqlRows;
}
// Used by ModificationStatement for CAS operations
void processColumnFamily(ByteBuffer key, ColumnFamily cf, QueryOptions options, long now, Selection.ResultSetBuilder result)
throws InvalidRequestException
{
CFMetaData cfm = cf.metadata();
ByteBuffer[] keyComponents = null;
if (cfm.getKeyValidator() instanceof CompositeType)
{
keyComponents = ((CompositeType)cfm.getKeyValidator()).split(key);
}
else
{
keyComponents = new ByteBuffer[]{ key };
}
Iterator<Cell> cells = cf.getSortedColumns().iterator();
if (restrictions.isNonCompositeSliceWithExclusiveBounds())
cells = applySliceRestriction(cells, options);
CQL3Row.RowIterator iter = cfm.comparator.CQL3RowBuilder(cfm, now).group(cells);
// If there is static columns but there is no non-static row, then provided the select was a full
// partition selection (i.e. not a 2ndary index search and there was no condition on clustering columns)
// then we want to include the static columns in the result set (and we're done).
CQL3Row staticRow = iter.getStaticRow();
if (staticRow != null && !iter.hasNext() && !restrictions.usesSecondaryIndexing() && restrictions.hasNoClusteringColumnsRestriction())
{
result.newRow(options.getProtocolVersion());
for (ColumnDefinition def : selection.getColumns())
{
switch (def.kind)
{
case PARTITION_KEY:
result.add(keyComponents[def.position()]);
break;
case STATIC:
addValue(result, def, staticRow, options);
break;
default:
result.add((ByteBuffer)null);
}
}
return;
}
while (iter.hasNext())
{
CQL3Row cql3Row = iter.next();
// Respect requested order
result.newRow(options.getProtocolVersion());
// Respect selection order
for (ColumnDefinition def : selection.getColumns())
{
switch (def.kind)
{
case PARTITION_KEY:
result.add(keyComponents[def.position()]);
break;
case CLUSTERING_COLUMN:
result.add(cql3Row.getClusteringColumn(def.position()));
break;
case COMPACT_VALUE:
result.add(cql3Row.getColumn(null));
break;
case REGULAR:
addValue(result, def, cql3Row, options);
break;
case STATIC:
addValue(result, def, staticRow, options);
break;
}
}
}
}
private static void addValue(Selection.ResultSetBuilder result, ColumnDefinition def, CQL3Row row, QueryOptions options)
{
if (row == null)
{
result.add((ByteBuffer)null);
return;
}
if (def.type.isMultiCell())
{
List<Cell> cells = row.getMultiCellColumn(def.name);
ByteBuffer buffer = cells == null
? null
: ((CollectionType)def.type).serializeForNativeProtocol(cells, options.getProtocolVersion());
result.add(buffer);
return;
}
result.add(row.getColumn(def.name));
}
#endif
future<dht::partition_range_vector> find_index_partition_ranges(distributed<service::storage_proxy>& proxy,
service::query_state& state,
const query_options& options);
};
}


@@ -134,7 +134,7 @@ public:
try {
validate_assignable_to(db, keyspace, receiver);
return assignment_testable::test_result::WEAKLY_ASSIGNABLE;
} catch (exceptions::invalid_request_exception e) {
} catch (exceptions::invalid_request_exception& e) {
return assignment_testable::test_result::NOT_ASSIGNABLE;
}
}


@@ -47,11 +47,11 @@
#include "result_set.hh"
#include "transport/messages/result_message.hh"
cql3::untyped_result_set::row::row(const std::unordered_map<sstring, bytes_opt>& data)
cql3::untyped_result_set_row::untyped_result_set_row(const std::unordered_map<sstring, bytes_opt>& data)
: _data(data)
{}
cql3::untyped_result_set::row::row(const std::vector<::shared_ptr<column_specification>>& columns, std::vector<bytes_opt> data)
cql3::untyped_result_set_row::untyped_result_set_row(const std::vector<::shared_ptr<column_specification>>& columns, std::vector<bytes_opt> data)
: _columns(columns)
, _data([&columns, data = std::move(data)] () mutable {
std::unordered_map<sstring, bytes_opt> tmp;
@@ -62,7 +62,7 @@ cql3::untyped_result_set::row::row(const std::vector<::shared_ptr<column_specifi
}())
{}
bool cql3::untyped_result_set::row::has(const sstring& name) const {
bool cql3::untyped_result_set_row::has(const sstring& name) const {
auto i = _data.find(name);
return i != _data.end() && i->second;
}
@@ -90,7 +90,7 @@ cql3::untyped_result_set::untyped_result_set(::shared_ptr<result_message> msg)
}())
{}
const cql3::untyped_result_set::row& cql3::untyped_result_set::one() const {
const cql3::untyped_result_set_row& cql3::untyped_result_set::one() const {
if (_rows.size() != 1) {
throw std::runtime_error("One row required, " + std::to_string(_rows.size()) + " found");
}


@@ -49,96 +49,97 @@
namespace cql3 {
class untyped_result_set_row {
private:
const std::vector<::shared_ptr<column_specification>> _columns;
const std::unordered_map<sstring, bytes_opt> _data;
public:
untyped_result_set_row(const std::unordered_map<sstring, bytes_opt>&);
untyped_result_set_row(const std::vector<::shared_ptr<column_specification>>&, std::vector<bytes_opt>);
untyped_result_set_row(untyped_result_set_row&&) = default;
untyped_result_set_row(const untyped_result_set_row&) = delete;
bool has(const sstring&) const;
bytes get_blob(const sstring& name) const {
return *_data.at(name);
}
template<typename T>
T get_as(const sstring& name) const {
return value_cast<T>(data_type_for<T>()->deserialize(get_blob(name)));
}
template<typename T>
std::experimental::optional<T> get_opt(const sstring& name) const {
return has(name) ? get_as<T>(name) : std::experimental::optional<T>{};
}
template<typename T>
T get_or(const sstring& name, T t) const {
return has(name) ? get_as<T>(name) : t;
}
// this could maybe be done as an overload of get_as (or something), but that just
// muddles things for no real gain. Let the user (us) attempt to know what they are doing instead.
template<typename K, typename V, typename Iter>
void get_map_data(const sstring& name, Iter out, data_type keytype =
data_type_for<K>(), data_type valtype =
data_type_for<V>()) const {
auto vec =
value_cast<map_type_impl::native_type>(
map_type_impl::get_instance(keytype, valtype, false)->deserialize(
get_blob(name)));
std::transform(vec.begin(), vec.end(), out,
[](auto& p) {
return std::pair<K, V>(value_cast<K>(p.first), value_cast<V>(p.second));
});
}
template<typename K, typename V, typename ... Rest>
std::unordered_map<K, V, Rest...> get_map(const sstring& name,
data_type keytype = data_type_for<K>(), data_type valtype =
data_type_for<V>()) const {
std::unordered_map<K, V, Rest...> res;
get_map_data<K, V>(name, std::inserter(res, res.end()), keytype, valtype);
return res;
}
template<typename V, typename Iter>
void get_list_data(const sstring& name, Iter out, data_type valtype = data_type_for<V>()) const {
auto vec =
value_cast<list_type_impl::native_type>(
list_type_impl::get_instance(valtype, false)->deserialize(
get_blob(name)));
std::transform(vec.begin(), vec.end(), out, [](auto& v) { return value_cast<V>(v); });
}
template<typename V, typename ... Rest>
std::vector<V, Rest...> get_list(const sstring& name, data_type valtype = data_type_for<V>()) const {
std::vector<V, Rest...> res;
get_list_data<V>(name, std::back_inserter(res), valtype);
return res;
}
template<typename V, typename Iter>
void get_set_data(const sstring& name, Iter out, data_type valtype =
data_type_for<V>()) const {
auto vec =
value_cast<set_type_impl::native_type>(
set_type_impl::get_instance(valtype,
false)->deserialize(
get_blob(name)));
std::transform(vec.begin(), vec.end(), out, [](auto& p) {
return value_cast<V>(p);
});
}
template<typename V, typename ... Rest>
std::unordered_set<V, Rest...> get_set(const sstring& name,
data_type valtype =
data_type_for<V>()) const {
std::unordered_set<V, Rest...> res;
get_set_data<V>(name, std::inserter(res, res.end()), valtype);
return res;
}
const std::vector<::shared_ptr<column_specification>>& get_columns() const {
return _columns;
}
};
class untyped_result_set {
public:
    using row = untyped_result_set_row;
    typedef std::vector<row> rows_type;
    using const_iterator = rows_type::const_iterator;


@@ -53,6 +53,9 @@ update_parameters::get_prefetched_list(
         return {};
     }
+    if (column.is_static()) {
+        ckey = clustering_key_view::make_empty();
+    }
     auto i = _prefetched->rows.find(std::make_pair(std::move(pkey), std::move(ckey)));
     if (i == _prefetched->rows.end()) {
         return {};