Compare commits

21 Commits

Author SHA1 Message Date
Takuya ASADA
91194f0ac0 dist: prevent error building ubuntu package on EC2
fixes #491

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-11-09 12:55:46 +02:00
Takuya ASADA
aaba5a6f7c dist: add missing dist/ubuntu/changelog.in
Missing file for d2dd6b90a9

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-11-05 11:09:49 +02:00
Takuya ASADA
eaabbba14d dist: split debug symbols into a separate package on ubuntu
Fixes #524

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-11-04 17:30:43 +02:00
Takuya ASADA
8ce8a0b84c dist: support ./SCYLLA-VERSION-GEN on ubuntu package
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-11-04 17:30:36 +02:00
Tomasz Grabiec
f74cdaa184 query_processor: Use the correct client_state
Since 4641dfff24, query_state keeps a
copy of client_state, not a reference. Therefore _cl is no longer
updated by queries using _qp. Fix by using the client_state from _qp.

Fixes #525.
2015-11-04 12:44:01 +02:00
Pekka Enberg
a461a21434 release: 0.11.1 2015-11-04 08:16:15 +02:00
Pekka Enberg
9178e96cce Merge seastar upstream
* seastar 9ae6407...05b41b8 (11):
  > resource: fix system memory reservation
  > Add .gitattributes to improve 'git diff' output
  > README: remove irrelevant OSv instructions
  > Add invocation of ninja in Ubuntu section of README
  > collectd: remove incorrect license
  > rpc server: Add pending and sent messages to server
  > scripts: posix_net_conf.sh: Use a generic logic for RPS configuring
  > scripts: posix_net_conf.sh: allow passing a NIC name as a parameter
  > doc: link to the tutorial
  > tutorial: begin documenting the network API
  > slab: remove bogus uintptr_t definition
2015-11-04 08:15:55 +02:00
Asias He
32d5a81b73 ami: Improve scylla raid setup
Use all the disks except the one holding the rootfs for the RAID0 array that
stores scylla data. If only one disk is available, warn the user, since
currently our AMI's rootfs is not XFS.

[fedora@ip-172-31-39-189 ~]$ cat WARN.TXT
WARN: Scylla is not using XFS to store data. Performance will suffer.

Tested on AWS with the 1-disk, 2-disk, and 7-disk cases.

(cherry picked from commit 49d6cba471)
2015-11-03 09:22:20 +02:00
Avi Kivity
35af260ca9 Update scylla-ami submodule
* dist/ami/files/scylla-ami c6ddbea...3f37184 (1):
  > Update reflector URL

(cherry picked from commit 440b403089)
2015-11-03 09:22:11 +02:00
Takuya ASADA
1d22543f59 dist: add scylla.repo to fetch scylla rpms on ami
Mistakenly omitted from the yum repository patchset for the AMI, but it is needed

Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
(cherry picked from commit 8587c4d6b3)
2015-11-03 09:22:05 +02:00
Takuya ASADA
1c040898e8 dist: update packages on ec2 instance first bootup
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
(cherry picked from commit aaeccdee60)
2015-11-03 09:21:55 +02:00
Takuya ASADA
249967d62b dist: use scylla repo on ami, instead of locally built rpms
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
(cherry picked from commit 8b98fe5a1c)
2015-11-03 09:21:49 +02:00
Takuya ASADA
9d8bc9e3cc dist: move inline script to setup-ami.sh
Signed-off-by: Takuya ASADA <syuu@cloudius-systems.com>
(cherry picked from commit 2863b8098f)
2015-11-03 09:21:42 +02:00
Shlomi Livne
f849c4b9c2 dist: set SCYLLA_HOME used to find the configuration and property files
Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2015-11-02 08:29:43 +02:00
Shlomi Livne
0e7e68236a dist: fedora/rhel/centos copy cassandra-rackdc.properties
Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2015-11-02 08:29:39 +02:00
Shlomi Livne
94802ce842 dist: ubuntu copy cassandra-rackdc.properties
Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2015-11-02 08:29:33 +02:00
Shlomi Livne
5362765835 Update snitch registration EC2MultiRegionSnitch --> Ec2MultiRegionSnitch
Update snitch EC2MultiRegionSnitch to Ec2MultiRegionSnitch,
org.apache.cassandra.locator.EC2MultiRegionSnitch to
org.apache.cassandra.locator.Ec2MultiRegionSnitch

Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2015-11-02 08:29:27 +02:00
Shlomi Livne
f87d9ddd34 Update snitch registration EC2Snitch --> Ec2Snitch
Update EC2Snitch to Ec2Snitch, org.apache.cassandra.locator.EC2Snitch to
org.apache.cassandra.locator.Ec2Snitch

Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2015-11-02 08:29:20 +02:00
Pekka Enberg
ca6078d5eb Merge "Wire reconnectable_snitch_helper to gossiping_property_file_snitch" from Vlad
"gossiping_property_file_snitch checks its property
file (cassandra-rackdc.properties) for changes every minute; if there
are changes, it re-registers the helper and initiates a
re-read of the new DC and Rack values in the corresponding places.

Therefore we need the ability to unregister/register the corresponding subscriber
while the subscriber list is possibly being iterated by
some other asynchronous context on the current CPU.

The current gossiper implementation assumes that the subscriber list may not be
changed from a context different from the one that iterates over it.

So, this had to be fixed.

An update_endpoint(ep) interface was also missing from the locator::topology
class, along with the corresponding token_metadata::update_topology(ep) wrapper.

There were also some bugs in the gossiping_property_file::reload_configuration()
method."
2015-10-30 16:04:07 +02:00
Pekka Enberg
4701e698b9 Merge "Fixes for dependency packages, fix upstart script" from Takuya
"Fixes #510, (part of) #493."
2015-10-30 15:42:20 +02:00
Pekka Enberg
f78deffdc8 release: prepare for 0.11
Signed-off-by: Pekka Enberg <penberg@scylladb.com>
2015-10-28 14:49:59 +02:00
323 changed files with 5238 additions and 12994 deletions

.gitignore (vendored)

@@ -4,4 +4,3 @@
build
build.ninja
cscope.*
/debian/

ORIGIN

@@ -1,77 +1 @@
http://git-wip-us.apache.org/repos/asf/cassandra.git trunk (bf599fb5b062cbcc652da78b7d699e7a01b949ad)
import = bf599fb5b062cbcc652da78b7d699e7a01b949ad
Y = Already in scylla
$ git log --oneline import..cassandra-2.1.11 -- gms/
Y 484e645 Mark node as dead even if already left
d0c166f Add trampled commit back
ba5837e Merge branch 'cassandra-2.0' into cassandra-2.1
718e47f Forgot a damn c/r
a7282e4 Merge branch 'cassandra-2.0' into cassandra-2.1
Y ae4cd69 Print versions for gossip states in gossipinfo.
Y 7fba3d2 Don't mark nodes down before the max local pause interval once paused.
c2142e6 Merge branch 'cassandra-2.0' into cassandra-2.1
ba9a69e checkForEndpointCollision fails for legitimate collisions, finalized list of statuses and nits, CASSANDRA-9765
54470a2 checkForEndpointCollision fails for legitimate collisions, improved version after CR, CASSANDRA-9765
2c9b490 checkForEndpointCollision fails for legitimate collisions, CASSANDRA-9765
4c15970 Merge branch 'cassandra-2.0' into cassandra-2.1
ad8047a ArrivalWindow should use primitives
Y 4012134 Failure detector detects and ignores local pauses
9bcdd0f Merge branch 'cassandra-2.0' into cassandra-2.1
cefaa4e Close incoming connections when MessagingService is stopped
ea1beda Merge branch 'cassandra-2.0' into cassandra-2.1
08dbbd6 Ignore gossip SYNs after shutdown
3c17ac6 Merge branch 'cassandra-2.0' into cassandra-2.1
a64bc43 lists work better when you initialize them
543a899 change list to arraylist
730d4d4 Merge branch 'cassandra-2.0' into cassandra-2.1
e3e2de0 change list to arraylist
f7884c5 Merge branch 'cassandra-2.0' into cassandra-2.1
Y 84b2846 remove redundant state
4f2c372 Merge branch 'cassandra-2.0' into cassandra-2.1
Y b2c62bb Add shutdown gossip state to prevent timeouts during rolling restarts
Y def4835 Add missing follow on fix for 7816 only applied to cassandra-2.1 branch in 763130bdbde2f4cec2e8973bcd5203caf51cc89f
Y 763130b Followup commit for 7816
1376b8e Merge branch 'cassandra-2.0' into cassandra-2.1
Y 2199a87 Fix duplicate up/down messages sent to native clients
136042e Merge branch 'cassandra-2.0' into cassandra-2.1
Y eb9c5bb Improve FD logging when the arrival time is ignored.
$ git log --oneline import..cassandra-2.1.11 -- service/StorageService.java
92c5787 Keep StorageServiceMBean interface stable
6039d0e Fix DC and Rack in nodetool info
a2f0da0 Merge branch 'cassandra-2.0' into cassandra-2.1
c4de752 Follow-up to CASSANDRA-10238
e889ee4 2i key cache load fails
4b1d59e Merge branch 'cassandra-2.0' into cassandra-2.1
257cdaa Fix consolidating racks violating the RF contract
Y 27754c0 refuse to decomission if not in state NORMAL patch by Jan Karlsson and Stefania for CASSANDRA-8741
Y 5bc56c3 refuse to decomission if not in state NORMAL patch by Jan Karlsson and Stefania for CASSANDRA-8741
Y 8f9ca07 Cannot replace token does not exist - DN node removed as Fat Client
c2142e6 Merge branch 'cassandra-2.0' into cassandra-2.1
54470a2 checkForEndpointCollision fails for legitimate collisions, improved version after CR, CASSANDRA-9765
1eccced Handle corrupt files on startup
2c9b490 checkForEndpointCollision fails for legitimate collisions, CASSANDRA-9765
c4b5260 Merge branch 'cassandra-2.0' into cassandra-2.1
Y 52dbc3f Can't transition from write survey to normal mode
9966419 Make rebuild only run one at a time
d693ca1 Merge branch 'cassandra-2.0' into cassandra-2.1
be9eff5 Add option to not validate atoms during scrub
2a4daaf followup fix for 8564
93478ab Wait for anticompaction to finish
9e9846e Fix for harmless exceptions being logged as ERROR
6d06f32 Fix anticompaction blocking ANTI_ENTROPY stage
4f2c372 Merge branch 'cassandra-2.0' into cassandra-2.1
Y b2c62bb Add shutdown gossip state to prevent timeouts during rolling restarts
Y cba1b68 Fix failed bootstrap/replace attempts being persisted in system.peers
f59df28 Allow takeColumnFamilySnapshot to take a list of tables patch by Sachin Jarin; reviewed by Nick Bailey for CASSANDRA-8348
Y ac46747 Fix failed bootstrap/replace attempts being persisted in system.peers
5abab57 Merge branch 'cassandra-2.0' into cassandra-2.1
0ff9c3c Allow reusing snapshot tags across different column families.
f9c57a5 Merge branch 'cassandra-2.0' into cassandra-2.1
Y b296c55 Fix MOVED_NODE client event
bbb3fc7 Merge branch 'cassandra-2.0' into cassandra-2.1
37eb2a0 Fix NPE in nodetool getendpoints with bad ks/cf
f8b43d4 Merge branch 'cassandra-2.0' into cassandra-2.1
e20810c Remove C* specific class from JMX API


@@ -11,37 +11,13 @@ git submodule init
git submodule update --recursive
```
### Building and Running Scylla on Fedora
* Installing required packages:
### Building scylla on Fedora
Installing required packages:
```
sudo yum install yaml-cpp-devel lz4-devel zlib-devel snappy-devel jsoncpp-devel thrift-devel antlr3-tool antlr3-C++-devel libasan libubsan
```
* Build Scylla
```
./configure.py --mode=release --with=scylla --disable-xen
ninja build/release/scylla -j2 # you can use more cpus if you have tons of RAM
```
* Run Scylla
```
./build/release/scylla
```
* run Scylla with one CPU and ./tmp as data directory
```
./build/release/scylla --datadir tmp --commitlog-directory tmp --smp 1
```
* For more run options:
```
./build/release/scylla --help
```
## Building Fedora RPM
As a pre-requisite, you need to install [Mock](https://fedoraproject.org/wiki/Mock) on your machine:
@@ -80,17 +56,5 @@ docker build -t <image-name> .
Run the image with:
```
docker run -p $(hostname -i):9042:9042 -i -t <image name>
docker run -i -t <image name>
```
## Contributing to Scylla
Do not send pull requests.
Send patches to the mailing list address scylladb-dev@googlegroups.com.
Be sure to subscribe.
In order for your patches to be merged, you must sign the Contributor's
License Agreement, protecting your rights and ours. See
http://www.scylladb.com/opensource/cla/.


@@ -1,6 +1,6 @@
#!/bin/sh
VERSION=0.14.1
VERSION=0.11.1
if test -f version
then


@@ -579,6 +579,30 @@
}
]
},
{
"path":"/column_family/sstables/snapshots_size/{name}",
"operations":[
{
"method":"GET",
"summary":"the size of SSTables in 'snapshots' subdirectory which aren't live anymore",
"type":"double",
"nickname":"true_snapshots_size",
"produces":[
"application/json"
],
"parameters":[
{
"name":"name",
"description":"The column family name in keysspace:name format",
"required":true,
"allowMultiple":false,
"type":"string",
"paramType":"path"
}
]
}
]
},
{
"path":"/column_family/metrics/memtable_columns_count/{name}",
"operations":[
@@ -2017,7 +2041,7 @@
]
},
{
"path":"/column_family/metrics/snapshots_size/{name}",
"path":"/column_family/metrics/true_snapshots_size/{name}",
"operations":[
{
"method":"GET",


@@ -15,7 +15,7 @@
"summary":"get List of running compactions",
"type":"array",
"items":{
"type":"summary"
"type":"jsonmap"
},
"nickname":"get_compactions",
"produces":[
@@ -46,16 +46,16 @@
]
},
{
"path":"/compaction_manager/compaction_info",
"path":"/compaction_manager/compaction_summary",
"operations":[
{
"method":"GET",
"summary":"get a list of all active compaction info",
"summary":"get compaction summary",
"type":"array",
"items":{
"type":"compaction_info"
"type":"string"
},
"nickname":"get_compaction_info",
"nickname":"get_compaction_summary",
"produces":[
"application/json"
],
@@ -174,73 +174,30 @@
}
],
"models":{
"row_merged":{
"id":"row_merged",
"description":"A row merged information",
"mapper":{
"id":"mapper",
"description":"A key value mapping",
"properties":{
"key":{
"type":"int",
"description":"The number of sstable"
"type":"string",
"description":"The key"
},
"value":{
"type":"long",
"description":"The number or row compacted"
"type":"string",
"description":"The value"
}
}
},
"compaction_info" :{
"id": "compaction_info",
"description":"A key value mapping",
"properties":{
"operation_type":{
"type":"string",
"description":"The operation type"
},
"completed":{
"type":"long",
"description":"The current completed"
},
"total":{
"type":"long",
"description":"The total to compact"
},
"unit":{
"type":"string",
"description":"The compacted unit"
}
}
},
"summary":{
"id":"summary",
"description":"A compaction summary object",
"jsonmap":{
"id":"jsonmap",
"description":"A json representation of a map as a list of key value",
"properties":{
"id":{
"type":"string",
"description":"The UUID"
},
"ks":{
"type":"string",
"description":"The keyspace name"
},
"cf":{
"type":"string",
"description":"The column family name"
},
"completed":{
"type":"long",
"description":"The number of units completed"
},
"total":{
"type":"long",
"description":"The total number of units"
},
"task_type":{
"type":"string",
"description":"The task compaction type"
},
"unit":{
"type":"string",
"description":"The units being used"
"value":{
"type":"array",
"items":{
"type":"mapper"
},
"description":"A list of key, value mapping"
}
}
},
@@ -275,7 +232,7 @@
"rows_merged":{
"type":"array",
"items":{
"type":"row_merged"
"type":"mapper"
},
"description":"The merged rows"
}


@@ -48,10 +48,7 @@
{
"method":"GET",
"summary":"Get all endpoint states",
"type":"array",
"items":{
"type":"endpoint_state"
},
"type":"string",
"nickname":"get_all_endpoint_states",
"produces":[
"application/json"
@@ -151,53 +148,6 @@
"description": "The value"
}
}
},
"endpoint_state": {
"id": "states",
"description": "Holds an endpoint state",
"properties": {
"addrs": {
"type": "string",
"description": "The endpoint address"
},
"generation": {
"type": "int",
"description": "The heart beat generation"
},
"version": {
"type": "int",
"description": "The heart beat version"
},
"update_time": {
"type": "long",
"description": "The update timestamp"
},
"is_alive": {
"type": "boolean",
"description": "Is the endpoint alive"
},
"application_state" : {
"type":"array",
"items":{
"type":"version_value"
},
"description": "Is the endpoint alive"
}
}
},
"version_value": {
"id": "version_value",
"description": "Holds a version value for an application state",
"properties": {
"application_state": {
"type": "int",
"description": "The application state enum index"
},
"value": {
"type": "string",
"description": "The version value"
}
}
}
}
}


@@ -8,16 +8,13 @@
],
"apis":[
{
"path":"/messaging_service/messages/timeout",
"path":"/messaging_service/totaltimeouts",
"operations":[
{
"method":"GET",
"summary":"Get the number of timeout messages",
"type":"array",
"items":{
"type":"message_counter"
},
"nickname":"get_timeout_messages",
"summary":"Total number of timeouts happened on this node",
"type":"long",
"nickname":"get_totaltimeouts",
"produces":[
"application/json"
],
@@ -28,7 +25,7 @@
]
},
{
"path":"/messaging_service/messages/dropped_by_ver",
"path":"/messaging_service/messages/dropped",
"operations":[
{
"method":"GET",
@@ -37,25 +34,6 @@
"items":{
"type":"verb_counter"
},
"nickname":"get_dropped_messages_by_ver",
"produces":[
"application/json"
],
"parameters":[
]
}
]
},
{
"path":"/messaging_service/messages/dropped",
"operations":[
{
"method":"GET",
"summary":"Get the number of messages that were dropped before sending",
"type":"array",
"items":{
"type":"message_counter"
},
"nickname":"get_dropped_messages",
"produces":[
"application/json"
@@ -165,49 +143,6 @@
]
}
]
},
{
"path":"/messaging_service/messages/respond_completed",
"operations":[
{
"method":"GET",
"summary":"Get the number of completed respond messages",
"type":"array",
"items":{
"type":"message_counter"
},
"nickname":"get_respond_completed_messages",
"produces":[
"application/json"
],
"parameters":[
]
}
]
},
{
"path":"/messaging_service/version",
"operations":[
{
"method":"GET",
"summary":"Get the version number",
"type":"int",
"nickname":"get_version",
"produces":[
"application/json"
],
"parameters":[
{
"name":"addr",
"description":"Address",
"required":true,
"allowMultiple":false,
"type":"string",
"paramType":"query"
}
]
}
]
}
],
"models":{
@@ -215,10 +150,10 @@
"id":"message_counter",
"description":"Holds command counters",
"properties":{
"value":{
"count":{
"type":"long"
},
"key":{
"ip":{
"type":"string"
}
}


@@ -290,25 +290,6 @@
}
]
},
{
"path":"/storage_service/describe_ring/",
"operations":[
{
"method":"GET",
"summary":"The TokenRange for a any keyspace",
"type":"array",
"items":{
"type":"token_range"
},
"nickname":"describe_any_ring",
"produces":[
"application/json"
],
"parameters":[
]
}
]
},
{
"path":"/storage_service/describe_ring/{keyspace}",
"operations":[
@@ -317,9 +298,9 @@
"summary":"The TokenRange for a given keyspace",
"type":"array",
"items":{
"type":"token_range"
"type":"string"
},
"nickname":"describe_ring",
"nickname":"describe_ring_jmx",
"produces":[
"application/json"
],
@@ -330,7 +311,7 @@
"required":true,
"allowMultiple":false,
"type":"string",
"paramType":"path"
"paramType":"query"
}
]
}
@@ -425,7 +406,7 @@
"summary":"load value. Keys are IP addresses",
"type":"array",
"items":{
"type":"double_mapper"
"type":"mapper"
},
"nickname":"get_load_map",
"produces":[
@@ -797,72 +778,8 @@
"paramType":"path"
},
{
"name":"primaryRange",
"description":"If the value is the string 'true' with any capitalization, repair only the first range returned by the partitioner.",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"parallelism",
"description":"Repair parallelism, can be 0 (sequential), 1 (parallel) or 2 (datacenter-aware).",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"incremental",
"description":"If the value is the string 'true' with any capitalization, perform incremental repair.",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"jobThreads",
"description":"An integer specifying the parallelism on each node.",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"ranges",
"description":"An explicit list of ranges to repair, overriding the default choice. Each range is expressed as token1:token2, and multiple ranges can be given as a comma separated list.",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"columnFamilies",
"description":"Which column families to repair in the given keyspace. Multiple columns families can be named separated by commas. If this option is missing, all column families in the keyspace are repaired.",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"dataCenters",
"description":"Which data centers are to participate in this repair. Multiple data centers can be listed separated by commas.",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"hosts",
"description":"Which hosts are to participate in this repair. Multiple hosts can be listed separated by commas.",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"trace",
"description":"If the value is the string 'true' with any capitalization, enable tracing of the repair.",
"name":"options",
"description":"Options for the repair",
"required":false,
"allowMultiple":false,
"type":"string",
@@ -2028,20 +1945,6 @@
}
}
},
"double_mapper":{
"id":"double_mapper",
"description":"A key value mapping between a string and a double",
"properties":{
"key":{
"type":"string",
"description":"The key"
},
"value":{
"type":"double",
"description":"The value"
}
}
},
"maplist_mapper":{
"id":"maplist_mapper",
"description":"A key value mapping, where key and value are list",
@@ -2100,59 +2003,6 @@
"description":"The column family"
}
}
},
"endpoint_detail":{
"id":"endpoint_detail",
"description":"Endpoint detail",
"properties":{
"host":{
"type":"string",
"description":"The endpoint host"
},
"datacenter":{
"type":"string",
"description":"The endpoint datacenter"
},
"rack":{
"type":"string",
"description":"The endpoint rack"
}
}
},
"token_range":{
"id":"token_range",
"description":"Endpoint range information",
"properties":{
"start_token":{
"type":"string",
"description":"The range start token"
},
"end_token":{
"type":"string",
"description":"The range start token"
},
"endpoints":{
"type":"array",
"items":{
"type":"string"
},
"description":"The endpoints"
},
"rpc_endpoints":{
"type":"array",
"items":{
"type":"string"
},
"description":"The rpc endpoints"
},
"endpoint_details":{
"type":"array",
"items":{
"type":"endpoint_detail"
},
"description":"The endpoint details"
}
}
}
}
}


@@ -128,54 +128,47 @@ inline double pow2(double a) {
return a * a;
}
// FIXME: Move to utils::ihistogram::operator+=()
inline utils::ihistogram add_histogram(utils::ihistogram res,
inline httpd::utils_json::histogram add_histogram(httpd::utils_json::histogram res,
const utils::ihistogram& val) {
if (res.count == 0) {
return val;
if (!res.count._set) {
res = val;
return res;
}
if (val.count == 0) {
return std::move(res);
return res;
}
if (res.min > val.min) {
if (res.min() > val.min) {
res.min = val.min;
}
if (res.max < val.max) {
if (res.max() < val.max) {
res.max = val.max;
}
double ncount = res.count + val.count;
double ncount = res.count() + val.count;
// To get an estimated sum we take the estimated mean
// and multiply it by the true count
res.sum = res.sum + val.mean * val.count;
double a = res.count/ncount;
res.sum = res.sum() + val.mean * val.count;
double a = res.count()/ncount;
double b = val.count/ncount;
double mean = a * res.mean + b * val.mean;
double mean = a * res.mean() + b * val.mean;
res.variance = (res.variance + pow2(res.mean - mean) )* a +
res.variance = (res.variance() + pow2(res.mean() - mean) )* a +
(val.variance + pow2(val.mean -mean))* b;
res.mean = mean;
res.count = res.count + val.count;
res.count = res.count() + val.count;
for (auto i : val.sample) {
res.sample.push_back(i);
res.sample.push(i);
}
return res;
}
inline
httpd::utils_json::histogram to_json(const utils::ihistogram& val) {
httpd::utils_json::histogram h;
h = val;
return h;
}
template<class T, class F>
future<json::json_return_type> sum_histogram_stats(distributed<T>& d, utils::ihistogram F::*f) {
return d.map_reduce0([f](const T& p) {return p.get_stats().*f;}, utils::ihistogram(),
add_histogram).then([](const utils::ihistogram& val) {
return make_ready_future<json::json_return_type>(to_json(val));
return d.map_reduce0([f](const T& p) {return p.get_stats().*f;}, httpd::utils_json::histogram(),
add_histogram).then([](const httpd::utils_json::histogram& val) {
return make_ready_future<json::json_return_type>(val);
});
}


@@ -64,21 +64,21 @@ future<> foreach_column_family(http_context& ctx, const sstring& name, function<
future<json::json_return_type> get_cf_stats(http_context& ctx, const sstring& name,
int64_t column_family::stats::*f) {
return map_reduce_cf(ctx, name, int64_t(0), [f](const column_family& cf) {
return map_reduce_cf(ctx, name, 0, [f](const column_family& cf) {
return cf.get_stats().*f;
}, std::plus<int64_t>());
}
future<json::json_return_type> get_cf_stats(http_context& ctx,
int64_t column_family::stats::*f) {
return map_reduce_cf(ctx, int64_t(0), [f](const column_family& cf) {
return map_reduce_cf(ctx, 0, [f](const column_family& cf) {
return cf.get_stats().*f;
}, std::plus<int64_t>());
}
static future<json::json_return_type> get_cf_stats_count(http_context& ctx, const sstring& name,
utils::ihistogram column_family::stats::*f) {
return map_reduce_cf(ctx, name, int64_t(0), [f](const column_family& cf) {
return map_reduce_cf(ctx, name, 0, [f](const column_family& cf) {
return (cf.get_stats().*f).count;
}, std::plus<int64_t>());
}
@@ -101,7 +101,7 @@ static future<json::json_return_type> get_cf_stats_sum(http_context& ctx, const
static future<json::json_return_type> get_cf_stats_count(http_context& ctx,
utils::ihistogram column_family::stats::*f) {
return map_reduce_cf(ctx, int64_t(0), [f](const column_family& cf) {
return map_reduce_cf(ctx, 0, [f](const column_family& cf) {
return (cf.get_stats().*f).count;
}, std::plus<int64_t>());
}
@@ -110,30 +110,28 @@ static future<json::json_return_type> get_cf_histogram(http_context& ctx, const
utils::ihistogram column_family::stats::*f) {
utils::UUID uuid = get_uuid(name, ctx.db.local());
return ctx.db.map_reduce0([f, uuid](const database& p) {return p.find_column_family(uuid).get_stats().*f;},
utils::ihistogram(),
httpd::utils_json::histogram(),
add_histogram)
.then([](const utils::ihistogram& val) {
return make_ready_future<json::json_return_type>(to_json(val));
.then([](const httpd::utils_json::histogram& val) {
return make_ready_future<json::json_return_type>(val);
});
}
static future<json::json_return_type> get_cf_histogram(http_context& ctx, utils::ihistogram column_family::stats::*f) {
std::function<utils::ihistogram(const database&)> fun = [f] (const database& db) {
utils::ihistogram res;
std::function<httpd::utils_json::histogram(const database&)> fun = [f] (const database& db) {
httpd::utils_json::histogram res;
for (auto i : db.get_column_families()) {
res = add_histogram(res, i.second->get_stats().*f);
}
return res;
};
return ctx.db.map(fun).then([](const std::vector<utils::ihistogram> &res) {
std::vector<httpd::utils_json::histogram> r;
boost::copy(res | boost::adaptors::transformed(to_json), std::back_inserter(r));
return make_ready_future<json::json_return_type>(r);
return ctx.db.map(fun).then([](const std::vector<httpd::utils_json::histogram> &res) {
return make_ready_future<json::json_return_type>(res);
});
}
static future<json::json_return_type> get_cf_unleveled_sstables(http_context& ctx, const sstring& name) {
return map_reduce_cf(ctx, name, int64_t(0), [](const column_family& cf) {
return map_reduce_cf(ctx, name, 0, [](const column_family& cf) {
return cf.get_unleveled_sstables();
}, std::plus<int64_t>());
}
@@ -223,25 +221,25 @@ void set_column_family(http_context& ctx, routes& r) {
});
cf::get_memtable_off_heap_size.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](column_family& cf) {
return map_reduce_cf(ctx, req->param["name"], 0, [](column_family& cf) {
return cf.active_memtable().region().occupancy().total_space();
}, std::plus<int64_t>());
});
cf::get_all_memtable_off_heap_size.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, int64_t(0), [](column_family& cf) {
return map_reduce_cf(ctx, 0, [](column_family& cf) {
return cf.active_memtable().region().occupancy().total_space();
}, std::plus<int64_t>());
});
cf::get_memtable_live_data_size.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](column_family& cf) {
return map_reduce_cf(ctx, req->param["name"], 0, [](column_family& cf) {
return cf.active_memtable().region().occupancy().used_space();
}, std::plus<int64_t>());
});
cf::get_all_memtable_live_data_size.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, int64_t(0), [](column_family& cf) {
return map_reduce_cf(ctx, 0, [](column_family& cf) {
return cf.active_memtable().region().occupancy().used_space();
}, std::plus<int64_t>());
});
@@ -256,7 +254,7 @@ void set_column_family(http_context& ctx, routes& r) {
cf::get_cf_all_memtables_off_heap_size.set(r, [&ctx] (std::unique_ptr<request> req) {
warn(unimplemented::cause::INDEXES);
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](column_family& cf) {
return map_reduce_cf(ctx, req->param["name"], 0, [](column_family& cf) {
return cf.occupancy().total_space();
}, std::plus<int64_t>());
});
@@ -265,21 +263,21 @@ void set_column_family(http_context& ctx, routes& r) {
warn(unimplemented::cause::INDEXES);
return ctx.db.map_reduce0([](const database& db){
return db.dirty_memory_region_group().memory_used();
}, int64_t(0), std::plus<int64_t>()).then([](int res) {
}, 0, std::plus<int64_t>()).then([](int res) {
return make_ready_future<json::json_return_type>(res);
});
});
cf::get_cf_all_memtables_live_data_size.set(r, [&ctx] (std::unique_ptr<request> req) {
warn(unimplemented::cause::INDEXES);
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](column_family& cf) {
return map_reduce_cf(ctx, req->param["name"], 0, [](column_family& cf) {
return cf.occupancy().used_space();
}, std::plus<int64_t>());
});
cf::get_all_cf_all_memtables_live_data_size.set(r, [&ctx] (std::unique_ptr<request> req) {
warn(unimplemented::cause::INDEXES);
return map_reduce_cf(ctx, int64_t(0), [](column_family& cf) {
return map_reduce_cf(ctx, 0, [](column_family& cf) {
return cf.active_memtable().region().occupancy().used_space();
}, std::plus<int64_t>());
});
@@ -304,7 +302,7 @@ void set_column_family(http_context& ctx, routes& r) {
});
cf::get_estimated_row_count.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](column_family& cf) {
return map_reduce_cf(ctx, req->param["name"], 0, [](column_family& cf) {
uint64_t res = 0;
for (auto i: *cf.get_sstables() ) {
res += i.second->get_stats_metadata().estimated_row_size.count();
@@ -424,11 +422,11 @@ void set_column_family(http_context& ctx, routes& r) {
});
cf::get_max_row_size.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], int64_t(0), max_row_size, max_int64);
return map_reduce_cf(ctx, req->param["name"], 0, max_row_size, max_int64);
});
cf::get_all_max_row_size.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, int64_t(0), max_row_size, max_int64);
return map_reduce_cf(ctx, 0, max_row_size, max_int64);
});
cf::get_mean_row_size.set(r, [&ctx] (std::unique_ptr<request> req) {
@@ -539,20 +537,20 @@ void set_column_family(http_context& ctx, routes& r) {
}, std::plus<uint64_t>());
});
cf::get_index_summary_off_heap_memory_used.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], uint64_t(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), uint64_t(0), [](uint64_t s, auto& sst) {
return sst.second->get_summary().memory_footprint();
});
}, std::plus<uint64_t>());
cf::get_index_summary_off_heap_memory_used.set(r, [] (std::unique_ptr<request> req) {
//TBD
// FIXME
// We are missing the off heap memory calculation
// Return 0 is the wrong value. It's a work around
// until the memory calculation will be available
//auto id = get_uuid(req->param["name"], ctx.db.local());
return make_ready_future<json::json_return_type>(0);
});
cf::get_all_index_summary_off_heap_memory_used.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, uint64_t(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), uint64_t(0), [](uint64_t s, auto& sst) {
return sst.second->get_summary().memory_footprint();
});
}, std::plus<uint64_t>());
cf::get_all_index_summary_off_heap_memory_used.set(r, [] (std::unique_ptr<request> req) {
//TBD
unimplemented();
return make_ready_future<json::json_return_type>(0);
});
cf::get_compression_metadata_off_heap_memory_used.set(r, [] (std::unique_ptr<request> req) {
@@ -591,16 +589,11 @@ void set_column_family(http_context& ctx, routes& r) {
return make_ready_future<json::json_return_type>(0);
});
cf::get_true_snapshots_size.set(r, [&ctx] (std::unique_ptr<request> req) {
auto uuid = get_uuid(req->param["name"], ctx.db.local());
return ctx.db.local().find_column_family(uuid).get_snapshot_details().then([](
const std::unordered_map<sstring, column_family::snapshot_details>& sd) {
int64_t res = 0;
for (auto i : sd) {
res += i.second.total;
}
return make_ready_future<json::json_return_type>(res);
});
cf::get_true_snapshots_size.set(r, [] (std::unique_ptr<request> req) {
//TBD
// FIXME
//auto id = get_uuid(req->param["name"], ctx.db.local());
return make_ready_future<json::json_return_type>(0);
});
cf::get_all_true_snapshots_size.set(r, [] (std::unique_ptr<request> req) {
@@ -623,25 +616,25 @@ void set_column_family(http_context& ctx, routes& r) {
});
cf::get_row_cache_hit.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](const column_family& cf) {
return map_reduce_cf(ctx, req->param["name"], 0, [](const column_family& cf) {
return cf.get_row_cache().stats().hits;
}, std::plus<int64_t>());
});
cf::get_all_row_cache_hit.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, int64_t(0), [](const column_family& cf) {
return map_reduce_cf(ctx, 0, [](const column_family& cf) {
return cf.get_row_cache().stats().hits;
}, std::plus<int64_t>());
});
cf::get_row_cache_miss.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](const column_family& cf) {
return map_reduce_cf(ctx, req->param["name"], 0, [](const column_family& cf) {
return cf.get_row_cache().stats().misses;
}, std::plus<int64_t>());
});
cf::get_all_row_cache_miss.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, int64_t(0), [](const column_family& cf) {
return map_reduce_cf(ctx, 0, [](const column_family& cf) {
return cf.get_row_cache().stats().misses;
}, std::plus<int64_t>());

View File

@@ -21,17 +21,16 @@
#include "compaction_manager.hh"
#include "api/api-doc/compaction_manager.json.hh"
#include "db/system_keyspace.hh"
namespace api {
using namespace scollectd;
namespace cm = httpd::compaction_manager_json;
using namespace json;
static future<json::json_return_type> get_cm_stats(http_context& ctx,
int64_t compaction_manager::stats::*f) {
return ctx.db.map_reduce0([f](database& db) {
return ctx.db.map_reduce0([&](database& db) {
return db.get_compaction_manager().get_stats().*f;
}, int64_t(0), std::plus<int64_t>()).then([](const int64_t& res) {
return make_ready_future<json::json_return_type>(res);
@@ -39,38 +38,29 @@ static future<json::json_return_type> get_cm_stats(http_context& ctx,
}
void set_compaction_manager(http_context& ctx, routes& r) {
cm::get_compactions.set(r, [&ctx] (std::unique_ptr<request> req) {
return ctx.db.map_reduce0([](database& db) {
std::vector<cm::summary> summaries;
const compaction_manager& cm = db.get_compaction_manager();
cm::get_compactions.set(r, [] (std::unique_ptr<request> req) {
//TBD
unimplemented();
std::vector<cm::jsonmap> map;
return make_ready_future<json::json_return_type>(map);
});
for (const auto& c : cm.get_compactions()) {
cm::summary s;
s.ks = c->ks;
s.cf = c->cf;
s.unit = "keys";
s.task_type = "compaction";
s.completed = c->total_keys_written;
s.total = c->total_partitions;
summaries.push_back(std::move(s));
}
return summaries;
}, std::vector<cm::summary>(), concat<cm::summary>).then([](const std::vector<cm::summary>& res) {
return make_ready_future<json::json_return_type>(res);
});
cm::get_compaction_summary.set(r, [] (std::unique_ptr<request> req) {
//TBD
unimplemented();
std::vector<sstring> res;
return make_ready_future<json::json_return_type>(res);
});
cm::force_user_defined_compaction.set(r, [] (std::unique_ptr<request> req) {
//TBD
// FIXME
warn(unimplemented::cause::API);
return make_ready_future<json::json_return_type>(json_void());
unimplemented();
return make_ready_future<json::json_return_type>("");
});
cm::stop_compaction.set(r, [] (std::unique_ptr<request> req) {
//TBD
// FIXME
warn(unimplemented::cause::API);
unimplemented();
return make_ready_future<json::json_return_type>("");
});
@@ -91,42 +81,14 @@ void set_compaction_manager(http_context& ctx, routes& r) {
cm::get_bytes_compacted.set(r, [] (std::unique_ptr<request> req) {
//TBD
// FIXME
warn(unimplemented::cause::API);
unimplemented();
return make_ready_future<json::json_return_type>(0);
});
cm::get_compaction_history.set(r, [] (std::unique_ptr<request> req) {
return db::system_keyspace::get_compaction_history().then([] (std::vector<db::system_keyspace::compaction_history_entry> history) {
std::vector<cm::history> res;
res.reserve(history.size());
for (auto& entry : history) {
cm::history h;
h.id = entry.id.to_sstring();
h.ks = std::move(entry.ks);
h.cf = std::move(entry.cf);
h.compacted_at = entry.compacted_at;
h.bytes_in = entry.bytes_in;
h.bytes_out = entry.bytes_out;
for (auto it : entry.rows_merged) {
httpd::compaction_manager_json::row_merged e;
e.key = it.first;
e.value = it.second;
h.rows_merged.push(std::move(e));
}
res.push_back(std::move(h));
}
return make_ready_future<json::json_return_type>(res);
});
});
cm::get_compaction_info.set(r, [] (std::unique_ptr<request> req) {
//TBD
// FIXME
warn(unimplemented::cause::API);
std::vector<cm::compaction_info> res;
unimplemented();
std::vector<cm::history> res;
return make_ready_future<json::json_return_type>(res);
});

View File

@@ -22,33 +22,15 @@
#include "failure_detector.hh"
#include "api/api-doc/failure_detector.json.hh"
#include "gms/failure_detector.hh"
#include "gms/application_state.hh"
#include "gms/gossiper.hh"
namespace api {
namespace fd = httpd::failure_detector_json;
void set_failure_detector(http_context& ctx, routes& r) {
fd::get_all_endpoint_states.set(r, [](std::unique_ptr<request> req) {
std::vector<fd::endpoint_state> res;
for (auto i : gms::get_local_gossiper().endpoint_state_map) {
fd::endpoint_state val;
val.addrs = boost::lexical_cast<std::string>(i.first);
val.is_alive = i.second.is_alive();
val.generation = i.second.get_heart_beat_state().get_generation();
val.version = i.second.get_heart_beat_state().get_heart_beat_version();
val.update_time = i.second.get_update_timestamp().time_since_epoch().count();
for (auto a : i.second.get_application_state_map()) {
fd::version_value version_val;
// We return the enum index and not its name to stay compatible with Origin's
// method: the state indexes are static but the names can change.
version_val.application_state = static_cast<std::underlying_type<gms::application_state>::type>(a.first);
version_val.value = a.second.value;
val.application_state.push(version_val);
}
res.push_back(val);
}
return make_ready_future<json::json_return_type>(res);
return gms::get_all_endpoint_states().then([](const sstring& str) {
return make_ready_future<json::json_return_type>(str);
});
});
fd::get_up_endpoint_count.set(r, [](std::unique_ptr<request> req) {

View File

@@ -41,8 +41,8 @@ std::vector<message_counter> map_to_message_counters(
std::vector<message_counter> res;
for (auto i : map) {
res.push_back(message_counter());
res.back().key = boost::lexical_cast<sstring>(i.first);
res.back().value = i.second;
res.back().ip = boost::lexical_cast<sstring>(i.first);
res.back().count = i.second;
}
return res;
}
@@ -70,39 +70,12 @@ future_json_function get_client_getter(std::function<uint64_t(const shard_info&)
};
}
future_json_function get_server_getter(std::function<uint64_t(const rpc::stats&)> f) {
return [f](std::unique_ptr<request> req) {
using map_type = std::unordered_map<gms::inet_address, uint64_t>;
auto get_shard_map = [f](messaging_service& ms) {
std::unordered_map<gms::inet_address, unsigned long> map;
ms.foreach_server_connection_stats([&map, f] (const rpc::client_info& info, const rpc::stats& stats) mutable {
map[gms::inet_address(net::ipv4_address(info.addr))] = f(stats);
});
return map;
};
return get_messaging_service().map_reduce0(get_shard_map, map_type(), map_sum<map_type>).
then([](map_type&& map) {
return make_ready_future<json::json_return_type>(map_to_message_counters(map));
});
};
}
void set_messaging_service(http_context& ctx, routes& r) {
get_timeout_messages.set(r, get_client_getter([](const shard_info& c) {
return c.get_stats().timeout;
}));
get_sent_messages.set(r, get_client_getter([](const shard_info& c) {
return c.get_stats().sent_messages;
}));
get_dropped_messages.set(r, get_client_getter([](const shard_info& c) {
// We don't have the same dropped-message mechanism
// as Origin has, hence we always return 0.
return 0;
}));
get_exception_messages.set(r, get_client_getter([](const shard_info& c) {
return c.get_stats().exception_received;
}));
@@ -111,19 +84,11 @@ void set_messaging_service(http_context& ctx, routes& r) {
return c.get_stats().pending;
}));
get_respond_pending_messages.set(r, get_server_getter([](const rpc::stats& c) {
return c.pending;
get_respond_pending_messages.set(r, get_client_getter([](const shard_info& c) {
return c.get_stats().wait_reply;
}));
get_respond_completed_messages.set(r, get_server_getter([](const rpc::stats& c) {
return c.sent_messages;
}));
get_version.set(r, [](const_req req) {
return net::get_local_messaging_service().get_raw_version(req.get_query_param("addr"));
});
get_dropped_messages_by_ver.set(r, [](std::unique_ptr<request> req) {
get_dropped_messages.set(r, [](std::unique_ptr<request> req) {
shared_ptr<std::vector<uint64_t>> map = make_shared<std::vector<uint64_t>>(num_verb, 0);
return net::get_messaging_service().map_reduce([map](const uint64_t* local_map) mutable {

View File

@@ -201,16 +201,22 @@ void set_storage_proxy(http_context& ctx, routes& r) {
return make_ready_future<json::json_return_type>(json_void());
});
sp::get_read_repair_attempted.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_stats(ctx.sp, &proxy::stats::read_repair_attempts);
sp::get_read_repair_attempted.set(r, [](std::unique_ptr<request> req) {
//TBD
unimplemented();
return make_ready_future<json::json_return_type>(0);
});
sp::get_read_repair_repaired_blocking.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_stats(ctx.sp, &proxy::stats::read_repair_repaired_blocking);
sp::get_read_repair_repaired_blocking.set(r, [](std::unique_ptr<request> req) {
//TBD
unimplemented();
return make_ready_future<json::json_return_type>(0);
});
sp::get_read_repair_repaired_background.set(r, [&ctx](std::unique_ptr<request> req) {
return sum_stats(ctx.sp, &proxy::stats::read_repair_repaired_background);
sp::get_read_repair_repaired_background.set(r, [](std::unique_ptr<request> req) {
//TBD
unimplemented();
return make_ready_future<json::json_return_type>(0);
});
sp::get_schema_versions.set(r, [](std::unique_ptr<request> req) {

View File

@@ -43,29 +43,6 @@ static sstring validate_keyspace(http_context& ctx, const parameters& param) {
throw bad_param_exception("Keyspace " + param["keyspace"] + " Does not exist");
}
static std::vector<ss::token_range> describe_ring(const sstring& keyspace) {
std::vector<ss::token_range> res;
for (auto d : service::get_local_storage_service().describe_ring(keyspace)) {
ss::token_range r;
r.start_token = d._start_token;
r.end_token = d._end_token;
r.endpoints = d._endpoints;
r.rpc_endpoints = d._rpc_endpoints;
for (auto det : d._endpoint_details) {
ss::endpoint_detail ed;
ed.host = det._host;
ed.datacenter = det._datacenter;
if (det._rack != "") {
ed.rack = det._rack;
}
r.endpoint_details.push(ed);
}
res.push_back(r);
}
return res;
}
void set_storage_service(http_context& ctx, routes& r) {
ss::local_hostid.set(r, [](std::unique_ptr<request> req) {
return db::system_keyspace::get_local_host_id().then([](const utils::UUID& id) {
@@ -89,7 +66,7 @@ void set_storage_service(http_context& ctx, routes& r) {
});
ss::get_token_endpoint.set(r, [] (const_req req) {
auto token_to_ep = service::get_local_storage_service().get_token_to_endpoint_map();
auto token_to_ep = service::get_local_storage_service().get_token_metadata().get_token_to_endpoint();
std::vector<storage_service_json::mapper> res;
return map_to_key_value(token_to_ep, res);
});
@@ -148,13 +125,12 @@ void set_storage_service(http_context& ctx, routes& r) {
return make_ready_future<json::json_return_type>(res);
});
ss::describe_any_ring.set(r, [&ctx](const_req req) {
return describe_ring("");
});
ss::describe_ring.set(r, [&ctx](const_req req) {
auto keyspace = validate_keyspace(ctx, req.param);
return describe_ring(keyspace);
ss::describe_ring_jmx.set(r, [&ctx](std::unique_ptr<request> req) {
//TBD
unimplemented();
auto keyspace = validate_keyspace(ctx, req->param);
std::vector<sstring> res;
return make_ready_future<json::json_return_type>(res);
});
ss::get_host_id_map.set(r, [](const_req req) {
@@ -169,14 +145,8 @@ void set_storage_service(http_context& ctx, routes& r) {
ss::get_load_map.set(r, [] (std::unique_ptr<request> req) {
return service::get_local_storage_service().get_load_map().then([] (auto&& load_map) {
std::vector<ss::double_mapper> res;
for (auto i : load_map) {
ss::double_mapper val;
val.key = i.first;
val.value = i.second;
res.push_back(val);
}
return make_ready_future<json::json_return_type>(res);
std::vector<ss::mapper> res;
return make_ready_future<json::json_return_type>(map_to_key_value(load_map, res));
});
});
@@ -187,10 +157,15 @@ void set_storage_service(http_context& ctx, routes& r) {
});
});
ss::get_natural_endpoints.set(r, [&ctx](const_req req) {
auto keyspace = validate_keyspace(ctx, req.param);
return container_to_vec(service::get_local_storage_service().get_natural_endpoints(keyspace, req.get_query_param("cf"),
req.get_query_param("key")));
ss::get_natural_endpoints.set(r, [&ctx](std::unique_ptr<request> req) {
//TBD
unimplemented();
auto keyspace = validate_keyspace(ctx, req->param);
auto column_family = req->get_query_param("cf");
auto key = req->get_query_param("key");
std::vector<sstring> res;
return make_ready_future<json::json_return_type>(res);
});
ss::get_snapshot_details.set(r, [](std::unique_ptr<request> req) {
@@ -272,14 +247,10 @@ void set_storage_service(http_context& ctx, routes& r) {
ss::force_keyspace_cleanup.set(r, [&ctx](std::unique_ptr<request> req) {
//TBD
// FIXME
// nodetool cleanup is used in many tests;
// this workaround will let it work until
// cleanup is implemented
warn(unimplemented::cause::API);
unimplemented();
auto keyspace = validate_keyspace(ctx, req->param);
auto column_family = req->get_query_param("cf");
return make_ready_future<json::json_return_type>(0);
return make_ready_future<json::json_return_type>(json_void());
});
ss::scrub.set(r, [&ctx](std::unique_ptr<request> req) {
@@ -318,14 +289,18 @@ void set_storage_service(http_context& ctx, routes& r) {
ss::repair_async.set(r, [&ctx](std::unique_ptr<request> req) {
static std::vector<sstring> options = {"primaryRange", "parallelism", "incremental",
"jobThreads", "ranges", "columnFamilies", "dataCenters", "hosts", "trace"};
// Currently, we get all the repair options encoded in a single
// "options" option, and split it into a map using the "," and ":"
// delimiters. TODO: consider whether it makes more sense to just
// take all the query parameters as this map and pass it to the repair
// function.
std::unordered_map<sstring, sstring> options_map;
for (auto o : options) {
auto s = req->get_query_param(o);
if (s != "") {
options_map[o] = s;
for (auto s : split(req->get_query_param("options"), ",")) {
auto kv = split(s, ":");
if (kv.size() != 2) {
throw httpd::bad_param_exception("malformed async repair options");
}
options_map.emplace(std::move(kv[0]), std::move(kv[1]));
}
// The repair process is asynchronous: repair_start only starts it and
@@ -363,11 +338,11 @@ void set_storage_service(http_context& ctx, routes& r) {
});
});
ss::move.set(r, [] (std::unique_ptr<request> req) {
ss::move.set(r, [](std::unique_ptr<request> req) {
//TBD
unimplemented();
auto new_token = req->get_query_param("new_token");
return service::get_local_storage_service().move(new_token).then([] {
return make_ready_future<json::json_return_type>(json_void());
});
return make_ready_future<json::json_return_type>(json_void());
});
ss::remove_node.set(r, [](std::unique_ptr<request> req) {
@@ -417,18 +392,15 @@ void set_storage_service(http_context& ctx, routes& r) {
});
ss::get_drain_progress.set(r, [](std::unique_ptr<request> req) {
return service::get_storage_service().map_reduce(adder<service::storage_service::drain_progress>(), [] (auto& ss) {
return ss.get_drain_progress();
}).then([] (auto&& progress) {
auto progress_str = sprint("Drained %s/%s ColumnFamilies", progress.remaining_cfs, progress.total_cfs);
return make_ready_future<json::json_return_type>(std::move(progress_str));
});
//TBD
unimplemented();
return make_ready_future<json::json_return_type>("");
});
ss::drain.set(r, [](std::unique_ptr<request> req) {
return service::get_local_storage_service().drain().then([] {
return make_ready_future<json::json_return_type>(json_void());
});
//TBD
unimplemented();
return make_ready_future<json::json_return_type>(json_void());
});
ss::truncate.set(r, [&ctx](std::unique_ptr<request> req) {
//TBD
@@ -523,10 +495,8 @@ void set_storage_service(http_context& ctx, routes& r) {
});
});
ss::is_joined.set(r, [] (std::unique_ptr<request> req) {
return service::get_local_storage_service().is_joined().then([] (bool is_joined) {
return make_ready_future<json::json_return_type>(is_joined);
});
ss::is_joined.set(r, [](const_req req) {
return service::get_local_storage_service().is_joined();
});
ss::set_stream_throughput_mb_per_sec.set(r, [](std::unique_ptr<request> req) {
@@ -755,19 +725,17 @@ void set_storage_service(http_context& ctx, routes& r) {
return make_ready_future<json::json_return_type>(0);
});
ss::get_ownership.set(r, [] (std::unique_ptr<request> req) {
return service::get_local_storage_service().get_ownership().then([] (auto&& ownership) {
std::vector<storage_service_json::mapper> res;
return make_ready_future<json::json_return_type>(map_to_key_value(ownership, res));
});
ss::get_ownership.set(r, [](const_req req) {
auto tokens = service::get_local_storage_service().get_ownership();
std::vector<storage_service_json::mapper> res;
return map_to_key_value(tokens, res);
});
ss::get_effective_ownership.set(r, [&ctx] (std::unique_ptr<request> req) {
auto keyspace_name = req->param["keyspace"] == "null" ? "" : validate_keyspace(ctx, req->param);
return service::get_local_storage_service().effective_ownership(keyspace_name).then([] (auto&& ownership) {
std::vector<storage_service_json::mapper> res;
return make_ready_future<json::json_return_type>(map_to_key_value(ownership, res));
});
ss::get_effective_ownership.set(r, [&ctx](const_req req) {
auto tokens = service::get_local_storage_service().effective_ownership(
(req.param["keyspace"] == "null")? "" : validate_keyspace(ctx, req.param));
std::vector<storage_service_json::mapper> res;
return map_to_key_value(tokens, res);
});
}

View File

@@ -234,8 +234,6 @@ public:
friend std::ostream& operator<<(std::ostream& os, const atomic_cell& ac);
};
class collection_mutation_view;
// Represents a mutation of a collection. Actual format is determined by collection type,
// and is:
// set: list of atomic_cell
@@ -243,30 +241,20 @@ class collection_mutation_view;
// list: tbd, probably ugly
class collection_mutation {
public:
managed_bytes data;
collection_mutation() {}
collection_mutation(managed_bytes b) : data(std::move(b)) {}
collection_mutation(collection_mutation_view v);
operator collection_mutation_view() const;
struct view {
bytes_view data;
bytes_view serialize() const { return data; }
static view from_bytes(bytes_view v) { return { v }; }
};
struct one {
managed_bytes data;
one() {}
one(managed_bytes b) : data(std::move(b)) {}
one(view v) : data(v.data) {}
operator view() const { return { data }; }
};
};
class collection_mutation_view {
public:
bytes_view data;
bytes_view serialize() const { return data; }
static collection_mutation_view from_bytes(bytes_view v) { return { v }; }
};
inline
collection_mutation::collection_mutation(collection_mutation_view v)
: data(v.data) {
}
inline
collection_mutation::operator collection_mutation_view() const {
return { data };
}
namespace db {
template<typename T>
class serializer;
@@ -286,15 +274,15 @@ public:
atomic_cell_or_collection(atomic_cell ac) : _data(std::move(ac._data)) {}
static atomic_cell_or_collection from_atomic_cell(atomic_cell data) { return { std::move(data._data) }; }
atomic_cell_view as_atomic_cell() const { return atomic_cell_view::from_bytes(_data); }
atomic_cell_or_collection(collection_mutation cm) : _data(std::move(cm.data)) {}
atomic_cell_or_collection(collection_mutation::one cm) : _data(std::move(cm.data)) {}
explicit operator bool() const {
return !_data.empty();
}
static atomic_cell_or_collection from_collection_mutation(collection_mutation data) {
static atomic_cell_or_collection from_collection_mutation(collection_mutation::one data) {
return std::move(data.data);
}
collection_mutation_view as_collection_mutation() const {
return collection_mutation_view{_data};
collection_mutation::view as_collection_mutation() const {
return collection_mutation::view{_data};
}
bytes_view serialize() const {
return _data;
@@ -302,12 +290,6 @@ public:
bool operator==(const atomic_cell_or_collection& other) const {
return _data == other._data;
}
void linearize() {
_data.linearize();
}
void unlinearize() {
_data.scatter();
}
friend std::ostream& operator<<(std::ostream&, const atomic_cell_or_collection&);
};

View File

@@ -33,10 +33,8 @@
*
*/
class bytes_ostream {
public:
using size_type = bytes::size_type;
using value_type = bytes::value_type;
private:
static_assert(sizeof(value_type) == 1, "value_type is assumed to be one byte long");
struct chunk {
// FIXME: group fragment pointers to reduce pointer chasing when packetizing
@@ -119,13 +117,13 @@ private:
};
}
public:
bytes_ostream() noexcept
bytes_ostream()
: _begin()
, _current(nullptr)
, _size(0)
{ }
bytes_ostream(bytes_ostream&& o) noexcept
bytes_ostream(bytes_ostream&& o)
: _begin(std::move(o._begin))
, _current(o._current)
, _size(o._size)
@@ -150,7 +148,7 @@ public:
return *this;
}
bytes_ostream& operator=(bytes_ostream&& o) noexcept {
bytes_ostream& operator=(bytes_ostream&& o) {
_size = o._size;
_begin = std::move(o._begin);
_current = o._current;

View File

@@ -82,12 +82,6 @@ public:
}
return caching_options(k, r);
}
bool operator==(const caching_options& other) const {
return _key_cache == other._key_cache && _row_cache == other._row_cache;
}
bool operator!=(const caching_options& other) const {
return !(*this == other);
}
};

View File

@@ -63,18 +63,16 @@ public:
}
static compaction_strategy_type type(const sstring& name) {
auto pos = name.find("org.apache.cassandra.db.compaction.");
sstring short_name = (pos == sstring::npos) ? name : name.substr(pos + 35);
if (short_name == "NullCompactionStrategy") {
if (name == "NullCompactionStrategy") {
return compaction_strategy_type::null;
} else if (short_name == "MajorCompactionStrategy") {
} else if (name == "MajorCompactionStrategy") {
return compaction_strategy_type::major;
} else if (short_name == "SizeTieredCompactionStrategy") {
} else if (name == "SizeTieredCompactionStrategy") {
return compaction_strategy_type::size_tiered;
} else if (short_name == "LeveledCompactionStrategy") {
} else if (name == "LeveledCompactionStrategy") {
return compaction_strategy_type::leveled;
} else {
throw exceptions::configuration_exception(sprint("Unable to find compaction strategy class '%s'", name));
throw exceptions::configuration_exception(sprint("Unable to find compaction strategy class 'org.apache.cassandra.db.compaction.%s", name));
}
}

View File

@@ -68,7 +68,7 @@ public:
, _byte_order_equal(std::all_of(_types.begin(), _types.end(), [] (auto t) {
return t->is_byte_order_equal();
}))
, _byte_order_comparable(!is_prefixable && _types.size() == 1 && _types[0]->is_byte_order_comparable())
, _byte_order_comparable(_types.size() == 1 && _types[0]->is_byte_order_comparable())
, _is_reversed(_types.size() == 1 && _types[0]->is_reversed())
{ }
@@ -159,7 +159,7 @@ public:
}
return ::serialize_value(*this, values);
}
bytes serialize_value_deep(const std::vector<data_value>& values) {
bytes serialize_value_deep(const std::vector<boost::any>& values) {
// TODO: Optimize
std::vector<bytes> partial;
partial.reserve(values.size());
@@ -278,10 +278,10 @@ public:
});
}
bytes from_string(sstring_view s) {
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
throw std::runtime_error("not implemented");
}
sstring to_string(const bytes& b) {
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
throw std::runtime_error("not implemented");
}
// Returns true iff the given prefix has no missing components
bool is_full(bytes_view v) const {

View File

@@ -114,14 +114,6 @@ public:
}
return opts;
}
bool operator==(const compression_parameters& other) const {
return _compressor == other._compressor
&& _chunk_length == other._chunk_length
&& _crc_check_chance == other._crc_check_chance;
}
bool operator!=(const compression_parameters& other) const {
return !(*this == other);
}
private:
void validate_options(const std::map<sstring, sstring>& options) {
// currently, there are no options specific to a particular compressor

conf/cassandra-rackdc.properties Normal file → Executable file
View File

View File

@@ -409,16 +409,15 @@ partitioner: org.apache.cassandra.dht.Murmur3Partitioner
# offheap_objects: native memory, eliminating nio buffer heap overhead
# memtable_allocation_type: heap_buffers
# Total space to use for commitlogs.
# Total space to use for commitlogs. Since commitlog segments are
# mmapped, and hence use up address space, the default size is 32
# on 32-bit JVMs, and 8192 on 64-bit JVMs.
#
# If space gets above this value (it will round up to the next nearest
# segment multiple), Scylla will flush every dirty CF in the oldest
# segment and remove it. So a small total commitlog space will tend
# to cause more flush activity on less-active columnfamilies.
#
# A value of -1 (default) will automatically equate it to the total amount of memory
# available for Scylla.
commitlog_total_space_in_mb: -1
commitlog_total_space_in_mb: 8192
# This sets the amount of memtable flush writer threads. These will
# be blocked by disk io, and each one will hold a memtable in memory
@@ -782,25 +781,40 @@ commitlog_total_space_in_mb: -1
# the request scheduling. Currently the only valid option is keyspace.
# request_scheduler_id: keyspace
# Enable or disable inter-node encryption.
# You must also generate keys and provide the appropriate key and trust store locations and passwords.
# No custom encryption options are currently enabled. The available options are:
#
# Enable or disable inter-node encryption
# Default settings are TLS v1, RSA 1024-bit keys (it is imperative that
# users generate their own keys) TLS_RSA_WITH_AES_128_CBC_SHA as the cipher
# suite for authentication, key exchange and encryption of the actual data transfers.
# Use the DHE/ECDHE ciphers if running in FIPS 140 compliant mode.
# NOTE: No custom encryption options are enabled at the moment
# The available internode options are : all, none, dc, rack
# If set to dc scylla will encrypt the traffic between the DCs
# If set to rack scylla will encrypt the traffic between the racks
#
# If set to dc cassandra will encrypt the traffic between the DCs
# If set to rack cassandra will encrypt the traffic between the racks
#
# The passwords used in these options must match the passwords used when generating
# the keystore and truststore. For instructions on generating these files, see:
# http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore
#
# server_encryption_options:
# internode_encryption: none
# certificate: conf/scylla.crt
# keyfile: conf/scylla.key
# truststore: <none, use system trust>
# keystore: conf/.keystore
# keystore_password: cassandra
# truststore: conf/.truststore
# truststore_password: cassandra
# More advanced defaults below:
# protocol: TLS
# algorithm: SunX509
# store_type: JKS
# cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
# require_client_auth: false
# enable or disable client/server encryption.
# client_encryption_options:
# enabled: false
# certificate: conf/scylla.crt
# keyfile: conf/scylla.key
# keystore: conf/.keystore
# keystore_password: cassandra
# require_client_auth: false
# Set trustore and truststore_password if require_client_auth is true
@@ -824,17 +838,3 @@ commitlog_total_space_in_mb: -1
# reducing overhead from the TCP protocol itself, at the cost of increasing
# latency if you block for cross-datacenter responses.
# inter_dc_tcp_nodelay: false
# Relaxation of environment checks.
#
# Scylla places certain requirements on its environment. If these requirements are
# not met, performance and reliability can be degraded.
#
# These requirements include:
# - A filesystem with good support for asynchronous I/O (AIO). Currently,
# this means XFS.
#
# false: strict environment checks are in place; do not start if they are not met.
# true: relaxed environment checks; performance and reliability may degrade.
#
# developer_mode: false

View File

@@ -183,7 +183,6 @@ scylla_tests = [
'tests/managed_vector_test',
'tests/crc_test',
'tests/flush_queue_test',
'tests/dynamic_bitset_test',
]
apps = [
@@ -281,8 +280,6 @@ scylla_core = (['database.cc',
'cql3/statements/schema_altering_statement.cc',
'cql3/statements/ks_prop_defs.cc',
'cql3/statements/modification_statement.cc',
'cql3/statements/parsed_statement.cc',
'cql3/statements/property_definitions.cc',
'cql3/statements/update_statement.cc',
'cql3/statements/delete_statement.cc',
'cql3/statements/batch_statement.cc',
@@ -342,7 +339,6 @@ scylla_core = (['database.cc',
'utils/rate_limiter.cc',
'utils/compaction_manager.cc',
'utils/file_lock.cc',
'utils/dynamic_bitset.cc',
'gms/version_generator.cc',
'gms/versioned_value.cc',
'gms/gossiper.cc',
@@ -378,8 +374,6 @@ scylla_core = (['database.cc',
'service/storage_service.cc',
'service/pending_range_calculator_service.cc',
'service/load_broadcaster.cc',
'service/pager/paging_state.cc',
'service/pager/query_pagers.cc',
'streaming/streaming.cc',
'streaming/stream_task.cc',
'streaming/stream_session.cc',
@@ -400,7 +394,6 @@ scylla_core = (['database.cc',
'streaming/messages/file_message_header.cc',
'streaming/messages/outgoing_file_message.cc',
'streaming/messages/incoming_file_message.cc',
'streaming/stream_session_state.cc',
'gc_clock.cc',
'partition_slice_builder.cc',
'init.cc',
@@ -486,7 +479,6 @@ tests_not_using_seastar_test_framework = set([
'tests/crc_test',
'tests/perf/perf_sstable',
'tests/managed_vector_test',
'tests/dynamic_bitset_test',
])
for t in tests_not_using_seastar_test_framework:
@@ -503,7 +495,7 @@ deps['tests/sstable_test'] += ['tests/sstable_datafile_test.cc']
deps['tests/bytes_ostream_test'] = ['tests/bytes_ostream_test.cc']
deps['tests/UUID_test'] = ['utils/UUID_gen.cc', 'tests/UUID_test.cc']
deps['tests/murmur_hash_test'] = ['bytes.cc', 'utils/murmur_hash.cc', 'tests/murmur_hash_test.cc']
deps['tests/allocation_strategy_test'] = ['tests/allocation_strategy_test.cc', 'utils/logalloc.cc', 'log.cc', 'utils/dynamic_bitset.cc']
deps['tests/allocation_strategy_test'] = ['tests/allocation_strategy_test.cc', 'utils/logalloc.cc', 'log.cc']
warnings = [
'-Wno-mismatched-tags', # clang-only


@@ -856,7 +856,7 @@ dropIndexStatement returns [DropIndexStatement expr]
* TRUNCATE <CF>;
*/
truncateStatement returns [::shared_ptr<truncate_statement> stmt]
: K_TRUNCATE (K_COLUMNFAMILY)? cf=columnFamilyName { $stmt = ::make_shared<truncate_statement>(cf); }
: K_TRUNCATE cf=columnFamilyName { $stmt = ::make_shared<truncate_statement>(cf); }
;
#if 0


@@ -80,7 +80,7 @@ int64_t attributes::get_timestamp(int64_t now, const query_options& options) {
} catch (marshal_exception e) {
throw exceptions::invalid_request_exception("Invalid timestamp value");
}
return value_cast<int64_t>(data_type_for<int64_t>()->deserialize(*tval));
return boost::any_cast<int64_t>(data_type_for<int64_t>()->deserialize(*tval));
}
int32_t attributes::get_time_to_live(const query_options& options) {
@@ -99,7 +99,7 @@ int32_t attributes::get_time_to_live(const query_options& options) {
throw exceptions::invalid_request_exception("Invalid TTL value");
}
auto ttl = value_cast<int32_t>(data_type_for<int32_t>()->deserialize(*tval));
auto ttl = boost::any_cast<int32_t>(data_type_for<int32_t>()->deserialize(*tval));
if (ttl < 0) {
throw exceptions::invalid_request_exception("A TTL must be greater or equal to 0");
}


@@ -55,11 +55,14 @@ namespace cql3 {
* Represents an identifier for a CQL column definition.
* TODO : should support light-weight mode without text representation for when not interned
*/
class column_identifier final : public selection::selectable {
class column_identifier final : public selection::selectable /* implements IMeasurableMemory*/ {
public:
bytes bytes_;
private:
sstring _text;
#if 0
private static final long EMPTY_SIZE = ObjectSizes.measure(new ColumnIdentifier("", true));
#endif
public:
column_identifier(sstring raw_text, bool keep_case);
@@ -80,6 +83,20 @@ public:
}
#if 0
public long unsharedHeapSize()
{
return EMPTY_SIZE
+ ObjectSizes.sizeOnHeapOf(bytes)
+ ObjectSizes.sizeOf(text);
}
public long unsharedHeapSizeExcludingData()
{
return EMPTY_SIZE
+ ObjectSizes.sizeOnHeapExcludingData(bytes)
+ ObjectSizes.sizeOf(text);
}
public ColumnIdentifier clone(AbstractAllocator allocator)
{
return new ColumnIdentifier(allocator.clone(bytes), text);


@@ -160,7 +160,7 @@ void constants::deleter::execute(mutation& m, const exploded_clustering_prefix&
auto ctype = static_pointer_cast<const collection_type_impl>(column.type);
m.set_cell(prefix, column, atomic_cell_or_collection::from_collection_mutation(ctype->serialize_mutation_form(coll_m)));
} else {
m.set_cell(prefix, column, make_dead_cell(params));
m.set_cell(prefix, column, params.make_dead_cell());
}
}


@@ -197,7 +197,7 @@ public:
virtual void execute(mutation& m, const exploded_clustering_prefix& prefix, const update_parameters& params) override {
auto value = _t->bind_and_get(params._options);
auto cell = value ? make_cell(*value, params) : make_dead_cell(params);
auto cell = value ? params.make_cell(*value) : params.make_dead_cell();
m.set_cell(prefix, column, std::move(cell));
}
};


@@ -90,7 +90,7 @@ public:
if (!values[0]) {
return;
}
_sum += value_cast<Type>(data_type_for<Type>()->deserialize(*values[0]));
_sum += boost::any_cast<Type>(data_type_for<Type>()->deserialize(*values[0]));
}
};
@@ -132,7 +132,7 @@ public:
return;
}
++_count;
_sum += value_cast<Type>(data_type_for<Type>()->deserialize(*values[0]));
_sum += boost::any_cast<Type>(data_type_for<Type>()->deserialize(*values[0]));
}
};
@@ -169,7 +169,7 @@ public:
if (!values[0]) {
return;
}
auto val = value_cast<Type>(data_type_for<Type>()->deserialize(*values[0]));
auto val = boost::any_cast<Type>(data_type_for<Type>()->deserialize(*values[0]));
if (!_max) {
_max = val;
} else {
@@ -216,7 +216,7 @@ public:
if (!values[0]) {
return;
}
auto val = value_cast<Type>(data_type_for<Type>()->deserialize(*values[0]));
auto val = boost::any_cast<Type>(data_type_for<Type>()->deserialize(*values[0]));
if (!_min) {
_min = val;
} else {


@@ -50,11 +50,6 @@ functions::init() {
if (type == cql3_type::varchar || type == cql3_type::blob) {
continue;
}
// counters are not supported yet
if (type->is_counter()) {
warn(unimplemented::cause::COUNTERS);
continue;
}
declare(make_to_blob_function(type->get_type()));
declare(make_from_blob_function(type->get_type()));


@@ -71,10 +71,10 @@ make_min_timeuuid_fct() {
return {};
}
auto ts_obj = timestamp_type->deserialize(*bb);
if (ts_obj.is_null()) {
if (ts_obj.empty()) {
return {};
}
auto ts = value_cast<db_clock::time_point>(ts_obj);
auto ts = boost::any_cast<db_clock::time_point>(ts_obj);
auto uuid = utils::UUID_gen::min_time_UUID(ts.time_since_epoch().count());
return {timeuuid_type->decompose(uuid)};
});
@@ -91,10 +91,10 @@ make_max_timeuuid_fct() {
return {};
}
auto ts_obj = timestamp_type->deserialize(*bb);
if (ts_obj.is_null()) {
if (ts_obj.empty()) {
return {};
}
auto ts = value_cast<db_clock::time_point>(ts_obj);
auto ts = boost::any_cast<db_clock::time_point>(ts_obj);
auto uuid = utils::UUID_gen::max_time_UUID(ts.time_since_epoch().count());
return {timeuuid_type->decompose(uuid)};
});


@@ -54,7 +54,7 @@ shared_ptr<function>
make_uuid_fct() {
return make_native_scalar_function<false>("uuid", uuid_type, {},
[] (serialization_format sf, const std::vector<bytes_opt>& parameters) -> bytes_opt {
return {uuid_type->decompose(utils::make_random_uuid())};
return {uuid_type->decompose(boost::any(utils::make_random_uuid()))};
});
}


@@ -113,12 +113,12 @@ lists::value::from_serialized(bytes_view v, list_type type, serialization_format
// Collections have this small hack that validate cannot be called on a serialized object,
// but compose does the validation (so we're fine).
// FIXME: deserializeForNativeProtocol()?!
auto l = value_cast<list_type_impl::native_type>(type->deserialize(v, sf));
auto l = boost::any_cast<list_type_impl::native_type>(type->deserialize(v, sf));
std::vector<bytes_opt> elements;
elements.reserve(l.size());
for (auto&& element : l) {
// elements can be null in lists that represent a set of IN values
elements.push_back(element.is_null() ? bytes_opt() : bytes_opt(type->get_elements_type()->decompose(element)));
elements.push_back(element.empty() ? bytes_opt() : bytes_opt(type->get_elements_type()->decompose(element)));
}
return value(std::move(elements));
} catch (marshal_exception& e) {
@@ -274,7 +274,7 @@ lists::setter_by_index::execute(mutation& m, const exploded_clustering_prefix& p
if (!existing_list_opt) {
throw exceptions::invalid_request_exception("Attempted to set an element on a list which is null");
}
collection_mutation_view existing_list_ser = *existing_list_opt;
collection_mutation::view existing_list_ser = *existing_list_opt;
auto ltype = dynamic_pointer_cast<const list_type_impl>(column.type);
collection_type_impl::mutation_view existing_list = ltype->deserialize_mutation_form(existing_list_ser);
// we verified that index is an int32_type
@@ -339,7 +339,7 @@ lists::do_append(shared_ptr<term> t,
} else {
auto&& to_add = list_value->_elements;
auto deref = [] (const bytes_opt& v) { return *v; };
auto&& newv = collection_mutation{list_type_impl::pack(
auto&& newv = collection_mutation::one{list_type_impl::pack(
boost::make_transform_iterator(to_add.begin(), deref),
boost::make_transform_iterator(to_add.end(), deref),
to_add.size(), serialization_format::internal())};


@@ -114,26 +114,30 @@ maps::literal::validate_assignable_to(database& db, const sstring& keyspace, col
assignment_testable::test_result
maps::literal::test_assignment(database& db, const sstring& keyspace, ::shared_ptr<column_specification> receiver) {
if (!dynamic_pointer_cast<const map_type_impl>(receiver->type)) {
return assignment_testable::test_result::NOT_ASSIGNABLE;
}
throw std::runtime_error("not implemented");
#if 0
if (!(receiver.type instanceof MapType))
return AssignmentTestable.TestResult.NOT_ASSIGNABLE;
// If there are no elements, we can't say it's an exact match (an empty map is fundamentally polymorphic).
if (entries.empty()) {
return assignment_testable::test_result::WEAKLY_ASSIGNABLE;
}
auto key_spec = maps::key_spec_of(*receiver);
auto value_spec = maps::value_spec_of(*receiver);
if (entries.isEmpty())
return AssignmentTestable.TestResult.WEAKLY_ASSIGNABLE;
ColumnSpecification keySpec = Maps.keySpecOf(receiver);
ColumnSpecification valueSpec = Maps.valueSpecOf(receiver);
// It's an exact match if all are exact match, but is not assignable as soon as any is non assignable.
auto res = assignment_testable::test_result::EXACT_MATCH;
for (auto entry : entries) {
auto t1 = entry.first->test_assignment(db, keyspace, key_spec);
auto t2 = entry.second->test_assignment(db, keyspace, value_spec);
if (t1 == assignment_testable::test_result::NOT_ASSIGNABLE || t2 == assignment_testable::test_result::NOT_ASSIGNABLE)
return assignment_testable::test_result::NOT_ASSIGNABLE;
if (t1 != assignment_testable::test_result::EXACT_MATCH || t2 != assignment_testable::test_result::EXACT_MATCH)
res = assignment_testable::test_result::WEAKLY_ASSIGNABLE;
AssignmentTestable.TestResult res = AssignmentTestable.TestResult.EXACT_MATCH;
for (Pair<Term.Raw, Term.Raw> entry : entries)
{
AssignmentTestable.TestResult t1 = entry.left.testAssignment(keyspace, keySpec);
AssignmentTestable.TestResult t2 = entry.right.testAssignment(keyspace, valueSpec);
if (t1 == AssignmentTestable.TestResult.NOT_ASSIGNABLE || t2 == AssignmentTestable.TestResult.NOT_ASSIGNABLE)
return AssignmentTestable.TestResult.NOT_ASSIGNABLE;
if (t1 != AssignmentTestable.TestResult.EXACT_MATCH || t2 != AssignmentTestable.TestResult.EXACT_MATCH)
res = AssignmentTestable.TestResult.WEAKLY_ASSIGNABLE;
}
return res;
#endif
}
sstring
@@ -157,7 +161,7 @@ maps::value::from_serialized(bytes_view value, map_type type, serialization_form
// Collections have this small hack that validate cannot be called on a serialized object,
// but compose does the validation (so we're fine).
// FIXME: deserialize_for_native_protocol?!
auto m = value_cast<map_type_impl::native_type>(type->deserialize(value, sf));
auto m = boost::any_cast<map_type_impl::native_type>(type->deserialize(value, sf));
std::map<bytes, bytes, serialized_compare> map(type->get_keys_type()->as_less_comparator());
for (auto&& e : m) {
map.emplace(type->get_keys_type()->decompose(e.first),
@@ -346,8 +350,10 @@ maps::discarder_by_key::execute(mutation& m, const exploded_clustering_prefix& p
if (!key) {
throw exceptions::invalid_request_exception("Invalid null map key");
}
auto ckey = dynamic_pointer_cast<constants::value>(std::move(key));
assert(ckey);
collection_type_impl::mutation mut;
mut.cells.emplace_back(*key->get(params._options), params.make_dead_cell());
mut.cells.emplace_back(*ckey->_bytes, params.make_dead_cell());
auto mtype = static_cast<const map_type_impl*>(column.type.get());
m.set_cell(prefix, column, mtype->serialize_mutation_form(mut));
}


@@ -216,7 +216,7 @@ operation::element_deletion::prepare(database& db, const sstring& keyspace, cons
return make_shared<lists::discarder_by_index>(receiver, std::move(idx));
} else if (&ctype->_kind == &collection_type_impl::kind::set) {
auto&& elt = _element->prepare(db, keyspace, sets::value_spec_of(receiver.column_specification));
return make_shared<sets::element_discarder>(receiver, std::move(elt));
return make_shared<sets::discarder>(receiver, std::move(elt));
} else if (&ctype->_kind == &collection_type_impl::kind::map) {
auto&& key = _element->prepare(db, keyspace, maps::key_spec_of(*receiver.column_specification));
return make_shared<maps::discarder_by_key>(receiver, std::move(key));


@@ -45,7 +45,6 @@
#include "exceptions/exceptions.hh"
#include "database_fwd.hh"
#include "term.hh"
#include "update_parameters.hh"
#include <experimental/optional>
@@ -87,14 +86,6 @@ public:
virtual ~operation() {}
atomic_cell make_dead_cell(const update_parameters& params) const {
return params.make_dead_cell();
}
atomic_cell make_cell(bytes_view value, const update_parameters& params) const {
return params.make_cell(value);
}
virtual bool uses_function(const sstring& ks_name, const sstring& function_name) const {
return _t && _t->uses_function(ks_name, function_name);
}
@@ -199,7 +190,13 @@ public:
}
virtual shared_ptr<operation> prepare(database& db, const sstring& keyspace, const column_definition& receiver);
#if 0
protected String toString(ColumnSpecification column)
{
return String.format("%s[%s] = %s", column.name, selector, value);
}
#endif
virtual bool is_compatible_with(shared_ptr<raw_update> other) override;
};
@@ -212,6 +209,13 @@ public:
virtual shared_ptr<operation> prepare(database& db, const sstring& keyspace, const column_definition& receiver) override;
#if 0
protected String toString(ColumnSpecification column)
{
return String.format("%s = %s + %s", column.name, column.name, value);
}
#endif
virtual bool is_compatible_with(shared_ptr<raw_update> other) override;
};
@@ -224,6 +228,13 @@ public:
virtual shared_ptr<operation> prepare(database& db, const sstring& keyspace, const column_definition& receiver) override;
#if 0
protected String toString(ColumnSpecification column)
{
return String.format("%s = %s - %s", column.name, column.name, value);
}
#endif
virtual bool is_compatible_with(shared_ptr<raw_update> other) override;
};
@@ -236,6 +247,12 @@ public:
virtual shared_ptr<operation> prepare(database& db, const sstring& keyspace, const column_definition& receiver) override;
#if 0
protected String toString(ColumnSpecification column)
{
return String.format("%s = %s - %s", column.name, value, column.name);
}
#endif
virtual bool is_compatible_with(shared_ptr<raw_update> other) override;
};


@@ -178,7 +178,7 @@ query_processor::prepare(const std::experimental::string_view& query_string, con
query_processor::get_stored_prepared_statement(const std::experimental::string_view& query_string, const sstring& keyspace, bool for_thrift)
{
if (for_thrift) {
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
throw std::runtime_error("not implemented");
#if 0
Integer thriftStatementId = computeThriftId(queryString, keyspace);
ParsedStatement.Prepared existing = thriftPreparedStatements.get(thriftStatementId);
@@ -209,7 +209,7 @@ query_processor::store_prepared_statement(const std::experimental::string_view&
MAX_CACHE_PREPARED_MEMORY));
#endif
if (for_thrift) {
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
throw std::runtime_error("not implemented");
#if 0
Integer statementId = computeThriftId(queryString, keyspace);
thriftPreparedStatements.put(statementId, prepared);
@@ -300,7 +300,7 @@ query_processor::parse_statement(const sstring_view& query)
query_options query_processor::make_internal_options(
::shared_ptr<statements::parsed_statement::prepared> p,
const std::initializer_list<data_value>& values) {
const std::initializer_list<boost::any>& values) {
if (p->bound_names.size() != values.size()) {
throw std::invalid_argument(sprint("Invalid number of values. Expecting %d but got %d", p->bound_names.size(), values.size()));
}
@@ -308,9 +308,9 @@ query_options query_processor::make_internal_options(
std::vector<bytes_opt> bound_values;
for (auto& v : values) {
auto& n = *ni++;
if (v.type() == bytes_type) {
bound_values.push_back({value_cast<bytes>(v)});
} else if (v.is_null()) {
if (v.type() == typeid(bytes)) {
bound_values.push_back({boost::any_cast<bytes>(v)});
} else if (v.empty()) {
bound_values.push_back({});
} else {
bound_values.push_back({n->type->decompose(v)});
@@ -333,10 +333,7 @@ query_options query_processor::make_internal_options(
future<::shared_ptr<untyped_result_set>> query_processor::execute_internal(
const std::experimental::string_view& query_string,
const std::initializer_list<data_value>& values) {
if (log.is_enabled(logging::log_level::trace)) {
log.trace("execute_internal: \"{}\" ({})", query_string, ::join(", ", values));
}
const std::initializer_list<boost::any>& values) {
auto p = prepare_internal(query_string);
auto opts = make_internal_options(p, values);
return do_with(std::move(opts),


@@ -323,12 +323,12 @@ public:
#endif
private:
::shared_ptr<statements::parsed_statement::prepared> prepare_internal(const std::experimental::string_view& query);
query_options make_internal_options(::shared_ptr<statements::parsed_statement::prepared>, const std::initializer_list<data_value>&);
query_options make_internal_options(::shared_ptr<statements::parsed_statement::prepared>, const std::initializer_list<boost::any>&);
public:
future<::shared_ptr<untyped_result_set>> execute_internal(
const std::experimental::string_view& query_string,
const std::initializer_list<data_value>& = { });
const std::initializer_list<boost::any>& = { });
/*
* This function provides a timestamp that is guaranteed to be higher than any timestamp


@@ -374,7 +374,7 @@ public:
}
virtual std::vector<bytes_opt> bounds(statements::bound b, const query_options& options) const override {
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
throw std::runtime_error("not implemented");
#if 0
return Composites.toByteBuffers(boundsAsComposites(b, options));
#endif


@@ -41,13 +41,13 @@ public:
::shared_ptr<primary_key_restrictions<T>> do_merge_to(schema_ptr schema, ::shared_ptr<restriction> restriction) const {
if (restriction->is_multi_column()) {
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
throw std::runtime_error("not implemented");
}
return ::make_shared<single_column_primary_key_restrictions<T>>(schema)->merge_to(schema, restriction);
}
::shared_ptr<primary_key_restrictions<T>> merge_to(schema_ptr schema, ::shared_ptr<restriction> restriction) override {
if (restriction->is_multi_column()) {
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
throw std::runtime_error("not implemented");
}
if (restriction->is_on_token()) {
return static_pointer_cast<token_restriction>(restriction);


@@ -80,7 +80,7 @@ public:
private:
const uint32_t _column_count;
::shared_ptr<const service::pager::paging_state> _paging_state;
::shared_ptr<service::pager::paging_state> _paging_state;
public:
metadata(std::vector<::shared_ptr<column_specification>> names_)
@@ -88,7 +88,7 @@ public:
{ }
metadata(flag_enum_set flags, std::vector<::shared_ptr<column_specification>> names_, uint32_t column_count,
::shared_ptr<const service::pager::paging_state> paging_state)
::shared_ptr<service::pager::paging_state> paging_state)
: _flags(flags)
, names(std::move(names_))
, _column_count(column_count)
@@ -121,7 +121,7 @@ private:
}
public:
void set_has_more_pages(::shared_ptr<const service::pager::paging_state> paging_state) {
void set_has_more_pages(::shared_ptr<service::pager::paging_state> paging_state) {
if (!paging_state) {
return;
}
@@ -342,10 +342,6 @@ public:
std::sort(_rows.begin(), _rows.end(), std::forward<RowComparator>(cmp));
}
metadata& get_metadata() {
return *_metadata;
}
const metadata& get_metadata() const {
return *_metadata;
}


@@ -125,7 +125,7 @@ protected:
}
};
std::unique_ptr<selectors> new_selectors() const override {
std::unique_ptr<selectors> new_selectors() {
return std::make_unique<simple_selectors>();
}
};
@@ -196,7 +196,7 @@ protected:
}
};
std::unique_ptr<selectors> new_selectors() const override {
std::unique_ptr<selectors> new_selectors() {
return std::make_unique<selectors_with_processing>(_factories);
}
};
@@ -252,7 +252,7 @@ selection::collect_metadata(schema_ptr schema, const std::vector<::shared_ptr<ra
return r;
}
result_set_builder::result_set_builder(const selection& s, db_clock::time_point now, serialization_format sf)
result_set_builder::result_set_builder(selection& s, db_clock::time_point now, serialization_format sf)
: _result_set(std::make_unique<result_set>(::make_shared<metadata>(*(s.get_result_metadata()))))
, _selectors(s.new_selectors())
, _now(now)
@@ -295,7 +295,7 @@ void result_set_builder::add(const column_definition& def, const query::result_a
}
}
void result_set_builder::add(const column_definition& def, collection_mutation_view c) {
void result_set_builder::add(const column_definition& def, collection_mutation::view c) {
auto&& ctype = static_cast<const collection_type_impl*>(def.type.get());
current->emplace_back(ctype->to_value(c, _serialization_format));
// timestamps, ttls meaningless for collections
@@ -330,98 +330,6 @@ std::unique_ptr<result_set> result_set_builder::build() {
return std::move(_result_set);
}
result_set_builder::visitor::visitor(
cql3::selection::result_set_builder& builder, const schema& s,
const selection& selection)
: _builder(builder), _schema(s), _selection(selection), _row_count(0) {
}
void result_set_builder::visitor::add_value(const column_definition& def,
query::result_row_view::iterator_type& i) {
if (def.type->is_multi_cell()) {
auto cell = i.next_collection_cell();
if (!cell) {
_builder.add_empty();
return;
}
_builder.add(def, *cell);
} else {
auto cell = i.next_atomic_cell();
if (!cell) {
_builder.add_empty();
return;
}
_builder.add(def, *cell);
}
}
void result_set_builder::visitor::accept_new_partition(const partition_key& key,
uint32_t row_count) {
_partition_key = key.explode(_schema);
_row_count = row_count;
}
void result_set_builder::visitor::accept_new_partition(uint32_t row_count) {
_row_count = row_count;
}
void result_set_builder::visitor::accept_new_row(const clustering_key& key,
const query::result_row_view& static_row,
const query::result_row_view& row) {
_clustering_key = key.explode(_schema);
accept_new_row(static_row, row);
}
void result_set_builder::visitor::accept_new_row(
const query::result_row_view& static_row,
const query::result_row_view& row) {
auto static_row_iterator = static_row.iterator();
auto row_iterator = row.iterator();
_builder.new_row();
for (auto&& def : _selection.get_columns()) {
switch (def->kind) {
case column_kind::partition_key:
_builder.add(_partition_key[def->component_index()]);
break;
case column_kind::clustering_key:
if (_clustering_key.size() > def->component_index()) {
_builder.add(_clustering_key[def->component_index()]);
} else {
_builder.add({});
}
break;
case column_kind::regular_column:
add_value(*def, row_iterator);
break;
case column_kind::compact_column:
add_value(*def, row_iterator);
break;
case column_kind::static_column:
add_value(*def, static_row_iterator);
break;
default:
assert(0);
}
}
}
void result_set_builder::visitor::accept_partition_end(
const query::result_row_view& static_row) {
if (_row_count == 0) {
_builder.new_row();
auto static_row_iterator = static_row.iterator();
for (auto&& def : _selection.get_columns()) {
if (def->is_partition_key()) {
_builder.add(_partition_key[def->component_index()]);
} else if (def->is_static()) {
add_value(*def, static_row_iterator);
} else {
_builder.add_empty();
}
}
}
}
api::timestamp_type result_set_builder::timestamp_of(size_t idx) {
return _timestamps[idx];
}


@@ -161,7 +161,7 @@ public:
return std::find(_columns.begin(), _columns.end(), &def) != _columns.end();
}
::shared_ptr<metadata> get_result_metadata() const {
::shared_ptr<metadata> get_result_metadata() {
return _metadata;
}
@@ -186,16 +186,16 @@ private:
public:
static ::shared_ptr<selection> from_selectors(database& db, schema_ptr schema, const std::vector<::shared_ptr<raw_selector>>& raw_selectors);
virtual std::unique_ptr<selectors> new_selectors() const = 0;
virtual std::unique_ptr<selectors> new_selectors() = 0;
/**
* Returns a range of CQL3 columns this selection needs.
*/
auto const& get_columns() const {
auto const& get_columns() {
return _columns;
}
uint32_t get_column_count() const {
uint32_t get_column_count() {
return _columns.size();
}
@@ -238,39 +238,15 @@ private:
const db_clock::time_point _now;
serialization_format _serialization_format;
public:
result_set_builder(const selection& s, db_clock::time_point now, serialization_format sf);
result_set_builder(selection& s, db_clock::time_point now, serialization_format sf);
void add_empty();
void add(bytes_opt value);
void add(const column_definition& def, const query::result_atomic_cell_view& c);
void add(const column_definition& def, collection_mutation_view c);
void add(const column_definition& def, collection_mutation::view c);
void new_row();
std::unique_ptr<result_set> build();
api::timestamp_type timestamp_of(size_t idx);
int32_t ttl_of(size_t idx);
// Implements ResultVisitor concept from query.hh
class visitor {
protected:
result_set_builder& _builder;
const schema& _schema;
const selection& _selection;
uint32_t _row_count;
std::vector<bytes> _partition_key;
std::vector<bytes> _clustering_key;
public:
visitor(cql3::selection::result_set_builder& builder, const schema& s, const selection&);
visitor(visitor&&) = default;
void add_value(const column_definition& def, query::result_row_view::iterator_type& i);
void accept_new_partition(const partition_key& key, uint32_t row_count);
void accept_new_partition(uint32_t row_count);
void accept_new_row(const clustering_key& key,
const query::result_row_view& static_row,
const query::result_row_view& row);
void accept_new_row(const query::result_row_view& static_row,
const query::result_row_view& row);
void accept_partition_end(const query::result_row_view& static_row);
};
private:
bytes_opt get_value(data_type t, query::result_atomic_cell_view c);
};


@@ -125,7 +125,7 @@ sets::value::from_serialized(bytes_view v, set_type type, serialization_format s
// Collections have this small hack that validate cannot be called on a serialized object,
// but compose does the validation (so we're fine).
// FIXME: deserializeForNativeProtocol?!
auto s = value_cast<set_type_impl::native_type>(type->deserialize(v, sf));
auto s = boost::any_cast<set_type_impl::native_type>(type->deserialize(v, sf));
std::set<bytes, serialized_compare> elements(type->get_elements_type()->as_less_comparator());
for (auto&& element : s) {
elements.insert(elements.end(), type->get_elements_type()->decompose(element));
@@ -284,11 +284,16 @@ sets::discarder::execute(mutation& m, const exploded_clustering_prefix& row_key,
auto kill = [&] (bytes idx) {
mut.cells.push_back({std::move(idx), params.make_dead_cell()});
};
auto svalue = dynamic_pointer_cast<sets::value>(value);
assert(svalue);
mut.cells.reserve(svalue->_elements.size());
for (auto&& e : svalue->_elements) {
kill(e);
// This can be either a set or a single element
auto cvalue = dynamic_pointer_cast<constants::value>(value);
if (cvalue) {
kill(cvalue->_bytes ? *cvalue->_bytes : bytes());
} else {
auto svalue = static_pointer_cast<sets::value>(value);
mut.cells.reserve(svalue->_elements.size());
for (auto&& e : svalue->_elements) {
kill(e);
}
}
auto ctype = static_pointer_cast<const collection_type_impl>(column.type);
m.set_cell(row_key, column,
@@ -296,17 +301,4 @@ sets::discarder::execute(mutation& m, const exploded_clustering_prefix& row_key,
ctype->serialize_mutation_form(mut)));
}
void sets::element_discarder::execute(mutation& m, const exploded_clustering_prefix& row_key, const update_parameters& params)
{
assert(column.type->is_multi_cell() && "Attempted to remove items from a frozen set");
auto elt = _t->bind(params._options);
if (!elt) {
throw exceptions::invalid_request_exception("Invalid null set element");
}
collection_type_impl::mutation mut;
mut.cells.emplace_back(*elt->get(params._options), params.make_dead_cell());
auto ctype = static_pointer_cast<const collection_type_impl>(column.type);
m.set_cell(row_key, column, ctype->serialize_mutation_form(mut));
}
}


@@ -133,13 +133,6 @@ public:
}
virtual void execute(mutation& m, const exploded_clustering_prefix& row_key, const update_parameters& params) override;
};
class element_discarder : public operation {
public:
element_discarder(const column_definition& column, shared_ptr<term> t)
: operation(column, std::move(t)) { }
virtual void execute(mutation& m, const exploded_clustering_prefix& row_key, const update_parameters& params) override;
};
};
}


@@ -159,7 +159,7 @@ protected:
virtual shared_ptr<restrictions::restriction> new_contains_restriction(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names,
bool is_key) override {
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
throw std::runtime_error("not implemented");
#if 0
ColumnDefinition columnDef = toColumnDefinition(schema, entity);
Term term = toTerm(toReceivers(schema, columnDef), value, schema.ksName, bound_names);


@@ -322,7 +322,7 @@ public:
virtual future<shared_ptr<transport::messages::result_message>> execute_internal(
distributed<service::storage_proxy>& proxy,
service::query_state& query_state, const query_options& options) override {
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
throw "not implemented";
#if 0
assert !hasConditions;
for (IMutation mutation : getMutations(BatchQueryOptions.withoutPerStatementVariables(options), true, queryState.getTimestamp()))


@@ -45,14 +45,6 @@ namespace cql3 {
namespace statements {
delete_statement::delete_statement(statement_type type, uint32_t bound_terms, schema_ptr s, std::unique_ptr<attributes> attrs)
: modification_statement{type, bound_terms, std::move(s), std::move(attrs)}
{ }
bool delete_statement::require_full_clustering_key() const {
return false;
}
void delete_statement::add_update_for_key(mutation& m, const exploded_clustering_prefix& prefix, const update_parameters& params) {
if (_column_operations.empty()) {
m.partition().apply_delete(*s, prefix, params.make_tombstone());
@@ -104,17 +96,5 @@ delete_statement::parsed::prepare_internal(database& db, schema_ptr schema, ::sh
return stmt;
}
delete_statement::parsed::parsed(::shared_ptr<cf_name> name,
::shared_ptr<attributes::raw> attrs,
std::vector<::shared_ptr<operation::raw_deletion>> deletions,
std::vector<::shared_ptr<relation>> where_clause,
conditions_vector conditions,
bool if_exists)
: modification_statement::parsed(std::move(name), std::move(attrs), std::move(conditions), false, if_exists)
, _deletions(std::move(deletions))
, _where_clause(std::move(where_clause))
{ }
}
}


@@ -55,9 +55,13 @@ namespace statements {
*/
class delete_statement : public modification_statement {
public:
delete_statement(statement_type type, uint32_t bound_terms, schema_ptr s, std::unique_ptr<attributes> attrs);
delete_statement(statement_type type, uint32_t bound_terms, schema_ptr s, std::unique_ptr<attributes> attrs)
: modification_statement{type, bound_terms, std::move(s), std::move(attrs)}
{ }
virtual bool require_full_clustering_key() const override;
virtual bool require_full_clustering_key() const override {
return false;
}
virtual void add_update_for_key(mutation& m, const exploded_clustering_prefix& prefix, const update_parameters& params) override;
@@ -90,7 +94,11 @@ public:
std::vector<::shared_ptr<operation::raw_deletion>> deletions,
std::vector<::shared_ptr<relation>> where_clause,
conditions_vector conditions,
bool if_exists);
bool if_exists)
: modification_statement::parsed(std::move(name), std::move(attrs), std::move(conditions), false, if_exists)
, _deletions(std::move(deletions))
, _where_clause(std::move(where_clause))
{ }
protected:
virtual ::shared_ptr<modification_statement> prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs);


@@ -71,81 +71,6 @@ operator<<(std::ostream& out, modification_statement::statement_type t) {
return out;
}
modification_statement::modification_statement(statement_type type_, uint32_t bound_terms, schema_ptr schema_, std::unique_ptr<attributes> attrs_)
: type{type_}
, _bound_terms{bound_terms}
, s{schema_}
, attrs{std::move(attrs_)}
, _column_operations{}
{ }
bool modification_statement::uses_function(const sstring& ks_name, const sstring& function_name) const {
if (attrs->uses_function(ks_name, function_name)) {
return true;
}
for (auto&& e : _processed_keys) {
auto r = e.second;
if (r && r->uses_function(ks_name, function_name)) {
return true;
}
}
for (auto&& operation : _column_operations) {
if (operation && operation->uses_function(ks_name, function_name)) {
return true;
}
}
for (auto&& condition : _column_conditions) {
if (condition && condition->uses_function(ks_name, function_name)) {
return true;
}
}
for (auto&& condition : _static_conditions) {
if (condition && condition->uses_function(ks_name, function_name)) {
return true;
}
}
return false;
}
uint32_t modification_statement::get_bound_terms() {
return _bound_terms;
}
sstring modification_statement::keyspace() const {
return s->ks_name();
}
sstring modification_statement::column_family() const {
return s->cf_name();
}
bool modification_statement::is_counter() const {
return s->is_counter();
}
int64_t modification_statement::get_timestamp(int64_t now, const query_options& options) const {
return attrs->get_timestamp(now, options);
}
bool modification_statement::is_timestamp_set() const {
return attrs->is_timestamp_set();
}
gc_clock::duration modification_statement::get_time_to_live(const query_options& options) const {
return gc_clock::duration(attrs->get_time_to_live(options));
}
void modification_statement::check_access(const service::client_state& state) {
warn(unimplemented::cause::PERMISSIONS);
#if 0
state.hasColumnFamilyAccess(keyspace(), columnFamily(), Permission.MODIFY);
// CAS updates can be used to simulate a SELECT query, so should require Permission.SELECT as well.
if (hasConditions())
state.hasColumnFamilyAccess(keyspace(), columnFamily(), Permission.SELECT);
#endif
}
future<std::vector<mutation>>
modification_statement::get_mutations(distributed<service::storage_proxy>& proxy, const query_options& options, bool local, int64_t now) {
auto keys = make_lw_shared(build_partition_keys(options));
@@ -205,9 +130,9 @@ public:
const query::result_row_view& row) {
update_parameters::prefetch_data::row cells;
auto add_cell = [&cells] (column_id id, std::experimental::optional<collection_mutation_view>&& cell) {
auto add_cell = [&cells] (column_id id, std::experimental::optional<collection_mutation::view>&& cell) {
if (cell) {
cells.emplace(id, collection_mutation{to_bytes(cell->data)});
cells.emplace(id, collection_mutation::one{to_bytes(cell->data)});
}
};
@@ -624,63 +549,6 @@ bool modification_statement::depends_on_column_family(const sstring& cf_name) co
return column_family() == cf_name;
}
void modification_statement::add_operation(::shared_ptr<operation> op) {
if (op->column.is_static()) {
_sets_static_columns = true;
} else {
_sets_regular_columns = true;
}
_column_operations.push_back(std::move(op));
}
void modification_statement::add_condition(::shared_ptr<column_condition> cond) {
if (cond->column.is_static()) {
_sets_static_columns = true;
_static_conditions.emplace_back(std::move(cond));
} else {
_sets_regular_columns = true;
_column_conditions.emplace_back(std::move(cond));
}
}
void modification_statement::set_if_not_exist_condition() {
_if_not_exists = true;
}
bool modification_statement::has_if_not_exist_condition() const {
return _if_not_exists;
}
void modification_statement::set_if_exist_condition() {
_if_exists = true;
}
bool modification_statement::has_if_exist_condition() const {
return _if_exists;
}
bool modification_statement::requires_read() {
return std::any_of(_column_operations.begin(), _column_operations.end(), [] (auto&& op) {
return op->requires_read();
});
}
bool modification_statement::has_conditions() {
return _if_not_exists || _if_exists || !_column_conditions.empty() || !_static_conditions.empty();
}
void modification_statement::validate_where_clause_for_conditions() {
// no-op by default
}
modification_statement::parsed::parsed(::shared_ptr<cf_name> name, ::shared_ptr<attributes::raw> attrs, conditions_vector conditions, bool if_not_exists, bool if_exists)
: cf_statement{std::move(name)}
, _attrs{std::move(attrs)}
, _conditions{std::move(conditions)}
, _if_not_exists{if_not_exists}
, _if_exists{if_exists}
{ }
}
}
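The loops chained inside `uses_function()` above all share one shape: scan a container of nullable entries and return true on the first match. A minimal sketch of that pattern, using hypothetical `fake_term`/`any_uses_function` names (not part of the Scylla code), built on `std::any_of`:

```cpp
#include <algorithm>
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Hypothetical stand-in for the operation/condition types: anything that
// can report whether it references a given (keyspace, function) pair.
struct fake_term {
    bool uses;
    bool uses_function(const std::string&, const std::string&) const {
        return uses;
    }
};

// Scan a container of possibly-null smart pointers, matching the
// null-check-then-delegate shape of each loop in uses_function().
template <typename Container>
bool any_uses_function(const Container& c, const std::string& ks,
                       const std::string& fn) {
    return std::any_of(c.begin(), c.end(), [&] (auto&& e) {
        return e && e->uses_function(ks, fn);
    });
}
```

With a helper like this, the four hand-written loops would each collapse to a single call, leaving only the initial `attrs->uses_function(...)` check distinct.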


@@ -107,29 +107,84 @@ private:
};
public:
modification_statement(statement_type type_, uint32_t bound_terms, schema_ptr schema_, std::unique_ptr<attributes> attrs_);
modification_statement(statement_type type_, uint32_t bound_terms, schema_ptr schema_, std::unique_ptr<attributes> attrs_)
: type{type_}
, _bound_terms{bound_terms}
, s{schema_}
, attrs{std::move(attrs_)}
, _column_operations{}
{ }
virtual bool uses_function(const sstring& ks_name, const sstring& function_name) const override;
virtual bool uses_function(const sstring& ks_name, const sstring& function_name) const override {
if (attrs->uses_function(ks_name, function_name)) {
return true;
}
for (auto&& e : _processed_keys) {
auto r = e.second;
if (r && r->uses_function(ks_name, function_name)) {
return true;
}
}
for (auto&& operation : _column_operations) {
if (operation && operation->uses_function(ks_name, function_name)) {
return true;
}
}
for (auto&& condition : _column_conditions) {
if (condition && condition->uses_function(ks_name, function_name)) {
return true;
}
}
for (auto&& condition : _static_conditions) {
if (condition && condition->uses_function(ks_name, function_name)) {
return true;
}
}
return false;
}
virtual bool require_full_clustering_key() const = 0;
virtual void add_update_for_key(mutation& m, const exploded_clustering_prefix& prefix, const update_parameters& params) = 0;
virtual uint32_t get_bound_terms() override;
virtual uint32_t get_bound_terms() override {
return _bound_terms;
}
virtual sstring keyspace() const;
virtual sstring keyspace() const {
return s->ks_name();
}
virtual sstring column_family() const;
virtual sstring column_family() const {
return s->cf_name();
}
virtual bool is_counter() const;
virtual bool is_counter() const {
return s->is_counter();
}
int64_t get_timestamp(int64_t now, const query_options& options) const;
int64_t get_timestamp(int64_t now, const query_options& options) const {
return attrs->get_timestamp(now, options);
}
bool is_timestamp_set() const;
bool is_timestamp_set() const {
return attrs->is_timestamp_set();
}
gc_clock::duration get_time_to_live(const query_options& options) const;
gc_clock::duration get_time_to_live(const query_options& options) const {
return gc_clock::duration(attrs->get_time_to_live(options));
}
virtual void check_access(const service::client_state& state) override;
virtual void check_access(const service::client_state& state) override {
warn(unimplemented::cause::PERMISSIONS);
#if 0
state.hasColumnFamilyAccess(keyspace(), columnFamily(), Permission.MODIFY);
// CAS updates can be used to simulate a SELECT query, so should require Permission.SELECT as well.
if (hasConditions())
state.hasColumnFamilyAccess(keyspace(), columnFamily(), Permission.SELECT);
#endif
}
void validate(distributed<service::storage_proxy>&, const service::client_state& state) override;
@@ -137,7 +192,14 @@ public:
virtual bool depends_on_column_family(const sstring& cf_name) const override;
void add_operation(::shared_ptr<operation> op);
void add_operation(::shared_ptr<operation> op) {
if (op->column.is_static()) {
_sets_static_columns = true;
} else {
_sets_regular_columns = true;
}
_column_operations.push_back(std::move(op));
}
#if 0
public Iterable<ColumnDefinition> getColumnsWithConditions()
@@ -150,15 +212,31 @@ public:
}
#endif
public:
void add_condition(::shared_ptr<column_condition> cond);
void add_condition(::shared_ptr<column_condition> cond) {
if (cond->column.is_static()) {
_sets_static_columns = true;
_static_conditions.emplace_back(std::move(cond));
} else {
_sets_regular_columns = true;
_column_conditions.emplace_back(std::move(cond));
}
}
void set_if_not_exist_condition();
void set_if_not_exist_condition() {
_if_not_exists = true;
}
bool has_if_not_exist_condition() const;
bool has_if_not_exist_condition() const {
return _if_not_exists;
}
void set_if_exist_condition();
void set_if_exist_condition() {
_if_exists = true;
}
bool has_if_exist_condition() const;
bool has_if_exist_condition() const {
return _if_exists;
}
private:
void add_key_values(const column_definition& def, ::shared_ptr<restrictions::restriction> values);
@@ -176,7 +254,11 @@ protected:
const column_definition* get_first_empty_key();
public:
bool requires_read();
bool requires_read() {
return std::any_of(_column_operations.begin(), _column_operations.end(), [] (auto&& op) {
return op->requires_read();
});
}
protected:
future<update_parameters::prefetched_rows_type> read_required_rows(
@@ -187,7 +269,9 @@ protected:
db::consistency_level cl);
public:
bool has_conditions();
bool has_conditions() {
return _if_not_exists || _if_exists || !_column_conditions.empty() || !_static_conditions.empty();
}
virtual future<::shared_ptr<transport::messages::result_message>>
execute(distributed<service::storage_proxy>& proxy, service::query_state& qs, const query_options& options) override;
@@ -344,7 +428,9 @@ protected:
* processed to check that they are compatible.
* @throws InvalidRequestException
*/
virtual void validate_where_clause_for_conditions();
virtual void validate_where_clause_for_conditions() {
// no-op by default
}
public:
class parsed : public cf_statement {
@@ -357,7 +443,13 @@ public:
const bool _if_not_exists;
const bool _if_exists;
protected:
parsed(::shared_ptr<cf_name> name, ::shared_ptr<attributes::raw> attrs, conditions_vector conditions, bool if_not_exists, bool if_exists);
parsed(::shared_ptr<cf_name> name, ::shared_ptr<attributes::raw> attrs, conditions_vector conditions, bool if_not_exists, bool if_exists)
: cf_statement{std::move(name)}
, _attrs{std::move(attrs)}
, _conditions{std::move(conditions)}
, _if_not_exists{if_not_exists}
, _if_exists{if_exists}
{ }
public:
virtual ::shared_ptr<parsed_statement::prepared> prepare(database& db) override;


@@ -1,83 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright 2014 Cloudius Systems
*
* Modified by Cloudius Systems
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "cql3/statements/parsed_statement.hh"
namespace cql3 {
namespace statements {
parsed_statement::~parsed_statement()
{ }
shared_ptr<variable_specifications> parsed_statement::get_bound_variables() {
return _variables;
}
// Used by the parser and preparable statement
void parsed_statement::set_bound_variables(const std::vector<::shared_ptr<column_identifier>>& bound_names) {
_variables = ::make_shared<variable_specifications>(bound_names);
}
bool parsed_statement::uses_function(const sstring& ks_name, const sstring& function_name) const {
return false;
}
parsed_statement::prepared::prepared(::shared_ptr<cql_statement> statement_, std::vector<::shared_ptr<column_specification>> bound_names_)
: statement(std::move(statement_))
, bound_names(std::move(bound_names_))
{ }
parsed_statement::prepared::prepared(::shared_ptr<cql_statement> statement_, const variable_specifications& names)
: prepared(statement_, names.get_specifications())
{ }
parsed_statement::prepared::prepared(::shared_ptr<cql_statement> statement_, variable_specifications&& names)
: prepared(statement_, std::move(names).get_specifications())
{ }
parsed_statement::prepared::prepared(::shared_ptr<cql_statement>&& statement_)
: prepared(statement_, std::vector<::shared_ptr<column_specification>>())
{ }
}
}


@@ -60,29 +60,47 @@ private:
::shared_ptr<variable_specifications> _variables;
public:
virtual ~parsed_statement();
virtual ~parsed_statement()
{ }
shared_ptr<variable_specifications> get_bound_variables();
shared_ptr<variable_specifications> get_bound_variables() {
return _variables;
}
void set_bound_variables(const std::vector<::shared_ptr<column_identifier>>& bound_names);
// Used by the parser and preparable statement
void set_bound_variables(const std::vector<::shared_ptr<column_identifier>>& bound_names)
{
_variables = ::make_shared<variable_specifications>(bound_names);
}
class prepared {
public:
const ::shared_ptr<cql_statement> statement;
const std::vector<::shared_ptr<column_specification>> bound_names;
prepared(::shared_ptr<cql_statement> statement_, std::vector<::shared_ptr<column_specification>> bound_names_);
prepared(::shared_ptr<cql_statement> statement_, std::vector<::shared_ptr<column_specification>> bound_names_)
: statement(std::move(statement_))
, bound_names(std::move(bound_names_))
{ }
prepared(::shared_ptr<cql_statement> statement_, const variable_specifications& names);
prepared(::shared_ptr<cql_statement> statement_, const variable_specifications& names)
: prepared(statement_, names.get_specifications())
{ }
prepared(::shared_ptr<cql_statement> statement_, variable_specifications&& names);
prepared(::shared_ptr<cql_statement> statement_, variable_specifications&& names)
: prepared(statement_, std::move(names).get_specifications())
{ }
prepared(::shared_ptr<cql_statement>&& statement_);
prepared(::shared_ptr<cql_statement>&& statement_)
: prepared(statement_, std::vector<::shared_ptr<column_specification>>())
{ }
};
virtual ::shared_ptr<prepared> prepare(database& db) = 0;
virtual bool uses_function(const sstring& ks_name, const sstring& function_name) const;
virtual bool uses_function(const sstring& ks_name, const sstring& function_name) const {
return false;
}
};
}
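The `prepared` constructors inlined above all funnel into one canonical constructor via delegating constructors. A self-contained illustration of the same pattern, with invented `prepared_like` names (an assumption for this sketch, not Scylla's types):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Mimics parsed_statement::prepared: one canonical constructor takes the
// statement plus its bound-name specifications; convenience overloads
// delegate to it instead of duplicating the member initializers.
struct prepared_like {
    std::string statement;
    std::vector<std::string> bound_names;

    // Canonical constructor: everything ends up here.
    prepared_like(std::string s, std::vector<std::string> names)
        : statement(std::move(s))
        , bound_names(std::move(names))
    { }

    // Convenience overload for statements with no bound variables,
    // delegating to the canonical constructor.
    explicit prepared_like(std::string s)
        : prepared_like(std::move(s), {})
    { }
};
```

Delegation keeps the member-initialization logic in one place, so moving these bodies into the header does not multiply the spots that must agree on initialization order.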


@@ -1,186 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright 2015 Cloudius Systems
*
* Modified by Cloudius Systems
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "cql3/statements/property_definitions.hh"
namespace cql3 {
namespace statements {
property_definitions::property_definitions()
: _properties{}
{ }
void property_definitions::add_property(const sstring& name, sstring value) {
auto it = _properties.find(name);
if (it != _properties.end()) {
throw exceptions::syntax_exception(sprint("Multiple definition for property '%s'", name));
}
_properties.emplace(name, value);
}
void property_definitions::add_property(const sstring& name, const std::map<sstring, sstring>& value) {
auto it = _properties.find(name);
if (it != _properties.end()) {
throw exceptions::syntax_exception(sprint("Multiple definition for property '%s'", name));
}
_properties.emplace(name, value);
}
void property_definitions::validate(const std::set<sstring>& keywords, const std::set<sstring>& obsolete) {
for (auto&& kv : _properties) {
auto&& name = kv.first;
if (keywords.count(name)) {
continue;
}
if (obsolete.count(name)) {
#if 0
logger.warn("Ignoring obsolete property {}", name);
#endif
} else {
throw exceptions::syntax_exception(sprint("Unknown property '%s'", name));
}
}
}
std::experimental::optional<sstring> property_definitions::get_simple(const sstring& name) const {
auto it = _properties.find(name);
if (it == _properties.end()) {
return std::experimental::nullopt;
}
try {
return boost::any_cast<sstring>(it->second);
} catch (const boost::bad_any_cast& e) {
throw exceptions::syntax_exception(sprint("Invalid value for property '%s'. It should be a string", name));
}
}
std::experimental::optional<std::map<sstring, sstring>> property_definitions::get_map(const sstring& name) const {
auto it = _properties.find(name);
if (it == _properties.end()) {
return std::experimental::nullopt;
}
try {
return boost::any_cast<std::map<sstring, sstring>>(it->second);
} catch (const boost::bad_any_cast& e) {
throw exceptions::syntax_exception(sprint("Invalid value for property '%s'. It should be a map.", name));
}
}
bool property_definitions::has_property(const sstring& name) const {
return _properties.find(name) != _properties.end();
}
sstring property_definitions::get_string(sstring key, sstring default_value) const {
auto value = get_simple(key);
if (value) {
return value.value();
} else {
return default_value;
}
}
// Return a property value, typed as a Boolean
bool property_definitions::get_boolean(sstring key, bool default_value) const {
auto value = get_simple(key);
if (value) {
std::string s{value.value()};
std::transform(s.begin(), s.end(), s.begin(), ::tolower);
return s == "1" || s == "true" || s == "yes";
} else {
return default_value;
}
}
// Return a property value, typed as a double
double property_definitions::get_double(sstring key, double default_value) const {
auto value = get_simple(key);
return to_double(key, value, default_value);
}
double property_definitions::to_double(sstring key, std::experimental::optional<sstring> value, double default_value) {
if (value) {
auto val = value.value();
try {
return std::stod(val);
} catch (const std::exception& e) {
throw exceptions::syntax_exception(sprint("Invalid double value %s for '%s'", val, key));
}
} else {
return default_value;
}
}
// Return a property value, typed as an Integer
int32_t property_definitions::get_int(sstring key, int32_t default_value) const {
auto value = get_simple(key);
return to_int(key, value, default_value);
}
int32_t property_definitions::to_int(sstring key, std::experimental::optional<sstring> value, int32_t default_value) {
if (value) {
auto val = value.value();
try {
return std::stoi(val);
} catch (const std::exception& e) {
throw exceptions::syntax_exception(sprint("Invalid integer value %s for '%s'", val, key));
}
} else {
return default_value;
}
}
long property_definitions::to_long(sstring key, std::experimental::optional<sstring> value, long default_value) {
if (value) {
auto val = value.value();
try {
return std::stol(val);
} catch (const std::exception& e) {
throw exceptions::syntax_exception(sprint("Invalid long value %s for '%s'", val, key));
}
} else {
return default_value;
}
}
}
}


@@ -66,38 +66,141 @@ protected:
#endif
std::unordered_map<sstring, boost::any> _properties;
property_definitions();
property_definitions()
: _properties{}
{ }
public:
void add_property(const sstring& name, sstring value);
void add_property(const sstring& name, sstring value) {
auto it = _properties.find(name);
if (it != _properties.end()) {
throw exceptions::syntax_exception(sprint("Multiple definition for property '%s'", name));
}
_properties.emplace(name, value);
}
void add_property(const sstring& name, const std::map<sstring, sstring>& value);
void validate(const std::set<sstring>& keywords, const std::set<sstring>& obsolete);
void add_property(const sstring& name, const std::map<sstring, sstring>& value) {
auto it = _properties.find(name);
if (it != _properties.end()) {
throw exceptions::syntax_exception(sprint("Multiple definition for property '%s'", name));
}
_properties.emplace(name, value);
}
void validate(const std::set<sstring>& keywords, const std::set<sstring>& obsolete) {
for (auto&& kv : _properties) {
auto&& name = kv.first;
if (keywords.count(name)) {
continue;
}
if (obsolete.count(name)) {
#if 0
logger.warn("Ignoring obsolete property {}", name);
#endif
} else {
throw exceptions::syntax_exception(sprint("Unknown property '%s'", name));
}
}
}
protected:
std::experimental::optional<sstring> get_simple(const sstring& name) const;
std::experimental::optional<std::map<sstring, sstring>> get_map(const sstring& name) const;
std::experimental::optional<sstring> get_simple(const sstring& name) const {
auto it = _properties.find(name);
if (it == _properties.end()) {
return std::experimental::nullopt;
}
try {
return boost::any_cast<sstring>(it->second);
} catch (const boost::bad_any_cast& e) {
throw exceptions::syntax_exception(sprint("Invalid value for property '%s'. It should be a string", name));
}
}
std::experimental::optional<std::map<sstring, sstring>> get_map(const sstring& name) const {
auto it = _properties.find(name);
if (it == _properties.end()) {
return std::experimental::nullopt;
}
try {
return boost::any_cast<std::map<sstring, sstring>>(it->second);
} catch (const boost::bad_any_cast& e) {
throw exceptions::syntax_exception(sprint("Invalid value for property '%s'. It should be a map.", name));
}
}
public:
bool has_property(const sstring& name) const;
bool has_property(const sstring& name) const {
return _properties.find(name) != _properties.end();
}
sstring get_string(sstring key, sstring default_value) const;
sstring get_string(sstring key, sstring default_value) const {
auto value = get_simple(key);
if (value) {
return value.value();
} else {
return default_value;
}
}
// Return a property value, typed as a Boolean
bool get_boolean(sstring key, bool default_value) const;
bool get_boolean(sstring key, bool default_value) const {
auto value = get_simple(key);
if (value) {
std::string s{value.value()};
std::transform(s.begin(), s.end(), s.begin(), ::tolower);
return s == "1" || s == "true" || s == "yes";
} else {
return default_value;
}
}
// Return a property value, typed as a double
double get_double(sstring key, double default_value) const;
double get_double(sstring key, double default_value) const {
auto value = get_simple(key);
return to_double(key, value, default_value);
}
static double to_double(sstring key, std::experimental::optional<sstring> value, double default_value);
static double to_double(sstring key, std::experimental::optional<sstring> value, double default_value) {
if (value) {
auto val = value.value();
try {
return std::stod(val);
} catch (const std::exception& e) {
throw exceptions::syntax_exception(sprint("Invalid double value %s for '%s'", val, key));
}
} else {
return default_value;
}
}
// Return a property value, typed as an Integer
int32_t get_int(sstring key, int32_t default_value) const;
int32_t get_int(sstring key, int32_t default_value) const {
auto value = get_simple(key);
return to_int(key, value, default_value);
}
static int32_t to_int(sstring key, std::experimental::optional<sstring> value, int32_t default_value);
static int32_t to_int(sstring key, std::experimental::optional<sstring> value, int32_t default_value) {
if (value) {
auto val = value.value();
try {
return std::stoi(val);
} catch (const std::exception& e) {
throw exceptions::syntax_exception(sprint("Invalid integer value %s for '%s'", val, key));
}
} else {
return default_value;
}
}
static long to_long(sstring key, std::experimental::optional<sstring> value, long default_value);
static long to_long(sstring key, std::experimental::optional<sstring> value, long default_value) {
if (value) {
auto val = value.value();
try {
return std::stol(val);
} catch (const std::exception& e) {
throw exceptions::syntax_exception(sprint("Invalid long value %s for '%s'", val, key));
}
} else {
return default_value;
}
}
};
}
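The value coercion done by `get_boolean()` and `to_int()` above is easy to exercise in isolation. A minimal stand-alone mimic (not the Scylla code itself — plain `std::optional` and `std::runtime_error` stand in for `std::experimental::optional` and `syntax_exception`):

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <optional>
#include <stdexcept>
#include <string>

// Lower-case the value and accept the same truthy spellings as
// get_boolean(): "1", "true", "yes". Absent value -> default.
bool parse_boolean(std::optional<std::string> value, bool default_value) {
    if (!value) {
        return default_value;
    }
    std::string s = *value;
    std::transform(s.begin(), s.end(), s.begin(),
                   [] (unsigned char c) { return std::tolower(c); });
    return s == "1" || s == "true" || s == "yes";
}

// Same shape as to_int(): stoi on present values, rethrowing parse
// failures as a descriptive error; absent value -> default.
int parse_int(const std::string& key, std::optional<std::string> value,
              int default_value) {
    if (!value) {
        return default_value;
    }
    try {
        return std::stoi(*value);
    } catch (const std::exception&) {
        throw std::runtime_error(
            "Invalid integer value " + *value + " for '" + key + "'");
    }
}
```

Note that, as in the original, an unparseable value is an error while a missing value silently falls back to the default; only the lookup step (`get_simple`) distinguishes the two.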


@@ -46,7 +46,6 @@
#include "core/shared_ptr.hh"
#include "query-result-reader.hh"
#include "query_result_merger.hh"
#include "service/pager/query_pagers.hh"
namespace cql3 {
@@ -54,31 +53,6 @@ namespace statements {
thread_local const shared_ptr<select_statement::parameters> select_statement::_default_parameters = ::make_shared<select_statement::parameters>();
select_statement::parameters::parameters()
: _is_distinct{false}
, _allow_filtering{false}
{ }
select_statement::parameters::parameters(orderings_type orderings,
bool is_distinct,
bool allow_filtering)
: _orderings{std::move(orderings)}
, _is_distinct{is_distinct}
, _allow_filtering{allow_filtering}
{ }
bool select_statement::parameters::is_distinct() {
return _is_distinct;
}
bool select_statement::parameters::allow_filtering() {
return _allow_filtering;
}
select_statement::parameters::orderings_type const& select_statement::parameters::orderings() {
return _orderings;
}
select_statement::select_statement(schema_ptr schema,
uint32_t bound_terms,
::shared_ptr<parameters> parameters,
@@ -140,14 +114,6 @@ bool select_statement::depends_on_column_family(const sstring& cf_name) const {
return column_family() == cf_name;
}
const sstring& select_statement::keyspace() const {
return _schema->ks_name();
}
const sstring& select_statement::column_family() const {
return _schema->cf_name();
}
query::partition_slice
select_statement::make_partition_slice(const query_options& options) {
std::vector<column_id> static_columns;
@@ -194,7 +160,7 @@ int32_t select_statement::get_limit(const query_options& options) const {
try {
int32_type->validate(*val);
auto l = value_cast<int32_t>(int32_type->deserialize(*val));
auto l = boost::any_cast<int32_t>(int32_type->deserialize(*val));
if (l <= 0) {
throw exceptions::invalid_request_exception("LIMIT must be strictly positive");
}
@@ -229,51 +195,37 @@ select_statement::execute(distributed<service::storage_proxy>& proxy, service::q
page_size = DEFAULT_COUNT_PAGE_SIZE;
}
auto key_ranges = _restrictions->get_partition_key_ranges(options);
warn(unimplemented::cause::PAGING);
return execute(proxy, command, _restrictions->get_partition_key_ranges(options), state, options, now);
if (page_size <= 0
|| !service::pager::query_pagers::may_need_paging(page_size,
*command, key_ranges)) {
return execute(proxy, command, std::move(key_ranges), state, options,
now);
#if 0
if (page_size <= 0 || !command || !query_pagers::may_need_paging(command, page_size)) {
return execute(proxy, command, state, options, now);
}
auto p = service::pager::query_pagers::pager(_schema, _selection,
state, options, command, std::move(key_ranges));
auto pager = query_pagers::pager(command, cl, state.get_client_state(), options.get_paging_state());
if (_selection->is_aggregate()) {
return do_with(
cql3::selection::result_set_builder(*_selection, now,
options.get_serialization_format()),
[p, page_size, now](auto& builder) {
return do_until([p] {return p->is_exhausted();},
[p, &builder, page_size, now] {
return p->fetch_page(builder, page_size, now);
}
).then([&builder] {
auto rs = builder.build();
auto msg = ::make_shared<transport::messages::result_message::rows>(std::move(rs));
return make_ready_future<shared_ptr<transport::messages::result_message>>(std::move(msg));
});
});
if (selection->isAggregate()) {
return page_aggregate_query(pager, options, page_size, now);
}
// We can't properly do post-query ordering if we page (see #6722)
if (needs_post_query_ordering()) {
throw exceptions::invalid_request_exception(
"Cannot page queries with both ORDER BY and a IN restriction on the partition key;"
" you must either remove the ORDER BY or the IN and sort client side, or disable paging for this query");
"Cannot page queries with both ORDER BY and a IN restriction on the partition key;"
" you must either remove the ORDER BY or the IN and sort client side, or disable paging for this query");
}
return p->fetch_page(page_size, now).then(
[this, p, &options, limit, now](std::unique_ptr<cql3::result_set> rs) {
return pager->fetch_page(page_size).then([this, pager, &options, limit, now] (auto page) {
auto msg = process_results(page, options, limit, now);
if (!p->is_exhausted()) {
rs->get_metadata().set_has_more_pages(p->state());
}
if (!pager->is_exhausted()) {
msg->result->metadata->set_has_more_pages(pager->state());
}
auto msg = ::make_shared<transport::messages::result_message::rows>(std::move(rs));
return make_ready_future<shared_ptr<transport::messages::result_message>>(std::move(msg));
});
return msg;
});
#endif
}
future<shared_ptr<transport::messages::result_message>>
@@ -329,18 +281,114 @@ select_statement::execute_internal(distributed<service::storage_proxy>& proxy, s
}
}
shared_ptr<transport::messages::result_message> select_statement::process_results(
foreign_ptr<lw_shared_ptr<query::result>> results,
lw_shared_ptr<query::read_command> cmd, const query_options& options,
db_clock::time_point now) {
// Implements ResultVisitor concept from query.hh
class result_set_building_visitor {
cql3::selection::result_set_builder& builder;
select_statement& stmt;
uint32_t _row_count;
std::vector<bytes> _partition_key;
std::vector<bytes> _clustering_key;
public:
result_set_building_visitor(cql3::selection::result_set_builder& builder, select_statement& stmt)
: builder(builder)
, stmt(stmt)
, _row_count(0)
{ }
void add_value(const column_definition& def, query::result_row_view::iterator_type& i) {
if (def.type->is_multi_cell()) {
auto cell = i.next_collection_cell();
if (!cell) {
builder.add_empty();
return;
}
builder.add(def, *cell);
} else {
auto cell = i.next_atomic_cell();
if (!cell) {
builder.add_empty();
return;
}
builder.add(def, *cell);
}
};
void accept_new_partition(const partition_key& key, uint32_t row_count) {
_partition_key = key.explode(*stmt._schema);
_row_count = row_count;
}
void accept_new_partition(uint32_t row_count) {
_row_count = row_count;
}
void accept_new_row(const clustering_key& key, const query::result_row_view& static_row,
const query::result_row_view& row) {
_clustering_key = key.explode(*stmt._schema);
accept_new_row(static_row, row);
}
void accept_new_row(const query::result_row_view& static_row, const query::result_row_view& row) {
auto static_row_iterator = static_row.iterator();
auto row_iterator = row.iterator();
builder.new_row();
for (auto&& def : stmt._selection->get_columns()) {
switch (def->kind) {
case column_kind::partition_key:
builder.add(_partition_key[def->component_index()]);
break;
case column_kind::clustering_key:
builder.add(_clustering_key[def->component_index()]);
break;
case column_kind::regular_column:
add_value(*def, row_iterator);
break;
case column_kind::compact_column:
add_value(*def, row_iterator);
break;
case column_kind::static_column:
add_value(*def, static_row_iterator);
break;
default:
assert(0);
}
}
}
void accept_partition_end(const query::result_row_view& static_row) {
if (_row_count == 0) {
builder.new_row();
auto static_row_iterator = static_row.iterator();
for (auto&& def : stmt._selection->get_columns()) {
if (def->is_partition_key()) {
builder.add(_partition_key[def->component_index()]);
} else if (def->is_static()) {
add_value(*def, static_row_iterator);
} else {
builder.add_empty();
}
}
}
}
};
shared_ptr<transport::messages::result_message>
select_statement::process_results(foreign_ptr<lw_shared_ptr<query::result>> results, lw_shared_ptr<query::read_command> cmd,
const query_options& options, db_clock::time_point now) {
cql3::selection::result_set_builder builder(*_selection, now, options.get_serialization_format());
// FIXME: This special casing saves us the cost of copying an already
// linearized response. When we switch views to scattered_reader this will go away.
if (results->buf().is_linearized()) {
query::result_view view(results->buf().view());
view.consume(cmd->slice, result_set_building_visitor(builder, *this));
} else {
bytes_ostream w(results->buf());
query::result_view view(w.linearize());
view.consume(cmd->slice, result_set_building_visitor(builder, *this));
}
cql3::selection::result_set_builder builder(*_selection, now,
options.get_serialization_format());
query::result_view::consume(results->buf(), cmd->slice,
cql3::selection::result_set_builder::visitor(builder, *_schema,
*_selection));
auto rs = builder.build();
if (needs_post_query_ordering()) {
rs->sort(_ordering_comparator);
if (_is_reversed) {
@@ -351,18 +399,6 @@ shared_ptr<transport::messages::result_message> select_statement::process_result
return ::make_shared<transport::messages::result_message::rows>(std::move(rs));
}
select_statement::raw_statement::raw_statement(::shared_ptr<cf_name> cf_name,
::shared_ptr<parameters> parameters,
std::vector<::shared_ptr<selection::raw_selector>> select_clause,
std::vector<::shared_ptr<relation>> where_clause,
::shared_ptr<term::raw> limit)
: cf_statement(std::move(cf_name))
, _parameters(std::move(parameters))
, _select_clause(std::move(select_clause))
, _where_clause(std::move(where_clause))
, _limit(std::move(limit))
{ }
::shared_ptr<parsed_statement::prepared>
select_statement::raw_statement::prepare(database& db) {
schema_ptr schema = validation::validate_column_family(db, keyspace(), column_family());


@@ -63,6 +63,7 @@ namespace statements {
*
*/
class select_statement : public cql_statement {
friend class result_set_building_visitor;
public:
class parameters final {
public:
@@ -72,13 +73,20 @@ public:
const bool _is_distinct;
const bool _allow_filtering;
public:
parameters();
parameters()
: _is_distinct{false}
, _allow_filtering{false}
{ }
parameters(orderings_type orderings,
bool is_distinct,
bool allow_filtering);
bool is_distinct();
bool allow_filtering();
orderings_type const& orderings();
bool allow_filtering)
: _orderings{std::move(orderings)}
, _is_distinct{is_distinct}
, _allow_filtering{allow_filtering}
{ }
bool is_distinct() { return _is_distinct; }
bool allow_filtering() { return _allow_filtering; }
orderings_type const& orderings() { return _orderings; }
};
private:
static constexpr int DEFAULT_COUNT_PAGE_SIZE = 10000;
@@ -188,9 +196,13 @@ public:
}
#endif
const sstring& keyspace() const;
const sstring& keyspace() const {
return _schema->ks_name();
}
const sstring& column_family() const;
const sstring& column_family() const {
return _schema->cf_name();
}
query::partition_slice make_partition_slice(const query_options& options);
@@ -446,7 +458,13 @@ public:
::shared_ptr<parameters> parameters,
std::vector<::shared_ptr<selection::raw_selector>> select_clause,
std::vector<::shared_ptr<relation>> where_clause,
::shared_ptr<term::raw> limit);
::shared_ptr<term::raw> limit)
: cf_statement(std::move(cf_name))
, _parameters(std::move(parameters))
, _select_clause(std::move(select_clause))
, _where_clause(std::move(where_clause))
, _limit(std::move(limit))
{ }
virtual ::shared_ptr<prepared> prepare(database& db) override;
private:


@@ -48,14 +48,6 @@ namespace cql3 {
namespace statements {
update_statement::update_statement(statement_type type, uint32_t bound_terms, schema_ptr s, std::unique_ptr<attributes> attrs)
: modification_statement{type, bound_terms, std::move(s), std::move(attrs)}
{ }
bool update_statement::require_full_clustering_key() const {
return true;
}
void update_statement::add_update_for_key(mutation& m, const exploded_clustering_prefix& prefix, const update_parameters& params) {
if (s->is_dense()) {
if (!prefix || (prefix.size() == 1 && prefix.components().front().empty())) {
@@ -108,16 +100,6 @@ void update_statement::add_update_for_key(mutation& m, const exploded_clustering
#endif
}
update_statement::parsed_insert::parsed_insert(::shared_ptr<cf_name> name,
::shared_ptr<attributes::raw> attrs,
std::vector<::shared_ptr<column_identifier::raw>> column_names,
std::vector<::shared_ptr<term::raw>> column_values,
bool if_not_exists)
: modification_statement::parsed{std::move(name), std::move(attrs), conditions_vector{}, if_not_exists, false}
, _column_names{std::move(column_names)}
, _column_values{std::move(column_values)}
{ }
::shared_ptr<modification_statement>
update_statement::parsed_insert::prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs)
@@ -166,16 +148,6 @@ update_statement::parsed_insert::prepare_internal(database& db, schema_ptr schem
return stmt;
}
update_statement::parsed_update::parsed_update(::shared_ptr<cf_name> name,
::shared_ptr<attributes::raw> attrs,
std::vector<std::pair<::shared_ptr<column_identifier::raw>, ::shared_ptr<operation::raw_update>>> updates,
std::vector<relation_ptr> where_clause,
conditions_vector conditions)
: modification_statement::parsed(std::move(name), std::move(attrs), std::move(conditions), false, false)
, _updates(std::move(updates))
, _where_clause(std::move(where_clause))
{ }
::shared_ptr<modification_statement>
update_statement::parsed_update::prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs)


@@ -64,9 +64,14 @@ public:
private static final Constants.Value EMPTY = new Constants.Value(ByteBufferUtil.EMPTY_BYTE_BUFFER);
#endif
update_statement(statement_type type, uint32_t bound_terms, schema_ptr s, std::unique_ptr<attributes> attrs);
update_statement(statement_type type, uint32_t bound_terms, schema_ptr s, std::unique_ptr<attributes> attrs)
: modification_statement{type, bound_terms, std::move(s), std::move(attrs)}
{ }
private:
virtual bool require_full_clustering_key() const override;
virtual bool require_full_clustering_key() const override {
return true;
}
virtual void add_update_for_key(mutation& m, const exploded_clustering_prefix& prefix, const update_parameters& params) override;
public:
@@ -87,7 +92,11 @@ public:
::shared_ptr<attributes::raw> attrs,
std::vector<::shared_ptr<column_identifier::raw>> column_names,
std::vector<::shared_ptr<term::raw>> column_values,
bool if_not_exists);
bool if_not_exists)
: modification_statement::parsed{std::move(name), std::move(attrs), conditions_vector{}, if_not_exists, false}
, _column_names{std::move(column_names)}
, _column_values{std::move(column_values)}
{ }
virtual ::shared_ptr<modification_statement> prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs) override;
@@ -113,7 +122,11 @@ public:
::shared_ptr<attributes::raw> attrs,
std::vector<std::pair<::shared_ptr<column_identifier::raw>, ::shared_ptr<operation::raw_update>>> updates,
std::vector<relation_ptr> where_clause,
conditions_vector conditions);
conditions_vector conditions)
: modification_statement::parsed(std::move(name), std::move(attrs), std::move(conditions), false, false)
, _updates(std::move(updates))
, _where_clause(std::move(where_clause))
{ }
protected:
virtual ::shared_ptr<modification_statement> prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs);


@@ -224,6 +224,14 @@ public:
// We don't "need" that override but it saves us the allocation of a Value object if used
return options.make_temporary(_type->build_value(bind_internal(options)));
}
#if 0
@Override
public String toString()
{
return tupleToString(elements);
}
#endif
};
/**
@@ -251,7 +259,7 @@ public:
try {
// Collections have this small hack that validate cannot be called on a serialized object,
// but the deserialization does the validation (so we're fine).
auto l = value_cast<list_type_impl::native_type>(type->deserialize(value, options.get_serialization_format()));
auto l = boost::any_cast<list_type_impl::native_type>(type->deserialize(value, options.get_serialization_format()));
auto ttype = dynamic_pointer_cast<const tuple_type_impl>(type->get_elements_type());
assert(ttype);


@@ -66,7 +66,7 @@ public:
}
template<typename T>
T get_as(const sstring& name) const {
return value_cast<T>(data_type_for<T>()->deserialize(get_blob(name)));
return boost::any_cast<T>(data_type_for<T>()->deserialize(get_blob(name)));
}
// this could maybe be done as an overload of get_as (or something), but that just
// muddles things for no real gain. Let user (us) attempt to know what he is doing instead.
@@ -75,12 +75,12 @@ public:
data_type_for<K>(), data_type valtype =
data_type_for<V>()) const {
auto vec =
value_cast<map_type_impl::native_type>(
boost::any_cast<const map_type_impl::native_type&>(
map_type_impl::get_instance(keytype, valtype, false)->deserialize(
get_blob(name)));
std::transform(vec.begin(), vec.end(), out,
[](auto& p) {
return std::pair<K, V>(value_cast<K>(p.first), value_cast<V>(p.second));
return std::pair<K, V>(boost::any_cast<const K&>(p.first), boost::any_cast<const V&>(p.second));
});
}
template<typename K, typename V, typename ... Rest>


@@ -43,7 +43,7 @@
namespace cql3 {
std::experimental::optional<collection_mutation_view>
std::experimental::optional<collection_mutation::view>
update_parameters::get_prefetched_list(
const partition_key& pkey,
const clustering_key& row_key,


@@ -86,7 +86,7 @@ public:
return pk_eq(k1.first, k2.first) && ck_eq(k1.second, k2.second);
}
};
using row = std::unordered_map<column_id, collection_mutation>;
using row = std::unordered_map<column_id, collection_mutation::one>;
public:
std::unordered_map<key, row, key_hashing, key_equality> rows;
schema_ptr schema;
@@ -183,7 +183,7 @@ public:
return _timestamp;
}
std::experimental::optional<collection_mutation_view> get_prefetched_list(
std::experimental::optional<collection_mutation::view> get_prefetched_list(
const partition_key& pkey, const clustering_key& row_key, const column_definition& column) const;
};


@@ -88,6 +88,14 @@ public:
}
_specs[bind_index] = spec;
}
#if 0
@Override
public String toString()
{
return Arrays.toString(specs);
}
#endif
};
}


@@ -416,23 +416,6 @@ static std::vector<sstring> parse_fname(sstring filename) {
return comps;
}
static bool belongs_to_current_shard(const schema& s, const partition_key& first, const partition_key& last) {
auto key_shard = [&s] (const partition_key& pk) {
auto token = dht::global_partitioner().get_token(s, pk);
return dht::shard_of(token);
};
auto s1 = key_shard(first);
auto s2 = key_shard(last);
auto me = engine().cpu_id();
return (s1 <= me) && (me <= s2);
}
static bool belongs_to_current_shard(const schema& s, range<partition_key> r) {
assert(r.start());
assert(r.end());
return belongs_to_current_shard(s, r.start()->value(), r.end()->value());
}
future<sstables::entry_descriptor> column_family::probe_file(sstring sstdir, sstring fname) {
using namespace sstables;
@@ -449,29 +432,19 @@ future<sstables::entry_descriptor> column_family::probe_file(sstring sstdir, sst
update_sstables_known_generation(comps.generation);
assert(_sstables->count(comps.generation) == 0);
auto fut = sstable::get_sstable_key_range(*_schema, _schema->ks_name(), _schema->cf_name(), sstdir, comps.generation, comps.version, comps.format);
return std::move(fut).then([this, sstdir = std::move(sstdir), comps] (range<partition_key> r) {
// Checks whether or not sstable belongs to current shard.
if (!belongs_to_current_shard(*_schema, std::move(r))) {
sstable::mark_sstable_for_deletion(_schema->ks_name(), _schema->cf_name(), sstdir, comps.generation, comps.version, comps.format);
return make_ready_future<>();
}
auto sst = std::make_unique<sstables::sstable>(_schema->ks_name(), _schema->cf_name(), sstdir, comps.generation, comps.version, comps.format);
auto fut = sst->load();
return std::move(fut).then([this, sst = std::move(sst)] () mutable {
add_sstable(std::move(*sst));
return make_ready_future<>();
});
}).then_wrapped([fname, comps] (future<> f) {
auto sst = std::make_unique<sstables::sstable>(_schema->ks_name(), _schema->cf_name(), sstdir, comps.generation, comps.version, comps.format);
auto fut = sst->load();
return std::move(fut).then([this, sst = std::move(sst)] () mutable {
add_sstable(std::move(*sst));
return make_ready_future<>();
}).then_wrapped([fname, comps = std::move(comps)] (future<> f) {
try {
f.get();
} catch (malformed_sstable_exception& e) {
dblog.error("malformed sstable {}: {}. Refusing to boot", fname, e.what());
throw;
} catch(...) {
dblog.error("Unrecognized error while processing {}: {}. Refusing to boot",
fname, std::current_exception());
dblog.error("Unrecognized error while processing {}: Refusing to boot", fname);
throw;
}
return make_ready_future<entry_descriptor>(std::move(comps));
@@ -489,6 +462,19 @@ void column_family::add_sstable(sstables::sstable&& sstable) {
}
void column_family::add_sstable(lw_shared_ptr<sstables::sstable> sstable) {
auto key_shard = [this] (const partition_key& pk) {
auto token = dht::global_partitioner().get_token(*_schema, pk);
return dht::shard_of(token);
};
auto s1 = key_shard(sstable->get_first_partition_key(*_schema));
auto s2 = key_shard(sstable->get_last_partition_key(*_schema));
auto me = engine().cpu_id();
auto included = (s1 <= me) && (me <= s2);
if (!included) {
dblog.info("sstable {} not relevant for this shard, ignoring", sstable->get_filename());
sstable->mark_for_deletion();
return;
}
auto generation = sstable->generation();
// allow in-progress reads to continue using old list
_sstables = make_lw_shared<sstable_list>(*_sstables);
@@ -560,10 +546,6 @@ column_family::try_flush_memtable_to_sstable(lw_shared_ptr<memtable> old) {
sstables::sstable::version_types::ka,
sstables::sstable::format_types::big);
auto memtable_size = old->occupancy().total_space();
_config.cf_stats->pending_memtables_flushes_count++;
_config.cf_stats->pending_memtables_flushes_bytes += memtable_size;
newtab->set_unshared();
dblog.debug("Flushing to {}", newtab->get_filename());
return newtab->write_components(*old).then([this, newtab, old] {
@@ -587,33 +569,23 @@ column_family::try_flush_memtable_to_sstable(lw_shared_ptr<memtable> old) {
return newtab->create_links(dir);
});
});
}).then_wrapped([this, old, newtab, memtable_size] (future<> ret) {
_config.cf_stats->pending_memtables_flushes_count--;
_config.cf_stats->pending_memtables_flushes_bytes -= memtable_size;
}).then([this, old, newtab] {
dblog.debug("Flushing done");
// We must add sstable before we call update_cache(), because
// memtable's data after moving to cache can be evicted at any time.
auto old_sstables = _sstables;
add_sstable(newtab);
old->mark_flushed(newtab);
return update_cache(*old, std::move(old_sstables));
}).then_wrapped([this, old] (future<> ret) {
try {
ret.get();
// We must add sstable before we call update_cache(), because
// memtable's data after moving to cache can be evicted at any time.
auto old_sstables = _sstables;
add_sstable(newtab);
old->mark_flushed(newtab);
_memtables->erase(boost::range::find(*_memtables, old));
dblog.debug("Memtable replaced");
trigger_compaction();
return update_cache(*old, std::move(old_sstables)).then_wrapped([this, old] (future<> f) {
try {
f.get();
} catch(...) {
dblog.error("failed to move memtable to cache: {}", std::current_exception());
}
_memtables->erase(boost::range::find(*_memtables, old));
dblog.debug("Memtable replaced");
return make_ready_future<stop_iteration>(stop_iteration::yes);
});
return make_ready_future<stop_iteration>(stop_iteration::yes);
} catch (...) {
dblog.error("failed to write sstable: {}", std::current_exception());
}
@@ -732,22 +704,21 @@ column_family::compact_sstables(sstables::compaction_descriptor descriptor) {
std::unordered_set<sstables::shared_sstable> s(
sstables_to_compact->begin(), sstables_to_compact->end());
for (const auto& oldtab : *current_sstables) {
// Checks if oldtab is a sstable not being compacted.
if (!s.count(oldtab.second)) {
update_stats_for_new_sstable(oldtab.second->data_size());
_sstables->emplace(oldtab.first, oldtab.second);
}
}
for (const auto& newtab : *new_tables) {
// FIXME: rename the new sstable(s). Verify a rename doesn't cause
// problems for the sstable object.
update_stats_for_new_sstable(newtab.second->data_size());
_sstables->emplace(newtab.first, newtab.second);
}
for (const auto& newtab : *new_tables) {
// FIXME: rename the new sstable(s). Verify a rename doesn't cause
// problems for the sstable object.
update_stats_for_new_sstable(newtab.second->data_size());
_sstables->emplace(newtab.first, newtab.second);
}
for (const auto& oldtab : *sstables_to_compact) {
oldtab->mark_for_deletion();
for (const auto& oldtab : *sstables_to_compact) {
oldtab->mark_for_deletion();
}
}
});
});
@@ -760,13 +731,7 @@ column_family::load_new_sstables(std::vector<sstables::entry_descriptor> new_tab
return sst->load().then([this, sst] {
return sst->mutate_sstable_level(0);
}).then([this, sst] {
auto first = sst->get_first_partition_key(*_schema);
auto last = sst->get_last_partition_key(*_schema);
if (belongs_to_current_shard(*_schema, first, last)) {
this->add_sstable(sst);
} else {
sst->mark_for_deletion();
}
this->add_sstable(sst);
return make_ready_future<>();
});
});
@@ -858,77 +823,58 @@ future<> column_family::populate(sstring sstdir) {
auto verifier = make_lw_shared<std::unordered_map<unsigned long, status>>();
auto descriptor = make_lw_shared<sstable_descriptor>();
return do_with(std::vector<future<>>(), [this, sstdir, verifier, descriptor] (std::vector<future<>>& futures) {
return lister::scan_dir(sstdir, { directory_entry_type::regular }, [this, sstdir, verifier, descriptor, &futures] (directory_entry de) {
// FIXME: The secondary indexes are in this level, but with a directory type (starting with ".")
auto f = probe_file(sstdir, de.name).then([verifier, descriptor] (auto entry) {
if (verifier->count(entry.generation)) {
if (verifier->at(entry.generation) == status::has_toc_file) {
if (entry.component == sstables::sstable::component_type::TOC) {
throw sstables::malformed_sstable_exception("Invalid State encountered. TOC file already processed");
} else if (entry.component == sstables::sstable::component_type::TemporaryTOC) {
throw sstables::malformed_sstable_exception("Invalid State encountered. Temporary TOC file found after TOC file was processed");
}
} else if (entry.component == sstables::sstable::component_type::TOC) {
verifier->at(entry.generation) = status::has_toc_file;
} else if (entry.component == sstables::sstable::component_type::TemporaryTOC) {
verifier->at(entry.generation) = status::has_temporary_toc_file;
}
} else {
return lister::scan_dir(sstdir, { directory_entry_type::regular }, [this, sstdir, verifier, descriptor] (directory_entry de) {
// FIXME: The secondary indexes are in this level, but with a directory type (starting with ".")
return probe_file(sstdir, de.name).then([verifier, descriptor] (auto entry) {
if (verifier->count(entry.generation)) {
if (verifier->at(entry.generation) == status::has_toc_file) {
if (entry.component == sstables::sstable::component_type::TOC) {
verifier->emplace(entry.generation, status::has_toc_file);
throw sstables::malformed_sstable_exception("Invalid State encountered. TOC file already processed");
} else if (entry.component == sstables::sstable::component_type::TemporaryTOC) {
verifier->emplace(entry.generation, status::has_temporary_toc_file);
} else {
verifier->emplace(entry.generation, status::has_some_file);
throw sstables::malformed_sstable_exception("Invalid State encountered. Temporary TOC file found after TOC file was processed");
}
} else if (entry.component == sstables::sstable::component_type::TOC) {
verifier->at(entry.generation) = status::has_toc_file;
} else if (entry.component == sstables::sstable::component_type::TemporaryTOC) {
verifier->at(entry.generation) = status::has_temporary_toc_file;
}
// Retrieve both version and format used for this column family.
if (!descriptor->version) {
descriptor->version = entry.version;
} else {
if (entry.component == sstables::sstable::component_type::TOC) {
verifier->emplace(entry.generation, status::has_toc_file);
} else if (entry.component == sstables::sstable::component_type::TemporaryTOC) {
verifier->emplace(entry.generation, status::has_temporary_toc_file);
} else {
verifier->emplace(entry.generation, status::has_some_file);
}
if (!descriptor->format) {
descriptor->format = entry.format;
}
// Retrieve both version and format used for this column family.
if (!descriptor->version) {
descriptor->version = entry.version;
}
if (!descriptor->format) {
descriptor->format = entry.format;
}
});
}).then([verifier, sstdir, descriptor, this] {
return parallel_for_each(*verifier, [sstdir = std::move(sstdir), descriptor, this] (auto v) {
if (v.second == status::has_temporary_toc_file) {
unsigned long gen = v.first;
assert(descriptor->version);
sstables::sstable::version_types version = descriptor->version.value();
assert(descriptor->format);
sstables::sstable::format_types format = descriptor->format.value();
if (engine().cpu_id() != 0) {
dblog.info("At directory: {}, partial SSTable with generation {} not relevant for this shard, ignoring", sstdir, v.first);
return make_ready_future<>();
}
});
// push future returned by probe_file into an array of futures,
// so that the supplied callback will not block scan_dir() from
// reading the next entry in the directory.
futures.push_back(std::move(f));
// shard 0 is responsible for removing a partial sstable.
return sstables::sstable::remove_sstable_with_temp_toc(_schema->ks_name(), _schema->cf_name(), sstdir, gen, version, format);
} else if (v.second != status::has_toc_file) {
throw sstables::malformed_sstable_exception(sprint("At directory: %s: no TOC found for SSTable with generation %d! Refusing to boot", sstdir, v.first));
}
return make_ready_future<>();
}).then([&futures] {
return when_all(futures.begin(), futures.end()).then([] (std::vector<future<>> ret) {
try {
for (auto& f : ret) {
f.get();
}
} catch(...) {
throw;
}
});
}).then([verifier, sstdir, descriptor, this] {
return parallel_for_each(*verifier, [sstdir = std::move(sstdir), descriptor, this] (auto v) {
if (v.second == status::has_temporary_toc_file) {
unsigned long gen = v.first;
assert(descriptor->version);
sstables::sstable::version_types version = descriptor->version.value();
assert(descriptor->format);
sstables::sstable::format_types format = descriptor->format.value();
if (engine().cpu_id() != 0) {
dblog.info("At directory: {}, partial SSTable with generation {} not relevant for this shard, ignoring", sstdir, v.first);
return make_ready_future<>();
}
// shard 0 is responsible for removing a partial sstable.
return sstables::sstable::remove_sstable_with_temp_toc(_schema->ks_name(), _schema->cf_name(), sstdir, gen, version, format);
} else if (v.second != status::has_toc_file) {
throw sstables::malformed_sstable_exception(sprint("At directory: %s: no TOC found for SSTable with generation %d! Refusing to boot", sstdir, v.first));
}
return make_ready_future<>();
});
});
});
}
@@ -964,20 +910,6 @@ database::setup_collectd() {
, scollectd::make_typed(scollectd::data_type::GAUGE, [this] {
return _dirty_memory_region_group.memory_used();
})));
_collectd.push_back(
scollectd::add_polled_metric(scollectd::type_instance_id("memtables"
, scollectd::per_cpu_plugin_instance
, "queue_length", "pending_flushes")
, scollectd::make_typed(scollectd::data_type::GAUGE, _cf_stats.pending_memtables_flushes_count)
));
_collectd.push_back(
scollectd::add_polled_metric(scollectd::type_instance_id("memtables"
, scollectd::per_cpu_plugin_instance
, "bytes", "pending_flushes")
, scollectd::make_typed(scollectd::data_type::GAUGE, _cf_stats.pending_memtables_flushes_bytes)
));
}
database::~database() {
@@ -1036,7 +968,7 @@ template <typename Func>
static future<>
do_parse_system_tables(distributed<service::storage_proxy>& proxy, const sstring& _cf_name, Func&& func) {
using namespace db::schema_tables;
static_assert(std::is_same<future<>, std::result_of_t<Func(schema_result_value_type&)>>::value,
static_assert(std::is_same<future<>, std::result_of_t<Func(schema_result::value_type&)>>::value,
"bad Func signature");
@@ -1071,11 +1003,11 @@ do_parse_system_tables(distributed<service::storage_proxy>& proxy, const sstring
future<> database::parse_system_tables(distributed<service::storage_proxy>& proxy) {
using namespace db::schema_tables;
return do_parse_system_tables(proxy, db::schema_tables::KEYSPACES, [this] (schema_result_value_type &v) {
return do_parse_system_tables(proxy, db::schema_tables::KEYSPACES, [this] (schema_result::value_type &v) {
auto ksm = create_keyspace_from_schema_partition(v);
return create_keyspace(ksm);
}).then([&proxy, this] {
return do_parse_system_tables(proxy, db::schema_tables::COLUMNFAMILIES, [this, &proxy] (schema_result_value_type &v) {
return do_parse_system_tables(proxy, db::schema_tables::COLUMNFAMILIES, [this, &proxy] (schema_result::value_type &v) {
return create_tables_from_tables_partition(proxy, v.second).then([this] (std::map<sstring, schema_ptr> tables) {
for (auto& t: tables) {
auto s = t.second;
@@ -1147,7 +1079,7 @@ void database::add_keyspace(sstring name, keyspace k) {
}
void database::update_keyspace(const sstring& name) {
throw std::runtime_error("update keyspace not implemented");
throw std::runtime_error("not implemented");
}
void database::drop_keyspace(const sstring& name) {
@@ -1312,7 +1244,6 @@ keyspace::make_column_family_config(const schema& s) const {
cfg.enable_cache = _config.enable_cache;
cfg.max_memtable_size = _config.max_memtable_size;
cfg.dirty_memory_region_group = _config.dirty_memory_region_group;
cfg.cf_stats = _config.cf_stats;
cfg.enable_incremental_backups = _config.enable_incremental_backups;
return cfg;
@@ -1485,7 +1416,7 @@ column_family::query(const query::read_command& cmd, const std::vector<query::pa
return do_until([&qs] { return !qs.limit || qs.range_empty; }, [this, &qs] {
return qs.reader().then([this, &qs](mutation_opt mo) {
if (mo) {
auto p_builder = qs.builder.add_partition(*mo->schema(), mo->key());
auto p_builder = qs.builder.add_partition(mo->key());
auto is_distinct = qs.cmd.slice.options.contains(query::partition_slice::option::distinct);
auto limit = !is_distinct ? qs.limit : 1;
mo->partition().query(p_builder, *_schema, qs.cmd.timestamp, limit);
@@ -1502,7 +1433,7 @@ column_family::query(const query::read_command& cmd, const std::vector<query::pa
}).finally([lc, this]() mutable {
_stats.reads.mark(lc);
if (lc.is_start()) {
_stats.estimated_read.add(lc.latency(), _stats.reads.count);
_stats.estimated_read.add(lc.latency_in_nano(), _stats.reads.count);
}
});
}
@@ -1516,51 +1447,28 @@ column_family::as_mutation_source() const {
future<lw_shared_ptr<query::result>>
database::query(const query::read_command& cmd, const std::vector<query::partition_range>& ranges) {
column_family& cf = find_column_family(cmd.cf_id);
return cf.query(cmd, ranges);
static auto make_empty = [] {
return make_ready_future<lw_shared_ptr<query::result>>(make_lw_shared(query::result()));
};
try {
column_family& cf = find_column_family(cmd.cf_id);
return cf.query(cmd, ranges);
} catch (const no_such_column_family&) {
// FIXME: load from sstables
return make_empty();
}
}
future<reconcilable_result>
database::query_mutations(const query::read_command& cmd, const query::partition_range& range) {
column_family& cf = find_column_family(cmd.cf_id);
return mutation_query(cf.as_mutation_source(), range, cmd.slice, cmd.row_limit, cmd.timestamp);
}
std::unordered_set<sstring> database::get_initial_tokens() {
std::unordered_set<sstring> tokens;
sstring tokens_string = get_config().initial_token();
try {
boost::split(tokens, tokens_string, boost::is_any_of(sstring(",")));
} catch (...) {
throw std::runtime_error(sprint("Unable to parse initial_token=%s", tokens_string));
column_family& cf = find_column_family(cmd.cf_id);
return mutation_query(cf.as_mutation_source(), range, cmd.slice, cmd.row_limit, cmd.timestamp);
} catch (const no_such_column_family&) {
// FIXME: load from sstables
return make_ready_future<reconcilable_result>(reconcilable_result());
}
tokens.erase("");
return tokens;
}
std::experimental::optional<gms::inet_address> database::get_replace_address() {
auto& cfg = get_config();
sstring replace_address = cfg.replace_address();
sstring replace_address_first_boot = cfg.replace_address_first_boot();
try {
if (!replace_address.empty()) {
return gms::inet_address(replace_address);
} else if (!replace_address_first_boot.empty()) {
return gms::inet_address(replace_address_first_boot);
}
return std::experimental::nullopt;
} catch (...) {
return std::experimental::nullopt;
}
}
bool database::is_replacing() {
sstring replace_address_first_boot = get_config().replace_address_first_boot();
if (!replace_address_first_boot.empty() && db::system_keyspace::bootstrap_complete()) {
dblog.info("Replace address on first boot requested; this node is already bootstrapped");
return false;
}
return bool(get_replace_address());
}
std::ostream& operator<<(std::ostream& out, const atomic_cell_or_collection& c) {
@@ -1592,7 +1500,8 @@ future<> database::apply_in_memory(const frozen_mutation& m, const db::replay_po
auto& cf = find_column_family(m.column_family_id());
cf.apply(m, rp);
} catch (no_such_column_family&) {
dblog.error("Attempting to mutate non-existent table {}", m.column_family_id());
// TODO: log a warning
// FIXME: load keyspace meta-data from storage
}
return make_ready_future<>();
}
@@ -1680,7 +1589,6 @@ database::make_keyspace_config(const keyspace_metadata& ksm) {
cfg.max_memtable_size = std::numeric_limits<size_t>::max();
}
cfg.dirty_memory_region_group = &_dirty_memory_region_group;
cfg.cf_stats = &_cf_stats;
cfg.enable_incremental_backups = _cfg->incremental_backups();
return cfg;
}
@@ -1949,7 +1857,7 @@ future<> column_family::snapshot(sstring name) {
}
future<bool> column_family::snapshot_exists(sstring tag) {
sstring jsondir = _config.datadir + "/snapshots/" + tag;
sstring jsondir = _config.datadir + "/snapshots/";
return engine().open_directory(std::move(jsondir)).then_wrapped([] (future<file> f) {
try {
f.get0();
@@ -2025,11 +1933,7 @@ future<> column_family::clear_snapshot(sstring tag) {
future<std::unordered_map<sstring, column_family::snapshot_details>> column_family::get_snapshot_details() {
std::unordered_map<sstring, snapshot_details> all_snapshots;
return do_with(std::move(all_snapshots), [this] (auto& all_snapshots) {
return engine().file_exists(_config.datadir + "/snapshots").then([this, &all_snapshots](bool file_exists) {
if (!file_exists) {
return make_ready_future<>();
}
return lister::scan_dir(_config.datadir + "/snapshots", { directory_entry_type::directory }, [this, &all_snapshots] (directory_entry de) {
return lister::scan_dir(_config.datadir + "/snapshots", { directory_entry_type::directory }, [this, &all_snapshots] (directory_entry de) {
auto snapshot_name = de.name;
auto snapshot = _config.datadir + "/snapshots/" + snapshot_name;
all_snapshots.emplace(snapshot_name, snapshot_details());
@@ -2064,7 +1968,6 @@ future<std::unordered_map<sstring, column_family::snapshot_details>> column_fami
});
});
});
});
}).then([&all_snapshots] {
return std::move(all_snapshots);
});


@@ -102,16 +102,6 @@ class replay_position_reordered_exception : public std::exception {};
using memtable_list = std::vector<lw_shared_ptr<memtable>>;
using sstable_list = sstables::sstable_list;
// The CF has a "stats" structure. But we don't want all fields here,
// since some of them are fairly complex for exporting to collectd. Also,
// that structure matches what we export via the API, so better leave it
// untouched. And we need more fields. We will summarize it in here what
// we need.
struct cf_stats {
int64_t pending_memtables_flushes_count = 0;
int64_t pending_memtables_flushes_bytes = 0;
};
class column_family {
public:
struct config {
@@ -123,7 +113,6 @@ public:
bool enable_incremental_backups = false;
size_t max_memtable_size = 5'000'000;
logalloc::region_group* dirty_memory_region_group = nullptr;
::cf_stats* cf_stats = nullptr;
};
struct no_commitlog {};
struct stats {
@@ -194,7 +183,8 @@ private:
mutation_source sstables_as_mutation_source();
key_source sstables_as_key_source() const;
partition_presence_checker make_partition_presence_checker(lw_shared_ptr<sstable_list> old_sstables);
std::chrono::steady_clock::time_point _sstable_writes_disabled_at;
// We will use highres because hopefully it won't take more than a few usecs
std::chrono::high_resolution_clock::time_point _sstable_writes_disabled_at;
public:
// Creates a mutation reader which covers all data sources for this column family.
// Caller needs to ensure that column_family remains live (FIXME: relax this).
@@ -215,10 +205,6 @@ public:
return _cache;
}
row_cache& get_row_cache() {
return _cache;
}
logalloc::occupancy_stats occupancy() const;
public:
column_family(schema_ptr schema, config cfg, db::commitlog& cl, compaction_manager&);
@@ -250,7 +236,7 @@ public:
// to call this separately in all shards first, to guarantee that none of them are writing
// new data before you can safely assume that the whole node is disabled.
future<int64_t> disable_sstable_write() {
_sstable_writes_disabled_at = std::chrono::steady_clock::now();
_sstable_writes_disabled_at = std::chrono::high_resolution_clock::now();
return _sstables_lock.write_lock().then([this] {
return make_ready_future<int64_t>((*_sstables->end()).first);
});
@@ -258,10 +244,10 @@ public:
// SSTable writes are now allowed again, and generation is updated to new_generation
// returns the amount of microseconds elapsed since we disabled writes.
std::chrono::steady_clock::duration enable_sstable_write(int64_t new_generation) {
std::chrono::high_resolution_clock::duration enable_sstable_write(int64_t new_generation) {
update_sstables_known_generation(new_generation);
_sstables_lock.write_unlock();
return std::chrono::steady_clock::now() - _sstable_writes_disabled_at;
return std::chrono::high_resolution_clock::now() - _sstable_writes_disabled_at;
}
// Make sure the generation numbers are sequential, starting from "start".
@@ -324,10 +310,6 @@ public:
return _stats;
}
compaction_manager& get_compaction_manager() const {
return _compaction_manager;
}
template<typename Func, typename Result = futurize_t<std::result_of_t<Func()>>>
Result run_with_compaction_disabled(Func && func) {
++_compaction_disabled;
@@ -463,7 +445,6 @@ public:
bool enable_incremental_backups = false;
size_t max_memtable_size = 5'000'000;
logalloc::region_group* dirty_memory_region_group = nullptr;
::cf_stats* cf_stats = nullptr;
};
private:
std::unique_ptr<locator::abstract_replication_strategy> _replication_strategy;
@@ -522,7 +503,6 @@ public:
// use shard_of() for data
class database {
::cf_stats _cf_stats;
logalloc::region_group _dirty_memory_region_group;
std::unordered_map<sstring, keyspace> _keyspaces;
std::unordered_map<utils::UUID, lw_shared_ptr<column_family>> _column_families;
@@ -569,9 +549,6 @@ public:
return _commitlog.get();
}
compaction_manager& get_compaction_manager() {
return _compaction_manager;
}
const compaction_manager& get_compaction_manager() const {
return _compaction_manager;
}
@@ -656,10 +633,6 @@ public:
const logalloc::region_group& dirty_memory_region_group() const {
return _dirty_memory_region_group;
}
std::unordered_set<sstring> get_initial_tokens();
std::experimental::optional<gms::inet_address> get_replace_address();
bool is_replacing();
};
// FIXME: stub
@@ -674,7 +647,7 @@ column_family::apply(const mutation& m, const db::replay_position& rp) {
seal_on_overflow();
_stats.writes.mark(lc);
if (lc.is_start()) {
_stats.estimated_write.add(lc.latency(), _stats.writes.count);
_stats.estimated_write.add(lc.latency_in_nano(), _stats.writes.count);
}
}
@@ -708,7 +681,7 @@ column_family::apply(const frozen_mutation& m, const db::replay_position& rp) {
seal_on_overflow();
_stats.writes.mark(lc);
if (lc.is_start()) {
_stats.estimated_write.add(lc.latency(), _stats.writes.count);
_stats.estimated_write.add(lc.latency_in_nano(), _stats.writes.count);
}
}


@@ -35,8 +35,8 @@ class column_definition;
// keys.hh
class exploded_clustering_prefix;
class partition_key;
class clustering_key;
class clustering_key_prefix;
using clustering_key = clustering_key_prefix;
// memtable.hh
class memtable;


@@ -56,7 +56,6 @@
#include "unimplemented.hh"
#include "db/config.hh"
#include "gms/failure_detector.hh"
#include "service/storage_service.hh"
static logging::logger logger("batchlog_manager");
@@ -88,8 +87,10 @@ future<> db::batchlog_manager::start() {
);
});
});
auto ring_delay = service::get_local_storage_service().get_ring_delay();
_timer.arm(lowres_clock::now() + ring_delay);
_timer.arm(
lowres_clock::now()
+ std::chrono::milliseconds(
service::storage_service::RING_DELAY));
}
return make_ready_future<>();
}
@@ -114,7 +115,7 @@ mutation db::batchlog_manager::get_batch_log_mutation_for(const std::vector<muta
mutation db::batchlog_manager::get_batch_log_mutation_for(const std::vector<mutation>& mutations, const utils::UUID& id, int32_t version, db_clock::time_point now) {
auto schema = _qp.db().local().find_schema(system_keyspace::NAME, system_keyspace::BATCHLOG);
auto key = partition_key::from_singular(*schema, id);
auto timestamp = api::new_timestamp();
auto timestamp = db_clock::now_in_usecs();
auto data = [this, &mutations] {
std::vector<frozen_mutation> fm(mutations.begin(), mutations.end());
const auto size = std::accumulate(fm.begin(), fm.end(), size_t(0), [](size_t s, auto& m) {
@@ -131,7 +132,7 @@ mutation db::batchlog_manager::get_batch_log_mutation_for(const std::vector<muta
mutation m(key, schema);
m.set_cell({}, to_bytes("version"), version, timestamp);
m.set_cell({}, to_bytes("written_at"), now, timestamp);
m.set_cell({}, to_bytes("data"), data_value(std::move(data)), timestamp);
m.set_cell({}, to_bytes("data"), std::move(data), timestamp);
return m;
}


@@ -55,7 +55,6 @@
#include <core/rwlock.hh>
#include <core/gate.hh>
#include <core/fstream.hh>
#include <seastar/core/memory.hh>
#include <net/byteorder.hh>
#include "commitlog.hh"
@@ -90,7 +89,7 @@ public:
db::commitlog::config::config(const db::config& cfg)
: commit_log_location(cfg.commitlog_directory())
, commitlog_total_space_in_mb(cfg.commitlog_total_space_in_mb() >= 0 ? cfg.commitlog_total_space_in_mb() : memory::stats().total_memory() >> 20)
, commitlog_total_space_in_mb(cfg.commitlog_total_space_in_mb())
, commitlog_segment_size_in_mb(cfg.commitlog_segment_size_in_mb())
, commitlog_sync_period_in_ms(cfg.commitlog_sync_batch_window_in_ms())
, mode(cfg.commitlog_sync() == "batch" ? sync_mode::BATCH : sync_mode::PERIODIC)
@@ -281,43 +280,6 @@ private:
* A single commit log file on disk. Manages creation of the file and writing mutations to disk,
* as well as tracking the last mutation position of any "dirty" CFs covered by the segment file. Segment
* files are initially allocated to a fixed size and can grow to accommodate a larger value if necessary.
*
* The IO flow is somewhat convoluted and goes something like this:
*
* Mutation path:
* - Adding data to the segment usually writes into the internal buffer
* - On EOB or overflow we issue a write to disk ("cycle").
* - A cycle call will acquire the segment read lock and send the
* buffer to the corresponding position in the file
* - If we are periodic and crossed a timing threshold, or running "batch" mode
* we might be forced to issue a flush ("sync") after adding data
* - A sync call acquires the write lock, thus locking out writes
* and waiting for pending writes to finish. It then checks the
* high data mark, and issues the actual file flush.
* Note that the write lock is released prior to issuing the
* actual file flush, thus we are allowed to write data after
* a flush point concurrently with a pending flush.
*
* Sync timer:
* - In periodic mode, we try to primarily issue sync calls in
* a timer task issued every N seconds. The timer does the same
* operation as the above described sync, and resets the timeout
* so that mutation path will not trigger syncs and delay.
*
* Note that we do not care in which order segment chunks finish writing
* to disk, other than that everything below a flush point must finish before flushing.
*
* We currently do not wait for flushes to finish before issuing the next
* cycle call ("after" flush point in the file). This might not be optimal.
*
* To close and finish a segment, we first close the gate object that guards
* writing data to it, then flush it fully (including waiting for futures created
* by the timer to run their course), and finally wait for it to
* become "clean", i.e. get notified that all mutations it holds have been
* persisted to sstables elsewhere. Once this is done, we can delete the
* segment. If a segment (object) is deleted without being fully clean, we
* do not remove the file on disk.
*
*/
class db::commitlog::segment: public enable_lw_shared_from_this<segment> {
@@ -353,8 +315,7 @@ public:
// The commit log entry overhead in bytes (int: length + int: head checksum + int: tail checksum)
static constexpr size_t entry_overhead_size = 3 * sizeof(uint32_t);
static constexpr size_t segment_overhead_size = 2 * sizeof(uint32_t);
static constexpr size_t descriptor_header_size = 5 * sizeof(uint32_t);
static constexpr uint32_t segment_magic = ('S'<<24) |('C'<< 16) | ('L' << 8) | 'C';
static constexpr size_t descriptor_header_size = 4 * sizeof(uint32_t);
// The commit log (chained) sync marker/header size in bytes (int: length + int: checksum [segmentId, position])
static constexpr size_t sync_marker_size = 2 * sizeof(uint32_t);
@@ -407,7 +368,6 @@ public:
void reset_sync_time() {
_sync_time = clock_type::now();
}
// See class comment for info
future<sseg_ptr> sync() {
// Note: this is not a marker for when sync was finished.
// It is when it was initiated
@@ -424,7 +384,6 @@ public:
future<> shutdown() {
return _gate.close();
}
// See class comment for info
future<sseg_ptr> flush(uint64_t pos = 0) {
auto me = shared_from_this();
assert(!me.owned());
@@ -470,7 +429,6 @@ public:
/**
* Send any buffer contents to disk and get a new tmp buffer
*/
// See class comment for info
future<sseg_ptr> cycle(size_t s = 0) {
auto size = clear_buffer_slack();
auto buf = std::move(_buffer);
@@ -525,7 +483,6 @@ public:
if (off == 0) {
// first block. write file header.
out.write(segment_magic);
out.write(_desc.ver);
out.write(_desc.id);
crc32_nbo crc;
@@ -1137,7 +1094,7 @@ db::commitlog::commitlog(config cfg)
: _segment_manager(new segment_manager(std::move(cfg))) {
}
db::commitlog::commitlog(commitlog&& v) noexcept
db::commitlog::commitlog(commitlog&& v)
: _segment_manager(std::move(v._segment_manager)) {
}
@@ -1213,11 +1170,10 @@ const db::commitlog::config& db::commitlog::active_config() const {
return _segment_manager->cfg;
}
future<std::unique_ptr<subscription<temporary_buffer<char>, db::replay_position>>>
future<subscription<temporary_buffer<char>, db::replay_position>>
db::commitlog::read_log_file(const sstring& filename, commit_load_reader_func next, position_type off) {
return engine().open_file_dma(filename, open_flags::ro).then([next = std::move(next), off](file f) {
return std::make_unique<subscription<temporary_buffer<char>, replay_position>>(
read_log_file(std::move(f), std::move(next), off));
return read_log_file(std::move(f), std::move(next), off);
});
}
@@ -1233,8 +1189,6 @@ db::commitlog::read_log_file(file f, commit_load_reader_func next, position_type
size_t next = 0;
size_t start_off = 0;
size_t skip_to = 0;
size_t file_size = 0;
size_t corrupt_size = 0;
bool eof = false;
bool header = true;
@@ -1278,20 +1232,16 @@ db::commitlog::read_log_file(file f, commit_load_reader_func next, position_type
}
// Will throw if we got eof
data_input in(buf);
auto magic = in.read<uint32_t>();
auto ver = in.read<uint32_t>();
auto id = in.read<uint64_t>();
auto checksum = in.read<uint32_t>();
if (magic == 0 && ver == 0 && id == 0 && checksum == 0) {
if (ver == 0 && id == 0 && checksum == 0) {
// let's assume this was an empty (pre-allocated)
// file. just skip it.
return stop();
}
if (magic != segment::segment_magic) {
throw std::invalid_argument("Not a scylla format commitlog file");
}
crc32_nbo crc;
crc.process(ver);
crc.process<int32_t>(id & 0xffffffff);
@@ -1332,11 +1282,7 @@ db::commitlog::read_log_file(file f, commit_load_reader_func next, position_type
auto cs = crc.checksum();
if (cs != checksum) {
// if a chunk header checksum is broken, we shall just assume that all
// remaining is as well. We cannot trust the "next" pointer, so...
logger.debug("Checksum error in segment chunk at {}.", pos);
corrupt_size += (file_size - pos);
return stop();
throw std::runtime_error("Checksum error in chunk header");
}
this->next = next;
@@ -1362,24 +1308,21 @@ db::commitlog::read_log_file(file f, commit_load_reader_func next, position_type
auto size = in.read<uint32_t>();
auto checksum = in.read<uint32_t>();
crc32_nbo crc;
crc.process(size);
if (size < 3 * sizeof(uint32_t) || checksum != crc.checksum()) {
if (size == 0) {
// special scylla case: zero padding due to dma blocks
auto slack = next - pos;
if (size != 0) {
logger.debug("Segment entry at {} has broken header. Skipping to next chunk ({} bytes)", rp, slack);
corrupt_size += slack;
}
// size == 0 -> special scylla case: zero padding due to dma blocks
return skip(slack);
}
if (size < 3 * sizeof(uint32_t)) {
throw std::runtime_error("Invalid entry size");
}
if (start_off > pos) {
return skip(size - entry_header_size);
}
return fin.read_exactly(size - entry_header_size).then([this, size, crc = std::move(crc), rp](temporary_buffer<char> buf) mutable {
return fin.read_exactly(size - entry_header_size).then([this, size, checksum, rp](temporary_buffer<char> buf) {
advance(buf);
data_input in(buf);
@@ -1388,15 +1331,12 @@ db::commitlog::read_log_file(file f, commit_load_reader_func next, position_type
in.skip(data_size);
auto checksum = in.read<uint32_t>();
crc32_nbo crc;
crc.process(size);
crc.process_bytes(buf.get(), data_size);
if (crc.checksum() != checksum) {
// If we're getting a checksum error here, most likely the rest of
// the file will be corrupt as well. But it does not hurt to retry.
// Just go to the next entry (since "size" in header seemed ok).
logger.debug("Segment entry at {} checksum error. Skipping {} bytes", rp, size);
corrupt_size += size;
return make_ready_future<>();
throw std::runtime_error("Checksum error in data entry");
}
return s.produce(buf.share(0, data_size), rp);
@@ -1404,18 +1344,10 @@ db::commitlog::read_log_file(file f, commit_load_reader_func next, position_type
});
}
future<> read_file() {
return f.size().then([this](uint64_t size) {
file_size = size;
}).then([this] {
return read_header().then(
[this] {
return do_until(std::bind(&work::end_of_file, this), std::bind(&work::read_chunk, this));
}).then([this] {
if (corrupt_size > 0) {
throw segment_data_corruption_error("Data corruption", corrupt_size);
}
});
});
return read_header().then(
[this] {
return do_until(std::bind(&work::end_of_file, this), std::bind(&work::read_chunk, this));
});
}
};
@@ -1443,10 +1375,6 @@ uint64_t db::commitlog::get_completed_tasks() const {
return _segment_manager->totals.allocation_count;
}
uint64_t db::commitlog::get_flush_count() const {
return _segment_manager->totals.flush_count;
}
uint64_t db::commitlog::get_pending_tasks() const {
return _segment_manager->totals.pending_operations;
}


@@ -139,7 +139,7 @@ public:
const uint32_t ver;
};
commitlog(commitlog&&) noexcept;
commitlog(commitlog&&);
~commitlog();
/**
@@ -231,7 +231,6 @@ public:
uint64_t get_total_size() const;
uint64_t get_completed_tasks() const;
uint64_t get_flush_count() const;
uint64_t get_pending_tasks() const;
uint64_t get_num_segments_created() const;
uint64_t get_num_segments_destroyed() const;
@@ -266,21 +265,8 @@ public:
typedef std::function<future<>(temporary_buffer<char>, replay_position)> commit_load_reader_func;
class segment_data_corruption_error: public std::runtime_error {
public:
segment_data_corruption_error(std::string msg, uint64_t s)
: std::runtime_error(msg), _bytes(s) {
}
uint64_t bytes() const {
return _bytes;
}
private:
uint64_t _bytes;
};
static subscription<temporary_buffer<char>, replay_position> read_log_file(file, commit_load_reader_func, position_type = 0);
static future<std::unique_ptr<subscription<temporary_buffer<char>, replay_position>>> read_log_file(
const sstring&, commit_load_reader_func, position_type = 0);
static future<subscription<temporary_buffer<char>, replay_position>> read_log_file(const sstring&, commit_load_reader_func, position_type = 0);
private:
commitlog(config);
};


@@ -69,7 +69,6 @@ public:
uint64_t invalid_mutations = 0;
uint64_t skipped_mutations = 0;
uint64_t applied_mutations = 0;
uint64_t corrupt_bytes = 0;
};
future<> process(stats*, temporary_buffer<char> buf, replay_position rp);
@@ -167,16 +166,9 @@ db::commitlog_replayer::impl::recover(sstring file) {
return db::commitlog::read_log_file(file,
std::bind(&impl::process, this, s.get(), std::placeholders::_1,
std::placeholders::_2), p).then([](auto s) {
auto f = s->done();
auto f = s.done();
return f.finally([s = std::move(s)] {});
}).then_wrapped([s](future<> f) {
try {
f.get();
} catch (commitlog::segment_data_corruption_error& e) {
s->corrupt_bytes += e.bytes();
} catch (...) {
throw;
}
}).then([s] {
return make_ready_future<stats>(*s);
});
}
@@ -241,7 +233,7 @@ db::commitlog_replayer::commitlog_replayer(seastar::sharded<cql3::query_processo
: _impl(std::make_unique<impl>(qp))
{}
db::commitlog_replayer::commitlog_replayer(commitlog_replayer&& r) noexcept
db::commitlog_replayer::commitlog_replayer(commitlog_replayer&& r)
: _impl(std::move(r._impl))
{}
@@ -258,32 +250,24 @@ future<db::commitlog_replayer> db::commitlog_replayer::create_replayer(seastar::
}
future<> db::commitlog_replayer::recover(std::vector<sstring> files) {
logger.info("Replaying {}", files);
return parallel_for_each(files, [this](auto f) {
return this->recover(f);
return this->recover(f).handle_exception([f](auto ep) {
logger.error("Error recovering {}: {}", f, ep);
std::rethrow_exception(ep);
});
});
}
future<> db::commitlog_replayer::recover(sstring f) {
return _impl->recover(f).then([f](impl::stats stats) {
if (stats.corrupt_bytes != 0) {
logger.warn("Corrupted file: {}. {} bytes skipped.", f, stats.corrupt_bytes);
}
future<> db::commitlog_replayer::recover(sstring file) {
return _impl->recover(file).then([file](impl::stats stats) {
logger.info("Log replay of {} complete, {} replayed mutations ({} invalid, {} skipped)"
, f
, file
, stats.applied_mutations
, stats.invalid_mutations
, stats.skipped_mutations
);
}).handle_exception([f](auto ep) {
logger.error("Error recovering {}: {}", f, ep);
try {
std::rethrow_exception(ep);
} catch (std::invalid_argument&) {
logger.error("Scylla cannot process {}. Make sure to fully flush all Cassandra commit log files to sstable before migrating.", f);
throw;
} catch (...) {
throw;
}
});
});
}


@@ -57,7 +57,7 @@ class commitlog;
class commitlog_replayer {
public:
commitlog_replayer(commitlog_replayer&&) noexcept;
commitlog_replayer(commitlog_replayer&&);
~commitlog_replayer();
static future<commitlog_replayer> create_replayer(seastar::sharded<cql3::query_processor>&);


@@ -31,7 +31,6 @@
#include "core/fstream.hh"
#include "core/do_with.hh"
#include "log.hh"
#include <boost/any.hpp>
static logging::logger logger("config");
@@ -117,9 +116,8 @@ template<typename K, typename V>
struct convert<std::unordered_map<K, V>> {
static Node encode(const std::unordered_map<K, V>& rhs) {
Node node(NodeType::Map);
for (auto& p : rhs) {
node.force_insert(p.first, p.second);
}
for(typename std::map<K, V>::const_iterator it=rhs.begin();it!=rhs.end();++it)
node.force_insert(it->first, it->second);
return node;
}
static bool decode(const Node& node, std::unordered_map<K, V>& rhs) {
@@ -414,21 +412,3 @@ future<> db::config::read_from_file(const sstring& filename) {
return read_from_file(std::move(f));
});
}
boost::filesystem::path db::config::get_conf_dir() {
using namespace boost::filesystem;
path confdir;
auto* cd = std::getenv("SCYLLA_CONF");
if (cd != nullptr) {
confdir = path(cd);
} else {
auto* p = std::getenv("SCYLLA_HOME");
if (p != nullptr) {
confdir = path(p);
}
confdir /= "conf";
}
return confdir;
}


@@ -121,7 +121,23 @@ public:
* @return path of the directory where configuration files are located
* according the environment variables definitions.
*/
static boost::filesystem::path get_conf_dir();
static boost::filesystem::path get_conf_dir() {
using namespace boost::filesystem;
path confdir;
auto* cd = std::getenv("SCYLLA_CONF");
if (cd != nullptr) {
confdir = path(cd);
} else {
auto* p = std::getenv("SCYLLA_HOME");
if (p != nullptr) {
confdir = path(p);
}
confdir /= "conf";
}
return confdir;
}
typedef std::unordered_map<sstring, sstring> string_map;
typedef std::vector<sstring> string_list;
@@ -274,7 +290,7 @@ public:
"Related information: Configuring compaction" \
) \
/* Common fault detection setting */ \
val(phi_convict_threshold, uint32_t, 8, Used, \
val(phi_convict_threshold, uint32_t, 8, Unused, \
"Adjusts the sensitivity of the failure detector on an exponential scale. Generally this setting never needs adjusting.\n" \
"Related information: Failure detection and recovery" \
) \
@@ -300,7 +316,7 @@ public:
val(commitlog_sync_batch_window_in_ms, uint32_t, 10000, Used, \
"Controls how long the system waits for other writes before performing a sync in \"batch\" mode." \
) \
val(commitlog_total_space_in_mb, int64_t, -1, Used, \
val(commitlog_total_space_in_mb, uint32_t, 8192, Used, \
"Total space used for commitlogs. If the used space goes above this value, Cassandra rounds up to the next nearest segment multiple and flushes memtables to disk for the oldest commitlog segments, removing those log segments. This reduces the amount of data to replay on startup, and prevents infrequently-updated tables from indefinitely keeping commitlog segments. A small total commitlog space tends to cause more flush activity on less-active tables.\n" \
"Related information: Configuring memtable throughput" \
) \
@@ -386,11 +402,11 @@ public:
val(batch_size_warn_threshold_in_kb, uint32_t, 5, Unused, \
"Log WARN on any batch size exceeding this value in kilobytes. Caution should be taken on increasing the size of this threshold as it can lead to node instability." \
) \
val(broadcast_address, sstring, /* listen_address */, Used, \
val(broadcast_address, sstring, /* listen_address */, Unused, \
"The IP address a node tells other nodes in the cluster to contact it by. It allows public and private address to be different. For example, use the broadcast_address parameter in topologies where not all nodes have access to other nodes by their private IP addresses.\n" \
"If your Cassandra cluster is deployed across multiple Amazon EC2 regions and you use the EC2MultiRegionSnitch , set the broadcast_address to public IP address of the node and the listen_address to the private IP." \
) \
val(initial_token, sstring, /* N/A */, Used, \
val(initial_token, sstring, /* N/A */, Unused, \
"Used in the single-node-per-token architecture, where a node owns exactly one contiguous range in the ring space. Setting this property overrides num_tokens.\n" \
"If you are not using vnodes, or have num_tokens set to 1 or unspecified (#num_tokens), you should always specify this parameter when setting up a production cluster for the first time and when adding capacity. For more information, see this parameter in the Cassandra 1.1 Node and Cluster Configuration documentation.\n" \
"This parameter can be used with num_tokens (vnodes ) in special cases such as Restoring from a snapshot." \
@@ -414,7 +430,7 @@ public:
, "org.apache.cassandra.dht.ByteOrderedPartitioner" \
, "org.apache.cassandra.dht.OrderPreservingPartitioner" \
) \
val(storage_port, uint16_t, 7000, Used, \
val(storage_port, uint16_t, 7000, Unused, \
"The port for inter-node communication." \
) \
/* Advanced automatic backup setting */ \
@@ -544,7 +560,7 @@ public:
) \
/* RPC (remote procedure call) settings */ \
/* Settings for configuring and tuning client connections. */ \
val(broadcast_rpc_address, sstring, /* unset */, Used, \
val(broadcast_rpc_address, sstring, /* unset */, Unused, \
"RPC address to broadcast to drivers and other Cassandra nodes. This cannot be set to 0.0.0.0. If blank, it is set to the value of the rpc_address or rpc_interface. If rpc_address or rpc_interface is set to 0.0.0.0, this property must be set.\n" \
) \
val(rpc_port, uint16_t, 9160, Used, \
@@ -666,7 +682,7 @@ public:
val(permissions_update_interval_in_ms, uint32_t, 2000, Unused, \
"Refresh interval for permissions cache (if enabled). After this interval, cache entries become eligible for refresh. On next access, an async reload is scheduled and the old value is returned until it completes. If permissions_validity_in_ms is non-zero, then this property must also be non-zero." \
) \
val(server_encryption_options, string_map, /*none*/, Used, \
val(server_encryption_options, string_map, /*none*/, Unused, \
"Enable or disable inter-node encryption. You must also generate keys and provide the appropriate key and trust store locations and passwords. No custom encryption options are currently enabled. The available options are:\n" \
"\n" \
"internode_encryption : (Default: none ) Enable or disable encryption of inter-node communication using the TLS_RSA_WITH_AES_128_CBC_SHA cipher suite for authentication, key exchange, and encryption of data transfers. The available inter-node options are:\n" \
@@ -674,9 +690,20 @@ public:
"\tnone : No encryption.\n" \
"\tdc : Encrypt the traffic between the data centers (server only).\n" \
"\track : Encrypt the traffic between the racks(server only).\n" \
"certificate : (Default: conf/scylla.crt) The location of a PEM-encoded x509 certificate used to identify and encrypt the internode communication.\n" \
"keyfile : (Default: conf/scylla.key) PEM Key file associated with certificate.\n" \
"truststore : (Default: <system truststore> ) Location of the truststore containing the trusted certificate for authenticating remote servers.\n" \
"\tkeystore : (Default: conf/.keystore ) The location of a Java keystore (JKS) suitable for use with Java Secure Socket Extension (JSSE), which is the Java version of the Secure Sockets Layer (SSL), and Transport Layer Security (TLS) protocols. The keystore contains the private key used to encrypt outgoing messages.\n" \
"\tkeystore_password : (Default: cassandra ) Password for the keystore.\n" \
"\ttruststore : (Default: conf/.truststore ) Location of the truststore containing the trusted certificate for authenticating remote servers.\n" \
"\ttruststore_password : (Default: cassandra ) Password for the truststore.\n" \
"\n" \
"The passwords used in these options must match the passwords used when generating the keystore and truststore. For instructions on generating these files, see Creating a Keystore to Use with JSSE.\n" \
"\n" \
"The advanced settings are:\n" \
"\n" \
"\tprotocol : (Default: TLS )\n" \
"\talgorithm : (Default: SunX509 )\n" \
"\tstore_type : (Default: JKS )\n" \
"\tcipher_suites : (Default: TLS_RSA_WITH_AES_128_CBC_SHA , TLS_RSA_WITH_AES_256_CBC_SHA )\n" \
"\trequire_client_auth : (Default: false ) Enables or disables certificate authentication.\n" \
"Related information: Node-to-node encryption" \
) \
val(client_encryption_options, string_map, /*none*/, Unused, \
@@ -716,16 +743,6 @@ public:
val(api_ui_dir, sstring, "swagger-ui/dist/", Used, "The directory location of the API GUI") \
val(api_doc_dir, sstring, "api/api-doc/", Used, "The API definition file directory") \
val(load_balance, sstring, "none", Used, "CQL request load balancing: 'none' or 'round-robin'") \
val(consistent_rangemovement, bool, true, Used, "When set to true, range movements will be consistent. It means: 1) it will refuse to bootstrap a new node if other bootstrapping/leaving/moving nodes are detected. 2) data will be streamed to a new node only from the node which is no longer responsible for the token range. Same as -Dcassandra.consistent.rangemovement in cassandra") \
val(join_ring, bool, true, Used, "When set to true, a node will join the token ring. When set to false, a node will not join the token ring. Users can use nodetool join to initiate ring joining later. Same as -Dcassandra.join_ring in cassandra.") \
val(load_ring_state, bool, true, Used, "When set to true, load tokens and host_ids previously saved. Same as -Dcassandra.load_ring_state in cassandra.") \
val(replace_node, sstring, "", Used, "The UUID of the node to replace. Same as -Dcassandra.replace_node in cassandra.") \
val(replace_token, sstring, "", Used, "The tokens of the node to replace. Same as -Dcassandra.replace_token in cassandra.") \
val(replace_address, sstring, "", Used, "The listen_address or broadcast_address of the dead node to replace. Same as -Dcassandra.replace_address.") \
val(replace_address_first_boot, sstring, "", Used, "Like the replace_address option, but if the node has been bootstrapped successfully it will be ignored. Same as -Dcassandra.replace_address_first_boot.") \
val(override_decommission, bool, false, Used, "Set true to force a decommissioned node to join the cluster") \
val(ring_delay_ms, uint32_t, 30 * 1000, Used, "Time a node waits to hear from other nodes before joining the ring in milliseconds. Same as -Dcassandra.ring_delay_ms in cassandra.") \
val(developer_mode, bool, false, Used, "Relax environment checks. Setting to true can reduce performance and reliability significantly.") \
/* done! */
#define _make_value_member(name, type, deflt, status, desc, ...) \


@@ -42,7 +42,7 @@ struct query_context {
future<::shared_ptr<cql3::untyped_result_set>> execute_cql(sstring text, sstring cf, Args&&... args) {
// FIXME: Would be better not to use sprint here.
sstring req = sprint(text, cf);
return this->_qp.local().execute_internal(req, { data_value(std::forward<Args>(args))... });
return this->_qp.local().execute_internal(req, { boost::any(std::forward<Args>(args))... });
}
database& db() {
return _db.local();
@@ -67,8 +67,9 @@ extern std::unique_ptr<query_context> qctx;
// we executed the query, and return an empty result
template <typename... Args>
static future<::shared_ptr<cql3::untyped_result_set>> execute_cql(sstring text, Args&&... args) {
assert(qctx);
return qctx->execute_cql(text, std::forward<Args>(args)...);
if (qctx) {
return qctx->execute_cql(text, std::forward<Args>(args)...);
}
return make_ready_future<shared_ptr<cql3::untyped_result_set>>(::make_shared<cql3::untyped_result_set>(cql3::untyped_result_set::make_empty()));
}
}


@@ -329,7 +329,7 @@ future<utils::UUID> calculate_schema_digest(distributed<service::storage_proxy>&
std::vector<query::result> results;
for (auto&& p : rs->partitions()) {
auto mut = p.mut().unfreeze(s);
auto partition_key = value_cast<sstring>(utf8_type->deserialize(mut.key().get_component(*s, 0)));
auto partition_key = boost::any_cast<sstring>(utf8_type->deserialize(mut.key().get_component(*s, 0)));
if (partition_key == system_keyspace::NAME) {
continue;
}
@@ -368,7 +368,7 @@ future<std::vector<frozen_mutation>> convert_schema_to_mutations(distributed<ser
std::vector<frozen_mutation> results;
for (auto&& p : rs->partitions()) {
auto mut = p.mut().unfreeze(s);
auto partition_key = value_cast<sstring>(utf8_type->deserialize(mut.key().get_component(*s, 0)));
auto partition_key = boost::any_cast<sstring>(utf8_type->deserialize(mut.key().get_component(*s, 0)));
if (partition_key == system_keyspace::NAME) {
continue;
}
@@ -398,18 +398,18 @@ read_schema_for_keyspaces(distributed<service::storage_proxy>& proxy, const sstr
return map_reduce(keyspace_names.begin(), keyspace_names.end(), map, schema_result{}, insert);
}
future<schema_result_value_type>
future<schema_result::value_type>
read_schema_partition_for_keyspace(distributed<service::storage_proxy>& proxy, const sstring& schema_table_name, const sstring& keyspace_name)
{
auto schema = proxy.local().get_db().local().find_schema(system_keyspace::NAME, schema_table_name);
auto keyspace_key = dht::global_partitioner().decorate_key(*schema,
partition_key::from_singular(*schema, keyspace_name));
return db::system_keyspace::query(proxy, schema_table_name, keyspace_key).then([keyspace_name] (auto&& rs) {
return schema_result_value_type{keyspace_name, std::move(rs)};
return schema_result::value_type{keyspace_name, std::move(rs)};
});
}
future<schema_result_value_type>
future<schema_result::value_type>
read_schema_partition_for_table(distributed<service::storage_proxy>& proxy, const sstring& schema_table_name, const sstring& keyspace_name, const sstring& table_name)
{
auto schema = proxy.local().get_db().local().find_schema(system_keyspace::NAME, schema_table_name);
@@ -417,7 +417,7 @@ read_schema_partition_for_table(distributed<service::storage_proxy>& proxy, cons
partition_key::from_singular(*schema, keyspace_name));
auto clustering_range = query::clustering_range(clustering_key_prefix::from_clustering_prefix(*schema, exploded_clustering_prefix({utf8_type->decompose(table_name)})));
return db::system_keyspace::query(proxy, schema_table_name, keyspace_key, clustering_range).then([keyspace_name] (auto&& rs) {
return schema_result_value_type{keyspace_name, std::move(rs)};
return schema_result::value_type{keyspace_name, std::move(rs)};
});
}
@@ -468,7 +468,7 @@ future<> do_merge_schema(distributed<service::storage_proxy>& proxy, std::vector
std::set<sstring> keyspaces;
std::set<utils::UUID> column_families;
for (auto&& mutation : mutations) {
keyspaces.emplace(value_cast<sstring>(utf8_type->deserialize(mutation.key().get_component(*s, 0))));
keyspaces.emplace(boost::any_cast<sstring>(utf8_type->deserialize(mutation.key().get_component(*s, 0))));
column_families.emplace(mutation.column_family_id());
}
@@ -528,7 +528,7 @@ future<> do_merge_schema(distributed<service::storage_proxy>& proxy, std::vector
future<std::set<sstring>> merge_keyspaces(distributed<service::storage_proxy>& proxy, schema_result&& before, schema_result&& after)
{
std::vector<schema_result_value_type> created;
std::vector<schema_result::value_type> created;
std::vector<sstring> altered;
std::set<sstring> dropped;
@@ -552,7 +552,7 @@ future<std::set<sstring>> merge_keyspaces(distributed<service::storage_proxy>& p
for (auto&& key : diff.entries_only_on_right) {
auto&& value = after[key];
if (!value->empty()) {
created.emplace_back(schema_result_value_type{key, std::move(value)});
created.emplace_back(schema_result::value_type{key, std::move(value)});
}
}
for (auto&& key : diff.entries_differing) {
@@ -566,7 +566,7 @@ future<std::set<sstring>> merge_keyspaces(distributed<service::storage_proxy>& p
} else if (!pre->empty()) {
dropped.emplace(keyspace_name);
} else if (!post->empty()) { // a (re)created keyspace
created.emplace_back(schema_result_value_type{key, std::move(post)});
created.emplace_back(schema_result::value_type{key, std::move(post)});
}
}
return do_with(std::move(created), [&proxy, altered = std::move(altered)] (auto& created) {
@@ -899,7 +899,7 @@ std::vector<mutation> make_drop_keyspace_mutations(lw_shared_ptr<keyspace_metada
*
* @param partition Keyspace attributes in serialized form
*/
lw_shared_ptr<keyspace_metadata> create_keyspace_from_schema_partition(const schema_result_value_type& result)
lw_shared_ptr<keyspace_metadata> create_keyspace_from_schema_partition(const schema_result::value_type& result)
{
auto&& rs = result.second;
if (rs->empty()) {
@@ -1269,7 +1269,7 @@ void create_table_from_table_row_and_column_rows(schema_builder& builder, const
} else {
// FIXME:
// is_dense = CFMetaData.calculateIsDense(fullRawComparator, columnDefs);
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
throw std::runtime_error("not implemented");
}
bool is_compound = cell_comparator::check_compound(table_row.get_nonnull<sstring>("comparator"));
@@ -1310,10 +1310,10 @@ void create_table_from_table_row_and_column_rows(schema_builder& builder, const
builder.set_max_compaction_threshold(table_row.get_nonnull<int>("max_compaction_threshold"));
}
if (table_row.has("comment")) {
builder.set_comment(table_row.get_nonnull<sstring>("comment"));
}
#if 0
if (result.has("comment"))
cfm.comment(result.getString("comment"));
#endif
if (table_row.has("memtable_flush_period_in_ms")) {
builder.set_memtable_flush_period(table_row.get_nonnull<int32_t>("memtable_flush_period_in_ms"));
}


@@ -55,7 +55,6 @@ namespace db {
namespace schema_tables {
using schema_result = std::map<sstring, lw_shared_ptr<query::result_set>>;
using schema_result_value_type = std::pair<sstring, lw_shared_ptr<query::result_set>>;
static constexpr auto KEYSPACES = "schema_keyspaces";
static constexpr auto COLUMNFAMILIES = "schema_columnfamilies";
@@ -75,7 +74,7 @@ future<utils::UUID> calculate_schema_digest(distributed<service::storage_proxy>&
future<std::vector<frozen_mutation>> convert_schema_to_mutations(distributed<service::storage_proxy>& proxy);
future<schema_result_value_type>
future<schema_result::value_type>
read_schema_partition_for_keyspace(distributed<service::storage_proxy>& proxy, const sstring& schema_table_name, const sstring& keyspace_name);
future<> merge_schema(distributed<service::storage_proxy>& proxy, std::vector<mutation> mutations);
@@ -90,11 +89,11 @@ std::vector<mutation> make_create_keyspace_mutations(lw_shared_ptr<keyspace_meta
std::vector<mutation> make_drop_keyspace_mutations(lw_shared_ptr<keyspace_metadata> keyspace, api::timestamp_type timestamp);
lw_shared_ptr<keyspace_metadata> create_keyspace_from_schema_partition(const schema_result_value_type& partition);
lw_shared_ptr<keyspace_metadata> create_keyspace_from_schema_partition(const schema_result::value_type& partition);
future<> merge_tables(distributed<service::storage_proxy>& proxy, schema_result&& before, schema_result&& after);
lw_shared_ptr<keyspace_metadata> create_keyspace_from_schema_partition(const schema_result_value_type& partition);
lw_shared_ptr<keyspace_metadata> create_keyspace_from_schema_partition(const schema_result::value_type& partition);
mutation make_create_keyspace_mutation(lw_shared_ptr<keyspace_metadata> keyspace, api::timestamp_type timestamp, bool with_tables_and_types_and_functions = true);


@@ -143,18 +143,18 @@ atomic_cell_view db::serializer<atomic_cell_view>::read(input& in) {
}
template<>
db::serializer<collection_mutation_view>::serializer(const collection_mutation_view& c)
db::serializer<collection_mutation::view>::serializer(const collection_mutation::view& c)
: _item(c), _size(bytes_view_serializer(c.serialize()).size()) {
}
template<>
void db::serializer<collection_mutation_view>::write(output& out, const collection_mutation_view& t) {
void db::serializer<collection_mutation::view>::write(output& out, const collection_mutation::view& t) {
bytes_view_serializer::write(out, t.serialize());
}
template<>
void db::serializer<collection_mutation_view>::read(collection_mutation_view& c, input& in) {
c = collection_mutation_view::from_bytes(bytes_view_serializer::read(in));
void db::serializer<collection_mutation::view>::read(collection_mutation::view& c, input& in) {
c = collection_mutation::view::from_bytes(bytes_view_serializer::read(in));
}
template<>
@@ -187,6 +187,30 @@ void db::serializer<partition_key_view>::skip(input& in) {
in.skip(len);
}
template<>
db::serializer<clustering_key_view>::serializer(const clustering_key_view& key)
: _item(key), _size(sizeof(uint16_t) /* size */ + key.representation().size()) {
}
template<>
void db::serializer<clustering_key_view>::write(output& out, const clustering_key_view& key) {
bytes_view v = key.representation();
out.write<uint16_t>(v.size());
out.write(v.begin(), v.end());
}
template<>
void db::serializer<clustering_key_view>::read(clustering_key_view& b, input& in) {
auto len = in.read<uint16_t>();
b = clustering_key_view::from_bytes(in.read_view(len));
}
template<>
clustering_key_view db::serializer<clustering_key_view>::read(input& in) {
auto len = in.read<uint16_t>();
return clustering_key_view::from_bytes(in.read_view(len));
}
template<>
db::serializer<clustering_key_prefix_view>::serializer(const clustering_key_prefix_view& key)
: _item(key), _size(sizeof(uint16_t) /* size */ + key.representation().size()) {
@@ -254,9 +278,10 @@ template class db::serializer<bytes> ;
template class db::serializer<bytes_view> ;
template class db::serializer<sstring> ;
template class db::serializer<atomic_cell_view> ;
template class db::serializer<collection_mutation_view> ;
template class db::serializer<collection_mutation::view> ;
template class db::serializer<utils::UUID> ;
template class db::serializer<partition_key_view> ;
template class db::serializer<clustering_key_view> ;
template class db::serializer<clustering_key_prefix_view> ;
template class db::serializer<frozen_mutation> ;
template class db::serializer<db::replay_position> ;


@@ -22,12 +22,11 @@
#ifndef DB_SERIALIZER_HH_
#define DB_SERIALIZER_HH_
#include <experimental/optional>
#include "utils/data_input.hh"
#include "utils/data_output.hh"
#include "bytes_ostream.hh"
#include "bytes.hh"
#include "mutation.hh"
#include "keys.hh"
#include "database_fwd.hh"
#include "frozen_mutation.hh"
@@ -59,9 +58,9 @@ public:
return *this;
}
static void write(output&, const type&);
static void read(type&, input&);
static type read(input&);
static void write(output&, const T&);
static void read(T&, input&);
static T read(input&);
static void skip(input& in);
size_t size() const {
@@ -77,100 +76,11 @@ public:
void write(data_output& out) const {
write(out, _item);
}
bytes to_bytes() const {
bytes b(bytes::initialized_later(), _size);
data_output out(b);
write(out);
return b;
}
static type from_bytes(bytes_view v) {
data_input in(v);
return read(in);
}
private:
const type& _item;
const T& _item;
size_t _size;
};
template<typename T>
class serializer<std::experimental::optional<T>> {
public:
typedef std::experimental::optional<T> type;
typedef data_output output;
typedef data_input input;
typedef serializer<T> _MyType;
serializer(const type& t)
: _item(t)
, _size(output::serialized_size<bool>() + (t ? serializer<T>(*t).size() : 0))
{}
// apply to memory, must be at least size() large.
const _MyType& operator()(output& out) const {
write(out, _item);
return *this;
}
static void write(output& out, const type& v) {
bool en = v;
out.write<bool>(en);
if (en) {
serializer<T>::write(out, *v);
}
}
static void read(type& dst, input& in) {
auto en = in.read<bool>();
if (en) {
dst = serializer<T>::read(in);
} else {
dst = {};
}
}
static type read(input& in) {
type t;
read(t, in);
return t;
}
static void skip(input& in) {
auto en = in.read<bool>();
if (en) {
serializer<T>::skip(in);
}
}
size_t size() const {
return _size;
}
void write(bytes_ostream& out) const {
auto buf = out.write_place_holder(_size);
data_output data_out((char*)buf, _size);
write(data_out, _item);
}
void write(data_output& out) const {
write(out, _item);
}
bytes to_bytes() const {
bytes b(bytes::initialized_later(), _size);
data_output out(b);
write(out);
return b;
}
static type from_bytes(bytes_view v) {
data_input in(v);
return read(in);
}
private:
const std::experimental::optional<T> _item;
size_t _size;
};
template<> serializer<utils::UUID>::serializer(const utils::UUID &);
template<> void serializer<utils::UUID>::write(output&, const type&);
template<> void serializer<utils::UUID>::read(utils::UUID&, input&);
@@ -199,9 +109,9 @@ template<> void serializer<atomic_cell_view>::write(output&, const type&);
template<> void serializer<atomic_cell_view>::read(atomic_cell_view&, input&);
template<> atomic_cell_view serializer<atomic_cell_view>::read(input&);
template<> serializer<collection_mutation_view>::serializer(const collection_mutation_view &);
template<> void serializer<collection_mutation_view>::write(output&, const type&);
template<> void serializer<collection_mutation_view>::read(collection_mutation_view&, input&);
template<> serializer<collection_mutation::view>::serializer(const collection_mutation::view &);
template<> void serializer<collection_mutation::view>::write(output&, const type&);
template<> void serializer<collection_mutation::view>::read(collection_mutation::view&, input&);
template<> serializer<frozen_mutation>::serializer(const frozen_mutation &);
template<> void serializer<frozen_mutation>::write(output&, const type&);
@@ -214,6 +124,11 @@ template<> void serializer<partition_key_view>::read(partition_key_view&, input&
template<> partition_key_view serializer<partition_key_view>::read(input&);
template<> void serializer<partition_key_view>::skip(input&);
template<> serializer<clustering_key_view>::serializer(const clustering_key_view &);
template<> void serializer<clustering_key_view>::write(output&, const clustering_key_view&);
template<> void serializer<clustering_key_view>::read(clustering_key_view&, input&);
template<> clustering_key_view serializer<clustering_key_view>::read(input&);
template<> serializer<clustering_key_prefix_view>::serializer(const clustering_key_prefix_view &);
template<> void serializer<clustering_key_prefix_view>::write(output&, const clustering_key_prefix_view&);
template<> void serializer<clustering_key_prefix_view>::read(clustering_key_prefix_view&, input&);
@@ -245,7 +160,7 @@ typedef serializer<bytes> bytes_serializer; // Compatible with bytes_view_serial
typedef serializer<bytes_view> bytes_view_serializer; // Compatible with bytes_serializer
typedef serializer<sstring> sstring_serializer;
typedef serializer<atomic_cell_view> atomic_cell_view_serializer;
typedef serializer<collection_mutation_view> collection_mutation_view_serializer;
typedef serializer<collection_mutation::view> collection_mutation_view_serializer;
typedef serializer<utils::UUID> uuid_serializer;
typedef serializer<partition_key_view> partition_key_view_serializer;
typedef serializer<clustering_key_view> clustering_key_view_serializer;


@@ -464,8 +464,7 @@ static future<> build_bootstrap_info() {
static auto state_map = std::unordered_map<sstring, bootstrap_state>({
{ "NEEDS_BOOTSTRAP", bootstrap_state::NEEDS_BOOTSTRAP },
{ "COMPLETED", bootstrap_state::COMPLETED },
{ "IN_PROGRESS", bootstrap_state::IN_PROGRESS },
{ "DECOMMISSIONED", bootstrap_state::DECOMMISSIONED }
{ "IN_PROGRESS", bootstrap_state::IN_PROGRESS }
});
bootstrap_state state = bootstrap_state::NEEDS_BOOTSTRAP;
@@ -487,7 +486,9 @@ future<> init_local_cache() {
}
void minimal_setup(distributed<database>& db, distributed<cql3::query_processor>& qp) {
qctx = std::make_unique<query_context>(db, qp);
auto new_ctx = std::make_unique<query_context>(db, qp);
qctx.swap(new_ctx);
assert(!new_ctx);
}
future<> setup(distributed<database>& db, distributed<cql3::query_processor>& qp) {
@@ -539,11 +540,10 @@ future<> save_truncation_records(const column_family& cf, db_clock::time_point t
out.write<db_clock::rep>(truncated_at.time_since_epoch().count());
map_type_impl::native_type tmp;
tmp.emplace_back(cf.schema()->id(), data_value(buf));
auto map_type = map_type_impl::get_instance(uuid_type, bytes_type, true);
tmp.emplace_back(boost::any{ cf.schema()->id() }, boost::any{ buf });
sstring req = sprint("UPDATE system.%s SET truncated_at = truncated_at + ? WHERE key = '%s'", LOCAL, LOCAL);
return qctx->qp().execute_internal(req, {make_map_value(map_type, tmp)}).then([](auto rs) {
return qctx->qp().execute_internal(req, {tmp}).then([](auto rs) {
truncation_records = {};
return force_blocking_flush(LOCAL);
});
@@ -633,7 +633,7 @@ future<db_clock::time_point> get_truncated_at(utils::UUID cf_id) {
set_type_impl::native_type prepare_tokens(std::unordered_set<dht::token>& tokens) {
set_type_impl::native_type tset;
for (auto& t: tokens) {
tset.push_back(dht::global_partitioner().to_sstring(t));
tset.push_back(boost::any(dht::global_partitioner().to_sstring(t)));
}
return tset;
}
@@ -641,7 +641,7 @@ set_type_impl::native_type prepare_tokens(std::unordered_set<dht::token>& tokens
std::unordered_set<dht::token> decode_tokens(set_type_impl::native_type& tokens) {
std::unordered_set<dht::token> tset;
for (auto& t: tokens) {
auto str = value_cast<sstring>(t);
auto str = boost::any_cast<sstring>(t);
assert(str == dht::global_partitioner().to_sstring(dht::global_partitioner().from_sstring(str)));
tset.insert(dht::global_partitioner().from_sstring(str));
}
@@ -658,8 +658,7 @@ future<> update_tokens(gms::inet_address ep, std::unordered_set<dht::token> toke
}
sstring req = "INSERT INTO system.%s (peer, tokens) VALUES (?, ?)";
auto set_type = set_type_impl::get_instance(utf8_type, true);
return execute_cql(req, PEERS, ep.addr(), make_set_value(set_type, prepare_tokens(tokens))).discard_result().then([] {
return execute_cql(req, PEERS, ep.addr(), prepare_tokens(tokens)).discard_result().then([] {
return force_blocking_flush(PEERS);
});
}
@@ -690,7 +689,7 @@ future<std::unordered_map<gms::inet_address, std::unordered_set<dht::token>>> lo
auto blob = row.get_blob("tokens");
auto cdef = peers()->get_column_definition("tokens");
auto deserialized = cdef->type->deserialize(blob);
auto tokens = value_cast<set_type_impl::native_type>(deserialized);
auto tokens = boost::any_cast<set_type_impl::native_type>(deserialized);
ret->emplace(peer, decode_tokens(tokens));
}
@@ -797,8 +796,6 @@ future<> remove_endpoint(gms::inet_address ep) {
}).then([ep] {
sstring req = "DELETE FROM system.%s WHERE peer = ?";
return execute_cql(req, PEERS, ep.addr()).discard_result();
}).then([] {
return force_blocking_flush(PEERS);
});
}
@@ -811,14 +808,16 @@ future<> update_tokens(std::unordered_set<dht::token> tokens) {
}
sstring req = "INSERT INTO system.%s (key, tokens) VALUES (?, ?)";
auto set_type = set_type_impl::get_instance(utf8_type, true);
return execute_cql(req, LOCAL, sstring(LOCAL), make_set_value(set_type, prepare_tokens(tokens))).discard_result().then([] {
return execute_cql(req, LOCAL, sstring(LOCAL), prepare_tokens(tokens)).discard_result().then([] {
return force_blocking_flush(LOCAL);
});
}
future<> force_blocking_flush(sstring cfname) {
assert(qctx);
if (!qctx) {
return make_ready_future<>();
}
return qctx->_db.invoke_on_all([cfname = std::move(cfname)](database& db) {
// if (!Boolean.getBoolean("cassandra.unsafesystem"))
column_family& cf = db.find_column_family(NAME, cfname);
@@ -863,7 +862,7 @@ future<std::unordered_set<dht::token>> get_saved_tokens() {
auto blob = msg->one().get_blob("tokens");
auto cdef = local()->get_column_definition("tokens");
auto deserialized = cdef->type->deserialize(blob);
auto tokens = value_cast<set_type_impl::native_type>(deserialized);
auto tokens = boost::any_cast<set_type_impl::native_type>(deserialized);
return make_ready_future<std::unordered_set<dht::token>>(decode_tokens(tokens));
});
@@ -877,10 +876,6 @@ bool bootstrap_in_progress() {
return get_bootstrap_state() == bootstrap_state::IN_PROGRESS;
}
bool was_decommissioned() {
return get_bootstrap_state() == bootstrap_state::DECOMMISSIONED;
}
bootstrap_state get_bootstrap_state() {
return _local_cache.local()._state;
}
@@ -889,8 +884,7 @@ future<> set_bootstrap_state(bootstrap_state state) {
static std::unordered_map<bootstrap_state, sstring, enum_hash<bootstrap_state>> state_to_name({
{ bootstrap_state::NEEDS_BOOTSTRAP, "NEEDS_BOOTSTRAP" },
{ bootstrap_state::COMPLETED, "COMPLETED" },
{ bootstrap_state::IN_PROGRESS, "IN_PROGRESS" },
{ bootstrap_state::DECOMMISSIONED, "DECOMMISSIONED" }
{ bootstrap_state::IN_PROGRESS, "IN_PROGRESS" }
});
sstring state_name = state_to_name.at(state);
@@ -1010,55 +1004,5 @@ query(distributed<service::storage_proxy>& proxy, const sstring& cf_name, const
});
}
static map_type_impl::native_type prepare_rows_merged(std::unordered_map<int32_t, int64_t>& rows_merged) {
map_type_impl::native_type tmp;
for (auto& r: rows_merged) {
int32_t first = r.first;
int64_t second = r.second;
auto map_element = std::make_pair<data_value, data_value>(data_value(first), data_value(second));
tmp.push_back(std::move(map_element));
}
return tmp;
}
future<> update_compaction_history(sstring ksname, sstring cfname, int64_t compacted_at, int64_t bytes_in, int64_t bytes_out,
std::unordered_map<int32_t, int64_t> rows_merged)
{
// don't write anything when the history table itself is compacted, since that would in turn cause new compactions
if (ksname == "system" && cfname == COMPACTION_HISTORY) {
return make_ready_future<>();
}
auto map_type = map_type_impl::get_instance(int32_type, long_type, true);
sstring req = "INSERT INTO system.%s (id, keyspace_name, columnfamily_name, compacted_at, bytes_in, bytes_out, rows_merged) VALUES (?, ?, ?, ?, ?, ?, ?)";
return execute_cql(req, COMPACTION_HISTORY, utils::UUID_gen::get_time_UUID(), ksname, cfname, compacted_at, bytes_in, bytes_out,
make_map_value(map_type, prepare_rows_merged(rows_merged))).discard_result();
}
future<std::vector<compaction_history_entry>> get_compaction_history()
{
sstring req = "SELECT * from system.%s";
return execute_cql(req, COMPACTION_HISTORY).then([] (::shared_ptr<cql3::untyped_result_set> msg) {
std::vector<compaction_history_entry> history;
for (auto& row : *msg) {
compaction_history_entry entry;
entry.id = row.get_as<utils::UUID>("id");
entry.ks = row.get_as<sstring>("keyspace_name");
entry.cf = row.get_as<sstring>("columnfamily_name");
entry.compacted_at = row.get_as<int64_t>("compacted_at");
entry.bytes_in = row.get_as<int64_t>("bytes_in");
entry.bytes_out = row.get_as<int64_t>("bytes_out");
if (row.has("rows_merged")) {
entry.rows_merged = row.get_map<int32_t, int64_t>("rows_merged");
}
history.push_back(std::move(entry));
}
return std::move(history);
});
}
} // namespace system_keyspace
} // namespace db


@@ -153,8 +153,7 @@ load_dc_rack_info();
enum class bootstrap_state {
NEEDS_BOOTSTRAP,
COMPLETED,
IN_PROGRESS,
DECOMMISSIONED
IN_PROGRESS
};
#if 0
@@ -259,28 +258,26 @@ enum class bootstrap_state {
compactionLog.truncateBlocking();
}
public static void updateCompactionHistory(String ksname,
String cfname,
long compactedAt,
long bytesIn,
long bytesOut,
Map<Integer, Long> rowsMerged)
{
// don't write anything when the history table itself is compacted, since that would in turn cause new compactions
if (ksname.equals("system") && cfname.equals(COMPACTION_HISTORY))
return;
String req = "INSERT INTO system.%s (id, keyspace_name, columnfamily_name, compacted_at, bytes_in, bytes_out, rows_merged) VALUES (?, ?, ?, ?, ?, ?, ?)";
executeInternal(String.format(req, COMPACTION_HISTORY), UUIDGen.getTimeUUID(), ksname, cfname, ByteBufferUtil.bytes(compactedAt), bytesIn, bytesOut, rowsMerged);
}
public static TabularData getCompactionHistory() throws OpenDataException
{
UntypedResultSet queryResultSet = executeInternal(String.format("SELECT * from system.%s", COMPACTION_HISTORY));
return CompactionHistoryTabularData.from(queryResultSet);
}
#endif
struct compaction_history_entry {
utils::UUID id;
sstring ks;
sstring cf;
int64_t compacted_at = 0;
int64_t bytes_in = 0;
int64_t bytes_out = 0;
// Key: number of rows merged
// Value: counter
std::unordered_map<int32_t, int64_t> rows_merged;
};
future<> update_compaction_history(sstring ksname, sstring cfname, int64_t compacted_at, int64_t bytes_in, int64_t bytes_out,
std::unordered_map<int32_t, int64_t> rows_merged);
future<std::vector<compaction_history_entry>> get_compaction_history();
typedef std::vector<db::replay_position> replay_positions;
future<> save_truncation_record(const column_family&, db_clock::time_point truncated_at, db::replay_position);
@@ -522,7 +519,6 @@ enum class bootstrap_state {
bool bootstrap_complete();
bool bootstrap_in_progress();
bootstrap_state get_bootstrap_state();
bool was_decommissioned();
future<> set_bootstrap_state(bootstrap_state state);
#if 0


@@ -3,22 +3,18 @@ Maintainer: Takuya ASADA <syuu@scylladb.com>
Homepage: http://scylladb.com
Section: database
Priority: optional
Standards-Version: 3.9.5
Build-Depends: debhelper (>= 9), libyaml-cpp-dev, liblz4-dev, libsnappy-dev, libcrypto++-dev, libjsoncpp-dev, libaio-dev, libthrift-dev, thrift-compiler, antlr3, antlr3-c++-dev, ragel, g++-4.9, ninja-build, git, libboost-program-options1.55-dev | libboost-program-options-dev, libboost-filesystem1.55-dev | libboost-filesystem-dev, libboost-system1.55-dev | libboost-system-dev, libboost-thread1.55-dev | libboost-thread-dev, libboost-test1.55-dev | libboost-test-dev, libgnutls28-dev, libhwloc-dev, libnuma-dev, libpciaccess-dev
Standards-Version: 3.9.2
Build-Depends: debhelper (>= 9), libyaml-cpp-dev, liblz4-dev, libsnappy-dev, libcrypto++-dev, libjsoncpp-dev, libaio-dev, libthrift-dev, thrift-compiler, antlr3-tool, antlr3-c++-dev, ragel, g++-4.9, ninja-build, git, libboost-program-options1.55-dev, libboost-filesystem1.55-dev, libboost-system1.55-dev, libboost-thread1.55-dev, libboost-test1.55-dev
Package: scylla-server
Architecture: amd64
Depends: ${shlibs:Depends}, ${misc:Depends}, hugepages, adduser, mdadm, xfsprogs, hwloc-nox
Depends: ${shlibs:Depends}, ${misc:Depends}, hugepages
Description: Scylla database server binaries
Scylla is a highly scalable, eventually consistent, distributed,
partitioned row DB.
Scylla is a highly scalable, eventually consistent, distributed, partitioned row DB.
Package: scylla-server-dbg
Section: debug
Priority: extra
Architecture: amd64
Depends: scylla-server (= ${binary:Version}), ${shlibs:Depends}, ${misc:Depends}
Description: debugging symbols for scylla-server
Scylla is a highly scalable, eventually consistent, distributed,
partitioned row DB.
Scylla is a highly scalable, eventually consistent, distributed, partitioned row DB.
This package contains the debugging symbols for scylla-server.

debian/copyright (new file)

@@ -0,0 +1,16 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: Scylla DB
Upstream-Contact: http://www.scylladb.com/
Source: https://github.com/scylladb/scylla
Files: *
Copyright: Copyright (C) 2015 ScyllaDB
License: AGPL-3.0
Files: seastar/*
Copyright: Copyright (C) 2015 ScyllaDB
License: Apache
Files: seastar/dpdk/*
Copyright: Copyright(c) 2015 Intel Corporation. All rights reserved.
License: BSD-3-clause


@@ -1,5 +1,4 @@
scylla - core unlimited
scylla - memlock unlimited
scylla - nofile 200000
scylla - nofile 100000
scylla - as unlimited
scylla - nproc 8096


@@ -2,12 +2,8 @@
DOC = $(CURDIR)/debian/scylla-server/usr/share/doc/scylla-server
SCRIPTS = $(CURDIR)/debian/scylla-server/usr/lib/scylla
SWAGGER = $(SCRIPTS)/swagger-ui
API = $(SCRIPTS)/api
SYSCTL = $(CURDIR)/debian/scylla-server/etc/sysctl.d
LIMITS= $(CURDIR)/debian/scylla-server/etc/security/limits.d
LIBS = $(CURDIR)/debian/scylla-server/usr/lib
CONF = $(CURDIR)/debian/scylla-server/etc/scylla
override_dh_auto_build:
./configure.py --disable-xen --enable-dpdk --mode=release --static-stdc++ --compiler=g++-4.9
@@ -19,15 +15,12 @@ override_dh_auto_clean:
rm -rf build.ninja seastar/build.ninja
override_dh_auto_install:
mkdir -p $(CURDIR)/debian/scylla-server/etc/default/ && \
cp $(CURDIR)/dist/redhat/sysconfig/scylla-server \
$(CURDIR)/debian/scylla-server/etc/default/
mkdir -p $(LIMITS) && \
cp $(CURDIR)/dist/common/limits.d/scylla.conf $(LIMITS)
mkdir -p $(SYSCTL) && \
cp $(CURDIR)/dist/common/sysctl.d/99-scylla.conf $(SYSCTL)
mkdir -p $(CONF) && \
cp $(CURDIR)/conf/scylla.yaml $(CONF)
cp $(CURDIR)/conf/cassandra-rackdc.properties $(CONF)
cp $(CURDIR)/debian/limits.d/scylla.conf $(LIMITS)
mkdir -p $(DOC) && \
cp $(CURDIR)/*.md $(DOC)
@@ -38,13 +31,6 @@ override_dh_auto_install:
mkdir -p $(SCRIPTS) && \
cp $(CURDIR)/seastar/dpdk/tools/dpdk_nic_bind.py $(SCRIPTS)
cp $(CURDIR)/dist/common/scripts/* $(SCRIPTS)
cp $(CURDIR)/dist/ubuntu/scripts/* $(SCRIPTS)
mkdir -p $(SWAGGER) && \
cp -r $(CURDIR)/swagger-ui/dist $(SWAGGER)
mkdir -p $(API) && \
cp -r $(CURDIR)/api/api-doc $(API)
mkdir -p $(CURDIR)/debian/scylla-server/usr/bin/ && \
cp $(CURDIR)/build/release/scylla \
@@ -52,7 +38,11 @@ override_dh_auto_install:
mkdir -p $(CURDIR)/debian/scylla-server/var/lib/scylla/data
mkdir -p $(CURDIR)/debian/scylla-server/var/lib/scylla/commitlog
mkdir -p $(CURDIR)/debian/scylla-server/var/lib/scylla/coredump
mkdir -p $(CURDIR)/debian/scylla-server/var/lib/scylla/conf
cp $(CURDIR)/conf/scylla.yaml \
$(CURDIR)/debian/scylla-server/var/lib/scylla/conf
cp $(CURDIR)/conf/cassandra-rackdc.properties \
$(CURDIR)/debian/scylla-server/var/lib/scylla/conf
override_dh_strip:
dh_strip --dbg-package=scylla-server-dbg

debian/scylla-server.postinst (new file)

@@ -0,0 +1,24 @@
#!/bin/sh
set -e
if [ "$1" = configure ]; then
adduser --system \
--quiet \
--home /var/lib/scylla \
--no-create-home \
--disabled-password \
--group scylla
chown -R scylla:scylla /var/lib/scylla
fi
# Automatically added by dh_installinit
if [ -x "/etc/init.d/scylla-server" ]; then
if [ ! -e "/etc/init/scylla-server.conf" ]; then
update-rc.d scylla-server defaults >/dev/null
fi
fi
# End automatically added section
# Automatically added by dh_installinit
update-rc.d -f scylla-server remove >/dev/null || exit $?
# End automatically added section

debian/scylla-server.preinst (new file)

@@ -0,0 +1,9 @@
# Automatically added by dh_installinit
if [ "$1" = install ] || [ "$1" = upgrade ]; then
if [ -e "/etc/init.d/scylla-server" ] && [ -L "/etc/init.d/scylla-server" ] \
&& [ $(readlink -f "/etc/init.d/scylla-server") = /lib/init/upstart-job ]
then
rm -f "/etc/init.d/scylla-server"
fi
fi
# End automatically added section


@@ -14,20 +14,20 @@ console log
pre-start script
cd /var/lib/scylla
. /etc/default/scylla-server
export NETWORK_MODE TAP BRIDGE ETHDRV ETHPCIID NR_HUGEPAGES USER GROUP SCYLLA_HOME SCYLLA_CONF SCYLLA_ARGS
export OPTS_FILE NETWORK_MODE TAP BRIDGE ETHDRV ETHPCIID NR_HUGEPAGES USER GROUP SCYLLA_ARGS
/usr/lib/scylla/scylla_prepare
end script
script
cd /var/lib/scylla
. /etc/default/scylla-server
export NETWORK_MODE TAP BRIDGE ETHDRV ETHPCIID NR_HUGEPAGES USER GROUP SCYLLA_HOME SCYLLA_CONF SCYLLA_ARGS
export OPTS_FILE NETWORK_MODE TAP BRIDGE ETHDRV ETHPCIID NR_HUGEPAGES USER GROUP SCYLLA_ARGS
exec /usr/lib/scylla/scylla_run
end script
post-stop script
cd /var/lib/scylla
. /etc/default/scylla-server
export NETWORK_MODE TAP BRIDGE ETHDRV ETHPCIID NR_HUGEPAGES USER GROUP SCYLLA_HOME SCYLLA_CONF SCYLLA_ARGS
export OPTS_FILE NETWORK_MODE TAP BRIDGE ETHDRV ETHPCIID NR_HUGEPAGES USER GROUP SCYLLA_ARGS
/usr/lib/scylla/scylla_stop
end script


@@ -71,33 +71,33 @@ future<> boot_strapper::bootstrap() {
}
std::unordered_set<token> boot_strapper::get_bootstrap_tokens(token_metadata metadata, database& db) {
auto initial_tokens = db.get_initial_tokens();
#if 0
Collection<String> initialTokens = DatabaseDescriptor.getInitialTokens();
// if user specified tokens, use those
if (initial_tokens.size() > 0) {
logger.debug("tokens manually specified as {}", initial_tokens);
std::unordered_set<token> tokens;
for (auto& token_string : initial_tokens) {
auto token = dht::global_partitioner().from_sstring(token_string);
if (metadata.get_endpoint(token)) {
throw std::runtime_error(sprint("Bootstrapping to existing token %s is not allowed (decommission/removenode the old node first).", token_string));
}
tokens.insert(token);
if (initialTokens.size() > 0)
{
logger.debug("tokens manually specified as {}", initialTokens);
List<Token> tokens = new ArrayList<Token>(initialTokens.size());
for (String tokenString : initialTokens)
{
Token token = StorageService.getPartitioner().getTokenFactory().fromString(tokenString);
if (metadata.getEndpoint(token) != null)
throw new ConfigurationException("Bootstrapping to existing token " + tokenString + " is not allowed (decommission/removenode the old node first).");
tokens.add(token);
}
logger.debug("Get manually specified bootstrap_tokens={}", tokens);
return tokens;
}
#endif
size_t num_tokens = db.get_config().num_tokens();
if (num_tokens < 1) {
throw std::runtime_error("num_tokens must be >= 1");
}
if (num_tokens == 1) {
logger.warn("Picking random token for a single vnode. You should probably add more vnodes; failing that, you should probably specify the token manually");
}
// if (numTokens == 1)
// logger.warn("Picking random token for a single vnode. You should probably add more vnodes; failing that, you should probably specify the token manually");
auto tokens = get_random_tokens(metadata, num_tokens);
logger.debug("Get random bootstrap_tokens={}", tokens);
logger.debug("Get bootstrap_tokens={}", tokens);
return tokens;
}


@@ -34,12 +34,12 @@ token byte_ordered_partitioner::get_random_token()
std::map<token, float> byte_ordered_partitioner::describe_ownership(const std::vector<token>& sorted_tokens)
{
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
throw std::runtime_error("not implemented");
}
token byte_ordered_partitioner::midpoint(const token& t1, const token& t2) const
{
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
throw std::runtime_error("not implemented");
}
unsigned


@@ -386,22 +386,12 @@ public:
friend std::ostream& operator<<(std::ostream&, const ring_position&);
};
// Trichotomic comparator for ring_position
struct ring_position_comparator {
const schema& s;
ring_position_comparator(const schema& s_) : s(s_) {}
int operator()(const ring_position& lh, const ring_position& rh) const;
};
// "less" comparator for ring_position
struct ring_position_less_comparator {
const schema& s;
ring_position_less_comparator(const schema& s_) : s(s_) {}
bool operator()(const ring_position& lh, const ring_position& rh) const {
return lh.less_compare(s, rh);
}
};
struct token_comparator {
// Return values are those of a trichotomic comparison.
int operator()(const token& t1, const token& t2) const;


@@ -75,9 +75,6 @@ token murmur3_partitioner::get_random_token() {
}
inline int64_t long_token(const token& t) {
if (t.is_minimum()) {
return std::numeric_limits<long>::min();
}
if (t._data.size() != sizeof(int64_t)) {
throw runtime_exception(sprint("Invalid token. Should have size %ld, has size %ld\n", sizeof(int64_t), t._data.size()));
@@ -88,8 +85,18 @@ inline int64_t long_token(const token& t) {
return net::ntoh(*lp);
}
// XXX: Technically, this should be inside long_token. However, long_token is
// used quite a lot in hot paths, so it is better to keep the branches off if
// we can. Most of our comparators will check for _kind separately,
// so this should be fine.
sstring murmur3_partitioner::to_sstring(const token& t) const {
return ::to_sstring(long_token(t));
int64_t lt;
if (t._kind == dht::token::kind::before_all_keys) {
lt = std::numeric_limits<long>::min();
} else {
lt = long_token(t);
}
return ::to_sstring(lt);
}
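A minimal sketch of the pattern the comment above describes: the sentinel check stays in the cold formatter rather than in the hot decoder used by comparators. `simple_token` and its fields are hypothetical stand-ins for the real token type, which stores raw bytes:

```cpp
#include <cstdint>
#include <limits>
#include <string>

enum class token_kind { before_all_keys, key };

// Hypothetical simplified token; the real one holds serialized bytes.
struct simple_token {
    token_kind _kind;
    int64_t _value;
};

// Hot path: called from comparators, so it stays branch-free on _kind;
// callers are expected to handle the sentinel themselves.
inline int64_t long_token(const simple_token& t) {
    return t._value;
}

// Cold path: one extra branch here is cheap, so the "before all keys"
// sentinel is mapped to the minimum long here instead of in long_token.
std::string token_to_string(const simple_token& t) {
    int64_t lt = (t._kind == token_kind::before_all_keys)
            ? std::numeric_limits<int64_t>::min()
            : long_token(t);
    return std::to_string(lt);
}
```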
dht::token murmur3_partitioner::from_sstring(const sstring& t) const {
@@ -112,35 +119,17 @@ int murmur3_partitioner::tri_compare(const token& t1, const token& t2) {
}
}
// Assuming that x>=y, return the positive difference x-y.
// The return type is an unsigned type, as the difference may overflow
// a signed type (e.g., consider very positive x and very negative y).
template <typename T>
static std::make_unsigned_t<T> positive_subtract(T x, T y) {
return std::make_unsigned_t<T>(x) - std::make_unsigned_t<T>(y);
}
token murmur3_partitioner::midpoint(const token& t1, const token& t2) const {
auto l1 = long_token(t1);
auto l2 = long_token(t2);
int64_t mid;
if (l1 <= l2) {
// To find the midpoint, we cannot use the trivial formula (l1+l2)/2
// because the addition can overflow the integer. To avoid this
// overflow, we first notice that the above formula is equivalent to
// l1 + (l2-l1)/2. Now, "l2-l1" can still overflow a signed integer
// (e.g., think of a very positive l2 and very negative l1), but
// because l1 <= l2 in this branch, we note that l2-l1 is positive
// and fits an *unsigned* int's range. So,
mid = l1 + positive_subtract(l2, l1)/2;
} else {
// When l2 < l1, we need to switch l1 and l2 in the above
// formula, because now l1 - l2 is positive.
// Additionally, we consider this case is a "wrap around", so we need
// to behave as if l2 + 2^64 was meant instead of l2, i.e., add 2^63
// to the average.
mid = l2 + positive_subtract(l1, l2)/2 + 0x8000'0000'0000'0000;
// long_token is defined as signed, but the arithmetic works out the same
// without invoking undefined behavior with a signed type.
auto delta = (uint64_t(l2) - uint64_t(l1)) / 2;
if (l1 > l2) {
// wraparound
delta += 0x8000'0000'0000'0000;
}
auto mid = uint64_t(l1) + delta;
return get_token(mid);
}
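The overflow-free midpoint derivation spelled out in the comments above can be sketched standalone. `ring_midpoint` and `positive_subtract` are illustrative names operating on raw 64-bit token values, not the actual partitioner API:

```cpp
#include <cstdint>
#include <limits>
#include <type_traits>

// Assuming x >= y, return the positive difference x - y as an unsigned
// value, since the true difference can overflow a signed type.
template <typename T>
static std::make_unsigned_t<T> positive_subtract(T x, T y) {
    return std::make_unsigned_t<T>(x) - std::make_unsigned_t<T>(y);
}

// Midpoint of the clockwise arc from l1 to l2 on the 64-bit token ring.
// (l1 + l2) / 2 can overflow, so compute l1 + (l2 - l1) / 2 instead,
// doing the subtraction in unsigned arithmetic where it cannot overflow.
int64_t ring_midpoint(int64_t l1, int64_t l2) {
    uint64_t mid;
    if (l1 <= l2) {
        // l2 - l1 is positive and fits an unsigned 64-bit range.
        mid = uint64_t(l1) + positive_subtract(l2, l1) / 2;
    } else {
        // Wraparound: behave as if l2 + 2^64 was meant, which adds
        // 2^63 to the average.
        mid = uint64_t(l2) + positive_subtract(l1, l2) / 2
              + 0x8000'0000'0000'0000ull;
    }
    // Two's-complement narrowing; well-defined since C++20.
    return int64_t(mid);
}
```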


@@ -41,11 +41,9 @@
#include "locator/snitch_base.hh"
#include "database.hh"
#include "gms/gossiper.hh"
#include "gms/failure_detector.hh"
#include "log.hh"
#include "streaming/stream_plan.hh"
#include "streaming/stream_state.hh"
#include "service/storage_service.hh"
namespace dht {
@@ -57,7 +55,14 @@ static std::unordered_map<range<token>, std::unordered_set<inet_address>>
unordered_multimap_to_unordered_map(const std::unordered_multimap<range<token>, inet_address>& multimap) {
std::unordered_map<range<token>, std::unordered_set<inet_address>> ret;
for (auto x : multimap) {
ret[x.first].emplace(x.second);
auto& range_token = x.first;
auto& ep = x.second;
auto it = ret.find(range_token);
if (it != ret.end()) {
it->second.emplace(ep);
} else {
ret.emplace(range_token, std::unordered_set<inet_address>{ep});
}
}
return ret;
}
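Both variants above (the terse `ret[x.first].emplace(x.second)` and the explicit find/emplace loop) group a multimap's values by key. A generic sketch of the same conversion, with `std::string` and `int` standing in for `range<token>` and `inet_address`:

```cpp
#include <string>
#include <unordered_map>
#include <unordered_set>

// Collapse duplicate multimap keys into one entry per key holding the
// set of all values mapped to it. operator[] default-constructs the
// set on the first occurrence of a key, which is why the one-liner
// ret[key].insert(value) needs no explicit find/emplace dance.
std::unordered_map<std::string, std::unordered_set<int>>
group_by_key(const std::unordered_multimap<std::string, int>& mm) {
    std::unordered_map<std::string, std::unordered_set<int>> ret;
    for (const auto& [key, value] : mm) {
        ret[key].insert(value);
    }
    return ret;
}
```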
@@ -110,6 +115,7 @@ range_streamer::get_all_ranges_with_sources_for(const sstring& keyspace_name, st
auto& ks = _db.local().find_keyspace(keyspace_name);
auto& strat = ks.get_replication_strategy();
// std::unordered_multimap<range<token>, inet_address>
auto tm = _metadata.clone_only_token_map();
auto range_addresses = unordered_multimap_to_unordered_map(strat.get_range_addresses(tm));
@@ -160,24 +166,23 @@ range_streamer::get_all_ranges_with_strict_sources_for(const sstring& keyspace_n
for (auto& x : range_addresses) {
const range<token>& src_range = x.first;
if (src_range.contains(desired_range, dht::tri_compare)) {
std::vector<inet_address> old_endpoints(x.second.begin(), x.second.end());
auto old_endpoints = x.second;
auto it = pending_range_addresses.find(desired_range);
if (it == pending_range_addresses.end()) {
throw std::runtime_error(sprint("Can not find desired_range = {} in pending_range_addresses", desired_range));
}
std::unordered_set<inet_address> new_endpoints = it->second;
assert (it != pending_range_addresses.end());
auto new_endpoints = it->second;
//Due to CASSANDRA-5953 we can have a higher RF then we have endpoints.
//So we need to be careful to only be strict when endpoints == RF
if (old_endpoints.size() == strat.get_replication_factor()) {
auto it = std::remove_if(old_endpoints.begin(), old_endpoints.end(),
[&new_endpoints] (inet_address ep) { return new_endpoints.count(ep); });
old_endpoints.erase(it, old_endpoints.end());
std::unordered_set<inet_address> diff;
std::set_difference(old_endpoints.begin(), old_endpoints.end(),
new_endpoints.begin(), new_endpoints.end(), std::inserter(diff, diff.begin()));
old_endpoints = std::move(diff);
if (old_endpoints.size() != 1) {
throw std::runtime_error(sprint("Expected 1 endpoint but found %d", old_endpoints.size()));
throw std::runtime_error(sprint("Expected 1 endpoint but found ", old_endpoints.size()));
}
}
range_sources.emplace(desired_range, old_endpoints.front());
range_sources.emplace(desired_range, *(old_endpoints.begin()));
}
}
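The strict-source selection above removes the pending (new) endpoints from the previous (old) endpoint set and expects exactly one survivor. A generic sketch with strings standing in for `inet_address` (`strict_source` is an illustrative name, not the Scylla API):

```cpp
#include <iterator>
#include <stdexcept>
#include <string>
#include <unordered_set>

// Erase every old endpoint that also appears in new_eps; when the
// replication factor matches the old endpoint count, exactly one old
// endpoint must remain, and it becomes the streaming source.
std::string strict_source(std::unordered_set<std::string> old_eps,
                          const std::unordered_set<std::string>& new_eps) {
    for (auto it = old_eps.begin(); it != old_eps.end(); ) {
        // erase(it) returns the iterator following the erased element.
        it = new_eps.count(*it) ? old_eps.erase(it) : std::next(it);
    }
    if (old_eps.size() != 1) {
        throw std::runtime_error("expected exactly 1 strict source, found "
                                 + std::to_string(old_eps.size()));
    }
    return *old_eps.begin();
}
```

Note that `std::set_difference` requires sorted input ranges, so an explicit erase loop is the portable way to take a difference of unordered containers.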
@@ -205,7 +210,9 @@ range_streamer::get_all_ranges_with_strict_sources_for(const sstring& keyspace_n
bool range_streamer::use_strict_sources_for_ranges(const sstring& keyspace_name) {
auto& ks = _db.local().find_keyspace(keyspace_name);
auto& strat = ks.get_replication_strategy();
return !_db.local().is_replacing()
// FIXME: DatabaseDescriptor.isReplacing()
auto is_replacing = false;
return !is_replacing
&& use_strict_consistency()
&& !_tokens.empty()
&& _metadata.get_all_endpoints().size() != strat.get_replication_factor();
@@ -222,17 +229,25 @@ void range_streamer::add_ranges(const sstring& keyspace_name, std::vector<range<
}
}
std::unordered_map<inet_address, std::vector<range<token>>> range_fetch_map;
// TODO: share code with unordered_multimap_to_unordered_map
std::unordered_map<inet_address, std::vector<range<token>>> tmp;
for (auto& x : get_range_fetch_map(ranges_for_keyspace, _source_filters, keyspace_name)) {
range_fetch_map[x.first].emplace_back(x.second);
auto& addr = x.first;
auto& range_ = x.second;
auto it = tmp.find(addr);
if (it != tmp.end()) {
it->second.push_back(range_);
} else {
tmp.emplace(addr, std::vector<range<token>>{range_});
}
}
if (logger.is_enabled(logging::log_level::debug)) {
for (auto& x : range_fetch_map) {
for (auto& x : tmp) {
logger.debug("{} : range {} from source {} for keyspace {}", _description, x.second, x.first, keyspace_name);
}
}
_to_fetch.emplace(keyspace_name, std::move(range_fetch_map));
_to_fetch.emplace(keyspace_name, std::move(tmp));
}
future<streaming::stream_state> range_streamer::fetch_async() {
@@ -253,17 +268,4 @@ future<streaming::stream_state> range_streamer::fetch_async() {
return _stream_plan.execute();
}
std::unordered_multimap<inet_address, range<token>>
range_streamer::get_work_map(const std::unordered_multimap<range<token>, inet_address>& ranges_with_source_target,
const sstring& keyspace) {
auto filter = std::make_unique<dht::range_streamer::failure_detector_source_filter>(gms::get_local_failure_detector());
std::unordered_set<std::unique_ptr<i_source_filter>> source_filters;
source_filters.emplace(std::move(filter));
return get_range_fetch_map(ranges_with_source_target, source_filters, keyspace);
}
bool range_streamer::use_strict_consistency() {
return service::get_local_storage_service().db().local().get_config().consistent_rangemovement();
}
} // dht


@@ -62,7 +62,10 @@ public:
using stream_plan = streaming::stream_plan;
using stream_state = streaming::stream_state;
using i_failure_detector = gms::i_failure_detector;
static bool use_strict_consistency();
static bool use_strict_consistency() {
//FIXME: Boolean.parseBoolean(System.getProperty("cassandra.consistent.rangemovement","true"));
return true;
}
public:
/**
* A filter applied to sources to stream from when constructing a fetch map.
@@ -70,7 +73,6 @@ public:
class i_source_filter {
public:
virtual bool should_include(inet_address endpoint) = 0;
virtual ~i_source_filter() {}
};
/**
@@ -146,11 +148,11 @@ private:
const std::unordered_set<std::unique_ptr<i_source_filter>>& source_filters,
const sstring& keyspace);
public:
static std::unordered_multimap<inet_address, range<token>>
get_work_map(const std::unordered_multimap<range<token>, inet_address>& ranges_with_source_target,
const sstring& keyspace);
#if 0
public static Multimap<InetAddress, Range<Token>> getWorkMap(Multimap<Range<Token>, InetAddress> rangesWithSourceTarget, String keyspace)
{
return getRangeFetchMap(rangesWithSourceTarget, Collections.<ISourceFilter>singleton(new FailureDetectorSourceFilter(FailureDetector.instance)), keyspace);
}
// For testing purposes
Multimap<String, Map.Entry<InetAddress, Collection<Range<Token>>>> toFetch()

Some files were not shown because too many files have changed in this diff.