Compare commits

..

434 Commits

Author SHA1 Message Date
Pekka Enberg
d3a05737f7 release: prepare for 0.14.1 2016-01-05 15:30:47 +02:00
Shlomi Livne
21c68d3da9 dist/redhat: Increase scylla-server service start timeout to 15 min
Fixes #749

Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2016-01-05 15:30:41 +02:00
Pekka Enberg
88d544ed14 Merge "Fixes for AMI" from Shlomi
"The patch fixes a few issues caused by generalizing the ami scripts. The
 scylla_bootparam_setup requires invocation with ami flag. The
 scylla_install is missing some steps executed by the scylla-ami.sh."
2016-01-04 15:21:24 +02:00
Shlomi Livne
638c0c0ea8 Fixing missing items in move from scylla-ami.sh to scylla_install
scylla-ami.sh moved some AMI-specific files. These parts were
dropped when converging scylla-ami into scylla_install. This fixes that.

Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2016-01-04 14:57:57 +02:00
Shlomi Livne
f3e96e0f0b Invoke scylla_bootparam_setup with/without ami flag
Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2016-01-04 14:57:57 +02:00
Shlomi Livne
fa15440665 Fix error: no integer expression expected in AMI creation
The script imports /etc/sysconfig/scylla-server for configuration
settings (NR_PAGES). The /etc/sysconfig/scylla-server includes an AMI
param which is of string value and is called as a last step in
scylla_install (after scylla_bootparam_setup has been initiated).

The AMI variable is set up in scylla_install and is used in multiple
scripts. To resolve the conflict, move the import of
/etc/sysconfig/scylla-server to after the AMI variable has been compared.

Fixes: #744

Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2016-01-04 14:57:33 +02:00
Takuya ASADA
c4d66a3beb dist: apply limits settings correctly on Ubuntu
Fixes #738

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2016-01-02 12:20:47 +02:00
Shlomi Livne
5023f9bbab Make sure the directory we are writing coredumps to exists
After upgrading an AMI and trying to stop and start a machine,
/var/lib/scylla/coredump is not created. Create the directory if it does
not exist prior to generating a core.

Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
2015-12-31 13:21:54 +02:00
Pekka Enberg
3efc145562 dist: Increase NOFILE rlimit to 200k
Commit 2ba4910 ("main: verify that the NOFILE rlimit is sufficient")
added a recommendation to set NOFILE rlimit to 200k. Update our release
binaries to do the same.
2015-12-30 12:21:01 +02:00
Avi Kivity
1ad638f8bf main: verify that the NOFILE rlimit is sufficient
Require 10k files, recommend 200k.

Allow bypassing via --developer-mode.

Fixes #692.
2015-12-30 11:05:21 +02:00
Avi Kivity
43f4a8031d init: bail out if running not on an XFS filesystem
Allow an override via '--developer-mode true', and use it in
the docker setup, since that cannot be expected to use XFS.

Fixes #658.
2015-12-30 11:05:14 +02:00
Pekka Enberg
27dbbe1ca4 release: prepare for 0.14 2015-12-30 10:28:58 +02:00
Pekka Enberg
0aa105c9cf Merge "load report a negative value" from Amnon
"This series solves an issue with the load broadcaster that reports negative
 values due to an integer wraparound.  While fixing this issue an additional
 change was made so that the load_map would return doubles and not a formatted
 string.  This is a better API: safer and better documented."
2015-12-30 10:21:55 +02:00
Nadav Har'El
f0b27671a2 murmur3 partitioner: remove outdated comment, and code
Since commit 16596385ee, long_token() is already checking
t.is_minimum(), so the comment which explains why it does not (for
performance) is no longer relevant. And we no longer need to check
t._kind before calling long_token (the check we do here is the same
as is_minimum).

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
2015-12-30 10:01:29 +02:00
Nadav Har'El
de5a3e5c5a repair: check columnFamilies list
Check the list of column families passed as an option to repair, to
provide the user with a more meaningful exception when a non-existent
column family is passed.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
2015-12-30 09:59:54 +02:00
Nadav Har'El
3ae29216c8 repair: add missing ampersand
This was a plain bug - ranges_opt is supposed to parse the option into
the vector "var", but took the vector by value.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
2015-12-30 09:46:13 +02:00
Nadav Har'El
a0a649c1be repair: support "columnFamilies" parameter
Support the "columnFamilies" parameter of repair, allowing the user to repair
only some of the column families of a keyspace, instead of all of them.
For example, using a command like "nodetool repair keyspace cf1 cf2".

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
2015-12-30 09:45:28 +02:00
Lucas Meneghel Rodrigues
43d39d8b03 scylla_coredump_setup: Don't call yum on scylla server spec file
The script scylla_coredump_setup was introduced in
9b4d0592, and added to the scylla rpm spec file, as a
post script. However, calling yum when there's one
yum instance installing scylla-server will cause a deadlock,
since yum waits for the yum lock to be released, and the
original yum process waits for the script to end.

So let's remove this from the script. Debian shouldn't be
affected, since it was never added to the debian build
rules (to the best of my knowledge, after analyzing 9b4d0592),
hence there was nothing to remove there. It would cause the
same problem with apt-get if it were used.

CC: Takuya ASADA <syuu@scylladb.com>
[ penberg: Rebase and drop statement about 'abrt' package not in Fedora. ]
Signed-off-by: Lucas Meneghel Rodrigues <lmr@scylladb.com>
2015-12-30 09:38:36 +02:00
Nadav Har'El
ebebaa525d repair: fix missing default values
A default value was not set for the "incremental" and "parallelism"
repair parameters, so Scylla can wrongly decide that they have an
unsupported value.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
2015-12-29 15:39:47 +02:00
Amnon Heiman
ec379649ea API: repair to use documented params
The repair API used to have an undocumented parameter list similar to
origin.

This patch changes the way repair gets its parameters.
Instead of one undocumented string, it now lists all the different
optional parameters in the swagger file and accepts them explicitly.

Reviewed-by: Nadav Har'El <nyh@scylladb.com>
Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-29 15:38:44 +02:00
Amnon Heiman
f0d68e4161 main: start the http server in the first step
This change sets the http server to start as the first step in the boot
order.

It is helpful if some other step takes a long time or gets stuck.

Fixes #725

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-29 14:20:57 +02:00
Avi Kivity
c8b09a69a9 lsa: disable constant_time_size in binomial_heap implementation
Corrupts heap on boost < 1.60, and not needed.

Fixes #698.
2015-12-29 12:59:00 +01:00
Vlad Zolotarov
756de38a9d database: actually check that a snapshot directory exists
Actually check that a snapshot directory with a given tag
exists instead of just checking that a 'snapshot' directory
exists.

Fixes issue #689

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2015-12-29 12:59:00 +01:00
Amnon Heiman
71905081b1 API: report the load map as an unformatted double
In origin, the storage_service reports the load map as a formatted string.
As an API, a better option is to report the load map as a double and let
the JMX proxy do the formatting.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-29 11:55:34 +02:00
Amnon Heiman
06e1facc34 load_broadcaster reports negative size
map_reduce0 converts the result value to the type of the init value. In
load_broadcaster, 0 is of type int.
This results in an integer wraparound and negative values.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-29 11:55:34 +02:00
Avi Kivity
41bd266ddd db: provide more information on "Unrecognized error" while loading sstables
This information can be used to understand the root cause of the failure.

Refs #692.
2015-12-29 10:23:32 +02:00
Nadav Har'El
7247f055df repair: partial support for some options
Add partial support for the "incremental" option (only support the
"false" setting, i.e., not incremental repair) and the "parallelism"
option (the choice of sequential or parallel repair is ignored - we
always use our own technique).

This is needed because scylla-jmx passes these options by default
(e.g., "incremental=false" is passed to say this is *not* incremental
repair, and we just need to allow this and ignore it).

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
2015-12-29 09:38:09 +02:00
Nadav Har'El
3cfa39e1f0 repair: log repair options
When throwing an "unsupported repair options" exception to the caller
(such as "nodetool repair"), also list which options were not recognized.
Additionally, list the options when logging the repair operation.

This patch includes an operator<< implementation for pretty-printing an
std::unordered_map. We may want to move it later to a more central
location - even Seastar (like we have a pretty-printer for std::vector
in core/sstring.hh).

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
2015-12-29 09:37:30 +02:00
Raphael S. Carvalho
b7d36af26f compaction: fix max_purgeable calculation
max_purgeable was being incorrectly calculated because the code
that creates the vector of uncompacted sstables was wrong.
This value is used to determine whether or not a tombstone can
be purged.
Operator < should be used instead in the callback passed
as the third parameter to boost::set_difference.
This fix is a step towards closing the issue #676.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-29 09:30:08 +02:00
Takuya ASADA
46767fcacf dist: fix .rpm build error (File not found: scylla_extlinux_setup)
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-29 09:26:58 +02:00
Pekka Enberg
ca1f9f1c9a main: Fix implicitly disabled client encryption options
The start_native_transport() function in storage_service expects the
'enabled' option to be defined. If the option is not defined, it means
that encryption is implicitly disabled.

Fixes #718.
2015-12-28 16:24:49 +02:00
Pekka Enberg
a76b3a009b Merge "use steady_clock where monotonic clock is required" from Vlad
"The first patch in this series fixes the issue #638 in scylla.
 The second one fixes the tests to use the appropriate clock."
2015-12-28 13:35:50 +02:00
Avi Kivity
561bb79d22 Merge "CQL server SSL" from Calle
"* Update scylla.conf section
 * Add SSL capability to cql server
 * Use conf and initiate optional SSL cql server in
   main/storage_service"
2015-12-28 12:55:25 +02:00
Avi Kivity
72cb8d4461 Merge "Messaging service TLS" from Calle
"Adds support for TLS/SSL encrypted (and cert verified)
connections for message service

* Modify config option to match "native" style certificate management
* Add SSL options to messaging service and generate SSL server/client
  endpoints when required
* Add config option handling to init/main"
2015-12-28 12:54:28 +02:00
Calle Wilund
fae3bb7a24 storage_service: Set up CQL server as SSL if specified
* Massage user options in main
* Use them in storage_service, and if needed, load certificates etc
  and pass to transport/cql server.

Conflicts:
	service/storage_service.cc
2015-12-28 10:13:48 +00:00
Calle Wilund
51d3990261 cql_server: Allow using SSL socket
An optional credentials argument determines whether an SSL or normal
server socket is created.

Note: This does not follow the pattern of "socket as argument", simply
because this is a distributed object, so only trivial or immutable
objects should be passed to it.
2015-12-28 10:13:48 +00:00
Calle Wilund
d8b2581a07 scylla.conf: Update client_encryption_options with scylla syntax
Using certificate+key directly
2015-12-28 10:13:48 +00:00
Calle Wilund
5f003f9284 scylla.conf: Modify server_encryption_options section
Describe scylla version of option.

Note, for test usage, the below should be workable:

server_encryption_options:
    internode_encryption: all
    certificate: seastar/tests/test.crt
    truststore: seastar/tests/catest.pem
    keyfile: seastar/tests/test.key

Since the seastar test suite contains a snakeoil cert + trust
combo
2015-12-28 10:10:35 +00:00
Calle Wilund
70f293d82e main/init: Use server_encryption_options
* Reads server_encryption_options
* Interprets the above, loads and initializes credentials,
  and uses them with messaging service init if required
2015-12-28 10:10:35 +00:00
Calle Wilund
d1badfa108 messaging_service: Optionally create SSL endpoints
* Accept port + credentials + option for what to encrypt
* If set, enable an SSL listener at ssl_port
* Check outgoing connections by IP to determine if
  they should go to SSL/normal endpoint

Requires seastar RPC patch

Note: currently, the connections created by the messaging service
do _not_ do certificate name verification. While DNS lookup
is probably not that expensive here, I am not 100% sure it is
the desired behaviour.
Normal trust is however verified.
2015-12-28 10:10:35 +00:00
Calle Wilund
1a9fb4ed7f config: Modify/use server_encryption_options
* Mark option used
* Make sub-options adapted to seastar-tls usable values (i.e. x509)

Syntax is now:

server_encryption_options:
	internode_encryption: <none, all, dc, rack>
	certificate: <path-to-PEM-x509-cert> (default conf/scylla.crt)
	keyfile: <path-to-PEM-x509-key> (default conf/scylla.key)
	truststore: <path-to-PEM-trust-store-file> (default empty,
                                                    use system trust)
2015-12-28 10:10:35 +00:00
Calle Wilund
b7baa4d1f5 config: clean up some style + move method to cc file 2015-12-28 10:10:35 +00:00
Takuya ASADA
fc29a341d2 dist: show usage and scylla-server status when login to AMI instance
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-28 11:40:34 +02:00
Avi Kivity
827a4d0010 Merge "streaming: Invalidate cache upon receiving of stream" from Asias
"When a node gains or regains responsibility for certain token ranges, streaming
will be performed; upon receipt of the stream data, the row cache
is invalidated for that range.

Refs #484."
2015-12-28 10:24:46 +02:00
Amnon Heiman
2c79fe1488 storage_service: describe_ring return full data
The describe_ring method in storage_service did not report the start and
end tokens.

Also, for rpc addresses that are not the local address, it returned the
value representation (including the version) and not just the address.

Fixes #695

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-28 09:56:12 +02:00
Takuya ASADA
0abcf5b3f3 dist: use readable time format on coredump file, instead of unix time
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-28 09:55:05 +02:00
Takuya ASADA
940c34b896 dist: don't abort scylla_coredump_setup when 'yum remove abrt' failed
It always fails when abrt is not installed.
This also fixes build_ami.sh failing because of this error.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-28 09:40:57 +02:00
Vlad Zolotarov
0f8090d6c7 tests: use steady_clock where monotonic clock is required
Use steady_clock instead of high_resolution_clock where a monotonic
clock is required. high_resolution_clock is essentially a
system_clock (Wall Clock) and therefore must not be assumed monotonic,
since the Wall Clock may move backwards due to time/date adjustments.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2015-12-27 18:08:15 +02:00
Vlad Zolotarov
33552829b2 core: use steady_clock where monotonic clock is required
Use steady_clock instead of high_resolution_clock where a monotonic
clock is required. high_resolution_clock is essentially a
system_clock (Wall Clock) and therefore must not be assumed monotonic,
since the Wall Clock may move backwards due to time/date adjustments.

Fixes issue #638

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2015-12-27 18:07:53 +02:00
Takuya ASADA
7f4a1567c6 dist: support non-AMI boot parameter setup, add parameters to preallocate hugepages at boot time
Fixes #172

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-27 17:56:49 +02:00
Takuya ASADA
6bf602e435 dist: setup ntpd on AMI
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-27 17:54:32 +02:00
Avi Kivity
2b22772e3c Merge "Introduce keep alive timer for stream_session" from Asias
"Fixes stream_session hangs:

1) if the sending node is gone, the receiving peer will wait forever
2) if the node which should send COMPLETE_MESSAGE to the peer node is gone,
   the peer node will wait forever"
2015-12-27 16:56:32 +02:00
Avi Kivity
f3980f1fad Merge seastar upstream
* seastar 51154f7...8b2171e (9):
  > memcached: avoid a collision of an expiration with time_point(-1).
  > tutorial: minor spelling corrections etc.
  > tutorial: expand semaphores section
  > Merge "Use steady_clock where monotonic clock is required" from Vlad
  > Merge "TLS fixes + RPC adaption" from Calle
  > do_with() optimization
  > tutorial: explain limiting parallelism using semaphores
  > submit_io: change pending flushes criteria
  > apps: remove defunct apps/seastar

Adjust code to use steady_clock instead of high_resolution_clock.
2015-12-27 14:40:20 +02:00
Avi Kivity
0687d7401d Merge "storage_service updates" from Asias
"
- Fix erase of new_replica_endpoints in get_changed_ranges_for_leaving
- Introduce ring_delay_ms option
"
2015-12-27 12:46:37 +02:00
Nadav Har'El
06f8dd4eb2 repair: job id must start at 1
This patch fixes a bug where the *first* run of "nodetool repair" always
returned immediately, instead of waiting for the repair to complete.

Repair operations are asynchronous: Starting a repair returns a numeric
id, which can then be used to query for the repair's completion, and this
is what "nodetool repair" does (through our JMX layer). We started with
the repair ID "0", the next one is "1", and so on.

The problem is that "nodetool repair", when it sees 0 being returned,
treats it not as a regular repair ID, but rather as an answer that
there is nothing to repair - printing a message to that effect and *not*
waiting for the repair (which was correctly started) to complete.

The trivial fix is to start our repair IDs at 1, instead of 0.
We currently do not return 0 in any case (we don't know there is nothing
to repair before we actually start the work, and parameter errors
cause an exception, not a return of 0).

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
2015-12-27 12:42:26 +02:00
Avi Kivity
93aeedf403 Merge "Fixes for CentOS/RHEL support" from Takuya
"Recent changes to scripts cause errors on CentOS/RHEL; this patchset fixes them."
2015-12-27 12:21:29 +02:00
Glauber Costa
e299127e81 main: check if options file can be read.
If we can't open the file, we will fail with a mysterious error. It is a common
scenario, though, since people who are unaware of, or have just forgotten about,
seastar's restriction of direct io access may put those files in tmpfs and other
mount points.

We have a direct_io check that is designed exactly for this purpose, to give
the user a better error message. This patch makes use of it.

Fixes #644

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2015-12-27 12:20:40 +02:00
Asias He
f57ba6902b storage_service: Introduce ring_delay_ms option
It is hard-coded as 30 seconds at the moment.

Usage:
$ scylla --ring-delay-ms 5000

Time a node waits to hear from other nodes before joining the ring in
milliseconds.

Same as -Dcassandra.ring_delay_ms in cassandra.
2015-12-25 15:08:22 +08:00
Asias He
9c07ed8db6 storage_service: Fix erase new_replica_endpoints in get_changed_ranges_for_leaving
We need to calculate begin() and end() in the loop since elements in
new_replica_endpoints might be removed.

Refs #700
2015-12-25 15:08:22 +08:00
Asias He
88846bc816 storage_service: Add more debug info in decommission
It is useful for debugging decommission issues.
2015-12-25 15:08:22 +08:00
Asias He
19f1875682 gossip: Print endpoint_state_map debug info in trace level
This generates too many logs with debug level. Make it trace level.
2015-12-25 15:08:22 +08:00
Nadav Har'El
06ab43a7ee murmur3 partitioner: fix midpoint() algorithm
The midpoint() algorithm to find a token between two tokens doesn't
work correctly in case of wraparound. The code tried to handle this
case, but did it wrong. So this patch fixes the midpoint() algorithm,
and adds clearer comments about why the fixed algorithm is correct.

This patch also modifies two midpoint() tests in partitioner_test,
which were incorrect - they verified that midpoint() returns some expected
values, but the expected values were wrong!

We also add to the test a more fundamental test of midpoint() correctness,
which doesn't check the midpoint against a known value (which is easy to
get wrong, as indeed happened); rather, we simply check that the midpoint
is really inside the range (according to the token ordering operator).
This simple test failed with the old implementation of midpoint() and
passes with the new one.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
2015-12-24 17:19:49 +02:00
Avi Kivity
3392f02b54 Merge "Make date parser more liberal" from Paweł
"This series makes date and time parsing more liberal so that Scylla
accepts the same date formats the origin does.

Fixes #521."
2015-12-24 17:18:04 +02:00
Asias He
20c258f202 streaming: Fix session hang with maybe_completed: WAIT_COMPLETE -> WAIT_COMPLETE
The problem is that we set the session state to WAIT_COMPLETE in
send_complete_message's continuation; the peer node might send
COMPLETE_MESSAGE before we run the continuation, thus we set the wrong
state in COMPLETE_MESSAGE's handler and will not close the session.

Before:

   GOT STREAM_MUTATION_DONE
   receive  task_completed
   SEND COMPLETE_MESSAGE to 127.0.0.2:0
   GOT COMPLETE_MESSAGE, from=127.0.0.2, connecting=127.0.0.3, dst_cpu_id=0
   complete: PREPARING -> WAIT_COMPLETE
   GOT COMPLETE_MESSAGE Reply
   maybe_completed: WAIT_COMPLETE -> WAIT_COMPLETE

After:

   GOT STREAM_MUTATION_DONE
   receive  task_completed
   maybe_completed: PREPARING -> WAIT_COMPLETE
   SEND COMPLETE_MESSAGE to 127.0.0.2:0
   GOT COMPLETE_MESSAGE, from=127.0.0.2, connecting=127.0.0.3, dst_cpu_id=0
   complete: WAIT_COMPLETE -> COMPLETE
   Session with 127.0.0.2 is complete
2015-12-24 20:34:44 +08:00
Asias He
c971fad618 streaming: Introduce keep alive timer for each stream_session
If the session is idle for 10 minutes, close the session. This can
detect the following hangs:

1) if the sending node is gone, the receiving peer will wait forever
2) if the node which should send COMPLETE_MESSAGE to the peer node is
gone, the peer node will wait forever

Fixes simple_kill_streaming_node_while_bootstrapping_test.
2015-12-24 20:34:44 +08:00
Asias He
f527e07be6 streaming: Get stream_session in STREAM_MUTATION handler
Get the 'from' address from cinfo. It is needed to figure out which stream
session this mutation belongs to, since we need to update the keep
alive timer for this stream session.
2015-12-24 20:34:44 +08:00
Asias He
d7a8c655a6 streaming: Print All sessions completed after state change message
close_session will print the "All sessions completed" message; print the
state change message before that.
2015-12-24 20:34:44 +08:00
Asias He
bd276fd087 streaming: Increase retry timeout
Currently, if the node is actually down, although the streaming_timeout
is 10 seconds, the sending of the verb will return rpc_closed error
immediately, so we give up in 20 * 5 = 100 seconds. After this change,
we give up in 10 * 30 = 300 seconds at least, and 10 * (30 + 30) = 600
seconds at most.
2015-12-24 20:34:44 +08:00
Asias He
eaea09ee71 streaming: Retransmit COMPLETE_MESSAGE message
It is a one-way message at the moment. If a COMPLETE_MESSAGE is lost, no
one will close the session. The first step in fixing the issue is to try to
retransmit the message.
2015-12-24 20:34:44 +08:00
Asias He
d1d6395978 streaming: Print old state before setting the new state 2015-12-24 20:34:44 +08:00
Takuya ASADA
bf9547b1c4 dist: support RHEL on scylla_install
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-24 18:48:30 +09:00
Takuya ASADA
bb0880f024 dist: use /etc/os-release instead of /etc/redhat-release
Since other scripts use /etc/os-release, it is better to use the same one here.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-24 18:48:30 +09:00
Takuya ASADA
b6df28f3d5 dist: use $ID instead of $NAME to detect type of distribution
$NAME is the full name of the distribution, which is too long for scripts.
$ID is the shortened form, which is more useful.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-24 18:48:30 +09:00
Takuya ASADA
0a4b68d35e dist: support CentOS yum repository
Fixes #671

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-24 18:48:30 +09:00
Takuya ASADA
8f4e90b87a dist: use tsc clocksource on AMI
Stop using xen clocksource, use tsc clocksource instead.
Fixes #462

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-22 22:29:32 +02:00
Amnon Heiman
b0856f7acf API: Init value for cf_map reduce should be of type int64_t
The helper functions for summing statistics over the column family are
template functions that infer the return type according to the type of the
Init param.

In the API the return value should be int64_t; passing an integer would
cause a number wraparound.

A partial output from the nodetool cfstats after the fix

nodetool cfstats keyspace1
Keyspace: keyspace1
	Read Count: 0
	Read Latency: NaN ms.
	Write Count: 4050000
	Write Latency: 0.009178098765432099 ms.
	Pending Flushes: 0
		Table: standard1
		SSTable count: 12
		Space used (live): 1118617445
		Space used (total): 23336562465

Fixes #682

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-22 17:33:13 +02:00
Tomasz Grabiec
88f5da5d1d Merge branch 'calle/paging_fixes' from seastar-dev.git
From Calle:

Fixes #589
Query should not return dangling static row in partition without any
regular/ck columns if a CK restriction is applied.

Refs #650
Fixes bug in CK range code for paging, and removes CK use for tables with no
clustering -> way simpler code. Also removed lots of workaround code no longer
required.

Note that this patch set does not fully fix #650/paging since bug #663 causes
duplicate rows. Still almost there though.
2015-12-22 11:22:42 +01:00
Avi Kivity
926d340661 logger: be robust when exceptions are thrown while stringifying args
Instead of propagating the exception, swallow it and print it out in
the log message.

Fixes #672.
2015-12-21 19:58:08 +01:00
Paweł Dziepak
cf949e98cb tests/types: add more tests for date and time parsing
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-21 15:34:17 +01:00
Paweł Dziepak
633a13f7b3 types: timestamp_from_string: accept more date formats
Boost::date_time doesn't accept some of the date and time formats that
the origin does (e.g. 2013-9-22 or 2013-009-22).

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-21 15:30:35 +01:00
Calle Wilund
f118222b2d query_pagers: Remove unneeded clustering + remove static workaround
Refs #640

* Remove use of cluster key range for tables without CK
  Checking CK existence once and using the info allows us to remove some
  stupid complexity in checking for "last key" match
* With the fix for #589 we can also remove some superfluous code to
  compensate for that issue, and make "partition end" simpler
* Remove extra row in CK case. Not needed anymore

End result is that pager now more or less only relies on adapted query
ranges.
2015-12-21 14:19:45 +00:00
Calle Wilund
72a079d196 paging_state: Make clustering key optional 2015-12-21 14:19:45 +00:00
Calle Wilund
c868d22d0c db/serializer: Add support for optional<T> to be serialized
Template specialization.

Simply wraps the underlying type serialization and adds a "bool"
check mark first in the stream.
2015-12-21 14:19:45 +00:00
Calle Wilund
803b58620f data_output: specialize serialized_size for bool to ensure sync with write 2015-12-21 14:19:45 +00:00
Paweł Dziepak
d41807cb66 types: timestamp_from_string(): restore indentation
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-21 15:17:50 +01:00
Paweł Dziepak
873ed78358 types: catch parsing errors in timestamp_from_string()
timestamp_from_string() is used by both the timestamp and date types, so it
is better to move the try { } catch { } into the function itself instead
of expecting its callers to catch exceptions.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-21 15:14:36 +01:00
Takuya ASADA
0d1ef007d3 dist: skip mounting RAID if it's already mounted
On AMI, scylla-server fails to restart via systemctl because scylla_prepare tries to mount /var/lib/scylla even if it's already mounted.
This patch fixes the issue.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-21 15:50:09 +02:00
Avi Kivity
c3d0ae822d Merge seastar upstream
* seastar b44d729...51154f7 (6):
  > semaphore: add with_semaphore()
  > scripts: posix_net_conf.sh: don't transform wide CPU mask
  > resource: fix build for systems without HWLOC
  > build: link libasan before all other libraries
  > Use sys_membarrier() when available
  > build: add missing library (boost_filesystem)
2015-12-21 14:45:57 +02:00
Calle Wilund
8c17e9e26c mutation_partition: Do not return static row if CK range does not match
Fixes #589

If we got no rows, but have live static columns, we should only
give them back IFF we did not have any CK restrictions.
If ck:s exist, and we have a restriction on them, we either have matching
rows, or return nothing, since cql does not allow "is null".
2015-12-21 10:38:48 +00:00
Pekka Enberg
98454b13b9 cql3: Remove some ifdef'd code 2015-12-21 10:38:48 +00:00
Pekka Enberg
c6541b4cc2 cql3: Remove untranslated IMeasurableMemory code from column_identifier
We will not be using it so just remove the untranslated code.
2015-12-21 10:38:48 +00:00
Pekka Enberg
81d72afd85 cql3: Move delete_statement implementation to source file 2015-12-21 10:38:48 +00:00
Pekka Enberg
cd58ea3b96 cql3: Move modification_statement implementation to source file 2015-12-21 10:38:48 +00:00
Pekka Enberg
bcd602d3f8 cql3: Move parsed_statement implementation to source file 2015-12-21 10:38:48 +00:00
Pekka Enberg
44ba4857eb cql3: Move property_definitions implementation to source file 2015-12-21 10:38:48 +00:00
Pekka Enberg
2759473c7a cql3: Move select_statement implementation to source file 2015-12-21 10:38:48 +00:00
Pekka Enberg
7a5d6818a3 cql3: Move update_statement implementation to source file 2015-12-21 10:38:48 +00:00
Paweł Dziepak
9aa24860d7 test/sstables: add more key_reader tests
This patch introduces a test for reading keys from a single sstable with
the range beginning and end being keys present in the index summary.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-21 10:38:48 +00:00
Paweł Dziepak
2fd7caafa0 sstables: respect range inclusiveness in key_reader
When choosing a relevant range of buckets, it wasn't taken into account
whether the range bounds are inclusive or not. That may have resulted in
more buckets being read than necessary, which was a condition not
expected by the code responsible for looking for relevant keys inside
the buckets.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-21 10:38:48 +00:00
Raphael S. Carvalho
d8e810686a sstables: remove outdated comment
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-21 10:38:48 +00:00
Raphael S. Carvalho
99710ae0e6 db: fix indentation
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-21 10:38:48 +00:00
Raphael S. Carvalho
e1edc2111c sstables: fix comment describing sstable::mark_for_deletion
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-21 10:38:48 +00:00
Raphael S. Carvalho
22ac260059 db: add missing sstable::mark_for_deletion call
If an sstable doesn't belong to the current shard, mark_for_deletion
should still be called so that the deletion manager works.
It doesn't mean that the sstable will be deleted, but that the
sstable is not relevant to the current shard, and thus it can be
deleted by the deletion manager in the future.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-21 10:38:48 +00:00
Asias He
2d32195c32 streaming: Invalidate cache upon receiving of stream
When a node gains or regains responsibility for certain token ranges,
streaming is performed; upon receipt of the stream data, the
row cache is invalidated for those ranges.

Refs #484.
2015-12-21 14:44:13 +08:00
Asias He
517fd9edd4 streaming: Add helper to get distributed<database> db 2015-12-21 14:42:47 +08:00
Asias He
d51227ad9c streaming: Remove transfer_files
It is never used.
2015-12-21 14:42:47 +08:00
Asias He
c25393a3f6 database: Add non-const version of get_row_cache
We need this to invalidate row cache of a column family.
2015-12-21 14:42:47 +08:00
Tomasz Grabiec
324ad43be1 Merge branch 'penberg/cql-cleanups/v1' from seastar-dev.git
Another round of cleanups to the CQL code from Pekka.
2015-12-18 17:36:45 +01:00
Tomasz Grabiec
0862d2f531 Merge branch 'pdziepak/fix-sstables-key_reader-663/v2'
From Paweł:

"This series fixes sstables::key_reader not respecting range inclusiveness
if the bounds were the keys that were present in the index summary.

Fixes #663."
2015-12-18 17:35:09 +01:00
Paweł Dziepak
b39d1fb1fc test/sstables: add more key_reader tests
This patch introduces a test for reading keys from a single sstable with
the range beginning and end being keys present in the index summary.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-18 17:24:29 +01:00
Paweł Dziepak
18b8d7cccc sstables: respect range inclusiveness in key_reader
When choosing a relevant range of buckets, it wasn't taken into account
whether the range bounds are inclusive or not. That may have resulted in
more buckets being read than necessary, which was a condition not
expected by the code responsible for looking for relevant keys inside
the buckets.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-18 17:24:26 +01:00
Pekka Enberg
eeadf601e6 Merge "cleanups and improvements" from Raphael 2015-12-18 13:45:11 +02:00
Pekka Enberg
9521ef6402 cql3: Remove some ifdef'd code 2015-12-18 13:29:58 +02:00
Pekka Enberg
f5597968ac cql3: Remove untranslated IMeasurableMemory code from column_identifier
We will not be using it so just remove the untranslated code.
2015-12-18 13:29:58 +02:00
Pekka Enberg
b754de8f4a cql3: Move delete_statement implementation to source file 2015-12-18 13:29:58 +02:00
Pekka Enberg
227e517852 cql3: Move modification_statement implementation to source file 2015-12-18 13:29:58 +02:00
Pekka Enberg
ca963d470e cql3: Move parsed_statement implementation to source file 2015-12-18 13:07:55 +02:00
Pekka Enberg
ff994cfd39 cql3: Move property_definitions implementation to source file 2015-12-18 13:04:32 +02:00
Pekka Enberg
d7db5e91b6 cql3: Move select_statement implementation to source file 2015-12-18 12:59:22 +02:00
Pekka Enberg
8b780e3958 cql3: Move update_statement implementation to source file 2015-12-18 12:54:19 +02:00
Takuya ASADA
0f46d10011 dist: add execute permission to build_ami_local.sh
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-18 11:56:44 +02:00
Pekka Enberg
e56bf8933f Improve not implemented errors
Print out the function name where we're throwing the exception from to
make it easier to debug such exceptions.
2015-12-18 10:51:37 +01:00
Paweł Dziepak
73f9850e1c tests/key_reader: make sure that the reader lives long enough
Fixes test failure in debug mode.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-18 10:32:37 +01:00
Pekka Enberg
39af3ec190 Merge "Implement nodetool drain" from Paweł
"This series adds support for the nodetool command 'drain'. The general idea
 of this command is to close all connections (both with clients and other
 nodes) and flush all memtables to disk.

 Fixes #662."
2015-12-18 11:16:32 +02:00
Takuya ASADA
ae10d86ba4 dist: add missing build-time dependencies for Ubuntu
This is necessary to make the --cpuset parameter work correctly.

Fixes #554

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-18 11:16:02 +02:00
Takuya ASADA
01bd4959ac dist: downgrade g++ to 4.9 on Ubuntu
Since the Ubuntu package fails to build with g++-5, we need to downgrade it.

Fixes #665

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-18 11:15:22 +02:00
Takuya ASADA
aad9c9741a dist: add hwloc as a dependency
It is required for posix_net_conf.sh

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-18 11:14:07 +02:00
Paweł Dziepak
ae3e1374b4 test.py: add missing tests
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-17 19:08:21 +01:00
Pekka Enberg
89dcc5dfb3 Merge "dist: provide generic Scylla setup script" from Takuya
"Merge AMI scripts to dist/common/scripts, making them usable on non-AMI
 environments. Provides a script that does all settings automatically and
 is able to run as a one-liner like this:

   curl http://url_to_scylla_install | sudo bash -s -- -d /dev/xvdb,/dev/xvdc -n eth0 -l ./

 Also enables coredumps, saving them to /var/lib/scylla/coredump"
2015-12-17 16:01:49 +02:00
Paweł Dziepak
39a65e6294 api: enable storage_service::drain()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-17 14:06:41 +01:00
Paweł Dziepak
9c0b7f9bbe storage_service: implement drain()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-17 14:06:41 +01:00
Paweł Dziepak
dcbba2303e messaging_service: restore indentation
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-17 14:06:41 +01:00
Paweł Dziepak
9661d8936b messaging_service: wait for outstanding requests
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-17 14:06:41 +01:00
Paweł Dziepak
442bc90505 compaction_manager: check whether the manager is already stopped
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-17 14:06:41 +01:00
Paweł Dziepak
25d255390e database: add non-const getter for compaction_manager
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-17 14:06:41 +01:00
Paweł Dziepak
31672906d3 transport: wait for outstanding requests to end during shutdown
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-17 14:06:41 +01:00
Paweł Dziepak
8ee1a44720 storage_service: implement get_drain_progress()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-17 14:06:40 +01:00
Paweł Dziepak
28e6edf927 transport: ignore future when stopping the server
When the server is shutting down, a _stopping flag is set and listeners
are aborted using abort_accept(), which causes accept() calls to return
failed futures. However, the accept handler just checks that the
_stopping flag is set and returns, which causes the failed future to be
destroyed and a warning to be printed.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-17 14:06:40 +01:00
Takuya ASADA
f7796ef7b3 dist: host gcc-5.1.1-4.fc22.src.rpm on our S3 account, since Fedora mirror deleted it
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-17 12:53:32 +02:00
Takuya ASADA
9b4d0592fa dist: enable coredump, save it to /var/lib/scylla/coredump
Enables coredumps, saving them to /var/lib/scylla/coredump

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-17 18:20:27 +09:00
Takuya ASADA
d0e5f8083f dist: provide generic scylla setup script
Merge AMI scripts to dist/common/scripts, make it usable on non-AMI environments.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-17 18:20:03 +09:00
Takuya ASADA
768ad7c4b8 dist: add SET_NIC entry on sysconfig
Add SET_NIC parameter which is already used in scylla_prepare

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-17 18:19:46 +09:00
Takuya ASADA
de1277de29 dist: specify NIC ifname on sysconfig, pass it to posix_net_conf.sh
Support specifying IFNAME for posix_net_conf.sh

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-17 18:19:23 +09:00
Takuya ASADA
04d9a2a210 dist: add mdadm, xfsprogs on package dependencies
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-17 16:59:07 +09:00
Pekka Enberg
9604d55a44 Merge "Add unit test for get_restricted_ranges()" from Tomek 2015-12-17 09:14:30 +02:00
Avi Kivity
b34a1f6a84 Merge "Preliminary changes for handling of schema changes" from Tomasz
"I extracted some less controversial changes on which the schema changes series will depend
 to somewhat reduce the noise in the main series."
2015-12-16 19:08:22 +02:00
Tomasz Grabiec
e2037ebc62 schema: Fix operator==() to include missing fields 2015-12-16 18:06:55 +01:00
Tomasz Grabiec
5a4d47aa1b schema: Remove dead code 2015-12-16 18:06:55 +01:00
Tomasz Grabiec
7a3bae0322 schema: Add equality operators 2015-12-16 18:06:55 +01:00
Tomasz Grabiec
f9d6c7b026 compress: Add equality operators 2015-12-16 18:06:55 +01:00
Tomasz Grabiec
adb93ef31f types: Make name() return const& 2015-12-16 18:06:55 +01:00
Tomasz Grabiec
f28e5f0517 tests: mutation_assertions: Make is_equal_to() check symmetry 2015-12-16 18:06:55 +01:00
Tomasz Grabiec
3324cf0b8c tests: mutation_reader_assertions: Introduce next_mutation() 2015-12-16 18:06:55 +01:00
Tomasz Grabiec
ad99f89228 tests: mutation_assertion: Introduce has_schema() 2015-12-16 18:06:55 +01:00
Tomasz Grabiec
7451ab4356 tests: mutation_assertion: Allow chaining of assertions 2015-12-16 18:06:55 +01:00
Tomasz Grabiec
efe08a0512 tests: mutation_assertions: Own the mutation which is checked
Easier for users because they don't have to ensure liveness.
2015-12-16 18:06:55 +01:00
Tomasz Grabiec
0cdee6d1c3 tests: row_cache: Fix test_update()
The underlying data source for the cache should not be the same memtable
that is later used to update the cache. This fixes the following
assertion failure:

row_cache_test_g: utils/logalloc.hh:289: decltype(auto) logalloc::allocating_section::operator()(logalloc::region&, Func&&) [with Func = memtable::make_reader(schema_ptr, const partition_range&)::<lambda()>]: Assertion `r.reclaiming_enabled()' failed.

The problem is that when memtable is merged into cache their regions
are also merged, so locking cache's region locks the memtable region
as well.
2015-12-16 18:06:55 +01:00
Tomasz Grabiec
09188bccde mutation_query: Make reconcilable_result printable 2015-12-16 18:06:54 +01:00
Tomasz Grabiec
dd51ff0410 query: Make query::result movable 2015-12-16 18:06:54 +01:00
Tomasz Grabiec
872bfadb3d messaging_service: Remove unused parameters from send_migration_request() 2015-12-16 18:06:54 +01:00
Tomasz Grabiec
157af1036b data_output: Introduce write_view() which matches data_input::read_view() 2015-12-16 18:06:54 +01:00
Tomasz Grabiec
054187acf2 db/serializer: Introduce to_bytes/from_bytes helpers 2015-12-16 18:06:54 +01:00
Tomasz Grabiec
e8d49a106c query_processor: Add trace-level logging of queries 2015-12-16 18:06:54 +01:00
Tomasz Grabiec
de09c86681 data_value: Make printable 2015-12-16 18:06:54 +01:00
Tomasz Grabiec
2ee60d8496 tests: sstable_test: Avoid throwing during expected conditions
Makes debugging easier by making 'catch throw' not stop on expected
conditions.
2015-12-16 18:06:54 +01:00
Tomasz Grabiec
ef49c95015 tests: cql_query_env: Avoid exceptions during normal execution 2015-12-16 18:06:54 +01:00
Tomasz Grabiec
50984ad8d4 scylla-gdb.py: Allow the script to be sourced multiple times
Currently sourcing for the second time causes an exception from
pretty printer registration:

Traceback (most recent call last):
  File "./scylla-gdb.py", line 41, in <module>
    gdb.printing.register_pretty_printer(gdb.current_objfile(), build_pretty_printer())
  File "/usr/share/gdb/python/gdb/printing.py", line 152, in register_pretty_printer
    printer.name)
RuntimeError: pretty-printer already registered: scylla
2015-12-16 18:06:51 +01:00
Avi Kivity
e27a5d97f6 Merge "background mutation throttling" from Gleb
Fixes the case where background activity needed to complete CL=ONE writes
is queued up in the storage proxy, and the client adds new work faster
than it can be cleared.
2015-12-16 18:08:12 +02:00
Raphael S. Carvalho
41be378ff1 db: fix build of sstable list in column_family::compact_sstables
The last two loops were incorrectly inside the first one. That's a
bug because a new sstable may be emplaced more than once in the
sstable list, which can cause several problems. mark_for_deletion
may also be called more than once for compacted sstables, however,
it is idempotent.
Found this issue while auditing the code.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-16 17:46:17 +02:00
Avi Kivity
4c84d23f3b Merge seastar upstream
"* seastar 294ea30...b44d729 (5):
  > Merge "Properly distribute IO queues" from Glauber
  > reactor: allow more poll time in virtualized environments
  > reactor: fix idle-poll limit
  > reactor: use a vector of unique_ptr for the IO queues
  > io queues: make the queues really part of the reactor"
2015-12-16 17:42:30 +02:00
Tomasz Grabiec
0d5166dcd8 tests: Add test for get_restricted_ranges() 2015-12-16 13:09:01 +01:00
Tomasz Grabiec
e445e4785c storage_proxy: Extract get_restricted_ranges() as a free function
To make it directly testable.
2015-12-16 13:09:01 +01:00
Tomasz Grabiec
756624ef18 Remove dead code 2015-12-16 13:09:01 +01:00
Tomasz Grabiec
eb27fb1f6b range: Introduce equal() 2015-12-16 13:09:01 +01:00
Calle Wilund
43929d0ec1 commitlog: Add some comments about the IO flow
Documentation.
2015-12-16 13:13:31 +02:00
Gleb Natapov
de63b3a824 storage_proxy: provide timeout for send_mutation verb
Providing a timeout for the send_mutation verb allows rpc to drop
packets that sit in the outgoing queue for too long.
2015-12-16 10:13:46 +02:00
Gleb Natapov
fe4bc741f4 storage_proxy: throttle mutations based on ongoing background activity
With a consistency level less than ALL, mutation processing can move to
the background (meaning the client was answered, but there is still work
to do on behalf of the request). If the background request completion
rate is lower than the incoming request rate, background requests will
accumulate and eventually exhaust all memory resources. This patch's aim
is to prevent this situation by monitoring how much memory all current
background requests take and, when some threshold is passed, to stop
moving requests to the background (by not replying to a client until
either memory consumption moves below the threshold or the request is
fully completed).

There are two main points where each background mutation consumes memory:
holding the frozen mutation until the operation is complete (in order to
hint it if it does not), and on the rpc queue to each replica where it
sits until it's sent out on the wire. The patch accounts for both of
those separately and limits the former to 10% of total memory and the
latter to 6M. Why 6M? The best answer I can give is why not :) But on a
more serious note, the number should be small enough that all the data
can be sent out in a reasonable amount of time, and one shard is not
capable of achieving anything close to full bandwidth, so empirical
evidence shows 6M to be a good number.
2015-12-16 10:13:46 +02:00
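The accounting scheme described in the commit message above can be sketched in plain C++. This is an illustrative analog only, not Scylla's actual storage_proxy code; the class and member names are invented:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative analog of the throttling described above: track the memory
// held by in-flight background mutations and refuse to detach new work
// from the client once a threshold (e.g. 10% of total memory) is crossed.
class background_write_throttle {
    uint64_t _in_flight = 0;   // bytes held by background mutations
    uint64_t _threshold;
public:
    explicit background_write_throttle(uint64_t threshold)
        : _threshold(threshold) {}

    // Returns true if the mutation may complete in the background;
    // false means the client reply must wait until the work finishes.
    bool try_move_to_background(uint64_t mutation_size) {
        if (_in_flight + mutation_size > _threshold) {
            return false;
        }
        _in_flight += mutation_size;
        return true;
    }

    // Called when a background mutation is fully completed.
    void complete(uint64_t mutation_size) {
        _in_flight -= mutation_size;
    }
};
```

Once a request is denied, the client reply is simply delayed, which naturally pushes back on the incoming rate.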
Pekka Enberg
40e8a9c99c sstables/compaction: Fix compilation error with GCC 4.9.2
I am sure it's a compiler issue but I am not ready to give up and
upgrade just yet:

  sstables/compaction.cc:307:55: error: converting to ‘std::unordered_map<int, long int>’ from initializer list would use explicit constructor ‘std::unordered_map<_Key, _Tp, _Hash, _Pred, _Alloc>::unordered_map(std::unordered_map<_Key, _Tp, _Hash, _Pred, _Alloc>::size_type, const hasher&, const key_equal&, const allocator_type&) [with _Key = int; _Tp = long int; _Hash = std::hash<int>; _Pred = std::equal_to<int>; _Alloc = std::allocator<std::pair<const int, long int> >; std::unordered_map<_Key, _Tp, _Hash, _Pred, _Alloc>::size_type = long unsigned int; std::unordered_map<_Key, _Tp, _Hash, _Pred, _Alloc>::hasher = std::hash<int>; std::unordered_map<_Key, _Tp, _Hash, _Pred, _Alloc>::key_equal = std::equal_to<int>; std::unordered_map<_Key, _Tp, _Hash, _Pred, _Alloc>::allocator_type = std::allocator<std::pair<const int, long int> >]’
                 stats->start_size, stats->end_size, {});
2015-12-16 10:03:14 +02:00
Raphael S. Carvalho
36d31a5dab fix cql_query_test
Test was failing because _qp (distributed<cql3::query_processor>) was stopped
before _db (distributed<database>).
The compaction manager is a member of the database, and when the database
is stopped, the compaction manager is also stopped. After a2fb0ec9a,
compaction updates the system table compaction history, and that requires
a working query context. We cannot simply move _qp->stop() to after
_db->stop() because the former relies on migration_manager and
storage_proxy. So the most obvious fix is to clear the global variable
that stores the query context after _qp is stopped.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-16 09:58:46 +02:00
Nadav Har'El
63c0906b16 messaging_service: drop unnecessary explicit templates
The previous patch added message_service read()/write() support for all
types which know how to serialize themselves through our "old" serialization
API (serialize()/deserialize()/serialized_size()).

So we no longer need the almost 200 lines of repetitive code in
messaging_service.{cc,hh} which defined these read/write templates
separately for a dozen different types using their *serialize() methods.
We also no longer need the helper functions read_gms()/write_gms(), which
are basically the same code as that in the template functions added in the
previous patch.

Compilation is not significantly slowed down by this patch, because it
merely replaces a dozen templates by one template that covers them all -
it does not add new template complexity, and these templates are anyway
instantiated only in messaging_service.cc (other code only calls specific
functions defined in messaging_service.cc, and does not use these templates).

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
2015-12-15 19:07:05 +02:00
Nadav Har'El
438f6b79f7 messaging_service: allow any self-serializing type
Currently, messaging_service only supports sending types for which a read/
write function has been explicitly implemented in messaging_service.hh/cc.

Some types already have serialization/deserialization methods inside them,
and those could have been used for the serialization without having to write
new functions for each of these types. Many of these types were already
supported explicitly in messaging_service.{cc,hh}, but some were forgotten -
for example, dht::token.

So this patch adds a default implementation of messaging_service write()/read()
which will work for any type that has these serialization methods.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
2015-12-15 19:07:05 +02:00
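The shape of such a catch-all template can be sketched as follows (hypothetical names; the real messaging_service API differs):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: one template covers every type that provides its
// own serialized_size()/serialize() methods, replacing a hand-written
// write() overload per type.
using bytes = std::vector<uint8_t>;

template <typename T>
bytes write(const T& v) {
    bytes out;
    out.reserve(v.serialized_size());
    v.serialize(out);
    return out;
}

// Any self-serializing type now works without a dedicated overload.
struct token {
    uint8_t b;
    std::size_t serialized_size() const { return 1; }
    void serialize(bytes& out) const { out.push_back(b); }
};
```

The compiler instantiates `write<T>` only for types that actually pass through the messaging layer, so no per-type boilerplate is needed.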
Tomasz Grabiec
a78f4656e8 Introduce ring_position_less_comparator 2015-12-15 18:00:55 +01:00
Avi Kivity
8fc7583224 Merge seastar upstream
* seastar 5b9e3da...294ea30 (9):
  > Merge "IO queues" from Glauber
  > reactor: increment check_direct_io_support to also deal with files
  > Merge "SSL/TLS initial certificate validation" from Calle
  > tutorial.md: remove inaccurate statements about x86
  > build: verify that the installed compiler is up to date
  > build: complain if fossil version of gnutls is installed
  > build: fix debian naming of gnutls-devel package
  > build: add configure-time check for gnutls-devel
  > tutorial.md: introduction to asynchronous programming
2015-12-15 16:50:16 +02:00
Gleb Natapov
e43ae7521f storage_proxy: unfuturize send_to_live_endpoints()
send_to_live_endpoints() is never waited upon; it does its job in the
background. This patch formalizes that by changing the return value to
void and also refactoring the code so that the frozen_mutation shared
pointer is not held longer than it should be: currently it is held until
send_mutation() completes, but since send_mutation() does not use the
frozen_mutation asynchronously this is not necessary.
2015-12-15 15:40:36 +02:00
Tomasz Grabiec
305c2b0880 frozen_mutation: Introduce decorated_key() helper
Requested by Asias for use in streaming code.
2015-12-15 15:16:04 +02:00
Tomasz Grabiec
179b587d62 Abstract timestamp creation behind new_timestamp()
Replace db_clock::now_in_usec() and db_clock::now() * 1000 accesses
where the intent is to create a new auto-generated cell timestamp with
a call to new_timestamp(). Now the knowledge of how to create timestamps
is in a single place.
2015-12-15 15:16:04 +02:00
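A minimal version of such a helper might look like this (illustrative sketch; Scylla's actual new_timestamp() may differ):

```cpp
#include <chrono>
#include <cstdint>

// Illustrative sketch: one place that knows auto-generated cell
// timestamps are microseconds since the epoch.
using api_timestamp = int64_t;

api_timestamp new_timestamp() {
    using namespace std::chrono;
    return duration_cast<microseconds>(
        system_clock::now().time_since_epoch()).count();
}
```

Centralizing this means a later change (say, to a monotonic or hybrid clock) touches one function instead of every call site.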
Avi Kivity
8abd013601 Merge 2015-12-15 15:00:49 +02:00
Avi Kivity
fb8a4f6c1b Merge " implement get_compactions API" from Raphael
"get_compactions returns progress information for each compaction
running in the system. It can be accessed using the swagger UI.
'nodetool compactionstats' is not working yet because of some
pending work on the nodetool side."
2015-12-15 14:59:49 +02:00
Paweł Dziepak
71f92c4d14 mutation_partition: do not move rows_entry::_link
Apparently, the link hook's copy constructor is a no-op and a move
constructor doesn't exist, so the code is correct, but the explicit move
makes the code needlessly confusing.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-15 13:22:23 +01:00
Paweł Dziepak
59245e7913 row_cache: add functions for invalidating entries in cache
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-15 13:21:11 +01:00
Raphael S. Carvalho
833a78e9f7 api: implement get_compactions
get_compactions returns progress information about each
ongoing compaction.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-15 09:50:36 -02:00
Raphael S. Carvalho
193ede68f3 compaction: register and deregister compaction_stats
That's important for the compaction stats API, which will need the stats
data of each ongoing compaction.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-15 09:50:32 -02:00
Raphael S. Carvalho
e74dcc86bd compaction_manager: introduce list of compaction_stats
This list will store compaction_stats for each ongoing compaction.
That's why register and deregister methods are provided.
This change is important for the compaction stats API, which needs data
for each ongoing compaction, such as progress, ks, cf, etc.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-15 09:50:28 -02:00
Raphael S. Carvalho
a26fb15d1a db: add method to get compaction manager from cf
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-15 09:50:20 -02:00
Raphael S. Carvalho
1fba394dd0 sstables: store keyspace and cf in compaction_stats
The reason behind this change is that we will need ks and cf
for the compaction stats API.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-15 09:50:02 -02:00
Raphael S. Carvalho
ac1a67c8bc sstables: move compaction_stats to header file
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-15 09:49:45 -02:00
Avi Kivity
a2eac711cf Merge "compaction history support" from Raphael
"This patchset will make Scylla update the system table
COMPACTION_HISTORY whenever a compaction job finishes.
Functions were added to both update and retrieve the
content of this system table. Compaction history API
is also enabled in this series."
2015-12-15 13:22:14 +02:00
Raphael S. Carvalho
87fbe29cf9 api: add support to compaction history
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-15 09:00:21 -02:00
Takuya ASADA
9c5afb8e58 dist: add scylla-gdb.py on scylla-server-debuginfo rpm package
It will be placed at /usr/src/debug/scylla-server-development/scylla-gdb.py
Fixes #604

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-15 12:13:17 +02:00
Raphael S. Carvalho
a2fb0ec9a3 sstables: update compaction history at the end of compaction
When a compaction job finishes, call the function to update the system
table COMPACTION_HISTORY. That's also needed for the compaction
history API.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-14 14:20:03 -02:00
Raphael S. Carvalho
433ed60ca3 db: add method to get compaction history
This method is intended to return content of the system table
COMPACTION_HISTORY as a vector of compaction_history_entry.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-14 14:19:04 -02:00
Raphael S. Carvalho
f3beacac28 db: add method to update the system table COMPACTION_HISTORY
It's supposed to be called at the end of compaction.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-14 13:47:10 -02:00
Raphael S. Carvalho
0fa194c844 sstables: remove outdated comment
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-14 12:43:53 -02:00
Raphael S. Carvalho
6142efaedb db: fix indentation
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-14 12:43:34 -02:00
Raphael S. Carvalho
81f5b1716e sstables: fix comment describing sstable::mark_for_deletion
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-14 12:43:11 -02:00
Raphael S. Carvalho
7bbc1b49b6 db: add missing sstable::mark_for_deletion call
If an sstable doesn't belong to the current shard, mark_for_deletion
should still be called so that the deletion manager keeps working.
It doesn't mean that the sstable will be deleted immediately, but that
the sstable is not relevant to the current shard and can therefore be
deleted by the deletion manager in the future.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-14 12:42:26 -02:00
Tomasz Grabiec
0865ecde17 storage_proxy: Fix range splitting
There is a check whose intent was to detect wrap-around during the walk
of the ring tokens by comparing the split point with the minimum token,
which is supposed to be inserted by the ring iterator. It assumed that
when we encounter it, the range is a wrap-around. That doesn't hold when
the minimum token is part of the token metadata or the set of tokens is
empty.

In such a case, a full range would be split into 3 overlapping full
ranges. The fix is to drop the assumption and instead ensure that
ranges do not wrap around by unwrapping them if necessary.

Fixes #655.
2015-12-14 16:05:54 +02:00
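The unwrapping idea from the fix above can be illustrated with plain integer tokens (hypothetical sketch; Scylla's real token and range types are more involved):

```cpp
#include <utility>
#include <vector>

// Illustrative sketch: a range whose start exceeds its end wraps around
// the ring; unwrap it into two non-wrapping ranges so later code never
// has to handle wrap-around.
using token_range = std::pair<int, int>;   // inclusive [start, end]

std::vector<token_range> unwrap(token_range r, int ring_min, int ring_max) {
    if (r.first <= r.second) {
        return {r};                        // already non-wrapping
    }
    // Split at the ring boundary into two ordinary ranges.
    return {{r.first, ring_max}, {ring_min, r.second}};
}
```

After unwrapping, downstream splitting logic only ever sees ranges with start <= end, so the fragile wrap-around detection can be dropped entirely.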
Takuya ASADA
3b7693feda dist: add package dependency to gnutls library
Now that Seastar depends on gnutls, we need to add it to the .rpm/.deb package dependencies.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-14 13:28:28 +02:00
Pekka Enberg
ba09c545fc dist/docker: Enable SMP support
Now that Scylla has a sleep mode, we can enable SMP support again.

Signed-off-by: Pekka Enberg <penberg@scylladb.com>
2015-12-14 13:23:30 +02:00
Avi Kivity
fd14cb3743 mutation_partition: fix leak in move assignment operator
The default move assignment operator calls boost::intrusive::set's move
assignment operator, which leaks, because it does not believe it owns
the data.

Fix by providing a custom implementation.
2015-12-14 10:33:19 +01:00
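The same class of bug can be shown with a minimal owning type (an analog, not the real mutation_partition code): a defaulted or shallow move assignment that only copies a pointer leaks the destination's old allocation, and the fix is a custom operator= that releases owned data first.

```cpp
#include <utility>

struct holder {
    int* p = nullptr;
    holder() = default;
    explicit holder(int v) : p(new int(v)) {}
    holder(holder&& o) noexcept : p(std::exchange(o.p, nullptr)) {}
    // Custom move assignment: without the delete, the destination's old
    // allocation would leak, just like the default operator= described above.
    holder& operator=(holder&& o) noexcept {
        if (this != &o) {
            delete p;
            p = std::exchange(o.p, nullptr);
        }
        return *this;
    }
    ~holder() { delete p; }
};
```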
Asias He
9781e0d34d storage_service: Make bootstrapping/leaving/moving log more consistent
It is useful for test code to grep the log.
2015-12-11 13:57:40 +02:00
Tomasz Grabiec
1991fd5ca2 Merge branch 'pdziepak/fix-clustering-key-comparison/v2' from seastar-dev.git
From Paweł:

This series fixes comparison of byte order comparable clustering keys.

Fixes #645.
2015-12-11 12:51:02 +01:00
Paweł Dziepak
3a73496817 tests/cql: add test for ordering clustering keys
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-11 12:05:25 +01:00
Paweł Dziepak
8cab343895 compound: fix compare() of prefixable types
All components of a prefixable compound type are preceded by their
length, which makes them not byte-order comparable.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-11 12:04:31 +01:00
Paweł Dziepak
8fd4b9f911 schema: remove _clustering_key_prefix_type
All clustering keys are now prefixable.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-11 10:47:24 +01:00
Paweł Dziepak
bb9a71f70c thrift: let class_from_compound_type() accept prefixable types
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-11 10:45:56 +01:00
Pekka Enberg
0d8a02453e types: Fix frozen collection type names
Frozen collection type names must be wrapped in FrozenType so that we
are able to store the types correctly in system tables.

This fixes #646 and fixes #580.

Signed-off-by: Pekka Enberg <penberg@scylladb.com>
2015-12-11 10:41:11 +01:00
Pekka Enberg
63bdeb65f2 cql3: Implement maps::literal::test_assignment() function
The test_assignment() function is invoked via the Cassandra unit tests,
so we might as well implement it.

Signed-off-by: Pekka Enberg <penberg@scylladb.com>
2015-12-11 09:35:13 +01:00
Asias He
57ee9676c2 storage_service: Fix default ring_delay time
It is 30 seconds instead of 5 seconds by default, to align with C*.

Please note that after this, a node will take at least 30 seconds to
complete a bootstrap.
2015-12-11 09:05:19 +02:00
Avi Kivity
b3cd672d97 Merge seastar upstream
* seastar ad07a2e...5b9e3da (2):
  > Merge "rpc cleanups and improvements" from Gleb
  > shared_future: Add missing include
2015-12-10 18:11:59 +02:00
Paweł Dziepak
9d482532f4 tests/lsa: reduce the size of large allocation
Originally, the large allocation test case attempted to allocate an
object as big as half of the space used by the lsa. That failed when the
test was executed with a lower amount of memory available, mainly due to
memory fragmentation caused by previous test cases.

This patch reduces the size of the large allocation to 3/8 of the
total space used by the lsa, which is still a lot but seems to make the
test pass even with as little memory as 64MB per shard.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-10 13:16:43 +01:00
Avi Kivity
d425aacaeb release: copy version string into heap
If we get a core dump from a user, it is important to be able to
identify its version.  Copy the release string into the heap (which is
copied into the core dump), so we can search for it using the "strings"
or "ident" commands.

Reviewed-by: Nadav Har'El <nyh@scylladb.com>
2015-12-10 13:12:40 +02:00
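The trick can be sketched as follows (illustrative; the actual Scylla release code differs): allocate a heap copy of the version string at startup and keep it alive for the process lifetime, so it ends up in the dumped heap where `strings` can find it.

```cpp
#include <cstring>

static const char* version_string = "scylla 0.14.1";

// Heap copy, intentionally kept alive for the whole process so that
// running `strings` on a core dump finds the version in the heap data.
static char* heap_version = [] {
    char* p = new char[std::strlen(version_string) + 1];
    std::strcpy(p, version_string);
    return p;
}();
```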
Lucas Meneghel Rodrigues
2167173251 utils/logalloc.cc - Declare member minimum_size from segment_zone struct
This fixes compile error:

In function `logalloc::segment_zone::segment_zone()':
/home/lmr/Code/scylla/utils/logalloc.cc:412: undefined reference to `logalloc::segment_zone::minimum_size'
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.

Signed-off-by: Lucas Meneghel Rodrigues <lmr@scylladb.com>
2015-12-10 12:54:34 +02:00
Asias He
b7d10b710e streaming: Propagate fail to send PREPARE_DONE_MESSAGE exception
Otherwise the stream_plan will not be marked as failed.
2015-12-10 12:38:00 +02:00
Paweł Dziepak
ec453c5037 managed_bytes: fix potentially unaligned accesses
blob_storage is defined with the packed attribute, which makes its
alignment requirement equal to 1. This means that its members may be
unaligned. GCC is obviously aware of that and will generate appropriate
code (and not generate ubsan checks). However, there are a few places
where members of blob_storage are accessed via pointers; these have to
be wrapped by unaligned_cast<> to let the compiler know that the
location pointed to may not be aligned properly.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-10 11:59:54 +02:00
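The hazard and one safe pattern can be illustrated with a small packed struct (hypothetical sketch; Scylla's unaligned_cast<> works differently but addresses the same issue):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// A packed struct: `size` sits at byte offset 1, so it is not 4-byte
// aligned and must not be read through a plain uint32_t*.
struct __attribute__((packed)) blob_header {
    uint8_t tag;
    uint32_t size;
};
static_assert(sizeof(blob_header) == 5, "no padding expected");

// memcpy into an aligned local is always a safe unaligned read.
uint32_t read_size(const unsigned char* raw) {
    uint32_t v;
    std::memcpy(&v, raw + offsetof(blob_header, size), sizeof(v));
    return v;
}
```

Dereferencing a `uint32_t*` at offset 1 instead would be undefined behavior, flagged by ubsan on x86 and potentially faulting on stricter architectures.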
Tomasz Grabiec
43498b3158 Merge branch 'pdziepak/fix-partial-clustering-keys/v1' from seastar-dev.git
From Paweł:

This series fixes support for clustering keys whose trailing components
are null. The solution is to use clustering_key_prefix instead of
clustering_key everywhere.

Fixes #515.
2015-12-10 10:43:12 +01:00
Paweł Dziepak
66ff1421f0 tests/cql: add test for clustering keys with empty components
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-10 05:47:07 +01:00
Paweł Dziepak
64f50a4f40 db: make clustering_key a prefix
Schemas using compact storage can have clustering keys with trailing
components not set, effectively being clustering key prefixes
instead of full clustering keys.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-10 05:46:47 +01:00
Paweł Dziepak
77c7ed6cc5 keys: add prefix_equality_less_compare for prefixes
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-10 05:46:26 +01:00
Paweł Dziepak
220a3b23c0 keys: allow creating partial views of prefixes
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-10 05:46:26 +01:00
Paweł Dziepak
3c16ab080a sstables: do not assume clustering_key has the proper format
In the case of non-compound dense tables, the column name is just the
value of the clustering key (which has only one component). The current
code just casts the clustering_key to bytes_view, which works because
there is no additional metadata in single-element clustering keys.
However, that may change when the internal representation of the
clustering key is changed, so explicitly extract the proper component.

This change will become necessary when clustering_key is replaced by
clustering_key_prefix.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-10 05:46:26 +01:00
Paweł Dziepak
5f1e9fd88f mutation_partition: remove unused find_entry()
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-10 05:46:26 +01:00
Paweł Dziepak
3287022000 cql3: do not assume that clustering key is full
In case of schemas that use compact storage it is possible that trailing
components of clustering keys are not set.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-10 05:46:26 +01:00
Avi Kivity
167addbfe1 main: remove issue #417 (poll mode) warning
Fixed.
2015-12-09 19:00:32 +02:00
Avi Kivity
a352d63bf9 Merge seastar upstream
* seastar c5e595b...ad07a2e (1):
  > reactor: add command line option to disable sleep mode

Fixes #417
2015-12-09 19:00:20 +02:00
Glauber Costa
3c988e8240 perf_sstable: use current scylla default directory
When this tool was written, we were still using /var/lib/cassandra as a default
location. We should update it.

Signed-off-by: Glauber Costa <glauber@scylladb.com>
2015-12-09 17:46:31 +02:00
Avi Kivity
01c3670def Merge seastar upstream
* seastar 5dc22fa...c5e595b (3):
  > memory: be less strict about NUMA bindings
  > reactor: let the resource code specify the default memory reserve
  > resource: reserve even more memory when hwloc is compiled in

Fixes #642
2015-12-09 16:47:47 +02:00
Asias He
66938ac129 streaming: Add retransmit logic for streaming verbs
Retransmit streaming-related verbs and give up after 5 minutes.

Tested with:

  lein test :only cassandra.batch-test/batch-halves-decommission

Fixes #568.
2015-12-09 15:12:36 +02:00
Avi Kivity
14794af260 Merge seastar upstream
* seastar 9f9182e...5dc22fa (1):
  > future: add repeat_until_value(): repeat an action until it returns a value
2015-12-09 15:11:59 +02:00
Avi Kivity
213700e42f Merge seastar upstream
* seastar d40453b...9f9182e (5):
  > Merge "Sleep mode support"
  > future: add futurize<T>::from_tuple(tuple<T>)
  > tls: Add missing destructor for dh_params::impl, fixes ASAN error
  > tls/socket fix: Add missing noexcept to constructor/move
  > Merge "Initial SSL/TLS socket support" from Calle
2015-12-09 11:01:13 +02:00
Avi Kivity
204610ac61 Merge "Make LSA more large-allocation-friendly" from Paweł
"This series attempts to make LSA more friendly for large (i.e. bigger
than an LSA segment) allocations. It is achieved by introducing segment
zones – large, contiguous areas of segments – and using them to allocate
segments instead of calling malloc() directly.
Zones can be shrunk when needed to reclaim memory, and segments can be
migrated either to reduce the number of zones or to defragment one in
order to be able to shrink it. LSA tries to keep all segments at lower
addresses and reclaims memory starting from the zones in the highest
parts of the address space."
2015-12-09 10:49:23 +02:00
Avi Kivity
883074e936 Merge "Fix replace_node support" from Asias
Also:

[PATCH scylla v1 0/7] gossip mark node down fix + cleanup
[PATCH scylla v1 0/2] Refuse decommissioned node to rejoin
[PATCH scylla] storage_service: Fix added node not showing up in nodetool in status joining
2015-12-09 10:42:52 +02:00
Paweł Dziepak
8ba66bb75d managed_bytes: fix copy size in move constructor
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-09 10:38:28 +02:00
Asias He
b63d49c773 storage_service: Log removing replaced endpoint from system.peers
This info is important when replacing a node. Useful for debugging.
2015-12-09 12:30:52 +08:00
Asias He
d26c7e671d storage_service: Enable commented out code in handle_state_normal
Add current_owner to endpoints_to_remove if endpoint and current_owner
have the same token and endpoint is newer than current_owner.
2015-12-09 12:30:52 +08:00
Asias He
3793bb7be1 token_metadata: Add get_endpoint_to_token_map_for_reading 2015-12-09 12:30:52 +08:00
Asias He
1cc7887ffb token_metadata: Do nothing if tokens is empty.
When replacing a node, we might ignore the tokens, leaving the token
set empty. In this case, we will have

   std::unordered_map<inet_address, std::unordered_set<token>> = {ip, {}}

passed to token_metadata::update_normal_tokens(std::unordered_map<inet_address,
std::unordered_set<token>>& endpoint_tokens)

and hit the assert

   assert(!tokens.empty());
2015-12-09 12:30:52 +08:00
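The guard this commit adds can be sketched as follows (types simplified and names hypothetical, not the real token_metadata code): endpoints that arrive with an empty token set are skipped instead of tripping the assert deeper in the call chain.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <unordered_map>
#include <unordered_set>

using inet_address = std::string;
using token = long;

// Hypothetical sketch: do nothing for endpoints that arrive with no
// tokens (as happens when replacing a node and ignoring the tokens),
// instead of hitting assert(!tokens.empty()).
inline std::size_t update_normal_tokens(
        const std::unordered_map<inet_address, std::unordered_set<token>>& endpoint_tokens,
        std::unordered_map<token, inet_address>& token_to_endpoint) {
    std::size_t updated = 0;
    for (auto& [ep, tokens] : endpoint_tokens) {
        if (tokens.empty()) {
            continue;  // the fix: a {ip, {}} entry is a no-op
        }
        for (auto t : tokens) {
            token_to_endpoint[t] = ep;
            ++updated;
        }
    }
    return updated;
}
```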
Asias He
e79c85964f system_keyspace: Flush system.peers in remove_endpoint
1) Start node 1, node 2, node 3
2) Stop  node 3
3) Start node 4 to replace node 3
4) Kill  node 4 (removal of node 3 in system.peers is not flushed to disk)
5) Start node 4 (will load node 3's token and host_id info in bootup)

This makes

   "Token .* changing ownership from 127.0.0.3 to 127.0.0.4"

messages be printed again in step 5), which is not expected and fails the dtest

   FAIL: replace_first_boot_test (replace_address_test.TestReplaceAddress)
   ----------------------------------------------------------------------
   Traceback (most recent call last):
     File "scylla-dtest/replace_address_test.py",
   line 220, in replace_first_boot_test
       self.assertEqual(len(movedTokensList), numNodes)
   AssertionError: 512 != 256
2015-12-09 12:30:52 +08:00
Asias He
110a18987e token_metadata: Print Token changing ownership from
Needed by test.
2015-12-09 12:30:52 +08:00
Asias He
906f670a86 gossip: Print node status in handle_major_state_change
It is useful to know the STATUS value when debugging.
2015-12-09 12:29:15 +08:00
Asias He
a0325a5528 gossip: Simplify is_shutdown and friends.
Use the newly added helper get_gossip_status.
2015-12-09 12:29:15 +08:00
Asias He
9d4382c626 gossip: Introduce get_gossip_status
Get value of application_state::STATUS.
2015-12-09 12:29:15 +08:00
Asias He
5a65d8bcdd gossip: Fix endless marking a node down
Commit 56df32ba56 (gossip: Mark node as dead even if already left)
missed a node liveness check.

Fix it up.

Before: (mark a node down multiple times)

[Tue Dec  8 12:16:33 2015] INFO  [shard 0] gossip - InetAddress 127.0.0.3 is now DOWN
[Tue Dec  8 12:16:33 2015] DEBUG [shard 0] storage_service - endpoint=127.0.0.3 on_dead
[Tue Dec  8 12:16:34 2015] INFO  [shard 0] gossip - InetAddress 127.0.0.3 is now DOWN
[Tue Dec  8 12:16:34 2015] DEBUG [shard 0] storage_service - endpoint=127.0.0.3 on_dead
[Tue Dec  8 12:16:35 2015] INFO  [shard 0] gossip - InetAddress 127.0.0.3 is now DOWN
[Tue Dec  8 12:16:35 2015] DEBUG [shard 0] storage_service - endpoint=127.0.0.3 on_dead
[Tue Dec  8 12:16:36 2015] INFO  [shard 0] gossip - InetAddress 127.0.0.3 is now DOWN
[Tue Dec  8 12:16:36 2015] DEBUG [shard 0] storage_service - endpoint=127.0.0.3 on_dead

After: (mark a node down only one time)

[Tue Dec  8 12:28:36 2015] INFO  [shard 0] gossip - InetAddress 127.0.0.3 is now DOWN
[Tue Dec  8 12:28:36 2015] DEBUG [shard 0] storage_service - endpoint=127.0.0.3 on_dead
2015-12-09 12:29:15 +08:00
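The before/after logs above come down to one missing liveness check; a minimal sketch of the fixed logic (state machine drastically simplified, names hypothetical):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical sketch: only transition (and log) when the endpoint is
// currently alive, so repeated convict() calls for an already-dead node
// become no-ops instead of re-printing "is now DOWN" every second.
struct failure_tracker {
    std::unordered_map<std::string, bool> alive;
    int down_events = 0;

    void convict(const std::string& ep) {
        auto it = alive.find(ep);
        if (it == alive.end() || !it->second) {
            return;  // the fix: already dead, do not mark down again
        }
        it->second = false;
        ++down_events;  // stands in for the "is now DOWN" log + on_dead callbacks
    }
};
```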
Asias He
fa3c84db10 gossip: Kill default constructor for versioned_value
The only reason we needed it is to make
   _application_state[key] = value
work.

With the current default constructor, we increase the version number
needlessly. To fix this and to be safe, remove the default constructor
completely.
2015-12-09 12:29:15 +08:00
Asias He
52a5e954f9 gossip: Pass const ref for versioned_value in on_change and before_change 2015-12-09 12:29:15 +08:00
Asias He
3308430343 storage_service: Make before_change and on_change log print more informative
- Make before_change and on_change print the versioned_value
- Print endpoint address first in handle_state_* and
  on_change and friends.
2015-12-09 12:29:15 +08:00
Asias He
ccbd801f40 storage_service: Fix decommissioned nodes are willing to rejoin the cluster if restarted
Backport: CASSANDRA-8801

a53a6ce Decommissioned nodes will not rejoin the cluster.

Tested with:
topology_test.py:TestTopology.decommissioned_node_cant_rejoin_test
2015-12-09 10:43:51 +08:00
Asias He
b3dd2d976a storage_service: Simplify prepare_to_join with seastar thread 2015-12-09 10:43:51 +08:00
Asias He
e9a4d93d1b storage_service: Fix added node not showing up in nodetool in status joining
The get_token_endpoint API should return a map of tokens to endpoints,
including the bootstrapping ones.

Use get_local_storage_service().get_token_to_endpoint_map() for it.

$ nodetool -p 7100 status

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Tokens  Owns    Host ID Rack
UN  127.0.0.1  12645      256     ?  eac5b6cf-5fda-4447-8104-a7bf3b773aba  rack1
UN  127.0.0.2  12635      256     ?  2ad1b7df-c8ad-4cbc-b1f1-059121d2f0c7  rack1
UN  127.0.0.3  12624      256     ?  61f82ea7-637d-4083-acc9-567e0c01b490  rack1
UJ  127.0.0.4  ?          256     ?  ced2725e-a5a4-4ac3-86de-e1c66cecfb8d  rack1

Fixes #617
2015-12-09 10:43:51 +08:00
Paweł Dziepak
63bdf52803 tests/lsa: add large allocations test
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-08 23:56:46 +01:00
Tomasz Grabiec
d68a8b5349 Merge branch 'dev/amnon/index_summary_size_v2' from seastar-dev.git
API for getting sstable index summary memory footprint from Amnon
2015-12-08 20:03:39 +01:00
Paweł Dziepak
73a1213160 scylla-gdb.py: print lsa zones
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-08 19:31:40 +01:00
Paweł Dziepak
0d66300d43 lsa: add more counters
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-08 19:31:40 +01:00
Paweł Dziepak
83b004b2fb lsa: avoid fragmenting memory
Originally, lsa allocated each segment independently, which could result
in high memory fragmentation. As a result, many compaction and eviction
passes may be needed to release a sufficiently big contiguous memory
block.

These problems are solved by the introduction of segment zones,
contiguous groups of segments. All segments are allocated from zones,
and the algorithm tries to keep the number of zones to a minimum.
Moreover, segments can be migrated between zones or inside a zone in
order to deal with fragmentation inside a zone.

Segment zones can be shrunk but cannot grow. Segment pool keeps a tree
containing all zones ordered by their base addresses. This tree is used
only by the memory reclaimer. There is also a list of zones with at
least one free segment that is used during allocation.

Segment allocation doesn't have any preference for which segment (and
zone) to choose. Each zone contains a free list of unused segments. If
there are no zones with free segments, a new one is created.

Segment reclamation migrates segments from the zones higher in memory
to the ones at lower addresses. The remaining zones are shrunk until the
requested number of segments is reclaimed.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-08 19:31:40 +01:00
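The allocation path described in the message can be sketched as a list of zones, each tracking which of its segments are free; a new zone is created only when no existing zone has a free segment (sizes and names hypothetical, far simpler than the real LSA):

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <vector>

// Hypothetical sketch: a "zone" is a contiguous run of segments with a
// per-segment free flag; segments are handed out from existing zones
// first, and a new zone is created only when every zone is full.
struct zone {
    std::vector<bool> used;  // one flag per segment in this zone
    explicit zone(std::size_t nsegments) : used(nsegments, false) {}
    int take() {
        for (std::size_t i = 0; i < used.size(); ++i) {
            if (!used[i]) { used[i] = true; return int(i); }
        }
        return -1;  // zone is full
    }
};

struct segment_pool {
    static constexpr std::size_t segments_per_zone = 4;
    std::list<zone> zones;

    // Allocate one segment, growing by a whole zone only on demand.
    void allocate() {
        for (auto& z : zones) {
            if (z.take() != -1) return;
        }
        zones.emplace_back(segments_per_zone);
        zones.back().take();
    }
};
```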
Paweł Dziepak
6c4a54fb0b tests: add tests for utils::dynamic_bitset
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-08 19:31:40 +01:00
Paweł Dziepak
2fb14a10b6 utils: add dynamic_bitset
A dynamic bitset implementation that provides functions to search for
both set and cleared bits in both directions.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-08 19:31:40 +01:00
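A minimal sketch of such a bitset over 64-bit words, using count-trailing-zeros to find the first set bit (hypothetical, not the actual utils::dynamic_bitset API):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch of a dynamic bitset that can search for set bits.
struct dyn_bitset {
    std::vector<uint64_t> words;
    explicit dyn_bitset(std::size_t nbits) : words((nbits + 63) / 64, 0) {}

    void set(std::size_t i)   { words[i / 64] |=  (uint64_t(1) << (i % 64)); }
    void clear(std::size_t i) { words[i / 64] &= ~(uint64_t(1) << (i % 64)); }

    // Index of the first set bit, or -1 if none: skip zero words, then
    // use count-trailing-zeros inside the first non-zero word.
    long find_first_set() const {
        for (std::size_t w = 0; w < words.size(); ++w) {
            if (words[w]) {
                return long(w * 64 + std::size_t(__builtin_ctzll(words[w])));
            }
        }
        return -1;
    }
};
```

Searching for cleared bits, or in the backward direction, follows the same word-skipping shape with the complement of each word or count-leading-zeros.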
Paweł Dziepak
40dda261f2 lsa: maintain segment to region mapping
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-08 19:31:40 +01:00
Paweł Dziepak
c4e71bac7f tests/row_cache_alloc_stress: make sure that allocation fails
Currently test case "Testing reading when memory can't be reclaimed."
assumes that the allocation section used by row cache upon entering
will require more free memory than there is available (inc. evictable).
However, the reserves used by allocation section are adjusted
dynamically and depend solely on previous events. In other words there
is no guarantee that the reserve would be increased so much that the
allocation will fail.

The problem is solved by adding another allocation that is guaranteed
to be bigger than all evictable and free memory.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-08 19:31:40 +01:00
Paweł Dziepak
2e94086a2c lsa: use bi::list to implement segment_stack
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-12-08 19:31:40 +01:00
Tomasz Grabiec
6ead7a0ec5 Merge tag 'large-blobs/v3' from git@github.com:avikivity/scylla.git
Scattering of blobs from Avi:

This patchset converts the stack to scatter managed_bytes in lsa memory,
allowing large blobs (and collections) to be stored in memtable and cache.
Outside memtable/cache, they are still stored sequentially, but it is assumed
that the number of transient objects is bounded.

The approach taken here is to scatter managed_bytes data in multiple
blob_storage objects, but to linearize them back when accessing (for
example, to merge cells).  This allows simple access through the normal
bytes_view.  It causes an extra two copies, but copying a megabyte twice
is cheap compared to accessing a megabyte's worth of small cells, so
per-byte throughput is increased.

Testing shows that lsa large object space is kept at zero, but throughput
is bad because Scylla easily overwhelms the disk with large blobs; we'll
need Glauber's throttling patches or a really fast disk to see good
throughput with this.
2015-12-08 19:15:13 +01:00
Avi Kivity
5c5331d910 tests: test large blobs in memtables 2015-12-08 15:17:09 +02:00
Avi Kivity
0c2fba7e0b lsa: advertize our preferred maximum allocation size
Let managed_bytes know that allocating below a tenth of the segment size is
the right thing to do.
2015-12-08 15:17:09 +02:00
Avi Kivity
f9e2a9a086 mutation_partition: work on linearized atomic_cell_or_mutation objects
Ensure that when we examine atomic_cell_or_mutation objects for merging,
that they are contiguous in memory.  When we are done we scatter them again.
2015-12-08 15:17:09 +02:00
Avi Kivity
ad975ad629 atomic_cell_or_collection: linearize(), unlinearize()
Add linearize() and unlinearize() methods that allow making an
atomic_cell_or_collection object temporarily contiguous, so we can examine
it as a bytes_view.
2015-12-08 15:17:09 +02:00
Avi Kivity
13324607e6 managed_bytes: conform to allocation_strategy's max_preferred_allocation_size
Instead of allocating a single blob_storage, chain multiple blob_storage
objects in a list, each limited not to exceed the allocation_strategy's
max_preferred_allocation_size.  This allows lsa to allocate each blob_storage
object as an lsa managed object that can be migrated in memory.

Also provide linearize()/scatter() methods that can be used to temporarily
consolidate the storage into a single blob_storage.  This makes the data
contiguous, so we can use a regular bytes_view to examine it.
2015-12-08 15:17:08 +02:00
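The chaining-plus-linearize idea can be sketched as a list of fixed-cap chunks with a linearize() that copies them back into one contiguous buffer for inspection (names and sizes hypothetical, not the real managed_bytes):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical sketch: store a value as a chain of chunks no larger
// than max_chunk (standing in for max_preferred_allocation_size), so
// each piece stays small enough for the allocator to migrate, and
// linearize() the chain when a caller needs one contiguous view.
struct chunked_bytes {
    static constexpr std::size_t max_chunk = 8;
    std::vector<std::string> chunks;

    void assign(const std::string& data) {
        chunks.clear();
        for (std::size_t off = 0; off < data.size(); off += max_chunk) {
            chunks.push_back(data.substr(off, max_chunk));
        }
    }

    // Copy the chain back into one contiguous string ("linearize").
    std::string linearize() const {
        std::string out;
        for (auto& c : chunks) out += c;
        return out;
    }
};
```

As the merge message notes, the extra copy on linearize is cheap relative to touching the same number of bytes as many small cells.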
Takuya ASADA
8c98e239d0 dist: use /etc/scylla as SCYLLA_CONF directory on AMI
We don't need to copy /var/lib/scylla/conf onto the RAID anymore; it moved to /etc/scylla.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-08 11:09:12 +02:00
Avi Kivity
098136f4ab Merge "Convert serialization of query::result to use db::serializer<>" from Tomasz
Reviewed-by: Nadav Har'El <nyh@scylladb.com>
2015-12-07 16:53:34 +02:00
Amnon Heiman
3ce7fa181c API: Add the implementation for index_summary_off_heap_memory
This adds the implementation for the index_summary_off_heap_memory for a
single column family and for all of them.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-07 15:15:39 +02:00
Amnon Heiman
e786f1d02f sstable: Add get_summary function
The get_summary method returns a const reference to the summary object.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-07 14:52:18 +02:00
Amnon Heiman
bae286a5b4 Add memory_footprint method to summary_ka
Similar to origin's off-heap memory, memory_footprint is the size of
the queues multiplied by the structure size.

memory_footprint is used by the API to report the memory that is taken
by the summary.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-07 14:52:18 +02:00
Amnon Heiman
2086c651ba column_family: get_snapshot_details should return empty map for no snapshots
If there is no snapshot directory for the specific column family,
get_snapshot_details should return an empty map.

This patch checks that the directory exists before trying to iterate
over it.

Fixes #619

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-07 12:51:04 +01:00
Tomasz Grabiec
b43b5af894 Merge tag 'tgrabiec/make-future-values-nothrow-move-constructible-v3' from seastar-dev.git
Seastar's future<> now requires types to be nothrow move
constructible. This series makes Scylla code comply.
2015-12-07 10:43:18 +01:00
Tomasz Grabiec
95f515a6bd Move seastar submodule head
Scylla changes:
  sstable.cc: Remove file_exists() function which conflicts with seastar's

Amnon Heiman (2):
      reactor: Add file_exists method
      Add a wrapper for file_exists

Avi Kivity (2):
      Merge "Introduce shared_future" from Tomasz
      Merge ""scripts: a few fixes in posix_net_conf.sh" from Vlad

Gleb Natapov (3):
      rpc: not stop client in error state
      avoid allocation in parallel_for_each if there is nothing to do
      memory: fix size_to_idx calculation

Nadav Har'El (1):
      test: fix use-after-free in timertest

Paweł Dziepak (1):
      memory: use size instead of old_size to shrink memory block

Tomasz Grabiec (7):
      file: Mark move constructor as noexcept
      core: future: Add static asserts about type's noexcept guarantees
      core: future: Drop now redundant move_noexcept flag
      core: future_state: Make state getters non-destructive for non-rvalue-refs
      core: future: Make get_available_state() noexcept
      core: Introduce shared_future
      Make json_return_type movable

Vlad Zolotarov (8):
      scripts: posix_net_conf.sh: ban NIC IRQs from being moved by irqbalance
      scripts: posix_net_conf.sh: exclude CPU0 siblings from RPS
      scripts: posix_net_conf.sh: Configure XPS
      scripts: posix_net_conf.sh: Add a new mode for MQ NICs
      scripts: posix_net_conf.sh: increase some backlog sizes
      core: to_sstring(): cleanup
      core: to_sstring_strintf(): always use %g(or %lg) format for floating point values
      core: prevent explicit calls for to_sstring_sprintf()
2015-12-07 10:41:39 +01:00
Glauber Costa
79e70568d7 scylla-setup: do not add discard to the command line
In a recent discussion with the XFS developers, Dave Chinner recommended
that we *not* use discard, but rather issue fstrims explicitly. On machines
like Amazon's c3 class, the situation is made worse by the fact that discard
is not supported by the disk. Contrary to my intuition, adding the discard
mount option in such a situation is *not* a no-op and will just create load
for no reason.

Signed-off-by: Glauber Costa <glommer@scylladb.com>
2015-12-07 11:22:27 +02:00
Tomasz Grabiec
934d3f06d1 api: Make histogram reduction work on domain value instead of json objects
Objects extending json_base are not movable, so we won't be able to
pass them via future<>, which will assert that types are nothrow move
constructible.

This problem only affects httpd::utils_json::histogram, which is used
in map-reduce. This patch changes the aggregation to work on the domain
value (utils::ihistogram) instead of json objects.
2015-12-07 09:50:28 +01:00
Tomasz Grabiec
c0ac7b3a73 commitlog: Wrap subscription in a unique_ptr<> to make it nothrow movable
future<> will require nothrow move constructible types.
2015-12-07 09:50:28 +01:00
Tomasz Grabiec
657841922a Mark move constructors noexcept when possible 2015-12-07 09:50:27 +01:00
Tomasz Grabiec
fdc28a73f8 thrift: Make with_cob() handle not nothrow move constructible types 2015-12-07 09:50:27 +01:00
Tomasz Grabiec
538de7222a Introduce noexcept_traits 2015-12-07 09:50:27 +01:00
Tomasz Grabiec
bc23ebcbc3 schema_tables: Replace schema_result::value_type with equivalent movable type
future<> requires and will assert nothrow move constructible types.
2015-12-07 09:50:27 +01:00
Avi Kivity
91c2af2803 Merge "nodetool removenode fix + cleanup" from Asias 2015-12-07 10:41:51 +02:00
Takuya ASADA
2891291ad1 dist: add swagger-ui and api-doc on ubuntu package
Fixes .deb part of #520

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-07 10:39:59 +02:00
Takuya ASADA
3f0ca277e5 dist: add swagger-ui and api-doc on rpm package
Fixes .rpm part of #520

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-07 10:39:59 +02:00
Avi Kivity
2437fc956c allocation_strategy: expose preferred allocation size limit
Our premier allocation_strategy, lsa, prefers to limit allocations below
a tenth of the segment size so they can be moved around; larger allocations
are pinned and can cause memory fragmentation.

Provide an API so that objects can query for this preferred size limit.

For now, lsa is not updated to expose its own limit; this will be done
after the full stack is updated to make use of the limit, or intermediate
steps will not work correctly.
2015-12-06 16:23:42 +02:00
Vlad Zolotarov
564cb2bcd1 gms::versioned_value: don't use to_sstring_sprintf() directly
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2015-12-06 12:24:54 +02:00
Raphael S. Carvalho
d435ca7da6 enable more logging for leveled compaction strategy
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-12-06 11:36:50 +02:00
Pekka Enberg
a95a7294ef types: Fix 'varint' type value compatibility check
Fixes #575.

Signed-off-by: Pekka Enberg <penberg@scylladb.com>
2015-12-04 13:25:34 +01:00
Glauber Costa
5e8249f062 commitlog: fix bug preventing flushing with default max_size value
The config file expresses this number in MB, while total_memory() gives us
a quantity in bytes. This causes the commitlog not to flush until we reach
really skyhigh numbers.

While we need this fix for the short term before we cook another release,
I will note that for the mid/long term, it would be really helpful to stop
representing memory amounts as integers, and use an explicit C++ type for
those. That would have prevented this bug.

Signed-off-by: Glauber Costa <glommer@scylladb.com>
2015-12-04 09:29:19 +02:00
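The bug was a unit mismatch (a config value in MB mixed with a byte count from total_memory()). A minimal sketch of the "explicit C++ type for memory amounts" the message suggests (hypothetical, not Scylla's config code):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch: a tiny strong type for memory amounts, so a
// megabyte-denominated config value cannot be mixed with a raw byte
// count without going through an explicit conversion.
struct mem_bytes {
    uint64_t value;  // always bytes
};

constexpr mem_bytes from_mb(uint64_t mb) { return {mb * 1024 * 1024}; }

// Both sides are now guaranteed to be in the same unit.
constexpr bool should_flush(mem_bytes used, mem_bytes max_size) {
    return used.value >= max_size.value;
}
```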
Vlad Zolotarov
cd215fc552 types: map::to_string() - non-empty implementation
Print a map in the form of [(]{ key0 : value0 }[, { keyN : valueN }]*[)]
The map is printed inside () brackets if it's frozen.

Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
2015-12-03 18:46:12 +01:00
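The described format can be sketched with a small helper (hypothetical names and simplified types, not the actual abstract_type code):

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch of the described format:
// { key0 : value0 }, { key1 : value1 }, ... wrapped in () when frozen.
inline std::string map_to_string(const std::map<std::string, std::string>& m,
                                 bool frozen) {
    std::string out;
    for (auto& [k, v] : m) {
        if (!out.empty()) out += ", ";
        out += "{ " + k + " : " + v + " }";
    }
    return frozen ? "(" + out + ")" : out;
}
```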
Amnon Heiman
54b4f26cb0 API: Change the compaction summary to use an object
In origin, there are two APIs to get information about the currently
running compactions. Both APIs do the string formatting.

This patch changes the API to a single get_compaction API that
returns a list of summary objects.

The jmx proxy does the string formatting for the two APIs.

This change gives a better API experience, as it's better documented and
will make it easier to support future format changes in origin.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-03 11:57:37 +02:00
Asias He
dcb9b441ab storage_service: Fix debug build
Start non-seed with debug build I saw:

==9844==WARNING: ASan is ignoring requested __asan_handle_no_return:
stack top: 0x7ffdabd73000; bottom 0x7fe309218000; size: 0x001aa2b5b000 (114398965760)
False positive error reports may follow
For details see http://code.google.com/p/address-sanitizer/issues/detail?id=189
DEBUG [shard 0] storage_service - Starting shadow gossip round to check for endpoint collision
=================================================================
==9844==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7fe309219ad0 at pc 0x00000495a88e bp 0x7fe309219960 sp 0x7fe309219950
WRITE of size 8 at 0x7fe309219ad0 thread T0
    #0 0x495a88d in _Head_base<seastar::async(Func&&, Args&& ...)
       [with Func = service::storage_service::check_for_endpoint_collision()::<lambda()>; Args = {};
       futurize_t<typename std::result_of<typename std::decay<_Tp>::type(std::decay_t<Args>...)>::type> = future<>]::work*>
       /usr/include/c++/5.1.1/tuple:115
    #1 0x495a993 in _Tuple_impl<seastar::async(Func&&, Args&& ...)
       [with Func = service::storage_service::check_for_endpoint_collision()::<lambda()>; Args = {};
       futurize_t<typename std::result_of<typename std::decay<_Tp>::type(std::decay_t<Args>...)>::type> = future<>]::work*,
       std::default_delete<seastar::async(Func&&, Args&& ...) [with Func = service::storage_service::check_for_endpoint_collision()::<lambda()>;
       Args = {}; futurize_t<typename std::result_of<typename std::decay<_Tp>::type(std::decay_t<Args>...)>::type> = future<>]::work>, void>
       /usr/include/c++/5.1.1/tuple:213
    #2 0x495aa73 in tuple<seastar::async(Func&&, Args&& ...)
       [with Func = service::storage_service::check_for_endpoint_collision()::<lambda()>; Args = {};
       futurize_t<typename std::result_of<typename std::decay<_Tp>::type(std::decay_t<Args>...)>::type> = future<>]::work*,
       std::default_delete<seastar::async(Func&&, Args&& ...)
       [with Func = service::storage_service::check_for_endpoint_collision()::<lambda()>; Args = {};
       futurize_t<typename std::result_of<typename std::decay<_Tp>::type(std::decay_t<Args>...)>::type> = future<>]::work>, void>
       /usr/include/c++/5.1.1/tuple:613
    #3 0x495ab82 in unique_ptr /usr/include/c++/5.1.1/bits/unique_ptr.h:206
    ...
    #16 0x4d44c8e in _M_invoke /usr/include/c++/5.1.1/functional:1871
    #17 0x5d2fb7 in std::function<void ()>::operator()() const /usr/include/c++/5.1.1/functional:2271
    #18 0x8a1e70 in seastar::thread_context::main() core/thread.cc:139
    #19 0x8a1d89 in seastar::thread_context::s_main(unsigned int, unsigned int) core/thread.cc:130
    #20 0x7fe311b6cf0f  (/lib64/libc.so.6+0x48f0f)

I'm not sure why this patch helps. Perhaps the exception makes ASAN unhappy.

Anyway, this patch makes the debug build work again.

Fixes #613.
2015-12-03 10:42:11 +02:00
Tomasz Grabiec
d64db98943 query: Convert serialization of query::result to use db::serializer<>
That's what we're trying to standardize on.

This patch also fixes an issue with current query::result::serialize()
not being const-qualified, because it modifies the
buffer. messaging_service did a const cast to work this around, which
is not safe.
2015-12-03 09:19:11 +01:00
Tomasz Grabiec
d4d3a5b620 bytes_ostream: Make size_type and value_type public 2015-12-03 09:19:11 +01:00
Tomasz Grabiec
96d215168e Merge tag 'asias/gossip_start_stop/fix/v1' from seastar-dev.git
Fixes for issues in tests from Asias.
2015-12-03 09:10:55 +01:00
Tomasz Grabiec
f0cfa61968 Relax header dependencies 2015-12-03 09:10:02 +01:00
Tomasz Grabiec
9e0c498425 Merge branch 'dev/amnon/latency_clock_v2'
From Amnon:

After this series an example run of cfhistograms report a maximal 0.5s latency
as it should
2015-12-02 19:58:43 +01:00
Amnon Heiman
1812fe9e70 API: Add the get_version to messaging_service swagger definition file
Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-02 14:45:44 +02:00
Amnon Heiman
ae53604ed7 API: Add the get_version implementation to messaging service
This patch adds the implementation to the get_version.
After this patch the following url will be available:
messaging_service/version?addr=127.0.0.1

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-02 13:29:40 +02:00
Avi Kivity
53e3e79349 Merge "API: Stubing the compaction manager" from Amnon
"This series allows the compaction manager to be used by nodetool as a stub implementation.

It has two changes:
* Add to the compaction manager API a method that returns a compaction
  info object

* Stub all the compaction methods so that they create an "unimplemented"
  warning but do not fail; the API implementation will be reverted when
  the work on compaction is completed."
2015-12-02 13:28:34 +02:00
Takuya ASADA
871bfb1c94 dist: generate correct distribution codename on debian/changelog
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-02 12:38:52 +02:00
Takuya ASADA
b61ea247d2 dist: check supported Ubuntu release
Warn if unsupported release.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-02 12:38:52 +02:00
Takuya ASADA
0c66c25250 dist: fix typo on scylla_prepare
Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-12-02 11:30:15 +02:00
Asias He
3004866f59 gossip: Rename start to start_gossiping
This gives us more consistent names, start_gossiping() and
stop_gossiping(), and avoids confusion with get_gossiper().start().
2015-12-02 16:50:34 +08:00
Asias He
5c3951b28a gossip: Get rid of the handler helper 2015-12-02 16:50:34 +08:00
Asias He
7a6ad7aec2 gossip: Fix Assertion `local_is_initialized()' failed
This patch fixes the following cql_query_test failure.

   cql_query_test: scylla/seastar/core/sharded.hh:439:
   Service& seastar::sharded<Service>::local() [with Service =
   gms::gossiper]: Assertion `local_is_initialized()' failed.

The problem is that in gossiper::stop() we call gossip::add_local_application_state(),
which will in turn call gms::get_local_gossiper(). In seastar::sharded::stop

 _instances[engine().cpu_id()].service = nullptr;
 return inst->stop().then([this, inst] {
     return _instances[engine().cpu_id()].freed.get_future();
 });

We set _instances to nullptr before we call the stop method, so
local_is_initialized() asserts when we try to access get_local_gossiper
again.

To fix, we make the stopping of gossiper explicit. In the shutdown
procedure, we call stop_gossiping() explicitly.

This has two more advantages:

1) The API to stop gossip now calls stop_gossiping() instead of
sharing seastar::sharded's stop method.

2) We can now get rid of the _handler seastar::sharded helper.
2015-12-02 16:50:34 +08:00
Asias He
e22972009b gossip: Make log message in mark_dead stick to cassandra 2015-12-02 14:21:26 +08:00
Asias He
ad30cf0faf failure_detector: Use a standalone logger name
Do not share logger with gossip. Sometimes, it is useful to only see one
of them.
2015-12-02 14:21:26 +08:00
Asias He
eb05dc680d storage_service: Warn about possible data loss when removing a node
If RF = 1 and one node is down, it is possible that data is lost. Warn
about this in the logger.
2015-12-02 14:21:26 +08:00
Asias He
0fe14e2b4b storage_service: Do not ignore future in remove_node 2015-12-02 14:21:26 +08:00
Asias He
5a7f15ba49 storage_service: Run confirm_replication on cpu zero
storage_service::_replicating_nodes is valid on cpu zero only.
2015-12-02 14:21:26 +08:00
Amnon Heiman
7e79d35f85 Estimated histogram: Clean the add interface
The add interface of the estimated histogram is confusing, as it is not
clear what units are used.

This patch removes the general add method and replaces it with an
add_nano method that adds nanoseconds, and an add method that takes a
duration.

To be compatible with origin, nanosecond values are translated to
microseconds.
2015-12-01 15:28:06 +02:00
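The renamed interface can be sketched as follows (hypothetical, drastically simplified): add_nano takes explicit nanoseconds and converts to microseconds for origin compatibility, while the add overload carries its unit in the std::chrono type.

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>
#include <vector>

// Hypothetical sketch: the ambiguous add(n) is replaced by add_nano(ns),
// which converts to microseconds (for origin compatibility), plus an
// add(duration) overload where the unit is carried by the type.
struct estimated_histogram {
    std::vector<int64_t> samples_us;

    void add_nano(int64_t nanos) {
        samples_us.push_back(nanos / 1000);  // ns -> us
    }

    template <typename Rep, typename Period>
    void add(std::chrono::duration<Rep, Period> d) {
        add_nano(std::chrono::duration_cast<std::chrono::nanoseconds>(d).count());
    }
};
```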
Amnon Heiman
61abc85eb3 histogram: Add started counter
This patch adds a started counter that records the number of
operations that were started.

This counter serves two purposes: it is a better indication of when to
sample the data, and it indicates how many operations are pending.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-01 15:28:06 +02:00
Amnon Heiman
88dcf2e935 latency: Switch to steady_clock
The system clock is less suitable than steady_clock for measuring time
differences.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-01 15:28:06 +02:00
Avi Kivity
f667e05e08 Merge "backport gossip and storage_service fixes" from Asias
"This contains most bug fixes from imported version commit
38847a6bd967e4f41bc7b1fc83629161a2c214dc up to c* 2.1.11 for gossip and
storage_service."
2015-12-01 14:42:41 +02:00
Asias He
dc6f2157e7 Update ORIGIN for gossip and storage_service 2015-12-01 19:45:04 +08:00
Amnon Heiman
3674ee2fc1 API: get snapshot size
This patch adds the column family API that returns the snapshot size.
The changes in the swagger definition file follow origin, so the same
API will be used for the metric and the column_family.

The implementation is based on the get_snapshot_details in the
column_family.

Fixes #425

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-12-01 11:41:52 +02:00
Asias He
56df32ba56 gossip: Mark node as dead even if already left
Backport: CASSANDRA-10205

484e645 Mark node as dead even if already left
2015-12-01 17:29:25 +08:00
Asias He
59694a8e43 failure_detector: Print versions for gossip states in gossipinfo
Backport: CASSANDRA-10330

ae4cd69 Print versions for gossip states in gossipinfo

For instance, the version for each state, which can be useful for
diagnosing the reason for any missing states. Also instead of just
omitting the TOKENS state, let's indicate whether the state was actually
present or not.
2015-12-01 17:29:25 +08:00
Asias He
af91a8f31b storage_service: Fix transition from write survey to normal mode
Backport: CASSANDRA-9740

52dbc3f Can't transition from write survey to normal mode
2015-12-01 17:29:25 +08:00
Asias He
2f071d9648 storage_service: Refuse to decommission if not in state NORMAL
Backport: CASSANDRA-8741

5bc56c3 refuse to decomission if not in state NORMAL
2015-12-01 17:29:25 +08:00
Asias He
224db2ba37 failure_detector: Don't mark nodes down before the max local pause interval once paused
Backport: CASSANDRA-9446

7fba3d2 Don't mark nodes down before the max local pause interval once paused
2015-12-01 17:29:25 +08:00
Asias He
51fcc48700 failure_detector: Failure detector detects and ignores local pauses
Backport: CASSANDRA-9183

4012134 Failure detector detects and ignores local pauses
2015-12-01 17:29:25 +08:00
Asias He
1b9e350614 gossip: Do not print node is now part of the cluster during gossip shadow round
With

   Node 1 (Seed node, Port 7000 is opened, 10.184.9.144)
   Node 2 (Port 7000 is opened, 10.184.9.145)
   Node 3 (Port 7000 is blocked by firewall)

On Node 3, we saw the following error, which was very confusing: Node 3
saw Node 1 and Node 2 but complained that it could not contact any seeds.

The message "Node 10.184.9.144 is now part of the cluster" and friends
are actually messages printed during the gossip shadow round where Node
3 connects to Node 1's port 7000 and Node 1 returns all info it knows to
Node 3, so that Node 3 knows Node 1 and Node 2 and we see the "Node
10.184.9.144/145 is now part of the cluster" message.

However, during the normal gossip round, Node 3 will not mark Node 1 and
Node 2 UP until the Seed node initiates a gossip round to Node 3, (note
port 7000 on node 3 is blocked in this case). So Node 3 will not mark
Node 1 and Node 2 UP and we see the "Unable to contact any seeds" error.

[shard 0] storage_service - Loading persisted ring state
[shard 0] gossip - Node 10.184.9.144 is now part of the cluster
[shard 0] gossip - inet_address 10.184.9.144 is now UP
[shard 0] gossip - Node 10.184.9.145 is now part of the cluster
[shard 0] gossip - inet_address 10.184.9.145 is now UP
[shard 0] storage_service - Starting up server gossip
scylla_run[12479]: Start gossiper service ...
[shard 0] storage_service - JOINING: waiting for ring information
[shard 0] storage_service - JOINING: schema complete, ready to bootstrap
[shard 0] storage_service - JOINING: waiting for pending range calculation
[shard 0] storage_service - JOINING: calculation complete, ready to bootstrap
[shard 0] storage_service - JOINING: getting bootstrap token
[shard 0] storage_service - JOINING: sleeping 5000 ms for pending range setup
scylla_run[12479]: Exiting on unhandled exception of type 'std::runtime_error': Unable to contact any seeds!
2015-12-01 17:29:25 +08:00
Asias He
f62a6f234b gossip: Add shutdown gossip state
Backported: CASSANDRA-8336 and CASSANDRA-9871

84b2846 remove redundant state
b2c62bb Add shutdown gossip state to prevent timeouts during rolling restarts
8f9ca07 Cannot replace token does not exist - DN node removed as Fat Client

Fixes:

When X is shut down, X sends a SHUTDOWN message to both Y and Z, but for
some reason, only Y receives the message and Z does not receive the
message. If Z has a higher gossip version for X than Y has for
X, Z will initiate a gossip with Y and Y will mark X alive again.

X ------> Y
 \      /
  \    /
    Z
2015-12-01 17:29:25 +08:00
Gleb Natapov
8c02ad0e9e messaging: log connection dropping event 2015-11-30 19:42:04 +02:00
Avi Kivity
b85f3ad130 Merge "Commit log replay - handle corrupted data silently, as non-fatal"
Fixes: #593

"Changes the parser/replayer to treat data corruption as non-fatal,
skipping as little as possible to get the most data out of a segment,
but keeping track of, and reporting, the amount corrupted.

Replayer handles this and reports any non-fatal errors on replay finish.
Also added tests for corruption cases.

This patch series contains a cleanup-patch for commitlog_tests that was
previously submitted, but got lost."
2015-11-30 19:13:31 +02:00
Gleb Natapov
5b9f3bff7d storage_proxy: simplify error handling by using then/handle_exception
It is cleaner to use then/handle_exception instead of then_wrapped when
the normal and error flows do not share any state.
2015-11-30 17:41:32 +02:00
Gleb Natapov
5484f25091 storage_proxy: remove unneeded continuation
make_ready_future() around when_all() is no longer needed. It was
added to catch mutate_locally() exceptions, but those are now handled
at a lower level.
2015-11-30 17:41:28 +02:00
Gleb Natapov
cf95c3f681 storage_proxy: introduce unique_response_handler object to prevent write request leaks
If something bad happens between write request handler creation and
request execution, the request handler has to be destroyed. Currently
the code tries to do that explicitly in every place where the request
may be abandoned, but it misses some (at least one). This patch instead
introduces a unique_response_handler object that removes the handler
automatically if the request is not executed for some reason.
2015-11-30 17:41:27 +02:00
Gleb Natapov
d8afc6014e storage_proxy: catch exception thrown by mutate_locally in mutate verb handler
Also simplify error logging.
2015-11-30 17:41:25 +02:00
Avi Kivity
3c9ded27cc Update scylla-ami submodule
* ami/files/scylla-ami 3f37184...07b7118 (1):
  > Use /etc/scylla as SCYLLA_CONF directory
2015-11-30 16:39:49 +02:00
Takuya ASADA
616903de12 dist: use distribution version of antlr3, on Ubuntu 15.10
Rename antlr3-tool to antlr3 (same as distribution package), and use distribution version if it's available

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-11-30 16:37:36 +02:00
Pekka Enberg
0e8f80b5ee Merge "Relax bootstrapping/leaving/moving nodes check" from Asias 2015-11-30 11:53:07 +02:00
Asias He
2022117234 failure_detector: Enable phi_convict_threshold option
Adjusts the sensitivity of the failure detector on an exponential scale.

Use as:

$ scylla --phi-convict-threshold 9

Default to 8.
2015-11-30 11:09:36 +02:00
Asias He
db70643fe3 failure_detector: Print application_state properly 2015-11-30 11:08:40 +02:00
Asias He
aaca88a1e7 token_metadata: Add print_pending_ranges for debug print
Signed-off-by: Pekka Enberg <penberg@scylladb.com>
2015-11-30 11:07:42 +02:00
Avi Kivity
2c59e2f81f Merge "Fix race between population and update in row cache" from Tomasz
"Before this change, populations could race with update from flushed
memtable, which might result in cache being populated with older
data. Populations started before the flush are not considering the
memtable nor its sstable.

The fix employed here is to make update wait for populations which
were started before the flushed memtable's sstable was added to the
underlying data source. All populations started after that are
guaranteed to see the new data. The update() call will wait only for
current populating reads to complete; it will not wait for readers to
be advanced by the consumer, for instance."
2015-11-30 11:06:23 +02:00
Tomasz Grabiec
8d88ece896 schema_tables: Fix "comment" property not being loaded from storage 2015-11-30 10:57:36 +02:00
Pekka Enberg
a64fa3db03 Merge "range_streamer fix and cleanup" from Asias
Do not use hard-coded value for is_replacing and rangemovement.
2015-11-30 10:47:06 +02:00
Asias He
879a4ad4d3 storage_service: Update pending ranges immediately after update of normal tokens
To avoid a race where natural endpoint was updated to contain node A,
but A was not yet removed from pending endpoints.

This fixes the root cause of commit d9d8f87c1 (storage_proxy: filter out
natural endpoints from pending endpoint). This patch alone fixes #539,
but we still want commit d9d8f87c1 to be safe.
2015-11-30 10:20:59 +02:00
Asias He
0af7fb5509 range_streamer: Kill FIXME in use_strict_consistency for consistent_rangemovement 2015-11-30 09:15:42 +08:00
Asias He
f80e3d7859 range_streamer: Simplify multiple_map to map conversion in add_ranges 2015-11-30 09:15:42 +08:00
Asias He
21882f5122 range_streamer: Kill one leftover comment 2015-11-30 09:15:42 +08:00
Asias He
6b258f1247 range_streamer: Kill FIXME for is_replacing 2015-11-30 09:15:42 +08:00
Asias He
aa2b11f21b database: Move is_replacing and get_replace_address to database class
So they can be used outside storage_service.
2015-11-30 09:15:42 +08:00
Asias He
80d1d4d161 storage_service: Relax bootstrapping/leaving/moving nodes check in check_for_endpoint_collision
When other bootstrapping/leaving/moving nodes are found during
bootstrap, instead of throwing immediately, sleep and try again for one
minute, hoping other nodes will finish the operation soon.

Since we are retrying the shadow gossip round more than once, we need
to put the gossip state back to shadow round after each round, to
make the shadow round work correctly.

This is useful when starting an empty cluster for testing. E.g,

   $ scylla --listen-address 127.0.0.1
   $ sleep 3
   $ scylla --listen-address 127.0.0.2
   $ sleep 3
   $ scylla --listen-address 127.0.0.3

Without this patch, node 3 will hit the check.

   TIME  STATUS
   -----------------------
   Node  1:
   32:00 Starts
   32:00 In NORMAL status

   Node  2:
   32:03 Starts
   32:04 In BOOT status
   32:10 In NORMAL status

   Node  3:
   32:06 Starts
   32:06 Found node 2 in BOOT status, hit the check, sleep and try again
   32:11 Found node 2 in NORMAL status, can keep going now
   32:12 In BOOT status
   32:18 In NORMAL status
2015-11-30 09:07:57 +08:00
Asias He
8b19373536 storage_service: Relax bootstrapping/leaving/moving nodes check in join_token_ring
When other bootstrapping/leaving/moving nodes are found during
bootstrap, instead of throwing immediately, sleep and try again for one
minute, hoping other nodes will finish the operation soon.

This is useful when starting an empty cluster for testing. E.g,

   $ scylla --listen-address 127.0.0.1
   $ scylla --listen-address 127.0.0.2
   $ scylla --listen-address 127.0.0.3

Without this patch, node 3 will hit the check.

   TIME  STATUS
   -----------------------
   Node  1:
   25:19 Starts
   25:20 In NORMAL status

   Node  2:
   25:19 Starts
   25:23 In BOOT status
   25:28 In NORMAL status

   Node  3:
   25:19 Starts
   25:24 Found node 2 in BOOT status, hit the check, sleep and try again
   25:29 Found node 2 in NORMAL status, can keep going now
   25:29 In BOOT status
   25:34 In NORMAL status
2015-11-30 09:07:57 +08:00
Tomasz Grabiec
df46542832 tests: Add test for populate and update race 2015-11-29 16:25:22 +01:00
Tomasz Grabiec
6f69d4b700 tests: Avoid potential use after free on partition range 2015-11-29 16:25:21 +01:00
Tomasz Grabiec
de75f3fa69 row_cache: Add default value for partition range in make_reader() 2015-11-29 16:25:21 +01:00
Tomasz Grabiec
ab328ead3d mutation: Introduce ring_position() 2015-11-29 16:25:21 +01:00
Tomasz Grabiec
32ac2ccc4a memtable: Introduce apply(memtable&) 2015-11-29 16:25:21 +01:00
Tomasz Grabiec
7c3e6c306b row_cache: Wait for in-flight populations on update
Before this change, populations could race with update from flushed
memtable, which might result in cache being populated with older
data. Populations started before the flush are not considering the
memtable nor its sstable.

The fix employed here is to make update wait for populations which
were started before the flushed memtable's sstable was added to the
underlying data source. All populations started after that are
guaranteed to see the new data.
2015-11-29 16:25:21 +01:00
Tomasz Grabiec
a3e3add28a utils: Introduce phased_barrier
Utility for waiting on a group of async actions started before certain
point in time.
2015-11-29 16:25:21 +01:00
Pekka Enberg
a26ffefd53 transport/server: Remove CQL text type from encoding
The text data type is no longer present in CQL binary protocol v3 and
later. We don't need it for encoding earlier versions either because
it's an alias for varchar which is present in all CQL binary protocol
versions.

Fixes #526.

Signed-off-by: Pekka Enberg <penberg@scylladb.com>
2015-11-27 09:13:56 +01:00
Pekka Enberg
2599b78583 Merge "CQL notification + storage_service fix" from Asias
"pushed_notifications_test.py dtest is now passing"
2015-11-27 10:09:56 +02:00
Asias He
da0e80a286 storage_service: Fix failed bootstrap/replace attempts being persisted in system.peers
Backported from:

ac46747 Fix failed bootstrap/replace attempts being persisted in system.peers (CASSANDRA-9180)
2015-11-27 15:31:56 +08:00
Asias He
36b2de10ed failure_detector: Improve FD logging when the arrival time is ignored
Backport from:

eb9c5bb Improve FD logging when the arrival time is ignored.
2015-11-27 15:31:56 +08:00
Asias He
ed9cd23a2d transport: Fix duplicate up/down messages sent to native clients
This patch plus Pekka's previous commit 3c72ea9f96

   "gms: Fix gossiper::handle_major_state_change() restart logic"

fix CASSANDRA-7816.

Backported from:

   def4835 Add missing follow on fix for 7816 only applied to
           cassandra-2.1 branch in 763130bdbde2f4cec2e8973bcd5203caf51cc89f
   763130b Followup commit for 7816
   2199a87 Fix duplicate up/down messages sent to native clients

Tested by:
   pushed_notifications_test.py:TestPushedNotifications.restart_node_test
2015-11-27 15:31:56 +08:00
Asias He
25bb889c2a transport: Fix wrong message for UP and DOWN event 2015-11-27 15:31:56 +08:00
Asias He
ca8c4f3e77 storage_service: Fix MOVED_NODE client event (CASSANDRA-8516)
Backport from:

b296c55f956c6ef07c8330dc28ef8c351e5bcfe2 (Fix MOVED_NODE client event)

Fixes:

DISABLE_VNODES=true nosetests
pushed_notifications_test.py:TestPushedNotifications.move_single_node_test
2015-11-27 15:31:56 +08:00
Gleb Natapov
ad358300a9 cql server: remove connection from notifiers earlier
Remove the connection from the notifier lists just before closing it, to
prevent attempts to send a notification on an already closed connection.
2015-11-26 18:50:08 +02:00
Pekka Enberg
569d288891 cql3: Add TRUNCATE TABLE alias for TRUNCATE
CQL 3.2.1 introduces a "TRUNCATE TABLE X" alias for "TRUNCATE X":

  4e3555c1d9

Fix our CQL grammar to also support that.

Please note that we don't bump up advertised CQL version yet because our
cqlsh clients won't be able to connect by default until we upgrade them
to C* 2.1.10 or later.

Fixes #576

Signed-off-by: Pekka Enberg <penberg@scylladb.com>
2015-11-26 18:45:50 +02:00
Gleb Natapov
96f40d535e cql server: add missing gate during connection access
cql connection access is protected by a gate, but event notifiers have
omitted taking it. Fix it.
2015-11-26 13:05:59 +02:00
Tomasz Grabiec
a7c11d1e30 db: Fix handling of missing column family
The FIXMEs are no longer valid, we load schema on bootstrap and don't
support hot-plugging of column families via file system (nor does
Cassandra).

Handling of missing tables matches Cassandra 2.1: log it and continue;
queries propagate the error.
2015-11-25 16:59:15 +02:00
Tomasz Grabiec
3a402db1be storage_proxy: Remove dead signature 2015-11-25 16:57:03 +02:00
Asias He
d03b452322 storage_service: Remove RPC client in on_dead
When gossip marks a node down, we should close all the RPC connections to
that node.
2015-11-25 16:30:14 +02:00
Gleb Natapov
d9d8f87c1b storage_proxy: filter out natural endpoints from pending endpoint
If a request comes after the natural endpoints were updated to contain
node A, but A was not yet removed from the pending endpoints, A will be
in both, and the write request logic cannot handle this properly. Fix
this by filtering nodes which are already in the natural endpoints out
of the pending endpoints.

Fixes #539.
2015-11-25 16:28:55 +02:00
Pekka Enberg
cf7541020f Merge "Enable more config options" from Asias 2015-11-25 16:09:22 +02:00
Tomasz Grabiec
c3f03d5c96 Merge branch 'pdziepak/random-lsa-patches/v3' from seastar-dev.git
LSA fixes from Paweł.
2015-11-25 10:26:23 +01:00
Paweł Dziepak
89f7f746cb lsa: fix printing object_descriptor::_alignment
object_descriptor::_alignment is of type uint8_t which is actually an
unsigned char.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-11-24 20:13:29 +01:00
Paweł Dziepak
65875124b7 lsa: guarantee that segment_heap doesn't throw
boost::heap::binomial_heap allocates helper object in push() and,
therefore, may throw an exception. This shouldn't happen during
compaction.

The solution is to reserve space for this helper object in
segment_descriptor and use a custom allocator with
boost::heap::binomial_heap.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-11-24 19:51:22 +01:00
Paweł Dziepak
273b8daeeb lsa: add no-op default constructor for segment
Zero initialization of segment::data when segment is value initialized
is undesirable.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-11-24 16:37:37 +01:00
Paweł Dziepak
e6cf3e915f lsa: add counters for memory used by large objects
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-11-24 16:36:27 +01:00
Paweł Dziepak
9396956955 scylla-gdb.py: show lsa statistics and regions
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-11-24 16:36:20 +01:00
Paweł Dziepak
aaecf5424c scylla-gdb.py: show free, used and total memory
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-11-24 16:36:16 +01:00
Paweł Dziepak
6b113a9a7a lsa: fix eviction of large blobs
The LSA memory reclaimer logic assumes that the amount of memory used by
LSA equals segments_in_use * segment_size. However, LSA is also
responsible for the eviction of large objects, which do not affect the
used segment count; e.g. a region with no used segments may still use a
lot of memory for large objects. The solution is to switch from
measuring memory in used segments to a used-bytes count that also
includes large objects.

Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
2015-11-24 16:29:09 +01:00
Takuya ASADA
4a8c79ca0e dist: re-initialize RAID on ephemeral disk when stop/restart AMI instance
Since this does not check disk types, it may re-initialize the RAID on
EBS when the first block was lost. But in that case re-initializing the
RAID is probably the only choice we can take, so this is fine.
Fixes #364.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-11-24 10:46:10 +02:00
Asias He
3a9200db03 config: Add more documentation for options
For consistent_rangemovement, join_ring, load_ring_state, etc.
2015-11-24 10:07:31 +08:00
Asias He
7ddf8963f5 config: Enable broadcast_rpc_address option
With this patch, start two nodes

node 1:
scylla --rpc-address 127.0.0.1 --broadcast-rpc-address 127.0.0.11

node 2:
scylla --rpc-address 127.0.0.2 --broadcast-rpc-address 127.0.0.12

On node 1:
cqlsh> SELECT rpc_address from system.peers;

 rpc_address
-------------
  127.0.0.12

which means clients should use this address to connect to node 2 for the
CQL and Thrift protocols.
2015-11-24 10:07:31 +08:00
Asias He
33ef58c5c9 utils: Add get_broadcast_rpc_address and set_broadcast_rpc_address helper 2015-11-24 10:07:31 +08:00
Asias He
1e55aa38c1 storage_service: Implement is_replacing 2015-11-24 10:07:29 +08:00
Asias He
644c226d58 config: Enable replace_address and replace_address_first_boot option
It is the same as

   -Dcassandra.replace_address
   -Dcassandra.replace_address_first_boot

in cassandra.
2015-11-24 10:07:24 +08:00
Asias He
bfe26ea208 config: Enable replace_token option
It is the same as

   -Dcassandra.replace_token

in cassandra.

Use it as:

   $ scylla --replace-token $token1,$token2,$token3
2015-11-24 10:07:20 +08:00
Asias He
730abbc421 config: Enable replace_node option
It is the same as

   -Dcassandra.replace_node

in cassandra.

Use it as:

   $ scylla --replace-node $node_uuid
2015-11-24 10:07:16 +08:00
Asias He
2513d6ddbe config: Enable load_ring_state option
It is the same as

   -Dcassandra.load_ring_state

in cassandra.

Use it as:

   $ scylla --load-ring-state 0

or

   $ scylla --load-ring-state 1
2015-11-24 10:07:12 +08:00
Asias He
6e72e78e0d config: Enable join_ring option
It is the same as

   -Dcassandra.join_ring

in cassandra.

Use it as:

   $ scylla --join-ring 0

or

   $ scylla --join-ring 1
2015-11-24 10:07:07 +08:00
Asias He
505b3e4936 config: Enable consistent_rangemovement option
It is the same as

  -Dcassandra.consistent.rangemovement

in cassandra.

Use it as:

  $ scylla --consistent-rangemovement 0

or

  $ scylla --consistent-rangemovement 1
2015-11-24 10:06:54 +08:00
Gleb Natapov
33e5097090 messaging: do not kill live connection needlessly
The messaging service closes the connection in the rpc call
continuation on closed_error, but the code runs for each outstanding
rpc call on the connection, so the first continuation may destroy a
genuinely closed connection; the connection is then reopened, and the
next continuation, handling the previous error, kills a now perfectly
healthy connection. Fix this by closing the connection only in the
error state.
2015-11-23 20:16:28 +02:00
Tomasz Grabiec
cb0b56f75f Merge tag 'empty/v3' from https://github.com/avikivity/scylla
From Avi:

Origin supports a notion of empty values for non-container types; these
are serialized as zero-length blobs.  They are mostly useless and only
retained for compatibility.

The implementation here introduces a wrapper maybe_empty<T>, similar to
optional<T> but oriented towards usually-nonempty usage with implicit
conversion.

There is more work needed for full empty support: fixing up deserializers to
create empty values instead of nulls, and splitting up data_value into
data_value and a data_value_nonnull for the cases that require it.

(I chose maybe_empty<> rather than using optional<data_value> for nullable
data_value both because it requires fewer changes, and because
optional<data_value> introduces a lot of control flow when moving or copying,
which would be mostly useless in most cases).
2015-11-23 16:12:06 +01:00
Calle Wilund
b1a0c4b451 commitlog_tests: Add segment corruption tests
Test entry and chunk corruption.
2015-11-23 15:43:33 +01:00
Calle Wilund
d65adef10c commitlog_tests: test cleanup
This cleanup patch got lost in git-space some time ago. It is however sorely
needed...

* Use cleaner wrapper for creating temp dir + commit log, avoiding
  having to clear and clean in every test, etc.
* Remove assertions based on file system checks, since these are not
  valid due to both the async nature of the CL, and more to the point,
  because of pre-allocation of files and file blocks. Use CL
  counters/methods instead
* Fix some race conditions to ensure tests are safe(r)
* Speed up some tests
2015-11-23 15:42:45 +01:00
Calle Wilund
262f44948d commitlog: Add get_flush_count method (for testing) 2015-11-23 15:42:45 +01:00
Calle Wilund
76b43fbf74 commitlog_replayer: Handle replay data errors as non-fatal
Distinguish fatal and non-fatal exceptions, and handle data corruption
by adding it to stats and reporting it, but continue processing.

Note that "invalid_argument", i.e. attempting to replay origin/old
segments, is still considered fatal, as it is probably better to
signal this strongly to the user/admin.
2015-11-23 15:42:45 +01:00
Calle Wilund
2fe2320490 commitlog: Make reading segments with crc/data errors non-fatal
The parser object now attempts to skip past/terminate parsing on
corrupted entries/chunks (as detected by invalid sizes/crcs). The
amount of data skipped is kept track of (as well as we can estimate it -
pre-allocation makes this tricky), and at the end of parsing, iff
errors occurred, an exception detailing the failures is thrown (since
the subscription has little mechanism to deal with this otherwise).

Thus a caller can decide how to deal with data corruption, but will be
given as many entries as possible.
2015-11-23 15:42:45 +01:00
Avi Kivity
23895ac7f5 types: fix up confusion around empty serialized representation
An empty serialized representation means an empty value, not NULL.

Fix up the confusion by converting incorrect make_null() calls to a new
make_empty(), and removing make_null() in empty-capable types like
bytes_type.

Collections don't support empty serialized representations, so remove
the call there.
2015-11-22 12:20:24 +02:00
Tomasz Grabiec
ae9e0c3d41 storage_proxy: Avoid potential use after move on schema_ptr
Parameter evaluation order is unspecified, so it is possible that the
move of 'schema' into the lambda captures would happen before the
construction of the mutation.
2015-11-22 12:15:04 +02:00
Avi Kivity
0799251a9f Merge "optimize the sstable loading step of boot" from Raphael
"To speed up boot, parallelism was introduced to our code that loads
sstables from a column family, a function was implemented to read
the minimum from a sstable to determine whether it belongs to the
current shard, and buffer size in read simple is dynamically chosen
based on the size of the file and dma alignment.
The latter is important because filter file can be considerably
large when the respective sstable (data file) is very large.
Before this patchset, scylla took about 5 minutes to boot with a
data directory of 660GB. After this patchset, scylla took about 20
seconds to boot with the same data directory."
2015-11-22 11:27:34 +02:00
Asias He
23723991ed gossip: Fix STATUS field in nodetool gossipinfo
Before:
   === with c* cluster ===
   $ nodetool -p 7100 gossipinfo

   STATUS:NORMAL,-1139428872328849340

   === with scylla ===
   $ nodetool -p 7100 gossipinfo

   0:NORMAL,8251763528961471825;-9147358554612963965;5334343410266177046

After:
   === with scylla ===
   $ nodetool -p 7100 gossipinfo

   0:NORMAL,8251763528961471825

To align with c*, print one token in the STATUS field.

Refs #508.
2015-11-20 10:57:49 +02:00
Raphael S. Carvalho
a5842642fa sstables: change buf size in read_simple to 128k
Avi says:
"A small buffer size will hurt if we read a large file, but
a large buffer size won't hurt if we read a small file, since
we close it immediately."

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-11-19 13:35:25 -02:00
Raphael S. Carvalho
0f3ccc1143 db: optimize the sstable loading process
Currently, we only determine if an sstable belongs to the current shard
after loading some of its components into memory. For example, the
filter may be considerably big, and its content is irrelevant for
deciding whether an sstable should be included in a given shard.
Start using the functions previously introduced to optimize the
sstable loading process. add_sstable no longer checks if an sstable
is relevant to the current shard.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-11-19 13:34:25 -02:00
Raphael S. Carvalho
0053394ec0 sstables: introduce mark_sstable_for_deletion
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-11-19 13:34:24 -02:00
Raphael S. Carvalho
0ce2b7bc8d db: introduce belongs_to_current_shard
Returns true if the key range belongs to the current shard, false
otherwise.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-11-19 13:34:21 -02:00
Raphael S. Carvalho
f06b72eb18 sstables: introduce function to return sstable key range
Provides a function that returns the sstable key range, reading only
the summary component.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-11-19 13:34:19 -02:00
Raphael S. Carvalho
966e8c7144 db: introduce parallelism to sstable loading
Boot may be slow because the function that loads sstables does so
serially instead of in parallel. In the callback supplied to
lister::scan_dir, let's push the future returned by probe_file
(the function that loads an sstable) into a vector of futures and wait
for all of them at the end.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
2015-11-19 13:34:11 -02:00
Takuya ASADA
83c8b3e433 dist: support Ubuntu 15.10
We cannot share some dependency package names between 14.04 and 15.10,
so we need to add ifdefs. Not tested on other versions of Ubuntu.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-11-19 10:57:25 +02:00
Tomasz Grabiec
53e842aaf7 scylla-gdb.py: Fix scylla column_families command 2015-11-19 10:44:00 +02:00
Avi Kivity
0b91b643ba types: empty value support for non-container types
Origin supports (https://issues.apache.org/jira/browse/CASSANDRA-5648) "empty"
values even for non-container types such as int.  Use maybe_empty<> to
encapsulate abstract_type::native_type, adding an empty flag if needed.
2015-11-18 18:38:38 +02:00
Avi Kivity
7257f72fbf types: introduce maybe_empty<T> type alias
 - T for container types (that can naturally be empty)
 - emptyable<T> otherwise (adding that property artificially)
2015-11-18 15:25:24 +02:00
Avi Kivity
58d3a3e138 types: introduce emptyable<> template
Similar to optional<>, with the following differences:
 - decays back to the encapsulated type, with an emptiness check;
   this reflects the expectation that the value will rarely be empty
 - avoids conditionals during copy/move (and requires a default constructor),
   again with the same expectation.
2015-11-18 15:25:22 +02:00
Gleb Natapov
0870caaea1 cql transport: catch all exceptions
Not all exceptions are inherited from std::exception
(std::nested_exception) for instance, so catch and log all of them.
2015-11-18 15:17:43 +02:00
Asias He
242e5ea291 streaming: Ignore remote no_such_column_family for stream_transfer_task
When we start sending mutations for cf_id to a remote node, the remote
node might not have the cf_id anymore, due to the cf being dropped, for
instance.

We should not fail the streaming if this happens; since the cf does not
exist anymore, there is no point in streaming it.

Fixes #566
2015-11-18 15:12:23 +02:00
Asias He
3816e35d11 storage_service: Detect other bootstrapping/leaving/moving nodes during bootstrap 2015-11-18 15:11:56 +02:00
Avi Kivity
6390bc3121 README: instructions for contributing 2015-11-18 15:10:37 +02:00
Asias He
3b52033371 gossip: Favor newly added node in do_gossip_to_live_member
When a new node joins a cluster, it starts a gossip round with a seed
node. However, within this round, the seed node will not tell the new
node anything it knows about other nodes in the cluster, because the
digest in the gossip SYN message contains only the new node itself and
no other nodes. The seed node picks randomly from the live nodes,
including the newly added node, in do_gossip_to_live_member to start a
gossip round. If the new node is "lucky", the seed node will talk to it
very soon and tell it all the information it knows about the cluster;
the new node will then mark the seed node alive and consider it seen.
If there is a considerably large number of live nodes, it might take a
long time before the seed node picks the new node and talks to it.

In bootstrap code, storage_service::bootstrap checks if we see any nodes
after sleep of RING_DELAY milliseconds and throw "Unable to contact any
seeds!" if not, thus the node will fail to bootstrap.

To help the seed node talk to new node faster, we favor new node in
do_gossip_to_live_member.
2015-11-18 15:00:37 +02:00
Amnon Heiman
374414ffd0 API: failure_detector modify the get_all_endpoint_states
In origin, get_all_endpoint_states performs all the information
formatting and returns a string.

This is not a good API approach; this patch replaces the implementation
so that the API returns an array of values and the JMX side does the
formatting.

This is a better API and would make it simpler in the future to stay in
sync with origin output.

This patch is part of #508

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-18 14:59:09 +02:00
Avi Kivity
17f6dc3671 Merge seastar upstream
* seastar 95ddb8e...84cb6df (2):
  > rpc: do not convert EOF into exception
  > reactor: remove debug output in command line option validation
2015-11-18 11:20:27 +02:00
Takuya ASADA
16cd5892f7 dist: setup rps on scylla_prepare, not on scylla_run
All preparation for running scylla should be done in scylla_prepare

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
2015-11-18 11:20:17 +02:00
Asias He
269ea7f81b storage_service: Enable is_ready_for_bootstrap in join_token_ring
The goal is to make sure our schema matches that of the other nodes in
the cluster.
2015-11-18 10:46:40 +02:00
Asias He
bb1470f0d4 migration_manager: Introduce is_ready_for_bootstrap
This compares the local schema version with those of the other nodes in
the cluster and returns true if they all match.
2015-11-18 10:46:06 +02:00
Avi Kivity
ba859acb3b big_decimal: add default constructor
Arithmetic types should have a default constructor, and anyway the
following patch wants it.
2015-11-18 10:36:03 +02:00
Takuya ASADA
f0a6c33b6d dist: use /var/lib/scylla instead of /data on ami
Fixes #551.
Change the mountpoint to /var/lib/scylla and copy conf/ onto it.
Note: conf/ needs to be replaced with a symlink to /etc/scylla once the new rpm is uploaded to the yum repository.

Signed-off-by: Takuya ASADA <syuu@scylladb.com>
Signed-off-by: Pekka Enberg <penberg@iki.fi>
2015-11-18 10:10:48 +02:00
Amnon Heiman
27737d702b API: Stubing the compaction manager - workaround
Until the compaction manager API is ready, its failing commands cause
problems with nodetool-related tests. This patch stubs the compaction
manager logic so it does not fail.

It will be replaced by an actual implementation when the equivalent
compaction code is ready.

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2015-11-11 16:58:13 +02:00
Amnon Heiman
66e428799f API: add compaction info object
This patch adds a compaction info object and an API that returns it.
It will be mapped to the JMX getCompactions call, which returns a map.

The use of an object is more RESTful and is better documented in the
swagger definition file.
2015-11-11 16:57:44 +02:00
238 changed files with 7701 additions and 2864 deletions

76
ORIGIN
View File

@@ -1 +1,77 @@
http://git-wip-us.apache.org/repos/asf/cassandra.git trunk (bf599fb5b062cbcc652da78b7d699e7a01b949ad)
import = bf599fb5b062cbcc652da78b7d699e7a01b949ad
Y = Already in scylla
$ git log --oneline import..cassandra-2.1.11 -- gms/
Y 484e645 Mark node as dead even if already left
d0c166f Add trampled commit back
ba5837e Merge branch 'cassandra-2.0' into cassandra-2.1
718e47f Forgot a damn c/r
a7282e4 Merge branch 'cassandra-2.0' into cassandra-2.1
Y ae4cd69 Print versions for gossip states in gossipinfo.
Y 7fba3d2 Don't mark nodes down before the max local pause interval once paused.
c2142e6 Merge branch 'cassandra-2.0' into cassandra-2.1
ba9a69e checkForEndpointCollision fails for legitimate collisions, finalized list of statuses and nits, CASSANDRA-9765
54470a2 checkForEndpointCollision fails for legitimate collisions, improved version after CR, CASSANDRA-9765
2c9b490 checkForEndpointCollision fails for legitimate collisions, CASSANDRA-9765
4c15970 Merge branch 'cassandra-2.0' into cassandra-2.1
ad8047a ArrivalWindow should use primitives
Y 4012134 Failure detector detects and ignores local pauses
9bcdd0f Merge branch 'cassandra-2.0' into cassandra-2.1
cefaa4e Close incoming connections when MessagingService is stopped
ea1beda Merge branch 'cassandra-2.0' into cassandra-2.1
08dbbd6 Ignore gossip SYNs after shutdown
3c17ac6 Merge branch 'cassandra-2.0' into cassandra-2.1
a64bc43 lists work better when you initialize them
543a899 change list to arraylist
730d4d4 Merge branch 'cassandra-2.0' into cassandra-2.1
e3e2de0 change list to arraylist
f7884c5 Merge branch 'cassandra-2.0' into cassandra-2.1
Y 84b2846 remove redundant state
4f2c372 Merge branch 'cassandra-2.0' into cassandra-2.1
Y b2c62bb Add shutdown gossip state to prevent timeouts during rolling restarts
Y def4835 Add missing follow on fix for 7816 only applied to cassandra-2.1 branch in 763130bdbde2f4cec2e8973bcd5203caf51cc89f
Y 763130b Followup commit for 7816
1376b8e Merge branch 'cassandra-2.0' into cassandra-2.1
Y 2199a87 Fix duplicate up/down messages sent to native clients
136042e Merge branch 'cassandra-2.0' into cassandra-2.1
Y eb9c5bb Improve FD logging when the arrival time is ignored.
$ git log --oneline import..cassandra-2.1.11 -- service/StorageService.java
92c5787 Keep StorageServiceMBean interface stable
6039d0e Fix DC and Rack in nodetool info
a2f0da0 Merge branch 'cassandra-2.0' into cassandra-2.1
c4de752 Follow-up to CASSANDRA-10238
e889ee4 2i key cache load fails
4b1d59e Merge branch 'cassandra-2.0' into cassandra-2.1
257cdaa Fix consolidating racks violating the RF contract
Y 27754c0 refuse to decomission if not in state NORMAL patch by Jan Karlsson and Stefania for CASSANDRA-8741
Y 5bc56c3 refuse to decomission if not in state NORMAL patch by Jan Karlsson and Stefania for CASSANDRA-8741
Y 8f9ca07 Cannot replace token does not exist - DN node removed as Fat Client
c2142e6 Merge branch 'cassandra-2.0' into cassandra-2.1
54470a2 checkForEndpointCollision fails for legitimate collisions, improved version after CR, CASSANDRA-9765
1eccced Handle corrupt files on startup
2c9b490 checkForEndpointCollision fails for legitimate collisions, CASSANDRA-9765
c4b5260 Merge branch 'cassandra-2.0' into cassandra-2.1
Y 52dbc3f Can't transition from write survey to normal mode
9966419 Make rebuild only run one at a time
d693ca1 Merge branch 'cassandra-2.0' into cassandra-2.1
be9eff5 Add option to not validate atoms during scrub
2a4daaf followup fix for 8564
93478ab Wait for anticompaction to finish
9e9846e Fix for harmless exceptions being logged as ERROR
6d06f32 Fix anticompaction blocking ANTI_ENTROPY stage
4f2c372 Merge branch 'cassandra-2.0' into cassandra-2.1
Y b2c62bb Add shutdown gossip state to prevent timeouts during rolling restarts
Y cba1b68 Fix failed bootstrap/replace attempts being persisted in system.peers
f59df28 Allow takeColumnFamilySnapshot to take a list of tables patch by Sachin Jarin; reviewed by Nick Bailey for CASSANDRA-8348
Y ac46747 Fix failed bootstrap/replace attempts being persisted in system.peers
5abab57 Merge branch 'cassandra-2.0' into cassandra-2.1
0ff9c3c Allow reusing snapshot tags across different column families.
f9c57a5 Merge branch 'cassandra-2.0' into cassandra-2.1
Y b296c55 Fix MOVED_NODE client event
bbb3fc7 Merge branch 'cassandra-2.0' into cassandra-2.1
37eb2a0 Fix NPE in nodetool getendpoints with bad ks/cf
f8b43d4 Merge branch 'cassandra-2.0' into cassandra-2.1
e20810c Remove C* specific class from JMX API

View File

@@ -82,3 +82,15 @@ Run the image with:
```
docker run -p $(hostname -i):9042:9042 -i -t <image name>
```
## Contributing to Scylla
Do not send pull requests.
Send patches to the mailing list address scylladb-dev@googlegroups.com.
Be sure to subscribe.
In order for your patches to be merged, you must sign the Contributor's
License Agreement, protecting your rights and ours. See
http://www.scylladb.com/opensource/cla/.

View File

@@ -1,6 +1,6 @@
#!/bin/sh
VERSION=0.12
VERSION=0.14.1
if test -f version
then

View File

@@ -579,30 +579,6 @@
}
]
},
{
"path":"/column_family/sstables/snapshots_size/{name}",
"operations":[
{
"method":"GET",
"summary":"the size of SSTables in 'snapshots' subdirectory which aren't live anymore",
"type":"double",
"nickname":"true_snapshots_size",
"produces":[
"application/json"
],
"parameters":[
{
"name":"name",
"description":"The column family name in keysspace:name format",
"required":true,
"allowMultiple":false,
"type":"string",
"paramType":"path"
}
]
}
]
},
{
"path":"/column_family/metrics/memtable_columns_count/{name}",
"operations":[
@@ -2041,7 +2017,7 @@
]
},
{
"path":"/column_family/metrics/true_snapshots_size/{name}",
"path":"/column_family/metrics/snapshots_size/{name}",
"operations":[
{
"method":"GET",

View File

@@ -15,7 +15,7 @@
"summary":"get List of running compactions",
"type":"array",
"items":{
"type":"jsonmap"
"type":"summary"
},
"nickname":"get_compactions",
"produces":[
@@ -46,16 +46,16 @@
]
},
{
"path":"/compaction_manager/compaction_summary",
"path":"/compaction_manager/compaction_info",
"operations":[
{
"method":"GET",
"summary":"get compaction summary",
"summary":"get a list of all active compaction info",
"type":"array",
"items":{
"type":"string"
"type":"compaction_info"
},
"nickname":"get_compaction_summary",
"nickname":"get_compaction_info",
"produces":[
"application/json"
],
@@ -174,30 +174,73 @@
}
],
"models":{
"mapper":{
"id":"mapper",
"description":"A key value mapping",
"row_merged":{
"id":"row_merged",
"description":"A row merged information",
"properties":{
"key":{
"type":"string",
"description":"The key"
"type":"int",
"description":"The number of sstable"
},
"value":{
"type":"string",
"description":"The value"
"type":"long",
"description":"The number or row compacted"
}
}
},
"jsonmap":{
"id":"jsonmap",
"description":"A json representation of a map as a list of key value",
"compaction_info" :{
"id": "compaction_info",
"description":"A key value mapping",
"properties":{
"operation_type":{
"type":"string",
"description":"The operation type"
},
"completed":{
"type":"long",
"description":"The current completed"
},
"total":{
"type":"long",
"description":"The total to compact"
},
"unit":{
"type":"string",
"description":"The compacted unit"
}
}
},
"summary":{
"id":"summary",
"description":"A compaction summary object",
"properties":{
"value":{
"type":"array",
"items":{
"type":"mapper"
},
"description":"A list of key, value mapping"
"id":{
"type":"string",
"description":"The UUID"
},
"ks":{
"type":"string",
"description":"The keyspace name"
},
"cf":{
"type":"string",
"description":"The column family name"
},
"completed":{
"type":"long",
"description":"The number of units completed"
},
"total":{
"type":"long",
"description":"The total number of units"
},
"task_type":{
"type":"string",
"description":"The task compaction type"
},
"unit":{
"type":"string",
"description":"The units being used"
}
}
},
@@ -232,7 +275,7 @@
"rows_merged":{
"type":"array",
"items":{
"type":"mapper"
"type":"row_merged"
},
"description":"The merged rows"
}

View File

@@ -48,7 +48,10 @@
{
"method":"GET",
"summary":"Get all endpoint states",
"type":"string",
"type":"array",
"items":{
"type":"endpoint_state"
},
"nickname":"get_all_endpoint_states",
"produces":[
"application/json"
@@ -148,6 +151,53 @@
"description": "The value"
}
}
},
"endpoint_state": {
"id": "states",
"description": "Holds an endpoint state",
"properties": {
"addrs": {
"type": "string",
"description": "The endpoint address"
},
"generation": {
"type": "int",
"description": "The heart beat generation"
},
"version": {
"type": "int",
"description": "The heart beat version"
},
"update_time": {
"type": "long",
"description": "The update timestamp"
},
"is_alive": {
"type": "boolean",
"description": "Is the endpoint alive"
},
"application_state" : {
"type":"array",
"items":{
"type":"version_value"
},
"description": "Is the endpoint alive"
}
}
},
"version_value": {
"id": "version_value",
"description": "Holds a version value for an application state",
"properties": {
"application_state": {
"type": "int",
"description": "The application state enum index"
},
"value": {
"type": "string",
"description": "The version value"
}
}
}
}
}

View File

@@ -184,6 +184,30 @@
]
}
]
},
{
"path":"/messaging_service/version",
"operations":[
{
"method":"GET",
"summary":"Get the version number",
"type":"int",
"nickname":"get_version",
"produces":[
"application/json"
],
"parameters":[
{
"name":"addr",
"description":"Address",
"required":true,
"allowMultiple":false,
"type":"string",
"paramType":"query"
}
]
}
]
}
],
"models":{

View File

@@ -425,7 +425,7 @@
"summary":"load value. Keys are IP addresses",
"type":"array",
"items":{
"type":"mapper"
"type":"double_mapper"
},
"nickname":"get_load_map",
"produces":[
@@ -797,8 +797,72 @@
"paramType":"path"
},
{
"name":"options",
"description":"Options for the repair",
"name":"primaryRange",
"description":"If the value is the string 'true' with any capitalization, repair only the first range returned by the partitioner.",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"parallelism",
"description":"Repair parallelism, can be 0 (sequential), 1 (parallel) or 2 (datacenter-aware).",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"incremental",
"description":"If the value is the string 'true' with any capitalization, perform incremental repair.",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"jobThreads",
"description":"An integer specifying the parallelism on each node.",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"ranges",
"description":"An explicit list of ranges to repair, overriding the default choice. Each range is expressed as token1:token2, and multiple ranges can be given as a comma separated list.",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"columnFamilies",
"description":"Which column families to repair in the given keyspace. Multiple columns families can be named separated by commas. If this option is missing, all column families in the keyspace are repaired.",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"dataCenters",
"description":"Which data centers are to participate in this repair. Multiple data centers can be listed separated by commas.",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"hosts",
"description":"Which hosts are to participate in this repair. Multiple hosts can be listed separated by commas.",
"required":false,
"allowMultiple":false,
"type":"string",
"paramType":"query"
},
{
"name":"trace",
"description":"If the value is the string 'true' with any capitalization, enable tracing of the repair.",
"required":false,
"allowMultiple":false,
"type":"string",
@@ -1964,6 +2028,20 @@
}
}
},
"double_mapper":{
"id":"double_mapper",
"description":"A key value mapping between a string and a double",
"properties":{
"key":{
"type":"string",
"description":"The key"
},
"value":{
"type":"double",
"description":"The value"
}
}
},
"maplist_mapper":{
"id":"maplist_mapper",
"description":"A key value mapping, where key and value are list",

View File

@@ -128,47 +128,54 @@ inline double pow2(double a) {
return a * a;
}
inline httpd::utils_json::histogram add_histogram(httpd::utils_json::histogram res,
// FIXME: Move to utils::ihistogram::operator+=()
inline utils::ihistogram add_histogram(utils::ihistogram res,
const utils::ihistogram& val) {
if (!res.count._set) {
res = val;
return res;
if (res.count == 0) {
return val;
}
if (val.count == 0) {
return res;
return std::move(res);
}
if (res.min() > val.min) {
if (res.min > val.min) {
res.min = val.min;
}
if (res.max() < val.max) {
if (res.max < val.max) {
res.max = val.max;
}
double ncount = res.count() + val.count;
double ncount = res.count + val.count;
// To get an estimated sum we take the estimated mean
// and multiply it by the true count
res.sum = res.sum() + val.mean * val.count;
double a = res.count()/ncount;
res.sum = res.sum + val.mean * val.count;
double a = res.count/ncount;
double b = val.count/ncount;
double mean = a * res.mean() + b * val.mean;
double mean = a * res.mean + b * val.mean;
res.variance = (res.variance() + pow2(res.mean() - mean) )* a +
res.variance = (res.variance + pow2(res.mean - mean) )* a +
(val.variance + pow2(val.mean -mean))* b;
res.mean = mean;
res.count = res.count() + val.count;
res.count = res.count + val.count;
for (auto i : val.sample) {
res.sample.push(i);
res.sample.push_back(i);
}
return res;
}
inline
httpd::utils_json::histogram to_json(const utils::ihistogram& val) {
httpd::utils_json::histogram h;
h = val;
return h;
}
template<class T, class F>
future<json::json_return_type> sum_histogram_stats(distributed<T>& d, utils::ihistogram F::*f) {
return d.map_reduce0([f](const T& p) {return p.get_stats().*f;}, httpd::utils_json::histogram(),
add_histogram).then([](const httpd::utils_json::histogram& val) {
return make_ready_future<json::json_return_type>(val);
return d.map_reduce0([f](const T& p) {return p.get_stats().*f;}, utils::ihistogram(),
add_histogram).then([](const utils::ihistogram& val) {
return make_ready_future<json::json_return_type>(to_json(val));
});
}

View File

@@ -64,21 +64,21 @@ future<> foreach_column_family(http_context& ctx, const sstring& name, function<
future<json::json_return_type> get_cf_stats(http_context& ctx, const sstring& name,
int64_t column_family::stats::*f) {
return map_reduce_cf(ctx, name, 0, [f](const column_family& cf) {
return map_reduce_cf(ctx, name, int64_t(0), [f](const column_family& cf) {
return cf.get_stats().*f;
}, std::plus<int64_t>());
}
future<json::json_return_type> get_cf_stats(http_context& ctx,
int64_t column_family::stats::*f) {
return map_reduce_cf(ctx, 0, [f](const column_family& cf) {
return map_reduce_cf(ctx, int64_t(0), [f](const column_family& cf) {
return cf.get_stats().*f;
}, std::plus<int64_t>());
}
static future<json::json_return_type> get_cf_stats_count(http_context& ctx, const sstring& name,
utils::ihistogram column_family::stats::*f) {
return map_reduce_cf(ctx, name, 0, [f](const column_family& cf) {
return map_reduce_cf(ctx, name, int64_t(0), [f](const column_family& cf) {
return (cf.get_stats().*f).count;
}, std::plus<int64_t>());
}
@@ -101,7 +101,7 @@ static future<json::json_return_type> get_cf_stats_sum(http_context& ctx, const
static future<json::json_return_type> get_cf_stats_count(http_context& ctx,
utils::ihistogram column_family::stats::*f) {
return map_reduce_cf(ctx, 0, [f](const column_family& cf) {
return map_reduce_cf(ctx, int64_t(0), [f](const column_family& cf) {
return (cf.get_stats().*f).count;
}, std::plus<int64_t>());
}
@@ -110,28 +110,30 @@ static future<json::json_return_type> get_cf_histogram(http_context& ctx, const
utils::ihistogram column_family::stats::*f) {
utils::UUID uuid = get_uuid(name, ctx.db.local());
return ctx.db.map_reduce0([f, uuid](const database& p) {return p.find_column_family(uuid).get_stats().*f;},
httpd::utils_json::histogram(),
utils::ihistogram(),
add_histogram)
.then([](const httpd::utils_json::histogram& val) {
return make_ready_future<json::json_return_type>(val);
.then([](const utils::ihistogram& val) {
return make_ready_future<json::json_return_type>(to_json(val));
});
}
static future<json::json_return_type> get_cf_histogram(http_context& ctx, utils::ihistogram column_family::stats::*f) {
std::function<httpd::utils_json::histogram(const database&)> fun = [f] (const database& db) {
httpd::utils_json::histogram res;
std::function<utils::ihistogram(const database&)> fun = [f] (const database& db) {
utils::ihistogram res;
for (auto i : db.get_column_families()) {
res = add_histogram(res, i.second->get_stats().*f);
}
return res;
};
return ctx.db.map(fun).then([](const std::vector<httpd::utils_json::histogram> &res) {
return make_ready_future<json::json_return_type>(res);
return ctx.db.map(fun).then([](const std::vector<utils::ihistogram> &res) {
std::vector<httpd::utils_json::histogram> r;
boost::copy(res | boost::adaptors::transformed(to_json), std::back_inserter(r));
return make_ready_future<json::json_return_type>(r);
});
}
static future<json::json_return_type> get_cf_unleveled_sstables(http_context& ctx, const sstring& name) {
return map_reduce_cf(ctx, name, 0, [](const column_family& cf) {
return map_reduce_cf(ctx, name, int64_t(0), [](const column_family& cf) {
return cf.get_unleveled_sstables();
}, std::plus<int64_t>());
}
@@ -221,25 +223,25 @@ void set_column_family(http_context& ctx, routes& r) {
});
cf::get_memtable_off_heap_size.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], 0, [](column_family& cf) {
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](column_family& cf) {
return cf.active_memtable().region().occupancy().total_space();
}, std::plus<int64_t>());
});
cf::get_all_memtable_off_heap_size.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, 0, [](column_family& cf) {
return map_reduce_cf(ctx, int64_t(0), [](column_family& cf) {
return cf.active_memtable().region().occupancy().total_space();
}, std::plus<int64_t>());
});
cf::get_memtable_live_data_size.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], 0, [](column_family& cf) {
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](column_family& cf) {
return cf.active_memtable().region().occupancy().used_space();
}, std::plus<int64_t>());
});
cf::get_all_memtable_live_data_size.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, 0, [](column_family& cf) {
return map_reduce_cf(ctx, int64_t(0), [](column_family& cf) {
return cf.active_memtable().region().occupancy().used_space();
}, std::plus<int64_t>());
});
@@ -254,7 +256,7 @@ void set_column_family(http_context& ctx, routes& r) {
cf::get_cf_all_memtables_off_heap_size.set(r, [&ctx] (std::unique_ptr<request> req) {
warn(unimplemented::cause::INDEXES);
return map_reduce_cf(ctx, req->param["name"], 0, [](column_family& cf) {
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](column_family& cf) {
return cf.occupancy().total_space();
}, std::plus<int64_t>());
});
@@ -263,21 +265,21 @@ void set_column_family(http_context& ctx, routes& r) {
warn(unimplemented::cause::INDEXES);
return ctx.db.map_reduce0([](const database& db){
return db.dirty_memory_region_group().memory_used();
}, 0, std::plus<int64_t>()).then([](int res) {
}, int64_t(0), std::plus<int64_t>()).then([](int res) {
return make_ready_future<json::json_return_type>(res);
});
});
cf::get_cf_all_memtables_live_data_size.set(r, [&ctx] (std::unique_ptr<request> req) {
warn(unimplemented::cause::INDEXES);
return map_reduce_cf(ctx, req->param["name"], 0, [](column_family& cf) {
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](column_family& cf) {
return cf.occupancy().used_space();
}, std::plus<int64_t>());
});
cf::get_all_cf_all_memtables_live_data_size.set(r, [&ctx] (std::unique_ptr<request> req) {
warn(unimplemented::cause::INDEXES);
return map_reduce_cf(ctx, 0, [](column_family& cf) {
return map_reduce_cf(ctx, int64_t(0), [](column_family& cf) {
return cf.active_memtable().region().occupancy().used_space();
}, std::plus<int64_t>());
});
@@ -302,7 +304,7 @@ void set_column_family(http_context& ctx, routes& r) {
});
cf::get_estimated_row_count.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], 0, [](column_family& cf) {
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](column_family& cf) {
uint64_t res = 0;
for (auto i: *cf.get_sstables() ) {
res += i.second->get_stats_metadata().estimated_row_size.count();
@@ -422,11 +424,11 @@ void set_column_family(http_context& ctx, routes& r) {
});
cf::get_max_row_size.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], 0, max_row_size, max_int64);
return map_reduce_cf(ctx, req->param["name"], int64_t(0), max_row_size, max_int64);
});
cf::get_all_max_row_size.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, 0, max_row_size, max_int64);
return map_reduce_cf(ctx, int64_t(0), max_row_size, max_int64);
});
cf::get_mean_row_size.set(r, [&ctx] (std::unique_ptr<request> req) {
@@ -537,20 +539,20 @@ void set_column_family(http_context& ctx, routes& r) {
}, std::plus<uint64_t>());
});
cf::get_index_summary_off_heap_memory_used.set(r, [] (std::unique_ptr<request> req) {
//TBD
// FIXME
// We are missing the off heap memory calculation
// Return 0 is the wrong value. It's a work around
// until the memory calculation will be available
//auto id = get_uuid(req->param["name"], ctx.db.local());
return make_ready_future<json::json_return_type>(0);
cf::get_index_summary_off_heap_memory_used.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], uint64_t(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), uint64_t(0), [](uint64_t s, auto& sst) {
return sst.second->get_summary().memory_footprint();
});
}, std::plus<uint64_t>());
});
cf::get_all_index_summary_off_heap_memory_used.set(r, [] (std::unique_ptr<request> req) {
//TBD
unimplemented();
return make_ready_future<json::json_return_type>(0);
cf::get_all_index_summary_off_heap_memory_used.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, uint64_t(0), [] (column_family& cf) {
return std::accumulate(cf.get_sstables()->begin(), cf.get_sstables()->end(), uint64_t(0), [](uint64_t s, auto& sst) {
return sst.second->get_summary().memory_footprint();
});
}, std::plus<uint64_t>());
});
cf::get_compression_metadata_off_heap_memory_used.set(r, [] (std::unique_ptr<request> req) {
@@ -589,11 +591,16 @@ void set_column_family(http_context& ctx, routes& r) {
return make_ready_future<json::json_return_type>(0);
});
cf::get_true_snapshots_size.set(r, [] (std::unique_ptr<request> req) {
//TBD
// FIXME
//auto id = get_uuid(req->param["name"], ctx.db.local());
return make_ready_future<json::json_return_type>(0);
cf::get_true_snapshots_size.set(r, [&ctx] (std::unique_ptr<request> req) {
auto uuid = get_uuid(req->param["name"], ctx.db.local());
return ctx.db.local().find_column_family(uuid).get_snapshot_details().then([](
const std::unordered_map<sstring, column_family::snapshot_details>& sd) {
int64_t res = 0;
for (auto i : sd) {
res += i.second.total;
}
return make_ready_future<json::json_return_type>(res);
});
});
cf::get_all_true_snapshots_size.set(r, [] (std::unique_ptr<request> req) {
@@ -616,25 +623,25 @@ void set_column_family(http_context& ctx, routes& r) {
});
cf::get_row_cache_hit.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], 0, [](const column_family& cf) {
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](const column_family& cf) {
return cf.get_row_cache().stats().hits;
}, std::plus<int64_t>());
});
cf::get_all_row_cache_hit.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, 0, [](const column_family& cf) {
return map_reduce_cf(ctx, int64_t(0), [](const column_family& cf) {
return cf.get_row_cache().stats().hits;
}, std::plus<int64_t>());
});
cf::get_row_cache_miss.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, req->param["name"], 0, [](const column_family& cf) {
return map_reduce_cf(ctx, req->param["name"], int64_t(0), [](const column_family& cf) {
return cf.get_row_cache().stats().misses;
}, std::plus<int64_t>());
});
cf::get_all_row_cache_miss.set(r, [&ctx] (std::unique_ptr<request> req) {
return map_reduce_cf(ctx, 0, [](const column_family& cf) {
return map_reduce_cf(ctx, int64_t(0), [](const column_family& cf) {
return cf.get_row_cache().stats().misses;
}, std::plus<int64_t>());

View File

@@ -21,12 +21,13 @@
#include "compaction_manager.hh"
#include "api/api-doc/compaction_manager.json.hh"
#include "db/system_keyspace.hh"
namespace api {
using namespace scollectd;
namespace cm = httpd::compaction_manager_json;
using namespace json;
static future<json::json_return_type> get_cm_stats(http_context& ctx,
int64_t compaction_manager::stats::*f) {
@@ -38,29 +39,38 @@ static future<json::json_return_type> get_cm_stats(http_context& ctx,
}
void set_compaction_manager(http_context& ctx, routes& r) {
cm::get_compactions.set(r, [] (std::unique_ptr<request> req) {
//TBD
unimplemented();
std::vector<cm::jsonmap> map;
return make_ready_future<json::json_return_type>(map);
});
cm::get_compactions.set(r, [&ctx] (std::unique_ptr<request> req) {
return ctx.db.map_reduce0([](database& db) {
std::vector<cm::summary> summaries;
const compaction_manager& cm = db.get_compaction_manager();
cm::get_compaction_summary.set(r, [] (std::unique_ptr<request> req) {
//TBD
unimplemented();
std::vector<sstring> res;
return make_ready_future<json::json_return_type>(res);
for (const auto& c : cm.get_compactions()) {
cm::summary s;
s.ks = c->ks;
s.cf = c->cf;
s.unit = "keys";
s.task_type = "compaction";
s.completed = c->total_keys_written;
s.total = c->total_partitions;
summaries.push_back(std::move(s));
}
return summaries;
}, std::vector<cm::summary>(), concat<cm::summary>).then([](const std::vector<cm::summary>& res) {
return make_ready_future<json::json_return_type>(res);
});
});
cm::force_user_defined_compaction.set(r, [] (std::unique_ptr<request> req) {
//TBD
unimplemented();
return make_ready_future<json::json_return_type>("");
// FIXME
warn(unimplemented::cause::API);
return make_ready_future<json::json_return_type>(json_void());
});
cm::stop_compaction.set(r, [] (std::unique_ptr<request> req) {
//TBD
unimplemented();
// FIXME
warn(unimplemented::cause::API);
return make_ready_future<json::json_return_type>("");
});
@@ -81,14 +91,42 @@ void set_compaction_manager(http_context& ctx, routes& r) {
cm::get_bytes_compacted.set(r, [] (std::unique_ptr<request> req) {
//TBD
unimplemented();
// FIXME
warn(unimplemented::cause::API);
return make_ready_future<json::json_return_type>(0);
});
cm::get_compaction_history.set(r, [] (std::unique_ptr<request> req) {
return db::system_keyspace::get_compaction_history().then([] (std::vector<db::system_keyspace::compaction_history_entry> history) {
std::vector<cm::history> res;
res.reserve(history.size());
for (auto& entry : history) {
cm::history h;
h.id = entry.id.to_sstring();
h.ks = std::move(entry.ks);
h.cf = std::move(entry.cf);
h.compacted_at = entry.compacted_at;
h.bytes_in = entry.bytes_in;
h.bytes_out = entry.bytes_out;
for (auto it : entry.rows_merged) {
httpd::compaction_manager_json::row_merged e;
e.key = it.first;
e.value = it.second;
h.rows_merged.push(std::move(e));
}
res.push_back(std::move(h));
}
return make_ready_future<json::json_return_type>(res);
});
});
cm::get_compaction_info.set(r, [] (std::unique_ptr<request> req) {
//TBD
unimplemented();
std::vector<cm::history> res;
// FIXME
warn(unimplemented::cause::API);
std::vector<cm::compaction_info> res;
return make_ready_future<json::json_return_type>(res);
});

View File

@@ -22,15 +22,33 @@
#include "failure_detector.hh"
#include "api/api-doc/failure_detector.json.hh"
#include "gms/failure_detector.hh"
#include "gms/application_state.hh"
#include "gms/gossiper.hh"
namespace api {
namespace fd = httpd::failure_detector_json;
void set_failure_detector(http_context& ctx, routes& r) {
fd::get_all_endpoint_states.set(r, [](std::unique_ptr<request> req) {
return gms::get_all_endpoint_states().then([](const sstring& str) {
return make_ready_future<json::json_return_type>(str);
});
std::vector<fd::endpoint_state> res;
for (auto i : gms::get_local_gossiper().endpoint_state_map) {
fd::endpoint_state val;
val.addrs = boost::lexical_cast<std::string>(i.first);
val.is_alive = i.second.is_alive();
val.generation = i.second.get_heart_beat_state().get_generation();
val.version = i.second.get_heart_beat_state().get_heart_beat_version();
val.update_time = i.second.get_update_timestamp().time_since_epoch().count();
for (auto a : i.second.get_application_state_map()) {
fd::version_value version_val;
// We return the enum index and not it's name to stay compatible to origin
// method that the state index are static but the name can be changed.
version_val.application_state = static_cast<std::underlying_type<gms::application_state>::type>(a.first);
version_val.value = a.second.value;
val.application_state.push(version_val);
}
res.push_back(val);
}
return make_ready_future<json::json_return_type>(res);
});
fd::get_up_endpoint_count.set(r, [](std::unique_ptr<request> req) {

View File

@@ -119,6 +119,10 @@ void set_messaging_service(http_context& ctx, routes& r) {
return c.sent_messages;
}));
get_version.set(r, [](const_req req) {
return net::get_local_messaging_service().get_raw_version(req.get_query_param("addr"));
});
get_dropped_messages_by_ver.set(r, [](std::unique_ptr<request> req) {
shared_ptr<std::vector<uint64_t>> map = make_shared<std::vector<uint64_t>>(num_verb, 0);

View File

@@ -89,7 +89,7 @@ void set_storage_service(http_context& ctx, routes& r) {
});
ss::get_token_endpoint.set(r, [] (const_req req) {
auto token_to_ep = service::get_local_storage_service().get_token_metadata().get_token_to_endpoint();
auto token_to_ep = service::get_local_storage_service().get_token_to_endpoint_map();
std::vector<storage_service_json::mapper> res;
return map_to_key_value(token_to_ep, res);
});
@@ -169,8 +169,14 @@ void set_storage_service(http_context& ctx, routes& r) {
ss::get_load_map.set(r, [] (std::unique_ptr<request> req) {
return service::get_local_storage_service().get_load_map().then([] (auto&& load_map) {
std::vector<ss::mapper> res;
return make_ready_future<json::json_return_type>(map_to_key_value(load_map, res));
std::vector<ss::double_mapper> res;
for (auto i : load_map) {
ss::double_mapper val;
val.key = i.first;
val.value = i.second;
res.push_back(val);
}
return make_ready_future<json::json_return_type>(res);
});
});
@@ -312,18 +318,14 @@ void set_storage_service(http_context& ctx, routes& r) {
ss::repair_async.set(r, [&ctx](std::unique_ptr<request> req) {
// Currently, we get all the repair options encoded in a single
// "options" option, and split it into a map using the "," and ":"
// delimiters. TODO: consider whether it makes more sense to just
// take all the query parameters as this map and pass it to the repair
// function.
static std::vector<sstring> options = {"primaryRange", "parallelism", "incremental",
"jobThreads", "ranges", "columnFamilies", "dataCenters", "hosts", "trace"};
std::unordered_map<sstring, sstring> options_map;
for (auto s : split(req->get_query_param("options"), ",")) {
auto kv = split(s, ":");
if (kv.size() != 2) {
throw httpd::bad_param_exception("malformed async repair options");
for (auto o : options) {
auto s = req->get_query_param(o);
if (s != "") {
options_map[o] = s;
}
options_map.emplace(std::move(kv[0]), std::move(kv[1]));
}
// The repair process is asynchronous: repair_start only starts it and
@@ -415,15 +417,18 @@ void set_storage_service(http_context& ctx, routes& r) {
});
ss::get_drain_progress.set(r, [](std::unique_ptr<request> req) {
//TBD
unimplemented();
return make_ready_future<json::json_return_type>("");
return service::get_storage_service().map_reduce(adder<service::storage_service::drain_progress>(), [] (auto& ss) {
return ss.get_drain_progress();
}).then([] (auto&& progress) {
auto progress_str = sprint("Drained %s/%s ColumnFamilies", progress.remaining_cfs, progress.total_cfs);
return make_ready_future<json::json_return_type>(std::move(progress_str));
});
});
ss::drain.set(r, [](std::unique_ptr<request> req) {
//TBD
unimplemented();
return make_ready_future<json::json_return_type>(json_void());
return service::get_local_storage_service().drain().then([] {
return make_ready_future<json::json_return_type>(json_void());
});
});
ss::truncate.set(r, [&ctx](std::unique_ptr<request> req) {
//TBD

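For reference, the repair_async hunk above replaces parsing of a single "options" query parameter (a `k1:v1,k2:v2` string split on "," and ":") with one query parameter per option. A self-contained sketch of the old-style parsing, including the malformed-input check; this is a hypothetical helper, not the actual API code:

```cpp
#include <sstream>
#include <stdexcept>
#include <string>
#include <unordered_map>

// Split a "k1:v1,k2:v2" options string into a map, throwing on any entry
// that does not contain exactly one ':' (mirrors the kv.size() != 2 check).
std::unordered_map<std::string, std::string>
parse_repair_options(const std::string& options) {
    std::unordered_map<std::string, std::string> out;
    std::istringstream ss(options);
    std::string item;
    while (std::getline(ss, item, ',')) {
        auto pos = item.find(':');
        if (pos == std::string::npos ||
            item.find(':', pos + 1) != std::string::npos) {
            throw std::invalid_argument("malformed async repair options");
        }
        out.emplace(item.substr(0, pos), item.substr(pos + 1));
    }
    return out;
}
```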
View File

@@ -302,6 +302,12 @@ public:
bool operator==(const atomic_cell_or_collection& other) const {
return _data == other._data;
}
void linearize() {
_data.linearize();
}
void unlinearize() {
_data.scatter();
}
friend std::ostream& operator<<(std::ostream&, const atomic_cell_or_collection&);
};

View File

@@ -33,8 +33,10 @@
*
*/
class bytes_ostream {
public:
using size_type = bytes::size_type;
using value_type = bytes::value_type;
private:
static_assert(sizeof(value_type) == 1, "value_type is assumed to be one byte long");
struct chunk {
// FIXME: group fragment pointers to reduce pointer chasing when packetizing
@@ -117,13 +119,13 @@ private:
};
}
public:
bytes_ostream()
bytes_ostream() noexcept
: _begin()
, _current(nullptr)
, _size(0)
{ }
bytes_ostream(bytes_ostream&& o)
bytes_ostream(bytes_ostream&& o) noexcept
: _begin(std::move(o._begin))
, _current(o._current)
, _size(o._size)
@@ -148,7 +150,7 @@ public:
return *this;
}
bytes_ostream& operator=(bytes_ostream&& o) {
bytes_ostream& operator=(bytes_ostream&& o) noexcept {
_size = o._size;
_begin = std::move(o._begin);
_current = o._current;

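Marking `bytes_ostream`'s default and move operations `noexcept`, as the hunk above does, matters beyond exception safety: standard containers such as `std::vector` only move (rather than copy) elements on reallocation when the element's move constructor is `noexcept`. A minimal illustration with a hypothetical type:

```cpp
#include <type_traits>
#include <vector>

// Hypothetical type mirroring the bytes_ostream change: noexcept default
// construction, move construction, and move assignment.
struct buffer {
    std::vector<char> data;
    buffer() noexcept = default;
    buffer(buffer&&) noexcept = default;
    buffer& operator=(buffer&&) noexcept = default;
};

// With this in place, vector<buffer> moves rather than copies on growth.
static_assert(std::is_nothrow_move_constructible<buffer>::value,
              "vector<buffer> reallocation will move, not copy");
```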
View File

@@ -82,6 +82,12 @@ public:
}
return caching_options(k, r);
}
bool operator==(const caching_options& other) const {
return _key_cache == other._key_cache && _row_cache == other._row_cache;
}
bool operator!=(const caching_options& other) const {
return !(*this == other);
}
};

View File

@@ -68,7 +68,7 @@ public:
, _byte_order_equal(std::all_of(_types.begin(), _types.end(), [] (auto t) {
return t->is_byte_order_equal();
}))
, _byte_order_comparable(_types.size() == 1 && _types[0]->is_byte_order_comparable())
, _byte_order_comparable(!is_prefixable && _types.size() == 1 && _types[0]->is_byte_order_comparable())
, _is_reversed(_types.size() == 1 && _types[0]->is_reversed())
{ }
@@ -278,10 +278,10 @@ public:
});
}
bytes from_string(sstring_view s) {
throw std::runtime_error("not implemented");
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
}
sstring to_string(const bytes& b) {
throw std::runtime_error("not implemented");
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
}
// Returns true iff the given prefix has no missing components
bool is_full(bytes_view v) const {

View File

@@ -114,6 +114,14 @@ public:
}
return opts;
}
bool operator==(const compression_parameters& other) const {
return _compressor == other._compressor
&& _chunk_length == other._chunk_length
&& _crc_check_chance == other._crc_check_chance;
}
bool operator!=(const compression_parameters& other) const {
return !(*this == other);
}
private:
void validate_options(const std::map<sstring, sstring>& options) {
// currently, there are no options specific to a particular compressor

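Both `caching_options` and `compression_parameters` gain the canonical equality pair in these hunks: a memberwise `operator==` plus an `operator!=` defined as its negation, so the two can never disagree. The idiom in isolation, on a hypothetical struct:

```cpp
#include <string>

// Hypothetical struct showing the memberwise ==, negated != idiom.
struct params {
    std::string compressor;
    int chunk_length;

    bool operator==(const params& other) const {
        return compressor == other.compressor
            && chunk_length == other.chunk_length;
    }
    // Defined in terms of == so the two operators stay consistent.
    bool operator!=(const params& other) const {
        return !(*this == other);
    }
};
```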
View File

@@ -782,40 +782,25 @@ commitlog_total_space_in_mb: -1
# the request scheduling. Currently the only valid option is keyspace.
# request_scheduler_id: keyspace
# Enable or disable inter-node encryption
# Default settings are TLS v1, RSA 1024-bit keys (it is imperative that
# users generate their own keys) TLS_RSA_WITH_AES_128_CBC_SHA as the cipher
# suite for authentication, key exchange and encryption of the actual data transfers.
# Use the DHE/ECDHE ciphers if running in FIPS 140 compliant mode.
# NOTE: No custom encryption options are enabled at the moment
# Enable or disable inter-node encryption.
# You must also generate keys and provide the appropriate key and trust store locations and passwords.
# No custom encryption options are currently enabled. The available options are:
#
# The available internode options are : all, none, dc, rack
#
# If set to dc cassandra will encrypt the traffic between the DCs
# If set to rack cassandra will encrypt the traffic between the racks
#
# The passwords used in these options must match the passwords used when generating
# the keystore and truststore. For instructions on generating these files, see:
# http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore
# If set to dc scylla will encrypt the traffic between the DCs
# If set to rack scylla will encrypt the traffic between the racks
#
# server_encryption_options:
# internode_encryption: none
# keystore: conf/.keystore
# keystore_password: cassandra
# truststore: conf/.truststore
# truststore_password: cassandra
# More advanced defaults below:
# protocol: TLS
# algorithm: SunX509
# store_type: JKS
# cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
# require_client_auth: false
# certificate: conf/scylla.crt
# keyfile: conf/scylla.key
# truststore: <none, use system trust>
# enable or disable client/server encryption.
# client_encryption_options:
# enabled: false
# keystore: conf/.keystore
# keystore_password: cassandra
# certificate: conf/scylla.crt
# keyfile: conf/scylla.key
# require_client_auth: false
# Set truststore and truststore_password if require_client_auth is true
@@ -839,3 +824,17 @@ commitlog_total_space_in_mb: -1
# reducing overhead from the TCP protocol itself, at the cost of increasing
# latency if you block for cross-datacenter responses.
# inter_dc_tcp_nodelay: false
# Relaxation of environment checks.
#
# Scylla places certain requirements on its environment. If these requirements are
# not met, performance and reliability can be degraded.
#
# These requirements include:
# - A filesystem with good support for asynchronous I/O (AIO). Currently,
# this means XFS.
#
# false: strict environment checks are in place; do not start if they are not met.
# true: relaxed environment checks; performance and reliability may degrade.
#
# developer_mode: false
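For example, to run on a filesystem without full AIO support (say, ext4 on a development box), the new knob can be uncommented and enabled, at the cost of the performance and reliability guarantees noted above. A sketch; the flag name comes from the diff itself:

```yaml
# scylla.yaml — relax environment checks on a dev machine
developer_mode: true
```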

View File

@@ -183,6 +183,7 @@ scylla_tests = [
'tests/managed_vector_test',
'tests/crc_test',
'tests/flush_queue_test',
'tests/dynamic_bitset_test',
]
apps = [
@@ -280,6 +281,8 @@ scylla_core = (['database.cc',
'cql3/statements/schema_altering_statement.cc',
'cql3/statements/ks_prop_defs.cc',
'cql3/statements/modification_statement.cc',
'cql3/statements/parsed_statement.cc',
'cql3/statements/property_definitions.cc',
'cql3/statements/update_statement.cc',
'cql3/statements/delete_statement.cc',
'cql3/statements/batch_statement.cc',
@@ -339,6 +342,7 @@ scylla_core = (['database.cc',
'utils/rate_limiter.cc',
'utils/compaction_manager.cc',
'utils/file_lock.cc',
'utils/dynamic_bitset.cc',
'gms/version_generator.cc',
'gms/versioned_value.cc',
'gms/gossiper.cc',
@@ -482,6 +486,7 @@ tests_not_using_seastar_test_framework = set([
'tests/crc_test',
'tests/perf/perf_sstable',
'tests/managed_vector_test',
'tests/dynamic_bitset_test',
])
for t in tests_not_using_seastar_test_framework:
@@ -498,7 +503,7 @@ deps['tests/sstable_test'] += ['tests/sstable_datafile_test.cc']
deps['tests/bytes_ostream_test'] = ['tests/bytes_ostream_test.cc']
deps['tests/UUID_test'] = ['utils/UUID_gen.cc', 'tests/UUID_test.cc']
deps['tests/murmur_hash_test'] = ['bytes.cc', 'utils/murmur_hash.cc', 'tests/murmur_hash_test.cc']
deps['tests/allocation_strategy_test'] = ['tests/allocation_strategy_test.cc', 'utils/logalloc.cc', 'log.cc']
deps['tests/allocation_strategy_test'] = ['tests/allocation_strategy_test.cc', 'utils/logalloc.cc', 'log.cc', 'utils/dynamic_bitset.cc']
warnings = [
'-Wno-mismatched-tags', # clang-only

View File

@@ -856,7 +856,7 @@ dropIndexStatement returns [DropIndexStatement expr]
* TRUNCATE <CF>;
*/
truncateStatement returns [::shared_ptr<truncate_statement> stmt]
: K_TRUNCATE cf=columnFamilyName { $stmt = ::make_shared<truncate_statement>(cf); }
: K_TRUNCATE (K_COLUMNFAMILY)? cf=columnFamilyName { $stmt = ::make_shared<truncate_statement>(cf); }
;
#if 0

View File

@@ -55,14 +55,11 @@ namespace cql3 {
* Represents an identifier for a CQL column definition.
* TODO: should support a light-weight mode without text representation for when not interned
*/
class column_identifier final : public selection::selectable /* implements IMeasurableMemory*/ {
class column_identifier final : public selection::selectable {
public:
bytes bytes_;
private:
sstring _text;
#if 0
private static final long EMPTY_SIZE = ObjectSizes.measure(new ColumnIdentifier("", true));
#endif
public:
column_identifier(sstring raw_text, bool keep_case);
@@ -83,20 +80,6 @@ public:
}
#if 0
public long unsharedHeapSize()
{
return EMPTY_SIZE
+ ObjectSizes.sizeOnHeapOf(bytes)
+ ObjectSizes.sizeOf(text);
}
public long unsharedHeapSizeExcludingData()
{
return EMPTY_SIZE
+ ObjectSizes.sizeOnHeapExcludingData(bytes)
+ ObjectSizes.sizeOf(text);
}
public ColumnIdentifier clone(AbstractAllocator allocator)
{
return new ColumnIdentifier(allocator.clone(bytes), text);

View File

@@ -114,30 +114,26 @@ maps::literal::validate_assignable_to(database& db, const sstring& keyspace, col
assignment_testable::test_result
maps::literal::test_assignment(database& db, const sstring& keyspace, ::shared_ptr<column_specification> receiver) {
throw std::runtime_error("not implemented");
#if 0
if (!(receiver.type instanceof MapType))
return AssignmentTestable.TestResult.NOT_ASSIGNABLE;
if (!dynamic_pointer_cast<const map_type_impl>(receiver->type)) {
return assignment_testable::test_result::NOT_ASSIGNABLE;
}
// If there are no elements, we can't say it's an exact match (an empty map is fundamentally polymorphic).
if (entries.isEmpty())
return AssignmentTestable.TestResult.WEAKLY_ASSIGNABLE;
ColumnSpecification keySpec = Maps.keySpecOf(receiver);
ColumnSpecification valueSpec = Maps.valueSpecOf(receiver);
if (entries.empty()) {
return assignment_testable::test_result::WEAKLY_ASSIGNABLE;
}
auto key_spec = maps::key_spec_of(*receiver);
auto value_spec = maps::value_spec_of(*receiver);
// It's an exact match if all are exact match, but is not assignable as soon as any is non assignable.
AssignmentTestable.TestResult res = AssignmentTestable.TestResult.EXACT_MATCH;
for (Pair<Term.Raw, Term.Raw> entry : entries)
{
AssignmentTestable.TestResult t1 = entry.left.testAssignment(keyspace, keySpec);
AssignmentTestable.TestResult t2 = entry.right.testAssignment(keyspace, valueSpec);
if (t1 == AssignmentTestable.TestResult.NOT_ASSIGNABLE || t2 == AssignmentTestable.TestResult.NOT_ASSIGNABLE)
return AssignmentTestable.TestResult.NOT_ASSIGNABLE;
if (t1 != AssignmentTestable.TestResult.EXACT_MATCH || t2 != AssignmentTestable.TestResult.EXACT_MATCH)
res = AssignmentTestable.TestResult.WEAKLY_ASSIGNABLE;
auto res = assignment_testable::test_result::EXACT_MATCH;
for (auto entry : entries) {
auto t1 = entry.first->test_assignment(db, keyspace, key_spec);
auto t2 = entry.second->test_assignment(db, keyspace, value_spec);
if (t1 == assignment_testable::test_result::NOT_ASSIGNABLE || t2 == assignment_testable::test_result::NOT_ASSIGNABLE)
return assignment_testable::test_result::NOT_ASSIGNABLE;
if (t1 != assignment_testable::test_result::EXACT_MATCH || t2 != assignment_testable::test_result::EXACT_MATCH)
res = assignment_testable::test_result::WEAKLY_ASSIGNABLE;
}
return res;
#endif
}
sstring

View File

@@ -199,13 +199,7 @@ public:
}
virtual shared_ptr<operation> prepare(database& db, const sstring& keyspace, const column_definition& receiver);
#if 0
protected String toString(ColumnSpecification column)
{
return String.format("%s[%s] = %s", column.name, selector, value);
}
#endif
virtual bool is_compatible_with(shared_ptr<raw_update> other) override;
};
@@ -218,13 +212,6 @@ public:
virtual shared_ptr<operation> prepare(database& db, const sstring& keyspace, const column_definition& receiver) override;
#if 0
protected String toString(ColumnSpecification column)
{
return String.format("%s = %s + %s", column.name, column.name, value);
}
#endif
virtual bool is_compatible_with(shared_ptr<raw_update> other) override;
};
@@ -237,13 +224,6 @@ public:
virtual shared_ptr<operation> prepare(database& db, const sstring& keyspace, const column_definition& receiver) override;
#if 0
protected String toString(ColumnSpecification column)
{
return String.format("%s = %s - %s", column.name, column.name, value);
}
#endif
virtual bool is_compatible_with(shared_ptr<raw_update> other) override;
};
@@ -256,12 +236,6 @@ public:
virtual shared_ptr<operation> prepare(database& db, const sstring& keyspace, const column_definition& receiver) override;
#if 0
protected String toString(ColumnSpecification column)
{
return String.format("%s = %s - %s", column.name, value, column.name);
}
#endif
virtual bool is_compatible_with(shared_ptr<raw_update> other) override;
};

View File

@@ -178,7 +178,7 @@ query_processor::prepare(const std::experimental::string_view& query_string, con
query_processor::get_stored_prepared_statement(const std::experimental::string_view& query_string, const sstring& keyspace, bool for_thrift)
{
if (for_thrift) {
throw std::runtime_error("not implemented");
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
#if 0
Integer thriftStatementId = computeThriftId(queryString, keyspace);
ParsedStatement.Prepared existing = thriftPreparedStatements.get(thriftStatementId);
@@ -209,7 +209,7 @@ query_processor::store_prepared_statement(const std::experimental::string_view&
MAX_CACHE_PREPARED_MEMORY));
#endif
if (for_thrift) {
throw std::runtime_error("not implemented");
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
#if 0
Integer statementId = computeThriftId(queryString, keyspace);
thriftPreparedStatements.put(statementId, prepared);
@@ -334,6 +334,9 @@ query_options query_processor::make_internal_options(
future<::shared_ptr<untyped_result_set>> query_processor::execute_internal(
const std::experimental::string_view& query_string,
const std::initializer_list<data_value>& values) {
if (log.is_enabled(logging::log_level::trace)) {
log.trace("execute_internal: \"{}\" ({})", query_string, ::join(", ", values));
}
auto p = prepare_internal(query_string);
auto opts = make_internal_options(p, values);
return do_with(std::move(opts),

View File

@@ -374,7 +374,7 @@ public:
}
virtual std::vector<bytes_opt> bounds(statements::bound b, const query_options& options) const override {
throw std::runtime_error("not implemented");
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
#if 0
return Composites.toByteBuffers(boundsAsComposites(b, options));
#endif

View File

@@ -41,13 +41,13 @@ public:
::shared_ptr<primary_key_restrictions<T>> do_merge_to(schema_ptr schema, ::shared_ptr<restriction> restriction) const {
if (restriction->is_multi_column()) {
throw std::runtime_error("not implemented");
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
}
return ::make_shared<single_column_primary_key_restrictions<T>>(schema)->merge_to(schema, restriction);
}
::shared_ptr<primary_key_restrictions<T>> merge_to(schema_ptr schema, ::shared_ptr<restriction> restriction) override {
if (restriction->is_multi_column()) {
throw std::runtime_error("not implemented");
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
}
if (restriction->is_on_token()) {
return static_pointer_cast<token_restriction>(restriction);

View File

@@ -384,7 +384,11 @@ void result_set_builder::visitor::accept_new_row(
_builder.add(_partition_key[def->component_index()]);
break;
case column_kind::clustering_key:
_builder.add(_clustering_key[def->component_index()]);
if (_clustering_key.size() > def->component_index()) {
_builder.add(_clustering_key[def->component_index()]);
} else {
_builder.add({});
}
break;
case column_kind::regular_column:
add_value(*def, row_iterator);

View File

@@ -159,7 +159,7 @@ protected:
virtual shared_ptr<restrictions::restriction> new_contains_restriction(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names,
bool is_key) override {
throw std::runtime_error("not implemented");
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
#if 0
ColumnDefinition columnDef = toColumnDefinition(schema, entity);
Term term = toTerm(toReceivers(schema, columnDef), value, schema.ksName, bound_names);

View File

@@ -322,7 +322,7 @@ public:
virtual future<shared_ptr<transport::messages::result_message>> execute_internal(
distributed<service::storage_proxy>& proxy,
service::query_state& query_state, const query_options& options) override {
throw "not implemented";
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
#if 0
assert !hasConditions;
for (IMutation mutation : getMutations(BatchQueryOptions.withoutPerStatementVariables(options), true, queryState.getTimestamp()))

View File

@@ -45,6 +45,14 @@ namespace cql3 {
namespace statements {
delete_statement::delete_statement(statement_type type, uint32_t bound_terms, schema_ptr s, std::unique_ptr<attributes> attrs)
: modification_statement{type, bound_terms, std::move(s), std::move(attrs)}
{ }
bool delete_statement::require_full_clustering_key() const {
return false;
}
void delete_statement::add_update_for_key(mutation& m, const exploded_clustering_prefix& prefix, const update_parameters& params) {
if (_column_operations.empty()) {
m.partition().apply_delete(*s, prefix, params.make_tombstone());
@@ -96,5 +104,17 @@ delete_statement::parsed::prepare_internal(database& db, schema_ptr schema, ::sh
return stmt;
}
delete_statement::parsed::parsed(::shared_ptr<cf_name> name,
::shared_ptr<attributes::raw> attrs,
std::vector<::shared_ptr<operation::raw_deletion>> deletions,
std::vector<::shared_ptr<relation>> where_clause,
conditions_vector conditions,
bool if_exists)
: modification_statement::parsed(std::move(name), std::move(attrs), std::move(conditions), false, if_exists)
, _deletions(std::move(deletions))
, _where_clause(std::move(where_clause))
{ }
}
}

View File

@@ -55,13 +55,9 @@ namespace statements {
*/
class delete_statement : public modification_statement {
public:
delete_statement(statement_type type, uint32_t bound_terms, schema_ptr s, std::unique_ptr<attributes> attrs)
: modification_statement{type, bound_terms, std::move(s), std::move(attrs)}
{ }
delete_statement(statement_type type, uint32_t bound_terms, schema_ptr s, std::unique_ptr<attributes> attrs);
virtual bool require_full_clustering_key() const override {
return false;
}
virtual bool require_full_clustering_key() const override;
virtual void add_update_for_key(mutation& m, const exploded_clustering_prefix& prefix, const update_parameters& params) override;
@@ -94,11 +90,7 @@ public:
std::vector<::shared_ptr<operation::raw_deletion>> deletions,
std::vector<::shared_ptr<relation>> where_clause,
conditions_vector conditions,
bool if_exists)
: modification_statement::parsed(std::move(name), std::move(attrs), std::move(conditions), false, if_exists)
, _deletions(std::move(deletions))
, _where_clause(std::move(where_clause))
{ }
bool if_exists);
protected:
virtual ::shared_ptr<modification_statement> prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs);

View File

@@ -71,6 +71,81 @@ operator<<(std::ostream& out, modification_statement::statement_type t) {
return out;
}
modification_statement::modification_statement(statement_type type_, uint32_t bound_terms, schema_ptr schema_, std::unique_ptr<attributes> attrs_)
: type{type_}
, _bound_terms{bound_terms}
, s{schema_}
, attrs{std::move(attrs_)}
, _column_operations{}
{ }
bool modification_statement::uses_function(const sstring& ks_name, const sstring& function_name) const {
if (attrs->uses_function(ks_name, function_name)) {
return true;
}
for (auto&& e : _processed_keys) {
auto r = e.second;
if (r && r->uses_function(ks_name, function_name)) {
return true;
}
}
for (auto&& operation : _column_operations) {
if (operation && operation->uses_function(ks_name, function_name)) {
return true;
}
}
for (auto&& condition : _column_conditions) {
if (condition && condition->uses_function(ks_name, function_name)) {
return true;
}
}
for (auto&& condition : _static_conditions) {
if (condition && condition->uses_function(ks_name, function_name)) {
return true;
}
}
return false;
}
uint32_t modification_statement::get_bound_terms() {
return _bound_terms;
}
sstring modification_statement::keyspace() const {
return s->ks_name();
}
sstring modification_statement::column_family() const {
return s->cf_name();
}
bool modification_statement::is_counter() const {
return s->is_counter();
}
int64_t modification_statement::get_timestamp(int64_t now, const query_options& options) const {
return attrs->get_timestamp(now, options);
}
bool modification_statement::is_timestamp_set() const {
return attrs->is_timestamp_set();
}
gc_clock::duration modification_statement::get_time_to_live(const query_options& options) const {
return gc_clock::duration(attrs->get_time_to_live(options));
}
void modification_statement::check_access(const service::client_state& state) {
warn(unimplemented::cause::PERMISSIONS);
#if 0
state.hasColumnFamilyAccess(keyspace(), columnFamily(), Permission.MODIFY);
// CAS updates can be used to simulate a SELECT query, so should require Permission.SELECT as well.
if (hasConditions())
state.hasColumnFamilyAccess(keyspace(), columnFamily(), Permission.SELECT);
#endif
}
future<std::vector<mutation>>
modification_statement::get_mutations(distributed<service::storage_proxy>& proxy, const query_options& options, bool local, int64_t now) {
auto keys = make_lw_shared(build_partition_keys(options));
@@ -549,6 +624,63 @@ bool modification_statement::depends_on_column_family(const sstring& cf_name) co
return column_family() == cf_name;
}
void modification_statement::add_operation(::shared_ptr<operation> op) {
if (op->column.is_static()) {
_sets_static_columns = true;
} else {
_sets_regular_columns = true;
}
_column_operations.push_back(std::move(op));
}
void modification_statement::add_condition(::shared_ptr<column_condition> cond) {
if (cond->column.is_static()) {
_sets_static_columns = true;
_static_conditions.emplace_back(std::move(cond));
} else {
_sets_regular_columns = true;
_column_conditions.emplace_back(std::move(cond));
}
}
void modification_statement::set_if_not_exist_condition() {
_if_not_exists = true;
}
bool modification_statement::has_if_not_exist_condition() const {
return _if_not_exists;
}
void modification_statement::set_if_exist_condition() {
_if_exists = true;
}
bool modification_statement::has_if_exist_condition() const {
return _if_exists;
}
bool modification_statement::requires_read() {
return std::any_of(_column_operations.begin(), _column_operations.end(), [] (auto&& op) {
return op->requires_read();
});
}
bool modification_statement::has_conditions() {
return _if_not_exists || _if_exists || !_column_conditions.empty() || !_static_conditions.empty();
}
void modification_statement::validate_where_clause_for_conditions() {
// no-op by default
}
modification_statement::parsed::parsed(::shared_ptr<cf_name> name, ::shared_ptr<attributes::raw> attrs, conditions_vector conditions, bool if_not_exists, bool if_exists)
: cf_statement{std::move(name)}
, _attrs{std::move(attrs)}
, _conditions{std::move(conditions)}
, _if_not_exists{if_not_exists}
, _if_exists{if_exists}
{ }
}
}

View File

@@ -107,84 +107,29 @@ private:
};
public:
modification_statement(statement_type type_, uint32_t bound_terms, schema_ptr schema_, std::unique_ptr<attributes> attrs_)
: type{type_}
, _bound_terms{bound_terms}
, s{schema_}
, attrs{std::move(attrs_)}
, _column_operations{}
{ }
modification_statement(statement_type type_, uint32_t bound_terms, schema_ptr schema_, std::unique_ptr<attributes> attrs_);
virtual bool uses_function(const sstring& ks_name, const sstring& function_name) const override {
if (attrs->uses_function(ks_name, function_name)) {
return true;
}
for (auto&& e : _processed_keys) {
auto r = e.second;
if (r && r->uses_function(ks_name, function_name)) {
return true;
}
}
for (auto&& operation : _column_operations) {
if (operation && operation->uses_function(ks_name, function_name)) {
return true;
}
}
for (auto&& condition : _column_conditions) {
if (condition && condition->uses_function(ks_name, function_name)) {
return true;
}
}
for (auto&& condition : _static_conditions) {
if (condition && condition->uses_function(ks_name, function_name)) {
return true;
}
}
return false;
}
virtual bool uses_function(const sstring& ks_name, const sstring& function_name) const override;
virtual bool require_full_clustering_key() const = 0;
virtual void add_update_for_key(mutation& m, const exploded_clustering_prefix& prefix, const update_parameters& params) = 0;
virtual uint32_t get_bound_terms() override {
return _bound_terms;
}
virtual uint32_t get_bound_terms() override;
virtual sstring keyspace() const {
return s->ks_name();
}
virtual sstring keyspace() const;
virtual sstring column_family() const {
return s->cf_name();
}
virtual sstring column_family() const;
virtual bool is_counter() const {
return s->is_counter();
}
virtual bool is_counter() const;
int64_t get_timestamp(int64_t now, const query_options& options) const {
return attrs->get_timestamp(now, options);
}
int64_t get_timestamp(int64_t now, const query_options& options) const;
bool is_timestamp_set() const {
return attrs->is_timestamp_set();
}
bool is_timestamp_set() const;
gc_clock::duration get_time_to_live(const query_options& options) const {
return gc_clock::duration(attrs->get_time_to_live(options));
}
gc_clock::duration get_time_to_live(const query_options& options) const;
virtual void check_access(const service::client_state& state) override {
warn(unimplemented::cause::PERMISSIONS);
#if 0
state.hasColumnFamilyAccess(keyspace(), columnFamily(), Permission.MODIFY);
// CAS updates can be used to simulate a SELECT query, so should require Permission.SELECT as well.
if (hasConditions())
state.hasColumnFamilyAccess(keyspace(), columnFamily(), Permission.SELECT);
#endif
}
virtual void check_access(const service::client_state& state) override;
void validate(distributed<service::storage_proxy>&, const service::client_state& state) override;
@@ -192,14 +137,7 @@ public:
virtual bool depends_on_column_family(const sstring& cf_name) const override;
void add_operation(::shared_ptr<operation> op) {
if (op->column.is_static()) {
_sets_static_columns = true;
} else {
_sets_regular_columns = true;
}
_column_operations.push_back(std::move(op));
}
void add_operation(::shared_ptr<operation> op);
#if 0
public Iterable<ColumnDefinition> getColumnsWithConditions()
@@ -212,31 +150,15 @@ public:
}
#endif
public:
void add_condition(::shared_ptr<column_condition> cond) {
if (cond->column.is_static()) {
_sets_static_columns = true;
_static_conditions.emplace_back(std::move(cond));
} else {
_sets_regular_columns = true;
_column_conditions.emplace_back(std::move(cond));
}
}
void add_condition(::shared_ptr<column_condition> cond);
void set_if_not_exist_condition() {
_if_not_exists = true;
}
void set_if_not_exist_condition();
bool has_if_not_exist_condition() const {
return _if_not_exists;
}
bool has_if_not_exist_condition() const;
void set_if_exist_condition() {
_if_exists = true;
}
void set_if_exist_condition();
bool has_if_exist_condition() const {
return _if_exists;
}
bool has_if_exist_condition() const;
private:
void add_key_values(const column_definition& def, ::shared_ptr<restrictions::restriction> values);
@@ -254,11 +176,7 @@ protected:
const column_definition* get_first_empty_key();
public:
bool requires_read() {
return std::any_of(_column_operations.begin(), _column_operations.end(), [] (auto&& op) {
return op->requires_read();
});
}
bool requires_read();
protected:
future<update_parameters::prefetched_rows_type> read_required_rows(
@@ -269,9 +187,7 @@ protected:
db::consistency_level cl);
public:
bool has_conditions() {
return _if_not_exists || _if_exists || !_column_conditions.empty() || !_static_conditions.empty();
}
bool has_conditions();
virtual future<::shared_ptr<transport::messages::result_message>>
execute(distributed<service::storage_proxy>& proxy, service::query_state& qs, const query_options& options) override;
@@ -428,9 +344,7 @@ protected:
* processed to check that they are compatible.
* @throws InvalidRequestException
*/
virtual void validate_where_clause_for_conditions() {
// no-op by default
}
virtual void validate_where_clause_for_conditions();
public:
class parsed : public cf_statement {
@@ -443,13 +357,7 @@ public:
const bool _if_not_exists;
const bool _if_exists;
protected:
parsed(::shared_ptr<cf_name> name, ::shared_ptr<attributes::raw> attrs, conditions_vector conditions, bool if_not_exists, bool if_exists)
: cf_statement{std::move(name)}
, _attrs{std::move(attrs)}
, _conditions{std::move(conditions)}
, _if_not_exists{if_not_exists}
, _if_exists{if_exists}
{ }
parsed(::shared_ptr<cf_name> name, ::shared_ptr<attributes::raw> attrs, conditions_vector conditions, bool if_not_exists, bool if_exists);
public:
virtual ::shared_ptr<parsed_statement::prepared> prepare(database& db) override;


@@ -0,0 +1,83 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright 2014 Cloudius Systems
*
* Modified by Cloudius Systems
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "cql3/statements/parsed_statement.hh"
namespace cql3 {
namespace statements {
parsed_statement::~parsed_statement()
{ }
shared_ptr<variable_specifications> parsed_statement::get_bound_variables() {
return _variables;
}
// Used by the parser and preparable statement
void parsed_statement::set_bound_variables(const std::vector<::shared_ptr<column_identifier>>& bound_names) {
_variables = ::make_shared<variable_specifications>(bound_names);
}
bool parsed_statement::uses_function(const sstring& ks_name, const sstring& function_name) const {
return false;
}
parsed_statement::prepared::prepared(::shared_ptr<cql_statement> statement_, std::vector<::shared_ptr<column_specification>> bound_names_)
: statement(std::move(statement_))
, bound_names(std::move(bound_names_))
{ }
parsed_statement::prepared::prepared(::shared_ptr<cql_statement> statement_, const variable_specifications& names)
: prepared(statement_, names.get_specifications())
{ }
parsed_statement::prepared::prepared(::shared_ptr<cql_statement> statement_, variable_specifications&& names)
: prepared(statement_, std::move(names).get_specifications())
{ }
parsed_statement::prepared::prepared(::shared_ptr<cql_statement>&& statement_)
: prepared(statement_, std::vector<::shared_ptr<column_specification>>())
{ }
}
}


@@ -60,47 +60,29 @@ private:
::shared_ptr<variable_specifications> _variables;
public:
virtual ~parsed_statement()
{ }
virtual ~parsed_statement();
shared_ptr<variable_specifications> get_bound_variables() {
return _variables;
}
shared_ptr<variable_specifications> get_bound_variables();
// Used by the parser and preparable statement
void set_bound_variables(const std::vector<::shared_ptr<column_identifier>>& bound_names)
{
_variables = ::make_shared<variable_specifications>(bound_names);
}
void set_bound_variables(const std::vector<::shared_ptr<column_identifier>>& bound_names);
class prepared {
public:
const ::shared_ptr<cql_statement> statement;
const std::vector<::shared_ptr<column_specification>> bound_names;
prepared(::shared_ptr<cql_statement> statement_, std::vector<::shared_ptr<column_specification>> bound_names_)
: statement(std::move(statement_))
, bound_names(std::move(bound_names_))
{ }
prepared(::shared_ptr<cql_statement> statement_, std::vector<::shared_ptr<column_specification>> bound_names_);
prepared(::shared_ptr<cql_statement> statement_, const variable_specifications& names)
: prepared(statement_, names.get_specifications())
{ }
prepared(::shared_ptr<cql_statement> statement_, const variable_specifications& names);
prepared(::shared_ptr<cql_statement> statement_, variable_specifications&& names)
: prepared(statement_, std::move(names).get_specifications())
{ }
prepared(::shared_ptr<cql_statement> statement_, variable_specifications&& names);
prepared(::shared_ptr<cql_statement>&& statement_)
: prepared(statement_, std::vector<::shared_ptr<column_specification>>())
{ }
prepared(::shared_ptr<cql_statement>&& statement_);
};
virtual ::shared_ptr<prepared> prepare(database& db) = 0;
virtual bool uses_function(const sstring& ks_name, const sstring& function_name) const {
return false;
}
virtual bool uses_function(const sstring& ks_name, const sstring& function_name) const;
};
}


@@ -0,0 +1,186 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/*
* Copyright 2015 Cloudius Systems
*
* Modified by Cloudius Systems
*/
/*
* This file is part of Scylla.
*
* Scylla is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Scylla is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with Scylla. If not, see <http://www.gnu.org/licenses/>.
*/
#include "cql3/statements/property_definitions.hh"
namespace cql3 {
namespace statements {
property_definitions::property_definitions()
: _properties{}
{ }
void property_definitions::add_property(const sstring& name, sstring value) {
auto it = _properties.find(name);
if (it != _properties.end()) {
throw exceptions::syntax_exception(sprint("Multiple definition for property '%s'", name));
}
_properties.emplace(name, value);
}
void property_definitions::add_property(const sstring& name, const std::map<sstring, sstring>& value) {
auto it = _properties.find(name);
if (it != _properties.end()) {
throw exceptions::syntax_exception(sprint("Multiple definition for property '%s'", name));
}
_properties.emplace(name, value);
}
void property_definitions::validate(const std::set<sstring>& keywords, const std::set<sstring>& obsolete) {
for (auto&& kv : _properties) {
auto&& name = kv.first;
if (keywords.count(name)) {
continue;
}
if (obsolete.count(name)) {
#if 0
logger.warn("Ignoring obsolete property {}", name);
#endif
} else {
throw exceptions::syntax_exception(sprint("Unknown property '%s'", name));
}
}
}
std::experimental::optional<sstring> property_definitions::get_simple(const sstring& name) const {
auto it = _properties.find(name);
if (it == _properties.end()) {
return std::experimental::nullopt;
}
try {
return boost::any_cast<sstring>(it->second);
} catch (const boost::bad_any_cast& e) {
throw exceptions::syntax_exception(sprint("Invalid value for property '%s'. It should be a string", name));
}
}
std::experimental::optional<std::map<sstring, sstring>> property_definitions::get_map(const sstring& name) const {
auto it = _properties.find(name);
if (it == _properties.end()) {
return std::experimental::nullopt;
}
try {
return boost::any_cast<std::map<sstring, sstring>>(it->second);
} catch (const boost::bad_any_cast& e) {
throw exceptions::syntax_exception(sprint("Invalid value for property '%s'. It should be a map.", name));
}
}
bool property_definitions::has_property(const sstring& name) const {
return _properties.find(name) != _properties.end();
}
sstring property_definitions::get_string(sstring key, sstring default_value) const {
auto value = get_simple(key);
if (value) {
return value.value();
} else {
return default_value;
}
}
// Return a property value, typed as a Boolean
bool property_definitions::get_boolean(sstring key, bool default_value) const {
auto value = get_simple(key);
if (value) {
std::string s{value.value()};
std::transform(s.begin(), s.end(), s.begin(), ::tolower);
return s == "1" || s == "true" || s == "yes";
} else {
return default_value;
}
}
// Return a property value, typed as a double
double property_definitions::get_double(sstring key, double default_value) const {
auto value = get_simple(key);
return to_double(key, value, default_value);
}
double property_definitions::to_double(sstring key, std::experimental::optional<sstring> value, double default_value) {
if (value) {
auto val = value.value();
try {
return std::stod(val);
} catch (const std::exception& e) {
throw exceptions::syntax_exception(sprint("Invalid double value %s for '%s'", val, key));
}
} else {
return default_value;
}
}
// Return a property value, typed as an Integer
int32_t property_definitions::get_int(sstring key, int32_t default_value) const {
auto value = get_simple(key);
return to_int(key, value, default_value);
}
int32_t property_definitions::to_int(sstring key, std::experimental::optional<sstring> value, int32_t default_value) {
if (value) {
auto val = value.value();
try {
return std::stoi(val);
} catch (const std::exception& e) {
throw exceptions::syntax_exception(sprint("Invalid integer value %s for '%s'", val, key));
}
} else {
return default_value;
}
}
long property_definitions::to_long(sstring key, std::experimental::optional<sstring> value, long default_value) {
if (value) {
auto val = value.value();
try {
return std::stol(val);
} catch (const std::exception& e) {
throw exceptions::syntax_exception(sprint("Invalid long value %s for '%s'", val, key));
}
} else {
return default_value;
}
}
}
}


@@ -66,141 +66,38 @@ protected:
#endif
std::unordered_map<sstring, boost::any> _properties;
property_definitions()
: _properties{}
{ }
property_definitions();
public:
void add_property(const sstring& name, sstring value) {
auto it = _properties.find(name);
if (it != _properties.end()) {
throw exceptions::syntax_exception(sprint("Multiple definition for property '%s'", name));
}
_properties.emplace(name, value);
}
void add_property(const sstring& name, sstring value);
void add_property(const sstring& name, const std::map<sstring, sstring>& value) {
auto it = _properties.find(name);
if (it != _properties.end()) {
throw exceptions::syntax_exception(sprint("Multiple definition for property '%s'", name));
}
_properties.emplace(name, value);
}
void add_property(const sstring& name, const std::map<sstring, sstring>& value);
void validate(const std::set<sstring>& keywords, const std::set<sstring>& obsolete);
void validate(const std::set<sstring>& keywords, const std::set<sstring>& obsolete) {
for (auto&& kv : _properties) {
auto&& name = kv.first;
if (keywords.count(name)) {
continue;
}
if (obsolete.count(name)) {
#if 0
logger.warn("Ignoring obsolete property {}", name);
#endif
} else {
throw exceptions::syntax_exception(sprint("Unknown property '%s'", name));
}
}
}
protected:
std::experimental::optional<sstring> get_simple(const sstring& name) const {
auto it = _properties.find(name);
if (it == _properties.end()) {
return std::experimental::nullopt;
}
try {
return boost::any_cast<sstring>(it->second);
} catch (const boost::bad_any_cast& e) {
throw exceptions::syntax_exception(sprint("Invalid value for property '%s'. It should be a string", name));
}
}
std::experimental::optional<sstring> get_simple(const sstring& name) const;
std::experimental::optional<std::map<sstring, sstring>> get_map(const sstring& name) const;
std::experimental::optional<std::map<sstring, sstring>> get_map(const sstring& name) const {
auto it = _properties.find(name);
if (it == _properties.end()) {
return std::experimental::nullopt;
}
try {
return boost::any_cast<std::map<sstring, sstring>>(it->second);
} catch (const boost::bad_any_cast& e) {
throw exceptions::syntax_exception(sprint("Invalid value for property '%s'. It should be a map.", name));
}
}
public:
bool has_property(const sstring& name) const {
return _properties.find(name) != _properties.end();
}
bool has_property(const sstring& name) const;
sstring get_string(sstring key, sstring default_value) const {
auto value = get_simple(key);
if (value) {
return value.value();
} else {
return default_value;
}
}
sstring get_string(sstring key, sstring default_value) const;
// Return a property value, typed as a Boolean
bool get_boolean(sstring key, bool default_value) const {
auto value = get_simple(key);
if (value) {
std::string s{value.value()};
std::transform(s.begin(), s.end(), s.begin(), ::tolower);
return s == "1" || s == "true" || s == "yes";
} else {
return default_value;
}
}
bool get_boolean(sstring key, bool default_value) const;
// Return a property value, typed as a double
double get_double(sstring key, double default_value) const {
auto value = get_simple(key);
return to_double(key, value, default_value);
}
double get_double(sstring key, double default_value) const;
static double to_double(sstring key, std::experimental::optional<sstring> value, double default_value) {
if (value) {
auto val = value.value();
try {
return std::stod(val);
} catch (const std::exception& e) {
throw exceptions::syntax_exception(sprint("Invalid double value %s for '%s'", val, key));
}
} else {
return default_value;
}
}
static double to_double(sstring key, std::experimental::optional<sstring> value, double default_value);
// Return a property value, typed as an Integer
int32_t get_int(sstring key, int32_t default_value) const {
auto value = get_simple(key);
return to_int(key, value, default_value);
}
int32_t get_int(sstring key, int32_t default_value) const;
static int32_t to_int(sstring key, std::experimental::optional<sstring> value, int32_t default_value) {
if (value) {
auto val = value.value();
try {
return std::stoi(val);
} catch (const std::exception& e) {
throw exceptions::syntax_exception(sprint("Invalid integer value %s for '%s'", val, key));
}
} else {
return default_value;
}
}
static int32_t to_int(sstring key, std::experimental::optional<sstring> value, int32_t default_value);
static long to_long(sstring key, std::experimental::optional<sstring> value, long default_value) {
if (value) {
auto val = value.value();
try {
return std::stol(val);
} catch (const std::exception& e) {
throw exceptions::syntax_exception(sprint("Invalid long value %s for '%s'", val, key));
}
} else {
return default_value;
}
}
static long to_long(sstring key, std::experimental::optional<sstring> value, long default_value);
};
}


@@ -54,6 +54,31 @@ namespace statements {
thread_local const shared_ptr<select_statement::parameters> select_statement::_default_parameters = ::make_shared<select_statement::parameters>();
select_statement::parameters::parameters()
: _is_distinct{false}
, _allow_filtering{false}
{ }
select_statement::parameters::parameters(orderings_type orderings,
bool is_distinct,
bool allow_filtering)
: _orderings{std::move(orderings)}
, _is_distinct{is_distinct}
, _allow_filtering{allow_filtering}
{ }
bool select_statement::parameters::is_distinct() {
return _is_distinct;
}
bool select_statement::parameters::allow_filtering() {
return _allow_filtering;
}
select_statement::parameters::orderings_type const& select_statement::parameters::orderings() {
return _orderings;
}
select_statement::select_statement(schema_ptr schema,
uint32_t bound_terms,
::shared_ptr<parameters> parameters,
@@ -115,6 +140,14 @@ bool select_statement::depends_on_column_family(const sstring& cf_name) const {
return column_family() == cf_name;
}
const sstring& select_statement::keyspace() const {
return _schema->ks_name();
}
const sstring& select_statement::column_family() const {
return _schema->cf_name();
}
query::partition_slice
select_statement::make_partition_slice(const query_options& options) {
std::vector<column_id> static_columns;
@@ -318,6 +351,18 @@ shared_ptr<transport::messages::result_message> select_statement::process_result
return ::make_shared<transport::messages::result_message::rows>(std::move(rs));
}
select_statement::raw_statement::raw_statement(::shared_ptr<cf_name> cf_name,
::shared_ptr<parameters> parameters,
std::vector<::shared_ptr<selection::raw_selector>> select_clause,
std::vector<::shared_ptr<relation>> where_clause,
::shared_ptr<term::raw> limit)
: cf_statement(std::move(cf_name))
, _parameters(std::move(parameters))
, _select_clause(std::move(select_clause))
, _where_clause(std::move(where_clause))
, _limit(std::move(limit))
{ }
::shared_ptr<parsed_statement::prepared>
select_statement::raw_statement::prepare(database& db) {
schema_ptr schema = validation::validate_column_family(db, keyspace(), column_family());


@@ -72,20 +72,13 @@ public:
const bool _is_distinct;
const bool _allow_filtering;
public:
parameters()
: _is_distinct{false}
, _allow_filtering{false}
{ }
parameters();
parameters(orderings_type orderings,
bool is_distinct,
bool allow_filtering)
: _orderings{std::move(orderings)}
, _is_distinct{is_distinct}
, _allow_filtering{allow_filtering}
{ }
bool is_distinct() { return _is_distinct; }
bool allow_filtering() { return _allow_filtering; }
orderings_type const& orderings() { return _orderings; }
bool allow_filtering);
bool is_distinct();
bool allow_filtering();
orderings_type const& orderings();
};
private:
static constexpr int DEFAULT_COUNT_PAGE_SIZE = 10000;
@@ -195,13 +188,9 @@ public:
}
#endif
const sstring& keyspace() const {
return _schema->ks_name();
}
const sstring& keyspace() const;
const sstring& column_family() const {
return _schema->cf_name();
}
const sstring& column_family() const;
query::partition_slice make_partition_slice(const query_options& options);
@@ -457,13 +446,7 @@ public:
::shared_ptr<parameters> parameters,
std::vector<::shared_ptr<selection::raw_selector>> select_clause,
std::vector<::shared_ptr<relation>> where_clause,
::shared_ptr<term::raw> limit)
: cf_statement(std::move(cf_name))
, _parameters(std::move(parameters))
, _select_clause(std::move(select_clause))
, _where_clause(std::move(where_clause))
, _limit(std::move(limit))
{ }
::shared_ptr<term::raw> limit);
virtual ::shared_ptr<prepared> prepare(database& db) override;
private:


@@ -48,6 +48,14 @@ namespace cql3 {
namespace statements {
update_statement::update_statement(statement_type type, uint32_t bound_terms, schema_ptr s, std::unique_ptr<attributes> attrs)
: modification_statement{type, bound_terms, std::move(s), std::move(attrs)}
{ }
bool update_statement::require_full_clustering_key() const {
return true;
}
void update_statement::add_update_for_key(mutation& m, const exploded_clustering_prefix& prefix, const update_parameters& params) {
if (s->is_dense()) {
if (!prefix || (prefix.size() == 1 && prefix.components().front().empty())) {
@@ -100,6 +108,16 @@ void update_statement::add_update_for_key(mutation& m, const exploded_clustering
#endif
}
update_statement::parsed_insert::parsed_insert(::shared_ptr<cf_name> name,
::shared_ptr<attributes::raw> attrs,
std::vector<::shared_ptr<column_identifier::raw>> column_names,
std::vector<::shared_ptr<term::raw>> column_values,
bool if_not_exists)
: modification_statement::parsed{std::move(name), std::move(attrs), conditions_vector{}, if_not_exists, false}
, _column_names{std::move(column_names)}
, _column_values{std::move(column_values)}
{ }
::shared_ptr<modification_statement>
update_statement::parsed_insert::prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs)
@@ -148,6 +166,16 @@ update_statement::parsed_insert::prepare_internal(database& db, schema_ptr schem
return stmt;
}
update_statement::parsed_update::parsed_update(::shared_ptr<cf_name> name,
::shared_ptr<attributes::raw> attrs,
std::vector<std::pair<::shared_ptr<column_identifier::raw>, ::shared_ptr<operation::raw_update>>> updates,
std::vector<relation_ptr> where_clause,
conditions_vector conditions)
: modification_statement::parsed(std::move(name), std::move(attrs), std::move(conditions), false, false)
, _updates(std::move(updates))
, _where_clause(std::move(where_clause))
{ }
::shared_ptr<modification_statement>
update_statement::parsed_update::prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs)


@@ -64,14 +64,9 @@ public:
private static final Constants.Value EMPTY = new Constants.Value(ByteBufferUtil.EMPTY_BYTE_BUFFER);
#endif
update_statement(statement_type type, uint32_t bound_terms, schema_ptr s, std::unique_ptr<attributes> attrs)
: modification_statement{type, bound_terms, std::move(s), std::move(attrs)}
{ }
update_statement(statement_type type, uint32_t bound_terms, schema_ptr s, std::unique_ptr<attributes> attrs);
private:
virtual bool require_full_clustering_key() const override {
return true;
}
virtual bool require_full_clustering_key() const override;
virtual void add_update_for_key(mutation& m, const exploded_clustering_prefix& prefix, const update_parameters& params) override;
public:
@@ -92,11 +87,7 @@ public:
::shared_ptr<attributes::raw> attrs,
std::vector<::shared_ptr<column_identifier::raw>> column_names,
std::vector<::shared_ptr<term::raw>> column_values,
bool if_not_exists)
: modification_statement::parsed{std::move(name), std::move(attrs), conditions_vector{}, if_not_exists, false}
, _column_names{std::move(column_names)}
, _column_values{std::move(column_values)}
{ }
bool if_not_exists);
virtual ::shared_ptr<modification_statement> prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs) override;
@@ -122,11 +113,7 @@ public:
::shared_ptr<attributes::raw> attrs,
std::vector<std::pair<::shared_ptr<column_identifier::raw>, ::shared_ptr<operation::raw_update>>> updates,
std::vector<relation_ptr> where_clause,
conditions_vector conditions)
: modification_statement::parsed(std::move(name), std::move(attrs), std::move(conditions), false, false)
, _updates(std::move(updates))
, _where_clause(std::move(where_clause))
{ }
conditions_vector conditions);
protected:
virtual ::shared_ptr<modification_statement> prepare_internal(database& db, schema_ptr schema,
::shared_ptr<variable_specifications> bound_names, std::unique_ptr<attributes> attrs);


@@ -224,14 +224,6 @@ public:
// We don't "need" that override but it saves us the allocation of a Value object if used
return options.make_temporary(_type->build_value(bind_internal(options)));
}
#if 0
@Override
public String toString()
{
return tupleToString(elements);
}
#endif
};
/**


@@ -88,14 +88,6 @@ public:
}
_specs[bind_index] = spec;
}
#if 0
@Override
public String toString()
{
return Arrays.toString(specs);
}
#endif
};
}


@@ -416,6 +416,23 @@ static std::vector<sstring> parse_fname(sstring filename) {
return comps;
}
static bool belongs_to_current_shard(const schema& s, const partition_key& first, const partition_key& last) {
auto key_shard = [&s] (const partition_key& pk) {
auto token = dht::global_partitioner().get_token(s, pk);
return dht::shard_of(token);
};
auto s1 = key_shard(first);
auto s2 = key_shard(last);
auto me = engine().cpu_id();
return (s1 <= me) && (me <= s2);
}
static bool belongs_to_current_shard(const schema& s, range<partition_key> r) {
assert(r.start());
assert(r.end());
return belongs_to_current_shard(s, r.start()->value(), r.end()->value());
}
future<sstables::entry_descriptor> column_family::probe_file(sstring sstdir, sstring fname) {
using namespace sstables;
@@ -432,19 +449,29 @@ future<sstables::entry_descriptor> column_family::probe_file(sstring sstdir, sst
update_sstables_known_generation(comps.generation);
assert(_sstables->count(comps.generation) == 0);
auto sst = std::make_unique<sstables::sstable>(_schema->ks_name(), _schema->cf_name(), sstdir, comps.generation, comps.version, comps.format);
auto fut = sst->load();
return std::move(fut).then([this, sst = std::move(sst)] () mutable {
add_sstable(std::move(*sst));
return make_ready_future<>();
}).then_wrapped([fname, comps = std::move(comps)] (future<> f) {
auto fut = sstable::get_sstable_key_range(*_schema, _schema->ks_name(), _schema->cf_name(), sstdir, comps.generation, comps.version, comps.format);
return std::move(fut).then([this, sstdir = std::move(sstdir), comps] (range<partition_key> r) {
// Checks whether or not sstable belongs to current shard.
if (!belongs_to_current_shard(*_schema, std::move(r))) {
sstable::mark_sstable_for_deletion(_schema->ks_name(), _schema->cf_name(), sstdir, comps.generation, comps.version, comps.format);
return make_ready_future<>();
}
auto sst = std::make_unique<sstables::sstable>(_schema->ks_name(), _schema->cf_name(), sstdir, comps.generation, comps.version, comps.format);
auto fut = sst->load();
return std::move(fut).then([this, sst = std::move(sst)] () mutable {
add_sstable(std::move(*sst));
return make_ready_future<>();
});
}).then_wrapped([fname, comps] (future<> f) {
try {
f.get();
} catch (malformed_sstable_exception& e) {
dblog.error("malformed sstable {}: {}. Refusing to boot", fname, e.what());
throw;
} catch(...) {
dblog.error("Unrecognized error while processing {}: Refusing to boot", fname);
dblog.error("Unrecognized error while processing {}: {}. Refusing to boot",
fname, std::current_exception());
throw;
}
return make_ready_future<entry_descriptor>(std::move(comps));
@@ -462,19 +489,6 @@ void column_family::add_sstable(sstables::sstable&& sstable) {
}
void column_family::add_sstable(lw_shared_ptr<sstables::sstable> sstable) {
auto key_shard = [this] (const partition_key& pk) {
auto token = dht::global_partitioner().get_token(*_schema, pk);
return dht::shard_of(token);
};
auto s1 = key_shard(sstable->get_first_partition_key(*_schema));
auto s2 = key_shard(sstable->get_last_partition_key(*_schema));
auto me = engine().cpu_id();
auto included = (s1 <= me) && (me <= s2);
if (!included) {
dblog.info("sstable {} not relevant for this shard, ignoring", sstable->get_filename());
sstable->mark_for_deletion();
return;
}
auto generation = sstable->generation();
// allow in-progress reads to continue using old list
_sstables = make_lw_shared<sstable_list>(*_sstables);
@@ -718,21 +732,22 @@ column_family::compact_sstables(sstables::compaction_descriptor descriptor) {
std::unordered_set<sstables::shared_sstable> s(
sstables_to_compact->begin(), sstables_to_compact->end());
for (const auto& oldtab : *current_sstables) {
// Checks if oldtab is a sstable not being compacted.
if (!s.count(oldtab.second)) {
update_stats_for_new_sstable(oldtab.second->data_size());
_sstables->emplace(oldtab.first, oldtab.second);
}
}
for (const auto& newtab : *new_tables) {
// FIXME: rename the new sstable(s). Verify a rename doesn't cause
// problems for the sstable object.
update_stats_for_new_sstable(newtab.second->data_size());
_sstables->emplace(newtab.first, newtab.second);
}
for (const auto& newtab : *new_tables) {
// FIXME: rename the new sstable(s). Verify a rename doesn't cause
// problems for the sstable object.
update_stats_for_new_sstable(newtab.second->data_size());
_sstables->emplace(newtab.first, newtab.second);
}
for (const auto& oldtab : *sstables_to_compact) {
oldtab->mark_for_deletion();
}
for (const auto& oldtab : *sstables_to_compact) {
oldtab->mark_for_deletion();
}
});
});
@@ -745,7 +760,13 @@ column_family::load_new_sstables(std::vector<sstables::entry_descriptor> new_tab
return sst->load().then([this, sst] {
return sst->mutate_sstable_level(0);
}).then([this, sst] {
this->add_sstable(sst);
auto first = sst->get_first_partition_key(*_schema);
auto last = sst->get_last_partition_key(*_schema);
if (belongs_to_current_shard(*_schema, first, last)) {
this->add_sstable(sst);
} else {
sst->mark_for_deletion();
}
return make_ready_future<>();
});
});
@@ -837,58 +858,77 @@ future<> column_family::populate(sstring sstdir) {
auto verifier = make_lw_shared<std::unordered_map<unsigned long, status>>();
auto descriptor = make_lw_shared<sstable_descriptor>();
return lister::scan_dir(sstdir, { directory_entry_type::regular }, [this, sstdir, verifier, descriptor] (directory_entry de) {
// FIXME: The secondary indexes are in this level, but with a directory type, (starting with ".")
return probe_file(sstdir, de.name).then([verifier, descriptor] (auto entry) {
if (verifier->count(entry.generation)) {
if (verifier->at(entry.generation) == status::has_toc_file) {
if (entry.component == sstables::sstable::component_type::TOC) {
throw sstables::malformed_sstable_exception("Invalid State encountered. TOC file already processed");
return do_with(std::vector<future<>>(), [this, sstdir, verifier, descriptor] (std::vector<future<>>& futures) {
return lister::scan_dir(sstdir, { directory_entry_type::regular }, [this, sstdir, verifier, descriptor, &futures] (directory_entry de) {
// FIXME: The secondary indexes are in this level, but with a directory type, (starting with ".")
auto f = probe_file(sstdir, de.name).then([verifier, descriptor] (auto entry) {
if (verifier->count(entry.generation)) {
if (verifier->at(entry.generation) == status::has_toc_file) {
if (entry.component == sstables::sstable::component_type::TOC) {
throw sstables::malformed_sstable_exception("Invalid State encountered. TOC file already processed");
} else if (entry.component == sstables::sstable::component_type::TemporaryTOC) {
throw sstables::malformed_sstable_exception("Invalid State encountered. Temporary TOC file found after TOC file was processed");
}
} else if (entry.component == sstables::sstable::component_type::TOC) {
verifier->at(entry.generation) = status::has_toc_file;
} else if (entry.component == sstables::sstable::component_type::TemporaryTOC) {
throw sstables::malformed_sstable_exception("Invalid State encountered. Temporary TOC file found after TOC file was processed");
verifier->at(entry.generation) = status::has_temporary_toc_file;
}
} else if (entry.component == sstables::sstable::component_type::TOC) {
verifier->at(entry.generation) = status::has_toc_file;
} else if (entry.component == sstables::sstable::component_type::TemporaryTOC) {
verifier->at(entry.generation) = status::has_temporary_toc_file;
}
} else {
if (entry.component == sstables::sstable::component_type::TOC) {
verifier->emplace(entry.generation, status::has_toc_file);
} else if (entry.component == sstables::sstable::component_type::TemporaryTOC) {
verifier->emplace(entry.generation, status::has_temporary_toc_file);
} else {
verifier->emplace(entry.generation, status::has_some_file);
if (entry.component == sstables::sstable::component_type::TOC) {
verifier->emplace(entry.generation, status::has_toc_file);
} else if (entry.component == sstables::sstable::component_type::TemporaryTOC) {
verifier->emplace(entry.generation, status::has_temporary_toc_file);
} else {
verifier->emplace(entry.generation, status::has_some_file);
}
}
}
// Retrieve both version and format used for this column family.
if (!descriptor->version) {
descriptor->version = entry.version;
}
if (!descriptor->format) {
descriptor->format = entry.format;
}
});
// push future returned by probe_file into an array of futures,
// so that the supplied callback will not block scan_dir() from
// reading the next entry in the directory.
futures.push_back(std::move(f));
return make_ready_future<>();
}).then([&futures] {
return when_all(futures.begin(), futures.end()).then([] (std::vector<future<>> ret) {
try {
for (auto& f : ret) {
f.get();
}
} catch(...) {
throw;
}
});
}).then([verifier, sstdir, descriptor, this] {
return parallel_for_each(*verifier, [sstdir = std::move(sstdir), descriptor, this] (auto v) {
if (v.second == status::has_temporary_toc_file) {
unsigned long gen = v.first;
assert(descriptor->version);
sstables::sstable::version_types version = descriptor->version.value();
assert(descriptor->format);
sstables::sstable::format_types format = descriptor->format.value();
if (engine().cpu_id() != 0) {
dblog.info("At directory: {}, partial SSTable with generation {} not relevant for this shard, ignoring", sstdir, v.first);
return make_ready_future<>();
}
// shard 0 is the responsible for removing a partial sstable.
return sstables::sstable::remove_sstable_with_temp_toc(_schema->ks_name(), _schema->cf_name(), sstdir, gen, version, format);
} else if (v.second != status::has_toc_file) {
throw sstables::malformed_sstable_exception(sprint("At directory: %s: no TOC found for SSTable with generation %d! Refusing to boot", sstdir, v.first));
}
return make_ready_future<>();
});
});
});
}
@@ -996,7 +1036,7 @@ template <typename Func>
static future<>
do_parse_system_tables(distributed<service::storage_proxy>& proxy, const sstring& _cf_name, Func&& func) {
using namespace db::schema_tables;
static_assert(std::is_same<future<>, std::result_of_t<Func(schema_result::value_type&)>>::value,
static_assert(std::is_same<future<>, std::result_of_t<Func(schema_result_value_type&)>>::value,
"bad Func signature");
@@ -1031,11 +1071,11 @@ do_parse_system_tables(distributed<service::storage_proxy>& proxy, const sstring
future<> database::parse_system_tables(distributed<service::storage_proxy>& proxy) {
using namespace db::schema_tables;
return do_parse_system_tables(proxy, db::schema_tables::KEYSPACES, [this] (schema_result::value_type &v) {
return do_parse_system_tables(proxy, db::schema_tables::KEYSPACES, [this] (schema_result_value_type &v) {
auto ksm = create_keyspace_from_schema_partition(v);
return create_keyspace(ksm);
}).then([&proxy, this] {
return do_parse_system_tables(proxy, db::schema_tables::COLUMNFAMILIES, [this, &proxy] (schema_result::value_type &v) {
return do_parse_system_tables(proxy, db::schema_tables::COLUMNFAMILIES, [this, &proxy] (schema_result_value_type &v) {
return create_tables_from_tables_partition(proxy, v.second).then([this] (std::map<sstring, schema_ptr> tables) {
for (auto& t: tables) {
auto s = t.second;
@@ -1107,7 +1147,7 @@ void database::add_keyspace(sstring name, keyspace k) {
}
void database::update_keyspace(const sstring& name) {
throw std::runtime_error("not implemented");
throw std::runtime_error("update keyspace not implemented");
}
void database::drop_keyspace(const sstring& name) {
@@ -1462,7 +1502,7 @@ column_family::query(const query::read_command& cmd, const std::vector<query::pa
}).finally([lc, this]() mutable {
_stats.reads.mark(lc);
if (lc.is_start()) {
_stats.estimated_read.add(lc.latency_in_nano(), _stats.reads.count);
_stats.estimated_read.add(lc.latency(), _stats.reads.count);
}
});
}
@@ -1476,28 +1516,14 @@ column_family::as_mutation_source() const {
future<lw_shared_ptr<query::result>>
database::query(const query::read_command& cmd, const std::vector<query::partition_range>& ranges) {
static auto make_empty = [] {
return make_ready_future<lw_shared_ptr<query::result>>(make_lw_shared(query::result()));
};
try {
column_family& cf = find_column_family(cmd.cf_id);
return cf.query(cmd, ranges);
} catch (const no_such_column_family&) {
// FIXME: load from sstables
return make_empty();
}
column_family& cf = find_column_family(cmd.cf_id);
return cf.query(cmd, ranges);
}
future<reconcilable_result>
database::query_mutations(const query::read_command& cmd, const query::partition_range& range) {
try {
column_family& cf = find_column_family(cmd.cf_id);
return mutation_query(cf.as_mutation_source(), range, cmd.slice, cmd.row_limit, cmd.timestamp);
} catch (const no_such_column_family&) {
// FIXME: load from sstables
return make_ready_future<reconcilable_result>(reconcilable_result());
}
column_family& cf = find_column_family(cmd.cf_id);
return mutation_query(cf.as_mutation_source(), range, cmd.slice, cmd.row_limit, cmd.timestamp);
}
std::unordered_set<sstring> database::get_initial_tokens() {
@@ -1512,6 +1538,31 @@ std::unordered_set<sstring> database::get_initial_tokens() {
return tokens;
}
std::experimental::optional<gms::inet_address> database::get_replace_address() {
auto& cfg = get_config();
sstring replace_address = cfg.replace_address();
sstring replace_address_first_boot = cfg.replace_address_first_boot();
try {
if (!replace_address.empty()) {
return gms::inet_address(replace_address);
} else if (!replace_address_first_boot.empty()) {
return gms::inet_address(replace_address_first_boot);
}
return std::experimental::nullopt;
} catch (...) {
return std::experimental::nullopt;
}
}
bool database::is_replacing() {
sstring replace_address_first_boot = get_config().replace_address_first_boot();
if (!replace_address_first_boot.empty() && db::system_keyspace::bootstrap_complete()) {
dblog.info("Replace address on first boot requested; this node is already bootstrapped");
return false;
}
return bool(get_replace_address());
}
std::ostream& operator<<(std::ostream& out, const atomic_cell_or_collection& c) {
return out << to_hex(c._data);
}
@@ -1541,8 +1592,7 @@ future<> database::apply_in_memory(const frozen_mutation& m, const db::replay_po
auto& cf = find_column_family(m.column_family_id());
cf.apply(m, rp);
} catch (no_such_column_family&) {
// TODO: log a warning
// FIXME: load keyspace meta-data from storage
dblog.error("Attempting to mutate non-existent table {}", m.column_family_id());
}
return make_ready_future<>();
}
@@ -1899,7 +1949,7 @@ future<> column_family::snapshot(sstring name) {
}
future<bool> column_family::snapshot_exists(sstring tag) {
sstring jsondir = _config.datadir + "/snapshots/";
sstring jsondir = _config.datadir + "/snapshots/" + tag;
return engine().open_directory(std::move(jsondir)).then_wrapped([] (future<file> f) {
try {
f.get0();
@@ -1975,7 +2025,11 @@ future<> column_family::clear_snapshot(sstring tag) {
future<std::unordered_map<sstring, column_family::snapshot_details>> column_family::get_snapshot_details() {
std::unordered_map<sstring, snapshot_details> all_snapshots;
return do_with(std::move(all_snapshots), [this] (auto& all_snapshots) {
return lister::scan_dir(_config.datadir + "/snapshots", { directory_entry_type::directory }, [this, &all_snapshots] (directory_entry de) {
return engine().file_exists(_config.datadir + "/snapshots").then([this, &all_snapshots](bool file_exists) {
if (!file_exists) {
return make_ready_future<>();
}
return lister::scan_dir(_config.datadir + "/snapshots", { directory_entry_type::directory }, [this, &all_snapshots] (directory_entry de) {
auto snapshot_name = de.name;
auto snapshot = _config.datadir + "/snapshots/" + snapshot_name;
all_snapshots.emplace(snapshot_name, snapshot_details());
@@ -2010,6 +2064,7 @@ future<std::unordered_map<sstring, column_family::snapshot_details>> column_fami
});
});
});
});
}).then([&all_snapshots] {
return std::move(all_snapshots);
});


@@ -194,8 +194,7 @@ private:
mutation_source sstables_as_mutation_source();
key_source sstables_as_key_source() const;
partition_presence_checker make_partition_presence_checker(lw_shared_ptr<sstable_list> old_sstables);
// We will use highres because hopefully it won't take more than a few usecs
std::chrono::high_resolution_clock::time_point _sstable_writes_disabled_at;
std::chrono::steady_clock::time_point _sstable_writes_disabled_at;
public:
// Creates a mutation reader which covers all data sources for this column family.
// Caller needs to ensure that column_family remains live (FIXME: relax this).
@@ -216,6 +215,10 @@ public:
return _cache;
}
row_cache& get_row_cache() {
return _cache;
}
logalloc::occupancy_stats occupancy() const;
public:
column_family(schema_ptr schema, config cfg, db::commitlog& cl, compaction_manager&);
@@ -247,7 +250,7 @@ public:
// to call this separately in all shards first, to guarantee that none of them are writing
// new data before you can safely assume that the whole node is disabled.
future<int64_t> disable_sstable_write() {
_sstable_writes_disabled_at = std::chrono::high_resolution_clock::now();
_sstable_writes_disabled_at = std::chrono::steady_clock::now();
return _sstables_lock.write_lock().then([this] {
return make_ready_future<int64_t>((*_sstables->end()).first);
});
@@ -255,10 +258,10 @@ public:
// SSTable writes are now allowed again, and generation is updated to new_generation
// returns the amount of microseconds elapsed since we disabled writes.
std::chrono::high_resolution_clock::duration enable_sstable_write(int64_t new_generation) {
std::chrono::steady_clock::duration enable_sstable_write(int64_t new_generation) {
update_sstables_known_generation(new_generation);
_sstables_lock.write_unlock();
return std::chrono::high_resolution_clock::now() - _sstable_writes_disabled_at;
return std::chrono::steady_clock::now() - _sstable_writes_disabled_at;
}
// Make sure the generation numbers are sequential, starting from "start".
@@ -321,6 +324,10 @@ public:
return _stats;
}
compaction_manager& get_compaction_manager() const {
return _compaction_manager;
}
template<typename Func, typename Result = futurize_t<std::result_of_t<Func()>>>
Result run_with_compaction_disabled(Func && func) {
++_compaction_disabled;
@@ -562,6 +569,9 @@ public:
return _commitlog.get();
}
compaction_manager& get_compaction_manager() {
return _compaction_manager;
}
const compaction_manager& get_compaction_manager() const {
return _compaction_manager;
}
@@ -648,6 +658,8 @@ public:
}
std::unordered_set<sstring> get_initial_tokens();
std::experimental::optional<gms::inet_address> get_replace_address();
bool is_replacing();
};
// FIXME: stub
@@ -662,7 +674,7 @@ column_family::apply(const mutation& m, const db::replay_position& rp) {
seal_on_overflow();
_stats.writes.mark(lc);
if (lc.is_start()) {
_stats.estimated_write.add(lc.latency_in_nano(), _stats.writes.count);
_stats.estimated_write.add(lc.latency(), _stats.writes.count);
}
}
@@ -696,7 +708,7 @@ column_family::apply(const frozen_mutation& m, const db::replay_position& rp) {
seal_on_overflow();
_stats.writes.mark(lc);
if (lc.is_start()) {
_stats.estimated_write.add(lc.latency_in_nano(), _stats.writes.count);
_stats.estimated_write.add(lc.latency(), _stats.writes.count);
}
}


@@ -35,8 +35,8 @@ class column_definition;
// keys.hh
class exploded_clustering_prefix;
class partition_key;
class clustering_key;
class clustering_key_prefix;
using clustering_key = clustering_key_prefix;
// memtable.hh
class memtable;


@@ -56,6 +56,7 @@
#include "unimplemented.hh"
#include "db/config.hh"
#include "gms/failure_detector.hh"
#include "service/storage_service.hh"
static logging::logger logger("batchlog_manager");
@@ -87,10 +88,8 @@ future<> db::batchlog_manager::start() {
);
});
});
_timer.arm(
lowres_clock::now()
+ std::chrono::milliseconds(
service::storage_service::RING_DELAY));
auto ring_delay = service::get_local_storage_service().get_ring_delay();
_timer.arm(lowres_clock::now() + ring_delay);
}
return make_ready_future<>();
}
@@ -115,7 +114,7 @@ mutation db::batchlog_manager::get_batch_log_mutation_for(const std::vector<muta
mutation db::batchlog_manager::get_batch_log_mutation_for(const std::vector<mutation>& mutations, const utils::UUID& id, int32_t version, db_clock::time_point now) {
auto schema = _qp.db().local().find_schema(system_keyspace::NAME, system_keyspace::BATCHLOG);
auto key = partition_key::from_singular(*schema, id);
auto timestamp = db_clock::now_in_usecs();
auto timestamp = api::new_timestamp();
auto data = [this, &mutations] {
std::vector<frozen_mutation> fm(mutations.begin(), mutations.end());
const auto size = std::accumulate(fm.begin(), fm.end(), size_t(0), [](size_t s, auto& m) {


@@ -90,7 +90,7 @@ public:
db::commitlog::config::config(const db::config& cfg)
: commit_log_location(cfg.commitlog_directory())
, commitlog_total_space_in_mb(cfg.commitlog_total_space_in_mb() >= 0 ? cfg.commitlog_total_space_in_mb() : memory::stats().total_memory())
, commitlog_total_space_in_mb(cfg.commitlog_total_space_in_mb() >= 0 ? cfg.commitlog_total_space_in_mb() : memory::stats().total_memory() >> 20)
, commitlog_segment_size_in_mb(cfg.commitlog_segment_size_in_mb())
, commitlog_sync_period_in_ms(cfg.commitlog_sync_batch_window_in_ms())
, mode(cfg.commitlog_sync() == "batch" ? sync_mode::BATCH : sync_mode::PERIODIC)
@@ -281,6 +281,43 @@ private:
* A single commit log file on disk. Manages creation of the file and writing mutations to disk,
* as well as tracking the last mutation position of any "dirty" CFs covered by the segment file. Segment
files are initially allocated to a fixed size and can grow to accommodate a larger value if necessary.
*
* The IO flow is somewhat convoluted and goes something like this:
*
* Mutation path:
* - Adding data to the segment usually writes into the internal buffer
* - On EOB or overflow we issue a write to disk ("cycle").
* - A cycle call will acquire the segment read lock and send the
* buffer to the corresponding position in the file
* - If we are periodic and have crossed a timing threshold, or are running
* in "batch" mode, we might be forced to issue a flush ("sync") after adding data
* - A sync call acquires the write lock, thus locking out writes
* and waiting for pending writes to finish. It then checks the
* high data mark, and issues the actual file flush.
* Note that the write lock is released prior to issuing the
* actual file flush, thus we are allowed to write data to
* positions after a flush point concurrently with a pending flush.
*
* Sync timer:
* - In periodic mode, we try to primarily issue sync calls in
* a timer task issued every N seconds. The timer does the same
* operation as the above described sync, and resets the timeout
* so that mutation path will not trigger syncs and delay.
*
* Note that we do not care which order segment chunks finish writing
* to disk, other than all below a flush point must finish before flushing.
*
* We currently do not wait for flushes to finish before issuing the next
* cycle call ("after" flush point in the file). This might not be optimal.
*
* To close and finish a segment, we first close the gate object that guards
* writing data to it, then flush it fully (including waiting for futures created
* by the timer to run their course), and finally wait for it to
* become "clean", i.e. get notified that all mutations it holds have been
* persisted to sstables elsewhere. Once this is done, we can delete the
* segment. If a segment (object) is deleted without being fully clean, we
* do not remove the file on disk.
*
*/
class db::commitlog::segment: public enable_lw_shared_from_this<segment> {
@@ -370,6 +407,7 @@ public:
void reset_sync_time() {
_sync_time = clock_type::now();
}
// See class comment for info
future<sseg_ptr> sync() {
// Note: this is not a marker for when sync was finished.
// It is when it was initiated
@@ -386,6 +424,7 @@ public:
future<> shutdown() {
return _gate.close();
}
// See class comment for info
future<sseg_ptr> flush(uint64_t pos = 0) {
auto me = shared_from_this();
assert(!me.owned());
@@ -431,6 +470,7 @@ public:
/**
* Send any buffer contents to disk and get a new tmp buffer
*/
// See class comment for info
future<sseg_ptr> cycle(size_t s = 0) {
auto size = clear_buffer_slack();
auto buf = std::move(_buffer);
@@ -1097,7 +1137,7 @@ db::commitlog::commitlog(config cfg)
: _segment_manager(new segment_manager(std::move(cfg))) {
}
db::commitlog::commitlog(commitlog&& v)
db::commitlog::commitlog(commitlog&& v) noexcept
: _segment_manager(std::move(v._segment_manager)) {
}
@@ -1173,10 +1213,11 @@ const db::commitlog::config& db::commitlog::active_config() const {
return _segment_manager->cfg;
}
future<subscription<temporary_buffer<char>, db::replay_position>>
future<std::unique_ptr<subscription<temporary_buffer<char>, db::replay_position>>>
db::commitlog::read_log_file(const sstring& filename, commit_load_reader_func next, position_type off) {
return engine().open_file_dma(filename, open_flags::ro).then([next = std::move(next), off](file f) {
return read_log_file(std::move(f), std::move(next), off);
return std::make_unique<subscription<temporary_buffer<char>, replay_position>>(
read_log_file(std::move(f), std::move(next), off));
});
}
@@ -1192,6 +1233,8 @@ db::commitlog::read_log_file(file f, commit_load_reader_func next, position_type
size_t next = 0;
size_t start_off = 0;
size_t skip_to = 0;
size_t file_size = 0;
size_t corrupt_size = 0;
bool eof = false;
bool header = true;
@@ -1289,7 +1332,11 @@ db::commitlog::read_log_file(file f, commit_load_reader_func next, position_type
auto cs = crc.checksum();
if (cs != checksum) {
throw std::runtime_error("Checksum error in chunk header");
// if a chunk header checksum is broken, we shall just assume that all
// remaining is as well. We cannot trust the "next" pointer, so...
logger.debug("Checksum error in segment chunk at {}.", pos);
corrupt_size += (file_size - pos);
return stop();
}
this->next = next;
@@ -1315,21 +1362,24 @@ db::commitlog::read_log_file(file f, commit_load_reader_func next, position_type
auto size = in.read<uint32_t>();
auto checksum = in.read<uint32_t>();
if (size == 0) {
// special scylla case: zero padding due to dma blocks
auto slack = next - pos;
return skip(slack);
}
crc32_nbo crc;
crc.process(size);
if (size < 3 * sizeof(uint32_t)) {
throw std::runtime_error("Invalid entry size");
if (size < 3 * sizeof(uint32_t) || checksum != crc.checksum()) {
auto slack = next - pos;
if (size != 0) {
logger.debug("Segment entry at {} has broken header. Skipping to next chunk ({} bytes)", rp, slack);
corrupt_size += slack;
}
// size == 0 -> special scylla case: zero padding due to dma blocks
return skip(slack);
}
if (start_off > pos) {
return skip(size - entry_header_size);
}
return fin.read_exactly(size - entry_header_size).then([this, size, checksum, rp](temporary_buffer<char> buf) {
return fin.read_exactly(size - entry_header_size).then([this, size, crc = std::move(crc), rp](temporary_buffer<char> buf) mutable {
advance(buf);
data_input in(buf);
@@ -1338,12 +1388,15 @@ db::commitlog::read_log_file(file f, commit_load_reader_func next, position_type
in.skip(data_size);
auto checksum = in.read<uint32_t>();
crc32_nbo crc;
crc.process(size);
crc.process_bytes(buf.get(), data_size);
if (crc.checksum() != checksum) {
throw std::runtime_error("Checksum error in data entry");
// If we're getting a checksum error here, most likely the rest of
// the file will be corrupt as well. But it does not hurt to retry.
// Just go to the next entry (since "size" in header seemed ok).
logger.debug("Segment entry at {} checksum error. Skipping {} bytes", rp, size);
corrupt_size += size;
return make_ready_future<>();
}
return s.produce(buf.share(0, data_size), rp);
@@ -1351,10 +1404,18 @@ db::commitlog::read_log_file(file f, commit_load_reader_func next, position_type
});
}
future<> read_file() {
return read_header().then(
[this] {
return do_until(std::bind(&work::end_of_file, this), std::bind(&work::read_chunk, this));
});
return f.size().then([this](uint64_t size) {
file_size = size;
}).then([this] {
return read_header().then(
[this] {
return do_until(std::bind(&work::end_of_file, this), std::bind(&work::read_chunk, this));
}).then([this] {
if (corrupt_size > 0) {
throw segment_data_corruption_error("Data corruption", corrupt_size);
}
});
});
}
};
@@ -1382,6 +1443,10 @@ uint64_t db::commitlog::get_completed_tasks() const {
return _segment_manager->totals.allocation_count;
}
uint64_t db::commitlog::get_flush_count() const {
return _segment_manager->totals.flush_count;
}
uint64_t db::commitlog::get_pending_tasks() const {
return _segment_manager->totals.pending_operations;
}


@@ -139,7 +139,7 @@ public:
const uint32_t ver;
};
commitlog(commitlog&&);
commitlog(commitlog&&) noexcept;
~commitlog();
/**
@@ -231,6 +231,7 @@ public:
uint64_t get_total_size() const;
uint64_t get_completed_tasks() const;
uint64_t get_flush_count() const;
uint64_t get_pending_tasks() const;
uint64_t get_num_segments_created() const;
uint64_t get_num_segments_destroyed() const;
@@ -265,8 +266,21 @@ public:
typedef std::function<future<>(temporary_buffer<char>, replay_position)> commit_load_reader_func;
class segment_data_corruption_error: public std::runtime_error {
public:
segment_data_corruption_error(std::string msg, uint64_t s)
: std::runtime_error(msg), _bytes(s) {
}
uint64_t bytes() const {
return _bytes;
}
private:
uint64_t _bytes;
};
static subscription<temporary_buffer<char>, replay_position> read_log_file(file, commit_load_reader_func, position_type = 0);
static future<subscription<temporary_buffer<char>, replay_position>> read_log_file(const sstring&, commit_load_reader_func, position_type = 0);
static future<std::unique_ptr<subscription<temporary_buffer<char>, replay_position>>> read_log_file(
const sstring&, commit_load_reader_func, position_type = 0);
private:
commitlog(config);
};


@@ -69,6 +69,7 @@ public:
uint64_t invalid_mutations = 0;
uint64_t skipped_mutations = 0;
uint64_t applied_mutations = 0;
uint64_t corrupt_bytes = 0;
};
future<> process(stats*, temporary_buffer<char> buf, replay_position rp);
@@ -166,9 +167,16 @@ db::commitlog_replayer::impl::recover(sstring file) {
return db::commitlog::read_log_file(file,
std::bind(&impl::process, this, s.get(), std::placeholders::_1,
std::placeholders::_2), p).then([](auto s) {
auto f = s.done();
auto f = s->done();
return f.finally([s = std::move(s)] {});
}).then([s] {
}).then_wrapped([s](future<> f) {
try {
f.get();
} catch (commitlog::segment_data_corruption_error& e) {
s->corrupt_bytes += e.bytes();
} catch (...) {
throw;
}
return make_ready_future<stats>(*s);
});
}
@@ -233,7 +241,7 @@ db::commitlog_replayer::commitlog_replayer(seastar::sharded<cql3::query_processo
: _impl(std::make_unique<impl>(qp))
{}
db::commitlog_replayer::commitlog_replayer(commitlog_replayer&& r)
db::commitlog_replayer::commitlog_replayer(commitlog_replayer&& r) noexcept
: _impl(std::move(r._impl))
{}
@@ -250,31 +258,32 @@ future<db::commitlog_replayer> db::commitlog_replayer::create_replayer(seastar::
}
future<> db::commitlog_replayer::recover(std::vector<sstring> files) {
logger.info("Replaying {}", files);
return parallel_for_each(files, [this](auto f) {
return this->recover(f).handle_exception([f](auto ep) {
logger.error("Error recovering {}: {}", f, ep);
try {
std::rethrow_exception(ep);
} catch (std::invalid_argument&) {
logger.error("Scylla cannot process {}. Make sure to fully flush all Cassandra commit log files to sstable before migrating.", f);
throw;
} catch (...) {
throw;
}
});
return this->recover(f);
});
}
future<> db::commitlog_replayer::recover(sstring file) {
return _impl->recover(file).then([file](impl::stats stats) {
future<> db::commitlog_replayer::recover(sstring f) {
return _impl->recover(f).then([f](impl::stats stats) {
if (stats.corrupt_bytes != 0) {
logger.warn("Corrupted file: {}. {} bytes skipped.", f, stats.corrupt_bytes);
}
logger.info("Log replay of {} complete, {} replayed mutations ({} invalid, {} skipped)"
, file
, f
, stats.applied_mutations
, stats.invalid_mutations
, stats.skipped_mutations
);
});
}).handle_exception([f](auto ep) {
logger.error("Error recovering {}: {}", f, ep);
try {
std::rethrow_exception(ep);
} catch (std::invalid_argument&) {
logger.error("Scylla cannot process {}. Make sure to fully flush all Cassandra commit log files to sstable before migrating.", f);
throw;
} catch (...) {
throw;
}
});
}


@@ -57,7 +57,7 @@ class commitlog;
class commitlog_replayer {
public:
commitlog_replayer(commitlog_replayer&&);
commitlog_replayer(commitlog_replayer&&) noexcept;
~commitlog_replayer();
static future<commitlog_replayer> create_replayer(seastar::sharded<cql3::query_processor>&);


@@ -117,8 +117,9 @@ template<typename K, typename V>
struct convert<std::unordered_map<K, V>> {
static Node encode(const std::unordered_map<K, V>& rhs) {
Node node(NodeType::Map);
for(typename std::map<K, V>::const_iterator it=rhs.begin();it!=rhs.end();++it)
node.force_insert(it->first, it->second);
for (auto& p : rhs) {
node.force_insert(p.first, p.second);
}
return node;
}
static bool decode(const Node& node, std::unordered_map<K, V>& rhs) {
@@ -413,3 +414,21 @@ future<> db::config::read_from_file(const sstring& filename) {
return read_from_file(std::move(f));
});
}
boost::filesystem::path db::config::get_conf_dir() {
using namespace boost::filesystem;
path confdir;
auto* cd = std::getenv("SCYLLA_CONF");
if (cd != nullptr) {
confdir = path(cd);
} else {
auto* p = std::getenv("SCYLLA_HOME");
if (p != nullptr) {
confdir = path(p);
}
confdir /= "conf";
}
return confdir;
}


@@ -121,23 +121,7 @@ public:
* @return path of the directory where configuration files are located
* according the environment variables definitions.
*/
static boost::filesystem::path get_conf_dir() {
using namespace boost::filesystem;
path confdir;
auto* cd = std::getenv("SCYLLA_CONF");
if (cd != nullptr) {
confdir = path(cd);
} else {
auto* p = std::getenv("SCYLLA_HOME");
if (p != nullptr) {
confdir = path(p);
}
confdir /= "conf";
}
return confdir;
}
static boost::filesystem::path get_conf_dir();
typedef std::unordered_map<sstring, sstring> string_map;
typedef std::vector<sstring> string_list;
@@ -290,7 +274,7 @@ public:
"Related information: Configuring compaction" \
) \
/* Common fault detection setting */ \
val(phi_convict_threshold, uint32_t, 8, Unused, \
val(phi_convict_threshold, uint32_t, 8, Used, \
"Adjusts the sensitivity of the failure detector on an exponential scale. Generally this setting never needs adjusting.\n" \
"Related information: Failure detection and recovery" \
) \
@@ -560,7 +544,7 @@ public:
) \
/* RPC (remote procedure call) settings */ \
/* Settings for configuring and tuning client connections. */ \
val(broadcast_rpc_address, sstring, /* unset */, Unused, \
val(broadcast_rpc_address, sstring, /* unset */, Used, \
"RPC address to broadcast to drivers and other Cassandra nodes. This cannot be set to 0.0.0.0. If blank, it is set to the value of the rpc_address or rpc_interface. If rpc_address or rpc_interface is set to 0.0.0.0, this property must be set.\n" \
) \
val(rpc_port, uint16_t, 9160, Used, \
@@ -682,7 +666,7 @@ public:
val(permissions_update_interval_in_ms, uint32_t, 2000, Unused, \
"Refresh interval for permissions cache (if enabled). After this interval, cache entries become eligible for refresh. On next access, an async reload is scheduled and the old value is returned until it completes. If permissions_validity_in_ms is non-zero, then this property must be non-zero." \
) \
val(server_encryption_options, string_map, /*none*/, Unused, \
val(server_encryption_options, string_map, /*none*/, Used, \
"Enable or disable inter-node encryption. You must also generate keys and provide the appropriate key and trust store locations and passwords. No custom encryption options are currently enabled. The available options are:\n" \
"\n" \
"internode_encryption : (Default: none ) Enable or disable encryption of inter-node communication using the TLS_RSA_WITH_AES_128_CBC_SHA cipher suite for authentication, key exchange, and encryption of data transfers. The available inter-node options are:\n" \
@@ -690,20 +674,9 @@ public:
"\tnone : No encryption.\n" \
"\tdc : Encrypt the traffic between the data centers (server only).\n" \
"\track : Encrypt the traffic between the racks(server only).\n" \
"\tkeystore : (Default: conf/.keystore ) The location of a Java keystore (JKS) suitable for use with Java Secure Socket Extension (JSSE), which is the Java version of the Secure Sockets Layer (SSL), and Transport Layer Security (TLS) protocols. The keystore contains the private key used to encrypt outgoing messages.\n" \
"\tkeystore_password : (Default: cassandra ) Password for the keystore.\n" \
"\ttruststore : (Default: conf/.truststore ) Location of the truststore containing the trusted certificate for authenticating remote servers.\n" \
"\ttruststore_password : (Default: cassandra ) Password for the truststore.\n" \
"\n" \
"The passwords used in these options must match the passwords used when generating the keystore and truststore. For instructions on generating these files, see Creating a Keystore to Use with JSSE.\n" \
"\n" \
"The advanced settings are:\n" \
"\n" \
"\tprotocol : (Default: TLS )\n" \
"\talgorithm : (Default: SunX509 )\n" \
"\tstore_type : (Default: JKS )\n" \
"\tcipher_suites : (Default: TLS_RSA_WITH_AES_128_CBC_SHA , TLS_RSA_WITH_AES_256_CBC_SHA )\n" \
"\trequire_client_auth : (Default: false ) Enables or disables certificate authentication.\n" \
"certificate : (Default: conf/scylla.crt) The location of a PEM-encoded x509 certificate used to identify and encrypt the internode communication.\n" \
"keyfile : (Default: conf/scylla.key) PEM Key file associated with certificate.\n" \
"truststore : (Default: <system truststore> ) Location of the truststore containing the trusted certificate for authenticating remote servers.\n" \
"Related information: Node-to-node encryption" \
) \
val(client_encryption_options, string_map, /*none*/, Unused, \
@@ -743,6 +716,16 @@ public:
val(api_ui_dir, sstring, "swagger-ui/dist/", Used, "The directory location of the API GUI") \
val(api_doc_dir, sstring, "api/api-doc/", Used, "The API definition file directory") \
val(load_balance, sstring, "none", Used, "CQL request load balancing: 'none' or 'round-robin'") \
val(consistent_rangemovement, bool, true, Used, "When set to true, range movements will be consistent. It means: 1) it will refuse to bootstrap a new node if other bootstrapping/leaving/moving nodes are detected. 2) data will be streamed to a new node only from the node which is no longer responsible for the token range. Same as -Dcassandra.consistent.rangemovement in cassandra") \
val(join_ring, bool, true, Used, "When set to true, a node will join the token ring. When set to false, a node will not join the token ring. User can use nodetool join to initiate ring joining later. Same as -Dcassandra.join_ring in cassandra.") \
val(load_ring_state, bool, true, Used, "When set to true, load tokens and host_ids previously saved. Same as -Dcassandra.load_ring_state in cassandra.") \
val(replace_node, sstring, "", Used, "The UUID of the node to replace. Same as -Dcassandra.replace_node in cassandra.") \
val(replace_token, sstring, "", Used, "The tokens of the node to replace. Same as -Dcassandra.replace_token in cassandra.") \
val(replace_address, sstring, "", Used, "The listen_address or broadcast_address of the dead node to replace. Same as -Dcassandra.replace_address.") \
val(replace_address_first_boot, sstring, "", Used, "Like replace_address option, but if the node has been bootstrapped successfully it will be ignored. Same as -Dcassandra.replace_address_first_boot.") \
val(override_decommission, bool, false, Used, "Set true to force a decommissioned node to join the cluster") \
val(ring_delay_ms, uint32_t, 30 * 1000, Used, "Time a node waits to hear from other nodes before joining the ring in milliseconds. Same as -Dcassandra.ring_delay_ms in cassandra.") \
val(developer_mode, bool, false, Used, "Relax environment checks. Setting to true can reduce performance and reliability significantly.") \
/* done! */
#define _make_value_member(name, type, deflt, status, desc, ...) \


@@ -398,18 +398,18 @@ read_schema_for_keyspaces(distributed<service::storage_proxy>& proxy, const sstr
return map_reduce(keyspace_names.begin(), keyspace_names.end(), map, schema_result{}, insert);
}
future<schema_result::value_type>
future<schema_result_value_type>
read_schema_partition_for_keyspace(distributed<service::storage_proxy>& proxy, const sstring& schema_table_name, const sstring& keyspace_name)
{
auto schema = proxy.local().get_db().local().find_schema(system_keyspace::NAME, schema_table_name);
auto keyspace_key = dht::global_partitioner().decorate_key(*schema,
partition_key::from_singular(*schema, keyspace_name));
return db::system_keyspace::query(proxy, schema_table_name, keyspace_key).then([keyspace_name] (auto&& rs) {
return schema_result::value_type{keyspace_name, std::move(rs)};
return schema_result_value_type{keyspace_name, std::move(rs)};
});
}
future<schema_result::value_type>
future<schema_result_value_type>
read_schema_partition_for_table(distributed<service::storage_proxy>& proxy, const sstring& schema_table_name, const sstring& keyspace_name, const sstring& table_name)
{
auto schema = proxy.local().get_db().local().find_schema(system_keyspace::NAME, schema_table_name);
@@ -417,7 +417,7 @@ read_schema_partition_for_table(distributed<service::storage_proxy>& proxy, cons
partition_key::from_singular(*schema, keyspace_name));
auto clustering_range = query::clustering_range(clustering_key_prefix::from_clustering_prefix(*schema, exploded_clustering_prefix({utf8_type->decompose(table_name)})));
return db::system_keyspace::query(proxy, schema_table_name, keyspace_key, clustering_range).then([keyspace_name] (auto&& rs) {
return schema_result::value_type{keyspace_name, std::move(rs)};
return schema_result_value_type{keyspace_name, std::move(rs)};
});
}
@@ -528,7 +528,7 @@ future<> do_merge_schema(distributed<service::storage_proxy>& proxy, std::vector
future<std::set<sstring>> merge_keyspaces(distributed<service::storage_proxy>& proxy, schema_result&& before, schema_result&& after)
{
std::vector<schema_result::value_type> created;
std::vector<schema_result_value_type> created;
std::vector<sstring> altered;
std::set<sstring> dropped;
@@ -552,7 +552,7 @@ future<std::set<sstring>> merge_keyspaces(distributed<service::storage_proxy>& p
for (auto&& key : diff.entries_only_on_right) {
auto&& value = after[key];
if (!value->empty()) {
created.emplace_back(schema_result::value_type{key, std::move(value)});
created.emplace_back(schema_result_value_type{key, std::move(value)});
}
}
for (auto&& key : diff.entries_differing) {
@@ -566,7 +566,7 @@ future<std::set<sstring>> merge_keyspaces(distributed<service::storage_proxy>& p
} else if (!pre->empty()) {
dropped.emplace(keyspace_name);
} else if (!post->empty()) { // a (re)created keyspace
created.emplace_back(schema_result::value_type{key, std::move(post)});
created.emplace_back(schema_result_value_type{key, std::move(post)});
}
}
return do_with(std::move(created), [&proxy, altered = std::move(altered)] (auto& created) {
@@ -899,7 +899,7 @@ std::vector<mutation> make_drop_keyspace_mutations(lw_shared_ptr<keyspace_metada
*
* @param partition Keyspace attributes in serialized form
*/
lw_shared_ptr<keyspace_metadata> create_keyspace_from_schema_partition(const schema_result::value_type& result)
lw_shared_ptr<keyspace_metadata> create_keyspace_from_schema_partition(const schema_result_value_type& result)
{
auto&& rs = result.second;
if (rs->empty()) {
@@ -1269,7 +1269,7 @@ void create_table_from_table_row_and_column_rows(schema_builder& builder, const
} else {
// FIXME:
// is_dense = CFMetaData.calculateIsDense(fullRawComparator, columnDefs);
throw std::runtime_error("not implemented");
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
}
bool is_compound = cell_comparator::check_compound(table_row.get_nonnull<sstring>("comparator"));
@@ -1310,10 +1310,10 @@ void create_table_from_table_row_and_column_rows(schema_builder& builder, const
builder.set_max_compaction_threshold(table_row.get_nonnull<int>("max_compaction_threshold"));
}
#if 0
if (result.has("comment"))
cfm.comment(result.getString("comment"));
#endif
if (table_row.has("comment")) {
builder.set_comment(table_row.get_nonnull<sstring>("comment"));
}
if (table_row.has("memtable_flush_period_in_ms")) {
builder.set_memtable_flush_period(table_row.get_nonnull<int32_t>("memtable_flush_period_in_ms"));
}


@@ -55,6 +55,7 @@ namespace db {
namespace schema_tables {
using schema_result = std::map<sstring, lw_shared_ptr<query::result_set>>;
using schema_result_value_type = std::pair<sstring, lw_shared_ptr<query::result_set>>;
static constexpr auto KEYSPACES = "schema_keyspaces";
static constexpr auto COLUMNFAMILIES = "schema_columnfamilies";
@@ -74,7 +75,7 @@ future<utils::UUID> calculate_schema_digest(distributed<service::storage_proxy>&
future<std::vector<frozen_mutation>> convert_schema_to_mutations(distributed<service::storage_proxy>& proxy);
future<schema_result::value_type>
future<schema_result_value_type>
read_schema_partition_for_keyspace(distributed<service::storage_proxy>& proxy, const sstring& schema_table_name, const sstring& keyspace_name);
future<> merge_schema(distributed<service::storage_proxy>& proxy, std::vector<mutation> mutations);
@@ -89,11 +90,11 @@ std::vector<mutation> make_create_keyspace_mutations(lw_shared_ptr<keyspace_meta
std::vector<mutation> make_drop_keyspace_mutations(lw_shared_ptr<keyspace_metadata> keyspace, api::timestamp_type timestamp);
lw_shared_ptr<keyspace_metadata> create_keyspace_from_schema_partition(const schema_result::value_type& partition);
lw_shared_ptr<keyspace_metadata> create_keyspace_from_schema_partition(const schema_result_value_type& partition);
future<> merge_tables(distributed<service::storage_proxy>& proxy, schema_result&& before, schema_result&& after);
lw_shared_ptr<keyspace_metadata> create_keyspace_from_schema_partition(const schema_result::value_type& partition);
lw_shared_ptr<keyspace_metadata> create_keyspace_from_schema_partition(const schema_result_value_type& partition);
mutation make_create_keyspace_mutation(lw_shared_ptr<keyspace_metadata> keyspace, api::timestamp_type timestamp, bool with_tables_and_types_and_functions = true);


@@ -187,30 +187,6 @@ void db::serializer<partition_key_view>::skip(input& in) {
in.skip(len);
}
template<>
db::serializer<clustering_key_view>::serializer(const clustering_key_view& key)
: _item(key), _size(sizeof(uint16_t) /* size */ + key.representation().size()) {
}
template<>
void db::serializer<clustering_key_view>::write(output& out, const clustering_key_view& key) {
bytes_view v = key.representation();
out.write<uint16_t>(v.size());
out.write(v.begin(), v.end());
}
template<>
void db::serializer<clustering_key_view>::read(clustering_key_view& b, input& in) {
auto len = in.read<uint16_t>();
b = clustering_key_view::from_bytes(in.read_view(len));
}
template<>
clustering_key_view db::serializer<clustering_key_view>::read(input& in) {
auto len = in.read<uint16_t>();
return clustering_key_view::from_bytes(in.read_view(len));
}
template<>
db::serializer<clustering_key_prefix_view>::serializer(const clustering_key_prefix_view& key)
: _item(key), _size(sizeof(uint16_t) /* size */ + key.representation().size()) {
@@ -281,7 +257,6 @@ template class db::serializer<atomic_cell_view> ;
template class db::serializer<collection_mutation_view> ;
template class db::serializer<utils::UUID> ;
template class db::serializer<partition_key_view> ;
template class db::serializer<clustering_key_view> ;
template class db::serializer<clustering_key_prefix_view> ;
template class db::serializer<frozen_mutation> ;
template class db::serializer<db::replay_position> ;


@@ -22,11 +22,12 @@
#ifndef DB_SERIALIZER_HH_
#define DB_SERIALIZER_HH_
#include <experimental/optional>
#include "utils/data_input.hh"
#include "utils/data_output.hh"
#include "bytes_ostream.hh"
#include "bytes.hh"
#include "mutation.hh"
#include "keys.hh"
#include "database_fwd.hh"
#include "frozen_mutation.hh"
@@ -58,9 +59,9 @@ public:
return *this;
}
static void write(output&, const T&);
static void read(T&, input&);
static T read(input&);
static void write(output&, const type&);
static void read(type&, input&);
static type read(input&);
static void skip(input& in);
size_t size() const {
@@ -76,11 +77,100 @@ public:
void write(data_output& out) const {
write(out, _item);
}
bytes to_bytes() const {
bytes b(bytes::initialized_later(), _size);
data_output out(b);
write(out);
return b;
}
static type from_bytes(bytes_view v) {
data_input in(v);
return read(in);
}
private:
const T& _item;
const type& _item;
size_t _size;
};
template<typename T>
class serializer<std::experimental::optional<T>> {
public:
typedef std::experimental::optional<T> type;
typedef data_output output;
typedef data_input input;
typedef serializer<T> _MyType;
serializer(const type& t)
: _item(t)
, _size(output::serialized_size<bool>() + (t ? serializer<T>(*t).size() : 0))
{}
// apply to memory, must be at least size() large.
const _MyType& operator()(output& out) const {
write(out, _item);
return *this;
}
static void write(output& out, const type& v) {
bool en = v;
out.write<bool>(en);
if (en) {
serializer<T>::write(out, *v);
}
}
static void read(type& dst, input& in) {
auto en = in.read<bool>();
if (en) {
dst = serializer<T>::read(in);
} else {
dst = {};
}
}
static type read(input& in) {
type t;
read(t, in);
return t;
}
static void skip(input& in) {
auto en = in.read<bool>();
if (en) {
serializer<T>::skip(in);
}
}
size_t size() const {
return _size;
}
void write(bytes_ostream& out) const {
auto buf = out.write_place_holder(_size);
data_output data_out((char*)buf, _size);
write(data_out, _item);
}
void write(data_output& out) const {
write(out, _item);
}
bytes to_bytes() const {
bytes b(bytes::initialized_later(), _size);
data_output out(b);
write(out);
return b;
}
static type from_bytes(bytes_view v) {
data_input in(v);
return read(in);
}
private:
const std::experimental::optional<T> _item;
size_t _size;
};
template<> serializer<utils::UUID>::serializer(const utils::UUID &);
template<> void serializer<utils::UUID>::write(output&, const type&);
template<> void serializer<utils::UUID>::read(utils::UUID&, input&);
@@ -124,11 +214,6 @@ template<> void serializer<partition_key_view>::read(partition_key_view&, input&
template<> partition_key_view serializer<partition_key_view>::read(input&);
template<> void serializer<partition_key_view>::skip(input&);
template<> serializer<clustering_key_view>::serializer(const clustering_key_view &);
template<> void serializer<clustering_key_view>::write(output&, const clustering_key_view&);
template<> void serializer<clustering_key_view>::read(clustering_key_view&, input&);
template<> clustering_key_view serializer<clustering_key_view>::read(input&);
template<> serializer<clustering_key_prefix_view>::serializer(const clustering_key_prefix_view &);
template<> void serializer<clustering_key_prefix_view>::write(output&, const clustering_key_prefix_view&);
template<> void serializer<clustering_key_prefix_view>::read(clustering_key_prefix_view&, input&);


@@ -464,7 +464,8 @@ static future<> build_bootstrap_info() {
static auto state_map = std::unordered_map<sstring, bootstrap_state>({
{ "NEEDS_BOOTSTRAP", bootstrap_state::NEEDS_BOOTSTRAP },
{ "COMPLETED", bootstrap_state::COMPLETED },
{ "IN_PROGRESS", bootstrap_state::IN_PROGRESS }
{ "IN_PROGRESS", bootstrap_state::IN_PROGRESS },
{ "DECOMMISSIONED", bootstrap_state::DECOMMISSIONED }
});
bootstrap_state state = bootstrap_state::NEEDS_BOOTSTRAP;
@@ -796,6 +797,8 @@ future<> remove_endpoint(gms::inet_address ep) {
}).then([ep] {
sstring req = "DELETE FROM system.%s WHERE peer = ?";
return execute_cql(req, PEERS, ep.addr()).discard_result();
}).then([] {
return force_blocking_flush(PEERS);
});
}
@@ -874,6 +877,10 @@ bool bootstrap_in_progress() {
return get_bootstrap_state() == bootstrap_state::IN_PROGRESS;
}
bool was_decommissioned() {
return get_bootstrap_state() == bootstrap_state::DECOMMISSIONED;
}
bootstrap_state get_bootstrap_state() {
return _local_cache.local()._state;
}
@@ -882,7 +889,8 @@ future<> set_bootstrap_state(bootstrap_state state) {
static std::unordered_map<bootstrap_state, sstring, enum_hash<bootstrap_state>> state_to_name({
{ bootstrap_state::NEEDS_BOOTSTRAP, "NEEDS_BOOTSTRAP" },
{ bootstrap_state::COMPLETED, "COMPLETED" },
{ bootstrap_state::IN_PROGRESS, "IN_PROGRESS" }
{ bootstrap_state::IN_PROGRESS, "IN_PROGRESS" },
{ bootstrap_state::DECOMMISSIONED, "DECOMMISSIONED" }
});
sstring state_name = state_to_name.at(state);
@@ -1002,5 +1010,55 @@ query(distributed<service::storage_proxy>& proxy, const sstring& cf_name, const
});
}
static map_type_impl::native_type prepare_rows_merged(std::unordered_map<int32_t, int64_t>& rows_merged) {
map_type_impl::native_type tmp;
for (auto& r: rows_merged) {
int32_t first = r.first;
int64_t second = r.second;
auto map_element = std::make_pair<data_value, data_value>(data_value(first), data_value(second));
tmp.push_back(std::move(map_element));
}
return tmp;
}
future<> update_compaction_history(sstring ksname, sstring cfname, int64_t compacted_at, int64_t bytes_in, int64_t bytes_out,
std::unordered_map<int32_t, int64_t> rows_merged)
{
// don't write anything when the history table itself is compacted, since that would in turn cause new compactions
if (ksname == "system" && cfname == COMPACTION_HISTORY) {
return make_ready_future<>();
}
auto map_type = map_type_impl::get_instance(int32_type, long_type, true);
sstring req = "INSERT INTO system.%s (id, keyspace_name, columnfamily_name, compacted_at, bytes_in, bytes_out, rows_merged) VALUES (?, ?, ?, ?, ?, ?, ?)";
return execute_cql(req, COMPACTION_HISTORY, utils::UUID_gen::get_time_UUID(), ksname, cfname, compacted_at, bytes_in, bytes_out,
make_map_value(map_type, prepare_rows_merged(rows_merged))).discard_result();
}
future<std::vector<compaction_history_entry>> get_compaction_history()
{
sstring req = "SELECT * from system.%s";
return execute_cql(req, COMPACTION_HISTORY).then([] (::shared_ptr<cql3::untyped_result_set> msg) {
std::vector<compaction_history_entry> history;
for (auto& row : *msg) {
compaction_history_entry entry;
entry.id = row.get_as<utils::UUID>("id");
entry.ks = row.get_as<sstring>("keyspace_name");
entry.cf = row.get_as<sstring>("columnfamily_name");
entry.compacted_at = row.get_as<int64_t>("compacted_at");
entry.bytes_in = row.get_as<int64_t>("bytes_in");
entry.bytes_out = row.get_as<int64_t>("bytes_out");
if (row.has("rows_merged")) {
entry.rows_merged = row.get_map<int32_t, int64_t>("rows_merged");
}
history.push_back(std::move(entry));
}
return std::move(history);
});
}
} // namespace system_keyspace
} // namespace db


@@ -153,7 +153,8 @@ load_dc_rack_info();
enum class bootstrap_state {
NEEDS_BOOTSTRAP,
COMPLETED,
IN_PROGRESS
IN_PROGRESS,
DECOMMISSIONED
};
#if 0
@@ -258,26 +259,28 @@ enum class bootstrap_state {
compactionLog.truncateBlocking();
}
public static void updateCompactionHistory(String ksname,
String cfname,
long compactedAt,
long bytesIn,
long bytesOut,
Map<Integer, Long> rowsMerged)
{
// don't write anything when the history table itself is compacted, since that would in turn cause new compactions
if (ksname.equals("system") && cfname.equals(COMPACTION_HISTORY))
return;
String req = "INSERT INTO system.%s (id, keyspace_name, columnfamily_name, compacted_at, bytes_in, bytes_out, rows_merged) VALUES (?, ?, ?, ?, ?, ?, ?)";
executeInternal(String.format(req, COMPACTION_HISTORY), UUIDGen.getTimeUUID(), ksname, cfname, ByteBufferUtil.bytes(compactedAt), bytesIn, bytesOut, rowsMerged);
}
public static TabularData getCompactionHistory() throws OpenDataException
{
UntypedResultSet queryResultSet = executeInternal(String.format("SELECT * from system.%s", COMPACTION_HISTORY));
return CompactionHistoryTabularData.from(queryResultSet);
}
#endif
struct compaction_history_entry {
utils::UUID id;
sstring ks;
sstring cf;
int64_t compacted_at = 0;
int64_t bytes_in = 0;
int64_t bytes_out = 0;
// Key: number of rows merged
// Value: counter
std::unordered_map<int32_t, int64_t> rows_merged;
};
future<> update_compaction_history(sstring ksname, sstring cfname, int64_t compacted_at, int64_t bytes_in, int64_t bytes_out,
std::unordered_map<int32_t, int64_t> rows_merged);
future<std::vector<compaction_history_entry>> get_compaction_history();
typedef std::vector<db::replay_position> replay_positions;
future<> save_truncation_record(const column_family&, db_clock::time_point truncated_at, db::replay_position);
@@ -519,6 +522,7 @@ enum class bootstrap_state {
bool bootstrap_complete();
bool bootstrap_in_progress();
bootstrap_state get_bootstrap_state();
bool was_decommissioned();
future<> set_bootstrap_state(bootstrap_state state);
#if 0


@@ -34,12 +34,12 @@ token byte_ordered_partitioner::get_random_token()
std::map<token, float> byte_ordered_partitioner::describe_ownership(const std::vector<token>& sorted_tokens)
{
throw std::runtime_error("not implemented");
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
}
token byte_ordered_partitioner::midpoint(const token& t1, const token& t2) const
{
throw std::runtime_error("not implemented");
throw std::runtime_error(sprint("%s not implemented", __PRETTY_FUNCTION__));
}
unsigned


@@ -386,12 +386,22 @@ public:
friend std::ostream& operator<<(std::ostream&, const ring_position&);
};
// Trichotomic comparator for ring_position
struct ring_position_comparator {
const schema& s;
ring_position_comparator(const schema& s_) : s(s_) {}
int operator()(const ring_position& lh, const ring_position& rh) const;
};
// "less" comparator for ring_position
struct ring_position_less_comparator {
const schema& s;
ring_position_less_comparator(const schema& s_) : s(s_) {}
bool operator()(const ring_position& lh, const ring_position& rh) const {
return lh.less_compare(s, rh);
}
};
struct token_comparator {
// Return values are those of a trichotomic comparison.
int operator()(const token& t1, const token& t2) const;


@@ -88,18 +88,8 @@ inline int64_t long_token(const token& t) {
return net::ntoh(*lp);
}
// XXX: Technically, this should be inside long_token. However, long_token is
// used quite a lot in hot paths, so it is better to keep the branches off, if
// we can. Most of our comparators will check for _kind separately,
// so this should be fine.
sstring murmur3_partitioner::to_sstring(const token& t) const {
int64_t lt;
if (t._kind == dht::token::kind::before_all_keys) {
lt = std::numeric_limits<long>::min();
} else {
lt = long_token(t);
}
return ::to_sstring(lt);
return ::to_sstring(long_token(t));
}
dht::token murmur3_partitioner::from_sstring(const sstring& t) const {
@@ -122,17 +112,35 @@ int murmur3_partitioner::tri_compare(const token& t1, const token& t2) {
}
}
// Assuming that x>=y, return the positive difference x-y.
// The return type is an unsigned type, as the difference may overflow
// a signed type (e.g., consider very positive x and very negative y).
template <typename T>
static std::make_unsigned_t<T> positive_subtract(T x, T y) {
return std::make_unsigned_t<T>(x) - std::make_unsigned_t<T>(y);
}
token murmur3_partitioner::midpoint(const token& t1, const token& t2) const {
auto l1 = long_token(t1);
auto l2 = long_token(t2);
// long_token is defined as signed, but the arithmetic works out the same
// without invoking undefined behavior with a signed type.
auto delta = (uint64_t(l2) - uint64_t(l1)) / 2;
if (l1 > l2) {
// wraparound
delta += 0x8000'0000'0000'0000;
int64_t mid;
if (l1 <= l2) {
// To find the midpoint, we cannot use the trivial formula (l1+l2)/2
// because the addition can overflow the integer. To avoid this
// overflow, we first notice that the above formula is equivalent to
// l1 + (l2-l1)/2. Now, "l2-l1" can still overflow a signed integer
// (e.g., think of a very positive l2 and very negative l1), but
// because l1 <= l2 in this branch, we note that l2-l1 is positive
// and fits an *unsigned* int's range. So,
mid = l1 + positive_subtract(l2, l1)/2;
} else {
// When l2 < l1, we need to switch l1 and l2 in the above
// formula, because now l1 - l2 is positive.
// Additionally, we consider this case a "wrap around", so we need
// to behave as if l2 + 2^64 was meant instead of l2, i.e., add 2^63
// to the average.
mid = l2 + positive_subtract(l1, l2)/2 + 0x8000'0000'0000'0000;
}
auto mid = uint64_t(l1) + delta;
return get_token(mid);
}


@@ -45,6 +45,7 @@
#include "log.hh"
#include "streaming/stream_plan.hh"
#include "streaming/stream_state.hh"
#include "service/storage_service.hh"
namespace dht {
@@ -109,7 +110,6 @@ range_streamer::get_all_ranges_with_sources_for(const sstring& keyspace_name, st
auto& ks = _db.local().find_keyspace(keyspace_name);
auto& strat = ks.get_replication_strategy();
// std::unordered_multimap<range<token>, inet_address>
auto tm = _metadata.clone_only_token_map();
auto range_addresses = unordered_multimap_to_unordered_map(strat.get_range_addresses(tm));
@@ -205,9 +205,7 @@ range_streamer::get_all_ranges_with_strict_sources_for(const sstring& keyspace_n
bool range_streamer::use_strict_sources_for_ranges(const sstring& keyspace_name) {
auto& ks = _db.local().find_keyspace(keyspace_name);
auto& strat = ks.get_replication_strategy();
// FIXME: DatabaseDescriptor.isReplacing()
auto is_replacing = false;
return !is_replacing
return !_db.local().is_replacing()
&& use_strict_consistency()
&& !_tokens.empty()
&& _metadata.get_all_endpoints().size() != strat.get_replication_factor();
@@ -224,25 +222,17 @@ void range_streamer::add_ranges(const sstring& keyspace_name, std::vector<range<
}
}
// TODO: share code with unordered_multimap_to_unordered_map
std::unordered_map<inet_address, std::vector<range<token>>> tmp;
std::unordered_map<inet_address, std::vector<range<token>>> range_fetch_map;
for (auto& x : get_range_fetch_map(ranges_for_keyspace, _source_filters, keyspace_name)) {
auto& addr = x.first;
auto& range_ = x.second;
auto it = tmp.find(addr);
if (it != tmp.end()) {
it->second.push_back(range_);
} else {
tmp.emplace(addr, std::vector<range<token>>{range_});
}
range_fetch_map[x.first].emplace_back(x.second);
}
if (logger.is_enabled(logging::log_level::debug)) {
for (auto& x : tmp) {
for (auto& x : range_fetch_map) {
logger.debug("{} : range {} from source {} for keyspace {}", _description, x.second, x.first, keyspace_name);
}
}
_to_fetch.emplace(keyspace_name, std::move(tmp));
_to_fetch.emplace(keyspace_name, std::move(range_fetch_map));
}
future<streaming::stream_state> range_streamer::fetch_async() {
@@ -272,4 +262,8 @@ range_streamer::get_work_map(const std::unordered_multimap<range<token>, inet_ad
return get_range_fetch_map(ranges_with_source_target, source_filters, keyspace);
}
bool range_streamer::use_strict_consistency() {
return service::get_local_storage_service().db().local().get_config().consistent_rangemovement();
}
} // dht


@@ -62,10 +62,7 @@ public:
using stream_plan = streaming::stream_plan;
using stream_state = streaming::stream_state;
using i_failure_detector = gms::i_failure_detector;
static bool use_strict_consistency() {
//FIXME: Boolean.parseBoolean(System.getProperty("cassandra.consistent.rangemovement","true"));
return true;
}
static bool use_strict_consistency();
public:
/**
* A filter applied to sources to stream from when constructing a fetch map.

dist/ami/build_ami.sh vendored

@@ -5,6 +5,16 @@ if [ ! -e dist/ami/build_ami.sh ]; then
exit 1
fi
TARGET_JSON=scylla.json
if [ "$1" != "" ]; then
TARGET_JSON=$1
fi
if [ ! -f dist/ami/$TARGET_JSON ]; then
echo "dist/ami/$TARGET_JSON not found"
exit 1
fi
cd dist/ami
if [ ! -f variables.json ]; then
@@ -20,4 +30,4 @@ if [ ! -d packer ]; then
cd -
fi
packer/packer build -var-file=variables.json scylla.json
packer/packer build -var-file=variables.json $TARGET_JSON

dist/ami/build_ami_local.sh vendored Executable file

@@ -0,0 +1,30 @@
#!/bin/sh -e
if [ ! -e dist/ami/build_ami_local.sh ]; then
echo "run build_ami_local.sh in top of scylla dir"
exit 1
fi
sudo yum -y install git
if [ ! -f dist/ami/scylla-server.x86_64.rpm ]; then
dist/redhat/build_rpm.sh
cp build/rpms/scylla-server-`cat build/SCYLLA-VERSION-FILE`-`cat build/SCYLLA-RELEASE-FILE`.*.x86_64.rpm dist/ami/scylla-server.x86_64.rpm
fi
if [ ! -f dist/ami/scylla-jmx.noarch.rpm ]; then
cd build
git clone --depth 1 https://github.com/scylladb/scylla-jmx.git
cd scylla-jmx
sh -x -e dist/redhat/build_rpm.sh
cd ../..
cp build/scylla-jmx/build/rpms/scylla-jmx-`cat build/scylla-jmx/build/SCYLLA-VERSION-FILE`-`cat build/scylla-jmx/build/SCYLLA-RELEASE-FILE`.*.noarch.rpm dist/ami/scylla-jmx.noarch.rpm
fi
if [ ! -f dist/ami/scylla-tools.noarch.rpm ]; then
cd build
git clone --depth 1 https://github.com/scylladb/scylla-tools-java.git
cd scylla-tools-java
sh -x -e dist/redhat/build_rpm.sh
cd ../..
cp build/scylla-tools-java/build/rpms/scylla-tools-`cat build/scylla-tools-java/build/SCYLLA-VERSION-FILE`-`cat build/scylla-tools-java/build/SCYLLA-RELEASE-FILE`.*.noarch.rpm dist/ami/scylla-tools.noarch.rpm
fi
exec dist/ami/build_ami.sh scylla_local.json

dist/ami/files/.bash_profile vendored Normal file

@@ -0,0 +1,45 @@
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/.local/bin:$HOME/bin
export PATH
echo
echo ' _____ _ _ _____ ____ '
echo ' / ____| | | | | __ \| _ \ '
echo ' | (___ ___ _ _| | | __ _| | | | |_) |'
echo ' \___ \ / __| | | | | |/ _` | | | | _ < '
echo ' ____) | (__| |_| | | | (_| | |__| | |_) |'
echo ' |_____/ \___|\__, |_|_|\__,_|_____/|____/ '
echo ' __/ | '
echo ' |___/ '
echo ''
echo ''
echo 'Nodetool:'
echo ' nodetool --help'
echo 'CQL Shell:'
echo ' cqlsh'
echo 'More documentation available at: '
echo ' http://www.scylladb.com/doc/'
echo
if [ "`systemctl is-active scylla-server`" = "active" ]; then
tput setaf 4
tput bold
echo " ScyllaDB is active."
tput sgr0
else
tput setaf 1
tput bold
echo " ScyllaDB is not started!"
tput sgr0
echo "Please wait for startup. To see status of ScyllaDB, run "
echo " 'systemctl status scylla-server'"
fi


@@ -1,5 +0,0 @@
[Coredump]
Storage=external
Compress=yes
ProcessSizeMax=16G
ExternalSizeMax=16G


@@ -1,11 +0,0 @@
[Unit]
Description=Scylla Setup
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/lib/scylla/scylla-setup.sh
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target


@@ -1,52 +0,0 @@
#!/bin/sh -e
if [ -b /dev/md0 ]; then
echo "RAID already constructed."
exit 1
fi
dnf update -y
DISKS=""
NR=0
for i in xvd{b..z}; do
if [ -b /dev/$i ];then
echo Found disk /dev/$i
DISKS="$DISKS /dev/$i"
NR=$((NR+1))
fi
done
echo Creating RAID0 for scylla using $NR disk\(s\): $DISKS
if [ $NR -ge 1 ]; then
mdadm --create --verbose --force --run /dev/md0 --level=0 -c256 --raid-devices=$NR $DISKS
blockdev --setra 65536 /dev/md0
mkfs.xfs /dev/md0 -f
echo "DEVICE $DISKS" > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
UUID=`blkid /dev/md0 | awk '{print $2}'`
mkdir /data
echo "$UUID /data xfs noatime,discard 0 0" >> /etc/fstab
mount /data
else
echo "WARN: Scylla is not using XFS to store data. Performance will suffer." > /home/fedora/WARN_PLEASE_READ.TXT
fi
mkdir -p /data/data
mkdir -p /data/commitlog
chown scylla:scylla /data/*
CPU_NR=`cat /proc/cpuinfo |grep processor|wc -l`
if [ $CPU_NR -ge 8 ]; then
NR=$((CPU_NR - 1))
echo SCYLLA_ARGS=\"--cpuset 1-$NR --smp $NR\" >> /etc/sysconfig/scylla-server
echo SET_NIC=\"yes\" >> /etc/sysconfig/scylla-server
fi
/usr/lib/scylla/scylla-ami/ds2_configure.py
systemctl disable scylla-setup.service
systemctl enable scylla-server.service
systemctl start scylla-server.service
systemctl enable scylla-jmx.service
systemctl start scylla-jmx.service


@@ -1,11 +0,0 @@
[scylla]
name=Scylla for Fedora $releasever - $basearch
baseurl=https://s3.amazonaws.com/downloads.scylladb.com/rpm/fedora/$releasever/$basearch/
enabled=1
gpgcheck=0
[scylla-generic]
name=Scylla for Fedora $releasever
baseurl=https://s3.amazonaws.com/downloads.scylladb.com/rpm/fedora/$releasever/noarch/
enabled=1
gpgcheck=0


@@ -1,20 +0,0 @@
#!/bin/sh -e
setenforce 0
sed -e "s/enforcing/disabled/" /etc/sysconfig/selinux > /tmp/selinux
mv /tmp/selinux /etc/sysconfig/
dnf update -y
mv /home/fedora/scylla.repo /etc/yum.repos.d/
dnf install -y scylla-server scylla-server-debuginfo scylla-jmx scylla-tools
dnf install -y mdadm xfsprogs
cp /home/fedora/coredump.conf /etc/systemd/coredump.conf
mv /home/fedora/scylla-setup.service /usr/lib/systemd/system
mv /home/fedora/scylla-setup.sh /usr/lib/scylla
chmod a+rx /usr/lib/scylla/scylla-setup.sh
mv /home/fedora/scylla-ami /usr/lib/scylla/scylla-ami
chmod a+rx /usr/lib/scylla/scylla-ami/ds2_configure.py
systemctl enable scylla-setup.service
sed -e 's!/var/lib/scylla/data!/data/data!' -e 's!commitlog_directory: /var/lib/scylla/commitlog!commitlog_directory: /data/commitlog!' /var/lib/scylla/conf/scylla.yaml > /tmp/scylla.yaml
mv /tmp/scylla.yaml /var/lib/scylla/conf
grep -v ' - mounts' /etc/cloud/cloud.cfg > /tmp/cloud.cfg
mv /tmp/cloud.cfg /etc/cloud/cloud.cfg

dist/ami/scylla.json vendored

@@ -18,13 +18,23 @@
"provisioners": [
{
"type": "file",
"source": "files/",
"destination": "/home/fedora"
"source": "files/scylla-ami",
"destination": "/home/fedora/scylla-ami"
},
{
"type": "file",
"source": "files/.bash_profile",
"destination": "/home/fedora/.bash_profile"
},
{
"type": "file",
"source": "../../scripts/scylla_install",
"destination": "/home/fedora/scylla_install"
},
{
"type": "shell",
"inline": [
"sudo sh -x -e /home/fedora/setup-ami.sh"
"sudo sh -x -e /home/fedora/scylla_install -a"
]
}
],

dist/ami/scylla_local.json vendored Normal file

@@ -0,0 +1,67 @@
{
"builders": [
{
"type": "amazon-ebs",
"access_key": "{{user `access_key`}}",
"secret_key": "{{user `secret_key`}}",
"subnet_id": "{{user `subnet_id`}}",
"security_group_id": "{{user `security_group_id`}}",
"region": "{{user `region`}}",
"associate_public_ip_address": "{{user `associate_public_ip_address`}}",
"source_ami": "ami-a51564c0",
"instance_type": "{{user `instance_type`}}",
"ssh_username": "fedora",
"ssh_timeout": "5m",
"ami_name": "scylla_{{isotime | clean_ami_name}}"
}
],
"provisioners": [
{
"type": "file",
"source": "files/scylla-ami",
"destination": "/home/fedora/scylla-ami"
},
{
"type": "file",
"source": "files/.bash_profile",
"destination": "/home/fedora/.bash_profile"
},
{
"type": "file",
"source": "../../scripts/scylla_install",
"destination": "/home/fedora/scylla_install"
},
{
"type": "file",
"source": "scylla-server.x86_64.rpm",
"destination": "/home/fedora/scylla-server.x86_64.rpm"
},
{
"type": "file",
"source": "scylla-jmx.noarch.rpm",
"destination": "/home/fedora/scylla-jmx.noarch.rpm"
},
{
"type": "file",
"source": "scylla-tools.noarch.rpm",
"destination": "/home/fedora/scylla-tools.noarch.rpm"
},
{
"type": "shell",
"inline": [
"sudo yum install -y /home/fedora/scylla-server.x86_64.rpm /home/fedora/scylla-jmx.noarch.rpm /home/fedora/scylla-tools.noarch.rpm",
"sudo mv /home/fedora/scylla-ami /usr/lib/scylla/scylla-ami",
"sudo sh -x -e /home/fedora/scylla_install -a -l /home/fedora"
]
}
],
"variables": {
"access_key": "",
"secret_key": "",
"subnet_id": "",
"security_group_id": "",
"region": "",
"associate_public_ip_address": "",
"instance_type": ""
}
}


@@ -1,4 +1,5 @@
scylla - core unlimited
scylla - memlock unlimited
scylla - nofile 100000
scylla - nofile 200000
scylla - as unlimited
scylla - nproc 8096

dist/common/scripts/scylla_bootparam_setup (new executable file)

@@ -0,0 +1,48 @@
#!/bin/sh -e
#
# Copyright (C) 2015 ScyllaDB
print_usage() {
echo "scylla_bootparam_setup -a"
echo " -a AMI instance mode"
exit 1
}
AMI=0
while getopts a OPT; do
case "$OPT" in
"a")
AMI=1
;;
"h")
print_usage
;;
esac
done
. /etc/os-release
if [ $AMI -eq 1 ]; then
. /etc/sysconfig/scylla-server
sed -e "s#append #append clocksource=tsc tsc=reliable hugepagesz=2M hugepages=$NR_HUGEPAGES #" /boot/extlinux/extlinux.conf > /tmp/extlinux.conf
mv /tmp/extlinux.conf /boot/extlinux/extlinux.conf
else
. /etc/sysconfig/scylla-server
if [ ! -f /etc/default/grub ]; then
echo "Unsupported bootloader"
exit 1
fi
if [ "`grep hugepagesz /etc/default/grub`" != "" ] || [ "`grep hugepages /etc/default/grub`" != "" ]; then
sed -e "s#hugepagesz=2M ##" /etc/default/grub > /tmp/grub
mv /tmp/grub /etc/default/grub
sed -e "s#hugepages=[0-9]* ##" /etc/default/grub > /tmp/grub
mv /tmp/grub /etc/default/grub
fi
sed -e "s#^GRUB_CMDLINE_LINUX=\"#GRUB_CMDLINE_LINUX=\"hugepagesz=2M hugepages=$NR_HUGEPAGES #" /etc/default/grub > /tmp/grub
mv /tmp/grub /etc/default/grub
if [ "$ID" = "ubuntu" ]; then
grub2-mkconfig -o /boot/grub/grub.cfg
else
grub2-mkconfig -o /boot/grub2/grub.cfg
fi
fi
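The non-AMI branch above boils down to one anchored sed substitution that prepends the hugepage parameters to the kernel command line. A minimal sketch of that rewrite, run against a sample line rather than the real /etc/default/grub (NR_HUGEPAGES assumed to be 64):

```shell
# The same substitution scylla_bootparam_setup applies to /etc/default/grub,
# demonstrated on a sample GRUB_CMDLINE_LINUX line.
NR_HUGEPAGES=64
line='GRUB_CMDLINE_LINUX="rhgb quiet"'
out=$(printf '%s\n' "$line" |
    sed -e "s#^GRUB_CMDLINE_LINUX=\"#GRUB_CMDLINE_LINUX=\"hugepagesz=2M hugepages=$NR_HUGEPAGES #")
echo "$out"
# GRUB_CMDLINE_LINUX="hugepagesz=2M hugepages=64 rhgb quiet"
```

Using `#` as the sed delimiter avoids escaping the `/` characters that appear in paths elsewhere in the script.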

dist/common/scripts/scylla_coredump_setup (new executable file)

@@ -0,0 +1,16 @@
#!/bin/sh -e
#
# Copyright (C) 2015 ScyllaDB
. /etc/os-release
if [ "$ID" = "ubuntu" ]; then
apt-get remove -y apport-noui
else
if [ -f /etc/systemd/coredump.conf ]; then
mv /etc/systemd/coredump.conf /etc/systemd/coredump.conf.save
systemctl daemon-reload
fi
fi
sysctl -p /etc/sysctl.d/99-scylla.conf

dist/common/scripts/scylla_ntp_setup (new executable file)

@@ -0,0 +1,38 @@
#!/bin/sh -e
#
# Copyright (C) 2015 ScyllaDB
print_usage() {
echo "scylla_ntp_setup -a"
echo " -a AMI instance mode"
exit 1
}
AMI=0
while getopts a OPT; do
case "$OPT" in
"a")
AMI=1
;;
"h")
print_usage
;;
esac
done
. /etc/os-release
if [ "$NAME" = "Ubuntu" ]; then
apt-get install -y ntp ntpdate
service ntp stop
ntpdate `cat /etc/ntp.conf |grep "^server"|head -n1|awk '{print $2}'`
service ntp start
else
yum install -y ntp ntpdate || true
if [ $AMI -eq 1 ]; then
sed -e s#fedora.pool.ntp.org#amazon.pool.ntp.org# /etc/ntp.conf > /tmp/ntp.conf
mv /tmp/ntp.conf /etc/ntp.conf
fi
systemctl enable ntpd.service
ntpdate `cat /etc/ntp.conf |grep "^server"|head -n1|awk '{print $2}'`
systemctl start ntpd.service
fi
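The host handed to ntpdate in both branches is extracted the same way: the first `server` line of ntp.conf, second field. A sketch of that pipeline run against a sample config instead of the real /etc/ntp.conf:

```shell
# How scylla_ntp_setup picks the ntpdate target from ntp.conf
# (sample config text; the script reads /etc/ntp.conf).
conf='driftfile /var/lib/ntp/drift
server 0.amazon.pool.ntp.org iburst
server 1.amazon.pool.ntp.org iburst'
first=$(printf '%s\n' "$conf" | grep "^server" | head -n1 | awk '{print $2}')
echo "$first"
# 0.amazon.pool.ntp.org
```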


@@ -1,5 +1,43 @@
#!/bin/sh -e
if [ "$AMI" = "yes" ]; then
RAIDCNT=`grep xvdb /proc/mdstat | wc -l`
RAIDDEV=`grep xvdb /proc/mdstat | awk '{print $1}'`
if [ $RAIDCNT -ge 1 ]; then
echo "RAID already constructed."
if [ "`mount|grep /var/lib/scylla`" = "" ]; then
mount -o noatime /dev/$RAIDDEV /var/lib/scylla
fi
else
echo "RAID is not constructed, going to initialize..."
if [ "$AMI_KEEP_VERSION" != "yes" ]; then
yum update -y
fi
DISKS=""
for i in /dev/xvd{b..z}; do
if [ -b $i ];then
echo "Found disk $i"
if [ "$DISKS" = "" ]; then
DISKS=$i
else
DISKS="$DISKS,$i"
fi
fi
done
if [ "$DISKS" != "" ]; then
/usr/lib/scylla/scylla_raid_setup -d $DISKS
else
echo "WARN: Scylla is not using XFS to store data. Performance will suffer." > /home/fedora/WARN_PLEASE_READ.TXT
fi
/usr/lib/scylla/scylla-ami/ds2_configure.py
fi
fi
if [ "$NETWORK_MODE" = "virtio" ]; then
ip tuntap del mode tap dev $TAP
ip tuntap add mode tap dev $TAP user $USER one_queue vnet_hdr
@@ -13,8 +51,12 @@ elif [ "$NETWORK_MODE" = "dpdk" ]; then
for n in /sys/devices/system/node/node?; do
echo $NR_HUGEPAGES > $n/hugepages/hugepages-2048kB/nr_hugepages
done
else # NETWORK_MODE = posix
if [ "$SET_NIC" = "yes" ]; then
sudo sh /usr/lib/scylla/posix_net_conf.sh $IFNAME >/dev/null 2>&1 || true
fi
fi
. /etc/os-release
if [ "$NAME" = "Ubuntu" ]; then
if [ "$ID" = "ubuntu" ]; then
hugeadm --create-mounts
fi
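The disk-discovery loop in the AMI branch above accumulates a comma-separated device list for scylla_raid_setup. The same join logic, run over a fixed list of device names instead of probing /dev/xvd{b..z} with `[ -b ]`:

```shell
# The comma-join loop from scylla_prepare, with the block-device probe
# replaced by a hard-coded sample list.
DISKS=""
for i in /dev/xvdb /dev/xvdc /dev/xvdd; do
    if [ "$DISKS" = "" ]; then
        DISKS=$i
    else
        DISKS="$DISKS,$i"
    fi
done
echo "$DISKS"
# /dev/xvdb,/dev/xvdc,/dev/xvdd
```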

dist/common/scripts/scylla_raid_setup (new executable file)

@@ -0,0 +1,61 @@
#!/bin/sh -e
#
# Copyright (C) 2015 ScyllaDB
print_usage() {
echo "scylla_raid_setup -d /dev/hda,/dev/hdb... -r /dev/md0 -u"
echo " -d specify disks for RAID"
echo " -r MD device name for RAID"
echo " -u update /etc/fstab for RAID"
exit 1
}
RAID=/dev/md0
FSTAB=0
while getopts d:r:uh OPT; do
case "$OPT" in
"d")
DISKS=`echo $OPTARG|tr -s ',' ' '`
NR_DISK=$((`echo $OPTARG|grep , -o|wc -w` + 1))
;;
"r")
RAID=$OPTARG
;;
"u")
FSTAB=1
;;
"h")
print_usage
;;
esac
done
if [ "$DISKS" = "" ]; then
print_usage
fi
echo Creating RAID0 for scylla using $NR_DISK disk\(s\): $DISKS
if [ -e $RAID ]; then
echo "$RAID is already in use"
exit 1
fi
if [ "`mount|grep /var/lib/scylla`" != "" ]; then
echo "/var/lib/scylla is already mounted"
exit 1
fi
mdadm --create --verbose --force --run $RAID --level=0 -c256 --raid-devices=$NR_DISK $DISKS
blockdev --setra 65536 $RAID
mkfs.xfs $RAID -f
echo "DEVICE $DISKS" > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
if [ $FSTAB -ne 0 ]; then
UUID=`blkid $RAID | awk '{print $2}'`
echo "$UUID /var/lib/scylla xfs noatime 0 0" >> /etc/fstab
fi
mount -t xfs -o noatime $RAID /var/lib/scylla
mkdir -p /var/lib/scylla/data
mkdir -p /var/lib/scylla/commitlog
mkdir -p /var/lib/scylla/coredump
chown scylla:scylla /var/lib/scylla/*
chown scylla:scylla /var/lib/scylla/
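The `-d` option parsing above is worth unpacking: `tr` turns the commas into the space-separated list mdadm expects, and the disk count is derived as "number of commas plus one". A sketch with a sample argument (no devices are touched):

```shell
# How scylla_raid_setup derives DISKS and NR_DISK from its -d argument.
OPTARG="/dev/xvdb,/dev/xvdc,/dev/xvdd"
DISKS=$(echo "$OPTARG" | tr -s ',' ' ')
NR_DISK=$(( $(echo "$OPTARG" | grep , -o | wc -w) + 1 ))
echo "$NR_DISK disks: $DISKS"
# 3 disks: /dev/xvdb /dev/xvdc /dev/xvdd
```

`grep -o ,` prints one line per comma, so `wc -w` counts the separators; adding one gives the number of devices.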

dist/common/scripts/scylla_save_coredump (new executable file)

@@ -0,0 +1,10 @@
#!/bin/sh -e
#
# Copyright (C) 2015 ScyllaDB
FILE=$1
TIME=`date --date @$2 +%F-%T`
PID=$3
mkdir -p /var/lib/scylla/coredump
/usr/bin/gzip -c > /var/lib/scylla/coredump/core.$FILE-$TIME-$PID.gz
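This helper is invoked by the kernel through the `kernel.core_pattern` pipe (see the 99-scylla.conf below): the dump arrives on stdin, and `%e %t %p` are expanded to the executable name, epoch seconds and pid before the script runs. A sketch of the filename it builds, using sample values and GNU date with TZ pinned so the result is deterministic:

```shell
# The filename pieces scylla_save_coredump assembles (sample values,
# not a real dump; requires GNU date for --date @EPOCH).
FILE=scylla        # %e
EPOCH=1451991600   # %t
PID=12345          # %p
TIME=$(TZ=UTC date --date @$EPOCH +%F-%T)
echo "core.$FILE-$TIME-$PID.gz"
# core.scylla-2016-01-05-11:00:00-12345.gz
```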

dist/common/scripts/scylla_sysconfig_setup (new executable file)

@@ -0,0 +1,100 @@
#!/bin/sh -e
#
# Copyright (C) 2015 ScyllaDB
print_usage() {
echo "scylla_sysconfig_setup -n eth0 -m posix -p 64 -u scylla -g scylla -d /var/lib/scylla -c /etc/scylla -N -a -k"
echo " -n specify NIC"
echo " -m network mode (posix, dpdk)"
echo " -p number of hugepages"
echo " -u user (dpdk requires root)"
echo " -g group (dpdk requires root)"
echo " -d scylla home directory"
echo " -c scylla config directory"
echo " -N setup NIC's interrupts, RPS, XPS"
echo " -a AMI instance mode"
echo " -k keep package version on AMI"
exit 1
}
. /etc/os-release
if [ "$ID" = "ubuntu" ]; then
SYSCONFIG=/etc/default
else
SYSCONFIG=/etc/sysconfig
fi
NIC=eth0
NETWORK_MODE=posix
NR_HUGEPAGES=64
USER=scylla
GROUP=scylla
SCYLLA_HOME=/var/lib/scylla
SCYLLA_CONF=/etc/scylla
SETUP_NIC=0
SET_NIC="no"
AMI=no
AMI_KEEP_VERSION=no
SCYLLA_ARGS=
while getopts n:m:p:u:g:d:c:Nakh OPT; do
case "$OPT" in
"n")
NIC=$OPTARG
;;
"m")
NETWORK_MODE=$OPTARG
;;
"p")
NR_HUGEPAGES=$OPTARG
;;
"u")
USER=$OPTARG
;;
"g")
GROUP=$OPTARG
;;
"d")
SCYLLA_HOME=$OPTARG
;;
"c")
SCYLLA_CONF=$OPTARG
;;
"N")
SETUP_NIC=1
;;
"a")
AMI=yes
;;
"k")
AMI_KEEP_VERSION=yes
;;
"h")
print_usage
;;
esac
done
echo Setting parameters on $SYSCONFIG/scylla-server
ETHDRV=`/usr/lib/scylla/dpdk_nic_bind.py --status | grep if=$NIC | sed -e "s/^.*drv=//" -e "s/ .*$//"`
ETHPCIID=`/usr/lib/scylla/dpdk_nic_bind.py --status | grep if=$NIC | awk '{print $1}'`
NR_CPU=`cat /proc/cpuinfo |grep processor|wc -l`
if [ $NR_CPU -ge 8 ]; then
NR=$((NR_CPU - 1))
SET_NIC="yes"
SCYLLA_ARGS="--cpuset 1-$NR --smp $NR"
fi
sed -e s#^NETWORK_MODE=.*#NETWORK_MODE=$NETWORK_MODE# \
-e s#^ETHDRV=.*#ETHDRV=$ETHDRV# \
-e s#^ETHPCIID=.*#ETHPCIID=$ETHPCIID# \
-e s#^NR_HUGEPAGES=.*#NR_HUGEPAGES=$NR_HUGEPAGES# \
-e s#^USER=.*#USER=$USER# \
-e s#^GROUP=.*#GROUP=$GROUP# \
-e s#^SCYLLA_HOME=.*#SCYLLA_HOME=$SCYLLA_HOME# \
-e s#^SCYLLA_CONF=.*#SCYLLA_CONF=$SCYLLA_CONF# \
-e s#^SET_NIC=.*#SET_NIC=$SET_NIC# \
-e s#^SCYLLA_ARGS=.*#SCYLLA_ARGS="$SCYLLA_ARGS"# \
-e s#^AMI=.*#AMI="$AMI"# \
-e s#^AMI_KEEP_VERSION=.*#AMI_KEEP_VERSION="$AMI_KEEP_VERSION"# \
$SYSCONFIG/scylla-server > /tmp/scylla-server
mv /tmp/scylla-server $SYSCONFIG/scylla-server
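The big sed invocation above rewrites the sysconfig file in place: each `KEY=...` line is replaced wholesale by an anchored `s#^KEY=.*#KEY=value#` expression, so keys keep their position and comments survive. The same mechanism on a two-line sample instead of the real file:

```shell
# The key=value rewrite scylla_sysconfig_setup applies to
# $SYSCONFIG/scylla-server, shortened to two representative keys.
NETWORK_MODE=posix
NR_HUGEPAGES=64
sample='NETWORK_MODE=virtio
NR_HUGEPAGES=0'
printf '%s\n' "$sample" | sed \
    -e s#^NETWORK_MODE=.*#NETWORK_MODE=$NETWORK_MODE# \
    -e s#^NR_HUGEPAGES=.*#NR_HUGEPAGES=$NR_HUGEPAGES#
# NETWORK_MODE=posix
# NR_HUGEPAGES=64
```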


@@ -7,6 +7,12 @@ TAP=tap0
# bridge device name (virtio)
BRIDGE=virbr0
# ethernet device name
IFNAME=eth0
# setup NIC's interrupts, RPS, XPS (posix)
SET_NIC=no
# ethernet device driver (dpdk)
ETHDRV=
@@ -30,3 +36,9 @@ SCYLLA_CONF=/etc/scylla
# additional arguments
SCYLLA_ARGS=""
# setup as AMI instance
AMI=no
# do not upgrade Scylla packages on AMI startup
AMI_KEEP_VERSION=no

dist/common/sysctl.d/99-scylla.conf (new file)

@@ -0,0 +1 @@
kernel.core_pattern=|/usr/lib/scylla/scylla_save_coredump %e %t %p


@@ -4,9 +4,9 @@ IP=$(hostname -i)
sed -e "s/seeds:.*/seeds: $IP/g" /var/lib/scylla/conf/scylla.yaml > $HOME/scylla.yaml
/usr/bin/scylla --log-to-syslog 1 \
--log-to-stdout 0 \
--developer-mode true \
--default-log-level info \
--options-file $HOME/scylla.yaml \
--listen-address $IP \
--rpc-address $IP \
--network-stack posix \
--smp 1
--network-stack posix


@@ -7,21 +7,21 @@ if [ ! -e dist/redhat/build_rpm.sh ]; then
exit 1
fi
OS=`awk '{print $1}' /etc/redhat-release`
if [ "$OS" != "Fedora" ] && [ "$OS" != "CentOS" ]; then
. /etc/os-release
if [ "$ID" != "fedora" ] && [ "$ID" != "centos" ]; then
echo "Unsupported distribution"
exit 1
fi
if [ "$OS" = "Fedora" ] && [ ! -f /usr/bin/mock ]; then
if [ "$ID" = "fedora" ] && [ ! -f /usr/bin/mock ]; then
sudo yum -y install mock
elif [ "$OS" = "CentOS" ] && [ ! -f /usr/bin/yum-builddep ]; then
elif [ "$ID" = "centos" ] && [ ! -f /usr/bin/yum-builddep ]; then
sudo yum -y install yum-utils
fi
if [ ! -f /usr/bin/git ]; then
sudo yum -y install git
fi
mkdir -p $RPMBUILD/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS}
if [ "$OS" = "CentOS" ]; then
if [ "$ID" = "centos" ]; then
./dist/redhat/centos_dep/build_dependency.sh
fi
VERSION=$(./SCYLLA-VERSION-GEN)
@@ -33,7 +33,7 @@ rm -f version
cp dist/redhat/scylla-server.spec.in $RPMBUILD/SPECS/scylla-server.spec
sed -i -e "s/@@VERSION@@/$SCYLLA_VERSION/g" $RPMBUILD/SPECS/scylla-server.spec
sed -i -e "s/@@RELEASE@@/$SCYLLA_RELEASE/g" $RPMBUILD/SPECS/scylla-server.spec
if [ "$OS" = "Fedora" ]; then
if [ "$ID" = "fedora" ]; then
rpmbuild -bs --define "_topdir $RPMBUILD" $RPMBUILD/SPECS/scylla-server.spec
mock rebuild --resultdir=`pwd`/build/rpms $RPMBUILD/SRPMS/scylla-server-$VERSION*.src.rpm
else


@@ -21,7 +21,7 @@ if [ ! -f isl-0.14-3.fc22.src.rpm ]; then
fi
if [ ! -f gcc-5.1.1-4.fc22.src.rpm ]; then
wget http://download.fedoraproject.org/pub/fedora/linux/updates/22/SRPMS/g/gcc-5.1.1-4.fc22.src.rpm
wget https://s3.amazonaws.com/scylla-centos-dep/gcc-5.1.1-4.fc22.src.rpm
fi
if [ ! -f boost-1.57.0-6.fc22.src.rpm ]; then


@@ -10,9 +10,5 @@ elif [ "$NETWORK_MODE" = "dpdk" ]; then
args="$args --network-stack native --dpdk-pmd"
fi
if [ "$SET_NIC" == "yes" ]; then
sudo sh /usr/lib/scylla/posix_net_conf.sh >/dev/null 2>&1 || true
fi
export HOME=/var/lib/scylla
exec sudo -E -u $USER /usr/bin/scylla $args


@@ -8,13 +8,18 @@ License: AGPLv3
URL: http://www.scylladb.com/
Source0: %{name}-@@VERSION@@-@@RELEASE@@.tar
BuildRequires: libaio-devel boost-devel libstdc++-devel cryptopp-devel hwloc-devel numactl-devel libpciaccess-devel libxml2-devel zlib-devel thrift-devel yaml-cpp-devel lz4-devel snappy-devel jsoncpp-devel systemd-devel xz-devel openssl-devel libcap-devel libselinux-devel libgcrypt-devel libgpg-error-devel elfutils-devel krb5-devel libcom_err-devel libattr-devel pcre-devel elfutils-libelf-devel bzip2-devel keyutils-libs-devel xfsprogs-devel make
BuildRequires: libaio-devel boost-devel libstdc++-devel cryptopp-devel hwloc-devel numactl-devel libpciaccess-devel libxml2-devel zlib-devel thrift-devel yaml-cpp-devel lz4-devel snappy-devel jsoncpp-devel systemd-devel xz-devel openssl-devel libcap-devel libselinux-devel libgcrypt-devel libgpg-error-devel elfutils-devel krb5-devel libcom_err-devel libattr-devel pcre-devel elfutils-libelf-devel bzip2-devel keyutils-libs-devel xfsprogs-devel make gnutls-devel
%{?fedora:BuildRequires: ninja-build ragel antlr3-tool antlr3-C++-devel python3 gcc-c++ libasan libubsan}
%{?rhel:BuildRequires: scylla-ninja-build scylla-ragel scylla-antlr3-tool scylla-antlr3-C++-devel python34 scylla-gcc-c++ >= 5.1.1}
Requires: systemd-libs xfsprogs
Requires: systemd-libs xfsprogs mdadm hwloc
%description
%define __debug_install_post \
%{_rpmconfigdir}/find-debuginfo.sh %{?_missing_build_ids_terminate_build:--strict-build-id} %{?_find_debuginfo_opts} "%{_builddir}/%{?buildsubdir}";\
cp scylla-gdb.py ${RPM_BUILD_ROOT}/usr/src/debug/%{name}-%{version}/;\
%{nil}
%prep
%setup -q
@@ -30,6 +35,7 @@ ninja-build -j2
%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT%{_bindir}
mkdir -p $RPM_BUILD_ROOT%{_sysconfdir}/sysctl.d/
mkdir -p $RPM_BUILD_ROOT%{_sysconfdir}/sysconfig/
mkdir -p $RPM_BUILD_ROOT%{_sysconfdir}/security/limits.d/
mkdir -p $RPM_BUILD_ROOT%{_sysconfdir}/scylla/
@@ -37,6 +43,7 @@ mkdir -p $RPM_BUILD_ROOT%{_docdir}/scylla/
mkdir -p $RPM_BUILD_ROOT%{_unitdir}
mkdir -p $RPM_BUILD_ROOT%{_prefix}/lib/scylla/
install -m644 dist/common/sysctl.d/99-scylla.conf $RPM_BUILD_ROOT%{_sysconfdir}/sysctl.d/
install -m644 dist/common/sysconfig/scylla-server $RPM_BUILD_ROOT%{_sysconfdir}/sysconfig/
install -m644 dist/common/limits.d/scylla.conf $RPM_BUILD_ROOT%{_sysconfdir}/security/limits.d/
install -d -m755 $RPM_BUILD_ROOT%{_sysconfdir}/scylla
@@ -44,6 +51,7 @@ install -m644 conf/scylla.yaml $RPM_BUILD_ROOT%{_sysconfdir}/scylla/
install -m644 conf/cassandra-rackdc.properties $RPM_BUILD_ROOT%{_sysconfdir}/scylla/
install -m644 dist/redhat/systemd/scylla-server.service $RPM_BUILD_ROOT%{_unitdir}/
install -m755 dist/common/scripts/* $RPM_BUILD_ROOT%{_prefix}/lib/scylla/
install -m755 dist/redhat/scripts/* $RPM_BUILD_ROOT%{_prefix}/lib/scylla/
install -m755 seastar/scripts/posix_net_conf.sh $RPM_BUILD_ROOT%{_prefix}/lib/scylla/
install -m755 seastar/dpdk/tools/dpdk_nic_bind.py $RPM_BUILD_ROOT%{_prefix}/lib/scylla/
install -m755 build/release/scylla $RPM_BUILD_ROOT%{_bindir}
@@ -57,6 +65,11 @@ install -m644 licenses/* $RPM_BUILD_ROOT%{_docdir}/scylla/licenses/
install -d -m755 $RPM_BUILD_ROOT%{_sharedstatedir}/scylla/
install -d -m755 $RPM_BUILD_ROOT%{_sharedstatedir}/scylla/data
install -d -m755 $RPM_BUILD_ROOT%{_sharedstatedir}/scylla/commitlog
install -d -m755 $RPM_BUILD_ROOT%{_sharedstatedir}/scylla/coredump
install -d -m755 $RPM_BUILD_ROOT%{_prefix}/lib/scylla/swagger-ui
cp -r swagger-ui/dist $RPM_BUILD_ROOT%{_prefix}/lib/scylla/swagger-ui
install -d -m755 $RPM_BUILD_ROOT%{_prefix}/lib/scylla/api
cp -r api/api-doc $RPM_BUILD_ROOT%{_prefix}/lib/scylla/api
%pre
/usr/sbin/groupadd scylla 2> /dev/null || :
@@ -71,8 +84,27 @@ TMP=""
if [ -d /var/lib/scylla/conf ] && [ ! -L /var/lib/scylla/conf ]; then
cp -a /var/lib/scylla/conf /tmp/%{name}-%{version}-%{release}
fi
# Adding IFNAME for previous version of sysconfig
if [ -f /etc/sysconfig/scylla-server ] && [ `grep IFNAME -r /etc/sysconfig/scylla-server|wc -l` -eq 0 ]; then
echo "# ethernet device name" >> /etc/sysconfig/scylla-server
echo "IFNAME=eth0" >> /etc/sysconfig/scylla-server
fi
if [ -d /usr/lib/scylla/scylla-ami ]; then
echo "# setup as AMI instance" >> /etc/sysconfig/scylla-server
echo "AMI=no" >> /etc/sysconfig/scylla-server
echo "# do not upgrade Scylla packages on AMI startup" >> /etc/sysconfig/scylla-server
echo "AMI_KEEP_VERSION=no" >> /etc/sysconfig/scylla-server
fi
%post
grep -v api_ui_dir /etc/scylla/scylla.yaml | grep -v api_doc_dir > /tmp/scylla.yaml
echo "api_ui_dir: /usr/lib/scylla/swagger-ui/dist/" >> /tmp/scylla.yaml
echo "api_doc_dir: /usr/lib/scylla/api/api-doc/" >> /tmp/scylla.yaml
mv /tmp/scylla.yaml /etc/scylla/scylla.yaml
# Upgrade coredump settings
if [ -f /etc/systemd/coredump.conf ];then
/usr/lib/scylla/scylla_coredump_setup
fi
%systemd_post scylla-server.service
%preun
@@ -96,6 +128,7 @@ rm -rf $RPM_BUILD_ROOT
%config(noreplace) %{_sysconfdir}/sysconfig/scylla-server
%{_sysconfdir}/security/limits.d/scylla.conf
%{_sysconfdir}/sysctl.d/99-scylla.conf
%attr(0755,root,root) %dir %{_sysconfdir}/scylla
%config(noreplace) %{_sysconfdir}/scylla/scylla.yaml
%config(noreplace) %{_sysconfdir}/scylla/cassandra-rackdc.properties
@@ -109,13 +142,22 @@ rm -rf $RPM_BUILD_ROOT
%{_prefix}/lib/scylla/scylla_prepare
%{_prefix}/lib/scylla/scylla_run
%{_prefix}/lib/scylla/scylla_stop
%{_prefix}/lib/scylla/scylla_save_coredump
%{_prefix}/lib/scylla/scylla_coredump_setup
%{_prefix}/lib/scylla/scylla_raid_setup
%{_prefix}/lib/scylla/scylla_sysconfig_setup
%{_prefix}/lib/scylla/scylla_bootparam_setup
%{_prefix}/lib/scylla/scylla_ntp_setup
%{_prefix}/lib/scylla/posix_net_conf.sh
%{_prefix}/lib/scylla/dpdk_nic_bind.py
%{_prefix}/lib/scylla/dpdk_nic_bind.pyc
%{_prefix}/lib/scylla/dpdk_nic_bind.pyo
%{_prefix}/lib/scylla/swagger-ui/dist/*
%{_prefix}/lib/scylla/api/api-doc/*
%attr(0755,scylla,scylla) %dir %{_sharedstatedir}/scylla/
%attr(0755,scylla,scylla) %dir %{_sharedstatedir}/scylla/data
%attr(0755,scylla,scylla) %dir %{_sharedstatedir}/scylla/commitlog
%attr(0755,scylla,scylla) %dir %{_sharedstatedir}/scylla/coredump
%changelog
* Tue Jul 21 2015 Takuya ASADA <syuu@cloudius-systems.com>


@@ -5,13 +5,14 @@ After=network.target libvirtd.service
[Service]
Type=simple
LimitMEMLOCK=infinity
LimitNOFILE=100000
LimitNOFILE=200000
LimitAS=infinity
LimitNPROC=8096
EnvironmentFile=/etc/sysconfig/scylla-server
ExecStartPre=/usr/lib/scylla/scylla_prepare
ExecStart=/usr/lib/scylla/scylla_run
ExecStopPost=/usr/lib/scylla/scylla_stop
TimeoutStartSec=900
KillMode=process
Restart=no


@@ -10,6 +10,14 @@ if [ -e debian ] || [ -e build/release ]; then
mkdir build
fi
RELEASE=`lsb_release -r|awk '{print $2}'`
CODENAME=`lsb_release -c|awk '{print $2}'`
if [ `grep -c $RELEASE dist/ubuntu/supported_release` -lt 1 ]; then
echo "Unsupported release: $RELEASE"
echo "Press any key to continue..."
read input
fi
VERSION=$(./SCYLLA-VERSION-GEN)
SCYLLA_VERSION=$(cat build/SCYLLA-VERSION-FILE)
SCYLLA_RELEASE=$(cat build/SCYLLA-RELEASE-FILE)
@@ -24,14 +32,29 @@ cp dist/common/sysconfig/scylla-server debian/scylla-server.default
cp dist/ubuntu/changelog.in debian/changelog
sed -i -e "s/@@VERSION@@/$SCYLLA_VERSION/g" debian/changelog
sed -i -e "s/@@RELEASE@@/$SCYLLA_RELEASE/g" debian/changelog
sed -i -e "s/@@CODENAME@@/$CODENAME/g" debian/changelog
sudo apt-get -y update
./dist/ubuntu/dep/build_dependency.sh
sudo apt-get -y install libyaml-cpp-dev liblz4-dev libsnappy-dev libcrypto++-dev libboost1.55-dev libjsoncpp-dev libaio-dev ragel ninja-build git libyaml-cpp0.5 liblz4-1 libsnappy1 libcrypto++9 libboost-program-options1.55.0 libboost-program-options1.55-dev libboost-system1.55.0 libboost-system1.55-dev libboost-thread1.55.0 libboost-thread1.55-dev libboost-test1.55.0 libboost-test1.55-dev libjsoncpp0 libaio1 hugepages software-properties-common libboost-filesystem1.55-dev libboost-filesystem1.55.0
sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
sudo apt-get -y update
sudo apt-get -y install g++-5
DEP="libyaml-cpp-dev liblz4-dev libsnappy-dev libcrypto++-dev libjsoncpp-dev libaio-dev ragel ninja-build git liblz4-1 libaio1 hugepages software-properties-common libgnutls28-dev libhwloc-dev libnuma-dev libpciaccess-dev"
if [ "$RELEASE" = "14.04" ]; then
DEP="$DEP libboost1.55-dev libboost-program-options1.55.0 libboost-program-options1.55-dev libboost-system1.55.0 libboost-system1.55-dev libboost-thread1.55.0 libboost-thread1.55-dev libboost-test1.55.0 libboost-test1.55-dev libboost-filesystem1.55-dev libboost-filesystem1.55.0 libsnappy1"
else
DEP="$DEP libboost-dev libboost-program-options-dev libboost-system-dev libboost-thread-dev libboost-test-dev libboost-filesystem-dev libboost-filesystem-dev libsnappy1v5"
fi
if [ "$RELEASE" = "15.10" ]; then
DEP="$DEP libjsoncpp0v5 libcrypto++9v5 libyaml-cpp0.5v5 antlr3"
else
DEP="$DEP libjsoncpp0 libcrypto++9 libyaml-cpp0.5"
fi
sudo apt-get -y install $DEP
if [ "$RELEASE" != "15.10" ]; then
sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
sudo apt-get -y update
fi
sudo apt-get -y install g++-4.9
debuild -r fakeroot -us -uc
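The dependency installation above now branches on the Ubuntu release, since 15.10 renamed several library packages (the `v5` GCC-5 ABI rebuilds). The selection logic, with RELEASE fixed to "15.10" rather than read from lsb_release and the long package lists shortened to one representative each:

```shell
# Release-dependent DEP assembly from build_deb.sh (abridged sample).
RELEASE=15.10
DEP="base-deps"
if [ "$RELEASE" = "14.04" ]; then
    DEP="$DEP libboost1.55-dev"
else
    DEP="$DEP libboost-dev"
fi
if [ "$RELEASE" = "15.10" ]; then
    DEP="$DEP libjsoncpp0v5"
else
    DEP="$DEP libjsoncpp0"
fi
echo "$DEP"
# base-deps libboost-dev libjsoncpp0v5
```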


@@ -1,4 +1,4 @@
scylla-server (@@VERSION@@-@@RELEASE@@-ubuntu1) trusty; urgency=medium
scylla-server (@@VERSION@@-@@RELEASE@@-ubuntu1) @@CODENAME@@; urgency=medium
* Initial release.


@@ -4,11 +4,11 @@ Homepage: http://scylladb.com
Section: database
Priority: optional
Standards-Version: 3.9.5
Build-Depends: debhelper (>= 9), libyaml-cpp-dev, liblz4-dev, libsnappy-dev, libcrypto++-dev, libjsoncpp-dev, libaio-dev, libthrift-dev, thrift-compiler, antlr3-tool, antlr3-c++-dev, ragel, g++-5, ninja-build, git, libboost-program-options1.55-dev, libboost-filesystem1.55-dev, libboost-system1.55-dev, libboost-thread1.55-dev, libboost-test1.55-dev
Build-Depends: debhelper (>= 9), libyaml-cpp-dev, liblz4-dev, libsnappy-dev, libcrypto++-dev, libjsoncpp-dev, libaio-dev, libthrift-dev, thrift-compiler, antlr3, antlr3-c++-dev, ragel, g++-4.9, ninja-build, git, libboost-program-options1.55-dev | libboost-program-options-dev, libboost-filesystem1.55-dev | libboost-filesystem-dev, libboost-system1.55-dev | libboost-system-dev, libboost-thread1.55-dev | libboost-thread-dev, libboost-test1.55-dev | libboost-test-dev, libgnutls28-dev, libhwloc-dev, libnuma-dev, libpciaccess-dev
Package: scylla-server
Architecture: amd64
Depends: ${shlibs:Depends}, ${misc:Depends}, hugepages, adduser
Depends: ${shlibs:Depends}, ${misc:Depends}, hugepages, adduser, mdadm, xfsprogs, hwloc-nox
Description: Scylla database server binaries
Scylla is a highly scalable, eventually consistent, distributed,
partitioned row DB.


@@ -2,12 +2,15 @@
DOC = $(CURDIR)/debian/scylla-server/usr/share/doc/scylla-server
SCRIPTS = $(CURDIR)/debian/scylla-server/usr/lib/scylla
SWAGGER = $(SCRIPTS)/swagger-ui
API = $(SCRIPTS)/api
SYSCTL = $(CURDIR)/debian/scylla-server/etc/sysctl.d
LIMITS= $(CURDIR)/debian/scylla-server/etc/security/limits.d
LIBS = $(CURDIR)/debian/scylla-server/usr/lib
CONF = $(CURDIR)/debian/scylla-server/etc/scylla
override_dh_auto_build:
./configure.py --disable-xen --enable-dpdk --mode=release --static-stdc++ --compiler=g++-5
./configure.py --disable-xen --enable-dpdk --mode=release --static-stdc++ --compiler=g++-4.9
ninja
override_dh_auto_clean:
@@ -19,6 +22,9 @@ override_dh_auto_install:
mkdir -p $(LIMITS) && \
cp $(CURDIR)/dist/common/limits.d/scylla.conf $(LIMITS)
mkdir -p $(SYSCTL) && \
cp $(CURDIR)/dist/common/sysctl.d/99-scylla.conf $(SYSCTL)
mkdir -p $(CONF) && \
cp $(CURDIR)/conf/scylla.yaml $(CONF)
cp $(CURDIR)/conf/cassandra-rackdc.properties $(CONF)
@@ -32,6 +38,13 @@ override_dh_auto_install:
mkdir -p $(SCRIPTS) && \
cp $(CURDIR)/seastar/dpdk/tools/dpdk_nic_bind.py $(SCRIPTS)
cp $(CURDIR)/dist/common/scripts/* $(SCRIPTS)
cp $(CURDIR)/dist/ubuntu/scripts/* $(SCRIPTS)
mkdir -p $(SWAGGER) && \
cp -r $(CURDIR)/swagger-ui/dist $(SWAGGER)
mkdir -p $(API) && \
cp -r $(CURDIR)/api/api-doc $(API)
mkdir -p $(CURDIR)/debian/scylla-server/usr/bin/ && \
cp $(CURDIR)/build/release/scylla \
@@ -39,6 +52,7 @@ override_dh_auto_install:
mkdir -p $(CURDIR)/debian/scylla-server/var/lib/scylla/data
mkdir -p $(CURDIR)/debian/scylla-server/var/lib/scylla/commitlog
mkdir -p $(CURDIR)/debian/scylla-server/var/lib/scylla/coredump
override_dh_strip:
dh_strip --dbg-package=scylla-server-dbg


@@ -14,4 +14,9 @@ fi
ln -sfT /etc/scylla /var/lib/scylla/conf
grep -v api_ui_dir /etc/scylla/scylla.yaml | grep -v api_doc_dir > /tmp/scylla.yaml
echo "api_ui_dir: /usr/lib/scylla/swagger-ui/dist/" >> /tmp/scylla.yaml
echo "api_doc_dir: /usr/lib/scylla/api/api-doc/" >> /tmp/scylla.yaml
mv /tmp/scylla.yaml /etc/scylla/scylla.yaml
#DEBHELPER#


@@ -1,4 +1,4 @@
antlr3-tool (3.5.2-ubuntu1) trusty; urgency=medium
antlr3 (3.5.2-ubuntu1) trusty; urgency=medium
* Initial release.

Some files were not shown because too many files have changed in this diff.