So that is_set() will be true for that option. Needed in tests which
set some config options in a higher layer; lower layers then detect
whether an option was set before applying its default.
Add batch_size_fail_threshold_in_kb to prevent a huge batch from being
applied and causing trouble. Also do not warn or fail if only one
partition is affected.
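In scylla.yaml this might look like the following (the value shown is
illustrative, not the shipped default):

```yaml
# Fail any batch whose total mutation size exceeds this threshold (in KB).
# Value below is illustrative; check the shipped scylla.yaml for the default.
batch_size_fail_threshold_in_kb: 50
```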
Fixes: #2128
Message-Id: <20170309111247.GE8197@scylladb.com>
Refs #1813 (fixes scylla part)
Added require_client_auth and priority_string options to
server_encryption_options/client_encryption_options and process them.
Allows TLS method/algorithm specification. Also enables enforcing known
certificate authentication for both node-to-node and client communication.
It still has problems:
- while resharding a very large leveled compaction strategy table, a huge
  number of tiny sstables is generated, overwhelming the file descriptor
  limits
- there is a large impact on read latency while resharding is going on
(cherry picked from commit cf27d44412)
(forward-ported from branch-1.6)
This reverts commit b81a57e8eb.
With exponential range scanning, we should now be able to survive
msb ignore bits of 12, which allows better sharding on large clusters.
With the default value of 12, a node's range is partitioned into
4096 * smp::count sub-ranges which are queried sequentially for a range
scan. If the number of rows in the table is smaller than the required
result size, we will query all of them. This can take so long that we
time out.
A better fix is to query multiple sub-ranges in parallel and merge them,
but for that we need to resurrect the non-sequential merger.
After recent changes to the memtable code, there is no reason for us to
uphold a maximum memtable size. Now that we only flush one memtable at a
time anyway, and also have soft limit notifications from the
region_group_reclaimer, we can just set the soft limit to the target
size and let all of that be handled by the dirty_memory_manager.
It does have the added property that we'll be flushing when we globally
reach the soft limit threshold. In conditions in which we have multiple
CF writes fighting for memory, that guarantees that we will start
flushing much earlier than the hard limit.
The threshold is set to 1/4 of dirty memory. While in theory we would
prefer the memtables to go as big as 1/2 of dirty memory, in my
experiments I have found 1/4 to be a better fit, at least for the
moment.
The reason for such behavior is that in situations where we have slow
disks, setting the soft limit to 1/2 of dirty will put us in a situation
in which we may not have finished writing down the memtable when we hit
the limit, and then throttle. When the threshold is set to 1/4 of
dirty, we don't throttle at all.
This behavior could potentially be fixed by not doing the full
memtable-based throttling after we do the commitlog throttling, but that
is not something realistic for the moment.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
When using multiple physical network interfaces, set this to true to
listen on broadcast_address in addition to the listen_address, allowing
nodes to communicate on both interfaces. Ignore this property if the
network configuration automatically routes between the public and
private networks, such as on EC2.
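A minimal scylla.yaml sketch of the setup this describes (addresses are
illustrative; the option name follows Cassandra's listen_on_broadcast_address):

```yaml
listen_address: 10.0.0.5          # private interface
broadcast_address: 203.0.113.5    # public interface
# Listen on broadcast_address in addition to listen_address:
listen_on_broadcast_address: true
```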
Message-Id: <20160921094810.GA28654@scylladb.com>
Useful for triggering a core dump on allocation failure inside LSA,
which makes it easier to debug allocation failures. They normally
don't cause aborts, just fail the current operation, which makes it
hard to figure out the cause of an allocation failure.
Message-Id: <1470233631-18508-1-git-send-email-tgrabiec@scylladb.com>
This patch adds the Prometheus API. It adds the proto library to the
compilation, adds an optional configuration parameter to change the
Prometheus listening port, and starts the Prometheus API in main.
To disable the Prometheus API, set its listening port to 0.
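For example, in scylla.yaml (the option name and port shown are assumptions
based on this description, not confirmed by the patch text):

```yaml
# Port the Prometheus API listens on; set to 0 to disable it.
prometheus_port: 9180
```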
Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1470228764-19545-2-git-send-email-amnon@scylladb.com>
"When reading a partition, try to read it all,
but once more bytes than a given limit have been read,
we decide that the partition is wide and we don't cache it.
Instead we retry the read with clustering key filtering
applied."
We have imported most of our data about config options from Cassandra. Due to
that, many options that mention the database by name are still using
"Cassandra".
Especially for the user-visible options, which are something a user sees, we
should really be using Scylla here.
This patch was created by automatically replacing every occurrence of "Cassandra"
with "Scylla" and then discarding the ones in which the change didn't
make sense (such as unused options and mentions of the Cassandra documentation).
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <1423e1d7e36874a1f46bd091aec96dcb4d8482d9.1468267193.git.glauber@scylladb.com>
Enable the --partitioner option so that the user can choose a partitioner
other than the default Murmur3Partitioner. Currently, only Murmur3Partitioner
and ByteOrderedPartitioner are supported. When an unsupported partitioner
is specified, an error will be propagated to the user.
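Usage follows the existing command-line style; the fully-qualified class
names below mirror Cassandra's and are shown as an assumption:

```shell
$ scylla --partitioner org.apache.cassandra.dht.Murmur3Partitioner
$ scylla --partitioner org.apache.cassandra.dht.ByteOrderedPartitioner
```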
Config provides operators << and >> for string_map, which makes it
impossible to have generic stream operators for unordered_map. Fix it by
making string_map a separate type and not just an alias.
Message-Id: <20160602102642.GJ9939@scylladb.com>
compact_on_idle will lead users to think we're talking about sstable
compaction, not log-structured-allocator compaction.
Rename the variable to reduce the probability of confusion.
Message-Id: <1464261650-14136-1-git-send-email-avi@scylladb.com>
Time a node waits after sending gossip shutdown message in milliseconds.
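In scylla.yaml this could look as follows (the option name and value are
assumptions based on the description above):

```yaml
# Time to wait after sending the gossip shutdown message, in milliseconds.
shutdown_announce_in_ms: 2000
```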
Reduces ./cql_query_test execution time
from
real 2m24.272s
user 0m8.339s
sys 0m10.556s
to
real 1m17.765s
user 0m3.698s
sys 0m11.578
Enable the incremental_backups/--incremental-backups option.
When enabled there will be a hard link created in the
<column family directory>/backup directory for every flushed
sstable.
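For example, in scylla.yaml (the value is the point of the option; the
equivalent command-line flag is --incremental-backups):

```yaml
# Create a hard link under <column family directory>/backup
# for every flushed sstable.
incremental_backups: true
```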
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
Fixes #614
* Use warning threshold from config
* Don't throw exceptions. We're only supposed to warn.
* Try to actually estimate mutation data payload size, not
number of mutations.
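A sketch of the corresponding scylla.yaml knob, assuming the Cassandra-style
name batch_size_warn_threshold_in_kb and an illustrative value:

```yaml
# Log a warning when a batch's mutation payload exceeds this size (in KB).
batch_size_warn_threshold_in_kb: 5
```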
Implement the wait for gossip to settle logic in the bootup process.
CASSANDRA-4288
Fixes:
bootstrap_test.py:TestBootstrap.shutdown_wiped_node_cannot_join_test
1) start node2
2) wait for cql connection with node2 is ready
3) stop node2
4) delete data and commitlog directory for node2
5) start node2
In step 5, I sometimes saw that in node2's shadow round it gets node2's
status as BOOT from other nodes in the cluster instead of NORMAL. The
problem is that we do not wait for gossip to settle before we start the CQL
server; as a result, when we stop node2 in step 3, other nodes in the
cluster have not received node2's status update to NORMAL.
The previous SSL enablement patches do make use of these
options, but they are still marked as Unused.
Change this and also update the db/config.hh documentation
accordingly.
Syntax is now:
client_encryption_options:
enabled: true
certificate: <path-to-PEM-x509-cert> (default conf/scylla.crt)
keyfile: <path-to-PEM-x509-key> (default conf/scylla.key)
Fixes: #756.
Signed-off-by: Benoît Canet <benoit@scylladb.com>
Message-Id: <1452032073-6933-1-git-send-email-benoit@scylladb.com>
* Mark option used
* Make sub-options adapted to seastar-tls usable values (i.e. x509)
Syntax is now:
server_encryption_options:
internode_encryption: <none, all, dc, rack>
certificate: <path-to-PEM-x509-cert> (default conf/scylla.crt)
keyfile: <path-to-PEM-x509-key> (default conf/scylla.key)
truststore: <path-to-PEM-trust-store-file> (default empty,
use system trust)
It is hard-coded as 30 seconds at the moment.
Usage:
$ scylla --ring-delay-ms 5000
Time a node waits to hear from other nodes before joining the ring in
milliseconds.
Same as -Dcassandra.ring_delay_ms in Cassandra.
Backport: CASSANDRA-8801
a53a6ce Decommissioned nodes will not rejoin the cluster.
Tested with:
topology_test.py:TestTopology.decommissioned_node_cant_rejoin_test
With this patch, start two nodes
node 1:
scylla --rpc-address 127.0.0.1 --broadcast-rpc-address 127.0.0.11
node 2:
scylla --rpc-address 127.0.0.2 --broadcast-rpc-address 127.0.0.12
On node 1:
cqlsh> SELECT rpc_address from system.peers;
rpc_address
-------------
127.0.0.12
which means a client should use this address to connect to node 2 for
the CQL and Thrift protocols.