Commit Graph

24863 Commits

Gleb Natapov
1ab262e86b raft: test: change replication_test to submit one entry at a time
replication_test's state machine is not commutative, so if commands are
applied in a different order, the resulting states differ as well. Since
a preemption check was added to co_await in seastar, even waiting for a
ready future can preempt, which causes reordering of simultaneously
submitted entries in debug mode. For a long time we tried to keep entry
submission parallel in the test, but with the above seastar change it
is no longer possible without changing the state machine to be
commutative. The patch changes the test to submit entries one by
one.
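The ordering sensitivity can be illustrated with a toy non-commutative state machine (hypothetical, for illustration only; not the actual replication_test machine):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy non-commutative state machine: each command folds into the state
// in an order-dependent way, so the final state depends on the order in
// which commands are applied.
struct toy_state_machine {
    uint64_t state = 0;
    void apply(uint64_t cmd) { state = state * 31 + cmd; }
};

// Applies the commands in the given order and returns the final state.
uint64_t run(const std::vector<uint64_t>& cmds) {
    toy_state_machine m;
    for (auto c : cmds) {
        m.apply(c);
    }
    return m.state;
}
```

With such a machine, any preemption-induced reordering of parallel submissions changes the result, which is why the test now submits entries serially.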

Message-Id: <20210117095147.GA733394@scylladb.com>
2021-01-20 10:23:43 +02:00
Kamil Braun
1a8630e6a7 transport: silence "broken pipe" and "connection reset by peer" errors
The code already silenced broken pipe exceptions, since they are
expected when the other side closes the connection or when we shut down
the socket during Scylla shutdown, but it wouldn't handle the following:
1. "Connection reset by peer" errors: these can also happen in the
   aforementioned two scenarios; the conditions that determine which of
   the two types of errors occur are unclear.
2. The scenarios would sometimes result in a `seastar::nested_exception`,
   mainly during shutdown. The errors could happen once when trying to send
   a response to a request (`_write_buf.write(...)/flush(...)`) and then
   again when trying to close the connection in a `finally` block. These
   nested exceptions were not silenced.
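The silencing predicate can be sketched as follows (simplified; the function name is illustrative, and the real code additionally has to unwrap seastar::nested_exception):

```cpp
#include <cassert>
#include <cerrno>
#include <system_error>

// Both EPIPE ("broken pipe") and ECONNRESET ("connection reset by peer")
// are expected when the client disconnects or when the server shuts the
// socket down, so neither should be logged as an error.
bool is_expected_disconnect(const std::system_error& e) {
    int err = e.code().value();
    return err == EPIPE || err == ECONNRESET;
}
```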

The commit handles each of these cases.
Closes #7907.

Closes #7931
2021-01-19 10:30:17 +02:00
Tomasz Grabiec
94749b01eb Merge "futurize flat_mutation_reader::next_partition" from Benny
The main motivation for this patchset is to prepare
for adding a async close() method to flat_mutation_reader.

In order to close the reader before destroying it
in all paths we need to make next_partition asynchronous
so it can asynchronously close a current reader before
destroying it, e.g. by reassignment of flat_mutation_reader_opt,
as done in scanning_reader::next_partition.

Test: unit(release, debug)

* git@github.com:bhalevy/scylla.git futurize-next-partition-v1:
  flat_mutation_reader: return future from next_partition
  multishard_mutation_query: read_context: save_reader: destroy reader_meta from the calling shard
  mutation_reader: filtering_reader: fill_buffer: futurize inner loop
  flat_mutation_reader::impl: consumer_adapter: futurize handle_result
  flat_mutation_reader: consume_pausable/in_thread: futurize_invoke consumer
  flat_mutation_reader: FlatMutationReaderConsumer: support also async consumer
  flat_mutation_reader:impl: get rid of _consume_done member
2021-01-19 10:19:03 +02:00
Jenkins
faf71c6f75 release: prepare for 4.5.dev 2021-01-18 16:05:25 +02:00
Avi Kivity
df3ef800c2 Merge 'Introduce load and stream feature' from Asias He
storage_service: Introduce load_and_stream

=== Introduction ===

This feature extends nodetool refresh to allow loading arbitrary sstables
that do not belong to a node into the cluster. It loads the sstables from
disk, calculates the owning nodes of the data, and streams it to the
owners automatically.

For example, say the old cluster has 6 nodes and the new cluster has 3 nodes.
We can copy the sstables from the old cluster to any of the new nodes and
trigger the load and stream process.

This can make restores and migrations much easier.

=== Performance ===

I managed to get 40MB/s per shard on my build machine.
CPU: AMD Ryzen 7 1800X Eight-Core Processor
DISK: Samsung SSD 970 PRO 512GB

Assuming 1TB of sstables per node, 40MB/s per shard, and 32 shards per
node, we can finish loading and streaming 1TB of data in about 13 minutes
on each node:

1 TB / (40 MB/s per shard * 32 shards) = 781 s ≈ 13 min
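As a sanity check of the arithmetic (values taken from the commit message; real throughput varies with hardware and data shape):

```cpp
#include <cassert>

// Back-of-the-envelope check of the quoted load-and-stream numbers.
constexpr double data_mb = 1'000'000.0;  // 1 TB per node, in MB
constexpr double per_shard_mb_s = 40.0;  // measured per-shard rate
constexpr double shards = 32.0;          // shards per node
constexpr double seconds = data_mb / (per_shard_mb_s * shards);  // 781.25 s
constexpr double minutes = seconds / 60.0;                       // ~13 min
```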

=== Tests ===

backup_restore_tests.py:TestBackupRestore.load_and_stream_to_new_cluster_test
which creates a cluster with 4 nodes and inserts data, then uses
load_and_stream to restore to a 2-node cluster.

=== Usage ===

curl -X POST "http://{ip}:10000/storage_service/sstables/{keyspace}?cf={table}&load_and_stream=true"

=== Notes ===

Note that with the old nodetool refresh, the node will not pick up data
that does not belong to it, but it will not delete the data either. One
has to run nodetool cleanup to remove it manually, which is a surprise
to me and probably to users as well. With load and stream, the process
deletes the sstables once streaming finishes, so no nodetool cleanup is
needed.

The name of this feature, load and stream, follows load and store in the
CPU world.

Fixes #7831

Closes #7846

* github.com:scylladb/scylla:
  storage_service: Introduce load_and_stream
  distributed_loader: Add get_sstables_from_upload_dir
  table: Add make_streaming_reader for given sstables set
2021-01-18 15:08:19 +02:00
Avi Kivity
60f5ec3644 Merge 'managed_bytes: switch to explicit linearization' from Michał Chojnowski
This is a revival of #7490.

Quoting #7490:

The managed_bytes class now uses implicit linearization: outside LSA, data is never fragmented, and within LSA, data is linearized on demand, as long as the code is running within a with_linearized_managed_bytes() scope.

We would like to stop linearizing managed_bytes and keep it fragmented at all times, since linearization can require large contiguous chunks. Large contiguous allocations are hard to satisfy and cause latency spikes.

As a first step towards that, we remove all implicitly linearizing accessors and replace them with an explicit linearization accessor, with_linearized().

Some of the linearization happens long before use, by creating a bytes_view of the managed_bytes object and passing it onwards, perhaps storing it for later use. This does not work with with_linearized(), which creates a temporary linearized view, and does not work towards the longer-term goal of never linearizing. As a substitute, a managed_bytes_view class is introduced that acts as a view for managed_bytes (for interoperability it can also be a view for bytes, and it is compatible with bytes_view).

By the end of the series, all linearizations are temporary, within the scope of a with_linearized() call, and can be converted to fragmented consumption of the data at leisure.

This has limited practical value directly, as current uses of managed_bytes are limited to keys (which are limited to 64k). However, it enables converting the atomic_cell layer back to managed_bytes (so we can remove IMR) and the CQL layer to managed_bytes/managed_bytes_view, removing contiguous allocations from the coordinator.
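The explicit-linearization pattern can be sketched with simplified types (fragmented_bytes and std::string here stand in for managed_bytes and LSA fragments; this is not the real managed_bytes API):

```cpp
#include <cassert>
#include <string>
#include <string_view>
#include <vector>

// Simplified stand-in for a fragmented buffer: the payload is split
// across several chunks.
struct fragmented_bytes {
    std::vector<std::string> fragments;
    bool is_fragmented() const { return fragments.size() > 1; }
};

// Explicit linearization accessor in the spirit of with_linearized():
// the contiguous copy exists only for the duration of the callback, so
// callers cannot retain a linearized view past the call.
template <typename Func>
auto with_linearized(const fragmented_bytes& fb, Func&& f) {
    if (!fb.is_fragmented()) {
        return f(fb.fragments.empty() ? std::string_view{}
                                      : std::string_view{fb.fragments[0]});
    }
    std::string tmp;  // temporary contiguous copy, freed on return
    for (const auto& frag : fb.fragments) {
        tmp += frag;
    }
    return f(std::string_view{tmp});
}
```

Because the linearized view is scoped to the callback, each such call site can later be converted to fragmented consumption without changing its interface.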

Closes #7820

* github.com:scylladb/scylla:
  test: add hashers_test
  memtable: fix accounting of managed_bytes in partition_snapshot_accounter
  test: add managed_bytes_test
  utils: fragment_range: add a fragment iterator for FragmentedView
  keys: update comments after changes and remove an unused method
  mutation_test: use the correct preferred_max_contiguous_allocation in measuring_allocator
  row_cache: more indentation fixes
  utils: remove unused linearization facilities in `managed_bytes` class
  misc: fix indentation
  treewide: remove remaining `with_linearized_managed_bytes` uses
  memtable, row_cache: remove `with_linearized_managed_bytes` uses
  utils: managed_bytes: remove linearizing accessors
  keys, compound: switch from bytes_view to managed_bytes_view
  sstables: writer: add write_* helpers for managed_bytes_view
  compound_compat: transition legacy_compound_view from bytes_view to managed_bytes_view
  types: change equal() to accept managed_bytes_view
  types: add parallel interfaces for managed_bytes_view
  types: add to_managed_bytes(const sstring&)
  serializer_impl: handle managed_bytes without linearizing
  utils: managed_bytes: add managed_bytes_view::operator[]
  utils: managed_bytes: introduce managed_bytes_view
  utils: fragment_range: add serialization helpers for FragmentedMutableView
  bytes: implement std::hash using appending_hash
  utils: mutable_view: add substr()
  utils: fragment_range: add compare_unsigned
  utils: managed_bytes: make the constructors from bytes and bytes_view explicit
  utils: managed_bytes: introduce with_linearized()
  utils: managed_bytes: constrain with_linearized_managed_bytes()
  utils: managed_bytes: avoid internal uses of managed_bytes::data()
  utils: managed_bytes: extract do_linearize_pure()
  thrift: do not depend on implicit conversion of keys to bytes_view
  clustering_bounds_comparator: do not depend on implicit conversion of keys to bytes_view
  cql3: expression: linearize get_value_from_mutation() earlier
  bytes: add to_bytes(bytes)
  cql3: expression: mark do_get_value() as static
2021-01-18 11:01:28 +02:00
Asias He
4d32d03172 storage_service: Introduce load_and_stream
=== Introduction ===

This feature extends nodetool refresh to allow loading arbitrary sstables
that do not belong to a node into the cluster. It loads the sstables from
disk, calculates the owning nodes of the data, and streams it to the
owners automatically.

For example, say the old cluster has 6 nodes and the new cluster has 3 nodes.
We can copy the sstables from the old cluster to any of the new nodes and
trigger the load and stream process.

This can make restores and migrations much easier.

=== Performance ===

I managed to get 40MB/s per shard on my build machine.
CPU: AMD Ryzen 7 1800X Eight-Core Processor
DISK: Samsung SSD 970 PRO 512GB

Assuming 1TB of sstables per node, 40MB/s per shard, and 32 shards per
node, we can finish loading and streaming 1TB of data in about 13 minutes
on each node:

1 TB / (40 MB/s per shard * 32 shards) = 781 s ≈ 13 min

=== Tests ===

backup_restore_tests.py:TestBackupRestore.load_and_stream_to_new_cluster_test
which creates a cluster with 4 nodes and inserts data, then uses
load_and_stream to restore to a 2-node cluster.

=== Usage ===

curl -X POST "http://{ip}:10000/storage_service/sstables/{keyspace}?cf={table}&load_and_stream=true"

=== Notes ===

Note that with the old nodetool refresh, the node will not pick up data
that does not belong to it, but it will not delete the data either. One
has to run nodetool cleanup to remove it manually, which is a surprise
to me and probably to users as well. With load and stream, the process
deletes the sstables once streaming finishes, so no nodetool cleanup is
needed.

The name of this feature, load and stream, follows load and store in the
CPU world.

Fixes #7831
2021-01-18 16:32:33 +08:00
Avi Kivity
ab44464911 Revert "docker: remove sshd from the image"
This reverts commit 32fd38f349. Some
tests (in scylla-cluster-tests) depend on it.
2021-01-17 14:34:40 +02:00
Raphael S. Carvalho
00c29e1e24 table: Move notify_bootstrap_or_replace_*() out of line
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20210117045747.69891-9-raphaelsc@scylladb.com>
2021-01-17 10:36:13 +02:00
Asias He
28007f13f8 distributed_loader: Add get_sstables_from_upload_dir
This function scans sstables under the upload directory and returns a
list of sstables for each shard.

Refs #7831
2021-01-16 20:03:17 +08:00
Michał Chojnowski
5b72fb65ae test: add hashers_test
This test is a sanity check. It verifies that our wrappers over well known
hashes (xxhash, md5, sha256) actually calculate exactly those hashes.
It also checks that the `update()` methods of used hashers are linear with
respect to concatenation: that is, `update(a + b)` must be equivalent to
`update(a); update(b)`. This wasn't relied on before, but now we need to
confirm that hashing fragmented keys without linearizing them won't break
backward compatibility.
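The concatenation-linearity property the test checks can be demonstrated with a minimal streaming hasher (FNV-1a here, chosen purely for illustration; the real test exercises the xxhash/md5/sha256 wrappers):

```cpp
#include <cassert>
#include <cstdint>
#include <string_view>

// Minimal streaming hasher. Because the state depends only on the bytes
// fed so far, update() is linear with respect to concatenation:
// update("ab") is equivalent to update("a"); update("b").
struct fnv1a_hasher {
    uint64_t state = 1469598103934665603ULL;  // FNV offset basis
    void update(std::string_view data) {
        for (unsigned char c : data) {
            state ^= c;
            state *= 1099511628211ULL;  // FNV prime
        }
    }
    uint64_t finalize() const { return state; }
};
```

This property is exactly what allows hashing a fragmented key chunk by chunk while producing the same digest as hashing it linearized.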
2021-01-15 18:28:24 +01:00
Michał Chojnowski
85048b349b memtable: fix accounting of managed_bytes in partition_snapshot_accounter
managed_bytes has a small overhead per each fragment. Due to that, managed_bytes
containing the same data can have different total memory usage in different
allocators. The smaller the preferred max allocation size setting is, the more
fragments are needed and the greater total per-fragment overhead is.
In particular, managed_bytes allocated in the LSA could grow in
memory usage when copied to the standard allocator, if the standard allocator
had a preferred max allocation setting smaller than the LSA.

partition_snapshot_accounter calculates the amount of memory used by
mutation fragments in the memtable (where they are allocated with LSA) based
on the memory usage after they are copied to the standard allocator.
This could result in an overestimation, as explained above.
But partition_snapshot_accounter must not overestimate the amount of freed
memory, as doing so might result in OOM situations.

This patch prevents the overaccounting by adding minimal_external_memory_usage():
a new version of external_memory_usage(), which ignores allocator-dependent
overhead. In particular, it includes the per-fragment overhead in managed_bytes
only once, no matter how many fragments there are.
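The accounting difference can be sketched with made-up numbers (the real per-fragment overhead depends on the allocator; these helpers are illustrative, not the managed_bytes implementation):

```cpp
#include <cassert>
#include <cstddef>

// The plain usage grows with the fragment count, while the "minimal"
// variant charges the fixed overhead only once, so it never
// overestimates across allocators with different fragment sizes.
constexpr std::size_t fragment_overhead = 16;  // hypothetical bookkeeping bytes

constexpr std::size_t external_memory_usage(std::size_t data_size,
                                            std::size_t n_fragments) {
    return data_size + n_fragments * fragment_overhead;
}

constexpr std::size_t minimal_external_memory_usage(std::size_t data_size) {
    return data_size + fragment_overhead;  // overhead counted once
}
```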
2021-01-15 18:21:13 +01:00
Michał Chojnowski
d31771c0b2 test: add managed_bytes_test 2021-01-15 18:21:13 +01:00
Michał Chojnowski
72ecbd6936 utils: fragment_range: add a fragment iterator for FragmentedView
A stylistic change. Iterators are the idiomatic way to iterate in C++.
2021-01-15 14:05:44 +01:00
Michał Chojnowski
2e38647a95 keys: update comments after changes and remove an unused method
The comments were outdated after the latest changes (bytes_view vs
managed_bytes_view).
compound_view_wrapper::get_component() is unused, so we remove it.
2021-01-15 14:05:44 +01:00
Piotr Sarna
6ae94d31c1 treewide: remove shared pointer usage from the pager
The pager interface doesn't really need to be virtual,
so the next step could be to remove the need for pointers
entirely, but migrating from shared_ptr to unique_ptr
is a low-hanging fruit.

Message-Id: <a5bdecb17ae58e914da020fb58a41f4574565c66.1610709560.git.sarna@scylladb.com>
2021-01-15 15:03:14 +02:00
Avi Kivity
f20736d93d Merge 'Support unofficial distributions' from Takuya ASADA
Since we introduced the relocatable package and the offline installer, the
scylla binary itself can run on almost any distribution. However, the setup
scripts are not designed to run on unsupported distributions, which causes
errors in such environments. This PR adds minimal support for running the
offline installation on unsupported distributions, tested on SLES, Arch
Linux and Gentoo.

Closes #7858

* github.com:scylladb/scylla:
  dist: use sysconfig_parser to parse gentoo config file
  dist: add package name translation
  dist: support SLES/OpenSUSE
  install.sh: add systemd existence check
  install.sh: ignore error missing sysctl entries
  dist: show warning on unsupported distributions
  dist: drop Ubuntu 14.04 code
  dist: move back is_amzn2() to scylla_util.py
  dist: rename is_gentoo_variant() to is_gentoo()
  dist: support Arch Linux
  dist: make sysconfig directory detectable
2021-01-14 16:59:49 +02:00
Raphael S. Carvalho
97e076365e Fix stalls on Memtable flush by preempting across fragment generation if needed
Flush is facing stalls because partition_snapshot_flat_reader::fill_buffer()
generates mutation fragments until the buffer is full[1] without yielding.

This is the code path:
    flush_reader::fill_buffer()      <---------|
        flat_mutation_reader::consume_pausable()      <--------|
                partition_snapshot_flat_reader::fill_buffer() -|

[1]: https://github.com/scylladb/scylla/blob/6cfc949e/partition_snapshot_reader.hh#L261

This is fixed by breaking the loop in do_fill_buffer() if preemption is needed,
allowing do_until() to yield in sequence, and when it resumes, continue from
where it left off, until buffer is full.

Fixes #7885.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20210114141417.285175-1-raphaelsc@scylladb.com>
2021-01-14 16:30:55 +02:00
Ivan Prisyazhnyy
32fd38f349 docker: remove sshd from the image
implicit revert of 6322293263

sshd was previously used by scylla manager 1.0. The new version does
not need it, so there is no point in keeping it; it also confuses
everyone.

Signed-off-by: Ivan Prisyazhnyy <ivan@scylladb.com>

Closes #7921
2021-01-14 12:52:24 +02:00
Pavel Emelyanov
2b31be0daa client-state,cdc: Remove call for storage_service from permissions check
client_state::check_access() calls the global storage service
to get the features from it and check whether the CDC feature is on.
The latter is needed to perform CDC-specific checks.

However, it was noticed that the feature check is excessive:
all the guarded if-s resolve to false when CDC is off,
so check_access effectively works the same without the
feature check.

With that observation, it's possible to ditch one more global storage
service reference.

tests: unit(dev), dtest(dev, auth)

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Message-Id: <20210105063651.7081-1-xemul@scylladb.com>
2021-01-14 12:52:24 +02:00
Benny Halevy
29002e3b48 flat_mutation_reader: return future from next_partition
To allow it to asynchronously close underlying readers
on next_partition().

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-01-13 17:35:07 +02:00
Benny Halevy
ff931c2ecc multishard_mutation_query: read_context: save_reader: destroy reader_meta from the calling shard
The reader_meta in _readers[shard] is created on shard 0 and must
be destroyed on it as well.

A following patch changes next_partition() to return a future<>
thus it introduces a continuation that requires access to `rm`.

We cannot safely move it down to the continuation, since it would be
wrongly destroyed on the invoked shard, so we use do_with to hold it
in the scope of the calling shard until the invoked function
completes.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-01-13 17:35:07 +02:00
Benny Halevy
75c0c05f71 mutation_reader: filtering_reader: fill_buffer: futurize inner loop
Prepare for futurizing next_partition().

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-01-13 17:35:07 +02:00
Benny Halevy
cd4d082e51 flat_mutation_reader::impl: consumer_adapter: futurize handle_result
Prepare for futurizing next_partition.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-01-13 17:35:07 +02:00
Benny Halevy
d8ae6d7591 flat_mutation_reader: consume_pausable/in_thread: futurize_invoke consumer
To support both sync and async consumers.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-01-13 17:35:07 +02:00
Benny Halevy
fdb3c59e35 flat_mutation_reader: FlatMutationReaderConsumer: support also async consumer
So that consumer_adapter and other consumers in the future
may return a future from consumer().

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-01-13 17:35:07 +02:00
Benny Halevy
515bed90bb flat_mutation_reader:impl: get rid of _consume_done member
It is only used in consume_pausable, that can easily do
without it.

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
2021-01-13 17:35:07 +02:00
Takuya ASADA
7a74f8cd2e dist: use sysconfig_parser to parse gentoo config file
Use sysconfig_parser instead of regex, to improve code readability.
2021-01-13 21:34:23 +09:00
Takuya ASADA
2a4d293841 dist: add package name translation
Translate the CentOS package name to the corresponding package name on
other distributions, so pkg_install() can use a single package name.
2021-01-13 21:27:14 +09:00
Takuya ASADA
0a9843842d dist: support SLES/OpenSUSE
Add support for SLES/OpenSUSE in the setup scripts.
2021-01-13 19:32:46 +09:00
Takuya ASADA
a34edf8169 install.sh: add systemd existence check
The offline installer can run on non-systemd distributions, but it won't
work since we only have systemd units.
So check for systemd's existence and print an error message.
2021-01-13 19:32:45 +09:00
Takuya ASADA
b8c35772b3 install.sh: ignore error missing sysctl entries
Some kernels may not have the specified sysctl parameters, so we should
ignore the error.
2021-01-13 19:32:45 +09:00
Takuya ASADA
e8f74e800c dist: show warning on unsupported distributions
Add warning message on unsupported distributions, for
scylla_cpuscaling_setup and scylla_ntp_setup.
2021-01-13 19:32:45 +09:00
Takuya ASADA
2f344cf50d dist: drop Ubuntu 14.04 code
We don't support Ubuntu 14.04 anymore, so drop the related code.
2021-01-13 19:32:45 +09:00
Takuya ASADA
8e59f70080 dist: move back is_amzn2() to scylla_util.py
Distribution detection functions should be kept in the same place,
so move it back to scylla_util.py.
2021-01-13 19:32:45 +09:00
Takuya ASADA
921b1676c0 dist: rename is_gentoo_variant() to is_gentoo()
is_redhat_variant() is the function to detect RHEL/CentOS/Fedora/OEL,
and is_debian_variant() is the function to detect Debian/Ubuntu.
Unlike these functions, is_gentoo_variant() does not detect "Gentoo variants",
we should rename it to is_gentoo().
2021-01-13 19:32:45 +09:00
Takuya ASADA
fffa8f5ded dist: support Arch Linux
Add support for Arch Linux in the setup scripts.
2021-01-13 19:32:45 +09:00
Takuya ASADA
0d11f9463d dist: make sysconfig directory detectable
Currently, install.sh provides a way to customize the sysconfig directory,
but the directory is hardcoded in the script.
Also, /etc/sysconfig seems to be the correct default value, but the
current code specifies /etc/default for non-redhat distributions.

Instead of hardcoding, generate a python script in install.sh
that saves the specified sysconfig directory path in python code.
2021-01-13 19:32:45 +09:00
Wojciech Mitros
93613e20a3 api: remove potential large allocation in /column_family/ GET request handler
The reply to a /column_family/ GET request contains info about all
column families. Currently, all this info is stored in a single
string when replying, and this string may require a big allocation
when there are many column families.
To avoid that allocation, instead of a single string, use a
body_writer function, which writes chunks of the message content
to the output stream.
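The body_writer approach can be sketched with simplified types (the real code writes to a seastar output stream and emits JSON-serialized column-family info; write_body and output_fn are illustrative names):

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>
#include <string_view>
#include <vector>

// Instead of concatenating the info for every column family into one
// big string, emit one bounded chunk per entry.
using output_fn = std::function<void(std::string_view)>;

void write_body(const std::vector<std::string>& column_families,
                const output_fn& out) {
    out("[");
    for (std::size_t i = 0; i < column_families.size(); ++i) {
        if (i) {
            out(",");
        }
        out(column_families[i]);  // at most one small allocation per entry
    }
    out("]");
}
```

The peak allocation is now bounded by the size of a single entry rather than by the total reply size.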

Fixes #7916

Signed-off-by: Wojciech Mitros <wojciech.mitros@scylladb.com>

Closes #7917
2021-01-13 12:04:18 +02:00
Avi Kivity
ed53b3347e Merge 'idl: remove the large allocation in mutation_partition_view::rows()' from Wojciech Mitros
After these changes the generated code deserializes the stream into a chunked vector, instead of a contiguous one, so even if there are many fields in it, there won't be any big allocations.

I haven't run the scylla cluster test with it yet but it passes the unit tests.

Closes #7919

* github.com:scylladb/scylla:
  idl: change the type of mutation_partition_view::rows() to a chunked_vector
  idl-compiler: allow fields of type utils::chunked_vector
2021-01-13 11:07:29 +02:00
Nadav Har'El
711b311d47 cql-pytest: tests for fromJson() integer overflow
Numbers in JSON are not limited in range, so when the fromJson() function
converts a number to a limited-range integer column in Scylla, this
conversion can overflow. The following tests check that this conversion
should result in an error (FunctionFailure), not silent truncation.

Scylla today does silently wrap around the number, so these tests
xfail. They pass on Cassandra.
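The behavior the tests expect can be sketched with a hypothetical range-checking helper (this is not Scylla's actual fromJson() implementation):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// A JSON number that does not fit the target integer type must be
// rejected rather than silently wrapped around.
std::optional<int32_t> json_number_to_int32(long long v) {
    if (v < INT32_MIN || v > INT32_MAX) {
        return std::nullopt;  // would overflow: report an error instead
    }
    return static_cast<int32_t>(v);
}
```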

Refs #7914.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20210112151041.3940361-1-nyh@scylladb.com>
2021-01-13 11:07:29 +02:00
Nadav Har'El
617e1be1b6 cql-pytest: expand tests for fromJson() failures
This patch adds more (failing) tests for issue #7911, where fromJson()
failures should be reported as a clean FunctionFailure error, not an
internal server error.

The previous tests we had were about JSON parse failures, but a
different type of error we should support is valid JSON which returned
the wrong type - e.g., the JSON returning a string when an integer
was expected, or the JSON returning a string with non-ASCII characters
when ASCII was expected. So this patch adds more such tests. All of
them xfail on Scylla, and pass on Cassandra.

Refs #7911.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20210112122211.3932201-1-nyh@scylladb.com>
2021-01-13 11:07:29 +02:00
Nadav Har'El
2ebe8055ee cql-pytest: add test for fromJson() null parameter.
This patch adds a reproducer test for issue #7912: passing a null
parameter to the fromJson() function is supposed to be legal (and
return a null value), and it is legal in Cassandra, but isn't allowed
in Scylla.

There are two tests - for a prepared and unprepared statement - which
fail in different ways. The issue is still open so the tests xfail on
Scylla - and pass on Cassandra.

Refs #7912.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20210112114254.3927671-1-nyh@scylladb.com>
2021-01-13 11:07:29 +02:00
dgarcia360
78e9f45214 docs: update url
Related issue scylladb/sphinx-scylladb-theme#88

Once this commit is merged, the docs will be published under the new domain name https://scylla.docs.scylladb.com

Frequently asked questions:

    Should we change the links in the README/docs folder?

GitHub automatically handles the redirections. For example, https://scylladb.github.io/sphinx-scylladb-theme/stable/examples/index.html redirects to https://sphinx-theme.scylladb.com/stable/examples/index.html
Nevertheless, it would be great to change URLs progressively to avoid the 301 redirections.

    Do I need to add this new domain in the custom dns domain section on GitHub settings?

It is not necessary. We have already edited the DNS for this domain and the theme programmatically creates the required CNAME file. If everything goes well, GitHub should detect the new URL after this PR is merged.

    The DNS doesn't seem to have the right SSL certificates

GitHub handles the certificate provisioning but is not aware of the subdomain for this repo yet. make multi-version will create a new file, "CNAME". This is published in the gh-pages branch, so GitHub should create the missing cert.

Closes #7877
2021-01-13 11:07:29 +02:00
Avi Kivity
d508a63d4b row_cache: linearize key in cache_entry::do_read()
do_read() does not linearize cache_entry::_key; this can cause a crash
with keys larger than 13k.

Fixes #7897.

Closes #7898
2021-01-13 11:07:29 +02:00
dgarcia360
36f8d35812 docs: added multiversion_regex_builder
Fixed makefile

Added path

Closes #7876
2021-01-13 11:07:29 +02:00
Benny Halevy
5e41228fe8 test: everywhere: use seastar::testing::local_random_engine
Use the thread_local seastar::testing::local_random_engine
in all seastar tests so they can be reproduced using
the --random-seed option.

Test: unit(dev)
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20210112103713.578301-2-bhalevy@scylladb.com>
2021-01-13 11:07:29 +02:00
Benny Halevy
43ab094c88 configure: add utf8_test to pure_boost_tests
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20210112103713.578301-1-bhalevy@scylladb.com>
2021-01-13 11:07:29 +02:00
Dejan Mircevski
d79c2cab63 cql3: Use correct comparator in timeuuid min/max
The min/max aggregators use aggregate_type_for comparators, and the
aggregate_type_for<timeuuid> is regular uuid.  But that yields wrong
results; timeuuids should be compared as timestamps.

Fix it by changing aggregate_type_for<timeuuid> from uuid to timeuuid,
so aggregators can distinguish between the two.  Then specialize the
aggregation utilities for timeuuid.

Add a cql-pytest and change some unit tests, which relied on naive
uuid comparators.
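Why raw uuid ordering misorders timeuuids can be seen from the v1 UUID field layout (field layout per RFC 4122; this is illustrative code, not Scylla's comparator):

```cpp
#include <cassert>
#include <cstdint>

// A v1 (time-based) UUID stores its 60-bit timestamp split across three
// fields, with the *low* 32 bits first. Comparing the raw value orders
// by time_low before time_hi, which can disagree with chronological
// order; the timestamp must be reassembled before comparing.
struct timeuuid_fields {
    uint32_t time_low;
    uint16_t time_mid;
    uint16_t time_hi_and_version;  // top 4 bits are the version
};

uint64_t extract_timestamp(const timeuuid_fields& u) {
    return (uint64_t(u.time_hi_and_version & 0x0fff) << 48)
         | (uint64_t(u.time_mid) << 32)
         | u.time_low;
}
```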

Fixes #7729.

Tests: unit (dev, debug)

Signed-off-by: Dejan Mircevski <dejan@scylladb.com>

Closes #7910
2021-01-13 11:07:29 +02:00
Avi Kivity
96d64b7a1f Merge "Wire interposer consumer for memtable flush" from Raphael
"
Without interposer consumer on flush, it could happen that a new sstable,
produced by memtable flush, will not conform to the strategy invariant.
For example, with TWCS, this new sstable could span multiple time windows,
making it hard for the strategy to purge expired data. If interposer is
enabled, the data will be correctly segregated into different sstables,
each one spanning a single window.

Fixes #4617.

tests:
    - mode(dev).
    - manually tested it by forcing a flush of memtable spanning many windows
"

* 'segregation_on_flush_v2' of github.com:raphaelsc/scylla:
  test: Add test for TWCS interposer on memtable flush
  table: Wire interposer consumer for memtable flush
  table: Add write_memtable_to_sstable variant which accepts flat_mutation_reader
  table: Allow sstable write permit to be shared across monitors
  memtable: Track min timestamp
  table: Extend cache update to operate a memtable split into multiple sstables
2021-01-13 11:07:29 +02:00