Commit Graph

23437 Commits

Tomasz Grabiec
bcdcf06ec7 Merge "lwt: for each statement in cas_request provide a row in CAS result set" from Pavel Solodovnikov
Previously, the batch statement result set included rows only for
those updates which had prefetch data present (i.e. there
was an "old" (pre-existing) row for the key).

Also, these rows were sorted not in the order in which statements
appear in the batch, but in the order of updated clustering keys.

If we have a batch which updates a few non-existent keys, then
it's impossible to figure out which update inserted a new key
by looking at the query response: not only may the responses
fail to correspond to the order of statements in the batch, but
some rows may not show up in the result set at all.

Please see #7113 on Github for detailed description
of the problem:
https://github.com/scylladb/scylla/issues/7113

The patch set proposes the following fix:

For conditional batch statements the result set now always
includes a row for each LWT statement, in the same order
in which individual statements appear in the batch.

This way we can always tell which update did actually insert
a new key or update the existing one.

Technically, the following changes were made:
 * `update_parameters::prefetch_data::row::is_in_cas_result_set`
   member removed as well as the supporting code in
   `cas_request::applies_to` which iterated through cas updates
   and marked individual `prefetch_data` rows as "need to be in
   cas result set".
 * `cas_request::applies_to` substantially simplified, since it
   now does nothing more than check `stmt.applies_to()`
   in a short-circuiting manner.
 * `modification_statement::build_cas_result_set` method moved
   to `cas_request`. This makes it easy to iterate through
   individual `cas_row_update` instances and preserve the order
   of the rows in the result set.
 * A little helper `cas_request::find_old_row`
   is introduced to find a row in `prefetch_data` based on the
   (pk, ck) combination obtained from the current `cas_request`
   and a given `cas_row_update`.
 * A few tests for issue #7113 were written; other lwt-batch-related
   tests were adjusted accordingly.
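The fixed behavior can be sketched in a few lines (a hypothetical Python model, not Scylla's C++ code; all names here are illustrative):

```python
# Hypothetical sketch of the fixed behavior: the CAS result set gets one
# row per LWT statement, in batch order, whether or not an "old"
# (pre-existing) row was prefetched for its key.
def build_cas_result_set(statement_keys, prefetched_rows):
    """statement_keys: (pk, ck) pairs, one per statement, in batch order.
    prefetched_rows: (pk, ck) -> old row, present only for keys that
    existed before the batch ran."""
    result = []
    for key in statement_keys:              # preserve statement order
        old_row = prefetched_rows.get(key)  # None if the key was absent
        result.append({"key": key, "old": old_row})
    return result
```

A batch updating one non-existent and one existing key now yields two rows, in statement order, with `old` set only for the pre-existing key.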
2020-09-04 16:09:45 +02:00
Pavel Solodovnikov
92fd515186 lwt: for each statement in cas_request provide a row in CAS result set
Previously, the batch statement result set included rows only for
those updates which had prefetch data present (i.e. there
was an "old" (pre-existing) row for the key).

Also, these rows were sorted not in the order in which statements
appear in the batch, but in the order of updated clustering keys.

If we have a batch which updates a few non-existent keys, then
it's impossible to figure out which update inserted a new key
by looking at the query response: not only may the responses
fail to correspond to the order of statements in the batch, but
some rows may not show up in the result set at all.

The patch proposes the following fix:

For conditional batch statements the result set now always
includes a row for each LWT statement, in the same order
in which individual statements appear in the batch.

This way we can always tell which update did actually insert
a new key or update the existing one.

`update_parameters::prefetch_data::row::is_in_cas_result_set`
member variable was removed, along with the supporting code in
`cas_request::applies_to` which iterated through cas updates
and marked individual `prefetch_data` rows as "need to be in
cas result set".

`cas_request::applies_to` is now significantly simplified:
it does nothing more than check `stmt.applies_to()` in a
short-circuiting manner.

A few tests for the issue were written; other lwt-batch-related
tests were adjusted to expect a row in the result set for each
statement inside conditional batches.

Tests: unit(dev, debug)

Co-authored-by: Konstantin Osipov <kostja@scylladb.com>
Signed-off-by: Pavel Solodovnikov <pa.solodovnikov@scylladb.com>
2020-09-04 13:13:26 +03:00
Pavel Solodovnikov
feaf2b6320 cas_request: move modification_statement::build_cas_result_set to cas_request
This is just a plain move of the code from `modification_statement`
to `cas_request` without changes in the logic. It will later help
refactor `build_cas_result_set` to include a row for each LWT
statement and order the rows in the order of the statements
in a batch.

Tests: unit(dev, debug)

Signed-off-by: Pavel Solodovnikov <pa.solodovnikov@scylladb.com>
2020-09-04 12:25:06 +03:00
Pavel Solodovnikov
0f0ff73a58 cas_request: extract find_old_row helper function
Factor out a little helper function that finds a pre-existing
row for a given `cas_row_update` (matching the primary key).
It is used in `cas_request::applies_to`.

Will be used in a subsequent patch to move
`modification_statement::build_cas_result_set` into `cas_request`.

Tests: unit(dev, debug)

Signed-off-by: Pavel Solodovnikov <pa.solodovnikov@scylladb.com>
2020-09-04 12:09:31 +03:00
Botond Dénes
e88b8a9a07 docs/debugging.md: document how TLS variables work
We use a lot of TLS variables yet GDB is not of much help when working
with these. So in this patch I document where they are located in
memory, how to calculate the address of a known TLS variable and how to
find (identify) one given an address.

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200903131802.1068288-1-bdenes@scylladb.com>
2020-09-03 17:41:15 +02:00
Avi Kivity
bc77939ada Update seastar submodule
* seastar 7f7cf0f232...4ff91c4c3a (47):
  > core/reactor: complete_timers(): restore previous scheduling group
Fixes #7117.
  > future: Drop spurious 'mutable' from a lambda
  > future: Don't put a std::tuple in future_state
  > future: Prepare for changing the future_state storage type
  > future: Add future_state::get0()
  > when_all: Replace another untuple with get0
  > when_all: Use get0 instead of untuple
  > rpc: Don't assume that a future stores a std::tuple
  > future: Add a futurize_base
  > testing: Stop boost from installing its own signal handler
  > future: Define futurize::value_type with future::value_type
  > future: Move futurize down the file
  > Merge "Put logging onto {fmt} rails" from Pavel E
  > Merge "Make future_state non variadic" from Rafael
  > sharded: propagate sharded instances stop error
  > log: Fix misprint in docs
  > future: Use destroy_at in destructor
  > future: Add static_assert rejecting variadic future_state
  > futures_test: Drop variadic test
  > when_all: Drop a superfluous use of futurize::from_tuples
  > everywhere: Use future::get0 when appropriate
  > net: upgrade sockopt comments to doxygen
  > iostream: document the read_exactly() function
  > net:tls: fix clang-10 compilation duration cast
  > future: Move continuation_base_from_future to future.hh
  > repeat: Drop unnecessary make_tuple
  > shared_future: Don't use future_state_type
  > future: Add a static_assert against variadic futures
  > future: Delete warn_variadic_future
  > rpc_demo: Don't use a variadic future
  > rpc_test: Don't use a variadic future
  > futures_test: Don't use a variadic future
  > future: Move disable_failure_guard to promise::schedule
  > net: add an interface for custom socket options
  > posix: add one more setsockopt overload
  > Merge "Simplify pollfn and its inheritants" from Pavel E
  > util:std-compat.hh: add forward declaration of std::pmr for clang-10
  > rpc: Add protocol::has_handlers() helper
  > Add a Seastar_DEBUG_ALLOCATIONS build option
  > futures_test: Add a test for futures of references
  > future: Simplify destruction of future_state
  > Use detect_stack_use_after_return=1
  > repeat: Fix indentation
  > repeat: Delete try/catch
  > repeat: Simplify loop
  > Avoid call to std::exception_ptr's destructor from a .hh
  > file: Add missing include
2020-09-03 15:56:12 +03:00
Avi Kivity
64c7c81bac Merge "Update log messages to {fmt} rules" from Pavel E
"
Before seastar is updated with the {fmt} engine under the
logging hood, some changes are to be made in scylla to
conform to {fmt} standards.

Compilation and tests checked against both -- old (current)
and new seastar-s.

tests: unit(dev), manual
"

* 'br-logging-update' of https://github.com/xemul/scylla:
  code: Force formatting of pointer in .debug and .trace
  code: Format { and } as {fmt} needs
  streaming: Do not reveal raw pointer in info message
  mp_row_consumer: Provide hex-formatting wrapper for bytes_view
  heat_load_balance: Include fmt/ranges.h
2020-09-03 15:10:09 +03:00
Nadav Har'El
1d06da18fc alternator test: test for the TRIM_HORIZON stream iterator
This patch adds a test for the TRIM_HORIZON option of GetShardIterator in
Alternator Streams. This option asks to fetch *all* the available
history in this stream shard again. We had an implementation for it, but no
test, so this patch adds one. The test passes.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200830131458.381350-1-nyh@scylladb.com>
2020-09-02 18:37:06 +02:00
Nadav Har'El
3d4183863a alternator test: add tests for sequence-number based iterators
Alternator Streams already support the AT_SEQUENCE_NUMBER and
AFTER_SEQUENCE_NUMBER options for iterators. These options allow replaying
a stream of changes from, or after, a known position.
However, we never had a test verifying that these features actually work
as intended, beyond just checking syntax. Having such tests is important
because recently we changed the implementation of these iterators, but
didn't have a test verifying that they still work.

So in this patch we add such tests. The tests pass (as usual, on both
Alternator and DynamoDB).

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200830115817.380075-1-nyh@scylladb.com>
2020-09-02 18:36:59 +02:00
Nadav Har'El
c879b23b82 alternator test: add another passing Alternator Streams test
We had a test, test_streams_last_result, that verifies that after reading
the last event from an Alternator Stream, reading again finds nothing.
But we didn't actually have a test which checks that if at that point a new event
*does* arrive, we can read it. This test checks that case, and it passes (we don't
have a bug there, but it's good as a regression test for NextShardIterator).

This test also verifies that after reading an event for a particular key on a
specific stream "shard", the next event for the same key will arrive on the
same shard.

This test passes on both Alternator and DynamoDB.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200830105744.378790-1-nyh@scylladb.com>
2020-09-02 18:36:50 +02:00
Raphael S. Carvalho
adf576f769 compaction_manager: export method that returns if table has ongoing compaction
A compaction strategy that supports parallel compaction may want to know
whether the table has a compaction running on its behalf before making a
decision. For example, a size-tiered-like strategy may not want to trigger
behavior like cross-tier compaction when there is an ongoing compaction.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20200901134306.23961-1-raphaelsc@scylladb.com>
2020-09-02 16:46:49 +03:00
Botond Dénes
90042746bf scylla-gdb.py: scylla_sstables::filename(): add md format support
Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200902090929.879377-1-bdenes@scylladb.com>
2020-09-02 16:12:17 +03:00
Dejan Mircevski
0c73ac107d cql3: Drop get_partition_key_unrestricted_components
Not used anywhere.

Tests: unit (dev)

Signed-off-by: Dejan Mircevski <dejan@scylladb.com>
2020-09-02 08:14:54 +03:00
Botond Dénes
b3f00685ec scylla-gdb.py: scylla memory: better summary of semaphore memory usage
If available, use the recently added
`reader_concurrency_semaphore::_initial_resources` to calculate the
amount of memory used out of the initially configured amount. If not
available, the summary falls back to the previous mode of just printing
the remaining amount of memory.
Example:
Replica:
  Read Concurrency Semaphores:
    user sstable reads:       11/100,     263621214/     42949672 B, queued: 847
    streaming sstable reads:   0/ 10,             0/     42949672 B, queued: 0
    system sstable reads:      1/ 10,        251584/     42949672 B, queued: 0

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200901091452.806419-1-bdenes@scylladb.com>
2020-09-01 16:26:57 +02:00
Nadav Har'El
52f92b886b alternator streams: fix bug returning the same change again
This patch fixes a bug which caused sporadic failures of the Alternator
test - test_streams.py::test_streams_last_result.

The GetRecords operation reads from an Alternator Streams shard and then
returns an "iterator" from where to continue reading next time. Because
we obviously don't want to read the same change again, we "incremented"
the current position, to start at the incremented position on the next read.

Unfortunately, the implementation of the increment() function wasn't quite
right. The position in the CDC log is a timeuuid, which has a really bizarre
comparison function (see compare_visitor in types.cc). In particular, the
least-significant bytes of the UUID are compared as *signed* bytes. This
means that if the last byte of the UUID was 127 and increment() raised it
to 128, the result was wrong: the comparison function treated 128 as a
signed byte, which is lower than 127, not higher! The result was that with
1/256 probability (whenever the last byte of the position was 127) we
would return an item twice. This was reproduced (with 1/256 probability)
by the test test_streams_last_result, as reported in issue #7004.
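The signed-byte pitfall is easy to demonstrate in isolation (a standalone illustration, not Scylla's timeuuid code):

```python
# The least-significant bytes of a timeuuid compare as *signed* bytes,
# so naively incrementing byte value 127 to 128 moves the position
# *backwards*: 128 reinterpreted as a signed byte is -128.
def as_signed(byte_value: int) -> int:
    return byte_value - 256 if byte_value >= 128 else byte_value

last_byte = 127
incremented = last_byte + 1  # what increment() produced
assert as_signed(incremented) < as_signed(last_byte)  # -128 < 127
```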

The fix in this patch is to drop the increment() and replace it by a flag
whether an iterator is inclusive of the threshold (>=) or exclusive (>).
The internal representation of the iterator has a boolean flag "inclusive",
and the string representation uses the prefixes "I" or "i" to indicate an
inclusive or exclusive range, respectively - whereas before this patch we
always used the prefix "I".

Although increment() could have been fixed to work correctly, the result would
have been ugly because of the weirdness of the timeuuid comparison function.
increment() would also require extensive new unit tests: we were lucky that
the high-level functional tests caught a 1-in-256 error, but they would not
have caught rarer errors (e.g., 1 in 2^32). Furthermore, I am looking at
Alternator as the first "user" of CDC, and seeing how complicated and
error-prone increment() is, we should not recommend this technique to
users - they should use exclusive (>) range queries instead.

Fixes #7004.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200901102718.435227-1-nyh@scylladb.com>
2020-09-01 12:28:39 +02:00
Avi Kivity
37352a73b8 Update tools/python3 submodule
* tools/python3 f89ade5...19a9cd3 (1):
  > dist: redhat: reduce log spam from unpacking sources when building rpm
2020-09-01 12:36:24 +03:00
Pavel Emelyanov
86897aa040 partition_version: Remove dead code
The rows_iterator is no longer in use since 70c72773

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
Message-Id: <20200831191208.18418-1-xemul@scylladb.com>
2020-09-01 10:19:47 +03:00
Raphael S. Carvalho
7f7f366cb5 compaction: add debug msg to inform the amount of expired ssts skipped by compaction
This information is useful when debugging compaction issues that involve
fully expired sstables.

Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <20200828140401.96440-1-raphaelsc@scylladb.com>
2020-08-31 17:18:47 +03:00
Amos Kong
5785947e28 unified/install.sh: set default python3/sysconfdir smartly
Users can set python3 and sysconfdir on the install.sh command line
according to the install mode (root or nonroot) and distro type.

It's helpful to correct the default python3/sysconfdir; otherwise the
setup scripts or scylla-jmx don't work.

Fixes #7130

Signed-off-by: Amos Kong <amos@scylladb.com>
2020-08-31 15:54:51 +03:00
Amos Kong
83d7454787 install.sh: clean tmp scylla.yaml after installation
If install.sh is executed with sudo, the tmp scylla.yaml is owned
by root, making it difficult for non-privileged users to overwrite.

Signed-off-by: Amos Kong <amos@scylladb.com>
2020-08-31 15:54:51 +03:00
Kamil Braun
ff78a3c332 cdc: rename CDC description tables... again
Commit a6ad70d3da changed the format of
stream IDs: the lower 8 bytes were previously generated randomly, but now
some of them carry semantics. In particular, the least significant byte
contains a version (stream IDs might evolve with further releases).

This is a backward-incompatible change: the code won't properly handle
stream IDs with all lower 8 bytes generated randomly. To protect us from
subtle bugs, the code has an assertion that checks the stream ID's
version.

This means that if an experimental user used CDC before the change and
then upgraded, they might hit the assertion when a node attempts to
retrieve a CDC generation with old stream IDs from the CDC description
tables and then decode it.
In effect, the user won't even be able to start a node.

Similarly as with the case described in
d89b7a0548, the simplest fix is to rename
the tables. This fix must get merged in before CDC goes out of
experimental.

Now, if the user upgrades their cluster from a pre-rename version, the
node will simply complain that it can't obtain the CDC generation
instead of preventing the cluster from working. The user will be able to
use CDC after running checkAndRepairCDCStreams.

Since a new table is added to the system_distributed keyspace, the
cluster's schema has changed, so sstables and digests need to be
regenerated for schema_digest_test.
2020-08-31 11:33:14 +03:00
Nadav Har'El
f3dfb6e011 merge: cdc: Remove post-filterings for keys-only/off cdc delta generation
Merged pull request https://github.com/scylladb/scylla/pull/7121
By Calle Wilund:

Refs #7095
Fixes #7128

Both CDC delta!=full modes relied on post-filtering
to remove the generated log row and/or cells. This is inefficient.
Instead, simply check in the visitors whether the data should be
created.

Also removed delta_mode=off mode.

  cdc: Remove post-filterings for keys-only/off cdc delta generation
  cdc: Remove cdc delta_mode::off
2020-08-31 11:22:09 +03:00
Calle Wilund
70a282ced2 cdc: Remove post-filterings for keys-only/off cdc delta generation
Refs #7095

Both CDC delta!=full modes relied on post-filtering
to remove the generated log row and/or cells. This is inefficient.
Instead, simply check in the visitors whether the data should be
created.

v2:
* Fixed delta logs rows created (empty) even when delta == off
v3:
* Killed delta == off
v4:
* Move checks into (const) member var(s)
2020-08-31 07:59:43 +00:00
Calle Wilund
78236c015a cdc: Remove cdc delta_mode::off
Fixes #7128

CDC logs are not useful without at least delta_mode==keys, since
pre/post image data has no info on _what_ was actually done to the
base table in the source mutation.
2020-08-31 07:59:40 +00:00
Etienne Adam
19683d04c6 redis: add hget and hset commands
The hget and hset commands use hashes internally, so they do not
use the existing write_strings() function.

Limitations:
 - hset only supports 3 params, instead of the multiple field/value
list available in the official redis-server.
 - hset should return 0 when the key and field already exist,
but I am not sure it's possible to retrieve this information
without doing a read-before-write, which would not be atomic.

I factored the query_* functions a bit to reduce duplication, but
I am not 100% sure of the naming; it may still be a bit confusing
between the schema used (strings, hashes) and the returned format
(currently only string, but array should come later with hgetall).

Signed-off-by: Etienne Adam <etienne.adam@gmail.com>
Message-Id: <20200830190128.18534-1-etienne.adam@gmail.com>
2020-08-30 22:05:41 +03:00
Takuya ASADA
f1255cb2d0 unified: add uninstall.sh
Provide an uninstaller for offline & nonroot installation.

Fixes #7076
2020-08-29 20:55:06 +03:00
Botond Dénes
f063dc22af scylla-gdb: add scylla compaction-tasks command
Summarize the compaction_manager::task instances. Useful for detecting
compaction-related problems. Example:

(gdb) scylla compaction-task
     2116 type=sstables::compaction_type::Compaction, running=false, "cdc_test"."test_table_postimage_scylla_cdc_log"
      769 type=sstables::compaction_type::Compaction, running=false, "cdc_test"."test_table_scylla_cdc_log"
      750 type=sstables::compaction_type::Compaction, running=false, "cdc_test"."test_table_preimage_postimage_scylla_cdc_log"
      731 type=sstables::compaction_type::Compaction, running=false, "cdc_test"."test_table_preimage_scylla_cdc_log"
      293 type=sstables::compaction_type::Compaction, running=false, "cdc_test"."test_table"
      286 type=sstables::compaction_type::Compaction, running=false, "cdc_test"."test_table_preimage"
      230 type=sstables::compaction_type::Compaction, running=false, "cdc_test"."test_table_postimage"
       58 type=sstables::compaction_type::Compaction, running=false, "cdc_test"."test_table_preimage_postimage"
        4 type=sstables::compaction_type::Compaction, running=true , "cdc_test"."test_table_postimage_scylla_cdc_log"
        2 type=sstables::compaction_type::Compaction, running=true , "cdc_test"."test_table"
        2 type=sstables::compaction_type::Compaction, running=true , "cdc_test"."test_table_preimage_postimage_scylla_cdc_log"
        2 type=sstables::compaction_type::Compaction, running=true , "cdc_test"."test_table_preimage"
        1 type=sstables::compaction_type::Compaction, running=true , "cdc_test"."test_table_preimage_postimage"
        1 type=sstables::compaction_type::Compaction, running=true , "cdc_test"."test_table_scylla_cdc_log"
        1 type=sstables::compaction_type::Compaction, running=true , "cdc_test"."test_table_preimage_scylla_cdc_log"
Total: 5246 instances of compaction_manager::task

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200828135030.689188-1-bdenes@scylladb.com>
2020-08-28 16:00:14 +02:00
Botond Dénes
727e9be342 scylla-gdb.py: scylla sstables: add --histogram option
Allows printing a per-table summary of sstables. Example:
(gdb) scylla sstables --histogram
     2103 "cdc_test"."test_table_postimage_scylla_cdc_log"
      751 "cdc_test"."test_table_preimage_postimage_scylla_cdc_log"
      734 "cdc_test"."test_table_preimage_scylla_cdc_log"
      723 "cdc_test"."test_table_scylla_cdc_log"
      285 "cdc_test"."test_table"
      164 "cdc_test"."test_table_postimage"
      150 "cdc_test"."test_table_preimage"
       55 "cdc_test"."test_table_preimage_postimage"
        1 "system"."clients"
        1 "system"."compaction_history"
        1 "system_auth"."roles"
        1 "system"."peers"
total (shard-local): count=4969, data_file=171953448, in_memory=19195136

Signed-off-by: Botond Dénes <bdenes@scylladb.com>
Message-Id: <20200828091848.673398-1-bdenes@scylladb.com>
2020-08-28 11:36:37 +02:00
Pavel Solodovnikov
88ba184247 paxos: use schema_registry when applying accepted proposal if there is schema mismatch
Try to look up and use schema from the local schema_registry
in case when we have a schema mismatch between the most recent schema
version and the one that is stored inside the `frozen_mutation` for
the accepted proposal.

When such a situation happens, the stored `frozen_mutation` can be
applied only if we are lucky and the column_mapping in the mutation
is "compatible" with the new table schema.

It wouldn't work if, for example, the columns are reordered, or
some columns, which are referenced by an LWT query, are dropped.

With the patch we are able to mitigate these cases as long as the
referenced schema is still present in the node cache (i.e. the node
didn't restart or crash, and the cache entry hasn't aged out and
been evicted).
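The lookup-with-fallback logic can be sketched as follows (hypothetical Python; the names and shapes are illustrative, not Scylla's API):

```python
# Hypothetical sketch: when the mutation's schema version differs from
# the current one, consult the node-local schema registry before
# giving up.
class SchemaMismatch(Exception):
    pass

def schema_for_mutation(mutation_version, current_schema, registry):
    if mutation_version == current_schema["version"]:
        return current_schema
    cached = registry.get(mutation_version)  # node-local cache lookup
    if cached is not None:
        return cached                        # exact column_mapping known
    raise SchemaMismatch(mutation_version)   # evicted, or never seen
```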

Tests: unit(dev, debug), dtest(paxos_tests.schema_mismatch_*_test)

Signed-off-by: Pavel Solodovnikov <pa.solodovnikov@scylladb.com>
Message-Id: <20200827150844.624017-1-pa.solodovnikov@scylladb.com>
2020-08-27 19:04:09 +02:00
Amnon Heiman
68b3ed1c9a storage_service.cc: get_natural_endpoints should translate key
get_natural_endpoints returns the list of nodes holding a key.

There is a variation of the method that takes the key as a string; the
current implementation just casts the string to bytes_view, which does
not work. Instead, this patch changes the implementation to use
from_nodetool_style_string to translate the key (in a nodetool-like
format) to a token.

Fixes #7134

Signed-off-by: Amnon Heiman <amnon@scylladb.com>
2020-08-27 18:25:15 +03:00
Rafael Ávila de Espíndola
d18af34205 everywhere: Use future::get0 when appropriate
This works with current seastar and clears most of the way for
updating to a version that doesn't use std::tuple in futures.

Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Message-Id: <20200826231947.1145890-1-espindola@scylladb.com>
2020-08-27 15:05:51 +03:00
Nadav Har'El
1da3af5420 alternator test: enable a passing test
After issue #7107 was fixed (regarding the correctness of OldImage and NewImage
in Alternator Streams) we forgot to remove the "xfail" tag from one of the tests
for this issue.

This test now passes, as expected, so in this patch we remove the xfail tag.

Refs #7107

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200827103054.186555-1-nyh@scylladb.com>
2020-08-27 14:15:32 +03:00
Nadav Har'El
0faf91f254 docs: fix typo in alternator/getting-started.md
alternator/getting-started.md was missing a grave accent (`) character,
resulting in messed-up rendering of the affected paragraph. Add the
missing quote.

Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20200827110920.187328-1-nyh@scylladb.com>
2020-08-27 14:11:00 +03:00
Piotr Sarna
ca9422ca73 Merge 'Fix view_builder lockup and crash on shutdown' from Pavel
The lockup:

When view_builder starts, all shards at some point reach a
barrier, waiting for each other before passing it. If any shard misses
this checkpoint, all the others are stuck forever. As this barrier
lives inside the _started future, which in turn is waited on during
stop, the stop gets stuck as well.

Reasons to miss the barrier -- an exception in the middle of the
fun^w start, or an explicit abort request while waiting for the
schema agreement.

Fix the "exception" case by resolving the barrier promise with the
exception, and fix the "abort request" case by turning it into
an exception.
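The shape of the "exception" fix can be sketched with a plain future standing in for the barrier promise (illustrative Python, not the Seastar code):

```python
import concurrent.futures

# If one shard fails before reaching the barrier, resolve the barrier
# with the exception so the other shards wake up with an error instead
# of hanging forever.
barrier = concurrent.futures.Future()

def shard_start(ok: bool):
    if ok:
        barrier.set_result(None)  # normal path: release the waiters
    else:
        barrier.set_exception(RuntimeError("start failed"))

shard_start(ok=False)
try:
    barrier.result(timeout=1)     # waiters see the failure promptly
except RuntimeError as e:
    failure = str(e)
```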

The bug can be reproduced by hand by making one shard never
see the schema agreement and keep looping until the abort
request.

The crash:

If the background start-up fails, the _started future resolves
to an exception. view_builder::stop then turns this future into
a real exception, caught and rethrown by main.cc.

It seems wrong that a failure in a background fiber aborts a
regular shutdown that could otherwise proceed.

tests: unit(dev), manual start-stop
branch: https://github.com/xemul/scylla/tree/br-view-builder-shutdown-fix-3
fixes: #7077

Patch #5 leaves the seastar::async() in the first phase of
start(), although it could also be tuned not to produce a thread.
However, there's one more (painless) issue with the _sem usage,
so that change appears too large for this bug-fix
and will come as a follow-up.

* 'br-view-builder-shutdown-fix-3' of git://github.com/xemul/scylla:
  view_builder: Add comment about builder instances life-times
  view_builder: Do sleep abortable
  view_builder: Wakeup barrier on exception
  view_builder: Always resolve started future to success
  view_builder: Re-futurize start
  view_builder: Split calculate_shard_build_step into two
  view_builder: Populate the view_builder_init_state
  view_builder: Fix indentation after previous patch
  view_builder: Introduce view_builder_init_state
2020-08-27 11:51:46 +02:00
Nadav Har'El
95afadfe21 merge: alternator_streams: Include keys in OldImage/NewImage
Merged pull request https://github.com/scylladb/scylla/pull/7063
By Calle Wilund:

Fixes #6935

DynamoDB streams for some reason duplicate the record keys
into both the "Keys" and "OldImage"/"NewImage" sub-objects
when doing GetRecords. But only if there is other data
to include.

This patch appends the pk/ck parts into old/new image
iff we had any record data.

Updated to handle keys-only updates, and distinguish creating vs. updating rows. Changes cdc to not generate preimage
for non-existent/deleted rows, and also fixes missing operations/ttls in keys-only delta mode.

  alternator_streams: Include keys in OldImage/NewImage
  cdc: Do not generate pre/post image for non-existent rows
2020-08-27 11:23:35 +03:00
Pekka Enberg
0f1b54fa6e Update tools/java submodule
* tools/java d6c0ad1e2e...2d49ded77b (1):
  > sstableloader: remove wrong check that breaks range tombstones
2020-08-27 09:05:34 +03:00
Calle Wilund
678ecc7469 alternator_streams: Include keys in OldImage/NewImage
Fixes #6935
Fixes #7107

DynamoDB streams for some reason duplicate the record keys
into both the "Keys" and "OldImage"/"NewImage" sub-objects
when doing GetRecords.

This patch appends the pk/ck parts into old/new image, and
also removes the previous restrictions on image generation
since cdc now generates more consistent pre/post image
data.
2020-08-26 18:14:09 +00:00
Calle Wilund
e50911e5b0 cdc: Do not generate pre/post image for non-existent rows
Fixes #7119
Fixes #7120

If the preimage select came up empty - i.e. the row did not exist, either
because it was never created or was deleted - we should not bother creating
a log preimage row for it, especially since doing so makes the cdc log
harder to interpret.

If an operation in a cdc batch did a row delete (ranged, ck, etc.), do
not generate postimage data, since the row no longer exists.
Note that we differentiate deleting all (non-pk/ck) columns from an actual
row delete.
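The two rules above can be sketched as predicates (hypothetical Python, purely illustrative of the described behavior):

```python
# Sketch of the image-generation rules described above (illustrative):
# - no preimage when the preimage select came up empty (the row never
#   existed, or was deleted earlier);
# - no postimage after a row delete, since the row no longer exists.
def should_emit_preimage(old_row):
    return old_row is not None

def should_emit_postimage(operation):
    return operation != "row_delete"
```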
2020-08-26 18:14:09 +00:00
Pavel Emelyanov
812eed27fe code: Force formatting of pointer in .debug and .trace
... and tests. Printing a pointer in logs is considered bad practice,
so the proposal is to keep it explicit (with fmt::ptr) and allow it
only in the .debug and .trace cases.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2020-08-26 20:44:11 +03:00
Pavel Emelyanov
366b4e8a8f code: Format { and } as {fmt} needs
There are two places that want to print "{<text>}" strings but do not
format the curly braces the {fmt} way.
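{fmt} uses the same brace-escaping rule as Python's str.format, which makes the fix easy to illustrate:

```python
# In both {fmt} and str.format, a literal brace must be doubled
# ('{{' / '}}') so it is not parsed as a replacement field.
formatted = "{{{}}}".format("text")
assert formatted == "{text}"
```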

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2020-08-26 20:44:11 +03:00
Pavel Emelyanov
78f2193956 streaming: Do not reveal raw pointer in info message
Showing raw pointer values in logs is not considered to be good
practice. However, for debugging/tracing this might be helpful.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2020-08-26 20:44:11 +03:00
Pavel Emelyanov
50e3a30dae mp_row_consumer: Provide hex-formatting wrapper for bytes_view
By default {fmt} doesn't know how to format this type (although it is
an instantiation of basic_string_view), and even providing a
formatter/operator<< does not help -- it still hits an earlier assertion
in the args mapper about the disallowed mixing of character types.

A hex wrapper with its own operator<< solves the problem.
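The wrapper idea, transplanted to Python for illustration (the real code is C++ with its own operator<<):

```python
# Sketch of the hex-wrapper idea: a thin wrapper whose string
# conversion renders the underlying bytes as hex, so the formatter
# never has to handle the raw byte string itself.
class HexView:
    def __init__(self, data: bytes):
        self.data = data

    def __str__(self) -> str:
        return self.data.hex()

assert str(HexView(b"\x00\xff")) == "00ff"
```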

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2020-08-26 20:44:11 +03:00
Pavel Emelyanov
fe33e3ed78 heat_load_balance: Include fmt/ranges.h
To provide the vector<> formatter for {fmt}.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2020-08-26 20:44:08 +03:00
Avi Kivity
3daa49f098 Merge "materialized views: Fix undefined behavior on base table schema changes" from Tomasz
"
The view_info object, which is attached to the schema object of the
view, contains a data structure called
"base_non_pk_columns_in_view_pk". This data structure contains column
ids of the base table so is valid only for a particular version of the
base table schema. This data structure is used by materialized view
code to interpret mutations of the base table, those coming from base
table writes, or reads of the base table done as part of view updates
or view building.

The base table schema version of that data structure must match the
schema version of the mutation fragments, otherwise we hit undefined
behavior. This may include aborts, exceptions, segfaults, or data
corruption (e.g. writes landing in the wrong column in the view).

Before this patch, we could get schema version mismatch here after the
base table was altered. That's because the view schema did not change
when the base table was altered.

Another problem was that view building was using the current table's schema
to interpret the fragments and invoke view building. That's incorrect for two
reasons. First, fragments generated by a reader must be accessed only using
the reader's schema. Second, base_non_pk_columns_in_view_pk of the recorded
view ptrs may no longer match the current base table schema, which is used
to generate the view updates.

Part of the fix is to extract base_non_pk_columns_in_view_pk into a
third entity called base_dependent_view_info, which changes both on
base table schema changes and view schema changes.

It is managed by a shared pointer so that we can take immutable
snapshots of it, just like with schema_ptr. When starting the view
update, the base table schema_ptr and the corresponding
base_dependent_view_info have to match. So we must obtain them
atomically, and base_dependent_view_info cannot change during update.

Also, whenever the base table schema changes, we must update
base_dependent_view_infos of all attached views (atomically) so that
it matches the base table schema.
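The snapshot pattern described above can be sketched as follows (a simplified illustration with hypothetical type and member names, not Scylla's actual classes):

```cpp
#include <memory>
#include <utility>
#include <vector>

// Simplified stand-ins for the schema and the extracted dependent info.
struct schema { int version; };
struct base_dependent_view_info {
    std::vector<int> base_non_pk_columns_in_view_pk; // base-table column ids
    int base_schema_version; // must match the schema of the mutation fragments
};

struct view {
    std::shared_ptr<const schema> base_schema;
    std::shared_ptr<const base_dependent_view_info> base_info;

    // Take both pointers in one call so they always form a matching pair,
    // even if the base table is altered while a view update is in flight.
    std::pair<std::shared_ptr<const schema>,
              std::shared_ptr<const base_dependent_view_info>>
    snapshot() const {
        return {base_schema, base_info};
    }

    // On base-table ALTER: publish a fresh, matching pair. Old snapshots
    // stay valid and immutable for in-flight updates via shared ownership.
    void on_base_schema_change(std::shared_ptr<const schema> s,
                               std::shared_ptr<const base_dependent_view_info> i) {
        base_schema = std::move(s);
        base_info = std::move(i);
    }
};
```

The shared-pointer snapshot mirrors how schema_ptr itself is handled: readers keep the version they started with, writers publish a new one.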

Fixes #7061.

Tests:

  - unit (dev)
  - [v1] manual (reproduced using scylla binary and cqlsh)
"

* tag 'mv-schema-mismatch-fix-v2' of github.com:tgrabiec/scylla:
  db: view: Refactor view_info::initialize_base_dependent_fields()
  tests: mv: Test dropping columns from base table
  db: view: Fix incorrect schema access during view building after base table schema changes
  schema: Call on_internal_error() when out of range id is passed to column_at()
  db: views: Fix undefined behavior on base table schema changes
  db: views: Introduce has_base_non_pk_columns_in_view_pk()
2020-08-26 17:37:52 +03:00
Pavel Emelyanov
cf1cb4d145 view_builder: Add comment about builder instances life-times
The barrier passing is tricky and deserves a description
of objects' life-times.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2020-08-26 15:56:38 +03:00
Pavel Emelyanov
643c431ce4 view_builder: Do sleep abortable
If one shard delays in seeing the schema agreement and returns on
abort request, other shards may get stuck waiting for it on the
status read barrier. Luckily with the previous patch the barrier
is exception-proof, so we may abort the waiting loop with an exception
and handle the lock-up.
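The pattern (a sleep that wakes early and throws on abort, instead of blocking the shard) can be sketched with plain threads; seastar's sleep_abortable behaves analogously by resolving the sleep future exceptionally. Names here are illustrative:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <stdexcept>

struct abort_source {
    std::mutex m;
    std::condition_variable cv;
    bool aborted = false;

    void request_abort() {
        { std::lock_guard<std::mutex> g(m); aborted = true; }
        cv.notify_all();
    }
};

// Sleep for `d`, but wake up early and throw if an abort was requested, so a
// retry loop (e.g. waiting for schema agreement) unwinds with an exception
// instead of keeping other shards stuck on a barrier.
inline void sleep_abortable(std::chrono::milliseconds d, abort_source& as) {
    std::unique_lock<std::mutex> lk(as.m);
    as.cv.wait_for(lk, d, [&] { return as.aborted; });
    if (as.aborted) {
        throw std::runtime_error("sleep aborted");
    }
}
```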

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2020-08-26 15:56:38 +03:00
Pavel Emelyanov
c36bbc37c9 view_builder: Wakeup barrier on exception
If an exception pops up during the view_builder::start while some
shards wait for the status-read barrier, these shards are not woken
up, thus causing the shutdown to get stuck.

Fix this by setting an exception on the barrier promise, resolving all
pending and ongoing futures.
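The mechanism (propagating an exception through a barrier so every waiter resolves) can be illustrated with a plain shared_future; seastar's shared_promise::set_exception plays the same role here:

```cpp
#include <future>

// Barrier modeled as a promise plus a shared_future that every shard waits
// on. Setting an exception resolves all pending waits exceptionally instead
// of leaving them blocked during shutdown.
struct barrier {
    std::promise<void> p;
    std::shared_future<void> f = p.get_future().share();

    void pass() { p.set_value(); }                            // normal wakeup
    void fail(std::exception_ptr ep) { p.set_exception(ep); } // wake all, exceptionally
    void wait() { f.get(); } // rethrows if fail() was called
};
```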

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2020-08-26 15:56:38 +03:00
Pavel Emelyanov
8f8ed625ab view_builder: Always resolve started future to success
If the view builder's background start fails, the _started future resolves
to an exceptional state. In turn, stopping the view builder keeps this state
through .finally() and aborts the shutdown very early, while it can and
should proceed.
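The fix's shape (record the start failure on the side, always resolve the started future to success) can be sketched like this; the struct and member names are hypothetical, not the actual view_builder code:

```cpp
#include <exception>
#include <functional>
#include <future>

struct service {
    std::promise<void> started_p;
    std::future<void> started = started_p.get_future();
    std::exception_ptr start_error; // failure recorded separately

    // Run the (possibly failing) start work, but always resolve `started`
    // to success so stop() does not abort the shutdown early.
    void background_start(const std::function<void()>& work) {
        try {
            work();
        } catch (...) {
            start_error = std::current_exception();
        }
        started_p.set_value();
    }

    void stop() {
        started.get(); // never throws now; shutdown can proceed
    }
};
```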

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2020-08-26 15:56:38 +03:00
Pavel Emelyanov
60e21bb59a view_builder: Re-futurize start
Step two in turning view_builder::start() into a chain of lambdas --
rewrite (most of) the seastar::async() lambda into a more "classical"
continuation form.

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2020-08-26 15:56:38 +03:00
Pavel Emelyanov
77c7d94f85 view_builder: Split calculate_shard_build_step into two
calculate_shard_build_step() has a cross-shard barrier in the middle, and
passing the barrier is broken w.r.t. exceptions that may happen before it.
The intention is to prepare this barrier passing for exception handling by
turning view_builder::start() into a dedicated chain of continuations.

Step one in this campaign -- split the calculate_shard_build_step() into
steps called by view_builder::start():

 - before the barrier
 - barrier
 - after the barrier
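The resulting shape can be sketched synchronously (the real steps return seastar futures chained from start(); these names are hypothetical):

```cpp
struct build_step { int progress = 0; };

// Hypothetical, simplified step functions standing in for the pieces
// that calculate_shard_build_step() was split into.
build_step before_barrier() { return build_step{1}; }
void barrier(build_step&) { /* cross-shard synchronization point */ }
build_step after_barrier(build_step s) { s.progress += 1; return s; }

// start() now invokes the three pieces explicitly, so an exception thrown
// before (or at) the barrier surfaces to start()'s caller in one place.
build_step start() {
    auto s = before_barrier();
    barrier(s);
    return after_barrier(s);
}
```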

Signed-off-by: Pavel Emelyanov <xemul@scylladb.com>
2020-08-26 15:56:38 +03:00