This commit declares shared_ptr<user_types_metadata> in
database.hh, where user_types_metadata is an incomplete type, so it
requires
"Allow to use shared_ptr with incomplete type other than sstable"
to compile correctly.
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
When seastar/core/shared_ptr_incomplete.hh is included in a header,
it causes problems with all declarations of shared_ptr<T> with an
incomplete type T that end up in the same compilation unit.
The problem happens when a compilation unit includes two headers,
a.hh and b.hh, such that a.hh includes
seastar/core/shared_ptr_incomplete.hh and b.hh declares
shared_ptr<T> with an incomplete type T. At the same time this
compilation unit does not use the declared shared_ptr<T>, so it
should compile and work, but it does not, because
shared_ptr_incomplete.hh is included and it forces instantiation of:
template <typename T>
T*
lw_shared_ptr_accessors<T, void_t<decltype(lw_shared_ptr_deleter<T>{})>>::to_value(lw_shared_ptr_counter_base* counter) {
    return static_cast<T*>(counter);
}
for each declared shared_ptr<T> with incomplete type T, even the ones
that are never used.
The following commit "Decouple database.hh from types/user.hh"
moves the user_types_metadata type out of database.hh and instead
declares shared_ptr<user_types_metadata> in database.hh where
user_types_metadata is incomplete. Without this commit
the compilation of the following one fails with:
In file included from ./sstables/sstables.hh:34,
from ./db/size_estimates_virtual_reader.hh:38,
from db/system_keyspace.cc:77:
seastar/include/seastar/core/shared_ptr_incomplete.hh: In
instantiation of ‘static T*
seastar::internal::lw_shared_ptr_accessors<T,
seastar::internal::void_t<decltype
(seastar::lw_shared_ptr_deleter<T>{})>
>::to_value(seastar::lw_shared_ptr_counter_base*) [with T =
user_types_metadata]’:
seastar/include/seastar/core/shared_ptr.hh:243:51: required from
‘static void seastar::internal::lw_shared_ptr_accessors<T,
seastar::internal::void_t<decltype
(seastar::lw_shared_ptr_deleter<T>{})>
>::dispose(seastar::lw_shared_ptr_counter_base*) [with T =
user_types_metadata]’
seastar/include/seastar/core/shared_ptr.hh:300:31: required from
‘seastar::lw_shared_ptr<T>::~lw_shared_ptr() [with T =
user_types_metadata]’
./database.hh:1004:7: required from ‘static void
seastar::internal::lw_shared_ptr_accessors_no_esft<T>::dispose(seastar::lw_shared_ptr_counter_base*)
[with T = keyspace_metadata]’
seastar/include/seastar/core/shared_ptr.hh:300:31: required from
‘seastar::lw_shared_ptr<T>::~lw_shared_ptr() [with T =
keyspace_metadata]’
./db/size_estimates_virtual_reader.hh:233:67: required from here
seastar/include/seastar/core/shared_ptr_incomplete.hh:38:12: error:
invalid static_cast from type ‘seastar::lw_shared_ptr_counter_base*’
to type ‘user_types_metadata*’
return static_cast<T*>(counter);
^~~~~~~~~~~~~~~~~~~~~~~~
[131/415] CXX build/release/distributed_loader.o
Signed-off-by: Piotr Jastrzebski <piotr@scylladb.com>
Currently nop_large_partition_handler is only used in tests, but it
can also be used to avoid self-reporting.
Tests: unit(Release)
I also tested starting scylla with
--compaction-large-partition-warning-threshold-mb=0.
Signed-off-by: Rafael Ávila de Espíndola <espindola@scylladb.com>
Message-Id: <20190123205059.39573-1-espindola@scylladb.com>
"
This series is a first small step towards rewriting
CQL restrictions layer. Primary key restrictions used to be
a template that accepts either partition_key or clustering_key,
but the implementation is already based on virtual inheritance,
so in multiple cases these templates need specializations.
Refs #3815
"
* 'detemplatize_primary_key_restrictions_2' of https://github.com/psarna/scylla:
cql3: alias single_column_primary_key_restrictions
cql3: remove KeyType template from statement_restrictions
cql3: remove template from primary_key_restrictions
cql3: remove forwarding_primary_key_restrictions
In preparation for detemplatizing this class, it's aliased with
single_column_partition_key_restrictions and
single_column_clustering_key_restrictions accordingly.
Partition key restrictions and clustering key restrictions
currently require virtual function specializations and have
lots of distinct code, so there's no value in having
primary_key_restrictions<KeyType> template.
libdeflate's build places some object files in the source directory, which is
shared between the debug and release builds. If the same object file (for the
two modes) is written concurrently, or if one mode reads it while the other
writes it, it will be corrupted.
Fix by not building the executables at all. They aren't needed, and we already
placed the libraries' objects in the build directory (which is unshared). We only
need the libraries anyway.
Fixes#4130.
Branches: master, branch-3.0
Message-Id: <20190123145435.19049-1-avi@scylladb.com>
Commit fd422c954e aimed to fix
issue #3803. In that issue, if a query SELECTed only certain columns but
did filtering (ALLOW FILTERING) over other unselected columns, the filtering
didn't work. The fix involved adding the columns being filtered to the set
of columns we read from disk, so they can be filtered.
But that commit included an optimization: If you have clustering keys
c1 and c2, and the query asks for a specific partition key and c1 < 3 and
c2 > 3, the "c1 < 3" part does NOT need to be filtered because it is already
done as a slice (a contiguous read from disk). The committed code erroneously
concluded that both c1 and c2 don't need to be filtered, which was wrong
(c2 *does* need to be read and filtered).
In this patch, we fix this optimization. Previously, we used the "prefix
length", which in the above example was 2 (both c1 and c2 were filtered),
but we need a new and more elaborate function,
num_prefix_columns_that_need_not_be_filtered(), to determine that we can
only skip filtering for the first column (c1) and not the second (c2).
Fixes#4121. This patch also adds a unit test to confirm this.
Signed-off-by: Nadav Har'El <nyh@scylladb.com>
Message-Id: <20190123131212.6269-1-nyh@scylladb.com>
If docker sees the Dockerfile hasn't changed it may reuse an old image, not
caring that context files and dependent images have in fact changed. This can
happen for us if install-dependencies.sh or the base Fedora image changed.
To make sure we always get a correct image, add --no-cache to the build command.
Message-Id: <20190122185042.23131-1-avi@scylladb.com>
Done in a separate step so we can update the toolchain first.
dnf-utils is used to bring us repoquery, which we will use to derive the
list of files in the python packages.
patchelf is needed so we can add a DT_RUNPATH section to the interpreter
binary.
The python modules, as well as the python3 interpreter, are taken from
the current RPM spec file.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
[avi: regenerate frozen toolchain image]
Message-Id: <20190123011751.14440-1-glauber@scylladb.com>
"
This series prepares for the integration of the `master` branch of
Seastar back into Scylla.
A number of changes to the existing build are necessary to integrate
Seastar correctly, and these are detailed in the individual change
messages.
I tested with and without DPDK, in release and debug mode.
The actual switch is a separate patch.
"
* 'jhk/seastar_cmake/v4' of https://github.com/hakuch/scylla:
build: Fix link order for DPDK
tests: Split out `sstable_datafile_test`
build: Remove unnecessary inclusion
tests: Fix use-after-free errors in static vars
build: Remove Seastar internals
build: Only use Seastar flags from pkg-config
build: Query Seastar flags using pkg-config
build: Change parameters for `pkg_config` function
"
Cache cf mappings when we break in the middle of sending a segment, so
that the sender has them the next time it resumes sending this segment
from where it left off.
Also add a "discarded" metric so that we can track hints that are being
discarded in the send flow.
"
Fixes#4122
* 'hinted_handoff_cache_cf_mappings-v1' of https://github.com/vladzcloudius/scylla:
hinted handoff: cache column family mappings for segments that were not sent out in full
hinted handoff: add a "discarded" metric
Each `*_test.cc` file must be compiled separately so that there is only
one definition of `main`.
This change correctly defines an independent `sstable_datafile_test`
from `sstable_datafile_test.cc` and adds that test to the existing
suite.
We don't need to re-specify Seastar internals in Scylla's build, since
everything private to Seastar is managed via pkg-config.
We can eliminate all references to ragel and generated ragel header
files from Seastar.
We can also simplify the dependence on generated Seastar header files by
ensuring that all object files depend on Seastar being built first.
Some Seastar-specific flags were manually specified as Ninja rules, but
we want to rely exclusively on Seastar for its necessary flags.
The pkg-config file generated by the latest version of Seastar is
correct and allows us to do this, but the version generated by Scylla's
current check-out of Seastar does not. Therefore, we have to manually
adjust the pkg-config results temporarily until we update Seastar.
Previously, we manually parsed the pkg-config file. We now use
pkg-config itself to get the correct build flags.
This means that we will get the correct behavior for variable expansion,
and fields like `Requires`, `Requires.private`, and `Libs.private`.
Previously, these fields were ignored.
We will try to send a particular segment again later (in 1s) from the
place where we left off if it wasn't sent out in full before. However,
we may miss some of the column family mappings when we get back to
sending this file and start from some entry in the middle of it (where
we left off) if we didn't save the column family mappings we cached
while reading this segment from its beginning.
This happens because the commitlog doesn't save column family
information in every entry but rather once for each unique column
family (version) per "cycle" (see the commitlog::segment description
for more info).
Therefore we have to assume that a particular column family mapping
appears only once in the whole segment (worst case). And therefore,
when we decide to resume sending a segment, we need to keep the column
family mappings we have accumulated so far and drop them only after we
are done with this particular segment (sent it out in full).
Fixes#4122
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
Account for the number of hints that were discarded in the send path.
This may happen, for instance, due to a schema change or because a hint
is too old.
Signed-off-by: Vlad Zolotarov <vladz@scylladb.com>
"
gc_clock's duration is currently 32 bits wide, so its time_points
wrap around on 2038-01-19 03:14:07 UTC. Such dates are valid deletion
times starting 2018-01-19, given the 20-year maximum ttl.
This patchset extends gc_clock::duration::rep to int64_t and adds
respective unit tests for the max_ttl cases.
Fixes#3353
Tests: unit (release)
"
* 'projects/gc_clock_64/v2' of https://github.com/bhalevy/scylla:
tests: cql_query_test add test_time_overflow
gc_clock: make 64 bit
sstables: mc: use int64_t for local_deletion_time and ttl
sstables: add capped_tombstone_deletion_time stats counter
sstables: mc: cap partition tombstone local_deletion_time to max
sstables: add capped_local_deletion_time stats counter
sstables: mc: metadata collector: cap local_deletion_time at max
sstables: mc: use proper gc_clock types for local_deletion_time and ttl
db: get default_time_to_live as int32_t rather than gc_clock::rep
sstables: safely convert ttl and local_deletion_time to int32_t
sstables: mc: move liveness_info initialization to members
sstables: mc: move parsing of liveness_info deltas to data_consume_rows_context_m
sstables: mc: define expired_liveness_ttl as signed int32_t
sstables: mc: change write_delta_deletion_time to receive tombstone rather than deletion_time
sstables: mc: use gc_clock types for writing delta ttl and local_deletion_time
Commit 019a2e3a27 marked some arguments as required, which improved
the usability of scylla_setup.
The problem is that when we call scylla_setup in interactive mode,
no argument should be required. After the aforementioned commit
scylla_setup will either complain that the required arguments were
not passed if zero arguments are present, or skip interactive mode
if one of the mandatory ones is present.
This patch fixes that by checking whether or not we were invoked with
no command line arguments and lifting the requirements for mandatory
arguments in that case.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <20190122003621.11156-1-glauber@scylladb.com>
The sstable format defines the deletion_time struct with an int32_t
local_deletion_time field that cannot hold large time values. Cap
local_deletion_time to max_local_deletion_time and log a warning when
that happens.
This corresponds to Cassandra's MAX_DELETION_TIME.
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
The max local_deletion_time tracker in stats is int32_t, so just track
the limit of (max int32_t - 1) when the time_point is greater than the
limit.
This corresponds to Cassandra's MAX_DELETION_TIME.
Refs #3353
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
The mc format only writes a delta local_deletion_time for tombstones.
A conventional deletion_time is written only for the partition header.
Restructure the code to pass a tombstone to write_delta_deletion_time
rather than struct deletion_time to prepare for using 64-bit deletion times.
The tombstone uses gc_clock::time_point while struct
deletion_time is limited to int32_t local_deletion_time.
Note that for "live" tombstones we encode <api::missing_timestamp,
no_deletion_time> as was previously evaluated by to_deletion_time().
Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
When the reclaim request was satisfied from the pool there's no need
to call compact_and_evict_locked(). This allows us to avoid calling
boost::range::make_heap(), which is a tiny performance difference, as
well as some confusing log messages.
Message-Id: <1548091941-8534-1-git-send-email-tgrabiec@scylladb.com>
We can invoke pkg-config with multiple options, and we specify the
package name first since this is the "target" of the pkg-config query.
Supporting multiple options is necessary for querying Seastar's
pkg-config file with `--static`, which we anticipate in a future change.
The system won't work properly if IOTune is not run. While it is fair
to skip this step because it takes long (indeed, it is common to
provision io.conf manually precisely to skip it), first-time users
don't know this and can get the impression that this is a totally
optional step, even though the node won't boot up without it.
As a user nicely put it recently on our mailing list:
"...in this case, it would be even simpler to forbid answering "no"
to this not-so-optional step :)"
We should not forbid saying no to IOTune, but we should warn the user
about the consequences of doing so.
Fixes#4120
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Message-Id: <20190121144506.17121-1-glauber@scylladb.com>