This is meant to resolve the dependency loop between token_metadata.hh
and system_keyspace.hh.
Signed-off-by: Vlad Zolotarov <vladz@cloudius-systems.com>
One of the find_schema variants calls find_uuid(), which throws out_of_range
without converting it to no_such_column_family first. Because of the
exception specifications involved, this results in std::terminate() being
called.
Fix by converting the exception.
"When compressor_parameters was introduced it only performed properties
validation, but wasn't properly wired into the rest of the code, and the
compression information never made it into the final schema object.
This patchset changes that: compression parameters are now correctly processed
by the schema builder, as well as written to and read from the system tables.
The updated test case makes sure not only that incorrect values are rejected
during validation, but also that correct values really are set in the created
schema."
"V2: Change how the information is gathered from the CPUs. As a result of the
change, each function call in the parallel_for_each holds its own copy of the
ID; accordingly, the get_collectd_value method was changed to take a const
reference, to prevent the redundant creation of a shared_ptr from the local
copy.
This series adds an API for collectd. After applying the series, collectd
will be available from the API."
Check whether the created tables actually have the appropriate
compression parameters set.
Signed-off-by: Paweł Dziepak <pdziepak@cloudius-systems.com>
This function isn't called by anything; all schema creation logic should
be in apply_to_builder().
Signed-off-by: Paweł Dziepak <pdziepak@cloudius-systems.com>
close() is a blocking call, so it must be called in the I/O thread, not
the main reactor thread. To do that, we need a file::close() method that
can return a future.
Closing a file may block, and it can also expose latent errors that had no
earlier opportunity to be reported. While both are unlikely in the case of
O_DIRECT files, better not to risk it.
Following Nadav's discovery of the problem with large writes to output streams,
it turns out that compressed_file_output_stream also needs the trim_to_size
option enabled. Otherwise, a write to compressed_file_output_stream larger than
_size would result in a buffer larger than the chunk size being flushed, which
is definitely wrong.
Signed-off-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
Reviewed-by: Nadav Har'El <nyh@cloudius-systems.com>
The file_data_sink_impl::put() code assumes it is always called on buffers
whose size is a multiple of the DMA alignment (4096), except the *last* one.
After writing one buffer of unaligned size, further writes cannot continue,
because the offset into the file no longer has the right alignment! If a
caller does try to do that, the bug is in the caller (it's a design bug, not
a run-time error), and it is better to discover it quickly with an assert,
as I do in this patch.
I had such a caller in an example application, and it took me a whole day
of debugging just to figure out that this is where the caller actually had
a bug.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Reviewed-by: Raphael S. Carvalho <raphaelsc@cloudius-systems.com>
Our file-output code contains various layers making sometimes contradictory
assumptions, and it is a real art-form to make it all work together.
They usually do work together well, but there was one undetected bug for
large writes to a file output stream:
The problem is what happens when we try to write a large buffer (larger
than the output stream's buffer) in one output_stream::write() call.
By default, output_stream uses the efficient, zero-copy, implementation
which calls the underlying data sink's put function on the entire written
buffer, without copying it to the stream's buffer first.
Unfortunately, this solution does NOT work on *file* output streams.
Because of our use of AIO and O_DIRECT, we can only write from aligned
buffers, and at aligned (multiple of dma_alignment) sizes. Even a large
write cannot be fully performed if its size is not a multiple of
dma_alignment, and the need to align the buffers, together with data already
buffered in the output_stream, complicates things further.
Amazingly, we already had an option, "_trim_to_size", in output_stream that
does the right thing; we just need to enable it for file output streams.
In special cases (aligned position, aligned input buffer) it might be
possible to do something even more efficient - zero copy and just one
write request - but in the general case, _trim_to_size is exactly what
we needed.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
When creating a file object, we call fstat() to determine whether it's
a block device or a regular file. While unlikely, the fstat() call can
block. Use an ioctl() that we expect to fail on regular files instead.
After the implementation of the code that uses the scollectd API was
modified, get_collectd_value takes the collectd ID as a const
reference, to remove the unnecessary creation of a shared_ptr.
Signed-off-by: Amnon Heiman <amnon@cloudius-systems.com>
Add a use_count() method for shared_ptr, like std::shared_ptr has.
We already had such a method for lw_shared_ptr, but not for shared_ptr.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
A base class with virtual functions should also have a virtual destructor,
so that if someone deletes an object through a base-class pointer, the
concrete class's destructor will be called.
I thought this missing virtual destructor was to blame for a bug I was
hunting; it's not, but it's still worth adding the missing definition.
The seemingly silly "default" definition of the move constructor is also
necessary: when you define the destructor explicitly, the compiler no longer
implicitly defines the move constructor for you.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Convert fileiotest.cc to a Boost test case, making it easier to see what
is being tested and to add more file I/O tests.
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>
Our sstable code currently has a bug (not solved by this patch) in writing
large summary files: several aio write operations are issued, and one of
them fails with EINVAL.
Unfortunately and inexplicably, sstable::write_simple simply *hides* this
exception (catches it and ignores it), so the writer never knows the write
failed, and we only get an exception later when sstable::write_components()
tries to load() the sstable it just created.
So in this patch, I remove the hiding of the exception, and now when writing
an sstable with 1,000,000 partitions, I see this in the output:
failed to write sstable: Invalid argument
Signed-off-by: Nadav Har'El <nyh@cloudius-systems.com>