Not all the IDLs are used by the messaging service. This patch removes
the auto-generated single include file that holds all the files and
replaces it with individual includes of the generated files.
The patch does the following:
* It removes the auto-generated inc file and cleans configure.py
of it.
* It places an explicit include for each generated file in
messaging_service.
* It adds a dependency of the generated code on the idl-compiler, so a
change in the compiler will trigger regeneration of the generated files.
Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1453900241-13053-1-git-send-email-amnon@scylladb.com>
I saw the following Boost format string related warning during commitlog
replay:
INFO [shard 0] commitlog_replayer - Replaying node3/commitlog/CommitLog-1-72057594289748293.log, node3/commitlog/CommitLog-1-90071992799230277.log, node3/commitlog/CommitLog-1-108086391308712261.log, node3/commitlog/CommitLog-1-251820357.log, node3/commitlog/CommitLog-1-54043195780266309.log, node3/commitlog/CommitLog-1-36028797270784325.log, node3/commitlog/CommitLog-1-126100789818194245.log, node3/commitlog/CommitLog-1-18014398761302341.log, node3/commitlog/CommitLog-1-126100789818194246.log, node3/commitlog/CommitLog-1-251820358.log, node3/commitlog/CommitLog-1-18014398761302342.log, node3/commitlog/CommitLog-1-36028797270784326.log, node3/commitlog/CommitLog-1-54043195780266310.log, node3/commitlog/CommitLog-1-72057594289748294.log, node3/commitlog/CommitLog-1-90071992799230278.log, node3/commitlog/CommitLog-1-108086391308712262.log
WARN [shard 0] commitlog_replayer - error replaying: boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::io::too_many_args> > (boost::too_many_args: format-string referred to less arguments than were passed)
While inspecting the code, I noticed that one of the error loggers is
missing an argument. As I don't know how the original failure was
triggered, I wasn't able to verify that this was the only one, though.
Message-Id: <1453893301-23128-1-git-send-email-penberg@scylladb.com>
The main motivation behind this change is to make it easier for
consumers of get_ranges() to work with the returned ranges, e.g. to
binary search for the range in which a token is contained. In addition,
a wrap-around range introduces corner cases, so we should avoid it
altogether.
Suppose that a node owns three tokens: -5, 6, 8
get_ranges() would return the following ranges:
(8, -5], (-5, 6], (6, 8]
get_ranges() will now return the following ranges:
(-inf, -5], (-5, 6], (6, 8], (8, +inf)
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
Message-Id: <4bda1428d1ebbe7c8af25aa65119edc5b97bc2eb.1453827605.git.raphaelsc@scylladb.com>
We register engine().at_exit() callbacks when we initialize the services. We
do not really call the callbacks at the moment due to #293.
It is pretty hard to see the whole picture of the order in which the
services are shut down. Instead of having each service register its own
at_exit() callback, I propose to have a single at_exit() callback which
does the shutdown for all the services. In Cassandra, the shutdown work
is done in the storage_service::drain_on_shutdown callback.
In this patch, drain_on_shutdown is executed during shutdown.
As a result, the proper gossip shutdown is executed, fixing #790.
With this patch, when Ctrl-C is pressed on a node, it looks like:
INFO [shard 0] storage_service - Drain on shutdown: starts
INFO [shard 0] gossip - Announcing shutdown
INFO [shard 0] storage_service - Node 127.0.0.1 state jump to normal
INFO [shard 0] storage_service - Drain on shutdown: stop_gossiping done
INFO [shard 0] storage_service - CQL server stopped
INFO [shard 0] storage_service - Drain on shutdown: shutdown rpc and cql server done
INFO [shard 0] storage_service - Drain on shutdown: shutdown messaging_service done
INFO [shard 0] storage_service - Drain on shutdown: flush column_families done
INFO [shard 0] storage_service - Drain on shutdown: shutdown commitlog done
INFO [shard 0] storage_service - Drain on shutdown: done
The time a node waits after sending the gossip shutdown message, in milliseconds.
Reduces ./cql_query_test execution time
from
real 2m24.272s
user 0m8.339s
sys 0m10.556s
to
real 1m17.765s
user 0m3.698s
sys 0m11.578
row_cache::update() does not explicitly invalidate the entries it failed
to update in case of a failure. This could lead to inconsistency between
row cache and sstables.
In practice that's not a problem, because before row_cache::update()
fails it will have caused all entries in the cache to be invalidated
during memory reclaim, but it's better to be safe and explicitly remove
the entries that should have been updated but could not be.
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Message-Id: <1453829681-29239-1-git-send-email-pdziepak@scylladb.com>
The API needs to be available at an early stage of the initialization,
on the other hand not all the specific APIs are available at that time.
This patch breaks the API initialization into stages, in each stage
additional commands will be available.
While doing that, the API header file was split into api_init.hh, which
is relevant to main, and api.hh, which holds the various API helper
functions.
Fixes #754
Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1453822331-16729-2-git-send-email-amnon@scylladb.com>
The last series accidentally broke batch mode.
With the new, fancy, potentially blocking ways, we need to treat
batch mode differently, since in this case sync should always
come _after_ alloc-write.
The previous patch caused an infinite loop and broke Jenkins.
Message-Id: <1453821077-2385-1-git-send-email-calle@scylladb.com>
Also check the closed status in allocate, since waiting on the alloc
queue could lead to us re-allocating in a segment that gets closed
between queue entry and the continuation running.
Message-Id: <1453811471-1858-1-git-send-email-calle@scylladb.com>
Invoking sstable_test with "./test.py --name sstable_test --mode
release --jenkins a"
ran ... --log_sink=a.release.sstable_test -c1.boost.xml", which caused
the test to fail with error code -11; fix that.
In addition, the Boost test printout was bad; fix that as well.
Signed-off-by: Shlomi Livne <shlomi@scylladb.com>
Message-Id: <3af8c4b55beae673270f5302822d7b9dbba18c0f.1453809032.git.shlomi@scylladb.com>
"Adds flush + write thresholds/limits that, when reached, cause
operations to wait before being issued.
Write ops waiting also causes further allocations to queue up,
i.e. limiting throughput.
Adds getters for some useful "backlog" measurements:
* Pending (ongoing) writes/flush
* Pending (queued, waiting) allocations
* Number of times the write/flush threshold has been exceeded (i.e. waits occurred)
* Finished, dirty segments
* Unused (preallocated) segments"
* seastar 97f418a...bdb273a (6):
> rpc: alias rpc::type to boost::type
> Fix warning_supported to properly work with Clang
> rpc: change 'overflow' to 'underflow' in input stream processing
> rpc: log an error that caused connection to be closed.
> rpc: clarify deserialization error message
> rpc: do not append new line in a logger
Configured on start (for now - and dummy values at that).
When the per-shard write/flush count reaches the limit, any incoming ops
will queue until previous ones finish.
Consequently, if an allocation op forces a write, which blocks, any
other incoming allocations will also queue up to provide back pressure.
"After the patch, all of our relevant I/O is placed on a specific priority class.
The ones which are not are left into the Seastar's default priority, which will
effectively work as an idle class.
Examples of such I/O are commitlog replay and initial SSTable loading. Since they
will happen during initialization, they will run uncontended, and do not justify
having a class on their own."
It is wrong to get a stream plan id like below:
utils::UUID plan_id = gms::get_local_gossiper().get_host_id(ep);
We should look at all stream_sessions with the peer in question.
There are only two messages: prepare_message and outgoing_file_message.
Actually, only prepare_message is sent on the wire.
Flatten the namespace.
After this patch, our I/O operations will be tagged into a specific priority class.
There are five available classes, defined in the previous patch:
1) memtable flush
2) commitlog writes
3) streaming mutation
4) SSTable compaction
5) CQL query
Signed-off-by: Glauber Costa <glauber@scylladb.com>
After the introduction of the Fair I/O Queueing mechanism in Seastar,
it is possible to add requests to a specific priority class, that will
end up being serviced fairly.
This patch introduces a Priority Manager service, that manages the priority
each class of request will get. At this moment, having a class for that may
sound like overkill. However, the most interesting feature of the Fair I/O
queue comes from being able to adjust the priorities dynamically as workloads
change, so we will benefit from having them all in the same place.
This is designed to behave like one of our services, with the exception that
it won't use the distributed interface. This is mainly because there is no
reason to introduce that complexity at this point - since we can do thread local
registration as we have been doing in Seastar, and because that would require us
to change most of our tests to start a new service.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
SSTables already have a priority argument wired to their read path. However,
most of our reads do not call that interface directly, but employ the services
of a mutation reader instead.
Some of those readers will be used to read through a mutation_source,
and those have to be patched as well.
Right now, whenever we need to pass a class, we pass Seastar's default priority
class.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
All the SSTable read path can now take an io_priority. The public functions will
take a default parameter which is Seastar's default priority.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
All variants of write_component now take an io_priority. The public
interfaces are by default set to Seastar's default priority.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
The only user of the default size is data_read, sitting in row.cc.
That reader wants to read and process a chunk all at once. So there's
really no reason to use the default buffer size - except that this code
is old.
We should do as we do in other single-key / single-range readers and
try to read all at once if possible, by looking at the size we received
as a parameter. Cleaning up the data_stream_at interface then comes as
a nice side effect.
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Its definition as a lambda function is inconvenient, because it does not allow
us to use default values for parameters.
Signed-off-by: Glauber Costa <glauber@scylladb.com>