Not all the IDLs are used by the messaging service. This patch removes
the auto-generated single include file that holds all of them and
replaces it with individual includes of the generated files.
The patch does the following:
* It removes the auto-generated include file and cleans configure.py
of it.
* It places an explicit include for each generated file in
messaging_service.
* It adds a dependency of the generated code on the idl-compiler, so a
change in the compiler will trigger regeneration of the generated files.
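For illustration only (the header names below are hypothetical, not the actual
generated file names), the change replaces one aggregate include with explicit
per-IDL includes:

```cpp
// Before: a single auto-generated header pulled in every generated serializer.
// #include "idl_all.hh"             // hypothetical aggregate header

// After: messaging_service includes only the generated headers it uses.
// #include "gen/gossip_digest.hh"   // hypothetical generated header
// #include "gen/streaming.hh"       // hypothetical generated header
```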
Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Message-Id: <1453900241-13053-1-git-send-email-amnon@scylladb.com>
There are only two messages: prepare_message and outgoing_file_message.
Actually, only prepare_message is sent on the wire.
Flatten the namespace.
After the introduction of the Fair I/O Queueing mechanism in Seastar,
it is possible to add requests to a specific priority class, which will
end up being serviced fairly.
This patch introduces a Priority Manager service that manages the priority
each class of request will get. At this moment, having a class for that may
sound like overkill. However, the most interesting feature of the fair I/O
queue comes from being able to adjust priorities dynamically as workloads
change, so we will benefit from having them all in the same place.
This is designed to behave like one of our services, with the exception that
it won't use the distributed interface. This is mainly because there is no
reason to introduce that complexity at this point: we can do thread-local
registration as we have been doing in Seastar, and using the distributed
interface would require us to change most of our tests to start a new service.
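As a rough sketch of the idea (names and structure are invented here, not the
actual Scylla or Seastar API), a thread-local registry of per-class I/O shares
could look like:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch: a registry mapping each request class to its
// fair-queue shares, so priorities live in one place and can be
// adjusted dynamically as workloads change.
class priority_manager {
    std::map<std::string, unsigned> _shares; // class name -> fair-queue shares
public:
    void register_class(const std::string& name, unsigned shares) {
        _shares[name] = shares;
    }
    unsigned shares_of(const std::string& name) const {
        auto it = _shares.find(name);
        return it == _shares.end() ? 1 : it->second; // default shares
    }
    // Dynamic adjustment, e.g. boost compaction when backlog grows.
    void update(const std::string& name, unsigned shares) {
        _shares[name] = shares;
    }
};

// Thread-local instance instead of a distributed (sharded) service,
// mirroring the per-shard registration style described above.
inline priority_manager& local_priority_manager() {
    thread_local priority_manager pm;
    return pm;
}
```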
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Unlike streaming in c*, scylla does not need to open tcp connections in
the streaming service for both incoming and outgoing messages;
seastar::rpc does the work. There is no need for a standalone
stream_init_message in the streaming negotiation stage; we can merge
stream_init_message into stream_prepare_message.
"The series does the following:
It adds the code generation.
It performs the needed changes in the current classes so each has a getter for
each of its serializable values and a constructor from the serialized values.
It adds a schema definition that covers gossip_digest_ack.
It changes messaging_service to use the generated code.
An overall explanation of the solution with a description of the schema IDL can
be found on the wiki page:
https://github.com/scylladb/scylla/wiki/Serializer-Deserializer-Code-generation
"
This patch adds rules and the IDL schema to configure.py, which will call
the code generator to create the serialization and deserialization
functions.
There is also a rule to create the header file that includes the
auto-generated header files.
Signed-off-by: Amnon Heiman <amnon@scylladb.com>
Compaction manager was initially created under utils because it was
more generic and wasn't intended only for compaction.
It was more like a task handler based on futures, but now it is
only intended to manage compaction tasks, and thus should be
moved elsewhere. /sstables is where compaction code is located.
Signed-off-by: Raphael S. Carvalho <raphaelsc@scylladb.com>
User db storage + login/password db using system tables.
The Authenticator object is a global, shard-shared singleton, assumed
to be completely immutable and thus safe.
Actual login authentication is done via a locally created stateful object
(SASL challenge) that queries the db.
Uses "crypt_r" for password hashing, vs. origin's use of bcrypt.
The main reason is that bcrypt does not exist as any consistent package
that can be consumed, so to guarantee full compatibility we'd have
to include the source. Not hard, but at least initially more work than
it is worth.
Use systemd Type=notify to tell systemd about startup progress.
We can now use 'systemctl status scylla-server' to see where we are
in service startup, and 'systemctl start scylla-server' will wait until
either startup is complete, or we fail to start up.
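For illustration, the readiness notification that Type=notify relies on boils
down to a datagram sent on the socket systemd passes in $NOTIFY_SOCKET;
libsystemd's sd_notify(3) does essentially this. A minimal self-contained
sketch (not the code used in scylla, which would go through libsystemd):

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>
#include <string>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

// Send a state string such as "READY=1" or "STATUS=loading sstables" to
// the unix datagram socket named by $NOTIFY_SOCKET. Returns false when
// not running under systemd (no NOTIFY_SOCKET) or on send failure.
bool notify_systemd(const std::string& state) {
    const char* path = std::getenv("NOTIFY_SOCKET");
    if (!path || !*path) {
        return false; // not started by systemd; nothing to do
    }
    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    if (addr.sun_path[0] == '@') { // abstract socket namespace
        addr.sun_path[0] = '\0';
    }
    int fd = ::socket(AF_UNIX, SOCK_DGRAM | SOCK_CLOEXEC, 0);
    if (fd < 0) {
        return false;
    }
    ssize_t n = ::sendto(fd, state.data(), state.size(), 0,
                         reinterpret_cast<const sockaddr*>(&addr), sizeof(addr));
    ::close(fd);
    return n == static_cast<ssize_t>(state.size());
}
```

The service would report intermediate progress with `STATUS=...` messages and
finally `READY=1`, which is what lets `systemctl start` block until startup
completes.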
frozen_schema will transfer schema definitions across nodes with schema
mutations. Because different nodes may have different versions of the
schema tables, we cannot use frozen_mutations to transfer these:
a frozen_mutation can only be read using the same version of the
schema it was frozen with. To solve this problem, a new form of mutation
called canonical_mutation is introduced, which can be read using any
version of the schema.
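To illustrate the difference (purely schematic, not the actual encodings): a
positional encoding breaks across schema versions, while a self-describing one
does not.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// A "frozen"-style encoding stores cell values by column position, so
// reading it requires exactly the schema version it was written with:
using frozen_style = std::vector<std::string>; // value[i] belongs to column i

// A "canonical"-style encoding carries the column identity with each
// value, so any schema version can map the cells it knows about:
using canonical_style = std::map<std::string, std::string>; // column -> value

// A node with a newer schema (extra columns) can still read a canonical
// encoding produced by an older node: unknown columns are simply absent.
std::string read_cell(const canonical_style& m, const std::string& column) {
    auto it = m.find(column);
    return it == m.end() ? std::string("<missing>") : it->second;
}
```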
Everything except alter_table_statement::announce_migration() is
translated. announce_migration() has to wait for multi schema support to
be merged.
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Originally, lsa allocated each segment independently, which could result
in high memory fragmentation. As a result, many compaction and eviction
passes may be needed to release a sufficiently big contiguous memory
block.
These problems are solved by the introduction of segment zones,
contiguous groups of segments. All segments are allocated from zones and
the algorithm tries to keep the number of zones to a minimum. Moreover,
segments can be migrated between zones or inside a zone in order to deal
with fragmentation inside a zone.
Segment zones can be shrunk but cannot grow. The segment pool keeps a
tree containing all zones, ordered by their base addresses. This tree is
used only by the memory reclaimer. There is also a list of zones that
have at least one free segment, which is used during allocation.
Segment allocation doesn't have any preference for which segment (and
zone) to choose. Each zone contains a free list of unused segments. If
there are no zones with free segments, a new one is created.
Segment reclamation migrates segments from the zones higher in memory
to the ones at lower addresses. The remaining zones are shrunk until the
requested number of segments is reclaimed.
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
A dynamic bitset implementation that provides functions to search for
both set and cleared bits in both directions.
Signed-off-by: Paweł Dziepak <pdziepak@scylladb.com>
Fixes #355
"Implements query paging similar to origin. If a driver sets a "page size" in
a query, and we cannot know that we will not exceed this limit in a single
query, the query is performed using a "pager" object, which, using modified
partition ranges and query limits, keeps track of returned rows to "page"
through the results.
Implementation structure sort of mimics the origin design, even though it
is maybe a little bit overkill for us (currently). On the other hand, it
does not really hurt.
This implementation is tested using the "paging_test" subset in dtest.
It passes all tests except:
* test_paging_using_secondary_indexes
* test_paging_using_secondary_indexes_with_static_cols
* test_failure_threshold_deletions
The first two because we don't have secondary indexes yet, the latter
because the test depends on "tombstone_failure_threshold" in origin.
Potential todo: Currently the pager object does not shortcut result
building fully when page limit is exceeded. Could save a little work
here, but probably not very significant."
* Static query method to determine if paging might be required
(very conservative - almost all queries will be paged, methinks).
* Static factory method for pager
* Actual pager implementation
Pager object uses three variables to keep track of paging state:
1.) Last partition key - partition key of the last partition processed
-> next partition to start processing
2.) Last clustering key, i.e. row offset within last key partition,
i.e. how far we got last time
3.) Max remaining - max rows to process further, i.e. initial limit -
processed so far
Partition ranges are modified/removed so that we begin with "Last key",
if present. (Or end with, in the case of reversed processing)
A counting visitor then keeps count of rows to include in processing.
Note: the serial format blob is different compared to origin, due to
scylla's different internal architecture, i.e. we query actual rows.
But drivers etc. ignore the content of the blob; it is opaque.
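The three pieces of state above can be modelled schematically (this is an
illustration with invented names, not the actual query pager):

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <vector>

struct row { std::string partition; int clustering; };

struct paging_state {
    std::optional<std::string> last_partition; // 1) where to resume
    std::optional<int> last_clustering;        // 2) row offset within it
    size_t max_remaining;                      // 3) initial limit - rows so far
};

// Return the next page from rows sorted by (partition, clustering),
// updating the state so the following call resumes after the last row.
std::vector<row> next_page(const std::vector<row>& rows,
                           paging_state& st, size_t page_size) {
    std::vector<row> page;
    for (const auto& r : rows) {
        if (st.last_partition) {
            // Skip everything up to and including the last returned row,
            // i.e. the modified-range behaviour described above.
            if (r.partition < *st.last_partition ||
                (r.partition == *st.last_partition &&
                 r.clustering <= *st.last_clustering)) {
                continue;
            }
        }
        if (page.size() == page_size || st.max_remaining == 0) {
            break;
        }
        page.push_back(r);
        st.last_partition = r.partition;
        st.last_clustering = r.clustering;
        --st.max_remaining;
    }
    return page;
}
```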