scylladb/mutation/mutation_partition_serializer.hh
Avi Kivity ecb6fb00f0 streamed_mutation_freezer: use chunked_vector instead of std::deque for clustering rows
The streamed_mutation_freezer class uses a std::deque to avoid large
allocations, but, as seen in the referenced issue, it fails when the
vector backing the deque (its internal array of chunk pointers, which
is itself a contiguous allocation) grows too large. This may be a
problem in itself, but the issue doesn't provide enough information
to tell.

Fix the immediate problem by switching to chunked_vector, which is
better at avoiding large allocations. We do lose some early freeing
of rows in serialize_mutation_fragments(), but since most of the
memory is in the clustering rows themselves, not in the
deque/chunked_vector holding them, this should not be a problem.

Fixes #28275

Closes scylladb/scylladb#28281
2026-01-21 10:13:44 +02:00


/*
 * Copyright (C) 2015-present ScyllaDB
 */

/*
 * SPDX-License-Identifier: LicenseRef-ScyllaDB-Source-Available-1.0
 */

#pragma once

#include "replica/database_fwd.hh"
#include "bytes_ostream.hh"
#include "mutation_fragment.hh"

namespace ser {

template<typename Output>
class writer_of_mutation_partition;

}

class mutation_partition_serializer {
    static size_t size(const schema&, const mutation_partition&);
public:
    using size_type = uint32_t;
private:
    const schema& _schema;
    const mutation_partition& _p;
private:
    template<typename Writer>
    static void write_serialized(Writer&& out, const schema&, const mutation_partition&);
    template<typename Writer>
    static future<> write_serialized_gently(Writer&& out, const schema&, const mutation_partition&);
public:
    using count_type = uint32_t;

    mutation_partition_serializer(const schema&, const mutation_partition&);
public:
    void write(bytes_ostream&) const;
    void write(ser::writer_of_mutation_partition<bytes_ostream>&&) const;
    future<> write_gently(bytes_ostream&) const;
    future<> write_gently(ser::writer_of_mutation_partition<bytes_ostream>&&) const;
};

// Clustering rows are passed in a chunked_vector (rather than a std::deque)
// to avoid large contiguous allocations; see the commit message above.
void serialize_mutation_fragments(const schema& s, tombstone partition_tombstone,
        std::optional<static_row> sr, range_tombstone_list range_tombstones,
        utils::chunked_vector<clustering_row> clustering_rows,
        ser::writer_of_mutation_partition<bytes_ostream>&&);