Currently, if data_size is greater than max_chunk_size - sizeof(chunk), we end up allocating up to max_chunk_size + sizeof(chunk) bytes, exceeding buf.max_chunk_size(). This may lead to allocation failures, as seen in https://github.com/scylladb/scylla/issues/7950, where we couldn't allocate 131088 (= 128K + 16) bytes.

This change adjusts the exposed max_chunk_size() to be max_alloc_size (128KB) - sizeof(chunk), so that chunks allocated in the write() path normally fit within 128KB allocations.

Added a unit test, test_large_placeholder, that stresses the chunk allocation path from the write_place_holder(size) entry point to make sure it handles large chunk allocations correctly.

Refs #7950
Refs #8081

Test: unit(release), bytes_ostream_test(debug)

Signed-off-by: Benny Halevy <bhalevy@scylladb.com>
Message-Id: <20210303143413.902968-1-bhalevy@scylladb.com>