seaweedfs/weed/util/buffer_pool/sync_pool.go
Chris Lu d8bbc1d855 fix: cap pool retention so chunk-copy buffers don't hoard memory (#9422)
Two pool-retention sites kept the runaway-RSS pattern in #6541 visible
even after #9420 and #9421:

* weed/util/buffer_pool: SyncPoolPutBuffer dropped a buffer back into
  sync.Pool regardless of how big it had grown. After a 64 MiB chunk
  upload through volume.PostHandler -> needle.ParseUpload, the pool
  hoarded a 64 MiB byte array per cached entry for the rest of the
  process's lifetime. Cap retention at 4 MiB; oversized buffers are
  dropped so GC can reclaim the backing array.

* weed/s3api/...copy.go: uploadChunkData left UploadOption.BytesBuffer
  unset, so operation.upload_content fell back to the package-global
  valyala/bytebufferpool. That pool also retains high-water buffers
  forever, and concurrent UploadPartCopy filled it with one chunk-sized
  buffer per concurrent upload. Provide a fresh per-call bytes.Buffer
  pre-sized to the chunk size plus multipart framing overhead; it is
  GC'd as soon as the upload returns.

Tests:
- weed/util/buffer_pool/sync_pool_test.go: pin the cap (oversized
  buffers don't round-trip), the inverse (right-sized buffers do), and
  nil-safety.
- weed/s3api/...copy_chunk_upload_test.go: extract newChunkUploadOption
  and pin that BytesBuffer is always non-nil and pre-sized, and that
  each call gets a distinct buffer.
2026-05-10 13:34:25 -07:00


package buffer_pool

import (
	"bytes"
	"sync"
)

// maxRetainedBufferCap caps the capacity of buffers we hand back to the
// sync.Pool. Buffers grown past this (e.g. by a 64 MiB chunk upload through
// volume.PostHandler -> needle.ParseUpload -> bytes.Buffer.ReadFrom) are
// dropped instead of pooled, so the underlying byte array becomes garbage
// and is collected. Without this cap the pool effectively hoards every
// high-water buffer for the process's lifetime — see #6541, where Harbor's
// concurrent UploadPartCopy filled the pool with 64 MiB buffers and RSS
// never receded.
const maxRetainedBufferCap = 4 * 1024 * 1024

var syncPool = sync.Pool{
	New: func() interface{} {
		return new(bytes.Buffer)
	},
}

// SyncPoolGetBuffer returns a pooled buffer, or a freshly allocated empty
// one if the pool has nothing cached.
func SyncPoolGetBuffer() *bytes.Buffer {
	return syncPool.Get().(*bytes.Buffer)
}

// SyncPoolPutBuffer resets and returns a buffer to the pool, unless it has
// grown past maxRetainedBufferCap, in which case it is dropped.
func SyncPoolPutBuffer(buffer *bytes.Buffer) {
	if buffer == nil {
		return
	}
	if buffer.Cap() > maxRetainedBufferCap {
		// Drop the buffer; let GC reclaim the oversized backing array.
		return
	}
	buffer.Reset()
	syncPool.Put(buffer)
}