seaweedfs/test/nfs/framework.go
Chris Lu 08d9193fe1 [nfs] Add NFS (#9067)
* add filer inode foundation for nfs

* nfs command skeleton

* add filer inode index foundation for nfs

* make nfs inode index hardlink aware

* add nfs filehandle and inode lookup plumbing

* add read-only nfs frontend foundation

* add nfs namespace mutation support

* add chunk-backed nfs write path

* add nfs protocol integration tests

* add stale handle nfs coverage

* complete nfs hardlink and failover coverage

* add nfs export access controls

* add nfs metadata cache invalidation

* fix nfs chunk read lookup routing

* fix nfs review findings and rename regression

* address pr 9067 review comments

- filer_inode: fail fast if the snowflake sequencer cannot start, and let
  operators override the 10-bit node id via SEAWEEDFS_FILER_SNOWFLAKE_ID
  to avoid multi-filer collisions
- filer_inode: drop the redundant retry loop in nextInode
- filerstore_wrapper: treat inode-index writes/removals as best-effort so
  a primary store success no longer surfaces as an operation failure
- filer_grpc_server_rename: defer overwritten-target chunk deletion until
  after CommitTransaction so a rolled-back rename does not strand live
  metadata pointing at freshly deleted chunks
- command/nfs: default ip.bind to loopback and require an explicit
  filer.path, so the experimental server does not expose the entire
  filer namespace on first run
- nfs integration_test: document why LinkArgs matches go-nfs's on-the-wire
  layout rather than RFC 1813 LINK3args

* mount: pre-allocate inode in Mkdir and Symlink

Mkdir and Symlink used to send filer_pb.CreateEntryRequest with
Attributes.Inode = 0. After PR 9067, the filer's CreateEntry now assigns
its own inode in that case, so the filer-side entry ends up with a
different inode than the one the mount allocates via inodeToPath.Lookup
and returns to the kernel. Once applyLocalMetadataEvent stores the
filer's entry in the meta cache, subsequent GetAttr calls read the
cached entry and hit the setAttrByPbEntry override at line 197 of
weedfs_attr.go, returning the filer-assigned inode instead of the
mount's local one. pjdfstest tests/rename/00.t (subtests 81/87/91)
caught this — it lstat'd a freshly-created directory/symlink, renamed
it, lstat'd again, and saw a different inode the second time.

createRegularFile already pre-allocates via inodeToPath.AllocateInode
and stamps it into the create request. Do the same thing in Mkdir and
Symlink so both sides agree on the object identity from the very first
request, and so GetAttr's cache path returns the same value as Mkdir /
Symlink's initial response.
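
A rough sketch of the resulting request shape, assuming the post-PR
filer_pb.FuseAttributes carries the Inode field described above
(allocateInode stands in for inodeToPath.AllocateInode; this is not the
actual weed/mount code):

    // Hypothetical sketch: stamp the pre-allocated inode into the create
    // request so the filer adopts it instead of assigning its own.
    func mkdirRequest(dir, name string, allocateInode func(string) uint64) *filer_pb.CreateEntryRequest {
        return &filer_pb.CreateEntryRequest{
            Directory: dir,
            Entry: &filer_pb.Entry{
                Name:        name,
                IsDirectory: true,
                Attributes: &filer_pb.FuseAttributes{
                    Inode: allocateInode(dir + "/" + name), // previously left at 0
                },
            },
        }
    }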

* sequence: mask snowflake node id on int→uint32 conversion

CodeQL flagged the unchecked uint32(snowflakeId) cast in
NewSnowflakeSequencer as a potential truncation bug when snowflakeId is
sourced from user input (e.g. via SEAWEEDFS_FILER_SNOWFLAKE_ID). Mask
to the 10 bits the snowflake library actually uses so any caller-
supplied int is safely clamped into range.
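
A self-contained illustration of the clamp (the helper name is mine, not
the sequencer's API):

    // mask10 clamps a caller-supplied id into the 10-bit node-id range
    // the snowflake library uses, instead of silently truncating on the
    // int -> uint32 cast.
    func mask10(snowflakeId int) uint32 {
        return uint32(snowflakeId) & (1<<10 - 1)
    }

    // mask10(5) == 5, mask10(1024) == 0, mask10(-1) == 1023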

* add test/nfs integration suite

Boots a real SeaweedFS cluster (master + volume + filer) plus the
experimental `weed nfs` frontend as subprocesses and drives it through
the NFSv3 wire protocol via go-nfs-client, mirroring the layout of
test/sftp. The tests run without a kernel NFS mount, privileged ports,
or any platform-specific tooling.

Coverage includes read/write round-trip, mkdir/rmdir, nested
directories, rename content preservation, overwrite + explicit
truncate, 3 MiB binary file, all-byte binary and empty files, symlink
round-trip, ReadDirPlus listing, missing-path remove, FSInfo sanity,
sequential appends, and readdir-after-remove.

Framework notes:

- Picks ephemeral ports with net.Listen("127.0.0.1:0") and passes
  -port.grpc explicitly so the default port+10000 convention cannot
  overflow uint16 on macOS.
- Pre-creates the /nfs_export directory via the filer HTTP API before
  starting the NFS server — the NFS server's ensureIndexedEntry check
  requires the export root to exist with a real entry, which filer.Root
  does not satisfy when the export path is "/".
- Reuses the same rpc.Client for mount and target so go-nfs-client does
  not try to re-dial via portmapper (which concatenates ":111" onto the
  address).

* ci: add NFS integration test workflow

Mirror test/sftp's workflow for the new test/nfs suite so PRs that touch
the NFS server, the inode filer plumbing it depends on, or the test
harness itself run the 14 NFSv3-over-RPC integration tests on Ubuntu
22.04 via `make test`.

* nfs: use append for buffer growth in Write and Truncate

The previous make+copy pattern reallocated the full buffer on every
extending write or truncate, giving O(N^2) behaviour for sequential
write loops. Switching to `append(f.content, make([]byte, delta)...)`
lets Go's amortized growth strategy absorb the repeated extensions.
Called out by gemini-code-assist on PR 9067.
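
Side by side, outside the real filesystem.go (both helpers are
illustrative):

    // old shape: reallocate + copy on every extension, O(N^2) total
    // copying across a sequential write loop
    func growByCopy(buf []byte, delta int) []byte {
        grown := make([]byte, len(buf)+delta)
        copy(grown, buf)
        return grown
    }

    // new shape: append doubles capacity as needed, O(N) copying in total
    func grow(buf []byte, delta int) []byte {
        return append(buf, make([]byte, delta)...)
    }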

* filer: honor caller cancellation in collectInodeIndexEntries

Dropping the WithoutCancel wrapper lets DeleteFolderChildren bail out of
the inode-index scan if the client disconnects mid-walk. The cleanup is
already treated as best-effort by the caller (it logs on error and
continues), so a cancelled walk just means the partial index rebuild is
skipped — the same failure mode as any other index write error.
Flagged as a DoS concern by gemini-code-assist on PR 9067.
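
A minimal stand-in for the walk (not the real collectInodeIndexEntries),
showing why dropping the wrapper matters:

    // Before the fix this received context.WithoutCancel(parent), so
    // ctx.Done() could never fire mid-walk.
    func collectEntries(ctx context.Context, batches int) error {
        for i := 0; i < batches; i++ {
            select {
            case <-ctx.Done():
                return ctx.Err() // caller logs and moves on; rebuild skipped
            default:
            }
            // ... read one batch of the inode index ...
        }
        return nil
    }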

* nfs: skip filer read on open when O_TRUNC is set

openFile used to unconditionally loadWritableContent for every writable
open and then discard the buffer if O_TRUNC was set. For large files
that is a pointless 64 MiB round-trip. Reorder the branches so we only
fetch existing content when the caller intends to keep it, and mark the
file dirty right away so the subsequent Close still issues the
truncating write. Called out by gemini-code-assist on PR 9067.
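
The reordered check, reduced to a predicate with a name of my choosing:

    // shouldLoadExisting mirrors the new openFile ordering: only pay the
    // filer read when the caller intends to keep the current contents.
    func shouldLoadExisting(flag int) bool {
        writable := flag&(os.O_WRONLY|os.O_RDWR) != 0
        return writable && flag&os.O_TRUNC == 0
    }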

* nfs: allow Seek on O_APPEND files and document buffered write cap

Two related cleanups on filesystem.go:

- POSIX only restricts Write on an O_APPEND fd, not lseek. The existing
  Seek error ("append-only file descriptors may only seek to EOF")
  prevented read-and-write workloads that legitimately reposition the
  read cursor. Write already snaps the offset to EOF before persisting
  (see seaweedFile Write), so Seek can unconditionally accept any
  offset (sketched below). Update the unit test that was asserting the
  old behaviour.
- Add a doc comment on maxBufferedWriteSize explaining that it is a
  per-file ceiling, the memory footprint it implies, and that the real
  fix for larger whole-file rewrites is streaming / multi-chunk support.

Both changes flagged by gemini-code-assist on PR 9067.
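
The resulting semantics, sketched with a toy type rather than the real
seaweedFile:

    type appendFile struct {
        content []byte
        offset  int64 // read cursor; appends ignore it
    }

    // Seek accepts any offset now; POSIX only constrains Write on O_APPEND.
    func (f *appendFile) Seek(offset int64) { f.offset = offset }

    // Write snaps to EOF before persisting, regardless of earlier Seeks.
    func (f *appendFile) Write(p []byte) {
        f.content = append(f.content, p...)
        f.offset = int64(len(f.content))
    }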

* nfs: guard offset before casting to int in Write

CodeQL flagged `int(f.offset) + len(p)` inside the Write growth path as
a potential overflow on architectures where `int` is 32-bit. The
existing check only bounded the post-cast value, which is too late.
Clamp f.offset against maxBufferedWriteSize before the cast and also
reject negative/overflowed endOffset results. Both branches fall
through to billy.ErrNotSupported, the same behaviour the caller gets
today for any out-of-range buffered write.

* nfs: compute Write endOffset in int64 to satisfy CodeQL

The previous guard bounded f.offset but left len(p) unchecked, so
CodeQL still flagged `int(f.offset) + len(p)` as a possible int-width
overflow path. Bound len(p) against maxBufferedWriteSize first, do the
addition in int64, and only cast down after the total has been clamped
against the buffer ceiling. Behaviour is unchanged: any out-of-range
write still returns billy.ErrNotSupported.
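
The guard ordering as a standalone helper, with billy.ErrNotSupported
swapped for a local error:

    const maxBufferedWriteSize = 64 << 20 // 64 MiB

    var errNotSupported = errors.New("not supported")

    func checkBufferedWrite(offset int64, n int) error {
        // bound both operands first so the int64 addition cannot overflow
        if offset < 0 || offset > maxBufferedWriteSize || n > maxBufferedWriteSize {
            return errNotSupported
        }
        if offset+int64(n) > maxBufferedWriteSize {
            return errNotSupported
        }
        return nil // safe to cast down to int for slice indexing
    }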

* ci: drop emojis from nfs-tests workflow summary

Plain-text step summary per user preference — no decorative glyphs in
the NFS CI output or checklist.

* nfs: annotate remaining DEV_PLAN TODOs with status

Three of the unchecked items are genuine follow-up PRs rather than
missing work in this one, and one was actually already done:

- Reuse chunk cache and mutation stream helpers without FUSE deps:
  checked off — the NFS server imports weed/filer.ReaderCache and
  weed/util/chunk_cache directly with no weed/mount or go-fuse imports.
- Extract shared read/write helpers from mount/WebDAV/SFTP: annotated
  as deferred to a separate refactor PR (touches four packages).
- Expand direct data-path writes beyond the 64 MiB buffered fallback:
  annotated as deferred — requires a streaming WRITE path.
- Shared lock state + lock tests: annotated as blocked upstream on
  go-nfs's missing NLM/NFSv4 lock state RPCs, matching the existing
  "Current Blockers" note.

* test/nfs: share port+readiness helpers with test/testutil

Drop the per-suite mustPickFreePort and waitForService re-implementations
in favor of testutil.MustAllocatePorts (atomic batch allocation; no
close-then-hope race) and testutil.WaitForPort / SeaweedMiniStartupTimeout.
Pull testutil in via a local replace directive so this standalone
seaweedfs-nfs-tests module can import the in-repo package without a
separate release.

Subprocess startup is still master + volume + filer + nfs — no switch to
weed mini yet, since mini does not know about the nfs frontend.

* nfs: stream writes to volume servers instead of buffering the whole file

Before this change the NFS write path held the full contents of every
writable open in memory:

  - OpenFile(write) called loadWritableContent which read the existing
    file into seaweedFile.content up to maxBufferedWriteSize (64 MiB)
  - each Write() extended content in-place
  - Close() uploaded the whole buffer as a single chunk via
    persistContent + AssignVolume

The 64 MiB ceiling made large NFS writes return NFS3ERR_NOTSUPP, and
even below the cap every Write paid a whole-file-in-memory cost. This
PR rewrites the write path to match how `weed filer` and the S3 gateway
persist data:

  - openFile(write) no longer loads the existing content at all; it
    only issues an UpdateEntry when O_TRUNC is set *and* the file is
    non-empty (so a fresh create+trunc is still zero-RPC)
  - Write() streams the caller's bytes straight to a volume server via
    one AssignVolume + one chunk upload, then atomically appends the
    resulting chunk to the filer entry through mutateEntry. Any
    previously inlined entry.Content is migrated to a chunk in the same
    update so the chunk list becomes the authoritative representation.
  - Truncate() becomes a direct mutateEntry (drop chunks past the new
    size, clip inline content, update FileSize) instead of resizing an
    in-memory buffer.
  - Close() is a no-op because everything was flushed inline.

The small-file fast path that the filer HTTP handler uses is preserved:
if the post-write size still fits in maxInlineWriteSize (4 MiB) and
the file has no existing chunks, we rewrite entry.Content directly and
skip the volume-server round-trip. This keeps single-shot tiny writes
(echo, small edits) cheap while completely removing the 64 MiB cap on
larger files. Read() now always reads through the chunk reader instead
of a local byte slice, so reads inside the same session see the freshly
appended data.
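
The fast-path decision, reduced to a predicate (constants per the
description above; the helper name is mine):

    const maxInlineWriteSize = 4 << 20 // 4 MiB, matching the filer HTTP handler

    // keepInline: rewrite entry.Content in place only while the file is
    // small and has never been chunked; otherwise stream to a volume server.
    func keepInline(sizeAfterWrite int64, existingChunks int) bool {
        return sizeAfterWrite <= maxInlineWriteSize && existingChunks == 0
    }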

Drops the unused seaweedFile.content / dirty fields, the
maxBufferedWriteSize constant, and the loadWritableContent helper.
Updates TestSeaweedFileSystemSupportsNamespaceMutations expectations
to match the new "no extra O_TRUNC UpdateEntry on an empty file"
behavior (still 3 updates: Write + Chmod + Truncate).

* filer: extract shared gateway upload helper for NFS and WebDAV

Three filer-backed gateways (NFS, WebDAV, and mount) each had a local
saveDataAsChunk that wrapped operation.NewUploader().UploadWithRetry
with near-identical bodies: build AssignVolumeRequest, build
UploadOption, build genFileUrlFn with optional filerProxy rewriting,
call UploadWithRetry, validate the result, and call ToPbFileChunk.
Pull that body into filer.SaveGatewayDataAsChunk with a
GatewayChunkUploadRequest struct so both NFS and WebDAV can delegate
to one implementation.

- NFS's saveDataAsChunk is now a thin adapter that assembles the
  GatewayChunkUploadRequest from server options and calls the helper
  (sketched below).
  The chunkUploader interface keeps working for test injection because
  the new GatewayChunkUploader interface is structurally identical.
- WebDAV's saveDataAsChunk is similarly a thin adapter — it drops the
  local operation.NewUploader call plus the AssignVolume/UploadOption
  scaffolding.
- mount is intentionally left alone. mount's saveDataAsChunk has two
  features that do not fit the shared helper (a pre-allocated file-id
  pool used to skip AssignVolume entirely, and a chunkCache
  write-through at offset 0 so future reads hit the mount's local
  cache), both of which are mount-specific.

Marks the Phase 2 "extract shared read/write helpers from mount,
WebDAV, and SFTP" DEV_PLAN item as done. The filer-level chunk read
path (NonOverlappingVisibleIntervals + ViewFromVisibleIntervals +
NewChunkReaderAtFromClient) was already shared.
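
The adapter shape this describes, heavily hypothetical — the real
GatewayChunkUploadRequest fields and SaveGatewayDataAsChunk signature
live in weed/filer and may differ:

    func (fs *nfsFs) saveDataAsChunk(data []byte, offset int64) (*filer_pb.FileChunk, error) {
        req := filer.GatewayChunkUploadRequest{ /* collection, replication, filer-proxy flag, ... */ }
        return filer.SaveGatewayDataAsChunk(req, data, offset)
    }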

* nfs: remove DESIGN.md and DEV_PLAN.md

The planning documents have served their purpose — all phase 1 and
phase 2 items are landed, phase 3 streaming writes are landed, phase 2
shared helpers are extracted, and the two remaining phase 4 items
(shared lock state + lock tests) are blocked upstream on
github.com/willscott/go-nfs which exposes no NLM or NFSv4 lock state
RPCs. The running decision log no longer reflects current code and
would just drift. The NFS wiki page
(https://github.com/seaweedfs/seaweedfs/wiki/NFS-Server) now carries
the overview, configuration surface, architecture notes, and known
limitations; the source is the source of truth for the rest.
2026-04-14 20:48:24 -07:00


package nfs

import (
	"bytes"
	"fmt"
	"io"
	"mime/multipart"
	"net"
	"net/http"
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
	"strings"
	"syscall"
	"testing"
	"time"

	"github.com/seaweedfs/seaweedfs/test/testutil"
	"github.com/stretchr/testify/require"
	nfsclient "github.com/willscott/go-nfs-client/nfs"
	"github.com/willscott/go-nfs-client/nfs/rpc"
)

// NfsTestFramework boots a minimal SeaweedFS cluster (master + volume + filer)
// plus the experimental `weed nfs` frontend and hands out NFSv3 RPC clients
// that talk to it. Everything is driven via subprocesses so the tests exercise
// the same binary an operator would deploy, and no kernel mount is required.
type NfsTestFramework struct {
	t             *testing.T
	tempDir       string
	dataDir       string
	masterProcess *os.Process
	volumeProcess *os.Process
	filerProcess  *os.Process
	nfsProcess    *os.Process
	masterAddr    string
	masterGrpc    int
	volumeAddr    string
	volumeGrpc    int
	filerAddr     string
	filerGrpc     int
	nfsAddr       string
	exportRoot    string
	weedBinary    string
	isSetup       bool
	skipCleanup   bool
}

// TestConfig controls how the framework boots the cluster.
type TestConfig struct {
	NumVolumes  int
	EnableDebug bool
	SkipCleanup bool // keep temp dir on failure for inspection

	// ExportRoot is the filer path the NFS server exports. An empty value
	// falls back to "/" so tests can use any path, at the cost of a single
	// warning logged by the server. DefaultTestConfig uses /nfs_export.
	ExportRoot string
}

// DefaultTestConfig returns the defaults used by most tests. A dedicated
// /nfs_export subtree is used as the NFS export root because the NFS server
// requires the export directory to exist in the filer's namespace and carry
// a non-zero inode — passing "/" would succeed only for filer setups that
// have already backfilled the root inode.
func DefaultTestConfig() *TestConfig {
	return &TestConfig{
		NumVolumes:  3,
		EnableDebug: false,
		SkipCleanup: false,
		ExportRoot:  "/nfs_export",
	}
}

// NewNfsTestFramework allocates a framework bound to the current test. Call
// Setup next to actually start the cluster.
func NewNfsTestFramework(t *testing.T, config *TestConfig) *NfsTestFramework {
	if config == nil {
		config = DefaultTestConfig()
	}

	tempDir, err := os.MkdirTemp("", "seaweedfs_nfs_test_")
	require.NoError(t, err)

	// testutil.MustAllocatePorts holds every listener open until the full
	// batch has been reserved, which avoids the "close-then-hope" race of
	// the per-suite helper it replaces. We need seven ports: four HTTP
	// (master, volume, filer, nfs) and three gRPC (master, volume, filer —
	// nfs has no gRPC endpoint).
	ports := testutil.MustAllocatePorts(t, 7)

	exportRoot := config.ExportRoot
	if exportRoot == "" {
		exportRoot = "/"
	}

	return &NfsTestFramework{
		t:           t,
		tempDir:     tempDir,
		dataDir:     filepath.Join(tempDir, "data"),
		masterAddr:  fmt.Sprintf("127.0.0.1:%d", ports[0]),
		masterGrpc:  ports[1],
		volumeAddr:  fmt.Sprintf("127.0.0.1:%d", ports[2]),
		volumeGrpc:  ports[3],
		filerAddr:   fmt.Sprintf("127.0.0.1:%d", ports[4]),
		filerGrpc:   ports[5],
		nfsAddr:     fmt.Sprintf("127.0.0.1:%d", ports[6]),
		exportRoot:  exportRoot,
		weedBinary:  findWeedBinary(),
		isSetup:     false,
		skipCleanup: config.SkipCleanup,
	}
}

// Setup starts the SeaweedFS cluster and the NFS frontend, waiting for each
// component to accept connections before moving on.
func (f *NfsTestFramework) Setup(config *TestConfig) error {
	if f.isSetup {
		return fmt.Errorf("framework already setup")
	}

	dirs := []string{
		f.dataDir,
		filepath.Join(f.dataDir, "master"),
		filepath.Join(f.dataDir, "volume"),
	}
	for _, dir := range dirs {
		if err := os.MkdirAll(dir, 0755); err != nil {
			return fmt.Errorf("failed to create directory %s: %v", dir, err)
		}
	}

	if err := f.startMaster(config); err != nil {
		return fmt.Errorf("failed to start master: %v", err)
	}
	if !testutil.WaitForPort(portFromAddr(f.masterAddr), testutil.SeaweedMiniStartupTimeout) {
		return fmt.Errorf("master not ready at %s", f.masterAddr)
	}

	if err := f.startVolumeServer(config); err != nil {
		return fmt.Errorf("failed to start volume server: %v", err)
	}
	if !testutil.WaitForPort(portFromAddr(f.volumeAddr), testutil.SeaweedMiniStartupTimeout) {
		return fmt.Errorf("volume server not ready at %s", f.volumeAddr)
	}

	if err := f.startFiler(config); err != nil {
		return fmt.Errorf("failed to start filer: %v", err)
	}
	if !testutil.WaitForPort(portFromAddr(f.filerAddr), testutil.SeaweedMiniStartupTimeout) {
		return fmt.Errorf("filer not ready at %s", f.filerAddr)
	}

	// Pre-create the export root in the filer's namespace. The NFS server
	// expects its export directory to exist with a real inode; uploading a
	// placeholder file creates the parent directory implicitly and then
	// removing the file leaves the empty directory in place.
	if f.exportRoot != "/" {
		if err := f.ensureExportRootExists(); err != nil {
			return fmt.Errorf("failed to pre-create export root %s: %v", f.exportRoot, err)
		}
	}

	if err := f.startNfsServer(config); err != nil {
		return fmt.Errorf("failed to start NFS server: %v", err)
	}
	if !testutil.WaitForPort(portFromAddr(f.nfsAddr), testutil.SeaweedMiniStartupTimeout) {
		return fmt.Errorf("NFS server not ready at %s", f.nfsAddr)
	}

	// Let the NFS server finish wiring up its gRPC subscription to the filer
	// before the first client call hits MOUNT/LOOKUP.
	time.Sleep(500 * time.Millisecond)

	f.isSetup = true
	return nil
}

// Cleanup stops all processes. Temp state is preserved if SkipCleanup is set.
func (f *NfsTestFramework) Cleanup() {
	processes := []*os.Process{f.nfsProcess, f.filerProcess, f.volumeProcess, f.masterProcess}
	for _, proc := range processes {
		if proc != nil {
			_ = proc.Signal(syscall.SIGTERM)
			_, _ = proc.Wait()
		}
	}

	if !f.skipCleanup {
		_ = os.RemoveAll(f.tempDir)
	}
}

// NfsAddr returns the TCP address the NFS server is listening on.
func (f *NfsTestFramework) NfsAddr() string { return f.nfsAddr }

// FilerAddr returns the TCP address of the filer.
func (f *NfsTestFramework) FilerAddr() string { return f.filerAddr }

// ExportRoot returns the path the NFS server exports.
func (f *NfsTestFramework) ExportRoot() string { return f.exportRoot }

// Mount opens an NFSv3 MOUNT+NFS connection against the running NFS server
// and returns a Target that tests can drive like a mini-VFS. The caller is
// responsible for calling the returned cleanup func to Unmount and close the
// TCP connection.
func (f *NfsTestFramework) Mount() (*nfsclient.Target, func(), error) {
	var (
		client *rpc.Client
		err    error
	)
	// The NFS server's TCP listener may already be accepting connections when
	// testutil.WaitForPort returns, but the RPC program registration can
	// trail it by a few milliseconds. Retry the dial to absorb that small
	// window.
	for attempt := 0; attempt < 20; attempt++ {
		client, err = rpc.DialTCP("tcp", f.nfsAddr, false)
		if err == nil {
			break
		}
		time.Sleep(25 * time.Millisecond)
	}
	if err != nil {
		return nil, nil, fmt.Errorf("dial NFS: %w", err)
	}

	// Note: do not set Mount.Addr here. When Addr is non-empty, the go-nfs
	// client re-dials via portmapper and concatenates `:111` onto the
	// address, which produces "too many colons" for a raw `host:port`
	// string. Reusing the existing RPC client avoids that path entirely.
	mounter := &nfsclient.Mount{Client: client}
	target, err := mounter.Mount(f.exportRoot, rpc.AuthNull)
	if err != nil {
		client.Close()
		return nil, nil, fmt.Errorf("mount %s: %w", f.exportRoot, err)
	}

	cleanup := func() {
		_ = mounter.Unmount()
		client.Close()
	}
	return target, cleanup, nil
}

func (f *NfsTestFramework) startMaster(config *TestConfig) error {
	_, masterPort := splitHostPort(f.masterAddr)
	args := []string{
		"master",
		"-ip=127.0.0.1",
		fmt.Sprintf("-port=%d", masterPort),
		fmt.Sprintf("-port.grpc=%d", f.masterGrpc),
		"-mdir=" + filepath.Join(f.dataDir, "master"),
		"-raftBootstrap",
		"-peers=none",
	}
	return f.startProcess(&f.masterProcess, config, args)
}

func (f *NfsTestFramework) startVolumeServer(config *TestConfig) error {
	_, volumePort := splitHostPort(f.volumeAddr)
	// pb.ServerAddress encodes a non-default gRPC port as `host:port.grpc`.
	// See weed/pb/server_address.go — the dot, not a colon, is the separator
	// between the HTTP port and the gRPC port.
	masterWithGrpc := fmt.Sprintf("%s.%d", f.masterAddr, f.masterGrpc)
	args := []string{
		"volume",
		"-master=" + masterWithGrpc,
		"-ip=127.0.0.1",
		fmt.Sprintf("-port=%d", volumePort),
		fmt.Sprintf("-port.grpc=%d", f.volumeGrpc),
		"-dir=" + filepath.Join(f.dataDir, "volume"),
		fmt.Sprintf("-max=%d", config.NumVolumes),
	}
	return f.startProcess(&f.volumeProcess, config, args)
}

func (f *NfsTestFramework) startFiler(config *TestConfig) error {
	_, filerPort := splitHostPort(f.filerAddr)
	masterWithGrpc := fmt.Sprintf("%s.%d", f.masterAddr, f.masterGrpc)
	args := []string{
		"filer",
		"-master=" + masterWithGrpc,
		"-ip=127.0.0.1",
		fmt.Sprintf("-port=%d", filerPort),
		fmt.Sprintf("-port.grpc=%d", f.filerGrpc),
	}
	return f.startProcess(&f.filerProcess, config, args)
}

func (f *NfsTestFramework) startNfsServer(config *TestConfig) error {
	_, nfsPort := splitHostPort(f.nfsAddr)
	// `host:port.grpc` encoding — see pb/server_address.go.
	filerWithGrpc := fmt.Sprintf("%s.%d", f.filerAddr, f.filerGrpc)
	args := []string{
		"nfs",
		"-filer=" + filerWithGrpc,
		"-ip.bind=127.0.0.1",
		fmt.Sprintf("-port=%d", nfsPort),
		"-filer.path=" + f.exportRoot,
	}
	return f.startProcess(&f.nfsProcess, config, args)
}

func (f *NfsTestFramework) startProcess(target **os.Process, config *TestConfig, args []string) error {
	cmd := exec.Command(f.weedBinary, args...)
	cmd.Dir = f.tempDir
	if config.EnableDebug {
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
	}
	if err := cmd.Start(); err != nil {
		return err
	}
	*target = cmd.Process
	return nil
}

// portFromAddr returns just the port number from a `host:port` string.
// testutil.WaitForPort takes an int port, not a full address.
func portFromAddr(addr string) int {
	_, port := splitHostPort(addr)
	return port
}

// ensureExportRootExists posts a placeholder file to f.exportRoot via the
// filer's HTTP API, then deletes it. That roundtrip implicitly creates the
// target directory so the NFS server has something to mount. We bypass
// weed/pb here because the HTTP client is simpler and needs no gRPC stubs.
func (f *NfsTestFramework) ensureExportRootExists() error {
	exportRoot := strings.TrimRight(f.exportRoot, "/")
	if exportRoot == "" {
		return nil
	}
	placeholder := exportRoot + "/.nfs_test_init"
	filerURL := "http://" + f.filerAddr + placeholder

	var body bytes.Buffer
	writer := multipart.NewWriter(&body)
	part, err := writer.CreateFormFile("file", ".nfs_test_init")
	if err != nil {
		return err
	}
	if _, err := io.WriteString(part, ""); err != nil {
		return err
	}
	if err := writer.Close(); err != nil {
		return err
	}

	httpClient := &http.Client{Timeout: 10 * time.Second}
	req, err := http.NewRequest(http.MethodPost, filerURL, &body)
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", writer.FormDataContentType())
	resp, err := httpClient.Do(req)
	if err != nil {
		return err
	}
	_, _ = io.Copy(io.Discard, resp.Body)
	resp.Body.Close()
	if resp.StatusCode/100 != 2 {
		return fmt.Errorf("filer POST %s returned status %d", filerURL, resp.StatusCode)
	}

	// Delete the placeholder; the directory stays behind.
	deleteReq, err := http.NewRequest(http.MethodDelete, filerURL, nil)
	if err != nil {
		return err
	}
	deleteResp, err := httpClient.Do(deleteReq)
	if err != nil {
		return err
	}
	_, _ = io.Copy(io.Discard, deleteResp.Body)
	deleteResp.Body.Close()
	if deleteResp.StatusCode/100 != 2 && deleteResp.StatusCode != http.StatusNotFound {
		return fmt.Errorf("filer DELETE %s returned status %d", filerURL, deleteResp.StatusCode)
	}
	return nil
}
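
// splitHostPort parses a `host:port` string, returning ("", 0) when the
// address is malformed. Framework addresses are always well-formed
// 127.0.0.1 literals, so a zero port indicates a programmer error.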
func splitHostPort(addr string) (string, int) {
	host, portStr, err := net.SplitHostPort(addr)
	if err != nil {
		return "", 0
	}
	var port int
	_, _ = fmt.Sscanf(portStr, "%d", &port)
	return host, port
}

// findWeedBinary locates the weed binary, preferring the local build in the
// checkout so tests run against the code under review rather than whatever is
// on $PATH.
func findWeedBinary() string {
	if _, thisFile, _, ok := runtime.Caller(0); ok {
		thisDir := filepath.Dir(thisFile)
		candidates := []string{
			filepath.Join(thisDir, "../../weed/weed"),
			filepath.Join(thisDir, "../weed/weed"),
		}
		for _, candidate := range candidates {
			if _, err := os.Stat(candidate); err == nil {
				abs, _ := filepath.Abs(candidate)
				return abs
			}
		}
	}

	cwd, _ := os.Getwd()
	candidates := []string{
		filepath.Join(cwd, "../../weed/weed"),
		filepath.Join(cwd, "../weed/weed"),
		filepath.Join(cwd, "./weed"),
	}
	for _, candidate := range candidates {
		if _, err := os.Stat(candidate); err == nil {
			abs, _ := filepath.Abs(candidate)
			return abs
		}
	}

	if path, err := exec.LookPath("weed"); err == nil {
		return path
	}
	return "weed"
}