Fix a bug in dsync initialization and communication (#5428)
In the current implementation we created as many dsync clients as there are endpoints (host along with path), which is not the expected behavior. Dsync was meant to operate on the endpoint host alone, so that with 4 servers, each with 4 disks, we need only 4 dsync clients and 4 dsync servers. Instead we currently had 8 clients and servers, which is unexpected and should be avoided. This PR brings the implementation back to its original intent. This issue was found in #5160.
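The fix boils down to deriving one dsync node per distinct host rather than one per endpoint. A minimal sketch of that idea, assuming endpoints are URLs of the form `http://host:port/disk` (`uniqueHosts` is a hypothetical helper, not MinIO's actual code):

```go
package main

import (
	"fmt"
	"net/url"
)

// uniqueHosts is a hypothetical helper illustrating the fix: dsync nodes
// are derived from distinct endpoint hosts, not from every endpoint
// (host + disk path). Names here are illustrative, not MinIO's.
func uniqueHosts(endpoints []string) []string {
	seen := make(map[string]bool)
	var hosts []string
	for _, ep := range endpoints {
		u, err := url.Parse(ep)
		if err != nil {
			continue // skip malformed endpoints
		}
		if !seen[u.Host] {
			seen[u.Host] = true
			hosts = append(hosts, u.Host)
		}
	}
	return hosts
}

func main() {
	// 2 servers x 2 disks = 4 endpoints, but only 2 dsync nodes are needed.
	endpoints := []string{
		"http://server1:9000/disk1",
		"http://server1:9000/disk2",
		"http://server2:9000/disk1",
		"http://server2:9000/disk2",
	}
	fmt.Println(uniqueHosts(endpoints)) // [server1:9000 server2:9000]
}
```

With this grouping, the client/server count scales with the number of server hosts, matching the original design intent.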
committed by kannappanr
parent bb73c84b10
commit f3f09ed14e
@@ -186,8 +186,8 @@ func serverMain(ctx *cli.Context) {

 	// Set nodes for dsync for distributed setup.
 	if globalIsDistXL {
-		clnts, myNode := newDsyncNodes(globalEndpoints)
-		fatalIf(dsync.Init(clnts, myNode), "Unable to initialize distributed locking clients")
+		globalDsync, err = dsync.New(newDsyncNodes(globalEndpoints))
+		fatalIf(err, "Unable to initialize distributed locking clients")
 	}

 	// Initialize name space lock.