diff --git a/.gitignore b/.gitignore index c4694b3e1..cc6e1d679 100644 --- a/.gitignore +++ b/.gitignore @@ -14,3 +14,4 @@ vendor test/p2p/data/ test/logs .glide +coverage.txt diff --git a/CHANGELOG.md b/CHANGELOG.md index 92d6c112d..724c31201 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,56 @@ # Changelog +## 0.10.0 (May 18, 2017) + +BREAKING CHANGES: + +- New JSON encoding for `go-crypto` types (using `go-wire/data`): + +``` +"pub_key": { + "type": "ed25519", + "data": "83DDF8775937A4A12A2704269E2729FCFCD491B933C4B0A7FFE37FE41D7760D0" +} +``` + +- New config + - Isolate viper to `cmd/tendermint/commands` + - New Config structs in `config/`: `BaseConfig`, `P2PConfig`, `MempoolConfig`, `ConsensusConfig` + - Remove config/tendermint and config/tendermint_test. Defaults are handled by viper and the `DefaultConfig()` / `TestConfig()` functions + - Tests do not read config from file +- New logger (`github.com/tendermint/tmlibs/log`) + - Reduced to three levels: Error, Info, Debug + - Per-module log levels + - No global loggers (loggers are passed into constructors, or preferably set with a `SetLogger` method) +- RPC serialization cleanup: + - Lowercase JSON names for ValidatorSet fields + - No longer uses go-wire, so no more `[TypeByte, XXX]` madness + - Responses have no type information + - Introduce EventDataInner for serializing events +- Remove all use of go-wire (and `[TypeByte, XXX]`) in the `genesis.json` and `priv_validator.json` +- [consensus/abci] Send InitChain message in handshake if `appBlockHeight == 0` +- [types] `[]byte -> data.Bytes` +- [types] Do not include the `Accum` field when computing the hash of a validator. This makes the ValidatorSetHash unique for a given validator set, rather than changing with every block (as the Accum changes) +- A number of functions and methods had their signatures modified to accommodate the new config and logger. See https://gist.github.com/ebuchman/640d5fc6c2605f73497992fe107ebe0b for a comprehensive list. 
Note that many also had `[]byte` arguments changed to `data.Bytes`, but this is not actually breaking. + +FEATURES: + +- Log whether a node is a validator or not in every consensus round +- Use ldflags to set the git hash as part of the version + +IMPROVEMENTS: + +- Merge `tendermint/go-p2p -> tendermint/tendermint/p2p` and `tendermint/go-rpc -> tendermint/tendermint/rpc/lib` +- Update paths for grand repo merge: + - `go-common -> tmlibs/common` + - `go-data -> go-wire/data` + - All other `go-*` libs, except `go-crypto` and `go-wire`, merged under `tmlibs` +- Return HTTP status codes with errors for RPC responses +- Use `.Wrap()` and `.Unwrap()` instead of e.g. `PubKeyS` for `go-crypto` types +- Color code different instances of the consensus for tests +- RPC JSON responses use pretty printing (via `json.MarshalIndent`) + + ## 0.9.2 (April 26, 2017) BUG FIXES: @@ -150,7 +201,7 @@ IMPROVEMENTS: - Less verbose logging - Better test coverage (37% -> 49%) - Canonical SignBytes for signable types -- Write-Ahead Log for Mempool and Consensus via go-autofile +- Write-Ahead Log for Mempool and Consensus via tmlibs/autofile - Better in-process testing for the consensus reactor and byzantine faults - Better crash/restart testing for individual nodes at preset failure points, and of networks at arbitrary points - Better abstraction over timeout mechanics diff --git a/DOCKER/Dockerfile b/DOCKER/Dockerfile index 08a7b2674..a5f734f19 100644 --- a/DOCKER/Dockerfile +++ b/DOCKER/Dockerfile @@ -1,8 +1,8 @@ FROM alpine:3.5 # This is the release of tendermint to pull in. -ENV TM_VERSION 0.9.0 -ENV TM_SHA256SUM 697033ea0f34f8b34a8a2b74c4dd730b47dd4efcfce65e53e953bdae8eb14364 +ENV TM_VERSION 0.9.1 +ENV TM_SHA256SUM da34234755937140dcd953afcc965555fad7e05afd546711bc5bdc2df3d54226 # Tendermint will be looking for genesis file in /tendermint (unless you change # `genesis_file` in config.toml). 
You can put your config.toml and private diff --git a/DOCKER/Makefile b/DOCKER/Makefile index 092233017..612b9a694 100644 --- a/DOCKER/Makefile +++ b/DOCKER/Makefile @@ -1,10 +1,12 @@ build: # TAG=0.8.0 TAG_NO_PATCH=0.8 - docker build -t "tendermint/tendermint" -t "tendermint/tendermint:$TAG" -t "tendermint/tendermint:$TAG_NO_PATCH" . + docker build -t "tendermint/tendermint" -t "tendermint/tendermint:$(TAG)" -t "tendermint/tendermint:$(TAG_NO_PATCH)" . push: # TAG=0.8.0 TAG_NO_PATCH=0.8 - docker push "tendermint/tendermint" "tendermint/tendermint:$TAG" "tendermint/tendermint:$TAG_NO_PATCH" + docker push "tendermint/tendermint:latest" + docker push "tendermint/tendermint:$(TAG)" + docker push "tendermint/tendermint:$(TAG_NO_PATCH)" build_develop: docker build -t "tendermint/tendermint:develop" -f Dockerfile.develop . diff --git a/DOCKER/README.md b/DOCKER/README.md index b7b9f3815..a1f9f28c9 100644 --- a/DOCKER/README.md +++ b/DOCKER/README.md @@ -1,6 +1,7 @@ # Supported tags and respective `Dockerfile` links -- `0.9.0`, `0.9`, `latest` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/d474baeeea6c22b289e7402449572f7c89ee21da/DOCKER/Dockerfile) +- `0.9.1`, `0.9`, `latest` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/809e0e8c5933604ba8b2d096803ada7c5ec4dfd3/DOCKER/Dockerfile) +- `0.9.0` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/d474baeeea6c22b289e7402449572f7c89ee21da/DOCKER/Dockerfile) - `0.8.0`, `0.8` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/bf64dd21fdb193e54d8addaaaa2ecf7ac371de8c/DOCKER/Dockerfile) - `develop` [(Dockerfile)](https://github.com/tendermint/tendermint/blob/master/DOCKER/Dockerfile.develop) diff --git a/Makefile b/Makefile index e11152fce..7848e4785 100644 --- a/Makefile +++ b/Makefile @@ -8,10 +8,12 @@ TMHOME = $${TMHOME:-$$HOME/.tendermint} all: install test install: get_vendor_deps - @go install ./cmd/tendermint + @go install --ldflags '-extldflags "-static"' \ + --ldflags "-X 
github.com/tendermint/tendermint/version.GitCommit=`git rev-parse HEAD`" ./cmd/tendermint build: - go build -o build/tendermint ./cmd/tendermint + go build \ + --ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse HEAD`" -o build/tendermint ./cmd/tendermint/ build_race: go build -race -o build/tendermint ./cmd/tendermint diff --git a/README.md b/README.md index 24b3aef2e..34320f9a9 100644 --- a/README.md +++ b/README.md @@ -52,9 +52,9 @@ Yay open source! Please see our [contributing guidelines](https://tendermint.com ### Sub-projects * [ABCI](http://github.com/tendermint/abci) -* [Mintnet](http://github.com/tendermint/mintnet) +* [Mintnet-kubernetes](https://github.com/tendermint/tools/tree/master/mintnet-kubernetes) * [Go-Wire](http://github.com/tendermint/go-wire) -* [Go-P2P](http://github.com/tendermint/go-p2p) +* [Go-P2P](http://github.com/tendermint/tendermint/p2p) * [Go-Merkle](http://github.com/tendermint/go-merkle) ### Applications diff --git a/Vagrantfile b/Vagrantfile index a3b329748..c465ed73a 100644 --- a/Vagrantfile +++ b/Vagrantfile @@ -5,7 +5,7 @@ Vagrant.configure("2") do |config| config.vm.box = "ubuntu/trusty64" config.vm.provider "virtualbox" do |v| - v.memory = 3072 + v.memory = 4096 v.cpus = 2 end diff --git a/benchmarks/codec_test.go b/benchmarks/codec_test.go index 35dc591ea..7162e63d0 100644 --- a/benchmarks/codec_test.go +++ b/benchmarks/codec_test.go @@ -4,7 +4,7 @@ import ( "testing" "github.com/tendermint/go-crypto" - "github.com/tendermint/go-p2p" + "github.com/tendermint/tendermint/p2p" "github.com/tendermint/go-wire" proto "github.com/tendermint/tendermint/benchmarks/proto" ctypes "github.com/tendermint/tendermint/rpc/core/types" @@ -12,10 +12,10 @@ import ( func BenchmarkEncodeStatusWire(b *testing.B) { b.StopTimer() - pubKey := crypto.GenPrivKeyEd25519().PubKey().(crypto.PubKeyEd25519) + pubKey := crypto.GenPrivKeyEd25519().PubKey() status := &ctypes.ResultStatus{ NodeInfo: &p2p.NodeInfo{ - PubKey: pubKey, 
+ PubKey: pubKey.Unwrap().(crypto.PubKeyEd25519), Moniker: "SOMENAME", Network: "SOMENAME", RemoteAddr: "SOMEADDR", @@ -40,7 +40,7 @@ func BenchmarkEncodeStatusWire(b *testing.B) { func BenchmarkEncodeNodeInfoWire(b *testing.B) { b.StopTimer() - pubKey := crypto.GenPrivKeyEd25519().PubKey().(crypto.PubKeyEd25519) + pubKey := crypto.GenPrivKeyEd25519().PubKey().Unwrap().(crypto.PubKeyEd25519) nodeInfo := &p2p.NodeInfo{ PubKey: pubKey, Moniker: "SOMENAME", @@ -61,7 +61,7 @@ func BenchmarkEncodeNodeInfoWire(b *testing.B) { func BenchmarkEncodeNodeInfoBinary(b *testing.B) { b.StopTimer() - pubKey := crypto.GenPrivKeyEd25519().PubKey().(crypto.PubKeyEd25519) + pubKey := crypto.GenPrivKeyEd25519().PubKey().Unwrap().(crypto.PubKeyEd25519) nodeInfo := &p2p.NodeInfo{ PubKey: pubKey, Moniker: "SOMENAME", @@ -83,7 +83,7 @@ func BenchmarkEncodeNodeInfoBinary(b *testing.B) { func BenchmarkEncodeNodeInfoProto(b *testing.B) { b.StopTimer() - pubKey := crypto.GenPrivKeyEd25519().PubKey().(crypto.PubKeyEd25519) + pubKey := crypto.GenPrivKeyEd25519().PubKey().Unwrap().(crypto.PubKeyEd25519) pubKey2 := &proto.PubKey{Ed25519: &proto.PubKeyEd25519{Bytes: pubKey[:]}} nodeInfo := &proto.NodeInfo{ PubKey: pubKey2, diff --git a/benchmarks/map_test.go b/benchmarks/map_test.go index ee538e0f3..80edaff7c 100644 --- a/benchmarks/map_test.go +++ b/benchmarks/map_test.go @@ -1,7 +1,7 @@ package benchmarks import ( - . "github.com/tendermint/go-common" + . "github.com/tendermint/tmlibs/common" "testing" ) diff --git a/benchmarks/os_test.go b/benchmarks/os_test.go index 49a160cd0..2c4611c84 100644 --- a/benchmarks/os_test.go +++ b/benchmarks/os_test.go @@ -4,7 +4,7 @@ import ( "os" "testing" - . "github.com/tendermint/go-common" + . 
"github.com/tendermint/tmlibs/common" ) func BenchmarkFileWrite(b *testing.B) { diff --git a/benchmarks/simu/counter.go b/benchmarks/simu/counter.go index 36d1e35df..e9502f956 100644 --- a/benchmarks/simu/counter.go +++ b/benchmarks/simu/counter.go @@ -7,11 +7,11 @@ import ( "fmt" "github.com/gorilla/websocket" - . "github.com/tendermint/go-common" - "github.com/tendermint/go-rpc/client" - "github.com/tendermint/go-rpc/types" "github.com/tendermint/go-wire" _ "github.com/tendermint/tendermint/rpc/core/types" // Register RPCResponse > Result types + "github.com/tendermint/tendermint/rpc/lib/client" + "github.com/tendermint/tendermint/rpc/lib/types" + . "github.com/tendermint/tmlibs/common" ) func main() { @@ -37,13 +37,16 @@ func main() { for i := 0; ; i++ { binary.BigEndian.PutUint64(buf, uint64(i)) //txBytes := hex.EncodeToString(buf[:n]) - request := rpctypes.NewRPCRequest("fakeid", + request, err := rpctypes.MapToRequest("fakeid", "broadcast_tx", map[string]interface{}{"tx": buf[:8]}) + if err != nil { + Exit(err.Error()) + } reqBytes := wire.JSONBytes(request) //fmt.Println("!!", string(reqBytes)) fmt.Print(".") - err := ws.WriteMessage(websocket.TextMessage, reqBytes) + err = ws.WriteMessage(websocket.TextMessage, reqBytes) if err != nil { Exit(err.Error()) } diff --git a/blockchain/log.go b/blockchain/log.go deleted file mode 100644 index 29dc03f6a..000000000 --- a/blockchain/log.go +++ /dev/null @@ -1,7 +0,0 @@ -package blockchain - -import ( - "github.com/tendermint/go-logger" -) - -var log = logger.New("module", "blockchain") diff --git a/blockchain/pool.go b/blockchain/pool.go index 0deacd263..a657b091e 100644 --- a/blockchain/pool.go +++ b/blockchain/pool.go @@ -5,9 +5,10 @@ import ( "sync" "time" - . "github.com/tendermint/go-common" - flow "github.com/tendermint/go-flowrate/flowrate" "github.com/tendermint/tendermint/types" + . 
"github.com/tendermint/tmlibs/common" + flow "github.com/tendermint/tmlibs/flowrate" + "github.com/tendermint/tmlibs/log" ) const ( @@ -58,7 +59,7 @@ func NewBlockPool(start int, requestsCh chan<- BlockRequest, timeoutsCh chan<- s requestsCh: requestsCh, timeoutsCh: timeoutsCh, } - bp.BaseService = *NewBaseService(log, "BlockPool", bp) + bp.BaseService = *NewBaseService(nil, "BlockPool", bp) return bp } @@ -106,7 +107,7 @@ func (pool *BlockPool) removeTimedoutPeers() { // XXX remove curRate != 0 if curRate != 0 && curRate < minRecvRate { pool.sendTimeout(peer.id) - log.Warn("SendTimeout", "peer", peer.id, "reason", "curRate too low") + pool.Logger.Error("SendTimeout", "peer", peer.id, "reason", "curRate too low") peer.didTimeout = true } } @@ -132,7 +133,7 @@ func (pool *BlockPool) IsCaughtUp() bool { // Need at least 1 peer to be considered caught up. if len(pool.peers) == 0 { - log.Debug("Blockpool has no peers") + pool.Logger.Debug("Blockpool has no peers") return false } @@ -142,7 +143,7 @@ func (pool *BlockPool) IsCaughtUp() bool { } isCaughtUp := (height > 0 || time.Now().Sub(pool.startTime) > 5*time.Second) && (maxPeerHeight == 0 || height >= maxPeerHeight) - log.Notice(Fmt("IsCaughtUp: %v", isCaughtUp), "height", height, "maxPeerHeight", maxPeerHeight) + pool.Logger.Info(Fmt("IsCaughtUp: %v", isCaughtUp), "height", height, "maxPeerHeight", maxPeerHeight) return isCaughtUp } @@ -226,6 +227,7 @@ func (pool *BlockPool) SetPeerHeight(peerID string, height int) { peer.height = height } else { peer = newBPPeer(pool, peerID, height) + peer.setLogger(pool.Logger.With("peer", peerID)) pool.peers[peerID] = peer } } @@ -279,6 +281,7 @@ func (pool *BlockPool) makeNextRequester() { nextHeight := pool.height + len(pool.requesters) request := newBPRequester(pool, nextHeight) + request.SetLogger(pool.Logger.With("height", nextHeight)) pool.requesters[nextHeight] = request pool.numPending++ @@ -328,6 +331,8 @@ type bpPeer struct { numPending int32 timeout *time.Timer 
didTimeout bool + + logger log.Logger } func newBPPeer(pool *BlockPool, peerID string, height int) *bpPeer { @@ -336,10 +341,15 @@ func newBPPeer(pool *BlockPool, peerID string, height int) *bpPeer { id: peerID, height: height, numPending: 0, + logger: log.NewNopLogger(), } return peer } +func (peer *bpPeer) setLogger(l log.Logger) { + peer.logger = l +} + func (peer *bpPeer) resetMonitor() { peer.recvMonitor = flow.New(time.Second, time.Second*40) var initialValue = float64(minRecvRate) * math.E @@ -377,7 +387,7 @@ func (peer *bpPeer) onTimeout() { defer peer.pool.mtx.Unlock() peer.pool.sendTimeout(peer.id) - log.Warn("SendTimeout", "peer", peer.id, "reason", "onTimeout") + peer.logger.Error("SendTimeout", "reason", "onTimeout") peer.didTimeout = true } diff --git a/blockchain/pool_test.go b/blockchain/pool_test.go index 220bc5ce5..43ddbaddf 100644 --- a/blockchain/pool_test.go +++ b/blockchain/pool_test.go @@ -5,8 +5,9 @@ import ( "testing" "time" - . "github.com/tendermint/go-common" "github.com/tendermint/tendermint/types" + . "github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/log" ) func init() { @@ -34,6 +35,7 @@ func TestBasic(t *testing.T) { timeoutsCh := make(chan string, 100) requestsCh := make(chan BlockRequest, 100) pool := NewBlockPool(start, requestsCh, timeoutsCh) + pool.SetLogger(log.TestingLogger()) pool.Start() defer pool.Stop() @@ -65,7 +67,7 @@ func TestBasic(t *testing.T) { case peerID := <-timeoutsCh: t.Errorf("timeout: %v", peerID) case request := <-requestsCh: - log.Info("TEST: Pulled new BlockRequest", "request", request) + t.Logf("Pulled new BlockRequest %v", request) if request.Height == 300 { return // Done! 
} @@ -73,7 +75,7 @@ func TestBasic(t *testing.T) { go func() { block := &types.Block{Header: &types.Header{Height: request.Height}} pool.AddBlock(request.PeerID, block, 123) - log.Info("TEST: Added block", "block", request.Height, "peer", request.PeerID) + t.Logf("Added block from peer %v (height: %v)", request.PeerID, request.Height) }() } } @@ -85,11 +87,12 @@ func TestTimeout(t *testing.T) { timeoutsCh := make(chan string, 100) requestsCh := make(chan BlockRequest, 100) pool := NewBlockPool(start, requestsCh, timeoutsCh) + pool.SetLogger(log.TestingLogger()) pool.Start() defer pool.Stop() for _, peer := range peers { - log.Info("Peer", "id", peer.id) + t.Logf("Peer %v", peer.id) } // Introduce each peer. @@ -120,7 +123,7 @@ func TestTimeout(t *testing.T) { for { select { case peerID := <-timeoutsCh: - log.Info("Timeout", "peerID", peerID) + t.Logf("Peer %v timeouted", peerID) if _, ok := timedOut[peerID]; !ok { counter++ if counter == len(peers) { @@ -128,7 +131,7 @@ func TestTimeout(t *testing.T) { } } case request := <-requestsCh: - log.Info("TEST: Pulled new BlockRequest", "request", request) + t.Logf("Pulled new BlockRequest %+v", request) } } } diff --git a/blockchain/reactor.go b/blockchain/reactor.go index f88bccc3d..1c0ef3a7d 100644 --- a/blockchain/reactor.go +++ b/blockchain/reactor.go @@ -6,13 +6,12 @@ import ( "reflect" "time" - cmn "github.com/tendermint/go-common" - cfg "github.com/tendermint/go-config" - "github.com/tendermint/go-p2p" - "github.com/tendermint/go-wire" + wire "github.com/tendermint/go-wire" + "github.com/tendermint/tendermint/p2p" "github.com/tendermint/tendermint/proxy" sm "github.com/tendermint/tendermint/state" "github.com/tendermint/tendermint/types" + cmn "github.com/tendermint/tmlibs/common" ) const ( @@ -43,7 +42,6 @@ type consensusReactor interface { type BlockchainReactor struct { p2p.BaseReactor - config cfg.Config state *sm.State proxyAppConn proxy.AppConnConsensus // same as consensus.proxyAppConn store *BlockStore @@ 
-57,7 +55,7 @@ type BlockchainReactor struct { } // NewBlockchainReactor returns new reactor instance. -func NewBlockchainReactor(config cfg.Config, state *sm.State, proxyAppConn proxy.AppConnConsensus, store *BlockStore, fastSync bool) *BlockchainReactor { +func NewBlockchainReactor(state *sm.State, proxyAppConn proxy.AppConnConsensus, store *BlockStore, fastSync bool) *BlockchainReactor { if state.LastBlockHeight == store.Height()-1 { store.height-- // XXX HACK, make this better } @@ -72,7 +70,6 @@ func NewBlockchainReactor(config cfg.Config, state *sm.State, proxyAppConn proxy timeoutsCh, ) bcR := &BlockchainReactor{ - config: config, state: state, proxyAppConn: proxyAppConn, store: store, @@ -81,7 +78,7 @@ func NewBlockchainReactor(config cfg.Config, state *sm.State, proxyAppConn proxy requestsCh: requestsCh, timeoutsCh: timeoutsCh, } - bcR.BaseReactor = *p2p.NewBaseReactor(log, "BlockchainReactor", bcR) + bcR.BaseReactor = *p2p.NewBaseReactor("BlockchainReactor", bcR) return bcR } @@ -131,11 +128,11 @@ func (bcR *BlockchainReactor) RemovePeer(peer *p2p.Peer, reason interface{}) { func (bcR *BlockchainReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte) { _, msg, err := DecodeMessage(msgBytes) if err != nil { - log.Warn("Error decoding message", "error", err) + bcR.Logger.Error("Error decoding message", "error", err) return } - log.Debug("Receive", "src", src, "chID", chID, "msg", msg) + bcR.Logger.Debug("Receive", "src", src, "chID", chID, "msg", msg) switch msg := msg.(type) { case *bcBlockRequestMessage: @@ -163,7 +160,7 @@ func (bcR *BlockchainReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte) // Got a peer status. Unverified. 
bcR.pool.SetPeerHeight(src.Key, msg.Height) default: - log.Warn(cmn.Fmt("Unknown message type %v", reflect.TypeOf(msg))) + bcR.Logger.Error(cmn.Fmt("Unknown message type %v", reflect.TypeOf(msg))) } } @@ -203,10 +200,10 @@ FOR_LOOP: case _ = <-switchToConsensusTicker.C: height, numPending, _ := bcR.pool.GetStatus() outbound, inbound, _ := bcR.Switch.NumPeers() - log.Info("Consensus ticker", "numPending", numPending, "total", len(bcR.pool.requesters), + bcR.Logger.Info("Consensus ticker", "numPending", numPending, "total", len(bcR.pool.requesters), "outbound", outbound, "inbound", inbound) if bcR.pool.IsCaughtUp() { - log.Notice("Time to switch to consensus reactor!", "height", height) + bcR.Logger.Info("Time to switch to consensus reactor!", "height", height) bcR.pool.Stop() conR := bcR.Switch.Reactor("CONSENSUS").(consensusReactor) @@ -220,12 +217,12 @@ FOR_LOOP: for i := 0; i < 10; i++ { // See if there are any blocks to sync. first, second := bcR.pool.PeekTwoBlocks() - //log.Info("TrySync peeked", "first", first, "second", second) + //bcR.Logger.Info("TrySync peeked", "first", first, "second", second) if first == nil || second == nil { // We need both to sync the first block. break SYNC_LOOP } - firstParts := first.MakePartSet(bcR.config.GetInt("block_part_size")) // TODO: put part size in parts header? 
+ firstParts := first.MakePartSet(types.DefaultBlockPartSize) firstPartsHeader := firstParts.Header() // Finally, verify the first block using the second's commit // NOTE: we can probably make this more efficient, but note that calling @@ -234,7 +231,7 @@ FOR_LOOP: err := bcR.state.Validators.VerifyCommit( bcR.state.ChainID, types.BlockID{first.Hash(), firstPartsHeader}, first.Height, second.LastCommit) if err != nil { - log.Info("error in validation", "error", err) + bcR.Logger.Info("error in validation", "error", err) bcR.pool.RedoRequest(first.Height) break SYNC_LOOP } else { diff --git a/blockchain/store.go b/blockchain/store.go index ac7cfdafc..a96aa0fb7 100644 --- a/blockchain/store.go +++ b/blockchain/store.go @@ -7,8 +7,8 @@ import ( "io" "sync" - . "github.com/tendermint/go-common" - dbm "github.com/tendermint/go-db" + . "github.com/tendermint/tmlibs/common" + dbm "github.com/tendermint/tmlibs/db" "github.com/tendermint/go-wire" "github.com/tendermint/tendermint/types" ) diff --git a/cmd/tendermint/commands/flags/log_level.go b/cmd/tendermint/commands/flags/log_level.go new file mode 100644 index 000000000..acdd24ec2 --- /dev/null +++ b/cmd/tendermint/commands/flags/log_level.go @@ -0,0 +1,87 @@ +package flags + +import ( + "fmt" + "strings" + + "github.com/pkg/errors" + + cfg "github.com/tendermint/tendermint/config" + "github.com/tendermint/tmlibs/log" +) + +const ( + defaultLogLevelKey = "*" +) + +// ParseLogLevel parses complex log level - comma-separated +// list of module:level pairs with an optional *:level pair (* means +// all other modules). +// +// Example: +// ParseLogLevel("consensus:debug,mempool:debug,*:error", log.NewTMLogger(os.Stdout)) +func ParseLogLevel(lvl string, logger log.Logger) (log.Logger, error) { + if lvl == "" { + return nil, errors.New("Empty log level") + } + + l := lvl + + // prefix simple one word levels (e.g. 
"info") with "*" + if !strings.Contains(l, ":") { + l = defaultLogLevelKey + ":" + l + } + + options := make([]log.Option, 0) + + isDefaultLogLevelSet := false + var option log.Option + var err error + + list := strings.Split(l, ",") + for _, item := range list { + moduleAndLevel := strings.Split(item, ":") + + if len(moduleAndLevel) != 2 { + return nil, fmt.Errorf("Expected list in a form of \"module:level\" pairs, given pair %s, list %s", item, list) + } + + module := moduleAndLevel[0] + level := moduleAndLevel[1] + + if module == defaultLogLevelKey { + option, err = log.AllowLevel(level) + if err != nil { + return nil, errors.Wrap(err, fmt.Sprintf("Failed to parse default log level (pair %s, list %s)", item, l)) + } + options = append(options, option) + isDefaultLogLevelSet = true + } else { + switch level { + case "debug": + option = log.AllowDebugWith("module", module) + case "info": + option = log.AllowInfoWith("module", module) + case "error": + option = log.AllowErrorWith("module", module) + case "none": + option = log.AllowNoneWith("module", module) + default: + return nil, fmt.Errorf("Expected either \"info\", \"debug\", \"error\" or \"none\" log level, given %s (pair %s, list %s)", level, item, list) + } + options = append(options, option) + + } + } + + // if "*" is not provided, set default global level + if !isDefaultLogLevelSet { + option, err = log.AllowLevel(cfg.DefaultBaseConfig().LogLevel) + if err != nil { + return nil, err + } + options = append(options, option) + } + + return log.NewFilter(logger, options...), nil +} diff --git a/cmd/tendermint/commands/flags/log_level_test.go b/cmd/tendermint/commands/flags/log_level_test.go new file mode 100644 index 000000000..c89f3f880 --- /dev/null +++ b/cmd/tendermint/commands/flags/log_level_test.go @@ -0,0 +1,64 @@ +package flags_test + +import ( + "bytes" + "strings" + "testing" + + tmflags "github.com/tendermint/tendermint/cmd/tendermint/commands/flags" + "github.com/tendermint/tmlibs/log" +) + +func 
TestParseLogLevel(t *testing.T) { + var buf bytes.Buffer + jsonLogger := log.NewTMJSONLogger(&buf) + + correctLogLevels := []struct { + lvl string + expectedLogLines []string + }{ + {"mempool:error", []string{``, ``, `{"_msg":"Mesmero","level":"error","module":"mempool"}`}}, + {"mempool:error,*:debug", []string{``, ``, `{"_msg":"Mesmero","level":"error","module":"mempool"}`}}, + {"*:debug,wire:none", []string{ + `{"_msg":"Kingpin","level":"debug","module":"mempool"}`, + `{"_msg":"Kitty Pryde","level":"info","module":"mempool"}`, + `{"_msg":"Mesmero","level":"error","module":"mempool"}`}}, + } + + for _, c := range correctLogLevels { + logger, err := tmflags.ParseLogLevel(c.lvl, jsonLogger) + if err != nil { + t.Fatal(err) + } + + logger = logger.With("module", "mempool") + + buf.Reset() + + logger.Debug("Kingpin") + if have := strings.TrimSpace(buf.String()); c.expectedLogLines[0] != have { + t.Errorf("\nwant '%s'\nhave '%s'\nlevel '%s'", c.expectedLogLines[0], have, c.lvl) + } + + buf.Reset() + + logger.Info("Kitty Pryde") + if have := strings.TrimSpace(buf.String()); c.expectedLogLines[1] != have { + t.Errorf("\nwant '%s'\nhave '%s'\nlevel '%s'", c.expectedLogLines[1], have, c.lvl) + } + + buf.Reset() + + logger.Error("Mesmero") + if have := strings.TrimSpace(buf.String()); c.expectedLogLines[2] != have { + t.Errorf("\nwant '%s'\nhave '%s'\nlevel '%s'", c.expectedLogLines[2], have, c.lvl) + } + } + + incorrectLogLevel := []string{"some", "mempool:some", "*:some,mempool:error"} + for _, lvl := range incorrectLogLevel { + if _, err := tmflags.ParseLogLevel(lvl, jsonLogger); err == nil { + t.Fatalf("Expected %s to produce error", lvl) + } + } +} diff --git a/cmd/tendermint/commands/gen_validator.go b/cmd/tendermint/commands/gen_validator.go index a1217e1f0..97c583c22 100644 --- a/cmd/tendermint/commands/gen_validator.go +++ b/cmd/tendermint/commands/gen_validator.go @@ -1,11 +1,11 @@ package commands import ( + "encoding/json" "fmt" "github.com/spf13/cobra" - 
"github.com/tendermint/go-wire" "github.com/tendermint/tendermint/types" ) @@ -21,7 +21,7 @@ func init() { func genValidator(cmd *cobra.Command, args []string) { privValidator := types.GenPrivValidator() - privValidatorJSONBytes := wire.JSONBytesPretty(privValidator) + privValidatorJSONBytes, _ := json.MarshalIndent(privValidator, "", "\t") fmt.Printf(`%v `, string(privValidatorJSONBytes)) } diff --git a/cmd/tendermint/commands/init.go b/cmd/tendermint/commands/init.go index 366ca4e84..cd16707cc 100644 --- a/cmd/tendermint/commands/init.go +++ b/cmd/tendermint/commands/init.go @@ -5,8 +5,8 @@ import ( "github.com/spf13/cobra" - cmn "github.com/tendermint/go-common" "github.com/tendermint/tendermint/types" + cmn "github.com/tendermint/tmlibs/common" ) var initFilesCmd = &cobra.Command{ @@ -20,13 +20,13 @@ func init() { } func initFiles(cmd *cobra.Command, args []string) { - privValFile := config.GetString("priv_validator_file") + privValFile := config.PrivValidatorFile() if _, err := os.Stat(privValFile); os.IsNotExist(err) { privValidator := types.GenPrivValidator() privValidator.SetFile(privValFile) privValidator.Save() - genFile := config.GetString("genesis_file") + genFile := config.GenesisFile() if _, err := os.Stat(genFile); os.IsNotExist(err) { genDoc := types.GenesisDoc{ @@ -40,8 +40,8 @@ func initFiles(cmd *cobra.Command, args []string) { genDoc.SaveAs(genFile) } - log.Notice("Initialized tendermint", "genesis", config.GetString("genesis_file"), "priv_validator", config.GetString("priv_validator_file")) + logger.Info("Initialized tendermint", "genesis", config.GenesisFile(), "priv_validator", config.PrivValidatorFile()) } else { - log.Notice("Already initialized", "priv_validator", config.GetString("priv_validator_file")) + logger.Info("Already initialized", "priv_validator", config.PrivValidatorFile()) } } diff --git a/cmd/tendermint/commands/probe_upnp.go b/cmd/tendermint/commands/probe_upnp.go index bff433bc9..e23e48973 100644 --- 
a/cmd/tendermint/commands/probe_upnp.go +++ b/cmd/tendermint/commands/probe_upnp.go @@ -6,31 +6,30 @@ import ( "github.com/spf13/cobra" - "github.com/tendermint/go-p2p/upnp" + "github.com/tendermint/tendermint/p2p/upnp" ) var probeUpnpCmd = &cobra.Command{ Use: "probe_upnp", Short: "Test UPnP functionality", - Run: probeUpnp, + RunE: probeUpnp, } func init() { RootCmd.AddCommand(probeUpnpCmd) } -func probeUpnp(cmd *cobra.Command, args []string) { - - capabilities, err := upnp.Probe() +func probeUpnp(cmd *cobra.Command, args []string) error { + capabilities, err := upnp.Probe(logger) if err != nil { fmt.Println("Probe failed: %v", err) } else { fmt.Println("Probe success!") jsonBytes, err := json.Marshal(capabilities) if err != nil { - panic(err) + return err } fmt.Println(string(jsonBytes)) } - + return nil } diff --git a/cmd/tendermint/commands/replay.go b/cmd/tendermint/commands/replay.go index 2f0e3266e..0c88b2443 100644 --- a/cmd/tendermint/commands/replay.go +++ b/cmd/tendermint/commands/replay.go @@ -1,36 +1,24 @@ package commands import ( - "fmt" + "github.com/spf13/cobra" "github.com/tendermint/tendermint/consensus" - - "github.com/spf13/cobra" ) var replayCmd = &cobra.Command{ - Use: "replay [walfile]", + Use: "replay", Short: "Replay messages from WAL", Run: func(cmd *cobra.Command, args []string) { - - if len(args) > 1 { - consensus.RunReplayFile(config, args[1], false) - } else { - fmt.Println("replay requires an argument (walfile)") - } + consensus.RunReplayFile(config.BaseConfig, config.Consensus, false) }, } var replayConsoleCmd = &cobra.Command{ - Use: "replay_console [walfile]", + Use: "replay_console", Short: "Replay messages from WAL in a console", Run: func(cmd *cobra.Command, args []string) { - - if len(args) > 1 { - consensus.RunReplayFile(config, args[1], true) - } else { - fmt.Println("replay_console requires an argument (walfile)") - } + consensus.RunReplayFile(config.BaseConfig, config.Consensus, true) }, } diff --git 
a/cmd/tendermint/commands/reset_priv_validator.go b/cmd/tendermint/commands/reset_priv_validator.go index 4b7d2df49..fa12be4ea 100644 --- a/cmd/tendermint/commands/reset_priv_validator.go +++ b/cmd/tendermint/commands/reset_priv_validator.go @@ -5,9 +5,8 @@ import ( "github.com/spf13/cobra" - cfg "github.com/tendermint/go-config" - "github.com/tendermint/log15" "github.com/tendermint/tendermint/types" + "github.com/tendermint/tmlibs/log" ) var resetAllCmd = &cobra.Command{ @@ -30,33 +29,33 @@ func init() { // XXX: this is totally unsafe. // it's only suitable for testnets. func resetAll(cmd *cobra.Command, args []string) { - ResetAll(config, log) + ResetAll(config.DBDir(), config.PrivValidatorFile(), logger) } // XXX: this is totally unsafe. // it's only suitable for testnets. func resetPrivValidator(cmd *cobra.Command, args []string) { - ResetPrivValidator(config, log) + resetPrivValidatorLocal(config.PrivValidatorFile(), logger) } // Exported so other CLI tools can use it -func ResetAll(c cfg.Config, l log15.Logger) { - ResetPrivValidator(c, l) - os.RemoveAll(c.GetString("db_dir")) +func ResetAll(dbDir, privValFile string, logger log.Logger) { + resetPrivValidatorLocal(privValFile, logger) + os.RemoveAll(dbDir) + logger.Info("Removed all data", "dir", dbDir) } -func ResetPrivValidator(c cfg.Config, l log15.Logger) { +func resetPrivValidatorLocal(privValFile string, logger log.Logger) { // Get PrivValidator var privValidator *types.PrivValidator - privValidatorFile := c.GetString("priv_validator_file") - if _, err := os.Stat(privValidatorFile); err == nil { - privValidator = types.LoadPrivValidator(privValidatorFile) + if _, err := os.Stat(privValFile); err == nil { + privValidator = types.LoadPrivValidator(privValFile) privValidator.Reset() - l.Notice("Reset PrivValidator", "file", privValidatorFile) + logger.Info("Reset PrivValidator", "file", privValFile) } else { privValidator = types.GenPrivValidator() - privValidator.SetFile(privValidatorFile) + 
privValidator.SetFile(privValFile) privValidator.Save() - l.Notice("Generated PrivValidator", "file", privValidatorFile) + logger.Info("Generated PrivValidator", "file", privValFile) } } diff --git a/cmd/tendermint/commands/root.go b/cmd/tendermint/commands/root.go index 0cbaa289f..3565f3bb8 100644 --- a/cmd/tendermint/commands/root.go +++ b/cmd/tendermint/commands/root.go @@ -1,31 +1,39 @@ package commands import ( - "github.com/spf13/cobra" + "os" - "github.com/tendermint/go-logger" - tmcfg "github.com/tendermint/tendermint/config/tendermint" + "github.com/spf13/cobra" + "github.com/spf13/viper" + + tmflags "github.com/tendermint/tendermint/cmd/tendermint/commands/flags" + cfg "github.com/tendermint/tendermint/config" + "github.com/tendermint/tmlibs/log" ) var ( - config = tmcfg.GetConfig("") - log = logger.New("module", "main") + config = cfg.DefaultConfig() + logger = log.NewTMLogger(log.NewSyncWriter(os.Stdout)).With("module", "main") ) -//global flag -var logLevel string +func init() { + RootCmd.PersistentFlags().String("log_level", config.LogLevel, "Log level") +} var RootCmd = &cobra.Command{ Use: "tendermint", Short: "Tendermint Core (BFT Consensus) in Go", - PersistentPreRun: func(cmd *cobra.Command, args []string) { - // set the log level in the config and logger - config.Set("log_level", logLevel) - logger.SetLogLevel(logLevel) + PersistentPreRunE: func(cmd *cobra.Command, args []string) error { + err := viper.Unmarshal(config) + if err != nil { + return err + } + config.SetRoot(config.RootDir) + cfg.EnsureRoot(config.RootDir) + logger, err = tmflags.ParseLogLevel(config.LogLevel, logger) + if err != nil { + return err + } + return nil }, } - -func init() { - //parse flag and set config - RootCmd.PersistentFlags().StringVar(&logLevel, "log_level", config.GetString("log_level"), "Log level") -} diff --git a/cmd/tendermint/commands/root_test.go b/cmd/tendermint/commands/root_test.go new file mode 100644 index 000000000..f776a0248 --- /dev/null +++ 
b/cmd/tendermint/commands/root_test.go @@ -0,0 +1,99 @@ +package commands + +import ( + "os" + "strconv" + + "github.com/spf13/cobra" + "github.com/spf13/viper" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + cfg "github.com/tendermint/tendermint/config" + "github.com/tendermint/tmlibs/cli" + + "testing" +) + +var ( + defaultRoot = os.ExpandEnv("$HOME/.some/test/dir") +) + +const ( + rootName = "root" +) + +// isolate provides a clean setup and returns a copy of RootCmd you can +// modify in the test cases +func isolate(cmds ...*cobra.Command) cli.Executable { + viper.Reset() + config = cfg.DefaultConfig() + r := &cobra.Command{ + Use: rootName, + PersistentPreRunE: RootCmd.PersistentPreRunE, + } + r.AddCommand(cmds...) + wr := cli.PrepareBaseCmd(r, "TM", defaultRoot) + return wr +} + +func TestRootConfig(t *testing.T) { + assert, require := assert.New(t), require.New(t) + + // we pre-create a config file we can refer to in the rest of + // the test cases. + cvals := map[string]string{ + "moniker": "monkey", + "fast_sync": "false", + } + // proper types of the above settings + cfast := false + conf, err := cli.WriteDemoConfig(cvals) + require.Nil(err) + + defaults := cfg.DefaultConfig() + dmax := defaults.P2P.MaxNumPeers + + cases := []struct { + args []string + env map[string]string + root string + moniker string + fastSync bool + maxPeer int + }{ + {nil, nil, defaultRoot, defaults.Moniker, defaults.FastSync, dmax}, + // try multiple ways of setting root (two flags, cli vs. 
env) + {[]string{"--home", conf}, nil, conf, cvals["moniker"], cfast, dmax}, + {nil, map[string]string{"TMROOT": conf}, conf, cvals["moniker"], cfast, dmax}, + // check setting p2p subflags two different ways + {[]string{"--p2p.max_num_peers", "420"}, nil, defaultRoot, defaults.Moniker, defaults.FastSync, 420}, + {nil, map[string]string{"TM_P2P_MAX_NUM_PEERS": "17"}, defaultRoot, defaults.Moniker, defaults.FastSync, 17}, + // try to set env that have no flags attached... + {[]string{"--home", conf}, map[string]string{"TM_MONIKER": "funny"}, conf, "funny", cfast, dmax}, + } + + for idx, tc := range cases { + i := strconv.Itoa(idx) + // test command that does nothing, except trigger unmarshalling in root + noop := &cobra.Command{ + Use: "noop", + RunE: func(cmd *cobra.Command, args []string) error { + return nil + }, + } + noop.Flags().Int("p2p.max_num_peers", defaults.P2P.MaxNumPeers, "") + cmd := isolate(noop) + + args := append([]string{rootName, noop.Use}, tc.args...) + err := cli.RunWithArgs(cmd, args, tc.env) + require.Nil(err, i) + assert.Equal(tc.root, config.RootDir, i) + assert.Equal(tc.root, config.P2P.RootDir, i) + assert.Equal(tc.root, config.Consensus.RootDir, i) + assert.Equal(tc.root, config.Mempool.RootDir, i) + assert.Equal(tc.moniker, config.Moniker, i) + assert.Equal(tc.fastSync, config.FastSync, i) + assert.Equal(tc.maxPeer, config.P2P.MaxNumPeers, i) + } + +} diff --git a/cmd/tendermint/commands/run_node.go b/cmd/tendermint/commands/run_node.go index a04b52d09..5e38f2b5f 100644 --- a/cmd/tendermint/commands/run_node.go +++ b/cmd/tendermint/commands/run_node.go @@ -1,124 +1,94 @@ package commands import ( + "fmt" "io/ioutil" "time" "github.com/spf13/cobra" - . 
"github.com/tendermint/go-common" "github.com/tendermint/tendermint/node" "github.com/tendermint/tendermint/types" + cmn "github.com/tendermint/tmlibs/common" ) var runNodeCmd = &cobra.Command{ - Use: "node", - Short: "Run the tendermint node", - PreRun: setConfigFlags, - Run: runNode, + Use: "node", + Short: "Run the tendermint node", + RunE: runNode, } -//flags -var ( - moniker string - nodeLaddr string - seeds string - fastSync bool - skipUPNP bool - rpcLaddr string - grpcLaddr string - proxyApp string - abciTransport string - pex bool -) - func init() { + // bind flags + runNodeCmd.Flags().String("moniker", config.Moniker, "Node Name") - // configuration options - runNodeCmd.Flags().StringVar(&moniker, "moniker", config.GetString("moniker"), - "Node Name") - runNodeCmd.Flags().StringVar(&nodeLaddr, "node_laddr", config.GetString("node_laddr"), - "Node listen address. (0.0.0.0:0 means any interface, any port)") - runNodeCmd.Flags().StringVar(&seeds, "seeds", config.GetString("seeds"), - "Comma delimited host:port seed nodes") - runNodeCmd.Flags().BoolVar(&fastSync, "fast_sync", config.GetBool("fast_sync"), - "Fast blockchain syncing") - runNodeCmd.Flags().BoolVar(&skipUPNP, "skip_upnp", config.GetBool("skip_upnp"), - "Skip UPNP configuration") - runNodeCmd.Flags().StringVar(&rpcLaddr, "rpc_laddr", config.GetString("rpc_laddr"), - "RPC listen address. Port required") - runNodeCmd.Flags().StringVar(&grpcLaddr, "grpc_laddr", config.GetString("grpc_laddr"), - "GRPC listen address (BroadcastTx only). 
Port required") - runNodeCmd.Flags().StringVar(&proxyApp, "proxy_app", config.GetString("proxy_app"), - "Proxy app address, or 'nilapp' or 'dummy' for local testing.") - runNodeCmd.Flags().StringVar(&abciTransport, "abci", config.GetString("abci"), - "Specify abci transport (socket | grpc)") + // node flags + runNodeCmd.Flags().Bool("fast_sync", config.FastSync, "Fast blockchain syncing") + + // abci flags + runNodeCmd.Flags().String("proxy_app", config.ProxyApp, "Proxy app address, or 'nilapp' or 'dummy' for local testing.") + runNodeCmd.Flags().String("abci", config.ABCI, "Specify abci transport (socket | grpc)") + + // rpc flags + runNodeCmd.Flags().String("rpc_laddr", config.RPCListenAddress, "RPC listen address. Port required") + runNodeCmd.Flags().String("grpc_laddr", config.GRPCListenAddress, "GRPC listen address (BroadcastTx only). Port required") + + // p2p flags + runNodeCmd.Flags().String("p2p.laddr", config.P2P.ListenAddress, "Node listen address. (0.0.0.0:0 means any interface, any port)") + runNodeCmd.Flags().String("p2p.seeds", config.P2P.Seeds, "Comma delimited host:port seed nodes") + runNodeCmd.Flags().Bool("p2p.skip_upnp", config.P2P.SkipUPNP, "Skip UPNP configuration") // feature flags - runNodeCmd.Flags().BoolVar(&pex, "pex", config.GetBool("pex_reactor"), - "Enable Peer-Exchange (dev feature)") + runNodeCmd.Flags().Bool("p2p.pex", config.P2P.PexReactor, "Enable Peer-Exchange (dev feature)") RootCmd.AddCommand(runNodeCmd) } -func setConfigFlags(cmd *cobra.Command, args []string) { - - // Merge parsed flag values onto config - config.Set("moniker", moniker) - config.Set("node_laddr", nodeLaddr) - config.Set("seeds", seeds) - config.Set("fast_sync", fastSync) - config.Set("skip_upnp", skipUPNP) - config.Set("rpc_laddr", rpcLaddr) - config.Set("grpc_laddr", grpcLaddr) - config.Set("proxy_app", proxyApp) - config.Set("abci", abciTransport) - config.Set("pex_reactor", pex) -} - // Users wishing to: // * Use an external signer for their validators // 
* Supply an in-proc abci app // should import github.com/tendermint/tendermint/node and implement // their own run_node to call node.NewNode (instead of node.NewNodeDefault) // with their custom priv validator and/or custom proxy.ClientCreator -func runNode(cmd *cobra.Command, args []string) { +func runNode(cmd *cobra.Command, args []string) error { // Wait until the genesis doc becomes available // This is for Mintnet compatibility. // TODO: If Mintnet gets deprecated or genesis_file is // always available, remove. - genDocFile := config.GetString("genesis_file") - if !FileExists(genDocFile) { - log.Notice(Fmt("Waiting for genesis file %v...", genDocFile)) + genDocFile := config.GenesisFile() + if !cmn.FileExists(genDocFile) { + logger.Info(cmn.Fmt("Waiting for genesis file %v...", genDocFile)) for { time.Sleep(time.Second) - if !FileExists(genDocFile) { + if !cmn.FileExists(genDocFile) { continue } jsonBlob, err := ioutil.ReadFile(genDocFile) if err != nil { - Exit(Fmt("Couldn't read GenesisDoc file: %v", err)) + return fmt.Errorf("Couldn't read GenesisDoc file: %v", err) } genDoc, err := types.GenesisDocFromJSON(jsonBlob) if err != nil { - Exit(Fmt("Error reading GenesisDoc: %v", err)) + return fmt.Errorf("Error reading GenesisDoc: %v", err) } if genDoc.ChainID == "" { - Exit(Fmt("Genesis doc %v must include non-empty chain_id", genDocFile)) + return fmt.Errorf("Genesis doc %v must include non-empty chain_id", genDocFile) } - config.Set("chain_id", genDoc.ChainID) + config.ChainID = genDoc.ChainID } } // Create & start node - n := node.NewNodeDefault(config) + n := node.NewNodeDefault(config, logger.With("module", "node")) if _, err := n.Start(); err != nil { - Exit(Fmt("Failed to start node: %v", err)) + return fmt.Errorf("Failed to start node: %v", err) } else { - log.Notice("Started node", "nodeInfo", n.Switch().NodeInfo()) + logger.Info("Started node", "nodeInfo", n.Switch().NodeInfo()) } // Trap signal, run forever. 
n.RunForever() + + return nil } diff --git a/cmd/tendermint/commands/show_validator.go b/cmd/tendermint/commands/show_validator.go index 4aa80ae14..53a687c6d 100644 --- a/cmd/tendermint/commands/show_validator.go +++ b/cmd/tendermint/commands/show_validator.go @@ -5,7 +5,7 @@ import ( "github.com/spf13/cobra" - "github.com/tendermint/go-wire" + "github.com/tendermint/go-wire/data" "github.com/tendermint/tendermint/types" ) @@ -20,7 +20,7 @@ func init() { } func showValidator(cmd *cobra.Command, args []string) { - privValidatorFile := config.GetString("priv_validator_file") - privValidator := types.LoadOrGenPrivValidator(privValidatorFile) - fmt.Println(string(wire.JSONBytesPretty(privValidator.PubKey))) + privValidator := types.LoadOrGenPrivValidator(config.PrivValidatorFile(), logger) + pubKeyJSONBytes, _ := data.ToJSON(privValidator.PubKey) + fmt.Println(string(pubKeyJSONBytes)) } diff --git a/cmd/tendermint/commands/testnet.go b/cmd/tendermint/commands/testnet.go index 0a2e00ad0..58767eb05 100644 --- a/cmd/tendermint/commands/testnet.go +++ b/cmd/tendermint/commands/testnet.go @@ -7,7 +7,7 @@ import ( "github.com/spf13/cobra" - cmn "github.com/tendermint/go-common" + cmn "github.com/tendermint/tmlibs/common" "github.com/tendermint/tendermint/types" ) diff --git a/cmd/tendermint/main.go b/cmd/tendermint/main.go index cddae985b..5493e4f2f 100644 --- a/cmd/tendermint/main.go +++ b/cmd/tendermint/main.go @@ -1,15 +1,13 @@ package main import ( - "fmt" "os" "github.com/tendermint/tendermint/cmd/tendermint/commands" + "github.com/tendermint/tmlibs/cli" ) func main() { - if err := commands.RootCmd.Execute(); err != nil { - fmt.Println(err) - os.Exit(1) - } + cmd := cli.PrepareBaseCmd(commands.RootCmd, "TM", os.ExpandEnv("$HOME/.tendermint")) + cmd.Execute() } diff --git a/config/config.go b/config/config.go new file mode 100644 index 000000000..1da45abab --- /dev/null +++ b/config/config.go @@ -0,0 +1,295 @@ +package config + +import ( + "path/filepath" + "time" + + 
"github.com/tendermint/tendermint/types" +) + +type Config struct { + // Top level options use an anonymous struct + BaseConfig `mapstructure:",squash"` + + // Options for services + P2P *P2PConfig `mapstructure:"p2p"` + Mempool *MempoolConfig `mapstructure:"mempool"` + Consensus *ConsensusConfig `mapstructure:"consensus"` +} + +func DefaultConfig() *Config { + return &Config{ + BaseConfig: DefaultBaseConfig(), + P2P: DefaultP2PConfig(), + Mempool: DefaultMempoolConfig(), + Consensus: DefaultConsensusConfig(), + } +} + +func TestConfig() *Config { + return &Config{ + BaseConfig: TestBaseConfig(), + P2P: TestP2PConfig(), + Mempool: DefaultMempoolConfig(), + Consensus: TestConsensusConfig(), + } +} + +// Set the RootDir for all Config structs +func (cfg *Config) SetRoot(root string) *Config { + cfg.BaseConfig.RootDir = root + cfg.P2P.RootDir = root + cfg.Mempool.RootDir = root + cfg.Consensus.RootDir = root + return cfg +} + +// BaseConfig struct for a Tendermint node +type BaseConfig struct { + // The root directory for all data. 
+ // This should be set in viper so it can unmarshal into this struct + RootDir string `mapstructure:"home"` + + // The ID of the chain to join (should be signed with every transaction and vote) + ChainID string `mapstructure:"chain_id"` + + // A JSON file containing the initial validator set and other metadata + Genesis string `mapstructure:"genesis_file"` + + // A JSON file containing the private key to use as a validator in the consensus protocol + PrivValidator string `mapstructure:"priv_validator_file"` + + // A custom human-readable name for this node + Moniker string `mapstructure:"moniker"` + + // TCP or UNIX socket address of the ABCI application, + // or the name of an ABCI application compiled in with the Tendermint binary + ProxyApp string `mapstructure:"proxy_app"` + + // Mechanism to connect to the ABCI application: socket | grpc + ABCI string `mapstructure:"abci"` + + // Output level for logging + LogLevel string `mapstructure:"log_level"` + + // TCP or UNIX socket address for the profiling server to listen on + ProfListenAddress string `mapstructure:"prof_laddr"` + + // If this node is many blocks behind the tip of the chain, FastSync + // allows it to catch up quickly by downloading blocks in parallel + // and verifying their commits + FastSync bool `mapstructure:"fast_sync"` + + // If true, query the ABCI app on connecting to a new peer + // so the app can decide if we should keep the connection or not + FilterPeers bool `mapstructure:"filter_peers"` // false + + // What indexer to use for transactions + TxIndex string `mapstructure:"tx_index"` + + // Database backend: leveldb | memdb + DBBackend string `mapstructure:"db_backend"` + + // Database directory + DBPath string `mapstructure:"db_dir"` + + // TCP or UNIX socket address for the RPC server to listen on + RPCListenAddress string `mapstructure:"rpc_laddr"` + + // TCP or UNIX socket address for the gRPC server to listen on + // NOTE: This server only supports /broadcast_tx_commit + 
GRPCListenAddress string `mapstructure:"grpc_laddr"` +} + +func DefaultBaseConfig() BaseConfig { + return BaseConfig{ + Genesis: "genesis.json", + PrivValidator: "priv_validator.json", + Moniker: "anonymous", + ProxyApp: "tcp://127.0.0.1:46658", + ABCI: "socket", + LogLevel: "info", + ProfListenAddress: "", + FastSync: true, + FilterPeers: false, + TxIndex: "kv", + DBBackend: "leveldb", + DBPath: "data", + RPCListenAddress: "tcp://0.0.0.0:46657", + GRPCListenAddress: "", + } +} + +func TestBaseConfig() BaseConfig { + conf := DefaultBaseConfig() + conf.ChainID = "tendermint_test" + conf.ProxyApp = "dummy" + conf.FastSync = false + conf.DBBackend = "memdb" + conf.RPCListenAddress = "tcp://0.0.0.0:36657" + conf.GRPCListenAddress = "tcp://0.0.0.0:36658" + return conf +} + +func (b BaseConfig) GenesisFile() string { + return rootify(b.Genesis, b.RootDir) +} + +func (b BaseConfig) PrivValidatorFile() string { + return rootify(b.PrivValidator, b.RootDir) +} + +func (b BaseConfig) DBDir() string { + return rootify(b.DBPath, b.RootDir) +} + +type P2PConfig struct { + RootDir string `mapstructure:"home"` + ListenAddress string `mapstructure:"laddr"` + Seeds string `mapstructure:"seeds"` + SkipUPNP bool `mapstructure:"skip_upnp"` + AddrBook string `mapstructure:"addr_book_file"` + AddrBookStrict bool `mapstructure:"addr_book_strict"` + PexReactor bool `mapstructure:"pex"` + MaxNumPeers int `mapstructure:"max_num_peers"` +} + +func DefaultP2PConfig() *P2PConfig { + return &P2PConfig{ + ListenAddress: "tcp://0.0.0.0:46656", + AddrBook: "addrbook.json", + AddrBookStrict: true, + MaxNumPeers: 50, + } +} + +func TestP2PConfig() *P2PConfig { + conf := DefaultP2PConfig() + conf.ListenAddress = "tcp://0.0.0.0:36656" + conf.SkipUPNP = true + return conf +} + +func (p *P2PConfig) AddrBookFile() string { + return rootify(p.AddrBook, p.RootDir) +} + +type MempoolConfig struct { + RootDir string `mapstructure:"home"` + Recheck bool `mapstructure:"recheck"` + RecheckEmpty bool 
`mapstructure:"recheck_empty"` + Broadcast bool `mapstructure:"broadcast"` + WalPath string `mapstructure:"wal_dir"` +} + +func DefaultMempoolConfig() *MempoolConfig { + return &MempoolConfig{ + Recheck: true, + RecheckEmpty: true, + Broadcast: true, + WalPath: "data/mempool.wal", + } +} + +func (m *MempoolConfig) WalDir() string { + return rootify(m.WalPath, m.RootDir) +} + +// ConsensusConfig holds details about the WAL, the block structure, +// and timeouts in the consensus protocol. +type ConsensusConfig struct { + RootDir string `mapstructure:"home"` + WalPath string `mapstructure:"wal_file"` + WalLight bool `mapstructure:"wal_light"` + walFile string // overrides WalPath if set + + // All timeouts are in ms + TimeoutPropose int `mapstructure:"timeout_propose"` + TimeoutProposeDelta int `mapstructure:"timeout_propose_delta"` + TimeoutPrevote int `mapstructure:"timeout_prevote"` + TimeoutPrevoteDelta int `mapstructure:"timeout_prevote_delta"` + TimeoutPrecommit int `mapstructure:"timeout_precommit"` + TimeoutPrecommitDelta int `mapstructure:"timeout_precommit_delta"` + TimeoutCommit int `mapstructure:"timeout_commit"` + + // Make progress as soon as we have all the precommits (as if TimeoutCommit = 0) + SkipTimeoutCommit bool `mapstructure:"skip_timeout_commit"` + + // BlockSize + MaxBlockSizeTxs int `mapstructure:"max_block_size_txs"` + MaxBlockSizeBytes int `mapstructure:"max_block_size_bytes"` + + // TODO: This probably shouldn't be exposed but it makes it + // easy to write tests for the wal/replay + BlockPartSize int `mapstructure:"block_part_size"` +} + +// Wait this long for a proposal +func (cfg *ConsensusConfig) Propose(round int) time.Duration { + return time.Duration(cfg.TimeoutPropose+cfg.TimeoutProposeDelta*round) * time.Millisecond +} + +// After receiving any +2/3 prevote, wait this long for stragglers +func (cfg *ConsensusConfig) Prevote(round int) time.Duration { + return 
time.Duration(cfg.TimeoutPrevote+cfg.TimeoutPrevoteDelta*round) * time.Millisecond +} + +// After receiving any +2/3 precommits, wait this long for stragglers +func (cfg *ConsensusConfig) Precommit(round int) time.Duration { + return time.Duration(cfg.TimeoutPrecommit+cfg.TimeoutPrecommitDelta*round) * time.Millisecond +} + +// After receiving +2/3 precommits for a single block (a commit), wait this long for stragglers in the next height's RoundStepNewHeight +func (cfg *ConsensusConfig) Commit(t time.Time) time.Time { + return t.Add(time.Duration(cfg.TimeoutCommit) * time.Millisecond) +} + +func DefaultConsensusConfig() *ConsensusConfig { + return &ConsensusConfig{ + WalPath: "data/cs.wal/wal", + WalLight: false, + TimeoutPropose: 3000, + TimeoutProposeDelta: 500, + TimeoutPrevote: 1000, + TimeoutPrevoteDelta: 500, + TimeoutPrecommit: 1000, + TimeoutPrecommitDelta: 500, + TimeoutCommit: 1000, + SkipTimeoutCommit: false, + MaxBlockSizeTxs: 10000, + MaxBlockSizeBytes: 1, // TODO + BlockPartSize: types.DefaultBlockPartSize, // TODO: we shouldn't be importing types + } +} + +func TestConsensusConfig() *ConsensusConfig { + config := DefaultConsensusConfig() + config.TimeoutPropose = 2000 + config.TimeoutProposeDelta = 1 + config.TimeoutPrevote = 10 + config.TimeoutPrevoteDelta = 1 + config.TimeoutPrecommit = 10 + config.TimeoutPrecommitDelta = 1 + config.TimeoutCommit = 10 + config.SkipTimeoutCommit = true + return config +} + +func (c *ConsensusConfig) WalFile() string { + if c.walFile != "" { + return c.walFile + } + return rootify(c.WalPath, c.RootDir) +} + +func (c *ConsensusConfig) SetWalFile(walFile string) { + c.walFile = walFile +} + +// helper function to make config creation independent of root dir +func rootify(path, root string) string { + if filepath.IsAbs(path) { + return path + } + return filepath.Join(root, path) +} diff --git a/config/config_test.go b/config/config_test.go new file mode 100644 index 000000000..6379960fa --- /dev/null +++ 
b/config/config_test.go @@ -0,0 +1,28 @@ +package config + +import ( + "testing" + + "github.com/stretchr/testify/assert" +) + +func TestDefaultConfig(t *testing.T) { + assert := assert.New(t) + + // set up some defaults + cfg := DefaultConfig() + assert.NotNil(cfg.P2P) + assert.NotNil(cfg.Mempool) + assert.NotNil(cfg.Consensus) + + // check the root dir stuff... + cfg.SetRoot("/foo") + cfg.Genesis = "bar" + cfg.DBPath = "/opt/data" + cfg.Mempool.WalPath = "wal/mem/" + + assert.Equal("/foo/bar", cfg.GenesisFile()) + assert.Equal("/opt/data", cfg.DBDir()) + assert.Equal("/foo/wal/mem", cfg.Mempool.WalDir()) + +} diff --git a/config/tendermint/config.go b/config/tendermint/config.go deleted file mode 100644 index 5ddde460d..000000000 --- a/config/tendermint/config.go +++ /dev/null @@ -1,125 +0,0 @@ -package tendermint - -import ( - "os" - "path" - "strings" - - . "github.com/tendermint/go-common" - cfg "github.com/tendermint/go-config" -) - -func getTMRoot(rootDir string) string { - if rootDir == "" { - rootDir = os.Getenv("TMHOME") - } - if rootDir == "" { - // deprecated, use TMHOME (TODO: remove in TM 0.11.0) - rootDir = os.Getenv("TMROOT") - } - if rootDir == "" { - rootDir = os.Getenv("HOME") + "/.tendermint" - } - return rootDir -} - -func initTMRoot(rootDir string) { - rootDir = getTMRoot(rootDir) - EnsureDir(rootDir, 0700) - EnsureDir(rootDir+"/data", 0700) - - configFilePath := path.Join(rootDir, "config.toml") - - // Write default config file if missing. 
- if !FileExists(configFilePath) { - // Ask user for moniker - // moniker := cfg.Prompt("Type hostname: ", "anonymous") - MustWriteFile(configFilePath, []byte(defaultConfig("anonymous")), 0644) - } -} - -func GetConfig(rootDir string) cfg.Config { - rootDir = getTMRoot(rootDir) - initTMRoot(rootDir) - - configFilePath := path.Join(rootDir, "config.toml") - mapConfig, err := cfg.ReadMapConfigFromFile(configFilePath) - if err != nil { - Exit(Fmt("Could not read config: %v", err)) - } - - // Set defaults or panic - if mapConfig.IsSet("chain_id") { - Exit("Cannot set 'chain_id' via config.toml") - } - if mapConfig.IsSet("revision_file") { - Exit("Cannot set 'revision_file' via config.toml. It must match what's in the Makefile") - } - mapConfig.SetRequired("chain_id") // blows up if you try to use it before setting. - mapConfig.SetDefault("genesis_file", rootDir+"/genesis.json") - mapConfig.SetDefault("proxy_app", "tcp://127.0.0.1:46658") - mapConfig.SetDefault("abci", "socket") - mapConfig.SetDefault("moniker", "anonymous") - mapConfig.SetDefault("node_laddr", "tcp://0.0.0.0:46656") - mapConfig.SetDefault("seeds", "") - // mapConfig.SetDefault("seeds", "goldenalchemist.chaintest.net:46656") - mapConfig.SetDefault("fast_sync", true) - mapConfig.SetDefault("skip_upnp", false) - mapConfig.SetDefault("addrbook_file", rootDir+"/addrbook.json") - mapConfig.SetDefault("addrbook_strict", true) // disable to allow connections locally - mapConfig.SetDefault("pex_reactor", false) // enable for peer exchange - mapConfig.SetDefault("priv_validator_file", rootDir+"/priv_validator.json") - mapConfig.SetDefault("db_backend", "leveldb") - mapConfig.SetDefault("db_dir", rootDir+"/data") - mapConfig.SetDefault("log_level", "info") - mapConfig.SetDefault("rpc_laddr", "tcp://0.0.0.0:46657") - mapConfig.SetDefault("grpc_laddr", "") - mapConfig.SetDefault("prof_laddr", "") - mapConfig.SetDefault("revision_file", rootDir+"/revision") - mapConfig.SetDefault("cs_wal_file", 
rootDir+"/data/cs.wal/wal") - mapConfig.SetDefault("cs_wal_light", false) - mapConfig.SetDefault("filter_peers", false) - - mapConfig.SetDefault("block_size", 10000) // max number of txs - mapConfig.SetDefault("block_part_size", 65536) // part size 64K - mapConfig.SetDefault("disable_data_hash", false) - - // all timeouts are in ms - mapConfig.SetDefault("timeout_handshake", 10000) - mapConfig.SetDefault("timeout_propose", 3000) - mapConfig.SetDefault("timeout_propose_delta", 500) - mapConfig.SetDefault("timeout_prevote", 1000) - mapConfig.SetDefault("timeout_prevote_delta", 500) - mapConfig.SetDefault("timeout_precommit", 1000) - mapConfig.SetDefault("timeout_precommit_delta", 500) - mapConfig.SetDefault("timeout_commit", 1000) - - // make progress asap (no `timeout_commit`) on full precommit votes - mapConfig.SetDefault("skip_timeout_commit", false) - mapConfig.SetDefault("mempool_recheck", true) - mapConfig.SetDefault("mempool_recheck_empty", true) - mapConfig.SetDefault("mempool_broadcast", true) - mapConfig.SetDefault("mempool_wal_dir", rootDir+"/data/mempool.wal") - - mapConfig.SetDefault("tx_index", "kv") - - return mapConfig -} - -var defaultConfigTmpl = `# This is a TOML config file. -# For more information, see https://github.com/toml-lang/toml - -proxy_app = "tcp://127.0.0.1:46658" -moniker = "__MONIKER__" -node_laddr = "tcp://0.0.0.0:46656" -seeds = "" -fast_sync = true -db_backend = "leveldb" -log_level = "notice" -rpc_laddr = "tcp://0.0.0.0:46657" -` - -func defaultConfig(moniker string) (defaultConfig string) { - defaultConfig = strings.Replace(defaultConfigTmpl, "__MONIKER__", moniker, -1) - return -} diff --git a/config/tendermint/logrotate.config b/config/tendermint/logrotate.config deleted file mode 100644 index 73eaf74e7..000000000 --- a/config/tendermint/logrotate.config +++ /dev/null @@ -1,22 +0,0 @@ -// If you wanted to use logrotate, I suppose this might be the config you want. 
-// Instead, I'll just write our own, that way we don't need sudo to install. - -$HOME/.tendermint/logs/tendermint.log { - missingok - notifempty - rotate 12 - daily - size 10M - compress - delaycompress -} - -$HOME/.barak/logs/barak.log { - missingok - notifempty - rotate 12 - weekly - size 10M - compress - delaycompress -} diff --git a/config/tendermint_test/config.go b/config/tendermint_test/config.go deleted file mode 100644 index 9d405dc95..000000000 --- a/config/tendermint_test/config.go +++ /dev/null @@ -1,164 +0,0 @@ -// Import this in all *_test.go files to initialize ~/.tendermint_test. - -package tendermint_test - -import ( - "os" - "path" - "strings" - - . "github.com/tendermint/go-common" - cfg "github.com/tendermint/go-config" - "github.com/tendermint/go-logger" -) - -func init() { - // Creates ~/.tendermint_test - EnsureDir(os.Getenv("HOME")+"/.tendermint_test", 0700) -} - -func initTMRoot(rootDir string) { - // Remove ~/.tendermint_test_bak - if FileExists(rootDir + "_bak") { - err := os.RemoveAll(rootDir + "_bak") - if err != nil { - PanicSanity(err.Error()) - } - } - // Move ~/.tendermint_test to ~/.tendermint_test_bak - if FileExists(rootDir) { - err := os.Rename(rootDir, rootDir+"_bak") - if err != nil { - PanicSanity(err.Error()) - } - } - // Create new dir - EnsureDir(rootDir, 0700) - EnsureDir(rootDir+"/data", 0700) - - configFilePath := path.Join(rootDir, "config.toml") - genesisFilePath := path.Join(rootDir, "genesis.json") - privFilePath := path.Join(rootDir, "priv_validator.json") - - // Write default config file if missing. 
- if !FileExists(configFilePath) { - // Ask user for moniker - // moniker := cfg.Prompt("Type hostname: ", "anonymous") - MustWriteFile(configFilePath, []byte(defaultConfig("anonymous")), 0644) - } - if !FileExists(genesisFilePath) { - MustWriteFile(genesisFilePath, []byte(defaultGenesis), 0644) - } - // we always overwrite the priv val - MustWriteFile(privFilePath, []byte(defaultPrivValidator), 0644) -} - -func ResetConfig(localPath string) cfg.Config { - rootDir := os.Getenv("HOME") + "/.tendermint_test/" + localPath - initTMRoot(rootDir) - - configFilePath := path.Join(rootDir, "config.toml") - mapConfig, err := cfg.ReadMapConfigFromFile(configFilePath) - if err != nil { - Exit(Fmt("Could not read config: %v", err)) - } - - // Set defaults or panic - if mapConfig.IsSet("chain_id") { - Exit("Cannot set 'chain_id' via config.toml") - } - mapConfig.SetDefault("chain_id", "tendermint_test") - mapConfig.SetDefault("genesis_file", rootDir+"/genesis.json") - mapConfig.SetDefault("proxy_app", "dummy") - mapConfig.SetDefault("abci", "socket") - mapConfig.SetDefault("moniker", "anonymous") - mapConfig.SetDefault("node_laddr", "tcp://0.0.0.0:36656") - mapConfig.SetDefault("fast_sync", false) - mapConfig.SetDefault("skip_upnp", true) - mapConfig.SetDefault("addrbook_file", rootDir+"/addrbook.json") - mapConfig.SetDefault("addrbook_strict", true) // disable to allow connections locally - mapConfig.SetDefault("pex_reactor", false) // enable for peer exchange - mapConfig.SetDefault("priv_validator_file", rootDir+"/priv_validator.json") - mapConfig.SetDefault("db_backend", "memdb") - mapConfig.SetDefault("db_dir", rootDir+"/data") - mapConfig.SetDefault("log_level", "info") - mapConfig.SetDefault("rpc_laddr", "tcp://0.0.0.0:36657") - mapConfig.SetDefault("grpc_laddr", "tcp://0.0.0.0:36658") - mapConfig.SetDefault("prof_laddr", "") - mapConfig.SetDefault("revision_file", rootDir+"/revision") - mapConfig.SetDefault("cs_wal_file", rootDir+"/data/cs.wal/wal") - 
mapConfig.SetDefault("cs_wal_light", false) - mapConfig.SetDefault("filter_peers", false) - - mapConfig.SetDefault("block_size", 10000) - mapConfig.SetDefault("block_part_size", 65536) // part size 64K - mapConfig.SetDefault("disable_data_hash", false) - mapConfig.SetDefault("timeout_handshake", 10000) - mapConfig.SetDefault("timeout_propose", 2000) - mapConfig.SetDefault("timeout_propose_delta", 1) - mapConfig.SetDefault("timeout_prevote", 10) - mapConfig.SetDefault("timeout_prevote_delta", 1) - mapConfig.SetDefault("timeout_precommit", 10) - mapConfig.SetDefault("timeout_precommit_delta", 1) - mapConfig.SetDefault("timeout_commit", 10) - mapConfig.SetDefault("skip_timeout_commit", true) - mapConfig.SetDefault("mempool_recheck", true) - mapConfig.SetDefault("mempool_recheck_empty", true) - mapConfig.SetDefault("mempool_broadcast", true) - mapConfig.SetDefault("mempool_wal_dir", "") - - mapConfig.SetDefault("tx_index", "kv") - - logger.SetLogLevel(mapConfig.GetString("log_level")) - - return mapConfig -} - -var defaultConfigTmpl = `# This is a TOML config file. 
-# For more information, see https://github.com/toml-lang/toml - -proxy_app = "dummy" -moniker = "__MONIKER__" -node_laddr = "tcp://0.0.0.0:36656" -seeds = "" -fast_sync = false -db_backend = "memdb" -log_level = "info" -rpc_laddr = "tcp://0.0.0.0:36657" -` - -func defaultConfig(moniker string) (defaultConfig string) { - defaultConfig = strings.Replace(defaultConfigTmpl, "__MONIKER__", moniker, -1) - return -} - -var defaultGenesis = `{ - "genesis_time": "0001-01-01T00:00:00.000Z", - "chain_id": "tendermint_test", - "validators": [ - { - "pub_key": [ - 1, - "3B3069C422E19688B45CBFAE7BB009FC0FA1B1EA86593519318B7214853803C8" - ], - "amount": 10, - "name": "" - } - ], - "app_hash": "" -}` - -var defaultPrivValidator = `{ - "address": "D028C9981F7A87F3093672BF0D5B0E2A1B3ED456", - "pub_key": [ - 1, - "3B3069C422E19688B45CBFAE7BB009FC0FA1B1EA86593519318B7214853803C8" - ], - "priv_key": [ - 1, - "27F82582AEFAE7AB151CFB01C48BB6C1A0DA78F9BDDA979A9F70A84D074EB07D3B3069C422E19688B45CBFAE7BB009FC0FA1B1EA86593519318B7214853803C8" - ], - "last_height": 0, - "last_round": 0, - "last_step": 0 -}` diff --git a/config/toml.go b/config/toml.go new file mode 100644 index 000000000..cf232fbf6 --- /dev/null +++ b/config/toml.go @@ -0,0 +1,135 @@ +package config + +import ( + "os" + "path" + "path/filepath" + "strings" + + cmn "github.com/tendermint/tmlibs/common" +) + +/****** these are for production settings ***********/ + +func EnsureRoot(rootDir string) { + cmn.EnsureDir(rootDir, 0700) + cmn.EnsureDir(rootDir+"/data", 0700) + + configFilePath := path.Join(rootDir, "config.toml") + + // Write default config file if missing. + if !cmn.FileExists(configFilePath) { + // Ask user for moniker + // moniker := cfg.Prompt("Type hostname: ", "anonymous") + cmn.MustWriteFile(configFilePath, []byte(defaultConfig("anonymous")), 0644) + } +} + +var defaultConfigTmpl = `# This is a TOML config file. 
+# For more information, see https://github.com/toml-lang/toml + +proxy_app = "tcp://127.0.0.1:46658" +moniker = "__MONIKER__" +node_laddr = "tcp://0.0.0.0:46656" +seeds = "" +fast_sync = true +db_backend = "leveldb" +log_level = "info" +rpc_laddr = "tcp://0.0.0.0:46657" +` + +func defaultConfig(moniker string) (defaultConfig string) { + defaultConfig = strings.Replace(defaultConfigTmpl, "__MONIKER__", moniker, -1) + return +} + +/****** these are for test settings ***********/ + +func ResetTestRoot(testName string) *Config { + rootDir := os.ExpandEnv("$HOME/.tendermint_test") + rootDir = filepath.Join(rootDir, testName) + // Remove ~/.tendermint_test_bak + if cmn.FileExists(rootDir + "_bak") { + err := os.RemoveAll(rootDir + "_bak") + if err != nil { + cmn.PanicSanity(err.Error()) + } + } + // Move ~/.tendermint_test to ~/.tendermint_test_bak + if cmn.FileExists(rootDir) { + err := os.Rename(rootDir, rootDir+"_bak") + if err != nil { + cmn.PanicSanity(err.Error()) + } + } + // Create new dir + cmn.EnsureDir(rootDir, 0700) + cmn.EnsureDir(rootDir+"/data", 0700) + + configFilePath := path.Join(rootDir, "config.toml") + genesisFilePath := path.Join(rootDir, "genesis.json") + privFilePath := path.Join(rootDir, "priv_validator.json") + + // Write default config file if missing. + if !cmn.FileExists(configFilePath) { + // Ask user for moniker + cmn.MustWriteFile(configFilePath, []byte(testConfig("anonymous")), 0644) + } + if !cmn.FileExists(genesisFilePath) { + cmn.MustWriteFile(genesisFilePath, []byte(testGenesis), 0644) + } + // we always overwrite the priv val + cmn.MustWriteFile(privFilePath, []byte(testPrivValidator), 0644) + + config := TestConfig().SetRoot(rootDir) + return config +} + +var testConfigTmpl = `# This is a TOML config file. 
+# For more information, see https://github.com/toml-lang/toml
+
+proxy_app = "dummy"
+moniker = "__MONIKER__"
+node_laddr = "tcp://0.0.0.0:36656"
+seeds = ""
+fast_sync = false
+db_backend = "memdb"
+log_level = "info"
+rpc_laddr = "tcp://0.0.0.0:36657"
+`
+
+func testConfig(moniker string) (testConfig string) {
+	testConfig = strings.Replace(testConfigTmpl, "__MONIKER__", moniker, -1)
+	return
+}
+
+var testGenesis = `{
+	"genesis_time": "0001-01-01T00:00:00.000Z",
+	"chain_id": "tendermint_test",
+	"validators": [
+		{
+			"pub_key": {
+				"type": "ed25519",
+				"data":"3B3069C422E19688B45CBFAE7BB009FC0FA1B1EA86593519318B7214853803C8"
+			},
+			"amount": 10,
+			"name": ""
+		}
+	],
+	"app_hash": ""
}`
+
+var testPrivValidator = `{
+	"address": "D028C9981F7A87F3093672BF0D5B0E2A1B3ED456",
+	"pub_key": {
+		"type": "ed25519",
+		"data": "3B3069C422E19688B45CBFAE7BB009FC0FA1B1EA86593519318B7214853803C8"
+	},
+	"priv_key": {
+		"type": "ed25519",
+		"data": "27F82582AEFAE7AB151CFB01C48BB6C1A0DA78F9BDDA979A9F70A84D074EB07D3B3069C422E19688B45CBFAE7BB009FC0FA1B1EA86593519318B7214853803C8"
+	},
+	"last_height": 0,
+	"last_round": 0,
+	"last_step": 0
+}`
diff --git a/config/toml_test.go b/config/toml_test.go
new file mode 100644
index 000000000..d8f372aee
--- /dev/null
+++ b/config/toml_test.go
@@ -0,0 +1,57 @@
+package config
+
+import (
+	"io/ioutil"
+	"os"
+	"path/filepath"
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+)
+
+func ensureFiles(t *testing.T, rootDir string, files ...string) {
+	for _, f := range files {
+		p := rootify(rootDir, f)
+		_, err := os.Stat(p)
+		assert.Nil(t, err, p)
+	}
+}
+
+func TestEnsureRoot(t *testing.T) {
+	assert, require := assert.New(t), require.New(t)
+
+	// setup temp dir for test
+	tmpDir, err := ioutil.TempDir("", "config-test")
+	require.Nil(err)
+	defer os.RemoveAll(tmpDir)
+
+	// create root dir
+	EnsureRoot(tmpDir)
+
+	// make sure config is set properly
+	data, err :=
ioutil.ReadFile(filepath.Join(tmpDir, "config.toml"))
+	require.Nil(err)
+	assert.Equal([]byte(defaultConfig("anonymous")), data)
+
+	ensureFiles(t, tmpDir, "data")
+}
+
+func TestEnsureTestRoot(t *testing.T) {
+	assert, require := assert.New(t), require.New(t)
+
+	testName := "ensureTestRoot"
+
+	// create root dir
+	cfg := ResetTestRoot(testName)
+	rootDir := cfg.RootDir
+
+	// make sure config is set properly
+	data, err := ioutil.ReadFile(filepath.Join(rootDir, "config.toml"))
+	require.Nil(err)
+	assert.Equal([]byte(testConfig("anonymous")), data)
+
+	// TODO: make sure the cfg returned and testconfig are the same!
+
+	ensureFiles(t, rootDir, "data", "genesis.json", "priv_validator.json")
+}
diff --git a/consensus/byzantine_test.go b/consensus/byzantine_test.go
index cd62f3f08..56aeeeeaa 100644
--- a/consensus/byzantine_test.go
+++ b/consensus/byzantine_test.go
@@ -5,17 +5,14 @@ import (
 	"testing"
 	"time"
 
-	"github.com/tendermint/tendermint/config/tendermint_test"
-
-	. "github.com/tendermint/go-common"
-	cfg "github.com/tendermint/go-config"
-	"github.com/tendermint/go-events"
-	"github.com/tendermint/go-p2p"
+	"github.com/tendermint/tendermint/p2p"
 	"github.com/tendermint/tendermint/types"
+	. 
"github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/events" ) func init() { - config = tendermint_test.ResetConfig("consensus_byzantine_test") + config = ResetConfig("consensus_byzantine_test") } //---------------------------------------------- @@ -29,14 +26,17 @@ func init() { // Heal partition and ensure A sees the commit func TestByzantine(t *testing.T) { N := 4 + logger := consensusLogger() css := randConsensusNet(N, "consensus_byzantine_test", newMockTickerFunc(false), newCounter) // give the byzantine validator a normal ticker css[0].SetTimeoutTicker(NewTimeoutTicker()) switches := make([]*p2p.Switch, N) + p2pLogger := logger.With("module", "p2p") for i := 0; i < N; i++ { - switches[i] = p2p.NewSwitch(cfg.NewMapConfig(nil)) + switches[i] = p2p.NewSwitch(config.P2P) + switches[i].SetLogger(p2pLogger.With("validator", i)) } reactors := make([]p2p.Reactor, N) @@ -50,19 +50,21 @@ func TestByzantine(t *testing.T) { } }() eventChans := make([]chan interface{}, N) + eventLogger := logger.With("module", "events") for i := 0; i < N; i++ { if i == 0 { css[i].privValidator = NewByzantinePrivValidator(css[i].privValidator.(*types.PrivValidator)) // make byzantine css[i].decideProposal = func(j int) func(int, int) { return func(height, round int) { - byzantineDecideProposalFunc(height, round, css[j], switches[j]) + byzantineDecideProposalFunc(t, height, round, css[j], switches[j]) } }(i) css[i].doPrevote = func(height, round int) {} } eventSwitch := events.NewEventSwitch() + eventSwitch.SetLogger(eventLogger.With("validator", i)) _, err := eventSwitch.Start() if err != nil { t.Fatalf("Failed to start switch: %v", err) @@ -70,6 +72,7 @@ func TestByzantine(t *testing.T) { eventChans[i] = subscribeToEvent(eventSwitch, "tester", types.EventStringNewBlock(), 1) conR := NewConsensusReactor(css[i], true) // so we dont start the consensus states + conR.SetLogger(logger.With("validator", i)) conR.SetEventSwitch(eventSwitch) var conRI p2p.Reactor @@ -80,7 +83,7 @@ 
func TestByzantine(t *testing.T) { reactors[i] = conRI } - p2p.MakeConnectedSwitches(N, func(i int, s *p2p.Switch) *p2p.Switch { + p2p.MakeConnectedSwitches(config.P2P, N, func(i int, s *p2p.Switch) *p2p.Switch { // ignore new switch s, we already made ours switches[i].AddReactor("CONSENSUS", reactors[i]) return switches[i] @@ -118,7 +121,7 @@ func TestByzantine(t *testing.T) { case <-eventChans[ind2]: } - log.Notice("A block has been committed. Healing partition") + t.Log("A block has been committed. Healing partition") // connect the partitions p2p.Connect2Switches(switches, ind0, ind1) @@ -156,7 +159,7 @@ func TestByzantine(t *testing.T) { //------------------------------- // byzantine consensus functions -func byzantineDecideProposalFunc(height, round int, cs *ConsensusState, sw *p2p.Switch) { +func byzantineDecideProposalFunc(t *testing.T, height, round int, cs *ConsensusState, sw *p2p.Switch) { // byzantine user should create two proposals and try to split the vote. // Avoid sending on internalMsgQueue and running consensus state. @@ -177,7 +180,7 @@ func byzantineDecideProposalFunc(height, round int, cs *ConsensusState, sw *p2p. // broadcast conflicting proposals/block parts to peers peers := sw.Peers().List() - log.Notice("Byzantine: broadcasting conflicting proposals", "peers", len(peers)) + t.Logf("Byzantine: broadcasting conflicting proposals to %d peers", len(peers)) for i, peer := range peers { if i < len(peers)/2 { go sendProposalAndParts(height, round, cs, peer, proposal1, block1Hash, blockParts1) diff --git a/consensus/common_test.go b/consensus/common_test.go index 334c66dc6..ae6e399d5 100644 --- a/consensus/common_test.go +++ b/consensus/common_test.go @@ -13,21 +13,24 @@ import ( abcicli "github.com/tendermint/abci/client" abci "github.com/tendermint/abci/types" - . 
"github.com/tendermint/go-common" - cfg "github.com/tendermint/go-config" - dbm "github.com/tendermint/go-db" - "github.com/tendermint/go-p2p" bc "github.com/tendermint/tendermint/blockchain" - "github.com/tendermint/tendermint/config/tendermint_test" + cfg "github.com/tendermint/tendermint/config" mempl "github.com/tendermint/tendermint/mempool" + "github.com/tendermint/tendermint/p2p" sm "github.com/tendermint/tendermint/state" "github.com/tendermint/tendermint/types" + . "github.com/tendermint/tmlibs/common" + dbm "github.com/tendermint/tmlibs/db" + "github.com/tendermint/tmlibs/log" "github.com/tendermint/abci/example/counter" "github.com/tendermint/abci/example/dummy" + + "github.com/go-kit/kit/log/term" ) -var config cfg.Config // NOTE: must be reset for each _test.go file +// genesis, chain_id, priv_val +var config *cfg.Config // NOTE: must be reset for each _test.go file var ensureTimeout = time.Duration(2) func ensureDir(dir string, mode os.FileMode) { @@ -36,6 +39,10 @@ func ensureDir(dir string, mode os.FileMode) { } } +func ResetConfig(name string) *cfg.Config { + return cfg.ResetTestRoot(name) +} + //------------------------------------------------------------------------------- // validator stub (a dummy consensus peer we control) @@ -64,7 +71,7 @@ func (vs *validatorStub) signVote(voteType byte, hash []byte, header types.PartS Type: voteType, BlockID: types.BlockID{hash, header}, } - err := vs.PrivValidator.SignVote(config.GetString("chain_id"), vote) + err := vs.PrivValidator.SignVote(config.ChainID, vote) return vote, err } @@ -115,7 +122,7 @@ func decideProposal(cs1 *ConsensusState, vs *validatorStub, height, round int) ( // Make proposal polRound, polBlockID := cs1.Votes.POLInfo() proposal = types.NewProposal(height, round, blockParts.Header(), polRound, polBlockID) - if err := vs.SignProposal(config.GetString("chain_id"), proposal); err != nil { + if err := vs.SignProposal(config.ChainID, proposal); err != nil { panic(err) } return @@ -205,7 
+212,7 @@ func subscribeToVoter(cs *ConsensusState, addr []byte) chan interface{} { go func() { for { v := <-voteCh0 - vote := v.(types.EventDataVote) + vote := v.(types.TMEventData).Unwrap().(types.EventDataVote) // we only fire for our own votes if bytes.Equal(addr, vote.Vote.ValidatorAddress) { voteCh <- v @@ -233,7 +240,7 @@ func newConsensusState(state *sm.State, pv *types.PrivValidator, app abci.Applic return newConsensusStateWithConfig(config, state, pv, app) } -func newConsensusStateWithConfig(thisConfig cfg.Config, state *sm.State, pv *types.PrivValidator, app abci.Application) *ConsensusState { +func newConsensusStateWithConfig(thisConfig *cfg.Config, state *sm.State, pv *types.PrivValidator, app abci.Application) *ConsensusState { // Get BlockStore blockDB := dbm.NewMemDB() blockStore := bc.NewBlockStore(blockDB) @@ -244,39 +251,46 @@ func newConsensusStateWithConfig(thisConfig cfg.Config, state *sm.State, pv *typ proxyAppConnCon := abcicli.NewLocalClient(mtx, app) // Make Mempool - mempool := mempl.NewMempool(thisConfig, proxyAppConnMem) + mempool := mempl.NewMempool(thisConfig.Mempool, proxyAppConnMem) + mempool.SetLogger(log.TestingLogger().With("module", "mempool")) // Make ConsensusReactor - cs := NewConsensusState(thisConfig, state, proxyAppConnCon, blockStore, mempool) + cs := NewConsensusState(thisConfig.Consensus, state, proxyAppConnCon, blockStore, mempool) + cs.SetLogger(log.TestingLogger()) cs.SetPrivValidator(pv) evsw := types.NewEventSwitch() + evsw.SetLogger(log.TestingLogger().With("module", "events")) cs.SetEventSwitch(evsw) evsw.Start() return cs } -func loadPrivValidator(conf cfg.Config) *types.PrivValidator { - privValidatorFile := conf.GetString("priv_validator_file") +func loadPrivValidator(config *cfg.Config) *types.PrivValidator { + privValidatorFile := config.PrivValidatorFile() ensureDir(path.Dir(privValidatorFile), 0700) - privValidator := types.LoadOrGenPrivValidator(privValidatorFile) + privValidator := 
types.LoadOrGenPrivValidator(privValidatorFile, log.TestingLogger()) privValidator.Reset() return privValidator } func fixedConsensusState() *ConsensusState { stateDB := dbm.NewMemDB() - state := sm.MakeGenesisStateFromFile(stateDB, config.GetString("genesis_file")) + state := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile()) + state.SetLogger(log.TestingLogger().With("module", "state")) privValidator := loadPrivValidator(config) cs := newConsensusState(state, privValidator, counter.NewCounterApplication(true)) + cs.SetLogger(log.TestingLogger()) return cs } func fixedConsensusStateDummy() *ConsensusState { stateDB := dbm.NewMemDB() - state := sm.MakeGenesisStateFromFile(stateDB, config.GetString("genesis_file")) + state := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile()) + state.SetLogger(log.TestingLogger().With("module", "state")) privValidator := loadPrivValidator(config) cs := newConsensusState(state, privValidator, dummy.NewDummyApplication()) + cs.SetLogger(log.TestingLogger()) return cs } @@ -287,6 +301,7 @@ func randConsensusState(nValidators int) (*ConsensusState, []*validatorStub) { vss := make([]*validatorStub, nValidators) cs := newConsensusState(state, privVals[0], counter.NewCounterApplication(true)) + cs.SetLogger(log.TestingLogger()) for i := 0; i < nValidators; i++ { vss[i] = NewValidatorStub(privVals[i], i) @@ -312,16 +327,32 @@ func ensureNoNewStep(stepCh chan interface{}) { //------------------------------------------------------------------------------- // consensus nets +// consensusLogger is a TestingLogger which uses a different +// color for each validator ("validator" key must exist). 
+func consensusLogger() log.Logger { + return log.TestingLoggerWithColorFn(func(keyvals ...interface{}) term.FgBgColor { + for i := 0; i < len(keyvals)-1; i += 2 { + if keyvals[i] == "validator" { + return term.FgBgColor{Fg: term.Color(uint8(keyvals[i+1].(int) + 1))} + } + } + return term.FgBgColor{} + }) +} + func randConsensusNet(nValidators int, testName string, tickerFunc func() TimeoutTicker, appFunc func() abci.Application) []*ConsensusState { genDoc, privVals := randGenesisDoc(nValidators, false, 10) css := make([]*ConsensusState, nValidators) + logger := consensusLogger() for i := 0; i < nValidators; i++ { db := dbm.NewMemDB() // each state needs its own db state := sm.MakeGenesisState(db, genDoc) + state.SetLogger(logger.With("module", "state", "validator", i)) state.Save() - thisConfig := tendermint_test.ResetConfig(Fmt("%s_%d", testName, i)) - ensureDir(path.Dir(thisConfig.GetString("cs_wal_file")), 0700) // dir for wal + thisConfig := ResetConfig(Fmt("%s_%d", testName, i)) + ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal css[i] = newConsensusStateWithConfig(thisConfig, state, privVals[i], appFunc()) + css[i].SetLogger(logger.With("validator", i)) css[i].SetTimeoutTicker(tickerFunc()) } return css @@ -334,9 +365,10 @@ func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerF for i := 0; i < nPeers; i++ { db := dbm.NewMemDB() // each state needs its own db state := sm.MakeGenesisState(db, genDoc) + state.SetLogger(log.TestingLogger().With("module", "state")) state.Save() - thisConfig := tendermint_test.ResetConfig(Fmt("%s_%d", testName, i)) - ensureDir(path.Dir(thisConfig.GetString("cs_wal_file")), 0700) // dir for wal + thisConfig := ResetConfig(Fmt("%s_%d", testName, i)) + ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal var privVal *types.PrivValidator if i < nValidators { privVal = privVals[i] @@ -347,6 +379,7 @@ func randConsensusNetWithPeers(nValidators, nPeers int, testName 
string, tickerF } css[i] = newConsensusStateWithConfig(thisConfig, state, privVal, appFunc()) + css[i].SetLogger(log.TestingLogger()) css[i].SetTimeoutTicker(tickerFunc()) } return css @@ -379,7 +412,7 @@ func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.G sort.Sort(types.PrivValidatorsByAddress(privValidators)) return &types.GenesisDoc{ GenesisTime: time.Now(), - ChainID: config.GetString("chain_id"), + ChainID: config.ChainID, Validators: validators, }, privValidators } @@ -388,6 +421,7 @@ func randGenesisState(numValidators int, randPower bool, minPower int64) (*sm.St genDoc, privValidators := randGenesisDoc(numValidators, randPower, minPower) db := dbm.NewMemDB() s0 := sm.MakeGenesisState(db, genDoc) + s0.SetLogger(log.TestingLogger().With("module", "state")) s0.Save() return s0, privValidators } @@ -438,6 +472,9 @@ func (m *mockTicker) Chan() <-chan timeoutInfo { return m.c } +func (mockTicker) SetLogger(log.Logger) { +} + //------------------------------------ func newCounter() abci.Application { diff --git a/consensus/height_vote_set.go b/consensus/height_vote_set.go index e7f4be3b9..455004f92 100644 --- a/consensus/height_vote_set.go +++ b/consensus/height_vote_set.go @@ -4,8 +4,8 @@ import ( "strings" "sync" - . "github.com/tendermint/go-common" "github.com/tendermint/tendermint/types" + . 
"github.com/tendermint/tmlibs/common" ) type RoundVoteSet struct { @@ -91,7 +91,7 @@ func (hvs *HeightVoteSet) addRound(round int) { if _, ok := hvs.roundVoteSets[round]; ok { PanicSanity("addRound() for an existing round") } - log.Debug("addRound(round)", "round", round) + // log.Debug("addRound(round)", "round", round) prevotes := types.NewVoteSet(hvs.chainID, hvs.height, round, types.VoteTypePrevote, hvs.valSet) precommits := types.NewVoteSet(hvs.chainID, hvs.height, round, types.VoteTypePrecommit, hvs.valSet) hvs.roundVoteSets[round] = RoundVoteSet{ @@ -118,7 +118,7 @@ func (hvs *HeightVoteSet) AddVote(vote *types.Vote, peerKey string) (added bool, // Peer has sent a vote that does not match our round, // for more than one round. Bad peer! // TODO punish peer. - log.Warn("Deal with peer giving votes from unwanted rounds") + // log.Warn("Deal with peer giving votes from unwanted rounds") return } } diff --git a/consensus/height_vote_set_test.go b/consensus/height_vote_set_test.go index 3bede25ca..f1ef8b7ff 100644 --- a/consensus/height_vote_set_test.go +++ b/consensus/height_vote_set_test.go @@ -3,19 +3,18 @@ package consensus import ( "testing" - . "github.com/tendermint/go-common" - "github.com/tendermint/tendermint/config/tendermint_test" "github.com/tendermint/tendermint/types" + . 
"github.com/tendermint/tmlibs/common" ) func init() { - config = tendermint_test.ResetConfig("consensus_height_vote_set_test") + config = ResetConfig("consensus_height_vote_set_test") } func TestPeerCatchupRounds(t *testing.T) { valSet, privVals := types.RandValidatorSet(10, 1) - hvs := NewHeightVoteSet(config.GetString("chain_id"), 1, valSet) + hvs := NewHeightVoteSet(config.ChainID, 1, valSet) vote999_0 := makeVoteHR(t, 1, 999, privVals, 0) added, err := hvs.AddVote(vote999_0, "peer1") @@ -52,7 +51,7 @@ func makeVoteHR(t *testing.T, height, round int, privVals []*types.PrivValidator Type: types.VoteTypePrecommit, BlockID: types.BlockID{[]byte("fakehash"), types.PartSetHeader{}}, } - chainID := config.GetString("chain_id") + chainID := config.ChainID err := privVal.SignVote(chainID, vote) if err != nil { panic(Fmt("Error signing vote: %v", err)) diff --git a/consensus/log.go b/consensus/log.go deleted file mode 100644 index edf7a0a8c..000000000 --- a/consensus/log.go +++ /dev/null @@ -1,18 +0,0 @@ -package consensus - -import ( - "github.com/tendermint/go-logger" -) - -var log = logger.New("module", "consensus") - -/* -func init() { - log.SetHandler( - logger.LvlFilterHandler( - logger.LvlDebug, - logger.BypassHandler(), - ), - ) -} -*/ diff --git a/consensus/mempool_test.go b/consensus/mempool_test.go index 6bfdfda9f..327ad733c 100644 --- a/consensus/mempool_test.go +++ b/consensus/mempool_test.go @@ -6,14 +6,13 @@ import ( "time" abci "github.com/tendermint/abci/types" - "github.com/tendermint/tendermint/config/tendermint_test" "github.com/tendermint/tendermint/types" - . "github.com/tendermint/go-common" + . 
"github.com/tendermint/tmlibs/common" ) func init() { - config = tendermint_test.ResetConfig("consensus_mempool_test") + config = ResetConfig("consensus_mempool_test") } func TestTxConcurrentWithCommit(t *testing.T) { @@ -44,7 +43,7 @@ func TestTxConcurrentWithCommit(t *testing.T) { for nTxs := 0; nTxs < NTxs; { select { case b := <-newBlockCh: - nTxs += b.(types.EventDataNewBlock).Block.Header.NumTxs + nTxs += b.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block.Header.NumTxs case <-ticker.C: panic("Timed out waiting to commit blocks with transactions") } diff --git a/consensus/reactor.go b/consensus/reactor.go index c3b1c590f..3652697b5 100644 --- a/consensus/reactor.go +++ b/consensus/reactor.go @@ -8,11 +8,11 @@ import ( "sync" "time" - . "github.com/tendermint/go-common" - "github.com/tendermint/go-p2p" - "github.com/tendermint/go-wire" + wire "github.com/tendermint/go-wire" + "github.com/tendermint/tendermint/p2p" sm "github.com/tendermint/tendermint/state" "github.com/tendermint/tendermint/types" + . 
"github.com/tendermint/tmlibs/common" ) const ( @@ -41,12 +41,12 @@ func NewConsensusReactor(consensusState *ConsensusState, fastSync bool) *Consens conS: consensusState, fastSync: fastSync, } - conR.BaseReactor = *p2p.NewBaseReactor(log, "ConsensusReactor", conR) + conR.BaseReactor = *p2p.NewBaseReactor("ConsensusReactor", conR) return conR } func (conR *ConsensusReactor) OnStart() error { - log.Notice("ConsensusReactor ", "fastSync", conR.fastSync) + conR.Logger.Info("ConsensusReactor ", "fastSync", conR.fastSync) conR.BaseReactor.OnStart() // callbacks for broadcasting new steps and votes to peers @@ -70,7 +70,7 @@ func (conR *ConsensusReactor) OnStop() { // Switch from the fast_sync to the consensus: // reset the state, turn off fast_sync, start the consensus-state-machine func (conR *ConsensusReactor) SwitchToConsensus(state *sm.State) { - log.Notice("SwitchToConsensus") + conR.Logger.Info("SwitchToConsensus") conR.conS.reconstructLastCommit(state) // NOTE: The line below causes broadcastNewRoundStepRoutine() to // broadcast a NewRoundStepMessage. @@ -148,17 +148,17 @@ func (conR *ConsensusReactor) RemovePeer(peer *p2p.Peer, reason interface{}) { // NOTE: blocks on consensus state for proposals, block parts, and votes func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte) { if !conR.IsRunning() { - log.Debug("Receive", "src", src, "chId", chID, "bytes", msgBytes) + conR.Logger.Debug("Receive", "src", src, "chId", chID, "bytes", msgBytes) return } _, msg, err := DecodeMessage(msgBytes) if err != nil { - log.Warn("Error decoding message", "src", src, "chId", chID, "msg", msg, "error", err, "bytes", msgBytes) + conR.Logger.Error("Error decoding message", "src", src, "chId", chID, "msg", msg, "error", err, "bytes", msgBytes) // TODO punish peer? 
return } - log.Debug("Receive", "src", src, "chId", chID, "msg", msg) + conR.Logger.Debug("Receive", "src", src, "chId", chID, "msg", msg) // Get peer states ps := src.Data.Get(types.PeerStateKey).(*PeerState) @@ -191,7 +191,7 @@ func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte) case types.VoteTypePrecommit: ourVotes = votes.Precommits(msg.Round).BitArrayByBlockID(msg.BlockID) default: - log.Warn("Bad VoteSetBitsMessage field Type") + conR.Logger.Error("Bad VoteSetBitsMessage field Type") return } src.TrySend(VoteSetBitsChannel, struct{ ConsensusMessage }{&VoteSetBitsMessage{ @@ -202,12 +202,12 @@ func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte) Votes: ourVotes, }}) default: - log.Warn(Fmt("Unknown message type %v", reflect.TypeOf(msg))) + conR.Logger.Error(Fmt("Unknown message type %v", reflect.TypeOf(msg))) } case DataChannel: if conR.fastSync { - log.Warn("Ignoring message received during fastSync", "msg", msg) + conR.Logger.Info("Ignoring message received during fastSync", "msg", msg) return } switch msg := msg.(type) { @@ -220,12 +220,12 @@ func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte) ps.SetHasProposalBlockPart(msg.Height, msg.Round, msg.Part.Index) conR.conS.peerMsgQueue <- msgInfo{msg, src.Key} default: - log.Warn(Fmt("Unknown message type %v", reflect.TypeOf(msg))) + conR.Logger.Error(Fmt("Unknown message type %v", reflect.TypeOf(msg))) } case VoteChannel: if conR.fastSync { - log.Warn("Ignoring message received during fastSync", "msg", msg) + conR.Logger.Info("Ignoring message received during fastSync", "msg", msg) return } switch msg := msg.(type) { @@ -242,12 +242,12 @@ func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte) default: // don't punish (leave room for soft upgrades) - log.Warn(Fmt("Unknown message type %v", reflect.TypeOf(msg))) + conR.Logger.Error(Fmt("Unknown message type %v", reflect.TypeOf(msg))) } case 
VoteSetBitsChannel: if conR.fastSync { - log.Warn("Ignoring message received during fastSync", "msg", msg) + conR.Logger.Info("Ignoring message received during fastSync", "msg", msg) return } switch msg := msg.(type) { @@ -265,7 +265,7 @@ func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte) case types.VoteTypePrecommit: ourVotes = votes.Precommits(msg.Round).BitArrayByBlockID(msg.BlockID) default: - log.Warn("Bad VoteSetBitsMessage field Type") + conR.Logger.Error("Bad VoteSetBitsMessage field Type") return } ps.ApplyVoteSetBitsMessage(msg, ourVotes) @@ -274,15 +274,15 @@ func (conR *ConsensusReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte) } default: // don't punish (leave room for soft upgrades) - log.Warn(Fmt("Unknown message type %v", reflect.TypeOf(msg))) + conR.Logger.Error(Fmt("Unknown message type %v", reflect.TypeOf(msg))) } default: - log.Warn(Fmt("Unknown chId %X", chID)) + conR.Logger.Error(Fmt("Unknown chId %X", chID)) } if err != nil { - log.Warn("Error in Receive()", "error", err) + conR.Logger.Error("Error in Receive()", "error", err) } } @@ -299,12 +299,12 @@ func (conR *ConsensusReactor) SetEventSwitch(evsw types.EventSwitch) { func (conR *ConsensusReactor) registerEventCallbacks() { types.AddListenerForEvent(conR.evsw, "conR", types.EventStringNewRoundStep(), func(data types.TMEventData) { - rs := data.(types.EventDataRoundState).RoundState.(*RoundState) + rs := data.Unwrap().(types.EventDataRoundState).RoundState.(*RoundState) conR.broadcastNewRoundStep(rs) }) types.AddListenerForEvent(conR.evsw, "conR", types.EventStringVote(), func(data types.TMEventData) { - edv := data.(types.EventDataVote) + edv := data.Unwrap().(types.EventDataVote) conR.broadcastHasVoteMessage(edv.Vote) }) } @@ -376,13 +376,13 @@ func (conR *ConsensusReactor) sendNewRoundStepMessages(peer *p2p.Peer) { } func (conR *ConsensusReactor) gossipDataRoutine(peer *p2p.Peer, ps *PeerState) { - log := log.New("peer", peer) + logger := 
conR.Logger.With("peer", peer) OUTER_LOOP: for { // Manage disconnects from self or peer. if !peer.IsRunning() || !conR.IsRunning() { - log.Notice(Fmt("Stopping gossipDataRoutine for %v.", peer)) + logger.Info("Stopping gossipDataRoutine for peer") return } rs := conR.conS.GetRoundState() @@ -390,7 +390,7 @@ OUTER_LOOP: // Send proposal Block parts? if rs.ProposalBlockParts.HasHeader(prs.ProposalBlockPartsHeader) { - //log.Info("ProposalBlockParts matched", "blockParts", prs.ProposalBlockParts) + //logger.Info("ProposalBlockParts matched", "blockParts", prs.ProposalBlockParts) if index, ok := rs.ProposalBlockParts.BitArray().Sub(prs.ProposalBlockParts.Copy()).PickRandom(); ok { part := rs.ProposalBlockParts.GetPart(index) msg := &BlockPartMessage{ @@ -407,16 +407,16 @@ OUTER_LOOP: // If the peer is on a previous height, help catch up. if (0 < prs.Height) && (prs.Height < rs.Height) { - //log.Info("Data catchup", "height", rs.Height, "peerHeight", prs.Height, "peerProposalBlockParts", prs.ProposalBlockParts) + //logger.Info("Data catchup", "height", rs.Height, "peerHeight", prs.Height, "peerProposalBlockParts", prs.ProposalBlockParts) if index, ok := prs.ProposalBlockParts.Not().PickRandom(); ok { // Ensure that the peer's PartSetHeader is correct blockMeta := conR.conS.blockStore.LoadBlockMeta(prs.Height) if blockMeta == nil { - log.Warn("Failed to load block meta", "peer height", prs.Height, "our height", rs.Height, "blockstore height", conR.conS.blockStore.Height(), "pv", conR.conS.privValidator) + logger.Error("Failed to load block meta", "peer height", prs.Height, "our height", rs.Height, "blockstore height", conR.conS.blockStore.Height(), "pv", conR.conS.privValidator) time.Sleep(peerGossipSleepDuration) continue OUTER_LOOP } else if !blockMeta.BlockID.PartsHeader.Equals(prs.ProposalBlockPartsHeader) { - log.Info("Peer ProposalBlockPartsHeader mismatch, sleeping", + logger.Info("Peer ProposalBlockPartsHeader mismatch, sleeping", "peerHeight", prs.Height, 
"blockPartsHeader", blockMeta.BlockID.PartsHeader, "peerBlockPartsHeader", prs.ProposalBlockPartsHeader) time.Sleep(peerGossipSleepDuration) continue OUTER_LOOP @@ -424,7 +424,7 @@ OUTER_LOOP: // Load the part part := conR.conS.blockStore.LoadBlockPart(prs.Height, index) if part == nil { - log.Warn("Could not load part", "index", index, + logger.Error("Could not load part", "index", index, "peerHeight", prs.Height, "blockPartsHeader", blockMeta.BlockID.PartsHeader, "peerBlockPartsHeader", prs.ProposalBlockPartsHeader) time.Sleep(peerGossipSleepDuration) continue OUTER_LOOP @@ -440,7 +440,7 @@ OUTER_LOOP: } continue OUTER_LOOP } else { - //log.Info("No parts to send in catch-up, sleeping") + //logger.Info("No parts to send in catch-up, sleeping") time.Sleep(peerGossipSleepDuration) continue OUTER_LOOP } @@ -448,7 +448,7 @@ OUTER_LOOP: // If height and round don't match, sleep. if (rs.Height != prs.Height) || (rs.Round != prs.Round) { - //log.Info("Peer Height|Round mismatch, sleeping", "peerHeight", prs.Height, "peerRound", prs.Round, "peer", peer) + //logger.Info("Peer Height|Round mismatch, sleeping", "peerHeight", prs.Height, "peerRound", prs.Round, "peer", peer) time.Sleep(peerGossipSleepDuration) continue OUTER_LOOP } @@ -489,7 +489,7 @@ OUTER_LOOP: } func (conR *ConsensusReactor) gossipVotesRoutine(peer *p2p.Peer, ps *PeerState) { - log := log.New("peer", peer) + logger := conR.Logger.With("peer", peer) // Simple hack to throttle logs upon sleep. var sleeping = 0 @@ -498,7 +498,7 @@ OUTER_LOOP: for { // Manage disconnects from self or peer. 
if !peer.IsRunning() || !conR.IsRunning() { - log.Notice(Fmt("Stopping gossipVotesRoutine for %v.", peer)) + logger.Info("Stopping gossipVotesRoutine for peer") return } rs := conR.conS.GetRoundState() @@ -511,7 +511,7 @@ OUTER_LOOP: sleeping = 0 } - //log.Debug("gossipVotesRoutine", "rsHeight", rs.Height, "rsRound", rs.Round, + //logger.Debug("gossipVotesRoutine", "rsHeight", rs.Height, "rsRound", rs.Round, // "prsHeight", prs.Height, "prsRound", prs.Round, "prsStep", prs.Step) // If height matches, then send LastCommit, Prevotes, Precommits. @@ -519,21 +519,21 @@ OUTER_LOOP: // If there are lastCommits to send... if prs.Step == RoundStepNewHeight { if ps.PickSendVote(rs.LastCommit) { - log.Debug("Picked rs.LastCommit to send") + logger.Debug("Picked rs.LastCommit to send") continue OUTER_LOOP } } // If there are prevotes to send... if prs.Step <= RoundStepPrevote && prs.Round != -1 && prs.Round <= rs.Round { if ps.PickSendVote(rs.Votes.Prevotes(prs.Round)) { - log.Debug("Picked rs.Prevotes(prs.Round) to send") + logger.Debug("Picked rs.Prevotes(prs.Round) to send") continue OUTER_LOOP } } // If there are precommits to send... if prs.Step <= RoundStepPrecommit && prs.Round != -1 && prs.Round <= rs.Round { if ps.PickSendVote(rs.Votes.Precommits(prs.Round)) { - log.Debug("Picked rs.Precommits(prs.Round) to send") + logger.Debug("Picked rs.Precommits(prs.Round) to send") continue OUTER_LOOP } } @@ -541,7 +541,7 @@ OUTER_LOOP: if prs.ProposalPOLRound != -1 { if polPrevotes := rs.Votes.Prevotes(prs.ProposalPOLRound); polPrevotes != nil { if ps.PickSendVote(polPrevotes) { - log.Debug("Picked rs.Prevotes(prs.ProposalPOLRound) to send") + logger.Debug("Picked rs.Prevotes(prs.ProposalPOLRound) to send") continue OUTER_LOOP } } @@ -552,7 +552,7 @@ OUTER_LOOP: // If peer is lagging by height 1, send LastCommit. 
if prs.Height != 0 && rs.Height == prs.Height+1 { if ps.PickSendVote(rs.LastCommit) { - log.Debug("Picked rs.LastCommit to send") + logger.Debug("Picked rs.LastCommit to send") continue OUTER_LOOP } } @@ -563,9 +563,9 @@ OUTER_LOOP: // Load the block commit for prs.Height, // which contains precommit signatures for prs.Height. commit := conR.conS.blockStore.LoadBlockCommit(prs.Height) - log.Info("Loaded BlockCommit for catch-up", "height", prs.Height, "commit", commit) + logger.Info("Loaded BlockCommit for catch-up", "height", prs.Height, "commit", commit) if ps.PickSendVote(commit) { - log.Debug("Picked Catchup commit to send") + logger.Debug("Picked Catchup commit to send") continue OUTER_LOOP } } @@ -573,7 +573,7 @@ OUTER_LOOP: if sleeping == 0 { // We sent nothing. Sleep... sleeping = 1 - log.Debug("No votes to send, sleeping", "peer", peer, + logger.Debug("No votes to send, sleeping", "localPV", rs.Votes.Prevotes(rs.Round).BitArray(), "peerPV", prs.Prevotes, "localPC", rs.Votes.Precommits(rs.Round).BitArray(), "peerPC", prs.Precommits) } else if sleeping == 2 { @@ -589,13 +589,13 @@ OUTER_LOOP: // NOTE: `queryMaj23Routine` has a simple crude design since it only comes // into play for liveness when there's a signature DDoS attack happening. func (conR *ConsensusReactor) queryMaj23Routine(peer *p2p.Peer, ps *PeerState) { - log := log.New("peer", peer) + logger := conR.Logger.With("peer", peer) OUTER_LOOP: for { // Manage disconnects from self or peer. 
if !peer.IsRunning() || !conR.IsRunning() { - log.Notice(Fmt("Stopping queryMaj23Routine for %v.", peer)) + logger.Info("Stopping queryMaj23Routine for peer") return } @@ -952,8 +952,8 @@ func (ps *PeerState) SetHasVote(vote *types.Vote) { } func (ps *PeerState) setHasVote(height int, round int, type_ byte, index int) { - log := log.New("peer", ps.Peer, "peerRound", ps.Round, "height", height, "round", round) - log.Debug("setHasVote(LastCommit)", "lastCommit", ps.LastCommit, "index", index) + logger := ps.Peer.Logger.With("peerRound", ps.Round, "height", height, "round", round) + logger.Debug("setHasVote(LastCommit)", "lastCommit", ps.LastCommit, "index", index) // NOTE: some may be nil BitArrays -> no side effects. ps.getVoteBitArray(height, round, type_).SetIndex(index, true) diff --git a/consensus/reactor_test.go b/consensus/reactor_test.go index bc26ffc05..a1ab37026 100644 --- a/consensus/reactor_test.go +++ b/consensus/reactor_test.go @@ -6,16 +6,14 @@ import ( "testing" "time" - "github.com/tendermint/tendermint/config/tendermint_test" - - "github.com/tendermint/go-events" - "github.com/tendermint/go-p2p" - "github.com/tendermint/tendermint/types" "github.com/tendermint/abci/example/dummy" + "github.com/tendermint/tendermint/p2p" + "github.com/tendermint/tendermint/types" + "github.com/tendermint/tmlibs/events" ) func init() { - config = tendermint_test.ResetConfig("consensus_reactor_test") + config = ResetConfig("consensus_reactor_test") } //---------------------------------------------- @@ -24,10 +22,13 @@ func init() { func startConsensusNet(t *testing.T, css []*ConsensusState, N int, subscribeEventRespond bool) ([]*ConsensusReactor, []chan interface{}) { reactors := make([]*ConsensusReactor, N) eventChans := make([]chan interface{}, N) + logger := consensusLogger() for i := 0; i < N; i++ { reactors[i] = NewConsensusReactor(css[i], true) // so we dont start the consensus states + reactors[i].SetLogger(logger.With("validator", i)) eventSwitch := 
events.NewEventSwitch() + eventSwitch.SetLogger(logger.With("module", "events", "validator", i)) _, err := eventSwitch.Start() if err != nil { t.Fatalf("Failed to start switch: %v", err) @@ -41,7 +42,7 @@ func startConsensusNet(t *testing.T, css []*ConsensusState, N int, subscribeEven } } // make connected switches and start all reactors - p2p.MakeConnectedSwitches(N, func(i int, s *p2p.Switch) *p2p.Switch { + p2p.MakeConnectedSwitches(config.P2P, N, func(i int, s *p2p.Switch) *p2p.Switch { s.AddReactor("CONSENSUS", reactors[i]) return s }, p2p.Connect2Switches) @@ -98,7 +99,7 @@ func TestVotingPowerChange(t *testing.T) { }, css) //--------------------------------------------------------------------------- - log.Info("---------------------------- Testing changing the voting power of one validator a few times") + t.Log("---------------------------- Testing changing the voting power of one validator a few times") val1PubKey := css[0].privValidator.(*types.PrivValidator).PubKey updateValidatorTx := dummy.MakeValSetChangeTx(val1PubKey.Bytes(), 25) @@ -159,7 +160,7 @@ func TestValidatorSetChanges(t *testing.T) { }, css) //--------------------------------------------------------------------------- - log.Info("---------------------------- Testing adding one validator") + t.Log("---------------------------- Testing adding one validator") newValidatorPubKey1 := css[nVals].privValidator.(*types.PrivValidator).PubKey newValidatorTx1 := dummy.MakeValSetChangeTx(newValidatorPubKey1.Bytes(), uint64(testMinPower)) @@ -185,7 +186,7 @@ func TestValidatorSetChanges(t *testing.T) { waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css) //--------------------------------------------------------------------------- - log.Info("---------------------------- Testing changing the voting power of one validator") + t.Log("---------------------------- Testing changing the voting power of one validator") updateValidatorPubKey1 := css[nVals].privValidator.(*types.PrivValidator).PubKey 
updateValidatorTx1 := dummy.MakeValSetChangeTx(updateValidatorPubKey1.Bytes(), 25) @@ -201,7 +202,7 @@ func TestValidatorSetChanges(t *testing.T) { } //--------------------------------------------------------------------------- - log.Info("---------------------------- Testing adding two validators at once") + t.Log("---------------------------- Testing adding two validators at once") newValidatorPubKey2 := css[nVals+1].privValidator.(*types.PrivValidator).PubKey newValidatorTx2 := dummy.MakeValSetChangeTx(newValidatorPubKey2.Bytes(), uint64(testMinPower)) @@ -217,7 +218,7 @@ func TestValidatorSetChanges(t *testing.T) { waitForAndValidateBlock(t, nPeers, activeVals, eventChans, css) //--------------------------------------------------------------------------- - log.Info("---------------------------- Testing removing two validators at once") + t.Log("---------------------------- Testing removing two validators at once") removeValidatorTx2 := dummy.MakeValSetChangeTx(newValidatorPubKey2.Bytes(), 0) removeValidatorTx3 := dummy.MakeValSetChangeTx(newValidatorPubKey3.Bytes(), 0) @@ -236,7 +237,7 @@ func TestReactorWithTimeoutCommit(t *testing.T) { css := randConsensusNet(N, "consensus_reactor_with_timeout_commit_test", newMockTickerFunc(false), newCounter) // override default SkipTimeoutCommit == true for tests for i := 0; i < N; i++ { - css[i].timeoutParams.SkipTimeoutCommit = false + css[i].config.SkipTimeoutCommit = false } reactors, eventChans := startConsensusNet(t, css, N-1, false) @@ -252,8 +253,8 @@ func TestReactorWithTimeoutCommit(t *testing.T) { func waitForAndValidateBlock(t *testing.T, n int, activeVals map[string]struct{}, eventChans []chan interface{}, css []*ConsensusState, txs ...[]byte) { timeoutWaitGroup(t, n, func(wg *sync.WaitGroup, j int) { newBlockI := <-eventChans[j] - newBlock := newBlockI.(types.EventDataNewBlock).Block - log.Warn("Got block", "height", newBlock.Height, "validator", j) + newBlock := 
newBlockI.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block + t.Logf("[WARN] Got block height=%v validator=%v", newBlock.Height, j) err := validateBlock(newBlock, activeVals) if err != nil { t.Fatal(err) @@ -264,7 +265,6 @@ func waitForAndValidateBlock(t *testing.T, n int, activeVals map[string]struct{} eventChans[j] <- struct{}{} wg.Done() - log.Warn("Done wait group", "height", newBlock.Height, "validator", j) }, css) } diff --git a/consensus/replay.go b/consensus/replay.go index bd0975f4d..af30b8894 100644 --- a/consensus/replay.go +++ b/consensus/replay.go @@ -11,10 +11,10 @@ import ( "time" abci "github.com/tendermint/abci/types" - auto "github.com/tendermint/go-autofile" - . "github.com/tendermint/go-common" - cfg "github.com/tendermint/go-config" - "github.com/tendermint/go-wire" + wire "github.com/tendermint/go-wire" + auto "github.com/tendermint/tmlibs/autofile" + cmn "github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/log" "github.com/tendermint/tendermint/proxy" sm "github.com/tendermint/tendermint/state" @@ -50,7 +50,7 @@ func (cs *ConsensusState) readReplayMessage(msgBytes []byte, newStepCh chan inte // for logging switch m := msg.Msg.(type) { case types.EventDataRoundState: - log.Notice("Replay: New Step", "height", m.Height, "round", m.Round, "step", m.Step) + cs.Logger.Info("Replay: New Step", "height", m.Height, "round", m.Round, "step", m.Step) // these are playback checks ticker := time.After(time.Second * 2) if newStepCh != nil { @@ -72,19 +72,19 @@ func (cs *ConsensusState) readReplayMessage(msgBytes []byte, newStepCh chan inte switch msg := m.Msg.(type) { case *ProposalMessage: p := msg.Proposal - log.Notice("Replay: Proposal", "height", p.Height, "round", p.Round, "header", + cs.Logger.Info("Replay: Proposal", "height", p.Height, "round", p.Round, "header", p.BlockPartsHeader, "pol", p.POLRound, "peer", peerKey) case *BlockPartMessage: - log.Notice("Replay: BlockPart", "height", msg.Height, "round", msg.Round, 
"peer", peerKey) + cs.Logger.Info("Replay: BlockPart", "height", msg.Height, "round", msg.Round, "peer", peerKey) case *VoteMessage: v := msg.Vote - log.Notice("Replay: Vote", "height", v.Height, "round", v.Round, "type", v.Type, + cs.Logger.Info("Replay: Vote", "height", v.Height, "round", v.Round, "type", v.Type, "blockID", v.BlockID, "peer", peerKey) } cs.handleMsg(m, cs.RoundState) case timeoutInfo: - log.Notice("Replay: Timeout", "height", m.Height, "round", m.Round, "step", m.Step, "dur", m.Duration) + cs.Logger.Info("Replay: Timeout", "height", m.Height, "round", m.Round, "step", m.Step, "dur", m.Duration) cs.handleTimeout(m, cs.RoundState) default: return fmt.Errorf("Replay: Unknown TimedWALMessage type: %v", reflect.TypeOf(msg.Msg)) @@ -108,18 +108,18 @@ func (cs *ConsensusState) catchupReplay(csHeight int) error { gr.Close() } if found { - return errors.New(Fmt("WAL should not contain #ENDHEIGHT %d.", csHeight)) + return errors.New(cmn.Fmt("WAL should not contain #ENDHEIGHT %d.", csHeight)) } // Search for last height marker gr, found, err = cs.wal.group.Search("#ENDHEIGHT: ", makeHeightSearchFunc(csHeight-1)) if err == io.EOF { - log.Warn("Replay: wal.group.Search returned EOF", "#ENDHEIGHT", csHeight-1) + cs.Logger.Error("Replay: wal.group.Search returned EOF", "#ENDHEIGHT", csHeight-1) // if we upgraded from 0.9 to 0.9.1, we may have #HEIGHT instead // TODO (0.10.0): remove this gr, found, err = cs.wal.group.Search("#HEIGHT: ", makeHeightSearchFunc(csHeight)) if err == io.EOF { - log.Warn("Replay: wal.group.Search returned EOF", "#HEIGHT", csHeight) + cs.Logger.Error("Replay: wal.group.Search returned EOF", "#HEIGHT", csHeight) return nil } else if err != nil { return err @@ -134,7 +134,7 @@ func (cs *ConsensusState) catchupReplay(csHeight int) error { // TODO (0.10.0): remove this gr, found, err = cs.wal.group.Search("#HEIGHT: ", makeHeightSearchFunc(csHeight)) if err == io.EOF { - log.Warn("Replay: wal.group.Search returned EOF", "#HEIGHT", csHeight) 
+ cs.Logger.Error("Replay: wal.group.Search returned EOF", "#HEIGHT", csHeight) return nil } else if err != nil { return err @@ -143,10 +143,10 @@ func (cs *ConsensusState) catchupReplay(csHeight int) error { } // TODO (0.10.0): uncomment - // return errors.New(Fmt("Cannot replay height %d. WAL does not contain #ENDHEIGHT for %d.", csHeight, csHeight-1)) + // return errors.New(cmn.Fmt("Cannot replay height %d. WAL does not contain #ENDHEIGHT for %d.", csHeight, csHeight-1)) } - log.Notice("Catchup by replaying consensus messages", "height", csHeight) + cs.Logger.Info("Catchup by replaying consensus messages", "height", csHeight) for { line, err := gr.ReadLine() @@ -164,7 +164,7 @@ func (cs *ConsensusState) catchupReplay(csHeight int) error { return err } } - log.Notice("Replay: Done") + cs.Logger.Info("Replay: Done") return nil } @@ -199,49 +199,47 @@ func makeHeightSearchFunc(height int) auto.SearchFunc { // we were last and using the WAL to recover there type Handshaker struct { - config cfg.Config state *sm.State store types.BlockStore + logger log.Logger nBlocks int // number of blocks applied to the state } -func NewHandshaker(config cfg.Config, state *sm.State, store types.BlockStore) *Handshaker { - return &Handshaker{config, state, store, 0} +func NewHandshaker(state *sm.State, store types.BlockStore) *Handshaker { + return &Handshaker{state, store, log.NewNopLogger(), 0} +} + +func (h *Handshaker) SetLogger(l log.Logger) { + h.logger = l } func (h *Handshaker) NBlocks() int { return h.nBlocks } -var ErrReplayLastBlockTimeout = errors.New("Timed out waiting for last block to be replayed") - // TODO: retry the handshake/replay if it fails ? 
func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error { // handshake is done via info request on the query conn res, err := proxyApp.Query().InfoSync() if err != nil { - return errors.New(Fmt("Error calling Info: %v", err)) + return errors.New(cmn.Fmt("Error calling Info: %v", err)) } blockHeight := int(res.LastBlockHeight) // XXX: beware overflow appHash := res.LastBlockAppHash - log.Notice("ABCI Handshake", "appHeight", blockHeight, "appHash", appHash) + h.logger.Info("ABCI Handshake", "appHeight", blockHeight, "appHash", fmt.Sprintf("%X", appHash)) // TODO: check version // replay blocks up to the latest in the blockstore _, err = h.ReplayBlocks(appHash, blockHeight, proxyApp) - if err == ErrReplayLastBlockTimeout { - log.Warn("Failed to sync via handshake. Trying other means. If they fail, please increase the timeout_handshake parameter") - return nil - - } else if err != nil { - return errors.New(Fmt("Error on replay: %v", err)) + if err != nil { + return errors.New(cmn.Fmt("Error on replay: %v", err)) } - log.Notice("Completed ABCI Handshake - Tendermint and App are synced", "appHeight", blockHeight, "appHash", appHash) + h.logger.Info("Completed ABCI Handshake - Tendermint and App are synced", "appHeight", blockHeight, "appHash", fmt.Sprintf("%X", appHash)) // TODO: (on restart) replay mempool @@ -254,7 +252,13 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int, proxyApp p storeBlockHeight := h.store.Height() stateBlockHeight := h.state.LastBlockHeight - log.Notice("ABCI Replay Blocks", "appHeight", appBlockHeight, "storeHeight", storeBlockHeight, "stateHeight", stateBlockHeight) + h.logger.Info("ABCI Replay Blocks", "appHeight", appBlockHeight, "storeHeight", storeBlockHeight, "stateHeight", stateBlockHeight) + + // If appBlockHeight == 0 it means that we are at genesis and hence should send InitChain + if appBlockHeight == 0 { + validators := types.TM2PB.Validators(h.state.Validators) + 
proxyApp.Consensus().InitChainSync(validators) + } // First handle edge cases and constraints on the storeBlockHeight if storeBlockHeight == 0 { @@ -266,11 +270,11 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int, proxyApp p } else if storeBlockHeight < stateBlockHeight { // the state should never be ahead of the store (this is under tendermint's control) - PanicSanity(Fmt("StateBlockHeight (%d) > StoreBlockHeight (%d)", stateBlockHeight, storeBlockHeight)) + cmn.PanicSanity(cmn.Fmt("StateBlockHeight (%d) > StoreBlockHeight (%d)", stateBlockHeight, storeBlockHeight)) } else if storeBlockHeight > stateBlockHeight+1 { // store should be at most one ahead of the state (this is under tendermint's control) - PanicSanity(Fmt("StoreBlockHeight (%d) > StateBlockHeight + 1 (%d)", storeBlockHeight, stateBlockHeight+1)) + cmn.PanicSanity(cmn.Fmt("StoreBlockHeight (%d) > StateBlockHeight + 1 (%d)", storeBlockHeight, stateBlockHeight+1)) } // Now either store is equal to state, or one ahead. @@ -300,20 +304,20 @@ func (h *Handshaker) ReplayBlocks(appHash []byte, appBlockHeight int, proxyApp p // so replayBlock with the real app. 
// NOTE: We could instead use the cs.WAL on cs.Start, // but we'd have to allow the WAL to replay a block that wrote it's ENDHEIGHT - log.Info("Replay last block using real app") + h.logger.Info("Replay last block using real app") return h.replayBlock(storeBlockHeight, proxyApp.Consensus()) } else if appBlockHeight == storeBlockHeight { // We ran Commit, but didn't save the state, so replayBlock with mock app abciResponses := h.state.LoadABCIResponses() mockApp := newMockProxyApp(appHash, abciResponses) - log.Info("Replay last block using mock app") + h.logger.Info("Replay last block using mock app") return h.replayBlock(storeBlockHeight, mockApp) } } - PanicSanity("Should never happen") + cmn.PanicSanity("Should never happen") return nil, nil } @@ -331,9 +335,9 @@ func (h *Handshaker) replayBlocks(proxyApp proxy.AppConns, appBlockHeight, store finalBlock -= 1 } for i := appBlockHeight + 1; i <= finalBlock; i++ { - log.Info("Applying block", "height", i) + h.logger.Info("Applying block", "height", i) block := h.store.LoadBlock(i) - appHash, err = sm.ExecCommitBlock(proxyApp.Consensus(), block) + appHash, err = sm.ExecCommitBlock(proxyApp.Consensus(), block, h.logger) if err != nil { return nil, err } @@ -368,7 +372,7 @@ func (h *Handshaker) replayBlock(height int, proxyApp proxy.AppConnConsensus) ([ func (h *Handshaker) checkAppHash(appHash []byte) error { if !bytes.Equal(h.state.AppHash, appHash) { - panic(errors.New(Fmt("Tendermint state.AppHash does not match AppHash after replay. Got %X, expected %X", appHash, h.state.AppHash)).Error()) + panic(errors.New(cmn.Fmt("Tendermint state.AppHash does not match AppHash after replay. 
Got %X, expected %X", appHash, h.state.AppHash)).Error()) return nil } return nil @@ -384,6 +388,7 @@ func newMockProxyApp(appHash []byte, abciResponses *sm.ABCIResponses) proxy.AppC abciResponses: abciResponses, }) cli, _ := clientCreator.NewABCIClient() + cli.Start() return proxy.NewAppConnConsensus(cli) } diff --git a/consensus/replay_file.go b/consensus/replay_file.go index 5ad1b9457..39ce47c5e 100644 --- a/consensus/replay_file.go +++ b/consensus/replay_file.go @@ -8,24 +8,23 @@ import ( "strconv" "strings" - . "github.com/tendermint/go-common" - cfg "github.com/tendermint/go-config" - dbm "github.com/tendermint/go-db" bc "github.com/tendermint/tendermint/blockchain" - mempl "github.com/tendermint/tendermint/mempool" + cfg "github.com/tendermint/tendermint/config" "github.com/tendermint/tendermint/proxy" sm "github.com/tendermint/tendermint/state" "github.com/tendermint/tendermint/types" + cmn "github.com/tendermint/tmlibs/common" + dbm "github.com/tendermint/tmlibs/db" ) //-------------------------------------------------------- // replay messages interactively or all at once -func RunReplayFile(config cfg.Config, walFile string, console bool) { - consensusState := newConsensusStateForReplay(config) +func RunReplayFile(config cfg.BaseConfig, csConfig *cfg.ConsensusConfig, console bool) { + consensusState := newConsensusStateForReplay(config, csConfig) - if err := consensusState.ReplayFile(walFile, console); err != nil { - Exit(Fmt("Error during consensus replay: %v", err)) + if err := consensusState.ReplayFile(csConfig.WalFile(), console); err != nil { + cmn.Exit(cmn.Fmt("Error during consensus replay: %v", err)) } } @@ -114,7 +113,7 @@ func (pb *playback) replayReset(count int, newStepCh chan interface{}) error { pb.fp = fp pb.scanner = bufio.NewScanner(fp) count = pb.count - count - log.Notice(Fmt("Reseting from %d to %d", pb.count, count)) + fmt.Printf("Reseting from %d to %d\n", pb.count, count) pb.count = 0 pb.cs = newCS for i := 0; pb.scanner.Scan() && 
i < count; i++ { @@ -127,8 +126,7 @@ func (pb *playback) replayReset(count int, newStepCh chan interface{}) error { } func (cs *ConsensusState) startForReplay() { - - log.Warn("Replay commands are disabled until someone updates them and writes tests") + cs.Logger.Error("Replay commands are disabled until someone updates them and writes tests") /* TODO:! // since we replay tocks we just ignore ticks go func() { @@ -149,9 +147,9 @@ func (pb *playback) replayConsoleLoop() int { bufReader := bufio.NewReader(os.Stdin) line, more, err := bufReader.ReadLine() if more { - Exit("input is too long") + cmn.Exit("input is too long") } else if err != nil { - Exit(err.Error()) + cmn.Exit(err.Error()) } tokens := strings.Split(string(line), " ") @@ -236,34 +234,31 @@ func (pb *playback) replayConsoleLoop() int { //-------------------------------------------------------------------------------- // convenience for replay mode -func newConsensusStateForReplay(config cfg.Config) *ConsensusState { +func newConsensusStateForReplay(config cfg.BaseConfig, csConfig *cfg.ConsensusConfig) *ConsensusState { // Get BlockStore - blockStoreDB := dbm.NewDB("blockstore", config.GetString("db_backend"), config.GetString("db_dir")) + blockStoreDB := dbm.NewDB("blockstore", config.DBBackend, config.DBDir()) blockStore := bc.NewBlockStore(blockStoreDB) // Get State - stateDB := dbm.NewDB("state", config.GetString("db_backend"), config.GetString("db_dir")) - state := sm.MakeGenesisStateFromFile(stateDB, config.GetString("genesis_file")) + stateDB := dbm.NewDB("state", config.DBBackend, config.DBDir()) + state := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile()) // Create proxyAppConn connection (consensus, mempool, query) - proxyApp := proxy.NewAppConns(config, proxy.DefaultClientCreator(config), NewHandshaker(config, state, blockStore)) + clientCreator := proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir()) + proxyApp := proxy.NewAppConns(clientCreator, 
NewHandshaker(state, blockStore)) _, err := proxyApp.Start() if err != nil { - Exit(Fmt("Error starting proxy app conns: %v", err)) + cmn.Exit(cmn.Fmt("Error starting proxy app conns: %v", err)) } - // add the chainid to the global config - config.Set("chain_id", state.ChainID) - // Make event switch eventSwitch := types.NewEventSwitch() if _, err := eventSwitch.Start(); err != nil { - Exit(Fmt("Failed to start event switch: %v", err)) + cmn.Exit(cmn.Fmt("Failed to start event switch: %v", err)) } - mempool := mempl.NewMempool(config, proxyApp.Mempool()) + consensusState := NewConsensusState(csConfig, state.Copy(), proxyApp.Consensus(), blockStore, types.MockMempool{}) - consensusState := NewConsensusState(config, state.Copy(), proxyApp.Consensus(), blockStore, mempool) consensusState.SetEventSwitch(eventSwitch) return consensusState } diff --git a/consensus/replay_test.go b/consensus/replay_test.go index 43204ab72..23290a7f5 100644 --- a/consensus/replay_test.go +++ b/consensus/replay_test.go @@ -12,21 +12,21 @@ import ( "testing" "time" - "github.com/tendermint/tendermint/config/tendermint_test" - "github.com/tendermint/abci/example/dummy" - cmn "github.com/tendermint/go-common" - cfg "github.com/tendermint/go-config" - "github.com/tendermint/go-crypto" - dbm "github.com/tendermint/go-db" - "github.com/tendermint/go-wire" + crypto "github.com/tendermint/go-crypto" + wire "github.com/tendermint/go-wire" + cmn "github.com/tendermint/tmlibs/common" + dbm "github.com/tendermint/tmlibs/db" + + cfg "github.com/tendermint/tendermint/config" "github.com/tendermint/tendermint/proxy" sm "github.com/tendermint/tendermint/state" "github.com/tendermint/tendermint/types" + "github.com/tendermint/tmlibs/log" ) func init() { - config = tendermint_test.ResetConfig("consensus_replay_test") + config = ResetConfig("consensus_replay_test") } // These tests ensure we can always recover from failure at any part of the consensus process. 
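Across these hunks the refactor follows one pattern: constructors such as `NewHandshaker` drop their `config` argument, default to a no-op logger, and expose a `SetLogger` method instead of using a package-global `log`. A minimal stdlib-only sketch of that default-nop-logger pattern (the type and helper names here are illustrative, not the actual `tmlibs/log` API):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
)

// Handshaker mirrors the shape introduced in the diff: the config field is
// gone and a logger field, defaulting to a no-op logger, takes its place.
type Handshaker struct {
	logger  *log.Logger
	nBlocks int // number of blocks applied, as in the diff
}

// NewHandshaker logs nowhere until SetLogger is called, echoing the
// log.NewNopLogger() default in the diff.
func NewHandshaker() *Handshaker {
	return &Handshaker{logger: log.New(io.Discard, "", 0)}
}

func (h *Handshaker) SetLogger(l *log.Logger) { h.logger = l }

// Handshake logs the app height and hash; %X matches the uppercase-hex
// rendering the diff switched to for appHash.
func (h *Handshaker) Handshake(appHeight int, appHash []byte) {
	h.logger.Printf("ABCI Handshake appHeight=%d appHash=%X", appHeight, appHash)
}

// handshakeLine runs one handshake before and one after SetLogger, returning
// the captured output to show that only the second call logs anything.
func handshakeLine(appHeight int, appHash []byte) string {
	var buf bytes.Buffer
	h := NewHandshaker()
	h.Handshake(appHeight, appHash) // silent: nop logger by default
	h.SetLogger(log.New(&buf, "", 0))
	h.Handshake(appHeight, appHash)
	return buf.String()
}

func main() {
	fmt.Print(handshakeLine(1, []byte{0xde, 0xad}))
}
```

The same wiring shows up in the tests above, where each reactor gets `logger.With("validator", i)` so per-instance output can be told apart.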
@@ -37,6 +37,12 @@ func init() { // the `Handshake Tests` are for failures in applying the block. // With the help of the WAL, we can recover from it all! +// NOTE: Files in this dir are generated by running the `build.sh` therein. +// It's a simple way to generate wals for a single block, or multiple blocks, with random transactions, +// and different part sizes. The output is not deterministic, and the stepChanges may need to be adjusted +// after running it (eg. sometimes small_block2 will have 5 block parts, sometimes 6). +// It should only have to be re-run if there is some breaking change to the consensus data structures (eg. blocks, votes) +// or to the behaviour of the app (eg. computes app hash differently) var data_dir = path.Join(cmn.GoPath, "src/github.com/tendermint/tendermint/consensus", "test_data") //------------------------------------------------------------------------------------------ @@ -53,7 +59,7 @@ var baseStepChanges = []int{3, 6, 8} var testCases = []*testCase{ newTestCase("empty_block", baseStepChanges), // empty block (has 1 block part) newTestCase("small_block1", baseStepChanges), // small block with txs in 1 block part - newTestCase("small_block2", []int{3, 10, 12}), // small block with txs across 5 smaller block parts + newTestCase("small_block2", []int{3, 11, 13}), // small block with txs across 6 smaller block parts } type testCase struct { @@ -130,8 +136,14 @@ func waitForBlock(newBlockCh chan interface{}, thisCase *testCase, i int) { func runReplayTest(t *testing.T, cs *ConsensusState, walFile string, newBlockCh chan interface{}, thisCase *testCase, i int) { - cs.config.Set("cs_wal_file", walFile) - cs.Start() + cs.config.SetWalFile(walFile) + started, err := cs.Start() + if err != nil { + t.Fatalf("Cannot start consensus: %v", err) + } + if !started { + t.Error("Consensus did not start") + } // Wait to make a new block. // This is just a signal that we haven't halted; its not something contained in the WAL itself. 
// Assuming the consensus state is running, replay of any WAL, including the empty one, @@ -154,9 +166,9 @@ func toPV(pv PrivValidator) *types.PrivValidator { return pv.(*types.PrivValidator) } -func setupReplayTest(thisCase *testCase, nLines int, crashAfter bool) (*ConsensusState, chan interface{}, string, string) { - fmt.Println("-------------------------------------") - log.Notice(cmn.Fmt("Starting replay test %v (of %d lines of WAL). Crash after = %v", thisCase.name, nLines, crashAfter)) +func setupReplayTest(t *testing.T, thisCase *testCase, nLines int, crashAfter bool) (*ConsensusState, chan interface{}, string, string) { + t.Log("-------------------------------------") + t.Logf("Starting replay test %v (of %d lines of WAL). Crash after = %v", thisCase.name, nLines, crashAfter) lineStep := nLines if crashAfter { @@ -175,7 +187,7 @@ func setupReplayTest(thisCase *testCase, nLines int, crashAfter bool) (*Consensu toPV(cs.privValidator).LastHeight = 1 // first block toPV(cs.privValidator).LastStep = thisCase.stepMap[lineStep] - log.Warn("setupReplayTest", "LastStep", toPV(cs.privValidator).LastStep) + t.Logf("[WARN] setupReplayTest LastStep=%v", toPV(cs.privValidator).LastStep) newBlockCh := subscribeToEvent(cs.evsw, "tester", types.EventStringNewBlock(), 1) @@ -200,7 +212,7 @@ func TestWALCrashAfterWrite(t *testing.T) { for _, thisCase := range testCases { split := strings.Split(thisCase.log, "\n") for i := 0; i < len(split)-1; i++ { - cs, newBlockCh, _, walFile := setupReplayTest(thisCase, i+1, true) + cs, newBlockCh, _, walFile := setupReplayTest(t, thisCase, i+1, true) runReplayTest(t, cs, walFile, newBlockCh, thisCase, i+1) } } @@ -214,7 +226,7 @@ func TestWALCrashBeforeWritePropose(t *testing.T) { for _, thisCase := range testCases { lineNum := thisCase.proposeLine // setup replay test where last message is a proposal - cs, newBlockCh, proposalMsg, walFile := setupReplayTest(thisCase, lineNum, false) + cs, newBlockCh, proposalMsg, walFile := 
setupReplayTest(t, thisCase, lineNum, false) msg := readTimedWALMessage(t, proposalMsg) proposal := msg.Msg.(msgInfo).Msg.(*ProposalMessage) // Set LastSig @@ -238,7 +250,7 @@ func TestWALCrashBeforeWritePrecommit(t *testing.T) { func testReplayCrashBeforeWriteVote(t *testing.T, thisCase *testCase, lineNum int, eventString string) { // setup replay test where last message is a vote - cs, newBlockCh, voteMsg, walFile := setupReplayTest(thisCase, lineNum, false) + cs, newBlockCh, voteMsg, walFile := setupReplayTest(t, thisCase, lineNum, false) types.AddListenerForEvent(cs.evsw, "tester", eventString, func(data types.TMEventData) { msg := readTimedWALMessage(t, voteMsg) vote := msg.Msg.(msgInfo).Msg.(*VoteMessage) @@ -297,7 +309,7 @@ func TestHandshakeReplayNone(t *testing.T) { // Make some blocks. Start a fresh app and apply nBlocks blocks. Then restart the app and sync it up with the remaining blocks func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) { - config := tendermint_test.ResetConfig("proxy_test_") + config := ResetConfig("proxy_test_") // copy the many_blocks file walBody, err := cmn.ReadFile(path.Join(data_dir, "many_blocks.cswal")) @@ -305,15 +317,19 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) { t.Fatal(err) } walFile := writeWAL(string(walBody)) - config.Set("cs_wal_file", walFile) + config.Consensus.SetWalFile(walFile) - privVal := types.LoadPrivValidator(config.GetString("priv_validator_file")) - testPartSize = config.GetInt("block_part_size") + privVal := types.LoadPrivValidator(config.PrivValidatorFile()) + testPartSize = config.Consensus.BlockPartSize wal, err := NewWAL(walFile, false) if err != nil { t.Fatal(err) } + wal.SetLogger(log.TestingLogger()) + if _, err := wal.Start(); err != nil { + t.Fatal(err) + } chain, commits, err := makeBlockchainFromWAL(wal) if err != nil { t.Fatalf(err.Error()) @@ -327,19 +343,19 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) { latestAppHash := 
buildTMStateFromChain(config, state, chain, mode) // make a new client creator - dummyApp := dummy.NewPersistentDummyApplication(path.Join(config.GetString("db_dir"), "2")) + dummyApp := dummy.NewPersistentDummyApplication(path.Join(config.DBDir(), "2")) clientCreator2 := proxy.NewLocalClientCreator(dummyApp) if nBlocks > 0 { // run nBlocks against a new client to build up the app state. // use a throwaway tendermint state - proxyApp := proxy.NewAppConns(config, clientCreator2, nil) + proxyApp := proxy.NewAppConns(clientCreator2, nil) state, _ := stateAndStore(config, privVal.PubKey) buildAppStateFromChain(proxyApp, state, chain, nBlocks, mode) } // now start the app using the handshake - it should sync - handshaker := NewHandshaker(config, state, store) - proxyApp := proxy.NewAppConns(config, clientCreator2, handshaker) + handshaker := NewHandshaker(state, store) + proxyApp := proxy.NewAppConns(clientCreator2, handshaker) if _, err := proxyApp.Start(); err != nil { t.Fatalf("Error starting proxy app connections: %v", err) } @@ -380,6 +396,10 @@ func buildAppStateFromChain(proxyApp proxy.AppConns, if _, err := proxyApp.Start(); err != nil { panic(err) } + + validators := types.TM2PB.Validators(state.Validators) + proxyApp.Consensus().InitChainSync(validators) + defer proxyApp.Stop() switch mode { case 0: @@ -402,15 +422,18 @@ func buildAppStateFromChain(proxyApp proxy.AppConns, } -func buildTMStateFromChain(config cfg.Config, state *sm.State, chain []*types.Block, mode uint) []byte { +func buildTMStateFromChain(config *cfg.Config, state *sm.State, chain []*types.Block, mode uint) []byte { // run the whole chain against this client to build up the tendermint state - clientCreator := proxy.NewLocalClientCreator(dummy.NewPersistentDummyApplication(path.Join(config.GetString("db_dir"), "1"))) - proxyApp := proxy.NewAppConns(config, clientCreator, nil) // sm.NewHandshaker(config, state, store, ReplayLastBlock)) + clientCreator := 
proxy.NewLocalClientCreator(dummy.NewPersistentDummyApplication(path.Join(config.DBDir(), "1"))) + proxyApp := proxy.NewAppConns(clientCreator, nil) // sm.NewHandshaker(config, state, store, ReplayLastBlock)) if _, err := proxyApp.Start(); err != nil { panic(err) } defer proxyApp.Stop() + validators := types.TM2PB.Validators(state.Validators) + proxyApp.Consensus().InitChainSync(validators) + var latestAppHash []byte switch mode { @@ -452,7 +475,7 @@ func makeBlockchainFromWAL(wal *WAL) ([]*types.Block, []*types.Commit, error) { } defer gr.Close() - log.Notice("Build a blockchain by reading from the WAL") + // log.Notice("Build a blockchain by reading from the WAL") var blockParts *types.PartSet var blocks []*types.Block @@ -596,28 +619,26 @@ func makeBlockchain(t *testing.T, chainID string, nBlocks int, privVal *types.Pr } // fresh state and mock store -func stateAndStore(config cfg.Config, pubKey crypto.PubKey) (*sm.State, *mockBlockStore) { +func stateAndStore(config *cfg.Config, pubKey crypto.PubKey) (*sm.State, *mockBlockStore) { stateDB := dbm.NewMemDB() - return sm.MakeGenesisState(stateDB, &types.GenesisDoc{ - ChainID: config.GetString("chain_id"), - Validators: []types.GenesisValidator{ - types.GenesisValidator{pubKey, 10000, "test"}, - }, - AppHash: nil, - }), NewMockBlockStore(config) + state := sm.MakeGenesisStateFromFile(stateDB, config.GenesisFile()) + state.SetLogger(log.TestingLogger().With("module", "state")) + + store := NewMockBlockStore(config) + return state, store } //---------------------------------- // mock block store type mockBlockStore struct { - config cfg.Config + config *cfg.Config chain []*types.Block commits []*types.Commit } // TODO: NewBlockStore(db.NewMemDB) ... 
-func NewMockBlockStore(config cfg.Config) *mockBlockStore { +func NewMockBlockStore(config *cfg.Config) *mockBlockStore { return &mockBlockStore{config, nil, nil} } @@ -626,7 +647,7 @@ func (bs *mockBlockStore) LoadBlock(height int) *types.Block { return bs.chain[h func (bs *mockBlockStore) LoadBlockMeta(height int) *types.BlockMeta { block := bs.chain[height-1] return &types.BlockMeta{ - BlockID: types.BlockID{block.Hash(), block.MakePartSet(bs.config.GetInt("block_part_size")).Header()}, + BlockID: types.BlockID{block.Hash(), block.MakePartSet(bs.config.Consensus.BlockPartSize).Header()}, Header: block.Header, } } diff --git a/consensus/state.go b/consensus/state.go index 6ff97ddc8..d4056facf 100644 --- a/consensus/state.go +++ b/consensus/state.go @@ -9,65 +9,18 @@ import ( "sync" "time" - "github.com/ebuchman/fail-test" - - . "github.com/tendermint/go-common" - cfg "github.com/tendermint/go-config" - "github.com/tendermint/go-wire" + fail "github.com/ebuchman/fail-test" + wire "github.com/tendermint/go-wire" + cfg "github.com/tendermint/tendermint/config" "github.com/tendermint/tendermint/proxy" sm "github.com/tendermint/tendermint/state" "github.com/tendermint/tendermint/types" + cmn "github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/log" ) //----------------------------------------------------------------------------- -// Timeout Parameters - -// TimeoutParams holds timeouts and deltas for each round step. -// All timeouts and deltas in milliseconds. 
-type TimeoutParams struct { - Propose0 int - ProposeDelta int - Prevote0 int - PrevoteDelta int - Precommit0 int - PrecommitDelta int - Commit0 int - SkipTimeoutCommit bool -} - -// Wait this long for a proposal -func (tp *TimeoutParams) Propose(round int) time.Duration { - return time.Duration(tp.Propose0+tp.ProposeDelta*round) * time.Millisecond -} - -// After receiving any +2/3 prevote, wait this long for stragglers -func (tp *TimeoutParams) Prevote(round int) time.Duration { - return time.Duration(tp.Prevote0+tp.PrevoteDelta*round) * time.Millisecond -} - -// After receiving any +2/3 precommits, wait this long for stragglers -func (tp *TimeoutParams) Precommit(round int) time.Duration { - return time.Duration(tp.Precommit0+tp.PrecommitDelta*round) * time.Millisecond -} - -// After receiving +2/3 precommits for a single block (a commit), wait this long for stragglers in the next height's RoundStepNewHeight -func (tp *TimeoutParams) Commit(t time.Time) time.Time { - return t.Add(time.Duration(tp.Commit0) * time.Millisecond) -} - -// InitTimeoutParamsFromConfig initializes parameters from config -func InitTimeoutParamsFromConfig(config cfg.Config) *TimeoutParams { - return &TimeoutParams{ - Propose0: config.GetInt("timeout_propose"), - ProposeDelta: config.GetInt("timeout_propose_delta"), - Prevote0: config.GetInt("timeout_prevote"), - PrevoteDelta: config.GetInt("timeout_prevote_delta"), - Precommit0: config.GetInt("timeout_precommit"), - PrecommitDelta: config.GetInt("timeout_precommit_delta"), - Commit0: config.GetInt("timeout_commit"), - SkipTimeoutCommit: config.GetBool("skip_timeout_commit"), - } -} +// Config //----------------------------------------------------------------------------- // Errors @@ -222,40 +175,50 @@ type PrivValidator interface { // Tracks consensus state across block heights and rounds. 
type ConsensusState struct { - BaseService + cmn.BaseService - config cfg.Config + // config details + config *cfg.ConsensusConfig + privValidator PrivValidator // for signing votes + + // services for creating and executing blocks proxyAppConn proxy.AppConnConsensus blockStore types.BlockStore mempool types.Mempool - privValidator PrivValidator // for signing votes - + // internal state mtx sync.Mutex RoundState state *sm.State // State until height-1. - peerMsgQueue chan msgInfo // serializes msgs affecting state (proposals, block parts, votes) - internalMsgQueue chan msgInfo // like peerMsgQueue but for our own proposals, parts, votes - timeoutTicker TimeoutTicker // ticker for timeouts - timeoutParams *TimeoutParams // parameters and functions for timeout intervals + // state changes may be triggered by msgs from peers, + // msgs from ourself, or by timeouts + peerMsgQueue chan msgInfo + internalMsgQueue chan msgInfo + timeoutTicker TimeoutTicker + // we use PubSub to trigger msg broadcasts in the reactor, + // and to notify external subscribers, eg. 
through a websocket evsw types.EventSwitch + // a Write-Ahead Log ensures we can recover from any kind of crash + // and helps us avoid signing conflicting votes wal *WAL replayMode bool // so we don't log signing errors during replay - nSteps int // used for testing to limit the number of transitions the state makes + // for tests where we want to limit the number of transitions the state makes + nSteps int - // allow certain function to be overwritten for testing + // some functions can be overwritten for testing decideProposal func(height, round int) doPrevote func(height, round int) setProposal func(proposal *types.Proposal) error + // closed when we finish shutting down done chan struct{} } -func NewConsensusState(config cfg.Config, state *sm.State, proxyAppConn proxy.AppConnConsensus, blockStore types.BlockStore, mempool types.Mempool) *ConsensusState { +func NewConsensusState(config *cfg.ConsensusConfig, state *sm.State, proxyAppConn proxy.AppConnConsensus, blockStore types.BlockStore, mempool types.Mempool) *ConsensusState { cs := &ConsensusState{ config: config, proxyAppConn: proxyAppConn, @@ -264,7 +227,6 @@ func NewConsensusState(config cfg.Config, state *sm.State, proxyAppConn proxy.Ap peerMsgQueue: make(chan msgInfo, msgQueueSize), internalMsgQueue: make(chan msgInfo, msgQueueSize), timeoutTicker: NewTimeoutTicker(), - timeoutParams: InitTimeoutParamsFromConfig(config), done: make(chan struct{}), } // set function defaults (may be overwritten before calling Start) @@ -276,13 +238,19 @@ func NewConsensusState(config cfg.Config, state *sm.State, proxyAppConn proxy.Ap // Don't call scheduleRound0 yet. // We do that upon Start(). cs.reconstructLastCommit(state) - cs.BaseService = *NewBaseService(log, "ConsensusState", cs) + cs.BaseService = *cmn.NewBaseService(nil, "ConsensusState", cs) return cs } //---------------------------------------- // Public interface +// SetLogger implements Service. 
+func (cs *ConsensusState) SetLogger(l log.Logger) { + cs.BaseService.Logger = l + cs.timeoutTicker.SetLogger(l) +} + // SetEventSwitch implements events.Eventable func (cs *ConsensusState) SetEventSwitch(evsw types.EventSwitch) { cs.evsw = evsw @@ -290,7 +258,7 @@ func (cs *ConsensusState) SetEventSwitch(evsw types.EventSwitch) { func (cs *ConsensusState) String() string { // better not to access shared variables - return Fmt("ConsensusState") //(H:%v R:%v S:%v", cs.Height, cs.Round, cs.Step) + return cmn.Fmt("ConsensusState") //(H:%v R:%v S:%v", cs.Height, cs.Round, cs.Step) } func (cs *ConsensusState) GetState() *sm.State { @@ -341,9 +309,9 @@ func (cs *ConsensusState) LoadCommit(height int) *types.Commit { func (cs *ConsensusState) OnStart() error { - walFile := cs.config.GetString("cs_wal_file") + walFile := cs.config.WalFile() if err := cs.OpenWAL(walFile); err != nil { - log.Error("Error loading ConsensusState wal", "error", err.Error()) + cs.Logger.Error("Error loading ConsensusState wal", "error", err.Error()) return err } @@ -357,7 +325,7 @@ func (cs *ConsensusState) OnStart() error { // we may have lost some votes if the process crashed // reload from consensus log to catchup if err := cs.catchupReplay(cs.Height); err != nil { - log.Error("Error on catchup replay. Proceeding to start ConsensusState anyway", "error", err.Error()) + cs.Logger.Error("Error on catchup replay. 
Proceeding to start ConsensusState anyway", "error", err.Error()) // NOTE: if we ever do return an error here, // make sure to stop the timeoutTicker } @@ -398,18 +366,22 @@ func (cs *ConsensusState) Wait() { // Open file to log all consensus messages and timeouts for deterministic accountability func (cs *ConsensusState) OpenWAL(walFile string) (err error) { - err = EnsureDir(path.Dir(walFile), 0700) + err = cmn.EnsureDir(path.Dir(walFile), 0700) if err != nil { - log.Error("Error ensuring ConsensusState wal dir", "error", err.Error()) + cs.Logger.Error("Error ensuring ConsensusState wal dir", "error", err.Error()) return err } cs.mtx.Lock() defer cs.mtx.Unlock() - wal, err := NewWAL(walFile, cs.config.GetBool("cs_wal_light")) + wal, err := NewWAL(walFile, cs.config.WalLight) if err != nil { return err } + wal.SetLogger(cs.Logger.With("wal", walFile)) + if _, err := wal.Start(); err != nil { + return err + } cs.wal = wal return nil } @@ -481,7 +453,7 @@ func (cs *ConsensusState) updateRoundStep(round int, step RoundStepType) { // enterNewRound(height, 0) at cs.StartTime. func (cs *ConsensusState) scheduleRound0(rs *RoundState) { - //log.Info("scheduleRound0", "now", time.Now(), "startTime", cs.StartTime) + //cs.Logger.Info("scheduleRound0", "now", time.Now(), "startTime", cs.StartTime) sleepDuration := rs.StartTime.Sub(time.Now()) cs.scheduleTimeout(sleepDuration, rs.Height, 0, RoundStepNewHeight) } @@ -500,7 +472,7 @@ func (cs *ConsensusState) sendInternalMessage(mi msgInfo) { // be processed out of order. // TODO: use CList here for strict determinism and // attempt push to internalMsgQueue in receiveRoutine - log.Warn("Internal msg queue is full. Using a go-routine") + cs.Logger.Info("Internal msg queue is full. 
Using a go-routine") go func() { cs.internalMsgQueue <- mi }() } } @@ -512,18 +484,18 @@ func (cs *ConsensusState) reconstructLastCommit(state *sm.State) { return } seenCommit := cs.blockStore.LoadSeenCommit(state.LastBlockHeight) - lastPrecommits := types.NewVoteSet(cs.config.GetString("chain_id"), state.LastBlockHeight, seenCommit.Round(), types.VoteTypePrecommit, state.LastValidators) + lastPrecommits := types.NewVoteSet(cs.state.ChainID, state.LastBlockHeight, seenCommit.Round(), types.VoteTypePrecommit, state.LastValidators) for _, precommit := range seenCommit.Precommits { if precommit == nil { continue } added, err := lastPrecommits.AddVote(precommit) if !added || err != nil { - PanicCrisis(Fmt("Failed to reconstruct LastCommit: %v", err)) + cmn.PanicCrisis(cmn.Fmt("Failed to reconstruct LastCommit: %v", err)) } } if !lastPrecommits.HasTwoThirdsMajority() { - PanicSanity("Failed to reconstruct LastCommit: Does not have +2/3 maj") + cmn.PanicSanity("Failed to reconstruct LastCommit: Does not have +2/3 maj") } cs.LastCommit = lastPrecommits } @@ -532,13 +504,13 @@ func (cs *ConsensusState) reconstructLastCommit(state *sm.State) { // The round becomes 0 and cs.Step becomes RoundStepNewHeight. func (cs *ConsensusState) updateToState(state *sm.State) { if cs.CommitRound > -1 && 0 < cs.Height && cs.Height != state.LastBlockHeight { - PanicSanity(Fmt("updateToState() expected state height of %v but found %v", + cmn.PanicSanity(cmn.Fmt("updateToState() expected state height of %v but found %v", cs.Height, state.LastBlockHeight)) } if cs.state != nil && cs.state.LastBlockHeight+1 != cs.Height { // This might happen when someone else is mutating cs.state. // Someone forgot to pass in state.Copy() somewhere?! 
- PanicSanity(Fmt("Inconsistent cs.state.LastBlockHeight+1 %v vs cs.Height %v", + cmn.PanicSanity(cmn.Fmt("Inconsistent cs.state.LastBlockHeight+1 %v vs cs.Height %v", cs.state.LastBlockHeight+1, cs.Height)) } @@ -546,7 +518,7 @@ func (cs *ConsensusState) updateToState(state *sm.State) { // This happens when SwitchToConsensus() is called in the reactor. // We don't want to reset e.g. the Votes. if cs.state != nil && (state.LastBlockHeight <= cs.state.LastBlockHeight) { - log.Notice("Ignoring updateToState()", "newHeight", state.LastBlockHeight+1, "oldHeight", cs.state.LastBlockHeight+1) + cs.Logger.Info("Ignoring updateToState()", "newHeight", state.LastBlockHeight+1, "oldHeight", cs.state.LastBlockHeight+1) return } @@ -555,7 +527,7 @@ func (cs *ConsensusState) updateToState(state *sm.State) { lastPrecommits := (*types.VoteSet)(nil) if cs.CommitRound > -1 && cs.Votes != nil { if !cs.Votes.Precommits(cs.CommitRound).HasTwoThirdsMajority() { - PanicSanity("updateToState(state) called but last Precommit round didn't have +2/3") + cmn.PanicSanity("updateToState(state) called but last Precommit round didn't have +2/3") } lastPrecommits = cs.Votes.Precommits(cs.CommitRound) } @@ -572,9 +544,9 @@ func (cs *ConsensusState) updateToState(state *sm.State) { // to be gathered for the first block. 
// And alternative solution that relies on clocks: // cs.StartTime = state.LastBlockTime.Add(timeoutCommit) - cs.StartTime = cs.timeoutParams.Commit(time.Now()) + cs.StartTime = cs.config.Commit(time.Now()) } else { - cs.StartTime = cs.timeoutParams.Commit(cs.CommitTime) + cs.StartTime = cs.config.Commit(cs.CommitTime) } cs.Validators = validators cs.Proposal = nil @@ -583,7 +555,7 @@ func (cs *ConsensusState) updateToState(state *sm.State) { cs.LockedRound = 0 cs.LockedBlock = nil cs.LockedBlockParts = nil - cs.Votes = NewHeightVoteSet(cs.config.GetString("chain_id"), height, validators) + cs.Votes = NewHeightVoteSet(state.ChainID, height, validators) cs.CommitRound = -1 cs.LastCommit = lastPrecommits cs.LastValidators = state.LastValidators @@ -615,7 +587,7 @@ func (cs *ConsensusState) receiveRoutine(maxSteps int) { for { if maxSteps > 0 { if cs.nSteps >= maxSteps { - log.Warn("reached max steps. exiting receive routine") + cs.Logger.Info("reached max steps. exiting receive routine") cs.nSteps = 0 return } @@ -688,19 +660,19 @@ func (cs *ConsensusState) handleMsg(mi msgInfo, rs RoundState) { // the peer is sending us CatchupCommit precommits. // We could make note of this and help filter in broadcastHasVoteMessage(). 
default: - log.Warn("Unknown msg type", reflect.TypeOf(msg)) + cs.Logger.Error("Unknown msg type", reflect.TypeOf(msg)) } if err != nil { - log.Error("Error with msg", "type", reflect.TypeOf(msg), "peer", peerKey, "error", err, "msg", msg) + cs.Logger.Error("Error with msg", "type", reflect.TypeOf(msg), "peer", peerKey, "error", err, "msg", msg) } } func (cs *ConsensusState) handleTimeout(ti timeoutInfo, rs RoundState) { - log.Debug("Received tock", "timeout", ti.Duration, "height", ti.Height, "round", ti.Round, "step", ti.Step) + cs.Logger.Debug("Received tock", "timeout", ti.Duration, "height", ti.Height, "round", ti.Round, "step", ti.Step) // timeouts must be for current height, round, step if ti.Height != rs.Height || ti.Round < rs.Round || (ti.Round == rs.Round && ti.Step < rs.Step) { - log.Debug("Ignoring tock because we're ahead", "height", rs.Height, "round", rs.Round, "step", rs.Step) + cs.Logger.Debug("Ignoring tock because we're ahead", "height", rs.Height, "round", rs.Round, "step", rs.Step) return } @@ -723,7 +695,7 @@ func (cs *ConsensusState) handleTimeout(ti timeoutInfo, rs RoundState) { types.FireEventTimeoutWait(cs.evsw, cs.RoundStateEvent()) cs.enterNewRound(ti.Height, ti.Round+1) default: - panic(Fmt("Invalid timeout step: %v", ti.Step)) + panic(cmn.Fmt("Invalid timeout step: %v", ti.Step)) } } @@ -738,15 +710,15 @@ func (cs *ConsensusState) handleTimeout(ti timeoutInfo, rs RoundState) { // NOTE: cs.StartTime was already set for height. func (cs *ConsensusState) enterNewRound(height int, round int) { if cs.Height != height || round < cs.Round || (cs.Round == round && cs.Step != RoundStepNewHeight) { - log.Debug(Fmt("enterNewRound(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) + cs.Logger.Debug(cmn.Fmt("enterNewRound(%v/%v): Invalid args. 
Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) return } if now := time.Now(); cs.StartTime.After(now) { - log.Warn("Need to set a buffer and log.Warn() here for sanity.", "startTime", cs.StartTime, "now", now) + cs.Logger.Info("Need to set a buffer and log message here for sanity.", "startTime", cs.StartTime, "now", now) } - log.Notice(Fmt("enterNewRound(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) + cs.Logger.Info(cmn.Fmt("enterNewRound(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) // Increment validators if necessary validators := cs.Validators @@ -780,10 +752,10 @@ func (cs *ConsensusState) enterNewRound(height int, round int) { // Enter: from NewRound(height,round). func (cs *ConsensusState) enterPropose(height int, round int) { if cs.Height != height || round < cs.Round || (cs.Round == round && RoundStepPropose <= cs.Step) { - log.Debug(Fmt("enterPropose(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) + cs.Logger.Debug(cmn.Fmt("enterPropose(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) return } - log.Info(Fmt("enterPropose(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) + cs.Logger.Info(cmn.Fmt("enterPropose(%v/%v). 
Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) defer func() { // Done enterPropose: @@ -799,19 +771,25 @@ func (cs *ConsensusState) enterPropose(height int, round int) { }() // If we don't get the proposal and all block parts quick enough, enterPrevote - cs.scheduleTimeout(cs.timeoutParams.Propose(round), height, round, RoundStepPropose) + cs.scheduleTimeout(cs.config.Propose(round), height, round, RoundStepPropose) // Nothing more to do if we're not a validator if cs.privValidator == nil { + cs.Logger.Debug("This node is not a validator") return } if !bytes.Equal(cs.Validators.GetProposer().Address, cs.privValidator.GetAddress()) { - log.Info("enterPropose: Not our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator) + cs.Logger.Info("enterPropose: Not our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator) + if cs.Validators.HasAddress(cs.privValidator.GetAddress()) { + cs.Logger.Debug("This node is a validator") + } else { + cs.Logger.Debug("This node is not a validator") + } } else { - log.Info("enterPropose: Our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator) + cs.Logger.Info("enterPropose: Our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator) + cs.Logger.Debug("This node is a validator") cs.decideProposal(height, round) - } } @@ -849,11 +827,11 @@ func (cs *ConsensusState) defaultDecideProposal(height, round int) { part := blockParts.GetPart(i) cs.sendInternalMessage(msgInfo{&BlockPartMessage{cs.Height, cs.Round, part}, ""}) } - log.Info("Signed proposal", "height", height, "round", round, "proposal", proposal) - log.Debug(Fmt("Signed proposal block: %v", block)) + cs.Logger.Info("Signed proposal", "height", height, "round", round, "proposal", proposal) + cs.Logger.Debug(cmn.Fmt("Signed proposal block: %v", block)) } else { if 
!cs.replayMode { - log.Warn("enterPropose: Error signing proposal", "height", height, "round", round, "error", err) + cs.Logger.Error("enterPropose: Error signing proposal", "height", height, "round", round, "error", err) } } } @@ -888,15 +866,15 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts commit = cs.LastCommit.MakeCommit() } else { // This shouldn't happen. - log.Error("enterPropose: Cannot propose anything: No commit for the previous block.") + cs.Logger.Error("enterPropose: Cannot propose anything: No commit for the previous block.") return } // Mempool validated transactions - txs := cs.mempool.Reap(cs.config.GetInt("block_size")) + txs := cs.mempool.Reap(cs.config.MaxBlockSizeTxs) return types.MakeBlock(cs.Height, cs.state.ChainID, txs, commit, - cs.state.LastBlockID, cs.state.Validators.Hash(), cs.state.AppHash, cs.config.GetInt("block_part_size")) + cs.state.LastBlockID, cs.state.Validators.Hash(), cs.state.AppHash, cs.config.BlockPartSize) } // Enter: `timeoutPropose` after entering Propose. @@ -906,7 +884,7 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts // Otherwise vote nil. func (cs *ConsensusState) enterPrevote(height int, round int) { if cs.Height != height || round < cs.Round || (cs.Round == round && RoundStepPrevote <= cs.Step) { - log.Debug(Fmt("enterPrevote(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) + cs.Logger.Debug(cmn.Fmt("enterPrevote(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) return } @@ -924,7 +902,7 @@ func (cs *ConsensusState) enterPrevote(height int, round int) { // TODO: catchup event? } - log.Info(Fmt("enterPrevote(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) + cs.Logger.Info(cmn.Fmt("enterPrevote(%v/%v). 
Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) // Sign and broadcast vote as necessary cs.doPrevote(height, round) @@ -936,14 +914,14 @@ func (cs *ConsensusState) enterPrevote(height int, round int) { func (cs *ConsensusState) defaultDoPrevote(height int, round int) { // If a block is locked, prevote that. if cs.LockedBlock != nil { - log.Notice("enterPrevote: Block was locked") + cs.Logger.Info("enterPrevote: Block was locked") cs.signAddVote(types.VoteTypePrevote, cs.LockedBlock.Hash(), cs.LockedBlockParts.Header()) return } // If ProposalBlock is nil, prevote nil. if cs.ProposalBlock == nil { - log.Warn("enterPrevote: ProposalBlock is nil") + cs.Logger.Info("enterPrevote: ProposalBlock is nil") cs.signAddVote(types.VoteTypePrevote, nil, types.PartSetHeader{}) return } @@ -952,7 +930,7 @@ func (cs *ConsensusState) defaultDoPrevote(height int, round int) { err := cs.state.ValidateBlock(cs.ProposalBlock) if err != nil { // ProposalBlock is invalid, prevote nil. - log.Warn("enterPrevote: ProposalBlock is invalid", "error", err) + cs.Logger.Error("enterPrevote: ProposalBlock is invalid", "error", err) cs.signAddVote(types.VoteTypePrevote, nil, types.PartSetHeader{}) return } @@ -967,13 +945,13 @@ func (cs *ConsensusState) defaultDoPrevote(height int, round int) { // Enter: any +2/3 prevotes at next round. func (cs *ConsensusState) enterPrevoteWait(height int, round int) { if cs.Height != height || round < cs.Round || (cs.Round == round && RoundStepPrevoteWait <= cs.Step) { - log.Debug(Fmt("enterPrevoteWait(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) + cs.Logger.Debug(cmn.Fmt("enterPrevoteWait(%v/%v): Invalid args. 
Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) return } if !cs.Votes.Prevotes(round).HasTwoThirdsAny() { - PanicSanity(Fmt("enterPrevoteWait(%v/%v), but Prevotes does not have any +2/3 votes", height, round)) + cmn.PanicSanity(cmn.Fmt("enterPrevoteWait(%v/%v), but Prevotes does not have any +2/3 votes", height, round)) } - log.Info(Fmt("enterPrevoteWait(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) + cs.Logger.Info(cmn.Fmt("enterPrevoteWait(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) defer func() { // Done enterPrevoteWait: @@ -982,7 +960,7 @@ func (cs *ConsensusState) enterPrevoteWait(height int, round int) { }() // Wait for some more prevotes; enterPrecommit - cs.scheduleTimeout(cs.timeoutParams.Prevote(round), height, round, RoundStepPrevoteWait) + cs.scheduleTimeout(cs.config.Prevote(round), height, round, RoundStepPrevoteWait) } // Enter: +2/3 precomits for block or nil. @@ -993,11 +971,11 @@ func (cs *ConsensusState) enterPrevoteWait(height int, round int) { // else, precommit nil otherwise. func (cs *ConsensusState) enterPrecommit(height int, round int) { if cs.Height != height || round < cs.Round || (cs.Round == round && RoundStepPrecommit <= cs.Step) { - log.Debug(Fmt("enterPrecommit(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) + cs.Logger.Debug(cmn.Fmt("enterPrecommit(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) return } - log.Info(Fmt("enterPrecommit(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) + cs.Logger.Info(cmn.Fmt("enterPrecommit(%v/%v). 
Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) defer func() { // Done enterPrecommit: @@ -1010,9 +988,9 @@ func (cs *ConsensusState) enterPrecommit(height int, round int) { // If we don't have a polka, we must precommit nil if !ok { if cs.LockedBlock != nil { - log.Notice("enterPrecommit: No +2/3 prevotes during enterPrecommit while we're locked. Precommitting nil") + cs.Logger.Info("enterPrecommit: No +2/3 prevotes during enterPrecommit while we're locked. Precommitting nil") } else { - log.Notice("enterPrecommit: No +2/3 prevotes during enterPrecommit. Precommitting nil.") + cs.Logger.Info("enterPrecommit: No +2/3 prevotes during enterPrecommit. Precommitting nil.") } cs.signAddVote(types.VoteTypePrecommit, nil, types.PartSetHeader{}) return @@ -1024,15 +1002,15 @@ func (cs *ConsensusState) enterPrecommit(height int, round int) { // the latest POLRound should be this round polRound, _ := cs.Votes.POLInfo() if polRound < round { - PanicSanity(Fmt("This POLRound should be %v but got %", round, polRound)) + cmn.PanicSanity(cmn.Fmt("This POLRound should be %v but got %v", round, polRound)) } // +2/3 prevoted nil. Unlock and precommit nil. if len(blockID.Hash) == 0 { if cs.LockedBlock == nil { - log.Notice("enterPrecommit: +2/3 prevoted for nil.") + cs.Logger.Info("enterPrecommit: +2/3 prevoted for nil.") } else { - log.Notice("enterPrecommit: +2/3 prevoted for nil. Unlocking") + cs.Logger.Info("enterPrecommit: +2/3 prevoted for nil. Unlocking") cs.LockedRound = 0 cs.LockedBlock = nil cs.LockedBlockParts = nil @@ -1046,7 +1024,7 @@ func (cs *ConsensusState) enterPrecommit(height int, round int) { // If we're already locked on that block, precommit it, and update the LockedRound if cs.LockedBlock.HashesTo(blockID.Hash) { - log.Notice("enterPrecommit: +2/3 prevoted locked block. Relocking") + cs.Logger.Info("enterPrecommit: +2/3 prevoted locked block.
Relocking") cs.LockedRound = round types.FireEventRelock(cs.evsw, cs.RoundStateEvent()) cs.signAddVote(types.VoteTypePrecommit, blockID.Hash, blockID.PartsHeader) @@ -1055,10 +1033,10 @@ func (cs *ConsensusState) enterPrecommit(height int, round int) { // If +2/3 prevoted for proposal block, stage and precommit it if cs.ProposalBlock.HashesTo(blockID.Hash) { - log.Notice("enterPrecommit: +2/3 prevoted proposal block. Locking", "hash", blockID.Hash) + cs.Logger.Info("enterPrecommit: +2/3 prevoted proposal block. Locking", "hash", blockID.Hash) // Validate the block. if err := cs.state.ValidateBlock(cs.ProposalBlock); err != nil { - PanicConsensus(Fmt("enterPrecommit: +2/3 prevoted for an invalid block: %v", err)) + cmn.PanicConsensus(cmn.Fmt("enterPrecommit: +2/3 prevoted for an invalid block: %v", err)) } cs.LockedRound = round cs.LockedBlock = cs.ProposalBlock @@ -1087,13 +1065,13 @@ func (cs *ConsensusState) enterPrecommit(height int, round int) { // Enter: any +2/3 precommits for next round. func (cs *ConsensusState) enterPrecommitWait(height int, round int) { if cs.Height != height || round < cs.Round || (cs.Round == round && RoundStepPrecommitWait <= cs.Step) { - log.Debug(Fmt("enterPrecommitWait(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) + cs.Logger.Debug(cmn.Fmt("enterPrecommitWait(%v/%v): Invalid args. Current step: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) return } if !cs.Votes.Precommits(round).HasTwoThirdsAny() { - PanicSanity(Fmt("enterPrecommitWait(%v/%v), but Precommits does not have any +2/3 votes", height, round)) + cmn.PanicSanity(cmn.Fmt("enterPrecommitWait(%v/%v), but Precommits does not have any +2/3 votes", height, round)) } - log.Info(Fmt("enterPrecommitWait(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) + cs.Logger.Info(cmn.Fmt("enterPrecommitWait(%v/%v). 
Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step)) defer func() { // Done enterPrecommitWait: @@ -1102,17 +1080,17 @@ func (cs *ConsensusState) enterPrecommitWait(height int, round int) { }() // Wait for some more precommits; enterNewRound - cs.scheduleTimeout(cs.timeoutParams.Precommit(round), height, round, RoundStepPrecommitWait) + cs.scheduleTimeout(cs.config.Precommit(round), height, round, RoundStepPrecommitWait) } // Enter: +2/3 precommits for block func (cs *ConsensusState) enterCommit(height int, commitRound int) { if cs.Height != height || RoundStepCommit <= cs.Step { - log.Debug(Fmt("enterCommit(%v/%v): Invalid args. Current step: %v/%v/%v", height, commitRound, cs.Height, cs.Round, cs.Step)) + cs.Logger.Debug(cmn.Fmt("enterCommit(%v/%v): Invalid args. Current step: %v/%v/%v", height, commitRound, cs.Height, cs.Round, cs.Step)) return } - log.Info(Fmt("enterCommit(%v/%v). Current: %v/%v/%v", height, commitRound, cs.Height, cs.Round, cs.Step)) + cs.Logger.Info(cmn.Fmt("enterCommit(%v/%v). Current: %v/%v/%v", height, commitRound, cs.Height, cs.Round, cs.Step)) defer func() { // Done enterCommit: @@ -1128,7 +1106,7 @@ func (cs *ConsensusState) enterCommit(height int, commitRound int) { blockID, ok := cs.Votes.Precommits(commitRound).TwoThirdsMajority() if !ok { - PanicSanity("RunActionCommit() expects +2/3 precommits") + cmn.PanicSanity("RunActionCommit() expects +2/3 precommits") } // The Locked* fields no longer matter. @@ -1155,20 +1133,21 @@ func (cs *ConsensusState) enterCommit(height int, commitRound int) { // If we have the block AND +2/3 commits for it, finalize. 
func (cs *ConsensusState) tryFinalizeCommit(height int) { if cs.Height != height { - PanicSanity(Fmt("tryFinalizeCommit() cs.Height: %v vs height: %v", cs.Height, height)) + cmn.PanicSanity(cmn.Fmt("tryFinalizeCommit() cs.Height: %v vs height: %v", cs.Height, height)) } blockID, ok := cs.Votes.Precommits(cs.CommitRound).TwoThirdsMajority() if !ok || len(blockID.Hash) == 0 { - log.Warn("Attempt to finalize failed. There was no +2/3 majority, or +2/3 was for <nil>.", "height", height) + cs.Logger.Error("Attempt to finalize failed. There was no +2/3 majority, or +2/3 was for <nil>.", "height", height) return } if !cs.ProposalBlock.HashesTo(blockID.Hash) { // TODO: this happens every time if we're not a validator (ugly logs) // TODO: ^^ wait, why does it matter that we're a validator? - log.Warn("Attempt to finalize failed. We don't have the commit block.", "height", height, "proposal-block", cs.ProposalBlock.Hash(), "commit-block", blockID.Hash) + cs.Logger.Error("Attempt to finalize failed. We don't have the commit block.", "height", height, "proposal-block", cs.ProposalBlock.Hash(), "commit-block", blockID.Hash) return } + // go cs.finalizeCommit(height) } @@ -1176,7 +1155,7 @@ func (cs *ConsensusState) tryFinalizeCommit(height int) { // Increment height and goto RoundStepNewHeight func (cs *ConsensusState) finalizeCommit(height int) { if cs.Height != height || cs.Step != RoundStepCommit { - log.Debug(Fmt("finalizeCommit(%v): Invalid args. Current step: %v/%v/%v", height, cs.Height, cs.Round, cs.Step)) + cs.Logger.Debug(cmn.Fmt("finalizeCommit(%v): Invalid args.
Current step: %v/%v/%v", height, cs.Height, cs.Round, cs.Step)) return } @@ -1184,21 +1163,21 @@ func (cs *ConsensusState) finalizeCommit(height int) { block, blockParts := cs.ProposalBlock, cs.ProposalBlockParts if !ok { - PanicSanity(Fmt("Cannot finalizeCommit, commit does not have two thirds majority")) + cmn.PanicSanity(cmn.Fmt("Cannot finalizeCommit, commit does not have two thirds majority")) } if !blockParts.HasHeader(blockID.PartsHeader) { - PanicSanity(Fmt("Expected ProposalBlockParts header to be commit header")) + cmn.PanicSanity(cmn.Fmt("Expected ProposalBlockParts header to be commit header")) } if !block.HashesTo(blockID.Hash) { - PanicSanity(Fmt("Cannot finalizeCommit, ProposalBlock does not hash to commit hash")) + cmn.PanicSanity(cmn.Fmt("Cannot finalizeCommit, ProposalBlock does not hash to commit hash")) } if err := cs.state.ValidateBlock(block); err != nil { - PanicConsensus(Fmt("+2/3 committed an invalid block: %v", err)) + cmn.PanicConsensus(cmn.Fmt("+2/3 committed an invalid block: %v", err)) } - log.Notice(Fmt("Finalizing commit of block with %d txs", block.NumTxs), + cs.Logger.Info(cmn.Fmt("Finalizing commit of block with %d txs", block.NumTxs), "height", block.Height, "hash", block.Hash(), "root", block.AppHash) - log.Info(Fmt("%v", block)) + cs.Logger.Info(cmn.Fmt("%v", block)) fail.Fail() // XXX @@ -1211,7 +1190,7 @@ func (cs *ConsensusState) finalizeCommit(height int) { cs.blockStore.SaveBlock(block, blockParts, seenCommit) } else { // Happens during replay if we already saved the block but didn't commit - log.Info("Calling finalizeCommit on already stored block", "height", block.Height) + cs.Logger.Info("Calling finalizeCommit on already stored block", "height", block.Height) } fail.Fail() // XXX @@ -1239,7 +1218,7 @@ func (cs *ConsensusState) finalizeCommit(height int) { // NOTE: the block.AppHash wont reflect these txs until the next block err := stateCopy.ApplyBlock(eventCache, cs.proxyAppConn, block, blockParts.Header(), 
cs.mempool) if err != nil { - log.Error("Error on ApplyBlock. Did the application crash? Please restart tendermint", "error", err) + cs.Logger.Error("Error on ApplyBlock. Did the application crash? Please restart tendermint", "error", err) return } @@ -1249,7 +1228,7 @@ func (cs *ConsensusState) finalizeCommit(height int) { // NOTE: If we fail before firing, these events will never fire // // TODO: Either - // * Fire before persisting state, in ApplyBlock + // * Fire before persisting state, in ApplyBlock // * Fire on start up if we haven't written any new WAL msgs // Both options mean we may fire more than once. Is that fine ? types.FireEventNewBlock(cs.evsw, types.EventDataNewBlock{block}) @@ -1332,7 +1311,7 @@ func (cs *ConsensusState) addProposalBlockPart(height int, part *types.Part, ver var err error cs.ProposalBlock = wire.ReadBinary(&types.Block{}, cs.ProposalBlockParts.GetReader(), types.MaxBlockSize, &n, &err).(*types.Block) // NOTE: it's possible to receive complete proposal blocks for future rounds without having the proposal - log.Info("Received complete proposal block", "height", cs.ProposalBlock.Height, "hash", cs.ProposalBlock.Hash()) + cs.Logger.Info("Received complete proposal block", "height", cs.ProposalBlock.Height, "hash", cs.ProposalBlock.Hash()) if cs.Step == RoundStepPropose && cs.isProposalComplete() { // Move onto the next step cs.enterPrevote(height, cs.Round) @@ -1356,10 +1335,10 @@ func (cs *ConsensusState) tryAddVote(vote *types.Vote, peerKey string) error { return err } else if _, ok := err.(*types.ErrVoteConflictingVotes); ok { if peerKey == "" { - log.Warn("Found conflicting vote from ourselves. Did you unsafe_reset a validator?", "height", vote.Height, "round", vote.Round, "type", vote.Type) + cs.Logger.Error("Found conflicting vote from ourselves. Did you unsafe_reset a validator?", "height", vote.Height, "round", vote.Round, "type", vote.Type) return err } - log.Warn("Found conflicting vote. 
Publish evidence (TODO)") + cs.Logger.Error("Found conflicting vote. Publish evidence (TODO)") /* TODO evidenceTx := &types.DupeoutTx{ Address: address, @@ -1371,7 +1350,7 @@ func (cs *ConsensusState) tryAddVote(vote *types.Vote, peerKey string) error { return err } else { // Probably an invalid signature. Bad peer. - log.Warn("Error attempting to add vote", "error", err) + cs.Logger.Error("Error attempting to add vote", "error", err) return ErrAddingVote } } @@ -1381,7 +1360,7 @@ func (cs *ConsensusState) tryAddVote(vote *types.Vote, peerKey string) error { //----------------------------------------------------------------------------- func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool, err error) { - log.Debug("addVote", "voteHeight", vote.Height, "voteType", vote.Type, "csHeight", cs.Height) + cs.Logger.Debug("addVote", "voteHeight", vote.Height, "voteType", vote.Type, "csHeight", cs.Height) // A precommit for the previous height? // These come in while we wait timeoutCommit @@ -1393,11 +1372,11 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool, } added, err = cs.LastCommit.AddVote(vote) if added { - log.Info(Fmt("Added to lastPrecommits: %v", cs.LastCommit.StringShort())) + cs.Logger.Info(cmn.Fmt("Added to lastPrecommits: %v", cs.LastCommit.StringShort())) types.FireEventVote(cs.evsw, types.EventDataVote{vote}) // if we can skip timeoutCommit and have all the votes now, - if cs.timeoutParams.SkipTimeoutCommit && cs.LastCommit.HasAll() { + if cs.config.SkipTimeoutCommit && cs.LastCommit.HasAll() { // go straight to new round (skip timeout commit) // cs.scheduleTimeout(time.Duration(0), cs.Height, 0, RoundStepNewHeight) cs.enterNewRound(cs.Height, 0) @@ -1417,7 +1396,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool, switch vote.Type { case types.VoteTypePrevote: prevotes := cs.Votes.Prevotes(vote.Round) - log.Info("Added to prevote", "vote", vote, "prevotes", 
prevotes.StringShort()) + cs.Logger.Info("Added to prevote", "vote", vote, "prevotes", prevotes.StringShort()) // First, unlock if prevotes is a valid POL. // >> lockRound < POLRound <= unlockOrChangeLockRound (see spec) // NOTE: If (lockRound < POLRound) but !(POLRound <= unlockOrChangeLockRound), @@ -1426,7 +1405,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool, if (cs.LockedBlock != nil) && (cs.LockedRound < vote.Round) && (vote.Round <= cs.Round) { blockID, ok := prevotes.TwoThirdsMajority() if ok && !cs.LockedBlock.HashesTo(blockID.Hash) { - log.Notice("Unlocking because of POL.", "lockedRound", cs.LockedRound, "POLRound", vote.Round) + cs.Logger.Info("Unlocking because of POL.", "lockedRound", cs.LockedRound, "POLRound", vote.Round) cs.LockedRound = 0 cs.LockedBlock = nil cs.LockedBlockParts = nil @@ -1450,7 +1429,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool, } case types.VoteTypePrecommit: precommits := cs.Votes.Precommits(vote.Round) - log.Info("Added to precommit", "vote", vote, "precommits", precommits.StringShort()) + cs.Logger.Info("Added to precommit", "vote", vote, "precommits", precommits.StringShort()) blockID, ok := precommits.TwoThirdsMajority() if ok { if len(blockID.Hash) == 0 { @@ -1460,7 +1439,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool, cs.enterPrecommit(height, vote.Round) cs.enterCommit(height, vote.Round) - if cs.timeoutParams.SkipTimeoutCommit && precommits.HasAll() { + if cs.config.SkipTimeoutCommit && precommits.HasAll() { // if we have all the votes now, // go straight to new round (skip timeout commit) // cs.scheduleTimeout(time.Duration(0), cs.Height, 0, RoundStepNewHeight) @@ -1474,7 +1453,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool, cs.enterPrecommitWait(height, vote.Round) } default: - PanicSanity(Fmt("Unexpected vote type %X", vote.Type)) // Should not happen. 
+ cmn.PanicSanity(cmn.Fmt("Unexpected vote type %X", vote.Type)) // Should not happen. } } // Either duplicate, or error upon cs.Votes.AddByIndex() @@ -1484,7 +1463,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerKey string) (added bool, } // Height mismatch, bad peer? - log.Info("Vote ignored and not added", "voteHeight", vote.Height, "csHeight", cs.Height, "err", err) + cs.Logger.Info("Vote ignored and not added", "voteHeight", vote.Height, "csHeight", cs.Height, "err", err) return } @@ -1512,11 +1491,11 @@ func (cs *ConsensusState) signAddVote(type_ byte, hash []byte, header types.Part vote, err := cs.signVote(type_, hash, header) if err == nil { cs.sendInternalMessage(msgInfo{&VoteMessage{vote}, ""}) - log.Info("Signed and pushed vote", "height", cs.Height, "round", cs.Round, "vote", vote, "error", err) + cs.Logger.Info("Signed and pushed vote", "height", cs.Height, "round", cs.Round, "vote", vote, "error", err) return vote } else { //if !cs.replayMode { - log.Warn("Error signing vote", "height", cs.Height, "round", cs.Round, "vote", vote, "error", err) + cs.Logger.Error("Error signing vote", "height", cs.Height, "round", cs.Round, "vote", vote, "error", err) //} return nil } diff --git a/consensus/state_test.go b/consensus/state_test.go index b7d9a42d1..2606685a4 100644 --- a/consensus/state_test.go +++ b/consensus/state_test.go @@ -6,17 +6,16 @@ import ( "testing" "time" - . "github.com/tendermint/go-common" - "github.com/tendermint/tendermint/config/tendermint_test" "github.com/tendermint/tendermint/types" + . 
"github.com/tendermint/tmlibs/common" ) func init() { - config = tendermint_test.ResetConfig("consensus_state_test") + config = ResetConfig("consensus_state_test") } -func (tp *TimeoutParams) ensureProposeTimeout() time.Duration { - return time.Duration(tp.Propose0*2) * time.Millisecond +func ensureProposeTimeout(timeoutPropose int) time.Duration { + return time.Duration(timeoutPropose*2) * time.Millisecond } /* @@ -126,7 +125,7 @@ func TestEnterProposeNoPrivValidator(t *testing.T) { startTestRound(cs, height, round) // if we're not a validator, EnterPropose should timeout - ticker := time.NewTicker(cs.timeoutParams.ensureProposeTimeout()) + ticker := time.NewTicker(ensureProposeTimeout(cs.config.TimeoutPropose)) select { case <-timeoutCh: case <-ticker.C: @@ -167,7 +166,7 @@ func TestEnterProposeYesPrivValidator(t *testing.T) { } // if we're a validator, enterPropose should not timeout - ticker := time.NewTicker(cs.timeoutParams.ensureProposeTimeout()) + ticker := time.NewTicker(ensureProposeTimeout(cs.config.TimeoutPropose)) select { case <-timeoutCh: panic("Expected EnterPropose not to timeout") @@ -181,7 +180,7 @@ func TestBadProposal(t *testing.T) { height, round := cs1.Height, cs1.Round vs2 := vss[1] - partSize := config.GetInt("block_part_size") + partSize := config.Consensus.BlockPartSize proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1) voteCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringVote(), 1) @@ -201,7 +200,7 @@ func TestBadProposal(t *testing.T) { propBlock.AppHash = stateHash propBlockParts := propBlock.MakePartSet(partSize) proposal := types.NewProposal(vs2.Height, round, propBlockParts.Header(), -1, types.BlockID{}) - if err := vs2.SignProposal(config.GetString("chain_id"), proposal); err != nil { + if err := vs2.SignProposal(config.ChainID, proposal); err != nil { t.Fatal("failed to sign bad proposal", err) } @@ -248,7 +247,7 @@ func TestFullRound1(t *testing.T) { // grab proposal re := 
<-propCh - propBlockHash := re.(types.EventDataRoundState).RoundState.(*RoundState).ProposalBlock.Hash() + propBlockHash := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState).ProposalBlock.Hash() <-voteCh // wait for prevote // NOTE: voteChan cap of 0 ensures we can complete this @@ -328,7 +327,7 @@ func TestLockNoPOL(t *testing.T) { vs2 := vss[1] height := cs1.Height - partSize := config.GetInt("block_part_size") + partSize := config.Consensus.BlockPartSize timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1) timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1) @@ -345,7 +344,7 @@ func TestLockNoPOL(t *testing.T) { cs1.startRoutines(0) re := <-proposalCh - rs := re.(types.EventDataRoundState).RoundState.(*RoundState) + rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState) theBlockHash := rs.ProposalBlock.Hash() <-voteCh // prevote @@ -376,7 +375,7 @@ func TestLockNoPOL(t *testing.T) { /// <-newRoundCh - log.Notice("#### ONTO ROUND 1") + t.Log("#### ONTO ROUND 1") /* Round2 (cs1, B) // B B2 */ @@ -385,7 +384,7 @@ func TestLockNoPOL(t *testing.T) { // now we're on a new round and not the proposer, so wait for timeout re = <-timeoutProposeCh - rs = re.(types.EventDataRoundState).RoundState.(*RoundState) + rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState) if rs.ProposalBlock != nil { panic("Expected proposal block to be nil") @@ -421,7 +420,7 @@ func TestLockNoPOL(t *testing.T) { <-timeoutWaitCh <-newRoundCh - log.Notice("#### ONTO ROUND 2") + t.Log("#### ONTO ROUND 2") /* Round3 (vs2, _) // B, B2 */ @@ -429,7 +428,7 @@ func TestLockNoPOL(t *testing.T) { incrementRound(vs2) re = <-proposalCh - rs = re.(types.EventDataRoundState).RoundState.(*RoundState) + rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState) // now we're on a new round and are the 
proposer if !bytes.Equal(rs.ProposalBlock.Hash(), rs.LockedBlock.Hash()) { @@ -462,7 +461,7 @@ func TestLockNoPOL(t *testing.T) { incrementRound(vs2) <-newRoundCh - log.Notice("#### ONTO ROUND 3") + t.Log("#### ONTO ROUND 3") /* Round4 (vs2, C) // B C // B C */ @@ -494,7 +493,7 @@ func TestLockPOLRelock(t *testing.T) { cs1, vss := randConsensusState(4) vs2, vs3, vs4 := vss[1], vss[2], vss[3] - partSize := config.GetInt("block_part_size") + partSize := config.Consensus.BlockPartSize timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1) timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1) @@ -503,7 +502,7 @@ func TestLockPOLRelock(t *testing.T) { newRoundCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewRound(), 1) newBlockCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringNewBlockHeader(), 1) - log.Debug("vs2 last round", "lr", vs2.PrivValidator.LastRound) + t.Logf("vs2 last round %v", vs2.PrivValidator.LastRound) // everything done from perspective of cs1 @@ -518,7 +517,7 @@ func TestLockPOLRelock(t *testing.T) { <-newRoundCh re := <-proposalCh - rs := re.(types.EventDataRoundState).RoundState.(*RoundState) + rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState) theBlockHash := rs.ProposalBlock.Hash() <-voteCh // prevote @@ -549,7 +548,7 @@ func TestLockPOLRelock(t *testing.T) { cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer") <-newRoundCh - log.Notice("### ONTO ROUND 1") + t.Log("### ONTO ROUND 1") /* Round2 (vs2, C) // B C C C // C C C _) @@ -589,9 +588,9 @@ func TestLockPOLRelock(t *testing.T) { _, _ = <-voteCh, <-voteCh be := <-newBlockCh - b := be.(types.EventDataNewBlockHeader) + b := be.(types.TMEventData).Unwrap().(types.EventDataNewBlockHeader) re = <-newRoundCh - rs = re.(types.EventDataRoundState).RoundState.(*RoundState) + rs = 
re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState) if rs.Height != 2 { panic("Expected height to increment") } @@ -606,7 +605,7 @@ func TestLockPOLUnlock(t *testing.T) { cs1, vss := randConsensusState(4) vs2, vs3, vs4 := vss[1], vss[2], vss[3] - partSize := config.GetInt("block_part_size") + partSize := config.Consensus.BlockPartSize proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1) timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1) @@ -627,7 +626,7 @@ startTestRound(cs1, cs1.Height, 0) <-newRoundCh re := <-proposalCh - rs := re.(types.EventDataRoundState).RoundState.(*RoundState) + rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState) theBlockHash := rs.ProposalBlock.Hash() <-voteCh // prevote @@ -653,14 +652,14 @@ // timeout to new round re = <-timeoutWaitCh - rs = re.(types.EventDataRoundState).RoundState.(*RoundState) + rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState) lockedBlockHash := rs.LockedBlock.Hash() //XXX: this isn't guaranteed to get there before the timeoutPropose ... 
cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer") <-newRoundCh - log.Notice("#### ONTO ROUND 1") + t.Log("#### ONTO ROUND 1") /* Round2 (vs2, C) // B nil nil nil // nil nil nil _ @@ -701,7 +700,7 @@ func TestLockPOLSafety1(t *testing.T) { cs1, vss := randConsensusState(4) vs2, vs3, vs4 := vss[1], vss[2], vss[3] - partSize := config.GetInt("block_part_size") + partSize := config.Consensus.BlockPartSize proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1) timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1) @@ -713,7 +712,7 @@ func TestLockPOLSafety1(t *testing.T) { startTestRound(cs1, cs1.Height, 0) <-newRoundCh re := <-proposalCh - rs := re.(types.EventDataRoundState).RoundState.(*RoundState) + rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState) propBlock := rs.ProposalBlock <-voteCh // prevote @@ -732,7 +731,7 @@ func TestLockPOLSafety1(t *testing.T) { panic("failed to update validator") }*/ - log.Warn("old prop", "hash", fmt.Sprintf("%X", propBlock.Hash())) + t.Logf("old prop hash %v", fmt.Sprintf("%X", propBlock.Hash())) // we do see them precommit nil signAddVotes(cs1, types.VoteTypePrecommit, nil, types.PartSetHeader{}, vs2, vs3, vs4) @@ -747,7 +746,7 @@ func TestLockPOLSafety1(t *testing.T) { cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer") <-newRoundCh - log.Notice("### ONTO ROUND 1") + t.Log("### ONTO ROUND 1") /*Round2 // we timeout and prevote our lock // a polka happened but we didn't see it! 
@@ -761,12 +760,12 @@ func TestLockPOLSafety1(t *testing.T) { re = <-proposalCh } - rs = re.(types.EventDataRoundState).RoundState.(*RoundState) + rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState) if rs.LockedBlock != nil { panic("we should not be locked!") } - log.Warn("new prop", "hash", fmt.Sprintf("%X", propBlockHash)) + t.Logf("new prop hash %v", fmt.Sprintf("%X", propBlockHash)) // go to prevote, prevote for proposal block <-voteCh validatePrevote(t, cs1, 1, vss[0], propBlockHash) @@ -787,7 +786,7 @@ func TestLockPOLSafety1(t *testing.T) { <-newRoundCh - log.Notice("### ONTO ROUND 2") + t.Log("### ONTO ROUND 2") /*Round3 we see the polka from round 1 but we shouldn't unlock! */ @@ -806,7 +805,7 @@ func TestLockPOLSafety1(t *testing.T) { // add prevotes from the earlier round addVotes(cs1, prevotes...) - log.Warn("Done adding prevotes!") + t.Log("Done adding prevotes!") ensureNoNewStep(newStepCh) } @@ -822,7 +821,7 @@ func TestLockPOLSafety2(t *testing.T) { cs1, vss := randConsensusState(4) vs2, vs3, vs4 := vss[1], vss[2], vss[3] - partSize := config.GetInt("block_part_size") + partSize := config.Consensus.BlockPartSize proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1) timeoutProposeCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutPropose(), 1) @@ -850,7 +849,7 @@ func TestLockPOLSafety2(t *testing.T) { cs1.updateRoundStep(0, RoundStepPrecommitWait) - log.Notice("### ONTO Round 1") + t.Log("### ONTO Round 1") // jump in at round 1 height := cs1.Height startTestRound(cs1, height, 1) @@ -878,7 +877,7 @@ func TestLockPOLSafety2(t *testing.T) { // in round 2 we see the polkad block from round 0 newProp := types.NewProposal(height, 2, propBlockParts0.Header(), 0, propBlockID1) - if err := vs3.SignProposal(config.GetString("chain_id"), newProp); err != nil { + if err := vs3.SignProposal(config.ChainID, newProp); err != nil { t.Fatal(err) } 
cs1.SetProposalAndBlock(newProp, propBlock0, propBlockParts0, "some peer") @@ -887,7 +886,7 @@ func TestLockPOLSafety2(t *testing.T) { addVotes(cs1, prevotes...) <-newRoundCh - log.Notice("### ONTO Round 2") + t.Log("### ONTO Round 2") /*Round2 // now we see the polka from round 1, but we shouldnt unlock */ @@ -997,7 +996,7 @@ func TestHalt1(t *testing.T) { cs1, vss := randConsensusState(4) vs2, vs3, vs4 := vss[1], vss[2], vss[3] - partSize := config.GetInt("block_part_size") + partSize := config.Consensus.BlockPartSize proposalCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringCompleteProposal(), 1) timeoutWaitCh := subscribeToEvent(cs1.evsw, "tester", types.EventStringTimeoutWait(), 1) @@ -1009,7 +1008,7 @@ func TestHalt1(t *testing.T) { startTestRound(cs1, cs1.Height, 0) <-newRoundCh re := <-proposalCh - rs := re.(types.EventDataRoundState).RoundState.(*RoundState) + rs := re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState) propBlock := rs.ProposalBlock propBlockParts := propBlock.MakePartSet(partSize) @@ -1032,9 +1031,9 @@ func TestHalt1(t *testing.T) { // timeout to new round <-timeoutWaitCh re = <-newRoundCh - rs = re.(types.EventDataRoundState).RoundState.(*RoundState) + rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState) - log.Notice("### ONTO ROUND 1") + t.Log("### ONTO ROUND 1") /*Round2 // we timeout and prevote our lock // a polka happened but we didn't see it! 
@@ -1050,7 +1049,7 @@ func TestHalt1(t *testing.T) { // receiving that precommit should take us straight to commit <-newBlockCh re = <-newRoundCh - rs = re.(types.EventDataRoundState).RoundState.(*RoundState) + rs = re.(types.TMEventData).Unwrap().(types.EventDataRoundState).RoundState.(*RoundState) if rs.Height != 2 { panic("expected height to increment") diff --git a/consensus/test_data/build.sh b/consensus/test_data/build.sh old mode 100644 new mode 100755 index 2759c0e38..d50c26296 --- a/consensus/test_data/build.sh +++ b/consensus/test_data/build.sh @@ -1,31 +1,38 @@ -#! /bin/bash +#!/usr/bin/env bash # XXX: removes tendermint dir -cd $GOPATH/src/github.com/tendermint/tendermint +cd "$GOPATH/src/github.com/tendermint/tendermint" || exit 1 + +# Make sure we have a tendermint command. +if ! hash tendermint 2>/dev/null; then + make install +fi # specify a dir to copy # TODO: eventually we should replace with `tendermint init --test` -DIR=$HOME/.tendermint_test/consensus_state_test +DIR_TO_COPY=$HOME/.tendermint_test/consensus_state_test -rm -rf $HOME/.tendermint -cp -r $DIR $HOME/.tendermint +TMHOME="$HOME/.tendermint" +rm -rf "$TMHOME" +cp -r "$DIR_TO_COPY" "$TMHOME" +cp $TMHOME/config.toml $TMHOME/config.toml.bak function reset(){ - rm -rf $HOME/.tendermint/data - tendermint unsafe_reset_priv_validator + tendermint unsafe_reset_all + cp $TMHOME/config.toml.bak $TMHOME/config.toml } reset # empty block function empty_block(){ -tendermint node --proxy_app=dummy &> /dev/null & +tendermint node --proxy_app=persistent_dummy &> /dev/null & sleep 5 killall tendermint -# /q would print up to and including the match, then quit. -# /Q doesn't include the match. +# /q would print up to and including the match, then quit. +# /Q doesn't include the match. 
# http://unix.stackexchange.com/questions/11305/grep-show-all-the-file-up-to-the-match sed '/ENDHEIGHT: 1/Q' ~/.tendermint/data/cs.wal/wal > consensus/test_data/empty_block.cswal @@ -36,7 +43,7 @@ reset function many_blocks(){ bash scripts/txs/random.sh 1000 36657 &> /dev/null & PID=$! -tendermint node --proxy_app=dummy &> /dev/null & +tendermint node --proxy_app=persistent_dummy &> /dev/null & sleep 7 killall tendermint kill -9 $PID @@ -51,7 +58,7 @@ reset function small_block1(){ bash scripts/txs/random.sh 1000 36657 &> /dev/null & PID=$! -tendermint node --proxy_app=dummy &> /dev/null & +tendermint node --proxy_app=persistent_dummy &> /dev/null & sleep 10 killall tendermint kill -9 $PID @@ -68,7 +75,7 @@ echo "" >> ~/.tendermint/config.toml echo "block_part_size = 512" >> ~/.tendermint/config.toml bash scripts/txs/random.sh 1000 36657 &> /dev/null & PID=$! -tendermint node --proxy_app=dummy &> /dev/null & +tendermint node --proxy_app=persistent_dummy &> /dev/null & sleep 5 killall tendermint kill -9 $PID @@ -80,7 +87,7 @@ reset -case "$1" in +case "$1" in "small_block1") small_block1 ;; diff --git a/consensus/test_data/empty_block.cswal b/consensus/test_data/empty_block.cswal index aa5b232c9..a7e5e79e4 100644 --- a/consensus/test_data/empty_block.cswal +++ b/consensus/test_data/empty_block.cswal @@ -1,10 +1,10 @@ #ENDHEIGHT: 0 -{"time":"2016-12-18T05:05:33.502Z","msg":[3,{"duration":974084551,"height":1,"round":0,"step":1}]} -{"time":"2016-12-18T05:05:33.505Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPropose"}]} -{"time":"2016-12-18T05:05:33.505Z","msg":[2,{"msg":[17,{"Proposal":{"height":1,"round":0,"block_parts_header":{"total":1,"hash":"71D2DA2336A9F84C22A28FF6C67F35F3478FC0AF"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"62C0F2BCCB491399EEDAF8E85837ADDD4E25BAB7A84BFC4F0E88594531FBC6D4755DEC7E6427F04AD7EB8BB89502762AB4380C7BBA93A4C297E6180EC78E3504"]}}],"peer_key":""}]} 
-{"time":"2016-12-18T05:05:33.506Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010114914148D83E0DC00000000000000114354594CBFC1A7BCA1AD0050ED6AA010023EADA390001000100000000","proof":{"aunts":[]}}}],"peer_key":""}]} -{"time":"2016-12-18T05:05:33.508Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrevote"}]} -{"time":"2016-12-18T05:05:33.508Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":1,"block_id":{"hash":"3E83DF89A01C5F104912E095F32451C202F34717","parts":{"total":1,"hash":"71D2DA2336A9F84C22A28FF6C67F35F3478FC0AF"}},"signature":[1,"B64D0BB64B2E9AAFDD4EBEA679644F77AE774D69E3E2E1B042AB15FE4F84B1427AC6C8A25AFF58EA22011AE567FEA49D2EE7354382E915AD85BF40C58FA6130C"]}}],"peer_key":""}]} -{"time":"2016-12-18T05:05:33.509Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrecommit"}]} -{"time":"2016-12-18T05:05:33.509Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":2,"block_id":{"hash":"3E83DF89A01C5F104912E095F32451C202F34717","parts":{"total":1,"hash":"71D2DA2336A9F84C22A28FF6C67F35F3478FC0AF"}},"signature":[1,"D83E968392D1BF09821E0D05079DAB5491CABD89BE128BD1CF573ED87148BA84667A56C0A069EFC90760F25EDAC62BC324DBB12EA63F44E6CB2D3500FE5E640F"]}}],"peer_key":""}]} -{"time":"2016-12-18T05:05:33.509Z","msg":[1,{"height":1,"round":0,"step":"RoundStepCommit"}]} +{"time":"2017-04-27T22:24:01.346Z","msg":[3,{"duration":972946821,"height":1,"round":0,"step":1}]} +{"time":"2017-04-27T22:24:01.349Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPropose"}]} 
+{"time":"2017-04-27T22:24:01.349Z","msg":[2,{"msg":[17,{"Proposal":{"height":1,"round":0,"block_parts_header":{"total":1,"hash":"ACED4A95DDEBD24E66A681F7EAB4CA22C4B8546D"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"E785764AED6D92D7CC65C0A3A4ED9C8465198A05142C3E6C7F3EF601FDCD3A604900B77B7B87C046221EF99FD038A960398385BD5BBAA50EE4F86DE757B8F704"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:01.350Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010114B96165CF4496C00000000000000114354594CBFC1A7BCA1AD0050ED6AA010023EADA390001000100000000","proof":{"aunts":[]}}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:01.351Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrevote"}]} +{"time":"2017-04-27T22:24:01.351Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":1,"block_id":{"hash":"F3BBFBE7E4A5D619E2C498C3D1B912883786DD71","parts":{"total":1,"hash":"ACED4A95DDEBD24E66A681F7EAB4CA22C4B8546D"}},"signature":[1,"35C937C78D061ECDC3770982A1330C9AA7F6FEF00835C43DEB50B8FCF69A3EEF221E675EE5E469114F64E4FBBABA414EB9170E1025FC47D3F0EADE46767D2E00"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:01.352Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrecommit"}]} +{"time":"2017-04-27T22:24:01.352Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":2,"block_id":{"hash":"F3BBFBE7E4A5D619E2C498C3D1B912883786DD71","parts":{"total":1,"hash":"ACED4A95DDEBD24E66A681F7EAB4CA22C4B8546D"}},"signature":[1,"D1A7D27FCD5D352F3A3EDA8DE368520BC5B796662E32BCD8D91CDB8209A88DAF37CB7C4C93143D3C12B37C1435229268098CFFD0AD1400D88DA7606454692301"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:01.352Z","msg":[1,{"height":1,"round":0,"step":"RoundStepCommit"}]} diff --git a/consensus/test_data/many_blocks.cswal 
b/consensus/test_data/many_blocks.cswal index fd103cb1e..9ceee2cfd 100644 --- a/consensus/test_data/many_blocks.cswal +++ b/consensus/test_data/many_blocks.cswal @@ -1,65 +1,65 @@ #ENDHEIGHT: 0 -{"time":"2017-02-17T23:54:19.013Z","msg":[3,{"duration":969121813,"height":1,"round":0,"step":1}]} -{"time":"2017-02-17T23:54:19.014Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPropose"}]} -{"time":"2017-02-17T23:54:19.014Z","msg":[2,{"msg":[17,{"Proposal":{"height":1,"round":0,"block_parts_header":{"total":1,"hash":"2E32C8D500E936D27A47FCE3FF4BE7C1AFB3FAE1"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"105A5A834E9AE2FA2191CAB5CB20D63594BA7859BD3EB92F055C8A35476D71F0D89F9FD5B0FF030D021533C71A81BF6E8F026BF4A37FC637CF38CA35291A9D00"]}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:19.015Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010114A438480B084B40017600000000011477134726D7D54ABC03516888951EBC652413B20B0114354594CBFC1A7BCA1AD0050ED6AA010023EADA3900010176011A3631363236333634333133313344363436333632363133313331011A3631363236333634333133323344363436333632363133313332011A3631363236333634333133333344363436333632363133313333011A3631363236333634333133343344363436333632363133313334011A3631363236333634333133353344363436333632363133313335011A3631363236333634333133363344363436333632363133313336011A3631363236333634333133373344363436333632363133313337011A3631363236333634333133383344363436333632363133313338011A3631363236333634333133393344363436333632363133313339011A3631363236333634333233303344363436333632363133323330011A3631363236333634333233313344363436333632363133323331011A3631363236333634333233323344363436333632363133323332011A3631363236333634333233333344363436333632363133323333011A3631363236333634333233343344363436333632363133323334011A3631363236333634333233353344363436333632363133323335011A3631363236333634333233363344363436333632363133323336011A363136323633363433323337334
4363436333632363133323337011A3631363236333634333233383344363436333632363133323338011A3631363236333634333233393344363436333632363133323339011A3631363236333634333333303344363436333632363133333330011A3631363236333634333333313344363436333632363133333331011A3631363236333634333333323344363436333632363133333332011A3631363236333634333333333344363436333632363133333333011A3631363236333634333333343344363436333632363133333334011A3631363236333634333333353344363436333632363133333335011A3631363236333634333333363344363436333632363133333336011A3631363236333634333333373344363436333632363133333337011A3631363236333634333333383344363436333632363133333338011A3631363236333634333333393344363436333632363133333339011A3631363236333634333433303344363436333632363133343330011A3631363236333634333433313344363436333632363133343331011A3631363236333634333433323344363436333632363133343332011A3631363236333634333433333344363436333632363133343333011A3631363236333634333433343344363436333632363133343334011A3631363236333634333433353344363436333632363133343335011A3631363236333634333433363344363436333632363133343336011A3631363236333634333433373344363436333632363133343337011A3631363236333634333433383344363436333632363133343338011A3631363236333634333433393344363436333632363133343339011A3631363236333634333533303344363436333632363133353330011A3631363236333634333533313344363436333632363133353331011A3631363236333634333533323344363436333632363133353332011A3631363236333634333533333344363436333632363133353333011A3631363236333634333533343344363436333632363133353334011A3631363236333634333533353344363436333632363133353335011A3631363236333634333533363344363436333632363133353336011A3631363236333634333533373344363436333632363133353337011A3631363236333634333533383344363436333632363133353338011A3631363236333634333533393344363436333632363133353339011A3631363236333634333633303344363436333632363133363330011A3631363236333634333633313344363436333632363133363331011A3631363236333634333633323344363436333632363133363332011A36313632363
33634333633333344363436333632363133363333011A3631363236333634333633343344363436333632363133363334011A3631363236333634333633353344363436333632363133363335011A3631363236333634333633363344363436333632363133363336011A3631363236333634333633373344363436333632363133363337011A3631363236333634333633383344363436333632363133363338011A3631363236333634333633393344363436333632363133363339011A3631363236333634333733303344363436333632363133373330011A3631363236333634333733313344363436333632363133373331011A3631363236333634333733323344363436333632363133373332011A3631363236333634333733333344363436333632363133373333011A3631363236333634333733343344363436333632363133373334011A3631363236333634333733353344363436333632363133373335011A3631363236333634333733363344363436333632363133373336011A3631363236333634333733373344363436333632363133373337011A3631363236333634333733383344363436333632363133373338011A3631363236333634333733393344363436333632363133373339011A3631363236333634333833303344363436333632363133383330011A3631363236333634333833313344363436333632363133383331011A3631363236333634333833323344363436333632363133383332011A3631363236333634333833333344363436333632363133383333011A3631363236333634333833343344363436333632363133383334011A3631363236333634333833353344363436333632363133383335011A3631363236333634333833363344363436333632363133383336011A3631363236333634333833373344363436333632363133383337011A3631363236333634333833383344363436333632363133383338011A3631363236333634333833393344363436333632363133383339011A3631363236333634333933303344363436333632363133393330011A3631363236333634333933313344363436333632363133393331011A3631363236333634333933323344363436333632363133393332011A3631363236333634333933333344363436333632363133393333011A3631363236333634333933343344363436333632363133393334011A3631363236333634333933353344363436333632363133393335011A3631363236333634333933363344363436333632363133393336011A3631363236333634333933373344363436333632363133393337011A363136323633363433393338334436343633363236313339333
8011A3631363236333634333933393344363436333632363133393339011E363136323633363433313330333033443634363336323631333133303330011E363136323633363433313330333133443634363336323631333133303331011E363136323633363433313330333233443634363336323631333133303332011E363136323633363433313330333333443634363336323631333133303333011E363136323633363433313330333433443634363336323631333133303334011E363136323633363433313330333533443634363336323631333133303335011E363136323633363433313330333633443634363336323631333133303336011E363136323633363433313330333733443634363336323631333133303337011E363136323633363433313330333833443634363336323631333133303338011E363136323633363433313330333933443634363336323631333133303339011E363136323633363433313331333033443634363336323631333133313330011E363136323633363433313331333133443634363336323631333133313331011E363136323633363433313331333233443634363336323631333133313332011E363136323633363433313331333333443634363336323631333133313333011E363136323633363433313331333433443634363336323631333133313334011E363136323633363433313331333533443634363336323631333133313335011E363136323633363433313331333633443634363336323631333133313336011E363136323633363433313331333733443634363336323631333133313337011E363136323633363433313331333833443634363336323631333133313338011E363136323633363433313331333933443634363336323631333133313339011E363136323633363433313332333033443634363336323631333133323330011E363136323633363433313332333133443634363336323631333133323331011E363136323633363433313332333233443634363336323631333133323332011E363136323633363433313332333333443634363336323631333133323333011E363136323633363433313332333433443634363336323631333133323334011E363136323633363433313332333533443634363336323631333133323335011E363136323633363433313332333633443634363336323631333133323336011E363136323633363433313332333733443634363336323631333133323337011E3631363236333634333133323338334436343633363236313331333233380100000000","proof":{"aunts":[]}}}],"peer_key":""}]} 
-{"time":"2017-02-17T23:54:19.016Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrevote"}]} -{"time":"2017-02-17T23:54:19.016Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":1,"block_id":{"hash":"3F32EE37F9EA674A2173CAD651836A8EE628B5C7","parts":{"total":1,"hash":"2E32C8D500E936D27A47FCE3FF4BE7C1AFB3FAE1"}},"signature":[1,"31851DA0008AFF4223245EDFCCF1AD7BE96F8D66F8BD02D87F06B2F800A9405413861877D08798F0F6297D29936F5380B352C82212D2EC6F0E194A8C22A1EB0E"]}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:19.016Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrecommit"}]} -{"time":"2017-02-17T23:54:19.016Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":2,"block_id":{"hash":"3F32EE37F9EA674A2173CAD651836A8EE628B5C7","parts":{"total":1,"hash":"2E32C8D500E936D27A47FCE3FF4BE7C1AFB3FAE1"}},"signature":[1,"2B1070A5AB9305612A3AE74A8036D82B5E49E0DBBFBC7D723DB985CC8A8E72A52FF8E34D85273FEB8B901945CA541FA5142C3C4D43A04E9205ACECF53FD19B01"]}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:19.017Z","msg":[1,{"height":1,"round":0,"step":"RoundStepCommit"}]} +{"time":"2017-04-27T22:24:06.395Z","msg":[3,{"duration":974643085,"height":1,"round":0,"step":1}]} +{"time":"2017-04-27T22:24:06.397Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPropose"}]} +{"time":"2017-04-27T22:24:06.397Z","msg":[2,{"msg":[17,{"Proposal":{"height":1,"round":0,"block_parts_header":{"total":1,"hash":"218DFAADA5F4A3E0DC3060379E80C45FBB5A0442"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"B361A5602D9C36058CFE7C5F2E1B6E9DA6D1FAD5F1FBD50F03C1DCA3B80F8374018D04B8785AEC8B464E792189F832BFF5C0B82CCE93D6B3F9B04DE8DBCA1D09"]}}],"peer_key":""}]} 
+{"time":"2017-04-27T22:24:06.397Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010114B96166FC26F4C0015F0000000001140830F265BD1DF6413337508DE45DB36552AE9A520114354594CBFC1A7BCA1AD0050ED6AA010023EADA390001015F011636313632363336343339334436343633363236313339011A3631363236333634333133303344363436333632363133313330011A3631363236333634333133313344363436333632363133313331011A3631363236333634333133323344363436333632363133313332011A3631363236333634333133333344363436333632363133313333011A3631363236333634333133343344363436333632363133313334011A3631363236333634333133353344363436333632363133313335011A3631363236333634333133363344363436333632363133313336011A3631363236333634333133373344363436333632363133313337011A3631363236333634333133383344363436333632363133313338011A3631363236333634333133393344363436333632363133313339011A3631363236333634333233303344363436333632363133323330011A3631363236333634333233313344363436333632363133323331011A3631363236333634333233323344363436333632363133323332011A3631363236333634333233333344363436333632363133323333011A3631363236333634333233343344363436333632363133323334011A3631363236333634333233353344363436333632363133323335011A3631363236333634333233363344363436333632363133323336011A3631363236333634333233373344363436333632363133323337011A3631363236333634333233383344363436333632363133323338011A3631363236333634333233393344363436333632363133323339011A3631363236333634333333303344363436333632363133333330011A3631363236333634333333313344363436333632363133333331011A3631363236333634333333323344363436333632363133333332011A3631363236333634333333333344363436333632363133333333011A3631363236333634333333343344363436333632363133333334011A3631363236333634333333353344363436333632363133333335011A3631363236333634333333363344363436333632363133333336011A3631363236333634333333373344363436333632363133333337011A3631363236333634333333383344363436333632363133333338011A36313632363336343333333933443634363336323631333333390
11A3631363236333634333433303344363436333632363133343330011A3631363236333634333433313344363436333632363133343331011A3631363236333634333433323344363436333632363133343332011A3631363236333634333433333344363436333632363133343333011A3631363236333634333433343344363436333632363133343334011A3631363236333634333433353344363436333632363133343335011A3631363236333634333433363344363436333632363133343336011A3631363236333634333433373344363436333632363133343337011A3631363236333634333433383344363436333632363133343338011A3631363236333634333433393344363436333632363133343339011A3631363236333634333533303344363436333632363133353330011A3631363236333634333533313344363436333632363133353331011A3631363236333634333533323344363436333632363133353332011A3631363236333634333533333344363436333632363133353333011A3631363236333634333533343344363436333632363133353334011A3631363236333634333533353344363436333632363133353335011A3631363236333634333533363344363436333632363133353336011A3631363236333634333533373344363436333632363133353337011A3631363236333634333533383344363436333632363133353338011A3631363236333634333533393344363436333632363133353339011A3631363236333634333633303344363436333632363133363330011A3631363236333634333633313344363436333632363133363331011A3631363236333634333633323344363436333632363133363332011A3631363236333634333633333344363436333632363133363333011A3631363236333634333633343344363436333632363133363334011A3631363236333634333633353344363436333632363133363335011A3631363236333634333633363344363436333632363133363336011A3631363236333634333633373344363436333632363133363337011A3631363236333634333633383344363436333632363133363338011A3631363236333634333633393344363436333632363133363339011A3631363236333634333733303344363436333632363133373330011A3631363236333634333733313344363436333632363133373331011A3631363236333634333733323344363436333632363133373332011A3631363236333634333733333344363436333632363133373333011A3631363236333634333733343344363436333632363133373334011A3631363236333634333733353344363436333
632363133373335011A3631363236333634333733363344363436333632363133373336011A3631363236333634333733373344363436333632363133373337011A3631363236333634333733383344363436333632363133373338011A3631363236333634333733393344363436333632363133373339011A3631363236333634333833303344363436333632363133383330011A3631363236333634333833313344363436333632363133383331011A3631363236333634333833323344363436333632363133383332011A3631363236333634333833333344363436333632363133383333011A3631363236333634333833343344363436333632363133383334011A3631363236333634333833353344363436333632363133383335011A3631363236333634333833363344363436333632363133383336011A3631363236333634333833373344363436333632363133383337011A3631363236333634333833383344363436333632363133383338011A3631363236333634333833393344363436333632363133383339011A3631363236333634333933303344363436333632363133393330011A3631363236333634333933313344363436333632363133393331011A3631363236333634333933323344363436333632363133393332011A3631363236333634333933333344363436333632363133393333011A3631363236333634333933343344363436333632363133393334011A3631363236333634333933353344363436333632363133393335011A3631363236333634333933363344363436333632363133393336011A3631363236333634333933373344363436333632363133393337011A3631363236333634333933383344363436333632363133393338011A3631363236333634333933393344363436333632363133393339011E363136323633363433313330333033443634363336323631333133303330011E363136323633363433313330333133443634363336323631333133303331011E363136323633363433313330333233443634363336323631333133303332011E3631363236333634333133303333334436343633363236313331333033330100000000","proof":{"aunts":[]}}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:06.399Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrevote"}]} 
+{"time":"2017-04-27T22:24:06.399Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":1,"block_id":{"hash":"B83D2AF62DBCA00412CD21152645411609DB51D4","parts":{"total":1,"hash":"218DFAADA5F4A3E0DC3060379E80C45FBB5A0442"}},"signature":[1,"50FC03CD17799A619D2AD2AE1004D44DDB774D37D1CBB085240C28E08791FCCC8DBAFACEE4BA967681D331B32E88AC6DE4862DD20D39999C20C6D0DFAF155C02"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:06.400Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrecommit"}]} +{"time":"2017-04-27T22:24:06.400Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":2,"block_id":{"hash":"B83D2AF62DBCA00412CD21152645411609DB51D4","parts":{"total":1,"hash":"218DFAADA5F4A3E0DC3060379E80C45FBB5A0442"}},"signature":[1,"BD9AE0AE7B54DB3C7D4913FFEC54A29E3AC11DFFF77F59CF0411C47ADB863C1C2D44BB3B84EA1093ED01E6C19063294CA3E36D1920CD627A60F934216D2B3B02"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:06.401Z","msg":[1,{"height":1,"round":0,"step":"RoundStepCommit"}]} #ENDHEIGHT: 1 -{"time":"2017-02-17T23:54:19.019Z","msg":[1,{"height":2,"round":0,"step":"RoundStepNewHeight"}]} -{"time":"2017-02-17T23:54:20.017Z","msg":[3,{"duration":998073370,"height":2,"round":0,"step":1}]} -{"time":"2017-02-17T23:54:20.018Z","msg":[1,{"height":2,"round":0,"step":"RoundStepPropose"}]} -{"time":"2017-02-17T23:54:20.018Z","msg":[2,{"msg":[17,{"Proposal":{"height":2,"round":0,"block_parts_header":{"total":1,"hash":"D008E9014CDDEA8EC95E1E99E21333241BD52DFC"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"03E06975CD5A83E2B6AADC82F0C5965BE13CCB589912B7CBEF847BDBED6E8EAEE0901C02FAE8BC96B269C4750E5BA5C351C587537E3C063358A7D769007D8509"]}}],"peer_key":""}]} 
-{"time":"2017-02-17T23:54:20.019Z","msg":[2,{"msg":[19,{"Height":2,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010214A4384846E01E40018101143F32EE37F9EA674A2173CAD651836A8EE628B5C7010101142E32C8D500E936D27A47FCE3FF4BE7C1AFB3FAE101142F7319866C6844639F566F5A201DE2C339EAF8670114A0DC0254E5A6831C03FB4E1BE09CB26AEAC5C73D0114354594CBFC1A7BCA1AD0050ED6AA010023EADA39011402B7F5E97ED0892197CF5779CA6B904B10C87E10010181011E363136323633363433313332333933443634363336323631333133323339011E363136323633363433313333333033443634363336323631333133333330011E363136323633363433313333333133443634363336323631333133333331011E363136323633363433313333333233443634363336323631333133333332011E363136323633363433313333333333443634363336323631333133333333011E363136323633363433313333333433443634363336323631333133333334011E363136323633363433313333333533443634363336323631333133333335011E363136323633363433313333333633443634363336323631333133333336011E363136323633363433313333333733443634363336323631333133333337011E363136323633363433313333333833443634363336323631333133333338011E363136323633363433313333333933443634363336323631333133333339011E363136323633363433313334333033443634363336323631333133343330011E363136323633363433313334333133443634363336323631333133343331011E363136323633363433313334333233443634363336323631333133343332011E363136323633363433313334333333443634363336323631333133343333011E363136323633363433313334333433443634363336323631333133343334011E363136323633363433313334333533443634363336323631333133343335011E363136323633363433313334333633443634363336323631333133343336011E363136323633363433313334333733443634363336323631333133343337011E363136323633363433313334333833443634363336323631333133343338011E363136323633363433313334333933443634363336323631333133343339011E363136323633363433313335333033443634363336323631333133353330011E363136323633363433313335333133443634363336323631333133353331011E363136323633363433313335333233443634363336323631333133353332011E3631363236333634333
13335333333443634363336323631333133353333011E363136323633363433313335333433443634363336323631333133353334011E363136323633363433313335333533443634363336323631333133353335011E363136323633363433313335333633443634363336323631333133353336011E363136323633363433313335333733443634363336323631333133353337011E363136323633363433313335333833443634363336323631333133353338011E363136323633363433313335333933443634363336323631333133353339011E363136323633363433313336333033443634363336323631333133363330011E363136323633363433313336333133443634363336323631333133363331011E363136323633363433313336333233443634363336323631333133363332011E363136323633363433313336333333443634363336323631333133363333011E363136323633363433313336333433443634363336323631333133363334011E363136323633363433313336333533443634363336323631333133363335011E363136323633363433313336333633443634363336323631333133363336011E363136323633363433313336333733443634363336323631333133363337011E363136323633363433313336333833443634363336323631333133363338011E363136323633363433313336333933443634363336323631333133363339011E363136323633363433313337333033443634363336323631333133373330011E363136323633363433313337333133443634363336323631333133373331011E363136323633363433313337333233443634363336323631333133373332011E363136323633363433313337333333443634363336323631333133373333011E363136323633363433313337333433443634363336323631333133373334011E363136323633363433313337333533443634363336323631333133373335011E363136323633363433313337333633443634363336323631333133373336011E363136323633363433313337333733443634363336323631333133373337011E363136323633363433313337333833443634363336323631333133373338011E363136323633363433313337333933443634363336323631333133373339011E363136323633363433313338333033443634363336323631333133383330011E363136323633363433313338333133443634363336323631333133383331011E363136323633363433313338333233443634363336323631333133383332011E363136323633363433313338333333443634363336323631333133383333011E36313632363336343331333833343344363
4363336323631333133383334011E363136323633363433313338333533443634363336323631333133383335011E363136323633363433313338333633443634363336323631333133383336011E363136323633363433313338333733443634363336323631333133383337011E363136323633363433313338333833443634363336323631333133383338011E363136323633363433313338333933443634363336323631333133383339011E363136323633363433313339333033443634363336323631333133393330011E363136323633363433313339333133443634363336323631333133393331011E363136323633363433313339333233443634363336323631333133393332011E363136323633363433313339333333443634363336323631333133393333011E363136323633363433313339333433443634363336323631333133393334011E363136323633363433313339333533443634363336323631333133393335011E363136323633363433313339333633443634363336323631333133393336011E363136323633363433313339333733443634363336323631333133393337011E363136323633363433313339333833443634363336323631333133393338011E363136323633363433313339333933443634363336323631333133393339011E363136323633363433323330333033443634363336323631333233303330011E363136323633363433323330333133443634363336323631333233303331011E363136323633363433323330333233443634363336323631333233303332011E363136323633363433323330333333443634363336323631333233303333011E363136323633363433323330333433443634363336323631333233303334011E363136323633363433323330333533443634363336323631333233303335011E363136323633363433323330333633443634363336323631333233303336011E363136323633363433323330333733443634363336323631333233303337011E363136323633363433323330333833443634363336323631333233303338011E363136323633363433323330333933443634363336323631333233303339011E363136323633363433323331333033443634363336323631333233313330011E363136323633363433323331333133443634363336323631333233313331011E363136323633363433323331333233443634363336323631333233313332011E363136323633363433323331333333443634363336323631333233313333011E363136323633363433323331333433443634363336323631333233313334011E363136323633363433323331333533443634363336323631333
233313335011E363136323633363433323331333633443634363336323631333233313336011E363136323633363433323331333733443634363336323631333233313337011E363136323633363433323331333833443634363336323631333233313338011E363136323633363433323331333933443634363336323631333233313339011E363136323633363433323332333033443634363336323631333233323330011E363136323633363433323332333133443634363336323631333233323331011E363136323633363433323332333233443634363336323631333233323332011E363136323633363433323332333333443634363336323631333233323333011E363136323633363433323332333433443634363336323631333233323334011E363136323633363433323332333533443634363336323631333233323335011E363136323633363433323332333633443634363336323631333233323336011E363136323633363433323332333733443634363336323631333233323337011E363136323633363433323332333833443634363336323631333233323338011E363136323633363433323332333933443634363336323631333233323339011E363136323633363433323333333033443634363336323631333233333330011E363136323633363433323333333133443634363336323631333233333331011E363136323633363433323333333233443634363336323631333233333332011E363136323633363433323333333333443634363336323631333233333333011E363136323633363433323333333433443634363336323631333233333334011E363136323633363433323333333533443634363336323631333233333335011E363136323633363433323333333633443634363336323631333233333336011E363136323633363433323333333733443634363336323631333233333337011E363136323633363433323333333833443634363336323631333233333338011E363136323633363433323333333933443634363336323631333233333339011E363136323633363433323334333033443634363336323631333233343330011E363136323633363433323334333133443634363336323631333233343331011E363136323633363433323334333233443634363336323631333233343332011E363136323633363433323334333333443634363336323631333233343333011E363136323633363433323334333433443634363336323631333233343334011E363136323633363433323334333533443634363336323631333233343335011E363136323633363433323334333633443634363336323631333233343336011E363
136323633363433323334333733443634363336323631333233343337011E363136323633363433323334333833443634363336323631333233343338011E363136323633363433323334333933443634363336323631333233343339011E363136323633363433323335333033443634363336323631333233353330011E363136323633363433323335333133443634363336323631333233353331011E363136323633363433323335333233443634363336323631333233353332011E363136323633363433323335333333443634363336323631333233353333011E363136323633363433323335333433443634363336323631333233353334011E363136323633363433323335333533443634363336323631333233353335011E363136323633363433323335333633443634363336323631333233353336011E3631363236333634333233353337334436343633363236313332333533370101143F32EE37F9EA674A2173CAD651836A8EE628B5C7010101142E32C8D500E936D27A47FCE3FF4BE7C1AFB3FAE10101010114D028C9981F7A87F3093672BF0D5B0E2A1B3ED456000101000201143F32EE37F9EA674A2173CAD651836A8EE628B5C7010101142E32C8D500E936D27A47FCE3FF4BE7C1AFB3FAE1012B1070A5AB9305612A3AE74A8036D82B5E49E0DBBFBC7D723DB985CC8A8E72A52FF8E34D85273FEB8B901945CA541FA5142C3C4D43A04E9205ACECF53FD19B01","proof":{"aunts":[]}}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:20.020Z","msg":[1,{"height":2,"round":0,"step":"RoundStepPrevote"}]} -{"time":"2017-02-17T23:54:20.020Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":2,"round":0,"type":1,"block_id":{"hash":"32310D174A99844713693C9815D2CA660364E028","parts":{"total":1,"hash":"D008E9014CDDEA8EC95E1E99E21333241BD52DFC"}},"signature":[1,"E0289DE621820D9236632B4862BB4D1518A4B194C5AE8194192F375C9A52775A54A7F172A5D7A2014E404A1C3AFA386923E7A20329AFDDFA14655881C04A1A02"]}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:20.021Z","msg":[1,{"height":2,"round":0,"step":"RoundStepPrecommit"}]} 
-{"time":"2017-02-17T23:54:20.021Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":2,"round":0,"type":2,"block_id":{"hash":"32310D174A99844713693C9815D2CA660364E028","parts":{"total":1,"hash":"D008E9014CDDEA8EC95E1E99E21333241BD52DFC"}},"signature":[1,"AA9F03D0707752301D7CBFCF4F0BCDBD666A46C1CAED3910BD64A3C5C2874AAF328172646C951C5E2FD962359C382A3CBBA2C73EC9B533668C6386995B83EC08"]}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:20.022Z","msg":[1,{"height":2,"round":0,"step":"RoundStepCommit"}]} +{"time":"2017-04-27T22:24:06.406Z","msg":[1,{"height":2,"round":0,"step":"RoundStepNewHeight"}]} +{"time":"2017-04-27T22:24:07.401Z","msg":[3,{"duration":994141766,"height":2,"round":0,"step":1}]} +{"time":"2017-04-27T22:24:07.402Z","msg":[1,{"height":2,"round":0,"step":"RoundStepPropose"}]} +{"time":"2017-04-27T22:24:07.403Z","msg":[2,{"msg":[17,{"Proposal":{"height":2,"round":0,"block_parts_header":{"total":1,"hash":"C693ABE80F5819FA5DA4666CBB74B7C865CE3C70"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"C2E469C27D63A29609ABA5D3A74BEC10E6D6CD6310FBFAF50E0989AEBEBD7EF86A7EBEF7028662BA97A8402E899588E047F784D642D19852ED01B46CD8240C01"]}}],"peer_key":""}]} 
+{"time":"2017-04-27T22:24:07.403Z","msg":[2,{"msg":[19,{"Height":2,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010214B96167381D4C4001690114B83D2AF62DBCA00412CD21152645411609DB51D401010114218DFAADA5F4A3E0DC3060379E80C45FBB5A04420114F737D0E2D410564390CFEC68118549248848A5DA011417B6D1FF1D4D334EA682CF10572C9682866F6D840114354594CBFC1A7BCA1AD0050ED6AA010023EADA39011455F568A46B4C2A3816103A3AA8845222C0941FDE010169011E363136323633363433313330333433443634363336323631333133303334011E363136323633363433313330333533443634363336323631333133303335011E363136323633363433313330333633443634363336323631333133303336011E363136323633363433313330333733443634363336323631333133303337011E363136323633363433313330333833443634363336323631333133303338011E363136323633363433313330333933443634363336323631333133303339011E363136323633363433313331333033443634363336323631333133313330011E363136323633363433313331333133443634363336323631333133313331011E363136323633363433313331333233443634363336323631333133313332011E363136323633363433313331333333443634363336323631333133313333011E363136323633363433313331333433443634363336323631333133313334011E363136323633363433313331333533443634363336323631333133313335011E363136323633363433313331333633443634363336323631333133313336011E363136323633363433313331333733443634363336323631333133313337011E363136323633363433313331333833443634363336323631333133313338011E363136323633363433313331333933443634363336323631333133313339011E363136323633363433313332333033443634363336323631333133323330011E363136323633363433313332333133443634363336323631333133323331011E363136323633363433313332333233443634363336323631333133323332011E363136323633363433313332333333443634363336323631333133323333011E363136323633363433313332333433443634363336323631333133323334011E363136323633363433313332333533443634363336323631333133323335011E363136323633363433313332333633443634363336323631333133323336011E363136323633363433313332333733443634363336323631333133323337011E3631363236333634333
13332333833443634363336323631333133323338011E363136323633363433313332333933443634363336323631333133323339011E363136323633363433313333333033443634363336323631333133333330011E363136323633363433313333333133443634363336323631333133333331011E363136323633363433313333333233443634363336323631333133333332011E363136323633363433313333333333443634363336323631333133333333011E363136323633363433313333333433443634363336323631333133333334011E363136323633363433313333333533443634363336323631333133333335011E363136323633363433313333333633443634363336323631333133333336011E363136323633363433313333333733443634363336323631333133333337011E363136323633363433313333333833443634363336323631333133333338011E363136323633363433313333333933443634363336323631333133333339011E363136323633363433313334333033443634363336323631333133343330011E363136323633363433313334333133443634363336323631333133343331011E363136323633363433313334333233443634363336323631333133343332011E363136323633363433313334333333443634363336323631333133343333011E363136323633363433313334333433443634363336323631333133343334011E363136323633363433313334333533443634363336323631333133343335011E363136323633363433313334333633443634363336323631333133343336011E363136323633363433313334333733443634363336323631333133343337011E363136323633363433313334333833443634363336323631333133343338011E363136323633363433313334333933443634363336323631333133343339011E363136323633363433313335333033443634363336323631333133353330011E363136323633363433313335333133443634363336323631333133353331011E363136323633363433313335333233443634363336323631333133353332011E363136323633363433313335333333443634363336323631333133353333011E363136323633363433313335333433443634363336323631333133353334011E363136323633363433313335333533443634363336323631333133353335011E363136323633363433313335333633443634363336323631333133353336011E363136323633363433313335333733443634363336323631333133353337011E363136323633363433313335333833443634363336323631333133353338011E36313632363336343331333533393344363
4363336323631333133353339011E363136323633363433313336333033443634363336323631333133363330011E363136323633363433313336333133443634363336323631333133363331011E363136323633363433313336333233443634363336323631333133363332011E363136323633363433313336333333443634363336323631333133363333011E363136323633363433313336333433443634363336323631333133363334011E363136323633363433313336333533443634363336323631333133363335011E363136323633363433313336333633443634363336323631333133363336011E363136323633363433313336333733443634363336323631333133363337011E363136323633363433313336333833443634363336323631333133363338011E363136323633363433313336333933443634363336323631333133363339011E363136323633363433313337333033443634363336323631333133373330011E363136323633363433313337333133443634363336323631333133373331011E363136323633363433313337333233443634363336323631333133373332011E363136323633363433313337333333443634363336323631333133373333011E363136323633363433313337333433443634363336323631333133373334011E363136323633363433313337333533443634363336323631333133373335011E363136323633363433313337333633443634363336323631333133373336011E363136323633363433313337333733443634363336323631333133373337011E363136323633363433313337333833443634363336323631333133373338011E363136323633363433313337333933443634363336323631333133373339011E363136323633363433313338333033443634363336323631333133383330011E363136323633363433313338333133443634363336323631333133383331011E363136323633363433313338333233443634363336323631333133383332011E363136323633363433313338333333443634363336323631333133383333011E363136323633363433313338333433443634363336323631333133383334011E363136323633363433313338333533443634363336323631333133383335011E363136323633363433313338333633443634363336323631333133383336011E363136323633363433313338333733443634363336323631333133383337011E363136323633363433313338333833443634363336323631333133383338011E363136323633363433313338333933443634363336323631333133383339011E363136323633363433313339333033443634363336323631333
133393330011E363136323633363433313339333133443634363336323631333133393331011E363136323633363433313339333233443634363336323631333133393332011E363136323633363433313339333333443634363336323631333133393333011E363136323633363433313339333433443634363336323631333133393334011E363136323633363433313339333533443634363336323631333133393335011E363136323633363433313339333633443634363336323631333133393336011E363136323633363433313339333733443634363336323631333133393337011E363136323633363433313339333833443634363336323631333133393338011E363136323633363433313339333933443634363336323631333133393339011E363136323633363433323330333033443634363336323631333233303330011E363136323633363433323330333133443634363336323631333233303331011E363136323633363433323330333233443634363336323631333233303332011E363136323633363433323330333333443634363336323631333233303333011E363136323633363433323330333433443634363336323631333233303334011E363136323633363433323330333533443634363336323631333233303335011E363136323633363433323330333633443634363336323631333233303336011E363136323633363433323330333733443634363336323631333233303337011E363136323633363433323330333833443634363336323631333233303338010114B83D2AF62DBCA00412CD21152645411609DB51D401010114218DFAADA5F4A3E0DC3060379E80C45FBB5A04420101010114D028C9981F7A87F3093672BF0D5B0E2A1B3ED45600010100020114B83D2AF62DBCA00412CD21152645411609DB51D401010114218DFAADA5F4A3E0DC3060379E80C45FBB5A044201BD9AE0AE7B54DB3C7D4913FFEC54A29E3AC11DFFF77F59CF0411C47ADB863C1C2D44BB3B84EA1093ED01E6C19063294CA3E36D1920CD627A60F934216D2B3B02","proof":{"aunts":[]}}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:07.405Z","msg":[1,{"height":2,"round":0,"step":"RoundStepPrevote"}]} 
+{"time":"2017-04-27T22:24:07.405Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":2,"round":0,"type":1,"block_id":{"hash":"BDB3BB45AF3B0CD2373E3690A684A00008A65526","parts":{"total":1,"hash":"C693ABE80F5819FA5DA4666CBB74B7C865CE3C70"}},"signature":[1,"FBBDB1360C3A2196A4149AEBB0E23512CF032FE9E98CB18EEA8CC71BB1F5482D4936B97B7D91965D6C00C230A54CA4CE4C6339716550D3B70E2DE79EE8DB0006"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:07.406Z","msg":[1,{"height":2,"round":0,"step":"RoundStepPrecommit"}]} +{"time":"2017-04-27T22:24:07.406Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":2,"round":0,"type":2,"block_id":{"hash":"BDB3BB45AF3B0CD2373E3690A684A00008A65526","parts":{"total":1,"hash":"C693ABE80F5819FA5DA4666CBB74B7C865CE3C70"}},"signature":[1,"84FC5E25CECBCBB85D30E3B8BD6BAD674DB2D452EC53E04BB890C2FDC1DEE8EC4FA00234D64F4CD74DE4574DE7061B9985D4655009410FF40AD7CE7515215004"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:07.407Z","msg":[1,{"height":2,"round":0,"step":"RoundStepCommit"}]} #ENDHEIGHT: 2 -{"time":"2017-02-17T23:54:20.025Z","msg":[1,{"height":3,"round":0,"step":"RoundStepNewHeight"}]} -{"time":"2017-02-17T23:54:21.022Z","msg":[3,{"duration":997103974,"height":3,"round":0,"step":1}]} -{"time":"2017-02-17T23:54:21.024Z","msg":[1,{"height":3,"round":0,"step":"RoundStepPropose"}]} -{"time":"2017-02-17T23:54:21.024Z","msg":[2,{"msg":[17,{"Proposal":{"height":3,"round":0,"block_parts_header":{"total":1,"hash":"2E5DE5777A5AD899CD2531304F42A470509DE989"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"5F6A6A8097BD6A1780568C7E064D932BC1F941E1D5AC408DE970C4EEDCCD939C0F163466D20F0E98A7599792341441422980C09D23E03009BD9CE565673C9704"]}}],"peer_key":""}]} 
-{"time":"2017-02-17T23:54:21.024Z","msg":[2,{"msg":[19,{"Height":3,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010314A4384882C73380017C011432310D174A99844713693C9815D2CA660364E02801010114D008E9014CDDEA8EC95E1E99E21333241BD52DFC0114ABAB9E28967792BDA02172D3CB99DA99A696738E0114513B0891921FB0A5F6471950AFE2598445153CFF0114354594CBFC1A7BCA1AD0050ED6AA010023EADA3901141FBD44B8259B2A6632F08F88BE5EC8C203075CD001017C011E363136323633363433323335333833443634363336323631333233353338011E363136323633363433323335333933443634363336323631333233353339011E363136323633363433323336333033443634363336323631333233363330011E363136323633363433323336333133443634363336323631333233363331011E363136323633363433323336333233443634363336323631333233363332011E363136323633363433323336333333443634363336323631333233363333011E363136323633363433323336333433443634363336323631333233363334011E363136323633363433323336333533443634363336323631333233363335011E363136323633363433323336333633443634363336323631333233363336011E363136323633363433323336333733443634363336323631333233363337011E363136323633363433323336333833443634363336323631333233363338011E363136323633363433323336333933443634363336323631333233363339011E363136323633363433323337333033443634363336323631333233373330011E363136323633363433323337333133443634363336323631333233373331011E363136323633363433323337333233443634363336323631333233373332011E363136323633363433323337333333443634363336323631333233373333011E363136323633363433323337333433443634363336323631333233373334011E363136323633363433323337333533443634363336323631333233373335011E363136323633363433323337333633443634363336323631333233373336011E363136323633363433323337333733443634363336323631333233373337011E363136323633363433323337333833443634363336323631333233373338011E363136323633363433323337333933443634363336323631333233373339011E363136323633363433323338333033443634363336323631333233383330011E363136323633363433323338333133443634363336323631333233383331011E3631363236333634333
23338333233443634363336323631333233383332011E363136323633363433323338333333443634363336323631333233383333011E363136323633363433323338333433443634363336323631333233383334011E363136323633363433323338333533443634363336323631333233383335011E363136323633363433323338333633443634363336323631333233383336011E363136323633363433323338333733443634363336323631333233383337011E363136323633363433323338333833443634363336323631333233383338011E363136323633363433323338333933443634363336323631333233383339011E363136323633363433323339333033443634363336323631333233393330011E363136323633363433323339333133443634363336323631333233393331011E363136323633363433323339333233443634363336323631333233393332011E363136323633363433323339333333443634363336323631333233393333011E363136323633363433323339333433443634363336323631333233393334011E363136323633363433323339333533443634363336323631333233393335011E363136323633363433323339333633443634363336323631333233393336011E363136323633363433323339333733443634363336323631333233393337011E363136323633363433323339333833443634363336323631333233393338011E363136323633363433323339333933443634363336323631333233393339011E363136323633363433333330333033443634363336323631333333303330011E363136323633363433333330333133443634363336323631333333303331011E363136323633363433333330333233443634363336323631333333303332011E363136323633363433333330333333443634363336323631333333303333011E363136323633363433333330333433443634363336323631333333303334011E363136323633363433333330333533443634363336323631333333303335011E363136323633363433333330333633443634363336323631333333303336011E363136323633363433333330333733443634363336323631333333303337011E363136323633363433333330333833443634363336323631333333303338011E363136323633363433333330333933443634363336323631333333303339011E363136323633363433333331333033443634363336323631333333313330011E363136323633363433333331333133443634363336323631333333313331011E363136323633363433333331333233443634363336323631333333313332011E36313632363336343333333133333344363
4363336323631333333313333011E363136323633363433333331333433443634363336323631333333313334011E363136323633363433333331333533443634363336323631333333313335011E363136323633363433333331333633443634363336323631333333313336011E363136323633363433333331333733443634363336323631333333313337011E363136323633363433333331333833443634363336323631333333313338011E363136323633363433333331333933443634363336323631333333313339011E363136323633363433333332333033443634363336323631333333323330011E363136323633363433333332333133443634363336323631333333323331011E363136323633363433333332333233443634363336323631333333323332011E363136323633363433333332333333443634363336323631333333323333011E363136323633363433333332333433443634363336323631333333323334011E363136323633363433333332333533443634363336323631333333323335011E363136323633363433333332333633443634363336323631333333323336011E363136323633363433333332333733443634363336323631333333323337011E363136323633363433333332333833443634363336323631333333323338011E363136323633363433333332333933443634363336323631333333323339011E363136323633363433333333333033443634363336323631333333333330011E363136323633363433333333333133443634363336323631333333333331011E363136323633363433333333333233443634363336323631333333333332011E363136323633363433333333333333443634363336323631333333333333011E363136323633363433333333333433443634363336323631333333333334011E363136323633363433333333333533443634363336323631333333333335011E363136323633363433333333333633443634363336323631333333333336011E363136323633363433333333333733443634363336323631333333333337011E363136323633363433333333333833443634363336323631333333333338011E363136323633363433333333333933443634363336323631333333333339011E363136323633363433333334333033443634363336323631333333343330011E363136323633363433333334333133443634363336323631333333343331011E363136323633363433333334333233443634363336323631333333343332011E363136323633363433333334333333443634363336323631333333343333011E363136323633363433333334333433443634363336323631333
333343334011E363136323633363433333334333533443634363336323631333333343335011E363136323633363433333334333633443634363336323631333333343336011E363136323633363433333334333733443634363336323631333333343337011E363136323633363433333334333833443634363336323631333333343338011E363136323633363433333334333933443634363336323631333333343339011E363136323633363433333335333033443634363336323631333333353330011E363136323633363433333335333133443634363336323631333333353331011E363136323633363433333335333233443634363336323631333333353332011E363136323633363433333335333333443634363336323631333333353333011E363136323633363433333335333433443634363336323631333333353334011E363136323633363433333335333533443634363336323631333333353335011E363136323633363433333335333633443634363336323631333333353336011E363136323633363433333335333733443634363336323631333333353337011E363136323633363433333335333833443634363336323631333333353338011E363136323633363433333335333933443634363336323631333333353339011E363136323633363433333336333033443634363336323631333333363330011E363136323633363433333336333133443634363336323631333333363331011E363136323633363433333336333233443634363336323631333333363332011E363136323633363433333336333333443634363336323631333333363333011E363136323633363433333336333433443634363336323631333333363334011E363136323633363433333336333533443634363336323631333333363335011E363136323633363433333336333633443634363336323631333333363336011E363136323633363433333336333733443634363336323631333333363337011E363136323633363433333336333833443634363336323631333333363338011E363136323633363433333336333933443634363336323631333333363339011E363136323633363433333337333033443634363336323631333333373330011E363136323633363433333337333133443634363336323631333333373331011E363136323633363433333337333233443634363336323631333333373332011E363136323633363433333337333333443634363336323631333333373333011E363136323633363433333337333433443634363336323631333333373334011E363136323633363433333337333533443634363336323631333333373335011E363
136323633363433333337333633443634363336323631333333373336011E363136323633363433333337333733443634363336323631333333373337011E363136323633363433333337333833443634363336323631333333373338011E363136323633363433333337333933443634363336323631333333373339011E363136323633363433333338333033443634363336323631333333383330011E36313632363336343333333833313344363436333632363133333338333101011432310D174A99844713693C9815D2CA660364E02801010114D008E9014CDDEA8EC95E1E99E21333241BD52DFC0101010114D028C9981F7A87F3093672BF0D5B0E2A1B3ED4560001020002011432310D174A99844713693C9815D2CA660364E02801010114D008E9014CDDEA8EC95E1E99E21333241BD52DFC01AA9F03D0707752301D7CBFCF4F0BCDBD666A46C1CAED3910BD64A3C5C2874AAF328172646C951C5E2FD962359C382A3CBBA2C73EC9B533668C6386995B83EC08","proof":{"aunts":[]}}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:21.026Z","msg":[1,{"height":3,"round":0,"step":"RoundStepPrevote"}]} -{"time":"2017-02-17T23:54:21.026Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":3,"round":0,"type":1,"block_id":{"hash":"37AF6866DA8C3167CFC280FAE47B6ED441B00D5B","parts":{"total":1,"hash":"2E5DE5777A5AD899CD2531304F42A470509DE989"}},"signature":[1,"F0AAB604A8CE724453A378BBC66142C418464C3C0EC3EB2E15A1CB7524A92B9F36BE8A191238A4D317F542D999DF698B5C2A28D754240524FF8CCADA0947DE00"]}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:21.028Z","msg":[1,{"height":3,"round":0,"step":"RoundStepPrecommit"}]} -{"time":"2017-02-17T23:54:21.028Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":3,"round":0,"type":2,"block_id":{"hash":"37AF6866DA8C3167CFC280FAE47B6ED441B00D5B","parts":{"total":1,"hash":"2E5DE5777A5AD899CD2531304F42A470509DE989"}},"signature":[1,"C900519E305EC03392E7D197D5FAB535DB240C9C0BA5375A1679C75BAAA07C7410C0EF43CF97D98F2C08A1D739667D5ACFF6233A1FAE75D3DA275AEA422EFD0F"]}}],"peer_key":""}]} 
-{"time":"2017-02-17T23:54:21.028Z","msg":[1,{"height":3,"round":0,"step":"RoundStepCommit"}]} +{"time":"2017-04-27T22:24:07.418Z","msg":[1,{"height":3,"round":0,"step":"RoundStepNewHeight"}]} +{"time":"2017-04-27T22:24:08.407Z","msg":[3,{"duration":988837269,"height":3,"round":0,"step":1}]} +{"time":"2017-04-27T22:24:08.408Z","msg":[1,{"height":3,"round":0,"step":"RoundStepPropose"}]} +{"time":"2017-04-27T22:24:08.408Z","msg":[2,{"msg":[17,{"Proposal":{"height":3,"round":0,"block_parts_header":{"total":1,"hash":"F2291A94D838F8E222FEFA8550B57D64CE1383B1"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"389595237F4AFD5DAADA25139227AA54C38DB653ADAB9062BEF3450DAB9A9FF359776C24A8B983600D83118A74906778331BF783927F2C00BE21FC956633EF04"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:08.408Z","msg":[2,{"msg":[19,{"Height":3,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010314B961677413A3C0015C0114BDB3BB45AF3B0CD2373E3690A684A00008A6552601010114C693ABE80F5819FA5DA4666CBB74B7C865CE3C70011448506FCD69C61430F5E1F7854303F93AE49A63440114EBCA4C0F8D8B0BFB8AB6C6CADCD1ECC6F086B8C90114354594CBFC1A7BCA1AD0050ED6AA010023EADA390114D71F486E836237FDDFFA4EEFDAD138F8298D6FF701015C011E363136323633363433323330333933443634363336323631333233303339011E363136323633363433323331333033443634363336323631333233313330011E363136323633363433323331333133443634363336323631333233313331011E363136323633363433323331333233443634363336323631333233313332011E363136323633363433323331333333443634363336323631333233313333011E363136323633363433323331333433443634363336323631333233313334011E363136323633363433323331333533443634363336323631333233313335011E363136323633363433323331333633443634363336323631333233313336011E363136323633363433323331333733443634363336323631333233313337011E363136323633363433323331333833443634363336323631333233313338011E363136323633363433323331333933443634363336323631333233313339011E36313632363336343332333233303344363436333632363133
3233323330011E363136323633363433323332333133443634363336323631333233323331011E363136323633363433323332333233443634363336323631333233323332011E363136323633363433323332333333443634363336323631333233323333011E363136323633363433323332333433443634363336323631333233323334011E363136323633363433323332333533443634363336323631333233323335011E363136323633363433323332333633443634363336323631333233323336011E363136323633363433323332333733443634363336323631333233323337011E363136323633363433323332333833443634363336323631333233323338011E363136323633363433323332333933443634363336323631333233323339011E363136323633363433323333333033443634363336323631333233333330011E363136323633363433323333333133443634363336323631333233333331011E363136323633363433323333333233443634363336323631333233333332011E363136323633363433323333333333443634363336323631333233333333011E363136323633363433323333333433443634363336323631333233333334011E363136323633363433323333333533443634363336323631333233333335011E363136323633363433323333333633443634363336323631333233333336011E363136323633363433323333333733443634363336323631333233333337011E363136323633363433323333333833443634363336323631333233333338011E363136323633363433323333333933443634363336323631333233333339011E363136323633363433323334333033443634363336323631333233343330011E363136323633363433323334333133443634363336323631333233343331011E363136323633363433323334333233443634363336323631333233343332011E363136323633363433323334333333443634363336323631333233343333011E363136323633363433323334333433443634363336323631333233343334011E363136323633363433323334333533443634363336323631333233343335011E363136323633363433323334333633443634363336323631333233343336011E363136323633363433323334333733443634363336323631333233343337011E363136323633363433323334333833443634363336323631333233343338011E363136323633363433323334333933443634363336323631333233343339011E363136323633363433323335333033443634363336323631333233353330011E363136323633363433323335333133443634363336323631333233353331011E36
3136323633363433323335333233443634363336323631333233353332011E363136323633363433323335333333443634363336323631333233353333011E363136323633363433323335333433443634363336323631333233353334011E363136323633363433323335333533443634363336323631333233353335011E363136323633363433323335333633443634363336323631333233353336011E363136323633363433323335333733443634363336323631333233353337011E363136323633363433323335333833443634363336323631333233353338011E363136323633363433323335333933443634363336323631333233353339011E363136323633363433323336333033443634363336323631333233363330011E363136323633363433323336333133443634363336323631333233363331011E363136323633363433323336333233443634363336323631333233363332011E363136323633363433323336333333443634363336323631333233363333011E363136323633363433323336333433443634363336323631333233363334011E363136323633363433323336333533443634363336323631333233363335011E363136323633363433323336333633443634363336323631333233363336011E363136323633363433323336333733443634363336323631333233363337011E363136323633363433323336333833443634363336323631333233363338011E363136323633363433323336333933443634363336323631333233363339011E363136323633363433323337333033443634363336323631333233373330011E363136323633363433323337333133443634363336323631333233373331011E363136323633363433323337333233443634363336323631333233373332011E363136323633363433323337333333443634363336323631333233373333011E363136323633363433323337333433443634363336323631333233373334011E363136323633363433323337333533443634363336323631333233373335011E363136323633363433323337333633443634363336323631333233373336011E363136323633363433323337333733443634363336323631333233373337011E363136323633363433323337333833443634363336323631333233373338011E363136323633363433323337333933443634363336323631333233373339011E363136323633363433323338333033443634363336323631333233383330011E363136323633363433323338333133443634363336323631333233383331011E363136323633363433323338333233443634363336323631333233383332011E363136323633363433
323338333333443634363336323631333233383333011E363136323633363433323338333433443634363336323631333233383334011E363136323633363433323338333533443634363336323631333233383335011E363136323633363433323338333633443634363336323631333233383336011E363136323633363433323338333733443634363336323631333233383337011E363136323633363433323338333833443634363336323631333233383338011E363136323633363433323338333933443634363336323631333233383339011E363136323633363433323339333033443634363336323631333233393330011E363136323633363433323339333133443634363336323631333233393331011E363136323633363433323339333233443634363336323631333233393332011E363136323633363433323339333333443634363336323631333233393333011E363136323633363433323339333433443634363336323631333233393334011E363136323633363433323339333533443634363336323631333233393335011E363136323633363433323339333633443634363336323631333233393336011E363136323633363433323339333733443634363336323631333233393337011E363136323633363433323339333833443634363336323631333233393338011E363136323633363433323339333933443634363336323631333233393339011E363136323633363433333330333033443634363336323631333333303330010114BDB3BB45AF3B0CD2373E3690A684A00008A6552601010114C693ABE80F5819FA5DA4666CBB74B7C865CE3C700101010114D028C9981F7A87F3093672BF0D5B0E2A1B3ED45600010200020114BDB3BB45AF3B0CD2373E3690A684A00008A6552601010114C693ABE80F5819FA5DA4666CBB74B7C865CE3C700184FC5E25CECBCBB85D30E3B8BD6BAD674DB2D452EC53E04BB890C2FDC1DEE8EC4FA00234D64F4CD74DE4574DE7061B9985D4655009410FF40AD7CE7515215004","proof":{"aunts":[]}}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:08.409Z","msg":[1,{"height":3,"round":0,"step":"RoundStepPrevote"}]} 
+{"time":"2017-04-27T22:24:08.409Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":3,"round":0,"type":1,"block_id":{"hash":"A3B20D0D88B8A58ABC78F145D3A499C8FA178377","parts":{"total":1,"hash":"F2291A94D838F8E222FEFA8550B57D64CE1383B1"}},"signature":[1,"BB4CC7A065FE45E9C65ECD4EC8A3A6ED8873128AA2A386B359782EEAAFD58832261E5F365D31428D24DDC311972A7C8410570357D52E23E68F55B821B136D204"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:08.410Z","msg":[1,{"height":3,"round":0,"step":"RoundStepPrecommit"}]} +{"time":"2017-04-27T22:24:08.410Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":3,"round":0,"type":2,"block_id":{"hash":"A3B20D0D88B8A58ABC78F145D3A499C8FA178377","parts":{"total":1,"hash":"F2291A94D838F8E222FEFA8550B57D64CE1383B1"}},"signature":[1,"280EE69579077AAD01C686F04FEFE839D5025DF5CFC646DDD177632966E16F0CB59E9C26ED850D8F8E6EDE4D0E7A1CC13AF418608F77C5E6B55B3966E8A69D02"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:08.410Z","msg":[1,{"height":3,"round":0,"step":"RoundStepCommit"}]} #ENDHEIGHT: 3 -{"time":"2017-02-17T23:54:21.032Z","msg":[1,{"height":4,"round":0,"step":"RoundStepNewHeight"}]} -{"time":"2017-02-17T23:54:22.028Z","msg":[3,{"duration":996302067,"height":4,"round":0,"step":1}]} -{"time":"2017-02-17T23:54:22.030Z","msg":[1,{"height":4,"round":0,"step":"RoundStepPropose"}]} -{"time":"2017-02-17T23:54:22.030Z","msg":[2,{"msg":[17,{"Proposal":{"height":4,"round":0,"block_parts_header":{"total":1,"hash":"24CEBCBEB833F56D47AD14354071B3B7A243068A"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"CAECE2342987295CCB562C9B6AB0E296D0ECDBE0B40CDB5260B32DF07E07E7F30C4E815B76BC04B8E830143409E598F7BA24699F5B5A01A6237221C948A0920C"]}}],"peer_key":""}]} 
-{"time":"2017-02-17T23:54:22.030Z","msg":[2,{"msg":[19,{"Height":4,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010414A43848BECCCD400169011437AF6866DA8C3167CFC280FAE47B6ED441B00D5B010101142E5DE5777A5AD899CD2531304F42A470509DE9890114EA4CCD80AE261EF694AF4292F8AF5659FB66317301144FFC4AAC3EDFBB81DA9E9DDEC9D6A205AD49049E0114354594CBFC1A7BCA1AD0050ED6AA010023EADA390114E7563C252781F893D7A191A31566713888CCA3B2010169011E363136323633363433333338333233443634363336323631333333383332011E363136323633363433333338333333443634363336323631333333383333011E363136323633363433333338333433443634363336323631333333383334011E363136323633363433333338333533443634363336323631333333383335011E363136323633363433333338333633443634363336323631333333383336011E363136323633363433333338333733443634363336323631333333383337011E363136323633363433333338333833443634363336323631333333383338011E363136323633363433333338333933443634363336323631333333383339011E363136323633363433333339333033443634363336323631333333393330011E363136323633363433333339333133443634363336323631333333393331011E363136323633363433333339333233443634363336323631333333393332011E363136323633363433333339333333443634363336323631333333393333011E363136323633363433333339333433443634363336323631333333393334011E363136323633363433333339333533443634363336323631333333393335011E363136323633363433333339333633443634363336323631333333393336011E363136323633363433333339333733443634363336323631333333393337011E363136323633363433333339333833443634363336323631333333393338011E363136323633363433333339333933443634363336323631333333393339011E363136323633363433343330333033443634363336323631333433303330011E363136323633363433343330333133443634363336323631333433303331011E363136323633363433343330333233443634363336323631333433303332011E363136323633363433343330333333443634363336323631333433303333011E363136323633363433343330333433443634363336323631333433303334011E363136323633363433343330333533443634363336323631333433303335011E3631363236333634333
43330333633443634363336323631333433303336011E363136323633363433343330333733443634363336323631333433303337011E363136323633363433343330333833443634363336323631333433303338011E363136323633363433343330333933443634363336323631333433303339011E363136323633363433343331333033443634363336323631333433313330011E363136323633363433343331333133443634363336323631333433313331011E363136323633363433343331333233443634363336323631333433313332011E363136323633363433343331333333443634363336323631333433313333011E363136323633363433343331333433443634363336323631333433313334011E363136323633363433343331333533443634363336323631333433313335011E363136323633363433343331333633443634363336323631333433313336011E363136323633363433343331333733443634363336323631333433313337011E363136323633363433343331333833443634363336323631333433313338011E363136323633363433343331333933443634363336323631333433313339011E363136323633363433343332333033443634363336323631333433323330011E363136323633363433343332333133443634363336323631333433323331011E363136323633363433343332333233443634363336323631333433323332011E363136323633363433343332333333443634363336323631333433323333011E363136323633363433343332333433443634363336323631333433323334011E363136323633363433343332333533443634363336323631333433323335011E363136323633363433343332333633443634363336323631333433323336011E363136323633363433343332333733443634363336323631333433323337011E363136323633363433343332333833443634363336323631333433323338011E363136323633363433343332333933443634363336323631333433323339011E363136323633363433343333333033443634363336323631333433333330011E363136323633363433343333333133443634363336323631333433333331011E363136323633363433343333333233443634363336323631333433333332011E363136323633363433343333333333443634363336323631333433333333011E363136323633363433343333333433443634363336323631333433333334011E363136323633363433343333333533443634363336323631333433333335011E363136323633363433343333333633443634363336323631333433333336011E36313632363336343334333333373344363
4363336323631333433333337011E363136323633363433343333333833443634363336323631333433333338011E363136323633363433343333333933443634363336323631333433333339011E363136323633363433343334333033443634363336323631333433343330011E363136323633363433343334333133443634363336323631333433343331011E363136323633363433343334333233443634363336323631333433343332011E363136323633363433343334333333443634363336323631333433343333011E363136323633363433343334333433443634363336323631333433343334011E363136323633363433343334333533443634363336323631333433343335011E363136323633363433343334333633443634363336323631333433343336011E363136323633363433343334333733443634363336323631333433343337011E363136323633363433343334333833443634363336323631333433343338011E363136323633363433343334333933443634363336323631333433343339011E363136323633363433343335333033443634363336323631333433353330011E363136323633363433343335333133443634363336323631333433353331011E363136323633363433343335333233443634363336323631333433353332011E363136323633363433343335333333443634363336323631333433353333011E363136323633363433343335333433443634363336323631333433353334011E363136323633363433343335333533443634363336323631333433353335011E363136323633363433343335333633443634363336323631333433353336011E363136323633363433343335333733443634363336323631333433353337011E363136323633363433343335333833443634363336323631333433353338011E363136323633363433343335333933443634363336323631333433353339011E363136323633363433343336333033443634363336323631333433363330011E363136323633363433343336333133443634363336323631333433363331011E363136323633363433343336333233443634363336323631333433363332011E363136323633363433343336333333443634363336323631333433363333011E363136323633363433343336333433443634363336323631333433363334011E363136323633363433343336333533443634363336323631333433363335011E363136323633363433343336333633443634363336323631333433363336011E363136323633363433343336333733443634363336323631333433363337011E363136323633363433343336333833443634363336323631333
433363338011E363136323633363433343336333933443634363336323631333433363339011E363136323633363433343337333033443634363336323631333433373330011E363136323633363433343337333133443634363336323631333433373331011E363136323633363433343337333233443634363336323631333433373332011E363136323633363433343337333333443634363336323631333433373333011E363136323633363433343337333433443634363336323631333433373334011E363136323633363433343337333533443634363336323631333433373335011E363136323633363433343337333633443634363336323631333433373336011E363136323633363433343337333733443634363336323631333433373337011E363136323633363433343337333833443634363336323631333433373338011E363136323633363433343337333933443634363336323631333433373339011E363136323633363433343338333033443634363336323631333433383330011E363136323633363433343338333133443634363336323631333433383331011E363136323633363433343338333233443634363336323631333433383332011E363136323633363433343338333333443634363336323631333433383333011E363136323633363433343338333433443634363336323631333433383334011E363136323633363433343338333533443634363336323631333433383335011E36313632363336343334333833363344363436333632363133343338333601011437AF6866DA8C3167CFC280FAE47B6ED441B00D5B010101142E5DE5777A5AD899CD2531304F42A470509DE9890101010114D028C9981F7A87F3093672BF0D5B0E2A1B3ED4560001030002011437AF6866DA8C3167CFC280FAE47B6ED441B00D5B010101142E5DE5777A5AD899CD2531304F42A470509DE98901C900519E305EC03392E7D197D5FAB535DB240C9C0BA5375A1679C75BAAA07C7410C0EF43CF97D98F2C08A1D739667D5ACFF6233A1FAE75D3DA275AEA422EFD0F","proof":{"aunts":[]}}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:22.032Z","msg":[1,{"height":4,"round":0,"step":"RoundStepPrevote"}]} 
-{"time":"2017-02-17T23:54:22.032Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":4,"round":0,"type":1,"block_id":{"hash":"04715E223BF4327FFA9B0D5AD849B74A099D5DEC","parts":{"total":1,"hash":"24CEBCBEB833F56D47AD14354071B3B7A243068A"}},"signature":[1,"B1BFF3641FE1931C78A792540384B9D4CFC3D9008FD4988B24FAD872326C2A380A34F37610C6E076FA5B4DB9E4B3166B703B0429AF0BF1ABCCDB7B2EDB3C8F08"]}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:22.033Z","msg":[1,{"height":4,"round":0,"step":"RoundStepPrecommit"}]} -{"time":"2017-02-17T23:54:22.033Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":4,"round":0,"type":2,"block_id":{"hash":"04715E223BF4327FFA9B0D5AD849B74A099D5DEC","parts":{"total":1,"hash":"24CEBCBEB833F56D47AD14354071B3B7A243068A"}},"signature":[1,"F544743F17479A61F94B0F68C63D254BD60493D78E818D48A5859133619AEE5E92C47CAD89C654DF64E0911C3152091E047555D5F14655D95B9681AE9B336505"]}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:22.034Z","msg":[1,{"height":4,"round":0,"step":"RoundStepCommit"}]} +{"time":"2017-04-27T22:24:08.416Z","msg":[1,{"height":4,"round":0,"step":"RoundStepNewHeight"}]} +{"time":"2017-04-27T22:24:09.410Z","msg":[3,{"duration":994206152,"height":4,"round":0,"step":1}]} +{"time":"2017-04-27T22:24:09.412Z","msg":[1,{"height":4,"round":0,"step":"RoundStepPropose"}]} +{"time":"2017-04-27T22:24:09.412Z","msg":[2,{"msg":[17,{"Proposal":{"height":4,"round":0,"block_parts_header":{"total":1,"hash":"ACB46B8913F0203EB579A9977AF6AF85B6E89705"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"7DA37CE6B6BA2FE24C702D00B13B2A15505B0E364FD0824ED16EFD38B196381E9CA1F7BFAC649A978C0851D2DB4CD146FA0E43E5F1D2EAF21E5B58335840200E"]}}],"peer_key":""}]} 
+{"time":"2017-04-27T22:24:09.412Z","msg":[2,{"msg":[19,{"Height":4,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010414B96167AFDC3480016E0114A3B20D0D88B8A58ABC78F145D3A499C8FA17837701010114F2291A94D838F8E222FEFA8550B57D64CE1383B101142B23CB9F3751854AB6B5D0DDF931F437274D25CD011450FE04FF5EE5EB1132C047965AE65A4BA54ED79E0114354594CBFC1A7BCA1AD0050ED6AA010023EADA3901148A5CA2966F0F895BF9664CB052DEA4B76DE51FA101016E011E363136323633363433333330333133443634363336323631333333303331011E363136323633363433333330333233443634363336323631333333303332011E363136323633363433333330333333443634363336323631333333303333011E363136323633363433333330333433443634363336323631333333303334011E363136323633363433333330333533443634363336323631333333303335011E363136323633363433333330333633443634363336323631333333303336011E363136323633363433333330333733443634363336323631333333303337011E363136323633363433333330333833443634363336323631333333303338011E363136323633363433333330333933443634363336323631333333303339011E363136323633363433333331333033443634363336323631333333313330011E363136323633363433333331333133443634363336323631333333313331011E363136323633363433333331333233443634363336323631333333313332011E363136323633363433333331333333443634363336323631333333313333011E363136323633363433333331333433443634363336323631333333313334011E363136323633363433333331333533443634363336323631333333313335011E363136323633363433333331333633443634363336323631333333313336011E363136323633363433333331333733443634363336323631333333313337011E363136323633363433333331333833443634363336323631333333313338011E363136323633363433333331333933443634363336323631333333313339011E363136323633363433333332333033443634363336323631333333323330011E363136323633363433333332333133443634363336323631333333323331011E363136323633363433333332333233443634363336323631333333323332011E363136323633363433333332333333443634363336323631333333323333011E363136323633363433333332333433443634363336323631333333323334011E3631363236333634333
33332333533443634363336323631333333323335011E363136323633363433333332333633443634363336323631333333323336011E363136323633363433333332333733443634363336323631333333323337011E363136323633363433333332333833443634363336323631333333323338011E363136323633363433333332333933443634363336323631333333323339011E363136323633363433333333333033443634363336323631333333333330011E363136323633363433333333333133443634363336323631333333333331011E363136323633363433333333333233443634363336323631333333333332011E363136323633363433333333333333443634363336323631333333333333011E363136323633363433333333333433443634363336323631333333333334011E363136323633363433333333333533443634363336323631333333333335011E363136323633363433333333333633443634363336323631333333333336011E363136323633363433333333333733443634363336323631333333333337011E363136323633363433333333333833443634363336323631333333333338011E363136323633363433333333333933443634363336323631333333333339011E363136323633363433333334333033443634363336323631333333343330011E363136323633363433333334333133443634363336323631333333343331011E363136323633363433333334333233443634363336323631333333343332011E363136323633363433333334333333443634363336323631333333343333011E363136323633363433333334333433443634363336323631333333343334011E363136323633363433333334333533443634363336323631333333343335011E363136323633363433333334333633443634363336323631333333343336011E363136323633363433333334333733443634363336323631333333343337011E363136323633363433333334333833443634363336323631333333343338011E363136323633363433333334333933443634363336323631333333343339011E363136323633363433333335333033443634363336323631333333353330011E363136323633363433333335333133443634363336323631333333353331011E363136323633363433333335333233443634363336323631333333353332011E363136323633363433333335333333443634363336323631333333353333011E363136323633363433333335333433443634363336323631333333353334011E363136323633363433333335333533443634363336323631333333353335011E36313632363336343333333533363344363
4363336323631333333353336011E363136323633363433333335333733443634363336323631333333353337011E363136323633363433333335333833443634363336323631333333353338011E363136323633363433333335333933443634363336323631333333353339011E363136323633363433333336333033443634363336323631333333363330011E363136323633363433333336333133443634363336323631333333363331011E363136323633363433333336333233443634363336323631333333363332011E363136323633363433333336333333443634363336323631333333363333011E363136323633363433333336333433443634363336323631333333363334011E363136323633363433333336333533443634363336323631333333363335011E363136323633363433333336333633443634363336323631333333363336011E363136323633363433333336333733443634363336323631333333363337011E363136323633363433333336333833443634363336323631333333363338011E363136323633363433333336333933443634363336323631333333363339011E363136323633363433333337333033443634363336323631333333373330011E363136323633363433333337333133443634363336323631333333373331011E363136323633363433333337333233443634363336323631333333373332011E363136323633363433333337333333443634363336323631333333373333011E363136323633363433333337333433443634363336323631333333373334011E363136323633363433333337333533443634363336323631333333373335011E363136323633363433333337333633443634363336323631333333373336011E363136323633363433333337333733443634363336323631333333373337011E363136323633363433333337333833443634363336323631333333373338011E363136323633363433333337333933443634363336323631333333373339011E363136323633363433333338333033443634363336323631333333383330011E363136323633363433333338333133443634363336323631333333383331011E363136323633363433333338333233443634363336323631333333383332011E363136323633363433333338333333443634363336323631333333383333011E363136323633363433333338333433443634363336323631333333383334011E363136323633363433333338333533443634363336323631333333383335011E363136323633363433333338333633443634363336323631333333383336011E363136323633363433333338333733443634363336323631333
333383337011E363136323633363433333338333833443634363336323631333333383338011E363136323633363433333338333933443634363336323631333333383339011E363136323633363433333339333033443634363336323631333333393330011E363136323633363433333339333133443634363336323631333333393331011E363136323633363433333339333233443634363336323631333333393332011E363136323633363433333339333333443634363336323631333333393333011E363136323633363433333339333433443634363336323631333333393334011E363136323633363433333339333533443634363336323631333333393335011E363136323633363433333339333633443634363336323631333333393336011E363136323633363433333339333733443634363336323631333333393337011E363136323633363433333339333833443634363336323631333333393338011E363136323633363433333339333933443634363336323631333333393339011E363136323633363433343330333033443634363336323631333433303330011E363136323633363433343330333133443634363336323631333433303331011E363136323633363433343330333233443634363336323631333433303332011E363136323633363433343330333333443634363336323631333433303333011E363136323633363433343330333433443634363336323631333433303334011E363136323633363433343330333533443634363336323631333433303335011E363136323633363433343330333633443634363336323631333433303336011E363136323633363433343330333733443634363336323631333433303337011E363136323633363433343330333833443634363336323631333433303338011E363136323633363433343330333933443634363336323631333433303339011E363136323633363433343331333033443634363336323631333433313330010114A3B20D0D88B8A58ABC78F145D3A499C8FA17837701010114F2291A94D838F8E222FEFA8550B57D64CE1383B10101010114D028C9981F7A87F3093672BF0D5B0E2A1B3ED45600010300020114A3B20D0D88B8A58ABC78F145D3A499C8FA17837701010114F2291A94D838F8E222FEFA8550B57D64CE1383B101280EE69579077AAD01C686F04FEFE839D5025DF5CFC646DDD177632966E16F0CB59E9C26ED850D8F8E6EDE4D0E7A1CC13AF418608F77C5E6B55B3966E8A69D02","proof":{"aunts":[]}}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:09.414Z","msg":[1,{"height":4,"round":0,"step":"RoundStepPrevote"}]} 
+{"time":"2017-04-27T22:24:09.414Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":4,"round":0,"type":1,"block_id":{"hash":"C3918AEACFA8BECF3695D10CC738D6182276CCEA","parts":{"total":1,"hash":"ACB46B8913F0203EB579A9977AF6AF85B6E89705"}},"signature":[1,"2ACD2A20D31464F1A39F82E02EA023C63C5BBBAF12320652CE09A94B07956950B6425C576675E5E3A0BF2E41312527434CD03E7A255F71999D56310AD665EC04"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:09.416Z","msg":[1,{"height":4,"round":0,"step":"RoundStepPrecommit"}]} +{"time":"2017-04-27T22:24:09.416Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":4,"round":0,"type":2,"block_id":{"hash":"C3918AEACFA8BECF3695D10CC738D6182276CCEA","parts":{"total":1,"hash":"ACB46B8913F0203EB579A9977AF6AF85B6E89705"}},"signature":[1,"B9E1E611ADE3C020AC6041DB974D7A124CC0BCDF580517697D01B07D7B90AAA186F2F237C0D048BFE384EC75EFA65E56B8E9BE086AEBED74B1C5132CFA119207"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:09.417Z","msg":[1,{"height":4,"round":0,"step":"RoundStepCommit"}]} #ENDHEIGHT: 4 -{"time":"2017-02-17T23:54:22.036Z","msg":[1,{"height":5,"round":0,"step":"RoundStepNewHeight"}]} -{"time":"2017-02-17T23:54:23.034Z","msg":[3,{"duration":997096276,"height":5,"round":0,"step":1}]} -{"time":"2017-02-17T23:54:23.035Z","msg":[1,{"height":5,"round":0,"step":"RoundStepPropose"}]} -{"time":"2017-02-17T23:54:23.035Z","msg":[2,{"msg":[17,{"Proposal":{"height":5,"round":0,"block_parts_header":{"total":1,"hash":"A52BAA9C2E52E633A1605F4B930205613E3E7A2F"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"32A96AA44440B6FDB28B590A029649CE37B0F1091B9E648658E910207BB2F96E4936102C63F3908942F1A45F52C01231680593FB3E53B8B29BF588A613116A0B"]}}],"peer_key":""}]} 
-{"time":"2017-02-17T23:54:23.035Z","msg":[2,{"msg":[19,{"Height":5,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010514A43848FAB3E280017A011404715E223BF4327FFA9B0D5AD849B74A099D5DEC0101011424CEBCBEB833F56D47AD14354071B3B7A243068A01144C1EF483E19426AC412E12F33E1D814FD019279801146440978BB85314393E824F6BBAFEE59FA1A5E30A0114354594CBFC1A7BCA1AD0050ED6AA010023EADA390114845C15D3FE4AA16021B85CD0F18E0188A673E73E01017A011E363136323633363433343338333733443634363336323631333433383337011E363136323633363433343338333833443634363336323631333433383338011E363136323633363433343338333933443634363336323631333433383339011E363136323633363433343339333033443634363336323631333433393330011E363136323633363433343339333133443634363336323631333433393331011E363136323633363433343339333233443634363336323631333433393332011E363136323633363433343339333333443634363336323631333433393333011E363136323633363433343339333433443634363336323631333433393334011E363136323633363433343339333533443634363336323631333433393335011E363136323633363433343339333633443634363336323631333433393336011E363136323633363433343339333733443634363336323631333433393337011E363136323633363433343339333833443634363336323631333433393338011E363136323633363433343339333933443634363336323631333433393339011E363136323633363433353330333033443634363336323631333533303330011E363136323633363433353330333133443634363336323631333533303331011E363136323633363433353330333233443634363336323631333533303332011E363136323633363433353330333333443634363336323631333533303333011E363136323633363433353330333433443634363336323631333533303334011E363136323633363433353330333533443634363336323631333533303335011E363136323633363433353330333633443634363336323631333533303336011E363136323633363433353330333733443634363336323631333533303337011E363136323633363433353330333833443634363336323631333533303338011E363136323633363433353330333933443634363336323631333533303339011E363136323633363433353331333033443634363336323631333533313330011E3631363236333634333
53331333133443634363336323631333533313331011E363136323633363433353331333233443634363336323631333533313332011E363136323633363433353331333333443634363336323631333533313333011E363136323633363433353331333433443634363336323631333533313334011E363136323633363433353331333533443634363336323631333533313335011E363136323633363433353331333633443634363336323631333533313336011E363136323633363433353331333733443634363336323631333533313337011E363136323633363433353331333833443634363336323631333533313338011E363136323633363433353331333933443634363336323631333533313339011E363136323633363433353332333033443634363336323631333533323330011E363136323633363433353332333133443634363336323631333533323331011E363136323633363433353332333233443634363336323631333533323332011E363136323633363433353332333333443634363336323631333533323333011E363136323633363433353332333433443634363336323631333533323334011E363136323633363433353332333533443634363336323631333533323335011E363136323633363433353332333633443634363336323631333533323336011E363136323633363433353332333733443634363336323631333533323337011E363136323633363433353332333833443634363336323631333533323338011E363136323633363433353332333933443634363336323631333533323339011E363136323633363433353333333033443634363336323631333533333330011E363136323633363433353333333133443634363336323631333533333331011E363136323633363433353333333233443634363336323631333533333332011E363136323633363433353333333333443634363336323631333533333333011E363136323633363433353333333433443634363336323631333533333334011E363136323633363433353333333533443634363336323631333533333335011E363136323633363433353333333633443634363336323631333533333336011E363136323633363433353333333733443634363336323631333533333337011E363136323633363433353333333833443634363336323631333533333338011E363136323633363433353333333933443634363336323631333533333339011E363136323633363433353334333033443634363336323631333533343330011E363136323633363433353334333133443634363336323631333533343331011E36313632363336343335333433323344363
4363336323631333533343332011E363136323633363433353334333333443634363336323631333533343333011E363136323633363433353334333433443634363336323631333533343334011E363136323633363433353334333533443634363336323631333533343335011E363136323633363433353334333633443634363336323631333533343336011E363136323633363433353334333733443634363336323631333533343337011E363136323633363433353334333833443634363336323631333533343338011E363136323633363433353334333933443634363336323631333533343339011E363136323633363433353335333033443634363336323631333533353330011E363136323633363433353335333133443634363336323631333533353331011E363136323633363433353335333233443634363336323631333533353332011E363136323633363433353335333333443634363336323631333533353333011E363136323633363433353335333433443634363336323631333533353334011E363136323633363433353335333533443634363336323631333533353335011E363136323633363433353335333633443634363336323631333533353336011E363136323633363433353335333733443634363336323631333533353337011E363136323633363433353335333833443634363336323631333533353338011E363136323633363433353335333933443634363336323631333533353339011E363136323633363433353336333033443634363336323631333533363330011E363136323633363433353336333133443634363336323631333533363331011E363136323633363433353336333233443634363336323631333533363332011E363136323633363433353336333333443634363336323631333533363333011E363136323633363433353336333433443634363336323631333533363334011E363136323633363433353336333533443634363336323631333533363335011E363136323633363433353336333633443634363336323631333533363336011E363136323633363433353336333733443634363336323631333533363337011E363136323633363433353336333833443634363336323631333533363338011E363136323633363433353336333933443634363336323631333533363339011E363136323633363433353337333033443634363336323631333533373330011E363136323633363433353337333133443634363336323631333533373331011E363136323633363433353337333233443634363336323631333533373332011E363136323633363433353337333333443634363336323631333
533373333011E363136323633363433353337333433443634363336323631333533373334011E363136323633363433353337333533443634363336323631333533373335011E363136323633363433353337333633443634363336323631333533373336011E363136323633363433353337333733443634363336323631333533373337011E363136323633363433353337333833443634363336323631333533373338011E363136323633363433353337333933443634363336323631333533373339011E363136323633363433353338333033443634363336323631333533383330011E363136323633363433353338333133443634363336323631333533383331011E363136323633363433353338333233443634363336323631333533383332011E363136323633363433353338333333443634363336323631333533383333011E363136323633363433353338333433443634363336323631333533383334011E363136323633363433353338333533443634363336323631333533383335011E363136323633363433353338333633443634363336323631333533383336011E363136323633363433353338333733443634363336323631333533383337011E363136323633363433353338333833443634363336323631333533383338011E363136323633363433353338333933443634363336323631333533383339011E363136323633363433353339333033443634363336323631333533393330011E363136323633363433353339333133443634363336323631333533393331011E363136323633363433353339333233443634363336323631333533393332011E363136323633363433353339333333443634363336323631333533393333011E363136323633363433353339333433443634363336323631333533393334011E363136323633363433353339333533443634363336323631333533393335011E363136323633363433353339333633443634363336323631333533393336011E363136323633363433353339333733443634363336323631333533393337011E363136323633363433353339333833443634363336323631333533393338011E363136323633363433353339333933443634363336323631333533393339011E363136323633363433363330333033443634363336323631333633303330011E363136323633363433363330333133443634363336323631333633303331011E363136323633363433363330333233443634363336323631333633303332011E363136323633363433363330333333443634363336323631333633303333011E363136323633363433363330333433443634363336323631333633303334011E363
136323633363433363330333533443634363336323631333633303335011E363136323633363433363330333633443634363336323631333633303336011E363136323633363433363330333733443634363336323631333633303337011E36313632363336343336333033383344363436333632363133363330333801011404715E223BF4327FFA9B0D5AD849B74A099D5DEC0101011424CEBCBEB833F56D47AD14354071B3B7A243068A0101010114D028C9981F7A87F3093672BF0D5B0E2A1B3ED4560001040002011404715E223BF4327FFA9B0D5AD849B74A099D5DEC0101011424CEBCBEB833F56D47AD14354071B3B7A243068A01F544743F17479A61F94B0F68C63D254BD60493D78E818D48A5859133619AEE5E92C47CAD89C654DF64E0911C3152091E047555D5F14655D95B9681AE9B336505","proof":{"aunts":[]}}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:23.037Z","msg":[1,{"height":5,"round":0,"step":"RoundStepPrevote"}]} -{"time":"2017-02-17T23:54:23.037Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":5,"round":0,"type":1,"block_id":{"hash":"FDC6D837995BEBBBFCBF3E7D7CF44F8FDA448543","parts":{"total":1,"hash":"A52BAA9C2E52E633A1605F4B930205613E3E7A2F"}},"signature":[1,"684AB4918389E06ADD5DCC4EFCCD0464EAE2BC4212344D88694706837A4D47D484747C7B5906537181E0FBD35EF78EDF673E8492C6E875BB33934456A8254B03"]}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:23.038Z","msg":[1,{"height":5,"round":0,"step":"RoundStepPrecommit"}]} -{"time":"2017-02-17T23:54:23.038Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":5,"round":0,"type":2,"block_id":{"hash":"FDC6D837995BEBBBFCBF3E7D7CF44F8FDA448543","parts":{"total":1,"hash":"A52BAA9C2E52E633A1605F4B930205613E3E7A2F"}},"signature":[1,"DF51D23D5D2C57598F67791D953A6C2D9FC5865A3048ADA4469B37500D2996B95732E0DC6F99EAEAEA12B4818CE355C7B701D16857D2AC767D740C2E30E9260C"]}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:23.038Z","msg":[1,{"height":5,"round":0,"step":"RoundStepCommit"}]} 
+{"time":"2017-04-27T22:24:09.431Z","msg":[1,{"height":5,"round":0,"step":"RoundStepNewHeight"}]} +{"time":"2017-04-27T22:24:10.417Z","msg":[3,{"duration":985803235,"height":5,"round":0,"step":1}]} +{"time":"2017-04-27T22:24:10.418Z","msg":[1,{"height":5,"round":0,"step":"RoundStepPropose"}]} +{"time":"2017-04-27T22:24:10.418Z","msg":[2,{"msg":[17,{"Proposal":{"height":5,"round":0,"block_parts_header":{"total":1,"hash":"DC0AAD92EA45834EB90563B51F5FCB8F6CE6A6B2"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"F7799D9885B354FF90772DA770768624E311398DA0FEF0E28C02934AAE32D0D183BEEB304022EC721FED3FE429C7F184F1A22EE6E83E1DC5D5C66500C138EF05"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:10.419Z","msg":[2,{"msg":[19,{"Height":5,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010514B96167EBE1CE40016B0114C3918AEACFA8BECF3695D10CC738D6182276CCEA01010114ACB46B8913F0203EB579A9977AF6AF85B6E897050114D4A14D12F28D02E3B707954EE2D8B1794C45FBDB0114EB7BE3102D05EFCE8D70D4033995393C3D6A6ECA0114354594CBFC1A7BCA1AD0050ED6AA010023EADA3901145DC8B2DF900A8E6A1DC37B2EE9D4B8860070759A01016B011E363136323633363433343331333133443634363336323631333433313331011E363136323633363433343331333233443634363336323631333433313332011E363136323633363433343331333333443634363336323631333433313333011E363136323633363433343331333433443634363336323631333433313334011E363136323633363433343331333533443634363336323631333433313335011E363136323633363433343331333633443634363336323631333433313336011E363136323633363433343331333733443634363336323631333433313337011E363136323633363433343331333833443634363336323631333433313338011E363136323633363433343331333933443634363336323631333433313339011E363136323633363433343332333033443634363336323631333433323330011E363136323633363433343332333133443634363336323631333433323331011E363136323633363433343332333233443634363336323631333433323332011E363136323633363433343332333333443634363336323631333433323333011E36313632363336343
3343332333433443634363336323631333433323334011E363136323633363433343332333533443634363336323631333433323335011E363136323633363433343332333633443634363336323631333433323336011E363136323633363433343332333733443634363336323631333433323337011E363136323633363433343332333833443634363336323631333433323338011E363136323633363433343332333933443634363336323631333433323339011E363136323633363433343333333033443634363336323631333433333330011E363136323633363433343333333133443634363336323631333433333331011E363136323633363433343333333233443634363336323631333433333332011E363136323633363433343333333333443634363336323631333433333333011E363136323633363433343333333433443634363336323631333433333334011E363136323633363433343333333533443634363336323631333433333335011E363136323633363433343333333633443634363336323631333433333336011E363136323633363433343333333733443634363336323631333433333337011E363136323633363433343333333833443634363336323631333433333338011E363136323633363433343333333933443634363336323631333433333339011E363136323633363433343334333033443634363336323631333433343330011E363136323633363433343334333133443634363336323631333433343331011E363136323633363433343334333233443634363336323631333433343332011E363136323633363433343334333333443634363336323631333433343333011E363136323633363433343334333433443634363336323631333433343334011E363136323633363433343334333533443634363336323631333433343335011E363136323633363433343334333633443634363336323631333433343336011E363136323633363433343334333733443634363336323631333433343337011E363136323633363433343334333833443634363336323631333433343338011E363136323633363433343334333933443634363336323631333433343339011E363136323633363433343335333033443634363336323631333433353330011E363136323633363433343335333133443634363336323631333433353331011E363136323633363433343335333233443634363336323631333433353332011E363136323633363433343335333333443634363336323631333433353333011E363136323633363433343335333433443634363336323631333433353334011E363136323633363433343335333533443
634363336323631333433353335011E363136323633363433343335333633443634363336323631333433353336011E363136323633363433343335333733443634363336323631333433353337011E363136323633363433343335333833443634363336323631333433353338011E363136323633363433343335333933443634363336323631333433353339011E363136323633363433343336333033443634363336323631333433363330011E363136323633363433343336333133443634363336323631333433363331011E363136323633363433343336333233443634363336323631333433363332011E363136323633363433343336333333443634363336323631333433363333011E363136323633363433343336333433443634363336323631333433363334011E363136323633363433343336333533443634363336323631333433363335011E363136323633363433343336333633443634363336323631333433363336011E363136323633363433343336333733443634363336323631333433363337011E363136323633363433343336333833443634363336323631333433363338011E363136323633363433343336333933443634363336323631333433363339011E363136323633363433343337333033443634363336323631333433373330011E363136323633363433343337333133443634363336323631333433373331011E363136323633363433343337333233443634363336323631333433373332011E363136323633363433343337333333443634363336323631333433373333011E363136323633363433343337333433443634363336323631333433373334011E363136323633363433343337333533443634363336323631333433373335011E363136323633363433343337333633443634363336323631333433373336011E363136323633363433343337333733443634363336323631333433373337011E363136323633363433343337333833443634363336323631333433373338011E363136323633363433343337333933443634363336323631333433373339011E363136323633363433343338333033443634363336323631333433383330011E363136323633363433343338333133443634363336323631333433383331011E363136323633363433343338333233443634363336323631333433383332011E363136323633363433343338333333443634363336323631333433383333011E363136323633363433343338333433443634363336323631333433383334011E363136323633363433343338333533443634363336323631333433383335011E3631363236333634333433383336334436343633363236313
33433383336011E363136323633363433343338333733443634363336323631333433383337011E363136323633363433343338333833443634363336323631333433383338011E363136323633363433343338333933443634363336323631333433383339011E363136323633363433343339333033443634363336323631333433393330011E363136323633363433343339333133443634363336323631333433393331011E363136323633363433343339333233443634363336323631333433393332011E363136323633363433343339333333443634363336323631333433393333011E363136323633363433343339333433443634363336323631333433393334011E363136323633363433343339333533443634363336323631333433393335011E363136323633363433343339333633443634363336323631333433393336011E363136323633363433343339333733443634363336323631333433393337011E363136323633363433343339333833443634363336323631333433393338011E363136323633363433343339333933443634363336323631333433393339011E363136323633363433353330333033443634363336323631333533303330011E363136323633363433353330333133443634363336323631333533303331011E363136323633363433353330333233443634363336323631333533303332011E363136323633363433353330333333443634363336323631333533303333011E363136323633363433353330333433443634363336323631333533303334011E363136323633363433353330333533443634363336323631333533303335011E363136323633363433353330333633443634363336323631333533303336011E363136323633363433353330333733443634363336323631333533303337011E363136323633363433353330333833443634363336323631333533303338011E363136323633363433353330333933443634363336323631333533303339011E363136323633363433353331333033443634363336323631333533313330011E363136323633363433353331333133443634363336323631333533313331011E363136323633363433353331333233443634363336323631333533313332011E363136323633363433353331333333443634363336323631333533313333011E363136323633363433353331333433443634363336323631333533313334011E363136323633363433353331333533443634363336323631333533313335011E363136323633363433353331333633443634363336323631333533313336011E36313632363336343335333133373344363436333632363133353331333701011
4C3918AEACFA8BECF3695D10CC738D6182276CCEA01010114ACB46B8913F0203EB579A9977AF6AF85B6E897050101010114D028C9981F7A87F3093672BF0D5B0E2A1B3ED45600010400020114C3918AEACFA8BECF3695D10CC738D6182276CCEA01010114ACB46B8913F0203EB579A9977AF6AF85B6E8970501B9E1E611ADE3C020AC6041DB974D7A124CC0BCDF580517697D01B07D7B90AAA186F2F237C0D048BFE384EC75EFA65E56B8E9BE086AEBED74B1C5132CFA119207","proof":{"aunts":[]}}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:10.420Z","msg":[1,{"height":5,"round":0,"step":"RoundStepPrevote"}]} +{"time":"2017-04-27T22:24:10.420Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":5,"round":0,"type":1,"block_id":{"hash":"29C2401351653BDD300FDA82FF670747DEE0D6BD","parts":{"total":1,"hash":"DC0AAD92EA45834EB90563B51F5FCB8F6CE6A6B2"}},"signature":[1,"3C5A5D4A84520A495F1239FD3B166C1F38226578C0BEF7F7953B2995BF274C5A5A7A222E2433BE8DA08321F838C69C4FDC8A3767D072D3CCD260F435F4065E0E"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:10.421Z","msg":[1,{"height":5,"round":0,"step":"RoundStepPrecommit"}]} +{"time":"2017-04-27T22:24:10.421Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":5,"round":0,"type":2,"block_id":{"hash":"29C2401351653BDD300FDA82FF670747DEE0D6BD","parts":{"total":1,"hash":"DC0AAD92EA45834EB90563B51F5FCB8F6CE6A6B2"}},"signature":[1,"5FFD69406ABEE706DD424F69E6D2A94A7ED8566D4A2A936A160BF7C574B7371C53CDC75B739D173EC456ECBBC84A4745CAAAB55D9CEA0BD4AD91FB04A8300406"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:10.422Z","msg":[1,{"height":5,"round":0,"step":"RoundStepCommit"}]} #ENDHEIGHT: 5 -{"time":"2017-02-17T23:54:23.041Z","msg":[1,{"height":6,"round":0,"step":"RoundStepNewHeight"}]} -{"time":"2017-02-17T23:54:24.038Z","msg":[3,{"duration":997341910,"height":6,"round":0,"step":1}]} -{"time":"2017-02-17T23:54:24.040Z","msg":[1,{"height":6,"round":0,"step":"RoundStepPropose"}]} 
-{"time":"2017-02-17T23:54:24.040Z","msg":[2,{"msg":[17,{"Proposal":{"height":6,"round":0,"block_parts_header":{"total":1,"hash":"EA1E4111198195006BF7C23322B1051BE6C11582"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"571309E5959472CF453B83BB00F75BFAE9ACA8981279CBCBF19FD1A104BAD544D43A4F67FC54C17C9D51CEE821E4F514A1742FA5220EFF432C334D81B03B4C08"]}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:24.040Z","msg":[2,{"msg":[19,{"Height":6,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010614A43849369AF7C0018C0114FDC6D837995BEBBBFCBF3E7D7CF44F8FDA44854301010114A52BAA9C2E52E633A1605F4B930205613E3E7A2F0114AD3EFC80659CF2DCB932CAAA832249B1100C7D0901149D3F259328643A0DA0B83CF23A1AC918965E393D0114354594CBFC1A7BCA1AD0050ED6AA010023EADA390114BFB5455B51A6694370771F5072E0204CE75E6C6901018C011E363136323633363433363330333933443634363336323631333633303339011E363136323633363433363331333033443634363336323631333633313330011E363136323633363433363331333133443634363336323631333633313331011E363136323633363433363331333233443634363336323631333633313332011E363136323633363433363331333333443634363336323631333633313333011E363136323633363433363331333433443634363336323631333633313334011E363136323633363433363331333533443634363336323631333633313335011E363136323633363433363331333633443634363336323631333633313336011E363136323633363433363331333733443634363336323631333633313337011E363136323633363433363331333833443634363336323631333633313338011E363136323633363433363331333933443634363336323631333633313339011E363136323633363433363332333033443634363336323631333633323330011E363136323633363433363332333133443634363336323631333633323331011E363136323633363433363332333233443634363336323631333633323332011E363136323633363433363332333333443634363336323631333633323333011E363136323633363433363332333433443634363336323631333633323334011E363136323633363433363332333533443634363336323631333633323335011E3631363236333634333633323336334436343633363236313336333
23336011E363136323633363433363332333733443634363336323631333633323337011E363136323633363433363332333833443634363336323631333633323338011E363136323633363433363332333933443634363336323631333633323339011E363136323633363433363333333033443634363336323631333633333330011E363136323633363433363333333133443634363336323631333633333331011E363136323633363433363333333233443634363336323631333633333332011E363136323633363433363333333333443634363336323631333633333333011E363136323633363433363333333433443634363336323631333633333334011E363136323633363433363333333533443634363336323631333633333335011E363136323633363433363333333633443634363336323631333633333336011E363136323633363433363333333733443634363336323631333633333337011E363136323633363433363333333833443634363336323631333633333338011E363136323633363433363333333933443634363336323631333633333339011E363136323633363433363334333033443634363336323631333633343330011E363136323633363433363334333133443634363336323631333633343331011E363136323633363433363334333233443634363336323631333633343332011E363136323633363433363334333333443634363336323631333633343333011E363136323633363433363334333433443634363336323631333633343334011E363136323633363433363334333533443634363336323631333633343335011E363136323633363433363334333633443634363336323631333633343336011E363136323633363433363334333733443634363336323631333633343337011E363136323633363433363334333833443634363336323631333633343338011E363136323633363433363334333933443634363336323631333633343339011E363136323633363433363335333033443634363336323631333633353330011E363136323633363433363335333133443634363336323631333633353331011E363136323633363433363335333233443634363336323631333633353332011E363136323633363433363335333333443634363336323631333633353333011E363136323633363433363335333433443634363336323631333633353334011E363136323633363433363335333533443634363336323631333633353335011E363136323633363433363335333633443634363336323631333633353336011E363136323633363433363335333733443634363336323631333633353337011E3631363
23633363433363335333833443634363336323631333633353338011E363136323633363433363335333933443634363336323631333633353339011E363136323633363433363336333033443634363336323631333633363330011E363136323633363433363336333133443634363336323631333633363331011E363136323633363433363336333233443634363336323631333633363332011E363136323633363433363336333333443634363336323631333633363333011E363136323633363433363336333433443634363336323631333633363334011E363136323633363433363336333533443634363336323631333633363335011E363136323633363433363336333633443634363336323631333633363336011E363136323633363433363336333733443634363336323631333633363337011E363136323633363433363336333833443634363336323631333633363338011E363136323633363433363336333933443634363336323631333633363339011E363136323633363433363337333033443634363336323631333633373330011E363136323633363433363337333133443634363336323631333633373331011E363136323633363433363337333233443634363336323631333633373332011E363136323633363433363337333333443634363336323631333633373333011E363136323633363433363337333433443634363336323631333633373334011E363136323633363433363337333533443634363336323631333633373335011E363136323633363433363337333633443634363336323631333633373336011E363136323633363433363337333733443634363336323631333633373337011E363136323633363433363337333833443634363336323631333633373338011E363136323633363433363337333933443634363336323631333633373339011E363136323633363433363338333033443634363336323631333633383330011E363136323633363433363338333133443634363336323631333633383331011E363136323633363433363338333233443634363336323631333633383332011E363136323633363433363338333333443634363336323631333633383333011E363136323633363433363338333433443634363336323631333633383334011E363136323633363433363338333533443634363336323631333633383335011E363136323633363433363338333633443634363336323631333633383336011E363136323633363433363338333733443634363336323631333633383337011E363136323633363433363338333833443634363336323631333633383338011E36313632363336343336333
8333933443634363336323631333633383339011E363136323633363433363339333033443634363336323631333633393330011E363136323633363433363339333133443634363336323631333633393331011E363136323633363433363339333233443634363336323631333633393332011E363136323633363433363339333333443634363336323631333633393333011E363136323633363433363339333433443634363336323631333633393334011E363136323633363433363339333533443634363336323631333633393335011E363136323633363433363339333633443634363336323631333633393336011E363136323633363433363339333733443634363336323631333633393337011E363136323633363433363339333833443634363336323631333633393338011E363136323633363433363339333933443634363336323631333633393339011E363136323633363433373330333033443634363336323631333733303330011E363136323633363433373330333133443634363336323631333733303331011E363136323633363433373330333233443634363336323631333733303332011E363136323633363433373330333333443634363336323631333733303333011E363136323633363433373330333433443634363336323631333733303334011E363136323633363433373330333533443634363336323631333733303335011E363136323633363433373330333633443634363336323631333733303336011E363136323633363433373330333733443634363336323631333733303337011E363136323633363433373330333833443634363336323631333733303338011E363136323633363433373330333933443634363336323631333733303339011E363136323633363433373331333033443634363336323631333733313330011E363136323633363433373331333133443634363336323631333733313331011E363136323633363433373331333233443634363336323631333733313332011E363136323633363433373331333333443634363336323631333733313333011E363136323633363433373331333433443634363336323631333733313334011E363136323633363433373331333533443634363336323631333733313335011E363136323633363433373331333633443634363336323631333733313336011E363136323633363433373331333733443634363336323631333733313337011E363136323633363433373331333833443634363336323631333733313338011E363136323633363433373331333933443634363336323631333733313339011E363136323633363433373332333033443634363
336323631333733323330011E363136323633363433373332333133443634363336323631333733323331011E363136323633363433373332333233443634363336323631333733323332011E363136323633363433373332333333443634363336323631333733323333011E363136323633363433373332333433443634363336323631333733323334011E363136323633363433373332333533443634363336323631333733323335011E363136323633363433373332333633443634363336323631333733323336011E363136323633363433373332333733443634363336323631333733323337011E363136323633363433373332333833443634363336323631333733323338011E363136323633363433373332333933443634363336323631333733323339011E363136323633363433373333333033443634363336323631333733333330011E363136323633363433373333333133443634363336323631333733333331011E363136323633363433373333333233443634363336323631333733333332011E363136323633363433373333333333443634363336323631333733333333011E363136323633363433373333333433443634363336323631333733333334011E363136323633363433373333333533443634363336323631333733333335011E363136323633363433373333333633443634363336323631333733333336011E363136323633363433373333333733443634363336323631333733333337011E363136323633363433373333333833443634363336323631333733333338011E363136323633363433373333333933443634363336323631333733333339011E363136323633363433373334333033443634363336323631333733343330011E363136323633363433373334333133443634363336323631333733343331011E363136323633363433373334333233443634363336323631333733343332011E363136323633363433373334333333443634363336323631333733343333011E363136323633363433373334333433443634363336323631333733343334011E363136323633363433373334333533443634363336323631333733343335011E363136323633363433373334333633443634363336323631333733343336011E363136323633363433373334333733443634363336323631333733343337011E363136323633363433373334333833443634363336323631333733343338010114FDC6D837995BEBBBFCBF3E7D7CF44F8FDA44854301010114A52BAA9C2E52E633A1605F4B930205613E3E7A2F0101010114D028C9981F7A87F3093672BF0D5B0E2A1B3ED45600010500020114FDC6D837995BEBBBFCBF3E7D7CF44
F8FDA44854301010114A52BAA9C2E52E633A1605F4B930205613E3E7A2F01DF51D23D5D2C57598F67791D953A6C2D9FC5865A3048ADA4469B37500D2996B95732E0DC6F99EAEAEA12B4818CE355C7B701D16857D2AC767D740C2E30E9260C","proof":{"aunts":[]}}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:24.041Z","msg":[1,{"height":6,"round":0,"step":"RoundStepPrevote"}]} -{"time":"2017-02-17T23:54:24.041Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":6,"round":0,"type":1,"block_id":{"hash":"1F7C249FF99B67AC57C4EEC94C42E0B95C9AFB6B","parts":{"total":1,"hash":"EA1E4111198195006BF7C23322B1051BE6C11582"}},"signature":[1,"1F79910354E1F4ACA11FC16DBA1ED6F75063A15BF8093C4AAEF87F69B3990F65E51FFC8B35A409838ECD0FF3C26E87637B068D0DC7E5863D5F1CF97826222300"]}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:24.042Z","msg":[1,{"height":6,"round":0,"step":"RoundStepPrecommit"}]} -{"time":"2017-02-17T23:54:24.042Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":6,"round":0,"type":2,"block_id":{"hash":"1F7C249FF99B67AC57C4EEC94C42E0B95C9AFB6B","parts":{"total":1,"hash":"EA1E4111198195006BF7C23322B1051BE6C11582"}},"signature":[1,"E7838F403E4D5E651317D8563355CB2E140409B1B471B4AC12EBF7085989228CFA062029DF78A405CF977925777177D876804D78D80DF2312977E6D804394A0E"]}}],"peer_key":""}]} -{"time":"2017-02-17T23:54:24.042Z","msg":[1,{"height":6,"round":0,"step":"RoundStepCommit"}]} +{"time":"2017-04-27T22:24:10.432Z","msg":[1,{"height":6,"round":0,"step":"RoundStepNewHeight"}]} +{"time":"2017-04-27T22:24:11.422Z","msg":[3,{"duration":989935903,"height":6,"round":0,"step":1}]} +{"time":"2017-04-27T22:24:11.424Z","msg":[1,{"height":6,"round":0,"step":"RoundStepPropose"}]} 
+{"time":"2017-04-27T22:24:11.424Z","msg":[2,{"msg":[17,{"Proposal":{"height":6,"round":0,"block_parts_header":{"total":1,"hash":"37436A7FCDB02F18074E8D2C90E9F004BBCB534E"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"11204470769A82145069D6F08BF1B5C62E86EB0016E820FC2B4BDDA283F1B9CD4A35DA897E7624AC35D48026A1D73FDBAA19F5E43E89D63310C16168A5E7460D"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:11.424Z","msg":[2,{"msg":[19,{"Height":6,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010614B9616827C8E3800171011429C2401351653BDD300FDA82FF670747DEE0D6BD01010114DC0AAD92EA45834EB90563B51F5FCB8F6CE6A6B201147EF9A4B59129B957565C5AEA8F67911AD0672C3C011463CBE710CCF92C056DECC32112CC3A64F3F191560114354594CBFC1A7BCA1AD0050ED6AA010023EADA3901141EF14BB9890DC775E68CB57C86BA2A9118573099010171011E363136323633363433353331333833443634363336323631333533313338011E363136323633363433353331333933443634363336323631333533313339011E363136323633363433353332333033443634363336323631333533323330011E363136323633363433353332333133443634363336323631333533323331011E363136323633363433353332333233443634363336323631333533323332011E363136323633363433353332333333443634363336323631333533323333011E363136323633363433353332333433443634363336323631333533323334011E363136323633363433353332333533443634363336323631333533323335011E363136323633363433353332333633443634363336323631333533323336011E363136323633363433353332333733443634363336323631333533323337011E363136323633363433353332333833443634363336323631333533323338011E363136323633363433353332333933443634363336323631333533323339011E363136323633363433353333333033443634363336323631333533333330011E363136323633363433353333333133443634363336323631333533333331011E363136323633363433353333333233443634363336323631333533333332011E363136323633363433353333333333443634363336323631333533333333011E363136323633363433353333333433443634363336323631333533333334011E3631363236333634333533333335334436343633363236313335333
33335011E363136323633363433353333333633443634363336323631333533333336011E363136323633363433353333333733443634363336323631333533333337011E363136323633363433353333333833443634363336323631333533333338011E363136323633363433353333333933443634363336323631333533333339011E363136323633363433353334333033443634363336323631333533343330011E363136323633363433353334333133443634363336323631333533343331011E363136323633363433353334333233443634363336323631333533343332011E363136323633363433353334333333443634363336323631333533343333011E363136323633363433353334333433443634363336323631333533343334011E363136323633363433353334333533443634363336323631333533343335011E363136323633363433353334333633443634363336323631333533343336011E363136323633363433353334333733443634363336323631333533343337011E363136323633363433353334333833443634363336323631333533343338011E363136323633363433353334333933443634363336323631333533343339011E363136323633363433353335333033443634363336323631333533353330011E363136323633363433353335333133443634363336323631333533353331011E363136323633363433353335333233443634363336323631333533353332011E363136323633363433353335333333443634363336323631333533353333011E363136323633363433353335333433443634363336323631333533353334011E363136323633363433353335333533443634363336323631333533353335011E363136323633363433353335333633443634363336323631333533353336011E363136323633363433353335333733443634363336323631333533353337011E363136323633363433353335333833443634363336323631333533353338011E363136323633363433353335333933443634363336323631333533353339011E363136323633363433353336333033443634363336323631333533363330011E363136323633363433353336333133443634363336323631333533363331011E363136323633363433353336333233443634363336323631333533363332011E363136323633363433353336333333443634363336323631333533363333011E363136323633363433353336333433443634363336323631333533363334011E363136323633363433353336333533443634363336323631333533363335011E363136323633363433353336333633443634363336323631333533363336011E3631363
23633363433353336333733443634363336323631333533363337011E363136323633363433353336333833443634363336323631333533363338011E363136323633363433353336333933443634363336323631333533363339011E363136323633363433353337333033443634363336323631333533373330011E363136323633363433353337333133443634363336323631333533373331011E363136323633363433353337333233443634363336323631333533373332011E363136323633363433353337333333443634363336323631333533373333011E363136323633363433353337333433443634363336323631333533373334011E363136323633363433353337333533443634363336323631333533373335011E363136323633363433353337333633443634363336323631333533373336011E363136323633363433353337333733443634363336323631333533373337011E363136323633363433353337333833443634363336323631333533373338011E363136323633363433353337333933443634363336323631333533373339011E363136323633363433353338333033443634363336323631333533383330011E363136323633363433353338333133443634363336323631333533383331011E363136323633363433353338333233443634363336323631333533383332011E363136323633363433353338333333443634363336323631333533383333011E363136323633363433353338333433443634363336323631333533383334011E363136323633363433353338333533443634363336323631333533383335011E363136323633363433353338333633443634363336323631333533383336011E363136323633363433353338333733443634363336323631333533383337011E363136323633363433353338333833443634363336323631333533383338011E363136323633363433353338333933443634363336323631333533383339011E363136323633363433353339333033443634363336323631333533393330011E363136323633363433353339333133443634363336323631333533393331011E363136323633363433353339333233443634363336323631333533393332011E363136323633363433353339333333443634363336323631333533393333011E363136323633363433353339333433443634363336323631333533393334011E363136323633363433353339333533443634363336323631333533393335011E363136323633363433353339333633443634363336323631333533393336011E363136323633363433353339333733443634363336323631333533393337011E36313632363336343335333
9333833443634363336323631333533393338011E363136323633363433353339333933443634363336323631333533393339011E363136323633363433363330333033443634363336323631333633303330011E363136323633363433363330333133443634363336323631333633303331011E363136323633363433363330333233443634363336323631333633303332011E363136323633363433363330333333443634363336323631333633303333011E363136323633363433363330333433443634363336323631333633303334011E363136323633363433363330333533443634363336323631333633303335011E363136323633363433363330333633443634363336323631333633303336011E363136323633363433363330333733443634363336323631333633303337011E363136323633363433363330333833443634363336323631333633303338011E363136323633363433363330333933443634363336323631333633303339011E363136323633363433363331333033443634363336323631333633313330011E363136323633363433363331333133443634363336323631333633313331011E363136323633363433363331333233443634363336323631333633313332011E363136323633363433363331333333443634363336323631333633313333011E363136323633363433363331333433443634363336323631333633313334011E363136323633363433363331333533443634363336323631333633313335011E363136323633363433363331333633443634363336323631333633313336011E363136323633363433363331333733443634363336323631333633313337011E363136323633363433363331333833443634363336323631333633313338011E363136323633363433363331333933443634363336323631333633313339011E363136323633363433363332333033443634363336323631333633323330011E363136323633363433363332333133443634363336323631333633323331011E363136323633363433363332333233443634363336323631333633323332011E363136323633363433363332333333443634363336323631333633323333011E363136323633363433363332333433443634363336323631333633323334011E363136323633363433363332333533443634363336323631333633323335011E363136323633363433363332333633443634363336323631333633323336011E363136323633363433363332333733443634363336323631333633323337011E363136323633363433363332333833443634363336323631333633323338011E363136323633363433363332333933443634363
336323631333633323339011E36313632363336343336333333303344363436333632363133363333333001011429C2401351653BDD300FDA82FF670747DEE0D6BD01010114DC0AAD92EA45834EB90563B51F5FCB8F6CE6A6B20101010114D028C9981F7A87F3093672BF0D5B0E2A1B3ED4560001050002011429C2401351653BDD300FDA82FF670747DEE0D6BD01010114DC0AAD92EA45834EB90563B51F5FCB8F6CE6A6B2015FFD69406ABEE706DD424F69E6D2A94A7ED8566D4A2A936A160BF7C574B7371C53CDC75B739D173EC456ECBBC84A4745CAAAB55D9CEA0BD4AD91FB04A8300406","proof":{"aunts":[]}}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:11.425Z","msg":[1,{"height":6,"round":0,"step":"RoundStepPrevote"}]} +{"time":"2017-04-27T22:24:11.425Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":6,"round":0,"type":1,"block_id":{"hash":"01CA8EA6995CA94535B20FFD04123D5CB68C0469","parts":{"total":1,"hash":"37436A7FCDB02F18074E8D2C90E9F004BBCB534E"}},"signature":[1,"F5C420BC1E1A6C6A9715756B0D71AF5481893150BCBA3F7702B1EB60D5077E66128B432064331C8B8AA46366C383666BDC5AFE56502233E9CAD0B6A6FB1BFF01"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:11.427Z","msg":[1,{"height":6,"round":0,"step":"RoundStepPrecommit"}]} +{"time":"2017-04-27T22:24:11.427Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":6,"round":0,"type":2,"block_id":{"hash":"01CA8EA6995CA94535B20FFD04123D5CB68C0469","parts":{"total":1,"hash":"37436A7FCDB02F18074E8D2C90E9F004BBCB534E"}},"signature":[1,"43C023CF88559C0BEA31FB11DD304279389373B856A5FEB56CB1BCDFCED9E01E1F46251753CE57E76EEF2616DE2F7511148A14B14B389D43CF224B2BD4571B00"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:24:11.427Z","msg":[1,{"height":6,"round":0,"step":"RoundStepCommit"}]} diff --git a/consensus/test_data/small_block1.cswal b/consensus/test_data/small_block1.cswal index d4eff73f1..94ccf712b 100644 --- a/consensus/test_data/small_block1.cswal +++ b/consensus/test_data/small_block1.cswal @@ -1,10 +1,10 @@ #ENDHEIGHT: 0 
-{"time":"2016-12-18T05:05:38.593Z","msg":[3,{"duration":970717663,"height":1,"round":0,"step":1}]} -{"time":"2016-12-18T05:05:38.595Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPropose"}]} -{"time":"2016-12-18T05:05:38.595Z","msg":[2,{"msg":[17,{"Proposal":{"height":1,"round":0,"block_parts_header":{"total":1,"hash":"A434EC796DF1CECC01296E953839C4675863A4E5"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"39563C3C7EDD9855B2971457A5DABF05CFDAF52805658847EB1F05115B8341344A77761CC85E670AF1B679DA9FC0905231957174699FE8326DBE7706209BDD0B"]}}],"peer_key":""}]} -{"time":"2016-12-18T05:05:38.595Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F7465737401011491414A07A14A400195000000000114F27F65C16210AA65ACBBB6F0DFF88981B292A6D40114354594CBFC1A7BCA1AD0050ED6AA010023EADA3900010195010D6162636431333D646362613133010D6162636431343D646362613134010D6162636431353D646362613135010D6162636431363D646362613136010D6162636431373D646362613137010D6162636431383D646362613138010D6162636431393D646362613139010D6162636432303D646362613230010D6162636432313D646362613231010D6162636432323D646362613232010D6162636432333D646362613233010D6162636432343D646362613234010D6162636432353D646362613235010D6162636432363D646362613236010D6162636432373D646362613237010D6162636432383D646362613238010D6162636432393D646362613239010D6162636433303D646362613330010D6162636433313D646362613331010D6162636433323D646362613332010D6162636433333D646362613333010D6162636433343D646362613334010D6162636433353D646362613335010D6162636433363D646362613336010D6162636433373D646362613337010D6162636433383D646362613338010D6162636433393D646362613339010D6162636434303D646362613430010D6162636434313D646362613431010D6162636434323D646362613432010D6162636434333D646362613433010D6162636434343D646362613434010D6162636434353D646362613435010D6162636434363D646362613436010D6162636434373D646362613437010D6162636434383D646362613438010D6162636434393D646362613439010D6162636
435303D646362613530010D6162636435313D646362613531010D6162636435323D646362613532010D6162636435333D646362613533010D6162636435343D646362613534010D6162636435353D646362613535010D6162636435363D646362613536010D6162636435373D646362613537010D6162636435383D646362613538010D6162636435393D646362613539010D6162636436303D646362613630010D6162636436313D646362613631010D6162636436323D646362613632010D6162636436333D646362613633010D6162636436343D646362613634010D6162636436353D646362613635010D6162636436363D646362613636010D6162636436373D646362613637010D6162636436383D646362613638010D6162636436393D646362613639010D6162636437303D646362613730010D6162636437313D646362613731010D6162636437323D646362613732010D6162636437333D646362613733010D6162636437343D646362613734010D6162636437353D646362613735010D6162636437363D646362613736010D6162636437373D646362613737010D6162636437383D646362613738010D6162636437393D646362613739010D6162636438303D646362613830010D6162636438313D646362613831010D6162636438323D646362613832010D6162636438333D646362613833010D6162636438343D646362613834010D6162636438353D646362613835010D6162636438363D646362613836010D6162636438373D646362613837010D6162636438383D646362613838010D6162636438393D646362613839010D6162636439303D646362613930010D6162636439313D646362613931010D6162636439323D646362613932010D6162636439333D646362613933010D6162636439343D646362613934010D6162636439353D646362613935010D6162636439363D646362613936010D6162636439373D646362613937010D6162636439383D646362613938010D6162636439393D646362613939010F616263643130303D64636261313030010F616263643130313D64636261313031010F616263643130323D64636261313032010F616263643130333D64636261313033010F616263643130343D64636261313034010F616263643130353D64636261313035010F616263643130363D64636261313036010F616263643130373D64636261313037010F616263643130383D64636261313038010F616263643130393D64636261313039010F616263643131303D64636261313130010F616263643131313D64636261313131010F616263643131323D64636261313132010F616263643131333D64636261313133010F616263643131343D646362613131340
10F616263643131353D64636261313135010F616263643131363D64636261313136010F616263643131373D64636261313137010F616263643131383D64636261313138010F616263643131393D64636261313139010F616263643132303D64636261313230010F616263643132313D64636261313231010F616263643132323D64636261313232010F616263643132333D64636261313233010F616263643132343D64636261313234010F616263643132353D64636261313235010F616263643132363D64636261313236010F616263643132373D64636261313237010F616263643132383D64636261313238010F616263643132393D64636261313239010F616263643133303D64636261313330010F616263643133313D64636261313331010F616263643133323D64636261313332010F616263643133333D64636261313333010F616263643133343D64636261313334010F616263643133353D64636261313335010F616263643133363D64636261313336010F616263643133373D64636261313337010F616263643133383D64636261313338010F616263643133393D64636261313339010F616263643134303D64636261313430010F616263643134313D64636261313431010F616263643134323D64636261313432010F616263643134333D64636261313433010F616263643134343D64636261313434010F616263643134353D64636261313435010F616263643134363D64636261313436010F616263643134373D64636261313437010F616263643134383D64636261313438010F616263643134393D64636261313439010F616263643135303D64636261313530010F616263643135313D64636261313531010F616263643135323D64636261313532010F616263643135333D64636261313533010F616263643135343D64636261313534010F616263643135353D64636261313535010F616263643135363D64636261313536010F616263643135373D64636261313537010F616263643135383D64636261313538010F616263643135393D64636261313539010F616263643136303D64636261313630010F616263643136313D646362613136310100000000","proof":{"aunts":[]}}}],"peer_key":""}]} -{"time":"2016-12-18T05:05:38.598Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrevote"}]} 
-{"time":"2016-12-18T05:05:38.598Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":1,"block_id":{"hash":"07BE34D315EE9E3F0B815E390FDC33747B4936C2","parts":{"total":1,"hash":"A434EC796DF1CECC01296E953839C4675863A4E5"}},"signature":[1,"9EAD2876BAD7D34B5073723929FF4AFE427AED2EB4E911DD24B1665C4FB937A8BE0C4F19A2188B8D5077D07920627218F1705E6365CC27D5B7255AC76AB91D00"]}}],"peer_key":""}]} -{"time":"2016-12-18T05:05:38.599Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrecommit"}]} -{"time":"2016-12-18T05:05:38.599Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":2,"block_id":{"hash":"07BE34D315EE9E3F0B815E390FDC33747B4936C2","parts":{"total":1,"hash":"A434EC796DF1CECC01296E953839C4675863A4E5"}},"signature":[1,"60EF6D09CB56944EA27C054A6908BEFD73C5E0A0EB5E0599FFAD3070596B864498D0C6A0D5A25D6E41BCE9548E9681AA5ECE481B955C6214B8D64AFF9737770B"]}}],"peer_key":""}]} -{"time":"2016-12-18T05:05:38.600Z","msg":[1,{"height":1,"round":0,"step":"RoundStepCommit"}]} +{"time":"2017-04-27T22:23:46.268Z","msg":[3,{"duration":972111736,"height":1,"round":0,"step":1}]} +{"time":"2017-04-27T22:23:46.270Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPropose"}]} +{"time":"2017-04-27T22:23:46.270Z","msg":[2,{"msg":[17,{"Proposal":{"height":1,"round":0,"block_parts_header":{"total":1,"hash":"368323AC043A67ACFCF02877D0117F487B6728C2"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"3948300A68131AD851E79F09EAF809AAD46C5EC1D2CFC65706F226A78E3C55B093A4AF6C57AFB43C512D818C733C7D273ED7241C4CC16F0E90A30D15730D9209"]}}],"peer_key":""}]} 
+{"time":"2017-04-27T22:23:46.270Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010114B961624C7D4F000167000000000114F075CC9E65D85ECBEA8E4603DC7DB96E881CEEB60114354594CBFC1A7BCA1AD0050ED6AA010023EADA3900010167011A3631363236333634333133303344363436333632363133313330011A3631363236333634333133313344363436333632363133313331011A3631363236333634333133323344363436333632363133313332011A3631363236333634333133333344363436333632363133313333011A3631363236333634333133343344363436333632363133313334011A3631363236333634333133353344363436333632363133313335011A3631363236333634333133363344363436333632363133313336011A3631363236333634333133373344363436333632363133313337011A3631363236333634333133383344363436333632363133313338011A3631363236333634333133393344363436333632363133313339011A3631363236333634333233303344363436333632363133323330011A3631363236333634333233313344363436333632363133323331011A3631363236333634333233323344363436333632363133323332011A3631363236333634333233333344363436333632363133323333011A3631363236333634333233343344363436333632363133323334011A3631363236333634333233353344363436333632363133323335011A3631363236333634333233363344363436333632363133323336011A3631363236333634333233373344363436333632363133323337011A3631363236333634333233383344363436333632363133323338011A3631363236333634333233393344363436333632363133323339011A3631363236333634333333303344363436333632363133333330011A3631363236333634333333313344363436333632363133333331011A3631363236333634333333323344363436333632363133333332011A3631363236333634333333333344363436333632363133333333011A3631363236333634333333343344363436333632363133333334011A3631363236333634333333353344363436333632363133333335011A3631363236333634333333363344363436333632363133333336011A3631363236333634333333373344363436333632363133333337011A3631363236333634333333383344363436333632363133333338011A3631363236333634333333393344363436333632363133333339011A363136323633363433343330334436343633363236313
3343330011A3631363236333634333433313344363436333632363133343331011A3631363236333634333433323344363436333632363133343332011A3631363236333634333433333344363436333632363133343333011A3631363236333634333433343344363436333632363133343334011A3631363236333634333433353344363436333632363133343335011A3631363236333634333433363344363436333632363133343336011A3631363236333634333433373344363436333632363133343337011A3631363236333634333433383344363436333632363133343338011A3631363236333634333433393344363436333632363133343339011A3631363236333634333533303344363436333632363133353330011A3631363236333634333533313344363436333632363133353331011A3631363236333634333533323344363436333632363133353332011A3631363236333634333533333344363436333632363133353333011A3631363236333634333533343344363436333632363133353334011A3631363236333634333533353344363436333632363133353335011A3631363236333634333533363344363436333632363133353336011A3631363236333634333533373344363436333632363133353337011A3631363236333634333533383344363436333632363133353338011A3631363236333634333533393344363436333632363133353339011A3631363236333634333633303344363436333632363133363330011A3631363236333634333633313344363436333632363133363331011A3631363236333634333633323344363436333632363133363332011A3631363236333634333633333344363436333632363133363333011A3631363236333634333633343344363436333632363133363334011A3631363236333634333633353344363436333632363133363335011A3631363236333634333633363344363436333632363133363336011A3631363236333634333633373344363436333632363133363337011A3631363236333634333633383344363436333632363133363338011A3631363236333634333633393344363436333632363133363339011A3631363236333634333733303344363436333632363133373330011A3631363236333634333733313344363436333632363133373331011A3631363236333634333733323344363436333632363133373332011A3631363236333634333733333344363436333632363133373333011A3631363236333634333733343344363436333632363133373334011A3631363236333634333733353344363436333632363133373335011A36313632363336343337333633443
63436333632363133373336011A3631363236333634333733373344363436333632363133373337011A3631363236333634333733383344363436333632363133373338011A3631363236333634333733393344363436333632363133373339011A3631363236333634333833303344363436333632363133383330011A3631363236333634333833313344363436333632363133383331011A3631363236333634333833323344363436333632363133383332011A3631363236333634333833333344363436333632363133383333011A3631363236333634333833343344363436333632363133383334011A3631363236333634333833353344363436333632363133383335011A3631363236333634333833363344363436333632363133383336011A3631363236333634333833373344363436333632363133383337011A3631363236333634333833383344363436333632363133383338011A3631363236333634333833393344363436333632363133383339011A3631363236333634333933303344363436333632363133393330011A3631363236333634333933313344363436333632363133393331011A3631363236333634333933323344363436333632363133393332011A3631363236333634333933333344363436333632363133393333011A3631363236333634333933343344363436333632363133393334011A3631363236333634333933353344363436333632363133393335011A3631363236333634333933363344363436333632363133393336011A3631363236333634333933373344363436333632363133393337011A3631363236333634333933383344363436333632363133393338011A3631363236333634333933393344363436333632363133393339011E363136323633363433313330333033443634363336323631333133303330011E363136323633363433313330333133443634363336323631333133303331011E363136323633363433313330333233443634363336323631333133303332011E363136323633363433313330333333443634363336323631333133303333011E363136323633363433313330333433443634363336323631333133303334011E363136323633363433313330333533443634363336323631333133303335011E363136323633363433313330333633443634363336323631333133303336011E363136323633363433313330333733443634363336323631333133303337011E363136323633363433313330333833443634363336323631333133303338011E363136323633363433313330333933443634363336323631333133303339011E363136323633363433313331333033443634363336323
631333133313330011E363136323633363433313331333133443634363336323631333133313331011E3631363236333634333133313332334436343633363236313331333133320100000000","proof":{"aunts":[]}}}],"peer_key":""}]} +{"time":"2017-04-27T22:23:46.271Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrevote"}]} +{"time":"2017-04-27T22:23:46.271Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":1,"block_id":{"hash":"FC7032C1A475C288FE30ED57BD4131AFE03DDC16","parts":{"total":1,"hash":"368323AC043A67ACFCF02877D0117F487B6728C2"}},"signature":[1,"78DD4C7854DED0D76C3B9783DD535B967E795E93ACD1947B362B418C99DB3AB271BF20546628ECDF523C839C6C5D1BF3605DA552CBCFB9C8BB7A74594BAB8F01"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:23:46.273Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrecommit"}]} +{"time":"2017-04-27T22:23:46.273Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":2,"block_id":{"hash":"FC7032C1A475C288FE30ED57BD4131AFE03DDC16","parts":{"total":1,"hash":"368323AC043A67ACFCF02877D0117F487B6728C2"}},"signature":[1,"A1C01F36B87CA1CBCE87560FBE297B6820A421F86F80C88DAD6CAD6204A11C573CDCC60796F958F52E5AF04F54DB5321AC6F38BD19E8CF069771C9F899D0220B"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:23:46.273Z","msg":[1,{"height":1,"round":0,"step":"RoundStepCommit"}]} diff --git a/consensus/test_data/small_block2.cswal b/consensus/test_data/small_block2.cswal index b5d1d282b..e3cfaad32 100644 --- a/consensus/test_data/small_block2.cswal +++ b/consensus/test_data/small_block2.cswal @@ -1,14 +1,15 @@ #ENDHEIGHT: 0 -{"time":"2016-12-18T05:05:43.641Z","msg":[3,{"duration":969409681,"height":1,"round":0,"step":1}]} -{"time":"2016-12-18T05:05:43.643Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPropose"}]} 
-{"time":"2016-12-18T05:05:43.643Z","msg":[2,{"msg":[17,{"Proposal":{"height":1,"round":0,"block_parts_header":{"total":5,"hash":"C916905C3C444501DDDAA1BF52E959B7531E762E"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"F1A8E9928889C68FD393F3983B5362AECA4A95AA13FE3C78569B2515EC046893CB718071CAF54F3F1507DCD851B37CD5557EA17BB5471D2DC6FB5AC5FBB72E02"]}}],"peer_key":""}]} -{"time":"2016-12-18T05:05:43.643Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F7465737401011491414B3483A8400190000000000114926EA77D30A4D19866159DE7E58AA9461F90F9D10114354594CBFC1A7BCA1AD0050ED6AA010023EADA3900010190010D6162636431323D646362613132010D6162636431333D646362613133010D6162636431343D646362613134010D6162636431353D646362613135010D6162636431363D646362613136010D6162636431373D646362613137010D6162636431383D646362613138010D6162636431393D646362613139010D6162636432303D646362613230010D6162636432313D646362613231010D6162636432323D646362613232010D6162636432333D646362613233010D6162636432343D646362613234010D6162636432353D646362613235010D6162636432363D646362613236010D6162636432373D646362613237010D6162636432383D646362613238010D6162636432393D646362613239010D6162636433303D646362613330010D6162636433313D646362613331010D6162636433323D646362613332010D6162636433333D646362613333010D6162636433343D646362613334010D6162636433353D646362613335010D6162636433363D646362613336010D6162636433373D646362613337010D6162636433383D646362613338010D6162636433393D646362613339010D6162636434303D","proof":{"aunts":["C9FBD66B63A976638196323F5B93494BDDFC9EED","47FD83BB7607E679EE5CF0783372D13C5A264056","FEEC97078A26B7F6057821C0660855170CC6F1D7"]}}}],"peer_key":""}]} 
-{"time":"2016-12-18T05:05:43.643Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":1,"bytes":"646362613430010D6162636434313D646362613431010D6162636434323D646362613432010D6162636434333D646362613433010D6162636434343D646362613434010D6162636434353D646362613435010D6162636434363D646362613436010D6162636434373D646362613437010D6162636434383D646362613438010D6162636434393D646362613439010D6162636435303D646362613530010D6162636435313D646362613531010D6162636435323D646362613532010D6162636435333D646362613533010D6162636435343D646362613534010D6162636435353D646362613535010D6162636435363D646362613536010D6162636435373D646362613537010D6162636435383D646362613538010D6162636435393D646362613539010D6162636436303D646362613630010D6162636436313D646362613631010D6162636436323D646362613632010D6162636436333D646362613633010D6162636436343D646362613634010D6162636436353D646362613635010D6162636436363D646362613636010D6162636436373D646362613637010D6162636436383D646362613638010D6162636436393D646362613639010D6162636437303D646362613730010D6162636437313D646362613731010D6162636437323D646362613732010D6162636437333D646362613733010D6162636437343D6463","proof":{"aunts":["D7FB03B935B77C322064F8277823CDB5C7018597","47FD83BB7607E679EE5CF0783372D13C5A264056","FEEC97078A26B7F6057821C0660855170CC6F1D7"]}}}],"peer_key":""}]} 
-{"time":"2016-12-18T05:05:43.644Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":2,"bytes":"62613734010D6162636437353D646362613735010D6162636437363D646362613736010D6162636437373D646362613737010D6162636437383D646362613738010D6162636437393D646362613739010D6162636438303D646362613830010D6162636438313D646362613831010D6162636438323D646362613832010D6162636438333D646362613833010D6162636438343D646362613834010D6162636438353D646362613835010D6162636438363D646362613836010D6162636438373D646362613837010D6162636438383D646362613838010D6162636438393D646362613839010D6162636439303D646362613930010D6162636439313D646362613931010D6162636439323D646362613932010D6162636439333D646362613933010D6162636439343D646362613934010D6162636439353D646362613935010D6162636439363D646362613936010D6162636439373D646362613937010D6162636439383D646362613938010D6162636439393D646362613939010F616263643130303D64636261313030010F616263643130313D64636261313031010F616263643130323D64636261313032010F616263643130333D64636261313033010F616263643130343D64636261313034010F616263643130353D64636261313035010F616263643130363D64636261313036010F616263643130373D64636261","proof":{"aunts":["A607D9BF5107E6C9FD19B6928D9CC7714B0730E4","FEEC97078A26B7F6057821C0660855170CC6F1D7"]}}}],"peer_key":""}]} 
-{"time":"2016-12-18T05:05:43.644Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":3,"bytes":"313037010F616263643130383D64636261313038010F616263643130393D64636261313039010F616263643131303D64636261313130010F616263643131313D64636261313131010F616263643131323D64636261313132010F616263643131333D64636261313133010F616263643131343D64636261313134010F616263643131353D64636261313135010F616263643131363D64636261313136010F616263643131373D64636261313137010F616263643131383D64636261313138010F616263643131393D64636261313139010F616263643132303D64636261313230010F616263643132313D64636261313231010F616263643132323D64636261313232010F616263643132333D64636261313233010F616263643132343D64636261313234010F616263643132353D64636261313235010F616263643132363D64636261313236010F616263643132373D64636261313237010F616263643132383D64636261313238010F616263643132393D64636261313239010F616263643133303D64636261313330010F616263643133313D64636261313331010F616263643133323D64636261313332010F616263643133333D64636261313333010F616263643133343D64636261313334010F616263643133353D64636261313335010F616263643133363D64636261313336010F616263643133373D646362613133","proof":{"aunts":["0FD794B3506B9E92CDE3703F7189D42167E77095","86D455F542DA79F5A764B9DABDEABF01F4BAB2AB"]}}}],"peer_key":""}]} 
-{"time":"2016-12-18T05:05:43.644Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":4,"bytes":"37010F616263643133383D64636261313338010F616263643133393D64636261313339010F616263643134303D64636261313430010F616263643134313D64636261313431010F616263643134323D64636261313432010F616263643134333D64636261313433010F616263643134343D64636261313434010F616263643134353D64636261313435010F616263643134363D64636261313436010F616263643134373D64636261313437010F616263643134383D64636261313438010F616263643134393D64636261313439010F616263643135303D64636261313530010F616263643135313D64636261313531010F616263643135323D64636261313532010F616263643135333D64636261313533010F616263643135343D64636261313534010F616263643135353D646362613135350100000000","proof":{"aunts":["50CBDC078A660EAE3442BA355BE10EE0D04408D1","86D455F542DA79F5A764B9DABDEABF01F4BAB2AB"]}}}],"peer_key":""}]} -{"time":"2016-12-18T05:05:43.645Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrevote"}]} -{"time":"2016-12-18T05:05:43.645Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":1,"block_id":{"hash":"6ADACDC2871C59A67337DAFD5045A982ED070C51","parts":{"total":5,"hash":"C916905C3C444501DDDAA1BF52E959B7531E762E"}},"signature":[1,"E815E0A63B7EEE7894DE2D72372A7C393434AC8ACCC46B60C628910F73351806D55A59994F08B454BFD71EDAA0CA95733CA47E37FFDAF9AAA2431A8160176E01"]}}],"peer_key":""}]} -{"time":"2016-12-18T05:05:43.647Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrecommit"}]} 
-{"time":"2016-12-18T05:05:43.647Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":2,"block_id":{"hash":"6ADACDC2871C59A67337DAFD5045A982ED070C51","parts":{"total":5,"hash":"C916905C3C444501DDDAA1BF52E959B7531E762E"}},"signature":[1,"9AAC3F3A118EE039EB460E9E5308D490D671C7490309BD5D62B5F392205C7E420DFDAF90F08294FF36BE8A9AA5CC203C1F2088B42D2BB8EE40A45F2BB5C54D0A"]}}],"peer_key":""}]} -{"time":"2016-12-18T05:05:43.648Z","msg":[1,{"height":1,"round":0,"step":"RoundStepCommit"}]} +{"time":"2017-04-27T22:23:56.310Z","msg":[3,{"duration":969732098,"height":1,"round":0,"step":1}]} +{"time":"2017-04-27T22:23:56.312Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPropose"}]} +{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[17,{"Proposal":{"height":1,"round":0,"block_parts_header":{"total":6,"hash":"A3C176F13F5CBC7C48EE27A472800410C9D487DC"},"pol_round":-1,"pol_block_id":{"hash":"","parts":{"total":0,"hash":""}},"signature":[1,"7624F6E943B7A207E16D1FA87EA099BD924E930F98E7DECBC01DB37735C619409588A67C2EABA9845FD6B80FDB65ECFCDA5F0DEFCEF74B8C34DB8E0540480203"]}}],"peer_key":""}]} 
+{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":0,"bytes":"0101010F74656E6465726D696E745F74657374010114B96164A30A118001620000000001141F6753D22BACA2180B1EADD722434EB28444D91D0114354594CBFC1A7BCA1AD0050ED6AA010023EADA3900010162011A3631363236333634333133303344363436333632363133313330011A3631363236333634333133313344363436333632363133313331011A3631363236333634333133323344363436333632363133313332011A3631363236333634333133333344363436333632363133313333011A3631363236333634333133343344363436333632363133313334011A3631363236333634333133353344363436333632363133313335011A3631363236333634333133363344363436333632363133313336011A3631363236333634333133373344363436333632363133313337011A3631363236333634333133383344363436333632363133313338011A3631363236333634333133393344363436333632363133313339011A3631363236333634333233303344363436333632363133323330011A3631363236333634333233313344363436333632363133323331011A3631363236333634333233323344363436333632363133323332011A3631363236333634333233333344363436333632363133323333011A3631363236333634333233343344363436333632363133323334011A36313632363336","proof":{"aunts":["49F4B71E3D7C457415069E2EA916DB12F67AA8D0","D35A72BEDAAAAC17045D7BFAAFA94C2EC0B0A4C2","705BC647374F3495EE73C3F44C21E9BDB4731738"]}}}],"peer_key":""}]} 
+{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":1,"bytes":"34333233353344363436333632363133323335011A3631363236333634333233363344363436333632363133323336011A3631363236333634333233373344363436333632363133323337011A3631363236333634333233383344363436333632363133323338011A3631363236333634333233393344363436333632363133323339011A3631363236333634333333303344363436333632363133333330011A3631363236333634333333313344363436333632363133333331011A3631363236333634333333323344363436333632363133333332011A3631363236333634333333333344363436333632363133333333011A3631363236333634333333343344363436333632363133333334011A3631363236333634333333353344363436333632363133333335011A3631363236333634333333363344363436333632363133333336011A3631363236333634333333373344363436333632363133333337011A3631363236333634333333383344363436333632363133333338011A3631363236333634333333393344363436333632363133333339011A3631363236333634333433303344363436333632363133343330011A3631363236333634333433313344363436333632363133343331011A3631363236333634333433323344363436333632363133343332011A363136323633363433343333334436","proof":{"aunts":["5AD2A9A1A49A1FD6EF83F05FA4588F800B29DEF1","D35A72BEDAAAAC17045D7BFAAFA94C2EC0B0A4C2","705BC647374F3495EE73C3F44C21E9BDB4731738"]}}}],"peer_key":""}]} 
+{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":2,"bytes":"3436333632363133343333011A3631363236333634333433343344363436333632363133343334011A3631363236333634333433353344363436333632363133343335011A3631363236333634333433363344363436333632363133343336011A3631363236333634333433373344363436333632363133343337011A3631363236333634333433383344363436333632363133343338011A3631363236333634333433393344363436333632363133343339011A3631363236333634333533303344363436333632363133353330011A3631363236333634333533313344363436333632363133353331011A3631363236333634333533323344363436333632363133353332011A3631363236333634333533333344363436333632363133353333011A3631363236333634333533343344363436333632363133353334011A3631363236333634333533353344363436333632363133353335011A3631363236333634333533363344363436333632363133353336011A3631363236333634333533373344363436333632363133353337011A3631363236333634333533383344363436333632363133353338011A3631363236333634333533393344363436333632363133353339011A3631363236333634333633303344363436333632363133363330011A3631363236333634333633313344363436333632363133","proof":{"aunts":["8B5786C3D871EE37B0F4B2DECAC39E157340DFBE","705BC647374F3495EE73C3F44C21E9BDB4731738"]}}}],"peer_key":""}]} 
+{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":3,"bytes":"363331011A3631363236333634333633323344363436333632363133363332011A3631363236333634333633333344363436333632363133363333011A3631363236333634333633343344363436333632363133363334011A3631363236333634333633353344363436333632363133363335011A3631363236333634333633363344363436333632363133363336011A3631363236333634333633373344363436333632363133363337011A3631363236333634333633383344363436333632363133363338011A3631363236333634333633393344363436333632363133363339011A3631363236333634333733303344363436333632363133373330011A3631363236333634333733313344363436333632363133373331011A3631363236333634333733323344363436333632363133373332011A3631363236333634333733333344363436333632363133373333011A3631363236333634333733343344363436333632363133373334011A3631363236333634333733353344363436333632363133373335011A3631363236333634333733363344363436333632363133373336011A3631363236333634333733373344363436333632363133373337011A3631363236333634333733383344363436333632363133373338011A3631363236333634333733393344363436333632363133373339011A363136","proof":{"aunts":["56097661A1B2707588100586B3B1C2C8A51057D1","6DE889147DF528EEB5F7422E95DC45900CAFB619","247C721D5CEB90BB1FE389BA74C43DF0955E1647"]}}}],"peer_key":""}]} 
+{"time":"2017-04-27T22:23:56.312Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":4,"bytes":"3236333634333833303344363436333632363133383330011A3631363236333634333833313344363436333632363133383331011A3631363236333634333833323344363436333632363133383332011A3631363236333634333833333344363436333632363133383333011A3631363236333634333833343344363436333632363133383334011A3631363236333634333833353344363436333632363133383335011A3631363236333634333833363344363436333632363133383336011A3631363236333634333833373344363436333632363133383337011A3631363236333634333833383344363436333632363133383338011A3631363236333634333833393344363436333632363133383339011A3631363236333634333933303344363436333632363133393330011A3631363236333634333933313344363436333632363133393331011A3631363236333634333933323344363436333632363133393332011A3631363236333634333933333344363436333632363133393333011A3631363236333634333933343344363436333632363133393334011A3631363236333634333933353344363436333632363133393335011A3631363236333634333933363344363436333632363133393336011A3631363236333634333933373344363436333632363133393337011A3631363236333634333933","proof":{"aunts":["081D3DC5F11850851D5F0D760B98EE87BFA6B8B0","6DE889147DF528EEB5F7422E95DC45900CAFB619","247C721D5CEB90BB1FE389BA74C43DF0955E1647"]}}}],"peer_key":""}]} 
+{"time":"2017-04-27T22:23:56.313Z","msg":[2,{"msg":[19,{"Height":1,"Round":0,"Part":{"index":5,"bytes":"383344363436333632363133393338011A3631363236333634333933393344363436333632363133393339011E363136323633363433313330333033443634363336323631333133303330011E363136323633363433313330333133443634363336323631333133303331011E363136323633363433313330333233443634363336323631333133303332011E363136323633363433313330333333443634363336323631333133303333011E363136323633363433313330333433443634363336323631333133303334011E363136323633363433313330333533443634363336323631333133303335011E363136323633363433313330333633443634363336323631333133303336011E3631363236333634333133303337334436343633363236313331333033370100000000","proof":{"aunts":["6AA912328C2B52EFA0ECE71F523E137E400EC484","247C721D5CEB90BB1FE389BA74C43DF0955E1647"]}}}],"peer_key":""}]} +{"time":"2017-04-27T22:23:56.314Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrevote"}]} +{"time":"2017-04-27T22:23:56.314Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":1,"block_id":{"hash":"62371CF72F8662378691706DB256C833CF1AF81B","parts":{"total":6,"hash":"A3C176F13F5CBC7C48EE27A472800410C9D487DC"}},"signature":[1,"255906FAAA50C84E85DABF7DE73468E4F95DB4E46F598848145926E2FAD77CA682BF07E09E2F3EC81FFBD9A036B67914A3C02F819B69248D777AEBA792725907"]}}],"peer_key":""}]} +{"time":"2017-04-27T22:23:56.315Z","msg":[1,{"height":1,"round":0,"step":"RoundStepPrecommit"}]} +{"time":"2017-04-27T22:23:56.315Z","msg":[2,{"msg":[20,{"Vote":{"validator_address":"D028C9981F7A87F3093672BF0D5B0E2A1B3ED456","validator_index":0,"height":1,"round":0,"type":2,"block_id":{"hash":"62371CF72F8662378691706DB256C833CF1AF81B","parts":{"total":6,"hash":"A3C176F13F5CBC7C48EE27A472800410C9D487DC"}},"signature":[1,"056CC15C748434D0A59B64B45CB56EDC1A437A426E68FA63DC7D61A7C17B0F768F207D81340D129A57C5A64195F8AFDD03B6BF28D7B2286290D61BCE88FCA304"]}}],"peer_key":""}]} 
+{"time":"2017-04-27T22:23:56.316Z","msg":[1,{"height":1,"round":0,"step":"RoundStepCommit"}]} diff --git a/consensus/ticker.go b/consensus/ticker.go index b318597d3..e869cdef1 100644 --- a/consensus/ticker.go +++ b/consensus/ticker.go @@ -3,7 +3,8 @@ package consensus import ( "time" - . "github.com/tendermint/go-common" + . "github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/log" ) var ( @@ -18,6 +19,8 @@ type TimeoutTicker interface { Stop() bool Chan() <-chan timeoutInfo // on which to receive a timeout ScheduleTimeout(ti timeoutInfo) // reset the timer + + SetLogger(log.Logger) } // timeoutTicker wraps time.Timer, @@ -39,8 +42,8 @@ func NewTimeoutTicker() TimeoutTicker { tickChan: make(chan timeoutInfo, tickTockBufferSize), tockChan: make(chan timeoutInfo, tickTockBufferSize), } + tt.BaseService = *NewBaseService(nil, "TimeoutTicker", tt) tt.stopTimer() // don't want to fire until the first scheduled timeout - tt.BaseService = *NewBaseService(log, "TimeoutTicker", tt) return tt } @@ -75,7 +78,7 @@ func (t *timeoutTicker) stopTimer() { select { case <-t.timer.C: default: - log.Debug("Timer already stopped") + t.Logger.Debug("Timer already stopped") } } } @@ -84,12 +87,12 @@ func (t *timeoutTicker) stopTimer() { // timers are interupted and replaced by new ticks from later steps // timeouts of 0 on the tickChan will be immediately relayed to the tockChan func (t *timeoutTicker) timeoutRoutine() { - log.Debug("Starting timeout routine") + t.Logger.Debug("Starting timeout routine") var ti timeoutInfo for { select { case newti := <-t.tickChan: - log.Debug("Received tick", "old_ti", ti, "new_ti", newti) + t.Logger.Debug("Received tick", "old_ti", ti, "new_ti", newti) // ignore tickers for old height/round/step if newti.Height < ti.Height { @@ -111,9 +114,9 @@ func (t *timeoutTicker) timeoutRoutine() { // NOTE time.Timer allows duration to be non-positive ti = newti t.timer.Reset(ti.Duration) - log.Debug("Scheduled timeout", "dur", ti.Duration, 
"height", ti.Height, "round", ti.Round, "step", ti.Step) + t.Logger.Debug("Scheduled timeout", "dur", ti.Duration, "height", ti.Height, "round", ti.Round, "step", ti.Step) case <-t.timer.C: - log.Info("Timed out", "dur", ti.Duration, "height", ti.Height, "round", ti.Round, "step", ti.Step) + t.Logger.Info("Timed out", "dur", ti.Duration, "height", ti.Height, "round", ti.Round, "step", ti.Step) // go routine here gaurantees timeoutRoutine doesn't block. // Determinism comes from playback in the receiveRoutine. // We can eliminate it by merging the timeoutRoutine into receiveRoutine diff --git a/consensus/version.go b/consensus/version.go index 34886db3c..84f1ec81f 100644 --- a/consensus/version.go +++ b/consensus/version.go @@ -1,7 +1,7 @@ package consensus import ( - . "github.com/tendermint/go-common" + . "github.com/tendermint/tmlibs/common" ) // kind of arbitrary diff --git a/consensus/wal.go b/consensus/wal.go index a89eff5e4..a2ac470a0 100644 --- a/consensus/wal.go +++ b/consensus/wal.go @@ -3,10 +3,10 @@ package consensus import ( "time" - auto "github.com/tendermint/go-autofile" - . "github.com/tendermint/go-common" - "github.com/tendermint/go-wire" + wire "github.com/tendermint/go-wire" "github.com/tendermint/tendermint/types" + auto "github.com/tendermint/tmlibs/autofile" + . 
"github.com/tendermint/tmlibs/common" ) //-------------------------------------------------------- @@ -49,9 +49,8 @@ func NewWAL(walFile string, light bool) (*WAL, error) { group: group, light: light, } - wal.BaseService = *NewBaseService(log, "WAL", wal) - _, err = wal.Start() - return wal, err + wal.BaseService = *NewBaseService(nil, "WAL", wal) + return wal, nil } func (wal *WAL) OnStart() error { diff --git a/docs/architecture/README.md b/docs/architecture/README.md index dd97a5b75..dc9c62a9e 100644 --- a/docs/architecture/README.md +++ b/docs/architecture/README.md @@ -1,6 +1,6 @@ # Architecture Decision Records -This is a location to record all high-level architecture decisions in the tendermin project. Not the implementation details, but the reasoning that happened. This should be refered to for guidance of the "right way" to extend the application. And if we notice that the original decisions were lacking, we should have another open discussion, record the new decisions here, and then modify the code to match. +This is a location to record all high-level architecture decisions in the tendermint project. Not the implementation details, but the reasoning that happened. This should be refered to for guidance of the "right way" to extend the application. And if we notice that the original decisions were lacking, we should have another open discussion, record the new decisions here, and then modify the code to match. This is like our guide and mentor when Jae and Bucky are offline.... The concept comes from a [blog post](https://product.reverb.com/documenting-architecture-decisions-the-reverb-way-a3563bb24bd0#.78xhdix6t) that resonated among the team when Anton shared it. 
diff --git a/docs/architecture/adr-001.md new file mode 100644 index 000000000..a11a49e14 --- /dev/null +++ b/docs/architecture/adr-001.md @@ -0,0 +1,216 @@ +# ADR 1: Logging + +## Context + +The current logging system in Tendermint is static and not flexible enough. + +Issues: [358](https://github.com/tendermint/tendermint/issues/358), [375](https://github.com/tendermint/tendermint/issues/375). + +What we want from the new system: + +- per package dynamic log levels +- dynamic logger setting (logger tied to the processing struct) +- conventions +- be more visually appealing + +"dynamic" here means the ability to set something at runtime. + +## Decision + +### 1) An interface + +First, we will need an interface for all of our libraries (`tmlibs`, Tendermint, etc.). My personal preference is the go-kit `Logger` interface (see Appendix A), but that would be too big a change. Plus, we will still need levels. + +```go +// log.go +type Logger interface { + Debug(msg string, keyvals ...interface{}) error + Info(msg string, keyvals ...interface{}) error + Error(msg string, keyvals ...interface{}) error + + With(keyvals ...interface{}) Logger +} +``` + +On a side note: the difference between `Info` and `Notice` is subtle. We probably +could do without `Notice`. I don't think we need `Panic` or `Fatal` as part of +the interface. These funcs could be implemented as helpers. In fact, we already +have some in `tmlibs/common`. + +- `Debug` - extended output for devs +- `Info` - all that is useful for a user +- `Error` - errors + +`Notice` should become `Info`, `Warn` either `Error` or `Debug` depending on the message, `Crit` -> `Error`. + +This interface should go into `tmlibs/log`. All libraries which are part of the core (tendermint/tendermint) should obey it. + +### 2) Logger with our current formatting + +On top of this interface, we will need to implement a stdout logger, which will be used when Tendermint is configured to output logs to STDOUT.
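To make the proposed interface concrete, here is a stdlib-only toy implementation showing how `With` accumulates key-value context (`kvLogger` is a hypothetical name, not the eventual `tmlibs/log` type):

```go
package main

import "fmt"

// Logger mirrors the interface proposed in ADR 1.
type Logger interface {
	Debug(msg string, keyvals ...interface{}) error
	Info(msg string, keyvals ...interface{}) error
	Error(msg string, keyvals ...interface{}) error
	With(keyvals ...interface{}) Logger
}

// kvLogger is a toy implementation: With returns a copy carrying extra
// context keyvals, and every call renders "LEVEL msg k=v ...".
type kvLogger struct {
	ctx []interface{}
}

// line renders the context keyvals first, then the per-call keyvals.
func (l kvLogger) line(level, msg string, keyvals []interface{}) string {
	all := append(append([]interface{}{}, l.ctx...), keyvals...)
	s := level + " " + msg
	for i := 0; i+1 < len(all); i += 2 {
		s += fmt.Sprintf(" %v=%v", all[i], all[i+1])
	}
	return s
}

func (l kvLogger) Debug(msg string, kv ...interface{}) error { _, err := fmt.Println(l.line("D", msg, kv)); return err }
func (l kvLogger) Info(msg string, kv ...interface{}) error  { _, err := fmt.Println(l.line("I", msg, kv)); return err }
func (l kvLogger) Error(msg string, kv ...interface{}) error { _, err := fmt.Println(l.line("E", msg, kv)); return err }

// With does not mutate the receiver: each module gets its own context.
func (l kvLogger) With(kv ...interface{}) Logger {
	return kvLogger{ctx: append(append([]interface{}{}, l.ctx...), kv...)}
}

func main() {
	var logger Logger = kvLogger{}
	consLogger := logger.With("module", "consensus")
	consLogger.Info("ABCI Replay Blocks", "appHeight", 0) // → I ABCI Replay Blocks module=consensus appHeight=0
}
```

The real logger wraps go-kit's logfmt encoder instead of formatting by hand; the sketch only illustrates the contract.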
+ +Many people say that they like the current output, so let's stick with it. + +``` +NOTE[04-25|14:45:08] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0 +``` + +A couple of minor changes: + +``` +I[04-25|14:45:08.322] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0 +``` + +Notice the level is now encoded using a single character, and the timestamp includes milliseconds. + +Note: there are many other formats out there like [logfmt](https://brandur.org/logfmt). + +This logger could be implemented on top of any existing library - [logrus](https://github.com/sirupsen/logrus), [go-kit/log](https://github.com/go-kit/kit/tree/master/log), [zap](https://github.com/uber-go/zap), or log15 - as long as it: + +a) supports coloring output
+b) is moderately fast (buffering)
+c) conforms to the new interface, or an adapter could be written for it
+d) is somewhat configurable
+ +go-kit is my favorite so far. Check out how easy it is to color errors in red https://github.com/go-kit/kit/blob/master/log/term/example_test.go#L12. Note, however, that coloring can only be applied to the whole string :( + +``` +go-kit +: flexible, modular +go-kit “-”: logfmt format https://brandur.org/logfmt + +logrus +: popular, feature rich (hooks), API and output is more like what we want +logrus -: not so flexible +``` + +```go +// tm_logger.go +// NewTmLogger returns a logger that encodes keyvals to the Writer in +// tm format. +func NewTmLogger(w io.Writer) *tmLogger { + return &tmLogger{sourceLogger: kitlog.NewLogfmtLogger(w)} +} + +func (l *tmLogger) SetLevel(lvl string) { + switch lvl { + case "debug": + l.sourceLogger = level.NewFilter(l.sourceLogger, level.AllowDebug()) + } +} + +func (l *tmLogger) Info(msg string, keyvals ...interface{}) error { + return l.sourceLogger.Log(append([]interface{}{"msg", msg}, keyvals...)...) +} + +// log.go +func With(logger Logger, keyvals ...interface{}) Logger { + return &tmLogger{sourceLogger: kitlog.With(logger.(*tmLogger).sourceLogger, keyvals...)} +} +``` + +Usage: + +```go +logger := log.NewTmLogger(os.Stdout) +logger.SetLevel(config.GetString("log_level")) +node.SetLogger(log.With(logger, "node", Name)) +``` + +**Other log formatters** + +In the future, we may want other formatters like JSONFormatter. + +``` +{ "level": "notice", "time": "2017-04-25 14:45:08.562471297 -0400 EDT", "module": "consensus", "msg": "ABCI Replay Blocks", "appHeight": 0, "storeHeight": 0, "stateHeight": 0 } +``` + +### 3) Dynamic logger setting + +https://dave.cheney.net/2017/01/23/the-package-level-logger-anti-pattern + +This is the hardest part and where the most work will be done. The logger should be tied to the processing struct, or to the context if it adds some fields to the logger. + +```go +type BaseService struct { + log log15.Logger + name string + started uint32 // atomic + stopped uint32 // atomic +... +} +``` + +BaseService already contains a `log` field, so most of the structs embedding it should be fine.
We should rename it to `logger`. + +The only thing missing is the ability to set the logger: + +```go +func (bs *BaseService) SetLogger(l log.Logger) { + bs.logger = l +} +``` + +### 4) Conventions + +Important keyvals should go first. Example: + +``` +correct +I[04-25|14:45:08.322] ABCI Replay Blocks module=consensus instance=1 appHeight=0 storeHeight=0 stateHeight=0 +``` + +not + +``` +wrong +I[04-25|14:45:08.322] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0 instance=1 +``` + +To achieve that, in most cases you'll need to add the `instance` field to the logger upon creation, not when you log a particular message: + +```go +colorFn := func(keyvals ...interface{}) term.FgBgColor { + for i := 1; i < len(keyvals); i += 2 { + if keyvals[i] == "instance" && keyvals[i+1] == "1" { + return term.FgBgColor{Fg: term.Blue} + } else if keyvals[i] == "instance" && keyvals[i+1] == "2" { + return term.FgBgColor{Fg: term.Red} + } + } + return term.FgBgColor{} +} +logger := term.NewLogger(os.Stdout, log.NewTmLogger, colorFn) + +c1 := NewConsensusReactor(...) +c1.SetLogger(log.With(logger, "instance", 1)) + +c2 := NewConsensusReactor(...) +c2.SetLogger(log.With(logger, "instance", 2)) +``` + +## Status + +proposed + +## Consequences + +### Positive + +Dynamic logger, which could be turned off for some modules at runtime. Public interface for other projects using Tendermint libraries. + +### Negative + +We may lose the ability to color keys in key-value pairs. go-kit allows you to easily change the foreground / background colors of the whole string, but not of its parts. + +### Neutral + +## Appendix A. + +I really like the minimalistic approach go-kit took with its logger https://github.com/go-kit/kit/tree/master/log: + +``` +type Logger interface { + Log(keyvals ...interface{}) error +} +``` + +See [The Hunt for a Logger Interface](https://go-talks.appspot.com/github.com/ChrisHines/talks/structured-logging/structured-logging.slide).
The advantage is greater composability (check out how go-kit defines colored logging or log-leveled logging on top of this interface https://github.com/go-kit/kit/tree/master/log). diff --git a/docs/architecture/merkle.md index 72998db88..4e769aed6 100644 --- a/docs/architecture/merkle.md +++ b/docs/architecture/merkle.md @@ -2,7 +2,7 @@ To allow the efficient creation of an ABCi app, tendermint wishes to provide a reference implemention of a key-value store that provides merkle proofs of the data. These proofs then quickly allow the ABCi app to provide an apphash to the consensus engine, as well as a full proof to any client. -This engine is currently implemented in `go-merkle` with `merkleeyes` providing a language-agnostic binding via ABCi. It uses `go-db` bindings internally to persist data to leveldb. +This engine is currently implemented in `go-merkle` with `merkleeyes` providing a language-agnostic binding via ABCI. It uses `tmlibs/db` bindings internally to persist data to leveldb. 
What are some of the requirements of this store: diff --git a/glide.lock b/glide.lock index 42712c232..195386500 100644 --- a/glide.lock +++ b/glide.lock @@ -1,53 +1,88 @@ -hash: d9724aa287c40d1b3856b6565f09235d809c8b2f7c6537c04f597137c0d6cd26 -updated: 2017-04-21T13:09:25.708801802-04:00 +hash: 93f15c9766ea826c29a91f545c42172eafd8c61e39c1d81617114ad1a9c9eaf2 +updated: 2017-05-18T06:13:24.295793122-04:00 imports: - name: github.com/btcsuite/btcd - version: 4b348c1d33373d672edd83fc576892d0e46686d2 + version: 53f55a46349aa8f44b90895047e843666991cf24 subpackages: - btcec -- name: github.com/BurntSushi/toml - version: b26d9c308763d68093482582cea63d69be07a0f0 - name: github.com/davecgh/go-spew - version: 6d212800a42e8ab5c146b8ace3490ee17e5225f9 + version: 04cdfd42973bb9c8589fd6a731800cf222fde1a9 subpackages: - spew - name: github.com/ebuchman/fail-test version: 95f809107225be108efcf10a3509e4ea6ceef3c4 +- name: github.com/fsnotify/fsnotify + version: 4da3e2cfbabc9f751898f250b49f2439785783a1 +- name: github.com/go-kit/kit + version: 6964666de57c88f7d93da127e900d201b632f561 + subpackages: + - log + - log/level + - log/term +- name: github.com/go-logfmt/logfmt + version: 390ab7935ee28ec6b286364bba9b4dd6410cb3d5 - name: github.com/go-stack/stack - version: 100eb0c0a9c5b306ca2fb4f165df21d80ada4b82 + version: 7a2f19628aabfe68f0766b59e74d6315f8347d22 - name: github.com/gogo/protobuf - version: 100ba4e885062801d56799d78530b73b178a78f3 + version: 9df9efe4c742f1a2bfdedf1c3b6902fc6e814c6b subpackages: - proto - name: github.com/golang/protobuf - version: 2bba0603135d7d7f5cb73b2125beeda19c09f4ef + version: fec3b39b059c0f88fa6b20f5ed012b1aa203a8b4 subpackages: - proto - ptypes/any - name: github.com/golang/snappy version: 553a641470496b2327abcac10b36396bd98e45c9 - name: github.com/gorilla/websocket - version: 3ab3a8b8831546bd18fd182c20687ca853b2bb13 + version: a91eba7f97777409bc2c443f5534d41dd20c5720 +- name: github.com/hashicorp/hcl + version: 
392dba7d905ed5d04a5794ba89f558b27e2ba1ca + subpackages: + - hcl/ast + - hcl/parser + - hcl/scanner + - hcl/strconv + - hcl/token + - json/parser + - json/scanner + - json/token - name: github.com/inconshreveable/mousetrap version: 76626ae9c91c4f2a10f34cad8ce83ea42c93bb75 - name: github.com/jmhodges/levigo version: c42d9e0ca023e2198120196f842701bb4c55d7b9 -- name: github.com/mattn/go-colorable - version: ded68f7a9561c023e790de24279db7ebf473ea80 -- name: github.com/mattn/go-isatty - version: fc9e8d8ef48496124e79ae0df75490096eccf6fe +- name: github.com/kr/logfmt + version: b84e30acd515aadc4b783ad4ff83aff3299bdfe0 +- name: github.com/magiconair/properties + version: 51463bfca2576e06c62a8504b5c0f06d61312647 +- name: github.com/mitchellh/mapstructure + version: cc8532a8e9a55ea36402aa21efdf403a60d34096 +- name: github.com/pelletier/go-buffruneio + version: c37440a7cf42ac63b919c752ca73a85067e05992 +- name: github.com/pelletier/go-toml + version: 5c26a6ff6fd178719e15decac1c8196da0d7d6d1 - name: github.com/pkg/errors - version: 645ef00459ed84a119197bfb8d8205042c6df63d + version: c605e284fe17294bda444b34710735b29d1a9d90 - name: github.com/pmezard/go-difflib version: d8ed2627bdf02c080bf22230dbb337003b7aba2d subpackages: - difflib +- name: github.com/spf13/afero + version: 9be650865eab0c12963d8753212f4f9c66cdcf12 + subpackages: + - mem +- name: github.com/spf13/cast + version: acbeb36b902d72a7a4c18e8f3241075e7ab763e4 - name: github.com/spf13/cobra - version: 10f6b9d7e1631a54ad07c5c0fb71c28a1abfd3c2 + version: 4cdb38c072b86bf795d2c81de50784d9fdd6eb77 +- name: github.com/spf13/jwalterweatherman + version: 8f07c835e5cc1450c082fe3a439cf87b0cbb2d99 - name: github.com/spf13/pflag - version: 2300d0f8576fe575f71aaa5b9bbe4e1b0dc2eb51 + version: e57e3eeb33f795204c1ca35f56c44f83227c6e66 +- name: github.com/spf13/viper + version: 0967fc9aceab2ce9da34061253ac10fb99bba5b2 - name: github.com/stretchr/testify - version: 69483b4bd14f5845b5a1e55bca19e954e827f1d0 + version: 
4d4bfba8f1d1027c4fdbe371823030df51419987 subpackages: - assert - require @@ -67,7 +102,7 @@ imports: - leveldb/table - leveldb/util - name: github.com/tendermint/abci - version: 56e13d87f4e3ec1ea756957d6b23caa6ebcf0998 + version: 864d1f80b36b440bde030a5c18d8ac3aa8c2949d subpackages: - client - example/counter @@ -79,56 +114,35 @@ imports: subpackages: - edwards25519 - extra25519 -- name: github.com/tendermint/go-autofile - version: 48b17de82914e1ec2f134ce823ba426337d2c518 -- name: github.com/tendermint/go-clist - version: 3baa390bbaf7634251c42ad69a8682e7e3990552 -- name: github.com/tendermint/go-common - version: f9e3db037330c8a8d61d3966de8473eaf01154fa - subpackages: - - test -- name: github.com/tendermint/go-config - version: 620dcbbd7d587cf3599dedbf329b64311b0c307a - name: github.com/tendermint/go-crypto - version: 0ca2c6fdb0706001ca4c4b9b80c9f428e8cf39da -- name: github.com/tendermint/go-data - version: e7fcc6d081ec8518912fcdc103188275f83a3ee5 -- name: github.com/tendermint/go-db - version: 9643f60bc2578693844aacf380a7c32e4c029fee -- name: github.com/tendermint/go-events - version: f8ffbfb2be3483e9e7927495590a727f51c0c11f -- name: github.com/tendermint/go-flowrate - version: a20c98e61957faa93b4014fbd902f20ab9317a6a - subpackages: - - flowrate -- name: github.com/tendermint/go-logger - version: cefb3a45c0bf3c493a04e9bcd9b1540528be59f2 -- name: github.com/tendermint/go-merkle - version: 714d4d04557fd068a7c2a1748241ce8428015a96 -- name: github.com/tendermint/go-p2p - version: e8f33a47846708269d373f9c8080613d6c4f66b2 - subpackages: - - upnp -- name: github.com/tendermint/go-rpc - version: 2c8df0ee6b60d8ac33662df13a4e358c679e02bf - subpackages: - - client - - server - - types + version: 7dff40942a64cdeefefa9446b2d104750b349f8a - name: github.com/tendermint/go-wire - version: c1c9a57ab8038448ddea1714c0698f8051e5748c -- name: github.com/tendermint/log15 - version: ae0f3d6450da9eac7074b439c8e1c3cabf0d5ce6 + version: 5f88da3dbc1a72844e6dfaf274ce87f851d488eb subpackages: 
- - term + - data + - data/base58 - name: github.com/tendermint/merkleeyes - version: 9fb76efa5aebe773a598f97e68e75fe53d520e70 + version: a0e73e1ac3e18e12a007520a4ea2c9822256e307 subpackages: - app - client + - iavl - testutil +- name: github.com/tendermint/tmlibs + version: 306795ae1d8e4f4a10dcc8bdb32a00455843c9d5 + subpackages: + - autofile + - cli + - clist + - common + - db + - events + - flowrate + - log + - merkle + - test - name: golang.org/x/crypto - version: 96846453c37f0876340a66a47f3f75b1f3a6cd2d + version: 0fe963104e9d1877082f8fb38f816fcd97eb1d10 subpackages: - curve25519 - nacl/box @@ -139,7 +153,7 @@ imports: - ripemd160 - salsa20/salsa - name: golang.org/x/net - version: c8c74377599bd978aee1cf3b9b63a8634051cec2 + version: 513929065c19401a1c7b76ecd942f9f86a0c061b subpackages: - context - http2 @@ -149,25 +163,26 @@ imports: - lex/httplex - trace - name: golang.org/x/sys - version: ea9bcade75cb975a0b9738936568ab388b845617 + version: e62c3de784db939836898e5c19ffd41bece347da subpackages: - unix - name: golang.org/x/text - version: 19e3104b43db45fca0303f489a9536087b184802 + version: 19e51611da83d6be54ddafce4a4af510cb3e9ea4 subpackages: - secure/bidirule - transform - unicode/bidi - unicode/norm - name: google.golang.org/genproto - version: 411e09b969b1170a9f0c467558eb4c4c110d9c77 + version: bb3573be0c484136831138976d444b8754777aff subpackages: - googleapis/rpc/status - name: google.golang.org/grpc - version: 6914ab1e338c92da4218a23d27fcd03d0ad78d46 + version: 11d93ecdb918872ee841ba3a2dc391aa6d4f57c3 subpackages: - codes - credentials + - grpclb/grpc_lb_v1 - grpclog - internal - keepalive @@ -178,4 +193,6 @@ imports: - status - tap - transport +- name: gopkg.in/yaml.v2 + version: cd8b52f8269e0feb286dfeef29f8fe4d5b397e0b testImports: [] diff --git a/glide.yaml b/glide.yaml index cdb083e69..63acdcbaa 100644 --- a/glide.yaml +++ b/glide.yaml @@ -1,56 +1,56 @@ package: github.com/tendermint/tendermint import: -- package: github.com/tendermint/go-autofile - 
version: develop -- package: github.com/tendermint/go-clist - version: develop -- package: github.com/tendermint/go-common - version: develop -- package: github.com/tendermint/go-config - version: develop -- package: github.com/tendermint/go-crypto - version: develop -- package: github.com/tendermint/go-data - version: develop -- package: github.com/tendermint/go-db - version: develop -- package: github.com/tendermint/go-events - version: develop -- package: github.com/tendermint/go-logger - version: develop -- package: github.com/tendermint/go-merkle - version: develop -- package: github.com/tendermint/go-p2p - version: develop -- package: github.com/tendermint/go-rpc - version: develop -- package: github.com/tendermint/go-wire - version: develop +- package: github.com/ebuchman/fail-test +- package: github.com/gogo/protobuf + subpackages: + - proto +- package: github.com/golang/protobuf + subpackages: + - proto +- package: github.com/gorilla/websocket +- package: github.com/pkg/errors +- package: github.com/spf13/cobra +- package: github.com/spf13/viper +- package: github.com/stretchr/testify + subpackages: + - require - package: github.com/tendermint/abci - version: develop -- package: github.com/tendermint/go-flowrate -- package: github.com/tendermint/log15 -- package: github.com/tendermint/ed25519 + version: v0.5.0 + subpackages: + - client + - example/dummy + - types +- package: github.com/tendermint/go-crypto + version: v0.2.0 +- package: github.com/tendermint/go-wire + version: v0.6.2 + subpackages: + - data +- package: github.com/tendermint/tmlibs + version: v0.2.0 + subpackages: + - autofile + - cli + - clist + - common + - db + - events + - flowrate + - log + - merkle +- package: golang.org/x/crypto + subpackages: + - nacl/box + - nacl/secretbox + - ripemd160 +- package: golang.org/x/net + subpackages: + - context +- package: google.golang.org/grpc +testImport: - package: github.com/tendermint/merkleeyes version: develop subpackages: - app -- package: 
github.com/gogo/protobuf - version: ^0.3 - subpackages: - - proto -- package: github.com/gorilla/websocket - version: ^1.1.0 -- package: github.com/spf13/cobra -- package: github.com/spf13/pflag -- package: github.com/pkg/errors - version: ^0.8.0 -- package: golang.org/x/crypto - subpackages: - - ripemd160 -testImport: -- package: github.com/stretchr/testify - version: ^1.1.4 - subpackages: - - assert - - require + - iavl + - testutil diff --git a/mempool/log.go b/mempool/log.go deleted file mode 100644 index 90eb8703f..000000000 --- a/mempool/log.go +++ /dev/null @@ -1,18 +0,0 @@ -package mempool - -import ( - "github.com/tendermint/go-logger" -) - -var log = logger.New("module", "mempool") - -/* -func init() { - log.SetHandler( - logger.LvlFilterHandler( - logger.LvlDebug, - logger.BypassHandler(), - ), - ) -} -*/ diff --git a/mempool/mempool.go b/mempool/mempool.go index e960f520f..9e53108e5 100644 --- a/mempool/mempool.go +++ b/mempool/mempool.go @@ -7,11 +7,15 @@ import ( "sync/atomic" "time" + "github.com/pkg/errors" + abci "github.com/tendermint/abci/types" - auto "github.com/tendermint/go-autofile" - "github.com/tendermint/go-clist" - . "github.com/tendermint/go-common" - cfg "github.com/tendermint/go-config" + auto "github.com/tendermint/tmlibs/autofile" + "github.com/tendermint/tmlibs/clist" + cmn "github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/log" + + cfg "github.com/tendermint/tendermint/config" "github.com/tendermint/tendermint/proxy" "github.com/tendermint/tendermint/types" ) @@ -47,7 +51,7 @@ TODO: Better handle abci client errors. 
(make it automatically handle connection const cacheSize = 100000 type Mempool struct { - config cfg.Config + config *cfg.MempoolConfig proxyMtx sync.Mutex proxyAppConn proxy.AppConnMempool @@ -64,9 +68,11 @@ type Mempool struct { // A log of mempool txs wal *auto.AutoFile + + logger log.Logger } -func NewMempool(config cfg.Config, proxyAppConn proxy.AppConnMempool) *Mempool { +func NewMempool(config *cfg.MempoolConfig, proxyAppConn proxy.AppConnMempool) *Mempool { mempool := &Mempool{ config: config, proxyAppConn: proxyAppConn, @@ -76,26 +82,29 @@ func NewMempool(config cfg.Config, proxyAppConn proxy.AppConnMempool) *Mempool { rechecking: 0, recheckCursor: nil, recheckEnd: nil, - - cache: newTxCache(cacheSize), + logger: log.NewNopLogger(), + cache: newTxCache(cacheSize), } mempool.initWAL() proxyAppConn.SetResponseCallback(mempool.resCb) return mempool } +// SetLogger allows you to set your own Logger. +func (mem *Mempool) SetLogger(l log.Logger) { + mem.logger = l +} + func (mem *Mempool) initWAL() { - walDir := mem.config.GetString("mempool_wal_dir") + walDir := mem.config.WalDir() if walDir != "" { - err := EnsureDir(walDir, 0700) + err := cmn.EnsureDir(walDir, 0700) if err != nil { - log.Error("Error ensuring Mempool wal dir", "error", err) - PanicSanity(err) + cmn.PanicSanity(errors.Wrap(err, "Error ensuring Mempool wal dir")) } af, err := auto.OpenAutoFile(walDir + "/wal") if err != nil { - log.Error("Error opening Mempool wal file", "error", err) - PanicSanity(err) + cmn.PanicSanity(errors.Wrap(err, "Error opening Mempool wal file")) } mem.wal = af } @@ -202,7 +211,7 @@ func (mem *Mempool) resCbNormal(req *abci.Request, res *abci.Response) { mem.txs.PushBack(memTx) } else { // ignore bad transaction - log.Info("Bad Transaction", "res", r) + mem.logger.Info("Bad Transaction", "res", r) // remove from cache (it might be good later) mem.cache.Remove(req.GetCheckTx().Tx) @@ -219,7 +228,7 @@ func (mem *Mempool) resCbRecheck(req *abci.Request, res 
*abci.Response) { case *abci.Response_CheckTx: memTx := mem.recheckCursor.Value.(*mempoolTx) if !bytes.Equal(req.GetCheckTx().Tx, memTx.tx) { - PanicSanity(Fmt("Unexpected tx response from proxy during recheck\n"+ + cmn.PanicSanity(cmn.Fmt("Unexpected tx response from proxy during recheck\n"+ "Expected %X, got %X", r.CheckTx.Data, memTx.tx)) } if r.CheckTx.Code == abci.CodeType_OK { @@ -240,7 +249,7 @@ func (mem *Mempool) resCbRecheck(req *abci.Request, res *abci.Response) { if mem.recheckCursor == nil { // Done! atomic.StoreInt32(&mem.rechecking, 0) - log.Info("Done rechecking txs") + mem.logger.Info("Done rechecking txs") } default: // ignore other messages @@ -269,7 +278,7 @@ func (mem *Mempool) collectTxs(maxTxs int) types.Txs { } else if maxTxs < 0 { maxTxs = mem.txs.Len() } - txs := make([]types.Tx, 0, MinInt(mem.txs.Len(), maxTxs)) + txs := make([]types.Tx, 0, cmn.MinInt(mem.txs.Len(), maxTxs)) for e := mem.txs.Front(); e != nil && len(txs) < maxTxs; e = e.Next() { memTx := e.Value.(*mempoolTx) txs = append(txs, memTx.tx) @@ -298,9 +307,8 @@ func (mem *Mempool) Update(height int, txs types.Txs) { // Recheck mempool txs if any txs were committed in the block // NOTE/XXX: in some apps a tx could be invalidated due to EndBlock, // so we really still do need to recheck, but this is for debugging - if mem.config.GetBool("mempool_recheck") && - (mem.config.GetBool("mempool_recheck_empty") || len(txs) > 0) { - log.Info("Recheck txs", "numtxs", len(goodTxs)) + if mem.config.Recheck && (mem.config.RecheckEmpty || len(txs) > 0) { + mem.logger.Info("Recheck txs", "numtxs", len(goodTxs)) mem.recheckTxs(goodTxs) // At this point, mem.txs are being rechecked. // mem.recheckCursor re-scans mem.txs and possibly removes some txs. 
diff --git a/mempool/mempool_test.go b/mempool/mempool_test.go index 9ac8fe33d..6451adb2d 100644 --- a/mempool/mempool_test.go +++ b/mempool/mempool_test.go @@ -4,21 +4,31 @@ import ( "encoding/binary" "testing" - "github.com/tendermint/tendermint/config/tendermint_test" + "github.com/tendermint/abci/example/counter" + cfg "github.com/tendermint/tendermint/config" "github.com/tendermint/tendermint/proxy" "github.com/tendermint/tendermint/types" - "github.com/tendermint/abci/example/counter" + "github.com/tendermint/tmlibs/log" ) func TestSerialReap(t *testing.T) { - config := tendermint_test.ResetConfig("mempool_mempool_test") + config := cfg.ResetTestRoot("mempool_test") app := counter.NewCounterApplication(true) app.SetOption("serial", "on") cc := proxy.NewLocalClientCreator(app) appConnMem, _ := cc.NewABCIClient() + appConnMem.SetLogger(log.TestingLogger().With("module", "abci-client", "connection", "mempool")) + if _, err := appConnMem.Start(); err != nil { + t.Fatalf("Error starting ABCI client: %v", err.Error()) + } appConnCon, _ := cc.NewABCIClient() - mempool := NewMempool(config, appConnMem) + appConnCon.SetLogger(log.TestingLogger().With("module", "abci-client", "connection", "consensus")) + if _, err := appConnCon.Start(); err != nil { + t.Fatalf("Error starting ABCI client: %v", err.Error()) + } + mempool := NewMempool(config.Mempool, appConnMem) + mempool.SetLogger(log.TestingLogger()) deliverTxsRange := func(start, end int) { // Deliver some txs. 
diff --git a/mempool/reactor.go b/mempool/reactor.go index 4531edee0..842e11538 100644 --- a/mempool/reactor.go +++ b/mempool/reactor.go @@ -7,10 +7,11 @@ import ( "time" abci "github.com/tendermint/abci/types" - "github.com/tendermint/go-clist" - cfg "github.com/tendermint/go-config" - "github.com/tendermint/go-p2p" - "github.com/tendermint/go-wire" + wire "github.com/tendermint/go-wire" + "github.com/tendermint/tmlibs/clist" + + cfg "github.com/tendermint/tendermint/config" + "github.com/tendermint/tendermint/p2p" "github.com/tendermint/tendermint/types" ) @@ -24,17 +25,17 @@ const ( // MempoolReactor handles mempool tx broadcasting amongst peers. type MempoolReactor struct { p2p.BaseReactor - config cfg.Config + config *cfg.MempoolConfig Mempool *Mempool evsw types.EventSwitch } -func NewMempoolReactor(config cfg.Config, mempool *Mempool) *MempoolReactor { +func NewMempoolReactor(config *cfg.MempoolConfig, mempool *Mempool) *MempoolReactor { memR := &MempoolReactor{ config: config, Mempool: mempool, } - memR.BaseReactor = *p2p.NewBaseReactor(log, "MempoolReactor", memR) + memR.BaseReactor = *p2p.NewBaseReactor("MempoolReactor", memR) return memR } @@ -62,24 +63,24 @@ func (memR *MempoolReactor) RemovePeer(peer *p2p.Peer, reason interface{}) { func (memR *MempoolReactor) Receive(chID byte, src *p2p.Peer, msgBytes []byte) { _, msg, err := DecodeMessage(msgBytes) if err != nil { - log.Warn("Error decoding message", "error", err) + memR.Logger.Error("Error decoding message", "error", err) return } - log.Debug("Receive", "src", src, "chId", chID, "msg", msg) + memR.Logger.Debug("Receive", "src", src, "chId", chID, "msg", msg) switch msg := msg.(type) { case *TxMessage: err := memR.Mempool.CheckTx(msg.Tx, nil) if err != nil { // Bad, seen, or conflicting tx. 
- log.Info("Could not add tx", "tx", msg.Tx) + memR.Logger.Info("Could not add tx", "tx", msg.Tx) return } else { - log.Info("Added valid tx", "tx", msg.Tx) + memR.Logger.Info("Added valid tx", "tx", msg.Tx) } // broadcasting happens from go routines per peer default: - log.Warn(fmt.Sprintf("Unknown message type %v", reflect.TypeOf(msg))) + memR.Logger.Error(fmt.Sprintf("Unknown message type %v", reflect.TypeOf(msg))) } } @@ -102,7 +103,7 @@ type Peer interface { // TODO: Handle mempool or reactor shutdown? // As is this routine may block forever if no new txs come in. func (memR *MempoolReactor) broadcastTxRoutine(peer Peer) { - if !memR.config.GetBool("mempool_broadcast") { + if !memR.config.Broadcast { return } diff --git a/node/log.go b/node/log.go deleted file mode 100644 index 36b451493..000000000 --- a/node/log.go +++ /dev/null @@ -1,7 +0,0 @@ -package node - -import ( - "github.com/tendermint/go-logger" -) - -var log = logger.New("module", "node") diff --git a/node/node.go b/node/node.go index 990779486..df052fefa 100644 --- a/node/node.go +++ b/node/node.go @@ -8,26 +8,27 @@ import ( "strings" abci "github.com/tendermint/abci/types" - cmn "github.com/tendermint/go-common" - cfg "github.com/tendermint/go-config" crypto "github.com/tendermint/go-crypto" - dbm "github.com/tendermint/go-db" - p2p "github.com/tendermint/go-p2p" - rpc "github.com/tendermint/go-rpc" - rpcserver "github.com/tendermint/go-rpc/server" wire "github.com/tendermint/go-wire" bc "github.com/tendermint/tendermint/blockchain" + cfg "github.com/tendermint/tendermint/config" "github.com/tendermint/tendermint/consensus" mempl "github.com/tendermint/tendermint/mempool" + p2p "github.com/tendermint/tendermint/p2p" "github.com/tendermint/tendermint/proxy" rpccore "github.com/tendermint/tendermint/rpc/core" grpccore "github.com/tendermint/tendermint/rpc/grpc" + rpc "github.com/tendermint/tendermint/rpc/lib" + rpcserver "github.com/tendermint/tendermint/rpc/lib/server" sm 
"github.com/tendermint/tendermint/state" "github.com/tendermint/tendermint/state/txindex" "github.com/tendermint/tendermint/state/txindex/kv" "github.com/tendermint/tendermint/state/txindex/null" "github.com/tendermint/tendermint/types" "github.com/tendermint/tendermint/version" + cmn "github.com/tendermint/tmlibs/common" + dbm "github.com/tendermint/tmlibs/db" + "github.com/tendermint/tmlibs/log" _ "net/http/pprof" ) @@ -36,7 +37,7 @@ type Node struct { cmn.BaseService // config - config cfg.Config // user config + config *cfg.Config genesisDoc *types.GenesisDoc // initial validator set privValidator *types.PrivValidator // local node's validator key @@ -57,42 +58,45 @@ type Node struct { txIndexer txindex.TxIndexer } -func NewNodeDefault(config cfg.Config) *Node { +func NewNodeDefault(config *cfg.Config, logger log.Logger) *Node { // Get PrivValidator - privValidatorFile := config.GetString("priv_validator_file") - privValidator := types.LoadOrGenPrivValidator(privValidatorFile) - return NewNode(config, privValidator, proxy.DefaultClientCreator(config)) + privValidator := types.LoadOrGenPrivValidator(config.PrivValidatorFile(), logger) + return NewNode(config, privValidator, + proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir()), logger) } -func NewNode(config cfg.Config, privValidator *types.PrivValidator, clientCreator proxy.ClientCreator) *Node { - +func NewNode(config *cfg.Config, privValidator *types.PrivValidator, clientCreator proxy.ClientCreator, logger log.Logger) *Node { // Get BlockStore - blockStoreDB := dbm.NewDB("blockstore", config.GetString("db_backend"), config.GetString("db_dir")) + blockStoreDB := dbm.NewDB("blockstore", config.DBBackend, config.DBDir()) blockStore := bc.NewBlockStore(blockStoreDB) - // Get State - stateDB := dbm.NewDB("state", config.GetString("db_backend"), config.GetString("db_dir")) - state := sm.GetState(config, stateDB) + consensusLogger := logger.With("module", "consensus") + stateLogger := 
logger.With("module", "state") - // add the chainid and number of validators to the global config - config.Set("chain_id", state.ChainID) - config.Set("num_vals", state.Validators.Size()) + // Get State + stateDB := dbm.NewDB("state", config.DBBackend, config.DBDir()) + state := sm.GetState(stateDB, config.GenesisFile()) + state.SetLogger(stateLogger) // Create the proxyApp, which manages connections (consensus, mempool, query) // and sync tendermint and the app by replaying any necessary blocks - proxyApp := proxy.NewAppConns(config, clientCreator, consensus.NewHandshaker(config, state, blockStore)) + handshaker := consensus.NewHandshaker(state, blockStore) + handshaker.SetLogger(consensusLogger) + proxyApp := proxy.NewAppConns(clientCreator, handshaker) + proxyApp.SetLogger(logger.With("module", "proxy")) if _, err := proxyApp.Start(); err != nil { cmn.Exit(cmn.Fmt("Error starting proxy app connections: %v", err)) } // reload the state (it may have been updated by the handshake) state = sm.LoadState(stateDB) + state.SetLogger(stateLogger) // Transaction indexing var txIndexer txindex.TxIndexer - switch config.GetString("tx_index") { + switch config.TxIndex { case "kv": - store := dbm.NewDB("tx_index", config.GetString("db_backend"), config.GetString("db_dir")) + store := dbm.NewDB("tx_index", config.DBBackend, config.DBDir()) txIndexer = kv.NewTxIndex(store) default: txIndexer = &null.TxIndex{} @@ -104,6 +108,7 @@ func NewNode(config cfg.Config, privValidator *types.PrivValidator, clientCreato // Make event switch eventSwitch := types.NewEventSwitch() + eventSwitch.SetLogger(logger.With("module", "types")) _, err := eventSwitch.Start() if err != nil { cmn.Exit(cmn.Fmt("Failed to start switch: %v", err)) @@ -111,7 +116,7 @@ func NewNode(config cfg.Config, privValidator *types.PrivValidator, clientCreato // Decide whether to fast-sync or not // We don't fast-sync when the only validator is us. 
- fastSync := config.GetBool("fast_sync") + fastSync := config.FastSync if state.Validators.Size() == 1 { addr, _ := state.Validators.GetByIndex(0) if bytes.Equal(privValidator.Address, addr) { @@ -119,38 +124,55 @@ func NewNode(config cfg.Config, privValidator *types.PrivValidator, clientCreato } } + // Log whether this node is a validator or an observer + if state.Validators.HasAddress(privValidator.Address) { + consensusLogger.Info("This node is a validator") + } else { + consensusLogger.Info("This node is not a validator") + } + // Make BlockchainReactor - bcReactor := bc.NewBlockchainReactor(config, state.Copy(), proxyApp.Consensus(), blockStore, fastSync) + bcReactor := bc.NewBlockchainReactor(state.Copy(), proxyApp.Consensus(), blockStore, fastSync) + bcReactor.SetLogger(logger.With("module", "blockchain")) // Make MempoolReactor - mempool := mempl.NewMempool(config, proxyApp.Mempool()) - mempoolReactor := mempl.NewMempoolReactor(config, mempool) + mempoolLogger := logger.With("module", "mempool") + mempool := mempl.NewMempool(config.Mempool, proxyApp.Mempool()) + mempool.SetLogger(mempoolLogger) + mempoolReactor := mempl.NewMempoolReactor(config.Mempool, mempool) + mempoolReactor.SetLogger(mempoolLogger) // Make ConsensusReactor - consensusState := consensus.NewConsensusState(config, state.Copy(), proxyApp.Consensus(), blockStore, mempool) + consensusState := consensus.NewConsensusState(config.Consensus, state.Copy(), proxyApp.Consensus(), blockStore, mempool) + consensusState.SetLogger(consensusLogger) if privValidator != nil { consensusState.SetPrivValidator(privValidator) } consensusReactor := consensus.NewConsensusReactor(consensusState, fastSync) + consensusReactor.SetLogger(consensusLogger) - // Make p2p network switch - sw := p2p.NewSwitch(config.GetConfig("p2p")) + p2pLogger := logger.With("module", "p2p") + + sw := p2p.NewSwitch(config.P2P) + sw.SetLogger(p2pLogger) sw.AddReactor("MEMPOOL", mempoolReactor) sw.AddReactor("BLOCKCHAIN", bcReactor) 
sw.AddReactor("CONSENSUS", consensusReactor) // Optionally, start the pex reactor var addrBook *p2p.AddrBook - if config.GetBool("pex_reactor") { - addrBook = p2p.NewAddrBook(config.GetString("addrbook_file"), config.GetBool("addrbook_strict")) + if config.P2P.PexReactor { + addrBook = p2p.NewAddrBook(config.P2P.AddrBookFile(), config.P2P.AddrBookStrict) + addrBook.SetLogger(p2pLogger.With("book", config.P2P.AddrBookFile())) pexReactor := p2p.NewPEXReactor(addrBook) + pexReactor.SetLogger(p2pLogger) sw.AddReactor("PEX", pexReactor) } // Filter peers by addr or pubkey with an ABCI query. // If the query return code is OK, add peer. // XXX: Query format subject to change - if config.GetBool("filter_peers") { + if config.FilterPeers { // NOTE: addr is ip:port sw.SetAddrFilter(func(addr net.Addr) error { resQuery, err := proxyApp.Query().QuerySync(abci.RequestQuery{Path: cmn.Fmt("/p2p/filter/addr/%s", addr.String())}) @@ -179,11 +201,11 @@ func NewNode(config cfg.Config, privValidator *types.PrivValidator, clientCreato SetEventSwitch(eventSwitch, bcReactor, mempoolReactor, consensusReactor) // run the profile server - profileHost := config.GetString("prof_laddr") + profileHost := config.ProfListenAddress if profileHost != "" { go func() { - log.Warn("Profile server", "error", http.ListenAndServe(profileHost, nil)) + logger.Error("Profile server", "error", http.ListenAndServe(profileHost, nil)) }() } @@ -205,15 +227,14 @@ func NewNode(config cfg.Config, privValidator *types.PrivValidator, clientCreato proxyApp: proxyApp, txIndexer: txIndexer, } - node.BaseService = *cmn.NewBaseService(log, "Node", node) + node.BaseService = *cmn.NewBaseService(logger, "Node", node) return node } func (n *Node) OnStart() error { - // Create & add listener - protocol, address := ProtocolAndAddress(n.config.GetString("node_laddr")) - l := p2p.NewDefaultListener(protocol, address, n.config.GetBool("skip_upnp")) + protocol, address := ProtocolAndAddress(n.config.P2P.ListenAddress) + l := 
p2p.NewDefaultListener(protocol, address, n.config.P2P.SkipUPNP, n.Logger.With("module", "p2p")) n.sw.AddListener(l) // Start the switch @@ -225,16 +246,16 @@ func (n *Node) OnStart() error { } // If seeds exist, add them to the address book and dial out - if n.config.GetString("seeds") != "" { + if n.config.P2P.Seeds != "" { // dial out - seeds := strings.Split(n.config.GetString("seeds"), ",") + seeds := strings.Split(n.config.P2P.Seeds, ",") if err := n.DialSeeds(seeds); err != nil { return err } } // Run the RPC server - if n.config.GetString("rpc_laddr") != "" { + if n.config.RPCListenAddress != "" { listeners, err := n.startRPC() if err != nil { return err @@ -248,14 +269,14 @@ func (n *Node) OnStart() error { func (n *Node) OnStop() { n.BaseService.OnStop() - log.Notice("Stopping Node") + n.Logger.Info("Stopping Node") // TODO: gracefully disconnect from peers. n.sw.Stop() for _, l := range n.rpcListeners { - log.Info("Closing rpc listener", "listener", l) + n.Logger.Info("Closing rpc listener", "listener", l) if err := l.Close(); err != nil { - log.Error("Error closing listener", "listener", l, "error", err) + n.Logger.Error("Error closing listener", "listener", l, "error", err) } } } @@ -284,7 +305,6 @@ func (n *Node) AddListener(l p2p.Listener) { // ConfigureRPC sets all variables in rpccore so they will serve // rpc calls from this node func (n *Node) ConfigureRPC() { - rpccore.SetConfig(n.config) rpccore.SetEventSwitch(n.evsw) rpccore.SetBlockStore(n.blockStore) rpccore.SetConsensusState(n.consensusState) @@ -295,20 +315,23 @@ func (n *Node) ConfigureRPC() { rpccore.SetAddrBook(n.addrBook) rpccore.SetProxyAppQuery(n.proxyApp.Query()) rpccore.SetTxIndexer(n.txIndexer) + rpccore.SetLogger(n.Logger.With("module", "rpc")) } func (n *Node) startRPC() ([]net.Listener, error) { n.ConfigureRPC() - listenAddrs := strings.Split(n.config.GetString("rpc_laddr"), ",") + listenAddrs := strings.Split(n.config.RPCListenAddress, ",") // we may expose the rpc over both a 
unix and tcp socket listeners := make([]net.Listener, len(listenAddrs)) for i, listenAddr := range listenAddrs { mux := http.NewServeMux() wm := rpcserver.NewWebsocketManager(rpccore.Routes, n.evsw) + rpcLogger := n.Logger.With("module", "rpc-server") + wm.SetLogger(rpcLogger) mux.HandleFunc("/websocket", wm.WebsocketHandler) - rpcserver.RegisterRPCFuncs(mux, rpccore.Routes) - listener, err := rpcserver.StartHTTPServer(listenAddr, mux) + rpcserver.RegisterRPCFuncs(mux, rpccore.Routes, rpcLogger) + listener, err := rpcserver.StartHTTPServer(listenAddr, mux, rpcLogger) if err != nil { return nil, err } @@ -316,7 +339,7 @@ func (n *Node) startRPC() ([]net.Listener, error) { } // we expose a simplified api over grpc for convenience to app devs - grpcListenAddr := n.config.GetString("grpc_laddr") + grpcListenAddr := n.config.GRPCListenAddress if grpcListenAddr != "" { listener, err := grpccore.StartGRPCServer(grpcListenAddr) if err != nil { @@ -372,9 +395,9 @@ func (n *Node) makeNodeInfo() *p2p.NodeInfo { } nodeInfo := &p2p.NodeInfo{ - PubKey: n.privKey.PubKey().(crypto.PubKeyEd25519), - Moniker: n.config.GetString("moniker"), - Network: n.config.GetString("chain_id"), + PubKey: n.privKey.PubKey().Unwrap().(crypto.PubKeyEd25519), + Moniker: n.config.Moniker, + Network: n.consensusState.GetState().ChainID, Version: version.Version, Other: []string{ cmn.Fmt("wire_version=%v", wire.Version), @@ -386,9 +409,10 @@ func (n *Node) makeNodeInfo() *p2p.NodeInfo { } // include git hash in the nodeInfo if available - if rev, err := cmn.ReadFile(n.config.GetString("revision_file")); err == nil { + // TODO: use ld-flags + /*if rev, err := cmn.ReadFile(n.config.GetString("revision_file")); err == nil { nodeInfo.Other = append(nodeInfo.Other, cmn.Fmt("revision=%v", string(rev))) - } + }*/ if !n.sw.IsListening() { return nodeInfo @@ -397,7 +421,7 @@ func (n *Node) makeNodeInfo() *p2p.NodeInfo { p2pListener := n.sw.Listeners()[0] p2pHost := p2pListener.ExternalAddress().IP.String() 
p2pPort := p2pListener.ExternalAddress().Port - rpcListenAddr := n.config.GetString("rpc_laddr") + rpcListenAddr := n.config.RPCListenAddress // We assume that the rpcListener has the same ExternalAddress. // This is probably true because both P2P and RPC listeners use UPnP, @@ -426,3 +450,5 @@ func ProtocolAndAddress(listenAddr string) (string, string) { } return protocol, address } + +//------------------------------------------------------------------------------ diff --git a/node/node_test.go b/node/node_test.go index 2ab8e8dc6..3c751af6e 100644 --- a/node/node_test.go +++ b/node/node_test.go @@ -4,16 +4,17 @@ import ( "testing" "time" - "github.com/tendermint/tendermint/config/tendermint_test" + cfg "github.com/tendermint/tendermint/config" + "github.com/tendermint/tmlibs/log" ) func TestNodeStartStop(t *testing.T) { - config := tendermint_test.ResetConfig("node_node_test") + config := cfg.ResetTestRoot("node_node_test") // Create & start node - n := NewNodeDefault(config) + n := NewNodeDefault(config, log.TestingLogger()) n.Start() - log.Notice("Started node", "nodeInfo", n.sw.NodeInfo()) + t.Logf("Started node %v", n.sw.NodeInfo()) // Wait a bit to initialize // TODO remove time.Sleep(), make asynchronous. diff --git a/p2p/CHANGELOG.md b/p2p/CHANGELOG.md new file mode 100644 index 000000000..cae2f4c9f --- /dev/null +++ b/p2p/CHANGELOG.md @@ -0,0 +1,78 @@ +# Changelog + +## 0.5.0 (April 21, 2017) + +BREAKING CHANGES: + +- Remove or unexport methods from FuzzedConnection: Active, Mode, ProbDropRW, ProbDropConn, ProbSleep, MaxDelayMilliseconds, Fuzz +- switch.AddPeerWithConnection is unexported and replaced by switch.AddPeer +- switch.DialPeerWithAddress takes a bool, setting the peer as persistent or not + +FEATURES: + +- Persistent peers: any peer considered a "seed" will be reconnected to when the connection is dropped + + +IMPROVEMENTS: + +- Many more tests and comments +- Refactor configurations for less dependence on go-config. 
Introduces new structs PeerConfig, MConnConfig, FuzzConnConfig +- New methods on peer: CloseConn, HandshakeTimeout, IsPersistent, Addr, PubKey +- NewNetAddress supports a testing mode where the address defaults to 0.0.0.0:0 + + +## 0.4.0 (March 6, 2017) + +BREAKING CHANGES: + +- DialSeeds now takes an AddrBook and returns an error: `DialSeeds(*AddrBook, []string) error` +- NewNetAddressString now returns an error: `NewNetAddressString(string) (*NetAddress, error)` + +FEATURES: + +- `NewNetAddressStrings([]string) ([]*NetAddress, error)` +- `AddrBook.Save()` + +IMPROVEMENTS: + +- PexReactor responsible for starting and stopping the AddrBook + +BUG FIXES: + +- DialSeeds returns an error instead of panicking on bad addresses + +## 0.3.5 (January 12, 2017) + +FEATURES + +- Toggle strict routability in the AddrBook + +BUG FIXES + +- Close filtered out connections +- Fixes for MakeConnectedSwitches and Connect2Switches + +## 0.3.4 (August 10, 2016) + +FEATURES: + +- Optionally filter connections by address or public key + +## 0.3.3 (May 12, 2016) + +FEATURES: + +- FuzzConn + +## 0.3.2 (March 12, 2016) + +IMPROVEMENTS: + +- Memory optimizations + +## 0.3.1 () + +FEATURES: + +- Configurable parameters + diff --git a/p2p/Dockerfile b/p2p/Dockerfile new file mode 100644 index 000000000..6c71b2f81 --- /dev/null +++ b/p2p/Dockerfile @@ -0,0 +1,13 @@ +FROM golang:latest + +RUN curl https://glide.sh/get | sh + +RUN mkdir -p /go/src/github.com/tendermint/tendermint/p2p +WORKDIR /go/src/github.com/tendermint/tendermint/p2p + +COPY glide.yaml /go/src/github.com/tendermint/tendermint/p2p/ +COPY glide.lock /go/src/github.com/tendermint/tendermint/p2p/ + +RUN glide install + +COPY . 
/go/src/github.com/tendermint/tendermint/p2p diff --git a/p2p/README.md b/p2p/README.md new file mode 100644 index 000000000..bf0a5c4d0 --- /dev/null +++ b/p2p/README.md @@ -0,0 +1,79 @@ +# `tendermint/tendermint/p2p` + +[![CircleCI](https://circleci.com/gh/tendermint/tendermint/p2p.svg?style=svg)](https://circleci.com/gh/tendermint/tendermint/p2p) + +`tendermint/tendermint/p2p` provides an abstraction around peer-to-peer communication.
+
+## Peer/MConnection/Channel
+
+Each peer has one `MConnection` (multiplex connection) instance.
+
+__multiplex__ *noun* a system or signal involving simultaneous transmission of
+several messages along a single channel of communication.
+
+Each `MConnection` handles message transmission on multiple abstract communication
+`Channel`s. Each channel has a globally unique byte id.
+The byte id and the relative priorities of each `Channel` are configured upon
+initialization of the connection.
+
+There are two methods for sending messages:
+```go
+func (m MConnection) Send(chID byte, msg interface{}) bool {}
+func (m MConnection) TrySend(chID byte, msg interface{}) bool {}
+```
+
+`Send(chID, msg)` is a blocking call that waits until `msg` is successfully queued
+for the channel with the given id byte `chID`. The message `msg` is serialized
+using the `tendermint/wire` submodule's `WriteBinary()` reflection routine.
+
+`TrySend(chID, msg)` is a nonblocking call that returns false if the channel's
+queue is full.
+
+`Send()` and `TrySend()` are also exposed for each `Peer`.
+
+## Switch/Reactor
+
+The `Switch` handles peer connections and exposes an API to receive incoming messages
+on `Reactors`. Each `Reactor` is responsible for handling incoming messages of one
+or more `Channels`. So while sending outgoing messages is typically performed on the peer,
+incoming messages are received on the reactor.
+
+```go
+// Declare a MyReactor reactor that handles messages on MyChannelID.
+type MyReactor struct{}
+
+func (reactor MyReactor) GetChannels() []*ChannelDescriptor {
+    return []*ChannelDescriptor{&ChannelDescriptor{ID: MyChannelID, Priority: 1}}
+}
+
+func (reactor MyReactor) Receive(chID byte, peer *Peer, msgBytes []byte) {
+    r, n, err := bytes.NewBuffer(msgBytes), new(int64), new(error)
+    msgString := ReadString(r, n, err)
+    fmt.Println(msgString)
+}
+
+// Other Reactor methods omitted for brevity
+...
+
+sw := NewSwitch([]Reactor{MyReactor{}})
+
+...
+
+// Send a random message to all outbound connections
+for _, peer := range sw.Peers().List() {
+    if peer.IsOutbound() {
+        peer.Send(MyChannelID, "Here's a random message")
+    }
+}
+```
+
+### PexReactor/AddrBook
+
+A `PEXReactor` reactor implementation is provided to automate peer discovery.
+
+```go
+book := p2p.NewAddrBook(addrBookFilePath)
+pexReactor := p2p.NewPEXReactor(book)
+...
+sw := NewSwitch([]Reactor{pexReactor, myReactor, ...})
+```
diff --git a/p2p/addrbook.go b/p2p/addrbook.go
new file mode 100644
index 000000000..1df0817ee
--- /dev/null
+++ b/p2p/addrbook.go
@@ -0,0 +1,841 @@
+// Modified for Tendermint
+// Originally Copyright (c) 2013-2014 Conformal Systems LLC.
+// https://github.com/conformal/btcd/blob/master/LICENSE
+
+package p2p
+
+import (
+	"encoding/binary"
+	"encoding/json"
+	"math"
+	"math/rand"
+	"net"
+	"os"
+	"sync"
+	"time"
+
+	crypto "github.com/tendermint/go-crypto"
+	cmn "github.com/tendermint/tmlibs/common"
+)
+
+const (
+	// addresses under which the address manager will claim to need more addresses.
+	needAddressThreshold = 1000
+
+	// interval used to dump the address cache to disk for future use.
+	dumpAddressInterval = time.Minute * 2
+
+	// max addresses in each old address bucket.
+	oldBucketSize = 64
+
+	// buckets we split old addresses over.
+	oldBucketCount = 64
+
+	// max addresses in each new address bucket.
+	newBucketSize = 64
+
+	// buckets that we spread new addresses over.
+	newBucketCount = 256
+
+	// old buckets over which an address group will be spread.
+	oldBucketsPerGroup = 4
+
+	// new buckets over which a source address group will be spread.
+	newBucketsPerGroup = 32
+
+	// buckets a frequently seen new address may end up in.
+	maxNewBucketsPerAddress = 4
+
+	// days before which we assume an address has vanished
+	// if we have not seen it announced in that long.
+	numMissingDays = 30
+
+	// tries without a single success before we assume an address is bad.
+ numRetries = 3 + + // max failures we will accept without a success before considering an address bad. + maxFailures = 10 + + // days since the last success before we will consider evicting an address. + minBadDays = 7 + + // % of total addresses known returned by GetSelection. + getSelectionPercent = 23 + + // min addresses that must be returned by GetSelection. Useful for bootstrapping. + minGetSelection = 32 + + // max addresses returned by GetSelection + // NOTE: this must match "maxPexMessageSize" + maxGetSelection = 250 + + // current version of the on-disk format. + serializationVersion = 1 +) + +const ( + bucketTypeNew = 0x01 + bucketTypeOld = 0x02 +) + +// AddrBook - concurrency safe peer address manager. +type AddrBook struct { + cmn.BaseService + + mtx sync.Mutex + filePath string + routabilityStrict bool + rand *rand.Rand + key string + ourAddrs map[string]*NetAddress + addrLookup map[string]*knownAddress // new & old + addrNew []map[string]*knownAddress + addrOld []map[string]*knownAddress + wg sync.WaitGroup + nOld int + nNew int +} + +// NewAddrBook creates a new address book. +// Use Start to begin processing asynchronous address updates. 
+func NewAddrBook(filePath string, routabilityStrict bool) *AddrBook { + am := &AddrBook{ + rand: rand.New(rand.NewSource(time.Now().UnixNano())), + ourAddrs: make(map[string]*NetAddress), + addrLookup: make(map[string]*knownAddress), + filePath: filePath, + routabilityStrict: routabilityStrict, + } + am.init() + am.BaseService = *cmn.NewBaseService(nil, "AddrBook", am) + return am +} + +// When modifying this, don't forget to update loadFromFile() +func (a *AddrBook) init() { + a.key = crypto.CRandHex(24) // 24/2 * 8 = 96 bits + // New addr buckets + a.addrNew = make([]map[string]*knownAddress, newBucketCount) + for i := range a.addrNew { + a.addrNew[i] = make(map[string]*knownAddress) + } + // Old addr buckets + a.addrOld = make([]map[string]*knownAddress, oldBucketCount) + for i := range a.addrOld { + a.addrOld[i] = make(map[string]*knownAddress) + } +} + +// OnStart implements Service. +func (a *AddrBook) OnStart() error { + a.BaseService.OnStart() + a.loadFromFile(a.filePath) + a.wg.Add(1) + go a.saveRoutine() + return nil +} + +// OnStop implements Service. 
+func (a *AddrBook) OnStop() { + a.BaseService.OnStop() +} + +func (a *AddrBook) Wait() { + a.wg.Wait() +} + +func (a *AddrBook) AddOurAddress(addr *NetAddress) { + a.mtx.Lock() + defer a.mtx.Unlock() + a.Logger.Info("Add our address to book", "addr", addr) + a.ourAddrs[addr.String()] = addr +} + +func (a *AddrBook) OurAddresses() []*NetAddress { + addrs := []*NetAddress{} + for _, addr := range a.ourAddrs { + addrs = append(addrs, addr) + } + return addrs +} + +// NOTE: addr must not be nil +func (a *AddrBook) AddAddress(addr *NetAddress, src *NetAddress) { + a.mtx.Lock() + defer a.mtx.Unlock() + a.Logger.Info("Add address to book", "addr", addr, "src", src) + a.addAddress(addr, src) +} + +func (a *AddrBook) NeedMoreAddrs() bool { + return a.Size() < needAddressThreshold +} + +func (a *AddrBook) Size() int { + a.mtx.Lock() + defer a.mtx.Unlock() + return a.size() +} + +func (a *AddrBook) size() int { + return a.nNew + a.nOld +} + +// Pick an address to connect to with new/old bias. +func (a *AddrBook) PickAddress(newBias int) *NetAddress { + a.mtx.Lock() + defer a.mtx.Unlock() + + if a.size() == 0 { + return nil + } + if newBias > 100 { + newBias = 100 + } + if newBias < 0 { + newBias = 0 + } + + // Bias between new and old addresses. + oldCorrelation := math.Sqrt(float64(a.nOld)) * (100.0 - float64(newBias)) + newCorrelation := math.Sqrt(float64(a.nNew)) * float64(newBias) + + if (newCorrelation+oldCorrelation)*a.rand.Float64() < oldCorrelation { + // pick random Old bucket. + var bucket map[string]*knownAddress = nil + for len(bucket) == 0 { + bucket = a.addrOld[a.rand.Intn(len(a.addrOld))] + } + // pick a random ka from bucket. + randIndex := a.rand.Intn(len(bucket)) + for _, ka := range bucket { + if randIndex == 0 { + return ka.Addr + } + randIndex-- + } + cmn.PanicSanity("Should not happen") + } else { + // pick random New bucket. 
+ var bucket map[string]*knownAddress = nil + for len(bucket) == 0 { + bucket = a.addrNew[a.rand.Intn(len(a.addrNew))] + } + // pick a random ka from bucket. + randIndex := a.rand.Intn(len(bucket)) + for _, ka := range bucket { + if randIndex == 0 { + return ka.Addr + } + randIndex-- + } + cmn.PanicSanity("Should not happen") + } + return nil +} + +func (a *AddrBook) MarkGood(addr *NetAddress) { + a.mtx.Lock() + defer a.mtx.Unlock() + ka := a.addrLookup[addr.String()] + if ka == nil { + return + } + ka.markGood() + if ka.isNew() { + a.moveToOld(ka) + } +} + +func (a *AddrBook) MarkAttempt(addr *NetAddress) { + a.mtx.Lock() + defer a.mtx.Unlock() + ka := a.addrLookup[addr.String()] + if ka == nil { + return + } + ka.markAttempt() +} + +// MarkBad currently just ejects the address. In the future, consider +// blacklisting. +func (a *AddrBook) MarkBad(addr *NetAddress) { + a.RemoveAddress(addr) +} + +// RemoveAddress removes the address from the book. +func (a *AddrBook) RemoveAddress(addr *NetAddress) { + a.mtx.Lock() + defer a.mtx.Unlock() + ka := a.addrLookup[addr.String()] + if ka == nil { + return + } + a.Logger.Info("Remove address from book", "addr", addr) + a.removeFromAllBuckets(ka) +} + +/* Peer exchange */ + +// GetSelection randomly selects some addresses (old & new). Suitable for peer-exchange protocols. +func (a *AddrBook) GetSelection() []*NetAddress { + a.mtx.Lock() + defer a.mtx.Unlock() + + if a.size() == 0 { + return nil + } + + allAddr := make([]*NetAddress, a.size()) + i := 0 + for _, v := range a.addrLookup { + allAddr[i] = v.Addr + i++ + } + + numAddresses := cmn.MaxInt( + cmn.MinInt(minGetSelection, len(allAddr)), + len(allAddr)*getSelectionPercent/100) + numAddresses = cmn.MinInt(maxGetSelection, numAddresses) + + // Fisher-Yates shuffle the array. We only need to do the first + // `numAddresses' since we are throwing the rest. 
+ for i := 0; i < numAddresses; i++ { + // pick a number between current index and the end + j := rand.Intn(len(allAddr)-i) + i + allAddr[i], allAddr[j] = allAddr[j], allAddr[i] + } + + // slice off the limit we are willing to share. + return allAddr[:numAddresses] +} + +/* Loading & Saving */ + +type addrBookJSON struct { + Key string + Addrs []*knownAddress +} + +func (a *AddrBook) saveToFile(filePath string) { + a.Logger.Info("Saving AddrBook to file", "size", a.Size()) + + a.mtx.Lock() + defer a.mtx.Unlock() + // Compile Addrs + addrs := []*knownAddress{} + for _, ka := range a.addrLookup { + addrs = append(addrs, ka) + } + + aJSON := &addrBookJSON{ + Key: a.key, + Addrs: addrs, + } + + jsonBytes, err := json.MarshalIndent(aJSON, "", "\t") + if err != nil { + a.Logger.Error("Failed to save AddrBook to file", "err", err) + return + } + err = cmn.WriteFileAtomic(filePath, jsonBytes, 0644) + if err != nil { + a.Logger.Error("Failed to save AddrBook to file", "file", filePath, "error", err) + } +} + +// Returns false if file does not exist. +// cmn.Panics if file is corrupt. +func (a *AddrBook) loadFromFile(filePath string) bool { + // If doesn't exist, do nothing. + _, err := os.Stat(filePath) + if os.IsNotExist(err) { + return false + } + + // Load addrBookJSON{} + r, err := os.Open(filePath) + if err != nil { + cmn.PanicCrisis(cmn.Fmt("Error opening file %s: %v", filePath, err)) + } + defer r.Close() + aJSON := &addrBookJSON{} + dec := json.NewDecoder(r) + err = dec.Decode(aJSON) + if err != nil { + cmn.PanicCrisis(cmn.Fmt("Error reading file %s: %v", filePath, err)) + } + + // Restore all the fields... 
+ // Restore the key + a.key = aJSON.Key + // Restore .addrNew & .addrOld + for _, ka := range aJSON.Addrs { + for _, bucketIndex := range ka.Buckets { + bucket := a.getBucket(ka.BucketType, bucketIndex) + bucket[ka.Addr.String()] = ka + } + a.addrLookup[ka.Addr.String()] = ka + if ka.BucketType == bucketTypeNew { + a.nNew++ + } else { + a.nOld++ + } + } + return true +} + +// Save saves the book. +func (a *AddrBook) Save() { + a.Logger.Info("Saving AddrBook to file", "size", a.Size()) + a.saveToFile(a.filePath) +} + +/* Private methods */ + +func (a *AddrBook) saveRoutine() { + dumpAddressTicker := time.NewTicker(dumpAddressInterval) +out: + for { + select { + case <-dumpAddressTicker.C: + a.saveToFile(a.filePath) + case <-a.Quit: + break out + } + } + dumpAddressTicker.Stop() + a.saveToFile(a.filePath) + a.wg.Done() + a.Logger.Info("Address handler done") +} + +func (a *AddrBook) getBucket(bucketType byte, bucketIdx int) map[string]*knownAddress { + switch bucketType { + case bucketTypeNew: + return a.addrNew[bucketIdx] + case bucketTypeOld: + return a.addrOld[bucketIdx] + default: + cmn.PanicSanity("Should not happen") + return nil + } +} + +// Adds ka to new bucket. Returns false if it couldn't do it cuz buckets full. +// NOTE: currently it always returns true. +func (a *AddrBook) addToNewBucket(ka *knownAddress, bucketIdx int) bool { + // Sanity check + if ka.isOld() { + a.Logger.Error(cmn.Fmt("Cannot add address already in old bucket to a new bucket: %v", ka)) + return false + } + + addrStr := ka.Addr.String() + bucket := a.getBucket(bucketTypeNew, bucketIdx) + + // Already exists? + if _, ok := bucket[addrStr]; ok { + return true + } + + // Enforce max addresses. + if len(bucket) > newBucketSize { + a.Logger.Info("new bucket is full, expiring old ") + a.expireNew(bucketIdx) + } + + // Add to bucket. 
+	bucket[addrStr] = ka
+	if ka.addBucketRef(bucketIdx) == 1 {
+		a.nNew++
+	}
+
+	// Ensure in addrLookup
+	a.addrLookup[addrStr] = ka
+
+	return true
+}
+
+// Adds ka to old bucket. Returns false if it couldn't do it because the bucket is full.
+func (a *AddrBook) addToOldBucket(ka *knownAddress, bucketIdx int) bool {
+	// Sanity check
+	if ka.isNew() {
+		a.Logger.Error(cmn.Fmt("Cannot add new address to old bucket: %v", ka))
+		return false
+	}
+	if len(ka.Buckets) != 0 {
+		a.Logger.Error(cmn.Fmt("Cannot add already old address to another old bucket: %v", ka))
+		return false
+	}
+
+	addrStr := ka.Addr.String()
+	bucket := a.getBucket(bucketTypeOld, bucketIdx)
+
+	// Already exists?
+	if _, ok := bucket[addrStr]; ok {
+		return true
+	}
+
+	// Enforce max addresses.
+	if len(bucket) > oldBucketSize {
+		return false
+	}
+
+	// Add to bucket.
+	bucket[addrStr] = ka
+	if ka.addBucketRef(bucketIdx) == 1 {
+		a.nOld++
+	}
+
+	// Ensure in addrLookup
+	a.addrLookup[addrStr] = ka
+
+	return true
+}
+
+func (a *AddrBook) removeFromBucket(ka *knownAddress, bucketType byte, bucketIdx int) {
+	if ka.BucketType != bucketType {
+		a.Logger.Error(cmn.Fmt("Bucket type mismatch: %v", ka))
+		return
+	}
+	bucket := a.getBucket(bucketType, bucketIdx)
+	delete(bucket, ka.Addr.String())
+	if ka.removeBucketRef(bucketIdx) == 0 {
+		if bucketType == bucketTypeNew {
+			a.nNew--
+		} else {
+			a.nOld--
+		}
+		delete(a.addrLookup, ka.Addr.String())
+	}
+}
+
+func (a *AddrBook) removeFromAllBuckets(ka *knownAddress) {
+	for _, bucketIdx := range ka.Buckets {
+		bucket := a.getBucket(ka.BucketType, bucketIdx)
+		delete(bucket, ka.Addr.String())
+	}
+	ka.Buckets = nil
+	if ka.BucketType == bucketTypeNew {
+		a.nNew--
+	} else {
+		a.nOld--
+	}
+	delete(a.addrLookup, ka.Addr.String())
+}
+
+func (a *AddrBook) pickOldest(bucketType byte, bucketIdx int) *knownAddress {
+	bucket := a.getBucket(bucketType, bucketIdx)
+	var oldest *knownAddress
+	for _, ka := range bucket {
+		if oldest == nil ||
ka.LastAttempt.Before(oldest.LastAttempt) { + oldest = ka + } + } + return oldest +} + +func (a *AddrBook) addAddress(addr, src *NetAddress) { + if a.routabilityStrict && !addr.Routable() { + a.Logger.Error(cmn.Fmt("Cannot add non-routable address %v", addr)) + return + } + if _, ok := a.ourAddrs[addr.String()]; ok { + // Ignore our own listener address. + return + } + + ka := a.addrLookup[addr.String()] + + if ka != nil { + // Already old. + if ka.isOld() { + return + } + // Already in max new buckets. + if len(ka.Buckets) == maxNewBucketsPerAddress { + return + } + // The more entries we have, the less likely we are to add more. + factor := int32(2 * len(ka.Buckets)) + if a.rand.Int31n(factor) != 0 { + return + } + } else { + ka = newKnownAddress(addr, src) + } + + bucket := a.calcNewBucket(addr, src) + a.addToNewBucket(ka, bucket) + + a.Logger.Info("Added new address", "address", addr, "total", a.size()) +} + +// Make space in the new buckets by expiring the really bad entries. +// If no bad entries are available we remove the oldest. +func (a *AddrBook) expireNew(bucketIdx int) { + for addrStr, ka := range a.addrNew[bucketIdx] { + // If an entry is bad, throw it away + if ka.isBad() { + a.Logger.Info(cmn.Fmt("expiring bad address %v", addrStr)) + a.removeFromBucket(ka, bucketTypeNew, bucketIdx) + return + } + } + + // If we haven't thrown out a bad entry, throw out the oldest entry + oldest := a.pickOldest(bucketTypeNew, bucketIdx) + a.removeFromBucket(oldest, bucketTypeNew, bucketIdx) +} + +// Promotes an address from new to old. +// TODO: Move to old probabilistically. +// The better a node is, the less likely it should be evicted from an old bucket. 
+func (a *AddrBook) moveToOld(ka *knownAddress) { + // Sanity check + if ka.isOld() { + a.Logger.Error(cmn.Fmt("Cannot promote address that is already old %v", ka)) + return + } + if len(ka.Buckets) == 0 { + a.Logger.Error(cmn.Fmt("Cannot promote address that isn't in any new buckets %v", ka)) + return + } + + // Remember one of the buckets in which ka is in. + freedBucket := ka.Buckets[0] + // Remove from all (new) buckets. + a.removeFromAllBuckets(ka) + // It's officially old now. + ka.BucketType = bucketTypeOld + + // Try to add it to its oldBucket destination. + oldBucketIdx := a.calcOldBucket(ka.Addr) + added := a.addToOldBucket(ka, oldBucketIdx) + if !added { + // No room, must evict something + oldest := a.pickOldest(bucketTypeOld, oldBucketIdx) + a.removeFromBucket(oldest, bucketTypeOld, oldBucketIdx) + // Find new bucket to put oldest in + newBucketIdx := a.calcNewBucket(oldest.Addr, oldest.Src) + added := a.addToNewBucket(oldest, newBucketIdx) + // No space in newBucket either, just put it in freedBucket from above. + if !added { + added := a.addToNewBucket(oldest, freedBucket) + if !added { + a.Logger.Error(cmn.Fmt("Could not migrate oldest %v to freedBucket %v", oldest, freedBucket)) + } + } + // Finally, add to bucket again. + added = a.addToOldBucket(ka, oldBucketIdx) + if !added { + a.Logger.Error(cmn.Fmt("Could not re-add ka %v to oldBucketIdx %v", ka, oldBucketIdx)) + } + } +} + +// doublesha256( key + sourcegroup + +// int64(doublesha256(key + group + sourcegroup))%bucket_per_group ) % num_new_buckets +func (a *AddrBook) calcNewBucket(addr, src *NetAddress) int { + data1 := []byte{} + data1 = append(data1, []byte(a.key)...) + data1 = append(data1, []byte(a.groupKey(addr))...) + data1 = append(data1, []byte(a.groupKey(src))...) 
+	hash1 := doubleSha256(data1)
+	hash64 := binary.BigEndian.Uint64(hash1)
+	hash64 %= newBucketsPerGroup
+	var hashbuf [8]byte
+	binary.BigEndian.PutUint64(hashbuf[:], hash64)
+	data2 := []byte{}
+	data2 = append(data2, []byte(a.key)...)
+	data2 = append(data2, a.groupKey(src)...)
+	data2 = append(data2, hashbuf[:]...)
+
+	hash2 := doubleSha256(data2)
+	return int(binary.BigEndian.Uint64(hash2) % newBucketCount)
+}
+
+// doublesha256(  key + group +
+//               int64(doublesha256(key + addr))%buckets_per_group  ) % num_old_buckets
+func (a *AddrBook) calcOldBucket(addr *NetAddress) int {
+	data1 := []byte{}
+	data1 = append(data1, []byte(a.key)...)
+	data1 = append(data1, []byte(addr.String())...)
+	hash1 := doubleSha256(data1)
+	hash64 := binary.BigEndian.Uint64(hash1)
+	hash64 %= oldBucketsPerGroup
+	var hashbuf [8]byte
+	binary.BigEndian.PutUint64(hashbuf[:], hash64)
+	data2 := []byte{}
+	data2 = append(data2, []byte(a.key)...)
+	data2 = append(data2, a.groupKey(addr)...)
+	data2 = append(data2, hashbuf[:]...)
+
+	hash2 := doubleSha256(data2)
+	return int(binary.BigEndian.Uint64(hash2) % oldBucketCount)
+}
+
+// Return a string representing the network group of this address.
+// This is the /16 for IPv4, the /32 (/36 for he.net) for IPv6, the string
+// "local" for a local address and the string "unroutable" for an
+// unroutable address.
+func (a *AddrBook) groupKey(na *NetAddress) string { + if a.routabilityStrict && na.Local() { + return "local" + } + if a.routabilityStrict && !na.Routable() { + return "unroutable" + } + + if ipv4 := na.IP.To4(); ipv4 != nil { + return (&net.IPNet{IP: na.IP, Mask: net.CIDRMask(16, 32)}).String() + } + if na.RFC6145() || na.RFC6052() { + // last four bytes are the ip address + ip := net.IP(na.IP[12:16]) + return (&net.IPNet{IP: ip, Mask: net.CIDRMask(16, 32)}).String() + } + + if na.RFC3964() { + ip := net.IP(na.IP[2:7]) + return (&net.IPNet{IP: ip, Mask: net.CIDRMask(16, 32)}).String() + + } + if na.RFC4380() { + // teredo tunnels have the last 4 bytes as the v4 address XOR + // 0xff. + ip := net.IP(make([]byte, 4)) + for i, byte := range na.IP[12:16] { + ip[i] = byte ^ 0xff + } + return (&net.IPNet{IP: ip, Mask: net.CIDRMask(16, 32)}).String() + } + + // OK, so now we know ourselves to be a IPv6 address. + // bitcoind uses /32 for everything, except for Hurricane Electric's + // (he.net) IP range, which it uses /36 for. + bits := 32 + heNet := &net.IPNet{IP: net.ParseIP("2001:470::"), + Mask: net.CIDRMask(32, 128)} + if heNet.Contains(na.IP) { + bits = 36 + } + + return (&net.IPNet{IP: na.IP, Mask: net.CIDRMask(bits, 128)}).String() +} + +//----------------------------------------------------------------------------- + +/* + knownAddress + + tracks information about a known network address that is used + to determine how viable an address is. 
+*/ +type knownAddress struct { + Addr *NetAddress + Src *NetAddress + Attempts int32 + LastAttempt time.Time + LastSuccess time.Time + BucketType byte + Buckets []int +} + +func newKnownAddress(addr *NetAddress, src *NetAddress) *knownAddress { + return &knownAddress{ + Addr: addr, + Src: src, + Attempts: 0, + LastAttempt: time.Now(), + BucketType: bucketTypeNew, + Buckets: nil, + } +} + +func (ka *knownAddress) isOld() bool { + return ka.BucketType == bucketTypeOld +} + +func (ka *knownAddress) isNew() bool { + return ka.BucketType == bucketTypeNew +} + +func (ka *knownAddress) markAttempt() { + now := time.Now() + ka.LastAttempt = now + ka.Attempts += 1 +} + +func (ka *knownAddress) markGood() { + now := time.Now() + ka.LastAttempt = now + ka.Attempts = 0 + ka.LastSuccess = now +} + +func (ka *knownAddress) addBucketRef(bucketIdx int) int { + for _, bucket := range ka.Buckets { + if bucket == bucketIdx { + // TODO refactor to return error? + // log.Warn(Fmt("Bucket already exists in ka.Buckets: %v", ka)) + return -1 + } + } + ka.Buckets = append(ka.Buckets, bucketIdx) + return len(ka.Buckets) +} + +func (ka *knownAddress) removeBucketRef(bucketIdx int) int { + buckets := []int{} + for _, bucket := range ka.Buckets { + if bucket != bucketIdx { + buckets = append(buckets, bucket) + } + } + if len(buckets) != len(ka.Buckets)-1 { + // TODO refactor to return error? + // log.Warn(Fmt("bucketIdx not found in ka.Buckets: %v", ka)) + return -1 + } + ka.Buckets = buckets + return len(ka.Buckets) +} + +/* + An address is bad if the address in question has not been tried in the last + minute and meets one of the following criteria: + + 1) It claims to be from the future + 2) It hasn't been seen in over a month + 3) It has failed at least three times and never succeeded + 4) It has failed ten times in the last week + + All addresses that meet these criteria are assumed to be worthless and not + worth keeping hold of. 
+*/
+func (ka *knownAddress) isBad() bool {
+	// Has been attempted in the last minute --> good
+	if ka.LastAttempt.After(time.Now().Add(-1 * time.Minute)) {
+		return false
+	}
+
+	// Over a month old?
+	if ka.LastAttempt.Before(time.Now().Add(-1 * numMissingDays * time.Hour * 24)) {
+		return true
+	}
+
+	// Never succeeded?
+	if ka.LastSuccess.IsZero() && ka.Attempts >= numRetries {
+		return true
+	}
+
+	// Hasn't succeeded in too long?
+	if ka.LastSuccess.Before(time.Now().Add(-1*minBadDays*time.Hour*24)) &&
+		ka.Attempts >= maxFailures {
+		return true
+	}
+
+	return false
+}
diff --git a/p2p/addrbook_test.go b/p2p/addrbook_test.go
new file mode 100644
index 000000000..9b83be180
--- /dev/null
+++ b/p2p/addrbook_test.go
@@ -0,0 +1,174 @@
+package p2p
+
+import (
+	"fmt"
+	"io/ioutil"
+	"math/rand"
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+	"github.com/tendermint/tmlibs/log"
+)
+
+func createTempFileName(prefix string) string {
+	f, err := ioutil.TempFile("", prefix)
+	if err != nil {
+		panic(err)
+	}
+	fname := f.Name()
+	err = f.Close()
+	if err != nil {
+		panic(err)
+	}
+	return fname
+}
+
+func TestAddrBookSaveLoad(t *testing.T) {
+	fname := createTempFileName("addrbook_test")
+
+	// 0 addresses
+	book := NewAddrBook(fname, true)
+	book.SetLogger(log.TestingLogger())
+	book.saveToFile(fname)
+
+	book = NewAddrBook(fname, true)
+	book.SetLogger(log.TestingLogger())
+	book.loadFromFile(fname)
+
+	assert.Zero(t, book.Size())
+
+	// 100 addresses
+	randAddrs := randNetAddressPairs(t, 100)
+
+	for _, addrSrc := range randAddrs {
+		book.AddAddress(addrSrc.addr, addrSrc.src)
+	}
+
+	assert.Equal(t, 100, book.Size())
+	book.saveToFile(fname)
+
+	book = NewAddrBook(fname, true)
+	book.SetLogger(log.TestingLogger())
+	book.loadFromFile(fname)
+
+	assert.Equal(t, 100, book.Size())
+}
+
+func TestAddrBookLookup(t *testing.T) {
+	fname := createTempFileName("addrbook_test")
+
+	randAddrs := randNetAddressPairs(t, 100)
+
+	book := NewAddrBook(fname, true)
+
book.SetLogger(log.TestingLogger()) + for _, addrSrc := range randAddrs { + addr := addrSrc.addr + src := addrSrc.src + book.AddAddress(addr, src) + + ka := book.addrLookup[addr.String()] + assert.NotNil(t, ka, "Expected to find KnownAddress %v but wasn't there.", addr) + + if !(ka.Addr.Equals(addr) && ka.Src.Equals(src)) { + t.Fatalf("KnownAddress doesn't match addr & src") + } + } +} + +func TestAddrBookPromoteToOld(t *testing.T) { + fname := createTempFileName("addrbook_test") + + randAddrs := randNetAddressPairs(t, 100) + + book := NewAddrBook(fname, true) + book.SetLogger(log.TestingLogger()) + for _, addrSrc := range randAddrs { + book.AddAddress(addrSrc.addr, addrSrc.src) + } + + // Attempt all addresses. + for _, addrSrc := range randAddrs { + book.MarkAttempt(addrSrc.addr) + } + + // Promote half of them + for i, addrSrc := range randAddrs { + if i%2 == 0 { + book.MarkGood(addrSrc.addr) + } + } + + // TODO: do more testing :) + + selection := book.GetSelection() + t.Logf("selection: %v", selection) + + if len(selection) > book.Size() { + t.Errorf("selection could not be bigger than the book") + } +} + +func TestAddrBookHandlesDuplicates(t *testing.T) { + fname := createTempFileName("addrbook_test") + + book := NewAddrBook(fname, true) + book.SetLogger(log.TestingLogger()) + + randAddrs := randNetAddressPairs(t, 100) + + differentSrc := randIPv4Address(t) + for _, addrSrc := range randAddrs { + book.AddAddress(addrSrc.addr, addrSrc.src) + book.AddAddress(addrSrc.addr, addrSrc.src) // duplicate + book.AddAddress(addrSrc.addr, differentSrc) // different src + } + + assert.Equal(t, 100, book.Size()) +} + +type netAddressPair struct { + addr *NetAddress + src *NetAddress +} + +func randNetAddressPairs(t *testing.T, n int) []netAddressPair { + randAddrs := make([]netAddressPair, n) + for i := 0; i < n; i++ { + randAddrs[i] = netAddressPair{addr: randIPv4Address(t), src: randIPv4Address(t)} + } + return randAddrs +} + +func randIPv4Address(t *testing.T) 
*NetAddress { + for { + ip := fmt.Sprintf("%v.%v.%v.%v", + rand.Intn(254)+1, + rand.Intn(255), + rand.Intn(255), + rand.Intn(255), + ) + port := rand.Intn(65535-1) + 1 + addr, err := NewNetAddressString(fmt.Sprintf("%v:%v", ip, port)) + assert.Nil(t, err, "error generating rand network address") + if addr.Routable() { + return addr + } + } +} + +func TestAddrBookRemoveAddress(t *testing.T) { + fname := createTempFileName("addrbook_test") + book := NewAddrBook(fname, true) + book.SetLogger(log.TestingLogger()) + + addr := randIPv4Address(t) + book.AddAddress(addr, addr) + assert.Equal(t, 1, book.Size()) + + book.RemoveAddress(addr) + assert.Equal(t, 0, book.Size()) + + nonExistingAddr := randIPv4Address(t) + book.RemoveAddress(nonExistingAddr) + assert.Equal(t, 0, book.Size()) +} diff --git a/p2p/connection.go b/p2p/connection.go new file mode 100644 index 000000000..36f15abb7 --- /dev/null +++ b/p2p/connection.go @@ -0,0 +1,686 @@ +package p2p + +import ( + "bufio" + "fmt" + "io" + "math" + "net" + "runtime/debug" + "sync/atomic" + "time" + + wire "github.com/tendermint/go-wire" + cmn "github.com/tendermint/tmlibs/common" + flow "github.com/tendermint/tmlibs/flowrate" +) + +const ( + numBatchMsgPackets = 10 + minReadBufferSize = 1024 + minWriteBufferSize = 65536 + updateState = 2 * time.Second + pingTimeout = 40 * time.Second + flushThrottle = 100 * time.Millisecond + + defaultSendQueueCapacity = 1 + defaultSendRate = int64(512000) // 500KB/s + defaultRecvBufferCapacity = 4096 + defaultRecvMessageCapacity = 22020096 // 21MB + defaultRecvRate = int64(512000) // 500KB/s + defaultSendTimeout = 10 * time.Second +) + +type receiveCbFunc func(chID byte, msgBytes []byte) +type errorCbFunc func(interface{}) + +/* +Each peer has one `MConnection` (multiplex connection) instance. + +__multiplex__ *noun* a system or signal involving simultaneous transmission of +several messages along a single channel of communication. 
+ +Each `MConnection` handles message transmission on multiple abstract communication +`Channel`s. Each channel has a globally unique byte id. +The byte id and the relative priorities of each `Channel` are configured upon +initialization of the connection. + +There are two methods for sending messages: + func (m MConnection) Send(chID byte, msg interface{}) bool {} + func (m MConnection) TrySend(chID byte, msg interface{}) bool {} + +`Send(chID, msg)` is a blocking call that waits until `msg` is successfully queued +for the channel with the given id byte `chID`, or until the request times out. +The message `msg` is serialized using the `tendermint/wire` submodule's +`WriteBinary()` reflection routine. + +`TrySend(chID, msg)` is a nonblocking call that returns false if the channel's +queue is full. + +Inbound message bytes are handled with an onReceive callback function. +*/ +type MConnection struct { + cmn.BaseService + + conn net.Conn + bufReader *bufio.Reader + bufWriter *bufio.Writer + sendMonitor *flow.Monitor + recvMonitor *flow.Monitor + send chan struct{} + pong chan struct{} + channels []*Channel + channelsIdx map[byte]*Channel + onReceive receiveCbFunc + onError errorCbFunc + errored uint32 + config *MConnConfig + + quit chan struct{} + flushTimer *cmn.ThrottleTimer // flush writes as necessary but throttled. + pingTimer *cmn.RepeatTimer // send pings periodically + chStatsTimer *cmn.RepeatTimer // update channel stats periodically + + LocalAddress *NetAddress + RemoteAddress *NetAddress +} + +// MConnConfig is a MConnection configuration. +type MConnConfig struct { + SendRate int64 `mapstructure:"send_rate"` + RecvRate int64 `mapstructure:"recv_rate"` +} + +// DefaultMConnConfig returns the default config. 
+func DefaultMConnConfig() *MConnConfig { + return &MConnConfig{ + SendRate: defaultSendRate, + RecvRate: defaultRecvRate, + } +} + +// NewMConnection wraps net.Conn and creates multiplex connection +func NewMConnection(conn net.Conn, chDescs []*ChannelDescriptor, onReceive receiveCbFunc, onError errorCbFunc) *MConnection { + return NewMConnectionWithConfig( + conn, + chDescs, + onReceive, + onError, + DefaultMConnConfig()) +} + +// NewMConnectionWithConfig wraps net.Conn and creates multiplex connection with a config +func NewMConnectionWithConfig(conn net.Conn, chDescs []*ChannelDescriptor, onReceive receiveCbFunc, onError errorCbFunc, config *MConnConfig) *MConnection { + mconn := &MConnection{ + conn: conn, + bufReader: bufio.NewReaderSize(conn, minReadBufferSize), + bufWriter: bufio.NewWriterSize(conn, minWriteBufferSize), + sendMonitor: flow.New(0, 0), + recvMonitor: flow.New(0, 0), + send: make(chan struct{}, 1), + pong: make(chan struct{}), + onReceive: onReceive, + onError: onError, + config: config, + + LocalAddress: NewNetAddress(conn.LocalAddr()), + RemoteAddress: NewNetAddress(conn.RemoteAddr()), + } + + // Create channels + var channelsIdx = map[byte]*Channel{} + var channels = []*Channel{} + + for _, desc := range chDescs { + descCopy := *desc // copy the desc else unsafe access across connections + channel := newChannel(mconn, &descCopy) + channelsIdx[channel.id] = channel + channels = append(channels, channel) + } + mconn.channels = channels + mconn.channelsIdx = channelsIdx + + mconn.BaseService = *cmn.NewBaseService(nil, "MConnection", mconn) + + return mconn +} + +func (c *MConnection) OnStart() error { + c.BaseService.OnStart() + c.quit = make(chan struct{}) + c.flushTimer = cmn.NewThrottleTimer("flush", flushThrottle) + c.pingTimer = cmn.NewRepeatTimer("ping", pingTimeout) + c.chStatsTimer = cmn.NewRepeatTimer("chStats", updateState) + go c.sendRoutine() + go c.recvRoutine() + return nil +} + +func (c *MConnection) OnStop() { + 
c.BaseService.OnStop() + c.flushTimer.Stop() + c.pingTimer.Stop() + c.chStatsTimer.Stop() + if c.quit != nil { + close(c.quit) + } + c.conn.Close() + // We can't close pong safely here because + // recvRoutine may write to it after we've stopped. + // Though it doesn't need to get closed at all, + // we close it @ recvRoutine. + // close(c.pong) +} + +func (c *MConnection) String() string { + return fmt.Sprintf("MConn{%v}", c.conn.RemoteAddr()) +} + +func (c *MConnection) flush() { + c.Logger.Debug("Flush", "conn", c) + err := c.bufWriter.Flush() + if err != nil { + c.Logger.Error("MConnection flush failed", "error", err) + } +} + +// Catch panics, usually caused by remote disconnects. +func (c *MConnection) _recover() { + if r := recover(); r != nil { + stack := debug.Stack() + err := cmn.StackError{r, stack} + c.stopForError(err) + } +} + +func (c *MConnection) stopForError(r interface{}) { + c.Stop() + if atomic.CompareAndSwapUint32(&c.errored, 0, 1) { + if c.onError != nil { + c.onError(r) + } + } +} + +// Queues a message to be sent to channel. +func (c *MConnection) Send(chID byte, msg interface{}) bool { + if !c.IsRunning() { + return false + } + + c.Logger.Debug("Send", "channel", chID, "conn", c, "msg", msg) //, "bytes", wire.BinaryBytes(msg)) + + // Send message to channel. + channel, ok := c.channelsIdx[chID] + if !ok { + c.Logger.Error(cmn.Fmt("Cannot send bytes, unknown channel %X", chID)) + return false + } + + success := channel.sendBytes(wire.BinaryBytes(msg)) + if success { + // Wake up sendRoutine if necessary + select { + case c.send <- struct{}{}: + default: + } + } else { + c.Logger.Error("Send failed", "channel", chID, "conn", c, "msg", msg) + } + return success +} + +// Queues a message to be sent to channel. +// Nonblocking, returns true if successful. 
+func (c *MConnection) TrySend(chID byte, msg interface{}) bool { + if !c.IsRunning() { + return false + } + + c.Logger.Debug("TrySend", "channel", chID, "conn", c, "msg", msg) + + // Send message to channel. + channel, ok := c.channelsIdx[chID] + if !ok { + c.Logger.Error(cmn.Fmt("Cannot send bytes, unknown channel %X", chID)) + return false + } + + ok = channel.trySendBytes(wire.BinaryBytes(msg)) + if ok { + // Wake up sendRoutine if necessary + select { + case c.send <- struct{}{}: + default: + } + } + + return ok +} + +// CanSend returns true if you can send more data onto the chID, false +// otherwise. Use only as a heuristic. +func (c *MConnection) CanSend(chID byte) bool { + if !c.IsRunning() { + return false + } + + channel, ok := c.channelsIdx[chID] + if !ok { + c.Logger.Error(cmn.Fmt("Unknown channel %X", chID)) + return false + } + return channel.canSend() +} + +// sendRoutine polls for packets to send from channels. +func (c *MConnection) sendRoutine() { + defer c._recover() + +FOR_LOOP: + for { + var n int + var err error + select { + case <-c.flushTimer.Ch: + // NOTE: flushTimer.Set() must be called every time + // something is written to .bufWriter. + c.flush() + case <-c.chStatsTimer.Ch: + for _, channel := range c.channels { + channel.updateStats() + } + case <-c.pingTimer.Ch: + c.Logger.Debug("Send Ping") + wire.WriteByte(packetTypePing, c.bufWriter, &n, &err) + c.sendMonitor.Update(int(n)) + c.flush() + case <-c.pong: + c.Logger.Debug("Send Pong") + wire.WriteByte(packetTypePong, c.bufWriter, &n, &err) + c.sendMonitor.Update(int(n)) + c.flush() + case <-c.quit: + break FOR_LOOP + case <-c.send: + // Send some msgPackets + eof := c.sendSomeMsgPackets() + if !eof { + // Keep sendRoutine awake. 
+ select { + case c.send <- struct{}{}: + default: + } + } + } + + if !c.IsRunning() { + break FOR_LOOP + } + if err != nil { + c.Logger.Error("Connection failed @ sendRoutine", "conn", c, "error", err) + c.stopForError(err) + break FOR_LOOP + } + } + + // Cleanup +} + +// Returns true if messages from channels were exhausted. +// Blocks in accordance to .sendMonitor throttling. +func (c *MConnection) sendSomeMsgPackets() bool { + // Block until .sendMonitor says we can write. + // Once we're ready we send more than we asked for, + // but amortized it should even out. + c.sendMonitor.Limit(maxMsgPacketTotalSize, atomic.LoadInt64(&c.config.SendRate), true) + + // Now send some msgPackets. + for i := 0; i < numBatchMsgPackets; i++ { + if c.sendMsgPacket() { + return true + } + } + return false +} + +// Returns true if messages from channels were exhausted. +func (c *MConnection) sendMsgPacket() bool { + // Choose a channel to create a msgPacket from. + // The chosen channel will be the one whose recentlySent/priority is the least. + var leastRatio float32 = math.MaxFloat32 + var leastChannel *Channel + for _, channel := range c.channels { + // If nothing to send, skip this channel + if !channel.isSendPending() { + continue + } + // Get ratio, and keep track of lowest ratio. + ratio := float32(channel.recentlySent) / float32(channel.priority) + if ratio < leastRatio { + leastRatio = ratio + leastChannel = channel + } + } + + // Nothing to send? + if leastChannel == nil { + return true + } else { + // c.Logger.Info("Found a msgPacket to send") + } + + // Make & send a msgPacket from this channel + n, err := leastChannel.writeMsgPacketTo(c.bufWriter) + if err != nil { + c.Logger.Error("Failed to write msgPacket", "error", err) + c.stopForError(err) + return true + } + c.sendMonitor.Update(int(n)) + c.flushTimer.Set() + return false +} + +// recvRoutine reads msgPackets and reconstructs the message using the channels' "recving" buffer. 
+// After a whole message has been assembled, it's pushed to onReceive(). +// Blocks depending on how the connection is throttled. +func (c *MConnection) recvRoutine() { + defer c._recover() + +FOR_LOOP: + for { + // Block until .recvMonitor says we can read. + c.recvMonitor.Limit(maxMsgPacketTotalSize, atomic.LoadInt64(&c.config.RecvRate), true) + + /* + // Peek into bufReader for debugging + if numBytes := c.bufReader.Buffered(); numBytes > 0 { + log.Info("Peek connection buffer", "numBytes", numBytes, "bytes", log15.Lazy{func() []byte { + bytes, err := c.bufReader.Peek(MinInt(numBytes, 100)) + if err == nil { + return bytes + } else { + log.Warn("Error peeking connection buffer", "error", err) + return nil + } + }}) + } + */ + + // Read packet type + var n int + var err error + pktType := wire.ReadByte(c.bufReader, &n, &err) + c.recvMonitor.Update(int(n)) + if err != nil { + if c.IsRunning() { + c.Logger.Error("Connection failed @ recvRoutine (reading byte)", "conn", c, "error", err) + c.stopForError(err) + } + break FOR_LOOP + } + + // Read more depending on packet type. + switch pktType { + case packetTypePing: + // TODO: prevent abuse, as they cause flush()'s. 
+ c.Logger.Debug("Receive Ping") + c.pong <- struct{}{} + case packetTypePong: + // do nothing + c.Logger.Debug("Receive Pong") + case packetTypeMsg: + pkt, n, err := msgPacket{}, int(0), error(nil) + wire.ReadBinaryPtr(&pkt, c.bufReader, maxMsgPacketTotalSize, &n, &err) + c.recvMonitor.Update(int(n)) + if err != nil { + if c.IsRunning() { + c.Logger.Error("Connection failed @ recvRoutine", "conn", c, "error", err) + c.stopForError(err) + } + break FOR_LOOP + } + channel, ok := c.channelsIdx[pkt.ChannelID] + if !ok || channel == nil { + cmn.PanicQ(cmn.Fmt("Unknown channel %X", pkt.ChannelID)) + } + msgBytes, err := channel.recvMsgPacket(pkt) + if err != nil { + if c.IsRunning() { + c.Logger.Error("Connection failed @ recvRoutine", "conn", c, "error", err) + c.stopForError(err) + } + break FOR_LOOP + } + if msgBytes != nil { + c.Logger.Debug("Received bytes", "chID", pkt.ChannelID, "msgBytes", msgBytes) + c.onReceive(pkt.ChannelID, msgBytes) + } + default: + cmn.PanicSanity(cmn.Fmt("Unknown message type %X", pktType)) + } + + // TODO: shouldn't this go in the sendRoutine? + // Better to send a ping packet when *we* haven't sent anything for a while. 
+ c.pingTimer.Reset() + } + + // Cleanup + close(c.pong) + for _ = range c.pong { + // Drain + } +} + +type ConnectionStatus struct { + SendMonitor flow.Status + RecvMonitor flow.Status + Channels []ChannelStatus +} + +type ChannelStatus struct { + ID byte + SendQueueCapacity int + SendQueueSize int + Priority int + RecentlySent int64 +} + +func (c *MConnection) Status() ConnectionStatus { + var status ConnectionStatus + status.SendMonitor = c.sendMonitor.Status() + status.RecvMonitor = c.recvMonitor.Status() + status.Channels = make([]ChannelStatus, len(c.channels)) + for i, channel := range c.channels { + status.Channels[i] = ChannelStatus{ + ID: channel.id, + SendQueueCapacity: cap(channel.sendQueue), + SendQueueSize: int(channel.sendQueueSize), // TODO use atomic + Priority: channel.priority, + RecentlySent: channel.recentlySent, + } + } + return status +} + +//----------------------------------------------------------------------------- + +type ChannelDescriptor struct { + ID byte + Priority int + SendQueueCapacity int + RecvBufferCapacity int + RecvMessageCapacity int +} + +func (chDesc *ChannelDescriptor) FillDefaults() { + if chDesc.SendQueueCapacity == 0 { + chDesc.SendQueueCapacity = defaultSendQueueCapacity + } + if chDesc.RecvBufferCapacity == 0 { + chDesc.RecvBufferCapacity = defaultRecvBufferCapacity + } + if chDesc.RecvMessageCapacity == 0 { + chDesc.RecvMessageCapacity = defaultRecvMessageCapacity + } +} + +// TODO: lowercase. +// NOTE: not goroutine-safe. +type Channel struct { + conn *MConnection + desc *ChannelDescriptor + id byte + sendQueue chan []byte + sendQueueSize int32 // atomic. 
+	recving      []byte
+	sending      []byte
+	priority     int
+	recentlySent int64 // exponential moving average
+}
+
+func newChannel(conn *MConnection, desc *ChannelDescriptor) *Channel {
+	desc.FillDefaults()
+	if desc.Priority <= 0 {
+		cmn.PanicSanity("Channel default priority must be a positive integer")
+	}
+	return &Channel{
+		conn:      conn,
+		desc:      desc,
+		id:        desc.ID,
+		sendQueue: make(chan []byte, desc.SendQueueCapacity),
+		recving:   make([]byte, 0, desc.RecvBufferCapacity),
+		priority:  desc.Priority,
+	}
+}
+
+// Queues message to send to this channel.
+// Goroutine-safe
+// Times out (and returns false) after defaultSendTimeout
+func (ch *Channel) sendBytes(bytes []byte) bool {
+	select {
+	case ch.sendQueue <- bytes:
+		atomic.AddInt32(&ch.sendQueueSize, 1)
+		return true
+	case <-time.After(defaultSendTimeout):
+		return false
+	}
+}
+
+// Queues message to send to this channel.
+// Nonblocking, returns true if successful.
+// Goroutine-safe
+func (ch *Channel) trySendBytes(bytes []byte) bool {
+	select {
+	case ch.sendQueue <- bytes:
+		atomic.AddInt32(&ch.sendQueueSize, 1)
+		return true
+	default:
+		return false
+	}
+}
+
+// Goroutine-safe
+func (ch *Channel) loadSendQueueSize() (size int) {
+	return int(atomic.LoadInt32(&ch.sendQueueSize))
+}
+
+// Goroutine-safe
+// Use only as a heuristic.
+func (ch *Channel) canSend() bool {
+	return ch.loadSendQueueSize() < defaultSendQueueCapacity
+}
+
+// Returns true if any msgPackets are pending to be sent.
+// Call before calling nextMsgPacket()
+// Goroutine-safe
+func (ch *Channel) isSendPending() bool {
+	if len(ch.sending) == 0 {
+		if len(ch.sendQueue) == 0 {
+			return false
+		}
+		ch.sending = <-ch.sendQueue
+	}
+	return true
+}
+
+// Creates a new msgPacket to send. 
+// Not goroutine-safe +func (ch *Channel) nextMsgPacket() msgPacket { + packet := msgPacket{} + packet.ChannelID = byte(ch.id) + packet.Bytes = ch.sending[:cmn.MinInt(maxMsgPacketPayloadSize, len(ch.sending))] + if len(ch.sending) <= maxMsgPacketPayloadSize { + packet.EOF = byte(0x01) + ch.sending = nil + atomic.AddInt32(&ch.sendQueueSize, -1) // decrement sendQueueSize + } else { + packet.EOF = byte(0x00) + ch.sending = ch.sending[cmn.MinInt(maxMsgPacketPayloadSize, len(ch.sending)):] + } + return packet +} + +// Writes next msgPacket to w. +// Not goroutine-safe +func (ch *Channel) writeMsgPacketTo(w io.Writer) (n int, err error) { + packet := ch.nextMsgPacket() + // log.Debug("Write Msg Packet", "conn", ch.conn, "packet", packet) + wire.WriteByte(packetTypeMsg, w, &n, &err) + wire.WriteBinary(packet, w, &n, &err) + if err == nil { + ch.recentlySent += int64(n) + } + return +} + +// Handles incoming msgPackets. Returns a msg bytes if msg is complete. +// Not goroutine-safe +func (ch *Channel) recvMsgPacket(packet msgPacket) ([]byte, error) { + // log.Debug("Read Msg Packet", "conn", ch.conn, "packet", packet) + if ch.desc.RecvMessageCapacity < len(ch.recving)+len(packet.Bytes) { + return nil, wire.ErrBinaryReadOverflow + } + ch.recving = append(ch.recving, packet.Bytes...) + if packet.EOF == byte(0x01) { + msgBytes := ch.recving + // clear the slice without re-allocating. + // http://stackoverflow.com/questions/16971741/how-do-you-clear-a-slice-in-go + // suggests this could be a memory leak, but we might as well keep the memory for the channel until it closes, + // at which point the recving slice stops being used and should be garbage collected + ch.recving = ch.recving[:0] // make([]byte, 0, ch.desc.RecvBufferCapacity) + return msgBytes, nil + } + return nil, nil +} + +// Call this periodically to update stats for throttling purposes. +// Not goroutine-safe +func (ch *Channel) updateStats() { + // Exponential decay of stats. + // TODO: optimize. 
+ ch.recentlySent = int64(float64(ch.recentlySent) * 0.8) +} + +//----------------------------------------------------------------------------- + +const ( + maxMsgPacketPayloadSize = 1024 + maxMsgPacketOverheadSize = 10 // It's actually lower but good enough + maxMsgPacketTotalSize = maxMsgPacketPayloadSize + maxMsgPacketOverheadSize + packetTypePing = byte(0x01) + packetTypePong = byte(0x02) + packetTypeMsg = byte(0x03) +) + +// Messages in channels are chopped into smaller msgPackets for multiplexing. +type msgPacket struct { + ChannelID byte + EOF byte // 1 means message ends here. + Bytes []byte +} + +func (p msgPacket) String() string { + return fmt.Sprintf("MsgPacket{%X:%X T:%X}", p.ChannelID, p.Bytes, p.EOF) +} diff --git a/p2p/connection_test.go b/p2p/connection_test.go new file mode 100644 index 000000000..71c3d64c2 --- /dev/null +++ b/p2p/connection_test.go @@ -0,0 +1,144 @@ +package p2p_test + +import ( + "net" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + p2p "github.com/tendermint/tendermint/p2p" + "github.com/tendermint/tmlibs/log" +) + +func createMConnection(conn net.Conn) *p2p.MConnection { + onReceive := func(chID byte, msgBytes []byte) { + } + onError := func(r interface{}) { + } + c := createMConnectionWithCallbacks(conn, onReceive, onError) + c.SetLogger(log.TestingLogger()) + return c +} + +func createMConnectionWithCallbacks(conn net.Conn, onReceive func(chID byte, msgBytes []byte), onError func(r interface{})) *p2p.MConnection { + chDescs := []*p2p.ChannelDescriptor{&p2p.ChannelDescriptor{ID: 0x01, Priority: 1, SendQueueCapacity: 1}} + c := p2p.NewMConnection(conn, chDescs, onReceive, onError) + c.SetLogger(log.TestingLogger()) + return c +} + +func TestMConnectionSend(t *testing.T) { + assert, require := assert.New(t), require.New(t) + + server, client := net.Pipe() + defer server.Close() + defer client.Close() + + mconn := createMConnection(client) + _, err := mconn.Start() + 
require.Nil(err)
+	defer mconn.Stop()
+
+	msg := "Ant-Man"
+	assert.True(mconn.Send(0x01, msg))
+	// Note: subsequent Send/TrySend calls could pass because we are reading from
+	// the send queue in a separate goroutine.
+	server.Read(make([]byte, len(msg)))
+	assert.True(mconn.CanSend(0x01))
+
+	msg = "Spider-Man"
+	assert.True(mconn.TrySend(0x01, msg))
+	server.Read(make([]byte, len(msg)))
+
+	assert.False(mconn.CanSend(0x05), "CanSend should return false because channel is unknown")
+	assert.False(mconn.Send(0x05, "Absorbing Man"), "Send should return false because channel is unknown")
+}
+
+func TestMConnectionReceive(t *testing.T) {
+	assert, require := assert.New(t), require.New(t)
+
+	server, client := net.Pipe()
+	defer server.Close()
+	defer client.Close()
+
+	receivedCh := make(chan []byte)
+	errorsCh := make(chan interface{})
+	onReceive := func(chID byte, msgBytes []byte) {
+		receivedCh <- msgBytes
+	}
+	onError := func(r interface{}) {
+		errorsCh <- r
+	}
+	mconn1 := createMConnectionWithCallbacks(client, onReceive, onError)
+	_, err := mconn1.Start()
+	require.Nil(err)
+	defer mconn1.Stop()
+
+	mconn2 := createMConnection(server)
+	_, err = mconn2.Start()
+	require.Nil(err)
+	defer mconn2.Stop()
+
+	msg := "Cyclops"
+	assert.True(mconn2.Send(0x01, msg))
+
+	select {
+	case receivedBytes := <-receivedCh:
+		assert.Equal([]byte(msg), receivedBytes[2:]) // first 2 bytes are internal (go-wire length prefix)
+	case err := <-errorsCh:
+		t.Fatalf("Expected %s, got %+v", msg, err)
+	case <-time.After(500 * time.Millisecond):
+		t.Fatalf("Did not receive %s message in 500ms", msg)
+	}
+}
+
+func TestMConnectionStatus(t *testing.T) {
+	assert, require := assert.New(t), require.New(t)
+
+	server, client := net.Pipe()
+	defer server.Close()
+	defer client.Close()
+
+	mconn := createMConnection(client)
+	_, err := mconn.Start()
+	require.Nil(err)
+	defer mconn.Stop()
+
+	status := mconn.Status()
+	assert.NotNil(status)
+	assert.Zero(status.Channels[0].SendQueueSize)
+}
+
+func 
TestMConnectionStopsAndReturnsError(t *testing.T) { + assert, require := assert.New(t), require.New(t) + + server, client := net.Pipe() + defer server.Close() + defer client.Close() + + receivedCh := make(chan []byte) + errorsCh := make(chan interface{}) + onReceive := func(chID byte, msgBytes []byte) { + receivedCh <- msgBytes + } + onError := func(r interface{}) { + errorsCh <- r + } + mconn := createMConnectionWithCallbacks(client, onReceive, onError) + _, err := mconn.Start() + require.Nil(err) + defer mconn.Stop() + + client.Close() + + select { + case receivedBytes := <-receivedCh: + t.Fatalf("Expected error, got %v", receivedBytes) + case err := <-errorsCh: + assert.NotNil(err) + assert.False(mconn.IsRunning()) + case <-time.After(500 * time.Millisecond): + t.Fatal("Did not receive error in 500ms") + } +} diff --git a/p2p/fuzz.go b/p2p/fuzz.go new file mode 100644 index 000000000..aefac986a --- /dev/null +++ b/p2p/fuzz.go @@ -0,0 +1,173 @@ +package p2p + +import ( + "math/rand" + "net" + "sync" + "time" +) + +const ( + // FuzzModeDrop is a mode in which we randomly drop reads/writes, connections or sleep + FuzzModeDrop = iota + // FuzzModeDelay is a mode in which we randomly sleep + FuzzModeDelay +) + +// FuzzedConnection wraps any net.Conn and depending on the mode either delays +// reads/writes or randomly drops reads/writes/connections. +type FuzzedConnection struct { + conn net.Conn + + mtx sync.Mutex + start <-chan time.Time + active bool + + config *FuzzConnConfig +} + +// FuzzConnConfig is a FuzzedConnection configuration. +type FuzzConnConfig struct { + Mode int + MaxDelay time.Duration + ProbDropRW float64 + ProbDropConn float64 + ProbSleep float64 +} + +// DefaultFuzzConnConfig returns the default config. +func DefaultFuzzConnConfig() *FuzzConnConfig { + return &FuzzConnConfig{ + Mode: FuzzModeDrop, + MaxDelay: 3 * time.Second, + ProbDropRW: 0.2, + ProbDropConn: 0.00, + ProbSleep: 0.00, + } +} + +// FuzzConn creates a new FuzzedConnection. 
Fuzzing starts immediately. +func FuzzConn(conn net.Conn) net.Conn { + return FuzzConnFromConfig(conn, DefaultFuzzConnConfig()) +} + +// FuzzConnFromConfig creates a new FuzzedConnection from a config. Fuzzing +// starts immediately. +func FuzzConnFromConfig(conn net.Conn, config *FuzzConnConfig) net.Conn { + return &FuzzedConnection{ + conn: conn, + start: make(<-chan time.Time), + active: true, + config: config, + } +} + +// FuzzConnAfter creates a new FuzzedConnection. Fuzzing starts when the +// duration elapses. +func FuzzConnAfter(conn net.Conn, d time.Duration) net.Conn { + return FuzzConnAfterFromConfig(conn, d, DefaultFuzzConnConfig()) +} + +// FuzzConnAfterFromConfig creates a new FuzzedConnection from a config. +// Fuzzing starts when the duration elapses. +func FuzzConnAfterFromConfig(conn net.Conn, d time.Duration, config *FuzzConnConfig) net.Conn { + return &FuzzedConnection{ + conn: conn, + start: time.After(d), + active: false, + config: config, + } +} + +// Config returns the connection's config. +func (fc *FuzzedConnection) Config() *FuzzConnConfig { + return fc.config +} + +// Read implements net.Conn. +func (fc *FuzzedConnection) Read(data []byte) (n int, err error) { + if fc.fuzz() { + return 0, nil + } + return fc.conn.Read(data) +} + +// Write implements net.Conn. +func (fc *FuzzedConnection) Write(data []byte) (n int, err error) { + if fc.fuzz() { + return 0, nil + } + return fc.conn.Write(data) +} + +// Close implements net.Conn. +func (fc *FuzzedConnection) Close() error { return fc.conn.Close() } + +// LocalAddr implements net.Conn. +func (fc *FuzzedConnection) LocalAddr() net.Addr { return fc.conn.LocalAddr() } + +// RemoteAddr implements net.Conn. +func (fc *FuzzedConnection) RemoteAddr() net.Addr { return fc.conn.RemoteAddr() } + +// SetDeadline implements net.Conn. +func (fc *FuzzedConnection) SetDeadline(t time.Time) error { return fc.conn.SetDeadline(t) } + +// SetReadDeadline implements net.Conn. 
+func (fc *FuzzedConnection) SetReadDeadline(t time.Time) error { + return fc.conn.SetReadDeadline(t) +} + +// SetWriteDeadline implements net.Conn. +func (fc *FuzzedConnection) SetWriteDeadline(t time.Time) error { + return fc.conn.SetWriteDeadline(t) +} + +func (fc *FuzzedConnection) randomDuration() time.Duration { + maxDelayMillis := int(fc.config.MaxDelay.Nanoseconds() / 1000) + return time.Millisecond * time.Duration(rand.Int()%maxDelayMillis) +} + +// implements the fuzz (delay, kill conn) +// and returns whether or not the read/write should be ignored +func (fc *FuzzedConnection) fuzz() bool { + if !fc.shouldFuzz() { + return false + } + + switch fc.config.Mode { + case FuzzModeDrop: + // randomly drop the r/w, drop the conn, or sleep + r := rand.Float64() + if r <= fc.config.ProbDropRW { + return true + } else if r < fc.config.ProbDropRW+fc.config.ProbDropConn { + // XXX: can't this fail because machine precision? + // XXX: do we need an error? + fc.Close() + return true + } else if r < fc.config.ProbDropRW+fc.config.ProbDropConn+fc.config.ProbSleep { + time.Sleep(fc.randomDuration()) + } + case FuzzModeDelay: + // sleep a bit + time.Sleep(fc.randomDuration()) + } + return false +} + +func (fc *FuzzedConnection) shouldFuzz() bool { + if fc.active { + return true + } + + fc.mtx.Lock() + defer fc.mtx.Unlock() + + select { + case <-fc.start: + fc.active = true + return true + default: + return false + } +} diff --git a/p2p/ip_range_counter.go b/p2p/ip_range_counter.go new file mode 100644 index 000000000..85d9d407a --- /dev/null +++ b/p2p/ip_range_counter.go @@ -0,0 +1,29 @@ +package p2p + +import ( + "strings" +) + +// TODO Test +func AddToIPRangeCounts(counts map[string]int, ip string) map[string]int { + changes := make(map[string]int) + ipParts := strings.Split(ip, ":") + for i := 1; i < len(ipParts); i++ { + prefix := strings.Join(ipParts[:i], ":") + counts[prefix] += 1 + changes[prefix] = counts[prefix] + } + return changes +} + +// TODO Test +func 
CheckIPRangeCounts(counts map[string]int, limits []int) bool { + for prefix, count := range counts { + ipParts := strings.Split(prefix, ":") + numParts := len(ipParts) + if limits[numParts] < count { + return false + } + } + return true +} diff --git a/p2p/listener.go b/p2p/listener.go new file mode 100644 index 000000000..d31f0de83 --- /dev/null +++ b/p2p/listener.go @@ -0,0 +1,218 @@ +package p2p + +import ( + "fmt" + "net" + "strconv" + "time" + + "github.com/tendermint/tendermint/p2p/upnp" + cmn "github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/log" +) + +type Listener interface { + Connections() <-chan net.Conn + InternalAddress() *NetAddress + ExternalAddress() *NetAddress + String() string + Stop() bool +} + +// Implements Listener +type DefaultListener struct { + cmn.BaseService + + listener net.Listener + intAddr *NetAddress + extAddr *NetAddress + connections chan net.Conn +} + +const ( + numBufferedConnections = 10 + defaultExternalPort = 8770 + tryListenSeconds = 5 +) + +func splitHostPort(addr string) (host string, port int) { + host, portStr, err := net.SplitHostPort(addr) + if err != nil { + cmn.PanicSanity(err) + } + port, err = strconv.Atoi(portStr) + if err != nil { + cmn.PanicSanity(err) + } + return host, port +} + +// skipUPNP: If true, does not try getUPNPExternalAddress() +func NewDefaultListener(protocol string, lAddr string, skipUPNP bool, logger log.Logger) Listener { + // Local listen IP & port + lAddrIP, lAddrPort := splitHostPort(lAddr) + + // Create listener + var listener net.Listener + var err error + for i := 0; i < tryListenSeconds; i++ { + listener, err = net.Listen(protocol, lAddr) + if err == nil { + break + } else if i < tryListenSeconds-1 { + time.Sleep(time.Second * 1) + } + } + if err != nil { + cmn.PanicCrisis(err) + } + // Actual listener local IP & port + listenerIP, listenerPort := splitHostPort(listener.Addr().String()) + logger.Info("Local listener", "ip", listenerIP, "port", listenerPort) + + // 
Determine internal address... + var intAddr *NetAddress + intAddr, err = NewNetAddressString(lAddr) + if err != nil { + cmn.PanicCrisis(err) + } + + // Determine external address... + var extAddr *NetAddress + if !skipUPNP { + // If the lAddrIP is INADDR_ANY, try UPnP + if lAddrIP == "" || lAddrIP == "0.0.0.0" { + extAddr = getUPNPExternalAddress(lAddrPort, listenerPort, logger) + } + } + // Otherwise just use the local address... + if extAddr == nil { + extAddr = getNaiveExternalAddress(listenerPort) + } + if extAddr == nil { + cmn.PanicCrisis("Could not determine external address!") + } + + dl := &DefaultListener{ + listener: listener, + intAddr: intAddr, + extAddr: extAddr, + connections: make(chan net.Conn, numBufferedConnections), + } + dl.BaseService = *cmn.NewBaseService(logger, "DefaultListener", dl) + dl.Start() // Started upon construction + return dl +} + +func (l *DefaultListener) OnStart() error { + l.BaseService.OnStart() + go l.listenRoutine() + return nil +} + +func (l *DefaultListener) OnStop() { + l.BaseService.OnStop() + l.listener.Close() +} + +// Accept connections and pass on the channel +func (l *DefaultListener) listenRoutine() { + for { + conn, err := l.listener.Accept() + + if !l.IsRunning() { + break // Go to cleanup + } + + // listener wasn't stopped, + // yet we encountered an error. + if err != nil { + cmn.PanicCrisis(err) + } + + l.connections <- conn + } + + // Cleanup + close(l.connections) + for _ = range l.connections { + // Drain + } +} + +// A channel of inbound connections. +// It gets closed when the listener closes. +func (l *DefaultListener) Connections() <-chan net.Conn { + return l.connections +} + +func (l *DefaultListener) InternalAddress() *NetAddress { + return l.intAddr +} + +func (l *DefaultListener) ExternalAddress() *NetAddress { + return l.extAddr +} + +// NOTE: The returned listener is already Accept()'ing. +// So it's not suitable to pass into http.Serve(). 
+func (l *DefaultListener) NetListener() net.Listener { + return l.listener +} + +func (l *DefaultListener) String() string { + return fmt.Sprintf("Listener(@%v)", l.extAddr) +} + +/* external address helpers */ + +// UPNP external address discovery & port mapping +func getUPNPExternalAddress(externalPort, internalPort int, logger log.Logger) *NetAddress { + logger.Info("Getting UPNP external address") + nat, err := upnp.Discover() + if err != nil { + logger.Info("Could not perform UPNP discover", "error", err) + return nil + } + + ext, err := nat.GetExternalAddress() + if err != nil { + logger.Info("Could not get UPNP external address", "error", err) + return nil + } + + // UPnP can't seem to get the external port, so let's just be explicit. + if externalPort == 0 { + externalPort = defaultExternalPort + } + + externalPort, err = nat.AddPortMapping("tcp", externalPort, internalPort, "tendermint", 0) + if err != nil { + logger.Info("Could not add UPNP port mapping", "error", err) + return nil + } + + logger.Info("Got UPNP external address", "address", ext) + return NewNetAddressIPPort(ext, uint16(externalPort)) +} + +// TODO: use syscalls: http://pastebin.com/9exZG4rh +func getNaiveExternalAddress(port int) *NetAddress { + addrs, err := net.InterfaceAddrs() + if err != nil { + cmn.PanicCrisis(cmn.Fmt("Could not fetch interface addresses: %v", err)) + } + + for _, a := range addrs { + ipnet, ok := a.(*net.IPNet) + if !ok { + continue + } + v4 := ipnet.IP.To4() + if v4 == nil || v4[0] == 127 { + continue + } // loopback + return NewNetAddressIPPort(ipnet.IP, uint16(port)) + } + return nil +} diff --git a/p2p/listener_test.go b/p2p/listener_test.go new file mode 100644 index 000000000..c3d33a9ae --- /dev/null +++ b/p2p/listener_test.go @@ -0,0 +1,42 @@ +package p2p + +import ( + "bytes" + "testing" + + "github.com/tendermint/tmlibs/log" +) + +func TestListener(t *testing.T) { + // Create a listener + l := NewDefaultListener("tcp", ":8001", true, log.TestingLogger()) + 
+ // Dial the listener + lAddr := l.ExternalAddress() + connOut, err := lAddr.Dial() + if err != nil { + t.Fatalf("Could not connect to listener address %v", lAddr) + } else { + t.Logf("Created a connection to listener address %v", lAddr) + } + connIn, ok := <-l.Connections() + if !ok { + t.Fatalf("Could not get inbound connection from listener") + } + + msg := []byte("hi!") + go connIn.Write(msg) + b := make([]byte, 32) + n, err := connOut.Read(b) + if err != nil { + t.Fatalf("Error reading off connection: %v", err) + } + + b = b[:n] + if !bytes.Equal(msg, b) { + t.Fatalf("Got %s, expected %s", b, msg) + } + + // Close the server, no longer needed. + l.Stop() +} diff --git a/p2p/netaddress.go b/p2p/netaddress.go new file mode 100644 index 000000000..09787481c --- /dev/null +++ b/p2p/netaddress.go @@ -0,0 +1,253 @@ +// Modified for Tendermint +// Originally Copyright (c) 2013-2014 Conformal Systems LLC. +// https://github.com/conformal/btcd/blob/master/LICENSE + +package p2p + +import ( + "errors" + "flag" + "net" + "strconv" + "time" + + cmn "github.com/tendermint/tmlibs/common" +) + +// NetAddress defines information about a peer on the network +// including its IP address, and port. +type NetAddress struct { + IP net.IP + Port uint16 + str string +} + +// NewNetAddress returns a new NetAddress using the provided TCP +// address. When testing, other net.Addr (except TCP) will result in +// using 0.0.0.0:0. When normal run, other net.Addr (except TCP) will +// panic. +// TODO: socks proxies? +func NewNetAddress(addr net.Addr) *NetAddress { + tcpAddr, ok := addr.(*net.TCPAddr) + if !ok { + if flag.Lookup("test.v") == nil { // normal run + cmn.PanicSanity(cmn.Fmt("Only TCPAddrs are supported. 
Got: %v", addr)) + } else { // in testing + return NewNetAddressIPPort(net.IP("0.0.0.0"), 0) + } + } + ip := tcpAddr.IP + port := uint16(tcpAddr.Port) + return NewNetAddressIPPort(ip, port) +} + +// NewNetAddressString returns a new NetAddress using the provided +// address in the form of "IP:Port". Also resolves the host if host +// is not an IP. +func NewNetAddressString(addr string) (*NetAddress, error) { + + host, portStr, err := net.SplitHostPort(addr) + if err != nil { + return nil, err + } + + ip := net.ParseIP(host) + if ip == nil { + if len(host) > 0 { + ips, err := net.LookupIP(host) + if err != nil { + return nil, err + } + ip = ips[0] + } + } + + port, err := strconv.ParseUint(portStr, 10, 16) + if err != nil { + return nil, err + } + + na := NewNetAddressIPPort(ip, uint16(port)) + return na, nil +} + +// NewNetAddressStrings returns an array of NetAddress'es build using +// the provided strings. +func NewNetAddressStrings(addrs []string) ([]*NetAddress, error) { + netAddrs := make([]*NetAddress, len(addrs)) + for i, addr := range addrs { + netAddr, err := NewNetAddressString(addr) + if err != nil { + return nil, errors.New(cmn.Fmt("Error in address %s: %v", addr, err)) + } + netAddrs[i] = netAddr + } + return netAddrs, nil +} + +// NewNetAddressIPPort returns a new NetAddress using the provided IP +// and port number. +func NewNetAddressIPPort(ip net.IP, port uint16) *NetAddress { + na := &NetAddress{ + IP: ip, + Port: port, + str: net.JoinHostPort( + ip.String(), + strconv.FormatUint(uint64(port), 10), + ), + } + return na +} + +// Equals reports whether na and other are the same addresses. 
+func (na *NetAddress) Equals(other interface{}) bool {
+	if o, ok := other.(*NetAddress); ok {
+		return na.String() == o.String()
+	}
+
+	return false
+}
+
+func (na *NetAddress) Less(other interface{}) bool {
+	if o, ok := other.(*NetAddress); ok {
+		return na.String() < o.String()
+	}
+
+	cmn.PanicSanity("Cannot compare unequal types")
+	return false
+}
+
+// String representation.
+func (na *NetAddress) String() string {
+	if na.str == "" {
+		na.str = net.JoinHostPort(
+			na.IP.String(),
+			strconv.FormatUint(uint64(na.Port), 10),
+		)
+	}
+	return na.str
+}
+
+// Dial calls net.Dial on the address.
+func (na *NetAddress) Dial() (net.Conn, error) {
+	conn, err := net.Dial("tcp", na.String())
+	if err != nil {
+		return nil, err
+	}
+	return conn, nil
+}
+
+// DialTimeout calls net.DialTimeout on the address.
+func (na *NetAddress) DialTimeout(timeout time.Duration) (net.Conn, error) {
+	conn, err := net.DialTimeout("tcp", na.String(), timeout)
+	if err != nil {
+		return nil, err
+	}
+	return conn, nil
+}
+
+// Routable returns true if the address is routable.
+func (na *NetAddress) Routable() bool {
+	// TODO(oga) bitcoind doesn't include RFC3849 here, but should we?
+	return na.Valid() && !(na.RFC1918() || na.RFC3927() || na.RFC4862() ||
+		na.RFC4193() || na.RFC4843() || na.Local())
+}
+
+// For IPv4 these are either a 0 or all bits set address. For IPv6 a zero
+// address or one that matches the RFC3849 documentation address format.
+func (na *NetAddress) Valid() bool {
+	return na.IP != nil && !(na.IP.IsUnspecified() || na.RFC3849() ||
+		na.IP.Equal(net.IPv4bcast))
+}
+
+// Local returns true if it is a local address.
+func (na *NetAddress) Local() bool {
+	return na.IP.IsLoopback() || zero4.Contains(na.IP)
+}
+
+// ReachabilityTo checks whether o can be reached from na. 
+func (na *NetAddress) ReachabilityTo(o *NetAddress) int {
+	const (
+		Unreachable = 0
+		Default     = iota
+		Teredo
+		Ipv6_weak
+		Ipv4
+		Ipv6_strong
+		Private
+	)
+	if !na.Routable() {
+		return Unreachable
+	} else if na.RFC4380() {
+		if !o.Routable() {
+			return Default
+		} else if o.RFC4380() {
+			return Teredo
+		} else if o.IP.To4() != nil {
+			return Ipv4
+		} else { // ipv6
+			return Ipv6_weak
+		}
+	} else if na.IP.To4() != nil {
+		if o.Routable() && o.IP.To4() != nil {
+			return Ipv4
+		}
+		return Default
+	} else /* ipv6 */ {
+		var tunnelled bool
+		// Is our v6 tunnelled?
+		if o.RFC3964() || o.RFC6052() || o.RFC6145() {
+			tunnelled = true
+		}
+		if !o.Routable() {
+			return Default
+		} else if o.RFC4380() {
+			return Teredo
+		} else if o.IP.To4() != nil {
+			return Ipv4
+		} else if tunnelled {
+			// only prioritise ipv6 if we aren't tunnelling it.
+			return Ipv6_weak
+		}
+		return Ipv6_strong
+	}
+}
+
+// RFC1918: IPv4 Private networks (10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12)
+// RFC3849: IPv6 Documentation address (2001:0DB8::/32)
+// RFC3927: IPv4 Autoconfig (169.254.0.0/16)
+// RFC3964: IPv6 6to4 (2002::/16)
+// RFC4193: IPv6 unique local (FC00::/7)
+// RFC4380: IPv6 Teredo tunneling (2001::/32)
+// RFC4843: IPv6 ORCHID: (2001:10::/28)
+// RFC4862: IPv6 Autoconfig (FE80::/64)
+// RFC6052: IPv6 well known prefix (64:FF9B::/96)
+// RFC6145: IPv6 IPv4 translated address ::FFFF:0:0:0/96
+var rfc1918_10 = net.IPNet{IP: net.ParseIP("10.0.0.0"), Mask: net.CIDRMask(8, 32)}
+var rfc1918_192 = net.IPNet{IP: net.ParseIP("192.168.0.0"), Mask: net.CIDRMask(16, 32)}
+var rfc1918_172 = net.IPNet{IP: net.ParseIP("172.16.0.0"), Mask: net.CIDRMask(12, 32)}
+var rfc3849 = net.IPNet{IP: net.ParseIP("2001:0DB8::"), Mask: net.CIDRMask(32, 128)}
+var rfc3927 = net.IPNet{IP: net.ParseIP("169.254.0.0"), Mask: net.CIDRMask(16, 32)}
+var rfc3964 = net.IPNet{IP: net.ParseIP("2002::"), Mask: net.CIDRMask(16, 128)}
+var rfc4193 = net.IPNet{IP: net.ParseIP("FC00::"), Mask: net.CIDRMask(7, 128)} 
+var rfc4380 = net.IPNet{IP: net.ParseIP("2001::"), Mask: net.CIDRMask(32, 128)} +var rfc4843 = net.IPNet{IP: net.ParseIP("2001:10::"), Mask: net.CIDRMask(28, 128)} +var rfc4862 = net.IPNet{IP: net.ParseIP("FE80::"), Mask: net.CIDRMask(64, 128)} +var rfc6052 = net.IPNet{IP: net.ParseIP("64:FF9B::"), Mask: net.CIDRMask(96, 128)} +var rfc6145 = net.IPNet{IP: net.ParseIP("::FFFF:0:0:0"), Mask: net.CIDRMask(96, 128)} +var zero4 = net.IPNet{IP: net.ParseIP("0.0.0.0"), Mask: net.CIDRMask(8, 32)} + +func (na *NetAddress) RFC1918() bool { + return rfc1918_10.Contains(na.IP) || + rfc1918_192.Contains(na.IP) || + rfc1918_172.Contains(na.IP) +} +func (na *NetAddress) RFC3849() bool { return rfc3849.Contains(na.IP) } +func (na *NetAddress) RFC3927() bool { return rfc3927.Contains(na.IP) } +func (na *NetAddress) RFC3964() bool { return rfc3964.Contains(na.IP) } +func (na *NetAddress) RFC4193() bool { return rfc4193.Contains(na.IP) } +func (na *NetAddress) RFC4380() bool { return rfc4380.Contains(na.IP) } +func (na *NetAddress) RFC4843() bool { return rfc4843.Contains(na.IP) } +func (na *NetAddress) RFC4862() bool { return rfc4862.Contains(na.IP) } +func (na *NetAddress) RFC6052() bool { return rfc6052.Contains(na.IP) } +func (na *NetAddress) RFC6145() bool { return rfc6145.Contains(na.IP) } diff --git a/p2p/netaddress_test.go b/p2p/netaddress_test.go new file mode 100644 index 000000000..8c60da256 --- /dev/null +++ b/p2p/netaddress_test.go @@ -0,0 +1,114 @@ +package p2p + +import ( + "net" + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestNewNetAddress(t *testing.T) { + assert, require := assert.New(t), require.New(t) + + tcpAddr, err := net.ResolveTCPAddr("tcp", "127.0.0.1:8080") + require.Nil(err) + addr := NewNetAddress(tcpAddr) + + assert.Equal("127.0.0.1:8080", addr.String()) + + assert.NotPanics(func() { + NewNetAddress(&net.UDPAddr{IP: net.ParseIP("127.0.0.1"), Port: 8000}) + }, "Calling NewNetAddress with UDPAddr 
should not panic in testing") +} + +func TestNewNetAddressString(t *testing.T) { + assert := assert.New(t) + + tests := []struct { + addr string + correct bool + }{ + {"127.0.0.1:8080", true}, + // {"127.0.0:8080", false}, + {"a", false}, + {"127.0.0.1:a", false}, + {"a:8080", false}, + {"8082", false}, + {"127.0.0:8080000", false}, + } + + for _, tc := range tests { + addr, err := NewNetAddressString(tc.addr) + if tc.correct { + if assert.Nil(err, tc.addr) { + assert.Equal(tc.addr, addr.String()) + } + } else { + assert.NotNil(err, tc.addr) + } + } +} + +func TestNewNetAddressStrings(t *testing.T) { + assert, require := assert.New(t), require.New(t) + addrs, err := NewNetAddressStrings([]string{"127.0.0.1:8080", "127.0.0.2:8080"}) + require.Nil(err) + + assert.Equal(2, len(addrs)) +} + +func TestNewNetAddressIPPort(t *testing.T) { + assert := assert.New(t) + addr := NewNetAddressIPPort(net.ParseIP("127.0.0.1"), 8080) + + assert.Equal("127.0.0.1:8080", addr.String()) +} + +func TestNetAddressProperties(t *testing.T) { + assert, require := assert.New(t), require.New(t) + + // TODO add more test cases + tests := []struct { + addr string + valid bool + local bool + routable bool + }{ + {"127.0.0.1:8080", true, true, false}, + {"ya.ru:80", true, false, true}, + } + + for _, tc := range tests { + addr, err := NewNetAddressString(tc.addr) + require.Nil(err) + + assert.Equal(tc.valid, addr.Valid()) + assert.Equal(tc.local, addr.Local()) + assert.Equal(tc.routable, addr.Routable()) + } +} + +func TestNetAddressReachabilityTo(t *testing.T) { + assert, require := assert.New(t), require.New(t) + + // TODO add more test cases + tests := []struct { + addr string + other string + reachability int + }{ + {"127.0.0.1:8080", "127.0.0.1:8081", 0}, + {"ya.ru:80", "127.0.0.1:8080", 1}, + } + + for _, tc := range tests { + addr, err := NewNetAddressString(tc.addr) + require.Nil(err) + + other, err := NewNetAddressString(tc.other) + require.Nil(err) + + assert.Equal(tc.reachability,
addr.ReachabilityTo(other)) + } +} diff --git a/p2p/peer.go b/p2p/peer.go new file mode 100644 index 000000000..2602206c1 --- /dev/null +++ b/p2p/peer.go @@ -0,0 +1,303 @@ +package p2p + +import ( + "fmt" + "io" + "net" + "time" + + "github.com/pkg/errors" + crypto "github.com/tendermint/go-crypto" + wire "github.com/tendermint/go-wire" + cmn "github.com/tendermint/tmlibs/common" +) + +// Peer could be marked as persistent, in which case you can use +// Redial function to reconnect. Note that inbound peers can't be +// made persistent. They should be made persistent on the other end. +// +// Before using a peer, you will need to perform a handshake on connection. +type Peer struct { + cmn.BaseService + + outbound bool + + conn net.Conn // source connection + mconn *MConnection // multiplex connection + + persistent bool + config *PeerConfig + + *NodeInfo + Key string + Data *cmn.CMap // User data. +} + +// PeerConfig is a Peer configuration. +type PeerConfig struct { + AuthEnc bool `mapstructure:"auth_enc"` // authenticated encryption + + // times are in seconds + HandshakeTimeout time.Duration `mapstructure:"handshake_timeout"` + DialTimeout time.Duration `mapstructure:"dial_timeout"` + + MConfig *MConnConfig `mapstructure:"connection"` + + Fuzz bool `mapstructure:"fuzz"` // fuzz connection (for testing) + FuzzConfig *FuzzConnConfig `mapstructure:"fuzz_config"` +} + +// DefaultPeerConfig returns the default config. 
+func DefaultPeerConfig() *PeerConfig { + return &PeerConfig{ + AuthEnc: true, + HandshakeTimeout: 20, // * time.Second, + DialTimeout: 3, // * time.Second, + MConfig: DefaultMConnConfig(), + Fuzz: false, + FuzzConfig: DefaultFuzzConnConfig(), + } +} + +func newOutboundPeer(addr *NetAddress, reactorsByCh map[byte]Reactor, chDescs []*ChannelDescriptor, onPeerError func(*Peer, interface{}), ourNodePrivKey crypto.PrivKeyEd25519) (*Peer, error) { + return newOutboundPeerWithConfig(addr, reactorsByCh, chDescs, onPeerError, ourNodePrivKey, DefaultPeerConfig()) +} + +func newOutboundPeerWithConfig(addr *NetAddress, reactorsByCh map[byte]Reactor, chDescs []*ChannelDescriptor, onPeerError func(*Peer, interface{}), ourNodePrivKey crypto.PrivKeyEd25519, config *PeerConfig) (*Peer, error) { + conn, err := dial(addr, config) + if err != nil { + return nil, errors.Wrap(err, "Error creating peer") + } + + peer, err := newPeerFromConnAndConfig(conn, true, reactorsByCh, chDescs, onPeerError, ourNodePrivKey, config) + if err != nil { + conn.Close() + return nil, err + } + return peer, nil +} + +func newInboundPeer(conn net.Conn, reactorsByCh map[byte]Reactor, chDescs []*ChannelDescriptor, onPeerError func(*Peer, interface{}), ourNodePrivKey crypto.PrivKeyEd25519) (*Peer, error) { + return newInboundPeerWithConfig(conn, reactorsByCh, chDescs, onPeerError, ourNodePrivKey, DefaultPeerConfig()) +} + +func newInboundPeerWithConfig(conn net.Conn, reactorsByCh map[byte]Reactor, chDescs []*ChannelDescriptor, onPeerError func(*Peer, interface{}), ourNodePrivKey crypto.PrivKeyEd25519, config *PeerConfig) (*Peer, error) { + return newPeerFromConnAndConfig(conn, false, reactorsByCh, chDescs, onPeerError, ourNodePrivKey, config) +} + +func newPeerFromConnAndConfig(rawConn net.Conn, outbound bool, reactorsByCh map[byte]Reactor, chDescs []*ChannelDescriptor, onPeerError func(*Peer, interface{}), ourNodePrivKey crypto.PrivKeyEd25519, config *PeerConfig) (*Peer, error) { + conn := rawConn + + // 
Fuzz connection + if config.Fuzz { + // so we have time to do peer handshakes and get set up + conn = FuzzConnAfterFromConfig(conn, 10*time.Second, config.FuzzConfig) + } + + // Encrypt connection + if config.AuthEnc { + conn.SetDeadline(time.Now().Add(config.HandshakeTimeout * time.Second)) + + var err error + conn, err = MakeSecretConnection(conn, ourNodePrivKey) + if err != nil { + return nil, errors.Wrap(err, "Error creating peer") + } + } + + // Key and NodeInfo are set after Handshake + p := &Peer{ + outbound: outbound, + conn: conn, + config: config, + Data: cmn.NewCMap(), + } + + p.mconn = createMConnection(conn, p, reactorsByCh, chDescs, onPeerError, config.MConfig) + + p.BaseService = *cmn.NewBaseService(nil, "Peer", p) + + return p, nil +} + +// CloseConn should be used when the peer was created, but never started. +func (p *Peer) CloseConn() { + p.conn.Close() +} + +// makePersistent marks the peer as persistent. +func (p *Peer) makePersistent() { + if !p.outbound { + panic("inbound peers can't be made persistent") + } + + p.persistent = true +} + +// IsPersistent returns true if the peer is persistent, false otherwise. +func (p *Peer) IsPersistent() bool { + return p.persistent +} + +// HandshakeTimeout performs a handshake between a given node and the peer.
+// NOTE: blocking +func (p *Peer) HandshakeTimeout(ourNodeInfo *NodeInfo, timeout time.Duration) error { + // Set deadline for handshake so we don't block forever on conn.ReadFull + p.conn.SetDeadline(time.Now().Add(timeout)) + + var peerNodeInfo = new(NodeInfo) + var err1 error + var err2 error + cmn.Parallel( + func() { + var n int + wire.WriteBinary(ourNodeInfo, p.conn, &n, &err1) + }, + func() { + var n int + wire.ReadBinary(peerNodeInfo, p.conn, maxNodeInfoSize, &n, &err2) + p.Logger.Info("Peer handshake", "peerNodeInfo", peerNodeInfo) + }) + if err1 != nil { + return errors.Wrap(err1, "Error during handshake/write") + } + if err2 != nil { + return errors.Wrap(err2, "Error during handshake/read") + } + + if p.config.AuthEnc { + // Check that the professed PubKey matches the sconn's. + if !peerNodeInfo.PubKey.Equals(p.PubKey().Wrap()) { + return fmt.Errorf("Ignoring connection with mismatching pubkey: %v vs %v", + peerNodeInfo.PubKey, p.PubKey()) + } + } + + // Remove deadline + p.conn.SetDeadline(time.Time{}) + + peerNodeInfo.RemoteAddr = p.Addr().String() + + p.NodeInfo = peerNodeInfo + p.Key = peerNodeInfo.PubKey.KeyString() + + return nil +} + +// Addr returns peer's remote network address. +func (p *Peer) Addr() net.Addr { + return p.conn.RemoteAddr() +} + +// PubKey returns peer's public key. +func (p *Peer) PubKey() crypto.PubKeyEd25519 { + if p.config.AuthEnc { + return p.conn.(*SecretConnection).RemotePubKey() + } + if p.NodeInfo == nil { + panic("Attempt to get peer's PubKey before calling Handshake") + } + // Return the pubkey reported during the handshake. + return p.NodeInfo.PubKey +} + +// OnStart implements BaseService. +func (p *Peer) OnStart() error { + p.BaseService.OnStart() + _, err := p.mconn.Start() + return err +} + +// OnStop implements BaseService. +func (p *Peer) OnStop() { + p.BaseService.OnStop() + p.mconn.Stop() +} + +// Connection returns underlying MConnection.
+func (p *Peer) Connection() *MConnection { + return p.mconn +} + +// IsOutbound returns true if the connection is outbound, false otherwise. +func (p *Peer) IsOutbound() bool { + return p.outbound +} + +// Send msg to the channel identified by chID byte. Returns false if the send +// queue is full after timeout, specified by MConnection. +func (p *Peer) Send(chID byte, msg interface{}) bool { + if !p.IsRunning() { + // see Switch#Broadcast, where we fetch the list of peers and loop over + // them - while we're looping, one peer may be removed and stopped. + return false + } + return p.mconn.Send(chID, msg) +} + +// TrySend msg to the channel identified by chID byte. Immediately returns +// false if the send queue is full. +func (p *Peer) TrySend(chID byte, msg interface{}) bool { + if !p.IsRunning() { + return false + } + return p.mconn.TrySend(chID, msg) +} + +// CanSend returns true if the send queue is not full, false otherwise. +func (p *Peer) CanSend(chID byte) bool { + if !p.IsRunning() { + return false + } + return p.mconn.CanSend(chID) +} + +// WriteTo writes the peer's public key to w. +func (p *Peer) WriteTo(w io.Writer) (n int64, err error) { + var n_ int + wire.WriteString(p.Key, w, &n_, &err) + n += int64(n_) + return +} + +// String representation. +func (p *Peer) String() string { + if p.outbound { + return fmt.Sprintf("Peer{%v %v out}", p.mconn, p.Key[:12]) + } + + return fmt.Sprintf("Peer{%v %v in}", p.mconn, p.Key[:12]) +} + +// Equals reports whether two peers represent the same node. +func (p *Peer) Equals(other *Peer) bool { + return p.Key == other.Key +} + +// Get the data for a given key.
+func (p *Peer) Get(key string) interface{} { + return p.Data.Get(key) +} + +func dial(addr *NetAddress, config *PeerConfig) (net.Conn, error) { + conn, err := addr.DialTimeout(config.DialTimeout * time.Second) + if err != nil { + return nil, err + } + return conn, nil +} + +func createMConnection(conn net.Conn, p *Peer, reactorsByCh map[byte]Reactor, chDescs []*ChannelDescriptor, onPeerError func(*Peer, interface{}), config *MConnConfig) *MConnection { + onReceive := func(chID byte, msgBytes []byte) { + reactor := reactorsByCh[chID] + if reactor == nil { + cmn.PanicSanity(cmn.Fmt("Unknown channel %X", chID)) + } + reactor.Receive(chID, p, msgBytes) + } + + onError := func(r interface{}) { + onPeerError(p, r) + } + + return NewMConnectionWithConfig(conn, chDescs, onReceive, onError, config) +} diff --git a/p2p/peer_set.go b/p2p/peer_set.go new file mode 100644 index 000000000..c5206d2d5 --- /dev/null +++ b/p2p/peer_set.go @@ -0,0 +1,113 @@ +package p2p + +import ( + "sync" +) + +// IPeerSet has an (immutable) subset of the methods of PeerSet. +type IPeerSet interface { + Has(key string) bool + Get(key string) *Peer + List() []*Peer + Size() int +} + +//----------------------------------------------------------------------------- + +// PeerSet is a special structure for keeping a table of peers. +// Iteration over the peers is super fast and thread-safe. +type PeerSet struct { + mtx sync.Mutex + lookup map[string]*peerSetItem + list []*Peer +} + +type peerSetItem struct { + peer *Peer + index int +} + +func NewPeerSet() *PeerSet { + return &PeerSet{ + lookup: make(map[string]*peerSetItem), + list: make([]*Peer, 0, 256), + } +} + +// Add adds the peer to the set. It returns ErrSwitchDuplicatePeer if a peer +// with the same key (PubKeyEd25519) is already present. +func (ps *PeerSet) Add(peer *Peer) error { + ps.mtx.Lock() + defer ps.mtx.Unlock() + if ps.lookup[peer.Key] != nil { + return ErrSwitchDuplicatePeer + } + + index := len(ps.list) + // Appending is safe even with other goroutines + // iterating over the ps.list slice.
+ ps.list = append(ps.list, peer) + ps.lookup[peer.Key] = &peerSetItem{peer, index} + return nil +} + +func (ps *PeerSet) Has(peerKey string) bool { + ps.mtx.Lock() + defer ps.mtx.Unlock() + _, ok := ps.lookup[peerKey] + return ok +} + +func (ps *PeerSet) Get(peerKey string) *Peer { + ps.mtx.Lock() + defer ps.mtx.Unlock() + item, ok := ps.lookup[peerKey] + if ok { + return item.peer + } else { + return nil + } +} + +func (ps *PeerSet) Remove(peer *Peer) { + ps.mtx.Lock() + defer ps.mtx.Unlock() + item := ps.lookup[peer.Key] + if item == nil { + return + } + + index := item.index + // Copy the list but without the last element. + // (we must copy because we're mutating the list) + newList := make([]*Peer, len(ps.list)-1) + copy(newList, ps.list) + // If it's the last peer, that's an easy special case. + if index == len(ps.list)-1 { + ps.list = newList + delete(ps.lookup, peer.Key) + return + } + + // Move the last item from ps.list to "index" in list. + lastPeer := ps.list[len(ps.list)-1] + lastPeerKey := lastPeer.Key + lastPeerItem := ps.lookup[lastPeerKey] + newList[index] = lastPeer + lastPeerItem.index = index + ps.list = newList + delete(ps.lookup, peer.Key) +} + +func (ps *PeerSet) Size() int { + ps.mtx.Lock() + defer ps.mtx.Unlock() + return len(ps.list) +} + +// List returns the threadsafe list of peers. The returned slice is shared, +// so callers must not modify it.
+func (ps *PeerSet) List() []*Peer { + ps.mtx.Lock() + defer ps.mtx.Unlock() + return ps.list +} diff --git a/p2p/peer_set_test.go b/p2p/peer_set_test.go new file mode 100644 index 000000000..9214b2eb4 --- /dev/null +++ b/p2p/peer_set_test.go @@ -0,0 +1,67 @@ +package p2p + +import ( + "math/rand" + "testing" + + cmn "github.com/tendermint/tmlibs/common" +) + +// Returns an empty dummy peer +func randPeer() *Peer { + return &Peer{ + Key: cmn.RandStr(12), + NodeInfo: &NodeInfo{ + RemoteAddr: cmn.Fmt("%v.%v.%v.%v:46656", rand.Int()%256, rand.Int()%256, rand.Int()%256, rand.Int()%256), + ListenAddr: cmn.Fmt("%v.%v.%v.%v:46656", rand.Int()%256, rand.Int()%256, rand.Int()%256, rand.Int()%256), + }, + } +} + +func TestAddRemoveOne(t *testing.T) { + peerSet := NewPeerSet() + + peer := randPeer() + err := peerSet.Add(peer) + if err != nil { + t.Errorf("Failed to add new peer") + } + if peerSet.Size() != 1 { + t.Errorf("Failed to add new peer and increment size") + } + + peerSet.Remove(peer) + if peerSet.Has(peer.Key) { + t.Errorf("Failed to remove peer") + } + if peerSet.Size() != 0 { + t.Errorf("Failed to remove peer and decrement size") + } +} + +func TestAddRemoveMany(t *testing.T) { + peerSet := NewPeerSet() + + peers := []*Peer{} + N := 100 + for i := 0; i < N; i++ { + peer := randPeer() + if err := peerSet.Add(peer); err != nil { + t.Errorf("Failed to add new peer") + } + if peerSet.Size() != i+1 { + t.Errorf("Failed to add new peer and increment size") + } + peers = append(peers, peer) + } + + for i, peer := range peers { + peerSet.Remove(peer) + if peerSet.Has(peer.Key) { + t.Errorf("Failed to remove peer") + } + if peerSet.Size() != len(peers)-i-1 { + t.Errorf("Failed to remove peer and decrement size") + } + } +} diff --git a/p2p/peer_test.go b/p2p/peer_test.go new file mode 100644 index 000000000..0ac776347 --- /dev/null +++ b/p2p/peer_test.go @@ -0,0 +1,156 @@ +package p2p + +import ( + golog "log" + "net" + "testing" + "time" + + 
"github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + + crypto "github.com/tendermint/go-crypto" +) + +func TestPeerBasic(t *testing.T) { + assert, require := assert.New(t), require.New(t) + + // simulate remote peer + rp := &remotePeer{PrivKey: crypto.GenPrivKeyEd25519(), Config: DefaultPeerConfig()} + rp.Start() + defer rp.Stop() + + p, err := createOutboundPeerAndPerformHandshake(rp.Addr(), DefaultPeerConfig()) + require.Nil(err) + + p.Start() + defer p.Stop() + + assert.True(p.IsRunning()) + assert.True(p.IsOutbound()) + assert.False(p.IsPersistent()) + p.makePersistent() + assert.True(p.IsPersistent()) + assert.Equal(rp.Addr().String(), p.Addr().String()) + assert.Equal(rp.PubKey(), p.PubKey()) +} + +func TestPeerWithoutAuthEnc(t *testing.T) { + assert, require := assert.New(t), require.New(t) + + config := DefaultPeerConfig() + config.AuthEnc = false + + // simulate remote peer + rp := &remotePeer{PrivKey: crypto.GenPrivKeyEd25519(), Config: config} + rp.Start() + defer rp.Stop() + + p, err := createOutboundPeerAndPerformHandshake(rp.Addr(), config) + require.Nil(err) + + p.Start() + defer p.Stop() + + assert.True(p.IsRunning()) +} + +func TestPeerSend(t *testing.T) { + assert, require := assert.New(t), require.New(t) + + config := DefaultPeerConfig() + config.AuthEnc = false + + // simulate remote peer + rp := &remotePeer{PrivKey: crypto.GenPrivKeyEd25519(), Config: config} + rp.Start() + defer rp.Stop() + + p, err := createOutboundPeerAndPerformHandshake(rp.Addr(), config) + require.Nil(err) + + p.Start() + defer p.Stop() + + assert.True(p.CanSend(0x01)) + assert.True(p.Send(0x01, "Asylum")) +} + +func createOutboundPeerAndPerformHandshake(addr *NetAddress, config *PeerConfig) (*Peer, error) { + chDescs := []*ChannelDescriptor{ + &ChannelDescriptor{ID: 0x01, Priority: 1}, + } + reactorsByCh := map[byte]Reactor{0x01: NewTestReactor(chDescs, true)} + pk := crypto.GenPrivKeyEd25519() + p, err := newOutboundPeerWithConfig(addr, 
reactorsByCh, chDescs, func(p *Peer, r interface{}) {}, pk, config) + if err != nil { + return nil, err + } + err = p.HandshakeTimeout(&NodeInfo{ + PubKey: pk.PubKey().Unwrap().(crypto.PubKeyEd25519), + Moniker: "host_peer", + Network: "testing", + Version: "123.123.123", + }, 1*time.Second) + if err != nil { + return nil, err + } + return p, nil +} + +type remotePeer struct { + PrivKey crypto.PrivKeyEd25519 + Config *PeerConfig + addr *NetAddress + quit chan struct{} +} + +func (p *remotePeer) Addr() *NetAddress { + return p.addr +} + +func (p *remotePeer) PubKey() crypto.PubKeyEd25519 { + return p.PrivKey.PubKey().Unwrap().(crypto.PubKeyEd25519) +} + +func (p *remotePeer) Start() { + l, e := net.Listen("tcp", "127.0.0.1:0") // any available address + if e != nil { + golog.Fatalf("net.Listen tcp :0: %+v", e) + } + p.addr = NewNetAddress(l.Addr()) + p.quit = make(chan struct{}) + go p.accept(l) +} + +func (p *remotePeer) Stop() { + close(p.quit) +} + +func (p *remotePeer) accept(l net.Listener) { + for { + conn, err := l.Accept() + if err != nil { + golog.Fatalf("Failed to accept conn: %+v", err) + } + peer, err := newInboundPeerWithConfig(conn, make(map[byte]Reactor), make([]*ChannelDescriptor, 0), func(p *Peer, r interface{}) {}, p.PrivKey, p.Config) + if err != nil { + golog.Fatalf("Failed to create a peer: %+v", err) + } + err = peer.HandshakeTimeout(&NodeInfo{ + PubKey: p.PrivKey.PubKey().Unwrap().(crypto.PubKeyEd25519), + Moniker: "remote_peer", + Network: "testing", + Version: "123.123.123", + }, 1*time.Second) + if err != nil { + golog.Fatalf("Failed to perform handshake: %+v", err) + } + select { + case <-p.quit: + conn.Close() + return + default: + } + } +} diff --git a/p2p/pex_reactor.go b/p2p/pex_reactor.go new file mode 100644 index 000000000..269a8d006 --- /dev/null +++ b/p2p/pex_reactor.go @@ -0,0 +1,358 @@ +package p2p + +import ( + "bytes" + "fmt" + "math/rand" + "reflect" + "time" + + wire "github.com/tendermint/go-wire" + cmn 
"github.com/tendermint/tmlibs/common" +) + +const ( + // PexChannel is a channel for PEX messages + PexChannel = byte(0x00) + + // period to ensure peers connected + defaultEnsurePeersPeriod = 30 * time.Second + minNumOutboundPeers = 10 + maxPexMessageSize = 1048576 // 1MB + + // maximum messages one peer can send to us during `msgCountByPeerFlushInterval` + defaultMaxMsgCountByPeer = 1000 + msgCountByPeerFlushInterval = 1 * time.Hour +) + +// PEXReactor handles PEX (peer exchange) and ensures that an +// adequate number of peers are connected to the switch. +// +// It uses `AddrBook` (address book) to store `NetAddress`es of the peers. +// +// ## Preventing abuse +// +// For now, it just limits the number of messages from one peer to +// `defaultMaxMsgCountByPeer` messages per `msgCountByPeerFlushInterval` (1000 +// msg/hour). +// +// NOTE [2017-01-17]: +// Limiting is fine for now. Maybe down the road we want to keep track of the +// quality of peer messages so if peerA keeps telling us about peers we can't +// connect to then maybe we should care less about peerA. But I don't think +// that kind of complexity is priority right now. +type PEXReactor struct { + BaseReactor + + sw *Switch + book *AddrBook + ensurePeersPeriod time.Duration + + // tracks message count by peer, so we can prevent abuse + msgCountByPeer *cmn.CMap + maxMsgCountByPeer uint16 +} + +// NewPEXReactor creates new PEX reactor. 
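As an aside, the per-peer limiting described in the comment above (at most `defaultMaxMsgCountByPeer` messages per `msgCountByPeerFlushInterval`) is a fixed-window counter: increment on every message, compare against the cap, and periodically clear the whole map to start a new window. A minimal standalone sketch of that idea, with hypothetical names rather than the reactor's actual fields:

```go
package main

import "fmt"

// fixedWindowLimiter counts messages per peer address and reports when a
// peer exceeds the allowed number within the current window. Clearing the
// map (as flushMsgCountByPeer does on a ticker) starts a new window.
type fixedWindowLimiter struct {
	max    uint16
	counts map[string]uint16
}

func newFixedWindowLimiter(max uint16) *fixedWindowLimiter {
	return &fixedWindowLimiter{max: max, counts: make(map[string]uint16)}
}

func (l *fixedWindowLimiter) increment(addr string) { l.counts[addr]++ }

func (l *fixedWindowLimiter) reachedMax(addr string) bool {
	return l.counts[addr] >= l.max
}

// flush resets all counters, beginning a new window.
func (l *fixedWindowLimiter) flush() { l.counts = make(map[string]uint16) }

func main() {
	lim := newFixedWindowLimiter(3)
	for i := 0; i < 3; i++ {
		lim.increment("1.2.3.4:46656")
	}
	fmt.Println(lim.reachedMax("1.2.3.4:46656")) // true: cap hit this window
	lim.flush()
	fmt.Println(lim.reachedMax("1.2.3.4:46656")) // false: window was reset
}
```

The real reactor uses a concurrent map (`cmn.CMap`) for the same bookkeeping; the sketch omits locking for brevity.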
+func NewPEXReactor(b *AddrBook) *PEXReactor { + r := &PEXReactor{ + book: b, + ensurePeersPeriod: defaultEnsurePeersPeriod, + msgCountByPeer: cmn.NewCMap(), + maxMsgCountByPeer: defaultMaxMsgCountByPeer, + } + r.BaseReactor = *NewBaseReactor("PEXReactor", r) + return r +} + +// OnStart implements BaseService +func (r *PEXReactor) OnStart() error { + r.BaseReactor.OnStart() + r.book.Start() + go r.ensurePeersRoutine() + go r.flushMsgCountByPeer() + return nil +} + +// OnStop implements BaseService +func (r *PEXReactor) OnStop() { + r.BaseReactor.OnStop() + r.book.Stop() +} + +// GetChannels implements Reactor +func (r *PEXReactor) GetChannels() []*ChannelDescriptor { + return []*ChannelDescriptor{ + &ChannelDescriptor{ + ID: PexChannel, + Priority: 1, + SendQueueCapacity: 10, + }, + } +} + +// AddPeer implements Reactor by adding peer to the address book (if inbound) +// or by requesting more addresses (if outbound). +func (r *PEXReactor) AddPeer(p *Peer) { + if p.IsOutbound() { + // For outbound peers, the address is already in the books. + // Either it was added in DialSeeds or when we + // received the peer's address in r.Receive + if r.book.NeedMoreAddrs() { + r.RequestPEX(p) + } + } else { // For inbound connections, the peer is its own source + addr, err := NewNetAddressString(p.ListenAddr) + if err != nil { + // this should never happen + r.Logger.Error("Error in AddPeer: invalid peer address", "addr", p.ListenAddr, "error", err) + return + } + r.book.AddAddress(addr, addr) + } +} + +// RemovePeer implements Reactor. +func (r *PEXReactor) RemovePeer(p *Peer, reason interface{}) { + // If we aren't keeping track of local temp data for each peer here, then we + // don't have to do anything. +} + +// Receive implements Reactor by handling incoming PEX messages. 
+func (r *PEXReactor) Receive(chID byte, src *Peer, msgBytes []byte) { + srcAddr := src.Connection().RemoteAddress + srcAddrStr := srcAddr.String() + + r.IncrementMsgCountForPeer(srcAddrStr) + if r.ReachedMaxMsgCountForPeer(srcAddrStr) { + r.Logger.Error("Maximum number of messages reached for peer", "peer", srcAddrStr) + // TODO remove src from peers? + return + } + + _, msg, err := DecodeMessage(msgBytes) + if err != nil { + r.Logger.Error("Error decoding message", "error", err) + return + } + r.Logger.Info("Received message", "msg", msg) + + switch msg := msg.(type) { + case *pexRequestMessage: + // src requested some peers. + r.SendAddrs(src, r.book.GetSelection()) + case *pexAddrsMessage: + // We received some peer addresses from src. + // (We don't want to get spammed with bad peers) + for _, addr := range msg.Addrs { + if addr != nil { + r.book.AddAddress(addr, srcAddr) + } + } + default: + r.Logger.Error(fmt.Sprintf("Unknown message type %v", reflect.TypeOf(msg))) + } +} + +// RequestPEX asks peer for more addresses. +func (r *PEXReactor) RequestPEX(p *Peer) { + p.Send(PexChannel, struct{ PexMessage }{&pexRequestMessage{}}) +} + +// SendAddrs sends addrs to the peer. +func (r *PEXReactor) SendAddrs(p *Peer, addrs []*NetAddress) { + p.Send(PexChannel, struct{ PexMessage }{&pexAddrsMessage{Addrs: addrs}}) +} + +// SetEnsurePeersPeriod sets period to ensure peers connected. +func (r *PEXReactor) SetEnsurePeersPeriod(d time.Duration) { + r.ensurePeersPeriod = d +} + +// SetMaxMsgCountByPeer sets maximum messages one peer can send to us during 'msgCountByPeerFlushInterval'. +func (r *PEXReactor) SetMaxMsgCountByPeer(v uint16) { + r.maxMsgCountByPeer = v +} + +// ReachedMaxMsgCountForPeer returns true if we received too many +// messages from peer with address `addr`. 
+// NOTE: assumes the value in the CMap is non-nil +func (r *PEXReactor) ReachedMaxMsgCountForPeer(addr string) bool { + return r.msgCountByPeer.Get(addr).(uint16) >= r.maxMsgCountByPeer +} + +// Increment or initialize the msg count for the peer in the CMap +func (r *PEXReactor) IncrementMsgCountForPeer(addr string) { + var count uint16 + countI := r.msgCountByPeer.Get(addr) + if countI != nil { + count = countI.(uint16) + } + count++ + r.msgCountByPeer.Set(addr, count) +} + +// Ensures that sufficient peers are connected. (continuous) +func (r *PEXReactor) ensurePeersRoutine() { + // Randomize when routine starts + ensurePeersPeriodMs := r.ensurePeersPeriod.Nanoseconds() / 1e6 + time.Sleep(time.Duration(rand.Int63n(ensurePeersPeriodMs)) * time.Millisecond) + + // fire once immediately. + r.ensurePeers() + + // fire periodically + ticker := time.NewTicker(r.ensurePeersPeriod) + + for { + select { + case <-ticker.C: + r.ensurePeers() + case <-r.Quit: + ticker.Stop() + return + } + } +} + +// ensurePeers ensures that sufficient peers are connected. (once) +// +// Old bucket / New bucket are arbitrary categories to denote whether an +// address is vetted or not, and this needs to be determined over time via a +// heuristic that we haven't perfected yet, or, perhaps is manually edited by +// the node operator. It should not be used to compute what addresses are +// already connected or not. +// +// TODO Basically, we need to work harder on our good-peer/bad-peer marking. +// What we're currently doing in terms of marking good/bad peers is just a +// placeholder. It should not be the case that an address becomes old/vetted +// upon a single successful connection. 
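The old-bucket/new-bucket trade-off described above is easiest to see with concrete numbers. The implementation that follows computes `newBias := cmn.MinInt(numOutPeers, 8)*10 + 10`, so the bias ramps from 10 with no outbound peers up to a cap of 90; assuming `PickAddress` treats its argument as the percent chance of sampling the new (unvetted) bucket, a poorly connected node strongly prefers vetted addresses. A standalone sketch of just that formula:

```go
package main

import "fmt"

// newBucketBias mirrors the ensurePeers formula:
// cmn.MinInt(numOutPeers, 8)*10 + 10.
// With few outbound peers the bias is low (prefer old/vetted addresses);
// it caps at 90 once there are 8 or more outbound peers.
func newBucketBias(numOutPeers int) int {
	if numOutPeers > 8 {
		numOutPeers = 8
	}
	return numOutPeers*10 + 10
}

func main() {
	for _, n := range []int{0, 1, 4, 8, 20} {
		fmt.Printf("outbound=%d bias=%d\n", n, newBucketBias(n))
	}
}
```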
+func (r *PEXReactor) ensurePeers() { + numOutPeers, _, numDialing := r.Switch.NumPeers() + numToDial := minNumOutboundPeers - (numOutPeers + numDialing) + r.Logger.Info("Ensure peers", "numOutPeers", numOutPeers, "numDialing", numDialing, "numToDial", numToDial) + if numToDial <= 0 { + return + } + + toDial := make(map[string]*NetAddress) + + // Try to pick numToDial addresses to dial. + for i := 0; i < numToDial; i++ { + // The purpose of newBias is to first prioritize old (more vetted) peers + // when we have few connections, but to allow for new (less vetted) peers + // if we already have many connections. This algorithm isn't perfect, but + // it somewhat ensures that we prioritize connecting to more-vetted + // peers. + newBias := cmn.MinInt(numOutPeers, 8)*10 + 10 + var picked *NetAddress + // Try to fetch a new peer 3 times. + // This caps the maximum number of tries to 3 * numToDial. + for j := 0; j < 3; j++ { + try := r.book.PickAddress(newBias) + if try == nil { + break + } + _, alreadySelected := toDial[try.IP.String()] + alreadyDialing := r.Switch.IsDialing(try) + alreadyConnected := r.Switch.Peers().Has(try.IP.String()) + if alreadySelected || alreadyDialing || alreadyConnected { + // r.Logger.Info("Cannot dial address", "addr", try, + // "alreadySelected", alreadySelected, + // "alreadyDialing", alreadyDialing, + // "alreadyConnected", alreadyConnected) + continue + } else { + r.Logger.Info("Will dial address", "addr", try) + picked = try + break + } + } + if picked == nil { + continue + } + toDial[picked.IP.String()] = picked + } + + // Dial picked addresses + for _, item := range toDial { + go func(picked *NetAddress) { + _, err := r.Switch.DialPeerWithAddress(picked, false) + if err != nil { + r.book.MarkAttempt(picked) + } + }(item) + } + + // If we need more addresses, pick a random peer and ask for more. 
+ if r.book.NeedMoreAddrs() { + if peers := r.Switch.Peers().List(); len(peers) > 0 { + i := rand.Int() % len(peers) + peer := peers[i] + r.Logger.Info("No addresses to dial. Sending pexRequest to random peer", "peer", peer) + r.RequestPEX(peer) + } + } +} + +func (r *PEXReactor) flushMsgCountByPeer() { + ticker := time.NewTicker(msgCountByPeerFlushInterval) + + for { + select { + case <-ticker.C: + r.msgCountByPeer.Clear() + case <-r.Quit: + ticker.Stop() + return + } + } +} + +//----------------------------------------------------------------------------- +// Messages + +const ( + msgTypeRequest = byte(0x01) + msgTypeAddrs = byte(0x02) +) + +// PexMessage is a primary type for PEX messages. Underneath, it could contain +// either pexRequestMessage, or pexAddrsMessage messages. +type PexMessage interface{} + +var _ = wire.RegisterInterface( + struct{ PexMessage }{}, + wire.ConcreteType{&pexRequestMessage{}, msgTypeRequest}, + wire.ConcreteType{&pexAddrsMessage{}, msgTypeAddrs}, +) + +// DecodeMessage implements interface registered above. +func DecodeMessage(bz []byte) (msgType byte, msg PexMessage, err error) { + if len(bz) == 0 { + err = fmt.Errorf("empty message bytes") + return + } + msgType = bz[0] + n := new(int) + r := bytes.NewReader(bz) + msg = wire.ReadBinary(struct{ PexMessage }{}, r, maxPexMessageSize, n, &err).(struct{ PexMessage }).PexMessage + return +} + +/* +A pexRequestMessage requests additional peer addresses. +*/ +type pexRequestMessage struct { +} + +func (m *pexRequestMessage) String() string { + return "[pexRequest]" +} + +/* +A message with announced peer addresses.
*/ +type pexAddrsMessage struct { + Addrs []*NetAddress +} + +func (m *pexAddrsMessage) String() string { + return fmt.Sprintf("[pexAddrs %v]", m.Addrs) +} diff --git a/p2p/pex_reactor_test.go b/p2p/pex_reactor_test.go new file mode 100644 index 000000000..2ce131a05 --- /dev/null +++ b/p2p/pex_reactor_test.go @@ -0,0 +1,178 @@ +package p2p + +import ( + "io/ioutil" + "math/rand" + "os" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + wire "github.com/tendermint/go-wire" + cmn "github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/log" +) + +func TestPEXReactorBasic(t *testing.T) { + assert, require := assert.New(t), require.New(t) + + dir, err := ioutil.TempDir("", "pex_reactor") + require.Nil(err) + defer os.RemoveAll(dir) + book := NewAddrBook(dir+"/addrbook.json", true) + book.SetLogger(log.TestingLogger()) + + r := NewPEXReactor(book) + r.SetLogger(log.TestingLogger()) + + assert.NotNil(r) + assert.NotEmpty(r.GetChannels()) +} + +func TestPEXReactorAddRemovePeer(t *testing.T) { + assert, require := assert.New(t), require.New(t) + + dir, err := ioutil.TempDir("", "pex_reactor") + require.Nil(err) + defer os.RemoveAll(dir) + book := NewAddrBook(dir+"/addrbook.json", true) + book.SetLogger(log.TestingLogger()) + + r := NewPEXReactor(book) + r.SetLogger(log.TestingLogger()) + + size := book.Size() + peer := createRandomPeer(false) + + r.AddPeer(peer) + assert.Equal(size+1, book.Size()) + + r.RemovePeer(peer, "peer not available") + assert.Equal(size+1, book.Size()) + + outboundPeer := createRandomPeer(true) + + r.AddPeer(outboundPeer) + assert.Equal(size+1, book.Size(), "outbound peers should not be added to the address book") + + r.RemovePeer(outboundPeer, "peer not available") + assert.Equal(size+1, book.Size()) +} + +func TestPEXReactorRunning(t *testing.T) { + require := require.New(t) + + N := 3 + switches := make([]*Switch, N) + + dir, err := ioutil.TempDir("", "pex_reactor") +
require.Nil(err) + defer os.RemoveAll(dir) + book := NewAddrBook(dir+"/addrbook.json", false) + book.SetLogger(log.TestingLogger()) + + // create switches + for i := 0; i < N; i++ { + switches[i] = makeSwitch(config, i, "127.0.0.1", "123.123.123", func(i int, sw *Switch) *Switch { + sw.SetLogger(log.TestingLogger().With("switch", i)) + + r := NewPEXReactor(book) + r.SetLogger(log.TestingLogger()) + r.SetEnsurePeersPeriod(250 * time.Millisecond) + sw.AddReactor("pex", r) + return sw + }) + } + + // fill the address book and add listeners + for _, s := range switches { + addr, _ := NewNetAddressString(s.NodeInfo().ListenAddr) + book.AddAddress(addr, addr) + s.AddListener(NewDefaultListener("tcp", s.NodeInfo().ListenAddr, true, log.TestingLogger())) + } + + // start switches + for _, s := range switches { + _, err := s.Start() // start switch and reactors + require.Nil(err) + } + + time.Sleep(1 * time.Second) + + // check peers are connected after some time + for _, s := range switches { + outbound, inbound, _ := s.NumPeers() + if outbound+inbound == 0 { + t.Errorf("%v expected to be connected to at least one peer", s.NodeInfo().ListenAddr) + } + } + + // stop them + for _, s := range switches { + s.Stop() + } +} + +func TestPEXReactorReceive(t *testing.T) { + assert, require := assert.New(t), require.New(t) + + dir, err := ioutil.TempDir("", "pex_reactor") + require.Nil(err) + defer os.RemoveAll(dir) + book := NewAddrBook(dir+"/addrbook.json", true) + book.SetLogger(log.TestingLogger()) + + r := NewPEXReactor(book) + r.SetLogger(log.TestingLogger()) + + peer := createRandomPeer(false) + + size := book.Size() + netAddr, _ := NewNetAddressString(peer.ListenAddr) + addrs := []*NetAddress{netAddr} + msg := wire.BinaryBytes(struct{ PexMessage }{&pexAddrsMessage{Addrs: addrs}}) + r.Receive(PexChannel, peer, msg) + assert.Equal(size+1, book.Size()) + + msg = wire.BinaryBytes(struct{ PexMessage }{&pexRequestMessage{}}) + r.Receive(PexChannel, peer, msg) +} + +func
TestPEXReactorAbuseFromPeer(t *testing.T) { + assert, require := assert.New(t), require.New(t) + + dir, err := ioutil.TempDir("", "pex_reactor") + require.Nil(err) + defer os.RemoveAll(dir) + book := NewAddrBook(dir+"addrbook.json", true) + book.SetLogger(log.TestingLogger()) + + r := NewPEXReactor(book) + r.SetLogger(log.TestingLogger()) + r.SetMaxMsgCountByPeer(5) + + peer := createRandomPeer(false) + + msg := wire.BinaryBytes(struct{ PexMessage }{&pexRequestMessage{}}) + for i := 0; i < 10; i++ { + r.Receive(PexChannel, peer, msg) + } + + assert.True(r.ReachedMaxMsgCountForPeer(peer.ListenAddr)) +} + +func createRandomPeer(outbound bool) *Peer { + addr := cmn.Fmt("%v.%v.%v.%v:46656", rand.Int()%256, rand.Int()%256, rand.Int()%256, rand.Int()%256) + netAddr, _ := NewNetAddressString(addr) + p := &Peer{ + Key: cmn.RandStr(12), + NodeInfo: &NodeInfo{ + ListenAddr: addr, + }, + outbound: outbound, + mconn: &MConnection{RemoteAddress: netAddr}, + } + p.SetLogger(log.TestingLogger().With("peer", addr)) + return p +} diff --git a/p2p/secret_connection.go b/p2p/secret_connection.go new file mode 100644 index 000000000..24cae0f61 --- /dev/null +++ b/p2p/secret_connection.go @@ -0,0 +1,346 @@ +// Uses nacl's secret_box to encrypt a net.Conn. +// It is (meant to be) an implementation of the STS protocol. +// Note we do not (yet) assume that a remote peer's pubkey +// is known ahead of time, and thus we are technically +// still vulnerable to MITM. (TODO!) 
+// See docs/sts-final.pdf for more info +package p2p + +import ( + "bytes" + crand "crypto/rand" + "crypto/sha256" + "encoding/binary" + "errors" + "io" + "net" + "time" + + "golang.org/x/crypto/nacl/box" + "golang.org/x/crypto/nacl/secretbox" + "golang.org/x/crypto/ripemd160" + + "github.com/tendermint/go-crypto" + "github.com/tendermint/go-wire" + cmn "github.com/tendermint/tmlibs/common" +) + +// 2 + 1024 == 1026 total frame size +const dataLenSize = 2 // uint16 to describe the length, is <= dataMaxSize +const dataMaxSize = 1024 +const totalFrameSize = dataMaxSize + dataLenSize +const sealedFrameSize = totalFrameSize + secretbox.Overhead +const authSigMsgSize = (32 + 1) + (64 + 1) // fixed size (length prefixed) byte arrays + +// Implements net.Conn +type SecretConnection struct { + conn io.ReadWriteCloser + recvBuffer []byte + recvNonce *[24]byte + sendNonce *[24]byte + remPubKey crypto.PubKeyEd25519 + shrSecret *[32]byte // shared secret +} + +// Performs handshake and returns a new authenticated SecretConnection. +// Returns nil if error in handshake. +// Caller should call conn.Close() +// See docs/sts-final.pdf for more information. +func MakeSecretConnection(conn io.ReadWriteCloser, locPrivKey crypto.PrivKeyEd25519) (*SecretConnection, error) { + + locPubKey := locPrivKey.PubKey().Unwrap().(crypto.PubKeyEd25519) + + // Generate ephemeral keys for perfect forward secrecy. + locEphPub, locEphPriv := genEphKeys() + + // Write local ephemeral pubkey and receive one too. + // NOTE: every 32-byte string is accepted as a Curve25519 public key + // (see DJB's Curve25519 paper: http://cr.yp.to/ecdh/curve25519-20060209.pdf) + remEphPub, err := shareEphPubKey(conn, locEphPub) + if err != nil { + return nil, err + } + + // Compute common shared secret. + shrSecret := computeSharedSecret(remEphPub, locEphPriv) + + // Sort by lexical order. + loEphPub, hiEphPub := sort32(locEphPub, remEphPub) + + // Generate nonces to use for secretbox. 
+ recvNonce, sendNonce := genNonces(loEphPub, hiEphPub, locEphPub == loEphPub) + + // Generate common challenge to sign. + challenge := genChallenge(loEphPub, hiEphPub) + + // Construct SecretConnection. + sc := &SecretConnection{ + conn: conn, + recvBuffer: nil, + recvNonce: recvNonce, + sendNonce: sendNonce, + shrSecret: shrSecret, + } + + // Sign the challenge bytes for authentication. + locSignature := signChallenge(challenge, locPrivKey) + + // Share (in secret) each other's pubkey & challenge signature + authSigMsg, err := shareAuthSignature(sc, locPubKey, locSignature) + if err != nil { + return nil, err + } + remPubKey, remSignature := authSigMsg.Key, authSigMsg.Sig + if !remPubKey.VerifyBytes(challenge[:], remSignature) { + return nil, errors.New("Challenge verification failed") + } + + // We've authorized. + sc.remPubKey = remPubKey.Unwrap().(crypto.PubKeyEd25519) + return sc, nil +} + +// Returns authenticated remote pubkey +func (sc *SecretConnection) RemotePubKey() crypto.PubKeyEd25519 { + return sc.remPubKey +} + +// Writes encrypted frames of `sealedFrameSize` +// CONTRACT: data smaller than dataMaxSize is read atomically. 
+func (sc *SecretConnection) Write(data []byte) (n int, err error) { + for 0 < len(data) { + var frame []byte = make([]byte, totalFrameSize) + var chunk []byte + if dataMaxSize < len(data) { + chunk = data[:dataMaxSize] + data = data[dataMaxSize:] + } else { + chunk = data + data = nil + } + chunkLength := len(chunk) + binary.BigEndian.PutUint16(frame, uint16(chunkLength)) + copy(frame[dataLenSize:], chunk) + + // encrypt the frame + var sealedFrame = make([]byte, sealedFrameSize) + secretbox.Seal(sealedFrame[:0], frame, sc.sendNonce, sc.shrSecret) + // fmt.Printf("secretbox.Seal(sealed:%X,sendNonce:%X,shrSecret:%X\n", sealedFrame, sc.sendNonce, sc.shrSecret) + incr2Nonce(sc.sendNonce) + // end encryption + + _, err := sc.conn.Write(sealedFrame) + if err != nil { + return n, err + } else { + n += len(chunk) + } + } + return +} + +// CONTRACT: data smaller than dataMaxSize is read atomically. +func (sc *SecretConnection) Read(data []byte) (n int, err error) { + if 0 < len(sc.recvBuffer) { + n = copy(data, sc.recvBuffer) + sc.recvBuffer = sc.recvBuffer[n:] + return + } + + sealedFrame := make([]byte, sealedFrameSize) + _, err = io.ReadFull(sc.conn, sealedFrame) + if err != nil { + return + } + + // decrypt the frame + var frame = make([]byte, totalFrameSize) + // fmt.Printf("secretbox.Open(sealed:%X,recvNonce:%X,shrSecret:%X\n", sealedFrame, sc.recvNonce, sc.shrSecret) + _, ok := secretbox.Open(frame[:0], sealedFrame, sc.recvNonce, sc.shrSecret) + if !ok { + return n, errors.New("Failed to decrypt SecretConnection") + } + incr2Nonce(sc.recvNonce) + // end decryption + + var chunkLength = binary.BigEndian.Uint16(frame) // read the first two bytes + if chunkLength > dataMaxSize { + return 0, errors.New("chunkLength is greater than dataMaxSize") + } + var chunk = frame[dataLenSize : dataLenSize+chunkLength] + + n = copy(data, chunk) + sc.recvBuffer = chunk[n:] + return +} + +// Implements net.Conn +func (sc *SecretConnection) Close() error { return sc.conn.Close() } 
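The framing logic above is easy to get wrong, so here is a minimal, stdlib-only sketch of just the length-prefix chunking that `Write` performs and `Read` reverses, with the secretbox sealing omitted. The constants mirror `dataLenSize`/`dataMaxSize` from the diff; the helper names (`frameChunks`, `unframe`) are illustrative, not part of the package:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

const (
	dataLenSize = 2    // uint16 big-endian length prefix
	dataMaxSize = 1024 // max payload bytes per frame
)

// frameChunks splits data into fixed-size frames the way
// SecretConnection.Write does, minus the secretbox encryption:
// each frame is dataLenSize+dataMaxSize bytes, zero-padded.
func frameChunks(data []byte) [][]byte {
	var frames [][]byte
	for len(data) > 0 {
		chunk := data
		if len(chunk) > dataMaxSize {
			chunk = data[:dataMaxSize]
		}
		data = data[len(chunk):]
		frame := make([]byte, dataLenSize+dataMaxSize)
		binary.BigEndian.PutUint16(frame, uint16(len(chunk)))
		copy(frame[dataLenSize:], chunk)
		frames = append(frames, frame)
	}
	return frames
}

// unframe recovers the payload of one frame, as Read does after decrypting.
func unframe(frame []byte) []byte {
	n := binary.BigEndian.Uint16(frame)
	return frame[dataLenSize : dataLenSize+int(n)]
}

func main() {
	msg := make([]byte, 1500) // spans two frames: 1024 + 476
	frames := frameChunks(msg)
	fmt.Println(len(frames), len(frames[0]), len(unframe(frames[1])))
	// prints: 2 1026 476
}
```

Because every frame is padded to `totalFrameSize` bytes before sealing, the reader always consumes whole fixed-size frames from the wire; that padding is what lets a write smaller than `dataMaxSize` arrive in a single read.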
+func (sc *SecretConnection) LocalAddr() net.Addr { return sc.conn.(net.Conn).LocalAddr() } +func (sc *SecretConnection) RemoteAddr() net.Addr { return sc.conn.(net.Conn).RemoteAddr() } +func (sc *SecretConnection) SetDeadline(t time.Time) error { return sc.conn.(net.Conn).SetDeadline(t) } +func (sc *SecretConnection) SetReadDeadline(t time.Time) error { + return sc.conn.(net.Conn).SetReadDeadline(t) +} +func (sc *SecretConnection) SetWriteDeadline(t time.Time) error { + return sc.conn.(net.Conn).SetWriteDeadline(t) +} + +func genEphKeys() (ephPub, ephPriv *[32]byte) { + var err error + ephPub, ephPriv, err = box.GenerateKey(crand.Reader) + if err != nil { + cmn.PanicCrisis("Could not generate ephemeral keypairs") + } + return +} + +func shareEphPubKey(conn io.ReadWriteCloser, locEphPub *[32]byte) (remEphPub *[32]byte, err error) { + var err1, err2 error + + cmn.Parallel( + func() { + _, err1 = conn.Write(locEphPub[:]) + }, + func() { + remEphPub = new([32]byte) + _, err2 = io.ReadFull(conn, remEphPub[:]) + }, + ) + + if err1 != nil { + return nil, err1 + } + if err2 != nil { + return nil, err2 + } + + return remEphPub, nil +} + +func computeSharedSecret(remPubKey, locPrivKey *[32]byte) (shrSecret *[32]byte) { + shrSecret = new([32]byte) + box.Precompute(shrSecret, remPubKey, locPrivKey) + return +} + +func sort32(foo, bar *[32]byte) (lo, hi *[32]byte) { + if bytes.Compare(foo[:], bar[:]) < 0 { + lo = foo + hi = bar + } else { + lo = bar + hi = foo + } + return +} + +func genNonces(loPubKey, hiPubKey *[32]byte, locIsLo bool) (recvNonce, sendNonce *[24]byte) { + nonce1 := hash24(append(loPubKey[:], hiPubKey[:]...)) + nonce2 := new([24]byte) + copy(nonce2[:], nonce1[:]) + nonce2[len(nonce2)-1] ^= 0x01 + if locIsLo { + recvNonce = nonce1 + sendNonce = nonce2 + } else { + recvNonce = nonce2 + sendNonce = nonce1 + } + return +} + +func genChallenge(loPubKey, hiPubKey *[32]byte) (challenge *[32]byte) { + return hash32(append(loPubKey[:], hiPubKey[:]...)) +} + +func 
signChallenge(challenge *[32]byte, locPrivKey crypto.PrivKeyEd25519) (signature crypto.SignatureEd25519) { + signature = locPrivKey.Sign(challenge[:]).Unwrap().(crypto.SignatureEd25519) + return +} + +type authSigMessage struct { + Key crypto.PubKey + Sig crypto.Signature +} + +func shareAuthSignature(sc *SecretConnection, pubKey crypto.PubKeyEd25519, signature crypto.SignatureEd25519) (*authSigMessage, error) { + var recvMsg authSigMessage + var err1, err2 error + + cmn.Parallel( + func() { + msgBytes := wire.BinaryBytes(authSigMessage{pubKey.Wrap(), signature.Wrap()}) + _, err1 = sc.Write(msgBytes) + }, + func() { + readBuffer := make([]byte, authSigMsgSize) + _, err2 = io.ReadFull(sc, readBuffer) + if err2 != nil { + return + } + n := int(0) // not used. + recvMsg = wire.ReadBinary(authSigMessage{}, bytes.NewBuffer(readBuffer), authSigMsgSize, &n, &err2).(authSigMessage) + }) + + if err1 != nil { + return nil, err1 + } + if err2 != nil { + return nil, err2 + } + + return &recvMsg, nil +} + +func verifyChallengeSignature(challenge *[32]byte, remPubKey crypto.PubKeyEd25519, remSignature crypto.SignatureEd25519) bool { + return remPubKey.VerifyBytes(challenge[:], remSignature.Wrap()) +} + +//-------------------------------------------------------------------------------- + +// sha256 +func hash32(input []byte) (res *[32]byte) { + hasher := sha256.New() + hasher.Write(input) // does not error + resSlice := hasher.Sum(nil) + res = new([32]byte) + copy(res[:], resSlice) + return +} + +// We only fill in the first 20 bytes with ripemd160 +func hash24(input []byte) (res *[24]byte) { + hasher := ripemd160.New() + hasher.Write(input) // does not error + resSlice := hasher.Sum(nil) + res = new([24]byte) + copy(res[:], resSlice) + return +} + +// ripemd160 +func hash20(input []byte) (res *[20]byte) { + hasher := ripemd160.New() + hasher.Write(input) // does not error + resSlice := hasher.Sum(nil) + res = new([20]byte) + copy(res[:], resSlice) + return +} + +// increment 
nonce big-endian by 2 with wraparound. +func incr2Nonce(nonce *[24]byte) { + incrNonce(nonce) + incrNonce(nonce) +} + +// increment nonce big-endian by 1 with wraparound. +func incrNonce(nonce *[24]byte) { + for i := 23; 0 <= i; i-- { + nonce[i] += 1 + if nonce[i] != 0 { + return + } + } +} diff --git a/p2p/secret_connection_test.go b/p2p/secret_connection_test.go new file mode 100644 index 000000000..d0d008529 --- /dev/null +++ b/p2p/secret_connection_test.go @@ -0,0 +1,202 @@ +package p2p + +import ( + "bytes" + "io" + "testing" + + "github.com/tendermint/go-crypto" + cmn "github.com/tendermint/tmlibs/common" +) + +type dummyConn struct { + *io.PipeReader + *io.PipeWriter +} + +func (drw dummyConn) Close() (err error) { + err2 := drw.PipeWriter.CloseWithError(io.EOF) + err1 := drw.PipeReader.Close() + if err2 != nil { + return err2 + } + return err1 +} + +// Each returned ReadWriteCloser is akin to a net.Conn +func makeDummyConnPair() (fooConn, barConn dummyConn) { + barReader, fooWriter := io.Pipe() + fooReader, barWriter := io.Pipe() + return dummyConn{fooReader, fooWriter}, dummyConn{barReader, barWriter} +} + +func makeSecretConnPair(tb testing.TB) (fooSecConn, barSecConn *SecretConnection) { + fooConn, barConn := makeDummyConnPair() + fooPrvKey := crypto.GenPrivKeyEd25519() + fooPubKey := fooPrvKey.PubKey().Unwrap().(crypto.PubKeyEd25519) + barPrvKey := crypto.GenPrivKeyEd25519() + barPubKey := barPrvKey.PubKey().Unwrap().(crypto.PubKeyEd25519) + + cmn.Parallel( + func() { + var err error + fooSecConn, err = MakeSecretConnection(fooConn, fooPrvKey) + if err != nil { + tb.Errorf("Failed to establish SecretConnection for foo: %v", err) + return + } + remotePubBytes := fooSecConn.RemotePubKey() + if !bytes.Equal(remotePubBytes[:], barPubKey[:]) { + tb.Errorf("Unexpected fooSecConn.RemotePubKey. 
Expected %v, got %v", + barPubKey, fooSecConn.RemotePubKey()) + } + }, + func() { + var err error + barSecConn, err = MakeSecretConnection(barConn, barPrvKey) + if err != nil { + tb.Errorf("Failed to establish SecretConnection for bar: %v", err) + return + } + remotePubBytes := barSecConn.RemotePubKey() + if !bytes.Equal(remotePubBytes[:], fooPubKey[:]) { + tb.Errorf("Unexpected barSecConn.RemotePubKey. Expected %v, got %v", + fooPubKey, barSecConn.RemotePubKey()) + } + }) + + return +} + +func TestSecretConnectionHandshake(t *testing.T) { + fooSecConn, barSecConn := makeSecretConnPair(t) + fooSecConn.Close() + barSecConn.Close() +} + +func TestSecretConnectionReadWrite(t *testing.T) { + fooConn, barConn := makeDummyConnPair() + fooWrites, barWrites := []string{}, []string{} + fooReads, barReads := []string{}, []string{} + + // Pre-generate the things to write (for foo & bar) + for i := 0; i < 100; i++ { + fooWrites = append(fooWrites, cmn.RandStr((cmn.RandInt()%(dataMaxSize*5))+1)) + barWrites = append(barWrites, cmn.RandStr((cmn.RandInt()%(dataMaxSize*5))+1)) + } + + // A helper that will run with (fooConn, fooWrites, fooReads) and vice versa + genNodeRunner := func(nodeConn dummyConn, nodeWrites []string, nodeReads *[]string) func() { + return func() { + // Node handshake + nodePrvKey := crypto.GenPrivKeyEd25519() + nodeSecretConn, err := MakeSecretConnection(nodeConn, nodePrvKey) + if err != nil { + t.Errorf("Failed to establish SecretConnection for node: %v", err) + return + } + // In parallel, handle reads and writes + cmn.Parallel( + func() { + // Node writes + for _, nodeWrite := range nodeWrites { + n, err := nodeSecretConn.Write([]byte(nodeWrite)) + if err != nil { + t.Errorf("Failed to write to nodeSecretConn: %v", err) + return + } + if n != len(nodeWrite) { + t.Errorf("Failed to write all bytes. 
Expected %v, wrote %v", len(nodeWrite), n) + return + } + } + nodeConn.PipeWriter.Close() + }, + func() { + // Node reads + readBuffer := make([]byte, dataMaxSize) + for { + n, err := nodeSecretConn.Read(readBuffer) + if err == io.EOF { + break + } else if err != nil { + t.Errorf("Failed to read from nodeSecretConn: %v", err) + return + } + *nodeReads = append(*nodeReads, string(readBuffer[:n])) + } + nodeConn.PipeReader.Close() + }) + } + } + + // Run foo & bar in parallel + cmn.Parallel( + genNodeRunner(fooConn, fooWrites, &fooReads), + genNodeRunner(barConn, barWrites, &barReads), + ) + + // A helper to ensure that the writes and reads match. + // Additionally, small writes (<= dataMaxSize) must be atomically read. + compareWritesReads := func(writes []string, reads []string) { + for { + // Pop next write & corresponding reads + var read, write string = "", writes[0] + var readCount = 0 + for _, readChunk := range reads { + read += readChunk + readCount += 1 + if len(write) <= len(read) { + break + } + if len(write) <= dataMaxSize { + break // atomicity of small writes + } + } + // Compare + if write != read { + t.Errorf("Expected to read %X, got %X", write, read) + } + // Iterate + writes = writes[1:] + reads = reads[readCount:] + if len(writes) == 0 { + break + } + } + } + + compareWritesReads(fooWrites, barReads) + compareWritesReads(barWrites, fooReads) + +} + +func BenchmarkSecretConnection(b *testing.B) { + b.StopTimer() + fooSecConn, barSecConn := makeSecretConnPair(b) + fooWriteText := cmn.RandStr(dataMaxSize) + // Consume reads from bar's reader + go func() { + readBuffer := make([]byte, dataMaxSize) + for { + _, err := barSecConn.Read(readBuffer) + if err == io.EOF { + return + } else if err != nil { + b.Fatalf("Failed to read from barSecConn: %v", err) + } + } + }() + + b.StartTimer() + for i := 0; i < b.N; i++ { + _, err := fooSecConn.Write([]byte(fooWriteText)) + if err != nil { + b.Fatalf("Failed to write to fooSecConn: %v", err) + } + } + 
b.StopTimer() + + fooSecConn.Close() + //barSecConn.Close() race condition +} diff --git a/p2p/switch.go b/p2p/switch.go new file mode 100644 index 000000000..5ccdc114e --- /dev/null +++ b/p2p/switch.go @@ -0,0 +1,577 @@ +package p2p + +import ( + "errors" + "fmt" + "math/rand" + "net" + "time" + + crypto "github.com/tendermint/go-crypto" + cfg "github.com/tendermint/tendermint/config" + cmn "github.com/tendermint/tmlibs/common" +) + +const ( + reconnectAttempts = 30 + reconnectInterval = 3 * time.Second +) + +type Reactor interface { + cmn.Service // Start, Stop + + SetSwitch(*Switch) + GetChannels() []*ChannelDescriptor + AddPeer(peer *Peer) + RemovePeer(peer *Peer, reason interface{}) + Receive(chID byte, peer *Peer, msgBytes []byte) +} + +//-------------------------------------- + +type BaseReactor struct { + cmn.BaseService // Provides Start, Stop, .Quit + Switch *Switch +} + +func NewBaseReactor(name string, impl Reactor) *BaseReactor { + return &BaseReactor{ + BaseService: *cmn.NewBaseService(nil, name, impl), + Switch: nil, + } +} + +func (br *BaseReactor) SetSwitch(sw *Switch) { + br.Switch = sw +} +func (_ *BaseReactor) GetChannels() []*ChannelDescriptor { return nil } +func (_ *BaseReactor) AddPeer(peer *Peer) {} +func (_ *BaseReactor) RemovePeer(peer *Peer, reason interface{}) {} +func (_ *BaseReactor) Receive(chID byte, peer *Peer, msgBytes []byte) {} + +//----------------------------------------------------------------------------- + +/* +The `Switch` handles peer connections and exposes an API to receive incoming messages +on `Reactors`. Each `Reactor` is responsible for handling incoming messages of one +or more `Channels`. So while sending outgoing messages is typically performed on the peer, +incoming messages are received on the reactor. 
+*/ +type Switch struct { + cmn.BaseService + + config *cfg.P2PConfig + peerConfig *PeerConfig + listeners []Listener + reactors map[string]Reactor + chDescs []*ChannelDescriptor + reactorsByCh map[byte]Reactor + peers *PeerSet + dialing *cmn.CMap + nodeInfo *NodeInfo // our node info + nodePrivKey crypto.PrivKeyEd25519 // our node privkey + + filterConnByAddr func(net.Addr) error + filterConnByPubKey func(crypto.PubKeyEd25519) error +} + +var ( + ErrSwitchDuplicatePeer = errors.New("Duplicate peer") +) + +func NewSwitch(config *cfg.P2PConfig) *Switch { + sw := &Switch{ + config: config, + peerConfig: DefaultPeerConfig(), + reactors: make(map[string]Reactor), + chDescs: make([]*ChannelDescriptor, 0), + reactorsByCh: make(map[byte]Reactor), + peers: NewPeerSet(), + dialing: cmn.NewCMap(), + nodeInfo: nil, + } + sw.BaseService = *cmn.NewBaseService(nil, "P2P Switch", sw) + return sw +} + +// Not goroutine safe. +func (sw *Switch) AddReactor(name string, reactor Reactor) Reactor { + // Validate the reactor. + // No two reactors can share the same channel. + reactorChannels := reactor.GetChannels() + for _, chDesc := range reactorChannels { + chID := chDesc.ID + if sw.reactorsByCh[chID] != nil { + cmn.PanicSanity(fmt.Sprintf("Channel %X has multiple reactors %v & %v", chID, sw.reactorsByCh[chID], reactor)) + } + sw.chDescs = append(sw.chDescs, chDesc) + sw.reactorsByCh[chID] = reactor + } + sw.reactors[name] = reactor + reactor.SetSwitch(sw) + return reactor +} + +// Not goroutine safe. +func (sw *Switch) Reactors() map[string]Reactor { + return sw.reactors +} + +// Not goroutine safe. +func (sw *Switch) Reactor(name string) Reactor { + return sw.reactors[name] +} + +// Not goroutine safe. +func (sw *Switch) AddListener(l Listener) { + sw.listeners = append(sw.listeners, l) +} + +// Not goroutine safe. +func (sw *Switch) Listeners() []Listener { + return sw.listeners +} + +// Not goroutine safe. 
+func (sw *Switch) IsListening() bool { + return len(sw.listeners) > 0 +} + +// Not goroutine safe. +func (sw *Switch) SetNodeInfo(nodeInfo *NodeInfo) { + sw.nodeInfo = nodeInfo +} + +// Not goroutine safe. +func (sw *Switch) NodeInfo() *NodeInfo { + return sw.nodeInfo +} + +// Not goroutine safe. +// NOTE: Overwrites sw.nodeInfo.PubKey +func (sw *Switch) SetNodePrivKey(nodePrivKey crypto.PrivKeyEd25519) { + sw.nodePrivKey = nodePrivKey + if sw.nodeInfo != nil { + sw.nodeInfo.PubKey = nodePrivKey.PubKey().Unwrap().(crypto.PubKeyEd25519) + } +} + +// Switch.Start() starts all the reactors, peers, and listeners. +func (sw *Switch) OnStart() error { + sw.BaseService.OnStart() + // Start reactors + for _, reactor := range sw.reactors { + _, err := reactor.Start() + if err != nil { + return err + } + } + // Start peers + for _, peer := range sw.peers.List() { + sw.startInitPeer(peer) + } + // Start listeners + for _, listener := range sw.listeners { + go sw.listenerRoutine(listener) + } + return nil +} + +func (sw *Switch) OnStop() { + sw.BaseService.OnStop() + // Stop listeners + for _, listener := range sw.listeners { + listener.Stop() + } + sw.listeners = nil + // Stop peers + for _, peer := range sw.peers.List() { + peer.Stop() + sw.peers.Remove(peer) + } + // Stop reactors + for _, reactor := range sw.reactors { + reactor.Stop() + } +} + +// NOTE: This performs a blocking handshake before the peer is added. +// CONTRACT: If error is returned, peer is nil, and conn is immediately closed. 
+func (sw *Switch) AddPeer(peer *Peer) error { + if err := sw.FilterConnByAddr(peer.Addr()); err != nil { + return err + } + + if err := sw.FilterConnByPubKey(peer.PubKey()); err != nil { + return err + } + + if err := peer.HandshakeTimeout(sw.nodeInfo, time.Duration(sw.peerConfig.HandshakeTimeout*time.Second)); err != nil { + return err + } + + // Avoid self + if sw.nodeInfo.PubKey.Equals(peer.PubKey().Wrap()) { + return errors.New("Ignoring connection from self") + } + + // Check version, chain id + if err := sw.nodeInfo.CompatibleWith(peer.NodeInfo); err != nil { + return err + } + + // Check for duplicate peer + if sw.peers.Has(peer.Key) { + return ErrSwitchDuplicatePeer + + } + + // Start peer + if sw.IsRunning() { + sw.startInitPeer(peer) + } + + // Add the peer to .peers. + // We start it first so that a peer in the list is safe to Stop. + // It should not err since we already checked peers.Has() + if err := sw.peers.Add(peer); err != nil { + return err + } + + sw.Logger.Info("Added peer", "peer", peer) + return nil +} + +func (sw *Switch) FilterConnByAddr(addr net.Addr) error { + if sw.filterConnByAddr != nil { + return sw.filterConnByAddr(addr) + } + return nil +} + +func (sw *Switch) FilterConnByPubKey(pubkey crypto.PubKeyEd25519) error { + if sw.filterConnByPubKey != nil { + return sw.filterConnByPubKey(pubkey) + } + return nil + +} + +func (sw *Switch) SetAddrFilter(f func(net.Addr) error) { + sw.filterConnByAddr = f +} + +func (sw *Switch) SetPubKeyFilter(f func(crypto.PubKeyEd25519) error) { + sw.filterConnByPubKey = f +} + +func (sw *Switch) startInitPeer(peer *Peer) { + peer.Start() // spawn send/recv routines + for _, reactor := range sw.reactors { + reactor.AddPeer(peer) + } +} + +// Dial a list of seeds asynchronously in random order +func (sw *Switch) DialSeeds(addrBook *AddrBook, seeds []string) error { + + netAddrs, err := NewNetAddressStrings(seeds) + if err != nil { + return err + } + + if addrBook != nil { + // add seeds to `addrBook` + 
ourAddrS := sw.nodeInfo.ListenAddr + ourAddr, _ := NewNetAddressString(ourAddrS) + for _, netAddr := range netAddrs { + // do not add ourselves + if netAddr.Equals(ourAddr) { + continue + } + addrBook.AddAddress(netAddr, ourAddr) + } + addrBook.Save() + } + + // permute the list, dial them in random order. + perm := rand.Perm(len(netAddrs)) + for i := 0; i < len(perm); i++ { + go func(i int) { + time.Sleep(time.Duration(rand.Int63n(3000)) * time.Millisecond) + j := perm[i] + sw.dialSeed(netAddrs[j]) + }(i) + } + return nil +} + +func (sw *Switch) dialSeed(addr *NetAddress) { + peer, err := sw.DialPeerWithAddress(addr, true) + if err != nil { + sw.Logger.Error("Error dialing seed", "error", err) + } else { + sw.Logger.Info("Connected to seed", "peer", peer) + } +} + +func (sw *Switch) DialPeerWithAddress(addr *NetAddress, persistent bool) (*Peer, error) { + sw.dialing.Set(addr.IP.String(), addr) + defer sw.dialing.Delete(addr.IP.String()) + + sw.Logger.Info("Dialing peer", "address", addr) + peer, err := newOutboundPeerWithConfig(addr, sw.reactorsByCh, sw.chDescs, sw.StopPeerForError, sw.nodePrivKey, sw.peerConfig) + if err != nil { + sw.Logger.Error("Failed to dial peer", "address", addr, "error", err) + return nil, err + } + peer.SetLogger(sw.Logger.With("peer", addr)) + if persistent { + peer.makePersistent() + } + err = sw.AddPeer(peer) + if err != nil { + sw.Logger.Error("Failed to add peer", "address", addr, "error", err) + peer.CloseConn() + return nil, err + } + sw.Logger.Info("Dialed and added peer", "address", addr, "peer", peer) + return peer, nil +} + +func (sw *Switch) IsDialing(addr *NetAddress) bool { + return sw.dialing.Has(addr.IP.String()) +} + +// Broadcast runs a go routine for each attempted send, which will block +// trying to send for defaultSendTimeoutSeconds. Returns a channel +// which receives success values for each attempted send (false if times out) +// NOTE: Broadcast uses goroutines, so order of broadcast may not be preserved. 
+func (sw *Switch) Broadcast(chID byte, msg interface{}) chan bool { + successChan := make(chan bool, len(sw.peers.List())) + sw.Logger.Debug("Broadcast", "channel", chID, "msg", msg) + for _, peer := range sw.peers.List() { + go func(peer *Peer) { + success := peer.Send(chID, msg) + successChan <- success + }(peer) + } + return successChan +} + +// Returns the count of outbound/inbound and outbound-dialing peers. +func (sw *Switch) NumPeers() (outbound, inbound, dialing int) { + peers := sw.peers.List() + for _, peer := range peers { + if peer.outbound { + outbound++ + } else { + inbound++ + } + } + dialing = sw.dialing.Size() + return +} + +func (sw *Switch) Peers() IPeerSet { + return sw.peers +} + +// Disconnect from a peer due to external error, retry if it is a persistent peer. +// TODO: make record depending on reason. +func (sw *Switch) StopPeerForError(peer *Peer, reason interface{}) { + addr := NewNetAddress(peer.Addr()) + sw.Logger.Info("Stopping peer for error", "peer", peer, "error", reason) + sw.stopAndRemovePeer(peer, reason) + + if peer.IsPersistent() { + go func() { + sw.Logger.Info("Reconnecting to peer", "peer", peer) + for i := 1; i <= reconnectAttempts; i++ { + if !sw.IsRunning() { + return + } + + peer, err := sw.DialPeerWithAddress(addr, true) + if err != nil { + if i == reconnectAttempts { + sw.Logger.Info("Error reconnecting to peer. Giving up", "tries", i, "error", err) + return + } + sw.Logger.Info("Error reconnecting to peer. Trying again", "tries", i, "error", err) + time.Sleep(reconnectInterval) + continue + } + + sw.Logger.Info("Reconnected to peer", "peer", peer) + return + } + }() + } +} + +// Disconnect from a peer gracefully. +// TODO: handle graceful disconnects. 
+func (sw *Switch) StopPeerGracefully(peer *Peer) { + sw.Logger.Info("Stopping peer gracefully") + sw.stopAndRemovePeer(peer, nil) +} + +func (sw *Switch) stopAndRemovePeer(peer *Peer, reason interface{}) { + sw.peers.Remove(peer) + peer.Stop() + for _, reactor := range sw.reactors { + reactor.RemovePeer(peer, reason) + } +} + +func (sw *Switch) listenerRoutine(l Listener) { + for { + inConn, ok := <-l.Connections() + if !ok { + break + } + + // ignore connection if we already have enough + maxPeers := sw.config.MaxNumPeers + if maxPeers <= sw.peers.Size() { + sw.Logger.Info("Ignoring inbound connection: already have enough peers", "address", inConn.RemoteAddr().String(), "numPeers", sw.peers.Size(), "max", maxPeers) + continue + } + + // New inbound connection! + err := sw.addPeerWithConnectionAndConfig(inConn, sw.peerConfig) + if err != nil { + sw.Logger.Info("Ignoring inbound connection: error while adding peer", "address", inConn.RemoteAddr().String(), "error", err) + continue + } + + // NOTE: We don't yet have the listening port of the + // remote (if they have a listener at all). + // The peerHandshake will handle that + } + + // cleanup +} + +//----------------------------------------------------------------------------- + +type SwitchEventNewPeer struct { + Peer *Peer +} + +type SwitchEventDonePeer struct { + Peer *Peer + Error interface{} +} + +//------------------------------------------------------------------ +// Switches connected via arbitrary net.Conn; useful for testing + +// Returns n switches, connected according to the connect func. +// If connect==Connect2Switches, the switches will be fully connected. +// initSwitch defines how the ith switch should be initialized (ie. with what reactors). +// NOTE: panics if any switch fails to start. 
+func MakeConnectedSwitches(cfg *cfg.P2PConfig, n int, initSwitch func(int, *Switch) *Switch, connect func([]*Switch, int, int)) []*Switch { + switches := make([]*Switch, n) + for i := 0; i < n; i++ { + switches[i] = makeSwitch(cfg, i, "testing", "123.123.123", initSwitch) + } + + if err := StartSwitches(switches); err != nil { + panic(err) + } + + for i := 0; i < n; i++ { + for j := i + 1; j < n; j++ { + connect(switches, i, j) + } + } + + return switches +} + +var PanicOnAddPeerErr = false + +// Will connect switches i and j via net.Pipe() +// Blocks until a connection is established. +// NOTE: caller ensures i and j are within bounds +func Connect2Switches(switches []*Switch, i, j int) { + switchI := switches[i] + switchJ := switches[j] + c1, c2 := net.Pipe() + doneCh := make(chan struct{}) + go func() { + err := switchI.addPeerWithConnection(c1) + if PanicOnAddPeerErr && err != nil { + panic(err) + } + doneCh <- struct{}{} + }() + go func() { + err := switchJ.addPeerWithConnection(c2) + if PanicOnAddPeerErr && err != nil { + panic(err) + } + doneCh <- struct{}{} + }() + <-doneCh + <-doneCh +} + +func StartSwitches(switches []*Switch) error { + for _, s := range switches { + _, err := s.Start() // start switch and reactors + if err != nil { + return err + } + } + return nil +} + +func makeSwitch(cfg *cfg.P2PConfig, i int, network, version string, initSwitch func(int, *Switch) *Switch) *Switch { + privKey := crypto.GenPrivKeyEd25519() + // new switch, add reactors + // TODO: let the config be passed in? 
+ s := initSwitch(i, NewSwitch(cfg)) + s.SetNodeInfo(&NodeInfo{ + PubKey: privKey.PubKey().Unwrap().(crypto.PubKeyEd25519), + Moniker: cmn.Fmt("switch%d", i), + Network: network, + Version: version, + RemoteAddr: cmn.Fmt("%v:%v", network, rand.Intn(64512)+1023), + ListenAddr: cmn.Fmt("%v:%v", network, rand.Intn(64512)+1023), + }) + s.SetNodePrivKey(privKey) + return s +} + +func (sw *Switch) addPeerWithConnection(conn net.Conn) error { + peer, err := newInboundPeer(conn, sw.reactorsByCh, sw.chDescs, sw.StopPeerForError, sw.nodePrivKey) + if err != nil { + conn.Close() + return err + } + peer.SetLogger(sw.Logger.With("peer", conn.RemoteAddr())) + if err = sw.AddPeer(peer); err != nil { + conn.Close() + return err + } + + return nil +} + +func (sw *Switch) addPeerWithConnectionAndConfig(conn net.Conn, config *PeerConfig) error { + peer, err := newInboundPeerWithConfig(conn, sw.reactorsByCh, sw.chDescs, sw.StopPeerForError, sw.nodePrivKey, config) + if err != nil { + conn.Close() + return err + } + peer.SetLogger(sw.Logger.With("peer", conn.RemoteAddr())) + if err = sw.AddPeer(peer); err != nil { + conn.Close() + return err + } + + return nil +} diff --git a/p2p/switch_test.go b/p2p/switch_test.go new file mode 100644 index 000000000..eed7d1fab --- /dev/null +++ b/p2p/switch_test.go @@ -0,0 +1,331 @@ +package p2p + +import ( + "bytes" + "fmt" + "net" + "sync" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + crypto "github.com/tendermint/go-crypto" + wire "github.com/tendermint/go-wire" + + cfg "github.com/tendermint/tendermint/config" + "github.com/tendermint/tmlibs/log" +) + +var ( + config *cfg.P2PConfig +) + +func init() { + config = cfg.DefaultP2PConfig() + config.PexReactor = true +} + +type PeerMessage struct { + PeerKey string + Bytes []byte + Counter int +} + +type TestReactor struct { + BaseReactor + + mtx sync.Mutex + channels []*ChannelDescriptor + peersAdded []*Peer + peersRemoved []*Peer + logMessages 
bool + msgsCounter int + msgsReceived map[byte][]PeerMessage +} + +func NewTestReactor(channels []*ChannelDescriptor, logMessages bool) *TestReactor { + tr := &TestReactor{ + channels: channels, + logMessages: logMessages, + msgsReceived: make(map[byte][]PeerMessage), + } + tr.BaseReactor = *NewBaseReactor("TestReactor", tr) + tr.SetLogger(log.TestingLogger()) + return tr +} + +func (tr *TestReactor) GetChannels() []*ChannelDescriptor { + return tr.channels +} + +func (tr *TestReactor) AddPeer(peer *Peer) { + tr.mtx.Lock() + defer tr.mtx.Unlock() + tr.peersAdded = append(tr.peersAdded, peer) +} + +func (tr *TestReactor) RemovePeer(peer *Peer, reason interface{}) { + tr.mtx.Lock() + defer tr.mtx.Unlock() + tr.peersRemoved = append(tr.peersRemoved, peer) +} + +func (tr *TestReactor) Receive(chID byte, peer *Peer, msgBytes []byte) { + if tr.logMessages { + tr.mtx.Lock() + defer tr.mtx.Unlock() + //fmt.Printf("Received: %X, %X\n", chID, msgBytes) + tr.msgsReceived[chID] = append(tr.msgsReceived[chID], PeerMessage{peer.Key, msgBytes, tr.msgsCounter}) + tr.msgsCounter++ + } +} + +func (tr *TestReactor) getMsgs(chID byte) []PeerMessage { + tr.mtx.Lock() + defer tr.mtx.Unlock() + return tr.msgsReceived[chID] +} + +//----------------------------------------------------------------------------- + +// convenience method for creating two switches connected to each other. +// XXX: note this uses net.Pipe and not a proper TCP conn +func makeSwitchPair(t testing.TB, initSwitch func(int, *Switch) *Switch) (*Switch, *Switch) { + // Create two switches that will be interconnected. 
+ switches := MakeConnectedSwitches(config, 2, initSwitch, Connect2Switches) + return switches[0], switches[1] +} + +func initSwitchFunc(i int, sw *Switch) *Switch { + // Make two reactors of two channels each + sw.AddReactor("foo", NewTestReactor([]*ChannelDescriptor{ + &ChannelDescriptor{ID: byte(0x00), Priority: 10}, + &ChannelDescriptor{ID: byte(0x01), Priority: 10}, + }, true)) + sw.AddReactor("bar", NewTestReactor([]*ChannelDescriptor{ + &ChannelDescriptor{ID: byte(0x02), Priority: 10}, + &ChannelDescriptor{ID: byte(0x03), Priority: 10}, + }, true)) + return sw +} + +func TestSwitches(t *testing.T) { + s1, s2 := makeSwitchPair(t, initSwitchFunc) + defer s1.Stop() + defer s2.Stop() + + if s1.Peers().Size() != 1 { + t.Errorf("Expected exactly 1 peer in s1, got %v", s1.Peers().Size()) + } + if s2.Peers().Size() != 1 { + t.Errorf("Expected exactly 1 peer in s2, got %v", s2.Peers().Size()) + } + + // Lets send some messages + ch0Msg := "channel zero" + ch1Msg := "channel foo" + ch2Msg := "channel bar" + + s1.Broadcast(byte(0x00), ch0Msg) + s1.Broadcast(byte(0x01), ch1Msg) + s1.Broadcast(byte(0x02), ch2Msg) + + // Wait for things to settle... + time.Sleep(5000 * time.Millisecond) + + // Check message on ch0 + ch0Msgs := s2.Reactor("foo").(*TestReactor).getMsgs(byte(0x00)) + if len(ch0Msgs) != 1 { + t.Errorf("Expected to have received 1 message in ch0") + } + if !bytes.Equal(ch0Msgs[0].Bytes, wire.BinaryBytes(ch0Msg)) { + t.Errorf("Unexpected message bytes. Wanted: %X, Got: %X", wire.BinaryBytes(ch0Msg), ch0Msgs[0].Bytes) + } + + // Check message on ch1 + ch1Msgs := s2.Reactor("foo").(*TestReactor).getMsgs(byte(0x01)) + if len(ch1Msgs) != 1 { + t.Errorf("Expected to have received 1 message in ch1") + } + if !bytes.Equal(ch1Msgs[0].Bytes, wire.BinaryBytes(ch1Msg)) { + t.Errorf("Unexpected message bytes. 
Wanted: %X, Got: %X", wire.BinaryBytes(ch1Msg), ch1Msgs[0].Bytes) + } + + // Check message on ch2 + ch2Msgs := s2.Reactor("bar").(*TestReactor).getMsgs(byte(0x02)) + if len(ch2Msgs) != 1 { + t.Errorf("Expected to have received 1 message in ch2") + } + if !bytes.Equal(ch2Msgs[0].Bytes, wire.BinaryBytes(ch2Msg)) { + t.Errorf("Unexpected message bytes. Wanted: %X, Got: %X", wire.BinaryBytes(ch2Msg), ch2Msgs[0].Bytes) + } + +} + +func TestConnAddrFilter(t *testing.T) { + s1 := makeSwitch(config, 1, "testing", "123.123.123", initSwitchFunc) + s2 := makeSwitch(config, 1, "testing", "123.123.123", initSwitchFunc) + + c1, c2 := net.Pipe() + + s1.SetAddrFilter(func(addr net.Addr) error { + if addr.String() == c1.RemoteAddr().String() { + return fmt.Errorf("Error: pipe is blacklisted") + } + return nil + }) + + // connect to good peer + go func() { + s1.addPeerWithConnection(c1) + }() + go func() { + s2.addPeerWithConnection(c2) + }() + + // Wait for things to happen, peers to get added... + time.Sleep(100 * time.Millisecond * time.Duration(4)) + + defer s1.Stop() + defer s2.Stop() + if s1.Peers().Size() != 0 { + t.Errorf("Expected s1 not to connect to peers, got %d", s1.Peers().Size()) + } + if s2.Peers().Size() != 0 { + t.Errorf("Expected s2 not to connect to peers, got %d", s2.Peers().Size()) + } +} + +func TestConnPubKeyFilter(t *testing.T) { + s1 := makeSwitch(config, 1, "testing", "123.123.123", initSwitchFunc) + s2 := makeSwitch(config, 1, "testing", "123.123.123", initSwitchFunc) + + c1, c2 := net.Pipe() + + // set pubkey filter + s1.SetPubKeyFilter(func(pubkey crypto.PubKeyEd25519) error { + if bytes.Equal(pubkey.Bytes(), s2.nodeInfo.PubKey.Bytes()) { + return fmt.Errorf("Error: pipe is blacklisted") + } + return nil + }) + + // connect to good peer + go func() { + s1.addPeerWithConnection(c1) + }() + go func() { + s2.addPeerWithConnection(c2) + }() + + // Wait for things to happen, peers to get added... 
+ time.Sleep(100 * time.Millisecond * time.Duration(4)) + + defer s1.Stop() + defer s2.Stop() + if s1.Peers().Size() != 0 { + t.Errorf("Expected s1 not to connect to peers, got %d", s1.Peers().Size()) + } + if s2.Peers().Size() != 0 { + t.Errorf("Expected s2 not to connect to peers, got %d", s2.Peers().Size()) + } +} + +func TestSwitchStopsNonPersistentPeerOnError(t *testing.T) { + assert, require := assert.New(t), require.New(t) + + sw := makeSwitch(config, 1, "testing", "123.123.123", initSwitchFunc) + sw.Start() + defer sw.Stop() + + // simulate remote peer + rp := &remotePeer{PrivKey: crypto.GenPrivKeyEd25519(), Config: DefaultPeerConfig()} + rp.Start() + defer rp.Stop() + + peer, err := newOutboundPeer(rp.Addr(), sw.reactorsByCh, sw.chDescs, sw.StopPeerForError, sw.nodePrivKey) + require.Nil(err) + err = sw.AddPeer(peer) + require.Nil(err) + + // simulate failure by closing connection + peer.CloseConn() + + time.Sleep(100 * time.Millisecond) + + assert.Zero(sw.Peers().Size()) + assert.False(peer.IsRunning()) +} + +func TestSwitchReconnectsToPersistentPeer(t *testing.T) { + assert, require := assert.New(t), require.New(t) + + sw := makeSwitch(config, 1, "testing", "123.123.123", initSwitchFunc) + sw.Start() + defer sw.Stop() + + // simulate remote peer + rp := &remotePeer{PrivKey: crypto.GenPrivKeyEd25519(), Config: DefaultPeerConfig()} + rp.Start() + defer rp.Stop() + + peer, err := newOutboundPeer(rp.Addr(), sw.reactorsByCh, sw.chDescs, sw.StopPeerForError, sw.nodePrivKey) + peer.makePersistent() + require.Nil(err) + err = sw.AddPeer(peer) + require.Nil(err) + + // simulate failure by closing connection + peer.CloseConn() + + // TODO: actually detect the disconnection and wait for reconnect + time.Sleep(100 * time.Millisecond) + + assert.NotZero(sw.Peers().Size()) + assert.False(peer.IsRunning()) +} + +func BenchmarkSwitches(b *testing.B) { + b.StopTimer() + + s1, s2 := makeSwitchPair(b, func(i int, sw *Switch) *Switch { + // Make two reactors of two channels
each + sw.AddReactor("foo", NewTestReactor([]*ChannelDescriptor{ + &ChannelDescriptor{ID: byte(0x00), Priority: 10}, + &ChannelDescriptor{ID: byte(0x01), Priority: 10}, + }, false)) + sw.AddReactor("bar", NewTestReactor([]*ChannelDescriptor{ + &ChannelDescriptor{ID: byte(0x02), Priority: 10}, + &ChannelDescriptor{ID: byte(0x03), Priority: 10}, + }, false)) + return sw + }) + defer s1.Stop() + defer s2.Stop() + + // Allow time for goroutines to boot up + time.Sleep(1000 * time.Millisecond) + b.StartTimer() + + numSuccess, numFailure := 0, 0 + + // Send random message from foo channel to another + for i := 0; i < b.N; i++ { + chID := byte(i % 4) + successChan := s1.Broadcast(chID, "test data") + for s := range successChan { + if s { + numSuccess++ + } else { + numFailure++ + } + } + } + + b.Logf("success: %v, failure: %v", numSuccess, numFailure) + + // Allow everything to flush before stopping switches & closing connections. + b.StopTimer() + time.Sleep(1000 * time.Millisecond) +} diff --git a/p2p/types.go b/p2p/types.go new file mode 100644 index 000000000..1d3770b57 --- /dev/null +++ b/p2p/types.go @@ -0,0 +1,81 @@ +package p2p + +import ( + "fmt" + "net" + "strconv" + "strings" + + crypto "github.com/tendermint/go-crypto" +) + +const maxNodeInfoSize = 10240 // 10Kb + +type NodeInfo struct { + PubKey crypto.PubKeyEd25519 `json:"pub_key"` + Moniker string `json:"moniker"` + Network string `json:"network"` + RemoteAddr string `json:"remote_addr"` + ListenAddr string `json:"listen_addr"` + Version string `json:"version"` // major.minor.revision + Other []string `json:"other"` // other application specific data +} + +// CONTRACT: two nodes are compatible if the major/minor versions match and network match +func (info *NodeInfo) CompatibleWith(other *NodeInfo) error { + iMajor, iMinor, _, iErr := splitVersion(info.Version) + oMajor, oMinor, _, oErr := splitVersion(other.Version) + + // if our own version number is not formatted right, we messed up + if iErr != nil { + 
return iErr + } + + // version number must be formatted correctly ("x.x.x") + if oErr != nil { + return oErr + } + + // major version must match + if iMajor != oMajor { + return fmt.Errorf("Peer is on a different major version. Got %v, expected %v", oMajor, iMajor) + } + + // minor version must match + if iMinor != oMinor { + return fmt.Errorf("Peer is on a different minor version. Got %v, expected %v", oMinor, iMinor) + } + + // nodes must be on the same network + if info.Network != other.Network { + return fmt.Errorf("Peer is on a different network. Got %v, expected %v", other.Network, info.Network) + } + + return nil +} + +func (info *NodeInfo) ListenHost() string { + host, _, _ := net.SplitHostPort(info.ListenAddr) + return host +} + +func (info *NodeInfo) ListenPort() int { + _, port, _ := net.SplitHostPort(info.ListenAddr) + port_i, err := strconv.Atoi(port) + if err != nil { + return -1 + } + return port_i +} + +func (info NodeInfo) String() string { + return fmt.Sprintf("NodeInfo{pk: %v, moniker: %v, network: %v [remote %v, listen %v], version: %v (%v)}", info.PubKey, info.Moniker, info.Network, info.RemoteAddr, info.ListenAddr, info.Version, info.Other) +} + +func splitVersion(version string) (string, string, string, error) { + spl := strings.Split(version, ".") + if len(spl) != 3 { + return "", "", "", fmt.Errorf("Invalid version format %v", version) + } + return spl[0], spl[1], spl[2], nil +} diff --git a/p2p/upnp/README.md b/p2p/upnp/README.md new file mode 100644 index 000000000..557d05bdc --- /dev/null +++ b/p2p/upnp/README.md @@ -0,0 +1,5 @@ +# `tendermint/p2p/upnp` + +## Resources + +* http://www.upnp-hacks.org/upnp.html diff --git a/p2p/upnp/probe.go b/p2p/upnp/probe.go new file mode 100644 index 000000000..3537e1c65 --- /dev/null +++ b/p2p/upnp/probe.go @@ -0,0 +1,112 @@ +package upnp + +import ( + "errors" + "fmt" + "net" + "time" + + cmn "github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/log" +) + +type UPNPCapabilities struct 
{ + PortMapping bool + Hairpin bool +} + +func makeUPNPListener(intPort int, extPort int, logger log.Logger) (NAT, net.Listener, net.IP, error) { + nat, err := Discover() + if err != nil { + return nil, nil, nil, errors.New(fmt.Sprintf("NAT upnp could not be discovered: %v", err)) + } + logger.Info(cmn.Fmt("ourIP: %v", nat.(*upnpNAT).ourIP)) + + ext, err := nat.GetExternalAddress() + if err != nil { + return nat, nil, nil, errors.New(fmt.Sprintf("External address error: %v", err)) + } + logger.Info(cmn.Fmt("External address: %v", ext)) + + port, err := nat.AddPortMapping("tcp", extPort, intPort, "Tendermint UPnP Probe", 0) + if err != nil { + return nat, nil, ext, errors.New(fmt.Sprintf("Port mapping error: %v", err)) + } + logger.Info(cmn.Fmt("Port mapping mapped: %v", port)) + + // also run the listener, open for all remote addresses. + listener, err := net.Listen("tcp", fmt.Sprintf(":%v", intPort)) + if err != nil { + return nat, nil, ext, errors.New(fmt.Sprintf("Error establishing listener: %v", err)) + } + return nat, listener, ext, nil +} + +func testHairpin(listener net.Listener, extAddr string, logger log.Logger) (supportsHairpin bool) { + // Listener + go func() { + inConn, err := listener.Accept() + if err != nil { + logger.Info(cmn.Fmt("Listener.Accept() error: %v", err)) + return + } + logger.Info(cmn.Fmt("Accepted incoming connection: %v -> %v", inConn.LocalAddr(), inConn.RemoteAddr())) + buf := make([]byte, 1024) + n, err := inConn.Read(buf) + if err != nil { + logger.Info(cmn.Fmt("Incoming connection read error: %v", err)) + return + } + logger.Info(cmn.Fmt("Incoming connection read %v bytes: %X", n, buf)) + if string(buf) == "test data" { + supportsHairpin = true + return + } + }() + + // Establish outgoing + outConn, err := net.Dial("tcp", extAddr) + if err != nil { + logger.Info(cmn.Fmt("Outgoing connection dial error: %v", err)) + return + } + + n, err := outConn.Write([]byte("test data")) + if err != nil { + logger.Info(cmn.Fmt("Outgoing 
connection write error: %v", err)) + return + } + logger.Info(cmn.Fmt("Outgoing connection wrote %v bytes", n)) + + // Wait for data receipt + time.Sleep(1 * time.Second) + return +} + +func Probe(logger log.Logger) (caps UPNPCapabilities, err error) { + logger.Info("Probing for UPnP!") + + intPort, extPort := 8001, 8001 + + nat, listener, ext, err := makeUPNPListener(intPort, extPort, logger) + if err != nil { + return + } + caps.PortMapping = true + + // Deferred cleanup + defer func() { + err = nat.DeletePortMapping("tcp", intPort, extPort) + if err != nil { + logger.Error(cmn.Fmt("Port mapping delete error: %v", err)) + } + listener.Close() + }() + + supportsHairpin := testHairpin(listener, fmt.Sprintf("%v:%v", ext, extPort), logger) + if supportsHairpin { + caps.Hairpin = true + } + + return +} diff --git a/p2p/upnp/upnp.go b/p2p/upnp/upnp.go new file mode 100644 index 000000000..3d6c55035 --- /dev/null +++ b/p2p/upnp/upnp.go @@ -0,0 +1,380 @@ +/* +Taken from taipei-torrent + +Just enough UPnP to be able to forward ports +*/ +package upnp + +// BUG(jae): TODO: use syscalls to get actual ourIP. 
http://pastebin.com/9exZG4rh + +import ( + "bytes" + "encoding/xml" + "errors" + "io/ioutil" + "net" + "net/http" + "strconv" + "strings" + "time" +) + +type upnpNAT struct { + serviceURL string + ourIP string + urnDomain string +} + +// protocol is either "udp" or "tcp" +type NAT interface { + GetExternalAddress() (addr net.IP, err error) + AddPortMapping(protocol string, externalPort, internalPort int, description string, timeout int) (mappedExternalPort int, err error) + DeletePortMapping(protocol string, externalPort, internalPort int) (err error) +} + +func Discover() (nat NAT, err error) { + ssdp, err := net.ResolveUDPAddr("udp4", "239.255.255.250:1900") + if err != nil { + return + } + conn, err := net.ListenPacket("udp4", ":0") + if err != nil { + return + } + socket := conn.(*net.UDPConn) + defer socket.Close() + + err = socket.SetDeadline(time.Now().Add(3 * time.Second)) + if err != nil { + return + } + + st := "InternetGatewayDevice:1" + + buf := bytes.NewBufferString( + "M-SEARCH * HTTP/1.1\r\n" + + "HOST: 239.255.255.250:1900\r\n" + + "ST: ssdp:all\r\n" + + "MAN: \"ssdp:discover\"\r\n" + + "MX: 2\r\n\r\n") + message := buf.Bytes() + answerBytes := make([]byte, 1024) + for i := 0; i < 3; i++ { + _, err = socket.WriteToUDP(message, ssdp) + if err != nil { + return + } + var n int + n, _, err = socket.ReadFromUDP(answerBytes) + for { + n, _, err = socket.ReadFromUDP(answerBytes) + if err != nil { + break + } + answer := string(answerBytes[0:n]) + if strings.Index(answer, st) < 0 { + continue + } + // HTTP header field names are case-insensitive. 
+ // http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2 + locString := "\r\nlocation:" + answer = strings.ToLower(answer) + locIndex := strings.Index(answer, locString) + if locIndex < 0 { + continue + } + loc := answer[locIndex+len(locString):] + endIndex := strings.Index(loc, "\r\n") + if endIndex < 0 { + continue + } + locURL := strings.TrimSpace(loc[0:endIndex]) + var serviceURL, urnDomain string + serviceURL, urnDomain, err = getServiceURL(locURL) + if err != nil { + return + } + var ourIP net.IP + ourIP, err = localIPv4() + if err != nil { + return + } + nat = &upnpNAT{serviceURL: serviceURL, ourIP: ourIP.String(), urnDomain: urnDomain} + return + } + } + err = errors.New("UPnP port discovery failed.") + return +} + +type Envelope struct { + XMLName xml.Name `xml:"http://schemas.xmlsoap.org/soap/envelope/ Envelope"` + Soap *SoapBody +} +type SoapBody struct { + XMLName xml.Name `xml:"http://schemas.xmlsoap.org/soap/envelope/ Body"` + ExternalIP *ExternalIPAddressResponse +} + +type ExternalIPAddressResponse struct { + XMLName xml.Name `xml:"GetExternalIPAddressResponse"` + IPAddress string `xml:"NewExternalIPAddress"` +} + +type ExternalIPAddress struct { + XMLName xml.Name `xml:"NewExternalIPAddress"` + IP string +} + +type UPNPService struct { + ServiceType string `xml:"serviceType"` + ControlURL string `xml:"controlURL"` +} + +type DeviceList struct { + Device []Device `xml:"device"` +} + +type ServiceList struct { + Service []UPNPService `xml:"service"` +} + +type Device struct { + XMLName xml.Name `xml:"device"` + DeviceType string `xml:"deviceType"` + DeviceList DeviceList `xml:"deviceList"` + ServiceList ServiceList `xml:"serviceList"` +} + +type Root struct { + Device Device +} + +func getChildDevice(d *Device, deviceType string) *Device { + dl := d.DeviceList.Device + for i := 0; i < len(dl); i++ { + if strings.Index(dl[i].DeviceType, deviceType) >= 0 { + return &dl[i] + } + } + return nil +} + +func getChildService(d *Device, serviceType 
string) *UPNPService { + sl := d.ServiceList.Service + for i := 0; i < len(sl); i++ { + if strings.Index(sl[i].ServiceType, serviceType) >= 0 { + return &sl[i] + } + } + return nil +} + +func localIPv4() (net.IP, error) { + tt, err := net.Interfaces() + if err != nil { + return nil, err + } + for _, t := range tt { + aa, err := t.Addrs() + if err != nil { + return nil, err + } + for _, a := range aa { + ipnet, ok := a.(*net.IPNet) + if !ok { + continue + } + v4 := ipnet.IP.To4() + if v4 == nil || v4[0] == 127 { // loopback address + continue + } + return v4, nil + } + } + return nil, errors.New("cannot find local IP address") +} + +func getServiceURL(rootURL string) (url, urnDomain string, err error) { + r, err := http.Get(rootURL) + if err != nil { + return + } + defer r.Body.Close() + if r.StatusCode >= 400 { + // string(int) would yield a rune, not the decimal status code + err = errors.New("HTTP error " + strconv.Itoa(r.StatusCode)) + return + } + var root Root + err = xml.NewDecoder(r.Body).Decode(&root) + if err != nil { + return + } + a := &root.Device + if strings.Index(a.DeviceType, "InternetGatewayDevice:1") < 0 { + err = errors.New("No InternetGatewayDevice") + return + } + b := getChildDevice(a, "WANDevice:1") + if b == nil { + err = errors.New("No WANDevice") + return + } + c := getChildDevice(b, "WANConnectionDevice:1") + if c == nil { + err = errors.New("No WANConnectionDevice") + return + } + d := getChildService(c, "WANIPConnection:1") + if d == nil { + // Some routers don't follow the UPnP spec, and put WanIPConnection under WanDevice, + // instead of under WanConnectionDevice + d = getChildService(b, "WANIPConnection:1") + + if d == nil { + err = errors.New("No WANIPConnection") + return + } + } + // Extract the domain name, which isn't always 'schemas-upnp-org' + urnDomain = strings.Split(d.ServiceType, ":")[1] + url = combineURL(rootURL, d.ControlURL) + return +} + +func combineURL(rootURL, subURL string) string { + protocolEnd := "://" + protoEndIndex := strings.Index(rootURL, protocolEnd) + a :=
rootURL[protoEndIndex+len(protocolEnd):] + rootIndex := strings.Index(a, "/") + return rootURL[0:protoEndIndex+len(protocolEnd)+rootIndex] + subURL +} + +func soapRequest(url, function, message, domain string) (r *http.Response, err error) { + fullMessage := "<?xml version=\"1.0\" ?>" + + "<s:Envelope xmlns:s=\"http://schemas.xmlsoap.org/soap/envelope/\" s:encodingStyle=\"http://schemas.xmlsoap.org/soap/encoding/\">\r\n" + + "<s:Body>" + message + "</s:Body></s:Envelope>" + + req, err := http.NewRequest("POST", url, strings.NewReader(fullMessage)) + if err != nil { + return nil, err + } + req.Header.Set("Content-Type", "text/xml ; charset=\"utf-8\"") + req.Header.Set("User-Agent", "Darwin/10.0.0, UPnP/1.0, MiniUPnPc/1.3") + //req.Header.Set("Transfer-Encoding", "chunked") + req.Header.Set("SOAPAction", "\"urn:"+domain+":service:WANIPConnection:1#"+function+"\"") + req.Header.Set("Connection", "Close") + req.Header.Set("Cache-Control", "no-cache") + req.Header.Set("Pragma", "no-cache") + + // log.Stderr("soapRequest ", req) + + r, err = http.DefaultClient.Do(req) + if err != nil { + return nil, err + } + /*if r.Body != nil { + defer r.Body.Close() + }*/ + + if r.StatusCode >= 400 { + // log.Stderr(function, r.StatusCode) + err = errors.New("Error " + strconv.Itoa(r.StatusCode) + " for " + function) + r = nil + return + } + return +} + +type statusInfo struct { + externalIpAddress string +} + +func (n *upnpNAT) getExternalIPAddress() (info statusInfo, err error) { + + message := "<u:GetExternalIPAddress xmlns:u=\"urn:" + n.urnDomain + ":service:WANIPConnection:1\">\r\n" + + "</u:GetExternalIPAddress>" + + var response *http.Response + response, err = soapRequest(n.serviceURL, "GetExternalIPAddress", message, n.urnDomain) + if response != nil { + defer response.Body.Close() + } + if err != nil { + return + } + var envelope Envelope + data, err := ioutil.ReadAll(response.Body) + reader := bytes.NewReader(data) + xml.NewDecoder(reader).Decode(&envelope) + + info = statusInfo{envelope.Soap.ExternalIP.IPAddress} + + if err != nil { + return + } + + return +} + +func (n *upnpNAT) GetExternalAddress() (addr net.IP, err error) { + info, err := n.getExternalIPAddress() + if err != nil { + return + } + addr = net.ParseIP(info.externalIpAddress) + return +} +
+func (n *upnpNAT) AddPortMapping(protocol string, externalPort, internalPort int, description string, timeout int) (mappedExternalPort int, err error) { + // A single concatenation would break ARM compilation. + message := "<u:AddPortMapping xmlns:u=\"urn:" + n.urnDomain + ":service:WANIPConnection:1\">\r\n" + + "<NewRemoteHost></NewRemoteHost><NewExternalPort>" + strconv.Itoa(externalPort) + message += "</NewExternalPort><NewProtocol>" + protocol + "</NewProtocol>" + message += "<NewInternalPort>" + strconv.Itoa(internalPort) + "</NewInternalPort>" + + "<NewInternalClient>" + n.ourIP + "</NewInternalClient>" + + "<NewEnabled>1</NewEnabled><NewPortMappingDescription>" + message += description + + "</NewPortMappingDescription><NewLeaseDuration>" + strconv.Itoa(timeout) + + "</NewLeaseDuration></u:AddPortMapping>" + + var response *http.Response + response, err = soapRequest(n.serviceURL, "AddPortMapping", message, n.urnDomain) + if response != nil { + defer response.Body.Close() + } + if err != nil { + return + } + + // TODO: check response to see if the port was forwarded + // log.Println(message, response) + // JAE: + // body, err := ioutil.ReadAll(response.Body) + // fmt.Println(string(body), err) + mappedExternalPort = externalPort + _ = response + return +} + +func (n *upnpNAT) DeletePortMapping(protocol string, externalPort, internalPort int) (err error) { + + message := "<u:DeletePortMapping xmlns:u=\"urn:" + n.urnDomain + ":service:WANIPConnection:1\">\r\n" + + "<NewRemoteHost></NewRemoteHost><NewExternalPort>" + strconv.Itoa(externalPort) + + "</NewExternalPort><NewProtocol>" + protocol + "</NewProtocol>" + + "</u:DeletePortMapping>" + + var response *http.Response + response, err = soapRequest(n.serviceURL, "DeletePortMapping", message, n.urnDomain) + if response != nil { + defer response.Body.Close() + } + if err != nil { + return + } + + // TODO: check response to see if the port was deleted + // log.Println(message, response) + _ = response + return +} diff --git a/p2p/util.go b/p2p/util.go new file mode 100644 index 000000000..2be320263 --- /dev/null +++ b/p2p/util.go @@ -0,0 +1,15 @@ +package p2p + +import ( + "crypto/sha256" +) + +// doubleSha256 calculates sha256(sha256(b)) and returns the resulting bytes.
+func doubleSha256(b []byte) []byte { + hasher := sha256.New() + hasher.Write(b) + sum := hasher.Sum(nil) + hasher.Reset() + hasher.Write(sum) + return hasher.Sum(nil) +} diff --git a/p2p/version.go b/p2p/version.go new file mode 100644 index 000000000..9a4c7bbaf --- /dev/null +++ b/p2p/version.go @@ -0,0 +1,3 @@ +package p2p + +const Version = "0.5.0" diff --git a/proxy/app_conn_test.go b/proxy/app_conn_test.go index 2054175eb..159e0b3e1 100644 --- a/proxy/app_conn_test.go +++ b/proxy/app_conn_test.go @@ -4,11 +4,12 @@ import ( "strings" "testing" - . "github.com/tendermint/go-common" abcicli "github.com/tendermint/abci/client" "github.com/tendermint/abci/example/dummy" "github.com/tendermint/abci/server" "github.com/tendermint/abci/types" + cmn "github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/log" ) //---------------------------------------- @@ -44,44 +45,59 @@ func (app *appConnTest) InfoSync() (types.ResponseInfo, error) { var SOCKET = "socket" func TestEcho(t *testing.T) { - sockPath := Fmt("unix:///tmp/echo_%v.sock", RandStr(6)) + sockPath := cmn.Fmt("unix:///tmp/echo_%v.sock", cmn.RandStr(6)) clientCreator := NewRemoteClientCreator(sockPath, SOCKET, true) // Start server - s, err := server.NewSocketServer(sockPath, dummy.NewDummyApplication()) - if err != nil { - Exit(err.Error()) + s := server.NewSocketServer(sockPath, dummy.NewDummyApplication()) + s.SetLogger(log.TestingLogger().With("module", "abci-server")) + if _, err := s.Start(); err != nil { + t.Fatalf("Error starting socket server: %v", err.Error()) } defer s.Stop() + // Start client cli, err := clientCreator.NewABCIClient() if err != nil { - Exit(err.Error()) + t.Fatalf("Error creating ABCI client: %v", err.Error()) } + cli.SetLogger(log.TestingLogger().With("module", "abci-client")) + if _, err := cli.Start(); err != nil { + t.Fatalf("Error starting ABCI client: %v", err.Error()) + } + proxy := NewAppConnTest(cli) t.Log("Connected") for i := 0; i < 1000; i++ { - 
proxy.EchoAsync(Fmt("echo-%v", i)) + proxy.EchoAsync(cmn.Fmt("echo-%v", i)) } proxy.FlushSync() } func BenchmarkEcho(b *testing.B) { b.StopTimer() // Initialize - sockPath := Fmt("unix:///tmp/echo_%v.sock", RandStr(6)) + sockPath := cmn.Fmt("unix:///tmp/echo_%v.sock", cmn.RandStr(6)) clientCreator := NewRemoteClientCreator(sockPath, SOCKET, true) + // Start server - s, err := server.NewSocketServer(sockPath, dummy.NewDummyApplication()) - if err != nil { - Exit(err.Error()) + s := server.NewSocketServer(sockPath, dummy.NewDummyApplication()) + s.SetLogger(log.TestingLogger().With("module", "abci-server")) + if _, err := s.Start(); err != nil { + b.Fatalf("Error starting socket server: %v", err.Error()) } defer s.Stop() + // Start client cli, err := clientCreator.NewABCIClient() if err != nil { - Exit(err.Error()) + b.Fatalf("Error creating ABCI client: %v", err.Error()) } + cli.SetLogger(log.TestingLogger().With("module", "abci-client")) + if _, err := cli.Start(); err != nil { + b.Fatalf("Error starting ABCI client: %v", err.Error()) + } + proxy := NewAppConnTest(cli) b.Log("Connected") echoString := strings.Repeat(" ", 200) @@ -98,19 +114,27 @@ func BenchmarkEcho(b *testing.B) { } func TestInfo(t *testing.T) { - sockPath := Fmt("unix:///tmp/echo_%v.sock", RandStr(6)) + sockPath := cmn.Fmt("unix:///tmp/echo_%v.sock", cmn.RandStr(6)) clientCreator := NewRemoteClientCreator(sockPath, SOCKET, true) + // Start server - s, err := server.NewSocketServer(sockPath, dummy.NewDummyApplication()) - if err != nil { - Exit(err.Error()) + s := server.NewSocketServer(sockPath, dummy.NewDummyApplication()) + s.SetLogger(log.TestingLogger().With("module", "abci-server")) + if _, err := s.Start(); err != nil { + t.Fatalf("Error starting socket server: %v", err.Error()) } defer s.Stop() + // Start client cli, err := clientCreator.NewABCIClient() if err != nil { - Exit(err.Error()) + t.Fatalf("Error creating ABCI client: %v", err.Error()) } + 
cli.SetLogger(log.TestingLogger().With("module", "abci-client")) + if _, err := cli.Start(); err != nil { + t.Fatalf("Error starting ABCI client: %v", err.Error()) + } + proxy := NewAppConnTest(cli) t.Log("Connected") diff --git a/proxy/client.go b/proxy/client.go index ea9051989..a70da1cae 100644 --- a/proxy/client.go +++ b/proxy/client.go @@ -1,13 +1,13 @@ package proxy import ( - "fmt" "sync" + "github.com/pkg/errors" + abcicli "github.com/tendermint/abci/client" "github.com/tendermint/abci/example/dummy" "github.com/tendermint/abci/types" - cfg "github.com/tendermint/go-config" ) // NewABCIClient returns newly connected client @@ -52,10 +52,9 @@ func NewRemoteClientCreator(addr, transport string, mustConnect bool) ClientCrea } func (r *remoteClientCreator) NewABCIClient() (abcicli.Client, error) { - // Run forever in a loop remoteApp, err := abcicli.NewClient(r.addr, r.transport, r.mustConnect) if err != nil { - return nil, fmt.Errorf("Failed to connect to proxy: %v", err) + return nil, errors.Wrap(err, "Failed to connect to proxy") } return remoteApp, nil } @@ -63,15 +62,12 @@ func (r *remoteClientCreator) NewABCIClient() (abcicli.Client, error) { //----------------------------------------------------------------- // default -func DefaultClientCreator(config cfg.Config) ClientCreator { - addr := config.GetString("proxy_app") - transport := config.GetString("abci") - +func DefaultClientCreator(addr, transport, dbDir string) ClientCreator { switch addr { case "dummy": return NewLocalClientCreator(dummy.NewDummyApplication()) case "persistent_dummy": - return NewLocalClientCreator(dummy.NewPersistentDummyApplication(config.GetString("db_dir"))) + return NewLocalClientCreator(dummy.NewPersistentDummyApplication(dbDir)) case "nilapp": return NewLocalClientCreator(types.NewBaseApplication()) default: diff --git a/proxy/log.go b/proxy/log.go deleted file mode 100644 index 45d31b879..000000000 --- a/proxy/log.go +++ /dev/null @@ -1,7 +0,0 @@ -package proxy - -import ( 
- "github.com/tendermint/go-logger" -) - -var log = logger.New("module", "proxy") diff --git a/proxy/multi_app_conn.go b/proxy/multi_app_conn.go index 81e01aa29..32c615202 100644 --- a/proxy/multi_app_conn.go +++ b/proxy/multi_app_conn.go @@ -1,8 +1,9 @@ package proxy import ( - cmn "github.com/tendermint/go-common" - cfg "github.com/tendermint/go-config" + "github.com/pkg/errors" + + cmn "github.com/tendermint/tmlibs/common" ) //----------------------------- @@ -16,8 +17,8 @@ type AppConns interface { Query() AppConnQuery } -func NewAppConns(config cfg.Config, clientCreator ClientCreator, handshaker Handshaker) AppConns { - return NewMultiAppConn(config, clientCreator, handshaker) +func NewAppConns(clientCreator ClientCreator, handshaker Handshaker) AppConns { + return NewMultiAppConn(clientCreator, handshaker) } //----------------------------- @@ -34,8 +35,6 @@ type Handshaker interface { type multiAppConn struct { cmn.BaseService - config cfg.Config - handshaker Handshaker mempoolConn *appConnMempool @@ -46,13 +45,12 @@ type multiAppConn struct { } // Make all necessary abci connections to the application -func NewMultiAppConn(config cfg.Config, clientCreator ClientCreator, handshaker Handshaker) *multiAppConn { +func NewMultiAppConn(clientCreator ClientCreator, handshaker Handshaker) *multiAppConn { multiAppConn := &multiAppConn{ - config: config, handshaker: handshaker, clientCreator: clientCreator, } - multiAppConn.BaseService = *cmn.NewBaseService(log, "multiAppConn", multiAppConn) + multiAppConn.BaseService = *cmn.NewBaseService(nil, "multiAppConn", multiAppConn) return multiAppConn } @@ -72,25 +70,36 @@ func (app *multiAppConn) Query() AppConnQuery { } func (app *multiAppConn) OnStart() error { - // query connection querycli, err := app.clientCreator.NewABCIClient() if err != nil { - return err + return errors.Wrap(err, "Error creating ABCI client (query connection)") + } + querycli.SetLogger(app.Logger.With("module", "abci-client", "connection", "query")) 
+ if _, err := querycli.Start(); err != nil { + return errors.Wrap(err, "Error starting ABCI client (query connection)") } app.queryConn = NewAppConnQuery(querycli) // mempool connection memcli, err := app.clientCreator.NewABCIClient() if err != nil { - return err + return errors.Wrap(err, "Error creating ABCI client (mempool connection)") + } + memcli.SetLogger(app.Logger.With("module", "abci-client", "connection", "mempool")) + if _, err := memcli.Start(); err != nil { + return errors.Wrap(err, "Error starting ABCI client (mempool connection)") } app.mempoolConn = NewAppConnMempool(memcli) // consensus connection concli, err := app.clientCreator.NewABCIClient() if err != nil { - return err + return errors.Wrap(err, "Error creating ABCI client (consensus connection)") + } + concli.SetLogger(app.Logger.With("module", "abci-client", "connection", "consensus")) + if _, err := concli.Start(); err != nil { + return errors.Wrap(err, "Error starting ABCI client (consensus connection)") } app.consensusConn = NewAppConnConsensus(concli) @@ -98,5 +107,6 @@ func (app *multiAppConn) OnStart() error { if app.handshaker != nil { return app.handshaker.Handshake(app) } + return nil } diff --git a/rpc/client/event_test.go b/rpc/client/event_test.go index cc421ad90..1b99854cb 100644 --- a/rpc/client/event_test.go +++ b/rpc/client/event_test.go @@ -25,7 +25,7 @@ func TestHeaderEvents(t *testing.T) { evtTyp := types.EventStringNewBlockHeader() evt, err := client.WaitForOneEvent(c, evtTyp, 1*time.Second) require.Nil(err, "%d: %+v", i, err) - _, ok := evt.(types.EventDataNewBlockHeader) + _, ok := evt.Unwrap().(types.EventDataNewBlockHeader) require.True(ok, "%d: %#v", i, evt) // TODO: more checks... 
} @@ -56,7 +56,7 @@ func TestTxEvents(t *testing.T) { evt, err := client.WaitForOneEvent(c, evtTyp, 1*time.Second) require.Nil(err, "%d: %+v", i, err) // and make sure it has the proper info - txe, ok := evt.(types.EventDataTx) + txe, ok := evt.Unwrap().(types.EventDataTx) require.True(ok, "%d: %#v", i, evt) // make sure this is the proper tx require.EqualValues(tx, txe.Tx) diff --git a/rpc/client/helpers.go b/rpc/client/helpers.go index bd00c1438..bc26ea57f 100644 --- a/rpc/client/helpers.go +++ b/rpc/client/helpers.go @@ -4,9 +4,9 @@ import ( "time" "github.com/pkg/errors" - cmn "github.com/tendermint/go-common" - events "github.com/tendermint/go-events" "github.com/tendermint/tendermint/types" + cmn "github.com/tendermint/tmlibs/common" + events "github.com/tendermint/tmlibs/events" ) // Waiter is informed of current height, decided whether to quit early @@ -77,12 +77,12 @@ func WaitForOneEvent(evsw types.EventSwitch, select { case <-quit: - return nil, errors.New("timed out waiting for event") + return types.TMEventData{}, errors.New("timed out waiting for event") case evt := <-evts: tmevt, ok := evt.(types.TMEventData) if ok { return tmevt, nil } - return nil, errors.Errorf("Got unexpected event type: %#v", evt) + return types.TMEventData{}, errors.Errorf("Got unexpected event type: %#v", evt) } } diff --git a/rpc/client/httpclient.go b/rpc/client/httpclient.go index 04595e766..cb7149406 100644 --- a/rpc/client/httpclient.go +++ b/rpc/client/httpclient.go @@ -1,14 +1,15 @@ package client import ( + "encoding/json" "fmt" "github.com/pkg/errors" - events "github.com/tendermint/go-events" - "github.com/tendermint/go-rpc/client" - wire "github.com/tendermint/go-wire" + data "github.com/tendermint/go-wire/data" ctypes "github.com/tendermint/tendermint/rpc/core/types" + "github.com/tendermint/tendermint/rpc/lib/client" "github.com/tendermint/tendermint/types" + events "github.com/tendermint/tmlibs/events" ) /* @@ -49,42 +50,41 @@ func (c *HTTP) 
_assertIsEventSwitch() types.EventSwitch { } func (c *HTTP) Status() (*ctypes.ResultStatus, error) { - tmResult := new(ctypes.TMResult) - _, err := c.rpc.Call("status", map[string]interface{}{}, tmResult) + result := new(ctypes.ResultStatus) + _, err := c.rpc.Call("status", map[string]interface{}{}, result) if err != nil { return nil, errors.Wrap(err, "Status") } - // note: panics if rpc doesn't match. okay??? - return (*tmResult).(*ctypes.ResultStatus), nil + return result, nil } func (c *HTTP) ABCIInfo() (*ctypes.ResultABCIInfo, error) { - tmResult := new(ctypes.TMResult) - _, err := c.rpc.Call("abci_info", map[string]interface{}{}, tmResult) + result := new(ctypes.ResultABCIInfo) + _, err := c.rpc.Call("abci_info", map[string]interface{}{}, result) if err != nil { return nil, errors.Wrap(err, "ABCIInfo") } - return (*tmResult).(*ctypes.ResultABCIInfo), nil + return result, nil } -func (c *HTTP) ABCIQuery(path string, data []byte, prove bool) (*ctypes.ResultABCIQuery, error) { - tmResult := new(ctypes.TMResult) +func (c *HTTP) ABCIQuery(path string, data data.Bytes, prove bool) (*ctypes.ResultABCIQuery, error) { + result := new(ctypes.ResultABCIQuery) _, err := c.rpc.Call("abci_query", map[string]interface{}{"path": path, "data": data, "prove": prove}, - tmResult) + result) if err != nil { return nil, errors.Wrap(err, "ABCIQuery") } - return (*tmResult).(*ctypes.ResultABCIQuery), nil + return result, nil } func (c *HTTP) BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) { - tmResult := new(ctypes.TMResult) - _, err := c.rpc.Call("broadcast_tx_commit", map[string]interface{}{"tx": tx}, tmResult) + result := new(ctypes.ResultBroadcastTxCommit) + _, err := c.rpc.Call("broadcast_tx_commit", map[string]interface{}{"tx": tx}, result) if err != nil { return nil, errors.Wrap(err, "broadcast_tx_commit") } - return (*tmResult).(*ctypes.ResultBroadcastTxCommit), nil + return result, nil } func (c *HTTP) BroadcastTxAsync(tx types.Tx) 
(*ctypes.ResultBroadcastTx, error) { @@ -96,90 +96,90 @@ func (c *HTTP) BroadcastTxSync(tx types.Tx) (*ctypes.ResultBroadcastTx, error) { } func (c *HTTP) broadcastTX(route string, tx types.Tx) (*ctypes.ResultBroadcastTx, error) { - tmResult := new(ctypes.TMResult) - _, err := c.rpc.Call(route, map[string]interface{}{"tx": tx}, tmResult) + result := new(ctypes.ResultBroadcastTx) + _, err := c.rpc.Call(route, map[string]interface{}{"tx": tx}, result) if err != nil { return nil, errors.Wrap(err, route) } - return (*tmResult).(*ctypes.ResultBroadcastTx), nil + return result, nil } func (c *HTTP) NetInfo() (*ctypes.ResultNetInfo, error) { - tmResult := new(ctypes.TMResult) - _, err := c.rpc.Call("net_info", map[string]interface{}{}, tmResult) + result := new(ctypes.ResultNetInfo) + _, err := c.rpc.Call("net_info", map[string]interface{}{}, result) if err != nil { return nil, errors.Wrap(err, "NetInfo") } - return (*tmResult).(*ctypes.ResultNetInfo), nil + return result, nil } func (c *HTTP) DumpConsensusState() (*ctypes.ResultDumpConsensusState, error) { - tmResult := new(ctypes.TMResult) - _, err := c.rpc.Call("dump_consensus_state", map[string]interface{}{}, tmResult) + result := new(ctypes.ResultDumpConsensusState) + _, err := c.rpc.Call("dump_consensus_state", map[string]interface{}{}, result) if err != nil { return nil, errors.Wrap(err, "DumpConsensusState") } - return (*tmResult).(*ctypes.ResultDumpConsensusState), nil + return result, nil } func (c *HTTP) BlockchainInfo(minHeight, maxHeight int) (*ctypes.ResultBlockchainInfo, error) { - tmResult := new(ctypes.TMResult) + result := new(ctypes.ResultBlockchainInfo) _, err := c.rpc.Call("blockchain", map[string]interface{}{"minHeight": minHeight, "maxHeight": maxHeight}, - tmResult) + result) if err != nil { return nil, errors.Wrap(err, "BlockchainInfo") } - return (*tmResult).(*ctypes.ResultBlockchainInfo), nil + return result, nil } func (c *HTTP) Genesis() (*ctypes.ResultGenesis, error) { - tmResult := 
new(ctypes.TMResult) - _, err := c.rpc.Call("genesis", map[string]interface{}{}, tmResult) + result := new(ctypes.ResultGenesis) + _, err := c.rpc.Call("genesis", map[string]interface{}{}, result) if err != nil { return nil, errors.Wrap(err, "Genesis") } - return (*tmResult).(*ctypes.ResultGenesis), nil + return result, nil } func (c *HTTP) Block(height int) (*ctypes.ResultBlock, error) { - tmResult := new(ctypes.TMResult) - _, err := c.rpc.Call("block", map[string]interface{}{"height": height}, tmResult) + result := new(ctypes.ResultBlock) + _, err := c.rpc.Call("block", map[string]interface{}{"height": height}, result) if err != nil { return nil, errors.Wrap(err, "Block") } - return (*tmResult).(*ctypes.ResultBlock), nil + return result, nil } func (c *HTTP) Commit(height int) (*ctypes.ResultCommit, error) { - tmResult := new(ctypes.TMResult) - _, err := c.rpc.Call("commit", map[string]interface{}{"height": height}, tmResult) + result := new(ctypes.ResultCommit) + _, err := c.rpc.Call("commit", map[string]interface{}{"height": height}, result) if err != nil { return nil, errors.Wrap(err, "Commit") } - return (*tmResult).(*ctypes.ResultCommit), nil + return result, nil } func (c *HTTP) Tx(hash []byte, prove bool) (*ctypes.ResultTx, error) { - tmResult := new(ctypes.TMResult) + result := new(ctypes.ResultTx) query := map[string]interface{}{ "hash": hash, "prove": prove, } - _, err := c.rpc.Call("tx", query, tmResult) + _, err := c.rpc.Call("tx", query, result) if err != nil { return nil, errors.Wrap(err, "Tx") } - return (*tmResult).(*ctypes.ResultTx), nil + return result, nil } func (c *HTTP) Validators() (*ctypes.ResultValidators, error) { - tmResult := new(ctypes.TMResult) - _, err := c.rpc.Call("validators", map[string]interface{}{}, tmResult) + result := new(ctypes.ResultValidators) + _, err := c.rpc.Call("validators", map[string]interface{}{}, result) if err != nil { return nil, errors.Wrap(err, "Validators") } - return (*tmResult).(*ctypes.ResultValidators), 
nil + return result, nil } /** websocket event stuff here... **/ @@ -197,7 +197,7 @@ type WSEvents struct { // used to maintain counts of actively listened events // so we can properly subscribe/unsubscribe // FIXME: thread-safety??? - // FIXME: reuse code from go-events??? + // FIXME: reuse code from tmlibs/events??? evtCount map[string]int // count how many time each event is subscribed listeners map[string][]string // keep track of which events each listener is listening to } @@ -334,18 +334,15 @@ func (w *WSEvents) eventListener() { // some implementation of types.TMEventData, and sends it off // on the merry way to the EventSwitch func (w *WSEvents) parseEvent(data []byte) (err error) { - result := new(ctypes.TMResult) - wire.ReadJSONPtr(result, data, &err) + result := new(ctypes.ResultEvent) + err = json.Unmarshal(data, result) if err != nil { - return err - } - event, ok := (*result).(*ctypes.ResultEvent) - if !ok { // ignore silently (eg. subscribe, unsubscribe and maybe other events) + // TODO: ? return nil } // looks good! let's fire this baby! - w.EventSwitch.FireEvent(event.Name, event.Data) + w.EventSwitch.FireEvent(result.Name, result.Data) return nil } diff --git a/rpc/client/interface.go b/rpc/client/interface.go index 2ba890798..0cd0f29bb 100644 --- a/rpc/client/interface.go +++ b/rpc/client/interface.go @@ -20,6 +20,7 @@ implementation. 
package client import ( + data "github.com/tendermint/go-wire/data" ctypes "github.com/tendermint/tendermint/rpc/core/types" "github.com/tendermint/tendermint/types" ) @@ -30,7 +31,7 @@ import ( type ABCIClient interface { // reading from abci app ABCIInfo() (*ctypes.ResultABCIInfo, error) - ABCIQuery(path string, data []byte, prove bool) (*ctypes.ResultABCIQuery, error) + ABCIQuery(path string, data data.Bytes, prove bool) (*ctypes.ResultABCIQuery, error) // writing to abci app BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) diff --git a/rpc/client/localclient.go b/rpc/client/localclient.go index d0f0d11b1..f4eb00d78 100644 --- a/rpc/client/localclient.go +++ b/rpc/client/localclient.go @@ -1,6 +1,7 @@ package client import ( + data "github.com/tendermint/go-wire/data" nm "github.com/tendermint/tendermint/node" "github.com/tendermint/tendermint/rpc/core" ctypes "github.com/tendermint/tendermint/rpc/core/types" @@ -56,7 +57,7 @@ func (c Local) ABCIInfo() (*ctypes.ResultABCIInfo, error) { return core.ABCIInfo() } -func (c Local) ABCIQuery(path string, data []byte, prove bool) (*ctypes.ResultABCIQuery, error) { +func (c Local) ABCIQuery(path string, data data.Bytes, prove bool) (*ctypes.ResultABCIQuery, error) { return core.ABCIQuery(path, data, prove) } diff --git a/rpc/client/mock/abci.go b/rpc/client/mock/abci.go index 6f6fa1d47..0d1012557 100644 --- a/rpc/client/mock/abci.go +++ b/rpc/client/mock/abci.go @@ -2,6 +2,7 @@ package mock import ( abci "github.com/tendermint/abci/types" + data "github.com/tendermint/go-wire/data" "github.com/tendermint/tendermint/rpc/client" ctypes "github.com/tendermint/tendermint/rpc/core/types" "github.com/tendermint/tendermint/types" @@ -22,20 +23,18 @@ func (a ABCIApp) ABCIInfo() (*ctypes.ResultABCIInfo, error) { return &ctypes.ResultABCIInfo{a.App.Info()}, nil } -func (a ABCIApp) ABCIQuery(path string, data []byte, prove bool) (*ctypes.ResultABCIQuery, error) { +func (a ABCIApp) ABCIQuery(path string, data 
data.Bytes, prove bool) (*ctypes.ResultABCIQuery, error) { q := a.App.Query(abci.RequestQuery{data, path, 0, prove}) - return &ctypes.ResultABCIQuery{q}, nil + return &ctypes.ResultABCIQuery{q.Result()}, nil } func (a ABCIApp) BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) { res := ctypes.ResultBroadcastTxCommit{} - c := a.App.CheckTx(tx) - res.CheckTx = &abci.ResponseCheckTx{c.Code, c.Data, c.Log} - if !c.IsOK() { + res.CheckTx = a.App.CheckTx(tx) + if !res.CheckTx.IsOK() { return &res, nil } - d := a.App.DeliverTx(tx) - res.DeliverTx = &abci.ResponseDeliverTx{d.Code, d.Data, d.Log} + res.DeliverTx = a.App.DeliverTx(tx) return &res, nil } @@ -79,12 +78,13 @@ func (m ABCIMock) ABCIInfo() (*ctypes.ResultABCIInfo, error) { return &ctypes.ResultABCIInfo{res.(abci.ResponseInfo)}, nil } -func (m ABCIMock) ABCIQuery(path string, data []byte, prove bool) (*ctypes.ResultABCIQuery, error) { +func (m ABCIMock) ABCIQuery(path string, data data.Bytes, prove bool) (*ctypes.ResultABCIQuery, error) { res, err := m.Query.GetResponse(QueryArgs{path, data, prove}) if err != nil { return nil, err } - return &ctypes.ResultABCIQuery{res.(abci.ResponseQuery)}, nil + resQuery := res.(abci.ResponseQuery) + return &ctypes.ResultABCIQuery{resQuery.Result()}, nil } func (m ABCIMock) BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) { @@ -131,7 +131,7 @@ func (r *ABCIRecorder) _assertABCIClient() client.ABCIClient { type QueryArgs struct { Path string - Data []byte + Data data.Bytes Prove bool } @@ -149,7 +149,7 @@ func (r *ABCIRecorder) ABCIInfo() (*ctypes.ResultABCIInfo, error) { return res, err } -func (r *ABCIRecorder) ABCIQuery(path string, data []byte, prove bool) (*ctypes.ResultABCIQuery, error) { +func (r *ABCIRecorder) ABCIQuery(path string, data data.Bytes, prove bool) (*ctypes.ResultABCIQuery, error) { res, err := r.Client.ABCIQuery(path, data, prove) r.addCall(Call{ Name: "abci_query", diff --git a/rpc/client/mock/abci_test.go 
b/rpc/client/mock/abci_test.go index 823752caf..935f9ff94 100644 --- a/rpc/client/mock/abci_test.go +++ b/rpc/client/mock/abci_test.go @@ -10,6 +10,7 @@ import ( "github.com/stretchr/testify/require" "github.com/tendermint/abci/example/dummy" abci "github.com/tendermint/abci/types" + data "github.com/tendermint/go-wire/data" ctypes "github.com/tendermint/tendermint/rpc/core/types" "github.com/tendermint/tendermint/types" @@ -35,8 +36,8 @@ func TestABCIMock(t *testing.T) { BroadcastCommit: mock.Call{ Args: goodTx, Response: &ctypes.ResultBroadcastTxCommit{ - CheckTx: &abci.ResponseCheckTx{Data: []byte("stand")}, - DeliverTx: &abci.ResponseDeliverTx{Data: []byte("deliver")}, + CheckTx: abci.Result{Data: data.Bytes("stand")}, + DeliverTx: abci.Result{Data: data.Bytes("deliver")}, }, Error: errors.New("bad tx"), }, @@ -52,9 +53,9 @@ func TestABCIMock(t *testing.T) { query, err := m.ABCIQuery("/", nil, false) require.Nil(err) require.NotNil(query) - assert.Equal(key, query.Response.GetKey()) - assert.Equal(value, query.Response.GetValue()) - assert.Equal(height, query.Response.GetHeight()) + assert.EqualValues(key, query.Key) + assert.EqualValues(value, query.Value) + assert.Equal(height, query.Height) // non-commit calls always return errors _, err = m.BroadcastTxSync(goodTx) @@ -91,7 +92,7 @@ func TestABCIRecorder(t *testing.T) { require.Equal(0, len(r.Calls)) r.ABCIInfo() - r.ABCIQuery("path", []byte("data"), true) + r.ABCIQuery("path", data.Bytes("data"), true) require.Equal(2, len(r.Calls)) info := r.Calls[0] @@ -163,7 +164,7 @@ func TestABCIApp(t *testing.T) { assert.True(res.DeliverTx.Code.IsOK()) // check the key - qres, err := m.ABCIQuery("/key", []byte(key), false) + qres, err := m.ABCIQuery("/key", data.Bytes(key), false) require.Nil(err) - assert.EqualValues(value, qres.Response.Value) + assert.EqualValues(value, qres.Value) } diff --git a/rpc/client/mock/client.go b/rpc/client/mock/client.go index a3cecfca1..bf8d78dce 100644 --- a/rpc/client/mock/client.go 
+++ b/rpc/client/mock/client.go @@ -16,6 +16,7 @@ package mock import ( "reflect" + data "github.com/tendermint/go-wire/data" "github.com/tendermint/tendermint/rpc/client" "github.com/tendermint/tendermint/rpc/core" ctypes "github.com/tendermint/tendermint/rpc/core/types" @@ -83,7 +84,7 @@ func (c Client) ABCIInfo() (*ctypes.ResultABCIInfo, error) { return core.ABCIInfo() } -func (c Client) ABCIQuery(path string, data []byte, prove bool) (*ctypes.ResultABCIQuery, error) { +func (c Client) ABCIQuery(path string, data data.Bytes, prove bool) (*ctypes.ResultABCIQuery, error) { return core.ABCIQuery(path, data, prove) } diff --git a/rpc/client/mock/status_test.go b/rpc/client/mock/status_test.go index 3e695cd57..e4adf52ba 100644 --- a/rpc/client/mock/status_test.go +++ b/rpc/client/mock/status_test.go @@ -5,6 +5,7 @@ import ( "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" + data "github.com/tendermint/go-wire/data" ctypes "github.com/tendermint/tendermint/rpc/core/types" "github.com/tendermint/tendermint/rpc/client/mock" @@ -16,8 +17,8 @@ func TestStatus(t *testing.T) { m := &mock.StatusMock{ Call: mock.Call{ Response: &ctypes.ResultStatus{ - LatestBlockHash: []byte("block"), - LatestAppHash: []byte("app"), + LatestBlockHash: data.Bytes("block"), + LatestAppHash: data.Bytes("app"), LatestBlockHeight: 10, }}, } diff --git a/rpc/client/rpc_test.go b/rpc/client/rpc_test.go index d9f5d3796..2586b4687 100644 --- a/rpc/client/rpc_test.go +++ b/rpc/client/rpc_test.go @@ -6,15 +6,14 @@ import ( "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" - merkle "github.com/tendermint/go-merkle" + "github.com/tendermint/merkleeyes/iavl" merktest "github.com/tendermint/merkleeyes/testutil" "github.com/tendermint/tendermint/rpc/client" rpctest "github.com/tendermint/tendermint/rpc/test" - "github.com/tendermint/tendermint/types" ) func getHTTPClient() *client.HTTP { - rpcAddr := rpctest.GetConfig().GetString("rpc_laddr") + rpcAddr := 
rpctest.GetConfig().RPCListenAddress return client.NewHTTP(rpcAddr, "/websocket") } @@ -33,10 +32,10 @@ func GetClients() []client.Client { // Make sure status is correct (we connect properly) func TestStatus(t *testing.T) { for i, c := range GetClients() { - chainID := rpctest.GetConfig().GetString("chain_id") + moniker := rpctest.GetConfig().Moniker status, err := c.Status() require.Nil(t, err, "%d: %+v", i, err) - assert.Equal(t, chainID, status.NodeInfo.Network) + assert.Equal(t, moniker, status.NodeInfo.Moniker) } } @@ -78,12 +77,10 @@ func TestDumpConsensusState(t *testing.T) { func TestGenesisAndValidators(t *testing.T) { for i, c := range GetClients() { - chainID := rpctest.GetConfig().GetString("chain_id") // make sure this is the right genesis file gen, err := c.Genesis() require.Nil(t, err, "%d: %+v", i, err) - assert.Equal(t, chainID, gen.Genesis.ChainID) // get the genesis validator require.Equal(t, 1, len(gen.Genesis.Validators)) gval := gen.Genesis.Validators[0] @@ -119,17 +116,16 @@ func TestAppCalls(t *testing.T) { k, v, tx := merktest.MakeTxKV() bres, err := c.BroadcastTxCommit(tx) require.Nil(err, "%d: %+v", i, err) - require.True(bres.DeliverTx.GetCode().IsOK()) + require.True(bres.DeliverTx.Code.IsOK()) txh := bres.Height apph := txh + 1 // this is where the tx will be applied to the state // wait before querying client.WaitForHeight(c, apph, nil) qres, err := c.ABCIQuery("/key", k, false) - if assert.Nil(err) && assert.True(qres.Response.Code.IsOK()) { - data := qres.Response + if assert.Nil(err) && assert.True(qres.Code.IsOK()) { // assert.Equal(k, data.GetKey()) // only returned for proofs - assert.Equal(v, data.GetValue()) + assert.EqualValues(v, qres.Value) } // make sure we can lookup the tx with proof @@ -137,7 +133,7 @@ func TestAppCalls(t *testing.T) { ptx, err := c.Tx(bres.Hash, true) require.Nil(err, "%d: %+v", i, err) assert.Equal(txh, ptx.Height) - assert.Equal(types.Tx(tx), ptx.Tx) + assert.EqualValues(tx, ptx.Tx) // and we can 
even check the block is added block, err := c.Block(apph) @@ -174,12 +170,12 @@ func TestAppCalls(t *testing.T) { // and we got a proof that works! pres, err := c.ABCIQuery("/key", k, true) - if assert.Nil(err) && assert.True(pres.Response.Code.IsOK()) { - proof, err := merkle.ReadProof(pres.Response.GetProof()) + if assert.Nil(err) && assert.True(pres.Code.IsOK()) { + proof, err := iavl.ReadProof(pres.Proof) if assert.Nil(err) { - key := pres.Response.GetKey() - value := pres.Response.GetValue() - assert.Equal(appHash, proof.RootHash) + key := pres.Key + value := pres.Value + assert.EqualValues(appHash, proof.RootHash) valid := proof.Verify(key, value, appHash) assert.True(valid) } diff --git a/rpc/core/abci.go b/rpc/core/abci.go index 957727267..0cb29f479 100644 --- a/rpc/core/abci.go +++ b/rpc/core/abci.go @@ -2,12 +2,13 @@ package core import ( abci "github.com/tendermint/abci/types" + data "github.com/tendermint/go-wire/data" ctypes "github.com/tendermint/tendermint/rpc/core/types" ) //----------------------------------------------------------------------------- -func ABCIQuery(path string, data []byte, prove bool) (*ctypes.ResultABCIQuery, error) { +func ABCIQuery(path string, data data.Bytes, prove bool) (*ctypes.ResultABCIQuery, error) { resQuery, err := proxyAppQuery.QuerySync(abci.RequestQuery{ Path: path, Data: data, @@ -16,8 +17,10 @@ func ABCIQuery(path string, data []byte, prove bool) (*ctypes.ResultABCIQuery, e if err != nil { return nil, err } - log.Info("ABCIQuery", "path", path, "data", data, "result", resQuery) - return &ctypes.ResultABCIQuery{resQuery}, nil + logger.Info("ABCIQuery", "path", path, "data", data, "result", resQuery) + return &ctypes.ResultABCIQuery{ + resQuery.Result(), + }, nil } func ABCIInfo() (*ctypes.ResultABCIInfo, error) { diff --git a/rpc/core/blocks.go b/rpc/core/blocks.go index 65e47d125..4914fcb31 100644 --- a/rpc/core/blocks.go +++ b/rpc/core/blocks.go @@ -2,9 +2,10 @@ package core import ( "fmt" - . 
"github.com/tendermint/go-common" + ctypes "github.com/tendermint/tendermint/rpc/core/types" "github.com/tendermint/tendermint/types" + . "github.com/tendermint/tmlibs/common" ) //----------------------------------------------------------------------------- @@ -19,7 +20,7 @@ func BlockchainInfo(minHeight, maxHeight int) (*ctypes.ResultBlockchainInfo, err if minHeight == 0 { minHeight = MaxInt(1, maxHeight-20) } - log.Debug("BlockchainInfoHandler", "maxHeight", maxHeight, "minHeight", minHeight) + logger.Debug("BlockchainInfoHandler", "maxHeight", maxHeight, "minHeight", minHeight) blockMetas := []*types.BlockMeta{} for height := maxHeight; height >= minHeight; height-- { diff --git a/rpc/core/dev.go b/rpc/core/dev.go index 43a989534..a3c970d48 100644 --- a/rpc/core/dev.go +++ b/rpc/core/dev.go @@ -1,10 +1,8 @@ package core import ( - "fmt" "os" "runtime/pprof" - "strconv" ctypes "github.com/tendermint/tendermint/rpc/core/types" ) @@ -14,31 +12,6 @@ func UnsafeFlushMempool() (*ctypes.ResultUnsafeFlushMempool, error) { return &ctypes.ResultUnsafeFlushMempool{}, nil } -func UnsafeSetConfig(typ, key, value string) (*ctypes.ResultUnsafeSetConfig, error) { - switch typ { - case "string": - config.Set(key, value) - case "int": - val, err := strconv.Atoi(value) - if err != nil { - return nil, fmt.Errorf("non-integer value found. key:%s; value:%s; err:%v", key, value, err) - } - config.Set(key, val) - case "bool": - switch value { - case "true": - config.Set(key, true) - case "false": - config.Set(key, false) - default: - return nil, fmt.Errorf("bool value must be true or false. 
got %s", value) - } - default: - return nil, fmt.Errorf("Unknown type %s", typ) - } - return &ctypes.ResultUnsafeSetConfig{}, nil -} - var profFile *os.File func UnsafeStartCPUProfiler(filename string) (*ctypes.ResultUnsafeProfile, error) { diff --git a/rpc/core/events.go b/rpc/core/events.go index 7dc3c7c31..d7cd75612 100644 --- a/rpc/core/events.go +++ b/rpc/core/events.go @@ -1,24 +1,24 @@ package core import ( - "github.com/tendermint/go-rpc/types" ctypes "github.com/tendermint/tendermint/rpc/core/types" + "github.com/tendermint/tendermint/rpc/lib/types" "github.com/tendermint/tendermint/types" ) func Subscribe(wsCtx rpctypes.WSRPCContext, event string) (*ctypes.ResultSubscribe, error) { - log.Notice("Subscribe to event", "remote", wsCtx.GetRemoteAddr(), "event", event) + logger.Info("Subscribe to event", "remote", wsCtx.GetRemoteAddr(), "event", event) types.AddListenerForEvent(wsCtx.GetEventSwitch(), wsCtx.GetRemoteAddr(), event, func(msg types.TMEventData) { // NOTE: EventSwitch callbacks must be nonblocking // NOTE: RPCResponses of subscribed events have id suffix "#event" - tmResult := ctypes.TMResult(&ctypes.ResultEvent{event, msg}) - wsCtx.TryWriteRPCResponse(rpctypes.NewRPCResponse(wsCtx.Request.ID+"#event", &tmResult, "")) + tmResult := &ctypes.ResultEvent{event, msg} + wsCtx.TryWriteRPCResponse(rpctypes.NewRPCResponse(wsCtx.Request.ID+"#event", tmResult, "")) }) return &ctypes.ResultSubscribe{}, nil } func Unsubscribe(wsCtx rpctypes.WSRPCContext, event string) (*ctypes.ResultUnsubscribe, error) { - log.Notice("Unsubscribe to event", "remote", wsCtx.GetRemoteAddr(), "event", event) + logger.Info("Unsubscribe to event", "remote", wsCtx.GetRemoteAddr(), "event", event) wsCtx.GetEventSwitch().RemoveListenerForEvent(event, wsCtx.GetRemoteAddr()) return &ctypes.ResultUnsubscribe{}, nil } diff --git a/rpc/core/log.go b/rpc/core/log.go deleted file mode 100644 index d359bee26..000000000 --- a/rpc/core/log.go +++ /dev/null @@ -1,7 +0,0 @@ -package core - 
-import ( - "github.com/tendermint/log15" -) - -var log = log15.New("module", "rpc") diff --git a/rpc/core/mempool.go b/rpc/core/mempool.go index 4da83a1f6..1988b05c6 100644 --- a/rpc/core/mempool.go +++ b/rpc/core/mempool.go @@ -5,6 +5,7 @@ import ( "time" abci "github.com/tendermint/abci/types" + data "github.com/tendermint/go-wire/data" ctypes "github.com/tendermint/tendermint/rpc/core/types" "github.com/tendermint/tendermint/types" ) @@ -49,7 +50,7 @@ func BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) { // subscribe to tx being committed in block deliverTxResCh := make(chan types.EventDataTx, 1) types.AddListenerForEvent(eventSwitch, "rpc", types.EventStringTx(tx), func(data types.TMEventData) { - deliverTxResCh <- data.(types.EventDataTx) + deliverTxResCh <- data.Unwrap().(types.EventDataTx) }) // broadcast the tx and register checktx callback @@ -58,7 +59,7 @@ func BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) { checkTxResCh <- res }) if err != nil { - log.Error("err", "err", err) + logger.Error("err", "err", err) return nil, fmt.Errorf("Error broadcasting transaction: %v", err) } checkTxRes := <-checkTxResCh @@ -66,8 +67,8 @@ func BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) { if checkTxR.Code != abci.CodeType_OK { // CheckTx failed! 
return &ctypes.ResultBroadcastTxCommit{ - CheckTx: checkTxR, - DeliverTx: nil, + CheckTx: checkTxR.Result(), + DeliverTx: abci.Result{}, Hash: tx.Hash(), }, nil } @@ -84,18 +85,18 @@ func BroadcastTxCommit(tx types.Tx) (*ctypes.ResultBroadcastTxCommit, error) { Data: deliverTxRes.Data, Log: deliverTxRes.Log, } - log.Notice("DeliverTx passed ", "tx", []byte(tx), "response", deliverTxR) + logger.Info("DeliverTx passed ", "tx", data.Bytes(tx), "response", deliverTxR) return &ctypes.ResultBroadcastTxCommit{ - CheckTx: checkTxR, - DeliverTx: deliverTxR, + CheckTx: checkTxR.Result(), + DeliverTx: deliverTxR.Result(), Hash: tx.Hash(), Height: deliverTxRes.Height, }, nil case <-timer.C: - log.Error("failed to include tx") + logger.Error("failed to include tx") return &ctypes.ResultBroadcastTxCommit{ - CheckTx: checkTxR, - DeliverTx: nil, + CheckTx: checkTxR.Result(), + DeliverTx: abci.Result{}, Hash: tx.Hash(), }, fmt.Errorf("Timed out waiting for transaction to be included in a block") } diff --git a/rpc/core/net.go b/rpc/core/net.go index 31d9c34ea..b56216ca7 100644 --- a/rpc/core/net.go +++ b/rpc/core/net.go @@ -38,7 +38,7 @@ func UnsafeDialSeeds(seeds []string) (*ctypes.ResultDialSeeds, error) { return &ctypes.ResultDialSeeds{}, fmt.Errorf("No seeds provided") } // starts go routines to dial each seed after random delays - log.Info("DialSeeds", "addrBook", addrBook, "seeds", seeds) + logger.Info("DialSeeds", "addrBook", addrBook, "seeds", seeds) err := p2pSwitch.DialSeeds(addrBook, seeds) if err != nil { return &ctypes.ResultDialSeeds{}, err diff --git a/rpc/core/pipe.go b/rpc/core/pipe.go index 4993ed992..a18de2ad8 100644 --- a/rpc/core/pipe.go +++ b/rpc/core/pipe.go @@ -1,14 +1,13 @@ package core import ( - cfg "github.com/tendermint/go-config" - crypto "github.com/tendermint/go-crypto" - p2p "github.com/tendermint/go-p2p" "github.com/tendermint/tendermint/consensus" + p2p "github.com/tendermint/tendermint/p2p" "github.com/tendermint/tendermint/proxy" 
"github.com/tendermint/tendermint/state/txindex" "github.com/tendermint/tendermint/types" + "github.com/tendermint/tmlibs/log" ) //---------------------------------------------- @@ -34,7 +33,6 @@ var ( // external, thread safe interfaces eventSwitch types.EventSwitch proxyAppQuery proxy.AppConnQuery - config cfg.Config // interfaces defined in types and above blockStore types.BlockStore @@ -47,11 +45,9 @@ var ( genDoc *types.GenesisDoc // cache the genesis structure addrBook *p2p.AddrBook txIndexer txindex.TxIndexer -) -func SetConfig(c cfg.Config) { - config = c -} + logger log.Logger +) func SetEventSwitch(evsw types.EventSwitch) { eventSwitch = evsw @@ -92,3 +88,7 @@ func SetProxyAppQuery(appConn proxy.AppConnQuery) { func SetTxIndexer(indexer txindex.TxIndexer) { txIndexer = indexer } + +func SetLogger(l log.Logger) { + logger = l +} diff --git a/rpc/core/routes.go b/rpc/core/routes.go index 38e609601..734d1ee77 100644 --- a/rpc/core/routes.go +++ b/rpc/core/routes.go @@ -1,145 +1,43 @@ package core import ( - rpc "github.com/tendermint/go-rpc/server" - "github.com/tendermint/go-rpc/types" - ctypes "github.com/tendermint/tendermint/rpc/core/types" + rpc "github.com/tendermint/tendermint/rpc/lib/server" ) // TODO: better system than "unsafe" prefix var Routes = map[string]*rpc.RPCFunc{ // subscribe/unsubscribe are reserved for websocket events. 
- "subscribe": rpc.NewWSRPCFunc(SubscribeResult, "event"), - "unsubscribe": rpc.NewWSRPCFunc(UnsubscribeResult, "event"), + "subscribe": rpc.NewWSRPCFunc(Subscribe, "event"), + "unsubscribe": rpc.NewWSRPCFunc(Unsubscribe, "event"), // info API - "status": rpc.NewRPCFunc(StatusResult, ""), - "net_info": rpc.NewRPCFunc(NetInfoResult, ""), - "blockchain": rpc.NewRPCFunc(BlockchainInfoResult, "minHeight,maxHeight"), - "genesis": rpc.NewRPCFunc(GenesisResult, ""), - "block": rpc.NewRPCFunc(BlockResult, "height"), - "commit": rpc.NewRPCFunc(CommitResult, "height"), - "tx": rpc.NewRPCFunc(TxResult, "hash,prove"), - "validators": rpc.NewRPCFunc(ValidatorsResult, ""), - "dump_consensus_state": rpc.NewRPCFunc(DumpConsensusStateResult, ""), - "unconfirmed_txs": rpc.NewRPCFunc(UnconfirmedTxsResult, ""), - "num_unconfirmed_txs": rpc.NewRPCFunc(NumUnconfirmedTxsResult, ""), + "status": rpc.NewRPCFunc(Status, ""), + "net_info": rpc.NewRPCFunc(NetInfo, ""), + "blockchain": rpc.NewRPCFunc(BlockchainInfo, "minHeight,maxHeight"), + "genesis": rpc.NewRPCFunc(Genesis, ""), + "block": rpc.NewRPCFunc(Block, "height"), + "commit": rpc.NewRPCFunc(Commit, "height"), + "tx": rpc.NewRPCFunc(Tx, "hash,prove"), + "validators": rpc.NewRPCFunc(Validators, ""), + "dump_consensus_state": rpc.NewRPCFunc(DumpConsensusState, ""), + "unconfirmed_txs": rpc.NewRPCFunc(UnconfirmedTxs, ""), + "num_unconfirmed_txs": rpc.NewRPCFunc(NumUnconfirmedTxs, ""), // broadcast API - "broadcast_tx_commit": rpc.NewRPCFunc(BroadcastTxCommitResult, "tx"), - "broadcast_tx_sync": rpc.NewRPCFunc(BroadcastTxSyncResult, "tx"), - "broadcast_tx_async": rpc.NewRPCFunc(BroadcastTxAsyncResult, "tx"), + "broadcast_tx_commit": rpc.NewRPCFunc(BroadcastTxCommit, "tx"), + "broadcast_tx_sync": rpc.NewRPCFunc(BroadcastTxSync, "tx"), + "broadcast_tx_async": rpc.NewRPCFunc(BroadcastTxAsync, "tx"), // abci API - "abci_query": rpc.NewRPCFunc(ABCIQueryResult, "path,data,prove"), - "abci_info": rpc.NewRPCFunc(ABCIInfoResult, ""), + 
"abci_query": rpc.NewRPCFunc(ABCIQuery, "path,data,prove"), + "abci_info": rpc.NewRPCFunc(ABCIInfo, ""), // control API - "dial_seeds": rpc.NewRPCFunc(UnsafeDialSeedsResult, "seeds"), + "dial_seeds": rpc.NewRPCFunc(UnsafeDialSeeds, "seeds"), "unsafe_flush_mempool": rpc.NewRPCFunc(UnsafeFlushMempool, ""), - "unsafe_set_config": rpc.NewRPCFunc(UnsafeSetConfigResult, "type,key,value"), // profiler API - "unsafe_start_cpu_profiler": rpc.NewRPCFunc(UnsafeStartCPUProfilerResult, "filename"), - "unsafe_stop_cpu_profiler": rpc.NewRPCFunc(UnsafeStopCPUProfilerResult, ""), - "unsafe_write_heap_profile": rpc.NewRPCFunc(UnsafeWriteHeapProfileResult, "filename"), -} - -func SubscribeResult(wsCtx rpctypes.WSRPCContext, event string) (ctypes.TMResult, error) { - return Subscribe(wsCtx, event) -} - -func UnsubscribeResult(wsCtx rpctypes.WSRPCContext, event string) (ctypes.TMResult, error) { - return Unsubscribe(wsCtx, event) -} - -func StatusResult() (ctypes.TMResult, error) { - return Status() -} - -func NetInfoResult() (ctypes.TMResult, error) { - return NetInfo() -} - -func UnsafeDialSeedsResult(seeds []string) (ctypes.TMResult, error) { - return UnsafeDialSeeds(seeds) -} - -func BlockchainInfoResult(min, max int) (ctypes.TMResult, error) { - return BlockchainInfo(min, max) -} - -func GenesisResult() (ctypes.TMResult, error) { - return Genesis() -} - -func BlockResult(height int) (ctypes.TMResult, error) { - return Block(height) -} - -func CommitResult(height int) (ctypes.TMResult, error) { - return Commit(height) -} - -func ValidatorsResult() (ctypes.TMResult, error) { - return Validators() -} - -func DumpConsensusStateResult() (ctypes.TMResult, error) { - return DumpConsensusState() -} - -func UnconfirmedTxsResult() (ctypes.TMResult, error) { - return UnconfirmedTxs() -} - -func NumUnconfirmedTxsResult() (ctypes.TMResult, error) { - return NumUnconfirmedTxs() -} - -// Tx allow user to query the transaction results. 
`nil` could mean the -// transaction is in the mempool, invalidated, or was not send in the first -// place. -func TxResult(hash []byte, prove bool) (ctypes.TMResult, error) { - return Tx(hash, prove) -} - -func BroadcastTxCommitResult(tx []byte) (ctypes.TMResult, error) { - return BroadcastTxCommit(tx) -} - -func BroadcastTxSyncResult(tx []byte) (ctypes.TMResult, error) { - return BroadcastTxSync(tx) -} - -func BroadcastTxAsyncResult(tx []byte) (ctypes.TMResult, error) { - return BroadcastTxAsync(tx) -} - -func ABCIQueryResult(path string, data []byte, prove bool) (ctypes.TMResult, error) { - return ABCIQuery(path, data, prove) -} - -func ABCIInfoResult() (ctypes.TMResult, error) { - return ABCIInfo() -} - -func UnsafeFlushMempoolResult() (ctypes.TMResult, error) { - return UnsafeFlushMempool() -} - -func UnsafeSetConfigResult(typ, key, value string) (ctypes.TMResult, error) { - return UnsafeSetConfig(typ, key, value) -} - -func UnsafeStartCPUProfilerResult(filename string) (ctypes.TMResult, error) { - return UnsafeStartCPUProfiler(filename) -} - -func UnsafeStopCPUProfilerResult() (ctypes.TMResult, error) { - return UnsafeStopCPUProfiler() -} - -func UnsafeWriteHeapProfileResult(filename string) (ctypes.TMResult, error) { - return UnsafeWriteHeapProfile(filename) + "unsafe_start_cpu_profiler": rpc.NewRPCFunc(UnsafeStartCPUProfiler, "filename"), + "unsafe_stop_cpu_profiler": rpc.NewRPCFunc(UnsafeStopCPUProfiler, ""), + "unsafe_write_heap_profile": rpc.NewRPCFunc(UnsafeWriteHeapProfile, "filename"), } diff --git a/rpc/core/status.go b/rpc/core/status.go index 96ed46ea6..7493aeb0a 100644 --- a/rpc/core/status.go +++ b/rpc/core/status.go @@ -1,6 +1,7 @@ package core import ( + data "github.com/tendermint/go-wire/data" ctypes "github.com/tendermint/tendermint/rpc/core/types" "github.com/tendermint/tendermint/types" ) @@ -9,8 +10,8 @@ func Status() (*ctypes.ResultStatus, error) { latestHeight := blockStore.Height() var ( latestBlockMeta *types.BlockMeta - 
latestBlockHash []byte - latestAppHash []byte + latestBlockHash data.Bytes + latestAppHash data.Bytes latestBlockTime int64 ) if latestHeight != 0 { diff --git a/rpc/core/tx.go b/rpc/core/tx.go index 7f3cdd037..5bd6e1806 100644 --- a/rpc/core/tx.go +++ b/rpc/core/tx.go @@ -8,6 +8,9 @@ import ( "github.com/tendermint/tendermint/types" ) +// Tx allows the user to query the transaction results. `nil` could mean the +// transaction is in the mempool, invalidated, or was not sent in the first +// place. func Tx(hash []byte, prove bool) (*ctypes.ResultTx, error) { // if index is disabled, return error @@ -36,7 +39,7 @@ func Tx(hash []byte, prove bool) (*ctypes.ResultTx, error) { return &ctypes.ResultTx{ Height: height, Index: index, - TxResult: r.Result, + TxResult: r.Result.Result(), Tx: r.Tx, Proof: proof, }, nil diff --git a/rpc/core/types/responses.go b/rpc/core/types/responses.go index 7cab8535d..b9c22a8b0 100644 --- a/rpc/core/types/responses.go +++ b/rpc/core/types/responses.go @@ -5,9 +5,9 @@ import ( abci "github.com/tendermint/abci/types" "github.com/tendermint/go-crypto" - "github.com/tendermint/go-p2p" - "github.com/tendermint/go-rpc/types" - "github.com/tendermint/go-wire" + "github.com/tendermint/go-wire/data" + + "github.com/tendermint/tendermint/p2p" "github.com/tendermint/tendermint/types" ) @@ -34,8 +34,8 @@ type ResultCommit struct { type ResultStatus struct { NodeInfo *p2p.NodeInfo `json:"node_info"` PubKey crypto.PubKey `json:"pub_key"` - LatestBlockHash []byte `json:"latest_block_hash"` - LatestAppHash []byte `json:"latest_app_hash"` + LatestBlockHash data.Bytes `json:"latest_block_hash"` + LatestAppHash data.Bytes `json:"latest_app_hash"` LatestBlockHeight int `json:"latest_block_height"` LatestBlockTime int64 `json:"latest_block_time"` // nano } @@ -81,25 +81,25 @@ type ResultDumpConsensusState struct { type ResultBroadcastTx struct { Code abci.CodeType `json:"code"` - Data []byte `json:"data"` + Data data.Bytes `json:"data"` Log string `json:"log"` - 
Hash []byte `json:"hash"` + Hash data.Bytes `json:"hash"` } type ResultBroadcastTxCommit struct { - CheckTx *abci.ResponseCheckTx `json:"check_tx"` - DeliverTx *abci.ResponseDeliverTx `json:"deliver_tx"` - Hash []byte `json:"hash"` - Height int `json:"height"` + CheckTx abci.Result `json:"check_tx"` + DeliverTx abci.Result `json:"deliver_tx"` + Hash data.Bytes `json:"hash"` + Height int `json:"height"` } type ResultTx struct { - Height int `json:"height"` - Index int `json:"index"` - TxResult abci.ResponseDeliverTx `json:"tx_result"` - Tx types.Tx `json:"tx"` - Proof types.TxProof `json:"proof,omitempty"` + Height int `json:"height"` + Index int `json:"index"` + TxResult abci.Result `json:"tx_result"` + Tx types.Tx `json:"tx"` + Proof types.TxProof `json:"proof,omitempty"` } type ResultUnconfirmedTxs struct { @@ -112,96 +112,18 @@ type ResultABCIInfo struct { } type ResultABCIQuery struct { - Response abci.ResponseQuery `json:"response"` + *abci.ResultQuery `json:"response"` } type ResultUnsafeFlushMempool struct{} -type ResultUnsafeSetConfig struct{} - type ResultUnsafeProfile struct{} -type ResultSubscribe struct { -} +type ResultSubscribe struct{} -type ResultUnsubscribe struct { -} +type ResultUnsubscribe struct{} type ResultEvent struct { Name string `json:"name"` Data types.TMEventData `json:"data"` } - -//---------------------------------------- -// response & result types - -const ( - // 0x0 bytes are for the blockchain - ResultTypeGenesis = byte(0x01) - ResultTypeBlockchainInfo = byte(0x02) - ResultTypeBlock = byte(0x03) - ResultTypeCommit = byte(0x04) - - // 0x2 bytes are for the network - ResultTypeStatus = byte(0x20) - ResultTypeNetInfo = byte(0x21) - ResultTypeDialSeeds = byte(0x22) - - // 0x4 bytes are for the consensus - ResultTypeValidators = byte(0x40) - ResultTypeDumpConsensusState = byte(0x41) - - // 0x6 bytes are for txs / the application - ResultTypeBroadcastTx = byte(0x60) - ResultTypeUnconfirmedTxs = byte(0x61) - ResultTypeBroadcastTxCommit = 
byte(0x62) - ResultTypeTx = byte(0x63) - - // 0x7 bytes are for querying the application - ResultTypeABCIQuery = byte(0x70) - ResultTypeABCIInfo = byte(0x71) - - // 0x8 bytes are for events - ResultTypeSubscribe = byte(0x80) - ResultTypeUnsubscribe = byte(0x81) - ResultTypeEvent = byte(0x82) - - // 0xa bytes for testing - ResultTypeUnsafeSetConfig = byte(0xa0) - ResultTypeUnsafeStartCPUProfiler = byte(0xa1) - ResultTypeUnsafeStopCPUProfiler = byte(0xa2) - ResultTypeUnsafeWriteHeapProfile = byte(0xa3) - ResultTypeUnsafeFlushMempool = byte(0xa4) -) - -type TMResult interface { - rpctypes.Result -} - -// for wire.readReflect -var _ = wire.RegisterInterface( - struct{ TMResult }{}, - wire.ConcreteType{&ResultGenesis{}, ResultTypeGenesis}, - wire.ConcreteType{&ResultBlockchainInfo{}, ResultTypeBlockchainInfo}, - wire.ConcreteType{&ResultBlock{}, ResultTypeBlock}, - wire.ConcreteType{&ResultCommit{}, ResultTypeCommit}, - wire.ConcreteType{&ResultStatus{}, ResultTypeStatus}, - wire.ConcreteType{&ResultNetInfo{}, ResultTypeNetInfo}, - wire.ConcreteType{&ResultDialSeeds{}, ResultTypeDialSeeds}, - wire.ConcreteType{&ResultValidators{}, ResultTypeValidators}, - wire.ConcreteType{&ResultDumpConsensusState{}, ResultTypeDumpConsensusState}, - wire.ConcreteType{&ResultBroadcastTx{}, ResultTypeBroadcastTx}, - wire.ConcreteType{&ResultBroadcastTxCommit{}, ResultTypeBroadcastTxCommit}, - wire.ConcreteType{&ResultTx{}, ResultTypeTx}, - wire.ConcreteType{&ResultUnconfirmedTxs{}, ResultTypeUnconfirmedTxs}, - wire.ConcreteType{&ResultSubscribe{}, ResultTypeSubscribe}, - wire.ConcreteType{&ResultUnsubscribe{}, ResultTypeUnsubscribe}, - wire.ConcreteType{&ResultEvent{}, ResultTypeEvent}, - wire.ConcreteType{&ResultUnsafeSetConfig{}, ResultTypeUnsafeSetConfig}, - wire.ConcreteType{&ResultUnsafeProfile{}, ResultTypeUnsafeStartCPUProfiler}, - wire.ConcreteType{&ResultUnsafeProfile{}, ResultTypeUnsafeStopCPUProfiler}, - wire.ConcreteType{&ResultUnsafeProfile{}, 
ResultTypeUnsafeWriteHeapProfile}, - wire.ConcreteType{&ResultUnsafeFlushMempool{}, ResultTypeUnsafeFlushMempool}, - wire.ConcreteType{&ResultABCIQuery{}, ResultTypeABCIQuery}, - wire.ConcreteType{&ResultABCIInfo{}, ResultTypeABCIInfo}, -) diff --git a/rpc/core/types/responses_test.go b/rpc/core/types/responses_test.go index 69ee4faec..8eef19799 100644 --- a/rpc/core/types/responses_test.go +++ b/rpc/core/types/responses_test.go @@ -4,7 +4,7 @@ import ( "testing" "github.com/stretchr/testify/assert" - "github.com/tendermint/go-p2p" + "github.com/tendermint/tendermint/p2p" ) func TestStatusIndexer(t *testing.T) { diff --git a/rpc/grpc/api.go b/rpc/grpc/api.go index fab811c2e..7cfda1587 100644 --- a/rpc/grpc/api.go +++ b/rpc/grpc/api.go @@ -3,6 +3,8 @@ package core_grpc import ( core "github.com/tendermint/tendermint/rpc/core" + abci "github.com/tendermint/abci/types" + context "golang.org/x/net/context" ) @@ -14,5 +16,17 @@ func (bapi *broadcastAPI) BroadcastTx(ctx context.Context, req *RequestBroadcast if err != nil { return nil, err } - return &ResponseBroadcastTx{res.CheckTx, res.DeliverTx}, nil + return &ResponseBroadcastTx{ + + CheckTx: &abci.ResponseCheckTx{ + Code: res.CheckTx.Code, + Data: res.CheckTx.Data, + Log: res.CheckTx.Log, + }, + DeliverTx: &abci.ResponseDeliverTx{ + Code: res.DeliverTx.Code, + Data: res.DeliverTx.Data, + Log: res.DeliverTx.Log, + }, + }, nil } diff --git a/rpc/grpc/client_server.go b/rpc/grpc/client_server.go index d760bf254..e6055ede3 100644 --- a/rpc/grpc/client_server.go +++ b/rpc/grpc/client_server.go @@ -8,7 +8,7 @@ import ( "google.golang.org/grpc" - . "github.com/tendermint/go-common" + . 
"github.com/tendermint/tmlibs/common" ) // Start the grpcServer in a go routine diff --git a/rpc/lib/Dockerfile b/rpc/lib/Dockerfile new file mode 100644 index 000000000..a194711bf --- /dev/null +++ b/rpc/lib/Dockerfile @@ -0,0 +1,12 @@ +FROM golang:latest + +RUN mkdir -p /go/src/github.com/tendermint/tendermint/rpc/lib +WORKDIR /go/src/github.com/tendermint/tendermint/rpc/lib + +COPY Makefile /go/src/github.com/tendermint/tendermint/rpc/lib/ +# COPY glide.yaml /go/src/github.com/tendermint/tendermint/rpc/lib/ +# COPY glide.lock /go/src/github.com/tendermint/tendermint/rpc/lib/ + +COPY . /go/src/github.com/tendermint/tendermint/rpc/lib + +RUN make get_deps diff --git a/rpc/lib/Makefile b/rpc/lib/Makefile new file mode 100644 index 000000000..0937558a8 --- /dev/null +++ b/rpc/lib/Makefile @@ -0,0 +1,18 @@ +PACKAGES=$(shell go list ./... | grep -v "test") + +all: get_deps test + +test: + @echo "--> Running go test --race" + @go test --race $(PACKAGES) + @echo "--> Running integration tests" + @bash ./test/integration_test.sh + +get_deps: + @echo "--> Running go get" + @go get -v -d $(PACKAGES) + @go list -f '{{join .TestImports "\n"}}' ./... | \ + grep -v /vendor/ | sort | uniq | \ + xargs go get -v -d + +.PHONY: all test get_deps diff --git a/rpc/lib/README.md b/rpc/lib/README.md new file mode 100644 index 000000000..de481c2f6 --- /dev/null +++ b/rpc/lib/README.md @@ -0,0 +1,121 @@ +# tendermint/rpc/lib + +[![CircleCI](https://circleci.com/gh/tendermint/tendermint/rpc/lib.svg?style=svg)](https://circleci.com/gh/tendermint/tendermint/rpc/lib) + +HTTP RPC server supporting calls via uri params, jsonrpc, and jsonrpc over websockets + +# Client Requests + +Suppose we want to expose the rpc function `HelloWorld(name string, num int)`. 
+ +## GET (URI) + +As a GET request, it would have URI encoded parameters, and look like: + +``` +curl 'http://localhost:8008/hello_world?name="my_world"&num=5' +``` + +Note the `'` around the url, which is just so bash doesn't ignore the quotes in `"my_world"`. +This should also work: + +``` +curl http://localhost:8008/hello_world?name=\"my_world\"&num=5 +``` + +A GET request to `/` returns a list of available endpoints. +For those which take arguments, the arguments will be listed in order, with `_` where the actual value should be. + +## POST (JSONRPC) + +As a POST request, we use JSONRPC. For instance, the same request would have this as the body: + +``` +{ + "jsonrpc": "2.0", + "id": "anything", + "method": "hello_world", + "params": { + "name": "my_world", + "num": 5 + } +} +``` + +With the above saved in file `data.json`, we can make the request with + +``` +curl --data @data.json http://localhost:8008 +``` + +## WebSocket (JSONRPC) + +All requests are exposed over websocket in the same form as the POST JSONRPC. +Websocket connections are available at their own endpoint, typically `/websocket`, +though this is configurable when starting the server. + +# Server Definition + +Define some types and routes: + +``` +type ResultStatus struct { + Value string +} + +// Define some routes +var Routes = map[string]*rpcserver.RPCFunc{ + "status": rpcserver.NewRPCFunc(Status, "arg"), +} + +// an rpc function +func Status(v string) (*ResultStatus, error) { + return &ResultStatus{v}, nil +} + +``` + +Now start the server: + +``` +mux := http.NewServeMux() +rpcserver.RegisterRPCFuncs(mux, Routes) +wm := rpcserver.NewWebsocketManager(Routes, nil) +mux.HandleFunc("/websocket", wm.WebsocketHandler) +logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout)) +go func() { + _, err := rpcserver.StartHTTPServer("0.0.0.0:8008", mux, logger) + if err != nil { + panic(err) + } +}() + +``` + +Note that unix sockets are supported as well (eg. 
`/path/to/socket` instead of `0.0.0.0:8008`) + +Now see all available endpoints by sending a GET request to `0.0.0.0:8008`. +Each route is available as a GET request, as a JSONRPCv2 POST request, and via JSONRPCv2 over websockets. + + +# Examples + +* [Tendermint](https://github.com/tendermint/tendermint/blob/master/rpc/core/routes.go) +* [tm-monitor](https://github.com/tendermint/tools/blob/master/tm-monitor/rpc.go) + +## CHANGELOG + +### 0.7.0 + +BREAKING CHANGES: + +- removed `Client` empty interface +- `ClientJSONRPC#Call` `params` argument became a map +- rename `ClientURI` -> `URIClient`, `ClientJSONRPC` -> `JSONRPCClient` + +IMPROVEMENTS: + +- added `HTTPClient` interface, which can be used for both `ClientURI` +and `ClientJSONRPC` +- all params are now optional (Golang's default will be used if some param is missing) +- added `Call` method to `WSClient` (see method's doc for details) diff --git a/rpc/lib/circle.yml b/rpc/lib/circle.yml new file mode 100644 index 000000000..0308a4e79 --- /dev/null +++ b/rpc/lib/circle.yml @@ -0,0 +1,21 @@ +machine: + environment: + GOPATH: /home/ubuntu/.go_workspace + REPO: $GOPATH/src/github.com/$CIRCLE_PROJECT_USERNAME/$CIRCLE_PROJECT_REPONAME + hosts: + circlehost: 127.0.0.1 + localhost: 127.0.0.1 + +checkout: + post: + - rm -rf $REPO + - mkdir -p $HOME/.go_workspace/src/github.com/$CIRCLE_PROJECT_USERNAME + - mv $HOME/$CIRCLE_PROJECT_REPONAME $REPO + +dependencies: + override: + - "cd $REPO && make get_deps" + +test: + override: + - "cd $REPO && make test" diff --git a/rpc/lib/client/args_test.go b/rpc/lib/client/args_test.go new file mode 100644 index 000000000..ccabd0d2c --- /dev/null +++ b/rpc/lib/client/args_test.go @@ -0,0 +1,39 @@ +package rpcclient + +import ( + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +type Tx []byte + +type Foo struct { + Bar int + Baz string +} + +func TestArgToJSON(t *testing.T) { + assert := assert.New(t) + require := require.New(t) + + 
cases := []struct { + input interface{} + expected string + }{ + {[]byte("1234"), "0x31323334"}, + {Tx("654"), "0x363534"}, + {Foo{7, "hello"}, `{"Bar":7,"Baz":"hello"}`}, + } + + for i, tc := range cases { + args := map[string]interface{}{"data": tc.input} + err := argsToJson(args) + require.Nil(err, "%d: %+v", i, err) + require.Equal(1, len(args), "%d", i) + data, ok := args["data"].(string) + require.True(ok, "%d: %#v", i, args["data"]) + assert.Equal(tc.expected, data, "%d", i) + } +} diff --git a/rpc/lib/client/http_client.go b/rpc/lib/client/http_client.go new file mode 100644 index 000000000..12cf793a6 --- /dev/null +++ b/rpc/lib/client/http_client.go @@ -0,0 +1,185 @@ +package rpcclient + +import ( + "bytes" + "encoding/json" + "fmt" + "io/ioutil" + "net" + "net/http" + "net/url" + "reflect" + "strings" + + "github.com/pkg/errors" + types "github.com/tendermint/tendermint/rpc/lib/types" + cmn "github.com/tendermint/tmlibs/common" +) + +// HTTPClient is a common interface for JSONRPCClient and URIClient. +type HTTPClient interface { + Call(method string, params map[string]interface{}, result interface{}) (interface{}, error) +} + +// TODO: Deprecate support for IP:PORT or /path/to/socket +func makeHTTPDialer(remoteAddr string) (string, func(string, string) (net.Conn, error)) { + + parts := strings.SplitN(remoteAddr, "://", 2) + var protocol, address string + if len(parts) != 2 { + cmn.PanicSanity(fmt.Sprintf("Expected fully formed listening address, including the tcp:// or unix:// prefix, given %s", remoteAddr)) + } else { + protocol, address = parts[0], parts[1] + } + + trimmedAddress := strings.Replace(address, "/", ".", -1) // replace / with . for http requests (dummy domain) + return trimmedAddress, func(proto, addr string) (net.Conn, error) { + return net.Dial(protocol, address) + } +} + +// We overwrite the http.Client.Dial so we can do http over tcp or unix. +// remoteAddr should be fully featured (eg. 
with tcp:// or unix://) +func makeHTTPClient(remoteAddr string) (string, *http.Client) { + address, dialer := makeHTTPDialer(remoteAddr) + return "http://" + address, &http.Client{ + Transport: &http.Transport{ + Dial: dialer, + }, + } +} + +//------------------------------------------------------------------------------------ + +// JSON rpc takes params as a slice +type JSONRPCClient struct { + address string + client *http.Client +} + +func NewJSONRPCClient(remote string) *JSONRPCClient { + address, client := makeHTTPClient(remote) + return &JSONRPCClient{ + address: address, + client: client, + } +} + +func (c *JSONRPCClient) Call(method string, params map[string]interface{}, result interface{}) (interface{}, error) { + request, err := types.MapToRequest("", method, params) + if err != nil { + return nil, err + } + requestBytes, err := json.Marshal(request) + if err != nil { + return nil, err + } + // log.Info(string(requestBytes)) + requestBuf := bytes.NewBuffer(requestBytes) + // log.Info(Fmt("RPC request to %v (%v): %v", c.remote, method, string(requestBytes))) + httpResponse, err := c.client.Post(c.address, "text/json", requestBuf) + if err != nil { + return nil, err + } + defer httpResponse.Body.Close() + responseBytes, err := ioutil.ReadAll(httpResponse.Body) + if err != nil { + return nil, err + } + // log.Info(Fmt("RPC response: %v", string(responseBytes))) + return unmarshalResponseBytes(responseBytes, result) +} + +//------------------------------------------------------------- + +// URI takes params as a map +type URIClient struct { + address string + client *http.Client +} + +func NewURIClient(remote string) *URIClient { + address, client := makeHTTPClient(remote) + return &URIClient{ + address: address, + client: client, + } +} + +func (c *URIClient) Call(method string, params map[string]interface{}, result interface{}) (interface{}, error) { + values, err := argsToURLValues(params) + if err != nil { + return nil, err + } + // log.Info(Fmt("URI 
request to %v (%v): %v", c.address, method, values)) + resp, err := c.client.PostForm(c.address+"/"+method, values) + if err != nil { + return nil, err + } + defer resp.Body.Close() + responseBytes, err := ioutil.ReadAll(resp.Body) + if err != nil { + return nil, err + } + return unmarshalResponseBytes(responseBytes, result) +} + +//------------------------------------------------ + +func unmarshalResponseBytes(responseBytes []byte, result interface{}) (interface{}, error) { + // read response + // if rpc/core/types is imported, the result will unmarshal + // into the correct type + // log.Notice("response", "response", string(responseBytes)) + var err error + response := &types.RPCResponse{} + err = json.Unmarshal(responseBytes, response) + if err != nil { + return nil, errors.Errorf("Error unmarshalling rpc response: %v", err) + } + errorStr := response.Error + if errorStr != "" { + return nil, errors.Errorf("Response error: %v", errorStr) + } + // unmarshal the RawMessage into the result + err = json.Unmarshal(*response.Result, result) + if err != nil { + return nil, errors.Errorf("Error unmarshalling rpc response result: %v", err) + } + return result, nil +} + +func argsToURLValues(args map[string]interface{}) (url.Values, error) { + values := make(url.Values) + if len(args) == 0 { + return values, nil + } + err := argsToJson(args) + if err != nil { + return nil, err + } + for key, val := range args { + values.Set(key, val.(string)) + } + return values, nil +} + +func argsToJson(args map[string]interface{}) error { + for k, v := range args { + rt := reflect.TypeOf(v) + isByteSlice := rt.Kind() == reflect.Slice && rt.Elem().Kind() == reflect.Uint8 + if isByteSlice { + bytes := reflect.ValueOf(v).Bytes() + args[k] = fmt.Sprintf("0x%X", bytes) + continue + } + + // Pass everything else to go-wire + data, err := json.Marshal(v) + if err != nil { + return err + } + args[k] = string(data) + } + return nil +} diff --git a/rpc/lib/client/ws_client.go 
b/rpc/lib/client/ws_client.go new file mode 100644 index 000000000..1ad744e87 --- /dev/null +++ b/rpc/lib/client/ws_client.go @@ -0,0 +1,160 @@ +package rpcclient + +import ( + "encoding/json" + "net" + "net/http" + "time" + + "github.com/gorilla/websocket" + "github.com/pkg/errors" + types "github.com/tendermint/tendermint/rpc/lib/types" + cmn "github.com/tendermint/tmlibs/common" +) + +const ( + wsResultsChannelCapacity = 10 + wsErrorsChannelCapacity = 1 + wsWriteTimeoutSeconds = 10 +) + +type WSClient struct { + cmn.BaseService + Address string // IP:PORT or /path/to/socket + Endpoint string // /websocket/url/endpoint + Dialer func(string, string) (net.Conn, error) + *websocket.Conn + ResultsCh chan json.RawMessage // closes upon WSClient.Stop() + ErrorsCh chan error // closes upon WSClient.Stop() +} + +// create a new connection +func NewWSClient(remoteAddr, endpoint string) *WSClient { + addr, dialer := makeHTTPDialer(remoteAddr) + wsClient := &WSClient{ + Address: addr, + Dialer: dialer, + Endpoint: endpoint, + Conn: nil, + } + wsClient.BaseService = *cmn.NewBaseService(nil, "WSClient", wsClient) + return wsClient +} + +func (wsc *WSClient) String() string { + return wsc.Address + ", " + wsc.Endpoint +} + +// OnStart implements cmn.BaseService interface +func (wsc *WSClient) OnStart() error { + wsc.BaseService.OnStart() + err := wsc.dial() + if err != nil { + return err + } + wsc.ResultsCh = make(chan json.RawMessage, wsResultsChannelCapacity) + wsc.ErrorsCh = make(chan error, wsErrorsChannelCapacity) + go wsc.receiveEventsRoutine() + return nil +} + +// OnReset implements cmn.BaseService interface +func (wsc *WSClient) OnReset() error { + return nil +} + +func (wsc *WSClient) dial() error { + + // Dial + dialer := &websocket.Dialer{ + NetDial: wsc.Dialer, + Proxy: http.ProxyFromEnvironment, + } + rHeader := http.Header{} + con, _, err := dialer.Dial("ws://"+wsc.Address+wsc.Endpoint, rHeader) + if err != nil { + return err + } + // Set the ping/pong handlers 
+ con.SetPingHandler(func(m string) error { + // NOTE: https://github.com/gorilla/websocket/issues/97 + go con.WriteControl(websocket.PongMessage, []byte(m), time.Now().Add(time.Second*wsWriteTimeoutSeconds)) + return nil + }) + con.SetPongHandler(func(m string) error { + // NOTE: https://github.com/gorilla/websocket/issues/97 + return nil + }) + wsc.Conn = con + return nil +} + +// OnStop implements cmn.BaseService interface +func (wsc *WSClient) OnStop() { + wsc.BaseService.OnStop() + wsc.Conn.Close() + // ResultsCh/ErrorsCh is closed in receiveEventsRoutine. +} + +func (wsc *WSClient) receiveEventsRoutine() { + for { + _, data, err := wsc.ReadMessage() + if err != nil { + wsc.Logger.Info("WSClient failed to read message", "error", err, "data", string(data)) + wsc.Stop() + break + } else { + var response types.RPCResponse + err := json.Unmarshal(data, &response) + if err != nil { + wsc.Logger.Info("WSClient failed to parse message", "error", err, "data", string(data)) + wsc.ErrorsCh <- err + continue + } + if response.Error != "" { + wsc.ErrorsCh <- errors.Errorf(response.Error) + continue + } + wsc.ResultsCh <- *response.Result + } + } + // this must be modified in the same go-routine that reads from the + // connection to avoid race conditions + wsc.Conn = nil + + // Cleanup + close(wsc.ResultsCh) + close(wsc.ErrorsCh) +} + +// Subscribe to an event. Note the server must have a "subscribe" route +// defined. +func (wsc *WSClient) Subscribe(eventid string) error { + params := map[string]interface{}{"event": eventid} + request, err := types.MapToRequest("", "subscribe", params) + if err == nil { + err = wsc.WriteJSON(request) + } + return err +} + +// Unsubscribe from an event. Note the server must have a "unsubscribe" route +// defined. 
+func (wsc *WSClient) Unsubscribe(eventid string) error { + params := map[string]interface{}{"event": eventid} + request, err := types.MapToRequest("", "unsubscribe", params) + if err == nil { + err = wsc.WriteJSON(request) + } + return err +} + +// Call asynchronously calls a given method by sending an RPCRequest to the +// server. Results will be available on ResultsCh, errors, if any, on ErrorsCh. +func (wsc *WSClient) Call(method string, params map[string]interface{}) error { + request, err := types.MapToRequest("", method, params) + if err == nil { + err = wsc.WriteJSON(request) + } + return err +} diff --git a/rpc/lib/rpc_test.go b/rpc/lib/rpc_test.go new file mode 100644 index 000000000..c7bed052f --- /dev/null +++ b/rpc/lib/rpc_test.go @@ -0,0 +1,335 @@ +package rpc + +import ( + "bytes" + crand "crypto/rand" + "encoding/json" + "fmt" + "math/rand" + "net/http" + "os/exec" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + "github.com/tendermint/go-wire/data" + client "github.com/tendermint/tendermint/rpc/lib/client" + server "github.com/tendermint/tendermint/rpc/lib/server" + types "github.com/tendermint/tendermint/rpc/lib/types" + "github.com/tendermint/tmlibs/log" +) + +// Client and Server should work over tcp or unix sockets +const ( + tcpAddr = "tcp://0.0.0.0:47768" + + unixSocket = "/tmp/rpc_test.sock" + unixAddr = "unix://" + unixSocket + + websocketEndpoint = "/websocket/endpoint" +) + +type ResultEcho struct { + Value string `json:"value"` +} + +type ResultEchoInt struct { + Value int `json:"value"` +} + +type ResultEchoBytes struct { + Value []byte `json:"value"` +} + +type ResultEchoDataBytes struct { + Value data.Bytes `json:"value"` +} + +// Define some routes +var Routes = map[string]*server.RPCFunc{ + "echo": server.NewRPCFunc(EchoResult, "arg"), + "echo_ws": server.NewWSRPCFunc(EchoWSResult, "arg"), + "echo_bytes": server.NewRPCFunc(EchoBytesResult, "arg"), + "echo_data_bytes": 
server.NewRPCFunc(EchoDataBytesResult, "arg"), + "echo_int": server.NewRPCFunc(EchoIntResult, "arg"), +} + +func EchoResult(v string) (*ResultEcho, error) { + return &ResultEcho{v}, nil +} + +func EchoWSResult(wsCtx types.WSRPCContext, v string) (*ResultEcho, error) { + return &ResultEcho{v}, nil +} + +func EchoIntResult(v int) (*ResultEchoInt, error) { + return &ResultEchoInt{v}, nil +} + +func EchoBytesResult(v []byte) (*ResultEchoBytes, error) { + return &ResultEchoBytes{v}, nil +} + +func EchoDataBytesResult(v data.Bytes) (*ResultEchoDataBytes, error) { + return &ResultEchoDataBytes{v}, nil +} + +// launch unix and tcp servers +func init() { + cmd := exec.Command("rm", "-f", unixSocket) + err := cmd.Start() + if err != nil { + panic(err) + } + if err = cmd.Wait(); err != nil { + panic(err) + } + + mux := http.NewServeMux() + server.RegisterRPCFuncs(mux, Routes, log.TestingLogger()) + wm := server.NewWebsocketManager(Routes, nil) + wm.SetLogger(log.TestingLogger()) + mux.HandleFunc(websocketEndpoint, wm.WebsocketHandler) + go func() { + _, err := server.StartHTTPServer(tcpAddr, mux, log.TestingLogger()) + if err != nil { + panic(err) + } + }() + + mux2 := http.NewServeMux() + server.RegisterRPCFuncs(mux2, Routes, log.TestingLogger()) + wm = server.NewWebsocketManager(Routes, nil) + wm.SetLogger(log.TestingLogger()) + mux2.HandleFunc(websocketEndpoint, wm.WebsocketHandler) + go func() { + _, err := server.StartHTTPServer(unixAddr, mux2, log.TestingLogger()) + if err != nil { + panic(err) + } + }() + + // wait for servers to start + time.Sleep(time.Second * 2) +} + +func echoViaHTTP(cl client.HTTPClient, val string) (string, error) { + params := map[string]interface{}{ + "arg": val, + } + result := new(ResultEcho) + if _, err := cl.Call("echo", params, result); err != nil { + return "", err + } + return result.Value, nil +} + +func echoIntViaHTTP(cl client.HTTPClient, val int) (int, error) { + params := map[string]interface{}{ + "arg": val, + } + result := 
new(ResultEchoInt) + if _, err := cl.Call("echo_int", params, result); err != nil { + return 0, err + } + return result.Value, nil +} + +func echoBytesViaHTTP(cl client.HTTPClient, bytes []byte) ([]byte, error) { + params := map[string]interface{}{ + "arg": bytes, + } + result := new(ResultEchoBytes) + if _, err := cl.Call("echo_bytes", params, result); err != nil { + return []byte{}, err + } + return result.Value, nil +} + +func echoDataBytesViaHTTP(cl client.HTTPClient, bytes data.Bytes) (data.Bytes, error) { + params := map[string]interface{}{ + "arg": bytes, + } + result := new(ResultEchoDataBytes) + if _, err := cl.Call("echo_data_bytes", params, result); err != nil { + return []byte{}, err + } + return result.Value, nil +} + +func testWithHTTPClient(t *testing.T, cl client.HTTPClient) { + val := "acbd" + got, err := echoViaHTTP(cl, val) + require.Nil(t, err) + assert.Equal(t, got, val) + + val2 := randBytes(t) + got2, err := echoBytesViaHTTP(cl, val2) + require.Nil(t, err) + assert.Equal(t, got2, val2) + + val3 := data.Bytes(randBytes(t)) + got3, err := echoDataBytesViaHTTP(cl, val3) + require.Nil(t, err) + assert.Equal(t, got3, val3) + + val4 := rand.Intn(10000) + got4, err := echoIntViaHTTP(cl, val4) + require.Nil(t, err) + assert.Equal(t, got4, val4) +} + +func echoViaWS(cl *client.WSClient, val string) (string, error) { + params := map[string]interface{}{ + "arg": val, + } + err := cl.Call("echo", params) + if err != nil { + return "", err + } + + select { + case msg := <-cl.ResultsCh: + result := new(ResultEcho) + err = json.Unmarshal(msg, result) + if err != nil { + return "", nil + } + return result.Value, nil + case err := <-cl.ErrorsCh: + return "", err + } +} + +func echoBytesViaWS(cl *client.WSClient, bytes []byte) ([]byte, error) { + params := map[string]interface{}{ + "arg": bytes, + } + err := cl.Call("echo_bytes", params) + if err != nil { + return []byte{}, err + } + + select { + case msg := <-cl.ResultsCh: + result := new(ResultEchoBytes) + 
err = json.Unmarshal(msg, result) + if err != nil { + return []byte{}, nil + } + return result.Value, nil + case err := <-cl.ErrorsCh: + return []byte{}, err + } +} + +func testWithWSClient(t *testing.T, cl *client.WSClient) { + val := "acbd" + got, err := echoViaWS(cl, val) + require.Nil(t, err) + assert.Equal(t, got, val) + + val2 := randBytes(t) + got2, err := echoBytesViaWS(cl, val2) + require.Nil(t, err) + assert.Equal(t, got2, val2) +} + +//------------- + +func TestServersAndClientsBasic(t *testing.T) { + serverAddrs := [...]string{tcpAddr, unixAddr} + for _, addr := range serverAddrs { + cl1 := client.NewURIClient(addr) + fmt.Printf("=== testing server on %s using %v client", addr, cl1) + testWithHTTPClient(t, cl1) + + cl2 := client.NewJSONRPCClient(tcpAddr) + fmt.Printf("=== testing server on %s using %v client", addr, cl2) + testWithHTTPClient(t, cl2) + + cl3 := client.NewWSClient(tcpAddr, websocketEndpoint) + _, err := cl3.Start() + require.Nil(t, err) + fmt.Printf("=== testing server on %s using %v client", addr, cl3) + testWithWSClient(t, cl3) + cl3.Stop() + } +} + +func TestHexStringArg(t *testing.T) { + cl := client.NewURIClient(tcpAddr) + // should NOT be handled as hex + val := "0xabc" + got, err := echoViaHTTP(cl, val) + require.Nil(t, err) + assert.Equal(t, got, val) +} + +func TestQuotedStringArg(t *testing.T) { + cl := client.NewURIClient(tcpAddr) + // should NOT be unquoted + val := "\"abc\"" + got, err := echoViaHTTP(cl, val) + require.Nil(t, err) + assert.Equal(t, got, val) +} + +func TestWSNewWSRPCFunc(t *testing.T) { + cl := client.NewWSClient(tcpAddr, websocketEndpoint) + _, err := cl.Start() + require.Nil(t, err) + defer cl.Stop() + + val := "acbd" + params := map[string]interface{}{ + "arg": val, + } + err = cl.Call("echo_ws", params) + require.Nil(t, err) + + select { + case msg := <-cl.ResultsCh: + result := new(ResultEcho) + err = json.Unmarshal(msg, result) + require.Nil(t, err) + got := result.Value + assert.Equal(t, got, val) + 
case err := <-cl.ErrorsCh: + t.Fatal(err) + } +} + +func TestWSHandlesArrayParams(t *testing.T) { + cl := client.NewWSClient(tcpAddr, websocketEndpoint) + _, err := cl.Start() + require.Nil(t, err) + defer cl.Stop() + + val := "acbd" + params := []interface{}{val} + request, err := types.ArrayToRequest("", "echo_ws", params) + require.Nil(t, err) + err = cl.WriteJSON(request) + require.Nil(t, err) + + select { + case msg := <-cl.ResultsCh: + result := new(ResultEcho) + err = json.Unmarshal(msg, result) + require.Nil(t, err) + got := result.Value + assert.Equal(t, got, val) + case err := <-cl.ErrorsCh: + t.Fatalf("%+v", err) + } +} + +func randBytes(t *testing.T) []byte { + n := rand.Intn(10) + 2 + buf := make([]byte, n) + _, err := crand.Read(buf) + require.Nil(t, err) + return bytes.Replace(buf, []byte("="), []byte{100}, -1) +} diff --git a/rpc/lib/server/handlers.go b/rpc/lib/server/handlers.go new file mode 100644 index 000000000..5745f6fa1 --- /dev/null +++ b/rpc/lib/server/handlers.go @@ -0,0 +1,659 @@ +package rpcserver + +import ( + "bytes" + "encoding/hex" + "encoding/json" + "fmt" + "io/ioutil" + "net/http" + "reflect" + "sort" + "strings" + "time" + + "github.com/gorilla/websocket" + "github.com/pkg/errors" + + types "github.com/tendermint/tendermint/rpc/lib/types" + cmn "github.com/tendermint/tmlibs/common" + events "github.com/tendermint/tmlibs/events" + "github.com/tendermint/tmlibs/log" +) + +// Adds a route for each function in the funcMap, as well as general jsonrpc and websocket handlers for all functions. 
+// "result" is the interface on which the result objects are registered, and is popualted with every RPCResponse +func RegisterRPCFuncs(mux *http.ServeMux, funcMap map[string]*RPCFunc, logger log.Logger) { + // HTTP endpoints + for funcName, rpcFunc := range funcMap { + mux.HandleFunc("/"+funcName, makeHTTPHandler(rpcFunc, logger)) + } + + // JSONRPC endpoints + mux.HandleFunc("/", makeJSONRPCHandler(funcMap, logger)) +} + +//------------------------------------- +// function introspection + +// holds all type information for each function +type RPCFunc struct { + f reflect.Value // underlying rpc function + args []reflect.Type // type of each function arg + returns []reflect.Type // type of each return arg + argNames []string // name of each argument + ws bool // websocket only +} + +// wraps a function for quicker introspection +// f is the function, args are comma separated argument names +func NewRPCFunc(f interface{}, args string) *RPCFunc { + return newRPCFunc(f, args, false) +} + +func NewWSRPCFunc(f interface{}, args string) *RPCFunc { + return newRPCFunc(f, args, true) +} + +func newRPCFunc(f interface{}, args string, ws bool) *RPCFunc { + var argNames []string + if args != "" { + argNames = strings.Split(args, ",") + } + return &RPCFunc{ + f: reflect.ValueOf(f), + args: funcArgTypes(f), + returns: funcReturnTypes(f), + argNames: argNames, + ws: ws, + } +} + +// return a function's argument types +func funcArgTypes(f interface{}) []reflect.Type { + t := reflect.TypeOf(f) + n := t.NumIn() + typez := make([]reflect.Type, n) + for i := 0; i < n; i++ { + typez[i] = t.In(i) + } + return typez +} + +// return a function's return types +func funcReturnTypes(f interface{}) []reflect.Type { + t := reflect.TypeOf(f) + n := t.NumOut() + typez := make([]reflect.Type, n) + for i := 0; i < n; i++ { + typez[i] = t.Out(i) + } + return typez +} + +// function introspection +//----------------------------------------------------------------------------- +// rpc.json + +// 
jsonrpc calls grab the given method's function info and runs reflect.Call +func makeJSONRPCHandler(funcMap map[string]*RPCFunc, logger log.Logger) http.HandlerFunc { + return func(w http.ResponseWriter, r *http.Request) { + b, _ := ioutil.ReadAll(r.Body) + // if its an empty request (like from a browser), + // just display a list of functions + if len(b) == 0 { + writeListOfEndpoints(w, r, funcMap) + return + } + + var request types.RPCRequest + err := json.Unmarshal(b, &request) + if err != nil { + WriteRPCResponseHTTPError(w, http.StatusBadRequest, types.NewRPCResponse("", nil, fmt.Sprintf("Error unmarshalling request: %v", err.Error()))) + return + } + if len(r.URL.Path) > 1 { + WriteRPCResponseHTTPError(w, http.StatusNotFound, types.NewRPCResponse(request.ID, nil, fmt.Sprintf("Invalid JSONRPC endpoint %s", r.URL.Path))) + return + } + rpcFunc := funcMap[request.Method] + if rpcFunc == nil { + WriteRPCResponseHTTPError(w, http.StatusNotFound, types.NewRPCResponse(request.ID, nil, "RPC method unknown: "+request.Method)) + return + } + if rpcFunc.ws { + WriteRPCResponseHTTPError(w, http.StatusMethodNotAllowed, types.NewRPCResponse(request.ID, nil, "RPC method is only for websockets: "+request.Method)) + return + } + args, err := jsonParamsToArgsRPC(rpcFunc, request.Params) + if err != nil { + WriteRPCResponseHTTPError(w, http.StatusBadRequest, types.NewRPCResponse(request.ID, nil, fmt.Sprintf("Error converting json params to arguments: %v", err.Error()))) + return + } + returns := rpcFunc.f.Call(args) + logger.Info("HTTPJSONRPC", "method", request.Method, "args", args, "returns", returns) + result, err := unreflectResult(returns) + if err != nil { + WriteRPCResponseHTTPError(w, http.StatusInternalServerError, types.NewRPCResponse(request.ID, result, err.Error())) + return + } + WriteRPCResponseHTTP(w, types.NewRPCResponse(request.ID, result, "")) + } +} + +func mapParamsToArgs(rpcFunc *RPCFunc, params map[string]*json.RawMessage, argsOffset int) ([]reflect.Value, 
error) { + values := make([]reflect.Value, len(rpcFunc.argNames)) + for i, argName := range rpcFunc.argNames { + argType := rpcFunc.args[i+argsOffset] + + if p, ok := params[argName]; ok && p != nil && len(*p) > 0 { + val := reflect.New(argType) + err := json.Unmarshal(*p, val.Interface()) + if err != nil { + return nil, err + } + values[i] = val.Elem() + } else { // use default for that type + values[i] = reflect.Zero(argType) + } + } + + return values, nil +} + +func arrayParamsToArgs(rpcFunc *RPCFunc, params []*json.RawMessage, argsOffset int) ([]reflect.Value, error) { + if len(rpcFunc.argNames) != len(params) { + return nil, errors.Errorf("Expected %v parameters (%v), got %v (%v)", + len(rpcFunc.argNames), rpcFunc.argNames, len(params), params) + } + + values := make([]reflect.Value, len(params)) + for i, p := range params { + argType := rpcFunc.args[i+argsOffset] + val := reflect.New(argType) + err := json.Unmarshal(*p, val.Interface()) + if err != nil { + return nil, err + } + values[i] = val.Elem() + } + return values, nil +} + +// raw is unparsed json (from json.RawMessage) encoding either a map or an array. +// +// argsOffset should be 0 for RPC calls, and 1 for WS requests, where len(rpcFunc.args) != len(rpcFunc.argNames). +// Example: +// rpcFunc.args = [rpctypes.WSRPCContext string] +// rpcFunc.argNames = ["arg"] +func jsonParamsToArgs(rpcFunc *RPCFunc, raw []byte, argsOffset int) ([]reflect.Value, error) { + // first, try to get the map.. + var m map[string]*json.RawMessage + err := json.Unmarshal(raw, &m) + if err == nil { + return mapParamsToArgs(rpcFunc, m, argsOffset) + } + + // otherwise, try an array + var a []*json.RawMessage + err = json.Unmarshal(raw, &a) + if err == nil { + return arrayParamsToArgs(rpcFunc, a, argsOffset) + } + + // otherwise, bad format, we cannot parse + return nil, errors.Errorf("Unknown type for JSON params: %v. 
Expected map or array", err) +} + +// Convert a []interface{} OR a map[string]interface{} to properly typed values +func jsonParamsToArgsRPC(rpcFunc *RPCFunc, params *json.RawMessage) ([]reflect.Value, error) { + return jsonParamsToArgs(rpcFunc, *params, 0) +} + +// Same as above, but with the first param the websocket connection +func jsonParamsToArgsWS(rpcFunc *RPCFunc, params *json.RawMessage, wsCtx types.WSRPCContext) ([]reflect.Value, error) { + values, err := jsonParamsToArgs(rpcFunc, *params, 1) + if err != nil { + return nil, err + } + return append([]reflect.Value{reflect.ValueOf(wsCtx)}, values...), nil +} + +// rpc.json +//----------------------------------------------------------------------------- +// rpc.http + +// convert from a function name to the http handler +func makeHTTPHandler(rpcFunc *RPCFunc, logger log.Logger) func(http.ResponseWriter, *http.Request) { + // Exception for websocket endpoints + if rpcFunc.ws { + return func(w http.ResponseWriter, r *http.Request) { + WriteRPCResponseHTTPError(w, http.StatusMethodNotAllowed, types.NewRPCResponse("", nil, "This RPC method is only for websockets")) + } + } + // All other endpoints + return func(w http.ResponseWriter, r *http.Request) { + logger.Debug("HTTP HANDLER", "req", r) + args, err := httpParamsToArgs(rpcFunc, r) + if err != nil { + WriteRPCResponseHTTPError(w, http.StatusBadRequest, types.NewRPCResponse("", nil, fmt.Sprintf("Error converting http params to args: %v", err.Error()))) + return + } + returns := rpcFunc.f.Call(args) + logger.Info("HTTPRestRPC", "method", r.URL.Path, "args", args, "returns", returns) + result, err := unreflectResult(returns) + if err != nil { + WriteRPCResponseHTTPError(w, http.StatusInternalServerError, types.NewRPCResponse("", nil, err.Error())) + return + } + WriteRPCResponseHTTP(w, types.NewRPCResponse("", result, "")) + } +} + +// Convert an http query to a list of properly typed values. 
+// To be properly decoded the arg must be a concrete type from tendermint (if it's an interface). +func httpParamsToArgs(rpcFunc *RPCFunc, r *http.Request) ([]reflect.Value, error) { + values := make([]reflect.Value, len(rpcFunc.args)) + + for i, name := range rpcFunc.argNames { + argType := rpcFunc.args[i] + + values[i] = reflect.Zero(argType) // set default for that type + + arg := GetParam(r, name) + // log.Notice("param to arg", "argType", argType, "name", name, "arg", arg) + + if arg == "" { + continue + } + + v, err, ok := nonJsonToArg(argType, arg) + if err != nil { + return nil, err + } + if ok { + values[i] = v + continue + } + + values[i], err = _jsonStringToArg(argType, arg) + if err != nil { + return nil, err + } + } + + return values, nil +} + +func _jsonStringToArg(ty reflect.Type, arg string) (reflect.Value, error) { + v := reflect.New(ty) + err := json.Unmarshal([]byte(arg), v.Interface()) + if err != nil { + return v, err + } + v = v.Elem() + return v, nil +} + +func nonJsonToArg(ty reflect.Type, arg string) (reflect.Value, error, bool) { + isQuotedString := strings.HasPrefix(arg, `"`) && strings.HasSuffix(arg, `"`) + isHexString := strings.HasPrefix(strings.ToLower(arg), "0x") + expectingString := ty.Kind() == reflect.String + expectingByteSlice := ty.Kind() == reflect.Slice && ty.Elem().Kind() == reflect.Uint8 + + if isHexString { + if !expectingString && !expectingByteSlice { + err := errors.Errorf("Got a hex string arg, but expected '%s'", + ty.Kind().String()) + return reflect.ValueOf(nil), err, false + } + + var value []byte + value, err := hex.DecodeString(arg[2:]) + if err != nil { + return reflect.ValueOf(nil), err, false + } + if ty.Kind() == reflect.String { + return reflect.ValueOf(string(value)), nil, true + } + return reflect.ValueOf([]byte(value)), nil, true + } + + if isQuotedString && expectingByteSlice { + v := reflect.New(reflect.TypeOf("")) + err := json.Unmarshal([]byte(arg), v.Interface()) + if err != nil { + return 
reflect.ValueOf(nil), err, false + } + v = v.Elem() + return reflect.ValueOf([]byte(v.String())), nil, true + } + + return reflect.ValueOf(nil), nil, false +} + +// rpc.http +//----------------------------------------------------------------------------- +// rpc.websocket + +const ( + writeChanCapacity = 1000 + wsWriteTimeoutSeconds = 30 // each write times out after this + wsReadTimeoutSeconds = 30 // connection times out if we haven't received *anything* in this long, not even pings. + wsPingTickerSeconds = 10 // send a ping every PingTickerSeconds. +) + +// a single websocket connection +// contains listener id, underlying ws connection, +// and the event switch for subscribing to events +type wsConnection struct { + cmn.BaseService + + remoteAddr string + baseConn *websocket.Conn + writeChan chan types.RPCResponse + readTimeout *time.Timer + pingTicker *time.Ticker + + funcMap map[string]*RPCFunc + evsw events.EventSwitch +} + +// new websocket connection wrapper +func NewWSConnection(baseConn *websocket.Conn, funcMap map[string]*RPCFunc, evsw events.EventSwitch) *wsConnection { + wsc := &wsConnection{ + remoteAddr: baseConn.RemoteAddr().String(), + baseConn: baseConn, + writeChan: make(chan types.RPCResponse, writeChanCapacity), // error when full. + funcMap: funcMap, + evsw: evsw, + } + wsc.BaseService = *cmn.NewBaseService(nil, "wsConnection", wsc) + return wsc +} + +// wsc.Start() blocks until the connection closes. 
+func (wsc *wsConnection) OnStart() error { + wsc.BaseService.OnStart() + + // these must be set before the readRoutine is created, as it may + // call wsc.Stop(), which accesses these timers + wsc.readTimeout = time.NewTimer(time.Second * wsReadTimeoutSeconds) + wsc.pingTicker = time.NewTicker(time.Second * wsPingTickerSeconds) + + // Read subscriptions/unsubscriptions to events + go wsc.readRoutine() + + // Custom Ping handler to touch readTimeout + wsc.baseConn.SetPingHandler(func(m string) error { + // NOTE: https://github.com/gorilla/websocket/issues/97 + go wsc.baseConn.WriteControl(websocket.PongMessage, []byte(m), time.Now().Add(time.Second*wsWriteTimeoutSeconds)) + wsc.readTimeout.Reset(time.Second * wsReadTimeoutSeconds) + return nil + }) + wsc.baseConn.SetPongHandler(func(m string) error { + // NOTE: https://github.com/gorilla/websocket/issues/97 + wsc.readTimeout.Reset(time.Second * wsReadTimeoutSeconds) + return nil + }) + go wsc.readTimeoutRoutine() + + // Write responses, BLOCKING. + wsc.writeRoutine() + return nil +} + +func (wsc *wsConnection) OnStop() { + wsc.BaseService.OnStop() + if wsc.evsw != nil { + wsc.evsw.RemoveListener(wsc.remoteAddr) + } + wsc.readTimeout.Stop() + wsc.pingTicker.Stop() + // The write loop closes the websocket connection + // when it exits its loop, and the read loop + // closes the writeChan +} + +func (wsc *wsConnection) readTimeoutRoutine() { + select { + case <-wsc.readTimeout.C: + wsc.Logger.Info("Stopping connection due to read timeout") + wsc.Stop() + case <-wsc.Quit: + return + } +} + +// Implements WSRPCConnection +func (wsc *wsConnection) GetRemoteAddr() string { + return wsc.remoteAddr +} + +// Implements WSRPCConnection +func (wsc *wsConnection) GetEventSwitch() events.EventSwitch { + return wsc.evsw +} + +// Implements WSRPCConnection +// Blocking write to writeChan until service stops. 
+// Goroutine-safe +func (wsc *wsConnection) WriteRPCResponse(resp types.RPCResponse) { + select { + case <-wsc.Quit: + return + case wsc.writeChan <- resp: + } +} + +// Implements WSRPCConnection +// Nonblocking write. +// Goroutine-safe +func (wsc *wsConnection) TryWriteRPCResponse(resp types.RPCResponse) bool { + select { + case <-wsc.Quit: + return false + case wsc.writeChan <- resp: + return true + default: + return false + } +} + +// Read from the socket and subscribe to or unsubscribe from events +func (wsc *wsConnection) readRoutine() { + // Do not close writeChan, to allow WriteRPCResponse() to fail. + // defer close(wsc.writeChan) + + for { + select { + case <-wsc.Quit: + return + default: + var in []byte + // Do not set a deadline here like below: + // wsc.baseConn.SetReadDeadline(time.Now().Add(time.Second * wsReadTimeoutSeconds)) + // The client may not send anything for a while. + // We use `readTimeout` to handle read timeouts. + _, in, err := wsc.baseConn.ReadMessage() + if err != nil { + wsc.Logger.Info("Failed to read from connection", "remote", wsc.remoteAddr, "err", err.Error()) + // an error reading the connection, + // kill the connection + wsc.Stop() + return + } + var request types.RPCRequest + err = json.Unmarshal(in, &request) + if err != nil { + errStr := fmt.Sprintf("Error unmarshaling data: %s", err.Error()) + wsc.WriteRPCResponse(types.NewRPCResponse(request.ID, nil, errStr)) + continue + } + + // Now, fetch the RPCFunc and execute it. 
+ + rpcFunc := wsc.funcMap[request.Method] + if rpcFunc == nil { + wsc.WriteRPCResponse(types.NewRPCResponse(request.ID, nil, "RPC method unknown: "+request.Method)) + continue + } + var args []reflect.Value + if rpcFunc.ws { + wsCtx := types.WSRPCContext{Request: request, WSRPCConnection: wsc} + args, err = jsonParamsToArgsWS(rpcFunc, request.Params, wsCtx) + } else { + args, err = jsonParamsToArgsRPC(rpcFunc, request.Params) + } + if err != nil { + wsc.WriteRPCResponse(types.NewRPCResponse(request.ID, nil, err.Error())) + continue + } + returns := rpcFunc.f.Call(args) + wsc.Logger.Info("WSJSONRPC", "method", request.Method, "args", args, "returns", returns) + result, err := unreflectResult(returns) + if err != nil { + wsc.WriteRPCResponse(types.NewRPCResponse(request.ID, nil, err.Error())) + continue + } else { + wsc.WriteRPCResponse(types.NewRPCResponse(request.ID, result, "")) + continue + } + + } + } +} + +// receives on a write channel and writes out on the socket +func (wsc *wsConnection) writeRoutine() { + defer wsc.baseConn.Close() + for { + select { + case <-wsc.Quit: + return + case <-wsc.pingTicker.C: + err := wsc.baseConn.WriteMessage(websocket.PingMessage, []byte{}) + if err != nil { + wsc.Logger.Error("Failed to write ping message on websocket", "error", err) + wsc.Stop() + return + } + case msg := <-wsc.writeChan: + jsonBytes, err := json.MarshalIndent(msg, "", " ") + if err != nil { + wsc.Logger.Error("Failed to marshal RPCResponse to JSON", "error", err) + } else { + wsc.baseConn.SetWriteDeadline(time.Now().Add(time.Second * wsWriteTimeoutSeconds)) + if err = wsc.baseConn.WriteMessage(websocket.TextMessage, jsonBytes); err != nil { + wsc.Logger.Error("Failed to write response on websocket", "error", err) + wsc.Stop() + return + } + } + } + } +} + +//---------------------------------------- + +// Main manager for all websocket connections +// Holds the event switch +// NOTE: The websocket path is defined externally, e.g. 
in node/node.go +type WebsocketManager struct { + websocket.Upgrader + funcMap map[string]*RPCFunc + evsw events.EventSwitch + logger log.Logger +} + +func NewWebsocketManager(funcMap map[string]*RPCFunc, evsw events.EventSwitch) *WebsocketManager { + return &WebsocketManager{ + funcMap: funcMap, + evsw: evsw, + Upgrader: websocket.Upgrader{ + ReadBufferSize: 1024, + WriteBufferSize: 1024, + CheckOrigin: func(r *http.Request) bool { + // TODO + return true + }, + }, + logger: log.NewNopLogger(), + } +} + +func (wm *WebsocketManager) SetLogger(l log.Logger) { + wm.logger = l +} + +// Upgrades the request/response (via http.Hijack) and starts the wsConnection. +func (wm *WebsocketManager) WebsocketHandler(w http.ResponseWriter, r *http.Request) { + wsConn, err := wm.Upgrade(w, r, nil) + if err != nil { + // TODO - return http error + wm.logger.Error("Failed to upgrade to websocket connection", "error", err) + return + } + + // register connection + con := NewWSConnection(wsConn, wm.funcMap, wm.evsw) + wm.logger.Info("New websocket connection", "remote", con.remoteAddr) + con.Start() // Blocking +} + +// rpc.websocket +//----------------------------------------------------------------------------- + +// NOTE: assume returns is result struct and error. 
If error is not nil, return it +func unreflectResult(returns []reflect.Value) (interface{}, error) { + errV := returns[1] + if errV.Interface() != nil { + return nil, errors.Errorf("%v", errV.Interface()) + } + rv := returns[0] + // the result is a registered interface, + // we need a pointer to it so we can marshal with type byte + rvp := reflect.New(rv.Type()) + rvp.Elem().Set(rv) + return rvp.Interface(), nil +} + +// writes a list of available rpc endpoints as an html page +func writeListOfEndpoints(w http.ResponseWriter, r *http.Request, funcMap map[string]*RPCFunc) { + noArgNames := []string{} + argNames := []string{} + for name, funcData := range funcMap { + if len(funcData.args) == 0 { + noArgNames = append(noArgNames, name) + } else { + argNames = append(argNames, name) + } + } + sort.Strings(noArgNames) + sort.Strings(argNames) + buf := new(bytes.Buffer) + buf.WriteString("<html><body>") + buf.WriteString("<br>Available endpoints:<br>") + + for _, name := range noArgNames { + link := fmt.Sprintf("http://%s/%s", r.Host, name) + buf.WriteString(fmt.Sprintf("<a href=\"%s\">%s</a><br>", link, link)) + } + + buf.WriteString("<br>Endpoints that require arguments:<br>") + for _, name := range argNames { + link := fmt.Sprintf("http://%s/%s?", r.Host, name) + funcData := funcMap[name] + for i, argName := range funcData.argNames { + link += argName + "=_" + if i < len(funcData.argNames)-1 { + link += "&" + } + } + buf.WriteString(fmt.Sprintf("<a href=\"%s\">%s</a><br>", link, link)) + } + buf.WriteString("</body></html>") + w.Header().Set("Content-Type", "text/html") + w.WriteHeader(200) + w.Write(buf.Bytes()) +} diff --git a/rpc/lib/server/http_params.go b/rpc/lib/server/http_params.go new file mode 100644 index 000000000..565060678 --- /dev/null +++ b/rpc/lib/server/http_params.go @@ -0,0 +1,90 @@ +package rpcserver + +import ( + "encoding/hex" + "net/http" + "regexp" + "strconv" + + "github.com/pkg/errors" +) + +var ( + // Parts of regular expressions + atom = "[A-Z0-9!#$%&'*+\\-/=?^_`{|}~]+" + dotAtom = atom + `(?:\.` + atom + `)*` + domain = `[A-Z0-9.-]+\.[A-Z]{2,4}` + + RE_HEX = regexp.MustCompile(`^(?i)[a-f0-9]+$`) + RE_EMAIL = regexp.MustCompile(`^(?i)(` + dotAtom + `)@(` + dotAtom + `)$`) + RE_ADDRESS = regexp.MustCompile(`^(?i)[a-z0-9]{25,34}$`) + RE_HOST = regexp.MustCompile(`^(?i)(` + domain + `)$`) + + //RE_ID12 = regexp.MustCompile(`^[a-zA-Z0-9]{12}$`) +) + +func GetParam(r *http.Request, param string) string { + s := r.URL.Query().Get(param) + if s == "" { + s = r.FormValue(param) + } + return s +} + +func GetParamByteSlice(r *http.Request, param string) ([]byte, error) { + s := GetParam(r, param) + return hex.DecodeString(s) +} + +func GetParamInt64(r *http.Request, param string) (int64, error) { + s := GetParam(r, param) + i, err := strconv.ParseInt(s, 10, 64) + if err != nil { + return 0, errors.Errorf(param, err.Error()) + } + return i, nil +} + +func GetParamInt32(r *http.Request, param string) (int32, error) { + s := GetParam(r, param) + i, err := strconv.ParseInt(s, 10, 32) + if err != nil { + return 0, errors.Errorf(param, err.Error()) + } + return int32(i), nil +} + +func GetParamUint64(r *http.Request, param string) (uint64, error) { + s := GetParam(r, param) + i, err := strconv.ParseUint(s, 10, 64) + if err != nil { + return 0, errors.Errorf(param, err.Error()) + } + return i, nil +} + +func GetParamUint(r *http.Request, param string) (uint, error) { + s := GetParam(r, param) + i, err := strconv.ParseUint(s, 10, 64) 
+ if err != nil { + return 0, errors.Errorf(param, err.Error()) + } + return uint(i), nil +} + +func GetParamRegexp(r *http.Request, param string, re *regexp.Regexp) (string, error) { + s := GetParam(r, param) + if !re.MatchString(s) { + return "", errors.Errorf(param, "Did not match regular expression %v", re.String()) + } + return s, nil +} + +func GetParamFloat64(r *http.Request, param string) (float64, error) { + s := GetParam(r, param) + f, err := strconv.ParseFloat(s, 64) + if err != nil { + return 0, errors.Errorf(param, err.Error()) + } + return f, nil +} diff --git a/rpc/lib/server/http_server.go b/rpc/lib/server/http_server.go new file mode 100644 index 000000000..3b856b5db --- /dev/null +++ b/rpc/lib/server/http_server.go @@ -0,0 +1,136 @@ +// Commons for HTTP handling +package rpcserver + +import ( + "bufio" + "encoding/json" + "fmt" + "net" + "net/http" + "runtime/debug" + "strings" + "time" + + "github.com/pkg/errors" + types "github.com/tendermint/tendermint/rpc/lib/types" + "github.com/tendermint/tmlibs/log" +) + +func StartHTTPServer(listenAddr string, handler http.Handler, logger log.Logger) (listener net.Listener, err error) { + // listenAddr should be fully formed including tcp:// or unix:// prefix + var proto, addr string + parts := strings.SplitN(listenAddr, "://", 2) + if len(parts) != 2 { + logger.Error("WARNING (tendermint/rpc/lib): Please use fully formed listening addresses, including the tcp:// or unix:// prefix") + // we used to allow addrs without tcp/unix prefix by checking for a colon + // TODO: Deprecate + proto = types.SocketType(listenAddr) + addr = listenAddr + // return nil, errors.Errorf("Invalid listener address %s", listenAddr) + } else { + proto, addr = parts[0], parts[1] + } + + logger.Info(fmt.Sprintf("Starting RPC HTTP server on %s socket %v", proto, addr)) + listener, err = net.Listen(proto, addr) + if err != nil { + return nil, errors.Errorf("Failed to listen to %v: %v", listenAddr, err) + } + + go func() { + res := 
http.Serve( + listener, + RecoverAndLogHandler(handler, logger), + ) + logger.Error("RPC HTTP server stopped", "result", res) + }() + return listener, nil +} + +func WriteRPCResponseHTTPError(w http.ResponseWriter, httpCode int, res types.RPCResponse) { + jsonBytes, err := json.MarshalIndent(res, "", " ") + if err != nil { + panic(err) + } + + w.Header().Set("Content-Type", "application/json") + w.WriteHeader(httpCode) + w.Write(jsonBytes) +} + +func WriteRPCResponseHTTP(w http.ResponseWriter, res types.RPCResponse) { + jsonBytes, err := json.MarshalIndent(res, "", " ") + if err != nil { + panic(err) + } + w.Header().Set("Content-Type", "application/json") + w.WriteHeader(200) + w.Write(jsonBytes) +} + +//----------------------------------------------------------------------------- + +// Wraps an HTTP handler, adding error logging. +// If the inner function panics, the outer function recovers, logs, sends an +// HTTP 500 error response. +func RecoverAndLogHandler(handler http.Handler, logger log.Logger) http.Handler { + return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + // Wrap the ResponseWriter to remember the status + rww := &ResponseWriterWrapper{-1, w} + begin := time.Now() + + // Common headers + origin := r.Header.Get("Origin") + rww.Header().Set("Access-Control-Allow-Origin", origin) + rww.Header().Set("Access-Control-Allow-Credentials", "true") + rww.Header().Set("Access-Control-Expose-Headers", "X-Server-Time") + rww.Header().Set("X-Server-Time", fmt.Sprintf("%v", begin.Unix())) + + defer func() { + // Send a 500 error if a panic happens during a handler. + // Without this, Chrome & Firefox were retrying aborted ajax requests, + // at least to my localhost. 
+ if e := recover(); e != nil { + + // If RPCResponse + if res, ok := e.(types.RPCResponse); ok { + WriteRPCResponseHTTP(rww, res) + } else { + // For the rest, + logger.Error("Panic in RPC HTTP handler", "error", e, "stack", string(debug.Stack())) + rww.WriteHeader(http.StatusInternalServerError) + WriteRPCResponseHTTP(rww, types.NewRPCResponse("", nil, fmt.Sprintf("Internal Server Error: %v", e))) + } + } + + // Finally, log. + durationMS := time.Since(begin).Nanoseconds() / 1000000 + if rww.Status == -1 { + rww.Status = 200 + } + logger.Info("Served RPC HTTP response", + "method", r.Method, "url", r.URL, + "status", rww.Status, "duration", durationMS, + "remoteAddr", r.RemoteAddr, + ) + }() + + handler.ServeHTTP(rww, r) + }) +} + +// Remember the status for logging +type ResponseWriterWrapper struct { + Status int + http.ResponseWriter +} + +func (w *ResponseWriterWrapper) WriteHeader(status int) { + w.Status = status + w.ResponseWriter.WriteHeader(status) +} + +// implements http.Hijacker +func (w *ResponseWriterWrapper) Hijack() (net.Conn, *bufio.ReadWriter, error) { + return w.ResponseWriter.(http.Hijacker).Hijack() +} diff --git a/rpc/lib/server/parse_test.go b/rpc/lib/server/parse_test.go new file mode 100644 index 000000000..3c6d6edde --- /dev/null +++ b/rpc/lib/server/parse_test.go @@ -0,0 +1,174 @@ +package rpcserver + +import ( + "encoding/json" + "strconv" + "testing" + + "github.com/stretchr/testify/assert" + "github.com/tendermint/go-wire/data" +) + +func TestParseJSONMap(t *testing.T) { + assert := assert.New(t) + + input := []byte(`{"value":"1234","height":22}`) + + // naive is float,string + var p1 map[string]interface{} + err := json.Unmarshal(input, &p1) + if assert.Nil(err) { + h, ok := p1["height"].(float64) + if assert.True(ok, "%#v", p1["height"]) { + assert.EqualValues(22, h) + } + v, ok := p1["value"].(string) + if assert.True(ok, "%#v", p1["value"]) { + assert.EqualValues("1234", v) + } + } + + // preloading map with values doesn't help + 
tmp := 0 + p2 := map[string]interface{}{ + "value": &data.Bytes{}, + "height": &tmp, + } + err = json.Unmarshal(input, &p2) + if assert.Nil(err) { + h, ok := p2["height"].(float64) + if assert.True(ok, "%#v", p2["height"]) { + assert.EqualValues(22, h) + } + v, ok := p2["value"].(string) + if assert.True(ok, "%#v", p2["value"]) { + assert.EqualValues("1234", v) + } + } + + // preload here with *pointers* to the desired types + // struct has unknown types, but hard-coded keys + tmp = 0 + p3 := struct { + Value interface{} `json:"value"` + Height interface{} `json:"height"` + }{ + Height: &tmp, + Value: &data.Bytes{}, + } + err = json.Unmarshal(input, &p3) + if assert.Nil(err) { + h, ok := p3.Height.(*int) + if assert.True(ok, "%#v", p3.Height) { + assert.Equal(22, *h) + } + v, ok := p3.Value.(*data.Bytes) + if assert.True(ok, "%#v", p3.Value) { + assert.EqualValues([]byte{0x12, 0x34}, *v) + } + } + + // simplest solution, but hard-coded + p4 := struct { + Value data.Bytes `json:"value"` + Height int `json:"height"` + }{} + err = json.Unmarshal(input, &p4) + if assert.Nil(err) { + assert.EqualValues(22, p4.Height) + assert.EqualValues([]byte{0x12, 0x34}, p4.Value) + } + + // so, let's use this trick... 
+ // dynamic keys on map, and we can deserialize to the desired types + var p5 map[string]*json.RawMessage + err = json.Unmarshal(input, &p5) + if assert.Nil(err) { + var h int + err = json.Unmarshal(*p5["height"], &h) + if assert.Nil(err) { + assert.Equal(22, h) + } + + var v data.Bytes + err = json.Unmarshal(*p5["value"], &v) + if assert.Nil(err) { + assert.Equal(data.Bytes{0x12, 0x34}, v) + } + } +} + +func TestParseJSONArray(t *testing.T) { + assert := assert.New(t) + + input := []byte(`["1234",22]`) + + // naive is float,string + var p1 []interface{} + err := json.Unmarshal(input, &p1) + if assert.Nil(err) { + v, ok := p1[0].(string) + if assert.True(ok, "%#v", p1[0]) { + assert.EqualValues("1234", v) + } + h, ok := p1[1].(float64) + if assert.True(ok, "%#v", p1[1]) { + assert.EqualValues(22, h) + } + } + + // preloading map with values helps here (unlike map - p2 above) + tmp := 0 + p2 := []interface{}{&data.Bytes{}, &tmp} + err = json.Unmarshal(input, &p2) + if assert.Nil(err) { + v, ok := p2[0].(*data.Bytes) + if assert.True(ok, "%#v", p2[0]) { + assert.EqualValues([]byte{0x12, 0x34}, *v) + } + h, ok := p2[1].(*int) + if assert.True(ok, "%#v", p2[1]) { + assert.EqualValues(22, *h) + } + } +} + +func TestParseRPC(t *testing.T) { + assert := assert.New(t) + + demo := func(height int, name string) {} + call := NewRPCFunc(demo, "height,name") + + cases := []struct { + raw string + height int64 + name string + fail bool + }{ + // should parse + {`[7, "flew"]`, 7, "flew", false}, + {`{"name": "john", "height": 22}`, 22, "john", false}, + // defaults + {`{"name": "solo", "unused": "stuff"}`, 0, "solo", false}, + // should fail - wrong types/length + {`["flew", 7]`, 0, "", true}, + {`[7,"flew",100]`, 0, "", true}, + {`{"name": -12, "height": "fred"}`, 0, "", true}, + } + for idx, tc := range cases { + i := strconv.Itoa(idx) + data := []byte(tc.raw) + vals, err := jsonParamsToArgs(call, data, 0) + if tc.fail { + assert.NotNil(err, i) + } else { + assert.Nil(err, 
"%s: %+v", i, err) + if assert.Equal(2, len(vals), i) { + assert.Equal(tc.height, vals[0].Int(), i) + assert.Equal(tc.name, vals[1].String(), i) + } + } + + } + +} diff --git a/rpc/lib/test/data.json b/rpc/lib/test/data.json new file mode 100644 index 000000000..83283ec33 --- /dev/null +++ b/rpc/lib/test/data.json @@ -0,0 +1,9 @@ +{ + "jsonrpc": "2.0", + "id": "", + "method": "hello_world", + "params": { + "name": "my_world", + "num": 5 + } +} diff --git a/rpc/lib/test/integration_test.sh b/rpc/lib/test/integration_test.sh new file mode 100755 index 000000000..7c23be7d3 --- /dev/null +++ b/rpc/lib/test/integration_test.sh @@ -0,0 +1,95 @@ +#!/usr/bin/env bash +set -e + +# Get the directory of where this script is. +SOURCE="${BASH_SOURCE[0]}" +while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done +DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )" + +# Change into that dir because we expect that. +pushd "$DIR" + +echo "==> Building the server" +go build -o rpcserver main.go + +echo "==> (Re)starting the server" +PID=$(pgrep rpcserver || echo "") +if [[ $PID != "" ]]; then + kill -9 "$PID" +fi +./rpcserver & +PID=$! 
+sleep 2 + +echo "==> simple request" +R1=$(curl -s 'http://localhost:8008/hello_world?name="my_world"&num=5') +R2=$(curl -s --data @data.json http://localhost:8008) +if [[ "$R1" != "$R2" ]]; then + echo "responses are not identical:" + echo "R1: $R1" + echo "R2: $R2" + echo "FAIL" + exit 1 +else + echo "OK" +fi + +echo "==> request with 0x-prefixed hex string arg" +R1=$(curl -s 'http://localhost:8008/hello_world?name=0x41424344&num=123') +R2='{"jsonrpc":"2.0","id":"","result":{"Result":"hi ABCD 123"},"error":""}' +if [[ "$R1" != "$R2" ]]; then + echo "responses are not identical:" + echo "R1: $R1" + echo "R2: $R2" + echo "FAIL" + exit 1 +else + echo "OK" +fi + +echo "==> request with missing params" +R1=$(curl -s 'http://localhost:8008/hello_world') +R2='{"jsonrpc":"2.0","id":"","result":{"Result":"hi 0"},"error":""}' +if [[ "$R1" != "$R2" ]]; then + echo "responses are not identical:" + echo "R1: $R1" + echo "R2: $R2" + echo "FAIL" + exit 1 +else + echo "OK" +fi + +echo "==> request with unquoted string arg" +R1=$(curl -s 'http://localhost:8008/hello_world?name=abcd&num=123') +R2="{\"jsonrpc\":\"2.0\",\"id\":\"\",\"result\":null,\"error\":\"Error converting http params to args: invalid character 'a' looking for beginning of value\"}" +if [[ "$R1" != "$R2" ]]; then + echo "responses are not identical:" + echo "R1: $R1" + echo "R2: $R2" + echo "FAIL" + exit 1 +else + echo "OK" +fi + +echo "==> request with string type when expecting number arg" +R1=$(curl -s 'http://localhost:8008/hello_world?name="abcd"&num=0xabcd') +R2="{\"jsonrpc\":\"2.0\",\"id\":\"\",\"result\":null,\"error\":\"Error converting http params to args: Got a hex string arg, but expected 'int'\"}" +if [[ "$R1" != "$R2" ]]; then + echo "responses are not identical:" + echo "R1: $R1" + echo "R2: $R2" + echo "FAIL" + exit 1 +else + echo "OK" +fi + +echo "==> Stopping the server" +kill -9 $PID + +rm -f rpcserver + +popd +exit 0 diff --git a/rpc/lib/test/main.go b/rpc/lib/test/main.go new file mode 
100644 index 000000000..702ed9f73 --- /dev/null +++ b/rpc/lib/test/main.go @@ -0,0 +1,38 @@ +package main + +import ( + "fmt" + "net/http" + "os" + + rpcserver "github.com/tendermint/tendermint/rpc/lib/server" + cmn "github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/log" +) + +var routes = map[string]*rpcserver.RPCFunc{ + "hello_world": rpcserver.NewRPCFunc(HelloWorld, "name,num"), +} + +func HelloWorld(name string, num int) (Result, error) { + return Result{fmt.Sprintf("hi %s %d", name, num)}, nil +} + +type Result struct { + Result string +} + +func main() { + mux := http.NewServeMux() + logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout)) + rpcserver.RegisterRPCFuncs(mux, routes, logger) + _, err := rpcserver.StartHTTPServer("0.0.0.0:8008", mux, logger) + if err != nil { + cmn.Exit(err.Error()) + } + + // Wait forever + cmn.TrapSignal(func() { + }) + +} diff --git a/rpc/lib/types/types.go b/rpc/lib/types/types.go new file mode 100644 index 000000000..8076e4b0d --- /dev/null +++ b/rpc/lib/types/types.go @@ -0,0 +1,100 @@ +package rpctypes + +import ( + "encoding/json" + "strings" + + events "github.com/tendermint/tmlibs/events" +) + +type RPCRequest struct { + JSONRPC string `json:"jsonrpc"` + ID string `json:"id"` + Method string `json:"method"` + Params *json.RawMessage `json:"params"` // must be map[string]interface{} or []interface{} +} + +func NewRPCRequest(id string, method string, params json.RawMessage) RPCRequest { + return RPCRequest{ + JSONRPC: "2.0", + ID: id, + Method: method, + Params: &params, + } +} + +func MapToRequest(id string, method string, params map[string]interface{}) (RPCRequest, error) { + payload, err := json.Marshal(params) + if err != nil { + return RPCRequest{}, err + } + request := NewRPCRequest(id, method, payload) + return request, nil +} + +func ArrayToRequest(id string, method string, params []interface{}) (RPCRequest, error) { + payload, err := json.Marshal(params) + if err != nil { + return RPCRequest{}, err 
+ } + request := NewRPCRequest(id, method, payload) + return request, nil +} + +//---------------------------------------- + +type RPCResponse struct { + JSONRPC string `json:"jsonrpc"` + ID string `json:"id"` + Result *json.RawMessage `json:"result"` + Error string `json:"error"` +} + +func NewRPCResponse(id string, res interface{}, err string) RPCResponse { + var raw *json.RawMessage + if res != nil { + var js []byte + js, err2 := json.Marshal(res) + if err2 == nil { + rawMsg := json.RawMessage(js) + raw = &rawMsg + } else { + err = err2.Error() + } + } + return RPCResponse{ + JSONRPC: "2.0", + ID: id, + Result: raw, + Error: err, + } +} + +//---------------------------------------- + +// *wsConnection implements this interface. +type WSRPCConnection interface { + GetRemoteAddr() string + GetEventSwitch() events.EventSwitch + WriteRPCResponse(resp RPCResponse) + TryWriteRPCResponse(resp RPCResponse) bool +} + +// websocket-only RPCFuncs take this as the first parameter. +type WSRPCContext struct { + Request RPCRequest + WSRPCConnection +} + +//---------------------------------------- +// sockets +// +// Determine if it's a unix or tcp socket. +// If tcp, must specify the port; `0.0.0.0` will return incorrectly as "unix" since there's no port +func SocketType(listenAddr string) string { + socketType := "unix" + if len(strings.Split(listenAddr, ":")) >= 2 { + socketType = "tcp" + } + return socketType +} diff --git a/rpc/lib/version.go b/rpc/lib/version.go new file mode 100644 index 000000000..8828f260b --- /dev/null +++ b/rpc/lib/version.go @@ -0,0 +1,7 @@ +package rpc + +const Maj = "0" +const Min = "7" +const Fix = "0" + +const Version = Maj + "." + Min + "." 
+ Fix diff --git a/rpc/test/client_test.go b/rpc/test/client_test.go index 50e326050..b7df67841 100644 --- a/rpc/test/client_test.go +++ b/rpc/test/client_test.go @@ -10,11 +10,14 @@ import ( "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" + abci "github.com/tendermint/abci/types" - . "github.com/tendermint/go-common" - rpc "github.com/tendermint/go-rpc/client" + "github.com/tendermint/go-wire/data" + . "github.com/tendermint/tmlibs/common" + "github.com/tendermint/tendermint/rpc/core" ctypes "github.com/tendermint/tendermint/rpc/core/types" + rpc "github.com/tendermint/tendermint/rpc/lib/client" "github.com/tendermint/tendermint/state/txindex/null" "github.com/tendermint/tendermint/types" ) @@ -36,13 +39,11 @@ func TestJSONStatus(t *testing.T) { } func testStatus(t *testing.T, client rpc.HTTPClient) { - chainID := GetConfig().GetString("chain_id") - tmResult := new(ctypes.TMResult) - _, err := client.Call("status", map[string]interface{}{}, tmResult) + moniker := GetConfig().Moniker + result := new(ctypes.ResultStatus) + _, err := client.Call("status", map[string]interface{}{}, result) require.Nil(t, err) - - status := (*tmResult).(*ctypes.ResultStatus) - assert.Equal(t, chainID, status.NodeInfo.Network) + assert.Equal(t, moniker, result.NodeInfo.Moniker) } //-------------------------------------------------------------------------------- @@ -66,17 +67,15 @@ func TestJSONBroadcastTxSync(t *testing.T) { } func testBroadcastTxSync(t *testing.T, client rpc.HTTPClient) { - config.Set("block_size", 0) - defer config.Set("block_size", -1) - tmResult := new(ctypes.TMResult) + mem := node.MempoolReactor().Mempool + initMemSize := mem.Size() + result := new(ctypes.ResultBroadcastTx) tx := randBytes(t) - _, err := client.Call("broadcast_tx_sync", map[string]interface{}{"tx": tx}, tmResult) + _, err := client.Call("broadcast_tx_sync", map[string]interface{}{"tx": tx}, result) require.Nil(t, err) - res := (*tmResult).(*ctypes.ResultBroadcastTx) - 
require.Equal(t, abci.CodeType_OK, res.Code) - mem := node.MempoolReactor().Mempool - require.Equal(t, 1, mem.Size()) + require.Equal(t, abci.CodeType_OK, result.Code) + require.Equal(t, initMemSize+1, mem.Size()) txs := mem.Reap(1) require.EqualValues(t, tx, txs[0]) mem.Flush() @@ -85,17 +84,20 @@ func testBroadcastTxSync(t *testing.T, client rpc.HTTPClient) { //-------------------------------------------------------------------------------- // query -func testTxKV(t *testing.T) ([]byte, []byte, []byte) { +func testTxKV(t *testing.T) ([]byte, []byte, types.Tx) { k := randBytes(t) v := randBytes(t) - return k, v, []byte(Fmt("%s=%s", k, v)) + return k, v, types.Tx(Fmt("%s=%s", k, v)) } func sendTx(t *testing.T, client rpc.HTTPClient) ([]byte, []byte) { - tmResult := new(ctypes.TMResult) + result := new(ctypes.ResultBroadcastTxCommit) k, v, tx := testTxKV(t) - _, err := client.Call("broadcast_tx_commit", map[string]interface{}{"tx": tx}, tmResult) + _, err := client.Call("broadcast_tx_commit", map[string]interface{}{"tx": tx}, result) require.Nil(t, err) + require.NotNil(t, result.DeliverTx, "%#v", result) + require.EqualValues(t, 0, result.CheckTx.Code, "%#v", result) + require.EqualValues(t, 0, result.DeliverTx.Code, "%#v", result) return k, v } @@ -104,22 +106,21 @@ func TestURIABCIQuery(t *testing.T) { } func TestJSONABCIQuery(t *testing.T) { - testABCIQuery(t, GetURIClient()) + testABCIQuery(t, GetJSONClient()) } func testABCIQuery(t *testing.T, client rpc.HTTPClient) { k, _ := sendTx(t, client) - time.Sleep(time.Millisecond * 100) - tmResult := new(ctypes.TMResult) + time.Sleep(time.Millisecond * 500) + result := new(ctypes.ResultABCIQuery) _, err := client.Call("abci_query", - map[string]interface{}{"path": "", "data": k, "prove": false}, tmResult) + map[string]interface{}{"path": "", "data": data.Bytes(k), "prove": false}, result) require.Nil(t, err) - resQuery := (*tmResult).(*ctypes.ResultABCIQuery) - require.EqualValues(t, 0, resQuery.Response.Code) + 
require.EqualValues(t, 0, result.Code) // XXX: specific to value returned by the dummy - require.NotEqual(t, 0, len(resQuery.Response.Value)) + require.NotEqual(t, 0, len(result.Value)) } //-------------------------------------------------------------------------------- @@ -136,15 +137,14 @@ func TestJSONBroadcastTxCommit(t *testing.T) { func testBroadcastTxCommit(t *testing.T, client rpc.HTTPClient) { require := require.New(t) - tmResult := new(ctypes.TMResult) + result := new(ctypes.ResultBroadcastTxCommit) tx := randBytes(t) - _, err := client.Call("broadcast_tx_commit", map[string]interface{}{"tx": tx}, tmResult) + _, err := client.Call("broadcast_tx_commit", map[string]interface{}{"tx": tx}, result) require.Nil(err) - res := (*tmResult).(*ctypes.ResultBroadcastTxCommit) - checkTx := res.CheckTx + checkTx := result.CheckTx require.Equal(abci.CodeType_OK, checkTx.Code) - deliverTx := res.DeliverTx + deliverTx := result.DeliverTx require.Equal(abci.CodeType_OK, deliverTx.Code) mem := node.MempoolReactor().Mempool require.Equal(0, mem.Size()) @@ -158,8 +158,9 @@ func TestURITx(t *testing.T) { testTx(t, GetURIClient(), true) core.SetTxIndexer(&null.TxIndex{}) - testTx(t, GetJSONClient(), false) - core.SetTxIndexer(node.ConsensusState().GetState().TxIndexer) + defer core.SetTxIndexer(node.ConsensusState().GetState().TxIndexer) + + testTx(t, GetURIClient(), false) } func TestJSONTx(t *testing.T) { @@ -174,16 +175,15 @@ func testTx(t *testing.T, client rpc.HTTPClient, withIndexer bool) { assert, require := assert.New(t), require.New(t) // first we broadcast a tx - tmResult := new(ctypes.TMResult) + result := new(ctypes.ResultBroadcastTxCommit) txBytes := randBytes(t) tx := types.Tx(txBytes) - _, err := client.Call("broadcast_tx_commit", map[string]interface{}{"tx": txBytes}, tmResult) + _, err := client.Call("broadcast_tx_commit", map[string]interface{}{"tx": txBytes}, result) require.Nil(err) - res := (*tmResult).(*ctypes.ResultBroadcastTxCommit) - checkTx := 
res.CheckTx + checkTx := result.CheckTx require.Equal(abci.CodeType_OK, checkTx.Code) - deliverTx := res.DeliverTx + deliverTx := result.DeliverTx require.Equal(abci.CodeType_OK, deliverTx.Code) mem := node.MempoolReactor().Mempool require.Equal(0, mem.Size()) @@ -210,24 +210,23 @@ func testTx(t *testing.T, client rpc.HTTPClient, withIndexer bool) { // now we query for the tx. // since there's only one tx, we know index=0. - tmResult = new(ctypes.TMResult) + result2 := new(ctypes.ResultTx) query := map[string]interface{}{ "hash": tc.hash, "prove": tc.prove, } - _, err = client.Call("tx", query, tmResult) + _, err = client.Call("tx", query, result2) valid := (withIndexer && tc.valid) if !valid { require.NotNil(err, idx) } else { require.Nil(err, idx) - res2 := (*tmResult).(*ctypes.ResultTx) - assert.Equal(tx, res2.Tx, idx) - assert.Equal(res.Height, res2.Height, idx) - assert.Equal(0, res2.Index, idx) - assert.Equal(abci.CodeType_OK, res2.TxResult.Code, idx) + assert.Equal(tx, result2.Tx, idx) + assert.Equal(result.Height, result2.Height, idx) + assert.Equal(0, result2.Index, idx) + assert.Equal(abci.CodeType_OK, result2.TxResult.Code, idx) // time to verify the proof - proof := res2.Proof + proof := result2.Proof if tc.prove && assert.Equal(tx, proof.Data, idx) { assert.True(proof.Proof.Verify(proof.Index, proof.Total, tx.Hash(), proof.RootHash), idx) } @@ -282,7 +281,7 @@ func TestWSBlockchainGrowth(t *testing.T) { var initBlockN int for i := 0; i < 3; i++ { waitForEvent(t, wsc, eid, true, func() {}, func(eid string, eventData interface{}) error { - block := eventData.(types.EventDataNewBlock).Block + block := eventData.(types.TMEventData).Unwrap().(types.EventDataNewBlock).Block if i == 0 { initBlockN = block.Header.Height } else { @@ -311,12 +310,12 @@ func TestWSTxEvent(t *testing.T) { }() // send an tx - tmResult := new(ctypes.TMResult) - _, err := GetJSONClient().Call("broadcast_tx_sync", map[string]interface{}{"tx": tx}, tmResult) + result := 
new(ctypes.ResultBroadcastTx) + _, err := GetJSONClient().Call("broadcast_tx_sync", map[string]interface{}{"tx": tx}, result) require.Nil(err) waitForEvent(t, wsc, eid, true, func() {}, func(eid string, b interface{}) error { - evt, ok := b.(types.EventDataTx) + evt, ok := b.(types.TMEventData).Unwrap().(types.EventDataTx) require.True(ok, "Got wrong event type: %#v", b) require.Equal(tx, []byte(evt.Tx), "Returned different tx") require.Equal(abci.CodeType_OK, evt.Code) @@ -351,53 +350,3 @@ func TestWSDoubleFire(t *testing.T) { return nil }) }*/ - -//-------------------------------------------------------------------------------- -// unsafe_set_config - -var stringVal = "my string" -var intVal = 987654321 -var boolVal = true - -// don't change these -var testCasesUnsafeSetConfig = [][]string{ - []string{"string", "key1", stringVal}, - []string{"int", "key2", fmt.Sprintf("%v", intVal)}, - []string{"bool", "key3", fmt.Sprintf("%v", boolVal)}, -} - -func TestURIUnsafeSetConfig(t *testing.T) { - for _, testCase := range testCasesUnsafeSetConfig { - tmResult := new(ctypes.TMResult) - _, err := GetURIClient().Call("unsafe_set_config", map[string]interface{}{ - "type": testCase[0], - "key": testCase[1], - "value": testCase[2], - }, tmResult) - require.Nil(t, err) - } - testUnsafeSetConfig(t) -} - -func TestJSONUnsafeSetConfig(t *testing.T) { - for _, testCase := range testCasesUnsafeSetConfig { - tmResult := new(ctypes.TMResult) - _, err := GetJSONClient().Call("unsafe_set_config", - map[string]interface{}{"type": testCase[0], "key": testCase[1], "value": testCase[2]}, - tmResult) - require.Nil(t, err) - } - testUnsafeSetConfig(t) -} - -func testUnsafeSetConfig(t *testing.T) { - require := require.New(t) - s := config.GetString("key1") - require.Equal(stringVal, s) - - i := config.GetInt("key2") - require.Equal(intVal, i) - - b := config.GetBool("key3") - require.Equal(boolVal, b) -} diff --git a/rpc/test/helpers.go b/rpc/test/helpers.go index 349980e9c..130a45956 100644 
--- a/rpc/test/helpers.go +++ b/rpc/test/helpers.go @@ -1,6 +1,7 @@ package rpctest import ( + "encoding/json" "fmt" "math/rand" "os" @@ -10,25 +11,19 @@ import ( "time" "github.com/stretchr/testify/require" - logger "github.com/tendermint/go-logger" - wire "github.com/tendermint/go-wire" + "github.com/tendermint/tmlibs/log" abci "github.com/tendermint/abci/types" - cfg "github.com/tendermint/go-config" - client "github.com/tendermint/go-rpc/client" - "github.com/tendermint/tendermint/config/tendermint_test" + cfg "github.com/tendermint/tendermint/config" nm "github.com/tendermint/tendermint/node" "github.com/tendermint/tendermint/proxy" ctypes "github.com/tendermint/tendermint/rpc/core/types" core_grpc "github.com/tendermint/tendermint/rpc/grpc" + client "github.com/tendermint/tendermint/rpc/lib/client" "github.com/tendermint/tendermint/types" ) -var ( - config cfg.Config -) - -const tmLogLevel = "error" +var config *cfg.Config // f**ing long, but unique for each test func makePathname() string { @@ -56,40 +51,39 @@ func makeAddrs() (string, string, string) { } // GetConfig returns a config for the test cases as a singleton -func GetConfig() cfg.Config { +func GetConfig() *cfg.Config { if config == nil { pathname := makePathname() - config = tendermint_test.ResetConfig(pathname) - // Shut up the logging - logger.SetLogLevel(tmLogLevel) + config = cfg.ResetTestRoot(pathname) + // and we use random ports to run in parallel tm, rpc, grpc := makeAddrs() - config.Set("node_laddr", tm) - config.Set("rpc_laddr", rpc) - config.Set("grpc_laddr", grpc) + config.P2P.ListenAddress = tm + config.RPCListenAddress = rpc + config.GRPCListenAddress = grpc } return config } // GetURIClient gets a uri client pointing to the test tendermint rpc func GetURIClient() *client.URIClient { - rpcAddr := GetConfig().GetString("rpc_laddr") + rpcAddr := GetConfig().RPCListenAddress return client.NewURIClient(rpcAddr) } // GetJSONClient gets a http/json client pointing to the test tendermint 
rpc func GetJSONClient() *client.JSONRPCClient { - rpcAddr := GetConfig().GetString("rpc_laddr") + rpcAddr := GetConfig().RPCListenAddress return client.NewJSONRPCClient(rpcAddr) } func GetGRPCClient() core_grpc.BroadcastAPIClient { - grpcAddr := config.GetString("grpc_laddr") + grpcAddr := config.GRPCListenAddress return core_grpc.StartGRPCClient(grpcAddr) } func GetWSClient() *client.WSClient { - rpcAddr := GetConfig().GetString("rpc_laddr") + rpcAddr := GetConfig().RPCListenAddress wsc := client.NewWSClient(rpcAddr, "/websocket") if _, err := wsc.Start(); err != nil { panic(err) @@ -109,10 +103,12 @@ func StartTendermint(app abci.Application) *nm.Node { func NewTendermint(app abci.Application) *nm.Node { // Create & start node config := GetConfig() - privValidatorFile := config.GetString("priv_validator_file") - privValidator := types.LoadOrGenPrivValidator(privValidatorFile) + logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout)) + logger = log.NewFilter(logger, log.AllowError()) + privValidatorFile := config.PrivValidatorFile() + privValidator := types.LoadOrGenPrivValidator(privValidatorFile, logger) papp := proxy.NewLocalClientCreator(app) - node := nm.NewNode(config, privValidator, papp) + node := nm.NewNode(config, privValidator, papp, logger) return node } @@ -133,15 +129,14 @@ func waitForEvent(t *testing.T, wsc *client.WSClient, eventid string, dieOnTimeo for { select { case r := <-wsc.ResultsCh: - result := new(ctypes.TMResult) - wire.ReadJSONPtr(result, r, &err) + result := new(ctypes.ResultEvent) + err = json.Unmarshal(r, result) if err != nil { - errCh <- err - break LOOP + // can't distinguish between error and wrong type ... 
+ continue } - event, ok := (*result).(*ctypes.ResultEvent) - if ok && event.Name == eventid { - goodCh <- event.Data + if result.Name == eventid { + goodCh <- result.Data break LOOP } case err := <-wsc.ErrorsCh: diff --git a/scripts/dist.sh b/scripts/dist.sh index 14f0fef12..0f368de97 100755 --- a/scripts/dist.sh +++ b/scripts/dist.sh @@ -19,12 +19,12 @@ DIR="$( cd -P "$( dirname "$SOURCE" )/.." && pwd )" # Change into that dir because we expect that. cd "$DIR" -# Generate the tag. -if [ -z "$NOTAG" ]; then - echo "==> Tagging..." - git commit --allow-empty -a -m "Release v$VERSION" - git tag -a -m "Version $VERSION" "v${VERSION}" master -fi +## Generate the tag. +#if [ -z "$NOTAG" ]; then +# echo "==> Tagging..." +# git commit --allow-empty -a -m "Release v$VERSION" +# git tag -a -m "Version $VERSION" "v${VERSION}" master +#fi # Do a hermetic build inside a Docker container. docker build -t tendermint/tendermint-builder scripts/tendermint-builder/ diff --git a/state/errors.go b/state/errors.go index 32a9351ce..50c5a2c04 100644 --- a/state/errors.go +++ b/state/errors.go @@ -1,7 +1,7 @@ package state import ( - . 
"github.com/tendermint/go-common" + cmn "github.com/tendermint/tmlibs/common" ) type ( @@ -36,20 +36,20 @@ type ( ) func (e ErrUnknownBlock) Error() string { - return Fmt("Could not find block #%d", e.Height) + return cmn.Fmt("Could not find block #%d", e.Height) } func (e ErrBlockHashMismatch) Error() string { - return Fmt("App block hash (%X) does not match core block hash (%X) for height %d", e.AppHash, e.CoreHash, e.Height) + return cmn.Fmt("App block hash (%X) does not match core block hash (%X) for height %d", e.AppHash, e.CoreHash, e.Height) } func (e ErrAppBlockHeightTooHigh) Error() string { - return Fmt("App block height (%d) is higher than core (%d)", e.AppHeight, e.CoreHeight) + return cmn.Fmt("App block height (%d) is higher than core (%d)", e.AppHeight, e.CoreHeight) } func (e ErrLastStateMismatch) Error() string { - return Fmt("Latest tendermint block (%d) LastAppHash (%X) does not match app's AppHash (%X)", e.Height, e.Core, e.App) + return cmn.Fmt("Latest tendermint block (%d) LastAppHash (%X) does not match app's AppHash (%X)", e.Height, e.Core, e.App) } func (e ErrStateMismatch) Error() string { - return Fmt("State after replay does not match saved state. Got ----\n%v\nExpected ----\n%v\n", e.Got, e.Expected) + return cmn.Fmt("State after replay does not match saved state. Got ----\n%v\nExpected ----\n%v\n", e.Got, e.Expected) } diff --git a/state/execution.go b/state/execution.go index 0b1aff699..2dfdb1526 100644 --- a/state/execution.go +++ b/state/execution.go @@ -6,11 +6,12 @@ import ( fail "github.com/ebuchman/fail-test" abci "github.com/tendermint/abci/types" - . 
"github.com/tendermint/go-common" crypto "github.com/tendermint/go-crypto" "github.com/tendermint/tendermint/proxy" "github.com/tendermint/tendermint/state/txindex" "github.com/tendermint/tendermint/types" + cmn "github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/log" ) //-------------------------------------------------- @@ -26,7 +27,7 @@ func (s *State) ValExecBlock(eventCache types.Fireable, proxyAppConn proxy.AppCo } // Execute the block txs - abciResponses, err := execBlockOnProxyApp(eventCache, proxyAppConn, block) + abciResponses, err := execBlockOnProxyApp(eventCache, proxyAppConn, block, s.logger) if err != nil { // There was some error in proxyApp // TODO Report error and wait for proxyApp to be available. @@ -39,7 +40,7 @@ func (s *State) ValExecBlock(eventCache types.Fireable, proxyAppConn proxy.AppCo // Executes block's transactions on proxyAppConn. // Returns a list of transaction results and updates to the validator set // TODO: Generate a bitmap or otherwise store tx validity in state. 
-func execBlockOnProxyApp(eventCache types.Fireable, proxyAppConn proxy.AppConnConsensus, block *types.Block) (*ABCIResponses, error) { +func execBlockOnProxyApp(eventCache types.Fireable, proxyAppConn proxy.AppConnConsensus, block *types.Block, logger log.Logger) (*ABCIResponses, error) { var validTxs, invalidTxs = 0, 0 txIndex := 0 @@ -58,7 +59,7 @@ func execBlockOnProxyApp(eventCache types.Fireable, proxyAppConn proxy.AppConnCo if txResult.Code == abci.CodeType_OK { validTxs++ } else { - log.Debug("Invalid tx", "code", txResult.Code, "log", txResult.Log) + logger.Debug("Invalid tx", "code", txResult.Code, "log", txResult.Log) invalidTxs++ txError = txResult.Code.String() } @@ -84,7 +85,7 @@ func execBlockOnProxyApp(eventCache types.Fireable, proxyAppConn proxy.AppConnCo // Begin block err := proxyAppConn.BeginBlockSync(block.Hash(), types.TM2PB.Header(block.Header)) if err != nil { - log.Warn("Error in proxyAppConn.BeginBlock", "error", err) + logger.Error("Error in proxyAppConn.BeginBlock", "error", err) return nil, err } @@ -99,15 +100,15 @@ func execBlockOnProxyApp(eventCache types.Fireable, proxyAppConn proxy.AppConnCo // End block abciResponses.EndBlock, err = proxyAppConn.EndBlockSync(uint64(block.Height)) if err != nil { - log.Warn("Error in proxyAppConn.EndBlock", "error", err) + logger.Error("Error in proxyAppConn.EndBlock", "error", err) return nil, err } valDiff := abciResponses.EndBlock.Diffs - log.Info("Executed block", "height", block.Height, "valid txs", validTxs, "invalid txs", invalidTxs) + logger.Info("Executed block", "height", block.Height, "valid txs", validTxs, "invalid txs", invalidTxs) if len(valDiff) > 0 { - log.Info("Update to validator set", "updates", abci.ValidatorsString(valDiff)) + logger.Info("Update to validator set", "updates", abci.ValidatorsString(valDiff)) } return abciResponses, nil @@ -126,7 +127,7 @@ func updateValidators(validators *types.ValidatorSet, changedValidators []*abci. 
power := int64(v.Power) // mind the overflow from uint64 if power < 0 { - return errors.New(Fmt("Power (%d) overflows int64", v.Power)) + return errors.New(cmn.Fmt("Power (%d) overflows int64", v.Power)) } _, val := validators.GetByAddress(address) @@ -134,20 +135,20 @@ func updateValidators(validators *types.ValidatorSet, changedValidators []*abci. // add val added := validators.Add(types.NewValidator(pubkey, power)) if !added { - return errors.New(Fmt("Failed to add new validator %X with voting power %d", address, power)) + return errors.New(cmn.Fmt("Failed to add new validator %X with voting power %d", address, power)) } } else if v.Power == 0 { // remove val _, removed := validators.Remove(address) if !removed { - return errors.New(Fmt("Failed to remove validator %X)")) + return errors.New(cmn.Fmt("Failed to remove validator %X", address)) } } else { // update val val.VotingPower = power updated := validators.Update(val) if !updated { - return errors.New(Fmt("Failed to update validator %X with voting power %d", address, power)) + return errors.New(cmn.Fmt("Failed to update validator %X with voting power %d", address, power)) } } } @@ -156,8 +157,8 @@ func updateValidators(validators *types.ValidatorSet, changedValidators []*abci. // return a bit array of validators that signed the last commit // NOTE: assumes commits have already been authenticated -func commitBitArrayFromBlock(block *types.Block) *BitArray { - signed := NewBitArray(len(block.LastCommit.Precommits)) +func commitBitArrayFromBlock(block *types.Block) *cmn.BitArray { + signed := cmn.NewBitArray(len(block.LastCommit.Precommits)) for i, precommit := range block.LastCommit.Precommits { if precommit != nil { signed.SetIndex(i, true) // val_.LastCommitHeight = block.Height - 1 @@ -187,7 +188,7 @@ func (s *State) validateBlock(block *types.Block) error { } } else { if len(block.LastCommit.Precommits) != s.LastValidators.Size() { - return errors.New(Fmt("Invalid block commit size. 
Expected %v, got %v", + return errors.New(cmn.Fmt("Invalid block commit size. Expected %v, got %v", s.LastValidators.Size(), len(block.LastCommit.Precommits))) } err := s.LastValidators.VerifyCommit( @@ -251,14 +252,14 @@ func (s *State) CommitStateUpdateMempool(proxyAppConn proxy.AppConnConsensus, bl // Commit block, get hash back res := proxyAppConn.CommitSync() if res.IsErr() { - log.Warn("Error in proxyAppConn.CommitSync", "error", res) + s.logger.Error("Error in proxyAppConn.CommitSync", "error", res) return res } if res.Log != "" { - log.Debug("Commit.Log: " + res.Log) + s.logger.Debug("Commit.Log: " + res.Log) } - log.Info("Committed state", "hash", res.Data) + s.logger.Info("Committed state", "hash", res.Data) // Set the state's new AppHash s.AppHash = res.Data @@ -286,21 +287,21 @@ func (s *State) indexTxs(abciResponses *ABCIResponses) { // Exec and commit a block on the proxyApp without validating or mutating the state // Returns the application root hash (result of abci.Commit) -func ExecCommitBlock(appConnConsensus proxy.AppConnConsensus, block *types.Block) ([]byte, error) { +func ExecCommitBlock(appConnConsensus proxy.AppConnConsensus, block *types.Block, logger log.Logger) ([]byte, error) { var eventCache types.Fireable // nil - _, err := execBlockOnProxyApp(eventCache, appConnConsensus, block) + _, err := execBlockOnProxyApp(eventCache, appConnConsensus, block, logger) if err != nil { - log.Warn("Error executing block on proxy app", "height", block.Height, "err", err) + logger.Error("Error executing block on proxy app", "height", block.Height, "err", err) return nil, err } // Commit block, get hash back res := appConnConsensus.CommitSync() if res.IsErr() { - log.Warn("Error in proxyAppConn.CommitSync", "error", res) + logger.Error("Error in proxyAppConn.CommitSync", "error", res) return nil, res } if res.Log != "" { - log.Info("Commit.Log: " + res.Log) + logger.Info("Commit.Log: " + res.Log) } return res.Data, nil } diff --git 
a/state/execution_test.go b/state/execution_test.go index 299c6baa2..3ebc05d37 100644 --- a/state/execution_test.go +++ b/state/execution_test.go @@ -7,12 +7,11 @@ import ( "github.com/stretchr/testify/require" "github.com/tendermint/abci/example/dummy" crypto "github.com/tendermint/go-crypto" - dbm "github.com/tendermint/go-db" - cfg "github.com/tendermint/tendermint/config/tendermint_test" - "github.com/tendermint/tendermint/mempool" "github.com/tendermint/tendermint/proxy" "github.com/tendermint/tendermint/state/txindex" "github.com/tendermint/tendermint/types" + dbm "github.com/tendermint/tmlibs/db" + "github.com/tendermint/tmlibs/log" ) var ( @@ -24,21 +23,20 @@ var ( func TestApplyBlock(t *testing.T) { cc := proxy.NewLocalClientCreator(dummy.NewDummyApplication()) - config := cfg.ResetConfig("execution_test_") - proxyApp := proxy.NewAppConns(config, cc, nil) + proxyApp := proxy.NewAppConns(cc, nil) _, err := proxyApp.Start() require.Nil(t, err) defer proxyApp.Stop() - mempool := mempool.NewMempool(config, proxyApp.Mempool()) state := state() + state.SetLogger(log.TestingLogger()) indexer := &dummyIndexer{0} state.TxIndexer = indexer // make block block := makeBlock(1, state) - err = state.ApplyBlock(nil, proxyApp.Consensus(), block, block.MakePartSet(testPartSize).Header(), mempool) + err = state.ApplyBlock(nil, proxyApp.Consensus(), block, block.MakePartSet(testPartSize).Header(), types.MockMempool{}) require.Nil(t, err) assert.Equal(t, nTxsPerBlock, indexer.Indexed) // test indexing works diff --git a/state/log.go b/state/log.go deleted file mode 100644 index 5b102b570..000000000 --- a/state/log.go +++ /dev/null @@ -1,7 +0,0 @@ -package state - -import ( - "github.com/tendermint/go-logger" -) - -var log = logger.New("module", "state") diff --git a/state/state.go b/state/state.go index 086b0e710..808fcbe24 100644 --- a/state/state.go +++ b/state/state.go @@ -7,10 +7,11 @@ import ( "time" abci "github.com/tendermint/abci/types" - . 
"github.com/tendermint/go-common" - cfg "github.com/tendermint/go-config" - dbm "github.com/tendermint/go-db" - "github.com/tendermint/go-wire" + cmn "github.com/tendermint/tmlibs/common" + dbm "github.com/tendermint/tmlibs/db" + "github.com/tendermint/tmlibs/log" + + wire "github.com/tendermint/go-wire" "github.com/tendermint/tendermint/state/txindex" "github.com/tendermint/tendermint/state/txindex/null" "github.com/tendermint/tendermint/types" @@ -48,6 +49,8 @@ type State struct { // Intermediate results from processing // Persisted separately from the state abciResponses *ABCIResponses + + logger log.Logger } func LoadState(db dbm.DB) *State { @@ -64,13 +67,17 @@ func loadState(db dbm.DB, key []byte) *State { wire.ReadBinaryPtr(&s, r, 0, n, err) if *err != nil { // DATA HAS BEEN CORRUPTED OR THE SPEC HAS CHANGED - Exit(Fmt("LoadState: Data has been corrupted or its spec has changed: %v\n", *err)) + cmn.Exit(cmn.Fmt("LoadState: Data has been corrupted or its spec has changed: %v\n", *err)) } // TODO: ensure that buf is completely read. } return s } +func (s *State) SetLogger(l log.Logger) { + s.logger = l +} + func (s *State) Copy() *State { return &State{ db: s.db, @@ -83,6 +90,7 @@ func (s *State) Copy() *State { LastValidators: s.LastValidators.Copy(), AppHash: s.AppHash, TxIndexer: s.TxIndexer, // pointer here, not value + logger: s.logger, } } @@ -108,7 +116,7 @@ func (s *State) LoadABCIResponses() *ABCIResponses { wire.ReadBinaryPtr(abciResponses, r, 0, n, err) if *err != nil { // DATA HAS BEEN CORRUPTED OR THE SPEC HAS CHANGED - Exit(Fmt("LoadABCIResponses: Data has been corrupted or its spec has changed: %v\n", *err)) + cmn.Exit(cmn.Fmt("LoadABCIResponses: Data has been corrupted or its spec has changed: %v\n", *err)) } // TODO: ensure that buf is completely read. 
} @@ -123,7 +131,7 @@ func (s *State) Bytes() []byte { buf, n, err := new(bytes.Buffer), new(int), new(error) wire.WriteBinary(s, buf, n, err) if *err != nil { - PanicCrisis(*err) + cmn.PanicCrisis(*err) } return buf.Bytes() } @@ -140,9 +148,10 @@ func (s *State) SetBlockAndValidators(header *types.Header, blockPartsHeader typ // update the validator set with the latest abciResponses err := updateValidators(nextValSet, abciResponses.EndBlock.Diffs) if err != nil { - log.Warn("Error changing validator set", "error", err) + s.logger.Error("Error changing validator set", "error", err) // TODO: err or carry on? } + // Update validator accums and set state variables nextValSet.IncrementAccum(1) @@ -168,10 +177,10 @@ func (s *State) GetValidators() (*types.ValidatorSet, *types.ValidatorSet) { // Load the most recent state from "state" db, // or create a new one (and save) from genesis. -func GetState(config cfg.Config, stateDB dbm.DB) *State { +func GetState(stateDB dbm.DB, genesisFile string) *State { state := LoadState(stateDB) if state == nil { - state = MakeGenesisStateFromFile(stateDB, config.GetString("genesis_file")) + state = MakeGenesisStateFromFile(stateDB, genesisFile) state.Save() } @@ -203,7 +212,7 @@ func (a *ABCIResponses) Bytes() []byte { buf, n, err := new(bytes.Buffer), new(int), new(error) wire.WriteBinary(*a, buf, n, err) if *err != nil { - PanicCrisis(*err) + cmn.PanicCrisis(*err) } return buf.Bytes() } @@ -217,11 +226,11 @@ func (a *ABCIResponses) Bytes() []byte { func MakeGenesisStateFromFile(db dbm.DB, genDocFile string) *State { genDocJSON, err := ioutil.ReadFile(genDocFile) if err != nil { - Exit(Fmt("Couldn't read GenesisDoc file: %v", err)) + cmn.Exit(cmn.Fmt("Couldn't read GenesisDoc file: %v", err)) } genDoc, err := types.GenesisDocFromJSON(genDocJSON) if err != nil { - Exit(Fmt("Error reading GenesisDoc: %v", err)) + cmn.Exit(cmn.Fmt("Error reading GenesisDoc: %v", err)) } return MakeGenesisState(db, genDoc) } @@ -231,7 +240,7 @@ func 
MakeGenesisStateFromFile(db dbm.DB, genDocFile string) *State { // Used in tests. func MakeGenesisState(db dbm.DB, genDoc *types.GenesisDoc) *State { if len(genDoc.Validators) == 0 { - Exit(Fmt("The genesis file has no validators")) + cmn.Exit(cmn.Fmt("The genesis file has no validators")) } if genDoc.GenesisTime.IsZero() { diff --git a/state/state_test.go b/state/state_test.go index dca83e801..e97c3289a 100644 --- a/state/state_test.go +++ b/state/state_test.go @@ -6,16 +6,19 @@ import ( "github.com/stretchr/testify/assert" abci "github.com/tendermint/abci/types" - "github.com/tendermint/go-crypto" - dbm "github.com/tendermint/go-db" - "github.com/tendermint/tendermint/config/tendermint_test" + crypto "github.com/tendermint/go-crypto" + cfg "github.com/tendermint/tendermint/config" + dbm "github.com/tendermint/tmlibs/db" + "github.com/tendermint/tmlibs/log" ) func TestStateCopyEquals(t *testing.T) { - config := tendermint_test.ResetConfig("state_") + config := cfg.ResetTestRoot("state_") + // Get State db - stateDB := dbm.NewDB("state", config.GetString("db_backend"), config.GetString("db_dir")) - state := GetState(config, stateDB) + stateDB := dbm.NewDB("state", config.DBBackend, config.DBDir()) + state := GetState(stateDB, config.GenesisFile()) + state.SetLogger(log.TestingLogger()) stateCopy := state.Copy() @@ -31,10 +34,11 @@ func TestStateCopyEquals(t *testing.T) { } func TestStateSaveLoad(t *testing.T) { - config := tendermint_test.ResetConfig("state_") + config := cfg.ResetTestRoot("state_") // Get State db - stateDB := dbm.NewDB("state", config.GetString("db_backend"), config.GetString("db_dir")) - state := GetState(config, stateDB) + stateDB := dbm.NewDB("state", config.DBBackend, config.DBDir()) + state := GetState(stateDB, config.GenesisFile()) + state.SetLogger(log.TestingLogger()) state.LastBlockHeight += 1 state.Save() @@ -48,9 +52,10 @@ func TestStateSaveLoad(t *testing.T) { func TestABCIResponsesSaveLoad(t *testing.T) { assert := assert.New(t) - 
config := tendermint_test.ResetConfig("state_") - stateDB := dbm.NewDB("state", config.GetString("db_backend"), config.GetString("db_dir")) - state := GetState(config, stateDB) + config := cfg.ResetTestRoot("state_") + stateDB := dbm.NewDB("state", config.DBBackend, config.DBDir()) + state := GetState(stateDB, config.GenesisFile()) + state.SetLogger(log.TestingLogger()) state.LastBlockHeight += 1 diff --git a/state/txindex/kv/kv.go b/state/txindex/kv/kv.go index 03acc8dae..8f684c4a9 100644 --- a/state/txindex/kv/kv.go +++ b/state/txindex/kv/kv.go @@ -4,7 +4,7 @@ import ( "bytes" "fmt" - db "github.com/tendermint/go-db" + db "github.com/tendermint/tmlibs/db" "github.com/tendermint/go-wire" "github.com/tendermint/tendermint/state/txindex" "github.com/tendermint/tendermint/types" diff --git a/state/txindex/kv/kv_test.go b/state/txindex/kv/kv_test.go index 9a1898d7e..8de9b8cda 100644 --- a/state/txindex/kv/kv_test.go +++ b/state/txindex/kv/kv_test.go @@ -8,7 +8,7 @@ import ( "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" abci "github.com/tendermint/abci/types" - db "github.com/tendermint/go-db" + db "github.com/tendermint/tmlibs/db" "github.com/tendermint/tendermint/state/txindex" "github.com/tendermint/tendermint/types" ) diff --git a/test/app/counter_test.sh b/test/app/counter_test.sh index 439926a5d..cc5c38b25 100644 --- a/test/app/counter_test.sh +++ b/test/app/counter_test.sh @@ -33,7 +33,7 @@ function sendTx() { ERROR=`echo $RESPONSE | jq .error` ERROR=$(echo "$ERROR" | tr -d '"') # remove surrounding quotes - RESPONSE=`echo $RESPONSE | jq .result[1]` + RESPONSE=`echo $RESPONSE | jq .result` else if [ -f grpc_client ]; then rm grpc_client diff --git a/test/app/dummy_test.sh b/test/app/dummy_test.sh index 0449bc491..1a117c634 100644 --- a/test/app/dummy_test.sh +++ b/test/app/dummy_test.sh @@ -57,7 +57,7 @@ echo "... 
testing query with /abci_query 2" # we should be able to look up the key RESPONSE=`curl -s "127.0.0.1:46657/abci_query?path=\"\"&data=$(toHex $KEY)&prove=false"` -RESPONSE=`echo $RESPONSE | jq .result[1].response.log` +RESPONSE=`echo $RESPONSE | jq .result.response.log` set +e A=`echo $RESPONSE | grep 'exists'` @@ -70,7 +70,7 @@ set -e # we should not be able to look up the value RESPONSE=`curl -s "127.0.0.1:46657/abci_query?path=\"\"&data=$(toHex $VALUE)&prove=false"` -RESPONSE=`echo $RESPONSE | jq .result[1].response.log` +RESPONSE=`echo $RESPONSE | jq .result.response.log` set +e A=`echo $RESPONSE | grep 'exists'` if [[ $? == 0 ]]; then diff --git a/test/p2p/README.md b/test/p2p/README.md index 5836fc61e..e2a577cfa 100644 --- a/test/p2p/README.md +++ b/test/p2p/README.md @@ -38,7 +38,7 @@ for i in $(seq 1 4); do --name local_testnet_$i \ --entrypoint tendermint \ -e TMHOME=/go/src/github.com/tendermint/tendermint/test/p2p/data/mach$i/core \ - tendermint_tester node --seeds 172.57.0.101:46656,172.57.0.102:46656,172.57.0.103:46656,172.57.0.104:46656 --proxy_app=dummy + tendermint_tester node --p2p.seeds 172.57.0.101:46656,172.57.0.102:46656,172.57.0.103:46656,172.57.0.104:46656 --proxy_app=dummy done ``` diff --git a/test/p2p/atomic_broadcast/test.sh b/test/p2p/atomic_broadcast/test.sh index 8e0633c8a..00b339631 100644 --- a/test/p2p/atomic_broadcast/test.sh +++ b/test/p2p/atomic_broadcast/test.sh @@ -17,7 +17,7 @@ for i in `seq 1 $N`; do addr=$(test/p2p/ip.sh $i):46657 # current state - HASH1=`curl -s $addr/status | jq .result[1].latest_app_hash` + HASH1=`curl -s $addr/status | jq .result.latest_app_hash` # - send a tx TX=aadeadbeefbeefbeef0$i @@ -26,15 +26,15 @@ for i in `seq 1 $N`; do echo "" # we need to wait another block to get the new app_hash - h1=`curl -s $addr/status | jq .result[1].latest_block_height` + h1=`curl -s $addr/status | jq .result.latest_block_height` h2=$h1 while [ "$h2" == "$h1" ]; do sleep 1 - h2=`curl -s $addr/status | jq 
.result[1].latest_block_height` + h2=`curl -s $addr/status | jq .result.latest_block_height` done # check that hash was updated - HASH2=`curl -s $addr/status | jq .result[1].latest_app_hash` + HASH2=`curl -s $addr/status | jq .result.latest_app_hash` if [[ "$HASH1" == "$HASH2" ]]; then echo "Expected state hash to update from $HASH1. Got $HASH2" exit 1 @@ -44,7 +44,7 @@ for i in `seq 1 $N`; do for j in `seq 1 $N`; do if [[ "$i" != "$j" ]]; then addrJ=$(test/p2p/ip.sh $j):46657 - HASH3=`curl -s $addrJ/status | jq .result[1].latest_app_hash` + HASH3=`curl -s $addrJ/status | jq .result.latest_app_hash` if [[ "$HASH2" != "$HASH3" ]]; then echo "App hash for node $j doesn't match. Got $HASH3, expected $HASH2" diff --git a/test/p2p/basic/test.sh b/test/p2p/basic/test.sh index 3399515a8..93444792b 100644 --- a/test/p2p/basic/test.sh +++ b/test/p2p/basic/test.sh @@ -31,19 +31,19 @@ for i in `seq 1 $N`; do N_1=$(($N - 1)) # - assert everyone has N-1 other peers - N_PEERS=`curl -s $addr/net_info | jq '.result[1].peers | length'` + N_PEERS=`curl -s $addr/net_info | jq '.result.peers | length'` while [ "$N_PEERS" != $N_1 ]; do echo "Waiting for node $i to connect to all peers ..." sleep 1 - N_PEERS=`curl -s $addr/net_info | jq '.result[1].peers | length'` + N_PEERS=`curl -s $addr/net_info | jq '.result.peers | length'` done # - assert block height is greater than 1 - BLOCK_HEIGHT=`curl -s $addr/status | jq .result[1].latest_block_height` + BLOCK_HEIGHT=`curl -s $addr/status | jq .result.latest_block_height` while [ "$BLOCK_HEIGHT" -le 1 ]; do echo "Waiting for node $i to commit a block ..." 
sleep 1 - BLOCK_HEIGHT=`curl -s $addr/status | jq .result[1].latest_block_height` + BLOCK_HEIGHT=`curl -s $addr/status | jq .result.latest_block_height` done echo "Node $i is connected to all peers and at block $BLOCK_HEIGHT" done diff --git a/test/p2p/data/chain_config.json b/test/p2p/data/chain_config.json index 54fcea524..4221ba8da 100644 --- a/test/p2p/data/chain_config.json +++ b/test/p2p/data/chain_config.json @@ -5,10 +5,10 @@ { "validator": { "id": "mach1", - "pub_key": [ - 1, - "BE8933DFF1600C026E34718F1785A4CDEAB90C35698B394E38B6947AE91DE116" - ] + "pub_key": { + "type": "ed25519", + "data": "BE8933DFF1600C026E34718F1785A4CDEAB90C35698B394E38B6947AE91DE116" + } }, "p2p_addr": "", "rpc_addr": "" @@ -16,10 +16,10 @@ { "validator": { "id": "mach2", - "pub_key": [ - 1, - "6DC534465323126587D2A2A93B59D689B717073B1DE968A25A6EF13D595318AD" - ] + "pub_key": { + "type": "ed25519", + "data": "6DC534465323126587D2A2A93B59D689B717073B1DE968A25A6EF13D595318AD" + } }, "p2p_addr": "", "rpc_addr": "", @@ -28,10 +28,10 @@ { "validator": { "id": "mach3", - "pub_key": [ - 1, - "AE67AC697D135AA0B4601EA57EAAB3FEBF4BAA4F229C45A598C2985B12FCD1A1" - ] + "pub_key": { + "type": "ed25519", + "data": "AE67AC697D135AA0B4601EA57EAAB3FEBF4BAA4F229C45A598C2985B12FCD1A1" + } }, "p2p_addr": "", "rpc_addr": "", @@ -40,14 +40,14 @@ { "validator": { "id": "mach4", - "pub_key": [ - 1, - "9EBC8F58CED4B46DCD5AB8ABA591DD253CD7CB5037273FDA32BC0B6461C4EFD9" - ] + "pub_key": { + "type": "ed25519", + "data": "9EBC8F58CED4B46DCD5AB8ABA591DD253CD7CB5037273FDA32BC0B6461C4EFD9" + } }, "p2p_addr": "", "rpc_addr": "", "index": 3 } ] -} \ No newline at end of file +} diff --git a/test/p2p/data/core/init.sh b/test/p2p/data/core/init.sh index 470ed37e3..4a1575aab 100755 --- a/test/p2p/data/core/init.sh +++ b/test/p2p/data/core/init.sh @@ -17,4 +17,4 @@ git fetch origin $BRANCH git checkout $BRANCH make install -tendermint node --seeds="$TMSEEDS" --moniker="$TMNAME" --proxy_app="$PROXYAPP" \ No newline at end 
of file +tendermint node --p2p.seeds="$TMSEEDS" --moniker="$TMNAME" --proxy_app="$PROXYAPP" diff --git a/test/p2p/data/mach1/core/genesis.json b/test/p2p/data/mach1/core/genesis.json index 3f6fbe5a2..522f9831a 100644 --- a/test/p2p/data/mach1/core/genesis.json +++ b/test/p2p/data/mach1/core/genesis.json @@ -6,34 +6,34 @@ { "amount": 1, "name": "mach1", - "pub_key": [ - 1, - "BE8933DFF1600C026E34718F1785A4CDEAB90C35698B394E38B6947AE91DE116" - ] + "pub_key": { + "type": "ed25519", + "data": "BE8933DFF1600C026E34718F1785A4CDEAB90C35698B394E38B6947AE91DE116" + } }, { "amount": 1, "name": "mach2", - "pub_key": [ - 1, - "6DC534465323126587D2A2A93B59D689B717073B1DE968A25A6EF13D595318AD" - ] + "pub_key": { + "type": "ed25519", + "data": "6DC534465323126587D2A2A93B59D689B717073B1DE968A25A6EF13D595318AD" + } }, { "amount": 1, "name": "mach3", - "pub_key": [ - 1, - "AE67AC697D135AA0B4601EA57EAAB3FEBF4BAA4F229C45A598C2985B12FCD1A1" - ] + "pub_key": { + "type": "ed25519", + "data": "AE67AC697D135AA0B4601EA57EAAB3FEBF4BAA4F229C45A598C2985B12FCD1A1" + } }, { "amount": 1, "name": "mach4", - "pub_key": [ - 1, - "9EBC8F58CED4B46DCD5AB8ABA591DD253CD7CB5037273FDA32BC0B6461C4EFD9" - ] + "pub_key": { + "type": "ed25519", + "data": "9EBC8F58CED4B46DCD5AB8ABA591DD253CD7CB5037273FDA32BC0B6461C4EFD9" + } } ] -} \ No newline at end of file +} diff --git a/test/p2p/data/mach1/core/priv_validator.json b/test/p2p/data/mach1/core/priv_validator.json index 242c7b9fc..6538281b4 100644 --- a/test/p2p/data/mach1/core/priv_validator.json +++ b/test/p2p/data/mach1/core/priv_validator.json @@ -3,12 +3,12 @@ "last_height": 0, "last_round": 0, "last_step": 0, - "priv_key": [ - 1, - "547AA07C7A8CE16C5CB2A40C6C26D15B0A32960410A9F1EA6E50B636F1AB389ABE8933DFF1600C026E34718F1785A4CDEAB90C35698B394E38B6947AE91DE116" - ], - "pub_key": [ - 1, - "BE8933DFF1600C026E34718F1785A4CDEAB90C35698B394E38B6947AE91DE116" - ] -} \ No newline at end of file + "priv_key": { + "type": "ed25519", + "data": 
"547AA07C7A8CE16C5CB2A40C6C26D15B0A32960410A9F1EA6E50B636F1AB389ABE8933DFF1600C026E34718F1785A4CDEAB90C35698B394E38B6947AE91DE116" + }, + "pub_key": { + "type": "ed25519", + "data": "BE8933DFF1600C026E34718F1785A4CDEAB90C35698B394E38B6947AE91DE116" + } +} diff --git a/test/p2p/data/mach2/core/genesis.json b/test/p2p/data/mach2/core/genesis.json index 3f6fbe5a2..522f9831a 100644 --- a/test/p2p/data/mach2/core/genesis.json +++ b/test/p2p/data/mach2/core/genesis.json @@ -6,34 +6,34 @@ { "amount": 1, "name": "mach1", - "pub_key": [ - 1, - "BE8933DFF1600C026E34718F1785A4CDEAB90C35698B394E38B6947AE91DE116" - ] + "pub_key": { + "type": "ed25519", + "data": "BE8933DFF1600C026E34718F1785A4CDEAB90C35698B394E38B6947AE91DE116" + } }, { "amount": 1, "name": "mach2", - "pub_key": [ - 1, - "6DC534465323126587D2A2A93B59D689B717073B1DE968A25A6EF13D595318AD" - ] + "pub_key": { + "type": "ed25519", + "data": "6DC534465323126587D2A2A93B59D689B717073B1DE968A25A6EF13D595318AD" + } }, { "amount": 1, "name": "mach3", - "pub_key": [ - 1, - "AE67AC697D135AA0B4601EA57EAAB3FEBF4BAA4F229C45A598C2985B12FCD1A1" - ] + "pub_key": { + "type": "ed25519", + "data": "AE67AC697D135AA0B4601EA57EAAB3FEBF4BAA4F229C45A598C2985B12FCD1A1" + } }, { "amount": 1, "name": "mach4", - "pub_key": [ - 1, - "9EBC8F58CED4B46DCD5AB8ABA591DD253CD7CB5037273FDA32BC0B6461C4EFD9" - ] + "pub_key": { + "type": "ed25519", + "data": "9EBC8F58CED4B46DCD5AB8ABA591DD253CD7CB5037273FDA32BC0B6461C4EFD9" + } } ] -} \ No newline at end of file +} diff --git a/test/p2p/data/mach2/core/priv_validator.json b/test/p2p/data/mach2/core/priv_validator.json index ead45d5e6..3602454f5 100644 --- a/test/p2p/data/mach2/core/priv_validator.json +++ b/test/p2p/data/mach2/core/priv_validator.json @@ -3,12 +3,12 @@ "last_height": 0, "last_round": 0, "last_step": 0, - "priv_key": [ - 1, - "D047889E60502FC3129D0AB7F334B1838ED9ED1ECD99CBB96B71AD5ABF5A81436DC534465323126587D2A2A93B59D689B717073B1DE968A25A6EF13D595318AD" - ], - "pub_key": [ - 1, - 
"6DC534465323126587D2A2A93B59D689B717073B1DE968A25A6EF13D595318AD" - ] -} \ No newline at end of file + "priv_key": { + "type": "ed25519", + "data": "D047889E60502FC3129D0AB7F334B1838ED9ED1ECD99CBB96B71AD5ABF5A81436DC534465323126587D2A2A93B59D689B717073B1DE968A25A6EF13D595318AD" + }, + "pub_key": { + "type": "ed25519", + "data": "6DC534465323126587D2A2A93B59D689B717073B1DE968A25A6EF13D595318AD" + } +} diff --git a/test/p2p/data/mach3/core/genesis.json b/test/p2p/data/mach3/core/genesis.json index 3f6fbe5a2..522f9831a 100644 --- a/test/p2p/data/mach3/core/genesis.json +++ b/test/p2p/data/mach3/core/genesis.json @@ -6,34 +6,34 @@ { "amount": 1, "name": "mach1", - "pub_key": [ - 1, - "BE8933DFF1600C026E34718F1785A4CDEAB90C35698B394E38B6947AE91DE116" - ] + "pub_key": { + "type": "ed25519", + "data": "BE8933DFF1600C026E34718F1785A4CDEAB90C35698B394E38B6947AE91DE116" + } }, { "amount": 1, "name": "mach2", - "pub_key": [ - 1, - "6DC534465323126587D2A2A93B59D689B717073B1DE968A25A6EF13D595318AD" - ] + "pub_key": { + "type": "ed25519", + "data": "6DC534465323126587D2A2A93B59D689B717073B1DE968A25A6EF13D595318AD" + } }, { "amount": 1, "name": "mach3", - "pub_key": [ - 1, - "AE67AC697D135AA0B4601EA57EAAB3FEBF4BAA4F229C45A598C2985B12FCD1A1" - ] + "pub_key": { + "type": "ed25519", + "data": "AE67AC697D135AA0B4601EA57EAAB3FEBF4BAA4F229C45A598C2985B12FCD1A1" + } }, { "amount": 1, "name": "mach4", - "pub_key": [ - 1, - "9EBC8F58CED4B46DCD5AB8ABA591DD253CD7CB5037273FDA32BC0B6461C4EFD9" - ] + "pub_key": { + "type": "ed25519", + "data": "9EBC8F58CED4B46DCD5AB8ABA591DD253CD7CB5037273FDA32BC0B6461C4EFD9" + } } ] -} \ No newline at end of file +} diff --git a/test/p2p/data/mach3/core/priv_validator.json b/test/p2p/data/mach3/core/priv_validator.json index dd366d205..743a931f5 100644 --- a/test/p2p/data/mach3/core/priv_validator.json +++ b/test/p2p/data/mach3/core/priv_validator.json @@ -3,12 +3,12 @@ "last_height": 0, "last_round": 0, "last_step": 0, - "priv_key": [ - 1, - 
"C1A4E47F349FC5F556F4A9A27BA776B94424C312BAA6CF6EE44B867348D7C3F2AE67AC697D135AA0B4601EA57EAAB3FEBF4BAA4F229C45A598C2985B12FCD1A1" - ], - "pub_key": [ - 1, - "AE67AC697D135AA0B4601EA57EAAB3FEBF4BAA4F229C45A598C2985B12FCD1A1" - ] -} \ No newline at end of file + "priv_key": { + "type": "ed25519", + "data": "C1A4E47F349FC5F556F4A9A27BA776B94424C312BAA6CF6EE44B867348D7C3F2AE67AC697D135AA0B4601EA57EAAB3FEBF4BAA4F229C45A598C2985B12FCD1A1" + }, + "pub_key": { + "type": "ed25519", + "data": "AE67AC697D135AA0B4601EA57EAAB3FEBF4BAA4F229C45A598C2985B12FCD1A1" + } +} diff --git a/test/p2p/data/mach4/core/genesis.json b/test/p2p/data/mach4/core/genesis.json index 3f6fbe5a2..522f9831a 100644 --- a/test/p2p/data/mach4/core/genesis.json +++ b/test/p2p/data/mach4/core/genesis.json @@ -6,34 +6,34 @@ { "amount": 1, "name": "mach1", - "pub_key": [ - 1, - "BE8933DFF1600C026E34718F1785A4CDEAB90C35698B394E38B6947AE91DE116" - ] + "pub_key": { + "type": "ed25519", + "data": "BE8933DFF1600C026E34718F1785A4CDEAB90C35698B394E38B6947AE91DE116" + } }, { "amount": 1, "name": "mach2", - "pub_key": [ - 1, - "6DC534465323126587D2A2A93B59D689B717073B1DE968A25A6EF13D595318AD" - ] + "pub_key": { + "type": "ed25519", + "data": "6DC534465323126587D2A2A93B59D689B717073B1DE968A25A6EF13D595318AD" + } }, { "amount": 1, "name": "mach3", - "pub_key": [ - 1, - "AE67AC697D135AA0B4601EA57EAAB3FEBF4BAA4F229C45A598C2985B12FCD1A1" - ] + "pub_key": { + "type": "ed25519", + "data": "AE67AC697D135AA0B4601EA57EAAB3FEBF4BAA4F229C45A598C2985B12FCD1A1" + } }, { "amount": 1, "name": "mach4", - "pub_key": [ - 1, - "9EBC8F58CED4B46DCD5AB8ABA591DD253CD7CB5037273FDA32BC0B6461C4EFD9" - ] + "pub_key": { + "type": "ed25519", + "data": "9EBC8F58CED4B46DCD5AB8ABA591DD253CD7CB5037273FDA32BC0B6461C4EFD9" + } } ] -} \ No newline at end of file +} diff --git a/test/p2p/data/mach4/core/priv_validator.json b/test/p2p/data/mach4/core/priv_validator.json index 4a73707e8..1c10eb8a6 100644 --- a/test/p2p/data/mach4/core/priv_validator.json 
+++ b/test/p2p/data/mach4/core/priv_validator.json @@ -3,12 +3,12 @@ "last_height": 0, "last_round": 0, "last_step": 0, - "priv_key": [ - 1, - "C4CC3ED28F020C2DBDA98BCDBF08C3CED370470E74F25E938D5D295E8E3D2B0C9EBC8F58CED4B46DCD5AB8ABA591DD253CD7CB5037273FDA32BC0B6461C4EFD9" - ], - "pub_key": [ - 1, - "9EBC8F58CED4B46DCD5AB8ABA591DD253CD7CB5037273FDA32BC0B6461C4EFD9" - ] -} \ No newline at end of file + "priv_key": { + "type": "ed25519", + "data": "C4CC3ED28F020C2DBDA98BCDBF08C3CED370470E74F25E938D5D295E8E3D2B0C9EBC8F58CED4B46DCD5AB8ABA591DD253CD7CB5037273FDA32BC0B6461C4EFD9" + }, + "pub_key": { + "type": "ed25519", + "data": "9EBC8F58CED4B46DCD5AB8ABA591DD253CD7CB5037273FDA32BC0B6461C4EFD9" + } +} diff --git a/test/p2p/fast_sync/check_peer.sh b/test/p2p/fast_sync/check_peer.sh index c459277d2..b10c3efc5 100644 --- a/test/p2p/fast_sync/check_peer.sh +++ b/test/p2p/fast_sync/check_peer.sh @@ -15,10 +15,10 @@ peerID=$(( $(($ID % 4)) + 1 )) # 1->2 ... 3->4 ... 4->1 peer_addr=$(test/p2p/ip.sh $peerID):46657 # get another peer's height -h1=`curl -s $peer_addr/status | jq .result[1].latest_block_height` +h1=`curl -s $peer_addr/status | jq .result.latest_block_height` # get another peer's state -root1=`curl -s $peer_addr/status | jq .result[1].latest_app_hash` +root1=`curl -s $peer_addr/status | jq .result.latest_app_hash` echo "Other peer is on height $h1 with state $root1" echo "Waiting for peer $ID to catch up" @@ -29,12 +29,12 @@ set +o pipefail h2="0" while [[ "$h2" -lt "$(($h1+3))" ]]; do sleep 1 - h2=`curl -s $addr/status | jq .result[1].latest_block_height` + h2=`curl -s $addr/status | jq .result.latest_block_height` echo "... $h2" done # check the app hash -root2=`curl -s $addr/status | jq .result[1].latest_app_hash` +root2=`curl -s $addr/status | jq .result.latest_app_hash` if [[ "$root1" != "$root2" ]]; then echo "App hash after fast sync does not match. 
Got $root2; expected $root1" diff --git a/test/p2p/fast_sync/test_peer.sh b/test/p2p/fast_sync/test_peer.sh index 615ae9e06..8cfab1c13 100644 --- a/test/p2p/fast_sync/test_peer.sh +++ b/test/p2p/fast_sync/test_peer.sh @@ -27,7 +27,7 @@ SEEDS="$(test/p2p/ip.sh 1):46656" for j in `seq 2 $N`; do SEEDS="$SEEDS,$(test/p2p/ip.sh $j):46656" done -bash test/p2p/peer.sh $DOCKER_IMAGE $NETWORK_NAME $ID $PROXY_APP "--seeds $SEEDS --pex" +bash test/p2p/peer.sh $DOCKER_IMAGE $NETWORK_NAME $ID $PROXY_APP "--p2p.seeds $SEEDS --p2p.pex" # wait for peer to sync and check the app hash bash test/p2p/client.sh $DOCKER_IMAGE $NETWORK_NAME fs_$ID "test/p2p/fast_sync/check_peer.sh $ID" diff --git a/test/p2p/kill_all/check_peers.sh b/test/p2p/kill_all/check_peers.sh index d085a025c..52dcde91c 100644 --- a/test/p2p/kill_all/check_peers.sh +++ b/test/p2p/kill_all/check_peers.sh @@ -23,7 +23,7 @@ set -e # get the first peer's height addr=$(test/p2p/ip.sh 1):46657 -h1=$(curl -s "$addr/status" | jq .result[1].latest_block_height) +h1=$(curl -s "$addr/status" | jq .result.latest_block_height) echo "1st peer is on height $h1" echo "Waiting until other peers reporting a height higher than the 1st one" @@ -33,14 +33,14 @@ for i in $(seq 2 "$NUM_OF_PEERS"); do while [[ $hi -le $h1 ]] ; do addr=$(test/p2p/ip.sh "$i"):46657 - hi=$(curl -s "$addr/status" | jq .result[1].latest_block_height) + hi=$(curl -s "$addr/status" | jq .result.latest_block_height) echo "... 
peer $i is on height $hi" ((attempt++)) if [ "$attempt" -ge $MAX_ATTEMPTS_TO_CATCH_UP ] ; then echo "$attempt unsuccessful attempts were made to catch up" - curl -s "$addr/dump_consensus_state" | jq .result[1] + curl -s "$addr/dump_consensus_state" | jq .result exit 1 fi diff --git a/test/p2p/local_testnet_start.sh b/test/p2p/local_testnet_start.sh index b7cfb959e..301098fc2 100644 --- a/test/p2p/local_testnet_start.sh +++ b/test/p2p/local_testnet_start.sh @@ -10,7 +10,7 @@ set +u SEEDS=$5 if [[ "$SEEDS" != "" ]]; then echo "Seeds: $SEEDS" - SEEDS="--seeds $SEEDS" + SEEDS="--p2p.seeds $SEEDS" fi set -u @@ -20,5 +20,5 @@ cd "$GOPATH/src/github.com/tendermint/tendermint" docker network create --driver bridge --subnet 172.57.0.0/16 "$NETWORK_NAME" for i in $(seq 1 "$N"); do - bash test/p2p/peer.sh "$DOCKER_IMAGE" "$NETWORK_NAME" "$i" "$APP_PROXY" "$SEEDS --pex" + bash test/p2p/peer.sh "$DOCKER_IMAGE" "$NETWORK_NAME" "$i" "$APP_PROXY" "$SEEDS --p2p.pex" done diff --git a/test/p2p/pex/check_peer.sh b/test/p2p/pex/check_peer.sh index ceabd2ac6..e851c8923 100644 --- a/test/p2p/pex/check_peer.sh +++ b/test/p2p/pex/check_peer.sh @@ -10,7 +10,7 @@ echo "2. wait until peer $ID connects to other nodes using pex reactor" peers_count="0" while [[ "$peers_count" -lt "$((N-1))" ]]; do sleep 1 - peers_count=$(curl -s "$addr/net_info" | jq ".result[1].peers | length") + peers_count=$(curl -s "$addr/net_info" | jq ".result.peers | length") echo "... 
peers count = $peers_count, expected = $((N-1))" done diff --git a/test/p2p/pex/test_addrbook.sh b/test/p2p/pex/test_addrbook.sh index d63096c8d..f724d3dd4 100644 --- a/test/p2p/pex/test_addrbook.sh +++ b/test/p2p/pex/test_addrbook.sh @@ -23,7 +23,7 @@ docker rm -vf "local_testnet_$ID" set -e # NOTE that we do not provide seeds -bash test/p2p/peer.sh "$DOCKER_IMAGE" "$NETWORK_NAME" "$ID" "$PROXY_APP" "--pex" +bash test/p2p/peer.sh "$DOCKER_IMAGE" "$NETWORK_NAME" "$ID" "$PROXY_APP" "--p2p.pex" docker cp "/tmp/addrbook.json" "local_testnet_$ID:/go/src/github.com/tendermint/tendermint/test/p2p/data/mach1/core/addrbook.json" echo "with the following addrbook:" cat /tmp/addrbook.json @@ -47,7 +47,7 @@ docker rm -vf "local_testnet_$ID" set -e # NOTE that we do not provide seeds -bash test/p2p/peer.sh "$DOCKER_IMAGE" "$NETWORK_NAME" "$ID" "$PROXY_APP" "--pex" +bash test/p2p/peer.sh "$DOCKER_IMAGE" "$NETWORK_NAME" "$ID" "$PROXY_APP" "--p2p.pex" # if the client runs forever, it means other peers have removed us from their books (which should not happen) bash test/p2p/client.sh "$DOCKER_IMAGE" "$NETWORK_NAME" "$CLIENT_NAME" "test/p2p/pex/check_peer.sh $ID $N" diff --git a/test/persist/test_failure_indices.sh b/test/persist/test_failure_indices.sh index bca8b8ae9..9e3c8f19e 100644 --- a/test/persist/test_failure_indices.sh +++ b/test/persist/test_failure_indices.sh @@ -107,11 +107,11 @@ for failIndex in $(seq $failsStart $failsEnd); do done # wait for a new block - h1=$(curl -s --unix-socket "$RPC_ADDR" http://localhost/status | jq .result[1].latest_block_height) + h1=$(curl -s --unix-socket "$RPC_ADDR" http://localhost/status | jq .result.latest_block_height) h2=$h1 while [ "$h2" == "$h1" ]; do sleep 1 - h2=$(curl -s --unix-socket "$RPC_ADDR" http://localhost/status | jq .result[1].latest_block_height) + h2=$(curl -s --unix-socket "$RPC_ADDR" http://localhost/status | jq .result.latest_block_height) done kill_procs diff --git a/test/persist/test_simple.sh 
b/test/persist/test_simple.sh index 59bc38458..273c714ca 100644 --- a/test/persist/test_simple.sh +++ b/test/persist/test_simple.sh @@ -57,11 +57,11 @@ while [ "$ERR" != 0 ]; do done # wait for a new block -h1=`curl -s $addr/status | jq .result[1].latest_block_height` +h1=`curl -s $addr/status | jq .result.latest_block_height` h2=$h1 while [ "$h2" == "$h1" ]; do sleep 1 - h2=`curl -s $addr/status | jq .result[1].latest_block_height` + h2=`curl -s $addr/status | jq .result.latest_block_height` done kill_procs diff --git a/test/test_libs.sh b/test/test_libs.sh index d08a4659c..0a8485340 100644 --- a/test/test_libs.sh +++ b/test/test_libs.sh @@ -12,23 +12,8 @@ fi # libs we depend on #################### -# some libs are tested with go, others with make -# TODO: should be all make (post repo merge) -LIBS_GO_TEST=(go-clist go-common go-config go-crypto go-db go-events go-merkle go-p2p) -LIBS_MAKE_TEST=(go-rpc go-wire abci) - -for lib in "${LIBS_GO_TEST[@]}"; do - - # checkout vendored version of lib - bash scripts/glide/checkout.sh "$GLIDE" "$lib" - - echo "Testing $lib ..." - go test -v --race "github.com/tendermint/$lib/..." - if [[ "$?" != 0 ]]; then - echo "FAIL" - exit 1 - fi -done +# All libs should define `make test` and `make get_vendor_deps` +LIBS_TEST=(tmlibs go-wire go-crypto abci) DIR=$(pwd) for lib in "${LIBS_MAKE_TEST[@]}"; do @@ -38,6 +23,7 @@ for lib in "${LIBS_MAKE_TEST[@]}"; do echo "Testing $lib ..." cd "$GOPATH/src/github.com/tendermint/$lib" + make get_vendor_deps make test if [[ "$?" != 0 ]]; then echo "FAIL" diff --git a/types/block.go b/types/block.go index 61d25f6e4..b306d57d0 100644 --- a/types/block.go +++ b/types/block.go @@ -8,12 +8,16 @@ import ( "strings" "time" - . "github.com/tendermint/go-common" - "github.com/tendermint/go-merkle" - "github.com/tendermint/go-wire" + wire "github.com/tendermint/go-wire" + "github.com/tendermint/go-wire/data" + . 
"github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/merkle" ) -const MaxBlockSize = 22020096 // 21MB TODO make it configurable +const ( + MaxBlockSize = 22020096 // 21MB TODO make it configurable + DefaultBlockPartSize = 65536 // 64kB TODO: put part size in parts header? +) type Block struct { *Header `json:"header"` @@ -66,7 +70,7 @@ func (b *Block) ValidateBasic(chainID string, lastBlockHeight int, lastBlockID B return errors.New(Fmt("Wrong Block.Header.LastBlockID. Expected %v, got %v", lastBlockID, b.LastBlockID)) } if !bytes.Equal(b.LastCommitHash, b.LastCommit.Hash()) { - return errors.New(Fmt("Wrong Block.Header.LastCommitHash. Expected %X, got %X", b.LastCommitHash, b.LastCommit.Hash())) + return errors.New(Fmt("Wrong Block.Header.LastCommitHash. Expected %v, got %v", b.LastCommitHash, b.LastCommit.Hash())) } if b.Header.Height != 1 { if err := b.LastCommit.ValidateBasic(); err != nil { @@ -74,10 +78,10 @@ func (b *Block) ValidateBasic(chainID string, lastBlockHeight int, lastBlockID B } } if !bytes.Equal(b.DataHash, b.Data.Hash()) { - return errors.New(Fmt("Wrong Block.Header.DataHash. Expected %X, got %X", b.DataHash, b.Data.Hash())) + return errors.New(Fmt("Wrong Block.Header.DataHash. Expected %v, got %v", b.DataHash, b.Data.Hash())) } if !bytes.Equal(b.AppHash, appHash) { - return errors.New(Fmt("Wrong Block.Header.AppHash. Expected %X, got %X", appHash, b.AppHash)) + return errors.New(Fmt("Wrong Block.Header.AppHash. Expected %X, got %v", appHash, b.AppHash)) } // NOTE: the AppHash and ValidatorsHash are validated later. return nil @@ -94,7 +98,7 @@ func (b *Block) FillHeader() { // Computes and returns the block hash. // If the block is incomplete, block hash is nil for safety. 
-func (b *Block) Hash() []byte { +func (b *Block) Hash() data.Bytes { // fmt.Println(">>", b.Data) if b == nil || b.Header == nil || b.Data == nil || b.LastCommit == nil { return nil @@ -132,7 +136,7 @@ func (b *Block) StringIndented(indent string) string { %s %v %s %v %s %v -%s}#%X`, +%s}#%v`, indent, b.Header.StringIndented(indent+" "), indent, b.Data.StringIndented(indent+" "), indent, b.LastCommit.StringIndented(indent+" "), @@ -143,26 +147,26 @@ func (b *Block) StringShort() string { if b == nil { return "nil-Block" } else { - return fmt.Sprintf("Block#%X", b.Hash()) + return fmt.Sprintf("Block#%v", b.Hash()) } } //----------------------------------------------------------------------------- type Header struct { - ChainID string `json:"chain_id"` - Height int `json:"height"` - Time time.Time `json:"time"` - NumTxs int `json:"num_txs"` // XXX: Can we get rid of this? - LastBlockID BlockID `json:"last_block_id"` - LastCommitHash []byte `json:"last_commit_hash"` // commit from validators from the last block - DataHash []byte `json:"data_hash"` // transactions - ValidatorsHash []byte `json:"validators_hash"` // validators for the current block - AppHash []byte `json:"app_hash"` // state after txs from the previous block + ChainID string `json:"chain_id"` + Height int `json:"height"` + Time time.Time `json:"time"` + NumTxs int `json:"num_txs"` // XXX: Can we get rid of this? + LastBlockID BlockID `json:"last_block_id"` + LastCommitHash data.Bytes `json:"last_commit_hash"` // commit from validators from the last block + DataHash data.Bytes `json:"data_hash"` // transactions + ValidatorsHash data.Bytes `json:"validators_hash"` // validators for the current block + AppHash data.Bytes `json:"app_hash"` // state after txs from the previous block } // NOTE: hash is nil if required fields are missing. 
-func (h *Header) Hash() []byte { +func (h *Header) Hash() data.Bytes { if len(h.ValidatorsHash) == 0 { return nil } @@ -189,11 +193,11 @@ func (h *Header) StringIndented(indent string) string { %s Time: %v %s NumTxs: %v %s LastBlockID: %v -%s LastCommit: %X -%s Data: %X -%s Validators: %X -%s App: %X -%s}#%X`, +%s LastCommit: %v +%s Data: %v +%s Validators: %v +%s App: %v +%s}#%v`, indent, h.ChainID, indent, h.Height, indent, h.Time, @@ -218,7 +222,7 @@ type Commit struct { // Volatile firstPrecommit *Vote - hash []byte + hash data.Bytes bitArray *BitArray } @@ -318,7 +322,7 @@ func (commit *Commit) ValidateBasic() error { return nil } -func (commit *Commit) Hash() []byte { +func (commit *Commit) Hash() data.Bytes { if commit.hash == nil { bs := make([]interface{}, len(commit.Precommits)) for i, precommit := range commit.Precommits { @@ -340,7 +344,7 @@ func (commit *Commit) StringIndented(indent string) string { return fmt.Sprintf(`Commit{ %s BlockID: %v %s Precommits: %v -%s}#%X`, +%s}#%v`, indent, commit.BlockID, indent, strings.Join(precommitStrings, "\n"+indent+" "), indent, commit.hash) @@ -356,10 +360,10 @@ type Data struct { Txs Txs `json:"txs"` // Volatile - hash []byte + hash data.Bytes } -func (data *Data) Hash() []byte { +func (data *Data) Hash() data.Bytes { if data.hash == nil { data.hash = data.Txs.Hash() // NOTE: leaves of merkle tree are TxIDs } @@ -380,7 +384,7 @@ func (data *Data) StringIndented(indent string) string { } return fmt.Sprintf(`Data{ %s %v -%s}#%X`, +%s}#%v`, indent, strings.Join(txStrings, "\n"+indent+" "), indent, data.hash) } @@ -388,7 +392,7 @@ func (data *Data) StringIndented(indent string) string { //-------------------------------------------------------------------------------- type BlockID struct { - Hash []byte `json:"hash"` + Hash data.Bytes `json:"hash"` PartsHeader PartSetHeader `json:"parts"` } @@ -415,5 +419,5 @@ func (blockID BlockID) WriteSignBytes(w io.Writer, n *int, err *error) { } func (blockID BlockID) String() 
string { - return fmt.Sprintf(`%X:%v`, blockID.Hash, blockID.PartsHeader) + return fmt.Sprintf(`%v:%v`, blockID.Hash, blockID.PartsHeader) } diff --git a/types/canonical_json.go b/types/canonical_json.go index 68dd6924c..2e8583a4a 100644 --- a/types/canonical_json.go +++ b/types/canonical_json.go @@ -1,15 +1,19 @@ package types +import ( + "github.com/tendermint/go-wire/data" +) + // canonical json is go-wire's json for structs with fields in alphabetical order type CanonicalJSONBlockID struct { - Hash []byte `json:"hash,omitempty"` + Hash data.Bytes `json:"hash,omitempty"` PartsHeader CanonicalJSONPartSetHeader `json:"parts,omitempty"` } type CanonicalJSONPartSetHeader struct { - Hash []byte `json:"hash"` - Total int `json:"total"` + Hash data.Bytes `json:"hash"` + Total int `json:"total"` } type CanonicalJSONProposal struct { diff --git a/types/events.go b/types/events.go index 114979047..8c29c4445 100644 --- a/types/events.go +++ b/types/events.go @@ -3,9 +3,9 @@ package types import ( // for registering TMEventData as events.EventData abci "github.com/tendermint/abci/types" - . 
"github.com/tendermint/go-common" - "github.com/tendermint/go-events" - "github.com/tendermint/go-wire" + "github.com/tendermint/go-wire/data" + cmn "github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/events" ) // Functions to generate eventId strings @@ -16,7 +16,7 @@ func EventStringUnbond() string { return "Unbond" } func EventStringRebond() string { return "Rebond" } func EventStringDupeout() string { return "Dupeout" } func EventStringFork() string { return "Fork" } -func EventStringTx(tx Tx) string { return Fmt("Tx:%X", tx.Hash()) } +func EventStringTx(tx Tx) string { return cmn.Fmt("Tx:%X", tx.Hash()) } func EventStringNewBlock() string { return "NewBlock" } func EventStringNewBlockHeader() string { return "NewBlockHeader" } @@ -33,10 +33,47 @@ func EventStringVote() string { return "Vote" } //---------------------------------------- +var ( + EventDataNameNewBlock = "new_block" + EventDataNameNewBlockHeader = "new_block_header" + EventDataNameTx = "tx" + EventDataNameRoundState = "round_state" + EventDataNameVote = "vote" +) + +//---------------------------------------- + // implements events.EventData -type TMEventData interface { +type TMEventDataInner interface { events.EventData - AssertIsTMEventData() +} + +type TMEventData struct { + TMEventDataInner `json:"unwrap"` +} + +func (tmr TMEventData) MarshalJSON() ([]byte, error) { + return tmEventDataMapper.ToJSON(tmr.TMEventDataInner) +} + +func (tmr *TMEventData) UnmarshalJSON(data []byte) (err error) { + parsed, err := tmEventDataMapper.FromJSON(data) + if err == nil && parsed != nil { + tmr.TMEventDataInner = parsed.(TMEventDataInner) + } + return +} + +func (tmr TMEventData) Unwrap() TMEventDataInner { + tmrI := tmr.TMEventDataInner + for wrap, ok := tmrI.(TMEventData); ok; wrap, ok = tmrI.(TMEventData) { + tmrI = wrap.TMEventDataInner + } + return tmrI +} + +func (tmr TMEventData) Empty() bool { + return tmr.TMEventDataInner == nil } const ( @@ -49,15 +86,12 @@ const ( 
EventDataTypeVote = byte(0x12) ) -var _ = wire.RegisterInterface( - struct{ TMEventData }{}, - wire.ConcreteType{EventDataNewBlock{}, EventDataTypeNewBlock}, - wire.ConcreteType{EventDataNewBlockHeader{}, EventDataTypeNewBlockHeader}, - // wire.ConcreteType{EventDataFork{}, EventDataTypeFork }, - wire.ConcreteType{EventDataTx{}, EventDataTypeTx}, - wire.ConcreteType{EventDataRoundState{}, EventDataTypeRoundState}, - wire.ConcreteType{EventDataVote{}, EventDataTypeVote}, -) +var tmEventDataMapper = data.NewMapper(TMEventData{}). + RegisterImplementation(EventDataNewBlock{}, EventDataNameNewBlock, EventDataTypeNewBlock). + RegisterImplementation(EventDataNewBlockHeader{}, EventDataNameNewBlockHeader, EventDataTypeNewBlockHeader). + RegisterImplementation(EventDataTx{}, EventDataNameTx, EventDataTypeTx). + RegisterImplementation(EventDataRoundState{}, EventDataNameRoundState, EventDataTypeRoundState). + RegisterImplementation(EventDataVote{}, EventDataNameVote, EventDataTypeVote) // Most event messages are basic types (a block, a transaction) // but some (an input to a call tx or a receive) are more exotic @@ -75,7 +109,7 @@ type EventDataNewBlockHeader struct { type EventDataTx struct { Height int `json:"height"` Tx Tx `json:"tx"` - Data []byte `json:"data"` + Data data.Bytes `json:"data"` Log string `json:"log"` Code abci.CodeType `json:"code"` Error string `json:"error"` // this is redundant information for now @@ -146,55 +180,55 @@ func AddListenerForEvent(evsw EventSwitch, id, event string, cb func(data TMEven //--- block, tx, and vote events func FireEventNewBlock(fireable events.Fireable, block EventDataNewBlock) { - fireEvent(fireable, EventStringNewBlock(), block) + fireEvent(fireable, EventStringNewBlock(), TMEventData{block}) } func FireEventNewBlockHeader(fireable events.Fireable, header EventDataNewBlockHeader) { - fireEvent(fireable, EventStringNewBlockHeader(), header) + fireEvent(fireable, EventStringNewBlockHeader(), TMEventData{header}) } func 
FireEventVote(fireable events.Fireable, vote EventDataVote) { - fireEvent(fireable, EventStringVote(), vote) + fireEvent(fireable, EventStringVote(), TMEventData{vote}) } func FireEventTx(fireable events.Fireable, tx EventDataTx) { - fireEvent(fireable, EventStringTx(tx.Tx), tx) + fireEvent(fireable, EventStringTx(tx.Tx), TMEventData{tx}) } //--- EventDataRoundState events func FireEventNewRoundStep(fireable events.Fireable, rs EventDataRoundState) { - fireEvent(fireable, EventStringNewRoundStep(), rs) + fireEvent(fireable, EventStringNewRoundStep(), TMEventData{rs}) } func FireEventTimeoutPropose(fireable events.Fireable, rs EventDataRoundState) { - fireEvent(fireable, EventStringTimeoutPropose(), rs) + fireEvent(fireable, EventStringTimeoutPropose(), TMEventData{rs}) } func FireEventTimeoutWait(fireable events.Fireable, rs EventDataRoundState) { - fireEvent(fireable, EventStringTimeoutWait(), rs) + fireEvent(fireable, EventStringTimeoutWait(), TMEventData{rs}) } func FireEventNewRound(fireable events.Fireable, rs EventDataRoundState) { - fireEvent(fireable, EventStringNewRound(), rs) + fireEvent(fireable, EventStringNewRound(), TMEventData{rs}) } func FireEventCompleteProposal(fireable events.Fireable, rs EventDataRoundState) { - fireEvent(fireable, EventStringCompleteProposal(), rs) + fireEvent(fireable, EventStringCompleteProposal(), TMEventData{rs}) } func FireEventPolka(fireable events.Fireable, rs EventDataRoundState) { - fireEvent(fireable, EventStringPolka(), rs) + fireEvent(fireable, EventStringPolka(), TMEventData{rs}) } func FireEventUnlock(fireable events.Fireable, rs EventDataRoundState) { - fireEvent(fireable, EventStringUnlock(), rs) + fireEvent(fireable, EventStringUnlock(), TMEventData{rs}) } func FireEventRelock(fireable events.Fireable, rs EventDataRoundState) { - fireEvent(fireable, EventStringRelock(), rs) + fireEvent(fireable, EventStringRelock(), TMEventData{rs}) } func FireEventLock(fireable events.Fireable, rs EventDataRoundState) { - 
fireEvent(fireable, EventStringLock(), rs) + fireEvent(fireable, EventStringLock(), TMEventData{rs}) } diff --git a/types/genesis.go b/types/genesis.go index 3a0395488..75999f631 100644 --- a/types/genesis.go +++ b/types/genesis.go @@ -1,11 +1,12 @@ package types import ( + "encoding/json" "time" - . "github.com/tendermint/go-common" "github.com/tendermint/go-crypto" - "github.com/tendermint/go-wire" + "github.com/tendermint/go-wire/data" + cmn "github.com/tendermint/tmlibs/common" ) //------------------------------------------------------------ @@ -26,19 +27,23 @@ type GenesisDoc struct { GenesisTime time.Time `json:"genesis_time"` ChainID string `json:"chain_id"` Validators []GenesisValidator `json:"validators"` - AppHash []byte `json:"app_hash"` + AppHash data.Bytes `json:"app_hash"` } // Utility method for saving GenensisDoc as JSON file. func (genDoc *GenesisDoc) SaveAs(file string) error { - genDocBytes := wire.JSONBytesPretty(genDoc) - return WriteFile(file, genDocBytes, 0644) + genDocBytes, err := json.Marshal(genDoc) + if err != nil { + return err + } + return cmn.WriteFile(file, genDocBytes, 0644) } //------------------------------------------------------------ // Make genesis state from file -func GenesisDocFromJSON(jsonBlob []byte) (genDoc *GenesisDoc, err error) { - wire.ReadJSONPtr(&genDoc, jsonBlob, &err) - return +func GenesisDocFromJSON(jsonBlob []byte) (*GenesisDoc, error) { + genDoc := GenesisDoc{} + err := json.Unmarshal(jsonBlob, &genDoc) + return &genDoc, err } diff --git a/types/log.go b/types/log.go deleted file mode 100644 index dbe8a6782..000000000 --- a/types/log.go +++ /dev/null @@ -1,7 +0,0 @@ -package types - -import ( - "github.com/tendermint/go-logger" -) - -var log = logger.New("module", "types") diff --git a/types/part_set.go b/types/part_set.go index 3a5ee26ad..e15d2cab6 100644 --- a/types/part_set.go +++ b/types/part_set.go @@ -9,9 +9,10 @@ import ( "golang.org/x/crypto/ripemd160" - . 
"github.com/tendermint/go-common" - "github.com/tendermint/go-merkle" "github.com/tendermint/go-wire" + "github.com/tendermint/go-wire/data" + cmn "github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/merkle" ) var ( @@ -21,7 +22,7 @@ var ( type Part struct { Index int `json:"index"` - Bytes []byte `json:"bytes"` + Bytes data.Bytes `json:"bytes"` Proof merkle.SimpleProof `json:"proof"` // Cache @@ -49,7 +50,7 @@ func (part *Part) StringIndented(indent string) string { %s Proof: %v %s}`, part.Index, - indent, Fingerprint(part.Bytes), + indent, cmn.Fingerprint(part.Bytes), indent, part.Proof.StringIndented(indent+" "), indent) } @@ -57,12 +58,12 @@ func (part *Part) StringIndented(indent string) string { //------------------------------------- type PartSetHeader struct { - Total int `json:"total"` - Hash []byte `json:"hash"` + Total int `json:"total"` + Hash data.Bytes `json:"hash"` } func (psh PartSetHeader) String() string { - return fmt.Sprintf("%v:%X", psh.Total, Fingerprint(psh.Hash)) + return fmt.Sprintf("%v:%X", psh.Total, cmn.Fingerprint(psh.Hash)) } func (psh PartSetHeader) IsZero() bool { @@ -85,7 +86,7 @@ type PartSet struct { mtx sync.Mutex parts []*Part - partsBitArray *BitArray + partsBitArray *cmn.BitArray count int } @@ -96,11 +97,11 @@ func NewPartSetFromData(data []byte, partSize int) *PartSet { total := (len(data) + partSize - 1) / partSize parts := make([]*Part, total) parts_ := make([]merkle.Hashable, total) - partsBitArray := NewBitArray(total) + partsBitArray := cmn.NewBitArray(total) for i := 0; i < total; i++ { part := &Part{ Index: i, - Bytes: data[i*partSize : MinInt(len(data), (i+1)*partSize)], + Bytes: data[i*partSize : cmn.MinInt(len(data), (i+1)*partSize)], } parts[i] = part parts_[i] = part @@ -126,7 +127,7 @@ func NewPartSetFromHeader(header PartSetHeader) *PartSet { total: header.Total, hash: header.Hash, parts: make([]*Part, header.Total), - partsBitArray: NewBitArray(header.Total), + partsBitArray: 
cmn.NewBitArray(header.Total), count: 0, } } @@ -150,7 +151,7 @@ func (ps *PartSet) HasHeader(header PartSetHeader) bool { } } -func (ps *PartSet) BitArray() *BitArray { +func (ps *PartSet) BitArray() *cmn.BitArray { ps.mtx.Lock() defer ps.mtx.Unlock() return ps.partsBitArray.Copy() @@ -224,7 +225,7 @@ func (ps *PartSet) IsComplete() bool { func (ps *PartSet) GetReader() io.Reader { if !ps.IsComplete() { - PanicSanity("Cannot GetReader() on incomplete PartSet") + cmn.PanicSanity("Cannot GetReader() on incomplete PartSet") } return NewPartSetReader(ps.parts) } diff --git a/types/part_set_test.go b/types/part_set_test.go index 6e25752da..7088ef317 100644 --- a/types/part_set_test.go +++ b/types/part_set_test.go @@ -5,7 +5,7 @@ import ( "io/ioutil" "testing" - . "github.com/tendermint/go-common" + . "github.com/tendermint/tmlibs/common" ) const ( diff --git a/types/priv_validator.go b/types/priv_validator.go index c200dff3c..c3d59c9eb 100644 --- a/types/priv_validator.go +++ b/types/priv_validator.go @@ -2,17 +2,17 @@ package types import ( "bytes" + "encoding/json" "errors" "fmt" "io/ioutil" "os" "sync" - . "github.com/tendermint/go-common" - "github.com/tendermint/go-crypto" - "github.com/tendermint/go-wire" - - "github.com/tendermint/ed25519" + crypto "github.com/tendermint/go-crypto" + data "github.com/tendermint/go-wire/data" + . 
"github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/log" ) const ( @@ -35,13 +35,13 @@ func voteToStep(vote *Vote) int8 { } type PrivValidator struct { - Address []byte `json:"address"` + Address data.Bytes `json:"address"` PubKey crypto.PubKey `json:"pub_key"` LastHeight int `json:"last_height"` LastRound int `json:"last_round"` LastStep int8 `json:"last_step"` - LastSignature crypto.Signature `json:"last_signature"` // so we dont lose signatures - LastSignBytes []byte `json:"last_signbytes"` // so we dont lose signatures + LastSignature crypto.Signature `json:"last_signature,omitempty"` // so we dont lose signatures + LastSignBytes data.Bytes `json:"last_signbytes,omitempty"` // so we dont lose signatures // PrivKey should be empty if a Signer other than the default is being used. PrivKey crypto.PrivKey `json:"priv_key"` @@ -81,22 +81,15 @@ func (privVal *PrivValidator) SetSigner(s Signer) { // Generates a new validator with private key. func GenPrivValidator() *PrivValidator { - privKeyBytes := new([64]byte) - copy(privKeyBytes[:32], crypto.CRandBytes(32)) - pubKeyBytes := ed25519.MakePublicKey(privKeyBytes) - pubKey := crypto.PubKeyEd25519(*pubKeyBytes) - privKey := crypto.PrivKeyEd25519(*privKeyBytes) + privKey := crypto.GenPrivKeyEd25519().Wrap() + pubKey := privKey.PubKey() return &PrivValidator{ - Address: pubKey.Address(), - PubKey: pubKey, - PrivKey: privKey, - LastHeight: 0, - LastRound: 0, - LastStep: stepNone, - LastSignature: nil, - LastSignBytes: nil, - filePath: "", - Signer: NewDefaultSigner(privKey), + Address: pubKey.Address(), + PubKey: pubKey, + PrivKey: privKey, + LastStep: stepNone, + filePath: "", + Signer: NewDefaultSigner(privKey), } } @@ -105,26 +98,27 @@ func LoadPrivValidator(filePath string) *PrivValidator { if err != nil { Exit(err.Error()) } - privVal := wire.ReadJSON(&PrivValidator{}, privValJSONBytes, &err).(*PrivValidator) + privVal := PrivValidator{} + err = json.Unmarshal(privValJSONBytes, &privVal) if err != 
nil { Exit(Fmt("Error reading PrivValidator from %v: %v\n", filePath, err)) } privVal.filePath = filePath privVal.Signer = NewDefaultSigner(privVal.PrivKey) - return privVal + return &privVal } -func LoadOrGenPrivValidator(filePath string) *PrivValidator { +func LoadOrGenPrivValidator(filePath string, logger log.Logger) *PrivValidator { var privValidator *PrivValidator if _, err := os.Stat(filePath); err == nil { privValidator = LoadPrivValidator(filePath) - log.Notice("Loaded PrivValidator", + logger.Info("Loaded PrivValidator", "file", filePath, "privValidator", privValidator) } else { privValidator = GenPrivValidator() privValidator.SetFile(filePath) privValidator.Save() - log.Notice("Generated PrivValidator", "file", filePath) + logger.Info("Generated PrivValidator", "file", filePath) } return privValidator } @@ -145,8 +139,12 @@ func (privVal *PrivValidator) save() { if privVal.filePath == "" { PanicSanity("Cannot save PrivValidator: filePath not set") } - jsonBytes := wire.JSONBytesPretty(privVal) - err := WriteFileAtomic(privVal.filePath, jsonBytes, 0600) + jsonBytes, err := json.Marshal(privVal) + if err != nil { + // `@; BOOM!!! + PanicCrisis(err) + } + err = WriteFileAtomic(privVal.filePath, jsonBytes, 0600) if err != nil { // `@; BOOM!!! 
PanicCrisis(err) @@ -158,7 +156,7 @@ func (privVal *PrivValidator) Reset() { privVal.LastHeight = 0 privVal.LastRound = 0 privVal.LastStep = 0 - privVal.LastSignature = nil + privVal.LastSignature = crypto.Signature{} privVal.LastSignBytes = nil privVal.Save() } @@ -183,7 +181,7 @@ func (privVal *PrivValidator) SignProposal(chainID string, proposal *Proposal) e defer privVal.mtx.Unlock() signature, err := privVal.signBytesHRS(proposal.Height, proposal.Round, stepPropose, SignBytes(chainID, proposal)) if err != nil { - return errors.New(Fmt("Error signing proposal: %v", err)) + return fmt.Errorf("Error signing proposal: %v", err) } proposal.Signature = signature return nil @@ -191,55 +189,56 @@ func (privVal *PrivValidator) SignProposal(chainID string, proposal *Proposal) e // check if there's a regression. Else sign and write the hrs+signature to disk func (privVal *PrivValidator) signBytesHRS(height, round int, step int8, signBytes []byte) (crypto.Signature, error) { + sig := crypto.Signature{} // If height regression, err if privVal.LastHeight > height { - return nil, errors.New("Height regression") + return sig, errors.New("Height regression") } // More cases for when the height matches if privVal.LastHeight == height { // If round regression, err if privVal.LastRound > round { - return nil, errors.New("Round regression") + return sig, errors.New("Round regression") } // If step regression, err if privVal.LastRound == round { if privVal.LastStep > step { - return nil, errors.New("Step regression") + return sig, errors.New("Step regression") } else if privVal.LastStep == step { if privVal.LastSignBytes != nil { - if privVal.LastSignature == nil { + if privVal.LastSignature.Empty() { PanicSanity("privVal: LastSignature is nil but LastSignBytes is not!") } // so we dont sign a conflicting vote or proposal // NOTE: proposals are non-deterministic (include time), // so we can actually lose them, but will still never sign conflicting ones if 
bytes.Equal(privVal.LastSignBytes, signBytes) { - log.Notice("Using privVal.LastSignature", "sig", privVal.LastSignature) + // log.Notice("Using privVal.LastSignature", "sig", privVal.LastSignature) return privVal.LastSignature, nil } } - return nil, errors.New("Step regression") + return sig, errors.New("Step regression") } } } // Sign - signature := privVal.Sign(signBytes) + sig = privVal.Sign(signBytes) // Persist height/round/step privVal.LastHeight = height privVal.LastRound = round privVal.LastStep = step - privVal.LastSignature = signature + privVal.LastSignature = sig privVal.LastSignBytes = signBytes privVal.save() - return signature, nil + return sig, nil } func (privVal *PrivValidator) String() string { - return fmt.Sprintf("PrivValidator{%X LH:%v, LR:%v, LS:%v}", privVal.Address, privVal.LastHeight, privVal.LastRound, privVal.LastStep) + return fmt.Sprintf("PrivValidator{%v LH:%v, LR:%v, LS:%v}", privVal.Address, privVal.LastHeight, privVal.LastRound, privVal.LastStep) } //------------------------------------- diff --git a/types/priv_validator_test.go b/types/priv_validator_test.go new file mode 100644 index 000000000..1eb0b57db --- /dev/null +++ b/types/priv_validator_test.go @@ -0,0 +1,60 @@ +package types + +import ( + "encoding/hex" + "encoding/json" + "fmt" + "testing" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + crypto "github.com/tendermint/go-crypto" +) + +func TestLoadValidator(t *testing.T) { + assert, require := assert.New(t), require.New(t) + + // create some fixed values + addrStr := "D028C9981F7A87F3093672BF0D5B0E2A1B3ED456" + pubStr := "3B3069C422E19688B45CBFAE7BB009FC0FA1B1EA86593519318B7214853803C8" + privStr := "27F82582AEFAE7AB151CFB01C48BB6C1A0DA78F9BDDA979A9F70A84D074EB07D3B3069C422E19688B45CBFAE7BB009FC0FA1B1EA86593519318B7214853803C8" + addrBytes, _ := hex.DecodeString(addrStr) + pubBytes, _ := hex.DecodeString(pubStr) + privBytes, _ := hex.DecodeString(privStr) + + // prepend type byte + 
pubKey, err := crypto.PubKeyFromBytes(append([]byte{1}, pubBytes...)) + require.Nil(err, "%+v", err) + privKey, err := crypto.PrivKeyFromBytes(append([]byte{1}, privBytes...)) + require.Nil(err, "%+v", err) + + serialized := fmt.Sprintf(`{ + "address": "%s", + "pub_key": { + "type": "ed25519", + "data": "%s" + }, + "priv_key": { + "type": "ed25519", + "data": "%s" + }, + "last_height": 0, + "last_round": 0, + "last_step": 0, + "last_signature": null +}`, addrStr, pubStr, privStr) + + val := PrivValidator{} + err = json.Unmarshal([]byte(serialized), &val) + require.Nil(err, "%+v", err) + + // make sure the values match + assert.EqualValues(addrBytes, val.Address) + assert.EqualValues(pubKey, val.PubKey) + assert.EqualValues(privKey, val.PrivKey) + + // export it and make sure it is the same + out, err := json.Marshal(val) + require.Nil(err, "%+v", err) + assert.JSONEq(serialized, string(out)) +} diff --git a/types/proposal.go b/types/proposal.go index 9852011f3..8406403c1 100644 --- a/types/proposal.go +++ b/types/proposal.go @@ -5,7 +5,7 @@ import ( "fmt" "io" - //. "github.com/tendermint/go-common" + //. "github.com/tendermint/tmlibs/common" "github.com/tendermint/go-crypto" "github.com/tendermint/go-wire" ) diff --git a/types/protobuf.go b/types/protobuf.go index 41b4c54bb..59994fea7 100644 --- a/types/protobuf.go +++ b/types/protobuf.go @@ -42,3 +42,11 @@ func (tm2pb) Validator(val *Validator) *types.Validator { Power: uint64(val.VotingPower), } } + +func (tm2pb) Validators(vals *ValidatorSet) []*types.Validator { + validators := make([]*types.Validator, len(vals.Validators)) + for i, val := range vals.Validators { + validators[i] = TM2PB.Validator(val) + } + return validators +} diff --git a/types/signable.go b/types/signable.go index df94e43b4..13389fef7 100644 --- a/types/signable.go +++ b/types/signable.go @@ -4,8 +4,8 @@ import ( "bytes" "io" - . "github.com/tendermint/go-common" - "github.com/tendermint/go-merkle" + . 
"github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/merkle" ) // Signable is an interface for all signable things. diff --git a/types/tx.go b/types/tx.go index df7f0e71a..0334452e1 100644 --- a/types/tx.go +++ b/types/tx.go @@ -3,9 +3,11 @@ package types import ( "bytes" "errors" + "fmt" abci "github.com/tendermint/abci/types" - "github.com/tendermint/go-merkle" + "github.com/tendermint/go-wire/data" + "github.com/tendermint/tmlibs/merkle" ) type Tx []byte @@ -18,11 +20,15 @@ func (tx Tx) Hash() []byte { return merkle.SimpleHashFromBinary(tx) } +func (tx Tx) String() string { + return fmt.Sprintf("Tx{%X}", []byte(tx)) +} + type Txs []Tx func (txs Txs) Hash() []byte { // Recursive impl. - // Copied from go-merkle to avoid allocations + // Copied from tmlibs/merkle to avoid allocations switch len(txs) { case 0: return nil @@ -79,7 +85,7 @@ func (txs Txs) Proof(i int) TxProof { type TxProof struct { Index, Total int - RootHash []byte + RootHash data.Bytes Data Tx Proof merkle.SimpleProof } diff --git a/types/tx_test.go b/types/tx_test.go index 7688a9bf1..91cddecfe 100644 --- a/types/tx_test.go +++ b/types/tx_test.go @@ -5,9 +5,9 @@ import ( "testing" "github.com/stretchr/testify/assert" - cmn "github.com/tendermint/go-common" - ctest "github.com/tendermint/go-common/test" wire "github.com/tendermint/go-wire" + cmn "github.com/tendermint/tmlibs/common" + ctest "github.com/tendermint/tmlibs/test" ) func makeTxs(cnt, size int) Txs { @@ -60,9 +60,9 @@ func TestValidTxProof(t *testing.T) { proof := txs.Proof(i) assert.Equal(i, proof.Index, "%d: %d", h, i) assert.Equal(len(txs), proof.Total, "%d: %d", h, i) - assert.Equal(root, proof.RootHash, "%d: %d", h, i) - assert.Equal(leaf, proof.Data, "%d: %d", h, i) - assert.Equal(leafHash, proof.LeafHash(), "%d: %d", h, i) + assert.EqualValues(root, proof.RootHash, "%d: %d", h, i) + assert.EqualValues(leaf, proof.Data, "%d: %d", h, i) + assert.EqualValues(leafHash, proof.LeafHash(), "%d: %d", h, i) 
assert.Nil(proof.Validate(root), "%d: %d", h, i) assert.NotNil(proof.Validate([]byte("foobar")), "%d: %d", h, i) diff --git a/types/validator.go b/types/validator.go index c4ecef56e..24f8974f3 100644 --- a/types/validator.go +++ b/types/validator.go @@ -5,19 +5,21 @@ import ( "fmt" "io" - . "github.com/tendermint/go-common" "github.com/tendermint/go-crypto" "github.com/tendermint/go-wire" + "github.com/tendermint/go-wire/data" + cmn "github.com/tendermint/tmlibs/common" ) // Volatile state for each Validator -// TODO: make non-volatile identity -// - Remove Accum - it can be computed, and now valset becomes identifying +// NOTE: The Accum is not included in Validator.Hash(); +// make sure to update that method if changes are made here type Validator struct { - Address []byte `json:"address"` + Address data.Bytes `json:"address"` PubKey crypto.PubKey `json:"pub_key"` VotingPower int64 `json:"voting_power"` - Accum int64 `json:"accum"` + + Accum int64 `json:"accum"` } func NewValidator(pubKey crypto.PubKey, votingPower int64) *Validator { @@ -51,7 +53,7 @@ func (v *Validator) CompareAccum(other *Validator) *Validator { } else if bytes.Compare(v.Address, other.Address) > 0 { return other } else { - PanicSanity("Cannot compare identical validators") + cmn.PanicSanity("Cannot compare identical validators") return nil } } @@ -61,15 +63,25 @@ func (v *Validator) String() string { if v == nil { return "nil-Validator" } - return fmt.Sprintf("Validator{%X %v VP:%v A:%v}", + return fmt.Sprintf("Validator{%v %v VP:%v A:%v}", v.Address, v.PubKey, v.VotingPower, v.Accum) } +// Hash computes the unique ID of a validator with a given voting power. +// It excludes the Accum value, which changes with every round.
func (v *Validator) Hash() []byte { - return wire.BinaryRipemd160(v) + return wire.BinaryRipemd160(struct { + Address data.Bytes + PubKey crypto.PubKey + VotingPower int64 + }{ + v.Address, + v.PubKey, + v.VotingPower, + }) } //------------------------------------- @@ -87,7 +99,7 @@ func (vc validatorCodec) Decode(r io.Reader, n *int, err *error) interface{} { } func (vc validatorCodec) Compare(o1 interface{}, o2 interface{}) int { - PanicSanity("ValidatorCodec.Compare not implemented") + cmn.PanicSanity("ValidatorCodec.Compare not implemented") return 0 } @@ -96,11 +108,11 @@ func (vc validatorCodec) Compare(o1 interface{}, o2 interface{}) int { func RandValidator(randPower bool, minPower int64) (*Validator, *PrivValidator) { privVal := GenPrivValidator() - _, tempFilePath := Tempfile("priv_validator_") + _, tempFilePath := cmn.Tempfile("priv_validator_") privVal.SetFile(tempFilePath) votePower := minPower if randPower { - votePower += int64(RandUint32()) + votePower += int64(cmn.RandUint32()) } val := NewValidator(privVal.PubKey, votePower) return val, privVal diff --git a/types/validator_set.go b/types/validator_set.go index b997b4713..b374df576 100644 --- a/types/validator_set.go +++ b/types/validator_set.go @@ -6,9 +6,9 @@ import ( "sort" "strings" - cmn "github.com/tendermint/go-common" - "github.com/tendermint/go-merkle" "github.com/tendermint/go-wire" + cmn "github.com/tendermint/tmlibs/common" + "github.com/tendermint/tmlibs/merkle" ) // ValidatorSet represent a set of *Validator at a given height. @@ -21,10 +21,10 @@ import ( // NOTE: Not goroutine-safe. // NOTE: All get/set to validators should copy the value for safety. // TODO: consider validator Accum overflow -// TODO: move valset into an iavl tree where key is 'blockbonded|pubkey' type ValidatorSet struct { - Validators []*Validator // NOTE: persisted via reflect, must be exported. - Proposer *Validator + // NOTE: persisted via reflect, must be exported. 
+ Validators []*Validator `json:"validators"` + Proposer *Validator `json:"proposer"` // cached (unexported) totalVotingPower int64 diff --git a/types/validator_set_test.go b/types/validator_set_test.go index bc7fef798..71a1993e7 100644 --- a/types/validator_set_test.go +++ b/types/validator_set_test.go @@ -5,14 +5,14 @@ import ( "strings" "testing" - cmn "github.com/tendermint/go-common" + cmn "github.com/tendermint/tmlibs/common" "github.com/tendermint/go-crypto" ) -func randPubKey() crypto.PubKeyEd25519 { +func randPubKey() crypto.PubKey { var pubKey [32]byte copy(pubKey[:], cmn.RandBytes(32)) - return crypto.PubKeyEd25519(pubKey) + return crypto.PubKeyEd25519(pubKey).Wrap() } func randValidator_() *Validator { @@ -194,7 +194,7 @@ func BenchmarkValidatorSetCopy(b *testing.B) { vset := NewValidatorSet([]*Validator{}) for i := 0; i < 1000; i++ { privKey := crypto.GenPrivKeyEd25519() - pubKey := privKey.PubKey().(crypto.PubKeyEd25519) + pubKey := privKey.PubKey() val := NewValidator(pubKey, 0) if !vset.Add(val) { panic("Failed to add validator") diff --git a/types/vote.go b/types/vote.go index af4f60fc5..164293c53 100644 --- a/types/vote.go +++ b/types/vote.go @@ -5,9 +5,10 @@ import ( "fmt" "io" - . "github.com/tendermint/go-common" "github.com/tendermint/go-crypto" "github.com/tendermint/go-wire" + "github.com/tendermint/go-wire/data" + cmn "github.com/tendermint/tmlibs/common" ) var ( @@ -47,7 +48,7 @@ func IsVoteTypeValid(type_ byte) bool { // Represents a prevote, precommit, or commit vote from validators for consensus. 
type Vote struct { - ValidatorAddress []byte `json:"validator_address"` + ValidatorAddress data.Bytes `json:"validator_address"` ValidatorIndex int `json:"validator_index"` Height int `json:"height"` Round int `json:"round"` @@ -79,11 +80,11 @@ func (vote *Vote) String() string { case VoteTypePrecommit: typeString = "Precommit" default: - PanicSanity("Unknown vote type") + cmn.PanicSanity("Unknown vote type") } return fmt.Sprintf("Vote{%v:%X %v/%02d/%v(%v) %X %v}", - vote.ValidatorIndex, Fingerprint(vote.ValidatorAddress), + vote.ValidatorIndex, cmn.Fingerprint(vote.ValidatorAddress), vote.Height, vote.Round, vote.Type, typeString, - Fingerprint(vote.BlockID.Hash), vote.Signature) + cmn.Fingerprint(vote.BlockID.Hash), vote.Signature) } diff --git a/types/vote_set.go b/types/vote_set.go index de853a5e7..938dbcb61 100644 --- a/types/vote_set.go +++ b/types/vote_set.go @@ -6,7 +6,7 @@ import ( "strings" "sync" - . "github.com/tendermint/go-common" + . "github.com/tendermint/tmlibs/common" ) /* diff --git a/types/vote_set_test.go b/types/vote_set_test.go index 500daadff..84e13ac17 100644 --- a/types/vote_set_test.go +++ b/types/vote_set_test.go @@ -3,15 +3,14 @@ package types import ( "bytes" - . "github.com/tendermint/go-common" - . "github.com/tendermint/go-common/test" "github.com/tendermint/go-crypto" + . "github.com/tendermint/tmlibs/common" + . "github.com/tendermint/tmlibs/test" "testing" ) // NOTE: privValidators are in order -// TODO: Move it out? 
func randVoteSet(height int, round int, type_ byte, numValidators int, votingPower int64) (*VoteSet, *ValidatorSet, []*PrivValidator) { valSet, privValidators := RandValidatorSet(numValidators, votingPower) return NewVoteSet("test_chain_id", height, round, type_, valSet), valSet, privValidators diff --git a/version/version.go b/version/version.go index b35a2daee..60887d6f9 100644 --- a/version/version.go +++ b/version/version.go @@ -1,7 +1,19 @@ package version const Maj = "0" -const Min = "9" -const Fix = "2" +const Min = "10" +const Fix = "0" -const Version = "0.9.2" +var ( + // The full version string + Version = "0.10.0-rc1" + + // GitCommit is set with --ldflags "-X main.gitCommit=$(git rev-parse HEAD)" + GitCommit string +) + +func init() { + if GitCommit != "" { + Version += "-" + GitCommit[:8] + } +}