Compare commits


1 Commit

Author: Ismail Khoffi
SHA1: 79210e658d
Message: first stab on checked ints
Date: 2018-10-26 18:55:19 +02:00
328 changed files with 13398 additions and 12548 deletions
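The commit title refers to checked ("safe") integer arithmetic. The diff excerpts below do not show the new helpers themselves, so the following is only a rough, hypothetical sketch of the technique (package and function names are illustrative, not taken from this commit): overflow-checked int64 operations in Go could look like this.

```go
package safemath

import (
	"errors"
	"math"
)

// ErrOverflow is returned when a checked operation would overflow int64.
var ErrOverflow = errors.New("integer overflow")

// AddInt64 returns a+b, or ErrOverflow if the exact result does not fit in int64.
func AddInt64(a, b int64) (int64, error) {
	if b > 0 && a > math.MaxInt64-b {
		return 0, ErrOverflow
	}
	if b < 0 && a < math.MinInt64-b {
		return 0, ErrOverflow
	}
	return a + b, nil
}

// MulInt64 returns a*b, or ErrOverflow if the exact result does not fit in int64.
func MulInt64(a, b int64) (int64, error) {
	if a == 0 || b == 0 {
		return 0, nil
	}
	c := a * b
	// Detect wrap-around, including the MinInt64 * -1 corner case.
	if c/b != a || (a == math.MinInt64 && b == -1) || (b == math.MinInt64 && a == -1) {
		return 0, ErrOverflow
	}
	return c, nil
}
```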

View File

@@ -3,17 +3,10 @@ version: 2
defaults: &defaults
working_directory: /go/src/github.com/tendermint/tendermint
docker:
- image: circleci/golang:1.11.4
- image: circleci/golang:1.10.3
environment:
GOBIN: /tmp/workspace/bin
docs_update_config: &docs_update_config
working_directory: ~/repo
docker:
- image: tendermint/docs_deployment
environment:
AWS_REGION: us-east-1
jobs:
setup_dependencies:
<<: *defaults
@@ -99,7 +92,6 @@ jobs:
command: |
export PATH="$GOBIN:$PATH"
make get_tools
make get_dev_tools
- run:
name: dependencies
command: |
@@ -240,7 +232,7 @@ jobs:
for pkg in $(go list github.com/tendermint/tendermint/... | circleci tests split --split-by=timings); do
id=$(basename "$pkg")
GOCACHE=off go test -v -timeout 5m -race -coverprofile=/tmp/workspace/profiles/$id.out -covermode=atomic "$pkg" | tee "/tmp/logs/$id-$RANDOM.log"
GOCACHE=off go test -timeout 5m -race -coverprofile=/tmp/workspace/profiles/$id.out -covermode=atomic "$pkg" | tee "/tmp/logs/$id-$RANDOM.log"
done
- persist_to_workspace:
root: /tmp/workspace
@@ -346,25 +338,10 @@ jobs:
name: upload
command: bash .circleci/codecov.sh -f coverage.txt
deploy_docs:
<<: *docs_update_config
steps:
- checkout
- run:
name: Trigger website build
command: |
chamber exec tendermint -- start_website_build
workflows:
version: 2
test-suite:
jobs:
- deploy_docs:
filters:
branches:
only:
- master
- develop
- setup_dependencies
- lint:
requires:

View File

@@ -1,570 +1,5 @@
# Changelog
## v0.29.1
*January 24, 2019*
Special thanks to external contributors on this release:
@infinytum, @gauthamzz
This release contains two important fixes: one for p2p layer where we sometimes
were not closing connections and one for consensus layer where consensus with
no empty blocks (`create_empty_blocks = false`) could halt.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### IMPROVEMENTS:
- [pex] [\#3037](https://github.com/tendermint/tendermint/issues/3037) Only log "Reached max attempts to dial" once
- [rpc] [\#3159](https://github.com/tendermint/tendermint/issues/3159) Expose
`triggered_timeout_commit` in the `/dump_consensus_state`
### BUG FIXES:
- [consensus] [\#3199](https://github.com/tendermint/tendermint/issues/3199) Fix consensus halt with no empty blocks from not resetting triggeredTimeoutCommit
- [p2p] [\#2967](https://github.com/tendermint/tendermint/issues/2967) Fix file descriptor leak
## v0.29.0
*January 21, 2019*
Special thanks to external contributors on this release:
@bradyjoestar, @kunaldhariwal, @gauthamzz, @hrharder
This release is primarily about making some breaking changes to
the Block protocol version before Cosmos launch, and to fixing more issues
in the proposer selection algorithm discovered on Cosmos testnets.
The Block protocol changes include using a standard Merkle tree format (RFC 6962),
fixing some inconsistencies between field orders in Vote and Proposal structs,
and constraining the hash of the ConsensusParams to include only a few fields.
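For context on the Merkle change above: RFC 6962 domain-separates leaves from inner nodes with a one-byte prefix before hashing. A minimal Go sketch of that convention (illustrative only, not the `crypto/merkle` implementation itself):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// leafHash follows the RFC 6962 leaf rule: SHA256(0x00 || leaf).
func leafHash(leaf []byte) []byte {
	h := sha256.New()
	h.Write([]byte{0x00})
	h.Write(leaf)
	return h.Sum(nil)
}

// innerHash follows the RFC 6962 inner-node rule: SHA256(0x01 || left || right).
func innerHash(left, right []byte) []byte {
	h := sha256.New()
	h.Write([]byte{0x01})
	h.Write(left)
	h.Write(right)
	return h.Sum(nil)
}

func main() {
	l := leafHash([]byte("tx1"))
	r := leafHash([]byte("tx2"))
	fmt.Printf("root: %x\n", innerHash(l, r))
}
```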
The proposer selection algorithm saw significant progress,
including a [formal proof by @cwgoes for the base-case in Idris](https://github.com/cwgoes/tm-proposer-idris)
and a [much more detailed specification (still in progress) by
@ancazamfir](https://github.com/tendermint/tendermint/pull/3140).
Fixes to the proposer selection algorithm include normalizing the proposer
priorities to mitigate the effects of large changes to the validator set.
That said, we just discovered [another bug](https://github.com/tendermint/tendermint/issues/3181),
which will be fixed in the next breaking release.
While we are trying to stabilize the Block protocol to preserve compatibility
with old chains, there may be some final changes yet to come before Cosmos
launch as we continue to audit and test the software.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* CLI/RPC/Config
* Apps
- [state] [\#3049](https://github.com/tendermint/tendermint/issues/3049) Total voting power of the validator set is upper bounded by
`MaxInt64 / 8`. Apps must ensure they do not return changes to the validator
set that cause this maximum to be exceeded.
* Go API
- [node] [\#3082](https://github.com/tendermint/tendermint/issues/3082) MetricsProvider now requires you to pass a chain ID
- [types] [\#2713](https://github.com/tendermint/tendermint/issues/2713) Rename `TxProof.LeafHash` to `TxProof.Leaf`
- [crypto/merkle] [\#2713](https://github.com/tendermint/tendermint/issues/2713) `SimpleProof.Verify` takes a `leaf` instead of a
`leafHash` and performs the hashing itself
* Blockchain Protocol
* [crypto/merkle] [\#2713](https://github.com/tendermint/tendermint/issues/2713) Merkle trees now match the RFC 6962 specification
* [types] [\#3078](https://github.com/tendermint/tendermint/issues/3078) Re-order Timestamp and BlockID in CanonicalVote so it's
consistent with CanonicalProposal (BlockID comes
first)
* [types] [\#3165](https://github.com/tendermint/tendermint/issues/3165) Hash of ConsensusParams only includes BlockSize.MaxBytes and
BlockSize.MaxGas
* P2P Protocol
- [consensus] [\#3049](https://github.com/tendermint/tendermint/issues/3049) Normalize priorities to not exceed `2*TotalVotingPower` to mitigate unfair proposer selection
heavily preferring earlier joined validators in the case of an early bonded large validator unbonding
### FEATURES:
### IMPROVEMENTS:
- [rpc] [\#3065](https://github.com/tendermint/tendermint/issues/3065) Return maxPerPage (100), not defaultPerPage (30) if `per_page` is greater than the max 100.
- [instrumentation] [\#3082](https://github.com/tendermint/tendermint/issues/3082) Add `chain_id` label for all metrics
### BUG FIXES:
- [crypto] [\#3164](https://github.com/tendermint/tendermint/issues/3164) Update `btcd` fork for rare signRFC6979 bug
- [lite] [\#3171](https://github.com/tendermint/tendermint/issues/3171) Fix verifying large validator set changes
- [log] [\#3125](https://github.com/tendermint/tendermint/issues/3125) Fix year format
- [mempool] [\#3168](https://github.com/tendermint/tendermint/issues/3168) Limit tx size to fit in the max reactor msg size
- [scripts] [\#3147](https://github.com/tendermint/tendermint/issues/3147) Fix json2wal for large block parts (@bradyjoestar)
## v0.28.1
*January 18th, 2019*
Special thanks to external contributors on this release:
@HaoyangLiu
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BUG FIXES:
- [consensus] Fix consensus halt from proposing blocks with too much evidence
## v0.28.0
*January 16th, 2019*
Special thanks to external contributors on this release:
@fmauricios, @gianfelipe93, @husio, @needkane, @srmo, @yutianwu
This release is primarily about upgrades to the `privval` system -
separating the `priv_validator.json` into distinct config and data files, and
refactoring the socket validator to support reconnections.
**Note:** Please backup your existing `priv_validator.json` before using this
version.
See [UPGRADING.md](UPGRADING.md) for more details.
### BREAKING CHANGES:
* CLI/RPC/Config
- [cli] Removed `--proxy_app=dummy` option. Use `kvstore` (`persistent_kvstore`) instead.
- [cli] Renamed `--proxy_app=nilapp` to `--proxy_app=noop`.
- [config] [\#2992](https://github.com/tendermint/tendermint/issues/2992) `allow_duplicate_ip` is now set to false
- [privval] [\#1181](https://github.com/tendermint/tendermint/issues/1181) Split `priv_validator.json` into immutable (`config/priv_validator_key.json`) and mutable (`data/priv_validator_state.json`) parts (@yutianwu)
- [privval] [\#2926](https://github.com/tendermint/tendermint/issues/2926) Split up `PubKeyMsg` into `PubKeyRequest` and `PubKeyResponse` to be consistent with other message types
- [privval] [\#2923](https://github.com/tendermint/tendermint/issues/2923) Listen for unix socket connections instead of dialing them
* Apps
* Go API
- [types] [\#2981](https://github.com/tendermint/tendermint/issues/2981) Remove `PrivValidator.GetAddress()`
* Blockchain Protocol
* P2P Protocol
### FEATURES:
- [rpc] [\#3052](https://github.com/tendermint/tendermint/issues/3052) Include peer's remote IP in `/net_info`
### IMPROVEMENTS:
- [consensus] [\#3086](https://github.com/tendermint/tendermint/issues/3086) Log peerID on ignored votes (@srmo)
- [docs] [\#3061](https://github.com/tendermint/tendermint/issues/3061) Added specification for signing consensus msgs at
./docs/spec/consensus/signing.md
- [privval] [\#2948](https://github.com/tendermint/tendermint/issues/2948) Memoize pubkey so it's only requested once on startup
- [privval] [\#2923](https://github.com/tendermint/tendermint/issues/2923) Retry RemoteSigner connections on error
### BUG FIXES:
- [build] [\#3085](https://github.com/tendermint/tendermint/issues/3085) Fix `Version` field in build scripts (@husio)
- [crypto/multisig] [\#3102](https://github.com/tendermint/tendermint/issues/3102) Fix multisig keys address length
- [crypto/encoding] [\#3101](https://github.com/tendermint/tendermint/issues/3101) Fix `PubKeyMultisigThreshold` unmarshalling into `crypto.PubKey` interface
- [p2p/conn] [\#3111](https://github.com/tendermint/tendermint/issues/3111) Make SecretConnection thread safe
- [rpc] [\#3053](https://github.com/tendermint/tendermint/issues/3053) Fix internal error in `/tx_search` when results are empty
(@gianfelipe93)
- [types] [\#2926](https://github.com/tendermint/tendermint/issues/2926) Do not panic if retrieving the privval's public key fails
## v0.27.4
*December 21st, 2018*
### BUG FIXES:
- [mempool] [\#3036](https://github.com/tendermint/tendermint/issues/3036) Fix
LRU cache by popping the least recently used item when the cache is full,
not the most recently used one!
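The fix above is the standard LRU rule: when the cache is full, evict the entry at the least recently used end, not the most recently used one. A small, self-contained Go sketch of that eviction rule (not the mempool cache code itself):

```go
package main

import (
	"container/list"
	"fmt"
)

// lruCache keeps at most maxSize keys; the front of the list is the most
// recently used key and the back is the least recently used.
type lruCache struct {
	maxSize int
	items   map[string]*list.Element
	order   *list.List
}

func newLRU(maxSize int) *lruCache {
	return &lruCache{maxSize: maxSize, items: map[string]*list.Element{}, order: list.New()}
}

func (c *lruCache) Push(key string) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el) // refresh recency on re-insert
		return
	}
	if c.order.Len() >= c.maxSize {
		// Evict from the back: the least recently used key. Evicting from the
		// front would drop the most recently used key, which was the bug.
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(string))
	}
	c.items[key] = c.order.PushFront(key)
}

func main() {
	c := newLRU(2)
	c.Push("a")
	c.Push("b")
	c.Push("c") // evicts "a"
	_, stillThere := c.items["a"]
	fmt.Println(stillThere) // false
}
```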
## v0.27.3
*December 16th, 2018*
### BREAKING CHANGES:
* Go API
- [dep] [\#3027](https://github.com/tendermint/tendermint/issues/3027) Revert to mainline Go crypto library, eliminating the modified
`bcrypt.GenerateFromPassword`
## v0.27.2
*December 16th, 2018*
### IMPROVEMENTS:
- [node] [\#3025](https://github.com/tendermint/tendermint/issues/3025) Validate NodeInfo addresses on startup.
### BUG FIXES:
- [p2p] [\#3025](https://github.com/tendermint/tendermint/pull/3025) Revert to using defers in addrbook. Fixes deadlocks in pex and consensus upon invalid ExternalAddr/ListenAddr configuration.
## v0.27.1
*December 15th, 2018*
Special thanks to external contributors on this release:
@danil-lashin, @hleb-albau, @james-ray, @leo-xinwang
### FEATURES:
- [rpc] [\#2964](https://github.com/tendermint/tendermint/issues/2964) Add `UnconfirmedTxs(limit)` and `NumUnconfirmedTxs()` methods to HTTP/Local clients (@danil-lashin)
- [docs] [\#3004](https://github.com/tendermint/tendermint/issues/3004) Enable full-text search on docs pages
### IMPROVEMENTS:
- [consensus] [\#2971](https://github.com/tendermint/tendermint/issues/2971) Return error if ValidatorSet is empty after InitChain
(@leo-xinwang)
- [ci/cd] [\#3005](https://github.com/tendermint/tendermint/issues/3005) Updated CircleCI job to trigger website build when docs are updated
- [docs] Various updates
### BUG FIXES:
- [cmd] [\#2983](https://github.com/tendermint/tendermint/issues/2983) `testnet` command always sets `addr_book_strict = false`
- [config] [\#2980](https://github.com/tendermint/tendermint/issues/2980) Fix CORS options formatting
- [kv indexer] [\#2912](https://github.com/tendermint/tendermint/issues/2912) Don't ignore key when executing CONTAINS
- [mempool] [\#2961](https://github.com/tendermint/tendermint/issues/2961) Call `notifyTxsAvailable` if there are txs left after committing a block, but recheck=false
- [mempool] [\#2994](https://github.com/tendermint/tendermint/issues/2994) Reject txs with negative GasWanted
- [p2p] [\#2990](https://github.com/tendermint/tendermint/issues/2990) Fix a bug where seeds don't disconnect from a peer after 3h
- [consensus] [\#3006](https://github.com/tendermint/tendermint/issues/3006) Save state after InitChain only when stateHeight is also 0 (@james-ray)
## v0.27.0
*December 5th, 2018*
Special thanks to external contributors on this release:
@danil-lashin, @srmo
Special thanks to @dlguddus for discovering a [major
issue](https://github.com/tendermint/tendermint/issues/2718#issuecomment-440888677)
in the proposer selection algorithm.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
This release is primarily about fixes to the proposer selection algorithm
in preparation for the [Cosmos Game of
Stakes](https://blog.cosmos.network/the-game-of-stakes-is-open-for-registration-83a404746ee6).
It also makes use of the `ConsensusParams.Validator.PubKeyTypes` to restrict the
key types that can be used by validators, and removes the `Heartbeat` consensus
message.
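Restricting validator key types amounts to a whitelist check against `ConsensusParams.Validator.PubKeyTypes`. A hedged Go sketch of such a check (type and function names here are illustrative, not the exact Tendermint API):

```go
package main

import "fmt"

// validatorParams mirrors the idea of ConsensusParams.Validator: the list of
// permitted public key type names (e.g. "ed25519", "secp256k1").
type validatorParams struct {
	PubKeyTypes []string
}

// isAllowedPubKeyType reports whether a validator update uses a permitted key type.
func isAllowedPubKeyType(params validatorParams, keyType string) bool {
	for _, t := range params.PubKeyTypes {
		if t == keyType {
			return true
		}
	}
	return false
}

func main() {
	params := validatorParams{PubKeyTypes: []string{"ed25519"}}
	fmt.Println(isAllowedPubKeyType(params, "ed25519"))   // true
	fmt.Println(isAllowedPubKeyType(params, "secp256k1")) // false
}
```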
### BREAKING CHANGES:
* CLI/RPC/Config
- [rpc] [\#2932](https://github.com/tendermint/tendermint/issues/2932) Rename `accum` to `proposer_priority`
* Go API
- [db] [\#2913](https://github.com/tendermint/tendermint/pull/2913)
ReverseIterator API change: start < end, and end is exclusive.
- [types] [\#2932](https://github.com/tendermint/tendermint/issues/2932) Rename `Validator.Accum` to `Validator.ProposerPriority`
* Blockchain Protocol
- [state] [\#2714](https://github.com/tendermint/tendermint/issues/2714) Validators can now only use pubkeys allowed within
ConsensusParams.Validator.PubKeyTypes
* P2P Protocol
- [consensus] [\#2871](https://github.com/tendermint/tendermint/issues/2871)
Remove *ProposalHeartbeat* message as it serves no real purpose (@srmo)
- [state] Fixes for proposer selection:
- [\#2785](https://github.com/tendermint/tendermint/issues/2785) Accum for new validators is `-1.125*totalVotingPower` instead of 0
- [\#2941](https://github.com/tendermint/tendermint/issues/2941) val.Accum is preserved during ValidatorSet.Update to avoid being
reset to 0
### IMPROVEMENTS:
- [state] [\#2929](https://github.com/tendermint/tendermint/issues/2929) Minor refactor of updateState logic (@danil-lashin)
- [node] [\#2959](https://github.com/tendermint/tendermint/issues/2959) Allow node to start even if software's BlockProtocol is
different from state's BlockProtocol
- [pex] [\#2959](https://github.com/tendermint/tendermint/issues/2959) Pex reactor logger uses `module=pex`
### BUG FIXES:
- [p2p] [\#2968](https://github.com/tendermint/tendermint/issues/2968) Panic on transport error rather than continuing to run but not
accept new connections
- [p2p] [\#2969](https://github.com/tendermint/tendermint/issues/2969) Fix mismatch in peer count between `/net_info` and the prometheus
metrics
- [rpc] [\#2408](https://github.com/tendermint/tendermint/issues/2408) `/broadcast_tx_commit`: Fix "interface conversion: interface {} in nil, not EventDataTx" panic (could happen if somebody sent a tx using `/broadcast_tx_commit` while Tendermint was being stopped)
- [state] [\#2785](https://github.com/tendermint/tendermint/issues/2785) Fix accum for new validators to be `-1.125*totalVotingPower`
instead of 0, forcing them to wait before becoming the proposer. Also:
- do not batch clip
- keep accums averaged near 0
- [txindex/kv] [\#2925](https://github.com/tendermint/tendermint/issues/2925) Don't return false positives when range searching for a prefix of a tag value
- [types] [\#2938](https://github.com/tendermint/tendermint/issues/2938) Fix regression in v0.26.4 where we panic on empty
genDoc.Validators
- [types] [\#2941](https://github.com/tendermint/tendermint/issues/2941) Preserve val.Accum during ValidatorSet.Update to avoid it being
reset to 0 every time a validator is updated
## v0.26.4
*November 27th, 2018*
Special thanks to external contributors on this release:
@ackratos, @goolAdapter, @james-ray, @joe-bowman, @kostko,
@nagarajmanjunath, @tomtau
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### FEATURES:
- [rpc] [\#2747](https://github.com/tendermint/tendermint/issues/2747) Enable subscription to tags emitted from `BeginBlock`/`EndBlock` (@kostko)
- [types] [\#2747](https://github.com/tendermint/tendermint/issues/2747) Add `ResultBeginBlock` and `ResultEndBlock` fields to `EventDataNewBlock`
and `EventDataNewBlockHeader` to support subscriptions (@kostko)
- [types] [\#2918](https://github.com/tendermint/tendermint/issues/2918) Add Marshal, MarshalTo, Unmarshal methods to various structs
to support Protobuf compatibility (@nagarajmanjunath)
### IMPROVEMENTS:
- [config] [\#2877](https://github.com/tendermint/tendermint/issues/2877) Add `blocktime_iota` to the config.toml (@ackratos)
- NOTE: this should be a ConsensusParam, not part of the config, and will be
removed from the config at a later date
([\#2920](https://github.com/tendermint/tendermint/issues/2920)).
- [mempool] [\#2882](https://github.com/tendermint/tendermint/issues/2882) Add txs from Update to cache
- [mempool] [\#2891](https://github.com/tendermint/tendermint/issues/2891) Remove local int64 counter from being stored in every tx
- [node] [\#2866](https://github.com/tendermint/tendermint/issues/2866) Add ability to instantiate IPCVal (@joe-bowman)
### BUG FIXES:
- [blockchain] [\#2731](https://github.com/tendermint/tendermint/issues/2731) Retry both blocks if either is bad to avoid getting stuck during fast sync (@goolAdapter)
- [consensus] [\#2893](https://github.com/tendermint/tendermint/issues/2893) Use genDoc.Validators instead of state.NextValidators on replay when appHeight==0 (@james-ray)
- [log] [\#2868](https://github.com/tendermint/tendermint/issues/2868) Fix `module=main` setting overriding all others
- NOTE: this changes the default logging behaviour to be much less verbose.
Set `log_level="info"` to restore the previous behaviour.
- [rpc] [\#2808](https://github.com/tendermint/tendermint/issues/2808) Fix `accum` field in `/validators` by calling `IncrementAccum` if necessary
- [rpc] [\#2811](https://github.com/tendermint/tendermint/issues/2811) Allow integer IDs in JSON-RPC requests (@tomtau)
- [txindex/kv] [\#2759](https://github.com/tendermint/tendermint/issues/2759) Fix tx.height range queries
- [txindex/kv] [\#2775](https://github.com/tendermint/tendermint/issues/2775) Order tx results by index if height is the same
- [txindex/kv] [\#2908](https://github.com/tendermint/tendermint/issues/2908) Don't return false positives when searching for a prefix of a tag value
## v0.26.3
*November 17th, 2018*
Special thanks to external contributors on this release:
@danil-lashin, @kevlubkcm, @krhubert, @srmo
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* Go API
- [rpc] [\#2791](https://github.com/tendermint/tendermint/issues/2791) Functions that start HTTP servers are now blocking:
- Impacts `StartHTTPServer`, `StartHTTPAndTLSServer`, and `StartGRPCServer`
- These functions now take a `net.Listener` instead of an address
- [rpc] [\#2767](https://github.com/tendermint/tendermint/issues/2767) Subscribing to events
`NewRound` and `CompleteProposal` returns new types `EventDataNewRound` and
`EventDataCompleteProposal`, respectively, instead of the generic `EventDataRoundState`. (@kevlubkcm)
### FEATURES:
- [log] [\#2843](https://github.com/tendermint/tendermint/issues/2843) New `log_format` config option, which can be set to 'plain' for colored
text or 'json' for JSON output
- [types] [\#2767](https://github.com/tendermint/tendermint/issues/2767) New event types EventDataNewRound (with ProposerInfo) and EventDataCompleteProposal (with BlockID). (@kevlubkcm)
### IMPROVEMENTS:
- [dep] [\#2844](https://github.com/tendermint/tendermint/issues/2844) Dependencies are no longer pinned to an exact version in the
Gopkg.toml:
- Serialization libs are allowed to vary by patch release
- Other libs are allowed to vary by minor release
- [p2p] [\#2857](https://github.com/tendermint/tendermint/issues/2857) "Send failed" is logged at debug level instead of error.
- [rpc] [\#2780](https://github.com/tendermint/tendermint/issues/2780) Add read and write timeouts to HTTP servers
- [state] [\#2848](https://github.com/tendermint/tendermint/issues/2848) Make "Update to validators" msg value pretty (@danil-lashin)
### BUG FIXES:
- [consensus] [\#2819](https://github.com/tendermint/tendermint/issues/2819) Don't send proposalHeartbeat if not a validator
- [docs] [\#2859](https://github.com/tendermint/tendermint/issues/2859) Fix ConsensusParams details in spec
- [libs/autofile] [\#2760](https://github.com/tendermint/tendermint/issues/2760) Comment out autofile permissions check - should fix
running Tendermint on Windows
- [p2p] [\#2869](https://github.com/tendermint/tendermint/issues/2869) Set connection config properly instead of always using default
- [p2p/pex] [\#2802](https://github.com/tendermint/tendermint/issues/2802) Seed mode fixes:
- Only disconnect from inbound peers
- Use FlushStop instead of Sleep to ensure all messages are sent before
disconnecting
## v0.26.2
*November 15th, 2018*
Special thanks to external contributors on this release: @hleb-albau, @zhuzeyu
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### FEATURES:
- [rpc] [\#2582](https://github.com/tendermint/tendermint/issues/2582) Enable CORS on RPC API (@hleb-albau)
### BUG FIXES:
- [abci] [\#2748](https://github.com/tendermint/tendermint/issues/2748) Unlock mutex in localClient so that even when the app panics (e.g. during CheckTx), consensus continues working
- [abci] [\#2748](https://github.com/tendermint/tendermint/issues/2748) Fix DATA RACE in localClient
- [amino] [\#2822](https://github.com/tendermint/tendermint/issues/2822) Update to v0.14.1 to support compiling on 32-bit platforms
- [rpc] [\#2748](https://github.com/tendermint/tendermint/issues/2748) Drain channel before calling Unsubscribe(All) in `/broadcast_tx_commit`
## v0.26.1
*November 11, 2018*
Special thanks to external contributors on this release: @katakonst
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### IMPROVEMENTS:
- [consensus] [\#2704](https://github.com/tendermint/tendermint/issues/2704) Simplify valid POL round logic
- [docs] [\#2749](https://github.com/tendermint/tendermint/issues/2749) Deduplicate some ABCI docs
- [mempool] More detailed log messages
- [\#2724](https://github.com/tendermint/tendermint/issues/2724)
- [\#2762](https://github.com/tendermint/tendermint/issues/2762)
### BUG FIXES:
- [autofile] [\#2703](https://github.com/tendermint/tendermint/issues/2703) Do not panic when checking Head size
- [crypto/merkle] [\#2756](https://github.com/tendermint/tendermint/issues/2756) Fix crypto/merkle ProofOperators.Verify to check bounds on keypath parts.
- [mempool] fix a bug where we create a WAL despite `wal_dir` being empty
- [p2p] [\#2771](https://github.com/tendermint/tendermint/issues/2771) Fix `peer-id` label name to `peer_id` in prometheus metrics
- [p2p] [\#2797](https://github.com/tendermint/tendermint/pull/2797) Fix IDs in peer NodeInfo and require them for addresses
in AddressBook
- [p2p] [\#2797](https://github.com/tendermint/tendermint/pull/2797) Do not close conn immediately after sending pex addrs in seed mode. Partial fix for [\#2092](https://github.com/tendermint/tendermint/issues/2092).
## v0.26.0
*November 2, 2018*
Special thanks to external contributors on this release:
@bradyjoestar, @connorwstein, @goolAdapter, @HaoyangLiu,
@james-ray, @overbool, @phymbert, @Slamper, @Uzair1995, @yutianwu.
Special thanks to @Slamper for a series of bug reports in our [bug bounty
program](https://hackerone.com/tendermint) which are fixed in this release.
This release is primarily about adding Version fields to various data structures,
optimizing consensus messages for signing and verification in
restricted environments (like HSMs and the Ethereum Virtual Machine), and
aligning the consensus code with the [specification](https://arxiv.org/abs/1807.04938).
It also includes our first take at a generalized merkle proof system, and
changes the length of hashes used for hashing data structures from 20 to 32
bytes.
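Concretely, the later changelog entries describe TMHASH moving from a SHA256 digest truncated to 20 bytes to the full 32-byte digest, while addresses stay 20 bytes. A minimal Go illustration of the difference (not the `crypto/tmhash` package itself):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	data := []byte("some header field bytes")
	full := sha256.Sum256(data)

	fmt.Printf("32-byte hash: %x\n", full[:])   // length used from v0.26.0 on
	fmt.Printf("20-byte hash: %x\n", full[:20]) // truncated length used before
}
```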
See the [UPGRADING.md](UPGRADING.md#v0.26.0) for details on upgrading to the new
version.
Please note that we are still making breaking changes to the protocols.
While the new Version fields should help us to keep the software backwards compatible
even while upgrading the protocols, we cannot guarantee that new releases will
be compatible with old chains just yet. We expect there will be another breaking
release or two before the Cosmos Hub launch, but we will otherwise be paying
increasing attention to backwards compatibility. Thanks for bearing with us!
### BREAKING CHANGES:
* CLI/RPC/Config
* [config] [\#2232](https://github.com/tendermint/tendermint/issues/2232) Timeouts are now strings like "3s" and "100ms", not ints
* [config] [\#2505](https://github.com/tendermint/tendermint/issues/2505) Remove Mempool.RecheckEmpty (it was effectively useless anyways)
* [config] [\#2490](https://github.com/tendermint/tendermint/issues/2490) `mempool.wal` is disabled by default
* [privval] [\#2459](https://github.com/tendermint/tendermint/issues/2459) Split `SocketPVMsg`s implementations into Request and Response, where the Response may contain an error message (returned by the remote signer)
* [state] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Add Version field to State, breaking the format of State as
encoded on disk.
* [rpc] [\#2298](https://github.com/tendermint/tendermint/issues/2298) `/abci_query` takes `prove` argument instead of `trusted` and switches the default
behaviour to `prove=false`
* [rpc] [\#2654](https://github.com/tendermint/tendermint/issues/2654) Remove all `node_info.other.*_version` fields in `/status` and
`/net_info`
* [rpc] [\#2636](https://github.com/tendermint/tendermint/issues/2636) Remove
`_params` suffix from fields in `consensus_params`.
* Apps
* [abci] [\#2298](https://github.com/tendermint/tendermint/issues/2298) ResponseQuery.Proof is now a structured merkle.Proof, not just
arbitrary bytes
* [abci] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Add Version to Header and shift all fields by one
* [abci] [\#2662](https://github.com/tendermint/tendermint/issues/2662) Bump the field numbers for some `ResponseInfo` fields to make room for
`AppVersion`
* [abci] [\#2636](https://github.com/tendermint/tendermint/issues/2636) Updates to ConsensusParams
* Remove `Params` suffix from field names
* Add `Params` suffix to message types
* Add new field and type, `Validator ValidatorParams`, to control what types of validator keys are allowed.
* Go API
* [config] [\#2232](https://github.com/tendermint/tendermint/issues/2232) Timeouts are time.Duration, not ints
* [crypto/merkle & lite] [\#2298](https://github.com/tendermint/tendermint/issues/2298) Various changes to accommodate General Merkle trees
* [crypto/merkle] [\#2595](https://github.com/tendermint/tendermint/issues/2595) Remove all Hasher objects in favor of byte slices
* [crypto/merkle] [\#2635](https://github.com/tendermint/tendermint/issues/2635) merkle.SimpleHashFromTwoHashes is no longer exported
* [node] [\#2479](https://github.com/tendermint/tendermint/issues/2479) Remove node.RunForever
* [rpc/client] [\#2298](https://github.com/tendermint/tendermint/issues/2298) `ABCIQueryOptions.Trusted` -> `ABCIQueryOptions.Prove`
* [types] [\#2298](https://github.com/tendermint/tendermint/issues/2298) Remove `Index` and `Total` fields from `TxProof`.
* [types] [\#2598](https://github.com/tendermint/tendermint/issues/2598)
`VoteTypeXxx` are now of type `SignedMsgType byte` and named `XxxType`, eg.
`PrevoteType`, `PrecommitType`.
* [types] [\#2636](https://github.com/tendermint/tendermint/issues/2636) Rename fields in ConsensusParams to remove `Params` suffixes
* [types] [\#2735](https://github.com/tendermint/tendermint/issues/2735) Simplify Proposal message to align with spec
* Blockchain Protocol
* [crypto/tmhash] [\#2732](https://github.com/tendermint/tendermint/issues/2732) TMHASH is now full 32-byte SHA256
* All hashes in the block header and Merkle trees are now 32-bytes
* PubKey Addresses are still only 20-bytes
* [state] [\#2587](https://github.com/tendermint/tendermint/issues/2587) Require block.Time of the first block to be genesis time
* [state] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Require block.Version to match state.Version
* [types] Update SignBytes for `Vote`/`Proposal`/`Heartbeat`:
* [\#2459](https://github.com/tendermint/tendermint/issues/2459) Use amino encoding instead of JSON in `SignBytes`.
* [\#2598](https://github.com/tendermint/tendermint/issues/2598) Reorder fields and use fixed sized encoding.
* [\#2598](https://github.com/tendermint/tendermint/issues/2598) Change `Type` field from `string` to `byte` and use new
`SignedMsgType` to enumerate.
* [types] [\#2730](https://github.com/tendermint/tendermint/issues/2730) Use
same order for fields in `Vote` as in the SignBytes
* [types] [\#2732](https://github.com/tendermint/tendermint/issues/2732) Remove the address field from the validator hash
* [types] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Add Version struct to Header
* [types] [\#2609](https://github.com/tendermint/tendermint/issues/2609) ConsensusParams.Hash() is the hash of the amino encoded
struct instead of the Merkle tree of the fields
* [types] [\#2670](https://github.com/tendermint/tendermint/issues/2670) Header.Hash() builds Merkle tree out of fields in the same
order they appear in the header, instead of sorting by field name
* [types] [\#2682](https://github.com/tendermint/tendermint/issues/2682) Use proto3 `varint` encoding for ints that are usually unsigned (instead of zigzag encoding).
* [types] [\#2636](https://github.com/tendermint/tendermint/issues/2636) Add Validator field to ConsensusParams
(Used to control which pubkey types validators can use, by abci type).
* P2P Protocol
* [consensus] [\#2652](https://github.com/tendermint/tendermint/issues/2652)
Replace `CommitStepMessage` with `NewValidBlockMessage`
* [consensus] [\#2735](https://github.com/tendermint/tendermint/issues/2735) Simplify `Proposal` message to align with spec
* [consensus] [\#2730](https://github.com/tendermint/tendermint/issues/2730)
Add `Type` field to `Proposal` and use same order of fields as in the
SignBytes for both `Proposal` and `Vote`
* [p2p] [\#2654](https://github.com/tendermint/tendermint/issues/2654) Add `ProtocolVersion` struct with protocol versions to top of
DefaultNodeInfo and require `ProtocolVersion.Block` to match during peer handshake
### FEATURES:
- [abci] [\#2557](https://github.com/tendermint/tendermint/issues/2557) Add `Codespace` field to `Response{CheckTx, DeliverTx, Query}`
- [abci] [\#2662](https://github.com/tendermint/tendermint/issues/2662) Add `BlockVersion` and `P2PVersion` to `RequestInfo`
- [crypto/merkle] [\#2298](https://github.com/tendermint/tendermint/issues/2298) General Merkle Proof scheme for chaining various types of Merkle trees together
- [docs/architecture] [\#1181](https://github.com/tendermint/tendermint/issues/1181) Split immutable and mutable parts of priv_validator.json
### IMPROVEMENTS:
- Additional Metrics
- [consensus] [\#2169](https://github.com/cosmos/cosmos-sdk/issues/2169)
- [p2p] [\#2169](https://github.com/cosmos/cosmos-sdk/issues/2169)
- [config] [\#2232](https://github.com/tendermint/tendermint/issues/2232) Added ValidateBasic method, which performs basic checks
- [crypto/ed25519] [\#2558](https://github.com/tendermint/tendermint/issues/2558) Switch to use latest `golang.org/x/crypto` through our fork at
github.com/tendermint/crypto
- [libs/log] [\#2707](https://github.com/tendermint/tendermint/issues/2707) Add year to log format (@yutianwu)
- [tools] [\#2238](https://github.com/tendermint/tendermint/issues/2238) Binary dependencies are now locked to a specific git commit
### BUG FIXES:
- [\#2711](https://github.com/tendermint/tendermint/issues/2711) Validate all incoming reactor messages. Fixes various bugs due to negative ints.
- [autofile] [\#2428](https://github.com/tendermint/tendermint/issues/2428) Group.RotateFile needs to call Flush() before rename (@goolAdapter)
- [common] [\#2533](https://github.com/tendermint/tendermint/issues/2533) Fixed a bug in the `BitArray.Or` method
- [common] [\#2506](https://github.com/tendermint/tendermint/issues/2506) Fixed a bug in the `BitArray.Sub` method (@james-ray)
- [common] [\#2534](https://github.com/tendermint/tendermint/issues/2534) Fix `BitArray.PickRandom` to choose uniformly from true bits
- [consensus] [\#1690](https://github.com/tendermint/tendermint/issues/1690) Wait for
timeoutPrecommit before starting next round
- [consensus] [\#1745](https://github.com/tendermint/tendermint/issues/1745) Wait for
Proposal or timeoutProposal before entering prevote
- [consensus] [\#2642](https://github.com/tendermint/tendermint/issues/2642) Only propose ValidBlock, not LockedBlock
- [consensus] [\#2642](https://github.com/tendermint/tendermint/issues/2642) Initialized ValidRound and LockedRound to -1
- [consensus] [\#1637](https://github.com/tendermint/tendermint/issues/1637) Limit the amount of evidence that can be included in a
block
- [consensus] [\#2652](https://github.com/tendermint/tendermint/issues/2652) Ensure valid block property with faulty proposer
- [evidence] [\#2515](https://github.com/tendermint/tendermint/issues/2515) Fix db iter leak (@goolAdapter)
- [libs/event] [\#2518](https://github.com/tendermint/tendermint/issues/2518) Fix event concurrency flaw (@goolAdapter)
- [node] [\#2434](https://github.com/tendermint/tendermint/issues/2434) Make node respond to signal interrupts while sleeping for genesis time
- [state] [\#2616](https://github.com/tendermint/tendermint/issues/2616) Pass nil to NewValidatorSet() when genesis file's Validators field is nil
- [p2p] [\#2555](https://github.com/tendermint/tendermint/issues/2555) Fix p2p switch FlushThrottle value (@goolAdapter)
- [p2p] [\#2668](https://github.com/tendermint/tendermint/issues/2668) Reconnect to originally dialed address (not self-reported address) for persistent peers
## v0.25.0
*September 22, 2018*
@@ -729,8 +164,8 @@ BUG FIXES:
*August 22nd, 2018*
BUG FIXES:
- [libs/autofile] [\#2261](https://github.com/tendermint/tendermint/issues/2261) Fix log rotation so it actually happens.
- Fixes issues with consensus WAL growing unbounded ala [\#2259](https://github.com/tendermint/tendermint/issues/2259)
- [libs/autofile] \#2261 Fix log rotation so it actually happens.
- Fixes issues with consensus WAL growing unbounded ala \#2259
## 0.23.0

View File

@@ -1,23 +1,107 @@
## v0.30.0
# Pending
*TBD*
## v0.26.0
*October 19, 2018*
Special thanks to external contributors on this release:
@bradyjoestar, @connorwstein, @goolAdapter, @HaoyangLiu,
@james-ray, @overbool, @phymbert, @Slamper, @Uzair1995
### BREAKING CHANGES:
This release is primarily about adding Version fields to various data structures,
optimizing consensus messages for signing and verification in
restricted environments (like HSMs and the Ethereum Virtual Machine), and
aligning the consensus code with the [specification](https://arxiv.org/abs/1807.04938).
It also includes our first take at a generalized merkle proof system.
See the [UPGRADING.md](UPGRADING.md#v0.26.0) for details on upgrading to the new
version.
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
BREAKING CHANGES:
* CLI/RPC/Config
* [config] [\#2232](https://github.com/tendermint/tendermint/issues/2232) timeouts as time.Duration, not ints
* [config] [\#2505](https://github.com/tendermint/tendermint/issues/2505) Remove Mempool.RecheckEmpty (it was effectively useless anyways)
* [config] [\#2490](https://github.com/tendermint/tendermint/issues/2490) `mempool.wal` is disabled by default
* [privval] [\#2459](https://github.com/tendermint/tendermint/issues/2459) Split `SocketPVMsg`s implementations into Request and Response, where the Response may contain an error message (returned by the remote signer)
* [state] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Add Version field to State, breaking the format of State as
encoded on disk.
* [rpc] [\#2298](https://github.com/tendermint/tendermint/issues/2298) `/abci_query` takes `prove` argument instead of `trusted` and switches the default
behaviour to `prove=false`
* [rpc] [\#2654](https://github.com/tendermint/tendermint/issues/2654) Remove all `node_info.other.*_version` fields in `/status` and
`/net_info`
* Apps
* [abci] [\#2298](https://github.com/tendermint/tendermint/issues/2298) ResponseQuery.Proof is now a structured merkle.Proof, not just
arbitrary bytes
* [abci] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Add Version to Header and shift all fields by one
* [abci] [\#2662](https://github.com/tendermint/tendermint/issues/2662) Bump the field numbers for some `ResponseInfo` fields to make room for
`AppVersion`
* Go API
* [config] [\#2232](https://github.com/tendermint/tendermint/issues/2232) timeouts as time.Duration, not ints
* [crypto/merkle & lite] [\#2298](https://github.com/tendermint/tendermint/issues/2298) Various changes to accommodate General Merkle trees
* [crypto/merkle] [\#2595](https://github.com/tendermint/tendermint/issues/2595) Remove all Hasher objects in favor of byte slices
* [crypto/merkle] [\#2635](https://github.com/tendermint/tendermint/issues/2635) merkle.SimpleHashFromTwoHashes is no longer exported
* [node] [\#2479](https://github.com/tendermint/tendermint/issues/2479) Remove node.RunForever
* [rpc/client] [\#2298](https://github.com/tendermint/tendermint/issues/2298) `ABCIQueryOptions.Trusted` -> `ABCIQueryOptions.Prove`
* [types] [\#2298](https://github.com/tendermint/tendermint/issues/2298) Remove `Index` and `Total` fields from `TxProof`.
* [types] [\#2598](https://github.com/tendermint/tendermint/issues/2598) `VoteTypeXxx` are now of type `SignedMsgType byte` and named `XxxType`, eg. `PrevoteType`,
`PrecommitType`.
* Blockchain Protocol
* [types] Update SignBytes for `Vote`/`Proposal`/`Heartbeat`:
* [\#2459](https://github.com/tendermint/tendermint/issues/2459) Use amino encoding instead of JSON in `SignBytes`.
* [\#2598](https://github.com/tendermint/tendermint/issues/2598) Reorder fields and use fixed sized encoding.
* [\#2598](https://github.com/tendermint/tendermint/issues/2598) Change `Type` field from `string` to `byte` and use new
`SignedMsgType` to enumerate.
* [types] [\#2512](https://github.com/tendermint/tendermint/issues/2512) Remove the pubkey field from the validator hash
* [types] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Add Version struct to Header
* [types] [\#2609](https://github.com/tendermint/tendermint/issues/2609) ConsensusParams.Hash() is the hash of the amino encoded
struct instead of the Merkle tree of the fields
* [state] [\#2587](https://github.com/tendermint/tendermint/issues/2587) Require block.Time of the first block to be genesis time
* [state] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Require block.Version to match state.Version
* [types] [\#2670](https://github.com/tendermint/tendermint/issues/2670) Header.Hash() builds Merkle tree out of fields in the same
order they appear in the header, instead of sorting by field name
* P2P Protocol
* [p2p] [\#2654](https://github.com/tendermint/tendermint/issues/2654) Add `ProtocolVersion` struct with protocol versions to top of
DefaultNodeInfo and require `ProtocolVersion.Block` to match during peer handshake
### FEATURES:
FEATURES:
- [abci] [\#2557](https://github.com/tendermint/tendermint/issues/2557) Add `Codespace` field to `Response{CheckTx, DeliverTx, Query}`
- [abci] [\#2662](https://github.com/tendermint/tendermint/issues/2662) Add `BlockVersion` and `P2PVersion` to `RequestInfo`
- [crypto/merkle] [\#2298](https://github.com/tendermint/tendermint/issues/2298) General Merkle Proof scheme for chaining various types of Merkle trees together
### IMPROVEMENTS:
IMPROVEMENTS:
- Additional Metrics
- [consensus] [\#2169](https://github.com/cosmos/cosmos-sdk/issues/2169)
- [p2p] [\#2169](https://github.com/cosmos/cosmos-sdk/issues/2169)
- [config] [\#2232](https://github.com/tendermint/tendermint/issues/2232) Added ValidateBasic method, which performs basic checks
- [crypto/ed25519] [\#2558](https://github.com/tendermint/tendermint/issues/2558) Switch to use latest `golang.org/x/crypto` through our fork at
github.com/tendermint/crypto
- [tools] [\#2238](https://github.com/tendermint/tendermint/issues/2238) Binary dependencies are now locked to a specific git commit
BUG FIXES:
- [autofile] [\#2428](https://github.com/tendermint/tendermint/issues/2428) Group.RotateFile needs to call Flush() before rename (@goolAdapter)
- [common] [\#2533](https://github.com/tendermint/tendermint/issues/2533) Fixed a bug in the `BitArray.Or` method
- [common] [\#2506](https://github.com/tendermint/tendermint/issues/2506) Fixed a bug in the `BitArray.Sub` method (@james-ray)
- [common] [\#2534](https://github.com/tendermint/tendermint/issues/2534) Fix `BitArray.PickRandom` to choose uniformly from true bits
- [consensus] [\#1690](https://github.com/tendermint/tendermint/issues/1690) Wait for
timeoutPrecommit before starting next round
- [consensus] [\#1745](https://github.com/tendermint/tendermint/issues/1745) Wait for
Proposal or timeoutProposal before entering prevote
- [consensus] [\#2642](https://github.com/tendermint/tendermint/issues/2642) Only propose ValidBlock, not LockedBlock
- [consensus] [\#2642](https://github.com/tendermint/tendermint/issues/2642) Initialized ValidRound and LockedRound to -1
- [consensus] [\#1637](https://github.com/tendermint/tendermint/issues/1637) Limit the amount of evidence that can be included in a
block
- [evidence] [\#2515](https://github.com/tendermint/tendermint/issues/2515) Fix db iter leak (@goolAdapter)
- [libs/event] [\#2518](https://github.com/tendermint/tendermint/issues/2518) Fix event concurrency flaw (@goolAdapter)
- [node] [\#2434](https://github.com/tendermint/tendermint/issues/2434) Make node respond to signal interrupts while sleeping for genesis time
- [state] [\#2616](https://github.com/tendermint/tendermint/issues/2616) Pass nil to NewValidatorSet() when genesis file's Validators field is nil
- [p2p] [\#2555](https://github.com/tendermint/tendermint/issues/2555) Fix p2p switch FlushThrottle value (@goolAdapter)
- [p2p] [\#2668](https://github.com/tendermint/tendermint/issues/2668) Reconnect to originally dialed address (not self-reported
address) for persistent peers
### BUG FIXES:

View File

@@ -6,7 +6,7 @@ This code of conduct applies to all projects run by the Tendermint/COSMOS team a
# Conduct
## Contact: conduct@tendermint.com
## Contact: adrian@tendermint.com
* We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar characteristic.

View File

@@ -27,8 +27,8 @@ Of course, replace `ebuchman` with your git handle.
To pull in updates from the origin repo, run
* `git fetch upstream`
* `git rebase upstream/master` (or whatever branch you want)
* `git fetch upstream`
* `git rebase upstream/master` (or whatever branch you want)
Please don't make Pull Requests to `master`.
@@ -50,11 +50,6 @@ as apps, tools, and the core, should use dep.
Run `dep status` to get a list of vendor dependencies that may not be
up-to-date.
When updating dependencies, please only update the particular dependencies you
need. Instead of running `dep ensure -update`, which will update anything,
specify exactly the dependency you want to update, eg.
`dep ensure -update github.com/tendermint/go-amino`.
## Vagrant
If you are a [Vagrant](https://www.vagrantup.com/) user, you can get started
@@ -69,74 +64,43 @@ vagrant ssh
make test
```
## Changelog
## Testing
Every fix, improvement, feature, or breaking change should be made in a
pull-request that includes an update to the `CHANGELOG_PENDING.md` file.
All repos should be hooked up to [CircleCI](https://circleci.com/).
Changelog entries should be formatted as follows:
```
- [module] \#xxx Some description about the change (@contributor)
```
Here, `module` is the part of the code that changed (typically a
top-level Go package), `xxx` is the pull-request number, and `contributor`
is the author/s of the change.
It's also acceptable for `xxx` to refer to the relevant issue number, but pull-request
numbers are preferred.
Note this means pull-requests should be opened first so the changelog can then
be updated with the pull-request's number.
There is no need to include the full link, as this will be added
automatically during release. But please include the backslash and pound, eg. `\#2313`.
Changelog entries should be ordered alphabetically according to the
`module`, and numerically according to the pull-request number.
Changes with multiple classifications should be doubly included (eg. a bug fix
that is also a breaking change should be recorded under both).
Breaking changes are further subdivided according to the APIs/users they impact.
Any change that affects multiple APIs/users should be recorded multiply - for
instance, a change to the `Blockchain Protocol` that removes a field from the
header should also be recorded under `CLI/RPC/Config` since the field will be
removed from the header in rpc responses as well.
If they have `.go` files in the root directory, they will be automatically
tested by circle using `go test -v -race ./...`. If not, they will need a
`circle.yml`. Ideally, every repo has a `Makefile` that defines `make test` and
includes its continuous integration status using a badge in the `README.md`.
## Branching Model and Release
All repos should adhere to the branching model: http://nvie.com/posts/a-successful-git-branching-model/.
This means that all pull-requests should be made against develop. Any merge to
master constitutes a tagged release.
User-facing repos should adhere to the branching model: http://nvie.com/posts/a-successful-git-branching-model/.
That is, these repos should be well versioned, and any merge to master requires a version bump and tagged release.
Libraries need not follow the model strictly, but would be wise to,
especially `go-p2p` and `go-rpc`, as their versions are referenced in tendermint core.
### Development Procedure:
- the latest state of development is on `develop`
- `develop` must never fail `make test`
- never --force onto `develop` (except when reverting a broken commit, which should seldom happen)
- no --force onto `develop` (except when reverting a broken commit, which should seldom happen)
- create a development branch either on github.com/tendermint/tendermint, or your fork (using `git remote add origin`)
- make changes and update the `CHANGELOG_PENDING.md` to record your change
- before submitting a pull request, run `git rebase` on top of the latest `develop`
- before submitting a pull request, begin `git rebase` on top of `develop`
### Pull Merge Procedure:
- ensure pull branch is based on a recent develop
- ensure pull branch is rebased on develop
- run `make test` to ensure that all tests pass
- merge pull request
- the `unstable` branch may be used to aggregate pull merges before fixing tests
- the `unstable` branch may be used to aggregate pull merges before testing once
- push master may request that pull requests be rebased on top of `unstable`
### Release Procedure:
- start on `develop`
- run integration tests (see `test_integrations` in Makefile)
- prepare changelog:
- copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
- run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all issues
- run `bash ./scripts/authors.sh` to get a list of authors since the latest
release, and add the github aliases of external contributors to the top of
the changelog. To lookup an alias from an email, try `bash
./scripts/authors.sh <email>`
- reset the `CHANGELOG_PENDING.md`
- prepare changelog/release issue
- bump versions
- push to release/vX.X.X to run the extended integration tests on the CI
- push to release-vX.X.X to run the extended integration tests on the CI
- merge to master
- merge master back to develop
@@ -151,13 +115,3 @@ master constitutes a tagged release.
- merge hotfix-vX.X.X to master
- merge hotfix-vX.X.X to develop
- delete the hotfix-vX.X.X branch
## Testing
All repos should be hooked up to [CircleCI](https://circleci.com/).
If they have `.go` files in the root directory, they will be automatically
tested by circle using `go test -v -race ./...`. If not, they will need a
`circle.yml`. Ideally, every repo has a `Makefile` that defines `make test` and
includes its continuous integration status using a badge in the `README.md`.

View File

@@ -3,7 +3,7 @@ set -e
# Get the tag from the version, or try to figure it out.
if [ -z "$TAG" ]; then
TAG=$(awk -F\" '/TMCoreSemVer =/ { print $2; exit }' < ../version/version.go)
TAG=$(awk -F\" '/Version =/ { print $2; exit }' < ../version/version.go)
fi
if [ -z "$TAG" ]; then
echo "Please specify a tag."

View File

@@ -3,7 +3,7 @@ set -e
# Get the tag from the version, or try to figure it out.
if [ -z "$TAG" ]; then
TAG=$(awk -F\" '/TMCoreSemVer =/ { print $2; exit }' < ../version/version.go)
TAG=$(awk -F\" '/Version =/ { print $2; exit }' < ../version/version.go)
fi
if [ -z "$TAG" ]; then
echo "Please specify a tag."

Gopkg.lock (generated), 51 changed lines
View File

@@ -35,6 +35,13 @@
revision = "8991bc29aa16c548c550c7ff78260e27b9ab7c73"
version = "v1.1.1"
[[projects]]
digest = "1:c7644c73a3d23741fdba8a99b1464e021a224b7e205be497271a8003a15ca41b"
name = "github.com/ebuchman/fail-test"
packages = ["."]
pruneopts = "UT"
revision = "95f809107225be108efcf10a3509e4ea6ceef3c4"
[[projects]]
digest = "1:544229a3ca0fb2dd5ebc2896d3d2ff7ce096d9751635301e44e37e761349ee70"
name = "github.com/fortytw2/leaktest"
@@ -218,16 +225,14 @@
version = "v1.0.0"
[[projects]]
digest = "1:26663fafdea73a38075b07e8e9d82fc0056379d2be8bb4e13899e8fda7c7dd23"
digest = "1:c1a04665f9613e082e1209cf288bf64f4068dcd6c87a64bf1c4ff006ad422ba0"
name = "github.com/prometheus/client_golang"
packages = [
"prometheus",
"prometheus/internal",
"prometheus/promhttp",
]
pruneopts = "UT"
revision = "abad2d1bd44235a26707c172eab6bca5bf2dbad3"
version = "v0.9.1"
revision = "ae27198cdd90bf12cd134ad79d1366a6cf49f632"
[[projects]]
branch = "master"
@@ -269,14 +274,6 @@
pruneopts = "UT"
revision = "e2704e165165ec55d062f5919b4b29494e9fa790"
[[projects]]
digest = "1:b0c25f00bad20d783d259af2af8666969e2fc343fa0dc9efe52936bbd67fb758"
name = "github.com/rs/cors"
packages = ["."]
pruneopts = "UT"
revision = "9a47f48565a795472d43519dd49aac781f3034fb"
version = "v1.6.0"
[[projects]]
digest = "1:6a4a11ba764a56d2758899ec6f3848d24698d48442ebce85ee7a3f63284526cd"
name = "github.com/spf13/afero"
@@ -289,12 +286,12 @@
version = "v1.1.2"
[[projects]]
digest = "1:08d65904057412fc0270fc4812a1c90c594186819243160dc779a402d4b6d0bc"
digest = "1:516e71bed754268937f57d4ecb190e01958452336fa73dbac880894164e91c1f"
name = "github.com/spf13/cast"
packages = ["."]
pruneopts = "UT"
revision = "8c9545af88b134710ab1cd196795e7f2388358d7"
version = "v1.3.0"
revision = "8965335b8c7107321228e3e3702cab9832751bac"
version = "v1.2.0"
[[projects]]
digest = "1:7ffc0983035bc7e297da3688d9fe19d60a420e9c38bef23f845c53788ed6a05e"
@@ -361,23 +358,22 @@
revision = "6b91fda63f2e36186f1c9d0e48578defb69c5d43"
[[projects]]
digest = "1:83f5e189eea2baad419a6a410984514266ff690075759c87e9ede596809bd0b8"
digest = "1:605b6546f3f43745695298ec2d342d3e952b6d91cdf9f349bea9315f677d759f"
name = "github.com/tendermint/btcd"
packages = ["btcec"]
pruneopts = "UT"
revision = "80daadac05d1cd29571fccf27002d79667a88b58"
version = "v0.1.1"
revision = "e5840949ff4fff0c56f9b6a541e22b63581ea9df"
[[projects]]
digest = "1:ad9c4c1a4e7875330b1f62906f2830f043a23edb5db997e3a5ac5d3e6eadf80a"
digest = "1:5f52e817b6c9d52ddba70dece0ea31134d82a52c05bce98fbc739ab2a832df28"
name = "github.com/tendermint/go-amino"
packages = ["."]
pruneopts = "UT"
revision = "dc14acf9ef15f85828bfbc561ed9dd9d2a284885"
version = "v0.14.1"
revision = "cb07448b240918aa8d8df4505153549b86b77134"
version = "v0.13.0"
[[projects]]
digest = "1:00d2b3e64cdc3fa69aa250dfbe4cc38c4837d4f37e62279be2ae52107ffbbb44"
digest = "1:72b71e3a29775e5752ed7a8012052a3dee165e27ec18cedddae5288058f09acf"
name = "golang.org/x/crypto"
packages = [
"bcrypt",
@@ -398,7 +394,8 @@
"salsa20/salsa",
]
pruneopts = "UT"
revision = "505ab145d0a99da450461ae2c1a9f6cd10d1f447"
revision = "3764759f34a542a3aef74d6b02e35be7ab893bba"
source = "github.com/tendermint/crypto"
[[projects]]
digest = "1:d36f55a999540d29b6ea3c2ea29d71c76b1d9853fdcd3e5c5cb4836f2ba118f1"
@@ -418,14 +415,14 @@
[[projects]]
branch = "master"
digest = "1:6f86e2f2e2217cd4d74dec6786163cf80e4d2b99adb341ecc60a45113b844dca"
digest = "1:d1da39c9bac61327dbef1d8ef9f210425e99fd2924b6fb5f0bc587a193353637"
name = "golang.org/x/sys"
packages = [
"cpu",
"unix",
]
pruneopts = "UT"
revision = "7e31e0c00fa05cb5fbf4347b585621d6709e19a4"
revision = "8a28ead16f52c8aaeffbf79239b251dfdf6c4f96"
[[projects]]
digest = "1:a2ab62866c75542dd18d2b069fec854577a20211d7c0ea6ae746072a1dccdd18"
@@ -456,7 +453,7 @@
name = "google.golang.org/genproto"
packages = ["googleapis/rpc/status"]
pruneopts = "UT"
revision = "b69ba1387ce2108ac9bc8e8e5e5a46e7d5c72313"
revision = "94acd270e44e65579b9ee3cdab25034d33fed608"
[[projects]]
digest = "1:2dab32a43451e320e49608ff4542fdfc653c95dcc35d0065ec9c6c3dd540ed74"
@@ -506,6 +503,7 @@
input-imports = [
"github.com/btcsuite/btcutil/base58",
"github.com/btcsuite/btcutil/bech32",
"github.com/ebuchman/fail-test",
"github.com/fortytw2/leaktest",
"github.com/go-kit/kit/log",
"github.com/go-kit/kit/log/level",
@@ -526,7 +524,6 @@
"github.com/prometheus/client_golang/prometheus",
"github.com/prometheus/client_golang/prometheus/promhttp",
"github.com/rcrowley/go-metrics",
"github.com/rs/cors",
"github.com/spf13/cobra",
"github.com/spf13/viper",
"github.com/stretchr/testify/assert",

View File

@@ -20,64 +20,53 @@
# unused-packages = true
#
###########################################################
# NOTE: All packages should be pinned to specific versions.
# Packages without releases must pin to a commit.
# Allow only patch releases for serialization libraries
[[constraint]]
name = "github.com/tendermint/go-amino"
version = "~0.14.1"
name = "github.com/go-kit/kit"
version = "=0.6.0"
[[constraint]]
name = "github.com/gogo/protobuf"
version = "~1.1.1"
version = "=1.1.1"
[[constraint]]
name = "github.com/golang/protobuf"
version = "~1.1.0"
# Allow only minor releases for other libraries
[[constraint]]
name = "github.com/go-kit/kit"
version = "^0.6.0"
version = "=1.1.0"
[[constraint]]
name = "github.com/gorilla/websocket"
version = "^1.2.0"
[[constraint]]
name = "github.com/rs/cors"
version = "^1.6.0"
version = "=1.2.0"
[[constraint]]
name = "github.com/pkg/errors"
version = "^0.8.0"
version = "=0.8.0"
[[constraint]]
name = "github.com/spf13/cobra"
version = "^0.0.1"
version = "=0.0.1"
[[constraint]]
name = "github.com/spf13/viper"
version = "^1.0.0"
version = "=1.0.0"
[[constraint]]
name = "github.com/stretchr/testify"
version = "^1.2.1"
version = "=1.2.1"
[[constraint]]
name = "github.com/tendermint/go-amino"
version = "v0.13.0"
[[constraint]]
name = "google.golang.org/grpc"
version = "^1.13.0"
version = "=1.13.0"
[[constraint]]
name = "github.com/fortytw2/leaktest"
version = "^1.2.0"
[[constraint]]
name = "github.com/prometheus/client_golang"
version = "^0.9.1"
[[constraint]]
name = "github.com/tendermint/btcd"
version = "v0.1.1"
version = "=1.2.0"
###################################
## Some repos don't have releases.
@@ -85,17 +74,30 @@
[[constraint]]
name = "golang.org/x/crypto"
revision = "505ab145d0a99da450461ae2c1a9f6cd10d1f447"
source = "github.com/tendermint/crypto"
revision = "3764759f34a542a3aef74d6b02e35be7ab893bba"
[[override]]
name = "github.com/jmhodges/levigo"
revision = "c42d9e0ca023e2198120196f842701bb4c55d7b9"
[[constraint]]
name = "github.com/ebuchman/fail-test"
revision = "95f809107225be108efcf10a3509e4ea6ceef3c4"
# last revision used by go-crypto
[[constraint]]
name = "github.com/btcsuite/btcutil"
revision = "d4cc87b860166d00d6b5b9e0d3b3d71d6088d4d4"
[[constraint]]
name = "github.com/tendermint/btcd"
revision = "e5840949ff4fff0c56f9b6a541e22b63581ea9df"
# Haven't made a release since 2016.
[[constraint]]
name = "github.com/prometheus/client_golang"
revision = "ae27198cdd90bf12cd134ad79d1366a6cf49f632"
[[constraint]]
name = "github.com/rcrowley/go-metrics"

View File

@@ -17,6 +17,7 @@ all: check build test install
check: check_tools get_vendor_deps
########################################
### Build Tendermint
@@ -32,13 +33,10 @@ build_race:
install:
CGO_ENABLED=0 go install $(BUILD_FLAGS) -tags $(BUILD_TAGS) ./cmd/tendermint
install_c:
CGO_ENABLED=1 go install $(BUILD_FLAGS) -tags "$(BUILD_TAGS) gcc" ./cmd/tendermint
########################################
### Protobuf
protoc_all: protoc_libs protoc_merkle protoc_abci protoc_grpc protoc_proto3types
protoc_all: protoc_libs protoc_merkle protoc_abci protoc_grpc
%.pb.go: %.proto
## If you get the following error,
@@ -54,8 +52,6 @@ protoc_all: protoc_libs protoc_merkle protoc_abci protoc_grpc protoc_proto3types
# see protobuf section above
protoc_abci: abci/types/types.pb.go
protoc_proto3types: types/proto3/block.pb.go
build_abci:
@go build -i ./abci/cmd/...
@@ -81,8 +77,6 @@ check_tools:
get_tools:
@echo "--> Installing tools"
./scripts/get_tools.sh
get_dev_tools:
@echo "--> Downloading linters (this may take awhile)"
$(GOPATH)/src/github.com/alecthomas/gometalinter/scripts/install.sh -b $(GOBIN)
@@ -292,7 +286,8 @@ build-linux:
GOOS=linux GOARCH=amd64 $(MAKE) build
build-docker-localnode:
@cd networks/local && make
cd networks/local
make
# Run a 4-node testnet locally
localnet-start: localnet-stop
@@ -330,4 +325,4 @@ build-slate:
# To avoid unintended conflicts with file names, always add to .PHONY
# unless there is a reason not to.
# https://www.gnu.org/software/make/manual/html_node/Phony-Targets.html
.PHONY: check build build_race build_abci dist install install_abci check_dep check_tools get_tools get_dev_tools update_tools get_vendor_deps draw_deps get_protoc protoc_abci protoc_libs gen_certs clean_certs grpc_dbserver test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt rpc-docs build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop build-slate protoc_grpc protoc_all build_c install_c
.PHONY: check build build_race build_abci dist install install_abci check_dep check_tools get_tools update_tools get_vendor_deps draw_deps get_protoc protoc_abci protoc_libs gen_certs clean_certs grpc_dbserver test_cover test_apps test_persistence test_p2p test test_race test_integrations test_release test100 vagrant_test fmt rpc-docs build-linux localnet-start localnet-stop build-docker build-docker-localnode sentry-start sentry-config sentry-stop build-slate protoc_grpc protoc_all

README.md

@@ -1,8 +1,8 @@
# Tendermint
[Byzantine-Fault Tolerant](https://en.wikipedia.org/wiki/Byzantine_fault_tolerance)
[State Machines](https://en.wikipedia.org/wiki/State_machine_replication).
Or [Blockchain](https://en.wikipedia.org/wiki/Blockchain_(database)), for short.
[State Machine Replication](https://en.wikipedia.org/wiki/State_machine_replication).
Or [Blockchain](https://en.wikipedia.org/wiki/Blockchain_(database)) for short.
[![version](https://img.shields.io/github/tag/tendermint/tendermint.svg)](https://github.com/tendermint/tendermint/releases/latest)
[![API Reference](
@@ -36,12 +36,12 @@ We are also still making breaking changes to the protocol and the APIs.
Thus, we tag the releases as *alpha software*.
In any case, if you intend to run Tendermint in production,
please [contact us](mailto:partners@tendermint.com) and [join the chat](https://riot.im/app/#/room/#tendermint:matrix.org).
please [contact us](https://riot.im/app/#/room/#tendermint:matrix.org) :)
## Security
To report a security vulnerability, see our [bug bounty
program](https://hackerone.com/tendermint)
program](https://tendermint.com/security).
For examples of the kinds of bugs we're looking for, see [SECURITY.md](SECURITY.md)
@@ -49,43 +49,61 @@ For examples of the kinds of bugs we're looking for, see [SECURITY.md](SECURITY.
Requirement|Notes
---|---
Go version | Go1.11.4 or higher
Go version | Go1.10 or higher
## Documentation
Complete documentation can be found on the [website](https://tendermint.com/docs/).
### Install
## Install
See the [install instructions](/docs/introduction/install.md)
### Quick Start
## Quick Start
- [Single node](/docs/introduction/quick-start.md)
- [Local cluster using docker-compose](/docs/networks/docker-compose.md)
- [Single node](/docs/tendermint-core/using-tendermint.md)
- [Local cluster using docker-compose](/networks/local)
- [Remote cluster using terraform and ansible](/docs/networks/terraform-and-ansible.md)
- [Join the Cosmos testnet](https://cosmos.network/testnet)
## Resources
### Tendermint Core
For details about the blockchain data structures and the p2p protocols, see the
the [Tendermint specification](/docs/spec).
For details on using the software, see the [documentation](/docs/) which is also
hosted at: https://tendermint.com/docs/
### Tools
Benchmarking and monitoring is provided by `tm-bench` and `tm-monitor`, respectively.
Their code is found [here](/tools) and these binaries need to be built separately.
Additional documentation is found [here](/docs/tools).
### Sub-projects
* [Amino](http://github.com/tendermint/go-amino), a reflection-based improvement on proto3
* [IAVL](http://github.com/tendermint/iavl), Merkleized IAVL+ Tree implementation
### Applications
* [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a cryptocurrency application framework
* [Ethermint](http://github.com/cosmos/ethermint); Ethereum on Tendermint
* [Many more](https://tendermint.com/ecosystem)
### Research
* [Master's Thesis on Tendermint](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769)
* [Original Whitepaper](https://tendermint.com/static/docs/tendermint.pdf)
* [Blog](https://blog.cosmos.network/tendermint/home)
## Contributing
Please abide by the [Code of Conduct](CODE_OF_CONDUCT.md) in all interactions,
and the [contributing guidelines](CONTRIBUTING.md) when submitting code.
Join the larger community on the [forum](https://forum.cosmos.network/) and the [chat](https://riot.im/app/#/room/#tendermint:matrix.org).
To learn more about the structure of the software, watch the [Developer
Sessions](https://www.youtube.com/playlist?list=PLdQIb0qr3pnBbG5ZG-0gr3zM86_s8Rpqv)
and read some [Architectural
Decision Records](https://github.com/tendermint/tendermint/tree/master/docs/architecture).
Learn more by reading the code and comparing it to the
[specification](https://github.com/tendermint/tendermint/tree/develop/docs/spec).
Yay open source! Please see our [contributing guidelines](CONTRIBUTING.md).
## Versioning
### Semantic Versioning
### SemVer
Tendermint uses [Semantic Versioning](http://semver.org/) to determine when and how the version changes.
Tendermint uses [SemVer](http://semver.org/) to determine when and how the version changes.
According to SemVer, anything in the public API can change at any time before version 1.0.0
To provide some stability to Tendermint users in these 0.X.X days, the MINOR version is used
@@ -122,40 +140,8 @@ data into the new chain.
However, any bump in the PATCH version should be compatible with existing histories
(if not please open an [issue](https://github.com/tendermint/tendermint/issues)).
For more information on upgrading, see [UPGRADING.md](./UPGRADING.md)
For more information on upgrading, see [here](./UPGRADING.md)
## Resources
### Tendermint Core
For details about the blockchain data structures and the p2p protocols, see the
[Tendermint specification](/docs/spec).
For details on using the software, see the [documentation](/docs/) which is also
hosted at: https://tendermint.com/docs/
### Tools
Benchmarking and monitoring is provided by `tm-bench` and `tm-monitor`, respectively.
Their code is found [here](/tools) and these binaries need to be built separately.
Additional documentation is found [here](/docs/tools).
### Sub-projects
* [Amino](http://github.com/tendermint/go-amino), reflection-based proto3, with
interfaces
* [IAVL](http://github.com/tendermint/iavl), Merkleized IAVL+ Tree implementation
### Applications
* [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a cryptocurrency application framework
* [Ethermint](http://github.com/cosmos/ethermint); Ethereum on Tendermint
* [Many more](https://tendermint.com/ecosystem)
### Research
* [The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938)
* [Master's Thesis on Tendermint](https://atrium.lib.uoguelph.ca/xmlui/handle/10214/9769)
* [Original Whitepaper](https://tendermint.com/static/docs/tendermint.pdf)
* [Blog](https://blog.cosmos.network/tendermint/home)
## Code of Conduct
Please read, understand and adhere to our [code of conduct](CODE_OF_CONDUCT.md).


@@ -1,8 +1,7 @@
# Security
As part of our [Coordinated Vulnerability Disclosure
Policy](https://tendermint.com/security), we operate a [bug
bounty](https://hackerone.com/tendermint).
Policy](https://tendermint.com/security), we operate a bug bounty.
See the policy for more details on submissions and rewards.
Here is a list of examples of the kinds of bugs we're most interested in:


@@ -3,121 +3,9 @@
This guide provides steps to be followed when you upgrade your applications to
a newer version of Tendermint Core.
## v0.29.0
This release contains some breaking changes to the block and p2p protocols,
and will not be compatible with any previous versions of the software, primarily
due to changes in how various data structures are hashed.
Any implementations of Tendermint blockchain verification, including lite clients,
will need to be updated. For specific details:
- [Merkle tree](./docs/spec/blockchain/encoding.md#merkle-trees)
- [ConsensusParams](./docs/spec/blockchain/state.md#consensusparams)
There was also a small change to field ordering in the vote struct. Any
implementations of an out-of-process validator (like a Key-Management Server)
will need to be updated. For specific details:
- [Vote](https://github.com/tendermint/tendermint/blob/develop/docs/spec/consensus/signing.md#votes)
Finally, the proposer selection algorithm continues to evolve. See the
[work-in-progress
specification](https://github.com/tendermint/tendermint/pull/3140).
For everything else, please see the [CHANGELOG](./CHANGELOG.md#v0.29.0).
## v0.28.0
This release breaks the format for the `priv_validator.json` file
and the protocol used for the external validator process.
It is compatible with v0.27.0 blockchains (neither the BlockProtocol nor the
P2PProtocol have changed).
Please read carefully for details about upgrading.
**Note:** Backup your `config/priv_validator.json`
before proceeding.
### `priv_validator.json`
The `config/priv_validator.json` is now two files:
`config/priv_validator_key.json` and `data/priv_validator_state.json`.
The former contains the key material, the latter contains the details on the last
message signed.
When running v0.28.0 for the first time, it will back up any pre-existing
`priv_validator.json` file and proceed to split it into the two new files.
Upgrading should happen automatically without problem.
To upgrade manually, use the provided `privValUpgrade.go` script, with exact paths for the old
`priv_validator.json` and the locations for the two new files. It's recommended
to use the default paths, of `config/priv_validator_key.json` and
`data/priv_validator_state.json`, respectively:
```
go run scripts/privValUpgrade.go <old-path> <new-key-path> <new-state-path>
```
### External validator signers
The Unix and TCP implementations of the remote signing validator
have been consolidated into a single implementation.
Thus in both cases, the external process is expected to dial
Tendermint. This is different from how Unix sockets used to work, where
Tendermint dialed the external process.
The `PubKeyMsg` was also split into separate `Request` and `Response` types
for consistency with other messages.
Note that the TCP sockets don't yet use a persistent key,
so while they're encrypted, they can't yet be properly authenticated.
See [#3105](https://github.com/tendermint/tendermint/issues/3105).
Note the Unix socket has neither encryption nor authentication, but will
add a shared-secret in [#3099](https://github.com/tendermint/tendermint/issues/3099).
## v0.27.0
This release contains some breaking changes to the block and p2p protocols,
but does not change any core data structures, so it should be compatible with
existing blockchains from the v0.26 series that only used Ed25519 validator keys.
Blockchains using Secp256k1 for validators will not be compatible. This is due
to the fact that we now enforce which key types validators can use as a
consensus param. The default is Ed25519, and Secp256k1 must be activated
explicitly.
It is recommended to upgrade all nodes at once to avoid incompatibilities at the
peer layer - namely, the heartbeat consensus message has been removed (only
relevant if `create_empty_blocks=false` or `create_empty_blocks_interval > 0`),
and the proposer selection algorithm has changed. Since proposer information is
never included in the blockchain, this change only affects the peer layer.
### Go API Changes
#### libs/db
The ReverseIterator API has changed the meaning of `start` and `end`.
Before, iteration was from `start` to `end`, where
`start > end`. Now, iteration is from `end` to `start`, where `start < end`.
The iterator also excludes `end`. This change allows a simplified and more
intuitive logic, aligning the semantic meaning of `start` and `end` in the
`Iterator` and `ReverseIterator`.
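For illustration, a minimal sketch of the new semantics, assuming the `libs/db` `DB` and `Iterator` interfaces (the helper and range values here are hypothetical, not taken from the release):

```golang
import dbm "github.com/tendermint/tendermint/libs/db"

// reverseKeys walks the half-open range [start, end) in descending order:
// iteration begins just below end and stops at start; end itself is excluded.
func reverseKeys(db dbm.DB, start, end []byte) [][]byte {
	it := db.ReverseIterator(start, end)
	defer it.Close()
	var keys [][]byte
	for ; it.Valid(); it.Next() {
		keys = append(keys, it.Key())
	}
	return keys
}
```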
### Applications
This release enforces a new consensus parameter, the
ValidatorParams.PubKeyTypes. Applications must ensure that they only return
validator updates with the allowed PubKeyTypes. If a validator update includes a
pubkey type that is not included in the ConsensusParams.Validator.PubKeyTypes,
block execution will fail and the consensus will halt.
By default, only Ed25519 pubkeys may be used for validators. Enabling
Secp256k1 requires explicit modification of the ConsensusParams.
Please update your application accordingly (ie. restrict validators to only be
able to use Ed25519 keys, or explicitly add additional key types to the genesis
file).
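For example, enabling Secp256k1 means extending the allowed pubkey types in the consensus params used for the genesis file. A sketch, assuming the `types.ConsensusParams` structure and ABCI pubkey-type constants of this release (verify the exact names against your version):

```golang
import "github.com/tendermint/tendermint/types"

// consensusParamsWithSecp256k1 returns params that accept both key types.
// ABCIPubKeyTypeEd25519 ("ed25519") is the default; adding
// ABCIPubKeyTypeSecp256k1 ("secp256k1") opts in to Secp256k1 validators.
func consensusParamsWithSecp256k1() *types.ConsensusParams {
	params := types.DefaultConsensusParams()
	params.Validator.PubKeyTypes = []string{
		types.ABCIPubKeyTypeEd25519,
		types.ABCIPubKeyTypeSecp256k1,
	}
	return params
}
```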
## v0.26.0
This release contains a lot of changes to core data types and protocols. It is not
New 0.26.0 release contains a lot of changes to core data types. It is not
compatible with the old versions, and there is no straightforward way to update
old data to be compatible with the new version.
@@ -145,7 +33,7 @@ to `prove`. To get proofs with your queries, ensure you set `prove=true`.
Various version fields like `amino_version`, `p2p_version`, `consensus_version`,
and `rpc_version` have been removed from the `node_info.other` and are
consolidated under the tendermint semantic version (ie. `node_info.version`) and
the new `block` and `p2p` protocol versions under `node_info.protocol_version`.
the new `block` and `p2p` protocol versions under `node_info.protocol_version`..
### ABCI Changes
@@ -157,7 +45,7 @@ protobuf file for these changes.
The `ResponseQuery.Proof` field is now structured as a `[]ProofOp` to support
generalized Merkle tree constructions where the leaves of one Merkle tree are
the root of another. If you don't need this functionality, and you used to
the root of another. If you don't need this functionaluty, and you used to
return `<proof bytes>` here, you should instead return a single `ProofOp` with
just the `Data` field set:
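A minimal sketch of that single-`ProofOp` response (field names assumed from the `abci/types` and `crypto/merkle` packages; `MyApp` and `app.store.GetWithProof` are hypothetical stand-ins for your own application and store):

```golang
import (
	"github.com/tendermint/tendermint/abci/types"
	"github.com/tendermint/tendermint/crypto/merkle"
)

// Query wraps the raw proof bytes in a single ProofOp with only Data set.
func (app *MyApp) Query(req types.RequestQuery) types.ResponseQuery {
	value, proofBytes := app.store.GetWithProof(req.Data) // hypothetical accessor
	return types.ResponseQuery{
		Key:   req.Data,
		Value: value,
		Proof: &merkle.Proof{
			Ops: []merkle.ProofOp{{Data: proofBytes}},
		},
	}
}
```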
@@ -179,7 +67,7 @@ For more information, see:
### Go API Changes
#### crypto/merkle
#### crypto.merkle
The `merkle.Hasher` interface was removed. Functions which used to take `Hasher`
now simply take `[]byte`. This means that any objects being Merklized should be
@@ -191,10 +79,6 @@ The `node.RunForever` function was removed. Signal handling and running forever
should instead be explicitly configured by the caller. See how we do it
[here](https://github.com/tendermint/tendermint/blob/30519e8361c19f4bf320ef4d26288ebc621ad725/cmd/tendermint/commands/run_node.go#L60).
### Other
All hashes, except for public key addresses, are now 32-bytes.
## v0.25.0
This release has minimal impact.

Vagrantfile

@@ -53,6 +53,6 @@ Vagrant.configure("2") do |config|
# get all deps and tools, ready to install/test
su - vagrant -c 'source /home/vagrant/.bash_profile'
su - vagrant -c 'cd /home/vagrant/go/src/github.com/tendermint/tendermint && make get_tools && make get_dev_tools && make get_vendor_deps'
su - vagrant -c 'cd /home/vagrant/go/src/github.com/tendermint/tendermint && make get_tools && make get_vendor_deps'
SHELL
end


@@ -1,5 +1,7 @@
# Application BlockChain Interface (ABCI)
[![CircleCI](https://circleci.com/gh/tendermint/abci.svg?style=svg)](https://circleci.com/gh/tendermint/abci)
Blockchains are systems for multi-master state machine replication.
**ABCI** is an interface that defines the boundary between the replication engine (the blockchain),
and the state machine (the application).
@@ -10,28 +12,160 @@ Previously, the ABCI was referred to as TMSP.
The community has provided a number of additional implementations; see the [Tendermint Ecosystem](https://tendermint.com/ecosystem)
## Installation & Usage
To get up and running quickly, see the [getting started guide](../docs/app-dev/getting-started.md) along with the [abci-cli documentation](../docs/app-dev/abci-cli.md) which will go through the examples found in the [examples](./example/) directory.
## Specification
A detailed description of the ABCI methods and message types is contained in:
- [The main spec](../docs/spec/abci/abci.md)
- [A protobuf file](./types/types.proto)
- [A Go interface](./types/application.go)
- [A prose specification](specification.md)
- [A protobuf file](https://github.com/tendermint/tendermint/blob/master/abci/types/types.proto)
- [A Go interface](https://github.com/tendermint/tendermint/blob/master/abci/types/application.go).
For more background information on ABCI, motivations, and tendermint, please visit [the documentation](https://tendermint.com/docs/).
The two guides to focus on are the `Application Development Guide` and `Using ABCI-CLI`.
## Protocol Buffers
To compile the protobuf file, run (from the root of the repo):
To compile the protobuf file, run:
```
make protoc_abci
cd $GOPATH/src/github.com/tendermint/tendermint/; make protoc_abci
```
See `protoc --help` and [the Protocol Buffers site](https://developers.google.com/protocol-buffers)
for details on compiling for other languages. Note we also include a [GRPC](https://www.grpc.io/docs)
for details on compiling for other languages. Note we also include a [GRPC](http://www.grpc.io/docs)
service definition.
## Install ABCI-CLI
The `abci-cli` is a simple tool for debugging ABCI servers and running some
example apps. To install it:
```
mkdir -p $GOPATH/src/github.com/tendermint
cd $GOPATH/src/github.com/tendermint
git clone https://github.com/tendermint/tendermint.git
cd tendermint
make get_tools
make get_vendor_deps
make install_abci
```
## Implementation
We provide three implementations of the ABCI in Go:
- Golang in-process
- ABCI-socket
- GRPC
Note the GRPC version is maintained primarily to simplify onboarding and prototyping and is not receiving the same
attention to security and performance as the others
### In Process
The simplest implementation just uses function calls within Go.
This means ABCI applications written in Golang can be compiled with TendermintCore and run as a single binary.
See the [examples](#examples) below for more information.
### Socket (TSP)
ABCI is best implemented as a streaming protocol.
The socket implementation provides for asynchronous, ordered message passing over unix or tcp.
Messages are serialized using Protobuf3 and length-prefixed with a [signed Varint](https://developers.google.com/protocol-buffers/docs/encoding?csw=1#signed-integers)
For example, if the Protobuf3 encoded ABCI message is `0xDEADBEEF` (4 bytes), the length-prefixed message is `0x08DEADBEEF`, since `0x08` is the signed varint
encoding of `4`. If the Protobuf3 encoded ABCI message is 65535 bytes long, the length-prefixed message would be like `0xFEFF07...`.
Note the benefit of using this `varint` encoding over the old version (where integers were encoded as `<len of len><big endian len>`) is that
it is the standard way to encode integers in Protobuf. It is also generally shorter.
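As a rough illustration of that framing (a minimal sketch using Go's standard `encoding/binary` signed-varint helpers, not the actual Tendermint wire code):

```golang
package main

import (
	"encoding/binary"
	"fmt"
)

// prefixWithVarint prepends the signed-varint (zigzag) encoding of len(msg).
func prefixWithVarint(msg []byte) []byte {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutVarint(buf, int64(len(msg)))
	return append(buf[:n], msg...)
}

func main() {
	msg := []byte{0xDE, 0xAD, 0xBE, 0xEF}
	fmt.Printf("% X\n", prefixWithVarint(msg)) // prints: 08 DE AD BE EF
}
```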
### GRPC
GRPC is an rpc framework native to Protocol Buffers with support in many languages.
Implementing the ABCI using GRPC can allow for faster prototyping, but is expected to be much slower than
the ordered, asynchronous socket protocol. The implementation has also not received as much testing or review.
Note the length-prefixing used in the socket implementation does not apply for GRPC.
## Usage
The `abci-cli` tool wraps an ABCI client and can be used for probing/testing an ABCI server.
For instance, `abci-cli test` will run a test sequence against a listening server running the Counter application (see below).
It can also be used to run some example applications.
See [the documentation](https://tendermint.com/docs/) for more details.
### Examples
Check out the variety of example applications in the [example directory](example/).
It also contains the code referred to by the `counter` and `kvstore` apps; these apps come
built into the `abci-cli` binary.
#### Counter
The `abci-cli counter` application illustrates nonce checking in transactions. Its code looks like:
```golang
func cmdCounter(cmd *cobra.Command, args []string) error {
app := counter.NewCounterApplication(flagSerial)
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
// Start the listener
srv, err := server.NewServer(flagAddrC, flagAbci, app)
if err != nil {
return err
}
srv.SetLogger(logger.With("module", "abci-server"))
if err := srv.Start(); err != nil {
return err
}
// Wait forever
cmn.TrapSignal(func() {
// Cleanup
srv.Stop()
})
return nil
}
```
and can be found in [this file](cmd/abci-cli/abci-cli.go).
#### kvstore
The `abci-cli kvstore` application illustrates a simple key-value Merkle tree:
```golang
func cmdKVStore(cmd *cobra.Command, args []string) error {
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
// Create the application - in memory or persisted to disk
var app types.Application
if flagPersist == "" {
app = kvstore.NewKVStoreApplication()
} else {
app = kvstore.NewPersistentKVStoreApplication(flagPersist)
app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore"))
}
// Start the listener
srv, err := server.NewServer(flagAddrD, flagAbci, app)
if err != nil {
return err
}
srv.SetLogger(logger.With("module", "abci-server"))
if err := srv.Start(); err != nil {
return err
}
// Wait forever
cmn.TrapSignal(func() {
// Cleanup
srv.Stop()
})
return nil
}
```


@@ -105,8 +105,8 @@ func (reqRes *ReqRes) SetCallback(cb func(res *types.Response)) {
return
}
defer reqRes.mtx.Unlock()
reqRes.cb = cb
reqRes.mtx.Unlock()
}
func (reqRes *ReqRes) GetCallback() func(*types.Response) {


@@ -54,7 +54,7 @@ RETRY_LOOP:
if cli.mustConnect {
return err
}
cli.Logger.Error(fmt.Sprintf("abci.grpcClient failed to connect to %v. Retrying...\n", cli.addr), "err", err)
cli.Logger.Error(fmt.Sprintf("abci.grpcClient failed to connect to %v. Retrying...\n", cli.addr))
time.Sleep(time.Second * dialRetryIntervalSeconds)
continue RETRY_LOOP
}
@@ -111,8 +111,8 @@ func (cli *grpcClient) Error() error {
// NOTE: callback may get internally generated flush responses.
func (cli *grpcClient) SetResponseCallback(resCb Callback) {
cli.mtx.Lock()
defer cli.mtx.Unlock()
cli.resCb = resCb
cli.mtx.Unlock()
}
//----------------------------------------


@@ -9,13 +9,8 @@ import (
var _ Client = (*localClient)(nil)
// NOTE: use defer to unlock mutex because Application might panic (e.g., in
// case of malicious tx or query). It only makes sense for publicly exposed
// methods like CheckTx (/broadcast_tx_* RPC endpoint) or Query (/abci_query
// RPC endpoint), but defers are used everywhere for the sake of consistency.
type localClient struct {
cmn.BaseService
mtx *sync.Mutex
types.Application
Callback
@@ -35,8 +30,8 @@ func NewLocalClient(mtx *sync.Mutex, app types.Application) *localClient {
func (app *localClient) SetResponseCallback(cb Callback) {
app.mtx.Lock()
defer app.mtx.Unlock()
app.Callback = cb
app.mtx.Unlock()
}
// TODO: change types.Application to include Error()?
@@ -50,9 +45,6 @@ func (app *localClient) FlushAsync() *ReqRes {
}
func (app *localClient) EchoAsync(msg string) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
return app.callback(
types.ToRequestEcho(msg),
types.ToResponseEcho(msg),
@@ -61,9 +53,8 @@ func (app *localClient) EchoAsync(msg string) *ReqRes {
func (app *localClient) InfoAsync(req types.RequestInfo) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Info(req)
app.mtx.Unlock()
return app.callback(
types.ToRequestInfo(req),
types.ToResponseInfo(res),
@@ -72,9 +63,8 @@ func (app *localClient) InfoAsync(req types.RequestInfo) *ReqRes {
func (app *localClient) SetOptionAsync(req types.RequestSetOption) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.SetOption(req)
app.mtx.Unlock()
return app.callback(
types.ToRequestSetOption(req),
types.ToResponseSetOption(res),
@@ -83,9 +73,8 @@ func (app *localClient) SetOptionAsync(req types.RequestSetOption) *ReqRes {
func (app *localClient) DeliverTxAsync(tx []byte) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.DeliverTx(tx)
app.mtx.Unlock()
return app.callback(
types.ToRequestDeliverTx(tx),
types.ToResponseDeliverTx(res),
@@ -94,9 +83,8 @@ func (app *localClient) DeliverTxAsync(tx []byte) *ReqRes {
func (app *localClient) CheckTxAsync(tx []byte) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.CheckTx(tx)
app.mtx.Unlock()
return app.callback(
types.ToRequestCheckTx(tx),
types.ToResponseCheckTx(res),
@@ -105,9 +93,8 @@ func (app *localClient) CheckTxAsync(tx []byte) *ReqRes {
func (app *localClient) QueryAsync(req types.RequestQuery) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Query(req)
app.mtx.Unlock()
return app.callback(
types.ToRequestQuery(req),
types.ToResponseQuery(res),
@@ -116,9 +103,8 @@ func (app *localClient) QueryAsync(req types.RequestQuery) *ReqRes {
func (app *localClient) CommitAsync() *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Commit()
app.mtx.Unlock()
return app.callback(
types.ToRequestCommit(),
types.ToResponseCommit(res),
@@ -127,20 +113,19 @@ func (app *localClient) CommitAsync() *ReqRes {
func (app *localClient) InitChainAsync(req types.RequestInitChain) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.InitChain(req)
return app.callback(
reqRes := app.callback(
types.ToRequestInitChain(req),
types.ToResponseInitChain(res),
)
app.mtx.Unlock()
return reqRes
}
func (app *localClient) BeginBlockAsync(req types.RequestBeginBlock) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.BeginBlock(req)
app.mtx.Unlock()
return app.callback(
types.ToRequestBeginBlock(req),
types.ToResponseBeginBlock(res),
@@ -149,9 +134,8 @@ func (app *localClient) BeginBlockAsync(req types.RequestBeginBlock) *ReqRes {
func (app *localClient) EndBlockAsync(req types.RequestEndBlock) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.EndBlock(req)
app.mtx.Unlock()
return app.callback(
types.ToRequestEndBlock(req),
types.ToResponseEndBlock(res),
@@ -170,73 +154,64 @@ func (app *localClient) EchoSync(msg string) (*types.ResponseEcho, error) {
func (app *localClient) InfoSync(req types.RequestInfo) (*types.ResponseInfo, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Info(req)
app.mtx.Unlock()
return &res, nil
}
func (app *localClient) SetOptionSync(req types.RequestSetOption) (*types.ResponseSetOption, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.SetOption(req)
app.mtx.Unlock()
return &res, nil
}
func (app *localClient) DeliverTxSync(tx []byte) (*types.ResponseDeliverTx, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.DeliverTx(tx)
app.mtx.Unlock()
return &res, nil
}
func (app *localClient) CheckTxSync(tx []byte) (*types.ResponseCheckTx, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.CheckTx(tx)
app.mtx.Unlock()
return &res, nil
}
func (app *localClient) QuerySync(req types.RequestQuery) (*types.ResponseQuery, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Query(req)
app.mtx.Unlock()
return &res, nil
}
func (app *localClient) CommitSync() (*types.ResponseCommit, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Commit()
app.mtx.Unlock()
return &res, nil
}
func (app *localClient) InitChainSync(req types.RequestInitChain) (*types.ResponseInitChain, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.InitChain(req)
app.mtx.Unlock()
return &res, nil
}
func (app *localClient) BeginBlockSync(req types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.BeginBlock(req)
app.mtx.Unlock()
return &res, nil
}
func (app *localClient) EndBlockSync(req types.RequestEndBlock) (*types.ResponseEndBlock, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.EndBlock(req)
app.mtx.Unlock()
return &res, nil
}


@@ -67,7 +67,7 @@ RETRY_LOOP:
if cli.mustConnect {
return err
}
cli.Logger.Error(fmt.Sprintf("abci.socketClient failed to connect to %v. Retrying...", cli.addr), "err", err)
cli.Logger.Error(fmt.Sprintf("abci.socketClient failed to connect to %v. Retrying...", cli.addr))
time.Sleep(time.Second * dialRetryIntervalSeconds)
continue RETRY_LOOP
}
@@ -118,8 +118,8 @@ func (cli *socketClient) Error() error {
// NOTE: callback may get internally generated flush responses.
func (cli *socketClient) SetResponseCallback(resCb Callback) {
cli.mtx.Lock()
defer cli.mtx.Unlock()
cli.resCb = resCb
cli.mtx.Unlock()
}
//----------------------------------------


@@ -58,7 +58,7 @@ var RootCmd = &cobra.Command{
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
switch cmd.Use {
case "counter", "kvstore": // for the examples apps, don't pre-run
case "counter", "kvstore", "dummy": // for the examples apps, don't pre-run
return nil
case "version": // skip running for version command
return nil
@@ -127,6 +127,10 @@ func addCounterFlags() {
counterCmd.PersistentFlags().BoolVarP(&flagSerial, "serial", "", false, "enforce incrementing (serial) transactions")
}
func addDummyFlags() {
dummyCmd.PersistentFlags().StringVarP(&flagPersist, "persist", "", "", "directory to use for a database")
}
func addKVStoreFlags() {
kvstoreCmd.PersistentFlags().StringVarP(&flagPersist, "persist", "", "", "directory to use for a database")
}
@@ -148,6 +152,10 @@ func addCommands() {
// examples
addCounterFlags()
RootCmd.AddCommand(counterCmd)
// deprecated, left for backwards compatibility
addDummyFlags()
RootCmd.AddCommand(dummyCmd)
// replaces dummy, see issue #196
addKVStoreFlags()
RootCmd.AddCommand(kvstoreCmd)
}
@@ -283,6 +291,18 @@ var counterCmd = &cobra.Command{
},
}
// deprecated, left for backwards compatibility
var dummyCmd = &cobra.Command{
Use: "dummy",
Deprecated: "use: [abci-cli kvstore] instead",
Short: "ABCI demo example",
Long: "ABCI demo example",
Args: cobra.ExactArgs(0),
RunE: func(cmd *cobra.Command, args []string) error {
return cmdKVStore(cmd, args)
},
}
var kvstoreCmd = &cobra.Command{
Use: "kvstore",
Short: "ABCI demo example",

File diff suppressed because it is too large.


@@ -74,6 +74,7 @@ message RequestQuery {
bool prove = 4;
}
// NOTE: validators here have empty pubkeys.
message RequestBeginBlock {
bytes hash = 1;
Header header = 2 [(gogoproto.nullable)=false];
@@ -207,13 +208,12 @@ message ResponseCommit {
// ConsensusParams contains all consensus-relevant parameters
// that can be adjusted by the abci app
message ConsensusParams {
BlockSizeParams block_size = 1;
EvidenceParams evidence = 2;
ValidatorParams validator = 3;
BlockSize block_size = 1;
EvidenceParams evidence_params = 2;
}
// BlockSize contains limits on the block size.
message BlockSizeParams {
message BlockSize {
// Note: must be greater than 0
int64 max_bytes = 1;
// Note: must be greater or equal to -1
@@ -226,11 +226,6 @@ message EvidenceParams {
int64 max_age = 1;
}
// ValidatorParams contains limits on validators.
message ValidatorParams {
repeated string pub_key_types = 1;
}
message LastCommitInfo {
int32 round = 1;
repeated VoteInfo votes = 2 [(gogoproto.nullable)=false];


@@ -1479,15 +1479,15 @@ func TestConsensusParamsMarshalTo(t *testing.T) {
}
}
func TestBlockSizeParamsProto(t *testing.T) {
func TestBlockSizeProto(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedBlockSizeParams(popr, false)
p := NewPopulatedBlockSize(popr, false)
dAtA, err := github_com_gogo_protobuf_proto.Marshal(p)
if err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
msg := &BlockSizeParams{}
msg := &BlockSize{}
if err := github_com_gogo_protobuf_proto.Unmarshal(dAtA, msg); err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
@@ -1510,10 +1510,10 @@ func TestBlockSizeParamsProto(t *testing.T) {
}
}
func TestBlockSizeParamsMarshalTo(t *testing.T) {
func TestBlockSizeMarshalTo(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedBlockSizeParams(popr, false)
p := NewPopulatedBlockSize(popr, false)
size := p.Size()
dAtA := make([]byte, size)
for i := range dAtA {
@@ -1523,7 +1523,7 @@ func TestBlockSizeParamsMarshalTo(t *testing.T) {
if err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
msg := &BlockSizeParams{}
msg := &BlockSize{}
if err := github_com_gogo_protobuf_proto.Unmarshal(dAtA, msg); err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
@@ -1591,62 +1591,6 @@ func TestEvidenceParamsMarshalTo(t *testing.T) {
}
}
func TestValidatorParamsProto(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedValidatorParams(popr, false)
dAtA, err := github_com_gogo_protobuf_proto.Marshal(p)
if err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
msg := &ValidatorParams{}
if err := github_com_gogo_protobuf_proto.Unmarshal(dAtA, msg); err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
littlefuzz := make([]byte, len(dAtA))
copy(littlefuzz, dAtA)
for i := range dAtA {
dAtA[i] = byte(popr.Intn(256))
}
if !p.Equal(msg) {
t.Fatalf("seed = %d, %#v !Proto %#v", seed, msg, p)
}
if len(littlefuzz) > 0 {
fuzzamount := 100
for i := 0; i < fuzzamount; i++ {
littlefuzz[popr.Intn(len(littlefuzz))] = byte(popr.Intn(256))
littlefuzz = append(littlefuzz, byte(popr.Intn(256)))
}
// shouldn't panic
_ = github_com_gogo_protobuf_proto.Unmarshal(littlefuzz, msg)
}
}
func TestValidatorParamsMarshalTo(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedValidatorParams(popr, false)
size := p.Size()
dAtA := make([]byte, size)
for i := range dAtA {
dAtA[i] = byte(popr.Intn(256))
}
_, err := p.MarshalTo(dAtA)
if err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
msg := &ValidatorParams{}
if err := github_com_gogo_protobuf_proto.Unmarshal(dAtA, msg); err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
for i := range dAtA {
dAtA[i] = byte(popr.Intn(256))
}
if !p.Equal(msg) {
t.Fatalf("seed = %d, %#v !Proto %#v", seed, msg, p)
}
}
func TestLastCommitInfoProto(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
@@ -2675,16 +2619,16 @@ func TestConsensusParamsJSON(t *testing.T) {
t.Fatalf("seed = %d, %#v !Json Equal %#v", seed, msg, p)
}
}
func TestBlockSizeParamsJSON(t *testing.T) {
func TestBlockSizeJSON(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedBlockSizeParams(popr, true)
p := NewPopulatedBlockSize(popr, true)
marshaler := github_com_gogo_protobuf_jsonpb.Marshaler{}
jsondata, err := marshaler.MarshalToString(p)
if err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
msg := &BlockSizeParams{}
msg := &BlockSize{}
err = github_com_gogo_protobuf_jsonpb.UnmarshalString(jsondata, msg)
if err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
@@ -2711,24 +2655,6 @@ func TestEvidenceParamsJSON(t *testing.T) {
t.Fatalf("seed = %d, %#v !Json Equal %#v", seed, msg, p)
}
}
func TestValidatorParamsJSON(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedValidatorParams(popr, true)
marshaler := github_com_gogo_protobuf_jsonpb.Marshaler{}
jsondata, err := marshaler.MarshalToString(p)
if err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
msg := &ValidatorParams{}
err = github_com_gogo_protobuf_jsonpb.UnmarshalString(jsondata, msg)
if err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
if !p.Equal(msg) {
t.Fatalf("seed = %d, %#v !Json Equal %#v", seed, msg, p)
}
}
func TestLastCommitInfoJSON(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
@@ -3637,12 +3563,12 @@ func TestConsensusParamsProtoCompactText(t *testing.T) {
}
}
func TestBlockSizeParamsProtoText(t *testing.T) {
func TestBlockSizeProtoText(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedBlockSizeParams(popr, true)
p := NewPopulatedBlockSize(popr, true)
dAtA := github_com_gogo_protobuf_proto.MarshalTextString(p)
msg := &BlockSizeParams{}
msg := &BlockSize{}
if err := github_com_gogo_protobuf_proto.UnmarshalText(dAtA, msg); err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
@@ -3651,12 +3577,12 @@ func TestBlockSizeParamsProtoText(t *testing.T) {
}
}
func TestBlockSizeParamsProtoCompactText(t *testing.T) {
func TestBlockSizeProtoCompactText(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedBlockSizeParams(popr, true)
p := NewPopulatedBlockSize(popr, true)
dAtA := github_com_gogo_protobuf_proto.CompactTextString(p)
msg := &BlockSizeParams{}
msg := &BlockSize{}
if err := github_com_gogo_protobuf_proto.UnmarshalText(dAtA, msg); err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
@@ -3693,34 +3619,6 @@ func TestEvidenceParamsProtoCompactText(t *testing.T) {
}
}
func TestValidatorParamsProtoText(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedValidatorParams(popr, true)
dAtA := github_com_gogo_protobuf_proto.MarshalTextString(p)
msg := &ValidatorParams{}
if err := github_com_gogo_protobuf_proto.UnmarshalText(dAtA, msg); err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
if !p.Equal(msg) {
t.Fatalf("seed = %d, %#v !Proto %#v", seed, msg, p)
}
}
func TestValidatorParamsProtoCompactText(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedValidatorParams(popr, true)
dAtA := github_com_gogo_protobuf_proto.CompactTextString(p)
msg := &ValidatorParams{}
if err := github_com_gogo_protobuf_proto.UnmarshalText(dAtA, msg); err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
if !p.Equal(msg) {
t.Fatalf("seed = %d, %#v !Proto %#v", seed, msg, p)
}
}
func TestLastCommitInfoProtoText(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
@@ -4573,10 +4471,10 @@ func TestConsensusParamsSize(t *testing.T) {
}
}
func TestBlockSizeParamsSize(t *testing.T) {
func TestBlockSizeSize(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedBlockSizeParams(popr, true)
p := NewPopulatedBlockSize(popr, true)
size2 := github_com_gogo_protobuf_proto.Size(p)
dAtA, err := github_com_gogo_protobuf_proto.Marshal(p)
if err != nil {
@@ -4617,28 +4515,6 @@ func TestEvidenceParamsSize(t *testing.T) {
}
}
func TestValidatorParamsSize(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))
p := NewPopulatedValidatorParams(popr, true)
size2 := github_com_gogo_protobuf_proto.Size(p)
dAtA, err := github_com_gogo_protobuf_proto.Marshal(p)
if err != nil {
t.Fatalf("seed = %d, err = %v", seed, err)
}
size := p.Size()
if len(dAtA) != size {
t.Errorf("seed = %d, size %v != marshalled size %v", seed, size, len(dAtA))
}
if size2 != size {
t.Errorf("seed = %d, size %v != before marshal proto.Size %v", seed, size, size2)
}
size3 := github_com_gogo_protobuf_proto.Size(p)
if size3 != size {
t.Errorf("seed = %d, size %v != after marshal proto.Size %v", seed, size, size3)
}
}
func TestLastCommitInfoSize(t *testing.T) {
seed := time.Now().UnixNano()
popr := math_rand.New(math_rand.NewSource(seed))


@@ -168,12 +168,9 @@ func (pool *BlockPool) IsCaughtUp() bool {
return false
}
// Some conditions to determine if we're caught up.
// Ensures we've either received a block or waited some amount of time,
// and that we're synced to the highest known height. Note we use maxPeerHeight - 1
// because to sync block H requires block H+1 to verify the LastCommit.
receivedBlockOrTimedOut := pool.height > 0 || time.Since(pool.startTime) > 5*time.Second
ourChainIsLongestAmongPeers := pool.maxPeerHeight == 0 || pool.height >= (pool.maxPeerHeight-1)
// some conditions to determine if we're caught up
receivedBlockOrTimedOut := (pool.height > 0 || time.Since(pool.startTime) > 5*time.Second)
ourChainIsLongestAmongPeers := pool.maxPeerHeight == 0 || pool.height >= pool.maxPeerHeight
isCaughtUp := receivedBlockOrTimedOut && ourChainIsLongestAmongPeers
return isCaughtUp
}
@@ -255,8 +252,7 @@ func (pool *BlockPool) AddBlock(peerID p2p.ID, block *types.Block, blockSize int
peer.decrPending(blockSize)
}
} else {
pool.Logger.Info("invalid peer", "peer", peerID, "blockHeight", block.Height)
pool.sendError(errors.New("invalid peer"), peerID)
// Bad peer?
}
}
@@ -296,7 +292,7 @@ func (pool *BlockPool) RemovePeer(peerID p2p.ID) {
func (pool *BlockPool) removePeer(peerID p2p.ID) {
for _, requester := range pool.requesters {
if requester.getPeerID() == peerID {
requester.redo(peerID)
requester.redo()
}
}
delete(pool.peers, peerID)
@@ -330,11 +326,8 @@ func (pool *BlockPool) makeNextRequester() {
defer pool.mtx.Unlock()
nextHeight := pool.height + pool.requestersLen()
if nextHeight > pool.maxPeerHeight {
return
}
request := newBPRequester(pool, nextHeight)
// request.SetLogger(pool.Logger.With("height", nextHeight))
pool.requesters[nextHeight] = request
atomic.AddInt32(&pool.numPending, 1)
@@ -460,7 +453,7 @@ type bpRequester struct {
pool *BlockPool
height int64
gotBlockCh chan struct{}
redoCh chan p2p.ID //redo may send multitime, add peerId to identify repeat
redoCh chan struct{}
mtx sync.Mutex
peerID p2p.ID
@@ -472,7 +465,7 @@ func newBPRequester(pool *BlockPool, height int64) *bpRequester {
pool: pool,
height: height,
gotBlockCh: make(chan struct{}, 1),
redoCh: make(chan p2p.ID, 1),
redoCh: make(chan struct{}, 1),
peerID: "",
block: nil,
@@ -531,9 +524,9 @@ func (bpr *bpRequester) reset() {
// Tells bpRequester to pick another peer and try again.
// NOTE: Nonblocking, and does nothing if another redo
// was already requested.
func (bpr *bpRequester) redo(peerId p2p.ID) {
func (bpr *bpRequester) redo() {
select {
case bpr.redoCh <- peerId:
case bpr.redoCh <- struct{}{}:
default:
}
}
@@ -572,13 +565,9 @@ OUTER_LOOP:
return
case <-bpr.Quit():
return
case peerID := <-bpr.redoCh:
if peerID == bpr.peerID {
bpr.reset()
continue OUTER_LOOP
} else {
continue WAIT_LOOP
}
case <-bpr.redoCh:
bpr.reset()
continue OUTER_LOOP
case <-bpr.gotBlockCh:
// We got a block!
// Continue the for-loop and wait til Quit.


@@ -16,52 +16,16 @@ func init() {
}
type testPeer struct {
id p2p.ID
height int64
inputChan chan inputData //make sure each peer's data is sequential
id p2p.ID
height int64
}
type inputData struct {
t *testing.T
pool *BlockPool
request BlockRequest
}
func (p testPeer) runInputRoutine() {
go func() {
for input := range p.inputChan {
p.simulateInput(input)
}
}()
}
// Request desired, pretend like we got the block immediately.
func (p testPeer) simulateInput(input inputData) {
block := &types.Block{Header: types.Header{Height: input.request.Height}}
input.pool.AddBlock(input.request.PeerID, block, 123)
input.t.Logf("Added block from peer %v (height: %v)", input.request.PeerID, input.request.Height)
}
type testPeers map[p2p.ID]testPeer
func (ps testPeers) start() {
for _, v := range ps {
v.runInputRoutine()
}
}
func (ps testPeers) stop() {
for _, v := range ps {
close(v.inputChan)
}
}
func makePeers(numPeers int, minHeight, maxHeight int64) testPeers {
peers := make(testPeers, numPeers)
func makePeers(numPeers int, minHeight, maxHeight int64) map[p2p.ID]testPeer {
peers := make(map[p2p.ID]testPeer, numPeers)
for i := 0; i < numPeers; i++ {
peerID := p2p.ID(cmn.RandStr(12))
height := minHeight + cmn.RandInt63n(maxHeight-minHeight)
peers[peerID] = testPeer{peerID, height, make(chan inputData, 10)}
peers[peerID] = testPeer{peerID, height}
}
return peers
}
@@ -81,9 +45,6 @@ func TestBasic(t *testing.T) {
defer pool.Stop()
peers.start()
defer peers.stop()
// Introduce each peer.
go func() {
for _, peer := range peers {
@@ -116,8 +77,12 @@ func TestBasic(t *testing.T) {
if request.Height == 300 {
return // Done!
}
peers[request.PeerID].inputChan <- inputData{t, pool, request}
// Request desired, pretend like we got the block immediately.
go func() {
block := &types.Block{Header: types.Header{Height: request.Height}}
pool.AddBlock(request.PeerID, block, 123)
t.Logf("Added block from peer %v (height: %v)", request.PeerID, request.Height)
}()
}
}
}


@@ -1,7 +1,6 @@
package blockchain
import (
"errors"
"fmt"
"reflect"
"time"
@@ -181,12 +180,6 @@ func (bcR *BlockchainReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
return
}
if err = msg.ValidateBasic(); err != nil {
bcR.Logger.Error("Peer sent us invalid msg", "peer", src, "msg", msg, "err", err)
bcR.Switch.StopPeerForError(src, err)
return
}
bcR.Logger.Debug("Receive", "src", src, "chID", chID, "msg", msg)
switch msg := msg.(type) {
@@ -195,6 +188,7 @@ func (bcR *BlockchainReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
// Unfortunately not queued since the queue is full.
}
case *bcBlockResponseMessage:
// Got a block.
bcR.pool.AddBlock(src.ID(), msg.Block, len(msgBytes))
case *bcStatusRequestMessage:
// Send peer our state.
@@ -264,12 +258,8 @@ FOR_LOOP:
bcR.Logger.Info("Time to switch to consensus reactor!", "height", height)
bcR.pool.Stop()
conR, ok := bcR.Switch.Reactor("CONSENSUS").(consensusReactor)
if ok {
conR.SwitchToConsensus(state, blocksSynced)
} else {
// should only happen during testing
}
conR := bcR.Switch.Reactor("CONSENSUS").(consensusReactor)
conR.SwitchToConsensus(state, blocksSynced)
break FOR_LOOP
}
@@ -318,13 +308,6 @@ FOR_LOOP:
// still need to clean up the rest.
bcR.Switch.StopPeerForError(peer, fmt.Errorf("BlockchainReactor validation error: %v", err))
}
peerID2 := bcR.pool.RedoRequest(second.Height)
peer2 := bcR.Switch.Peers().Get(peerID2)
if peer2 != nil && peer2 != peer {
// NOTE: we've already removed the peer's request, but we
// still need to clean up the rest.
bcR.Switch.StopPeerForError(peer2, fmt.Errorf("BlockchainReactor validation error: %v", err))
}
continue FOR_LOOP
} else {
bcR.pool.PopRequest()
@@ -369,9 +352,7 @@ func (bcR *BlockchainReactor) BroadcastStatusRequest() error {
// Messages
// BlockchainMessage is a generic message for this reactor.
type BlockchainMessage interface {
ValidateBasic() error
}
type BlockchainMessage interface{}
func RegisterBlockchainMessages(cdc *amino.Codec) {
cdc.RegisterInterface((*BlockchainMessage)(nil), nil)
@@ -396,14 +377,6 @@ type bcBlockRequestMessage struct {
Height int64
}
// ValidateBasic performs basic validation.
func (m *bcBlockRequestMessage) ValidateBasic() error {
if m.Height < 0 {
return errors.New("Negative Height")
}
return nil
}
func (m *bcBlockRequestMessage) String() string {
return fmt.Sprintf("[bcBlockRequestMessage %v]", m.Height)
}
@@ -412,14 +385,6 @@ type bcNoBlockResponseMessage struct {
Height int64
}
// ValidateBasic performs basic validation.
func (m *bcNoBlockResponseMessage) ValidateBasic() error {
if m.Height < 0 {
return errors.New("Negative Height")
}
return nil
}
func (brm *bcNoBlockResponseMessage) String() string {
return fmt.Sprintf("[bcNoBlockResponseMessage %d]", brm.Height)
}
@@ -430,11 +395,6 @@ type bcBlockResponseMessage struct {
Block *types.Block
}
// ValidateBasic performs basic validation.
func (m *bcBlockResponseMessage) ValidateBasic() error {
return m.Block.ValidateBasic()
}
func (m *bcBlockResponseMessage) String() string {
return fmt.Sprintf("[bcBlockResponseMessage %v]", m.Block.Height)
}
@@ -445,14 +405,6 @@ type bcStatusRequestMessage struct {
Height int64
}
// ValidateBasic performs basic validation.
func (m *bcStatusRequestMessage) ValidateBasic() error {
if m.Height < 0 {
return errors.New("Negative Height")
}
return nil
}
func (m *bcStatusRequestMessage) String() string {
return fmt.Sprintf("[bcStatusRequestMessage %v]", m.Height)
}
@@ -463,14 +415,6 @@ type bcStatusResponseMessage struct {
Height int64
}
// ValidateBasic performs basic validation.
func (m *bcStatusResponseMessage) ValidateBasic() error {
if m.Height < 0 {
return errors.New("Negative Height")
}
return nil
}
func (m *bcStatusResponseMessage) String() string {
return fmt.Sprintf("[bcStatusResponseMessage %v]", m.Height)
}


@@ -1,151 +1,72 @@
package blockchain
import (
"sort"
"net"
"testing"
"time"
"github.com/stretchr/testify/assert"
abci "github.com/tendermint/tendermint/abci/types"
cfg "github.com/tendermint/tendermint/config"
cmn "github.com/tendermint/tendermint/libs/common"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/libs/log"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
)
var config *cfg.Config
func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.GenesisDoc, []types.PrivValidator) {
validators := make([]types.GenesisValidator, numValidators)
privValidators := make([]types.PrivValidator, numValidators)
for i := 0; i < numValidators; i++ {
val, privVal := types.RandValidator(randPower, minPower)
validators[i] = types.GenesisValidator{
PubKey: val.PubKey,
Power: val.VotingPower,
}
privValidators[i] = privVal
}
sort.Sort(types.PrivValidatorsByAddress(privValidators))
return &types.GenesisDoc{
GenesisTime: tmtime.Now(),
ChainID: config.ChainID(),
Validators: validators,
}, privValidators
}
func makeVote(header *types.Header, blockID types.BlockID, valset *types.ValidatorSet, privVal types.PrivValidator) *types.Vote {
addr := privVal.GetPubKey().Address()
idx, _ := valset.GetByAddress(addr)
vote := &types.Vote{
ValidatorAddress: addr,
ValidatorIndex: idx,
Height: header.Height,
Round: 1,
Timestamp: tmtime.Now(),
Type: types.PrecommitType,
BlockID: blockID,
}
privVal.SignVote(header.ChainID, vote)
return vote
}
type BlockchainReactorPair struct {
reactor *BlockchainReactor
app proxy.AppConns
}
func newBlockchainReactor(logger log.Logger, genDoc *types.GenesisDoc, privVals []types.PrivValidator, maxBlockHeight int64) BlockchainReactorPair {
if len(privVals) != 1 {
panic("only support one validator")
}
app := &testApp{}
cc := proxy.NewLocalClientCreator(app)
proxyApp := proxy.NewAppConns(cc)
err := proxyApp.Start()
if err != nil {
panic(cmn.ErrorWrap(err, "error start app"))
}
func makeStateAndBlockStore(logger log.Logger) (sm.State, *BlockStore) {
config := cfg.ResetTestRoot("blockchain_reactor_test")
// blockDB := dbm.NewDebugDB("blockDB", dbm.NewMemDB())
// stateDB := dbm.NewDebugDB("stateDB", dbm.NewMemDB())
blockDB := dbm.NewMemDB()
stateDB := dbm.NewMemDB()
blockStore := NewBlockStore(blockDB)
state, err := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
state, err := sm.LoadStateFromDBOrGenesisFile(stateDB, config.GenesisFile())
if err != nil {
panic(cmn.ErrorWrap(err, "error constructing state from genesis file"))
}
return state, blockStore
}
// Make the BlockchainReactor itself.
// NOTE we have to create and commit the blocks first because
// pool.height is determined from the store.
func newBlockchainReactor(logger log.Logger, maxBlockHeight int64) *BlockchainReactor {
state, blockStore := makeStateAndBlockStore(logger)
// Make the blockchainReactor itself
fastSync := true
blockExec := sm.NewBlockExecutor(dbm.NewMemDB(), log.TestingLogger(), proxyApp.Consensus(),
var nilApp proxy.AppConnConsensus
blockExec := sm.NewBlockExecutor(dbm.NewMemDB(), log.TestingLogger(), nilApp,
sm.MockMempool{}, sm.MockEvidencePool{})
// let's add some blocks in
for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
lastCommit := &types.Commit{}
if blockHeight > 1 {
lastBlockMeta := blockStore.LoadBlockMeta(blockHeight - 1)
lastBlock := blockStore.LoadBlock(blockHeight - 1)
vote := makeVote(&lastBlock.Header, lastBlockMeta.BlockID, state.Validators, privVals[0])
lastCommit = &types.Commit{Precommits: []*types.Vote{vote}, BlockID: lastBlockMeta.BlockID}
}
thisBlock := makeBlock(blockHeight, state, lastCommit)
thisParts := thisBlock.MakePartSet(types.BlockPartSizeBytes)
blockID := types.BlockID{thisBlock.Hash(), thisParts.Header()}
state, err = blockExec.ApplyBlock(state, blockID, thisBlock)
if err != nil {
panic(cmn.ErrorWrap(err, "error apply block"))
}
blockStore.SaveBlock(thisBlock, thisParts, lastCommit)
}
bcReactor := NewBlockchainReactor(state.Copy(), blockExec, blockStore, fastSync)
bcReactor.SetLogger(logger.With("module", "blockchain"))
return BlockchainReactorPair{bcReactor, proxyApp}
// Next: we need to set a switch in order for peers to be added in
bcReactor.Switch = p2p.NewSwitch(cfg.DefaultP2PConfig(), nil)
// Lastly: let's add some blocks in
for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
firstBlock := makeBlock(blockHeight, state)
secondBlock := makeBlock(blockHeight+1, state)
firstParts := firstBlock.MakePartSet(types.BlockPartSizeBytes)
blockStore.SaveBlock(firstBlock, firstParts, secondBlock.LastCommit)
}
return bcReactor
}
func TestNoBlockResponse(t *testing.T) {
config = cfg.ResetTestRoot("blockchain_reactor_test")
genDoc, privVals := randGenesisDoc(1, false, 30)
maxBlockHeight := int64(20)
maxBlockHeight := int64(65)
bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight)
bcr.Start()
defer bcr.Stop()
reactorPairs := make([]BlockchainReactorPair, 2)
// Add some peers in
peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12)))
bcr.AddPeer(peer)
reactorPairs[0] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, maxBlockHeight)
reactorPairs[1] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
p2p.MakeConnectedSwitches(config.P2P, 2, func(i int, s *p2p.Switch) *p2p.Switch {
s.AddReactor("BLOCKCHAIN", reactorPairs[i].reactor)
return s
}, p2p.Connect2Switches)
defer func() {
for _, r := range reactorPairs {
r.reactor.Stop()
r.app.Stop()
}
}()
chID := byte(0x01)
tests := []struct {
height int64
@@ -157,100 +78,72 @@ func TestNoBlockResponse(t *testing.T) {
{100, false},
}
for {
if reactorPairs[1].reactor.pool.IsCaughtUp() {
break
}
time.Sleep(10 * time.Millisecond)
}
assert.Equal(t, maxBlockHeight, reactorPairs[0].reactor.store.Height())
// receive a request message from peer,
// wait for our response to be received on the peer
for _, tt := range tests {
block := reactorPairs[1].reactor.store.LoadBlock(tt.height)
reqBlockMsg := &bcBlockRequestMessage{tt.height}
reqBlockBytes := cdc.MustMarshalBinaryBare(reqBlockMsg)
bcr.Receive(chID, peer, reqBlockBytes)
msg := peer.lastBlockchainMessage()
if tt.existent {
assert.True(t, block != nil)
if blockMsg, ok := msg.(*bcBlockResponseMessage); !ok {
t.Fatalf("Expected to receive a block response for height %d", tt.height)
} else if blockMsg.Block.Height != tt.height {
t.Fatalf("Expected response to be for height %d, got %d", tt.height, blockMsg.Block.Height)
}
} else {
assert.True(t, block == nil)
if noBlockMsg, ok := msg.(*bcNoBlockResponseMessage); !ok {
t.Fatalf("Expected to receive a no block response for height %d", tt.height)
} else if noBlockMsg.Height != tt.height {
t.Fatalf("Expected response to be for height %d, got %d", tt.height, noBlockMsg.Height)
}
}
}
}
/*
// NOTE: This is too hard to test without
// an easy way to add test peer to switch
// or without significant refactoring of the module.
// Alternatively we could actually dial a TCP conn but
// that seems extreme.
func TestBadBlockStopsPeer(t *testing.T) {
config = cfg.ResetTestRoot("blockchain_reactor_test")
genDoc, privVals := randGenesisDoc(1, false, 30)
maxBlockHeight := int64(20)
maxBlockHeight := int64(148)
bcr := newBlockchainReactor(log.TestingLogger(), maxBlockHeight)
bcr.Start()
defer bcr.Stop()
otherChain := newBlockchainReactor(log.TestingLogger(), genDoc, privVals, maxBlockHeight)
defer func() {
otherChain.reactor.Stop()
otherChain.app.Stop()
}()
// Add some peers in
peer := newbcrTestPeer(p2p.ID(cmn.RandStr(12)))
reactorPairs := make([]BlockchainReactorPair, 4)
// XXX: This doesn't add the peer to anything,
// so it's hard to check that it's later removed
bcr.AddPeer(peer)
assert.True(t, bcr.Switch.Peers().Size() > 0)
reactorPairs[0] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, maxBlockHeight)
reactorPairs[1] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
reactorPairs[2] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
reactorPairs[3] = newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
switches := p2p.MakeConnectedSwitches(config.P2P, 4, func(i int, s *p2p.Switch) *p2p.Switch {
s.AddReactor("BLOCKCHAIN", reactorPairs[i].reactor)
return s
}, p2p.Connect2Switches)
defer func() {
for _, r := range reactorPairs {
r.reactor.Stop()
r.app.Stop()
}
}()
// send a bad block from the peer
// default blocks already don't have commits, so this should fail
block := bcr.store.LoadBlock(3)
msg := &bcBlockResponseMessage{Block: block}
peer.Send(BlockchainChannel, struct{ BlockchainMessage }{msg})
ticker := time.NewTicker(time.Millisecond * 10)
timer := time.NewTimer(time.Second * 2)
LOOP:
for {
if reactorPairs[3].reactor.pool.IsCaughtUp() {
break
select {
case <-ticker.C:
if bcr.Switch.Peers().Size() == 0 {
break LOOP
}
case <-timer.C:
t.Fatal("Timed out waiting to disconnect peer")
}
time.Sleep(1 * time.Second)
}
// at this time, reactors[0-3] are the newest
assert.Equal(t, 3, reactorPairs[1].reactor.Switch.Peers().Size())
// mark reactorPairs[3] as an invalid peer
reactorPairs[3].reactor.store = otherChain.reactor.store
lastReactorPair := newBlockchainReactor(log.TestingLogger(), genDoc, privVals, 0)
reactorPairs = append(reactorPairs, lastReactorPair)
switches = append(switches, p2p.MakeConnectedSwitches(config.P2P, 1, func(i int, s *p2p.Switch) *p2p.Switch {
s.AddReactor("BLOCKCHAIN", reactorPairs[len(reactorPairs)-1].reactor)
return s
}, p2p.Connect2Switches)...)
for i := 0; i < len(reactorPairs)-1; i++ {
p2p.Connect2Switches(switches, i, len(reactorPairs)-1)
}
for {
if lastReactorPair.reactor.pool.IsCaughtUp() || lastReactorPair.reactor.Switch.Peers().Size() == 0 {
break
}
time.Sleep(1 * time.Second)
}
assert.True(t, lastReactorPair.reactor.Switch.Peers().Size() < len(reactorPairs)-1)
}
*/
//----------------------------------------------
// utility funcs
@@ -262,41 +155,55 @@ func makeTxs(height int64) (txs []types.Tx) {
return txs
}
func makeBlock(height int64, state sm.State, lastCommit *types.Commit) *types.Block {
block, _ := state.MakeBlock(height, makeTxs(height), lastCommit, nil, state.Validators.GetProposer().Address)
func makeBlock(height int64, state sm.State) *types.Block {
block, _ := state.MakeBlock(height, makeTxs(height), new(types.Commit), nil, state.Validators.GetProposer().Address)
return block
}
type testApp struct {
abci.BaseApplication
// The Test peer
type bcrTestPeer struct {
cmn.BaseService
id p2p.ID
ch chan interface{}
}
var _ abci.Application = (*testApp)(nil)
var _ p2p.Peer = (*bcrTestPeer)(nil)
func (app *testApp) Info(req abci.RequestInfo) (resInfo abci.ResponseInfo) {
return abci.ResponseInfo{}
func newbcrTestPeer(id p2p.ID) *bcrTestPeer {
bcr := &bcrTestPeer{
id: id,
ch: make(chan interface{}, 2),
}
bcr.BaseService = *cmn.NewBaseService(nil, "bcrTestPeer", bcr)
return bcr
}
func (app *testApp) BeginBlock(req abci.RequestBeginBlock) abci.ResponseBeginBlock {
return abci.ResponseBeginBlock{}
func (tp *bcrTestPeer) lastBlockchainMessage() interface{} { return <-tp.ch }
func (tp *bcrTestPeer) TrySend(chID byte, msgBytes []byte) bool {
var msg BlockchainMessage
err := cdc.UnmarshalBinaryBare(msgBytes, &msg)
if err != nil {
panic(cmn.ErrorWrap(err, "Error while trying to parse a BlockchainMessage"))
}
if _, ok := msg.(*bcStatusResponseMessage); ok {
// Discard status response messages since they skew our results
// We only want to deal with:
// + bcBlockResponseMessage
// + bcNoBlockResponseMessage
} else {
tp.ch <- msg
}
return true
}
func (app *testApp) EndBlock(req abci.RequestEndBlock) abci.ResponseEndBlock {
return abci.ResponseEndBlock{}
}
func (app *testApp) DeliverTx(tx []byte) abci.ResponseDeliverTx {
return abci.ResponseDeliverTx{Tags: []cmn.KVPair{}}
}
func (app *testApp) CheckTx(tx []byte) abci.ResponseCheckTx {
return abci.ResponseCheckTx{}
}
func (app *testApp) Commit() abci.ResponseCommit {
return abci.ResponseCommit{}
}
func (app *testApp) Query(reqQuery abci.RequestQuery) (resQuery abci.ResponseQuery) {
return
}
func (tp *bcrTestPeer) Send(chID byte, msgBytes []byte) bool { return tp.TrySend(chID, msgBytes) }
func (tp *bcrTestPeer) NodeInfo() p2p.NodeInfo { return p2p.DefaultNodeInfo{} }
func (tp *bcrTestPeer) Status() p2p.ConnectionStatus { return p2p.ConnectionStatus{} }
func (tp *bcrTestPeer) ID() p2p.ID { return tp.id }
func (tp *bcrTestPeer) IsOutbound() bool { return false }
func (tp *bcrTestPeer) IsPersistent() bool { return true }
func (tp *bcrTestPeer) Get(s string) interface{} { return s }
func (tp *bcrTestPeer) Set(string, interface{}) {}
func (tp *bcrTestPeer) RemoteIP() net.IP { return []byte{127, 0, 0, 1} }
func (tp *bcrTestPeer) OriginalAddr() *p2p.NetAddress { return nil }
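The testApp stub in the hunk above relies on embedding abci.BaseApplication for default handlers. The following is a minimal standalone sketch of that pattern (noopApp and main are illustrative only, not part of this change): embedding the base type satisfies the full abci.Application interface while overriding a single method.

```go
package main

import (
	"fmt"

	abci "github.com/tendermint/tendermint/abci/types"
	cmn "github.com/tendermint/tendermint/libs/common"
)

// noopApp embeds BaseApplication, which supplies no-op implementations of the
// full ABCI interface, so a test double only overrides the handlers it needs.
type noopApp struct {
	abci.BaseApplication
}

// DeliverTx mirrors the testApp hunk above: accept every tx with an empty tag list.
func (app *noopApp) DeliverTx(tx []byte) abci.ResponseDeliverTx {
	return abci.ResponseDeliverTx{Tags: []cmn.KVPair{}}
}

// Compile-time check that the embedded base type satisfies abci.Application.
var _ abci.Application = (*noopApp)(nil)

func main() {
	app := &noopApp{}
	res := app.DeliverTx([]byte("tx"))
	fmt.Printf("code=%d tags=%d\n", res.Code, len(res.Tags))
}
```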

View File

@@ -9,30 +9,13 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
cfg "github.com/tendermint/tendermint/config"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/db"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/libs/log"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
)
func makeStateAndBlockStore(logger log.Logger) (sm.State, *BlockStore) {
config := cfg.ResetTestRoot("blockchain_reactor_test")
// blockDB := dbm.NewDebugDB("blockDB", dbm.NewMemDB())
// stateDB := dbm.NewDebugDB("stateDB", dbm.NewMemDB())
blockDB := dbm.NewMemDB()
stateDB := dbm.NewMemDB()
state, err := sm.LoadStateFromDBOrGenesisFile(stateDB, config.GenesisFile())
if err != nil {
panic(cmn.ErrorWrap(err, "error constructing state from genesis file"))
}
return state, NewBlockStore(blockDB)
}
func TestLoadBlockStoreStateJSON(t *testing.T) {
db := db.NewMemDB()
@@ -82,7 +65,7 @@ func freshBlockStore() (*BlockStore, db.DB) {
var (
state, _ = makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
block = makeBlock(1, state, new(types.Commit))
block = makeBlock(1, state)
partSet = block.MakePartSet(2)
part1 = partSet.GetPart(0)
part2 = partSet.GetPart(1)
@@ -105,7 +88,7 @@ func TestBlockStoreSaveLoadBlock(t *testing.T) {
}
// save a block
block := makeBlock(bs.Height()+1, state, new(types.Commit))
block := makeBlock(bs.Height()+1, state)
validPartSet := block.MakePartSet(2)
seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,
Timestamp: tmtime.Now()}}}
@@ -311,7 +294,7 @@ func TestLoadBlockPart(t *testing.T) {
gotPart, _, panicErr := doFn(loadPart)
require.Nil(t, panicErr, "an existent and proper block should not panic")
require.Nil(t, res, "a properly saved block should return a proper block")
require.Equal(t, gotPart.(*types.Part), part1,
require.Equal(t, gotPart.(*types.Part).Hash(), part1.Hash(),
"expecting successful retrieval of previously saved block")
}
@@ -348,7 +331,7 @@ func TestLoadBlockMeta(t *testing.T) {
func TestBlockFetchAtHeight(t *testing.T) {
state, bs := makeStateAndBlockStore(log.NewTMLogger(new(bytes.Buffer)))
require.Equal(t, bs.Height(), int64(0), "initially the height should be zero")
block := makeBlock(bs.Height()+1, state, new(types.Commit))
block := makeBlock(bs.Height()+1, state)
partSet := block.MakePartSet(2)
seenCommit := &types.Commit{Precommits: []*types.Vote{{Height: 10,

View File

@@ -3,7 +3,6 @@ package main
import (
"flag"
"os"
"time"
"github.com/tendermint/tendermint/crypto/ed25519"
cmn "github.com/tendermint/tendermint/libs/common"
@@ -14,10 +13,9 @@ import (
func main() {
var (
addr = flag.String("addr", ":26659", "Address of client to connect to")
chainID = flag.String("chain-id", "mychain", "chain id")
privValKeyPath = flag.String("priv-key", "", "priv val key file path")
privValStatePath = flag.String("priv-state", "", "priv val state file path")
addr = flag.String("addr", ":26659", "Address of client to connect to")
chainID = flag.String("chain-id", "mychain", "chain id")
privValPath = flag.String("priv", "", "priv val file path")
logger = log.NewTMLogger(
log.NewSyncWriter(os.Stdout),
@@ -29,26 +27,18 @@ func main() {
"Starting private validator",
"addr", *addr,
"chainID", *chainID,
"privKeyPath", *privValKeyPath,
"privStatePath", *privValStatePath,
"privPath", *privValPath,
)
pv := privval.LoadFilePV(*privValKeyPath, *privValStatePath)
pv := privval.LoadFilePV(*privValPath)
var dialer privval.Dialer
protocol, address := cmn.ProtocolAndAddress(*addr)
switch protocol {
case "unix":
dialer = privval.DialUnixFn(address)
case "tcp":
connTimeout := 3 * time.Second // TODO
dialer = privval.DialTCPFn(address, connTimeout, ed25519.GenPrivKey())
default:
logger.Error("Unknown protocol", "protocol", protocol)
os.Exit(1)
}
rs := privval.NewRemoteSigner(logger, *chainID, pv, dialer)
rs := privval.NewRemoteSigner(
logger,
*chainID,
*addr,
pv,
ed25519.GenPrivKey(),
)
err := rs.Start()
if err != nil {
panic(err)

View File

@@ -17,7 +17,7 @@ var GenValidatorCmd = &cobra.Command{
}
func genValidator(cmd *cobra.Command, args []string) {
pv := privval.GenFilePV("", "")
pv := privval.GenFilePV("")
jsbz, err := cdc.MarshalJSON(pv)
if err != nil {
panic(err)

View File

@@ -4,6 +4,7 @@ import (
"fmt"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/p2p"
@@ -25,18 +26,15 @@ func initFiles(cmd *cobra.Command, args []string) error {
func initFilesWithConfig(config *cfg.Config) error {
// private validator
privValKeyFile := config.PrivValidatorKeyFile()
privValStateFile := config.PrivValidatorStateFile()
privValFile := config.PrivValidatorFile()
var pv *privval.FilePV
if cmn.FileExists(privValKeyFile) {
pv = privval.LoadFilePV(privValKeyFile, privValStateFile)
logger.Info("Found private validator", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
if cmn.FileExists(privValFile) {
pv = privval.LoadFilePV(privValFile)
logger.Info("Found private validator", "path", privValFile)
} else {
pv = privval.GenFilePV(privValKeyFile, privValStateFile)
pv = privval.GenFilePV(privValFile)
pv.Save()
logger.Info("Generated private validator", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
logger.Info("Generated private validator", "path", privValFile)
}
nodeKeyFile := config.NodeKeyFile()
@@ -59,10 +57,9 @@ func initFilesWithConfig(config *cfg.Config) error {
GenesisTime: tmtime.Now(),
ConsensusParams: types.DefaultConsensusParams(),
}
key := pv.GetPubKey()
genDoc.Validators = []types.GenesisValidator{{
Address: key.Address(),
PubKey: key,
Address: pv.GetPubKey().Address(),
PubKey: pv.GetPubKey(),
Power: 10,
}}
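Several files in this compare (init, reset, show_validator, testnet, the consensus test helpers) hinge on the same FilePV API difference: one side keeps the signing key and the last-sign state in separate files, the other in a single priv_validator.json. Below is a minimal sketch of the two call shapes, using the default file names from the config hunks further down; only one of the two calls compiles against a given side of this diff.

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/privval"
)

func main() {
	// Split-file layout: the key lives in config/priv_validator_key.json and
	// the last-signed state in data/priv_validator_state.json.
	pv := privval.LoadOrGenFilePV(
		"config/priv_validator_key.json",
		"data/priv_validator_state.json",
	)

	// Single-file layout from the other side of this diff: everything lives
	// in one priv_validator.json.
	//   pv := privval.LoadOrGenFilePV("config/priv_validator.json")

	// Either way, the validator address is derived from the public key.
	fmt.Printf("validator address: %v\n", pv.GetPubKey().Address())
}
```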

View File

@@ -5,7 +5,6 @@ import (
"github.com/spf13/cobra"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/privval"
)
@@ -28,41 +27,36 @@ var ResetPrivValidatorCmd = &cobra.Command{
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetAll(cmd *cobra.Command, args []string) {
ResetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidatorKeyFile(),
config.PrivValidatorStateFile(), logger)
ResetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidatorFile(), logger)
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetPrivValidator(cmd *cobra.Command, args []string) {
resetFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile(), logger)
resetFilePV(config.PrivValidatorFile(), logger)
}
// ResetAll removes address book files plus all data, and resets the privValidator data.
// ResetAll removes the privValidator and address book files plus all data.
// Exported so other CLI tools can use it.
func ResetAll(dbDir, addrBookFile, privValKeyFile, privValStateFile string, logger log.Logger) {
func ResetAll(dbDir, addrBookFile, privValFile string, logger log.Logger) {
resetFilePV(privValFile, logger)
removeAddrBook(addrBookFile, logger)
if err := os.RemoveAll(dbDir); err == nil {
logger.Info("Removed all blockchain history", "dir", dbDir)
} else {
logger.Error("Error removing all blockchain history", "dir", dbDir, "err", err)
}
// recreate the dbDir since the privVal state needs to live there
cmn.EnsureDir(dbDir, 0700)
resetFilePV(privValKeyFile, privValStateFile, logger)
}
func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger) {
if _, err := os.Stat(privValKeyFile); err == nil {
pv := privval.LoadFilePVEmptyState(privValKeyFile, privValStateFile)
func resetFilePV(privValFile string, logger log.Logger) {
if _, err := os.Stat(privValFile); err == nil {
pv := privval.LoadFilePV(privValFile)
pv.Reset()
logger.Info("Reset private validator file to genesis state", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
logger.Info("Reset private validator file to genesis state", "file", privValFile)
} else {
pv := privval.GenFilePV(privValKeyFile, privValStateFile)
pv := privval.GenFilePV(privValFile)
pv.Save()
logger.Info("Generated private validator file", "file", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
logger.Info("Generated private validator file", "file", privValFile)
}
}

View File

@@ -54,9 +54,6 @@ var RootCmd = &cobra.Command{
if err != nil {
return err
}
if config.LogFormat == cfg.LogFormatJSON {
logger = log.NewTMJSONLogger(log.NewSyncWriter(os.Stdout))
}
logger, err = tmflags.ParseLogLevel(config.LogLevel, logger, cfg.DefaultLogLevel())
if err != nil {
return err

View File

@@ -24,7 +24,7 @@ func AddNodeFlags(cmd *cobra.Command) {
cmd.Flags().Bool("fast_sync", config.FastSync, "Fast blockchain syncing")
// abci flags
cmd.Flags().String("proxy_app", config.ProxyApp, "Proxy app address, or one of: 'kvstore', 'persistent_kvstore', 'counter', 'counter_serial' or 'noop' for local testing.")
cmd.Flags().String("proxy_app", config.ProxyApp, "Proxy app address, or 'nilapp' or 'kvstore' for local testing.")
cmd.Flags().String("abci", config.ABCI, "Specify abci transport (socket | grpc)")
// rpc flags

View File

@@ -16,7 +16,7 @@ var ShowValidatorCmd = &cobra.Command{
}
func showValidator(cmd *cobra.Command, args []string) {
privValidator := privval.LoadOrGenFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
privValidator := privval.LoadOrGenFilePV(config.PrivValidatorFile())
pubKeyJSONBytes, _ := cdc.MarshalJSON(privValidator.GetPubKey())
fmt.Println(string(pubKeyJSONBytes))
}

View File

@@ -85,18 +85,11 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
_ = os.RemoveAll(outputDir)
return err
}
err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}
initFilesWithConfig(config)
pvKeyFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidatorKey)
pvStateFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidatorState)
pv := privval.LoadFilePV(pvKeyFile, pvStateFile)
pvFile := filepath.Join(nodeDir, config.BaseConfig.PrivValidator)
pv := privval.LoadFilePV(pvFile)
genVals[i] = types.GenesisValidator{
Address: pv.GetPubKey().Address(),
PubKey: pv.GetPubKey(),
@@ -134,32 +127,14 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
}
}
// Gather persistent peer addresses.
var (
persistentPeers string
err error
)
if populatePersistentPeers {
persistentPeers, err = persistentPeersString(config)
err := populatePersistentPeersInConfigAndWriteIt(config)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}
}
// Overwrite default config.
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.AddrBookStrict = false
config.P2P.AllowDuplicateIP = true
if populatePersistentPeers {
config.P2P.PersistentPeers = persistentPeers
}
cfg.WriteConfigFile(filepath.Join(nodeDir, "config", "config.toml"), config)
}
fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators)
return nil
}
@@ -182,16 +157,28 @@ func hostnameOrIP(i int) string {
return fmt.Sprintf("%s%d", hostnamePrefix, i)
}
func persistentPeersString(config *cfg.Config) (string, error) {
func populatePersistentPeersInConfigAndWriteIt(config *cfg.Config) error {
persistentPeers := make([]string, nValidators+nNonValidators)
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
nodeKey, err := p2p.LoadNodeKey(config.NodeKeyFile())
if err != nil {
return "", err
return err
}
persistentPeers[i] = p2p.IDAddressString(nodeKey.ID(), fmt.Sprintf("%s:%d", hostnameOrIP(i), p2pPort))
}
return strings.Join(persistentPeers, ","), nil
persistentPeersList := strings.Join(persistentPeers, ",")
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.PersistentPeers = persistentPeersList
config.P2P.AddrBookStrict = false
// overwrite default config
cfg.WriteConfigFile(filepath.Join(nodeDir, "config", "config.toml"), config)
}
return nil
}
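Both persistentPeersString and populatePersistentPeersInConfigAndWriteIt above assemble the same comma-separated list that ends up in config.toml as persistent_peers. A standalone sketch of that assembly follows; the node IDs are placeholders (real IDs come from each node's node_key.json via p2p.LoadNodeKey), and the node0/node1 host names and port 26656 are assumed defaults.

```go
package main

import (
	"fmt"
	"strings"
)

// idAddressString mirrors the id@host:port form produced by
// p2p.IDAddressString in the hunk above.
func idAddressString(id, hostPort string) string {
	return fmt.Sprintf("%s@%s", id, hostPort)
}

func main() {
	// Placeholder node IDs; the real command derives them from node_key.json.
	ids := []string{"a1f9a9e2e6f8b8c0", "b2e8d7c6a5f4e3d2"}

	peers := make([]string, len(ids))
	for i, id := range ids {
		peers[i] = idAddressString(id, fmt.Sprintf("node%d:26656", i))
	}

	// This joined string is what gets written to config.P2P.PersistentPeers.
	fmt.Println(strings.Join(peers, ","))
}
```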

View File

@@ -14,11 +14,6 @@ const (
FuzzModeDrop = iota
// FuzzModeDelay is a mode in which we randomly sleep
FuzzModeDelay
// LogFormatPlain is a format for colored text
LogFormatPlain = "plain"
// LogFormatJSON is a format for json output
LogFormatJSON = "json"
)
// NOTE: Most of the structs & relevant comments + the
@@ -35,24 +30,15 @@ var (
defaultConfigFileName = "config.toml"
defaultGenesisJSONName = "genesis.json"
defaultPrivValKeyName = "priv_validator_key.json"
defaultPrivValStateName = "priv_validator_state.json"
defaultPrivValName = "priv_validator.json"
defaultNodeKeyName = "node_key.json"
defaultAddrBookName = "addrbook.json"
defaultConfigFilePath = filepath.Join(defaultConfigDir, defaultConfigFileName)
defaultGenesisJSONPath = filepath.Join(defaultConfigDir, defaultGenesisJSONName)
defaultPrivValKeyPath = filepath.Join(defaultConfigDir, defaultPrivValKeyName)
defaultPrivValStatePath = filepath.Join(defaultDataDir, defaultPrivValStateName)
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName)
)
var (
oldPrivVal = "priv_validator.json"
oldPrivValPath = filepath.Join(defaultConfigDir, oldPrivVal)
defaultConfigFilePath = filepath.Join(defaultConfigDir, defaultConfigFileName)
defaultGenesisJSONPath = filepath.Join(defaultConfigDir, defaultGenesisJSONName)
defaultPrivValPath = filepath.Join(defaultConfigDir, defaultPrivValName)
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName)
)
// Config defines the top level configuration for a Tendermint node
@@ -108,9 +94,6 @@ func (cfg *Config) SetRoot(root string) *Config {
// ValidateBasic performs basic validation (checking param bounds, etc.) and
// returns an error if any check fails.
func (cfg *Config) ValidateBasic() error {
if err := cfg.BaseConfig.ValidateBasic(); err != nil {
return err
}
if err := cfg.RPC.ValidateBasic(); err != nil {
return errors.Wrap(err, "Error in [rpc] section")
}
@@ -162,17 +145,11 @@ type BaseConfig struct {
// Output level for logging
LogLevel string `mapstructure:"log_level"`
// Output format: 'plain' (colored text) or 'json'
LogFormat string `mapstructure:"log_format"`
// Path to the JSON file containing the initial validator set and other meta data
Genesis string `mapstructure:"genesis_file"`
// Path to the JSON file containing the private key to use as a validator in the consensus protocol
PrivValidatorKey string `mapstructure:"priv_validator_key_file"`
// Path to the JSON file containing the last sign state of a validator
PrivValidatorState string `mapstructure:"priv_validator_state_file"`
PrivValidator string `mapstructure:"priv_validator_file"`
// TCP or UNIX socket address for Tendermint to listen on for
// connections from an external PrivValidator process
@@ -195,20 +172,18 @@ type BaseConfig struct {
// DefaultBaseConfig returns a default base configuration for a Tendermint node
func DefaultBaseConfig() BaseConfig {
return BaseConfig{
Genesis: defaultGenesisJSONPath,
PrivValidatorKey: defaultPrivValKeyPath,
PrivValidatorState: defaultPrivValStatePath,
NodeKey: defaultNodeKeyPath,
Moniker: defaultMoniker,
ProxyApp: "tcp://127.0.0.1:26658",
ABCI: "socket",
LogLevel: DefaultPackageLogLevels(),
LogFormat: LogFormatPlain,
ProfListenAddress: "",
FastSync: true,
FilterPeers: false,
DBBackend: "leveldb",
DBPath: "data",
Genesis: defaultGenesisJSONPath,
PrivValidator: defaultPrivValPath,
NodeKey: defaultNodeKeyPath,
Moniker: defaultMoniker,
ProxyApp: "tcp://127.0.0.1:26658",
ABCI: "socket",
LogLevel: DefaultPackageLogLevels(),
ProfListenAddress: "",
FastSync: true,
FilterPeers: false,
DBBackend: "leveldb",
DBPath: "data",
}
}
@@ -231,20 +206,9 @@ func (cfg BaseConfig) GenesisFile() string {
return rootify(cfg.Genesis, cfg.RootDir)
}
// PrivValidatorKeyFile returns the full path to the priv_validator_key.json file
func (cfg BaseConfig) PrivValidatorKeyFile() string {
return rootify(cfg.PrivValidatorKey, cfg.RootDir)
}
// PrivValidatorFile returns the full path to the priv_validator_state.json file
func (cfg BaseConfig) PrivValidatorStateFile() string {
return rootify(cfg.PrivValidatorState, cfg.RootDir)
}
// OldPrivValidatorFile returns the full path of the priv_validator.json from pre v0.28.0.
// TODO: eventually remove.
func (cfg BaseConfig) OldPrivValidatorFile() string {
return rootify(oldPrivValPath, cfg.RootDir)
// PrivValidatorFile returns the full path to the priv_validator.json file
func (cfg BaseConfig) PrivValidatorFile() string {
return rootify(cfg.PrivValidator, cfg.RootDir)
}
// NodeKeyFile returns the full path to the node_key.json file
@@ -257,17 +221,6 @@ func (cfg BaseConfig) DBDir() string {
return rootify(cfg.DBPath, cfg.RootDir)
}
// ValidateBasic performs basic validation (checking param bounds, etc.) and
// returns an error if any check fails.
func (cfg BaseConfig) ValidateBasic() error {
switch cfg.LogFormat {
case LogFormatPlain, LogFormatJSON:
default:
return errors.New("unknown log_format (must be 'plain' or 'json')")
}
return nil
}
// DefaultLogLevel returns a default log level of "error"
func DefaultLogLevel() string {
return "error"
@@ -289,25 +242,13 @@ type RPCConfig struct {
// TCP or UNIX socket address for the RPC server to listen on
ListenAddress string `mapstructure:"laddr"`
// A list of origins a cross-domain request can be executed from.
// If the special '*' value is present in the list, all origins will be allowed.
// An origin may contain a wildcard (*) to replace 0 or more characters (i.e.: http://*.domain.com).
// Only one wildcard can be used per origin.
CORSAllowedOrigins []string `mapstructure:"cors_allowed_origins"`
// A list of methods the client is allowed to use with cross-domain requests.
CORSAllowedMethods []string `mapstructure:"cors_allowed_methods"`
// A list of non simple headers the client is allowed to use with cross-domain requests.
CORSAllowedHeaders []string `mapstructure:"cors_allowed_headers"`
// TCP or UNIX socket address for the gRPC server to listen on
// NOTE: This server only supports /broadcast_tx_commit
GRPCListenAddress string `mapstructure:"grpc_laddr"`
// Maximum number of simultaneous connections.
// Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
// If you want to accept a larger number than the default, make sure
// If you want to accept a more significant number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
GRPCMaxOpenConnections int `mapstructure:"grpc_max_open_connections"`
@@ -317,7 +258,7 @@ type RPCConfig struct {
// Maximum number of simultaneous connections (including WebSocket).
// Does not include gRPC connections. See grpc_max_open_connections
// If you want to accept a larger number than the default, make sure
// If you want to accept a more significant number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
// Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -328,10 +269,8 @@ type RPCConfig struct {
// DefaultRPCConfig returns a default configuration for the RPC server
func DefaultRPCConfig() *RPCConfig {
return &RPCConfig{
ListenAddress: "tcp://0.0.0.0:26657",
CORSAllowedOrigins: []string{},
CORSAllowedMethods: []string{"HEAD", "GET", "POST"},
CORSAllowedHeaders: []string{"Origin", "Accept", "Content-Type", "X-Requested-With", "X-Server-Time"},
ListenAddress: "tcp://0.0.0.0:26657",
GRPCListenAddress: "",
GRPCMaxOpenConnections: 900,
@@ -361,11 +300,6 @@ func (cfg *RPCConfig) ValidateBasic() error {
return nil
}
// IsCorsEnabled returns true if cross-origin resource sharing is enabled.
func (cfg *RPCConfig) IsCorsEnabled() bool {
return len(cfg.CORSAllowedOrigins) != 0
}
//-----------------------------------------------------------------------------
// P2PConfig
@@ -458,7 +392,7 @@ func DefaultP2PConfig() *P2PConfig {
RecvRate: 5120000, // 5 mB/s
PexReactor: true,
SeedMode: false,
AllowDuplicateIP: false,
AllowDuplicateIP: true, // so non-breaking yet
HandshakeTimeout: 20 * time.Second,
DialTimeout: 3 * time.Second,
TestDialFail: false,
@@ -563,11 +497,6 @@ func (cfg *MempoolConfig) WalDir() string {
return rootify(cfg.WalPath, cfg.RootDir)
}
// WalEnabled returns true if the WAL is enabled.
func (cfg *MempoolConfig) WalEnabled() bool {
return cfg.WalPath != ""
}
// ValidateBasic performs basic validation (checking param bounds, etc.) and
// returns an error if any check fails.
func (cfg *MempoolConfig) ValidateBasic() error {
@@ -798,12 +727,12 @@ type InstrumentationConfig struct {
PrometheusListenAddr string `mapstructure:"prometheus_listen_addr"`
// Maximum number of simultaneous connections.
// If you want to accept a larger number than the default, make sure
// If you want to accept a more significant number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
MaxOpenConnections int `mapstructure:"max_open_connections"`
// Instrumentation namespace.
// Tendermint instrumentation namespace.
Namespace string `mapstructure:"namespace"`
}
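One side of this compare adds a log_format field plus a BaseConfig.ValidateBasic check for it. A self-contained sketch of that validation logic, re-derived from the hunk above rather than imported from the package:

```go
package main

import (
	"errors"
	"fmt"
)

// Mirrors the LogFormatPlain / LogFormatJSON constants in the config.go hunk.
const (
	logFormatPlain = "plain" // colored text
	logFormatJSON  = "json"
)

// validateLogFormat reproduces the ValidateBasic branch shown above:
// anything other than "plain" or "json" is rejected.
func validateLogFormat(format string) error {
	switch format {
	case logFormatPlain, logFormatJSON:
		return nil
	default:
		return errors.New("unknown log_format (must be 'plain' or 'json')")
	}
}

func main() {
	fmt.Println(validateLogFormat("plain")) // <nil>
	fmt.Println(validateLogFormat("xml"))   // unknown log_format ...
}
```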

View File

@@ -86,19 +86,13 @@ db_dir = "{{ js .BaseConfig.DBPath }}"
# Output level for logging, including package level options
log_level = "{{ .BaseConfig.LogLevel }}"
# Output format: 'plain' (colored text) or 'json'
log_format = "{{ .BaseConfig.LogFormat }}"
##### additional base config options #####
# Path to the JSON file containing the initial validator set and other meta data
genesis_file = "{{ js .BaseConfig.Genesis }}"
# Path to the JSON file containing the private key to use as a validator in the consensus protocol
priv_validator_key_file = "{{ js .BaseConfig.PrivValidatorKey }}"
# Path to the JSON file containing the last sign state of a validator
priv_validator_state_file = "{{ js .BaseConfig.PrivValidatorState }}"
priv_validator_file = "{{ js .BaseConfig.PrivValidator }}"
# TCP or UNIX socket address for Tendermint to listen on for
# connections from an external PrivValidator process
@@ -125,24 +119,13 @@ filter_peers = {{ .BaseConfig.FilterPeers }}
# TCP or UNIX socket address for the RPC server to listen on
laddr = "{{ .RPC.ListenAddress }}"
# A list of origins a cross-domain request can be executed from
# Default value '[]' disables cors support
# Use '["*"]' to allow any origin
cors_allowed_origins = [{{ range .RPC.CORSAllowedOrigins }}{{ printf "%q, " . }}{{end}}]
# A list of methods the client is allowed to use with cross-domain requests
cors_allowed_methods = [{{ range .RPC.CORSAllowedMethods }}{{ printf "%q, " . }}{{end}}]
# A list of non-simple headers the client is allowed to use with cross-domain requests
cors_allowed_headers = [{{ range .RPC.CORSAllowedHeaders }}{{ printf "%q, " . }}{{end}}]
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
grpc_laddr = "{{ .RPC.GRPCListenAddress }}"
# Maximum number of simultaneous connections.
# Does not include RPC (HTTP&WebSocket) connections. See max_open_connections
# If you want to accept a larger number than the default, make sure
# If you want to accept a more significant number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -154,7 +137,7 @@ unsafe = {{ .RPC.Unsafe }}
# Maximum number of simultaneous connections (including WebSocket).
# Does not include gRPC connections. See grpc_max_open_connections
# If you want to accept a larger number than the default, make sure
# If you want to accept a more significant number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
@@ -263,17 +246,14 @@ create_empty_blocks_interval = "{{ .Consensus.CreateEmptyBlocksInterval }}"
peer_gossip_sleep_duration = "{{ .Consensus.PeerGossipSleepDuration }}"
peer_query_maj23_sleep_duration = "{{ .Consensus.PeerQueryMaj23SleepDuration }}"
# Block time parameters. Corresponds to the minimum time increment between consecutive blocks.
blocktime_iota = "{{ .Consensus.BlockTimeIota }}"
##### transactions indexer configuration options #####
[tx_index]
# What indexer to use for transactions
#
# Options:
# 1) "null"
# 2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
# 1) "null" (default)
# 2) "kv" - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "{{ .TxIndex.Indexer }}"
# Comma-separated list of tags to index (by default the only tag is "tx.hash")
@@ -305,7 +285,7 @@ prometheus = {{ .Instrumentation.Prometheus }}
prometheus_listen_addr = "{{ .Instrumentation.PrometheusListenAddr }}"
# Maximum number of simultaneous connections.
# If you want to accept a larger number than the default, make sure
# If you want to accept a more significant number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
max_open_connections = {{ .Instrumentation.MaxOpenConnections }}
@@ -345,8 +325,7 @@ func ResetTestRoot(testName string) *Config {
baseConfig := DefaultBaseConfig()
configFilePath := filepath.Join(rootDir, defaultConfigFilePath)
genesisFilePath := filepath.Join(rootDir, baseConfig.Genesis)
privKeyFilePath := filepath.Join(rootDir, baseConfig.PrivValidatorKey)
privStateFilePath := filepath.Join(rootDir, baseConfig.PrivValidatorState)
privFilePath := filepath.Join(rootDir, baseConfig.PrivValidator)
// Write default config file if missing.
if !cmn.FileExists(configFilePath) {
@@ -356,15 +335,14 @@ func ResetTestRoot(testName string) *Config {
cmn.MustWriteFile(genesisFilePath, []byte(testGenesis), 0644)
}
// we always overwrite the priv val
cmn.MustWriteFile(privKeyFilePath, []byte(testPrivValidatorKey), 0644)
cmn.MustWriteFile(privStateFilePath, []byte(testPrivValidatorState), 0644)
cmn.MustWriteFile(privFilePath, []byte(testPrivValidator), 0644)
config := TestConfig().SetRoot(rootDir)
return config
}
var testGenesis = `{
"genesis_time": "2018-10-10T08:20:13.695936996Z",
"genesis_time": "2017-10-10T08:20:13.695936996Z",
"chain_id": "tendermint_test",
"validators": [
{
@@ -379,7 +357,7 @@ var testGenesis = `{
"app_hash": ""
}`
var testPrivValidatorKey = `{
var testPrivValidator = `{
"address": "A3258DCBF45DCA0DF052981870F2D1441A36D145",
"pub_key": {
"type": "tendermint/PubKeyEd25519",
@@ -388,11 +366,8 @@ var testPrivValidatorKey = `{
"priv_key": {
"type": "tendermint/PrivKeyEd25519",
"value": "EVkqJO/jIXp3rkASXfh9YnyToYXRXhBr6g9cQVxPFnQBP/5povV4HTjvsy530kybxKHwEi85iU8YL0qQhSYVoQ=="
}
}`
var testPrivValidatorState = `{
"height": "0",
"round": "0",
"step": 0
},
"last_height": "0",
"last_round": "0",
"last_step": 0
}`

View File

@@ -60,7 +60,7 @@ func TestEnsureTestRoot(t *testing.T) {
// TODO: make sure the cfg returned and testconfig are the same!
baseConfig := DefaultBaseConfig()
ensureFiles(t, rootDir, defaultDataDir, baseConfig.Genesis, baseConfig.PrivValidatorKey, baseConfig.PrivValidatorState)
ensureFiles(t, rootDir, defaultDataDir, baseConfig.Genesis, baseConfig.PrivValidator)
}
func checkConfig(configFile string) bool {

View File

@@ -179,16 +179,16 @@ func byzantineDecideProposalFunc(t *testing.T, height int64, round int, cs *Cons
// Create a new proposal block from state/txs from the mempool.
block1, blockParts1 := cs.createProposalBlock()
polRound, propBlockID := cs.ValidRound, types.BlockID{block1.Hash(), blockParts1.Header()}
proposal1 := types.NewProposal(height, round, polRound, propBlockID)
polRound, polBlockID := cs.Votes.POLInfo()
proposal1 := types.NewProposal(height, round, blockParts1.Header(), polRound, polBlockID)
if err := cs.privValidator.SignProposal(cs.state.ChainID, proposal1); err != nil {
t.Error(err)
}
// Create a new proposal block from state/txs from the mempool.
block2, blockParts2 := cs.createProposalBlock()
polRound, propBlockID = cs.ValidRound, types.BlockID{block2.Hash(), blockParts2.Header()}
proposal2 := types.NewProposal(height, round, polRound, propBlockID)
polRound, polBlockID = cs.Votes.POLInfo()
proposal2 := types.NewProposal(height, round, blockParts2.Header(), polRound, polBlockID)
if err := cs.privValidator.SignProposal(cs.state.ChainID, proposal2); err != nil {
t.Error(err)
}
@@ -263,7 +263,7 @@ func (br *ByzantineReactor) AddPeer(peer p2p.Peer) {
// Send our state to peer.
// If we're fast_syncing, broadcast a RoundStepMessage later upon SwitchToConsensus().
if !br.reactor.fastSync {
br.reactor.sendNewRoundStepMessage(peer)
br.reactor.sendNewRoundStepMessages(peer)
}
}
func (br *ByzantineReactor) RemovePeer(peer p2p.Peer, reason interface{}) {

View File

@@ -6,18 +6,14 @@ import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"path"
"reflect"
"sort"
"sync"
"testing"
"time"
"github.com/go-kit/kit/log/term"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
abci "github.com/tendermint/tendermint/abci/types"
bc "github.com/tendermint/tendermint/blockchain"
cfg "github.com/tendermint/tendermint/config"
@@ -31,6 +27,11 @@ import (
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
"github.com/go-kit/kit/log/term"
)
const (
@@ -71,10 +72,9 @@ func NewValidatorStub(privValidator types.PrivValidator, valIndex int) *validato
}
func (vs *validatorStub) signVote(voteType types.SignedMsgType, hash []byte, header types.PartSetHeader) (*types.Vote, error) {
addr := vs.PrivValidator.GetPubKey().Address()
vote := &types.Vote{
ValidatorIndex: vs.Index,
ValidatorAddress: addr,
ValidatorAddress: vs.PrivValidator.GetAddress(),
Height: vs.Height,
Round: vs.Round,
Timestamp: tmtime.Now(),
@@ -130,8 +130,8 @@ func decideProposal(cs1 *ConsensusState, vs *validatorStub, height int64, round
}
// Make proposal
polRound, propBlockID := cs1.ValidRound, types.BlockID{block.Hash(), blockParts.Header()}
proposal = types.NewProposal(height, round, polRound, propBlockID)
polRound, polBlockID := cs1.Votes.POLInfo()
proposal = types.NewProposal(height, round, blockParts.Header(), polRound, polBlockID)
if err := vs.SignProposal(cs1.state.ChainID, proposal); err != nil {
panic(err)
}
@@ -151,9 +151,8 @@ func signAddVotes(to *ConsensusState, voteType types.SignedMsgType, hash []byte,
func validatePrevote(t *testing.T, cs *ConsensusState, round int, privVal *validatorStub, blockHash []byte) {
prevotes := cs.Votes.Prevotes(round)
address := privVal.GetPubKey().Address()
var vote *types.Vote
if vote = prevotes.GetByAddress(address); vote == nil {
if vote = prevotes.GetByAddress(privVal.GetAddress()); vote == nil {
panic("Failed to find prevote from validator")
}
if blockHash == nil {
@@ -169,9 +168,8 @@ func validatePrevote(t *testing.T, cs *ConsensusState, round int, privVal *valid
func validateLastPrecommit(t *testing.T, cs *ConsensusState, privVal *validatorStub, blockHash []byte) {
votes := cs.LastCommit
address := privVal.GetPubKey().Address()
var vote *types.Vote
if vote = votes.GetByAddress(address); vote == nil {
if vote = votes.GetByAddress(privVal.GetAddress()); vote == nil {
panic("Failed to find precommit from validator")
}
if !bytes.Equal(vote.BlockID.Hash, blockHash) {
@@ -181,9 +179,8 @@ func validateLastPrecommit(t *testing.T, cs *ConsensusState, privVal *validatorS
func validatePrecommit(t *testing.T, cs *ConsensusState, thisRound, lockRound int, privVal *validatorStub, votedBlockHash, lockedBlockHash []byte) {
precommits := cs.Votes.Precommits(thisRound)
address := privVal.GetPubKey().Address()
var vote *types.Vote
if vote = precommits.GetByAddress(address); vote == nil {
if vote = precommits.GetByAddress(privVal.GetAddress()); vote == nil {
panic("Failed to find precommit from validator")
}
@@ -284,10 +281,9 @@ func newConsensusStateWithConfigAndBlockStore(thisConfig *cfg.Config, state sm.S
}
func loadPrivValidator(config *cfg.Config) *privval.FilePV {
privValidatorKeyFile := config.PrivValidatorKeyFile()
ensureDir(filepath.Dir(privValidatorKeyFile), 0700)
privValidatorStateFile := config.PrivValidatorStateFile()
privValidator := privval.LoadOrGenFilePV(privValidatorKeyFile, privValidatorStateFile)
privValidatorFile := config.PrivValidatorFile()
ensureDir(path.Dir(privValidatorFile), 0700)
privValidator := privval.LoadOrGenFilePV(privValidatorFile)
privValidator.Reset()
return privValidator
}
@@ -409,24 +405,8 @@ func ensureNewVote(voteCh <-chan interface{}, height int64, round int) {
}
func ensureNewRound(roundCh <-chan interface{}, height int64, round int) {
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewRound event")
case ev := <-roundCh:
rs, ok := ev.(types.EventDataNewRound)
if !ok {
panic(
fmt.Sprintf(
"expected a EventDataNewRound, got %v.Wrong subscription channel?",
reflect.TypeOf(rs)))
}
if rs.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, rs.Height))
}
if rs.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, rs.Round))
}
}
ensureNewEvent(roundCh, height, round, ensureTimeout,
"Timeout expired while waiting for NewRound event")
}
func ensureNewTimeout(timeoutCh <-chan interface{}, height int64, round int, timeout int64) {
@@ -436,29 +416,8 @@ func ensureNewTimeout(timeoutCh <-chan interface{}, height int64, round int, tim
}
func ensureNewProposal(proposalCh <-chan interface{}, height int64, round int) {
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewProposal event")
case ev := <-proposalCh:
rs, ok := ev.(types.EventDataCompleteProposal)
if !ok {
panic(
fmt.Sprintf(
"expected a EventDataCompleteProposal, got %v.Wrong subscription channel?",
reflect.TypeOf(rs)))
}
if rs.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, rs.Height))
}
if rs.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, rs.Round))
}
}
}
func ensureNewValidBlock(validBlockCh <-chan interface{}, height int64, round int) {
ensureNewEvent(validBlockCh, height, round, ensureTimeout,
"Timeout expired while waiting for NewValidBlock event")
ensureNewEvent(proposalCh, height, round, ensureTimeout,
"Timeout expired while waiting for NewProposal event")
}
func ensureNewBlock(blockCh <-chan interface{}, height int64) {
@@ -528,30 +487,6 @@ func ensureVote(voteCh <-chan interface{}, height int64, round int,
}
}
func ensureProposal(proposalCh <-chan interface{}, height int64, round int, propId types.BlockID) {
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewProposal event")
case ev := <-proposalCh:
rs, ok := ev.(types.EventDataCompleteProposal)
if !ok {
panic(
fmt.Sprintf(
"expected a EventDataCompleteProposal, got %v.Wrong subscription channel?",
reflect.TypeOf(rs)))
}
if rs.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, rs.Height))
}
if rs.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, rs.Round))
}
if !rs.BlockID.Equals(propId) {
panic("Proposed block does not match expected block")
}
}
}
func ensurePrecommit(voteCh <-chan interface{}, height int64, round int) {
ensureVote(voteCh, height, round, types.PrecommitType)
}
@@ -595,7 +530,7 @@ func randConsensusNet(nValidators int, testName string, tickerFunc func() Timeou
for _, opt := range configOpts {
opt(thisConfig)
}
ensureDir(filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
app := appFunc()
vals := types.TM2PB.ValidatorUpdates(state.Validators)
app.InitChain(abci.RequestInitChain{Validators: vals})
@@ -616,21 +551,16 @@ func randConsensusNetWithPeers(nValidators, nPeers int, testName string, tickerF
stateDB := dbm.NewMemDB() // each state needs its own db
state, _ := sm.LoadStateFromDBOrGenesisDoc(stateDB, genDoc)
thisConfig := ResetConfig(fmt.Sprintf("%s_%d", testName, i))
ensureDir(filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
var privVal types.PrivValidator
if i < nValidators {
privVal = privVals[i]
} else {
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
tempFile, err := ioutil.TempFile("", "priv_validator_")
if err != nil {
panic(err)
}
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
if err != nil {
panic(err)
}
privVal = privval.GenFilePV(tempKeyFile.Name(), tempStateFile.Name())
privVal = privval.GenFilePV(tempFile.Name())
}
app := appFunc()
@@ -680,6 +610,8 @@ func randGenesisDoc(numValidators int, randPower bool, minPower int64) (*types.G
func randGenesisState(numValidators int, randPower bool, minPower int64) (sm.State, []types.PrivValidator) {
genDoc, privValidators := randGenesisDoc(numValidators, randPower, minPower)
s0, _ := sm.MakeGenesisState(genDoc)
db := dbm.NewMemDB() // remove this ?
sm.SaveState(db, s0)
return s0, privValidators
}
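A number of hunks above swap privVal.GetAddress() for privVal.GetPubKey().Address(); the point is that the address is always derivable from the public key. A small sketch using the ed25519 package already imported elsewhere in this diff:

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/crypto/ed25519"
)

func main() {
	// Generate a throwaway key, as the remote-signer example in this diff does.
	priv := ed25519.GenPrivKey()
	pub := priv.PubKey()

	// Address() hashes the public key down to the 20-byte validator address,
	// so callers never need a separate GetAddress accessor on the signer.
	fmt.Printf("address: %v\n", pub.Address())
}
```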

View File

@@ -10,7 +10,7 @@ import (
"github.com/tendermint/tendermint/abci/example/code"
abci "github.com/tendermint/tendermint/abci/types"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
@@ -18,17 +18,12 @@ func init() {
config = ResetConfig("consensus_mempool_test")
}
// for testing
func assertMempool(txn txNotifier) sm.Mempool {
return txn.(sm.Mempool)
}
func TestMempoolNoProgressUntilTxsAvailable(t *testing.T) {
config := ResetConfig("consensus_mempool_txs_available_test")
config.Consensus.CreateEmptyBlocks = false
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
assertMempool(cs.txNotifier).EnableTxsAvailable()
cs.mempool.EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
startTestRound(cs, height, round)
@@ -46,7 +41,7 @@ func TestMempoolProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
config.Consensus.CreateEmptyBlocksInterval = ensureTimeout
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
assertMempool(cs.txNotifier).EnableTxsAvailable()
cs.mempool.EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
startTestRound(cs, height, round)
@@ -61,7 +56,7 @@ func TestMempoolProgressInHigherRound(t *testing.T) {
config.Consensus.CreateEmptyBlocks = false
state, privVals := randGenesisState(1, false, 10)
cs := newConsensusStateWithConfig(config, state, privVals[0], NewCounterApplication())
assertMempool(cs.txNotifier).EnableTxsAvailable()
cs.mempool.EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
newRoundCh := subscribe(cs.eventBus, types.EventQueryNewRound)
@@ -77,19 +72,19 @@ func TestMempoolProgressInHigherRound(t *testing.T) {
}
startTestRound(cs, height, round)
ensureNewRound(newRoundCh, height, round) // first round at first height
ensureNewEventOnChannel(newBlockCh) // first block gets committed
ensureNewRoundStep(newRoundCh, height, round) // first round at first height
ensureNewEventOnChannel(newBlockCh) // first block gets committed
height = height + 1 // moving to the next height
round = 0
ensureNewRound(newRoundCh, height, round) // first round at next height
deliverTxsRange(cs, 0, 1) // we deliver txs, but don't set a proposal so we get the next round
ensureNewRoundStep(newRoundCh, height, round) // first round at next height
deliverTxsRange(cs, 0, 1) // we deliver txs, but don't set a proposal so we get the next round
ensureNewTimeout(timeoutCh, height, round, cs.config.TimeoutPropose.Nanoseconds())
round = round + 1 // moving to the next round
ensureNewRound(newRoundCh, height, round) // wait for the next round
ensureNewEventOnChannel(newBlockCh) // now we can commit the block
round = round + 1 // moving to the next round
ensureNewRoundStep(newRoundCh, height, round) // wait for the next round
ensureNewEventOnChannel(newBlockCh) // now we can commit the block
}
func deliverTxsRange(cs *ConsensusState, start, end int) {
@@ -97,7 +92,7 @@ func deliverTxsRange(cs *ConsensusState, start, end int) {
for i := start; i < end; i++ {
txBytes := make([]byte, 8)
binary.BigEndian.PutUint64(txBytes, uint64(i))
err := assertMempool(cs.txNotifier).CheckTx(txBytes, nil)
err := cs.mempool.CheckTx(txBytes, nil)
if err != nil {
panic(fmt.Sprintf("Error after CheckTx: %v", err))
}
@@ -147,7 +142,7 @@ func TestMempoolRmBadTx(t *testing.T) {
// Try to send the tx through the mempool.
// CheckTx should not err, but the app should return a bad abci code
// and the tx should get removed from the pool
err := assertMempool(cs.txNotifier).CheckTx(txBytes, func(r *abci.Response) {
err := cs.mempool.CheckTx(txBytes, func(r *abci.Response) {
if r.GetCheckTx().Code != code.CodeTypeBadNonce {
t.Fatalf("expected checktx to return bad nonce, got %v", r)
}
@@ -159,7 +154,7 @@ func TestMempoolRmBadTx(t *testing.T) {
// check for the tx
for {
txs := assertMempool(cs.txNotifier).ReapMaxBytesMaxGas(int64(len(txBytes)), -1)
txs := cs.mempool.ReapMaxBytesMaxGas(int64(len(txBytes)), -1)
if len(txs) == 0 {
emptyMempoolCh <- struct{}{}
return

View File

@@ -8,11 +8,7 @@ import (
stdprometheus "github.com/prometheus/client_golang/prometheus"
)
const (
// MetricsSubsystem is a subsystem shared by all metrics exposed by this
// package.
MetricsSubsystem = "consensus"
)
const MetricsSubsystem = "consensus"
// Metrics contains metrics exposed by this package.
type Metrics struct {
@@ -54,107 +50,101 @@ type Metrics struct {
}
// PrometheusMetrics returns Metrics build using Prometheus client library.
// Optionally, labels can be provided along with their values ("foo",
// "fooValue").
func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
labels := []string{}
for i := 0; i < len(labelsAndValues); i += 2 {
labels = append(labels, labelsAndValues[i])
}
func PrometheusMetrics(namespace string) *Metrics {
return &Metrics{
Height: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "height",
Help: "Height of the chain.",
}, labels).With(labelsAndValues...),
}, []string{}),
Rounds: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "rounds",
Help: "Number of rounds.",
}, labels).With(labelsAndValues...),
}, []string{}),
Validators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "validators",
Help: "Number of validators.",
}, labels).With(labelsAndValues...),
}, []string{}),
ValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "validators_power",
Help: "Total power of all validators.",
}, labels).With(labelsAndValues...),
}, []string{}),
MissingValidators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "missing_validators",
Help: "Number of validators who did not sign.",
}, labels).With(labelsAndValues...),
}, []string{}),
MissingValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "missing_validators_power",
Help: "Total power of the missing validators.",
}, labels).With(labelsAndValues...),
}, []string{}),
ByzantineValidators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "byzantine_validators",
Help: "Number of validators who tried to double sign.",
}, labels).With(labelsAndValues...),
}, []string{}),
ByzantineValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "byzantine_validators_power",
Help: "Total power of the byzantine validators.",
}, labels).With(labelsAndValues...),
}, []string{}),
BlockIntervalSeconds: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_interval_seconds",
Help: "Time between this and the last block.",
}, labels).With(labelsAndValues...),
}, []string{}),
NumTxs: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "num_txs",
Help: "Number of transactions.",
}, labels).With(labelsAndValues...),
}, []string{}),
BlockSizeBytes: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_size_bytes",
Help: "Size of the block.",
}, labels).With(labelsAndValues...),
}, []string{}),
TotalTxs: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "total_txs",
Help: "Total number of transactions.",
}, labels).With(labelsAndValues...),
}, []string{}),
CommittedHeight: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "latest_block_height",
Help: "The latest block height.",
}, labels).With(labelsAndValues...),
}, []string{}),
FastSyncing: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "fast_syncing",
Help: "Whether or not a node is fast syncing. 1 if yes, 0 if no.",
}, labels).With(labelsAndValues...),
}, []string{}),
BlockParts: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_parts",
Help: "Number of blockparts transmitted by peer.",
}, append(labels, "peer_id")).With(labelsAndValues...),
}, []string{"peer_id"}),
}
}
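The PrometheusMetrics change above threads optional label name/value pairs through every gauge: even-indexed entries become label names at construction time, and the full pair list is then bound with With. A minimal single-gauge sketch of that plumbing (the heightGauge helper is illustrative, not part of the package):

```go
package main

import (
	"github.com/go-kit/kit/metrics"
	"github.com/go-kit/kit/metrics/prometheus"
	stdprometheus "github.com/prometheus/client_golang/prometheus"
)

// heightGauge shows the labelsAndValues handling used by PrometheusMetrics
// above: collect the names (even indexes) for gauge construction, then bind
// the name/value pairs with With.
func heightGauge(namespace string, labelsAndValues ...string) metrics.Gauge {
	labels := []string{}
	for i := 0; i < len(labelsAndValues); i += 2 {
		labels = append(labels, labelsAndValues[i])
	}
	return prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
		Namespace: namespace,
		Subsystem: "consensus",
		Name:      "height",
		Help:      "Height of the chain.",
	}, labels).With(labelsAndValues...)
}

func main() {
	g := heightGauge("tendermint", "chain_id", "test-chain")
	g.Set(1) // exported as tendermint_consensus_height{chain_id="test-chain"} 1
}
```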

View File

@@ -9,6 +9,7 @@ import (
"github.com/pkg/errors"
"github.com/tendermint/go-amino"
cstypes "github.com/tendermint/tendermint/consensus/types"
cmn "github.com/tendermint/tendermint/libs/common"
tmevents "github.com/tendermint/tendermint/libs/events"
@@ -173,7 +174,7 @@ func (conR *ConsensusReactor) AddPeer(peer p2p.Peer) {
// Send our state to peer.
// If we're fast_syncing, broadcast a RoundStepMessage later upon SwitchToConsensus().
if !conR.FastSync() {
conR.sendNewRoundStepMessage(peer)
conR.sendNewRoundStepMessages(peer)
}
}
@@ -183,11 +184,7 @@ func (conR *ConsensusReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
return
}
// TODO
// ps, ok := peer.Get(PeerStateKey).(*PeerState)
// if !ok {
// panic(fmt.Sprintf("Peer %v has no state", peer))
// }
// ps.Disconnect()
//peer.Get(PeerStateKey).(*PeerState).Disconnect()
}
// Receive implements Reactor
@@ -208,28 +205,18 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
conR.Switch.StopPeerForError(src, err)
return
}
if err = msg.ValidateBasic(); err != nil {
conR.Logger.Error("Peer sent us invalid msg", "peer", src, "msg", msg, "err", err)
conR.Switch.StopPeerForError(src, err)
return
}
conR.Logger.Debug("Receive", "src", src, "chId", chID, "msg", msg)
// Get peer states
ps, ok := src.Get(types.PeerStateKey).(*PeerState)
if !ok {
panic(fmt.Sprintf("Peer %v has no state", src))
}
ps := src.Get(types.PeerStateKey).(*PeerState)
switch chID {
case StateChannel:
switch msg := msg.(type) {
case *NewRoundStepMessage:
ps.ApplyNewRoundStepMessage(msg)
case *NewValidBlockMessage:
ps.ApplyNewValidBlockMessage(msg)
case *CommitStepMessage:
ps.ApplyCommitStepMessage(msg)
case *HasVoteMessage:
ps.ApplyHasVoteMessage(msg)
case *VoteSetMaj23Message:
@@ -255,7 +242,8 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
case types.PrecommitType:
ourVotes = votes.Precommits(msg.Round).BitArrayByBlockID(msg.BlockID)
default:
panic("Bad VoteSetBitsMessage field Type. Forgot to add a check in ValidateBasic?")
conR.Logger.Error("Bad VoteSetBitsMessage field Type")
return
}
src.TrySend(VoteSetBitsChannel, cdc.MustMarshalBinaryBare(&VoteSetBitsMessage{
Height: msg.Height,
@@ -264,6 +252,11 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
BlockID: msg.BlockID,
Votes: ourVotes,
}))
case *ProposalHeartbeatMessage:
hb := msg.Heartbeat
conR.Logger.Debug("Received proposal heartbeat message",
"height", hb.Height, "round", hb.Round, "sequence", hb.Sequence,
"valIdx", hb.ValidatorIndex, "valAddr", hb.ValidatorAddress)
default:
conR.Logger.Error(fmt.Sprintf("Unknown message type %v", reflect.TypeOf(msg)))
}
@@ -295,9 +288,9 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
switch msg := msg.(type) {
case *VoteMessage:
cs := conR.conS
cs.mtx.RLock()
cs.mtx.Lock()
height, valSize, lastCommitSize := cs.Height, cs.Validators.Size(), cs.LastCommit.Size()
cs.mtx.RUnlock()
cs.mtx.Unlock()
ps.EnsureVoteBitArrays(height, valSize)
ps.EnsureVoteBitArrays(height-1, lastCommitSize)
ps.SetHasVote(msg.Vote)
@@ -329,7 +322,8 @@ func (conR *ConsensusReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte)
case types.PrecommitType:
ourVotes = votes.Precommits(msg.Round).BitArrayByBlockID(msg.BlockID)
default:
panic("Bad VoteSetBitsMessage field Type. Forgot to add a check in ValidateBasic?")
conR.Logger.Error("Bad VoteSetBitsMessage field Type")
return
}
ps.ApplyVoteSetBitsMessage(msg, ourVotes)
} else {
@@ -364,19 +358,14 @@ func (conR *ConsensusReactor) FastSync() bool {
//--------------------------------------
// subscribeToBroadcastEvents subscribes for new round steps and votes
// using internal pubsub defined on state to broadcast
// subscribeToBroadcastEvents subscribes for new round steps, votes and
// proposal heartbeats using internal pubsub defined on state to broadcast
// them to peers upon receiving.
func (conR *ConsensusReactor) subscribeToBroadcastEvents() {
const subscriber = "consensus-reactor"
conR.conS.evsw.AddListenerForEvent(subscriber, types.EventNewRoundStep,
func(data tmevents.EventData) {
conR.broadcastNewRoundStepMessage(data.(*cstypes.RoundState))
})
conR.conS.evsw.AddListenerForEvent(subscriber, types.EventValidBlock,
func(data tmevents.EventData) {
conR.broadcastNewValidBlockMessage(data.(*cstypes.RoundState))
conR.broadcastNewRoundStepMessages(data.(*cstypes.RoundState))
})
conR.conS.evsw.AddListenerForEvent(subscriber, types.EventVote,
@@ -384,6 +373,10 @@ func (conR *ConsensusReactor) subscribeToBroadcastEvents() {
conR.broadcastHasVoteMessage(data.(*types.Vote))
})
conR.conS.evsw.AddListenerForEvent(subscriber, types.EventProposalHeartbeat,
func(data tmevents.EventData) {
conR.broadcastProposalHeartbeatMessage(data.(*types.Heartbeat))
})
}
func (conR *ConsensusReactor) unsubscribeFromBroadcastEvents() {
@@ -391,20 +384,21 @@ func (conR *ConsensusReactor) unsubscribeFromBroadcastEvents() {
conR.conS.evsw.RemoveListener(subscriber)
}
func (conR *ConsensusReactor) broadcastNewRoundStepMessage(rs *cstypes.RoundState) {
nrsMsg := makeRoundStepMessage(rs)
conR.Switch.Broadcast(StateChannel, cdc.MustMarshalBinaryBare(nrsMsg))
func (conR *ConsensusReactor) broadcastProposalHeartbeatMessage(hb *types.Heartbeat) {
conR.Logger.Debug("Broadcasting proposal heartbeat message",
"height", hb.Height, "round", hb.Round, "sequence", hb.Sequence)
msg := &ProposalHeartbeatMessage{hb}
conR.Switch.Broadcast(StateChannel, cdc.MustMarshalBinaryBare(msg))
}
func (conR *ConsensusReactor) broadcastNewValidBlockMessage(rs *cstypes.RoundState) {
csMsg := &NewValidBlockMessage{
Height: rs.Height,
Round: rs.Round,
BlockPartsHeader: rs.ProposalBlockParts.Header(),
BlockParts: rs.ProposalBlockParts.BitArray(),
IsCommit: rs.Step == cstypes.RoundStepCommit,
func (conR *ConsensusReactor) broadcastNewRoundStepMessages(rs *cstypes.RoundState) {
nrsMsg, csMsg := makeRoundStepMessages(rs)
if nrsMsg != nil {
conR.Switch.Broadcast(StateChannel, cdc.MustMarshalBinaryBare(nrsMsg))
}
if csMsg != nil {
conR.Switch.Broadcast(StateChannel, cdc.MustMarshalBinaryBare(csMsg))
}
conR.Switch.Broadcast(StateChannel, cdc.MustMarshalBinaryBare(csMsg))
}
// Broadcasts HasVoteMessage to peers that care.
@@ -419,10 +413,7 @@ func (conR *ConsensusReactor) broadcastHasVoteMessage(vote *types.Vote) {
/*
// TODO: Make this broadcast more selective.
for _, peer := range conR.Switch.Peers().List() {
ps, ok := peer.Get(PeerStateKey).(*PeerState)
if !ok {
panic(fmt.Sprintf("Peer %v has no state", peer))
}
ps := peer.Get(PeerStateKey).(*PeerState)
prs := ps.GetRoundState()
if prs.Height == vote.Height {
// TODO: Also filter on round?
@@ -436,7 +427,7 @@ func (conR *ConsensusReactor) broadcastHasVoteMessage(vote *types.Vote) {
*/
}
func makeRoundStepMessage(rs *cstypes.RoundState) (nrsMsg *NewRoundStepMessage) {
func makeRoundStepMessages(rs *cstypes.RoundState) (nrsMsg *NewRoundStepMessage, csMsg *CommitStepMessage) {
nrsMsg = &NewRoundStepMessage{
Height: rs.Height,
Round: rs.Round,
@@ -444,13 +435,25 @@ func makeRoundStepMessage(rs *cstypes.RoundState) (nrsMsg *NewRoundStepMessage)
SecondsSinceStartTime: int(time.Since(rs.StartTime).Seconds()),
LastCommitRound: rs.LastCommit.Round(),
}
if rs.Step == cstypes.RoundStepCommit {
csMsg = &CommitStepMessage{
Height: rs.Height,
BlockPartsHeader: rs.ProposalBlockParts.Header(),
BlockParts: rs.ProposalBlockParts.BitArray(),
}
}
return
}
func (conR *ConsensusReactor) sendNewRoundStepMessage(peer p2p.Peer) {
func (conR *ConsensusReactor) sendNewRoundStepMessages(peer p2p.Peer) {
rs := conR.conS.GetRoundState()
nrsMsg := makeRoundStepMessage(rs)
peer.Send(StateChannel, cdc.MustMarshalBinaryBare(nrsMsg))
nrsMsg, csMsg := makeRoundStepMessages(rs)
if nrsMsg != nil {
peer.Send(StateChannel, cdc.MustMarshalBinaryBare(nrsMsg))
}
if csMsg != nil {
peer.Send(StateChannel, cdc.MustMarshalBinaryBare(csMsg))
}
}
func (conR *ConsensusReactor) gossipDataRoutine(peer p2p.Peer, ps *PeerState) {
@@ -521,7 +524,6 @@ OUTER_LOOP:
msg := &ProposalMessage{Proposal: rs.Proposal}
logger.Debug("Sending proposal", "height", prs.Height, "round", prs.Round)
if peer.Send(DataChannel, cdc.MustMarshalBinaryBare(msg)) {
// NOTE[ZM]: A peer might have received a different proposal msg, so this Proposal msg will be rejected!
ps.SetHasProposal(rs.Proposal)
}
}
@@ -820,10 +822,7 @@ func (conR *ConsensusReactor) peerStatsRoutine() {
continue
}
// Get peer state
ps, ok := peer.Get(types.PeerStateKey).(*PeerState)
if !ok {
panic(fmt.Sprintf("Peer %v has no state", peer))
}
ps := peer.Get(types.PeerStateKey).(*PeerState)
switch msg.Msg.(type) {
case *VoteMessage:
if numVotes := ps.RecordVote(); numVotes%votesToContributeToBecomeGoodPeer == 0 {
@@ -856,10 +855,7 @@ func (conR *ConsensusReactor) StringIndented(indent string) string {
s := "ConsensusReactor{\n"
s += indent + " " + conR.conS.StringIndented(indent+" ") + "\n"
for _, peer := range conR.Switch.Peers().List() {
ps, ok := peer.Get(types.PeerStateKey).(*PeerState)
if !ok {
panic(fmt.Sprintf("Peer %v has no state", peer))
}
ps := peer.Get(types.PeerStateKey).(*PeerState)
s += indent + " " + ps.StringIndented(indent+" ") + "\n"
}
s += indent + "}"
@@ -968,20 +964,13 @@ func (ps *PeerState) SetHasProposal(proposal *types.Proposal) {
if ps.PRS.Height != proposal.Height || ps.PRS.Round != proposal.Round {
return
}
if ps.PRS.Proposal {
return
}
ps.PRS.Proposal = true
// ps.PRS.ProposalBlockParts is set due to NewValidBlockMessage
if ps.PRS.ProposalBlockParts != nil {
return
}
ps.PRS.ProposalBlockPartsHeader = proposal.BlockID.PartsHeader
ps.PRS.ProposalBlockParts = cmn.NewBitArray(proposal.BlockID.PartsHeader.Total)
ps.PRS.ProposalBlockPartsHeader = proposal.BlockPartsHeader
ps.PRS.ProposalBlockParts = cmn.NewBitArray(proposal.BlockPartsHeader.Total)
ps.PRS.ProposalPOLRound = proposal.POLRound
ps.PRS.ProposalPOL = nil // Nil until ProposalPOLMessage received.
}
@@ -1017,11 +1006,7 @@ func (ps *PeerState) PickSendVote(votes types.VoteSetReader) bool {
if vote, ok := ps.PickVoteToSend(votes); ok {
msg := &VoteMessage{vote}
ps.logger.Debug("Sending vote message", "ps", ps, "vote", vote)
if ps.peer.Send(VoteChannel, cdc.MustMarshalBinaryBare(msg)) {
ps.SetHasVote(vote)
return true
}
return false
return ps.peer.Send(VoteChannel, cdc.MustMarshalBinaryBare(msg))
}
return false
}
@@ -1050,6 +1035,7 @@ func (ps *PeerState) PickVoteToSend(votes types.VoteSetReader) (vote *types.Vote
return nil, false // Not something worth sending
}
if index, ok := votes.BitArray().Sub(psVotes).PickRandom(); ok {
ps.setHasVote(height, round, type_, index)
return votes.GetByIndex(index), true
}
return nil, false
@@ -1225,6 +1211,7 @@ func (ps *PeerState) ApplyNewRoundStepMessage(msg *NewRoundStepMessage) {
// Just remember these values.
psHeight := ps.PRS.Height
psRound := ps.PRS.Round
//psStep := ps.PRS.Step
psCatchupCommitRound := ps.PRS.CatchupCommitRound
psCatchupCommit := ps.PRS.CatchupCommit
@@ -1265,8 +1252,8 @@ func (ps *PeerState) ApplyNewRoundStepMessage(msg *NewRoundStepMessage) {
}
}
// ApplyNewValidBlockMessage updates the peer state for the new valid block.
func (ps *PeerState) ApplyNewValidBlockMessage(msg *NewValidBlockMessage) {
// ApplyCommitStepMessage updates the peer state for the new commit.
func (ps *PeerState) ApplyCommitStepMessage(msg *CommitStepMessage) {
ps.mtx.Lock()
defer ps.mtx.Unlock()
@@ -1274,10 +1261,6 @@ func (ps *PeerState) ApplyNewValidBlockMessage(msg *NewValidBlockMessage) {
return
}
if ps.PRS.Round != msg.Round && !msg.IsCommit {
return
}
ps.PRS.ProposalBlockPartsHeader = msg.BlockPartsHeader
ps.PRS.ProposalBlockParts = msg.BlockParts
}
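Editor's note: the guard in ApplyNewValidBlockMessage above only adopts the advertised block parts when the heights match, and when the rounds match or the block is already committed. A small, self-contained sketch of just that guard (with hypothetical stand-in types, not the real PeerRoundState) follows.

package main

import "fmt"

// peerRound holds the subset of peer state that matters for this check.
type peerRound struct {
	Height int64
	Round  int
}

type newValidBlock struct {
	Height   int64
	Round    int
	IsCommit bool
}

// applyNewValidBlock mirrors the guard above: ignore the message for other
// heights, and for other rounds unless the block is already committed (in
// which case the parts are still worth tracking).
func applyNewValidBlock(prs peerRound, msg newValidBlock) bool {
	if prs.Height != msg.Height {
		return false
	}
	if prs.Round != msg.Round && !msg.IsCommit {
		return false
	}
	return true // caller would now record the advertised block parts
}

func main() {
	prs := peerRound{Height: 10, Round: 2}
	fmt.Println(applyNewValidBlock(prs, newValidBlock{Height: 10, Round: 2}))                 // true
	fmt.Println(applyNewValidBlock(prs, newValidBlock{Height: 10, Round: 0, IsCommit: true})) // true
	fmt.Println(applyNewValidBlock(prs, newValidBlock{Height: 10, Round: 0}))                 // false
}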
@@ -1356,14 +1339,12 @@ func (ps *PeerState) StringIndented(indent string) string {
// Messages
// ConsensusMessage is a message that can be sent and received on the ConsensusReactor
type ConsensusMessage interface {
ValidateBasic() error
}
type ConsensusMessage interface{}
func RegisterConsensusMessages(cdc *amino.Codec) {
cdc.RegisterInterface((*ConsensusMessage)(nil), nil)
cdc.RegisterConcrete(&NewRoundStepMessage{}, "tendermint/NewRoundStepMessage", nil)
cdc.RegisterConcrete(&NewValidBlockMessage{}, "tendermint/NewValidBlockMessage", nil)
cdc.RegisterConcrete(&CommitStepMessage{}, "tendermint/CommitStep", nil)
cdc.RegisterConcrete(&ProposalMessage{}, "tendermint/Proposal", nil)
cdc.RegisterConcrete(&ProposalPOLMessage{}, "tendermint/ProposalPOL", nil)
cdc.RegisterConcrete(&BlockPartMessage{}, "tendermint/BlockPart", nil)
@@ -1371,6 +1352,7 @@ func RegisterConsensusMessages(cdc *amino.Codec) {
cdc.RegisterConcrete(&HasVoteMessage{}, "tendermint/HasVote", nil)
cdc.RegisterConcrete(&VoteSetMaj23Message{}, "tendermint/VoteSetMaj23", nil)
cdc.RegisterConcrete(&VoteSetBitsMessage{}, "tendermint/VoteSetBits", nil)
cdc.RegisterConcrete(&ProposalHeartbeatMessage{}, "tendermint/ProposalHeartbeat", nil)
}
func decodeMsg(bz []byte) (msg ConsensusMessage, err error) {
@@ -1393,27 +1375,6 @@ type NewRoundStepMessage struct {
LastCommitRound int
}
// ValidateBasic performs basic validation.
func (m *NewRoundStepMessage) ValidateBasic() error {
if m.Height < 0 {
return errors.New("Negative Height")
}
if m.Round < 0 {
return errors.New("Negative Round")
}
if !m.Step.IsValid() {
return errors.New("Invalid Step")
}
// NOTE: SecondsSinceStartTime may be negative
if (m.Height == 1 && m.LastCommitRound != -1) ||
(m.Height > 1 && m.LastCommitRound < -1) { // TODO: #2737 LastCommitRound should always be >= 0 for heights > 1
return errors.New("Invalid LastCommitRound (for 1st block: -1, for others: >= 0)")
}
return nil
}
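Editor's note: one side of this diff validates message fields up front via ValidateBasic, while the other logs and returns on bad fields at each call site. The sketch below shows the validate-after-decode pattern in isolation, using a hypothetical message type rather than the real amino codec; it assumes nothing beyond what the ValidateBasic body above states.

package main

import (
	"errors"
	"fmt"
)

// roundStepMsg mirrors the shape of NewRoundStepMessage for illustration only.
type roundStepMsg struct {
	Height          int64
	Round           int
	LastCommitRound int
}

// validateBasic checks stateless invariants, so the reactor can reject a
// malformed message once, right after decoding, instead of at every use site.
func (m roundStepMsg) validateBasic() error {
	if m.Height < 0 {
		return errors.New("negative Height")
	}
	if m.Round < 0 {
		return errors.New("negative Round")
	}
	// First block carries no previous commit; later blocks must reference one.
	if (m.Height == 1 && m.LastCommitRound != -1) ||
		(m.Height > 1 && m.LastCommitRound < -1) {
		return errors.New("invalid LastCommitRound")
	}
	return nil
}

func main() {
	msgs := []roundStepMsg{
		{Height: 1, Round: 0, LastCommitRound: -1}, // ok
		{Height: 5, Round: -1, LastCommitRound: 0}, // rejected
	}
	for _, m := range msgs {
		if err := m.validateBasic(); err != nil {
			fmt.Println("reject:", err)
			continue
		}
		fmt.Println("accept:", m)
	}
}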
// String returns a string representation.
func (m *NewRoundStepMessage) String() string {
return fmt.Sprintf("[NewRoundStep H:%v R:%v S:%v LCR:%v]",
@@ -1422,40 +1383,16 @@ func (m *NewRoundStepMessage) String() string {
//-------------------------------------
// NewValidBlockMessage is sent when a validator observes a valid block B in some round r,
// i.e., there is a Proposal for block B and 2/3+ prevotes for block B in round r.
// In case the block is also committed, then IsCommit flag is set to true.
type NewValidBlockMessage struct {
// CommitStepMessage is sent when a block is committed.
type CommitStepMessage struct {
Height int64
Round int
BlockPartsHeader types.PartSetHeader
BlockParts *cmn.BitArray
IsCommit bool
}
// ValidateBasic performs basic validation.
func (m *NewValidBlockMessage) ValidateBasic() error {
if m.Height < 0 {
return errors.New("Negative Height")
}
if m.Round < 0 {
return errors.New("Negative Round")
}
if err := m.BlockPartsHeader.ValidateBasic(); err != nil {
return fmt.Errorf("Wrong BlockPartsHeader: %v", err)
}
if m.BlockParts.Size() != m.BlockPartsHeader.Total {
return fmt.Errorf("BlockParts bit array size %d not equal to BlockPartsHeader.Total %d",
m.BlockParts.Size(),
m.BlockPartsHeader.Total)
}
return nil
}
// String returns a string representation.
func (m *NewValidBlockMessage) String() string {
return fmt.Sprintf("[ValidBlockMessage H:%v R:%v BP:%v BA:%v IsCommit:%v]",
m.Height, m.Round, m.BlockPartsHeader, m.BlockParts, m.IsCommit)
func (m *CommitStepMessage) String() string {
return fmt.Sprintf("[CommitStep H:%v BP:%v BA:%v]", m.Height, m.BlockPartsHeader, m.BlockParts)
}
//-------------------------------------
@@ -1465,11 +1402,6 @@ type ProposalMessage struct {
Proposal *types.Proposal
}
// ValidateBasic performs basic validation.
func (m *ProposalMessage) ValidateBasic() error {
return m.Proposal.ValidateBasic()
}
// String returns a string representation.
func (m *ProposalMessage) String() string {
return fmt.Sprintf("[Proposal %v]", m.Proposal)
@@ -1484,20 +1416,6 @@ type ProposalPOLMessage struct {
ProposalPOL *cmn.BitArray
}
// ValidateBasic performs basic validation.
func (m *ProposalPOLMessage) ValidateBasic() error {
if m.Height < 0 {
return errors.New("Negative Height")
}
if m.ProposalPOLRound < 0 {
return errors.New("Negative ProposalPOLRound")
}
if m.ProposalPOL.Size() == 0 {
return errors.New("Empty ProposalPOL bit array")
}
return nil
}
// String returns a string representation.
func (m *ProposalPOLMessage) String() string {
return fmt.Sprintf("[ProposalPOL H:%v POLR:%v POL:%v]", m.Height, m.ProposalPOLRound, m.ProposalPOL)
@@ -1512,20 +1430,6 @@ type BlockPartMessage struct {
Part *types.Part
}
// ValidateBasic performs basic validation.
func (m *BlockPartMessage) ValidateBasic() error {
if m.Height < 0 {
return errors.New("Negative Height")
}
if m.Round < 0 {
return errors.New("Negative Round")
}
if err := m.Part.ValidateBasic(); err != nil {
return fmt.Errorf("Wrong Part: %v", err)
}
return nil
}
// String returns a string representation.
func (m *BlockPartMessage) String() string {
return fmt.Sprintf("[BlockPart H:%v R:%v P:%v]", m.Height, m.Round, m.Part)
@@ -1538,11 +1442,6 @@ type VoteMessage struct {
Vote *types.Vote
}
// ValidateBasic performs basic validation.
func (m *VoteMessage) ValidateBasic() error {
return m.Vote.ValidateBasic()
}
// String returns a string representation.
func (m *VoteMessage) String() string {
return fmt.Sprintf("[Vote %v]", m.Vote)
@@ -1558,23 +1457,6 @@ type HasVoteMessage struct {
Index int
}
// ValidateBasic performs basic validation.
func (m *HasVoteMessage) ValidateBasic() error {
if m.Height < 0 {
return errors.New("Negative Height")
}
if m.Round < 0 {
return errors.New("Negative Round")
}
if !types.IsVoteTypeValid(m.Type) {
return errors.New("Invalid Type")
}
if m.Index < 0 {
return errors.New("Negative Index")
}
return nil
}
// String returns a string representation.
func (m *HasVoteMessage) String() string {
return fmt.Sprintf("[HasVote VI:%v V:{%v/%02d/%v}]", m.Index, m.Height, m.Round, m.Type)
@@ -1590,23 +1472,6 @@ type VoteSetMaj23Message struct {
BlockID types.BlockID
}
// ValidateBasic performs basic validation.
func (m *VoteSetMaj23Message) ValidateBasic() error {
if m.Height < 0 {
return errors.New("Negative Height")
}
if m.Round < 0 {
return errors.New("Negative Round")
}
if !types.IsVoteTypeValid(m.Type) {
return errors.New("Invalid Type")
}
if err := m.BlockID.ValidateBasic(); err != nil {
return fmt.Errorf("Wrong BlockID: %v", err)
}
return nil
}
// String returns a string representation.
func (m *VoteSetMaj23Message) String() string {
return fmt.Sprintf("[VSM23 %v/%02d/%v %v]", m.Height, m.Round, m.Type, m.BlockID)
@@ -1623,27 +1488,19 @@ type VoteSetBitsMessage struct {
Votes *cmn.BitArray
}
// ValidateBasic performs basic validation.
func (m *VoteSetBitsMessage) ValidateBasic() error {
if m.Height < 0 {
return errors.New("Negative Height")
}
if m.Round < 0 {
return errors.New("Negative Round")
}
if !types.IsVoteTypeValid(m.Type) {
return errors.New("Invalid Type")
}
if err := m.BlockID.ValidateBasic(); err != nil {
return fmt.Errorf("Wrong BlockID: %v", err)
}
// NOTE: Votes.Size() can be zero if the node does not have any votes
return nil
}
// String returns a string representation.
func (m *VoteSetBitsMessage) String() string {
return fmt.Sprintf("[VSB %v/%02d/%v %v %v]", m.Height, m.Round, m.Type, m.BlockID, m.Votes)
}
//-------------------------------------
// ProposalHeartbeatMessage is sent to signal that a node is alive and waiting for transactions for a proposal.
type ProposalHeartbeatMessage struct {
Heartbeat *types.Heartbeat
}
// String returns a string representation.
func (m *ProposalHeartbeatMessage) String() string {
return fmt.Sprintf("[HEARTBEAT %v]", m.Heartbeat)
}
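Editor's note: ProposalHeartbeatMessage (present only on one side of this diff) tells peers that a proposer is alive but still waiting for transactions before building a block. A hedged, self-contained sketch of that waiting loop follows; it uses plain channels instead of the real privValidator, event bus, and switch broadcast.

package main

import (
	"fmt"
	"time"
)

// heartbeat carries just enough to show the idea; the real message also
// includes the validator address, index and a signature.
type heartbeat struct {
	Height   int64
	Round    int
	Sequence int
}

// sendHeartbeats emits a heartbeat every interval until the round moves on
// (signalled here by closing the stop channel).
func sendHeartbeats(height int64, round int, interval time.Duration, stop <-chan struct{}) {
	for seq := 0; ; seq++ {
		select {
		case <-stop:
			return
		default:
		}
		hb := heartbeat{Height: height, Round: round, Sequence: seq}
		fmt.Printf("broadcast %+v\n", hb) // in the reactor this is a StateChannel broadcast
		time.Sleep(interval)
	}
}

func main() {
	stop := make(chan struct{})
	go sendHeartbeats(1, 0, 50*time.Millisecond, stop)
	time.Sleep(200 * time.Millisecond) // pretend we are waiting for txs
	close(stop)                        // a tx arrived; move on to Propose
	time.Sleep(50 * time.Millisecond)
}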

View File

@@ -143,8 +143,7 @@ func TestReactorWithEvidence(t *testing.T) {
// mock the evidence pool
// everyone includes evidence of another double signing
vIdx := (i + 1) % nValidators
addr := privVals[vIdx].GetPubKey().Address()
evpool := newMockEvidencePool(addr)
evpool := newMockEvidencePool(privVals[vIdx].GetAddress())
// Make ConsensusState
blockExec := sm.NewBlockExecutor(stateDB, log.TestingLogger(), proxyAppConnCon, mempool, evpool)
@@ -214,8 +213,8 @@ func (m *mockEvidencePool) Update(block *types.Block, state sm.State) {
//------------------------------------
// Ensure a testnet makes blocks when there are txs
func TestReactorCreatesBlockWhenEmptyBlocksFalse(t *testing.T) {
// Ensure a testnet sends proposal heartbeats and makes blocks when there are txs
func TestReactorProposalHeartbeats(t *testing.T) {
N := 4
css := randConsensusNet(N, "consensus_reactor_test", newMockTickerFunc(true), newCounter,
func(c *cfg.Config) {
@@ -223,9 +222,20 @@ func TestReactorCreatesBlockWhenEmptyBlocksFalse(t *testing.T) {
})
reactors, eventChans, eventBuses := startConsensusNet(t, css, N)
defer stopConsensusNet(log.TestingLogger(), reactors, eventBuses)
heartbeatChans := make([]chan interface{}, N)
var err error
for i := 0; i < N; i++ {
heartbeatChans[i] = make(chan interface{}, 1)
err = eventBuses[i].Subscribe(context.Background(), testSubscriber, types.EventQueryProposalHeartbeat, heartbeatChans[i])
require.NoError(t, err)
}
// wait till everyone sends a proposal heartbeat
timeoutWaitGroup(t, N, func(j int) {
<-heartbeatChans[j]
}, css)
// send a tx
if err := assertMempool(css[3].txNotifier).CheckTx([]byte{1, 2, 3}, nil); err != nil {
if err := css[3].mempool.CheckTx([]byte{1, 2, 3}, nil); err != nil {
//t.Fatal(err)
}
@@ -269,8 +279,7 @@ func TestReactorVotingPowerChange(t *testing.T) {
// map of active validators
activeVals := make(map[string]struct{})
for i := 0; i < nVals; i++ {
addr := css[i].privValidator.GetPubKey().Address()
activeVals[string(addr)] = struct{}{}
activeVals[string(css[i].privValidator.GetAddress())] = struct{}{}
}
// wait till everyone makes block 1
@@ -333,8 +342,7 @@ func TestReactorValidatorSetChanges(t *testing.T) {
// map of active validators
activeVals := make(map[string]struct{})
for i := 0; i < nVals; i++ {
addr := css[i].privValidator.GetPubKey().Address()
activeVals[string(addr)] = struct{}{}
activeVals[string(css[i].privValidator.GetAddress())] = struct{}{}
}
// wait till everyone makes block 1
@@ -448,7 +456,7 @@ func waitForAndValidateBlock(t *testing.T, n int, activeVals map[string]struct{}
err := validateBlock(newBlock, activeVals)
assert.Nil(t, err)
for _, tx := range txs {
err := assertMempool(css[j].txNotifier).CheckTx(tx, nil)
err := css[j].mempool.CheckTx(tx, nil)
assert.Nil(t, err)
}
}, css)

View File

@@ -11,6 +11,7 @@ import (
"time"
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/version"
//auto "github.com/tendermint/tendermint/libs/autofile"
cmn "github.com/tendermint/tendermint/libs/common"
dbm "github.com/tendermint/tendermint/libs/db"
@@ -19,7 +20,6 @@ import (
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/version"
)
var crc32c = crc32.MakeTable(crc32.Castagnoli)
@@ -73,7 +73,7 @@ func (cs *ConsensusState) readReplayMessage(msg *TimedWALMessage, newStepCh chan
case *ProposalMessage:
p := msg.Proposal
cs.Logger.Info("Replay: Proposal", "height", p.Height, "round", p.Round, "header",
p.BlockID.PartsHeader, "pol", p.POLRound, "peer", peerID)
p.BlockPartsHeader, "pol", p.POLRound, "peer", peerID)
case *BlockPartMessage:
cs.Logger.Info("Replay: BlockPart", "height", msg.Height, "round", msg.Round, "peer", peerID)
case *VoteMessage:
@@ -247,7 +247,6 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
// Set AppVersion on the state.
h.initialState.Version.Consensus.App = version.Protocol(res.AppVersion)
sm.SaveState(h.stateDB, h.initialState)
// Replay blocks up to the latest in the blockstore.
_, err = h.ReplayBlocks(h.initialState, appHash, blockHeight, proxyApp)
@@ -265,24 +264,15 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
// Replay all blocks since appBlockHeight and ensure the result matches the current state.
// Returns the final AppHash or an error.
func (h *Handshaker) ReplayBlocks(
state sm.State,
appHash []byte,
appBlockHeight int64,
proxyApp proxy.AppConns,
) ([]byte, error) {
func (h *Handshaker) ReplayBlocks(state sm.State, appHash []byte, appBlockHeight int64, proxyApp proxy.AppConns) ([]byte, error) {
storeBlockHeight := h.store.Height()
stateBlockHeight := state.LastBlockHeight
h.logger.Info("ABCI Replay Blocks", "appHeight", appBlockHeight, "storeHeight", storeBlockHeight, "stateHeight", stateBlockHeight)
// If appBlockHeight == 0 it means that we are at genesis and hence should send InitChain.
if appBlockHeight == 0 {
validators := make([]*types.Validator, len(h.genDoc.Validators))
for i, val := range h.genDoc.Validators {
validators[i] = types.NewValidator(val.PubKey, val.Power)
}
validatorSet := types.NewValidatorSet(validators)
nextVals := types.TM2PB.ValidatorUpdates(validatorSet)
nextVals := types.TM2PB.ValidatorUpdates(state.NextValidators) // state.Validators would work too.
csParams := types.TM2PB.ConsensusParams(h.genDoc.ConsensusParams)
req := abci.RequestInitChain{
Time: h.genDoc.GenesisTime,
@@ -296,27 +286,19 @@ func (h *Handshaker) ReplayBlocks(
return nil, err
}
if stateBlockHeight == 0 { //we only update state when we are in initial state
// If the app returned validators or consensus params, update the state.
if len(res.Validators) > 0 {
vals, err := types.PB2TM.ValidatorUpdates(res.Validators)
if err != nil {
return nil, err
}
state.Validators = types.NewValidatorSet(vals)
state.NextValidators = types.NewValidatorSet(vals)
} else {
// If validator set is not set in genesis and still empty after InitChain, exit.
if len(h.genDoc.Validators) == 0 {
return nil, fmt.Errorf("Validator set is nil in genesis and still empty after InitChain")
}
// If the app returned validators or consensus params, update the state.
if len(res.Validators) > 0 {
vals, err := types.PB2TM.ValidatorUpdates(res.Validators)
if err != nil {
return nil, err
}
if res.ConsensusParams != nil {
state.ConsensusParams = types.PB2TM.ConsensusParams(res.ConsensusParams)
}
sm.SaveState(h.stateDB, state)
state.Validators = types.NewValidatorSet(vals)
state.NextValidators = types.NewValidatorSet(vals)
}
if res.ConsensusParams != nil {
state.ConsensusParams = types.PB2TM.ConsensusParams(res.ConsensusParams)
}
sm.SaveState(h.stateDB, state)
}
// First handle edge cases and constraints on the storeBlockHeight.
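Editor's note: the ReplayBlocks hunks above decide whether to send InitChain (only when the app reports height 0) and, when the node itself is still at genesis, whether to adopt the validators the app returned. A compressed, self-contained sketch of that branching follows, with hypothetical minimal types standing in for sm.State and the ABCI response; it is not the real handshake code.

package main

import "fmt"

// Hypothetical stand-ins for sm.State and abci.ResponseInitChain.
type state struct {
	LastBlockHeight int64
	Validators      []string
}

type initChainResponse struct {
	Validators []string
}

// replayDecision mirrors the top of ReplayBlocks: InitChain is sent only when
// the app has no blocks, and the genesis state is overwritten only when the
// node itself is still at height 0.
func replayDecision(st *state, appBlockHeight int64, initChain func() initChainResponse) {
	if appBlockHeight == 0 {
		res := initChain() // would be proxyApp.Consensus().InitChainSync(...)
		if st.LastBlockHeight == 0 && len(res.Validators) > 0 {
			st.Validators = res.Validators // and persist via sm.SaveState
		}
	}
	// ...then replay any blocks between appBlockHeight and the store height.
}

func main() {
	st := &state{LastBlockHeight: 0, Validators: []string{"genesis-val"}}
	replayDecision(st, 0, func() initChainResponse {
		return initChainResponse{Validators: []string{"val-a", "val-b"}}
	})
	fmt.Println("validators after handshake:", st.Validators)
}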

View File

@@ -58,18 +58,7 @@ func (cs *ConsensusState) ReplayFile(file string, console bool) error {
if err != nil {
return errors.Errorf("failed to subscribe %s to %v", subscriber, types.EventQueryNewRoundStep)
}
defer func() {
// drain newStepCh to make sure we don't block
LOOP:
for {
select {
case <-newStepCh:
default:
break LOOP
}
}
cs.eventBus.Unsubscribe(ctx, subscriber, types.EventQueryNewRoundStep)
}()
defer cs.eventBus.Unsubscribe(ctx, subscriber, types.EventQueryNewRoundStep)
// just open the file for reading, no need to use wal
fp, err := os.OpenFile(file, os.O_RDONLY, 0600)
@@ -137,7 +126,7 @@ func (pb *playback) replayReset(count int, newStepCh chan interface{}) error {
pb.cs.Wait()
newCS := NewConsensusState(pb.cs.config, pb.genesisState.Copy(), pb.cs.blockExec,
pb.cs.blockStore, pb.cs.txNotifier, pb.cs.evpool)
pb.cs.blockStore, pb.cs.mempool, pb.cs.evpool)
newCS.SetEventBus(pb.cs.eventBus)
newCS.startForReplay()
@@ -232,18 +221,7 @@ func (pb *playback) replayConsoleLoop() int {
if err != nil {
cmn.Exit(fmt.Sprintf("failed to subscribe %s to %v", subscriber, types.EventQueryNewRoundStep))
}
defer func() {
// drain newStepCh to make sure we don't block
LOOP:
for {
select {
case <-newStepCh:
default:
break LOOP
}
}
pb.cs.eventBus.Unsubscribe(ctx, subscriber, types.EventQueryNewRoundStep)
}()
defer pb.cs.eventBus.Unsubscribe(ctx, subscriber, types.EventQueryNewRoundStep)
if len(tokens) == 1 {
if err := pb.replayReset(1, newStepCh); err != nil {

View File

@@ -17,7 +17,7 @@ import (
"github.com/tendermint/tendermint/abci/example/kvstore"
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/crypto"
crypto "github.com/tendermint/tendermint/crypto"
auto "github.com/tendermint/tendermint/libs/autofile"
dbm "github.com/tendermint/tendermint/libs/db"
"github.com/tendermint/tendermint/version"
@@ -87,7 +87,7 @@ func sendTxs(cs *ConsensusState, ctx context.Context) {
return
default:
tx := []byte{byte(i)}
assertMempool(cs.txNotifier).CheckTx(tx, nil)
cs.mempool.CheckTx(tx, nil)
i++
}
}
@@ -315,21 +315,28 @@ func testHandshakeReplay(t *testing.T, nBlocks int, mode uint) {
config := ResetConfig("proxy_test_")
walBody, err := WALWithNBlocks(NUM_BLOCKS)
require.NoError(t, err)
if err != nil {
t.Fatal(err)
}
walFile := tempWALWithData(walBody)
config.Consensus.SetWalFile(walFile)
privVal := privval.LoadFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
privVal := privval.LoadFilePV(config.PrivValidatorFile())
wal, err := NewWAL(walFile)
require.NoError(t, err)
if err != nil {
t.Fatal(err)
}
wal.SetLogger(log.TestingLogger())
err = wal.Start()
require.NoError(t, err)
if err := wal.Start(); err != nil {
t.Fatal(err)
}
defer wal.Stop()
chain, commits, err := makeBlockchainFromWAL(wal)
require.NoError(t, err)
if err != nil {
t.Fatalf(err.Error())
}
stateDB, state, store := stateAndStore(config, privVal.GetPubKey(), kvstore.ProtocolVersion)
store.chain = chain
@@ -568,7 +575,7 @@ func readPieceFromWAL(msg *TimedWALMessage) interface{} {
case msgInfo:
switch msg := m.Msg.(type) {
case *ProposalMessage:
return &msg.Proposal.BlockID.PartsHeader
return &msg.Proposal.BlockPartsHeader
case *BlockPartMessage:
return msg.Part
case *VoteMessage:
@@ -633,7 +640,7 @@ func TestInitChainUpdateValidators(t *testing.T) {
clientCreator := proxy.NewLocalClientCreator(app)
config := ResetConfig("proxy_test_")
privVal := privval.LoadFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
privVal := privval.LoadFilePV(config.PrivValidatorFile())
stateDB, state, store := stateAndStore(config, privVal.GetPubKey(), 0x0)
oldValAddr := state.Validators.Validators[0].Address
@@ -659,6 +666,12 @@ func TestInitChainUpdateValidators(t *testing.T) {
assert.Equal(t, newValAddr, expectValAddr)
}
func newInitChainApp(vals []abci.ValidatorUpdate) *initChainApp {
return &initChainApp{
vals: vals,
}
}
// returns the vals on InitChain
type initChainApp struct {
abci.BaseApplication

View File

@@ -2,16 +2,15 @@ package consensus
import (
"bytes"
"errors"
"fmt"
"reflect"
"runtime/debug"
"sync"
"time"
"github.com/pkg/errors"
fail "github.com/ebuchman/fail-test"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/fail"
"github.com/tendermint/tendermint/libs/log"
tmtime "github.com/tendermint/tendermint/types/time"
@@ -23,6 +22,13 @@ import (
"github.com/tendermint/tendermint/types"
)
//-----------------------------------------------------------------------------
// Config
const (
proposalHeartbeatIntervalSeconds = 2
)
//-----------------------------------------------------------------------------
// Errors
@@ -57,16 +63,6 @@ func (ti *timeoutInfo) String() string {
return fmt.Sprintf("%v ; %d/%d %v", ti.Duration, ti.Height, ti.Round, ti.Step)
}
// interface to the mempool
type txNotifier interface {
TxsAvailable() <-chan struct{}
}
// interface to the evidence pool
type evidencePool interface {
AddEvidence(types.Evidence) error
}
// ConsensusState handles execution of the consensus algorithm.
// It processes votes and proposals, and upon reaching agreement,
// commits blocks to the chain and executes them against the application.
@@ -78,22 +74,16 @@ type ConsensusState struct {
config *cfg.ConsensusConfig
privValidator types.PrivValidator // for signing votes
// store blocks and commits
// services for creating and executing blocks
blockExec *sm.BlockExecutor
blockStore sm.BlockStore
// create and execute blocks
blockExec *sm.BlockExecutor
// notify us if txs are available
txNotifier txNotifier
// add evidence to the pool
// when it's detected
evpool evidencePool
mempool sm.Mempool
evpool sm.EvidencePool
// internal state
mtx sync.RWMutex
cstypes.RoundState
triggeredTimeoutPrecommit bool
state sm.State // State until height-1.
// state changes may be triggered by: msgs from peers,
@@ -128,7 +118,7 @@ type ConsensusState struct {
done chan struct{}
// synchronous pubsub between consensus state and reactor.
// state only emits EventNewRoundStep and EventVote
// state only emits EventNewRoundStep, EventVote and EventProposalHeartbeat
evsw tmevents.EventSwitch
// for reporting metrics
@@ -144,15 +134,15 @@ func NewConsensusState(
state sm.State,
blockExec *sm.BlockExecutor,
blockStore sm.BlockStore,
txNotifier txNotifier,
evpool evidencePool,
mempool sm.Mempool,
evpool sm.EvidencePool,
options ...StateOption,
) *ConsensusState {
cs := &ConsensusState{
config: config,
blockExec: blockExec,
blockStore: blockStore,
txNotifier: txNotifier,
mempool: mempool,
peerMsgQueue: make(chan msgInfo, msgQueueSize),
internalMsgQueue: make(chan msgInfo, msgQueueSize),
timeoutTicker: NewTimeoutTicker(),
@@ -217,16 +207,18 @@ func (cs *ConsensusState) GetState() sm.State {
// GetLastHeight returns the last height committed.
// If there were no blocks, returns 0.
func (cs *ConsensusState) GetLastHeight() int64 {
cs.mtx.RLock()
defer cs.mtx.RUnlock()
cs.mtx.Lock()
defer cs.mtx.Unlock()
return cs.RoundState.Height - 1
}
// GetRoundState returns a shallow copy of the internal consensus state.
func (cs *ConsensusState) GetRoundState() *cstypes.RoundState {
cs.mtx.RLock()
defer cs.mtx.RUnlock()
rs := cs.RoundState // copy
cs.mtx.RUnlock()
return &rs
}
@@ -234,6 +226,7 @@ func (cs *ConsensusState) GetRoundState() *cstypes.RoundState {
func (cs *ConsensusState) GetRoundStateJSON() ([]byte, error) {
cs.mtx.RLock()
defer cs.mtx.RUnlock()
return cdc.MarshalJSON(cs.RoundState)
}
@@ -241,6 +234,7 @@ func (cs *ConsensusState) GetRoundStateJSON() ([]byte, error) {
func (cs *ConsensusState) GetRoundStateSimpleJSON() ([]byte, error) {
cs.mtx.RLock()
defer cs.mtx.RUnlock()
return cdc.MarshalJSON(cs.RoundState.RoundStateSimple())
}
@@ -254,15 +248,15 @@ func (cs *ConsensusState) GetValidators() (int64, []*types.Validator) {
// SetPrivValidator sets the private validator account for signing votes.
func (cs *ConsensusState) SetPrivValidator(priv types.PrivValidator) {
cs.mtx.Lock()
defer cs.mtx.Unlock()
cs.privValidator = priv
cs.mtx.Unlock()
}
// SetTimeoutTicker sets the local timer. It may be useful to overwrite for testing.
func (cs *ConsensusState) SetTimeoutTicker(timeoutTicker TimeoutTicker) {
cs.mtx.Lock()
defer cs.mtx.Unlock()
cs.timeoutTicker = timeoutTicker
cs.mtx.Unlock()
}
// LoadCommit loads the commit for a given height.
@@ -334,11 +328,10 @@ func (cs *ConsensusState) startRoutines(maxSteps int) {
go cs.receiveRoutine(maxSteps)
}
// OnStop implements cmn.Service.
// OnStop implements cmn.Service. It stops all routines and waits for the WAL to finish.
func (cs *ConsensusState) OnStop() {
cs.evsw.Stop()
cs.timeoutTicker.Stop()
// WAL is stopped in receiveRoutine.
}
// Wait waits for the main routine to return.
@@ -500,7 +493,7 @@ func (cs *ConsensusState) updateToState(state sm.State) {
// If state isn't further out than cs.state, just ignore.
// This happens when SwitchToConsensus() is called in the reactor.
// We don't want to reset e.g. the Votes, but we still want to
// signal the new round step, because other services (eg. txNotifier)
// signal the new round step, because other services (eg. mempool)
// depend on having an up-to-date peer state!
if !cs.state.IsEmpty() && (state.LastBlockHeight <= cs.state.LastBlockHeight) {
cs.Logger.Info("Ignoring updateToState()", "newHeight", state.LastBlockHeight+1, "oldHeight", cs.state.LastBlockHeight+1)
@@ -615,7 +608,7 @@ func (cs *ConsensusState) receiveRoutine(maxSteps int) {
var mi msgInfo
select {
case <-cs.txNotifier.TxsAvailable():
case <-cs.mempool.TxsAvailable():
cs.handleTxsAvailable()
case mi = <-cs.peerMsgQueue:
cs.wal.Write(mi)
@@ -731,7 +724,6 @@ func (cs *ConsensusState) handleTxsAvailable() {
cs.mtx.Lock()
defer cs.mtx.Unlock()
// we only need to do this for round 0
cs.enterNewRound(cs.Height, 0)
cs.enterPropose(cs.Height, 0)
}
@@ -763,7 +755,7 @@ func (cs *ConsensusState) enterNewRound(height int64, round int) {
validators := cs.Validators
if cs.Round < round {
validators = validators.Copy()
validators.IncrementProposerPriority(round - cs.Round)
validators.IncrementAccum(round - cs.Round)
}
// Setup new round
@@ -782,9 +774,9 @@ func (cs *ConsensusState) enterNewRound(height int64, round int) {
cs.ProposalBlockParts = nil
}
cs.Votes.SetRound(round + 1) // also track next round (round+1) to allow round-skipping
cs.TriggeredTimeoutPrecommit = false
cs.triggeredTimeoutPrecommit = false
cs.eventBus.PublishEventNewRound(cs.NewRoundEvent())
cs.eventBus.PublishEventNewRound(cs.RoundStateEvent())
cs.metrics.Rounds.Set(float64(round))
// Wait for txs to be available in the mempool
@@ -796,6 +788,7 @@ func (cs *ConsensusState) enterNewRound(height int64, round int) {
cs.scheduleTimeout(cs.config.CreateEmptyBlocksInterval, height, round,
cstypes.RoundStepNewRound)
}
go cs.proposalHeartbeat(height, round)
} else {
cs.enterPropose(height, round)
}
@@ -812,6 +805,32 @@ func (cs *ConsensusState) needProofBlock(height int64) bool {
return !bytes.Equal(cs.state.AppHash, lastBlockMeta.Header.AppHash)
}
func (cs *ConsensusState) proposalHeartbeat(height int64, round int) {
counter := 0
addr := cs.privValidator.GetAddress()
valIndex, _ := cs.Validators.GetByAddress(addr)
chainID := cs.state.ChainID
for {
rs := cs.GetRoundState()
// if we've already moved on, no need to send more heartbeats
if rs.Step > cstypes.RoundStepNewRound || rs.Round > round || rs.Height > height {
return
}
heartbeat := &types.Heartbeat{
Height: rs.Height,
Round: rs.Round,
Sequence: counter,
ValidatorAddress: addr,
ValidatorIndex: valIndex,
}
cs.privValidator.SignHeartbeat(chainID, heartbeat)
cs.eventBus.PublishEventProposalHeartbeat(types.EventDataProposalHeartbeat{heartbeat})
cs.evsw.FireEvent(types.EventProposalHeartbeat, heartbeat)
counter++
time.Sleep(proposalHeartbeatIntervalSeconds * time.Second)
}
}
// Enter (CreateEmptyBlocks): from enterNewRound(height,round)
// Enter (CreateEmptyBlocks, CreateEmptyBlocksInterval > 0): after enterNewRound(height,round), after timeout of CreateEmptyBlocksInterval
// Enter (!CreateEmptyBlocks): after enterNewRound(height,round), once txs are in the mempool
@@ -847,14 +866,13 @@ func (cs *ConsensusState) enterPropose(height int64, round int) {
}
// if not a validator, we're done
address := cs.privValidator.GetPubKey().Address()
if !cs.Validators.HasAddress(address) {
logger.Debug("This node is not a validator", "addr", address, "vals", cs.Validators)
if !cs.Validators.HasAddress(cs.privValidator.GetAddress()) {
logger.Debug("This node is not a validator", "addr", cs.privValidator.GetAddress(), "vals", cs.Validators)
return
}
logger.Debug("This node is a validator")
if cs.isProposer(address) {
if cs.isProposer() {
logger.Info("enterPropose: Our turn to propose", "proposer", cs.Validators.GetProposer().Address, "privValidator", cs.privValidator)
cs.decideProposal(height, round)
} else {
@@ -862,8 +880,8 @@ func (cs *ConsensusState) enterPropose(height int64, round int) {
}
}
func (cs *ConsensusState) isProposer(address []byte) bool {
return bytes.Equal(cs.Validators.GetProposer().Address, address)
func (cs *ConsensusState) isProposer() bool {
return bytes.Equal(cs.Validators.GetProposer().Address, cs.privValidator.GetAddress())
}
func (cs *ConsensusState) defaultDecideProposal(height int64, round int) {
@@ -883,9 +901,15 @@ func (cs *ConsensusState) defaultDecideProposal(height int64, round int) {
}
// Make proposal
propBlockId := types.BlockID{block.Hash(), blockParts.Header()}
proposal := types.NewProposal(height, round, cs.ValidRound, propBlockId)
polRound, polBlockID := cs.Votes.POLInfo()
proposal := types.NewProposal(height, round, blockParts.Header(), polRound, polBlockID)
if err := cs.privValidator.SignProposal(cs.state.ChainID, proposal); err == nil {
// Set fields
/* fields set by setProposal and addBlockPart
cs.Proposal = proposal
cs.ProposalBlock = block
cs.ProposalBlockParts = blockParts
*/
// send proposal and block parts on internal msg queue
cs.sendInternalMessage(msgInfo{&ProposalMessage{proposal}, ""})
@@ -938,8 +962,20 @@ func (cs *ConsensusState) createProposalBlock() (block *types.Block, blockParts
return
}
proposerAddr := cs.privValidator.GetPubKey().Address()
return cs.blockExec.CreateProposalBlock(cs.Height, cs.state, commit, proposerAddr)
maxBytes := cs.state.ConsensusParams.BlockSize.MaxBytes
maxGas := cs.state.ConsensusParams.BlockSize.MaxGas
// bound evidence to 1/10th of the block
evidence := cs.evpool.PendingEvidence(types.MaxEvidenceBytesPerBlock(maxBytes))
// Mempool validated transactions
txs := cs.mempool.ReapMaxBytesMaxGas(types.MaxDataBytes(
maxBytes,
cs.state.Validators.Size(),
len(evidence),
), maxGas)
proposerAddr := cs.privValidator.GetAddress()
block, parts := cs.state.MakeBlock(cs.Height, txs, commit, evidence, proposerAddr)
return block, parts
}
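Editor's note: one variant of createProposalBlock above delegates to blockExec.CreateProposalBlock, while the other reaps directly from the mempool after reserving roughly a tenth of the block for evidence. The back-of-the-envelope sketch below only illustrates the shape of that size budget; the overhead constants are placeholders, not the real MaxDataBytes arithmetic.

package main

import "fmt"

// dataBytesBudget: start from the consensus-params byte limit, set aside
// ~1/10 for evidence, subtract a rough per-block and per-vote overhead, and
// hand the rest to the mempool reap. All overhead numbers are illustrative.
func dataBytesBudget(maxBlockBytes int64, numValidators int) (evidenceBytes, txBytes int64) {
	const (
		headerOverhead  = 1024 // placeholder, not the real header size
		perVoteOverhead = 256  // placeholder, not the real commit-sig size
	)
	evidenceBytes = maxBlockBytes / 10
	txBytes = maxBlockBytes - evidenceBytes - headerOverhead - int64(numValidators)*perVoteOverhead
	if txBytes < 0 {
		txBytes = 0
	}
	return evidenceBytes, txBytes
}

func main() {
	ev, txs := dataBytesBudget(1<<20, 100) // 1 MiB block, 100 validators
	fmt.Printf("evidence budget: %d bytes, tx budget: %d bytes\n", ev, txs)
}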
// Enter: `timeoutPropose` after entering Propose.
@@ -958,6 +994,14 @@ func (cs *ConsensusState) enterPrevote(height int64, round int) {
cs.newStep()
}()
// fire event for how we got here
if cs.isProposalComplete() {
cs.eventBus.PublishEventCompleteProposal(cs.RoundStateEvent())
} else {
// we received +2/3 prevotes for a future round
// TODO: catchup event?
}
cs.Logger.Info(fmt.Sprintf("enterPrevote(%v/%v). Current: %v/%v/%v", height, round, cs.Height, cs.Round, cs.Step))
// Sign and broadcast vote as necessary
@@ -1128,12 +1172,12 @@ func (cs *ConsensusState) enterPrecommit(height int64, round int) {
func (cs *ConsensusState) enterPrecommitWait(height int64, round int) {
logger := cs.Logger.With("height", height, "round", round)
if cs.Height != height || round < cs.Round || (cs.Round == round && cs.TriggeredTimeoutPrecommit) {
if cs.Height != height || round < cs.Round || (cs.Round == round && cs.triggeredTimeoutPrecommit) {
logger.Debug(
fmt.Sprintf(
"enterPrecommitWait(%v/%v): Invalid args. "+
"Current state is Height/Round: %v/%v/, TriggeredTimeoutPrecommit:%v",
height, round, cs.Height, cs.Round, cs.TriggeredTimeoutPrecommit))
"Current state is Height/Round: %v/%v/, triggeredTimeoutPrecommit:%v",
height, round, cs.Height, cs.Round, cs.triggeredTimeoutPrecommit))
return
}
if !cs.Votes.Precommits(round).HasTwoThirdsAny() {
@@ -1143,7 +1187,7 @@ func (cs *ConsensusState) enterPrecommitWait(height int64, round int) {
defer func() {
// Done enterPrecommitWait:
cs.TriggeredTimeoutPrecommit = true
cs.triggeredTimeoutPrecommit = true
cs.newStep()
}()
@@ -1196,8 +1240,6 @@ func (cs *ConsensusState) enterCommit(height int64, commitRound int) {
// Set up ProposalBlockParts and keep waiting.
cs.ProposalBlock = nil
cs.ProposalBlockParts = types.NewPartSetFromHeader(blockID.PartsHeader)
cs.eventBus.PublishEventValidBlock(cs.RoundStateEvent())
cs.evsw.FireEvent(types.EventValidBlock, &cs.RoundState)
} else {
// We just need to keep waiting.
}
@@ -1378,9 +1420,14 @@ func (cs *ConsensusState) defaultSetProposal(proposal *types.Proposal) error {
return nil
}
// Verify POLRound, which must be -1 or in range [0, proposal.Round).
if proposal.POLRound < -1 ||
(proposal.POLRound >= 0 && proposal.POLRound >= proposal.Round) {
// We don't care about the proposal if we're already in cstypes.RoundStepCommit.
if cstypes.RoundStepCommit <= cs.Step {
return nil
}
// Verify POLRound, which must be -1 or between 0 and proposal.Round exclusive.
if proposal.POLRound != -1 &&
(proposal.POLRound < 0 || proposal.Round <= proposal.POLRound) {
return ErrInvalidProposalPOLRound
}
@@ -1390,12 +1437,7 @@ func (cs *ConsensusState) defaultSetProposal(proposal *types.Proposal) error {
}
cs.Proposal = proposal
// We don't update cs.ProposalBlockParts if it is already set.
// This happens if we're already in cstypes.RoundStepCommit or if there is a valid block in the current round.
// TODO: We can check if Proposal is for a different block as this is a sign of misbehavior!
if cs.ProposalBlockParts == nil {
cs.ProposalBlockParts = types.NewPartSetFromHeader(proposal.BlockID.PartsHeader)
}
cs.ProposalBlockParts = types.NewPartSetFromHeader(proposal.BlockPartsHeader)
cs.Logger.Info("Received proposal", "proposal", proposal)
return nil
}
@@ -1436,7 +1478,6 @@ func (cs *ConsensusState) addProposalBlockPart(msg *BlockPartMessage, peerID p2p
}
// NOTE: it's possible to receive complete proposal blocks for future rounds without having the proposal
cs.Logger.Info("Received complete proposal block", "height", cs.ProposalBlock.Height, "hash", cs.ProposalBlock.Hash())
cs.eventBus.PublishEventCompleteProposal(cs.CompleteProposalEvent())
// Update Valid* if we can.
prevotes := cs.Votes.Prevotes(cs.Round)
@@ -1481,8 +1522,7 @@ func (cs *ConsensusState) tryAddVote(vote *types.Vote, peerID p2p.ID) (bool, err
if err == ErrVoteHeightMismatch {
return added, err
} else if voteErr, ok := err.(*types.ErrVoteConflictingVotes); ok {
addr := cs.privValidator.GetPubKey().Address()
if bytes.Equal(vote.ValidatorAddress, addr) {
if bytes.Equal(vote.ValidatorAddress, cs.privValidator.GetAddress()) {
cs.Logger.Error("Found conflicting vote from ourselves. Did you unsafe_reset a validator?", "height", vote.Height, "round", vote.Round, "type", vote.Type)
return added, err
}
@@ -1534,7 +1574,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerID p2p.ID) (added bool,
// Not necessarily a bad peer, but not favourable behaviour.
if vote.Height != cs.Height {
err = ErrVoteHeightMismatch
cs.Logger.Info("Vote ignored and not added", "voteHeight", vote.Height, "csHeight", cs.Height, "peerID", peerID)
cs.Logger.Info("Vote ignored and not added", "voteHeight", vote.Height, "csHeight", cs.Height, "err", err)
return
}
@@ -1576,26 +1616,16 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerID p2p.ID) (added bool,
// Update Valid* if we can.
// NOTE: our proposal block may be nil, or may not be the block that received a polka.
if len(blockID.Hash) != 0 && (cs.ValidRound < vote.Round) && (vote.Round == cs.Round) {
// TODO: we may still want to update the ValidBlock and obtain it via gossiping
if len(blockID.Hash) != 0 &&
(cs.ValidRound < vote.Round) &&
(vote.Round <= cs.Round) &&
cs.ProposalBlock.HashesTo(blockID.Hash) {
if cs.ProposalBlock.HashesTo(blockID.Hash) {
cs.Logger.Info(
"Updating ValidBlock because of POL.", "validRound", cs.ValidRound, "POLRound", vote.Round)
cs.ValidRound = vote.Round
cs.ValidBlock = cs.ProposalBlock
cs.ValidBlockParts = cs.ProposalBlockParts
} else {
cs.Logger.Info(
"Valid block we don't know about. Set ProposalBlock=nil",
"proposal", cs.ProposalBlock.Hash(), "blockId", blockID.Hash)
// We're getting the wrong block.
cs.ProposalBlock = nil
}
if !cs.ProposalBlockParts.HasHeader(blockID.PartsHeader) {
cs.ProposalBlockParts = types.NewPartSetFromHeader(blockID.PartsHeader)
}
cs.evsw.FireEvent(types.EventValidBlock, &cs.RoundState)
cs.eventBus.PublishEventValidBlock(cs.RoundStateEvent())
cs.Logger.Info("Updating ValidBlock because of POL.", "validRound", cs.ValidRound, "POLRound", vote.Round)
cs.ValidRound = vote.Round
cs.ValidBlock = cs.ProposalBlock
cs.ValidBlockParts = cs.ProposalBlockParts
}
}
@@ -1604,8 +1634,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerID p2p.ID) (added bool,
// Round-skip if there is any 2/3+ of votes ahead of us
cs.enterNewRound(height, vote.Round)
} else if cs.Round == vote.Round && cstypes.RoundStepPrevote <= cs.Step { // current round
blockID, ok := prevotes.TwoThirdsMajority()
if ok && (cs.isProposalComplete() || len(blockID.Hash) == 0) {
if prevotes.HasTwoThirdsMajority() {
cs.enterPrecommit(height, vote.Round)
} else if prevotes.HasTwoThirdsAny() {
cs.enterPrevoteWait(height, vote.Round)
@@ -1647,7 +1676,7 @@ func (cs *ConsensusState) addVote(vote *types.Vote, peerID p2p.ID) (added bool,
}
func (cs *ConsensusState) signVote(type_ types.SignedMsgType, hash []byte, header types.PartSetHeader) (*types.Vote, error) {
addr := cs.privValidator.GetPubKey().Address()
addr := cs.privValidator.GetAddress()
valIndex, _ := cs.Validators.GetByAddress(addr)
vote := &types.Vote{
@@ -1683,7 +1712,7 @@ func (cs *ConsensusState) voteTime() time.Time {
// sign the vote and publish on internalMsgQueue
func (cs *ConsensusState) signAddVote(type_ types.SignedMsgType, hash []byte, header types.PartSetHeader) *types.Vote {
// if we don't have a key or we're not in the validator set, do nothing
if cs.privValidator == nil || !cs.Validators.HasAddress(cs.privValidator.GetPubKey().Address()) {
if cs.privValidator == nil || !cs.Validators.HasAddress(cs.privValidator.GetAddress()) {
return nil
}
vote, err := cs.signVote(type_, hash, header)

View File

@@ -73,8 +73,7 @@ func TestStateProposerSelection0(t *testing.T) {
// Commit a block and ensure proposer for the next height is correct.
prop := cs1.GetRoundState().Validators.GetProposer()
address := cs1.privValidator.GetPubKey().Address()
if !bytes.Equal(prop.Address, address) {
if !bytes.Equal(prop.Address, cs1.privValidator.GetAddress()) {
t.Fatalf("expected proposer to be validator %d. Got %X", 0, prop.Address)
}
@@ -88,8 +87,7 @@ func TestStateProposerSelection0(t *testing.T) {
ensureNewRound(newRoundCh, height+1, 0)
prop = cs1.GetRoundState().Validators.GetProposer()
addr := vss[1].GetPubKey().Address()
if !bytes.Equal(prop.Address, addr) {
if !bytes.Equal(prop.Address, vss[1].GetAddress()) {
panic(fmt.Sprintf("expected proposer to be validator %d. Got %X", 1, prop.Address))
}
}
@@ -112,8 +110,7 @@ func TestStateProposerSelection2(t *testing.T) {
// everyone just votes nil. we get a new proposer each round
for i := 0; i < len(vss); i++ {
prop := cs1.GetRoundState().Validators.GetProposer()
addr := vss[(i+round)%len(vss)].GetPubKey().Address()
correctProposer := addr
correctProposer := vss[(i+round)%len(vss)].GetAddress()
if !bytes.Equal(prop.Address, correctProposer) {
panic(fmt.Sprintf("expected RoundState.Validators.GetProposer() to be validator %d. Got %X", (i+2)%len(vss), prop.Address))
}
@@ -200,8 +197,7 @@ func TestStateBadProposal(t *testing.T) {
stateHash[0] = byte((stateHash[0] + 1) % 255)
propBlock.AppHash = stateHash
propBlockParts := propBlock.MakePartSet(partSize)
blockID := types.BlockID{propBlock.Hash(), propBlockParts.Header()}
proposal := types.NewProposal(vs2.Height, round, -1, blockID)
proposal := types.NewProposal(vs2.Height, round, propBlockParts.Header(), -1, types.BlockID{})
if err := vs2.SignProposal(config.ChainID(), proposal); err != nil {
t.Fatal("failed to sign bad proposal", err)
}
@@ -215,7 +211,7 @@ func TestStateBadProposal(t *testing.T) {
startTestRound(cs1, height, round)
// wait for proposal
ensureProposal(proposalCh, height, round, blockID)
ensureNewProposal(proposalCh, height, round)
// wait for prevote
ensurePrevote(voteCh, height, round)
@@ -508,8 +504,7 @@ func TestStateLockPOLRelock(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
newBlockCh := subscribe(cs1.eventBus, types.EventQueryNewBlockHeader)
@@ -600,8 +595,7 @@ func TestStateLockPOLUnlock(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
unlockCh := subscribe(cs1.eventBus, types.EventQueryUnlock)
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
// everything done from perspective of cs1
@@ -694,8 +688,7 @@ func TestStateLockPOLSafety1(t *testing.T) {
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
// start round and wait for propose and prevote
startTestRound(cs1, cs1.Height, round)
@@ -811,15 +804,13 @@ func TestStateLockPOLSafety2(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
unlockCh := subscribe(cs1.eventBus, types.EventQueryUnlock)
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
// the block for R0: gets polkad but we miss it
// (even though we signed it, shhh)
_, propBlock0 := decideProposal(cs1, vss[0], height, round)
propBlockHash0 := propBlock0.Hash()
propBlockParts0 := propBlock0.MakePartSet(partSize)
propBlockID0 := types.BlockID{propBlockHash0, propBlockParts0.Header()}
// the others sign a polka but we don't see it
prevotes := signVotes(types.PrevoteType, propBlockHash0, propBlockParts0.Header(), vs2, vs3, vs4)
@@ -828,6 +819,7 @@ func TestStateLockPOLSafety2(t *testing.T) {
prop1, propBlock1 := decideProposal(cs1, vs2, vs2.Height, vs2.Round+1)
propBlockHash1 := propBlock1.Hash()
propBlockParts1 := propBlock1.MakePartSet(partSize)
propBlockID1 := types.BlockID{propBlockHash1, propBlockParts1.Header()}
incrementRound(vs2, vs3, vs4)
@@ -862,7 +854,7 @@ func TestStateLockPOLSafety2(t *testing.T) {
round = round + 1 // moving to the next round
// in round 2 we see the polkad block from round 0
newProp := types.NewProposal(height, round, 0, propBlockID0)
newProp := types.NewProposal(height, round, propBlockParts0.Header(), 0, propBlockID1)
if err := vs3.SignProposal(config.ChainID(), newProp); err != nil {
t.Fatal(err)
}
@@ -903,8 +895,7 @@ func TestProposeValidBlock(t *testing.T) {
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
unlockCh := subscribe(cs1.eventBus, types.EventQueryUnlock)
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
// start round and wait for propose and prevote
startTestRound(cs1, cs1.Height, round)
@@ -918,7 +909,7 @@ func TestProposeValidBlock(t *testing.T) {
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], propBlockHash)
// the others sign a polka
// the others sign a polka but we don't see it
signAddVotes(cs1, types.PrevoteType, propBlockHash, propBlock.MakePartSet(partSize).Header(), vs2, vs3, vs4)
ensurePrecommit(voteCh, height, round)
@@ -973,121 +964,6 @@ func TestProposeValidBlock(t *testing.T) {
rs = cs1.GetRoundState()
assert.True(t, bytes.Equal(rs.ProposalBlock.Hash(), propBlockHash))
assert.True(t, bytes.Equal(rs.ProposalBlock.Hash(), rs.ValidBlock.Hash()))
assert.True(t, rs.Proposal.POLRound == rs.ValidRound)
assert.True(t, bytes.Equal(rs.Proposal.BlockID.Hash, rs.ValidBlock.Hash()))
}
// What we want:
// P0 miss to lock B but set valid block to B after receiving delayed prevote.
func TestSetValidBlockOnDelayedPrevote(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
height, round := cs1.Height, cs1.Round
partSize := types.BlockPartSizeBytes
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
validBlockCh := subscribe(cs1.eventBus, types.EventQueryValidBlock)
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round and wait for propose and prevote
startTestRound(cs1, cs1.Height, round)
ensureNewRound(newRoundCh, height, round)
ensureNewProposal(proposalCh, height, round)
rs := cs1.GetRoundState()
propBlock := rs.ProposalBlock
propBlockHash := propBlock.Hash()
propBlockParts := propBlock.MakePartSet(partSize)
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], propBlockHash)
// vs2 send prevote for propBlock
signAddVotes(cs1, types.PrevoteType, propBlockHash, propBlockParts.Header(), vs2)
// vs3 send prevote nil
signAddVotes(cs1, types.PrevoteType, nil, types.PartSetHeader{}, vs3)
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrevote.Nanoseconds())
ensurePrecommit(voteCh, height, round)
// we should have precommitted
validatePrecommit(t, cs1, round, -1, vss[0], nil, nil)
rs = cs1.GetRoundState()
assert.True(t, rs.ValidBlock == nil)
assert.True(t, rs.ValidBlockParts == nil)
assert.True(t, rs.ValidRound == -1)
// vs2 send (delayed) prevote for propBlock
signAddVotes(cs1, types.PrevoteType, propBlockHash, propBlockParts.Header(), vs4)
ensureNewValidBlock(validBlockCh, height, round)
rs = cs1.GetRoundState()
assert.True(t, bytes.Equal(rs.ValidBlock.Hash(), propBlockHash))
assert.True(t, rs.ValidBlockParts.Header().Equals(propBlockParts.Header()))
assert.True(t, rs.ValidRound == round)
}
// What we want:
// P0 miss to lock B as Proposal Block is missing, but set valid block to B after
// receiving delayed Block Proposal.
func TestSetValidBlockOnDelayedProposal(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
height, round := cs1.Height, cs1.Round
partSize := types.BlockPartSizeBytes
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
validBlockCh := subscribe(cs1.eventBus, types.EventQueryValidBlock)
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
round = round + 1 // move to round in which P0 is not proposer
incrementRound(vs2, vs3, vs4)
startTestRound(cs1, cs1.Height, round)
ensureNewRound(newRoundCh, height, round)
ensureNewTimeout(timeoutProposeCh, height, round, cs1.config.TimeoutPropose.Nanoseconds())
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], nil)
prop, propBlock := decideProposal(cs1, vs2, vs2.Height, vs2.Round+1)
propBlockHash := propBlock.Hash()
propBlockParts := propBlock.MakePartSet(partSize)
// vs2, vs3 and vs4 send prevote for propBlock
signAddVotes(cs1, types.PrevoteType, propBlockHash, propBlockParts.Header(), vs2, vs3, vs4)
ensureNewValidBlock(validBlockCh, height, round)
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.TimeoutPrevote.Nanoseconds())
ensurePrecommit(voteCh, height, round)
validatePrecommit(t, cs1, round, -1, vss[0], nil, nil)
if err := cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer"); err != nil {
t.Fatal(err)
}
ensureNewProposal(proposalCh, height, round)
rs := cs1.GetRoundState()
assert.True(t, bytes.Equal(rs.ValidBlock.Hash(), propBlockHash))
assert.True(t, rs.ValidBlockParts.Header().Equals(propBlockParts.Header()))
assert.True(t, rs.ValidRound == round)
}
// 4 vals, 3 Nil Precommits at P0
@@ -1121,8 +997,7 @@ func TestWaitingTimeoutProposeOnNewRound(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
// start round
startTestRound(cs1, height, round)
@@ -1155,8 +1030,7 @@ func TestRoundSkipOnNilPolkaFromHigherRound(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
// start round
startTestRound(cs1, height, round)
@@ -1189,8 +1063,7 @@ func TestWaitTimeoutProposeOnNilPolkaForTheCurrentRound(t *testing.T) {
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
// start round in which PO is not proposer
startTestRound(cs1, height, round)
@@ -1205,145 +1078,6 @@ func TestWaitTimeoutProposeOnNilPolkaForTheCurrentRound(t *testing.T) {
validatePrevote(t, cs1, round, vss[0], nil)
}
// What we want:
// P0 emit NewValidBlock event upon receiving 2/3+ Precommit for B but hasn't received block B yet
func TestEmitNewValidBlockEventOnCommitWithoutBlock(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
height, round := cs1.Height, 1
incrementRound(vs2, vs3, vs4)
partSize := types.BlockPartSizeBytes
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
validBlockCh := subscribe(cs1.eventBus, types.EventQueryValidBlock)
_, propBlock := decideProposal(cs1, vs2, vs2.Height, vs2.Round)
propBlockHash := propBlock.Hash()
propBlockParts := propBlock.MakePartSet(partSize)
// start round in which PO is not proposer
startTestRound(cs1, height, round)
ensureNewRound(newRoundCh, height, round)
// vs2, vs3 and vs4 send precommit for propBlock
signAddVotes(cs1, types.PrecommitType, propBlockHash, propBlockParts.Header(), vs2, vs3, vs4)
ensureNewValidBlock(validBlockCh, height, round)
rs := cs1.GetRoundState()
assert.True(t, rs.Step == cstypes.RoundStepCommit)
assert.True(t, rs.ProposalBlock == nil)
assert.True(t, rs.ProposalBlockParts.Header().Equals(propBlockParts.Header()))
}
// What we want:
// P0 receives 2/3+ Precommit for B for round 0, while being in round 1. It emits NewValidBlock event.
// After receiving block, it executes block and moves to the next height.
func TestCommitFromPreviousRound(t *testing.T) {
cs1, vss := randConsensusState(4)
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
height, round := cs1.Height, 1
partSize := types.BlockPartSizeBytes
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
validBlockCh := subscribe(cs1.eventBus, types.EventQueryValidBlock)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
prop, propBlock := decideProposal(cs1, vs2, vs2.Height, vs2.Round)
propBlockHash := propBlock.Hash()
propBlockParts := propBlock.MakePartSet(partSize)
// start round in which PO is not proposer
startTestRound(cs1, height, round)
ensureNewRound(newRoundCh, height, round)
// vs2, vs3 and vs4 send precommit for propBlock for the previous round
signAddVotes(cs1, types.PrecommitType, propBlockHash, propBlockParts.Header(), vs2, vs3, vs4)
ensureNewValidBlock(validBlockCh, height, round)
rs := cs1.GetRoundState()
assert.True(t, rs.Step == cstypes.RoundStepCommit)
assert.True(t, rs.CommitRound == vs2.Round)
assert.True(t, rs.ProposalBlock == nil)
assert.True(t, rs.ProposalBlockParts.Header().Equals(propBlockParts.Header()))
if err := cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer"); err != nil {
t.Fatal(err)
}
ensureNewProposal(proposalCh, height, round)
ensureNewRound(newRoundCh, height+1, 0)
}
type fakeTxNotifier struct {
ch chan struct{}
}
func (n *fakeTxNotifier) TxsAvailable() <-chan struct{} {
return n.ch
}
func (n *fakeTxNotifier) Notify() {
n.ch <- struct{}{}
}
func TestStartNextHeightCorrectly(t *testing.T) {
cs1, vss := randConsensusState(4)
cs1.config.SkipTimeoutCommit = false
cs1.txNotifier = &fakeTxNotifier{ch: make(chan struct{})}
vs2, vs3, vs4 := vss[1], vss[2], vss[3]
height, round := cs1.Height, cs1.Round
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
newBlockHeader := subscribe(cs1.eventBus, types.EventQueryNewBlockHeader)
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
// start round and wait for propose and prevote
startTestRound(cs1, height, round)
ensureNewRound(newRoundCh, height, round)
ensureNewProposal(proposalCh, height, round)
rs := cs1.GetRoundState()
theBlockHash := rs.ProposalBlock.Hash()
theBlockParts := rs.ProposalBlockParts.Header()
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], theBlockHash)
signAddVotes(cs1, types.PrevoteType, theBlockHash, theBlockParts, vs2, vs3, vs4)
ensurePrecommit(voteCh, height, round)
// the proposed block should now be locked and our precommit added
validatePrecommit(t, cs1, round, round, vss[0], theBlockHash, theBlockHash)
rs = cs1.GetRoundState()
// add precommits
signAddVotes(cs1, types.PrecommitType, nil, types.PartSetHeader{}, vs2)
signAddVotes(cs1, types.PrecommitType, theBlockHash, theBlockParts, vs3)
signAddVotes(cs1, types.PrecommitType, theBlockHash, theBlockParts, vs4)
ensureNewBlockHeader(newBlockHeader, height, theBlockHash)
rs = cs1.GetRoundState()
assert.True(t, rs.TriggeredTimeoutPrecommit)
cs1.txNotifier.(*fakeTxNotifier).Notify()
ensureNewTimeout(timeoutProposeCh, height+1, round, cs1.config.TimeoutPropose.Nanoseconds())
rs = cs1.GetRoundState()
assert.False(t, rs.TriggeredTimeoutPrecommit, "triggeredTimeoutPrecommit should be false at the beginning of each round")
}
//------------------------------------------------------------------------------------------
// SlashingSuite
// TODO: Slashing
@@ -1439,8 +1173,7 @@ func TestStateHalt1(t *testing.T) {
timeoutWaitCh := subscribe(cs1.eventBus, types.EventQueryTimeoutWait)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
newBlockCh := subscribe(cs1.eventBus, types.EventQueryNewBlock)
addr := cs1.privValidator.GetPubKey().Address()
voteCh := subscribeToVoter(cs1, addr)
voteCh := subscribeToVoter(cs1, cs1.privValidator.GetAddress())
// start round and wait for propose and prevote
startTestRound(cs1, height, round)

View File

@@ -50,9 +50,8 @@ func TestPeerCatchupRounds(t *testing.T) {
func makeVoteHR(t *testing.T, height int64, round int, privVals []types.PrivValidator, valIndex int) *types.Vote {
privVal := privVals[valIndex]
addr := privVal.GetPubKey().Address()
vote := &types.Vote{
ValidatorAddress: addr,
ValidatorAddress: privVal.GetAddress(),
ValidatorIndex: valIndex,
Height: height,
Round: round,

View File

@@ -55,31 +55,3 @@ func (prs PeerRoundState) StringIndented(indent string) string {
indent, prs.CatchupCommit, prs.CatchupCommitRound,
indent)
}
//-----------------------------------------------------------
// These methods are for Protobuf Compatibility
// Size returns the size of the amino encoding, in bytes.
func (ps *PeerRoundState) Size() int {
bs, _ := ps.Marshal()
return len(bs)
}
// Marshal returns the amino encoding.
func (ps *PeerRoundState) Marshal() ([]byte, error) {
return cdc.MarshalBinaryBare(ps)
}
// MarshalTo calls Marshal and copies to the given buffer.
func (ps *PeerRoundState) MarshalTo(data []byte) (int, error) {
bs, err := ps.Marshal()
if err != nil {
return -1, err
}
return copy(data, bs), nil
}
// Unmarshal deserializes from amino encoded form.
func (ps *PeerRoundState) Unmarshal(bs []byte) error {
return cdc.UnmarshalBinaryBare(bs, ps)
}

View File

@@ -26,15 +26,8 @@ const (
RoundStepPrecommitWait = RoundStepType(0x07) // Did receive any +2/3 precommits, start timeout
RoundStepCommit = RoundStepType(0x08) // Entered commit state machine
// NOTE: RoundStepNewHeight acts as RoundStepCommitWait.
// NOTE: Update IsValid method if you change this!
)
// IsValid returns true if the step is valid, false if unknown/undefined.
func (rs RoundStepType) IsValid() bool {
return uint8(rs) >= 0x01 && uint8(rs) <= 0x08
}
// String returns a string
func (rs RoundStepType) String() string {
switch rs {
@@ -65,26 +58,25 @@ func (rs RoundStepType) String() string {
// NOTE: Not thread safe. Should only be manipulated by functions downstream
// of the cs.receiveRoutine
type RoundState struct {
Height int64 `json:"height"` // Height we are working on
Round int `json:"round"`
Step RoundStepType `json:"step"`
StartTime time.Time `json:"start_time"`
CommitTime time.Time `json:"commit_time"` // Subjective time when +2/3 precommits for Block at Round were found
Validators *types.ValidatorSet `json:"validators"`
Proposal *types.Proposal `json:"proposal"`
ProposalBlock *types.Block `json:"proposal_block"`
ProposalBlockParts *types.PartSet `json:"proposal_block_parts"`
LockedRound int `json:"locked_round"`
LockedBlock *types.Block `json:"locked_block"`
LockedBlockParts *types.PartSet `json:"locked_block_parts"`
ValidRound int `json:"valid_round"` // Last known round with POL for non-nil valid block.
ValidBlock *types.Block `json:"valid_block"` // Last known block of POL mentioned above.
ValidBlockParts *types.PartSet `json:"valid_block_parts"` // Last known block parts of POL mentioned above.
Votes *HeightVoteSet `json:"votes"`
CommitRound int `json:"commit_round"` //
LastCommit *types.VoteSet `json:"last_commit"` // Last precommits at Height-1
LastValidators *types.ValidatorSet `json:"last_validators"`
TriggeredTimeoutPrecommit bool `json:"triggered_timeout_precommit"`
Height int64 `json:"height"` // Height we are working on
Round int `json:"round"`
Step RoundStepType `json:"step"`
StartTime time.Time `json:"start_time"`
CommitTime time.Time `json:"commit_time"` // Subjective time when +2/3 precommits for Block at Round were found
Validators *types.ValidatorSet `json:"validators"`
Proposal *types.Proposal `json:"proposal"`
ProposalBlock *types.Block `json:"proposal_block"`
ProposalBlockParts *types.PartSet `json:"proposal_block_parts"`
LockedRound int `json:"locked_round"`
LockedBlock *types.Block `json:"locked_block"`
LockedBlockParts *types.PartSet `json:"locked_block_parts"`
ValidRound int `json:"valid_round"` // Last known round with POL for non-nil valid block.
ValidBlock *types.Block `json:"valid_block"` // Last known block of POL mentioned above.
ValidBlockParts *types.PartSet `json:"valid_block_parts"` // Last known block parts of POL mentioned above.
Votes *HeightVoteSet `json:"votes"`
CommitRound int `json:"commit_round"` //
LastCommit *types.VoteSet `json:"last_commit"` // Last precommits at Height-1
LastValidators *types.ValidatorSet `json:"last_validators"`
}
// Compressed version of the RoundState for use in RPC
@@ -113,50 +105,18 @@ func (rs *RoundState) RoundStateSimple() RoundStateSimple {
}
}
// NewRoundEvent returns the RoundState with proposer information as an event.
func (rs *RoundState) NewRoundEvent() types.EventDataNewRound {
addr := rs.Validators.GetProposer().Address
idx, _ := rs.Validators.GetByAddress(addr)
return types.EventDataNewRound{
Height: rs.Height,
Round: rs.Round,
Step: rs.Step.String(),
Proposer: types.ValidatorInfo{
Address: addr,
Index: idx,
},
}
}
// CompleteProposalEvent returns information about a proposed block as an event.
func (rs *RoundState) CompleteProposalEvent() types.EventDataCompleteProposal {
// We must construct BlockID from ProposalBlock and ProposalBlockParts
// cs.Proposal is not guaranteed to be set when this function is called
blockId := types.BlockID{
Hash: rs.ProposalBlock.Hash(),
PartsHeader: rs.ProposalBlockParts.Header(),
}
return types.EventDataCompleteProposal{
Height: rs.Height,
Round: rs.Round,
Step: rs.Step.String(),
BlockID: blockId,
}
}
// RoundStateEvent returns the H/R/S of the RoundState as an event.
func (rs *RoundState) RoundStateEvent() types.EventDataRoundState {
// copy the RoundState.
// TODO: if we want to avoid this, we may need synchronous events after all
rsCopy := *rs
return types.EventDataRoundState{
edrs := types.EventDataRoundState{
Height: rs.Height,
Round: rs.Round,
Step: rs.Step.String(),
RoundState: &rsCopy,
}
return edrs
}
// String returns a string
@@ -202,31 +162,3 @@ func (rs *RoundState) StringShort() string {
return fmt.Sprintf(`RoundState{H:%v R:%v S:%v ST:%v}`,
rs.Height, rs.Round, rs.Step, rs.StartTime)
}
//-----------------------------------------------------------
// These methods are for Protobuf Compatibility
// Size returns the size of the amino encoding, in bytes.
func (rs *RoundStateSimple) Size() int {
bs, _ := rs.Marshal()
return len(bs)
}
// Marshal returns the amino encoding.
func (rs *RoundStateSimple) Marshal() ([]byte, error) {
return cdc.MarshalBinaryBare(rs)
}
// MarshalTo calls Marshal and copies to the given buffer.
func (rs *RoundStateSimple) MarshalTo(data []byte) (int, error) {
bs, err := rs.Marshal()
if err != nil {
return -1, err
}
return copy(data, bs), nil
}
// Unmarshal deserializes from amino encoded form.
func (rs *RoundStateSimple) Unmarshal(bs []byte) error {
return cdc.UnmarshalBinaryBare(bs, rs)
}

View File

@@ -63,8 +63,11 @@ func BenchmarkRoundStateDeepCopy(b *testing.B) {
// Random Proposal
proposal := &types.Proposal{
Timestamp: tmtime.Now(),
BlockID: blockID,
Signature: sig,
BlockPartsHeader: types.PartSetHeader{
Hash: cmn.RandBytes(20),
},
POLBlockID: blockID,
Signature: sig,
}
// Random HeightVoteSet
// TODO: hvs :=

View File

@@ -13,7 +13,6 @@ import (
amino "github.com/tendermint/go-amino"
auto "github.com/tendermint/tendermint/libs/autofile"
cmn "github.com/tendermint/tendermint/libs/common"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
)
@@ -96,11 +95,6 @@ func (wal *baseWAL) Group() *auto.Group {
return wal.group
}
func (wal *baseWAL) SetLogger(l log.Logger) {
wal.BaseService.Logger = l
wal.group.SetLogger(l)
}
func (wal *baseWAL) OnStart() error {
size, err := wal.group.Head.Size()
if err != nil {

View File

@@ -40,9 +40,8 @@ func WALGenerateNBlocks(wr io.Writer, numBlocks int) (err error) {
// COPY PASTE FROM node.go WITH A FEW MODIFICATIONS
// NOTE: we can't import node package because of circular dependency.
// NOTE: we don't do handshake so need to set state.Version.Consensus.App directly.
privValidatorKeyFile := config.PrivValidatorKeyFile()
privValidatorStateFile := config.PrivValidatorStateFile()
privValidator := privval.LoadOrGenFilePV(privValidatorKeyFile, privValidatorStateFile)
privValidatorFile := config.PrivValidatorFile()
privValidator := privval.LoadOrGenFilePV(privValidatorFile)
genDoc, err := types.GenesisDocFromFile(config.GenesisFile())
if err != nil {
return errors.Wrap(err, "failed to read genesis file")

View File

@@ -7,13 +7,13 @@ import (
"io/ioutil"
"os"
"path/filepath"
// "sync"
"testing"
"time"
"github.com/tendermint/tendermint/consensus/types"
"github.com/tendermint/tendermint/libs/autofile"
"github.com/tendermint/tendermint/libs/log"
tmtypes "github.com/tendermint/tendermint/types"
tmtime "github.com/tendermint/tendermint/types/time"
@@ -23,27 +23,29 @@ import (
func TestWALTruncate(t *testing.T) {
walDir, err := ioutil.TempDir("", "wal")
require.NoError(t, err)
if err != nil {
panic(fmt.Errorf("failed to create temp WAL file: %v", err))
}
defer os.RemoveAll(walDir)
walFile := filepath.Join(walDir, "wal")
// The magic number 4K forces content to be truncated during RotateFile; the default headSizeLimit (10M) is too large to simulate in a test.
// The magic number 1 * time.Millisecond makes RotateFile check frequently; the default groupCheckDuration (5s) is too long to simulate in a test.
wal, err := NewWAL(walFile,
autofile.GroupHeadSizeLimit(4096),
autofile.GroupCheckDuration(1*time.Millisecond),
)
require.NoError(t, err)
wal.SetLogger(log.TestingLogger())
err = wal.Start()
require.NoError(t, err)
wal, err := NewWAL(walFile, autofile.GroupHeadSizeLimit(4096), autofile.GroupCheckDuration(1*time.Millisecond))
if err != nil {
t.Fatal(err)
}
wal.Start()
defer wal.Stop()
// 60 blocks take roughly 70K, which exceeds the group's headBuf size (4096 * 10); once headBuf is full, the truncated content is flushed to the file.
// At that point RotateFile is called, so truncated content exists in each file.
err = WALGenerateNBlocks(wal.Group(), 60)
require.NoError(t, err)
if err != nil {
t.Fatal(err)
}
time.Sleep(1 * time.Millisecond) // wait for groupCheckDuration to make sure RotateFile runs
@@ -97,8 +99,9 @@ func TestWALSearchForEndHeight(t *testing.T) {
walFile := tempWALWithData(walBody)
wal, err := NewWAL(walFile)
require.NoError(t, err)
wal.SetLogger(log.TestingLogger())
if err != nil {
t.Fatal(err)
}
h := int64(3)
gr, found, err := wal.SearchForEndHeight(h, &WALSearchOptions{})

View File

@@ -5,7 +5,7 @@ import (
"fmt"
"io/ioutil"
"golang.org/x/crypto/openpgp/armor"
"golang.org/x/crypto/openpgp/armor" // forked to github.com/tendermint/crypto
)
func EncodeArmor(blockType string, headers map[string]string, data []byte) string {

View File

@@ -1,31 +1,9 @@
package crypto
import (
"github.com/tendermint/tendermint/crypto/tmhash"
cmn "github.com/tendermint/tendermint/libs/common"
)
const (
// AddressSize is the size of a pubkey address.
AddressSize = tmhash.TruncatedSize
)
// An address is a []byte, but hex-encoded even in JSON.
// []byte leaves us the option to change the address length.
// Use an alias so Unmarshal methods (with ptr receivers) are available too.
type Address = cmn.HexBytes
func AddressHash(bz []byte) Address {
return Address(tmhash.SumTruncated(bz))
}
type PubKey interface {
Address() Address
Bytes() []byte
VerifyBytes(msg []byte, sig []byte) bool
Equals(PubKey) bool
}
type PrivKey interface {
Bytes() []byte
Sign(msg []byte) ([]byte, error)
@@ -33,6 +11,18 @@ type PrivKey interface {
Equals(PrivKey) bool
}
// An address is a []byte, but hex-encoded even in JSON.
// []byte leaves us the option to change the address length.
// Use an alias so Unmarshal methods (with ptr receivers) are available too.
type Address = cmn.HexBytes
type PubKey interface {
Address() Address
Bytes() []byte
VerifyBytes(msg []byte, sig []byte) bool
Equals(PubKey) bool
}
type Symmetric interface {
Keygen() []byte
Encrypt(plaintext []byte, secret []byte) (ciphertext []byte)

View File

@@ -7,7 +7,7 @@ import (
"io"
amino "github.com/tendermint/go-amino"
"golang.org/x/crypto/ed25519"
"golang.org/x/crypto/ed25519" // forked to github.com/tendermint/crypto
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/tmhash"
@@ -18,8 +18,8 @@ import (
var _ crypto.PrivKey = PrivKeyEd25519{}
const (
PrivKeyAminoName = "tendermint/PrivKeyEd25519"
PubKeyAminoName = "tendermint/PubKeyEd25519"
PrivKeyAminoRoute = "tendermint/PrivKeyEd25519"
PubKeyAminoRoute = "tendermint/PubKeyEd25519"
// Size of an Edwards25519 signature. Namely the size of a compressed
// Edwards25519 point, and a field element. Both of which are 32 bytes.
SignatureSize = 64
@@ -30,11 +30,11 @@ var cdc = amino.NewCodec()
func init() {
cdc.RegisterInterface((*crypto.PubKey)(nil), nil)
cdc.RegisterConcrete(PubKeyEd25519{},
PubKeyAminoName, nil)
PubKeyAminoRoute, nil)
cdc.RegisterInterface((*crypto.PrivKey)(nil), nil)
cdc.RegisterConcrete(PrivKeyEd25519{},
PrivKeyAminoName, nil)
PrivKeyAminoRoute, nil)
}
// PrivKeyEd25519 implements crypto.PrivKey.
@@ -136,7 +136,7 @@ type PubKeyEd25519 [PubKeyEd25519Size]byte
// Address is the SHA256-20 of the raw pubkey bytes.
func (pubKey PubKeyEd25519) Address() crypto.Address {
return crypto.Address(tmhash.SumTruncated(pubKey[:]))
return crypto.Address(tmhash.Sum(pubKey[:]))
}
// Bytes marshals the PubKey using amino encoding.

View File

@@ -12,11 +12,11 @@ import (
var cdc = amino.NewCodec()
// nameTable is used to map public key concrete types back
// to their registered amino names. This should eventually be handled
// routeTable is used to map public key concrete types back
// to their amino routes. This should eventually be handled
// by amino. Example usage:
// nameTable[reflect.TypeOf(ed25519.PubKeyEd25519{})] = ed25519.PubKeyAminoName
var nameTable = make(map[reflect.Type]string, 3)
// routeTable[reflect.TypeOf(ed25519.PubKeyEd25519{})] = ed25519.PubKeyAminoRoute
var routeTable = make(map[reflect.Type]string, 3)
func init() {
// NOTE: It's important that there be no conflicts here,
@@ -29,16 +29,16 @@ func init() {
// TODO: Have amino provide a way to go from concrete struct to route directly.
// It's currently a private API
nameTable[reflect.TypeOf(ed25519.PubKeyEd25519{})] = ed25519.PubKeyAminoName
nameTable[reflect.TypeOf(secp256k1.PubKeySecp256k1{})] = secp256k1.PubKeyAminoName
nameTable[reflect.TypeOf(multisig.PubKeyMultisigThreshold{})] = multisig.PubKeyMultisigThresholdAminoRoute
routeTable[reflect.TypeOf(ed25519.PubKeyEd25519{})] = ed25519.PubKeyAminoRoute
routeTable[reflect.TypeOf(secp256k1.PubKeySecp256k1{})] = secp256k1.PubKeyAminoRoute
routeTable[reflect.TypeOf(&multisig.PubKeyMultisigThreshold{})] = multisig.PubKeyMultisigThresholdAminoRoute
}
// PubkeyAminoName returns the amino route of a pubkey
// PubkeyAminoRoute returns the amino route of a pubkey
// cdc is currently passed in, as eventually this will not be using
// a package level codec.
func PubkeyAminoName(cdc *amino.Codec, key crypto.PubKey) (string, bool) {
route, found := nameTable[reflect.TypeOf(key)]
func PubkeyAminoRoute(cdc *amino.Codec, key crypto.PubKey) (string, bool) {
route, found := routeTable[reflect.TypeOf(key)]
return route, found
}
@@ -47,17 +47,17 @@ func RegisterAmino(cdc *amino.Codec) {
// These are all written here instead of
cdc.RegisterInterface((*crypto.PubKey)(nil), nil)
cdc.RegisterConcrete(ed25519.PubKeyEd25519{},
ed25519.PubKeyAminoName, nil)
ed25519.PubKeyAminoRoute, nil)
cdc.RegisterConcrete(secp256k1.PubKeySecp256k1{},
secp256k1.PubKeyAminoName, nil)
secp256k1.PubKeyAminoRoute, nil)
cdc.RegisterConcrete(multisig.PubKeyMultisigThreshold{},
multisig.PubKeyMultisigThresholdAminoRoute, nil)
cdc.RegisterInterface((*crypto.PrivKey)(nil), nil)
cdc.RegisterConcrete(ed25519.PrivKeyEd25519{},
ed25519.PrivKeyAminoName, nil)
ed25519.PrivKeyAminoRoute, nil)
cdc.RegisterConcrete(secp256k1.PrivKeySecp256k1{},
secp256k1.PrivKeyAminoName, nil)
secp256k1.PrivKeyAminoRoute, nil)
}
func PrivKeyFromBytes(privKeyBytes []byte) (privKey crypto.PrivKey, err error) {

View File

@@ -128,18 +128,18 @@ func TestPubKeyInvalidDataProperReturnsEmpty(t *testing.T) {
require.Nil(t, pk)
}
func TestPubkeyAminoName(t *testing.T) {
func TestPubkeyAminoRoute(t *testing.T) {
tests := []struct {
key crypto.PubKey
want string
found bool
}{
{ed25519.PubKeyEd25519{}, ed25519.PubKeyAminoName, true},
{secp256k1.PubKeySecp256k1{}, secp256k1.PubKeyAminoName, true},
{multisig.PubKeyMultisigThreshold{}, multisig.PubKeyMultisigThresholdAminoRoute, true},
{ed25519.PubKeyEd25519{}, ed25519.PubKeyAminoRoute, true},
{secp256k1.PubKeySecp256k1{}, secp256k1.PubKeyAminoRoute, true},
{&multisig.PubKeyMultisigThreshold{}, multisig.PubKeyMultisigThresholdAminoRoute, true},
}
for i, tc := range tests {
got, found := PubkeyAminoName(cdc, tc.key)
got, found := PubkeyAminoRoute(cdc, tc.key)
require.Equal(t, tc.found, found, "not equal on tc %d", i)
if tc.found {
require.Equal(t, tc.want, got, "not equal on tc %d", i)

View File

@@ -3,7 +3,7 @@ package crypto
import (
"crypto/sha256"
"golang.org/x/crypto/ripemd160"
"golang.org/x/crypto/ripemd160" // forked to github.com/tendermint/crypto
)
func Sha256(bytes []byte) []byte {

View File

@@ -1,21 +0,0 @@
package merkle
import (
"github.com/tendermint/tendermint/crypto/tmhash"
)
// TODO: make these have a large predefined capacity
var (
leafPrefix = []byte{0}
innerPrefix = []byte{1}
)
// returns tmhash(0x00 || leaf)
func leafHash(leaf []byte) []byte {
return tmhash.Sum(append(leafPrefix, leaf...))
}
// returns tmhash(0x01 || left || right)
func innerHash(left []byte, right []byte) []byte {
return tmhash.Sum(append(innerPrefix, append(left, right...)...))
}

View File

@@ -43,14 +43,10 @@ func (poz ProofOperators) Verify(root []byte, keypath string, args [][]byte) (er
for i, op := range poz {
key := op.GetKey()
if len(key) != 0 {
if len(keys) == 0 {
return cmn.NewError("Key path has insufficient # of parts: expected no more keys but got %+v", string(key))
if !bytes.Equal(keys[0], key) {
return cmn.NewError("Key mismatch on operation #%d: expected %+v but %+v", i, []byte(keys[0]), []byte(key))
}
lastKey := keys[len(keys)-1]
if !bytes.Equal(lastKey, key) {
return cmn.NewError("Key mismatch on operation #%d: expected %+v but got %+v", i, string(lastKey), string(key))
}
keys = keys[:len(keys)-1]
keys = keys[1:]
}
args, err = op.Run(args)
if err != nil {
@@ -58,7 +54,7 @@ func (poz ProofOperators) Verify(root []byte, keypath string, args [][]byte) (er
}
}
if !bytes.Equal(root, args[0]) {
return cmn.NewError("Calculated root hash is invalid: expected %+v but got %+v", root, args[0])
return cmn.NewError("Calculated root hash is invalid: expected %+v but %+v", root, args[0])
}
if len(keys) != 0 {
return cmn.NewError("Keypath not consumed all")

View File

@@ -71,11 +71,11 @@ func (op SimpleValueOp) Run(args [][]byte) ([][]byte, error) {
hasher.Write(value) // does not error
vhash := hasher.Sum(nil)
bz := new(bytes.Buffer)
// Wrap <op.Key, vhash> to hash the KVPair.
encodeByteSlice(bz, []byte(op.key)) // does not error
encodeByteSlice(bz, []byte(vhash)) // does not error
kvhash := leafHash(bz.Bytes())
hasher = tmhash.New()
encodeByteSlice(hasher, []byte(op.key)) // does not error
encodeByteSlice(hasher, []byte(vhash)) // does not error
kvhash := hasher.Sum(nil)
if !bytes.Equal(kvhash, op.Proof.LeafHash) {
return nil, cmn.NewError("leaf hash mismatch: want %X got %X", op.Proof.LeafHash, kvhash)

View File

@@ -1,140 +0,0 @@
package merkle
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/tendermint/go-amino"
cmn "github.com/tendermint/tendermint/libs/common"
)
const ProofOpDomino = "test:domino"
// Expects given input, produces given output.
// Like the game dominos.
type DominoOp struct {
key string // unexported, may be empty
Input string
Output string
}
func NewDominoOp(key, input, output string) DominoOp {
return DominoOp{
key: key,
Input: input,
Output: output,
}
}
func DominoOpDecoder(pop ProofOp) (ProofOperator, error) {
if pop.Type != ProofOpDomino {
panic("unexpected proof op type")
}
var op DominoOp // a bit strange as we'll discard this, but it works.
err := amino.UnmarshalBinaryLengthPrefixed(pop.Data, &op)
if err != nil {
return nil, cmn.ErrorWrap(err, "decoding ProofOp.Data into SimpleValueOp")
}
return NewDominoOp(string(pop.Key), op.Input, op.Output), nil
}
func (dop DominoOp) ProofOp() ProofOp {
bz := amino.MustMarshalBinaryLengthPrefixed(dop)
return ProofOp{
Type: ProofOpDomino,
Key: []byte(dop.key),
Data: bz,
}
}
func (dop DominoOp) Run(input [][]byte) (output [][]byte, err error) {
if len(input) != 1 {
return nil, cmn.NewError("Expected input of length 1")
}
if string(input[0]) != dop.Input {
return nil, cmn.NewError("Expected input %v, got %v",
dop.Input, string(input[0]))
}
return [][]byte{[]byte(dop.Output)}, nil
}
func (dop DominoOp) GetKey() []byte {
return []byte(dop.key)
}
//----------------------------------------
func TestProofOperators(t *testing.T) {
var err error
// ProofRuntime setup
// TODO test this somehow.
// prt := NewProofRuntime()
// prt.RegisterOpDecoder(ProofOpDomino, DominoOpDecoder)
// ProofOperators setup
op1 := NewDominoOp("KEY1", "INPUT1", "INPUT2")
op2 := NewDominoOp("KEY2", "INPUT2", "INPUT3")
op3 := NewDominoOp("", "INPUT3", "INPUT4")
op4 := NewDominoOp("KEY4", "INPUT4", "OUTPUT4")
// Good
popz := ProofOperators([]ProofOperator{op1, op2, op3, op4})
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.Nil(t, err)
err = popz.VerifyValue(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", bz("INPUT1"))
assert.Nil(t, err)
// BAD INPUT
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1_WRONG")})
assert.NotNil(t, err)
err = popz.VerifyValue(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", bz("INPUT1_WRONG"))
assert.NotNil(t, err)
// BAD KEY 1
err = popz.Verify(bz("OUTPUT4"), "/KEY3/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
// BAD KEY 2
err = popz.Verify(bz("OUTPUT4"), "KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
// BAD KEY 3
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1/", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
// BAD KEY 4
err = popz.Verify(bz("OUTPUT4"), "//KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
// BAD KEY 5
err = popz.Verify(bz("OUTPUT4"), "/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
// BAD OUTPUT 1
err = popz.Verify(bz("OUTPUT4_WRONG"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
// BAD OUTPUT 2
err = popz.Verify(bz(""), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
// BAD POPZ 1
popz = []ProofOperator{op1, op2, op4}
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
// BAD POPZ 2
popz = []ProofOperator{op4, op3, op2, op1}
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
// BAD POPZ 3
popz = []ProofOperator{}
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
}
func bz(s string) []byte {
return []byte(s)
}

View File

@@ -1,97 +0,0 @@
package merkle
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// These tests were taken from https://github.com/google/trillian/blob/master/merkle/rfc6962/rfc6962_test.go,
// and consequently fall under the above license.
import (
"bytes"
"encoding/hex"
"testing"
"github.com/tendermint/tendermint/crypto/tmhash"
)
func TestRFC6962Hasher(t *testing.T) {
_, leafHashTrail := trailsFromByteSlices([][]byte{[]byte("L123456")})
leafHash := leafHashTrail.Hash
_, leafHashTrail = trailsFromByteSlices([][]byte{[]byte{}})
emptyLeafHash := leafHashTrail.Hash
for _, tc := range []struct {
desc string
got []byte
want string
}{
// Since creating a merkle tree of no leaves is unsupported here, we skip
// the corresponding trillian test vector.
// Check that the empty hash is not the same as the hash of an empty leaf.
// echo -n 00 | xxd -r -p | sha256sum
{
desc: "RFC6962 Empty Leaf",
want: "6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d"[:tmhash.Size*2],
got: emptyLeafHash,
},
// echo -n 004C313233343536 | xxd -r -p | sha256sum
{
desc: "RFC6962 Leaf",
want: "395aa064aa4c29f7010acfe3f25db9485bbd4b91897b6ad7ad547639252b4d56"[:tmhash.Size*2],
got: leafHash,
},
// echo -n 014E3132334E343536 | xxd -r -p | sha256sum
{
desc: "RFC6962 Node",
want: "aa217fe888e47007fa15edab33c2b492a722cb106c64667fc2b044444de66bbb"[:tmhash.Size*2],
got: innerHash([]byte("N123"), []byte("N456")),
},
} {
t.Run(tc.desc, func(t *testing.T) {
wantBytes, err := hex.DecodeString(tc.want)
if err != nil {
t.Fatalf("hex.DecodeString(%x): %v", tc.want, err)
}
if got, want := tc.got, wantBytes; !bytes.Equal(got, want) {
t.Errorf("got %x, want %x", got, want)
}
})
}
}
func TestRFC6962HasherCollisions(t *testing.T) {
// Check that different leaves have different hashes.
leaf1, leaf2 := []byte("Hello"), []byte("World")
_, leafHashTrail := trailsFromByteSlices([][]byte{leaf1})
hash1 := leafHashTrail.Hash
_, leafHashTrail = trailsFromByteSlices([][]byte{leaf2})
hash2 := leafHashTrail.Hash
if bytes.Equal(hash1, hash2) {
t.Errorf("Leaf hashes should differ, but both are %x", hash1)
}
// Compute an intermediate subtree hash.
_, subHash1Trail := trailsFromByteSlices([][]byte{hash1, hash2})
subHash1 := subHash1Trail.Hash
// Check that this is not the same as a leaf hash of their concatenation.
preimage := append(hash1, hash2...)
_, forgedHashTrail := trailsFromByteSlices([][]byte{preimage})
forgedHash := forgedHashTrail.Hash
if bytes.Equal(subHash1, forgedHash) {
t.Errorf("Hasher is not second-preimage resistant")
}
// Swap the order of nodes and check that the hash is different.
_, subHash2Trail := trailsFromByteSlices([][]byte{hash2, hash1})
subHash2 := subHash2Trail.Hash
if bytes.Equal(subHash1, subHash2) {
t.Errorf("Subtree hash does not depend on the order of leaves")
}
}

View File

@@ -13,14 +13,14 @@ func TestSimpleMap(t *testing.T) {
values []string // each string gets converted to []byte in test
want string
}{
{[]string{"key1"}, []string{"value1"}, "a44d3cc7daba1a4600b00a2434b30f8b970652169810d6dfa9fb1793a2189324"},
{[]string{"key1"}, []string{"value2"}, "0638e99b3445caec9d95c05e1a3fc1487b4ddec6a952ff337080360b0dcc078c"},
{[]string{"key1"}, []string{"value1"}, "fa9bc106ffd932d919bee935ceb6cf2b3dd72d8f"},
{[]string{"key1"}, []string{"value2"}, "e00e7dcfe54e9fafef5111e813a587f01ba9c3e8"},
// swap order with 2 keys
{[]string{"key1", "key2"}, []string{"value1", "value2"}, "8fd19b19e7bb3f2b3ee0574027d8a5a4cec370464ea2db2fbfa5c7d35bb0cff3"},
{[]string{"key2", "key1"}, []string{"value2", "value1"}, "8fd19b19e7bb3f2b3ee0574027d8a5a4cec370464ea2db2fbfa5c7d35bb0cff3"},
{[]string{"key1", "key2"}, []string{"value1", "value2"}, "eff12d1c703a1022ab509287c0f196130123d786"},
{[]string{"key2", "key1"}, []string{"value2", "value1"}, "eff12d1c703a1022ab509287c0f196130123d786"},
// swap order with 3 keys
{[]string{"key1", "key2", "key3"}, []string{"value1", "value2", "value3"}, "1dd674ec6782a0d586a903c9c63326a41cbe56b3bba33ed6ff5b527af6efb3dc"},
{[]string{"key1", "key3", "key2"}, []string{"value1", "value3", "value2"}, "1dd674ec6782a0d586a903c9c63326a41cbe56b3bba33ed6ff5b527af6efb3dc"},
{[]string{"key1", "key2", "key3"}, []string{"value1", "value2", "value3"}, "b2c62a277c08dbd2ad73ca53cd1d6bfdf5830d26"},
{[]string{"key1", "key3", "key2"}, []string{"value1", "value3", "value2"}, "b2c62a277c08dbd2ad73ca53cd1d6bfdf5830d26"},
}
for i, tc := range tests {
db := newSimpleMap()

View File

@@ -5,6 +5,7 @@ import (
"errors"
"fmt"
"github.com/tendermint/tendermint/crypto/tmhash"
cmn "github.com/tendermint/tendermint/libs/common"
)
@@ -66,8 +67,7 @@ func SimpleProofsFromMap(m map[string][]byte) (rootHash []byte, proofs map[strin
// Verify that the SimpleProof proves the root hash.
// Check sp.Index/sp.Total manually if needed
func (sp *SimpleProof) Verify(rootHash []byte, leaf []byte) error {
leafHash := leafHash(leaf)
func (sp *SimpleProof) Verify(rootHash []byte, leafHash []byte) error {
if sp.Total < 0 {
return errors.New("Proof total must be positive")
}
@@ -128,19 +128,19 @@ func computeHashFromAunts(index int, total int, leafHash []byte, innerHashes [][
if len(innerHashes) == 0 {
return nil
}
numLeft := getSplitPoint(total)
numLeft := (total + 1) / 2
if index < numLeft {
leftHash := computeHashFromAunts(index, numLeft, leafHash, innerHashes[:len(innerHashes)-1])
if leftHash == nil {
return nil
}
return innerHash(leftHash, innerHashes[len(innerHashes)-1])
return simpleHashFromTwoHashes(leftHash, innerHashes[len(innerHashes)-1])
}
rightHash := computeHashFromAunts(index-numLeft, total-numLeft, leafHash, innerHashes[:len(innerHashes)-1])
if rightHash == nil {
return nil
}
return innerHash(innerHashes[len(innerHashes)-1], rightHash)
return simpleHashFromTwoHashes(innerHashes[len(innerHashes)-1], rightHash)
}
}
@@ -182,13 +182,12 @@ func trailsFromByteSlices(items [][]byte) (trails []*SimpleProofNode, root *Simp
case 0:
return nil, nil
case 1:
trail := &SimpleProofNode{leafHash(items[0]), nil, nil, nil}
trail := &SimpleProofNode{tmhash.Sum(items[0]), nil, nil, nil}
return []*SimpleProofNode{trail}, trail
default:
k := getSplitPoint(len(items))
lefts, leftRoot := trailsFromByteSlices(items[:k])
rights, rightRoot := trailsFromByteSlices(items[k:])
rootHash := innerHash(leftRoot.Hash, rightRoot.Hash)
lefts, leftRoot := trailsFromByteSlices(items[:(len(items)+1)/2])
rights, rightRoot := trailsFromByteSlices(items[(len(items)+1)/2:])
rootHash := simpleHashFromTwoHashes(leftRoot.Hash, rightRoot.Hash)
root := &SimpleProofNode{rootHash, nil, nil, nil}
leftRoot.Parent = root
leftRoot.Right = rightRoot

View File

@@ -1,9 +1,23 @@
package merkle
import (
"math/bits"
"github.com/tendermint/tendermint/crypto/tmhash"
)
// simpleHashFromTwoHashes is the basic operation of the Merkle tree: Hash(left | right).
func simpleHashFromTwoHashes(left, right []byte) []byte {
var hasher = tmhash.New()
err := encodeByteSlice(hasher, left)
if err != nil {
panic(err)
}
err = encodeByteSlice(hasher, right)
if err != nil {
panic(err)
}
return hasher.Sum(nil)
}
// SimpleHashFromByteSlices computes a Merkle tree where the leaves are the byte slice,
// in the provided order.
func SimpleHashFromByteSlices(items [][]byte) []byte {
@@ -11,12 +25,11 @@ func SimpleHashFromByteSlices(items [][]byte) []byte {
case 0:
return nil
case 1:
return leafHash(items[0])
return tmhash.Sum(items[0])
default:
k := getSplitPoint(len(items))
left := SimpleHashFromByteSlices(items[:k])
right := SimpleHashFromByteSlices(items[k:])
return innerHash(left, right)
left := SimpleHashFromByteSlices(items[:(len(items)+1)/2])
right := SimpleHashFromByteSlices(items[(len(items)+1)/2:])
return simpleHashFromTwoHashes(left, right)
}
}
@@ -31,17 +44,3 @@ func SimpleHashFromMap(m map[string][]byte) []byte {
}
return sm.Hash()
}
// getSplitPoint returns the largest power of 2 less than length
func getSplitPoint(length int) int {
if length < 1 {
panic("Trying to split a tree with size < 1")
}
uLength := uint(length)
bitlen := bits.Len(uLength)
k := 1 << uint(bitlen-1)
if k == length {
k >>= 1
}
return k
}

View File

@@ -34,6 +34,7 @@ func TestSimpleProof(t *testing.T) {
// For each item, check the trail.
for i, item := range items {
itemHash := tmhash.Sum(item)
proof := proofs[i]
// Check total/index
@@ -42,53 +43,30 @@ func TestSimpleProof(t *testing.T) {
require.Equal(t, proof.Total, total, "Unmatched totals: %d vs %d", proof.Total, total)
// Verify success
err := proof.Verify(rootHash, item)
require.NoError(t, err, "Verification failed: %v.", err)
err := proof.Verify(rootHash, itemHash)
require.NoError(t, err, "Verificatior failed: %v.", err)
// Trail too long should make it fail
origAunts := proof.Aunts
proof.Aunts = append(proof.Aunts, cmn.RandBytes(32))
err = proof.Verify(rootHash, item)
err = proof.Verify(rootHash, itemHash)
require.Error(t, err, "Expected verification to fail for wrong trail length")
proof.Aunts = origAunts
// Trail too short should make it fail
proof.Aunts = proof.Aunts[0 : len(proof.Aunts)-1]
err = proof.Verify(rootHash, item)
err = proof.Verify(rootHash, itemHash)
require.Error(t, err, "Expected verification to fail for wrong trail length")
proof.Aunts = origAunts
// Mutating the itemHash should make it fail.
err = proof.Verify(rootHash, MutateByteSlice(item))
err = proof.Verify(rootHash, MutateByteSlice(itemHash))
require.Error(t, err, "Expected verification to fail for mutated leaf hash")
// Mutating the rootHash should make it fail.
err = proof.Verify(MutateByteSlice(rootHash), item)
err = proof.Verify(MutateByteSlice(rootHash), itemHash)
require.Error(t, err, "Expected verification to fail for mutated root hash")
}
}
func Test_getSplitPoint(t *testing.T) {
tests := []struct {
length int
want int
}{
{1, 0},
{2, 1},
{3, 2},
{4, 2},
{5, 4},
{10, 8},
{20, 16},
{100, 64},
{255, 128},
{256, 128},
{257, 256},
}
for _, tt := range tests {
got := getSplitPoint(tt.length)
require.Equal(t, tt.want, got, "getSplitPoint(%d) = %v, want %v", tt.length, got, tt.want)
}
}

View File

@@ -2,6 +2,7 @@ package multisig
import (
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/tmhash"
)
// PubKeyMultisigThreshold implements a K of N threshold multisig.
@@ -10,7 +11,7 @@ type PubKeyMultisigThreshold struct {
PubKeys []crypto.PubKey `json:"pubkeys"`
}
var _ crypto.PubKey = PubKeyMultisigThreshold{}
var _ crypto.PubKey = &PubKeyMultisigThreshold{}
// NewPubKeyMultisigThreshold returns a new PubKeyMultisigThreshold.
// Panics if len(pubkeys) < k or 0 >= k.
@@ -21,7 +22,7 @@ func NewPubKeyMultisigThreshold(k int, pubkeys []crypto.PubKey) crypto.PubKey {
if len(pubkeys) < k {
panic("threshold k of n multisignature: len(pubkeys) < k")
}
return PubKeyMultisigThreshold{uint(k), pubkeys}
return &PubKeyMultisigThreshold{uint(k), pubkeys}
}
// VerifyBytes expects sig to be an amino encoded version of a MultiSignature.
@@ -30,8 +31,8 @@ func NewPubKeyMultisigThreshold(k int, pubkeys []crypto.PubKey) crypto.PubKey {
// and all signatures are valid. (Not just k of the signatures)
// The multisig uses a bitarray, so multiple signatures for the same key is not
// a concern.
func (pk PubKeyMultisigThreshold) VerifyBytes(msg []byte, marshalledSig []byte) bool {
var sig Multisignature
func (pk *PubKeyMultisigThreshold) VerifyBytes(msg []byte, marshalledSig []byte) bool {
var sig *Multisignature
err := cdc.UnmarshalBinaryBare(marshalledSig, &sig)
if err != nil {
return false
@@ -63,19 +64,19 @@ func (pk PubKeyMultisigThreshold) VerifyBytes(msg []byte, marshalledSig []byte)
}
// Bytes returns the amino encoded version of the PubKeyMultisigThreshold
func (pk PubKeyMultisigThreshold) Bytes() []byte {
func (pk *PubKeyMultisigThreshold) Bytes() []byte {
return cdc.MustMarshalBinaryBare(pk)
}
// Address returns tmhash(PubKeyMultisigThreshold.Bytes())
func (pk PubKeyMultisigThreshold) Address() crypto.Address {
return crypto.AddressHash(pk.Bytes())
func (pk *PubKeyMultisigThreshold) Address() crypto.Address {
return crypto.Address(tmhash.Sum(pk.Bytes()))
}
// Equals returns true iff pk and other both have the same number of keys, and
// all constituent keys are the same, and in the same order.
func (pk PubKeyMultisigThreshold) Equals(other crypto.PubKey) bool {
otherKey, sameType := other.(PubKeyMultisigThreshold)
func (pk *PubKeyMultisigThreshold) Equals(other crypto.PubKey) bool {
otherKey, sameType := other.(*PubKeyMultisigThreshold)
if !sameType {
return false
}

View File

@@ -82,7 +82,7 @@ func TestMultiSigPubKeyEquality(t *testing.T) {
msg := []byte{1, 2, 3, 4}
pubkeys, _ := generatePubKeysAndSignatures(5, msg)
multisigKey := NewPubKeyMultisigThreshold(2, pubkeys)
var unmarshalledMultisig PubKeyMultisigThreshold
var unmarshalledMultisig *PubKeyMultisigThreshold
cdc.MustUnmarshalBinaryBare(multisigKey.Bytes(), &unmarshalledMultisig)
require.True(t, multisigKey.Equals(unmarshalledMultisig))
@@ -95,29 +95,6 @@ func TestMultiSigPubKeyEquality(t *testing.T) {
require.False(t, multisigKey.Equals(multisigKey2))
}
func TestAddress(t *testing.T) {
msg := []byte{1, 2, 3, 4}
pubkeys, _ := generatePubKeysAndSignatures(5, msg)
multisigKey := NewPubKeyMultisigThreshold(2, pubkeys)
require.Len(t, multisigKey.Address().Bytes(), 20)
}
func TestPubKeyMultisigThresholdAminoToIface(t *testing.T) {
msg := []byte{1, 2, 3, 4}
pubkeys, _ := generatePubKeysAndSignatures(5, msg)
multisigKey := NewPubKeyMultisigThreshold(2, pubkeys)
ab, err := cdc.MarshalBinaryLengthPrefixed(multisigKey)
require.NoError(t, err)
// like other crypto.Pubkey implementations (e.g. ed25519.PubKeyEd25519),
// PubKeyMultisigThreshold should be deserializable into a crypto.PubKey:
var pubKey crypto.PubKey
err = cdc.UnmarshalBinaryLengthPrefixed(ab, &pubKey)
require.NoError(t, err)
require.Equal(t, multisigKey, pubKey)
}
func generatePubKeysAndSignatures(n int, msg []byte) (pubkeys []crypto.PubKey, signatures [][]byte) {
pubkeys = make([]crypto.PubKey, n)
signatures = make([][]byte, n)

View File

@@ -20,7 +20,7 @@ func init() {
cdc.RegisterConcrete(PubKeyMultisigThreshold{},
PubKeyMultisigThresholdAminoRoute, nil)
cdc.RegisterConcrete(ed25519.PubKeyEd25519{},
ed25519.PubKeyAminoName, nil)
ed25519.PubKeyAminoRoute, nil)
cdc.RegisterConcrete(secp256k1.PubKeySecp256k1{},
secp256k1.PubKeyAminoName, nil)
secp256k1.PubKeyAminoRoute, nil)
}

View File

@@ -9,15 +9,15 @@ import (
secp256k1 "github.com/tendermint/btcd/btcec"
amino "github.com/tendermint/go-amino"
"golang.org/x/crypto/ripemd160"
"golang.org/x/crypto/ripemd160" // forked to github.com/tendermint/crypto
"github.com/tendermint/tendermint/crypto"
)
//-------------------------------------
const (
PrivKeyAminoName = "tendermint/PrivKeySecp256k1"
PubKeyAminoName = "tendermint/PubKeySecp256k1"
PrivKeyAminoRoute = "tendermint/PrivKeySecp256k1"
PubKeyAminoRoute = "tendermint/PubKeySecp256k1"
)
var cdc = amino.NewCodec()
@@ -25,11 +25,11 @@ var cdc = amino.NewCodec()
func init() {
cdc.RegisterInterface((*crypto.PubKey)(nil), nil)
cdc.RegisterConcrete(PubKeySecp256k1{},
PubKeyAminoName, nil)
PubKeyAminoRoute, nil)
cdc.RegisterInterface((*crypto.PrivKey)(nil), nil)
cdc.RegisterConcrete(PrivKeySecp256k1{},
PrivKeyAminoName, nil)
PrivKeyAminoRoute, nil)
}
//-------------------------------------

View File

@@ -6,27 +6,10 @@ import (
)
const (
Size = sha256.Size
Size = 20
BlockSize = sha256.BlockSize
)
// New returns a new hash.Hash.
func New() hash.Hash {
return sha256.New()
}
// Sum returns the SHA256 of the bz.
func Sum(bz []byte) []byte {
h := sha256.Sum256(bz)
return h[:]
}
//-------------------------------------------------------------
const (
TruncatedSize = 20
)
type sha256trunc struct {
sha256 hash.Hash
}
@@ -36,7 +19,7 @@ func (h sha256trunc) Write(p []byte) (n int, err error) {
}
func (h sha256trunc) Sum(b []byte) []byte {
shasum := h.sha256.Sum(b)
return shasum[:TruncatedSize]
return shasum[:Size]
}
func (h sha256trunc) Reset() {
@@ -44,22 +27,22 @@ func (h sha256trunc) Reset() {
}
func (h sha256trunc) Size() int {
return TruncatedSize
return Size
}
func (h sha256trunc) BlockSize() int {
return h.sha256.BlockSize()
}
// NewTruncated returns a new hash.Hash.
func NewTruncated() hash.Hash {
// New returns a new hash.Hash.
func New() hash.Hash {
return sha256trunc{
sha256: sha256.New(),
}
}
// SumTruncated returns the first 20 bytes of SHA256 of the bz.
func SumTruncated(bz []byte) []byte {
// Sum returns the first 20 bytes of SHA256 of the bz.
func Sum(bz []byte) []byte {
hash := sha256.Sum256(bz)
return hash[:TruncatedSize]
return hash[:Size]
}

View File

@@ -14,29 +14,10 @@ func TestHash(t *testing.T) {
hasher.Write(testVector)
bz := hasher.Sum(nil)
bz2 := tmhash.Sum(testVector)
hasher = sha256.New()
hasher.Write(testVector)
bz3 := hasher.Sum(nil)
bz2 := hasher.Sum(nil)
bz2 = bz2[:20]
assert.Equal(t, bz, bz2)
assert.Equal(t, bz, bz3)
}
func TestHashTruncated(t *testing.T) {
testVector := []byte("abc")
hasher := tmhash.NewTruncated()
hasher.Write(testVector)
bz := hasher.Sum(nil)
bz2 := tmhash.SumTruncated(testVector)
hasher = sha256.New()
hasher.Write(testVector)
bz3 := hasher.Sum(nil)
bz3 = bz3[:tmhash.TruncatedSize]
assert.Equal(t, bz, bz2)
assert.Equal(t, bz, bz3)
}

View File

@@ -8,7 +8,7 @@ import (
"errors"
"fmt"
"golang.org/x/crypto/chacha20poly1305"
"golang.org/x/crypto/chacha20poly1305" // forked to github.com/tendermint/crypto
)
// Implements crypto.AEAD

View File

@@ -4,7 +4,7 @@ import (
"errors"
"fmt"
"golang.org/x/crypto/nacl/secretbox"
"golang.org/x/crypto/nacl/secretbox" // forked to github.com/tendermint/crypto
"github.com/tendermint/tendermint/crypto"
cmn "github.com/tendermint/tendermint/libs/common"

View File

@@ -6,7 +6,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/bcrypt"
"golang.org/x/crypto/bcrypt" // forked to github.com/tendermint/crypto
"github.com/tendermint/tendermint/crypto"
)
@@ -30,7 +30,9 @@ func TestSimpleWithKDF(t *testing.T) {
plaintext := []byte("sometext")
secretPass := []byte("somesecret")
secret, err := bcrypt.GenerateFromPassword(secretPass, 12)
salt := []byte("somesaltsomesalt") // len 16
// NOTE: we use a fork of x/crypto so we can inject our own randomness for salt
secret, err := bcrypt.GenerateFromPassword(salt, secretPass, 12)
if err != nil {
t.Error(err)
}

View File

@@ -8,21 +8,8 @@ module.exports = {
lineNumbers: true
},
themeConfig: {
repo: "tendermint/tendermint",
editLinks: true,
docsDir: "docs",
docsBranch: "develop",
editLinkText: 'Edit this page on Github',
lastUpdated: true,
algolia: {
apiKey: '59f0e2deb984aa9cdf2b3a5fd24ac501',
indexName: 'tendermint',
debug: false
},
nav: [
{ text: "Back to Tendermint", link: "https://tendermint.com" },
{ text: "RPC", link: "../rpc/" }
],
lastUpdated: "Last Updated",
nav: [{ text: "Back to Tendermint", link: "https://tendermint.com" }],
sidebar: [
{
title: "Introduction",
@@ -34,20 +21,6 @@ module.exports = {
"/introduction/what-is-tendermint"
]
},
{
title: "Apps",
collapsable: false,
children: [
"/app-dev/getting-started",
"/app-dev/abci-cli",
"/app-dev/app-architecture",
"/app-dev/app-development",
"/app-dev/subscribing-to-events-via-websocket",
"/app-dev/indexing-transactions",
"/app-dev/abci-spec",
"/app-dev/ecosystem"
]
},
{
title: "Tendermint Core",
collapsable: false,
@@ -66,6 +39,15 @@ module.exports = {
"/tendermint-core/validators"
]
},
{
title: "Tools",
collapsable: false,
children: [
"/tools/",
"/tools/benchmarking",
"/tools/monitoring"
]
},
{
title: "Networks",
collapsable: false,
@@ -76,13 +58,18 @@ module.exports = {
]
},
{
title: "Tools",
title: "Apps",
collapsable: false,
children: [
"/tools/",
"/tools/benchmarking",
"/tools/monitoring"
]
children: [
"/app-dev/getting-started",
"/app-dev/abci-cli",
"/app-dev/app-architecture",
"/app-dev/app-development",
"/app-dev/subscribing-to-events-via-websocket",
"/app-dev/indexing-transactions",
"/app-dev/abci-spec",
"/app-dev/ecosystem"
]
},
{
title: "Tendermint Spec",

View File

@@ -12,10 +12,10 @@ respectively.
## How It Works
There is a CircleCI job listening for changes in the `/docs` directory, on both
There is a Jenkins job listening for changes in the `/docs` directory, on both
the `master` and `develop` branches. Any updates to files in this directory
on those branches will automatically trigger a website deployment. Under the hood,
the private website repository has a `make build-docs` target consumed by a CircleCI job in that repo.
a private website repository has make targets consumed by a standard Jenkins task.
## README
@@ -93,10 +93,6 @@ python -m SimpleHTTPServer 8080
then navigate to localhost:8080 in your browser.
## Search
We are using [Algolia](https://www.algolia.com) to power full-text search. This uses a public API search-only key in the `config.js` as well as a [tendermint.json](https://github.com/algolia/docsearch-configs/blob/master/configs/tendermint.json) configuration file that we can update with PRs.
## Consistency
Because the build processes are identical (as is the information contained herein), this file should be kept in sync as

View File

@@ -21,7 +21,7 @@ For more details on using Tendermint, see the respective documentation for
## Contribute
To contribute to the documentation, see [this file](https://github.com/tendermint/tendermint/blob/master/docs/DOCS_README.md) for details of the build process and
To contribute to the documentation, see [this file](./DOCS_README.md) for details of the build process and
considerations when making changes.
## Version

View File

@@ -11,10 +11,13 @@ Make sure you [have Go installed](https://golang.org/doc/install).
Next, install the `abci-cli` tool and example applications:
```
mkdir -p $GOPATH/src/github.com/tendermint
cd $GOPATH/src/github.com/tendermint
git clone https://github.com/tendermint/tendermint.git
cd tendermint
go get github.com/tendermint/tendermint
```
to get vendored dependencies:
```
cd $GOPATH/src/github.com/tendermint/tendermint
make get_tools
make get_vendor_deps
make install_abci

View File

@@ -5,7 +5,7 @@ Tendermint blockchain application.
The following diagram provides a superb example:
![](../imgs/cosmos-tendermint-stack-4k.jpg)
<https://drive.google.com/open?id=1yR2XpRi9YCY9H9uMfcw8-RMJpvDyvjz9>
The end-user application here is the Cosmos Voyager, at the bottom left.
Voyager communicates with a REST API exposed by a local Light-Client
@@ -45,6 +45,6 @@ Tendermint.
See the following for more extensive documentation:
- [Interchain Standard for the Light-Client REST API](https://github.com/cosmos/cosmos-sdk/pull/1028)
- [Tendermint RPC Docs](https://tendermint.com/rpc/)
- [Tendermint RPC Docs](https://tendermint.github.io/slate/)
- [Tendermint in Production](../tendermint-core/running-in-production.md)
- [ABCI spec](./abci-spec.md)

View File

@@ -47,6 +47,90 @@ The mempool and consensus logic act as clients, and each maintains an
open ABCI connection with the application, which hosts an ABCI server.
Shown are the request and response types sent on each connection.
## Message Protocol
The message protocol consists of pairs of requests and responses. Some
messages have no fields, while others may include byte-arrays, strings,
or integers. See the `message Request` and `message Response`
definitions in [the protobuf definition
file](https://github.com/tendermint/tendermint/blob/develop/abci/types/types.proto),
and the [protobuf
documentation](https://developers.google.com/protocol-buffers/docs/overview)
for more details.
For each request, a server should respond with the corresponding
response, where order of requests is preserved in the order of
responses.
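To make the pairing rule concrete, here is a minimal, illustrative Go sketch: every request is answered by exactly one response of the matching type, and responses are written back in the order the requests arrived. The `Request`/`Response` types below are simplified stand-ins, not the generated protobuf types from `abci/types`.

```go
package main

import "fmt"

type Request struct{ Type string }
type Response struct{ Type string }

// serve answers requests strictly in arrival order, one response per request.
func serve(reqs <-chan Request, resps chan<- Response) {
	for req := range reqs {
		resps <- Response{Type: req.Type} // match the response type to the request type
	}
	close(resps)
}

func main() {
	reqs := make(chan Request, 3)
	resps := make(chan Response, 3)
	for _, t := range []string{"Echo", "Info", "Flush"} {
		reqs <- Request{Type: t}
	}
	close(reqs)
	serve(reqs, resps)
	for r := range resps {
		fmt.Println("response:", r.Type) // printed in the same order as the requests
	}
}
```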
## Server
To use ABCI in your programming language of choice, there must be a ABCI
server in that language. Tendermint supports two kinds of implementation
of the server:
- Asynchronous, raw socket server (Tendermint Socket Protocol, also
known as TSP or Teaspoon)
- GRPC
Both can be tested using the `abci-cli` by setting the `--abci` flag
appropriately (ie. to `socket` or `grpc`).
See examples, in various stages of maintenance, in
[Go](https://github.com/tendermint/tendermint/tree/develop/abci/server),
[JavaScript](https://github.com/tendermint/js-abci),
[Python](https://github.com/tendermint/tendermint/tree/develop/abci/example/python3/abci),
[C++](https://github.com/mdyring/cpp-tmsp), and
[Java](https://github.com/jTendermint/jabci).
### GRPC
If GRPC is available in your language, this is the easiest approach,
though it will have significant performance overhead.
To get started with GRPC, copy in the [protobuf
file](https://github.com/tendermint/tendermint/blob/develop/abci/types/types.proto)
and compile it using the GRPC plugin for your language. For instance,
for golang, the command is `protoc --go_out=plugins=grpc:. types.proto`.
See the [grpc documentation for more details](http://www.grpc.io/docs/).
`protoc` will autogenerate all the necessary code for ABCI client and
server in your language, including whatever interface your application
must satisfy to be used by the ABCI server for handling requests.
### TSP
If GRPC is not available in your language, or you require higher
performance, or otherwise enjoy programming, you may implement your own
ABCI server using the Tendermint Socket Protocol, known affectionately
as Teaspoon. The first step is still to auto-generate the relevant data
types and codec in your language using `protoc`. Messages coming over
the socket are proto3 encoded, but additionally length-prefixed to
facilitate use as a streaming protocol. proto3 doesn't have an
official length-prefix standard, so we use our own. The first byte in
the prefix represents the length of the Big Endian encoded length. The
remaining bytes in the prefix are the Big Endian encoded length.
For example, if the proto3 encoded ABCI message is 0xDEADBEEF (4
bytes), the length-prefixed message is 0x0104DEADBEEF. If the proto3
encoded ABCI message is 65535 bytes long, the length-prefixed message
would be like 0x02FFFF....
Note this prefixing does not apply for grpc.
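As a rough sketch of the prefix scheme described above (illustrative only, not the production codec), the following Go snippet builds the prefix for a message: one byte giving the length of the big-endian encoded length, followed by that big-endian length.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// lengthPrefix returns the prefix for a message of len(msg) bytes:
// [len(bigEndianLength)] ++ bigEndianLength.
func lengthPrefix(msg []byte) []byte {
	var be [8]byte
	binary.BigEndian.PutUint64(be[:], uint64(len(msg)))
	// drop leading zero bytes so the length itself is minimally encoded
	i := 0
	for i < len(be)-1 && be[i] == 0 {
		i++
	}
	enc := be[i:]
	return append([]byte{byte(len(enc))}, enc...)
}

func main() {
	msg := []byte{0xDE, 0xAD, 0xBE, 0xEF}
	fmt.Printf("%X\n", append(lengthPrefix(msg), msg...)) // prints 0104DEADBEEF
}
```

Running it on the 4-byte example prints `0104DEADBEEF`, matching the description above; a 65535-byte message would get the prefix `02FFFF`.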
An ABCI server must also be able to support multiple connections, as
Tendermint uses three connections.
## Client
There are currently two use-cases for an ABCI client. One is a testing
tool, as in the `abci-cli`, which allows ABCI requests to be sent via
command line. The other is a consensus engine, such as Tendermint Core,
which makes requests to the application every time a new transaction is
received or a block is committed.
It is unlikely that you will need to implement a client. For details of
our client, see
[here](https://github.com/tendermint/tendermint/tree/develop/abci/client).
Most of the examples below are from [kvstore
application](https://github.com/tendermint/tendermint/blob/develop/abci/example/kvstore/kvstore.go),
which is a part of the abci repo. [persistent_kvstore

View File

@@ -63,13 +63,6 @@
"author": "Zach Balder",
"description": "Public service reporting and tracking"
},
{
"name": "ParadigmCore",
"url": "https://github.com/ParadigmFoundation/ParadigmCore",
"language": "TypeScript",
"author": "Paradigm Labs",
"description": "Reference implementation of the Paradigm Protocol, and OrderStream network client."
},
{
"name": "Passchain",
"url": "https://github.com/trusch/passchain",
@@ -129,7 +122,7 @@
],
"abciServers": [
{
"name": "go-abci",
"name": "abci",
"url": "https://github.com/tendermint/tendermint/tree/master/abci",
"language": "Go",
"author": "Tendermint"
@@ -140,12 +133,6 @@
"language": "Javascript",
"author": "Tendermint"
},
{
"name": "rust-tsp",
"url": "https://github.com/tendermint/rust-tsp",
"language": "Rust",
"author": "Tendermint"
},
{
"name": "cpp-tmsp",
"url": "https://github.com/mdyring/cpp-tmsp",
@@ -177,7 +164,7 @@
"author": "Dave Bryson"
},
{
"name": "tm-abci (fork of py-abci with async IO)",
"name": "tm-abci",
"url": "https://github.com/SoftblocksCo/tm-abci",
"language": "Python",
"author": "Softblocks"
@@ -189,12 +176,34 @@
"author": "Dennis McKinnon"
}
],
"aminoLibraries": [
"deploymentTools": [
{
"name": "JS-Amino",
"url": "https://github.com/TanNgocDo/Js-Amino",
"language": "Javascript",
"author": "TanNgocDo"
"name": "mintnet-kubernetes",
"url": "https://github.com/tendermint/tools",
"technology": "Docker and Kubernetes",
"author": "Tendermint",
"description": "Deploy a Tendermint test network using Google's kubernetes"
},
{
"name": "terraforce",
"url": "https://github.com/tendermint/tools",
"technology": "Terraform",
"author": "Tendermint",
"description": "Terraform + our custom terraforce tool; deploy a production Tendermint network with load balancing over multiple AWS availability zones"
},
{
"name": "ansible-tendermint",
"url": "https://github.com/tendermint/tools",
"technology": "Ansible",
"author": "Tendermint",
"description": "Ansible playbooks + Tendermint"
},
{
"name": "brooklyn-tendermint",
"url": "https://github.com/cloudsoft/brooklyn-tendermint",
"technology": "Clocker for Apache Brooklyn ",
"author": "Cloudsoft",
"description": "Deploy a tendermint test network in docker containers "
}
]
}

View File

@@ -1,9 +1,21 @@
# Ecosystem
The growing list of applications built using various pieces of the
Tendermint stack can be found at the [ecosystem page](https://tendermint.com/ecosystem).
Tendermint stack can be found at:
We thank the community for their contributions and welcome the
- https://tendermint.com/ecosystem
We thank the community for their contributions thus far and welcome the
addition of new projects. A pull request can be submitted to [this
file](https://github.com/tendermint/tendermint/blob/master/docs/app-dev/ecosystem.json)
to include your project.
## Other Tools
See [deploy testnets](./deploy-testnets) for information about all
the tools built by Tendermint. We have Kubernetes, Ansible, and
Terraform integrations.
For upgrading from older to newer versions of tendermint and to migrate
your chain data, see [tm-migrator](https://github.com/hxzqlh/tm-tools)
written by @hxzqlh.

View File

@@ -252,12 +252,14 @@ we'll run a Javascript version of the `counter`. To run it, you'll need
to [install node](https://nodejs.org/en/download/).
You'll also need to fetch the relevant repository, from
[here](https://github.com/tendermint/js-abci), then install it:
[here](https://github.com/tendermint/js-abci) then install it. As go
devs, we keep all our code under the `$GOPATH`, so run:
```
git clone https://github.com/tendermint/js-abci.git
cd js-abci
npm install abci
go get github.com/tendermint/js-abci &> /dev/null
cd $GOPATH/src/github.com/tendermint/js-abci/example
npm install
cd ..
```
Kill the previous `counter` and `tendermint` processes. Now run the app:
@@ -274,16 +276,13 @@ tendermint node
```
Once again, you should see blocks streaming by - but now, our
application is written in Javascript! Try sending some transactions, and
application is written in javascript! Try sending some transactions, and
like before - the results should be the same:
```
# ok
curl localhost:26657/broadcast_tx_commit?tx=0x00
# invalid nonce
curl localhost:26657/broadcast_tx_commit?tx=0x05
# ok
curl localhost:26657/broadcast_tx_commit?tx=0x01
curl localhost:26657/broadcast_tx_commit?tx=0x00 # ok
curl localhost:26657/broadcast_tx_commit?tx=0x05 # invalid nonce
curl localhost:26657/broadcast_tx_commit?tx=0x01 # ok
```
Neat, eh?

View File

@@ -78,7 +78,7 @@ endpoint:
curl "localhost:26657/tx_search?query=\"account.name='igor'\"&prove=true"
```
Check out [API docs](https://tendermint.com/rpc/#txsearch)
Check out [API docs](https://tendermint.github.io/slate/?shell#txsearch)
for more information on query syntax and other options.
## Subscribing to transactions
@@ -97,5 +97,5 @@ by providing a query to `/subscribe` RPC endpoint.
}
```
Check out [API docs](https://tendermint.com/rpc/#subscribe) for
Check out [API docs](https://tendermint.github.io/slate/#subscribe) for
more information on query syntax and other options.

View File

@@ -54,7 +54,7 @@ Response:
"value": "ww0z4WaZ0Xg+YI10w43wTWbBmM3dpVza4mmSQYsd0ck="
},
"voting_power": "10",
"proposer_priority": "0"
"accum": "0"
}
]
}

View File

@@ -52,13 +52,13 @@ On top of this interface, we will need to implement a stdout logger, which will
Many people say that they like the current output, so let's stick with it.
```
NOTE[2017-04-25|14:45:08] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0
NOTE[04-25|14:45:08] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0
```
A couple of minor changes:
```
I[2017-04-25|14:45:08.322] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0
I[04-25|14:45:08.322] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0
```
Notice the level is encoded using only one char plus milliseconds.
@@ -155,14 +155,14 @@ Important keyvals should go first. Example:
```
correct
I[2017-04-25|14:45:08.322] ABCI Replay Blocks module=consensus instance=1 appHeight=0 storeHeight=0 stateHeight=0
I[04-25|14:45:08.322] ABCI Replay Blocks module=consensus instance=1 appHeight=0 storeHeight=0 stateHeight=0
```
not
```
wrong
I[2017-04-25|14:45:08.322] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0 instance=1
I[04-25|14:45:08.322] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0 instance=1
```
To achieve that, in most cases you'll need to add the `instance` field to the logger upon creation, not when you log a particular message.
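As a quick illustration (a sketch assuming the `libs/log` package and its `NewTMLogger`/`NewSyncWriter` constructors), the field is attached once when the logger is created:

```go
package main

import (
	"os"

	"github.com/tendermint/tendermint/libs/log"
)

func main() {
	logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))

	// attach module and instance once, at creation time ...
	consLogger := logger.With("module", "consensus", "instance", 1)

	// ... so the important keyvals come first on every subsequent call
	consLogger.Info("ABCI Replay Blocks", "appHeight", 0, "storeHeight", 0, "stateHeight", 0)
}
```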

View File

@@ -5,17 +5,14 @@ implementations:
- FilePV uses an unencrypted private key in a "priv_validator.json" file - no
configuration required (just `tendermint init`).
- TCPVal and IPCVal use TCP and Unix sockets respectively to send signing requests
to another process - the user is responsible for starting that process themselves.
- SocketPV uses a socket to send signing requests to another process - user is
responsible for starting that process themselves.
Both TCPVal and IPCVal addresses can be provided via flags at the command line
or in the configuration file; TCPVal addresses must be of the form
`tcp://<ip_address>:<port>` and IPCVal addresses `unix:///path/to/file.sock` -
doing so will cause Tendermint to ignore any private validator files.
TCPVal will listen on the given address for incoming connections from an external
private validator process. It will halt any operation until at least one external
process successfully connected.
The SocketPV address can be provided via flags at the command line - doing so
will cause Tendermint to ignore any "priv_validator.json" file and to listen on
the given address for incoming connections from an external priv_validator
process. It will halt any operation until at least one external process
successfully connected.
The external priv_validator process will dial the address to connect to
Tendermint, and then Tendermint will send requests on the ensuing connection to
@@ -24,9 +21,6 @@ but the Tendermint process makes all requests. In a later stage we're going to
support multiple validators for fault tolerance. To prevent double signing they
need to be synced, which is deferred to an external solution (see #1185).
Conversely, IPCVal will make an outbound connection to an existing socket opened
by the external validator process.
In addition, Tendermint will provide implementations that can be run in that
external process. These include:

View File

@@ -7,7 +7,6 @@
28-08-2018: Third version after Ethan's comments
30-08-2018: AminoOverheadForBlock => MaxAminoOverheadForBlock
31-08-2018: Bounding evidence and chain ID
13-01-2019: Add section on MaxBytes vs MaxDataBytes
## Context
We should just remove MaxTxs altogether and stick with MaxBytes, and have a
But we can't just reap BlockSize.MaxBytes, since MaxBytes is for the entire block,
not for the txs inside the block. There's extra amino overhead + the actual
headers on top of the actual transactions + evidence + last commit.
We could also consider using a MaxDataBytes instead of or in addition to MaxBytes.
## MaxBytes vs MaxDataBytes
The [PR #3045](https://github.com/tendermint/tendermint/pull/3045) suggested
additional clarity/justification was necessary here, with respect to the use
of MaxDataBytes in addition to, or instead of, MaxBytes.
MaxBytes provides a clear limit on the total size of a block that requires no
additional calculation if you want to use it to bound resource usage, and there
have been considerable discussions about optimizing Tendermint around 1MB blocks.
Regardless, we need some maximum on the size of a block so we can avoid
unmarshalling blocks that are too big during the consensus, and it seems more
straightforward to provide a single fixed number for this rather than a
computation of "MaxDataBytes + everything else you need to make room for
(signatures, evidence, header)". MaxBytes provides a simple bound so we can
always say "blocks are less than X MB".
Having both MaxBytes and MaxDataBytes feels like unnecessary complexity. It's
not particularly surprising for MaxBytes to imply the maximum size of the
entire block (not just txs); one just has to know that a block includes header,
txs, evidence, votes. For more fine-grained control over the txs included in the
block, there is the MaxGas. In practice, the MaxGas may be expected to do most of
the tx throttling, and the MaxBytes to just serve as an upper bound on the total
size. Applications can use MaxGas as a MaxDataBytes by just taking the gas for
every tx to be its size in bytes.
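As a sketch of that last point (assuming the Go ABCI interface of this era, where `CheckTx` receives the raw transaction bytes), an application could report each tx's size as its gas:

```go
package main

import (
	"github.com/tendermint/tendermint/abci/types"
)

// GasMeteredApp reports each transaction's size in bytes as its gas cost,
// so BlockSize.MaxGas effectively behaves like a MaxDataBytes limit.
type GasMeteredApp struct {
	types.BaseApplication
}

func (app *GasMeteredApp) CheckTx(tx []byte) types.ResponseCheckTx {
	return types.ResponseCheckTx{Code: 0, GasWanted: int64(len(tx))}
}
```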
## Proposed solution
@@ -88,7 +61,7 @@ MaxXXX stayed the same.
## Status
Accepted.
Proposed.
## Consequences

View File

@@ -126,312 +126,6 @@ func TestConsensusXXX(t *testing.T) {
}
```
## Consensus Executor
## Consensus Core
```go
type Event interface{}
type EventNewHeight struct {
Height int64
ValidatorId int
}
type EventNewRound HeightAndRound
type EventProposal struct {
Height int64
Round int
Timestamp Time
BlockID BlockID
POLRound int
Sender int
}
type Majority23PrevotesBlock struct {
Height int64
Round int
BlockID BlockID
}
type Majority23PrecommitBlock struct {
Height int64
Round int
BlockID BlockID
}
type HeightAndRound struct {
Height int64
Round int
}
type Majority23PrevotesAny HeightAndRound
type Majority23PrecommitAny HeightAndRound
type TimeoutPropose HeightAndRound
type TimeoutPrevotes HeightAndRound
type TimeoutPrecommit HeightAndRound
type Message interface{}
type MessageProposal struct {
Height int64
Round int
BlockID BlockID
POLRound int
}
type VoteType int
const (
VoteTypeUnknown VoteType = iota
Prevote
Precommit
)
type MessageVote struct {
Height int64
Round int
BlockID BlockID
Type VoteType
}
type MessageDecision struct {
Height int64
Round int
BlockID BlockID
}
type TriggerTimeout struct {
Height int64
Round int
Duration Duration
}
type RoundStep int
const (
RoundStepUnknown RoundStep = iota
RoundStepPropose
RoundStepPrevote
RoundStepPrecommit
RoundStepCommit
)
type State struct {
Height int64
Round int
Step RoundStep
LockedValue BlockID
LockedRound int
ValidValue BlockID
ValidRound int
ValidatorId int
ValidatorSetSize int
}
func proposer(height int64, round int) int {}
func getValue() BlockID {}
func Consensus(event Event, state State) (State, Message, TriggerTimeout) {
msg = nil
timeout = nil
switch event := event.(type) {
case EventNewHeight:
if event.Height > state.Height {
state.Height = event.Height
state.Round = -1
state.Step = RoundStepPropose
state.LockedValue = nil
state.LockedRound = -1
state.ValidValue = nil
state.ValidRound = -1
state.ValidatorId = event.ValidatorId
}
return state, msg, timeout
case EventNewRound:
if event.Height == state.Height and event.Round > state.Round {
state.Round = event.Round
state.Step = RoundStepPropose
if proposer(state.Height, state.Round) == state.ValidatorId {
proposal = state.ValidValue
if proposal == nil {
proposal = getValue()
}
msg = MessageProposal { state.Height, state.Round, proposal, state.ValidRound }
}
timeout = TriggerTimeout { state.Height, state.Round, timeoutPropose(state.Round) }
}
return state, msg, timeout
case EventProposal:
if event.Height == state.Height and event.Round == state.Round and
event.Sender == proposer(state.Height, state.Round) and state.Step == RoundStepPropose {
if event.POLRound >= state.LockedRound or event.BlockID == state.BlockID or state.LockedRound == -1 {
msg = MessageVote { state.Height, state.Round, event.BlockID, Prevote }
}
state.Step = RoundStepPrevote
}
return state, msg, timeout
case TimeoutPropose:
if event.Height == state.Height and event.Round == state.Round and state.Step == RoundStepPropose {
msg = MessageVote { state.Height, state.Round, nil, Prevote }
state.Step = RoundStepPrevote
}
return state, msg, timeout
case Majority23PrevotesBlock:
if event.Height == state.Height and event.Round == state.Round and state.Step >= RoundStepPrevote and event.Round > state.ValidRound {
state.ValidRound = event.Round
state.ValidValue = event.BlockID
if state.Step == RoundStepPrevote {
state.LockedRound = event.Round
state.LockedValue = event.BlockID
msg = MessageVote { state.Height, state.Round, event.BlockID, Precommit }
state.Step = RoundStepPrecommit
}
}
return state, msg, timeout
case Majority23PrevotesAny:
if event.Height == state.Height and event.Round == state.Round and state.Step == RoundStepPrevote {
timeout = TriggerTimeout { state.Height, state.Round, timeoutPrevote(state.Round) }
}
return state, msg, timeout
case TimeoutPrevote:
if event.Height == state.Height and event.Round == state.Round and state.Step == RoundStepPrevote {
msg = MessageVote { state.Height, state.Round, nil, Precommit }
state.Step = RoundStepPrecommit
}
return state, msg, timeout
case Majority23PrecommitBlock:
if event.Height == state.Height {
state.Step = RoundStepCommit
state.LockedValue = event.BlockID
}
return state, msg, timeout
case Majority23PrecommitAny:
if event.Height == state.Height and event.Round == state.Round {
timeout = TriggerTimeout { state.Height, state.Round, timeoutPrecommit(state.Round) }
}
return state, msg, timeout
case TimeoutPrecommit:
if event.Height == state.Height and event.Round == state.Round {
state.Round = state.Round + 1
}
return state, msg, timeout
}
}
func ConsensusExecutor() {
proposal = nil
votes = HeightVoteSet { Height: 1 }
state = State {
Height: 1
Round: 0
Step: RoundStepPropose
LockedValue: nil
LockedRound: -1
ValidValue: nil
ValidRound: -1
}
event = EventNewHeight {1, id}
state, msg, timeout = Consensus(event, state)
event = EventNewRound {state.Height, 0}
state, msg, timeout = Consensus(event, state)
if msg != nil {
send msg
}
if timeout != nil {
trigger timeout
}
for {
select {
case message := <- msgCh:
switch msg := message.(type) {
case MessageProposal:
case MessageVote:
if msg.Height == state.Height {
newVote = votes.AddVote(msg)
if newVote {
switch msg.Type {
case Prevote:
prevotes = votes.Prevotes(msg.Round)
if prevotes.WeakCertificate() and msg.Round > state.Round {
event = EventNewRound { msg.Height, msg.Round }
state, msg, timeout = Consensus(event, state)
state = handleStateChange(state, msg, timeout)
}
if blockID, ok = prevotes.TwoThirdsMajority(); ok and blockID != nil {
if msg.Round == state.Round and hasBlock(blockID) {
event = Majority23PrevotesBlock { msg.Height, msg.Round, blockID }
state, msg, timeout = Consensus(event, state)
state = handleStateChange(state, msg, timeout)
}
if proposal != nil and proposal.POLRound == msg.Round and hasBlock(blockID) {
event = EventProposal {
Height: state.Height
Round: state.Round
BlockID: blockID
POLRound: proposal.POLRound
Sender: message.Sender
}
state, msg, timeout = Consensus(event, state)
state = handleStateChange(state, msg, timeout)
}
}
if prevotes.HasTwoThirdsAny() and msg.Round == state.Round {
event = Majority23PrevotesAny { msg.Height, msg.Round }
state, msg, timeout = Consensus(event, state)
state = handleStateChange(state, msg, timeout)
}
case Precommit:
}
}
}
case timeout := <- timeoutCh:
case block := <- blockCh:
}
}
}
func handleStateChange(state, msg, timeout) State {
if state.Step == Commit {
state = ExecuteBlock(state.LockedValue)
}
if msg != nil {
send msg
}
if timeout != nil {
trigger timeout
}
}
```
### Implementation roadmap
* implement proposed implementation

View File

@@ -1,193 +0,0 @@
# ADR 033: pubsub 2.0
Author: Anton Kaliaev (@melekes)
## Changelog
02-10-2018: Initial draft
16-01-2019: Second version based on our conversation with Jae
17-01-2019: Third version explaining how new design solves current issues
## Context
Since the initial version of the pubsub, there have been a number of issues
raised: #951, #1879, #1880. Some of them are high-level issues questioning the
core design choices made. Others are minor and mostly about the interface of
`Subscribe()` / `Publish()` functions.
### Sync vs Async
Now, when publishing a message to subscribers, we can do it in a goroutine:
_using channels for data transmission_
```go
for _, subscriber := range subscribers {
    out := subscriber.outc
    go func() {
        out <- msg
    }()
}
```
_by invoking callback functions_
```go
for _, subscriber := range subscribers {
    go subscriber.callbackFn()
}
```
This gives us greater performance and allows us to avoid the "slow client problem"
(when other subscribers have to wait for a slow subscriber). A pool of
goroutines can be used to avoid uncontrolled memory growth.
In certain cases, this is what you want. But in our case, because we need
strict ordering of events (if event A was published before B, the guaranteed
delivery order will be A -> B), we can't publish msg in a new goroutine every time.
We can also have a goroutine per subscriber, although we'd need to be careful
with the number of subscribers. It's also more difficult to implement, and it's
unclear whether we'll benefit from it (because we'd be forced to create N additional
channels to distribute the msg to these goroutines).
### Non-blocking send
There is also a question of whether we should have a non-blocking send:
```go
for _, subscriber := range subscribers {
    out := subscriber.outc
    select {
    case out <- msg:
    default:
        log("subscriber %v buffer is full, skipping...")
    }
}
```
This fixes the "slow client problem", but there is no way for a slow client to
know whether it has missed a message. We could return a second channel and close it
to indicate subscription termination. On the other hand, if we're going to
stick with blocking send, **devs must always ensure the subscriber's handling code
does not block**, which is a hard task to put on their shoulders.
The interim option is to run a pool of goroutines for a single message and wait for all
goroutines to finish. This will solve the "slow client problem", but we'd still
have to wait `max(goroutine_X_time)` before we can publish the next message.
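A minimal sketch of that interim option, using a `sync.WaitGroup` to fan one message out and block until every subscriber has taken it (names and channel types are illustrative):

```go
package main

import "sync"

// publish delivers msg to every subscriber in its own goroutine and waits for
// all of them, so ordering across messages is preserved at the cost of waiting
// max(subscriber handling time) for each message.
func publish(msg interface{}, subscribers []chan interface{}) {
	var wg sync.WaitGroup
	for _, out := range subscribers {
		wg.Add(1)
		go func(out chan interface{}) {
			defer wg.Done()
			out <- msg // blocks until this subscriber receives the message
		}(out)
	}
	wg.Wait()
}
```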
### Channels vs Callbacks
Yet another question is whether we should use channels for message transmission or
call subscriber-defined callback functions. Callback functions give subscribers
more flexibility - you can use mutexes in there, channels, spawn goroutines,
anything you really want. But they also carry local scope, which can result in
memory leaks and/or memory usage increase.
Go channels are the de-facto standard for carrying data between goroutines.
### Why does `Subscribe()` accept an `out` channel?
Because in our tests, we create buffered channels (cap: 1). Alternatively, we
can make capacity an argument.
## Decision
Change Subscribe() function to return a `Subscription` struct:
```go
type Subscription struct {
// private fields
}
func (s *Subscription) Out() <-chan MsgAndTags
func (s *Subscription) Cancelled() <-chan struct{}
func (s *Subscription) Err() error
```
Out returns a channel onto which messages and tags are published.
Unsubscribe/UnsubscribeAll does not close the channel, to prevent clients from
receiving a nil message.
Cancelled returns a channel that's closed when the subscription is terminated
and is supposed to be used in a select statement.
If Cancelled is not closed yet, Err() returns nil.
If Cancelled is closed, Err returns a non-nil error explaining why:
Unsubscribed if the subscriber chose to unsubscribe,
OutOfCapacity if the subscriber is not pulling messages fast enough and the Out channel became full.
After Err returns a non-nil error, successive calls to Err() return the same error.
```go
subscription, err := pubsub.Subscribe(...)
if err != nil {
    // ...
}
for {
    select {
    case msgAndTags := <-subscription.Out():
        // ...
    case <-subscription.Cancelled():
        return subscription.Err()
    }
}
```
Make the Out() channel buffered (cap: 1) by default. In most cases, we want to
terminate the slow subscriber. Only in rare cases do we want to block the pubsub
(e.g. when debugging consensus). This should lower the chances of the pubsub
being frozen.
```go
// outCap can be used to set capacity of Out channel (1 by default). Set to 0
// for an unbuffered channel (WARNING: it may block the pubsub).
Subscribe(ctx context.Context, clientID string, query Query, outCap ...int) (Subscription, error)
```
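For example, a caller that explicitly wants an unbuffered channel (accepting that it may block the pubsub) could pass `0`; this is a sketch against the signature above, with `s` and `q` assumed to be the pubsub server and an already-parsed Query:

```go
subscription, err := s.Subscribe(ctx, "consensus-debugger", q, 0) // outCap = 0: unbuffered Out channel
if err != nil {
    // ...
}
```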
Also, the Out() channel should return tags along with a message:
```go
type MsgAndTags struct {
Msg interface{}
Tags TagMap
}
```
to inform clients of which Tags were used with Msg.
### How does this new design solve the current issues?
https://github.com/tendermint/tendermint/issues/951 (https://github.com/tendermint/tendermint/issues/1880)
Because of the non-blocking send, a situation where we deadlock is no longer
possible. If the client stops reading messages, it will be removed.
https://github.com/tendermint/tendermint/issues/1879
MsgAndTags is used now instead of a plain message.
### Future problems and their possible solutions
https://github.com/tendermint/tendermint/issues/2826
One question I am still pondering: how to prevent the pubsub from slowing
down consensus. We can increase the pubsub queue size (which is 0 now). Also,
it's probably a good idea to limit the total number of subscribers.
This can be done automatically. Say we set the queue size to 1000 and, when it's >=
80% full, refuse new subscriptions.
## Status
In review
## Consequences
### Positive
- more idiomatic interface
- subscribers know what tags msg was published with
- subscribers aware of the reason their subscription was cancelled
### Negative
- (since v1) no concurrency when it comes to publishing messages
### Neutral

View File

@@ -1,72 +0,0 @@
# ADR 034: PrivValidator file structure
## Changelog
03-11-2018: Initial Draft
## Context
For now, the PrivValidator file `priv_validator.json` contains mutable and immutable parts.
Even in an insecure mode which does not encrypt the private key on disk, it is reasonable to separate
the mutable and immutable parts.
References:
[#1181](https://github.com/tendermint/tendermint/issues/1181)
[#2657](https://github.com/tendermint/tendermint/issues/2657)
[#2313](https://github.com/tendermint/tendermint/issues/2313)
## Proposed Solution
We can split mutable and immutable parts with two structs:
```go
// FilePVKey stores the immutable part of PrivValidator
type FilePVKey struct {
Address types.Address `json:"address"`
PubKey crypto.PubKey `json:"pub_key"`
PrivKey crypto.PrivKey `json:"priv_key"`
filePath string
}
// FilePVState stores the mutable part of PrivValidator
type FilePVLastSignState struct {
Height int64 `json:"height"`
Round int `json:"round"`
Step int8 `json:"step"`
Signature []byte `json:"signature,omitempty"`
SignBytes cmn.HexBytes `json:"signbytes,omitempty"`
filePath string
mtx sync.Mutex
}
```
Then we can combine `FilePVKey` with `FilePVLastSignState` to get the original `FilePV`.
```go
type FilePV struct {
Key FilePVKey
LastSignState FilePVLastSignState
}
```
As discussed, `FilePV` should be located in `config`, and `FilePVLastSignState` should be stored in `data`. The
store path of each file should be specified in `config.yml`.
What we need to do next is to change the methods of `FilePV`.
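A hypothetical sketch of what those methods could look like (method names, file paths, and plain-JSON serialization are illustrative only; the real implementation would likely use the existing amino codec):

```go
import (
	"encoding/json"
	"io/ioutil"
)

// Save persists the two parts to their own locations, e.g.
// config/priv_validator_key.json and data/priv_validator_state.json.
func (pv *FilePV) Save() {
	saveJSON(pv.Key.filePath, pv.Key)
	saveJSON(pv.LastSignState.filePath, pv.LastSignState)
}

func saveJSON(path string, v interface{}) {
	bz, err := json.MarshalIndent(v, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := ioutil.WriteFile(path, bz, 0600); err != nil {
		panic(err)
	}
}
```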
## Status
Draft.
## Consequences
### Positive
- separate the mutable and immutable of PrivValidator
### Negative
- need to add more config for file path
### Neutral

View File

@@ -1,40 +0,0 @@
# ADR 035: Documentation
Author: @zramsay (Zach Ramsay)
## Changelog
### November 2nd 2018
- initial write-up
## Context
The Tendermint documentation underwent several changes before settling on the current model. Originally, the documentation was hosted on the website and had to be updated asynchronously from the code. Along with the other repositories requiring documentation, the whole stack moved to using Read The Docs to automatically generate, publish, and host the documentation. This, however, was insufficient; the RTD site had advertisements, it wasn't easily accessible to devs, didn't collect metrics, was another set of external links, etc.
## Decision
For two reasons, the decision was made to use VuePress:
1) the ability to get metrics (implemented on both Tendermint and SDK)
2) the ability to host the documentation on the website as a `/docs` endpoint.
This is done while maintaining synchrony between the docs and code, i.e., the website is built whenever the docs are updated.
## Status
The two points above have been implemented; the `config.js` has a Google Analytics identifier and the documentation workflow has been up and running largely without problems for several months. Details about the documentation build & workflow can be found [here](../DOCS_README.md).
## Consequences
Because of the organizational separation between Tendermint & Cosmos, there is a challenge of "what goes where" for certain aspects of documentation.
### Positive
This architecture is largely positive relative to prior docs arrangements.
### Negative
A significant portion of the docs automation / build process is in private repos with limited access/visibility to devs. However, these tasks are handled by the SRE team.
### Neutral

View File

@@ -1,36 +1,15 @@
# ADR {ADR-NUMBER}: {TITLE}
## Changelog
* {date}: {changelog}
# ADR 000: Template for an ADR
## Context
> This section contains all the context one needs to understand the current state, and why there is a problem. It should be as succinct as possible and introduce the high level idea behind the solution.
## Decision
> This section explains all of the details of the proposed solution, including implementation details.
It should also describe the effects / corollary items that may need to be changed as a part of this.
If the proposed change will be large, please also indicate a way to do the change to maximize ease of review.
(e.g. the optimal split of things to do between separate PR's)
## Status
> A decision may be "proposed" if it hasn't been agreed upon yet, or "accepted" once it is agreed upon. If a later ADR changes or reverses a decision, it may be marked as "deprecated" or "superseded" with a reference to its replacement.
{Deprecated|Proposed|Accepted}
## Consequences
> This section describes the consequences, after applying the decision. All consequences should be summarized here, not just the "positive" ones.
### Positive
### Negative
### Neutral
## References
> Are there any relevant PR comments, issues that led up to this, or articles referenced for why we made the given design choice? If so, link them here!
* {reference link}

Some files were not shown because too many files have changed in this diff