mirror of https://github.com/tendermint/tendermint.git
synced 2026-01-15 01:02:50 +00:00

Compare commits: tmp...rfc-e2e-te (4 commits)

| SHA1 |
|---|
| 33616bec7a |
| c3858e1a97 |
| fefa4014fb |
| 311496a657 |
@@ -40,7 +40,6 @@ sections.

- [RFC-000: P2P Roadmap](./rfc-000-p2p-roadmap.rst)
- [RFC-001: Storage Engines](./rfc-001-storage-engine.rst)
- [RFC-002: Interprocess Communication](./rfc-002-ipc-ecosystem.md)
- [RFC-003: Performance Taxonomy](./rfc-003-performance-questions.md)
- [RFC-004: E2E Test Framework Enhancements](./rfc-004-e2e-framework.md)

<!-- - [RFC-NNN: Title](./rfc-NNN-title.md) -->
@@ -1,283 +0,0 @@
# RFC 003: Taxonomy of potential performance issues in Tendermint

## Changelog

- 2021-09-02: Created initial draft (@wbanfield)
- 2021-09-14: Add discussion of the event system (@wbanfield)

## Abstract

This document discusses the various sources of performance issues in Tendermint and
attempts to clarify what work may be required to understand and address them.

## Background

Performance, loosely defined as the ability of a software process to perform its work
quickly and efficiently under load and within reasonable resource limits, is a frequent
topic of discussion in the Tendermint project.
To effectively address any issues with Tendermint performance, we need to
categorize the various issues, understand their potential sources, and gauge their
impact on users.

Categorizing the different known performance issues will allow us to discuss and fix them
more systematically. This document proposes a rough taxonomy of performance issues
and highlights areas where more research into potential performance problems is required.

Understanding Tendermint's performance limitations will also be critically important
as we make changes to many of its subsystems. Performance is a central concern for
upcoming decisions regarding the `p2p` protocol, RPC message encoding and structure,
database usage and selection, and consensus protocol updates.
## Discussion

This section attempts to delineate the different sections of Tendermint functionality
that are often cited as having performance issues. It raises questions and suggests
lines of inquiry that may be valuable for better understanding Tendermint's performance issues.

As a note: we should avoid quickly adding many microbenchmarks or package-level benchmarks.
These are prone to being worse than useless, as they can obscure what _should_ be
focused on: performance of the system from the perspective of a user. We should,
instead, tune performance with an eye toward user needs and the actions users take. These users comprise
both operators of Tendermint chains and the people generating transactions for
Tendermint chains. Both of these sets of users are largely aligned in wanting an end-to-end
system that operates quickly and efficiently.

REQUEST: The list below may be incomplete. If there are additional sections that are often
cited as creating poor performance, please comment so that they may be included.
### P2P

#### Claim: Tendermint cannot scale to large numbers of nodes

A complaint has been reported that Tendermint networks cannot scale to large numbers of nodes.
The listed number of nodes a user reported as causing issues was in the thousands.
We don't currently have evidence about the upper limit on the number of nodes that
Tendermint's P2P stack can scale to.

We need to more concretely understand the source of issues and determine which layer
is causing a problem. It's possible that the P2P layer, in the absence of any reactors
sending data, is perfectly capable of managing thousands of peer connections. For
a reasonable networking and application setup, thousands of connections should not present any
issue for the application.

We need more data to understand the problem directly. We want to drive the popularity
and adoption of Tendermint, and this will mean allowing for chains with more validators.
We should follow up with users experiencing this issue. We may then want to add
a series of metrics to the P2P layer to better understand the inefficiencies it produces.

The following metrics can help us understand the sources of latency in the Tendermint P2P stack:

* Number of messages sent and received per second
* Time a message spends on the P2P layer send and receive queues

The following metrics exist and should be leveraged in addition to those added:

* Number of peers a node is connected to
* Number of bytes per channel sent to and received from each peer
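
A minimal sketch of what the proposed queue-time and message-count metrics could look like, using the Prometheus Go client that Tendermint already relies on for metrics. The metric names, label sets, and the `ObserveDequeue` helper are assumptions made for this sketch, not existing Tendermint APIs.

```go
package p2pmetrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var (
	// Counts messages moving through the P2P layer, labeled by channel.
	messagesSent = prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "p2p_messages_sent_total",
		Help: "Number of messages sent, by channel.",
	}, []string{"channel"})

	// Tracks how long messages sit on the send and receive queues.
	queueTime = prometheus.NewHistogramVec(prometheus.HistogramOpts{
		Name:    "p2p_queue_time_seconds",
		Help:    "Time a message spent on a P2P send or receive queue.",
		Buckets: prometheus.ExponentialBuckets(0.0001, 2, 15), // 100 microseconds up to ~1.6s
	}, []string{"queue"}) // queue is "send" or "receive"
)

func init() {
	prometheus.MustRegister(messagesSent, queueTime)
}

// ObserveDequeue records queue latency when a message leaves a queue,
// given the time at which it was enqueued.
func ObserveDequeue(queue string, enqueuedAt time.Time) {
	queueTime.WithLabelValues(queue).Observe(time.Since(enqueuedAt).Seconds())
}
```

Keeping the label set small (channel ID, queue direction) matters here: per-peer labels on a histogram would explode cardinality on exactly the large networks we want to measure.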
### Sync

#### Claim: Block syncing is slow

Bootstrapping a new node in a network to the height of the rest of the network is believed to
take longer than users would like. Block sync requires fetching all of the blocks from
peers and placing them into the local disk for storage. A useful line of inquiry
is understanding how quickly a perfectly tuned system _could_ fetch all of the state
over a network, so that we understand how much overhead Tendermint actually adds.

The operation is likely to be _incredibly_ dependent on the environment in which
the node is being run. The factors that will influence syncing include:

1. Number of peers that a syncing node may fetch from.
2. Speed of the disk that a validator is writing to.
3. Speed of the network connection between the different peers that the node is
syncing from.

We should calculate how quickly this operation _could possibly_ complete for common chains and nodes.
To calculate this bound, we should assume that
a node is reading at the line rate of its NIC and writing at the full drive speed to its
local storage. Comparing this theoretical upper limit to the actual sync times
observed by node operators will give us a good point of comparison for understanding
how much overhead Tendermint incurs.
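
To illustrate the upper-bound calculation, the sketch below assumes the only limits are NIC line rate and sequential disk write speed; the chain size and hardware figures are invented inputs for the example, not measurements.

```go
package main

import "fmt"

func main() {
	const (
		chainBytes = 500e9   // assumed total block data to sync: 500 GB
		nicBps     = 1e9 / 8 // 1 Gbit/s NIC expressed in bytes per second
		diskBps    = 400e6   // assumed sequential write speed: 400 MB/s
	)

	// The pipeline can go no faster than its slowest stage.
	bottleneck := nicBps
	if diskBps < bottleneck {
		bottleneck = diskBps
	}

	seconds := chainBytes / bottleneck
	fmt.Printf("theoretical best-case sync time: %.1f hours\n", seconds/3600)
	// Any observed sync time far above this bound is overhead added by
	// Tendermint itself: validation, execution, peer scheduling, etc.
}
```

With these inputs the bound is roughly 1.1 hours; a node that takes days to sync the same data is therefore spending almost all of its time in overhead, not raw transfer.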
We should additionally add metrics to the block sync operation to more clearly pinpoint
slow operations. The following metrics should be added:

* Time to fetch and validate each block
* Time to execute a block
* Blocks synced per unit time
### Application

Applications performing complex state transitions have the potential to bottleneck
the Tendermint node.

#### Claim: ABCI block delivery could cause slowdown

ABCI delivers blocks via several methods: `BeginBlock`, `DeliverTx`, `EndBlock`, and `Commit`.

Tendermint delivers transactions one by one via the `DeliverTx` call. Most of the
transaction delivery in Tendermint occurs asynchronously and therefore appears unlikely to
form a bottleneck in ABCI.

After delivering all transactions, Tendermint calls the `Commit` ABCI method.
Tendermint [locks all access to the mempool][abci-commit-description] while `Commit`
proceeds. This means that an application that is slow to execute all of its
transactions or finalize state during the `Commit` method will prevent any new
transactions from being added to the mempool. Apps that are slow to commit will also
prevent consensus from proceeding to the next consensus height, since Tendermint
cannot validate or produce block proposals without the
AppHash obtained from the `Commit` method. We should add a metric for each
step in the ABCI protocol to track the amount of time that a node spends communicating
with the application at each step.
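
A minimal sketch of such per-step timing, written as a generic wrapper rather than a change to Tendermint's actual ABCI client; the metric name and the `timeStep` helper are hypothetical.

```go
package abcitiming

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// abciStepTime is a hypothetical histogram tracking time spent in each
// ABCI call, labeled by method ("begin_block", "deliver_tx", "commit", ...).
var abciStepTime = prometheus.NewHistogramVec(prometheus.HistogramOpts{
	Name: "abci_step_duration_seconds",
	Help: "Time spent waiting on the application, by ABCI method.",
}, []string{"method"})

func init() { prometheus.MustRegister(abciStepTime) }

// timeStep wraps any ABCI call so the time spent in the application is
// recorded regardless of how the call returns.
func timeStep(method string, call func() error) error {
	start := time.Now()
	defer func() {
		abciStepTime.WithLabelValues(method).Observe(time.Since(start).Seconds())
	}()
	return call()
}
```

Wrapping the client's `Commit` invocation with `timeStep("commit", ...)` would then expose exactly how long consensus (and the mempool lock) is stalled waiting on the application each height.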
#### Claim: ABCI serialization overhead causes slowdown

The most common way to run a Tendermint application is using the Cosmos SDK.
The Cosmos SDK runs the ABCI application within the same process as Tendermint.
When an application is run in the same process as Tendermint, a serialization penalty
is not paid. This is because the local ABCI client does not serialize method calls
and instead passes the protobuf types through directly. This can be seen
in [local_client.go][abci-local-client-code].

Serialization and deserialization in the gRPC and socket-protocol ABCI clients
may cause slowdown. While these may cause issues, they are not part of the primary
use case of Tendermint and do not necessarily need to be addressed at this time.
### RPC

#### Claim: The Query API is slow

The query API locks a mutex across the ABCI connections. This causes consensus to
slow during queries, as ABCI is no longer able to make progress. This is known
to be causing issues in the Cosmos SDK and is being addressed [in the SDK][sdk-query-fix],
but a more robust solution may be required. Adding metrics to each ABCI client connection
and message, as described in the Application section of this document, would allow us
to further introspect the issue here.
#### Claim: RPC serialization may cause slowdown

The Tendermint RPC uses a modified version of JSON-RPC. This RPC powers the `broadcast_tx_*` methods,
which are currently the critical path for adding transactions to Tendermint. These methods are
likely invoked quite frequently on popular networks. Being able to perform efficiently
on this common and critical operation is very important. The current JSON-RPC implementation
relies heavily on type introspection via reflection, which is known to be very slow in
Go. We should therefore produce benchmarks of these methods to determine how much overhead
we are adding to what is likely to be a very common operation.
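
A standard Go benchmark can put a first number on the reflection-driven encoding cost. The response struct below is a simplified stand-in for the real RPC types, which are larger and would only increase the measured cost.

```go
package rpcbench

import (
	"encoding/json"
	"testing"
)

// mockTxResponse stands in for a broadcast_tx_* RPC response; it is a
// hypothetical type for this sketch, not Tendermint's actual definition.
type mockTxResponse struct {
	Code int    `json:"code"`
	Data []byte `json:"data"`
	Log  string `json:"log"`
	Hash []byte `json:"hash"`
}

// BenchmarkJSONMarshal measures the reflection-based encoding that
// encoding/json performs for every response of this shape.
func BenchmarkJSONMarshal(b *testing.B) {
	resp := mockTxResponse{Code: 0, Data: make([]byte, 256), Log: "ok", Hash: make([]byte, 32)}
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		if _, err := json.Marshal(&resp); err != nil {
			b.Fatal(err)
		}
	}
}
```

Run with `go test -bench=. -benchmem`; comparing the per-operation cost against a hand-written encoder for the same struct would isolate how much of the overhead is reflection specifically.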
The other JSON-RPC methods are much less critical to the core functionality of Tendermint.
While there may be other points of performance consideration within the RPC, methods that do not
receive high volumes of requests should not be prioritized for performance consideration.

NOTE: Previous discussion of the RPC framework was done in [ADR 57][adr-57], and
there is ongoing work to inspect and alter the JSON-RPC framework in [RFC 002][rfc-002].
Many of these RPC-related performance considerations can either wait until the RFC 002 work is done or be
considered in concert with the in-flight changes to the JSON-RPC.
### Protocol

#### Claim: Gossiping messages is a slow process

Currently, for any validator to successfully vote in a consensus _step_, it must
receive votes from greater than 2/3 of the validators on the network. In many cases,
it's preferable to receive as many votes as possible from correct validators.

This produces a quadratic increase in the messages communicated as more validators join the network:
each of the N validators must communicate with all other N-1 validators.
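
To make the quadratic growth concrete, the snippet below tabulates the minimum number of vote deliveries per step for a few network sizes, counting one message per ordered validator pair; real networks also gossip duplicates, so these are lower bounds.

```go
package main

import "fmt"

func main() {
	// Every one of the n validators must receive votes from the other
	// n-1, so a single step involves at least n*(n-1) vote deliveries.
	for _, n := range []int{10, 100, 140, 1000} {
		fmt.Printf("%4d validators: >= %8d vote messages per step\n", n, n*(n-1))
	}
}
```

Going from 140 validators (about 19,500 messages) to 1,000 validators (about 999,000 messages) is a 50x increase in traffic per step, which is why validator-set growth pressures the gossip layer so directly.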
This large number of messages communicated per step has been identified as impacting
performance of the protocol. Given that the number of messages communicated has been
identified as a bottleneck, it would be extremely valuable to gather data on how long
it takes for popular chains with many validators to gather all votes within a step.

Metrics that would improve visibility into this include:

* Amount of time for a node to gather votes in a step.
* Amount of time for a node to gather all block parts.
* Number of votes each node gossips (i.e., not its own votes, but votes it is
transmitting for a peer).
* Total number of votes each node receives. (A node may receive duplicate votes,
so understanding how frequently this occurs will be valuable in evaluating the performance
of the gossip system.)
#### Claim: Hashing Txs causes slowdown in Tendermint

Using a faster hash algorithm for Tx hashes is currently a point of discussion
in Tendermint. Namely, it is being considered as part of the [modular hashing proposal][modular-hashing].
It is currently unknown if hashing transactions in the mempool forms a significant bottleneck.
Although it does not appear to be documented as slow, there are a few open GitHub
issues that indicate a possible user preference for a faster hashing algorithm,
including [issue 2187][issue-2187] and [issue 2186][issue-2186].

It is likely worth investigating what order of magnitude Tx hashing takes in comparison to other
aspects of adding a Tx to the mempool. It is not currently clear if the rate of adding Txs
to the mempool is a source of user pain. We should not endeavor to make large changes to
consensus-critical components without first being certain that the change is highly
valuable and impactful.
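
A quick way to establish that order of magnitude is to measure the hash itself; Tendermint's Tx hash is SHA-256, and the 1 KB transaction size below is an arbitrary assumption for the sketch.

```go
package hashbench

import (
	"crypto/sha256"
	"testing"
)

// BenchmarkTxHash measures SHA-256 over a 1 KB transaction, approximating
// the per-Tx hashing cost paid when a transaction enters the mempool.
func BenchmarkTxHash(b *testing.B) {
	tx := make([]byte, 1024) // assumed transaction size
	b.SetBytes(int64(len(tx)))
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		sha256.Sum256(tx)
	}
}
```

On modern hardware this is typically in the low microseconds per transaction; if the measured figure is dwarfed by CheckTx and mempool bookkeeping, that alone argues against touching the consensus-critical hash.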
### Digital Signatures

#### Claim: Verification of digital signatures may cause slowdown in Tendermint

Working with cryptographic signatures can be computationally expensive. The Cosmos
Hub uses [ed25519 signatures][hub-signature]. The library performing signature
verification on votes in Tendermint is [benchmarked][ed25519-bench] to be able to verify an `ed25519`
signature in 75μs on a decently fast CPU. A validator in the Cosmos Hub performs
3 sets of verifications on the signatures of the 140 validators in the Hub
in a consensus round: during block verification, when verifying the prevotes, and
when verifying the precommits. With no batching, this would be roughly `31.5ms` per
round (3 × 140 × 75μs). It is quite unlikely, therefore, that this accounts for any serious amount
of the ~7 seconds of block time per height in the Hub.
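
For reference, the arithmetic behind that estimate as a runnable sketch, using the 75μs single-verification cost and 140-validator set size cited above.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		perVerify  = 75 * time.Microsecond // cited single-verification cost
		validators = 140                   // Cosmos Hub validator set size
		passes     = 3                     // block, prevotes, precommits
	)
	total := time.Duration(passes*validators) * perVerify
	fmt.Println("sequential verification per round:", total) // 31.5ms
	// Against a ~7s block time, signature verification alone is well
	// under 1% of the round.
}
```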
This may cause slowdown when syncing, since the process needs to constantly verify
signatures. It's possible that improved signature aggregation will lead to improved
light client or other syncing performance. In general, a metric should be added
to track block rate while block syncing.

#### Claim: Our use of digital signatures in the consensus protocol contributes to performance issues

Currently, Tendermint's digital signature verification requires that all validators
receive all vote messages. Each validator must receive the complete digital signature
along with the vote message that it corresponds to. This means that all N validators
must receive messages from at least 2/3 of the N validators in each consensus
round. Given the potential for oddly shaped network topologies and the expected
variable network round-trip times of a few hundred milliseconds in a blockchain,
it is highly likely that this amount of gossiping is leading to a significant amount
of the slowdown in the Cosmos Hub and in Tendermint consensus.
### Tendermint Event System

#### Claim: The event system is a bottleneck in Tendermint

The Tendermint event system is used to communicate and store information about
internal Tendermint execution. The system uses channels internally to send messages
to different subscribers. Sending an event [blocks on the internal channel][event-send].
The default configuration is to [use an unbuffered channel for event publishes][event-buffer-capacity].
Several consumers of the event system also use an unbuffered channel for reads.
An example of this is the [event indexer][event-indexer-unbuffered], which takes an
unbuffered subscription to the event system. The result is that these unbuffered readers
can cause writes to the event system to block or slow down depending on contention in the
event system. This has implications for the consensus system, which [publishes events][consensus-event-send].
To better understand the performance of the event system, we should add metrics to track the timing of
event sends. The following metrics would be a good start for tracking this performance:

* Time in event send, labeled by Event Type
* Time in event receive, labeled by subscriber
* Event throughput, measured in events per unit time.
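
The blocking behavior described above is straightforward to reproduce in isolation. The sketch below models the pubsub pattern with plain channels rather than Tendermint's actual `libs/pubsub` types: one slow, unbuffered subscriber caps the publisher's rate.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	events := make(chan string) // unbuffered, like the default event bus

	// A slow subscriber, standing in for e.g. the event indexer.
	go func() {
		for e := range events {
			time.Sleep(50 * time.Millisecond) // slow handling
			_ = e
		}
	}()

	// The publisher (e.g. consensus) blocks on every send until the
	// subscriber is ready, so its rate is capped by the slowest reader.
	start := time.Now()
	for i := 0; i < 10; i++ {
		events <- fmt.Sprintf("event-%d", i)
	}
	close(events)
	fmt.Println("published 10 events in", time.Since(start)) // ~500ms, not ~0
}
```

The same experiment with a buffered channel absorbs short bursts but only delays the stall; the proposed send/receive timing metrics would show where that backpressure actually lands in a running node.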
### References

[modular-hashing]: https://github.com/tendermint/tendermint/pull/6773
[issue-2186]: https://github.com/tendermint/tendermint/issues/2186
[issue-2187]: https://github.com/tendermint/tendermint/issues/2187
[rfc-002]: https://github.com/tendermint/tendermint/pull/6913
[adr-57]: https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-057-RPC.md
[issue-1319]: https://github.com/tendermint/tendermint/issues/1319
[abci-commit-description]: https://github.com/tendermint/spec/blob/master/spec/abci/apps.md#commit
[abci-local-client-code]: https://github.com/tendermint/tendermint/blob/511bd3eb7f037855a793a27ff4c53c12f085b570/abci/client/local_client.go#L84
[hub-signature]: https://github.com/cosmos/gaia/blob/0ecb6ed8a244d835807f1ced49217d54a9ca2070/docs/resources/genesis.md#consensus-parameters
[ed25519-bench]: https://github.com/oasisprotocol/curve25519-voi/blob/d2e7fc59fe38c18ca990c84c4186cba2cc45b1f9/PERFORMANCE.md
[event-send]: https://github.com/tendermint/tendermint/blob/5bd3b286a2b715737f6d6c33051b69061d38f8ef/libs/pubsub/pubsub.go#L338
[event-buffer-capacity]: https://github.com/tendermint/tendermint/blob/5bd3b286a2b715737f6d6c33051b69061d38f8ef/types/event_bus.go#L14
[event-indexer-unbuffered]: https://github.com/tendermint/tendermint/blob/5bd3b286a2b715737f6d6c33051b69061d38f8ef/state/indexer/indexer_service.go#L39
[consensus-event-send]: https://github.com/tendermint/tendermint/blob/5bd3b286a2b715737f6d6c33051b69061d38f8ef/internal/consensus/state.go#L1573
[sdk-query-fix]: https://github.com/cosmos/cosmos-sdk/pull/10045
go.mod (2 lines changed)

@@ -34,7 +34,7 @@ require (
	github.com/spf13/viper v1.8.1
	github.com/stretchr/testify v1.7.0
	github.com/tendermint/tm-db v0.6.4
	github.com/vektra/mockery/v2 v2.9.3
	github.com/vektra/mockery/v2 v2.9.0
	golang.org/x/crypto v0.0.0-20210513164829-c07d793c2f9a
	golang.org/x/net v0.0.0-20210428140749-89ef3d95e781
	golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
go.sum (4 lines changed)

@@ -895,8 +895,8 @@ github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyC
github.com/valyala/fasthttp v1.16.0/go.mod h1:YOKImeEosDdBPnxc0gy7INqi3m1zK6A+xl6TwOBhHCA=
github.com/valyala/quicktemplate v1.6.3/go.mod h1:fwPzK2fHuYEODzJ9pkw0ipCPNHZ2tD5KW4lOuSdPKzY=
github.com/valyala/tcplisten v0.0.0-20161114210144-ceec8f93295a/go.mod h1:v3UYOV9WzVtRmSR+PDvWpU/qWl4Wa5LApYYX4ZtKbio=
github.com/vektra/mockery/v2 v2.9.3 h1:ma6hcGQw4q/lhFUTJ+E9V8/5tsIcht9i2Q4d1qo26SQ=
github.com/vektra/mockery/v2 v2.9.3/go.mod h1:2gU4Cf/f8YyC8oEaSXfCnZBMxMjMl/Ko205rlP0fO90=
github.com/vektra/mockery/v2 v2.9.0 h1:+3FhCL3EviR779mTzXwUuhPNnqFUA7sDnt9OFkXaFd4=
github.com/vektra/mockery/v2 v2.9.0/go.mod h1:2gU4Cf/f8YyC8oEaSXfCnZBMxMjMl/Ko205rlP0fO90=
github.com/viki-org/dnscache v0.0.0-20130720023526-c70c1f23c5d8/go.mod h1:dniwbG03GafCjFohMDmz6Zc6oCuiqgH6tGNyXTkHzXE=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778/go.mod h1:2MuV+tbUrU1zIOPMxZ5EncGwgmMJsa+9ucAQZXxsObs=
@@ -52,7 +52,7 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
		thisConfig := ResetConfig(fmt.Sprintf("%s_%d", testName, i))
		defer os.RemoveAll(thisConfig.RootDir)

		ensureDir(t, path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
		ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
		app := appFunc()
		vals := types.TM2PB.ValidatorUpdates(state.Validators)
		app.InitChain(abci.RequestInitChain{Validators: vals})

@@ -69,10 +69,9 @@ func configSetup(t *testing.T) *cfg.Config {
	return config
}

func ensureDir(t *testing.T, dir string, mode os.FileMode) {
	t.Helper()
func ensureDir(dir string, mode os.FileMode) {
	if err := tmos.EnsureDir(dir, mode); err != nil {
		t.Fatalf("error opening directory: %s", err)
		panic(err)
	}
}
@@ -222,20 +221,18 @@ func startTestRound(cs *State, height int64, round int32) {

// Create proposal block from cs1 but sign it with vs.
func decideProposal(
	t *testing.T,
	cs1 *State,
	vs *validatorStub,
	height int64,
	round int32,
) (proposal *types.Proposal, block *types.Block) {
	t.Helper()
	cs1.mtx.Lock()
	block, blockParts := cs1.createProposalBlock()
	validRound := cs1.ValidRound
	chainID := cs1.state.ChainID
	cs1.mtx.Unlock()
	if block == nil {
		t.Fatal("Failed to createProposalBlock. Did you forget to add commit for previous block?")
		panic("Failed to createProposalBlock. Did you forget to add commit for previous block?")
	}

	// Make proposal
@@ -243,7 +240,7 @@ func decideProposal(
	proposal = types.NewProposal(height, round, polRound, propBlockID)
	p := proposal.ToProto()
	if err := vs.SignProposal(context.Background(), chainID, p); err != nil {
		t.Fatalf("error signing proposal: %s", err)
		panic(err)
	}

	proposal.Signature = p.Signature
@@ -270,38 +267,36 @@ func signAddVotes(
}

func validatePrevote(t *testing.T, cs *State, round int32, privVal *validatorStub, blockHash []byte) {
	t.Helper()
	prevotes := cs.Votes.Prevotes(round)
	pubKey, err := privVal.GetPubKey(context.Background())
	require.NoError(t, err)
	address := pubKey.Address()
	var vote *types.Vote
	if vote = prevotes.GetByAddress(address); vote == nil {
		t.Fatalf("Failed to find prevote from validator")
		panic("Failed to find prevote from validator")
	}
	if blockHash == nil {
		if vote.BlockID.Hash != nil {
			t.Fatalf("Expected prevote to be for nil, got %X", vote.BlockID.Hash)
			panic(fmt.Sprintf("Expected prevote to be for nil, got %X", vote.BlockID.Hash))
		}
	} else {
		if !bytes.Equal(vote.BlockID.Hash, blockHash) {
			t.Fatalf("Expected prevote to be for %X, got %X", blockHash, vote.BlockID.Hash)
			panic(fmt.Sprintf("Expected prevote to be for %X, got %X", blockHash, vote.BlockID.Hash))
		}
	}
}

func validateLastPrecommit(t *testing.T, cs *State, privVal *validatorStub, blockHash []byte) {
	t.Helper()
	votes := cs.LastCommit
	pv, err := privVal.GetPubKey(context.Background())
	require.NoError(t, err)
	address := pv.Address()
	var vote *types.Vote
	if vote = votes.GetByAddress(address); vote == nil {
		t.Fatalf("Failed to find precommit from validator")
		panic("Failed to find precommit from validator")
	}
	if !bytes.Equal(vote.BlockID.Hash, blockHash) {
		t.Fatalf("Expected precommit to be for %X, got %X", blockHash, vote.BlockID.Hash)
		panic(fmt.Sprintf("Expected precommit to be for %X, got %X", blockHash, vote.BlockID.Hash))
	}
}
@@ -314,42 +309,41 @@ func validatePrecommit(
	votedBlockHash,
	lockedBlockHash []byte,
) {
	t.Helper()
	precommits := cs.Votes.Precommits(thisRound)
	pv, err := privVal.GetPubKey(context.Background())
	require.NoError(t, err)
	address := pv.Address()
	var vote *types.Vote
	if vote = precommits.GetByAddress(address); vote == nil {
		t.Fatalf("Failed to find precommit from validator")
		panic("Failed to find precommit from validator")
	}

	if votedBlockHash == nil {
		if vote.BlockID.Hash != nil {
			t.Fatalf("Expected precommit to be for nil")
			panic("Expected precommit to be for nil")
		}
	} else {
		if !bytes.Equal(vote.BlockID.Hash, votedBlockHash) {
			t.Fatalf("Expected precommit to be for proposal block")
			panic("Expected precommit to be for proposal block")
		}
	}

	if lockedBlockHash == nil {
		if cs.LockedRound != lockRound || cs.LockedBlock != nil {
			t.Fatalf(
			panic(fmt.Sprintf(
				"Expected to be locked on nil at round %d. Got locked at round %d with block %v",
				lockRound,
				cs.LockedRound,
				cs.LockedBlock)
				cs.LockedBlock))
		}
	} else {
		if cs.LockedRound != lockRound || !bytes.Equal(cs.LockedBlock.Hash(), lockedBlockHash) {
			t.Fatalf(
			panic(fmt.Sprintf(
				"Expected block to be locked on round %d, got %d. Got locked block %X, expected %X",
				lockRound,
				cs.LockedRound,
				cs.LockedBlock.Hash(),
				lockedBlockHash)
				lockedBlockHash))
		}
	}
}
@@ -363,7 +357,6 @@ func validatePrevoteAndPrecommit(
	votedBlockHash,
	lockedBlockHash []byte,
) {
	t.Helper()
	// verify the prevote
	validatePrevote(t, cs, thisRound, privVal, votedBlockHash)
	// verify precommit
@@ -451,14 +444,13 @@ func newStateWithConfigAndBlockStore(
	return cs
}

func loadPrivValidator(t *testing.T, config *cfg.Config) *privval.FilePV {
	t.Helper()
func loadPrivValidator(config *cfg.Config) *privval.FilePV {
	privValidatorKeyFile := config.PrivValidator.KeyFile()
	ensureDir(t, filepath.Dir(privValidatorKeyFile), 0700)
	ensureDir(filepath.Dir(privValidatorKeyFile), 0700)
	privValidatorStateFile := config.PrivValidator.StateFile()
	privValidator, err := privval.LoadOrGenFilePV(privValidatorKeyFile, privValidatorStateFile)
	if err != nil {
		t.Fatalf("error generating validator file: %s", err)
		panic(err)
	}
	privValidator.Reset()
	return privValidator
@@ -483,238 +475,220 @@ func randState(config *cfg.Config, nValidators int) (*State, []*validatorStub) {

//-------------------------------------------------------------------------------

func ensureNoNewEvent(t *testing.T, ch <-chan tmpubsub.Message, timeout time.Duration,
func ensureNoNewEvent(ch <-chan tmpubsub.Message, timeout time.Duration,
	errorMessage string) {
	t.Helper()
	select {
	case <-time.After(timeout):
		break
	case <-ch:
		t.Fatalf("unexpected event: %s", errorMessage)
		panic(errorMessage)
	}
}

func ensureNoNewEventOnChannel(t *testing.T, ch <-chan tmpubsub.Message) {
	t.Helper()
func ensureNoNewEventOnChannel(ch <-chan tmpubsub.Message) {
	ensureNoNewEvent(
		t,
		ch,
		ensureTimeout,
		"We should be stuck waiting, not receiving new event on the channel")
}

func ensureNoNewRoundStep(t *testing.T, stepCh <-chan tmpubsub.Message) {
	t.Helper()
func ensureNoNewRoundStep(stepCh <-chan tmpubsub.Message) {
	ensureNoNewEvent(
		t,
		stepCh,
		ensureTimeout,
		"We should be stuck waiting, not receiving NewRoundStep event")
}

func ensureNoNewTimeout(t *testing.T, stepCh <-chan tmpubsub.Message, timeout int64) {
	t.Helper()
func ensureNoNewUnlock(unlockCh <-chan tmpubsub.Message) {
	ensureNoNewEvent(
		unlockCh,
		ensureTimeout,
		"We should be stuck waiting, not receiving Unlock event")
}

func ensureNoNewTimeout(stepCh <-chan tmpubsub.Message, timeout int64) {
	timeoutDuration := time.Duration(timeout*10) * time.Nanosecond
	ensureNoNewEvent(
		t,
		stepCh,
		timeoutDuration,
		"We should be stuck waiting, not receiving NewTimeout event")
}

func ensureNewEvent(t *testing.T, ch <-chan tmpubsub.Message, height int64, round int32, timeout time.Duration, errorMessage string) { // nolint: lll
	t.Helper()
func ensureNewEvent(ch <-chan tmpubsub.Message, height int64, round int32, timeout time.Duration, errorMessage string) {
	select {
	case <-time.After(timeout):
		t.Fatalf("timed out waiting for new event: %s", errorMessage)
		panic(errorMessage)
	case msg := <-ch:
		roundStateEvent, ok := msg.Data().(types.EventDataRoundState)
		if !ok {
			t.Fatalf("expected a EventDataRoundState, got %T. Wrong subscription channel?", msg.Data())
			panic(fmt.Sprintf("expected a EventDataRoundState, got %T. Wrong subscription channel?",
				msg.Data()))
		}
		if roundStateEvent.Height != height {
			t.Fatalf("expected height %v, got %v", height, roundStateEvent.Height)
			panic(fmt.Sprintf("expected height %v, got %v", height, roundStateEvent.Height))
		}
		if roundStateEvent.Round != round {
			t.Fatalf("expected round %v, got %v", round, roundStateEvent.Round)
			panic(fmt.Sprintf("expected round %v, got %v", round, roundStateEvent.Round))
		}
		// TODO: We could check also for a step at this point!
	}
}

func ensureNewRound(t *testing.T, roundCh <-chan tmpubsub.Message, height int64, round int32) {
	t.Helper()
func ensureNewRound(roundCh <-chan tmpubsub.Message, height int64, round int32) {
	select {
	case <-time.After(ensureTimeout):
		t.Fatal("Timeout expired while waiting for NewRound event")
		panic("Timeout expired while waiting for NewRound event")
	case msg := <-roundCh:
		newRoundEvent, ok := msg.Data().(types.EventDataNewRound)
		if !ok {
			t.Fatalf("expected a EventDataNewRound, got %T. Wrong subscription channel?", msg.Data())
			panic(fmt.Sprintf("expected a EventDataNewRound, got %T. Wrong subscription channel?",
				msg.Data()))
		}
		if newRoundEvent.Height != height {
			t.Fatalf("expected height %v, got %v", height, newRoundEvent.Height)
			panic(fmt.Sprintf("expected height %v, got %v", height, newRoundEvent.Height))
		}
		if newRoundEvent.Round != round {
			t.Fatalf("expected round %v, got %v", round, newRoundEvent.Round)
			panic(fmt.Sprintf("expected round %v, got %v", round, newRoundEvent.Round))
		}
	}
}

func ensureNewTimeout(t *testing.T, timeoutCh <-chan tmpubsub.Message, height int64, round int32, timeout int64) {
	t.Helper()
func ensureNewTimeout(timeoutCh <-chan tmpubsub.Message, height int64, round int32, timeout int64) {
	timeoutDuration := time.Duration(timeout*10) * time.Nanosecond
	ensureNewEvent(t, timeoutCh, height, round, timeoutDuration,
	ensureNewEvent(timeoutCh, height, round, timeoutDuration,
		"Timeout expired while waiting for NewTimeout event")
}

func ensureNewProposal(t *testing.T, proposalCh <-chan tmpubsub.Message, height int64, round int32) {
	t.Helper()
func ensureNewProposal(proposalCh <-chan tmpubsub.Message, height int64, round int32) {
	select {
	case <-time.After(ensureTimeout):
		t.Fatalf("Timeout expired while waiting for NewProposal event")
		panic("Timeout expired while waiting for NewProposal event")
	case msg := <-proposalCh:
		proposalEvent, ok := msg.Data().(types.EventDataCompleteProposal)
		if !ok {
			t.Fatalf("expected a EventDataCompleteProposal, got %T. Wrong subscription channel?",
				msg.Data())
			panic(fmt.Sprintf("expected a EventDataCompleteProposal, got %T. Wrong subscription channel?",
				msg.Data()))
		}
		if proposalEvent.Height != height {
			t.Fatalf("expected height %v, got %v", height, proposalEvent.Height)
			panic(fmt.Sprintf("expected height %v, got %v", height, proposalEvent.Height))
		}
		if proposalEvent.Round != round {
			t.Fatalf("expected round %v, got %v", round, proposalEvent.Round)
			panic(fmt.Sprintf("expected round %v, got %v", round, proposalEvent.Round))
		}
	}
}

func ensureNewValidBlock(t *testing.T, validBlockCh <-chan tmpubsub.Message, height int64, round int32) {
	t.Helper()
	ensureNewEvent(t, validBlockCh, height, round, ensureTimeout,
func ensureNewValidBlock(validBlockCh <-chan tmpubsub.Message, height int64, round int32) {
	ensureNewEvent(validBlockCh, height, round, ensureTimeout,
		"Timeout expired while waiting for NewValidBlock event")
}

func ensureNewBlock(t *testing.T, blockCh <-chan tmpubsub.Message, height int64) {
	t.Helper()
func ensureNewBlock(blockCh <-chan tmpubsub.Message, height int64) {
	select {
	case <-time.After(ensureTimeout):
		t.Fatalf("Timeout expired while waiting for NewBlock event")
		panic("Timeout expired while waiting for NewBlock event")
	case msg := <-blockCh:
		blockEvent, ok := msg.Data().(types.EventDataNewBlock)
		if !ok {
			t.Fatalf("expected a EventDataNewBlock, got %T. Wrong subscription channel?",
				msg.Data())
			panic(fmt.Sprintf("expected a EventDataNewBlock, got %T. Wrong subscription channel?",
				msg.Data()))
		}
		if blockEvent.Block.Height != height {
			t.Fatalf("expected height %v, got %v", height, blockEvent.Block.Height)
			panic(fmt.Sprintf("expected height %v, got %v", height, blockEvent.Block.Height))
		}
	}
}

func ensureNewBlockHeader(t *testing.T, blockCh <-chan tmpubsub.Message, height int64, blockHash tmbytes.HexBytes) {
	t.Helper()
func ensureNewBlockHeader(blockCh <-chan tmpubsub.Message, height int64, blockHash tmbytes.HexBytes) {
	select {
	case <-time.After(ensureTimeout):
		t.Fatalf("Timeout expired while waiting for NewBlockHeader event")
		panic("Timeout expired while waiting for NewBlockHeader event")
	case msg := <-blockCh:
		blockHeaderEvent, ok := msg.Data().(types.EventDataNewBlockHeader)
		if !ok {
			t.Fatalf("expected a EventDataNewBlockHeader, got %T. Wrong subscription channel?",
				msg.Data())
			panic(fmt.Sprintf("expected a EventDataNewBlockHeader, got %T. Wrong subscription channel?",
				msg.Data()))
		}
		if blockHeaderEvent.Header.Height != height {
			t.Fatalf("expected height %v, got %v", height, blockHeaderEvent.Header.Height)
			panic(fmt.Sprintf("expected height %v, got %v", height, blockHeaderEvent.Header.Height))
		}
		if !bytes.Equal(blockHeaderEvent.Header.Hash(), blockHash) {
			t.Fatalf("expected header %X, got %X", blockHash, blockHeaderEvent.Header.Hash())
			panic(fmt.Sprintf("expected header %X, got %X", blockHash, blockHeaderEvent.Header.Hash()))
		}
	}
}

func ensureLock(t *testing.T, lockCh <-chan tmpubsub.Message, height int64, round int32) {
	t.Helper()
	ensureNewEvent(t, lockCh, height, round, ensureTimeout,
		"Timeout expired while waiting for LockValue event")
func ensureNewUnlock(unlockCh <-chan tmpubsub.Message, height int64, round int32) {
	ensureNewEvent(unlockCh, height, round, ensureTimeout,
		"Timeout expired while waiting for NewUnlock event")
}

func ensureRelock(t *testing.T, relockCh <-chan tmpubsub.Message, height int64, round int32) {
	t.Helper()
	ensureNewEvent(t, relockCh, height, round, ensureTimeout,
		"Timeout expired while waiting for RelockValue event")
}

func ensureProposal(t *testing.T, proposalCh <-chan tmpubsub.Message, height int64, round int32, propID types.BlockID) {
	t.Helper()
func ensureProposal(proposalCh <-chan tmpubsub.Message, height int64, round int32, propID types.BlockID) {
	select {
	case <-time.After(ensureTimeout):
		t.Fatalf("Timeout expired while waiting for NewProposal event")
		panic("Timeout expired while waiting for NewProposal event")
	case msg := <-proposalCh:
		proposalEvent, ok := msg.Data().(types.EventDataCompleteProposal)
		if !ok {
			t.Fatalf("expected a EventDataCompleteProposal, got %T. Wrong subscription channel?",
				msg.Data())
			panic(fmt.Sprintf("expected a EventDataCompleteProposal, got %T. Wrong subscription channel?",
				msg.Data()))
		}
		if proposalEvent.Height != height {
			t.Fatalf("expected height %v, got %v", height, proposalEvent.Height)
			panic(fmt.Sprintf("expected height %v, got %v", height, proposalEvent.Height))
		}
		if proposalEvent.Round != round {
			t.Fatalf("expected round %v, got %v", round, proposalEvent.Round)
			panic(fmt.Sprintf("expected round %v, got %v", round, proposalEvent.Round))
		}
		if !proposalEvent.BlockID.Equals(propID) {
			t.Fatalf("Proposed block does not match expected block (%v != %v)", proposalEvent.BlockID, propID)
			panic(fmt.Sprintf("Proposed block does not match expected block (%v != %v)", proposalEvent.BlockID, propID))
		}
	}
}

func ensurePrecommit(t *testing.T, voteCh <-chan tmpubsub.Message, height int64, round int32) {
	t.Helper()
	ensureVote(t, voteCh, height, round, tmproto.PrecommitType)
func ensurePrecommit(voteCh <-chan tmpubsub.Message, height int64, round int32) {
	ensureVote(voteCh, height, round, tmproto.PrecommitType)
}

func ensurePrevote(t *testing.T, voteCh <-chan tmpubsub.Message, height int64, round int32) {
	t.Helper()
	ensureVote(t, voteCh, height, round, tmproto.PrevoteType)
func ensurePrevote(voteCh <-chan tmpubsub.Message, height int64, round int32) {
	ensureVote(voteCh, height, round, tmproto.PrevoteType)
}

func ensureVote(t *testing.T, voteCh <-chan tmpubsub.Message, height int64, round int32,
func ensureVote(voteCh <-chan tmpubsub.Message, height int64, round int32,
	voteType tmproto.SignedMsgType) {
	t.Helper()
	select {
	case <-time.After(ensureTimeout):
		t.Fatalf("Timeout expired while waiting for NewVote event")
		panic("Timeout expired while waiting for NewVote event")
	case msg := <-voteCh:
		voteEvent, ok := msg.Data().(types.EventDataVote)
		if !ok {
			t.Fatalf("expected a EventDataVote, got %T. Wrong subscription channel?",
				msg.Data())
			panic(fmt.Sprintf("expected a EventDataVote, got %T. Wrong subscription channel?",
				msg.Data()))
		}
		vote := voteEvent.Vote
		if vote.Height != height {
			t.Fatalf("expected height %v, got %v", height, vote.Height)
			panic(fmt.Sprintf("expected height %v, got %v", height, vote.Height))
		}
		if vote.Round != round {
			t.Fatalf("expected round %v, got %v", round, vote.Round)
			panic(fmt.Sprintf("expected round %v, got %v", round, vote.Round))
		}
		if vote.Type != voteType {
			t.Fatalf("expected type %v, got %v", voteType, vote.Type)
			panic(fmt.Sprintf("expected type %v, got %v", voteType, vote.Type))
		}
	}
}

func ensurePrecommitTimeout(t *testing.T, ch <-chan tmpubsub.Message) {
	t.Helper()
func ensurePrecommitTimeout(ch <-chan tmpubsub.Message) {
	select {
	case <-time.After(ensureTimeout):
		t.Fatalf("Timeout expired while waiting for the Precommit to Timeout")
		panic("Timeout expired while waiting for the Precommit to Timeout")
	case <-ch:
	}
}

func ensureNewEventOnChannel(t *testing.T, ch <-chan tmpubsub.Message) {
	t.Helper()
func ensureNewEventOnChannel(ch <-chan tmpubsub.Message) {
	select {
	case <-time.After(ensureTimeout):
		t.Fatalf("Timeout expired while waiting for new activity on the channel")
		panic("Timeout expired while waiting for new activity on the channel")
	case <-ch:
	}
}
@@ -737,7 +711,6 @@ func randConsensusState(
	appFunc func() abci.Application,
	configOpts ...func(*cfg.Config),
) ([]*State, cleanupFunc) {
	t.Helper()

	genDoc, privVals := factory.RandGenesisDoc(config, nValidators, false, 30)
	css := make([]*State, nValidators)
@@ -758,7 +731,7 @@ func randConsensusState(
			opt(thisConfig)
		}

		ensureDir(t, filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
		ensureDir(filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal

		app := appFunc()

@@ -786,7 +759,6 @@ func randConsensusState(

// nPeers = nValidators + nNotValidator
func randConsensusNetWithPeers(
	t *testing.T,
	config *cfg.Config,
	nValidators,
	nPeers int,
@@ -796,7 +768,6 @@ func randConsensusNetWithPeers(
) ([]*State, *types.GenesisDoc, *cfg.Config, cleanupFunc) {
	genDoc, privVals := factory.RandGenesisDoc(config, nValidators, false, testMinPower)
	css := make([]*State, nPeers)
	t.Helper()
	logger := consensusLogger()

	var peer0Config *cfg.Config
@@ -805,7 +776,7 @@ func randConsensusNetWithPeers(
		state, _ := sm.MakeGenesisState(genDoc)
		thisConfig := ResetConfig(fmt.Sprintf("%s_%d", testName, i))
		configRootDirs = append(configRootDirs, thisConfig.RootDir)
		ensureDir(t, filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
		ensureDir(filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
		if i == 0 {
			peer0Config = thisConfig
		}
@@ -815,16 +786,16 @@ func randConsensusNetWithPeers(
		} else {
			tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
			if err != nil {
				t.Fatalf("error creating temp file for validator key: %s", err)
				panic(err)
			}
			tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
			if err != nil {
				t.Fatalf("error loading validator state: %s", err)
				panic(err)
			}

			privVal, err = privval.GenFilePV(tempKeyFile.Name(), tempStateFile.Name(), "")
			if err != nil {
				t.Fatalf("error generating validator key: %s", err)
				panic(err)
			}
		}
@@ -40,12 +40,12 @@ func TestMempoolNoProgressUntilTxsAvailable(t *testing.T) {
	newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
	startTestRound(cs, height, round)

	ensureNewEventOnChannel(t, newBlockCh) // first block gets committed
	ensureNoNewEventOnChannel(t, newBlockCh)
	ensureNewEventOnChannel(newBlockCh) // first block gets committed
	ensureNoNewEventOnChannel(newBlockCh)
	deliverTxsRange(cs, 0, 1)
	ensureNewEventOnChannel(t, newBlockCh) // commit txs
	ensureNewEventOnChannel(t, newBlockCh) // commit updated app hash
	ensureNoNewEventOnChannel(t, newBlockCh)
	ensureNewEventOnChannel(newBlockCh) // commit txs
	ensureNewEventOnChannel(newBlockCh) // commit updated app hash
	ensureNoNewEventOnChannel(newBlockCh)
}

func TestMempoolProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
@@ -63,9 +63,9 @@ func TestMempoolProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
	newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
	startTestRound(cs, cs.Height, cs.Round)

	ensureNewEventOnChannel(t, newBlockCh) // first block gets committed
	ensureNoNewEventOnChannel(t, newBlockCh) // then we dont make a block ...
	ensureNewEventOnChannel(t, newBlockCh) // until the CreateEmptyBlocksInterval has passed
	ensureNewEventOnChannel(newBlockCh) // first block gets committed
	ensureNoNewEventOnChannel(newBlockCh) // then we dont make a block ...
	ensureNewEventOnChannel(newBlockCh) // until the CreateEmptyBlocksInterval has passed
}

func TestMempoolProgressInHigherRound(t *testing.T) {
@@ -93,19 +93,19 @@ func TestMempoolProgressInHigherRound(t *testing.T) {
	}
	startTestRound(cs, height, round)

	ensureNewRound(t, newRoundCh, height, round) // first round at first height
	ensureNewEventOnChannel(t, newBlockCh) // first block gets committed
	ensureNewRound(newRoundCh, height, round) // first round at first height
	ensureNewEventOnChannel(newBlockCh) // first block gets committed

	height++ // moving to the next height
	round = 0

	ensureNewRound(t, newRoundCh, height, round) // first round at next height
	deliverTxsRange(cs, 0, 1) // we deliver txs, but dont set a proposal so we get the next round
	ensureNewTimeout(t, timeoutCh, height, round, cs.config.TimeoutPropose.Nanoseconds())
	ensureNewRound(newRoundCh, height, round) // first round at next height
	deliverTxsRange(cs, 0, 1) // we deliver txs, but dont set a proposal so we get the next round
	ensureNewTimeout(timeoutCh, height, round, cs.config.TimeoutPropose.Nanoseconds())

	round++ // moving to the next round
	ensureNewRound(t, newRoundCh, height, round) // wait for the next round
	ensureNewEventOnChannel(t, newBlockCh) // now we can commit the block
	round++ // moving to the next round
	ensureNewRound(newRoundCh, height, round) // wait for the next round
	ensureNewEventOnChannel(newBlockCh) // now we can commit the block
}

func deliverTxsRange(cs *State, start, end int) {

@@ -336,7 +336,7 @@ func TestReactorWithEvidence(t *testing.T) {

	defer os.RemoveAll(thisConfig.RootDir)

	ensureDir(t, path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
	ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
	app := appFunc()
	vals := types.TM2PB.ValidatorUpdates(state.Validators)
	app.InitChain(abci.RequestInitChain{Validators: vals})
@@ -627,7 +627,6 @@ func TestReactorValidatorSetChanges(t *testing.T) {
	nPeers := 7
	nVals := 4
	states, _, _, cleanup := randConsensusNetWithPeers(
		t,
		config,
		nVals,
		nPeers,
@@ -58,7 +58,7 @@ func startNewStateAndWaitForBlock(t *testing.T, consensusReplayConfig *cfg.Confi
	logger := log.TestingLogger()
	state, err := sm.MakeGenesisStateFromFile(consensusReplayConfig.GenesisFile())
	require.NoError(t, err)
	privValidator := loadPrivValidator(t, consensusReplayConfig)
	privValidator := loadPrivValidator(consensusReplayConfig)
	blockStore := store.NewBlockStore(dbm.NewMemDB())
	cs := newStateWithConfigAndBlockStore(
		consensusReplayConfig,
@@ -154,7 +154,7 @@ LOOP:
	blockStore := store.NewBlockStore(blockDB)
	state, err := sm.MakeGenesisStateFromFile(consensusReplayConfig.GenesisFile())
	require.NoError(t, err)
	privValidator := loadPrivValidator(t, consensusReplayConfig)
	privValidator := loadPrivValidator(consensusReplayConfig)
	cs := newStateWithConfigAndBlockStore(
		consensusReplayConfig,
		state,
@@ -321,7 +321,6 @@ func setupSimulator(t *testing.T) *simulatorTestSuite {
	nVals := 4

	css, genDoc, config, cleanup := randConsensusNetWithPeers(
		t,
		config,
		nVals,
		nPeers,
@@ -346,15 +345,15 @@ func setupSimulator(t *testing.T) *simulatorTestSuite {
	// start the machine
	startTestRound(css[0], height, round)
	incrementHeight(vss...)
	ensureNewRound(t, newRoundCh, height, 0)
	ensureNewProposal(t, proposalCh, height, round)
	ensureNewRound(newRoundCh, height, 0)
	ensureNewProposal(proposalCh, height, round)
	rs := css[0].GetRoundState()

	signAddVotes(sim.Config, css[0], tmproto.PrecommitType,
		rs.ProposalBlock.Hash(), rs.ProposalBlockParts.Header(),
		vss[1:nVals]...)

	ensureNewRound(t, newRoundCh, height+1, 0)
	ensureNewRound(newRoundCh, height+1, 0)

	// HEIGHT 2
	height++
@@ -381,12 +380,12 @@ func setupSimulator(t *testing.T) *simulatorTestSuite {
	if err := css[0].SetProposalAndBlock(proposal, propBlock, propBlockParts, "some peer"); err != nil {
		t.Fatal(err)
	}
	ensureNewProposal(t, proposalCh, height, round)
	ensureNewProposal(proposalCh, height, round)
	rs = css[0].GetRoundState()
	signAddVotes(sim.Config, css[0], tmproto.PrecommitType,
		rs.ProposalBlock.Hash(), rs.ProposalBlockParts.Header(),
		vss[1:nVals]...)
	ensureNewRound(t, newRoundCh, height+1, 0)
	ensureNewRound(newRoundCh, height+1, 0)

	// HEIGHT 3
	height++
@@ -413,12 +412,12 @@ func setupSimulator(t *testing.T) *simulatorTestSuite {
	if err := css[0].SetProposalAndBlock(proposal, propBlock, propBlockParts, "some peer"); err != nil {
		t.Fatal(err)
	}
	ensureNewProposal(t, proposalCh, height, round)
	ensureNewProposal(proposalCh, height, round)
	rs = css[0].GetRoundState()
	signAddVotes(sim.Config, css[0], tmproto.PrecommitType,
		rs.ProposalBlock.Hash(), rs.ProposalBlockParts.Header(),
		vss[1:nVals]...)
	ensureNewRound(t, newRoundCh, height+1, 0)
	ensureNewRound(newRoundCh, height+1, 0)

	// HEIGHT 4
	height++
@@ -472,7 +471,7 @@ func setupSimulator(t *testing.T) *simulatorTestSuite {
	if err := css[0].SetProposalAndBlock(proposal, propBlock, propBlockParts, "some peer"); err != nil {
		t.Fatal(err)
	}
	ensureNewProposal(t, proposalCh, height, round)
	ensureNewProposal(proposalCh, height, round)

	removeValidatorTx2 := kvstore.MakeValSetChangeTx(newVal2ABCI, 0)
	err = assertMempool(css[0].txNotifier).CheckTx(context.Background(), removeValidatorTx2, nil, mempl.TxInfo{})
@@ -488,7 +487,7 @@ func setupSimulator(t *testing.T) *simulatorTestSuite {
			rs.ProposalBlockParts.Header(), newVss[i])
	}

	ensureNewRound(t, newRoundCh, height+1, 0)
	ensureNewRound(newRoundCh, height+1, 0)

	// HEIGHT 5
	height++
@@ -498,7 +497,7 @@ func setupSimulator(t *testing.T) *simulatorTestSuite {
	newVss[newVssIdx].VotingPower = 25
	sort.Sort(ValidatorStubsByPower(newVss))
	selfIndex = valIndexFn(0)
	ensureNewProposal(t, proposalCh, height, round)
	ensureNewProposal(proposalCh, height, round)
	rs = css[0].GetRoundState()
	for i := 0; i < nVals+1; i++ {
		if i == selfIndex {
@@ -508,7 +507,7 @@ func setupSimulator(t *testing.T) *simulatorTestSuite {
			tmproto.PrecommitType, rs.ProposalBlock.Hash(),
			rs.ProposalBlockParts.Header(), newVss[i])
	}
	ensureNewRound(t, newRoundCh, height+1, 0)
	ensureNewRound(newRoundCh, height+1, 0)

	// HEIGHT 6
	height++
@@ -535,7 +534,7 @@ func setupSimulator(t *testing.T) *simulatorTestSuite {
	if err := css[0].SetProposalAndBlock(proposal, propBlock, propBlockParts, "some peer"); err != nil {
		t.Fatal(err)
	}
	ensureNewProposal(t, proposalCh, height, round)
	ensureNewProposal(proposalCh, height, round)
	rs = css[0].GetRoundState()
	for i := 0; i < nVals+3; i++ {
		if i == selfIndex {
@@ -545,7 +544,7 @@ func setupSimulator(t *testing.T) *simulatorTestSuite {
			tmproto.PrecommitType, rs.ProposalBlock.Hash(),
			rs.ProposalBlockParts.Header(), newVss[i])
	}
	ensureNewRound(t, newRoundCh, height+1, 0)
	ensureNewRound(newRoundCh, height+1, 0)

	sim.Chain = make([]*types.Block, 0)
	sim.Commits = make([]*types.Commit, 0)
@@ -137,7 +137,7 @@ type State struct {
|
||||
done chan struct{}
|
||||
|
||||
// synchronous pubsub between consensus state and reactor.
|
||||
// state only emits EventNewRoundStep, EventValidBlock, and EventVote
|
||||
// state only emits EventNewRoundStep and EventVote
|
||||
evsw tmevents.EventSwitch
|
||||
|
||||
// for reporting metrics
|
||||
@@ -1361,6 +1361,7 @@ func (cs *State) enterPrevoteWait(height int64, round int32) {
// Enter: `timeoutPrecommit` after any +2/3 precommits.
// Enter: +2/3 precommits for block or nil.
// Lock & precommit the ProposalBlock if we have enough prevotes for it (a POL in this round)
// else, unlock an existing lock and precommit nil if +2/3 of prevotes were nil,
// else, precommit nil otherwise.
func (cs *State) enterPrecommit(height int64, round int32) {
logger := cs.Logger.With("height", height, "round", round)
@@ -1407,9 +1408,21 @@ func (cs *State) enterPrecommit(height int64, round int32) {
panic(fmt.Sprintf("this POLRound should be %v but got %v", round, polRound))
}

// +2/3 prevoted nil. Precommit nil.
if blockID.IsNil() {
logger.Debug("precommit step; +2/3 prevoted for nil")
// +2/3 prevoted nil. Unlock and precommit nil.
if len(blockID.Hash) == 0 {
if cs.LockedBlock == nil {
logger.Debug("precommit step; +2/3 prevoted for nil")
} else {
logger.Debug("precommit step; +2/3 prevoted for nil; unlocking")
cs.LockedRound = -1
cs.LockedBlock = nil
cs.LockedBlockParts = nil

if err := cs.eventBus.PublishEventUnlock(cs.RoundStateEvent()); err != nil {
logger.Error("failed publishing event unlock", "err", err)
}
}

cs.signAddVote(tmproto.PrecommitType, nil, types.PartSetHeader{})
return
}
@@ -1429,9 +1442,7 @@ func (cs *State) enterPrecommit(height int64, round int32) {
return
}

// If greater than 2/3 of the voting power on the network prevoted for
// the proposed block, update our locked block to this block and issue a
// precommit vote for it.
// If +2/3 prevoted for proposal block, stage and precommit it
if cs.ProposalBlock.HashesTo(blockID.Hash) {
logger.Debug("precommit step; +2/3 prevoted proposal block; locking", "hash", blockID.Hash)

@@ -1453,14 +1464,23 @@ func (cs *State) enterPrecommit(height int64, round int32) {
}

// There was a polka in this round for a block we don't have.
// Fetch that block, and precommit nil.
// Fetch that block, unlock, and precommit nil.
// The +2/3 prevotes for this round is the POL for our unlock.
logger.Debug("precommit step; +2/3 prevotes for a block we do not have; voting nil", "block_id", blockID)

cs.LockedRound = -1
cs.LockedBlock = nil
cs.LockedBlockParts = nil

if !cs.ProposalBlockParts.HasHeader(blockID.PartSetHeader) {
cs.ProposalBlock = nil
cs.ProposalBlockParts = types.NewPartSetFromHeader(blockID.PartSetHeader)
}

if err := cs.eventBus.PublishEventUnlock(cs.RoundStateEvent()); err != nil {
logger.Error("failed publishing event unlock", "err", err)
}

cs.signAddVote(tmproto.PrecommitType, nil, types.PartSetHeader{})
}

@@ -1568,7 +1588,7 @@ func (cs *State) tryFinalizeCommit(height int64) {
}

blockID, ok := cs.Votes.Precommits(cs.CommitRound).TwoThirdsMajority()
if !ok || blockID.IsNil() {
if !ok || len(blockID.Hash) == 0 {
logger.Error("failed attempt to finalize commit; there was no +2/3 majority or +2/3 was for nil")
return
}
@@ -1901,7 +1921,7 @@ func (cs *State) addProposalBlockPart(msg *BlockPartMessage, peerID types.NodeID
// Update Valid* if we can.
prevotes := cs.Votes.Prevotes(cs.Round)
blockID, hasTwoThirds := prevotes.TwoThirdsMajority()
if hasTwoThirds && !blockID.IsNil() && (cs.ValidRound < cs.Round) {
if hasTwoThirds && !blockID.IsZero() && (cs.ValidRound < cs.Round) {
if cs.ProposalBlock.HashesTo(blockID.Hash) {
cs.Logger.Debug(
"updating valid block to new proposal block",
@@ -2050,13 +2070,33 @@ func (cs *State) addVote(vote *types.Vote, peerID types.NodeID) (added bool, err
prevotes := cs.Votes.Prevotes(vote.Round)
cs.Logger.Debug("added vote to prevote", "vote", vote, "prevotes", prevotes.StringShort())

// Check to see if >2/3 of the voting power on the network voted for any non-nil block.
if blockID, ok := prevotes.TwoThirdsMajority(); ok && !blockID.IsNil() {
// Greater than 2/3 of the voting power on the network voted for some
// non-nil block
// If +2/3 prevotes for a block or nil for *any* round:
if blockID, ok := prevotes.TwoThirdsMajority(); ok {
// There was a polka!
// If we're locked but this is a recent polka, unlock.
// If it matches our ProposalBlock, update the ValidBlock

// Unlock if `cs.LockedRound < vote.Round <= cs.Round`
// NOTE: If vote.Round > cs.Round, we'll deal with it when we get to vote.Round
if (cs.LockedBlock != nil) &&
(cs.LockedRound < vote.Round) &&
(vote.Round <= cs.Round) &&
!cs.LockedBlock.HashesTo(blockID.Hash) {

cs.Logger.Debug("unlocking because of POL", "locked_round", cs.LockedRound, "pol_round", vote.Round)

cs.LockedRound = -1
cs.LockedBlock = nil
cs.LockedBlockParts = nil

if err := cs.eventBus.PublishEventUnlock(cs.RoundStateEvent()); err != nil {
return added, err
}
}

// Update Valid* if we can.
if cs.ValidRound < vote.Round && vote.Round == cs.Round {
// NOTE: our proposal block may be nil or not what received a polka.
if len(blockID.Hash) != 0 && (cs.ValidRound < vote.Round) && (vote.Round == cs.Round) {
if cs.ProposalBlock.HashesTo(blockID.Hash) {
cs.Logger.Debug("updating valid block because of POL", "valid_round", cs.ValidRound, "pol_round", vote.Round)
cs.ValidRound = vote.Round
@@ -2092,7 +2132,7 @@ func (cs *State) addVote(vote *types.Vote, peerID types.NodeID) (added bool, err

case cs.Round == vote.Round && cstypes.RoundStepPrevote <= cs.Step: // current round
blockID, ok := prevotes.TwoThirdsMajority()
if ok && (cs.isProposalComplete() || blockID.IsNil()) {
if ok && (cs.isProposalComplete() || len(blockID.Hash) == 0) {
cs.enterPrecommit(height, vote.Round)
} else if prevotes.HasTwoThirdsAny() {
cs.enterPrevoteWait(height, vote.Round)
@@ -2120,7 +2160,7 @@ func (cs *State) addVote(vote *types.Vote, peerID types.NodeID) (added bool, err
cs.enterNewRound(height, vote.Round)
cs.enterPrecommit(height, vote.Round)

if !blockID.IsNil() {
if len(blockID.Hash) != 0 {
cs.enterCommit(height, vote.Round)
if cs.config.SkipTimeoutCommit && precommits.HasAll() {
cs.enterNewRound(cs.Height, 0)
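Note: the hunks above restore the POL-based unlock path in the consensus state machine. As a rough illustration of the rule being reinstated, here is a minimal, self-contained sketch; `lockState` and `maybeUnlock` are invented names for this example, not the actual `consensus.State` API.

```go
package main

import (
	"bytes"
	"fmt"
)

// lockState is a trimmed stand-in for the locked-block fields of the
// consensus state.
type lockState struct {
	lockedRound int32
	lockedHash  []byte // nil when not locked
}

// maybeUnlock clears the lock when a polka (+2/3 prevotes) for a different
// block appears in a round r with lockedRound < r <= currentRound, mirroring
// the condition in addVote above.
func maybeUnlock(s *lockState, polkaHash []byte, polkaRound, currentRound int32) bool {
	if s.lockedHash == nil {
		return false
	}
	if s.lockedRound < polkaRound && polkaRound <= currentRound &&
		!bytes.Equal(s.lockedHash, polkaHash) {
		s.lockedRound = -1
		s.lockedHash = nil
		return true
	}
	return false
}

func main() {
	s := &lockState{lockedRound: 1, lockedHash: []byte("blockA")}
	unlocked := maybeUnlock(s, []byte("blockB"), 2, 2)
	fmt.Println("unlocked:", unlocked) // true: a later polka for a different block releases the lock
}
```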
(File diff suppressed because it is too large.)
@@ -55,7 +55,7 @@ func MakeHeader(h *types.Header) (*types.Header, error) {
if h.Height == 0 {
h.Height = 1
}
if h.LastBlockID.IsNil() {
if h.LastBlockID.IsZero() {
h.LastBlockID = MakeBlockID()
}
if h.ChainID == "" {

@@ -379,7 +379,6 @@ func (c *Client) Update(ctx context.Context, now time.Time) (*types.LightBlock,
return nil, err
}

// If there is a new light block then verify it
if latestBlock.Height > lastTrustedHeight {
err = c.verifyLightBlock(ctx, latestBlock, now)
if err != nil {
@@ -389,8 +388,7 @@ func (c *Client) Update(ctx context.Context, now time.Time) (*types.LightBlock,
return latestBlock, nil
}

// else return the latestTrustedBlock
return c.latestTrustedBlock, nil
return nil, nil
}

// VerifyLightBlockAtHeight fetches the light block at the given height

@@ -644,7 +644,7 @@ func TestClientReplacesPrimaryWithWitnessIfPrimaryIsUnavailable(t *testing.T) {
chainID,
trustOptions,
mockDeadNode,
[]provider.Provider{mockDeadNode, mockFullNode},
[]provider.Provider{mockFullNode, mockFullNode},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
@@ -663,32 +663,6 @@ func TestClientReplacesPrimaryWithWitnessIfPrimaryIsUnavailable(t *testing.T) {
mockFullNode.AssertExpectations(t)
}

func TestClientReplacesPrimaryWithWitnessIfPrimaryDoesntHaveBlock(t *testing.T) {
mockFullNode := &provider_mocks.Provider{}
mockFullNode.On("LightBlock", mock.Anything, mock.Anything).Return(l1, nil)

mockDeadNode := &provider_mocks.Provider{}
mockDeadNode.On("LightBlock", mock.Anything, mock.Anything).Return(nil, provider.ErrLightBlockNotFound)
c, err := light.NewClient(
ctx,
chainID,
trustOptions,
mockDeadNode,
[]provider.Provider{mockDeadNode, mockFullNode},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
require.NoError(t, err)
_, err = c.Update(ctx, bTime.Add(2*time.Hour))
require.NoError(t, err)

// we should still have the dead node as a witness because it
// hasn't repeatedly been unresponsive yet
assert.Equal(t, 2, len(c.Witnesses()))
mockDeadNode.AssertExpectations(t)
mockFullNode.AssertExpectations(t)
}

func TestClient_BackwardsVerification(t *testing.T) {
{
headers, vals, _ := genLightBlocksWithKeys(chainID, 9, 3, 0, bTime)

@@ -702,11 +702,7 @@ func (n *nodeImpl) OnStart() error {
n.Logger.Info("starting state sync")
state, err := n.stateSyncReactor.Sync(context.TODO())
if err != nil {
n.Logger.Error("state sync failed; shutting down this node", "err", err)
// stop the node
if err := n.Stop(); err != nil {
n.Logger.Error("failed to shut down node", "err", err)
}
n.Logger.Error("state sync failed", "err", err)
return
}
@@ -34,7 +34,6 @@ type ConsensusParams struct {
Evidence *EvidenceParams `protobuf:"bytes,2,opt,name=evidence,proto3" json:"evidence,omitempty"`
Validator *ValidatorParams `protobuf:"bytes,3,opt,name=validator,proto3" json:"validator,omitempty"`
Version *VersionParams `protobuf:"bytes,4,opt,name=version,proto3" json:"version,omitempty"`
Timestamp *TimestampParams `protobuf:"bytes,5,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
}

func (m *ConsensusParams) Reset() { *m = ConsensusParams{} }
@@ -98,13 +97,6 @@ func (m *ConsensusParams) GetVersion() *VersionParams {
return nil
}

func (m *ConsensusParams) GetTimestamp() *TimestampParams {
if m != nil {
return m.Timestamp
}
return nil
}

// BlockParams contains limits on the block size.
type BlockParams struct {
// Max block size, in bytes.
@@ -326,66 +318,6 @@ func (m *VersionParams) GetAppVersion() uint64 {
return 0
}

type TimestampParams struct {
Accuracy time.Duration `protobuf:"bytes,1,opt,name=accuracy,proto3,stdduration" json:"accuracy"`
Precision time.Duration `protobuf:"bytes,2,opt,name=precision,proto3,stdduration" json:"precision"`
MessageDelay time.Duration `protobuf:"bytes,3,opt,name=message_delay,json=messageDelay,proto3,stdduration" json:"message_delay"`
}

func (m *TimestampParams) Reset() { *m = TimestampParams{} }
func (m *TimestampParams) String() string { return proto.CompactTextString(m) }
func (*TimestampParams) ProtoMessage() {}
func (*TimestampParams) Descriptor() ([]byte, []int) {
return fileDescriptor_e12598271a686f57, []int{5}
}
func (m *TimestampParams) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *TimestampParams) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_TimestampParams.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *TimestampParams) XXX_Merge(src proto.Message) {
xxx_messageInfo_TimestampParams.Merge(m, src)
}
func (m *TimestampParams) XXX_Size() int {
return m.Size()
}
func (m *TimestampParams) XXX_DiscardUnknown() {
xxx_messageInfo_TimestampParams.DiscardUnknown(m)
}

var xxx_messageInfo_TimestampParams proto.InternalMessageInfo

func (m *TimestampParams) GetAccuracy() time.Duration {
if m != nil {
return m.Accuracy
}
return 0
}

func (m *TimestampParams) GetPrecision() time.Duration {
if m != nil {
return m.Precision
}
return 0
}

func (m *TimestampParams) GetMessageDelay() time.Duration {
if m != nil {
return m.MessageDelay
}
return 0
}

// HashedParams is a subset of ConsensusParams.
//
// It is hashed into the Header.ConsensusHash.
@@ -398,7 +330,7 @@ func (m *HashedParams) Reset() { *m = HashedParams{} }
func (m *HashedParams) String() string { return proto.CompactTextString(m) }
func (*HashedParams) ProtoMessage() {}
func (*HashedParams) Descriptor() ([]byte, []int) {
return fileDescriptor_e12598271a686f57, []int{6}
return fileDescriptor_e12598271a686f57, []int{5}
}
func (m *HashedParams) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -447,51 +379,45 @@ func init() {
proto.RegisterType((*EvidenceParams)(nil), "tendermint.types.EvidenceParams")
proto.RegisterType((*ValidatorParams)(nil), "tendermint.types.ValidatorParams")
proto.RegisterType((*VersionParams)(nil), "tendermint.types.VersionParams")
proto.RegisterType((*TimestampParams)(nil), "tendermint.types.TimestampParams")
proto.RegisterType((*HashedParams)(nil), "tendermint.types.HashedParams")
}

func init() { proto.RegisterFile("tendermint/types/params.proto", fileDescriptor_e12598271a686f57) }

var fileDescriptor_e12598271a686f57 = []byte{
// 577 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x94, 0x4f, 0x6b, 0xd4, 0x40,
0x18, 0xc6, 0x37, 0xdd, 0xfe, 0xd9, 0x7d, 0xb7, 0xdb, 0x2d, 0x83, 0x60, 0xac, 0x34, 0x5b, 0x73,
0x90, 0x82, 0x90, 0x88, 0x45, 0x44, 0x10, 0x4a, 0xb7, 0x95, 0x16, 0xa4, 0x22, 0xa1, 0x7a, 0xe8,
0x25, 0x4c, 0xb2, 0x63, 0x1a, 0xba, 0x93, 0x19, 0x32, 0x49, 0xd9, 0x7c, 0x0b, 0x8f, 0x7e, 0x04,
0xfd, 0x18, 0xde, 0x7a, 0xec, 0xd1, 0x93, 0x95, 0xed, 0x17, 0x91, 0x4c, 0x66, 0x36, 0xdd, 0xad,
0x42, 0xbd, 0x25, 0xf3, 0x3e, 0xbf, 0x79, 0x79, 0x9f, 0xf7, 0x61, 0x60, 0x33, 0x23, 0xc9, 0x90,
0xa4, 0x34, 0x4e, 0x32, 0x37, 0x2b, 0x38, 0x11, 0x2e, 0xc7, 0x29, 0xa6, 0xc2, 0xe1, 0x29, 0xcb,
0x18, 0x5a, 0xaf, 0xcb, 0x8e, 0x2c, 0x6f, 0x3c, 0x88, 0x58, 0xc4, 0x64, 0xd1, 0x2d, 0xbf, 0x2a,
0xdd, 0x86, 0x15, 0x31, 0x16, 0x8d, 0x88, 0x2b, 0xff, 0x82, 0xfc, 0xb3, 0x3b, 0xcc, 0x53, 0x9c,
0xc5, 0x2c, 0xa9, 0xea, 0xf6, 0x8f, 0x05, 0xe8, 0xed, 0xb3, 0x44, 0x90, 0x44, 0xe4, 0xe2, 0x83,
0xec, 0x80, 0x76, 0x60, 0x29, 0x18, 0xb1, 0xf0, 0xdc, 0x34, 0xb6, 0x8c, 0xed, 0xce, 0x8b, 0x4d,
0x67, 0xbe, 0x97, 0x33, 0x28, 0xcb, 0x95, 0xda, 0xab, 0xb4, 0xe8, 0x0d, 0xb4, 0xc8, 0x45, 0x3c,
0x24, 0x49, 0x48, 0xcc, 0x05, 0xc9, 0x6d, 0xdd, 0xe5, 0xde, 0x2a, 0x85, 0x42, 0xa7, 0x04, 0xda,
0x85, 0xf6, 0x05, 0x1e, 0xc5, 0x43, 0x9c, 0xb1, 0xd4, 0x6c, 0x4a, 0xfc, 0xc9, 0x5d, 0xfc, 0x93,
0x96, 0x28, 0xbe, 0x66, 0xd0, 0x6b, 0x58, 0xb9, 0x20, 0xa9, 0x88, 0x59, 0x62, 0x2e, 0x4a, 0xbc,
0xff, 0x17, 0xbc, 0x12, 0x28, 0x58, 0xeb, 0xcb, 0xde, 0x59, 0x4c, 0x89, 0xc8, 0x30, 0xe5, 0xe6,
0xd2, 0xbf, 0x7a, 0x9f, 0x68, 0x89, 0xee, 0x3d, 0x65, 0xec, 0x7d, 0xe8, 0xdc, 0x32, 0x04, 0x3d,
0x86, 0x36, 0xc5, 0x63, 0x3f, 0x28, 0x32, 0x22, 0xa4, 0x85, 0x4d, 0xaf, 0x45, 0xf1, 0x78, 0x50,
0xfe, 0xa3, 0x87, 0xb0, 0x52, 0x16, 0x23, 0x2c, 0xa4, 0x4b, 0x4d, 0x6f, 0x99, 0xe2, 0xf1, 0x21,
0x16, 0xf6, 0x77, 0x03, 0xd6, 0x66, 0xed, 0x41, 0xcf, 0x00, 0x95, 0x5a, 0x1c, 0x11, 0x3f, 0xc9,
0xa9, 0x2f, 0x7d, 0xd6, 0x37, 0xf6, 0x28, 0x1e, 0xef, 0x45, 0xe4, 0x7d, 0x4e, 0x65, 0x6b, 0x81,
0x8e, 0x61, 0x5d, 0x8b, 0xf5, 0x8a, 0xd5, 0x1e, 0x1e, 0x39, 0x55, 0x06, 0x1c, 0x9d, 0x01, 0xe7,
0x40, 0x09, 0x06, 0xad, 0xcb, 0x5f, 0xfd, 0xc6, 0xd7, 0xeb, 0xbe, 0xe1, 0xad, 0x55, 0xf7, 0xe9,
0xca, 0xec, 0x10, 0xcd, 0xd9, 0x21, 0xec, 0x97, 0xd0, 0x9b, 0x5b, 0x05, 0xb2, 0xa1, 0xcb, 0xf3,
0xc0, 0x3f, 0x27, 0x85, 0x2f, 0xfd, 0x32, 0x8d, 0xad, 0xe6, 0x76, 0xdb, 0xeb, 0xf0, 0x3c, 0x78,
0x47, 0x8a, 0x93, 0xf2, 0xc8, 0x7e, 0x0e, 0xdd, 0x99, 0x15, 0xa0, 0x3e, 0x74, 0x30, 0xe7, 0xbe,
0x5e, 0x5c, 0x39, 0xd9, 0xa2, 0x07, 0x98, 0x73, 0x25, 0xb3, 0xaf, 0x0d, 0xe8, 0xcd, 0x19, 0x8f,
0x76, 0xa1, 0x85, 0xc3, 0x30, 0x4f, 0x71, 0x58, 0xa8, 0x80, 0xde, 0x6b, 0xc0, 0x29, 0x84, 0xf6,
0xa0, 0xcd, 0x53, 0x12, 0xc6, 0xe2, 0x3f, 0x2d, 0xaa, 0x29, 0x74, 0x04, 0x5d, 0x4a, 0x84, 0x90,
0x66, 0x93, 0x11, 0x2e, 0x54, 0x64, 0xef, 0x75, 0xcd, 0xaa, 0x22, 0x0f, 0x4a, 0xd0, 0x3e, 0x85,
0xd5, 0x23, 0x2c, 0xce, 0xc8, 0x50, 0x4d, 0xf7, 0x14, 0x7a, 0x72, 0xcf, 0xfe, 0x7c, 0x84, 0xba,
0xf2, 0xf8, 0x58, 0xe7, 0xc8, 0x86, 0x6e, 0xad, 0xab, 0xd3, 0xd4, 0xd1, 0xaa, 0x43, 0x2c, 0x06,
0x1f, 0xbf, 0x4d, 0x2c, 0xe3, 0x72, 0x62, 0x19, 0x57, 0x13, 0xcb, 0xf8, 0x3d, 0xb1, 0x8c, 0x2f,
0x37, 0x56, 0xe3, 0xea, 0xc6, 0x6a, 0xfc, 0xbc, 0xb1, 0x1a, 0xa7, 0xaf, 0xa2, 0x38, 0x3b, 0xcb,
0x03, 0x27, 0x64, 0xd4, 0xbd, 0xfd, 0xd6, 0xd4, 0x9f, 0xd5, 0x63, 0x32, 0xff, 0x0e, 0x05, 0xcb,
0xf2, 0x7c, 0xe7, 0x4f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x6e, 0x64, 0xc1, 0x5d, 0xa2, 0x04, 0x00,
0x00,
// 498 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x6c, 0x93, 0xc1, 0x6a, 0xd4, 0x40,
0x1c, 0xc6, 0x77, 0x9a, 0xda, 0xee, 0xfe, 0xe3, 0x76, 0xcb, 0x20, 0x18, 0x2b, 0xcd, 0xae, 0x39,
0x48, 0x41, 0x48, 0xc4, 0x22, 0x22, 0x08, 0xe2, 0x56, 0xa9, 0x20, 0x15, 0x09, 0xea, 0xa1, 0x97,
0x30, 0xd9, 0x8c, 0x69, 0xe8, 0x4e, 0x66, 0xc8, 0x24, 0xcb, 0xee, 0xcd, 0x47, 0xf0, 0xe8, 0x23,
0xe8, 0x9b, 0xf4, 0xd8, 0xa3, 0x27, 0x95, 0xdd, 0x17, 0x91, 0x4c, 0x32, 0xa6, 0x9b, 0xf6, 0x36,
0x33, 0xdf, 0xef, 0x9b, 0xe1, 0xfb, 0x86, 0x3f, 0xec, 0xe7, 0x34, 0x8d, 0x68, 0xc6, 0x92, 0x34,
0xf7, 0xf2, 0x85, 0xa0, 0xd2, 0x13, 0x24, 0x23, 0x4c, 0xba, 0x22, 0xe3, 0x39, 0xc7, 0xbb, 0x8d,
0xec, 0x2a, 0x79, 0xef, 0x4e, 0xcc, 0x63, 0xae, 0x44, 0xaf, 0x5c, 0x55, 0xdc, 0x9e, 0x1d, 0x73,
0x1e, 0x4f, 0xa9, 0xa7, 0x76, 0x61, 0xf1, 0xc5, 0x8b, 0x8a, 0x8c, 0xe4, 0x09, 0x4f, 0x2b, 0xdd,
0xf9, 0xba, 0x01, 0x83, 0x23, 0x9e, 0x4a, 0x9a, 0xca, 0x42, 0x7e, 0x50, 0x2f, 0xe0, 0x43, 0xb8,
0x15, 0x4e, 0xf9, 0xe4, 0xdc, 0x42, 0x23, 0x74, 0x60, 0x3e, 0xd9, 0x77, 0xdb, 0x6f, 0xb9, 0xe3,
0x52, 0xae, 0x68, 0xbf, 0x62, 0xf1, 0x0b, 0xe8, 0xd2, 0x59, 0x12, 0xd1, 0x74, 0x42, 0xad, 0x0d,
0xe5, 0x1b, 0x5d, 0xf7, 0xbd, 0xa9, 0x89, 0xda, 0xfa, 0xdf, 0x81, 0x5f, 0x42, 0x6f, 0x46, 0xa6,
0x49, 0x44, 0x72, 0x9e, 0x59, 0x86, 0xb2, 0x3f, 0xb8, 0x6e, 0xff, 0xac, 0x91, 0xda, 0xdf, 0x78,
0xf0, 0x73, 0xd8, 0x9e, 0xd1, 0x4c, 0x26, 0x3c, 0xb5, 0x36, 0x95, 0x7d, 0x78, 0x83, 0xbd, 0x02,
0x6a, 0xb3, 0xe6, 0x9d, 0x23, 0x30, 0xaf, 0xe4, 0xc1, 0xf7, 0xa1, 0xc7, 0xc8, 0x3c, 0x08, 0x17,
0x39, 0x95, 0xaa, 0x01, 0xc3, 0xef, 0x32, 0x32, 0x1f, 0x97, 0x7b, 0x7c, 0x17, 0xb6, 0x4b, 0x31,
0x26, 0x52, 0x85, 0x34, 0xfc, 0x2d, 0x46, 0xe6, 0xc7, 0x44, 0x3a, 0x3f, 0x11, 0xec, 0xac, 0xa7,
0xc3, 0x8f, 0x00, 0x97, 0x2c, 0x89, 0x69, 0x90, 0x16, 0x2c, 0x50, 0x35, 0xe9, 0x1b, 0x07, 0x8c,
0xcc, 0x5f, 0xc5, 0xf4, 0x7d, 0xc1, 0xd4, 0xd3, 0x12, 0x9f, 0xc0, 0xae, 0x86, 0xf5, 0x0f, 0xd5,
0x35, 0xde, 0x73, 0xab, 0x2f, 0x74, 0xf5, 0x17, 0xba, 0xaf, 0x6b, 0x60, 0xdc, 0xbd, 0xf8, 0x3d,
0xec, 0x7c, 0xff, 0x33, 0x44, 0xfe, 0x4e, 0x75, 0x9f, 0x56, 0xd6, 0x43, 0x18, 0xeb, 0x21, 0x9c,
0xa7, 0x30, 0x68, 0x35, 0x89, 0x1d, 0xe8, 0x8b, 0x22, 0x0c, 0xce, 0xe9, 0x22, 0x50, 0x5d, 0x59,
0x68, 0x64, 0x1c, 0xf4, 0x7c, 0x53, 0x14, 0xe1, 0x3b, 0xba, 0xf8, 0x58, 0x1e, 0x39, 0x8f, 0xa1,
0xbf, 0xd6, 0x20, 0x1e, 0x82, 0x49, 0x84, 0x08, 0x74, 0xef, 0x65, 0xb2, 0x4d, 0x1f, 0x88, 0x10,
0x35, 0xe6, 0x9c, 0xc2, 0xed, 0xb7, 0x44, 0x9e, 0xd1, 0xa8, 0x36, 0x3c, 0x84, 0x81, 0x6a, 0x21,
0x68, 0x17, 0xdc, 0x57, 0xc7, 0x27, 0xba, 0x65, 0x07, 0xfa, 0x0d, 0xd7, 0x74, 0x6d, 0x6a, 0xea,
0x98, 0xc8, 0xf1, 0xa7, 0x1f, 0x4b, 0x1b, 0x5d, 0x2c, 0x6d, 0x74, 0xb9, 0xb4, 0xd1, 0xdf, 0xa5,
0x8d, 0xbe, 0xad, 0xec, 0xce, 0xe5, 0xca, 0xee, 0xfc, 0x5a, 0xd9, 0x9d, 0xd3, 0x67, 0x71, 0x92,
0x9f, 0x15, 0xa1, 0x3b, 0xe1, 0xcc, 0xbb, 0x3a, 0x48, 0xcd, 0xb2, 0x9a, 0x94, 0xf6, 0x90, 0x85,
0x5b, 0xea, 0xfc, 0xf0, 0x5f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x18, 0x54, 0x4f, 0xe1, 0x7f, 0x03,
0x00, 0x00,
}

func (this *ConsensusParams) Equal(that interface{}) bool {
@@ -525,9 +451,6 @@ func (this *ConsensusParams) Equal(that interface{}) bool {
if !this.Version.Equal(that1.Version) {
return false
}
if !this.Timestamp.Equal(that1.Timestamp) {
return false
}
return true
}
func (this *BlockParams) Equal(that interface{}) bool {
@@ -640,36 +563,6 @@ func (this *VersionParams) Equal(that interface{}) bool {
}
return true
}
func (this *TimestampParams) Equal(that interface{}) bool {
if that == nil {
return this == nil
}

that1, ok := that.(*TimestampParams)
if !ok {
that2, ok := that.(TimestampParams)
if ok {
that1 = &that2
} else {
return false
}
}
if that1 == nil {
return this == nil
} else if this == nil {
return false
}
if this.Accuracy != that1.Accuracy {
return false
}
if this.Precision != that1.Precision {
return false
}
if this.MessageDelay != that1.MessageDelay {
return false
}
return true
}
func (this *HashedParams) Equal(that interface{}) bool {
if that == nil {
return this == nil
@@ -717,18 +610,6 @@ func (m *ConsensusParams) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
if m.Timestamp != nil {
{
size, err := m.Timestamp.MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintParams(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x2a
}
if m.Version != nil {
{
size, err := m.Version.MarshalToSizedBuffer(dAtA[:i])
@@ -838,12 +719,12 @@ func (m *EvidenceParams) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i--
dAtA[i] = 0x18
}
n6, err6 := github_com_gogo_protobuf_types.StdDurationMarshalTo(m.MaxAgeDuration, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdDuration(m.MaxAgeDuration):])
if err6 != nil {
return 0, err6
n5, err5 := github_com_gogo_protobuf_types.StdDurationMarshalTo(m.MaxAgeDuration, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdDuration(m.MaxAgeDuration):])
if err5 != nil {
return 0, err5
}
i -= n6
i = encodeVarintParams(dAtA, i, uint64(n6))
i -= n5
i = encodeVarintParams(dAtA, i, uint64(n5))
i--
dAtA[i] = 0x12
if m.MaxAgeNumBlocks != 0 {
@@ -914,53 +795,6 @@ func (m *VersionParams) MarshalToSizedBuffer(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}

func (m *TimestampParams) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}

func (m *TimestampParams) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}

func (m *TimestampParams) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
n7, err7 := github_com_gogo_protobuf_types.StdDurationMarshalTo(m.MessageDelay, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdDuration(m.MessageDelay):])
if err7 != nil {
return 0, err7
}
i -= n7
i = encodeVarintParams(dAtA, i, uint64(n7))
i--
dAtA[i] = 0x1a
n8, err8 := github_com_gogo_protobuf_types.StdDurationMarshalTo(m.Precision, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdDuration(m.Precision):])
if err8 != nil {
return 0, err8
}
i -= n8
i = encodeVarintParams(dAtA, i, uint64(n8))
i--
dAtA[i] = 0x12
n9, err9 := github_com_gogo_protobuf_types.StdDurationMarshalTo(m.Accuracy, dAtA[i-github_com_gogo_protobuf_types.SizeOfStdDuration(m.Accuracy):])
if err9 != nil {
return 0, err9
}
i -= n9
i = encodeVarintParams(dAtA, i, uint64(n9))
i--
dAtA[i] = 0xa
return len(dAtA) - i, nil
}

func (m *HashedParams) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@@ -1027,10 +861,6 @@ func (m *ConsensusParams) Size() (n int) {
l = m.Version.Size()
n += 1 + l + sovParams(uint64(l))
}
if m.Timestamp != nil {
l = m.Timestamp.Size()
n += 1 + l + sovParams(uint64(l))
}
return n
}

@@ -1093,21 +923,6 @@ func (m *VersionParams) Size() (n int) {
return n
}

func (m *TimestampParams) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = github_com_gogo_protobuf_types.SizeOfStdDuration(m.Accuracy)
n += 1 + l + sovParams(uint64(l))
l = github_com_gogo_protobuf_types.SizeOfStdDuration(m.Precision)
n += 1 + l + sovParams(uint64(l))
l = github_com_gogo_protobuf_types.SizeOfStdDuration(m.MessageDelay)
n += 1 + l + sovParams(uint64(l))
return n
}

func (m *HashedParams) Size() (n int) {
if m == nil {
return 0
@@ -1302,42 +1117,6 @@ func (m *ConsensusParams) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
case 5:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Timestamp", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowParams
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthParams
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthParams
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if m.Timestamp == nil {
m.Timestamp = &TimestampParams{}
}
if err := m.Timestamp.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipParams(dAtA[iNdEx:])
@@ -1719,155 +1498,6 @@ func (m *VersionParams) Unmarshal(dAtA []byte) error {
}
return nil
}
func (m *TimestampParams) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowParams
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: TimestampParams: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: TimestampParams: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Accuracy", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowParams
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthParams
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthParams
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := github_com_gogo_protobuf_types.StdDurationUnmarshal(&m.Accuracy, dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Precision", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowParams
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthParams
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthParams
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := github_com_gogo_protobuf_types.StdDurationUnmarshal(&m.Precision, dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
case 3:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field MessageDelay", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowParams
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthParams
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthParams
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := github_com_gogo_protobuf_types.StdDurationUnmarshal(&m.MessageDelay, dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipParams(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthParams
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}

if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func (m *HashedParams) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
@@ -15,7 +15,6 @@ message ConsensusParams {
EvidenceParams evidence = 2;
ValidatorParams validator = 3;
VersionParams version = 4;
TimestampParams timestamp = 5;
}

// BlockParams contains limits on the block size.
@@ -61,15 +60,6 @@ message VersionParams {
uint64 app_version = 1;
}

message TimestampParams {
google.protobuf.Duration accuracy = 1
[(gogoproto.nullable) = false, (gogoproto.stdduration) = true];
google.protobuf.Duration precision = 2
[(gogoproto.nullable) = false, (gogoproto.stdduration) = true];
google.protobuf.Duration message_delay = 3
[(gogoproto.nullable) = false, (gogoproto.stdduration) = true];
}

// HashedParams is a subset of ConsensusParams.
//
// It is hashed into the Header.ConsensusHash.
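Note: for reference, the removed `TimestampParams` message mapped, via `(gogoproto.stdduration)`, to plain `time.Duration` fields in Go. A minimal sketch of that mapping follows; the values are invented for illustration only.

```go
package main

import (
	"fmt"
	"time"
)

// TimestampParams mirrors the removed proto message as the Go struct it
// generated: three standard-library durations describing acceptable clock
// skew and message propagation time.
type TimestampParams struct {
	Accuracy     time.Duration
	Precision    time.Duration
	MessageDelay time.Duration
}

func main() {
	tp := TimestampParams{
		Accuracy:     500 * time.Millisecond, // illustrative values, not defaults from the source
		Precision:    100 * time.Millisecond,
		MessageDelay: 2 * time.Second,
	}
	fmt.Printf("%+v\n", tp)
}
```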
@@ -601,32 +601,6 @@ paths:
            application/json:
              schema:
                $ref: "#/components/schemas/ErrorResponse"
  /unsafe_flush_mempool:
    get:
      summary: Flush mempool of all unconfirmed transactions
      operationId: unsafe_flush_mempool
      tags:
        - Unsafe
      description: |
        Flush flushes out the mempool. It acquires a read-lock, fetches all the
        transactions currently in the transaction store and removes each transaction
        from the store and all indexes and finally resets the cache.

        Note, flushing the mempool may leave the mempool in an inconsistent state.
      responses:
        "200":
          description: empty answer
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/EmptyResponse"
        "500":
          description: empty error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/ErrorResponse"

  /blockchain:
    get:
      summary: "Get block headers (max: 20) for minHeight <= height <= maxHeight."
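Note: while the spec entry above is being removed, calling such an RPC endpoint is just an HTTP GET. A minimal sketch in Go, assuming a node running locally with the unsafe RPC routes enabled on the default port 26657:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// Calls the unsafe_flush_mempool endpoint described by the spec above and
// prints the raw JSON response. This assumes the node exposes unsafe routes.
func main() {
	resp, err := http.Get("http://localhost:26657/unsafe_flush_mempool")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```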
@@ -22,7 +22,7 @@ import (
// Metrics are based on the `benchmarkLength`, the amount of consecutive blocks
// sampled from in the testnet
func Benchmark(ctx context.Context, testnet *e2e.Testnet, benchmarkLength int64) error {
block, err := getLatestBlock(ctx, testnet)
block, _, err := waitForHeight(ctx, testnet, 0)
if err != nil {
return err
}

@@ -54,25 +54,9 @@ func Load(ctx context.Context, testnet *e2e.Testnet) error {
case numSeen := <-chSuccess:
success += numSeen
case <-ctx.Done():
// if we couldn't submit any transactions,
// that's probably a problem and the test
// should error; however, for very short tests
// we shouldn't abort.
//
// The 2s cut off, is a rough guess based on
// the expected value of
// loadGenerateWaitTime. If the implementation
// of that function changes, then this might
// also need to change without more
// refactoring.
if success == 0 && time.Since(started) > 2*time.Second {
if success == 0 {
return errors.New("failed to submit any transactions")
}

// TODO perhaps allow test networks to
// declare required transaction rates, which
// might allow us to avoid the special case
// around 0 txs above.
rate := float64(success) / time.Since(started).Seconds()

logger.Info("ending transaction load",

@@ -13,22 +13,17 @@ import (
)

// waitForHeight waits for the network to reach a certain height (or above),
// returning the block at the height seen. Errors if the network is not making
// returning the highest height seen. Errors if the network is not making
// progress at all.
// If height == 0, the initial height of the test network is used as the target.
func waitForHeight(ctx context.Context, testnet *e2e.Testnet, height int64) (*types.Block, *types.BlockID, error) {
var (
err error
maxResult *rpctypes.ResultBlock
clients = map[string]*rpchttp.HTTP{}
lastHeight int64
lastIncrease = time.Now()
nodesAtHeight = map[string]struct{}{}
numRunningNodes int
)
if height == 0 {
height = testnet.InitialHeight
}

for _, node := range testnet.Nodes {
if node.Stateless() {
continue
@@ -52,10 +47,10 @@ func waitForHeight(ctx context.Context, testnet *e2e.Testnet, height int64) (*ty
continue
}

// skip nodes that don't have state or haven't started yet
if node.Stateless() {
continue
}

if !node.HasStarted {
continue
}
@@ -72,16 +67,16 @@ func waitForHeight(ctx context.Context, testnet *e2e.Testnet, height int64) (*ty

wctx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
result, err := client.Status(wctx)
result, err := client.Block(wctx, nil)
if err != nil {
continue
}
if result.SyncInfo.LatestBlockHeight > lastHeight {
lastHeight = result.SyncInfo.LatestBlockHeight
if result.Block != nil && (maxResult == nil || result.Block.Height > maxResult.Block.Height) {
maxResult = result
lastIncrease = time.Now()
}

if result.SyncInfo.LatestBlockHeight >= height {
if maxResult != nil && maxResult.Block.Height >= height {
// the node has achieved the target height!

// add this node to the set of target
@@ -95,16 +90,9 @@ func waitForHeight(ctx context.Context, testnet *e2e.Testnet, height int64) (*ty
continue
}

// All nodes are at or above the target height. Now fetch the block for that target height
// and return it. We loop again through all clients because some may have pruning set but
// at least two of them should be archive nodes.
for _, c := range clients {
result, err := c.Block(ctx, &height)
if err != nil || result == nil || result.Block == nil {
continue
}
return result.Block, &result.BlockID, err
}
// return once all nodes have reached
// the target height.
return maxResult.Block, &maxResult.BlockID, nil
}
}

@@ -112,12 +100,12 @@ func waitForHeight(ctx context.Context, testnet *e2e.Testnet, height int64) (*ty
return nil, nil, errors.New("unable to connect to any network nodes")
}
if time.Since(lastIncrease) >= time.Minute {
if lastHeight == 0 {
return nil, nil, errors.New("chain stalled at unknown height (most likely upon starting)")
if maxResult == nil {
return nil, nil, errors.New("chain stalled at unknown height")
}

return nil, nil, fmt.Errorf("chain stalled at height %v [%d of %d nodes %+v]",
lastHeight,
maxResult.Block.Height,
len(nodesAtHeight),
numRunningNodes,
nodesAtHeight)
@@ -194,35 +182,3 @@ func waitForNode(ctx context.Context, node *e2e.Node, height int64) (*rpctypes.R
}
}
}

// getLatestBlock returns the last block that all active nodes in the network have
// agreed upon, i.e. the earliest of each node's latest block
func getLatestBlock(ctx context.Context, testnet *e2e.Testnet) (*types.Block, error) {
var earliestBlock *types.Block
for _, node := range testnet.Nodes {
// skip nodes that don't have state or haven't started yet
if node.Stateless() {
continue
}
if !node.HasStarted {
continue
}

client, err := node.Client()
if err != nil {
return nil, err
}

wctx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
result, err := client.Block(wctx, nil)
if err != nil {
return nil, err
}

if result.Block != nil && (earliestBlock == nil || earliestBlock.Height > result.Block.Height) {
earliestBlock = result.Block
}
}
return earliestBlock, nil
}
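Note: the reworked `waitForHeight` above detects a stalled chain by remembering when the best observed height last increased and erroring once a fixed window passes. A minimal, self-contained sketch of that pattern, with `heightFn` standing in for the per-node Block RPC polling:

```go
package main

import (
	"fmt"
	"time"
)

// waitStall polls heightFn until a target height is reached, erroring if the
// best height stops increasing for longer than window.
func waitStall(heightFn func() int64, target int64, window time.Duration) error {
	var best int64
	lastIncrease := time.Now()
	for {
		if h := heightFn(); h > best {
			best = h
			lastIncrease = time.Now() // progress: reset the stall clock
		}
		if best >= target {
			return nil
		}
		if time.Since(lastIncrease) >= window {
			return fmt.Errorf("chain stalled at height %d", best)
		}
		time.Sleep(10 * time.Millisecond)
	}
}

func main() {
	h := int64(0)
	if err := waitStall(func() int64 { h++; return h }, 3, time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("reached target height")
}
```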
@@ -10,7 +10,7 @@ import (
// Wait waits for a number of blocks to be produced, and for all nodes to catch
// up with it.
func Wait(ctx context.Context, testnet *e2e.Testnet, blocks int64) error {
block, err := getLatestBlock(ctx, testnet)
block, _, err := waitForHeight(ctx, testnet, 0)
if err != nil {
return err
}
@@ -68,7 +68,7 @@ func TestApp_Tx(t *testing.T) {
}{
{
Name: "Sync",
WaitTime: time.Minute,
WaitTime: 30 * time.Second,
BroadcastTx: func(client *http.HTTP) broadcastFunc {
return func(ctx context.Context, tx types.Tx) error {
_, err := client.BroadcastTxSync(ctx, tx)
@@ -78,13 +78,7 @@ func TestApp_Tx(t *testing.T) {
},
{
Name: "Commit",
WaitTime: 15 * time.Second,
// TODO: turn this check back on if it can
// return reliably. Currently these calls have
// a hard timeout of 10s (server side
// configured). The Sync check is probably
// safe.
ShouldSkip: true,
WaitTime: time.Minute,
BroadcastTx: func(client *http.HTTP) broadcastFunc {
return func(ctx context.Context, tx types.Tx) error {
_, err := client.BroadcastTxCommit(ctx, tx)
@@ -93,12 +87,8 @@ func TestApp_Tx(t *testing.T) {
},
},
{
Name: "Async",
WaitTime: 90 * time.Second,
// TODO: turn this check back on if there's a
// way to avoid failures in the case that the
// transaction doesn't make it into the
// mempool. (retries?)
Name: "Async",
WaitTime: time.Minute,
ShouldSkip: true,
BroadcastTx: func(client *http.HTTP) broadcastFunc {
return func(ctx context.Context, tx types.Tx) error {
@@ -883,7 +883,7 @@ func (commit *Commit) ValidateBasic() error {
}

if commit.Height >= 1 {
if commit.BlockID.IsNil() {
if commit.BlockID.IsZero() {
return errors.New("commit cannot be for nil block")
}

@@ -1204,8 +1204,8 @@ func (blockID BlockID) ValidateBasic() error {
return nil
}

// IsNil returns true if this is the BlockID of a nil block.
func (blockID BlockID) IsNil() bool {
// IsZero returns true if this is the BlockID of a nil block.
func (blockID BlockID) IsZero() bool {
return len(blockID.Hash) == 0 &&
blockID.PartSetHeader.IsZero()
}
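Note: the hunks above rename `IsNil` back to `IsZero` without changing its behavior: a BlockID is "zero" when both its hash and its part-set header are empty, which is how a vote for a nil block is encoded. A minimal, self-contained sketch with trimmed stand-ins for the real types:

```go
package main

import "fmt"

// PartSetHeader is a trimmed stand-in for types.PartSetHeader.
type PartSetHeader struct {
	Total uint32
	Hash  []byte
}

func (psh PartSetHeader) IsZero() bool { return psh.Total == 0 && len(psh.Hash) == 0 }

// BlockID is a trimmed stand-in for types.BlockID.
type BlockID struct {
	Hash          []byte
	PartSetHeader PartSetHeader
}

// IsZero mirrors the method in the diff: true for the BlockID of a nil block.
func (blockID BlockID) IsZero() bool {
	return len(blockID.Hash) == 0 && blockID.PartSetHeader.IsZero()
}

func main() {
	var nilVote BlockID // a vote for nil carries a zero BlockID
	fmt.Println(nilVote.IsZero()) // true

	real := BlockID{Hash: []byte{0xde, 0xad}, PartSetHeader: PartSetHeader{Total: 1, Hash: []byte{0x01}}}
	fmt.Println(real.IsZero()) // false
}
```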
@@ -21,7 +21,7 @@ func CanonicalizeBlockID(bid tmproto.BlockID) *tmproto.CanonicalBlockID {
panic(err)
}
var cbid *tmproto.CanonicalBlockID
if rbid == nil || rbid.IsNil() {
if rbid == nil || rbid.IsZero() {
cbid = nil
} else {
cbid = &tmproto.CanonicalBlockID{
@@ -221,6 +221,10 @@ func (b *EventBus) PublishEventPolka(data EventDataRoundState) error {
return b.Publish(EventPolkaValue, data)
}

func (b *EventBus) PublishEventUnlock(data EventDataRoundState) error {
return b.Publish(EventUnlockValue, data)
}

func (b *EventBus) PublishEventRelock(data EventDataRoundState) error {
return b.Publish(EventRelockValue, data)
}
@@ -297,6 +301,10 @@ func (NopEventBus) PublishEventPolka(data EventDataRoundState) error {
return nil
}

func (NopEventBus) PublishEventUnlock(data EventDataRoundState) error {
return nil
}

func (NopEventBus) PublishEventRelock(data EventDataRoundState) error {
return nil
}

@@ -362,6 +362,8 @@ func TestEventBusPublish(t *testing.T) {
require.NoError(t, err)
err = eventBus.PublishEventPolka(EventDataRoundState{})
require.NoError(t, err)
err = eventBus.PublishEventUnlock(EventDataRoundState{})
require.NoError(t, err)
err = eventBus.PublishEventRelock(EventDataRoundState{})
require.NoError(t, err)
err = eventBus.PublishEventLock(EventDataRoundState{})
@@ -473,6 +475,7 @@ var events = []string{
EventTimeoutProposeValue,
EventCompleteProposalValue,
EventPolkaValue,
EventUnlockValue,
EventLockValue,
EventRelockValue,
EventTimeoutWaitValue,
@@ -494,6 +497,7 @@ var queries = []tmpubsub.Query{
EventQueryTimeoutPropose,
EventQueryCompleteProposal,
EventQueryPolka,
EventQueryUnlock,
EventQueryLock,
EventQueryRelock,
EventQueryTimeoutWait,
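Note: the hunks above restore the Unlock event to both the real `EventBus` and the `NopEventBus`. A minimal sketch of why both exist: publishers depend on a small interface, so tests can substitute a no-op implementation. Names here are trimmed stand-ins, not the tendermint API.

```go
package main

import "fmt"

// EventDataRoundState is a trimmed stand-in for the round-state payload.
type EventDataRoundState struct {
	Height int64
	Round  int32
	Step   string
}

// unlockPublisher is the narrow interface consensus code would depend on.
type unlockPublisher interface {
	PublishEventUnlock(EventDataRoundState) error
}

// logBus mimics a real bus by reporting the event.
type logBus struct{}

func (logBus) PublishEventUnlock(d EventDataRoundState) error {
	fmt.Printf("Unlock event: height=%d round=%d step=%s\n", d.Height, d.Round, d.Step)
	return nil
}

// nopBus mimics NopEventBus: it silently drops every event.
type nopBus struct{}

func (nopBus) PublishEventUnlock(EventDataRoundState) error { return nil }

func main() {
	var bus unlockPublisher = logBus{}
	_ = bus.PublishEventUnlock(EventDataRoundState{Height: 10, Round: 2, Step: "RoundStepPrecommit"})

	bus = nopBus{} // tests can silence events without touching the caller
	_ = bus.PublishEventUnlock(EventDataRoundState{})
}
```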
@@ -38,6 +38,7 @@ const (
EventStateSyncStatusValue = "StateSyncStatus"
EventTimeoutProposeValue = "TimeoutPropose"
EventTimeoutWaitValue = "TimeoutWait"
EventUnlockValue = "Unlock"
EventValidBlockValue = "ValidBlock"
EventVoteValue = "Vote"
)
@@ -222,6 +223,7 @@ var (
EventQueryTimeoutPropose = QueryForEvent(EventTimeoutProposeValue)
EventQueryTimeoutWait = QueryForEvent(EventTimeoutWaitValue)
EventQueryTx = QueryForEvent(EventTxValue)
EventQueryUnlock = QueryForEvent(EventUnlockValue)
EventQueryValidatorSetUpdates = QueryForEvent(EventValidatorSetUpdatesValue)
EventQueryValidBlock = QueryForEvent(EventValidBlockValue)
EventQueryVote = QueryForEvent(EventVoteValue)
@@ -41,7 +41,6 @@ type ConsensusParams struct {
Evidence EvidenceParams `json:"evidence"`
Validator ValidatorParams `json:"validator"`
Version VersionParams `json:"version"`
Timestamp TimestampParams `json:"timestamp"`
}

// HashedParams is a subset of ConsensusParams.
@@ -66,14 +65,6 @@ type EvidenceParams struct {
MaxBytes int64 `json:"max_bytes"`
}

// TimestampParams define the acceptable amount of clock skew among different
// validators on a network.
type TimestampParams struct {
Accuracy time.Duration `json:"accuracy"`
Precision time.Duration `json:"precision"`
MessageDelay time.Duration `json:"message_delay"`
}

// ValidatorParams restrict the public key types validators can use.
// NOTE: uses ABCI pubkey naming, not Amino names.
type ValidatorParams struct {
@@ -244,11 +235,6 @@ func (params ConsensusParams) UpdateConsensusParams(params2 *tmproto.ConsensusPa
if params2.Version != nil {
res.Version.AppVersion = params2.Version.AppVersion
}
if params2.Timestamp != nil {
res.Timestamp.Accuracy = params2.Timestamp.Accuracy
res.Timestamp.Precision = params2.Timestamp.Precision
res.Timestamp.MessageDelay = params2.Timestamp.MessageDelay
}
return res
}

@@ -269,11 +255,6 @@ func (params *ConsensusParams) ToProto() tmproto.ConsensusParams {
Version: &tmproto.VersionParams{
AppVersion: params.Version.AppVersion,
},
Timestamp: &tmproto.TimestampParams{
Accuracy: params.Timestamp.Accuracy,
Precision: params.Timestamp.Precision,
MessageDelay: params.Timestamp.MessageDelay,
},
}
}

@@ -294,10 +275,5 @@ func ConsensusParamsFromProto(pbParams tmproto.ConsensusParams) ConsensusParams
Version: VersionParams{
AppVersion: pbParams.Version.AppVersion,
},
Timestamp: TimestampParams{
Accuracy: pbParams.Timestamp.Accuracy,
Precision: pbParams.Timestamp.Precision,
MessageDelay: pbParams.Timestamp.MessageDelay,
},
}
}
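Note: the `UpdateConsensusParams` hunk above illustrates a partial-update pattern: each nil proto sub-message leaves the corresponding section of the existing params untouched. A minimal, self-contained sketch of that pattern with trimmed stand-in types:

```go
package main

import "fmt"

// VersionParams and ConsensusParams are trimmed stand-ins for the real types.
type VersionParams struct{ AppVersion uint64 }

type ConsensusParams struct{ Version VersionParams }

// protoVersionParams and protoConsensusParams stand in for the tmproto types,
// where each section is a nullable pointer.
type protoVersionParams struct{ AppVersion uint64 }

type protoConsensusParams struct{ Version *protoVersionParams }

// update copies the current params and overwrites only the sections that the
// proto message actually supplies.
func update(params ConsensusParams, p2 *protoConsensusParams) ConsensusParams {
	res := params // start from a copy of the current params
	if p2 == nil {
		return res
	}
	if p2.Version != nil { // nil section: keep the existing value
		res.Version.AppVersion = p2.Version.AppVersion
	}
	return res
}

func main() {
	cur := ConsensusParams{Version: VersionParams{AppVersion: 1}}
	fmt.Println(update(cur, &protoConsensusParams{}))                                // untouched: {{1}}
	fmt.Println(update(cur, &protoConsensusParams{Version: &protoVersionParams{2}})) // updated: {{2}}
}
```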
@@ -68,7 +68,7 @@ func (vote *Vote) CommitSig() CommitSig {
switch {
case vote.BlockID.IsComplete():
blockIDFlag = BlockIDFlagCommit
case vote.BlockID.IsNil():
case vote.BlockID.IsZero():
blockIDFlag = BlockIDFlagNil
default:
panic(fmt.Sprintf("Invalid vote %v - expected BlockID to be either empty or complete", vote))
@@ -177,7 +177,7 @@ func (vote *Vote) ValidateBasic() error {

// BlockID.ValidateBasic would not err if, for instance, we have an empty hash but a
// non-empty PartSetHeader:
if !vote.BlockID.IsNil() && !vote.BlockID.IsComplete() {
if !vote.BlockID.IsZero() && !vote.BlockID.IsComplete() {
return fmt.Errorf("blockID must be either empty or complete, got: %v", vote.BlockID)
}
@@ -27,7 +27,7 @@ func TestVoteSet_AddVote_Good(t *testing.T) {
assert.Nil(t, voteSet.GetByAddress(val0Addr))
assert.False(t, voteSet.BitArray().GetIndex(0))
blockID, ok := voteSet.TwoThirdsMajority()
assert.False(t, ok || !blockID.IsNil(), "there should be no 2/3 majority")
assert.False(t, ok || !blockID.IsZero(), "there should be no 2/3 majority")

vote := &Vote{
ValidatorAddress: val0Addr,
@@ -44,7 +44,7 @@ func TestVoteSet_AddVote_Good(t *testing.T) {
assert.NotNil(t, voteSet.GetByAddress(val0Addr))
assert.True(t, voteSet.BitArray().GetIndex(0))
blockID, ok = voteSet.TwoThirdsMajority()
assert.False(t, ok || !blockID.IsNil(), "there should be no 2/3 majority")
assert.False(t, ok || !blockID.IsZero(), "there should be no 2/3 majority")
}

func TestVoteSet_AddVote_Bad(t *testing.T) {
@@ -145,7 +145,7 @@ func TestVoteSet_2_3Majority(t *testing.T) {
require.NoError(t, err)
}
blockID, ok := voteSet.TwoThirdsMajority()
assert.False(t, ok || !blockID.IsNil(), "there should be no 2/3 majority")
assert.False(t, ok || !blockID.IsZero(), "there should be no 2/3 majority")

// 7th validator voted for some blockhash
{
@@ -156,7 +156,7 @@ func TestVoteSet_2_3Majority(t *testing.T) {
_, err = signAddVote(privValidators[6], withBlockHash(vote, tmrand.Bytes(32)), voteSet)
require.NoError(t, err)
blockID, ok = voteSet.TwoThirdsMajority()
assert.False(t, ok || !blockID.IsNil(), "there should be no 2/3 majority")
assert.False(t, ok || !blockID.IsZero(), "there should be no 2/3 majority")
}

// 8th validator voted for nil.
@@ -168,7 +168,7 @@ func TestVoteSet_2_3Majority(t *testing.T) {
_, err = signAddVote(privValidators[7], vote, voteSet)
require.NoError(t, err)
blockID, ok = voteSet.TwoThirdsMajority()
assert.True(t, ok || blockID.IsNil(), "there should be 2/3 majority for nil")
assert.True(t, ok || blockID.IsZero(), "there should be 2/3 majority for nil")
}
}

@@ -200,7 +200,7 @@ func TestVoteSet_2_3MajorityRedux(t *testing.T) {
require.NoError(t, err)
}
blockID, ok := voteSet.TwoThirdsMajority()
assert.False(t, ok || !blockID.IsNil(),
assert.False(t, ok || !blockID.IsZero(),
"there should be no 2/3 majority")

// 67th validator voted for nil
@@ -212,7 +212,7 @@ func TestVoteSet_2_3MajorityRedux(t *testing.T) {
_, err = signAddVote(privValidators[66], withBlockHash(vote, nil), voteSet)
require.NoError(t, err)
blockID, ok = voteSet.TwoThirdsMajority()
assert.False(t, ok || !blockID.IsNil(),
assert.False(t, ok || !blockID.IsZero(),
"there should be no 2/3 majority: last vote added was nil")
}

@@ -226,7 +226,7 @@ func TestVoteSet_2_3MajorityRedux(t *testing.T) {
_, err = signAddVote(privValidators[67], withBlockPartSetHeader(vote, blockPartsHeader), voteSet)
require.NoError(t, err)
blockID, ok = voteSet.TwoThirdsMajority()
assert.False(t, ok || !blockID.IsNil(),
assert.False(t, ok || !blockID.IsZero(),
"there should be no 2/3 majority: last vote added had different PartSetHeader Hash")
}

@@ -240,7 +240,7 @@ func TestVoteSet_2_3MajorityRedux(t *testing.T) {
_, err = signAddVote(privValidators[68], withBlockPartSetHeader(vote, blockPartsHeader), voteSet)
require.NoError(t, err)
blockID, ok = voteSet.TwoThirdsMajority()
assert.False(t, ok || !blockID.IsNil(),
assert.False(t, ok || !blockID.IsZero(),
"there should be no 2/3 majority: last vote added had different PartSetHeader Total")
}

@@ -253,7 +253,7 @@ func TestVoteSet_2_3MajorityRedux(t *testing.T) {
_, err = signAddVote(privValidators[69], withBlockHash(vote, tmrand.Bytes(32)), voteSet)
require.NoError(t, err)
blockID, ok = voteSet.TwoThirdsMajority()
assert.False(t, ok || !blockID.IsNil(),
assert.False(t, ok || !blockID.IsZero(),
"there should be no 2/3 majority: last vote added had different BlockHash")
}