Mirror of https://github.com/tendermint/tendermint.git (synced 2026-02-13 07:11:13 +00:00)

Compare commits: `anca/remov...` → `wb/propose` (6 commits)

| SHA1 |
|---|
| 9266dddc75 |
| c246bdd443 |
| a9aab99b41 |
| 884e4e99ca |
| 591cc87669 |
| 1d68f340a6 |
@@ -12,7 +12,6 @@ Tendermint has some tools that are associated with it for:

 - [Debugging](./debugging/pro.md)
 - [Benchmarking](#benchmarking)
 - [Testnets](#testnets)
-- [Validation of remote signers](./remote-signer-validation.md)

 ## Benchmarking
@@ -1,156 +0,0 @@ (entire file removed; its contents follow)

# Remote Signer

Located under the `tools/tm-signer-harness` folder in the [Tendermint
repository](https://github.com/tendermint/tendermint).

The Tendermint remote signer test harness facilitates integration testing
between Tendermint and remote signers such as
[tmkms](https://github.com/iqlusioninc/tmkms). Such remote signers allow for signing
of important Tendermint messages using
[HSMs](https://en.wikipedia.org/wiki/Hardware_security_module), providing
additional security.

When executed, `tm-signer-harness`:

1. Runs a listener (either TCP or Unix sockets).
2. Waits for a connection from the remote signer.
3. Upon connection from the remote signer, executes a number of automated tests
   to ensure compatibility.
4. Upon successful validation, the harness process exits with a 0 exit code.
   Upon validation failure, it exits with a particular exit code related to the
   error.

## Prerequisites

Requires the same prerequisites as for building
[Tendermint](https://github.com/tendermint/tendermint).

## Building

From the `tools/tm-signer-harness` directory in your Tendermint source
repository, simply run:

```bash
make

# To have global access to this executable
make install
```

## Docker Image

To build a Docker image containing the `tm-signer-harness`, also from the
`tools/tm-signer-harness` directory of your Tendermint source repo, simply run:

```bash
make docker-image
```

## Running against KMS

As an example of how to use `tm-signer-harness`, the following instructions show
you how to execute its tests against [tmkms](https://github.com/iqlusioninc/tmkms).
For this example, we will make use of the **software signing module in KMS**, as
the hardware signing module requires a physical
[YubiHSM](https://www.yubico.com/products/yubihsm/) device.

### Step 1: Install KMS on your local machine

See the [tmkms repo](https://github.com/iqlusioninc/tmkms) for details on how to set
KMS up on your local machine.

If you have [Rust](https://www.rust-lang.org/) installed on your local machine,
you can simply install KMS by:

```bash
cargo install tmkms
```

### Step 2: Make keys for KMS

The KMS software signing module needs a key with which to sign messages. In our
example, we will simply export a signing key from our local Tendermint instance.

```bash
# Will generate all necessary Tendermint configuration files, including:
# - ~/.tendermint/config/priv_validator_key.json
# - ~/.tendermint/data/priv_validator_state.json
tendermint init validator

# Extract the signing key from our local Tendermint instance
tm-signer-harness extract_key \   # Use the "extract_key" command
    -tmhome ~/.tendermint \       # Where to find the Tendermint home directory
    -output ./signing.key         # Where to write the key
```

Also, because we want KMS to connect to `tm-signer-harness`, we will need to
provide a secret connection key from KMS' side:

```bash
tmkms keygen secret_connection.key
```

### Step 3: Configure and run KMS

KMS needs some configuration to tell it to use the software signing module as well
as the `signing.key` file we just generated. Save the following to a file called
`tmkms.toml`:

```toml
[[validator]]
addr = "tcp://127.0.0.1:61219"         # This is where we will find tm-signer-harness.
chain_id = "test-chain-0XwP5E"         # The Tendermint chain ID for which KMS will be signing (found in ~/.tendermint/config/genesis.json).
reconnect = true                       # true is the default
secret_key = "./secret_connection.key" # Where to find our secret connection key.

[[providers.softsign]]
id = "test-chain-0XwP5E"               # The Tendermint chain ID for which KMS will be signing (same as validator.chain_id above).
path = "./signing.key"                 # The signing key we extracted earlier.
```

Then run KMS with this configuration:

```bash
tmkms start -c tmkms.toml
```

This will start KMS, which will repeatedly try to connect to
`tcp://127.0.0.1:61219` until it is successful.

### Step 4: Run tm-signer-harness

Now we get to run the signer test harness:

```bash
tm-signer-harness run \            # The "run" command executes the tests
    -addr tcp://127.0.0.1:61219 \  # The address we promised KMS earlier
    -tmhome ~/.tendermint          # Where to find our Tendermint configuration/data files.
```

If the current versions of Tendermint and KMS are compatible, `tm-signer-harness`
should now exit with a 0 exit code. If they are somehow not compatible, it
should exit with a meaningful non-zero exit code (see the exit codes below).

### Step 5: Shut down KMS

Simply hit Ctrl+Break on your KMS instance (or use the `kill` command on Linux)
to terminate it gracefully.

## Exit Code Meanings

The following list shows the various exit codes from `tm-signer-harness` and
their meanings:

| Exit Code | Description |
| --- | --- |
| 0 | Success! |
| 1 | Invalid command line parameters supplied to `tm-signer-harness` |
| 2 | Maximum number of accept retries reached (the `-accept-retries` parameter) |
| 3 | Failed to load `${TMHOME}/config/genesis.json` |
| 4 | Failed to create listener specified by `-addr` parameter |
| 5 | Failed to start listener |
| 6 | Interrupted by `SIGINT` (e.g. when hitting Ctrl+Break or Ctrl+C) |
| 7 | Other unknown error |
| 8 | Test 1 failed: public key mismatch |
| 9 | Test 2 failed: signing of proposals failed |
| 10 | Test 3 failed: signing of votes failed |
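When driving the harness from scripts or CI, the exit code is effectively the API. Below is a minimal stand-alone wrapper sketch (hypothetical, not part of the repository) that runs the harness via `os/exec` and surfaces the code from the table above:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Hypothetical CI wrapper: runs tm-signer-harness and propagates its exit
// code, which maps to the table above. Assumes the binary is on PATH.
func main() {
	cmd := exec.Command("tm-signer-harness", "run",
		"-addr", "tcp://127.0.0.1:61219",
		"-tmhome", os.ExpandEnv("$HOME/.tendermint"))
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	err := cmd.Run()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode() // non-zero: see the exit-code table
	} else if err != nil {
		fmt.Fprintln(os.Stderr, "failed to start harness:", err)
		os.Exit(1)
	}
	fmt.Printf("harness exited with code %d\n", code)
	os.Exit(code)
}
```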
@@ -214,7 +214,7 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {

 		// Make proposal
 		propBlockID := types.BlockID{Hash: block.Hash(), PartSetHeader: blockParts.Header()}
-		proposal := types.NewProposal(height, round, lazyNodeState.ValidRound, propBlockID)
+		proposal := types.NewProposal(height, round, lazyNodeState.ValidRound, propBlockID, block.Header.Time)
 		p := proposal.ToProto()
 		if err := lazyNodeState.privValidator.SignProposal(ctx, lazyNodeState.state.ChainID, p); err == nil {
 			proposal.Signature = p.Signature
@@ -3,6 +3,7 @@ package consensus

 import (
 	"bytes"
 	"context"
+	"errors"
 	"fmt"
 	"io"
 	"os"
@@ -247,7 +248,7 @@ func decideProposal(

 	// Make proposal
 	polRound, propBlockID := validRound, types.BlockID{Hash: block.Hash(), PartSetHeader: blockParts.Header()}
-	proposal = types.NewProposal(height, round, polRound, propBlockID)
+	proposal = types.NewProposal(height, round, polRound, propBlockID, block.Header.Time)
 	p := proposal.ToProto()
 	if err := vs.SignProposal(ctx, chainID, p); err != nil {
 		t.Fatalf("error signing proposal: %s", err)
@@ -275,6 +276,7 @@ func signAddVotes(
 	addVotes(to, signVotes(ctx, voteType, chainID, blockID, vss...)...)
 }

+// nolint: lll
 func validatePrevote(ctx context.Context, t *testing.T, cs *State, round int32, privVal *validatorStub, blockHash []byte) {
 	t.Helper()
 	prevotes := cs.Votes.Prevotes(round)
@@ -397,6 +399,35 @@ func subscribeToVoter(ctx context.Context, t *testing.T, cs *State, addr []byte)
 	return ch
 }

+func subscribeToVoterBuffered(ctx context.Context, t *testing.T, cs *State, addr []byte) <-chan tmpubsub.Message {
+	t.Helper()
+	votesSub, err := cs.eventBus.SubscribeWithArgs(ctx, tmpubsub.SubscribeArgs{
+		ClientID: testSubscriber,
+		Query:    types.EventQueryVote,
+		Limit:    10})
+	if err != nil {
+		t.Fatalf("failed to subscribe %s to %v", testSubscriber, types.EventQueryVote)
+	}
+	ch := make(chan tmpubsub.Message, 10)
+	go func() {
+		for {
+			msg, err := votesSub.Next(ctx)
+			if err != nil {
+				if !errors.Is(err, tmpubsub.ErrTerminated) && !errors.Is(err, context.Canceled) {
+					t.Errorf("error terminating pubsub %s", err)
+				}
+				return
+			}
+			vote := msg.Data().(types.EventDataVote)
+			// we only fire for our own votes
+			if bytes.Equal(addr, vote.Vote.ValidatorAddress) {
+				ch <- msg
+			}
+		}
+	}()
+	return ch
+}
//-------------------------------------------------------------------------------
// consensus states

@@ -488,6 +519,7 @@ func loadPrivValidator(t *testing.T, config *config.Config) *privval.FilePV {
 	return privValidator
 }

+// nolint: lll
 func makeState(ctx context.Context, cfg *config.Config, logger log.Logger, nValidators int) (*State, []*validatorStub, error) {
 	// Get State
 	state, privVals := makeGenesisState(cfg, genesisStateArgs{
@@ -512,7 +544,7 @@ func makeState(ctx context.Context, cfg *config.Config, logger log.Logger, nVali

 //-------------------------------------------------------------------------------

-func ensureNoNewEvent(t *testing.T, ch <-chan tmpubsub.Message, timeout time.Duration,
+func ensureNoMessageBeforeTimeout(t *testing.T, ch <-chan tmpubsub.Message, timeout time.Duration,
 	errorMessage string) {
 	t.Helper()
 	select {
@@ -525,7 +557,7 @@ func ensureNoNewEvent(t *testing.T, ch <-chan tmpubsub.Message, timeout time.Dur

 func ensureNoNewEventOnChannel(t *testing.T, ch <-chan tmpubsub.Message) {
 	t.Helper()
-	ensureNoNewEvent(
+	ensureNoMessageBeforeTimeout(
 		t,
 		ch,
 		ensureTimeout,
@@ -534,7 +566,7 @@ func ensureNoNewEventOnChannel(t *testing.T, ch <-chan tmpubsub.Message) {

 func ensureNoNewRoundStep(t *testing.T, stepCh <-chan tmpubsub.Message) {
 	t.Helper()
-	ensureNoNewEvent(
+	ensureNoMessageBeforeTimeout(
 		t,
 		stepCh,
 		ensureTimeout,
@@ -544,7 +576,7 @@ func ensureNoNewRoundStep(t *testing.T, stepCh <-chan tmpubsub.Message) {
 func ensureNoNewTimeout(t *testing.T, stepCh <-chan tmpubsub.Message, timeout int64) {
 	t.Helper()
 	timeoutDuration := time.Duration(timeout*10) * time.Nanosecond
-	ensureNoNewEvent(
+	ensureNoMessageBeforeTimeout(
 		t,
 		stepCh,
 		timeoutDuration,
@@ -553,40 +585,32 @@ func ensureNoNewTimeout(t *testing.T, stepCh <-chan tmpubsub.Message, timeout in

 func ensureNewEvent(t *testing.T, ch <-chan tmpubsub.Message, height int64, round int32, timeout time.Duration, errorMessage string) { // nolint: lll
 	t.Helper()
-	select {
-	case <-time.After(timeout):
-		t.Fatalf("timed out waiting for new event: %s", errorMessage)
-	case msg := <-ch:
-		roundStateEvent, ok := msg.Data().(types.EventDataRoundState)
-		if !ok {
-			t.Fatalf("expected a EventDataRoundState, got %T. Wrong subscription channel?", msg.Data())
-		}
-		if roundStateEvent.Height != height {
-			t.Fatalf("expected height %v, got %v", height, roundStateEvent.Height)
-		}
-		if roundStateEvent.Round != round {
-			t.Fatalf("expected round %v, got %v", round, roundStateEvent.Round)
-		}
-		// TODO: We could check also for a step at this point!
+	msg := ensureMessageBeforeTimeout(t, ch, ensureTimeout)
+	roundStateEvent, ok := msg.Data().(types.EventDataRoundState)
+	if !ok {
+		t.Fatalf("expected a EventDataRoundState, got %T. Wrong subscription channel?", msg.Data())
+	}
+	if roundStateEvent.Height != height {
+		t.Fatalf("expected height %v, got %v", height, roundStateEvent.Height)
+	}
+	if roundStateEvent.Round != round {
+		t.Fatalf("expected round %v, got %v", round, roundStateEvent.Round)
 	}
+	// TODO: We could check also for a step at this point!
 }

 func ensureNewRound(t *testing.T, roundCh <-chan tmpubsub.Message, height int64, round int32) {
 	t.Helper()
-	select {
-	case <-time.After(ensureTimeout):
-		t.Fatal("Timeout expired while waiting for NewRound event")
-	case msg := <-roundCh:
-		newRoundEvent, ok := msg.Data().(types.EventDataNewRound)
-		if !ok {
-			t.Fatalf("expected a EventDataNewRound, got %T. Wrong subscription channel?", msg.Data())
-		}
-		if newRoundEvent.Height != height {
-			t.Fatalf("expected height %v, got %v", height, newRoundEvent.Height)
-		}
-		if newRoundEvent.Round != round {
-			t.Fatalf("expected round %v, got %v", round, newRoundEvent.Round)
-		}
+	msg := ensureMessageBeforeTimeout(t, roundCh, ensureTimeout)
+	newRoundEvent, ok := msg.Data().(types.EventDataNewRound)
+	if !ok {
+		t.Fatalf("expected a EventDataNewRound, got %T. Wrong subscription channel?", msg.Data())
+	}
+	if newRoundEvent.Height != height {
+		t.Fatalf("expected height %v, got %v", height, newRoundEvent.Height)
+	}
+	if newRoundEvent.Round != round {
+		t.Fatalf("expected round %v, got %v", round, newRoundEvent.Round)
+	}
 }
@@ -599,21 +623,16 @@ func ensureNewTimeout(t *testing.T, timeoutCh <-chan tmpubsub.Message, height in

 func ensureNewProposal(t *testing.T, proposalCh <-chan tmpubsub.Message, height int64, round int32) {
 	t.Helper()
-	select {
-	case <-time.After(ensureTimeout):
-		t.Fatalf("Timeout expired while waiting for NewProposal event")
-	case msg := <-proposalCh:
-		proposalEvent, ok := msg.Data().(types.EventDataCompleteProposal)
-		if !ok {
-			t.Fatalf("expected a EventDataCompleteProposal, got %T. Wrong subscription channel?",
-				msg.Data())
-		}
-		if proposalEvent.Height != height {
-			t.Fatalf("expected height %v, got %v", height, proposalEvent.Height)
-		}
-		if proposalEvent.Round != round {
-			t.Fatalf("expected round %v, got %v", round, proposalEvent.Round)
-		}
+	msg := ensureMessageBeforeTimeout(t, proposalCh, ensureTimeout)
+	proposalEvent, ok := msg.Data().(types.EventDataCompleteProposal)
+	if !ok {
+		t.Fatalf("expected a EventDataCompleteProposal, got %T. Wrong subscription channel?", msg.Data())
+	}
+	if proposalEvent.Height != height {
+		t.Fatalf("expected height %v, got %v", height, proposalEvent.Height)
+	}
+	if proposalEvent.Round != round {
+		t.Fatalf("expected round %v, got %v", round, proposalEvent.Round)
+	}
 }
@@ -625,38 +644,28 @@ func ensureNewValidBlock(t *testing.T, validBlockCh <-chan tmpubsub.Message, hei

 func ensureNewBlock(t *testing.T, blockCh <-chan tmpubsub.Message, height int64) {
 	t.Helper()
-	select {
-	case <-time.After(ensureTimeout):
-		t.Fatalf("Timeout expired while waiting for NewBlock event")
-	case msg := <-blockCh:
-		blockEvent, ok := msg.Data().(types.EventDataNewBlock)
-		if !ok {
-			t.Fatalf("expected a EventDataNewBlock, got %T. Wrong subscription channel?",
-				msg.Data())
-		}
-		if blockEvent.Block.Height != height {
-			t.Fatalf("expected height %v, got %v", height, blockEvent.Block.Height)
-		}
-	}
+	msg := ensureMessageBeforeTimeout(t, blockCh, ensureTimeout)
+	blockEvent, ok := msg.Data().(types.EventDataNewBlock)
+	if !ok {
+		t.Fatalf("expected a EventDataNewBlock, got %T. Wrong subscription channel?", msg.Data())
+	}
+	if blockEvent.Block.Height != height {
+		t.Fatalf("expected height %v, got %v", height, blockEvent.Block.Height)
+	}
 }

 func ensureNewBlockHeader(t *testing.T, blockCh <-chan tmpubsub.Message, height int64, blockHash tmbytes.HexBytes) {
 	t.Helper()
-	select {
-	case <-time.After(ensureTimeout):
-		t.Fatalf("Timeout expired while waiting for NewBlockHeader event")
-	case msg := <-blockCh:
-		blockHeaderEvent, ok := msg.Data().(types.EventDataNewBlockHeader)
-		if !ok {
-			t.Fatalf("expected a EventDataNewBlockHeader, got %T. Wrong subscription channel?",
-				msg.Data())
-		}
-		if blockHeaderEvent.Header.Height != height {
-			t.Fatalf("expected height %v, got %v", height, blockHeaderEvent.Header.Height)
-		}
-		if !bytes.Equal(blockHeaderEvent.Header.Hash(), blockHash) {
-			t.Fatalf("expected header %X, got %X", blockHash, blockHeaderEvent.Header.Hash())
-		}
-	}
+	msg := ensureMessageBeforeTimeout(t, blockCh, ensureTimeout)
+	blockHeaderEvent, ok := msg.Data().(types.EventDataNewBlockHeader)
+	if !ok {
+		t.Fatalf("expected a EventDataNewBlockHeader, got %T. Wrong subscription channel?", msg.Data())
+	}
+	if blockHeaderEvent.Header.Height != height {
+		t.Fatalf("expected height %v, got %v", height, blockHeaderEvent.Header.Height)
+	}
+	if !bytes.Equal(blockHeaderEvent.Header.Hash(), blockHash) {
+		t.Fatalf("expected header %X, got %X", blockHash, blockHeaderEvent.Header.Hash())
+	}
 }
@@ -673,25 +682,19 @@ func ensureRelock(t *testing.T, relockCh <-chan tmpubsub.Message, height int64,
 }

 func ensureProposal(t *testing.T, proposalCh <-chan tmpubsub.Message, height int64, round int32, propID types.BlockID) {
 	t.Helper()
-	select {
-	case <-time.After(ensureTimeout):
-		t.Fatalf("Timeout expired while waiting for NewProposal event")
-	case msg := <-proposalCh:
-		proposalEvent, ok := msg.Data().(types.EventDataCompleteProposal)
-		if !ok {
-			t.Fatalf("expected a EventDataCompleteProposal, got %T. Wrong subscription channel?",
-				msg.Data())
-		}
-		if proposalEvent.Height != height {
-			t.Fatalf("expected height %v, got %v", height, proposalEvent.Height)
-		}
-		if proposalEvent.Round != round {
-			t.Fatalf("expected round %v, got %v", round, proposalEvent.Round)
-		}
-		if !proposalEvent.BlockID.Equals(propID) {
-			t.Fatalf("Proposed block does not match expected block (%v != %v)", proposalEvent.BlockID, propID)
-		}
+	msg := ensureMessageBeforeTimeout(t, proposalCh, ensureTimeout)
+	proposalEvent, ok := msg.Data().(types.EventDataCompleteProposal)
+	if !ok {
+		t.Fatalf("expected a EventDataCompleteProposal, got %T. Wrong subscription channel?", msg.Data())
+	}
+	if proposalEvent.Height != height {
+		t.Fatalf("expected height %v, got %v", height, proposalEvent.Height)
+	}
+	if proposalEvent.Round != round {
+		t.Fatalf("expected round %v, got %v", round, proposalEvent.Round)
+	}
+	if !proposalEvent.BlockID.Equals(propID) {
+		t.Fatalf("Proposed block does not match expected block (%v != %v)", proposalEvent.BlockID, propID)
+	}
 }
@@ -708,44 +711,37 @@ func ensurePrevote(t *testing.T, voteCh <-chan tmpubsub.Message, height int64, r
 func ensureVote(t *testing.T, voteCh <-chan tmpubsub.Message, height int64, round int32,
 	voteType tmproto.SignedMsgType) {
 	t.Helper()
-	select {
-	case <-time.After(ensureTimeout):
-		t.Fatalf("Timeout expired while waiting for NewVote event")
-	case msg := <-voteCh:
-		voteEvent, ok := msg.Data().(types.EventDataVote)
-		if !ok {
-			t.Fatalf("expected a EventDataVote, got %T. Wrong subscription channel?",
-				msg.Data())
-		}
-		vote := voteEvent.Vote
-		if vote.Height != height {
-			t.Fatalf("expected height %v, got %v", height, vote.Height)
-		}
-		if vote.Round != round {
-			t.Fatalf("expected round %v, got %v", round, vote.Round)
-		}
-		if vote.Type != voteType {
-			t.Fatalf("expected type %v, got %v", voteType, vote.Type)
-		}
+	msg := ensureMessageBeforeTimeout(t, voteCh, ensureTimeout)
+	voteEvent, ok := msg.Data().(types.EventDataVote)
+	if !ok {
+		t.Fatalf("expected a EventDataVote, got %T. Wrong subscription channel?", msg.Data())
+	}
+	vote := voteEvent.Vote
+	if vote.Height != height {
+		t.Fatalf("expected height %v, got %v", height, vote.Height)
+	}
+	if vote.Round != round {
+		t.Fatalf("expected round %v, got %v", round, vote.Round)
+	}
+	if vote.Type != voteType {
+		t.Fatalf("expected type %v, got %v", voteType, vote.Type)
 	}
 }

 func ensurePrecommitTimeout(t *testing.T, ch <-chan tmpubsub.Message) {
 	t.Helper()
 	select {
 	case <-time.After(ensureTimeout):
 		t.Fatalf("Timeout expired while waiting for the Precommit to Timeout")
 	case <-ch:
 	}
 }

 func ensureNewEventOnChannel(t *testing.T, ch <-chan tmpubsub.Message) {
 	t.Helper()
-	select {
-	case <-time.After(ensureTimeout):
-		t.Fatalf("Timeout expired while waiting for new activity on the channel")
-	case <-ch:
-	}
+	ensureMessageBeforeTimeout(t, ch, ensureTimeout)
+}
+
+func ensureMessageBeforeTimeout(t *testing.T, ch <-chan tmpubsub.Message, to time.Duration) tmpubsub.Message {
+	t.Helper()
+	select {
+	case <-time.After(to):
+		t.Fatalf("Timeout expired while waiting for message")
+	case msg := <-ch:
+		return msg
+	}
+	panic("unreachable")
 }
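The new `ensureMessageBeforeTimeout` helper collapses the select/timeout boilerplate that each `ensure*` function previously repeated. The same pattern in isolation, as a generic stand-alone sketch (not the repository's code, just an illustration using the standard library):

```go
package pattern

import (
	"testing"
	"time"
)

// receiveBeforeTimeout returns the next message from ch, or fails the test
// if nothing arrives within the given duration — the consolidation the
// refactor above applies to every ensure* helper.
func receiveBeforeTimeout[T any](t *testing.T, ch <-chan T, to time.Duration) T {
	t.Helper()
	select {
	case <-time.After(to):
		t.Fatalf("timed out after %v waiting for message", to)
	case msg := <-ch:
		return msg
	}
	panic("unreachable") // t.Fatalf never returns, but the compiler needs this
}
```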
//-------------------------------------------------------------------------------

@@ -1,56 +1,336 @@
package consensus

import (
	"bytes"
	"context"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/tendermint/tendermint/abci/example/kvstore"
	"github.com/tendermint/tendermint/internal/eventbus"
	"github.com/tendermint/tendermint/libs/log"
	tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
	tmtimemocks "github.com/tendermint/tendermint/libs/time/mocks"
	tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
	"github.com/tendermint/tendermint/types"
)

// pbtsTestHarness constructs a Tendermint network that can be used for testing the
// implementation of the Proposer-Based Timestamps algorithm.
// It runs a series of consensus heights and captures timing of votes and events.
type pbtsTestHarness struct {
	// configuration options set by the user of the test harness.
	pbtsTestConfiguration

	// The Tendermint consensus state machine being run during
	// a run of the pbtsTestHarness.
	observedState *State

	// A stub for signing votes and messages using the key
	// from the observedState.
	observedValidator *validatorStub

	// A list of simulated validators that interact with the observedState and are
	// fully controlled by the test harness.
	otherValidators []*validatorStub

	// The mock time source used by all of the validator stubs in the test harness.
	// This mock clock allows the test harness to produce votes and blocks with arbitrary
	// timestamps.
	validatorClock *tmtimemocks.Source

	chainID string

	// channels for verifying that the observed validator completes certain actions.
	ensureProposalCh, roundCh, blockCh, ensureVoteCh <-chan tmpubsub.Message

	resultCh <-chan heightResult

	currentHeight int64
	currentRound  int32

	t   *testing.T
	ctx context.Context
}

type pbtsTestConfiguration struct {
	// The timestamp consensus parameters to be used by the state machine under test.
	timestampParams types.TimestampParams

	// The setting to use for the TimeoutPropose configuration parameter.
	timeoutPropose time.Duration

	// The timestamp of the first block produced by the network.
	genesisTime time.Time

	// The time at which the proposal at height 2 should be delivered.
	height2ProposalDeliverTime time.Time

	// The timestamp of the block proposed at height 2.
	height2ProposedBlockTime time.Time
}

func newPBTSTestHarness(ctx context.Context, t *testing.T, tc pbtsTestConfiguration) pbtsTestHarness {
	t.Helper()
	const validators = 4
	cfg := configSetup(t)
	clock := new(tmtimemocks.Source)
	cfg.Consensus.TimeoutPropose = tc.timeoutPropose
	consensusParams := types.DefaultConsensusParams()
	consensusParams.Timestamp = tc.timestampParams

	state, privVals := makeGenesisState(cfg, genesisStateArgs{
		Params:     consensusParams,
		Time:       tc.genesisTime,
		Validators: validators,
	})
	cs, err := newState(ctx, log.TestingLogger(), state, privVals[0], kvstore.NewApplication())
	require.NoError(t, err)
	vss := make([]*validatorStub, validators)
	for i := 0; i < validators; i++ {
		vss[i] = newValidatorStub(privVals[i], int32(i))
	}
	incrementHeight(vss[1:]...)

	for _, vs := range vss {
		vs.clock = clock
	}
	pubKey, err := vss[0].PrivValidator.GetPubKey(ctx)
	require.NoError(t, err)

	resultCh := registerResultCollector(ctx, t, cs.eventBus, pubKey.Address())

	return pbtsTestHarness{
		pbtsTestConfiguration: tc,
		observedValidator:     vss[0],
		observedState:         cs,
		otherValidators:       vss[1:],
		validatorClock:        clock,
		currentHeight:         1,
		chainID:               cfg.ChainID(),
		roundCh:               subscribe(ctx, t, cs.eventBus, types.EventQueryNewRound),
		ensureProposalCh:      subscribe(ctx, t, cs.eventBus, types.EventQueryCompleteProposal),
		blockCh:               subscribe(ctx, t, cs.eventBus, types.EventQueryNewBlock),
		ensureVoteCh:          subscribeToVoterBuffered(ctx, t, cs, pubKey.Address()),
		resultCh:              resultCh,
		t:                     t,
		ctx:                   ctx,
	}
}

func (p *pbtsTestHarness) genesisHeight() heightResult {
	p.validatorClock.On("Now").Return(p.height2ProposedBlockTime).Times(8)

	startTestRound(p.ctx, p.observedState, p.currentHeight, p.currentRound)
	ensureNewRound(p.t, p.roundCh, p.currentHeight, p.currentRound)
	propBlock, partSet := p.observedState.createProposalBlock()
	bid := types.BlockID{Hash: propBlock.Hash(), PartSetHeader: partSet.Header()}
	ensureProposal(p.t, p.ensureProposalCh, p.currentHeight, p.currentRound, bid)
	ensurePrevote(p.t, p.ensureVoteCh, p.currentHeight, p.currentRound)
	signAddVotes(p.ctx, p.observedState, tmproto.PrevoteType, p.chainID, bid, p.otherValidators...)

	signAddVotes(p.ctx, p.observedState, tmproto.PrecommitType, p.chainID, bid, p.otherValidators...)
	ensurePrecommit(p.t, p.ensureVoteCh, p.currentHeight, p.currentRound)

	ensureNewBlock(p.t, p.blockCh, p.currentHeight)
	p.currentHeight++
	incrementHeight(p.otherValidators...)
	return <-p.resultCh
}

func (p *pbtsTestHarness) height2() heightResult {
	signer := p.otherValidators[0].PrivValidator
	return p.nextHeight(signer, p.height2ProposalDeliverTime, p.height2ProposedBlockTime, time.Now())
}

// nolint: lll
func (p *pbtsTestHarness) nextHeight(proposer types.PrivValidator, deliverTime, proposedTime, nextProposedTime time.Time) heightResult {
	p.validatorClock.On("Now").Return(nextProposedTime).Times(8)

	ensureNewRound(p.t, p.roundCh, p.currentHeight, p.currentRound)

	b, _ := p.observedState.createProposalBlock()
	b.Height = p.currentHeight
	b.Header.Height = p.currentHeight
	b.Header.Time = proposedTime

	k, err := proposer.GetPubKey(context.Background())
	require.NoError(p.t, err)
	b.Header.ProposerAddress = k.Address()
	ps := b.MakePartSet(types.BlockPartSizeBytes)
	bid := types.BlockID{Hash: b.Hash(), PartSetHeader: ps.Header()}
	prop := types.NewProposal(p.currentHeight, 0, -1, bid, proposedTime)
	tp := prop.ToProto()

	if err := proposer.SignProposal(context.Background(), p.observedState.state.ChainID, tp); err != nil {
		p.t.Fatalf("error signing proposal: %s", err)
	}

	time.Sleep(time.Until(deliverTime))
	prop.Signature = tp.Signature
	if err := p.observedState.SetProposalAndBlock(prop, b, ps, "peerID"); err != nil {
		p.t.Fatal(err)
	}
	ensureProposal(p.t, p.ensureProposalCh, p.currentHeight, 0, bid)

	ensurePrevote(p.t, p.ensureVoteCh, p.currentHeight, p.currentRound)
	signAddVotes(p.ctx, p.observedState, tmproto.PrevoteType, p.chainID, bid, p.otherValidators...)

	signAddVotes(p.ctx, p.observedState, tmproto.PrecommitType, p.chainID, bid, p.otherValidators...)
	ensurePrecommit(p.t, p.ensureVoteCh, p.currentHeight, p.currentRound)

	p.currentHeight++
	incrementHeight(p.otherValidators...)
	return <-p.resultCh
}

// nolint: lll
func registerResultCollector(ctx context.Context, t *testing.T, eb *eventbus.EventBus, address []byte) <-chan heightResult {
	t.Helper()
	resultCh := make(chan heightResult, 2)
	var res heightResult
	if err := eb.Observe(ctx, func(msg tmpubsub.Message) error {
		ts := time.Now()
		vote := msg.Data().(types.EventDataVote)
		// we only fire for our own votes
		if !bytes.Equal(address, vote.Vote.ValidatorAddress) {
			return nil
		}
		if vote.Vote.Type != tmproto.PrevoteType {
			return nil
		}
		res.prevoteIssuedAt = ts
		res.prevote = vote.Vote
		resultCh <- res
		return nil
	}, types.EventQueryVote); err != nil {
		t.Fatalf("Failed to observe query %v: %v", types.EventQueryVote, err)
	}
	return resultCh
}

func (p *pbtsTestHarness) run() resultSet {
	p.genesisHeight()
	r2 := p.height2()
	return resultSet{
		height2: r2,
	}
}

type resultSet struct {
	height2 heightResult
}

type heightResult struct {
	prevote         *types.Vote
	prevoteIssuedAt time.Time
}

// TestReceiveProposalWaitsForPreviousBlockTime tests that a validator receiving
// a proposal waits until the previous block time passes before issuing a prevote.
// The test delivers the block to the validator after the configured `timeout-propose`,
// but before the proposer-based timestamp bound on block delivery and checks that
// the consensus algorithm correctly waits for the new block to be delivered
// and issues a prevote for it.
func TestReceiveProposalWaitsForPreviousBlockTime(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	initialTime := time.Now().Add(50 * time.Millisecond)
	cfg := pbtsTestConfiguration{
		timestampParams: types.TimestampParams{
			Precision: 100 * time.Millisecond,
			MsgDelay:  500 * time.Millisecond,
		},
		timeoutPropose:             50 * time.Millisecond,
		genesisTime:                initialTime,
		height2ProposalDeliverTime: initialTime.Add(450 * time.Millisecond),
		height2ProposedBlockTime:   initialTime.Add(350 * time.Millisecond),
	}

	pbtsTest := newPBTSTestHarness(ctx, t, cfg)
	results := pbtsTest.run()

	// Check that the validator waited until after the proposer-based timestamp
	// waitingTime bound.
	assert.True(t, results.height2.prevoteIssuedAt.After(cfg.height2ProposalDeliverTime))
	maxWaitingTime := cfg.genesisTime.Add(cfg.timestampParams.Precision).Add(cfg.timestampParams.MsgDelay)
	assert.True(t, results.height2.prevoteIssuedAt.Before(maxWaitingTime))

	// Check that the validator did not prevote for nil.
	assert.NotNil(t, results.height2.prevote.BlockID.Hash)
}
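The timing in this test and the slow-delivery one below comes down to a single bound. Here is a stand-alone sketch (illustration only, restating the constants from the configurations above in a throwaway package):

```go
package consensus_test

import "time"

// pbtsWaitBound restates the arithmetic behind the two PBTS tests: the
// validator waits for a proposal until previousBlockTime + Precision + MsgDelay.
func pbtsWaitBound(previousBlockTime time.Time) time.Time {
	precision := 100 * time.Millisecond
	msgDelay := 500 * time.Millisecond
	return previousBlockTime.Add(precision).Add(msgDelay) // genesisTime + 600ms
}

// In the test above, delivery at genesisTime+450ms is inside the bound, so the
// validator waits past timeout-propose (50ms) and prevotes for the block.
// In the slow-delivery test below, delivery at +610ms exceeds the bound, so
// the validator times out and prevotes nil.
```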
// TestReceiveProposalTimesOutOnSlowDelivery tests that a validator receiving
// a proposal times out and prevotes nil if the block is not delivered within
// the proposer-based timestamp algorithm's waitingTime bound.
// The test delivers the block to the validator after the previous block's time
// and after the proposer-based timestamp bound on block delivery.
// The test then checks that the validator correctly waited for the new block
// and prevoted nil after timing out.
func TestReceiveProposalTimesOutOnSlowDelivery(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	initialTime := time.Now()
	cfg := pbtsTestConfiguration{
		timestampParams: types.TimestampParams{
			Precision: 100 * time.Millisecond,
			MsgDelay:  500 * time.Millisecond,
		},
		timeoutPropose:             50 * time.Millisecond,
		genesisTime:                initialTime,
		height2ProposalDeliverTime: initialTime.Add(610 * time.Millisecond),
		height2ProposedBlockTime:   initialTime.Add(350 * time.Millisecond),
	}

	pbtsTest := newPBTSTestHarness(ctx, t, cfg)
	results := pbtsTest.run()

	// Check that the validator waited until after the proposer-based timestamp
	// waitingTime bound.
	maxWaitingTime := initialTime.Add(cfg.timestampParams.Precision).Add(cfg.timestampParams.MsgDelay)
	assert.True(t, results.height2.prevoteIssuedAt.After(maxWaitingTime))

	// Ensure that the validator issued a prevote for nil.
	assert.Nil(t, results.height2.prevote.BlockID.Hash)
}
func TestProposerWaitTime(t *testing.T) {
	genesisTime, err := time.Parse(time.RFC3339, "2019-03-13T23:00:00Z")
	require.NoError(t, err)
	testCases := []struct {
-		name         string
-		blockTime    time.Time
-		localTime    time.Time
-		expectedWait time.Duration
+		name              string
+		previousBlockTime time.Time
+		localTime         time.Time
+		expectedWait      time.Duration
	}{
		{
-			name:         "block time greater than local time",
-			blockTime:    genesisTime.Add(5 * time.Nanosecond),
-			localTime:    genesisTime.Add(1 * time.Nanosecond),
-			expectedWait: 4 * time.Nanosecond,
+			name:              "block time greater than local time",
+			previousBlockTime: genesisTime.Add(5 * time.Nanosecond),
+			localTime:         genesisTime.Add(1 * time.Nanosecond),
+			expectedWait:      4 * time.Nanosecond,
		},
		{
-			name:         "local time greater than block time",
-			blockTime:    genesisTime.Add(1 * time.Nanosecond),
-			localTime:    genesisTime.Add(5 * time.Nanosecond),
-			expectedWait: 0,
+			name:              "local time greater than block time",
+			previousBlockTime: genesisTime.Add(1 * time.Nanosecond),
+			localTime:         genesisTime.Add(5 * time.Nanosecond),
+			expectedWait:      0,
		},
		{
-			name:         "both times equal",
-			blockTime:    genesisTime.Add(5 * time.Nanosecond),
-			localTime:    genesisTime.Add(5 * time.Nanosecond),
-			expectedWait: 0,
+			name:              "both times equal",
+			previousBlockTime: genesisTime.Add(5 * time.Nanosecond),
+			localTime:         genesisTime.Add(5 * time.Nanosecond),
+			expectedWait:      0,
		},
	}
	for _, testCase := range testCases {
		t.Run(testCase.name, func(t *testing.T) {
-			b := types.Block{
-				Header: types.Header{
-					Time: testCase.blockTime,
-				},
-			}
-
			mockSource := new(tmtimemocks.Source)
			mockSource.On("Now").Return(testCase.localTime)

-			ti := proposerWaitTime(mockSource, b.Header)
+			ti := proposerWaitTime(mockSource, testCase.previousBlockTime)
			assert.Equal(t, testCase.expectedWait, ti)
		})
	}
}
@@ -94,11 +374,6 @@ func TestProposalTimeout(t *testing.T) {
 	}
 	for _, testCase := range testCases {
 		t.Run(testCase.name, func(t *testing.T) {
-			b := types.Block{
-				Header: types.Header{
-					Time: testCase.previousBlockTime,
-				},
-			}
-
 			mockSource := new(tmtimemocks.Source)
 			mockSource.On("Now").Return(testCase.localTime)

@@ -108,7 +383,7 @@ func TestProposalTimeout(t *testing.T) {
 				MsgDelay:  testCase.msgDelay,
 			}

-			ti := proposalStepWaitingTime(mockSource, b.Header, tp)
+			ti := proposalStepWaitingTime(mockSource, testCase.previousBlockTime, tp)
 			assert.Equal(t, testCase.expectedDuration, ti)
 		})
 	}
@@ -384,7 +384,7 @@ func setupSimulator(ctx context.Context, t *testing.T) *simulatorTestSuite {
 	propBlockParts := propBlock.MakePartSet(partSize)
 	blockID := types.BlockID{Hash: propBlock.Hash(), PartSetHeader: propBlockParts.Header()}

-	proposal := types.NewProposal(vss[1].Height, round, -1, blockID)
+	proposal := types.NewProposal(vss[1].Height, round, -1, blockID, propBlock.Header.Time)
 	p := proposal.ToProto()
 	if err := vss[1].SignProposal(ctx, cfg.ChainID(), p); err != nil {
 		t.Fatal("failed to sign bad proposal", err)

@@ -416,7 +416,7 @@ func setupSimulator(ctx context.Context, t *testing.T) *simulatorTestSuite {
 	propBlockParts = propBlock.MakePartSet(partSize)
 	blockID = types.BlockID{Hash: propBlock.Hash(), PartSetHeader: propBlockParts.Header()}

-	proposal = types.NewProposal(vss[2].Height, round, -1, blockID)
+	proposal = types.NewProposal(vss[2].Height, round, -1, blockID, propBlock.Header.Time)
 	p = proposal.ToProto()
 	if err := vss[2].SignProposal(ctx, cfg.ChainID(), p); err != nil {
 		t.Fatal("failed to sign bad proposal", err)

@@ -475,7 +475,7 @@ func setupSimulator(ctx context.Context, t *testing.T) *simulatorTestSuite {

 	selfIndex := valIndexFn(0)

-	proposal = types.NewProposal(vss[3].Height, round, -1, blockID)
+	proposal = types.NewProposal(vss[3].Height, round, -1, blockID, propBlock.Header.Time)
 	p = proposal.ToProto()
 	if err := vss[3].SignProposal(ctx, cfg.ChainID(), p); err != nil {
 		t.Fatal("failed to sign bad proposal", err)

@@ -540,7 +540,7 @@ func setupSimulator(ctx context.Context, t *testing.T) *simulatorTestSuite {
 	sort.Sort(ValidatorStubsByPower(newVss))

 	selfIndex = valIndexFn(0)
-	proposal = types.NewProposal(vss[1].Height, round, -1, blockID)
+	proposal = types.NewProposal(vss[1].Height, round, -1, blockID, propBlock.Header.Time)
 	p = proposal.ToProto()
 	if err := vss[1].SignProposal(ctx, cfg.ChainID(), p); err != nil {
 		t.Fatal("failed to sign bad proposal", err)
@@ -1119,8 +1119,12 @@ func (cs *State) enterPropose(height int64, round int32) {
 		}
 	}()

+	//nolint: lll
+	waitingTime := proposalStepWaitingTime(tmtime.DefaultSource{}, cs.state.LastBlockTime, cs.state.ConsensusParams.Timestamp)
+	proposalTimeout := maxDuration(cs.config.Propose(round), waitingTime)
+
 	// If we don't get the proposal and all block parts quick enough, enterPrevote
-	cs.scheduleTimeout(cs.config.Propose(round), height, round, cstypes.RoundStepPropose)
+	cs.scheduleTimeout(proposalTimeout, height, round, cstypes.RoundStepPropose)

 	// Nothing more to do if we're not a validator
 	if cs.privValidator == nil {
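The net effect of this hunk is that the propose step no longer waits only for the configured `timeout-propose`; it waits for whichever of the two durations is larger. A stand-alone sketch with illustrative numbers (the 3s/600ms values below are made up, not from the PR):

```go
package consensus_test

import "time"

// proposalTimeoutExample mirrors the scheduling rule introduced above:
// the propose step waits for max(configured timeout, PBTS waiting time).
func proposalTimeoutExample() time.Duration {
	configTimeout := 3 * time.Second      // stand-in for cs.config.Propose(round)
	waitingTime := 600 * time.Millisecond // stand-in for proposalStepWaitingTime(...)
	if configTimeout >= waitingTime {     // same comparison as maxDuration below
		return configTimeout // 3s dominates here; a large MsgDelay could win instead
	}
	return waitingTime
}
```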
@@ -1188,7 +1192,7 @@ func (cs *State) defaultDecideProposal(height int64, round int32) {

 	// Make proposal
 	propBlockID := types.BlockID{Hash: block.Hash(), PartSetHeader: blockParts.Header()}
-	proposal := types.NewProposal(height, round, cs.ValidRound, propBlockID)
+	proposal := types.NewProposal(height, round, cs.ValidRound, propBlockID, block.Header.Time)
 	p := proposal.ToProto()

 	// wait the max amount we would wait for a proposal
@@ -2423,10 +2427,10 @@ func repairWalFile(src, dst string) error {
 // Block times must be monotonically increasing, so if the block time of the previous
 // block is larger than the proposer's current time, then the proposer will sleep
 // until its local clock exceeds the previous block time.
-func proposerWaitTime(lt tmtime.Source, h types.Header) time.Duration {
+func proposerWaitTime(lt tmtime.Source, bt time.Time) time.Duration {
 	t := lt.Now()
-	if h.Time.After(t) {
-		return h.Time.Sub(t)
+	if bt.After(t) {
+		return bt.Sub(t)
 	}
 	return 0
 }

@@ -2444,11 +2448,18 @@ func proposerWaitTime(lt tmtime.Source, bt time.Time) time.Duration {
 // The result of proposalStepWaitingTime is compared with the configured `timeout-propose` duration,
 // and the validator waits for whichever duration is larger before advancing to the next step
 // and prevoting nil.
-func proposalStepWaitingTime(lt tmtime.Source, h types.Header, tp types.TimestampParams) time.Duration {
+func proposalStepWaitingTime(lt tmtime.Source, bt time.Time, tp types.TimestampParams) time.Duration {
 	t := lt.Now()
-	wt := h.Time.Add(tp.Precision).Add(tp.MsgDelay)
+	wt := bt.Add(tp.Precision).Add(tp.MsgDelay)
 	if t.After(wt) {
 		return 0
 	}
 	return wt.Sub(t)
 }
+
+func maxDuration(d1, d2 time.Duration) time.Duration {
+	if d1 >= d2 {
+		return d1
+	}
+	return d2
+}
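For a concrete feel for the two refactored helpers, here is a stand-alone sketch with a fixed clock standing in for `tmtime.Source` (the interface is assumed to expose only `Now()`, as the code above implies):

```go
package consensus_test

import "time"

// fixedClock is a stand-in for tmtime.Source (assumed to expose Now()).
type fixedClock struct{ t time.Time }

func (c fixedClock) Now() time.Time { return c.t }

// waitTimes re-derives the behavior of proposerWaitTime and
// proposalStepWaitingTime for concrete inputs.
func waitTimes() (proposerWait, proposalWait time.Duration) {
	now := time.Date(2021, 1, 1, 12, 0, 0, 0, time.UTC)
	clock := fixedClock{t: now}

	// proposerWaitTime: previous block time 2s in the future -> sleep 2s
	// so local time catches up and block times stay monotonic.
	bt := now.Add(2 * time.Second)
	if bt.After(clock.Now()) {
		proposerWait = bt.Sub(clock.Now())
	}

	// proposalStepWaitingTime: bt + Precision(100ms) + MsgDelay(500ms).
	wt := bt.Add(100 * time.Millisecond).Add(500 * time.Millisecond)
	if !clock.Now().After(wt) {
		proposalWait = wt.Sub(clock.Now()) // 2.6s here
	}
	return proposerWait, proposalWait
}
```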
@@ -246,7 +246,7 @@ func TestStateBadProposal(t *testing.T) {
 	propBlock.AppHash = stateHash
 	propBlockParts := propBlock.MakePartSet(partSize)
 	blockID := types.BlockID{Hash: propBlock.Hash(), PartSetHeader: propBlockParts.Header()}
-	proposal := types.NewProposal(vs2.Height, round, -1, blockID)
+	proposal := types.NewProposal(vs2.Height, round, -1, blockID, propBlock.Header.Time)
 	p := proposal.ToProto()
 	if err := vs2.SignProposal(ctx, config.ChainID(), p); err != nil {
 		t.Fatal("failed to sign bad proposal", err)

@@ -306,7 +306,7 @@ func TestStateOversizedBlock(t *testing.T) {

 	propBlockParts := propBlock.MakePartSet(partSize)
 	blockID := types.BlockID{Hash: propBlock.Hash(), PartSetHeader: propBlockParts.Header()}
-	proposal := types.NewProposal(height, round, -1, blockID)
+	proposal := types.NewProposal(height, round, -1, blockID, propBlock.Header.Time)
 	p := proposal.ToProto()
 	if err := vs2.SignProposal(ctx, config.ChainID(), p); err != nil {
 		t.Fatal("failed to sign bad proposal", err)

@@ -856,7 +856,7 @@ func TestStateLock_POLRelock(t *testing.T) {
 	t.Log("### Starting Round 1")
 	incrementRound(vs2, vs3, vs4)
 	round++
-	propR1 := types.NewProposal(height, round, cs1.ValidRound, blockID)
+	propR1 := types.NewProposal(height, round, cs1.ValidRound, blockID, theBlock.Header.Time)
 	p := propR1.ToProto()
 	if err := vs2.SignProposal(ctx, cs1.state.ChainID, p); err != nil {
 		t.Fatalf("error signing proposal: %s", err)

@@ -1588,7 +1588,7 @@ func TestStateLock_POLSafety2(t *testing.T) {

 	round++ // moving to the next round
 	// in round 2 we see the polkad block from round 0
-	newProp := types.NewProposal(height, round, 0, propBlockID0)
+	newProp := types.NewProposal(height, round, 0, propBlockID0, propBlock0.Header.Time)
 	p := newProp.ToProto()
 	if err := vs3.SignProposal(ctx, config.ChainID(), p); err != nil {
 		t.Fatal(err)

@@ -1730,7 +1730,7 @@ func TestState_PrevotePOLFromPreviousRound(t *testing.T) {
 	t.Log("### Starting Round 2")
 	incrementRound(vs2, vs3, vs4)
 	round++
-	propR2 := types.NewProposal(height, round, 1, r1BlockID)
+	propR2 := types.NewProposal(height, round, 1, r1BlockID, propBlockR1.Header.Time)
 	p := propR2.ToProto()
 	if err := vs3.SignProposal(ctx, cs1.state.ChainID, p); err != nil {
 		t.Fatalf("error signing proposal: %s", err)

@@ -2291,7 +2291,7 @@ func TestStartNextHeightCorrectlyAfterTimeout(t *testing.T) {
 	signAddVotes(ctx, cs1, tmproto.PrecommitType, config.ChainID(), blockID, vs3)

 	// wait till timeout occurs
-	ensurePrecommitTimeout(t, precommitTimeoutCh)
+	ensureNewTimeout(t, precommitTimeoutCh, height, round, cs1.config.TimeoutPrecommit.Nanoseconds())

 	ensureNewRound(t, newRoundCh, height, round+1)

@@ -2529,6 +2529,7 @@ func TestStateOutputsBlockPartsStats(t *testing.T) {
 	cs, _, err := makeState(ctx, config, logger, 1)
 	require.NoError(t, err)
 	peerID, err := types.NewNodeID("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
 	require.NoError(t, err)

 	// 1) new block part
 	parts := types.NewPartSetFromData(tmrand.Bytes(100), 10)
@@ -1,4 +0,0 @@ (deleted file)
-ARG TENDERMINT_VERSION=latest
-FROM tendermint/tendermint:${TENDERMINT_VERSION}
-
-COPY tm-signer-harness /usr/bin/tm-signer-harness
@@ -1,21 +0,0 @@ (deleted file)
-.PHONY: build install docker-image
-
-TENDERMINT_VERSION?=latest
-BUILD_TAGS?='tendermint'
-VERSION := $(shell git describe --always)
-BUILD_FLAGS = -ldflags "-X github.com/tendermint/tendermint/version.TMCoreSemVer=$(VERSION)"
-
-.DEFAULT_GOAL := build
-
-build:
-	CGO_ENABLED=0 go build $(BUILD_FLAGS) -tags $(BUILD_TAGS) -o ../../build/tm-signer-harness main.go
-
-install:
-	CGO_ENABLED=0 go install $(BUILD_FLAGS) -tags $(BUILD_TAGS) .
-
-docker-image:
-	GOOS=linux GOARCH=amd64 go build $(BUILD_FLAGS) -tags $(BUILD_TAGS) -o tm-signer-harness main.go
-	docker build \
-		--build-arg TENDERMINT_VERSION=$(TENDERMINT_VERSION) \
-		-t tendermint/tm-signer-harness:$(TENDERMINT_VERSION) .
-	rm -rf tm-signer-harness
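The `BUILD_FLAGS` line stamps the binary's version through the linker. Below is a minimal illustration of the variable being overridden (the file layout is hypothetical, but the `-X` target path matches the Makefile above):

```go
// version/version.go — hypothetical minimal file showing the variable the
// Makefile overrides at link time with:
//   -ldflags "-X github.com/tendermint/tendermint/version.TMCoreSemVer=$(VERSION)"
package version

// TMCoreSemVer defaults to "dev" and is replaced by `git describe --always`
// output when the binary is built through the Makefile above.
var TMCoreSemVer = "dev"
```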
@@ -1,5 +0,0 @@ (deleted file)
-# tm-signer-harness
-
-See the [`tm-signer-harness`
-documentation](https://tendermint.com/docs/tools/remote-signer-validation.html)
-for more details.
@@ -1,427 +0,0 @@
|
||||
package internal
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"fmt"
|
||||
"net"
|
||||
"os"
|
||||
"os/signal"
|
||||
"time"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto/tmhash"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
"github.com/tendermint/tendermint/internal/state"
|
||||
"github.com/tendermint/tendermint/privval"
|
||||
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
tmnet "github.com/tendermint/tendermint/libs/net"
|
||||
tmos "github.com/tendermint/tendermint/libs/os"
|
||||
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
// Test harness error codes (which act as exit codes when the test harness fails).
|
||||
const (
|
||||
NoError int = iota // 0
|
||||
ErrInvalidParameters // 1
|
||||
ErrMaxAcceptRetriesReached // 2
|
||||
ErrFailedToLoadGenesisFile // 3
|
||||
ErrFailedToCreateListener // 4
|
||||
ErrFailedToStartListener // 5
|
||||
ErrInterrupted // 6
|
||||
ErrOther // 7
|
||||
ErrTestPublicKeyFailed // 8
|
||||
ErrTestSignProposalFailed // 9
|
||||
ErrTestSignVoteFailed // 10
|
||||
)
|
||||
|
||||
var voteTypes = []tmproto.SignedMsgType{tmproto.PrevoteType, tmproto.PrecommitType}
|
||||
|
||||
// TestHarnessError allows us to keep track of which exit code should be used
|
||||
// when exiting the main program.
|
||||
type TestHarnessError struct {
|
||||
Code int // The exit code to return
|
||||
Err error // The original error
|
||||
Info string // Any additional information
|
||||
}
|
||||
|
||||
var _ error = (*TestHarnessError)(nil)
|
||||
|
||||
// TestHarness allows for testing of a remote signer to ensure compatibility
|
||||
// with this version of Tendermint.
|
||||
type TestHarness struct {
|
||||
addr string
|
||||
signerClient *privval.SignerClient
|
||||
fpv *privval.FilePV
|
||||
chainID string
|
||||
acceptRetries int
|
||||
logger log.Logger
|
||||
exitWhenComplete bool
|
||||
exitCode int
|
||||
}
|
||||
|
||||
// TestHarnessConfig provides configuration to set up a remote signer test
|
||||
// harness.
|
||||
type TestHarnessConfig struct {
|
||||
BindAddr string
|
||||
|
||||
KeyFile string
|
||||
StateFile string
|
||||
GenesisFile string
|
||||
|
||||
AcceptDeadline time.Duration
|
||||
ConnDeadline time.Duration
|
||||
AcceptRetries int
|
||||
|
||||
SecretConnKey ed25519.PrivKey
|
||||
|
||||
ExitWhenComplete bool // Whether or not to call os.Exit when the harness has completed.
|
||||
}
|
||||
|
||||
// timeoutError can be used to check if an error returned from the netp package
|
||||
// was due to a timeout.
|
||||
type timeoutError interface {
|
||||
Timeout() bool
|
||||
}
|
||||
|
||||
// NewTestHarness will load Tendermint data from the given files (including
|
||||
// validator public/private keypairs and chain details) and create a new
|
||||
// harness.
|
||||
func NewTestHarness(ctx context.Context, logger log.Logger, cfg TestHarnessConfig) (*TestHarness, error) {
|
||||
keyFile := ExpandPath(cfg.KeyFile)
|
||||
stateFile := ExpandPath(cfg.StateFile)
|
||||
logger.Info("Loading private validator configuration", "keyFile", keyFile, "stateFile", stateFile)
|
||||
// NOTE: LoadFilePV ultimately calls os.Exit on failure. No error will be
|
||||
// returned if this call fails.
|
||||
fpv, err := privval.LoadFilePV(keyFile, stateFile)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
genesisFile := ExpandPath(cfg.GenesisFile)
|
||||
logger.Info("Loading chain ID from genesis file", "genesisFile", genesisFile)
|
||||
st, err := state.MakeGenesisDocFromFile(genesisFile)
|
||||
if err != nil {
|
||||
return nil, newTestHarnessError(ErrFailedToLoadGenesisFile, err, genesisFile)
|
||||
}
|
||||
logger.Info("Loaded genesis file", "chainID", st.ChainID)
|
||||
|
||||
spv, err := newTestHarnessListener(logger, cfg)
|
||||
if err != nil {
|
||||
return nil, newTestHarnessError(ErrFailedToCreateListener, err, "")
|
||||
}
|
||||
|
||||
signerClient, err := privval.NewSignerClient(ctx, spv, st.ChainID)
|
||||
if err != nil {
|
||||
return nil, newTestHarnessError(ErrFailedToCreateListener, err, "")
|
||||
}
|
||||
|
||||
return &TestHarness{
|
||||
addr: cfg.BindAddr,
|
||||
signerClient: signerClient,
|
||||
fpv: fpv,
|
||||
chainID: st.ChainID,
|
||||
acceptRetries: cfg.AcceptRetries,
|
||||
logger: logger,
|
||||
exitWhenComplete: cfg.ExitWhenComplete,
|
||||
exitCode: 0,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Run will execute the tests associated with this test harness. The intention
|
||||
// here is to call this from one's `main` function, as the way it succeeds or
|
||||
// fails at present is to call os.Exit() with an exit code related to the error
|
||||
// that caused the tests to fail, or exit code 0 on success.
|
||||
func (th *TestHarness) Run() {
|
||||
c := make(chan os.Signal, 1)
|
||||
signal.Notify(c, os.Interrupt)
|
||||
go func() {
|
||||
for sig := range c {
|
||||
th.logger.Info("Caught interrupt, terminating...", "sig", sig)
|
||||
th.Shutdown(newTestHarnessError(ErrInterrupted, nil, ""))
|
||||
}
|
||||
}()
|
||||
|
||||
th.logger.Info("Starting test harness")
|
||||
accepted := false
|
||||
var startErr error
|
||||
|
||||
for acceptRetries := th.acceptRetries; acceptRetries > 0; acceptRetries-- {
|
||||
th.logger.Info("Attempting to accept incoming connection", "acceptRetries", acceptRetries)
|
||||
|
||||
if err := th.signerClient.WaitForConnection(10 * time.Millisecond); err != nil {
|
||||
// if it wasn't a timeout error
|
||||
if _, ok := err.(timeoutError); !ok {
|
||||
th.logger.Error("Failed to start listener", "err", err)
|
||||
th.Shutdown(newTestHarnessError(ErrFailedToStartListener, err, ""))
|
||||
				// we need the return statements in case this is being run
				// from a unit test - otherwise this function will just die
				// when os.Exit is called
				return
			}
			startErr = err
		} else {
			th.logger.Info("Accepted external connection")
			accepted = true
			break
		}
	}
	if !accepted {
		th.logger.Error("Maximum accept retries reached", "acceptRetries", th.acceptRetries)
		th.Shutdown(newTestHarnessError(ErrMaxAcceptRetriesReached, startErr, ""))
		return
	}

	// Run the tests
	if err := th.TestPublicKey(); err != nil {
		th.Shutdown(err)
		return
	}
	if err := th.TestSignProposal(); err != nil {
		th.Shutdown(err)
		return
	}
	if err := th.TestSignVote(); err != nil {
		th.Shutdown(err)
		return
	}
	th.logger.Info("SUCCESS! All tests passed.")
	th.Shutdown(nil)
}

// TestPublicKey just validates that we can (1) fetch the public key from the
// remote signer, and (2) it matches the public key we've configured for our
// local Tendermint version.
func (th *TestHarness) TestPublicKey() error {
	th.logger.Info("TEST: Public key of remote signer")
	fpvk, err := th.fpv.GetPubKey(context.Background())
	if err != nil {
		return err
	}
	th.logger.Info("Local", "pubKey", fpvk)
	sck, err := th.signerClient.GetPubKey(context.Background())
	if err != nil {
		return err
	}
	th.logger.Info("Remote", "pubKey", sck)
	if !bytes.Equal(fpvk.Bytes(), sck.Bytes()) {
		th.logger.Error("FAILED: Local and remote public keys do not match")
		return newTestHarnessError(ErrTestPublicKeyFailed, nil, "")
	}
	return nil
}

// TestSignProposal makes sure the remote signer can successfully sign
// proposals.
func (th *TestHarness) TestSignProposal() error {
	th.logger.Info("TEST: Signing of proposals")
	// sha256 hash of "hash"
	hash := tmhash.Sum([]byte("hash"))
	prop := &types.Proposal{
		Type:     tmproto.ProposalType,
		Height:   100,
		Round:    0,
		POLRound: -1,
		BlockID: types.BlockID{
			Hash: hash,
			PartSetHeader: types.PartSetHeader{
				Hash:  hash,
				Total: 1000000,
			},
		},
		Timestamp: time.Now(),
	}
	p := prop.ToProto()
	propBytes := types.ProposalSignBytes(th.chainID, p)
	if err := th.signerClient.SignProposal(context.Background(), th.chainID, p); err != nil {
		th.logger.Error("FAILED: Signing of proposal", "err", err)
		return newTestHarnessError(ErrTestSignProposalFailed, err, "")
	}
	prop.Signature = p.Signature
	th.logger.Debug("Signed proposal", "prop", prop)
	// first check that it's a basically valid proposal
	if err := prop.ValidateBasic(); err != nil {
		th.logger.Error("FAILED: Signed proposal is invalid", "err", err)
		return newTestHarnessError(ErrTestSignProposalFailed, err, "")
	}
	sck, err := th.signerClient.GetPubKey(context.Background())
	if err != nil {
		return err
	}
	// now validate the signature on the proposal
	if sck.VerifySignature(propBytes, prop.Signature) {
		th.logger.Info("Successfully validated proposal signature")
	} else {
		th.logger.Error("FAILED: Proposal signature validation failed")
		return newTestHarnessError(ErrTestSignProposalFailed, nil, "signature validation failed")
	}
	return nil
}

// TestSignVote makes sure the remote signer can successfully sign all kinds of
// votes.
func (th *TestHarness) TestSignVote() error {
	th.logger.Info("TEST: Signing of votes")
	for _, voteType := range voteTypes {
		th.logger.Info("Testing vote type", "type", voteType)
		hash := tmhash.Sum([]byte("hash"))
		vote := &types.Vote{
			Type:   voteType,
			Height: 101,
			Round:  0,
			BlockID: types.BlockID{
				Hash: hash,
				PartSetHeader: types.PartSetHeader{
					Hash:  hash,
					Total: 1000000,
				},
			},
			ValidatorIndex:   0,
			ValidatorAddress: tmhash.SumTruncated([]byte("addr")),
			Timestamp:        time.Now(),
		}
		v := vote.ToProto()
		voteBytes := types.VoteSignBytes(th.chainID, v)
		// sign the vote
		if err := th.signerClient.SignVote(context.Background(), th.chainID, v); err != nil {
			th.logger.Error("FAILED: Signing of vote", "err", err)
			return newTestHarnessError(ErrTestSignVoteFailed, err, fmt.Sprintf("voteType=%d", voteType))
		}
		vote.Signature = v.Signature
		th.logger.Debug("Signed vote", "vote", vote)
		// validate the contents of the vote
		if err := vote.ValidateBasic(); err != nil {
			th.logger.Error("FAILED: Signed vote is invalid", "err", err)
			return newTestHarnessError(ErrTestSignVoteFailed, err, fmt.Sprintf("voteType=%d", voteType))
		}
		sck, err := th.signerClient.GetPubKey(context.Background())
		if err != nil {
			return err
		}

		// now validate the signature on the vote
		if sck.VerifySignature(voteBytes, vote.Signature) {
			th.logger.Info("Successfully validated vote signature", "type", voteType)
		} else {
			th.logger.Error("FAILED: Vote signature validation failed", "type", voteType)
			return newTestHarnessError(ErrTestSignVoteFailed, nil, "signature validation failed")
		}
	}
	return nil
}

// Shutdown will kill the test harness and attempt to close all open sockets
// gracefully. If the supplied error is nil, it is assumed that the exit code
// should be 0. If err is not nil, it will exit with an exit code related to the
// error.
func (th *TestHarness) Shutdown(err error) {
	var exitCode int

	if err == nil {
		exitCode = NoError
	} else if therr, ok := err.(*TestHarnessError); ok {
		exitCode = therr.Code
	} else {
		exitCode = ErrOther
	}
	th.exitCode = exitCode

	// in case sc.Stop() takes too long
	if th.exitWhenComplete {
		go func() {
			time.Sleep(5 * time.Second)
			th.logger.Error("Forcibly exiting program after timeout")
			os.Exit(exitCode)
		}()
	}

	err = th.signerClient.Close()
	if err != nil {
		th.logger.Error("Failed to cleanly stop listener", "err", err)
	}

	if th.exitWhenComplete {
		os.Exit(exitCode)
	}
}

// newTestHarnessListener creates the listener endpoint that the harness uses
// to accept connections from the remote signer under test.
func newTestHarnessListener(logger log.Logger, cfg TestHarnessConfig) (*privval.SignerListenerEndpoint, error) {
	proto, addr := tmnet.ProtocolAndAddress(cfg.BindAddr)
	if proto == "unix" {
		// make sure the socket doesn't exist - if so, try to delete it
		if tmos.FileExists(addr) {
			if err := os.Remove(addr); err != nil {
				logger.Error("Failed to remove existing Unix domain socket", "addr", addr)
				return nil, err
			}
		}
	}
	ln, err := net.Listen(proto, addr)
	if err != nil {
		return nil, err
	}
	logger.Info("Listening", "proto", proto, "addr", addr)
	var svln net.Listener
	switch proto {
	case "unix":
		unixLn := privval.NewUnixListener(ln)
		privval.UnixListenerTimeoutAccept(cfg.AcceptDeadline)(unixLn)
		privval.UnixListenerTimeoutReadWrite(cfg.ConnDeadline)(unixLn)
		svln = unixLn
	case "tcp":
		tcpLn := privval.NewTCPListener(ln, cfg.SecretConnKey)
		privval.TCPListenerTimeoutAccept(cfg.AcceptDeadline)(tcpLn)
		privval.TCPListenerTimeoutReadWrite(cfg.ConnDeadline)(tcpLn)
		logger.Info("Resolved TCP address for listener", "addr", tcpLn.Addr())
		svln = tcpLn
	default:
		_ = ln.Close()
		logger.Error("Unsupported protocol (must be unix:// or tcp://)", "proto", proto)
		return nil, newTestHarnessError(ErrInvalidParameters, nil, fmt.Sprintf("Unsupported protocol: %s", proto))
	}
	return privval.NewSignerListenerEndpoint(logger, svln), nil
}
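
The switch above is driven entirely by the scheme of `cfg.BindAddr`. As a quick, illustrative sketch (the Unix socket path here is a made-up example), this is how `tmnet.ProtocolAndAddress` splits the two supported address forms:

```go
package main

import (
	"fmt"

	tmnet "github.com/tendermint/tendermint/libs/net"
)

func main() {
	// TCP form: protocol "tcp", address "127.0.0.1:0" (port 0 = pick a free port).
	fmt.Println(tmnet.ProtocolAndAddress("tcp://127.0.0.1:0"))

	// Unix-socket form: protocol "unix", address "/tmp/harness.sock".
	fmt.Println(tmnet.ProtocolAndAddress("unix:///tmp/harness.sock"))
}
```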

func newTestHarnessError(code int, err error, info string) *TestHarnessError {
	return &TestHarnessError{
		Code: code,
		Err:  err,
		Info: info,
	}
}

func (e *TestHarnessError) Error() string {
	var msg string
	switch e.Code {
	case ErrInvalidParameters:
		msg = "Invalid parameters supplied to application"
	case ErrMaxAcceptRetriesReached:
		msg = "Maximum accept retries reached"
	case ErrFailedToLoadGenesisFile:
		msg = "Failed to load genesis file"
	case ErrFailedToCreateListener:
		msg = "Failed to create listener"
	case ErrFailedToStartListener:
		msg = "Failed to start listener"
	case ErrInterrupted:
		msg = "Interrupted"
	case ErrTestPublicKeyFailed:
		msg = "Public key validation test failed"
	case ErrTestSignProposalFailed:
		msg = "Proposal signing validation test failed"
	case ErrTestSignVoteFailed:
		msg = "Vote signing validation test failed"
	default:
		msg = "Unknown error"
	}
	if len(e.Info) > 0 {
		msg = fmt.Sprintf("%s: %s", msg, e.Info)
	}
	if e.Err != nil {
		msg = fmt.Sprintf("%s (original error: %s)", msg, e.Err.Error())
	}
	return msg
}
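
To make the error-to-exit-code plumbing concrete, here is a small hypothetical helper (not part of the original source) showing how a `TestHarnessError` carries both a message and the code that `Shutdown` ultimately hands to `os.Exit`; the `Err*` constants are defined elsewhere in this package:

```go
package internal

import "fmt"

// demoExitCode is an illustrative helper (not in the original source). A
// TestHarnessError renders a code-specific message plus any extra info, and
// its Code field becomes the harness's process exit code.
func demoExitCode() {
	err := newTestHarnessError(ErrTestSignVoteFailed, nil, "voteType=2")
	fmt.Println(err.Error()) // Vote signing validation test failed: voteType=2
	fmt.Println(err.Code)    // the exit code Shutdown would pass to os.Exit
}
```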
@@ -1,226 +0,0 @@
package internal

import (
	"context"
	"fmt"
	"os"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/tendermint/tendermint/crypto"
	"github.com/tendermint/tendermint/crypto/ed25519"
	"github.com/tendermint/tendermint/libs/log"
	"github.com/tendermint/tendermint/privval"
	"github.com/tendermint/tendermint/types"
)

const (
	keyFileContents = `{
	"address": "D08FCA3BA74CF17CBFC15E64F9505302BB0E2748",
	"pub_key": {
		"type": "tendermint/PubKeyEd25519",
		"value": "ZCsuTjaczEyon70nmKxwvwu+jqrbq5OH3yQjcK0SFxc="
	},
	"priv_key": {
		"type": "tendermint/PrivKeyEd25519",
		"value": "8O39AkQsoe1sBQwud/Kdul8lg8K9SFsql9aZvwXQSt1kKy5ONpzMTKifvSeYrHC/C76Oqturk4ffJCNwrRIXFw=="
	}
}`

	stateFileContents = `{
	"height": "0",
	"round": 0,
	"step": 0
}`

	genesisFileContents = `{
	"genesis_time": "2019-01-15T11:56:34.8963Z",
	"chain_id": "test-chain-0XwP5E",
	"consensus_params": {
		"block": {
			"max_bytes": "22020096",
			"max_gas": "-1",
			"time_iota_ms": "1000"
		},
		"evidence": {
			"max_age_num_blocks": "100000",
			"max_age_duration": "172800000000000",
			"max_num": 50
		},
		"validator": {
			"pub_key_types": [
				"ed25519"
			]
		}
	},
	"validators": [
		{
			"address": "D08FCA3BA74CF17CBFC15E64F9505302BB0E2748",
			"pub_key": {
				"type": "tendermint/PubKeyEd25519",
				"value": "ZCsuTjaczEyon70nmKxwvwu+jqrbq5OH3yQjcK0SFxc="
			},
			"power": "10",
			"name": ""
		}
	],
	"app_hash": ""
}`

	defaultConnDeadline = 100
)

func TestRemoteSignerTestHarnessMaxAcceptRetriesReached(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	cfg := makeConfig(t, 1, 2)
	defer cleanup(cfg)

	th, err := NewTestHarness(ctx, log.TestingLogger(), cfg)
	require.NoError(t, err)
	th.Run()
	assert.Equal(t, ErrMaxAcceptRetriesReached, th.exitCode)
}

func TestRemoteSignerTestHarnessSuccessfulRun(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	harnessTest(
		ctx,
		t,
		func(th *TestHarness) *privval.SignerServer {
			return newMockSignerServer(t, th, th.fpv.Key.PrivKey, false, false)
		},
		NoError,
	)
}

func TestRemoteSignerPublicKeyCheckFailed(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	harnessTest(
		ctx,
		t,
		func(th *TestHarness) *privval.SignerServer {
			return newMockSignerServer(t, th, ed25519.GenPrivKey(), false, false)
		},
		ErrTestPublicKeyFailed,
	)
}

func TestRemoteSignerProposalSigningFailed(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	harnessTest(
		ctx,
		t,
		func(th *TestHarness) *privval.SignerServer {
			return newMockSignerServer(t, th, th.fpv.Key.PrivKey, true, false)
		},
		ErrTestSignProposalFailed,
	)
}

func TestRemoteSignerVoteSigningFailed(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	harnessTest(
		ctx,
		t,
		func(th *TestHarness) *privval.SignerServer {
			return newMockSignerServer(t, th, th.fpv.Key.PrivKey, false, true)
		},
		ErrTestSignVoteFailed,
	)
}

func newMockSignerServer(
	t *testing.T,
	th *TestHarness,
	privKey crypto.PrivKey,
	breakProposalSigning bool,
	breakVoteSigning bool,
) *privval.SignerServer {
	mockPV := types.NewMockPVWithParams(privKey, breakProposalSigning, breakVoteSigning)

	dialerEndpoint := privval.NewSignerDialerEndpoint(
		th.logger,
		privval.DialTCPFn(
			th.addr,
			time.Duration(defaultConnDeadline)*time.Millisecond,
			ed25519.GenPrivKey(),
		),
	)

	return privval.NewSignerServer(dialerEndpoint, th.chainID, mockPV)
}

// For running relatively standard tests.
func harnessTest(
	ctx context.Context,
	t *testing.T,
	signerServerMaker func(th *TestHarness) *privval.SignerServer,
	expectedExitCode int,
) {
	cfg := makeConfig(t, 100, 3)
	defer cleanup(cfg)

	th, err := NewTestHarness(ctx, log.TestingLogger(), cfg)
	require.NoError(t, err)
	donec := make(chan struct{})
	go func() {
		defer close(donec)
		th.Run()
	}()

	ss := signerServerMaker(th)
	require.NoError(t, ss.Start(ctx))
	assert.True(t, ss.IsRunning())
	defer ss.Stop() //nolint:errcheck // ignore for tests

	<-donec
	assert.Equal(t, expectedExitCode, th.exitCode)
}

func makeConfig(t *testing.T, acceptDeadline, acceptRetries int) TestHarnessConfig {
	return TestHarnessConfig{
		BindAddr:         privval.GetFreeLocalhostAddrPort(),
		KeyFile:          makeTempFile("tm-testharness-keyfile", keyFileContents),
		StateFile:        makeTempFile("tm-testharness-statefile", stateFileContents),
		GenesisFile:      makeTempFile("tm-testharness-genesisfile", genesisFileContents),
		AcceptDeadline:   time.Duration(acceptDeadline) * time.Millisecond,
		ConnDeadline:     time.Duration(defaultConnDeadline) * time.Millisecond,
		AcceptRetries:    acceptRetries,
		SecretConnKey:    ed25519.GenPrivKey(),
		ExitWhenComplete: false,
	}
}

func cleanup(cfg TestHarnessConfig) {
	os.Remove(cfg.KeyFile)
	os.Remove(cfg.StateFile)
	os.Remove(cfg.GenesisFile)
}

func makeTempFile(name, content string) string {
	tempFile, err := os.CreateTemp("", fmt.Sprintf("%s-*", name))
	if err != nil {
		panic(err)
	}
	if _, err := tempFile.Write([]byte(content)); err != nil {
		tempFile.Close()
		panic(err)
	}
	if err := tempFile.Close(); err != nil {
		panic(err)
	}
	return tempFile.Name()
}
@@ -1,25 +0,0 @@
package internal

import (
	"os/user"
	"path/filepath"
	"strings"
)

// ExpandPath checks whether the given path begins with a "~", and if so,
// expands it relative to the user's home directory. If expansion fails, the
// original path is returned unchanged.
func ExpandPath(path string) string {
	usr, err := user.Current()
	if err != nil {
		return path
	}

	if path == "~" {
		return usr.HomeDir
	} else if strings.HasPrefix(path, "~/") {
		return filepath.Join(usr.HomeDir, path[2:])
	}

	return path
}
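
A brief usage sketch (the home directory shown is hypothetical) of how `-tmhome`-style arguments resolve through this helper:

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/tools/tm-signer-harness/internal"
)

func main() {
	// Assuming the current user's home directory is /home/alice:
	fmt.Println(internal.ExpandPath("~"))             // /home/alice
	fmt.Println(internal.ExpandPath("~/.tendermint")) // /home/alice/.tendermint
	fmt.Println(internal.ExpandPath("/etc/tm"))       // /etc/tm (unchanged)
}
```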
@@ -1,203 +0,0 @@
package main

import (
	"context"
	"flag"
	"fmt"
	"os"
	"path/filepath"
	"time"

	"github.com/tendermint/tendermint/crypto/ed25519"
	"github.com/tendermint/tendermint/libs/log"
	"github.com/tendermint/tendermint/privval"
	"github.com/tendermint/tendermint/tools/tm-signer-harness/internal"
	"github.com/tendermint/tendermint/version"
)

const (
	defaultAcceptRetries    = 100
	defaultBindAddr         = "tcp://127.0.0.1:0"
	defaultAcceptDeadline   = 1
	defaultConnDeadline     = 3
	defaultExtractKeyOutput = "./signing.key"
)

var defaultTMHome string

var logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo, false)

// Command line flags
var (
	flagAcceptRetries int
	flagBindAddr      string
	flagTMHome        string
	flagKeyOutputPath string
)

// Command line commands
var (
	rootCmd       *flag.FlagSet
	runCmd        *flag.FlagSet
	extractKeyCmd *flag.FlagSet
	versionCmd    *flag.FlagSet
)

func init() {
	rootCmd = flag.NewFlagSet("root", flag.ExitOnError)
	rootCmd.Usage = func() {
		fmt.Println(`Remote signer test harness for Tendermint.

Usage:
	tm-signer-harness <command> [flags]

Available Commands:
	extract_key  Extracts a signing key from a local Tendermint instance
	help         Help on the available commands
	run          Runs the test harness
	version      Display version information and exit

Use "tm-signer-harness help <command>" for more information about that command.`)
		fmt.Println("")
	}

	hd, err := os.UserHomeDir()
	if err != nil {
		fmt.Println("Could not determine the user home directory; defaulting TM home to \"~/.tendermint\"")
		defaultTMHome = "~/.tendermint"
	} else {
		defaultTMHome = fmt.Sprintf("%s/.tendermint", hd)
	}

	runCmd = flag.NewFlagSet("run", flag.ExitOnError)
	runCmd.IntVar(&flagAcceptRetries,
		"accept-retries",
		defaultAcceptRetries,
		"The number of attempts to listen for incoming connections")
	runCmd.StringVar(&flagBindAddr, "addr", defaultBindAddr, "Bind to this address for the testing")
	runCmd.StringVar(&flagTMHome, "tmhome", defaultTMHome, "Path to the Tendermint home directory")
	runCmd.Usage = func() {
		fmt.Println(`Runs the remote signer test harness for Tendermint.

Usage:
	tm-signer-harness run [flags]

Flags:`)
		runCmd.PrintDefaults()
		fmt.Println("")
	}

	extractKeyCmd = flag.NewFlagSet("extract_key", flag.ExitOnError)
	extractKeyCmd.StringVar(&flagKeyOutputPath,
		"output",
		defaultExtractKeyOutput,
		"Path to which signing key should be written")
	extractKeyCmd.StringVar(&flagTMHome, "tmhome", defaultTMHome, "Path to the Tendermint home directory")
	extractKeyCmd.Usage = func() {
		fmt.Println(`Extracts a signing key from a local Tendermint instance for use in the remote
signer under test.

Usage:
	tm-signer-harness extract_key [flags]

Flags:`)
		extractKeyCmd.PrintDefaults()
		fmt.Println("")
	}

	versionCmd = flag.NewFlagSet("version", flag.ExitOnError)
	versionCmd.Usage = func() {
		fmt.Println(`
Prints the Tendermint version for which this remote signer harness was built.

Usage:
	tm-signer-harness version`)
		fmt.Println("")
	}
}

func runTestHarness(ctx context.Context, acceptRetries int, bindAddr, tmhome string) {
	tmhome = internal.ExpandPath(tmhome)
	cfg := internal.TestHarnessConfig{
		BindAddr:         bindAddr,
		KeyFile:          filepath.Join(tmhome, "config", "priv_validator_key.json"),
		StateFile:        filepath.Join(tmhome, "data", "priv_validator_state.json"),
		GenesisFile:      filepath.Join(tmhome, "config", "genesis.json"),
		AcceptDeadline:   time.Duration(defaultAcceptDeadline) * time.Second,
		AcceptRetries:    acceptRetries,
		ConnDeadline:     time.Duration(defaultConnDeadline) * time.Second,
		SecretConnKey:    ed25519.GenPrivKey(),
		ExitWhenComplete: true,
	}
	harness, err := internal.NewTestHarness(ctx, logger, cfg)
	if err != nil {
		logger.Error(err.Error())
		if therr, ok := err.(*internal.TestHarnessError); ok {
			os.Exit(therr.Code)
		}
		os.Exit(internal.ErrOther)
	}
	harness.Run()
}

func extractKey(tmhome, outputPath string) {
	keyFile := filepath.Join(internal.ExpandPath(tmhome), "config", "priv_validator_key.json")
	stateFile := filepath.Join(internal.ExpandPath(tmhome), "data", "priv_validator_state.json")
	fpv, err := privval.LoadFilePV(keyFile, stateFile)
	if err != nil {
		logger.Error("Failed to load validator key file", "err", err)
		os.Exit(1)
	}
	pkb := []byte(fpv.Key.PrivKey.(ed25519.PrivKey))
	if err := os.WriteFile(internal.ExpandPath(outputPath), pkb[:32], 0600); err != nil {
		logger.Error("Failed to write private key", "output", outputPath, "err", err)
		os.Exit(1)
	}
	logger.Info("Successfully wrote private key", "output", outputPath)
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	if err := rootCmd.Parse(os.Args[1:]); err != nil {
		fmt.Printf("Error parsing flags: %v\n", err)
		os.Exit(1)
	}
	if rootCmd.NArg() == 0 || (rootCmd.NArg() == 1 && rootCmd.Arg(0) == "help") {
		rootCmd.Usage()
		os.Exit(0)
	}

	switch rootCmd.Arg(0) {
	case "help":
		switch rootCmd.Arg(1) {
		case "run":
			runCmd.Usage()
		case "extract_key":
			extractKeyCmd.Usage()
		case "version":
			versionCmd.Usage()
		default:
			fmt.Printf("Unrecognized command: %s\n", rootCmd.Arg(1))
			os.Exit(1)
		}
	case "run":
		if err := runCmd.Parse(os.Args[2:]); err != nil {
			fmt.Printf("Error parsing flags: %v\n", err)
			os.Exit(1)
		}
		runTestHarness(ctx, flagAcceptRetries, flagBindAddr, flagTMHome)
	case "extract_key":
		if err := extractKeyCmd.Parse(os.Args[2:]); err != nil {
			fmt.Printf("Error parsing flags: %v\n", err)
			os.Exit(1)
		}
		extractKey(flagTMHome, flagKeyOutputPath)
	case "version":
		fmt.Println(version.TMVersion)
	default:
		fmt.Printf("Unrecognized command: %s\n", rootCmd.Arg(0))
		os.Exit(1)
	}
}
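
For reference, typical invocations of the harness CLI defined above would look like the following; the flag values shown are simply the defaults from this file:

```bash
# Run the harness, listening on a random localhost TCP port, against the
# validator key under ~/.tendermint.
tm-signer-harness run -addr tcp://127.0.0.1:0 -tmhome ~/.tendermint -accept-retries 100

# Export the first 32 bytes of the ed25519 private key for use by the
# remote signer under test.
tm-signer-harness extract_key -tmhome ~/.tendermint -output ./signing.key
```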
@@ -80,7 +80,6 @@ type VersionParams struct {
// TODO (@wbanfield): add link to proposer-based timestamp spec when completed.
type TimestampParams struct {
	Precision time.Duration `json:"precision"`
	Accuracy  time.Duration `json:"accuracy"`
	MsgDelay  time.Duration `json:"msg_delay"`
}

@@ -130,7 +129,6 @@ func DefaultTimestampParams() TimestampParams {
	// https://github.com/tendermint/tendermint/issues/7202
	return TimestampParams{
		Precision: 2 * time.Second,
		Accuracy:  500 * time.Millisecond,
		MsgDelay:  3 * time.Second,
	}
}

@@ -34,14 +34,14 @@ type Proposal struct {

// NewProposal returns a new Proposal.
// If there is no POLRound, polRound should be -1.
-func NewProposal(height int64, round int32, polRound int32, blockID BlockID) *Proposal {
+func NewProposal(height int64, round int32, polRound int32, blockID BlockID, ts time.Time) *Proposal {
	return &Proposal{
		Type:      tmproto.ProposalType,
		Height:    height,
		Round:     round,
		BlockID:   blockID,
		POLRound:  polRound,
-		Timestamp: tmtime.Now(),
+		Timestamp: ts,
	}
}
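
Every `NewProposal` caller must now supply the timestamp explicitly. A minimal sketch (not part of this diff; the helper name is hypothetical) of the new call shape, where passing `tmtime.Now()` reproduces the old stamp-at-creation behavior:

```go
package types

import (
	tmtime "github.com/tendermint/tendermint/libs/time"
)

// makeProposal is a hypothetical helper illustrating the new signature: the
// timestamp is an explicit argument instead of being stamped implicitly.
func makeProposal(height int64, round, polRound int32, blockID BlockID) *Proposal {
	// tmtime.Now() preserves the previous behavior; proposer-based
	// timestamps can supply a consensus-derived value here instead.
	return NewProposal(height, round, polRound, blockID, tmtime.Now())
}
```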
@@ -13,6 +13,7 @@ import (
	"github.com/tendermint/tendermint/crypto/tmhash"
	"github.com/tendermint/tendermint/internal/libs/protoio"
	tmrand "github.com/tendermint/tendermint/libs/rand"
+	tmtime "github.com/tendermint/tendermint/libs/time"
	tmtimemocks "github.com/tendermint/tendermint/libs/time/mocks"
	tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
@@ -63,7 +64,7 @@ func TestProposalVerifySignature(t *testing.T) {

	prop := NewProposal(
		4, 2, 2,
-		BlockID{tmrand.Bytes(tmhash.Size), PartSetHeader{777, tmrand.Bytes(tmhash.Size)}})
+		BlockID{tmrand.Bytes(tmhash.Size), PartSetHeader{777, tmrand.Bytes(tmhash.Size)}}, tmtime.Now())
	p := prop.ToProto()
	signBytes := ProposalSignBytes("test_chain_id", p)

@@ -154,7 +155,7 @@ func TestProposalValidateBasic(t *testing.T) {
		t.Run(tc.testName, func(t *testing.T) {
			prop := NewProposal(
				4, 2, 2,
-				blockID)
+				blockID, tmtime.Now())
			p := prop.ToProto()
			err := privVal.SignProposal(context.Background(), "test_chain_id", p)
			prop.Signature = p.Signature
@@ -166,9 +167,9 @@ func TestProposalValidateBasic(t *testing.T) {
}

func TestProposalProtoBuf(t *testing.T) {
-	proposal := NewProposal(1, 2, 3, makeBlockID([]byte("hash"), 2, []byte("part_set_hash")))
+	proposal := NewProposal(1, 2, 3, makeBlockID([]byte("hash"), 2, []byte("part_set_hash")), tmtime.Now())
	proposal.Signature = []byte("sig")
-	proposal2 := NewProposal(1, 2, 3, BlockID{})
+	proposal2 := NewProposal(1, 2, 3, BlockID{}, tmtime.Now())

	testCases := []struct {
		msg string