Compare commits

...

121 Commits

Author SHA1 Message Date
William Banfield
15f2436c28 p2p: pqueue proposal 2022-06-29 18:19:12 -04:00
William Banfield
978f754ad3 p2p: set empty timeouts to configed values. (manual backport of #8847) (#8869)
* regenerate mocks using newer style

* p2p: set empty timeouts to small values. (#8847)

These timeouts default to 'do not time out' if they are not set. This ties up resources, potentially indefinitely. If the node on the other side of the handshake is up but unresponsive, the [handshake call](edec79448a/internal/p2p/router.go (L720)) will _never_ return.

* fix light client select statement
2022-06-28 16:07:15 -04:00
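
The commit above describes timeouts that default to "do not time out" when left unset, so an unresponsive peer can hold a handshake open forever. A minimal Go sketch of the general pattern, not the repository's actual code (the constant, function names, and the small demo default are assumptions): substitute a configured default when a timeout is zero, and bound the handshake with a context so a dead peer fails with a deadline error instead of blocking.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// Assumed fallback, kept small here only so the demo finishes quickly;
// in a real node the default would come from configuration.
const defaultHandshakeTimeout = 500 * time.Millisecond

// effectiveTimeout substitutes the default when the configured value is zero,
// so "unset" never means "wait forever".
func effectiveTimeout(configured time.Duration) time.Duration {
	if configured <= 0 {
		return defaultHandshakeTimeout
	}
	return configured
}

// handshake stands in for the real peer handshake: an unresponsive peer
// never answers, so we only return once the context deadline fires.
func handshake(ctx context.Context, peerResponds bool) error {
	if peerResponds {
		return nil
	}
	<-ctx.Done()
	return ctx.Err()
}

func dial(configured time.Duration, peerResponds bool) error {
	ctx, cancel := context.WithTimeout(context.Background(), effectiveTimeout(configured))
	defer cancel()
	return handshake(ctx, peerResponds)
}

func main() {
	err := dial(0, false) // a zero (unset) timeout now falls back to the default
	fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true
}
```
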
mergify[bot]
c4ef566071 p2p: remove dial sleep and provide disconnect cooldown (backport #8839) (#8875)
(cherry picked from commit 52b6dc19ba)
2022-06-27 10:49:51 -04:00
dependabot[bot]
f19e52e6f2 build(deps): Bump styfle/cancel-workflow-action from 0.9.1 to 0.10.0 (#8882)
Bumps [styfle/cancel-workflow-action](https://github.com/styfle/cancel-workflow-action) from 0.9.1 to 0.10.0.
- [Release notes](https://github.com/styfle/cancel-workflow-action/releases)
- [Commits](https://github.com/styfle/cancel-workflow-action/compare/0.9.1...0.10.0)

---
updated-dependencies:
- dependency-name: styfle/cancel-workflow-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-27 09:13:46 -04:00
mergify[bot]
19b98c7005 e2e: disable another network test (#8862) (#8873)
Follow up on: https://github.com/tendermint/tendermint/pull/8849

(cherry picked from commit c4d24eed7d)

Co-authored-by: Callum Waters <cmwaters19@gmail.com>
2022-06-24 13:22:26 -04:00
mergify[bot]
826f224c2d p2p: add eviction metrics and cleanup dialing error handling (backport #8819) (#8820) 2022-06-24 10:42:58 -04:00
mergify[bot]
2df4c2b19d e2e: add tolerance to peer discovery test (#8849) (#8857)
(cherry picked from commit fb209136f8)

Co-authored-by: Callum Waters <cmwaters19@gmail.com>
Co-authored-by: Sam Kleinman <garen@tychoish.com>
2022-06-23 14:46:10 -04:00
mergify[bot]
6f4ef72964 p2p: track peers by address (#8841) (#8855)
(cherry picked from commit 436a38f876)

Co-authored-by: Sam Kleinman <garen@tychoish.com>
2022-06-23 13:21:46 -04:00
mergify[bot]
3398f37979 cmd: add tool for compaction of goleveldb (backport #8564) (#8675) 2022-06-23 18:25:19 +02:00
mergify[bot]
8ef63fe3d9 e2e: report peer heights in error message (#8843) (#8853)
(cherry picked from commit 52b2efb827)

Co-authored-by: Sam Kleinman <garen@tychoish.com>
2022-06-23 10:46:51 -04:00
M. J. Fromberger
9daea43375 Update default version marker. (#8844) 2022-06-22 18:16:58 -04:00
M. J. Fromberger
df9363c67c Prepare changelog for Release v0.35.7 (#8772) 2022-06-22 11:54:03 -07:00
mergify[bot]
24701cd587 p2p: more dial routines (#8827) (#8828) 2022-06-21 21:27:28 -04:00
William Banfield
e9c87a3c49 remove dial wake change (#8824) 2022-06-21 20:20:04 -04:00
dependabot[bot]
034a9f8422 build(deps): Bump github.com/spf13/cobra from 1.4.0 to 1.5.0 (#8811)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.4.0 to 1.5.0.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Commits](https://github.com/spf13/cobra/compare/v1.4.0...v1.5.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Thane Thomson <connect@thanethomson.com>
2022-06-21 17:16:31 -04:00
Callum Waters
4322f7d0b9 mempool: make error throwing for CheckTx consistent (#8817) 2022-06-21 18:51:50 +02:00
Sam Kleinman
83526cacbc p2p: peer store and dialing changes (0.35.x backport) (#8740)
* p2p: peer store and dialing changes

(cherry picked from commit 9dbb135152)

* reduce persistent peer max

(cherry picked from commit b213a2766f)

* don't gossip inactive peers

(cherry picked from commit cc28ce298f)

* fix small case

(cherry picked from commit 56a91642dc)

* fix error message

(cherry picked from commit 86db59f53b)

* remove seed flag

(cherry picked from commit 000aa05485)

* reduce logging level

(cherry picked from commit 4e2bc8f51e)

* make const

(cherry picked from commit e3068b50b2)

* update comment

(cherry picked from commit 31bd396c88)

* cleanup

(cherry picked from commit eddb23b5af)

* oops

* overflows

(cherry picked from commit 4c8651026a)

* Update internal/p2p/peermanager.go

Co-authored-by: M. J. Fromberger <michael.j.fromberger@gmail.com>
(cherry picked from commit f23f6e1089)

* Update internal/p2p/peermanager.go

Co-authored-by: M. J. Fromberger <michael.j.fromberger@gmail.com>
(cherry picked from commit 1c02758eaf)

* comment

(cherry picked from commit 9f604fd2ef)

* test: new scoring

(cherry picked from commit 930fd7f2be)

* fix scoring test

(cherry picked from commit 9abc55f3a0)

* cleanup peer manager

* fix panic

* add metrics

* fix compile

* fix test

* default metrics to noop

* noop metrics

* update metrics

(cherry picked from commit 720600ef62)

* rename metrics

* actually shuffle peers more

* fix up advertise

(cherry picked from commit 8195c97590)

* add max dialing attempts

* connection tracking

* comments mostly

(cherry picked from commit 053ecd9b8c)

* Apply suggestions from code review

Co-authored-by: M. J. Fromberger <michael.j.fromberger@gmail.com>

* comments

* fix lint

* cr feedback

* fixup cherrypick

* make wb happy

* more comments

* fixup

* fix lint

* iota fix

* add skip

* cleanup

* remove comment

* fix rand

* fix rand

* use numaddresses correctly

* advertise fixes

* remove some things

* cleanup comment

* more fixes

* toml

* fix comment

* fix spell

* dec limit

* fixes

* up the attmept max

* cr feedback

* probablistic test

* fix spell

* add metrics for peers stored on startup

* p2p: peer score should not wrap around (#8790)

(cherry picked from commit 4d820ff4f5)

# Conflicts:
#	internal/p2p/peermanager.go

* fix

* wake more

* wake if we need to

Co-authored-by: M. J. Fromberger <michael.j.fromberger@gmail.com>
2022-06-20 13:13:21 -04:00
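
One bullet in the entry above notes that the peer score should not wrap around (#8790). A minimal sketch of a saturating increment over an unsigned score type (the type width and bound are assumptions for illustration, not the repository's actual definitions):

```go
package main

import (
	"fmt"
	"math"
)

type peerScore uint8 // assumed width; the point is the saturation, not the size

// bump increases the score but clamps at the maximum, so repeated
// increments can never overflow and wrap back to a low value.
func bump(s peerScore, delta uint8) peerScore {
	if uint16(s)+uint16(delta) > math.MaxUint8 {
		return peerScore(math.MaxUint8)
	}
	return s + peerScore(delta)
}

func main() {
	fmt.Println(bump(250, 10)) // 255, not 4
	fmt.Println(bump(100, 10)) // 110
}
```
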
mergify[bot]
25d724b920 e2e: reactivate network test (backport #8635) (#8777) 2022-06-20 17:10:20 +02:00
dependabot[bot]
3945cec115 build(deps): Bump github.com/adlio/schema from 1.3.0 to 1.3.3 (#8797)
Bumps [github.com/adlio/schema](https://github.com/adlio/schema) from 1.3.0 to 1.3.3.
- [Release notes](https://github.com/adlio/schema/releases)
- [Commits](https://github.com/adlio/schema/compare/v1.3.0...v1.3.3)

---
updated-dependencies:
- dependency-name: github.com/adlio/schema
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-20 09:19:34 -04:00
mergify[bot]
74c6d8100d p2p: fix typo (#8793) (#8794) 2022-06-19 11:52:43 -07:00
M. J. Fromberger
e2d01cdcff Make priority mempool fuzz test actually test the priority mempool. (#8785)
Fixes #8783.
2022-06-17 09:29:13 -07:00
dependabot[bot]
bee6597b28 build(deps): Bump github.com/vektra/mockery/v2 from 2.13.0 to 2.13.1 (#8765)
Bumps [github.com/vektra/mockery/v2](https://github.com/vektra/mockery) from 2.13.0 to 2.13.1.
- [Release notes](https://github.com/vektra/mockery/releases)
- [Changelog](https://github.com/vektra/mockery/blob/master/.goreleaser.yml)
- [Commits](https://github.com/vektra/mockery/compare/v2.13.0...v2.13.1)

---
updated-dependencies:
- dependency-name: github.com/vektra/mockery/v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Sam Kleinman <garen@tychoish.com>
2022-06-15 10:12:21 -04:00
mergify[bot]
ce8284c027 p2p: accept should not abort on first error (backport #8759) (#8760) 2022-06-15 07:56:15 -04:00
dependabot[bot]
d02f58e191 build(deps): Bump github.com/vektra/mockery/v2 from 2.12.3 to 2.13.0 (#8747)
Bumps [github.com/vektra/mockery/v2](https://github.com/vektra/mockery) from 2.12.3 to 2.13.0.
- [Release notes](https://github.com/vektra/mockery/releases)
- [Changelog](https://github.com/vektra/mockery/blob/master/.goreleaser.yml)
- [Commits](https://github.com/vektra/mockery/compare/v2.12.3...v2.13.0)

---
updated-dependencies:
- dependency-name: github.com/vektra/mockery/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-14 10:00:05 -07:00
Callum Waters
28c38522e0 do not log an error for duplicate txs (#8732) 2022-06-10 11:56:00 +02:00
Sam Kleinman
0b63e293f1 e2e/generator: add additional testnets (0.35) (#8730) 2022-06-10 03:55:29 -04:00
mergify[bot]
af0590a819 consensus: switch timeout message to be debug and clarify meaning (#8694) (#8696)
(cherry picked from commit 75a12ea0c6)

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
Co-authored-by: Sam Kleinman <garen@tychoish.com>
Co-authored-by: Callum Waters <cmwaters19@gmail.com>
2022-06-09 09:45:58 -04:00
mergify[bot]
46c27b45ab rpc: always close http bodies (backport #8712) (#8715)
(cherry picked from commit 931c98f7ad)

Co-authored-by: Sam Kleinman <garen@tychoish.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2022-06-08 11:57:55 -07:00
dependabot[bot]
3c29b6996b build(deps): Bump github.com/rs/zerolog from 1.26.1 to 1.27.0 (#8723)
Bumps [github.com/rs/zerolog](https://github.com/rs/zerolog) from 1.26.1 to 1.27.0.
- [Release notes](https://github.com/rs/zerolog/releases)
- [Commits](https://github.com/rs/zerolog/compare/v1.26.1...v1.27.0)

---
updated-dependencies:
- dependency-name: github.com/rs/zerolog
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-08 07:14:17 -07:00
dependabot[bot]
138be1f7b0 build(deps): Bump github.com/stretchr/testify from 1.7.1 to 1.7.2 (#8710)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.7.1 to 1.7.2.
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.7.1...v1.7.2)

---
updated-dependencies:
- dependency-name: github.com/stretchr/testify
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-07 04:59:40 -04:00
mergify[bot]
98411962c6 p2p: pass maxConns for MaxPeers during node setup (#8684) (#8692)
* pass maxConns for MaxPeers
* add upgrade connections to max connections for max peers
* change the formula to calculate max peers

(cherry picked from commit 30929cf190)

Co-authored-by: Evan Forbes <42654277+evan-forbes@users.noreply.github.com>
2022-06-04 08:53:41 -07:00
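
The bullets above outline the change: the peer manager's MaxPeers is now derived from the connection limits (including upgrade connections) rather than set independently. One plausible reading of that relationship, sketched with assumed field names rather than the actual config keys:

```go
package main

import "fmt"

// Illustrative connection limits; in the node they come from the p2p config.
type connLimits struct {
	MaxConnections        int // steady-state connection slots
	MaxUpgradeConnections int // extra slots reserved for connection upgrades
}

// maxPeers derives the peer manager's capacity from the connection limits
// instead of using an independently configured number.
func maxPeers(c connLimits) int {
	return c.MaxConnections + c.MaxUpgradeConnections
}

func main() {
	fmt.Println(maxPeers(connLimits{MaxConnections: 64, MaxUpgradeConnections: 4})) // 68
}
```
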
M. J. Fromberger
3079eb8b30 Prepare Release v0.35.6 (#8685) 2022-06-03 10:42:06 +02:00
mergify[bot]
0e3a3fe58b p2p: shed peers from store from other networks (backport #8678) (#8681) 2022-06-02 12:15:55 -04:00
mergify[bot]
e17e6b1aaa migrate: provide function for database production (backport #8614) (#8672)
(cherry picked from commit d5299882b0)
2022-06-02 06:17:06 -04:00
dependabot[bot]
0421f8b25e build(deps): Bump google.golang.org/grpc from 1.46.2 to 1.47.0 (#8666)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.46.2 to 1.47.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.46.2...v1.47.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-01 11:59:15 -07:00
Callum Waters
4faa8b72aa cmd: don't used global config for reset commands (#8668) 2022-06-01 18:34:35 +02:00
Callum Waters
336dc2f2c5 chore: update version (#8634) 2022-06-01 15:48:35 +02:00
Callum Waters
e8ac37223f pex: align max address thresholds (#8657) 2022-05-31 14:07:25 -04:00
Sam Kleinman
a889f17e51 consensus: restructure peer catchup sleep (#8651) 2022-05-31 11:31:51 -04:00
mergify[bot]
2b5a4de4b3 docs: add documentation for undocumented p2p metrics (backport #8640) (#8641)
* docs: add documentation for undocumented p2p metrics (#8640)

Once merged will backport to v0.35

(cherry picked from commit 3dec4a4744)

# Conflicts:
#	docs/nodes/metrics.md

* fix merge conflict

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
Co-authored-by: William Banfield <wbanfield@gmail.com>
2022-05-30 05:03:50 -04:00
dependabot[bot]
a85d9c5163 build(deps): Bump github.com/spf13/viper from 1.11.0 to 1.12.0 (#8631)
Bumps [github.com/spf13/viper](https://github.com/spf13/viper) from 1.11.0 to 1.12.0.
- [Release notes](https://github.com/spf13/viper/releases)
- [Commits](https://github.com/spf13/viper/compare/v1.11.0...v1.12.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/viper
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-27 07:51:54 -07:00
M. J. Fromberger
12a0559d67 Prepare changelog for release v0.35.5. (#8601) 2022-05-26 09:08:41 -07:00
mergify[bot]
a22f7bec39 migrate: reorder collection ordering (#8613) (#8616)
(cherry picked from commit f33722b423)

Co-authored-by: Sam Kleinman <garen@tychoish.com>
2022-05-25 13:26:56 -04:00
dependabot[bot]
3784371dd8 build(deps): Bump github.com/vektra/mockery/v2 from 2.12.2 to 2.12.3 (#8608) 2022-05-25 05:12:51 -04:00
mergify[bot]
4ee91663da p2p: reduce ability of SendError to disconnect peers (backport #8597) (#8603) 2022-05-25 04:12:43 -04:00
M. J. Fromberger
87763a3d6a rpc: fix encoding of block_results responses (backport #8593) (#8594)
The block results include validator updates in ABCI protobuf format, which does
not encode "correctly" according to the Amino style that RPC clients expect.

- Write a regression test for this issue.
- Add JSON marshaling overrides for ABCI ValidatorUpdate messages.

Patches for v0.35.x:

- Replace jsontypes with tmjson (removed in v0.36)
- Regress test data for BeginBlock / EndBlock
2022-05-24 07:21:28 -07:00
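
The note above says the fix adds JSON marshaling overrides so ABCI ValidatorUpdate values encode in the legacy form RPC clients expect. A simplified, self-contained sketch of the override pattern, using an illustrative stand-in struct rather than the actual ABCI types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// validatorUpdate is a stand-in for the ABCI ValidatorUpdate message.
type validatorUpdate struct {
	PubKey []byte
	Power  int64
}

// MarshalJSON overrides the default protobuf-shaped encoding with the
// field names and string-encoded integers that legacy RPC clients expect.
func (v validatorUpdate) MarshalJSON() ([]byte, error) {
	return json.Marshal(struct {
		PubKey []byte `json:"pub_key"`
		Power  string `json:"power"` // integers as strings, Amino style
	}{PubKey: v.PubKey, Power: fmt.Sprintf("%d", v.Power)})
}

func main() {
	out, _ := json.Marshal(validatorUpdate{PubKey: []byte{0x01, 0x02}, Power: 10})
	fmt.Println(string(out)) // {"pub_key":"AQI=","power":"10"}
}
```
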
dependabot[bot]
ad9e875376 build(deps): Bump goreleaser/goreleaser-action from 2 to 3 (#8589)
Bumps [goreleaser/goreleaser-action](https://github.com/goreleaser/goreleaser-action) from 2 to 3.
- [Release notes](https://github.com/goreleaser/goreleaser-action/releases)
- [Commits](https://github.com/goreleaser/goreleaser-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: goreleaser/goreleaser-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-23 08:31:18 -04:00
mergify[bot]
2f8483aa85 p2p: remove unused get height methods (backport #8569) (#8571) 2022-05-17 11:32:13 -04:00
dependabot[bot]
0e6b85efa9 build(deps): Bump github.com/lib/pq from 1.10.5 to 1.10.6 (#8568) 2022-05-17 04:57:37 -07:00
dependabot[bot]
13cc1931a7 build(deps): Bump google.golang.org/grpc from 1.46.0 to 1.46.2 (#8560)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.46.0 to 1.46.2.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.46.0...v1.46.2)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-16 09:10:13 -07:00
dependabot[bot]
f6b13f8c95 build(deps): Bump docker/login-action from 1.10.0 to 2.0.0 (#8536)
* build(deps): Bump docker/login-action from 1.10.0 to 2.0.0

Bumps [docker/login-action](https://github.com/docker/login-action) from 1.10.0 to 2.0.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v1.10.0...v2.0.0)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-13 12:57:07 -07:00
dependabot[bot]
248cb26845 build(deps): Bump actions/stale from 4 to 5 (#8534)
Bumps [actions/stale](https://github.com/actions/stale) from 4 to 5.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/stale/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-13 12:55:22 -07:00
dependabot[bot]
79d83cea15 build(deps): Bump docker/setup-buildx-action from 1.6.0 to 2.0.0 (#8533)
* build(deps): Bump docker/setup-buildx-action from 1.6.0 to 2.0.0

Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 1.6.0 to 2.0.0.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/v1.6.0...v2.0.0)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-13 12:53:42 -07:00
dependabot[bot]
643eaef146 build(deps): Bump docker/build-push-action from 2.7.0 to 3.0.0 (#8530)
* build(deps): Bump docker/build-push-action from 2.7.0 to 3.0.0

Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 2.7.0 to 3.0.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v2.7.0...v3.0.0)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-13 12:49:56 -07:00
dependabot[bot]
552e1e78b8 build(deps): Bump codecov/codecov-action from 2.1.0 to 3.1.0 (#8529)
* build(deps): Bump codecov/codecov-action from 2.1.0 to 3.1.0

Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 2.1.0 to 3.1.0.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v2.1.0...v3.1.0)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-13 12:45:04 -07:00
dependabot[bot]
fcf0579f0e build(deps): Bump golangci/golangci-lint-action from 3.1.0 to 3.2.0 (#8526)
* build(deps): Bump golangci/golangci-lint-action from 3.1.0 to 3.2.0

Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 3.1.0 to 3.2.0.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v3.1.0...v3.2.0)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-13 12:42:15 -07:00
dependabot[bot]
3df465c353 build(deps): Bump github.com/prometheus/client_golang (#8542)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.12.1 to 1.12.2.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/v1.12.2/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.12.1...v1.12.2)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2022-05-13 11:42:44 -07:00
dependabot[bot]
142b273c2f build(deps): Bump gaurav-nelson/github-action-markdown-link-check from 1.0.13 to 1.0.14 (#8523)
* build(deps): Bump gaurav-nelson/github-action-markdown-link-check

Bumps [gaurav-nelson/github-action-markdown-link-check](https://github.com/gaurav-nelson/github-action-markdown-link-check) from 1.0.13 to 1.0.14.
- [Release notes](https://github.com/gaurav-nelson/github-action-markdown-link-check/releases)
- [Commits](https://github.com/gaurav-nelson/github-action-markdown-link-check/compare/1.0.13...1.0.14)

---
updated-dependencies:
- dependency-name: gaurav-nelson/github-action-markdown-link-check
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Target fork.

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2022-05-13 08:42:47 -07:00
M. J. Fromberger
74267a062e Remove backport-specific Dependabot config (v0.35.x). (#8520)
After #8518, this separate configuration is no longer needed.
The master copy will target updates to this branch.
2022-05-13 08:10:15 -07:00
mergify[bot]
12fed0ed53 blocksync: validate block before persisting it (backport #8493) (#8496) 2022-05-12 10:36:48 +02:00
Sam Kleinman
bdd59c892c statesync: avoid potential race (#8494) 2022-05-11 15:09:41 -04:00
dependabot[bot]
23834b6b31 build(deps): Bump github.com/creachadair/tomledit from 0.0.19 to 0.0.22 (#8503) 2022-05-11 12:38:25 -04:00
Callum Waters
b40a7b63b7 docs: remove developer sessions (#8497) 2022-05-10 22:09:47 -07:00
dependabot[bot]
923d14c439 build(deps): Bump github.com/golangci/golangci-lint (#8489)
Bumps [github.com/golangci/golangci-lint](https://github.com/golangci/golangci-lint) from 1.45.2 to 1.46.0.
- [Release notes](https://github.com/golangci/golangci-lint/releases)
- [Changelog](https://github.com/golangci/golangci-lint/blob/master/CHANGELOG.md)
- [Commits](https://github.com/golangci/golangci-lint/compare/v1.45.2...v1.46.0)

---
updated-dependencies:
- dependency-name: github.com/golangci/golangci-lint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-10 09:02:42 -07:00
dependabot[bot]
5b634976dc build(deps): Bump github.com/vektra/mockery/v2 from 2.12.1 to 2.12.2 (#8473) 2022-05-06 05:26:23 -07:00
mergify[bot]
383408479d keymigrate: improve filtering for legacy transaction hashes (#8466) (#8467)
This is a follow-up to #8352. The check for legacy evidence keys is only based
on the prefix of the key. Hashes, which are unprefixed, could easily have this
form and be misdiagnosed.

Because the conversion for evidence checks the key structure, this should not
cause corruption. The probability that a hash is a syntactically valid evidence
key is negligible.  The tool will report an error rather than storing bad data.
But this does mean that such transaction hashes could cause the migration to
stop and report an error before it is complete.

To ensure we convert all the data, refine the legacy key check to filter these
keys more precisely. Update the test cases to exercise this condition.

(cherry picked from commit dd4fee88ef)
2022-05-04 13:32:40 -07:00
dependabot[bot]
f383e8fa98 build(deps): Bump github.com/creachadair/atomicfile from 0.2.5 to 0.2.6 (#8461)
Bumps [github.com/creachadair/atomicfile](https://github.com/creachadair/atomicfile) from 0.2.5 to 0.2.6.
- [Release notes](https://github.com/creachadair/atomicfile/releases)
- [Commits](https://github.com/creachadair/atomicfile/compare/v0.2.5...v0.2.6)

---
updated-dependencies:
- dependency-name: github.com/creachadair/atomicfile
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-05-04 06:49:09 -07:00
dependabot[bot]
df66afab99 build(deps): Bump github.com/btcsuite/btcd from 0.22.0-beta to 0.22.1 (#8438) 2022-04-29 08:08:40 -04:00
dependabot[bot]
971bd1487e build(deps): Bump github.com/creachadair/tomledit from 0.0.18 to 0.0.19 (#8437) 2022-04-29 04:11:01 -07:00
dependabot[bot]
512a0bf356 build(deps): Bump github.com/google/go-cmp from 0.5.7 to 0.5.8 (#8421) 2022-04-27 06:24:47 -07:00
dependabot[bot]
06d3d41623 build(deps): Bump github.com/vektra/mockery/v2 from 2.12.0 to 2.12.1 (#8418) 2022-04-26 12:31:13 -04:00
dependabot[bot]
5b14d27ccf build(deps): Bump google.golang.org/grpc from 1.45.0 to 1.46.0 (#8409) 2022-04-25 08:59:51 -04:00
M. J. Fromberger
ad7c501359 Update common actions versions on v0.35.x to match master. (#8400) 2022-04-23 09:07:15 -07:00
dependabot[bot]
70d771ead2 build(deps): Bump github.com/vektra/mockery/v2 from 2.11.0 to 2.12.0 (#8394)
Bumps [github.com/vektra/mockery/v2](https://github.com/vektra/mockery) from 2.11.0 to 2.12.0.
- [Release notes](https://github.com/vektra/mockery/releases)
- [Changelog](https://github.com/vektra/mockery/blob/master/.goreleaser.yml)
- [Commits](https://github.com/vektra/mockery/compare/v2.11.0...v2.12.0)

---
updated-dependencies:
- dependency-name: github.com/vektra/mockery/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-04-22 10:26:52 -07:00
dependabot[bot]
5b3b3065ad build(deps): Bump github.com/creachadair/tomledit from 0.0.16 to 0.0.18 (#8395) 2022-04-22 10:23:49 -04:00
mergify[bot]
9195a005bd Add config samples from TM v26, v27, v28, v29. (#8384) (#8387) 2022-04-21 08:58:18 -07:00
mergify[bot]
2a91d21b61 Add confix testdata for Tendermint v0.30. (#8380) (#8381)
Some additional testdata I grabbed while writing up the draft of RFC 019.

(cherry picked from commit d56392cee9)
2022-04-20 10:04:21 -07:00
mergify[bot]
14f0d60f24 p2p: fix setting in con-tracker (#8370) (#8371)
(cherry picked from commit 889341152a)
2022-04-19 23:32:54 -07:00
dependabot[bot]
21d68441a1 build(deps): Bump github.com/vektra/mockery/v2 from 2.10.6 to 2.11.0 (#8373)
Bumps [github.com/vektra/mockery/v2](https://github.com/vektra/mockery) from 2.10.6 to 2.11.0.
- [Release notes](https://github.com/vektra/mockery/releases)
- [Changelog](https://github.com/vektra/mockery/blob/master/.goreleaser.yml)
- [Commits](https://github.com/vektra/mockery/compare/v2.10.6...v2.11.0)

---
updated-dependencies:
- dependency-name: github.com/vektra/mockery/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-04-19 10:07:33 -07:00
M. J. Fromberger
4d9ad115b0 build: clean up an unnecessary dependency (#8363) 2022-04-18 11:05:10 -07:00
dependabot[bot]
e646bd77ca build(deps): Bump github.com/creachadair/atomicfile from 0.2.4 to 0.2.5 (#8366)
Bumps [github.com/creachadair/atomicfile](https://github.com/creachadair/atomicfile) from 0.2.4 to 0.2.5.
- [Release notes](https://github.com/creachadair/atomicfile/releases)
- [Commits](https://github.com/creachadair/atomicfile/compare/v0.2.4...v0.2.5)

---
updated-dependencies:
- dependency-name: github.com/creachadair/atomicfile
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-18 10:18:18 -04:00
M. J. Fromberger
8682489551 Prepare changelog for release v0.35.4. (#8360) 2022-04-17 22:34:09 -07:00
mergify[bot]
04c1f76569 rpc: avoid leaking threads during checktx (backport #8328) (#8333) 2022-04-17 09:17:03 -04:00
Ethan Reesor
226bc94c5f node: always close database engine (#7113) (#8330) 2022-04-15 14:37:34 -07:00
mergify[bot]
641d290a6d keymigrate: fix conversion of transaction hash keys (backport #8352) (#8353)
In the legacy database format, keys were generally stored with a string prefix
to partition the key space. Transaction hashes, however, were not prefixed: The
hash of a transaction was the entire key for its record.

When the key migration script scans its input, it checks the format of each
key to determine whether it has already been converted, so that it is safe to run
the script over an already-converted database.

After checking for known prefixes, the migration script used two heuristics to
distinguish ABCI events and transaction hashes: For ABCI events, whose keys
used the form "name/value/height/index", it checked for the right number of
separators. For hashes, it checked that the length is exactly 32 bytes (the
length of a SHA-256 digest) AND that the value does not contain a "/".

This last check is problematic: Any hash containing the byte 0x2f (the code
point for "/") would be incorrectly filtered out from conversion. This leads to
some transaction hashes not being converted.

To fix this problem, this changes how the script recognizes keys:

1. Use a more rigorous syntactic check to filter out ABCI metadata.
2. Use only the length to identify hashes among what remains.

This change is still not a complete fix: It is possible, though unlikely, that
a valid hash could happen to look exactly like an ABCI metadata key. However,
the chance of that happening is vastly smaller than the chance of generating a
hash that contains at least one "/" byte.

Similarly, it is possible that an already-converted key of some other type
could be mistaken for a hash (not a converted hash, ironically, but another
type of the right length). Again, we can't do anything about that.

(cherry picked from commit 34e727676c)
2022-04-14 17:04:28 -07:00
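
The description above boils down to two checks applied in order: first filter out keys that syntactically parse as ABCI event metadata ("name/value/height/index"), then treat any remaining 32-byte key as a transaction hash, regardless of whether it happens to contain a "/" byte. A rough sketch of that classification, with hypothetical helper names rather than the script's actual code:

```go
package main

import (
	"bytes"
	"fmt"
	"strconv"
)

// looksLikeABCIEventKey applies a syntactic check to "name/value/height/index"
// keys: exactly four slash-separated fields with numeric height and index.
func looksLikeABCIEventKey(key []byte) bool {
	parts := bytes.Split(key, []byte("/"))
	if len(parts) != 4 {
		return false
	}
	_, errH := strconv.ParseInt(string(parts[2]), 10, 64)
	_, errI := strconv.ParseInt(string(parts[3]), 10, 64)
	return errH == nil && errI == nil
}

// looksLikeTxHash treats any remaining 32-byte key as a transaction hash.
// Crucially, it no longer rejects hashes that happen to contain a '/' byte.
func looksLikeTxHash(key []byte) bool {
	return len(key) == 32 && !looksLikeABCIEventKey(key)
}

func main() {
	hashWithSlash := bytes.Repeat([]byte{0x2f}, 32) // 32 bytes, every one a '/'
	fmt.Println(looksLikeTxHash(hashWithSlash))                     // true: converted as a hash
	fmt.Println(looksLikeTxHash([]byte("sender/addr/100/1")))       // false: not 32 bytes
	fmt.Println(looksLikeABCIEventKey([]byte("sender/addr/100/1"))) // true: event metadata
}
```
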
William Banfield
8579cc382e invoke callbacks when set late in socket client (Forward-Port #8331) (#8336) 2022-04-14 18:36:09 -04:00
dependabot[bot]
1d8b1c7507 build(deps): Bump github.com/vektra/mockery/v2 from 2.10.4 to 2.10.6 (#8345) 2022-04-14 09:32:11 -07:00
dependabot[bot]
118ff02272 build(deps): Bump github.com/spf13/viper from 1.10.1 to 1.11.0 (#8347) 2022-04-14 08:49:19 -07:00
mergify[bot]
52bcd56d60 confix: convert tx-index.indexer from string to array (backport #8342) (#8348)
The format of this config value was changed in v0.35.

- Move plan to its own file (for ease of reading).
- Convert indexer string to an array if not already done.

(cherry picked from commit 69874c2050)
2022-04-14 06:59:16 -07:00
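
The backport note above says confix converts the tx-index.indexer value from a string to an array when it is not already one. A small sketch of that idempotent conversion over a decoded config map (the key name and map shape are assumptions for illustration):

```go
package main

import "fmt"

// normalizeIndexer converts a legacy string value into a one-element array,
// and leaves values that are already arrays untouched, so reruns are safe.
func normalizeIndexer(cfg map[string]interface{}) {
	switch v := cfg["indexer"].(type) {
	case string:
		cfg["indexer"] = []string{v}
	case []string, []interface{}:
		// already converted; nothing to do
	}
}

func main() {
	old := map[string]interface{}{"indexer": "kv"}
	normalizeIndexer(old)
	fmt.Println(old["indexer"]) // [kv]

	converted := map[string]interface{}{"indexer": []string{"kv"}}
	normalizeIndexer(converted)
	fmt.Println(converted["indexer"]) // [kv] (unchanged)
}
```
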
M. J. Fromberger
12e0ea6ea7 confix: add default mempool.version = "v1" in v0.35. (#8335) 2022-04-13 14:28:54 -07:00
M. J. Fromberger
1c3921f5df Revert CI cache override. (#8324)
The caches for golangci-lint failed to update correctly causing spurious
failures on #8300. To work around this, I disabled caching temporarily.
This change removes that override, restoring the default.
2022-04-12 21:34:57 -07:00
M. J. Fromberger
a639323cf0 Add a tool to update old config files to the latest version. (#8300)
A manual backport of #8281, adjusted to stop at v0.35.

* Update pending changelog.
* Backport applicable fixes for v0.35 from master.
2022-04-12 21:19:12 -07:00
mergify[bot]
e4d83ba2ad keymigrate: fix decoding of block-hash row keys (backport #8294) (#8295)
(cherry picked from commit 322bb460dd)
2022-04-09 09:17:28 -07:00
M. J. Fromberger
9edb87c5f8 Fix release notes to match the prevailing style. (#8293)
A manual backport of #8292.
Also update actions/checkout.
2022-04-08 18:26:44 -07:00
M. J. Fromberger
dfa001f5c7 Prepare changelog for release v0.35.3. (#8289) 2022-04-08 16:11:18 -07:00
dependabot[bot]
5f7031432d build(deps): Bump github.com/lib/pq from 1.10.4 to 1.10.5 (#8282) 2022-04-08 23:14:07 +12:00
mergify[bot]
fb7ce48c15 scmigrate: ensure target key is correctly renamed (backport #8276) (#8280)
Prior to v0.35, the keys for seen-commit records included the applicable
height.  In v0.35 and beyond, we only keep the record for the latest height,
and its key does not include the height.

Update the seen-commit migration to ensure that the record we retain after
migration is correctly renamed to omit the height from its key.

Update the test cases to check for this condition after migrating.

(cherry picked from commit f3858e52de)
2022-04-07 15:51:26 -07:00
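
Per the description above, v0.35 keeps only the latest seen-commit record, stored under a key that no longer embeds the height; the migration fix renames the retained record accordingly. A toy sketch of that rename over an in-memory store (the key formats are illustrative, not the actual database encoding):

```go
package main

import (
	"fmt"
	"strings"
)

// renameSeenCommit moves the retained record from a legacy height-suffixed
// key (e.g. "seenCommit/1234") to the fixed key used in v0.35+ ("seenCommit").
func renameSeenCommit(db map[string][]byte, legacyKey string) {
	value, ok := db[legacyKey]
	if !ok || !strings.HasPrefix(legacyKey, "seenCommit/") {
		return
	}
	db["seenCommit"] = value
	delete(db, legacyKey)
}

func main() {
	db := map[string][]byte{"seenCommit/1234": []byte("latest commit blob")}
	renameSeenCommit(db, "seenCommit/1234")
	fmt.Println(string(db["seenCommit"])) // latest commit blob
	_, stillThere := db["seenCommit/1234"]
	fmt.Println(stillThere) // false
}
```
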
Callum Waters
308d283241 cli: fix reset command for v0.35 (#8259) 2022-04-07 11:24:46 -07:00
M. J. Fromberger
530e81dea4 Fix broken links in the changelog. (#8268) 2022-04-06 16:15:53 -07:00
dependabot[bot]
5051a1ce82 build(deps): Bump github.com/BurntSushi/toml from 1.0.0 to 1.1.0 (#8253)
Bumps [github.com/BurntSushi/toml](https://github.com/BurntSushi/toml) from 1.0.0 to 1.1.0.
- [Release notes](https://github.com/BurntSushi/toml/releases)
- [Commits](https://github.com/BurntSushi/toml/compare/v1.0.0...v1.1.0)

---
updated-dependencies:
- dependency-name: github.com/BurntSushi/toml
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-05 17:45:20 -07:00
dependabot[bot]
ac881db09a build(deps): Bump github.com/vektra/mockery/v2 from 2.10.2 to 2.10.4 (#8254)
Bumps [github.com/vektra/mockery/v2](https://github.com/vektra/mockery) from 2.10.2 to 2.10.4.
- [Release notes](https://github.com/vektra/mockery/releases)
- [Changelog](https://github.com/vektra/mockery/blob/master/.goreleaser.yml)
- [Commits](https://github.com/vektra/mockery/compare/v2.10.2...v2.10.4)

---
updated-dependencies:
- dependency-name: github.com/vektra/mockery/v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-05 10:25:26 -07:00
M. J. Fromberger
14461339f4 Update golangci-lint-action and golang-ci versions. (#8256)
Also specify Go toolchain version in actions (now required).
2022-04-05 08:08:43 -07:00
dependabot[bot]
21da336656 build(deps): Bump github.com/vektra/mockery/v2 from 2.10.1 to 2.10.2 (#8247)
Bumps [github.com/vektra/mockery/v2](https://github.com/vektra/mockery) from 2.10.1 to 2.10.2.
- [Release notes](https://github.com/vektra/mockery/releases)
- [Changelog](https://github.com/vektra/mockery/blob/master/.goreleaser.yml)
- [Commits](https://github.com/vektra/mockery/compare/v2.10.1...v2.10.2)

---
updated-dependencies:
- dependency-name: github.com/vektra/mockery/v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-04 07:58:28 -07:00
mergify[bot]
524d3ceb88 e2e: Fix hashing for app + Fix logic of TestApp_Hash (backport #8229) (#8236) 2022-04-01 11:12:12 -04:00
dependabot[bot]
d352becfaf build(deps): Bump github.com/vektra/mockery/v2 from 2.10.0 to 2.10.1 (#8227)
Bumps [github.com/vektra/mockery/v2](https://github.com/vektra/mockery) from 2.10.0 to 2.10.1.
- [Release notes](https://github.com/vektra/mockery/releases)
- [Changelog](https://github.com/vektra/mockery/blob/master/.goreleaser.yml)
- [Commits](https://github.com/vektra/mockery/compare/v2.10.0...v2.10.1)

---
updated-dependencies:
- dependency-name: github.com/vektra/mockery/v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-03-31 14:50:53 -04:00
M. J. Fromberger
71edac84c8 Fix broken Markdown links (#8214) (#8215) 2022-03-30 07:06:59 -07:00
mergify[bot]
813a3f2c7e migration: remove stale seen commits (backport #8205) (#8211) 2022-03-29 15:38:49 -04:00
dependabot[bot]
95a31f506d build(deps): Bump github.com/adlio/schema from 1.2.3 to 1.3.0 (#8202)
Bumps [github.com/adlio/schema](https://github.com/adlio/schema) from 1.2.3 to 1.3.0.
- [Release notes](https://github.com/adlio/schema/releases)
- [Commits](https://github.com/adlio/schema/compare/v1.2.3...v1.3.0)

---
updated-dependencies:
- dependency-name: github.com/adlio/schema
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-03-28 07:24:18 -07:00
dependabot[bot]
1cf7af83e7 build(deps): Bump github.com/golangci/golangci-lint (#8193)
Bumps [github.com/golangci/golangci-lint](https://github.com/golangci/golangci-lint) from 1.45.0 to 1.45.2.
- [Release notes](https://github.com/golangci/golangci-lint/releases)
- [Changelog](https://github.com/golangci/golangci-lint/blob/master/CHANGELOG.md)
- [Commits](https://github.com/golangci/golangci-lint/compare/v1.45.0...v1.45.2)

---
updated-dependencies:
- dependency-name: github.com/golangci/golangci-lint
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-03-28 09:40:41 -04:00
dependabot[bot]
3b1e5fec3a build(deps): Bump github.com/golangci/golangci-lint (#8168) 2022-03-22 09:55:43 -07:00
William Banfield
114a41f6cc consensus: change lock handling in reactor and handleMsg for RoundState (forward-port #7994 #7992) (#8138)
This ports the changes from #7992 and #7994 from the v0.34.x branch to v0.35.x.
2022-03-18 12:27:21 -04:00
dependabot[bot]
cb698bf4fc build(deps): Bump github.com/stretchr/testify from 1.7.0 to 1.7.1 (#8132) 2022-03-16 10:43:47 -04:00
M. J. Fromberger
a6fd04f1be p2p: update polling interval calculation for PEX requests (backport #8106) (#8118)
A manual cherry-pick of 89b4321af2.
2022-03-14 13:01:00 -07:00
dependabot[bot]
8df368ce25 build(deps): Bump github.com/spf13/cobra from 1.3.0 to 1.4.0 (#8107)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.3.0 to 1.4.0.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Changelog](https://github.com/spf13/cobra/blob/master/CHANGELOG.md)
- [Commits](https://github.com/spf13/cobra/compare/v1.3.0...v1.4.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2022-03-11 14:28:54 -08:00
M. J. Fromberger
9b87606dee consensus: avert a data race in round state access (#8112) 2022-03-11 11:16:19 -08:00
dependabot[bot]
76ed7d7954 build(deps): Bump google.golang.org/grpc from 1.44.0 to 1.45.0 (#8103) 2022-03-10 15:49:19 -08:00
M. J. Fromberger
5fb791e020 Fix data race. (#8105)
Original repro:
  go test -run TestStateFullRound1 -race -count=500 ./internal/consensus
2022-03-10 14:29:13 -08:00
William Banfield
7f85fc250a consensus: start the timeout ticker before replay (#7844) (#8082) 2022-03-08 20:53:42 -05:00
M. J. Fromberger
ba0911966d Update changelog for #8081 backport. (#8092) 2022-03-08 13:35:45 -08:00
mergify[bot]
aface5f9b8 cmd: make reset more safe (backport #8081) (#8090)
Backport notes:

- Revert command declaration to the old explicit format.
- Update threading of the keyType argument.
- Fix function naming collision.
2022-03-08 09:20:11 -08:00
mergify[bot]
38da3f02ea Revert "Remove master from versions and copy it from the latest." (backport #8053) (#8057)
This reverts commit f939f962b1.

A lot of inbound links are still broken, so we will need to find a different
approach to suppressing unreleased docs.

(cherry picked from commit 59eaa4dba0)
2022-03-02 09:53:34 -08:00
146 changed files with 7913 additions and 1790 deletions

@@ -1,27 +0,0 @@
version: 2
updates:
- package-ecosystem: github-actions
directory: "/"
schedule:
interval: daily
time: "11:00"
open-pull-requests-limit: 10
- package-ecosystem: npm
directory: "/docs"
schedule:
interval: daily
time: "11:00"
open-pull-requests-limit: 10
reviewers:
- fadeev
- package-ecosystem: gomod
directory: "/"
schedule:
interval: daily
time: "11:00"
open-pull-requests-limit: 10
reviewers:
- melekes
- tessr
labels:
- T:dependencies

@@ -20,11 +20,11 @@ jobs:
goos: ["linux"]
timeout-minutes: 5
steps:
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: "1.17"
- uses: actions/checkout@v2.4.0
- uses: technote-space/get-diff-action@v5
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go
@@ -41,11 +41,11 @@ jobs:
needs: build
timeout-minutes: 5
steps:
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: "1.17"
- uses: actions/checkout@v2.4.0
- uses: technote-space/get-diff-action@v5
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go
@@ -63,11 +63,11 @@ jobs:
needs: build
timeout-minutes: 5
steps:
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: "1.17"
- uses: actions/checkout@v2.4.0
- uses: technote-space/get-diff-action@v5
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go

@@ -13,7 +13,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2.3.4
- uses: actions/checkout@v3
- name: Prepare
id: prep
run: |
@@ -39,17 +39,17 @@ jobs:
platforms: all
- name: Set up Docker Build
uses: docker/setup-buildx-action@v1.6.0
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@v1.10.0
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Publish to Docker Hub
uses: docker/build-push-action@v2.7.0
uses: docker/build-push-action@v3
with:
context: .
file: ./DOCKER/Dockerfile

@@ -15,11 +15,11 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: '1.17'
- uses: actions/checkout@v2.4.0
- uses: actions/checkout@v3
- name: Build
working-directory: test/e2e

@@ -21,11 +21,11 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: '1.17'
- uses: actions/checkout@v2.3.4
- uses: actions/checkout@v3
with:
ref: 'v0.34.x'

@@ -21,11 +21,11 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: '1.17'
- uses: actions/checkout@v2.3.4
- uses: actions/checkout@v3
- name: Build
working-directory: test/e2e

@@ -14,11 +14,11 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: '1.17'
- uses: actions/checkout@v2.3.4
- uses: technote-space/get-diff-action@v5
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go

@@ -13,11 +13,11 @@ jobs:
fuzz-nightly-test:
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: '1.17'
- uses: actions/checkout@v2.3.4
- uses: actions/checkout@v3
- name: Install go-fuzz
working-directory: test/fuzz
@@ -54,14 +54,14 @@ jobs:
continue-on-error: true
- name: Archive crashers
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: crashers
path: test/fuzz/**/crashers
retention-days: 3
- name: Archive suppressions
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: suppressions
path: test/fuzz/**/suppressions

@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 3
steps:
- uses: styfle/cancel-workflow-action@0.9.1
- uses: styfle/cancel-workflow-action@0.10.0
with:
workflow_id: 1041851,1401230,2837803
access_token: ${{ github.token }}

@@ -46,7 +46,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout the Jepsen repository
uses: actions/checkout@v2.3.4
uses: actions/checkout@v3
with:
repository: 'tendermint/jepsen'
@@ -58,7 +58,7 @@ jobs:
run: docker exec -i jepsen-control bash -c 'source /root/.bashrc; cd /jepsen/tendermint; lein run test --nemesis ${{ github.event.inputs.nemesis }} --workload ${{ github.event.inputs.workload }} --concurrency ${{ github.event.inputs.concurrency }} --tendermint-url ${{ github.event.inputs.tendermintUrl }} --merkleeyes-url ${{ github.event.inputs.merkleeyesUrl }} --time-limit ${{ github.event.inputs.timeLimit }} ${{ github.event.inputs.dupOrSuperByzValidators }}'
- name: Archive results
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: results
path: tendermint/store/latest

@@ -6,7 +6,7 @@ jobs:
markdown-link-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2.3.4
- uses: gaurav-nelson/github-action-markdown-link-check@1.0.13
- uses: actions/checkout@v3
- uses: creachadair/github-action-markdown-link-check@master
with:
folder-path: "docs"

@@ -1,7 +1,11 @@
name: Lint
# Lint runs golangci-lint over the entire Tendermint repository
# This workflow is run on every pull request and push to master
# The `golangci` job will pass without running if no *.{go, mod, sum} files have been modified.
name: Golang Linter
# Lint runs golangci-lint over the entire Tendermint repository.
#
# This workflow is run on every pull request and push to master.
#
# The `golangci` job will pass without running if no *.{go, mod, sum}
# files have been modified.
on:
pull_request:
push:
@@ -13,17 +17,22 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 8
steps:
- uses: actions/checkout@v2.4.0
- uses: technote-space/get-diff-action@v5
- uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: '^1.17'
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go
go.mod
go.sum
- uses: golangci/golangci-lint-action@v2.5.2
- uses: golangci/golangci-lint-action@v3
with:
# Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
version: v1.42.1
# Required: the version of golangci-lint is required and
# must be specified without patch version: we always use the
# latest patch version.
version: v1.45
args: --timeout 10m
github-token: ${{ secrets.github_token }}
if: env.GIT_DIFF

@@ -19,7 +19,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v2.4.0
uses: actions/checkout@v3
- name: Lint Code Base
uses: docker://github/super-linter:v4
env:

@@ -16,7 +16,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2.3.4
- uses: actions/checkout@v3
- name: Prepare
id: prep
run: |
@@ -34,16 +34,16 @@ jobs:
echo ::set-output name=tags::${TAGS}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1.6.0
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v1.10.0
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Publish to Docker Hub
uses: docker/build-push-action@v2.7.0
uses: docker/build-push-action@v3
with:
context: ./tools/proto
file: ./tools/proto/Dockerfile

@@ -11,13 +11,13 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 4
steps:
- uses: actions/checkout@v2.3.4
- uses: actions/checkout@v3
- name: lint
run: make proto-lint
proto-breakage:
runs-on: ubuntu-latest
timeout-minutes: 4
steps:
- uses: actions/checkout@v2.3.4
- uses: actions/checkout@v3
- name: check-breakage
run: make proto-check-breaking-ci

@@ -12,26 +12,28 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2.3.4
uses: actions/checkout@v3
with:
fetch-depth: 0
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: '1.17'
- name: Build
uses: goreleaser/goreleaser-action@v2
uses: goreleaser/goreleaser-action@v3
if: ${{ github.event_name == 'pull_request' }}
with:
version: latest
args: build --skip-validate # skip validate skips initial sanity checks in order to be able to fully run
- run: echo https://github.com/tendermint/tendermint/blob/${GITHUB_REF#refs/tags/}/CHANGELOG.md#${GITHUB_REF#refs/tags/} > ../release_notes.md
- name: Release
uses: goreleaser/goreleaser-action@v2
uses: goreleaser/goreleaser-action@v3
if: startsWith(github.ref, 'refs/tags/')
with:
version: latest
args: release --rm-dist
args: release --rm-dist --release-notes=../release_notes.md
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

@@ -7,7 +7,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v4
- uses: actions/stale@v5
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-pr-message: "This pull request has been automatically marked as stale because it has not had

@@ -16,11 +16,11 @@ jobs:
matrix:
part: ["00", "01", "02", "03", "04", "05"]
steps:
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: "1.17"
- uses: actions/checkout@v2.3.4
- uses: technote-space/get-diff-action@v5
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go
@@ -32,7 +32,7 @@ jobs:
run: |
make test-group-${{ matrix.part }} NUM_SPLIT=6
if: env.GIT_DIFF
- uses: actions/upload-artifact@v2
- uses: actions/upload-artifact@v3
with:
name: "${{ github.sha }}-${{ matrix.part }}-coverage"
path: ./build/${{ matrix.part }}.profile.out
@@ -41,8 +41,8 @@ jobs:
runs-on: ubuntu-latest
needs: tests
steps:
- uses: actions/checkout@v2.4.0
- uses: technote-space/get-diff-action@v5
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go
@@ -50,26 +50,26 @@ jobs:
go.mod
go.sum
Makefile
- uses: actions/download-artifact@v2
- uses: actions/download-artifact@v3
with:
name: "${{ github.sha }}-00-coverage"
if: env.GIT_DIFF
- uses: actions/download-artifact@v2
- uses: actions/download-artifact@v3
with:
name: "${{ github.sha }}-01-coverage"
if: env.GIT_DIFF
- uses: actions/download-artifact@v2
- uses: actions/download-artifact@v3
with:
name: "${{ github.sha }}-02-coverage"
if: env.GIT_DIFF
- uses: actions/download-artifact@v2
- uses: actions/download-artifact@v3
with:
name: "${{ github.sha }}-03-coverage"
if: env.GIT_DIFF
- run: |
cat ./*profile.out | grep -v "mode: set" >> coverage.txt
if: env.GIT_DIFF
- uses: codecov/codecov-action@v2.1.0
- uses: codecov/codecov-action@v3
with:
file: ./coverage.txt
if: env.GIT_DIFF

@@ -2,6 +2,85 @@
Friendly reminder: We have a [bug bounty program](https://hackerone.com/cosmos).
## v0.35.7
June 16, 2022
### BUG FIXES
- [p2p] [\#8692](https://github.com/tendermint/tendermint/pull/8692) scale the number of stored peers by the configured maximum connections (#8684)
- [rpc] [\#8715](https://github.com/tendermint/tendermint/pull/8715) always close http bodies (backport #8712)
- [p2p] [\#8760](https://github.com/tendermint/tendermint/pull/8760) accept should not abort on first error (backport #8759)
### BREAKING CHANGES
- P2P Protocol
- [p2p] [\#8737](https://github.com/tendermint/tendermint/pull/8737) Introduce "inactive" peer label to avoid re-dialing incompatible peers. (@tychoish)
- [p2p] [\#8737](https://github.com/tendermint/tendermint/pull/8737) Increase frequency of dialing attempts to reduce latency for peer acquisition. (@tychoish)
- [p2p] [\#8737](https://github.com/tendermint/tendermint/pull/8737) Improvements to peer scoring and sorting to gossip a greater variety of peers during PEX. (@tychoish)
- [p2p] [\#8737](https://github.com/tendermint/tendermint/pull/8737) Track incoming and outgoing peers separately to ensure more peer slots open for incoming connections. (@tychoish)
## v0.35.6
June 3, 2022
### FEATURES
- [migrate] [\#8672](https://github.com/tendermint/tendermint/pull/8672) provide function for database production (backport #8614) (@tychoish)
### BUG FIXES
- [consensus] [\#8651](https://github.com/tendermint/tendermint/pull/8651) restructure peer catchup sleep (@tychoish)
- [pex] [\#8657](https://github.com/tendermint/tendermint/pull/8657) align max address thresholds (@cmwaters)
- [cmd] [\#8668](https://github.com/tendermint/tendermint/pull/8668) don't used global config for reset commands (@cmwaters)
- [p2p] [\#8681](https://github.com/tendermint/tendermint/pull/8681) shed peers from store from other networks (backport #8678) (@tychoish)
## v0.35.5
May 26, 2022
### BUG FIXES
- [p2p] [\#8371](https://github.com/tendermint/tendermint/pull/8371) fix setting in con-tracker (backport #8370) (@tychoish)
- [blocksync] [\#8496](https://github.com/tendermint/tendermint/pull/8496) validate block against state before persisting it to disk (@cmwaters)
- [statesync] [\#8494](https://github.com/tendermint/tendermint/pull/8494) avoid potential race (@tychoish)
- [keymigrate] [\#8467](https://github.com/tendermint/tendermint/pull/8467) improve filtering for legacy transaction hashes (backport #8466) (@creachadair)
- [rpc] [\#8594](https://github.com/tendermint/tendermint/pull/8594) fix encoding of block_results responses (@creachadair)
## v0.35.4
April 18, 2022
Special thanks to external contributors on this release: @firelizzard18
### FEATURES
- [cli] [\#8300](https://github.com/tendermint/tendermint/pull/8300) Add a tool to update old config files to the latest version [backport [\#8281](https://github.com/tendermint/tendermint/pull/8281)]. (@creachadair)
### IMPROVEMENTS
### BUG FIXES
- [cli] [\#8294](https://github.com/tendermint/tendermint/pull/8294) keymigrate: ensure block hash keys are correctly translated. (@creachadair)
- [cli] [\#8352](https://github.com/tendermint/tendermint/pull/8352) keymigrate: ensure transaction hash keys are correctly translated. (@creachadair)
## v0.35.3
April 8, 2022
### FEATURES
- [cli] [\#8081](https://github.com/tendermint/tendermint/pull/8081) add a safer-to-use `reset-state` command. (@marbar3778)
### IMPROVEMENTS
- [consensus] [\#8138](https://github.com/tendermint/tendermint/pull/8138) change lock handling in reactor and handleMsg for RoundState. (@williambanfield)
### BUG FIXES
- [cli] [\#8276](https://github.com/tendermint/tendermint/pull/8276) scmigrate: ensure target key is correctly renamed. (@creachadair)
## v0.35.2
February 28, 2022
@@ -1908,7 +1987,7 @@ more details.
- [rpc] [\#3269](https://github.com/tendermint/tendermint/issues/2826) Limit number of unique clientIDs with open subscriptions. Configurable via `rpc.max_subscription_clients`
- [rpc] [\#3269](https://github.com/tendermint/tendermint/issues/2826) Limit number of unique queries a given client can subscribe to at once. Configurable via `rpc.max_subscriptions_per_client`.
- [rpc] [\#3435](https://github.com/tendermint/tendermint/issues/3435) Default ReadTimeout and WriteTimeout changed to 10s. WriteTimeout can increased by setting `rpc.timeout_broadcast_tx_commit` in the config.
- [rpc/client] [\#3269](https://github.com/tendermint/tendermint/issues/3269) Update `EventsClient` interface to reflect new pubsub/eventBus API [ADR-33](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-033-pubsub.md). This includes `Subscribe`, `Unsubscribe`, and `UnsubscribeAll` methods.
- [rpc/client] [\#3269](https://github.com/tendermint/tendermint/issues/3269) Update `EventsClient` interface to reflect new pubsub/eventBus API [ADR-33](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-033-pubsub.md). This includes `Subscribe`, `Unsubscribe`, and `UnsubscribeAll` methods.
* Apps
- [abci] [\#3403](https://github.com/tendermint/tendermint/issues/3403) Remove `time_iota_ms` from BlockParams. This is a
@@ -1961,7 +2040,7 @@ more details.
- [blockchain] [\#3358](https://github.com/tendermint/tendermint/pull/3358) Fix timer leak in `BlockPool` (@guagualvcha)
- [cmd] [\#3408](https://github.com/tendermint/tendermint/issues/3408) Fix `testnet` command's panic when creating non-validator configs (using `--n` flag) (@srmo)
- [libs/db/remotedb/grpcdb] [\#3402](https://github.com/tendermint/tendermint/issues/3402) Close Iterator/ReverseIterator after use
- [libs/pubsub] [\#951](https://github.com/tendermint/tendermint/issues/951), [\#1880](https://github.com/tendermint/tendermint/issues/1880) Use non-blocking send when dispatching messages [ADR-33](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-033-pubsub.md)
- [libs/pubsub] [\#951](https://github.com/tendermint/tendermint/issues/951), [\#1880](https://github.com/tendermint/tendermint/issues/1880) Use non-blocking send when dispatching messages [ADR-33](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-033-pubsub.md)
- [lite] [\#3364](https://github.com/tendermint/tendermint/issues/3364) Fix `/validators` and `/abci_query` proxy endpoints
(@guagualvcha)
- [p2p/conn] [\#3347](https://github.com/tendermint/tendermint/issues/3347) Reject all-zero shared secrets in the Diffie-Hellman step of secret-connection
@@ -2665,7 +2744,7 @@ Special thanks to external contributors on this release:
This release is mostly about the ConsensusParams - removing fields and enforcing MaxGas.
It also addresses some issues found via security audit, removes various unused
functions from `libs/common`, and implements
[ADR-012](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-012-peer-transport.md).
[ADR-012](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-012-peer-transport.md).
BREAKING CHANGES:
@@ -2746,7 +2825,7 @@ BREAKING CHANGES:
- [abci] Added address of the original proposer of the block to Header
- [abci] Change ABCI Header to match Tendermint exactly
- [abci] [\#2159](https://github.com/tendermint/tendermint/issues/2159) Update use of `Validator` (see
[ADR-018](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-018-ABCI-Validators.md)):
[ADR-018](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-018-ABCI-Validators.md)):
- Remove PubKey from `Validator` (so it's just Address and Power)
- Introduce `ValidatorUpdate` (with just PubKey and Power)
- InitChain and EndBlock use ValidatorUpdate
@@ -2768,7 +2847,7 @@ BREAKING CHANGES:
- [state] [\#1815](https://github.com/tendermint/tendermint/issues/1815) Validator set changes are now delayed by one block (!)
- Add NextValidatorSet to State, changes on-disk representation of state
- [state] [\#2184](https://github.com/tendermint/tendermint/issues/2184) Enforce ConsensusParams.BlockSize.MaxBytes (See
[ADR-020](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-020-block-size.md)).
[ADR-020](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-020-block-size.md)).
- Remove ConsensusParams.BlockSize.MaxTxs
- Introduce maximum sizes for all components of a block, including ChainID
- [types] Updates to the block Header:
@@ -2779,7 +2858,7 @@ BREAKING CHANGES:
- [consensus] [\#2203](https://github.com/tendermint/tendermint/issues/2203) Implement BFT time
- Timestamp in block must be monotonic and equal the median of timestamps in block's LastCommit
- [crypto] [\#2239](https://github.com/tendermint/tendermint/issues/2239) Secp256k1 signature changes (See
[ADR-014](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-014-secp-malleability.md)):
[ADR-014](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-014-secp-malleability.md)):
- format changed from DER to `r || s`, both little endian encoded as 32 bytes.
- malleability removed by requiring `s` to be in canonical form.


@@ -2,9 +2,9 @@
Friendly reminder: We have a [bug bounty program](https://hackerone.com/cosmos).
## v0.35.3
## v0.35.8
Month, DD, YYYY
Month DD, YYYY
Special thanks to external contributors on this release:
@@ -22,6 +22,8 @@ Special thanks to external contributors on this release:
### FEATURES
- [cli] [\#8675](https://github.com/tendermint/tendermint/pull/8675) Add command to force compact goleveldb databases (@cmwaters)
### IMPROVEMENTS
### BUG FIXES


@@ -20,7 +20,7 @@ This code of conduct applies to all projects run by the Tendermint/COSMOS team a
* Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.
* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term “harassment” as including the definition in the [Citizen Code of Conduct](http://citizencodeofconduct.org/); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don't tolerate behavior that excludes people in socially marginalized groups.
* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term “harassment” as including the definition in the [Citizen Code of Conduct](https://github.com/stumpsyn/policies/blob/master/citizen_code_of_conduct.md); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don't tolerate behavior that excludes people in socially marginalized groups.
* Private harassment is also unacceptable. No matter who you are, if you feel you have been or are being harassed or made uncomfortable by a community member, please contact one of the channel admins or the person mentioned above immediately. Whether you're a regular contributor or a newcomer, we care about making this community a safe place for you and we've got your back.


@@ -221,19 +221,15 @@ build-docs:
mkdir -p ~/output/$${path_prefix} ; \
cp -r .vuepress/dist/* ~/output/$${path_prefix}/ ; \
cp ~/output/$${path_prefix}/index.html ~/output ; \
done < versions ; \
mkdir -p ~/output/master ; \
cp -r .vuepress/dist/* ~/output/master/
done < versions ;
.PHONY: build-docs
###############################################################################
### Docker image ###
###############################################################################
build-docker: build-linux
cp $(BUILDDIR)/tendermint DOCKER/tendermint
build-docker:
docker build --label=tendermint --tag="tendermint/tendermint" -f DOCKER/Dockerfile .
rm -rf DOCKER/tendermint
.PHONY: build-docker


@@ -44,26 +44,47 @@ This guide provides instructions for upgrading to specific versions of Tendermin
* The fast sync process as well as the blockchain package and service have all
been renamed to block sync
* We have added a new, experimental tool to help operators migrate
configuration files created by previous versions of Tendermint.
To try this tool, run:
```shell
# Install the tool.
go install github.com/tendermint/tendermint/scripts/confix@v0.35.x
# Run the tool with the old configuration file as input.
# Replace the -config argument with your path.
confix -config ~/.tendermint/config/config.toml -out updated.toml
```
This tool should be able to update configurations from v0.34 to v0.35. We
plan to extend it to handle older configuration files in the future. For now,
it will report an error (without making any changes) if it does not recognize
the version that created the file.
### Database Key Format Changes
The format of all tendermint on-disk database keys changes in
0.35. Upgrading nodes must either re-sync all data or run a migration
script provided in this release. The script located in
`github.com/tendermint/tendermint/scripts/keymigrate/migrate.go`
provides the function `Migrate(context.Context, db.DB)` which you can
operationalize as makes sense for your deployment.
script provided in this release.
The script located in
`github.com/tendermint/tendermint/scripts/keymigrate/migrate.go` provides the
function `Migrate(context.Context, db.DB)` which you can operationalize as
makes sense for your deployment.
For ease of use the `tendermint` command includes a CLI version of the
migration script, which you can invoke, as in:
tendermint key-migrate
This reads the configuration file as normal and allows the
`--db-backend` and `--db-dir` flags to change database operations as
needed.
This reads the configuration file as normal and allows the `--db-backend` and
`--db-dir` flags to override the database location as needed.
The migration operation is idempotent and can be run more than once,
if needed.
The migration operation is intended to be idempotent, and should be safe to
rerun on the same database multiple times. As a safety measure, however, we
recommend that operators test out the migration on a copy of the database
first, if it is practical to do so, before applying it to the production data.
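For example, a typical invocation looks like the following (the backend and
data directory shown are illustrative; adjust them to your deployment):

```shell
# Read the configuration file from the default home directory and migrate
# every database it references.
tendermint key-migrate

# Or point the migration at an explicit backend and data directory.
tendermint key-migrate --db-backend goleveldb --db-dir /path/to/.tendermint/data
```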
### CLI Changes
@@ -113,11 +134,11 @@ To access any of the functionality previously available via the
`node.Node` type, use the `*local.Local` "RPC" client, that exposes
the full RPC interface provided as direct function calls. Import the
`github.com/tendermint/tendermint/rpc/client/local` package and pass
the node service as in the following:
the node service as in the following:
```go
node := node.NewDefault() //construct the node object
// start and set up the node service
// start and set up the node service
client := local.New(node.(local.NodeService))
// use client object to interact with the node
@@ -144,10 +165,10 @@ both stacks.
The P2P library was reimplemented in this release. The new implementation is
enabled by default in this version of Tendermint. The legacy implementation is still
included in this version of Tendermint as a backstop to work around unforeseen
production issues. The new and legacy versions are interoperable. If necessary,
production issues. The new and legacy versions are interoperable. If necessary,
you can enable the legacy implementation in the server configuration file.
To make use of the legacy P2P implementation, add or update the following field of
To make use of the legacy P2P implementation, add or update the following field of
your server's configuration file under the `[p2p]` section:
```toml
@@ -172,8 +193,8 @@ in the order in which they were received.
* `priority`: A priority queue of messages.
* `wdrr`: A queue implementing the Weighted Deficit Round Robin algorithm. A
weighted deficit round robin queue is created per peer. Each queue contains a
* `wdrr`: A queue implementing the Weighted Deficit Round Robin algorithm. A
weighted deficit round robin queue is created per peer. Each queue contains a
separate 'flow' for each of the channels of communication that exist between any two
peers. Tendermint maintains a channel per message type between peers. Each WDRR
queue maintains a shared buffer with a fixed capacity through which messages on different


@@ -87,9 +87,15 @@ type ReqRes struct {
*sync.WaitGroup
*types.Response // Not set atomically, so be sure to use WaitGroup.
mtx tmsync.Mutex
done bool // Gets set to true once *after* WaitGroup.Done().
cb func(*types.Response) // A single callback that may be set.
mtx tmsync.Mutex
// callbackInvoked is a variable to track if the callback was already
// invoked during the regular execution of the request. This variable
// allows clients to set the callback simultaneously without potentially
// invoking the callback twice by accident, once when 'SetCallback' is
// called and once during the normal request.
callbackInvoked bool
cb func(*types.Response) // A single callback that may be set.
}
func NewReqRes(req *types.Request) *ReqRes {
@@ -98,8 +104,8 @@ func NewReqRes(req *types.Request) *ReqRes {
WaitGroup: waitGroup1(),
Response: nil,
done: false,
cb: nil,
callbackInvoked: false,
cb: nil,
}
}
@@ -109,7 +115,7 @@ func NewReqRes(req *types.Request) *ReqRes {
func (r *ReqRes) SetCallback(cb func(res *types.Response)) {
r.mtx.Lock()
if r.done {
if r.callbackInvoked {
r.mtx.Unlock()
cb(r.Response)
return
@@ -128,6 +134,7 @@ func (r *ReqRes) InvokeCallback() {
if r.cb != nil {
r.cb(r.Response)
}
r.callbackInvoked = true
}
// GetCallback returns the configured callback of the ReqRes object which may be
@@ -142,13 +149,6 @@ func (r *ReqRes) GetCallback() func(*types.Response) {
return r.cb
}
// SetDone marks the ReqRes object as done.
func (r *ReqRes) SetDone() {
r.mtx.Lock()
r.done = true
r.mtx.Unlock()
}
func waitGroup1() (wg *sync.WaitGroup) {
wg = &sync.WaitGroup{}
wg.Add(1)


@@ -72,7 +72,6 @@ func (cli *grpcClient) OnStart() error {
cli.mtx.Lock()
defer cli.mtx.Unlock()
reqres.SetDone()
reqres.Done()
// Notify client listener if set
@@ -81,9 +80,7 @@ func (cli *grpcClient) OnStart() error {
}
// Notify reqRes listener if set
if cb := reqres.GetCallback(); cb != nil {
cb(reqres.Response)
}
reqres.InvokeCallback()
}
for reqres := range cli.chReqRes {
if reqres != nil {


@@ -348,12 +348,13 @@ func (app *localClient) ApplySnapshotChunkSync(
func (app *localClient) callback(req *types.Request, res *types.Response) *ReqRes {
app.Callback(req, res)
return newLocalReqRes(req, res)
rr := newLocalReqRes(req, res)
rr.callbackInvoked = true
return rr
}
func newLocalReqRes(req *types.Request, res *types.Response) *ReqRes {
reqRes := NewReqRes(req)
reqRes.Response = res
reqRes.SetDone()
return reqRes
}


@@ -801,3 +801,18 @@ func (_m *Client) String() string {
func (_m *Client) Wait() {
_m.Called()
}
type NewClientT interface {
mock.TestingT
Cleanup(func())
}
// NewClient creates a new instance of Client. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewClient(t NewClientT) *Client {
mock := &Client{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}


@@ -3,6 +3,7 @@ package abciclient_test
import (
"context"
"fmt"
"sync"
"testing"
"time"
@@ -125,3 +126,73 @@ func (slowApp) BeginBlock(req types.RequestBeginBlock) types.ResponseBeginBlock
time.Sleep(200 * time.Millisecond)
return types.ResponseBeginBlock{}
}
// TestCallbackInvokedWhenSetLate ensures that the callback is invoked when
// set after the client completes the call into the app. Currently this
// test relies on the callback being allowed to be invoked twice if set multiple
// times, once when set early and once when set late.
func TestCallbackInvokedWhenSetLate(t *testing.T) {
wg := &sync.WaitGroup{}
wg.Add(1)
app := blockedABCIApplication{
wg: wg,
}
_, c := setupClientServer(t, app)
reqRes, err := c.CheckTxAsync(context.Background(), types.RequestCheckTx{})
require.NoError(t, err)
done := make(chan struct{})
cb := func(_ *types.Response) {
close(done)
}
reqRes.SetCallback(cb)
app.wg.Done()
<-done
var called bool
cb = func(_ *types.Response) {
called = true
}
reqRes.SetCallback(cb)
require.True(t, called)
}
type blockedABCIApplication struct {
wg *sync.WaitGroup
types.BaseApplication
}
func (b blockedABCIApplication) CheckTx(r types.RequestCheckTx) types.ResponseCheckTx {
b.wg.Wait()
return b.BaseApplication.CheckTx(r)
}
// TestCallbackInvokedWhenSetEarly ensures that the callback is invoked when
// set before the client completes the call into the app.
func TestCallbackInvokedWhenSetEarly(t *testing.T) {
wg := &sync.WaitGroup{}
wg.Add(1)
app := blockedABCIApplication{
wg: wg,
}
_, c := setupClientServer(t, app)
reqRes, err := c.CheckTxAsync(context.Background(), types.RequestCheckTx{})
require.NoError(t, err)
done := make(chan struct{})
cb := func(_ *types.Response) {
close(done)
}
reqRes.SetCallback(cb)
app.wg.Done()
called := func() bool {
select {
case <-done:
return true
default:
return false
}
}
require.Eventually(t, called, time.Second, time.Millisecond*25)
}


@@ -5,6 +5,9 @@ import (
"encoding/json"
"github.com/gogo/protobuf/jsonpb"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/encoding"
tmjson "github.com/tendermint/tendermint/libs/json"
)
const (
@@ -102,6 +105,48 @@ func (r *EventAttribute) UnmarshalJSON(b []byte) error {
return jsonpbUnmarshaller.Unmarshal(reader, r)
}
// validatorUpdateJSON is the JSON encoding of a validator update.
//
// It handles translation of public keys from the protobuf representation to
// the legacy Amino-compatible format expected by RPC clients.
type validatorUpdateJSON struct {
PubKey json.RawMessage `json:"pub_key,omitempty"`
Power int64 `json:"power,string"`
}
func (v *ValidatorUpdate) MarshalJSON() ([]byte, error) {
key, err := encoding.PubKeyFromProto(v.PubKey)
if err != nil {
return nil, err
}
jkey, err := tmjson.Marshal(key)
if err != nil {
return nil, err
}
return json.Marshal(validatorUpdateJSON{
PubKey: jkey,
Power: v.GetPower(),
})
}
func (v *ValidatorUpdate) UnmarshalJSON(data []byte) error {
var vu validatorUpdateJSON
if err := json.Unmarshal(data, &vu); err != nil {
return err
}
var key crypto.PubKey
if err := tmjson.Unmarshal(vu.PubKey, &key); err != nil {
return err
}
pkey, err := encoding.PubKeyToProto(key)
if err != nil {
return err
}
v.PubKey = pkey
v.Power = vu.Power
return nil
}
// Some compile time assertions to ensure we don't
// have accidental runtime surprises later on.
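A rough round-trip check of this encoding might look like the following
(a hypothetical test sketch, not part of this change; it assumes the
`crypto/ed25519` and `crypto/encoding` packages for key generation and
conversion):

```go
package types_test

import (
	"encoding/json"
	"testing"

	"github.com/stretchr/testify/require"
	abci "github.com/tendermint/tendermint/abci/types"
	"github.com/tendermint/tendermint/crypto/ed25519"
	"github.com/tendermint/tendermint/crypto/encoding"
)

// TestValidatorUpdateJSONRoundTrip round-trips a ValidatorUpdate through the
// Amino-compatible JSON form produced by MarshalJSON/UnmarshalJSON above.
func TestValidatorUpdateJSONRoundTrip(t *testing.T) {
	pk, err := encoding.PubKeyToProto(ed25519.GenPrivKey().PubKey())
	require.NoError(t, err)

	vu := abci.ValidatorUpdate{PubKey: pk, Power: 10}

	data, err := json.Marshal(&vu) // uses MarshalJSON
	require.NoError(t, err)

	var decoded abci.ValidatorUpdate
	require.NoError(t, json.Unmarshal(data, &decoded)) // uses UnmarshalJSON
	require.Equal(t, vu.Power, decoded.Power)
	require.Equal(t, vu.PubKey, decoded.PubKey)
}
```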


@@ -0,0 +1,69 @@
package commands
import (
"errors"
"path/filepath"
"sync"
"github.com/spf13/cobra"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/util"
"github.com/tendermint/tendermint/libs/log"
)
func MakeCompactDBCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "experimental-compact-goleveldb",
Short: "force compacts the tendermint storage engine (only GoLevelDB supported)",
Long: `
This is a temporary utility command that performs a force compaction on the state
and blockstores to reduce disk space for a pruning node. This should only be run
once the node has stopped. This command will likely be omitted in the future after
the planned refactor to the storage engine.
Currently, only GoLevelDB is supported.
`,
RunE: func(cmd *cobra.Command, args []string) error {
if config.DBBackend != "goleveldb" {
return errors.New("compaction is currently only supported with goleveldb")
}
compactGoLevelDBs(config.RootDir, logger)
return nil
},
}
return cmd
}
func compactGoLevelDBs(rootDir string, logger log.Logger) {
dbNames := []string{"state", "blockstore"}
o := &opt.Options{
DisableSeeksCompaction: true,
}
wg := sync.WaitGroup{}
for _, dbName := range dbNames {
dbName := dbName
wg.Add(1)
go func() {
defer wg.Done()
dbPath := filepath.Join(rootDir, "data", dbName+".db")
store, err := leveldb.OpenFile(dbPath, o)
if err != nil {
logger.Error("failed to initialize tendermint db", "path", dbPath, "err", err)
return
}
defer store.Close()
logger.Info("starting compaction...", "db", dbPath)
err = store.CompactRange(util.Range{Start: nil, Limit: nil})
if err != nil {
logger.Error("failed to compact tendermint db", "path", dbPath, "err", err)
}
}()
}
wg.Wait()
}
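Once this command is registered on the CLI (see the `main.go` change below), a
typical session looks like the following; the node must be stopped first, and
the command refuses to run unless `db-backend` is `goleveldb`:

```shell
# Stop the node, then force-compact the state and block stores in place.
tendermint experimental-compact-goleveldb
```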


@@ -5,8 +5,11 @@ import (
"fmt"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/scripts/keymigrate"
"github.com/tendermint/tendermint/scripts/scmigrate"
)
func MakeKeyMigrateCommand() *cobra.Command {
@@ -14,46 +17,7 @@ func MakeKeyMigrateCommand() *cobra.Command {
Use: "key-migrate",
Short: "Run Database key migration",
RunE: func(cmd *cobra.Command, args []string) error {
ctx, cancel := context.WithCancel(cmd.Context())
defer cancel()
contexts := []string{
// this is ordered to put the
// (presumably) biggest/most important
// subsets first.
"blockstore",
"state",
"peerstore",
"tx_index",
"evidence",
"light",
}
for idx, dbctx := range contexts {
logger.Info("beginning a key migration",
"dbctx", dbctx,
"num", idx+1,
"total", len(contexts),
)
db, err := cfg.DefaultDBProvider(&cfg.DBContext{
ID: dbctx,
Config: config,
})
if err != nil {
return fmt.Errorf("constructing database handle: %w", err)
}
if err = keymigrate.Migrate(ctx, db); err != nil {
return fmt.Errorf("running migration for context %q: %w",
dbctx, err)
}
}
logger.Info("completed database migration successfully")
return nil
return RunDatabaseMigration(cmd.Context(), logger, config)
},
}
@@ -62,3 +26,51 @@ func MakeKeyMigrateCommand() *cobra.Command {
return cmd
}
func RunDatabaseMigration(ctx context.Context, logger log.Logger, conf *cfg.Config) error {
contexts := []string{
// this is ordered to put
// the more ephemeral tables first to
// reduce the possibility of the
// ephemeral data overwriting later data
"tx_index",
"peerstore",
"light",
"blockstore",
"state",
"evidence",
}
for idx, dbctx := range contexts {
logger.Info("beginning a key migration",
"dbctx", dbctx,
"num", idx+1,
"total", len(contexts),
)
db, err := cfg.DefaultDBProvider(&cfg.DBContext{
ID: dbctx,
Config: conf,
})
if err != nil {
return fmt.Errorf("constructing database handle: %w", err)
}
if err = keymigrate.Migrate(ctx, db); err != nil {
return fmt.Errorf("running migration for context %q: %w",
dbctx, err)
}
if dbctx == "blockstore" {
if err := scmigrate.Migrate(ctx, db); err != nil {
return fmt.Errorf("running seen commit migration: %w", err)
}
}
}
logger.Info("completed database migration successfully")
return nil
}
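Because the body of the command is now the exported `RunDatabaseMigration`,
other tooling can drive the same migration programmatically. A minimal sketch
of such reuse (a hypothetical helper; it assumes the caller already holds a
parsed config and a logger):

```go
package tools

import (
	"context"
	"fmt"

	"github.com/tendermint/tendermint/cmd/tendermint/commands"
	cfg "github.com/tendermint/tendermint/config"
	"github.com/tendermint/tendermint/libs/log"
)

// migrateKeys runs the key (and seen-commit) migration against the databases
// described by conf, reusing the exported helper from the commands package.
func migrateKeys(ctx context.Context, logger log.Logger, conf *cfg.Config) error {
	if err := commands.RunDatabaseMigration(ctx, logger, conf); err != nil {
		return fmt.Errorf("migrating databases: %w", err)
	}
	logger.Info("database key migration completed")
	return nil
}
```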


@@ -0,0 +1,190 @@
package commands
import (
"os"
"path/filepath"
"github.com/spf13/cobra"
"github.com/tendermint/tendermint/libs/log"
tmos "github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/types"
)
// ResetAllCmd removes the database of this Tendermint core
// instance.
var ResetAllCmd = &cobra.Command{
Use: "unsafe-reset-all",
Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state",
RunE: resetAllCmd,
}
var keepAddrBook bool
// ResetStateCmd removes the database of the specified Tendermint core instance.
var ResetStateCmd = &cobra.Command{
Use: "reset-state",
Short: "Remove all the data and WAL",
RunE: func(cmd *cobra.Command, args []string) error {
config, err := ParseConfig()
if err != nil {
return err
}
return resetState(config.DBDir(), logger, keyType)
},
}
func init() {
ResetAllCmd.Flags().BoolVar(&keepAddrBook, "keep-addr-book", false, "keep the address book intact")
ResetPrivValidatorCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")
}
// ResetPrivValidatorCmd resets the private validator files.
var ResetPrivValidatorCmd = &cobra.Command{
Use: "unsafe-reset-priv-validator",
Short: "(unsafe) Reset this node's validator to genesis state",
RunE: resetPrivValidator,
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetAllCmd(cmd *cobra.Command, args []string) error {
config, err := ParseConfig()
if err != nil {
return err
}
return resetAll(
config.DBDir(),
config.P2P.AddrBookFile(),
config.PrivValidator.KeyFile(),
config.PrivValidator.StateFile(),
logger,
)
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetPrivValidator(cmd *cobra.Command, args []string) error {
config, err := ParseConfig()
if err != nil {
return err
}
return resetFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile(), logger, keyType)
}
// resetAll removes address book files plus all data, and resets the privValidator data.
func resetAll(dbDir, addrBookFile, privValKeyFile, privValStateFile string, logger log.Logger) error {
if keepAddrBook {
logger.Info("The address book remains intact")
} else {
removeAddrBook(addrBookFile, logger)
}
if err := os.RemoveAll(dbDir); err == nil {
logger.Info("Removed all blockchain history", "dir", dbDir)
} else {
logger.Error("Error removing all blockchain history", "dir", dbDir, "err", err)
}
if err := tmos.EnsureDir(dbDir, 0700); err != nil {
logger.Error("unable to recreate dbDir", "err", err)
}
// recreate the dbDir since the privVal state needs to live there
return resetFilePV(privValKeyFile, privValStateFile, logger, keyType)
}
// resetState removes all databases and the WAL but leaves the address book and the private validator state intact.
func resetState(dbDir string, logger log.Logger, keyType string) error {
blockdb := filepath.Join(dbDir, "blockstore.db")
state := filepath.Join(dbDir, "state.db")
wal := filepath.Join(dbDir, "cs.wal")
evidence := filepath.Join(dbDir, "evidence.db")
txIndex := filepath.Join(dbDir, "tx_index.db")
peerstore := filepath.Join(dbDir, "peerstore.db")
if tmos.FileExists(blockdb) {
if err := os.RemoveAll(blockdb); err == nil {
logger.Info("Removed all blockstore.db", "dir", blockdb)
} else {
logger.Error("error removing all blockstore.db", "dir", blockdb, "err", err)
}
}
if tmos.FileExists(state) {
if err := os.RemoveAll(state); err == nil {
logger.Info("Removed all state.db", "dir", state)
} else {
logger.Error("error removing all state.db", "dir", state, "err", err)
}
}
if tmos.FileExists(wal) {
if err := os.RemoveAll(wal); err == nil {
logger.Info("Removed all cs.wal", "dir", wal)
} else {
logger.Error("error removing all cs.wal", "dir", wal, "err", err)
}
}
if tmos.FileExists(evidence) {
if err := os.RemoveAll(evidence); err == nil {
logger.Info("Removed all evidence.db", "dir", evidence)
} else {
logger.Error("error removing all evidence.db", "dir", evidence, "err", err)
}
}
if tmos.FileExists(txIndex) {
if err := os.RemoveAll(txIndex); err == nil {
logger.Info("Removed tx_index.db", "dir", txIndex)
} else {
logger.Error("error removing tx_index.db", "dir", txIndex, "err", err)
}
}
if tmos.FileExists(peerstore) {
if err := os.RemoveAll(peerstore); err == nil {
logger.Info("Removed peerstore.db", "dir", peerstore)
} else {
logger.Error("error removing peerstore.db", "dir", peerstore, "err", err)
}
}
if err := tmos.EnsureDir(dbDir, 0700); err != nil {
logger.Error("unable to recreate dbDir", "err", err)
}
return nil
}
func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger, keyType string) error {
if _, err := os.Stat(privValKeyFile); err == nil {
pv, err := privval.LoadFilePVEmptyState(privValKeyFile, privValStateFile)
if err != nil {
return err
}
pv.Reset()
logger.Info("Reset private validator file to genesis state", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
} else {
pv, err := privval.GenFilePV(privValKeyFile, privValStateFile, keyType)
if err != nil {
return err
}
pv.Save()
logger.Info("Generated private validator file", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
}
return nil
}
func removeAddrBook(addrBookFile string, logger log.Logger) {
if err := os.Remove(addrBookFile); err == nil {
logger.Info("Removed existing address book", "file", addrBookFile)
} else if !os.IsNotExist(err) {
logger.Info("Error removing address book", "file", addrBookFile, "err", err)
}
}
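Taken together, the commands defined here cover different levels of reset; an
illustrative session using the flags declared above:

```shell
# Remove only the databases and WAL; the address book and the private
# validator state are left intact.
tendermint reset-state

# Remove everything (optionally keeping the address book) and reset the
# private validator file to genesis state.
tendermint unsafe-reset-all --keep-addr-book

# Reset only the private validator files.
tendermint unsafe-reset-priv-validator
```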


@@ -1,97 +0,0 @@
package commands
import (
"os"
"github.com/spf13/cobra"
"github.com/tendermint/tendermint/libs/log"
tmos "github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/types"
)
// ResetAllCmd removes the database of this Tendermint core
// instance.
var ResetAllCmd = &cobra.Command{
Use: "unsafe-reset-all",
Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state",
RunE: resetAll,
}
var keepAddrBook bool
func init() {
ResetAllCmd.Flags().BoolVar(&keepAddrBook, "keep-addr-book", false, "keep the address book intact")
ResetPrivValidatorCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")
}
// ResetPrivValidatorCmd resets the private validator files.
var ResetPrivValidatorCmd = &cobra.Command{
Use: "unsafe-reset-priv-validator",
Short: "(unsafe) Reset this node's validator to genesis state",
RunE: resetPrivValidator,
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetAll(cmd *cobra.Command, args []string) error {
return ResetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidator.KeyFile(),
config.PrivValidator.StateFile(), logger)
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetPrivValidator(cmd *cobra.Command, args []string) error {
return resetFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile(), logger)
}
// ResetAll removes address book files plus all data, and resets the privValdiator data.
// Exported so other CLI tools can use it.
func ResetAll(dbDir, addrBookFile, privValKeyFile, privValStateFile string, logger log.Logger) error {
if keepAddrBook {
logger.Info("The address book remains intact")
} else {
removeAddrBook(addrBookFile, logger)
}
if err := os.RemoveAll(dbDir); err == nil {
logger.Info("Removed all blockchain history", "dir", dbDir)
} else {
logger.Error("Error removing all blockchain history", "dir", dbDir, "err", err)
}
// recreate the dbDir since the privVal state needs to live there
if err := tmos.EnsureDir(dbDir, 0700); err != nil {
logger.Error("unable to recreate dbDir", "err", err)
}
return resetFilePV(privValKeyFile, privValStateFile, logger)
}
func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger) error {
if _, err := os.Stat(privValKeyFile); err == nil {
pv, err := privval.LoadFilePVEmptyState(privValKeyFile, privValStateFile)
if err != nil {
return err
}
pv.Reset()
logger.Info("Reset private validator file to genesis state", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
} else {
pv, err := privval.GenFilePV(privValKeyFile, privValStateFile, keyType)
if err != nil {
return err
}
pv.Save()
logger.Info("Generated private validator file", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
}
return nil
}
func removeAddrBook(addrBookFile string, logger log.Logger) {
if err := os.Remove(addrBookFile); err == nil {
logger.Info("Removed existing address book", "file", addrBookFile)
} else if !os.IsNotExist(err) {
logger.Info("Error removing address book", "file", addrBookFile, "err", err)
}
}


@@ -0,0 +1,57 @@
package commands
import (
"path/filepath"
"testing"
"github.com/stretchr/testify/require"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/privval"
)
func Test_ResetAll(t *testing.T) {
config := cfg.TestConfig()
dir := t.TempDir()
config.SetRoot(dir)
cfg.EnsureRoot(dir)
require.NoError(t, initFilesWithConfig(config))
pv, err := privval.LoadFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile())
require.NoError(t, err)
pv.LastSignState.Height = 10
pv.Save()
require.NoError(t, resetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidator.KeyFile(),
config.PrivValidator.StateFile(), logger))
require.DirExists(t, config.DBDir())
require.NoFileExists(t, filepath.Join(config.DBDir(), "block.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "state.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "evidence.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "tx_index.db"))
require.FileExists(t, config.PrivValidator.StateFile())
pv, err = privval.LoadFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile())
require.NoError(t, err)
require.Equal(t, int64(0), pv.LastSignState.Height)
}
func Test_ResetState(t *testing.T) {
config := cfg.TestConfig()
dir := t.TempDir()
config.SetRoot(dir)
cfg.EnsureRoot(dir)
require.NoError(t, initFilesWithConfig(config))
pv, err := privval.LoadFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile())
require.NoError(t, err)
pv.LastSignState.Height = 10
pv.Save()
require.NoError(t, resetState(config.DBDir(), logger, keyType))
require.DirExists(t, config.DBDir())
require.NoFileExists(t, filepath.Join(config.DBDir(), "block.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "state.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "evidence.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "tx_index.db"))
require.FileExists(t, config.PrivValidator.StateFile())
pv, err = privval.LoadFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile())
require.NoError(t, err)
// private validator state should still be intact.
require.Equal(t, int64(10), pv.LastSignState.Height)
}


@@ -23,6 +23,7 @@ func main() {
cmd.ReplayConsoleCmd,
cmd.ResetAllCmd,
cmd.ResetPrivValidatorCmd,
cmd.ResetStateCmd,
cmd.ShowValidatorCmd,
cmd.TestnetFilesCmd,
cmd.ShowNodeIDCmd,
@@ -31,6 +32,7 @@ func main() {
cmd.InspectCmd,
cmd.RollbackStateCmd,
cmd.MakeKeyMigrateCommand(),
cmd.MakeCompactDBCommand(),
debug.DebugCmd,
cli.NewCompletionCmd(rootCmd, true),
)


@@ -712,6 +712,10 @@ type P2PConfig struct { //nolint: maligned
// outbound).
MaxConnections uint16 `mapstructure:"max-connections"`
// MaxOutgoingConnections defines the maximum number of connections reserved for
// outgoing connections. Must not be larger than MaxConnections.
MaxOutgoingConnections uint16 `mapstructure:"max-outgoing-connections"`
// MaxIncomingConnectionAttempts rate limits the number of incoming connection
// attempts per IP address.
MaxIncomingConnectionAttempts uint `mapstructure:"max-incoming-connection-attempts"`
@@ -774,6 +778,7 @@ func DefaultP2PConfig() *P2PConfig {
MaxNumInboundPeers: 40,
MaxNumOutboundPeers: 10,
MaxConnections: 64,
MaxOutgoingConnections: 32,
MaxIncomingConnectionAttempts: 100,
PersistentPeersMaxDialPeriod: 0 * time.Second,
FlushThrottleTimeout: 100 * time.Millisecond,
@@ -833,6 +838,9 @@ func (cfg *P2PConfig) ValidateBasic() error {
if cfg.RecvRate < 0 {
return errors.New("recv-rate can't be negative")
}
if cfg.MaxOutgoingConnections > cfg.MaxConnections {
return errors.New("max-outgoing-connections cannot be larger than max-connections")
}
return nil
}


@@ -355,6 +355,10 @@ max-num-outbound-peers = {{ .P2P.MaxNumOutboundPeers }}
# Maximum number of connections (inbound and outbound).
max-connections = {{ .P2P.MaxConnections }}
# Maximum number of connections reserved for outgoing
# connections. Must not be larger than max-connections.
max-outgoing-connections = {{ .P2P.MaxOutgoingConnections }}
# Rate limits the number of incoming connection attempts per IP address.
max-incoming-connection-attempts = {{ .P2P.MaxIncomingConnectionAttempts }}
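With the defaults above, the rendered `[p2p]` section of `config.toml` comes
out roughly as follows (values taken from `DefaultP2PConfig`):

```toml
[p2p]
# Maximum number of connections (inbound and outbound).
max-connections = 64

# Maximum number of connections reserved for outgoing
# connections. Must not be larger than max-connections.
max-outgoing-connections = 32

# Rate limits the number of incoming connection attempts per IP address.
max-incoming-connection-attempts = 100
```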


@@ -25,19 +25,19 @@ func TestRandom(t *testing.T) {
plaintext := make([]byte, pl)
_, err := crand.Read(key[:])
if err != nil {
t.Errorf("error on read: %w", err)
t.Errorf("error on read: %v", err)
}
_, err = crand.Read(nonce[:])
if err != nil {
t.Errorf("error on read: %w", err)
t.Errorf("error on read: %v", err)
}
_, err = crand.Read(ad)
if err != nil {
t.Errorf("error on read: %w", err)
t.Errorf("error on read: %v", err)
}
_, err = crand.Read(plaintext)
if err != nil {
t.Errorf("error on read: %w", err)
t.Errorf("error on read: %v", err)
}
aead, err := New(key[:])


@@ -44,10 +44,6 @@ module.exports = {
{
title: 'Resources',
children: [
{
title: 'Developer Sessions',
path: '/DEV_SESSIONS.html'
},
{
// TODO(creachadair): Figure out how to make this per-branch.
// See: https://github.com/tendermint/tendermint/issues/7908


@@ -68,10 +68,10 @@ Tendermint is in essence similar software, but with two key differences:
- It is Byzantine Fault Tolerant, meaning it can only tolerate up to a
1/3 of failures, but those failures can include arbitrary behaviour -
including hacking and malicious attacks.
- It does not specify a particular application, like a fancy key-value
store. Instead, it focuses on arbitrary state machine replication,
so developers can build the application logic that's right for them,
including hacking and malicious attacks.
- It does not specify a particular application, like a fancy key-value
store. Instead, it focuses on arbitrary state machine replication,
so developers can build the application logic that's right for them,
from key-value store to cryptocurrency to e-voting platform and beyond.
### Bitcoin, Ethereum, etc
@@ -104,12 +104,10 @@ to Tendermint, but is more opinionated about how the state is managed,
and requires that all application behaviour runs in potentially many
docker containers, modules it calls "chaincode". It uses an
implementation of [PBFT](http://pmg.csail.mit.edu/papers/osdi99.pdf)
from a team at IBM that is [augmented to handle potentially
non-deterministic
chaincode](https://www.zurich.ibm.com/~cca/papers/sieve.pdf) It is
possible to implement this docker-based behaviour as a ABCI app in
Tendermint, though extending Tendermint to handle non-determinism
remains for future work.
from a team at IBM that is augmented to handle potentially non-deterministic
chaincode. It is possible to implement this docker-based behaviour as an ABCI app
in Tendermint, though extending Tendermint to handle non-determinism remains
for future work.
[Burrow](https://github.com/hyperledger/burrow) is an implementation of
the Ethereum Virtual Machine and Ethereum transaction mechanics, with


@@ -18,39 +18,43 @@ Listen address can be changed in the config file (see
The following metrics are available:
| **Name** | **Type** | **Tags** | **Description** |
| -------------------------------------- | --------- | ------------- | ---------------------------------------------------------------------- |
| abci_connection_method_timing | Histogram | method, type | Timings for each of the ABCI methods |
| consensus_height | Gauge | | Height of the chain |
| consensus_validators | Gauge | | Number of validators |
| consensus_validators_power | Gauge | | Total voting power of all validators |
| consensus_validator_power | Gauge | | Voting power of the node if in the validator set |
| consensus_validator_last_signed_height | Gauge | | Last height the node signed a block, if the node is a validator |
| consensus_validator_missed_blocks | Gauge | | Total amount of blocks missed for the node, if the node is a validator |
| consensus_missing_validators | Gauge | | Number of validators who did not sign |
| consensus_missing_validators_power | Gauge | | Total voting power of the missing validators |
| consensus_byzantine_validators | Gauge | | Number of validators who tried to double sign |
| consensus_byzantine_validators_power | Gauge | | Total voting power of the byzantine validators |
| consensus_block_interval_seconds | Histogram | | Time between this and last block (Block.Header.Time) in seconds |
| consensus_rounds | Gauge | | Number of rounds |
| consensus_num_txs | Gauge | | Number of transactions |
| consensus_total_txs | Gauge | | Total number of transactions committed |
| consensus_block_parts | counter | peer_id | number of blockparts transmitted by peer |
| consensus_latest_block_height | gauge | | /status sync_info number |
| consensus_fast_syncing | gauge | | either 0 (not fast syncing) or 1 (syncing) |
| consensus_state_syncing | gauge | | either 0 (not state syncing) or 1 (syncing) |
| consensus_block_size_bytes | Gauge | | Block size in bytes |
| p2p_peers | Gauge | | Number of peers node's connected to |
| p2p_peer_receive_bytes_total | counter | peer_id, chID | number of bytes per channel received from a given peer |
| p2p_peer_send_bytes_total | counter | peer_id, chID | number of bytes per channel sent to a given peer |
| p2p_peer_pending_send_bytes | gauge | peer_id | number of pending bytes to be sent to a given peer |
| p2p_num_txs | gauge | peer_id | number of transactions submitted by each peer_id |
| p2p_pending_send_bytes | gauge | peer_id | amount of data pending to be sent to peer |
| mempool_size | Gauge | | Number of uncommitted transactions |
| mempool_tx_size_bytes | histogram | | transaction sizes in bytes |
| mempool_failed_txs | counter | | number of failed transactions |
| mempool_recheck_times | counter | | number of transactions rechecked in the mempool |
| state_block_processing_time | histogram | | time between BeginBlock and EndBlock in ms |
| **Name** | **Type** | **Tags** | **Description** |
|----------------------------------------|-----------|---------------|-----------------------------------------------------------------------------------------------------------|
| abci_connection_method_timing | Histogram | method, type | Timings for each of the ABCI methods |
| consensus_height | Gauge | | Height of the chain |
| consensus_validators | Gauge | | Number of validators |
| consensus_validators_power | Gauge | | Total voting power of all validators |
| consensus_validator_power | Gauge | | Voting power of the node if in the validator set |
| consensus_validator_last_signed_height | Gauge | | Last height the node signed a block, if the node is a validator |
| consensus_validator_missed_blocks | Gauge | | Total amount of blocks missed for the node, if the node is a validator |
| consensus_missing_validators | Gauge | | Number of validators who did not sign |
| consensus_missing_validators_power | Gauge | | Total voting power of the missing validators |
| consensus_byzantine_validators | Gauge | | Number of validators who tried to double sign |
| consensus_byzantine_validators_power | Gauge | | Total voting power of the byzantine validators |
| consensus_block_interval_seconds | Histogram | | Time between this and last block (Block.Header.Time) in seconds |
| consensus_rounds | Gauge | | Number of rounds |
| consensus_num_txs | Gauge | | Number of transactions |
| consensus_total_txs | Gauge | | Total number of transactions committed |
| consensus_block_parts | counter | peer_id | number of blockparts transmitted by peer |
| consensus_latest_block_height | gauge | | /status sync_info number |
| consensus_fast_syncing | gauge | | either 0 (not fast syncing) or 1 (syncing) |
| consensus_state_syncing | gauge | | either 0 (not state syncing) or 1 (syncing) |
| consensus_block_size_bytes | Gauge | | Block size in bytes |
| p2p_peers | Gauge | | Number of peers node's connected to |
| p2p_peer_receive_bytes_total | Counter | peer_id, chID | number of bytes per channel received from a given peer |
| p2p_peer_send_bytes_total | Counter | peer_id, chID | number of bytes per channel sent to a given peer |
| p2p_peer_pending_send_bytes | Gauge | peer_id | number of pending bytes to be sent to a given peer |
| p2p_router_peer_queue_recv | Histogram | | The time taken to read off of a peer's queue before sending on the connection |
| p2p_router_peer_queue_send | Histogram | | The time taken to send on a peer's queue which will later be sent on the connection |
| p2p_router_channel_queue_send | Histogram | | The time taken to send on a p2p channel's queue which will later be consumed by the corresponding service |
| p2p_router_channel_queue_dropped_msgs | Counter | ch_id | The number of messages dropped from a peer's queue for a specific p2p channel |
| p2p_peer_queue_msg_size | Gauge | ch_id | The size of messages sent over a peer's queue for a specific p2p channel |
| mempool_size | Gauge | | Number of uncommitted transactions |
| mempool_tx_size_bytes | histogram | | transaction sizes in bytes |
| mempool_failed_txs | counter | | number of failed transactions |
| mempool_recheck_times | counter | | number of transactions rechecked in the mempool |
| state_block_processing_time | histogram | | time between BeginBlock and EndBlock in ms |
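For instance, the router queue timings added above can be charted with a
quantile query along these lines (a sketch; it assumes the metrics are exported
under the default `tendermint` namespace):

```
# 95th percentile of the time spent sending on a peer's queue, per instance.
histogram_quantile(0.95,
  sum(rate(tendermint_p2p_router_peer_queue_send_bucket[5m])) by (instance, le))
```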
## Useful queries


@@ -1,3 +1,4 @@
master master
v0.33.x v0.33
v0.34.x v0.34
v0.35.x v0.35

go.mod

@@ -3,11 +3,15 @@ module github.com/tendermint/tendermint
go 1.16
require (
github.com/BurntSushi/toml v1.0.0
github.com/BurntSushi/toml v1.1.0
github.com/Workiva/go-datastructures v1.0.53
github.com/adlio/schema v1.2.3
github.com/btcsuite/btcd v0.22.0-beta
github.com/adlio/schema v1.3.3
github.com/btcsuite/btcd v0.22.1
github.com/btcsuite/btcutil v1.0.3-0.20201208143702-a53e38424cce
github.com/cenkalti/backoff v2.2.1+incompatible // indirect
github.com/creachadair/atomicfile v0.2.6
github.com/creachadair/taskgroup v0.3.2
github.com/creachadair/tomledit v0.0.22
github.com/facebookgo/ensure v0.0.0-20160127193407-b4ab57deab51 // indirect
github.com/facebookgo/stack v0.0.0-20160209184415-751773369052 // indirect
github.com/facebookgo/subset v0.0.0-20150612182917-8dac2c3c4870 // indirect
@@ -15,33 +19,37 @@ require (
github.com/go-kit/kit v0.12.0
github.com/gogo/protobuf v1.3.2
github.com/golang/protobuf v1.5.2
github.com/golangci/golangci-lint v1.44.2
github.com/golangci/golangci-lint v1.46.0
github.com/google/go-cmp v0.5.8
github.com/google/orderedcode v0.0.1
github.com/google/uuid v1.3.0
github.com/gorilla/websocket v1.5.0
github.com/gotestyourself/gotestyourself v2.2.0+incompatible // indirect
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0
github.com/lib/pq v1.10.4
github.com/lib/pq v1.10.6
github.com/libp2p/go-buffer-pool v0.0.2
github.com/minio/highwayhash v1.0.2
github.com/mroth/weightedrand v0.4.1
github.com/oasisprotocol/curve25519-voi v0.0.0-20210609091139-0a56a4bca00b
github.com/ory/dockertest v3.3.5+incompatible
github.com/prometheus/client_golang v1.12.1
github.com/prometheus/client_golang v1.12.2
github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0
github.com/rs/cors v1.8.2
github.com/rs/zerolog v1.26.1
github.com/rs/zerolog v1.27.0
github.com/sasha-s/go-deadlock v0.2.1-0.20190427202633-1595213edefa
github.com/snikch/goodman v0.0.0-20171125024755-10e37e294daa
github.com/spf13/cobra v1.3.0
github.com/spf13/viper v1.10.1
github.com/stretchr/testify v1.7.0
github.com/spf13/cobra v1.5.0
github.com/spf13/viper v1.12.0
github.com/stretchr/testify v1.7.2
github.com/syndtr/goleveldb v1.0.1-0.20200815110645-5c35d600f0ca
github.com/tendermint/tm-db v0.6.6
github.com/vektra/mockery/v2 v2.10.0
golang.org/x/crypto v0.0.0-20220112180741-5e0467b6c7ce
golang.org/x/net v0.0.0-20211208012354-db4efeb81f4b
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
google.golang.org/grpc v1.44.0
github.com/vektra/mockery/v2 v2.13.1
golang.org/x/crypto v0.0.0-20220525230936-793ad666bf5e
golang.org/x/net v0.0.0-20220617184016-355a448f1bc9
golang.org/x/sync v0.0.0-20220513210516-0976fa681c29
google.golang.org/grpc v1.47.0
gopkg.in/check.v1 v1.0.0-20200902074654-038fdea0a05b // indirect
gotest.tools v2.2.0+incompatible // indirect
pgregory.net/rapid v0.4.7
)

go.sum

File diff suppressed because it is too large


@@ -544,8 +544,15 @@ FOR_LOOP:
// first.Hash() doesn't verify the tx contents, so MakePartSet() is
// currently necessary.
err := state.Validators.VerifyCommitLight(chainID, firstID, first.Height, second.LastCommit)
if err == nil {
// validate the block before we persist it
err = r.blockExec.ValidateBlock(state, first)
}
// If either of the checks failed, we log the error and request a new block
// at that height
if err != nil {
err = fmt.Errorf("invalid last commit: %w", err)
r.Logger.Error(
err.Error(),
"last_commit", second.LastCommit,
@@ -570,37 +577,34 @@ FOR_LOOP:
}
continue FOR_LOOP
} else {
r.pool.PopRequest()
}
// TODO: batch saves so we do not persist to disk every block
r.store.SaveBlock(first, firstParts, second.LastCommit)
r.pool.PopRequest()
var err error
// TODO: batch saves so we do not persist to disk every block
r.store.SaveBlock(first, firstParts, second.LastCommit)
// TODO: Same thing for app - but we would need a way to get the hash
// without persisting the state.
state, err = r.blockExec.ApplyBlock(state, firstID, first)
if err != nil {
// TODO: This is bad, are we zombie?
panic(fmt.Sprintf("failed to process committed block (%d:%X): %v", first.Height, first.Hash(), err))
}
// TODO: Same thing for app - but we would need a way to get the hash
// without persisting the state.
state, err = r.blockExec.ApplyBlock(state, firstID, first)
if err != nil {
panic(fmt.Sprintf("failed to process committed block (%d:%X): %v", first.Height, first.Hash(), err))
}
r.metrics.RecordConsMetrics(first)
r.metrics.RecordConsMetrics(first)
blocksSynced++
blocksSynced++
if blocksSynced%100 == 0 {
lastRate = 0.9*lastRate + 0.1*(100/time.Since(lastHundred).Seconds())
r.Logger.Info(
"block sync rate",
"height", r.pool.height,
"max_peer_height", r.pool.MaxPeerHeight(),
"blocks/s", lastRate,
)
if blocksSynced%100 == 0 {
lastRate = 0.9*lastRate + 0.1*(100/time.Since(lastHundred).Seconds())
r.Logger.Info(
"block sync rate",
"height", r.pool.height,
"max_peer_height", r.pool.MaxPeerHeight(),
"blocks/s", lastRate,
)
lastHundred = time.Now()
}
lastHundred = time.Now()
}
continue FOR_LOOP


@@ -271,7 +271,7 @@ func signAddVotes(
}
func validatePrevote(t *testing.T, cs *State, round int32, privVal *validatorStub, blockHash []byte) {
prevotes := cs.Votes.Prevotes(round)
prevotes := cs.GetRoundState().Votes.Prevotes(round)
pubKey, err := privVal.GetPubKey(context.Background())
require.NoError(t, err)
address := pubKey.Address()
@@ -291,7 +291,7 @@ func validatePrevote(t *testing.T, cs *State, round int32, privVal *validatorStu
}
func validateLastPrecommit(t *testing.T, cs *State, privVal *validatorStub, blockHash []byte) {
votes := cs.LastCommit
votes := cs.GetRoundState().LastCommit
pv, err := privVal.GetPubKey(context.Background())
require.NoError(t, err)
address := pv.Address()
@@ -663,6 +663,39 @@ func ensurePrevote(voteCh <-chan tmpubsub.Message, height int64, round int32) {
ensureVote(voteCh, height, round, tmproto.PrevoteType)
}
func ensurePrevoteMatch(t *testing.T, voteCh <-chan tmpubsub.Message, height int64, round int32, hash []byte) {
t.Helper()
ensureVoteMatch(t, voteCh, height, round, hash, tmproto.PrevoteType)
}
func ensurePrecommitMatch(t *testing.T, voteCh <-chan tmpubsub.Message, height int64, round int32, hash []byte) {
t.Helper()
ensureVoteMatch(t, voteCh, height, round, hash, tmproto.PrecommitType)
}
func ensureVoteMatch(t *testing.T, voteCh <-chan tmpubsub.Message, height int64, round int32, hash []byte, voteType tmproto.SignedMsgType) {
t.Helper()
select {
case <-time.After(ensureTimeout):
t.Fatal("Timeout expired while waiting for NewVote event")
case msg := <-voteCh:
voteEvent, ok := msg.Data().(types.EventDataVote)
require.True(t, ok, "expected a EventDataVote, got %T. Wrong subscription channel?",
msg.Data())
vote := voteEvent.Vote
require.Equal(t, height, vote.Height)
require.Equal(t, round, vote.Round)
require.Equal(t, voteType, vote.Type)
if hash == nil {
require.Nil(t, vote.BlockID.Hash, "Expected prevote to be for nil, got %X", vote.BlockID.Hash)
} else {
require.True(t, bytes.Equal(vote.BlockID.Hash, hash), "Expected prevote to be for %X, got %X", hash, vote.BlockID.Hash)
}
}
}
func ensureVote(voteCh <-chan tmpubsub.Message, height int64, round int32,
voteType tmproto.SignedMsgType) {
select {


@@ -26,3 +26,18 @@ func (_m *ConsSyncReactor) SetStateSyncingMetrics(_a0 float64) {
func (_m *ConsSyncReactor) SwitchToConsensus(_a0 state.State, _a1 bool) {
_m.Called(_a0, _a1)
}
type NewConsSyncReactorT interface {
mock.TestingT
Cleanup(func())
}
// NewConsSyncReactor creates a new instance of ConsSyncReactor. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewConsSyncReactor(t NewConsSyncReactorT) *ConsSyncReactor {
mock := &ConsSyncReactor{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}


@@ -131,6 +131,7 @@ type Reactor struct {
mtx sync.RWMutex
peers map[types.NodeID]*PeerState
waitSync bool
rs *cstypes.RoundState
readySignal chan struct{} // closed when the node is ready to start consensus
stateCh *p2p.Channel
@@ -161,6 +162,7 @@ func NewReactor(
r := &Reactor{
state: cs,
waitSync: waitSync,
rs: cs.GetRoundState(),
peers: make(map[types.NodeID]*PeerState),
Metrics: NopMetrics(),
stateCh: stateCh,
@@ -198,6 +200,7 @@ func (r *Reactor) OnStart() error {
go r.peerStatsRoutine()
r.subscribeToBroadcastEvents()
go r.updateRoundStateRoutine()
if !r.WaitSync() {
if err := r.state.Start(); err != nil {
@@ -443,7 +446,7 @@ func makeRoundStepMessage(rs *cstypes.RoundState) *tmcons.NewRoundStep {
}
func (r *Reactor) sendNewRoundStepMessage(peerID types.NodeID) {
rs := r.state.GetRoundState()
rs := r.getRoundState()
msg := makeRoundStepMessage(rs)
select {
case <-r.closeCh:
@@ -455,6 +458,26 @@ func (r *Reactor) sendNewRoundStepMessage(peerID types.NodeID) {
}
}
func (r *Reactor) updateRoundStateRoutine() {
t := time.NewTicker(100 * time.Microsecond)
defer t.Stop()
for range t.C {
if !r.IsRunning() {
return
}
rs := r.state.GetRoundState()
r.mtx.Lock()
r.rs = rs
r.mtx.Unlock()
}
}
func (r *Reactor) getRoundState() *cstypes.RoundState {
r.mtx.RLock()
defer r.mtx.RUnlock()
return r.rs
}
func (r *Reactor) gossipDataForCatchup(rs *cstypes.RoundState, prs *cstypes.PeerRoundState, ps *PeerState) {
logger := r.Logger.With("height", prs.Height).With("peer", ps.peerID)
@@ -521,6 +544,8 @@ func (r *Reactor) gossipDataForCatchup(rs *cstypes.RoundState, prs *cstypes.Peer
func (r *Reactor) gossipDataRoutine(ps *PeerState) {
logger := r.Logger.With("peer", ps.peerID)
timer := time.NewTimer(r.state.config.PeerGossipSleepDuration)
defer timer.Stop()
OUTER_LOOP:
for {
@@ -528,6 +553,8 @@ OUTER_LOOP:
return
}
timer.Reset(r.state.config.PeerGossipSleepDuration)
select {
case <-r.closeCh:
return
@@ -535,11 +562,10 @@ OUTER_LOOP:
// The peer is marked for removal via a PeerUpdate as the doneCh was
// explicitly closed to signal we should exit.
return
default:
case <-timer.C:
}
rs := r.state.GetRoundState()
rs := r.getRoundState()
prs := ps.GetRoundState()
// Send proposal Block parts?
@@ -582,7 +608,6 @@ OUTER_LOOP:
"blockstoreBase", blockStoreBase,
"blockstoreHeight", r.state.blockStore.Height(),
)
time.Sleep(r.state.config.PeerGossipSleepDuration)
} else {
ps.InitProposalBlockParts(blockMeta.BlockID.PartSetHeader)
}
@@ -598,7 +623,6 @@ OUTER_LOOP:
// if height and round don't match, sleep
if (rs.Height != prs.Height) || (rs.Round != prs.Round) {
time.Sleep(r.state.config.PeerGossipSleepDuration)
continue OUTER_LOOP
}
@@ -653,12 +677,8 @@ OUTER_LOOP:
}:
}
}
continue OUTER_LOOP
}
// nothing to do -- sleep
time.Sleep(r.state.config.PeerGossipSleepDuration)
continue OUTER_LOOP
}
}
@@ -766,7 +786,7 @@ OUTER_LOOP:
default:
}
rs := r.state.GetRoundState()
rs := r.getRoundState()
prs := ps.GetRoundState()
switch logThrottle {
@@ -848,7 +868,7 @@ func (r *Reactor) queryMaj23Routine(ps *PeerState) {
return
}
rs := r.state.GetRoundState()
rs := r.getRoundState()
prs := ps.GetRoundState()
// TODO create more reliable copies of these
// structures so the following go routines don't race


@@ -343,6 +343,15 @@ func (cs *State) OnStart() error {
}
}
// we need the timeoutRoutine for replay so
// we don't block on the tick chan.
// NOTE: we will get a build up of garbage go routines
// firing on the tockChan until the receiveRoutine is started
// to deal with them (by that point, at most one will be valid)
if err := cs.timeoutTicker.Start(); err != nil {
return err
}
// We may have lost some votes if the process crashed reload from consensus
// log to catchup.
if cs.doWALCatchup {
@@ -399,15 +408,6 @@ func (cs *State) OnStart() error {
return err
}
// we need the timeoutRoutine for replay so
// we don't block on the tick chan.
// NOTE: we will get a build up of garbage go routines
// firing on the tockChan until the receiveRoutine is started
// to deal with them (by that point, at most one will be valid)
if err := cs.timeoutTicker.Start(); err != nil {
return err
}
// Double Signing Risk Reduction
if err := cs.checkDoubleSigningRisk(cs.Height); err != nil {
return err
@@ -864,7 +864,6 @@ func (cs *State) receiveRoutine(maxSteps int) {
func (cs *State) handleMsg(mi msgInfo) {
cs.mtx.Lock()
defer cs.mtx.Unlock()
var (
added bool
err error
@@ -881,6 +880,24 @@ func (cs *State) handleMsg(mi msgInfo) {
case *BlockPartMessage:
// if the proposal is complete, we'll enterPrevote or tryFinalizeCommit
added, err = cs.addProposalBlockPart(msg, peerID)
// We unlock here to yield to any routines that need to read the RoundState.
// Previously, this code held the lock from the point at which the final block
// part was received until the block executed against the application.
// This prevented the reactor from being able to retrieve the most updated
// version of the RoundState. The reactor needs the updated RoundState to
// gossip the now completed block.
//
// This code can be further improved by either always operating on a copy
// of RoundState and only locking when switching out State's copy of
// RoundState with the updated copy or by emitting RoundState events in
// more places for routines depending on it to listen for.
cs.mtx.Unlock()
cs.mtx.Lock()
if added && cs.ProposalBlockParts.IsComplete() {
cs.handleCompleteProposal(msg.Height)
}
if added {
cs.statsMsgQueue <- mi
}
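The comment above describes releasing the consensus mutex for an instant so goroutines blocked on it (the gossip routines reading the RoundState) can observe the completed block before block execution continues under the lock. A generic, hedged sketch of that yield pattern with hypothetical names:

package example

import "sync"

type state struct {
	mtx  sync.Mutex
	data int
}

// handle performs a cheap update, then briefly drops and re-acquires the
// mutex so waiting readers get a chance to see the update before the slower
// follow-up work runs with the lock held again.
func (s *state) handle(slowFollowUp func()) {
	s.mtx.Lock()
	defer s.mtx.Unlock()

	s.data++ // the update readers are waiting on

	// Yield point: identical in spirit to the Unlock/Lock pair above.
	s.mtx.Unlock()
	s.mtx.Lock()

	slowFollowUp()
}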
@@ -1941,44 +1958,43 @@ func (cs *State) addProposalBlockPart(msg *BlockPartMessage, peerID types.NodeID
if err := cs.eventBus.PublishEventCompleteProposal(cs.CompleteProposalEvent()); err != nil {
cs.Logger.Error("failed publishing event complete proposal", "err", err)
}
}
return added, nil
}
// Update Valid* if we can.
prevotes := cs.Votes.Prevotes(cs.Round)
blockID, hasTwoThirds := prevotes.TwoThirdsMajority()
if hasTwoThirds && !blockID.IsZero() && (cs.ValidRound < cs.Round) {
if cs.ProposalBlock.HashesTo(blockID.Hash) {
cs.Logger.Debug(
"updating valid block to new proposal block",
"valid_round", cs.Round,
"valid_block_hash", cs.ProposalBlock.Hash(),
)
func (cs *State) handleCompleteProposal(blockHeight int64) {
// Update Valid* if we can.
prevotes := cs.Votes.Prevotes(cs.Round)
blockID, hasTwoThirds := prevotes.TwoThirdsMajority()
if hasTwoThirds && !blockID.IsZero() && (cs.ValidRound < cs.Round) {
if cs.ProposalBlock.HashesTo(blockID.Hash) {
cs.Logger.Debug(
"updating valid block to new proposal block",
"valid_round", cs.Round,
"valid_block_hash", cs.ProposalBlock.Hash(),
)
cs.ValidRound = cs.Round
cs.ValidBlock = cs.ProposalBlock
cs.ValidBlockParts = cs.ProposalBlockParts
}
// TODO: In case there is +2/3 majority in Prevotes set for some
// block and cs.ProposalBlock contains different block, either
// proposer is faulty or voting power of faulty processes is more
// than 1/3. In the future we should trigger an accountability
// procedure at this point.
cs.ValidRound = cs.Round
cs.ValidBlock = cs.ProposalBlock
cs.ValidBlockParts = cs.ProposalBlockParts
}
if cs.Step <= cstypes.RoundStepPropose && cs.isProposalComplete() {
// Move onto the next step
cs.enterPrevote(height, cs.Round)
if hasTwoThirds { // this is an optimisation, as this will be triggered when the prevote is added
cs.enterPrecommit(height, cs.Round)
}
} else if cs.Step == cstypes.RoundStepCommit {
// If we're waiting on the proposal block...
cs.tryFinalizeCommit(height)
}
return added, nil
// TODO: In case there is +2/3 majority in Prevotes set for some
// block and cs.ProposalBlock contains different block, either
// proposer is faulty or voting power of faulty processes is more
// than 1/3. In the future we should trigger an accountability
// procedure at this point.
}
return added, nil
if cs.Step <= cstypes.RoundStepPropose && cs.isProposalComplete() {
// Move onto the next step
cs.enterPrevote(blockHeight, cs.Round)
if hasTwoThirds { // this is an optimisation, as this will be triggered when the prevote is added
cs.enterPrecommit(blockHeight, cs.Round)
}
} else if cs.Step == cstypes.RoundStepCommit {
// If we're waiting on the proposal block...
cs.tryFinalizeCommit(blockHeight)
}
}
// Attempt to add the vote. if its a duplicate signature, dupeout the validator

View File

@@ -243,8 +243,7 @@ func TestStateBadProposal(t *testing.T) {
ensureProposal(proposalCh, height, round, blockID)
// wait for prevote
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], nil)
ensurePrevoteMatch(t, voteCh, height, round, nil)
// add bad prevote from vs2 and wait for it
signAddVotes(config, cs1, tmproto.PrevoteType, propBlock.Hash(), propBlock.MakePartSet(partSize).Header(), vs2)
@@ -308,8 +307,7 @@ func TestStateOversizedBlock(t *testing.T) {
// and then should send nil prevote and precommit regardless of whether other validators prevote and
// precommit on it
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], nil)
ensurePrevoteMatch(t, voteCh, height, round, nil)
signAddVotes(config, cs1, tmproto.PrevoteType, propBlock.Hash(), propBlock.MakePartSet(partSize).Header(), vs2)
ensurePrevote(voteCh, height, round)
ensurePrecommit(voteCh, height, round)
@@ -352,8 +350,7 @@ func TestStateFullRound1(t *testing.T) {
ensureNewProposal(propCh, height, round)
propBlockHash := cs.GetRoundState().ProposalBlock.Hash()
ensurePrevote(voteCh, height, round) // wait for prevote
validatePrevote(t, cs, round, vss[0], propBlockHash)
ensurePrevoteMatch(t, voteCh, height, round, propBlockHash)
ensurePrecommit(voteCh, height, round) // wait for precommit
@@ -376,8 +373,8 @@ func TestStateFullRoundNil(t *testing.T) {
cs.enterPrevote(height, round)
cs.startRoutines(4)
ensurePrevote(voteCh, height, round) // prevote
ensurePrecommit(voteCh, height, round) // precommit
ensurePrevoteMatch(t, voteCh, height, round, nil) // prevote
ensurePrecommitMatch(t, voteCh, height, round, nil) // precommit
// should prevote and precommit nil
validatePrevoteAndPrecommit(t, cs, round, -1, vss[0], nil, nil)
@@ -502,10 +499,8 @@ func TestStateLockNoPOL(t *testing.T) {
panic("Expected proposal block to be nil")
}
// wait to finish prevote
ensurePrevote(voteCh, height, round)
// we should have prevoted our locked block
validatePrevote(t, cs1, round, vss[0], rs.LockedBlock.Hash())
// wait to finish prevote and ensure we have prevoted our locked block
ensurePrevoteMatch(t, voteCh, height, round, rs.LockedBlock.Hash())
// add a conflicting prevote from the other validator
signAddVotes(config, cs1, tmproto.PrevoteType, hash, rs.LockedBlock.MakePartSet(partSize).Header(), vs2)
@@ -548,8 +543,7 @@ func TestStateLockNoPOL(t *testing.T) {
rs.LockedBlock))
}
ensurePrevote(voteCh, height, round) // prevote
validatePrevote(t, cs1, round, vss[0], rs.LockedBlock.Hash())
ensurePrevoteMatch(t, voteCh, height, round, rs.LockedBlock.Hash())
signAddVotes(config, cs1, tmproto.PrevoteType, hash, rs.ProposalBlock.MakePartSet(partSize).Header(), vs2)
ensurePrevote(voteCh, height, round)
@@ -594,9 +588,8 @@ func TestStateLockNoPOL(t *testing.T) {
}
ensureNewProposal(proposalCh, height, round)
ensurePrevote(voteCh, height, round) // prevote
// prevote for locked block (not proposal)
validatePrevote(t, cs1, 3, vss[0], cs1.LockedBlock.Hash())
ensurePrevoteMatch(t, voteCh, height, round, cs1.LockedBlock.Hash())
// prevote for proposed block
signAddVotes(config, cs1, tmproto.PrevoteType, propBlock.Hash(), propBlock.MakePartSet(partSize).Header(), vs2)
@@ -704,8 +697,7 @@ func TestStateLockPOLRelock(t *testing.T) {
ensureNewProposal(proposalCh, height, round)
// go to prevote, node should prevote for locked block (not the new proposal) - this is relocking
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], theBlockHash)
ensurePrevoteMatch(t, voteCh, height, round, theBlockHash)
// now lets add prevotes from everyone else for the new block
signAddVotes(config, cs1, tmproto.PrevoteType, propBlockHash, propBlockParts.Header(), vs2, vs3, vs4)
@@ -757,8 +749,7 @@ func TestStateLockPOLUnlock(t *testing.T) {
theBlockHash := rs.ProposalBlock.Hash()
theBlockParts := rs.ProposalBlockParts.Header()
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], theBlockHash)
ensurePrevoteMatch(t, voteCh, height, round, theBlockHash)
signAddVotes(config, cs1, tmproto.PrevoteType, theBlockHash, theBlockParts, vs2, vs3, vs4)
@@ -796,8 +787,7 @@ func TestStateLockPOLUnlock(t *testing.T) {
ensureNewProposal(proposalCh, height, round)
// go to prevote, prevote for locked block (not proposal)
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], lockedBlockHash)
ensurePrevoteMatch(t, voteCh, height, round, lockedBlockHash)
// now lets add prevotes from everyone else for nil (a polka!)
signAddVotes(config, cs1, tmproto.PrevoteType, nil, types.PartSetHeader{}, vs2, vs3, vs4)
@@ -888,8 +878,7 @@ func TestStateLockPOLUnlockOnUnknownBlock(t *testing.T) {
// now we're on a new round but v1 misses the proposal
// go to prevote, node should prevote for locked block (not the new proposal) - this is relocking
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], firstBlockHash)
ensurePrevoteMatch(t, voteCh, height, round, firstBlockHash)
// now lets add prevotes from everyone else for the new block
signAddVotes(config, cs1, tmproto.PrevoteType, secondBlockHash, secondBlockParts.Header(), vs2, vs3, vs4)
@@ -933,9 +922,7 @@ func TestStateLockPOLUnlockOnUnknownBlock(t *testing.T) {
t.Fatal(err)
}
ensurePrevote(voteCh, height, round)
// we are no longer locked to the first block so we should be able to prevote
validatePrevote(t, cs1, round, vss[0], thirdPropBlockHash)
ensurePrevoteMatch(t, voteCh, height, round, thirdPropBlockHash)
signAddVotes(config, cs1, tmproto.PrevoteType, thirdPropBlockHash, thirdPropBlockParts.Header(), vs2, vs3, vs4)
@@ -975,8 +962,7 @@ func TestStateLockPOLSafety1(t *testing.T) {
rs := cs1.GetRoundState()
propBlock := rs.ProposalBlock
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], propBlock.Hash())
ensurePrevoteMatch(t, voteCh, height, round, propBlock.Hash())
// the others sign a polka but we don't see it
prevotes := signVotes(config, tmproto.PrevoteType,
@@ -1022,8 +1008,7 @@ func TestStateLockPOLSafety1(t *testing.T) {
t.Logf("new prop hash %v", fmt.Sprintf("%X", propBlockHash))
// go to prevote, prevote for proposal block
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], propBlockHash)
ensurePrevoteMatch(t, voteCh, height, round, propBlockHash)
// now we see the others prevote for it, so we should lock on it
signAddVotes(config, cs1, tmproto.PrevoteType, propBlockHash, propBlockParts.Header(), vs2, vs3, vs4)
@@ -1049,10 +1034,8 @@ func TestStateLockPOLSafety1(t *testing.T) {
// timeout of propose
ensureNewTimeout(timeoutProposeCh, height, round, cs1.config.Propose(round).Nanoseconds())
// finish prevote
ensurePrevote(voteCh, height, round)
// we should prevote what we're locked on
validatePrevote(t, cs1, round, vss[0], propBlockHash)
// finish prevote and vote for the block we're locked on
ensurePrevoteMatch(t, voteCh, height, round, propBlockHash)
newStepCh := subscribe(cs1.eventBus, types.EventQueryNewRoundStep)
@@ -1119,8 +1102,7 @@ func TestStateLockPOLSafety2(t *testing.T) {
}
ensureNewProposal(proposalCh, height, round)
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], propBlockHash1)
ensurePrevoteMatch(t, voteCh, height, round, propBlockHash1)
signAddVotes(config, cs1, tmproto.PrevoteType, propBlockHash1, propBlockParts1.Header(), vs2, vs3, vs4)
@@ -1162,9 +1144,7 @@ func TestStateLockPOLSafety2(t *testing.T) {
ensureNewProposal(proposalCh, height, round)
ensureNoNewUnlock(unlockCh)
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], propBlockHash1)
ensurePrevoteMatch(t, voteCh, height, round, propBlockHash1)
}
// 4 vals.
@@ -1201,8 +1181,7 @@ func TestProposeValidBlock(t *testing.T) {
propBlock := rs.ProposalBlock
propBlockHash := propBlock.Hash()
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], propBlockHash)
ensurePrevoteMatch(t, voteCh, height, round, propBlockHash)
// the others sign a polka
signAddVotes(config, cs1, tmproto.PrevoteType, propBlockHash, propBlock.MakePartSet(partSize).Header(), vs2, vs3, vs4)
@@ -1225,8 +1204,7 @@ func TestProposeValidBlock(t *testing.T) {
// timeout of propose
ensureNewTimeout(timeoutProposeCh, height, round, cs1.config.Propose(round).Nanoseconds())
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], propBlockHash)
ensurePrevoteMatch(t, voteCh, height, round, propBlockHash)
signAddVotes(config, cs1, tmproto.PrevoteType, nil, types.PartSetHeader{}, vs2, vs3, vs4)
@@ -1294,8 +1272,7 @@ func TestSetValidBlockOnDelayedPrevote(t *testing.T) {
propBlockHash := propBlock.Hash()
propBlockParts := propBlock.MakePartSet(partSize)
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], propBlockHash)
ensurePrevoteMatch(t, voteCh, height, round, propBlockHash)
// vs2 send prevote for propBlock
signAddVotes(config, cs1, tmproto.PrevoteType, propBlockHash, propBlockParts.Header(), vs2)
@@ -1358,8 +1335,7 @@ func TestSetValidBlockOnDelayedProposal(t *testing.T) {
ensureNewTimeout(timeoutProposeCh, height, round, cs1.config.Propose(round).Nanoseconds())
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], nil)
ensurePrevoteMatch(t, voteCh, height, round, nil)
prop, propBlock := decideProposal(cs1, vs2, vs2.Height, vs2.Round+1)
propBlockHash := propBlock.Hash()
@@ -1445,8 +1421,7 @@ func TestWaitingTimeoutProposeOnNewRound(t *testing.T) {
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Propose(round).Nanoseconds())
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], nil)
ensurePrevoteMatch(t, voteCh, height, round, nil)
}
// 4 vals, 3 Precommits for nil from the higher round.
@@ -1515,8 +1490,7 @@ func TestWaitTimeoutProposeOnNilPolkaForTheCurrentRound(t *testing.T) {
ensureNewTimeout(timeoutProposeCh, height, round, cs1.config.Propose(round).Nanoseconds())
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], nil)
ensurePrevoteMatch(t, voteCh, height, round, nil)
}
// What we want:
@@ -1645,8 +1619,7 @@ func TestStartNextHeightCorrectlyAfterTimeout(t *testing.T) {
theBlockHash := rs.ProposalBlock.Hash()
theBlockParts := rs.ProposalBlockParts.Header()
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], theBlockHash)
ensurePrevoteMatch(t, voteCh, height, round, theBlockHash)
signAddVotes(config, cs1, tmproto.PrevoteType, theBlockHash, theBlockParts, vs2, vs3, vs4)
@@ -1708,8 +1681,7 @@ func TestResetTimeoutPrecommitUponNewHeight(t *testing.T) {
theBlockHash := rs.ProposalBlock.Hash()
theBlockParts := rs.ProposalBlockParts.Header()
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], theBlockHash)
ensurePrevoteMatch(t, voteCh, height, round, theBlockHash)
signAddVotes(config, cs1, tmproto.PrevoteType, theBlockHash, theBlockParts, vs2, vs3, vs4)
@@ -1881,8 +1853,7 @@ func TestStateHalt1(t *testing.T) {
*/
// go to prevote, prevote for locked block
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], rs.LockedBlock.Hash())
ensurePrevoteMatch(t, voteCh, height, round, rs.LockedBlock.Hash())
// now we receive the precommit from the previous round
addVotes(cs1, precommit4)
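The test updates above collapse each ensurePrevote/validatePrevote pair into a single ensurePrevoteMatch call that both waits for the vote event and checks the block hash it carries. The real helper lives in the consensus test files; the following is only a hypothetical sketch of a helper with the same shape, using stand-in types rather than the actual test plumbing:

package example

import (
	"bytes"
	"testing"
	"time"
)

// vote stands in for the consensus vote event delivered on the test channel.
type vote struct {
	Height    int64
	Round     int32
	BlockHash []byte
}

// ensureVoteMatch waits for a vote and asserts its height, round, and block
// hash (a nil hash means a vote for nil) in one step.
func ensureVoteMatch(t *testing.T, ch <-chan vote, height int64, round int32, hash []byte) {
	t.Helper()
	select {
	case v := <-ch:
		if v.Height != height || v.Round != round {
			t.Fatalf("got vote for %d/%d, expected %d/%d", v.Height, v.Round, height, round)
		}
		if !bytes.Equal(v.BlockHash, hash) {
			t.Fatalf("got vote for block %X, expected %X", v.BlockHash, hash)
		}
	case <-time.After(time.Second):
		t.Fatal("timed out waiting for vote")
	}
}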

View File

@@ -119,9 +119,9 @@ func (t *timeoutTicker) timeoutRoutine() {
// NOTE time.Timer allows duration to be non-positive
ti = newti
t.timer.Reset(ti.Duration)
t.Logger.Debug("Scheduled timeout", "dur", ti.Duration, "height", ti.Height, "round", ti.Round, "step", ti.Step)
t.Logger.Debug("Internal state machine timeout scheduled", "duration", ti.Duration, "height", ti.Height, "round", ti.Round, "step", ti.Step)
case <-t.timer.C:
t.Logger.Info("Timed out", "dur", ti.Duration, "height", ti.Height, "round", ti.Round, "step", ti.Step)
t.Logger.Debug("Internal state machine timeout elapsed ", "duration", ti.Duration, "height", ti.Height, "round", ti.Round, "step", ti.Step)
// go routine here guarantees timeoutRoutine doesn't block.
// Determinism comes from playback in the receiveRoutine.
// We can eliminate it by merging the timeoutRoutine into receiveRoutine

View File

@@ -57,3 +57,18 @@ func (_m *BlockStore) LoadBlockMeta(height int64) *types.BlockMeta {
return r0
}
type NewBlockStoreT interface {
mock.TestingT
Cleanup(func())
}
// NewBlockStore creates a new instance of BlockStore. It also registers a testing interface on the mock and a cleanup function to assert the mock's expectations.
func NewBlockStore(t NewBlockStoreT) *BlockStore {
mock := &BlockStore{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
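The generated constructor above ties the mock to the test lifecycle, so AssertExpectations runs automatically via t.Cleanup. A hedged usage sketch, assuming the mocks import path (illustrative) and the LoadBlockMeta signature shown above:

package example

import (
	"testing"

	"github.com/tendermint/tendermint/internal/state/mocks" // illustrative path
	"github.com/tendermint/tendermint/types"
)

func TestBlockStoreMock(t *testing.T) {
	// NewBlockStore registers a cleanup that asserts every expectation was met.
	bs := mocks.NewBlockStore(t)
	bs.On("LoadBlockMeta", int64(1)).Return(&types.BlockMeta{})

	if meta := bs.LoadBlockMeta(1); meta == nil {
		t.Fatal("expected a block meta")
	}
	// No explicit bs.AssertExpectations(t) call is needed.
}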

View File

@@ -242,17 +242,13 @@ func (mem *CListMempool) CheckTx(
// so we only record the sender for txs still in the mempool.
if e, ok := mem.txsMap.Load(tx.Key()); ok {
memTx := e.(*clist.CElement).Value.(*mempoolTx)
_, loaded := memTx.senders.LoadOrStore(txInfo.SenderID, true)
memTx.senders.LoadOrStore(txInfo.SenderID, true)
// TODO: consider punishing peer for dups,
// it's non-trivial since invalid txs can become valid,
// but they can spam the same tx with little cost to them atm.
if loaded {
return types.ErrTxInCache
}
}
mem.logger.Debug("tx exists already in cache", "tx_hash", tx.Hash())
return nil
return types.ErrTxInCache
}
if ctx == nil {
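With this change CheckTx reports a duplicate by returning the error rather than silently succeeding, so callers that may legitimately resubmit a transaction should treat it as benign. A hedged caller-side sketch, assuming only the types package exports visible in this diff (types.Tx, types.ErrTxInCache); the CheckTx callback and TxInfo arguments are elided for brevity:

package example

import (
	"context"
	"errors"

	"github.com/tendermint/tendermint/types"
)

// checkTxFn stands in for the mempool's CheckTx with its extra arguments omitted.
type checkTxFn func(ctx context.Context, tx types.Tx) error

// submit treats a cached duplicate as a no-op and surfaces any other error.
func submit(ctx context.Context, checkTx checkTxFn, tx types.Tx) error {
	err := checkTx(ctx, tx)
	if errors.Is(err, types.ErrTxInCache) {
		// We already have this tx; nothing more to do.
		return nil
	}
	return err
}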

View File

@@ -200,7 +200,7 @@ func TestMempoolUpdate(t *testing.T) {
err := mp.Update(1, []types.Tx{[]byte{0x01}}, abciResponses(1, abci.CodeTypeOK), nil, nil)
require.NoError(t, err)
err = mp.CheckTx(context.Background(), []byte{0x01}, nil, mempool.TxInfo{})
require.NoError(t, err)
assert.Error(t, err)
}
// 2. Removes valid txs from the mempool
@@ -248,13 +248,13 @@ func TestMempoolUpdateDoesNotPanicWhenApplicationMissedTx(t *testing.T) {
for _, tx := range txs {
reqRes := abciclient.NewReqRes(abci.ToRequestCheckTx(abci.RequestCheckTx{Tx: tx}))
reqRes.Response = abci.ToResponseCheckTx(abci.ResponseCheckTx{Code: abci.CodeTypeOK})
// SetDone allows the ReqRes to process its callback synchronously.
// This simulates the Response being ready for the client immediately.
reqRes.SetDone()
mockClient.On("CheckTxAsync", mock.Anything, mock.Anything).Return(reqRes, nil)
err := mp.CheckTx(context.Background(), tx, nil, mempool.TxInfo{})
require.NoError(t, err)
// ensure that the callback that the mempool sets on the ReqRes is run.
reqRes.InvokeCallback()
}
// Calling update to remove the first transaction from the mempool.
@@ -305,11 +305,15 @@ func TestMempool_KeepInvalidTxsInCache(t *testing.T) {
// a must be added to the cache
err = mp.CheckTx(context.Background(), a, nil, mempool.TxInfo{})
require.NoError(t, err)
if assert.Error(t, err) {
assert.Equal(t, types.ErrTxInCache, err)
}
// b must remain in the cache
err = mp.CheckTx(context.Background(), b, nil, mempool.TxInfo{})
require.NoError(t, err)
if assert.Error(t, err) {
assert.Equal(t, types.ErrTxInCache, err)
}
}
// 2. An invalid transaction must remain in the cache

View File

@@ -6,7 +6,6 @@ import (
"fmt"
"runtime/debug"
"sync"
"time"
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/libs/clist"
@@ -24,13 +23,6 @@ var (
_ p2p.Wrapper = (*protomem.Message)(nil)
)
// PeerManager defines the interface contract required for getting necessary
// peer information. This should eventually be replaced with a message-oriented
// approach utilizing the p2p stack.
type PeerManager interface {
GetHeight(types.NodeID) int64
}
// Reactor implements a service that contains mempool of txs that are broadcasted
// amongst peers. It maintains a map from peer ID to counter, to prevent gossiping
// txs to the peers you received it from.
@@ -41,11 +33,6 @@ type Reactor struct {
mempool *CListMempool
ids *mempool.MempoolIDs
// XXX: Currently, this is the only way to get information about a peer. Ideally,
// we rely on message-oriented communication to get necessary peer data.
// ref: https://github.com/tendermint/tendermint/issues/5670
peerMgr PeerManager
mempoolCh *p2p.Channel
peerUpdates *p2p.PeerUpdates
closeCh chan struct{}
@@ -62,7 +49,6 @@ type Reactor struct {
func NewReactor(
logger log.Logger,
cfg *config.MempoolConfig,
peerMgr PeerManager,
mp *CListMempool,
mempoolCh *p2p.Channel,
peerUpdates *p2p.PeerUpdates,
@@ -70,7 +56,6 @@ func NewReactor(
r := &Reactor{
cfg: cfg,
peerMgr: peerMgr,
mempool: mp,
ids: mempool.NewMempoolIDs(),
mempoolCh: mempoolCh,
@@ -171,6 +156,15 @@ func (r *Reactor) handleMempoolMessage(envelope p2p.Envelope) error {
for _, tx := range protoTxs {
if err := r.mempool.CheckTx(context.Background(), types.Tx(tx), nil, txInfo); err != nil {
if errors.Is(err, types.ErrTxInCache) {
// if the tx is in the cache,
// then we've been gossiped a
// Tx that we've already
// got. Gossip should be
// smarter, but it's not a
// problem.
continue
}
logger.Error("checktx failed for tx", "tx", fmt.Sprintf("%X", types.Tx(tx).Hash()), "err", err)
}
}
@@ -355,15 +349,6 @@ func (r *Reactor) broadcastTxRoutine(peerID types.NodeID, closer *tmsync.Closer)
memTx := next.Value.(*mempoolTx)
if r.peerMgr != nil {
height := r.peerMgr.GetHeight(peerID)
if height > 0 && height < memTx.Height()-1 {
// allow for a lag of one block
time.Sleep(mempool.PeerCatchupSleepIntervalMS * time.Millisecond)
continue
}
}
// NOTE: Transaction batching was disabled due to:
// https://github.com/tendermint/tendermint/issues/5796

View File

@@ -70,7 +70,6 @@ func setup(t *testing.T, config *config.MempoolConfig, numNodes int, chBuf uint)
rts.reactors[nodeID] = NewReactor(
rts.logger.With("nodeID", nodeID),
config,
rts.network.Nodes[nodeID].PeerManager,
mempool,
rts.mempoolChnnels[nodeID],
rts.peerUpdates[nodeID],

View File

@@ -6,7 +6,6 @@ import (
"fmt"
"runtime/debug"
"sync"
"time"
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/libs/clist"
@@ -24,13 +23,6 @@ var (
_ p2p.Wrapper = (*protomem.Message)(nil)
)
// PeerManager defines the interface contract required for getting necessary
// peer information. This should eventually be replaced with a message-oriented
// approach utilizing the p2p stack.
type PeerManager interface {
GetHeight(types.NodeID) int64
}
// Reactor implements a service that contains mempool of txs that are broadcasted
// amongst peers. It maintains a map from peer ID to counter, to prevent gossiping
// txs to the peers you received it from.
@@ -41,11 +33,6 @@ type Reactor struct {
mempool *TxMempool
ids *mempool.MempoolIDs
// XXX: Currently, this is the only way to get information about a peer. Ideally,
// we rely on message-oriented communication to get necessary peer data.
// ref: https://github.com/tendermint/tendermint/issues/5670
peerMgr PeerManager
mempoolCh *p2p.Channel
peerUpdates *p2p.PeerUpdates
closeCh chan struct{}
@@ -66,7 +53,6 @@ type Reactor struct {
func NewReactor(
logger log.Logger,
cfg *config.MempoolConfig,
peerMgr PeerManager,
txmp *TxMempool,
mempoolCh *p2p.Channel,
peerUpdates *p2p.PeerUpdates,
@@ -74,7 +60,6 @@ func NewReactor(
r := &Reactor{
cfg: cfg,
peerMgr: peerMgr,
mempool: txmp,
ids: mempool.NewMempoolIDs(),
mempoolCh: mempoolCh,
@@ -178,6 +163,15 @@ func (r *Reactor) handleMempoolMessage(envelope p2p.Envelope) error {
for _, tx := range protoTxs {
if err := r.mempool.CheckTx(context.Background(), types.Tx(tx), nil, txInfo); err != nil {
if errors.Is(err, types.ErrTxInCache) {
// if the tx is in the cache,
// then we've been gossiped a
// Tx that we've already
// got. Gossip should be
// smarter, but it's not a
// problem.
continue
}
logger.Error("checktx failed for tx", "tx", fmt.Sprintf("%X", types.Tx(tx).Hash()), "err", err)
}
}
@@ -364,15 +358,6 @@ func (r *Reactor) broadcastTxRoutine(peerID types.NodeID, closer *tmsync.Closer)
memTx := nextGossipTx.Value.(*WrappedTx)
if r.peerMgr != nil {
height := r.peerMgr.GetHeight(peerID)
if height > 0 && height < memTx.height-1 {
// allow for a lag of one block
time.Sleep(mempool.PeerCatchupSleepIntervalMS * time.Millisecond)
continue
}
}
// NOTE: Transaction batching was disabled due to:
// https://github.com/tendermint/tendermint/issues/5796
if ok := r.mempool.txStore.TxHasPeer(memTx.hash, peerMempoolID); !ok {

View File

@@ -67,7 +67,6 @@ func setupReactors(t *testing.T, numNodes int, chBuf uint) *reactorTestSuite {
rts.reactors[nodeID] = NewReactor(
rts.logger.With("nodeID", nodeID),
cfg.Mempool,
rts.network.Nodes[nodeID].PeerManager,
mempool,
rts.mempoolChannels[nodeID],
rts.peerUpdates[nodeID],

View File

@@ -26,6 +26,7 @@ func newConnTracker(max uint, window time.Duration) connectionTracker {
cache: make(map[string]uint),
lastConnect: make(map[string]time.Time),
max: max,
window: window,
}
}
@@ -43,7 +44,7 @@ func (rat *connTrackerImpl) AddConn(addr net.IP) error {
if num := rat.cache[address]; num >= rat.max {
return fmt.Errorf("%q has %d connections [max=%d]", address, num, rat.max)
} else if num == 0 {
// if there is already at least connection, check to
// if there is already at least one connection, check to
// see if it was established before within the window,
// and error if so.
if last := rat.lastConnect[address]; time.Since(last) < rat.window {

View File

@@ -70,4 +70,15 @@ func TestConnTracker(t *testing.T) {
}
require.Equal(t, 10, ct.Len())
})
t.Run("Window", func(t *testing.T) {
const window = 100 * time.Millisecond
ct := newConnTracker(10, window)
ip := randLocalIPv4()
require.NoError(t, ct.AddConn(ip))
ct.RemoveConn(ip)
require.Error(t, ct.AddConn(ip))
time.Sleep(window)
require.NoError(t, ct.AddConn(ip))
})
}

View File

@@ -27,8 +27,13 @@ var (
// Metrics contains metrics exposed by this package.
type Metrics struct {
// Number of peers.
// Number of peers connected.
Peers metrics.Gauge
// Number of peers in the peer store database.
PeersStored metrics.Gauge
// Number of inactive peers stored.
PeersInactivated metrics.Gauge
// Number of bytes received from a given peer.
PeerReceiveBytesTotal metrics.Counter
// Number of bytes sent to a given peer.
@@ -36,6 +41,21 @@ type Metrics struct {
// Pending bytes to be sent to a given peer.
PeerPendingSendBytes metrics.Gauge
// Number of successful connection attempts
PeersConnectedSuccess metrics.Counter
// Number of failed connection attempts
PeersConnectedFailure metrics.Counter
// Number of peers connected as a result of dialing the
// peer.
PeersConnectedOutgoing metrics.Gauge
// Number of peers connected as a result of the peer dialing
// this node.
PeersConnectedIncoming metrics.Gauge
// Number of peers evicted by this node.
PeersEvicted metrics.Counter
// RouterPeerQueueRecv defines the time taken to read off of a peer's queue
// before sending on the connection.
RouterPeerQueueRecv metrics.Histogram
@@ -73,7 +93,49 @@ func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "peers",
Help: "Number of peers.",
Help: "Number of peers connected.",
}, labels).With(labelsAndValues...),
PeersStored: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "peers_stored",
Help: "Number of peers in the peer Store",
}, labels).With(labelsAndValues...),
PeersInactivated: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "peers_inactivated",
Help: "Number of peers inactivated",
}, labels).With(labelsAndValues...),
PeersConnectedSuccess: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "peers_connected_success",
Help: "Number of successful peer connection attempts",
}, labels).With(labelsAndValues...),
PeersEvicted: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "peers_evicted",
Help: "Number of connected peers evicted",
}, labels).With(labelsAndValues...),
PeersConnectedFailure: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "peers_connected_failure",
Help: "Number of unsuccessful peer connection attempts",
}, labels).With(labelsAndValues...),
PeersConnectedIncoming: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "peers_connected_incoming",
Help: "Number of peers connected by peer dialing this node",
}, labels).With(labelsAndValues...),
PeersConnectedOutgoing: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "peers_connected_outgoing",
Help: "Number of peers connected by this node dialing the peer",
}, labels).With(labelsAndValues...),
PeerReceiveBytesTotal: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
@@ -141,6 +203,13 @@ func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
func NopMetrics() *Metrics {
return &Metrics{
Peers: discard.NewGauge(),
PeersStored: discard.NewGauge(),
PeersConnectedSuccess: discard.NewCounter(),
PeersConnectedFailure: discard.NewCounter(),
PeersConnectedIncoming: discard.NewGauge(),
PeersConnectedOutgoing: discard.NewGauge(),
PeersInactivated: discard.NewGauge(),
PeersEvicted: discard.NewCounter(),
PeerReceiveBytesTotal: discard.NewCounter(),
PeerSendBytesTotal: discard.NewCounter(),
PeerPendingSendBytes: discard.NewGauge(),

View File

@@ -13,6 +13,8 @@ import (
p2p "github.com/tendermint/tendermint/internal/p2p"
time "time"
types "github.com/tendermint/tendermint/types"
)
@@ -49,20 +51,20 @@ func (_m *Connection) FlushClose() error {
return r0
}
// Handshake provides a mock function with given fields: _a0, _a1, _a2
func (_m *Connection) Handshake(_a0 context.Context, _a1 types.NodeInfo, _a2 crypto.PrivKey) (types.NodeInfo, crypto.PubKey, error) {
ret := _m.Called(_a0, _a1, _a2)
// Handshake provides a mock function with given fields: _a0, _a1, _a2, _a3
func (_m *Connection) Handshake(_a0 context.Context, _a1 time.Duration, _a2 types.NodeInfo, _a3 crypto.PrivKey) (types.NodeInfo, crypto.PubKey, error) {
ret := _m.Called(_a0, _a1, _a2, _a3)
var r0 types.NodeInfo
if rf, ok := ret.Get(0).(func(context.Context, types.NodeInfo, crypto.PrivKey) types.NodeInfo); ok {
r0 = rf(_a0, _a1, _a2)
if rf, ok := ret.Get(0).(func(context.Context, time.Duration, types.NodeInfo, crypto.PrivKey) types.NodeInfo); ok {
r0 = rf(_a0, _a1, _a2, _a3)
} else {
r0 = ret.Get(0).(types.NodeInfo)
}
var r1 crypto.PubKey
if rf, ok := ret.Get(1).(func(context.Context, types.NodeInfo, crypto.PrivKey) crypto.PubKey); ok {
r1 = rf(_a0, _a1, _a2)
if rf, ok := ret.Get(1).(func(context.Context, time.Duration, types.NodeInfo, crypto.PrivKey) crypto.PubKey); ok {
r1 = rf(_a0, _a1, _a2, _a3)
} else {
if ret.Get(1) != nil {
r1 = ret.Get(1).(crypto.PubKey)
@@ -70,8 +72,8 @@ func (_m *Connection) Handshake(_a0 context.Context, _a1 types.NodeInfo, _a2 cry
}
var r2 error
if rf, ok := ret.Get(2).(func(context.Context, types.NodeInfo, crypto.PrivKey) error); ok {
r2 = rf(_a0, _a1, _a2)
if rf, ok := ret.Get(2).(func(context.Context, time.Duration, types.NodeInfo, crypto.PrivKey) error); ok {
r2 = rf(_a0, _a1, _a2, _a3)
} else {
r2 = ret.Error(2)
}
@@ -206,3 +208,18 @@ func (_m *Connection) TrySendMessage(_a0 p2p.ChannelID, _a1 []byte) (bool, error
return r0, r1
}
type NewConnectionT interface {
mock.TestingT
Cleanup(func())
}
// NewConnection creates a new instance of Connection. It also registers a testing interface on the mock and a cleanup function to assert the mock's expectations.
func NewConnection(t NewConnectionT) *Connection {
mock := &Connection{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
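Because Handshake now takes the timeout as its second argument, existing expectations on the mock need one more matcher. A hedged sketch using the generated constructor above; the import paths are assumptions, and zero values stand in for real node info and keys:

package example

import (
	"context"
	"testing"
	"time"

	"github.com/stretchr/testify/mock"

	"github.com/tendermint/tendermint/crypto/ed25519"
	"github.com/tendermint/tendermint/internal/p2p/mocks" // illustrative path
	"github.com/tendermint/tendermint/types"
)

func TestConnectionHandshakeMock(t *testing.T) {
	conn := mocks.NewConnection(t)

	// Four matchers now: context, handshake timeout, node info, private key.
	conn.On("Handshake", mock.Anything, mock.Anything, mock.Anything, mock.Anything).
		Return(types.NodeInfo{}, nil, nil)

	_, _, err := conn.Handshake(context.Background(), 5*time.Second, types.NodeInfo{}, ed25519.GenPrivKey())
	if err != nil {
		t.Fatal(err)
	}
}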

View File

@@ -332,3 +332,18 @@ func (_m *Peer) TrySend(_a0 byte, _a1 []byte) bool {
func (_m *Peer) Wait() {
_m.Called()
}
type NewPeerT interface {
mock.TestingT
Cleanup(func())
}
// NewPeer creates a new instance of Peer. It also registers a testing interface on the mock and a cleanup function to assert the mock's expectations.
func NewPeer(t NewPeerT) *Peer {
mock := &Peer{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@@ -119,3 +119,18 @@ func (_m *Transport) String() string {
return r0
}
type NewTransportT interface {
mock.TestingT
Cleanup(func())
}
// NewTransport creates a new instance of Transport. It also registers a testing interface on the mock and a cleanup function to assert the mock's expectations.
func NewTransport(t NewTransportT) *Transport {
mock := &Transport{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@@ -1,7 +1,6 @@
package p2ptest
import (
"context"
"math/rand"
"testing"
"time"
@@ -243,6 +242,7 @@ func (n *Network) MakeNode(t *testing.T, opts NodeOptions) *Node {
RetryTimeJitter: time.Millisecond,
MaxPeers: opts.MaxPeers,
MaxConnected: opts.MaxConnected,
Metrics: p2p.NopMetrics(),
})
require.NoError(t, err)
@@ -253,7 +253,7 @@ func (n *Network) MakeNode(t *testing.T, opts NodeOptions) *Node {
privKey,
peerManager,
[]p2p.Transport{transport},
p2p.RouterOptions{DialSleep: func(_ context.Context) {}},
p2p.RouterOptions{},
)
require.NoError(t, err)
require.NoError(t, router.Start())

View File

@@ -90,7 +90,7 @@ func createOutboundPeerAndPerformHandshake(
if err != nil {
return nil, err
}
peerInfo, _, err := pc.conn.Handshake(context.Background(), ourNodeInfo, pk)
peerInfo, _, err := pc.conn.Handshake(context.Background(), 0, ourNodeInfo, pk)
if err != nil {
return nil, err
}
@@ -187,7 +187,7 @@ func (rp *remotePeer) Dial(addr *NetAddress) (net.Conn, error) {
if err != nil {
return nil, err
}
_, _, err = pc.conn.Handshake(context.Background(), rp.nodeInfo(), rp.PrivKey)
_, _, err = pc.conn.Handshake(context.Background(), 0, rp.nodeInfo(), rp.PrivKey)
if err != nil {
return nil, err
}
@@ -213,7 +213,7 @@ func (rp *remotePeer) accept() {
if err != nil {
golog.Printf("Failed to create a peer: %+v", err)
}
_, _, err = pc.conn.Handshake(context.Background(), rp.nodeInfo(), rp.PrivKey)
_, _, err = pc.conn.Handshake(context.Background(), 0, rp.nodeInfo(), rp.PrivKey)
if err != nil {
golog.Printf("Failed to handshake a peer: %+v", err)
}

View File

@@ -38,11 +38,19 @@ const (
PeerStatusBad PeerStatus = "bad" // peer observed as bad
)
// PeerScore is a numeric score assigned to a peer (higher is better).
type PeerScore uint8
type peerConnectionDirection int
const (
PeerScorePersistent PeerScore = math.MaxUint8 // persistent peers
peerConnectionIncoming peerConnectionDirection = iota + 1
peerConnectionOutgoing
)
// PeerScore is a numeric score assigned to a peer (higher is better).
type PeerScore int16
const (
PeerScorePersistent PeerScore = math.MaxInt16 // persistent peers
MaxPeerScoreNotPersistent PeerScore = PeerScorePersistent - 1
)
// PeerUpdate is a peer update event sent via PeerUpdates.
@@ -118,6 +126,13 @@ type PeerManagerOptions struct {
// outbound). 0 means no limit.
MaxConnected uint16
// MaxOutgoingConnections specifies the maximum number of outgoing
// connections. It must be lower than MaxConnected. If it is
// 0, then all connections can be outgoing. Once this limit is
// reached, the node will not dial peers, allowing the
// remaining peer connections to be used by incoming connections.
MaxOutgoingConnections uint16
// MaxConnectedUpgrade is the maximum number of additional connections to
// use for probing any better-scored peers to upgrade to when all connection
// slots are full. 0 disables peer upgrading.
@@ -147,6 +162,10 @@ type PeerManagerOptions struct {
// retry times, to avoid thundering herds. 0 disables jitter.
RetryTimeJitter time.Duration
// DisconnectCooldownPeriod is the amount of time after we
// disconnect from a peer before we'll consider dialing a new peer
DisconnectCooldownPeriod time.Duration
// PeerScores sets fixed scores for specific peers. It is mainly used
// for testing. A score of 0 is ignored.
PeerScores map[types.NodeID]PeerScore
@@ -162,6 +181,9 @@ type PeerManagerOptions struct {
// persistentPeers provides fast PersistentPeers lookups. It is built
// by optimize().
persistentPeers map[types.NodeID]bool
// Peer Metrics
Metrics *Metrics
}
// Validate validates the options.
@@ -212,6 +234,10 @@ func (o *PeerManagerOptions) Validate() error {
}
}
if o.MaxOutgoingConnections > 0 && o.MaxConnected < o.MaxOutgoingConnections {
return errors.New("cannot set MaxOutgoingConnections to a value larger than MaxConnected")
}
return nil
}
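Taken together, the new options let a node reserve part of its connection budget for inbound peers and back off before redialing a peer it just disconnected from. A hedged configuration sketch using only fields shown in this diff; the numbers are illustrative:

package example

import (
	"time"

	"github.com/tendermint/tendermint/internal/p2p"
)

// newPeerManagerOptions keeps half the slots available for incoming peers and
// waits before redialing a recently disconnected peer. Validate() above
// rejects MaxOutgoingConnections values larger than MaxConnected.
func newPeerManagerOptions(metrics *p2p.Metrics) p2p.PeerManagerOptions {
	return p2p.PeerManagerOptions{
		MaxConnected:             40,
		MaxOutgoingConnections:   20,
		DisconnectCooldownPeriod: 10 * time.Second,
		Metrics:                  metrics,
	}
}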
@@ -280,6 +306,7 @@ func (o *PeerManagerOptions) optimize() {
type PeerManager struct {
selfID types.NodeID
options PeerManagerOptions
metrics *Metrics
rand *rand.Rand
dialWaker *tmsync.Waker // wakes up DialNext() on relevant peer changes
evictWaker *tmsync.Waker // wakes up EvictNext() on relevant peer changes
@@ -288,13 +315,13 @@ type PeerManager struct {
mtx sync.Mutex
store *peerStore
subscriptions map[*PeerUpdates]*PeerUpdates // keyed by struct identity (address)
dialing map[types.NodeID]bool // peers being dialed (DialNext → Dialed/DialFail)
upgrading map[types.NodeID]types.NodeID // peers claimed for upgrade (DialNext → Dialed/DialFail)
connected map[types.NodeID]bool // connected peers (Dialed/Accepted → Disconnected)
ready map[types.NodeID]bool // ready peers (Ready → Disconnected)
evict map[types.NodeID]bool // peers scheduled for eviction (Connected → EvictNext)
evicting map[types.NodeID]bool // peers being evicted (EvictNext → Disconnected)
subscriptions map[*PeerUpdates]*PeerUpdates // keyed by struct identity (address)
dialing map[types.NodeID]bool // peers being dialed (DialNext → Dialed/DialFail)
upgrading map[types.NodeID]types.NodeID // peers claimed for upgrade (DialNext → Dialed/DialFail)
connected map[types.NodeID]peerConnectionDirection // connected peers (Dialed/Accepted → Disconnected)
ready map[types.NodeID]bool // ready peers (Ready → Disconnected)
evict map[types.NodeID]bool // peers scheduled for eviction (Connected → EvictNext)
evicting map[types.NodeID]bool // peers being evicted (EvictNext → Disconnected)
}
// NewPeerManager creates a new peer manager.
@@ -314,28 +341,34 @@ func NewPeerManager(selfID types.NodeID, peerDB dbm.DB, options PeerManagerOptio
}
peerManager := &PeerManager{
selfID: selfID,
options: options,
rand: rand.New(rand.NewSource(time.Now().UnixNano())), // nolint:gosec
dialWaker: tmsync.NewWaker(),
evictWaker: tmsync.NewWaker(),
closeCh: make(chan struct{}),
selfID: selfID,
options: options,
rand: rand.New(rand.NewSource(time.Now().UnixNano())), // nolint:gosec
dialWaker: tmsync.NewWaker(),
evictWaker: tmsync.NewWaker(),
closeCh: make(chan struct{}),
metrics: NopMetrics(),
store: store,
dialing: map[types.NodeID]bool{},
upgrading: map[types.NodeID]types.NodeID{},
connected: map[types.NodeID]bool{},
connected: map[types.NodeID]peerConnectionDirection{},
ready: map[types.NodeID]bool{},
evict: map[types.NodeID]bool{},
evicting: map[types.NodeID]bool{},
subscriptions: map[*PeerUpdates]*PeerUpdates{},
}
if options.Metrics != nil {
peerManager.metrics = options.Metrics
}
if err = peerManager.configurePeers(); err != nil {
return nil, err
}
if err = peerManager.prunePeers(); err != nil {
return nil, err
}
return peerManager, nil
}
@@ -361,6 +394,7 @@ func (m *PeerManager) configurePeers() error {
}
}
}
m.metrics.PeersStored.Add(float64(m.store.Size()))
return nil
}
@@ -390,20 +424,45 @@ func (m *PeerManager) prunePeers() error {
ranked := m.store.Ranked()
for i := len(ranked) - 1; i >= 0; i-- {
peerID := ranked[i].ID
switch {
case m.store.Size() <= int(m.options.MaxPeers):
return nil
case m.dialing[peerID]:
case m.connected[peerID]:
case m.isConnected(peerID):
default:
if err := m.store.Delete(peerID); err != nil {
return err
}
m.metrics.PeersStored.Add(-1)
}
}
return nil
}
func (m *PeerManager) isConnected(peerID types.NodeID) bool {
_, ok := m.connected[peerID]
return ok
}
type connectionStats struct {
incoming uint16
outgoing uint16
}
func (m *PeerManager) getConnectedInfo() connectionStats {
out := connectionStats{}
for _, direction := range m.connected {
switch direction {
case peerConnectionIncoming:
out.incoming++
case peerConnectionOutgoing:
out.outgoing++
}
}
return out
}
// Add adds a peer to the manager, given as an address. If the peer already
// exists, the address is added to it if it isn't already present. This will push
// low scoring peers out of the address book if it exceeds the maximum size.
@@ -427,12 +486,17 @@ func (m *PeerManager) Add(address NodeAddress) (bool, error) {
if ok {
return false, nil
}
if peer.Inactive {
return false, nil
}
// else add the new address
peer.AddressInfo[address] = &peerAddressInfo{Address: address}
if err := m.store.Set(peer); err != nil {
return false, err
}
m.metrics.PeersStored.Add(1)
if err := m.prunePeers(); err != nil {
return true, err
}
@@ -452,18 +516,35 @@ func (m *PeerManager) PeerRatio() float64 {
return float64(m.store.Size()) / float64(m.options.MaxPeers)
}
func (m *PeerManager) HasMaxPeerCapacity() bool {
m.mtx.Lock()
defer m.mtx.Unlock()
return len(m.connected) >= int(m.options.MaxConnected)
}
func (m *PeerManager) HasDialedMaxPeers() bool {
m.mtx.Lock()
defer m.mtx.Unlock()
stats := m.getConnectedInfo()
return stats.outgoing >= m.options.MaxOutgoingConnections
}
// DialNext finds an appropriate peer address to dial, and marks it as dialing.
// If no peer is found, or all connection slots are full, it blocks until one
// becomes available. The caller must call Dialed() or DialFailed() for the
// returned peer.
func (m *PeerManager) DialNext(ctx context.Context) (NodeAddress, error) {
for {
address, err := m.TryDialNext()
if err != nil || (address != NodeAddress{}) {
return address, err
if address := m.TryDialNext(); (address != NodeAddress{}) {
return address, nil
}
select {
case <-m.dialWaker.Sleep():
continue
case <-ctx.Done():
return NodeAddress{}, ctx.Err()
}
@@ -472,20 +553,28 @@ func (m *PeerManager) DialNext(ctx context.Context) (NodeAddress, error) {
// TryDialNext is equivalent to DialNext(), but immediately returns an empty
// address if no peers or connection slots are available.
func (m *PeerManager) TryDialNext() (NodeAddress, error) {
func (m *PeerManager) TryDialNext() NodeAddress {
m.mtx.Lock()
defer m.mtx.Unlock()
// We allow dialing MaxConnected+MaxConnectedUpgrade peers. Including
// MaxConnectedUpgrade allows us to probe additional peers that have a
// higher score than any other peers, and if successful evict it.
if m.options.MaxConnected > 0 && len(m.connected)+len(m.dialing) >=
int(m.options.MaxConnected)+int(m.options.MaxConnectedUpgrade) {
return NodeAddress{}, nil
if m.options.MaxConnected > 0 && len(m.connected)+len(m.dialing) >= int(m.options.MaxConnected)+int(m.options.MaxConnectedUpgrade) {
return NodeAddress{}
}
cinfo := m.getConnectedInfo()
if m.options.MaxOutgoingConnections > 0 && cinfo.outgoing >= m.options.MaxOutgoingConnections {
return NodeAddress{}
}
for _, peer := range m.store.Ranked() {
if m.dialing[peer.ID] || m.connected[peer.ID] {
if m.dialing[peer.ID] || m.isConnected(peer.ID) {
continue
}
if !peer.LastDisconnected.IsZero() && time.Since(peer.LastDisconnected) < m.options.DisconnectCooldownPeriod {
continue
}
@@ -494,6 +583,10 @@ func (m *PeerManager) TryDialNext() (NodeAddress, error) {
continue
}
if id, ok := m.store.Resolve(addressInfo.Address); ok && (m.isConnected(id) || m.dialing[id]) {
continue
}
// We now have an eligible address to dial. If we're full but have
// upgrade capacity (as checked above), we find a lower-scored peer
// we can replace and mark it as upgrading so no one else claims it.
@@ -504,25 +597,24 @@ func (m *PeerManager) TryDialNext() (NodeAddress, error) {
if m.options.MaxConnected > 0 && len(m.connected) >= int(m.options.MaxConnected) {
upgradeFromPeer := m.findUpgradeCandidate(peer.ID, peer.Score())
if upgradeFromPeer == "" {
return NodeAddress{}, nil
return NodeAddress{}
}
m.upgrading[upgradeFromPeer] = peer.ID
}
m.dialing[peer.ID] = true
return addressInfo.Address, nil
return addressInfo.Address
}
}
return NodeAddress{}, nil
return NodeAddress{}
}
// DialFailed reports a failed dial attempt. This will make the peer available
// for dialing again when appropriate (possibly after a retry timeout).
//
// FIXME: This should probably delete or mark bad addresses/peers after some time.
func (m *PeerManager) DialFailed(address NodeAddress) error {
m.mtx.Lock()
defer m.mtx.Unlock()
m.metrics.PeersConnectedFailure.Add(1)
delete(m.dialing, address.NodeID)
for from, to := range m.upgrading {
@@ -542,6 +634,7 @@ func (m *PeerManager) DialFailed(address NodeAddress) error {
addressInfo.LastDialFailure = time.Now().UTC()
addressInfo.DialFailures++
if err := m.store.Set(peer); err != nil {
return err
}
@@ -575,6 +668,8 @@ func (m *PeerManager) Dialed(address NodeAddress) error {
m.mtx.Lock()
defer m.mtx.Unlock()
m.metrics.PeersConnectedSuccess.Add(1)
delete(m.dialing, address.NodeID)
var upgradeFromPeer types.NodeID
@@ -589,12 +684,11 @@ func (m *PeerManager) Dialed(address NodeAddress) error {
if address.NodeID == m.selfID {
return fmt.Errorf("rejecting connection to self (%v)", address.NodeID)
}
if m.connected[address.NodeID] {
if m.isConnected(address.NodeID) {
return fmt.Errorf("peer %v is already connected", address.NodeID)
}
if m.options.MaxConnected > 0 && len(m.connected) >= int(m.options.MaxConnected) {
if upgradeFromPeer == "" || len(m.connected) >=
int(m.options.MaxConnected)+int(m.options.MaxConnectedUpgrade) {
if upgradeFromPeer == "" || len(m.connected) >= int(m.options.MaxConnected)+int(m.options.MaxConnectedUpgrade) {
return fmt.Errorf("already connected to maximum number of peers")
}
}
@@ -604,6 +698,11 @@ func (m *PeerManager) Dialed(address NodeAddress) error {
return fmt.Errorf("peer %q was removed while dialing", address.NodeID)
}
now := time.Now().UTC()
if peer.Inactive {
m.metrics.PeersInactivated.Add(-1)
}
peer.Inactive = false
peer.LastConnected = now
if addressInfo, ok := peer.AddressInfo[address]; ok {
addressInfo.DialFailures = 0
@@ -615,8 +714,7 @@ func (m *PeerManager) Dialed(address NodeAddress) error {
return err
}
if upgradeFromPeer != "" && m.options.MaxConnected > 0 &&
len(m.connected) >= int(m.options.MaxConnected) {
if upgradeFromPeer != "" && m.options.MaxConnected > 0 && len(m.connected) >= int(m.options.MaxConnected) {
// Look for an even lower-scored peer that may have appeared since we
// started the upgrade.
if p, ok := m.store.Get(upgradeFromPeer); ok {
@@ -625,9 +723,11 @@ func (m *PeerManager) Dialed(address NodeAddress) error {
}
}
m.evict[upgradeFromPeer] = true
m.evictWaker.Wake()
}
m.connected[peer.ID] = true
m.evictWaker.Wake()
m.metrics.PeersConnectedOutgoing.Add(1)
m.connected[peer.ID] = peerConnectionOutgoing
return nil
}
@@ -656,11 +756,10 @@ func (m *PeerManager) Accepted(peerID types.NodeID) error {
if peerID == m.selfID {
return fmt.Errorf("rejecting connection from self (%v)", peerID)
}
if m.connected[peerID] {
if m.isConnected(peerID) {
return fmt.Errorf("peer %q is already connected", peerID)
}
if m.options.MaxConnected > 0 &&
len(m.connected) >= int(m.options.MaxConnected)+int(m.options.MaxConnectedUpgrade) {
if m.options.MaxConnected > 0 && len(m.connected) >= int(m.options.MaxConnected)+int(m.options.MaxConnectedUpgrade) {
return fmt.Errorf("already connected to maximum number of peers")
}
@@ -685,12 +784,17 @@ func (m *PeerManager) Accepted(peerID types.NodeID) error {
}
}
if peer.Inactive {
m.metrics.PeersInactivated.Add(-1)
}
peer.Inactive = false
peer.LastConnected = time.Now().UTC()
if err := m.store.Set(peer); err != nil {
return err
}
m.connected[peerID] = true
m.metrics.PeersConnectedIncoming.Add(1)
m.connected[peerID] = peerConnectionIncoming
if upgradeFromPeer != "" {
m.evict[upgradeFromPeer] = true
}
@@ -709,7 +813,7 @@ func (m *PeerManager) Ready(peerID types.NodeID, channels ChannelIDSet) {
m.mtx.Lock()
defer m.mtx.Unlock()
if m.connected[peerID] {
if m.isConnected(peerID) {
m.ready[peerID] = true
m.broadcast(PeerUpdate{
NodeID: peerID,
@@ -745,7 +849,7 @@ func (m *PeerManager) TryEvictNext() (types.NodeID, error) {
// random one.
for peerID := range m.evict {
delete(m.evict, peerID)
if m.connected[peerID] && !m.evicting[peerID] {
if m.isConnected(peerID) && !m.evicting[peerID] {
m.evicting[peerID] = true
return peerID, nil
}
@@ -762,7 +866,7 @@ func (m *PeerManager) TryEvictNext() (types.NodeID, error) {
ranked := m.store.Ranked()
for i := len(ranked) - 1; i >= 0; i-- {
peer := ranked[i]
if m.connected[peer.ID] && !m.evicting[peer.ID] {
if m.isConnected(peer.ID) && !m.evicting[peer.ID] {
m.evicting[peer.ID] = true
return peer.ID, nil
}
@@ -777,6 +881,13 @@ func (m *PeerManager) Disconnected(peerID types.NodeID) {
m.mtx.Lock()
defer m.mtx.Unlock()
switch m.connected[peerID] {
case peerConnectionIncoming:
m.metrics.PeersConnectedIncoming.Add(-1)
case peerConnectionOutgoing:
m.metrics.PeersConnectedOutgoing.Add(-1)
}
ready := m.ready[peerID]
delete(m.connected, peerID)
@@ -785,6 +896,22 @@ func (m *PeerManager) Disconnected(peerID types.NodeID) {
delete(m.evicting, peerID)
delete(m.ready, peerID)
if peer, ok := m.store.Get(peerID); ok {
peer.LastDisconnected = time.Now()
_ = m.store.Set(peer)
// launch a goroutine to wake the dialWaker when the
// disconnected peer can be dialed again.
go func() {
timer := time.NewTimer(m.options.DisconnectCooldownPeriod)
defer timer.Stop()
select {
case <-timer.C:
m.dialWaker.Wake()
case <-m.closeCh:
}
}()
}
if ready {
m.broadcast(PeerUpdate{
NodeID: peerID,
@@ -807,17 +934,34 @@ func (m *PeerManager) Errored(peerID types.NodeID, err error) {
m.mtx.Lock()
defer m.mtx.Unlock()
if m.connected[peerID] {
if m.isConnected(peerID) {
m.evict[peerID] = true
}
m.evictWaker.Wake()
}
// Inactivate marks a peer as inactive which means we won't attempt to
// dial this peer again. A peer can be reactivated by successfully
// dialing and connecting to the node.
func (m *PeerManager) Inactivate(peerID types.NodeID) error {
m.mtx.Lock()
defer m.mtx.Unlock()
peer, ok := m.store.peers[peerID]
if !ok {
return nil
}
peer.Inactive = true
m.metrics.PeersInactivated.Add(1)
return m.store.Set(*peer)
}
// Advertise returns a list of peer addresses to advertise to a peer.
//
// FIXME: This is fairly naïve and only returns the addresses of the
// highest-ranked peers.
// It sorts all peers in the peer store and assembles a list that is
// most likely to include the highest-priority peers.
func (m *PeerManager) Advertise(peerID types.NodeID, limit uint16) []NodeAddress {
m.mtx.Lock()
defer m.mtx.Unlock()
@@ -830,19 +974,92 @@ func (m *PeerManager) Advertise(peerID types.NodeID, limit uint16) []NodeAddress
addresses = append(addresses, m.options.SelfAddress)
}
for _, peer := range m.store.Ranked() {
var numAddresses int
var totalScore int
ranked := m.store.Ranked()
seenAddresses := map[NodeAddress]struct{}{}
scores := map[types.NodeID]int{}
// get the total number of possible addresses
for _, peer := range ranked {
if peer.ID == peerID {
continue
}
score := int(peer.Score())
for nodeAddr, addressInfo := range peer.AddressInfo {
if len(addresses) >= int(limit) {
return addresses
totalScore += score
scores[peer.ID] = score
for addr := range peer.AddressInfo {
if _, ok := m.options.PrivatePeers[addr.NodeID]; !ok {
numAddresses++
}
}
}
var attempts uint16
var addedLastIteration bool
// if the number of addresses is less than the number of peers
// to advertise, adjust the limit downwards
if numAddresses < int(limit) {
limit = uint16(numAddresses)
}
// collect addresses until we have the number requested
// (limit), or we've added all known addresses, or we've tried
// at least 256 times and the last time we iterated over
// remaining addresses we added no new candidates.
for len(addresses) < int(limit) && (attempts < (limit*2) || !addedLastIteration) {
attempts++
addedLastIteration = false
for idx, peer := range ranked {
if peer.ID == peerID {
continue
}
// only add non-private NodeIDs
if _, ok := m.options.PrivatePeers[nodeAddr.NodeID]; !ok {
addresses = append(addresses, addressInfo.Address)
if len(addresses) >= int(limit) {
break
}
for nodeAddr, addressInfo := range peer.AddressInfo {
if len(addresses) >= int(limit) {
break
}
// only look at each address once, by
// tracking a set of addresses seen
if _, ok := seenAddresses[addressInfo.Address]; ok {
continue
}
// only add non-private NodeIDs
if _, ok := m.options.PrivatePeers[nodeAddr.NodeID]; !ok {
// add the address if the total number of ranked addresses
// will fit within the limit, or otherwise based on a coin flip.
// the coin flip is weighted by the peer's score, but roughly
// 10% of the time we'll randomly insert a "losing" peer.
// nolint:gosec // G404: Use of weak random number generator
if numAddresses <= int(limit) || rand.Intn(totalScore+1) <= scores[peer.ID]+1 || rand.Intn((idx+1)*10) <= idx+1 {
addresses = append(addresses, addressInfo.Address)
addedLastIteration = true
seenAddresses[addressInfo.Address] = struct{}{}
}
} else {
seenAddresses[addressInfo.Address] = struct{}{}
// if the number of addresses
// is the same as the limit,
// we should remove private
// addresses from the limit so
// we can still return early.
if numAddresses == int(limit) {
limit--
}
}
}
}
}
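The selection loop above is, at its core, score-weighted sampling: higher-scored peers' addresses are advertised with higher probability, a small fraction of low scorers still get through, and the number of passes is bounded so the loop terminates. A simplified, self-contained sketch of that weighting idea (one address per peer, no private-peer filtering):

package example

import "math/rand"

// pickWeighted returns up to limit peer IDs, each chosen with probability
// roughly proportional to its score, with a bounded number of passes.
func pickWeighted(scores map[string]int, limit int) []string {
	total := 0
	for _, s := range scores {
		total += s
	}
	picked := make([]string, 0, limit)
	seen := map[string]bool{}
	for attempts := 0; len(picked) < limit && len(seen) < len(scores) && attempts < 2*limit+1; attempts++ {
		for id, s := range scores {
			if len(picked) >= limit {
				break
			}
			if seen[id] {
				continue
			}
			// Coin flip weighted by score, mirroring the rand.Intn check above.
			if total == 0 || rand.Intn(total+1) <= s {
				picked = append(picked, id)
				seen[id] = true
			}
		}
	}
	return picked
}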
@@ -912,8 +1129,14 @@ func (m *PeerManager) processPeerEvent(pu PeerUpdate) {
switch pu.Status {
case PeerStatusBad:
if m.store.peers[pu.NodeID].MutableScore == math.MinInt16 {
return
}
m.store.peers[pu.NodeID].MutableScore--
case PeerStatusGood:
if m.store.peers[pu.NodeID].MutableScore == math.MaxInt16 {
return
}
m.store.peers[pu.NodeID].MutableScore++
}
}
@@ -1014,9 +1237,11 @@ func (m *PeerManager) findUpgradeCandidate(id types.NodeID, score PeerScore) typ
for i := len(ranked) - 1; i >= 0; i-- {
candidate := ranked[i]
switch {
case candidate.ID == id:
continue
case candidate.Score() >= score:
return "" // no further peers can be scored lower, due to sorting
case !m.connected[candidate.ID]:
case !m.isConnected(candidate.ID):
case m.evict[candidate.ID]:
case m.evicting[candidate.ID]:
case m.upgrading[candidate.ID] != "":
@@ -1055,37 +1280,6 @@ func (m *PeerManager) retryDelay(failures uint32, persistent bool) time.Duration
return delay
}
// GetHeight returns a peer's height, as reported via SetHeight, or 0 if the
// peer or height is unknown.
//
// FIXME: This is a temporary workaround to share state between the consensus
// and mempool reactors, carried over from the legacy P2P stack. Reactors should
// not have dependencies on each other, instead tracking this themselves.
func (m *PeerManager) GetHeight(peerID types.NodeID) int64 {
m.mtx.Lock()
defer m.mtx.Unlock()
peer, _ := m.store.Get(peerID)
return peer.Height
}
// SetHeight stores a peer's height, making it available via GetHeight.
//
// FIXME: This is a temporary workaround to share state between the consensus
// and mempool reactors, carried over from the legacy P2P stack. Reactors should
// not have dependencies on each other, instead tracking this themselves.
func (m *PeerManager) SetHeight(peerID types.NodeID, height int64) error {
m.mtx.Lock()
defer m.mtx.Unlock()
peer, ok := m.store.Get(peerID)
if !ok {
peer = m.newPeerInfo(peerID)
}
peer.Height = height
return m.store.Set(peer)
}
// peerStore stores information about peers. It is not thread-safe, assuming it
// is only used by PeerManager which handles concurrency control. This allows
// the manager to execute multiple operations atomically via its own mutex.
@@ -1096,6 +1290,7 @@ func (m *PeerManager) SetHeight(peerID types.NodeID, height int64) error {
type peerStore struct {
db dbm.DB
peers map[types.NodeID]*peerInfo
index map[NodeAddress]types.NodeID
ranked []*peerInfo // cache for Ranked(), nil invalidates cache
}
@@ -1115,6 +1310,7 @@ func newPeerStore(db dbm.DB) (*peerStore, error) {
// loadPeers loads all peers from the database into memory.
func (s *peerStore) loadPeers() error {
peers := map[types.NodeID]*peerInfo{}
addrs := map[NodeAddress]types.NodeID{}
start, end := keyPeerInfoRange()
iter, err := s.db.Iterator(start, end)
@@ -1134,11 +1330,18 @@ func (s *peerStore) loadPeers() error {
return fmt.Errorf("invalid peer data: %w", err)
}
peers[peer.ID] = peer
for addr := range peer.AddressInfo {
// TODO maybe check to see if we've seen this
// addr before for a different peer, there
// could be duplicates.
addrs[addr] = peer.ID
}
}
if iter.Error() != nil {
return iter.Error()
}
s.peers = peers
s.index = addrs
s.ranked = nil // invalidate cache if populated
return nil
}
@@ -1150,6 +1353,12 @@ func (s *peerStore) Get(id types.NodeID) (peerInfo, bool) {
return peer.Copy(), ok
}
// Resolve returns the peer ID for a given node address if known.
func (s *peerStore) Resolve(addr NodeAddress) (types.NodeID, bool) {
id, ok := s.index[addr]
return id, ok
}
// Set stores peer data. The input data will be copied, and can safely be reused
// by the caller.
func (s *peerStore) Set(peer peerInfo) error {
@@ -1178,20 +1387,29 @@ func (s *peerStore) Set(peer peerInfo) error {
// update the existing pointer address.
*current = peer
}
for addr := range peer.AddressInfo {
s.index[addr] = peer.ID
}
return nil
}
// Delete deletes a peer, or does nothing if it does not exist.
func (s *peerStore) Delete(id types.NodeID) error {
if _, ok := s.peers[id]; !ok {
peer, ok := s.peers[id]
if !ok {
return nil
}
if err := s.db.Delete(keyPeerInfo(id)); err != nil {
return err
for _, addr := range peer.AddressInfo {
delete(s.index, addr.Address)
}
delete(s.peers, id)
s.ranked = nil
if err := s.db.Delete(keyPeerInfo(id)); err != nil {
return err
}
return nil
}
@@ -1227,9 +1445,48 @@ func (s *peerStore) Ranked() []*peerInfo {
s.ranked = append(s.ranked, peer)
}
sort.Slice(s.ranked, func(i, j int) bool {
// FIXME: If necessary, consider precomputing scores before sorting,
// to reduce the number of Score() calls.
return s.ranked[i].Score() > s.ranked[j].Score()
// TODO: re-evaluate more holistic sorting, perhaps as follows:
// // sort inactive peers after active peers
// if s.ranked[i].Inactive && !s.ranked[j].Inactive {
// return false
// } else if !s.ranked[i].Inactive && s.ranked[j].Inactive {
// return true
// }
// iLastDialed, iLastDialSuccess := s.ranked[i].LastDialed()
// jLastDialed, jLastDialSuccess := s.ranked[j].LastDialed()
// // sort peers who our most recent dialing attempt was
// // successful ahead of peers with recent dialing
// // failures
// switch {
// case iLastDialSuccess && jLastDialSuccess:
// // if both peers were (are?) successfully
// // connected, compare their scores, but give the
// // one we dialed successfully most recently a bonus
// iScore := s.ranked[i].Score()
// jScore := s.ranked[j].Score()
// if jLastDialed.Before(iLastDialed) {
// jScore++
// } else {
// iScore++
// }
// return iScore > jScore
// case iLastDialSuccess:
// return true
// case jLastDialSuccess:
// return false
// default:
// // if both peers were not successful in their
// // most recent dialing attempt, fall back to
// // peer score.
// return s.ranked[i].Score() > s.ranked[j].Score()
// }
})
return s.ranked
}
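
The FIXME above suggests precomputing scores so Score() runs once per peer instead of inside every comparison. A small standalone sketch of that variant, with hypothetical stand-in types (not the current implementation):

package main

import (
	"fmt"
	"sort"
)

type peer struct {
	ID    string
	score int // stand-in for a potentially expensive Score() computation
}

func main() {
	peers := []peer{{"aaaa", 1}, {"bbbb", 42}, {"cccc", 7}}

	// Precompute each peer's score once...
	scores := make(map[string]int, len(peers))
	for _, p := range peers {
		scores[p.ID] = p.score // imagine p.Score() here
	}

	// ...then sort descending using the cached values.
	sort.Slice(peers, func(i, j int) bool {
		return scores[peers[i].ID] > scores[peers[j].ID]
	})
	fmt.Println(peers) // [{bbbb 42} {cccc 7} {aaaa 1}]
}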
@@ -1241,17 +1498,18 @@ func (s *peerStore) Size() int {
// peerInfo contains peer information stored in a peerStore.
type peerInfo struct {
ID types.NodeID
AddressInfo map[NodeAddress]*peerAddressInfo
LastConnected time.Time
ID types.NodeID
AddressInfo map[NodeAddress]*peerAddressInfo
LastConnected time.Time
LastDisconnected time.Time
// These fields are ephemeral, i.e. not persisted to the database.
Persistent bool
Seed bool
Height int64
FixedScore PeerScore // mainly for tests
MutableScore int64 // updated by router
Inactive bool
}
// peerInfoFromProto converts a Protobuf PeerInfo message to a peerInfo,
@@ -1260,6 +1518,7 @@ func peerInfoFromProto(msg *p2pproto.PeerInfo) (*peerInfo, error) {
p := &peerInfo{
ID: types.NodeID(msg.ID),
AddressInfo: map[NodeAddress]*peerAddressInfo{},
Inactive: msg.Inactive,
}
if msg.LastConnected != nil {
p.LastConnected = *msg.LastConnected
@@ -1282,6 +1541,7 @@ func peerInfoFromProto(msg *p2pproto.PeerInfo) (*peerInfo, error) {
func (p *peerInfo) ToProto() *p2pproto.PeerInfo {
msg := &p2pproto.PeerInfo{
ID: string(p.ID),
Inactive: p.Inactive,
LastConnected: &p.LastConnected,
}
for _, addressInfo := range p.AddressInfo {
@@ -1290,6 +1550,7 @@ func (p *peerInfo) ToProto() *p2pproto.PeerInfo {
if msg.LastConnected.IsZero() {
msg.LastConnected = nil
}
return msg
}
@@ -1306,6 +1567,45 @@ func (p *peerInfo) Copy() peerInfo {
return c
}
// LastDialed returns when the peer was last dialed, and whether that dial
// attempt was successful. If the peer was never dialed, the timestamp is
// the zero time.
func (p *peerInfo) LastDialed() (time.Time, bool) {
var (
last time.Time
success bool
)
last = last.Add(-1) // sentinel offset, so we can tell below whether last was ever updated
for _, addr := range p.AddressInfo {
if addr.LastDialFailure.Equal(addr.LastDialSuccess) {
if addr.LastDialFailure.IsZero() {
continue
}
if last.After(addr.LastDialSuccess) {
continue
}
success = true
last = addr.LastDialSuccess
}
if addr.LastDialFailure.After(last) {
success = false
last = addr.LastDialFailure
}
if addr.LastDialSuccess.After(last) || last.Equal(addr.LastDialSuccess) {
success = true
last = addr.LastDialSuccess
}
}
// if we never modified last, report the zero time
if last.Add(1).IsZero() {
return time.Time{}, success
}
return last, success
}
// Score calculates a score for the peer. Higher-scored peers will be
// preferred over lower-scored ones.
func (p *peerInfo) Score() PeerScore {
@@ -1324,12 +1624,8 @@ func (p *peerInfo) Score() PeerScore {
score -= int64(addr.DialFailures)
}
if score <= 0 {
return 0
}
if score >= math.MaxUint8 {
return PeerScore(math.MaxUint8)
if score < math.MinInt16 {
score = math.MinInt16
}
return PeerScore(score)

View File

@@ -31,7 +31,7 @@ func TestPeerScoring(t *testing.T) {
t.Run("Synchronous", func(t *testing.T) {
// update the manager and make sure it's correct
require.EqualValues(t, 0, peerManager.Scores()[id])
require.Zero(t, peerManager.Scores()[id])
// add a bunch of good status updates and watch things increase.
for i := 1; i < 10; i++ {
@@ -80,3 +80,173 @@ func TestPeerScoring(t *testing.T) {
"startAt=%d score=%d", start, peerManager.Scores()[id])
})
}
func makeMockPeerStore(t *testing.T, peers ...peerInfo) *peerStore {
t.Helper()
s, err := newPeerStore(dbm.NewMemDB())
if err != nil {
t.Fatal(err)
}
for idx := range peers {
if err := s.Set(peers[idx]); err != nil {
t.Fatal(err)
}
}
return s
}
func TestPeerRanking(t *testing.T) {
t.Run("InactiveSecond", func(t *testing.T) {
t.Skip("inactive status is not currently factored into peer rank.")
store := makeMockPeerStore(t,
peerInfo{ID: "second", Inactive: true},
peerInfo{ID: "first", Inactive: false},
)
ranked := store.Ranked()
if len(ranked) != 2 {
t.Fatal("missing peer in ranked output")
}
if ranked[0].ID != "first" {
t.Error("inactive peer is first")
}
if ranked[1].ID != "second" {
t.Error("active peer is second")
}
})
t.Run("ScoreOrder", func(t *testing.T) {
for _, test := range []struct {
Name string
First int64
Second int64
}{
{
Name: "Mirror",
First: 100,
Second: -100,
},
{
Name: "VeryLow",
First: 0,
Second: -100,
},
{
Name: "High",
First: 300,
Second: 256,
},
} {
t.Run(test.Name, func(t *testing.T) {
store := makeMockPeerStore(t,
peerInfo{
ID: "second",
MutableScore: test.Second,
},
peerInfo{
ID: "first",
MutableScore: test.First,
})
ranked := store.Ranked()
if len(ranked) != 2 {
t.Fatal("missing peer in ranked output")
}
if ranked[0].ID != "first" {
t.Error("higher peer is first")
}
if ranked[1].ID != "second" {
t.Error("higher peer is second")
}
})
}
})
}
func TestLastDialed(t *testing.T) {
t.Run("Zero", func(t *testing.T) {
p := &peerInfo{}
ts, ok := p.LastDialed()
if !ts.IsZero() {
t.Error("timestamp should be zero:", ts)
}
if ok {
t.Error("peer reported success, despite none")
}
})
t.Run("NeverDialed", func(t *testing.T) {
p := &peerInfo{
AddressInfo: map[NodeAddress]*peerAddressInfo{
{NodeID: "kip"}: {},
{NodeID: "merlin"}: {},
},
}
ts, ok := p.LastDialed()
if !ts.IsZero() {
t.Error("timestamp should be zero:", ts)
}
if ok {
t.Error("peer reported success, despite none")
}
})
t.Run("Ordered", func(t *testing.T) {
base := time.Now()
for _, test := range []struct {
Name string
SuccessTime time.Time
FailTime time.Time
ExpectedSuccess bool
}{
{
Name: "Zero",
},
{
Name: "Success",
SuccessTime: base.Add(time.Hour),
FailTime: base,
ExpectedSuccess: true,
},
{
Name: "Equal",
SuccessTime: base,
FailTime: base,
ExpectedSuccess: true,
},
{
Name: "Failure",
SuccessTime: base,
FailTime: base.Add(time.Hour),
ExpectedSuccess: false,
},
} {
t.Run(test.Name, func(t *testing.T) {
p := &peerInfo{
AddressInfo: map[NodeAddress]*peerAddressInfo{
{NodeID: "kip"}: {LastDialSuccess: test.SuccessTime},
{NodeID: "merlin"}: {LastDialFailure: test.FailTime},
},
}
ts, ok := p.LastDialed()
if test.ExpectedSuccess && !ts.Equal(test.SuccessTime) {
if !ts.Equal(test.FailTime) {
t.Fatal("got unexpected timestamp:", ts)
}
t.Error("last dialed time reported incorrect value:", ts)
}
if !test.ExpectedSuccess && !ts.Equal(test.FailTime) {
if !ts.Equal(test.SuccessTime) {
t.Fatal("got unexpected timestamp:", ts)
}
t.Error("last dialed time reported incorrect value:", ts)
}
if test.ExpectedSuccess != ok {
t.Error("test reported incorrect outcome for last dialed type")
}
})
}
})
}

View File

@@ -378,16 +378,14 @@ func TestPeerManager_DialNext_WakeOnDialFailed(t *testing.T) {
added, err := peerManager.Add(a)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, a, dial)
// Add b. We shouldn't be able to dial it, due to MaxConnected.
added, err = peerManager.Add(b)
require.NoError(t, err)
require.True(t, added)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Zero(t, dial)
// Spawn a goroutine to fail a's dial attempt.
@@ -415,8 +413,7 @@ func TestPeerManager_DialNext_WakeOnDialFailedRetry(t *testing.T) {
added, err := peerManager.Add(a)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, a, dial)
require.NoError(t, peerManager.DialFailed(dial))
failed := time.Now()
@@ -443,8 +440,7 @@ func TestPeerManager_DialNext_WakeOnDisconnected(t *testing.T) {
err = peerManager.Accepted(a.NodeID)
require.NoError(t, err)
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Zero(t, dial)
go func() {
@@ -473,8 +469,7 @@ func TestPeerManager_TryDialNext_MaxConnected(t *testing.T) {
added, err := peerManager.Add(a)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, a, dial)
require.NoError(t, peerManager.Dialed(a))
@@ -482,16 +477,14 @@ func TestPeerManager_TryDialNext_MaxConnected(t *testing.T) {
added, err = peerManager.Add(b)
require.NoError(t, err)
require.True(t, added)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Equal(t, b, dial)
// At this point, adding c will not allow dialing it.
added, err = peerManager.Add(c)
require.NoError(t, err)
require.True(t, added)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Zero(t, dial)
}
@@ -504,11 +497,11 @@ func TestPeerManager_TryDialNext_MaxConnectedUpgrade(t *testing.T) {
peerManager, err := p2p.NewPeerManager(selfID, dbm.NewMemDB(), p2p.PeerManagerOptions{
PeerScores: map[types.NodeID]p2p.PeerScore{
a.NodeID: 0,
b.NodeID: 1,
c.NodeID: 2,
d.NodeID: 3,
e.NodeID: 0,
a.NodeID: p2p.PeerScore(0),
b.NodeID: p2p.PeerScore(1),
c.NodeID: p2p.PeerScore(2),
d.NodeID: p2p.PeerScore(3),
e.NodeID: p2p.PeerScore(0),
},
PersistentPeers: []types.NodeID{c.NodeID, d.NodeID},
MaxConnected: 2,
@@ -520,7 +513,7 @@ func TestPeerManager_TryDialNext_MaxConnectedUpgrade(t *testing.T) {
added, err := peerManager.Add(a)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
dial := peerManager.TryDialNext()
require.NoError(t, err)
require.Equal(t, a, dial)
require.NoError(t, peerManager.Dialed(a))
@@ -529,8 +522,7 @@ func TestPeerManager_TryDialNext_MaxConnectedUpgrade(t *testing.T) {
added, err = peerManager.Add(b)
require.NoError(t, err)
require.True(t, added)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Equal(t, b, dial)
// Even though we are at capacity, we should be allowed to dial c for an
@@ -538,8 +530,7 @@ func TestPeerManager_TryDialNext_MaxConnectedUpgrade(t *testing.T) {
added, err = peerManager.Add(c)
require.NoError(t, err)
require.True(t, added)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Equal(t, c, dial)
// However, since we're using all upgrade slots now, we can't add and dial
@@ -547,24 +538,20 @@ func TestPeerManager_TryDialNext_MaxConnectedUpgrade(t *testing.T) {
added, err = peerManager.Add(d)
require.NoError(t, err)
require.True(t, added)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Zero(t, dial)
// We go through with c's upgrade.
require.NoError(t, peerManager.Dialed(c))
// Still can't dial d.
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Zero(t, dial)
// Now, if we disconnect a, we should be allowed to dial d because we have a
// free upgrade slot.
require.Error(t, peerManager.Dialed(d))
peerManager.Disconnected(a.NodeID)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
require.Equal(t, d, dial)
require.NoError(t, peerManager.Dialed(d))
// However, if we disconnect b (such that only c and d are connected), we
@@ -574,8 +561,7 @@ func TestPeerManager_TryDialNext_MaxConnectedUpgrade(t *testing.T) {
added, err = peerManager.Add(e)
require.NoError(t, err)
require.True(t, added)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Zero(t, dial)
}
@@ -585,7 +571,7 @@ func TestPeerManager_TryDialNext_UpgradeReservesPeer(t *testing.T) {
c := p2p.NodeAddress{Protocol: "memory", NodeID: types.NodeID(strings.Repeat("c", 40))}
peerManager, err := p2p.NewPeerManager(selfID, dbm.NewMemDB(), p2p.PeerManagerOptions{
PeerScores: map[types.NodeID]p2p.PeerScore{b.NodeID: 1, c.NodeID: 1},
PeerScores: map[types.NodeID]p2p.PeerScore{b.NodeID: p2p.PeerScore(1), c.NodeID: 1},
MaxConnected: 1,
MaxConnectedUpgrade: 2,
})
@@ -595,8 +581,7 @@ func TestPeerManager_TryDialNext_UpgradeReservesPeer(t *testing.T) {
added, err := peerManager.Add(a)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, a, dial)
require.NoError(t, peerManager.Dialed(a))
@@ -604,8 +589,7 @@ func TestPeerManager_TryDialNext_UpgradeReservesPeer(t *testing.T) {
added, err = peerManager.Add(b)
require.NoError(t, err)
require.True(t, added)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Equal(t, b, dial)
// Adding c and dialing it will fail, because a is the only connected
@@ -613,8 +597,7 @@ func TestPeerManager_TryDialNext_UpgradeReservesPeer(t *testing.T) {
added, err = peerManager.Add(c)
require.NoError(t, err)
require.True(t, added)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Empty(t, dial)
}
@@ -635,22 +618,19 @@ func TestPeerManager_TryDialNext_DialingConnected(t *testing.T) {
added, err := peerManager.Add(a)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, a, dial)
// Adding a's TCP address will not dispense a, since it's already dialing.
added, err = peerManager.Add(aTCP)
require.NoError(t, err)
require.True(t, added)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Zero(t, dial)
// Marking a as dialed will still not dispense it.
require.NoError(t, peerManager.Dialed(a))
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Zero(t, dial)
// Adding b and accepting a connection from it will not dispense it either.
@@ -658,8 +638,7 @@ func TestPeerManager_TryDialNext_DialingConnected(t *testing.T) {
require.NoError(t, err)
require.True(t, added)
require.NoError(t, peerManager.Accepted(bID))
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Zero(t, dial)
}
@@ -685,16 +664,14 @@ func TestPeerManager_TryDialNext_Multiple(t *testing.T) {
// All addresses should be dispensed as long as dialing them has failed.
dial := []p2p.NodeAddress{}
for range addresses {
address, err := peerManager.TryDialNext()
require.NoError(t, err)
address := peerManager.TryDialNext()
require.NotZero(t, address)
require.NoError(t, peerManager.DialFailed(address))
dial = append(dial, address)
}
require.ElementsMatch(t, dial, addresses)
address, err := peerManager.TryDialNext()
require.NoError(t, err)
address := peerManager.TryDialNext()
require.Zero(t, address)
}
@@ -716,15 +693,14 @@ func TestPeerManager_DialFailed(t *testing.T) {
// Dialing and then calling DialFailed with a different address (same
// NodeID) should unmark as dialing and allow us to dial the other address
// again, but not register the failed address.
dial, err := peerManager.TryDialNext()
dial := peerManager.TryDialNext()
require.NoError(t, err)
require.Equal(t, a, dial)
require.NoError(t, peerManager.DialFailed(p2p.NodeAddress{
Protocol: "tcp", NodeID: aID, Hostname: "localhost"}))
require.Equal(t, []p2p.NodeAddress{a}, peerManager.Addresses(aID))
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Equal(t, a, dial)
// Calling DialFailed on same address twice should be fine.
@@ -742,7 +718,10 @@ func TestPeerManager_DialFailed_UnreservePeer(t *testing.T) {
c := p2p.NodeAddress{Protocol: "memory", NodeID: types.NodeID(strings.Repeat("c", 40))}
peerManager, err := p2p.NewPeerManager(selfID, dbm.NewMemDB(), p2p.PeerManagerOptions{
PeerScores: map[types.NodeID]p2p.PeerScore{b.NodeID: 1, c.NodeID: 1},
PeerScores: map[types.NodeID]p2p.PeerScore{
b.NodeID: p2p.PeerScore(1),
c.NodeID: p2p.PeerScore(2),
},
MaxConnected: 1,
MaxConnectedUpgrade: 2,
})
@@ -752,8 +731,7 @@ func TestPeerManager_DialFailed_UnreservePeer(t *testing.T) {
added, err := peerManager.Add(a)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, a, dial)
require.NoError(t, peerManager.Dialed(a))
@@ -761,8 +739,7 @@ func TestPeerManager_DialFailed_UnreservePeer(t *testing.T) {
added, err = peerManager.Add(b)
require.NoError(t, err)
require.True(t, added)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Equal(t, b, dial)
// Adding c and dialing it will fail, even though it could upgrade a and we
@@ -771,14 +748,12 @@ func TestPeerManager_DialFailed_UnreservePeer(t *testing.T) {
added, err = peerManager.Add(c)
require.NoError(t, err)
require.True(t, added)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Empty(t, dial)
// Failing b's dial will now make c available for dialing.
require.NoError(t, peerManager.DialFailed(b))
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Equal(t, c, dial)
}
@@ -793,8 +768,7 @@ func TestPeerManager_Dialed_Connected(t *testing.T) {
added, err := peerManager.Add(a)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, a, dial)
require.NoError(t, peerManager.Dialed(a))
@@ -804,8 +778,7 @@ func TestPeerManager_Dialed_Connected(t *testing.T) {
added, err = peerManager.Add(b)
require.NoError(t, err)
require.True(t, added)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Equal(t, b, dial)
require.NoError(t, peerManager.Accepted(b.NodeID))
@@ -834,8 +807,7 @@ func TestPeerManager_Dialed_MaxConnected(t *testing.T) {
added, err := peerManager.Add(a)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, a, dial)
// Marking b as dialed in the meanwhile (even without TryDialNext)
@@ -858,7 +830,7 @@ func TestPeerManager_Dialed_MaxConnectedUpgrade(t *testing.T) {
peerManager, err := p2p.NewPeerManager(selfID, dbm.NewMemDB(), p2p.PeerManagerOptions{
MaxConnected: 2,
MaxConnectedUpgrade: 1,
PeerScores: map[types.NodeID]p2p.PeerScore{c.NodeID: 1, d.NodeID: 1},
PeerScores: map[types.NodeID]p2p.PeerScore{c.NodeID: p2p.PeerScore(1), d.NodeID: 1},
})
require.NoError(t, err)
@@ -877,8 +849,7 @@ func TestPeerManager_Dialed_MaxConnectedUpgrade(t *testing.T) {
added, err = peerManager.Add(c)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, c, dial)
require.NoError(t, peerManager.Dialed(c))
@@ -908,7 +879,7 @@ func TestPeerManager_Dialed_Upgrade(t *testing.T) {
peerManager, err := p2p.NewPeerManager(selfID, dbm.NewMemDB(), p2p.PeerManagerOptions{
MaxConnected: 1,
MaxConnectedUpgrade: 2,
PeerScores: map[types.NodeID]p2p.PeerScore{b.NodeID: 1, c.NodeID: 1},
PeerScores: map[types.NodeID]p2p.PeerScore{b.NodeID: p2p.PeerScore(1), c.NodeID: 1},
})
require.NoError(t, err)
@@ -922,8 +893,7 @@ func TestPeerManager_Dialed_Upgrade(t *testing.T) {
added, err = peerManager.Add(b)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, b, dial)
require.NoError(t, peerManager.Dialed(b))
@@ -932,8 +902,7 @@ func TestPeerManager_Dialed_Upgrade(t *testing.T) {
added, err = peerManager.Add(c)
require.NoError(t, err)
require.True(t, added)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Empty(t, dial)
// a should now be evicted.
@@ -952,10 +921,10 @@ func TestPeerManager_Dialed_UpgradeEvenLower(t *testing.T) {
MaxConnected: 2,
MaxConnectedUpgrade: 1,
PeerScores: map[types.NodeID]p2p.PeerScore{
a.NodeID: 3,
b.NodeID: 2,
c.NodeID: 10,
d.NodeID: 1,
a.NodeID: p2p.PeerScore(3),
b.NodeID: p2p.PeerScore(2),
c.NodeID: p2p.PeerScore(10),
d.NodeID: p2p.PeerScore(1),
},
})
require.NoError(t, err)
@@ -976,8 +945,7 @@ func TestPeerManager_Dialed_UpgradeEvenLower(t *testing.T) {
added, err = peerManager.Add(c)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, c, dial)
// In the meanwhile, a disconnects and d connects. d is even lower-scored
@@ -1005,9 +973,9 @@ func TestPeerManager_Dialed_UpgradeNoEvict(t *testing.T) {
MaxConnected: 2,
MaxConnectedUpgrade: 1,
PeerScores: map[types.NodeID]p2p.PeerScore{
a.NodeID: 1,
b.NodeID: 2,
c.NodeID: 3,
a.NodeID: p2p.PeerScore(1),
b.NodeID: p2p.PeerScore(2),
c.NodeID: p2p.PeerScore(3),
},
})
require.NoError(t, err)
@@ -1027,7 +995,7 @@ func TestPeerManager_Dialed_UpgradeNoEvict(t *testing.T) {
added, err = peerManager.Add(c)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
dial := peerManager.TryDialNext()
require.NoError(t, err)
require.Equal(t, c, dial)
@@ -1073,8 +1041,7 @@ func TestPeerManager_Accepted(t *testing.T) {
added, err = peerManager.Add(c)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, c, dial)
require.NoError(t, peerManager.Accepted(c.NodeID))
require.Error(t, peerManager.Dialed(c))
@@ -1083,8 +1050,7 @@ func TestPeerManager_Accepted(t *testing.T) {
added, err = peerManager.Add(d)
require.NoError(t, err)
require.True(t, added)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Equal(t, d, dial)
require.NoError(t, peerManager.Dialed(d))
require.Error(t, peerManager.Accepted(d.NodeID))
@@ -1126,8 +1092,8 @@ func TestPeerManager_Accepted_MaxConnectedUpgrade(t *testing.T) {
peerManager, err := p2p.NewPeerManager(selfID, dbm.NewMemDB(), p2p.PeerManagerOptions{
PeerScores: map[types.NodeID]p2p.PeerScore{
c.NodeID: 1,
d.NodeID: 2,
c.NodeID: p2p.PeerScore(1),
d.NodeID: p2p.PeerScore(2),
},
MaxConnected: 1,
MaxConnectedUpgrade: 1,
@@ -1171,8 +1137,8 @@ func TestPeerManager_Accepted_Upgrade(t *testing.T) {
peerManager, err := p2p.NewPeerManager(selfID, dbm.NewMemDB(), p2p.PeerManagerOptions{
PeerScores: map[types.NodeID]p2p.PeerScore{
b.NodeID: 1,
c.NodeID: 1,
b.NodeID: p2p.PeerScore(1),
c.NodeID: p2p.PeerScore(1),
},
MaxConnected: 1,
MaxConnectedUpgrade: 2,
@@ -1214,8 +1180,8 @@ func TestPeerManager_Accepted_UpgradeDialing(t *testing.T) {
peerManager, err := p2p.NewPeerManager(selfID, dbm.NewMemDB(), p2p.PeerManagerOptions{
PeerScores: map[types.NodeID]p2p.PeerScore{
b.NodeID: 1,
c.NodeID: 1,
b.NodeID: p2p.PeerScore(1),
c.NodeID: p2p.PeerScore(1),
},
MaxConnected: 1,
MaxConnectedUpgrade: 2,
@@ -1232,8 +1198,7 @@ func TestPeerManager_Accepted_UpgradeDialing(t *testing.T) {
added, err = peerManager.Add(b)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, b, dial)
// b has already been claimed as an upgrade of a, so accepting
@@ -1376,7 +1341,7 @@ func TestPeerManager_EvictNext_WakeOnUpgradeDialed(t *testing.T) {
peerManager, err := p2p.NewPeerManager(selfID, dbm.NewMemDB(), p2p.PeerManagerOptions{
MaxConnected: 1,
MaxConnectedUpgrade: 1,
PeerScores: map[types.NodeID]p2p.PeerScore{b.NodeID: 1},
PeerScores: map[types.NodeID]p2p.PeerScore{b.NodeID: p2p.PeerScore(1)},
})
require.NoError(t, err)
@@ -1393,8 +1358,7 @@ func TestPeerManager_EvictNext_WakeOnUpgradeDialed(t *testing.T) {
added, err := peerManager.Add(b)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, b, dial)
require.NoError(t, peerManager.Dialed(b))
}()
@@ -1414,7 +1378,9 @@ func TestPeerManager_EvictNext_WakeOnUpgradeAccepted(t *testing.T) {
peerManager, err := p2p.NewPeerManager(selfID, dbm.NewMemDB(), p2p.PeerManagerOptions{
MaxConnected: 1,
MaxConnectedUpgrade: 1,
PeerScores: map[types.NodeID]p2p.PeerScore{b.NodeID: 1},
PeerScores: map[types.NodeID]p2p.PeerScore{
b.NodeID: p2p.PeerScore(1),
},
})
require.NoError(t, err)
@@ -1518,13 +1484,11 @@ func TestPeerManager_Disconnected(t *testing.T) {
// Disconnecting a dialing peer does not unmark it as dialing, to avoid
// dialing it multiple times in parallel.
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, a, dial)
peerManager.Disconnected(a.NodeID)
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Zero(t, dial)
}
@@ -1592,8 +1556,7 @@ func TestPeerManager_Subscribe(t *testing.T) {
require.Equal(t, p2p.PeerUpdate{NodeID: a.NodeID, Status: p2p.PeerStatusDown}, <-sub.Updates())
// Outbound connection with peer error and eviction.
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, a, dial)
require.Empty(t, sub.Updates())
@@ -1616,8 +1579,7 @@ func TestPeerManager_Subscribe(t *testing.T) {
require.Equal(t, p2p.PeerUpdate{NodeID: a.NodeID, Status: p2p.PeerStatusDown}, <-sub.Updates())
// Outbound connection with dial failure.
dial, err = peerManager.TryDialNext()
require.NoError(t, err)
dial = peerManager.TryDialNext()
require.Equal(t, a, dial)
require.Empty(t, sub.Updates())
@@ -1713,8 +1675,7 @@ func TestPeerManager_Close(t *testing.T) {
added, err := peerManager.Add(a)
require.NoError(t, err)
require.True(t, added)
dial, err := peerManager.TryDialNext()
require.NoError(t, err)
dial := peerManager.TryDialNext()
require.Equal(t, a, dial)
require.NoError(t, peerManager.DialFailed(a))
@@ -1763,6 +1724,7 @@ func TestPeerManager_Advertise(t *testing.T) {
require.NoError(t, err)
require.True(t, added)
require.Len(t, peerManager.Advertise(dID, 100), 6)
// d should get all addresses.
require.ElementsMatch(t, []p2p.NodeAddress{
aTCP, aMem, bTCP, bMem, cTCP, cMem,
@@ -1776,10 +1738,24 @@ func TestPeerManager_Advertise(t *testing.T) {
// Asking for 0 addresses should return, well, 0.
require.Empty(t, peerManager.Advertise(aID, 0))
// Asking for 2 addresses should get the highest-rated ones, i.e. a.
require.ElementsMatch(t, []p2p.NodeAddress{
aTCP, aMem,
}, peerManager.Advertise(dID, 2))
// Asking for 2 addresses should get two addresses
// and usually not the lowest-ranked one.
numLowestRanked := 0
for i := 0; i < 100; i++ {
addrs := peerManager.Advertise(dID, 2)
require.Len(t, addrs, 2)
for _, addr := range addrs {
if dID == addr.NodeID {
t.Fatal("never advertise self")
}
if cID == addr.NodeID {
numLowestRanked++
}
}
}
if numLowestRanked > 20 {
t.Errorf("lowest ranked peer returned in results too often: %d", numLowestRanked)
}
}
func TestPeerManager_Advertise_Self(t *testing.T) {
@@ -1799,39 +1775,3 @@ func TestPeerManager_Advertise_Self(t *testing.T) {
self,
}, peerManager.Advertise(dID, 100))
}
func TestPeerManager_SetHeight_GetHeight(t *testing.T) {
a := p2p.NodeAddress{Protocol: "memory", NodeID: types.NodeID(strings.Repeat("a", 40))}
b := p2p.NodeAddress{Protocol: "memory", NodeID: types.NodeID(strings.Repeat("b", 40))}
db := dbm.NewMemDB()
peerManager, err := p2p.NewPeerManager(selfID, db, p2p.PeerManagerOptions{})
require.NoError(t, err)
// Getting a height should default to 0, for unknown peers and
// for known peers without height.
added, err := peerManager.Add(a)
require.NoError(t, err)
require.True(t, added)
require.EqualValues(t, 0, peerManager.GetHeight(a.NodeID))
require.EqualValues(t, 0, peerManager.GetHeight(b.NodeID))
// Setting a height should work for a known node.
require.NoError(t, peerManager.SetHeight(a.NodeID, 3))
require.EqualValues(t, 3, peerManager.GetHeight(a.NodeID))
// Setting a height should add an unknown node.
require.Equal(t, []types.NodeID{a.NodeID}, peerManager.Peers())
require.NoError(t, peerManager.SetHeight(b.NodeID, 7))
require.EqualValues(t, 7, peerManager.GetHeight(b.NodeID))
require.ElementsMatch(t, []types.NodeID{a.NodeID, b.NodeID}, peerManager.Peers())
// The heights should not be persisted.
peerManager.Close()
peerManager, err = p2p.NewPeerManager(selfID, db, p2p.PeerManagerOptions{})
require.NoError(t, err)
require.ElementsMatch(t, []types.NodeID{a.NodeID, b.NodeID}, peerManager.Peers())
require.Zero(t, peerManager.GetHeight(a.NodeID))
require.Zero(t, peerManager.GetHeight(b.NodeID))
}

View File

@@ -51,5 +51,5 @@ const (
// max addresses returned by GetSelection
// NOTE: this must match "maxMsgSize"
maxGetSelection = 250
maxGetSelection = 100
)

View File

@@ -102,12 +102,6 @@ type Reactor struct {
crawlPeerInfos map[types.NodeID]crawlPeerInfo
}
func (r *Reactor) minReceiveRequestInterval() time.Duration {
// NOTE: must be less than ensurePeersPeriod, otherwise we'll request
// peers too quickly from others and they'll think we're bad!
return r.ensurePeersPeriod / 3
}
// ReactorConfig holds reactor specific configuration data.
type ReactorConfig struct {
// Seed/Crawler mode
@@ -331,7 +325,7 @@ func (r *Reactor) receiveRequest(src Peer) error {
}
now := time.Now()
minInterval := r.minReceiveRequestInterval()
minInterval := minReceiveRequestInterval
if now.Sub(lastReceived) < minInterval {
return fmt.Errorf(
"peer (%v) sent next PEX request too soon. lastReceived: %v, now: %v, minInterval: %v. Disconnecting",

View File

@@ -10,7 +10,6 @@ import (
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/internal/p2p/conn"
"github.com/tendermint/tendermint/libs/log"
tmmath "github.com/tendermint/tendermint/libs/math"
"github.com/tendermint/tendermint/libs/service"
protop2p "github.com/tendermint/tendermint/proto/tendermint/p2p"
"github.com/tendermint/tendermint/types"
@@ -25,7 +24,7 @@ var (
// See https://github.com/tendermint/tendermint/issues/6371
const (
// the minimum time one peer can send another request to the same peer
minReceiveRequestInterval = 100 * time.Millisecond
minReceiveRequestInterval = 200 * time.Millisecond
// the maximum amount of addresses that can be included in a response
maxAddresses uint16 = 100
@@ -95,17 +94,10 @@ type ReactorV2 struct {
lastReceivedRequests map[types.NodeID]time.Time
// the time when another request will be sent
nextRequestTime time.Time
nextRequestInterval time.Duration
// keep track of how many new peers to existing peers we have received to
// extrapolate the size of the network
newPeers uint32
totalPeers uint32
// discoveryRatio is the inverse ratio of new peers to old peers squared.
// This is multiplied by the minimum duration to calculate how long to wait
// between each request.
discoveryRatio float32
// the total number of unique peers added
totalPeers int
}
// NewReactor returns a reference to a new reactor.
@@ -159,6 +151,7 @@ func (r *ReactorV2) OnStop() {
func (r *ReactorV2) processPexCh() {
defer r.pexCh.Close()
r.nextRequestInterval = minReceiveRequestInterval
for {
select {
case <-r.closeCh:
@@ -235,6 +228,7 @@ func (r *ReactorV2) handlePexMessage(envelope p2p.Envelope) error {
)
}
var numAdded int
for _, pexAddress := range msg.Addresses {
// no protocol is prefixed so we assume the default (mconn)
peerAddress, err := p2p.ParseNodeAddress(
@@ -247,11 +241,11 @@ func (r *ReactorV2) handlePexMessage(envelope p2p.Envelope) error {
logger.Error("failed to add PEX address", "address", peerAddress, "err", err)
}
if added {
r.newPeers++
numAdded++
logger.Debug("added PEX address", "address", peerAddress)
}
r.totalPeers++
}
r.calculateNextRequestTime(numAdded)
// V2 PEX MESSAGES
case *protop2p.PexRequestV2:
@@ -289,6 +283,7 @@ func (r *ReactorV2) handlePexMessage(envelope p2p.Envelope) error {
)
}
var numAdded int
for _, pexAddress := range msg.Addresses {
peerAddress, err := p2p.ParseNodeAddress(pexAddress.URL)
if err != nil {
@@ -299,11 +294,11 @@ func (r *ReactorV2) handlePexMessage(envelope p2p.Envelope) error {
logger.Error("failed to add V2 PEX address", "address", peerAddress, "err", err)
}
if added {
r.newPeers++
numAdded++
logger.Debug("added V2 PEX address", "address", peerAddress)
}
r.totalPeers++
}
r.calculateNextRequestTime(numAdded)
default:
return fmt.Errorf("received unknown message: %T", msg)
@@ -409,7 +404,7 @@ func (r *ReactorV2) processPeerUpdate(peerUpdate p2p.PeerUpdate) {
}
func (r *ReactorV2) waitUntilNextRequest() <-chan time.Time {
return time.After(time.Until(r.nextRequestTime))
return time.After(r.nextRequestInterval)
}
// sendRequestForPeers pops the first peerID off the list and sends the
@@ -421,14 +416,12 @@ func (r *ReactorV2) sendRequestForPeers() {
defer r.mtx.Unlock()
if len(r.availablePeers) == 0 {
// no peers are available
r.Logger.Debug("no available peers to send request to, waiting...")
r.nextRequestTime = time.Now().Add(noAvailablePeersWaitPeriod)
r.Logger.Debug("no available peers to send a PEX request to (retrying)")
return
}
var peerID types.NodeID
// use range to get a random peer.
// Select an arbitrary peer from the available set.
var peerID types.NodeID
for peerID = range r.availablePeers {
break
}
@@ -449,55 +442,49 @@ func (r *ReactorV2) sendRequestForPeers() {
// remove the peer from the available peers list and mark it in the requestsSent map
delete(r.availablePeers, peerID)
r.requestsSent[peerID] = struct{}{}
r.calculateNextRequestTime()
r.Logger.Debug("peer request sent", "next_request_time", r.nextRequestTime)
}
// calculateNextRequestTime implements something of a proportional controller
// to estimate how often the reactor should be requesting new peer addresses.
// The dependent variable in this calculation is the ratio of new peers to
// all peers that the reactor receives. The interval is thus calculated as the
// inverse squared. In the beginning, all peers should be new peers.
// We expect this ratio to be near 1 and thus the interval to be as short
// as possible. As the node becomes more familiar with the network the ratio of
// new nodes will plummet to a very small number, meaning the interval expands
// to its upper bound.
// CONTRACT: Must use a write lock as nextRequestTime is updated
func (r *ReactorV2) calculateNextRequestTime() {
// check if the peer store is full. If so then there is no need
// to send peer requests too often
// calculateNextRequestTime selects how long we should wait before attempting
// to send out another request for peer addresses.
//
// This implements a simplified proportional control mechanism to poll more
// often when our knowledge of the network is incomplete, and less often as our
// knowledge grows. To estimate our knowledge of the network, we use the
// fraction of "new" peers (addresses we have not previously seen) to the total
// so far observed. When we first join the network, this fraction will be close
// to 1, meaning most new peers are "new" to us, and as we discover more peers,
// the fraction will go toward zero.
//
// The minimum interval will be minReceiveRequestInterval to ensure we will not
// request from any peer more often than we would allow them to request from us.
func (r *ReactorV2) calculateNextRequestTime(added int) {
r.mtx.Lock()
defer r.mtx.Unlock()
r.totalPeers += added
// If the peer store is nearly full, wait the maximum interval.
if ratio := r.peerManager.PeerRatio(); ratio >= 0.95 {
r.Logger.Debug("peer manager near full ratio, sleeping...",
r.Logger.Debug("Peer manager is nearly full",
"sleep_period", fullCapacityInterval, "ratio", ratio)
r.nextRequestTime = time.Now().Add(fullCapacityInterval)
r.nextRequestInterval = fullCapacityInterval
return
}
// baseTime represents the shortest interval that we can send peer requests
// in. For example if we have 10 peers and we can't send a message to the
// same peer every 500ms, then we can send a request every 50ms. In practice
// we use a safety margin of 2, ergo 100ms
peers := tmmath.MinInt(len(r.availablePeers), 50)
baseTime := minReceiveRequestInterval
if peers > 0 {
baseTime = minReceiveRequestInterval * 2 / time.Duration(peers)
// If there are no available peers to query, poll less aggressively.
if len(r.availablePeers) == 0 {
r.Logger.Debug("No available peers to send a PEX request",
"sleep_period", noAvailablePeersWaitPeriod)
r.nextRequestInterval = noAvailablePeersWaitPeriod
return
}
if r.totalPeers > 0 || r.discoveryRatio == 0 {
// find the ratio of new peers. NOTE: We add 1 to both sides to avoid
// divide by zero problems
ratio := float32(r.totalPeers+1) / float32(r.newPeers+1)
// square the ratio in order to get non linear time intervals
// NOTE: The longest possible interval for a network with 100 or more peers
// where a node is connected to 50 of them is 2 minutes.
r.discoveryRatio = ratio * ratio
r.newPeers = 0
r.totalPeers = 0
}
// NOTE: As ratio is always >= 1, discovery ratio is >= 1. Therefore we don't need to worry
// about the next request time being less than the minimum time
r.nextRequestTime = time.Now().Add(baseTime * time.Duration(r.discoveryRatio))
// Reaching here, there are available peers to query and the peer store
// still has space. Estimate our knowledge of the network from the latest
// update and choose a new interval.
base := float64(minReceiveRequestInterval) / float64(len(r.availablePeers))
multiplier := float64(r.totalPeers+1) / float64(added+1) // +1 to avert zero division
r.nextRequestInterval = time.Duration(base*multiplier*multiplier) + minReceiveRequestInterval
}
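
For intuition about this controller, here is a standalone sketch that reproduces the formula with hypothetical inputs, assuming the 200ms minReceiveRequestInterval from this diff:

package main

import (
	"fmt"
	"time"
)

const minInterval = 200 * time.Millisecond // mirrors minReceiveRequestInterval

// nextInterval reproduces the calculation above, for illustration only.
func nextInterval(availablePeers, totalPeers, added int) time.Duration {
	base := float64(minInterval) / float64(availablePeers)
	multiplier := float64(totalPeers+1) / float64(added+1) // +1 to avert zero division
	return time.Duration(base*multiplier*multiplier) + minInterval
}

func main() {
	// Early on: most received addresses are new, so we poll near the minimum.
	fmt.Println(nextInterval(10, 20, 18)) // ~224ms
	// Later: few new addresses per response, so the interval stretches out.
	fmt.Println(nextInterval(10, 200, 2)) // ~90s
}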
func (r *ReactorV2) markPeerRequest(peer types.NodeID) error {

View File

@@ -1,6 +1,6 @@
// Temporarily disabled pending https://github.com/tendermint/tendermint/issues/7626.
//go:build issue7626
//+build issue7626
// +build issue7626
package pex_test
@@ -91,7 +91,7 @@ func TestReactorSendsRequestsTooOften(t *testing.T) {
peerErr := <-r.pexErrCh
require.Error(t, peerErr.Err)
require.Empty(t, r.pexOutCh)
require.Contains(t, peerErr.Err.Error(), "peer sent a request too close after a prior one")
require.Contains(t, peerErr.Err.Error(), "sent PEX request too soon")
require.Equal(t, badNode, peerErr.NodeID)
}

View File

@@ -156,6 +156,7 @@ func (s *pqScheduler) start() {
func (s *pqScheduler) process() {
defer s.done.Close()
LOOP:
for {
select {
case e := <-s.enqueueCh:
@@ -247,21 +248,24 @@ func (s *pqScheduler) process() {
for s.pq.Len() > 0 {
pqEnv = heap.Pop(s.pq).(*pqEnvelope)
s.size -= pqEnv.size
// deduct the Envelope size from all the relevant cumulative sizes
for i := 0; i < len(s.chDescs) && pqEnv.priority <= uint(s.chDescs[i].Priority); i++ {
s.sizes[uint(s.chDescs[i].Priority)] -= pqEnv.size
}
s.metrics.PeerSendBytesTotal.With(
"chID", chIDStr,
"peer_id", string(pqEnv.envelope.To),
"message_type", s.metrics.ValueToMetricLabel(pqEnv.envelope.Message)).Add(float64(pqEnv.size))
select {
case s.dequeueCh <- pqEnv.envelope:
s.size -= pqEnv.size
// deduct the Envelope size from all the relevant cumulative sizes
for i := 0; i < len(s.chDescs) && pqEnv.priority <= uint(s.chDescs[i].Priority); i++ {
s.sizes[uint(s.chDescs[i].Priority)] -= pqEnv.size
}
s.metrics.PeerSendBytesTotal.With(
"chID", chIDStr,
"peer_id", string(pqEnv.envelope.To),
"message_type", s.metrics.ValueToMetricLabel(pqEnv.envelope.Message)).Add(float64(pqEnv.size))
case <-s.closer.Done():
return
default:
heap.Push(s.pq, pqEnv)
continue LOOP
}
}
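
The rewritten loop above avoids dropping an envelope when the consumer is not ready: it attempts a non-blocking send and, if that would block, pushes the envelope back onto the heap. A simplified standalone sketch of the same pattern with stand-in types:

package main

import (
	"container/heap"
	"fmt"
)

// maxHeap is a max-heap of ints, standing in for the envelope priority queue.
type maxHeap []int

func (h maxHeap) Len() int            { return len(h) }
func (h maxHeap) Less(i, j int) bool  { return h[i] > h[j] }
func (h maxHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *maxHeap) Push(x interface{}) { *h = append(*h, x.(int)) }
func (h *maxHeap) Pop() interface{} {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

func main() {
	pq := &maxHeap{3, 1, 4, 1, 5}
	heap.Init(pq)
	out := make(chan int, 2) // a slow consumer: only two slots available

LOOP:
	for pq.Len() > 0 {
		v := heap.Pop(pq).(int)
		select {
		case out <- v: // delivered; size bookkeeping would happen here
		default:
			heap.Push(pq, v) // consumer busy: re-queue and wait for the next pass
			break LOOP
		}
	}
	close(out)
	for v := range out {
		fmt.Println("delivered", v)
	}
}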

View File

@@ -5,7 +5,6 @@ import (
"errors"
"fmt"
"io"
"math/rand"
"net"
"runtime"
"sync"
@@ -54,6 +53,7 @@ type Envelope struct {
type PeerError struct {
NodeID types.NodeID
Err error
Fatal bool
}
// Channel is a bidirectional channel to exchange Protobuf messages with peers,
@@ -159,12 +159,6 @@ type RouterOptions struct {
// return an error to reject the peer.
FilterPeerByID func(context.Context, types.NodeID) error
// DialSleep controls the amount of time that the router
// sleeps between dialing peers. If not set, a default value
// is used that sleeps for a (random) amount of time up to 3
// seconds between submitting each peer to be dialed.
DialSleep func(context.Context)
// NumConcurrentDials controls how many parallel goroutines
// are used to dial peers. This defaults to the value of
// runtime.NumCPU.
@@ -290,7 +284,7 @@ func NewRouter(
router := &Router{
logger: logger,
metrics: metrics,
metrics: NopMetrics(),
nodeInfo: nodeInfo,
privKey: privKey,
connTracker: newConnTracker(
@@ -311,6 +305,10 @@ func NewRouter(
router.BaseService = service.NewBaseService(logger, "router", router)
if metrics != nil {
router.metrics = metrics
}
qf, err := router.createQueueFactory()
if err != nil {
return nil, err
@@ -421,11 +419,7 @@ func (r *Router) routeChannel(
) {
for {
select {
case envelope, ok := <-outCh:
if !ok {
return
}
case envelope := <-outCh:
// Mark the envelope with the channel ID to allow sendPeer() to pass
// it on to Transport.SendMessage().
envelope.channelID = chID
@@ -502,24 +496,36 @@ func (r *Router) routeChannel(
}
}
case peerError, ok := <-errCh:
if !ok {
return
case peerError := <-errCh:
maxPeerCapacity := r.peerManager.HasMaxPeerCapacity()
r.logger.Error("peer error",
"peer", peerError.NodeID,
"err", peerError.Err,
"disconnecting", peerError.Fatal || maxPeerCapacity,
)
if peerError.Fatal || maxPeerCapacity {
// if the error is fatal or all peer
// slots are in use, we can error
// (disconnect) from the peer.
r.peerManager.Errored(peerError.NodeID, peerError.Err)
} else {
// this just decrements the peer
// score.
r.peerManager.processPeerEvent(PeerUpdate{
NodeID: peerError.NodeID,
Status: PeerStatusBad,
})
}
r.logger.Error("peer error, evicting", "peer", peerError.NodeID, "err", peerError.Err)
r.peerManager.Errored(peerError.NodeID, peerError.Err)
case <-r.stopCh:
return
}
}
}
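
The new Fatal field on PeerError (earlier in this diff) lets this handler distinguish errors that must evict a peer from errors that merely lower its score. A standalone sketch of that classification, with hypothetical names rather than the real reactor API:

package main

import (
	"errors"
	"fmt"
)

// peerError mirrors the fields of p2p.PeerError for illustration.
type peerError struct {
	NodeID string
	Err    error
	Fatal  bool
}

// handle mimics the routing decision above: fatal errors (or a full peer set)
// disconnect the peer, anything else only decrements its score.
func handle(pe peerError, atCapacity bool) string {
	if pe.Fatal || atCapacity {
		return "evict " + pe.NodeID
	}
	return "decrement score of " + pe.NodeID
}

func main() {
	fmt.Println(handle(peerError{"kip", errors.New("malformed message"), false}, false))
	fmt.Println(handle(peerError{"kip", errors.New("conflicting votes"), true}, false))
}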
func (r *Router) numConccurentDials() int {
func (r *Router) numConcurrentDials() int {
if r.options.NumConcurrentDials == nil {
return runtime.NumCPU()
return runtime.NumCPU() * 32
}
return r.options.NumConcurrentDials()
@@ -541,23 +547,6 @@ func (r *Router) filterPeersID(ctx context.Context, id types.NodeID) error {
return r.options.FilterPeerByID(ctx, id)
}
func (r *Router) dialSleep(ctx context.Context) {
if r.options.DialSleep == nil {
// nolint:gosec // G404: Use of weak random number generator
timer := time.NewTimer(time.Duration(rand.Int63n(dialRandomizerIntervalMilliseconds)) * time.Millisecond)
defer timer.Stop()
select {
case <-ctx.Done():
case <-timer.C:
}
return
}
r.options.DialSleep(ctx)
}
// acceptPeers accepts inbound connections from peers on the given transport,
// and spawns goroutines that route messages to/from them.
func (r *Router) acceptPeers(transport Transport) {
@@ -565,14 +554,14 @@ func (r *Router) acceptPeers(transport Transport) {
ctx := r.stopCtx()
for {
conn, err := transport.Accept()
switch err {
case nil:
case io.EOF:
r.logger.Debug("stopping accept routine", "transport", transport)
switch {
case errors.Is(err, io.EOF):
r.logger.Debug("stopping accept routine", "transport", transport, "err", "EOF")
return
default:
case err != nil:
// in this case we got an error from the net.Listener.
r.logger.Error("failed to accept connection", "transport", transport, "err", err)
return
continue
}
incomingIP := conn.RemoteEndpoint().IP
@@ -584,7 +573,7 @@ func (r *Router) acceptPeers(transport Transport) {
"close_err", closeErr,
)
return
continue
}
// Spawn a goroutine for the handshake, to avoid head-of-line blocking.
@@ -656,7 +645,7 @@ func (r *Router) dialPeers() {
// able to add peers at a reasonable pace, though the number
// is somewhat arbitrary. The action is further throttled by a
// sleep after sending to the addresses channel.
for i := 0; i < r.numConccurentDials(); i++ {
for i := 0; i < r.numConcurrentDials(); i++ {
wg.Add(1)
go func() {
defer wg.Done()
@@ -679,19 +668,13 @@ LOOP:
case errors.Is(err, context.Canceled):
r.logger.Debug("stopping dial routine")
break LOOP
case err != nil:
r.logger.Error("failed to find next peer to dial", "err", err)
break LOOP
case address == NodeAddress{}:
continue LOOP
}
select {
case addresses <- address:
// this jitters the frequency that we call
// DialNext and prevents us from attempting to
// create connections too quickly.
r.dialSleep(ctx)
continue
continue LOOP
case <-ctx.Done():
close(addresses)
break LOOP
@@ -707,7 +690,7 @@ func (r *Router) connectPeer(ctx context.Context, address NodeAddress) {
case errors.Is(err, context.Canceled):
return
case err != nil:
r.logger.Error("failed to dial peer", "peer", address, "err", err)
r.logger.Debug("failed to dial peer", "peer", address, "err", err)
if err = r.peerManager.DialFailed(address); err != nil {
r.logger.Error("failed to report dial failure", "peer", address, "err", err)
}
@@ -729,8 +712,8 @@ func (r *Router) connectPeer(ctx context.Context, address NodeAddress) {
}
if err := r.runWithPeerMutex(func() error { return r.peerManager.Dialed(address) }); err != nil {
r.logger.Error("failed to dial peer",
"op", "outgoing/dialing", "peer", address.NodeID, "err", err)
r.logger.Error("failed to dial peer", "op", "outgoing/dialing", "peer", address.NodeID, "err", err)
r.peerManager.dialWaker.Wake()
conn.Close()
return
}
@@ -794,12 +777,13 @@ func (r *Router) dialPeer(ctx context.Context, address NodeAddress) (Connection,
// Internet can't and needs a different public address.
conn, err := transport.Dial(dialCtx, endpoint)
if err != nil {
r.logger.Error("failed to dial endpoint", "peer", address.NodeID, "endpoint", endpoint, "err", err)
r.logger.Debug("failed to dial endpoint", "peer", address.NodeID, "endpoint", endpoint, "err", err)
} else {
r.logger.Debug("dialed peer", "peer", address.NodeID, "endpoint", endpoint)
return conn, nil
}
}
return nil, errors.New("all endpoints failed")
}
@@ -811,19 +795,14 @@ func (r *Router) handshakePeer(
expectID types.NodeID,
) (types.NodeInfo, crypto.PubKey, error) {
if r.options.HandshakeTimeout > 0 {
var cancel context.CancelFunc
ctx, cancel = context.WithTimeout(ctx, r.options.HandshakeTimeout)
defer cancel()
}
peerInfo, peerKey, err := conn.Handshake(ctx, r.nodeInfo, r.privKey)
peerInfo, peerKey, err := conn.Handshake(ctx, r.options.HandshakeTimeout, r.nodeInfo, r.privKey)
if err != nil {
return peerInfo, peerKey, err
}
if err = peerInfo.Validate(); err != nil {
return peerInfo, peerKey, fmt.Errorf("invalid handshake NodeInfo: %w", err)
}
if types.NodeIDFromPubKey(peerKey) != peerInfo.NodeID {
return peerInfo, peerKey, fmt.Errorf("peer's public key did not match its node ID %q (expected %q)",
peerInfo.NodeID, types.NodeIDFromPubKey(peerKey))
@@ -832,7 +811,12 @@ func (r *Router) handshakePeer(
return peerInfo, peerKey, fmt.Errorf("expected to connect with peer %q, got %q",
expectID, peerInfo.NodeID)
}
if err := r.nodeInfo.CompatibleWith(peerInfo); err != nil {
if err := r.peerManager.Inactivate(peerInfo.NodeID); err != nil {
return peerInfo, peerKey, fmt.Errorf("problem inactivating peer %q: %w", peerInfo.ID(), err)
}
return peerInfo, peerKey, ErrRejected{
err: err,
id: peerInfo.ID(),
@@ -1011,6 +995,8 @@ func (r *Router) evictPeers() {
queue, ok := r.peerQueues[peerID]
r.peerMtx.RUnlock()
r.metrics.PeersEvicted.Add(1)
if ok {
queue.close()
}

View File

@@ -1,7 +1,6 @@
package p2p_test
import (
"context"
"errors"
"fmt"
"io"
@@ -133,13 +132,6 @@ func TestRouter_Channel_Basic(t *testing.T) {
require.NoError(t, err)
require.Contains(t, router.NodeInfo().Channels, chDesc2.ID)
// Closing the channel, then opening it again should be fine.
channel.Close()
time.Sleep(100 * time.Millisecond) // yes yes, but Close() is async...
channel, err = router.OpenChannel(chDesc, &p2ptest.Message{}, 0)
require.NoError(t, err)
// We should be able to send on the channel, even though there are no peers.
p2ptest.RequireSend(t, channel, p2p.Envelope{
To: types.NodeID(strings.Repeat("a", 40)),
@@ -352,7 +344,7 @@ func TestRouter_AcceptPeers(t *testing.T) {
closer := tmsync.NewCloser()
mockConnection := &mocks.Connection{}
mockConnection.On("String").Maybe().Return("mock")
mockConnection.On("Handshake", mock.Anything, selfInfo, selfKey).
mockConnection.On("Handshake", mock.Anything, mock.Anything, selfInfo, selfKey).
Return(tc.peerInfo, tc.peerKey, nil)
mockConnection.On("Close").Run(func(_ mock.Arguments) { closer.Close() }).Return(nil)
mockConnection.On("RemoteEndpoint").Return(p2p.Endpoint{})
@@ -413,72 +405,42 @@ func TestRouter_AcceptPeers(t *testing.T) {
}
}
func TestRouter_AcceptPeers_Error(t *testing.T) {
t.Cleanup(leaktest.Check(t))
func TestRouter_AcceptPeers_Errors(t *testing.T) {
for _, err := range []error{io.EOF} {
t.Run(err.Error(), func(t *testing.T) {
t.Cleanup(leaktest.Check(t))
// Set up a mock transport that returns an error, which should prevent
// the router from calling Accept again.
mockTransport := &mocks.Transport{}
mockTransport.On("String").Maybe().Return("mock")
mockTransport.On("Protocols").Return([]p2p.Protocol{"mock"})
mockTransport.On("Accept").Once().Return(nil, errors.New("boom"))
mockTransport.On("Close").Return(nil)
// Set up a mock transport that returns io.EOF once, which should prevent
// the router from calling Accept again.
mockTransport := &mocks.Transport{}
mockTransport.On("String").Maybe().Return("mock")
mockTransport.On("Accept", mock.Anything).Once().Return(nil, err)
mockTransport.On("Listen", mock.Anything).Return(nil).Maybe()
mockTransport.On("Close").Return(nil)
mockTransport.On("Protocols").Return([]p2p.Protocol{"mock"})
// Set up and start the router.
peerManager, err := p2p.NewPeerManager(selfID, dbm.NewMemDB(), p2p.PeerManagerOptions{})
require.NoError(t, err)
// Set up and start the router.
peerManager, err := p2p.NewPeerManager(selfID, dbm.NewMemDB(), p2p.PeerManagerOptions{})
require.NoError(t, err)
defer peerManager.Close()
router, err := p2p.NewRouter(
log.TestingLogger(),
p2p.NopMetrics(),
selfInfo,
selfKey,
peerManager,
[]p2p.Transport{mockTransport},
p2p.RouterOptions{},
)
require.NoError(t, err)
router, err := p2p.NewRouter(
log.TestingLogger(),
p2p.NopMetrics(),
selfInfo,
selfKey,
peerManager,
[]p2p.Transport{mockTransport},
p2p.RouterOptions{},
)
require.NoError(t, err)
require.NoError(t, router.Start())
time.Sleep(time.Second)
require.NoError(t, router.Stop())
require.NoError(t, router.Start())
time.Sleep(time.Second)
require.NoError(t, router.Stop())
mockTransport.AssertExpectations(t)
mockTransport.AssertExpectations(t)
}
func TestRouter_AcceptPeers_ErrorEOF(t *testing.T) {
t.Cleanup(leaktest.Check(t))
// Set up a mock transport that returns io.EOF once, which should prevent
// the router from calling Accept again.
mockTransport := &mocks.Transport{}
mockTransport.On("String").Maybe().Return("mock")
mockTransport.On("Protocols").Return([]p2p.Protocol{"mock"})
mockTransport.On("Accept").Once().Return(nil, io.EOF)
mockTransport.On("Close").Return(nil)
// Set up and start the router.
peerManager, err := p2p.NewPeerManager(selfID, dbm.NewMemDB(), p2p.PeerManagerOptions{})
require.NoError(t, err)
defer peerManager.Close()
router, err := p2p.NewRouter(
log.TestingLogger(),
p2p.NopMetrics(),
selfInfo,
selfKey,
peerManager,
[]p2p.Transport{mockTransport},
p2p.RouterOptions{},
)
require.NoError(t, err)
require.NoError(t, router.Start())
time.Sleep(time.Second)
require.NoError(t, router.Stop())
mockTransport.AssertExpectations(t)
})
}
}
func TestRouter_AcceptPeers_HeadOfLineBlocking(t *testing.T) {
@@ -492,7 +454,7 @@ func TestRouter_AcceptPeers_HeadOfLineBlocking(t *testing.T) {
mockConnection := &mocks.Connection{}
mockConnection.On("String").Maybe().Return("mock")
mockConnection.On("Handshake", mock.Anything, selfInfo, selfKey).
mockConnection.On("Handshake", mock.Anything, mock.Anything, selfInfo, selfKey).
WaitUntil(closeCh).Return(types.NodeInfo{}, nil, io.EOF)
mockConnection.On("Close").Return(nil)
mockConnection.On("RemoteEndpoint").Return(p2p.Endpoint{})
@@ -573,7 +535,7 @@ func TestRouter_DialPeers(t *testing.T) {
mockConnection := &mocks.Connection{}
mockConnection.On("String").Maybe().Return("mock")
if tc.dialErr == nil {
mockConnection.On("Handshake", mock.Anything, selfInfo, selfKey).
mockConnection.On("Handshake", mock.Anything, mock.Anything, selfInfo, selfKey).
Return(tc.peerInfo, tc.peerKey, nil)
mockConnection.On("Close").Run(func(_ mock.Arguments) { closer.Close() }).Return(nil)
}
@@ -660,7 +622,7 @@ func TestRouter_DialPeers_Parallel(t *testing.T) {
mockConnection := &mocks.Connection{}
mockConnection.On("String").Maybe().Return("mock")
mockConnection.On("Handshake", mock.Anything, selfInfo, selfKey).
mockConnection.On("Handshake", mock.Anything, mock.Anything, selfInfo, selfKey).
WaitUntil(closeCh).Return(types.NodeInfo{}, nil, io.EOF)
mockConnection.On("Close").Return(nil)
@@ -701,7 +663,6 @@ func TestRouter_DialPeers_Parallel(t *testing.T) {
peerManager,
[]p2p.Transport{mockTransport},
p2p.RouterOptions{
DialSleep: func(_ context.Context) {},
NumConcurrentDials: func() int {
ncpu := runtime.NumCPU()
if ncpu <= 3 {
@@ -740,7 +701,7 @@ func TestRouter_EvictPeers(t *testing.T) {
mockConnection := &mocks.Connection{}
mockConnection.On("String").Maybe().Return("mock")
mockConnection.On("Handshake", mock.Anything, selfInfo, selfKey).
mockConnection.On("Handshake", mock.Anything, mock.Anything, selfInfo, selfKey).
Return(peerInfo, peerKey.PubKey(), nil)
mockConnection.On("ReceiveMessage").WaitUntil(closeCh).Return(chID, nil, io.EOF)
mockConnection.On("RemoteEndpoint").Return(p2p.Endpoint{})
@@ -809,7 +770,7 @@ func TestRouter_ChannelCompatability(t *testing.T) {
mockConnection := &mocks.Connection{}
mockConnection.On("String").Maybe().Return("mock")
mockConnection.On("Handshake", mock.Anything, selfInfo, selfKey).
mockConnection.On("Handshake", mock.Anything, mock.Anything, selfInfo, selfKey).
Return(incompatiblePeer, peerKey.PubKey(), nil)
mockConnection.On("RemoteEndpoint").Return(p2p.Endpoint{})
mockConnection.On("Close").Return(nil)
@@ -858,7 +819,7 @@ func TestRouter_DontSendOnInvalidChannel(t *testing.T) {
mockConnection := &mocks.Connection{}
mockConnection.On("String").Maybe().Return("mock")
mockConnection.On("Handshake", mock.Anything, selfInfo, selfKey).
mockConnection.On("Handshake", mock.Anything, mock.Anything, selfInfo, selfKey).
Return(peer, peerKey.PubKey(), nil)
mockConnection.On("RemoteEndpoint").Return(p2p.Endpoint{})
mockConnection.On("Close").Return(nil)

View File

@@ -417,7 +417,7 @@ func (sw *Switch) stopAndRemovePeer(peer Peer, reason interface{}) {
// RemovePeer is finished.
// https://github.com/tendermint/tendermint/issues/3338
if sw.peers.Remove(peer) {
sw.metrics.Peers.Add(float64(-1))
sw.metrics.Peers.Add(-1)
}
sw.conns.RemoveAddr(peer.RemoteAddr())
@@ -865,11 +865,11 @@ func (sw *Switch) handshakePeer(
c Connection,
expectPeerID types.NodeID,
) (types.NodeInfo, crypto.PubKey, error) {
// Moved from transport and hardcoded until legacy P2P stack removal.
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
peerInfo, peerKey, err := c.Handshake(ctx, sw.nodeInfo, sw.nodeKey.PrivKey)
// Moved timeout from transport and hardcoded until legacy P2P stack removal.
peerInfo, peerKey, err := c.Handshake(ctx, 5*time.Second, sw.nodeInfo, sw.nodeKey.PrivKey)
if err != nil {
return peerInfo, peerKey, ErrRejected{
conn: c.(*mConnConnection).conn,
@@ -1035,7 +1035,7 @@ func (sw *Switch) addPeer(p Peer) error {
if err := sw.peers.Add(p); err != nil {
return err
}
sw.metrics.Peers.Add(float64(1))
sw.metrics.Peers.Add(1)
// Start all the reactor protocols on the peer.
for _, reactor := range sw.reactors {

View File

@@ -267,7 +267,7 @@ func TestSwitchPeerFilter(t *testing.T) {
if err != nil {
t.Fatal(err)
}
peerInfo, _, err := c.Handshake(ctx, sw.nodeInfo, sw.nodeKey.PrivKey)
peerInfo, _, err := c.Handshake(ctx, 0, sw.nodeInfo, sw.nodeKey.PrivKey)
if err != nil {
t.Fatal(err)
}
@@ -324,7 +324,7 @@ func TestSwitchPeerFilterTimeout(t *testing.T) {
if err != nil {
t.Fatal(err)
}
peerInfo, _, err := c.Handshake(ctx, sw.nodeInfo, sw.nodeKey.PrivKey)
peerInfo, _, err := c.Handshake(ctx, 0, sw.nodeInfo, sw.nodeKey.PrivKey)
if err != nil {
t.Fatal(err)
}
@@ -360,7 +360,7 @@ func TestSwitchPeerFilterDuplicate(t *testing.T) {
if err != nil {
t.Fatal(err)
}
peerInfo, _, err := c.Handshake(ctx, sw.nodeInfo, sw.nodeKey.PrivKey)
peerInfo, _, err := c.Handshake(ctx, 0, sw.nodeInfo, sw.nodeKey.PrivKey)
if err != nil {
t.Fatal(err)
}
@@ -415,7 +415,7 @@ func TestSwitchStopsNonPersistentPeerOnError(t *testing.T) {
if err != nil {
t.Fatal(err)
}
peerInfo, _, err := c.Handshake(ctx, sw.nodeInfo, sw.nodeKey.PrivKey)
peerInfo, _, err := c.Handshake(ctx, 0, sw.nodeInfo, sw.nodeKey.PrivKey)
if err != nil {
t.Fatal(err)
}

View File

@@ -126,7 +126,7 @@ func (sw *Switch) addPeerWithConnection(conn net.Conn) error {
}
return err
}
peerNodeInfo, _, err := pc.conn.Handshake(context.Background(), sw.nodeInfo, sw.nodeKey.PrivKey)
peerNodeInfo, _, err := pc.conn.Handshake(context.Background(), 0, sw.nodeInfo, sw.nodeKey.PrivKey)
if err != nil {
if err := conn.Close(); err != nil {
sw.Logger.Error("Error closing connection", "err", err)

View File

@@ -5,6 +5,7 @@ import (
"errors"
"fmt"
"net"
"time"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/internal/p2p/conn"
@@ -84,7 +85,7 @@ type Connection interface {
// FIXME: The handshake should really be the Router's responsibility, but
// that requires the connection interface to be byte-oriented rather than
// message-oriented (see comment above).
Handshake(context.Context, types.NodeInfo, crypto.PrivKey) (types.NodeInfo, crypto.PubKey, error)
Handshake(context.Context, time.Duration, types.NodeInfo, crypto.PrivKey) (types.NodeInfo, crypto.PubKey, error)
// ReceiveMessage returns the next message received on the connection,
// blocking until one is available. Returns io.EOF if closed.
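With the interface change above, each `Connection` implementation decides how to honor the duration. The two implementations later in this diff (`mConnConnection` and `MemoryConnection`) inline the same guard; a small helper one could factor out is sketched here under the same semantics (a non-positive timeout adds no deadline):

``` go
package p2p

import (
	"context"
	"time"
)

// withHandshakeTimeout derives the context a Handshake implementation should
// run under: the parent context plus an optional deadline. A zero or negative
// timeout adds no deadline, matching the guards in the hunks further down;
// for symmetry it still returns a cancelable copy in that case.
func withHandshakeTimeout(ctx context.Context, timeout time.Duration) (context.Context, context.CancelFunc) {
	if timeout > 0 {
		return context.WithTimeout(ctx, timeout)
	}
	return context.WithCancel(ctx)
}
```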

View File

@@ -9,6 +9,7 @@ import (
"net"
"strconv"
"sync"
"time"
"golang.org/x/net/netutil"
@@ -255,6 +256,7 @@ func newMConnConnection(
// Handshake implements Connection.
func (c *mConnConnection) Handshake(
ctx context.Context,
timeout time.Duration,
nodeInfo types.NodeInfo,
privKey crypto.PrivKey,
) (types.NodeInfo, crypto.PubKey, error) {
@@ -264,6 +266,12 @@ func (c *mConnConnection) Handshake(
peerKey crypto.PubKey
errCh = make(chan error, 1)
)
handshakeCtx := ctx
if timeout > 0 {
var cancel context.CancelFunc
handshakeCtx, cancel = context.WithTimeout(ctx, timeout)
defer cancel()
}
// To handle context cancellation, we need to do the handshake in a
// goroutine and abort the blocking network calls by closing the connection
// when the context is canceled.
@@ -276,12 +284,17 @@ func (c *mConnConnection) Handshake(
}
}()
var err error
mconn, peerInfo, peerKey, err = c.handshake(ctx, nodeInfo, privKey)
errCh <- err
mconn, peerInfo, peerKey, err = c.handshake(handshakeCtx, nodeInfo, privKey)
select {
case errCh <- err:
case <-handshakeCtx.Done():
}
}()
select {
case <-ctx.Done():
case <-handshakeCtx.Done():
_ = c.Close()
return types.NodeInfo{}, nil, ctx.Err()
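The hunk above also tightens the goroutine that performs the blocking handshake: it reports its error with a select so it can never block forever, and the outer select closes the connection when the (possibly deadline-bearing) context ends so the blocked I/O returns. The same shape as a standalone sketch, with a hypothetical `runWithAbort` helper:

``` go
package p2p

import (
	"context"
	"net"
)

// runWithAbort is an illustrative helper showing the pattern above: run
// blocking work in a goroutine, and if the context ends first, close the
// connection so the work's network calls unblock. The buffered channel plus
// the guarded send keep the goroutine from leaking either way.
func runWithAbort(ctx context.Context, conn net.Conn, work func() error) error {
	errCh := make(chan error, 1)
	go func() {
		err := work()
		select {
		case errCh <- err:
		case <-ctx.Done():
		}
	}()

	select {
	case <-ctx.Done():
		_ = conn.Close() // aborts the goroutine's blocking reads/writes
		return ctx.Err()
	case err := <-errCh:
		return err
	}
}
```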

View File

@@ -7,6 +7,7 @@ import (
"io"
"net"
"sync"
"time"
"github.com/tendermint/tendermint/crypto"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
@@ -270,9 +271,16 @@ func (c *MemoryConnection) Status() conn.ConnectionStatus {
// Handshake implements Connection.
func (c *MemoryConnection) Handshake(
ctx context.Context,
timeout time.Duration,
nodeInfo types.NodeInfo,
privKey crypto.PrivKey,
) (types.NodeInfo, crypto.PubKey, error) {
if timeout > 0 {
var cancel context.CancelFunc
ctx, cancel = context.WithTimeout(ctx, timeout)
defer cancel()
}
select {
case c.sendCh <- memoryMessage{nodeInfo: &nodeInfo, pubKey: privKey.PubKey()}:
c.logger.Debug("sent handshake", "nodeInfo", nodeInfo)

View File

@@ -265,7 +265,7 @@ func TestConnection_Handshake(t *testing.T) {
errCh := make(chan error, 1)
go func() {
// Must use assert due to goroutine.
peerInfo, peerKey, err := ba.Handshake(ctx, bInfo, bKey)
peerInfo, peerKey, err := ba.Handshake(ctx, 0, bInfo, bKey)
if err == nil {
assert.Equal(t, aInfo, peerInfo)
assert.Equal(t, aKey.PubKey(), peerKey)
@@ -273,7 +273,7 @@ func TestConnection_Handshake(t *testing.T) {
errCh <- err
}()
peerInfo, peerKey, err := ab.Handshake(ctx, aInfo, aKey)
peerInfo, peerKey, err := ab.Handshake(ctx, 0, aInfo, aKey)
require.NoError(t, err)
require.Equal(t, bInfo, peerInfo)
require.Equal(t, bKey.PubKey(), peerKey)
@@ -291,7 +291,7 @@ func TestConnection_HandshakeCancel(t *testing.T) {
ab, ba := dialAccept(t, a, b)
timeoutCtx, cancel := context.WithTimeout(ctx, 1*time.Minute)
cancel()
_, _, err := ab.Handshake(timeoutCtx, types.NodeInfo{}, ed25519.GenPrivKey())
_, _, err := ab.Handshake(timeoutCtx, 0, types.NodeInfo{}, ed25519.GenPrivKey())
require.Error(t, err)
require.Equal(t, context.Canceled, err)
_ = ab.Close()
@@ -301,7 +301,7 @@ func TestConnection_HandshakeCancel(t *testing.T) {
ab, ba = dialAccept(t, a, b)
timeoutCtx, cancel = context.WithTimeout(ctx, 200*time.Millisecond)
defer cancel()
_, _, err = ab.Handshake(timeoutCtx, types.NodeInfo{}, ed25519.GenPrivKey())
_, _, err = ab.Handshake(timeoutCtx, 0, types.NodeInfo{}, ed25519.GenPrivKey())
require.Error(t, err)
require.Equal(t, context.DeadlineExceeded, err)
_ = ab.Close()
@@ -630,13 +630,13 @@ func dialAcceptHandshake(t *testing.T, a, b p2p.Transport) (p2p.Connection, p2p.
go func() {
privKey := ed25519.GenPrivKey()
nodeInfo := types.NodeInfo{NodeID: types.NodeIDFromPubKey(privKey.PubKey())}
_, _, err := ba.Handshake(ctx, nodeInfo, privKey)
_, _, err := ba.Handshake(ctx, 0, nodeInfo, privKey)
errCh <- err
}()
privKey := ed25519.GenPrivKey()
nodeInfo := types.NodeInfo{NodeID: types.NodeIDFromPubKey(privKey.PubKey())}
_, _, err := ab.Handshake(ctx, nodeInfo, privKey)
_, _, err := ab.Handshake(ctx, 0, nodeInfo, privKey)
require.NoError(t, err)
timer := time.NewTimer(2 * time.Second)

View File

@@ -150,3 +150,18 @@ func (_m *AppConnConsensus) InitChainSync(_a0 context.Context, _a1 types.Request
func (_m *AppConnConsensus) SetResponseCallback(_a0 abciclient.Callback) {
_m.Called(_a0)
}
type NewAppConnConsensusT interface {
mock.TestingT
Cleanup(func())
}
// NewAppConnConsensus creates a new instance of AppConnConsensus. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewAppConnConsensus(t NewAppConnConsensusT) *AppConnConsensus {
mock := &AppConnConsensus{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
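The regenerated mocks (here and in the files that follow) gain a constructor that attaches the mock to the test and asserts its expectations automatically at cleanup, instead of requiring an explicit `AssertExpectations` call. A hedged usage sketch; the `mocks` import path is assumed from the surrounding package layout:

``` go
package proxy_test

import (
	"testing"

	"github.com/stretchr/testify/mock"

	// Assumed import path for the generated mocks shown above.
	"github.com/tendermint/tendermint/internal/proxy/mocks"
)

func TestAppConnConsensusMockSketch(t *testing.T) {
	// The constructor registers t on the mock and adds a t.Cleanup that runs
	// AssertExpectations, so unmet expectations fail the test automatically.
	appConn := mocks.NewAppConnConsensus(t)
	appConn.On("SetResponseCallback", mock.Anything).Once()

	appConn.SetResponseCallback(nil)
}
```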

View File

@@ -118,3 +118,18 @@ func (_m *AppConnMempool) FlushSync(_a0 context.Context) error {
func (_m *AppConnMempool) SetResponseCallback(_a0 abciclient.Callback) {
_m.Called(_a0)
}
type NewAppConnMempoolT interface {
mock.TestingT
Cleanup(func())
}
// NewAppConnMempool creates a new instance of AppConnMempool. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewAppConnMempool(t NewAppConnMempoolT) *AppConnMempool {
mock := &AppConnMempool{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@@ -97,3 +97,18 @@ func (_m *AppConnQuery) QuerySync(_a0 context.Context, _a1 types.RequestQuery) (
return r0, r1
}
type NewAppConnQueryT interface {
mock.TestingT
Cleanup(func())
}
// NewAppConnQuery creates a new instance of AppConnQuery. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewAppConnQuery(t NewAppConnQueryT) *AppConnQuery {
mock := &AppConnQuery{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@@ -120,3 +120,18 @@ func (_m *AppConnSnapshot) OfferSnapshotSync(_a0 context.Context, _a1 types.Requ
return r0, r1
}
type NewAppConnSnapshotT interface {
mock.TestingT
Cleanup(func())
}
// NewAppConnSnapshot creates a new instance of AppConnSnapshot. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewAppConnSnapshot(t NewAppConnSnapshotT) *AppConnSnapshot {
mock := &AppConnSnapshot{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@@ -37,24 +37,32 @@ func (env *Environment) BroadcastTxSync(ctx *rpctypes.Context, tx types.Tx) (*co
err := env.Mempool.CheckTx(
ctx.Context(),
tx,
func(res *abci.Response) { resCh <- res },
func(res *abci.Response) {
select {
case <-ctx.Context().Done():
case resCh <- res:
}
},
mempool.TxInfo{},
)
if err != nil {
return nil, err
}
res := <-resCh
r := res.GetCheckTx()
return &coretypes.ResultBroadcastTx{
Code: r.Code,
Data: r.Data,
Log: r.Log,
Codespace: r.Codespace,
MempoolError: r.MempoolError,
Hash: tx.Hash(),
}, nil
select {
case <-ctx.Context().Done():
return nil, fmt.Errorf("broadcast confirmation not received: %w", ctx.Context().Err())
case res := <-resCh:
r := res.GetCheckTx()
return &coretypes.ResultBroadcastTx{
Code: r.Code,
Data: r.Data,
Log: r.Log,
Codespace: r.Codespace,
MempoolError: r.MempoolError,
Hash: tx.Hash(),
}, nil
}
}
// BroadcastTxCommit returns with the responses from CheckTx and DeliverTx.
@@ -64,61 +72,71 @@ func (env *Environment) BroadcastTxCommit(ctx *rpctypes.Context, tx types.Tx) (*
err := env.Mempool.CheckTx(
ctx.Context(),
tx,
func(res *abci.Response) { resCh <- res },
func(res *abci.Response) {
select {
case <-ctx.Context().Done():
case resCh <- res:
}
},
mempool.TxInfo{},
)
if err != nil {
return nil, err
}
r := (<-resCh).GetCheckTx()
if r.Code != abci.CodeTypeOK {
return &coretypes.ResultBroadcastTxCommit{
CheckTx: *r,
Hash: tx.Hash(),
}, fmt.Errorf("transaction encountered error (%s)", r.MempoolError)
}
if !indexer.KVSinkEnabled(env.EventSinks) {
return &coretypes.ResultBroadcastTxCommit{
select {
case <-ctx.Context().Done():
return nil, fmt.Errorf("broadcast confirmation not received: %w", ctx.Context().Err())
case res := <-resCh:
r := res.GetCheckTx()
if r.Code != abci.CodeTypeOK {
return &coretypes.ResultBroadcastTxCommit{
CheckTx: *r,
Hash: tx.Hash(),
},
errors.New("cannot wait for commit because kvEventSync is not enabled")
}
}, fmt.Errorf("transaction encountered error (%s)", r.MempoolError)
}
startAt := time.Now()
timer := time.NewTimer(0)
defer timer.Stop()
count := 0
for {
count++
select {
case <-ctx.Context().Done():
env.Logger.Error("Error on broadcastTxCommit",
"duration", time.Since(startAt),
"err", err)
if !indexer.KVSinkEnabled(env.EventSinks) {
return &coretypes.ResultBroadcastTxCommit{
CheckTx: *r,
Hash: tx.Hash(),
}, fmt.Errorf("timeout waiting for commit of tx %s (%s)",
tx.Hash(), time.Since(startAt))
case <-timer.C:
txres, err := env.Tx(ctx, tx.Hash(), false)
if err != nil {
jitter := 100*time.Millisecond + time.Duration(rand.Int63n(int64(time.Second))) // nolint: gosec
backoff := 100 * time.Duration(count) * time.Millisecond
timer.Reset(jitter + backoff)
continue
}
},
errors.New("cannot confirm transaction because kvEventSink is not enabled")
}
return &coretypes.ResultBroadcastTxCommit{
CheckTx: *r,
DeliverTx: txres.TxResult,
Hash: tx.Hash(),
Height: txres.Height,
}, nil
startAt := time.Now()
timer := time.NewTimer(0)
defer timer.Stop()
count := 0
for {
count++
select {
case <-ctx.Context().Done():
env.Logger.Error("error on broadcastTxCommit",
"duration", time.Since(startAt),
"err", err)
return &coretypes.ResultBroadcastTxCommit{
CheckTx: *r,
Hash: tx.Hash(),
}, fmt.Errorf("timeout waiting for commit of tx %s (%s)",
tx.Hash(), time.Since(startAt))
case <-timer.C:
txres, err := env.Tx(ctx, tx.Hash(), false)
if err != nil {
jitter := 100*time.Millisecond + time.Duration(rand.Int63n(int64(time.Second))) // nolint: gosec
backoff := 100 * time.Duration(count) * time.Millisecond
timer.Reset(jitter + backoff)
continue
}
return &coretypes.ResultBroadcastTxCommit{
CheckTx: *r,
DeliverTx: txres.TxResult,
Hash: tx.Hash(),
Height: txres.Height,
}, nil
}
}
}
}
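Both RPC hunks apply the same two-sided fix: the CheckTx callback delivers its result with a select so it can no longer block after the request's context is gone, and the handler waits on the result channel and the context together (BroadcastTxCommit then keeps polling for the commit with jittered backoff as before). The delivery pattern, condensed into a standalone sketch with a string standing in for `*abci.Response`:

``` go
package rpcsketch

import (
	"context"
	"fmt"
)

// checkAndWait shows the pattern from both hunks above: the producer's
// callback never blocks on a departed consumer, and the consumer never waits
// past its own context.
func checkAndWait(ctx context.Context, checkTx func(cb func(res string)) error) (string, error) {
	resCh := make(chan string, 1)
	err := checkTx(func(res string) {
		select {
		case <-ctx.Done(): // caller gave up; drop the result
		case resCh <- res:
		}
	})
	if err != nil {
		return "", err
	}

	select {
	case <-ctx.Done():
		return "", fmt.Errorf("broadcast confirmation not received: %w", ctx.Err())
	case res := <-resCh:
		return res, nil
	}
}
```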

View File

@@ -165,3 +165,18 @@ func (_m *EventSink) Type() indexer.EventSinkType {
return r0
}
type NewEventSinkT interface {
mock.TestingT
Cleanup(func())
}
// NewEventSink creates a new instance of EventSink. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewEventSink(t NewEventSinkT) *EventSink {
mock := &EventSink{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@@ -18,14 +18,16 @@ var _ indexer.EventSink = (*EventSink)(nil)
// The EventSink is an aggregator for redirecting the call path of the tx/block kvIndexer.
// For the implementation details please see the kv.go in the indexer/block and indexer/tx folder.
type EventSink struct {
txi *kvt.TxIndex
bi *kvb.BlockerIndexer
txi *kvt.TxIndex
bi *kvb.BlockerIndexer
store dbm.DB
}
func NewEventSink(store dbm.DB) indexer.EventSink {
return &EventSink{
txi: kvt.NewTxIndex(store),
bi: kvb.New(store),
txi: kvt.NewTxIndex(store),
bi: kvb.New(store),
store: store,
}
}
@@ -58,5 +60,5 @@ func (kves *EventSink) HasBlock(h int64) (bool, error) {
}
func (kves *EventSink) Stop() error {
return nil
return kves.store.Close()
}
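The kv `EventSink` now retains the database it was constructed with, so `Stop` can release it instead of being a no-op. A hedged usage sketch with an in-memory store; the sink's import path is assumed:

``` go
package main

import (
	dbm "github.com/tendermint/tm-db"

	// Assumed import path for the kv event sink shown above.
	"github.com/tendermint/tendermint/internal/state/indexer/sink/kv"
)

func main() {
	store := dbm.NewMemDB()
	sink := kv.NewEventSink(store) // the sink keeps a reference to store
	defer func() {
		// With the change above, Stop closes the underlying database.
		_ = sink.Stop()
	}()

	// ... index blocks and transactions through sink ...
}
```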

View File

@@ -208,3 +208,18 @@ func (_m *BlockStore) Size() int64 {
return r0
}
type NewBlockStoreT interface {
mock.TestingT
Cleanup(func())
}
// NewBlockStore creates a new instance of BlockStore. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewBlockStore(t NewBlockStoreT) *BlockStore {
mock := &BlockStore{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@@ -68,3 +68,18 @@ func (_m *EvidencePool) PendingEvidence(maxBytes int64) ([]types.Evidence, int64
func (_m *EvidencePool) Update(_a0 state.State, _a1 types.EvidenceList) {
_m.Called(_a0, _a1)
}
type NewEvidencePoolT interface {
mock.TestingT
Cleanup(func())
}
// NewEvidencePool creates a new instance of EvidencePool. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewEvidencePool(t NewEvidencePoolT) *EvidencePool {
mock := &EvidencePool{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@@ -186,3 +186,18 @@ func (_m *Store) SaveValidatorSets(_a0 int64, _a1 int64, _a2 *types.ValidatorSet
return r0
}
type NewStoreT interface {
mock.TestingT
Cleanup(func())
}
// NewStore creates a new instance of Store. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewStore(t NewStoreT) *Store {
mock := &Store{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@@ -82,3 +82,18 @@ func (_m *StateProvider) State(ctx context.Context, height uint64) (state.State,
return r0, r1
}
type NewStateProviderT interface {
mock.TestingT
Cleanup(func())
}
// NewStateProvider creates a new instance of StateProvider. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewStateProvider(t NewStateProviderT) *StateProvider {
mock := &StateProvider{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@@ -388,7 +388,7 @@ func (s *stateProviderP2P) consensusParams(ctx context.Context, height int64) (t
for {
iterCount++
select {
case s.paramsSendCh <- p2p.Envelope{
case requestCh <- p2p.Envelope{
To: peer,
Message: &ssproto.ParamsRequest{
Height: uint64(height),

View File

@@ -1,9 +0,0 @@
.PHONY: docs
REPO:=github.com/tendermint/tendermint/libs/events
docs:
@go get github.com/davecheney/godoc2md
godoc2md $(REPO) > README.md
test:
go test -v ./...

View File

@@ -1,193 +0,0 @@
# events
`import "github.com/tendermint/tendermint/libs/events"`
* [Overview](#pkg-overview)
* [Index](#pkg-index)
## Overview
Pub-Sub in go with event caching
## Index
* [type EventCache](#EventCache)
* [func NewEventCache(evsw Fireable) *EventCache](#NewEventCache)
* [func (evc *EventCache) FireEvent(event string, data EventData)](#EventCache.FireEvent)
* [func (evc *EventCache) Flush()](#EventCache.Flush)
* [type EventCallback](#EventCallback)
* [type EventData](#EventData)
* [type EventSwitch](#EventSwitch)
* [func NewEventSwitch() EventSwitch](#NewEventSwitch)
* [type Eventable](#Eventable)
* [type Fireable](#Fireable)
### Package files
[event_cache.go](/src/github.com/tendermint/tendermint/libs/events/event_cache.go) [events.go](/src/github.com/tendermint/tendermint/libs/events/events.go)
## Type [EventCache](/src/target/event_cache.go?s=116:179#L5)
``` go
type EventCache struct {
// contains filtered or unexported fields
}
```
An EventCache buffers events for a Fireable
All events are cached. Filtering happens on Flush
### func [NewEventCache](/src/target/event_cache.go?s=239:284#L11)
``` go
func NewEventCache(evsw Fireable) *EventCache
```
Create a new EventCache with an EventSwitch as backend
### func (\*EventCache) [FireEvent](/src/target/event_cache.go?s=449:511#L24)
``` go
func (evc *EventCache) FireEvent(event string, data EventData)
```
Cache an event to be fired upon finality.
### func (\*EventCache) [Flush](/src/target/event_cache.go?s=735:765#L31)
``` go
func (evc *EventCache) Flush()
```
Fire events by running evsw.FireEvent on all cached events. Blocks.
Clears cached events
## Type [EventCallback](/src/target/events.go?s=4201:4240#L185)
``` go
type EventCallback func(data EventData)
```
## Type [EventData](/src/target/events.go?s=243:294#L14)
``` go
type EventData interface {
}
```
Generic event data can be typed and registered with tendermint/go-amino
via concrete implementation of this interface
## Type [EventSwitch](/src/target/events.go?s=560:771#L29)
``` go
type EventSwitch interface {
service.Service
Fireable
AddListenerForEvent(listenerID, event string, cb EventCallback)
RemoveListenerForEvent(event string, listenerID string)
RemoveListener(listenerID string)
}
```
### func [NewEventSwitch](/src/target/events.go?s=917:950#L46)
``` go
func NewEventSwitch() EventSwitch
```
## Type [Eventable](/src/target/events.go?s=378:440#L20)
``` go
type Eventable interface {
SetEventSwitch(evsw EventSwitch)
}
```
reactors and other modules should export
this interface to become eventable
## Type [Fireable](/src/target/events.go?s=490:558#L25)
``` go
type Fireable interface {
FireEvent(event string, data EventData)
}
```
an event switch or cache implements fireable
- - -
Generated by [godoc2md](http://godoc.org/github.com/davecheney/godoc2md)

View File

@@ -1018,7 +1018,12 @@ func (c *Client) findNewPrimary(ctx context.Context, height int64, remove bool)
// process all the responses as they come in
for i := 0; i < cap(witnessResponsesC); i++ {
response := <-witnessResponsesC
var response witnessResponse
select {
case response = <-witnessResponsesC:
case <-ctx.Done():
return nil, ctx.Err()
}
switch response.err {
// success! We have found a new primary
case nil:
@@ -1047,10 +1052,6 @@ func (c *Client) findNewPrimary(ctx context.Context, height int64, remove bool)
// return the light block that new primary responded with
return response.lb, nil
// catch canceled contexts or deadlines
case context.Canceled, context.DeadlineExceeded:
return nil, response.err
// process benign errors by logging them only
case provider.ErrNoResponse, provider.ErrLightBlockNotFound, provider.ErrHeightTooHigh:
lastError = response.err
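The light-client hunk swaps a blind receive from `witnessResponsesC` for a select on the caller's context, so cancellation surfaces as `ctx.Err()` at the receive itself; the later `context.Canceled`/`context.DeadlineExceeded` case in the error switch becomes dead code and is removed. The receive idiom in isolation, as an in-package sketch that assumes the package's `witnessResponse` type:

``` go
// nextWitnessResponse is an illustrative helper, not part of the change: it
// returns the next response or the context's error, whichever comes first.
func nextWitnessResponse(ctx context.Context, ch <-chan witnessResponse) (witnessResponse, error) {
	select {
	case resp := <-ch:
		return resp, nil
	case <-ctx.Done():
		return witnessResponse{}, ctx.Err()
	}
}
```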

View File

@@ -51,3 +51,18 @@ func (_m *Provider) ReportEvidence(_a0 context.Context, _a1 types.Evidence) erro
return r0
}
type NewProviderT interface {
mock.TestingT
Cleanup(func())
}
// NewProvider creates a new instance of Provider. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewProvider(t NewProviderT) *Provider {
mock := &Provider{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@@ -99,3 +99,18 @@ func (_m *LightClient) VerifyLightBlockAtHeight(ctx context.Context, height int6
return r0, r1
}
type NewLightClientT interface {
mock.TestingT
Cleanup(func())
}
// NewLightClient creates a new instance of LightClient. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewLightClient(t NewLightClientT) *LightClient {
mock := &LightClient{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

Some files were not shown because too many files have changed in this diff.