Compare commits

...

50 Commits

Author SHA1 Message Date
Callum Waters
e2d8666782 fix merge conflicts 2021-10-20 11:34:44 +02:00
Callum Waters
102f1f5af6 Merge branch 'master' into callum/handshake 2021-10-18 10:25:00 +02:00
Sam Kleinman
ca8f004112 p2p: remove final shims from p2p package (#7136)
This is, perhaps, the trivial final piece of #7075 that I've been
working on.

There's more work to be done: 
- push more of the setup into the packages themselves
- move channel-based sending/filtering out of the 
- simplify the buffering throughout the p2p stack.
2021-10-15 20:08:09 +00:00
Sam Kleinman
7143f14a63 p2p: simplify open channel interface (#7133)
A fourth #7075 component patch to simplify the channel creation interface
2021-10-15 18:00:24 +00:00
Sam Kleinman
cbe6ad6cd5 p2p: flatten channel descriptor (#7132) 2021-10-15 13:03:10 -04:00
Sam Kleinman
0900ea8396 p2p: channel shim cleanup (#7129) 2021-10-15 12:31:33 -04:00
Sam Kleinman
f4a56f4034 p2p: refactor channel description (#7130)
This is another small sliver of #7075, with the intention of removing
the legacy shim layer related to channel registration.
2021-10-15 15:45:12 +00:00
Marko
66a11fe527 blocksync: remove v0 folder structure (#7128)
Remove v0 blocksync folder structure.
2021-10-15 13:03:53 +00:00
M. J. Fromberger
006e6108a1 Fix the protobuf generation script. (#7127)
This should have been part of #7121, but I missed it.
2021-10-15 12:18:53 +00:00
Callum Waters
c3dc7d20df docs: add roadmap to repo (#7107) 2021-10-15 10:35:36 +02:00
Jared Zhou
b95c261981 rpc: fix typo in broadcast commit (#7124) 2021-10-14 10:34:25 +02:00
M. J. Fromberger
bc1a20dbb8 Revert temporary patch to buf.yaml. (#7122)
This patch was needed to pass the buf breakage check for the proto file removed
in #7121, but now that master contains the change we no longer need the patch.
2021-10-13 23:37:50 +00:00
M. J. Fromberger
86f00135dd rpc: Remove the deprecated gRPC interface to the RPC service (#7121)
This change removes the partial gRPC interface to the RPC service, which was
deprecated in resolution of #6718.

Details:
- rpc: Remove the client and server interfaces and proto definitions.
- Remove the gRPC settings from the config library.
- Remove gRPC setup for the RPC service in the node startup.
- Fix various test helpers to remove gRPC bits.
- Remove the --rpc.grpc-laddr flag from the CLI.

Note that to satisfy the protobuf interface check, this change also includes a
temporary edit to buf.yaml, that I will revert after this is merged.
2021-10-13 15:01:01 -07:00
William Banfield
ff7b0e638e p2p: fix priority queue bytes pending calculation (#7120)
This metric describes itself as 'pending' but never actually decrements when the messages are removed from the queue.

This change fixes that by decrementing the metric when the data is removed from the queue.
2021-10-13 21:18:44 +00:00
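For reference, a minimal sketch of the gauge pattern the fix describes: grow a pending-bytes gauge on enqueue and shrink it again on dequeue. The metric and type names here are illustrative, not the reactor's actual wiring.

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// bytesPending tracks bytes currently sitting in the queue. The metric and
// type names are illustrative, not the reactor's real ones.
var bytesPending = promauto.NewGauge(prometheus.GaugeOpts{
	Namespace: "tendermint",
	Subsystem: "p2p",
	Name:      "queue_bytes_pending",
	Help:      "Bytes currently pending in the priority queue.",
})

type queue struct{ items [][]byte }

// enqueue adds a message and grows the pending-bytes gauge.
func (q *queue) enqueue(msg []byte) {
	q.items = append(q.items, msg)
	bytesPending.Add(float64(len(msg)))
}

// dequeue removes the oldest message and shrinks the gauge again; the
// decrement is the piece the fix adds.
func (q *queue) dequeue() ([]byte, bool) {
	if len(q.items) == 0 {
		return nil, false
	}
	msg := q.items[0]
	q.items = q.items[1:]
	bytesPending.Sub(float64(len(msg)))
	return msg, true
}

func main() {
	q := &queue{}
	q.enqueue([]byte("hello"))
	msg, _ := q.dequeue()
	fmt.Printf("dequeued %d bytes\n", len(msg))
}
```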
William Banfield
36a1acff52 internal/proxy: add initial set of abci metrics (#7115)
This PR adds an initial set of metrics for ABCI usage. The initial metrics enable the calculation of timing histograms and call counts for each of the ABCI methods. The metrics are also labeled as either 'sync' or 'async' to indicate whether the method call was performed using ABCI's `*Async` methods.

An example of these metrics is included here for reference:
```
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.0001"} 0
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.0004"} 5
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.002"} 12
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.009"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.02"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.1"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.65"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="2"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="6"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="25"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="+Inf"} 13
tendermint_abci_connection_method_timing_sum{chain_id="ci",method="commit",type="sync"} 0.007802058000000001
tendermint_abci_connection_method_timing_count{chain_id="ci",method="commit",type="sync"} 13
```

These metrics can easily be graphed using Prometheus's `histogram_quantile(...)` function to pick out a particular quantile to graph or examine. I chose buckets that roughly estimate the expected range of times for ABCI operations. They start at 0.0001 seconds and range up to 25 seconds. The hope is that this range captures enough possible times to be useful for us and operators.
2021-10-13 20:52:25 +00:00
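As a rough illustration of how such a labeled timing histogram can be assembled with the Prometheus Go client, here is a self-contained sketch. The metric name, labels, and buckets mirror the sample output above, but the wiring is an assumption for illustration rather than the actual internal/proxy code.

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// methodTiming mirrors the metric shown above: a histogram labeled by
// chain_id, ABCI method, and sync/async call type, with buckets from
// 0.0001s up to 25s.
var methodTiming = promauto.NewHistogramVec(prometheus.HistogramOpts{
	Namespace: "tendermint",
	Subsystem: "abci",
	Name:      "connection_method_timing",
	Help:      "Timing of ABCI method calls in seconds.",
	Buckets:   []float64{0.0001, 0.0004, 0.002, 0.009, 0.02, 0.1, 0.65, 2, 6, 25},
}, []string{"chain_id", "method", "type"})

// observe wraps a call and records how long it took; the call itself is a
// stand-in for the real proxy invocation of an ABCI method.
func observe(chainID, method, callType string, call func()) {
	start := time.Now()
	call()
	methodTiming.WithLabelValues(chainID, method, callType).
		Observe(time.Since(start).Seconds())
}

func main() {
	observe("ci", "commit", "sync", func() { time.Sleep(500 * time.Microsecond) })
}
```

From there, a query along the lines of `histogram_quantile(0.99, sum by (le, method) (rate(tendermint_abci_connection_method_timing_bucket[5m])))` would plot a per-method 99th percentile.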
Sam Kleinman
164de91842 rpc: move evidence tests to shared fixtures (#7119)
This is follow on to the work in #7112.
2021-10-13 18:29:03 +00:00
Callum Waters
4fe0f262d4 changelog: add 0.34.14 updates (#7117)
This PR adds the 0.34.14 changes to the changelog in master
2021-10-13 12:28:43 +00:00
M. J. Fromberger
6538776e6a build: Fix build-docker to include the full context. (#7114)
Fixes #7068. The build-docker rule relies on being able to run make
build-linux, but did not pull the Makefile into the build context.
There are various ways to fix this, but this was probably the smallest.
2021-10-12 22:29:06 +00:00
Sam Kleinman
4781d04d18 node: always close database engine (#7113) 2021-10-12 17:40:59 -04:00
Sam Kleinman
52ed994416 test: cleanup rpc/client and node test fixtures (#7112) 2021-10-12 16:49:45 -04:00
lklimek
0524558696 refactor: assignment copies lock value (#7108)
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2021-10-12 13:22:57 -07:00
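For context, the `go vet` copylocks check behind this refactor flags any assignment that copies a value containing a `sync.Mutex`, since the copy gets its own lock and no longer guards the shared state. A minimal illustrative example (not the actual `BitArray` code) of the problem and the usual pointer-receiver fix:

```go
package main

import "sync"

// counter embeds a mutex; copying a counter value also copies the mutex,
// which is what go vet's copylocks check flags.
type counter struct {
	mu sync.Mutex
	n  int
}

// A value receiver would copy the struct (and its mutex), so Lock/Unlock
// would protect a throwaway copy rather than the shared value:
//
//	func (c counter) IncBad() { c.mu.Lock(); c.n++; c.mu.Unlock() }

// Inc uses a pointer receiver, so the original mutex and field are used.
func (c *counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

func main() {
	var c counter
	c.Inc()
}
```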
Sam Kleinman
d837432681 ci: use run-multiple.sh for e2e pr tests (#7111)
* ci: use run-multiple.sh for e2e pr tests

* fix labeling

* Update .github/workflows/e2e-nightly-35x.yml

Co-authored-by: M. J. Fromberger <michael.j.fromberger@gmail.com>

Co-authored-by: M. J. Fromberger <michael.j.fromberger@gmail.com>
2021-10-12 16:28:10 +00:00
Sam Kleinman
34a3fcd8fc Revert "abci: change client to use multi-reader mutexes (#6306)" (#7106)
This reverts commit 1c4dbe30d4.
2021-10-12 11:29:31 -04:00
M. J. Fromberger
48295955ed light: Update links in package docs. (#7099)
Fixes #7098. The light client documentation moved to the spec repository.

I was not able to figure out what happened to light-client-protocol.md; it was removed in #5252, but no corresponding file exists in the spec repository. Since the spec also discusses the protocol, this change simply links to the spec and removes the non-functional reference.

Alternatively we could link to the top-level [light client doc](https://docs.tendermint.com/master/tendermint-core/light-client.html) if you think that's better.
2021-10-11 23:21:45 +00:00
Sam Kleinman
ded310093e lint: fix collection of stale errors (#7090)
A few things that had been annoying.
2021-10-09 15:33:54 +00:00
Sam Kleinman
befd669794 e2e: light nodes should use builtin abci app (#7095) 2021-10-09 04:20:09 +00:00
Sam Kleinman
3646b635d3 p2p, types: remove legacy NetAddress type (#7084) 2021-10-08 12:29:20 -04:00
Callum Waters
59404003ee p2p: rename pexV2 to pex (#7088) 2021-10-08 16:53:54 +02:00
Sam Kleinman
f2a8f5e054 e2e: abci protocol should be consistent across networks (#7078)
It seems weird in retrospect that we allow networks to contain
applications that use different ABCI protocols.
2021-10-08 13:42:23 +00:00
Sam Kleinman
1b5bb5348f p2p: cleanup unused arguments (#7079)
This is mostly just reading through the output of uparam, after
noticing that there were a few places where we were ignoring some arguments.
2021-10-08 12:49:17 +00:00
Callum Waters
4ca130d226 cli: allow node operator to rollback last state (#7033) 2021-10-08 09:15:13 +02:00
Sam Kleinman
1f438f205a e2e: improve network connectivity (#7077)
This tweaks the connectivity of test configurations, in hopes that more will be viable.

Additionally reduces the prevalence of testing the legacy mempool.
2021-10-07 23:07:35 +00:00
Sam Kleinman
5bf30bb049 p2p: cleanup transport interface (#7071)
This is another batch of things to cleanup in the legacy P2P system.
2021-10-06 19:17:44 +00:00
dependabot[bot]
e53f92ba9c build(deps): Bump github.com/adlio/schema from 1.1.13 to 1.1.14 (#7069)
Bumps [github.com/adlio/schema](https://github.com/adlio/schema) from 1.1.13 to 1.1.14.
- [Release notes](https://github.com/adlio/schema/releases)
- [Commits](https://github.com/adlio/schema/compare/v1.1.13...v1.1.14)

---
updated-dependencies:
- dependency-name: github.com/adlio/schema
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-10-06 09:14:55 -04:00
Callum Waters
e4d6f6df09 docs: create separate releases doc (#7040) 2021-10-06 11:59:21 +02:00
Sam Kleinman
0ef1a12186 ci: fix p2p configuration for e2e tests (#7066)
My earlier p2p cleanup code removed support for the p2p tests from the
e2e generator and runner, but missed removing the CI
configuration. This patch remedies that.
2021-10-06 04:11:19 +00:00
Sam Kleinman
72aee47847 ci: 0.35.x nightly should run from master and checkout the release branch (#7067)
Nightly jobs run CI from the master branch, and the configuration was missing a checkout of the correct ref.
2021-10-06 04:08:54 +00:00
M. J. Fromberger
109814c85a Clarify decision record for ADR-065. (#7062)
While discussing a question about the indexing interface (#7044), we found some
confusion about the intent of the design decisions in ADR 065.

Based on discussion with the original authors of the ADR, this commit adds some
language to the Decisions section to spell out the intentions more clearly, and
to call out future work that this ADR did not explicitly decide about.
2021-10-05 14:10:11 -07:00
Sam Kleinman
851d2e3bde mempool,rpc: add removetx rpc method (#7047)
Addresses one of the concerns with #7041.

Provides a mechanism (via the RPC interface) to delete a single transaction, described by its hash, from the mempool. The method returns an error if the transaction cannot be found. Once the transaction is removed, it remains in the cache and cannot be resubmitted until the cache is cleared or it expires from the cache.
2021-10-05 20:23:15 +00:00
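A hedged sketch of how a client might call such an endpoint over JSON-RPC. The method name `remove_tx` and the `tx_key` parameter are assumptions made for illustration; the exact request shape should be taken from the node's RPC documentation.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// removeTx posts a JSON-RPC request to the node's RPC endpoint. The method
// name "remove_tx" and the "tx_key" parameter are assumptions made for
// illustration; check the node's RPC docs for the exact request shape.
func removeTx(rpcAddr string, txHash []byte) error {
	req := map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  "remove_tx",
		"params":  map[string]interface{}{"tx_key": txHash},
	}
	body, err := json.Marshal(req)
	if err != nil {
		return err
	}
	resp, err := http.Post(rpcAddr, "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("remove_tx failed: %s", resp.Status)
	}
	return nil
}

func main() {
	// Default local RPC listen address; the hash here is a placeholder.
	_ = removeTx("http://127.0.0.1:26657", []byte("example-tx-hash"))
}
```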
Sam Kleinman
3ea81bfaa7 p2p: remove wdrr queue (#7064)
This code hasn't been battle tested, and seems to have grown
increasingly flaky in tests. Given our general direction of reducing
queue complexity over the next couple of releases I think it makes
sense to remove it.
2021-10-05 20:09:31 +00:00
Callum Waters
5703ae2fb3 e2e: automatically prune old app snapshots (#7034)
This PR tackles the case of using the e2e application in a long-lived testnet. The application continually saves snapshots (usually every 100 blocks), which after a while bloats the size of the application. This PR prunes older snapshots so that only the 10 most recent snapshots remain.
2021-10-05 18:19:12 +00:00
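A minimal sketch of the pruning policy described above, assuming snapshots are tracked by block height: sort the heights, keep the 10 most recent, and report the rest for deletion. This is illustrative only, not the e2e application's actual code.

```go
package main

import (
	"fmt"
	"sort"
)

// pruneSnapshots keeps only the `keep` most recent snapshot heights and
// returns the heights that should be deleted. Purely illustrative of the
// pruning policy described above.
func pruneSnapshots(heights []int64, keep int) (kept, pruned []int64) {
	sort.Slice(heights, func(i, j int) bool { return heights[i] > heights[j] })
	if len(heights) <= keep {
		return heights, nil
	}
	return heights[:keep], heights[keep:]
}

func main() {
	heights := []int64{100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200}
	kept, pruned := pruneSnapshots(heights, 10)
	fmt.Println("keep:", kept)
	fmt.Println("delete:", pruned)
}
```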
Sam Kleinman
03ad7d6f20 p2p: delete legacy stack initial pass (#7035)
A few notes:

- this is not all the deletion that we can do, but this is the most
  "simple" case: it leaves in shims, and there's some trivial
  additional cleanup to the transport that can happen but that
  requires writing more code, and I wanted this to be easy to review
  above all else.
  
- This should land *after* we cut the branch for 0.35, but I'm
  anticipating that to happen soon, and I wanted to run this through
  CI.
2021-10-05 13:40:32 +00:00
William Banfield
f5b9c210ca consensus: wait until peerUpdates channel is closed to close remaining peers (#7058)
The race occurred as a result of a goroutine launched by `processPeerUpdate` racing with the `OnStop` method. The `processPeerUpdates` goroutine deletes from the map as `OnStop` is reading from it. This change updates the `OnStop` method to wait for the peer updates channel to be done before closing the peers. It also copies the map contents to a new map so that it will not conflict with the view of the map seen by the goroutine created in `processPeerUpdate`.
2021-10-04 22:37:18 +00:00
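A minimal sketch of the shutdown pattern this fix describes, with illustrative names: wait for the peer-updates goroutine to signal completion, then snapshot the peer map under the lock before closing each peer, so the close loop never races with concurrent map deletes.

```go
package main

import "sync"

type peer struct{ id string }

func (p *peer) close() { /* tear down the connection */ }

type reactor struct {
	mtx             sync.Mutex
	peers           map[string]*peer
	peerUpdatesDone chan struct{} // closed when the peer-updates goroutine exits
}

// OnStop waits for the peer-updates goroutine to finish, then closes peers
// from a copied view of the map so it never races with concurrent deletes.
func (r *reactor) OnStop() {
	<-r.peerUpdatesDone // wait until the updates goroutine is done with the map

	r.mtx.Lock()
	snapshot := make(map[string]*peer, len(r.peers))
	for id, p := range r.peers {
		snapshot[id] = p
	}
	r.mtx.Unlock()

	for _, p := range snapshot {
		p.close()
	}
}

func main() {
	r := &reactor{
		peers:           map[string]*peer{"a": {id: "a"}},
		peerUpdatesDone: make(chan struct{}),
	}
	close(r.peerUpdatesDone)
	r.OnStop()
}
```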
Sam Kleinman
cb69ed8135 blocksync/v2: remove unsupported reactor (#7046)
This commit should be one of the first to land as part of the v0.36
cycle *after* cutting the 0.35 branch. 

The blocksync/v2 reactor was originally implemented as an experiment
to produce an implementation of the blockstack protocol that would be
easier to test and validate, but it was never appropriately
operationalized and this implementation was never fully debugged. When
the p2p layer was refactored as part of the 0.35 cycle, the v2
implementation was not refactored and it was left in the codebase but
not removed. This commit just removes all references to it.
2021-10-04 21:12:51 +00:00
William Banfield
c201e3b54d scripts: fix authors script to take a ref (#7051)
This script is referenced from the release documentation, so we should make sure it's functional. This is helpful in generating the "Special Thanks" section of the changelog.
2021-10-04 19:47:50 +00:00
M. J. Fromberger
b30ec89ee9 Add an e2e workflow for the v0.35.x backport branch. (#7048) 2021-10-04 10:35:16 -07:00
Sam Kleinman
6276fdcb5d ci: mergify support for 0.35 backports (#7050) 2021-10-04 13:04:15 -04:00
Callum Waters
64fb13a561 save state after making changes during sync 2021-08-03 10:20:29 +02:00
Callum Waters
5682dd55f9 Merge branch 'master' into callum/handshake 2021-08-02 12:17:03 +02:00
Callum Waters
4db618db7a separate out replaying of blocks to application and run in OnStart 2021-07-28 17:39:14 +02:00
233 changed files with 4452 additions and 22494 deletions

.github/mergify.yml (9 changed lines)

@@ -16,3 +16,12 @@ pull_request_rules:
backport:
branches:
- v0.34.x
- name: backport patches to v0.35.x branch
conditions:
- base=master
- label=S:backport-to-v0.35.x
actions:
backport:
branches:
- v0.35.x

.github/workflows/e2e-nightly-35x.yml (new file, 76 lines)

@@ -0,0 +1,76 @@
# Runs randomly generated E2E testnets nightly on v0.35.x.
# !! If you change something in this file, you probably want
# to update the e2e-nightly-master workflow as well!
name: e2e-nightly-35x
on:
workflow_dispatch: # allow running workflow manually
schedule:
- cron: '0 2 * * *'
jobs:
e2e-nightly-test:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
p2p: ['legacy', 'new', 'hybrid']
group: ['00', '01', '02', '03']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v2
with:
go-version: '1.17'
- uses: actions/checkout@v2.3.4
with:
ref: 'v0.35.x'
- name: Build
working-directory: test/e2e
# Run make jobs in parallel, since we can't run steps in parallel.
run: make -j2 docker generator runner tests
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 4 -d networks/nightly/${{ matrix.p2p }} -p ${{ matrix.p2p }}
- name: Run ${{ matrix.p2p }} p2p testnets in group ${{ matrix.group }}
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/${{ matrix.p2p }}/*-group${{ matrix.group }}-*.toml
e2e-nightly-fail-2:
needs: e2e-nightly-test
if: ${{ failure() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly E2E Tests
SLACK_ICON_EMOJI: ':skull:'
SLACK_COLOR: danger
SLACK_MESSAGE: Nightly E2E tests failed on v0.35.x
SLACK_FOOTER: ''
e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test
if: ${{ success() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on success
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly E2E Tests
SLACK_ICON_EMOJI: ':white_check_mark:'
SLACK_COLOR: good
SLACK_MESSAGE: Nightly E2E tests passed on v0.35.x
SLACK_FOOTER: ''


@@ -10,13 +10,12 @@ on:
- cron: '0 2 * * *'
jobs:
e2e-nightly-test-2:
e2e-nightly-test:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
p2p: ['legacy', 'new', 'hybrid']
group: ['00', '01', '02', '03']
runs-on: ubuntu-latest
timeout-minutes: 60
@@ -35,14 +34,14 @@ jobs:
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 4 -d networks/nightly/${{ matrix.p2p }} -p ${{ matrix.p2p }}
run: ./build/generator -g 4 -d networks/nightly/
- name: Run ${{ matrix.p2p }} p2p testnets in group ${{ matrix.group }}
- name: Run ${{ matrix.p2p }} p2p testnets
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/${{ matrix.p2p }}/*-group${{ matrix.group }}-*.toml
run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml
e2e-nightly-fail-2:
needs: e2e-nightly-test-2
needs: e2e-nightly-test
if: ${{ failure() }}
runs-on: ubuntu-latest
steps:
@@ -58,7 +57,7 @@ jobs:
SLACK_FOOTER: ''
e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test-2
needs: e2e-nightly-test
if: ${{ success() }}
runs-on: ubuntu-latest
steps:


@@ -33,10 +33,6 @@ jobs:
- name: Run CI testnet
working-directory: test/e2e
run: ./build/runner -f networks/ci.toml
run: ./run-multiple.sh networks/ci.toml
if: "env.GIT_DIFF != ''"
- name: Emit logs on failure
if: ${{ failure() }}
working-directory: test/e2e
run: ./build/runner -f networks/ci.toml logs


@@ -23,7 +23,7 @@ jobs:
- uses: golangci/golangci-lint-action@v2.5.2
with:
# Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
version: v1.38
version: v1.42.1
args: --timeout 10m
github-token: ${{ secrets.github_token }}
if: env.GIT_DIFF


@@ -13,12 +13,12 @@ linters:
# - gochecknoinits
# - gocognit
- goconst
- gocritic
# - gocritic
# - gocyclo
# - godox
- gofmt
- goimports
- golint
- revive
- gosec
- gosimple
- govet


@@ -178,6 +178,24 @@ Special thanks to external contributors on this release: @JayT106, @bipulprasad,
- [statesync] [\#6463](https://github.com/tendermint/tendermint/pull/6463) Adds Reverse Sync feature to fetch historical light blocks after state sync in order to verify any evidence (@cmwaters)
- [blocksync] [\#6590](https://github.com/tendermint/tendermint/pull/6590) Update the metrics during blocksync (@JayT106)
## v0.34.14
This release backports the `rollback` feature to allow recovery in the event of an incorrect app hash.
### FEATURES
- [\#6982](https://github.com/tendermint/tendermint/pull/6982) The tendermint binary now has built-in support for running the end-to-end test application (with state sync support) (@cmwaters).
- [cli] [#7033](https://github.com/tendermint/tendermint/pull/7033) Add a `rollback` command to rollback to the previous tendermint state. This may be useful in the event of a non-deterministic app hash or when reverting an upgrade. @cmwaters
### IMPROVEMENTS
- [\#7103](https://github.com/tendermint/tendermint/pull/7104) Remove IAVL dependency (backport of #6550) (@cmwaters)
### BUG FIXES
- [\#7057](https://github.com/tendermint/tendermint/pull/7057) Import Postgres driver support for the psql indexer (@creachadair).
- [ABCI] [\#7110](https://github.com/tendermint/tendermint/issues/7110) Revert "change client to use multi-reader mutexes (#6873)" (@tychoish).
## v0.34.13
*September 6, 2021*


@@ -12,16 +12,31 @@ Special thanks to external contributors on this release:
- CLI/RPC/Config
- [rpc] Remove the deprecated gRPC interface to the RPC service (@creachadair).
- Apps
- P2P Protocol
- [p2p] \#7035 Remove legacy P2P routing implementation and
associated configuration options (@tychoish)
- Go API
- [blocksync] \#7046 Remove v2 implementation of the blocksync
service and reactor, which was disabled in the previous release
(@tychoish)
- [p2p] \#7064 Remove WDRR queue implementation. (@tychoish)
- Blockchain Protocol
### FEATURES
- [cli] [#7033](https://github.com/tendermint/tendermint/pull/7033) Add a `rollback` command to rollback to the previous tendermint state in the event of a non-deterministic app hash or reverting an upgrade.
- [mempool, rpc] \#7041 Add removeTx operation to the RPC layer. (@tychoish)
### IMPROVEMENTS
### BUG FIXES
- fix: assignment copies lock value in `BitArray.UnmarshalJSON()` (@lklimek)


@@ -227,150 +227,6 @@ Fixes #nnnn
Each PR should have one commit once it lands on `master`; this can be accomplished by using the "squash and merge" button on Github. Be sure to edit your commit message, though!
### Release procedure
#### A note about backport branches
Tendermint's `master` branch is under active development.
Releases are specified using tags and are built from long-lived "backport" branches.
Each release "line" (e.g. 0.34 or 0.33) has its own long-lived backport branch,
and the backport branches have names like `v0.34.x` or `v0.33.x`
(literally, `x`; it is not a placeholder in this case).
As non-breaking changes land on `master`, they should also be backported (cherry-picked)
to these backport branches.
We use Mergify's [backport feature](https://mergify.io/features/backports) to automatically backport
to the needed branch. There should be a label for any backport branch that you'll be targeting.
To notify the bot to backport a pull request, mark the pull request with
the label `S:backport-to-<backport_branch>`.
Once the original pull request is merged, the bot will try to cherry-pick the pull request
to the backport branch. If the bot fails to backport, it will open a pull request.
The author of the original pull request is responsible for solving the conflicts and
merging the pull request.
#### Creating a backport branch
If this is the first release candidate for a major release, you get to have the honor of creating
the backport branch!
Note that, after creating the backport branch, you'll also need to update the tags on `master`
so that `go mod` is able to order the branches correctly. You should tag `master` with a "dev" tag
that is "greater than" the backport branches tags. See #6072 for more context.
In the following example, we'll assume that we're making a backport branch for
the 0.35.x line.
1. Start on `master`
2. Create the backport branch:
`git checkout -b v0.35.x`
3. Go back to master and tag it as the dev branch for the _next_ major release and push it back up:
`git tag -a v0.36.0-dev; git push v0.36.0-dev`
4. Create a new workflow to run the e2e nightlies for this backport branch.
(See https://github.com/tendermint/tendermint/blob/master/.github/workflows/e2e-nightly-34x.yml
for an example.)
#### Release candidates
Before creating an official release, especially a major release, we may want to create a
release candidate (RC) for our friends and partners to test out. We use git tags to
create RCs, and we build them off of backport branches.
Tags for RCs should follow the "standard" release naming conventions, with `-rcX` at the end
(for example, `v0.35.0-rc0`).
(Note that branches and tags _cannot_ have the same names, so it's important that these branches
have distinct names from the tags/release names.)
If this is the first RC for a major release, you'll have to make a new backport branch (see above).
Otherwise:
1. Start from the backport branch (e.g. `v0.35.x`).
1. Run the integration tests and the e2e nightlies
(which can be triggered from the Github UI;
e.g., https://github.com/tendermint/tendermint/actions/workflows/e2e-nightly-34x.yml).
1. Prepare the changelog:
- Move the changes included in `CHANGELOG_PENDING.md` into `CHANGELOG.md`.
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all PRs
- Ensure that UPGRADING.md is up-to-date and includes notes on any breaking changes
or other upgrading flows.
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
1. Open a PR with these changes against the backport branch.
1. Once these changes have landed on the backport branch, be sure to pull them back down locally.
2. Once you have the changes locally, create the new tag, specifying a name and a tag "message":
`git tag -a v0.35.0-rc0 -m "Release Candidate v0.35.0-rc0`
3. Push the tag back up to origin:
`git push origin v0.35.0-rc0`
Now the tag should be available on the repo's releases page.
4. Future RCs will continue to be built off of this branch.
Note that this process should only be used for "true" RCs--
release candidates that, if successful, will be the next release.
For more experimental "RCs," create a new, short-lived branch and tag that instead.
#### Major release
This major release process assumes that this release was preceded by release candidates.
If there were no release candidates, begin by creating a backport branch, as described above.
1. Start on the backport branch (e.g. `v0.35.x`)
2. Run integration tests and the e2e nightlies.
3. Prepare the release:
- "Squash" changes from the changelog entries for the RCs into a single entry,
and add all changes included in `CHANGELOG_PENDING.md`.
(Squashing includes both combining all entries, as well as removing or simplifying
any intra-RC changes. It may also help to alphabetize the entries by package name.)
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all PRs
- Ensure that UPGRADING.md is up-to-date and includes notes on any breaking changes
or other upgrading flows.
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
4. Open a PR with these changes against the backport branch.
5. Once these changes are on the backport branch, push a tag with prepared release details.
This will trigger the actual release `v0.35.0`.
- `git tag -a v0.35.0 -m 'Release v0.35.0'`
- `git push origin v0.35.0`
7. Make sure that `master` is updated with the latest `CHANGELOG.md`, `CHANGELOG_PENDING.md`, and `UPGRADING.md`.
8. Add the release to the documentation site generator config (see
[DOCS_README.md](./docs/DOCS_README.md) for more details). In summary:
- Start on branch `master`.
- Add a new line at the bottom of [`docs/versions`](./docs/versions) to
ensure the newest release is the default for the landing page.
- Add a new entry to `themeConfig.versions` in
[`docs/.vuepress/config.js`](./docs/.vuepress/config.js) to include the
release in the dropdown versions menu.
#### Minor release (point releases)
Minor releases are done differently from major releases: They are built off of long-lived backport branches, rather than from master.
As non-breaking changes land on `master`, they should also be backported (cherry-picked) to these backport branches.
Minor releases don't have release candidates by default, although any tricky changes may merit a release candidate.
To create a minor release:
1. Checkout the long-lived backport branch: `git checkout v0.35.x`
2. Run integration tests (`make test_integrations`) and the nightlies.
3. Check out a new branch and prepare the release:
- Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for all issues
- Run `bash ./scripts/authors.sh` to get a list of authors since the latest release, and add the GitHub aliases of external contributors to the top of the CHANGELOG. To lookup an alias from an email, try `bash ./scripts/authors.sh <email>`
- Reset the `CHANGELOG_PENDING.md`
- Bump the ABCI version number, if necessary.
(Note that ABCI follows semver, and that ABCI versions are the only versions
which can change during minor releases, and only field additions are valid minor changes.)
4. Open a PR with these changes that will land them back on `v0.35.x`
5. Once this change has landed on the backport branch, make sure to pull it locally, then push a tag.
- `git tag -a v0.35.1 -m 'Release v0.35.1'`
- `git push origin v0.35.1`
6. Create a pull request back to master with the CHANGELOG & version changes from the latest release.
- Remove all `R:minor` labels from the pull requests that were included in the release.
- Do not merge the backport branch into master.
## Testing
### Unit tests


@@ -89,7 +89,7 @@ proto-gen:
.PHONY: proto-gen
proto-lint:
@$(DOCKER_BUF) check lint --error-format=json
@$(DOCKER_BUF) lint --error-format=json
.PHONY: proto-lint
proto-format:
@@ -98,11 +98,11 @@ proto-format:
.PHONY: proto-format
proto-check-breaking:
@$(DOCKER_BUF) check breaking --against-input .git#branch=master
@$(DOCKER_BUF) breaking --against .git#branch=master
.PHONY: proto-check-breaking
proto-check-breaking-ci:
@$(DOCKER_BUF) check breaking --against-input $(HTTPS_GIT)#branch=master
@$(DOCKER_BUF) breaking --against $(HTTPS_GIT)#branch=master
.PHONY: proto-check-breaking-ci
###############################################################################
@@ -227,13 +227,13 @@ build-docs:
build-docker: build-linux
cp $(BUILDDIR)/tendermint DOCKER/tendermint
docker build --label=tendermint --tag="tendermint/tendermint" DOCKER
docker build --label=tendermint --tag="tendermint/tendermint" -f DOCKER/Dockerfile .
rm -rf DOCKER/tendermint
.PHONY: build-docker
###############################################################################
### Mocks ###
### Mocks ###
###############################################################################
mockery:


@@ -35,6 +35,8 @@ See below for more details about [versioning](#versioning).
In any case, if you intend to run Tendermint in production, we're happy to help. You can
contact us [over email](mailto:hello@interchain.berlin) or [join the chat](https://discord.gg/cosmosnetwork).
More on how releases are conducted can be found [here](./RELEASES.md).
## Security
To report a security vulnerability, see our [bug bounty
@@ -112,6 +114,8 @@ in [UPGRADING.md](./UPGRADING.md).
### Tendermint Core
We keep a public up-to-date version of our roadmap [here](./docs/roadmap/roadmap.md)
For details about the blockchain data structures and the p2p protocols, see the
[Tendermint specification](https://docs.tendermint.com/master/spec/).

RELEASES.md (new file, 161 lines)

@@ -0,0 +1,161 @@
# Releases
Tendermint uses [semantic versioning](https://semver.org/) with each release following
a `vX.Y.Z` format. The `master` branch is used for active development and thus it's
advisable not to build against it.
The latest changes are always initially merged into `master`.
Releases are specified using tags and are built from long-lived "backport" branches
that are cut from `master` when the release process begins.
Each release "line" (e.g. 0.34 or 0.33) has its own long-lived backport branch,
and the backport branches have names like `v0.34.x` or `v0.33.x`
(literally, `x`; it is not a placeholder in this case). Tendermint only
maintains the last two releases at a time (the oldest release is predominantly
just security patches).
## Backporting
As non-breaking changes land on `master`, they should also be backported
to these backport branches.
We use Mergify's [backport feature](https://mergify.io/features/backports) to automatically backport
to the needed branch. There should be a label for any backport branch that you'll be targeting.
To notify the bot to backport a pull request, mark the pull request with the label corresponding
to the correct backport branch. For example, to backport to v0.35.x, add the label `S:backport-to-v0.35.x`.
Once the original pull request is merged, the bot will try to cherry-pick the pull request
to the backport branch. If the bot fails to backport, it will open a pull request.
The author of the original pull request is responsible for solving the conflicts and
merging the pull request.
### Creating a backport branch
If this is the first release candidate for a major release, you get to have the honor of creating
the backport branch!
Note that, after creating the backport branch, you'll also need to update the
tags on `master` so that `go mod` is able to order the branches correctly. You
should tag `master` with a "dev" tag that is "greater than" the backport
branches tags. See [#6072](https://github.com/tendermint/tendermint/pull/6072)
for more context.
In the following example, we'll assume that we're making a backport branch for
the 0.35.x line.
1. Start on `master`
2. Create and push the backport branch:
`git checkout -b v0.35.x; git push origin v0.35.x`
3. Go back to master and tag it as the dev branch for the _next_ major release and push it back up:
`git tag -a v0.36.0-dev -m "Development base for Tendermint v0.36."; git push origin v0.36.0-dev`
4. Create a new workflow (still on master) to run e2e nightlies for the new backport branch.
(See https://github.com/tendermint/tendermint/blob/master/.github/workflows/e2e-nightly-master.yml
for an example.)
5. Add a new section to the Mergify config (`.github/mergify.yml`) to enable the
backport bot to work on this branch, and add a corresponding `S:backport-to-v0.35.x`
[label](https://github.com/tendermint/tendermint/labels) so the bot can be triggered.
## Release candidates
Before creating an official release, especially a major release, we may want to create a
release candidate (RC) for our friends and partners to test out. We use git tags to
create RCs, and we build them off of backport branches.
Tags for RCs should follow the "standard" release naming conventions, with `-rcX` at the end
(for example, `v0.35.0-rc0`).
(Note that branches and tags _cannot_ have the same names, so it's important that these branches
have distinct names from the tags/release names.)
If this is the first RC for a major release, you'll have to make a new backport branch (see above).
Otherwise:
1. Start from the backport branch (e.g. `v0.35.x`).
2. Run the integration tests and the e2e nightlies
(which can be triggered from the Github UI;
e.g., https://github.com/tendermint/tendermint/actions/workflows/e2e-nightly-34x.yml).
3. Prepare the changelog:
- Move the changes included in `CHANGELOG_PENDING.md` into `CHANGELOG.md`. Each RC should have
its own changelog section. These will be squashed when the final candidate is released.
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all PRs
- Ensure that `UPGRADING.md` is up-to-date and includes notes on any breaking changes
or other upgrading flows.
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary.
Check the changelog for breaking changes in these components.
- Bump ABCI protocol version in `version.go`, if necessary
4. Open a PR with these changes against the backport branch.
5. Once these changes have landed on the backport branch, be sure to pull them back down locally.
6. Once you have the changes locally, create the new tag, specifying a name and a tag "message":
`git tag -a v0.35.0-rc0 -m "Release Candidate v0.35.0-rc0"`
7. Push the tag back up to origin:
`git push origin v0.35.0-rc0`
Now the tag should be available on the repo's releases page.
8. Future RCs will continue to be built off of this branch.
Note that this process should only be used for "true" RCs--
release candidates that, if successful, will be the next release.
For more experimental "RCs," create a new, short-lived branch and tag that instead.
## Major release
This major release process assumes that this release was preceded by release candidates.
If there were no release candidates, begin by creating a backport branch, as described above.
1. Start on the backport branch (e.g. `v0.35.x`)
2. Run integration tests (`make test_integrations`) and the e2e nightlies.
3. Prepare the release:
- "Squash" changes from the changelog entries for the RCs into a single entry,
and add all changes included in `CHANGELOG_PENDING.md`.
(Squashing includes both combining all entries, as well as removing or simplifying
any intra-RC changes. It may also help to alphabetize the entries by package name.)
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all PRs
- Ensure that `UPGRADING.md` is up-to-date and includes notes on any breaking changes
or other upgrading flows.
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
4. Open a PR with these changes against the backport branch.
5. Once these changes are on the backport branch, push a tag with prepared release details.
This will trigger the actual release `v0.35.0`.
- `git tag -a v0.35.0 -m 'Release v0.35.0'`
- `git push origin v0.35.0`
6. Make sure that `master` is updated with the latest `CHANGELOG.md`, `CHANGELOG_PENDING.md`, and `UPGRADING.md`.
7. Add the release to the documentation site generator config (see
[DOCS_README.md](./docs/DOCS_README.md) for more details). In summary:
- Start on branch `master`.
- Add a new line at the bottom of [`docs/versions`](./docs/versions) to
ensure the newest release is the default for the landing page.
- Add a new entry to `themeConfig.versions` in
[`docs/.vuepress/config.js`](./docs/.vuepress/config.js) to include the
release in the dropdown versions menu.
## Minor release (point releases)
Minor releases are done differently from major releases: They are built off of
long-lived backport branches, rather than from master. As non-breaking changes
land on `master`, they should also be backported into these backport branches.
Minor releases don't have release candidates by default, although any tricky
changes may merit a release candidate.
To create a minor release:
1. Checkout the long-lived backport branch: `git checkout v0.35.x`
2. Run integration tests (`make test_integrations`) and the nightlies.
3. Check out a new branch and prepare the release:
- Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for all issues
- Run `bash ./scripts/authors.sh` to get a list of authors since the latest release, and add the GitHub aliases of external contributors to the top of the CHANGELOG. To lookup an alias from an email, try `bash ./scripts/authors.sh <email>`
- Reset the `CHANGELOG_PENDING.md`
- Bump the TMDefaultVersion in `version.go`
- Bump the ABCI version number, if necessary.
(Note that ABCI follows semver, and that ABCI versions are the only versions
which can change during minor releases, and only field additions are valid minor changes.)
4. Open a PR with these changes that will land them back on `v0.35.x`
5. Once this change has landed on the backport branch, make sure to pull it locally, then push a tag.
- `git tag -a v0.35.1 -m 'Release v0.35.1'`
- `git push origin v0.35.1`
6. Create a pull request back to master with the CHANGELOG & version changes from the latest release.
- Remove all `R:minor` labels from the pull requests that were included in the release.
- Do not merge the backport branch into master.


@@ -87,7 +87,7 @@ type ReqRes struct {
*sync.WaitGroup
*types.Response // Not set atomically, so be sure to use WaitGroup.
mtx tmsync.RWMutex
mtx tmsync.Mutex
done bool // Gets set to true once *after* WaitGroup.Done().
cb func(*types.Response) // A single callback that may be set.
}
@@ -137,16 +137,16 @@ func (r *ReqRes) InvokeCallback() {
//
// ref: https://github.com/tendermint/tendermint/issues/5439
func (r *ReqRes) GetCallback() func(*types.Response) {
r.mtx.RLock()
defer r.mtx.RUnlock()
r.mtx.Lock()
defer r.mtx.Unlock()
return r.cb
}
// SetDone marks the ReqRes object as done.
func (r *ReqRes) SetDone() {
r.mtx.Lock()
defer r.mtx.Unlock()
r.done = true
r.mtx.Unlock()
}
func waitGroup1() (wg *sync.WaitGroup) {


@@ -13,7 +13,7 @@ type Creator func() (Client, error)
// NewLocalCreator returns a Creator for the given app,
// which will be running locally.
func NewLocalCreator(app types.Application) Creator {
mtx := new(tmsync.RWMutex)
mtx := new(tmsync.Mutex)
return func() (Client, error) {
return NewLocalClient(mtx, app), nil


@@ -24,7 +24,7 @@ type grpcClient struct {
conn *grpc.ClientConn
chReqRes chan *ReqRes // dispatches "async" responses to callbacks *in order*, needed by mempool
mtx tmsync.RWMutex
mtx tmsync.Mutex
addr string
err error
resCb func(*types.Request, *types.Response) // listens to all callbacks
@@ -149,8 +149,8 @@ func (cli *grpcClient) StopForError(err error) {
}
func (cli *grpcClient) Error() error {
cli.mtx.RLock()
defer cli.mtx.RUnlock()
cli.mtx.Lock()
defer cli.mtx.Unlock()
return cli.err
}
@@ -158,8 +158,8 @@ func (cli *grpcClient) Error() error {
// NOTE: callback may get internally generated flush responses.
func (cli *grpcClient) SetResponseCallback(resCb Callback) {
cli.mtx.Lock()
defer cli.mtx.Unlock()
cli.resCb = resCb
cli.mtx.Unlock()
}
//----------------------------------------


@@ -15,7 +15,7 @@ import (
type localClient struct {
service.BaseService
mtx *tmsync.RWMutex
mtx *tmsync.Mutex
types.Application
Callback
}
@@ -26,24 +26,22 @@ var _ Client = (*localClient)(nil)
// methods of the given app.
//
// Both Async and Sync methods ignore the given context.Context parameter.
func NewLocalClient(mtx *tmsync.RWMutex, app types.Application) Client {
func NewLocalClient(mtx *tmsync.Mutex, app types.Application) Client {
if mtx == nil {
mtx = &tmsync.RWMutex{}
mtx = new(tmsync.Mutex)
}
cli := &localClient{
mtx: mtx,
Application: app,
}
cli.BaseService = *service.NewBaseService(nil, "localClient", cli)
return cli
}
func (app *localClient) SetResponseCallback(cb Callback) {
app.mtx.Lock()
defer app.mtx.Unlock()
app.Callback = cb
app.mtx.Unlock()
}
// TODO: change types.Application to include Error()?
@@ -67,8 +65,8 @@ func (app *localClient) EchoAsync(ctx context.Context, msg string) (*ReqRes, err
}
func (app *localClient) InfoAsync(ctx context.Context, req types.RequestInfo) (*ReqRes, error) {
app.mtx.RLock()
defer app.mtx.RUnlock()
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Info(req)
return app.callback(
@@ -100,8 +98,8 @@ func (app *localClient) CheckTxAsync(ctx context.Context, req types.RequestCheck
}
func (app *localClient) QueryAsync(ctx context.Context, req types.RequestQuery) (*ReqRes, error) {
app.mtx.RLock()
defer app.mtx.RUnlock()
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Query(req)
return app.callback(
@@ -215,8 +213,8 @@ func (app *localClient) EchoSync(ctx context.Context, msg string) (*types.Respon
}
func (app *localClient) InfoSync(ctx context.Context, req types.RequestInfo) (*types.ResponseInfo, error) {
app.mtx.RLock()
defer app.mtx.RUnlock()
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Info(req)
return &res, nil
@@ -249,8 +247,8 @@ func (app *localClient) QuerySync(
ctx context.Context,
req types.RequestQuery,
) (*types.ResponseQuery, error) {
app.mtx.RLock()
defer app.mtx.RUnlock()
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Query(req)
return &res, nil


@@ -39,7 +39,7 @@ type socketClient struct {
reqQueue chan *reqResWithContext
mtx tmsync.RWMutex
mtx tmsync.Mutex
err error
reqSent *list.List // list of requests sent, waiting for response
resCb func(*types.Request, *types.Response) // called on all requests, if set.
@@ -102,8 +102,8 @@ func (cli *socketClient) OnStop() {
// Error returns an error if the client was stopped abruptly.
func (cli *socketClient) Error() error {
cli.mtx.RLock()
defer cli.mtx.RUnlock()
cli.mtx.Lock()
defer cli.mtx.Unlock()
return cli.err
}
@@ -113,8 +113,8 @@ func (cli *socketClient) Error() error {
// NOTE: callback may get internally generated flush responses.
func (cli *socketClient) SetResponseCallback(resCb Callback) {
cli.mtx.Lock()
defer cli.mtx.Unlock()
cli.resCb = resCb
cli.mtx.Unlock()
}
//----------------------------------------


@@ -29,11 +29,12 @@ var ReIndexEventCmd = &cobra.Command{
Use: "reindex-event",
Short: "reindex events to the event store backends",
Long: `
reindex-event is an offline tooling to re-index block and tx events to the eventsinks,
you can run this command when the event store backend dropped/disconnected or you want to replace the backend.
The default start-height is 0, meaning the tooling will start reindex from the base block height(inclusive); and the
default end-height is 0, meaning the tooling will reindex until the latest block height(inclusive). User can omits
either or both arguments.
reindex-event is an offline tooling to re-index block and tx events to the eventsinks,
you can run this command when the event store backend dropped/disconnected or you want to
replace the backend. The default start-height is 0, meaning the tooling will start
reindex from the base block height(inclusive); and the default end-height is 0, meaning
the tooling will reindex until the latest block height(inclusive). User can omit
either or both arguments.
`,
Example: `
tendermint reindex-event


@@ -37,7 +37,7 @@ var ResetPrivValidatorCmd = &cobra.Command{
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetAll(cmd *cobra.Command, args []string) error {
return ResetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidator.KeyFile(),
return ResetAll(config.DBDir(), config.PrivValidator.KeyFile(),
config.PrivValidator.StateFile(), logger)
}
@@ -49,12 +49,7 @@ func resetPrivValidator(cmd *cobra.Command, args []string) error {
// ResetAll removes address book files plus all data, and resets the privValdiator data.
// Exported so other CLI tools can use it.
func ResetAll(dbDir, addrBookFile, privValKeyFile, privValStateFile string, logger log.Logger) error {
if keepAddrBook {
logger.Info("The address book remains intact")
} else {
removeAddrBook(addrBookFile, logger)
}
func ResetAll(dbDir, privValKeyFile, privValStateFile string, logger log.Logger) error {
if err := os.RemoveAll(dbDir); err == nil {
logger.Info("Removed all blockchain history", "dir", dbDir)
} else {
@@ -87,11 +82,3 @@ func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger) err
}
return nil
}
func removeAddrBook(addrBookFile string, logger log.Logger) {
if err := os.Remove(addrBookFile); err == nil {
logger.Info("Removed existing address book", "file", addrBookFile)
} else if !os.IsNotExist(err) {
logger.Info("Error removing address book", "file", addrBookFile, "err", err)
}
}


@@ -0,0 +1,46 @@
package commands
import (
"fmt"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/state"
)
var RollbackStateCmd = &cobra.Command{
Use: "rollback",
Short: "rollback tendermint state by one height",
Long: `
A state rollback is performed to recover from an incorrect application state transition,
when Tendermint has persisted an incorrect app hash and is thus unable to make
progress. Rollback overwrites a state at height n with the state at height n - 1.
The application should also roll back to height n - 1. No blocks are removed, so upon
restarting Tendermint the transactions in block n will be re-executed against the
application.
`,
RunE: func(cmd *cobra.Command, args []string) error {
height, hash, err := RollbackState(config)
if err != nil {
return fmt.Errorf("failed to rollback state: %w", err)
}
fmt.Printf("Rolled back state to height %d and hash %v", height, hash)
return nil
},
}
// RollbackState takes the state at the current height n and overwrites it with the state
// at height n - 1. Note state here refers to tendermint state not application state.
// Returns the latest state height and app hash alongside an error if there was one.
func RollbackState(config *cfg.Config) (int64, []byte, error) {
// use the parsed config to load the block and state store
blockStore, stateStore, err := loadStateAndBlockStore(config)
if err != nil {
return -1, nil, err
}
// rollback the last state
return state.Rollback(blockStore, stateStore)
}


@@ -70,10 +70,6 @@ func AddNodeFlags(cmd *cobra.Command) {
// rpc flags
cmd.Flags().String("rpc.laddr", config.RPC.ListenAddress, "RPC listen address. Port required")
cmd.Flags().String(
"rpc.grpc-laddr",
config.RPC.GRPCListenAddress,
"GRPC listen address (BroadcastTx only). Port required")
cmd.Flags().Bool("rpc.unsafe", config.RPC.Unsafe, "enabled unsafe rpc methods")
cmd.Flags().String("rpc.pprof-laddr", config.RPC.PprofListenAddress, "pprof listen address (https://golang.org/pkg/net/http/pprof)")
@@ -84,8 +80,6 @@ func AddNodeFlags(cmd *cobra.Command) {
"node listen address. (0.0.0.0:0 means any interface, any port)")
cmd.Flags().String("p2p.seeds", config.P2P.Seeds, "comma-delimited ID@host:port seed nodes")
cmd.Flags().String("p2p.persistent-peers", config.P2P.PersistentPeers, "comma-delimited ID@host:port persistent peers")
cmd.Flags().String("p2p.unconditional-peer-ids",
config.P2P.UnconditionalPeerIDs, "comma-delimited IDs of unconditional peers")
cmd.Flags().Bool("p2p.upnp", config.P2P.UPNP, "enable/disable UPNP port forwarding")
cmd.Flags().Bool("p2p.pex", config.P2P.PexReactor, "enable/disable Peer-Exchange")
cmd.Flags().String("p2p.private-peer-ids", config.P2P.PrivatePeerIDs, "comma-delimited private peer IDs")


@@ -226,7 +226,6 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.AddrBookStrict = false
config.P2P.AllowDuplicateIP = true
if populatePersistentPeers {
persistentPeersWithoutSelf := make([]string, 0)


@@ -29,6 +29,7 @@ func main() {
cmd.GenNodeKeyCmd,
cmd.VersionCmd,
cmd.InspectCmd,
cmd.RollbackStateCmd,
cmd.MakeKeyMigrateCommand(),
debug.DebugCmd,
cli.NewCompletionCmd(rootCmd, true),


@@ -30,7 +30,6 @@ const (
ModeSeed = "seed"
BlockSyncV0 = "v0"
BlockSyncV2 = "v2"
MempoolV0 = "v0"
MempoolV1 = "v1"
@@ -54,16 +53,14 @@ var (
defaultPrivValKeyName = "priv_validator_key.json"
defaultPrivValStateName = "priv_validator_state.json"
defaultNodeKeyName = "node_key.json"
defaultAddrBookName = "addrbook.json"
defaultNodeKeyName = "node_key.json"
defaultConfigFilePath = filepath.Join(defaultConfigDir, defaultConfigFileName)
defaultGenesisJSONPath = filepath.Join(defaultConfigDir, defaultGenesisJSONName)
defaultPrivValKeyPath = filepath.Join(defaultConfigDir, defaultPrivValKeyName)
defaultPrivValStatePath = filepath.Join(defaultDataDir, defaultPrivValStateName)
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName)
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
)
// Config defines the top level configuration for a Tendermint node
@@ -142,9 +139,6 @@ func (cfg *Config) ValidateBasic() error {
if err := cfg.RPC.ValidateBasic(); err != nil {
return fmt.Errorf("error in [rpc] section: %w", err)
}
if err := cfg.P2P.ValidateBasic(); err != nil {
return fmt.Errorf("error in [p2p] section: %w", err)
}
if err := cfg.Mempool.ValidateBasic(); err != nil {
return fmt.Errorf("error in [mempool] section: %w", err)
}
@@ -461,24 +455,10 @@ type RPCConfig struct {
// A list of non simple headers the client is allowed to use with cross-domain requests.
CORSAllowedHeaders []string `mapstructure:"cors-allowed-headers"`
// TCP or UNIX socket address for the gRPC server to listen on
// NOTE: This server only supports /broadcast_tx_commit
// Deprecated: gRPC in the RPC layer of Tendermint will be removed in 0.36.
GRPCListenAddress string `mapstructure:"grpc-laddr"`
// Maximum number of simultaneous connections.
// Does not include RPC (HTTP&WebSocket) connections. See max-open-connections
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
// Deprecated: gRPC in the RPC layer of Tendermint will be removed in 0.36.
GRPCMaxOpenConnections int `mapstructure:"grpc-max-open-connections"`
// Activate unsafe RPC commands like /dial-persistent-peers and /unsafe-flush-mempool
Unsafe bool `mapstructure:"unsafe"`
// Maximum number of simultaneous connections (including WebSocket).
// Does not include gRPC connections. See grpc-max-open-connections
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
@@ -492,7 +472,7 @@ type RPCConfig struct {
MaxSubscriptionClients int `mapstructure:"max-subscription-clients"`
// Maximum number of unique queries a given client can /subscribe to
// If you're using GRPC (or Local RPC client) and /broadcast_tx_commit, set
// If you're using a Local RPC client and /broadcast_tx_commit, set this
// to the estimated maximum number of broadcast_tx_commit calls per block.
MaxSubscriptionsPerClient int `mapstructure:"max-subscriptions-per-client"`
@@ -533,12 +513,10 @@ type RPCConfig struct {
// DefaultRPCConfig returns a default configuration for the RPC server
func DefaultRPCConfig() *RPCConfig {
return &RPCConfig{
ListenAddress: "tcp://127.0.0.1:26657",
CORSAllowedOrigins: []string{},
CORSAllowedMethods: []string{http.MethodHead, http.MethodGet, http.MethodPost},
CORSAllowedHeaders: []string{"Origin", "Accept", "Content-Type", "X-Requested-With", "X-Server-Time"},
GRPCListenAddress: "",
GRPCMaxOpenConnections: 900,
ListenAddress: "tcp://127.0.0.1:26657",
CORSAllowedOrigins: []string{},
CORSAllowedMethods: []string{http.MethodHead, http.MethodGet, http.MethodPost},
CORSAllowedHeaders: []string{"Origin", "Accept", "Content-Type", "X-Requested-With", "X-Server-Time"},
Unsafe: false,
MaxOpenConnections: 900,
@@ -559,7 +537,6 @@ func DefaultRPCConfig() *RPCConfig {
func TestRPCConfig() *RPCConfig {
cfg := DefaultRPCConfig()
cfg.ListenAddress = "tcp://127.0.0.1:36657"
cfg.GRPCListenAddress = "tcp://127.0.0.1:36658"
cfg.Unsafe = true
return cfg
}
@@ -567,9 +544,6 @@ func TestRPCConfig() *RPCConfig {
// ValidateBasic performs basic validation (checking param bounds, etc.) and
// returns an error if any check fails.
func (cfg *RPCConfig) ValidateBasic() error {
if cfg.GRPCMaxOpenConnections < 0 {
return errors.New("grpc-max-open-connections can't be negative")
}
if cfg.MaxOpenConnections < 0 {
return errors.New("max-open-connections can't be negative")
}
@@ -647,25 +621,6 @@ type P2PConfig struct { //nolint: maligned
// UPNP port forwarding
UPNP bool `mapstructure:"upnp"`
// Path to address book
AddrBook string `mapstructure:"addr-book-file"`
// Set true for strict address routability rules
// Set false for private or local networks
AddrBookStrict bool `mapstructure:"addr-book-strict"`
// Maximum number of inbound peers
//
// TODO: Remove once p2p refactor is complete in favor of MaxConnections.
// ref: https://github.com/tendermint/tendermint/issues/5670
MaxNumInboundPeers int `mapstructure:"max-num-inbound-peers"`
// Maximum number of outbound peers to connect to, excluding persistent peers.
//
// TODO: Remove once p2p refactor is complete in favor of MaxConnections.
// ref: https://github.com/tendermint/tendermint/issues/5670
MaxNumOutboundPeers int `mapstructure:"max-num-outbound-peers"`
// MaxConnections defines the maximum number of connected peers (inbound and
// outbound).
MaxConnections uint16 `mapstructure:"max-connections"`
@@ -674,24 +629,6 @@ type P2PConfig struct { //nolint: maligned
// attempts per IP address.
MaxIncomingConnectionAttempts uint `mapstructure:"max-incoming-connection-attempts"`
// List of node IDs, to which a connection will be (re)established ignoring any existing limits
UnconditionalPeerIDs string `mapstructure:"unconditional-peer-ids"`
// Maximum pause when redialing a persistent peer (if zero, exponential backoff is used)
PersistentPeersMaxDialPeriod time.Duration `mapstructure:"persistent-peers-max-dial-period"`
// Time to wait before flushing messages out on the connection
FlushThrottleTimeout time.Duration `mapstructure:"flush-throttle-timeout"`
// Maximum size of a message packet payload, in bytes
MaxPacketMsgPayloadSize int `mapstructure:"max-packet-msg-payload-size"`
// Rate at which packets can be sent, in bytes/second
SendRate int64 `mapstructure:"send-rate"`
// Rate at which packets can be received, in bytes/second
RecvRate int64 `mapstructure:"recv-rate"`
// Set true to enable the peer-exchange reactor
PexReactor bool `mapstructure:"pex"`
@@ -710,13 +647,8 @@ type P2PConfig struct { //nolint: maligned
// Force dial to fail
TestDialFail bool `mapstructure:"test-dial-fail"`
// UseLegacy enables the "legacy" P2P implementation and
// disables the newer default implementation. This flag will
// be removed in a future release.
UseLegacy bool `mapstructure:"use-legacy"`
// Makes it possible to configure which queue backend the p2p
// layer uses. Options are: "fifo", "priority" and "wdrr",
// layer uses. Options are: "fifo" and "priority",
// with the default being "priority".
QueueType string `mapstructure:"queue-type"`
}
@@ -727,29 +659,14 @@ func DefaultP2PConfig() *P2PConfig {
ListenAddress: "tcp://0.0.0.0:26656",
ExternalAddress: "",
UPNP: false,
AddrBook: defaultAddrBookPath,
AddrBookStrict: true,
MaxNumInboundPeers: 40,
MaxNumOutboundPeers: 10,
MaxConnections: 64,
MaxIncomingConnectionAttempts: 100,
PersistentPeersMaxDialPeriod: 0 * time.Second,
FlushThrottleTimeout: 100 * time.Millisecond,
// The MTU (Maximum Transmission Unit) for Ethernet is 1500 bytes.
// The IP header and the TCP header take up 20 bytes each at least (unless
// optional header fields are used) and thus the max for (non-Jumbo frame)
// Ethernet is 1500 - 20 -20 = 1460
// Source: https://stackoverflow.com/a/3074427/820520
MaxPacketMsgPayloadSize: 1400,
SendRate: 5120000, // 5 mB/s
RecvRate: 5120000, // 5 mB/s
PexReactor: true,
AllowDuplicateIP: false,
HandshakeTimeout: 20 * time.Second,
DialTimeout: 3 * time.Second,
TestDialFail: false,
QueueType: "priority",
UseLegacy: false,
PexReactor: true,
AllowDuplicateIP: false,
HandshakeTimeout: 20 * time.Second,
DialTimeout: 3 * time.Second,
TestDialFail: false,
QueueType: "priority",
}
}
@@ -757,43 +674,10 @@ func DefaultP2PConfig() *P2PConfig {
func TestP2PConfig() *P2PConfig {
cfg := DefaultP2PConfig()
cfg.ListenAddress = "tcp://127.0.0.1:36656"
cfg.FlushThrottleTimeout = 10 * time.Millisecond
cfg.AllowDuplicateIP = true
return cfg
}
// AddrBookFile returns the full path to the address book
func (cfg *P2PConfig) AddrBookFile() string {
return rootify(cfg.AddrBook, cfg.RootDir)
}
// ValidateBasic performs basic validation (checking param bounds, etc.) and
// returns an error if any check fails.
func (cfg *P2PConfig) ValidateBasic() error {
if cfg.MaxNumInboundPeers < 0 {
return errors.New("max-num-inbound-peers can't be negative")
}
if cfg.MaxNumOutboundPeers < 0 {
return errors.New("max-num-outbound-peers can't be negative")
}
if cfg.FlushThrottleTimeout < 0 {
return errors.New("flush-throttle-timeout can't be negative")
}
if cfg.PersistentPeersMaxDialPeriod < 0 {
return errors.New("persistent-peers-max-dial-period can't be negative")
}
if cfg.MaxPacketMsgPayloadSize < 0 {
return errors.New("max-packet-msg-payload-size can't be negative")
}
if cfg.SendRate < 0 {
return errors.New("send-rate can't be negative")
}
if cfg.RecvRate < 0 {
return errors.New("recv-rate can't be negative")
}
return nil
}
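After this cleanup the P2P section exposes a much smaller surface: a single connection cap replaces the per-direction peer limits, and the queue backend is the main remaining tuning knob. A minimal sketch, restricted to fields that still appear in the struct above (it deliberately avoids any validation helper, since that surface is also being trimmed here):

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/config"
)

func main() {
	cfg := config.DefaultP2PConfig()

	// MaxConnections replaces the old max-num-inbound/outbound-peers pair.
	cfg.MaxConnections = 64
	cfg.MaxIncomingConnectionAttempts = 100

	// "priority" is the default queue backend; "fifo" is the other option
	// now that "wdrr" has been dropped.
	cfg.QueueType = "priority"

	fmt.Printf("listening on %s with up to %d peer connections\n",
		cfg.ListenAddress, cfg.MaxConnections)
}
```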
//-----------------------------------------------------------------------------
// MempoolConfig
@@ -1025,15 +909,13 @@ func (cfg *StateSyncConfig) ValidateBasic() error {
// allows them to catchup quickly by downloading blocks in parallel
// and verifying their commits.
type BlockSyncConfig struct {
Enable bool `mapstructure:"enable"`
Version string `mapstructure:"version"`
Enable bool `mapstructure:"enable"`
}
// DefaultBlockSyncConfig returns a default configuration for the block sync service
func DefaultBlockSyncConfig() *BlockSyncConfig {
return &BlockSyncConfig{
Enable: true,
Version: BlockSyncV0,
Enable: true,
}
}
@@ -1043,16 +925,7 @@ func TestBlockSyncConfig() *BlockSyncConfig {
}
// ValidateBasic performs basic validation.
func (cfg *BlockSyncConfig) ValidateBasic() error {
switch cfg.Version {
case BlockSyncV0:
return nil
case BlockSyncV2:
return errors.New("blocksync version v2 is no longer supported. Please use v0")
default:
return fmt.Errorf("unknown blocksync version %s", cfg.Version)
}
}
func (cfg *BlockSyncConfig) ValidateBasic() error { return nil }
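With the version field gone, the block sync section reduces to a single switch and `ValidateBasic` has nothing left to check. A minimal sketch:

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/config"
)

func main() {
	cfg := config.DefaultBlockSyncConfig()
	cfg.Enable = true // the only remaining block sync option

	// Always nil now that the v0/v2 version switch is gone.
	fmt.Println("validate:", cfg.ValidateBasic())
}
```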
//-----------------------------------------------------------------------------
// ConsensusConfig

View File

@@ -66,7 +66,6 @@ func TestRPCConfigValidateBasic(t *testing.T) {
assert.NoError(t, cfg.ValidateBasic())
fieldsToTest := []string{
"GRPCMaxOpenConnections",
"MaxOpenConnections",
"MaxSubscriptionClients",
"MaxSubscriptionsPerClient",
@@ -82,26 +81,6 @@ func TestRPCConfigValidateBasic(t *testing.T) {
}
}
func TestP2PConfigValidateBasic(t *testing.T) {
cfg := TestP2PConfig()
assert.NoError(t, cfg.ValidateBasic())
fieldsToTest := []string{
"MaxNumInboundPeers",
"MaxNumOutboundPeers",
"FlushThrottleTimeout",
"MaxPacketMsgPayloadSize",
"SendRate",
"RecvRate",
}
for _, fieldName := range fieldsToTest {
reflect.ValueOf(cfg).Elem().FieldByName(fieldName).SetInt(-1)
assert.Error(t, cfg.ValidateBasic())
reflect.ValueOf(cfg).Elem().FieldByName(fieldName).SetInt(0)
}
}
func TestMempoolConfigValidateBasic(t *testing.T) {
cfg := TestMempoolConfig()
assert.NoError(t, cfg.ValidateBasic())
@@ -128,13 +107,6 @@ func TestStateSyncConfigValidateBasic(t *testing.T) {
func TestBlockSyncConfigValidateBasic(t *testing.T) {
cfg := TestBlockSyncConfig()
assert.NoError(t, cfg.ValidateBasic())
// tamper with version
cfg.Version = "v2"
assert.Error(t, cfg.ValidateBasic())
cfg.Version = "invalid"
assert.Error(t, cfg.ValidateBasic())
}
func TestConsensusConfig_ValidateBasic(t *testing.T) {

View File

@@ -193,26 +193,10 @@ cors-allowed-methods = [{{ range .RPC.CORSAllowedMethods }}{{ printf "%q, " . }}
# A list of non simple headers the client is allowed to use with cross-domain requests
cors-allowed-headers = [{{ range .RPC.CORSAllowedHeaders }}{{ printf "%q, " . }}{{end}}]
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
# Deprecated gRPC in the RPC layer of Tendermint will be deprecated in 0.36.
grpc-laddr = "{{ .RPC.GRPCListenAddress }}"
# Maximum number of simultaneous connections.
# Does not include RPC (HTTP&WebSocket) connections. See max-open-connections
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
# 1024 - 40 - 10 - 50 = 924 = ~900
# Deprecated gRPC in the RPC layer of Tendermint will be deprecated in 0.36.
grpc-max-open-connections = {{ .RPC.GRPCMaxOpenConnections }}
# Activate unsafe RPC commands like /dial-seeds and /unsafe-flush-mempool
unsafe = {{ .RPC.Unsafe }}
# Maximum number of simultaneous connections (including WebSocket).
# Does not include gRPC connections. See grpc-max-open-connections
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
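The limit guidance here is simple arithmetic against the process's file-descriptor budget; a small sketch using the example numbers from the comment block above:

```go
package main

import "fmt"

func main() {
	// Example numbers only: a 1024 soft fd limit, minus peer connections
	// and other open files (wal, db, ...).
	softLimit := 1024
	peerConns := 40 + 10 // inbound + outbound peer connections
	otherFiles := 50     // wal, db and other open files

	// 1024 - 50 - 50 = 924, rounded down to the ~900 default.
	fmt.Println("suggested max-open-connections:", softLimit-peerConns-otherFiles)
}
```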
@@ -226,8 +210,8 @@ max-open-connections = {{ .RPC.MaxOpenConnections }}
max-subscription-clients = {{ .RPC.MaxSubscriptionClients }}
# Maximum number of unique queries a given client can /subscribe to
# If you're using GRPC (or Local RPC client) and /broadcast_tx_commit, set to
# the estimated # maximum number of broadcast_tx_commit calls per block.
# If you're using a Local RPC client and /broadcast_tx_commit, set this
# to the estimated maximum number of broadcast_tx_commit calls per block.
max-subscriptions-per-client = {{ .RPC.MaxSubscriptionsPerClient }}
# How long to wait for a tx to be committed during /broadcast_tx_commit.
@@ -265,9 +249,6 @@ pprof-laddr = "{{ .RPC.PprofListenAddress }}"
#######################################################
[p2p]
# Enable the legacy p2p layer.
use-legacy = {{ .P2P.UseLegacy }}
# Select the p2p internal queue
queue-type = "{{ .P2P.QueueType }}"
@@ -299,62 +280,12 @@ persistent-peers = "{{ .P2P.PersistentPeers }}"
# UPNP port forwarding
upnp = {{ .P2P.UPNP }}
# Path to address book
# TODO: Remove once p2p refactor is complete in favor of peer store.
addr-book-file = "{{ js .P2P.AddrBook }}"
# Set true for strict address routability rules
# Set false for private or local networks
addr-book-strict = {{ .P2P.AddrBookStrict }}
# Maximum number of inbound peers
#
# TODO: Remove once p2p refactor is complete in favor of MaxConnections.
# ref: https://github.com/tendermint/tendermint/issues/5670
max-num-inbound-peers = {{ .P2P.MaxNumInboundPeers }}
# Maximum number of outbound peers to connect to, excluding persistent peers
#
# TODO: Remove once p2p refactor is complete in favor of MaxConnections.
# ref: https://github.com/tendermint/tendermint/issues/5670
max-num-outbound-peers = {{ .P2P.MaxNumOutboundPeers }}
# Maximum number of connections (inbound and outbound).
max-connections = {{ .P2P.MaxConnections }}
# Rate limits the number of incoming connection attempts per IP address.
max-incoming-connection-attempts = {{ .P2P.MaxIncomingConnectionAttempts }}
# List of node IDs, to which a connection will be (re)established ignoring any existing limits
# TODO: Remove once p2p refactor is complete.
# ref: https://github.com/tendermint/tendermint/issues/5670
unconditional-peer-ids = "{{ .P2P.UnconditionalPeerIDs }}"
# Maximum pause when redialing a persistent peer (if zero, exponential backoff is used)
# TODO: Remove once p2p refactor is complete
# ref: https:#github.com/tendermint/tendermint/issues/5670
persistent-peers-max-dial-period = "{{ .P2P.PersistentPeersMaxDialPeriod }}"
# Time to wait before flushing messages out on the connection
# TODO: Remove once p2p refactor is complete
# ref: https:#github.com/tendermint/tendermint/issues/5670
flush-throttle-timeout = "{{ .P2P.FlushThrottleTimeout }}"
# Maximum size of a message packet payload, in bytes
# TODO: Remove once p2p refactor is complete
# ref: https:#github.com/tendermint/tendermint/issues/5670
max-packet-msg-payload-size = {{ .P2P.MaxPacketMsgPayloadSize }}
# Rate at which packets can be sent, in bytes/second
# TODO: Remove once p2p refactor is complete
# ref: https:#github.com/tendermint/tendermint/issues/5670
send-rate = {{ .P2P.SendRate }}
# Rate at which packets can be received, in bytes/second
# TODO: Remove once p2p refactor is complete
# ref: https:#github.com/tendermint/tendermint/issues/5670
recv-rate = {{ .P2P.RecvRate }}
# Set true to enable the peer-exchange reactor
pex = {{ .P2P.PexReactor }}
@@ -477,11 +408,6 @@ fetchers = "{{ .StateSync.Fetchers }}"
# and verifying their commits
enable = {{ .BlockSync.Enable }}
# Block Sync version to use:
# 1) "v0" (default) - the standard Block Sync implementation
# 2) "v2" - DEPRECATED, please use v0
version = "{{ .BlockSync.Version }}"
#######################################################
### Consensus Configuration Options ###
#######################################################

View File

@@ -1,3 +1,4 @@
//go:build !libsecp256k1
// +build !libsecp256k1
package secp256k1

View File

@@ -48,10 +48,6 @@ module.exports = {
{
title: 'Resources',
children: [
{
title: 'Developer Sessions',
path: '/DEV_SESSIONS.html'
},
{
title: 'RPC',
path: 'https://docs.tendermint.com/master/rpc/',

View File

@@ -1,7 +1,6 @@
---
order: false
parent:
title: "Building Applications"
order: 3
---
# Apps
---

View File

@@ -25,6 +25,7 @@
- April 28, 2021: Specify search capabilities are only supported through the KV indexer (@marbar3778)
- May 19, 2021: Update the SQL schema and the eventsink interface (@jayt106)
- Aug 30, 2021: Update the SQL schema and the psql implementation (@creachadair)
- Oct 5, 2021: Clarify goals and implementation changes (@creachadair)
## Status
@@ -73,19 +74,38 @@ the database used.
We will adopt a similar approach to that of the Cosmos SDK's `KVStore` state
listening described in [ADR-038](https://github.com/cosmos/cosmos-sdk/blob/master/docs/architecture/adr-038-state-listening.md).
Namely, we will perform the following:
We will implement the following changes:
- Introduce a new interface, `EventSink`, that all data sinks must implement.
- Augment the existing `tx_index.indexer` configuration to now accept a series
of one or more indexer types, i.e sinks.
of one or more indexer types, i.e., sinks.
- Combine the current `TxIndexer` and `BlockIndexer` into a single `KVEventSink`
that implements the `EventSink` interface.
- Introduce an additional `EventSink` that is backed by [PostgreSQL](https://www.postgresql.org/).
- Implement the necessary schemas to support both block and transaction event
indexing.
- Introduce an additional `EventSink` implementation that is backed by
[PostgreSQL](https://www.postgresql.org/).
- Implement the necessary schemas to support both block and transaction event indexing.
- Update `IndexerService` to use a series of `EventSinks`.
- Proxy queries to the relevant sink's native query layer.
- Update all relevant RPC methods.
In addition:
- The Postgres indexer implementation will _not_ implement the proprietary `kv`
query language. Users wishing to write queries against the Postgres indexer
will connect to the underlying DBMS directly and use SQL queries based on the
indexing schema.
Future custom indexer implementations will not be required to support the
proprietary query language either.
- For now, the existing `kv` indexer will be left in place with its current
query support, but will be marked as deprecated in a subsequent release, and
the documentation will be updated to encourage users who need to query the
event index to migrate to the Postgres indexer.
- In the future we may remove the `kv` indexer entirely, or replace it with a
different implementation; that decision is deferred as future work.
- In the future, we may remove the index query endpoints from the RPC service
entirely; that decision is deferred as future work, but recommended.
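To make the shape of the changes listed above concrete, here is a minimal, illustrative sketch of an `EventSink`-style interface and a toy sink. The method set and payload types below are simplified stand-ins, not the exact interface that lands in the indexer package:

```go
package main

import (
	"context"
	"fmt"
)

// BlockEvents and TxEvents are illustrative stand-ins for the block and
// transaction event payloads Tendermint emits; the real sinks operate on the
// concrete ABCI/indexer types instead.
type BlockEvents struct {
	Height int64
	Events map[string]string
}

type TxEvents struct {
	Height int64
	Hash   []byte
	Events map[string]string
}

// EventSink is a sketch of the interface described above: every sink
// (kv, psql, or a future custom backend) indexes block and tx events and
// reports its type so the IndexerService can fan out to a list of sinks.
type EventSink interface {
	Type() string
	IndexBlockEvents(ctx context.Context, b BlockEvents) error
	IndexTxEvents(ctx context.Context, txs []TxEvents) error
	Stop() error
}

// logSink is a toy sink that just prints what it would index.
type logSink struct{}

func (logSink) Type() string { return "log" }
func (logSink) IndexBlockEvents(_ context.Context, b BlockEvents) error {
	fmt.Println("index block", b.Height)
	return nil
}
func (logSink) IndexTxEvents(_ context.Context, txs []TxEvents) error {
	fmt.Println("index", len(txs), "txs")
	return nil
}
func (logSink) Stop() error { return nil }

func main() {
	// An IndexerService-like component would hold a slice of sinks
	// and call each of them in turn for every block.
	sinks := []EventSink{logSink{}}
	for _, s := range sinks {
		_ = s.IndexBlockEvents(context.Background(), BlockEvents{Height: 1})
	}
}
```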
## Detailed Design

View File

@@ -1,16 +0,0 @@
---
order: 1
parent:
title: Networks
order: 1
---
# Overview
Use [Docker Compose](./docker-compose.md) to spin up Tendermint testnets on your
local machine.
Use [Terraform and Ansible](./terraform-and-ansible.md) to deploy Tendermint
testnets to the cloud.
See the `tendermint testnet --help` command for more help initializing testnets.

View File

@@ -1,7 +1,7 @@
---
order: 1
parent:
title: Nodes
title: Node Operators
order: 4
---

View File

@@ -221,9 +221,6 @@ pprof-laddr = ""
#######################################################
[p2p]
# Enable the legacy p2p layer.
use-legacy = false
# Select the p2p internal queue
queue-type = "priority"

100
docs/roadmap/roadmap.md Normal file
View File

@@ -0,0 +1,100 @@
---
order: false
parent:
title: Roadmap
order: 7
---
# Tendermint Roadmap
*Last Updated: Friday 8 October 2021*
This document endeavours to inform the wider Tendermint community about development plans and priorities for Tendermint Core, and when we expect features to be delivered. It is intended to broadly inform all users of Tendermint, including application developers, node operators, integrators, and the engineering and research teams.
Anyone wishing to propose work to be a part of this roadmap should do so by opening an [issue](https://github.com/tendermint/spec/issues/new/choose) in the spec. Bug reports and other implementation concerns should be brought up in the [core repository](https://github.com/tendermint/tendermint).
This roadmap should be read as a high-level guide to plans and priorities, rather than a commitment to schedules and deliverables. Features earlier on the roadmap will generally be more specific and detailed than those later on. We will update this document periodically to reflect the current status.
The upgrades are split into two components: **Epics**, the features that define a release and to a large part dictate the timing of releases; and **minors**, features of smaller scale and lower priority, that could land in neighboring releases.
## V0.35 (completed Q3 2021)
### Prioritized Mempool
Transactions were previously added to blocks in the order in which they arrived at the mempool. Adding a priority field via `CheckTx` gives applications more control over which transactions make it into a block. This is important in the presence of transaction fees. [More](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-067-mempool-refactor.md)
### Refactor of the P2P Framework
The Tendermint P2P system is undergoing a large redesign to improve its performance and reliability. The first phase of this redesign is included in 0.35. This phase cleans up and decouples abstractions, improves peer lifecycle management and peer address handling, and enables pluggable transports. It is implemented to be protocol-compatible with the previous implementation. [More](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-062-p2p-architecture.md)
### State Sync Improvements
Following the initial version of state sync, several improvements have been made. These include the addition of [Reverse Sync](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-068-reverse-sync.md) needed for evidence handling, the introduction of a [P2P State Provider](https://github.com/tendermint/tendermint/pull/6807) as an alternative to RPC endpoints, new configuration parameters to adjust throughput, and several bug fixes.
### Custom event indexing + PSQL Indexer
Added a new `EventSink` interface to allow alternatives to Tendermint's proprietary transaction indexer. We also added a PostgreSQL Indexer implementation, allowing rich SQL-based index queries. [More](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-065-custom-event-indexing.md)
### Minor Works
- Several Go packages were reorganized to make the distinction between public APIs and implementation details more clear.
- Block indexer to index begin-block and end-block events. [More](https://github.com/tendermint/tendermint/pull/6226)
- Block, state, evidence, and light storage keys were reworked to preserve lexicographic order. This change requires a database migration. [More](https://github.com/tendermint/tendermint/pull/5771)
- Introduction of Tendermint modes. Part of this change includes the possibility to run a separate seed node that runs only the PEX reactor. [More](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-052-tendermint-mode.md)
## V0.36 (expected Q1 2022)
### ABCI++
An overhaul of the existing interface between the application and consensus, to give the application more control over block construction. ABCI++ adds new hooks allowing modification of transactions before they get into a block, verification of a block before voting, injection of signed information into votes, and more compact delivery of blocks after agreement (to allow for concurrent execution). [More](https://github.com/tendermint/spec/blob/master/rfc/004-abci%2B%2B.md)
### Proposer-Based Timestamps
Proposer-based timestamps are a replacement of [BFT time](https://docs.tendermint.com/master/spec/consensus/bft-time.html), whereby the proposer chooses a timestamp and validators vote on the block only if the timestamp is considered *timely*. This increases reliance on an accurate local clock, but in exchange makes block time more reliable and resistant to faults. This has important use cases in light clients, IBC relayers, CosmosHub inflation and enabling signature aggregation. [More](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-071-proposer-based-timestamps.md)
### Soft Upgrades
We are working on a suite of tools and patterns to make it easier for both node operators and application developers to quickly and safely upgrade to newer versions of Tendermint. [More](https://github.com/tendermint/spec/pull/222)
### Minor Works
- Remove the "legacy" P2P framework, and clean up of P2P package. [More](https://github.com/tendermint/tendermint/issues/5670)
- Remove the global mutex from the local ABCI client to enable application-controlled concurrency. [More](https://github.com/tendermint/tendermint/issues/7073)
- Enable P2P support for light clients
- Node orchestration of services + node initialization and composability
- Remove redundancy in several data structures. Remove unused components such as the block sync v2 reactor, gRPC in the RPC layer, and the socket-based remote signer.
- Improve node visibility by introducing more metrics
## V0.37 (expected Q3 2022)
### Complete P2P Refactor
Finish the final phase of the P2P system. Ongoing research and planning are taking place to decide whether to adopt [libp2p](https://libp2p.io/), alternative transports to `MConn` such as [QUIC](https://en.wikipedia.org/wiki/QUIC), and handshake/authentication protocols such as [Noise](https://noiseprotocol.org/). We are also researching more advanced gossiping techniques.
### Streamline Storage Engine
Tendermint currently has an abstraction to allow support for multiple database backends. This generality incurs maintenance overhead and interferes with application-specific optimizations that Tendermint could use (ACID guarantees, etc.). We plan to converge on a single database and streamline the Tendermint storage engine. [More](https://github.com/tendermint/tendermint/pull/6897)
### Evaluate Interprocess Communication
Tendermint nodes currently communicate with other processes over several mechanisms (for example ABCI, the remote signer, P2P, JSON-RPC, WebSockets, and events). Many of these have multiple implementations where a single one would suffice. Consolidate and clean up IPC. [More](https://github.com/tendermint/tendermint/blob/master/docs/rfc/rfc-002-ipc-ecosystem.md)
### Minor Works
- Amnesia attack handling. [More](https://github.com/tendermint/tendermint/issues/5270)
- Remove / Update Consensus WAL. [More](https://github.com/tendermint/tendermint/issues/6397)
- Signature Aggregation. [More](https://github.com/tendermint/tendermint/issues/1319)
- Remove gogoproto dependency. [More](https://github.com/tendermint/tendermint/issues/5446)
## V1.0 (expected Q4 2022)
Has the same feature set as V0.37 but with a focus on testing, protocol correctness, and minor tweaks to ensure a stable product. Such work might include extending the [consensus testing framework](https://github.com/tendermint/tendermint/issues/5920), the use of canary/long-lived testnets, and more extensive integration tests.
## Post 1.0 Work
- Improved block propagation with erasure coding and/or compact blocks. [More](https://github.com/tendermint/spec/issues/347)
- Consensus engine refactor
- Bidirectional ABCI
- Randomized Leader Election
- ZK proofs / other cryptographic primitives
- Multichain Tendermint

View File

@@ -1,7 +1,7 @@
---
order: 1
parent:
title: System
title: Understanding Tendermint
order: 5
---
@@ -10,7 +10,6 @@ parent:
This section dives into the internals of Go-Tendermint.
- [Using Tendermint](./using-tendermint.md)
- [Running in Production](./running-in-production.md)
- [Subscribing to events](./subscription.md)
- [Block Structure](./block-structure.md)
- [RPC](./rpc.md)
@@ -18,3 +17,5 @@ This section dives into the internals of Go-Tendermint.
- [State Sync](./state-sync.md)
- [Mempool](./mempool.md)
- [Light Client](./light-client.md)
For full specifications refer to the [spec repo](https://github.com/tendermint/spec).

View File

@@ -27,3 +27,11 @@ testing Tendermint networks.
This repository contains various different configurations of test networks for,
and relating to, Tendermint.
Use [Docker Compose](./docker-compose.md) to spin up Tendermint testnets on your
local machine.
Use [Terraform and Ansible](./terraform-and-ansible.md) to deploy Tendermint
testnets to the cloud.
See the `tendermint testnet --help` command for more help initializing testnets.

View File

@@ -4,4 +4,4 @@ parent:
order: 2
---
# Guides
# Tutorials

6
go.mod
View File

@@ -4,8 +4,7 @@ go 1.16
require (
github.com/BurntSushi/toml v0.4.1
github.com/Workiva/go-datastructures v1.0.53
github.com/adlio/schema v1.1.13
github.com/adlio/schema v1.1.14
github.com/btcsuite/btcd v0.22.0-beta
github.com/btcsuite/btcutil v1.0.3-0.20201208143702-a53e38424cce
github.com/fortytw2/leaktest v1.3.0
@@ -20,7 +19,6 @@ require (
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0
github.com/lib/pq v1.10.3
github.com/libp2p/go-buffer-pool v0.0.2
github.com/minio/highwayhash v1.0.2
github.com/mroth/weightedrand v0.4.1
github.com/oasisprotocol/curve25519-voi v0.0.0-20210609091139-0a56a4bca00b
github.com/ory/dockertest v3.3.5+incompatible
@@ -36,7 +34,7 @@ require (
github.com/tendermint/tm-db v0.6.4
github.com/vektra/mockery/v2 v2.9.4
golang.org/x/crypto v0.0.0-20210915214749-c084706c2272
golang.org/x/net v0.0.0-20210917221730-978cfadd31cf
golang.org/x/net v0.0.0-20211005001312-d4b1ae081e3b
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
google.golang.org/grpc v1.41.0
gopkg.in/check.v1 v1.0.0-20200902074654-038fdea0a05b // indirect

62
go.sum
View File

@@ -1,5 +1,6 @@
4d63.com/gochecknoglobals v0.0.0-20201008074935-acfc0b28355a h1:wFEQiK85fRsEVF0CRrPAos5LoAryUsIX1kPW/WrIqFw=
4d63.com/gochecknoglobals v0.0.0-20201008074935-acfc0b28355a/go.mod h1:wfdC5ZjKSPr7CybKEcgJhUOgeAQW1+7WcyK8OvUilfo=
bazil.org/fuse v0.0.0-20200407214033-5883e5a4b512/go.mod h1:FbcW6z/2VytnFDhZfumh8Ss8zxHE6qpMP5sHTRe0EaM=
bitbucket.org/creachadair/shell v0.0.6/go.mod h1:8Qqi/cYk7vPnsOePHroKXDJYmb5x7ENhtiFtfZq8K+M=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
@@ -52,8 +53,8 @@ contrib.go.opencensus.io/exporter/stackdriver v0.13.4/go.mod h1:aXENhDJ1Y4lIg4EU
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Antonboom/errname v0.1.4 h1:lGSlI42Gm4bI1e+IITtXJXvxFM8N7naWimVFKcb0McY=
github.com/Antonboom/errname v0.1.4/go.mod h1:jRXo3m0E0EuCnK3wbsSVH3X55Z4iTDLl6ZfCxwFj4TM=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78 h1:w+iIsaOQNcT7OZ575w+acHgRric5iCyQh+xv+KJ4HB8=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v0.4.1 h1:GaI7EiDXDRfa8VshkTj7Fym7ha+y8/XxIgD2okUIjLw=
github.com/BurntSushi/toml v0.4.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
@@ -72,8 +73,8 @@ github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3Q
github.com/Masterminds/semver v1.5.0/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y=
github.com/Masterminds/sprig v2.15.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o=
github.com/Masterminds/sprig v2.22.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o=
github.com/Microsoft/go-winio v0.4.14 h1:+hMXMk01us9KgxGb7ftKQt2Xpf5hH/yky+TDA+qxleU=
github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
github.com/Microsoft/go-winio v0.5.0 h1:Elr9Wn+sGKPlkaBvwu4mTrxtmOp3F3yV9qhaHbXGjwU=
github.com/Microsoft/go-winio v0.5.0/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5 h1:TngWCqHvy9oXAN6lEVMRuU21PR1EtLVZJmdB18Gu3Rw=
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5/go.mod h1:lmUJ/7eu/Q8D7ML55dXQrVaamCz2vxCfdQBasLZfHKk=
github.com/OneOfOne/xxhash v1.2.2 h1:KMrpdQIwFcEqXDklaen+P1axHaj9BSKzvpUUfnHldSE=
@@ -85,10 +86,8 @@ github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMx
github.com/StackExchange/wmi v1.2.1/go.mod h1:rcmrprowKIVzvc+NUiLncP2uuArMWLCbu9SBzvHz7e8=
github.com/VividCortex/gohistogram v1.0.0 h1:6+hBz+qvs0JOrrNhhmR7lFxo5sINxBCGXrdtl/UvroE=
github.com/VividCortex/gohistogram v1.0.0/go.mod h1:Pf5mBqqDxYaXu3hDrrU+w6nw50o/4+TcAqDqk/vUH7g=
github.com/Workiva/go-datastructures v1.0.53 h1:J6Y/52yX10Xc5JjXmGtWoSSxs3mZnGSaq37xZZh7Yig=
github.com/Workiva/go-datastructures v1.0.53/go.mod h1:1yZL+zfsztete+ePzZz/Zb1/t5BnDuE2Ya2MMGhzP6A=
github.com/adlio/schema v1.1.13 h1:LeNMVg5Z1FX+Qgz8tJUijBLRdcpbFUElz+d1489On98=
github.com/adlio/schema v1.1.13/go.mod h1:L5Z7tw+7lRK1Fnpi/LT/ooCP1elkXn0krMWBQHUhEDE=
github.com/adlio/schema v1.1.14 h1:lIjyp5/2wSuEOmeQGNPpaRsVGZRqz9A/B+PaMtEotaU=
github.com/adlio/schema v1.1.14/go.mod h1:hQveFEMiDlG/M9yz9RAajnH5DzT6nAfqOG9YkEQU2pg=
github.com/aead/siphash v1.0.1/go.mod h1:Nywa3cDsYNNK3gaciGTWPwHt0wlpNV15vwmswBAUSII=
github.com/afex/hystrix-go v0.0.0-20180502004556-fa1af6a1f4f5/go.mod h1:SkGFH1ia65gfNATL8TAiHDNxPzPdmEL5uirI2Uyuz6c=
github.com/ajstarks/svgo v0.0.0-20180226025133-644b8db467af/go.mod h1:K08gAheRH3/J6wwsYMMT4xOr94bZjxIelGM0+d/wbFw=
@@ -126,6 +125,7 @@ github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+Ce
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bits-and-blooms/bitset v1.2.0/go.mod h1:gIdJ4wp64HaoK2YrL1Q5/N7Y16edYb8uY+O0FJTyyDA=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
github.com/bketelsen/crypt v0.0.4/go.mod h1:aI6NrJ0pMGgvZKL1iVgXLnfIFJtfV+bKCoqOes/6LfM=
github.com/bkielbasa/cyclop v1.2.0 h1:7Jmnh0yL2DjKfw28p86YTd/B4lRGcNuu12sKE35sM7A=
@@ -160,9 +160,11 @@ github.com/charithe/durationcheck v0.0.8 h1:cnZrThioNW9gSV5JsRIXmkyHUbcDH7Y9hkzF
github.com/charithe/durationcheck v0.0.8/go.mod h1:SSbRIBVfMjCi/kEB6K65XEA83D6prSM8ap1UCpNKtgg=
github.com/chavacava/garif v0.0.0-20210405164556-e8a0a408d6af h1:spmv8nSH9h5oCQf40jt/ufBCt9j0/58u4G+rkeMqXGI=
github.com/chavacava/garif v0.0.0-20210405164556-e8a0a408d6af/go.mod h1:Qjyv4H3//PWVzTeCezG2b9IRn6myJxJSr4TD/xo6ojU=
github.com/checkpoint-restore/go-criu/v5 v5.0.0/go.mod h1:cfwC0EG7HMUenopBsUf9d89JlCLQIfgVcNsNN0t6T2M=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/cilium/ebpf v0.6.2/go.mod h1:4tRaxcgiL706VnOzHOdBlY8IEAIdxINsQBcU4xJJXRs=
github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag=
github.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp5jckzBHf4XRpQvBOLI+I=
github.com/clbanning/mxj v1.8.4/go.mod h1:BVjHeAH+rl9rs6f+QIpeRl0tfu10SXn1pUSa5PVGJng=
@@ -173,8 +175,9 @@ github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnht
github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/containerd/continuity v0.0.0-20190827140505-75bee3e2ccb6 h1:NmTXa/uVnDyp0TY5MKi197+3HWcnYWfnHGyaFthlnGw=
github.com/containerd/continuity v0.0.0-20190827140505-75bee3e2ccb6/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/console v1.0.2/go.mod h1:ytZPjGgY2oeTkAONYafi2kSj0aYggsf8acV1PGKCbzQ=
github.com/containerd/continuity v0.2.0 h1:j/9Wnn+hrEWjLvHuIxUU1YI5JjEjVlT2AA68cse9rwY=
github.com/containerd/continuity v0.2.0/go.mod h1:wCYX+dRqZdImhGucXOqTQn05AhX6EUDaGEMUzTFFpLg=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
@@ -192,6 +195,7 @@ github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:ma
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/cyphar/filepath-securejoin v0.2.2/go.mod h1:FpkQEhXnPnOthhzymB7CGsFk2G9VLXONKD9G7QGMM+4=
github.com/daixiang0/gci v0.2.9 h1:iwJvwQpBZmMg31w+QQ6jsyZ54KEATn6/nfARbBNW294=
github.com/daixiang0/gci v0.2.9/go.mod h1:+4dZ7TISfSmqfAGv59ePaHfNzgGtIkHAhhdKggP1JAc=
github.com/davecgh/go-spew v0.0.0-20161028175848-04cdfd42973b/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -253,6 +257,7 @@ github.com/fortytw2/leaktest v1.3.0 h1:u8491cBMTQ8ft8aeV+adlcytMZylmA5nnwwkRZjI8
github.com/fortytw2/leaktest v1.3.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g=
github.com/franela/goblin v0.0.0-20210519012713-85d372ac71e2/go.mod h1:VzmDKDJVZI3aJmnRI9VjAn9nJ8qPPsN1fqzr9dqInIo=
github.com/franela/goreq v0.0.0-20171204163338-bcd34c9993f8/go.mod h1:ZhphrRTfi2rbfLwlschooIH4+wKKDR4Pdxhh+TRoA20=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsnotify/fsnotify v1.5.1 h1:mZcQUHVQUQWoPXXtuf9yuEXKudkV2sx1E06UadKWpgI=
@@ -574,6 +579,7 @@ github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
@@ -592,7 +598,6 @@ github.com/ldez/tagliatelle v0.2.0/go.mod h1:8s6WJQwEYHbKZDsp/LjArytKOG8qaMrKQQ3
github.com/leodido/go-urn v1.1.0/go.mod h1:+cyI34gQWZcE1eQU7NVgKkkzdXDQHr1dBMtdAPozLkw=
github.com/letsencrypt/pkcs11key/v4 v4.0.0/go.mod h1:EFUvBDay26dErnNb70Nd0/VW3tJiIbETBPTl9ATXQag=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.8.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq v1.9.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq v1.10.2/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
@@ -645,7 +650,6 @@ github.com/miekg/dns v1.1.43/go.mod h1:+evo5L0630/F6ca/Z9+GAqzhjGyn8/c+TBaOyfEl0
github.com/miekg/pkcs11 v1.0.2/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
github.com/miekg/pkcs11 v1.0.3/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
github.com/minio/highwayhash v1.0.1/go.mod h1:BQskDq+xkJ12lmlUUi7U0M5Swg3EWR+dLTk+kldvVxY=
github.com/minio/highwayhash v1.0.2 h1:Aak5U0nElisjDCfPSG79Tgzkn2gl66NxOMspRrKnA/g=
github.com/minio/highwayhash v1.0.2/go.mod h1:BQskDq+xkJ12lmlUUi7U0M5Swg3EWR+dLTk+kldvVxY=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI=
@@ -664,6 +668,7 @@ github.com/mitchellh/mapstructure v1.4.2 h1:6h7AQ0yhTcIsmFmnAwQls75jp2Gzs4iB8W7p
github.com/mitchellh/mapstructure v1.4.2/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
github.com/mitchellh/reflectwalk v1.0.1/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
github.com/moby/sys/mountinfo v0.4.1/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
@@ -676,6 +681,7 @@ github.com/mozilla/scribe v0.0.0-20180711195314-fb71baf557c1/go.mod h1:FIczTrinK
github.com/mozilla/tls-observatory v0.0.0-20210609171429-7bc42856d2e5/go.mod h1:FUqVoUPHSEdDR0MnFM3Dh8AU0pZHLXUD127SAJGER/s=
github.com/mroth/weightedrand v0.4.1 h1:rHcbUBopmi/3x4nnrvwGJBhX9d0vk+KgoLUZeDP6YyI=
github.com/mroth/weightedrand v0.4.1/go.mod h1:3p2SIcC8al1YMzGhAIoXD+r9olo/g/cdJgAD905gyNE=
github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-proto-validators v0.0.0-20180403085117-0950a7990007/go.mod h1:m2XC9Qq0AlmmVksL6FktJCdTYyLk7V3fKyp0sl1yWQo=
@@ -724,12 +730,14 @@ github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1y
github.com/onsi/gomega v1.13.0 h1:7lLHu94wT9Ij0o6EWWclhu0aOh32VxhkwEJvzuWPeak=
github.com/onsi/gomega v1.13.0/go.mod h1:lRk9szgn8TxENtWd0Tp4c3wjlRfMTMH27I+3Je41yGY=
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk=
github.com/opencontainers/go-digest v1.0.0-rc1 h1:WzifXhOVOEOuFYOJAW6aQqW0TooG2iki3E3Ii+WN7gQ=
github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.0.1 h1:JMemWkRwHx4Zj+fVxWoMCFm/8sYGGrUVojFA6h/TRcI=
github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/runc v0.1.1 h1:GlxAyO6x8rfZYN9Tt0Kti5a/cP41iuiO2yYT0IJGY8Y=
github.com/opencontainers/runc v0.1.1/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v1.0.2 h1:opHZMaswlyxz1OuGpBE53Dwe4/xF7EZTY0A2L/FpCOg=
github.com/opencontainers/runc v1.0.2/go.mod h1:aTaHFFwQXuA71CiyxOdFFIorAoemI04suvGRQFzWTD0=
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/selinux v1.8.2/go.mod h1:MUIHuUEvKB1wtJjQdOyYRgOnLD2xAPP8dBsCoU0KuF8=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/opentracing/opentracing-go v1.2.0/go.mod h1:GxEUsuufX4nBwe+T+Wl9TAgYrxe9dPLANfrWvHYVTgc=
github.com/openzipkin/zipkin-go v0.2.5/go.mod h1:KpXfKdgRDnnhsxw4pNIH9Md5lyFqKUa4YDFlwRYAMyE=
@@ -748,7 +756,6 @@ github.com/petermattis/goid v0.0.0-20180202154549-b0b1615b78e5 h1:q2e307iGHPdTGp
github.com/petermattis/goid v0.0.0-20180202154549-b0b1615b78e5/go.mod h1:jvVRKCrJTQWu0XVbaOlby/2lO20uSCHEMzzplHXte1o=
github.com/phayes/checkstyle v0.0.0-20170904204023-bfd46e6a821d h1:CdDQnGF8Nq9ocOS/xlSptM1N3BbrA6/kmaep5ggwaIA=
github.com/phayes/checkstyle v0.0.0-20170904204023-bfd46e6a821d/go.mod h1:3OzsM7FXDQlpCiw2j81fOmAwQLnZnLGXVKUzeKQXIAw=
github.com/philhofer/fwd v1.1.1/go.mod h1:gk3iGcWd9+svBvR0sR+KPcfE+RNWozjowpeBVG3ZVNU=
github.com/pierrec/lz4 v1.0.2-0.20190131084431-473cd7ce01a1/go.mod h1:3/3N9NVKO0jef7pBehbT1qWhCMrIgbYNnFAZCqQ5LRc=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
@@ -832,6 +839,7 @@ github.com/sanposhiho/wastedassign/v2 v2.0.6/go.mod h1:KyZ0MWTwxxBmfwn33zh3k1dms
github.com/sasha-s/go-deadlock v0.2.1-0.20190427202633-1595213edefa h1:0U2s5loxrTy6/VgfVoLuVLFJcURKLH49ie0zSch7gh4=
github.com/sasha-s/go-deadlock v0.2.1-0.20190427202633-1595213edefa/go.mod h1:F73l+cr82YSh10GxyRI6qZiCgK64VaZjwesgfQ1/iLM=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/seccomp/libseccomp-golang v0.9.1/go.mod h1:GbW5+tmTXfcxTToHLXlScSlAvWlF4P2Ca7zGrPiEpWo=
github.com/securego/gosec/v2 v2.8.1 h1:Tyy/nsH39TYCOkqf5HAgRE+7B5D8sHDwPdXRgFWokh8=
github.com/securego/gosec/v2 v2.8.1/go.mod h1:pUmsq6+VyFEElJMUX+QB3p3LWNHXg1R3xh2ssVJPs8Q=
github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
@@ -842,7 +850,6 @@ github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxr
github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
@@ -906,6 +913,7 @@ github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5Cc
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/syndtr/goleveldb v1.0.1-0.20200815110645-5c35d600f0ca h1:Ld/zXl5t4+D69SiV4JoN7kkfvJdOWlPpfxrzxpLMoUk=
github.com/syndtr/goleveldb v1.0.1-0.20200815110645-5c35d600f0ca/go.mod h1:u2MKkTVTVJWe5D1rCvame8WqhBd88EuIwODJZ1VHCPM=
github.com/tdakkota/asciicheck v0.0.0-20200416200610-e657995f937b h1:HxLVTlqcHhFAz3nWUcuvpH7WuOMv8LQoCWmruLfFH2U=
@@ -918,7 +926,6 @@ github.com/tetafro/godot v1.4.9 h1:wsNd0RuUxISqqudFqchsSsMqsM188DoZVPBeKl87tP0=
github.com/tetafro/godot v1.4.9/go.mod h1:LR3CJpxDVGlYOWn3ZZg1PgNZdTUvzsZWu8xaEohUpn8=
github.com/timakin/bodyclose v0.0.0-20200424151742-cb6215831a94 h1:ig99OeTyDwQWhPe2iw9lwfQVF1KB3Q4fpP3X7/2VBG8=
github.com/timakin/bodyclose v0.0.0-20200424151742-cb6215831a94/go.mod h1:Qimiffbc6q9tBWlVV6x0P9sat/ao1xEkREYPPj9hphk=
github.com/tinylib/msgp v1.1.5/go.mod h1:eQsjooMTnV42mHu917E26IogZ2930nFyBQdofk10Udg=
github.com/tklauser/go-sysconf v0.3.7/go.mod h1:JZIdXh4RmBvZDBZ41ld2bGxRV3n4daiiqA3skYhAoQ4=
github.com/tklauser/numcpus v0.2.3/go.mod h1:vpEPS/JC+oZGGQ/My/vJnNsvMDQL6PwOqt8dsCw5j+E=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
@@ -929,8 +936,8 @@ github.com/tomarrell/wrapcheck/v2 v2.3.0/go.mod h1:aF5rnkdtqNWP/gC7vPUO5pKsB0Oac
github.com/tomasen/realip v0.0.0-20180522021738-f0c99a92ddce/go.mod h1:o8v6yHRoik09Xen7gje4m9ERNah1d1PPsVq1VEx9vE4=
github.com/tommy-muehle/go-mnd/v2 v2.4.0 h1:1t0f8Uiaq+fqKteUR4N9Umr6E99R+lDnLnq7PwX2PPE=
github.com/tommy-muehle/go-mnd/v2 v2.4.0/go.mod h1:WsUAkMJMYww6l/ufffCD3m+P7LEvr8TnZn9lwVDlgzw=
github.com/ttacon/chalk v0.0.0-20160626202418-22c06c80ed31/go.mod h1:onvgF043R+lC5RZ8IT9rBXDaEDnpnw/Cl+HFiw+v/7Q=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
github.com/tv42/httpunix v0.0.0-20191220191345-2ba4b9c3382c/go.mod h1:hzIxponao9Kjc7aWznkXaL4U4TWaDSs8zcsY4Ka08nM=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go v1.1.7/go.mod h1:kZn38zHttfInRq0xu/PH0az30d+z6vm202qpg1oXVMw=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
@@ -950,6 +957,8 @@ github.com/valyala/tcplisten v0.0.0-20161114210144-ceec8f93295a/go.mod h1:v3UYOV
github.com/vektra/mockery/v2 v2.9.4 h1:ZjpYWY+YLkDIKrKtFnYPxJax10lktcUapWZtOSg4g7g=
github.com/vektra/mockery/v2 v2.9.4/go.mod h1:2gU4Cf/f8YyC8oEaSXfCnZBMxMjMl/Ko205rlP0fO90=
github.com/viki-org/dnscache v0.0.0-20130720023526-c70c1f23c5d8/go.mod h1:dniwbG03GafCjFohMDmz6Zc6oCuiqgH6tGNyXTkHzXE=
github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778/go.mod h1:2MuV+tbUrU1zIOPMxZ5EncGwgmMJsa+9ucAQZXxsObs=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
@@ -1107,6 +1116,7 @@ golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwY
golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
@@ -1115,8 +1125,9 @@ golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT
golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210917221730-978cfadd31cf h1:R150MpwJIv1MpS0N/pc+NhTM8ajzvlmxlY5OYsrevXQ=
golang.org/x/net v0.0.0-20210917221730-978cfadd31cf/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211005001312-d4b1ae081e3b h1:SXy8Ld8oKlcogOvUAh0J5Pm5RKzgYBMMxLxt6n5XW50=
golang.org/x/net v0.0.0-20211005001312-d4b1ae081e3b/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1163,6 +1174,7 @@ golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190626221950-04f50cda93cb/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1176,8 +1188,10 @@ golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191210023423-ac6580df4449/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1202,6 +1216,7 @@ golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200814200057-3d37ad5750ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200909081042-eff7692f9009/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1217,6 +1232,7 @@ golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210426230700-d19ff857e887/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -1226,8 +1242,9 @@ golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210917161153-d61c044b1678 h1:J27LZFQBFoihqXoegpscI10HpjZ7B5WQLLKL2FZXQKw=
golang.org/x/sys v0.0.0-20210917161153-d61c044b1678/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211004093028-2c5d950f24ef h1:fPxZ3Umkct3LZ8gK9nbk+DWDJ9fstZa2grBn+lWVKPs=
golang.org/x/sys v0.0.0-20211004093028-2c5d950f24ef/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 h1:v+OssWQX+hTHEmOBgwxdZxK4zHq3yOs8F9J7mk0PY8E=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
@@ -1328,7 +1345,6 @@ golang.org/x/tools v0.0.0-20200831203904-5a2aa26beb65/go.mod h1:Cj7w3i3Rnn0Xh82u
golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE=
golang.org/x/tools v0.0.0-20201001104356-43ebab892c4c/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
golang.org/x/tools v0.0.0-20201002184944-ecd9fd270d5d/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
golang.org/x/tools v0.0.0-20201022035929-9cf592e881e9/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201023174141-c8cfbd0f21e6/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201028025901-8cd080b735b3/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=

View File

@@ -13,14 +13,9 @@ will no longer blocksync and thus no longer run the blocksync process.
Note, the blocksync reactor Service gossips entire block and relevant data such
that each receiving peer may construct the entire view of the blocksync state.
There are currently two versions of the blocksync reactor Service:
- v0: The initial implementation that is battle-tested, but whose test coverage
is lacking and is not formally verifiable.
- v2: The latest implementation that has much higher test coverage and is formally
verified. However, the current implementation of v2 is not as battle-tested and
is known to have various bugs that could make it unreliable in production
environments.
There is currently only one version of the blocksync reactor Service. It is
battle-tested, but its test coverage is lacking and it is not formally verified.
The v0 blocksync reactor Service has one p2p channel, BlockchainChannel. This
channel is responsible for handling messages that both request blocks and respond

View File

@@ -1,4 +1,4 @@
package v0
package blocksync
import (
"errors"

View File

@@ -1,4 +1,4 @@
package v0
package blocksync
import (
"fmt"

View File

@@ -1,4 +1,4 @@
package v0
package blocksync
import (
"fmt"
@@ -6,7 +6,6 @@ import (
"sync"
"time"
"github.com/tendermint/tendermint/internal/blocksync"
"github.com/tendermint/tendermint/internal/consensus"
"github.com/tendermint/tendermint/internal/p2p"
sm "github.com/tendermint/tendermint/internal/state"
@@ -18,30 +17,7 @@ import (
"github.com/tendermint/tendermint/types"
)
var (
_ service.Service = (*Reactor)(nil)
// ChannelShims contains a map of ChannelDescriptorShim objects, where each
// object wraps a reference to a legacy p2p ChannelDescriptor and the corresponding
// p2p proto.Message the new p2p Channel is responsible for handling.
//
//
// TODO: Remove once p2p refactor is complete.
// ref: https://github.com/tendermint/tendermint/issues/5670
ChannelShims = map[p2p.ChannelID]*p2p.ChannelDescriptorShim{
BlockSyncChannel: {
MsgType: new(bcproto.Message),
Descriptor: &p2p.ChannelDescriptor{
ID: byte(BlockSyncChannel),
Priority: 5,
SendQueueCapacity: 1000,
RecvBufferCapacity: 1024,
RecvMessageCapacity: blocksync.MaxMsgSize,
MaxSendBytes: 100,
},
},
}
)
var _ service.Service = (*Reactor)(nil)
const (
// BlockSyncChannel is a channel for blocks and status updates
@@ -59,6 +35,17 @@ const (
syncTimeout = 60 * time.Second
)
func GetChannelDescriptor() *p2p.ChannelDescriptor {
return &p2p.ChannelDescriptor{
ID: BlockSyncChannel,
MessageType: new(bcproto.Message),
Priority: 5,
SendQueueCapacity: 1000,
RecvBufferCapacity: 1024,
RecvMessageCapacity: MaxMsgSize,
}
}
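GetChannelDescriptor replaces the old ChannelShims map: the reactor now owns a plain descriptor and hands it to whatever wires up the channels. A self-contained sketch of that pattern with simplified stand-in types (the real descriptor lives in internal/p2p and also carries the proto message type):

```go
package main

import "fmt"

// channelDescriptor is a simplified stand-in for p2p.ChannelDescriptor.
type channelDescriptor struct {
	ID                  byte
	Priority            int
	SendQueueCapacity   int
	RecvBufferCapacity  int
	RecvMessageCapacity int
}

// getChannelDescriptor mirrors the reactor's GetChannelDescriptor above:
// the reactor exposes its channel parameters instead of a shim map.
func getChannelDescriptor() *channelDescriptor {
	return &channelDescriptor{
		ID:                  0x40, // stand-in for BlockSyncChannel
		Priority:            5,
		SendQueueCapacity:   1000,
		RecvBufferCapacity:  1024,
		RecvMessageCapacity: 1024 * 1024, // stand-in for MaxMsgSize
	}
}

func main() {
	// Node setup code would collect descriptors like this one and open
	// one channel per descriptor on the router.
	desc := getChannelDescriptor()
	fmt.Printf("channel 0x%02x, priority %d\n", desc.ID, desc.Priority)
}
```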
type consensusReactor interface {
// For when we switch from block sync reactor to the consensus
// machine.

View File

@@ -1,4 +1,4 @@
package v0
package blocksync
import (
"os"
@@ -65,8 +65,8 @@ func setup(
blockSync: true,
}
chDesc := p2p.ChannelDescriptor{ID: byte(BlockSyncChannel)}
rts.blockSyncChannels = rts.network.MakeChannelsNoCleanup(t, chDesc, new(bcproto.Message), int(chBuf))
chDesc := &p2p.ChannelDescriptor{ID: BlockSyncChannel, MessageType: new(bcproto.Message)}
rts.blockSyncChannels = rts.network.MakeChannelsNoCleanup(t, chDesc)
i := 0
for nodeID := range rts.network.Nodes {
@@ -98,7 +98,7 @@ func (rts *reactorTestSuite) addNode(t *testing.T,
t.Helper()
rts.nodes = append(rts.nodes, nodeID)
rts.app[nodeID] = proxy.NewAppConns(abciclient.NewLocalCreator(&abci.BaseApplication{}))
rts.app[nodeID] = proxy.NewAppConns(abciclient.NewLocalCreator(&abci.BaseApplication{}), proxy.NopMetrics())
require.NoError(t, rts.app[nodeID].Start())
blockDB := dbm.NewMemDB()

View File

@@ -1,42 +0,0 @@
/*
Package behavior provides a mechanism for reactors to report the behavior of peers.
Instead of a reactor calling the switch directly, it calls the behavior module, which
handles stopping the peer or marking it as good on behalf of the reactor.
There are four different behaviors a reactor can report.
1. bad message
type badMessage struct {
explanation string
}
This message will request the peer be stopped for an error
2. message out of order
type messageOutOfOrder struct {
explanation string
}
This message will request the peer be stopped for an error
3. consensus vote
type consensusVote struct {
explanation string
}
This message will request the peer be marked as good
4. block part
type blockPart struct {
explanation string
}
This message will request the peer be marked as good
*/
package behavior
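The package comment above describes the reporting flow. A minimal sketch of that flow at the revision before this removal, assuming the file sits under internal/blocksync/v2 so Go's internal-package rules permit the import; the peer ID and explanation strings are illustrative:

```
package main

import (
	"fmt"

	bh "github.com/tendermint/tendermint/internal/blocksync/v2/internal/behavior"
)

func main() {
	// MockReporter records reports in memory; a SwitchReporter built with
	// NewSwitchReporter(sw) would instead stop the peer or mark it as good.
	reporter := bh.NewMockReporter()
	if err := reporter.Report(bh.BadMessage("peer-1", "malformed proto")); err != nil {
		fmt.Println("report failed:", err)
		return
	}
	fmt.Println("recorded behaviors:", len(reporter.GetBehaviors("peer-1")))
}
```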


@@ -1,47 +0,0 @@
package behavior
import "github.com/tendermint/tendermint/types"
// PeerBehavior is a struct describing a behavior a peer performed.
// `peerID` identifies the peer and reason characterizes the specific
// behavior performed by the peer.
type PeerBehavior struct {
peerID types.NodeID
reason interface{}
}
type badMessage struct {
explanation string
}
// BadMessage returns a badMessage PeerBehavior.
func BadMessage(peerID types.NodeID, explanation string) PeerBehavior {
return PeerBehavior{peerID: peerID, reason: badMessage{explanation}}
}
type messageOutOfOrder struct {
explanation string
}
// MessageOutOfOrder returns a messageOutOfOrder PeerBehavior.
func MessageOutOfOrder(peerID types.NodeID, explanation string) PeerBehavior {
return PeerBehavior{peerID: peerID, reason: messageOutOfOrder{explanation}}
}
type consensusVote struct {
explanation string
}
// ConsensusVote returns a consensusVote PeerBehavior.
func ConsensusVote(peerID types.NodeID, explanation string) PeerBehavior {
return PeerBehavior{peerID: peerID, reason: consensusVote{explanation}}
}
type blockPart struct {
explanation string
}
// BlockPart returns a blockPart PeerBehavior.
func BlockPart(peerID types.NodeID, explanation string) PeerBehavior {
return PeerBehavior{peerID: peerID, reason: blockPart{explanation}}
}


@@ -1,87 +0,0 @@
package behavior
import (
"errors"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/types"
)
// Reporter provides an interface for reactors to report the behavior
// of peers synchronously to other components.
type Reporter interface {
Report(behavior PeerBehavior) error
}
// SwitchReporter reports peer behavior to an internal Switch.
type SwitchReporter struct {
sw *p2p.Switch
}
// NewSwitchReporter return a new SwitchReporter instance which wraps the Switch.
func NewSwitchReporter(sw *p2p.Switch) *SwitchReporter {
return &SwitchReporter{
sw: sw,
}
}
// Report reports the behavior of a peer to the Switch.
func (spbr *SwitchReporter) Report(behavior PeerBehavior) error {
peer := spbr.sw.Peers().Get(behavior.peerID)
if peer == nil {
return errors.New("peer not found")
}
switch reason := behavior.reason.(type) {
case consensusVote, blockPart:
spbr.sw.MarkPeerAsGood(peer)
case badMessage:
spbr.sw.StopPeerForError(peer, reason.explanation)
case messageOutOfOrder:
spbr.sw.StopPeerForError(peer, reason.explanation)
default:
return errors.New("unknown reason reported")
}
return nil
}
// MockReporter is a concrete implementation of the Reporter
// interface used in reactor tests to ensure reactors report the correct
// behavior in manufactured scenarios.
type MockReporter struct {
mtx tmsync.RWMutex
pb map[types.NodeID][]PeerBehavior
}
// NewMockReporter returns a Reporter which records all reported
// behaviors in memory.
func NewMockReporter() *MockReporter {
return &MockReporter{
pb: map[types.NodeID][]PeerBehavior{},
}
}
// Report stores the PeerBehavior produced by the peer identified by peerID.
func (mpbr *MockReporter) Report(behavior PeerBehavior) error {
mpbr.mtx.Lock()
defer mpbr.mtx.Unlock()
mpbr.pb[behavior.peerID] = append(mpbr.pb[behavior.peerID], behavior)
return nil
}
// GetBehaviors returns all behaviors reported on the peer identified by peerID.
func (mpbr *MockReporter) GetBehaviors(peerID types.NodeID) []PeerBehavior {
mpbr.mtx.RLock()
defer mpbr.mtx.RUnlock()
if items, ok := mpbr.pb[peerID]; ok {
result := make([]PeerBehavior, len(items))
copy(result, items)
return result
}
return []PeerBehavior{}
}


@@ -1,205 +0,0 @@
package behavior_test
import (
"sync"
"testing"
bh "github.com/tendermint/tendermint/internal/blocksync/v2/internal/behavior"
"github.com/tendermint/tendermint/types"
)
// TestMockReporter tests the MockReporter's ability to store reported
// peer behavior in memory indexed by the peerID.
func TestMockReporter(t *testing.T) {
var peerID types.NodeID = "MockPeer"
pr := bh.NewMockReporter()
behaviors := pr.GetBehaviors(peerID)
if len(behaviors) != 0 {
t.Error("Expected to have no behaviors reported")
}
badMessage := bh.BadMessage(peerID, "bad message")
if err := pr.Report(badMessage); err != nil {
t.Error(err)
}
behaviors = pr.GetBehaviors(peerID)
if len(behaviors) != 1 {
t.Error("Expected the peer to have one reported behavior")
}
if behaviors[0] != badMessage {
t.Error("Expected Bad Message to have been reported")
}
}
type scriptItem struct {
peerID types.NodeID
behavior bh.PeerBehavior
}
// equalBehaviors returns true if a and b contain the same PeerBehaviors with
// the same frequencies and otherwise false.
func equalBehaviors(a []bh.PeerBehavior, b []bh.PeerBehavior) bool {
aHistogram := map[bh.PeerBehavior]int{}
bHistogram := map[bh.PeerBehavior]int{}
for _, behavior := range a {
aHistogram[behavior]++
}
for _, behavior := range b {
bHistogram[behavior]++
}
if len(aHistogram) != len(bHistogram) {
return false
}
for _, behavior := range a {
if aHistogram[behavior] != bHistogram[behavior] {
return false
}
}
for _, behavior := range b {
if bHistogram[behavior] != aHistogram[behavior] {
return false
}
}
return true
}
// TestEqualPeerBehaviors tests that equalBehaviors correctly compares two slices
// of peer behaviors by the behaviors they contain and the frequencies with which
// those behaviors occur.
func TestEqualPeerBehaviors(t *testing.T) {
var (
peerID types.NodeID = "MockPeer"
consensusVote = bh.ConsensusVote(peerID, "voted")
blockPart = bh.BlockPart(peerID, "blocked")
equals = []struct {
left []bh.PeerBehavior
right []bh.PeerBehavior
}{
// Empty sets
{[]bh.PeerBehavior{}, []bh.PeerBehavior{}},
// Single behaviors
{[]bh.PeerBehavior{consensusVote}, []bh.PeerBehavior{consensusVote}},
// Equal Frequencies
{[]bh.PeerBehavior{consensusVote, consensusVote},
[]bh.PeerBehavior{consensusVote, consensusVote}},
// Equal frequencies different orders
{[]bh.PeerBehavior{consensusVote, blockPart},
[]bh.PeerBehavior{blockPart, consensusVote}},
}
unequals = []struct {
left []bh.PeerBehavior
right []bh.PeerBehavior
}{
// Comparing empty sets to non empty sets
{[]bh.PeerBehavior{}, []bh.PeerBehavior{consensusVote}},
// Different behaviors
{[]bh.PeerBehavior{consensusVote}, []bh.PeerBehavior{blockPart}},
// Same behavior with different frequencies
{[]bh.PeerBehavior{consensusVote},
[]bh.PeerBehavior{consensusVote, consensusVote}},
}
)
for _, test := range equals {
if !equalBehaviors(test.left, test.right) {
t.Errorf("expected %#v and %#v to be equal", test.left, test.right)
}
}
for _, test := range unequals {
if equalBehaviors(test.left, test.right) {
t.Errorf("expected %#v and %#v to be unequal", test.left, test.right)
}
}
}
// TestMockPeerBehaviorReporterConcurrency constructs a scenario in which
// multiple goroutines use the same MockReporter instance.
// This test reproduces the conditions under which MockReporter is used
// within a Reactor's `Receive` method, to ensure thread safety.
func TestMockPeerBehaviorReporterConcurrency(t *testing.T) {
var (
behaviorScript = []struct {
peerID types.NodeID
behaviors []bh.PeerBehavior
}{
{"1", []bh.PeerBehavior{bh.ConsensusVote("1", "")}},
{"2", []bh.PeerBehavior{bh.ConsensusVote("2", ""), bh.ConsensusVote("2", ""), bh.ConsensusVote("2", "")}},
{
"3",
[]bh.PeerBehavior{bh.BlockPart("3", ""),
bh.ConsensusVote("3", ""),
bh.BlockPart("3", ""),
bh.ConsensusVote("3", "")}},
{
"4",
[]bh.PeerBehavior{bh.ConsensusVote("4", ""),
bh.ConsensusVote("4", ""),
bh.ConsensusVote("4", ""),
bh.ConsensusVote("4", "")}},
{
"5",
[]bh.PeerBehavior{bh.BlockPart("5", ""),
bh.ConsensusVote("5", ""),
bh.BlockPart("5", ""),
bh.ConsensusVote("5", "")}},
}
)
var receiveWg sync.WaitGroup
pr := bh.NewMockReporter()
scriptItems := make(chan scriptItem)
done := make(chan int)
numConsumers := 3
for i := 0; i < numConsumers; i++ {
receiveWg.Add(1)
go func() {
defer receiveWg.Done()
for {
select {
case pb := <-scriptItems:
if err := pr.Report(pb.behavior); err != nil {
t.Error(err)
}
case <-done:
return
}
}
}()
}
var sendingWg sync.WaitGroup
sendingWg.Add(1)
go func() {
defer sendingWg.Done()
for _, item := range behaviorScript {
for _, reason := range item.behaviors {
scriptItems <- scriptItem{item.peerID, reason}
}
}
}()
sendingWg.Wait()
for i := 0; i < numConsumers; i++ {
done <- 1
}
receiveWg.Wait()
for _, items := range behaviorScript {
reported := pr.GetBehaviors(items.peerID)
if !equalBehaviors(reported, items.behaviors) {
t.Errorf("expected peer %s to have behaved \nExpected: %#v \nGot %#v \n",
items.peerID, items.behaviors, reported)
}
}
}


@@ -1,187 +0,0 @@
package v2
import (
"errors"
"github.com/gogo/protobuf/proto"
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/internal/state"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blocksync"
"github.com/tendermint/tendermint/types"
)
var (
errPeerQueueFull = errors.New("peer queue full")
)
type iIO interface {
sendBlockRequest(peer p2p.Peer, height int64) error
sendBlockToPeer(block *types.Block, peer p2p.Peer) error
sendBlockNotFound(height int64, peer p2p.Peer) error
sendStatusResponse(base, height int64, peer p2p.Peer) error
sendStatusRequest(peer p2p.Peer) error
broadcastStatusRequest() error
trySwitchToConsensus(state state.State, skipWAL bool) bool
}
type switchIO struct {
sw *p2p.Switch
}
func newSwitchIo(sw *p2p.Switch) *switchIO {
return &switchIO{
sw: sw,
}
}
const (
// BlockchainChannel is a channel for blocks and status updates (`BlockStore` height)
BlockchainChannel = byte(0x40)
)
type consensusReactor interface {
// for when we switch from blockchain reactor and block sync to
// the consensus machine
SwitchToConsensus(state state.State, skipWAL bool)
}
func (sio *switchIO) sendBlockRequest(peer p2p.Peer, height int64) error {
msgProto := &bcproto.Message{
Sum: &bcproto.Message_BlockRequest{
BlockRequest: &bcproto.BlockRequest{
Height: height,
},
},
}
msgBytes, err := proto.Marshal(msgProto)
if err != nil {
return err
}
queued := peer.TrySend(BlockchainChannel, msgBytes)
if !queued {
return errPeerQueueFull
}
return nil
}
func (sio *switchIO) sendStatusResponse(base int64, height int64, peer p2p.Peer) error {
msgProto := &bcproto.Message{
Sum: &bcproto.Message_StatusResponse{
StatusResponse: &bcproto.StatusResponse{
Height: height,
Base: base,
},
},
}
msgBytes, err := proto.Marshal(msgProto)
if err != nil {
return err
}
if queued := peer.TrySend(BlockchainChannel, msgBytes); !queued {
return errPeerQueueFull
}
return nil
}
func (sio *switchIO) sendBlockToPeer(block *types.Block, peer p2p.Peer) error {
if block == nil {
panic("trying to send nil block")
}
bpb, err := block.ToProto()
if err != nil {
return err
}
msgProto := &bcproto.Message{
Sum: &bcproto.Message_BlockResponse{
BlockResponse: &bcproto.BlockResponse{
Block: bpb,
},
},
}
msgBytes, err := proto.Marshal(msgProto)
if err != nil {
return err
}
if queued := peer.TrySend(BlockchainChannel, msgBytes); !queued {
return errPeerQueueFull
}
return nil
}
func (sio *switchIO) sendBlockNotFound(height int64, peer p2p.Peer) error {
msgProto := &bcproto.Message{
Sum: &bcproto.Message_NoBlockResponse{
NoBlockResponse: &bcproto.NoBlockResponse{
Height: height,
},
},
}
msgBytes, err := proto.Marshal(msgProto)
if err != nil {
return err
}
if queued := peer.TrySend(BlockchainChannel, msgBytes); !queued {
return errPeerQueueFull
}
return nil
}
func (sio *switchIO) trySwitchToConsensus(state state.State, skipWAL bool) bool {
conR, ok := sio.sw.Reactor("CONSENSUS").(consensusReactor)
if ok {
conR.SwitchToConsensus(state, skipWAL)
}
return ok
}
func (sio *switchIO) sendStatusRequest(peer p2p.Peer) error {
msgProto := &bcproto.Message{
Sum: &bcproto.Message_StatusRequest{
StatusRequest: &bcproto.StatusRequest{},
},
}
msgBytes, err := proto.Marshal(msgProto)
if err != nil {
return err
}
if queued := peer.TrySend(BlockchainChannel, msgBytes); !queued {
return errPeerQueueFull
}
return nil
}
func (sio *switchIO) broadcastStatusRequest() error {
msgProto := &bcproto.Message{
Sum: &bcproto.Message_StatusRequest{
StatusRequest: &bcproto.StatusRequest{},
},
}
msgBytes, err := proto.Marshal(msgProto)
if err != nil {
return err
}
// XXX: maybe we should use an io specific peer list here
sio.sw.Broadcast(BlockchainChannel, msgBytes)
return nil
}


@@ -1,125 +0,0 @@
package v2
import (
"github.com/go-kit/kit/metrics"
"github.com/go-kit/kit/metrics/discard"
"github.com/go-kit/kit/metrics/prometheus"
stdprometheus "github.com/prometheus/client_golang/prometheus"
)
const (
// MetricsSubsystem is a subsystem shared by all metrics exposed by this
// package.
MetricsSubsystem = "blockchain"
)
// Metrics contains metrics exposed by this package.
type Metrics struct {
// events_in
EventsIn metrics.Counter
// events_in
EventsHandled metrics.Counter
// events_out
EventsOut metrics.Counter
// errors_in
ErrorsIn metrics.Counter
// errors_handled
ErrorsHandled metrics.Counter
// errors_out
ErrorsOut metrics.Counter
// events_shed
EventsShed metrics.Counter
// events_sent
EventsSent metrics.Counter
// errors_sent
ErrorsSent metrics.Counter
// errors_shed
ErrorsShed metrics.Counter
}
// PrometheusMetrics returns metrics for in and out events, errors, etc. handled by routines.
// Can we burn in the routine name here?
func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
labels := []string{}
for i := 0; i < len(labelsAndValues); i += 2 {
labels = append(labels, labelsAndValues[i])
}
return &Metrics{
EventsIn: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "events_in",
Help: "Events read from the channel.",
}, labels).With(labelsAndValues...),
EventsHandled: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "events_handled",
Help: "Events handled",
}, labels).With(labelsAndValues...),
EventsOut: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "events_out",
Help: "Events output from routine.",
}, labels).With(labelsAndValues...),
ErrorsIn: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "errors_in",
Help: "Errors read from the channel.",
}, labels).With(labelsAndValues...),
ErrorsHandled: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "errors_handled",
Help: "Errors handled.",
}, labels).With(labelsAndValues...),
ErrorsOut: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "errors_out",
Help: "Errors output from routine.",
}, labels).With(labelsAndValues...),
ErrorsSent: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "errors_sent",
Help: "Errors sent to routine.",
}, labels).With(labelsAndValues...),
ErrorsShed: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "errors_shed",
Help: "Errors dropped from sending.",
}, labels).With(labelsAndValues...),
EventsSent: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "events_sent",
Help: "Events sent to routine.",
}, labels).With(labelsAndValues...),
EventsShed: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "events_shed",
Help: "Events dropped from sending.",
}, labels).With(labelsAndValues...),
}
}
// NopMetrics returns no-op Metrics.
func NopMetrics() *Metrics {
return &Metrics{
EventsIn: discard.NewCounter(),
EventsHandled: discard.NewCounter(),
EventsOut: discard.NewCounter(),
ErrorsIn: discard.NewCounter(),
ErrorsHandled: discard.NewCounter(),
ErrorsOut: discard.NewCounter(),
EventsShed: discard.NewCounter(),
EventsSent: discard.NewCounter(),
ErrorsSent: discard.NewCounter(),
ErrorsShed: discard.NewCounter(),
}
}
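For reference, a hedged sketch of wiring up these go-kit counters at the revision before this removal; the namespace and label values are illustrative:

```
package main

import (
	blockchainv2 "github.com/tendermint/tendermint/internal/blocksync/v2"
)

func main() {
	// Each label pair passed here is attached to every counter; "chain_id"
	// mirrors the labeling convention used elsewhere in Tendermint metrics.
	m := blockchainv2.PrometheusMetrics("tendermint", "chain_id", "test-chain")
	m.EventsIn.Add(1)
	m.ErrorsShed.Add(3)
}
```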


@@ -1,193 +0,0 @@
package v2
import (
"fmt"
tmstate "github.com/tendermint/tendermint/internal/state"
"github.com/tendermint/tendermint/types"
)
// Events generated by the processor:
// block execution failure, event will indicate the peer(s) that caused the error
type pcBlockVerificationFailure struct {
priorityNormal
height int64
firstPeerID types.NodeID
secondPeerID types.NodeID
}
func (e pcBlockVerificationFailure) String() string {
return fmt.Sprintf("pcBlockVerificationFailure{%d 1st peer: %v, 2nd peer: %v}",
e.height, e.firstPeerID, e.secondPeerID)
}
// successful block execution
type pcBlockProcessed struct {
priorityNormal
height int64
peerID types.NodeID
}
func (e pcBlockProcessed) String() string {
return fmt.Sprintf("pcBlockProcessed{%d peer: %v}", e.height, e.peerID)
}
// processor has finished
type pcFinished struct {
priorityNormal
blocksSynced int
tmState tmstate.State
}
func (p pcFinished) Error() string {
return "finished"
}
type queueItem struct {
block *types.Block
peerID types.NodeID
}
type blockQueue map[int64]queueItem
type pcState struct {
// blocks waiting to be processed
queue blockQueue
// draining indicates that the next rProcessBlock event with a queue miss constitutes completion
draining bool
// the number of blocks successfully synced by the processor
blocksSynced int
// the processorContext which contains the processor dependencies
context processorContext
}
func (state *pcState) String() string {
return fmt.Sprintf("height: %d queue length: %d draining: %v blocks synced: %d",
state.height(), len(state.queue), state.draining, state.blocksSynced)
}
// newPcState returns a pcState with an empty block queue and the given processorContext
func newPcState(context processorContext) *pcState {
return &pcState{
queue: blockQueue{},
draining: false,
blocksSynced: 0,
context: context,
}
}
// nextTwo returns the next two unverified blocks
func (state *pcState) nextTwo() (queueItem, queueItem, error) {
if first, ok := state.queue[state.height()+1]; ok {
if second, ok := state.queue[state.height()+2]; ok {
return first, second, nil
}
}
return queueItem{}, queueItem{}, fmt.Errorf("not found")
}
// synced returns true when at most the last verified block remains in the queue
func (state *pcState) synced() bool {
return len(state.queue) <= 1
}
func (state *pcState) enqueue(peerID types.NodeID, block *types.Block, height int64) {
if item, ok := state.queue[height]; ok {
panic(fmt.Sprintf(
"duplicate block %d (%X) enqueued by processor (sent by %v; existing block %X from %v)",
height, block.Hash(), peerID, item.block.Hash(), item.peerID))
}
state.queue[height] = queueItem{block: block, peerID: peerID}
}
func (state *pcState) height() int64 {
return state.context.tmState().LastBlockHeight
}
// purgePeer removes all of the given peer's unprocessed blocks from the queue
func (state *pcState) purgePeer(peerID types.NodeID) {
// what if height is less than state.height?
for height, item := range state.queue {
if item.peerID == peerID {
delete(state.queue, height)
}
}
}
// handle processes FSM events
func (state *pcState) handle(event Event) (Event, error) {
switch event := event.(type) {
case bcResetState:
state.context.setState(event.state)
return noOp, nil
case scFinishedEv:
if state.synced() {
return pcFinished{tmState: state.context.tmState(), blocksSynced: state.blocksSynced}, nil
}
state.draining = true
return noOp, nil
case scPeerError:
state.purgePeer(event.peerID)
return noOp, nil
case scBlockReceived:
if event.block == nil {
return noOp, nil
}
// enqueue block if height is higher than state height, else ignore it
if event.block.Height > state.height() {
state.enqueue(event.peerID, event.block, event.block.Height)
}
return noOp, nil
case rProcessBlock:
tmstate := state.context.tmState()
firstItem, secondItem, err := state.nextTwo()
if err != nil {
if state.draining {
return pcFinished{tmState: tmstate, blocksSynced: state.blocksSynced}, nil
}
return noOp, nil
}
var (
first, second = firstItem.block, secondItem.block
firstParts = first.MakePartSet(types.BlockPartSizeBytes)
firstID = types.BlockID{Hash: first.Hash(), PartSetHeader: firstParts.Header()}
)
// verify if +second+ last commit "confirms" +first+ block
err = state.context.verifyCommit(tmstate.ChainID, firstID, first.Height, second.LastCommit)
if err != nil {
state.purgePeer(firstItem.peerID)
if firstItem.peerID != secondItem.peerID {
state.purgePeer(secondItem.peerID)
}
return pcBlockVerificationFailure{
height: first.Height, firstPeerID: firstItem.peerID, secondPeerID: secondItem.peerID},
nil
}
state.context.saveBlock(first, firstParts, second.LastCommit)
if err := state.context.applyBlock(firstID, first); err != nil {
panic(fmt.Sprintf("failed to process committed block (%d:%X): %v", first.Height, first.Hash(), err))
}
state.context.recordConsMetrics(first)
delete(state.queue, first.Height)
state.blocksSynced++
return pcBlockProcessed{height: first.Height, peerID: firstItem.peerID}, nil
}
return noOp, nil
}


@@ -1,112 +0,0 @@
package v2
import (
"fmt"
"github.com/tendermint/tendermint/internal/consensus"
"github.com/tendermint/tendermint/internal/state"
"github.com/tendermint/tendermint/types"
)
type processorContext interface {
applyBlock(blockID types.BlockID, block *types.Block) error
verifyCommit(chainID string, blockID types.BlockID, height int64, commit *types.Commit) error
saveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit)
tmState() state.State
setState(state.State)
recordConsMetrics(block *types.Block)
}
type pContext struct {
store blockStore
applier blockApplier
state state.State
metrics *consensus.Metrics
}
func newProcessorContext(st blockStore, ex blockApplier, s state.State, m *consensus.Metrics) *pContext {
return &pContext{
store: st,
applier: ex,
state: s,
metrics: m,
}
}
func (pc *pContext) applyBlock(blockID types.BlockID, block *types.Block) error {
newState, err := pc.applier.ApplyBlock(pc.state, blockID, block)
pc.state = newState
return err
}
func (pc pContext) tmState() state.State {
return pc.state
}
func (pc *pContext) setState(state state.State) {
pc.state = state
}
func (pc pContext) verifyCommit(chainID string, blockID types.BlockID, height int64, commit *types.Commit) error {
return pc.state.Validators.VerifyCommitLight(chainID, blockID, height, commit)
}
func (pc *pContext) saveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit) {
pc.store.SaveBlock(block, blockParts, seenCommit)
}
func (pc *pContext) recordConsMetrics(block *types.Block) {
pc.metrics.RecordConsMetrics(block)
}
type mockPContext struct {
applicationBL []int64
verificationBL []int64
state state.State
}
func newMockProcessorContext(
state state.State,
verificationBlackList []int64,
applicationBlackList []int64) *mockPContext {
return &mockPContext{
applicationBL: applicationBlackList,
verificationBL: verificationBlackList,
state: state,
}
}
func (mpc *mockPContext) applyBlock(blockID types.BlockID, block *types.Block) error {
for _, h := range mpc.applicationBL {
if h == block.Height {
return fmt.Errorf("generic application error")
}
}
mpc.state.LastBlockHeight = block.Height
return nil
}
func (mpc *mockPContext) verifyCommit(chainID string, blockID types.BlockID, height int64, commit *types.Commit) error {
for _, h := range mpc.verificationBL {
if h == height {
return fmt.Errorf("generic verification error")
}
}
return nil
}
func (mpc *mockPContext) saveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit) {
}
func (mpc *mockPContext) setState(state state.State) {
mpc.state = state
}
func (mpc *mockPContext) tmState() state.State {
return mpc.state
}
func (mpc *mockPContext) recordConsMetrics(block *types.Block) {
}


@@ -1,305 +0,0 @@
package v2
import (
"testing"
"github.com/stretchr/testify/assert"
tmstate "github.com/tendermint/tendermint/internal/state"
"github.com/tendermint/tendermint/types"
)
// pcBlock is a test helper structure with simple types. Its purpose is to help with test readability.
type pcBlock struct {
pid string
height int64
}
// params is a test structure used to create processor state.
type params struct {
height int64
items []pcBlock
blocksSynced int
verBL []int64
appBL []int64
draining bool
}
// makePcBlock makes an empty block.
func makePcBlock(height int64) *types.Block {
return &types.Block{Header: types.Header{Height: height}}
}
// makeState takes test parameters and creates a specific processor state.
func makeState(p *params) *pcState {
var (
tmState = tmstate.State{LastBlockHeight: p.height}
context = newMockProcessorContext(tmState, p.verBL, p.appBL)
)
state := newPcState(context)
for _, item := range p.items {
state.enqueue(types.NodeID(item.pid), makePcBlock(item.height), item.height)
}
state.blocksSynced = p.blocksSynced
state.draining = p.draining
return state
}
func mBlockResponse(peerID types.NodeID, height int64) scBlockReceived {
return scBlockReceived{
peerID: peerID,
block: makePcBlock(height),
}
}
type pcFsmMakeStateValues struct {
currentState *params
event Event
wantState *params
wantNextEvent Event
wantErr error
wantPanic bool
}
type testFields struct {
name string
steps []pcFsmMakeStateValues
}
func executeProcessorTests(t *testing.T, tests []testFields) {
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
var state *pcState
for _, step := range tt.steps {
defer func() {
r := recover()
if (r != nil) != step.wantPanic {
t.Errorf("recover = %v, wantPanic = %v", r, step.wantPanic)
}
}()
// First step must always initialize the currentState as state.
if step.currentState != nil {
state = makeState(step.currentState)
}
if state == nil {
panic("Bad (initial?) step")
}
nextEvent, err := state.handle(step.event)
t.Log(state)
assert.Equal(t, step.wantErr, err)
assert.Equal(t, makeState(step.wantState), state)
assert.Equal(t, step.wantNextEvent, nextEvent)
// Next step may use the wantedState as their currentState.
state = makeState(step.wantState)
}
})
}
}
func TestRProcessPeerError(t *testing.T) {
tests := []testFields{
{
name: "error for existing peer",
steps: []pcFsmMakeStateValues{
{
currentState: &params{items: []pcBlock{{"P1", 1}, {"P2", 2}}},
event: scPeerError{peerID: "P2"},
wantState: &params{items: []pcBlock{{"P1", 1}}},
wantNextEvent: noOp,
},
},
},
{
name: "error for unknown peer",
steps: []pcFsmMakeStateValues{
{
currentState: &params{items: []pcBlock{{"P1", 1}, {"P2", 2}}},
event: scPeerError{peerID: "P3"},
wantState: &params{items: []pcBlock{{"P1", 1}, {"P2", 2}}},
wantNextEvent: noOp,
},
},
},
}
executeProcessorTests(t, tests)
}
func TestPcBlockResponse(t *testing.T) {
tests := []testFields{
{
name: "add one block",
steps: []pcFsmMakeStateValues{
{
currentState: &params{}, event: mBlockResponse("P1", 1),
wantState: &params{items: []pcBlock{{"P1", 1}}}, wantNextEvent: noOp,
},
},
},
{
name: "add two blocks",
steps: []pcFsmMakeStateValues{
{
currentState: &params{}, event: mBlockResponse("P1", 3),
wantState: &params{items: []pcBlock{{"P1", 3}}}, wantNextEvent: noOp,
},
{ // use previous wantState as currentState,
event: mBlockResponse("P1", 4),
wantState: &params{items: []pcBlock{{"P1", 3}, {"P1", 4}}}, wantNextEvent: noOp,
},
},
},
}
executeProcessorTests(t, tests)
}
func TestRProcessBlockSuccess(t *testing.T) {
tests := []testFields{
{
name: "noop - no blocks over current height",
steps: []pcFsmMakeStateValues{
{
currentState: &params{}, event: rProcessBlock{},
wantState: &params{}, wantNextEvent: noOp,
},
},
},
{
name: "noop - high new blocks",
steps: []pcFsmMakeStateValues{
{
currentState: &params{height: 5, items: []pcBlock{{"P1", 30}, {"P2", 31}}}, event: rProcessBlock{},
wantState: &params{height: 5, items: []pcBlock{{"P1", 30}, {"P2", 31}}}, wantNextEvent: noOp,
},
},
},
{
name: "blocks H+1 and H+2 present",
steps: []pcFsmMakeStateValues{
{
currentState: &params{items: []pcBlock{{"P1", 1}, {"P2", 2}}}, event: rProcessBlock{},
wantState: &params{height: 1, items: []pcBlock{{"P2", 2}}, blocksSynced: 1},
wantNextEvent: pcBlockProcessed{height: 1, peerID: "P1"},
},
},
},
{
name: "blocks H+1 and H+2 present after draining",
steps: []pcFsmMakeStateValues{
{ // some contiguous blocks - on stop check draining is set
currentState: &params{items: []pcBlock{{"P1", 1}, {"P2", 2}, {"P1", 4}}},
event: scFinishedEv{},
wantState: &params{items: []pcBlock{{"P1", 1}, {"P2", 2}, {"P1", 4}}, draining: true},
wantNextEvent: noOp,
},
{
event: rProcessBlock{},
wantState: &params{height: 1, items: []pcBlock{{"P2", 2}, {"P1", 4}}, blocksSynced: 1, draining: true},
wantNextEvent: pcBlockProcessed{height: 1, peerID: "P1"},
},
{ // finish when H+1 and/or H+2 are missing
event: rProcessBlock{},
wantState: &params{height: 1, items: []pcBlock{{"P2", 2}, {"P1", 4}}, blocksSynced: 1, draining: true},
wantNextEvent: pcFinished{tmState: tmstate.State{LastBlockHeight: 1}, blocksSynced: 1},
},
},
},
}
executeProcessorTests(t, tests)
}
func TestRProcessBlockFailures(t *testing.T) {
tests := []testFields{
{
name: "blocks H+1 and H+2 present from different peers - H+1 verification fails ",
steps: []pcFsmMakeStateValues{
{
currentState: &params{items: []pcBlock{{"P1", 1}, {"P2", 2}}, verBL: []int64{1}}, event: rProcessBlock{},
wantState: &params{items: []pcBlock{}, verBL: []int64{1}},
wantNextEvent: pcBlockVerificationFailure{height: 1, firstPeerID: "P1", secondPeerID: "P2"},
},
},
},
{
name: "blocks H+1 and H+2 present from same peer - H+1 applyBlock fails ",
steps: []pcFsmMakeStateValues{
{
currentState: &params{items: []pcBlock{{"P1", 1}, {"P2", 2}}, appBL: []int64{1}}, event: rProcessBlock{},
wantState: &params{items: []pcBlock{}, appBL: []int64{1}}, wantPanic: true,
},
},
},
{
name: "blocks H+1 and H+2 present from same peers - H+1 verification fails ",
steps: []pcFsmMakeStateValues{
{
currentState: &params{height: 0, items: []pcBlock{{"P1", 1}, {"P1", 2}, {"P2", 3}},
verBL: []int64{1}}, event: rProcessBlock{},
wantState: &params{height: 0, items: []pcBlock{{"P2", 3}}, verBL: []int64{1}},
wantNextEvent: pcBlockVerificationFailure{height: 1, firstPeerID: "P1", secondPeerID: "P1"},
},
},
},
{
name: "blocks H+1 and H+2 present from different peers - H+1 applyBlock fails ",
steps: []pcFsmMakeStateValues{
{
currentState: &params{items: []pcBlock{{"P1", 1}, {"P2", 2}, {"P2", 3}}, appBL: []int64{1}},
event: rProcessBlock{},
wantState: &params{items: []pcBlock{{"P2", 3}}, appBL: []int64{1}}, wantPanic: true,
},
},
},
}
executeProcessorTests(t, tests)
}
func TestScFinishedEv(t *testing.T) {
tests := []testFields{
{
name: "no blocks",
steps: []pcFsmMakeStateValues{
{
currentState: &params{height: 100, items: []pcBlock{}, blocksSynced: 100}, event: scFinishedEv{},
wantState: &params{height: 100, items: []pcBlock{}, blocksSynced: 100},
wantNextEvent: pcFinished{tmState: tmstate.State{LastBlockHeight: 100}, blocksSynced: 100},
},
},
},
{
name: "maxHeight+1 block present",
steps: []pcFsmMakeStateValues{
{
currentState: &params{height: 100, items: []pcBlock{
{"P1", 101}}, blocksSynced: 100}, event: scFinishedEv{},
wantState: &params{height: 100, items: []pcBlock{{"P1", 101}}, blocksSynced: 100},
wantNextEvent: pcFinished{tmState: tmstate.State{LastBlockHeight: 100}, blocksSynced: 100},
},
},
},
{
name: "more blocks present",
steps: []pcFsmMakeStateValues{
{
currentState: &params{height: 100, items: []pcBlock{
{"P1", 101}, {"P1", 102}}, blocksSynced: 100}, event: scFinishedEv{},
wantState: &params{height: 100, items: []pcBlock{
{"P1", 101}, {"P1", 102}}, blocksSynced: 100, draining: true},
wantNextEvent: noOp,
wantErr: nil,
},
},
},
}
executeProcessorTests(t, tests)
}


@@ -1,643 +0,0 @@
package v2
import (
"errors"
"fmt"
"time"
"github.com/gogo/protobuf/proto"
"github.com/tendermint/tendermint/internal/blocksync"
"github.com/tendermint/tendermint/internal/blocksync/v2/internal/behavior"
"github.com/tendermint/tendermint/internal/consensus"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/internal/state"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/sync"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blocksync"
"github.com/tendermint/tendermint/types"
)
const (
// chBufferSize is the buffer size of all event channels.
chBufferSize int = 1000
)
type blockStore interface {
LoadBlock(height int64) *types.Block
SaveBlock(*types.Block, *types.PartSet, *types.Commit)
Base() int64
Height() int64
}
// BlockchainReactor handles block sync protocol.
type BlockchainReactor struct {
p2p.BaseReactor
blockSync *sync.AtomicBool // enable block sync on start when it's been Set
stateSynced bool // set to true when SwitchToBlockSync is called by state sync
scheduler *Routine
processor *Routine
logger log.Logger
mtx tmsync.RWMutex
maxPeerHeight int64
syncHeight int64
events chan Event // non-nil during a block sync
reporter behavior.Reporter
io iIO
store blockStore
syncStartTime time.Time
syncStartHeight int64
lastSyncRate float64 // # blocks synced per sec based on the last 100 blocks
}
type blockApplier interface {
ApplyBlock(state state.State, blockID types.BlockID, block *types.Block) (state.State, error)
}
// XXX: unify naming in this package around tmState
func newReactor(state state.State, store blockStore, reporter behavior.Reporter,
blockApplier blockApplier, blockSync bool, metrics *consensus.Metrics) *BlockchainReactor {
initHeight := state.LastBlockHeight + 1
if initHeight == 1 {
initHeight = state.InitialHeight
}
scheduler := newScheduler(initHeight, time.Now())
pContext := newProcessorContext(store, blockApplier, state, metrics)
// TODO: Fix naming to just newProcessor
// newPcState requires a processorContext
processor := newPcState(pContext)
return &BlockchainReactor{
scheduler: newRoutine("scheduler", scheduler.handle, chBufferSize),
processor: newRoutine("processor", processor.handle, chBufferSize),
store: store,
reporter: reporter,
logger: log.NewNopLogger(),
blockSync: sync.NewBool(blockSync),
syncStartHeight: initHeight,
syncStartTime: time.Time{},
lastSyncRate: 0,
}
}
// NewBlockchainReactor creates a new reactor instance.
func NewBlockchainReactor(
state state.State,
blockApplier blockApplier,
store blockStore,
blockSync bool,
metrics *consensus.Metrics) *BlockchainReactor {
reporter := behavior.NewMockReporter()
return newReactor(state, store, reporter, blockApplier, blockSync, metrics)
}
// SetSwitch implements Reactor interface.
func (r *BlockchainReactor) SetSwitch(sw *p2p.Switch) {
r.Switch = sw
if sw != nil {
r.io = newSwitchIo(sw)
} else {
r.io = nil
}
}
func (r *BlockchainReactor) setMaxPeerHeight(height int64) {
r.mtx.Lock()
defer r.mtx.Unlock()
if height > r.maxPeerHeight {
r.maxPeerHeight = height
}
}
func (r *BlockchainReactor) setSyncHeight(height int64) {
r.mtx.Lock()
defer r.mtx.Unlock()
r.syncHeight = height
}
// SyncHeight returns the height to which the BlockchainReactor has synced.
func (r *BlockchainReactor) SyncHeight() int64 {
r.mtx.RLock()
defer r.mtx.RUnlock()
return r.syncHeight
}
// SetLogger sets the logger of the reactor.
func (r *BlockchainReactor) SetLogger(logger log.Logger) {
r.logger = logger
r.scheduler.setLogger(logger)
r.processor.setLogger(logger)
}
// Start implements cmn.Service interface
func (r *BlockchainReactor) Start() error {
r.reporter = behavior.NewSwitchReporter(r.BaseReactor.Switch)
if r.blockSync.IsSet() {
err := r.startSync(nil)
if err != nil {
return fmt.Errorf("failed to start block sync: %w", err)
}
}
return nil
}
// startSync begins a block sync, signaled by r.events being non-nil. If state is non-nil,
// the scheduler and processor are updated with this state on startup.
func (r *BlockchainReactor) startSync(state *state.State) error {
r.mtx.Lock()
defer r.mtx.Unlock()
if r.events != nil {
return errors.New("block sync already in progress")
}
r.events = make(chan Event, chBufferSize)
go r.scheduler.start()
go r.processor.start()
if state != nil {
<-r.scheduler.ready()
<-r.processor.ready()
r.scheduler.send(bcResetState{state: *state})
r.processor.send(bcResetState{state: *state})
}
go r.demux(r.events)
return nil
}
// endSync ends a block sync
func (r *BlockchainReactor) endSync() {
r.mtx.Lock()
defer r.mtx.Unlock()
if r.events != nil {
close(r.events)
}
r.events = nil
r.scheduler.stop()
r.processor.stop()
}
// SwitchToBlockSync is called by the state sync reactor when switching to block sync.
func (r *BlockchainReactor) SwitchToBlockSync(state state.State) error {
r.stateSynced = true
state = state.Copy()
err := r.startSync(&state)
if err == nil {
r.syncStartTime = time.Now()
}
return err
}
// reactor generated ticker events:
// ticker for cleaning peers
type rTryPrunePeer struct {
priorityHigh
time time.Time
}
func (e rTryPrunePeer) String() string {
return fmt.Sprintf("rTryPrunePeer{%v}", e.time)
}
// ticker event for scheduling block requests
type rTrySchedule struct {
priorityHigh
time time.Time
}
func (e rTrySchedule) String() string {
return fmt.Sprintf("rTrySchedule{%v}", e.time)
}
// ticker for block processing
type rProcessBlock struct {
priorityNormal
}
func (e rProcessBlock) String() string {
return "rProcessBlock"
}
// reactor generated events based on blockchain related messages from peers:
// blockResponse message received from a peer
type bcBlockResponse struct {
priorityNormal
time time.Time
peerID types.NodeID
size int64
block *types.Block
}
func (resp bcBlockResponse) String() string {
return fmt.Sprintf("bcBlockResponse{%d#%X (size: %d bytes) from %v at %v}",
resp.block.Height, resp.block.Hash(), resp.size, resp.peerID, resp.time)
}
// blockNoResponse message received from a peer
type bcNoBlockResponse struct {
priorityNormal
time time.Time
peerID types.NodeID
height int64
}
func (resp bcNoBlockResponse) String() string {
return fmt.Sprintf("bcNoBlockResponse{%v has no block at height %d at %v}",
resp.peerID, resp.height, resp.time)
}
// statusResponse message received from a peer
type bcStatusResponse struct {
priorityNormal
time time.Time
peerID types.NodeID
base int64
height int64
}
func (resp bcStatusResponse) String() string {
return fmt.Sprintf("bcStatusResponse{%v is at height %d (base: %d) at %v}",
resp.peerID, resp.height, resp.base, resp.time)
}
// new peer is connected
type bcAddNewPeer struct {
priorityNormal
peerID types.NodeID
}
func (resp bcAddNewPeer) String() string {
return fmt.Sprintf("bcAddNewPeer{%v}", resp.peerID)
}
// existing peer is removed
type bcRemovePeer struct {
priorityHigh
peerID types.NodeID
reason interface{}
}
func (resp bcRemovePeer) String() string {
return fmt.Sprintf("bcRemovePeer{%v due to %v}", resp.peerID, resp.reason)
}
// resets the scheduler and processor state, e.g. following a switch from state syncing
type bcResetState struct {
priorityHigh
state state.State
}
func (e bcResetState) String() string {
return fmt.Sprintf("bcResetState{%v}", e.state)
}
// Takes the channel as a parameter to avoid race conditions on r.events.
func (r *BlockchainReactor) demux(events <-chan Event) {
var lastHundred = time.Now()
var (
processBlockFreq = 20 * time.Millisecond
doProcessBlockCh = make(chan struct{}, 1)
doProcessBlockTk = time.NewTicker(processBlockFreq)
)
defer doProcessBlockTk.Stop()
var (
prunePeerFreq = 1 * time.Second
doPrunePeerCh = make(chan struct{}, 1)
doPrunePeerTk = time.NewTicker(prunePeerFreq)
)
defer doPrunePeerTk.Stop()
var (
scheduleFreq = 20 * time.Millisecond
doScheduleCh = make(chan struct{}, 1)
doScheduleTk = time.NewTicker(scheduleFreq)
)
defer doScheduleTk.Stop()
var (
statusFreq = 10 * time.Second
doStatusCh = make(chan struct{}, 1)
doStatusTk = time.NewTicker(statusFreq)
)
defer doStatusTk.Stop()
doStatusCh <- struct{}{} // immediately broadcast to get status of existing peers
// Memoize the scSchedulerFail error to avoid printing it every scheduleFreq.
var scSchedulerFailErr error
// XXX: Extract timers to make testing atemporal
for {
select {
// Pacers: send at most per frequency but don't saturate
case <-doProcessBlockTk.C:
select {
case doProcessBlockCh <- struct{}{}:
default:
}
case <-doPrunePeerTk.C:
select {
case doPrunePeerCh <- struct{}{}:
default:
}
case <-doScheduleTk.C:
select {
case doScheduleCh <- struct{}{}:
default:
}
case <-doStatusTk.C:
select {
case doStatusCh <- struct{}{}:
default:
}
// Tickers: perform tasks periodically
case <-doScheduleCh:
r.scheduler.send(rTrySchedule{time: time.Now()})
case <-doPrunePeerCh:
r.scheduler.send(rTryPrunePeer{time: time.Now()})
case <-doProcessBlockCh:
r.processor.send(rProcessBlock{})
case <-doStatusCh:
if err := r.io.broadcastStatusRequest(); err != nil {
r.logger.Error("Error broadcasting status request", "err", err)
}
// Events from peers. Closing the channel signals event loop termination.
case event, ok := <-events:
if !ok {
r.logger.Info("Stopping event processing")
return
}
switch event := event.(type) {
case bcStatusResponse:
r.setMaxPeerHeight(event.height)
r.scheduler.send(event)
case bcAddNewPeer, bcRemovePeer, bcBlockResponse, bcNoBlockResponse:
r.scheduler.send(event)
default:
r.logger.Error("Received unexpected event", "event", fmt.Sprintf("%T", event))
}
// Incremental events from scheduler
case event := <-r.scheduler.next():
switch event := event.(type) {
case scBlockReceived:
r.processor.send(event)
case scPeerError:
r.processor.send(event)
if err := r.reporter.Report(behavior.BadMessage(event.peerID, "scPeerError")); err != nil {
r.logger.Error("Error reporting peer", "err", err)
}
case scBlockRequest:
peer := r.Switch.Peers().Get(event.peerID)
if peer == nil {
r.logger.Error("Wanted to send block request, but no such peer", "peerID", event.peerID)
continue
}
if err := r.io.sendBlockRequest(peer, event.height); err != nil {
r.logger.Error("Error sending block request", "err", err)
}
case scFinishedEv:
r.processor.send(event)
r.scheduler.stop()
case scSchedulerFail:
if scSchedulerFailErr != event.reason {
r.logger.Error("Scheduler failure", "err", event.reason.Error())
scSchedulerFailErr = event.reason
}
case scPeersPruned:
// Remove peers from the processor.
for _, peerID := range event.peers {
r.processor.send(scPeerError{peerID: peerID, reason: errors.New("peer was pruned")})
}
r.logger.Debug("Pruned peers", "count", len(event.peers))
case noOpEvent:
default:
r.logger.Error("Received unexpected scheduler event", "event", fmt.Sprintf("%T", event))
}
// Incremental events from processor
case event := <-r.processor.next():
switch event := event.(type) {
case pcBlockProcessed:
r.setSyncHeight(event.height)
if (r.syncHeight-r.syncStartHeight)%100 == 0 {
newSyncRate := 100 / time.Since(lastHundred).Seconds()
if r.lastSyncRate == 0 {
r.lastSyncRate = newSyncRate
} else {
r.lastSyncRate = 0.9*r.lastSyncRate + 0.1*newSyncRate
}
r.logger.Info("block sync Rate", "height", r.syncHeight,
"max_peer_height", r.maxPeerHeight, "blocks/s", r.lastSyncRate)
lastHundred = time.Now()
}
r.scheduler.send(event)
case pcBlockVerificationFailure:
r.scheduler.send(event)
case pcFinished:
r.logger.Info("block sync complete, switching to consensus")
if !r.io.trySwitchToConsensus(event.tmState, event.blocksSynced > 0 || r.stateSynced) {
r.logger.Error("Failed to switch to consensus reactor")
}
r.endSync()
r.blockSync.UnSet()
return
case noOpEvent:
default:
r.logger.Error("Received unexpected processor event", "event", fmt.Sprintf("%T", event))
}
// Terminal event from scheduler
case err := <-r.scheduler.final():
switch err {
case nil:
r.logger.Info("Scheduler stopped")
default:
r.logger.Error("Scheduler aborted with error", "err", err)
}
// Terminal event from processor
case err := <-r.processor.final():
switch err {
case nil:
r.logger.Info("Processor stopped")
default:
r.logger.Error("Processor aborted with error", "err", err)
}
}
}
}
// Stop implements cmn.Service interface.
func (r *BlockchainReactor) Stop() error {
r.logger.Info("reactor stopping")
r.endSync()
r.logger.Info("reactor stopped")
return nil
}
// Receive implements Reactor by handling different message types.
// XXX: do not call any methods that can block or incur heavy processing.
// https://github.com/tendermint/tendermint/issues/2888
func (r *BlockchainReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
logger := r.logger.With("src", src.ID(), "chID", chID)
msgProto := new(bcproto.Message)
if err := proto.Unmarshal(msgBytes, msgProto); err != nil {
logger.Error("error decoding message", "err", err)
_ = r.reporter.Report(behavior.BadMessage(src.ID(), err.Error()))
return
}
if err := msgProto.Validate(); err != nil {
logger.Error("peer sent us an invalid msg", "msg", msgProto, "err", err)
_ = r.reporter.Report(behavior.BadMessage(src.ID(), err.Error()))
return
}
r.logger.Debug("received", "msg", msgProto)
switch msg := msgProto.Sum.(type) {
case *bcproto.Message_StatusRequest:
if err := r.io.sendStatusResponse(r.store.Base(), r.store.Height(), src); err != nil {
logger.Error("Could not send status message to src peer")
}
case *bcproto.Message_BlockRequest:
block := r.store.LoadBlock(msg.BlockRequest.Height)
if block != nil {
if err := r.io.sendBlockToPeer(block, src); err != nil {
logger.Error("Could not send block message to src peer", "err", err)
}
} else {
logger.Info("peer asking for a block we don't have", "height", msg.BlockRequest.Height)
if err := r.io.sendBlockNotFound(msg.BlockRequest.Height, src); err != nil {
logger.Error("Couldn't send block not found msg", "err", err)
}
}
case *bcproto.Message_StatusResponse:
r.mtx.RLock()
if r.events != nil {
r.events <- bcStatusResponse{
peerID: src.ID(),
base: msg.StatusResponse.Base,
height: msg.StatusResponse.Height,
}
}
r.mtx.RUnlock()
case *bcproto.Message_BlockResponse:
bi, err := types.BlockFromProto(msg.BlockResponse.Block)
if err != nil {
logger.Error("error transitioning block from protobuf", "err", err)
_ = r.reporter.Report(behavior.BadMessage(src.ID(), err.Error()))
return
}
r.mtx.RLock()
if r.events != nil {
r.events <- bcBlockResponse{
peerID: src.ID(),
block: bi,
size: int64(len(msgBytes)),
time: time.Now(),
}
}
r.mtx.RUnlock()
case *bcproto.Message_NoBlockResponse:
r.mtx.RLock()
if r.events != nil {
r.events <- bcNoBlockResponse{
peerID: src.ID(),
height: msg.NoBlockResponse.Height,
time: time.Now(),
}
}
r.mtx.RUnlock()
}
}
// AddPeer implements Reactor interface
func (r *BlockchainReactor) AddPeer(peer p2p.Peer) {
err := r.io.sendStatusResponse(r.store.Base(), r.store.Height(), peer)
if err != nil {
r.logger.Error("could not send our status to the new peer", "peer", peer.ID, "err", err)
}
err = r.io.sendStatusRequest(peer)
if err != nil {
r.logger.Error("could not send status request to the new peer", "peer", peer.ID, "err", err)
}
r.mtx.RLock()
defer r.mtx.RUnlock()
if r.events != nil {
r.events <- bcAddNewPeer{peerID: peer.ID()}
}
}
// RemovePeer implements Reactor interface.
func (r *BlockchainReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
r.mtx.RLock()
defer r.mtx.RUnlock()
if r.events != nil {
r.events <- bcRemovePeer{
peerID: peer.ID(),
reason: reason,
}
}
}
// GetChannels implements Reactor
func (r *BlockchainReactor) GetChannels() []*p2p.ChannelDescriptor {
return []*p2p.ChannelDescriptor{
{
ID: BlockchainChannel,
Priority: 5,
SendQueueCapacity: 2000,
RecvBufferCapacity: 1024,
RecvMessageCapacity: blocksync.MaxMsgSize,
},
}
}
func (r *BlockchainReactor) GetMaxPeerBlockHeight() int64 {
r.mtx.RLock()
defer r.mtx.RUnlock()
return r.maxPeerHeight
}
func (r *BlockchainReactor) GetTotalSyncedTime() time.Duration {
if !r.blockSync.IsSet() || r.syncStartTime.IsZero() {
return time.Duration(0)
}
return time.Since(r.syncStartTime)
}
func (r *BlockchainReactor) GetRemainingSyncTime() time.Duration {
if !r.blockSync.IsSet() {
return time.Duration(0)
}
r.mtx.RLock()
defer r.mtx.RUnlock()
targetSyncs := r.maxPeerHeight - r.syncStartHeight
currentSyncs := r.syncHeight - r.syncStartHeight + 1
if currentSyncs < 0 || r.lastSyncRate < 0.001 {
return time.Duration(0)
}
remain := float64(targetSyncs-currentSyncs) / r.lastSyncRate
return time.Duration(int64(remain * float64(time.Second)))
}
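The demux loop above rate-limits its own work with a ticker that refills a one-slot channel ("send at most per frequency but don't saturate"). A self-contained sketch of just that pacer pattern, simplified from the reactor's loop:

```
package main

import (
	"fmt"
	"time"
)

func main() {
	// The ticker refills a 1-slot channel, so the work below runs at most
	// once per tick and pending requests never pile up into a backlog.
	const freq = 20 * time.Millisecond
	doWorkCh := make(chan struct{}, 1)
	tick := time.NewTicker(freq)
	defer tick.Stop()

	deadline := time.After(200 * time.Millisecond)
	for {
		select {
		case <-tick.C:
			select {
			case doWorkCh <- struct{}{}:
			default: // a request is already pending; don't saturate
			}
		case <-doWorkCh:
			fmt.Println("do one unit of work")
		case <-deadline:
			return
		}
	}
}
```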


@@ -1,533 +0,0 @@
package v2
import (
"fmt"
"net"
"os"
"sync"
"testing"
"time"
"github.com/gogo/protobuf/proto"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
dbm "github.com/tendermint/tm-db"
abciclient "github.com/tendermint/tendermint/abci/client"
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/blocksync/v2/internal/behavior"
"github.com/tendermint/tendermint/internal/consensus"
"github.com/tendermint/tendermint/internal/mempool/mock"
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/internal/p2p/conn"
"github.com/tendermint/tendermint/internal/proxy"
sm "github.com/tendermint/tendermint/internal/state"
sf "github.com/tendermint/tendermint/internal/state/test/factory"
tmstore "github.com/tendermint/tendermint/internal/store"
"github.com/tendermint/tendermint/internal/test/factory"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blocksync"
"github.com/tendermint/tendermint/types"
)
type mockPeer struct {
service.Service
id types.NodeID
}
func (mp mockPeer) FlushStop() {}
func (mp mockPeer) ID() types.NodeID { return mp.id }
func (mp mockPeer) RemoteIP() net.IP { return net.IP{} }
func (mp mockPeer) RemoteAddr() net.Addr { return &net.TCPAddr{IP: mp.RemoteIP(), Port: 8800} }
func (mp mockPeer) IsOutbound() bool { return true }
func (mp mockPeer) IsPersistent() bool { return true }
func (mp mockPeer) CloseConn() error { return nil }
func (mp mockPeer) NodeInfo() types.NodeInfo {
return types.NodeInfo{
NodeID: "",
ListenAddr: "",
}
}
func (mp mockPeer) Status() conn.ConnectionStatus { return conn.ConnectionStatus{} }
func (mp mockPeer) SocketAddr() *p2p.NetAddress { return &p2p.NetAddress{} }
func (mp mockPeer) Send(byte, []byte) bool { return true }
func (mp mockPeer) TrySend(byte, []byte) bool { return true }
func (mp mockPeer) Set(string, interface{}) {}
func (mp mockPeer) Get(string) interface{} { return struct{}{} }
//nolint:unused
type mockBlockStore struct {
blocks map[int64]*types.Block
}
//nolint:unused
func (ml *mockBlockStore) Height() int64 {
return int64(len(ml.blocks))
}
//nolint:unused
func (ml *mockBlockStore) LoadBlock(height int64) *types.Block {
return ml.blocks[height]
}
//nolint:unused
func (ml *mockBlockStore) SaveBlock(block *types.Block, part *types.PartSet, commit *types.Commit) {
ml.blocks[block.Height] = block
}
type mockBlockApplier struct {
}
// XXX: Add whitelist/blacklist?
func (mba *mockBlockApplier) ApplyBlock(
state sm.State, blockID types.BlockID, block *types.Block,
) (sm.State, error) {
state.LastBlockHeight++
return state, nil
}
type mockSwitchIo struct {
mtx sync.Mutex
switchedToConsensus bool
numStatusResponse int
numBlockResponse int
numNoBlockResponse int
numStatusRequest int
}
var _ iIO = (*mockSwitchIo)(nil)
func (sio *mockSwitchIo) sendBlockRequest(_ p2p.Peer, _ int64) error {
return nil
}
func (sio *mockSwitchIo) sendStatusResponse(_, _ int64, _ p2p.Peer) error {
sio.mtx.Lock()
defer sio.mtx.Unlock()
sio.numStatusResponse++
return nil
}
func (sio *mockSwitchIo) sendBlockToPeer(_ *types.Block, _ p2p.Peer) error {
sio.mtx.Lock()
defer sio.mtx.Unlock()
sio.numBlockResponse++
return nil
}
func (sio *mockSwitchIo) sendBlockNotFound(_ int64, _ p2p.Peer) error {
sio.mtx.Lock()
defer sio.mtx.Unlock()
sio.numNoBlockResponse++
return nil
}
func (sio *mockSwitchIo) trySwitchToConsensus(_ sm.State, _ bool) bool {
sio.mtx.Lock()
defer sio.mtx.Unlock()
sio.switchedToConsensus = true
return true
}
func (sio *mockSwitchIo) broadcastStatusRequest() error {
return nil
}
func (sio *mockSwitchIo) sendStatusRequest(_ p2p.Peer) error {
sio.mtx.Lock()
defer sio.mtx.Unlock()
sio.numStatusRequest++
return nil
}
type testReactorParams struct {
logger log.Logger
genDoc *types.GenesisDoc
privVals []types.PrivValidator
startHeight int64
mockA bool
}
func newTestReactor(t *testing.T, p testReactorParams) *BlockchainReactor {
store, state, _ := newReactorStore(t, p.genDoc, p.privVals, p.startHeight)
reporter := behavior.NewMockReporter()
var appl blockApplier
if p.mockA {
appl = &mockBlockApplier{}
} else {
app := &testApp{}
cc := abciclient.NewLocalCreator(app)
proxyApp := proxy.NewAppConns(cc)
err := proxyApp.Start()
require.NoError(t, err)
db := dbm.NewMemDB()
stateStore := sm.NewStore(db)
blockStore := tmstore.NewBlockStore(dbm.NewMemDB())
appl = sm.NewBlockExecutor(
stateStore, p.logger, proxyApp.Consensus(), mock.Mempool{}, sm.EmptyEvidencePool{}, blockStore)
err = stateStore.Save(state)
require.NoError(t, err)
}
r := newReactor(state, store, reporter, appl, true, consensus.NopMetrics())
logger := log.TestingLogger()
r.SetLogger(logger.With("module", "blockchain"))
return r
}
// This test is left here and not deleted to retain the termination cases for
// future improvement in [#4482](https://github.com/tendermint/tendermint/issues/4482).
// func TestReactorTerminationScenarios(t *testing.T) {
// config := cfg.ResetTestRoot("blockchain_reactor_v2_test")
// defer os.RemoveAll(config.RootDir)
// genDoc, privVals := randGenesisDoc(config.ChainID(), 1, false, 30)
// refStore, _, _ := newReactorStore(genDoc, privVals, 20)
// params := testReactorParams{
// logger: log.TestingLogger(),
// genDoc: genDoc,
// privVals: privVals,
// startHeight: 10,
// bufferSize: 100,
// mockA: true,
// }
// type testEvent struct {
// evType string
// peer string
// height int64
// }
// tests := []struct {
// name string
// params testReactorParams
// msgs []testEvent
// }{
// {
// name: "simple termination on max peer height - one peer",
// params: params,
// msgs: []testEvent{
// {evType: "AddPeer", peer: "P1"},
// {evType: "ReceiveS", peer: "P1", height: 13},
// {evType: "BlockReq"},
// {evType: "ReceiveB", peer: "P1", height: 11},
// {evType: "BlockReq"},
// {evType: "BlockReq"},
// {evType: "ReceiveB", peer: "P1", height: 12},
// {evType: "Process"},
// {evType: "ReceiveB", peer: "P1", height: 13},
// {evType: "Process"},
// },
// },
// {
// name: "simple termination on max peer height - two peers",
// params: params,
// msgs: []testEvent{
// {evType: "AddPeer", peer: "P1"},
// {evType: "AddPeer", peer: "P2"},
// {evType: "ReceiveS", peer: "P1", height: 13},
// {evType: "ReceiveS", peer: "P2", height: 15},
// {evType: "BlockReq"},
// {evType: "BlockReq"},
// {evType: "ReceiveB", peer: "P1", height: 11},
// {evType: "ReceiveB", peer: "P2", height: 12},
// {evType: "Process"},
// {evType: "BlockReq"},
// {evType: "BlockReq"},
// {evType: "ReceiveB", peer: "P1", height: 13},
// {evType: "Process"},
// {evType: "ReceiveB", peer: "P2", height: 14},
// {evType: "Process"},
// {evType: "BlockReq"},
// {evType: "ReceiveB", peer: "P2", height: 15},
// {evType: "Process"},
// },
// },
// {
// name: "termination on max peer height - two peers, noBlock error",
// params: params,
// msgs: []testEvent{
// {evType: "AddPeer", peer: "P1"},
// {evType: "AddPeer", peer: "P2"},
// {evType: "ReceiveS", peer: "P1", height: 13},
// {evType: "ReceiveS", peer: "P2", height: 15},
// {evType: "BlockReq"},
// {evType: "BlockReq"},
// {evType: "ReceiveNB", peer: "P1", height: 11},
// {evType: "BlockReq"},
// {evType: "ReceiveB", peer: "P2", height: 12},
// {evType: "ReceiveB", peer: "P2", height: 11},
// {evType: "Process"},
// {evType: "BlockReq"},
// {evType: "BlockReq"},
// {evType: "ReceiveB", peer: "P2", height: 13},
// {evType: "Process"},
// {evType: "ReceiveB", peer: "P2", height: 14},
// {evType: "Process"},
// {evType: "BlockReq"},
// {evType: "ReceiveB", peer: "P2", height: 15},
// {evType: "Process"},
// },
// },
// {
// name: "termination on max peer height - two peers, remove one peer",
// params: params,
// msgs: []testEvent{
// {evType: "AddPeer", peer: "P1"},
// {evType: "AddPeer", peer: "P2"},
// {evType: "ReceiveS", peer: "P1", height: 13},
// {evType: "ReceiveS", peer: "P2", height: 15},
// {evType: "BlockReq"},
// {evType: "BlockReq"},
// {evType: "RemovePeer", peer: "P1"},
// {evType: "BlockReq"},
// {evType: "ReceiveB", peer: "P2", height: 12},
// {evType: "ReceiveB", peer: "P2", height: 11},
// {evType: "Process"},
// {evType: "BlockReq"},
// {evType: "BlockReq"},
// {evType: "ReceiveB", peer: "P2", height: 13},
// {evType: "Process"},
// {evType: "ReceiveB", peer: "P2", height: 14},
// {evType: "Process"},
// {evType: "BlockReq"},
// {evType: "ReceiveB", peer: "P2", height: 15},
// {evType: "Process"},
// },
// },
// }
// for _, tt := range tests {
// tt := tt
// t.Run(tt.name, func(t *testing.T) {
// reactor := newTestReactor(params)
// reactor.Start()
// reactor.reporter = behavior.NewMockReporter()
// mockSwitch := &mockSwitchIo{switchedToConsensus: false}
// reactor.io = mockSwitch
// // time for go routines to start
// time.Sleep(time.Millisecond)
// for _, step := range tt.msgs {
// switch step.evType {
// case "AddPeer":
// reactor.scheduler.send(bcAddNewPeer{peerID: p2p.ID(step.peer)})
// case "RemovePeer":
// reactor.scheduler.send(bcRemovePeer{peerID: p2p.ID(step.peer)})
// case "ReceiveS":
// reactor.scheduler.send(bcStatusResponse{
// peerID: p2p.ID(step.peer),
// height: step.height,
// time: time.Now(),
// })
// case "ReceiveB":
// reactor.scheduler.send(bcBlockResponse{
// peerID: p2p.ID(step.peer),
// block: refStore.LoadBlock(step.height),
// size: 10,
// time: time.Now(),
// })
// case "ReceiveNB":
// reactor.scheduler.send(bcNoBlockResponse{
// peerID: p2p.ID(step.peer),
// height: step.height,
// time: time.Now(),
// })
// case "BlockReq":
// reactor.scheduler.send(rTrySchedule{time: time.Now()})
// case "Process":
// reactor.processor.send(rProcessBlock{})
// }
// // give time for messages to propagate between routines
// time.Sleep(time.Millisecond)
// }
// // time for processor to finish and reactor to switch to consensus
// time.Sleep(20 * time.Millisecond)
// assert.True(t, mockSwitch.hasSwitchedToConsensus())
// reactor.Stop()
// })
// }
// }
func TestReactorHelperMode(t *testing.T) {
var (
channelID = byte(0x40)
)
cfg := config.ResetTestRoot("blockchain_reactor_v2_test")
defer os.RemoveAll(cfg.RootDir)
genDoc, privVals := factory.RandGenesisDoc(cfg, 1, false, 30)
params := testReactorParams{
logger: log.TestingLogger(),
genDoc: genDoc,
privVals: privVals,
startHeight: 20,
mockA: true,
}
type testEvent struct {
peer string
event interface{}
}
tests := []struct {
name string
params testReactorParams
msgs []testEvent
}{
{
name: "status request",
params: params,
msgs: []testEvent{
{"P1", bcproto.StatusRequest{}},
{"P1", bcproto.BlockRequest{Height: 13}},
{"P1", bcproto.BlockRequest{Height: 20}},
{"P1", bcproto.BlockRequest{Height: 22}},
},
},
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
reactor := newTestReactor(t, params)
mockSwitch := &mockSwitchIo{switchedToConsensus: false}
reactor.io = mockSwitch
err := reactor.Start()
require.NoError(t, err)
for i := 0; i < len(tt.msgs); i++ {
step := tt.msgs[i]
switch ev := step.event.(type) {
case bcproto.StatusRequest:
old := mockSwitch.numStatusResponse
msgProto := new(bcproto.Message)
require.NoError(t, msgProto.Wrap(&ev))
msgBz, err := proto.Marshal(msgProto)
require.NoError(t, err)
reactor.Receive(channelID, mockPeer{id: types.NodeID(step.peer)}, msgBz)
assert.Equal(t, old+1, mockSwitch.numStatusResponse)
case bcproto.BlockRequest:
if ev.Height > params.startHeight {
old := mockSwitch.numNoBlockResponse
msgProto := new(bcproto.Message)
require.NoError(t, msgProto.Wrap(&ev))
msgBz, err := proto.Marshal(msgProto)
require.NoError(t, err)
reactor.Receive(channelID, mockPeer{id: types.NodeID(step.peer)}, msgBz)
assert.Equal(t, old+1, mockSwitch.numNoBlockResponse)
} else {
old := mockSwitch.numBlockResponse
msgProto := new(bcproto.Message)
require.NoError(t, msgProto.Wrap(&ev))
msgBz, err := proto.Marshal(msgProto)
require.NoError(t, err)
reactor.Receive(channelID, mockPeer{id: types.NodeID(step.peer)}, msgBz)
assert.Equal(t, old+1, mockSwitch.numBlockResponse)
}
}
}
err = reactor.Stop()
require.NoError(t, err)
})
}
}
func TestReactorSetSwitchNil(t *testing.T) {
cfg := config.ResetTestRoot("blockchain_reactor_v2_test")
defer os.RemoveAll(cfg.RootDir)
genDoc, privVals := factory.RandGenesisDoc(cfg, 1, false, 30)
reactor := newTestReactor(t, testReactorParams{
logger: log.TestingLogger(),
genDoc: genDoc,
privVals: privVals,
})
reactor.SetSwitch(nil)
assert.Nil(t, reactor.Switch)
assert.Nil(t, reactor.io)
}
type testApp struct {
abci.BaseApplication
}
func newReactorStore(
t *testing.T,
genDoc *types.GenesisDoc,
privVals []types.PrivValidator,
maxBlockHeight int64) (*tmstore.BlockStore, sm.State, *sm.BlockExecutor) {
t.Helper()
require.Len(t, privVals, 1)
app := &testApp{}
cc := abciclient.NewLocalCreator(app)
proxyApp := proxy.NewAppConns(cc)
err := proxyApp.Start()
if err != nil {
panic(fmt.Errorf("error start app: %w", err))
}
stateDB := dbm.NewMemDB()
blockStore := tmstore.NewBlockStore(dbm.NewMemDB())
stateStore := sm.NewStore(stateDB)
state, err := sm.MakeGenesisState(genDoc)
require.NoError(t, err)
blockExec := sm.NewBlockExecutor(stateStore, log.TestingLogger(), proxyApp.Consensus(),
mock.Mempool{}, sm.EmptyEvidencePool{}, blockStore)
err = stateStore.Save(state)
require.NoError(t, err)
// add blocks in
for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
lastCommit := types.NewCommit(blockHeight-1, 0, types.BlockID{}, nil)
if blockHeight > 1 {
lastBlockMeta := blockStore.LoadBlockMeta(blockHeight - 1)
lastBlock := blockStore.LoadBlock(blockHeight - 1)
vote, err := factory.MakeVote(
privVals[0],
lastBlock.Header.ChainID, 0,
lastBlock.Header.Height, 0, 2,
lastBlockMeta.BlockID,
time.Now(),
)
require.NoError(t, err)
lastCommit = types.NewCommit(vote.Height, vote.Round,
lastBlockMeta.BlockID, []types.CommitSig{vote.CommitSig()})
}
thisBlock := sf.MakeBlock(state, blockHeight, lastCommit)
thisParts := thisBlock.MakePartSet(types.BlockPartSizeBytes)
blockID := types.BlockID{Hash: thisBlock.Hash(), PartSetHeader: thisParts.Header()}
state, err = blockExec.ApplyBlock(state, blockID, thisBlock)
require.NoError(t, err)
blockStore.SaveBlock(thisBlock, thisParts, lastCommit)
}
return blockStore, state, blockExec
}

View File

@@ -1,166 +0,0 @@
package v2
import (
"fmt"
"strings"
"sync/atomic"
"github.com/Workiva/go-datastructures/queue"
"github.com/tendermint/tendermint/libs/log"
)
type handleFunc = func(event Event) (Event, error)
const historySize = 25
// Routine is a structure that models a finite state machine as a serialized
// stream of events processed by a handle function. The Routine structure
// handles the concurrency and messaging guarantees. Events sent via `send`
// are handled by the `handle` function, and the resulting events are emitted
// on the channel returned by `next()`. Calling `stop()` on a routine
// concludes processing of all sent events and produces a `final()` event
// representing the terminal state.
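//
// Illustrative lifecycle sketch (not from the original file; it assumes the
// snippet is compiled inside this package so the unexported names resolve):
//
//	errQuit := errors.New("quit")
//	rt := newRoutine("demo", func(event Event) (Event, error) {
//		return noOp, errQuit // terminate after the first event
//	}, 10)
//	go rt.start()
//	<-rt.ready()        // wait until the routine is running
//	rt.send(noOp)       // enqueue an event for the handler
//	err := <-rt.final() // err == errQuit once processing concludes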
type Routine struct {
name string
handle handleFunc
queue *queue.PriorityQueue
history []Event
out chan Event
fin chan error
rdy chan struct{}
running *uint32
logger log.Logger
metrics *Metrics
}
func newRoutine(name string, handleFunc handleFunc, bufferSize int) *Routine {
return &Routine{
name: name,
handle: handleFunc,
queue: queue.NewPriorityQueue(bufferSize, true),
history: make([]Event, 0, historySize),
out: make(chan Event, bufferSize),
rdy: make(chan struct{}, 1),
fin: make(chan error, 1),
running: new(uint32),
logger: log.NewNopLogger(),
metrics: NopMetrics(),
}
}
func (rt *Routine) setLogger(logger log.Logger) {
rt.logger = logger
}
// nolint:unused
func (rt *Routine) setMetrics(metrics *Metrics) {
rt.metrics = metrics
}
func (rt *Routine) start() {
rt.logger.Info(fmt.Sprintf("%s: run", rt.name))
running := atomic.CompareAndSwapUint32(rt.running, uint32(0), uint32(1))
if !running {
panic(fmt.Sprintf("%s is already running", rt.name))
}
close(rt.rdy)
defer func() {
if r := recover(); r != nil {
var (
b strings.Builder
j int
)
for i := len(rt.history) - 1; i >= 0; i-- {
fmt.Fprintf(&b, "%d: %+v\n", j, rt.history[i])
j++
}
panic(fmt.Sprintf("%v\nlast events:\n%v", r, b.String()))
}
stopped := atomic.CompareAndSwapUint32(rt.running, uint32(1), uint32(0))
if !stopped {
panic(fmt.Sprintf("%s is failed to stop", rt.name))
}
}()
for {
events, err := rt.queue.Get(1)
if err == queue.ErrDisposed {
rt.terminate(nil)
return
} else if err != nil {
rt.terminate(err)
return
}
oEvent, err := rt.handle(events[0].(Event))
rt.metrics.EventsHandled.With("routine", rt.name).Add(1)
if err != nil {
rt.terminate(err)
return
}
rt.metrics.EventsOut.With("routine", rt.name).Add(1)
rt.logger.Debug(fmt.Sprintf("%s: produced %T %+v", rt.name, oEvent, oEvent))
// Skip rTrySchedule and rProcessBlock events as they clutter the history
// due to their frequency.
switch events[0].(type) {
case rTrySchedule:
case rProcessBlock:
default:
rt.history = append(rt.history, events[0].(Event))
if len(rt.history) > historySize {
rt.history = rt.history[1:]
}
}
rt.out <- oEvent
}
}
// XXX: look into returning OpError in the net package
func (rt *Routine) send(event Event) bool {
rt.logger.Debug(fmt.Sprintf("%s: received %T %+v", rt.name, event, event))
if !rt.isRunning() {
return false
}
err := rt.queue.Put(event)
if err != nil {
rt.metrics.EventsShed.With("routine", rt.name).Add(1)
rt.logger.Error(fmt.Sprintf("%s: send failed, queue was full/stopped", rt.name))
return false
}
rt.metrics.EventsSent.With("routine", rt.name).Add(1)
return true
}
func (rt *Routine) isRunning() bool {
return atomic.LoadUint32(rt.running) == 1
}
func (rt *Routine) next() chan Event {
return rt.out
}
func (rt *Routine) ready() chan struct{} {
return rt.rdy
}
func (rt *Routine) stop() {
if !rt.isRunning() { // XXX: this should check rt.queue.Disposed()
return
}
rt.logger.Info(fmt.Sprintf("%s: stop", rt.name))
rt.queue.Dispose() // this should block until all queue items are free?
}
func (rt *Routine) final() chan error {
return rt.fin
}
// XXX: Maybe get rid of this
func (rt *Routine) terminate(reason error) {
// We don't close the rt.out channel here, to avoid spinning on the closed channel
// in the event loop.
rt.fin <- reason
}

View File

@@ -1,163 +0,0 @@
package v2
import (
"fmt"
"testing"
"time"
"github.com/stretchr/testify/assert"
)
type eventA struct {
priorityNormal
}
var errDone = fmt.Errorf("done")
func simpleHandler(event Event) (Event, error) {
if _, ok := event.(eventA); ok {
return noOp, errDone
}
return noOp, nil
}
func TestRoutineFinal(t *testing.T) {
var (
bufferSize = 10
routine = newRoutine("simpleRoutine", simpleHandler, bufferSize)
)
assert.False(t, routine.isRunning(),
"expected an initialized routine to not be running")
go routine.start()
<-routine.ready()
assert.True(t, routine.isRunning(),
"expected an started routine")
assert.True(t, routine.send(eventA{}),
"expected sending to a ready routine to succeed")
assert.Equal(t, errDone, <-routine.final(),
"expected the final event to be done")
assert.False(t, routine.isRunning(),
"expected an completed routine to no longer be running")
}
func TestRoutineStop(t *testing.T) {
var (
bufferSize = 10
routine = newRoutine("simpleRoutine", simpleHandler, bufferSize)
)
assert.False(t, routine.send(eventA{}),
"expected sending to an unstarted routine to fail")
go routine.start()
<-routine.ready()
assert.True(t, routine.send(eventA{}),
"expected sending to a running routine to succeed")
routine.stop()
assert.False(t, routine.send(eventA{}),
"expected sending to a stopped routine to fail")
}
type finalCount struct {
count int
}
func (f finalCount) Error() string {
return "end"
}
func genStatefulHandler(maxCount int) handleFunc {
counter := 0
return func(event Event) (Event, error) {
if _, ok := event.(eventA); ok {
counter++
if counter >= maxCount {
return noOp, finalCount{counter}
}
return eventA{}, nil
}
return noOp, nil
}
}
func feedback(r *Routine) {
for event := range r.next() {
r.send(event)
}
}
func TestStatefulRoutine(t *testing.T) {
var (
count = 10
handler = genStatefulHandler(count)
bufferSize = 20
routine = newRoutine("statefulRoutine", handler, bufferSize)
)
go routine.start()
go feedback(routine)
<-routine.ready()
assert.True(t, routine.send(eventA{}),
"expected sending to a started routine to succeed")
final := <-routine.final()
if fnl, ok := final.(finalCount); ok {
assert.Equal(t, count, fnl.count,
"expected the routine to count to 10")
} else {
t.Fail()
}
}
type lowPriorityEvent struct {
priorityLow
}
type highPriorityEvent struct {
priorityHigh
}
func handleWithPriority(event Event) (Event, error) {
switch event.(type) {
case lowPriorityEvent:
return noOp, nil
case highPriorityEvent:
return noOp, errDone
}
return noOp, nil
}
func TestPriority(t *testing.T) {
var (
bufferSize = 20
routine = newRoutine("priorityRoutine", handleWithPriority, bufferSize)
)
go routine.start()
<-routine.ready()
go func() {
for {
routine.send(lowPriorityEvent{})
time.Sleep(1 * time.Millisecond)
}
}()
time.Sleep(10 * time.Millisecond)
assert.True(t, routine.isRunning(),
"expected an started routine")
assert.True(t, routine.send(highPriorityEvent{}),
"expected send to succeed even when saturated")
assert.Equal(t, errDone, <-routine.final())
assert.False(t, routine.isRunning(),
"expected an started routine")
}

View File

@@ -1,711 +0,0 @@
package v2
import (
"bytes"
"errors"
"fmt"
"math"
"sort"
"time"
"github.com/tendermint/tendermint/types"
)
// Events generated by the scheduler:
// all blocks have been processed
type scFinishedEv struct {
priorityNormal
reason string
}
func (e scFinishedEv) String() string {
return fmt.Sprintf("scFinishedEv{%v}", e.reason)
}
// send a blockRequest message
type scBlockRequest struct {
priorityNormal
peerID types.NodeID
height int64
}
func (e scBlockRequest) String() string {
return fmt.Sprintf("scBlockRequest{%d from %v}", e.height, e.peerID)
}
// a block has been received and validated by the scheduler
type scBlockReceived struct {
priorityNormal
peerID types.NodeID
block *types.Block
}
func (e scBlockReceived) String() string {
return fmt.Sprintf("scBlockReceived{%d#%X from %v}", e.block.Height, e.block.Hash(), e.peerID)
}
// scheduler detected a peer error
type scPeerError struct {
priorityHigh
peerID types.NodeID
reason error
}
func (e scPeerError) String() string {
return fmt.Sprintf("scPeerError{%v errored with %v}", e.peerID, e.reason)
}
// scheduler removed a set of peers (timed out or slow peer)
type scPeersPruned struct {
priorityHigh
peers []types.NodeID
}
func (e scPeersPruned) String() string {
return fmt.Sprintf("scPeersPruned{%v}", e.peers)
}
// XXX: make this fatal?
// scheduler encountered a fatal error
type scSchedulerFail struct {
priorityHigh
reason error
}
func (e scSchedulerFail) String() string {
return fmt.Sprintf("scSchedulerFail{%v}", e.reason)
}
type blockState int
const (
blockStateUnknown blockState = iota + 1 // no known peer has this block
blockStateNew // indicates that a peer has reported having this block
blockStatePending // indicates that this block has been requested from a peer
blockStateReceived // indicates that this block has been received from a peer
blockStateProcessed // indicates that this block has been applied
)
func (e blockState) String() string {
switch e {
case blockStateUnknown:
return "Unknown"
case blockStateNew:
return "New"
case blockStatePending:
return "Pending"
case blockStateReceived:
return "Received"
case blockStateProcessed:
return "Processed"
default:
return fmt.Sprintf("invalid blockState: %d", e)
}
}
type peerState int
const (
peerStateNew = iota + 1
peerStateReady
peerStateRemoved
)
func (e peerState) String() string {
switch e {
case peerStateNew:
return "New"
case peerStateReady:
return "Ready"
case peerStateRemoved:
return "Removed"
default:
panic(fmt.Sprintf("unknown peerState: %d", e))
}
}
type scPeer struct {
peerID types.NodeID
// initialized as New when peer is added, updated to Ready when statusUpdate is received,
// updated to Removed when peer is removed
state peerState
base int64 // updated when statusResponse is received
height int64 // updated when statusResponse is received
lastTouched time.Time
lastRate int64 // last receive rate in bytes
}
func (p scPeer) String() string {
return fmt.Sprintf("{state %v, base %d, height %d, lastTouched %v, lastRate %d, id %v}",
p.state, p.base, p.height, p.lastTouched, p.lastRate, p.peerID)
}
func newScPeer(peerID types.NodeID) *scPeer {
return &scPeer{
peerID: peerID,
state: peerStateNew,
base: -1,
height: -1,
lastTouched: time.Time{},
}
}
// The scheduler keeps track of the state of each block and each peer. The
// scheduler will attempt to schedule new block requests with `trySchedule`
// events and remove slow peers with `tryPrune` events.
type scheduler struct {
initHeight int64
// next block that needs to be processed. All blocks with smaller height are
// in Processed state.
height int64
// lastAdvance tracks the last time a block execution happened.
// syncTimeout is the maximum time the scheduler waits to advance in the block sync process before finishing.
// This covers the cases where there are no peers or all peers have a lower height.
lastAdvance time.Time
syncTimeout time.Duration
// a map of peerID to scheduler specific peer struct `scPeer` used to keep
// track of peer specific state
peers map[types.NodeID]*scPeer
peerTimeout time.Duration // maximum response time from a peer otherwise prune
minRecvRate int64 // minimum receive rate from peer otherwise prune
// the maximum number of blocks that should be New, Received or Pending at any point
// in time. This is used to enforce a limit on the blockStates map.
targetPending int
// a list of blocks to be scheduled (New), Pending or Received. Its length should be
// smaller than targetPending.
blockStates map[int64]blockState
// a map of heights to the peer we are waiting for a response from
pendingBlocks map[int64]types.NodeID
// the time at which a block was put in blockStatePending
pendingTime map[int64]time.Time
// a map of heights to the peers that put the block in blockStateReceived
receivedBlocks map[int64]types.NodeID
}
func (sc scheduler) String() string {
return fmt.Sprintf("ih: %d, bst: %v, peers: %v, pblks: %v, ptm %v, rblks: %v",
sc.initHeight, sc.blockStates, sc.peers, sc.pendingBlocks, sc.pendingTime, sc.receivedBlocks)
}
func newScheduler(initHeight int64, startTime time.Time) *scheduler {
sc := scheduler{
initHeight: initHeight,
lastAdvance: startTime,
syncTimeout: 60 * time.Second,
height: initHeight,
blockStates: make(map[int64]blockState),
peers: make(map[types.NodeID]*scPeer),
pendingBlocks: make(map[int64]types.NodeID),
pendingTime: make(map[int64]time.Time),
receivedBlocks: make(map[int64]types.NodeID),
targetPending: 10, // TODO - pass as param
peerTimeout: 15 * time.Second, // TODO - pass as param
minRecvRate: 0, // int64(7680), TODO - pass as param
}
return &sc
}
func (sc *scheduler) ensurePeer(peerID types.NodeID) *scPeer {
if _, ok := sc.peers[peerID]; !ok {
sc.peers[peerID] = newScPeer(peerID)
}
return sc.peers[peerID]
}
func (sc *scheduler) touchPeer(peerID types.NodeID, time time.Time) error {
peer, ok := sc.peers[peerID]
if !ok {
return fmt.Errorf("couldn't find peer %s", peerID)
}
if peer.state != peerStateReady {
return fmt.Errorf("tried to touch peer in state %s, must be Ready", peer.state)
}
peer.lastTouched = time
return nil
}
func (sc *scheduler) removePeer(peerID types.NodeID) {
peer, ok := sc.peers[peerID]
if !ok {
return
}
if peer.state == peerStateRemoved {
return
}
for height, pendingPeerID := range sc.pendingBlocks {
if pendingPeerID == peerID {
sc.setStateAtHeight(height, blockStateNew)
delete(sc.pendingTime, height)
delete(sc.pendingBlocks, height)
}
}
for height, rcvPeerID := range sc.receivedBlocks {
if rcvPeerID == peerID {
sc.setStateAtHeight(height, blockStateNew)
delete(sc.receivedBlocks, height)
}
}
// remove the blocks from blockStates if the peer removal causes the max peer height to be lower.
peer.state = peerStateRemoved
maxPeerHeight := int64(0)
for _, otherPeer := range sc.peers {
if otherPeer.state != peerStateReady {
continue
}
if otherPeer.peerID != peer.peerID && otherPeer.height > maxPeerHeight {
maxPeerHeight = otherPeer.height
}
}
for h := range sc.blockStates {
if h > maxPeerHeight {
delete(sc.blockStates, h)
}
}
}
// check if the blockPool is running low and add new blocks in New state to be requested.
// This function is called when there is an increase in the maximum peer height or when
// blocks are processed.
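//
// Illustrative example (not from the original file): with sc.height = 5,
// targetPending = 10 and a max peer height of 8, the heights 5..8 that are
// still Unknown are marked New, and nothing above the max peer height is added.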
func (sc *scheduler) addNewBlocks() {
if len(sc.blockStates) >= sc.targetPending {
return
}
for i := sc.height; i < int64(sc.targetPending)+sc.height; i++ {
if i > sc.maxHeight() {
break
}
if sc.getStateAtHeight(i) == blockStateUnknown {
sc.setStateAtHeight(i, blockStateNew)
}
}
}
func (sc *scheduler) setPeerRange(peerID types.NodeID, base int64, height int64) error {
peer := sc.ensurePeer(peerID)
if peer.state == peerStateRemoved {
return nil // noop
}
if height < peer.height {
sc.removePeer(peerID)
return fmt.Errorf("cannot move peer height lower. from %d to %d", peer.height, height)
}
if base > height {
sc.removePeer(peerID)
return fmt.Errorf("cannot set peer base higher than its height")
}
peer.base = base
peer.height = height
peer.state = peerStateReady
sc.addNewBlocks()
return nil
}
func (sc *scheduler) getStateAtHeight(height int64) blockState {
if height < sc.height {
return blockStateProcessed
} else if state, ok := sc.blockStates[height]; ok {
return state
} else {
return blockStateUnknown
}
}
func (sc *scheduler) getPeersWithHeight(height int64) []types.NodeID {
peers := make([]types.NodeID, 0)
for _, peer := range sc.peers {
if peer.state != peerStateReady {
continue
}
if peer.base <= height && peer.height >= height {
peers = append(peers, peer.peerID)
}
}
return peers
}
func (sc *scheduler) prunablePeers(peerTimeout time.Duration, minRecvRate int64, now time.Time) []types.NodeID {
prunable := make([]types.NodeID, 0)
for peerID, peer := range sc.peers {
if peer.state != peerStateReady {
continue
}
if now.Sub(peer.lastTouched) > peerTimeout || peer.lastRate < minRecvRate {
prunable = append(prunable, peerID)
}
}
// Tests for handleTryPrunePeer() may fail without sort due to range non-determinism
sort.Sort(PeerByID(prunable))
return prunable
}
func (sc *scheduler) setStateAtHeight(height int64, state blockState) {
sc.blockStates[height] = state
}
// CONTRACT: peer exists and is in Ready state.
func (sc *scheduler) markReceived(peerID types.NodeID, height int64, size int64, now time.Time) error {
peer := sc.peers[peerID]
if state := sc.getStateAtHeight(height); state != blockStatePending || sc.pendingBlocks[height] != peerID {
return fmt.Errorf("received block %d from peer %s without being requested", height, peerID)
}
pendingTime, ok := sc.pendingTime[height]
if !ok || now.Sub(pendingTime) <= 0 {
return fmt.Errorf("clock error: block %d received at %s but requested at %s",
height, now, pendingTime)
}
peer.lastRate = size / now.Sub(pendingTime).Nanoseconds()
sc.setStateAtHeight(height, blockStateReceived)
delete(sc.pendingBlocks, height)
delete(sc.pendingTime, height)
sc.receivedBlocks[height] = peerID
return nil
}
func (sc *scheduler) markPending(peerID types.NodeID, height int64, time time.Time) error {
state := sc.getStateAtHeight(height)
if state != blockStateNew {
return fmt.Errorf("block %d should be in blockStateNew but is %s", height, state)
}
peer, ok := sc.peers[peerID]
if !ok {
return fmt.Errorf("cannot find peer %s", peerID)
}
if peer.state != peerStateReady {
return fmt.Errorf("cannot schedule %d from %s in %s", height, peerID, peer.state)
}
if height > peer.height {
return fmt.Errorf("cannot request height %d from peer %s that is at height %d",
height, peerID, peer.height)
}
if height < peer.base {
return fmt.Errorf("cannot request height %d for peer %s with base %d",
height, peerID, peer.base)
}
sc.setStateAtHeight(height, blockStatePending)
sc.pendingBlocks[height] = peerID
sc.pendingTime[height] = time
return nil
}
func (sc *scheduler) markProcessed(height int64) error {
// It is possible that a peer error or timeout is handled after the processor
// has processed the block but before the scheduler received this event, so
// when the pcBlockProcessed event is received the block may have been
// requested again => don't check the block state.
sc.lastAdvance = time.Now()
sc.height = height + 1
delete(sc.pendingBlocks, height)
delete(sc.pendingTime, height)
delete(sc.receivedBlocks, height)
delete(sc.blockStates, height)
sc.addNewBlocks()
return nil
}
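// Illustrative walkthrough (not from the original file; it assumes the code is
// compiled inside this package, e.g. in a _test.go file, since scheduler is
// unexported). A single block at height 1 moves New -> Pending -> Received ->
// Processed:
//
//	now := time.Now()
//	sc := newScheduler(1, now)
//	_ = sc.setPeerRange("P1", 1, 3)  // P1 has blocks 1..3; heights 1..3 become New
//	_ = sc.markPending("P1", 1, now) // request height 1 from P1
//	_ = sc.markReceived("P1", 1, 100, now.Add(time.Second)) // the block arrives
//	_ = sc.markProcessed(1)          // block applied; sc.height advances to 2
//	fmt.Println(sc.getStateAtHeight(1), sc.height) // prints: Processed 2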
func (sc *scheduler) allBlocksProcessed() bool {
if len(sc.peers) == 0 {
return false
}
return sc.height >= sc.maxHeight()
}
// returns the max peer height or, if no peer is ready, the last processed block height, i.e. sc.height - 1
func (sc *scheduler) maxHeight() int64 {
max := sc.height - 1
for _, peer := range sc.peers {
if peer.state != peerStateReady {
continue
}
if max < peer.height {
max = peer.height
}
}
return max
}
// lowest block in sc.blockStates with state == blockStateNew or -1 if no new blocks
func (sc *scheduler) nextHeightToSchedule() int64 {
var min int64 = math.MaxInt64
for height, state := range sc.blockStates {
if state == blockStateNew && height < min {
min = height
}
}
if min == math.MaxInt64 {
min = -1
}
return min
}
func (sc *scheduler) pendingFrom(peerID types.NodeID) []int64 {
var heights []int64
for height, pendingPeerID := range sc.pendingBlocks {
if pendingPeerID == peerID {
heights = append(heights, height)
}
}
return heights
}
func (sc *scheduler) selectPeer(height int64) (types.NodeID, error) {
peers := sc.getPeersWithHeight(height)
if len(peers) == 0 {
return "", fmt.Errorf("cannot find peer for height %d", height)
}
// create a map from number of pending requests to a list
// of peers having that number of pending requests.
pendingFrom := make(map[int][]types.NodeID)
for _, peerID := range peers {
numPending := len(sc.pendingFrom(peerID))
pendingFrom[numPending] = append(pendingFrom[numPending], peerID)
}
// find the set of peers with minimum number of pending requests.
var minPending int64 = math.MaxInt64
for mp := range pendingFrom {
if int64(mp) < minPending {
minPending = int64(mp)
}
}
sort.Sort(PeerByID(pendingFrom[int(minPending)]))
return pendingFrom[int(minPending)][0], nil
}
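// Illustrative selectPeer example (not from the original file): if peers p1,
// p2 and p3 all report the requested height and currently have 2, 0 and 0
// pending requests respectively, pendingFrom is {2: [p1], 0: [p2, p3]}; the
// minimum is 0, the tied candidates are sorted by ID, and p2 is returned.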
// PeerByID is a list of peers sorted by peerID.
type PeerByID []types.NodeID
func (peers PeerByID) Len() int {
return len(peers)
}
func (peers PeerByID) Less(i, j int) bool {
return bytes.Compare([]byte(peers[i]), []byte(peers[j])) == -1
}
func (peers PeerByID) Swap(i, j int) {
peers[i], peers[j] = peers[j], peers[i]
}
// Handlers
// This handler gets the block, performs some validation and then passes it on to the processor.
func (sc *scheduler) handleBlockResponse(event bcBlockResponse) (Event, error) {
err := sc.touchPeer(event.peerID, event.time)
if err != nil {
// peer does not exist OR not ready
return noOp, nil
}
err = sc.markReceived(event.peerID, event.block.Height, event.size, event.time)
if err != nil {
sc.removePeer(event.peerID)
return scPeerError{peerID: event.peerID, reason: err}, nil
}
return scBlockReceived{peerID: event.peerID, block: event.block}, nil
}
func (sc *scheduler) handleNoBlockResponse(event bcNoBlockResponse) (Event, error) {
// No such peer or peer was removed.
peer, ok := sc.peers[event.peerID]
if !ok || peer.state == peerStateRemoved {
return noOp, nil
}
// The peer may have been just removed due to errors, low speed or timeouts.
sc.removePeer(event.peerID)
return scPeerError{peerID: event.peerID,
reason: fmt.Errorf("peer %v with base %d height %d claims no block for %d",
event.peerID, peer.base, peer.height, event.height)}, nil
}
func (sc *scheduler) handleBlockProcessed(event pcBlockProcessed) (Event, error) {
if event.height != sc.height {
panic(fmt.Sprintf("processed height %d, but expected height %d", event.height, sc.height))
}
err := sc.markProcessed(event.height)
if err != nil {
return scSchedulerFail{reason: err}, nil
}
if sc.allBlocksProcessed() {
return scFinishedEv{reason: "processed all blocks"}, nil
}
return noOp, nil
}
// Handles an error from the processor. The processor had already cleaned the blocks from
// the peers included in this event. Just attempt to remove the peers.
func (sc *scheduler) handleBlockProcessError(event pcBlockVerificationFailure) (Event, error) {
// The peers may have been just removed due to errors, low speed or timeouts.
sc.removePeer(event.firstPeerID)
if event.firstPeerID != event.secondPeerID {
sc.removePeer(event.secondPeerID)
}
if sc.allBlocksProcessed() {
return scFinishedEv{reason: "error on last block"}, nil
}
return noOp, nil
}
func (sc *scheduler) handleAddNewPeer(event bcAddNewPeer) (Event, error) {
sc.ensurePeer(event.peerID)
return noOp, nil
}
func (sc *scheduler) handleRemovePeer(event bcRemovePeer) (Event, error) {
sc.removePeer(event.peerID)
if sc.allBlocksProcessed() {
return scFinishedEv{reason: "removed peer"}, nil
}
// Return scPeerError so the peer (and all associated blocks) is removed from
// the processor.
return scPeerError{peerID: event.peerID, reason: errors.New("peer was stopped")}, nil
}
func (sc *scheduler) handleTryPrunePeer(event rTryPrunePeer) (Event, error) {
// Check the behavior of the peer responsible for delivering the block at sc.height.
timeHeightAsked, ok := sc.pendingTime[sc.height]
if ok && time.Since(timeHeightAsked) > sc.peerTimeout {
// A request was sent to a peer for block at sc.height but a response was not received
// from that peer within sc.peerTimeout. Remove the peer. This is to ensure that a peer
// will be timed out even if it sends blocks at higher heights but prevents progress by
// not sending the block at current height.
sc.removePeer(sc.pendingBlocks[sc.height])
}
prunablePeers := sc.prunablePeers(sc.peerTimeout, sc.minRecvRate, event.time)
if len(prunablePeers) == 0 {
return noOp, nil
}
for _, peerID := range prunablePeers {
sc.removePeer(peerID)
}
// If all blocks are processed we should finish.
if sc.allBlocksProcessed() {
return scFinishedEv{reason: "after try prune"}, nil
}
return scPeersPruned{peers: prunablePeers}, nil
}
func (sc *scheduler) handleResetState(event bcResetState) (Event, error) {
initHeight := event.state.LastBlockHeight + 1
if initHeight == 1 {
initHeight = event.state.InitialHeight
}
sc.initHeight = initHeight
sc.height = initHeight
sc.lastAdvance = time.Now()
sc.addNewBlocks()
return noOp, nil
}
func (sc *scheduler) handleTrySchedule(event rTrySchedule) (Event, error) {
if time.Since(sc.lastAdvance) > sc.syncTimeout {
return scFinishedEv{reason: "timeout, no advance"}, nil
}
nextHeight := sc.nextHeightToSchedule()
if nextHeight == -1 {
return noOp, nil
}
bestPeerID, err := sc.selectPeer(nextHeight)
if err != nil {
return scSchedulerFail{reason: err}, nil
}
if err := sc.markPending(bestPeerID, nextHeight, event.time); err != nil {
return scSchedulerFail{reason: err}, nil // XXX: peerError might be more appropriate
}
return scBlockRequest{peerID: bestPeerID, height: nextHeight}, nil
}
func (sc *scheduler) handleStatusResponse(event bcStatusResponse) (Event, error) {
err := sc.setPeerRange(event.peerID, event.base, event.height)
if err != nil {
return scPeerError{peerID: event.peerID, reason: err}, nil
}
return noOp, nil
}
func (sc *scheduler) handle(event Event) (Event, error) {
switch event := event.(type) {
case bcResetState:
nextEvent, err := sc.handleResetState(event)
return nextEvent, err
case bcStatusResponse:
nextEvent, err := sc.handleStatusResponse(event)
return nextEvent, err
case bcBlockResponse:
nextEvent, err := sc.handleBlockResponse(event)
return nextEvent, err
case bcNoBlockResponse:
nextEvent, err := sc.handleNoBlockResponse(event)
return nextEvent, err
case rTrySchedule:
nextEvent, err := sc.handleTrySchedule(event)
return nextEvent, err
case bcAddNewPeer:
nextEvent, err := sc.handleAddNewPeer(event)
return nextEvent, err
case bcRemovePeer:
nextEvent, err := sc.handleRemovePeer(event)
return nextEvent, err
case rTryPrunePeer:
nextEvent, err := sc.handleTryPrunePeer(event)
return nextEvent, err
case pcBlockProcessed:
nextEvent, err := sc.handleBlockProcessed(event)
return nextEvent, err
case pcBlockVerificationFailure:
nextEvent, err := sc.handleBlockProcessError(event)
return nextEvent, err
default:
return scSchedulerFail{reason: fmt.Errorf("unknown event %v", event)}, nil
}
}
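// Illustrative event flow (not from the original file): a bcStatusResponse
// updates the peer's range and can mark new heights New; a later rTrySchedule
// picks the lowest New height, marks it Pending and emits scBlockRequest; the
// matching bcBlockResponse marks the height Received and emits scBlockReceived
// for the processor; the processor's pcBlockProcessed advances sc.height, and
// once all peer heights have been processed the scheduler emits scFinishedEv.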

File diff suppressed because it is too large

View File

@@ -1,65 +0,0 @@
package v2
import (
"github.com/Workiva/go-datastructures/queue"
)
// Event is the type that can be added to the priority queue.
type Event queue.Item
type priority interface {
Compare(other queue.Item) int
Priority() int
}
type priorityLow struct{}
type priorityNormal struct{}
type priorityHigh struct{}
func (p priorityLow) Priority() int {
return 1
}
func (p priorityNormal) Priority() int {
return 2
}
func (p priorityHigh) Priority() int {
return 3
}
func (p priorityLow) Compare(other queue.Item) int {
op := other.(priority)
if p.Priority() > op.Priority() {
return 1
} else if p.Priority() == op.Priority() {
return 0
}
return -1
}
func (p priorityNormal) Compare(other queue.Item) int {
op := other.(priority)
if p.Priority() > op.Priority() {
return 1
} else if p.Priority() == op.Priority() {
return 0
}
return -1
}
func (p priorityHigh) Compare(other queue.Item) int {
op := other.(priority)
if p.Priority() > op.Priority() {
return 1
} else if p.Priority() == op.Priority() {
return 0
}
return -1
}
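// Illustrative (not from the original file): priorityHigh.Compare(priorityLow)
// returns 1, priorityLow.Compare(priorityHigh) returns -1, and two events of
// equal priority compare as 0; the priority queue in routine.go uses this
// three-way result to order buffered events.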
type noOpEvent struct {
priorityLow
}
var noOp = noOpEvent{}

View File

@@ -62,7 +62,7 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
blockStore := store.NewBlockStore(blockDB)
// one for mempool, one for consensus
mtx := new(tmsync.RWMutex)
mtx := new(tmsync.Mutex)
proxyAppConnMem := abciclient.NewLocalClient(mtx, app)
proxyAppConnCon := abciclient.NewLocalClient(mtx, app)

View File

@@ -407,7 +407,7 @@ func newStateWithConfigAndBlockStore(
blockStore *store.BlockStore,
) *State {
// one for mempool, one for consensus
mtx := new(tmsync.RWMutex)
mtx := new(tmsync.Mutex)
proxyAppConnMem := abciclient.NewLocalClient(mtx, app)
proxyAppConnCon := abciclient.NewLocalClient(mtx, app)

View File

@@ -61,6 +61,9 @@ type Metrics struct {
// Number of blockparts transmitted by peer.
BlockParts metrics.Counter
// Histogram of the time taken per step, annotated with the reason that the step proceeded.
StepTime metrics.Histogram
}
// PrometheusMetrics returns Metrics build using Prometheus client library.
@@ -187,6 +190,12 @@ func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
Name: "block_parts",
Help: "Number of blockparts transmitted by peer.",
}, append(labels, "peer_id")).With(labelsAndValues...),
StepTime: prometheus.NewHistogramFrom(stdprometheus.HistogramOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "step_time",
Help: "Time spent per step.",
}, append(labels, "step", "reason")).With(labelsAndValues...),
}
}

View File

@@ -77,7 +77,7 @@ func (m *NewRoundStepMessage) ValidateHeight(initialHeight int64) error {
m.LastCommitRound, initialHeight)
}
if m.Height > initialHeight && m.LastCommitRound < 0 {
return fmt.Errorf("LastCommitRound can only be negative for initial height %v", // nolint
return fmt.Errorf("LastCommitRound can only be negative for initial height %v",
initialHeight)
}
return nil

View File

@@ -21,64 +21,49 @@ import (
var (
_ service.Service = (*Reactor)(nil)
_ p2p.Wrapper = (*tmcons.Message)(nil)
)
// ChannelShims contains a map of ChannelDescriptorShim objects, where each
// object wraps a reference to a legacy p2p ChannelDescriptor and the corresponding
// p2p proto.Message the new p2p Channel is responsible for handling.
//
// TODO: Remove once p2p refactor is complete.
// ref: https://github.com/tendermint/tendermint/issues/5670
ChannelShims = map[p2p.ChannelID]*p2p.ChannelDescriptorShim{
StateChannel: {
MsgType: new(tmcons.Message),
Descriptor: &p2p.ChannelDescriptor{
ID: byte(StateChannel),
Priority: 8,
SendQueueCapacity: 64,
RecvMessageCapacity: maxMsgSize,
RecvBufferCapacity: 128,
MaxSendBytes: 12000,
},
// GetChannelDescriptors produces the set of channel descriptors required by
// this package.
func GetChannelDescriptors() []*p2p.ChannelDescriptor {
return []*p2p.ChannelDescriptor{
{
ID: StateChannel,
MessageType: new(tmcons.Message),
Priority: 8,
SendQueueCapacity: 64,
RecvMessageCapacity: maxMsgSize,
RecvBufferCapacity: 128,
},
DataChannel: {
MsgType: new(tmcons.Message),
Descriptor: &p2p.ChannelDescriptor{
// TODO: Consider a split between gossiping current block and catchup
// stuff. Once we gossip the whole block there is nothing left to send
// until next height or round.
ID: byte(DataChannel),
Priority: 12,
SendQueueCapacity: 64,
RecvBufferCapacity: 512,
RecvMessageCapacity: maxMsgSize,
MaxSendBytes: 40000,
},
{
// TODO: Consider a split between gossiping current block and catchup
// stuff. Once we gossip the whole block there is nothing left to send
// until next height or round.
ID: DataChannel,
MessageType: new(tmcons.Message),
Priority: 12,
SendQueueCapacity: 64,
RecvBufferCapacity: 512,
RecvMessageCapacity: maxMsgSize,
},
VoteChannel: {
MsgType: new(tmcons.Message),
Descriptor: &p2p.ChannelDescriptor{
ID: byte(VoteChannel),
Priority: 10,
SendQueueCapacity: 64,
RecvBufferCapacity: 128,
RecvMessageCapacity: maxMsgSize,
MaxSendBytes: 150,
},
{
ID: VoteChannel,
MessageType: new(tmcons.Message),
Priority: 10,
SendQueueCapacity: 64,
RecvBufferCapacity: 128,
RecvMessageCapacity: maxMsgSize,
},
VoteSetBitsChannel: {
MsgType: new(tmcons.Message),
Descriptor: &p2p.ChannelDescriptor{
ID: byte(VoteSetBitsChannel),
Priority: 5,
SendQueueCapacity: 8,
RecvBufferCapacity: 128,
RecvMessageCapacity: maxMsgSize,
MaxSendBytes: 50,
},
{
ID: VoteSetBitsChannel,
MessageType: new(tmcons.Message),
Priority: 5,
SendQueueCapacity: 8,
RecvBufferCapacity: 128,
RecvMessageCapacity: maxMsgSize,
},
}
)
}
const (
StateChannel = p2p.ChannelID(0x20)
@@ -230,17 +215,15 @@ func (r *Reactor) OnStop() {
}
r.mtx.Lock()
peers := r.peers
// Close and wait for each of the peers to shut down.
// This is safe to perform with the lock since none of the peers require the
// lock to complete any of the methods that the waitgroup is waiting on.
for _, state := range r.peers {
state.closer.Close()
state.broadcastWG.Wait()
}
r.mtx.Unlock()
// wait for all spawned peer goroutines to gracefully exit
for _, ps := range peers {
ps.closer.Close()
}
for _, ps := range peers {
ps.broadcastWG.Wait()
}
// Close the StateChannel goroutine separately since it uses its own channel
// to signal closure.
close(r.stateCloseCh)

View File

@@ -50,9 +50,11 @@ type reactorTestSuite struct {
voteSetBitsChannels map[types.NodeID]*p2p.Channel
}
func chDesc(chID p2p.ChannelID) p2p.ChannelDescriptor {
return p2p.ChannelDescriptor{
ID: byte(chID),
func chDesc(chID p2p.ChannelID, size int) *p2p.ChannelDescriptor {
return &p2p.ChannelDescriptor{
ID: chID,
MessageType: new(tmcons.Message),
RecvBufferCapacity: size,
}
}
@@ -67,10 +69,10 @@ func setup(t *testing.T, numNodes int, states []*State, size int) *reactorTestSu
blocksyncSubs: make(map[types.NodeID]types.Subscription, numNodes),
}
rts.stateChannels = rts.network.MakeChannelsNoCleanup(t, chDesc(StateChannel), new(tmcons.Message), size)
rts.dataChannels = rts.network.MakeChannelsNoCleanup(t, chDesc(DataChannel), new(tmcons.Message), size)
rts.voteChannels = rts.network.MakeChannelsNoCleanup(t, chDesc(VoteChannel), new(tmcons.Message), size)
rts.voteSetBitsChannels = rts.network.MakeChannelsNoCleanup(t, chDesc(VoteSetBitsChannel), new(tmcons.Message), size)
rts.stateChannels = rts.network.MakeChannelsNoCleanup(t, chDesc(StateChannel, size))
rts.dataChannels = rts.network.MakeChannelsNoCleanup(t, chDesc(DataChannel, size))
rts.voteChannels = rts.network.MakeChannelsNoCleanup(t, chDesc(VoteChannel, size))
rts.voteSetBitsChannels = rts.network.MakeChannelsNoCleanup(t, chDesc(VoteSetBitsChannel, size))
_, cancel := context.WithCancel(context.Background())
@@ -346,7 +348,7 @@ func TestReactorWithEvidence(t *testing.T) {
blockStore := store.NewBlockStore(blockDB)
// one for mempool, one for consensus
mtx := new(tmsync.RWMutex)
mtx := new(tmsync.Mutex)
proxyAppConnMem := abciclient.NewLocalClient(mtx, app)
proxyAppConnCon := abciclient.NewLocalClient(mtx, app)

View File

@@ -1,19 +1,12 @@
package consensus
import (
"bytes"
"context"
"fmt"
"hash/crc32"
"io"
"reflect"
"time"
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/crypto/merkle"
"github.com/tendermint/tendermint/internal/proxy"
sm "github.com/tendermint/tendermint/internal/state"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/types"
)
@@ -191,358 +184,3 @@ func makeHeightSearchFunc(height int64) auto.SearchFunc {
}
}
}*/
//---------------------------------------------------
// 2. Recover from failure while applying the block.
// (by handshaking with the app to figure out where
// we were last, and using the WAL to recover there.)
//---------------------------------------------------
type Handshaker struct {
stateStore sm.Store
initialState sm.State
store sm.BlockStore
eventBus types.BlockEventPublisher
genDoc *types.GenesisDoc
logger log.Logger
nBlocks int // number of blocks applied to the state
}
func NewHandshaker(stateStore sm.Store, state sm.State,
store sm.BlockStore, genDoc *types.GenesisDoc) *Handshaker {
return &Handshaker{
stateStore: stateStore,
initialState: state,
store: store,
eventBus: types.NopEventBus{},
genDoc: genDoc,
logger: log.NewNopLogger(),
nBlocks: 0,
}
}
func (h *Handshaker) SetLogger(l log.Logger) {
h.logger = l
}
// SetEventBus - sets the event bus for publishing block related events.
// If not called, it defaults to types.NopEventBus.
func (h *Handshaker) SetEventBus(eventBus types.BlockEventPublisher) {
h.eventBus = eventBus
}
// NBlocks returns the number of blocks applied to the state.
func (h *Handshaker) NBlocks() int {
return h.nBlocks
}
// TODO: retry the handshake/replay if it fails ?
func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
// Handshake is done via ABCI Info on the query conn.
res, err := proxyApp.Query().InfoSync(context.Background(), proxy.RequestInfo)
if err != nil {
return fmt.Errorf("error calling Info: %v", err)
}
blockHeight := res.LastBlockHeight
if blockHeight < 0 {
return fmt.Errorf("got a negative last block height (%d) from the app", blockHeight)
}
appHash := res.LastBlockAppHash
h.logger.Info("ABCI Handshake App Info",
"height", blockHeight,
"hash", appHash,
"software-version", res.Version,
"protocol-version", res.AppVersion,
)
// Only set the version if there is no existing state.
if h.initialState.LastBlockHeight == 0 {
h.initialState.Version.Consensus.App = res.AppVersion
}
// Replay blocks up to the latest in the blockstore.
_, err = h.ReplayBlocks(h.initialState, appHash, blockHeight, proxyApp)
if err != nil {
return fmt.Errorf("error on replay: %v", err)
}
h.logger.Info("Completed ABCI Handshake - Tendermint and App are synced",
"appHeight", blockHeight, "appHash", appHash)
// TODO: (on restart) replay mempool
return nil
}
// ReplayBlocks replays all blocks since appBlockHeight and ensures the result
// matches the current state.
// Returns the final AppHash or an error.
func (h *Handshaker) ReplayBlocks(
state sm.State,
appHash []byte,
appBlockHeight int64,
proxyApp proxy.AppConns,
) ([]byte, error) {
storeBlockBase := h.store.Base()
storeBlockHeight := h.store.Height()
stateBlockHeight := state.LastBlockHeight
h.logger.Info(
"ABCI Replay Blocks",
"appHeight",
appBlockHeight,
"storeHeight",
storeBlockHeight,
"stateHeight",
stateBlockHeight)
// If appBlockHeight == 0 it means that we are at genesis and hence should send InitChain.
if appBlockHeight == 0 {
validators := make([]*types.Validator, len(h.genDoc.Validators))
for i, val := range h.genDoc.Validators {
validators[i] = types.NewValidator(val.PubKey, val.Power)
}
validatorSet := types.NewValidatorSet(validators)
nextVals := types.TM2PB.ValidatorUpdates(validatorSet)
pbParams := h.genDoc.ConsensusParams.ToProto()
req := abci.RequestInitChain{
Time: h.genDoc.GenesisTime,
ChainId: h.genDoc.ChainID,
InitialHeight: h.genDoc.InitialHeight,
ConsensusParams: &pbParams,
Validators: nextVals,
AppStateBytes: h.genDoc.AppState,
}
res, err := proxyApp.Consensus().InitChainSync(context.Background(), req)
if err != nil {
return nil, err
}
appHash = res.AppHash
if stateBlockHeight == 0 { // we only update state when we are in initial state
// If the app did not return an app hash, we keep the one set from the genesis doc in
// the state. We don't set appHash since we don't want the genesis doc app hash
// recorded in the genesis block. We should probably just remove GenesisDoc.AppHash.
if len(res.AppHash) > 0 {
state.AppHash = res.AppHash
}
// If the app returned validators or consensus params, update the state.
if len(res.Validators) > 0 {
vals, err := types.PB2TM.ValidatorUpdates(res.Validators)
if err != nil {
return nil, err
}
state.Validators = types.NewValidatorSet(vals)
state.NextValidators = types.NewValidatorSet(vals).CopyIncrementProposerPriority(1)
} else if len(h.genDoc.Validators) == 0 {
// If validator set is not set in genesis and still empty after InitChain, exit.
return nil, fmt.Errorf("validator set is nil in genesis and still empty after InitChain")
}
if res.ConsensusParams != nil {
state.ConsensusParams = state.ConsensusParams.UpdateConsensusParams(res.ConsensusParams)
state.Version.Consensus.App = state.ConsensusParams.Version.AppVersion
}
// We update the last results hash with the empty hash, to conform with RFC-6962.
state.LastResultsHash = merkle.HashFromByteSlices(nil)
if err := h.stateStore.Save(state); err != nil {
return nil, err
}
}
}
// First handle edge cases and constraints on the storeBlockHeight and storeBlockBase.
switch {
case storeBlockHeight == 0:
assertAppHashEqualsOneFromState(appHash, state)
return appHash, nil
case appBlockHeight == 0 && state.InitialHeight < storeBlockBase:
// the app has no state, and the block store is truncated above the initial height
return appHash, sm.ErrAppBlockHeightTooLow{AppHeight: appBlockHeight, StoreBase: storeBlockBase}
case appBlockHeight > 0 && appBlockHeight < storeBlockBase-1:
// the app is too far behind truncated store (can be 1 behind since we replay the next)
return appHash, sm.ErrAppBlockHeightTooLow{AppHeight: appBlockHeight, StoreBase: storeBlockBase}
case storeBlockHeight < appBlockHeight:
// the app should never be ahead of the store (but this is under app's control)
return appHash, sm.ErrAppBlockHeightTooHigh{CoreHeight: storeBlockHeight, AppHeight: appBlockHeight}
case storeBlockHeight < stateBlockHeight:
// the state should never be ahead of the store (this is under tendermint's control)
panic(fmt.Sprintf("StateBlockHeight (%d) > StoreBlockHeight (%d)", stateBlockHeight, storeBlockHeight))
case storeBlockHeight > stateBlockHeight+1:
// store should be at most one ahead of the state (this is under tendermint's control)
panic(fmt.Sprintf("StoreBlockHeight (%d) > StateBlockHeight + 1 (%d)", storeBlockHeight, stateBlockHeight+1))
}
var err error
// Now either store is equal to state, or one ahead.
// For each, consider all cases of where the app could be, given app <= store
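// Illustrative examples (not from the original file): with store = 10,
// state = 10 and app = 7, the first branch replays blocks 8..10 against the
// app without going through the WAL; with store = 11, state = 10 and
// app = 10, neither Commit nor the state save completed, so only block 11 is
// replayed via replayBlock with the real app.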
if storeBlockHeight == stateBlockHeight {
// Tendermint ran Commit and saved the state.
// Either the app is asking for replay, or we're all synced up.
if appBlockHeight < storeBlockHeight {
// the app is behind, so replay blocks, but no need to go through WAL (state is already synced to store)
return h.replayBlocks(state, proxyApp, appBlockHeight, storeBlockHeight, false)
} else if appBlockHeight == storeBlockHeight {
// We're good!
assertAppHashEqualsOneFromState(appHash, state)
return appHash, nil
}
} else if storeBlockHeight == stateBlockHeight+1 {
// We saved the block in the store but haven't updated the state,
// so we'll need to replay a block using the WAL.
switch {
case appBlockHeight < stateBlockHeight:
// the app is further behind than it should be, so replay blocks
// but leave the last block to go through the WAL
return h.replayBlocks(state, proxyApp, appBlockHeight, storeBlockHeight, true)
case appBlockHeight == stateBlockHeight:
// We haven't run Commit (both the state and app are one block behind),
// so replayBlock with the real app.
// NOTE: We could instead use the cs.WAL on cs.Start,
// but we'd have to allow the WAL to replay a block that wrote its #ENDHEIGHT
h.logger.Info("Replay last block using real app")
state, err = h.replayBlock(state, storeBlockHeight, proxyApp.Consensus())
return state.AppHash, err
case appBlockHeight == storeBlockHeight:
// We ran Commit, but didn't save the state, so replayBlock with mock app.
abciResponses, err := h.stateStore.LoadABCIResponses(storeBlockHeight)
if err != nil {
return nil, err
}
mockApp := newMockProxyApp(appHash, abciResponses)
h.logger.Info("Replay last block using mock app")
state, err = h.replayBlock(state, storeBlockHeight, mockApp)
return state.AppHash, err
}
}
panic(fmt.Sprintf("uncovered case! appHeight: %d, storeHeight: %d, stateHeight: %d",
appBlockHeight, storeBlockHeight, stateBlockHeight))
}
func (h *Handshaker) replayBlocks(
state sm.State,
proxyApp proxy.AppConns,
appBlockHeight,
storeBlockHeight int64,
mutateState bool) ([]byte, error) {
// App is further behind than it should be, so we need to replay blocks.
// We replay all blocks from appBlockHeight+1.
//
// Note that we don't have an old version of the state,
// so we by-pass state validation/mutation using sm.ExecCommitBlock.
// This also means we won't be saving validator sets if they change during this period.
// TODO: Load the historical information to fix this and just use state.ApplyBlock
//
// If mutateState == true, the final block is replayed with h.replayBlock()
var appHash []byte
var err error
finalBlock := storeBlockHeight
if mutateState {
finalBlock--
}
firstBlock := appBlockHeight + 1
if firstBlock == 1 {
firstBlock = state.InitialHeight
}
for i := firstBlock; i <= finalBlock; i++ {
h.logger.Info("Applying block", "height", i)
block := h.store.LoadBlock(i)
// Extra check to ensure the app was not changed in a way it shouldn't have been.
if len(appHash) > 0 {
assertAppHashEqualsOneFromBlock(appHash, block)
}
if i == finalBlock && !mutateState {
// We emit events for the indexing services at the final block to cover the
// case where the node shut down while the block was being committed.
blockExec := sm.NewBlockExecutor(
h.stateStore, h.logger, proxyApp.Consensus(), emptyMempool{}, sm.EmptyEvidencePool{}, h.store)
blockExec.SetEventBus(h.eventBus)
appHash, err = sm.ExecCommitBlock(
blockExec, proxyApp.Consensus(), block, h.logger, h.stateStore, h.genDoc.InitialHeight, state)
if err != nil {
return nil, err
}
} else {
appHash, err = sm.ExecCommitBlock(
nil, proxyApp.Consensus(), block, h.logger, h.stateStore, h.genDoc.InitialHeight, state)
if err != nil {
return nil, err
}
}
h.nBlocks++
}
if mutateState {
// sync the final block
state, err = h.replayBlock(state, storeBlockHeight, proxyApp.Consensus())
if err != nil {
return nil, err
}
appHash = state.AppHash
}
assertAppHashEqualsOneFromState(appHash, state)
return appHash, nil
}
// ApplyBlock on the proxyApp with the last block.
func (h *Handshaker) replayBlock(state sm.State, height int64, proxyApp proxy.AppConnConsensus) (sm.State, error) {
block := h.store.LoadBlock(height)
meta := h.store.LoadBlockMeta(height)
// Use stubs for both mempool and evidence pool since no transactions nor
// evidence are needed here - block already exists.
blockExec := sm.NewBlockExecutor(h.stateStore, h.logger, proxyApp, emptyMempool{}, sm.EmptyEvidencePool{}, h.store)
blockExec.SetEventBus(h.eventBus)
var err error
state, err = blockExec.ApplyBlock(state, meta.BlockID, block)
if err != nil {
return sm.State{}, err
}
h.nBlocks++
return state, nil
}
func assertAppHashEqualsOneFromBlock(appHash []byte, block *types.Block) {
if !bytes.Equal(appHash, block.AppHash) {
panic(fmt.Sprintf(`block.AppHash does not match AppHash after replay. Got %X, expected %X.
Block: %v
`,
appHash, block.AppHash, block))
}
}
func assertAppHashEqualsOneFromState(appHash []byte, state sm.State) {
if !bytes.Equal(appHash, state.AppHash) {
panic(fmt.Sprintf(`state.AppHash does not match AppHash after replay. Got
%X, expected %X.
State: %v
Did you reset Tendermint without resetting your application's data?`,
appHash, state.AppHash, state))
}
}

View File

@@ -312,7 +312,7 @@ func newConsensusStateForReplay(cfg config.BaseConfig, csConfig *config.Consensu
// Create proxyAppConn connection (consensus, mempool, query)
clientCreator, _ := proxy.DefaultClientCreator(cfg.ProxyApp, cfg.ABCI, cfg.DBDir())
proxyApp := proxy.NewAppConns(clientCreator)
proxyApp := proxy.NewAppConns(clientCreator, proxy.NopMetrics())
err = proxyApp.Start()
if err != nil {
tmos.Exit(fmt.Sprintf("Error starting proxy app conns: %v", err))
@@ -323,13 +323,6 @@ func newConsensusStateForReplay(cfg config.BaseConfig, csConfig *config.Consensu
tmos.Exit(fmt.Sprintf("Failed to start event bus: %v", err))
}
handshaker := NewHandshaker(stateStore, state, blockStore, gdoc)
handshaker.SetEventBus(eventBus)
err = handshaker.Handshake(proxyApp)
if err != nil {
tmos.Exit(fmt.Sprintf("Error on handshake: %v", err))
}
mempool, evpool := emptyMempool{}, sm.EmptyEvidencePool{}
blockExec := sm.NewBlockExecutor(stateStore, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool, blockStore)

View File

@@ -3,12 +3,9 @@ package consensus
import (
"context"
abciclient "github.com/tendermint/tendermint/abci/client"
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/internal/libs/clist"
"github.com/tendermint/tendermint/internal/mempool"
"github.com/tendermint/tendermint/internal/proxy"
tmstate "github.com/tendermint/tendermint/proto/tendermint/state"
"github.com/tendermint/tendermint/types"
)
@@ -24,6 +21,7 @@ func (emptyMempool) Size() int { return 0 }
func (emptyMempool) CheckTx(_ context.Context, _ types.Tx, _ func(*abci.Response), _ mempool.TxInfo) error {
return nil
}
func (emptyMempool) RemoveTxByKey(txKey types.TxKey) error { return nil }
func (emptyMempool) ReapMaxBytesMaxGas(_, _ int64) types.Txs { return types.Txs{} }
func (emptyMempool) ReapMaxTxs(n int) types.Txs { return types.Txs{} }
func (emptyMempool) Update(
@@ -46,48 +44,3 @@ func (emptyMempool) TxsWaitChan() <-chan struct{} { return nil }
func (emptyMempool) InitWAL() error { return nil }
func (emptyMempool) CloseWAL() {}
//-----------------------------------------------------------------------------
// mockProxyApp uses ABCIResponses to give the right results.
//
// Useful because we don't want to call Commit() twice for the same block on
// the real app.
func newMockProxyApp(appHash []byte, abciResponses *tmstate.ABCIResponses) proxy.AppConnConsensus {
clientCreator := abciclient.NewLocalCreator(&mockProxyApp{
appHash: appHash,
abciResponses: abciResponses,
})
cli, _ := clientCreator()
err := cli.Start()
if err != nil {
panic(err)
}
return proxy.NewAppConnConsensus(cli)
}
type mockProxyApp struct {
abci.BaseApplication
appHash []byte
txCount int
abciResponses *tmstate.ABCIResponses
}
func (mock *mockProxyApp) DeliverTx(req abci.RequestDeliverTx) abci.ResponseDeliverTx {
r := mock.abciResponses.DeliverTxs[mock.txCount]
mock.txCount++
if r == nil {
return abci.ResponseDeliverTx{}
}
return *r
}
func (mock *mockProxyApp) EndBlock(req abci.RequestEndBlock) abci.ResponseEndBlock {
mock.txCount = 0
return *mock.abciResponses.EndBlock
}
func (mock *mockProxyApp) Commit() abci.ResponseCommit {
return abci.ResponseCommit{Data: mock.appHash}
}

View File

@@ -1,12 +1,10 @@
package consensus
import (
"bytes"
"context"
"fmt"
"io"
"io/ioutil"
"math/rand"
"os"
"path/filepath"
"runtime"
@@ -26,16 +24,12 @@ import (
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/encoding"
"github.com/tendermint/tendermint/internal/mempool"
"github.com/tendermint/tendermint/internal/proxy"
sm "github.com/tendermint/tendermint/internal/state"
sf "github.com/tendermint/tendermint/internal/state/test/factory"
"github.com/tendermint/tendermint/internal/store"
"github.com/tendermint/tendermint/internal/test/factory"
"github.com/tendermint/tendermint/libs/log"
tmrand "github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/privval"
tmstate "github.com/tendermint/tendermint/proto/tendermint/state"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/internal/proxy"
sm "github.com/tendermint/tendermint/internal/state"
"github.com/tendermint/tendermint/internal/store"
"github.com/tendermint/tendermint/types"
)
@@ -560,109 +554,6 @@ func setupSimulator(t *testing.T) *simulatorTestSuite {
return sim
}
// Sync from scratch
func TestHandshakeReplayAll(t *testing.T) {
sim := setupSimulator(t)
for _, m := range modes {
testHandshakeReplay(t, sim, 0, m, false)
}
for _, m := range modes {
testHandshakeReplay(t, sim, 0, m, true)
}
}
// Sync many, not from scratch
func TestHandshakeReplaySome(t *testing.T) {
sim := setupSimulator(t)
for _, m := range modes {
testHandshakeReplay(t, sim, 2, m, false)
}
for _, m := range modes {
testHandshakeReplay(t, sim, 2, m, true)
}
}
// Sync from lagging by one
func TestHandshakeReplayOne(t *testing.T) {
sim := setupSimulator(t)
for _, m := range modes {
testHandshakeReplay(t, sim, numBlocks-1, m, false)
}
for _, m := range modes {
testHandshakeReplay(t, sim, numBlocks-1, m, true)
}
}
// Sync from caught up
func TestHandshakeReplayNone(t *testing.T) {
sim := setupSimulator(t)
for _, m := range modes {
testHandshakeReplay(t, sim, numBlocks, m, false)
}
for _, m := range modes {
testHandshakeReplay(t, sim, numBlocks, m, true)
}
}
// Test that mockProxyApp does not panic when the app returns ABCIResponses with some empty ResponseDeliverTx entries
func TestMockProxyApp(t *testing.T) {
sim := setupSimulator(t) // setup config and simulator
cfg := sim.Config
assert.NotNil(t, cfg)
logger := log.TestingLogger()
var validTxs, invalidTxs = 0, 0
txIndex := 0
assert.NotPanics(t, func() {
abciResWithEmptyDeliverTx := new(tmstate.ABCIResponses)
abciResWithEmptyDeliverTx.DeliverTxs = make([]*abci.ResponseDeliverTx, 0)
abciResWithEmptyDeliverTx.DeliverTxs = append(abciResWithEmptyDeliverTx.DeliverTxs, &abci.ResponseDeliverTx{})
// called when saveABCIResponses:
bytes, err := proto.Marshal(abciResWithEmptyDeliverTx)
require.NoError(t, err)
loadedAbciRes := new(tmstate.ABCIResponses)
// this also happens in sm.LoadABCIResponses
err = proto.Unmarshal(bytes, loadedAbciRes)
require.NoError(t, err)
mock := newMockProxyApp([]byte("mock_hash"), loadedAbciRes)
abciRes := new(tmstate.ABCIResponses)
abciRes.DeliverTxs = make([]*abci.ResponseDeliverTx, len(loadedAbciRes.DeliverTxs))
// Execute transactions and get hash.
proxyCb := func(req *abci.Request, res *abci.Response) {
if r, ok := res.Value.(*abci.Response_DeliverTx); ok {
// TODO: make use of res.Log
// TODO: make use of this info
// Blocks may include invalid txs.
txRes := r.DeliverTx
if txRes.Code == abci.CodeTypeOK {
validTxs++
} else {
logger.Debug("Invalid tx", "code", txRes.Code, "log", txRes.Log)
invalidTxs++
}
abciRes.DeliverTxs[txIndex] = txRes
txIndex++
}
}
mock.SetResponseCallback(proxyCb)
someTx := []byte("tx")
_, err = mock.DeliverTxAsync(context.Background(), abci.RequestDeliverTx{Tx: someTx})
assert.NoError(t, err)
})
assert.True(t, validTxs == 1)
assert.True(t, invalidTxs == 0)
}
func tempWALWithData(data []byte) string {
walFile, err := ioutil.TempFile("", "wal")
if err != nil {
@@ -678,138 +569,6 @@ func tempWALWithData(data []byte) string {
return walFile.Name()
}
// Make some blocks. Start a fresh app and apply nBlocks blocks.
// Then restart the app and sync it up with the remaining blocks
func testHandshakeReplay(t *testing.T, sim *simulatorTestSuite, nBlocks int, mode uint, testValidatorsChange bool) {
var chain []*types.Block
var commits []*types.Commit
var store *mockBlockStore
var stateDB dbm.DB
var genesisState sm.State
cfg := sim.Config
if testValidatorsChange {
testConfig := ResetConfig(fmt.Sprintf("%s_%v_m", t.Name(), mode))
defer func() { _ = os.RemoveAll(testConfig.RootDir) }()
stateDB = dbm.NewMemDB()
genesisState = sim.GenesisState
cfg = sim.Config
chain = append([]*types.Block{}, sim.Chain...) // copy chain
commits = sim.Commits
store = newMockBlockStore(cfg, genesisState.ConsensusParams)
} else { // test single node
testConfig := ResetConfig(fmt.Sprintf("%s_%v_s", t.Name(), mode))
defer func() { _ = os.RemoveAll(testConfig.RootDir) }()
walBody, err := WALWithNBlocks(t, numBlocks)
require.NoError(t, err)
walFile := tempWALWithData(walBody)
cfg.Consensus.SetWalFile(walFile)
privVal, err := privval.LoadFilePV(cfg.PrivValidator.KeyFile(), cfg.PrivValidator.StateFile())
require.NoError(t, err)
wal, err := NewWAL(walFile)
require.NoError(t, err)
wal.SetLogger(log.TestingLogger())
err = wal.Start()
require.NoError(t, err)
t.Cleanup(func() {
if err := wal.Stop(); err != nil {
t.Error(err)
}
})
chain, commits, err = makeBlockchainFromWAL(wal)
require.NoError(t, err)
pubKey, err := privVal.GetPubKey(context.Background())
require.NoError(t, err)
stateDB, genesisState, store = stateAndStore(cfg, pubKey, kvstore.ProtocolVersion)
}
stateStore := sm.NewStore(stateDB)
store.chain = chain
store.commits = commits
state := genesisState.Copy()
// run the chain through state.ApplyBlock to build up the tendermint state
state = buildTMStateFromChain(cfg, sim.Mempool, sim.Evpool, stateStore, state, chain, nBlocks, mode, store)
latestAppHash := state.AppHash
// make a new client creator
kvstoreApp := kvstore.NewPersistentKVStoreApplication(
filepath.Join(cfg.DBDir(), fmt.Sprintf("replay_test_%d_%d_a_r%d", nBlocks, mode, rand.Int())))
t.Cleanup(func() { require.NoError(t, kvstoreApp.Close()) })
clientCreator2 := abciclient.NewLocalCreator(kvstoreApp)
if nBlocks > 0 {
// run nBlocks against a new client to build up the app state.
// use a throwaway tendermint state
proxyApp := proxy.NewAppConns(clientCreator2)
stateDB1 := dbm.NewMemDB()
stateStore := sm.NewStore(stateDB1)
err := stateStore.Save(genesisState)
require.NoError(t, err)
buildAppStateFromChain(proxyApp, stateStore, sim.Mempool, sim.Evpool, genesisState, chain, nBlocks, mode, store)
}
// Prune block store if requested
expectError := false
if mode == 3 {
pruned, err := store.PruneBlocks(2)
require.NoError(t, err)
require.EqualValues(t, 1, pruned)
expectError = int64(nBlocks) < 2
}
// now start the app using the handshake - it should sync
genDoc, _ := sm.MakeGenesisDocFromFile(cfg.GenesisFile())
handshaker := NewHandshaker(stateStore, state, store, genDoc)
proxyApp := proxy.NewAppConns(clientCreator2)
if err := proxyApp.Start(); err != nil {
t.Fatalf("Error starting proxy app connections: %v", err)
}
t.Cleanup(func() {
if err := proxyApp.Stop(); err != nil {
t.Error(err)
}
})
err := handshaker.Handshake(proxyApp)
if expectError {
require.Error(t, err)
return
} else if err != nil {
t.Fatalf("Error on abci handshake: %v", err)
}
// get the latest app hash from the app
res, err := proxyApp.Query().InfoSync(context.Background(), abci.RequestInfo{Version: ""})
if err != nil {
t.Fatal(err)
}
// the app hash should be synced up
if !bytes.Equal(latestAppHash, res.LastBlockAppHash) {
t.Fatalf(
"Expected app hashes to match after handshake/replay. got %X, expected %X",
res.LastBlockAppHash,
latestAppHash)
}
expectedBlocksToSync := numBlocks - nBlocks
if nBlocks == numBlocks && mode > 0 {
expectedBlocksToSync++
} else if nBlocks > 0 && mode == 1 {
expectedBlocksToSync++
}
if handshaker.NBlocks() != expectedBlocksToSync {
t.Fatalf("Expected handshake to sync %d blocks, got %d", expectedBlocksToSync, handshaker.NBlocks())
}
}
func applyBlock(stateStore sm.Store,
mempool mempool.Mempool,
evpool sm.EvidencePool,
@@ -893,7 +652,7 @@ func buildTMStateFromChain(
defer kvstoreApp.Close()
clientCreator := abciclient.NewLocalCreator(kvstoreApp)
proxyApp := proxy.NewAppConns(clientCreator)
proxyApp := proxy.NewAppConns(clientCreator, proxy.NopMetrics())
if err := proxyApp.Start(); err != nil {
panic(err)
}
@@ -933,75 +692,6 @@ func buildTMStateFromChain(
return state
}
func TestHandshakePanicsIfAppReturnsWrongAppHash(t *testing.T) {
// 1. Initialize tendermint and commit 3 blocks with the following app hashes:
// - 0x01
// - 0x02
// - 0x03
cfg := ResetConfig("handshake_test_")
t.Cleanup(func() { os.RemoveAll(cfg.RootDir) })
privVal, err := privval.LoadFilePV(cfg.PrivValidator.KeyFile(), cfg.PrivValidator.StateFile())
require.NoError(t, err)
const appVersion = 0x0
pubKey, err := privVal.GetPubKey(context.Background())
require.NoError(t, err)
stateDB, state, store := stateAndStore(cfg, pubKey, appVersion)
stateStore := sm.NewStore(stateDB)
genDoc, _ := sm.MakeGenesisDocFromFile(cfg.GenesisFile())
state.LastValidators = state.Validators.Copy()
// mode = 0 for committing all the blocks
blocks := sf.MakeBlocks(3, &state, privVal)
store.chain = blocks
// 2. Tendermint must panic if app returns wrong hash for the first block
// - RANDOM HASH
// - 0x02
// - 0x03
{
app := &badApp{numBlocks: 3, allHashesAreWrong: true}
clientCreator := abciclient.NewLocalCreator(app)
proxyApp := proxy.NewAppConns(clientCreator)
err := proxyApp.Start()
require.NoError(t, err)
t.Cleanup(func() {
if err := proxyApp.Stop(); err != nil {
t.Error(err)
}
})
assert.Panics(t, func() {
h := NewHandshaker(stateStore, state, store, genDoc)
if err = h.Handshake(proxyApp); err != nil {
t.Log(err)
}
})
}
// 3. Tendermint must panic if app returns wrong hash for the last block
// - 0x01
// - 0x02
// - RANDOM HASH
{
app := &badApp{numBlocks: 3, onlyLastHashIsWrong: true}
clientCreator := abciclient.NewLocalCreator(app)
proxyApp := proxy.NewAppConns(clientCreator)
err := proxyApp.Start()
require.NoError(t, err)
t.Cleanup(func() {
if err := proxyApp.Stop(); err != nil {
t.Error(err)
}
})
assert.Panics(t, func() {
h := NewHandshaker(stateStore, state, store, genDoc)
if err = h.Handshake(proxyApp); err != nil {
t.Log(err)
}
})
}
}
type badApp struct {
abci.BaseApplication
numBlocks byte
@@ -1219,52 +909,6 @@ func (bs *mockBlockStore) PruneBlocks(height int64) (uint64, error) {
return pruned, nil
}
//---------------------------------------
// Test handshake/init chain
func TestHandshakeUpdatesValidators(t *testing.T) {
val, _ := factory.RandValidator(true, 10)
vals := types.NewValidatorSet([]*types.Validator{val})
app := &initChainApp{vals: types.TM2PB.ValidatorUpdates(vals)}
clientCreator := abciclient.NewLocalCreator(app)
cfg := ResetConfig("handshake_test_")
t.Cleanup(func() { _ = os.RemoveAll(cfg.RootDir) })
privVal, err := privval.LoadFilePV(cfg.PrivValidator.KeyFile(), cfg.PrivValidator.StateFile())
require.NoError(t, err)
pubKey, err := privVal.GetPubKey(context.Background())
require.NoError(t, err)
stateDB, state, store := stateAndStore(cfg, pubKey, 0x0)
stateStore := sm.NewStore(stateDB)
oldValAddr := state.Validators.Validators[0].Address
// now start the app using the handshake - it should sync
genDoc, _ := sm.MakeGenesisDocFromFile(cfg.GenesisFile())
handshaker := NewHandshaker(stateStore, state, store, genDoc)
proxyApp := proxy.NewAppConns(clientCreator)
if err := proxyApp.Start(); err != nil {
t.Fatalf("Error starting proxy app connections: %v", err)
}
t.Cleanup(func() {
if err := proxyApp.Stop(); err != nil {
t.Error(err)
}
})
if err := handshaker.Handshake(proxyApp); err != nil {
t.Fatalf("Error on abci handshake: %v", err)
}
// reload the state, check the validator set was updated
state, err = stateStore.Load()
require.NoError(t, err)
newValAddr := state.Validators.Validators[0].Address
expectValAddr := val.Address
assert.NotEqual(t, oldValAddr, newValAddr)
assert.Equal(t, newValAddr, expectValAddr)
}
// returns the vals on InitChain
type initChainApp struct {
abci.BaseApplication


@@ -13,7 +13,6 @@ import (
"github.com/tendermint/tendermint/abci/example/kvstore"
"github.com/tendermint/tendermint/crypto/tmhash"
cstypes "github.com/tendermint/tendermint/internal/consensus/types"
p2pmock "github.com/tendermint/tendermint/internal/p2p/mock"
"github.com/tendermint/tendermint/libs/log"
tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
tmrand "github.com/tendermint/tendermint/libs/rand"
@@ -1864,7 +1863,8 @@ func TestStateOutputsBlockPartsStats(t *testing.T) {
// create dummy peer
cs, _ := randState(config, 1)
peer := p2pmock.NewPeer(nil)
peerID, err := types.NewNodeID("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
require.NoError(t, err)
// 1) new block part
parts := types.NewPartSetFromData(tmrand.Bytes(100), 10)
@@ -1875,26 +1875,26 @@ func TestStateOutputsBlockPartsStats(t *testing.T) {
}
cs.ProposalBlockParts = types.NewPartSetFromHeader(parts.Header())
cs.handleMsg(msgInfo{msg, peer.ID()})
cs.handleMsg(msgInfo{msg, peerID})
statsMessage := <-cs.statsMsgQueue
require.Equal(t, msg, statsMessage.Msg, "")
require.Equal(t, peer.ID(), statsMessage.PeerID, "")
require.Equal(t, peerID, statsMessage.PeerID, "")
// sending the same part from different peer
cs.handleMsg(msgInfo{msg, "peer2"})
// sending the part with the same height, but different round
msg.Round = 1
cs.handleMsg(msgInfo{msg, peer.ID()})
cs.handleMsg(msgInfo{msg, peerID})
// sending the part from the smaller height
msg.Height = 0
cs.handleMsg(msgInfo{msg, peer.ID()})
cs.handleMsg(msgInfo{msg, peerID})
// sending the part from the bigger height
msg.Height = 3
cs.handleMsg(msgInfo{msg, peer.ID()})
cs.handleMsg(msgInfo{msg, peerID})
select {
case <-cs.statsMsgQueue:
@@ -1909,18 +1909,19 @@ func TestStateOutputVoteStats(t *testing.T) {
cs, vss := randState(config, 2)
// create dummy peer
peer := p2pmock.NewPeer(nil)
peerID, err := types.NewNodeID("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
require.NoError(t, err)
randBytes := tmrand.Bytes(tmhash.Size)
vote := signVote(vss[1], config, tmproto.PrecommitType, randBytes, types.PartSetHeader{})
voteMessage := &VoteMessage{vote}
cs.handleMsg(msgInfo{voteMessage, peer.ID()})
cs.handleMsg(msgInfo{voteMessage, peerID})
statsMessage := <-cs.statsMsgQueue
require.Equal(t, voteMessage, statsMessage.Msg, "")
require.Equal(t, peer.ID(), statsMessage.PeerID, "")
require.Equal(t, peerID, statsMessage.PeerID, "")
// sending the same part from different peer
cs.handleMsg(msgInfo{&VoteMessage{vote}, "peer2"})
@@ -1929,7 +1930,7 @@ func TestStateOutputVoteStats(t *testing.T) {
incrementHeight(vss[1])
vote = signVote(vss[1], config, tmproto.PrecommitType, randBytes, types.PartSetHeader{})
cs.handleMsg(msgInfo{&VoteMessage{vote}, peer.ID()})
cs.handleMsg(msgInfo{&VoteMessage{vote}, peerID})
select {
case <-cs.statsMsgQueue:


@@ -1,3 +1,4 @@
//go:build gofuzz
// +build gofuzz
package consensus


@@ -65,7 +65,7 @@ func WALGenerateNBlocks(t *testing.T, wr io.Writer, numBlocks int) (err error) {
blockStore := store.NewBlockStore(blockStoreDB)
proxyApp := proxy.NewAppConns(abciclient.NewLocalCreator(app))
proxyApp := proxy.NewAppConns(abciclient.NewLocalCreator(app), proxy.NopMetrics())
proxyApp.SetLogger(logger.With("module", "proxy"))
if err := proxyApp.Start(); err != nil {
return fmt.Errorf("failed to start proxy app connections: %w", err)
@@ -145,22 +145,22 @@ func randPort() int {
return base + mrand.Intn(spread)
}
func makeAddrs() (string, string, string) {
// makeAddrs constructs local TCP addresses for node services.
// It uses consecutive ports from a random starting point, so that concurrent
// instances are less likely to collide.
func makeAddrs() (p2pAddr, rpcAddr string) {
const addrTemplate = "tcp://127.0.0.1:%d"
start := randPort()
return fmt.Sprintf("tcp://127.0.0.1:%d", start),
fmt.Sprintf("tcp://127.0.0.1:%d", start+1),
fmt.Sprintf("tcp://127.0.0.1:%d", start+2)
return fmt.Sprintf(addrTemplate, start), fmt.Sprintf(addrTemplate, start+1)
}
// getConfig returns a config for test cases
func getConfig(t *testing.T) *config.Config {
c := config.ResetTestRoot(t.Name())
// and we use random ports to run in parallel
tm, rpc, grpc := makeAddrs()
c.P2P.ListenAddress = tm
c.RPC.ListenAddress = rpc
c.RPC.GRPCListenAddress = grpc
p2pAddr, rpcAddr := makeAddrs()
c.P2P.ListenAddress = p2pAddr
c.RPC.ListenAddress = rpcAddr
return c
}


@@ -15,29 +15,7 @@ import (
"github.com/tendermint/tendermint/types"
)
var (
_ service.Service = (*Reactor)(nil)
// ChannelShims contains a map of ChannelDescriptorShim objects, where each
// object wraps a reference to a legacy p2p ChannelDescriptor and the corresponding
// p2p proto.Message the new p2p Channel is responsible for handling.
//
//
// TODO: Remove once p2p refactor is complete.
// ref: https://github.com/tendermint/tendermint/issues/5670
ChannelShims = map[p2p.ChannelID]*p2p.ChannelDescriptorShim{
EvidenceChannel: {
MsgType: new(tmproto.EvidenceList),
Descriptor: &p2p.ChannelDescriptor{
ID: byte(EvidenceChannel),
Priority: 6,
RecvMessageCapacity: maxMsgSize,
RecvBufferCapacity: 32,
MaxSendBytes: 400,
},
},
}
)
var _ service.Service = (*Reactor)(nil)
const (
EvidenceChannel = p2p.ChannelID(0x38)
@@ -51,6 +29,18 @@ const (
broadcastEvidenceIntervalS = 10
)
// GetChannelDescriptor produces an instance of a descriptor for this
// package's required channels.
func GetChannelDescriptor() *p2p.ChannelDescriptor {
return &p2p.ChannelDescriptor{
ID: EvidenceChannel,
MessageType: new(tmproto.EvidenceList),
Priority: 6,
RecvMessageCapacity: maxMsgSize,
RecvBufferCapacity: 32,
}
}
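For reference, a minimal self-contained sketch of the pattern this change moves to: each reactor package exposes a single flattened descriptor and callers consume it directly, instead of unwrapping a map of ChannelDescriptorShim values. The types below are simplified stand-ins for internal/p2p rather than the real API, and 1<<20 is a placeholder for the package's maxMsgSize.

```go
package main

import "fmt"

// Simplified stand-ins for the p2p types touched above; the real
// ChannelDescriptor also carries a MessageType (a proto.Message prototype).
type ChannelID uint16

type ChannelDescriptor struct {
	ID                  ChannelID
	Priority            int
	RecvMessageCapacity int
	RecvBufferCapacity  int
}

const EvidenceChannel = ChannelID(0x38)

// evidenceChannelDescriptor mirrors the one-constructor-per-package pattern
// introduced above; 1<<20 stands in for the package's maxMsgSize.
func evidenceChannelDescriptor() *ChannelDescriptor {
	return &ChannelDescriptor{
		ID:                  EvidenceChannel,
		Priority:            6,
		RecvMessageCapacity: 1 << 20,
		RecvBufferCapacity:  32,
	}
}

func main() {
	// Node setup can range over plain descriptors instead of unwrapping shims.
	for _, desc := range []*ChannelDescriptor{evidenceChannelDescriptor()} {
		fmt.Printf("open channel 0x%02x (priority %d)\n", desc.ID, desc.Priority)
	}
}
```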
// Reactor handles evpool evidence broadcasting amongst peers.
type Reactor struct {
service.BaseService


@@ -62,11 +62,8 @@ func setup(t *testing.T, stateStores []sm.Store, chBuf uint) *reactorTestSuite {
peerChans: make(map[types.NodeID]chan p2p.PeerUpdate, numStateStores),
}
chDesc := p2p.ChannelDescriptor{ID: byte(evidence.EvidenceChannel)}
rts.evidenceChannels = rts.network.MakeChannelsNoCleanup(t,
chDesc,
new(tmproto.EvidenceList),
int(chBuf))
chDesc := &p2p.ChannelDescriptor{ID: evidence.EvidenceChannel, MessageType: new(tmproto.EvidenceList)}
rts.evidenceChannels = rts.network.MakeChannelsNoCleanup(t, chDesc)
require.Len(t, rts.network.RandomNode().PeerManager.Peers(), 0)
idx := 0


@@ -71,7 +71,7 @@ func iotest(writer protoio.WriteCloser, reader protoio.ReadCloser) error {
return err
}
if n != len(bz)+visize {
return fmt.Errorf("WriteMsg() wrote %v bytes, expected %v", n, len(bz)+visize) // nolint
return fmt.Errorf("WriteMsg() wrote %v bytes, expected %v", n, len(bz)+visize)
}
lens[i] = n
}


@@ -1,3 +1,4 @@
//go:build deadlock
// +build deadlock
package sync


@@ -1,3 +1,4 @@
//go:build !deadlock
// +build !deadlock
package sync


@@ -31,14 +31,14 @@ var _ TxCache = (*LRUTxCache)(nil)
type LRUTxCache struct {
mtx tmsync.Mutex
size int
cacheMap map[[TxKeySize]byte]*list.Element
cacheMap map[types.TxKey]*list.Element
list *list.List
}
func NewLRUTxCache(cacheSize int) *LRUTxCache {
return &LRUTxCache{
size: cacheSize,
cacheMap: make(map[[TxKeySize]byte]*list.Element, cacheSize),
cacheMap: make(map[types.TxKey]*list.Element, cacheSize),
list: list.New(),
}
}
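With the cache keyed directly by types.TxKey, no package-local key helper is needed. A minimal self-contained sketch of the same LRU-by-fixed-key idea using only the standard library; TxKey here is a local [sha256.Size]byte stand-in for types.TxKey, not the real type.

```go
package main

import (
	"container/list"
	"crypto/sha256"
	"fmt"
)

// TxKey is a local stand-in for types.TxKey: a fixed-size array, usable
// directly as a map key.
type TxKey [sha256.Size]byte

// lruTxCache mirrors the structure above: a map from key to list element
// plus a doubly linked list ordered by recency.
type lruTxCache struct {
	size     int
	cacheMap map[TxKey]*list.Element
	list     *list.List
}

func newLRUTxCache(size int) *lruTxCache {
	return &lruTxCache{
		size:     size,
		cacheMap: make(map[TxKey]*list.Element, size),
		list:     list.New(),
	}
}

// push adds the tx's key, evicting the oldest entry when the cache is full.
// It reports whether the key was newly added.
func (c *lruTxCache) push(tx []byte) bool {
	key := TxKey(sha256.Sum256(tx))
	if e, ok := c.cacheMap[key]; ok {
		c.list.MoveToBack(e)
		return false
	}
	if c.list.Len() >= c.size {
		if front := c.list.Front(); front != nil {
			delete(c.cacheMap, front.Value.(TxKey))
			c.list.Remove(front)
		}
	}
	c.cacheMap[key] = c.list.PushBack(key)
	return true
}

func main() {
	c := newLRUTxCache(2)
	fmt.Println(c.push([]byte("a"))) // true: newly cached
	fmt.Println(c.push([]byte("b"))) // true: newly cached
	fmt.Println(c.push([]byte("a"))) // false: already cached, refreshed to most-recent
	fmt.Println(c.push([]byte("c"))) // true: cache full, evicts "b" (the least recent)
}
```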
@@ -53,7 +53,7 @@ func (c *LRUTxCache) Reset() {
c.mtx.Lock()
defer c.mtx.Unlock()
c.cacheMap = make(map[[TxKeySize]byte]*list.Element, c.size)
c.cacheMap = make(map[types.TxKey]*list.Element, c.size)
c.list.Init()
}
@@ -61,7 +61,7 @@ func (c *LRUTxCache) Push(tx types.Tx) bool {
c.mtx.Lock()
defer c.mtx.Unlock()
key := TxKey(tx)
key := tx.Key()
moved, ok := c.cacheMap[key]
if ok {
@@ -72,7 +72,7 @@ func (c *LRUTxCache) Push(tx types.Tx) bool {
if c.list.Len() >= c.size {
front := c.list.Front()
if front != nil {
frontKey := front.Value.([TxKeySize]byte)
frontKey := front.Value.(types.TxKey)
delete(c.cacheMap, frontKey)
c.list.Remove(front)
}
@@ -88,7 +88,7 @@ func (c *LRUTxCache) Remove(tx types.Tx) {
c.mtx.Lock()
defer c.mtx.Unlock()
key := TxKey(tx)
key := tx.Key()
e := c.cacheMap[key]
delete(c.cacheMap, key)


@@ -7,17 +7,15 @@ import (
"github.com/tendermint/tendermint/types"
)
// nolint: golint
// TODO: Rename type.
type MempoolIDs struct {
type IDs struct {
mtx tmsync.RWMutex
peerMap map[types.NodeID]uint16
nextID uint16 // assumes that a node will never have over 65536 active peers
activeIDs map[uint16]struct{} // used to check if a given peerID key is used
}
func NewMempoolIDs() *MempoolIDs {
return &MempoolIDs{
func NewMempoolIDs() *IDs {
return &IDs{
peerMap: make(map[types.NodeID]uint16),
// reserve UnknownPeerID for mempoolReactor.BroadcastTx
@@ -28,7 +26,7 @@ func NewMempoolIDs() *MempoolIDs {
// ReserveForPeer searches for the next unused ID and assigns it to the provided
// peer.
func (ids *MempoolIDs) ReserveForPeer(peerID types.NodeID) {
func (ids *IDs) ReserveForPeer(peerID types.NodeID) {
ids.mtx.Lock()
defer ids.mtx.Unlock()
@@ -38,7 +36,7 @@ func (ids *MempoolIDs) ReserveForPeer(peerID types.NodeID) {
}
// Reclaim returns the ID reserved for the peer back to unused pool.
func (ids *MempoolIDs) Reclaim(peerID types.NodeID) {
func (ids *IDs) Reclaim(peerID types.NodeID) {
ids.mtx.Lock()
defer ids.mtx.Unlock()
@@ -50,7 +48,7 @@ func (ids *MempoolIDs) Reclaim(peerID types.NodeID) {
}
// GetForPeer returns an ID reserved for the peer.
func (ids *MempoolIDs) GetForPeer(peerID types.NodeID) uint16 {
func (ids *IDs) GetForPeer(peerID types.NodeID) uint16 {
ids.mtx.RLock()
defer ids.mtx.RUnlock()
@@ -59,7 +57,7 @@ func (ids *MempoolIDs) GetForPeer(peerID types.NodeID) uint16 {
// nextPeerID returns the next unused peer ID to use. We assume that the mutex
// is already held.
func (ids *MempoolIDs) nextPeerID() uint16 {
func (ids *IDs) nextPeerID() uint16 {
if len(ids.activeIDs) == MaxActiveIDs {
panic(fmt.Sprintf("node has maximum %d active IDs and wanted to get one more", MaxActiveIDs))
}
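The rename from MempoolIDs to IDs keeps the allocation scheme: each peer is assigned the next unused uint16, one ID stays reserved for broadcast, and allocation panics at a hard cap. A compact self-contained sketch of that scheme follows; unknownPeerID and maxActiveIDs are illustrative stand-ins, and the real type guards itself with a mutex.

```go
package main

import "fmt"

const (
	unknownPeerID = 0   // stand-in for the reserved broadcast ID
	maxActiveIDs  = 100 // stand-in for mempool.MaxActiveIDs
)

// ids mirrors the allocation scheme above: each peer gets the lowest unused
// uint16 at or after nextID, and an ID returns to the pool on reclaim.
type ids struct {
	peerMap   map[string]uint16
	activeIDs map[uint16]struct{}
	nextID    uint16
}

func newIDs() *ids {
	return &ids{
		peerMap:   make(map[string]uint16),
		activeIDs: map[uint16]struct{}{unknownPeerID: {}}, // reserved
		nextID:    1,
	}
}

func (s *ids) reserve(peer string) uint16 {
	if len(s.activeIDs) == maxActiveIDs {
		panic("too many active peer IDs")
	}
	for {
		if _, used := s.activeIDs[s.nextID]; !used {
			break
		}
		s.nextID++
	}
	id := s.nextID
	s.activeIDs[id] = struct{}{}
	s.peerMap[peer] = id
	s.nextID++
	return id
}

func (s *ids) reclaim(peer string) {
	if id, ok := s.peerMap[peer]; ok {
		delete(s.activeIDs, id)
		delete(s.peerMap, peer)
	}
}

func main() {
	s := newIDs()
	a, b := s.reserve("peer-a"), s.reserve("peer-b")
	s.reclaim("peer-a")
	fmt.Println(a, b, s.reserve("peer-c")) // 1 2 3: freed IDs are not immediately reused
}
```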


@@ -32,6 +32,10 @@ type Mempool interface {
// its validity and whether it should be added to the mempool.
CheckTx(ctx context.Context, tx types.Tx, callback func(*abci.Response), txInfo TxInfo) error
// RemoveTxByKey removes a transaction, identified by its key,
// from the mempool.
RemoveTxByKey(txKey types.TxKey) error
// ReapMaxBytesMaxGas reaps transactions from the mempool up to maxBytes
// bytes total with the condition that the total gasWanted must be less than
// maxGas.
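The RemoveTxByKey method added to the interface above is keyed by types.TxKey rather than raw bytes. A hedged, self-contained sketch of how a caller might drive it: derive the key from the raw transaction and treat an error as "not present". txRemover, fakeMempool, and keyOf below are local stand-ins, not the real Mempool implementations or types.Tx.Key().

```go
package main

import (
	"crypto/sha256"
	"errors"
	"fmt"
)

type txKey [sha256.Size]byte

// keyOf stands in for types.Tx.Key(): the fixed-length index of a raw tx.
func keyOf(tx []byte) txKey { return sha256.Sum256(tx) }

// txRemover is a local stand-in for the slice of the Mempool interface
// exercised here.
type txRemover interface {
	RemoveTxByKey(key txKey) error
}

type fakeMempool map[txKey]struct{}

func (m fakeMempool) RemoveTxByKey(key txKey) error {
	if _, ok := m[key]; !ok {
		return errors.New("transaction not found")
	}
	delete(m, key)
	return nil
}

func main() {
	mp := fakeMempool{keyOf([]byte("tx-1")): {}}
	var r txRemover = mp
	fmt.Println(r.RemoveTxByKey(keyOf([]byte("tx-1")))) // <nil>: removed
	fmt.Println(r.RemoveTxByKey(keyOf([]byte("tx-2")))) // transaction not found
}
```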


@@ -20,6 +20,7 @@ func (Mempool) Size() int { return 0 }
func (Mempool) CheckTx(_ context.Context, _ types.Tx, _ func(*abci.Response), _ mempool.TxInfo) error {
return nil
}
func (Mempool) RemoveTxByKey(txKey types.TxKey) error { return nil }
func (Mempool) ReapMaxBytesMaxGas(_, _ int64) types.Txs { return types.Txs{} }
func (Mempool) ReapMaxTxs(n int) types.Txs { return types.Txs{} }
func (Mempool) Update(


@@ -1,24 +1,9 @@
package mempool
import (
"crypto/sha256"
"github.com/tendermint/tendermint/types"
)
// TxKeySize defines the size of the transaction's key used for indexing.
const TxKeySize = sha256.Size
// TxKey is the fixed length array key used as an index.
func TxKey(tx types.Tx) [TxKeySize]byte {
return sha256.Sum256(tx)
}
// TxHashFromBytes returns the hash of a transaction from raw bytes.
func TxHashFromBytes(tx []byte) []byte {
return types.Tx(tx).Hash()
}
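The removed helpers show what the replacement calls compute: types.Tx.Key() is assumed to produce the SHA-256 of the raw bytes, just as the deleted TxKey did, and types.Tx(tx).Hash() replaces TxHashFromBytes at the call sites later in this diff. A tiny runnable illustration:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	tx := []byte("demo-tx")
	// What the removed mempool.TxKey helper computed, and what types.Tx.Key()
	// is assumed to compute now: the SHA-256 of the raw transaction bytes as a
	// fixed-length array that can be used directly as a map index.
	var key [sha256.Size]byte = sha256.Sum256(tx)
	fmt.Printf("key: %X\n", key[:])
	// TxHashFromBytes is likewise replaced at its call sites by calling
	// types.Tx(tx).Hash() directly.
}
```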
// TxInfo are parameters that get passed when attempting to add a tx to the
// mempool.
type TxInfo struct {


@@ -61,7 +61,7 @@ func TestCacheAfterUpdate(t *testing.T) {
require.NotEqual(t, len(tc.txsInCache), counter,
"cache larger than expected on testcase %d", tcIndex)
nodeVal := node.Value.([sha256.Size]byte)
nodeVal := node.Value.(types.TxKey)
expectedBz := sha256.Sum256([]byte{byte(tc.txsInCache[len(tc.txsInCache)-counter-1])})
// Reference for reading the errors:
// >>> sha256('\x00').hexdigest()
@@ -71,7 +71,7 @@ func TestCacheAfterUpdate(t *testing.T) {
// >>> sha256('\x02').hexdigest()
// 'dbc1b4c900ffe48d575b5da5c638040125f65db0fe3e24494b76ea986457d986'
require.Equal(t, expectedBz, nodeVal, "Equality failed on index %d, tc %d", counter, tcIndex)
require.EqualValues(t, expectedBz, nodeVal, "Equality failed on index %d, tc %d", counter, tcIndex)
counter++
node = node.Next()
}


@@ -3,6 +3,7 @@ package v0
import (
"bytes"
"context"
"errors"
"fmt"
"sync"
"sync/atomic"
@@ -240,7 +241,7 @@ func (mem *CListMempool) CheckTx(
// Note it's possible a tx is still in the cache but no longer in the mempool
// (eg. after committing a block, txs are removed from mempool but not cache),
// so we only record the sender for txs still in the mempool.
if e, ok := mem.txsMap.Load(mempool.TxKey(tx)); ok {
if e, ok := mem.txsMap.Load(tx.Key()); ok {
memTx := e.(*clist.CElement).Value.(*mempoolTx)
_, loaded := memTx.senders.LoadOrStore(txInfo.SenderID, true)
// TODO: consider punishing peer for dups,
@@ -327,7 +328,7 @@ func (mem *CListMempool) reqResCb(
// - resCbFirstTime (lock not held) if tx is valid
func (mem *CListMempool) addTx(memTx *mempoolTx) {
e := mem.txs.PushBack(memTx)
mem.txsMap.Store(mempool.TxKey(memTx.tx), e)
mem.txsMap.Store(memTx.tx.Key(), e)
atomic.AddInt64(&mem.txsBytes, int64(len(memTx.tx)))
mem.metrics.TxSizeBytes.Observe(float64(len(memTx.tx)))
}
@@ -338,7 +339,7 @@ func (mem *CListMempool) addTx(memTx *mempoolTx) {
func (mem *CListMempool) removeTx(tx types.Tx, elem *clist.CElement, removeFromCache bool) {
mem.txs.Remove(elem)
elem.DetachPrev()
mem.txsMap.Delete(mempool.TxKey(tx))
mem.txsMap.Delete(tx.Key())
atomic.AddInt64(&mem.txsBytes, int64(-len(tx)))
if removeFromCache {
@@ -347,13 +348,16 @@ func (mem *CListMempool) removeTx(tx types.Tx, elem *clist.CElement, removeFromC
}
// RemoveTxByKey removes a transaction from the mempool by its TxKey index.
func (mem *CListMempool) RemoveTxByKey(txKey [mempool.TxKeySize]byte, removeFromCache bool) {
func (mem *CListMempool) RemoveTxByKey(txKey types.TxKey) error {
if e, ok := mem.txsMap.Load(txKey); ok {
memTx := e.(*clist.CElement).Value.(*mempoolTx)
if memTx != nil {
mem.removeTx(memTx.tx, e.(*clist.CElement), removeFromCache)
mem.removeTx(memTx.tx, e.(*clist.CElement), false)
return nil
}
return errors.New("transaction not found")
}
return errors.New("invalid transaction found")
}
func (mem *CListMempool) isFull(txSize int) error {
@@ -409,7 +413,7 @@ func (mem *CListMempool) resCbFirstTime(
mem.addTx(memTx)
mem.logger.Debug(
"added good transaction",
"tx", mempool.TxHashFromBytes(tx),
"tx", types.Tx(tx).Hash(),
"res", r,
"height", memTx.height,
"total", mem.Size(),
@@ -419,7 +423,7 @@ func (mem *CListMempool) resCbFirstTime(
// ignore bad transaction
mem.logger.Debug(
"rejected bad transaction",
"tx", mempool.TxHashFromBytes(tx),
"tx", types.Tx(tx).Hash(),
"peerID", peerP2PID,
"res", r,
"err", postCheckErr,
@@ -460,7 +464,7 @@ func (mem *CListMempool) resCbRecheck(req *abci.Request, res *abci.Response) {
// Good, nothing to do.
} else {
// Tx became invalidated due to newly committed block.
mem.logger.Debug("tx is no longer valid", "tx", mempool.TxHashFromBytes(tx), "res", r, "err", postCheckErr)
mem.logger.Debug("tx is no longer valid", "tx", types.Tx(tx).Hash(), "res", r, "err", postCheckErr)
// NOTE: we remove tx from the cache because it might be good later
mem.removeTx(tx, mem.recheckCursor, !mem.config.KeepInvalidTxsInCache)
}
@@ -598,7 +602,7 @@ func (mem *CListMempool) Update(
// Mempool after:
// 100
// https://github.com/tendermint/tendermint/issues/3322.
if e, ok := mem.txsMap.Load(mempool.TxKey(tx)); ok {
if e, ok := mem.txsMap.Load(tx.Key()); ok {
mem.removeTx(tx, e.(*clist.CElement), false)
}
}


@@ -544,9 +544,9 @@ func TestMempoolTxsBytes(t *testing.T) {
err = mp.CheckTx(context.Background(), []byte{0x06}, nil, mempool.TxInfo{})
require.NoError(t, err)
assert.EqualValues(t, 9, mp.SizeBytes())
mp.RemoveTxByKey(mempool.TxKey([]byte{0x07}), true)
assert.Error(t, mp.RemoveTxByKey(types.Tx([]byte{0x07}).Key()))
assert.EqualValues(t, 9, mp.SizeBytes())
mp.RemoveTxByKey(mempool.TxKey([]byte{0x06}), true)
assert.NoError(t, mp.RemoveTxByKey(types.Tx([]byte{0x06}).Key()))
assert.EqualValues(t, 8, mp.SizeBytes())
}


@@ -39,7 +39,7 @@ type Reactor struct {
cfg *config.MempoolConfig
mempool *CListMempool
ids *mempool.MempoolIDs
ids *mempool.IDs
// XXX: Currently, this is the only way to get information about a peer. Ideally,
// we rely on message-oriented communication to get necessary peer data.
@@ -83,14 +83,9 @@ func NewReactor(
return r
}
// GetChannelShims returns a map of ChannelDescriptorShim objects, where each
// object wraps a reference to a legacy p2p ChannelDescriptor and the corresponding
// p2p proto.Message the new p2p Channel is responsible for handling.
//
//
// TODO: Remove once p2p refactor is complete.
// ref: https://github.com/tendermint/tendermint/issues/5670
func GetChannelShims(cfg *config.MempoolConfig) map[p2p.ChannelID]*p2p.ChannelDescriptorShim {
// GetChannelDescriptor produces an instance of a descriptor for this
// package's required channels.
func GetChannelDescriptor(cfg *config.MempoolConfig) *p2p.ChannelDescriptor {
largestTx := make([]byte, cfg.MaxTxBytes)
batchMsg := protomem.Message{
Sum: &protomem.Message_Txs{
@@ -98,17 +93,12 @@ func GetChannelShims(cfg *config.MempoolConfig) map[p2p.ChannelID]*p2p.ChannelDe
},
}
return map[p2p.ChannelID]*p2p.ChannelDescriptorShim{
mempool.MempoolChannel: {
MsgType: new(protomem.Message),
Descriptor: &p2p.ChannelDescriptor{
ID: byte(mempool.MempoolChannel),
Priority: 5,
RecvMessageCapacity: batchMsg.Size(),
RecvBufferCapacity: 128,
MaxSendBytes: 5000,
},
},
return &p2p.ChannelDescriptor{
ID: mempool.MempoolChannel,
MessageType: new(protomem.Message),
Priority: 5,
RecvMessageCapacity: batchMsg.Size(),
RecvBufferCapacity: 128,
}
}
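One detail worth noting: RecvMessageCapacity is derived from the encoded size of a batch message carrying a single largest-allowed transaction. A rough, self-contained illustration of that sizing idea follows; the tag-plus-varint framing below is a simplified proxy for the protobuf encoding of protomem.Message, not the real wire format.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// recvMessageCapacity approximates the pattern above: build a payload of the
// maximum configured tx size and use the framed size of the batch carrying it
// as the channel's receive capacity. The real code calls batchMsg.Size() on
// the protobuf message; this sketch substitutes a tag + varint + payload frame.
func recvMessageCapacity(maxTxBytes int) int {
	largestTx := make([]byte, maxTxBytes)
	var varint [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(varint[:], uint64(len(largestTx)))
	const fieldTag = 1 // one-byte field tag, as for small proto field numbers
	return fieldTag + n + len(largestTx)
}

func main() {
	// With a hypothetical 1 MiB MaxTxBytes, the capacity is slightly larger
	// than the payload itself, leaving room for framing overhead.
	fmt.Println(recvMessageCapacity(1 << 20))
}
```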
@@ -171,7 +161,7 @@ func (r *Reactor) handleMempoolMessage(envelope p2p.Envelope) error {
for _, tx := range protoTxs {
if err := r.mempool.CheckTx(context.Background(), types.Tx(tx), nil, txInfo); err != nil {
logger.Error("checktx failed for tx", "tx", fmt.Sprintf("%X", mempool.TxHashFromBytes(tx)), "err", err)
logger.Error("checktx failed for tx", "tx", fmt.Sprintf("%X", types.Tx(tx).Hash()), "err", err)
}
}
@@ -378,7 +368,7 @@ func (r *Reactor) broadcastTxRoutine(peerID types.NodeID, closer *tmsync.Closer)
}
r.Logger.Debug(
"gossiped tx to peer",
"tx", fmt.Sprintf("%X", mempool.TxHashFromBytes(memTx.tx)),
"tx", fmt.Sprintf("%X", memTx.tx.Hash()),
"peer", peerID,
)
}


@@ -50,8 +50,9 @@ func setup(t *testing.T, config *config.MempoolConfig, numNodes int, chBuf uint)
peerUpdates: make(map[types.NodeID]*p2p.PeerUpdates, numNodes),
}
chDesc := p2p.ChannelDescriptor{ID: byte(mempool.MempoolChannel)}
rts.mempoolChnnels = rts.network.MakeChannelsNoCleanup(t, chDesc, new(protomem.Message), int(chBuf))
chDesc := GetChannelDescriptor(config)
chDesc.RecvBufferCapacity = int(chBuf)
rts.mempoolChnnels = rts.network.MakeChannelsNoCleanup(t, chDesc)
for nodeID := range rts.network.Nodes {
rts.kvstores[nodeID] = kvstore.NewApplication()


@@ -3,6 +3,7 @@ package v1
import (
"bytes"
"context"
"errors"
"fmt"
"sync/atomic"
"time"
@@ -256,7 +257,7 @@ func (txmp *TxMempool) CheckTx(
return err
}
txHash := mempool.TxKey(tx)
txHash := tx.Key()
// We add the transaction to the mempool's cache and if the transaction already
// exists, i.e. false is returned, then we check if we've seen this transaction
@@ -304,6 +305,19 @@ func (txmp *TxMempool) CheckTx(
return nil
}
func (txmp *TxMempool) RemoveTxByKey(txKey types.TxKey) error {
txmp.Lock()
defer txmp.Unlock()
// remove the committed transaction from the transaction store and indexes
if wtx := txmp.txStore.GetTxByHash(txKey); wtx != nil {
txmp.removeTx(wtx, false)
return nil
}
return errors.New("transaction not found")
}
// Flush flushes out the mempool. It acquires a read-lock, fetches all the
// transactions currently in the transaction store and removes each transaction
// from the store and all indexes and finally resets the cache.
@@ -451,7 +465,7 @@ func (txmp *TxMempool) Update(
}
// remove the committed transaction from the transaction store and indexes
if wtx := txmp.txStore.GetTxByHash(mempool.TxKey(tx)); wtx != nil {
if wtx := txmp.txStore.GetTxByHash(tx.Key()); wtx != nil {
txmp.removeTx(wtx, false)
}
}
@@ -629,7 +643,7 @@ func (txmp *TxMempool) defaultTxCallback(req *abci.Request, res *abci.Response)
tx := req.GetCheckTx().Tx
wtx := txmp.recheckCursor.Value.(*WrappedTx)
if !bytes.Equal(tx, wtx.tx) {
panic(fmt.Sprintf("re-CheckTx transaction mismatch; got: %X, expected: %X", wtx.tx.Hash(), mempool.TxKey(tx)))
panic(fmt.Sprintf("re-CheckTx transaction mismatch; got: %X, expected: %X", wtx.tx.Hash(), types.Tx(tx).Key()))
}
// Only evaluate transactions that have not been removed. This can happen
@@ -647,7 +661,7 @@ func (txmp *TxMempool) defaultTxCallback(req *abci.Request, res *abci.Response)
txmp.logger.Debug(
"existing transaction no longer valid; failed re-CheckTx callback",
"priority", wtx.priority,
"tx", fmt.Sprintf("%X", mempool.TxHashFromBytes(wtx.tx)),
"tx", fmt.Sprintf("%X", wtx.tx.Hash()),
"err", err,
"code", checkTxRes.CheckTx.Code,
)
@@ -784,13 +798,13 @@ func (txmp *TxMempool) removeTx(wtx *WrappedTx, removeFromCache bool) {
// the height and time based indexes.
func (txmp *TxMempool) purgeExpiredTxs(blockHeight int64) {
now := time.Now()
expiredTxs := make(map[[mempool.TxKeySize]byte]*WrappedTx)
expiredTxs := make(map[types.TxKey]*WrappedTx)
if txmp.config.TTLNumBlocks > 0 {
purgeIdx := -1
for i, wtx := range txmp.heightIndex.txs {
if (blockHeight - wtx.height) > txmp.config.TTLNumBlocks {
expiredTxs[mempool.TxKey(wtx.tx)] = wtx
expiredTxs[wtx.tx.Key()] = wtx
purgeIdx = i
} else {
// since the index is sorted, we know no other txs can be purged
@@ -807,7 +821,7 @@ func (txmp *TxMempool) purgeExpiredTxs(blockHeight int64) {
purgeIdx := -1
for i, wtx := range txmp.timestampIndex.txs {
if now.Sub(wtx.timestamp) > txmp.config.TTLDuration {
expiredTxs[mempool.TxKey(wtx.tx)] = wtx
expiredTxs[wtx.tx.Key()] = wtx
purgeIdx = i
} else {
// since the index is sorted, we know no other txs can be purged


@@ -226,10 +226,10 @@ func TestTxMempool_ReapMaxBytesMaxGas(t *testing.T) {
require.Equal(t, len(tTxs), txmp.Size())
require.Equal(t, int64(5690), txmp.SizeBytes())
txMap := make(map[[mempool.TxKeySize]byte]testTx)
txMap := make(map[types.TxKey]testTx)
priorities := make([]int64, len(tTxs))
for i, tTx := range tTxs {
txMap[mempool.TxKey(tTx.tx)] = tTx
txMap[tTx.tx.Key()] = tTx
priorities[i] = tTx.priority
}
@@ -241,7 +241,7 @@ func TestTxMempool_ReapMaxBytesMaxGas(t *testing.T) {
ensurePrioritized := func(reapedTxs types.Txs) {
reapedPriorities := make([]int64, len(reapedTxs))
for i, rTx := range reapedTxs {
reapedPriorities[i] = txMap[mempool.TxKey(rTx)].priority
reapedPriorities[i] = txMap[rTx.Key()].priority
}
require.Equal(t, priorities[:len(reapedPriorities)], reapedPriorities)
@@ -276,10 +276,10 @@ func TestTxMempool_ReapMaxTxs(t *testing.T) {
require.Equal(t, len(tTxs), txmp.Size())
require.Equal(t, int64(5690), txmp.SizeBytes())
txMap := make(map[[mempool.TxKeySize]byte]testTx)
txMap := make(map[types.TxKey]testTx)
priorities := make([]int64, len(tTxs))
for i, tTx := range tTxs {
txMap[mempool.TxKey(tTx.tx)] = tTx
txMap[tTx.tx.Key()] = tTx
priorities[i] = tTx.priority
}
@@ -291,7 +291,7 @@ func TestTxMempool_ReapMaxTxs(t *testing.T) {
ensurePrioritized := func(reapedTxs types.Txs) {
reapedPriorities := make([]int64, len(reapedTxs))
for i, rTx := range reapedTxs {
reapedPriorities[i] = txMap[mempool.TxKey(rTx)].priority
reapedPriorities[i] = txMap[rTx.Key()].priority
}
require.Equal(t, priorities[:len(reapedPriorities)], reapedPriorities)


@@ -39,7 +39,7 @@ type Reactor struct {
cfg *config.MempoolConfig
mempool *TxMempool
ids *mempool.MempoolIDs
ids *mempool.IDs
// XXX: Currently, this is the only way to get information about a peer. Ideally,
// we rely on message-oriented communication to get necessary peer data.
@@ -90,14 +90,9 @@ func NewReactor(
func defaultObservePanic(r interface{}) {}
// GetChannelShims returns a map of ChannelDescriptorShim objects, where each
// object wraps a reference to a legacy p2p ChannelDescriptor and the corresponding
// p2p proto.Message the new p2p Channel is responsible for handling.
//
//
// TODO: Remove once p2p refactor is complete.
// ref: https://github.com/tendermint/tendermint/issues/5670
func GetChannelShims(cfg *config.MempoolConfig) map[p2p.ChannelID]*p2p.ChannelDescriptorShim {
// GetChannelDescriptor produces an instance of a descriptor for this
// package's required channels.
func GetChannelDescriptor(cfg *config.MempoolConfig) *p2p.ChannelDescriptor {
largestTx := make([]byte, cfg.MaxTxBytes)
batchMsg := protomem.Message{
Sum: &protomem.Message_Txs{
@@ -105,17 +100,12 @@ func GetChannelShims(cfg *config.MempoolConfig) map[p2p.ChannelID]*p2p.ChannelDe
},
}
return map[p2p.ChannelID]*p2p.ChannelDescriptorShim{
mempool.MempoolChannel: {
MsgType: new(protomem.Message),
Descriptor: &p2p.ChannelDescriptor{
ID: byte(mempool.MempoolChannel),
Priority: 5,
RecvMessageCapacity: batchMsg.Size(),
RecvBufferCapacity: 128,
MaxSendBytes: 5000,
},
},
return &p2p.ChannelDescriptor{
ID: mempool.MempoolChannel,
MessageType: new(protomem.Message),
Priority: 5,
RecvMessageCapacity: batchMsg.Size(),
RecvBufferCapacity: 128,
}
}
@@ -178,7 +168,7 @@ func (r *Reactor) handleMempoolMessage(envelope p2p.Envelope) error {
for _, tx := range protoTxs {
if err := r.mempool.CheckTx(context.Background(), types.Tx(tx), nil, txInfo); err != nil {
logger.Error("checktx failed for tx", "tx", fmt.Sprintf("%X", mempool.TxHashFromBytes(tx)), "err", err)
logger.Error("checktx failed for tx", "tx", fmt.Sprintf("%X", types.Tx(tx).Hash()), "err", err)
}
}
@@ -386,7 +376,7 @@ func (r *Reactor) broadcastTxRoutine(peerID types.NodeID, closer *tmsync.Closer)
}
r.Logger.Debug(
"gossiped tx to peer",
"tx", fmt.Sprintf("%X", mempool.TxHashFromBytes(memTx.tx)),
"tx", fmt.Sprintf("%X", memTx.tx.Hash()),
"peer", peerID,
)
}


@@ -10,11 +10,9 @@ import (
"github.com/tendermint/tendermint/abci/example/kvstore"
"github.com/tendermint/tendermint/config"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/internal/mempool"
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/internal/p2p/p2ptest"
"github.com/tendermint/tendermint/libs/log"
protomem "github.com/tendermint/tendermint/proto/tendermint/mempool"
"github.com/tendermint/tendermint/types"
)
@@ -52,8 +50,8 @@ func setupReactors(t *testing.T, numNodes int, chBuf uint) *reactorTestSuite {
peerUpdates: make(map[types.NodeID]*p2p.PeerUpdates, numNodes),
}
chDesc := p2p.ChannelDescriptor{ID: byte(mempool.MempoolChannel)}
rts.mempoolChannels = rts.network.MakeChannelsNoCleanup(t, chDesc, new(protomem.Message), int(chBuf))
chDesc := GetChannelDescriptor(cfg.Mempool)
rts.mempoolChannels = rts.network.MakeChannelsNoCleanup(t, chDesc)
for nodeID := range rts.network.Nodes {
rts.kvstores[nodeID] = kvstore.NewApplication()


@@ -6,7 +6,6 @@ import (
"github.com/tendermint/tendermint/internal/libs/clist"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/internal/mempool"
"github.com/tendermint/tendermint/types"
)
@@ -17,7 +16,7 @@ type WrappedTx struct {
tx types.Tx
// hash defines the transaction hash and the primary key used in the mempool
hash [mempool.TxKeySize]byte
hash types.TxKey
// height defines the height at which the transaction was validated at
height int64
@@ -66,14 +65,14 @@ func (wtx *WrappedTx) Size() int {
// need mutative access.
type TxStore struct {
mtx tmsync.RWMutex
hashTxs map[[mempool.TxKeySize]byte]*WrappedTx // primary index
senderTxs map[string]*WrappedTx // sender is defined by the ABCI application
hashTxs map[types.TxKey]*WrappedTx // primary index
senderTxs map[string]*WrappedTx // sender is defined by the ABCI application
}
func NewTxStore() *TxStore {
return &TxStore{
senderTxs: make(map[string]*WrappedTx),
hashTxs: make(map[[mempool.TxKeySize]byte]*WrappedTx),
hashTxs: make(map[types.TxKey]*WrappedTx),
}
}
@@ -110,7 +109,7 @@ func (txs *TxStore) GetTxBySender(sender string) *WrappedTx {
}
// GetTxByHash returns a *WrappedTx by the transaction's hash.
func (txs *TxStore) GetTxByHash(hash [mempool.TxKeySize]byte) *WrappedTx {
func (txs *TxStore) GetTxByHash(hash types.TxKey) *WrappedTx {
txs.mtx.RLock()
defer txs.mtx.RUnlock()
@@ -119,7 +118,7 @@ func (txs *TxStore) GetTxByHash(hash [mempool.TxKeySize]byte) *WrappedTx {
// IsTxRemoved returns true if a transaction by hash is marked as removed and
// false otherwise.
func (txs *TxStore) IsTxRemoved(hash [mempool.TxKeySize]byte) bool {
func (txs *TxStore) IsTxRemoved(hash types.TxKey) bool {
txs.mtx.RLock()
defer txs.mtx.RUnlock()
@@ -142,7 +141,7 @@ func (txs *TxStore) SetTx(wtx *WrappedTx) {
txs.senderTxs[wtx.sender] = wtx
}
txs.hashTxs[mempool.TxKey(wtx.tx)] = wtx
txs.hashTxs[wtx.tx.Key()] = wtx
}
// RemoveTx removes a *WrappedTx from the transaction store. It deletes all
@@ -155,13 +154,13 @@ func (txs *TxStore) RemoveTx(wtx *WrappedTx) {
delete(txs.senderTxs, wtx.sender)
}
delete(txs.hashTxs, mempool.TxKey(wtx.tx))
delete(txs.hashTxs, wtx.tx.Key())
wtx.removed = true
}
// TxHasPeer returns true if a transaction by hash has a given peer ID and false
// otherwise. If the transaction does not exist, false is returned.
func (txs *TxStore) TxHasPeer(hash [mempool.TxKeySize]byte, peerID uint16) bool {
func (txs *TxStore) TxHasPeer(hash types.TxKey, peerID uint16) bool {
txs.mtx.RLock()
defer txs.mtx.RUnlock()
@@ -179,7 +178,7 @@ func (txs *TxStore) TxHasPeer(hash [mempool.TxKeySize]byte, peerID uint16) bool
// We return true if we've already recorded the given peer for this transaction
// and false otherwise. If the transaction does not exist by hash, we return
// (nil, false).
func (txs *TxStore) GetOrSetPeerByTxHash(hash [mempool.TxKeySize]byte, peerID uint16) (*WrappedTx, bool) {
func (txs *TxStore) GetOrSetPeerByTxHash(hash types.TxKey, peerID uint16) (*WrappedTx, bool) {
txs.mtx.Lock()
defer txs.mtx.Unlock()


@@ -8,7 +8,7 @@ import (
"time"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/internal/mempool"
"github.com/tendermint/tendermint/types"
)
func TestTxStore_GetTxBySender(t *testing.T) {
@@ -39,7 +39,7 @@ func TestTxStore_GetTxByHash(t *testing.T) {
timestamp: time.Now(),
}
key := mempool.TxKey(wtx.tx)
key := wtx.tx.Key()
res := txs.GetTxByHash(key)
require.Nil(t, res)
@@ -58,7 +58,7 @@ func TestTxStore_SetTx(t *testing.T) {
timestamp: time.Now(),
}
key := mempool.TxKey(wtx.tx)
key := wtx.tx.Key()
txs.SetTx(wtx)
res := txs.GetTxByHash(key)
@@ -81,10 +81,10 @@ func TestTxStore_GetOrSetPeerByTxHash(t *testing.T) {
timestamp: time.Now(),
}
key := mempool.TxKey(wtx.tx)
key := wtx.tx.Key()
txs.SetTx(wtx)
res, ok := txs.GetOrSetPeerByTxHash(mempool.TxKey([]byte("test_tx_2")), 15)
res, ok := txs.GetOrSetPeerByTxHash(types.Tx([]byte("test_tx_2")).Key(), 15)
require.Nil(t, res)
require.False(t, ok)
@@ -110,7 +110,7 @@ func TestTxStore_RemoveTx(t *testing.T) {
txs.SetTx(wtx)
key := mempool.TxKey(wtx.tx)
key := wtx.tx.Key()
res := txs.GetTxByHash(key)
require.NotNil(t, res)


@@ -1,74 +0,0 @@
package p2p
import (
"github.com/tendermint/tendermint/internal/p2p/conn"
"github.com/tendermint/tendermint/libs/service"
)
// Reactor is responsible for handling incoming messages on one or more
// Channel. Switch calls GetChannels when reactor is added to it. When a new
// peer joins our node, InitPeer and AddPeer are called. RemovePeer is called
// when the peer is stopped. Receive is called when a message is received on a
// channel associated with this reactor.
//
// Peer#Send or Peer#TrySend should be used to send the message to a peer.
type Reactor interface {
service.Service // Start, Stop
// SetSwitch allows setting a switch.
SetSwitch(*Switch)
// GetChannels returns the list of MConnection.ChannelDescriptor. Make sure
// that each ID is unique across all the reactors added to the switch.
GetChannels() []*conn.ChannelDescriptor
// InitPeer is called by the switch before the peer is started. Use it to
// initialize data for the peer (e.g. peer state).
//
// NOTE: The switch won't call AddPeer nor RemovePeer if it fails to start
// the peer. Do not store any data associated with the peer in the reactor
// itself unless you don't want to have a state, which is never cleaned up.
InitPeer(peer Peer) Peer
// AddPeer is called by the switch after the peer is added and successfully
// started. Use it to start goroutines communicating with the peer.
AddPeer(peer Peer)
// RemovePeer is called by the switch when the peer is stopped (due to error
// or other reason).
RemovePeer(peer Peer, reason interface{})
// Receive is called by the switch when msgBytes is received from the peer.
//
// NOTE reactor can not keep msgBytes around after Receive completes without
// copying.
//
// CONTRACT: msgBytes are not nil.
//
// XXX: do not call any methods that can block or incur heavy processing.
// https://github.com/tendermint/tendermint/issues/2888
Receive(chID byte, peer Peer, msgBytes []byte)
}
//--------------------------------------
type BaseReactor struct {
service.BaseService // Provides Start, Stop, .Quit
Switch *Switch
}
func NewBaseReactor(name string, impl Reactor) *BaseReactor {
return &BaseReactor{
BaseService: *service.NewBaseService(nil, name, impl),
Switch: nil,
}
}
func (br *BaseReactor) SetSwitch(sw *Switch) {
br.Switch = sw
}
func (*BaseReactor) GetChannels() []*conn.ChannelDescriptor { return nil }
func (*BaseReactor) AddPeer(peer Peer) {}
func (*BaseReactor) RemovePeer(peer Peer, reason interface{}) {}
func (*BaseReactor) Receive(chID byte, peer Peer, msgBytes []byte) {}
func (*BaseReactor) InitPeer(peer Peer) Peer { return peer }


@@ -1,3 +1,4 @@
//go:build go1.10
// +build go1.10
package conn


@@ -1,3 +1,4 @@
//go:build !go1.10
// +build !go1.10
package conn


@@ -48,7 +48,7 @@ const (
defaultPongTimeout = 45 * time.Second
)
type receiveCbFunc func(chID byte, msgBytes []byte)
type receiveCbFunc func(chID ChannelID, msgBytes []byte)
type errorCbFunc func(interface{})
/*
@@ -64,15 +64,11 @@ initialization of the connection.
There are two methods for sending messages:
func (m MConnection) Send(chID byte, msgBytes []byte) bool {}
func (m MConnection) TrySend(chID byte, msgBytes []byte}) bool {}
`Send(chID, msgBytes)` is a blocking call that waits until `msg` is
successfully queued for the channel with the given id byte `chID`, or until the
request times out. The message `msg` is serialized using Protobuf.
`TrySend(chID, msgBytes)` is a nonblocking call that returns false if the
channel's queue is full.
Inbound message bytes are handled with an onReceive callback function.
*/
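With TrySend and CanSend removed further down, the blocking Send path described above is the only one left: queue the bytes for the channel or give up after a timeout. A minimal self-contained sketch of that queue-or-timeout behaviour; defaultSendTimeout here is an illustrative stand-in, not the real constant.

```go
package main

import (
	"fmt"
	"time"
)

const defaultSendTimeout = 10 * time.Second // illustrative stand-in

// sendBytes mimics the blocking behaviour documented above: block until the
// message is queued for the channel, or return false once the timeout fires.
func sendBytes(sendQueue chan []byte, msg []byte, timeout time.Duration) bool {
	select {
	case sendQueue <- msg:
		return true
	case <-time.After(timeout):
		return false
	}
}

func main() {
	q := make(chan []byte, 1)
	fmt.Println(sendBytes(q, []byte("msg-1"), defaultSendTimeout))  // true: queue has room
	fmt.Println(sendBytes(q, []byte("msg-2"), 50*time.Millisecond)) // false: queue full, times out
}
```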
type MConnection struct {
@@ -85,8 +81,8 @@ type MConnection struct {
recvMonitor *flow.Monitor
send chan struct{}
pong chan struct{}
channels []*Channel
channelsIdx map[byte]*Channel
channels []*channel
channelsIdx map[ChannelID]*channel
onReceive receiveCbFunc
onError errorCbFunc
errored uint32
@@ -190,8 +186,8 @@ func NewMConnectionWithConfig(
}
// Create channels
var channelsIdx = map[byte]*Channel{}
var channels = []*Channel{}
var channelsIdx = map[ChannelID]*channel{}
var channels = []*channel{}
for _, desc := range chDescs {
channel := newChannel(mconn, *desc)
@@ -265,43 +261,6 @@ func (c *MConnection) stopServices() (alreadyStopped bool) {
return false
}
// FlushStop replicates the logic of OnStop.
// It additionally ensures that all successful
// .Send() calls will get flushed before closing
// the connection.
func (c *MConnection) FlushStop() {
if c.stopServices() {
return
}
// this block is unique to FlushStop
{
// wait until the sendRoutine exits
// so we dont race on calling sendSomePacketMsgs
<-c.doneSendRoutine
// Send and flush all pending msgs.
// Since sendRoutine has exited, we can call this
// safely
eof := c.sendSomePacketMsgs()
for !eof {
eof = c.sendSomePacketMsgs()
}
c.flush()
// Now we can close the connection
}
c.conn.Close()
// We can't close pong safely here because
// recvRoutine may write to it after we've stopped.
// Though it doesn't need to get closed at all,
// we close it @ recvRoutine.
// c.Stop()
}
// OnStop implements BaseService
func (c *MConnection) OnStop() {
if c.stopServices() {
@@ -348,7 +307,7 @@ func (c *MConnection) stopForError(r interface{}) {
}
// Queues a message to be sent to channel.
func (c *MConnection) Send(chID byte, msgBytes []byte) bool {
func (c *MConnection) Send(chID ChannelID, msgBytes []byte) bool {
if !c.IsRunning() {
return false
}
@@ -375,49 +334,6 @@ func (c *MConnection) Send(chID byte, msgBytes []byte) bool {
return success
}
// Queues a message to be sent to channel.
// Nonblocking, returns true if successful.
func (c *MConnection) TrySend(chID byte, msgBytes []byte) bool {
if !c.IsRunning() {
return false
}
c.Logger.Debug("TrySend", "channel", chID, "conn", c, "msgBytes", msgBytes)
// Send message to channel.
channel, ok := c.channelsIdx[chID]
if !ok {
c.Logger.Error(fmt.Sprintf("Cannot send bytes, unknown channel %X", chID))
return false
}
ok = channel.trySendBytes(msgBytes)
if ok {
// Wake up sendRoutine if necessary
select {
case c.send <- struct{}{}:
default:
}
}
return ok
}
// CanSend returns true if you can send more data onto the chID, false
// otherwise. Use only as a heuristic.
func (c *MConnection) CanSend(chID byte) bool {
if !c.IsRunning() {
return false
}
channel, ok := c.channelsIdx[chID]
if !ok {
c.Logger.Error(fmt.Sprintf("Unknown channel %X", chID))
return false
}
return channel.canSend()
}
// sendRoutine polls for packets to send from channels.
func (c *MConnection) sendRoutine() {
defer c._recover()
@@ -520,7 +436,7 @@ func (c *MConnection) sendPacketMsg() bool {
// Choose a channel to create a PacketMsg from.
// The chosen channel will be the one whose recentlySent/priority is the least.
var leastRatio float32 = math.MaxFloat32
var leastChannel *Channel
var leastChannel *channel
for _, channel := range c.channels {
// If nothing to send, skip this channel
if !channel.isSendPending() {
@@ -624,7 +540,7 @@ FOR_LOOP:
// never block
}
case *tmp2p.Packet_PacketMsg:
channelID := byte(pkt.PacketMsg.ChannelID)
channelID := ChannelID(pkt.PacketMsg.ChannelID)
channel, ok := c.channelsIdx[channelID]
if pkt.PacketMsg.ChannelID < 0 || pkt.PacketMsg.ChannelID > math.MaxUint8 || !ok || channel == nil {
err := fmt.Errorf("unknown channel %X", pkt.PacketMsg.ChannelID)
@@ -682,13 +598,6 @@ func (c *MConnection) maxPacketMsgSize() int {
return len(bz)
}
type ConnectionStatus struct {
Duration time.Duration
SendMonitor flow.Status
RecvMonitor flow.Status
Channels []ChannelStatus
}
type ChannelStatus struct {
ID byte
SendQueueCapacity int
@@ -697,30 +606,16 @@ type ChannelStatus struct {
RecentlySent int64
}
func (c *MConnection) Status() ConnectionStatus {
var status ConnectionStatus
status.Duration = time.Since(c.created)
status.SendMonitor = c.sendMonitor.Status()
status.RecvMonitor = c.recvMonitor.Status()
status.Channels = make([]ChannelStatus, len(c.channels))
for i, channel := range c.channels {
status.Channels[i] = ChannelStatus{
ID: channel.desc.ID,
SendQueueCapacity: cap(channel.sendQueue),
SendQueueSize: int(atomic.LoadInt32(&channel.sendQueueSize)),
Priority: channel.desc.Priority,
RecentlySent: atomic.LoadInt64(&channel.recentlySent),
}
}
return status
}
//-----------------------------------------------------------------------------
// ChannelID is an arbitrary channel ID.
type ChannelID uint16
type ChannelDescriptor struct {
ID byte
ID ChannelID
Priority int
MessageType proto.Message
// TODO: Remove once p2p refactor is complete.
SendQueueCapacity int
RecvMessageCapacity int
@@ -728,10 +623,6 @@ type ChannelDescriptor struct {
// RecvBufferCapacity defines the max buffer size of inbound messages for a
// given p2p Channel queue.
RecvBufferCapacity int
// MaxSendBytes defines the maximum number of bytes that can be sent at any
// given moment from a Channel to a peer.
MaxSendBytes uint
}
func (chDesc ChannelDescriptor) FillDefaults() (filled ChannelDescriptor) {
@@ -748,9 +639,8 @@ func (chDesc ChannelDescriptor) FillDefaults() (filled ChannelDescriptor) {
return
}
// TODO: lowercase.
// NOTE: not goroutine-safe.
type Channel struct {
type channel struct {
// Exponential moving average.
// This field must be accessed atomically.
// It is first in the struct to ensure correct alignment.
@@ -769,12 +659,12 @@ type Channel struct {
Logger log.Logger
}
func newChannel(conn *MConnection, desc ChannelDescriptor) *Channel {
func newChannel(conn *MConnection, desc ChannelDescriptor) *channel {
desc = desc.FillDefaults()
if desc.Priority <= 0 {
panic("Channel default priority must be a positive integer")
}
return &Channel{
return &channel{
conn: conn,
desc: desc,
sendQueue: make(chan []byte, desc.SendQueueCapacity),
@@ -783,14 +673,14 @@ func newChannel(conn *MConnection, desc ChannelDescriptor) *Channel {
}
}
func (ch *Channel) SetLogger(l log.Logger) {
func (ch *channel) SetLogger(l log.Logger) {
ch.Logger = l
}
// Queues message to send to this channel.
// Goroutine-safe
// Times out (and returns false) after defaultSendTimeout
func (ch *Channel) sendBytes(bytes []byte) bool {
func (ch *channel) sendBytes(bytes []byte) bool {
select {
case ch.sendQueue <- bytes:
atomic.AddInt32(&ch.sendQueueSize, 1)
@@ -800,34 +690,10 @@ func (ch *Channel) sendBytes(bytes []byte) bool {
}
}
// Queues message to send to this channel.
// Nonblocking, returns true if successful.
// Goroutine-safe
func (ch *Channel) trySendBytes(bytes []byte) bool {
select {
case ch.sendQueue <- bytes:
atomic.AddInt32(&ch.sendQueueSize, 1)
return true
default:
return false
}
}
// Goroutine-safe
func (ch *Channel) loadSendQueueSize() (size int) {
return int(atomic.LoadInt32(&ch.sendQueueSize))
}
// Goroutine-safe
// Use only as a heuristic.
func (ch *Channel) canSend() bool {
return ch.loadSendQueueSize() < defaultSendQueueCapacity
}
// Returns true if any PacketMsgs are pending to be sent.
// Call before calling nextPacketMsg()
// Goroutine-safe
func (ch *Channel) isSendPending() bool {
func (ch *channel) isSendPending() bool {
if len(ch.sending) == 0 {
if len(ch.sendQueue) == 0 {
return false
@@ -839,7 +705,7 @@ func (ch *Channel) isSendPending() bool {
// Creates a new PacketMsg to send.
// Not goroutine-safe
func (ch *Channel) nextPacketMsg() tmp2p.PacketMsg {
func (ch *channel) nextPacketMsg() tmp2p.PacketMsg {
packet := tmp2p.PacketMsg{ChannelID: int32(ch.desc.ID)}
maxSize := ch.maxPacketMsgPayloadSize
packet.Data = ch.sending[:tmmath.MinInt(maxSize, len(ch.sending))]
@@ -856,7 +722,7 @@ func (ch *Channel) nextPacketMsg() tmp2p.PacketMsg {
// Writes next PacketMsg to w and updates c.recentlySent.
// Not goroutine-safe
func (ch *Channel) writePacketMsgTo(w io.Writer) (n int, err error) {
func (ch *channel) writePacketMsgTo(w io.Writer) (n int, err error) {
packet := ch.nextPacketMsg()
n, err = protoio.NewDelimitedWriter(w).WriteMsg(mustWrapPacket(&packet))
atomic.AddInt64(&ch.recentlySent, int64(n))
@@ -866,7 +732,7 @@ func (ch *Channel) writePacketMsgTo(w io.Writer) (n int, err error) {
// Handles incoming PacketMsgs. It returns the message bytes if the message is
// complete; the returned bytes are owned by the caller and will not be modified.
// Not goroutine-safe
func (ch *Channel) recvPacketMsg(packet tmp2p.PacketMsg) ([]byte, error) {
func (ch *channel) recvPacketMsg(packet tmp2p.PacketMsg) ([]byte, error) {
ch.Logger.Debug("Read PacketMsg", "conn", ch.conn, "packet", packet)
var recvCap, recvReceived = ch.desc.RecvMessageCapacity, len(ch.recving) + len(packet.Data)
if recvCap < recvReceived {
@@ -883,7 +749,7 @@ func (ch *Channel) recvPacketMsg(packet tmp2p.PacketMsg) ([]byte, error) {
// Call this periodically to update stats for throttling purposes.
// Not goroutine-safe
func (ch *Channel) updateStats() {
func (ch *channel) updateStats() {
// Exponential decay of stats.
// TODO: optimize.
atomic.StoreInt64(&ch.recentlySent, int64(float64(atomic.LoadInt64(&ch.recentlySent))*0.8))

Some files were not shown because too many files have changed in this diff Show More