Compare commits


67 Commits

Author SHA1 Message Date
William Banfield
750f709b42 wip 2021-09-28 16:42:43 -04:00
William Banfield
c9b775d2f0 wip 2021-09-28 16:10:46 -04:00
William Banfield
a3889ee2cb consensus: remove logic to unlock block on 2/3 prevote for nil (#6954) 2021-09-24 11:19:57 -04:00
William Banfield
87f4beb374 consensus: remove panics from test helper functions (#6969) 2021-09-22 08:56:42 -04:00
Sam Kleinman
b0423e2445 e2e: allow load generator to succeed for short tests (#6952)
This should address last night's failure. We've taken the perspective
of "the load generator shouldn't cause tests to fail" in recent
days/weeks, and I think this is just a next step along that line. The
e2e tests shouldn't test performance. 

I included some comments indicating the ways that this isn't ideal (it
perhaps isn't), and I think that if test networks could make
assertions about the required rate, that might be a cool future
improvement (and good, perhaps, for system benchmarking).
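For illustration, a minimal sketch (hypothetical names, not the harness's actual code) of the best-effort stance described here: report failure only when no transaction was ever accepted, rather than enforcing a minimum rate.

```go
package main

import (
	"errors"
	"fmt"
)

// checkLoadResult treats load generation as best-effort: a short test that
// manages to get even one transaction accepted is considered successful.
func checkLoadResult(accepted int64) error {
	if accepted == 0 {
		return errors.New("load generator failed to submit any transactions")
	}
	return nil
}

func main() {
	fmt.Println(checkLoadResult(42)) // <nil>
	fmt.Println(checkLoadResult(0))  // error
}
```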
2021-09-16 15:45:51 +00:00
dependabot[bot]
b0684bd300 build(deps): Bump github.com/vektra/mockery/v2 from 2.9.0 to 2.9.3 (#6951)
Bumps [github.com/vektra/mockery/v2](https://github.com/vektra/mockery) from 2.9.0 to 2.9.3.
- [Release notes](https://github.com/vektra/mockery/releases)
- [Changelog](https://github.com/vektra/mockery/blob/master/.goreleaser.yml)
- [Commits](https://github.com/vektra/mockery/compare/v2.9.0...v2.9.3)

---
updated-dependencies:
- dependency-name: github.com/vektra/mockery/v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-09-16 08:41:46 -04:00
William Banfield
382947ce93 rfc: add performance taxonomy rfc (#6921)
This document attempts to capture and discuss some of the areas of Tendermint that seem to be cited as causing performance issues. I'm hoping to continue to gather feedback and input on this document to better understand what issues Tendermint performance may cause for our users.

The overall goal of this document is to allow the maintainers and community to get a better sense of these issues and to be better able to discuss them and weigh trade-offs about any proposed performance-focused changes. This document does not aim to propose any performance improvements. It does suggest useful places for benchmarks and places where additional metrics would be useful for diagnosing and further understanding Tendermint performance.

Please comment with areas where my reasoning seems off or with additional areas where Tendermint performance may be causing user pain.
2021-09-16 06:13:27 +00:00
Callum Waters
9a7ce08e3e statesync: shut down node when statesync fails (#6944) 2021-09-16 07:43:23 +02:00
Sam Kleinman
55f6d20977 e2e: skip broadcastTxCommit check (#6949)
I think the `Sync` check covers our primary use case, and perhaps we
can turn this back on in the future after some kind of event-system
rewrite, or an RPC rewrite that will avoid the server-side timeout.
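As an illustration of the `Sync` check that remains, here is a hedged sketch using the RPC client (the remote address, payload, and error handling are illustrative): broadcast_tx_sync returns once CheckTx has run, without blocking on the server-side commit timeout that broadcast_tx_commit is subject to.

```go
package main

import (
	"context"
	"fmt"

	rpchttp "github.com/tendermint/tendermint/rpc/client/http"
	"github.com/tendermint/tendermint/types"
)

func main() {
	c, err := rpchttp.New("tcp://127.0.0.1:26657")
	if err != nil {
		panic(err)
	}
	res, err := c.BroadcastTxSync(context.Background(), types.Tx("load=1"))
	if err != nil {
		panic(err)
	}
	// Code 0 means CheckTx accepted the transaction; unlike
	// broadcast_tx_commit, we never wait for the commit event.
	fmt.Printf("code=%d hash=%X\n", res.Code, res.Hash)
}
```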
2021-09-15 21:24:35 +00:00
Sam Kleinman
b9c35c1263 docs: fix openapi yaml lint (#6948)
Saw this in the super linter.
2021-09-15 19:29:25 +00:00
Sam Kleinman
f08f72e334 rfc: e2e improvements (#6941) 2021-09-15 15:26:39 -04:00
Callum Waters
e932b469ed e2e: tweak semantics of waitForHeight (#6943) 2021-09-15 20:49:24 +02:00
Callum Waters
5db2a39643 docs: add documentation of unsafe_flush_mempool to openapi (#6947) 2021-09-15 17:28:01 +02:00
Sam Kleinman
6909158933 e2e: reduce load pressure (#6939) 2021-09-14 10:44:30 -04:00
dependabot[bot]
de2cffe7a4 build(deps): Bump codecov/codecov-action from 2.0.3 to 2.1.0 (#6938) 2021-09-14 08:31:41 -04:00
Sam Kleinman
c257cda212 e2e: slow load processes with longer evidence timeouts (#6936)
These are mostly the timeouts that I think we're still hitting in CI. 

At this point, the tests (on master) pass on my local machine (which is quite beefy) so I think this is just the first in (perhaps?) a sequence of changes that attempt to change timeouts and load patterns so that the tests pass in CI more reliably.
2021-09-13 20:57:25 +00:00
M. J. Fromberger
5a49d1b997 RFC 002 Interprocess Communication in Tendermint (#6913)
Communication in Tendermint among consensus nodes, applications, and operator
tools uses different message formats and transport mechanisms.  In some
cases there are multiple options. Having all these options complicates both the
code and the developer experience, and hides bugs. To support a more robust,
trustworthy, and usable system, we should document which communication paths
are essential, which could be removed or reduced in scope, and what we can
improve for the most important use cases.

This document proposes a variety of possible improvements of varying size and
scope. Specific design proposals should get their own documentation.
2021-09-13 15:41:21 -04:00
M. J. Fromberger
e4feb56813 Update CHANGELOG.md for release v0.34.13. (#6935)
Fixes #6933. I forgot to update master after I did the v0.34.13 release.
2021-09-13 18:58:04 +00:00
Sam Kleinman
abbe8209b5 e2e: reduce load volume (#6932) 2021-09-13 13:45:01 -04:00
Sam Kleinman
723bf92ebb ci: skip coverage tasks for test infrastructure (#6934) 2021-09-13 13:23:16 -04:00
Sam Kleinman
ef79241f79 ci: skip coverage for non-go changes (#6927) 2021-09-13 09:06:56 -04:00
Sam Kleinman
3bf0c7a712 e2e: improve p2p mode selection (#6929)
The previous implementation of hybrid set testing, which was entirely my
own creation, was a bit peculiar, and I think this probably clears things up.

The previous implementation had far fewer legacy nodes in hybrid
networks, *and* also for some reason that I can't quite explain,
caused a test case to fail.
2021-09-12 06:30:58 +00:00
Sam Kleinman
055f1b3279 ci: disable codecov patch status check (#6930) 2021-09-10 19:08:04 -04:00
Sam Kleinman
1998cf7e77 e2e: compile tests (#6926) 2021-09-10 13:34:26 -04:00
Sam Kleinman
c3bcf9b180 e2e: test multiple broadcast tx methods (#6925) 2021-09-10 12:03:41 -04:00
Callum Waters
f1b9613301 e2e: increase retain height to at least twice evidence age (#6924) 2021-09-10 16:18:01 +02:00
dependabot[bot]
5d279c93db build(deps): Bump github.com/rs/zerolog from 1.24.0 to 1.25.0 (#6923)
Bumps [github.com/rs/zerolog](https://github.com/rs/zerolog) from 1.24.0 to 1.25.0.
- [Commits](https://github.com/rs/zerolog/compare/v1.24.0...v1.25.0)
  - `65adfd8` Make Fields method accept both map and slice ([#352](https://github-redirect.dependabot.com/rs/zerolog/issues/352))

2021-09-10 13:38:55 +00:00
Sam Kleinman
af71f1cbcb e2e: load generation and logging changes (#6912) 2021-09-10 09:26:17 -04:00
Sam Kleinman
1a9bad9dd3 ci: tweak code coverage settings (#6920) 2021-09-09 15:58:52 -04:00
M. J. Fromberger
db690c3b68 rpc: fix hash encoding in JSON parameters (#6813)
The responses from node RPCs encode hash values as hexadecimal strings. This
behaviour is stipulated in our OpenAPI documentation. In some cases, however,
hashes received as JSON parameters were being decoded as byte buffers, as is
the convention for JSON.

This resulted in the confusing situation that a hash reported by one request
(e.g., broadcast_tx_commit) could not be passed as a parameter to another
(e.g., tx) via JSON, without translating the hex-encoded output hash into the
base64 encoding used by JSON for opaque bytes.

Fixes #6802.
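As a hedged illustration of the round trip this fixes (the hash value and address are placeholders): the hex hash reported in one response can now be passed directly as the `hash` parameter of the `tx` method over JSON, with no base64 re-encoding.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// The hash exactly as reported by broadcast_tx_commit (hex-encoded).
	hexHash := "AABBCCDD00112233445566778899AABBCCDD00112233445566778899AABBCCDD"
	body, _ := json.Marshal(map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  "tx",
		"params":  map[string]interface{}{"hash": hexHash, "prove": false},
	})
	resp, err := http.Post("http://127.0.0.1:26657", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // the hash no longer needs base64 translation
}
```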
2021-09-09 15:51:17 -04:00
Sam Kleinman
0c3601bcac e2e: introduce canonical ordering of manifests (#6918) 2021-09-09 14:31:17 -04:00
Sam Kleinman
816e9b0b49 ci: drop codecov bot (#6917) 2021-09-09 14:04:56 -04:00
Sam Kleinman
2a224fb2bd rfc: database storage engine (#6897) 2021-09-09 12:42:15 -04:00
William Banfield
2a74c9c498 update readme to more accurately reflect the tendermint public api (#6916) 2021-09-08 17:49:36 -04:00
William Banfield
dc0e04d243 rename configuration parameters to use the new blocksync nomenclature (#6896)
The 0.35 release cycle renamed the 'fastsync' functionality to 'blocksync'. This change brings the configuration parameters in line with that change. Namely, it updates the configuration file `[fastsync]` field to be `[blocksync]` and changes the command line flag and config file parameters `--fast-sync` and `fast-sync` to `--enable-block-sync` and `enable-block-sync` respectively.

Error messages were added to help users encountering these changes quickly make the needed updates to their files/scripts.

When using the old command line argument for fast-sync, the following is printed

```
./build/tendermint start --proxy-app=kvstore --consensus.create-empty-blocks=false --fast-sync=false
ERROR: invalid argument "false" for "--fast-sync" flag: --fast-sync has been deprecated, please use --enable-block-sync
```

When using one of the old config file parameters, the following is printed:
```
./build/tendermint start --proxy-app=kvstore --consensus.create-empty-blocks=false
ERROR: error in config file: a configuration parameter named 'fast-sync' was found in the configuration file. The 'fast-sync' parameter has been renamed to 'enable-block-sync', please update the 'fast-sync' field in your configuration file to 'enable-block-sync'
```
2021-09-08 13:58:12 +00:00
William Banfield
63aeb50665 upgrading: add information into the UPGRADING.md for users of the codebase wishing to upgrade (#6898)
* add information on upgrading to the new p2p library

* clarify p2p backwards compatibility

* reorder p2p queue list

* add demo for p2p selection

* fix spacing in upgrading
2021-09-08 09:41:12 -04:00
William Banfield
9b458a1c43 update changelog ahead of v0.35 release (#6893)
This change moves the changelog entries from CHANGELOG_PENDING.md to CHANGELOG.md ahead of the 0.35 release.
2021-09-07 22:38:45 +00:00
M. J. Fromberger
cfe64ed8b6 cleanup: fix order of linters in the golangci-lint config (#6910)
This is a cosmetic change that restores lexicographic order to the selected
linters in the CI config. No change to which linters we run, only putting them
back in order so it's easier to spot the one you care about.
2021-09-07 20:08:46 +00:00
M. J. Fromberger
db6e031a16 doc: fix a typo in the indexing section (#6909) 2021-09-07 18:44:23 +00:00
dependabot[bot]
04cca018c7 build(deps): Bump github.com/golangci/golangci-lint (#6907)
Bumps [github.com/golangci/golangci-lint](https://github.com/golangci/golangci-lint) from 1.42.0 to 1.42.1.
- [Release notes](https://github.com/golangci/golangci-lint/releases)
- [Changelog](https://github.com/golangci/golangci-lint/blob/master/CHANGELOG.md)
- [Commits](https://github.com/golangci/golangci-lint/compare/v1.42.0...v1.42.1)

---
updated-dependencies:
- dependency-name: github.com/golangci/golangci-lint
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-09-07 08:55:25 -04:00
Callum Waters
8fe651ba30 e2e: clean up generation of evidence (#6904) 2021-09-07 12:20:43 +02:00
Peter Lai
89539d0454 networks: update to latest DigitalOcean modules (#6902) 2021-09-06 18:04:56 +02:00
dependabot[bot]
4bab061cb2 build(deps): Bump docker/setup-buildx-action from 1.5.0 to 1.6.0 (#6903) 2021-09-06 15:07:27 +02:00
Peter Lai
5ee39f05b9 network: update terraform config (#6901) 2021-09-06 11:53:28 +02:00
Sam Kleinman
1f8bb74bba e2e: skip assertions for stateless nodes (#6894) 2021-09-03 13:25:36 -04:00
Sam Kleinman
77615b900f e2e: wait for all nodes rather than just one (#6892) 2021-09-03 13:03:16 -04:00
Sam Kleinman
21b5e5931a e2e: skip light clients when waiting for height (#6891) 2021-09-03 10:19:15 -04:00
dependabot[bot]
0055f9efcc build(deps): Bump github.com/lib/pq from 1.10.2 to 1.10.3 (#6890)
Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.10.2 to 1.10.3.
- [Release notes](https://github.com/lib/pq/releases)
- [Commits](https://github.com/lib/pq/compare/v1.10.2...v1.10.3)

---
updated-dependencies:
- dependency-name: github.com/lib/pq
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-09-03 09:07:30 -04:00
Callum Waters
bda948e814 statesync: implement p2p state provider (#6807) 2021-09-02 13:19:18 +02:00
Sam Kleinman
511bd3eb7f e2e: weight protocol dimensions (#6884)
This changes the e2e suite to (roughly) focus on configurations that
are more widely used. Most production users of tendermint run the ABCI
application in process, and the GRPC/socket methods cover the vast
majority of the remaining use cases.

Perhaps we should consider dropping support for unix domain sockets in a
future release, but I think in the meantime it's useful to have the
tests *mostly* focus on the primary use cases.
2021-09-01 19:52:40 +00:00
Sam Kleinman
9a0081f076 e2e: change restart mechanism (#6883) 2021-09-01 12:49:45 -04:00
M. J. Fromberger
7fe3e78a38 Update the psql indexer schema and implementation (#6868)
Update the schema and implementation of the Postgres event indexer to improve 
certain types of queries against the index. These changes address the use cases
raised by #6843, and are partly inspired by the prototype schema in that issue.

In the old schema, events were flattened, making it difficult to find all the events 
associated with a particular block or transaction. In addition, events with no key/value
attributes were entirely lost, since entries were generated only for attributes.

To address these issues, this new schema records blocks, transactions, events,
and attributes in separate tables, and provides views that join these tables to
give a more convenient query surface for block and transaction events.

- All events for a given block can be queried from the `block_events` view.
- All events for a given transaction can be queried from the `tx_events` view.
- Multiple events for the same key can be indexed for both blocks and transactions.

The tests have been reworked, but all of the existing test cases for the old schema 
still pass with the new implementation. Various other minor cleanups are included.

ADR-065 is also updated to reflect the updated schema.
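For a sense of the new query surface, here is a hedged sketch of querying the `block_events` view (the view names come from the description above; the column names, height value, and connection string are assumptions for illustration):

```go
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq" // Postgres driver, already a dependency of the indexer
)

func main() {
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/tendermint?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// All events for one block, instead of reassembling flattened attributes.
	rows, err := db.Query(`SELECT type, key, value FROM block_events WHERE height = $1`, 100)
	if err != nil {
		panic(err)
	}
	defer rows.Close()
	for rows.Next() {
		var typ, key, value string
		if err := rows.Scan(&typ, &key, &value); err != nil {
			panic(err)
		}
		fmt.Println(typ, key, value)
	}
}
```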
2021-08-31 18:35:07 -04:00
Sam Kleinman
7169d26ddf e2e: more reliable method for selecting node to inject evidence (#6880)
In retrospect, my previous implementation of this could get
unlucky and never find the correct node. This method is more reliable.
2021-08-31 21:56:06 +00:00
Sam Kleinman
c4df8a3840 types: move mempool error for consistency (#6875)
This is a little change just to make things more consistent ahead of
the 0.35 release.
2021-08-30 17:42:58 +00:00
dependabot[bot]
f858ebeb88 build(deps): Bump github.com/rs/zerolog from 1.23.0 to 1.24.0 (#6874)
Bumps [github.com/rs/zerolog](https://github.com/rs/zerolog) from 1.23.0 to 1.24.0.
- [Release notes](https://github.com/rs/zerolog/releases)
- [Commits](https://github.com/rs/zerolog/compare/v1.23.0...v1.24.0)

---
updated-dependencies:
- dependency-name: github.com/rs/zerolog
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-30 09:34:54 -04:00
Jeeyong Um
c9347a0647 docs: remove return code in normal case from go built-in example (#6841)
An explicit exit prevents the deferred cleanup code from running. In this case,
falling off the end of main will achieve the same goal as an explicit exit.
2021-08-29 12:14:20 -04:00
Sam Kleinman
0df421b37f e2e: add weighted random configuration selector (#6869)
When reviewing #6807 I assumed that `probSetChoice` worked this way.

I think that the coverage of various configuration options should
generally track what we expect the actual usage to be, to focus the
most test coverage on the configurations that are the most prevalent.
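For illustration, a hedged sketch of the weighted selection a selector like this implies (the names and weights are illustrative, not the actual e2e generator API): each option is chosen with probability proportional to its weight.

```go
package main

import (
	"fmt"
	"math/rand"
	"sort"
)

// weightedChoice picks a key with probability proportional to its weight.
// weights must be non-empty with a positive total.
func weightedChoice(r *rand.Rand, weights map[string]uint) string {
	keys := make([]string, 0, len(weights))
	var total uint
	for k, w := range weights {
		keys = append(keys, k)
		total += w
	}
	sort.Strings(keys) // deterministic order for a given seed
	n := uint(r.Intn(int(total)))
	for _, k := range keys {
		if n < weights[k] {
			return k
		}
		n -= weights[k]
	}
	return keys[len(keys)-1] // unreachable when total > 0
}

func main() {
	r := rand.New(rand.NewSource(1))
	// Weight the common configurations most heavily, e.g. the builtin ABCI app.
	fmt.Println(weightedChoice(r, map[string]uint{"builtin": 5, "grpc": 2, "unix": 1}))
}
```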
2021-08-27 16:32:40 +00:00
Sam Kleinman
94e1eb8cfe rfc: fix link style (#6870)
This is super minor, but this change fixes a broken link and
matches the existing style of the ADRs.
2021-08-27 16:21:12 +00:00
Sam Kleinman
23abb0de8b rfc: p2p next steps (#6866) 2021-08-27 12:14:59 -04:00
Aleksandr Bezobchuk
58a6cfff9a internal/consensus: update error log (#6863)
Issues were reported in Osmosis, where the message is extremely long. Also, there is absolutely no reason to log the message, IMO. If we must, we can log the message at DEBUG level.
2021-08-25 22:43:21 +00:00
Sam Kleinman
6e921f6644 p2p: change default to use new stack (#6862)
This is just a configuration change to default to using the new stack
unless it is explicitly disabled (e.g. via `UseLegacy`). This renames the
configuration value and makes the configuration logic clearer.

The legacy option is good to retain as a fallback if the new stack has
issues operationally, but we should make sure that most of the time
we're using the new stack.
2021-08-25 17:33:38 +00:00
Sam Kleinman
a0a5d45cb1 lint: change deprecated linter (#6861)
This is a super minor change that silences a warning when running the linter locally.
2021-08-25 12:26:11 +00:00
Sam Kleinman
9c8379ef30 e2e: more consistent node selection during tests (#6857)
In the transaction load generator, the e2e test harness previously distributed load randomly to hosts, which was a source of test non-determinism. This change distributes the load generation to the different nodes in the set in a round robin fashion, to produce more reliable results, but does not otherwise change the behavior of the test harness.
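A minimal illustration of the round-robin approach described above (hypothetical types, not the harness's actual implementation):

```go
package main

import "fmt"

type roundRobin struct {
	nodes []string
	next  int
}

// pick returns nodes in a fixed cyclic order, removing the randomness that
// made test runs non-deterministic.
func (r *roundRobin) pick() string {
	n := r.nodes[r.next%len(r.nodes)]
	r.next++
	return n
}

func main() {
	rr := &roundRobin{nodes: []string{"validator01", "validator02", "full01"}}
	for i := 0; i < 5; i++ {
		fmt.Println(rr.pick())
	}
}
```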
2021-08-25 12:24:01 +00:00
dependabot[bot]
e053643b95 build(deps): Bump codecov/codecov-action from 2.0.2 to 2.0.3 (#6860)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 2.0.2 to 2.0.3.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v2.0.2...v2.0.3)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-25 08:12:04 -04:00
M. J. Fromberger
41a361ed8d psql: add documentation and simplify constructor API (#6856)
Add documentation comments to the psql event sink package, and simplify the
constructor function so that it does not return the SQL database handle.  The
handle is needed for testing, so expose that via a separate method on the
concrete type.

Update the tests and existing usage for the change. This change does not affect
the behaviour of the sink, so there are no functional changes, only syntactic
updates.
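A hedged sketch of the constructor shape this describes; the identifiers and import path below (`psql.NewEventSink`, the `DB` method) are assumptions based on this description, not verified API:

```go
package main

import (
	"github.com/tendermint/tendermint/state/indexer/sink/psql" // assumed path
)

func main() {
	// The constructor now returns only the sink (plus an error); it no
	// longer hands back the *sql.DB handle.
	sink, err := psql.NewEventSink("postgres://user:pass@localhost/tendermint?sslmode=disable", "test-chain")
	if err != nil {
		panic(err)
	}
	// Tests that need the handle fetch it from a method on the concrete type.
	db := sink.DB()
	defer db.Close()
}
```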
2021-08-24 17:18:27 -04:00
William Banfield
bc2b529b95 inspect: add inspect mode for debugging crashed tendermint node (#6785)
EDIT: Updated, see [comment below](https://github.com/tendermint/tendermint/pull/6785#issuecomment-897793175)

This change adds a sketch of the `Debug` mode. 

This change adds a `Debug` struct to the node package. This `Debug` struct is intended to be created and started by a command in the `cmd` directory. The `Debug` struct runs the RPC server on the data directories: both the state store and the block store.

This change required a good deal of refactoring. Namely, a new `rpc.go` file was added to the `node` package. This file encapsulates functions for starting RPC servers used by nodes. A potential additional change is to further factor this code into shared code _in_ the `rpc` package. 

Minor API tweaks were also made that seemed appropriate such as the mechanism for fetching routes from the `rpc/core` package.

Additional work is required to register the `Debug` service as a command in the `cmd` directory, but I am looking for feedback on whether this direction seems appropriate before diving much further.

closes: #5908
2021-08-24 18:12:06 +00:00
Tess Rinearson
6d5ff590c3 contributing: remove release_notes.md reference (#6846) 2021-08-24 19:07:53 +02:00
439 changed files with 15865 additions and 11193 deletions

.github/codecov.yml

@@ -5,19 +5,14 @@ coverage:
   status:
     project:
       default:
-        threshold: 1%
-    patch: on
+        threshold: 20%
+    patch: off
     changes: off
 github_checks:
   annotations: false
-comment:
-  layout: "diff, files"
-  behavior: default
-  require_changes: no
-  require_base: no
-  require_head: yes
+comment: false
 ignore:
   - "docs"
@@ -25,3 +20,6 @@ ignore:
   - "scripts"
   - "**/*.pb.go"
   - "libs/pubsub/query/query.peg.go"
+  - "*.md"
+  - "*.rst"
+  - "*.yml"


@@ -2,6 +2,9 @@ name: Test Coverage
 on:
   pull_request:
   push:
+    paths:
+      - "**.go"
+      - "!test/"
     branches:
       - master
       - release/**
@@ -50,6 +53,7 @@ jobs:
       with:
         PATTERNS: |
           **/**.go
+          "!test/"
           go.mod
           go.sum
     - name: install
@@ -72,6 +76,7 @@ jobs:
       with:
         PATTERNS: |
           **/**.go
+          "!test/"
           go.mod
           go.sum
     - uses: actions/download-artifact@v2
@@ -100,6 +105,7 @@ jobs:
       with:
        PATTERNS: |
           **/**.go
+          "!test/"
           go.mod
           go.sum
     - uses: actions/download-artifact@v2
@@ -121,7 +127,7 @@ jobs:
     - run: |
         cat ./*profile.out | grep -v "mode: atomic" >> coverage.txt
       if: env.GIT_DIFF
-    - uses: codecov/codecov-action@v2.0.2
+    - uses: codecov/codecov-action@v2.1.0
       with:
         file: ./coverage.txt
       if: env.GIT_DIFF


@@ -40,7 +40,7 @@ jobs:
         platforms: all
     - name: Set up Docker Buildx
-      uses: docker/setup-buildx-action@v1.5.0
+      uses: docker/setup-buildx-action@v1.6.0
     - name: Login to DockerHub
       if: ${{ github.event_name != 'pull_request' }}


@@ -30,7 +30,7 @@ jobs:
     - name: Build
       working-directory: test/e2e
       # Run make jobs in parallel, since we can't run steps in parallel.
-      run: make -j2 docker generator runner
+      run: make -j2 docker generator runner tests
     - name: Generate testnets
       working-directory: test/e2e


@@ -28,7 +28,7 @@ jobs:
     - name: Build
       working-directory: test/e2e
       # Run two make jobs in parallel, since we can't run steps in parallel.
-      run: make -j2 docker runner
+      run: make -j2 docker runner tests
       if: "env.GIT_DIFF != ''"
     - name: Run CI testnet


@@ -34,7 +34,7 @@ jobs:
         echo ::set-output name=tags::${TAGS}
     - name: Set up Docker Buildx
-      uses: docker/setup-buildx-action@v1.5.0
+      uses: docker/setup-buildx-action@v1.6.0
     - name: Login to DockerHub
       uses: docker/login-action@v1.10.0


@@ -2,7 +2,7 @@ name: "Release"
 on:
   push:
-    branches:
+    branches:
       - "RC[0-9]/**"
     tags:
       - "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
@@ -20,9 +20,6 @@ jobs:
       with:
         go-version: '1.16'
-    - run: echo https://github.com/tendermint/tendermint/blob/${GITHUB_REF#refs/tags/}/CHANGELOG.md#${GITHUB_REF#refs/tags/} > ../release_notes.md
-      if: startsWith(github.ref, 'refs/tags/')
     - name: Build
       uses: goreleaser/goreleaser-action@v2
       if: ${{ github.event_name == 'pull_request' }}
@@ -35,6 +32,6 @@ jobs:
       if: startsWith(github.ref, 'refs/tags/')
       with:
         version: latest
-        args: release --rm-dist --release-notes=../release_notes.md
+        args: release --rm-dist
       env:
         GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -1,14 +1,17 @@
 linters:
   enable:
+    - asciicheck
     - bodyclose
     - deadcode
     - depguard
     - dogsled
     - dupl
     - errcheck
+    - exportloopref
     # - funlen
     # - gochecknoglobals
     # - gochecknoinits
+    # - gocognit
     - goconst
     - gocritic
     # - gocyclo
@@ -22,11 +25,11 @@ linters:
     - ineffassign
     # - interfacer
     - lll
-    - misspell
     # - maligned
+    - misspell
     - nakedret
+    - nolintlint
     - prealloc
-    - scopelint
     - staticcheck
     - structcheck
     - stylecheck
@@ -37,9 +40,6 @@ linters:
     - varcheck
     # - whitespace
     # - wsl
-    # - gocognit
-    - nolintlint
-    - asciicheck
 issues:
   exclude-rules:

@@ -1,6 +1,173 @@
# Changelog
-Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
+Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermint).
## v0.35
Special thanks to external contributors on this release: @JayT106, @bipulprasad, @alessio, @Yawning, @silasdavis,
@cuonglm, @tanyabouman, @JoeKash, @githubsands, @jeebster, @crypto-facs, @liamsi, and @gotjoshua
### BREAKING CHANGES
- CLI/RPC/Config
- [pubsub/events] \#6634 The `ResultEvent.Events` field is now of type `[]abci.Event` preserving event order instead of `map[string][]string`. (@alexanderbez)
- [config] \#5598 The `test_fuzz` and `test_fuzz_config` P2P settings have been removed. (@erikgrinaker)
- [config] \#5728 `fastsync.version = "v1"` is no longer supported (@melekes)
- [cli] \#5772 `gen_node_key` prints JSON-encoded `NodeKey` rather than ID and does not save it to `node_key.json` (@melekes)
- [cli] \#5777 use hyphen-case instead of snake_case for all cli commands and config parameters (@cmwaters)
- [rpc] \#6019 standardise RPC errors and return the correct status code (@bipulprasad & @cmwaters)
- [rpc] \#6168 Change default sorting to desc for `/tx_search` results (@melekes)
- [cli] \#6282 User must specify the node mode when using `tendermint init` (@cmwaters)
- [state/indexer] \#6382 reconstruct indexer, move txindex into the indexer package (@JayT106)
- [cli] \#6372 Introduce `BootstrapPeers` as part of the new p2p stack. Peers to be connected on startup (@cmwaters)
- [config] \#6462 Move `PrivValidator` configuration out of `BaseConfig` into its own section. (@tychoish)
- [rpc] \#6610 Add MaxPeerBlockHeight into /status rpc call (@JayT106)
- [blocksync/rpc] \#6620 Add TotalSyncedTime & RemainingTime to SyncInfo in /status RPC (@JayT106)
- [rpc/grpc] \#6725 Mark gRPC in the RPC layer as deprecated.
- [blocksync/v2] \#6730 Fast Sync v2 is deprecated, please use v0
- [rpc] Add genesis_chunked method to support paginated and parallel fetching of large genesis documents.
- [rpc/jsonrpc/server] \#6785 `Listen` function updated to take an `int` argument, `maxOpenConnections`, instead of an entire config object. (@williambanfield)
- [rpc] \#6820 Update RPC methods to reflect changes in the p2p layer, disabling support for `UnsafeDialSeeds` and `UnsafeDialPeers` when used with the new p2p layer, and changing the response format of the peer list in `NetInfo` for all users.
- [cli] \#6854 Remove deprecated snake case commands. (@tychoish)
- Apps
- [ABCI] \#6408 Change the `key` and `value` fields from `[]byte` to `string` in the `EventAttribute` type. (@alexanderbez)
- [ABCI] \#5447 Remove `SetOption` method from `ABCI.Client` interface
- [ABCI] \#5447 Reset `Oneof` indexes for `Request` and `Response`.
- [ABCI] \#5818 Use protoio for msg length delimitation. Migrates from int64 to uint64 length delimiters.
- [ABCI] \#3546 Add `mempool_error` field to `ResponseCheckTx`. This field will contain an error string if Tendermint encountered an error while adding a transaction to the mempool. (@williambanfield)
- [Version] \#6494 `TMCoreSemVer` has been renamed to `TMVersion`.
- It is not required any longer to set ldflags to set version strings
- [abci/counter] \#6684 Delete counter example app
- Go API
- [pubsub] \#6634 The `Query#Matches` method along with other pubsub methods, now accepts a `[]abci.Event` instead of `map[string][]string`. (@alexanderbez)
- [p2p] \#6618 \#6583 Move `p2p.NodeInfo`, `p2p.NodeID` and `p2p.NetAddress` into `types` to support use in external packages. (@tychoish)
- [node] \#6540 Reduce surface area of the `node` package by making most of the implementation details private. (@tychoish)
- [p2p] \#6547 Move the entire `p2p` package and all reactor implementations into `internal`. (@tychoish)
- [libs/log] \#6534 Remove the existing custom Tendermint logger backed by go-kit. The logging interface, `Logger`, remains. Tendermint still provides a default logger backed by the performant zerolog logger. (@alexanderbez)
- [libs/time] \#6495 Move types/time to libs/time to improve consistency. (@tychoish)
- [mempool] \#6529 The `Context` field has been removed from the `TxInfo` type. `CheckTx` now requires a `Context` argument. (@alexanderbez)
- [abci/client, proxy] \#5673 `Async` funcs return an error, `Sync` and `Async` funcs accept `context.Context` (@melekes)
- [p2p] Remove unused function `MakePoWTarget`. (@erikgrinaker)
- [libs/bits] \#5720 Validate `BitArray` in `FromProto`, which now returns an error (@melekes)
- [proto/p2p] Rename `DefaultNodeInfo` and `DefaultNodeInfoOther` to `NodeInfo` and `NodeInfoOther` (@erikgrinaker)
- [proto/p2p] Rename `NodeInfo.default_node_id` to `node_id` (@erikgrinaker)
- [libs/os] Kill() and {Must,}{Read,Write}File() functions have been removed. (@alessio)
- [store] \#5848 Remove block store state in favor of using the db iterators directly (@cmwaters)
- [state] \#5864 Use an iterator when pruning state (@cmwaters)
- [types] \#6023 Remove `tm2pb.Header`, `tm2pb.BlockID`, `tm2pb.PartSetHeader` and `tm2pb.NewValidatorUpdate`.
- Each of the above types has a `ToProto` and `FromProto` method or function which replaced this logic.
- [light] \#6054 Move `MaxRetryAttempt` option from client to provider.
- `NewWithOptions` now sets the max retry attempts and timeouts (@cmwaters)
- [all] \#6077 Change spelling from British English to American (@cmwaters)
- Rename "Subscription.Cancelled()" to "Subscription.Canceled()" in libs/pubsub
- Rename "behaviour" pkg to "behavior" and internalized it in blocksync v2
- [rpc/client/http] \#6176 Remove `endpoint` arg from `New`, `NewWithTimeout` and `NewWithClient` (@melekes)
- [rpc/client/http] \#6176 Unexpose `WSEvents` (@melekes)
- [rpc/jsonrpc/client/ws_client] \#6176 `NewWS` no longer accepts options (use `NewWSWithOptions` and `OnReconnect` funcs to configure the client) (@melekes)
- [internal/libs] \#6366 Move `autofile`, `clist`,`fail`,`flowrate`, `protoio`, `sync`, `tempfile`, `test` and `timer` lib packages to an internal folder
- [libs/rand] \#6364 Remove most of libs/rand in favour of standard lib's `math/rand` (@liamsi)
- [mempool] \#6466 The original mempool reactor has been versioned as `v0` and moved to a sub-package under the root `mempool` package.
Some core types have been kept in the `mempool` package such as `TxCache` and its implementations, the `Mempool` interface itself
and `TxInfo`. (@alexanderbez)
- [crypto/sr25519] \#6526 Do not re-execute the Ed25519-style key derivation step when doing signing and verification. The derivation is now done once and only once. This breaks `sr25519.GenPrivKeyFromSecret` output compatibility. (@Yawning)
- [types] \#6627 Move `NodeKey` to types to make the type public.
- [config] \#6627 Extend `config` to contain methods `LoadNodeKeyID` and `LoadorGenNodeKeyID`
- [blocksync] \#6755 Rename `FastSync` and `Blockchain` package to `BlockSync` (@cmwaters)
- Data Storage
- [store/state/evidence/light] \#5771 Use an order-preserving varint key encoding (@cmwaters)
- [mempool] \#6396 Remove mempool's write ahead log (WAL), (previously unused by the tendermint code). (@tychoish)
- [state] \#6541 Move pruneBlocks from consensus/state to state/execution. (@JayT106)
- Tooling
- [tools] \#6498 Set OS home dir instead of the hardcoded PATH. (@JayT106)
- [cli/indexer] \#6676 Reindex events command line tooling. (@JayT106)
### FEATURES
- [config] Add `--mode` flag and config variable. See [ADR-52](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-052-tendermint-mode.md) @dongsam
- [rpc] \#6329 Don't cap page size in unsafe mode (@gotjoshua, @cmwaters)
- [pex] \#6305 v2 pex reactor with backwards compatibility. Introduces two new pex messages to
accommodate the new p2p stack. Removes the notion of seeds and crawling. All peer
exchange reactors behave the same. (@cmwaters)
- [crypto] \#6376 Enable sr25519 as a validator key type
- [mempool] \#6466 Introduction of a prioritized mempool. (@alexanderbez)
- `Priority` and `Sender` have been introduced into the `ResponseCheckTx` type, where the `priority` will determine the prioritization of
the transaction when a proposer reaps transactions for a block proposal. The `sender` field acts as an index.
- Operators may toggle between the legacy mempool reactor, `v0`, and the new prioritized reactor, `v1`, by setting the
`mempool.version` configuration, where `v1` is the default configuration.
- Applications that do not specify a priority, i.e. zero, will have transactions reaped by the order in which they are received by the node.
- Transactions are gossiped in FIFO order as they are in `v0`.
- [config/indexer] \#6411 Introduce support for custom event indexing data sources, specifically PostgreSQL. (@JayT106)
- [blocksync/event] \#6619 Emit blocksync status event when switching consensus/blocksync (@JayT106)
- [statesync/event] \#6700 Emit statesync status start/end event (@JayT106)
- [inspect] \#6785 Add a new `inspect` command for introspecting the state and block store of a crashed tendermint node. (@williambanfield)
### IMPROVEMENTS
- [libs/log] Console log formatting changes as a result of \#6534 and \#6589. (@tychoish)
- [statesync] \#6566 Allow state sync fetchers and request timeout to be configurable. (@alexanderbez)
- [types] \#6478 Add `block_id` to `newblock` event (@jeebster)
- [crypto/ed25519] \#5632 Adopt zip215 `ed25519` verification. (@marbar3778)
- [crypto/ed25519] \#6526 Use [curve25519-voi](https://github.com/oasisprotocol/curve25519-voi) for `ed25519` signing and verification. (@Yawning)
- [crypto/sr25519] \#6526 Use [curve25519-voi](https://github.com/oasisprotocol/curve25519-voi) for `sr25519` signing and verification. (@Yawning)
- [privval] \#5603 Add `--key` to `init`, `gen_validator`, `testnet` & `unsafe_reset_priv_validator` for use in generating `secp256k1` keys.
- [privval] \#5725 Add gRPC support to private validator.
- [privval] \#5876 `tendermint show-validator` will query the remote signer if gRPC is being used (@marbar3778)
- [abci/client] \#5673 `Async` requests return an error if queue is full (@melekes)
- [mempool] \#5673 Cancel `CheckTx` requests if RPC client disconnects or times out (@melekes)
- [abci] \#5706 Added `AbciVersion` to `RequestInfo` allowing applications to check ABCI version when connecting to Tendermint. (@marbar3778)
- [blocksync/v1] \#5728 Remove blocksync v1 (@melekes)
- [blocksync/v0] \#5741 Relax termination conditions and increase sync timeout (@melekes)
- [cli] \#5772 `gen_node_key` output now contains node ID (`id` field) (@melekes)
- [blocksync/v2] \#5774 Send status request when new peer joins (@melekes)
- [store] \#5888 store.SaveBlock saves using batches instead of transactions for now to improve ACID properties. This is a quick fix for underlying issues around tm-db and ACID guarantees. (@githubsands)
- [consensus] \#5987 and \#5792 Remove the `time_iota_ms` consensus parameter. Merge `tmproto.ConsensusParams` and `abci.ConsensusParams`. (@marbar3778, @valardragon)
- [types] \#5994 Reduce the use of protobuf types in core logic. (@marbar3778)
- `ConsensusParams`, `BlockParams`, `ValidatorParams`, `EvidenceParams`, `VersionParams`, `sm.Version` and `version.Consensus` have become native types. They still utilize protobuf when being sent over the wire or written to disk.
- [rpc/client/http] \#6163 Do not drop events even if the `out` channel is full (@melekes)
- [node] \#6059 Validate and complete genesis doc before saving to state store (@silasdavis)
- [state] \#6067 Batch save state data (@githubsands & @cmwaters)
- [crypto] \#6120 Implement batch verification interface for ed25519 and sr25519. (@marbar3778)
- [types] \#6120 use batch verification for verifying commits signatures.
- If the key type supports the batch verification API it will try to batch verify. If the verification fails we will single verify each signature.
- [privval/file] \#6185 Return error on `LoadFilePV`, `LoadFilePVEmptyState`. Allows for better programmatic control of Tendermint.
- [privval] \#6240 Add `context.Context` to privval interface.
- [rpc] \#6265 set cache control in http-rpc response header (@JayT106)
- [statesync] \#6378 Retry requests for snapshots and add a minimum discovery time (5s) for new snapshots.
- [node/state] \#6370 graceful shutdown in the consensus reactor (@JayT106)
- [crypto/merkle] \#6443 Improve HashAlternatives performance (@cuonglm)
- [crypto/merkle] \#6513 Optimize HashAlternatives (@marbar3778)
- [p2p/pex] \#6509 Improve addrBook.hash performance (@cuonglm)
- [consensus/metrics] \#6549 Change block_size gauge to a histogram for better observability over time (@marbar3778)
- [statesync] \#6587 Increase chunk priority and re-request chunks that don't arrive (@cmwaters)
- [state/privval] \#6578 No GetPubKey retry beyond the proposal/voting window (@JayT106)
- [rpc] \#6615 Add TotalGasUsed to block_results response (@crypto-facs)
- [cmd/tendermint/commands] \#6623 replace `$HOME/.some/test/dir` with `t.TempDir` (@tanyabouman)
- [statesync] \#6807 Implement P2P state provider as an alternative to RPC (@cmwaters)
### BUG FIXES
- [privval] \#5638 Increase read/write timeout to 5s and calculate ping interval based on it (@JoeKash)
- [evidence] \#6375 Fix bug with inconsistent LightClientAttackEvidence hashing (cmwaters)
- [rpc] \#6507 Ensure RPC client can handle URLs without ports (@JayT106)
- [statesync] \#6463 Adds Reverse Sync feature to fetch historical light blocks after state sync in order to verify any evidence (@cmwaters)
- [blocksync] \#6590 Update the metrics during blocksync (@JayT106)
## v0.34.13
*September 6, 2021*
This release backports improvements to state synchronization and ABCI
performance under concurrent load, and the PostgreSQL event indexer.
### IMPROVEMENTS
- [statesync] [\#6881](https://github.com/tendermint/tendermint/issues/6881) improvements to stateprovider logic (@cmwaters)
- [ABCI] [\#6873](https://github.com/tendermint/tendermint/issues/6873) change client to use multi-reader mutexes (@tychoish)
- [indexing] [\#6906](https://github.com/tendermint/tendermint/issues/6906) enable the PostgreSQL indexer sink (@creachadair)
## v0.34.12


@@ -9,153 +9,18 @@ Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermi
### BREAKING CHANGES
- CLI/RPC/Config
- [pubsub/events] \#6634 The `ResultEvent.Events` field is now of type `[]abci.Event` preserving event order instead of `map[string][]string`. (@alexanderbez)
- [config] \#5598 The `test_fuzz` and `test_fuzz_config` P2P settings have been removed. (@erikgrinaker)
- [config] \#5728 `fast_sync = "v1"` is no longer supported (@melekes)
- [cli] \#5772 `gen_node_key` prints JSON-encoded `NodeKey` rather than ID and does not save it to `node_key.json` (@melekes)
- [cli] \#5777 use hyphen-case instead of snake_case for all cli commands and config parameters (@cmwaters)
- [rpc] \#6019 standardise RPC errors and return the correct status code (@bipulprasad & @cmwaters)
- [rpc] \#6168 Change default sorting to desc for `/tx_search` results (@melekes)
- [cli] \#6282 User must specify the node mode when using `tendermint init` (@cmwaters)
- [state/indexer] \#6382 reconstruct indexer, move txindex into the indexer package (@JayT106)
- [cli] \#6372 Introduce `BootstrapPeers` as part of the new p2p stack. Peers to be connected on startup (@cmwaters)
- [config] \#6462 Move `PrivValidator` configuration out of `BaseConfig` into its own section. (@tychoish)
- [rpc] \#6610 Add MaxPeerBlockHeight into /status rpc call (@JayT106)
- [fastsync/rpc] \#6620 Add TotalSyncedTime & RemainingTime to SyncInfo in /status RPC (@JayT106)
- [rpc/grpc] \#6725 Mark gRPC in the RPC layer as deprecated.
- [blockchain/v2] \#6730 Fast Sync v2 is deprecated, please use v0
- [rpc] \#6820 Update RPC methods to reflect changes in the p2p layer, disabling support for `UnsafeDialSeeds` and `UnsafeDialPeers` when used with the new p2p layer, and changing the response format of the peer list in `NetInfo` for all users.
- [cli] \#6854 Remove deprecated snake case commands. (@tychoish)
- Apps
- [ABCI] \#6408 Change the `key` and `value` fields from `[]byte` to `string` in the `EventAttribute` type. (@alexanderbez)
- [ABCI] \#5447 Remove `SetOption` method from `ABCI.Client` interface
- [ABCI] \#5447 Reset `Oneof` indexes for `Request` and `Response`.
- [ABCI] \#5818 Use protoio for msg length delimitation. Migrates from int64 to uint64 length delimiters.
- [ABCI] \#3546 Add `mempool_error` field to `ResponseCheckTx`. This field will contain an error string if Tendermint encountered an error while adding a transaction to the mempool. (@williambanfield)
- [Version] \#6494 `TMCoreSemVer` has been renamed to `TMVersion`.
- It is not required any longer to set ldflags to set version strings
- [abci/counter] \#6684 Delete counter example app
- P2P Protocol
- Go API
- [pubsub] \#6634 The `Query#Matches` method along with other pubsub methods, now accepts a `[]abci.Event` instead of `map[string][]string`. (@alexanderbez)
- [p2p] \#6618 Move `p2p.NodeInfo` into `types` to support use of the SDK. (@tychoish)
- [p2p] \#6583 Make `p2p.NodeID` and `p2p.NetAddress` exported types to support their use in the RPC layer. (@tychoish)
- [node] \#6540 Reduce surface area of the `node` package by making most of the implementation details private. (@tychoish)
- [p2p] \#6547 Move the entire `p2p` package and all reactor implementations into `internal`. (@tychoish)
- [libs/log] \#6534 Remove the existing custom Tendermint logger backed by go-kit. The logging interface, `Logger`, remains. Tendermint still provides a default logger backed by the performant zerolog logger. (@alexanderbez)
- [libs/time] \#6495 Move types/time to libs/time to improve consistency. (@tychoish)
- [mempool] \#6529 The `Context` field has been removed from the `TxInfo` type. `CheckTx` now requires a `Context` argument. (@alexanderbez)
- [abci/client, proxy] \#5673 `Async` funcs return an error, `Sync` and `Async` funcs accept `context.Context` (@melekes)
- [p2p] Remove unused function `MakePoWTarget`. (@erikgrinaker)
- [libs/bits] \#5720 Validate `BitArray` in `FromProto`, which now returns an error (@melekes)
- [proto/p2p] Rename `DefaultNodeInfo` and `DefaultNodeInfoOther` to `NodeInfo` and `NodeInfoOther` (@erikgrinaker)
- [proto/p2p] Rename `NodeInfo.default_node_id` to `node_id` (@erikgrinaker)
- [libs/os] Kill() and {Must,}{Read,Write}File() functions have been removed. (@alessio)
- [store] \#5848 Remove block store state in favor of using the db iterators directly (@cmwaters)
- [state] \#5864 Use an iterator when pruning state (@cmwaters)
- [types] \#6023 Remove `tm2pb.Header`, `tm2pb.BlockID`, `tm2pb.PartSetHeader` and `tm2pb.NewValidatorUpdate`.
- Each of the above types has a `ToProto` and `FromProto` method or function which replaced this logic.
- [light] \#6054 Move `MaxRetryAttempt` option from client to provider.
- `NewWithOptions` now sets the max retry attempts and timeouts (@cmwaters)
- [all] \#6077 Change spelling from British English to American (@cmwaters)
- Rename "Subscription.Cancelled()" to "Subscription.Canceled()" in libs/pubsub
- Rename "behaviour" pkg to "behavior" and internalized it in blockchain v2
- [rpc/client/http] \#6176 Remove `endpoint` arg from `New`, `NewWithTimeout` and `NewWithClient` (@melekes)
- [rpc/client/http] \#6176 Unexpose `WSEvents` (@melekes)
- [rpc/jsonrpc/client/ws_client] \#6176 `NewWS` no longer accepts options (use `NewWSWithOptions` and `OnReconnect` funcs to configure the client) (@melekes)
- [internal/libs] \#6366 Move `autofile`, `clist`,`fail`,`flowrate`, `protoio`, `sync`, `tempfile`, `test` and `timer` lib packages to an internal folder
- [libs/rand] \#6364 Remove most of libs/rand in favour of standard lib's `math/rand` (@liamsi)
- [mempool] \#6466 The original mempool reactor has been versioned as `v0` and moved to a sub-package under the root `mempool` package.
Some core types have been kept in the `mempool` package such as `TxCache` and its implementations, the `Mempool` interface itself
and `TxInfo`. (@alexanderbez)
- [crypto/sr25519] \#6526 Do not re-execute the Ed25519-style key derivation step when doing signing and verification. The derivation is now done once and only once. This breaks `sr25519.GenPrivKeyFromSecret` output compatibility. (@Yawning)
- [types] \#6627 Move `NodeKey` to types to make the type public.
- [config] \#6627 Extend `config` to contain methods `LoadNodeKeyID` and `LoadorGenNodeKeyID`
- [blocksync] \#6755 Rename `FastSync` and `Blockchain` package to `BlockSync`
(@cmwaters)
- Blockchain Protocol
- Data Storage
- [store/state/evidence/light] \#5771 Use an order-preserving varint key encoding (@cmwaters)
- [mempool] \#6396 Remove mempool's write ahead log (WAL), (previously unused by the tendermint code). (@tychoish)
- [state] \#6541 Move pruneBlocks from consensus/state to state/execution. (@JayT106)
- Tooling
- [tools] \#6498 Set OS home dir instead of the hardcoded PATH. (@JayT106)
- [cli/indexer] \#6676 Reindex events command line tooling. (@JayT106)
### FEATURES
- [config] Add `--mode` flag and config variable. See [ADR-52](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-052-tendermint-mode.md) @dongsam
- [rpc] \#6329 Don't cap page size in unsafe mode (@gotjoshua, @cmwaters)
- [pex] \#6305 v2 pex reactor with backwards compatibility. Introduces two new pex messages to
accommodate the new p2p stack. Removes the notion of seeds and crawling. All peer
exchange reactors behave the same. (@cmwaters)
- [crypto] \#6376 Enable sr25519 as a validator key
- [mempool] \#6466 Introduction of a prioritized mempool. (@alexanderbez)
- `Priority` and `Sender` have been introduced into the `ResponseCheckTx` type, where the `priority` will determine the prioritization of
the transaction when a proposer reaps transactions for a block proposal. The `sender` field acts as an index.
- Operators may toggle between the legacy mempool reactor, `v0`, and the new prioritized reactor, `v1`, by setting the
`mempool.version` configuration, where `v1` is the default configuration.
- Applications that do not specify a priority, i.e. zero, will have transactions reaped by the order in which they are received by the node.
- Transactions are gossiped in FIFO order as they are in `v0`.
- [config/indexer] \#6411 Introduce support for custom event indexing data sources, specifically PostgreSQL. (@JayT106)
- [fastsync/event] \#6619 Emit fastsync status event when switching consensus/fastsync (@JayT106)
- [statesync/event] \#6700 Emit statesync status start/end event (@JayT106)
### IMPROVEMENTS
- [libs/log] Console log formatting changes as a result of \#6534 and \#6589. (@tychoish)
- [statesync] \#6566 Allow state sync fetchers and request timeout to be configurable. (@alexanderbez)
- [types] \#6478 Add `block_id` to `newblock` event (@jeebster)
- [crypto/ed25519] \#5632 Adopt zip215 `ed25519` verification. (@marbar3778)
- [crypto/ed25519] \#6526 Use [curve25519-voi](https://github.com/oasisprotocol/curve25519-voi) for `ed25519` signing and verification. (@Yawning)
- [crypto/sr25519] \#6526 Use [curve25519-voi](https://github.com/oasisprotocol/curve25519-voi) for `sr25519` signing and verification. (@Yawning)
- [privval] \#5603 Add `--key` to `init`, `gen_validator`, `testnet` & `unsafe_reset_priv_validator` for use in generating `secp256k1` keys.
- [privval] \#5725 Add gRPC support to private validator.
- [privval] \#5876 `tendermint show-validator` will query the remote signer if gRPC is being used (@marbar3778)
- [abci/client] \#5673 `Async` requests return an error if queue is full (@melekes)
- [mempool] \#5673 Cancel `CheckTx` requests if RPC client disconnects or times out (@melekes)
- [abci] \#5706 Added `AbciVersion` to `RequestInfo` allowing applications to check ABCI version when connecting to Tendermint. (@marbar3778)
- [blockchain/v1] \#5728 Remove in favor of v2 (@melekes)
- [blockchain/v0] \#5741 Relax termination conditions and increase sync timeout (@melekes)
- [cli] \#5772 `gen_node_key` output now contains node ID (`id` field) (@melekes)
- [blockchain/v2] \#5774 Send status request when new peer joins (@melekes)
- [consensus] \#5792 Deprecates the `time_iota_ms` consensus parameter, to reduce the bug surface. The parameter is no longer used. (@valardragon)
- [store] \#5888 store.SaveBlock saves using batches instead of transactions for now to improve ACID properties. This is a quick fix for underlying issues around tm-db and ACID guarantees. (@githubsands)
- [consensus] \#5987 Remove `time_iota_ms` from consensus params. Merge `tmproto.ConsensusParams` and `abci.ConsensusParams`. (@marbar3778)
- [types] \#5994 Reduce the use of protobuf types in core logic. (@marbar3778)
- `ConsensusParams`, `BlockParams`, `ValidatorParams`, `EvidenceParams`, `VersionParams`, `sm.Version` and `version.Consensus` have become native types. They still utilize protobuf when being sent over the wire or written to disk.
- [rpc/client/http] \#6163 Do not drop events even if the `out` channel is full (@melekes)
- [node] \#6059 Validate and complete genesis doc before saving to state store (@silasdavis)
- [state] \#6067 Batch save state data (@githubsands & @cmwaters)
- [crypto] \#6120 Implement batch verification interface for ed25519 and sr25519. (@marbar3778)
- [types] \#6120 use batch verification for verifying commits signatures.
- If the key type supports the batch verification API it will try to batch verify. If the verification fails we will single verify each signature.
- [privval/file] \#6185 Return error on `LoadFilePV`, `LoadFilePVEmptyState`. Allows for better programmatic control of Tendermint.
- [privval] \#6240 Add `context.Context` to privval interface.
- [rpc] \#6265 set cache control in http-rpc response header (@JayT106)
- [statesync] \#6378 Retry requests for snapshots and add a minimum discovery time (5s) for new snapshots.
- [node/state] \#6370 graceful shutdown in the consensus reactor (@JayT106)
- [crypto/merkle] \#6443 Improve HashAlternatives performance (@cuonglm)
- [crypto/merkle] \#6513 Optimize HashAlternatives (@marbar3778)
- [p2p/pex] \#6509 Improve addrBook.hash performance (@cuonglm)
- [consensus/metrics] \#6549 Change block_size gauge to a histogram for better observability over time (@marbar3778)
- [statesync] \#6587 Increase chunk priority and re-request chunks that don't arrive (@cmwaters)
- [state/privval] \#6578 No GetPubKey retry beyond the proposal/voting window (@JayT106)
- [rpc] \#6615 Add TotalGasUsed to block_results response (@crypto-facs)
- [cmd/tendermint/commands] \#6623 replace `$HOME/.some/test/dir` with `t.TempDir` (@tanyabouman)
### BUG FIXES
- [privval] \#5638 Increase read/write timeout to 5s and calculate ping interval based on it (@JoeKash)
- [blockchain/v1] [\#5701](https://github.com/tendermint/tendermint/pull/5701) Handle peers without blocks (@melekes)
- [blockchain/v1] \#5711 Fix deadlock (@melekes)
- [evidence] \#6375 Fix bug with inconsistent LightClientAttackEvidence hashing (cmwaters)
- [rpc] \#6507 Ensure RPC client can handle URLs without ports (@JayT106)
- [statesync] \#6463 Adds Reverse Sync feature to fetch historical light blocks after state sync in order to verify any evidence (@cmwaters)
- [fastsync] \#6590 Update the metrics during fast-sync (@JayT106)
- [gitignore] \#6668 Fix gitignore of abci-cli (@tanyabouman)


@@ -328,7 +328,6 @@ If there were no release candidates, begin by creating a backport branch, as des
    - Bump TMVersionDefault version in `version.go`
    - Bump P2P and block protocol versions in `version.go`, if necessary
    - Bump ABCI protocol version in `version.go`, if necessary
-   - Add any release notes you would like to be added to the body of the release to `release_notes.md`.
 4. Open a PR with these changes against the backport branch.
 5. Once these changes are on the backport branch, push a tag with prepared release details.
    This will trigger the actual release `v0.35.0`.
@@ -355,7 +354,6 @@ To create a minor release:
    - Bump the ABCI version number, if necessary.
      (Note that ABCI follows semver, and that ABCI versions are the only versions
      which can change during minor releases, and only field additions are valid minor changes.)
-   - Add any release notes you would like to be added to the body of the release to `release_notes.md`.
 4. Open a PR with these changes that will land them back on `v0.35.x`
 5. Once this change has landed on the backport branch, make sure to pull it locally, then push a tag.
    - `git tag -a v0.35.1 -m 'Release v0.35.1'`


@@ -82,32 +82,12 @@ and familiarize yourself with our
 Tendermint uses [Semantic Versioning](http://semver.org/) to determine when and how the version changes.
 According to SemVer, anything in the public API can change at any time before version 1.0.0
-To provide some stability to Tendermint users in these 0.X.X days, the MINOR version is used
-to signal breaking changes across a subset of the total public API. This subset includes all
-interfaces exposed to other processes (cli, rpc, p2p, etc.), but does not
-include the Go APIs.
+To provide some stability to users of 0.X.X versions of Tendermint, the MINOR version is used
+to signal breaking changes across Tendermint's API. This API includes all
+publicly exposed types, functions, and methods in non-internal Go packages as well as
+the types and methods accessible via the Tendermint RPC interface.
-That said, breaking changes in the following packages will be documented in the
-CHANGELOG even if they don't lead to MINOR version bumps:
-- crypto
-- config
-- libs
-  - bits
-  - bytes
-  - json
-  - log
-  - math
-  - net
-  - os
-  - protoio
-  - rand
-  - sync
-  - strings
-  - service
-- node
-- rpc/client
-- types
+Breaking changes to these public APIs will be documented in the CHANGELOG.
 ### Upgrades


@@ -2,7 +2,7 @@
 This guide provides instructions for upgrading to specific versions of Tendermint Core.
-## Unreleased
+## v0.35
 ### ABCI Changes
@@ -17,7 +17,16 @@ This guide provides instructions for upgrading to specific versions of Tendermin
 ### Config Changes
-* `fast_sync = "v1"` and `fast_sync = "v2"` are no longer supported. Please use `v0` instead.
+* The configuration file field `[fastsync]` has been renamed to `[blocksync]`.
+* The top level configuration file field `fast-sync` has moved under the new `[blocksync]`
+  field as `blocksync.enable`.
+* `blocksync.version = "v1"` and `blocksync.version = "v2"` (previously `fastsync`)
+  are no longer supported. Please use `v0` instead. During the v0.35 release cycle, `v0` was
+  determined to suit the existing needs and the cost of maintaining the `v1` and `v2` modules
+  was determined to be greater than necessary.
 * All config parameters are now hyphen-case (also known as kebab-case) instead of snake_case. Before restarting the node make sure
   you have updated all the variables in your `config.toml` file.
@@ -35,7 +44,7 @@ This guide provides instructions for upgrading to specific versions of Tendermin
 * The fast sync process as well as the blockchain package and service has all
   been renamed to block sync
-### Key Format Changes
+### Database Key Format Changes
 The format of all tendermint on-disk database keys changes in
 0.35. Upgrading nodes must either re-sync all data or run a migration
@@ -60,6 +69,8 @@ if needed.
* You must now specify the node mode (validator|full|seed) in `tendermint init [mode]`
* The `--fast-sync` command line option has been renamed to `--blocksync.enable`
* If you had previously used `tendermint gen_node_key` to generate a new node
key, keep in mind that it no longer saves the output to a file. You can use
`tendermint init validator` or pipe the output of `tendermint gen_node_key` to
@@ -74,8 +85,8 @@ if needed.
### API Changes
The p2p layer was reimplemented as part of the 0.35 release cycle, and
all reactors were refactored. As part of that work these
The p2p layer was reimplemented as part of the 0.35 release cycle and
all reactors were refactored to accommodate the change. As part of that work these
implementations moved into the `internal` package and are no longer
considered part of the public Go API of tendermint. These packages
are:
@@ -98,13 +109,11 @@ will need to change to accommodate these changes. Most notably:
longer exported and have been replaced with `node.New` and
`node.NewDefault`, which provide more functional interfaces.
### RPC changes
#### gRPC Support
### gRPC Support
gRPC support in the RPC layer is deprecated and will be removed in 0.36.
#### Peer Management Interface
### Peer Management Interface
When running with the new P2P layer, the `UnsafeDialSeeds` and
`UnsafeDialPeers` RPC methods will always return an error. They are
@@ -116,6 +125,58 @@ method changes in this release to accommodate the different way that
the new stack tracks data about peers. This change affects users of
both stacks.
### Using the updated p2p library
The P2P library was reimplemented in this release, and the new implementation is
enabled by default in this version of Tendermint. The legacy implementation is still
included as a backstop to work around unforeseen production issues. The new and
legacy versions are interoperable. If necessary, you can enable the legacy
implementation in the server configuration file.
To make use of the legacy P2P implementation, add or update the following field
under the `[p2p]` section of your server's configuration file:
```toml
[p2p]
...
use-legacy = true
...
```
If you need to do this, please consider filing an issue in the Tendermint repository
to let us know why. We plan to remove the legacy P2P code in the next (v0.36) release.
#### New p2p queue types
The new p2p implementation enables selection of the queue type used for
passing messages between peers.
The following values may be used:
* `fifo`: (**default**) An unbuffered and lossless queue that passes messages through
in the order in which they were received.
* `priority`: A priority queue of messages.
* `wdrr`: A queue implementing the Weighted Deficit Round Robin algorithm. A
weighted deficit round robin queue is created per peer. Each queue contains a
separate 'flow' for each of the channels of communication that exist between any two
peers. Tendermint maintains a channel per message type between peers. Each WDRR
queue maintains a shared buffer with a fixed capacity through which messages on different
flows are passed. A minimal sketch of the scheduling idea appears after this list.
For more information on WDRR scheduling, see: https://en.wikipedia.org/wiki/Deficit_round_robin
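For intuition only, here is a minimal, self-contained Go sketch of the deficit round robin idea. It is not Tendermint's implementation; every name in it is invented for illustration.
```go
package main

import "fmt"

// msg is a queued message with a size in bytes.
type msg struct {
	payload string
	size    int
}

// flow is one per-channel queue with its own weight and deficit counter.
type flow struct {
	quantum int   // bytes of credit added each round (the flow's weight)
	deficit int   // accumulated sending credit
	queue   []msg // pending messages for this flow
}

// drrRound performs one scheduling pass: each backlogged flow earns its
// quantum, then sends messages until its credit or its queue runs out.
func drrRound(flows []*flow) []msg {
	var sent []msg
	for _, f := range flows {
		if len(f.queue) == 0 {
			continue
		}
		f.deficit += f.quantum
		for len(f.queue) > 0 && f.queue[0].size <= f.deficit {
			m := f.queue[0]
			f.queue = f.queue[1:]
			f.deficit -= m.size
			sent = append(sent, m)
		}
		if len(f.queue) == 0 {
			f.deficit = 0 // classic DRR: an emptied flow forfeits leftover credit
		}
	}
	return sent
}

func main() {
	flows := []*flow{
		{quantum: 100, queue: []msg{{"consensus vote", 80}, {"consensus vote", 80}}},
		{quantum: 50, queue: []msg{{"mempool tx", 40}}},
	}
	for round := 1; round <= 2; round++ {
		for _, m := range drrRound(flows) {
			fmt.Printf("round %d: sent %q (%d bytes)\n", round, m.payload, m.size)
		}
	}
}
```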
To select a queue type, add or update the following field under the `[p2p]`
section of your server's configuration file.
```toml
[p2p]
...
queue-type = "wdrr"
...
```
### Support for Custom Reactor and Mempool Implementations
The changes to the p2p layer removed existing support for custom

View File

@@ -5,9 +5,9 @@ import (
"fmt"
"sync"
"github.com/tendermint/tendermint/abci/types"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/pkg/abci"
)
const (
@@ -34,34 +34,34 @@ type Client interface {
// Asynchronous requests
FlushAsync(context.Context) (*ReqRes, error)
EchoAsync(ctx context.Context, msg string) (*ReqRes, error)
InfoAsync(context.Context, abci.RequestInfo) (*ReqRes, error)
DeliverTxAsync(context.Context, abci.RequestDeliverTx) (*ReqRes, error)
CheckTxAsync(context.Context, abci.RequestCheckTx) (*ReqRes, error)
QueryAsync(context.Context, abci.RequestQuery) (*ReqRes, error)
InfoAsync(context.Context, types.RequestInfo) (*ReqRes, error)
DeliverTxAsync(context.Context, types.RequestDeliverTx) (*ReqRes, error)
CheckTxAsync(context.Context, types.RequestCheckTx) (*ReqRes, error)
QueryAsync(context.Context, types.RequestQuery) (*ReqRes, error)
CommitAsync(context.Context) (*ReqRes, error)
InitChainAsync(context.Context, abci.RequestInitChain) (*ReqRes, error)
BeginBlockAsync(context.Context, abci.RequestBeginBlock) (*ReqRes, error)
EndBlockAsync(context.Context, abci.RequestEndBlock) (*ReqRes, error)
ListSnapshotsAsync(context.Context, abci.RequestListSnapshots) (*ReqRes, error)
OfferSnapshotAsync(context.Context, abci.RequestOfferSnapshot) (*ReqRes, error)
LoadSnapshotChunkAsync(context.Context, abci.RequestLoadSnapshotChunk) (*ReqRes, error)
ApplySnapshotChunkAsync(context.Context, abci.RequestApplySnapshotChunk) (*ReqRes, error)
InitChainAsync(context.Context, types.RequestInitChain) (*ReqRes, error)
BeginBlockAsync(context.Context, types.RequestBeginBlock) (*ReqRes, error)
EndBlockAsync(context.Context, types.RequestEndBlock) (*ReqRes, error)
ListSnapshotsAsync(context.Context, types.RequestListSnapshots) (*ReqRes, error)
OfferSnapshotAsync(context.Context, types.RequestOfferSnapshot) (*ReqRes, error)
LoadSnapshotChunkAsync(context.Context, types.RequestLoadSnapshotChunk) (*ReqRes, error)
ApplySnapshotChunkAsync(context.Context, types.RequestApplySnapshotChunk) (*ReqRes, error)
// Synchronous requests
FlushSync(context.Context) error
EchoSync(ctx context.Context, msg string) (*abci.ResponseEcho, error)
InfoSync(context.Context, abci.RequestInfo) (*abci.ResponseInfo, error)
DeliverTxSync(context.Context, abci.RequestDeliverTx) (*abci.ResponseDeliverTx, error)
CheckTxSync(context.Context, abci.RequestCheckTx) (*abci.ResponseCheckTx, error)
QuerySync(context.Context, abci.RequestQuery) (*abci.ResponseQuery, error)
CommitSync(context.Context) (*abci.ResponseCommit, error)
InitChainSync(context.Context, abci.RequestInitChain) (*abci.ResponseInitChain, error)
BeginBlockSync(context.Context, abci.RequestBeginBlock) (*abci.ResponseBeginBlock, error)
EndBlockSync(context.Context, abci.RequestEndBlock) (*abci.ResponseEndBlock, error)
ListSnapshotsSync(context.Context, abci.RequestListSnapshots) (*abci.ResponseListSnapshots, error)
OfferSnapshotSync(context.Context, abci.RequestOfferSnapshot) (*abci.ResponseOfferSnapshot, error)
LoadSnapshotChunkSync(context.Context, abci.RequestLoadSnapshotChunk) (*abci.ResponseLoadSnapshotChunk, error)
ApplySnapshotChunkSync(context.Context, abci.RequestApplySnapshotChunk) (*abci.ResponseApplySnapshotChunk, error)
EchoSync(ctx context.Context, msg string) (*types.ResponseEcho, error)
InfoSync(context.Context, types.RequestInfo) (*types.ResponseInfo, error)
DeliverTxSync(context.Context, types.RequestDeliverTx) (*types.ResponseDeliverTx, error)
CheckTxSync(context.Context, types.RequestCheckTx) (*types.ResponseCheckTx, error)
QuerySync(context.Context, types.RequestQuery) (*types.ResponseQuery, error)
CommitSync(context.Context) (*types.ResponseCommit, error)
InitChainSync(context.Context, types.RequestInitChain) (*types.ResponseInitChain, error)
BeginBlockSync(context.Context, types.RequestBeginBlock) (*types.ResponseBeginBlock, error)
EndBlockSync(context.Context, types.RequestEndBlock) (*types.ResponseEndBlock, error)
ListSnapshotsSync(context.Context, types.RequestListSnapshots) (*types.ResponseListSnapshots, error)
OfferSnapshotSync(context.Context, types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error)
LoadSnapshotChunkSync(context.Context, types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error)
ApplySnapshotChunkSync(context.Context, types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error)
}
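For orientation, a hypothetical sketch of driving this interface from caller code. It assumes the post-change `types` alias for `abci/types` shown in the imports, that `NewClient` is used as defined below, and that the returned client (a service) is started before issuing requests; the address and transport are invented and it is not part of the diff.
```go
package main

import (
	"context"
	"fmt"

	abcicli "github.com/tendermint/tendermint/abci/client"
	types "github.com/tendermint/tendermint/abci/types"
)

func main() {
	// Hypothetical sketch, not part of the diff. Error handling is trimmed,
	// and the client would also need to be started before use.
	client, err := abcicli.NewClient("tcp://127.0.0.1:26658", "socket", true)
	if err != nil {
		panic(err)
	}
	// Sync calls block until the response has been ordered through the client.
	res, err := client.CheckTxSync(context.Background(), types.RequestCheckTx{Tx: []byte("tx0")})
	if err != nil {
		panic(err)
	}
	fmt.Println("CheckTx code:", res.Code)
}
```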
//----------------------------------------
@@ -80,19 +80,19 @@ func NewClient(addr, transport string, mustConnect bool) (client Client, err err
return
}
type Callback func(*abci.Request, *abci.Response)
type Callback func(*types.Request, *types.Response)
type ReqRes struct {
*abci.Request
*types.Request
*sync.WaitGroup
*abci.Response // Not set atomically, so be sure to use WaitGroup.
*types.Response // Not set atomically, so be sure to use WaitGroup.
mtx tmsync.RWMutex
done bool // Gets set to true once *after* WaitGroup.Done().
cb func(*abci.Response) // A single callback that may be set.
done bool // Gets set to true once *after* WaitGroup.Done().
cb func(*types.Response) // A single callback that may be set.
}
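Because the embedded `Response` is not set atomically (see the comment above), an async caller must either register a callback or wait on the embedded `WaitGroup` before reading it. A hypothetical sketch, not part of the diff (imports as in the previous example):
```go
// checkTxAndWait issues an async CheckTx and blocks until the response
// has been populated.
func checkTxAndWait(ctx context.Context, client abcicli.Client, tx []byte) (*types.Response, error) {
	reqres, err := client.CheckTxAsync(ctx, types.RequestCheckTx{Tx: tx})
	if err != nil {
		return nil, err
	}
	reqres.Wait()               // embedded *sync.WaitGroup; Done() fires when the response lands
	return reqres.Response, nil // safe to read only after Wait() (or inside SetCallback)
}
```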
func NewReqRes(req *abci.Request) *ReqRes {
func NewReqRes(req *types.Request) *ReqRes {
return &ReqRes{
Request: req,
WaitGroup: waitGroup1(),
@@ -106,7 +106,7 @@ func NewReqRes(req *abci.Request) *ReqRes {
// SetCallback sets the callback. If reqRes is already done, it will call the cb
// immediately. Note, reqRes.cb should not change if reqRes.done and only one
// callback is supported.
func (r *ReqRes) SetCallback(cb func(res *abci.Response)) {
func (r *ReqRes) SetCallback(cb func(res *types.Response)) {
r.mtx.Lock()
if r.done {
@@ -136,7 +136,7 @@ func (r *ReqRes) InvokeCallback() {
// will invoke the callback twice and create a potential race condition.
//
// ref: https://github.com/tendermint/tendermint/issues/5439
func (r *ReqRes) GetCallback() func(*abci.Response) {
func (r *ReqRes) GetCallback() func(*types.Response) {
r.mtx.RLock()
defer r.mtx.RUnlock()
return r.cb

View File

@@ -9,10 +9,10 @@ import (
"google.golang.org/grpc"
"github.com/tendermint/tendermint/abci/types"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
tmnet "github.com/tendermint/tendermint/libs/net"
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/pkg/abci"
)
// A gRPC client.
@@ -20,14 +20,14 @@ type grpcClient struct {
service.BaseService
mustConnect bool
client abci.ABCIApplicationClient
client types.ABCIApplicationClient
conn *grpc.ClientConn
chReqRes chan *ReqRes // dispatches "async" responses to callbacks *in order*, needed by mempool
mtx tmsync.RWMutex
addr string
err error
resCb func(*abci.Request, *abci.Response) // listens to all callbacks
resCb func(*types.Request, *types.Response) // listens to all callbacks
}
var _ Client = (*grpcClient)(nil)
@@ -106,12 +106,12 @@ RETRY_LOOP:
}
cli.Logger.Info("Dialed server. Waiting for echo.", "addr", cli.addr)
client := abci.NewABCIApplicationClient(conn)
client := types.NewABCIApplicationClient(conn)
cli.conn = conn
ENSURE_CONNECTED:
for {
_, err := client.Echo(context.Background(), &abci.RequestEcho{Message: "hello"}, grpc.WaitForReady(true))
_, err := client.Echo(context.Background(), &types.RequestEcho{Message: "hello"}, grpc.WaitForReady(true))
if err == nil {
break ENSURE_CONNECTED
}
@@ -166,143 +166,143 @@ func (cli *grpcClient) SetResponseCallback(resCb Callback) {
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) EchoAsync(ctx context.Context, msg string) (*ReqRes, error) {
req := abci.ToRequestEcho(msg)
req := types.ToRequestEcho(msg)
res, err := cli.client.Echo(ctx, req.GetEcho(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &abci.Response{Value: &abci.Response_Echo{Echo: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Echo{Echo: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
req := abci.ToRequestFlush()
req := types.ToRequestFlush()
res, err := cli.client.Flush(ctx, req.GetFlush(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &abci.Response{Value: &abci.Response_Flush{Flush: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Flush{Flush: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) InfoAsync(ctx context.Context, params abci.RequestInfo) (*ReqRes, error) {
req := abci.ToRequestInfo(params)
func (cli *grpcClient) InfoAsync(ctx context.Context, params types.RequestInfo) (*ReqRes, error) {
req := types.ToRequestInfo(params)
res, err := cli.client.Info(ctx, req.GetInfo(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &abci.Response{Value: &abci.Response_Info{Info: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Info{Info: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) DeliverTxAsync(ctx context.Context, params abci.RequestDeliverTx) (*ReqRes, error) {
req := abci.ToRequestDeliverTx(params)
func (cli *grpcClient) DeliverTxAsync(ctx context.Context, params types.RequestDeliverTx) (*ReqRes, error) {
req := types.ToRequestDeliverTx(params)
res, err := cli.client.DeliverTx(ctx, req.GetDeliverTx(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &abci.Response{Value: &abci.Response_DeliverTx{DeliverTx: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_DeliverTx{DeliverTx: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) CheckTxAsync(ctx context.Context, params abci.RequestCheckTx) (*ReqRes, error) {
req := abci.ToRequestCheckTx(params)
func (cli *grpcClient) CheckTxAsync(ctx context.Context, params types.RequestCheckTx) (*ReqRes, error) {
req := types.ToRequestCheckTx(params)
res, err := cli.client.CheckTx(ctx, req.GetCheckTx(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &abci.Response{Value: &abci.Response_CheckTx{CheckTx: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_CheckTx{CheckTx: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) QueryAsync(ctx context.Context, params abci.RequestQuery) (*ReqRes, error) {
req := abci.ToRequestQuery(params)
func (cli *grpcClient) QueryAsync(ctx context.Context, params types.RequestQuery) (*ReqRes, error) {
req := types.ToRequestQuery(params)
res, err := cli.client.Query(ctx, req.GetQuery(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &abci.Response{Value: &abci.Response_Query{Query: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Query{Query: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) CommitAsync(ctx context.Context) (*ReqRes, error) {
req := abci.ToRequestCommit()
req := types.ToRequestCommit()
res, err := cli.client.Commit(ctx, req.GetCommit(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &abci.Response{Value: &abci.Response_Commit{Commit: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Commit{Commit: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) InitChainAsync(ctx context.Context, params abci.RequestInitChain) (*ReqRes, error) {
req := abci.ToRequestInitChain(params)
func (cli *grpcClient) InitChainAsync(ctx context.Context, params types.RequestInitChain) (*ReqRes, error) {
req := types.ToRequestInitChain(params)
res, err := cli.client.InitChain(ctx, req.GetInitChain(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &abci.Response{Value: &abci.Response_InitChain{InitChain: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_InitChain{InitChain: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) BeginBlockAsync(ctx context.Context, params abci.RequestBeginBlock) (*ReqRes, error) {
req := abci.ToRequestBeginBlock(params)
func (cli *grpcClient) BeginBlockAsync(ctx context.Context, params types.RequestBeginBlock) (*ReqRes, error) {
req := types.ToRequestBeginBlock(params)
res, err := cli.client.BeginBlock(ctx, req.GetBeginBlock(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &abci.Response{Value: &abci.Response_BeginBlock{BeginBlock: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_BeginBlock{BeginBlock: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) EndBlockAsync(ctx context.Context, params abci.RequestEndBlock) (*ReqRes, error) {
req := abci.ToRequestEndBlock(params)
func (cli *grpcClient) EndBlockAsync(ctx context.Context, params types.RequestEndBlock) (*ReqRes, error) {
req := types.ToRequestEndBlock(params)
res, err := cli.client.EndBlock(ctx, req.GetEndBlock(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &abci.Response{Value: &abci.Response_EndBlock{EndBlock: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_EndBlock{EndBlock: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) ListSnapshotsAsync(ctx context.Context, params abci.RequestListSnapshots) (*ReqRes, error) {
req := abci.ToRequestListSnapshots(params)
func (cli *grpcClient) ListSnapshotsAsync(ctx context.Context, params types.RequestListSnapshots) (*ReqRes, error) {
req := types.ToRequestListSnapshots(params)
res, err := cli.client.ListSnapshots(ctx, req.GetListSnapshots(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &abci.Response{Value: &abci.Response_ListSnapshots{ListSnapshots: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_ListSnapshots{ListSnapshots: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) OfferSnapshotAsync(ctx context.Context, params abci.RequestOfferSnapshot) (*ReqRes, error) {
req := abci.ToRequestOfferSnapshot(params)
func (cli *grpcClient) OfferSnapshotAsync(ctx context.Context, params types.RequestOfferSnapshot) (*ReqRes, error) {
req := types.ToRequestOfferSnapshot(params)
res, err := cli.client.OfferSnapshot(ctx, req.GetOfferSnapshot(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &abci.Response{Value: &abci.Response_OfferSnapshot{OfferSnapshot: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_OfferSnapshot{OfferSnapshot: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) LoadSnapshotChunkAsync(
ctx context.Context,
params abci.RequestLoadSnapshotChunk,
params types.RequestLoadSnapshotChunk,
) (*ReqRes, error) {
req := abci.ToRequestLoadSnapshotChunk(params)
req := types.ToRequestLoadSnapshotChunk(params)
res, err := cli.client.LoadSnapshotChunk(ctx, req.GetLoadSnapshotChunk(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &abci.Response{Value: &abci.Response_LoadSnapshotChunk{LoadSnapshotChunk: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_LoadSnapshotChunk{LoadSnapshotChunk: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) ApplySnapshotChunkAsync(
ctx context.Context,
params abci.RequestApplySnapshotChunk,
params types.RequestApplySnapshotChunk,
) (*ReqRes, error) {
req := abci.ToRequestApplySnapshotChunk(params)
req := types.ToRequestApplySnapshotChunk(params)
res, err := cli.client.ApplySnapshotChunk(ctx, req.GetApplySnapshotChunk(), grpc.WaitForReady(true))
if err != nil {
return nil, err
@@ -310,13 +310,13 @@ func (cli *grpcClient) ApplySnapshotChunkAsync(
return cli.finishAsyncCall(
ctx,
req,
&abci.Response{Value: &abci.Response_ApplySnapshotChunk{ApplySnapshotChunk: res}},
&types.Response{Value: &types.Response_ApplySnapshotChunk{ApplySnapshotChunk: res}},
)
}
// finishAsyncCall creates a ReqRes for an async call, and immediately populates it
// with the response. We don't complete it until it's been ordered via the channel.
func (cli *grpcClient) finishAsyncCall(ctx context.Context, req *abci.Request, res *abci.Response) (*ReqRes, error) {
func (cli *grpcClient) finishAsyncCall(ctx context.Context, req *types.Request, res *types.Response) (*ReqRes, error) {
reqres := NewReqRes(req)
reqres.Response = res
select {
@@ -330,7 +330,7 @@ func (cli *grpcClient) finishAsyncCall(ctx context.Context, req *abci.Request, r
// finishSyncCall waits for an async call to complete. It is necessary to call all
// sync calls asynchronously as well, to maintain call and response ordering via
// the channel, and this method will wait until the async call completes.
func (cli *grpcClient) finishSyncCall(reqres *ReqRes) *abci.Response {
func (cli *grpcClient) finishSyncCall(reqres *ReqRes) *types.Response {
// It's possible that the callback is called twice, since the callback can
// be called immediately on SetCallback() in addition to after it has been
// set. This is because completing the ReqRes happens in a separate critical
@@ -346,8 +346,8 @@ func (cli *grpcClient) finishSyncCall(reqres *ReqRes) *abci.Response {
// ReqRes should really handle callback dispatch internally, to guarantee
// that it's only called once and avoid the above race conditions.
var once sync.Once
ch := make(chan *abci.Response, 1)
reqres.SetCallback(func(res *abci.Response) {
ch := make(chan *types.Response, 1)
reqres.SetCallback(func(res *types.Response) {
once.Do(func() {
ch <- res
})
@@ -361,7 +361,7 @@ func (cli *grpcClient) FlushSync(ctx context.Context) error {
return nil
}
func (cli *grpcClient) EchoSync(ctx context.Context, msg string) (*abci.ResponseEcho, error) {
func (cli *grpcClient) EchoSync(ctx context.Context, msg string) (*types.ResponseEcho, error) {
reqres, err := cli.EchoAsync(ctx, msg)
if err != nil {
return nil, err
@@ -371,8 +371,8 @@ func (cli *grpcClient) EchoSync(ctx context.Context, msg string) (*abci.Response
func (cli *grpcClient) InfoSync(
ctx context.Context,
req abci.RequestInfo,
) (*abci.ResponseInfo, error) {
req types.RequestInfo,
) (*types.ResponseInfo, error) {
reqres, err := cli.InfoAsync(ctx, req)
if err != nil {
return nil, err
@@ -382,8 +382,8 @@ func (cli *grpcClient) InfoSync(
func (cli *grpcClient) DeliverTxSync(
ctx context.Context,
params abci.RequestDeliverTx,
) (*abci.ResponseDeliverTx, error) {
params types.RequestDeliverTx,
) (*types.ResponseDeliverTx, error) {
reqres, err := cli.DeliverTxAsync(ctx, params)
if err != nil {
@@ -394,8 +394,8 @@ func (cli *grpcClient) DeliverTxSync(
func (cli *grpcClient) CheckTxSync(
ctx context.Context,
params abci.RequestCheckTx,
) (*abci.ResponseCheckTx, error) {
params types.RequestCheckTx,
) (*types.ResponseCheckTx, error) {
reqres, err := cli.CheckTxAsync(ctx, params)
if err != nil {
@@ -406,8 +406,8 @@ func (cli *grpcClient) CheckTxSync(
func (cli *grpcClient) QuerySync(
ctx context.Context,
req abci.RequestQuery,
) (*abci.ResponseQuery, error) {
req types.RequestQuery,
) (*types.ResponseQuery, error) {
reqres, err := cli.QueryAsync(ctx, req)
if err != nil {
return nil, err
@@ -415,7 +415,7 @@ func (cli *grpcClient) QuerySync(
return cli.finishSyncCall(reqres).GetQuery(), cli.Error()
}
func (cli *grpcClient) CommitSync(ctx context.Context) (*abci.ResponseCommit, error) {
func (cli *grpcClient) CommitSync(ctx context.Context) (*types.ResponseCommit, error) {
reqres, err := cli.CommitAsync(ctx)
if err != nil {
return nil, err
@@ -425,8 +425,8 @@ func (cli *grpcClient) CommitSync(ctx context.Context) (*abci.ResponseCommit, er
func (cli *grpcClient) InitChainSync(
ctx context.Context,
params abci.RequestInitChain,
) (*abci.ResponseInitChain, error) {
params types.RequestInitChain,
) (*types.ResponseInitChain, error) {
reqres, err := cli.InitChainAsync(ctx, params)
if err != nil {
@@ -437,8 +437,8 @@ func (cli *grpcClient) InitChainSync(
func (cli *grpcClient) BeginBlockSync(
ctx context.Context,
params abci.RequestBeginBlock,
) (*abci.ResponseBeginBlock, error) {
params types.RequestBeginBlock,
) (*types.ResponseBeginBlock, error) {
reqres, err := cli.BeginBlockAsync(ctx, params)
if err != nil {
@@ -449,8 +449,8 @@ func (cli *grpcClient) BeginBlockSync(
func (cli *grpcClient) EndBlockSync(
ctx context.Context,
params abci.RequestEndBlock,
) (*abci.ResponseEndBlock, error) {
params types.RequestEndBlock,
) (*types.ResponseEndBlock, error) {
reqres, err := cli.EndBlockAsync(ctx, params)
if err != nil {
@@ -461,8 +461,8 @@ func (cli *grpcClient) EndBlockSync(
func (cli *grpcClient) ListSnapshotsSync(
ctx context.Context,
params abci.RequestListSnapshots,
) (*abci.ResponseListSnapshots, error) {
params types.RequestListSnapshots,
) (*types.ResponseListSnapshots, error) {
reqres, err := cli.ListSnapshotsAsync(ctx, params)
if err != nil {
@@ -473,8 +473,8 @@ func (cli *grpcClient) ListSnapshotsSync(
func (cli *grpcClient) OfferSnapshotSync(
ctx context.Context,
params abci.RequestOfferSnapshot,
) (*abci.ResponseOfferSnapshot, error) {
params types.RequestOfferSnapshot,
) (*types.ResponseOfferSnapshot, error) {
reqres, err := cli.OfferSnapshotAsync(ctx, params)
if err != nil {
@@ -485,7 +485,7 @@ func (cli *grpcClient) OfferSnapshotSync(
func (cli *grpcClient) LoadSnapshotChunkSync(
ctx context.Context,
params abci.RequestLoadSnapshotChunk) (*abci.ResponseLoadSnapshotChunk, error) {
params types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
reqres, err := cli.LoadSnapshotChunkAsync(ctx, params)
if err != nil {
@@ -496,7 +496,7 @@ func (cli *grpcClient) LoadSnapshotChunkSync(
func (cli *grpcClient) ApplySnapshotChunkSync(
ctx context.Context,
params abci.RequestApplySnapshotChunk) (*abci.ResponseApplySnapshotChunk, error) {
params types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
reqres, err := cli.ApplySnapshotChunkAsync(ctx, params)
if err != nil {

View File

@@ -3,9 +3,9 @@ package abcicli
import (
"context"
types "github.com/tendermint/tendermint/abci/types"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/libs/service"
types "github.com/tendermint/tendermint/pkg/abci"
)
// NOTE: use defer to unlock mutex because Application might panic (e.g., in

View File

@@ -3,14 +3,15 @@
package mocks
import (
abcicli "github.com/tendermint/tendermint/abci/client"
abci "github.com/tendermint/tendermint/pkg/abci"
context "context"
abcicli "github.com/tendermint/tendermint/abci/client"
log "github.com/tendermint/tendermint/libs/log"
mock "github.com/stretchr/testify/mock"
types "github.com/tendermint/tendermint/abci/types"
)
// Client is an autogenerated mock type for the Client type
@@ -19,11 +20,11 @@ type Client struct {
}
// ApplySnapshotChunkAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) ApplySnapshotChunkAsync(_a0 context.Context, _a1 abci.RequestApplySnapshotChunk) (*abcicli.ReqRes, error) {
func (_m *Client) ApplySnapshotChunkAsync(_a0 context.Context, _a1 types.RequestApplySnapshotChunk) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestApplySnapshotChunk) *abcicli.ReqRes); ok {
if rf, ok := ret.Get(0).(func(context.Context, types.RequestApplySnapshotChunk) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
@@ -32,7 +33,7 @@ func (_m *Client) ApplySnapshotChunkAsync(_a0 context.Context, _a1 abci.RequestA
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestApplySnapshotChunk) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestApplySnapshotChunk) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -42,20 +43,20 @@ func (_m *Client) ApplySnapshotChunkAsync(_a0 context.Context, _a1 abci.RequestA
}
// ApplySnapshotChunkSync provides a mock function with given fields: _a0, _a1
func (_m *Client) ApplySnapshotChunkSync(_a0 context.Context, _a1 abci.RequestApplySnapshotChunk) (*abci.ResponseApplySnapshotChunk, error) {
func (_m *Client) ApplySnapshotChunkSync(_a0 context.Context, _a1 types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
ret := _m.Called(_a0, _a1)
var r0 *abci.ResponseApplySnapshotChunk
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestApplySnapshotChunk) *abci.ResponseApplySnapshotChunk); ok {
var r0 *types.ResponseApplySnapshotChunk
if rf, ok := ret.Get(0).(func(context.Context, types.RequestApplySnapshotChunk) *types.ResponseApplySnapshotChunk); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abci.ResponseApplySnapshotChunk)
r0 = ret.Get(0).(*types.ResponseApplySnapshotChunk)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestApplySnapshotChunk) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestApplySnapshotChunk) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -65,11 +66,11 @@ func (_m *Client) ApplySnapshotChunkSync(_a0 context.Context, _a1 abci.RequestAp
}
// BeginBlockAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) BeginBlockAsync(_a0 context.Context, _a1 abci.RequestBeginBlock) (*abcicli.ReqRes, error) {
func (_m *Client) BeginBlockAsync(_a0 context.Context, _a1 types.RequestBeginBlock) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestBeginBlock) *abcicli.ReqRes); ok {
if rf, ok := ret.Get(0).(func(context.Context, types.RequestBeginBlock) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
@@ -78,7 +79,7 @@ func (_m *Client) BeginBlockAsync(_a0 context.Context, _a1 abci.RequestBeginBloc
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestBeginBlock) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestBeginBlock) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -88,20 +89,20 @@ func (_m *Client) BeginBlockAsync(_a0 context.Context, _a1 abci.RequestBeginBloc
}
// BeginBlockSync provides a mock function with given fields: _a0, _a1
func (_m *Client) BeginBlockSync(_a0 context.Context, _a1 abci.RequestBeginBlock) (*abci.ResponseBeginBlock, error) {
func (_m *Client) BeginBlockSync(_a0 context.Context, _a1 types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
ret := _m.Called(_a0, _a1)
var r0 *abci.ResponseBeginBlock
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestBeginBlock) *abci.ResponseBeginBlock); ok {
var r0 *types.ResponseBeginBlock
if rf, ok := ret.Get(0).(func(context.Context, types.RequestBeginBlock) *types.ResponseBeginBlock); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abci.ResponseBeginBlock)
r0 = ret.Get(0).(*types.ResponseBeginBlock)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestBeginBlock) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestBeginBlock) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -111,11 +112,11 @@ func (_m *Client) BeginBlockSync(_a0 context.Context, _a1 abci.RequestBeginBlock
}
// CheckTxAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) CheckTxAsync(_a0 context.Context, _a1 abci.RequestCheckTx) (*abcicli.ReqRes, error) {
func (_m *Client) CheckTxAsync(_a0 context.Context, _a1 types.RequestCheckTx) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestCheckTx) *abcicli.ReqRes); ok {
if rf, ok := ret.Get(0).(func(context.Context, types.RequestCheckTx) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
@@ -124,7 +125,7 @@ func (_m *Client) CheckTxAsync(_a0 context.Context, _a1 abci.RequestCheckTx) (*a
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestCheckTx) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestCheckTx) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -134,20 +135,20 @@ func (_m *Client) CheckTxAsync(_a0 context.Context, _a1 abci.RequestCheckTx) (*a
}
// CheckTxSync provides a mock function with given fields: _a0, _a1
func (_m *Client) CheckTxSync(_a0 context.Context, _a1 abci.RequestCheckTx) (*abci.ResponseCheckTx, error) {
func (_m *Client) CheckTxSync(_a0 context.Context, _a1 types.RequestCheckTx) (*types.ResponseCheckTx, error) {
ret := _m.Called(_a0, _a1)
var r0 *abci.ResponseCheckTx
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestCheckTx) *abci.ResponseCheckTx); ok {
var r0 *types.ResponseCheckTx
if rf, ok := ret.Get(0).(func(context.Context, types.RequestCheckTx) *types.ResponseCheckTx); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abci.ResponseCheckTx)
r0 = ret.Get(0).(*types.ResponseCheckTx)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestCheckTx) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestCheckTx) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -180,15 +181,15 @@ func (_m *Client) CommitAsync(_a0 context.Context) (*abcicli.ReqRes, error) {
}
// CommitSync provides a mock function with given fields: _a0
func (_m *Client) CommitSync(_a0 context.Context) (*abci.ResponseCommit, error) {
func (_m *Client) CommitSync(_a0 context.Context) (*types.ResponseCommit, error) {
ret := _m.Called(_a0)
var r0 *abci.ResponseCommit
if rf, ok := ret.Get(0).(func(context.Context) *abci.ResponseCommit); ok {
var r0 *types.ResponseCommit
if rf, ok := ret.Get(0).(func(context.Context) *types.ResponseCommit); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abci.ResponseCommit)
r0 = ret.Get(0).(*types.ResponseCommit)
}
}
@@ -203,11 +204,11 @@ func (_m *Client) CommitSync(_a0 context.Context) (*abci.ResponseCommit, error)
}
// DeliverTxAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) DeliverTxAsync(_a0 context.Context, _a1 abci.RequestDeliverTx) (*abcicli.ReqRes, error) {
func (_m *Client) DeliverTxAsync(_a0 context.Context, _a1 types.RequestDeliverTx) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestDeliverTx) *abcicli.ReqRes); ok {
if rf, ok := ret.Get(0).(func(context.Context, types.RequestDeliverTx) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
@@ -216,7 +217,7 @@ func (_m *Client) DeliverTxAsync(_a0 context.Context, _a1 abci.RequestDeliverTx)
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestDeliverTx) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestDeliverTx) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -226,20 +227,20 @@ func (_m *Client) DeliverTxAsync(_a0 context.Context, _a1 abci.RequestDeliverTx)
}
// DeliverTxSync provides a mock function with given fields: _a0, _a1
func (_m *Client) DeliverTxSync(_a0 context.Context, _a1 abci.RequestDeliverTx) (*abci.ResponseDeliverTx, error) {
func (_m *Client) DeliverTxSync(_a0 context.Context, _a1 types.RequestDeliverTx) (*types.ResponseDeliverTx, error) {
ret := _m.Called(_a0, _a1)
var r0 *abci.ResponseDeliverTx
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestDeliverTx) *abci.ResponseDeliverTx); ok {
var r0 *types.ResponseDeliverTx
if rf, ok := ret.Get(0).(func(context.Context, types.RequestDeliverTx) *types.ResponseDeliverTx); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abci.ResponseDeliverTx)
r0 = ret.Get(0).(*types.ResponseDeliverTx)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestDeliverTx) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestDeliverTx) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -272,15 +273,15 @@ func (_m *Client) EchoAsync(ctx context.Context, msg string) (*abcicli.ReqRes, e
}
// EchoSync provides a mock function with given fields: ctx, msg
func (_m *Client) EchoSync(ctx context.Context, msg string) (*abci.ResponseEcho, error) {
func (_m *Client) EchoSync(ctx context.Context, msg string) (*types.ResponseEcho, error) {
ret := _m.Called(ctx, msg)
var r0 *abci.ResponseEcho
if rf, ok := ret.Get(0).(func(context.Context, string) *abci.ResponseEcho); ok {
var r0 *types.ResponseEcho
if rf, ok := ret.Get(0).(func(context.Context, string) *types.ResponseEcho); ok {
r0 = rf(ctx, msg)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abci.ResponseEcho)
r0 = ret.Get(0).(*types.ResponseEcho)
}
}
@@ -295,11 +296,11 @@ func (_m *Client) EchoSync(ctx context.Context, msg string) (*abci.ResponseEcho,
}
// EndBlockAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) EndBlockAsync(_a0 context.Context, _a1 abci.RequestEndBlock) (*abcicli.ReqRes, error) {
func (_m *Client) EndBlockAsync(_a0 context.Context, _a1 types.RequestEndBlock) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestEndBlock) *abcicli.ReqRes); ok {
if rf, ok := ret.Get(0).(func(context.Context, types.RequestEndBlock) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
@@ -308,7 +309,7 @@ func (_m *Client) EndBlockAsync(_a0 context.Context, _a1 abci.RequestEndBlock) (
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestEndBlock) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestEndBlock) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -318,20 +319,20 @@ func (_m *Client) EndBlockAsync(_a0 context.Context, _a1 abci.RequestEndBlock) (
}
// EndBlockSync provides a mock function with given fields: _a0, _a1
func (_m *Client) EndBlockSync(_a0 context.Context, _a1 abci.RequestEndBlock) (*abci.ResponseEndBlock, error) {
func (_m *Client) EndBlockSync(_a0 context.Context, _a1 types.RequestEndBlock) (*types.ResponseEndBlock, error) {
ret := _m.Called(_a0, _a1)
var r0 *abci.ResponseEndBlock
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestEndBlock) *abci.ResponseEndBlock); ok {
var r0 *types.ResponseEndBlock
if rf, ok := ret.Get(0).(func(context.Context, types.RequestEndBlock) *types.ResponseEndBlock); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abci.ResponseEndBlock)
r0 = ret.Get(0).(*types.ResponseEndBlock)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestEndBlock) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestEndBlock) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -392,11 +393,11 @@ func (_m *Client) FlushSync(_a0 context.Context) error {
}
// InfoAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) InfoAsync(_a0 context.Context, _a1 abci.RequestInfo) (*abcicli.ReqRes, error) {
func (_m *Client) InfoAsync(_a0 context.Context, _a1 types.RequestInfo) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestInfo) *abcicli.ReqRes); ok {
if rf, ok := ret.Get(0).(func(context.Context, types.RequestInfo) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
@@ -405,7 +406,7 @@ func (_m *Client) InfoAsync(_a0 context.Context, _a1 abci.RequestInfo) (*abcicli
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestInfo) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestInfo) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -415,20 +416,20 @@ func (_m *Client) InfoAsync(_a0 context.Context, _a1 abci.RequestInfo) (*abcicli
}
// InfoSync provides a mock function with given fields: _a0, _a1
func (_m *Client) InfoSync(_a0 context.Context, _a1 abci.RequestInfo) (*abci.ResponseInfo, error) {
func (_m *Client) InfoSync(_a0 context.Context, _a1 types.RequestInfo) (*types.ResponseInfo, error) {
ret := _m.Called(_a0, _a1)
var r0 *abci.ResponseInfo
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestInfo) *abci.ResponseInfo); ok {
var r0 *types.ResponseInfo
if rf, ok := ret.Get(0).(func(context.Context, types.RequestInfo) *types.ResponseInfo); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abci.ResponseInfo)
r0 = ret.Get(0).(*types.ResponseInfo)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestInfo) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestInfo) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -438,11 +439,11 @@ func (_m *Client) InfoSync(_a0 context.Context, _a1 abci.RequestInfo) (*abci.Res
}
// InitChainAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) InitChainAsync(_a0 context.Context, _a1 abci.RequestInitChain) (*abcicli.ReqRes, error) {
func (_m *Client) InitChainAsync(_a0 context.Context, _a1 types.RequestInitChain) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestInitChain) *abcicli.ReqRes); ok {
if rf, ok := ret.Get(0).(func(context.Context, types.RequestInitChain) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
@@ -451,7 +452,7 @@ func (_m *Client) InitChainAsync(_a0 context.Context, _a1 abci.RequestInitChain)
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestInitChain) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestInitChain) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -461,20 +462,20 @@ func (_m *Client) InitChainAsync(_a0 context.Context, _a1 abci.RequestInitChain)
}
// InitChainSync provides a mock function with given fields: _a0, _a1
func (_m *Client) InitChainSync(_a0 context.Context, _a1 abci.RequestInitChain) (*abci.ResponseInitChain, error) {
func (_m *Client) InitChainSync(_a0 context.Context, _a1 types.RequestInitChain) (*types.ResponseInitChain, error) {
ret := _m.Called(_a0, _a1)
var r0 *abci.ResponseInitChain
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestInitChain) *abci.ResponseInitChain); ok {
var r0 *types.ResponseInitChain
if rf, ok := ret.Get(0).(func(context.Context, types.RequestInitChain) *types.ResponseInitChain); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abci.ResponseInitChain)
r0 = ret.Get(0).(*types.ResponseInitChain)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestInitChain) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestInitChain) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -498,11 +499,11 @@ func (_m *Client) IsRunning() bool {
}
// ListSnapshotsAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) ListSnapshotsAsync(_a0 context.Context, _a1 abci.RequestListSnapshots) (*abcicli.ReqRes, error) {
func (_m *Client) ListSnapshotsAsync(_a0 context.Context, _a1 types.RequestListSnapshots) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestListSnapshots) *abcicli.ReqRes); ok {
if rf, ok := ret.Get(0).(func(context.Context, types.RequestListSnapshots) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
@@ -511,7 +512,7 @@ func (_m *Client) ListSnapshotsAsync(_a0 context.Context, _a1 abci.RequestListSn
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestListSnapshots) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestListSnapshots) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -521,20 +522,20 @@ func (_m *Client) ListSnapshotsAsync(_a0 context.Context, _a1 abci.RequestListSn
}
// ListSnapshotsSync provides a mock function with given fields: _a0, _a1
func (_m *Client) ListSnapshotsSync(_a0 context.Context, _a1 abci.RequestListSnapshots) (*abci.ResponseListSnapshots, error) {
func (_m *Client) ListSnapshotsSync(_a0 context.Context, _a1 types.RequestListSnapshots) (*types.ResponseListSnapshots, error) {
ret := _m.Called(_a0, _a1)
var r0 *abci.ResponseListSnapshots
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestListSnapshots) *abci.ResponseListSnapshots); ok {
var r0 *types.ResponseListSnapshots
if rf, ok := ret.Get(0).(func(context.Context, types.RequestListSnapshots) *types.ResponseListSnapshots); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abci.ResponseListSnapshots)
r0 = ret.Get(0).(*types.ResponseListSnapshots)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestListSnapshots) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestListSnapshots) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -544,11 +545,11 @@ func (_m *Client) ListSnapshotsSync(_a0 context.Context, _a1 abci.RequestListSna
}
// LoadSnapshotChunkAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) LoadSnapshotChunkAsync(_a0 context.Context, _a1 abci.RequestLoadSnapshotChunk) (*abcicli.ReqRes, error) {
func (_m *Client) LoadSnapshotChunkAsync(_a0 context.Context, _a1 types.RequestLoadSnapshotChunk) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestLoadSnapshotChunk) *abcicli.ReqRes); ok {
if rf, ok := ret.Get(0).(func(context.Context, types.RequestLoadSnapshotChunk) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
@@ -557,7 +558,7 @@ func (_m *Client) LoadSnapshotChunkAsync(_a0 context.Context, _a1 abci.RequestLo
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestLoadSnapshotChunk) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestLoadSnapshotChunk) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -567,20 +568,20 @@ func (_m *Client) LoadSnapshotChunkAsync(_a0 context.Context, _a1 abci.RequestLo
}
// LoadSnapshotChunkSync provides a mock function with given fields: _a0, _a1
func (_m *Client) LoadSnapshotChunkSync(_a0 context.Context, _a1 abci.RequestLoadSnapshotChunk) (*abci.ResponseLoadSnapshotChunk, error) {
func (_m *Client) LoadSnapshotChunkSync(_a0 context.Context, _a1 types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
ret := _m.Called(_a0, _a1)
var r0 *abci.ResponseLoadSnapshotChunk
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestLoadSnapshotChunk) *abci.ResponseLoadSnapshotChunk); ok {
var r0 *types.ResponseLoadSnapshotChunk
if rf, ok := ret.Get(0).(func(context.Context, types.RequestLoadSnapshotChunk) *types.ResponseLoadSnapshotChunk); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abci.ResponseLoadSnapshotChunk)
r0 = ret.Get(0).(*types.ResponseLoadSnapshotChunk)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestLoadSnapshotChunk) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestLoadSnapshotChunk) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -590,11 +591,11 @@ func (_m *Client) LoadSnapshotChunkSync(_a0 context.Context, _a1 abci.RequestLoa
}
// OfferSnapshotAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) OfferSnapshotAsync(_a0 context.Context, _a1 abci.RequestOfferSnapshot) (*abcicli.ReqRes, error) {
func (_m *Client) OfferSnapshotAsync(_a0 context.Context, _a1 types.RequestOfferSnapshot) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestOfferSnapshot) *abcicli.ReqRes); ok {
if rf, ok := ret.Get(0).(func(context.Context, types.RequestOfferSnapshot) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
@@ -603,7 +604,7 @@ func (_m *Client) OfferSnapshotAsync(_a0 context.Context, _a1 abci.RequestOfferS
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestOfferSnapshot) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestOfferSnapshot) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -613,20 +614,20 @@ func (_m *Client) OfferSnapshotAsync(_a0 context.Context, _a1 abci.RequestOfferS
}
// OfferSnapshotSync provides a mock function with given fields: _a0, _a1
func (_m *Client) OfferSnapshotSync(_a0 context.Context, _a1 abci.RequestOfferSnapshot) (*abci.ResponseOfferSnapshot, error) {
func (_m *Client) OfferSnapshotSync(_a0 context.Context, _a1 types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error) {
ret := _m.Called(_a0, _a1)
var r0 *abci.ResponseOfferSnapshot
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestOfferSnapshot) *abci.ResponseOfferSnapshot); ok {
var r0 *types.ResponseOfferSnapshot
if rf, ok := ret.Get(0).(func(context.Context, types.RequestOfferSnapshot) *types.ResponseOfferSnapshot); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abci.ResponseOfferSnapshot)
r0 = ret.Get(0).(*types.ResponseOfferSnapshot)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestOfferSnapshot) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestOfferSnapshot) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -669,11 +670,11 @@ func (_m *Client) OnStop() {
}
// QueryAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) QueryAsync(_a0 context.Context, _a1 abci.RequestQuery) (*abcicli.ReqRes, error) {
func (_m *Client) QueryAsync(_a0 context.Context, _a1 types.RequestQuery) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestQuery) *abcicli.ReqRes); ok {
if rf, ok := ret.Get(0).(func(context.Context, types.RequestQuery) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
@@ -682,7 +683,7 @@ func (_m *Client) QueryAsync(_a0 context.Context, _a1 abci.RequestQuery) (*abcic
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestQuery) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestQuery) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
@@ -692,20 +693,20 @@ func (_m *Client) QueryAsync(_a0 context.Context, _a1 abci.RequestQuery) (*abcic
}
// QuerySync provides a mock function with given fields: _a0, _a1
func (_m *Client) QuerySync(_a0 context.Context, _a1 abci.RequestQuery) (*abci.ResponseQuery, error) {
func (_m *Client) QuerySync(_a0 context.Context, _a1 types.RequestQuery) (*types.ResponseQuery, error) {
ret := _m.Called(_a0, _a1)
var r0 *abci.ResponseQuery
if rf, ok := ret.Get(0).(func(context.Context, abci.RequestQuery) *abci.ResponseQuery); ok {
var r0 *types.ResponseQuery
if rf, ok := ret.Get(0).(func(context.Context, types.RequestQuery) *types.ResponseQuery); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abci.ResponseQuery)
r0 = ret.Get(0).(*types.ResponseQuery)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, abci.RequestQuery) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestQuery) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
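The generated mock embeds testify's `mock.Mock`, so tests can stub any of these methods. A hypothetical test sketch follows; the import path for the `mocks` package and the expectations are assumptions, not part of the diff.
```go
package mocks_test

import (
	"context"
	"testing"

	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"

	"github.com/tendermint/tendermint/abci/client/mocks"
	types "github.com/tendermint/tendermint/abci/types"
)

func TestCheckTxSyncStub(t *testing.T) {
	m := new(mocks.Client)
	// Stub CheckTxSync to return code 0 for any request.
	m.On("CheckTxSync", mock.Anything, mock.Anything).
		Return(&types.ResponseCheckTx{Code: 0}, nil)

	res, err := m.CheckTxSync(context.Background(), types.RequestCheckTx{Tx: []byte("tx")})
	require.NoError(t, err)
	require.EqualValues(t, 0, res.Code)
	m.AssertExpectations(t)
}
```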

View File

@@ -11,11 +11,11 @@ import (
"reflect"
"time"
"github.com/tendermint/tendermint/abci/types"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/internal/libs/timer"
tmnet "github.com/tendermint/tendermint/libs/net"
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/pkg/abci"
)
const (
@@ -45,8 +45,8 @@ type socketClient struct {
mtx tmsync.RWMutex
err error
reqSent *list.List // list of requests sent, waiting for response
resCb func(*abci.Request, *abci.Response) // called on all requests, if set.
reqSent *list.List // list of requests sent, waiting for response
resCb func(*types.Request, *types.Response) // called on all requests, if set.
}
var _ Client = (*socketClient)(nil)
@@ -138,14 +138,14 @@ func (cli *socketClient) sendRequestsRoutine(conn io.Writer) {
}
cli.willSendReq(reqres.R)
err := abci.WriteMessage(reqres.R.Request, w)
err := types.WriteMessage(reqres.R.Request, w)
if err != nil {
cli.stopForError(fmt.Errorf("write to buffer: %w", err))
return
}
// If it's a flush request, flush the current buffer.
if _, ok := reqres.R.Request.Value.(*abci.Request_Flush); ok {
if _, ok := reqres.R.Request.Value.(*types.Request_Flush); ok {
err = w.Flush()
if err != nil {
cli.stopForError(fmt.Errorf("flush buffer: %w", err))
@@ -154,7 +154,7 @@ func (cli *socketClient) sendRequestsRoutine(conn io.Writer) {
}
case <-cli.flushTimer.Ch: // flush queue
select {
case cli.reqQueue <- &reqResWithContext{R: NewReqRes(abci.ToRequestFlush()), C: context.Background()}:
case cli.reqQueue <- &reqResWithContext{R: NewReqRes(types.ToRequestFlush()), C: context.Background()}:
default:
// Probably will fill the buffer, or retry later.
}
@@ -167,8 +167,8 @@ func (cli *socketClient) sendRequestsRoutine(conn io.Writer) {
func (cli *socketClient) recvResponseRoutine(conn io.Reader) {
r := bufio.NewReader(conn)
for {
var res = &abci.Response{}
err := abci.ReadMessage(r, res)
var res = &types.Response{}
err := types.ReadMessage(r, res)
if err != nil {
cli.stopForError(fmt.Errorf("read message: %w", err))
return
@@ -177,7 +177,7 @@ func (cli *socketClient) recvResponseRoutine(conn io.Reader) {
// cli.Logger.Debug("Received response", "responseType", reflect.TypeOf(res), "response", res)
switch r := res.Value.(type) {
case *abci.Response_Exception: // app responded with error
case *types.Response_Exception: // app responded with error
// XXX After setting cli.err, release waiters (e.g. reqres.Done())
cli.stopForError(errors.New(r.Exception.Error))
return
@@ -197,7 +197,7 @@ func (cli *socketClient) willSendReq(reqres *ReqRes) {
cli.reqSent.PushBack(reqres)
}
func (cli *socketClient) didRecvResponse(res *abci.Response) error {
func (cli *socketClient) didRecvResponse(res *types.Response) error {
cli.mtx.Lock()
defer cli.mtx.Unlock()
@@ -234,71 +234,71 @@ func (cli *socketClient) didRecvResponse(res *abci.Response) error {
//----------------------------------------
func (cli *socketClient) EchoAsync(ctx context.Context, msg string) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, abci.ToRequestEcho(msg))
return cli.queueRequestAsync(ctx, types.ToRequestEcho(msg))
}
func (cli *socketClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, abci.ToRequestFlush())
return cli.queueRequestAsync(ctx, types.ToRequestFlush())
}
func (cli *socketClient) InfoAsync(ctx context.Context, req abci.RequestInfo) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, abci.ToRequestInfo(req))
func (cli *socketClient) InfoAsync(ctx context.Context, req types.RequestInfo) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestInfo(req))
}
func (cli *socketClient) DeliverTxAsync(ctx context.Context, req abci.RequestDeliverTx) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, abci.ToRequestDeliverTx(req))
func (cli *socketClient) DeliverTxAsync(ctx context.Context, req types.RequestDeliverTx) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestDeliverTx(req))
}
func (cli *socketClient) CheckTxAsync(ctx context.Context, req abci.RequestCheckTx) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, abci.ToRequestCheckTx(req))
func (cli *socketClient) CheckTxAsync(ctx context.Context, req types.RequestCheckTx) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestCheckTx(req))
}
func (cli *socketClient) QueryAsync(ctx context.Context, req abci.RequestQuery) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, abci.ToRequestQuery(req))
func (cli *socketClient) QueryAsync(ctx context.Context, req types.RequestQuery) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestQuery(req))
}
func (cli *socketClient) CommitAsync(ctx context.Context) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, abci.ToRequestCommit())
return cli.queueRequestAsync(ctx, types.ToRequestCommit())
}
func (cli *socketClient) InitChainAsync(ctx context.Context, req abci.RequestInitChain) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, abci.ToRequestInitChain(req))
func (cli *socketClient) InitChainAsync(ctx context.Context, req types.RequestInitChain) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestInitChain(req))
}
func (cli *socketClient) BeginBlockAsync(ctx context.Context, req abci.RequestBeginBlock) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, abci.ToRequestBeginBlock(req))
func (cli *socketClient) BeginBlockAsync(ctx context.Context, req types.RequestBeginBlock) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestBeginBlock(req))
}
func (cli *socketClient) EndBlockAsync(ctx context.Context, req abci.RequestEndBlock) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, abci.ToRequestEndBlock(req))
func (cli *socketClient) EndBlockAsync(ctx context.Context, req types.RequestEndBlock) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestEndBlock(req))
}
func (cli *socketClient) ListSnapshotsAsync(ctx context.Context, req abci.RequestListSnapshots) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, abci.ToRequestListSnapshots(req))
func (cli *socketClient) ListSnapshotsAsync(ctx context.Context, req types.RequestListSnapshots) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestListSnapshots(req))
}
func (cli *socketClient) OfferSnapshotAsync(ctx context.Context, req abci.RequestOfferSnapshot) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, abci.ToRequestOfferSnapshot(req))
func (cli *socketClient) OfferSnapshotAsync(ctx context.Context, req types.RequestOfferSnapshot) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestOfferSnapshot(req))
}
func (cli *socketClient) LoadSnapshotChunkAsync(
ctx context.Context,
req abci.RequestLoadSnapshotChunk,
req types.RequestLoadSnapshotChunk,
) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, abci.ToRequestLoadSnapshotChunk(req))
return cli.queueRequestAsync(ctx, types.ToRequestLoadSnapshotChunk(req))
}
func (cli *socketClient) ApplySnapshotChunkAsync(
ctx context.Context,
req abci.RequestApplySnapshotChunk,
req types.RequestApplySnapshotChunk,
) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, abci.ToRequestApplySnapshotChunk(req))
return cli.queueRequestAsync(ctx, types.ToRequestApplySnapshotChunk(req))
}
//----------------------------------------
func (cli *socketClient) FlushSync(ctx context.Context) error {
reqRes, err := cli.queueRequest(ctx, abci.ToRequestFlush(), true)
reqRes, err := cli.queueRequest(ctx, types.ToRequestFlush(), true)
if err != nil {
return queueErr(err)
}
@@ -322,8 +322,8 @@ func (cli *socketClient) FlushSync(ctx context.Context) error {
}
}
func (cli *socketClient) EchoSync(ctx context.Context, msg string) (*abci.ResponseEcho, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, abci.ToRequestEcho(msg))
func (cli *socketClient) EchoSync(ctx context.Context, msg string) (*types.ResponseEcho, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestEcho(msg))
if err != nil {
return nil, err
}
@@ -332,9 +332,9 @@ func (cli *socketClient) EchoSync(ctx context.Context, msg string) (*abci.Respon
func (cli *socketClient) InfoSync(
ctx context.Context,
req abci.RequestInfo,
) (*abci.ResponseInfo, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, abci.ToRequestInfo(req))
req types.RequestInfo,
) (*types.ResponseInfo, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestInfo(req))
if err != nil {
return nil, err
}
@@ -343,10 +343,10 @@ func (cli *socketClient) InfoSync(
func (cli *socketClient) DeliverTxSync(
ctx context.Context,
req abci.RequestDeliverTx,
) (*abci.ResponseDeliverTx, error) {
req types.RequestDeliverTx,
) (*types.ResponseDeliverTx, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, abci.ToRequestDeliverTx(req))
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestDeliverTx(req))
if err != nil {
return nil, err
}
@@ -355,9 +355,9 @@ func (cli *socketClient) DeliverTxSync(
func (cli *socketClient) CheckTxSync(
ctx context.Context,
req abci.RequestCheckTx,
) (*abci.ResponseCheckTx, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, abci.ToRequestCheckTx(req))
req types.RequestCheckTx,
) (*types.ResponseCheckTx, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestCheckTx(req))
if err != nil {
return nil, err
}
@@ -366,17 +366,17 @@ func (cli *socketClient) CheckTxSync(
func (cli *socketClient) QuerySync(
ctx context.Context,
req abci.RequestQuery,
) (*abci.ResponseQuery, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, abci.ToRequestQuery(req))
req types.RequestQuery,
) (*types.ResponseQuery, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestQuery(req))
if err != nil {
return nil, err
}
return reqres.Response.GetQuery(), nil
}
func (cli *socketClient) CommitSync(ctx context.Context) (*abci.ResponseCommit, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, abci.ToRequestCommit())
func (cli *socketClient) CommitSync(ctx context.Context) (*types.ResponseCommit, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestCommit())
if err != nil {
return nil, err
}
@@ -385,10 +385,10 @@ func (cli *socketClient) CommitSync(ctx context.Context) (*abci.ResponseCommit,
func (cli *socketClient) InitChainSync(
ctx context.Context,
req abci.RequestInitChain,
) (*abci.ResponseInitChain, error) {
req types.RequestInitChain,
) (*types.ResponseInitChain, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, abci.ToRequestInitChain(req))
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestInitChain(req))
if err != nil {
return nil, err
}
@@ -397,10 +397,10 @@ func (cli *socketClient) InitChainSync(
func (cli *socketClient) BeginBlockSync(
ctx context.Context,
req abci.RequestBeginBlock,
) (*abci.ResponseBeginBlock, error) {
req types.RequestBeginBlock,
) (*types.ResponseBeginBlock, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, abci.ToRequestBeginBlock(req))
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestBeginBlock(req))
if err != nil {
return nil, err
}
@@ -409,10 +409,10 @@ func (cli *socketClient) BeginBlockSync(
func (cli *socketClient) EndBlockSync(
ctx context.Context,
req abci.RequestEndBlock,
) (*abci.ResponseEndBlock, error) {
req types.RequestEndBlock,
) (*types.ResponseEndBlock, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, abci.ToRequestEndBlock(req))
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestEndBlock(req))
if err != nil {
return nil, err
}
@@ -421,10 +421,10 @@ func (cli *socketClient) EndBlockSync(
func (cli *socketClient) ListSnapshotsSync(
ctx context.Context,
req abci.RequestListSnapshots,
) (*abci.ResponseListSnapshots, error) {
req types.RequestListSnapshots,
) (*types.ResponseListSnapshots, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, abci.ToRequestListSnapshots(req))
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestListSnapshots(req))
if err != nil {
return nil, err
}
@@ -433,10 +433,10 @@ func (cli *socketClient) ListSnapshotsSync(
func (cli *socketClient) OfferSnapshotSync(
ctx context.Context,
req abci.RequestOfferSnapshot,
) (*abci.ResponseOfferSnapshot, error) {
req types.RequestOfferSnapshot,
) (*types.ResponseOfferSnapshot, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, abci.ToRequestOfferSnapshot(req))
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestOfferSnapshot(req))
if err != nil {
return nil, err
}
@@ -445,9 +445,9 @@ func (cli *socketClient) OfferSnapshotSync(
func (cli *socketClient) LoadSnapshotChunkSync(
ctx context.Context,
req abci.RequestLoadSnapshotChunk) (*abci.ResponseLoadSnapshotChunk, error) {
req types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, abci.ToRequestLoadSnapshotChunk(req))
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestLoadSnapshotChunk(req))
if err != nil {
return nil, err
}
@@ -456,9 +456,9 @@ func (cli *socketClient) LoadSnapshotChunkSync(
func (cli *socketClient) ApplySnapshotChunkSync(
ctx context.Context,
req abci.RequestApplySnapshotChunk) (*abci.ResponseApplySnapshotChunk, error) {
req types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, abci.ToRequestApplySnapshotChunk(req))
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestApplySnapshotChunk(req))
if err != nil {
return nil, err
}
@@ -475,7 +475,7 @@ func (cli *socketClient) ApplySnapshotChunkSync(
// non-nil).
//
// The caller is responsible for checking cli.Error.
func (cli *socketClient) queueRequest(ctx context.Context, req *abci.Request, sync bool) (*ReqRes, error) {
func (cli *socketClient) queueRequest(ctx context.Context, req *types.Request, sync bool) (*ReqRes, error) {
reqres := NewReqRes(req)
if sync {
@@ -494,7 +494,7 @@ func (cli *socketClient) queueRequest(ctx context.Context, req *abci.Request, sy
// Maybe auto-flush, or unset auto-flush
switch req.Value.(type) {
case *abci.Request_Flush:
case *types.Request_Flush:
cli.flushTimer.Unset()
default:
cli.flushTimer.Set()
@@ -505,7 +505,7 @@ func (cli *socketClient) queueRequest(ctx context.Context, req *abci.Request, sy
func (cli *socketClient) queueRequestAsync(
ctx context.Context,
req *abci.Request,
req *types.Request,
) (*ReqRes, error) {
reqres, err := cli.queueRequest(ctx, req, false)
@@ -518,7 +518,7 @@ func (cli *socketClient) queueRequestAsync(
func (cli *socketClient) queueRequestAndFlushSync(
ctx context.Context,
req *abci.Request,
req *types.Request,
) (*ReqRes, error) {
reqres, err := cli.queueRequest(ctx, req, true)
@@ -561,36 +561,36 @@ LOOP:
//----------------------------------------
func resMatchesReq(req *abci.Request, res *abci.Response) (ok bool) {
func resMatchesReq(req *types.Request, res *types.Response) (ok bool) {
switch req.Value.(type) {
case *abci.Request_Echo:
_, ok = res.Value.(*abci.Response_Echo)
case *abci.Request_Flush:
_, ok = res.Value.(*abci.Response_Flush)
case *abci.Request_Info:
_, ok = res.Value.(*abci.Response_Info)
case *abci.Request_DeliverTx:
_, ok = res.Value.(*abci.Response_DeliverTx)
case *abci.Request_CheckTx:
_, ok = res.Value.(*abci.Response_CheckTx)
case *abci.Request_Commit:
_, ok = res.Value.(*abci.Response_Commit)
case *abci.Request_Query:
_, ok = res.Value.(*abci.Response_Query)
case *abci.Request_InitChain:
_, ok = res.Value.(*abci.Response_InitChain)
case *abci.Request_BeginBlock:
_, ok = res.Value.(*abci.Response_BeginBlock)
case *abci.Request_EndBlock:
_, ok = res.Value.(*abci.Response_EndBlock)
case *abci.Request_ApplySnapshotChunk:
_, ok = res.Value.(*abci.Response_ApplySnapshotChunk)
case *abci.Request_LoadSnapshotChunk:
_, ok = res.Value.(*abci.Response_LoadSnapshotChunk)
case *abci.Request_ListSnapshots:
_, ok = res.Value.(*abci.Response_ListSnapshots)
case *abci.Request_OfferSnapshot:
_, ok = res.Value.(*abci.Response_OfferSnapshot)
case *types.Request_Echo:
_, ok = res.Value.(*types.Response_Echo)
case *types.Request_Flush:
_, ok = res.Value.(*types.Response_Flush)
case *types.Request_Info:
_, ok = res.Value.(*types.Response_Info)
case *types.Request_DeliverTx:
_, ok = res.Value.(*types.Response_DeliverTx)
case *types.Request_CheckTx:
_, ok = res.Value.(*types.Response_CheckTx)
case *types.Request_Commit:
_, ok = res.Value.(*types.Response_Commit)
case *types.Request_Query:
_, ok = res.Value.(*types.Response_Query)
case *types.Request_InitChain:
_, ok = res.Value.(*types.Response_InitChain)
case *types.Request_BeginBlock:
_, ok = res.Value.(*types.Response_BeginBlock)
case *types.Request_EndBlock:
_, ok = res.Value.(*types.Response_EndBlock)
case *types.Request_ApplySnapshotChunk:
_, ok = res.Value.(*types.Response_ApplySnapshotChunk)
case *types.Request_LoadSnapshotChunk:
_, ok = res.Value.(*types.Response_LoadSnapshotChunk)
case *types.Request_ListSnapshots:
_, ok = res.Value.(*types.Response_ListSnapshots)
case *types.Request_OfferSnapshot:
_, ok = res.Value.(*types.Response_OfferSnapshot)
}
return ok
}
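
The switch above pairs each request variant with its response variant so the client can detect a server that answers out of order. As a quick orientation (not part of the diff), a minimal sketch of driving the renamed client, assuming the post-rename abci/types import path and an ABCI server already listening on the hypothetical address below:

package main

import (
	"context"
	"fmt"
	"log"

	abcicli "github.com/tendermint/tendermint/abci/client"
	"github.com/tendermint/tendermint/abci/types"
)

func main() {
	// mustConnect=true: fail fast if nothing is listening on this address.
	client := abcicli.NewSocketClient("tcp://127.0.0.1:26658", true)
	if err := client.Start(); err != nil {
		log.Fatal(err)
	}

	// Each *Sync method queues its request, flushes the socket, and blocks
	// until the matching response arrives.
	info, err := client.InfoSync(context.Background(), types.RequestInfo{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(info.Data)
}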

View File

@@ -13,8 +13,8 @@ import (
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/server"
"github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/pkg/abci"
)
var ctx = context.Background()
@@ -37,7 +37,7 @@ func TestProperSyncCalls(t *testing.T) {
resp := make(chan error, 1)
go func() {
// This is BeginBlockSync unrolled....
reqres, err := c.BeginBlockAsync(ctx, abci.RequestBeginBlock{})
reqres, err := c.BeginBlockAsync(ctx, types.RequestBeginBlock{})
assert.NoError(t, err)
err = c.FlushSync(context.Background())
assert.NoError(t, err)
@@ -73,7 +73,7 @@ func TestHangingSyncCalls(t *testing.T) {
resp := make(chan error, 1)
go func() {
// Start BeginBlock and flush it
reqres, err := c.BeginBlockAsync(ctx, abci.RequestBeginBlock{})
reqres, err := c.BeginBlockAsync(ctx, types.RequestBeginBlock{})
assert.NoError(t, err)
flush, err := c.FlushAsync(ctx)
assert.NoError(t, err)
@@ -99,7 +99,7 @@ func TestHangingSyncCalls(t *testing.T) {
}
}
func setupClientServer(t *testing.T, app abci.Application) (
func setupClientServer(t *testing.T, app types.Application) (
service.Service, abcicli.Client) {
// some port between 20k and 30k
port := 20000 + rand.Int31()%10000
@@ -118,10 +118,10 @@ func setupClientServer(t *testing.T, app abci.Application) (
}
type slowApp struct {
abci.BaseApplication
types.BaseApplication
}
func (slowApp) BeginBlock(req abci.RequestBeginBlock) abci.ResponseBeginBlock {
func (slowApp) BeginBlock(req types.RequestBeginBlock) types.ResponseBeginBlock {
time.Sleep(200 * time.Millisecond)
return abci.ResponseBeginBlock{}
return types.ResponseBeginBlock{}
}
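
The tests above unroll BeginBlockSync by hand to control when the flush happens. A compact sketch of that Async-then-Flush pattern (assuming c is a started abcicli.Client):

func beginBlockUnrolled(ctx context.Context, c abcicli.Client) (*types.ResponseBeginBlock, error) {
	// Queue the request without waiting for a response.
	reqres, err := c.BeginBlockAsync(ctx, types.RequestBeginBlock{})
	if err != nil {
		return nil, err
	}
	// FlushSync pushes everything queued so far onto the wire and blocks
	// until the server has answered; afterwards reqres holds the response.
	if err := c.FlushSync(ctx); err != nil {
		return nil, err
	}
	return reqres.Response.GetBeginBlock(), nil
}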

View File

@@ -20,8 +20,8 @@ import (
"github.com/tendermint/tendermint/abci/example/kvstore"
"github.com/tendermint/tendermint/abci/server"
servertest "github.com/tendermint/tendermint/abci/tests/server"
"github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/abci/version"
"github.com/tendermint/tendermint/pkg/abci"
"github.com/tendermint/tendermint/proto/tendermint/crypto"
)
@@ -459,7 +459,7 @@ func cmdInfo(cmd *cobra.Command, args []string) error {
if len(args) == 1 {
version = args[0]
}
res, err := client.InfoSync(ctx, abci.RequestInfo{Version: version})
res, err := client.InfoSync(ctx, types.RequestInfo{Version: version})
if err != nil {
return err
}
@@ -484,7 +484,7 @@ func cmdDeliverTx(cmd *cobra.Command, args []string) error {
if err != nil {
return err
}
res, err := client.DeliverTxSync(ctx, abci.RequestDeliverTx{Tx: txBytes})
res, err := client.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: txBytes})
if err != nil {
return err
}
@@ -510,7 +510,7 @@ func cmdCheckTx(cmd *cobra.Command, args []string) error {
if err != nil {
return err
}
res, err := client.CheckTxSync(ctx, abci.RequestCheckTx{Tx: txBytes})
res, err := client.CheckTxSync(ctx, types.RequestCheckTx{Tx: txBytes})
if err != nil {
return err
}
@@ -550,7 +550,7 @@ func cmdQuery(cmd *cobra.Command, args []string) error {
return err
}
resQuery, err := client.QuerySync(ctx, abci.RequestQuery{
resQuery, err := client.QuerySync(ctx, types.RequestQuery{
Data: queryBytes,
Path: flagPath,
Height: int64(flagHeight),
@@ -577,7 +577,7 @@ func cmdKVStore(cmd *cobra.Command, args []string) error {
logger := log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo, false)
// Create the application - in memory or persisted to disk
var app abci.Application
var app types.Application
if flagPersist == "" {
app = kvstore.NewApplication()
} else {
@@ -616,7 +616,7 @@ func printResponse(cmd *cobra.Command, args []string, rsp response) {
}
// Always print the status code.
if rsp.Code == abci.CodeTypeOK {
if rsp.Code == types.CodeTypeOK {
fmt.Printf("-> code: OK\n")
} else {
fmt.Printf("-> code: %d\n", rsp.Code)

View File

@@ -21,7 +21,7 @@ import (
"github.com/tendermint/tendermint/abci/example/code"
"github.com/tendermint/tendermint/abci/example/kvstore"
abciserver "github.com/tendermint/tendermint/abci/server"
"github.com/tendermint/tendermint/pkg/abci"
"github.com/tendermint/tendermint/abci/types"
)
func init() {
@@ -35,15 +35,15 @@ func TestKVStore(t *testing.T) {
func TestBaseApp(t *testing.T) {
fmt.Println("### Testing BaseApp")
testStream(t, abci.NewBaseApplication())
testStream(t, types.NewBaseApplication())
}
func TestGRPC(t *testing.T) {
fmt.Println("### Testing GRPC")
testGRPCSync(t, abci.NewGRPCApplication(abci.NewBaseApplication()))
testGRPCSync(t, types.NewGRPCApplication(types.NewBaseApplication()))
}
func testStream(t *testing.T, app abci.Application) {
func testStream(t *testing.T, app types.Application) {
const numDeliverTxs = 20000
socketFile := fmt.Sprintf("test-%08x.sock", rand.Int31n(1<<30))
defer os.Remove(socketFile)
@@ -73,10 +73,10 @@ func testStream(t *testing.T, app abci.Application) {
done := make(chan struct{})
counter := 0
client.SetResponseCallback(func(req *abci.Request, res *abci.Response) {
client.SetResponseCallback(func(req *types.Request, res *types.Response) {
// Process response
switch r := res.Value.(type) {
case *abci.Response_DeliverTx:
case *types.Response_DeliverTx:
counter++
if r.DeliverTx.Code != code.CodeTypeOK {
t.Error("DeliverTx failed with ret_code", r.DeliverTx.Code)
@@ -91,7 +91,7 @@ func testStream(t *testing.T, app abci.Application) {
}()
return
}
case *abci.Response_Flush:
case *types.Response_Flush:
// ignore
default:
t.Error("Unexpected response type", reflect.TypeOf(res.Value))
@@ -103,7 +103,7 @@ func testStream(t *testing.T, app abci.Application) {
// Write requests
for counter := 0; counter < numDeliverTxs; counter++ {
// Send request
_, err = client.DeliverTxAsync(ctx, abci.RequestDeliverTx{Tx: []byte("test")})
_, err = client.DeliverTxAsync(ctx, types.RequestDeliverTx{Tx: []byte("test")})
require.NoError(t, err)
// Sometimes send flush messages
@@ -127,7 +127,7 @@ func dialerFunc(ctx context.Context, addr string) (net.Conn, error) {
return tmnet.Connect(addr)
}
func testGRPCSync(t *testing.T, app abci.ABCIApplicationServer) {
func testGRPCSync(t *testing.T, app types.ABCIApplicationServer) {
numDeliverTxs := 2000
socketFile := fmt.Sprintf("test-%08x.sock", rand.Int31n(1<<30))
defer os.Remove(socketFile)
@@ -158,12 +158,12 @@ func testGRPCSync(t *testing.T, app abci.ABCIApplicationServer) {
}
})
client := abci.NewABCIApplicationClient(conn)
client := types.NewABCIApplicationClient(conn)
// Write requests
for counter := 0; counter < numDeliverTxs; counter++ {
// Send request
response, err := client.DeliverTx(context.Background(), &abci.RequestDeliverTx{Tx: []byte("test")})
response, err := client.DeliverTx(context.Background(), &types.RequestDeliverTx{Tx: []byte("test")})
if err != nil {
t.Fatalf("Error in GRPC DeliverTx: %v", err.Error())
}
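
The dial that produces conn sits in context hidden by this hunk; sketched here under the assumption that it reuses the dialerFunc defined earlier in the file (the exact options may differ):

func dialABCI(socketFile string) (types.ABCIApplicationClient, error) {
	// grpc.WithContextDialer routes the connection through dialerFunc,
	// which uses tmnet.Connect to reach the unix socket.
	conn, err := grpc.Dial(socketFile,
		grpc.WithInsecure(),
		grpc.WithContextDialer(dialerFunc))
	if err != nil {
		return nil, err
	}
	return types.NewABCIApplicationClient(conn), nil
}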

View File

@@ -3,17 +3,17 @@ package kvstore
import (
mrand "math/rand"
"github.com/tendermint/tendermint/abci/types"
tmrand "github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/pkg/abci"
)
// RandVal creates one random validator, with a key derived
// from the input value
func RandVal(i int) abci.ValidatorUpdate {
func RandVal(i int) types.ValidatorUpdate {
pubkey := tmrand.Bytes(32)
// Random value between [0, 2^16 - 1]
power := mrand.Uint32() & (1<<16 - 1) // nolint:gosec // G404: Use of weak random number generator
v := abci.UpdateValidator(pubkey, int64(power), "")
v := types.UpdateValidator(pubkey, int64(power), "")
return v
}
@@ -21,8 +21,8 @@ func RandVal(i int) abci.ValidatorUpdate {
// the application. Note that the keys are deterministically
// derived from the index in the array, while the power is
// random (Change this if not desired)
func RandVals(cnt int) []abci.ValidatorUpdate {
res := make([]abci.ValidatorUpdate, cnt)
func RandVals(cnt int) []types.ValidatorUpdate {
res := make([]types.ValidatorUpdate, cnt)
for i := 0; i < cnt; i++ {
res[i] = RandVal(i)
}
@@ -33,7 +33,7 @@ func RandVals(cnt int) []abci.ValidatorUpdate {
// which allows tests to pass and is fine as long as you
// don't make any txs that modify the validator state
func InitKVStore(app *PersistentKVStoreApplication) {
app.InitChain(abci.RequestInitChain{
app.InitChain(types.RequestInitChain{
Validators: RandVals(1),
})
}
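
When determinism matters more than it does here, the same helpers can seed a fixed validator instead; a sketch (the key bytes are arbitrary placeholders, and an empty key-type string selects the default ed25519 in UpdateValidator):

func InitKVStoreFixed(app *PersistentKVStoreApplication) {
	pubkey := bytes.Repeat([]byte{0x01}, 32) // placeholder 32-byte ed25519 key
	val := types.UpdateValidator(pubkey, 10, "")
	app.InitChain(types.RequestInitChain{
		Validators: []types.ValidatorUpdate{val},
	})
}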

View File

@@ -9,7 +9,7 @@ import (
dbm "github.com/tendermint/tm-db"
"github.com/tendermint/tendermint/abci/example/code"
"github.com/tendermint/tendermint/pkg/abci"
"github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/version"
)
@@ -61,10 +61,10 @@ func prefixKey(key []byte) []byte {
//---------------------------------------------------
var _ abci.Application = (*Application)(nil)
var _ types.Application = (*Application)(nil)
type Application struct {
abci.BaseApplication
types.BaseApplication
state State
RetainBlocks int64 // blocks to retain after commit (via ResponseCommit.RetainHeight)
@@ -75,8 +75,8 @@ func NewApplication() *Application {
return &Application{state: state}
}
func (app *Application) Info(req abci.RequestInfo) (resInfo abci.ResponseInfo) {
return abci.ResponseInfo{
func (app *Application) Info(req types.RequestInfo) (resInfo types.ResponseInfo) {
return types.ResponseInfo{
Data: fmt.Sprintf("{\"size\":%v}", app.state.Size),
Version: version.ABCIVersion,
AppVersion: ProtocolVersion,
@@ -86,7 +86,7 @@ func (app *Application) Info(req abci.RequestInfo) (resInfo abci.ResponseInfo) {
}
// tx is either "key=value" or just arbitrary bytes
func (app *Application) DeliverTx(req abci.RequestDeliverTx) abci.ResponseDeliverTx {
func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
var key, value string
parts := bytes.Split(req.Tx, []byte("="))
@@ -102,10 +102,10 @@ func (app *Application) DeliverTx(req abci.RequestDeliverTx) abci.ResponseDelive
}
app.state.Size++
events := []abci.Event{
events := []types.Event{
{
Type: "app",
Attributes: []abci.EventAttribute{
Attributes: []types.EventAttribute{
{Key: "creator", Value: "Cosmoshi Netowoko", Index: true},
{Key: "key", Value: key, Index: true},
{Key: "index_key", Value: "index is working", Index: true},
@@ -114,14 +114,14 @@ func (app *Application) DeliverTx(req abci.RequestDeliverTx) abci.ResponseDelive
},
}
return abci.ResponseDeliverTx{Code: code.CodeTypeOK, Events: events}
return types.ResponseDeliverTx{Code: code.CodeTypeOK, Events: events}
}
func (app *Application) CheckTx(req abci.RequestCheckTx) abci.ResponseCheckTx {
return abci.ResponseCheckTx{Code: code.CodeTypeOK, GasWanted: 1}
func (app *Application) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
return types.ResponseCheckTx{Code: code.CodeTypeOK, GasWanted: 1}
}
func (app *Application) Commit() abci.ResponseCommit {
func (app *Application) Commit() types.ResponseCommit {
// Using a memdb - just return the big endian size of the db
appHash := make([]byte, 8)
binary.PutVarint(appHash, app.state.Size)
@@ -129,7 +129,7 @@ func (app *Application) Commit() abci.ResponseCommit {
app.state.Height++
saveState(app.state)
resp := abci.ResponseCommit{Data: appHash}
resp := types.ResponseCommit{Data: appHash}
if app.RetainBlocks > 0 && app.state.Height >= app.RetainBlocks {
resp.RetainHeight = app.state.Height - app.RetainBlocks + 1
}
@@ -137,7 +137,7 @@ func (app *Application) Commit() abci.ResponseCommit {
}
// Returns an associated value or nil if missing.
func (app *Application) Query(reqQuery abci.RequestQuery) (resQuery abci.ResponseQuery) {
func (app *Application) Query(reqQuery types.RequestQuery) (resQuery types.ResponseQuery) {
if reqQuery.Prove {
value, err := app.state.db.Get(prefixKey(reqQuery.Data))
if err != nil {

View File

@@ -15,7 +15,7 @@ import (
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/code"
abciserver "github.com/tendermint/tendermint/abci/server"
"github.com/tendermint/tendermint/pkg/abci"
"github.com/tendermint/tendermint/abci/types"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
@@ -26,8 +26,8 @@ const (
var ctx = context.Background()
func testKVStore(t *testing.T, app abci.Application, tx []byte, key, value string) {
req := abci.RequestDeliverTx{Tx: tx}
func testKVStore(t *testing.T, app types.Application, tx []byte, key, value string) {
req := types.RequestDeliverTx{Tx: tx}
ar := app.DeliverTx(req)
require.False(t, ar.IsErr(), ar)
// repeating tx doesn't raise error
@@ -36,11 +36,11 @@ func testKVStore(t *testing.T, app abci.Application, tx []byte, key, value strin
// commit
app.Commit()
info := app.Info(abci.RequestInfo{})
info := app.Info(types.RequestInfo{})
require.NotZero(t, info.LastBlockHeight)
// make sure query is fine
resQuery := app.Query(abci.RequestQuery{
resQuery := app.Query(types.RequestQuery{
Path: "/store",
Data: []byte(key),
})
@@ -50,7 +50,7 @@ func testKVStore(t *testing.T, app abci.Application, tx []byte, key, value strin
require.EqualValues(t, info.LastBlockHeight, resQuery.Height)
// make sure proof is fine
resQuery = app.Query(abci.RequestQuery{
resQuery = app.Query(types.RequestQuery{
Path: "/store",
Data: []byte(key),
Prove: true,
@@ -98,7 +98,7 @@ func TestPersistentKVStoreInfo(t *testing.T) {
InitKVStore(kvstore)
height := int64(0)
resInfo := kvstore.Info(abci.RequestInfo{})
resInfo := kvstore.Info(types.RequestInfo{})
if resInfo.LastBlockHeight != height {
t.Fatalf("expected height of %d, got %d", height, resInfo.LastBlockHeight)
}
@@ -109,11 +109,11 @@ func TestPersistentKVStoreInfo(t *testing.T) {
header := tmproto.Header{
Height: height,
}
kvstore.BeginBlock(abci.RequestBeginBlock{Hash: hash, Header: header})
kvstore.EndBlock(abci.RequestEndBlock{Height: header.Height})
kvstore.BeginBlock(types.RequestBeginBlock{Hash: hash, Header: header})
kvstore.EndBlock(types.RequestEndBlock{Height: header.Height})
kvstore.Commit()
resInfo = kvstore.Info(abci.RequestInfo{})
resInfo = kvstore.Info(types.RequestInfo{})
if resInfo.LastBlockHeight != height {
t.Fatalf("expected height of %d, got %d", height, resInfo.LastBlockHeight)
}
@@ -133,18 +133,18 @@ func TestValUpdates(t *testing.T) {
nInit := 5
vals := RandVals(total)
// initialize with the first nInit
kvstore.InitChain(abci.RequestInitChain{
kvstore.InitChain(types.RequestInitChain{
Validators: vals[:nInit],
})
vals1, vals2 := vals[:nInit], kvstore.Validators()
valsEqual(t, vals1, vals2)
var v1, v2, v3 abci.ValidatorUpdate
var v1, v2, v3 types.ValidatorUpdate
// add some validators
v1, v2 = vals[nInit], vals[nInit+1]
diff := []abci.ValidatorUpdate{v1, v2}
diff := []types.ValidatorUpdate{v1, v2}
tx1 := MakeValSetChangeTx(v1.PubKey, v1.Power)
tx2 := MakeValSetChangeTx(v2.PubKey, v2.Power)
@@ -158,7 +158,7 @@ func TestValUpdates(t *testing.T) {
v1.Power = 0
v2.Power = 0
v3.Power = 0
diff = []abci.ValidatorUpdate{v1, v2, v3}
diff = []types.ValidatorUpdate{v1, v2, v3}
tx1 = MakeValSetChangeTx(v1.PubKey, v1.Power)
tx2 = MakeValSetChangeTx(v2.PubKey, v2.Power)
tx3 := MakeValSetChangeTx(v3.PubKey, v3.Power)
@@ -176,12 +176,12 @@ func TestValUpdates(t *testing.T) {
} else {
v1.Power = 5
}
diff = []abci.ValidatorUpdate{v1}
diff = []types.ValidatorUpdate{v1}
tx1 = MakeValSetChangeTx(v1.PubKey, v1.Power)
makeApplyBlock(t, kvstore, 3, diff, tx1)
vals1 = append([]abci.ValidatorUpdate{v1}, vals1[1:]...)
vals1 = append([]types.ValidatorUpdate{v1}, vals1[1:]...)
vals2 = kvstore.Validators()
valsEqual(t, vals1, vals2)
@@ -189,9 +189,9 @@ func TestValUpdates(t *testing.T) {
func makeApplyBlock(
t *testing.T,
kvstore abci.Application,
kvstore types.Application,
heightInt int,
diff []abci.ValidatorUpdate,
diff []types.ValidatorUpdate,
txs ...[]byte) {
// make and apply block
height := int64(heightInt)
@@ -200,13 +200,13 @@ func makeApplyBlock(
Height: height,
}
kvstore.BeginBlock(abci.RequestBeginBlock{Hash: hash, Header: header})
kvstore.BeginBlock(types.RequestBeginBlock{Hash: hash, Header: header})
for _, tx := range txs {
if r := kvstore.DeliverTx(abci.RequestDeliverTx{Tx: tx}); r.IsErr() {
if r := kvstore.DeliverTx(types.RequestDeliverTx{Tx: tx}); r.IsErr() {
t.Fatal(r)
}
}
resEndBlock := kvstore.EndBlock(abci.RequestEndBlock{Height: header.Height})
resEndBlock := kvstore.EndBlock(types.RequestEndBlock{Height: header.Height})
kvstore.Commit()
valsEqual(t, diff, resEndBlock.ValidatorUpdates)
@@ -214,12 +214,12 @@ func makeApplyBlock(
}
// order doesn't matter
func valsEqual(t *testing.T, vals1, vals2 []abci.ValidatorUpdate) {
func valsEqual(t *testing.T, vals1, vals2 []types.ValidatorUpdate) {
if len(vals1) != len(vals2) {
t.Fatalf("vals dont match in len. got %d, expected %d", len(vals2), len(vals1))
}
sort.Sort(abci.ValidatorUpdates(vals1))
sort.Sort(abci.ValidatorUpdates(vals2))
sort.Sort(types.ValidatorUpdates(vals1))
sort.Sort(types.ValidatorUpdates(vals2))
for i, v1 := range vals1 {
v2 := vals2[i]
if !v1.PubKey.Equal(v2.PubKey) ||
@@ -229,7 +229,7 @@ func valsEqual(t *testing.T, vals1, vals2 []abci.ValidatorUpdate) {
}
}
func makeSocketClientServer(app abci.Application, name string) (abcicli.Client, service.Service, error) {
func makeSocketClientServer(app types.Application, name string) (abcicli.Client, service.Service, error) {
// Start the listener
socket := fmt.Sprintf("unix://%s.sock", name)
logger := log.TestingLogger()
@@ -253,12 +253,12 @@ func makeSocketClientServer(app abci.Application, name string) (abcicli.Client,
return client, server, nil
}
func makeGRPCClientServer(app abci.Application, name string) (abcicli.Client, service.Service, error) {
func makeGRPCClientServer(app types.Application, name string) (abcicli.Client, service.Service, error) {
// Start the listener
socket := fmt.Sprintf("unix://%s.sock", name)
logger := log.TestingLogger()
gapp := abci.NewGRPCApplication(app)
gapp := types.NewGRPCApplication(app)
server := abciserver.NewGRPCServer(socket, gapp)
server.SetLogger(logger.With("module", "abci-server"))
if err := server.Start(); err != nil {
@@ -326,23 +326,23 @@ func runClientTests(t *testing.T, client abcicli.Client) {
}
func testClient(t *testing.T, app abcicli.Client, tx []byte, key, value string) {
ar, err := app.DeliverTxSync(ctx, abci.RequestDeliverTx{Tx: tx})
ar, err := app.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: tx})
require.NoError(t, err)
require.False(t, ar.IsErr(), ar)
// repeating tx doesn't raise error
ar, err = app.DeliverTxSync(ctx, abci.RequestDeliverTx{Tx: tx})
ar, err = app.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: tx})
require.NoError(t, err)
require.False(t, ar.IsErr(), ar)
// commit
_, err = app.CommitSync(ctx)
require.NoError(t, err)
info, err := app.InfoSync(ctx, abci.RequestInfo{})
info, err := app.InfoSync(ctx, types.RequestInfo{})
require.NoError(t, err)
require.NotZero(t, info.LastBlockHeight)
// make sure query is fine
resQuery, err := app.QuerySync(ctx, abci.RequestQuery{
resQuery, err := app.QuerySync(ctx, types.RequestQuery{
Path: "/store",
Data: []byte(key),
})
@@ -353,7 +353,7 @@ func testClient(t *testing.T, app abcicli.Client, tx []byte, key, value string)
require.EqualValues(t, info.LastBlockHeight, resQuery.Height)
// make sure proof is fine
resQuery, err = app.QuerySync(ctx, abci.RequestQuery{
resQuery, err = app.QuerySync(ctx, types.RequestQuery{
Path: "/store",
Data: []byte(key),
Prove: true,

View File

@@ -10,9 +10,9 @@ import (
dbm "github.com/tendermint/tm-db"
"github.com/tendermint/tendermint/abci/example/code"
"github.com/tendermint/tendermint/abci/types"
cryptoenc "github.com/tendermint/tendermint/crypto/encoding"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/pkg/abci"
pc "github.com/tendermint/tendermint/proto/tendermint/crypto"
)
@@ -22,13 +22,13 @@ const (
//-----------------------------------------
var _ abci.Application = (*PersistentKVStoreApplication)(nil)
var _ types.Application = (*PersistentKVStoreApplication)(nil)
type PersistentKVStoreApplication struct {
app *Application
// validator set
ValUpdates []abci.ValidatorUpdate
ValUpdates []types.ValidatorUpdate
valAddrToPubKeyMap map[string]pc.PublicKey
@@ -59,7 +59,7 @@ func (app *PersistentKVStoreApplication) SetLogger(l log.Logger) {
app.logger = l
}
func (app *PersistentKVStoreApplication) Info(req abci.RequestInfo) abci.ResponseInfo {
func (app *PersistentKVStoreApplication) Info(req types.RequestInfo) types.ResponseInfo {
res := app.app.Info(req)
res.LastBlockHeight = app.app.state.Height
res.LastBlockAppHash = app.app.state.AppHash
@@ -67,7 +67,7 @@ func (app *PersistentKVStoreApplication) Info(req abci.RequestInfo) abci.Respons
}
// tx is either "val:pubkey!power" or "key=value" or just arbitrary bytes
func (app *PersistentKVStoreApplication) DeliverTx(req abci.RequestDeliverTx) abci.ResponseDeliverTx {
func (app *PersistentKVStoreApplication) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
// if it starts with "val:", update the validator set
// format is "val:pubkey!power"
if isValidatorTx(req.Tx) {
@@ -80,18 +80,18 @@ func (app *PersistentKVStoreApplication) DeliverTx(req abci.RequestDeliverTx) ab
return app.app.DeliverTx(req)
}
func (app *PersistentKVStoreApplication) CheckTx(req abci.RequestCheckTx) abci.ResponseCheckTx {
func (app *PersistentKVStoreApplication) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
return app.app.CheckTx(req)
}
// Commit will panic if InitChain was not called
func (app *PersistentKVStoreApplication) Commit() abci.ResponseCommit {
func (app *PersistentKVStoreApplication) Commit() types.ResponseCommit {
return app.app.Commit()
}
// When path=/val and data={validator address}, returns the validator update (abci.ValidatorUpdate) varint encoded.
// When path=/val and data={validator address}, returns the validator update (types.ValidatorUpdate) varint encoded.
// For any other path, returns an associated value or nil if missing.
func (app *PersistentKVStoreApplication) Query(reqQuery abci.RequestQuery) (resQuery abci.ResponseQuery) {
func (app *PersistentKVStoreApplication) Query(reqQuery types.RequestQuery) (resQuery types.ResponseQuery) {
switch reqQuery.Path {
case "/val":
key := []byte("val:" + string(reqQuery.Data))
@@ -109,27 +109,27 @@ func (app *PersistentKVStoreApplication) Query(reqQuery abci.RequestQuery) (resQ
}
// Save the validators in the merkle tree
func (app *PersistentKVStoreApplication) InitChain(req abci.RequestInitChain) abci.ResponseInitChain {
func (app *PersistentKVStoreApplication) InitChain(req types.RequestInitChain) types.ResponseInitChain {
for _, v := range req.Validators {
r := app.updateValidator(v)
if r.IsErr() {
app.logger.Error("Error updating validators", "r", r)
}
}
return abci.ResponseInitChain{}
return types.ResponseInitChain{}
}
// Track the block hash and header information
func (app *PersistentKVStoreApplication) BeginBlock(req abci.RequestBeginBlock) abci.ResponseBeginBlock {
func (app *PersistentKVStoreApplication) BeginBlock(req types.RequestBeginBlock) types.ResponseBeginBlock {
// reset valset changes
app.ValUpdates = make([]abci.ValidatorUpdate, 0)
app.ValUpdates = make([]types.ValidatorUpdate, 0)
// Punish validators who committed equivocation.
for _, ev := range req.ByzantineValidators {
if ev.Type == abci.EvidenceType_DUPLICATE_VOTE {
if ev.Type == types.EvidenceType_DUPLICATE_VOTE {
addr := string(ev.Validator.Address)
if pubKey, ok := app.valAddrToPubKeyMap[addr]; ok {
app.updateValidator(abci.ValidatorUpdate{
app.updateValidator(types.ValidatorUpdate{
PubKey: pubKey,
Power: ev.Validator.Power - 1,
})
@@ -142,46 +142,46 @@ func (app *PersistentKVStoreApplication) BeginBlock(req abci.RequestBeginBlock)
}
}
return abci.ResponseBeginBlock{}
return types.ResponseBeginBlock{}
}
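
A sketch of exercising that punishment path from a test, assuming addr, hash, and header are set up as in the kvstore tests; BeginBlock should dock one unit of power from the offending validator:

ev := types.Evidence{
	Type:      types.EvidenceType_DUPLICATE_VOTE,
	Validator: types.Validator{Address: addr, Power: 10},
}
app.BeginBlock(types.RequestBeginBlock{
	Hash:                hash,
	Header:              header,
	ByzantineValidators: []types.Evidence{ev},
})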
// Update the validator set
func (app *PersistentKVStoreApplication) EndBlock(req abci.RequestEndBlock) abci.ResponseEndBlock {
return abci.ResponseEndBlock{ValidatorUpdates: app.ValUpdates}
func (app *PersistentKVStoreApplication) EndBlock(req types.RequestEndBlock) types.ResponseEndBlock {
return types.ResponseEndBlock{ValidatorUpdates: app.ValUpdates}
}
func (app *PersistentKVStoreApplication) ListSnapshots(
req abci.RequestListSnapshots) abci.ResponseListSnapshots {
return abci.ResponseListSnapshots{}
req types.RequestListSnapshots) types.ResponseListSnapshots {
return types.ResponseListSnapshots{}
}
func (app *PersistentKVStoreApplication) LoadSnapshotChunk(
req abci.RequestLoadSnapshotChunk) abci.ResponseLoadSnapshotChunk {
return abci.ResponseLoadSnapshotChunk{}
req types.RequestLoadSnapshotChunk) types.ResponseLoadSnapshotChunk {
return types.ResponseLoadSnapshotChunk{}
}
func (app *PersistentKVStoreApplication) OfferSnapshot(
req abci.RequestOfferSnapshot) abci.ResponseOfferSnapshot {
return abci.ResponseOfferSnapshot{Result: abci.ResponseOfferSnapshot_ABORT}
req types.RequestOfferSnapshot) types.ResponseOfferSnapshot {
return types.ResponseOfferSnapshot{Result: types.ResponseOfferSnapshot_ABORT}
}
func (app *PersistentKVStoreApplication) ApplySnapshotChunk(
req abci.RequestApplySnapshotChunk) abci.ResponseApplySnapshotChunk {
return abci.ResponseApplySnapshotChunk{Result: abci.ResponseApplySnapshotChunk_ABORT}
req types.RequestApplySnapshotChunk) types.ResponseApplySnapshotChunk {
return types.ResponseApplySnapshotChunk{Result: types.ResponseApplySnapshotChunk_ABORT}
}
//---------------------------------------------
// update validators
func (app *PersistentKVStoreApplication) Validators() (validators []abci.ValidatorUpdate) {
func (app *PersistentKVStoreApplication) Validators() (validators []types.ValidatorUpdate) {
itr, err := app.app.state.db.Iterator(nil, nil)
if err != nil {
panic(err)
}
for ; itr.Valid(); itr.Next() {
if isValidatorTx(itr.Key()) {
validator := new(abci.ValidatorUpdate)
err := abci.ReadMessage(bytes.NewBuffer(itr.Value()), validator)
validator := new(types.ValidatorUpdate)
err := types.ReadMessage(bytes.NewBuffer(itr.Value()), validator)
if err != nil {
panic(err)
}
@@ -209,13 +209,13 @@ func isValidatorTx(tx []byte) bool {
// format is "val:pubkey!power"
// pubkey is a base64-encoded 32-byte ed25519 key
func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) abci.ResponseDeliverTx {
func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) types.ResponseDeliverTx {
tx = tx[len(ValidatorSetChangePrefix):]
// get the pubkey and power
pubKeyAndPower := strings.Split(string(tx), "!")
if len(pubKeyAndPower) != 2 {
return abci.ResponseDeliverTx{
return types.ResponseDeliverTx{
Code: code.CodeTypeEncodingError,
Log: fmt.Sprintf("Expected 'pubkey!power'. Got %v", pubKeyAndPower)}
}
@@ -224,7 +224,7 @@ func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) abci.Respons
// decode the pubkey
pubkey, err := base64.StdEncoding.DecodeString(pubkeyS)
if err != nil {
return abci.ResponseDeliverTx{
return types.ResponseDeliverTx{
Code: code.CodeTypeEncodingError,
Log: fmt.Sprintf("Pubkey (%s) is invalid base64", pubkeyS)}
}
@@ -232,17 +232,17 @@ func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) abci.Respons
// decode the power
power, err := strconv.ParseInt(powerS, 10, 64)
if err != nil {
return abci.ResponseDeliverTx{
return types.ResponseDeliverTx{
Code: code.CodeTypeEncodingError,
Log: fmt.Sprintf("Power (%s) is not an int", powerS)}
}
// update
return app.updateValidator(abci.UpdateValidator(pubkey, power, ""))
return app.updateValidator(types.UpdateValidator(pubkey, power, ""))
}
// add, update, or remove a validator
func (app *PersistentKVStoreApplication) updateValidator(v abci.ValidatorUpdate) abci.ResponseDeliverTx {
func (app *PersistentKVStoreApplication) updateValidator(v types.ValidatorUpdate) types.ResponseDeliverTx {
pubkey, err := cryptoenc.PubKeyFromProto(v.PubKey)
if err != nil {
panic(fmt.Errorf("can't decode public key: %w", err))
@@ -257,7 +257,7 @@ func (app *PersistentKVStoreApplication) updateValidator(v abci.ValidatorUpdate)
}
if !hasKey {
pubStr := base64.StdEncoding.EncodeToString(pubkey.Bytes())
return abci.ResponseDeliverTx{
return types.ResponseDeliverTx{
Code: code.CodeTypeUnauthorized,
Log: fmt.Sprintf("Cannot remove non-existent validator %s", pubStr)}
}
@@ -268,8 +268,8 @@ func (app *PersistentKVStoreApplication) updateValidator(v abci.ValidatorUpdate)
} else {
// add or update validator
value := bytes.NewBuffer(make([]byte, 0))
if err := abci.WriteMessage(&v, value); err != nil {
return abci.ResponseDeliverTx{
if err := types.WriteMessage(&v, value); err != nil {
return types.ResponseDeliverTx{
Code: code.CodeTypeEncodingError,
Log: fmt.Sprintf("Error encoding validator: %v", err)}
}
@@ -282,5 +282,5 @@ func (app *PersistentKVStoreApplication) updateValidator(v abci.ValidatorUpdate)
// we only update the changes array if we successfully updated the tree
app.ValUpdates = append(app.ValUpdates, v)
return abci.ResponseDeliverTx{Code: code.CodeTypeOK}
return types.ResponseDeliverTx{Code: code.CodeTypeOK}
}
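
Tying this together, a sketch of a well-formed valset-change tx in the "val:pubkey!power" format that execValidatorTx parses (the zeroed key is a placeholder for a real ed25519 public key):

pubkey := make([]byte, 32)
tx := fmt.Sprintf("val:%s!%d", base64.StdEncoding.EncodeToString(pubkey), 10)
res := app.DeliverTx(types.RequestDeliverTx{Tx: []byte(tx)})
if res.IsErr() {
	// CodeTypeEncodingError signals a malformed pubkey or power
}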

View File

@@ -5,9 +5,9 @@ import (
"google.golang.org/grpc"
"github.com/tendermint/tendermint/abci/types"
tmnet "github.com/tendermint/tendermint/libs/net"
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/pkg/abci"
)
type GRPCServer struct {
@@ -18,11 +18,11 @@ type GRPCServer struct {
listener net.Listener
server *grpc.Server
app abci.ABCIApplicationServer
app types.ABCIApplicationServer
}
// NewGRPCServer returns a new gRPC ABCI server
func NewGRPCServer(protoAddr string, app abci.ABCIApplicationServer) service.Service {
func NewGRPCServer(protoAddr string, app types.ABCIApplicationServer) service.Service {
proto, addr := tmnet.ProtocolAndAddress(protoAddr)
s := &GRPCServer{
proto: proto,
@@ -44,7 +44,7 @@ func (s *GRPCServer) OnStart() error {
s.listener = ln
s.server = grpc.NewServer()
abci.RegisterABCIApplicationServer(s.server, s.app)
types.RegisterABCIApplicationServer(s.server, s.app)
s.Logger.Info("Listening", "proto", s.proto, "addr", s.addr)
go func() {

View File

@@ -11,18 +11,18 @@ package server
import (
"fmt"
"github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/pkg/abci"
)
func NewServer(protoAddr, transport string, app abci.Application) (service.Service, error) {
func NewServer(protoAddr, transport string, app types.Application) (service.Service, error) {
var s service.Service
var err error
switch transport {
case "socket":
s = NewSocketServer(protoAddr, app)
case "grpc":
s = NewGRPCServer(protoAddr, abci.NewGRPCApplication(app))
s = NewGRPCServer(protoAddr, types.NewGRPCApplication(app))
default:
err = fmt.Errorf("unknown server type %s", transport)
}
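
A typical call site for NewServer, sketched with the kvstore app used elsewhere in this diff (the address is hypothetical):

app := kvstore.NewApplication()
srv, err := server.NewServer("tcp://127.0.0.1:26658", "socket", app)
if err != nil {
	log.Fatal(err)
}
if err := srv.Start(); err != nil {
	log.Fatal(err)
}
defer srv.Stop() //nolint:errcheck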

View File

@@ -8,11 +8,11 @@ import (
"os"
"runtime"
"github.com/tendermint/tendermint/abci/types"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
tmlog "github.com/tendermint/tendermint/libs/log"
tmnet "github.com/tendermint/tendermint/libs/net"
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/pkg/abci"
)
// var maxNumberConnections = 2
@@ -30,10 +30,10 @@ type SocketServer struct {
nextConnID int
appMtx tmsync.Mutex
app abci.Application
app types.Application
}
func NewSocketServer(protoAddr string, app abci.Application) service.Service {
func NewSocketServer(protoAddr string, app types.Application) service.Service {
proto, addr := tmnet.ProtocolAndAddress(protoAddr)
s := &SocketServer{
proto: proto,
@@ -120,8 +120,8 @@ func (s *SocketServer) acceptConnectionsRoutine() {
connID := s.addConn(conn)
closeConn := make(chan error, 2) // Push to signal connection closed
responses := make(chan *abci.Response, 1000) // A channel to buffer responses
closeConn := make(chan error, 2) // Push to signal connection closed
responses := make(chan *types.Response, 1000) // A channel to buffer responses
// Read requests from conn and deal with them
go s.handleRequests(closeConn, conn, responses)
@@ -152,7 +152,7 @@ func (s *SocketServer) waitForClose(closeConn chan error, connID int) {
}
// Read requests from conn and deal with them
func (s *SocketServer) handleRequests(closeConn chan error, conn io.Reader, responses chan<- *abci.Response) {
func (s *SocketServer) handleRequests(closeConn chan error, conn io.Reader, responses chan<- *types.Response) {
var count int
var bufReader = bufio.NewReader(conn)
@@ -174,8 +174,8 @@ func (s *SocketServer) handleRequests(closeConn chan error, conn io.Reader, resp
for {
var req = &abci.Request{}
err := abci.ReadMessage(bufReader, req)
var req = &types.Request{}
err := types.ReadMessage(bufReader, req)
if err != nil {
if err == io.EOF {
closeConn <- err
@@ -191,65 +191,65 @@ func (s *SocketServer) handleRequests(closeConn chan error, conn io.Reader, resp
}
}
func (s *SocketServer) handleRequest(req *abci.Request, responses chan<- *abci.Response) {
func (s *SocketServer) handleRequest(req *types.Request, responses chan<- *types.Response) {
switch r := req.Value.(type) {
case *abci.Request_Echo:
responses <- abci.ToResponseEcho(r.Echo.Message)
case *abci.Request_Flush:
responses <- abci.ToResponseFlush()
case *abci.Request_Info:
case *types.Request_Echo:
responses <- types.ToResponseEcho(r.Echo.Message)
case *types.Request_Flush:
responses <- types.ToResponseFlush()
case *types.Request_Info:
res := s.app.Info(*r.Info)
responses <- abci.ToResponseInfo(res)
case *abci.Request_DeliverTx:
responses <- types.ToResponseInfo(res)
case *types.Request_DeliverTx:
res := s.app.DeliverTx(*r.DeliverTx)
responses <- abci.ToResponseDeliverTx(res)
case *abci.Request_CheckTx:
responses <- types.ToResponseDeliverTx(res)
case *types.Request_CheckTx:
res := s.app.CheckTx(*r.CheckTx)
responses <- abci.ToResponseCheckTx(res)
case *abci.Request_Commit:
responses <- types.ToResponseCheckTx(res)
case *types.Request_Commit:
res := s.app.Commit()
responses <- abci.ToResponseCommit(res)
case *abci.Request_Query:
responses <- types.ToResponseCommit(res)
case *types.Request_Query:
res := s.app.Query(*r.Query)
responses <- abci.ToResponseQuery(res)
case *abci.Request_InitChain:
responses <- types.ToResponseQuery(res)
case *types.Request_InitChain:
res := s.app.InitChain(*r.InitChain)
responses <- abci.ToResponseInitChain(res)
case *abci.Request_BeginBlock:
responses <- types.ToResponseInitChain(res)
case *types.Request_BeginBlock:
res := s.app.BeginBlock(*r.BeginBlock)
responses <- abci.ToResponseBeginBlock(res)
case *abci.Request_EndBlock:
responses <- types.ToResponseBeginBlock(res)
case *types.Request_EndBlock:
res := s.app.EndBlock(*r.EndBlock)
responses <- abci.ToResponseEndBlock(res)
case *abci.Request_ListSnapshots:
responses <- types.ToResponseEndBlock(res)
case *types.Request_ListSnapshots:
res := s.app.ListSnapshots(*r.ListSnapshots)
responses <- abci.ToResponseListSnapshots(res)
case *abci.Request_OfferSnapshot:
responses <- types.ToResponseListSnapshots(res)
case *types.Request_OfferSnapshot:
res := s.app.OfferSnapshot(*r.OfferSnapshot)
responses <- abci.ToResponseOfferSnapshot(res)
case *abci.Request_LoadSnapshotChunk:
responses <- types.ToResponseOfferSnapshot(res)
case *types.Request_LoadSnapshotChunk:
res := s.app.LoadSnapshotChunk(*r.LoadSnapshotChunk)
responses <- abci.ToResponseLoadSnapshotChunk(res)
case *abci.Request_ApplySnapshotChunk:
responses <- types.ToResponseLoadSnapshotChunk(res)
case *types.Request_ApplySnapshotChunk:
res := s.app.ApplySnapshotChunk(*r.ApplySnapshotChunk)
responses <- abci.ToResponseApplySnapshotChunk(res)
responses <- types.ToResponseApplySnapshotChunk(res)
default:
responses <- abci.ToResponseException("Unknown request")
responses <- types.ToResponseException("Unknown request")
}
}
// Pull responses from 'responses' and write them to conn.
func (s *SocketServer) handleResponses(closeConn chan error, conn io.Writer, responses <-chan *abci.Response) {
func (s *SocketServer) handleResponses(closeConn chan error, conn io.Writer, responses <-chan *types.Response) {
var count int
var bufWriter = bufio.NewWriter(conn)
for {
var res = <-responses
err := abci.WriteMessage(res, bufWriter)
err := types.WriteMessage(res, bufWriter)
if err != nil {
closeConn <- fmt.Errorf("error writing message: %w", err)
return
}
if _, ok := res.Value.(*abci.Response_Flush); ok {
if _, ok := res.Value.(*types.Response_Flush); ok {
err = bufWriter.Flush()
if err != nil {
closeConn <- fmt.Errorf("error flushing write buffer: %w", err)

View File

@@ -5,8 +5,8 @@ import (
"fmt"
"log"
"github.com/tendermint/tendermint/abci/types"
tmnet "github.com/tendermint/tendermint/libs/net"
"github.com/tendermint/tendermint/pkg/abci"
)
func main() {
@@ -20,8 +20,8 @@ func main() {
go func() {
counter := 0
for {
var res = &abci.Response{}
err := abci.ReadMessage(conn, res)
var res = &types.Response{}
err := types.ReadMessage(conn, res)
if err != nil {
log.Fatal(err.Error())
}
@@ -36,9 +36,9 @@ func main() {
counter := 0
for i := 0; ; i++ {
var bufWriter = bufio.NewWriter(conn)
var req = abci.ToRequestEcho("foobar")
var req = types.ToRequestEcho("foobar")
err := abci.WriteMessage(req, bufWriter)
err := types.WriteMessage(req, bufWriter)
if err != nil {
log.Fatal(err.Error())
}

View File

@@ -7,8 +7,8 @@ import (
"log"
"reflect"
"github.com/tendermint/tendermint/abci/types"
tmnet "github.com/tendermint/tendermint/libs/net"
"github.com/tendermint/tendermint/pkg/abci"
)
func main() {
@@ -21,7 +21,7 @@ func main() {
// Make a bunch of requests
counter := 0
for i := 0; ; i++ {
req := abci.ToRequestEcho("foobar")
req := types.ToRequestEcho("foobar")
_, err := makeRequest(conn, req)
if err != nil {
log.Fatal(err.Error())
@@ -33,15 +33,15 @@ func main() {
}
}
func makeRequest(conn io.ReadWriter, req *abci.Request) (*abci.Response, error) {
func makeRequest(conn io.ReadWriter, req *types.Request) (*types.Response, error) {
var bufWriter = bufio.NewWriter(conn)
// Write desired request
err := abci.WriteMessage(req, bufWriter)
err := types.WriteMessage(req, bufWriter)
if err != nil {
return nil, err
}
err = abci.WriteMessage(abci.ToRequestFlush(), bufWriter)
err = types.WriteMessage(types.ToRequestFlush(), bufWriter)
if err != nil {
return nil, err
}
@@ -51,17 +51,17 @@ func makeRequest(conn io.ReadWriter, req *abci.Request) (*abci.Response, error)
}
// Read desired response
var res = &abci.Response{}
err = abci.ReadMessage(conn, res)
var res = &types.Response{}
err = types.ReadMessage(conn, res)
if err != nil {
return nil, err
}
var resFlush = &abci.Response{}
err = abci.ReadMessage(conn, resFlush)
var resFlush = &types.Response{}
err = types.ReadMessage(conn, resFlush)
if err != nil {
return nil, err
}
if _, ok := resFlush.Value.(*abci.Response_Flush); !ok {
if _, ok := resFlush.Value.(*types.Response_Flush); !ok {
return nil, fmt.Errorf("expected flush response but got something else: %v", reflect.TypeOf(resFlush))
}
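
A usage sketch for makeRequest, mirroring how the surrounding example dials (the address is hypothetical):

conn, err := tmnet.Connect("tcp://127.0.0.1:26658")
if err != nil {
	log.Fatal(err.Error())
}
res, err := makeRequest(conn, types.ToRequestEcho("hello"))
if err != nil {
	log.Fatal(err.Error())
}
fmt.Println(res.GetEcho().Message)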

View File

@@ -8,22 +8,22 @@ import (
mrand "math/rand"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/types"
tmrand "github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/pkg/abci"
)
var ctx = context.Background()
func InitChain(client abcicli.Client) error {
total := 10
vals := make([]abci.ValidatorUpdate, total)
vals := make([]types.ValidatorUpdate, total)
for i := 0; i < total; i++ {
pubkey := tmrand.Bytes(33)
// nolint:gosec // G404: Use of weak random number generator
power := mrand.Int()
vals[i] = abci.UpdateValidator(pubkey, int64(power), "")
vals[i] = types.UpdateValidator(pubkey, int64(power), "")
}
_, err := client.InitChainSync(ctx, abci.RequestInitChain{
_, err := client.InitChainSync(ctx, types.RequestInitChain{
Validators: vals,
})
if err != nil {
@@ -52,7 +52,7 @@ func Commit(client abcicli.Client, hashExp []byte) error {
}
func DeliverTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
res, _ := client.DeliverTxSync(ctx, abci.RequestDeliverTx{Tx: txBytes})
res, _ := client.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: txBytes})
code, data, log := res.Code, res.Data, res.Log
if code != codeExp {
fmt.Println("Failed test: DeliverTx")
@@ -71,7 +71,7 @@ func DeliverTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []
}
func CheckTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
res, _ := client.CheckTxSync(ctx, abci.RequestCheckTx{Tx: txBytes})
res, _ := client.CheckTxSync(ctx, types.RequestCheckTx{Tx: txBytes})
code, data, log := res.Code, res.Data, res.Log
if code != codeExp {
fmt.Println("Failed test: CheckTx")

View File

@@ -1,4 +1,4 @@
package abci
package types
import (
"context"

View File

@@ -1,4 +1,4 @@
package abci
package types
import (
"io"

View File

@@ -1,4 +1,4 @@
package abci
package types
import (
"bytes"

View File

@@ -1,4 +1,4 @@
package abci
package types
import (
fmt "fmt"

View File

@@ -1,4 +1,4 @@
package abci
package types
import (
"bytes"

View File

@@ -1,7 +1,7 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: tendermint/abci/types.proto
package abci
package types
import (
context "context"

View File

@@ -1,4 +1,4 @@
package abci
package types
import (
"sort"

View File

@@ -6,7 +6,7 @@ import (
"github.com/spf13/cobra"
tmjson "github.com/tendermint/tendermint/libs/json"
"github.com/tendermint/tendermint/pkg/p2p"
"github.com/tendermint/tendermint/types"
)
// GenNodeKeyCmd allows the generation of a node key. It prints JSON-encoded
@@ -18,7 +18,7 @@ var GenNodeKeyCmd = &cobra.Command{
}
func genNodeKey(cmd *cobra.Command, args []string) error {
nodeKey := p2p.GenNodeKey()
nodeKey := types.GenNodeKey()
bz, err := tmjson.Marshal(nodeKey)
if err != nil {

View File

@@ -6,8 +6,8 @@ import (
"github.com/spf13/cobra"
tmjson "github.com/tendermint/tendermint/libs/json"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/types"
)
// GenValidatorCmd allows the generation of a keypair for a
@@ -19,7 +19,7 @@ var GenValidatorCmd = &cobra.Command{
}
func init() {
GenValidatorCmd.Flags().StringVar(&keyType, "key", consensus.ABCIPubKeyTypeEd25519,
GenValidatorCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")
}

View File

@@ -11,9 +11,8 @@ import (
tmos "github.com/tendermint/tendermint/libs/os"
tmrand "github.com/tendermint/tendermint/libs/rand"
tmtime "github.com/tendermint/tendermint/libs/time"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/p2p"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/types"
)
// InitFilesCmd initializes a fresh Tendermint Core instance.
@@ -31,7 +30,7 @@ var (
)
func init() {
InitFilesCmd.Flags().StringVar(&keyType, "key", consensus.ABCIPubKeyTypeEd25519,
InitFilesCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")
}
@@ -76,7 +75,7 @@ func initFilesWithConfig(config *cfg.Config) error {
if tmos.FileExists(nodeKeyFile) {
logger.Info("Found node key", "path", nodeKeyFile)
} else {
if _, err := p2p.LoadOrGenNodeKey(nodeKeyFile); err != nil {
if _, err := types.LoadOrGenNodeKey(nodeKeyFile); err != nil {
return err
}
logger.Info("Generated node key", "path", nodeKeyFile)
@@ -88,14 +87,14 @@ func initFilesWithConfig(config *cfg.Config) error {
logger.Info("Found genesis file", "path", genFile)
} else {
genDoc := consensus.GenesisDoc{
genDoc := types.GenesisDoc{
ChainID: fmt.Sprintf("test-chain-%v", tmrand.Str(6)),
GenesisTime: tmtime.Now(),
ConsensusParams: consensus.DefaultConsensusParams(),
ConsensusParams: types.DefaultConsensusParams(),
}
if keyType == "secp256k1" {
genDoc.ConsensusParams.Validator = consensus.ValidatorParams{
PubKeyTypes: []string{consensus.ABCIPubKeyTypeSecp256k1},
genDoc.ConsensusParams.Validator = types.ValidatorParams{
PubKeyTypes: []string{types.ABCIPubKeyTypeSecp256k1},
}
}
@@ -108,7 +107,7 @@ func initFilesWithConfig(config *cfg.Config) error {
if err != nil {
return fmt.Errorf("can't get pubkey: %w", err)
}
genDoc.Validators = []consensus.GenesisValidator{{
genDoc.Validators = []types.GenesisValidator{{
Address: pubKey.Address(),
PubKey: pubKey,
Power: 10,

View File

@@ -0,0 +1,87 @@
package commands
import (
"context"
"os"
"os/signal"
"syscall"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/inspect"
"github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/state/indexer/sink"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
)
// InspectCmd is the command for starting an inspect server.
var InspectCmd = &cobra.Command{
Use: "inspect",
Short: "Run an inspect server for investigating Tendermint state",
Long: `
inspect runs a subset of Tendermint's RPC endpoints that are useful for debugging
issues with Tendermint.
When the Tendermint consensus engine detects inconsistent state, it will crash the
tendermint process. Tendermint will not start up while in this inconsistent state.
The inspect command can be used to query the block and state store using Tendermint
RPC calls to debug issues of inconsistent state.
`,
RunE: runInspect,
}
func init() {
InspectCmd.Flags().
String("rpc.laddr",
config.RPC.ListenAddress, "RPC listener address. Port required")
InspectCmd.Flags().
String("db-backend",
config.DBBackend, "database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb")
InspectCmd.Flags().
String("db-dir", config.DBPath, "database directory")
}
func runInspect(cmd *cobra.Command, args []string) error {
ctx, cancel := context.WithCancel(cmd.Context())
defer cancel()
c := make(chan os.Signal, 1)
signal.Notify(c, syscall.SIGTERM, syscall.SIGINT)
go func() {
<-c
cancel()
}()
blockStoreDB, err := cfg.DefaultDBProvider(&cfg.DBContext{ID: "blockstore", Config: config})
if err != nil {
return err
}
blockStore := store.NewBlockStore(blockStoreDB)
stateDB, err := cfg.DefaultDBProvider(&cfg.DBContext{ID: "state", Config: config})
if err != nil {
if err := blockStoreDB.Close(); err != nil {
logger.Error("error closing block store db", "error", err)
}
return err
}
genDoc, err := types.GenesisDocFromFile(config.GenesisFile())
if err != nil {
return err
}
sinks, err := sink.EventSinksFromConfig(config, cfg.DefaultDBProvider, genDoc.ChainID)
if err != nil {
return err
}
stateStore := state.NewStore(stateDB)
ins := inspect.New(config.RPC, blockStore, stateStore, sinks, logger)
logger.Info("starting inspect server")
if err := ins.Run(ctx); err != nil {
return err
}
return nil
}

View File

@@ -8,16 +8,16 @@ import (
"github.com/spf13/cobra"
tmdb "github.com/tendermint/tm-db"
abcitypes "github.com/tendermint/tendermint/abci/types"
tmcfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/libs/progressbar"
"github.com/tendermint/tendermint/pkg/abci"
"github.com/tendermint/tendermint/pkg/events"
ctypes "github.com/tendermint/tendermint/rpc/core/types"
"github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/state/indexer"
"github.com/tendermint/tendermint/state/indexer/sink/kv"
"github.com/tendermint/tendermint/state/indexer/sink/psql"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
)
const (
@@ -31,7 +31,7 @@ var ReIndexEventCmd = &cobra.Command{
Long: `
reindex-event is an offline tool to re-index block and tx events to the event sinks.
You can run this command when the event store backend has been dropped/disconnected or you want to replace the backend.
The default start-height is 0, meaning the tool will start reindexing from the base block height (inclusive); and the
default end-height is 0, meaning the tool will reindex until the latest block height (inclusive). Users can omit
either or both arguments.
`,
@@ -106,7 +106,7 @@ func loadEventSinks(cfg *tmcfg.Config) ([]indexer.EventSink, error) {
if conn == "" {
return nil, errors.New("the psql connection settings cannot be empty")
}
es, _, err := psql.NewEventSink(conn, chainID)
es, err := psql.NewEventSink(conn, chainID)
if err != nil {
return nil, err
}
@@ -170,7 +170,7 @@ func eventReIndex(cmd *cobra.Command, es []indexer.EventSink, bs state.BlockStor
return fmt.Errorf("not able to load ABCI Response at height %d from the statestore", i)
}
e := events.EventDataNewBlockHeader{
e := types.EventDataNewBlockHeader{
Header: b.Header,
NumTxs: int64(len(b.Txs)),
ResultBeginBlock: *r.BeginBlock,
@@ -182,7 +182,7 @@ func eventReIndex(cmd *cobra.Command, es []indexer.EventSink, bs state.BlockStor
batch = indexer.NewBatch(e.NumTxs)
for i, tx := range b.Data.Txs {
tr := abci.TxResult{
tr := abcitypes.TxResult{
Height: b.Height,
Index: uint32(i),
Tx: tx,

View File

@@ -9,14 +9,12 @@ import (
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
abcitypes "github.com/tendermint/tendermint/abci/types"
tmcfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/pkg/abci"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/mempool"
prototmstate "github.com/tendermint/tendermint/proto/tendermint/state"
"github.com/tendermint/tendermint/state/indexer"
evmocks "github.com/tendermint/tendermint/state/indexer/mocks"
"github.com/tendermint/tendermint/state/mocks"
"github.com/tendermint/tendermint/types"
)
const (
@@ -118,27 +116,27 @@ func TestLoadBlockStore(t *testing.T) {
func TestReIndexEvent(t *testing.T) {
mockBlockStore := &mocks.BlockStore{}
mockStateStore := &mocks.Store{}
mockEventSink := &evmocks.EventSink{}
mockEventSink := &mocks.EventSink{}
mockBlockStore.
On("Base").Return(base).
On("Height").Return(height).
On("LoadBlock", base).Return(nil).Once().
On("LoadBlock", base).Return(&block.Block{Data: block.Data{Txs: mempool.Txs{make(mempool.Tx, 1)}}}).
On("LoadBlock", height).Return(&block.Block{Data: block.Data{Txs: mempool.Txs{make(mempool.Tx, 1)}}})
On("LoadBlock", base).Return(&types.Block{Data: types.Data{Txs: types.Txs{make(types.Tx, 1)}}}).
On("LoadBlock", height).Return(&types.Block{Data: types.Data{Txs: types.Txs{make(types.Tx, 1)}}})
mockEventSink.
On("Type").Return(indexer.KV).
On("IndexBlockEvents", mock.AnythingOfType("events.EventDataNewBlockHeader")).Return(errors.New("")).Once().
On("IndexBlockEvents", mock.AnythingOfType("events.EventDataNewBlockHeader")).Return(nil).
On("IndexTxEvents", mock.AnythingOfType("[]*abci.TxResult")).Return(errors.New("")).Once().
On("IndexTxEvents", mock.AnythingOfType("[]*abci.TxResult")).Return(nil)
On("IndexBlockEvents", mock.AnythingOfType("types.EventDataNewBlockHeader")).Return(errors.New("")).Once().
On("IndexBlockEvents", mock.AnythingOfType("types.EventDataNewBlockHeader")).Return(nil).
On("IndexTxEvents", mock.AnythingOfType("[]*types.TxResult")).Return(errors.New("")).Once().
On("IndexTxEvents", mock.AnythingOfType("[]*types.TxResult")).Return(nil)
dtx := abci.ResponseDeliverTx{}
dtx := abcitypes.ResponseDeliverTx{}
abciResp := &prototmstate.ABCIResponses{
DeliverTxs: []*abci.ResponseDeliverTx{&dtx},
EndBlock: &abci.ResponseEndBlock{},
BeginBlock: &abci.ResponseBeginBlock{},
DeliverTxs: []*abcitypes.ResponseDeliverTx{&dtx},
EndBlock: &abcitypes.ResponseEndBlock{},
BeginBlock: &abcitypes.ResponseBeginBlock{},
}
mockStateStore.

View File

@@ -7,8 +7,8 @@ import (
"github.com/tendermint/tendermint/libs/log"
tmos "github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/types"
)
// ResetAllCmd removes the database of this Tendermint core
@@ -23,7 +23,7 @@ var keepAddrBook bool
func init() {
ResetAllCmd.Flags().BoolVar(&keepAddrBook, "keep-addr-book", false, "keep the address book intact")
ResetPrivValidatorCmd.Flags().StringVar(&keyType, "key", consensus.ABCIPubKeyTypeEd25519,
ResetPrivValidatorCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")
}

View File

@@ -3,6 +3,8 @@ package commands
import (
"bytes"
"crypto/sha256"
"errors"
"flag"
"fmt"
"io"
"os"
@@ -33,7 +35,22 @@ func AddNodeFlags(cmd *cobra.Command) {
"socket address to listen on for connections from external priv-validator process")
// node flags
cmd.Flags().Bool("fast-sync", config.FastSyncMode, "fast blockchain syncing")
cmd.Flags().Bool("blocksync.enable", config.BlockSync.Enable, "enable fast blockchain syncing")
// TODO (https://github.com/tendermint/tendermint/issues/6908): remove this check after the v0.35 release cycle
// This check was added to give users an upgrade prompt to use the new flag for syncing.
//
// The pflag package does not have a native way to print a deprecation warning
// and return an error. This logic was added to print a deprecation message to the user
// and then crash if the user attempts to use the old --fast-sync flag.
fs := flag.NewFlagSet("", flag.ExitOnError)
fs.Func("fast-sync", "deprecated",
func(string) error {
return errors.New("--fast-sync has been deprecated, please use --blocksync.enable")
})
cmd.Flags().AddGoFlagSet(fs)
cmd.Flags().MarkHidden("fast-sync") //nolint:errcheck
cmd.Flags().BytesHexVar(
&genesisHash,
"genesis-hash",
@@ -158,7 +175,7 @@ func checkGenesisHash(config *cfg.Config) error {
// Compare with the flag.
if !bytes.Equal(genesisHash, actualHash) {
return fmt.Errorf(
"--genesis_hash=%X does not match %s hash: %X",
"--genesis-hash=%X does not match %s hash: %X",
genesisHash, config.GenesisFile(), actualHash)
}
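
For operators, the value to pass via --genesis-hash can be derived by hashing the genesis file directly. A minimal sketch, assuming the expected hash is the SHA-256 digest of the raw file contents (the path below is illustrative):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"os"
)

func main() {
	// Read the genesis file and print its SHA-256 digest in the uppercase
	// hex form that the --genesis-hash comparison above expects.
	data, err := os.ReadFile("genesis.json") // illustrative path
	if err != nil {
		panic(err)
	}
	fmt.Printf("%X\n", sha256.Sum256(data))
}
```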

View File

@@ -15,8 +15,8 @@ import (
"github.com/tendermint/tendermint/libs/bytes"
tmrand "github.com/tendermint/tendermint/libs/rand"
tmtime "github.com/tendermint/tendermint/libs/time"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/types"
)
var (
@@ -74,7 +74,7 @@ func init() {
"P2P Port")
TestnetFilesCmd.Flags().BoolVar(&randomMonikers, "random-monikers", false,
"randomize the moniker for each generated node")
TestnetFilesCmd.Flags().StringVar(&keyType, "key", consensus.ABCIPubKeyTypeEd25519,
TestnetFilesCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")
}
@@ -121,7 +121,7 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
}
}
genVals := make([]consensus.GenesisValidator, nValidators)
genVals := make([]types.GenesisValidator, nValidators)
for i := 0; i < nValidators; i++ {
nodeDirName := fmt.Sprintf("%s%d", nodeDirPrefix, i)
@@ -157,7 +157,7 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
if err != nil {
return fmt.Errorf("can't get pubkey: %w", err)
}
genVals[i] = consensus.GenesisValidator{
genVals[i] = types.GenesisValidator{
Address: pubKey.Address(),
PubKey: pubKey,
Power: 1,
@@ -187,16 +187,16 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
}
// Generate genesis doc from generated validators
genDoc := &consensus.GenesisDoc{
genDoc := &types.GenesisDoc{
ChainID: "chain-" + tmrand.Str(6),
GenesisTime: tmtime.Now(),
InitialHeight: initialHeight,
Validators: genVals,
ConsensusParams: consensus.DefaultConsensusParams(),
ConsensusParams: types.DefaultConsensusParams(),
}
if keyType == "secp256k1" {
genDoc.ConsensusParams.Validator = consensus.ValidatorParams{
PubKeyTypes: []string{consensus.ABCIPubKeyTypeSecp256k1},
genDoc.ConsensusParams.Validator = types.ValidatorParams{
PubKeyTypes: []string{types.ABCIPubKeyTypeSecp256k1},
}
}

View File

@@ -28,6 +28,7 @@ func main() {
cmd.ShowNodeIDCmd,
cmd.GenNodeKeyCmd,
cmd.VersionCmd,
cmd.InspectCmd,
cmd.MakeKeyMigrateCommand(),
debug.DebugCmd,
cli.NewCompletionCmd(rootCmd, true),

View File

@@ -13,7 +13,7 @@ import (
tmjson "github.com/tendermint/tendermint/libs/json"
"github.com/tendermint/tendermint/libs/log"
tmos "github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/pkg/p2p"
"github.com/tendermint/tendermint/types"
)
const (
@@ -76,7 +76,7 @@ type Config struct {
P2P *P2PConfig `mapstructure:"p2p"`
Mempool *MempoolConfig `mapstructure:"mempool"`
StateSync *StateSyncConfig `mapstructure:"statesync"`
BlockSync *BlockSyncConfig `mapstructure:"fastsync"`
BlockSync *BlockSyncConfig `mapstructure:"blocksync"`
Consensus *ConsensusConfig `mapstructure:"consensus"`
TxIndex *TxIndexConfig `mapstructure:"tx-index"`
Instrumentation *InstrumentationConfig `mapstructure:"instrumentation"`
@@ -152,7 +152,7 @@ func (cfg *Config) ValidateBasic() error {
return fmt.Errorf("error in [statesync] section: %w", err)
}
if err := cfg.BlockSync.ValidateBasic(); err != nil {
return fmt.Errorf("error in [fastsync] section: %w", err)
return fmt.Errorf("error in [blocksync] section: %w", err)
}
if err := cfg.Consensus.ValidateBasic(); err != nil {
return fmt.Errorf("error in [consensus] section: %w", err)
@@ -194,12 +194,6 @@ type BaseConfig struct { //nolint: maligned
// - No priv_validator_key.json, priv_validator_state.json
Mode string `mapstructure:"mode"`
// If this node is many blocks behind the tip of the chain, FastSync
// allows them to catchup quickly by downloading blocks in parallel
// and verifying their commits
// TODO: This should be moved to the blocksync config
FastSyncMode bool `mapstructure:"fast-sync"`
// Database backend: goleveldb | cleveldb | boltdb | rocksdb
// * goleveldb (github.com/syndtr/goleveldb - most popular implementation)
// - pure go
@@ -242,23 +236,24 @@ type BaseConfig struct { //nolint: maligned
// If true, query the ABCI app on connecting to a new peer
// so the app can decide if we should keep the connection or not
FilterPeers bool `mapstructure:"filter-peers"` // false
Other map[string]interface{} `mapstructure:",remain"`
}
// DefaultBaseConfig returns a default base configuration for a Tendermint node
func DefaultBaseConfig() BaseConfig {
return BaseConfig{
Genesis: defaultGenesisJSONPath,
NodeKey: defaultNodeKeyPath,
Mode: defaultMode,
Moniker: defaultMoniker,
ProxyApp: "tcp://127.0.0.1:26658",
ABCI: "socket",
LogLevel: DefaultLogLevel,
LogFormat: log.LogFormatPlain,
FastSyncMode: true,
FilterPeers: false,
DBBackend: "goleveldb",
DBPath: "data",
Genesis: defaultGenesisJSONPath,
NodeKey: defaultNodeKeyPath,
Mode: defaultMode,
Moniker: defaultMoniker,
ProxyApp: "tcp://127.0.0.1:26658",
ABCI: "socket",
LogLevel: DefaultLogLevel,
LogFormat: log.LogFormatPlain,
FilterPeers: false,
DBBackend: "goleveldb",
DBPath: "data",
}
}
@@ -268,7 +263,6 @@ func TestBaseConfig() BaseConfig {
cfg.chainID = "tendermint_test"
cfg.Mode = ModeValidator
cfg.ProxyApp = "kvstore"
cfg.FastSyncMode = false
cfg.DBBackend = "memdb"
return cfg
}
@@ -288,23 +282,23 @@ func (cfg BaseConfig) NodeKeyFile() string {
}
// LoadNodeKey loads NodeKey located in filePath.
func (cfg BaseConfig) LoadNodeKeyID() (p2p.NodeID, error) {
func (cfg BaseConfig) LoadNodeKeyID() (types.NodeID, error) {
jsonBytes, err := ioutil.ReadFile(cfg.NodeKeyFile())
if err != nil {
return "", err
}
nodeKey := p2p.NodeKey{}
nodeKey := types.NodeKey{}
err = tmjson.Unmarshal(jsonBytes, &nodeKey)
if err != nil {
return "", err
}
nodeKey.ID = p2p.NodeIDFromPubKey(nodeKey.PubKey())
nodeKey.ID = types.NodeIDFromPubKey(nodeKey.PubKey())
return nodeKey.ID, nil
}
// LoadOrGenNodeKey attempts to load the NodeKey from the given filePath. If
// the file does not exist, it generates and saves a new NodeKey.
func (cfg BaseConfig) LoadOrGenNodeKeyID() (p2p.NodeID, error) {
func (cfg BaseConfig) LoadOrGenNodeKeyID() (types.NodeID, error) {
if tmos.FileExists(cfg.NodeKeyFile()) {
nodeKey, err := cfg.LoadNodeKeyID()
if err != nil {
@@ -313,7 +307,7 @@ func (cfg BaseConfig) LoadOrGenNodeKeyID() (p2p.NodeID, error) {
return nodeKey, nil
}
nodeKey := p2p.GenNodeKey()
nodeKey := types.GenNodeKey()
if err := nodeKey.SaveAs(cfg.NodeKeyFile()); err != nil {
return "", err
@@ -345,6 +339,28 @@ func (cfg BaseConfig) ValidateBasic() error {
return fmt.Errorf("unknown mode: %v", cfg.Mode)
}
// TODO (https://github.com/tendermint/tendermint/issues/6908) remove this check after the v0.35 release cycle.
// This check was added to give users an upgrade prompt to use the new
// configuration option in v0.35. In future release cycles they should no longer
// be using this configuration parameter so the check can be removed.
// The cfg.Other field can likely be removed at the same time if it is not referenced
// elsewhere as it was added to service this check.
if fs, ok := cfg.Other["fastsync"]; ok {
if _, ok := fs.(map[string]interface{}); ok {
return fmt.Errorf("a configuration section named 'fastsync' was found in the " +
"configuration file. The 'fastsync' section has been renamed to " +
"'blocksync', please update the 'fastsync' field in your configuration file to 'blocksync'")
}
}
if fs, ok := cfg.Other["fast-sync"]; ok {
if fs != "" {
return fmt.Errorf("a parameter named 'fast-sync' was found in the " +
"configuration file. The parameter to enable or disable quickly syncing with a blockchain" +
"has moved to the [blocksync] section of the configuration file as blocksync.enable. " +
"Please move the 'fast-sync' field in your configuration file to 'blocksync.enable'")
}
}
return nil
}
@@ -694,13 +710,14 @@ type P2PConfig struct { //nolint: maligned
// Force dial to fail
TestDialFail bool `mapstructure:"test-dial-fail"`
// DisableLegacy is used mostly for testing to enable or disable the legacy
// P2P stack.
DisableLegacy bool `mapstructure:"disable-legacy"`
// UseLegacy enables the "legacy" P2P implementation and
// disables the newer default implementation. This flag will
// be removed in a future release.
UseLegacy bool `mapstructure:"use-legacy"`
// Makes it possible to configure which queue backend the p2p
// layer uses. Options are: "fifo", "priority" and "wdrr",
// with the default being "fifo".
// with the default being "priority".
QueueType string `mapstructure:"queue-type"`
}
@@ -732,6 +749,7 @@ func DefaultP2PConfig() *P2PConfig {
DialTimeout: 3 * time.Second,
TestDialFail: false,
QueueType: "priority",
UseLegacy: false,
}
}
@@ -882,15 +900,46 @@ func (cfg *MempoolConfig) ValidateBasic() error {
// StateSyncConfig defines the configuration for the Tendermint state sync service
type StateSyncConfig struct {
Enable bool `mapstructure:"enable"`
TempDir string `mapstructure:"temp-dir"`
RPCServers []string `mapstructure:"rpc-servers"`
TrustPeriod time.Duration `mapstructure:"trust-period"`
TrustHeight int64 `mapstructure:"trust-height"`
TrustHash string `mapstructure:"trust-hash"`
DiscoveryTime time.Duration `mapstructure:"discovery-time"`
// State sync rapidly bootstraps a new node by discovering, fetching, and restoring a
// state machine snapshot from peers instead of fetching and replaying historical
// blocks. Requires some peers in the network to take and serve state machine
// snapshots. State sync is not attempted if the node has any local state
// (LastBlockHeight > 0). The node will have a truncated block history, starting from
// the height of the snapshot.
Enable bool `mapstructure:"enable"`
// State sync uses light client verification to verify state. This can be done either
// through the P2P layer or the RPC layer. Set this to true to use the P2P layer. If
// false (default), the RPC layer will be used.
UseP2P bool `mapstructure:"use-p2p"`
// If using RPC, at least two addresses need to be provided. They should be compatible
// with net.Dial, for example: "host.example.com:2125".
RPCServers []string `mapstructure:"rpc-servers"`
// The hash and height of a trusted block. Must be within the trust-period.
TrustHeight int64 `mapstructure:"trust-height"`
TrustHash string `mapstructure:"trust-hash"`
// The trust period should be set so that Tendermint can detect and gossip
// misbehavior before it is considered expired. For chains based on the Cosmos SDK,
// one day less than the unbonding period should suffice.
TrustPeriod time.Duration `mapstructure:"trust-period"`
// Time to spend discovering snapshots before initiating a restore.
DiscoveryTime time.Duration `mapstructure:"discovery-time"`
// Temporary directory for state sync snapshot chunks, defaults to os.TempDir().
// The synchronizer will create a new, randomly named directory within this directory
// and remove it when the sync is complete.
TempDir string `mapstructure:"temp-dir"`
// The timeout duration before re-requesting a chunk, possibly from a different
// peer (default: 15 seconds).
ChunkRequestTimeout time.Duration `mapstructure:"chunk-request-timeout"`
Fetchers int32 `mapstructure:"fetchers"`
// The number of concurrent chunk and block fetchers to run (default: 4).
Fetchers int32 `mapstructure:"fetchers"`
}
func (cfg *StateSyncConfig) TrustHashBytes() []byte {
@@ -919,49 +968,51 @@ func TestStateSyncConfig() *StateSyncConfig {
// ValidateBasic performs basic validation.
func (cfg *StateSyncConfig) ValidateBasic() error {
if cfg.Enable {
if len(cfg.RPCServers) == 0 {
return errors.New("rpc-servers is required")
}
if !cfg.Enable {
return nil
}
// If we're not using the P2P stack then we need to validate the
// RPCServers
if !cfg.UseP2P {
if len(cfg.RPCServers) < 2 {
return errors.New("at least two rpc-servers entries is required")
return errors.New("at least two rpc-servers must be specified")
}
for _, server := range cfg.RPCServers {
if len(server) == 0 {
if server == "" {
return errors.New("found empty rpc-servers entry")
}
}
}
if cfg.DiscoveryTime != 0 && cfg.DiscoveryTime < 5*time.Second {
return errors.New("discovery time must be 0s or greater than five seconds")
}
if cfg.DiscoveryTime != 0 && cfg.DiscoveryTime < 5*time.Second {
return errors.New("discovery time must be 0s or greater than five seconds")
}
if cfg.TrustPeriod <= 0 {
return errors.New("trusted-period is required")
}
if cfg.TrustPeriod <= 0 {
return errors.New("trusted-period is required")
}
if cfg.TrustHeight <= 0 {
return errors.New("trusted-height is required")
}
if cfg.TrustHeight <= 0 {
return errors.New("trusted-height is required")
}
if len(cfg.TrustHash) == 0 {
return errors.New("trusted-hash is required")
}
if len(cfg.TrustHash) == 0 {
return errors.New("trusted-hash is required")
}
_, err := hex.DecodeString(cfg.TrustHash)
if err != nil {
return fmt.Errorf("invalid trusted-hash: %w", err)
}
_, err := hex.DecodeString(cfg.TrustHash)
if err != nil {
return fmt.Errorf("invalid trusted-hash: %w", err)
}
if cfg.ChunkRequestTimeout < 5*time.Second {
return errors.New("chunk-request-timeout must be at least 5 seconds")
}
if cfg.ChunkRequestTimeout < 5*time.Second {
return errors.New("chunk-request-timeout must be at least 5 seconds")
}
if cfg.Fetchers <= 0 {
return errors.New("fetchers is required")
}
if cfg.Fetchers <= 0 {
return errors.New("fetchers is required")
}
return nil
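
Taken together, these rules mean an RPC-based state sync setup needs at least two RPC servers plus valid trust parameters before validation passes. A hedged sketch of a configuration that should satisfy them, assuming the package's DefaultStateSyncConfig constructor (all values illustrative, not recommendations):

```go
package config

import (
	"log"
	"time"
)

// ExampleStateSyncConfig sketches a state sync configuration that should
// satisfy the validation rules above. Values are illustrative only.
func ExampleStateSyncConfig() {
	cfg := DefaultStateSyncConfig() // assumed default constructor
	cfg.Enable = true
	cfg.UseP2P = false // verify via the RPC layer, so rpc-servers are required
	cfg.RPCServers = []string{
		"host-a.example.com:26657", // at least two entries are required
		"host-b.example.com:26657",
	}
	cfg.TrustHeight = 100
	cfg.TrustHash = "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855"
	cfg.TrustPeriod = 168 * time.Hour // roughly one week

	if err := cfg.ValidateBasic(); err != nil {
		// A zero TrustHeight, an empty or non-hex TrustHash, or fewer than
		// two rpc-servers entries would all be rejected here.
		log.Fatal(err)
	}
}
```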
@@ -970,13 +1021,18 @@ func (cfg *StateSyncConfig) ValidateBasic() error {
//-----------------------------------------------------------------------------
// BlockSyncConfig (formerly known as FastSync) defines the configuration for the Tendermint block sync service
// If this node is many blocks behind the tip of the chain, BlockSync
// allows them to catchup quickly by downloading blocks in parallel
// and verifying their commits.
type BlockSyncConfig struct {
Enable bool `mapstructure:"enable"`
Version string `mapstructure:"version"`
}
// DefaultBlockSyncConfig returns a default configuration for the block sync service
func DefaultBlockSyncConfig() *BlockSyncConfig {
return &BlockSyncConfig{
Enable: true,
Version: BlockSyncV0,
}
}

View File

@@ -97,11 +97,6 @@ moniker = "{{ .BaseConfig.Moniker }}"
# - No priv_validator_key.json, priv_validator_state.json
mode = "{{ .BaseConfig.Mode }}"
# If this node is many blocks behind the tip of the chain, FastSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
fast-sync = {{ .BaseConfig.FastSyncMode }}
# Database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb
# * goleveldb (github.com/syndtr/goleveldb - most popular implementation)
# - pure go
@@ -270,8 +265,8 @@ pprof-laddr = "{{ .RPC.PprofListenAddress }}"
#######################################################
[p2p]
# Enable the new p2p layer.
disable-legacy = {{ .P2P.DisableLegacy }}
# Enable the legacy p2p layer.
use-legacy = {{ .P2P.UseLegacy }}
# Select the p2p internal queue
queue-type = "{{ .P2P.QueueType }}"
@@ -305,6 +300,7 @@ persistent-peers = "{{ .P2P.PersistentPeers }}"
upnp = {{ .P2P.UPNP }}
# Path to address book
# TODO: Remove once p2p refactor is complete in favor of peer store.
addr-book-file = "{{ js .P2P.AddrBook }}"
# Set true for strict address routability rules
@@ -330,6 +326,8 @@ max-connections = {{ .P2P.MaxConnections }}
max-incoming-connection-attempts = {{ .P2P.MaxIncomingConnectionAttempts }}
# List of node IDs, to which a connection will be (re)established ignoring any existing limits
# TODO: Remove once p2p refactor is complete.
# ref: https://github.com/tendermint/tendermint/issues/5670
unconditional-peer-ids = "{{ .P2P.UnconditionalPeerIDs }}"
# Maximum pause when redialing a persistent peer (if zero, exponential backoff is used)
@@ -426,22 +424,30 @@ ttl-num-blocks = {{ .Mempool.TTLNumBlocks }}
# starting from the height of the snapshot.
enable = {{ .StateSync.Enable }}
# RPC servers (comma-separated) for light client verification of the synced state machine and
# retrieval of state data for node bootstrapping. Also needs a trusted height and corresponding
# header hash obtained from a trusted source, and a period during which validators can be trusted.
#
# For Cosmos SDK-based chains, trust-period should usually be about 2/3 of the unbonding time (~2
# weeks) during which they can be financially punished (slashed) for misbehavior.
# State sync uses light client verification to verify state. This can be done either through the
# P2P layer or RPC layer. Set this to true to use the P2P layer. If false (default), RPC layer
# will be used.
use-p2p = {{ .StateSync.UseP2P }}
# If using RPC, at least two addresses need to be provided. They should be compatible with net.Dial,
# for example: "host.example.com:2125"
rpc-servers = "{{ StringsJoin .StateSync.RPCServers "," }}"
# The hash and height of a trusted block. Must be within the trust-period.
trust-height = {{ .StateSync.TrustHeight }}
trust-hash = "{{ .StateSync.TrustHash }}"
# The trust period should be set so that Tendermint can detect and gossip misbehavior before
# it is considered expired. For chains based on the Cosmos SDK, one day less than the unbonding
# period should suffice.
trust-period = "{{ .StateSync.TrustPeriod }}"
# Time to spend discovering snapshots before initiating a restore.
discovery-time = "{{ .StateSync.DiscoveryTime }}"
# Temporary directory for state sync snapshot chunks, defaults to the OS tempdir (typically /tmp).
# Will create a new, randomly named directory within, and remove it when done.
# Temporary directory for state sync snapshot chunks, defaults to os.TempDir().
# The synchronizer will create a new, randomly named directory within this directory
# and remove it when the sync is complete.
temp-dir = "{{ .StateSync.TempDir }}"
# The timeout duration before re-requesting a chunk, possibly from a different
@@ -454,10 +460,15 @@ fetchers = "{{ .StateSync.Fetchers }}"
#######################################################
### Block Sync Configuration Connections ###
#######################################################
[fastsync]
[blocksync]
# If this node is many blocks behind the tip of the chain, BlockSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
enable = {{ .BlockSync.Enable }}
# Block Sync version to use:
# 1) "v0" (default) - the legacy block sync implementation
# 1) "v0" (default) - the standard Block Sync implementation
# 2) "v2" - DEPRECATED, please use v0
version = "{{ .BlockSync.Version }}"

View File

@@ -36,9 +36,7 @@ func TestEnsureRoot(t *testing.T) {
data, err := ioutil.ReadFile(filepath.Join(tmpDir, defaultConfigFilePath))
require.Nil(err)
if !checkConfig(string(data)) {
t.Fatalf("config file missing some information")
}
checkConfig(t, string(data))
ensureFiles(t, tmpDir, "data")
}
@@ -57,9 +55,7 @@ func TestEnsureTestRoot(t *testing.T) {
data, err := ioutil.ReadFile(filepath.Join(rootDir, defaultConfigFilePath))
require.Nil(err)
if !checkConfig(string(data)) {
t.Fatalf("config file missing some information")
}
checkConfig(t, string(data))
// TODO: make sure the cfg returned and testconfig are the same!
baseConfig := DefaultBaseConfig()
@@ -67,16 +63,15 @@ func TestEnsureTestRoot(t *testing.T) {
ensureFiles(t, rootDir, defaultDataDir, baseConfig.Genesis, pvConfig.Key, pvConfig.State)
}
func checkConfig(configFile string) bool {
var valid bool
func checkConfig(t *testing.T, configFile string) {
t.Helper()
// list of words we expect in the config
var elems = []string{
"moniker",
"seeds",
"proxy-app",
"fast_sync",
"create_empty_blocks",
"blocksync",
"create-empty-blocks",
"peer",
"timeout",
"broadcast",
@@ -89,10 +84,7 @@ func checkConfig(configFile string) bool {
}
for _, e := range elems {
if !strings.Contains(configFile, e) {
valid = false
} else {
valid = true
t.Errorf("config file was expected to contain %s but did not", e)
}
}
return valid
}

View File

@@ -36,8 +36,7 @@ func TestPubKeySecp256k1Address(t *testing.T) {
addrBbz, _, _ := base58.CheckDecode(d.addr)
addrB := crypto.Address(addrBbz)
var priv secp256k1.PrivKey = secp256k1.PrivKey(privB)
priv := secp256k1.PrivKey(privB)
pubKey := priv.PubKey()
pubT, _ := pubKey.(secp256k1.PubKey)
pub := pubT

View File

@@ -62,7 +62,7 @@ be turned off regardless of other values provided.
#### KV
The `kv` indexer type is an embedded key-value store supported by the main
underling Tendermint database. Using the `kv` indexer type allows you to query
underlying Tendermint database. Using the `kv` indexer type allows you to query
for block and transaction events directly against Tendermint's RPC. However, the
query syntax is limited and so this indexer type might be deprecated or removed
entirely in the future.

View File

@@ -24,6 +24,7 @@
- April 1, 2021: Initial Draft (@alexanderbez)
- April 28, 2021: Specify search capabilities are only supported through the KV indexer (@marbar3778)
- May 19, 2021: Update the SQL schema and the eventsink interface (@jayt106)
- Aug 30, 2021: Update the SQL schema and the psql implementation (@creachadair)
## Status
@@ -145,163 +146,190 @@ The postgres eventsink will not support `tx_search`, `block_search`, `GetTxByHas
```sql
-- Table Definition ----------------------------------------------
CREATE TYPE block_event_type AS ENUM ('begin_block', 'end_block', '');
-- The blocks table records metadata about each block.
-- The block record does not include its events or transactions (see tx_results).
CREATE TABLE blocks (
rowid BIGSERIAL PRIMARY KEY,
CREATE TABLE block_events (
id SERIAL PRIMARY KEY,
key VARCHAR NOT NULL,
value VARCHAR NOT NULL,
height INTEGER NOT NULL,
type block_event_type,
created_at TIMESTAMPTZ NOT NULL,
chain_id VARCHAR NOT NULL
height BIGINT NOT NULL,
chain_id VARCHAR NOT NULL,
-- When this block header was logged into the sink, in UTC.
created_at TIMESTAMPTZ NOT NULL,
UNIQUE (height, chain_id)
);
-- Index blocks by height and chain, since we need to resolve block IDs when
-- indexing transaction records and transaction events.
CREATE INDEX idx_blocks_height_chain ON blocks(height, chain_id);
-- The tx_results table records metadata about transaction results. Note that
-- the events from a transaction are stored separately.
CREATE TABLE tx_results (
id SERIAL PRIMARY KEY,
tx_result BYTEA NOT NULL,
created_at TIMESTAMPTZ NOT NULL
rowid BIGSERIAL PRIMARY KEY,
-- The block to which this transaction belongs.
block_id BIGINT NOT NULL REFERENCES blocks(rowid),
-- The sequential index of the transaction within the block.
index INTEGER NOT NULL,
-- When this result record was logged into the sink, in UTC.
created_at TIMESTAMPTZ NOT NULL,
-- The hex-encoded hash of the transaction.
tx_hash VARCHAR NOT NULL,
-- The protobuf wire encoding of the TxResult message.
tx_result BYTEA NOT NULL,
UNIQUE (block_id, index)
);
CREATE TABLE tx_events (
id SERIAL PRIMARY KEY,
key VARCHAR NOT NULL,
value VARCHAR NOT NULL,
height INTEGER NOT NULL,
hash VARCHAR NOT NULL,
tx_result_id SERIAL,
created_at TIMESTAMPTZ NOT NULL,
chain_id VARCHAR NOT NULL,
FOREIGN KEY (tx_result_id)
REFERENCES tx_results(id)
ON DELETE CASCADE
-- The events table records events. All events (both block and transaction) are
-- associated with a block ID; transaction events also have a transaction ID.
CREATE TABLE events (
rowid BIGSERIAL PRIMARY KEY,
-- The block and transaction this event belongs to.
-- If tx_id is NULL, this is a block event.
block_id BIGINT NOT NULL REFERENCES blocks(rowid),
tx_id BIGINT NULL REFERENCES tx_results(rowid),
-- The application-defined type label for the event.
type VARCHAR NOT NULL
);
-- Indices -------------------------------------------------------
-- The attributes table records event attributes.
CREATE TABLE attributes (
event_id BIGINT NOT NULL REFERENCES events(rowid),
key VARCHAR NOT NULL, -- bare key
composite_key VARCHAR NOT NULL, -- composed type.key
value VARCHAR NULL,
CREATE INDEX idx_block_events_key_value ON block_events(key, value);
CREATE INDEX idx_tx_events_key_value ON tx_events(key, value);
CREATE INDEX idx_tx_events_hash ON tx_events(hash);
UNIQUE (event_id, key)
);
-- A joined view of events and their attributes. Events that do not have any
-- attributes are represented as a single row with empty key and value fields.
CREATE VIEW event_attributes AS
SELECT block_id, tx_id, type, key, composite_key, value
FROM events LEFT JOIN attributes ON (events.rowid = attributes.event_id);
-- A joined view of all block events (those having tx_id NULL).
CREATE VIEW block_events AS
SELECT blocks.rowid as block_id, height, chain_id, type, key, composite_key, value
FROM blocks JOIN event_attributes ON (blocks.rowid = event_attributes.block_id)
WHERE event_attributes.tx_id IS NULL;
-- A joined view of all transaction events.
CREATE VIEW tx_events AS
SELECT height, index, chain_id, type, key, composite_key, value, tx_results.created_at
FROM blocks JOIN tx_results ON (blocks.rowid = tx_results.block_id)
JOIN event_attributes ON (tx_results.rowid = event_attributes.tx_id)
WHERE event_attributes.tx_id IS NOT NULL;
```
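
As a sanity check on the joined views, a transaction lookup by hash can be expressed directly against the `tx_events` view. A hedged Go sketch using `database/sql`, assuming the reserved composite key `tx.hash` written by the indexer's meta-events (connection string and hash are placeholders):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // Postgres driver
)

func main() {
	db, err := sql.Open("postgres",
		"postgres://user:pass@localhost/tendermint?sslmode=disable") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Find the height(s) at which a given transaction hash was indexed.
	rows, err := db.Query(
		`SELECT height FROM tx_events WHERE composite_key = 'tx.hash' AND value = $1;`,
		"DEADBEEF") // placeholder hex-encoded tx hash
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var height int64
		if err := rows.Scan(&height); err != nil {
			log.Fatal(err)
		}
		fmt.Println("indexed at height", height)
	}
}
```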
The `PSQLEventSink` will implement the `EventSink` interface as follows
(some details omitted for brevity):
```go
func NewPSQLEventSink(connStr string, chainID string) (*PSQLEventSink, error) {
db, err := sql.Open("postgres", connStr)
if err != nil {
return nil, err
}
func NewEventSink(connStr, chainID string) (*EventSink, error) {
db, err := sql.Open(driverName, connStr)
// ...
// ...
return &EventSink{
store: db,
chainID: chainID,
}, nil
}
func (es *PSQLEventSink) IndexBlockEvents(h types.EventDataNewBlockHeader) error {
sqlStmt := sq.Insert("block_events").Columns("key", "value", "height", "type", "created_at", "chain_id")
func (es *EventSink) IndexBlockEvents(h types.EventDataNewBlockHeader) error {
ts := time.Now().UTC()
// index the reserved block height index
ts := time.Now()
sqlStmt = sqlStmt.Values(types.BlockHeightKey, h.Header.Height, h.Header.Height, "", ts, es.chainID)
return runInTransaction(es.store, func(tx *sql.Tx) error {
// Add the block to the blocks table and report back its row ID for use
// in indexing the events for the block.
blockID, err := queryWithID(tx, `
INSERT INTO blocks (height, chain_id, created_at)
VALUES ($1, $2, $3)
ON CONFLICT DO NOTHING
RETURNING rowid;
`, h.Header.Height, es.chainID, ts)
// ...
for _, event := range h.ResultBeginBlock.Events {
// only index events with a non-empty type
if len(event.Type) == 0 {
continue
}
for _, attr := range event.Attributes {
if len(attr.Key) == 0 {
continue
}
// index iff the event specified index:true and it's not a reserved event
compositeKey := fmt.Sprintf("%s.%s", event.Type, string(attr.Key))
if compositeKey == types.BlockHeightKey {
return fmt.Errorf("event type and attribute key \"%s\" is reserved; please use a different key", compositeKey)
}
if attr.GetIndex() {
sqlStmt = sqlStmt.Values(compositeKey, string(attr.Value), h.Header.Height, BlockEventTypeBeginBlock, ts, es.chainID)
}
}
}
// index end_block events...
// execute sqlStmt db query...
// Insert the special block meta-event for height.
if err := insertEvents(tx, blockID, 0, []abci.Event{
makeIndexedEvent(types.BlockHeightKey, fmt.Sprint(h.Header.Height)),
}); err != nil {
return fmt.Errorf("block meta-events: %w", err)
}
// Insert all the block events. Order is important here.
if err := insertEvents(tx, blockID, 0, h.ResultBeginBlock.Events); err != nil {
return fmt.Errorf("begin-block events: %w", err)
}
if err := insertEvents(tx, blockID, 0, h.ResultEndBlock.Events); err != nil {
return fmt.Errorf("end-block events: %w", err)
}
return nil
})
}
func (es *PSQLEventSink) IndexTxEvents(txr []*abci.TxResult) error {
sqlStmtEvents := sq.Insert("tx_events").Columns("key", "value", "height", "hash", "tx_result_id", "created_at", "chain_id")
sqlStmtTxResult := sq.Insert("tx_results").Columns("tx_result", "created_at")
func (es *EventSink) IndexTxEvents(txrs []*abci.TxResult) error {
ts := time.Now().UTC()
ts := time.Now()
for _, tx := range txr {
// store the tx result
txBz, err := proto.Marshal(tx)
if err != nil {
return err
}
for _, txr := range txrs {
// Encode the result message in protobuf wire format for indexing.
resultData, err := proto.Marshal(txr)
// ...
sqlStmtTxResult = sqlStmtTxResult.Values(txBz, ts)
// Index the hash of the underlying transaction as a hex string.
txHash := fmt.Sprintf("%X", types.Tx(txr.Tx).Hash())
// execute sqlStmtTxResult db query...
var txID uint32
err = sqlStmtTxResult.QueryRow().Scan(&txID)
if err != nil {
if err := runInTransaction(es.store, func(tx *sql.Tx) error {
// Find the block associated with this transaction.
blockID, err := queryWithID(tx, `
SELECT rowid FROM blocks WHERE height = $1 AND chain_id = $2;
`, txr.Height, es.chainID)
// ...
// Insert a record for this tx_result and capture its ID for indexing events.
txID, err := queryWithID(tx, `
INSERT INTO tx_results (block_id, index, created_at, tx_hash, tx_result)
VALUES ($1, $2, $3, $4, $5)
ON CONFLICT DO NOTHING
RETURNING rowid;
`, blockID, txr.Index, ts, txHash, resultData)
// ...
// Insert the special transaction meta-events for hash and height.
if err := insertEvents(tx, blockID, txID, []abci.Event{
makeIndexedEvent(types.TxHashKey, txHash),
makeIndexedEvent(types.TxHeightKey, fmt.Sprint(txr.Height)),
}); err != nil {
return fmt.Errorf("indexing transaction meta-events: %w", err)
}
// Index any events packaged with the transaction.
if err := insertEvents(tx, blockID, txID, txr.Result.Events); err != nil {
return fmt.Errorf("indexing transaction events: %w", err)
}
return nil
}); err != nil {
return err
}
// index the reserved height and hash indices
hash := types.Tx(tx.Tx).Hash()
sqlStmtEvents = sqlStmtEvents.Values(types.TxHashKey, hash, tx.Height, hash, txID, ts, es.chainID)
sqlStmtEvents = sqlStmtEvents.Values(types.TxHeightKey, tx.Height, tx.Height, hash, txID, ts, es.chainID)
for _, event := range result.Result.Events {
// only index events with a non-empty type
if len(event.Type) == 0 {
continue
}
for _, attr := range event.Attributes {
if len(attr.Key) == 0 {
continue
}
// index if `index: true` is set
compositeTag := fmt.Sprintf("%s.%s", event.Type, string(attr.Key))
// ensure event does not conflict with a reserved prefix key
if compositeTag == types.TxHashKey || compositeTag == types.TxHeightKey {
return fmt.Errorf("event type and attribute key \"%s\" is reserved; please use a different key", compositeTag)
}
if attr.GetIndex() {
sqlStmtEvents = sqlStmtEvents.Values(compositeKey, string(attr.Value), tx.Height, hash, txID, ts, es.chainID)
}
}
}
}
// execute sqlStmtEvents db query...
}
return nil
}
func (es *PSQLEventSink) SearchBlockEvents(ctx context.Context, q *query.Query) ([]int64, error) {
return nil, errors.New("block search is not supported via the postgres event sink")
}
// SearchBlockEvents is not implemented by this sink, and reports an error for all queries.
func (es *EventSink) SearchBlockEvents(ctx context.Context, q *query.Query) ([]int64, error)
func (es *PSQLEventSink) SearchTxEvents(ctx context.Context, q *query.Query) ([]*abci.TxResult, error) {
return nil, errors.New("tx search is not supported via the postgres event sink")
}
// SearchTxEvents is not implemented by this sink, and reports an error for all queries.
func (es *EventSink) SearchTxEvents(ctx context.Context, q *query.Query) ([]*abci.TxResult, error)
func (es *PSQLEventSink) GetTxByHash(hash []byte) (*abci.TxResult, error) {
return nil, errors.New("getTxByHash is not supported via the postgres event sink")
}
// GetTxByHash is not implemented by this sink, and reports an error for all queries.
func (es *EventSink) GetTxByHash(hash []byte) (*abci.TxResult, error)
func (es *PSQLEventSink) HasBlock(h int64) (bool, error) {
return false, errors.New("hasBlock is not supported via the postgres event sink")
}
// HasBlock is not implemented by this sink, and reports an error for all queries.
func (es *EventSink) HasBlock(h int64) (bool, error)
```
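
The `runInTransaction` and `queryWithID` helpers referenced above are elided from this excerpt; a minimal sketch of what they might look like on top of `database/sql` (an assumption, not the committed implementation):

```go
// runInTransaction executes fn inside a database transaction, committing if
// fn succeeds and rolling back otherwise.
func runInTransaction(db *sql.DB, fn func(tx *sql.Tx) error) error {
	dbtx, err := db.Begin()
	if err != nil {
		return err
	}
	if err := fn(dbtx); err != nil {
		dbtx.Rollback() //nolint:errcheck -- the original error takes precedence
		return err
	}
	return dbtx.Commit()
}

// queryWithID runs a query expected to return a single row ID (e.g. via a
// RETURNING rowid clause) and reports that ID.
func queryWithID(tx *sql.Tx, query string, args ...interface{}) (int64, error) {
	var id int64
	if err := tx.QueryRow(query, args...).Scan(&id); err != nil {
		return 0, err
	}
	return id, nil
}
```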
### Configuration

View File

@@ -36,10 +36,6 @@ proxy-app = "tcp://127.0.0.1:26658"
# A custom human readable name for this node
moniker = "anonymous"
# If this node is many blocks behind the tip of the chain, BlockSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
fast-sync = true
# Mode of Node: full | validator | seed (default: "validator")
# * validator node (default)
@@ -356,11 +352,16 @@ temp-dir = ""
#######################################################
### BlockSync Configuration Connections ###
#######################################################
[fastsync]
[blocksync]
# If this node is many blocks behind the tip of the chain, BlockSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
enable = true
# Block Sync version to use:
# 1) "v0" (default) - the legacy block sync implementation
# 2) "v2" - complete redesign of v0, optimized for testability & readability
# 1) "v0" (default) - the standard block sync implementation
# 2) "v2" - DEPRECATED, please use v0
version = "v0"
#######################################################

View File

@@ -37,4 +37,10 @@ sections.
## Table of Contents
<!-- - [RFC-NNN: Title](./rfc-NNN.title.md) -->
- [RFC-000: P2P Roadmap](./rfc-000-p2p-roadmap.rst)
- [RFC-001: Storage Engines](./rfc-001-storage-engine.rst)
- [RFC-002: Interprocess Communication](./rfc-002-ipc-ecosystem.md)
- [RFC-003: Performance Taxonomy](./rfc-003-performance-questions.md)
- [RFC-004: E2E Test Framework Enhancements](./rfc-004-e2e-framework.md)
<!-- - [RFC-NNN: Title](./rfc-NNN-title.md) -->

View File

@@ -0,0 +1,316 @@
====================
RFC 000: P2P Roadmap
====================
Changelog
---------
- 2021-08-20: Completed initial draft and distributed via a gist
- 2021-08-25: Migrated as an RFC and changed format
Abstract
--------
This document discusses the future of peer network management in Tendermint, with
a particular focus on features, semantics, and a proposed roadmap.
Specifically, we consider libp2p as a toolkit for implementing some fundamentals.
Background
----------
For the 0.35 release cycle the switching/routing layer of Tendermint was
replaced. This work was done "in place," and produced a version of Tendermint
that was backward-compatible and interoperable with previous versions of the
software. While there are new p2p/peer management constructs in the new
version (e.g. ``PeerManager`` and ``Router``), the main effect of this change
was to simplify the ways that other components within Tendermint interacted with
the peer management layer, and to make it possible for higher-level components
(specifically the reactors) to be used and tested more independently.
This refactoring, which was a major undertaking, was entirely necessary to
enable areas for future development and iteration on this aspect of
Tendermint. There are also a number of potential user-facing features that
depend heavily on the p2p layer: additional transport protocols, transport
compression, and improved resilience to network partitions. These improvements to
modularity, stability, and reliability of the p2p system will also make
ongoing maintenance and feature development easier in the rest of Tendermint.
Critique of Current Peer-to-Peer Infrastructure
-----------------------------------------------
The current (refactored) P2P stack is an improvement on the previous iteration
(legacy), but as of 0.35, there remains room for improvement in the design and
implementation of the P2P layer.
Some limitations of the current stack include:
- heavy reliance on buffering to avoid backups in the flow of messages between
components, which is fragile to maintain, can lead to unexpected memory usage
patterns, and forces the routing layer to make decisions about when messages
should be discarded.
- the current p2p stack relies on convention (rather than the compiler) to
enforce the API boundaries and conventions between reactors and the router,
making it very easy to write "wrong" reactor code or introduce a bad
dependency.
- the current stack is probably more complex and difficult to maintain because
the legacy system must coexist with the new components in 0.35. When the
legacy stack is removed there are some simple changes that will become
possible and could reduce the complexity of the new system. (e.g. `#6598
<https://github.com/tendermint/tendermint/issues/6598>`_.)
- the current stack encapsulates a lot of information about peers, and makes it
difficult to expose that information to monitoring/observability tools. This
general opacity also makes it difficult to interact with the peer system
from other areas of the code base (e.g. tests, reactors).
- the legacy stack provided some control to operators to force the system to
dial new peers or seed nodes or manipulate the topology of the system *in
situ*. The current stack can't easily provide this, and while the new stack
may have better behavior, it does leave operators' hands tied.
Some of these issues will be resolved early in the 0.36 cycle, with the
removal of the legacy components.
The 0.36 release also provides the opportunity to make changes to the
protocol, as the release will not be compatible with previous releases.
Areas for Development
---------------------
These sections describe features that may make sense to include in a Phase 2 of
a P2P project.
Internal Message Passing
~~~~~~~~~~~~~~~~~~~~~~~~
Currently, there's no provision for intranode communication using the P2P
layer, which means that when two reactors need to interact with each other
they have to depend on each other's interfaces and initialization. Changing
these interactions (e.g. transitions between blocksync and consensus) from
procedure calls to message passing would remove those direct dependencies.
This is a relatively simple change and could be implemented with the following
components:
- a constant to represent "local" delivery as the ``To`` field on
``p2p.Envelope``.
- special path for routing local messages that doesn't require message
serialization (protobuf marshalling/unmarshaling).
Adding these semantics, particularly in conjunction with synchronous
(request/response) semantics, would provide a solution to dependency graph
problems currently present in the Tendermint codebase, simplifying development
and making it possible to isolate components for testing.
Eventually, this will also make it possible to have a logical Tendermint node
running in multiple processes or in a collection of containers, although the
use case for this may be debatable.
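
A minimal sketch of the local-delivery path, using hypothetical stand-ins for
the router types (none of these names come from the current codebase):

.. code-block:: go

    package p2psketch

    // NodeID and Envelope are hypothetical stand-ins for the real p2p types.
    type NodeID string

    type Envelope struct {
        To      NodeID
        Message interface{}
    }

    // LocalNodeID is a sentinel marking an envelope for intranode delivery.
    const LocalNodeID NodeID = "local"

    type Router struct {
        local chan Envelope // in-process delivery channel
    }

    // route short-circuits local envelopes past the serialization path.
    func (r *Router) route(e Envelope) {
        if e.To == LocalNodeID {
            r.local <- e // no protobuf marshalling/unmarshalling needed
            return
        }
        // ... otherwise marshal the message and hand it to the peer transport ...
    }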
Synchronous Semantics (Paired Request/Response)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the current system, all messages are sent with fire-and-forget semantics,
and there's no coupling between a request sent via the p2p layer and a
response. Paired request/response semantics would simplify the implementation
of the state and block sync reactors, and make intra-node message passing more
powerful.
For some interactions, like gossiping transactions between the mempools of
different nodes, fire-and-forget semantics make sense, but for other
operations the missing link between requests/responses leads to either
inefficiency when a node fails to respond or becomes unavailable, or code that
is just difficult to follow.
To support this kind of work, the protocol would need to accommodate some kind
of request/response ID to allow identifying out-of-order responses over a
single connection. Additionally, the programming model of the
``p2p.Channel`` would need to be expanded to accommodate some kind of *future*
or similar paradigm, to make it viable to write reactor code without requiring
the reactor developer to wrestle with lower-level concurrency constructs.
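
One hedged way to express the paired request/response idea in Go is a
channel-backed future keyed by a request ID; everything below is a sketch
with hypothetical names, not a proposed API:

.. code-block:: go

    package p2psketch

    import (
        "context"
        "sync"
    )

    // pending matches out-of-order responses on a single connection back to
    // their callers by request ID.
    type pending struct {
        mu   sync.Mutex
        next uint64
        wait map[uint64]chan []byte
    }

    func newPending() *pending {
        return &pending{wait: make(map[uint64]chan []byte)}
    }

    // Call registers a request ID, sends via the supplied function, and
    // blocks on the response future until the caller's context expires.
    func (p *pending) Call(ctx context.Context, send func(id uint64) error) ([]byte, error) {
        p.mu.Lock()
        p.next++
        id := p.next
        ch := make(chan []byte, 1)
        p.wait[id] = ch
        p.mu.Unlock()

        if err := send(id); err != nil {
            return nil, err
        }
        select {
        case resp := <-ch:
            return resp, nil
        case <-ctx.Done():
            p.mu.Lock()
            delete(p.wait, id) // drop the abandoned future
            p.mu.Unlock()
            return nil, ctx.Err()
        }
    }

    // Resolve delivers a response to whichever caller is waiting on id.
    func (p *pending) Resolve(id uint64, resp []byte) {
        p.mu.Lock()
        ch, ok := p.wait[id]
        delete(p.wait, id)
        p.mu.Unlock()
        if ok {
            ch <- resp
        }
    }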
Timeout Handling (QoS)
~~~~~~~~~~~~~~~~~~~~~~
Currently, all timeouts, buffering, and QoS features are handled at the router
layer, and the reactors are implemented in ways that assume/require
asynchronous operation. This both increases the required complexity at the
routing layer, and means that misbehavior at the reactor level is difficult to
detect or attribute. Additionally, the current system provides three main
parameters to control quality of service:
- buffer sizes for channels and queues.
- priorities for channels.
- queue implementation details for shedding load.
These end up being quite coarse controls, and changing the settings is
difficult: because the queues and channels are able to buffer large numbers
of messages, it can be hard to see the impact of a given change, particularly
in our extant test environment. In general, we should endeavor to:
- set real timeouts, via contexts, on most message send operations, so that
senders rather than queues can be responsible for timeout logic (see the
sketch after this list). Additionally, this will make it possible to avoid
sending messages during shutdown.
- reduce (to the greatest extent possible) the amount of buffering in
channels and the queues, to more readily surface backpressure and reduce the
potential for buildup of stale messages.
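
As referenced in the first item above, a sketch of a context-bounded send,
where the sender rather than a buffered queue owns the deadline (types are
hypothetical stand-ins):

.. code-block:: go

    package p2psketch

    import "context"

    // Envelope and Channel are hypothetical stand-ins for the real p2p types.
    type Envelope struct{ Payload []byte }

    type Channel struct {
        out chan Envelope // deliberately small or unbuffered
    }

    // Send blocks until the envelope is accepted or the caller's context
    // expires, surfacing backpressure to the sender rather than a queue.
    func (ch *Channel) Send(ctx context.Context, e Envelope) error {
        select {
        case ch.out <- e:
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    }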
Stream Based Connection Handling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Currently the transport layer is message based, which makes sense given the
mental model of how the protocol works, but it makes transports and connection
types more difficult to implement: it forces a higher-level view of the
connection and interaction, which is harder to satisfy for novel transport
types, and makes it more likely that message-based caching and rate
limiting will be implemented at the transport layer rather than at a more
appropriate level.
A stream-based transport, then, would be responsible for negotiating the
connection and the handshake, and would otherwise behave like a socket/file
descriptor with ``Read`` and ``Write`` methods.
While this was included in the initial design for the new P2P layer, it may be
obviated entirely if the transport and peer layer is replaced with libp2p,
which is primarily stream based.
Service Discovery
~~~~~~~~~~~~~~~~~
In the current system, Tendermint assumes that all nodes in a network are
largely equivalent, and nodes tend to be "chatty," making many requests of
large numbers of peers and waiting for peers to (hopefully) respond. While
this works and has allowed Tendermint to get to a certain point, it both
produces a theoretical scaling bottleneck and makes it harder to test and
verify components of the system.
In addition to peer's identity and connection information, peers should be
able to advertise a number of services or capabilities, and node operators or
developers should be able to specify peer capability requirements (e.g. target
at least <x>-percent of peers with <y> capability.)
These capabilities may be useful in selecting peers to send messages to; it
may also make sense to extend Tendermint's message addressing capability to
allow reactors to send messages to groups of peers based on role, rather than
only allowing addressing to one or all peers.
Having a good service discovery mechanism may pair well with the synchronous
semantics (request/response) work, as it allows reactors to "make a request of
a peer with <x> capability and wait for the response," rather than forcing the
reactors to track the capabilities or state of specific peers.
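
A hedged sketch of capability-aware peer selection, with hypothetical types
(nothing below exists in the current codebase):

.. code-block:: go

    package p2psketch

    // PeerInfo is a hypothetical peer record extended with advertised services.
    type PeerInfo struct {
        ID       string
        Services []string // e.g. "statesync", "blocksync", "light"
    }

    // withCapability filters peers by an advertised service, letting reactors
    // address "peers with <x> capability" rather than specific peers.
    func withCapability(peers []PeerInfo, service string) []PeerInfo {
        var out []PeerInfo
        for _, p := range peers {
            for _, s := range p.Services {
                if s == service {
                    out = append(out, p)
                    break
                }
            }
        }
        return out
    }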
Solutions
---------
Continued Homegrown Implementation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The current peer system is homegrown and is conceptually compatible with the
needs of the project, and while there are limitations to the system, the p2p
layer is not (currently as of 0.35) a major source of bugs or friction during
development.
However, the current implementation makes a number of allowances for
interoperability, and there are a collection of iterative improvements that
should be considered in the next couple of releases. To maintain the current
implementation, upcoming work would include:
- change the ``Transport`` mechanism to facilitate easier implementations.
- implement different ``Transport`` handlers to be able to manage peer
connections using different protocols (e.g. QUIC, etc.)
- entirely remove the constructs and implementations of the legacy peer
implementation.
- establish and enforce clearer chains of responsibility for connection
establishment (e.g. handshaking, setup), which is currently shared between
three components.
- report better metrics regarding the state of peers and network
connectivity, which are currently opaque outside of the system. This is
constrained at the moment as a side effect of the split responsibility for
connection establishment.
- extend the PEX system to include service information so that nodes in the
network need not be homogeneous.
While maintaining a bespoke peer management layer would seem to distract from
development of core functionality, the truth is that (once the legacy code is
removed) the scope of the peer layer is relatively small from a maintenance
perspective, and having control at this layer might actually afford the
project the ability to more rapidly iterate on some features.
LibP2P
~~~~~~
LibP2P provides components that, approximately, account for the
``PeerManager`` and ``Transport`` components of the current (new) P2P
stack. The Go APIs seem reasonable, and being able to externalize the
implementation details of peer and connection management seems like it could
provide a lot of benefits, particularly in supporting a more active ecosystem.
In general the API provides the kind of stream-based, multi-protocol,
idiomatic baseline needed for implementing a peer layer. Additionally,
because libp2p handles peer exchange and connection management at a lower
level, adopting it would make it possible to remove a good deal of code in
favor of its components. Having said that, Tendermint's P2P layer covers a
greater scope (e.g. message routing to different peers) and that layer is
something that Tendermint might want to retain.
There are a number of unknowns that require more research, including how much
of a peer database the Tendermint engine itself needs to maintain in order to
support higher-level operations (consensus, statesync); it might be the
case that our internal systems need to know much less about peers than
otherwise specified. Similarly, the current system has a notion of peer
scoring that cannot be communicated to libp2p, which may be fine, as this is
only used to support peer exchange (PEX), which would become a property of
libp2p and not be expressed in its current higher-level form.
In general, the effort to switch to libp2p would involve:
- timing it during an appropriate protocol-breaking window, as it doesn't seem
viable to support both libp2p *and* the current p2p protocol.
- providing some in-memory testing network to support the use case that the
current ``p2p.MemoryNetwork`` provides.
- re-homing the ``p2p.Router`` implementation on top of libp2p components to
be able to maintain the current reactor implementations.
Open questions include:
- how much local buffering should we be doing? We should figure out what
the expected behavior of libp2p is for QoS-type functionality, and whether
our requirements mean that we should be implementing this on top of it
ourselves.
- if Tendermint was going to use libp2p, how would libp2p's stability
guarantees (protocol, etc.) impact/constrain Tendermint's stability
guarantees?
- what kind of introspection does libp2p provide, and to what extent would
this change or constrain the kind of observability that Tendermint is able
to provide?
- how do efforts to select "the best" (healthy, close, well-behaving, etc.)
peers work out if Tendermint is not maintaining a local peer database?
- would adding additional higher level semantics (internal message passing,
request/response pairs, service discovery, etc.) facilitate removing some of
the direct linkages between constructs/components in the system and reduce
the need for Tendermint nodes to maintain state about its peers?
References
----------
- `Tracking Ticket for P2P Refactor Project <https://github.com/tendermint/tendermint/issues/5670>`_
- `ADR 61: P2P Refactor Scope <../architecture/adr-061-p2p-refactor-scope.md>`_
- `ADR 62: P2P Architecture and Abstraction <../architecture/adr-061-p2p-architecture.md>`_

View File

@@ -0,0 +1,179 @@
===========================================
RFC 001: Storage Engines and Database Layer
===========================================
Changelog
---------
- 2021-04-19: Initial Draft (gist)
- 2021-09-02: Migrated to RFC folder, with some updates
Abstract
--------
The aspect of Tendermint that's responsible for persistence and storage (often
"the database" internally) represents a bottleneck in the architecture of the
platform, one that the 0.36 release presents a good opportunity to correct. The
current storage engine layer provides a great deal of flexibility that is
difficult for users to leverage or benefit from, while also making it harder
for Tendermint Core developers to deliver improvements to the storage engine.
This RFC discusses possible improvements to this layer of the system.
Background
----------
Tendermint has a very thin common wrapper that makes Tendermint itself
(largely) agnostic to the data storage layer (within the realm of the popular
key-value/embedded databases). This flexibility is not particularly useful:
the benefits of a specific database engine in the context of Tendermint are
not particularly well understood, and the maintenance burden for multiple
backends is not commensurate with the benefit provided. Additionally, because
the data storage layer is handled generically, and most tests run with an
in-memory framework, it's difficult to take advantage of any higher-level
features of a database engine.
Ideally, developers within Tendermint will be able to interact with persisted
data via an interface that functions approximately like an object store, and
this storage interface will be able to accommodate all existing persistence
workloads (e.g. block storage, local peer management information like the
"address book", and crash-recovery logs like the WAL). In addition to
providing a more ergonomic interface and new semantics, by selecting a single
storage engine Tendermint can use the native durability and atomicity features
of the storage engine and simplify its own implementations.
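As a strawman, an ergonomic interface of this kind might look something like
the following sketch; every name and method here is hypothetical, intended
only to illustrate the "typed object store" idea:

.. code-block:: go

   // Package store sketches the kind of typed persistence interface this
   // RFC has in mind; every name here is hypothetical.
   package store

   import "context"

   // Record is anything the node persists: blocks, peer records, WAL entries.
   type Record interface {
       Key() []byte
   }

   // Store hides the storage engine behind object-store-like semantics.
   type Store interface {
       // Put persists a record atomically.
       Put(ctx context.Context, r Record) error

       // Get loads the record for key into out; callers never see the
       // engine's raw byte slices.
       Get(ctx context.Context, key []byte, out Record) error

       // Transact runs fn inside a single write transaction, relying on
       // the engine's native atomicity rather than ad hoc locking.
       Transact(ctx context.Context, fn func(tx Store) error) error
   }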
Data Access Patterns
~~~~~~~~~~~~~~~~~~~~
Tendermint's data access patterns have the following characteristics:
- aggregate data size often exceeds memory.
- most data (e.g. blocks) is rarely mutated after it is written, but small
amounts of working data (e.g. peer information, validator information) are
persisted by nodes and frequently mutated.
- read patterns can be quite random.
- crash resistance and crash recovery, provided by write-ahead-logs (in
consensus, and potentially for the mempool) should allow the system to
resume work after an unexpected shut down.
Project Goals
~~~~~~~~~~~~~
As we think about replacing the current persistence layer, we should consider
the following high level goals:
- drop dependencies on storage engines that have a CGo dependency.
- encapsulate data format and data storage from higher-level services
(e.g. reactors) within tendermint.
- select a storage engine that does not incur any additional operational
complexity (e.g. database should be embedded.)
- provide database semantics with sufficient ACID, snapshots, and
transactional support.
Open Questions
~~~~~~~~~~~~~~
The following questions remain:
- what kind of data-access concurrency does tendermint require?
- would Tendermint users (the SDK, etc.) benefit from some shared database
infrastructure?
- In earlier conversations it seemed as if the SDK has selected Badger and
RocksDB for their storage engines, and it might make sense to be able to
(optionally) pass a handle to a Badger instance between the libraries in
some cases.
- what are typical data sizes, and what kinds of memory sizes can we expect
operators to be able to provide?
- in addition to simple persistence, what kind of additional semantics would
tendermint like to enjoy (e.g. transactional semantics, unique constraints,
indexes, in-place-updates, etc.)?
Decision Framework
~~~~~~~~~~~~~~~~~~
Given the constraint of removing the CGo dependency, the decision is between
"badger" and "boltdb" (in the form of the etcd/CoreOS fork) as the low-level
storage engine. On top of this, and somewhat orthogonally, we must also decide
on the interface to the database and how the larger application will have to
interact with the database layer. Users of the data layer shouldn't ever need
to interact with raw byte slices from the database, and should mostly have the
experience of interacting with Go types.
Badger is more consistently developed and has a broader feature set than
Bolt. At the same time, Badger is likely more memory intensive and may have
more overhead in terms of open file handles given its model. At first glance,
Badger is the obvious choice: it's actively developed and it has a lot of
features that could be useful. Bolt is not without benefits: it's stable and
maintained by the etcd folks, and its simpler model (a single memory-mapped
file) may be easier to reason about.
I propose that we consider the following specific questions about storage
engines:
- does Badger's evolving development present a problem? It may result in data
file format changes in the future, which could restrict our access to the
latest version of the library between major upgrades.
- do we have goals/concerns about memory footprint that Badger may
prevent us from hitting, particularly as data sets grow over time?
- what kind of additional tooling might we need/like to build (dump/restore,
etc.)?
- do we want to run unit/integration tests against data files on disk rather
than relying exclusively on the in-memory database?
Project Scope
~~~~~~~~~~~~~
This project will consist of the following aspects:
- selecting a storage engine, and modifying the Tendermint codebase to
disallow any configuration of the storage engine outside of Tendermint itself.
- remove the dependency on the current tm-db interfaces and replace with some
internalized, safe, and ergonomic interface for data persistence with all
required database semantics.
- update core tendermint code to use the new interface and data tools.
Next Steps
~~~~~~~~~~
- circulate the RFC, and discuss options with appropriate stakeholders.
- write brief ADR to summarize decisions around technical decisions reached
during the RFC phase.
References
----------
- `boltdb <https://github.com/etcd-io/bbolt>`_
- `badger <https://github.com/dgraph-io/badger>`_
- `badgerdb overview <https://dbdb.io/db/badgerdb>`_
- `boltdb overview <https://dbdb.io/db/boltdb>`_
- `boltdb vs badger <https://tech.townsourced.com/post/boltdb-vs-badger>`_
- `bolthold <https://github.com/timshannon/bolthold>`_
- `badgerhold <https://github.com/timshannon/badgerhold>`_
- `Pebble <https://github.com/cockroachdb/pebble>`_
- `SDK Issue Regarding IVAL <https://github.com/cosmos/cosmos-sdk/issues/7100>`_
- `SDK Discussion about SMT/IVAL <https://github.com/cosmos/cosmos-sdk/discussions/8297>`_
Discussion
----------
- All things being equal, my tendency would be to use badger, with badgerhold
(if that makes sense) for its ergonomics and indexing capabilities, which
will require some small selection of wrappers for better write transaction
support. This is a weakly held tendency/belief, and I think it would be
useful for the RFC process to build consensus (or not) around this basic
assumption. A usage sketch follows.
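For illustration, a badgerhold-based approach might look roughly like the
following; the record type is hypothetical, and the import path may differ by
badgerhold major version:

.. code-block:: go

   package main

   import (
       "fmt"

       "github.com/timshannon/badgerhold"
   )

   // BlockMeta is an illustrative record type, not a real Tendermint struct.
   type BlockMeta struct {
       Height int64
       Hash   []byte
   }

   func main() {
       opts := badgerhold.DefaultOptions
       opts.Dir = "data"
       opts.ValueDir = "data"

       store, err := badgerhold.Open(opts)
       if err != nil {
           panic(err)
       }
       defer store.Close()

       // Insert a typed record; badgerhold handles encoding and indexing.
       if err := store.Insert(int64(1), &BlockMeta{Height: 1, Hash: []byte{0xff}}); err != nil {
           panic(err)
       }

       // Query by field, without touching raw byte slices.
       var metas []BlockMeta
       if err := store.Find(&metas, badgerhold.Where("Height").Ge(int64(1))); err != nil {
           panic(err)
       }
       fmt.Println("found", len(metas), "records")
   }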

View File

@@ -0,0 +1,420 @@
# RFC 002: Interprocess Communication (IPC) in Tendermint
## Changelog
- 08-Sep-2021: Initial draft (@creachadair).
## Abstract
Communication in Tendermint among consensus nodes, applications, and operator
tools all use different message formats and transport mechanisms. In some
cases there are multiple options. Having all these options complicates both the
code and the developer experience, and hides bugs. To support a more robust,
trustworthy, and usable system, we should document which communication paths
are essential, which could be removed or reduced in scope, and what we can
improve for the most important use cases.
This document proposes a variety of possible improvements of varying size and
scope. Specific design proposals should get their own documentation.
## Background
The Tendermint state replication engine has a complex IPC footprint.
1. Consensus nodes communicate with each other using a networked peer-to-peer
message-passing protocol.
2. Consensus nodes communicate with the application whose state is being
replicated via the [Application BlockChain Interface (ABCI)][abci].
3. Consensus nodes export a network-accessible [RPC service][rpc-service] to
support operations (bootstrapping, debugging) and synchronization of [light clients][light-client].
This interface is also used by the [`tendermint` CLI][tm-cli].
4. Consensus nodes export a gRPC service exposing a subset of the methods of
the RPC service described by (3). This was intended to simplify the
implementation of tools that already use gRPC to communicate with an
application (via the Cosmos SDK), and wanted to also talk to the consensus
node without implementing yet another RPC protocol.
The gRPC interface to the consensus node has been deprecated and is slated
for removal in the forthcoming Tendermint v0.36 release.
5. Consensus nodes may optionally communicate with a "remote signer" that holds
a validator key and can provide public keys and signatures to the consensus
node. One of the stated goals of this configuration is to allow the signer
to be run on a private network, separate from the consensus node, so that a
compromise of the consensus node from the public network would be less
likely to expose validator keys.
## Discussion: Transport Mechanisms
### Remote Signer Transport
A remote signer communicates with the consensus node in one of two ways:
1. "Raw": Using a TCP or Unix-domain socket which carries varint-prefixed
protocol buffer messages. In this mode, the consensus node is the server,
and the remote signer is the client.
This mode has been deprecated, and is intended to be removed.
2. gRPC: This mode uses the same protobuf messages as the "raw" mode, but uses a
standard encrypted gRPC HTTP/2 stub as the transport. In this mode, the
remote signer is the server and the consensus node is the client.
### ABCI Transport
In ABCI, the _application_ is the server, and the Tendermint consensus engine
is the client. Most applications implement the server using the [Cosmos SDK][cosmos-sdk],
which handles low-level details of the ABCI interaction and provides a
higher-level interface to the rest of the application. The SDK is written in Go.
Beneath the SDK, the application communicates with Tendermint core in one of
two ways:
- In-process direct calls (for applications written in Go and compiled against
the Tendermint code). This is an optimization for the common case where an
application is written in Go, to save on the overhead of marshaling and
unmarshaling requests and responses within the same process:
[`abci/client/local_client.go`][local-client]
- A custom remote procedure protocol built on wire-format protobuf messages
using a socket (the "socket protocol"): [`abci/server/socket_server.go`][socket-server]
The SDK also provides a [gRPC service][sdk-grpc] accessible from outside the
application, allowing clients to broadcast transactions to the network, look
up transactions, and simulate transaction costs.
### RPC Transport
The consensus node RPC service allows callers to query consensus parameters
(genesis data, transactions, commits), node status (network info, health
checks), application state (abci_query, abci_info), mempool state, and other
attributes of the node and its application. The service also provides methods
allowing transactions and evidence to be injected ("broadcast") into the
blockchain.
The RPC service is exposed in several ways:
- HTTP GET: Queries may be sent as URI parameters, with method names in the path.
- HTTP POST: Queries may be sent as JSON-RPC request messages in the body of an
HTTP POST request. The server uses a custom implementation of JSON-RPC that
is not fully compatible with the [JSON-RPC 2.0 spec][json-rpc], but handles
the common cases (a minimal client sketch appears after this list).
- Websocket: Queries may be sent as JSON-RPC request messages via a websocket.
This transport uses more or less the same JSON-RPC plumbing as the HTTP POST
handler.
The websocket endpoint also includes three methods that are _only_ exported
via websocket, which appear to support event subscription.
- gRPC: A subset of queries may be issued in protocol buffer format to the gRPC
interface described above under (4). As noted, this endpoint is deprecated
and will be removed in v0.36.
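For concreteness, here is roughly what the HTTP POST (JSON-RPC) transport
looks like from a client's perspective, using only the Go standard library and
assuming a node listening on the default `localhost:26657`:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// A JSON-RPC request for the `status` method, with no parameters.
	req := []byte(`{"jsonrpc":"2.0","id":1,"method":"status","params":{}}`)

	resp, err := http.Post("http://localhost:26657", "application/json", bytes.NewReader(req))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The response is a JSON-RPC envelope wrapping the node's status.
	fmt.Println(string(body))
}
```

The websocket transport carries more or less the same request envelopes over a
streaming connection.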
### Opportunities for Simplification
**Claim:** There are too many IPC mechanisms.
The preponderance of ABCI usage is via the Cosmos SDK, which means the
application and the consensus node are compiled together into a single binary,
and the consensus node calls the ABCI methods of the application directly as Go
functions.
We also need a true IPC transport to support ABCI applications _not_ written in
Go. There are also several known applications written in Rust, for example
(including [Anoma](https://github.com/anoma/anoma), Penumbra,
[Oasis](https://github.com/oasisprotocol/oasis-core), Twilight, and
[Nomic](https://github.com/nomic-io/nomic)). Ideally we will have at most one
such transport "built-in": More esoteric cases can be handled by a custom proxy.
Pragmatically, gRPC is probably the right choice here.
The primary consumers of the multi-headed "RPC service" today are the light
client and the `tendermint` command-line client. There is probably some local
use via curl, but I expect that is mostly ad hoc. Ethan reports that nodes are
often configured with the ports to the RPC service blocked, which is good for
security but complicates use by the light client.
### Context: Remote Signer Issues
Since the remote signer needs a secure communication channel to exchange keys
and signatures, and is expected to run truly remotely from the node (i.e., on a
separate physical server), there is not a whole lot we can do here. We should
finish the deprecation and removal of the "raw" socket protocol between the
consensus node and remote signers, but the use of gRPC is appropriate.
The main improvement we can make is to simplify the implementation quite a bit,
once we no longer need to support both "raw" and gRPC transports.
### Context: ABCI Issues
In the original design of ABCI, the presumption was that all access to the
application should be mediated by the consensus node. The idea is that outside
access could change application state and corrupt the consensus process, which
relies on the application to be deterministic. Of course, even without outside
access an application could behave nondeterministically, but allowing other
programs to send it requests was seen as courting trouble.
Conversely, users noted that most of the time, tools written for a particular
application don't want to talk to the consensus module directly. The
application "owns" the state machine the consensus engine is replicating, so
tools that care about application state should talk to the application.
Otherwise, they would have to bake in knowledge about Tendermint (e.g., its
interfaces and data structures) just because of the mediation.
For clients to talk directly to the application, however, there is another
concern: The consensus node is the ABCI _client_, so it is inconvenient for the
application to "push" work into the consensus module via ABCI itself. The
current implementation works around this by calling the consensus node's RPC
service, which exposes an `ABCIQuery` kitchen-sink method that allows the
application a way to poke ABCI messages in the other direction.
Without this RPC method, you could work around the issue (at least in principle) by
having the consensus module "poll" the application for work that needs to be done,
but that has unsatisfactory implications for performance and robustness, as
well as being harder to understand.
There has apparently been discussion about trying to make a more bidirectional
communication between the consensus node and the application, but this issue
seems to still be unresolved.
Another complication of ABCI is that it requires the application (server) to
maintain [four separate connections][abci-conn]: One for "consensus" operations
(BeginBlock, EndBlock, DeliverTx, Commit), one for "mempool" operations, one
for "query" operations, and one for "snapshot" (state synchronization) operations.
The rationale seems to have been that these groups of operations should be able
to proceed concurrently with each other. In practice, it results in a very complex
state management problem to coordinate state updates between the separate streams.
While application authors in Go are mostly insulated from that complexity by the
Cosmos SDK, the plumbing to maintain those separate streams is complicated, hard
to understand, and we suspect it contains concurrency bugs and/or lock contention
issues affecting performance that are subtle and difficult to pin down.
Even without changing the semantics of any ABCI operations, this code could be
made smaller and easier to debug by separating the management of concurrency
and locking from the IPC transport: If all requests and responses are routed
through one connection, the server can explicitly maintain priority queues for
requests and responses, and make less-conservative decisions about when locks
are (or aren't) required to synchronize state access. With independent queues,
the server must lock conservatively, and no optimistic scheduling is practical.
This would be a tedious implementation change, but should be achievable without
breaking any of the existing interfaces. More importantly, it could potentially
address a lot of difficult concurrency and performance problems we currently
see anecdotally but have difficulty isolating because of how intertwined these
separate message streams are at runtime.
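As a toy illustration of the single-connection idea (not Tendermint code), a
dispatcher that owns application state can express priority explicitly instead
of locking conservatively across four independent streams:

```go
// Package dispatch is a toy model: all ABCI-style requests flow through
// one dispatcher goroutine, which owns application state and expresses
// priority explicitly instead of relying on locks.
package dispatch

// Request carries a handler to run against application state.
type Request struct {
	Kind string // "consensus", "mempool", "query", or "snapshot"
	Do   func()
}

// Dispatcher serializes all requests through a single goroutine.
type Dispatcher struct {
	consensus chan Request // highest priority: block execution
	other     chan Request // mempool, query, and snapshot traffic
}

func New() *Dispatcher {
	d := &Dispatcher{
		consensus: make(chan Request, 64),
		other:     make(chan Request, 64),
	}
	go d.run()
	return d
}

// Submit enqueues a request on the appropriate queue.
func (d *Dispatcher) Submit(r Request) {
	if r.Kind == "consensus" {
		d.consensus <- r
		return
	}
	d.other <- r
}

// run is the only goroutine that touches application state, so handlers
// need no locks; consensus work is always drained first.
func (d *Dispatcher) run() {
	for {
		select {
		case r := <-d.consensus:
			r.Do()
		default:
			select {
			case r := <-d.consensus:
				r.Do()
			case r := <-d.other:
				r.Do()
			}
		}
	}
}
```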
TODO: Impact of ABCI++ for this topic?
### Context: RPC Issues
The RPC system serves several masters, and has a complex surface area. I
believe there are some improvements that can be exposed by separating some of
these concerns.
The Tendermint light client currently uses the RPC service to look up blocks
and transactions, and to forward ABCI queries to the application. The light
client proxy uses the RPC service via a websocket. The Cosmos IBC relayer also
uses the RPC service via websocket to watch for transaction events, and uses
the `ABCIQuery` method to fetch information and proofs for posted transactions.
Some work is already underway toward using P2P message passing rather than RPC
to synchronize light client state with the rest of the network. IBC relaying,
however, requires access to the event system, which is currently not accessible
except via the RPC interface. Event subscription _could_ be exposed via P2P,
but that is a larger project since it adds P2P communication load, and might
thus have an impact on the performance of consensus.
If event subscription can be moved into the P2P network, we could entirely
remove the websocket transport, even for clients that still need access to the
RPC service. Until then, we may still be able to reduce the scope of the
websocket endpoint to _only_ event subscription, by moving uses of the RPC
server as a proxy to ABCI over to the gRPC interface.
Having the RPC server still makes sense for local bootstrapping and operations,
but can be further simplified. Here are some specific proposals:
- Remove the HTTP GET interface entirely.
- Simplify JSON-RPC plumbing to remove unnecessary reflection and wrapping.
- Remove the gRPC interface (this is already planned for v0.36).
- Separate the websocket interface from the rest of the RPC service, and
restrict it to only event subscription.
Eventually we should try to remove the websocket interface entirely, but we
will need to revisit that (probably in a new RFC) once we've done some of the
easier things.
These changes would preserve the ability of operators to issue queries with
curl (but would require using JSON-RPC instead of URI parameters). That would
be a little less user-friendly, but for a use case that should not be that
prevalent.
These changes would also preserve compatibility with existing JSON-RPC based
code paths like the `tendermint` CLI and the light client (even ahead of
further work to remove that dependency).
**Design goal:** An operator should be able to disable non-local access to the
RPC server on any node in the network without impairing the ability of the
network to function for service of state replication, including light clients.
**Design principle:** All communication required to implement and monitor the
consensus network should use P2P, including the various synchronizations.
### Options for ABCI Transport
The majority of current usage is in Go, and the majority of that is mediated by
the Cosmos SDK, which uses the "direct call" interface. There is probably some
opportunity to clean up the implementation of that code, notably by inverting
which interface is at the "top" of the abstraction stack (currently it acts
like an RPC interface, and escape-hatches into the direct call). However, this
general approach works fine and doesn't need to be fundamentally changed.
For applications _not_ written in Go, the two remaining options are the
"socket" protocol (another variation on varint-prefixed protobuf messages over
an unstructured stream) and gRPC. It would be nice if we could get rid of one
of these to reduce (unneeded?) optionality.
Since both the socket protocol and gRPC depend on protocol buffers, the
"socket" protocol is the most obvious choice to remove. While gRPC is more
complex, the set of languages that _have_ protobuf support but _lack_ gRPC
support is small. Moreover, gRPC is already widely used in the rest of the
ecosystem (including the Cosmos SDK).
If some use case did arise later that can't work with gRPC, it would not be too
difficult for that application author to write a little proxy (in Go) that
bridges the convenient SDK APIs into a simpler protocol than gRPC.
**Design principle:** It is better for an uncommon special case to carry the
burdens of its specialness, than to bake an escape hatch into the infrastructure.
**Recommendation:** We should deprecate and remove the socket protocol.
### Options for RPC Transport
[ADR 057][adr-57] proposes using gRPC for the Tendermint RPC implementation.
This is still possible, but if we are able to simplify and decouple the
concerns as described above, I do not think it should be necessary.
While JSON-RPC is not the best possible RPC protocol for all situations, it has
some advantages over gRPC for our domain. Specifically:
- It is easy to call JSON-RPC manually from the command-line, which helps with
a common concern for the RPC service, local debugging and operations.
Relatedly: JSON is relatively easy for humans to read and write, and it can
be easily copied and pasted to share sample queries and debugging results in
chat, issue comments, and so on. Ideally, the RPC service will not be used
for activities where the costs of a text protocol are important compared to
its legibility and manual usability benefits.
- gRPC has an enormous dependency footprint for both clients and servers, and
many of the features it provides to support security and performance
(encryption, compression, streaming, etc.) are mostly irrelevant to local
use. Tendermint already needs to include a gRPC client for the remote signer,
but if we can avoid the need for a _client_ to depend on gRPC, that is a win
for usability.
- If we intend to migrate light clients off RPC to use P2P entirely, there is
no advantage to forcing a temporary migration to gRPC along the way; and once
the light client is not dependent on the RPC service, the efficiency of the
protocol is much less important.
- We can still get the benefits of generated data types using protocol buffers, even
without using gRPC:
- Protobuf defines a standard JSON encoding for all message types so
languages with protobuf support do not need to worry about type mapping
oddities.
- Using JSON means that even languages _without_ good protobuf support can
implement the protocol with a bit more work, and I expect this situation to
be rare.
Even if a language lacks a good standard JSON-RPC mechanism, the protocol is
lightweight and can be implemented by simple send/receive over TCP or
Unix-domain sockets with no need for code generation, encryption, etc. gRPC
uses a complex HTTP/2 based transport that is not easily replicated.
### Future Work
The background and proposals sketched above focus on the existing structure of
Tendermint and improvements we can make in the short term. It is worthwhile to
also consider options for longer-term broader changes to the IPC ecosystem.
The following outlines some ideas at a high level:
- **Consensus service:** Today, the application and the consensus node are
nominally connected only via ABCI. Tendermint was originally designed with
the assumption that all communication with the application should be mediated
by the consensus node. Based on further experience, however, the design goal
is now that the _application_ should be the mediator of application state.
As noted above, however, ABCI is a client/server protocol, with the
application as the server. For outside clients that turns out to have been a
good choice, but it complicates the relationship between the application and
the consensus node: Previously transactions were entered via the node, now
they are entered via the app.
We have worked around this by using the Tendermint RPC service to give the
application a "back channel" to the consensus node, so that it can push
transactions back into the consensus network. But the RPC service exposes a
lot of other functionality, too, including event subscription, block and
transaction queries, and a lot of node status information.
Even if we can't easily "fix" the orientation of the ABCI relationship, we
could improve isolation by splitting out the parts of the RPC service that
the application needs as a back-channel, and sharing those _only_ with the
application. By defining a "consensus service", we could give the application
a way to talk back limited to only the capabilities it needs. This approach
has the benefit that we could do it without breaking existing use, and if we
later did "fix" the ABCI directionality, we could drop the special case
without disrupting the rest of the RPC interface.
- **Event service:** Right now, the IBC relayer relies on the Tendermint RPC
service to provide a stream of block and transaction events, which it uses to
discover which transactions need relaying to other chains. While I think
that event subscription should eventually be handled via P2P, we could gain
some immediate benefit by splitting out event subscription from the rest of
the RPC service.
In this model, an event subscription service would be exposed on the public
network, but on a different endpoint. This would remove the need for the RPC
service to support the websocket protocol, and would allow operators to
isolate potentially sensitive status query results from the public network.
At the moment the relayers also use the RPC service to get block data for
synchronization, but work is already in progress to handle that concern via
the P2P layer. Once that's done, event subscription could be separated.
Separating parts of the existing RPC service is not without cost: It might
require additional connection endpoints, for example, though it is also not too
difficult for multiple otherwise-independent services to share a connection.
In return, though, it would become easier to reduce transport options and for
operators to independently control access to sensitive data. Considering the
viability and implications of these ideas is beyond the scope of this RFC, but
they are documented here since they follow from the background we have already
discussed.
## References
[abci]: https://github.com/tendermint/spec/tree/95cf253b6df623066ff7cd4074a94e7a3f147c7a/spec/abci
[rpc-service]: https://docs.tendermint.com/master/rpc/
[light-client]: https://docs.tendermint.com/master/tendermint-core/light-client.html
[tm-cli]: https://github.com/tendermint/tendermint/tree/master/cmd/tendermint
[cosmos-sdk]: https://github.com/cosmos/cosmos-sdk/
[local-client]: https://github.com/tendermint/tendermint/blob/master/abci/client/local_client.go
[socket-server]: https://github.com/tendermint/tendermint/blob/master/abci/server/socket_server.go
[sdk-grpc]: https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types/tx#ServiceServer
[json-rpc]: https://www.jsonrpc.org/specification
[abci-conn]: https://github.com/tendermint/spec/blob/master/spec/abci/apps.md#state
[adr-57]: https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-057-RPC.md

View File

@@ -0,0 +1,283 @@
# RFC 003: Taxonomy of potential performance issues in Tendermint
## Changelog
- 2021-09-02: Created initial draft (@wbanfield)
- 2021-09-14: Add discussion of the event system (@wbanfield)
## Abstract
This document discusses the various sources of performance issues in Tendermint and
attempts to clarify what work may be required to understand and address them.
## Background
Performance, loosely defined as the ability of a software process to perform its work
quickly and efficiently under load and within reasonable resource limits, is a frequent
topic of discussion in the Tendermint project.
To effectively address any issues with Tendermint performance we need to
categorize the various issues, understand their potential sources, and gauge their
impact on users.
Categorizing the different known performance issues will allow us to discuss and fix them
more systematically. This document proposes a rough taxonomy of performance issues
and highlights areas where more research into potential performance problems is required.
Understanding Tendermint's performance limitations will also be critically important
as we make changes to many of its subsystems. Performance is a central concern for
upcoming decisions regarding the `p2p` protocol, RPC message encoding and structure,
database usage and selection, and consensus protocol updates.
## Discussion
This section attempts to delineate the different sections of Tendermint functionality
that are often cited as having performance issues. It raises questions and suggests
lines of inquiry that may be valuable for better understanding Tendermint's performance issues.
As a note: We should avoid quickly adding many microbenchmarks or package level benchmarks.
These are prone to being worse than useless as they can obscure what _should_ be
focused on: performance of the system from the perspective of a user. We should,
instead, tune performance with an eye towards user needs and actions users make. These users comprise
both operators of Tendermint chains and the people generating transactions for
Tendermint chains. Both of these sets of users are largely aligned in wanting an end-to-end
system that operates quickly and efficiently.
REQUEST: The list below may be incomplete, if there are additional sections that are often
cited as creating poor performance, please comment so that they may be included.
### P2P
#### Claim: Tendermint cannot scale to large numbers of nodes
Users have reported that Tendermint networks cannot scale to large numbers of nodes.
The number of nodes reported as causing issues was in the thousands.
We don't currently have evidence about what the upper-limit of nodes that Tendermint's
P2P stack can scale to.
We need to more concretely understand the source of issues and determine what layer
is causing a problem. It's possible that the P2P layer, in the absence of any reactors
sending data, is perfectly capable of managing thousands of peer connections. For
a reasonable networking and application setup, thousands of connections should not present any
issue for the application.
We need more data to understand the problem directly. We want to drive the popularity
and adoption of Tendermint and this will mean allowing for chains with more validators.
We should follow up with users experiencing this issue. We may then want to add
a series of metrics to the P2P layer to better understand the inefficiencies it produces.
The following metrics can help us understand the sources of latency in the Tendermint P2P stack:
* Number of messages sent and received per second
* Time of a message spent on the P2P layer send and receive queues
The following metrics exist and should be leveraged in addition to those added:
* Number of peers a node is connected to
* Number of bytes per channel sent and received from each peer
### Sync
#### Claim: Block Syncing is slow
Bootstrapping a new node in a network to the height of the rest of the network is believed to
take longer than users would like. Block sync requires fetching all of the blocks from
peers and placing them into the local disk for storage. A useful line of inquiry
is understanding how quickly a perfectly tuned system _could_ fetch all of the state
over a network so that we understand how much overhead Tendermint actually adds.
The operation is likely to be _incredibly_ dependent on the environment in which
the node is being run. The factors that will influence syncing include:
1. Number of peers that a syncing node may fetch from.
2. Speed of the disk that a validator is writing to.
3. Speed of the network connection between the different peers that the node is
syncing from.
We should calculate how quickly this operation _could possibly_ complete for common chains and nodes.
To calculate how quickly this operation could possibly complete, we should assume that
a node is reading at line-rate of the NIC and writing at the full drive speed to its
local storage. Comparing this theoretical upper-limit to the actual sync times
observed by node operators will give us a good point of comparison for understanding
how much overhead Tendermint incurs.
We should additionally add metrics to the blocksync operation to more clearly pinpoint
slow operations. The following metrics should be added to the block syncing operation:
* Time to fetch and validate each block
* Time to execute a block
* Blocks sync'd per unit time
### Application
Applications performing complex state transitions have the potential to bottleneck
the Tendermint node.
#### Claim: ABCI block delivery could cause slowdown
ABCI delivers blocks in several methods: `BeginBlock`, `DeliverTx`, `EndBlock`, `Commit`.
Tendermint delivers transactions one-by-one via the `DeliverTx` call. Most of the
transaction delivery in Tendermint occurs asynchronously and therefore appears unlikely to
form a bottleneck in ABCI.
After delivering all transactions, Tendermint then calls the `Commit` ABCI method.
Tendermint [locks all access to the mempool][abci-commit-description] while `Commit`
proceeds. This means that an application that is slow to execute all of its
transactions or finalize state during the `Commit` method will prevent any new
transactions from being added to the mempool. Apps that are slow to commit will
prevent consensus from proceeding to the next consensus height, since Tendermint
cannot validate or produce block proposals without the
AppHash obtained from the `Commit` method. We should add a metric for each
step in the ABCI protocol to track the amount of time that a node spends communicating
with the application at each step.
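A sketch of such a metric, using the Prometheus Go client directly for
illustration (Tendermint's actual metrics plumbing differs, and the metric
name here is hypothetical):

```go
// Package metrics sketches a hypothetical per-method ABCI timing metric;
// tendermint_abci_method_duration_seconds is not an existing metric.
package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var abciDuration = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Namespace: "tendermint",
		Subsystem: "abci",
		Name:      "method_duration_seconds",
		Help:      "Time spent waiting on the application in each ABCI method.",
	},
	[]string{"method"},
)

func init() { prometheus.MustRegister(abciDuration) }

// Observe wraps an ABCI call and records how long the application took,
// labeled by method ("begin_block", "deliver_tx", "commit", ...).
func Observe(method string, call func() error) error {
	start := time.Now()
	err := call()
	abciDuration.WithLabelValues(method).Observe(time.Since(start).Seconds())
	return err
}
```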
#### Claim: ABCI serialization overhead causes slowdown
The most common way to run a Tendermint application is using the Cosmos-SDK.
The Cosmos-SDK runs the ABCI application within the same process as Tendermint.
When an application is run in the same process as Tendermint, a serialization penalty
is not paid. This is because the local ABCI client does not serialize method calls
and instead passes the protobuf type through directly. This can be seen
in [local_client.go][abci-local-client-code].
Serialization and deserialization in the gRPC and socket protocol ABCI clients
may cause slowdown. While these may cause issues, they are not part of the primary
use case of Tendermint and do not necessarily need to be addressed at this time.
### RPC
#### Claim: The Query API is slow.
The query API locks a mutex across the ABCI connections. This causes consensus to
slow down during queries, as ABCI is no longer able to make progress. This is known
to be causing issues in the cosmos-sdk and is being addressed [in the sdk][sdk-query-fix],
but a more robust solution may be required. Adding metrics to each ABCI client connection
and message as described in the Application section of this document would allow us
to further introspect the issue here.
#### Claim: RPC Serialization may cause slowdown
The Tendermint RPC uses a modified version of JSON-RPC. This RPC powers the `broadcast_tx_*` methods,
which are currently the critical path for adding transactions to Tendermint. These methods are
likely invoked quite frequently on popular networks. Being able to perform efficiently
on this common and critical operation is very important. The current JSON-RPC implementation
relies heavily on type introspection via reflection, which is known to be very slow in
Go. We should therefore produce benchmarks of these methods to determine how much overhead
we are adding to what is likely to be a very common operation.
The other JSON-RPC methods are much less critical to the core functionality of Tendermint.
While there may be other points of performance consideration within the RPC, methods that do not
receive high volumes of requests should not be prioritized for performance consideration.
NOTE: Previous discussion of the RPC framework was done in [ADR 57][adr-57] and
there is ongoing work to inspect and alter the JSON-RPC framework in [RFC 002][rfc-002].
Many of these RPC-related performance considerations can either wait until the RFC 002 work is done or be
considered concordantly with the in-flight changes to the JSON-RPC.
### Protocol
#### Claim: Gossiping messages is a slow process
Currently, for any validator to successfully vote in a consensus _step_, it must
receive votes from greater than 2/3 of the validators on the network. In many cases,
it's preferable to receive as many votes as possible from correct validators.
This produces a quadratic increase in messages that are communicated as more validators join the network.
(Each of the N validators must communicate with all other N-1 validators).
This large number of messages communicated per step has been identified to impact
performance of the protocol. Given that the number of messages communicated has been
identified as a bottleneck, it would be extremely valuable to gather data on how long
it takes for popular chains with many validators to gather all votes within a step.
Metrics that would improve visibility into this include:
* Amount of time for a node to gather votes in a step.
* Amount of time for a node to gather all block parts.
* Number of votes each node gossips (i.e. not its own votes, but votes it is
transmitting for a peer).
* Total number of votes each node receives (a node may receive duplicate votes,
so understanding how frequently this occurs will be valuable in evaluating the performance
of the gossip system).
#### Claim: Hashing Txs causes slowdown in Tendermint
Using a faster hash algorithm for Tx hashes is currently a point of discussion
in Tendermint. Namely, it is being considered as part of the [modular hashing proposal][modular-hashing].
It is currently unknown if hashing transactions in the Mempool forms a significant bottleneck.
Although it does not appear to be documented as slow, there are a few open github
issues that indicate a possible user preference for a faster hashing algorithm,
including [issue 2187][issue-2187] and [issue 2186][issue-2186].
It is likely worth investigating what order of magnitude Tx hashing takes in comparison to other
aspects of adding a Tx to the mempool. It is not currently clear if the rate of adding Tx
to the mempool is a source of user pain. We should not endeavor to make large changes to
consensus critical components without first being certain that the change is highly
valuable and impactful.
### Digital Signatures
#### Claim: Verification of digital signatures may cause slowdown in Tendermint
Working with cryptographic signatures can be computationally expensive. The cosmos
hub uses [ed25519 signatures][hub-signature]. The library performing signature
verification in Tendermint on votes is [benchmarked][ed25519-bench] to perform an `ed25519`
signature verification in 75μs on a decently fast CPU. A validator in the Cosmos Hub performs
3 sets of verifications on the signatures of the 140 validators in the Hub
in each consensus round: during block verification, when verifying the prevotes, and
when verifying the precommits. With no batching, this would be roughly
3 × 140 × 75μs ≈ `32ms` per round. It is quite unlikely, therefore, that this accounts
for any serious amount of the ~7 seconds of block time per height in the Hub.
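This arithmetic is easy to sanity-check with the standard library's
`crypto/ed25519` (the voi library cited above is faster, so this is a
conservative sketch):

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"time"
)

func main() {
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)
	msg := []byte("vote")
	sig := ed25519.Sign(priv, msg)

	// Three passes over 140 validator signatures, as described above.
	const verifications = 3 * 140

	start := time.Now()
	for i := 0; i < verifications; i++ {
		if !ed25519.Verify(pub, msg, sig) {
			panic("bad signature")
		}
	}
	fmt.Printf("%d verifications took %v\n", verifications, time.Since(start))
}
```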
This may cause slowdown when syncing, since the process needs to constantly verify
signatures. It's possible that improved signature aggregation will lead to improved
light client or other syncing performance. In general, a metric should be added
to track block rate while blocksyncing.
#### Claim: Our use of digital signatures in the consensus protocol contributes to performance issues
Currently, Tendermint's digital signature verification requires that all validators
receive all vote messages. Each validator must receive the complete digital signature
along with the vote message that it corresponds to. This means that all N validators
must receive messages from at least 2/3 of the N validators in each consensus
round. Given the potential for oddly shaped network topologies and the expected
variable network roundtrip times of a few hundred milliseconds in a blockchain,
it is highly likely that this amount of gossiping is leading to a significant amount
of the slowdown in the Cosmos Hub and in Tendermint consensus.
### Tendermint Event System
#### Claim: The event system is a bottleneck in Tendermint
The Tendermint Event system is used to communicate and store information about
internal Tendermint execution. The system uses channels internally to send messages
to different subscribers. Sending an event [blocks on the internal channel][event-send].
The default configuration is to [use an unbuffered channel for event publishes][event-buffer-capacity].
Several consumers of the event system also use an unbuffered channel for reads.
An example of this is the [event indexer][event-indexer-unbuffered], which takes an
unbuffered subscription to the event system. The result is that these unbuffered readers
can cause writes to the event system to block or slow down depending on contention in the
event system. This has implications for the consensus system, which [publishes events][consensus-event-send].
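A toy model of the blocking behavior described above: with an unbuffered
subscription, a slow reader stalls the publisher for the full duration of its
processing:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	events := make(chan string) // unbuffered, like the default subscription

	// A slow subscriber, standing in for e.g. the indexer writing to disk.
	go func() {
		for range events {
			time.Sleep(50 * time.Millisecond)
		}
	}()

	start := time.Now()
	for i := 0; i < 10; i++ {
		events <- "NewBlock" // each send waits for the subscriber to be ready
	}
	// Prints roughly 450ms: the publisher absorbed the subscriber's latency.
	fmt.Println("10 publishes took", time.Since(start))
	close(events)
}
```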
To better understand the performance of the event system, we should add metrics to track the timing of
event sends. The following metrics would be a good start for tracking this performance:
* Time in event send, labeled by Event Type
* Time in event receive, labeled by subscriber
* Event throughput, measured in events per unit time.
### References
[modular-hashing]: https://github.com/tendermint/tendermint/pull/6773
[issue-2186]: https://github.com/tendermint/tendermint/issues/2186
[issue-2187]: https://github.com/tendermint/tendermint/issues/2187
[rfc-002]: https://github.com/tendermint/tendermint/pull/6913
[adr-57]: https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-057-RPC.md
[issue-1319]: https://github.com/tendermint/tendermint/issues/1319
[abci-commit-description]: https://github.com/tendermint/spec/blob/master/spec/abci/apps.md#commit
[abci-local-client-code]: https://github.com/tendermint/tendermint/blob/511bd3eb7f037855a793a27ff4c53c12f085b570/abci/client/local_client.go#L84
[hub-signature]: https://github.com/cosmos/gaia/blob/0ecb6ed8a244d835807f1ced49217d54a9ca2070/docs/resources/genesis.md#consensus-parameters
[ed25519-bench]: https://github.com/oasisprotocol/curve25519-voi/blob/d2e7fc59fe38c18ca990c84c4186cba2cc45b1f9/PERFORMANCE.md
[event-send]: https://github.com/tendermint/tendermint/blob/5bd3b286a2b715737f6d6c33051b69061d38f8ef/libs/pubsub/pubsub.go#L338
[event-buffer-capacity]: https://github.com/tendermint/tendermint/blob/5bd3b286a2b715737f6d6c33051b69061d38f8ef/types/event_bus.go#L14
[event-indexer-unbuffered]: https://github.com/tendermint/tendermint/blob/5bd3b286a2b715737f6d6c33051b69061d38f8ef/state/indexer/indexer_service.go#L39
[consensus-event-send]: https://github.com/tendermint/tendermint/blob/5bd3b286a2b715737f6d6c33051b69061d38f8ef/internal/consensus/state.go#L1573
[sdk-query-fix]: https://github.com/cosmos/cosmos-sdk/pull/10045

View File

@@ -0,0 +1,213 @@
========================================
RFC 004: E2E Test Framework Enhancements
========================================
Changelog
---------
- 2021-09-14: started initial draft (@tychoish)
Abstract
--------
This document discusses a series of improvements to the e2e test framework
that we can consider during the next few releases to help boost confidence in
Tendermint releases, and improve developer efficiency.
Background
----------
During the 0.35 release cycle, the E2E tests were a source of great
value, helping to identify a number of bugs before release. At the same time,
the tests were not consistently passing during this time, thereby reducing
their value, and forcing the core development team to allocate time and energy
to maintaining and chasing down issues with the e2e tests and the test
harness. The experience of this release cycle calls to mind a series of
improvements to the test framework, and this document attempts to capture
these improvements, along with motivations, and potential for impact.
Projects
--------
Flexible Workload Generation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Presently the e2e suite contains a single workload generation pattern, which
exists simply to ensure that the test networks have some work during their
runs. However, the shape and volume of the work is very consistent, and it is
kept gentle to help ensure test reliability.
We don't need a complex workload generation framework, but being able to have
a few different workload shapes available for test networks, both generated
and hand-crafted, would be useful; a sketch of one possible configuration
follows the list below.
Workload patterns/configurations might include:
- transaction targeting patterns (include light nodes, round robin, target
individual nodes)
- variable transaction size over time.
- transaction broadcast option (synchronously, checked, fire-and-forget,
mixed).
- number of transactions to submit.
- non-transaction workloads: (evidence submission, query, event subscription.)
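The configuration sketched below is one hypothetical shape for such workloads;
none of these field names exist in the current manifest format:

.. code-block:: go

   // Package generator sketches one hypothetical shape for configurable
   // workloads; none of these fields exist in the current manifest format.
   package generator

   // Workload describes the transaction load applied to a test network.
   type Workload struct {
       // Targeting selects which nodes receive transactions:
       // "round-robin", "single-node", or "include-light-nodes".
       Targeting string

       // TxBytesSchedule varies transaction size over time, as a map from
       // minutes-elapsed to transaction size in bytes.
       TxBytesSchedule map[int]int

       // Broadcast selects the RPC method: "async", "sync", "commit", or "mixed".
       Broadcast string

       // TotalTxs caps the number of transactions submitted; 0 means unlimited.
       TotalTxs int

       // Extras enables non-transaction load such as evidence submission,
       // queries, or event subscriptions.
       Extras []string
   }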
Configurable Generator
~~~~~~~~~~~~~~~~~~~~~~
The nightly e2e suite is defined by the `testnet generator
<https://github.com/tendermint/tendermint/blob/master/test/e2e/generator/generate.go#L13-L65>`_,
and it's difficult to add dimensions or change the focus of the test suite in
any way without modifying the implementation of the generator. If the
generator were more configurable, potentially via a file rather than in
the Go implementation, we could modify the focus of the test suite on the
fly.
Features that we might want to configure:
- number of test networks to generate of various topologies, to improve
coverage of different configurations.
- test application configurations (to modify the latency of ABCI calls, etc.)
- size of test networks.
- workload shape and behavior.
- initial sync and catch-up configurations.
The workload generator currently provides runtime options for limiting the
generator to specific types of P2P stacks, and for generating multiple groups
of test cases to support parallelism. The goal is to extend this pattern and
avoid hardcoding the matrix of test cases in the generator code. Once the
testnet configuration generation behavior is configurable at runtime,
developers may be able to use the e2e framework to validate changes before
landing them, rather than discovering a day later that they break the e2e tests.
In addition to the autogenerated suite, it might make sense to maintain a
small collection of hand-crafted cases that exercise configurations of
concern, to run as part of the nightly (or less frequent) loop.
Implementation Plan Structure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As a development team, we should determine which features should impact the
e2e tests early in the development cycle, and if we intend to modify the e2e
tests to exercise a feature, we should identify this early and begin the
integration process as soon as possible.
To facilitate this, we should adopt a practice whereby we exercise specific
features that are currently under development more rigorously in the e2e
suite, and then as development stabilizes we can reduce the number or weight
of these features in the suite.
As of 0.35 there are essentially two end-to-end tests: the suite of 64
generated test networks, and the hand-crafted `ci.toml` test case. The
generated test cases help provide systematic coverage, while the `ci` run
provides coverage for a large number of features.
Reduce Cycle Time
~~~~~~~~~~~~~~~~~
One of the barriers to leveraging the e2e framework, and one of the challenges
in debugging failures, is that the cycle time of running a single test iteration is
quite high: 5 minutes to build the docker image, plus the time to run the test
or tests.
There are a number of improvements and enhancements that can reduce the cycle
time in practice:
- reduce the amount of time required to build the docker image used in these
tests. Without the dependency on CGo, the tendermint binaries could be
(cross) compiled outside of the docker container and then injected into it,
which would take better advantage of docker's native caching; indeed, without
the dependency on CGo there would be no hard requirement for the e2e tests to
use docker at all.
- support test parallelism. Because of the way the testnets are orchestrated,
a single system can really only run one network at a time. For executions
(local or remote) with more resources, there's no reason not to run a few
networks in parallel to reduce the feedback time.
- prune testnet configurations that are unlikely to provide good signal, to
shorten the time to feedback.
- apply some kind of tiered approach to test execution, to improve the
legibility of the test results. For example, order tests by the dependency of
their features, or run test networks without perturbations before running
that configuration with perturbations, to be able to isolate the impact of
specific features.
- orchestrate the test harness directly from go test rather than via a special
harness and shell scripts, so that e2e tests fit more naturally into
developers' existing workflows (a sketch follows below).
Many of these improvements, particularly reducing the build time, will also
reduce the time to get feedback during automated builds.
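As a sketch of the go test orchestration idea, where every type and helper
below is a hypothetical stand-in for the real harness:

.. code-block:: go

   package e2e_test

   import "testing"

   // testnet and its methods are stand-ins for the real harness; the point
   // is only that a network's lifecycle can live inside a normal Go test.
   type testnet struct{ t *testing.T }

   func setupTestnet(t *testing.T, manifest string) *testnet { return &testnet{t: t} }
   func (n *testnet) Cleanup()                               {}
   func (n *testnet) Start() error                           { return nil }
   func (n *testnet) WaitForHeight(h int64) error            { return nil }

   func TestSmallNetworkMakesProgress(t *testing.T) {
       if testing.Short() {
           t.Skip("skipping e2e network test")
       }

       net := setupTestnet(t, "networks/small.toml")
       defer net.Cleanup()

       if err := net.Start(); err != nil {
           t.Fatalf("starting network: %v", err)
       }
       if err := net.WaitForHeight(10); err != nil {
           t.Fatalf("waiting for height 10: %v", err)
       }
   }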
Deeper Insights
~~~~~~~~~~~~~~~
When a test network fails, it's incredibly difficult to understand _why_ the
network failed, as the current system provides very little insight into the
system outside of the process logs. When a test network stalls or fails,
developers should be able to quickly and easily get a sense of the state of
the network and all nodes.
Improvements in pursuit of this goal include functionality that would help
node operators in production environments, by improving the quality and
utility of the logging messages and other reported metrics, and also some
tools to collect and aggregate this data for developers in the context of test
networks.
- Interleave messages from all nodes in the network to be able to correlate
events during the test run.
- Collect structured metrics of the system operation (CPU/MEM/IO) during the
test run, as well as from each tendermint/application process.
- Build (simple) tools to be able to render and summarize the data collected
during the test run to answer basic questions about test outcome.
Flexible Assertions
~~~~~~~~~~~~~~~~~~~
Currently, all assertions run for every test network, which makes the
assertions pretty bland, and the framework primarily useful as a smoke-test
framework. It might be useful to be able to write and run different tests for
different configurations, which would allow us to test outside of the
happy path.
In general, our existing assertions occupy a small fraction of the total test
time, so adding a few extra test assertions would come at limited cost and
could help build confidence.
Additional Kinds of Testing
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The existing e2e suite exercises networks of nodes that have a homogeneous
tendermint version and a stable configuration, and that are expected to make
progress. There are many other possible test configurations that may be
interesting to engage with. These could include dimensions such as:
- Multi-version testing to exercise our compatibility guarantees for networks
that might have different tendermint versions.
- As a flavor of multi-version testing, include upgrade testing, to build
confidence in migration code and procedures.
- Additional test applications, particularly practical applications,
including some that use gaiad and/or the cosmos-sdk, as well as test-only
applications that simulate other kinds of applications (e.g. variable
application operation latency).
- Tests of "non-viable" configurations that ensure that forbidden combinations
lead to halts.
References
----------
- `ADR 66: End-to-End Testing <../architecture/adr-66-e2e-testing.md>`_

View File

@@ -17,9 +17,9 @@ consensus gossip protocol.
## Using Block Sync
-To support faster syncing, Tendermint offers a `fast-sync` mode, which
+To support faster syncing, Tendermint offers a `blocksync` mode, which
is enabled by default, and can be toggled in the `config.toml` or via
-`--fast_sync=false`.
+`--blocksync.enable=false`.
In this mode, the Tendermint daemon will sync hundreds of times faster
than if it used the real-time consensus process. Once caught up, the
@@ -29,18 +29,23 @@ has at least one peer and it's height is at least as high as the max
reported peer height. See [the IsCaughtUp
method](https://github.com/tendermint/tendermint/blob/b467515719e686e4678e6da4e102f32a491b85a0/blockchain/pool.go#L128).
-Note: There are two versions of Block Sync. We recommend using v0 as v2 is still in beta.
+Note: There are multiple versions of Block Sync. Please use v0 as the other versions are no longer supported.
If you would like to use a different version you can do so by changing the version in the `config.toml`:
```toml
#######################################################
### Block Sync Configuration Connections ###
#######################################################
-[fastsync]
+[blocksync]
# If this node is many blocks behind the tip of the chain, BlockSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
enable = true
# Block Sync version to use:
# 1) "v0" (default) - the legacy Block Sync implementation
# 2) "v2" - complete redesign of v0, optimized for testability & readability
# 1) "v0" (default) - the standard Block Sync implementation
# 2) "v2" - DEPRECATED, please use v0
version = "v0"
```
@@ -55,4 +60,4 @@ the network best height, it will switches to the state sync mechanism and then e
another event for exposing the fast-sync `complete` status and the state `height`.
The user can query the events by subscribing `EventQueryBlockSyncStatus`
Please check [types](https://pkg.go.dev/github.com/tendermint/tendermint/types?utm_source=godoc#pkg-constants) for the details.

View File

@@ -185,51 +185,65 @@ the argument name and use `_` as a placeholder.
### Formatting
-The following nuances when sending/formatting transactions should be
-taken into account:
+When sending transactions to the RPC interface, the following formatting rules
+must be followed:
-With `GET`:
+Using `GET` (with parameters in the URL):
-To send a UTF8 string byte array, quote the value of the tx parameter:
+To send a UTF8 string as transaction data, enclose the value of the `tx`
+parameter in double quotes:
```sh
curl 'http://localhost:26657/broadcast_tx_commit?tx="hello"'
```
-which sends a 5 byte transaction: "h e l l o" \[68 65 6c 6c 6f\].
+which sends a 5-byte transaction: "h e l l o" \[68 65 6c 6c 6f\].
-Note the URL must be wrapped with single quotes, else bash will ignore
-the double quotes. To avoid the single quotes, escape the double quotes:
+Note that the URL in this example is enclosed in single quotes to prevent the
+shell from interpreting the double quotes. Alternatively, you may escape the
+double quotes with backslashes:
```sh
curl http://localhost:26657/broadcast_tx_commit?tx=\"hello\"
```
-Using a special character:
+The double-quoted format works for multibyte characters, as long as they
+are valid UTF8, for example:
```sh
curl 'http://localhost:26657/broadcast_tx_commit?tx="€5"'
```
-sends a 4 byte transaction: "€5" (UTF8) \[e2 82 ac 35\].
+sends a 4-byte transaction: "€5" (UTF8) \[e2 82 ac 35\].
-To send as raw hex, omit quotes AND prefix the hex string with `0x`:
+Arbitrary (non-UTF8) transaction data may also be encoded as a string of
+hexadecimal digits (2 digits per byte). To do this, omit the quotation marks
+and prefix the hex string with `0x`:
```sh
-curl http://localhost:26657/broadcast_tx_commit?tx=0x01020304
+curl http://localhost:26657/broadcast_tx_commit?tx=0x68656C6C6F
```
-which sends a 4 byte transaction: \[01 02 03 04\].
+which sends the 5-byte transaction: \[68 65 6c 6c 6f\].
-With `POST` (using `json`), the raw hex must be `base64` encoded:
+Using `POST` (with parameters in JSON), the transaction data are sent as a JSON
+string in base64 encoding:
```sh
curl --data-binary '{"jsonrpc":"2.0","id":"anything","method":"broadcast_tx_commit","params": {"tx": "AQIDBA=="}}' -H 'content-type:text/plain;' http://localhost:26657
curl http://localhost:26657 -H 'Content-Type: application/json' --data-binary '{
"jsonrpc": "2.0",
"id": "anything",
"method": "broadcast_tx_commit",
"params": {
"tx": "aGVsbG8="
}
}'
```
which sends the same 4 byte transaction: \[01 02 03 04\].
which sends the same 5-byte transaction: \[68 65 6c 6c 6f\].
Note that raw hex cannot be used in `POST` transactions.
Note that the hexadecimal encoding of transaction data is _not_ supported in
JSON (`POST`) requests.
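As a small illustrative sketch (plain Go standard library; the `tx` variable and the printed values mirror the examples above), the two encodings can be derived as follows:
```go
package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

func main() {
	tx := []byte("hello") // the 5-byte transaction used in the examples above

	// GET form: hex digits prefixed with 0x, no quotation marks.
	fmt.Println("0x" + hex.EncodeToString(tx)) // 0x68656c6c6f

	// POST (JSON-RPC) form: base64-encoded string.
	fmt.Println(base64.StdEncoding.EncodeToString(tx)) // aGVsbG8=
}
```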
## Reset


@@ -62,3 +62,30 @@ given destination directory. Each archive will contain:
Note: goroutine.out and heap.out will only be written if a profile address is
provided and is operational. This command is blocking and will log any error.
## Tendermint Inspect
Tendermint includes an `inspect` command for querying Tendermint's state store and block
store over Tendermint RPC.
When the Tendermint consensus engine detects inconsistent state, it will crash the
entire Tendermint process.
While in this inconsistent state, a node running Tendermint's consensus engine will not start up.
The `inspect` command runs only a subset of Tendermint's RPC endpoints for querying the block store
and state store.
`inspect` allows operators to query a read-only view of the state.
`inspect` does not run the consensus engine at all and can therefore be used to debug
processes that have crashed due to inconsistent state.
To start the `inspect` process, run
```bash
tendermint inspect
```
### RPC endpoints
The list of available RPC endpoints can be found by making a request to the RPC port.
For an `inspect` process running on `127.0.0.1:26657`, navigate your browser to
`http://127.0.0.1:26657/` to retrieve the list of enabled RPC endpoints.
Additional information on the Tendermint RPC endpoints can be found in the [rpc documentation](https://docs.tendermint.com/master/rpc).
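As an illustrative sketch (assuming an `inspect` server is listening on the default address; the root path returns a plain listing of the enabled endpoints), the same list can be fetched programmatically:
```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// The RPC server's root path lists the enabled endpoints.
	resp, err := http.Get("http://127.0.0.1:26657/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}
```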


@@ -64,13 +64,42 @@ It won't kill the node, but it will gather all of the above data and package i
At this point, depending on how severe the degradation is, you may want to restart the process.
## Tendermint Inspect
What if the Tendermint node will not start up due to inconsistent consensus state?
When a node running the Tendermint consensus engine detects an inconsistent state
it will crash the entire Tendermint process.
The Tendermint consensus engine cannot be run in this inconsistent state, and so the node
will fail to start up as a result.
The Tendermint RPC server can provide valuable information for debugging in this situation.
The Tendermint `inspect` command will run a subset of the Tendermint RPC server
that is useful for debugging inconsistent state.
### Running inspect
Start up the `inspect` tool on the machine where Tendermint crashed using:
```bash
tendermint inspect --home=</path/to/app.d>
```
`inspect` will use the data directory specified in your Tendermint configuration file.
`inspect` will also run the RPC server at the address specified in your Tendermint configuration file.
### Using inspect
With the `inspect` server running, you can access RPC endpoints that are critically important
for debugging.
Calling the `/status`, `/consensus_state` and `/dump_consensus_state` RPC endpoints
will return useful information about the Tendermint consensus state.
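A minimal sketch of querying one of these endpoints with the Go RPC client (assuming the inspect server is listening on the default address, and noting that which endpoints are actually enabled depends on the routes the running server exposes):
```go
package main

import (
	"context"
	"fmt"
	"log"

	httpclient "github.com/tendermint/tendermint/rpc/client/http"
)

func main() {
	c, err := httpclient.New("tcp://127.0.0.1:26657")
	if err != nil {
		log.Fatal(err)
	}
	// DumpConsensusState mirrors the /dump_consensus_state endpoint.
	res, err := c.DumpConsensusState(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("round state: %s\n", res.RoundState)
}
```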
## Outro
We're hoping that the `tendermint debug` subcommand will become the de facto first response to any incidents.
We're hoping that these Tendermint tools will become the de facto first response to any incidents.
Let us know what your experience has been so far! Have you had a chance to try `tendermint debug` yet?
Let us know what your experience has been so far! Have you had a chance to try `tendermint debug` or `tendermint inspect` yet?
Join our chat, where we discuss the current issues and future improvements.
Join our [discord chat](https://discord.gg/vcExX9T), where we discuss the current issues and future improvements.


@@ -87,7 +87,7 @@ Create a file called `app.go` with the following content:
package main
import (
abcitypes "github.com/tendermint/tendermint/pkg/abci"
abcitypes "github.com/tendermint/tendermint/abci/types"
)
type KVStoreApplication struct {}
@@ -346,7 +346,7 @@ import (
"github.com/dgraph-io/badger"
"github.com/spf13/viper"
"github.com/tendermint/tendermint/pkg/abci"
abci "github.com/tendermint/tendermint/abci/types"
cfg "github.com/tendermint/tendermint/config"
tmflags "github.com/tendermint/tendermint/libs/cli/flags"
"github.com/tendermint/tendermint/libs/log"
@@ -388,7 +388,6 @@ func main() {
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt, syscall.SIGTERM)
<-c
os.Exit(0)
}
func newTendermint(app abci.Application, configFile string) (*nm.Node, error) {
@@ -564,7 +563,6 @@ defer func() {
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt, syscall.SIGTERM)
<-c
os.Exit(0)
```
## 1.5 Getting Up and Running


@@ -90,7 +90,7 @@ Create a file called `app.go` with the following content:
package main
import (
abcitypes "github.com/tendermint/tendermint/pkg/abci"
abcitypes "github.com/tendermint/tendermint/abci/types"
)
type KVStoreApplication struct {}

go.mod

@@ -4,7 +4,6 @@ go 1.16
require (
github.com/BurntSushi/toml v0.4.1
github.com/Masterminds/squirrel v1.5.0
github.com/Workiva/go-datastructures v1.0.53
github.com/adlio/schema v1.1.13
github.com/btcsuite/btcd v0.22.0-beta
@@ -13,30 +12,32 @@ require (
github.com/go-kit/kit v0.11.0
github.com/gogo/protobuf v1.3.2
github.com/golang/protobuf v1.5.2
github.com/golangci/golangci-lint v1.42.0
github.com/golangci/golangci-lint v1.42.1
github.com/google/orderedcode v0.0.1
github.com/google/uuid v1.3.0
github.com/gorilla/websocket v1.4.2
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0
github.com/lib/pq v1.10.2
github.com/lib/pq v1.10.3
github.com/libp2p/go-buffer-pool v0.0.2
github.com/minio/highwayhash v1.0.2
github.com/mroth/weightedrand v0.4.1
github.com/oasisprotocol/curve25519-voi v0.0.0-20210609091139-0a56a4bca00b
github.com/ory/dockertest v3.3.5+incompatible
github.com/prometheus/client_golang v1.11.0
github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0
github.com/rs/cors v1.8.0
github.com/rs/zerolog v1.23.0
github.com/rs/zerolog v1.25.0
github.com/sasha-s/go-deadlock v0.2.1-0.20190427202633-1595213edefa
github.com/snikch/goodman v0.0.0-20171125024755-10e37e294daa
github.com/spf13/cobra v1.2.1
github.com/spf13/viper v1.8.1
github.com/stretchr/testify v1.7.0
github.com/tendermint/tm-db v0.6.4
github.com/vektra/mockery/v2 v2.9.0
github.com/vektra/mockery/v2 v2.9.3
golang.org/x/crypto v0.0.0-20210513164829-c07d793c2f9a
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
google.golang.org/grpc v1.40.0
gopkg.in/check.v1 v1.0.0-20200902074654-038fdea0a05b // indirect
pgregory.net/rapid v0.4.7

go.sum

@@ -44,8 +44,8 @@ cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RX
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
contrib.go.opencensus.io/exporter/stackdriver v0.13.4/go.mod h1:aXENhDJ1Y4lIg4EUaVTwzvYETVNZk10Pu26tevFKLUc=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Antonboom/errname v0.1.3 h1:qKV8gSzPzBqrG/q0dgraZXJCymWt6KuD9+Y7K7xtzN8=
github.com/Antonboom/errname v0.1.3/go.mod h1:jRXo3m0E0EuCnK3wbsSVH3X55Z4iTDLl6ZfCxwFj4TM=
github.com/Antonboom/errname v0.1.4 h1:lGSlI42Gm4bI1e+IITtXJXvxFM8N7naWimVFKcb0McY=
github.com/Antonboom/errname v0.1.4/go.mod h1:jRXo3m0E0EuCnK3wbsSVH3X55Z4iTDLl6ZfCxwFj4TM=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78 h1:w+iIsaOQNcT7OZ575w+acHgRric5iCyQh+xv+KJ4HB8=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
@@ -63,8 +63,6 @@ github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3Q
github.com/Masterminds/semver v1.5.0/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y=
github.com/Masterminds/sprig v2.15.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o=
github.com/Masterminds/sprig v2.22.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o=
github.com/Masterminds/squirrel v1.5.0 h1:JukIZisrUXadA9pl3rMkjhiamxiB0cXiu+HGp/Y8cY8=
github.com/Masterminds/squirrel v1.5.0/go.mod h1:NNaOrjSoIDfDA40n7sr2tPNZRfjzjA400rg+riTZj10=
github.com/Microsoft/go-winio v0.4.14 h1:+hMXMk01us9KgxGb7ftKQt2Xpf5hH/yky+TDA+qxleU=
github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5 h1:TngWCqHvy9oXAN6lEVMRuU21PR1EtLVZJmdB18Gu3Rw=
@@ -343,8 +341,8 @@ github.com/golangci/go-misc v0.0.0-20180628070357-927a3d87b613 h1:9kfjN3AdxcbsZB
github.com/golangci/go-misc v0.0.0-20180628070357-927a3d87b613/go.mod h1:SyvUF2NxV+sN8upjjeVYr5W7tyxaT1JVtvhKhOn2ii8=
github.com/golangci/gofmt v0.0.0-20190930125516-244bba706f1a h1:iR3fYXUjHCR97qWS8ch1y9zPNsgXThGwjKPrYfqMPks=
github.com/golangci/gofmt v0.0.0-20190930125516-244bba706f1a/go.mod h1:9qCChq59u/eW8im404Q2WWTrnBUQKjpNYKMbU4M7EFU=
github.com/golangci/golangci-lint v1.42.0 h1:hqf1zo6zY3GKGjjBk3ttdH22tGwF6ZRpk6j6xyJmE8I=
github.com/golangci/golangci-lint v1.42.0/go.mod h1:wgkGQnU9lOUFvTFo5QBSOvaSSddEV21Z1zYkJSbppZA=
github.com/golangci/golangci-lint v1.42.1 h1:nC4WyrbdnNdohDVUoNKjy/4N4FTM1gCFaVeXecy6vzM=
github.com/golangci/golangci-lint v1.42.1/go.mod h1:MuInrVlgg2jq4do6XI1jbkErbVHVbwdrLLtGv6p2wPI=
github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0 h1:MfyDlzVjl1hoaPzPD4Gpb/QgoRfSBR0jdhwGyAWwMSA=
github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0/go.mod h1:66R6K6P6VWk9I95jvqGxkqJxVWGFy9XlDwLwVz1RCFg=
github.com/golangci/maligned v0.0.0-20180506175553-b1d89398deca h1:kNY3/svz5T29MYHubXix4aDDuE3RWHkPvopM/EDv/MA=
@@ -548,10 +546,6 @@ github.com/kunwardeep/paralleltest v1.0.2/go.mod h1:ZPqNm1fVHPllh5LPVujzbVz1JN2G
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/kyoh86/exportloopref v0.1.8 h1:5Ry/at+eFdkX9Vsdw3qU4YkvGtzuVfzT4X7S77LoN/M=
github.com/kyoh86/exportloopref v0.1.8/go.mod h1:1tUcJeiioIs7VWe5gcOObrux3lb66+sBqGZrRkMwPgg=
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 h1:SOEGU9fKiNWd/HOJuq6+3iTQz8KNCLtVX6idSoTLdUw=
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0/go.mod h1:dXGbAdH5GtBTC4WfIxhKZfyBF/HBFgRZSWwZ9g/He9o=
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 h1:P6pPBnrTSX3DEVR4fDembhRWSsG5rVo6hYhAB/ADZrk=
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0/go.mod h1:vmVJ0l/dxyfGW6FmdpVm2joNMFikkuWg0EoCKLGUMNw=
github.com/ldez/gomoddirectives v0.2.2 h1:p9/sXuNFArS2RLc+UpYZSI4KQwGMEDWC/LbtF5OPFVg=
github.com/ldez/gomoddirectives v0.2.2/go.mod h1:cpgBogWITnCfRq2qGoDkKMEVSaarhdBr6g8G04uz6d0=
github.com/ldez/tagliatelle v0.2.0 h1:693V8Bf1NdShJ8eu/s84QySA0J2VWBanVBa2WwXD/Wk=
@@ -562,8 +556,9 @@ github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.8.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq v1.9.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq v1.10.2 h1:AqzbZs4ZoCBp+GtejcpCpcxM3zlSMx29dXbUSeVtJb8=
github.com/lib/pq v1.10.2/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq v1.10.3 h1:v9QZf2Sn6AmjXtQeFpdoq/eaNtYP6IN+7lcrygsIAtg=
github.com/lib/pq v1.10.3/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/libp2p/go-buffer-pool v0.0.2 h1:QNK2iAFa8gjAe1SPz6mHSMuCcjs+X1wlHzeOSqcmlfs=
github.com/libp2p/go-buffer-pool v0.0.2/go.mod h1:MvaB6xw5vOrDl8rYZGLFdKAuk/hRoRZd1Vi32+RXyFM=
github.com/logrusorgru/aurora v0.0.0-20181002194514-a7b3b318ed4e/go.mod h1:7rIyQOR62GCctdiQpZ/zOJlFyk6y+94wXzv6RNZgaR4=
@@ -601,8 +596,8 @@ github.com/mbilski/exhaustivestruct v1.2.0 h1:wCBmUnSYufAHO6J4AVWY6ff+oxWxsVFrwg
github.com/mbilski/exhaustivestruct v1.2.0/go.mod h1:OeTBVxQWoEmB2J2JCHmXWPJ0aksxSUOUy+nvtVEfzXc=
github.com/mgechev/dots v0.0.0-20190921121421-c36f7dcfbb81 h1:QASJXOGm2RZ5Ardbc86qNFvby9AqkLDibfChMtAg5QM=
github.com/mgechev/dots v0.0.0-20190921121421-c36f7dcfbb81/go.mod h1:KQ7+USdGKfpPjXk4Ga+5XxQM4Lm4e3gAogrreFAYpOg=
github.com/mgechev/revive v1.1.0 h1:TvabpsolbtlzZTyJcgMRN38MHrgi8C0DhmGE5dhscGY=
github.com/mgechev/revive v1.1.0/go.mod h1:PKqk4L74K6wVNwY2b6fr+9Qqr/3hIsHVfZCJdbvozrY=
github.com/mgechev/revive v1.1.1 h1:mkXNHP14Y6tfq+ocnQaiKEtgJDM41yaoyQq4qn6TD/4=
github.com/mgechev/revive v1.1.1/go.mod h1:PKqk4L74K6wVNwY2b6fr+9Qqr/3hIsHVfZCJdbvozrY=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso=
github.com/miekg/dns v1.1.35/go.mod h1:KNUDUusw/aVsxyTYZM1oqvCicbwhgbNgztCETuNZ7xM=
@@ -636,6 +631,8 @@ github.com/moricho/tparallel v0.2.1 h1:95FytivzT6rYzdJLdtfn6m1bfFJylOJK41+lgv/EH
github.com/moricho/tparallel v0.2.1/go.mod h1:fXEIZxG2vdfl0ZF8b42f5a78EhjjD5mX8qUplsoSU4k=
github.com/mozilla/scribe v0.0.0-20180711195314-fb71baf557c1/go.mod h1:FIczTrinKo8VaLxe6PWTPEXRXDIHz2QAwiaBaP5/4a8=
github.com/mozilla/tls-observatory v0.0.0-20210609171429-7bc42856d2e5/go.mod h1:FUqVoUPHSEdDR0MnFM3Dh8AU0pZHLXUD127SAJGER/s=
github.com/mroth/weightedrand v0.4.1 h1:rHcbUBopmi/3x4nnrvwGJBhX9d0vk+KgoLUZeDP6YyI=
github.com/mroth/weightedrand v0.4.1/go.mod h1:3p2SIcC8al1YMzGhAIoXD+r9olo/g/cdJgAD905gyNE=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-proto-validators v0.0.0-20180403085117-0950a7990007/go.mod h1:m2XC9Qq0AlmmVksL6FktJCdTYyLk7V3fKyp0sl1yWQo=
@@ -717,8 +714,8 @@ github.com/pkg/sftp v1.10.1/go.mod h1:lYOWFsE0bwd1+KfKJaKeuokY15vzFx25BLbzYYoAxZ
github.com/pmezard/go-difflib v0.0.0-20151028094244-d8ed2627bdf0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/polyfloyd/go-errorlint v0.0.0-20210510181950-ab96adb96fea h1:Sk6Xawg57ZkjXmFYD1xCHSKN6FtYM+km51MM7Lveyyc=
github.com/polyfloyd/go-errorlint v0.0.0-20210510181950-ab96adb96fea/go.mod h1:wi9BfjxjF/bwiZ701TzmfKu6UKC357IOAtNr0Td0Lvw=
github.com/polyfloyd/go-errorlint v0.0.0-20210722154253-910bb7978349 h1:Kq/3kL0k033ds3tyez5lFPrfQ74fNJ+OqCclRipubwA=
github.com/polyfloyd/go-errorlint v0.0.0-20210722154253-910bb7978349/go.mod h1:wi9BfjxjF/bwiZ701TzmfKu6UKC357IOAtNr0Td0Lvw=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
@@ -768,9 +765,10 @@ github.com/rs/cors v1.7.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU=
github.com/rs/cors v1.8.0 h1:P2KMzcFwrPoSjkF1WLRPsp3UMLyql8L4v9hQpVeK5so=
github.com/rs/cors v1.8.0/go.mod h1:EBwu+T5AvHOcXwvZIkQFjUN6s8Czyqw12GL/Y0tUyRM=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/rs/xid v1.3.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/zerolog v1.18.0/go.mod h1:9nvC1axdVrAHcu/s9taAVfBuIdTZLVQmKQyvrUjF5+I=
github.com/rs/zerolog v1.23.0 h1:UskrK+saS9P9Y789yNNulYKdARjPZuS35B8gJF2x60g=
github.com/rs/zerolog v1.23.0/go.mod h1:6c7hFfxPOy7TacJc4Fcdi24/J0NKYGzjG8FWRI916Qo=
github.com/rs/zerolog v1.25.0 h1:Rj7XygbUHKUlDPcVdoLyR91fJBsduXj5fRxyqIQj/II=
github.com/rs/zerolog v1.25.0/go.mod h1:7KHcEGe0QZPOm2IE4Kpb5rTh6n1h2hIgS5OOnu1rUaI=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryancurrah/gomodguard v1.2.3 h1:ww2fsjqocGCAFamzvv/b8IsRduuHHeK2MHTcTxZTQX8=
@@ -865,8 +863,8 @@ github.com/tecbot/gorocksdb v0.0.0-20191217155057-f0fad39f321c h1:g+WoO5jjkqGAzH
github.com/tecbot/gorocksdb v0.0.0-20191217155057-f0fad39f321c/go.mod h1:ahpPrc7HpcfEWDQRZEmnXMzHY03mLDYMCxeDzy46i+8=
github.com/tendermint/tm-db v0.6.4 h1:3N2jlnYQkXNQclQwd/eKV/NzlqPlfK21cpRRIx80XXQ=
github.com/tendermint/tm-db v0.6.4/go.mod h1:dptYhIpJ2M5kUuenLr+Yyf3zQOv1SgBZcl8/BmWlMBw=
github.com/tetafro/godot v1.4.8 h1:rhuUH+tBrx24yVAr6Ox3/UxcsiUPPJcGhinfLdbdew0=
github.com/tetafro/godot v1.4.8/go.mod h1:LR3CJpxDVGlYOWn3ZZg1PgNZdTUvzsZWu8xaEohUpn8=
github.com/tetafro/godot v1.4.9 h1:wsNd0RuUxISqqudFqchsSsMqsM188DoZVPBeKl87tP0=
github.com/tetafro/godot v1.4.9/go.mod h1:LR3CJpxDVGlYOWn3ZZg1PgNZdTUvzsZWu8xaEohUpn8=
github.com/timakin/bodyclose v0.0.0-20200424151742-cb6215831a94 h1:ig99OeTyDwQWhPe2iw9lwfQVF1KB3Q4fpP3X7/2VBG8=
github.com/timakin/bodyclose v0.0.0-20200424151742-cb6215831a94/go.mod h1:Qimiffbc6q9tBWlVV6x0P9sat/ao1xEkREYPPj9hphk=
github.com/tinylib/msgp v1.1.5/go.mod h1:eQsjooMTnV42mHu917E26IogZ2930nFyBQdofk10Udg=
@@ -897,8 +895,8 @@ github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyC
github.com/valyala/fasthttp v1.16.0/go.mod h1:YOKImeEosDdBPnxc0gy7INqi3m1zK6A+xl6TwOBhHCA=
github.com/valyala/quicktemplate v1.6.3/go.mod h1:fwPzK2fHuYEODzJ9pkw0ipCPNHZ2tD5KW4lOuSdPKzY=
github.com/valyala/tcplisten v0.0.0-20161114210144-ceec8f93295a/go.mod h1:v3UYOV9WzVtRmSR+PDvWpU/qWl4Wa5LApYYX4ZtKbio=
github.com/vektra/mockery/v2 v2.9.0 h1:+3FhCL3EviR779mTzXwUuhPNnqFUA7sDnt9OFkXaFd4=
github.com/vektra/mockery/v2 v2.9.0/go.mod h1:2gU4Cf/f8YyC8oEaSXfCnZBMxMjMl/Ko205rlP0fO90=
github.com/vektra/mockery/v2 v2.9.3 h1:ma6hcGQw4q/lhFUTJ+E9V8/5tsIcht9i2Q4d1qo26SQ=
github.com/vektra/mockery/v2 v2.9.3/go.mod h1:2gU4Cf/f8YyC8oEaSXfCnZBMxMjMl/Ko205rlP0fO90=
github.com/viki-org/dnscache v0.0.0-20130720023526-c70c1f23c5d8/go.mod h1:dniwbG03GafCjFohMDmz6Zc6oCuiqgH6tGNyXTkHzXE=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778/go.mod h1:2MuV+tbUrU1zIOPMxZ5EncGwgmMJsa+9ucAQZXxsObs=
@@ -1075,6 +1073,7 @@ golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c h1:5KslGYwFpkhGh+Q16bwMP3cOontH8FOep7tGV86Y7SQ=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=

inspect/doc.go

@@ -0,0 +1,36 @@
/*
Package inspect provides a tool for investigating the state of a
failed Tendermint node.
This package provides the Inspector type. The Inspector type runs a subset of the Tendermint
RPC endpoints that are useful for debugging issues with Tendermint consensus.
When a node running the Tendermint consensus engine detects an inconsistent consensus state,
the entire node will crash. The Tendermint consensus engine cannot run in this
inconsistent state so the node will not be able to start up again.
The RPC endpoints provided by the Inspector type allow for a node operator to inspect
the block store and state store to better understand what may have caused the inconsistent state.
The Inspector type's lifecycle is controlled by a context.Context:
ins, err := inspect.NewFromConfig(cfg) // cfg is a *config.Config; handle err
ctx, cancelFunc := context.WithCancel(context.Background())
// Run blocks until the Inspector server is shut down.
go ins.Run(ctx)
...
// calling the cancel function will stop the running inspect server
cancelFunc()
Inspector serves its RPC endpoints on the address configured in the RPC configuration:
cfg.RPC.ListenAddress = "tcp://127.0.0.1:26657"
ins, err := inspect.NewFromConfig(cfg)
go ins.Run(ctx)
The list of available RPC endpoints can then be viewed by navigating to
http://127.0.0.1:26657/ in the web browser.
*/
package inspect

inspect/inspect.go

@@ -0,0 +1,149 @@
package inspect
import (
"context"
"errors"
"fmt"
"net"
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/inspect/rpc"
"github.com/tendermint/tendermint/libs/log"
tmstrings "github.com/tendermint/tendermint/libs/strings"
rpccore "github.com/tendermint/tendermint/rpc/core"
"github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/state/indexer"
"github.com/tendermint/tendermint/state/indexer/sink"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
"golang.org/x/sync/errgroup"
)
// Inspector manages an RPC service that exports methods to debug a failed node.
// After a node shuts down due to a consensus failure, it will no longer start
// up and its state cannot easily be inspected. An Inspector value provides a similar interface
// to the node, using the underlying Tendermint data stores, without bringing up
// any other components. A caller can query the Inspector service to inspect the
// persisted state and debug the failure.
type Inspector struct {
routes rpccore.RoutesMap
config *config.RPCConfig
indexerService *indexer.Service
eventBus *types.EventBus
logger log.Logger
}
// New returns an Inspector that serves RPC on the specified BlockStore and StateStore.
// The Inspector type does not modify the state or block stores.
// The sinks are used to enable block and transaction querying via the RPC server.
// The caller is responsible for starting and stopping the Inspector service.
//
//nolint:lll
func New(cfg *config.RPCConfig, bs state.BlockStore, ss state.Store, es []indexer.EventSink, logger log.Logger) *Inspector {
routes := rpc.Routes(*cfg, ss, bs, es, logger)
eb := types.NewEventBus()
eb.SetLogger(logger.With("module", "events"))
is := indexer.NewIndexerService(es, eb)
is.SetLogger(logger.With("module", "txindex"))
return &Inspector{
routes: routes,
config: cfg,
logger: logger,
eventBus: eb,
indexerService: is,
}
}
// NewFromConfig constructs an Inspector using the values defined in the passed-in config.
func NewFromConfig(cfg *config.Config) (*Inspector, error) {
bsDB, err := config.DefaultDBProvider(&config.DBContext{ID: "blockstore", Config: cfg})
if err != nil {
return nil, err
}
bs := store.NewBlockStore(bsDB)
sDB, err := config.DefaultDBProvider(&config.DBContext{ID: "state", Config: cfg})
if err != nil {
return nil, err
}
genDoc, err := types.GenesisDocFromFile(cfg.GenesisFile())
if err != nil {
return nil, err
}
sinks, err := sink.EventSinksFromConfig(cfg, config.DefaultDBProvider, genDoc.ChainID)
if err != nil {
return nil, err
}
logger := log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo, false)
ss := state.NewStore(sDB)
return New(cfg.RPC, bs, ss, sinks, logger), nil
}
// Run starts the Inspector servers and blocks until the servers shut down. The
// passed-in context is used to control the lifecycle of the servers.
func (ins *Inspector) Run(ctx context.Context) error {
err := ins.eventBus.Start()
if err != nil {
return fmt.Errorf("error starting event bus: %s", err)
}
defer func() {
err := ins.eventBus.Stop()
if err != nil {
ins.logger.Error("event bus stopped with error", "err", err)
}
}()
err = ins.indexerService.Start()
if err != nil {
return fmt.Errorf("error starting indexer service: %s", err)
}
defer func() {
err := ins.indexerService.Stop()
if err != nil {
ins.logger.Error("indexer service stopped with error", "err", err)
}
}()
return startRPCServers(ctx, ins.config, ins.logger, ins.routes)
}
func startRPCServers(ctx context.Context, cfg *config.RPCConfig, logger log.Logger, routes rpccore.RoutesMap) error {
g, tctx := errgroup.WithContext(ctx)
listenAddrs := tmstrings.SplitAndTrimEmpty(cfg.ListenAddress, ",", " ")
rh := rpc.Handler(cfg, routes, logger)
for _, listenerAddr := range listenAddrs {
server := rpc.Server{
Logger: logger,
Config: cfg,
Handler: rh,
Addr: listenerAddr,
}
if cfg.IsTLSEnabled() {
keyFile := cfg.KeyFile()
certFile := cfg.CertFile()
listenerAddr := listenerAddr
g.Go(func() error {
logger.Info("RPC HTTPS server starting", "address", listenerAddr,
"certfile", certFile, "keyfile", keyFile)
err := server.ListenAndServeTLS(tctx, certFile, keyFile)
if !errors.Is(err, net.ErrClosed) {
return err
}
logger.Info("RPC HTTPS server stopped", "address", listenerAddr)
return nil
})
} else {
listenerAddr := listenerAddr
g.Go(func() error {
logger.Info("RPC HTTP server starting", "address", listenerAddr)
err := server.ListenAndServe(tctx)
if !errors.Is(err, net.ErrClosed) {
return err
}
logger.Info("RPC HTTP server stopped", "address", listenerAddr)
return nil
})
}
}
return g.Wait()
}

inspect/inspect_test.go

@@ -0,0 +1,583 @@
package inspect_test
import (
"context"
"fmt"
"net"
"os"
"strings"
"sync"
"testing"
"time"
"github.com/fortytw2/leaktest"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
abcitypes "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/inspect"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/pubsub/query"
"github.com/tendermint/tendermint/proto/tendermint/state"
httpclient "github.com/tendermint/tendermint/rpc/client/http"
"github.com/tendermint/tendermint/state/indexer"
indexermocks "github.com/tendermint/tendermint/state/indexer/mocks"
statemocks "github.com/tendermint/tendermint/state/mocks"
"github.com/tendermint/tendermint/types"
)
func TestInspectConstructor(t *testing.T) {
cfg := config.ResetTestRoot("test")
t.Cleanup(leaktest.Check(t))
defer func() { _ = os.RemoveAll(cfg.RootDir) }()
t.Run("from config", func(t *testing.T) {
d, err := inspect.NewFromConfig(cfg)
require.NoError(t, err)
require.NotNil(t, d)
})
}
func TestInspectRun(t *testing.T) {
cfg := config.ResetTestRoot("test")
t.Cleanup(leaktest.Check(t))
defer func() { _ = os.RemoveAll(cfg.RootDir) }()
t.Run("from config", func(t *testing.T) {
d, err := inspect.NewFromConfig(cfg)
require.NoError(t, err)
ctx, cancel := context.WithCancel(context.Background())
stoppedWG := &sync.WaitGroup{}
stoppedWG.Add(1)
go func() {
require.NoError(t, d.Run(ctx))
stoppedWG.Done()
}()
cancel()
stoppedWG.Wait()
})
}
func TestBlock(t *testing.T) {
testHeight := int64(1)
testBlock := new(types.Block)
testBlock.Header.Height = testHeight
testBlock.Header.LastCommitHash = []byte("test hash")
stateStoreMock := &statemocks.Store{}
blockStoreMock := &statemocks.BlockStore{}
blockStoreMock.On("Height").Return(testHeight)
blockStoreMock.On("Base").Return(int64(0))
blockStoreMock.On("LoadBlockMeta", testHeight).Return(&types.BlockMeta{})
blockStoreMock.On("LoadBlock", testHeight).Return(testBlock)
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
resultBlock, err := cli.Block(context.Background(), &testHeight)
require.NoError(t, err)
require.Equal(t, testBlock.Height, resultBlock.Block.Height)
require.Equal(t, testBlock.LastCommitHash, resultBlock.Block.LastCommitHash)
cancel()
wg.Wait()
blockStoreMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
}
func TestTxSearch(t *testing.T) {
testHash := []byte("test")
testTx := []byte("tx")
testQuery := fmt.Sprintf("tx.hash='%s'", string(testHash))
testTxResult := &abcitypes.TxResult{
Height: 1,
Index: 100,
Tx: testTx,
}
stateStoreMock := &statemocks.Store{}
blockStoreMock := &statemocks.BlockStore{}
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
eventSinkMock.On("Type").Return(indexer.KV)
eventSinkMock.On("SearchTxEvents", mock.Anything,
mock.MatchedBy(func(q *query.Query) bool { return testQuery == q.String() })).
Return([]*abcitypes.TxResult{testTxResult}, nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
var page = 1
resultTxSearch, err := cli.TxSearch(context.Background(), testQuery, false, &page, &page, "")
require.NoError(t, err)
require.Len(t, resultTxSearch.Txs, 1)
require.Equal(t, types.Tx(testTx), resultTxSearch.Txs[0].Tx)
cancel()
wg.Wait()
eventSinkMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
blockStoreMock.AssertExpectations(t)
}
func TestTx(t *testing.T) {
testHash := []byte("test")
testTx := []byte("tx")
stateStoreMock := &statemocks.Store{}
blockStoreMock := &statemocks.BlockStore{}
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
eventSinkMock.On("Type").Return(indexer.KV)
eventSinkMock.On("GetTxByHash", testHash).Return(&abcitypes.TxResult{
Tx: testTx,
}, nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
res, err := cli.Tx(context.Background(), testHash, false)
require.NoError(t, err)
require.Equal(t, types.Tx(testTx), res.Tx)
cancel()
wg.Wait()
eventSinkMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
blockStoreMock.AssertExpectations(t)
}
func TestConsensusParams(t *testing.T) {
testHeight := int64(1)
testMaxGas := int64(55)
stateStoreMock := &statemocks.Store{}
blockStoreMock := &statemocks.BlockStore{}
blockStoreMock.On("Height").Return(testHeight)
blockStoreMock.On("Base").Return(int64(0))
stateStoreMock.On("LoadConsensusParams", testHeight).Return(types.ConsensusParams{
Block: types.BlockParams{
MaxGas: testMaxGas,
},
}, nil)
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
params, err := cli.ConsensusParams(context.Background(), &testHeight)
require.NoError(t, err)
require.Equal(t, params.ConsensusParams.Block.MaxGas, testMaxGas)
cancel()
wg.Wait()
blockStoreMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
}
func TestBlockResults(t *testing.T) {
testHeight := int64(1)
testGasUsed := int64(100)
stateStoreMock := &statemocks.Store{}
// tmstate "github.com/tendermint/tendermint/proto/tendermint/state"
stateStoreMock.On("LoadABCIResponses", testHeight).Return(&state.ABCIResponses{
DeliverTxs: []*abcitypes.ResponseDeliverTx{
{
GasUsed: testGasUsed,
},
},
EndBlock: &abcitypes.ResponseEndBlock{},
BeginBlock: &abcitypes.ResponseBeginBlock{},
}, nil)
blockStoreMock := &statemocks.BlockStore{}
blockStoreMock.On("Base").Return(int64(0))
blockStoreMock.On("Height").Return(testHeight)
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
res, err := cli.BlockResults(context.Background(), &testHeight)
require.NoError(t, err)
require.Equal(t, res.TotalGasUsed, testGasUsed)
cancel()
wg.Wait()
blockStoreMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
}
func TestCommit(t *testing.T) {
testHeight := int64(1)
testRound := int32(101)
stateStoreMock := &statemocks.Store{}
blockStoreMock := &statemocks.BlockStore{}
blockStoreMock.On("Base").Return(int64(0))
blockStoreMock.On("Height").Return(testHeight)
blockStoreMock.On("LoadBlockMeta", testHeight).Return(&types.BlockMeta{}, nil)
blockStoreMock.On("LoadSeenCommit").Return(&types.Commit{
Height: testHeight,
Round: testRound,
}, nil)
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
res, err := cli.Commit(context.Background(), &testHeight)
require.NoError(t, err)
require.NotNil(t, res)
require.Equal(t, res.SignedHeader.Commit.Round, testRound)
cancel()
wg.Wait()
blockStoreMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
}
func TestBlockByHash(t *testing.T) {
testHeight := int64(1)
testHash := []byte("test hash")
testBlock := new(types.Block)
testBlock.Header.Height = testHeight
testBlock.Header.LastCommitHash = testHash
stateStoreMock := &statemocks.Store{}
blockStoreMock := &statemocks.BlockStore{}
blockStoreMock.On("LoadBlockMeta", testHeight).Return(&types.BlockMeta{
BlockID: types.BlockID{
Hash: testHash,
},
Header: types.Header{
Height: testHeight,
},
}, nil)
blockStoreMock.On("LoadBlockByHash", testHash).Return(testBlock, nil)
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
res, err := cli.BlockByHash(context.Background(), testHash)
require.NoError(t, err)
require.NotNil(t, res)
require.Equal(t, []byte(res.BlockID.Hash), testHash)
cancel()
wg.Wait()
blockStoreMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
}
func TestBlockchain(t *testing.T) {
testHeight := int64(1)
testBlock := new(types.Block)
testBlockHash := []byte("test hash")
testBlock.Header.Height = testHeight
testBlock.Header.LastCommitHash = testBlockHash
stateStoreMock := &statemocks.Store{}
blockStoreMock := &statemocks.BlockStore{}
blockStoreMock.On("Height").Return(testHeight)
blockStoreMock.On("Base").Return(int64(0))
blockStoreMock.On("LoadBlockMeta", testHeight).Return(&types.BlockMeta{
BlockID: types.BlockID{
Hash: testBlockHash,
},
})
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
res, err := cli.BlockchainInfo(context.Background(), 0, 100)
require.NoError(t, err)
require.NotNil(t, res)
require.Equal(t, testBlockHash, []byte(res.BlockMetas[0].BlockID.Hash))
cancel()
wg.Wait()
blockStoreMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
}
func TestValidators(t *testing.T) {
testHeight := int64(1)
testVotingPower := int64(100)
testValidators := types.ValidatorSet{
Validators: []*types.Validator{
{
VotingPower: testVotingPower,
},
},
}
stateStoreMock := &statemocks.Store{}
stateStoreMock.On("LoadValidators", testHeight).Return(&testValidators, nil)
blockStoreMock := &statemocks.BlockStore{}
blockStoreMock.On("Height").Return(testHeight)
blockStoreMock.On("Base").Return(int64(0))
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
testPage := 1
testPerPage := 100
res, err := cli.Validators(context.Background(), &testHeight, &testPage, &testPerPage)
require.NoError(t, err)
require.NotNil(t, res)
require.Equal(t, testVotingPower, res.Validators[0].VotingPower)
cancel()
wg.Wait()
blockStoreMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
}
func TestBlockSearch(t *testing.T) {
testHeight := int64(1)
testBlockHash := []byte("test hash")
testQuery := "block.height = 1"
stateStoreMock := &statemocks.Store{}
blockStoreMock := &statemocks.BlockStore{}
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
eventSinkMock.On("Type").Return(indexer.KV)
blockStoreMock.On("LoadBlock", testHeight).Return(&types.Block{
Header: types.Header{
Height: testHeight,
},
}, nil)
blockStoreMock.On("LoadBlockMeta", testHeight).Return(&types.BlockMeta{
BlockID: types.BlockID{
Hash: testBlockHash,
},
})
eventSinkMock.On("SearchBlockEvents", mock.Anything,
mock.MatchedBy(func(q *query.Query) bool { return testQuery == q.String() })).
Return([]int64{testHeight}, nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
testPage := 1
testPerPage := 100
testOrderBy := "desc"
res, err := cli.BlockSearch(context.Background(), testQuery, &testPage, &testPerPage, testOrderBy)
require.NoError(t, err)
require.NotNil(t, res)
require.Equal(t, testBlockHash, []byte(res.Blocks[0].BlockID.Hash))
cancel()
wg.Wait()
blockStoreMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
}
func requireConnect(t testing.TB, addr string, retries int) {
parts := strings.SplitN(addr, "://", 2)
if len(parts) != 2 {
t.Fatalf("malformed address to dial: %s", addr)
}
var err error
for i := 0; i < retries; i++ {
var conn net.Conn
conn, err = net.Dial(parts[0], parts[1])
if err == nil {
conn.Close()
return
}
// FIXME attempt to yield and let the other goroutine continue execution.
time.Sleep(time.Microsecond * 100)
}
t.Fatalf("unable to connect to server %s after %d tries: %s", addr, retries, err)
}

inspect/rpc/rpc.go

@@ -0,0 +1,143 @@
package rpc
import (
"context"
"net/http"
"time"
"github.com/rs/cors"
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/consensus"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/pubsub"
"github.com/tendermint/tendermint/rpc/core"
"github.com/tendermint/tendermint/rpc/jsonrpc/server"
"github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/state/indexer"
"github.com/tendermint/tendermint/types"
)
// Server defines parameters for running an Inspector rpc server.
type Server struct {
Addr string // TCP address to listen on, ":http" if empty
Handler http.Handler
Logger log.Logger
Config *config.RPCConfig
}
// Routes returns the set of routes used by the Inspector server.
//
//nolint: lll
func Routes(cfg config.RPCConfig, s state.Store, bs state.BlockStore, es []indexer.EventSink, logger log.Logger) core.RoutesMap {
env := &core.Environment{
Config: cfg,
EventSinks: es,
StateStore: s,
BlockStore: bs,
ConsensusReactor: waitSyncCheckerImpl{},
Logger: logger,
}
return core.RoutesMap{
"blockchain": server.NewRPCFunc(env.BlockchainInfo, "minHeight,maxHeight", true),
"consensus_params": server.NewRPCFunc(env.ConsensusParams, "height", true),
"block": server.NewRPCFunc(env.Block, "height", true),
"block_by_hash": server.NewRPCFunc(env.BlockByHash, "hash", true),
"block_results": server.NewRPCFunc(env.BlockResults, "height", true),
"commit": server.NewRPCFunc(env.Commit, "height", true),
"validators": server.NewRPCFunc(env.Validators, "height,page,per_page", true),
"tx": server.NewRPCFunc(env.Tx, "hash,prove", true),
"tx_search": server.NewRPCFunc(env.TxSearch, "query,prove,page,per_page,order_by", false),
"block_search": server.NewRPCFunc(env.BlockSearch, "query,page,per_page,order_by", false),
}
}
// Handler returns the http.Handler configured for use with an Inspector server. Handler
// registers the routes on the http.Handler and also registers the websocket handler
// and the CORS handler if specified by the configuration options.
func Handler(rpcConfig *config.RPCConfig, routes core.RoutesMap, logger log.Logger) http.Handler {
mux := http.NewServeMux()
wmLogger := logger.With("protocol", "websocket")
var eventBus types.EventBusSubscriber
websocketDisconnectFn := func(remoteAddr string) {
err := eventBus.UnsubscribeAll(context.Background(), remoteAddr)
if err != nil && err != pubsub.ErrSubscriptionNotFound {
wmLogger.Error("Failed to unsubscribe addr from events", "addr", remoteAddr, "err", err)
}
}
wm := server.NewWebsocketManager(routes,
server.OnDisconnect(websocketDisconnectFn),
server.ReadLimit(rpcConfig.MaxBodyBytes))
wm.SetLogger(wmLogger)
mux.HandleFunc("/websocket", wm.WebsocketHandler)
server.RegisterRPCFuncs(mux, routes, logger)
var rootHandler http.Handler = mux
if rpcConfig.IsCorsEnabled() {
rootHandler = addCORSHandler(rpcConfig, mux)
}
return rootHandler
}
func addCORSHandler(rpcConfig *config.RPCConfig, h http.Handler) http.Handler {
corsMiddleware := cors.New(cors.Options{
AllowedOrigins: rpcConfig.CORSAllowedOrigins,
AllowedMethods: rpcConfig.CORSAllowedMethods,
AllowedHeaders: rpcConfig.CORSAllowedHeaders,
})
h = corsMiddleware.Handler(h)
return h
}
type waitSyncCheckerImpl struct{}
func (waitSyncCheckerImpl) WaitSync() bool {
return false
}
func (waitSyncCheckerImpl) GetPeerState(peerID types.NodeID) (*consensus.PeerState, bool) {
return nil, false
}
// ListenAndServe listens on the address specified in srv.Addr and handles any
// incoming requests over HTTP using the Inspector rpc handler specified on the server.
func (srv *Server) ListenAndServe(ctx context.Context) error {
listener, err := server.Listen(srv.Addr, srv.Config.MaxOpenConnections)
if err != nil {
return err
}
go func() {
<-ctx.Done()
listener.Close()
}()
return server.Serve(listener, srv.Handler, srv.Logger, serverRPCConfig(srv.Config))
}
// ListenAndServeTLS listens on the address specified in srv.Addr. ListenAndServeTLS handles
// incoming requests over HTTPS using the Inspector rpc handler specified on the server.
func (srv *Server) ListenAndServeTLS(ctx context.Context, certFile, keyFile string) error {
listener, err := server.Listen(srv.Addr, srv.Config.MaxOpenConnections)
if err != nil {
return err
}
go func() {
<-ctx.Done()
listener.Close()
}()
return server.ServeTLS(listener, srv.Handler, certFile, keyFile, srv.Logger, serverRPCConfig(srv.Config))
}
func serverRPCConfig(r *config.RPCConfig) *server.Config {
cfg := server.DefaultConfig()
cfg.MaxBodyBytes = r.MaxBodyBytes
cfg.MaxHeaderBytes = r.MaxHeaderBytes
// If necessary adjust global WriteTimeout to ensure it's greater than
// TimeoutBroadcastTxCommit.
// See https://github.com/tendermint/tendermint/issues/3435
if cfg.WriteTimeout <= r.TimeoutBroadcastTxCommit {
cfg.WriteTimeout = r.TimeoutBroadcastTxCommit + 1*time.Second
}
return cfg
}


@@ -1,12 +1,12 @@
package blocksync
import (
"github.com/tendermint/tendermint/pkg/metadata"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blocksync"
"github.com/tendermint/tendermint/types"
)
const (
MaxMsgSize = metadata.MaxBlockSizeBytes +
MaxMsgSize = types.MaxBlockSizeBytes +
bcproto.BlockResponseMessagePrefixSize +
bcproto.BlockResponseMessageFieldKeySize
)


@@ -11,8 +11,7 @@ import (
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/p2p"
"github.com/tendermint/tendermint/types"
)
/*
@@ -63,7 +62,7 @@ var peerTimeout = 15 * time.Second // not const so we can override with tests
// PeerID responsible for delivering the block.
type BlockRequest struct {
Height int64
PeerID p2p.NodeID
PeerID types.NodeID
}
// BlockPool keeps track of the block sync peers, block requests and block responses.
@@ -76,7 +75,7 @@ type BlockPool struct {
requesters map[int64]*bpRequester
height int64 // the lowest key in requesters.
// peers
peers map[p2p.NodeID]*bpPeer
peers map[types.NodeID]*bpPeer
maxPeerHeight int64 // the biggest reported height
// atomic
@@ -94,7 +93,7 @@ type BlockPool struct {
// requests and errors will be sent to requestsCh and errorsCh accordingly.
func NewBlockPool(start int64, requestsCh chan<- BlockRequest, errorsCh chan<- peerError) *BlockPool {
bp := &BlockPool{
peers: make(map[p2p.NodeID]*bpPeer),
peers: make(map[types.NodeID]*bpPeer),
requesters: make(map[int64]*bpRequester),
height: start,
@@ -198,7 +197,7 @@ func (pool *BlockPool) IsCaughtUp() bool {
// We need to see the second block's Commit to validate the first block.
// So we peek two blocks at a time.
// The caller will verify the commit.
func (pool *BlockPool) PeekTwoBlocks() (first *block.Block, second *block.Block) {
func (pool *BlockPool) PeekTwoBlocks() (first *types.Block, second *types.Block) {
pool.mtx.RLock()
defer pool.mtx.RUnlock()
@@ -245,13 +244,13 @@ func (pool *BlockPool) PopRequest() {
// RedoRequest invalidates the block at pool.height,
// Remove the peer and redo request from others.
// Returns the ID of the removed peer.
func (pool *BlockPool) RedoRequest(height int64) p2p.NodeID {
func (pool *BlockPool) RedoRequest(height int64) types.NodeID {
pool.mtx.Lock()
defer pool.mtx.Unlock()
request := pool.requesters[height]
peerID := request.getPeerID()
if peerID != p2p.NodeID("") {
if peerID != types.NodeID("") {
// RemovePeer will redo all requesters associated with this peer.
pool.removePeer(peerID)
}
@@ -260,7 +259,7 @@ func (pool *BlockPool) RedoRequest(height int64) p2p.NodeID {
// AddBlock validates that the block comes from the peer it was expected from and calls the requester to store it.
// TODO: ensure that blocks come in order for each peer.
func (pool *BlockPool) AddBlock(peerID p2p.NodeID, block *block.Block, blockSize int) {
func (pool *BlockPool) AddBlock(peerID types.NodeID, block *types.Block, blockSize int) {
pool.mtx.Lock()
defer pool.mtx.Unlock()
@@ -307,7 +306,7 @@ func (pool *BlockPool) LastAdvance() time.Time {
}
// SetPeerRange sets the peer's alleged blockchain base and height.
func (pool *BlockPool) SetPeerRange(peerID p2p.NodeID, base int64, height int64) {
func (pool *BlockPool) SetPeerRange(peerID types.NodeID, base int64, height int64) {
pool.mtx.Lock()
defer pool.mtx.Unlock()
@@ -328,14 +327,14 @@ func (pool *BlockPool) SetPeerRange(peerID p2p.NodeID, base int64, height int64)
// RemovePeer removes the peer with peerID from the pool. If there's no peer
// with peerID, function is a no-op.
func (pool *BlockPool) RemovePeer(peerID p2p.NodeID) {
func (pool *BlockPool) RemovePeer(peerID types.NodeID) {
pool.mtx.Lock()
defer pool.mtx.Unlock()
pool.removePeer(peerID)
}
func (pool *BlockPool) removePeer(peerID p2p.NodeID) {
func (pool *BlockPool) removePeer(peerID types.NodeID) {
for _, requester := range pool.requesters {
if requester.getPeerID() == peerID {
requester.redo(peerID)
@@ -416,14 +415,14 @@ func (pool *BlockPool) requestersLen() int64 {
return int64(len(pool.requesters))
}
func (pool *BlockPool) sendRequest(height int64, peerID p2p.NodeID) {
func (pool *BlockPool) sendRequest(height int64, peerID types.NodeID) {
if !pool.IsRunning() {
return
}
pool.requestsCh <- BlockRequest{height, peerID}
}
func (pool *BlockPool) sendError(err error, peerID p2p.NodeID) {
func (pool *BlockPool) sendError(err error, peerID types.NodeID) {
if !pool.IsRunning() {
return
}
@@ -471,7 +470,7 @@ type bpPeer struct {
height int64
base int64
pool *BlockPool
id p2p.NodeID
id types.NodeID
recvMonitor *flow.Monitor
timeout *time.Timer
@@ -479,7 +478,7 @@ type bpPeer struct {
logger log.Logger
}
func newBPPeer(pool *BlockPool, peerID p2p.NodeID, base int64, height int64) *bpPeer {
func newBPPeer(pool *BlockPool, peerID types.NodeID, base int64, height int64) *bpPeer {
peer := &bpPeer{
pool: pool,
id: peerID,
@@ -544,11 +543,11 @@ type bpRequester struct {
pool *BlockPool
height int64
gotBlockCh chan struct{}
redoCh chan p2p.NodeID // redo may be sent multiple times; the peer ID identifies repeats
redoCh chan types.NodeID // redo may be sent multiple times; the peer ID identifies repeats
mtx tmsync.Mutex
peerID p2p.NodeID
block *block.Block
peerID types.NodeID
block *types.Block
}
func newBPRequester(pool *BlockPool, height int64) *bpRequester {
@@ -556,7 +555,7 @@ func newBPRequester(pool *BlockPool, height int64) *bpRequester {
pool: pool,
height: height,
gotBlockCh: make(chan struct{}, 1),
redoCh: make(chan p2p.NodeID, 1),
redoCh: make(chan types.NodeID, 1),
peerID: "",
block: nil,
@@ -571,7 +570,7 @@ func (bpr *bpRequester) OnStart() error {
}
// Returns true if the peer matches and block doesn't already exist.
func (bpr *bpRequester) setBlock(block *block.Block, peerID p2p.NodeID) bool {
func (bpr *bpRequester) setBlock(block *types.Block, peerID types.NodeID) bool {
bpr.mtx.Lock()
if bpr.block != nil || bpr.peerID != peerID {
bpr.mtx.Unlock()
@@ -587,13 +586,13 @@ func (bpr *bpRequester) setBlock(block *block.Block, peerID p2p.NodeID) bool {
return true
}
func (bpr *bpRequester) getBlock() *block.Block {
func (bpr *bpRequester) getBlock() *types.Block {
bpr.mtx.Lock()
defer bpr.mtx.Unlock()
return bpr.block
}
func (bpr *bpRequester) getPeerID() p2p.NodeID {
func (bpr *bpRequester) getPeerID() types.NodeID {
bpr.mtx.Lock()
defer bpr.mtx.Unlock()
return bpr.peerID
@@ -615,7 +614,7 @@ func (bpr *bpRequester) reset() {
// Tells bpRequester to pick another peer and try again.
// NOTE: Nonblocking, and does nothing if another redo
// was already requested.
func (bpr *bpRequester) redo(peerID p2p.NodeID) {
func (bpr *bpRequester) redo(peerID types.NodeID) {
select {
case bpr.redoCh <- peerID:
default:


@@ -11,9 +11,7 @@ import (
"github.com/tendermint/tendermint/libs/log"
tmrand "github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/metadata"
"github.com/tendermint/tendermint/pkg/p2p"
"github.com/tendermint/tendermint/types"
)
func init() {
@@ -21,7 +19,7 @@ func init() {
}
type testPeer struct {
id p2p.NodeID
id types.NodeID
base int64
height int64
inputChan chan inputData // make sure each peer's data is sequential
@@ -43,7 +41,7 @@ func (p testPeer) runInputRoutine() {
// Request desired, pretend like we got the block immediately.
func (p testPeer) simulateInput(input inputData) {
block := &block.Block{Header: metadata.Header{Height: input.request.Height}}
block := &types.Block{Header: types.Header{Height: input.request.Height}}
input.pool.AddBlock(input.request.PeerID, block, 123)
// TODO: uncommenting this creates a race which is detected by:
// https://github.com/golang/go/blob/2bd767b1022dd3254bcec469f0ee164024726486/src/testing/testing.go#L854-L856
@@ -51,7 +49,7 @@ func (p testPeer) simulateInput(input inputData) {
// input.t.Logf("Added block from peer %v (height: %v)", input.request.PeerID, input.request.Height)
}
type testPeers map[p2p.NodeID]testPeer
type testPeers map[types.NodeID]testPeer
func (ps testPeers) start() {
for _, v := range ps {
@@ -68,7 +66,7 @@ func (ps testPeers) stop() {
func makePeers(numPeers int, minHeight, maxHeight int64) testPeers {
peers := make(testPeers, numPeers)
for i := 0; i < numPeers; i++ {
peerID := p2p.NodeID(tmrand.Str(12))
peerID := types.NodeID(tmrand.Str(12))
height := minHeight + mrand.Int63n(maxHeight-minHeight)
base := minHeight + int64(i)
if base > height {
@@ -184,7 +182,7 @@ func TestBlockPoolTimeout(t *testing.T) {
// Pull from channels
counter := 0
timedOut := map[p2p.NodeID]struct{}{}
timedOut := map[types.NodeID]struct{}{}
for {
select {
case err := <-errorsCh:
@@ -205,7 +203,7 @@ func TestBlockPoolTimeout(t *testing.T) {
func TestBlockPoolRemovePeer(t *testing.T) {
peers := make(testPeers, 10)
for i := 0; i < 10; i++ {
peerID := p2p.NodeID(fmt.Sprintf("%d", i+1))
peerID := types.NodeID(fmt.Sprintf("%d", i+1))
height := int64(i + 1)
peers[peerID] = testPeer{peerID, 0, height, make(chan inputData)}
}
@@ -229,10 +227,10 @@ func TestBlockPoolRemovePeer(t *testing.T) {
assert.EqualValues(t, 10, pool.MaxPeerHeight())
// remove not-existing peer
assert.NotPanics(t, func() { pool.RemovePeer(p2p.NodeID("Superman")) })
assert.NotPanics(t, func() { pool.RemovePeer(types.NodeID("Superman")) })
// remove peer with biggest height
pool.RemovePeer(p2p.NodeID("10"))
pool.RemovePeer(types.NodeID("10"))
assert.EqualValues(t, 9, pool.MaxPeerHeight())
// remove all peers
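Worth calling out across these test hunks: types.NodeID, like the p2p.NodeID it replaces, is evidently a defined string type, which is why literals such as "Superman" and fmt.Sprintf results need only a plain type conversion. A sketch under that assumption:

package main

import "fmt"

// Assumption: NodeID is a defined type over string, as the explicit
// conversions throughout the tests suggest.
type NodeID string

func main() {
    peers := map[NodeID]struct{}{}
    peers[NodeID("Superman")] = struct{}{}            // literal converts explicitly
    peers[NodeID(fmt.Sprintf("%d", 10))] = struct{}{} // formatted IDs convert the same way
    fmt.Println(len(peers)) // 2
}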


@@ -12,12 +12,10 @@ import (
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
tmSync "github.com/tendermint/tendermint/libs/sync"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/metadata"
p2ptypes "github.com/tendermint/tendermint/pkg/p2p"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blocksync"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
)
var (
@@ -69,7 +67,7 @@ type consensusReactor interface {
type peerError struct {
err error
peerID p2ptypes.NodeID
peerID types.NodeID
}
func (e peerError) Error() string {
@@ -207,7 +205,7 @@ func (r *Reactor) OnStop() {
// respondToPeer loads a block and sends it to the requesting peer, if we have it.
// Otherwise, we'll respond saying we do not have it.
func (r *Reactor) respondToPeer(msg *bcproto.BlockRequest, peerID p2ptypes.NodeID) {
func (r *Reactor) respondToPeer(msg *bcproto.BlockRequest, peerID types.NodeID) {
block := r.store.LoadBlock(msg.Height)
if block != nil {
blockProto, err := block.ToProto()
@@ -242,7 +240,7 @@ func (r *Reactor) handleBlockSyncMessage(envelope p2p.Envelope) error {
r.respondToPeer(msg, envelope.From)
case *bcproto.BlockResponse:
block, err := block.BlockFromProto(msg.Block)
block, err := types.BlockFromProto(msg.Block)
if err != nil {
logger.Error("failed to convert block from proto", "err", err)
return err
@@ -533,9 +531,9 @@ FOR_LOOP:
}
var (
firstParts = first.MakePartSet(metadata.BlockPartSizeBytes)
firstParts = first.MakePartSet(types.BlockPartSizeBytes)
firstPartSetHeader = firstParts.Header()
firstID = metadata.BlockID{Hash: first.Hash(), PartSetHeader: firstPartSetHeader}
firstID = types.BlockID{Hash: first.Hash(), PartSetHeader: firstPartSetHeader}
)
// Finally, verify the first block using the second's commit.
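The last hunk is the heart of block sync's trust model: block N is applied only after block N+1's LastCommit is verified against N's BlockID, rebuilt from its hash and part-set header. A stubbed, runnable sketch of that call shape (VerifyCommitLight's argument order is taken from the processor_context hunks further down; the types here are simplified stand-ins):

package main

import "fmt"

// Simplified stand-ins; only the call pattern comes from the diff.
type BlockID struct{ Hash string }
type Commit struct{ Height int64 }
type validatorSet struct{}

// VerifyCommitLight stands in for the real check that +2/3 of the
// validator set signed blockID at the given height.
func (validatorSet) VerifyCommitLight(chainID string, blockID BlockID, height int64, commit *Commit) error {
    if commit == nil || commit.Height != height {
        return fmt.Errorf("commit does not match height %d", height)
    }
    return nil
}

// verifyFirst mirrors the reactor loop: the first block is trusted only
// once the second block's LastCommit confirms it.
func verifyFirst(vals validatorSet, chainID string, firstID BlockID, firstHeight int64, secondLastCommit *Commit) error {
    return vals.VerifyCommitLight(chainID, firstID, firstHeight, secondLastCommit)
}

func main() {
    err := verifyFirst(validatorSet{}, "test-chain", BlockID{Hash: "AA"}, 5, &Commit{Height: 5})
    fmt.Println(err) // <nil>
}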


@@ -7,6 +7,7 @@ import (
"github.com/stretchr/testify/require"
abci "github.com/tendermint/tendermint/abci/types"
cfg "github.com/tendermint/tendermint/config"
cons "github.com/tendermint/tendermint/internal/consensus"
"github.com/tendermint/tendermint/internal/mempool/mock"
@@ -14,37 +15,34 @@ import (
"github.com/tendermint/tendermint/internal/p2p/p2ptest"
"github.com/tendermint/tendermint/internal/test/factory"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/pkg/abci"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/metadata"
p2ptypes "github.com/tendermint/tendermint/pkg/p2p"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blocksync"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
sf "github.com/tendermint/tendermint/state/test/factory"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
dbm "github.com/tendermint/tm-db"
)
type reactorTestSuite struct {
network *p2ptest.Network
logger log.Logger
nodes []p2ptypes.NodeID
nodes []types.NodeID
reactors map[p2ptypes.NodeID]*Reactor
app map[p2ptypes.NodeID]proxy.AppConns
reactors map[types.NodeID]*Reactor
app map[types.NodeID]proxy.AppConns
blockSyncChannels map[p2ptypes.NodeID]*p2p.Channel
peerChans map[p2ptypes.NodeID]chan p2p.PeerUpdate
peerUpdates map[p2ptypes.NodeID]*p2p.PeerUpdates
blockSyncChannels map[types.NodeID]*p2p.Channel
peerChans map[types.NodeID]chan p2p.PeerUpdate
peerUpdates map[types.NodeID]*p2p.PeerUpdates
blockSync bool
}
func setup(
t *testing.T,
genDoc *consensus.GenesisDoc,
privVal consensus.PrivValidator,
genDoc *types.GenesisDoc,
privVal types.PrivValidator,
maxBlockHeights []int64,
chBuf uint,
) *reactorTestSuite {
@@ -55,14 +53,14 @@ func setup(
"must specify at least one block height (nodes)")
rts := &reactorTestSuite{
logger: log.TestingLogger().With("module", "blockchain", "testCase", t.Name()),
logger: log.TestingLogger().With("module", "block_sync", "testCase", t.Name()),
network: p2ptest.MakeNetwork(t, p2ptest.NetworkOptions{NumNodes: numNodes}),
nodes: make([]p2ptypes.NodeID, 0, numNodes),
reactors: make(map[p2ptypes.NodeID]*Reactor, numNodes),
app: make(map[p2ptypes.NodeID]proxy.AppConns, numNodes),
blockSyncChannels: make(map[p2ptypes.NodeID]*p2p.Channel, numNodes),
peerChans: make(map[p2ptypes.NodeID]chan p2p.PeerUpdate, numNodes),
peerUpdates: make(map[p2ptypes.NodeID]*p2p.PeerUpdates, numNodes),
nodes: make([]types.NodeID, 0, numNodes),
reactors: make(map[types.NodeID]*Reactor, numNodes),
app: make(map[types.NodeID]proxy.AppConns, numNodes),
blockSyncChannels: make(map[types.NodeID]*p2p.Channel, numNodes),
peerChans: make(map[types.NodeID]chan p2p.PeerUpdate, numNodes),
peerUpdates: make(map[types.NodeID]*p2p.PeerUpdates, numNodes),
blockSync: true,
}
@@ -91,9 +89,9 @@ func setup(
}
func (rts *reactorTestSuite) addNode(t *testing.T,
nodeID p2ptypes.NodeID,
genDoc *consensus.GenesisDoc,
privVal consensus.PrivValidator,
nodeID types.NodeID,
genDoc *types.GenesisDoc,
privVal types.PrivValidator,
maxBlockHeight int64,
) {
t.Helper()
@@ -121,7 +119,7 @@ func (rts *reactorTestSuite) addNode(t *testing.T,
)
for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
lastCommit := metadata.NewCommit(blockHeight-1, 0, metadata.BlockID{}, nil)
lastCommit := types.NewCommit(blockHeight-1, 0, types.BlockID{}, nil)
if blockHeight > 1 {
lastBlockMeta := blockStore.LoadBlockMeta(blockHeight - 1)
@@ -136,17 +134,17 @@ func (rts *reactorTestSuite) addNode(t *testing.T,
)
require.NoError(t, err)
lastCommit = metadata.NewCommit(
lastCommit = types.NewCommit(
vote.Height,
vote.Round,
lastBlockMeta.BlockID,
[]metadata.CommitSig{vote.CommitSig()},
[]types.CommitSig{vote.CommitSig()},
)
}
thisBlock := sf.MakeBlock(state, blockHeight, lastCommit)
thisParts := thisBlock.MakePartSet(metadata.BlockPartSizeBytes)
blockID := metadata.BlockID{Hash: thisBlock.Hash(), PartSetHeader: thisParts.Header()}
thisParts := thisBlock.MakePartSet(types.BlockPartSizeBytes)
blockID := types.BlockID{Hash: thisBlock.Hash(), PartSetHeader: thisParts.Header()}
state, err = blockExec.ApplyBlock(state, blockID, thisBlock)
require.NoError(t, err)
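This fixture loop, which reappears nearly verbatim in the v2 reactor test below, threads a commit chain through the store: the commit carried by block h is assembled from the vote on block h-1. A simplified, runnable sketch of just that threading (the real code uses types.NewCommit, types.CommitSig and sf.MakeBlock):

package main

import "fmt"

type commit struct{ height int64 }

type blk struct {
    height     int64
    lastCommit commit
}

func makeChain(maxHeight int64) []blk {
    chain := make([]blk, 0, maxHeight)
    lastCommit := commit{height: 0} // height 1 carries an empty commit
    for h := int64(1); h <= maxHeight; h++ {
        chain = append(chain, blk{height: h, lastCommit: lastCommit})
        // the vote on this block becomes the commit stored in the next one
        lastCommit = commit{height: h}
    }
    return chain
}

func main() {
    chain := makeChain(3)
    fmt.Println(chain[2].lastCommit.height) // 2
}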


@@ -1,12 +1,12 @@
package behavior
import "github.com/tendermint/tendermint/pkg/p2p"
import "github.com/tendermint/tendermint/types"
// PeerBehavior is a struct describing a behavior a peer performed.
// `peerID` identifies the peer and reason characterizes the specific
// behavior performed by the peer.
type PeerBehavior struct {
peerID p2p.NodeID
peerID types.NodeID
reason interface{}
}
@@ -15,7 +15,7 @@ type badMessage struct {
}
// BadMessage returns a badMessage PeerBehavior.
func BadMessage(peerID p2p.NodeID, explanation string) PeerBehavior {
func BadMessage(peerID types.NodeID, explanation string) PeerBehavior {
return PeerBehavior{peerID: peerID, reason: badMessage{explanation}}
}
@@ -24,7 +24,7 @@ type messageOutOfOrder struct {
}
// MessageOutOfOrder returns a messageOutOfOrder PeerBehavior.
func MessageOutOfOrder(peerID p2p.NodeID, explanation string) PeerBehavior {
func MessageOutOfOrder(peerID types.NodeID, explanation string) PeerBehavior {
return PeerBehavior{peerID: peerID, reason: messageOutOfOrder{explanation}}
}
@@ -33,7 +33,7 @@ type consensusVote struct {
}
// ConsensusVote returns a consensusVote PeerBehavior.
func ConsensusVote(peerID p2p.NodeID, explanation string) PeerBehavior {
func ConsensusVote(peerID types.NodeID, explanation string) PeerBehavior {
return PeerBehavior{peerID: peerID, reason: consensusVote{explanation}}
}
@@ -42,6 +42,6 @@ type blockPart struct {
}
// BlockPart returns a blockPart PeerBehavior.
func BlockPart(peerID p2p.NodeID, explanation string) PeerBehavior {
func BlockPart(peerID types.NodeID, explanation string) PeerBehavior {
return PeerBehavior{peerID: peerID, reason: blockPart{explanation}}
}
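Each constructor above wraps a small unexported struct into reason, which is an interface{}; the natural way for consumers to recover the concrete behavior is a type switch, sketched here with simplified local types (not the behavior package's API):

package main

import "fmt"

type badMessage struct{ explanation string }
type blockPart struct{ explanation string }

type PeerBehavior struct {
    peerID string
    reason interface{}
}

// describe recovers the concrete reason with a type switch.
func describe(b PeerBehavior) string {
    switch r := b.reason.(type) {
    case badMessage:
        return "bad message: " + r.explanation
    case blockPart:
        return "block part: " + r.explanation
    default:
        return "unknown behavior"
    }
}

func main() {
    fmt.Println(describe(PeerBehavior{peerID: "P1", reason: badMessage{"malformed"}}))
}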


@@ -5,7 +5,7 @@ import (
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/internal/p2p"
p2ptypes "github.com/tendermint/tendermint/pkg/p2p"
"github.com/tendermint/tendermint/types"
)
// Reporter provides an interface for reactors to report the behavior
@@ -52,14 +52,14 @@ func (spbr *SwitchReporter) Report(behavior PeerBehavior) error {
// behavior in manufactured scenarios.
type MockReporter struct {
mtx tmsync.RWMutex
pb map[p2ptypes.NodeID][]PeerBehavior
pb map[types.NodeID][]PeerBehavior
}
// NewMockReporter returns a Reporter which records all reported
// behaviors in memory.
func NewMockReporter() *MockReporter {
return &MockReporter{
pb: map[p2ptypes.NodeID][]PeerBehavior{},
pb: map[types.NodeID][]PeerBehavior{},
}
}
@@ -73,7 +73,7 @@ func (mpbr *MockReporter) Report(behavior PeerBehavior) error {
}
// GetBehaviors returns all behaviors reported on the peer identified by peerID.
func (mpbr *MockReporter) GetBehaviors(peerID p2ptypes.NodeID) []PeerBehavior {
func (mpbr *MockReporter) GetBehaviors(peerID types.NodeID) []PeerBehavior {
mpbr.mtx.RLock()
defer mpbr.mtx.RUnlock()
if items, ok := mpbr.pb[peerID]; ok {
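The MockReporter shape is the standard concurrency-safe recorder: writes take the write lock, reads take the read lock, which is what lets the concurrency test later in this diff hammer it from many goroutines. A minimal stand-in using plain strings in place of PeerBehavior:

package main

import (
    "fmt"
    "sync"
)

type reporter struct {
    mtx sync.RWMutex
    pb  map[string][]string
}

func (r *reporter) Report(peerID, behavior string) {
    r.mtx.Lock()
    defer r.mtx.Unlock()
    r.pb[peerID] = append(r.pb[peerID], behavior)
}

func (r *reporter) GetBehaviors(peerID string) []string {
    r.mtx.RLock()
    defer r.mtx.RUnlock()
    // return a copy so callers cannot race with later Reports
    return append([]string(nil), r.pb[peerID]...)
}

func main() {
    r := &reporter{pb: map[string][]string{}}
    r.Report("P1", "voted")
    fmt.Println(r.GetBehaviors("P1"))
}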


@@ -5,13 +5,13 @@ import (
"testing"
bh "github.com/tendermint/tendermint/internal/blocksync/v2/internal/behavior"
"github.com/tendermint/tendermint/pkg/p2p"
"github.com/tendermint/tendermint/types"
)
// TestMockReporter tests the MockReporter's ability to store reported
// peer behavior in memory indexed by the peerID.
func TestMockReporter(t *testing.T) {
var peerID p2p.NodeID = "MockPeer"
var peerID types.NodeID = "MockPeer"
pr := bh.NewMockReporter()
behaviors := pr.GetBehaviors(peerID)
@@ -34,7 +34,7 @@ func TestMockReporter(t *testing.T) {
}
type scriptItem struct {
peerID p2p.NodeID
peerID types.NodeID
behavior bh.PeerBehavior
}
@@ -76,10 +76,10 @@ func equalBehaviors(a []bh.PeerBehavior, b []bh.PeerBehavior) bool {
// frequencies with which those behaviors occur.
func TestEqualPeerBehaviors(t *testing.T) {
var (
peerID p2p.NodeID = "MockPeer"
consensusVote = bh.ConsensusVote(peerID, "voted")
blockPart = bh.BlockPart(peerID, "blocked")
equals = []struct {
peerID types.NodeID = "MockPeer"
consensusVote = bh.ConsensusVote(peerID, "voted")
blockPart = bh.BlockPart(peerID, "blocked")
equals = []struct {
left []bh.PeerBehavior
right []bh.PeerBehavior
}{
@@ -128,7 +128,7 @@ func TestEqualPeerBehaviors(t *testing.T) {
func TestMockPeerBehaviorReporterConcurrency(t *testing.T) {
var (
behaviorScript = []struct {
peerID p2p.NodeID
peerID types.NodeID
behaviors []bh.PeerBehavior
}{
{"1", []bh.PeerBehavior{bh.ConsensusVote("1", "")}},


@@ -5,9 +5,9 @@ import (
"github.com/gogo/protobuf/proto"
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/pkg/block"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blocksync"
"github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
var (
@@ -16,7 +16,7 @@ var (
type iIO interface {
sendBlockRequest(peer p2p.Peer, height int64) error
sendBlockToPeer(block *block.Block, peer p2p.Peer) error
sendBlockToPeer(block *types.Block, peer p2p.Peer) error
sendBlockNotFound(height int64, peer p2p.Peer) error
sendStatusResponse(base, height int64, peer p2p.Peer) error
@@ -90,7 +90,7 @@ func (sio *switchIO) sendStatusResponse(base int64, height int64, peer p2p.Peer)
return nil
}
func (sio *switchIO) sendBlockToPeer(block *block.Block, peer p2p.Peer) error {
func (sio *switchIO) sendBlockToPeer(block *types.Block, peer p2p.Peer) error {
if block == nil {
panic("trying to send nil block")
}


@@ -3,10 +3,8 @@ package v2
import (
"fmt"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/metadata"
"github.com/tendermint/tendermint/pkg/p2p"
tmState "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
// Events generated by the processor:
@@ -14,8 +12,8 @@ import (
type pcBlockVerificationFailure struct {
priorityNormal
height int64
firstPeerID p2p.NodeID
secondPeerID p2p.NodeID
firstPeerID types.NodeID
secondPeerID types.NodeID
}
func (e pcBlockVerificationFailure) String() string {
@@ -27,7 +25,7 @@ func (e pcBlockVerificationFailure) String() string {
type pcBlockProcessed struct {
priorityNormal
height int64
peerID p2p.NodeID
peerID types.NodeID
}
func (e pcBlockProcessed) String() string {
@@ -46,8 +44,8 @@ func (p pcFinished) Error() string {
}
type queueItem struct {
block *block.Block
peerID p2p.NodeID
block *types.Block
peerID types.NodeID
}
type blockQueue map[int64]queueItem
@@ -96,7 +94,7 @@ func (state *pcState) synced() bool {
return len(state.queue) <= 1
}
func (state *pcState) enqueue(peerID p2p.NodeID, block *block.Block, height int64) {
func (state *pcState) enqueue(peerID types.NodeID, block *types.Block, height int64) {
if item, ok := state.queue[height]; ok {
panic(fmt.Sprintf(
"duplicate block %d (%X) enqueued by processor (sent by %v; existing block %X from %v)",
@@ -111,7 +109,7 @@ func (state *pcState) height() int64 {
}
// purgePeer removes all unprocessed blocks sent by peerID from the queue
func (state *pcState) purgePeer(peerID p2p.NodeID) {
func (state *pcState) purgePeer(peerID types.NodeID) {
// what if height is less than state.height?
for height, item := range state.queue {
if item.peerID == peerID {
@@ -161,8 +159,8 @@ func (state *pcState) handle(event Event) (Event, error) {
var (
first, second = firstItem.block, secondItem.block
firstParts = first.MakePartSet(metadata.BlockPartSizeBytes)
firstID = metadata.BlockID{Hash: first.Hash(), PartSetHeader: firstParts.Header()}
firstParts = first.MakePartSet(types.BlockPartSizeBytes)
firstID = types.BlockID{Hash: first.Hash(), PartSetHeader: firstParts.Header()}
)
// verify if +second+ last commit "confirms" +first+ block
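The queue semantics in this file are strict: one block per height, a duplicate enqueue is treated as a programming error (panic), and purging a peer drops every height that peer supplied. A condensed, runnable version of those three rules:

package main

import "fmt"

type queueItem struct {
    peerID string
    height int64
}

type blockQueue map[int64]queueItem

func (q blockQueue) enqueue(peerID string, height int64) {
    if _, ok := q[height]; ok {
        panic(fmt.Sprintf("duplicate block %d enqueued", height))
    }
    q[height] = queueItem{peerID: peerID, height: height}
}

// purgePeer drops every queued block supplied by peerID.
func (q blockQueue) purgePeer(peerID string) {
    for height, item := range q {
        if item.peerID == peerID {
            delete(q, height)
        }
    }
}

func main() {
    q := blockQueue{}
    q.enqueue("P1", 1)
    q.enqueue("P2", 2)
    q.purgePeer("P1")
    fmt.Println(len(q)) // 1
}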


@@ -4,18 +4,17 @@ import (
"fmt"
cons "github.com/tendermint/tendermint/internal/consensus"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/metadata"
"github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
type processorContext interface {
applyBlock(blockID metadata.BlockID, block *block.Block) error
verifyCommit(chainID string, blockID metadata.BlockID, height int64, commit *metadata.Commit) error
saveBlock(block *block.Block, blockParts *metadata.PartSet, seenCommit *metadata.Commit)
applyBlock(blockID types.BlockID, block *types.Block) error
verifyCommit(chainID string, blockID types.BlockID, height int64, commit *types.Commit) error
saveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit)
tmState() state.State
setState(state.State)
recordConsMetrics(block *block.Block)
recordConsMetrics(block *types.Block)
}
type pContext struct {
@@ -34,7 +33,7 @@ func newProcessorContext(st blockStore, ex blockApplier, s state.State, m *cons.
}
}
func (pc *pContext) applyBlock(blockID metadata.BlockID, block *block.Block) error {
func (pc *pContext) applyBlock(blockID types.BlockID, block *types.Block) error {
newState, err := pc.applier.ApplyBlock(pc.state, blockID, block)
pc.state = newState
return err
@@ -48,15 +47,15 @@ func (pc *pContext) setState(state state.State) {
pc.state = state
}
func (pc pContext) verifyCommit(chainID string, blockID metadata.BlockID, height int64, commit *metadata.Commit) error {
func (pc pContext) verifyCommit(chainID string, blockID types.BlockID, height int64, commit *types.Commit) error {
return pc.state.Validators.VerifyCommitLight(chainID, blockID, height, commit)
}
func (pc *pContext) saveBlock(block *block.Block, blockParts *metadata.PartSet, seenCommit *metadata.Commit) {
func (pc *pContext) saveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit) {
pc.store.SaveBlock(block, blockParts, seenCommit)
}
func (pc *pContext) recordConsMetrics(block *block.Block) {
func (pc *pContext) recordConsMetrics(block *types.Block) {
pc.metrics.RecordConsMetrics(block)
}
@@ -77,7 +76,7 @@ func newMockProcessorContext(
}
}
func (mpc *mockPContext) applyBlock(blockID metadata.BlockID, block *block.Block) error {
func (mpc *mockPContext) applyBlock(blockID types.BlockID, block *types.Block) error {
for _, h := range mpc.applicationBL {
if h == block.Height {
return fmt.Errorf("generic application error")
@@ -87,7 +86,7 @@ func (mpc *mockPContext) applyBlock(blockID metadata.BlockID, block *block.Block
return nil
}
func (mpc *mockPContext) verifyCommit(chainID string, blockID metadata.BlockID, height int64, commit *metadata.Commit) error {
func (mpc *mockPContext) verifyCommit(chainID string, blockID types.BlockID, height int64, commit *types.Commit) error {
for _, h := range mpc.verificationBL {
if h == height {
return fmt.Errorf("generic verification error")
@@ -96,7 +95,7 @@ func (mpc *mockPContext) verifyCommit(chainID string, blockID metadata.BlockID,
return nil
}
func (mpc *mockPContext) saveBlock(block *block.Block, blockParts *metadata.PartSet, seenCommit *metadata.Commit) {
func (mpc *mockPContext) saveBlock(block *types.Block, blockParts *types.PartSet, seenCommit *types.Commit) {
}
@@ -108,6 +107,6 @@ func (mpc *mockPContext) tmState() state.State {
return mpc.state
}
func (mpc *mockPContext) recordConsMetrics(block *block.Block) {
func (mpc *mockPContext) recordConsMetrics(block *types.Block) {
}
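pContext and mockPContext implement the same processorContext interface, and the mock injects failures by height, returning "generic application error" or "generic verification error" for blacklisted heights. That test-double shape, reduced to its essentials:

package main

import (
    "errors"
    "fmt"
)

type applier interface {
    applyBlock(height int64) error
}

// mockApplier fails only on an injected list of heights, mirroring the
// applicationBL / verificationBL blacklists in the diff.
type mockApplier struct {
    failAt []int64
}

func (m *mockApplier) applyBlock(height int64) error {
    for _, h := range m.failAt {
        if h == height {
            return errors.New("generic application error")
        }
    }
    return nil
}

func main() {
    var a applier = &mockApplier{failAt: []int64{3}}
    fmt.Println(a.applyBlock(2), a.applyBlock(3))
}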


@@ -5,10 +5,8 @@ import (
"github.com/stretchr/testify/assert"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/metadata"
"github.com/tendermint/tendermint/pkg/p2p"
tmState "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
// pcBlock is a test helper structure with simple types. Its purpose is to help with test readability.
@@ -28,8 +26,8 @@ type params struct {
}
// makePcBlock makes an empty block.
func makePcBlock(height int64) *block.Block {
return &block.Block{Header: metadata.Header{Height: height}}
func makePcBlock(height int64) *types.Block {
return &types.Block{Header: types.Header{Height: height}}
}
// makeState takes test parameters and creates a specific processor state.
@@ -41,7 +39,7 @@ func makeState(p *params) *pcState {
state := newPcState(context)
for _, item := range p.items {
state.enqueue(p2p.NodeID(item.pid), makePcBlock(item.height), item.height)
state.enqueue(types.NodeID(item.pid), makePcBlock(item.height), item.height)
}
state.blocksSynced = p.blocksSynced
@@ -49,7 +47,7 @@ func makeState(p *params) *pcState {
return state
}
func mBlockResponse(peerID p2p.NodeID, height int64) scBlockReceived {
func mBlockResponse(peerID types.NodeID, height int64) scBlockReceived {
return scBlockReceived{
peerID: peerID,
block: makePcBlock(height),


@@ -14,11 +14,9 @@ import (
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/sync"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/metadata"
p2ptypes "github.com/tendermint/tendermint/pkg/p2p"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blocksync"
"github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
const (
@@ -27,8 +25,8 @@ const (
)
type blockStore interface {
LoadBlock(height int64) *block.Block
SaveBlock(*block.Block, *metadata.PartSet, *metadata.Commit)
LoadBlock(height int64) *types.Block
SaveBlock(*types.Block, *types.PartSet, *types.Commit)
Base() int64
Height() int64
}
@@ -58,7 +56,7 @@ type BlockchainReactor struct {
}
type blockApplier interface {
ApplyBlock(state state.State, blockID metadata.BlockID, block *block.Block) (state.State, error)
ApplyBlock(state state.State, blockID types.BlockID, block *types.Block) (state.State, error)
}
// XXX: unify naming in this package around tmState
@@ -229,9 +227,9 @@ func (e rProcessBlock) String() string {
type bcBlockResponse struct {
priorityNormal
time time.Time
peerID p2ptypes.NodeID
peerID types.NodeID
size int64
block *block.Block
block *types.Block
}
func (resp bcBlockResponse) String() string {
@@ -243,7 +241,7 @@ func (resp bcBlockResponse) String() string {
type bcNoBlockResponse struct {
priorityNormal
time time.Time
peerID p2ptypes.NodeID
peerID types.NodeID
height int64
}
@@ -256,7 +254,7 @@ func (resp bcNoBlockResponse) String() string {
type bcStatusResponse struct {
priorityNormal
time time.Time
peerID p2ptypes.NodeID
peerID types.NodeID
base int64
height int64
}
@@ -269,7 +267,7 @@ func (resp bcStatusResponse) String() string {
// new peer is connected
type bcAddNewPeer struct {
priorityNormal
peerID p2ptypes.NodeID
peerID types.NodeID
}
func (resp bcAddNewPeer) String() string {
@@ -279,7 +277,7 @@ func (resp bcAddNewPeer) String() string {
// existing peer is removed
type bcRemovePeer struct {
priorityHigh
peerID p2ptypes.NodeID
peerID types.NodeID
reason interface{}
}
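Every bc* event above embeds either priorityNormal or priorityHigh. The likely mechanism, and this is an inference from the names rather than something shown in this diff, is that the embedded marker promotes a priority method onto each event, so the demuxer can service peer removals ahead of routine responses. A sketch of that idea:

package main

import "fmt"

type prioritizable interface{ priority() int }

type priorityNormal struct{}

func (priorityNormal) priority() int { return 1 }

type priorityHigh struct{}

func (priorityHigh) priority() int { return 2 }

// bcRemovePeer and bcStatusResponse stand in for the event structs in
// the diff; embedding promotes priority() onto each of them.
type bcRemovePeer struct {
    priorityHigh
    peerID string
}

type bcStatusResponse struct {
    priorityNormal
    peerID string
}

func main() {
    events := []prioritizable{bcStatusResponse{peerID: "P1"}, bcRemovePeer{peerID: "P2"}}
    for _, e := range events {
        fmt.Println(e.priority()) // 1, then 2
    }
}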
@@ -538,7 +536,7 @@ func (r *BlockchainReactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
r.mtx.RUnlock()
case *bcproto.Message_BlockResponse:
bi, err := block.BlockFromProto(msg.BlockResponse.Block)
bi, err := types.BlockFromProto(msg.BlockResponse.Block)
if err != nil {
logger.Error("error transitioning block from protobuf", "err", err)
_ = r.reporter.Report(behavior.BadMessage(src.ID(), err.Error()))


@@ -13,6 +13,7 @@ import (
"github.com/stretchr/testify/require"
dbm "github.com/tendermint/tm-db"
abci "github.com/tendermint/tendermint/abci/types"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/blocksync/v2/internal/behavior"
cons "github.com/tendermint/tendermint/internal/consensus"
@@ -22,25 +23,21 @@ import (
"github.com/tendermint/tendermint/internal/test/factory"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/pkg/abci"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/metadata"
p2ptypes "github.com/tendermint/tendermint/pkg/p2p"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blocksync"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
sf "github.com/tendermint/tendermint/state/test/factory"
tmstore "github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
)
type mockPeer struct {
service.Service
id p2ptypes.NodeID
id types.NodeID
}
func (mp mockPeer) FlushStop() {}
func (mp mockPeer) ID() p2ptypes.NodeID { return mp.id }
func (mp mockPeer) ID() types.NodeID { return mp.id }
func (mp mockPeer) RemoteIP() net.IP { return net.IP{} }
func (mp mockPeer) RemoteAddr() net.Addr { return &net.TCPAddr{IP: mp.RemoteIP(), Port: 8800} }
@@ -48,8 +45,8 @@ func (mp mockPeer) IsOutbound() bool { return true }
func (mp mockPeer) IsPersistent() bool { return true }
func (mp mockPeer) CloseConn() error { return nil }
func (mp mockPeer) NodeInfo() p2ptypes.NodeInfo {
return p2ptypes.NodeInfo{
func (mp mockPeer) NodeInfo() types.NodeInfo {
return types.NodeInfo{
NodeID: "",
ListenAddr: "",
}
@@ -65,7 +62,7 @@ func (mp mockPeer) Get(string) interface{} { return struct{}{} }
//nolint:unused
type mockBlockStore struct {
blocks map[int64]*block.Block
blocks map[int64]*types.Block
}
//nolint:unused
@@ -74,12 +71,12 @@ func (ml *mockBlockStore) Height() int64 {
}
//nolint:unused
func (ml *mockBlockStore) LoadBlock(height int64) *block.Block {
func (ml *mockBlockStore) LoadBlock(height int64) *types.Block {
return ml.blocks[height]
}
//nolint:unused
func (ml *mockBlockStore) SaveBlock(block *block.Block, part *metadata.PartSet, commit *metadata.Commit) {
func (ml *mockBlockStore) SaveBlock(block *types.Block, part *types.PartSet, commit *types.Commit) {
ml.blocks[block.Height] = block
}
@@ -88,7 +85,7 @@ type mockBlockApplier struct {
// XXX: Add whitelist/blacklist?
func (mba *mockBlockApplier) ApplyBlock(
state sm.State, blockID metadata.BlockID, block *block.Block,
state sm.State, blockID types.BlockID, block *types.Block,
) (sm.State, error) {
state.LastBlockHeight++
return state, nil
@@ -116,7 +113,7 @@ func (sio *mockSwitchIo) sendStatusResponse(_, _ int64, _ p2p.Peer) error {
return nil
}
func (sio *mockSwitchIo) sendBlockToPeer(_ *block.Block, _ p2p.Peer) error {
func (sio *mockSwitchIo) sendBlockToPeer(_ *types.Block, _ p2p.Peer) error {
sio.mtx.Lock()
defer sio.mtx.Unlock()
sio.numBlockResponse++
@@ -150,8 +147,8 @@ func (sio *mockSwitchIo) sendStatusRequest(_ p2p.Peer) error {
type testReactorParams struct {
logger log.Logger
genDoc *consensus.GenesisDoc
privVals []consensus.PrivValidator
genDoc *types.GenesisDoc
privVals []types.PrivValidator
startHeight int64
mockA bool
}
@@ -422,7 +419,7 @@ func TestReactorHelperMode(t *testing.T) {
msgBz, err := proto.Marshal(msgProto)
require.NoError(t, err)
reactor.Receive(channelID, mockPeer{id: p2ptypes.NodeID(step.peer)}, msgBz)
reactor.Receive(channelID, mockPeer{id: types.NodeID(step.peer)}, msgBz)
assert.Equal(t, old+1, mockSwitch.numStatusResponse)
case bcproto.BlockRequest:
if ev.Height > params.startHeight {
@@ -434,7 +431,7 @@ func TestReactorHelperMode(t *testing.T) {
msgBz, err := proto.Marshal(msgProto)
require.NoError(t, err)
reactor.Receive(channelID, mockPeer{id: p2ptypes.NodeID(step.peer)}, msgBz)
reactor.Receive(channelID, mockPeer{id: types.NodeID(step.peer)}, msgBz)
assert.Equal(t, old+1, mockSwitch.numNoBlockResponse)
} else {
old := mockSwitch.numBlockResponse
@@ -445,7 +442,7 @@ func TestReactorHelperMode(t *testing.T) {
msgBz, err := proto.Marshal(msgProto)
require.NoError(t, err)
reactor.Receive(channelID, mockPeer{id: p2ptypes.NodeID(step.peer)}, msgBz)
reactor.Receive(channelID, mockPeer{id: types.NodeID(step.peer)}, msgBz)
assert.Equal(t, old+1, mockSwitch.numBlockResponse)
}
}
@@ -478,8 +475,8 @@ type testApp struct {
func newReactorStore(
t *testing.T,
genDoc *consensus.GenesisDoc,
privVals []consensus.PrivValidator,
genDoc *types.GenesisDoc,
privVals []types.PrivValidator,
maxBlockHeight int64) (*tmstore.BlockStore, sm.State, *sm.BlockExecutor) {
t.Helper()
@@ -505,7 +502,7 @@ func newReactorStore(
// add blocks in
for blockHeight := int64(1); blockHeight <= maxBlockHeight; blockHeight++ {
lastCommit := metadata.NewCommit(blockHeight-1, 0, metadata.BlockID{}, nil)
lastCommit := types.NewCommit(blockHeight-1, 0, types.BlockID{}, nil)
if blockHeight > 1 {
lastBlockMeta := blockStore.LoadBlockMeta(blockHeight - 1)
lastBlock := blockStore.LoadBlock(blockHeight - 1)
@@ -517,14 +514,14 @@ func newReactorStore(
time.Now(),
)
require.NoError(t, err)
lastCommit = metadata.NewCommit(vote.Height, vote.Round,
lastBlockMeta.BlockID, []metadata.CommitSig{vote.CommitSig()})
lastCommit = types.NewCommit(vote.Height, vote.Round,
lastBlockMeta.BlockID, []types.CommitSig{vote.CommitSig()})
}
thisBlock := sf.MakeBlock(state, blockHeight, lastCommit)
thisParts := thisBlock.MakePartSet(metadata.BlockPartSizeBytes)
blockID := metadata.BlockID{Hash: thisBlock.Hash(), PartSetHeader: thisParts.Header()}
thisParts := thisBlock.MakePartSet(types.BlockPartSizeBytes)
blockID := types.BlockID{Hash: thisBlock.Hash(), PartSetHeader: thisParts.Header()}
state, err = blockExec.ApplyBlock(state, blockID, thisBlock)
require.NoError(t, err)


@@ -8,8 +8,7 @@ import (
"sort"
"time"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/p2p"
"github.com/tendermint/tendermint/types"
)
// Events generated by the scheduler:
@@ -26,7 +25,7 @@ func (e scFinishedEv) String() string {
// send a blockRequest message
type scBlockRequest struct {
priorityNormal
peerID p2p.NodeID
peerID types.NodeID
height int64
}
@@ -37,8 +36,8 @@ func (e scBlockRequest) String() string {
// a block has been received and validated by the scheduler
type scBlockReceived struct {
priorityNormal
peerID p2p.NodeID
block *block.Block
peerID types.NodeID
block *types.Block
}
func (e scBlockReceived) String() string {
@@ -48,7 +47,7 @@ func (e scBlockReceived) String() string {
// scheduler detected a peer error
type scPeerError struct {
priorityHigh
peerID p2p.NodeID
peerID types.NodeID
reason error
}
@@ -59,7 +58,7 @@ func (e scPeerError) String() string {
// scheduler removed a set of peers (timed out or slow peer)
type scPeersPruned struct {
priorityHigh
peers []p2p.NodeID
peers []types.NodeID
}
func (e scPeersPruned) String() string {
@@ -126,7 +125,7 @@ func (e peerState) String() string {
}
type scPeer struct {
peerID p2p.NodeID
peerID types.NodeID
// initialized as New when peer is added, updated to Ready when statusUpdate is received,
// updated to Removed when peer is removed
@@ -143,7 +142,7 @@ func (p scPeer) String() string {
p.state, p.base, p.height, p.lastTouched, p.lastRate, p.peerID)
}
func newScPeer(peerID p2p.NodeID) *scPeer {
func newScPeer(peerID types.NodeID) *scPeer {
return &scPeer{
peerID: peerID,
state: peerStateNew,
@@ -171,7 +170,7 @@ type scheduler struct {
// a map of peerID to the scheduler-specific peer struct `scPeer`, used to
// keep track of peer-specific state
peers map[p2p.NodeID]*scPeer
peers map[types.NodeID]*scPeer
peerTimeout time.Duration // maximum response time from a peer otherwise prune
minRecvRate int64 // minimum receive rate from peer otherwise prune
@@ -183,13 +182,13 @@ type scheduler struct {
blockStates map[int64]blockState
// a map of heights to the peer we are waiting for a response from
pendingBlocks map[int64]p2p.NodeID
pendingBlocks map[int64]types.NodeID
// the time at which a block was put in blockStatePending
pendingTime map[int64]time.Time
// a map of heights to the peer that put the block in blockStateReceived
receivedBlocks map[int64]p2p.NodeID
receivedBlocks map[int64]types.NodeID
}
func (sc scheduler) String() string {
@@ -204,10 +203,10 @@ func newScheduler(initHeight int64, startTime time.Time) *scheduler {
syncTimeout: 60 * time.Second,
height: initHeight,
blockStates: make(map[int64]blockState),
peers: make(map[p2p.NodeID]*scPeer),
pendingBlocks: make(map[int64]p2p.NodeID),
peers: make(map[types.NodeID]*scPeer),
pendingBlocks: make(map[int64]types.NodeID),
pendingTime: make(map[int64]time.Time),
receivedBlocks: make(map[int64]p2p.NodeID),
receivedBlocks: make(map[int64]types.NodeID),
targetPending: 10, // TODO - pass as param
peerTimeout: 15 * time.Second, // TODO - pass as param
minRecvRate: 0, // int64(7680), TODO - pass as param
@@ -216,14 +215,14 @@ func newScheduler(initHeight int64, startTime time.Time) *scheduler {
return &sc
}
func (sc *scheduler) ensurePeer(peerID p2p.NodeID) *scPeer {
func (sc *scheduler) ensurePeer(peerID types.NodeID) *scPeer {
if _, ok := sc.peers[peerID]; !ok {
sc.peers[peerID] = newScPeer(peerID)
}
return sc.peers[peerID]
}
func (sc *scheduler) touchPeer(peerID p2p.NodeID, time time.Time) error {
func (sc *scheduler) touchPeer(peerID types.NodeID, time time.Time) error {
peer, ok := sc.peers[peerID]
if !ok {
return fmt.Errorf("couldn't find peer %s", peerID)
@@ -238,7 +237,7 @@ func (sc *scheduler) touchPeer(peerID p2p.NodeID, time time.Time) error {
return nil
}
func (sc *scheduler) removePeer(peerID p2p.NodeID) {
func (sc *scheduler) removePeer(peerID types.NodeID) {
peer, ok := sc.peers[peerID]
if !ok {
return
@@ -298,7 +297,7 @@ func (sc *scheduler) addNewBlocks() {
}
}
func (sc *scheduler) setPeerRange(peerID p2p.NodeID, base int64, height int64) error {
func (sc *scheduler) setPeerRange(peerID types.NodeID, base int64, height int64) error {
peer := sc.ensurePeer(peerID)
if peer.state == peerStateRemoved {
@@ -333,8 +332,8 @@ func (sc *scheduler) getStateAtHeight(height int64) blockState {
}
}
func (sc *scheduler) getPeersWithHeight(height int64) []p2p.NodeID {
peers := make([]p2p.NodeID, 0)
func (sc *scheduler) getPeersWithHeight(height int64) []types.NodeID {
peers := make([]types.NodeID, 0)
for _, peer := range sc.peers {
if peer.state != peerStateReady {
continue
@@ -346,8 +345,8 @@ func (sc *scheduler) getPeersWithHeight(height int64) []p2p.NodeID {
return peers
}
func (sc *scheduler) prunablePeers(peerTimeout time.Duration, minRecvRate int64, now time.Time) []p2p.NodeID {
prunable := make([]p2p.NodeID, 0)
func (sc *scheduler) prunablePeers(peerTimeout time.Duration, minRecvRate int64, now time.Time) []types.NodeID {
prunable := make([]types.NodeID, 0)
for peerID, peer := range sc.peers {
if peer.state != peerStateReady {
continue
@@ -366,7 +365,7 @@ func (sc *scheduler) setStateAtHeight(height int64, state blockState) {
}
// CONTRACT: peer exists and is in Ready state.
func (sc *scheduler) markReceived(peerID p2p.NodeID, height int64, size int64, now time.Time) error {
func (sc *scheduler) markReceived(peerID types.NodeID, height int64, size int64, now time.Time) error {
peer := sc.peers[peerID]
if state := sc.getStateAtHeight(height); state != blockStatePending || sc.pendingBlocks[height] != peerID {
@@ -390,7 +389,7 @@ func (sc *scheduler) markReceived(peerID p2p.NodeID, height int64, size int64, n
return nil
}
func (sc *scheduler) markPending(peerID p2p.NodeID, height int64, time time.Time) error {
func (sc *scheduler) markPending(peerID types.NodeID, height int64, time time.Time) error {
state := sc.getStateAtHeight(height)
if state != blockStateNew {
return fmt.Errorf("block %d should be in blockStateNew but is %s", height, state)
@@ -472,7 +471,7 @@ func (sc *scheduler) nextHeightToSchedule() int64 {
return min
}
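Reading markPending and markReceived together, each height moves through a small state machine. The reconstruction below is mine from the hunks shown: blockStateNew, blockStatePending and blockStateReceived appear in the diff, while blockStateProcessed is assumed as the terminal state.

package main

import "fmt"

type blockState int

const (
    blockStateUnknown   blockState = iota
    blockStateNew       // height known, no request outstanding
    blockStatePending   // request sent to a peer (markPending)
    blockStateReceived  // block arrived from that peer (markReceived)
    blockStateProcessed // assumed: block applied and finalized
)

func (s blockState) String() string {
    return [...]string{"Unknown", "New", "Pending", "Received", "Processed"}[s]
}

func main() {
    // a failure while Pending or Received resets the height to New so
    // the scheduler can try another peer
    fmt.Println(blockStateNew, "->", blockStatePending, "->", blockStateReceived, "->", blockStateProcessed)
}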
func (sc *scheduler) pendingFrom(peerID p2p.NodeID) []int64 {
func (sc *scheduler) pendingFrom(peerID types.NodeID) []int64 {
var heights []int64
for height, pendingPeerID := range sc.pendingBlocks {
if pendingPeerID == peerID {
@@ -482,7 +481,7 @@ func (sc *scheduler) pendingFrom(peerID p2p.NodeID) []int64 {
return heights
}
func (sc *scheduler) selectPeer(height int64) (p2p.NodeID, error) {
func (sc *scheduler) selectPeer(height int64) (types.NodeID, error) {
peers := sc.getPeersWithHeight(height)
if len(peers) == 0 {
return "", fmt.Errorf("cannot find peer for height %d", height)
@@ -490,7 +489,7 @@ func (sc *scheduler) selectPeer(height int64) (p2p.NodeID, error) {
// create a map from number of pending requests to a list
// of peers having that number of pending requests.
pendingFrom := make(map[int][]p2p.NodeID)
pendingFrom := make(map[int][]types.NodeID)
for _, peerID := range peers {
numPending := len(sc.pendingFrom(peerID))
pendingFrom[numPending] = append(pendingFrom[numPending], peerID)
@@ -509,7 +508,7 @@ func (sc *scheduler) selectPeer(height int64) (p2p.NodeID, error) {
}
// PeerByID is a list of peers sorted by peerID.
type PeerByID []p2p.NodeID
type PeerByID []types.NodeID
func (peers PeerByID) Len() int {
return len(peers)
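Taken together, the selectPeer hunks and PeerByID describe the scheduling policy: bucket the candidate peers by how many requests are already pending on each, take the least-loaded bucket, then sort it by peer ID so the choice is deterministic across runs. A compact sketch with plain strings:

package main

import (
    "fmt"
    "sort"
)

// selectPeer picks the least-loaded candidate, breaking ties by sorted
// peer ID (the role PeerByID plays in the diff).
func selectPeer(pending map[string]int, peers []string) string {
    byLoad := map[int][]string{}
    minLoad := -1
    for _, p := range peers {
        n := pending[p]
        byLoad[n] = append(byLoad[n], p)
        if minLoad == -1 || n < minLoad {
            minLoad = n
        }
    }
    best := byLoad[minLoad]
    sort.Strings(best)
    return best[0]
}

func main() {
    pending := map[string]int{"P1": 2, "P2": 0, "P3": 0}
    fmt.Println(selectPeer(pending, []string{"P1", "P2", "P3"})) // P2
}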


@@ -10,10 +10,8 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/metadata"
"github.com/tendermint/tendermint/pkg/p2p"
"github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
type scTestParams struct {
@@ -21,9 +19,9 @@ type scTestParams struct {
initHeight int64
height int64
allB []int64
pending map[int64]p2p.NodeID
pending map[int64]types.NodeID
pendingTime map[int64]time.Time
received map[int64]p2p.NodeID
received map[int64]types.NodeID
peerTimeout time.Duration
minRecvRate int64
targetPending int
@@ -42,7 +40,7 @@ func verifyScheduler(sc *scheduler) {
}
func newTestScheduler(params scTestParams) *scheduler {
peers := make(map[p2p.NodeID]*scPeer)
peers := make(map[types.NodeID]*scPeer)
var maxHeight int64
initHeight := params.initHeight
@@ -55,8 +53,8 @@ func newTestScheduler(params scTestParams) *scheduler {
}
for id, peer := range params.peers {
peer.peerID = p2p.NodeID(id)
peers[p2p.NodeID(id)] = peer
peer.peerID = types.NodeID(id)
peers[types.NodeID(id)] = peer
if maxHeight < peer.height {
maxHeight = peer.height
}
@@ -123,7 +121,7 @@ func TestScMaxHeights(t *testing.T) {
name: "one ready peer",
sc: scheduler{
height: 3,
peers: map[p2p.NodeID]*scPeer{"P1": {height: 6, state: peerStateReady}},
peers: map[types.NodeID]*scPeer{"P1": {height: 6, state: peerStateReady}},
},
wantMax: 6,
},
@@ -131,7 +129,7 @@ func TestScMaxHeights(t *testing.T) {
name: "ready and removed peers",
sc: scheduler{
height: 1,
peers: map[p2p.NodeID]*scPeer{
peers: map[types.NodeID]*scPeer{
"P1": {height: 4, state: peerStateReady},
"P2": {height: 10, state: peerStateRemoved}},
},
@@ -141,7 +139,7 @@ func TestScMaxHeights(t *testing.T) {
name: "removed peers",
sc: scheduler{
height: 1,
peers: map[p2p.NodeID]*scPeer{
peers: map[types.NodeID]*scPeer{
"P1": {height: 4, state: peerStateRemoved},
"P2": {height: 10, state: peerStateRemoved}},
},
@@ -151,7 +149,7 @@ func TestScMaxHeights(t *testing.T) {
name: "new peers",
sc: scheduler{
height: 1,
peers: map[p2p.NodeID]*scPeer{
peers: map[types.NodeID]*scPeer{
"P1": {base: -1, height: -1, state: peerStateNew},
"P2": {base: -1, height: -1, state: peerStateNew}},
},
@@ -161,7 +159,7 @@ func TestScMaxHeights(t *testing.T) {
name: "mixed peers",
sc: scheduler{
height: 1,
peers: map[p2p.NodeID]*scPeer{
peers: map[types.NodeID]*scPeer{
"P1": {height: -1, state: peerStateNew},
"P2": {height: 10, state: peerStateReady},
"P3": {height: 20, state: peerStateRemoved},
@@ -188,7 +186,7 @@ func TestScMaxHeights(t *testing.T) {
func TestScEnsurePeer(t *testing.T) {
type args struct {
peerID p2p.NodeID
peerID types.NodeID
}
tests := []struct {
name string
@@ -245,7 +243,7 @@ func TestScTouchPeer(t *testing.T) {
now := time.Now()
type args struct {
peerID p2p.NodeID
peerID types.NodeID
time time.Time
}
@@ -317,13 +315,13 @@ func TestScPrunablePeers(t *testing.T) {
name string
fields scTestParams
args args
wantResult []p2p.NodeID
wantResult []types.NodeID
}{
{
name: "no peers",
fields: scTestParams{peers: map[string]*scPeer{}},
args: args{threshold: time.Second, time: now.Add(time.Second + time.Millisecond), minSpeed: 100},
wantResult: []p2p.NodeID{},
wantResult: []types.NodeID{},
},
{
name: "mixed peers",
@@ -342,7 +340,7 @@ func TestScPrunablePeers(t *testing.T) {
"P6": {state: peerStateReady, lastTouched: now.Add(time.Second), lastRate: 90},
}},
args: args{threshold: time.Second, time: now.Add(time.Second + time.Millisecond), minSpeed: 100},
wantResult: []p2p.NodeID{"P4", "P5", "P6"},
wantResult: []types.NodeID{"P4", "P5", "P6"},
},
}
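The table above encodes the pruning rule: a Ready peer becomes prunable once it has been silent longer than the threshold, or once its last measured receive rate falls below the minimum. A sketch of that predicate; the exact handling of a zero (unmeasured) rate is an assumption:

package main

import (
    "fmt"
    "time"
)

// prunable reconstructs the rule the test table exercises; the real
// scheduler additionally skips peers that are not peerStateReady.
func prunable(lastTouched, now time.Time, threshold time.Duration, lastRate, minSpeed int64) bool {
    if now.Sub(lastTouched) > threshold {
        return true // timed out
    }
    return lastRate != 0 && lastRate < minSpeed // measurably too slow
}

func main() {
    now := time.Now()
    // mirrors "P6" above: recently touched but below the minimum speed
    fmt.Println(prunable(now, now.Add(time.Millisecond), time.Second, 90, 100)) // true
}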
@@ -362,7 +360,7 @@ func TestScPrunablePeers(t *testing.T) {
func TestScRemovePeer(t *testing.T) {
type args struct {
peerID p2p.NodeID
peerID types.NodeID
}
tests := []struct {
name string
@@ -425,13 +423,13 @@ func TestScRemovePeer(t *testing.T) {
fields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 3, state: peerStateReady}},
allB: []int64{1, 2, 3},
pending: map[int64]p2p.NodeID{1: "P1"},
pending: map[int64]types.NodeID{1: "P1"},
},
args: args{peerID: "P1"},
wantFields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 3, state: peerStateRemoved}},
allB: []int64{},
pending: map[int64]p2p.NodeID{},
pending: map[int64]types.NodeID{},
},
},
{
@@ -439,13 +437,13 @@ func TestScRemovePeer(t *testing.T) {
fields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 3, state: peerStateReady}},
allB: []int64{1, 2, 3},
received: map[int64]p2p.NodeID{1: "P1"},
received: map[int64]types.NodeID{1: "P1"},
},
args: args{peerID: "P1"},
wantFields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 3, state: peerStateRemoved}},
allB: []int64{},
received: map[int64]p2p.NodeID{},
received: map[int64]types.NodeID{},
},
},
{
@@ -453,15 +451,15 @@ func TestScRemovePeer(t *testing.T) {
fields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 4, state: peerStateReady}},
allB: []int64{1, 2, 3, 4},
pending: map[int64]p2p.NodeID{1: "P1", 3: "P1"},
received: map[int64]p2p.NodeID{2: "P1", 4: "P1"},
pending: map[int64]types.NodeID{1: "P1", 3: "P1"},
received: map[int64]types.NodeID{2: "P1", 4: "P1"},
},
args: args{peerID: "P1"},
wantFields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 4, state: peerStateRemoved}},
allB: []int64{},
pending: map[int64]p2p.NodeID{},
received: map[int64]p2p.NodeID{},
pending: map[int64]types.NodeID{},
received: map[int64]types.NodeID{},
},
},
{
@@ -472,8 +470,8 @@ func TestScRemovePeer(t *testing.T) {
"P2": {height: 6, state: peerStateReady},
},
allB: []int64{1, 2, 3, 4, 5, 6},
pending: map[int64]p2p.NodeID{1: "P1", 3: "P2", 6: "P1"},
received: map[int64]p2p.NodeID{2: "P1", 4: "P2", 5: "P2"},
pending: map[int64]types.NodeID{1: "P1", 3: "P2", 6: "P1"},
received: map[int64]types.NodeID{2: "P1", 4: "P2", 5: "P2"},
},
args: args{peerID: "P1"},
wantFields: scTestParams{
@@ -482,8 +480,8 @@ func TestScRemovePeer(t *testing.T) {
"P2": {height: 6, state: peerStateReady},
},
allB: []int64{1, 2, 3, 4, 5, 6},
pending: map[int64]p2p.NodeID{3: "P2"},
received: map[int64]p2p.NodeID{4: "P2", 5: "P2"},
pending: map[int64]types.NodeID{3: "P2"},
received: map[int64]types.NodeID{4: "P2", 5: "P2"},
},
},
}
@@ -502,7 +500,7 @@ func TestScRemovePeer(t *testing.T) {
func TestScSetPeerRange(t *testing.T) {
type args struct {
peerID p2p.NodeID
peerID types.NodeID
base int64
height int64
}
@@ -623,25 +621,25 @@ func TestScGetPeersWithHeight(t *testing.T) {
name string
fields scTestParams
args args
wantResult []p2p.NodeID
wantResult []types.NodeID
}{
{
name: "no peers",
fields: scTestParams{peers: map[string]*scPeer{}},
args: args{height: 10},
wantResult: []p2p.NodeID{},
wantResult: []types.NodeID{},
},
{
name: "only new peers",
fields: scTestParams{peers: map[string]*scPeer{"P1": {height: -1, state: peerStateNew}}},
args: args{height: 10},
wantResult: []p2p.NodeID{},
wantResult: []types.NodeID{},
},
{
name: "only Removed peers",
fields: scTestParams{peers: map[string]*scPeer{"P1": {height: 4, state: peerStateRemoved}}},
args: args{height: 2},
wantResult: []p2p.NodeID{},
wantResult: []types.NodeID{},
},
{
name: "one Ready shorter peer",
@@ -650,7 +648,7 @@ func TestScGetPeersWithHeight(t *testing.T) {
allB: []int64{1, 2, 3, 4},
},
args: args{height: 5},
wantResult: []p2p.NodeID{},
wantResult: []types.NodeID{},
},
{
name: "one Ready equal peer",
@@ -659,7 +657,7 @@ func TestScGetPeersWithHeight(t *testing.T) {
allB: []int64{1, 2, 3, 4},
},
args: args{height: 4},
wantResult: []p2p.NodeID{"P1"},
wantResult: []types.NodeID{"P1"},
},
{
name: "one Ready higher peer",
@@ -669,7 +667,7 @@ func TestScGetPeersWithHeight(t *testing.T) {
allB: []int64{1, 2, 3, 4},
},
args: args{height: 4},
wantResult: []p2p.NodeID{"P1"},
wantResult: []types.NodeID{"P1"},
},
{
name: "one Ready higher peer at base",
@@ -679,7 +677,7 @@ func TestScGetPeersWithHeight(t *testing.T) {
allB: []int64{1, 2, 3, 4},
},
args: args{height: 4},
wantResult: []p2p.NodeID{"P1"},
wantResult: []types.NodeID{"P1"},
},
{
name: "one Ready higher peer with higher base",
@@ -689,7 +687,7 @@ func TestScGetPeersWithHeight(t *testing.T) {
allB: []int64{1, 2, 3, 4},
},
args: args{height: 4},
wantResult: []p2p.NodeID{},
wantResult: []types.NodeID{},
},
{
name: "multiple mixed peers",
@@ -704,7 +702,7 @@ func TestScGetPeersWithHeight(t *testing.T) {
allB: []int64{8, 9, 10, 11},
},
args: args{height: 8},
wantResult: []p2p.NodeID{"P2", "P5"},
wantResult: []types.NodeID{"P2", "P5"},
},
}
@@ -726,7 +724,7 @@ func TestScMarkPending(t *testing.T) {
now := time.Now()
type args struct {
peerID p2p.NodeID
peerID types.NodeID
height int64
tm time.Time
}
@@ -822,14 +820,14 @@ func TestScMarkPending(t *testing.T) {
fields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 2, state: peerStateReady}},
allB: []int64{1, 2},
pending: map[int64]p2p.NodeID{1: "P1"},
pending: map[int64]types.NodeID{1: "P1"},
pendingTime: map[int64]time.Time{1: now},
},
args: args{peerID: "P1", height: 2, tm: now.Add(time.Millisecond)},
wantFields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 2, state: peerStateReady}},
allB: []int64{1, 2},
pending: map[int64]p2p.NodeID{1: "P1", 2: "P1"},
pending: map[int64]types.NodeID{1: "P1", 2: "P1"},
pendingTime: map[int64]time.Time{1: now, 2: now.Add(time.Millisecond)},
},
},
@@ -852,7 +850,7 @@ func TestScMarkReceived(t *testing.T) {
now := time.Now()
type args struct {
peerID p2p.NodeID
peerID types.NodeID
height int64
size int64
tm time.Time
@@ -892,7 +890,7 @@ func TestScMarkReceived(t *testing.T) {
"P2": {height: 4, state: peerStateReady},
},
allB: []int64{1, 2, 3, 4},
pending: map[int64]p2p.NodeID{1: "P1", 2: "P2", 3: "P2", 4: "P1"},
pending: map[int64]types.NodeID{1: "P1", 2: "P2", 3: "P2", 4: "P1"},
},
args: args{peerID: "P1", height: 2, size: 1000, tm: now},
wantFields: scTestParams{
@@ -901,7 +899,7 @@ func TestScMarkReceived(t *testing.T) {
"P2": {height: 4, state: peerStateReady},
},
allB: []int64{1, 2, 3, 4},
pending: map[int64]p2p.NodeID{1: "P1", 2: "P2", 3: "P2", 4: "P1"},
pending: map[int64]types.NodeID{1: "P1", 2: "P2", 3: "P2", 4: "P1"},
},
wantErr: true,
},
@@ -910,13 +908,13 @@ func TestScMarkReceived(t *testing.T) {
fields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 4, state: peerStateReady}},
allB: []int64{1, 2, 3, 4},
pending: map[int64]p2p.NodeID{},
pending: map[int64]types.NodeID{},
},
args: args{peerID: "P1", height: 2, size: 1000, tm: now},
wantFields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 4, state: peerStateReady}},
allB: []int64{1, 2, 3, 4},
pending: map[int64]p2p.NodeID{},
pending: map[int64]types.NodeID{},
},
wantErr: true,
},
@@ -925,14 +923,14 @@ func TestScMarkReceived(t *testing.T) {
fields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 2, state: peerStateReady}},
allB: []int64{1, 2},
pending: map[int64]p2p.NodeID{1: "P1", 2: "P1"},
pending: map[int64]types.NodeID{1: "P1", 2: "P1"},
pendingTime: map[int64]time.Time{1: now, 2: now.Add(time.Second)},
},
args: args{peerID: "P1", height: 2, size: 1000, tm: now},
wantFields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 2, state: peerStateReady}},
allB: []int64{1, 2},
pending: map[int64]p2p.NodeID{1: "P1", 2: "P1"},
pending: map[int64]types.NodeID{1: "P1", 2: "P1"},
pendingTime: map[int64]time.Time{1: now, 2: now.Add(time.Second)},
},
wantErr: true,
@@ -942,16 +940,16 @@ func TestScMarkReceived(t *testing.T) {
fields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 2, state: peerStateReady}},
allB: []int64{1, 2},
pending: map[int64]p2p.NodeID{1: "P1", 2: "P1"},
pending: map[int64]types.NodeID{1: "P1", 2: "P1"},
pendingTime: map[int64]time.Time{1: now, 2: now},
},
args: args{peerID: "P1", height: 2, size: 1000, tm: now.Add(time.Millisecond)},
wantFields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 2, state: peerStateReady}},
allB: []int64{1, 2},
pending: map[int64]p2p.NodeID{1: "P1"},
pending: map[int64]types.NodeID{1: "P1"},
pendingTime: map[int64]time.Time{1: now},
received: map[int64]p2p.NodeID{2: "P1"},
received: map[int64]types.NodeID{2: "P1"},
},
},
}
@@ -992,7 +990,7 @@ func TestScMarkProcessed(t *testing.T) {
height: 2,
peers: map[string]*scPeer{"P1": {height: 4, state: peerStateReady}},
allB: []int64{2},
pending: map[int64]p2p.NodeID{2: "P1"},
pending: map[int64]types.NodeID{2: "P1"},
pendingTime: map[int64]time.Time{2: now},
targetPending: 1,
},
@@ -1010,15 +1008,15 @@ func TestScMarkProcessed(t *testing.T) {
height: 1,
peers: map[string]*scPeer{"P1": {height: 2, state: peerStateReady}},
allB: []int64{1, 2},
pending: map[int64]p2p.NodeID{2: "P1"},
pending: map[int64]types.NodeID{2: "P1"},
pendingTime: map[int64]time.Time{2: now},
received: map[int64]p2p.NodeID{1: "P1"}},
received: map[int64]types.NodeID{1: "P1"}},
args: args{height: 1},
wantFields: scTestParams{
height: 2,
peers: map[string]*scPeer{"P1": {height: 2, state: peerStateReady}},
allB: []int64{2},
pending: map[int64]p2p.NodeID{2: "P1"},
pending: map[int64]types.NodeID{2: "P1"},
pendingTime: map[int64]time.Time{2: now}},
},
}
@@ -1102,7 +1100,7 @@ func TestScAllBlocksProcessed(t *testing.T) {
fields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 4, state: peerStateReady}},
allB: []int64{1, 2, 3, 4},
pending: map[int64]p2p.NodeID{1: "P1", 2: "P1", 3: "P1", 4: "P1"},
pending: map[int64]types.NodeID{1: "P1", 2: "P1", 3: "P1", 4: "P1"},
pendingTime: map[int64]time.Time{1: now, 2: now, 3: now, 4: now},
},
wantResult: false,
@@ -1112,7 +1110,7 @@ func TestScAllBlocksProcessed(t *testing.T) {
fields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 4, state: peerStateReady}},
allB: []int64{1, 2, 3, 4},
received: map[int64]p2p.NodeID{1: "P1", 2: "P1", 3: "P1", 4: "P1"},
received: map[int64]types.NodeID{1: "P1", 2: "P1", 3: "P1", 4: "P1"},
},
wantResult: false,
},
@@ -1123,7 +1121,7 @@ func TestScAllBlocksProcessed(t *testing.T) {
peers: map[string]*scPeer{
"P1": {height: 4, state: peerStateReady}},
allB: []int64{4},
received: map[int64]p2p.NodeID{4: "P1"},
received: map[int64]types.NodeID{4: "P1"},
},
wantResult: true,
},
@@ -1132,7 +1130,7 @@ func TestScAllBlocksProcessed(t *testing.T) {
fields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 4, state: peerStateReady}},
allB: []int64{1, 2, 3, 4},
pending: map[int64]p2p.NodeID{2: "P1", 4: "P1"},
pending: map[int64]types.NodeID{2: "P1", 4: "P1"},
pendingTime: map[int64]time.Time{2: now, 4: now},
},
wantResult: false,
@@ -1180,7 +1178,7 @@ func TestScNextHeightToSchedule(t *testing.T) {
initHeight: 1,
peers: map[string]*scPeer{"P1": {height: 4, state: peerStateReady}},
allB: []int64{1, 2, 3, 4},
pending: map[int64]p2p.NodeID{1: "P1", 2: "P1", 3: "P1", 4: "P1"},
pending: map[int64]types.NodeID{1: "P1", 2: "P1", 3: "P1", 4: "P1"},
pendingTime: map[int64]time.Time{1: now, 2: now, 3: now, 4: now},
},
wantHeight: -1,
@@ -1191,7 +1189,7 @@ func TestScNextHeightToSchedule(t *testing.T) {
initHeight: 1,
peers: map[string]*scPeer{"P1": {height: 4, state: peerStateReady}},
allB: []int64{1, 2, 3, 4},
received: map[int64]p2p.NodeID{1: "P1", 2: "P1", 3: "P1", 4: "P1"},
received: map[int64]types.NodeID{1: "P1", 2: "P1", 3: "P1", 4: "P1"},
},
wantHeight: -1,
},
@@ -1210,7 +1208,7 @@ func TestScNextHeightToSchedule(t *testing.T) {
initHeight: 1,
peers: map[string]*scPeer{"P1": {height: 4, state: peerStateReady}},
allB: []int64{1, 2, 3, 4},
pending: map[int64]p2p.NodeID{2: "P1"},
pending: map[int64]types.NodeID{2: "P1"},
pendingTime: map[int64]time.Time{2: now},
},
wantHeight: 1,
@@ -1240,7 +1238,7 @@ func TestScSelectPeer(t *testing.T) {
name string
fields scTestParams
args args
wantResult p2p.NodeID
wantResult types.NodeID
wantError bool
}{
{
@@ -1308,7 +1306,7 @@ func TestScSelectPeer(t *testing.T) {
"P1": {height: 8, state: peerStateReady},
"P2": {height: 9, state: peerStateReady}},
allB: []int64{4, 5, 6, 7, 8, 9},
pending: map[int64]p2p.NodeID{
pending: map[int64]types.NodeID{
4: "P1", 6: "P1",
5: "P2",
},
@@ -1324,7 +1322,7 @@ func TestScSelectPeer(t *testing.T) {
"P1": {height: 15, state: peerStateReady},
"P3": {height: 15, state: peerStateReady}},
allB: []int64{1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
pending: map[int64]p2p.NodeID{
pending: map[int64]types.NodeID{
1: "P1", 2: "P1",
3: "P3", 4: "P3",
5: "P2", 6: "P2",
@@ -1350,8 +1348,8 @@ func TestScSelectPeer(t *testing.T) {
}
// makeScBlock makes an empty block.
func makeScBlock(height int64) *block.Block {
return &block.Block{Header: metadata.Header{Height: height}}
func makeScBlock(height int64) *types.Block {
return &types.Block{Header: types.Header{Height: height}}
}
// used in place of assert.Equal(t, want, actual) to avoid failures due to
@@ -1393,7 +1391,7 @@ func TestScHandleBlockResponse(t *testing.T) {
now := time.Now()
block6FromP1 := bcBlockResponse{
time: now.Add(time.Millisecond),
peerID: p2p.NodeID("P1"),
peerID: types.NodeID("P1"),
size: 100,
block: makeScBlock(6),
}
@@ -1434,7 +1432,7 @@ func TestScHandleBlockResponse(t *testing.T) {
fields: scTestParams{
peers: map[string]*scPeer{"P2": {height: 8, state: peerStateReady}},
allB: []int64{1, 2, 3, 4, 5, 6, 7, 8},
pending: map[int64]p2p.NodeID{6: "P2"},
pending: map[int64]types.NodeID{6: "P2"},
pendingTime: map[int64]time.Time{6: now},
},
args: args{event: block6FromP1},
@@ -1445,7 +1443,7 @@ func TestScHandleBlockResponse(t *testing.T) {
fields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 8, state: peerStateReady}},
allB: []int64{1, 2, 3, 4, 5, 6, 7, 8},
pending: map[int64]p2p.NodeID{6: "P1"},
pending: map[int64]types.NodeID{6: "P1"},
pendingTime: map[int64]time.Time{6: now.Add(time.Second)},
},
args: args{event: block6FromP1},
@@ -1456,7 +1454,7 @@ func TestScHandleBlockResponse(t *testing.T) {
fields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 8, state: peerStateReady}},
allB: []int64{1, 2, 3, 4, 5, 6, 7, 8},
pending: map[int64]p2p.NodeID{6: "P1"},
pending: map[int64]types.NodeID{6: "P1"},
pendingTime: map[int64]time.Time{6: now},
},
args: args{event: block6FromP1},
@@ -1478,7 +1476,7 @@ func TestScHandleNoBlockResponse(t *testing.T) {
now := time.Now()
noBlock6FromP1 := bcNoBlockResponse{
time: now.Add(time.Millisecond),
peerID: p2p.NodeID("P1"),
peerID: types.NodeID("P1"),
height: 6,
}
@@ -1514,14 +1512,14 @@ func TestScHandleNoBlockResponse(t *testing.T) {
fields: scTestParams{
peers: map[string]*scPeer{"P2": {height: 8, state: peerStateReady}},
allB: []int64{1, 2, 3, 4, 5, 6, 7, 8},
pending: map[int64]p2p.NodeID{6: "P2"},
pending: map[int64]types.NodeID{6: "P2"},
pendingTime: map[int64]time.Time{6: now},
},
wantEvent: noOpEvent{},
wantFields: scTestParams{
peers: map[string]*scPeer{"P2": {height: 8, state: peerStateReady}},
allB: []int64{1, 2, 3, 4, 5, 6, 7, 8},
pending: map[int64]p2p.NodeID{6: "P2"},
pending: map[int64]types.NodeID{6: "P2"},
pendingTime: map[int64]time.Time{6: now},
},
},
@@ -1530,7 +1528,7 @@ func TestScHandleNoBlockResponse(t *testing.T) {
fields: scTestParams{
peers: map[string]*scPeer{"P1": {height: 8, state: peerStateReady}},
allB: []int64{1, 2, 3, 4, 5, 6, 7, 8},
pending: map[int64]p2p.NodeID{6: "P1"},
pending: map[int64]types.NodeID{6: "P1"},
pendingTime: map[int64]time.Time{6: now},
},
wantEvent: scPeerError{peerID: "P1", reason: fmt.Errorf("some error")},
@@ -1553,7 +1551,7 @@ func TestScHandleNoBlockResponse(t *testing.T) {
func TestScHandleBlockProcessed(t *testing.T) {
now := time.Now()
processed6FromP1 := pcBlockProcessed{
peerID: p2p.NodeID("P1"),
peerID: types.NodeID("P1"),
height: 6,
}
@@ -1580,7 +1578,7 @@ func TestScHandleBlockProcessed(t *testing.T) {
initHeight: 6,
peers: map[string]*scPeer{"P1": {height: 8, state: peerStateReady}},
allB: []int64{6, 7, 8},
pending: map[int64]p2p.NodeID{6: "P1"},
pending: map[int64]types.NodeID{6: "P1"},
pendingTime: map[int64]time.Time{6: now},
},
args: args{event: processed6FromP1},
@@ -1592,7 +1590,7 @@ func TestScHandleBlockProcessed(t *testing.T) {
initHeight: 6,
peers: map[string]*scPeer{"P1": {height: 7, state: peerStateReady}},
allB: []int64{6, 7},
received: map[int64]p2p.NodeID{6: "P1", 7: "P1"},
received: map[int64]types.NodeID{6: "P1", 7: "P1"},
},
args: args{event: processed6FromP1},
wantEvent: scFinishedEv{},
@@ -1603,8 +1601,8 @@ func TestScHandleBlockProcessed(t *testing.T) {
initHeight: 6,
peers: map[string]*scPeer{"P1": {height: 8, state: peerStateReady}},
allB: []int64{6, 7, 8},
pending: map[int64]p2p.NodeID{7: "P1", 8: "P1"},
received: map[int64]p2p.NodeID{6: "P1"},
pending: map[int64]types.NodeID{7: "P1", 8: "P1"},
received: map[int64]types.NodeID{6: "P1"},
},
args: args{event: processed6FromP1},
wantEvent: noOpEvent{},
@@ -1647,7 +1645,7 @@ func TestScHandleBlockVerificationFailure(t *testing.T) {
initHeight: 6,
peers: map[string]*scPeer{"P1": {height: 8, state: peerStateReady}},
allB: []int64{6, 7, 8},
pending: map[int64]p2p.NodeID{6: "P1"},
pending: map[int64]types.NodeID{6: "P1"},
pendingTime: map[int64]time.Time{6: now},
},
args: args{event: pcBlockVerificationFailure{height: 10, firstPeerID: "P1", secondPeerID: "P1"}},
@@ -1659,7 +1657,7 @@ func TestScHandleBlockVerificationFailure(t *testing.T) {
initHeight: 6,
peers: map[string]*scPeer{"P1": {height: 8, state: peerStateReady}, "P2": {height: 8, state: peerStateReady}},
allB: []int64{6, 7, 8},
pending: map[int64]p2p.NodeID{6: "P1"},
pending: map[int64]types.NodeID{6: "P1"},
pendingTime: map[int64]time.Time{6: now},
},
args: args{event: pcBlockVerificationFailure{height: 10, firstPeerID: "P1", secondPeerID: "P1"}},
@@ -1671,7 +1669,7 @@ func TestScHandleBlockVerificationFailure(t *testing.T) {
initHeight: 6,
peers: map[string]*scPeer{"P1": {height: 7, state: peerStateReady}},
allB: []int64{6, 7},
received: map[int64]p2p.NodeID{6: "P1", 7: "P1"},
received: map[int64]types.NodeID{6: "P1", 7: "P1"},
},
args: args{event: pcBlockVerificationFailure{height: 7, firstPeerID: "P1", secondPeerID: "P1"}},
wantEvent: scFinishedEv{},
@@ -1682,8 +1680,8 @@ func TestScHandleBlockVerificationFailure(t *testing.T) {
initHeight: 5,
peers: map[string]*scPeer{"P1": {height: 8, state: peerStateReady}, "P2": {height: 8, state: peerStateReady}},
allB: []int64{5, 6, 7, 8},
pending: map[int64]p2p.NodeID{7: "P1", 8: "P1"},
received: map[int64]p2p.NodeID{5: "P1", 6: "P1"},
pending: map[int64]types.NodeID{7: "P1", 8: "P1"},
received: map[int64]types.NodeID{5: "P1", 6: "P1"},
},
args: args{event: pcBlockVerificationFailure{height: 5, firstPeerID: "P1", secondPeerID: "P1"}},
wantEvent: noOpEvent{},
@@ -1698,8 +1696,8 @@ func TestScHandleBlockVerificationFailure(t *testing.T) {
"P3": {height: 8, state: peerStateReady},
},
allB: []int64{5, 6, 7, 8},
pending: map[int64]p2p.NodeID{7: "P1", 8: "P1"},
received: map[int64]p2p.NodeID{5: "P1", 6: "P1"},
pending: map[int64]types.NodeID{7: "P1", 8: "P1"},
received: map[int64]types.NodeID{5: "P1", 6: "P1"},
},
args: args{event: pcBlockVerificationFailure{height: 5, firstPeerID: "P1", secondPeerID: "P2"}},
wantEvent: noOpEvent{},
@@ -1718,7 +1716,7 @@ func TestScHandleBlockVerificationFailure(t *testing.T) {
func TestScHandleAddNewPeer(t *testing.T) {
addP1 := bcAddNewPeer{
peerID: p2p.NodeID("P1"),
peerID: types.NodeID("P1"),
}
type args struct {
event bcAddNewPeer
@@ -1829,7 +1827,7 @@ func TestScHandleTryPrunePeer(t *testing.T) {
allB: []int64{1, 2, 3, 4, 5, 6, 7},
peerTimeout: time.Second},
args: args{event: pruneEv},
wantEvent: scPeersPruned{peers: []p2p.NodeID{"P4", "P5", "P6"}},
wantEvent: scPeersPruned{peers: []types.NodeID{"P4", "P5", "P6"}},
},
{
name: "mixed peers, finish after pruning",
@@ -1927,7 +1925,7 @@ func TestScHandleTrySchedule(t *testing.T) {
"P1": {height: 4, state: peerStateReady},
"P2": {height: 5, state: peerStateReady}},
allB: []int64{1, 2, 3, 4, 5},
pending: map[int64]p2p.NodeID{
pending: map[int64]types.NodeID{
1: "P1", 2: "P1",
3: "P2",
},
@@ -1945,7 +1943,7 @@ func TestScHandleTrySchedule(t *testing.T) {
"P1": {height: 8, state: peerStateReady},
"P3": {height: 8, state: peerStateReady}},
allB: []int64{1, 2, 3, 4, 5, 6, 7, 8},
pending: map[int64]p2p.NodeID{
pending: map[int64]types.NodeID{
1: "P1", 2: "P1",
3: "P3", 4: "P3",
5: "P2", 6: "P2",
@@ -2107,7 +2105,7 @@ func TestScHandle(t *testing.T) {
startTime: now,
peers: map[string]*scPeer{"P1": {height: 3, state: peerStateReady}},
allB: []int64{1, 2, 3},
pending: map[int64]p2p.NodeID{1: "P1"},
pending: map[int64]types.NodeID{1: "P1"},
pendingTime: map[int64]time.Time{1: tick[1]},
height: 1,
},
@@ -2119,7 +2117,7 @@ func TestScHandle(t *testing.T) {
startTime: now,
peers: map[string]*scPeer{"P1": {height: 3, state: peerStateReady}},
allB: []int64{1, 2, 3},
pending: map[int64]p2p.NodeID{1: "P1", 2: "P1"},
pending: map[int64]types.NodeID{1: "P1", 2: "P1"},
pendingTime: map[int64]time.Time{1: tick[1], 2: tick[2]},
height: 1,
},
@@ -2131,7 +2129,7 @@ func TestScHandle(t *testing.T) {
startTime: now,
peers: map[string]*scPeer{"P1": {height: 3, state: peerStateReady}},
allB: []int64{1, 2, 3},
pending: map[int64]p2p.NodeID{1: "P1", 2: "P1", 3: "P1"},
pending: map[int64]types.NodeID{1: "P1", 2: "P1", 3: "P1"},
pendingTime: map[int64]time.Time{1: tick[1], 2: tick[2], 3: tick[3]},
height: 1,
},
@@ -2143,9 +2141,9 @@ func TestScHandle(t *testing.T) {
startTime: now,
peers: map[string]*scPeer{"P1": {height: 3, state: peerStateReady, lastTouched: tick[4]}},
allB: []int64{1, 2, 3},
pending: map[int64]p2p.NodeID{2: "P1", 3: "P1"},
pending: map[int64]types.NodeID{2: "P1", 3: "P1"},
pendingTime: map[int64]time.Time{2: tick[2], 3: tick[3]},
received: map[int64]p2p.NodeID{1: "P1"},
received: map[int64]types.NodeID{1: "P1"},
height: 1,
},
},
@@ -2156,9 +2154,9 @@ func TestScHandle(t *testing.T) {
startTime: now,
peers: map[string]*scPeer{"P1": {height: 3, state: peerStateReady, lastTouched: tick[5]}},
allB: []int64{1, 2, 3},
pending: map[int64]p2p.NodeID{3: "P1"},
pending: map[int64]types.NodeID{3: "P1"},
pendingTime: map[int64]time.Time{3: tick[3]},
received: map[int64]p2p.NodeID{1: "P1", 2: "P1"},
received: map[int64]types.NodeID{1: "P1", 2: "P1"},
height: 1,
},
},
@@ -2169,29 +2167,29 @@ func TestScHandle(t *testing.T) {
startTime: now,
peers: map[string]*scPeer{"P1": {height: 3, state: peerStateReady, lastTouched: tick[6]}},
allB: []int64{1, 2, 3},
received: map[int64]p2p.NodeID{1: "P1", 2: "P1", 3: "P1"},
received: map[int64]types.NodeID{1: "P1", 2: "P1", 3: "P1"},
height: 1,
},
},
{ // processed block 1
args: args{event: pcBlockProcessed{peerID: p2p.NodeID("P1"), height: 1}},
args: args{event: pcBlockProcessed{peerID: types.NodeID("P1"), height: 1}},
wantEvent: noOpEvent{},
wantSc: &scTestParams{
startTime: now,
peers: map[string]*scPeer{"P1": {height: 3, state: peerStateReady, lastTouched: tick[6]}},
allB: []int64{2, 3},
received: map[int64]p2p.NodeID{2: "P1", 3: "P1"},
received: map[int64]types.NodeID{2: "P1", 3: "P1"},
height: 2,
},
},
{ // processed block 2
args: args{event: pcBlockProcessed{peerID: p2p.NodeID("P1"), height: 2}},
args: args{event: pcBlockProcessed{peerID: types.NodeID("P1"), height: 2}},
wantEvent: scFinishedEv{},
wantSc: &scTestParams{
startTime: now,
peers: map[string]*scPeer{"P1": {height: 3, state: peerStateReady, lastTouched: tick[6]}},
allB: []int64{3},
received: map[int64]p2p.NodeID{3: "P1"},
received: map[int64]types.NodeID{3: "P1"},
height: 3,
},
},
@@ -2207,7 +2205,7 @@ func TestScHandle(t *testing.T) {
"P1": {height: 4, state: peerStateReady, lastTouched: tick[6]},
"P2": {height: 3, state: peerStateReady, lastTouched: tick[6]}},
allB: []int64{1, 2, 3, 4},
received: map[int64]p2p.NodeID{1: "P1", 2: "P1", 3: "P1"},
received: map[int64]types.NodeID{1: "P1", 2: "P1", 3: "P1"},
height: 1,
},
args: args{event: pcBlockVerificationFailure{height: 1, firstPeerID: "P1", secondPeerID: "P1"}},
@@ -2218,7 +2216,7 @@ func TestScHandle(t *testing.T) {
"P1": {height: 4, state: peerStateRemoved, lastTouched: tick[6]},
"P2": {height: 3, state: peerStateReady, lastTouched: tick[6]}},
allB: []int64{1, 2, 3},
received: map[int64]p2p.NodeID{},
received: map[int64]types.NodeID{},
height: 1,
},
},
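A note on the recurring substitution in the hunks above: every p2p.NodeID becomes types.NodeID. Both are defined string types, so the fixture map literals and conversions like types.NodeID("P1") stay source-compatible; only the package qualifier moves. A minimal, runnable sketch of the shape being assumed:

package main

import "fmt"

// NodeID mirrors the string-based identifier relocated into the types
// package (an assumption drawn from the conversions shown above).
type NodeID string

func main() {
	// Height-keyed fixture maps keep the same literals after the move.
	pending := map[int64]NodeID{6: "P1", 7: "P1"}
	fmt.Println(pending[6] == "P1") // true
}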

View File

@@ -11,22 +11,18 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
abcicli "github.com/tendermint/tendermint/abci/client"
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/internal/evidence"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
mempoolv0 "github.com/tendermint/tendermint/internal/mempool/v0"
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/internal/test/factory"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/pkg/abci"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/events"
evtypes "github.com/tendermint/tendermint/pkg/evidence"
"github.com/tendermint/tendermint/pkg/metadata"
p2ptypes "github.com/tendermint/tendermint/pkg/p2p"
tmcons "github.com/tendermint/tendermint/proto/tendermint/consensus"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
dbm "github.com/tendermint/tm-db"
)
@@ -56,9 +52,9 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
thisConfig := ResetConfig(fmt.Sprintf("%s_%d", testName, i))
defer os.RemoveAll(thisConfig.RootDir)
ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
ensureDir(t, path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
app := appFunc()
vals := consensus.TM2PB.ValidatorUpdates(state.Validators)
vals := types.TM2PB.ValidatorUpdates(state.Validators)
app.InitChain(abci.RequestInitChain{Validators: vals})
blockDB := dbm.NewMemDB()
@@ -89,7 +85,7 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
pv := privVals[i]
cs.SetPrivValidator(pv)
eventBus := events.NewEventBus()
eventBus := types.NewEventBus()
eventBus.SetLogger(log.TestingLogger().With("module", "events"))
err = eventBus.Start()
require.NoError(t, err)
@@ -104,7 +100,7 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
rts := setup(t, nValidators, states, 100) // buffer must be large enough to not deadlock
var bzNodeID p2ptypes.NodeID
var bzNodeID types.NodeID
// Set the first state's reactor as the dedicated byzantine reactor and grab
// the NodeID that corresponds to the state so we can reference the reactor.
@@ -129,7 +125,7 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
)
require.NoError(t, err)
prevote2, err := bzNodeState.signVote(tmproto.PrevoteType, nil, metadata.PartSetHeader{})
prevote2, err := bzNodeState.signVote(tmproto.PrevoteType, nil, types.PartSetHeader{})
require.NoError(t, err)
// send two votes to all peers (1st to one half, 2nd to another half)
@@ -171,12 +167,12 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
lazyNodeState.Logger.Info("Lazy Proposer proposing condensed commit")
require.NotNil(t, lazyNodeState.privValidator)
var commit *metadata.Commit
var commit *types.Commit
switch {
case lazyNodeState.Height == lazyNodeState.state.InitialHeight:
// We're creating a proposal for the first block.
// The commit is empty, but not nil.
commit = metadata.NewCommit(0, 0, metadata.BlockID{}, nil)
commit = types.NewCommit(0, 0, types.BlockID{}, nil)
case lazyNodeState.LastCommit.HasTwoThirdsMajority():
// Make the commit from LastCommit
commit = lazyNodeState.LastCommit.MakeCommit()
@@ -186,7 +182,7 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
}
// omit the last signature in the commit
commit.Signatures[len(commit.Signatures)-1] = metadata.NewCommitSigAbsent()
commit.Signatures[len(commit.Signatures)-1] = types.NewCommitSigAbsent()
if lazyNodeState.privValidatorPubKey == nil {
// If this node is a validator & proposer in the current round, it will
@@ -207,8 +203,8 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
}
// Make proposal
propBlockID := metadata.BlockID{Hash: block.Hash(), PartSetHeader: blockParts.Header()}
proposal := consensus.NewProposal(height, round, lazyNodeState.ValidRound, propBlockID)
propBlockID := types.BlockID{Hash: block.Hash(), PartSetHeader: blockParts.Header()}
proposal := types.NewProposal(height, round, lazyNodeState.ValidRound, propBlockID)
p := proposal.ToProto()
if err := lazyNodeState.privValidator.SignProposal(context.Background(), lazyNodeState.state.ChainID, p); err == nil {
proposal.Signature = p.Signature
@@ -233,20 +229,20 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
// Evidence should be submitted and committed at the third height but
// we will check the first six just in case
evidenceFromEachValidator := make([]evtypes.Evidence, nValidators)
evidenceFromEachValidator := make([]types.Evidence, nValidators)
wg := new(sync.WaitGroup)
i := 0
for _, sub := range rts.subs {
wg.Add(1)
go func(j int, s events.Subscription) {
go func(j int, s types.Subscription) {
defer wg.Done()
for {
select {
case msg := <-s.Out():
require.NotNil(t, msg)
block := msg.Data().(events.EventDataNewBlock).Block
block := msg.Data().(types.EventDataNewBlock).Block
if len(block.Evidence.Evidence) != 0 {
evidenceFromEachValidator[j] = block.Evidence.Evidence[0]
return
@@ -268,7 +264,7 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
for idx, ev := range evidenceFromEachValidator {
if assert.NotNil(t, ev, idx) {
ev, ok := ev.(*evtypes.DuplicateVoteEvidence)
ev, ok := ev.(*types.DuplicateVoteEvidence)
assert.True(t, ok)
assert.Equal(t, pubkey.Address(), ev.VoteA.ValidatorAddress)
assert.Equal(t, prevoteHeight, ev.Height())

View File

@@ -20,6 +20,7 @@ import (
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/kvstore"
abci "github.com/tendermint/tendermint/abci/types"
cfg "github.com/tendermint/tendermint/config"
cstypes "github.com/tendermint/tendermint/internal/consensus/types"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
@@ -30,15 +31,11 @@ import (
tmos "github.com/tendermint/tendermint/libs/os"
tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
tmtime "github.com/tendermint/tendermint/libs/time"
"github.com/tendermint/tendermint/pkg/abci"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/events"
"github.com/tendermint/tendermint/pkg/metadata"
"github.com/tendermint/tendermint/privval"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
)
const (
@@ -72,9 +69,10 @@ func configSetup(t *testing.T) *cfg.Config {
return config
}
func ensureDir(dir string, mode os.FileMode) {
func ensureDir(t *testing.T, dir string, mode os.FileMode) {
t.Helper()
if err := tmos.EnsureDir(dir, mode); err != nil {
panic(err)
t.Fatalf("error opening directory: %s", err)
}
}
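The ensureDir conversion above is the template applied throughout this file: accept *testing.T, call t.Helper() first so a failure is attributed to the caller's line, and fail via t.Fatalf instead of panicking. A self-contained sketch of the pattern (os.MkdirAll stands in for tmos.EnsureDir):

package sketch

import (
	"os"
	"testing"
)

// ensureDirSketch fails the calling test rather than panicking. t.Helper()
// makes the failure report the caller's file and line, not this one.
func ensureDirSketch(t *testing.T, dir string, mode os.FileMode) {
	t.Helper()
	if err := os.MkdirAll(dir, mode); err != nil {
		t.Fatalf("error opening directory: %s", err)
	}
}

One caveat of the switch: t.Fatalf ends the test via runtime.Goexit, so helpers written this way must be called from the goroutine running the test.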
@@ -89,14 +87,14 @@ type validatorStub struct {
Index int32 // Validator index. NOTE: we don't assume validator set changes.
Height int64
Round int32
consensus.PrivValidator
types.PrivValidator
VotingPower int64
lastVote *consensus.Vote
lastVote *types.Vote
}
const testMinPower int64 = 10
func newValidatorStub(privValidator consensus.PrivValidator, valIndex int32) *validatorStub {
func newValidatorStub(privValidator types.PrivValidator, valIndex int32) *validatorStub {
return &validatorStub{
Index: valIndex,
PrivValidator: privValidator,
@@ -108,21 +106,21 @@ func (vs *validatorStub) signVote(
config *cfg.Config,
voteType tmproto.SignedMsgType,
hash []byte,
header metadata.PartSetHeader) (*consensus.Vote, error) {
header types.PartSetHeader) (*types.Vote, error) {
pubKey, err := vs.PrivValidator.GetPubKey(context.Background())
if err != nil {
return nil, fmt.Errorf("can't get pubkey: %w", err)
}
vote := &consensus.Vote{
vote := &types.Vote{
ValidatorIndex: vs.Index,
ValidatorAddress: pubKey.Address(),
Height: vs.Height,
Round: vs.Round,
Timestamp: tmtime.Now(),
Type: voteType,
BlockID: metadata.BlockID{Hash: hash, PartSetHeader: header},
BlockID: types.BlockID{Hash: hash, PartSetHeader: header},
}
v := vote.ToProto()
if err := vs.PrivValidator.SignVote(context.Background(), config.ChainID(), v); err != nil {
@@ -147,7 +145,7 @@ func signVote(
config *cfg.Config,
voteType tmproto.SignedMsgType,
hash []byte,
header metadata.PartSetHeader) *consensus.Vote {
header types.PartSetHeader) *types.Vote {
v, err := vs.signVote(config, voteType, hash, header)
if err != nil {
@@ -163,9 +161,9 @@ func signVotes(
config *cfg.Config,
voteType tmproto.SignedMsgType,
hash []byte,
header metadata.PartSetHeader,
vss ...*validatorStub) []*consensus.Vote {
votes := make([]*consensus.Vote, len(vss))
header types.PartSetHeader,
vss ...*validatorStub) []*types.Vote {
votes := make([]*types.Vote, len(vss))
for i, vs := range vss {
votes[i] = signVote(vs, config, voteType, hash, header)
}
@@ -224,26 +222,28 @@ func startTestRound(cs *State, height int64, round int32) {
// Create proposal block from cs1 but sign it with vs.
func decideProposal(
t *testing.T,
cs1 *State,
vs *validatorStub,
height int64,
round int32,
) (proposal *consensus.Proposal, block *block.Block) {
) (proposal *types.Proposal, block *types.Block) {
t.Helper()
cs1.mtx.Lock()
block, blockParts := cs1.createProposalBlock()
validRound := cs1.ValidRound
chainID := cs1.state.ChainID
cs1.mtx.Unlock()
if block == nil {
panic("Failed to createProposalBlock. Did you forget to add commit for previous block?")
t.Fatal("Failed to createProposalBlock. Did you forget to add commit for previous block?")
}
// Make proposal
polRound, propBlockID := validRound, metadata.BlockID{Hash: block.Hash(), PartSetHeader: blockParts.Header()}
proposal = consensus.NewProposal(height, round, polRound, propBlockID)
polRound, propBlockID := validRound, types.BlockID{Hash: block.Hash(), PartSetHeader: blockParts.Header()}
proposal = types.NewProposal(height, round, polRound, propBlockID)
p := proposal.ToProto()
if err := vs.SignProposal(context.Background(), chainID, p); err != nil {
panic(err)
t.Fatalf("error signing proposal: %s", err)
}
proposal.Signature = p.Signature
@@ -251,7 +251,7 @@ func decideProposal(
return
}
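Worth noting in decideProposal above: SignProposal signs the protobuf form of the proposal and sets its Signature field, so the signature has to be copied back onto the domain struct before the proposal is returned. The idiom, excerpted with annotations:

p := proposal.ToProto() // wire form; this is what actually gets signed
if err := vs.SignProposal(context.Background(), chainID, p); err != nil {
	t.Fatalf("error signing proposal: %s", err)
}
proposal.Signature = p.Signature // carry the signature back to the domain type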
func addVotes(to *State, votes ...*consensus.Vote) {
func addVotes(to *State, votes ...*types.Vote) {
for _, vote := range votes {
to.peerMsgQueue <- msgInfo{Msg: &VoteMessage{vote}}
}
@@ -262,7 +262,7 @@ func signAddVotes(
to *State,
voteType tmproto.SignedMsgType,
hash []byte,
header metadata.PartSetHeader,
header types.PartSetHeader,
vss ...*validatorStub,
) {
votes := signVotes(config, voteType, hash, header, vss...)
@@ -270,36 +270,38 @@ func signAddVotes(
}
func validatePrevote(t *testing.T, cs *State, round int32, privVal *validatorStub, blockHash []byte) {
t.Helper()
prevotes := cs.Votes.Prevotes(round)
pubKey, err := privVal.GetPubKey(context.Background())
require.NoError(t, err)
address := pubKey.Address()
var vote *consensus.Vote
var vote *types.Vote
if vote = prevotes.GetByAddress(address); vote == nil {
panic("Failed to find prevote from validator")
t.Fatalf("Failed to find prevote from validator")
}
if blockHash == nil {
if vote.BlockID.Hash != nil {
panic(fmt.Sprintf("Expected prevote to be for nil, got %X", vote.BlockID.Hash))
t.Fatalf("Expected prevote to be for nil, got %X", vote.BlockID.Hash)
}
} else {
if !bytes.Equal(vote.BlockID.Hash, blockHash) {
panic(fmt.Sprintf("Expected prevote to be for %X, got %X", blockHash, vote.BlockID.Hash))
t.Fatalf("Expected prevote to be for %X, got %X", blockHash, vote.BlockID.Hash)
}
}
}
func validateLastPrecommit(t *testing.T, cs *State, privVal *validatorStub, blockHash []byte) {
t.Helper()
votes := cs.LastCommit
pv, err := privVal.GetPubKey(context.Background())
require.NoError(t, err)
address := pv.Address()
var vote *consensus.Vote
var vote *types.Vote
if vote = votes.GetByAddress(address); vote == nil {
panic("Failed to find precommit from validator")
t.Fatalf("Failed to find precommit from validator")
}
if !bytes.Equal(vote.BlockID.Hash, blockHash) {
panic(fmt.Sprintf("Expected precommit to be for %X, got %X", blockHash, vote.BlockID.Hash))
t.Fatalf("Expected precommit to be for %X, got %X", blockHash, vote.BlockID.Hash)
}
}
@@ -312,41 +314,42 @@ func validatePrecommit(
votedBlockHash,
lockedBlockHash []byte,
) {
t.Helper()
precommits := cs.Votes.Precommits(thisRound)
pv, err := privVal.GetPubKey(context.Background())
require.NoError(t, err)
address := pv.Address()
var vote *consensus.Vote
var vote *types.Vote
if vote = precommits.GetByAddress(address); vote == nil {
panic("Failed to find precommit from validator")
t.Fatalf("Failed to find precommit from validator")
}
if votedBlockHash == nil {
if vote.BlockID.Hash != nil {
panic("Expected precommit to be for nil")
t.Fatalf("Expected precommit to be for nil")
}
} else {
if !bytes.Equal(vote.BlockID.Hash, votedBlockHash) {
panic("Expected precommit to be for proposal block")
t.Fatalf("Expected precommit to be for proposal block")
}
}
if lockedBlockHash == nil {
if cs.LockedRound != lockRound || cs.LockedBlock != nil {
panic(fmt.Sprintf(
t.Fatalf(
"Expected to be locked on nil at round %d. Got locked at round %d with block %v",
lockRound,
cs.LockedRound,
cs.LockedBlock))
cs.LockedBlock)
}
} else {
if cs.LockedRound != lockRound || !bytes.Equal(cs.LockedBlock.Hash(), lockedBlockHash) {
panic(fmt.Sprintf(
t.Fatalf(
"Expected block to be locked on round %d, got %d. Got locked block %X, expected %X",
lockRound,
cs.LockedRound,
cs.LockedBlock.Hash(),
lockedBlockHash))
lockedBlockHash)
}
}
}
@@ -360,6 +363,7 @@ func validatePrevoteAndPrecommit(
votedBlockHash,
lockedBlockHash []byte,
) {
t.Helper()
// verify the prevote
validatePrevote(t, cs, thisRound, privVal, votedBlockHash)
// verify precommit
@@ -369,14 +373,14 @@ func validatePrevoteAndPrecommit(
}
func subscribeToVoter(cs *State, addr []byte) <-chan tmpubsub.Message {
votesSub, err := cs.eventBus.SubscribeUnbuffered(context.Background(), testSubscriber, events.EventQueryVote)
votesSub, err := cs.eventBus.SubscribeUnbuffered(context.Background(), testSubscriber, types.EventQueryVote)
if err != nil {
panic(fmt.Sprintf("failed to subscribe %s to %v", testSubscriber, events.EventQueryVote))
panic(fmt.Sprintf("failed to subscribe %s to %v", testSubscriber, types.EventQueryVote))
}
ch := make(chan tmpubsub.Message)
go func() {
for msg := range votesSub.Out() {
vote := msg.Data().(events.EventDataVote)
vote := msg.Data().(types.EventDataVote)
// we only fire for our own votes
if bytes.Equal(addr, vote.Vote.ValidatorAddress) {
ch <- msg
@@ -389,7 +393,7 @@ func subscribeToVoter(cs *State, addr []byte) <-chan tmpubsub.Message {
//-------------------------------------------------------------------------------
// consensus states
func newState(state sm.State, pv consensus.PrivValidator, app abci.Application) *State {
func newState(state sm.State, pv types.PrivValidator, app abci.Application) *State {
config := cfg.ResetTestRoot("consensus_state_test")
return newStateWithConfig(config, state, pv, app)
}
@@ -397,7 +401,7 @@ func newState(state sm.State, pv consensus.PrivValidator, app abci.Application)
func newStateWithConfig(
thisConfig *cfg.Config,
state sm.State,
pv consensus.PrivValidator,
pv types.PrivValidator,
app abci.Application,
) *State {
blockStore := store.NewBlockStore(dbm.NewMemDB())
@@ -407,7 +411,7 @@ func newStateWithConfig(
func newStateWithConfigAndBlockStore(
thisConfig *cfg.Config,
state sm.State,
pv consensus.PrivValidator,
pv types.PrivValidator,
app abci.Application,
blockStore *store.BlockStore,
) *State {
@@ -437,7 +441,7 @@ func newStateWithConfigAndBlockStore(
cs.SetLogger(log.TestingLogger().With("module", "consensus"))
cs.SetPrivValidator(pv)
eventBus := events.NewEventBus()
eventBus := types.NewEventBus()
eventBus.SetLogger(log.TestingLogger().With("module", "events"))
err := eventBus.Start()
if err != nil {
@@ -447,13 +451,14 @@ func newStateWithConfigAndBlockStore(
return cs
}
func loadPrivValidator(config *cfg.Config) *privval.FilePV {
func loadPrivValidator(t *testing.T, config *cfg.Config) *privval.FilePV {
t.Helper()
privValidatorKeyFile := config.PrivValidator.KeyFile()
ensureDir(filepath.Dir(privValidatorKeyFile), 0700)
ensureDir(t, filepath.Dir(privValidatorKeyFile), 0700)
privValidatorStateFile := config.PrivValidator.StateFile()
privValidator, err := privval.LoadOrGenFilePV(privValidatorKeyFile, privValidatorStateFile)
if err != nil {
panic(err)
t.Fatalf("error generating validator file: %s", err)
}
privValidator.Reset()
return privValidator
@@ -478,220 +483,238 @@ func randState(config *cfg.Config, nValidators int) (*State, []*validatorStub) {
//-------------------------------------------------------------------------------
func ensureNoNewEvent(ch <-chan tmpubsub.Message, timeout time.Duration,
func ensureNoNewEvent(t *testing.T, ch <-chan tmpubsub.Message, timeout time.Duration,
errorMessage string) {
t.Helper()
select {
case <-time.After(timeout):
break
case <-ch:
panic(errorMessage)
t.Fatalf("unexpected event: %s", errorMessage)
}
}
func ensureNoNewEventOnChannel(ch <-chan tmpubsub.Message) {
func ensureNoNewEventOnChannel(t *testing.T, ch <-chan tmpubsub.Message) {
t.Helper()
ensureNoNewEvent(
t,
ch,
ensureTimeout,
"We should be stuck waiting, not receiving new event on the channel")
}
func ensureNoNewRoundStep(stepCh <-chan tmpubsub.Message) {
func ensureNoNewRoundStep(t *testing.T, stepCh <-chan tmpubsub.Message) {
t.Helper()
ensureNoNewEvent(
t,
stepCh,
ensureTimeout,
"We should be stuck waiting, not receiving NewRoundStep event")
}
func ensureNoNewUnlock(unlockCh <-chan tmpubsub.Message) {
ensureNoNewEvent(
unlockCh,
ensureTimeout,
"We should be stuck waiting, not receiving Unlock event")
}
func ensureNoNewTimeout(stepCh <-chan tmpubsub.Message, timeout int64) {
func ensureNoNewTimeout(t *testing.T, stepCh <-chan tmpubsub.Message, timeout int64) {
t.Helper()
timeoutDuration := time.Duration(timeout*10) * time.Nanosecond
ensureNoNewEvent(
t,
stepCh,
timeoutDuration,
"We should be stuck waiting, not receiving NewTimeout event")
}
func ensureNewEvent(ch <-chan tmpubsub.Message, height int64, round int32, timeout time.Duration, errorMessage string) {
func ensureNewEvent(t *testing.T, ch <-chan tmpubsub.Message, height int64, round int32, timeout time.Duration, errorMessage string) { // nolint: lll
t.Helper()
select {
case <-time.After(timeout):
panic(errorMessage)
t.Fatalf("timed out waiting for new event: %s", errorMessage)
case msg := <-ch:
roundStateEvent, ok := msg.Data().(events.EventDataRoundState)
roundStateEvent, ok := msg.Data().(types.EventDataRoundState)
if !ok {
panic(fmt.Sprintf("expected a EventDataRoundState, got %T. Wrong subscription channel?",
msg.Data()))
t.Fatalf("expected a EventDataRoundState, got %T. Wrong subscription channel?", msg.Data())
}
if roundStateEvent.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, roundStateEvent.Height))
t.Fatalf("expected height %v, got %v", height, roundStateEvent.Height)
}
if roundStateEvent.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, roundStateEvent.Round))
t.Fatalf("expected round %v, got %v", round, roundStateEvent.Round)
}
// TODO: We could also check for a step at this point!
}
}
func ensureNewRound(roundCh <-chan tmpubsub.Message, height int64, round int32) {
func ensureNewRound(t *testing.T, roundCh <-chan tmpubsub.Message, height int64, round int32) {
t.Helper()
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewRound event")
t.Fatal("Timeout expired while waiting for NewRound event")
case msg := <-roundCh:
newRoundEvent, ok := msg.Data().(events.EventDataNewRound)
newRoundEvent, ok := msg.Data().(types.EventDataNewRound)
if !ok {
panic(fmt.Sprintf("expected a EventDataNewRound, got %T. Wrong subscription channel?",
msg.Data()))
t.Fatalf("expected a EventDataNewRound, got %T. Wrong subscription channel?", msg.Data())
}
if newRoundEvent.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, newRoundEvent.Height))
t.Fatalf("expected height %v, got %v", height, newRoundEvent.Height)
}
if newRoundEvent.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, newRoundEvent.Round))
t.Fatalf("expected round %v, got %v", round, newRoundEvent.Round)
}
}
}
func ensureNewTimeout(timeoutCh <-chan tmpubsub.Message, height int64, round int32, timeout int64) {
func ensureNewTimeout(t *testing.T, timeoutCh <-chan tmpubsub.Message, height int64, round int32, timeout int64) {
t.Helper()
timeoutDuration := time.Duration(timeout*10) * time.Nanosecond
ensureNewEvent(timeoutCh, height, round, timeoutDuration,
ensureNewEvent(t, timeoutCh, height, round, timeoutDuration,
"Timeout expired while waiting for NewTimeout event")
}
func ensureNewProposal(proposalCh <-chan tmpubsub.Message, height int64, round int32) {
func ensureNewProposal(t *testing.T, proposalCh <-chan tmpubsub.Message, height int64, round int32) {
t.Helper()
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewProposal event")
t.Fatalf("Timeout expired while waiting for NewProposal event")
case msg := <-proposalCh:
proposalEvent, ok := msg.Data().(events.EventDataCompleteProposal)
proposalEvent, ok := msg.Data().(types.EventDataCompleteProposal)
if !ok {
panic(fmt.Sprintf("expected a EventDataCompleteProposal, got %T. Wrong subscription channel?",
msg.Data()))
t.Fatalf("expected a EventDataCompleteProposal, got %T. Wrong subscription channel?",
msg.Data())
}
if proposalEvent.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, proposalEvent.Height))
t.Fatalf("expected height %v, got %v", height, proposalEvent.Height)
}
if proposalEvent.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, proposalEvent.Round))
t.Fatalf("expected round %v, got %v", round, proposalEvent.Round)
}
}
}
func ensureNewValidBlock(validBlockCh <-chan tmpubsub.Message, height int64, round int32) {
ensureNewEvent(validBlockCh, height, round, ensureTimeout,
func ensureNewValidBlock(t *testing.T, validBlockCh <-chan tmpubsub.Message, height int64, round int32) {
t.Helper()
ensureNewEvent(t, validBlockCh, height, round, ensureTimeout,
"Timeout expired while waiting for NewValidBlock event")
}
func ensureNewBlock(blockCh <-chan tmpubsub.Message, height int64) {
func ensureNewBlock(t *testing.T, blockCh <-chan tmpubsub.Message, height int64) {
t.Helper()
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewBlock event")
t.Fatalf("Timeout expired while waiting for NewBlock event")
case msg := <-blockCh:
blockEvent, ok := msg.Data().(events.EventDataNewBlock)
blockEvent, ok := msg.Data().(types.EventDataNewBlock)
if !ok {
panic(fmt.Sprintf("expected a EventDataNewBlock, got %T. Wrong subscription channel?",
msg.Data()))
t.Fatalf("expected a EventDataNewBlock, got %T. Wrong subscription channel?",
msg.Data())
}
if blockEvent.Block.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, blockEvent.Block.Height))
t.Fatalf("expected height %v, got %v", height, blockEvent.Block.Height)
}
}
}
func ensureNewBlockHeader(blockCh <-chan tmpubsub.Message, height int64, blockHash tmbytes.HexBytes) {
func ensureNewBlockHeader(t *testing.T, blockCh <-chan tmpubsub.Message, height int64, blockHash tmbytes.HexBytes) {
t.Helper()
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewBlockHeader event")
t.Fatalf("Timeout expired while waiting for NewBlockHeader event")
case msg := <-blockCh:
blockHeaderEvent, ok := msg.Data().(events.EventDataNewBlockHeader)
blockHeaderEvent, ok := msg.Data().(types.EventDataNewBlockHeader)
if !ok {
panic(fmt.Sprintf("expected a EventDataNewBlockHeader, got %T. Wrong subscription channel?",
msg.Data()))
t.Fatalf("expected a EventDataNewBlockHeader, got %T. Wrong subscription channel?",
msg.Data())
}
if blockHeaderEvent.Header.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, blockHeaderEvent.Header.Height))
t.Fatalf("expected height %v, got %v", height, blockHeaderEvent.Header.Height)
}
if !bytes.Equal(blockHeaderEvent.Header.Hash(), blockHash) {
panic(fmt.Sprintf("expected header %X, got %X", blockHash, blockHeaderEvent.Header.Hash()))
t.Fatalf("expected header %X, got %X", blockHash, blockHeaderEvent.Header.Hash())
}
}
}
func ensureNewUnlock(unlockCh <-chan tmpubsub.Message, height int64, round int32) {
ensureNewEvent(unlockCh, height, round, ensureTimeout,
"Timeout expired while waiting for NewUnlock event")
func ensureLock(t *testing.T, lockCh <-chan tmpubsub.Message, height int64, round int32) {
t.Helper()
ensureNewEvent(t, lockCh, height, round, ensureTimeout,
"Timeout expired while waiting for LockValue event")
}
func ensureProposal(proposalCh <-chan tmpubsub.Message, height int64, round int32, propID metadata.BlockID) {
func ensureRelock(t *testing.T, relockCh <-chan tmpubsub.Message, height int64, round int32) {
t.Helper()
ensureNewEvent(t, relockCh, height, round, ensureTimeout,
"Timeout expired while waiting for RelockValue event")
}
func ensureProposal(t *testing.T, proposalCh <-chan tmpubsub.Message, height int64, round int32, propID types.BlockID) {
t.Helper()
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewProposal event")
t.Fatalf("Timeout expired while waiting for NewProposal event")
case msg := <-proposalCh:
proposalEvent, ok := msg.Data().(events.EventDataCompleteProposal)
proposalEvent, ok := msg.Data().(types.EventDataCompleteProposal)
if !ok {
panic(fmt.Sprintf("expected a EventDataCompleteProposal, got %T. Wrong subscription channel?",
msg.Data()))
t.Fatalf("expected a EventDataCompleteProposal, got %T. Wrong subscription channel?",
msg.Data())
}
if proposalEvent.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, proposalEvent.Height))
t.Fatalf("expected height %v, got %v", height, proposalEvent.Height)
}
if proposalEvent.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, proposalEvent.Round))
t.Fatalf("expected round %v, got %v", round, proposalEvent.Round)
}
if !proposalEvent.BlockID.Equals(propID) {
panic(fmt.Sprintf("Proposed block does not match expected block (%v != %v)", proposalEvent.BlockID, propID))
t.Fatalf("Proposed block does not match expected block (%v != %v)", proposalEvent.BlockID, propID)
}
}
}
func ensurePrecommit(voteCh <-chan tmpubsub.Message, height int64, round int32) {
ensureVote(voteCh, height, round, tmproto.PrecommitType)
func ensurePrecommit(t *testing.T, voteCh <-chan tmpubsub.Message, height int64, round int32) {
t.Helper()
ensureVote(t, voteCh, height, round, tmproto.PrecommitType)
}
func ensurePrevote(voteCh <-chan tmpubsub.Message, height int64, round int32) {
ensureVote(voteCh, height, round, tmproto.PrevoteType)
func ensurePrevote(t *testing.T, voteCh <-chan tmpubsub.Message, height int64, round int32) {
t.Helper()
ensureVote(t, voteCh, height, round, tmproto.PrevoteType)
}
func ensureVote(voteCh <-chan tmpubsub.Message, height int64, round int32,
func ensureVote(t *testing.T, voteCh <-chan tmpubsub.Message, height int64, round int32,
voteType tmproto.SignedMsgType) {
t.Helper()
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for NewVote event")
t.Fatalf("Timeout expired while waiting for NewVote event")
case msg := <-voteCh:
voteEvent, ok := msg.Data().(events.EventDataVote)
voteEvent, ok := msg.Data().(types.EventDataVote)
if !ok {
panic(fmt.Sprintf("expected a EventDataVote, got %T. Wrong subscription channel?",
msg.Data()))
t.Fatalf("expected a EventDataVote, got %T. Wrong subscription channel?",
msg.Data())
}
vote := voteEvent.Vote
if vote.Height != height {
panic(fmt.Sprintf("expected height %v, got %v", height, vote.Height))
t.Fatalf("expected height %v, got %v", height, vote.Height)
}
if vote.Round != round {
panic(fmt.Sprintf("expected round %v, got %v", round, vote.Round))
t.Fatalf("expected round %v, got %v", round, vote.Round)
}
if vote.Type != voteType {
panic(fmt.Sprintf("expected type %v, got %v", voteType, vote.Type))
t.Fatalf("expected type %v, got %v", voteType, vote.Type)
}
}
}
func ensurePrecommitTimeout(ch <-chan tmpubsub.Message) {
func ensurePrecommitTimeout(t *testing.T, ch <-chan tmpubsub.Message) {
t.Helper()
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for the Precommit to Timeout")
t.Fatalf("Timeout expired while waiting for the Precommit to Timeout")
case <-ch:
}
}
func ensureNewEventOnChannel(ch <-chan tmpubsub.Message) {
func ensureNewEventOnChannel(t *testing.T, ch <-chan tmpubsub.Message) {
t.Helper()
select {
case <-time.After(ensureTimeout):
panic("Timeout expired while waiting for new activity on the channel")
t.Fatalf("Timeout expired while waiting for new activity on the channel")
case <-ch:
}
}
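The ensure* helpers above share one skeleton: a select between the watched channel and a deadline. The ensureNew* variants fail on timeout, while the ensureNo* variants fail on receive. The timeout direction, distilled into a self-contained form (hypothetical name; the concrete helpers additionally type-assert msg.Data() and compare height and round):

package sketch

import (
	"testing"
	"time"
)

// ensureEventWithin waits for one message or fails the calling test.
func ensureEventWithin(t *testing.T, ch <-chan struct{}, timeout time.Duration) {
	t.Helper()
	select {
	case <-time.After(timeout):
		t.Fatalf("timed out after %v waiting for event", timeout)
	case <-ch:
	}
}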
@@ -714,6 +737,7 @@ func randConsensusState(
appFunc func() abci.Application,
configOpts ...func(*cfg.Config),
) ([]*State, cleanupFunc) {
t.Helper()
genDoc, privVals := factory.RandGenesisDoc(config, nValidators, false, 30)
css := make([]*State, nValidators)
@@ -734,7 +758,7 @@ func randConsensusState(
opt(thisConfig)
}
ensureDir(filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
ensureDir(t, filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
app := appFunc()
@@ -742,7 +766,7 @@ func randConsensusState(
closeFuncs = append(closeFuncs, appCloser.Close)
}
vals := consensus.TM2PB.ValidatorUpdates(state.Validators)
vals := types.TM2PB.ValidatorUpdates(state.Validators)
app.InitChain(abci.RequestInitChain{Validators: vals})
css[i] = newStateWithConfigAndBlockStore(thisConfig, state, privVals[i], app, blockStore)
@@ -762,15 +786,17 @@ func randConsensusState(
// nPeers = nValidators + nNotValidator
func randConsensusNetWithPeers(
t *testing.T,
config *cfg.Config,
nValidators,
nPeers int,
testName string,
tickerFunc func() TimeoutTicker,
appFunc func(string) abci.Application,
) ([]*State, *consensus.GenesisDoc, *cfg.Config, cleanupFunc) {
) ([]*State, *types.GenesisDoc, *cfg.Config, cleanupFunc) {
genDoc, privVals := factory.RandGenesisDoc(config, nValidators, false, testMinPower)
css := make([]*State, nPeers)
t.Helper()
logger := consensusLogger()
var peer0Config *cfg.Config
@@ -779,31 +805,31 @@ func randConsensusNetWithPeers(
state, _ := sm.MakeGenesisState(genDoc)
thisConfig := ResetConfig(fmt.Sprintf("%s_%d", testName, i))
configRootDirs = append(configRootDirs, thisConfig.RootDir)
ensureDir(filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
ensureDir(t, filepath.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
if i == 0 {
peer0Config = thisConfig
}
var privVal consensus.PrivValidator
var privVal types.PrivValidator
if i < nValidators {
privVal = privVals[i]
} else {
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
if err != nil {
panic(err)
t.Fatalf("error creating temp file for validator key: %s", err)
}
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
if err != nil {
panic(err)
t.Fatalf("error loading validator state: %s", err)
}
privVal, err = privval.GenFilePV(tempKeyFile.Name(), tempStateFile.Name(), "")
if err != nil {
panic(err)
t.Fatalf("error generating validator key: %s", err)
}
}
app := appFunc(path.Join(config.DBDir(), fmt.Sprintf("%s_%d", testName, i)))
vals := consensus.TM2PB.ValidatorUpdates(state.Validators)
vals := types.TM2PB.ValidatorUpdates(state.Validators)
if _, ok := app.(*kvstore.PersistentKVStoreApplication); ok {
// simulate handshake, receive app version. If we don't do this, the replay test will fail
state.Version.Consensus.App = kvstore.ProtocolVersion
@@ -826,7 +852,7 @@ func randGenesisState(
config *cfg.Config,
numValidators int,
randPower bool,
minPower int64) (sm.State, []consensus.PrivValidator) {
minPower int64) (sm.State, []types.PrivValidator) {
genDoc, privValidators := factory.RandGenesisDoc(config, numValidators, randPower, minPower)
s0, _ := sm.MakeGenesisState(genDoc)
@@ -894,7 +920,7 @@ func newPersistentKVStoreWithPath(dbDir string) abci.Application {
return kvstore.NewPersistentKVStoreApplication(dbDir)
}
func signDataIsEqual(v1 *consensus.Vote, v2 *tmproto.Vote) bool {
func signDataIsEqual(v1 *types.Vote, v2 *tmproto.Vote) bool {
if v1 == nil || v2 == nil {
return false
}

View File

@@ -9,11 +9,9 @@ import (
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/libs/bytes"
tmrand "github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/events"
"github.com/tendermint/tendermint/pkg/metadata"
tmcons "github.com/tendermint/tendermint/proto/tendermint/consensus"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/types"
)
func TestReactorInvalidPrecommit(t *testing.T) {
@@ -61,7 +59,7 @@ func TestReactorInvalidPrecommit(t *testing.T) {
for _, sub := range rts.subs {
wg.Add(1)
go func(s events.Subscription) {
go func(s types.Subscription) {
<-s.Out()
wg.Done()
}(sub)
@@ -71,7 +69,7 @@ func TestReactorInvalidPrecommit(t *testing.T) {
wg.Wait()
}
func invalidDoPrevoteFunc(t *testing.T, height int64, round int32, cs *State, r *Reactor, pv consensus.PrivValidator) {
func invalidDoPrevoteFunc(t *testing.T, height int64, round int32, cs *State, r *Reactor, pv types.PrivValidator) {
// routine to:
// - precommit for a random block
// - send precommit to all peers
@@ -88,16 +86,16 @@ func invalidDoPrevoteFunc(t *testing.T, height int64, round int32, cs *State, r
// precommit a random block
blockHash := bytes.HexBytes(tmrand.Bytes(32))
precommit := &consensus.Vote{
precommit := &types.Vote{
ValidatorAddress: addr,
ValidatorIndex: valIndex,
Height: cs.Height,
Round: cs.Round,
Timestamp: cs.voteTime(),
Type: tmproto.PrecommitType,
BlockID: metadata.BlockID{
BlockID: types.BlockID{
Hash: blockHash,
PartSetHeader: metadata.PartSetHeader{Total: 1, Hash: tmrand.Bytes(32)}},
PartSetHeader: types.PartSetHeader{Total: 1, Hash: tmrand.Bytes(32)}},
}
p := precommit.ToProto()

View File

@@ -14,12 +14,11 @@ import (
dbm "github.com/tendermint/tm-db"
"github.com/tendermint/tendermint/abci/example/code"
abci "github.com/tendermint/tendermint/abci/types"
mempl "github.com/tendermint/tendermint/internal/mempool"
"github.com/tendermint/tendermint/pkg/abci"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/events"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
)
// for testing
@@ -38,15 +37,15 @@ func TestMempoolNoProgressUntilTxsAvailable(t *testing.T) {
cs := newStateWithConfig(config, state, privVals[0], NewCounterApplication())
assertMempool(cs.txNotifier).EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, events.EventQueryNewBlock)
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
startTestRound(cs, height, round)
ensureNewEventOnChannel(newBlockCh) // first block gets committed
ensureNoNewEventOnChannel(newBlockCh)
ensureNewEventOnChannel(t, newBlockCh) // first block gets committed
ensureNoNewEventOnChannel(t, newBlockCh)
deliverTxsRange(cs, 0, 1)
ensureNewEventOnChannel(newBlockCh) // commit txs
ensureNewEventOnChannel(newBlockCh) // commit updated app hash
ensureNoNewEventOnChannel(newBlockCh)
ensureNewEventOnChannel(t, newBlockCh) // commit txs
ensureNewEventOnChannel(t, newBlockCh) // commit updated app hash
ensureNoNewEventOnChannel(t, newBlockCh)
}
func TestMempoolProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
@@ -61,12 +60,12 @@ func TestMempoolProgressAfterCreateEmptyBlocksInterval(t *testing.T) {
assertMempool(cs.txNotifier).EnableTxsAvailable()
newBlockCh := subscribe(cs.eventBus, events.EventQueryNewBlock)
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
startTestRound(cs, cs.Height, cs.Round)
ensureNewEventOnChannel(newBlockCh) // first block gets committed
ensureNoNewEventOnChannel(newBlockCh) // then we don't make a block ...
ensureNewEventOnChannel(newBlockCh) // until the CreateEmptyBlocksInterval has passed
ensureNewEventOnChannel(t, newBlockCh) // first block gets committed
ensureNoNewEventOnChannel(t, newBlockCh) // then we don't make a block ...
ensureNewEventOnChannel(t, newBlockCh) // until the CreateEmptyBlocksInterval has passed
}
func TestMempoolProgressInHigherRound(t *testing.T) {
@@ -80,10 +79,10 @@ func TestMempoolProgressInHigherRound(t *testing.T) {
cs := newStateWithConfig(config, state, privVals[0], NewCounterApplication())
assertMempool(cs.txNotifier).EnableTxsAvailable()
height, round := cs.Height, cs.Round
newBlockCh := subscribe(cs.eventBus, events.EventQueryNewBlock)
newRoundCh := subscribe(cs.eventBus, events.EventQueryNewRound)
timeoutCh := subscribe(cs.eventBus, events.EventQueryTimeoutPropose)
cs.setProposal = func(proposal *consensus.Proposal) error {
newBlockCh := subscribe(cs.eventBus, types.EventQueryNewBlock)
newRoundCh := subscribe(cs.eventBus, types.EventQueryNewRound)
timeoutCh := subscribe(cs.eventBus, types.EventQueryTimeoutPropose)
cs.setProposal = func(proposal *types.Proposal) error {
if cs.Height == 2 && cs.Round == 0 {
// don't set the proposal in round 0 so we time out and
// go to next round
@@ -94,19 +93,19 @@ func TestMempoolProgressInHigherRound(t *testing.T) {
}
startTestRound(cs, height, round)
ensureNewRound(newRoundCh, height, round) // first round at first height
ensureNewEventOnChannel(newBlockCh) // first block gets committed
ensureNewRound(t, newRoundCh, height, round) // first round at first height
ensureNewEventOnChannel(t, newBlockCh) // first block gets committed
height++ // moving to the next height
round = 0
ensureNewRound(newRoundCh, height, round) // first round at next height
deliverTxsRange(cs, 0, 1) // we deliver txs, but don't set a proposal so we get the next round
ensureNewTimeout(timeoutCh, height, round, cs.config.TimeoutPropose.Nanoseconds())
ensureNewRound(t, newRoundCh, height, round) // first round at next height
deliverTxsRange(cs, 0, 1) // we deliver txs, but don't set a proposal so we get the next round
ensureNewTimeout(t, timeoutCh, height, round, cs.config.TimeoutPropose.Nanoseconds())
round++ // moving to the next round
ensureNewRound(newRoundCh, height, round) // wait for the next round
ensureNewEventOnChannel(newBlockCh) // now we can commit the block
round++ // moving to the next round
ensureNewRound(t, newRoundCh, height, round) // wait for the next round
ensureNewEventOnChannel(t, newBlockCh) // now we can commit the block
}
func deliverTxsRange(cs *State, start, end int) {
@@ -130,7 +129,7 @@ func TestMempoolTxConcurrentWithCommit(t *testing.T) {
cs := newStateWithConfigAndBlockStore(config, state, privVals[0], NewCounterApplication(), blockStore)
err := stateStore.Save(state)
require.NoError(t, err)
newBlockHeaderCh := subscribe(cs.eventBus, events.EventQueryNewBlockHeader)
newBlockHeaderCh := subscribe(cs.eventBus, types.EventQueryNewBlockHeader)
const numTxs int64 = 3000
go deliverTxsRange(cs, 0, int(numTxs))
@@ -139,7 +138,7 @@ func TestMempoolTxConcurrentWithCommit(t *testing.T) {
for n := int64(0); n < numTxs; {
select {
case msg := <-newBlockHeaderCh:
headerEvent := msg.Data().(events.EventDataNewBlockHeader)
headerEvent := msg.Data().(types.EventDataNewBlockHeader)
n += headerEvent.NumTxs
case <-time.After(30 * time.Second):
t.Fatal("Timed out waiting 30s to commit blocks with transactions")

View File

@@ -3,7 +3,7 @@ package consensus
import (
"github.com/go-kit/kit/metrics"
"github.com/go-kit/kit/metrics/discard"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/types"
prometheus "github.com/go-kit/kit/metrics/prometheus"
stdprometheus "github.com/prometheus/client_golang/prometheus"
@@ -221,7 +221,7 @@ func NopMetrics() *Metrics {
}
// RecordConsMetrics is used for recording the block-related metrics during fast-sync.
func (m *Metrics) RecordConsMetrics(block *block.Block) {
func (m *Metrics) RecordConsMetrics(block *types.Block) {
m.NumTxs.Set(float64(len(block.Data.Txs)))
m.TotalTxs.Add(float64(len(block.Data.Txs)))
m.BlockSizeBytes.Observe(float64(block.Size()))
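For reference on the go-kit calls above, each metric kind has its own verb: gauges are Set, counters are Add, histograms are Observe. A hypothetical recording helper in the same package:

func recordSketch(m *Metrics, txs, sizeBytes int) {
	m.NumTxs.Set(float64(txs))                   // gauge: replace the current value
	m.TotalTxs.Add(float64(txs))                 // counter: accumulate monotonically
	m.BlockSizeBytes.Observe(float64(sizeBytes)) // histogram: record one sample
}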

View File

@@ -8,12 +8,9 @@ import (
"github.com/tendermint/tendermint/libs/bits"
tmjson "github.com/tendermint/tendermint/libs/json"
tmmath "github.com/tendermint/tendermint/libs/math"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/events"
"github.com/tendermint/tendermint/pkg/metadata"
"github.com/tendermint/tendermint/pkg/p2p"
tmcons "github.com/tendermint/tendermint/proto/tendermint/consensus"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/types"
)
// Message defines an interface that the consensus domain types implement. When
@@ -98,7 +95,7 @@ func (m *NewRoundStepMessage) String() string {
type NewValidBlockMessage struct {
Height int64
Round int32
BlockPartSetHeader metadata.PartSetHeader
BlockPartSetHeader types.PartSetHeader
BlockParts *bits.BitArray
IsCommit bool
}
@@ -122,8 +119,8 @@ func (m *NewValidBlockMessage) ValidateBasic() error {
m.BlockParts.Size(),
m.BlockPartSetHeader.Total)
}
if m.BlockParts.Size() > int(metadata.MaxBlockPartsCount) {
return fmt.Errorf("blockParts bit array is too big: %d, max: %d", m.BlockParts.Size(), metadata.MaxBlockPartsCount)
if m.BlockParts.Size() > int(types.MaxBlockPartsCount) {
return fmt.Errorf("blockParts bit array is too big: %d, max: %d", m.BlockParts.Size(), types.MaxBlockPartsCount)
}
return nil
}
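The ValidateBasic methods in this file all follow the same shape: reject negative scalars, then bound any bit array against the caps now exported from types (MaxBlockPartsCount, MaxVotesCount). A self-contained sketch of that shape, with stand-in names:

package sketch

import (
	"errors"
	"fmt"
)

const maxBits = 10000 // stands in for types.MaxVotesCount

// msgSketch models the common fields these messages validate.
type msgSketch struct {
	Height int64
	Round  int32
	Bits   int // stands in for a bit-array size
}

func (m *msgSketch) ValidateBasic() error {
	if m.Height < 0 {
		return errors.New("negative Height")
	}
	if m.Round < 0 {
		return errors.New("negative Round")
	}
	if m.Bits > maxBits {
		return fmt.Errorf("bit array is too big: %d, max: %d", m.Bits, maxBits)
	}
	return nil
}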
@@ -136,7 +133,7 @@ func (m *NewValidBlockMessage) String() string {
// ProposalMessage is sent when a new block is proposed.
type ProposalMessage struct {
Proposal *consensus.Proposal
Proposal *types.Proposal
}
// ValidateBasic performs basic validation.
@@ -167,8 +164,8 @@ func (m *ProposalPOLMessage) ValidateBasic() error {
if m.ProposalPOL.Size() == 0 {
return errors.New("empty ProposalPOL bit array")
}
if m.ProposalPOL.Size() > consensus.MaxVotesCount {
return fmt.Errorf("proposalPOL bit array is too big: %d, max: %d", m.ProposalPOL.Size(), consensus.MaxVotesCount)
if m.ProposalPOL.Size() > types.MaxVotesCount {
return fmt.Errorf("proposalPOL bit array is too big: %d, max: %d", m.ProposalPOL.Size(), types.MaxVotesCount)
}
return nil
}
@@ -182,7 +179,7 @@ func (m *ProposalPOLMessage) String() string {
type BlockPartMessage struct {
Height int64
Round int32
Part *metadata.Part
Part *types.Part
}
// ValidateBasic performs basic validation.
@@ -206,7 +203,7 @@ func (m *BlockPartMessage) String() string {
// VoteMessage is sent when voting for a proposal (or lack thereof).
type VoteMessage struct {
Vote *consensus.Vote
Vote *types.Vote
}
// ValidateBasic performs basic validation.
@@ -235,7 +232,7 @@ func (m *HasVoteMessage) ValidateBasic() error {
if m.Round < 0 {
return errors.New("negative Round")
}
if !consensus.IsVoteTypeValid(m.Type) {
if !types.IsVoteTypeValid(m.Type) {
return errors.New("invalid Type")
}
if m.Index < 0 {
@@ -254,7 +251,7 @@ type VoteSetMaj23Message struct {
Height int64
Round int32
Type tmproto.SignedMsgType
BlockID metadata.BlockID
BlockID types.BlockID
}
// ValidateBasic performs basic validation.
@@ -265,7 +262,7 @@ func (m *VoteSetMaj23Message) ValidateBasic() error {
if m.Round < 0 {
return errors.New("negative Round")
}
if !consensus.IsVoteTypeValid(m.Type) {
if !types.IsVoteTypeValid(m.Type) {
return errors.New("invalid Type")
}
if err := m.BlockID.ValidateBasic(); err != nil {
@@ -286,7 +283,7 @@ type VoteSetBitsMessage struct {
Height int64
Round int32
Type tmproto.SignedMsgType
BlockID metadata.BlockID
BlockID types.BlockID
Votes *bits.BitArray
}
@@ -295,7 +292,7 @@ func (m *VoteSetBitsMessage) ValidateBasic() error {
if m.Height < 0 {
return errors.New("negative Height")
}
if !consensus.IsVoteTypeValid(m.Type) {
if !types.IsVoteTypeValid(m.Type) {
return errors.New("invalid Type")
}
if err := m.BlockID.ValidateBasic(); err != nil {
@@ -303,8 +300,8 @@ func (m *VoteSetBitsMessage) ValidateBasic() error {
}
// NOTE: Votes.Size() can be zero if the node does not have any
if m.Votes.Size() > consensus.MaxVotesCount {
return fmt.Errorf("votes bit array is too big: %d, max: %d", m.Votes.Size(), consensus.MaxVotesCount)
if m.Votes.Size() > types.MaxVotesCount {
return fmt.Errorf("votes bit array is too big: %d, max: %d", m.Votes.Size(), types.MaxVotesCount)
}
return nil
@@ -468,7 +465,7 @@ func MsgFromProto(msg *tmcons.Message) (Message, error) {
LastCommitRound: msg.NewRoundStep.LastCommitRound,
}
case *tmcons.Message_NewValidBlock:
pbPartSetHeader, err := metadata.PartSetHeaderFromProto(&msg.NewValidBlock.BlockPartSetHeader)
pbPartSetHeader, err := types.PartSetHeaderFromProto(&msg.NewValidBlock.BlockPartSetHeader)
if err != nil {
return nil, fmt.Errorf("parts header to proto error: %w", err)
}
@@ -487,7 +484,7 @@ func MsgFromProto(msg *tmcons.Message) (Message, error) {
IsCommit: msg.NewValidBlock.IsCommit,
}
case *tmcons.Message_Proposal:
pbP, err := consensus.ProposalFromProto(&msg.Proposal.Proposal)
pbP, err := types.ProposalFromProto(&msg.Proposal.Proposal)
if err != nil {
return nil, fmt.Errorf("proposal msg to proto error: %w", err)
}
@@ -507,7 +504,7 @@ func MsgFromProto(msg *tmcons.Message) (Message, error) {
ProposalPOL: pbBits,
}
case *tmcons.Message_BlockPart:
parts, err := metadata.PartFromProto(&msg.BlockPart.Part)
parts, err := types.PartFromProto(&msg.BlockPart.Part)
if err != nil {
return nil, fmt.Errorf("blockpart msg to proto error: %w", err)
}
@@ -517,7 +514,7 @@ func MsgFromProto(msg *tmcons.Message) (Message, error) {
Part: parts,
}
case *tmcons.Message_Vote:
vote, err := consensus.VoteFromProto(msg.Vote.Vote)
vote, err := types.VoteFromProto(msg.Vote.Vote)
if err != nil {
return nil, fmt.Errorf("vote msg to proto error: %w", err)
}
@@ -533,7 +530,7 @@ func MsgFromProto(msg *tmcons.Message) (Message, error) {
Index: msg.HasVote.Index,
}
case *tmcons.Message_VoteSetMaj23:
bi, err := metadata.BlockIDFromProto(&msg.VoteSetMaj23.BlockID)
bi, err := types.BlockIDFromProto(&msg.VoteSetMaj23.BlockID)
if err != nil {
return nil, fmt.Errorf("voteSetMaj23 msg to proto error: %w", err)
}
@@ -544,7 +541,7 @@ func MsgFromProto(msg *tmcons.Message) (Message, error) {
BlockID: *bi,
}
case *tmcons.Message_VoteSetBits:
bi, err := metadata.BlockIDFromProto(&msg.VoteSetBits.BlockID)
bi, err := types.BlockIDFromProto(&msg.VoteSetBits.BlockID)
if err != nil {
return nil, fmt.Errorf("block ID to proto error: %w", err)
}
@@ -577,7 +574,7 @@ func WALToProto(msg WALMessage) (*tmcons.WALMessage, error) {
var pb tmcons.WALMessage
switch msg := msg.(type) {
case events.EventDataRoundState:
case types.EventDataRoundState:
pb = tmcons.WALMessage{
Sum: &tmcons.WALMessage_EventDataRoundState{
EventDataRoundState: &tmproto.EventDataRoundState{
@@ -640,7 +637,7 @@ func WALFromProto(msg *tmcons.WALMessage) (WALMessage, error) {
switch msg := msg.Sum.(type) {
case *tmcons.WALMessage_EventDataRoundState:
pb = events.EventDataRoundState{
pb = types.EventDataRoundState{
Height: msg.EventDataRoundState.Height,
Round: msg.EventDataRoundState.Round,
Step: msg.EventDataRoundState.Step,
@@ -653,7 +650,7 @@ func WALFromProto(msg *tmcons.WALMessage) (WALMessage, error) {
}
pb = msgInfo{
Msg: walMsg,
PeerID: p2p.NodeID(msg.MsgInfo.PeerID),
PeerID: types.NodeID(msg.MsgInfo.PeerID),
}
case *tmcons.WALMessage_TimeoutInfo:

View File

@@ -18,21 +18,18 @@ import (
"github.com/tendermint/tendermint/libs/bits"
"github.com/tendermint/tendermint/libs/bytes"
tmrand "github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/events"
"github.com/tendermint/tendermint/pkg/metadata"
"github.com/tendermint/tendermint/pkg/p2p"
tmcons "github.com/tendermint/tendermint/proto/tendermint/consensus"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/types"
)
func TestMsgToProto(t *testing.T) {
psh := metadata.PartSetHeader{
psh := types.PartSetHeader{
Total: 1,
Hash: tmrand.Bytes(32),
}
pbPsh := psh.ToProto()
bi := metadata.BlockID{
bi := types.BlockID{
Hash: tmrand.Bytes(32),
PartSetHeader: psh,
}
@@ -40,7 +37,7 @@ func TestMsgToProto(t *testing.T) {
bits := bits.NewBitArray(1)
pbBits := bits.ToProto()
parts := metadata.Part{
parts := types.Part{
Index: 1,
Bytes: []byte("test"),
Proof: merkle.Proof{
@@ -53,7 +50,7 @@ func TestMsgToProto(t *testing.T) {
pbParts, err := parts.ToProto()
require.NoError(t, err)
proposal := consensus.Proposal{
proposal := types.Proposal{
Type: tmproto.ProposalType,
Height: 1,
Round: 1,
@@ -64,9 +61,9 @@ func TestMsgToProto(t *testing.T) {
}
pbProposal := proposal.ToProto()
pv := consensus.NewMockPV()
pv := types.NewMockPV()
vote, err := factory.MakeVote(pv, factory.DefaultTestChainID,
0, 1, 0, 2, metadata.BlockID{}, time.Now())
0, 1, 0, 2, types.BlockID{}, time.Now())
require.NoError(t, err)
pbVote := vote.ToProto()
@@ -213,7 +210,7 @@ func TestMsgToProto(t *testing.T) {
func TestWALMsgProto(t *testing.T) {
parts := metadata.Part{
parts := types.Part{
Index: 1,
Bytes: []byte("test"),
Proof: merkle.Proof{
@@ -232,7 +229,7 @@ func TestWALMsgProto(t *testing.T) {
want *tmcons.WALMessage
wantErr bool
}{
{"successful EventDataRoundState", events.EventDataRoundState{
{"successful EventDataRoundState", types.EventDataRoundState{
Height: 2,
Round: 1,
Step: "ronies",
@@ -251,7 +248,7 @@ func TestWALMsgProto(t *testing.T) {
Round: 1,
Part: &parts,
},
PeerID: p2p.NodeID("string"),
PeerID: types.NodeID("string"),
}, &tmcons.WALMessage{
Sum: &tmcons.WALMessage_MsgInfo{
MsgInfo: &tmcons.MsgInfo{
@@ -319,13 +316,13 @@ func TestWALMsgProto(t *testing.T) {
// nolint:lll //ignore line length for tests
func TestConsMsgsVectors(t *testing.T) {
date := time.Date(2018, 8, 30, 12, 0, 0, 0, time.UTC)
psh := metadata.PartSetHeader{
psh := types.PartSetHeader{
Total: 1,
Hash: []byte("add_more_exclamation_marks_code-"),
}
pbPsh := psh.ToProto()
bi := metadata.BlockID{
bi := types.BlockID{
Hash: []byte("add_more_exclamation_marks_code-"),
PartSetHeader: psh,
}
@@ -333,7 +330,7 @@ func TestConsMsgsVectors(t *testing.T) {
bits := bits.NewBitArray(1)
pbBits := bits.ToProto()
parts := metadata.Part{
parts := types.Part{
Index: 1,
Bytes: []byte("test"),
Proof: merkle.Proof{
@@ -346,7 +343,7 @@ func TestConsMsgsVectors(t *testing.T) {
pbParts, err := parts.ToProto()
require.NoError(t, err)
proposal := consensus.Proposal{
proposal := types.Proposal{
Type: tmproto.ProposalType,
Height: 1,
Round: 1,
@@ -357,7 +354,7 @@ func TestConsMsgsVectors(t *testing.T) {
}
pbProposal := proposal.ToProto()
v := &consensus.Vote{
v := &types.Vote{
ValidatorAddress: []byte("add_more_exclamation"),
ValidatorIndex: 1,
Height: 1,
@@ -434,10 +431,10 @@ func TestVoteSetMaj23MessageValidateBasic(t *testing.T) {
invalidSignedMsgType tmproto.SignedMsgType = 0x03
)
validBlockID := metadata.BlockID{}
invalidBlockID := metadata.BlockID{
validBlockID := types.BlockID{}
invalidBlockID := types.BlockID{
Hash: bytes.HexBytes{},
PartSetHeader: metadata.PartSetHeader{
PartSetHeader: types.PartSetHeader{
Total: 1,
Hash: []byte{0},
},
@@ -449,7 +446,7 @@ func TestVoteSetMaj23MessageValidateBasic(t *testing.T) {
messageHeight int64
testName string
messageType tmproto.SignedMsgType
messageBlockID metadata.BlockID
messageBlockID types.BlockID
}{
{false, 0, 0, "Valid Message", validSignedMsgType, validBlockID},
{true, -1, 0, "Invalid Message", validSignedMsgType, validBlockID},
@@ -482,15 +479,15 @@ func TestVoteSetBitsMessageValidateBasic(t *testing.T) {
{func(msg *VoteSetBitsMessage) { msg.Height = -1 }, "negative Height"},
{func(msg *VoteSetBitsMessage) { msg.Type = 0x03 }, "invalid Type"},
{func(msg *VoteSetBitsMessage) {
-msg.BlockID = metadata.BlockID{
+msg.BlockID = types.BlockID{
Hash: bytes.HexBytes{},
-PartSetHeader: metadata.PartSetHeader{
+PartSetHeader: types.PartSetHeader{
Total: 1,
Hash: []byte{0},
},
}
}, "wrong BlockID: wrong PartSetHeader: wrong Hash:"},
-{func(msg *VoteSetBitsMessage) { msg.Votes = bits.NewBitArray(consensus.MaxVotesCount + 1) },
+{func(msg *VoteSetBitsMessage) { msg.Votes = bits.NewBitArray(types.MaxVotesCount + 1) },
"votes bit array is too big: 10001, max: 10000"},
}
@@ -502,7 +499,7 @@ func TestVoteSetBitsMessageValidateBasic(t *testing.T) {
Round: 0,
Type: 0x01,
Votes: bits.NewBitArray(1),
-BlockID: metadata.BlockID{},
+BlockID: types.BlockID{},
}
tc.malleateFn(msg)
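
The ValidateBasic tests above all follow the same malleate idiom: build one valid message, let each case corrupt a single field, and match the returned error string. A condensed sketch of the idiom; validMessage and expErr are illustrative names, not the exact identifiers in this file:

for _, tc := range testCases {
	msg := validMessage() // hypothetical constructor returning a known-good message
	tc.malleateFn(msg)    // each case mutates exactly one field
	err := msg.ValidateBasic()
	if tc.expErr == "" {
		require.NoError(t, err)
	} else {
		require.Error(t, err)
		require.Contains(t, err.Error(), tc.expErr)
	}
}
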
@@ -607,9 +604,7 @@ func TestNewValidBlockMessageValidateBasic(t *testing.T) {
"empty blockParts",
},
{
-func(msg *NewValidBlockMessage) {
-msg.BlockParts = bits.NewBitArray(int(metadata.MaxBlockPartsCount) + 1)
-},
+func(msg *NewValidBlockMessage) { msg.BlockParts = bits.NewBitArray(int(types.MaxBlockPartsCount) + 1) },
"blockParts bit array size 1602 not equal to BlockPartSetHeader.Total 1",
},
}
@@ -620,7 +615,7 @@ func TestNewValidBlockMessageValidateBasic(t *testing.T) {
msg := &NewValidBlockMessage{
Height: 1,
Round: 0,
-BlockPartSetHeader: metadata.PartSetHeader{
+BlockPartSetHeader: types.PartSetHeader{
Total: 1,
},
BlockParts: bits.NewBitArray(1),
@@ -644,7 +639,7 @@ func TestProposalPOLMessageValidateBasic(t *testing.T) {
{func(msg *ProposalPOLMessage) { msg.Height = -1 }, "negative Height"},
{func(msg *ProposalPOLMessage) { msg.ProposalPOLRound = -1 }, "negative ProposalPOLRound"},
{func(msg *ProposalPOLMessage) { msg.ProposalPOL = bits.NewBitArray(0) }, "empty ProposalPOL bit array"},
-{func(msg *ProposalPOLMessage) { msg.ProposalPOL = bits.NewBitArray(consensus.MaxVotesCount + 1) },
+{func(msg *ProposalPOLMessage) { msg.ProposalPOL = bits.NewBitArray(types.MaxVotesCount + 1) },
"proposalPOL bit array is too big: 10001, max: 10000"},
}
@@ -667,13 +662,13 @@ func TestProposalPOLMessageValidateBasic(t *testing.T) {
}
func TestBlockPartMessageValidateBasic(t *testing.T) {
-testPart := new(metadata.Part)
+testPart := new(types.Part)
testPart.Proof.LeafHash = tmhash.Sum([]byte("leaf"))
testCases := []struct {
testName string
messageHeight int64
messageRound int32
-messagePart *metadata.Part
+messagePart *types.Part
expectErr bool
}{
{"Valid Message", 0, 0, testPart, false},
@@ -694,7 +689,7 @@ func TestBlockPartMessageValidateBasic(t *testing.T) {
})
}
-message := BlockPartMessage{Height: 0, Round: 0, Part: new(metadata.Part)}
+message := BlockPartMessage{Height: 0, Round: 0, Part: new(types.Part)}
message.Part.Index = 1
assert.Equal(t, true, message.ValidateBasic() != nil, "Validate Basic had an unexpected result")

View File

@@ -12,10 +12,8 @@ import (
tmjson "github.com/tendermint/tendermint/libs/json"
"github.com/tendermint/tendermint/libs/log"
tmtime "github.com/tendermint/tendermint/libs/time"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/metadata"
"github.com/tendermint/tendermint/pkg/p2p"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/types"
)
var (
@@ -38,7 +36,7 @@ func (pss peerStateStats) String() string {
// NOTE: THIS GETS DUMPED WITH rpc/core/consensus.go.
// Be mindful of what you Expose.
type PeerState struct {
-peerID p2p.NodeID
+peerID types.NodeID
logger log.Logger
// NOTE: Modify below using setters, never directly.
@@ -52,7 +50,7 @@ type PeerState struct {
}
// NewPeerState returns a new PeerState for the given node ID.
-func NewPeerState(logger log.Logger, peerID p2p.NodeID) *PeerState {
+func NewPeerState(logger log.Logger, peerID types.NodeID) *PeerState {
return &PeerState{
peerID: peerID,
logger: logger,
@@ -112,7 +110,7 @@ func (ps *PeerState) GetHeight() int64 {
}
// SetHasProposal sets the given proposal as known for the peer.
-func (ps *PeerState) SetHasProposal(proposal *consensus.Proposal) {
+func (ps *PeerState) SetHasProposal(proposal *types.Proposal) {
ps.mtx.Lock()
defer ps.mtx.Unlock()
@@ -139,7 +137,7 @@ func (ps *PeerState) SetHasProposal(proposal *consensus.Proposal) {
// InitProposalBlockParts initializes the peer's proposal block parts header
// and bit array.
-func (ps *PeerState) InitProposalBlockParts(partSetHeader metadata.PartSetHeader) {
+func (ps *PeerState) InitProposalBlockParts(partSetHeader types.PartSetHeader) {
ps.mtx.Lock()
defer ps.mtx.Unlock()
@@ -167,7 +165,7 @@ func (ps *PeerState) SetHasProposalBlockPart(height int64, round int32, index in
// vote was picked.
//
// NOTE: `votes` must be the correct Size() for the Height().
-func (ps *PeerState) PickVoteToSend(votes consensus.VoteSetReader) (*consensus.Vote, bool) {
+func (ps *PeerState) PickVoteToSend(votes types.VoteSetReader) (*types.Vote, bool) {
ps.mtx.Lock()
defer ps.mtx.Unlock()
@@ -201,21 +199,8 @@ func (ps *PeerState) PickVoteToSend(votes consensus.VoteSetReader) (*consensus.V
return nil, false
}
-func (ps *PeerState) PickVoteFromCommit(commit *metadata.Commit) (*consensus.Vote, bool) {
-psVotes := ps.getVoteBitArray(commit.Height, commit.Round, tmproto.PrecommitType)
-if psVotes == nil {
-return nil, false // not something worth sending
-}
-if index, ok := commit.BitArray().Sub(psVotes).PickRandom(); ok {
-return consensus.GetVoteFromCommit(commit, int32(index)), true
-}
-return nil, false
-}
func (ps *PeerState) getVoteBitArray(height int64, round int32, votesType tmproto.SignedMsgType) *bits.BitArray {
-if !consensus.IsVoteTypeValid(votesType) {
+if !types.IsVoteTypeValid(votesType) {
return nil
}
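
Both the removed PickVoteFromCommit and the surviving PickVoteToSend reduce to the same bit-array trick: subtract the peer's vote bits from ours, then randomly pick an index the peer is missing. A sketch of that core step, assuming the libs/bits API visible in this diff (Sub, PickRandom) and the GetByIndex accessor on the vote-set reader:

// missing has a bit set only where we hold a vote the peer lacks.
missing := ourVotes.BitArray().Sub(peerVotes)
if index, ok := missing.PickRandom(); ok {
	vote := ourVotes.GetByIndex(int32(index))
	// hand vote to the peer; the caller then records it via SetHasVote.
	_ = vote
}
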
@@ -372,7 +357,7 @@ func (ps *PeerState) BlockPartsSent() int {
}
// SetHasVote sets the given vote as known by the peer
-func (ps *PeerState) SetHasVote(vote *consensus.Vote) {
+func (ps *PeerState) SetHasVote(vote *types.Vote) {
ps.mtx.Lock()
defer ps.mtx.Unlock()
@@ -419,7 +404,7 @@ func (ps *PeerState) ApplyNewRoundStepMessage(msg *NewRoundStepMessage) {
if psHeight != msg.Height || psRound != msg.Round {
ps.PRS.Proposal = false
-ps.PRS.ProposalBlockPartSetHeader = metadata.PartSetHeader{}
+ps.PRS.ProposalBlockPartSetHeader = types.PartSetHeader{}
ps.PRS.ProposalBlockParts = nil
ps.PRS.ProposalPOLRound = -1
ps.PRS.ProposalPOL = nil

View File

@@ -12,13 +12,10 @@ import (
tmevents "github.com/tendermint/tendermint/libs/events"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/events"
"github.com/tendermint/tendermint/pkg/metadata"
p2ptypes "github.com/tendermint/tendermint/pkg/p2p"
tmcons "github.com/tendermint/tendermint/proto/tendermint/consensus"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
var (
@@ -128,11 +125,11 @@ type Reactor struct {
service.BaseService
state *State
-eventBus *events.EventBus
+eventBus *types.EventBus
Metrics *Metrics
mtx tmsync.RWMutex
-peers map[p2ptypes.NodeID]*PeerState
+peers map[types.NodeID]*PeerState
waitSync bool
stateCh *p2p.Channel
@@ -168,7 +165,7 @@ func NewReactor(
r := &Reactor{
state: cs,
waitSync: waitSync,
-peers: make(map[p2ptypes.NodeID]*PeerState),
+peers: make(map[types.NodeID]*PeerState),
Metrics: NopMetrics(),
stateCh: stateCh,
dataCh: dataCh,
@@ -263,7 +260,7 @@ func (r *Reactor) OnStop() {
}
// SetEventBus sets the reactor's event bus.
-func (r *Reactor) SetEventBus(b *events.EventBus) {
+func (r *Reactor) SetEventBus(b *types.EventBus) {
r.eventBus = b
r.state.SetEventBus(b)
}
@@ -316,7 +313,7 @@ conR:
%+v`, err, r.state, r))
}
-d := events.EventDataBlockSyncStatus{Complete: true, Height: state.LastBlockHeight}
+d := types.EventDataBlockSyncStatus{Complete: true, Height: state.LastBlockHeight}
if err := r.eventBus.PublishEventBlockSyncStatus(d); err != nil {
r.Logger.Error("failed to emit the blocksync complete event", "err", err)
}
@@ -349,7 +346,7 @@ func (r *Reactor) StringIndented(indent string) string {
}
// GetPeerState returns PeerState for a given NodeID.
-func (r *Reactor) GetPeerState(peerID p2ptypes.NodeID) (*PeerState, bool) {
+func (r *Reactor) GetPeerState(peerID types.NodeID) (*PeerState, bool) {
r.mtx.RLock()
defer r.mtx.RUnlock()
@@ -378,7 +375,7 @@ func (r *Reactor) broadcastNewValidBlockMessage(rs *cstypes.RoundState) {
}
}
-func (r *Reactor) broadcastHasVoteMessage(vote *consensus.Vote) {
+func (r *Reactor) broadcastHasVoteMessage(vote *types.Vote) {
r.stateCh.Out <- p2p.Envelope{
Broadcast: true,
Message: &tmcons.HasVote{
@@ -396,7 +393,7 @@ func (r *Reactor) broadcastHasVoteMessage(vote *consensus.Vote) {
func (r *Reactor) subscribeToBroadcastEvents() {
err := r.state.evsw.AddListenerForEvent(
listenerIDConsensus,
-events.EventNewRoundStepValue,
+types.EventNewRoundStepValue,
func(data tmevents.EventData) {
r.broadcastNewRoundStepMessage(data.(*cstypes.RoundState))
select {
@@ -411,7 +408,7 @@ func (r *Reactor) subscribeToBroadcastEvents() {
err = r.state.evsw.AddListenerForEvent(
listenerIDConsensus,
-events.EventValidBlockValue,
+types.EventValidBlockValue,
func(data tmevents.EventData) {
r.broadcastNewValidBlockMessage(data.(*cstypes.RoundState))
},
@@ -422,9 +419,9 @@ func (r *Reactor) subscribeToBroadcastEvents() {
err = r.state.evsw.AddListenerForEvent(
listenerIDConsensus,
-events.EventVoteValue,
+types.EventVoteValue,
func(data tmevents.EventData) {
-r.broadcastHasVoteMessage(data.(*consensus.Vote))
+r.broadcastHasVoteMessage(data.(*types.Vote))
},
)
if err != nil {
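
For context, the evsw wiring above is the internal event-switch pattern: one listener ID, one callback per event value, with the type assertion done inside the callback. A stripped-down sketch, assuming the tmevents (libs/events) API imported in this file:

err := evsw.AddListenerForEvent(listenerIDConsensus, types.EventVoteValue,
	func(data tmevents.EventData) {
		// The consensus state only ever fires *types.Vote for this
		// event value, so the assertion is safe by construction.
		r.broadcastHasVoteMessage(data.(*types.Vote))
	},
)
if err != nil {
	r.Logger.Error("failed to add listener for events", "err", err)
}
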
@@ -446,7 +443,7 @@ func makeRoundStepMessage(rs *cstypes.RoundState) *tmcons.NewRoundStep {
}
}
-func (r *Reactor) sendNewRoundStepMessage(peerID p2ptypes.NodeID) {
+func (r *Reactor) sendNewRoundStepMessage(peerID types.NodeID) {
rs := r.state.GetRoundState()
msg := makeRoundStepMessage(rs)
r.stateCh.Out <- p2p.Envelope{
@@ -656,7 +653,7 @@ OUTER_LOOP:
// pickSendVote picks a vote and sends it to the peer. It will return true if
// there is a vote to send and false otherwise.
-func (r *Reactor) pickSendVote(ps *PeerState, votes consensus.VoteSetReader) bool {
+func (r *Reactor) pickSendVote(ps *PeerState, votes types.VoteSetReader) bool {
if vote, ok := ps.PickVoteToSend(votes); ok {
r.Logger.Debug("sending vote message", "ps", ps, "vote", vote)
r.voteCh.Out <- p2p.Envelope{
@@ -673,23 +670,6 @@ func (r *Reactor) pickSendVote(ps *PeerState, votes consensus.VoteSetReader) boo
return false
}
-func (r *Reactor) sendVoteFromCommit(ps *PeerState, commit *metadata.Commit) bool {
-if vote, ok := ps.PickVoteFromCommit(commit); ok {
-r.Logger.Debug("sending vote message", "ps", ps, "vote", vote)
-r.voteCh.Out <- p2p.Envelope{
-To: ps.peerID,
-Message: &tmcons.Vote{
-Vote: vote.ToProto(),
-},
-}
-ps.SetHasVote(vote)
-return true
-}
-return false
-}
func (r *Reactor) gossipVotesForHeight(rs *cstypes.RoundState, prs *cstypes.PeerRoundState, ps *PeerState) bool {
logger := r.Logger.With("height", prs.Height).With("peer", ps.peerID)
@@ -801,10 +781,8 @@ OUTER_LOOP:
if blockStoreBase > 0 && prs.Height != 0 && rs.Height >= prs.Height+2 && prs.Height >= blockStoreBase {
// Load the block commit for prs.Height, which contains precommit
// signatures for prs.Height.
-// FIXME: It's incredibly inefficient to be sending individual votes to a node that is lagging behind.
-// We should instead be gossiping entire commits
if commit := r.state.blockStore.LoadBlockCommit(prs.Height); commit != nil {
-if r.sendVoteFromCommit(ps, commit) {
+if r.pickSendVote(ps, commit) {
logger.Debug("picked Catchup commit to send", "height", prs.Height)
continue OUTER_LOOP
}
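
Replacing sendVoteFromCommit with a plain pickSendVote call works because a commit can already be read as a vote set: if *types.Commit satisfies types.VoteSetReader (it did in trees of this vintage, via BitArray, GetByIndex, and friends), the dedicated helper is redundant. A compile-time check for that assumption:

// Fails to compile if *types.Commit ever stops implementing VoteSetReader.
var _ types.VoteSetReader = (*types.Commit)(nil)
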
@@ -1118,7 +1096,7 @@ func (r *Reactor) handleDataMessage(envelope p2p.Envelope, msgI Message) error {
}
if r.WaitSync() {
logger.Info("ignoring message received during sync", "msg", msgI)
logger.Info("ignoring message received during sync", "msg", fmt.Sprintf("%T", msgI))
return nil
}
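
The logging tweak above swaps the full message for its dynamic type: %T renders only the type name instead of dumping the whole payload into an Info line on every ignored message during sync. A quick illustration (VoteMessage is one of the reactor's Message implementations; the exact printed name depends on the package):

var msgI Message = &VoteMessage{}
long := fmt.Sprintf("%v", msgI)  // stringifies the entire message, payload included
short := fmt.Sprintf("%T", msgI) // just the dynamic type, e.g. "*consensus.VoteMessage"
_, _ = long, short
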

View File

@@ -15,6 +15,7 @@ import (
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/kvstore"
abci "github.com/tendermint/tendermint/abci/types"
cfg "github.com/tendermint/tendermint/config"
cryptoenc "github.com/tendermint/tendermint/crypto/encoding"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
@@ -25,16 +26,11 @@ import (
"github.com/tendermint/tendermint/internal/test/factory"
"github.com/tendermint/tendermint/libs/log"
tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
"github.com/tendermint/tendermint/pkg/abci"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/events"
"github.com/tendermint/tendermint/pkg/evidence"
p2ptypes "github.com/tendermint/tendermint/pkg/p2p"
tmcons "github.com/tendermint/tendermint/proto/tendermint/consensus"
sm "github.com/tendermint/tendermint/state"
statemocks "github.com/tendermint/tendermint/state/mocks"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
dbm "github.com/tendermint/tm-db"
)
@@ -44,14 +40,14 @@ var (
type reactorTestSuite struct {
network *p2ptest.Network
-states map[p2ptypes.NodeID]*State
-reactors map[p2ptypes.NodeID]*Reactor
-subs map[p2ptypes.NodeID]events.Subscription
-blocksyncSubs map[p2ptypes.NodeID]events.Subscription
-stateChannels map[p2ptypes.NodeID]*p2p.Channel
-dataChannels map[p2ptypes.NodeID]*p2p.Channel
-voteChannels map[p2ptypes.NodeID]*p2p.Channel
-voteSetBitsChannels map[p2ptypes.NodeID]*p2p.Channel
+states map[types.NodeID]*State
+reactors map[types.NodeID]*Reactor
+subs map[types.NodeID]types.Subscription
+blocksyncSubs map[types.NodeID]types.Subscription
+stateChannels map[types.NodeID]*p2p.Channel
+dataChannels map[types.NodeID]*p2p.Channel
+voteChannels map[types.NodeID]*p2p.Channel
+voteSetBitsChannels map[types.NodeID]*p2p.Channel
}
func chDesc(chID p2p.ChannelID) p2p.ChannelDescriptor {
@@ -65,10 +61,10 @@ func setup(t *testing.T, numNodes int, states []*State, size int) *reactorTestSu
rts := &reactorTestSuite{
network: p2ptest.MakeNetwork(t, p2ptest.NetworkOptions{NumNodes: numNodes}),
-states: make(map[p2ptypes.NodeID]*State),
-reactors: make(map[p2ptypes.NodeID]*Reactor, numNodes),
-subs: make(map[p2ptypes.NodeID]events.Subscription, numNodes),
-blocksyncSubs: make(map[p2ptypes.NodeID]events.Subscription, numNodes),
+states: make(map[types.NodeID]*State),
+reactors: make(map[types.NodeID]*Reactor, numNodes),
+subs: make(map[types.NodeID]types.Subscription, numNodes),
+blocksyncSubs: make(map[types.NodeID]types.Subscription, numNodes),
}
rts.stateChannels = rts.network.MakeChannelsNoCleanup(t, chDesc(StateChannel), new(tmcons.Message), size)
@@ -95,10 +91,10 @@ func setup(t *testing.T, numNodes int, states []*State, size int) *reactorTestSu
reactor.SetEventBus(state.eventBus)
-blocksSub, err := state.eventBus.Subscribe(context.Background(), testSubscriber, events.EventQueryNewBlock, size)
+blocksSub, err := state.eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, size)
require.NoError(t, err)
-fsSub, err := state.eventBus.Subscribe(context.Background(), testSubscriber, events.EventQueryBlockSyncStatus, size)
+fsSub, err := state.eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryBlockSyncStatus, size)
require.NoError(t, err)
rts.states[nodeID] = state
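
The two Subscribe calls above are what the assertions later in this file consume: each published event arrives as a pubsub message on the subscription's Out channel. A sketch of the consumption side, using the same API as the setup code:

sub, err := state.eventBus.Subscribe(context.Background(), testSubscriber, types.EventQueryNewBlock, size)
require.NoError(t, err)

msg := <-sub.Out() // blocks until the next NewBlock event is published
newBlock := msg.Data().(types.EventDataNewBlock).Block
require.NotNil(t, newBlock)
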
@@ -136,7 +132,7 @@ func setup(t *testing.T, numNodes int, states []*State, size int) *reactorTestSu
return rts
}
-func validateBlock(block *block.Block, activeVals map[string]struct{}) error {
+func validateBlock(block *types.Block, activeVals map[string]struct{}) error {
if block.LastCommit.Size() != len(activeVals) {
return fmt.Errorf(
"commit size doesn't match number of active validators. Got %d, expected %d",
@@ -157,14 +153,14 @@ func waitForAndValidateBlock(
t *testing.T,
n int,
activeVals map[string]struct{},
-blocksSubs []events.Subscription,
+blocksSubs []types.Subscription,
states []*State,
txs ...[]byte,
) {
fn := func(j int) {
msg := <-blocksSubs[j].Out()
-newBlock := msg.Data().(events.EventDataNewBlock).Block
+newBlock := msg.Data().(types.EventDataNewBlock).Block
require.NoError(t, validateBlock(newBlock, activeVals))
@@ -190,7 +186,7 @@ func waitForAndValidateBlockWithTx(
t *testing.T,
n int,
activeVals map[string]struct{},
-blocksSubs []events.Subscription,
+blocksSubs []types.Subscription,
states []*State,
txs ...[]byte,
) {
@@ -200,7 +196,7 @@ func waitForAndValidateBlockWithTx(
BLOCK_TX_LOOP:
for {
msg := <-blocksSubs[j].Out()
-newBlock := msg.Data().(events.EventDataNewBlock).Block
+newBlock := msg.Data().(types.EventDataNewBlock).Block
require.NoError(t, validateBlock(newBlock, activeVals))
@@ -235,17 +231,17 @@ func waitForBlockWithUpdatedValsAndValidateIt(
t *testing.T,
n int,
updatedVals map[string]struct{},
-blocksSubs []events.Subscription,
+blocksSubs []types.Subscription,
css []*State,
) {
fn := func(j int) {
-var newBlock *block.Block
+var newBlock *types.Block
LOOP:
for {
msg := <-blocksSubs[j].Out()
-newBlock = msg.Data().(events.EventDataNewBlock).Block
+newBlock = msg.Data().(types.EventDataNewBlock).Block
if newBlock.LastCommit.Size() == len(updatedVals) {
break LOOP
}
@@ -269,7 +265,7 @@ func waitForBlockWithUpdatedValsAndValidateIt(
func ensureBlockSyncStatus(t *testing.T, msg tmpubsub.Message, complete bool, height int64) {
t.Helper()
-status, ok := msg.Data().(events.EventDataBlockSyncStatus)
+status, ok := msg.Data().(types.EventDataBlockSyncStatus)
require.True(t, ok)
require.Equal(t, complete, status.Complete)
@@ -297,7 +293,7 @@ func TestReactorBasic(t *testing.T) {
wg.Add(1)
// wait till everyone makes the first new block
-go func(s events.Subscription) {
+go func(s types.Subscription) {
defer wg.Done()
<-s.Out()
}(sub)
@@ -309,7 +305,7 @@ func TestReactorBasic(t *testing.T) {
wg.Add(1)
// wait till everyone makes the consensus switch
-go func(s events.Subscription) {
+go func(s types.Subscription) {
defer wg.Done()
msg := <-s.Out()
ensureBlockSyncStatus(t, msg, true, 0)
@@ -340,9 +336,9 @@ func TestReactorWithEvidence(t *testing.T) {
defer os.RemoveAll(thisConfig.RootDir)
-ensureDir(path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
+ensureDir(t, path.Dir(thisConfig.Consensus.WalFile()), 0700) // dir for wal
app := appFunc()
-vals := consensus.TM2PB.ValidatorUpdates(state.Validators)
+vals := types.TM2PB.ValidatorUpdates(state.Validators)
app.InitChain(abci.RequestInitChain{Validators: vals})
pv := privVals[i]
@@ -364,12 +360,12 @@ func TestReactorWithEvidence(t *testing.T) {
// everyone includes evidence of another double signing
vIdx := (i + 1) % n
-ev := evidence.NewMockDuplicateVoteEvidenceWithValidator(1, defaultTestTime, privVals[vIdx], config.ChainID())
+ev := types.NewMockDuplicateVoteEvidenceWithValidator(1, defaultTestTime, privVals[vIdx], config.ChainID())
evpool := &statemocks.EvidencePool{}
evpool.On("CheckEvidence", mock.AnythingOfType("p2ptypes.EvidenceList")).Return(nil)
evpool.On("PendingEvidence", mock.AnythingOfType("int64")).Return([]evidence.Evidence{
evpool.On("CheckEvidence", mock.AnythingOfType("types.EvidenceList")).Return(nil)
evpool.On("PendingEvidence", mock.AnythingOfType("int64")).Return([]types.Evidence{
ev}, int64(len(ev.Bytes())))
evpool.On("Update", mock.AnythingOfType("state.State"), mock.AnythingOfType("p2ptypes.EvidenceList")).Return()
evpool.On("Update", mock.AnythingOfType("state.State"), mock.AnythingOfType("types.EvidenceList")).Return()
evpool2 := sm.EmptyEvidencePool{}
@@ -378,7 +374,7 @@ func TestReactorWithEvidence(t *testing.T) {
cs.SetLogger(log.TestingLogger().With("module", "consensus"))
cs.SetPrivValidator(pv)
-eventBus := events.NewEventBus()
+eventBus := types.NewEventBus()
eventBus.SetLogger(log.TestingLogger().With("module", "events"))
err = eventBus.Start()
require.NoError(t, err)
@@ -403,9 +399,9 @@ func TestReactorWithEvidence(t *testing.T) {
// We expect for each validator that is the proposer to propose one piece of
// evidence.
-go func(s events.Subscription) {
+go func(s types.Subscription) {
msg := <-s.Out()
-block := msg.Data().(events.EventDataNewBlock).Block
+block := msg.Data().(types.EventDataNewBlock).Block
require.Len(t, block.Evidence.Evidence, 1)
wg.Done()
@@ -456,7 +452,7 @@ func TestReactorCreatesBlockWhenEmptyBlocksFalse(t *testing.T) {
wg.Add(1)
// wait till everyone makes the first new block
-go func(s events.Subscription) {
+go func(s types.Subscription) {
<-s.Out()
wg.Done()
}(sub)
@@ -486,7 +482,7 @@ func TestReactorRecordsVotesAndBlockParts(t *testing.T) {
wg.Add(1)
// wait till everyone makes the first new block
-go func(s events.Subscription) {
+go func(s types.Subscription) {
<-s.Out()
wg.Done()
}(sub)
@@ -561,7 +557,7 @@ func TestReactorVotingPowerChange(t *testing.T) {
wg.Add(1)
// wait till everyone makes the first new block
-go func(s events.Subscription) {
+go func(s types.Subscription) {
<-s.Out()
wg.Done()
}(sub)
@@ -569,7 +565,7 @@ func TestReactorVotingPowerChange(t *testing.T) {
wg.Wait()
-blocksSubs := []events.Subscription{}
+blocksSubs := []types.Subscription{}
for _, sub := range rts.subs {
blocksSubs = append(blocksSubs, sub)
}
@@ -631,6 +627,7 @@ func TestReactorValidatorSetChanges(t *testing.T) {
nPeers := 7
nVals := 4
states, _, _, cleanup := randConsensusNetWithPeers(
+t,
config,
nVals,
nPeers,
@@ -661,7 +658,7 @@ func TestReactorValidatorSetChanges(t *testing.T) {
wg.Add(1)
// wait till everyone makes the first new block
-go func(s events.Subscription) {
+go func(s types.Subscription) {
<-s.Out()
wg.Done()
}(sub)
@@ -677,7 +674,7 @@ func TestReactorValidatorSetChanges(t *testing.T) {
newValidatorTx1 := kvstore.MakeValSetChangeTx(valPubKey1ABCI, testMinPower)
-blocksSubs := []events.Subscription{}
+blocksSubs := []types.Subscription{}
for _, sub := range rts.subs {
blocksSubs = append(blocksSubs, sub)
}

View File

@@ -9,14 +9,12 @@ import (
"reflect"
"time"
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/crypto/merkle"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/pkg/abci"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/events"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
)
var crc32c = crc32.MakeTable(crc32.Castagnoli)
@@ -38,7 +36,7 @@ var crc32c = crc32.MakeTable(crc32.Castagnoli)
// Unmarshal and apply a single message to the consensus state as if it were
// received in receiveRoutine. Lines that start with "#" are ignored.
// NOTE: receiveRoutine should not be running.
-func (cs *State) readReplayMessage(msg *TimedWALMessage, newStepSub events.Subscription) error {
+func (cs *State) readReplayMessage(msg *TimedWALMessage, newStepSub types.Subscription) error {
// Skip meta messages which exist for demarcating boundaries.
if _, ok := msg.Msg.(EndHeightMessage); ok {
return nil
@@ -46,14 +44,14 @@ func (cs *State) readReplayMessage(msg *TimedWALMessage, newStepSub events.Subsc
// for logging
switch m := msg.Msg.(type) {
-case events.EventDataRoundState:
+case types.EventDataRoundState:
cs.Logger.Info("Replay: New Step", "height", m.Height, "round", m.Round, "step", m.Step)
// these are playback checks
ticker := time.After(time.Second * 2)
if newStepSub != nil {
select {
case stepMsg := <-newStepSub.Out():
-m2 := stepMsg.Data().(events.EventDataRoundState)
+m2 := stepMsg.Data().(types.EventDataRoundState)
if m.Height != m2.Height || m.Round != m2.Round || m.Step != m2.Step {
return fmt.Errorf("roundState mismatch. Got %v; Expected %v", m2, m)
}
@@ -204,21 +202,21 @@ type Handshaker struct {
stateStore sm.Store
initialState sm.State
store sm.BlockStore
-eventBus events.BlockEventPublisher
-genDoc *consensus.GenesisDoc
+eventBus types.BlockEventPublisher
+genDoc *types.GenesisDoc
logger log.Logger
nBlocks int // number of blocks applied to the state
}
func NewHandshaker(stateStore sm.Store, state sm.State,
-store sm.BlockStore, genDoc *consensus.GenesisDoc) *Handshaker {
+store sm.BlockStore, genDoc *types.GenesisDoc) *Handshaker {
return &Handshaker{
stateStore: stateStore,
initialState: state,
store: store,
-eventBus: events.NopEventBus{},
+eventBus: types.NopEventBus{},
genDoc: genDoc,
logger: log.NewNopLogger(),
nBlocks: 0,
@@ -231,7 +229,7 @@ func (h *Handshaker) SetLogger(l log.Logger) {
// SetEventBus - sets the event bus for publishing block related events.
// If not called, it defaults to types.NopEventBus.
-func (h *Handshaker) SetEventBus(eventBus events.BlockEventPublisher) {
+func (h *Handshaker) SetEventBus(eventBus types.BlockEventPublisher) {
h.eventBus = eventBus
}
@@ -304,12 +302,12 @@ func (h *Handshaker) ReplayBlocks(
// If appBlockHeight == 0 it means that we are at genesis and hence should send InitChain.
if appBlockHeight == 0 {
-validators := make([]*consensus.Validator, len(h.genDoc.Validators))
+validators := make([]*types.Validator, len(h.genDoc.Validators))
for i, val := range h.genDoc.Validators {
-validators[i] = consensus.NewValidator(val.PubKey, val.Power)
+validators[i] = types.NewValidator(val.PubKey, val.Power)
}
-validatorSet := consensus.NewValidatorSet(validators)
-nextVals := consensus.TM2PB.ValidatorUpdates(validatorSet)
+validatorSet := types.NewValidatorSet(validators)
+nextVals := types.TM2PB.ValidatorUpdates(validatorSet)
pbParams := h.genDoc.ConsensusParams.ToProto()
req := abci.RequestInitChain{
Time: h.genDoc.GenesisTime,
@@ -335,12 +333,12 @@ func (h *Handshaker) ReplayBlocks(
}
// If the app returned validators or consensus params, update the state.
if len(res.Validators) > 0 {
-vals, err := consensus.PB2TM.ValidatorUpdates(res.Validators)
+vals, err := types.PB2TM.ValidatorUpdates(res.Validators)
if err != nil {
return nil, err
}
-state.Validators = consensus.NewValidatorSet(vals)
-state.NextValidators = consensus.NewValidatorSet(vals).CopyIncrementProposerPriority(1)
+state.Validators = types.NewValidatorSet(vals)
+state.NextValidators = types.NewValidatorSet(vals).CopyIncrementProposerPriority(1)
} else if len(h.genDoc.Validators) == 0 {
// If validator set is not set in genesis and still empty after InitChain, exit.
return nil, fmt.Errorf("validator set is nil in genesis and still empty after InitChain")
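
Both branches above lean on the TM2PB/PB2TM converter pair, which map between domain validators and the ABCI update type in opposite directions; the replay logic implicitly relies on them being inverses. A round-trip sketch using the names from this diff:

updates := types.TM2PB.ValidatorUpdates(valSet)    // *ValidatorSet -> []abci.ValidatorUpdate
vals, err := types.PB2TM.ValidatorUpdates(updates) // []abci.ValidatorUpdate -> []*types.Validator
if err != nil {
	return nil, err
}
state.Validators = types.NewValidatorSet(vals)
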
@@ -527,13 +525,13 @@ func (h *Handshaker) replayBlock(state sm.State, height int64, proxyApp proxy.Ap
return state, nil
}
-func assertAppHashEqualsOneFromBlock(appHash []byte, b *block.Block) {
-if !bytes.Equal(appHash, b.AppHash) {
+func assertAppHashEqualsOneFromBlock(appHash []byte, block *types.Block) {
+if !bytes.Equal(appHash, block.AppHash) {
panic(fmt.Sprintf(`block.AppHash does not match AppHash after replay. Got %X, expected %X.
Block: %v
`,
-appHash, b.AppHash, b))
+appHash, block.AppHash, block))
}
}
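
assertAppHashEqualsOneFromBlock is the replay-time sanity check: after re-applying a block, the app hash computed by the application must equal the AppHash recorded in the block header. A minimal sketch of the same check with an error return instead of a panic (a softer variant for illustration, not what this code does):

func checkAppHash(appHash []byte, block *types.Block) error {
	if !bytes.Equal(appHash, block.AppHash) {
		return fmt.Errorf("app hash mismatch after replay: got %X, expected %X", appHash, block.AppHash)
	}
	return nil
}
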

Some files were not shown because too many files have changed in this diff.