Compare commits


57 Commits

Author SHA1 Message Date
dependabot[bot]
0b58342a46 build(deps): Bump github.com/golangci/golangci-lint (#9343)
Bumps [github.com/golangci/golangci-lint](https://github.com/golangci/golangci-lint) from 1.48.0 to 1.49.0.
- [Release notes](https://github.com/golangci/golangci-lint/releases)
- [Changelog](https://github.com/golangci/golangci-lint/blob/master/CHANGELOG.md)
- [Commits](https://github.com/golangci/golangci-lint/compare/v1.48.0...v1.49.0)

---
updated-dependencies:
- dependency-name: github.com/golangci/golangci-lint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-31 08:01:48 -04:00
Thane Thomson
a9a9e1ccf0 Update docs to reference v0.37 branch instead of main where applicable (#9337)
Signed-off-by: Thane Thomson <connect@thanethomson.com>

Signed-off-by: Thane Thomson <connect@thanethomson.com>
2022-08-30 22:50:13 -04:00
Thane Thomson
cceea4de22 chore: Format and fix lints (#9336)
* make format

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Fix linting directives

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* make mockery

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Appease CI linter

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Appease CI linter

Signed-off-by: Thane Thomson <connect@thanethomson.com>

Signed-off-by: Thane Thomson <connect@thanethomson.com>
2022-08-30 12:28:46 -04:00
Thane Thomson
7b40167f58 abci: Port EventAttribute field type change to main (#9335)
* types: Refactor EventAttribute (#6408)

Cherry-pick of 09a6ad7b1e

* Ensure context is honored

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Replace with tagged switch

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Ensure contexts are honored

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Add UPGRADING note about type change

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Remove unnecessary conversion

Signed-off-by: Thane Thomson <connect@thanethomson.com>

Signed-off-by: Thane Thomson <connect@thanethomson.com>
Co-authored-by: Aleksandr Bezobchuk <alexanderbez@users.noreply.github.com>
2022-08-30 10:13:55 -04:00
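
For readers affected by the EventAttribute change ported above, here is a minimal sketch of event construction assuming the port keeps the shape introduced by #6408, where `EventAttribute.Key` and `EventAttribute.Value` are plain strings. The `transfer` event and its attribute names are illustrative only, and the exact field set should be checked against the generated `abci/types` package on this branch.

```go
package main

import (
	"fmt"

	abci "github.com/tendermint/tendermint/abci/types"
)

// buildTransferEvent sketches event construction after the field type change:
// Key and Value are set directly as strings, with no []byte conversions.
func buildTransferEvent(sender, recipient string) abci.Event {
	return abci.Event{
		Type: "transfer", // illustrative event type, not taken from the commit above
		Attributes: []abci.EventAttribute{
			{Key: "sender", Value: sender, Index: true},
			{Key: "recipient", Value: recipient, Index: true},
		},
	}
}

func main() {
	fmt.Println(buildTransferEvent("alice", "bob"))
}
```
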
dependabot[bot]
2313f35800 build(deps): Bump google.golang.org/grpc from 1.48.0 to 1.49.0 (#9322)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.48.0 to 1.49.0.
Release notes (sourced from [google.golang.org/grpc's releases](https://github.com/grpc/grpc-go/releases)):

**Release 1.49.0**

New Features
- gcp/observability: add support for Environment Variable `GRPC_CONFIG_OBSERVABILITY_JSON` (#5525)
- gcp/observability: add support for custom tags (#5565)

Behavior Changes
- server: reduce log level from Warning to Info for early connection establishment errors (#5524)
  - Special Thanks: @jpkrohling

Bug Fixes
- client: fix race in flow control that could lead to unexpected EOF errors (#5494)
- client: fix a race that could cause RPCs to time out instead of failing more quickly with UNAVAILABLE (#5503)
- client & server: fix a panic caused by passing a `nil` stats handler to `grpc.WithStatsHandler` or `grpc.StatsHandler` (#5543)
- transport/server: fix a race that could cause a stray header to be sent (#5513)
- balancer: give precedence to `IDLE` over `TRANSIENT_FAILURE` when aggregating connectivity state (#5473)
- xds/xdsclient: request correct resource name when user specifies a new style resource name with empty authority (#5488)
- xds/xdsclient: NACK endpoint resources with zero weight (#5560)
- xds/xdsclient: fix bug that would reset resource version information after ADS stream restart (#5422)
- xds/xdsclient: fix goroutine leaks when load reporting is enabled (#5505)
- xds/ringhash: fix config update processing to recreate ring and picker when min/max ring size changes (#5557)
- xds/ringhash: avoid recreating subChannels when update doesn't change address weight information (#5431)
- xds/priority: fix bug which could cause priority LB to block all traffic after a config update (#5549)
- xds: fix bug when environment variable `GRPC_EXPERIMENTAL_ENABLE_OUTLIER_DETECTION` is set to true (#5537)

Commits
- `1c29e07` Change version to 1.49.0 (#5583)
- `8e5a84e` xds/resolver: generate channel ID randomly (#5603)
- `92cee34` gcp/observability: Add logging filters for logging, tracing, and metrics API ...
- `c7fe135` O11Y: Added support for custom tags (#5565)
- `7981af4` test/kokoro: add missing image tagging to the xDS interop url map buildscript...
- `6f34b7a` xdsclient: NACK endpoint resource if load_balancing_weight is specified and i...
- `f9409d3` ringhash: handle config updates properly (#5557)
- `946dde0` xdsclient: NACK endpoint resources with zero weight (#5560)
- `b89f49b` xdsclient: deflake Test/LDSWatch_PartialValid (#5552)
- `9bc72de` grpc: remove mentions of WithBalancerName from comments (#5555)
- Additional commits viewable in the [compare view](https://github.com/grpc/grpc-go/compare/v1.48.0...v1.49.0)
2022-08-29 13:01:00 +00:00
Sergio Mena
3233568cee Added PrepareProposal and ProcessProposal to CHANGELOG_PENDING.md. Reworked UPGRADING.md (#9315)
* master-->main in `CONTRIBUTING.md`

* Added feature branches in `CONTRIBUTING.md`

* Fixes to `UPGRADING.md`

* [cherrypicked] docs: minor tweaks (#5404)

* docs: fix /validators description

Refs https://github.com/tendermint/spec/pull/169

* consensus: remove nil err from logging statement

* update UPGRADING.md

* note about LightBlocks

* Reworked "Unreleased" section of `UPGRADING.md`

* Added PrepareProposal and ProcessProposal to `CHANGELOG_PENDING.md`

* Apply suggestions from @thanethomson's code review

Co-authored-by: Thane Thomson <connect@thanethomson.com>

* Addressed @tychoish's comment

Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
Co-authored-by: Thane Thomson <connect@thanethomson.com>
2022-08-26 23:53:13 +02:00
Thane Thomson
9f76e8da15 Temporarily revert #9175: remove lastresulthash from merklization in lastresult hash (#9313)
* Revert "remove lastresulthash from merklization in lastresult hash (#9175)"

This reverts commit bff63aec83.

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Clarify wording in ABCI upgrade guidelines

Signed-off-by: Thane Thomson <connect@thanethomson.com>

Signed-off-by: Thane Thomson <connect@thanethomson.com>
2022-08-24 14:34:36 -04:00
Thane Thomson
28cfb039c9 docs: Add redirects to GitHub for spec links (#9311)
* Add spec redirects

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Use hack to make redirects work

Signed-off-by: Thane Thomson <connect@thanethomson.com>

Signed-off-by: Thane Thomson <connect@thanethomson.com>
2022-08-24 10:14:54 -04:00
Callum Waters
5f932a53f0 e2e: fix generation by considering evidence age in retain height (#9306) 2022-08-23 11:28:32 +02:00
Sergio Mena
50b5c23d88 Merge branch 'feature/abci++ppp' 2022-08-22 17:16:17 +02:00
Callum Waters
b37f062619 e2e: add evidence tests (#9292) 2022-08-22 13:33:47 +02:00
samricotta
aa303edaef Forward port discard abci responses change (#9286)
* Backport of sam/abci-responses (#9090) (#9159)

Signed-off-by: Thane Thomson <connect@thanethomson.com>
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
Co-authored-by: Thane Thomson <connect@thanethomson.com>
2022-08-22 12:53:01 +02:00
Callum Waters
2d8df1bd4e proto: deduplicate consensus params (#9287) 2022-08-22 10:50:21 +02:00
Thane Thomson
d886bc8fdd docs: Capture UX-oriented practices for changelog (#9284) 2022-08-20 10:02:39 -04:00
Thane Thomson
daaf5d6441 docs: Update all docs to prepare for v0.37 (#9243)
* Update docs references from master to main

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update DOCS_README to reflect current reality

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update vuepress config with current versions and updated discussions link

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update generated docs versions

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update docs build to use temp folder instead of home

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Document build-docs Makefile target

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Add serve-docs Makefile target to serve local build of docs

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Ensure 404 page is copied during docs build

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Redirect /master/ to /main/

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Attempt to resolve #7908

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update OpenAPI references from master to main

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update CHANGELOG references from master to main

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update Docker readme references from master to main

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update UPGRADING references from master to main

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update package-specific documentation references from master to main

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update spec references from master to main

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update all code comment references to docs site from master to main

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Build v0.34.x as "latest"

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Explicitly mark v0.34 docs as latest in version selector

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update all links from docs.tendermint.com/main to docs.tendermint.com/latest

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* ci: Redeploy docs on pushes to v0.34.x

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Temporarily copy spec directory into docs while building

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Add nav link to main and clearly mark as unstable

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Revert to only publishing docs in nav for v0.34 and v0.33 with no latest

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Link to docs.tendermint.com/v0.34 from RFCs

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Rather just use main for all docs.tendermint.com references on main branch

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Rename GitHub tree links from master to main

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update link for ABCI Rust client

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update github links from master to main

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update badges in root readme

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Remove codecov badge since we do not use it any more

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Remove Java and Kotlin tutorials

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Remove specs from docs build

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Migrate spec links to GitHub repo from docs site

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Remove references to non-existent PEX reactor spec

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Fix linting badge in README

Signed-off-by: Thane Thomson <connect@thanethomson.com>

Signed-off-by: Thane Thomson <connect@thanethomson.com>
2022-08-19 13:18:33 -04:00
Thane Thomson
3cc976482d docs: Apply standard Apache 2.0 license (#9293)
Signed-off-by: Thane Thomson <connect@thanethomson.com>

Signed-off-by: Thane Thomson <connect@thanethomson.com>
2022-08-19 11:36:29 -04:00
Callum Waters
0ca3a89c90 e2e: add abci delays (#9254) 2022-08-19 12:00:58 +02:00
William Banfield
1a4d9397e9 abci++: update PrepareProposal API to accept just a list of bytes (#9283)
This changes the ResponsePrepareProposal type, replacing the []TxRecord list with just a list of raw transaction bytes. The change is made in the .proto file and in all necessary additional places in the code.
2022-08-18 16:33:19 -04:00
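
A minimal sketch of a handler against the simplified response type described above is shown below. It assumes the v0.37-era Go interface where `PrepareProposal` takes a `RequestPrepareProposal` and returns a `ResponsePrepareProposal` whose `Txs` field is `[][]byte`; the embedding and field names should be verified against the generated types in this branch.

```go
package app

import (
	abci "github.com/tendermint/tendermint/abci/types"
)

// PrepareApp embeds BaseApplication so only PrepareProposal needs a body here.
type PrepareApp struct {
	abci.BaseApplication
}

// PrepareProposal hands the raw transaction bytes back unchanged. With the
// simplified response type there is no per-transaction TxRecord wrapper; a
// real application could reorder, add, or drop transactions before returning.
func (app *PrepareApp) PrepareProposal(req abci.RequestPrepareProposal) abci.ResponsePrepareProposal {
	return abci.ResponsePrepareProposal{Txs: req.Txs}
}
```
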
fatcat22
c8302c5fcb consensus: fix round number when handling RoundStepNewRound timeout (#9252) 2022-08-18 10:24:06 +02:00
Sergio Mena
fb9afcc969 Restore e2e nightly workflow on feature/abci++ppp branch (#9280) 2022-08-17 21:05:53 +02:00
Sergio Mena
622b930e3a abci-cli: PrepareProposal and ProcessProposal (#9281)
* [cherrypicked] abci-cli: added `PrepareProposal` command to cli (#8656)

* Prepare proposal cli

* [cherrypicked + fixes] abci-cli: Add `process_proposal` command to abci-cli (#8901)

* Add `process_proposal` command to abci-cli

* Added process proposal to the 'tutorial' examples

* Added entry in CHANGELOG_PENDING.md

* Allow empty blocks in PrepareProposal, ProcessProposal, and FinalizeBlock

* Fix minimum arguments

* Add tests for empty block

* Updated abci-cli doc

Co-authored-by: Sergio Mena <sergio@informal.systems>
Co-authored-by: Jasmina Malicevic <jasmina.dustinac@gmail.com>

* Addressed @williambanfield's comment

Co-authored-by: Jasmina Malicevic <jasmina.dustinac@gmail.com>
Co-authored-by: Hernán Vanzetto <hernan.vanzetto@gmail.com>
2022-08-17 20:47:52 +02:00
William Banfield
1069ffc6aa config: backport the rename of fastsync to blocksync (#9259)
This is largely a cherry pick of #6755 with some additional fixups added where detected. 
This change moves the blockchain package to a package called blocksync. Additionally, it renames the relevant uses of the term `fastsync` to `blocksync`.

closes: #9227 

#### PR checklist

- [ ] Tests written/updated, or no tests needed
- [x] `CHANGELOG_PENDING.md` updated, or no changelog entry needed
- [x] Updated relevant documentation (`docs/`) and code comments, or no
      documentation updates needed
2022-08-17 15:19:20 +00:00
William Banfield
7bd86ec004 consensus: backport abci and consensus metrics (#9273)
Partial backport of #8480
2022-08-17 09:37:45 -04:00
samricotta
9993514893 abci: Remove SetOption #5447 #9091 (#9266)
* Remove set option for abci
2022-08-16 22:58:04 +02:00
Igor Konnov
3003e05581 Update type annotations in the TLA+ spec of Tendermint for accountability (#9263)
* update Apalache type annotations and split evidence into 3 variables

* remove the duplicate of AllPrevotes, due to merge
2022-08-16 16:12:04 +02:00
AdamKorcz
498657f128 test/fuzz: fix OSS-Fuzz build (#9183) 2022-08-16 15:59:24 +02:00
William Banfield
c322b89b2a RFC-024: block structure consolidation (#8457)
This RFC lays out a proposed set of changes for consolidating the set of information that may be included in the block structure.

{{ [rendered](https://github.com/tendermint/tendermint/blob/wb/rfc-block-structure/docs/rfc/rfc-020-block-structure-consolidation.md) }}
2022-08-16 13:51:55 +00:00
Thane Thomson
cbc7a1abcf spec: Sync Light Client TLA+ code with master (#9238)
* Typo fix in README.md (#350)

* Updated Apalache type annotations (#395)

Co-authored-by: Prajjwol Gautam <prajjwol@gmail.com>
Co-authored-by: Kukovec <jure.kukovec@gmail.com>
2022-08-12 14:33:47 -04:00
Sergio Mena
670abbc330 Merge pull request #9236 from tendermint/sergio/merging-main-abci++ppp
Merge `main` into `feature/abci++ppp`
2022-08-12 15:55:47 +02:00
Sergio Mena
7dc4f934b0 Merge branch main into feature/abci++ppp 2022-08-12 13:59:19 +02:00
Sergio Mena
cb570f6672 ABCI types.proto. Handle remaining discrepancies (#9224)
* [cherrypicked] version: add abci version to handshake (#5706)

- add `AbciVersion` to RequestInfo

Closes: #2804

* make proto-gen

* Bump ABCI version: Prepare and Process proposal

Co-authored-by: Marko <marbar3778@yahoo.com>
2022-08-12 11:57:13 +02:00
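
Since the cherry-pick above adds `AbciVersion` to `RequestInfo`, an application can compare the node's ABCI semver against the one it was built for during the handshake. A minimal sketch follows, assuming the v0.37-era `Info(RequestInfo) ResponseInfo` signature; the `expectedABCIVersion` constant and the response values are illustrative, not taken from the commit.

```go
package app

import (
	"log"

	abci "github.com/tendermint/tendermint/abci/types"
)

// expectedABCIVersion is illustrative; a real application would pin the ABCI
// semver it was developed against.
const expectedABCIVersion = "1.0.0"

// InfoApp embeds BaseApplication and overrides only Info.
type InfoApp struct {
	abci.BaseApplication
}

// Info inspects the ABCI version the node reports in the handshake request.
func (app *InfoApp) Info(req abci.RequestInfo) abci.ResponseInfo {
	if req.AbciVersion != expectedABCIVersion {
		log.Printf("node speaks ABCI %s, application expects %s", req.AbciVersion, expectedABCIVersion)
	}
	return abci.ResponseInfo{
		Data:       "example-app",
		Version:    "0.1.0",
		AppVersion: 1,
	}
}
```
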
Marko
f36999e484 retract all of 0.35 (#9198) 2022-08-12 11:47:36 +02:00
Thane Thomson
ae1fc74f80 abci: Make ABCI and P2P wire-level length delimiters consistent (#9182)
* abci: use protoio for length delimitation (#5818)

Migrate ABCI to use protoio (uint64 length delimiters) instead of int64
length delimiters to be consistent with the approach used in the P2P
layer.

Closes: #5783

* Import ReadMsg interface change from #5868

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Convert PR number to link in UPGRADING

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update Tendermint Socket Protocol docs to reflect length prefix encoding change

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Clarify that length delimiters are varints

Signed-off-by: Thane Thomson <connect@thanethomson.com>

Signed-off-by: Thane Thomson <connect@thanethomson.com>
Co-authored-by: Marko <marbar3778@yahoo.com>
2022-08-11 13:30:39 -04:00
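
The wire-format change above standardizes on uvarint (uint64) length prefixes for ABCI socket messages, matching the P2P layer. Tendermint's own implementation lives in its protoio helpers; the standard-library sketch below only illustrates the framing itself, with an opaque byte payload standing in for a marshaled protobuf message.

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// writeDelimited prefixes the payload with its length encoded as a uvarint,
// the delimiter style described in the commit above.
func writeDelimited(w *bytes.Buffer, payload []byte) {
	var lenBuf [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(lenBuf[:], uint64(len(payload)))
	w.Write(lenBuf[:n])
	w.Write(payload)
}

// readDelimited reads back one uvarint-length-prefixed frame.
func readDelimited(r *bufio.Reader) ([]byte, error) {
	size, err := binary.ReadUvarint(r)
	if err != nil {
		return nil, err
	}
	frame := make([]byte, size)
	if _, err := io.ReadFull(r, frame); err != nil {
		return nil, err
	}
	return frame, nil
}

func main() {
	var buf bytes.Buffer
	writeDelimited(&buf, []byte("example request"))
	frame, err := readDelimited(bufio.NewReader(&buf))
	fmt.Println(string(frame), err)
}
```
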
Sergio Mena
c4eb6113a8 Remove unused code in internal directory (#9223) 2022-08-11 17:33:55 +02:00
Sergio Mena
25b0c7c78e PrepareProposal/ProcessProposal: Dealing with block data exposed to App (#9219)
* [cherrypicked] abci++: only include meaningful header fields in data passed-through to application (#8216)

* make proto-gen

* Fix kvstore tests

* Small changes to abci protobufs taken from v0.36.x

* make proto-gen (again)

* [Partial cherrypick] Restore `Commit` to the ABCI++ spec, and other late modifications (backport #8796) (#8936)

* Restore `Commit` to the ABCI++ spec, and other late modifications (#8796)

* Added VoteExtensionsEnableHeight

* Fix reference to `modified`

* Removed old pseudo-code, now included in spec

* Removed markdown warnings in abci++_basic_concepts_002_draft.md

* Restored `Commit` in the Methods section

* Addressed remaining markdown warnings

* Revisited intro and basic concepts section

* Extra pass at all spec sections to recover Commit, and other ABCI++ spec modifications

* Fixed links

* make proto-gen

* Remove _primes_ from spec notation

* Update proto/tendermint/abci/types.proto

Co-authored-by: Callum Waters <cmwaters19@gmail.com>

* Update spec/abci++/abci++_tmint_expected_behavior_002_draft.md

Co-authored-by: Callum Waters <cmwaters19@gmail.com>

* Addressed @cmwaters' comments

* Addressed @angbrav's and @mpoke's comments on spec

* make proto-gen

* Fix MD anchor reference

* Clarify throughout the spec when `ProcessProposal` and `VerifyVoteExtension` are called

* Update spec/abci++/abci++_app_requirements_002_draft.md

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

* Update spec/abci++/abci++_app_requirements_002_draft.md

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

* Update spec/abci++/abci++_app_requirements_002_draft.md

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Update spec/abci++/abci++_basic_concepts_002_draft.md

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Update spec/abci++/abci++_basic_concepts_002_draft.md

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

* Update spec/abci++/abci++_basic_concepts_002_draft.md

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Update spec/abci++/abci++_methods_002_draft.md

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

* Update spec/abci++/abci++_tmint_expected_behavior_002_draft.md

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Addressed comments

* Renamed 'draft' files

* Adapted links to new filenames

* Fixed links and minor cosmetic changes

* Renamed 'byzantine_validators' to 'misbehavior' in ABCI++ spec and protobufs

* make proto-gen

* Renamed 'byzantine_validators' to 'misbehavior' in the code

* Fixed link

* Update spec/abci++/abci++_basic_concepts.md (repeated across many applied review suggestions)

Co-authored-by: Daniel <daniel.cason@usi.ch>

* Update spec/abci++/abci++_methods.md (repeated across many applied review suggestions)

Co-authored-by: Daniel <daniel.cason@usi.ch>

* Addressed @cason's comments

* Clarified conditions for `ProcessProposal` call at proposer

Co-authored-by: Callum Waters <cmwaters19@gmail.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
Co-authored-by: Daniel <daniel.cason@usi.ch>
(cherry picked from commit 331860c2a8)

* Fixed merge conflicts

Co-authored-by: Sergio Mena <sergio@informal.systems>

* make proto-gen (and again)

* make build

* fix UTs

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2022-08-11 11:21:43 +02:00
Jacob Gadikian
a6dde14ec4 do not use ioutil (#9206) 2022-08-10 14:35:14 -04:00
Sam Kleinman
152a2fa5c9 metrics: add metricsgen makefile target (#9172)
Follow-on work, a missing piece of #9156, to make it possible to
generate metrics automatically using the existing build infrastructure.
2022-08-09 21:00:55 +00:00
dependabot[bot]
f295b4d431 build(deps): Bump github.com/golangci/golangci-lint from 1.42.1 to 1.48.0 (#9203)
Bumps [github.com/golangci/golangci-lint](https://github.com/golangci/golangci-lint) from 1.42.1 to 1.48.0.
Release notes (sourced from [github.com/golangci/golangci-lint's releases](https://github.com/golangci/golangci-lint/releases)):

**v1.48.0**
- 368c41cd build(deps): bump github.com/daixiang0/gci from 0.5.0 to 0.6.0 (#3035)
- 2d8fea81 build(deps): bump github.com/daixiang0/gci from 0.6.0 to 0.6.2 (#3058)
- aeb5860c build(deps): bump github.com/kisielk/errcheck from 1.6.1 to 1.6.2 (#3059)
- 0559b922 build(deps): bump revgrep to HEAD (#3054)
- 6313fa9a contextcheck: disable linter (#3050)
- 0ba1388a feat: add usestdlibvars (#3016)
- 1557692e feat: go1.19 support (#3037)
- 15cba447 gci: add missing `custom-order` setting (#3052)
- 9a1b9492 ifshort: deprecate linter (#3034)
- f8f8f9a6 nolint: drop allow-leading-space option and add "nolint:all" (#3002)

**v1.47.3**
- 72fc41ce build(deps): bump github.com/BurntSushi/toml from 1.1.0 to 1.2.0 (#3009)
- 57d61afb build(deps): bump github.com/GaijinEntertainment/go-exhaustruct/v2 from 2.2.0 to 2.2.2 (#3030)
- 9cb17e4f build(deps): bump github.com/alingse/asasalint from 0.0.10 to 0.0.11 (#3003)
- 2ab46788 build(deps): bump github.com/daixiang0/gci from 0.4.3 to 0.5.0 (#3031)
- 03d9b113 build(deps): bump github.com/ryancurrah/gomodguard from 1.2.3 to 1.2.4 (#3029)
- e55f22c7 build(deps): bump github.com/sirupsen/logrus from 1.8.1 to 1.9.0 (#3010)
- c7ed8b67 build(deps): bump github.com/sivchari/nosnakecase from 1.5.0 to 1.7.0 (#3008)
- 95d57d99 build(deps): bump gitlab.com/bosi/decorder from 0.2.2 to 0.2.3 (#3033)
- d186efe9 build(deps): bump honnef.co/go/tools from 0.3.2 to 0.3.3 (#3032)
- 846fab81 cgo: fix linters ignoring Cgo files (#3025)
- d44cd49a feat: remove some go1.18 limitations (#3001)
- 886fbd71 gci: fix panic with invalid configuration option (#3019)

**v1.47.2**
- 61673b34 revive: ignore slow rules (#2999)

**v1.47.1**
- a91463cd build(deps): bump github.com/daixiang0/gci from 0.4.2 to 0.4.3 (#2992)
- 4c8bdc70 build(deps): bump github.com/sivchari/tenv from 1.6.0 to 1.7.0 (#2988)
- 4e60e8a8 gci: fix options display (#2989)
- fd87bd1e gci: remove the use of stdin (#2984)

**v1.47.0**
- b4154027 Add linter `asasalint` to lint pass []any as any (#2968)
- 1d8a15a0 add nosnakecase lint (#2828)
- 2a1edcef build(deps): bump github.com/Antonboom/errname from 0.1.6 to 0.1.7 (#2888)
- c766184c build(deps): bump github.com/GaijinEntertainment/go-exhaustruct/v2 from 2.1.0 to 2.2.0 (#2916)
- b8f1e2a5 build(deps): bump github.com/daixiang0/gci from 0.3.4 to 0.4.0 (#2965)
- 5e183652 build(deps): bump github.com/daixiang0/gci from 0.4.0 to 0.4.1 (#2973)
- e60937a1 build(deps): bump github.com/daixiang0/gci from 0.4.1 to 0.4.2 (#2979)

... (truncated)

Changelog (sourced from [github.com/golangci/golangci-lint's changelog](https://github.com/golangci/golangci-lint/blob/master/CHANGELOG.md)):

**v1.48.0**
1. new linters
   - `usestdlibvars`: https://github.com/sashamelentyev/usestdlibvars
2. updated linters
   - `contextcheck`: disable linter
   - `errcheck`: from 1.6.1 to 1.6.2
   - `gci`: add missing `custom-order` setting
   - `gci`: from 0.5.0 to 0.6.0
   - `ifshort`: deprecate linter
   - `nolint`: drop allow-leading-space option and add "nolint:all"
   - `revgrep`: bump to HEAD
3. documentation
   - remove outdated info on source install
4. misc
   - go1.19 support

**v1.47.3**
1. updated linters:
   - remove some go1.18 limitations
   - `asasalint`: from 0.0.10 to 0.0.11
   - `decorder`: from 0.2.2 to v0.2.3
   - `gci`: fix panic with invalid configuration option
   - `gci`: from 0.4.3 to v0.5.0
   - `go-exhaustruct`: from 2.2.0 to 2.2.2
   - `gomodguard`: from 1.2.3 to 1.2.4
   - `nosnakecase`: from 1.5.0 to 1.7.0
   - `honnef.co/go/tools`: from 0.3.2 to v0.3.3
2. misc
   - cgo: fix linters ignoring CGo files

**v1.47.2**
1. updated linters:
   - `revive`: ignore slow rules

**v1.47.1**
1. updated linters:
   - `gci`: from 0.4.2 to 0.4.3
   - `gci`: remove the use of stdin
   - `gci`: fix options display
   - `tenv`: from 1.6.0 to 1.7.0
   - `unparam`: bump to HEAD

**v1.47.0**
1. new linters:
   - `asasalint`: https://github.com/alingse/asasalint

... (truncated)

Commits
- `2d8fea8` build(deps): bump github.com/daixiang0/gci from 0.6.0 to 0.6.2 (#3058)
- `aeb5860` build(deps): bump github.com/kisielk/errcheck from 1.6.1 to 1.6.2 (#3059)
- `0559b92` build(deps): bump revgrep to HEAD (#3054)
- `3ffde13` dev: remove stable from actions/setup-go (#3055)
- `1557692` feat: go1.19 support (#3037)
- `6313fa9` contextcheck: disable linter (#3050)
- `0ba1388` feat: add usestdlibvars (#3016)
- `15cba44` gci: add missing `custom-order` setting (#3052)
- `452544a` build(deps): bump gatsby from 4.15.2 to 4.19.2 in /docs (#3046)
- `f9dfb68` build(deps): bump gatsby-transformer-yaml from 4.13.0 to 4.19.0 in /docs (#3045)
- Additional commits viewable in the [compare view](https://github.com/golangci/golangci-lint/compare/v1.42.1...v1.48.0)
2022-08-09 15:21:16 +00:00
William Banfield
0bea0647fe tools: remove mockery from tools.go (#9196)
The `mockery` project recommends against using a binary of `mockery` that has been created using `go install`. https://github.com/vektra/mockery/pull/456. Developers of Tendermint wishing to generate mocks should avoid having a version of `mockery` on their path that does not match the version listed in  [mockery_generate.sh](10e1ac8fea/scripts/mockery_generate.sh (L11)). To make this easier for developers, the `mockery_generate.sh` script uses a containerized copy of `mockery` if `mockery` is not present on the developer's `PATH`. This containerized version of `mockery` uses the same version of mockery as our CI pipelines and allows all developers to automatically use the same version without having to manage it themselves. 

#### PR checklist

- [ ] Tests written/updated, or no tests needed
- [ ] `CHANGELOG_PENDING.md` updated, or no changelog entry needed
- [ ] Updated relevant documentation (`docs/`) and code comments, or no
      documentation updates needed
2022-08-09 15:12:31 +00:00
Callum Waters
f861062ee2 abci: implement process proposal to spec (#9122) 2022-08-09 15:15:18 +02:00
Sam Kleinman
d5ec276052 e2e: fix out of sync configuration (#9199)
The v0.34.x tests have been failing (or rather reporting failures; I don't
believe this is a real failure) because the CI configuration has been out of
sync with itself, likely due to a mistake while backporting configs from the
`master` branch.

The entire 0.34.x e2e test suite takes 26 minutes to run, plus about 7
minutes to build the Docker image. Each split has to build the same Docker
image, which caps the amount of parallelism we can get at the moment. Having
more groups just means we would be burning money building the Docker image
with no meaningful difference in throughput. For a nightly test that people
don't really wait on, the current latency (time to completion) of roughly 19
minutes isn't causing friction.
2022-08-09 12:02:14 +00:00
Marko
bff63aec83 remove lastresulthash from merklization in lastresult hash (#9175)
remove gas from merklization in headers


I'm not sure where to change the docs, since main points to the spec repo but that repo is archived. Maybe someone can help me?
2022-08-09 08:16:41 +00:00
William Banfield
69845bb44e metrics: fixup after cherry-pick (#9195)
* fixup after cherry-pick

* cherry-pick fixups
2022-08-08 15:55:01 -04:00
William Banfield
10e1ac8fea metrics: transition all metrics to using metricsgen generated constructors. (port of #8488) (#9178)
This pull request completes the change to the `metricsgen` metrics. It adds `go generate` directives to all of the files containing the `Metrics` structs.

Using the outputs of `metricsdiff` between these generated metrics and `main`, we can see that there is a minimal diff between the two sets of metrics when run locally. The diff stems from the removal of the word 'message', which was done in v0.36+ and is ultimately better phrasing. This metric has not yet been released, so the new phrasing is preferred.

```
./metricsdiff old new
Metric changes:
+++ tendermint_consensus_full_prevote_delay
+++ tendermint_consensus_quorum_prevote_delay
--- tendermint_consensus_full_prevote_message_delay
--- tendermint_consensus_quorum_prevote_message_delay
```

This change also adds parsing for a `metrics:` key in a field comment: if a comment line begins with `//metrics:`, the rest of the line is interpreted as the metric help text. Additionally, a bug where lists of labels were not properly quoted in the `metricsgen` rendered output was fixed.

In my view, docs and tests are not needed for this internal-only change.

---
#### PR checklist

- [ ] Tests written/updated, or no tests needed
- [ ] `CHANGELOG_PENDING.md` updated, or no changelog entry needed
- [ ] Updated relevant documentation (`docs/`) and code comments, or no
      documentation updates needed

Ports #8488 to main
2022-08-08 16:53:00 +00:00
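
To make the `//metrics:` convention described above concrete, here is a minimal sketch of what a package's `Metrics` struct might look like, assuming the structs use go-kit metric interfaces as elsewhere in the codebase. The `go:generate` command line and the field set are illustrative only (check the repository's real directives for the exact invocation and flags); the comment lines beginning with `//metrics:` are the part the commit says `metricsgen` turns into help text.

```go
package consensus

import "github.com/go-kit/kit/metrics"

// NOTE: the metricsgen invocation below is an assumption for illustration;
// see the repository's real go:generate directives for the actual command.
//go:generate go run github.com/tendermint/tendermint/scripts/metricsgen -struct=Metrics

// Metrics contains the metrics exposed by this (hypothetical) package.
type Metrics struct {
	// Height of the chain.
	//metrics:Height of the chain.
	Height metrics.Gauge

	// Number of rounds seen at the current height.
	//metrics:Number of rounds seen at the current height.
	Rounds metrics.Gauge
}
```
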
dependabot[bot]
74dd21eb89 build(deps): Bump docker/build-push-action from 3.1.0 to 3.1.1 (#9189)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 3.1.0 to 3.1.1.
Release notes (sourced from [docker/build-push-action's releases](https://github.com/docker/build-push-action/releases)):

**v3.1.1**
- Fix GitHub token not passed with Git context if subdir defined by @crazy-max (#663)
- Replace deprecated `fs.rmdir` with `fs.rm` by @bendrucker (#657)

**Full Changelog**: https://github.com/docker/build-push-action/compare/v3.1.0...v3.1.1

Commits
- `c84f382` Merge pull request #663 from crazy-max/fix-git-token-cond
- `cd5d0b7` Merge pull request #661 from dud225/subdir_context
- `30a3224` Fix GitHub token not passed with Git context if subdir defined
- `1f19633` Update comment regarding the support of subdir context
- `67af6dc` Merge pull request #657 from bendrucker/deprecated-fs-rmdir
- `988cb09` replace deprecated `fs.rmdir` with `fs.rm`
- See full diff in the [compare view](https://github.com/docker/build-push-action/compare/v3.1.0...v3.1.1)
2022-08-08 13:35:59 +00:00
dependabot[bot]
ad1f9b49bc build(deps): Bump github.com/prometheus/client_golang (#9190)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.12.2 to 1.13.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.12.2...v1.13.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-08 09:34:21 -04:00
Thane Thomson
ef4e37b532 ci: Restore ToC check for ADRs/RFCs (#9180)
* Import presubmit TOC check script from master and fix warning

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Fix misspelled ADR link discovered by presubmit script

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Restore docs-toc workflow

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Create makefile target for docs ToC check

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Use makefile target in CI workflow for docs ToC check

Signed-off-by: Thane Thomson <connect@thanethomson.com>
2022-08-06 13:19:05 -04:00
Thane Thomson
03c79b666d ci: Fix nightly E2E notifications (#9179)
Update the nightly E2E workflows to fix the notifications for the
v0.34.x branch while also simplifying the messages and making them more
readable.

Signed-off-by: Thane Thomson <connect@thanethomson.com>
2022-08-06 10:17:21 -04:00
Thane Thomson
1148759a94 ci: Update nightly E2E notifications (#9177) 2022-08-05 21:38:14 -04:00
William Banfield
608933b73e consensus: additional timing metrics (port of #7849) (#9168)
This change introduces an additional set of metrics aimed at helping operators understand the timing for consensus.

This change adds the following metrics:

```
tendermint_consensus_round_duration_seconds_bucket{chain_id="test-chain-IrF74Y",le="0.1"} 29
tendermint_consensus_round_duration_seconds_bucket{chain_id="test-chain-IrF74Y",le="0.2682695795279726"} 29
tendermint_consensus_round_duration_seconds_bucket{chain_id="test-chain-IrF74Y",le="0.7196856730011522"} 29
tendermint_consensus_round_duration_seconds_bucket{chain_id="test-chain-IrF74Y",le="1.9306977288832508"} 29
tendermint_consensus_round_duration_seconds_bucket{chain_id="test-chain-IrF74Y",le="5.1794746792312125"} 29
tendermint_consensus_round_duration_seconds_bucket{chain_id="test-chain-IrF74Y",le="13.894954943731381"} 29
tendermint_consensus_round_duration_seconds_bucket{chain_id="test-chain-IrF74Y",le="37.27593720314942"} 29
tendermint_consensus_round_duration_seconds_bucket{chain_id="test-chain-IrF74Y",le="100.00000000000006"} 29
tendermint_consensus_round_duration_seconds_bucket{chain_id="test-chain-IrF74Y",le="+Inf"} 29
tendermint_consensus_round_duration_seconds_sum{chain_id="test-chain-IrF74Y"} 0.028651869999999996
tendermint_consensus_round_duration_seconds_count{chain_id="test-chain-IrF74Y"} 29
```
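
The bucket boundaries in the dump above follow an exponential layout from 0.1s to 100s over eight buckets, labeled by `chain_id`. As a rough illustration of how such a series could be defined with `client_golang` directly (Tendermint itself wires its metrics through go-kit, so this is not the code behind the numbers above):

```go
package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// roundDurationSeconds mirrors the shape of the series above: exponential
// buckets spanning 0.1s to 100s, labeled by chain_id.
// ExponentialBucketsRange is available in recent client_golang releases;
// older versions can use ExponentialBuckets with an explicit growth factor.
var roundDurationSeconds = prometheus.NewHistogramVec(prometheus.HistogramOpts{
	Namespace: "tendermint",
	Subsystem: "consensus",
	Name:      "round_duration_seconds",
	Help:      "Histogram of round durations.",
	Buckets:   prometheus.ExponentialBucketsRange(0.1, 100, 8),
}, []string{"chain_id"})

func init() {
	prometheus.MustRegister(roundDurationSeconds)
}

// ObserveRound records how long a consensus round took for the given chain.
func ObserveRound(chainID string, start time.Time) {
	roundDurationSeconds.WithLabelValues(chainID).Observe(time.Since(start).Seconds())
}
```

The step-duration series that follow are recorded the same way, with an additional `step` label:
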

```
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Commit",le="0.1"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Commit",le="0.2682695795279726"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Commit",le="0.7196856730011522"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Commit",le="1.9306977288832508"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Commit",le="5.1794746792312125"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Commit",le="13.894954943731381"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Commit",le="37.27593720314942"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Commit",le="100.00000000000006"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Commit",le="+Inf"} 29
tendermint_consensus_step_duration_seconds_sum{chain_id="test-chain-IrF74Y",step="Commit"} 0.26650875
tendermint_consensus_step_duration_seconds_count{chain_id="test-chain-IrF74Y",step="Commit"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewHeight",le="0.1"} 0
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewHeight",le="0.2682695795279726"} 0
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewHeight",le="0.7196856730011522"} 0
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewHeight",le="1.9306977288832508"} 28
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewHeight",le="5.1794746792312125"} 28
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewHeight",le="13.894954943731381"} 28
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewHeight",le="37.27593720314942"} 28
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewHeight",le="100.00000000000006"} 28
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewHeight",le="+Inf"} 28
tendermint_consensus_step_duration_seconds_sum{chain_id="test-chain-IrF74Y",step="NewHeight"} 27.773921702
tendermint_consensus_step_duration_seconds_count{chain_id="test-chain-IrF74Y",step="NewHeight"} 28
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewRound",le="0.1"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewRound",le="0.2682695795279726"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewRound",le="0.7196856730011522"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewRound",le="1.9306977288832508"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewRound",le="5.1794746792312125"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewRound",le="13.894954943731381"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewRound",le="37.27593720314942"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewRound",le="100.00000000000006"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="NewRound",le="+Inf"} 29
tendermint_consensus_step_duration_seconds_sum{chain_id="test-chain-IrF74Y",step="NewRound"} 0.168961052
tendermint_consensus_step_duration_seconds_count{chain_id="test-chain-IrF74Y",step="NewRound"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Precommit",le="0.1"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Precommit",le="0.2682695795279726"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Precommit",le="0.7196856730011522"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Precommit",le="1.9306977288832508"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Precommit",le="5.1794746792312125"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Precommit",le="13.894954943731381"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Precommit",le="37.27593720314942"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Precommit",le="100.00000000000006"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Precommit",le="+Inf"} 29
tendermint_consensus_step_duration_seconds_sum{chain_id="test-chain-IrF74Y",step="Precommit"} 0.06414115999999999
tendermint_consensus_step_duration_seconds_count{chain_id="test-chain-IrF74Y",step="Precommit"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Prevote",le="0.1"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Prevote",le="0.2682695795279726"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Prevote",le="0.7196856730011522"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Prevote",le="1.9306977288832508"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Prevote",le="5.1794746792312125"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Prevote",le="13.894954943731381"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Prevote",le="37.27593720314942"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Prevote",le="100.00000000000006"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Prevote",le="+Inf"} 29
tendermint_consensus_step_duration_seconds_sum{chain_id="test-chain-IrF74Y",step="Prevote"} 0.177714525
tendermint_consensus_step_duration_seconds_count{chain_id="test-chain-IrF74Y",step="Prevote"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Propose",le="0.1"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Propose",le="0.2682695795279726"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Propose",le="0.7196856730011522"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Propose",le="1.9306977288832508"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Propose",le="5.1794746792312125"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Propose",le="13.894954943731381"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Propose",le="37.27593720314942"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Propose",le="100.00000000000006"} 29
tendermint_consensus_step_duration_seconds_bucket{chain_id="test-chain-IrF74Y",step="Propose",le="+Inf"} 29
tendermint_consensus_step_duration_seconds_sum{chain_id="test-chain-IrF74Y",step="Propose"} 0.221851927
tendermint_consensus_step_duration_seconds_count{chain_id="test-chain-IrF74Y",step="Propose"} 29
```


```
tendermint_consensus_block_gossip_parts_received{chain_id="test-chain-IrF74Y",matches_current="true"} 29
```


---

#### PR checklist

- [x] Tests written/updated, or no tests needed
- [ ] `CHANGELOG_PENDING.md` updated, or no changelog entry needed
- [x] Updated relevant documentation (`docs/`) and code comments, or no
      documentation updates needed

Closes: #9166
2022-08-05 21:24:02 +00:00
William Banfield
b92a19b2ce proxy: add initial set of abci metrics (port of #7115) (#9169)
* internal/proxy: add initial set of abci metrics (#7115)

This PR adds an initial set of metrics for ABCI use. The initial metrics enable the calculation of timing histograms and call counts for each of the ABCI methods. The metrics are also labeled as either 'sync' or 'async' to indicate whether the method call was performed using ABCI's `*Async` methods.

An example of these metrics is included here for reference:
```
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.0001"} 0
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.0004"} 5
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.002"} 12
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.009"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.02"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.1"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.65"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="2"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="6"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="25"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="+Inf"} 13
tendermint_abci_connection_method_timing_sum{chain_id="ci",method="commit",type="sync"} 0.007802058000000001
tendermint_abci_connection_method_timing_count{chain_id="ci",method="commit",type="sync"} 13
```

These metrics can easily be graphed using Prometheus's `histogram_quantile(...)` function to pick out a particular quantile to graph or examine. I chose buckets as a rough estimate of the expected range of times for ABCI operations: they start at 0.0001 seconds and range up to 25 seconds. The hope is that this range captures enough possible times to be useful for us and for operators.
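
For reference, here is a minimal, self-contained Go sketch of how such a labeled timing histogram can be produced with the Prometheus client library. This is an illustration only, not the actual Tendermint metrics code; the metric name, labels, and buckets simply mirror the example output above.

```go
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// methodTiming mirrors the tendermint_abci_connection_method_timing histogram
// shown above: per-method call durations, labeled by chain, method, and call type.
var methodTiming = prometheus.NewHistogramVec(prometheus.HistogramOpts{
	Namespace: "tendermint",
	Subsystem: "abci",
	Name:      "connection_method_timing",
	Help:      "Duration of ABCI method calls in seconds.",
	Buckets:   []float64{0.0001, 0.0004, 0.002, 0.009, 0.02, 0.1, 0.65, 2, 6, 25},
}, []string{"chain_id", "method", "type"})

// observe times fn and records the elapsed seconds under the given labels.
func observe(chainID, method, callType string, fn func()) {
	start := time.Now()
	fn()
	methodTiming.WithLabelValues(chainID, method, callType).Observe(time.Since(start).Seconds())
}

func main() {
	prometheus.MustRegister(methodTiming)

	// Simulate a synchronous "commit" call so the histogram has some data.
	observe("ci", "commit", "sync", func() { time.Sleep(2 * time.Millisecond) })

	// Expose /metrics so the series can be scraped and graphed, e.g. with
	// histogram_quantile(0.95, sum by (le, method) (rate(tendermint_abci_connection_method_timing_bucket[5m]))).
	http.Handle("/metrics", promhttp.Handler())
	_ = http.ListenAndServe(":2112", nil)
}
```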

* fixup

* fixups

* docs: add abci timing metrics to the metrics docs (#7311)

* format table
2022-08-05 13:29:00 -04:00
William Banfield
6b499aeb31 DOCKER: use go 1.18 (#9170) 2022-08-04 17:57:53 -04:00
Sergio Mena
b2409b3345 Follow-up fixes to main PrepareProposal implementation (#9162)
* -----start------

* [cherrypicked] state: panic on ResponsePrepareProposal validation error (#8145)

* state: panic on ResponsePrepareProposal validation error

* lint++

Co-authored-by: Sam Kleinman <garen@tychoish.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>

* [cherrypicked] abci++: remove CheckTx call from PrepareProposal flow (#8176)

* [cherrypicked] abci++: correct max-size check to only operate on added and unmodified (#8242)

* [cherrypicked] Remove `ModifiedTxStatus` from the spec and the code (#8210)

* Outstanding abci-gen changes to 'pb.go' files

* Removed modified_tx_status from spec and protobufs

* Fix sed for OSX

* Regenerated abci protobufs with 'abci-proto-gen'

* Code changes. UTs e2e tests passing

* Recovered UT: TestPrepareProposalModifiedTxStatusFalse

* Adapted UT

* Fixed UT

* Revert "Fix sed for OSX"

This reverts commit e576708c61.

* Update internal/state/execution_test.go

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Update abci/example/kvstore/kvstore.go

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

* Update internal/state/execution_test.go

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Update spec/abci++/abci++_tmint_expected_behavior_002_draft.md

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Addressed some comments

* Added one test that tests error at the ABCI client + Fixed some mock calls

* Addressed remaining comments

* Update abci/example/kvstore/kvstore.go

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Update abci/example/kvstore/kvstore.go

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Update abci/example/kvstore/kvstore.go

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Update spec/abci++/abci++_tmint_expected_behavior_002_draft.md

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Addressed William's latest comments

* Addressed Michael's comment

* Fixed UT

* Some md fixes

* More md fixes

* gofmt

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

* make proto-gen

* Fixed testcase on PrepareProposal error

* mockery

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
Co-authored-by: Sam Kleinman <garen@tychoish.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2022-08-03 19:39:17 +02:00
Sergio Mena
d268e56383 Sync PrepareProposal with Spec. Main part (#9158)
* ----start----

* [PARTIAL cherry-pick] ABCI Vote Extension 2 (#6885)

* Cherry-picked #6567: state/types: refactor makeBlock, makeBlocks and makeTxs (#6567)

* [Cherrypicked] types: remove panic from block methods (#7501)

* [cherrypicked] abci++: synchronize PrepareProposal with the newest version of the spec (#8094)

This change implements the logic for the PrepareProposal ABCI++ method call. The main logic for creating and issuing the PrepareProposal request lives in execution.go and is tested in a set of new tests in execution_test.go. This change also updates the mempool mock to use a mockery-generated version and removes much of the plumbing for the no-longer-used ABCIResponses.
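
As a rough, self-contained sketch of the kind of application-side logic involved: the struct definitions below are stand-ins for illustration only (not the generated ABCI types), and the real request/response flow lives in execution.go and the application. The sketch assumes the v0.37-era shape of the call, where the app receives the mempool's transactions plus a byte limit and returns the transactions it wants proposed.

```go
package main

import "fmt"

// Stand-in types for illustration only; the real types are generated from the
// ABCI protobuf definitions (RequestPrepareProposal / ResponsePrepareProposal).
type RequestPrepareProposal struct {
	MaxTxBytes int64    // maximum total size of txs the proposal may contain
	Txs        [][]byte // txs offered by the mempool, in priority order
}

type ResponsePrepareProposal struct {
	Txs [][]byte // txs the application wants in the proposed block
}

// prepareProposal keeps the offered txs unchanged but enforces the max-size
// check: it only includes txs while the running total stays within MaxTxBytes.
func prepareProposal(req RequestPrepareProposal) ResponsePrepareProposal {
	var (
		resp  ResponsePrepareProposal
		total int64
	)
	for _, tx := range req.Txs {
		if total+int64(len(tx)) > req.MaxTxBytes {
			break
		}
		total += int64(len(tx))
		resp.Txs = append(resp.Txs, tx)
	}
	return resp
}

func main() {
	req := RequestPrepareProposal{
		MaxTxBytes: 10,
		Txs:        [][]byte{[]byte("tx-one"), []byte("tx-2"), []byte("tx-three")},
	}
	fmt.Printf("proposed %d txs\n", len(prepareProposal(req).Txs)) // proposed 2 txs
}
```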

* make proto-gen

* Backported EvidenceList's method ToABCI from #7961

* make build

* Fix mockery for Mempool

* mockery

* Backported abci Application mocks from #7961

* mockery2

* Fixed new PrepareProposal test cases in state/execution_test.go

* Fixed returned errors in consensus/state.go

* lint

* Addressed @cmwaters' comment

Co-authored-by: mconcat <monoidconcat@gmail.com>
Co-authored-by: JayT106 <JayT106@users.noreply.github.com>
Co-authored-by: Sam Kleinman <garen@tychoish.com>
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
2022-08-03 17:24:24 +02:00
Sergio Mena
2164883501 Old PrepareProposal and vote extension integration on feature/abci++ppp (#9117)
* abci: PrepareProposal-VoteExtension integration [2nd try] (#7821)

* PrepareProposal-VoteExtension integration (#6915)

* make proto-gen

* Fix protobuf crash in e2e nightly tests

* Update types/vote.go

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

* Addressed @creachadair's comments

Co-authored-by: mconcat <monoidconcat@gmail.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

* Proto changes

* make proto-gen

* Fixed UTs

* bump

* lint

* lint2

* lint3

* lint4

* lint5

* lint6

* no_lint

Co-authored-by: mconcat <monoidconcat@gmail.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2022-08-03 13:29:15 +02:00
Sergio Mena
61619ab072 Update feature/abci++ppp branch with latest infra changes in main (#9157)
* Update CODEOWNERS to use teams (#9129)

* Update CODEOWNERS to use teams

Update the `CODEOWNERS` file to use the
@tendermint/tendermint-engineering and @tendermint/tendermint-research
teams as opposed to adding people one by one. This makes repository
administration somewhat easier, especially when
onboarding/offboarding people.

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Add Ethan as superuser

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Prepare `main` to become new default branch (#9095)

* Update Makefile with changes from #7372

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Sync main GitHub config with master and update

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Remove unnecessary dot folders

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Sync dotfiles

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Remove unused Jepsen tests for now

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* tools: remove k8s (#6625)

Remove mintnet as discussed on team call.

closes #1941

* Restore nightly fuzz testing of P2P addrbook and pex

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Fix YAML lints

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Fix YAML formatting nits

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* More YAML nits

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* github: fix linter configuration errors and occluded errors (#6400)

* Minor fixes to OpenAPI spec to sync with structs on main

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Remove .github/auto-comment.yml - does not appear to be used

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Add issue config with link to discussions

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Adjust issue/PR templates to suit current process

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Remove unused RC branch config from release workflow

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Fix wildcard matching in build jobs config

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Document markdownlint config

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Restore manual E2E test group config

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Document linter workflow with local execution instructions

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Document and fix minor nit in Super-Linter markdownlint config

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update .github/ISSUE_TEMPLATE/bug-report.md

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Update pull request template to add language around discussions/issues

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* .golangci.yml: Deleted commented-out lines

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* ci: Drop "-2" from e2e-nightly-fail workflow

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Address triviality concern in PR template

Signed-off-by: Thane Thomson <connect@thanethomson.com>

Co-authored-by: Marko <marbar3778@yahoo.com>
Co-authored-by: Sam Kleinman <garen@tychoish.com>
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

Co-authored-by: Thane Thomson <connect@thanethomson.com>
Co-authored-by: Marko <marbar3778@yahoo.com>
Co-authored-by: Sam Kleinman <garen@tychoish.com>
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
2022-08-03 11:06:54 +02:00
Sergio Mena
58effdd8b3 Old PrepareProposal on feature/abci++ppp (#9106)
* [Rebased to v0.34.x] abci: PrepareProposal (#6544)

* fixed cherry-pick

* proto changes

* make proto-gen

* UT fixes

* generate Client directive

* mockery

* App fixes

* Disable 'modified tx' hack

* mockery

* Make format

* Fix lint

Co-authored-by: Marko <marbar3778@yahoo.com>
2022-07-27 15:09:02 +02:00
387 changed files with 10443 additions and 6235 deletions

View File

@@ -26,8 +26,6 @@ jobs:
run: |
set -euo pipefail
readonly MOCKERY=2.12.3 # N.B. no leading "v"
curl -sL "https://github.com/vektra/mockery/releases/download/v${MOCKERY}/mockery_${MOCKERY}_Linux_x86_64.tar.gz" | tar -C /usr/local/bin -xzf -
make mockery 2>/dev/null
if ! git diff --stat --exit-code ; then

View File

@@ -49,7 +49,7 @@ jobs:
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Publish to Docker Hub
uses: docker/build-push-action@v3.1.0
uses: docker/build-push-action@v3.1.1
with:
context: .
file: ./DOCKER/Dockerfile

View File

@@ -8,6 +8,7 @@ on:
push:
branches:
- main
- v0.34.x
paths:
- docs/**
- spec/**
@@ -40,7 +41,7 @@ jobs:
- uses: actions/upload-artifact@v3
with:
name: build-output
path: ~/output/
path: /tmp/tendermint-core-docs
deploy:
name: Deploy to GitHub Pages

View File

@@ -1,21 +1,20 @@
# TODO(thane): Re-enable once we've pulled in the ADRs and RFCs from master.
# Verify that important design docs have ToC entries.
#name: Check documentation ToC
#on:
# pull_request:
# push:
# branches:
# - main
#
#jobs:
# check:
# runs-on: ubuntu-latest
# steps:
# - uses: actions/checkout@v3
# - uses: technote-space/get-diff-action@v6
# with:
# PATTERNS: |
# docs/architecture/**
# docs/rfc/**
# - run: ./docs/presubmit.sh
# if: env.GIT_DIFF
name: Check documentation ToC
on:
pull_request:
push:
branches:
- main
jobs:
check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
docs/architecture/**
docs/rfc/**
- run: make check-docs-toc
if: env.GIT_DIFF

View File

@@ -15,7 +15,7 @@ jobs:
strategy:
fail-fast: false
matrix:
group: ['00', '01', '02', '03', "04"]
group: ['00', '01']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
@@ -27,6 +27,12 @@ jobs:
with:
ref: 'v0.34.x'
- name: Capture git repo info
id: git-info
run: |
echo "::set-output name=branch::`git branch --show-current`"
echo "::set-output name=commit::`git rev-parse HEAD`"
- name: Build
working-directory: test/e2e
# Run make jobs in parallel, since we can't run steps in parallel.
@@ -41,18 +47,58 @@ jobs:
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml
outputs:
git-branch: ${{ steps.git-info.outputs.branch }}
git-commit: ${{ steps.git-info.outputs.commit }}
e2e-nightly-fail:
needs: e2e-nightly-test
if: ${{ failure() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
uses: slackapi/slack-github-action@v1.21.0
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly E2E Tests
SLACK_ICON_EMOJI: ':skull:'
SLACK_COLOR: danger
SLACK_MESSAGE: Nightly E2E tests failed on v0.34.x
SLACK_FOOTER: ''
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
BRANCH: ${{ needs.e2e-nightly-test.outputs.git-branch }}
RUN_URL: "${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
COMMIT_URL: "${{ github.server_url }}/${{ github.repository }}/commit/${{ needs.e2e-nightly-test.outputs.git-commit }}"
with:
payload: |
{
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMIT_URL }}|commit> that caused the failure."
}
}
]
}
e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test
if: ${{ success() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on success
uses: slackapi/slack-github-action@v1.21.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
BRANCH: ${{ needs.e2e-nightly-test.outputs.git-branch }}
with:
payload: |
{
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":white_check_mark: Nightly E2E tests for `${{ env.BRANCH }}` passed."
}
}
]
}

View File

@@ -46,15 +46,26 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
uses: slackapi/slack-github-action@v1.21.0
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly E2E Tests
SLACK_ICON_EMOJI: ':skull:'
SLACK_COLOR: danger
SLACK_MESSAGE: Nightly E2E tests failed on main
SLACK_FOOTER: ''
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
BRANCH: ${{ github.ref_name }}
RUN_URL: "${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
COMMIT_URL: "${{ github.server_url }}/${{ github.repository }}/commit/${{ github.sha }}"
with:
payload: |
{
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMIT_URL }}|commit> that caused the failure."
}
}
]
}
e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test
@@ -62,12 +73,21 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Notify Slack on success
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
uses: slackapi/slack-github-action@v1.21.0
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly E2E Tests
SLACK_ICON_EMOJI: ':white_check_mark:'
SLACK_COLOR: good
SLACK_MESSAGE: Nightly E2E tests passed on main
SLACK_FOOTER: ''
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
BRANCH: ${{ github.ref_name }}
with:
payload: |
{
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":white_check_mark: Nightly E2E tests for `${{ env.BRANCH }}` passed."
}
}
]
}

View File

@@ -75,13 +75,24 @@ jobs:
if: ${{ needs.fuzz-nightly-test.outputs.crashers-count != 0 }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack if any crashers
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
- name: Notify Slack on failure
uses: slackapi/slack-github-action@v1.21.0
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly Fuzz Tests
SLACK_ICON_EMOJI: ':firecracker:'
SLACK_COLOR: danger
SLACK_MESSAGE: Crashers found in Nightly Fuzz tests
SLACK_FOOTER: ''
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
BRANCH: ${{ github.ref_name }}
CRASHERS: ${{ needs.fuzz-nightly-test.outputs.crashers-count }}
RUN_URL: "${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
with:
payload: |
{
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":skull: Nightly fuzz tests for `${{ env.BRANCH }}` failed with ${{ env.CRASHERS }} crasher(s). See the <${{ env.RUN_URL }}|run details>."
}
}
]
}

.gitignore vendored
View File

@@ -51,7 +51,6 @@ proto/spec/**/*.pb.go
*.aux
*.bbl
*.blg
*.log
*.pdf
*.gz
*.dvi

View File

@@ -438,7 +438,7 @@ Special thanks to external contributors on this release: @james-ray, @fedekunze,
- [abci] [\#5174](https://github.com/tendermint/tendermint/pull/5174) Remove `MockEvidence` in favor of testing with actual evidence types (`DuplicateVoteEvidence` & `LightClientAttackEvidence`) (@cmwaters)
- [abci] [\#5191](https://github.com/tendermint/tendermint/pull/5191) Add `InitChain.InitialHeight` field giving the initial block height (@erikgrinaker)
- [abci] [\#5227](https://github.com/tendermint/tendermint/pull/5227) Add `ResponseInitChain.app_hash` which is recorded in genesis block (@erikgrinaker)
- [config] [\#5147](https://github.com/tendermint/tendermint/pull/5147) Add `--consensus.double_sign_check_height` flag and `DoubleSignCheckHeight` config variable. See [ADR-51](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-051-double-signing-risk-reduction.md) (@dongsam)
- [config] [\#5147](https://github.com/tendermint/tendermint/pull/5147) Add `--consensus.double_sign_check_height` flag and `DoubleSignCheckHeight` config variable. See [ADR-51](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-051-double-signing-risk-reduction.md) (@dongsam)
- [db] [\#5233](https://github.com/tendermint/tendermint/pull/5233) Add support for `badgerdb` database backend (@erikgrinaker)
- [evidence] [\#4532](https://github.com/tendermint/tendermint/pull/4532) Handle evidence from light clients (@melekes)
- [evidence] [#4821](https://github.com/tendermint/tendermint/pull/4821) Amnesia (light client attack) evidence can be detected, verified and committed (@cmwaters)
@@ -452,7 +452,7 @@ Special thanks to external contributors on this release: @james-ray, @fedekunze,
- [rpc] [\#5017](https://github.com/tendermint/tendermint/pull/5017) Add `/check_tx` endpoint to check transactions without executing them or adding them to the mempool (@melekes)
- [rpc] [\#5108](https://github.com/tendermint/tendermint/pull/5108) Subscribe using the websocket for new evidence events (@cmwaters)
- [statesync] Add state sync support, where a new node can be rapidly bootstrapped by fetching state snapshots from peers instead of replaying blocks. See the `[statesync]` config section.
- [evidence] [\#5361](https://github.com/tendermint/tendermint/pull/5361) Add LightClientAttackEvidence and refactor evidence lifecycle - for more information see [ADR-059](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-059-evidence-composition-and-lifecycle.md) (@cmwaters)
- [evidence] [\#5361](https://github.com/tendermint/tendermint/pull/5361) Add LightClientAttackEvidence and refactor evidence lifecycle - for more information see [ADR-059](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-059-evidence-composition-and-lifecycle.md) (@cmwaters)
### IMPROVEMENTS
@@ -532,7 +532,7 @@ This security release fixes:
Tendermint 0.33.0 and above allow block proposers to include signatures for the
wrong block. This may happen naturally if you start a network, have it run for
some time and restart it **without changing the chainID**. (It is a
[misconfiguration](https://docs.tendermint.com/master/tendermint-core/using-tendermint.html)
[misconfiguration](https://docs.tendermint.com/v0.33/tendermint-core/using-tendermint.html)
to reuse chainIDs.) Correct block proposers will accidentally include signatures
for the wrong block if they see these signatures, and then commits won't validate,
making all proposed blocks invalid. A malicious validator (even with a minimal
@@ -831,7 +831,7 @@ and a validator address plus a timestamp. Note we may remove the validator
address & timestamp fields in the future (see ADR-25).
`lite2` package has been added to solve `lite` issues and introduce weak
subjectivity interface. Refer to the [spec](https://github.com/tendermint/spec/blob/master/spec/consensus/light-client.md) for complete details.
subjectivity interface. Refer to the [spec](https://github.com/tendermint/tendermint/blob/main/spec/consensus/light-client.md) for complete details.
`lite` package is now deprecated and will be removed in v0.34 release.
### BREAKING CHANGES:
@@ -1192,7 +1192,7 @@ Special thanks to external contributors on this release: @jon-certik, @gracenoah
*August 28, 2019*
@climber73 wrote the [Writing a Tendermint Core application in Java
(gRPC)](https://github.com/tendermint/tendermint/blob/master/docs/guides/java.md)
(gRPC)](https://github.com/tendermint/tendermint/blob/main/docs/guides/java.md)
guide.
Special thanks to external contributors on this release:
@@ -1225,7 +1225,7 @@ Special thanks to external contributors on this release:
### FEATURES:
- [blockchain] [\#3561](https://github.com/tendermint/tendermint/issues/3561) Add early version of the new blockchain reactor, which is supposed to be more modular and testable compared to the old version. To try it, you'll have to change `version` in the config file, [here](https://github.com/tendermint/tendermint/blob/master/config/toml.go#L303) NOTE: It's not ready for a production yet. For further information, see [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md) & [ADR-43](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-043-blockchain-riri-org.md)
- [blockchain] [\#3561](https://github.com/tendermint/tendermint/issues/3561) Add early version of the new blockchain reactor, which is supposed to be more modular and testable compared to the old version. To try it, you'll have to change `version` in the config file, [here](https://github.com/tendermint/tendermint/blob/main/config/toml.go#L303) NOTE: It's not ready for a production yet. For further information, see [ADR-40](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-040-blockchain-reactor-refactor.md) & [ADR-43](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-043-blockchain-riri-org.md)
- [mempool] [\#3826](https://github.com/tendermint/tendermint/issues/3826) Make `max_msg_bytes` configurable(@bluele)
- [node] [\#3846](https://github.com/tendermint/tendermint/pull/3846) Allow replacing existing p2p.Reactor(s) using [`CustomReactors`
option](https://godoc.org/github.com/tendermint/tendermint/node#CustomReactors).
@@ -1542,7 +1542,7 @@ Special thanks to external contributors on this release:
- [libs/db] [\#3611](https://github.com/tendermint/tendermint/issues/3611) Conditional compilation
* Use `cleveldb` tag instead of `gcc` to compile Tendermint with CLevelDB or
use `make build_c` / `make install_c` (full instructions can be found at
https://docs.tendermint.com/master/introduction/install.html#compile-with-cleveldb-support)
<https://docs.tendermint.com>)
* Use `boltdb` tag to compile Tendermint with bolt db
- [node] [\#3362](https://github.com/tendermint/tendermint/issues/3362) Return an error if `persistent_peers` list is invalid (except
when IP lookup fails)
@@ -1766,7 +1766,7 @@ more details.
- [rpc] [\#3269](https://github.com/tendermint/tendermint/issues/2826) Limit number of unique clientIDs with open subscriptions. Configurable via `rpc.max_subscription_clients`
- [rpc] [\#3269](https://github.com/tendermint/tendermint/issues/2826) Limit number of unique queries a given client can subscribe to at once. Configurable via `rpc.max_subscriptions_per_client`.
- [rpc] [\#3435](https://github.com/tendermint/tendermint/issues/3435) Default ReadTimeout and WriteTimeout changed to 10s. WriteTimeout can increased by setting `rpc.timeout_broadcast_tx_commit` in the config.
- [rpc/client] [\#3269](https://github.com/tendermint/tendermint/issues/3269) Update `EventsClient` interface to reflect new pubsub/eventBus API [ADR-33](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-033-pubsub.md). This includes `Subscribe`, `Unsubscribe`, and `UnsubscribeAll` methods.
- [rpc/client] [\#3269](https://github.com/tendermint/tendermint/issues/3269) Update `EventsClient` interface to reflect new pubsub/eventBus API [ADR-33](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-033-pubsub.md). This includes `Subscribe`, `Unsubscribe`, and `UnsubscribeAll` methods.
* Apps
- [abci] [\#3403](https://github.com/tendermint/tendermint/issues/3403) Remove `time_iota_ms` from BlockParams. This is a
@@ -1819,7 +1819,7 @@ more details.
- [blockchain] [\#3358](https://github.com/tendermint/tendermint/pull/3358) Fix timer leak in `BlockPool` (@guagualvcha)
- [cmd] [\#3408](https://github.com/tendermint/tendermint/issues/3408) Fix `testnet` command's panic when creating non-validator configs (using `--n` flag) (@srmo)
- [libs/db/remotedb/grpcdb] [\#3402](https://github.com/tendermint/tendermint/issues/3402) Close Iterator/ReverseIterator after use
- [libs/pubsub] [\#951](https://github.com/tendermint/tendermint/issues/951), [\#1880](https://github.com/tendermint/tendermint/issues/1880) Use non-blocking send when dispatching messages [ADR-33](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-033-pubsub.md)
- [libs/pubsub] [\#951](https://github.com/tendermint/tendermint/issues/951), [\#1880](https://github.com/tendermint/tendermint/issues/1880) Use non-blocking send when dispatching messages [ADR-33](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-033-pubsub.md)
- [lite] [\#3364](https://github.com/tendermint/tendermint/issues/3364) Fix `/validators` and `/abci_query` proxy endpoints
(@guagualvcha)
- [p2p/conn] [\#3347](https://github.com/tendermint/tendermint/issues/3347) Reject all-zero shared secrets in the Diffie-Hellman step of secret-connection
@@ -2517,7 +2517,7 @@ Special thanks to external contributors on this release:
This release is mostly about the ConsensusParams - removing fields and enforcing MaxGas.
It also addresses some issues found via security audit, removes various unused
functions from `libs/common`, and implements
[ADR-012](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-012-peer-transport.md).
[ADR-012](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-012-peer-transport.md).
BREAKING CHANGES:
@@ -2598,7 +2598,7 @@ BREAKING CHANGES:
- [abci] Added address of the original proposer of the block to Header
- [abci] Change ABCI Header to match Tendermint exactly
- [abci] [\#2159](https://github.com/tendermint/tendermint/issues/2159) Update use of `Validator` (see
[ADR-018](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-018-ABCI-Validators.md)):
[ADR-018](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-018-ABCI-Validators.md)):
- Remove PubKey from `Validator` (so it's just Address and Power)
- Introduce `ValidatorUpdate` (with just PubKey and Power)
- InitChain and EndBlock use ValidatorUpdate
@@ -2620,7 +2620,7 @@ BREAKING CHANGES:
- [state] [\#1815](https://github.com/tendermint/tendermint/issues/1815) Validator set changes are now delayed by one block (!)
- Add NextValidatorSet to State, changes on-disk representation of state
- [state] [\#2184](https://github.com/tendermint/tendermint/issues/2184) Enforce ConsensusParams.BlockSize.MaxBytes (See
[ADR-020](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-020-block-size.md)).
[ADR-020](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-020-block-size.md)).
- Remove ConsensusParams.BlockSize.MaxTxs
- Introduce maximum sizes for all components of a block, including ChainID
- [types] Updates to the block Header:
@@ -2631,7 +2631,7 @@ BREAKING CHANGES:
- [consensus] [\#2203](https://github.com/tendermint/tendermint/issues/2203) Implement BFT time
- Timestamp in block must be monotonic and equal the median of timestamps in block's LastCommit
- [crypto] [\#2239](https://github.com/tendermint/tendermint/issues/2239) Secp256k1 signature changes (See
[ADR-014](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-014-secp-malleability.md)):
[ADR-014](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-014-secp-malleability.md)):
- format changed from DER to `r || s`, both little endian encoded as 32 bytes.
- malleability removed by requiring `s` to be in canonical form.
@@ -3431,7 +3431,7 @@ Also includes the Grand Repo-Merge of 2017.
BREAKING CHANGES:
- Config and Flags:
- The `config` map is replaced with a [`Config` struct](https://github.com/tendermint/tendermint/blob/master/config/config.go#L11),
- The `config` map is replaced with a [`Config` struct](https://github.com/tendermint/tendermint/blob/main/config/config.go#L11),
containing substructs: `BaseConfig`, `P2PConfig`, `MempoolConfig`, `ConsensusConfig`, `RPCConfig`
- This affects the following flags:
- `--seeds` is now `--p2p.seeds`

View File

@@ -9,24 +9,36 @@ Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermi
### BREAKING CHANGES
- CLI/RPC/Config
- [config] \#9259 Rename the fastsync section and the fast_sync key blocksync and block_sync respectively
- Apps
- [abci/counter] \#6684 Delete counter example app
- [abci] \#5783 Make length delimiter encoding consistent (`uint64`) between ABCI and P2P wire-level protocols
- [abci] \#9145 Removes unused Response/Request `SetOption` from ABCI (@samricotta)
- [abci/params] \#9287 Deduplicate `ConsensusParams` and `BlockParams` so only `types` proto definitions are used (@cmwaters)
- Remove `TimeIotaMs` and use a hard-coded 1 millisecond value to ensure monotonically increasing block times.
- Rename `AppVersion` to `App` so as to not stutter.
- [abci] \#9301 New ABCI methods `PrepareProposal` and `ProcessProposal` which give the app control over transactions proposed and allows for verification of proposed blocks.
- [abci] \#8656, \#8901 Added cli commands for `PrepareProposal` and `ProcessProposal`. (@jmalicevic, @hvanz)
- [abci] \#6403 Change the `key` and `value` fields from `[]byte` to `string` in the `EventAttribute` type. (@alexanderbez)
- P2P Protocol
- Go API
- [all] [#9144] Change spelling from British English to American (@cmwaters)
- [all] \#9144 Change spelling from British English to American (@cmwaters)
- Rename "Subscription.Cancelled()" to "Subscription.Canceled()" in libs/pubsub
- Blockchain Protocol
### FEATURES
- [abci] \#9301 New ABCI methods `PrepareProposal` and `ProcessProposal` which give the app control over transactions proposed and allows for verification of proposed blocks.
### IMPROVEMENTS
- [abci] \#5706 Added `AbciVersion` to `RequestInfo` allowing applications to check ABCI version when connecting to Tendermint. (@marbar3778)
### BUG FIXES
[docker] \#9073 enable cross platform build using docker buildx
- [consensus] \#9229 fix round number of `enterPropose` when handling `RoundStepNewRound` timeout. (@fatcat22)
- [docker] \#9073 enable cross platform build using docker buildx

View File

@@ -7,7 +7,7 @@ support permissionless value-carrying networks. While all contributions are
welcome, contributors should bear this goal in mind in deciding if they should
target the main Tendermint project or a potential fork. When targeting the
main Tendermint project, the following process leads to the best chance of
landing changes in master.
landing changes in `main`.
All work on the code base should be motivated by a [Github
Issue](https://github.com/tendermint/tendermint/issues).
@@ -26,7 +26,7 @@ will indicate their support with a heartfelt emoji.
If the issue would benefit from thorough discussion, maintainers may
request that you create a [Request For
Comment](https://github.com/tendermint/spec/tree/master/rfc)
Comment](https://github.com/tendermint/tendermint/tree/main/docs/rfc)
in the Tendermint spec repo. Discussion
at the RFC stage will build collective understanding of the dimensions
of the problems and help structure conversations around trade-offs.
@@ -46,7 +46,7 @@ Find the largest existing ADR number and bump it by 1.
When the problem as well as proposed solution are well understood,
changes should start with a [draft
pull request](https://github.blog/2019-02-14-introducing-draft-pull-requests/)
against master. The draft signals that work is underway. When the work
against `main`. The draft signals that work is underway. When the work
is ready for feedback, hitting "Ready for Review" will signal to the
maintainers to take a look.
@@ -54,7 +54,7 @@ maintainers to take a look.
Each stage of the process is aimed at creating feedback cycles which align contributors and maintainers to make sure:
- Contributors don't waste their time implementing/proposing features which won't land in master.
- Contributors don't waste their time implementing/proposing features which won't land in `main`.
- Maintainers have the necessary context in order to support and review contributions.
## Forking
@@ -73,19 +73,19 @@ For instance, to create a fork and work on a branch of it, I would:
- `git remote add origin git@github.com:ebuchman/basecoin.git`
Now `origin` refers to my fork and `upstream` refers to the Tendermint version.
So I can `git push -u origin master` to update my fork, and make pull requests to tendermint from there.
So I can `git push -u origin main` to update my fork, and make pull requests to tendermint from there.
Of course, replace `ebuchman` with your git handle.
To pull in updates from the origin repo, run
- `git fetch upstream`
- `git rebase upstream/master` (or whatever branch you want)
- `git rebase upstream/main` (or whatever branch you want)
## Dependencies
We use [go modules](https://github.com/golang/go/wiki/Modules) to manage dependencies.
That said, the master branch of every Tendermint repository should just build
That said, the `main` branch of every Tendermint repository should just build
with `go get`, which means they should be kept up-to-date with their
dependencies so we can get away with telling people they can just `go get` our
software.
@@ -153,10 +153,47 @@ If you are a VS Code user, you may want to add the following to your `.vscode/se
Every fix, improvement, feature, or breaking change should be made in a
pull-request that includes an update to the `CHANGELOG_PENDING.md` file.
A feature can also be worked on in a feature branch, if its size and/or risk
justifies it (see #branching-model-and-release below).
### What does a good changelog entry look like?
Changelog entries should answer the question: "what is important about this
change for users to know?" or "what problem does this solve for users?". It
should not simply be a reiteration of the title of the associated PR, unless the
title of the PR _very_ clearly explains the benefit of a change to a user.
Some good examples of changelog entry descriptions:
```md
- [consensus] \#1111 Small transaction throughput improvement (approximately
3-5\% from preliminary tests) through refactoring the way we use channels
- [mempool] \#1112 Refactor Go API to be able to easily swap out the current
mempool implementation in Tendermint forks
- [p2p] \#1113 Automatically ban peers when their messages are unsolicited or
are received too frequently
```
Some bad examples of changelog entry descriptions:
```md
- [consensus] \#1111 Refactor channel usage
- [mempool] \#1112 Make API generic
- [p2p] \#1113 Ban for PEX message abuse
```
For more on how to write good changelog entries, see:
- <https://keepachangelog.com>
- <https://docs.gitlab.com/ee/development/changelog.html#writing-good-changelog-entries>
- <https://depfu.com/blog/what-makes-a-good-changelog>
### Changelog entry format
Changelog entries should be formatted as follows:
```md
- [module] \#xxx Some description about the change (@contributor)
- [module] \#xxx Some description of the change (@contributor)
```
Here, `module` is the part of the code that changed (typically a
@@ -184,22 +221,31 @@ removed from the header in RPC responses as well.
## Branching Model and Release
The main development branch is master.
The main development branch is `main`.
Every release is maintained in a release branch named `vX.Y.Z`.
Pending minor releases have long-lived release candidate ("RC") branches. Minor release changes should be merged to these long-lived RC branches at the same time that the changes are merged to master.
Pending minor releases have long-lived release candidate ("RC") branches. Minor release changes should be merged to these long-lived RC branches at the same time that the changes are merged to `main`.
If a feature's size is big and/or its risk is high, it can be implemented in a feature branch.
While the feature work is in progress,
pull requests are open and squash merged against the feature branch.
Branch `main` is periodically merged (merge commit) into the feature branch,
to reduce branch divergence.
When the feature is complete, the feature branch is merged back (merge commit) into `main`.
The moment of the final merge can be carefully chosen
so as to land different features in different releases.
Note all pull requests should be squash merged except for merging to a release branch (named `vX.Y`). This keeps the commit history clean and makes it
easy to reference the pull request where a change was introduced.
### Development Procedure
The latest state of development is on `master`, which must never fail `make test`. _Never_ force push `master`, unless fixing broken git history (which we rarely do anyways).
The latest state of development is on `main`, which must never fail `make test`. _Never_ force push `main`, unless fixing broken git history (which we rarely do anyways).
To begin contributing, create a development branch either on `github.com/tendermint/tendermint`, or your fork (using `git remote add origin`).
Make changes, and before submitting a pull request, update the `CHANGELOG_PENDING.md` to record your change. Also, run either `git rebase` or `git merge` on top of the latest `master`. (Since pull requests are squash-merged, either is fine!)
Make changes, and before submitting a pull request, update the `CHANGELOG_PENDING.md` to record your change. Also, run either `git rebase` or `git merge` on top of the latest `main`. (Since pull requests are squash-merged, either is fine!)
Update the `UPGRADING.md` if the change you've made is breaking and the
instructions should be in place for a user on how he/she can upgrade it's
@@ -207,7 +253,7 @@ software (ABCI application, Tendermint-based blockchain, light client, wallet).
Once you have submitted a pull request label the pull request with either `R:minor`, if the change should be included in the next minor release, or `R:major`, if the change is meant for a major release.
Sometimes (often!) pull requests get out-of-date with master, as other people merge different pull requests to master. It is our convention that pull request authors are responsible for updating their branches with master. (This also means that you shouldn't update someone else's branch for them; even if it seems like you're doing them a favor, you may be interfering with their git flow in some way!)
Sometimes (often!) pull requests get out-of-date with `main`, as other people merge different pull requests to `main`. It is our convention that pull request authors are responsible for updating their branches with `main`. (This also means that you shouldn't update someone else's branch for them; even if it seems like you're doing them a favor, you may be interfering with their git flow in some way!)
#### Merging Pull Requests
@@ -215,20 +261,20 @@ It is also our convention that authors merge their own pull requests, when possi
Before merging a pull request:
- Ensure pull branch is up-to-date with a recent `master` (GitHub won't let you merge without this!)
- Ensure pull branch is up-to-date with a recent `main` (GitHub won't let you merge without this!)
- Run `make test` to ensure that all tests pass
- [Squash](https://stackoverflow.com/questions/5189560/squash-my-last-x-commits-together-using-git) merge pull request
#### Pull Requests for Minor Releases
If your change should be included in a minor release, please also open a PR against the long-lived minor release candidate branch (e.g., `rc1/v0.33.5`) _immediately after your change has been merged to master_.
If your change should be included in a minor release, please also open a PR against the long-lived minor release candidate branch (e.g., `rc1/v0.33.5`) _immediately after your change has been merged to main_.
You can do this by cherry-picking your commit off master:
You can do this by cherry-picking your commit off `main`:
```sh
$ git checkout rc1/v0.33.5
$ git checkout -b {new branch name}
$ git cherry-pick {commit SHA from master}
$ git cherry-pick {commit SHA from main}
# may need to fix conflicts, and then use git add and git cherry-pick --continue
$ git push origin {new branch name}
```
@@ -247,7 +293,7 @@ cmd/debug: execute p.Signal only when p is not nil
Fixes #nnnn
```
Each PR should have one commit once it lands on `master`; this can be accomplished by using the "squash and merge" button on Github. Be sure to edit your commit message, though!
Each PR should have one commit once it lands on `main`; this can be accomplished by using the "squash and merge" button on Github. Be sure to edit your commit message, though!
## Testing

View File

@@ -1,5 +1,5 @@
# stage 1 Generate Tendermint Binary
FROM --platform=$BUILDPLATFORM golang:1.15-alpine as builder
FROM --platform=$BUILDPLATFORM golang:1.18-alpine as builder
RUN apk update && \
apk upgrade && \
apk --no-cache add make

View File

@@ -6,7 +6,7 @@ DockerHub tags for official releases are [here](https://hub.docker.com/r/tenderm
Official releases can be found [here](https://github.com/tendermint/tendermint/releases).
The Dockerfile for tendermint is not expected to change in the near future. The master file used for all builds can be found [here](https://raw.githubusercontent.com/tendermint/tendermint/master/DOCKER/Dockerfile).
The Dockerfile for Tendermint is not expected to change in the near future. The main file used for all builds can be found [here](https://raw.githubusercontent.com/tendermint/tendermint/main/DOCKER/Dockerfile).
Respective versioned files can be found <https://raw.githubusercontent.com/tendermint/tendermint/vX.XX.XX/DOCKER/Dockerfile> (replace the Xs with the version number).
@@ -20,9 +20,9 @@ Respective versioned files can be found <https://raw.githubusercontent.com/tende
Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine, written in any programming language, and securely replicates it on many machines.
For more background, see the [the docs](https://docs.tendermint.com/master/introduction/#quick-start).
For more background, see the [the docs](https://docs.tendermint.com/v0.37/introduction/#quick-start).
To get started developing applications, see the [application developers guide](https://docs.tendermint.com/master/introduction/quick-start.html).
To get started developing applications, see the [application developers guide](https://docs.tendermint.com/v0.37/introduction/quick-start.html).
## How to use this image
@@ -37,7 +37,7 @@ docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint node --proxy_app
## Local cluster
To run a 4-node network, see the `Makefile` in the root of [the repo](https://github.com/tendermint/tendermint/blob/master/Makefile) and run:
To run a 4-node network, see the `Makefile` in the root of [the repo](https://github.com/tendermint/tendermint/blob/v0.37.x/Makefile) and run:
```sh
make build-linux
@@ -49,8 +49,8 @@ Note that this will build and use a different image than the ones provided here.
## License
- Tendermint's license is [Apache 2.0](https://github.com/tendermint/tendermint/blob/master/LICENSE).
- Tendermint's license is [Apache 2.0](https://github.com/tendermint/tendermint/blob/main/LICENSE).
## Contributing
Contributions are most welcome! See the [contributing file](https://github.com/tendermint/tendermint/blob/master/CONTRIBUTING.md) for more information.
Contributions are most welcome! See the [contributing file](https://github.com/tendermint/tendermint/blob/v0.37.x/CONTRIBUTING.md) for more information.

View File

@@ -1,5 +1,3 @@
Tendermint Core
License: Apache2.0
Apache License
Version 2.0, January 2004
@@ -181,7 +179,7 @@ License: Apache2.0
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
@@ -189,7 +187,7 @@ License: Apache2.0
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2016 All in Bits, Inc
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

View File

@@ -131,6 +131,20 @@ install:
CGO_ENABLED=$(CGO_ENABLED) go install $(BUILD_FLAGS) -tags $(BUILD_TAGS) ./cmd/tendermint
.PHONY: install
###############################################################################
### Metrics ###
###############################################################################
metrics: testdata-metrics
go generate -run="scripts/metricsgen" ./...
.PHONY: metrics
# By convention, the go tool ignores subdirectories of directories named
# 'testdata'. This command invokes the generate command on the folder directly
# to avoid this.
testdata-metrics:
ls ./scripts/metricsgen/testdata | xargs -I{} go generate -v -run="scripts/metricsgen" ./scripts/metricsgen/testdata/{}
.PHONY: testdata-metrics
###############################################################################
### Mocks ###
@@ -271,16 +285,33 @@ DESTINATION = ./index.html.md
### Documentation ###
###############################################################################
DOCS_OUTPUT?=/tmp/tendermint-core-docs
# This builds a docs site for each branch/tag in `./docs/versions` and copies
# each site to a version prefixed path. The last entry inside the `versions`
# file will be the default root index.html. Only redirects that are built into
# the "redirects" folder of each of the branches will be copied out to the root
# of the build at the end.
build-docs:
@cd docs && \
while read -r branch path_prefix; do \
(git checkout $${branch} && npm ci && VUEPRESS_BASE="/$${path_prefix}/" npm run build) ; \
mkdir -p ~/output/$${path_prefix} ; \
cp -r .vuepress/dist/* ~/output/$${path_prefix}/ ; \
cp ~/output/$${path_prefix}/index.html ~/output ; \
mkdir -p $(DOCS_OUTPUT)/$${path_prefix} ; \
cp -r .vuepress/dist/* $(DOCS_OUTPUT)/$${path_prefix}/ ; \
cp $(DOCS_OUTPUT)/$${path_prefix}/index.html $(DOCS_OUTPUT) ; \
cp $(DOCS_OUTPUT)/$${path_prefix}/404.html $(DOCS_OUTPUT) ; \
cp -r $(DOCS_OUTPUT)/$${path_prefix}/redirects/* $(DOCS_OUTPUT) || true ; \
done < versions ;
.PHONY: build-docs
# Build and serve the local version of the docs on the current branch from
# http://0.0.0.0:8080
serve-docs:
@cd docs && \
npm ci && \
npm run serve
.PHONY: serve-docs
sync-docs:
cd ~/output && \
echo "role_arn = ${DEPLOYMENT_ROLE_ARN}" >> /root/.aws/config ; \
@@ -289,6 +320,11 @@ sync-docs:
aws cloudfront create-invalidation --distribution-id ${CF_DISTRIBUTION_ID} --profile terraform --path "/*" ;
.PHONY: sync-docs
# Verify that important design docs have ToC entries.
check-docs-toc:
@./docs/presubmit.sh
.PHONY: check-docs-toc
###############################################################################
### Docker image ###
###############################################################################

View File

@@ -10,12 +10,11 @@
[![Go version][go-badge]][go-url]
[![Discord chat][discord-badge]][discord-url]
[![License][license-badge]][license-url]
[![tendermint/tendermint][loc-badge]][loc-url]
[![Sourcegraph][sg-badge]][sg-url]
| Branch | Tests | Coverage | Linting |
|--------|-----------------------|------------------------------------------|---------------------|
| main | ![Tests][tests-badge] | [![codecov][codecov-badge]][codecov-url] | ![Lint][lint-badge] |
| Branch | Tests | Linting |
|--------|------------------------------------|---------------------------------|
| main | [![Tests][tests-badge]][tests-url] | [![Lint][lint-badge]][lint-url] |
Tendermint Core is a Byzantine Fault Tolerant (BFT) middleware that takes a
state transition machine - written in any programming language - and securely
@@ -155,7 +154,6 @@ Funding for Tendermint Core development comes primarily from the
Tendermint trademark is owned by [Tendermint Inc.](https://tendermint.com), the
for-profit entity that also maintains [tendermint.com](https://tendermint.com).
[bft]: https://en.wikipedia.org/wiki/Byzantine_fault_tolerance
[smr]: https://en.wikipedia.org/wiki/State_machine_replication
[Blockchain]: https://en.wikipedia.org/wiki/Blockchain
@@ -163,17 +161,15 @@ for-profit entity that also maintains [tendermint.com](https://tendermint.com).
[version-url]: https://github.com/tendermint/tendermint/releases/latest
[api-badge]: https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667
[api-url]: https://pkg.go.dev/github.com/tendermint/tendermint
[go-badge]: https://img.shields.io/badge/go-1.17-blue.svg
[go-badge]: https://img.shields.io/badge/go-1.18-blue.svg
[go-url]: https://github.com/moovweb/gvm
[discord-badge]: https://img.shields.io/discord/669268347736686612.svg
[discord-url]: https://discord.gg/cosmosnetwork
[license-badge]: https://img.shields.io/github/license/tendermint/tendermint.svg
[license-url]: https://github.com/tendermint/tendermint/blob/main/LICENSE
[loc-badge]: https://tokei.rs/b1/github/tendermint/tendermint?category=lines
[loc-url]: https://github.com/tendermint/tendermint
[sg-badge]: https://sourcegraph.com/github.com/tendermint/tendermint/-/badge.svg
[sg-url]: https://sourcegraph.com/github.com/tendermint/tendermint?badge
[tests-badge]: https://github.com/tendermint/tendermint/workflows/Tests/badge.svg?branch=main
[codecov-badge]: https://codecov.io/gh/tendermint/tendermint/branch/main/graph/badge.svg
[codecov-url]: https://codecov.io/gh/tendermint/tendermint
[lint-badge]: https://github.com/tendermint/tendermint/workflows/Lint/badge.svg
[tests-url]: https://github.com/tendermint/tendermint/actions/workflows/tests.yml
[tests-badge]: https://github.com/tendermint/tendermint/actions/workflows/tests.yml/badge.svg?branch=main
[lint-badge]: https://github.com/tendermint/tendermint/actions/workflows/lint.yml/badge.svg
[lint-url]: https://github.com/tendermint/tendermint/actions/workflows/lint.yml

View File

@@ -121,6 +121,11 @@ branch (see above). Otherwise:
3. Prepare the changelog:
- Move the changes included in `CHANGELOG_PENDING.md` into `CHANGELOG.md`. Each RC should have
its own changelog section. These will be squashed when the final candidate is released.
- Ensure that there is a "release highlights" or "release summary" paragraph
after the version heading describing what we feel are the most important
changes in this release from a user's perspective. This paragraph should
answer the question "why should users upgrade to this version?" with
specific reasons (not generic ones like "more bug fixes").
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all PRs
- Ensure that `UPGRADING.md` is up-to-date and includes notes on any breaking changes

View File

@@ -1,6 +1,36 @@
# Upgrading Tendermint Core
This guide provides instructions for upgrading to specific versions of Tendermint Core.
This guide provides instructions for upgrading to specific versions of
Tendermint Core.
## Unreleased
### ABCI Changes
* The `ABCIVersion` is now `0.18.0`.
* Added new ABCI methods `PrepareProposal` and `ProcessProposal`. For details,
please see the [spec](spec/abci/README.md). Applications upgrading to
v0.37.0 must implement these methods, at the very minimum, as described
[here](spec/abci/apps.md).
* Deduplicated `ConsensusParams` and `BlockParams`.
In the v0.34 branch they are defined both in `abci/types.proto` and `types/params.proto`.
The definitions in `abci/types.proto` have been removed.
In-process applications should make sure they are not using the deleted
version of those structures.
* In v0.34, messages on the wire used to be length-delimited with `int64` varint
values, which was inconsistent with the `uint64` varint length delimiters used
in the P2P layer. Both now consistently use `uint64` varint length delimiters.
* Added `AbciVersion` to `RequestInfo`.
Applications should check that Tendermint's ABCI version matches the one they expect
in order to ensure compatibility.
* The `SetOption` method has been removed from the ABCI `Client` interface.
The corresponding Protobuf types have been deprecated.
* The `key` and `value` fields in the `EventAttribute` type have been changed
from type `bytes` to `string`. As per the [Protocol Buffers updating
guidelines](https://developers.google.com/protocol-buffers/docs/proto3#updating),
this should have no effect on the wire-level encoding for UTF8-encoded
strings.
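A minimal sketch of emitting an event with the new string-typed attributes (mirroring the kvstore changes later in this comparison; the surrounding slice literal is illustrative, the types come from `abci/types`):
events := []types.Event{
	{
		Type: "app",
		Attributes: []types.EventAttribute{
			// Key and Value are now plain strings rather than []byte.
			{Key: "creator", Value: "Cosmoshi Netowoko", Index: true},
			{Key: "noindex_key", Value: "index is working", Index: false},
		},
	},
}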
## v0.34.20
@@ -15,7 +45,7 @@ and gas cost).
Operators can enable the priority mempool by setting `mempool.version` to
`"v1"` in the `config.toml`. For more technical details about the priority
mempool, see [ADR 067: Mempool
Refactor](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-067-mempool-refactor.md).
Refactor](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-067-mempool-refactor.md).
## v0.34.0
@@ -23,7 +53,7 @@ Refactor](https://github.com/tendermint/tendermint/blob/master/docs/architecture
This release is not compatible with previous blockchains due to changes to
the encoding format (see "Protocol Buffers," below) and the block header (see "Blockchain Protocol").
Note also that Tendermint 0.34 also requires Go 1.15 or higher.
Note also that Tendermint 0.34 requires Go 1.16 or higher.
### ABCI Changes
@@ -33,7 +63,7 @@ Note also that Tendermint 0.34 also requires Go 1.15 or higher.
were added to support the new State Sync feature.
Previously, syncing a new node to a preexisting network could take days; but with State Sync,
new nodes are able to join a network in a matter of seconds.
Read [the spec](https://docs.tendermint.com/master/spec/abci/apps.html#state-sync)
Read [the spec](https://github.com/tendermint/tendermint/blob/v0.34.x/spec/abci/apps.md#state-sync)
if you want to learn more about State Sync, or if you'd like your application to use it.
(If you don't want to support State Sync in your application, you can just implement these new
ABCI methods as no-ops, leaving them empty.)
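As a hedged sketch of the no-op option (the `noStateSyncApp` type is hypothetical; the request/response types are from `abci/types`, and embedding `types.BaseApplication` supplies defaults for everything else):
type noStateSyncApp struct{ types.BaseApplication }
func (noStateSyncApp) ListSnapshots(types.RequestListSnapshots) types.ResponseListSnapshots {
	return types.ResponseListSnapshots{} // no-op: no snapshots to serve
}
func (noStateSyncApp) OfferSnapshot(types.RequestOfferSnapshot) types.ResponseOfferSnapshot {
	return types.ResponseOfferSnapshot{} // no-op
}
func (noStateSyncApp) LoadSnapshotChunk(types.RequestLoadSnapshotChunk) types.ResponseLoadSnapshotChunk {
	return types.ResponseLoadSnapshotChunk{} // no-op
}
func (noStateSyncApp) ApplySnapshotChunk(types.RequestApplySnapshotChunk) types.ResponseApplySnapshotChunk {
	return types.ResponseApplySnapshotChunk{} // no-op
}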
@@ -49,7 +79,7 @@ Note also that Tendermint 0.34 also requires Go 1.15 or higher.
Applications should be able to handle these evidence types
(i.e., through slashing or other accountability measures).
* The [`PublicKey` type](https://github.com/tendermint/tendermint/blob/master/proto/tendermint/crypto/keys.proto#L13-L15)
* The [`PublicKey` type](https://github.com/tendermint/tendermint/blob/main/proto/tendermint/crypto/keys.proto#L13-L15)
(used in ABCI as part of `ValidatorUpdate`) now uses a `oneof` protobuf type.
Note that since Tendermint only supports ed25519 validator keys, there's only one
option in the `oneof`. For more, see "Protocol Buffers," below.
@@ -64,12 +94,9 @@ directory. For more, see "Protobuf," below.
### Blockchain Protocol
* `Header#LastResultsHash` previously was the root hash of a Merkle tree built from `ResponseDeliverTx(Code, Data)` responses.
As of 0.34, `Header#LastResultsHash` is now the root hash of a Merkle tree built from:
* `BeginBlock#Events`
* Root hash of a Merkle tree built from `ResponseDeliverTx(Code, Data,
GasWanted, GasUsed, Events)` responses
* `BeginBlock#Events`
* `Header#LastResultsHash`, which is the root hash of a Merkle tree built from
`ResponseDeliverTx(Code, Data)` as of v0.34 also includes `GasWanted` and `GasUsed`
fields.
* Merkle hashes of empty trees previously returned nothing, but now return the hash of an empty input,
to conform with [RFC-6962](https://tools.ietf.org/html/rfc6962).
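The empty-tree rule can be illustrated directly: Tendermint's Merkle trees hash with SHA-256, so the hash of an empty tree is the SHA-256 digest of empty input (a standalone sketch, not code from this comparison):
package main
import (
	"crypto/sha256"
	"fmt"
)
func main() {
	// Per RFC 6962, MTH({}) = SHA-256 of the empty string.
	sum := sha256.Sum256(nil)
	fmt.Printf("%X\n", sum[:]) // E3B0C44298FC1C14... (64 hex chars)
}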
@@ -159,7 +186,7 @@ The `bech32` package has moved to the Cosmos SDK:
### CLI
The `tendermint lite` command has been renamed to `tendermint light` and has a slightly different API.
See [the docs](https://docs.tendermint.com/master/tendermint-core/light-client-protocol.html#http-proxy) for details.
See [the docs](https://docs.tendermint.com/v0.33/tendermint-core/light-client-protocol.html#http-proxy) for details.
### Light Client
@@ -173,6 +200,7 @@ Other user-relevant changes include:
* The `Verifier` was broken up into two pieces:
* Core verification logic (pure `VerifyX` functions)
* `Client` object, which represents the complete light client
* The new light client stores headers and validator sets as `LightBlock`s
* The RPC client can be found in the `/rpc` directory.
* The HTTP(S) proxy is located in the `/proxy` directory.
@@ -314,7 +342,7 @@ Evidence Params has been changed to include duration.
### RPC Changes
* `/validators` is now paginated (default: 30 vals per page)
* `/block_results` response format updated [see RPC docs for details](https://docs.tendermint.com/master/rpc/#/Info/block_results)
* `/block_results` response format updated [see RPC docs for details](https://docs.tendermint.com/v0.33/rpc/#/Info/block_results)
* Event suffix has been removed from the ID in event responses
* IDs are now integers not `json-client-XYZ`
@@ -433,7 +461,7 @@ the compilation tag:
Use `cleveldb` tag instead of `gcc` to compile Tendermint with CLevelDB or
use `make build_c` / `make install_c` (full instructions can be found at
<https://tendermint.com/docs/introduction/install.html#compile-with-cleveldb-support>)
<https://docs.tendermint.com/v0.33/introduction/install.html#compile-with-cleveldb-support>)
## v0.31.0
@@ -508,14 +536,14 @@ due to changes in how various data structures are hashed.
Any implementations of Tendermint blockchain verification, including lite clients,
will need to be updated. For specific details:
* [Merkle tree](https://github.com/tendermint/spec/blob/master/spec/blockchain/encoding.md#merkle-trees)
* [ConsensusParams](https://github.com/tendermint/spec/blob/master/spec/blockchain/state.md#consensusparams)
* [Merkle tree](https://github.com/tendermint/tendermint/blob/main/spec/blockchain/encoding.md#merkle-trees)
* [ConsensusParams](https://github.com/tendermint/tendermint/blob/main/spec/blockchain/state.md#consensusparams)
There was also a small change to field ordering in the vote struct. Any
implementations of an out-of-process validator (like a Key-Management Server)
will need to be updated. For specific details:
* [Vote](https://github.com/tendermint/spec/blob/master/spec/consensus/signing.md#votes)
* [Vote](https://github.com/tendermint/tendermint/blob/main/spec/consensus/signing.md#votes)
Finally, the proposer selection algorithm continues to evolve. See the
[work-in-progress

View File

@@ -19,7 +19,7 @@ To get up and running quickly, see the [getting started guide](../docs/app-dev/g
A detailed description of the ABCI methods and message types is contained in:
- [The main spec](https://github.com/tendermint/spec/blob/master/spec/abci/abci.md)
- [The main spec](https://github.com/tendermint/tendermint/blob/v0.37.x/spec/abci/abci.md)
- [A protobuf file](./types/types.proto)
- [A Go interface](./types/application.go)

View File

@@ -14,6 +14,8 @@ const (
echoRetryIntervalSeconds = 1
)
//go:generate ../../scripts/mockery_generate.sh Client
// Client defines an interface for an ABCI client.
// All `Async` methods return a `ReqRes` object.
// All `Sync` methods return the appropriate protobuf ResponseXxx struct and an error.
@@ -28,34 +30,36 @@ type Client interface {
FlushAsync() *ReqRes
EchoAsync(msg string) *ReqRes
InfoAsync(types.RequestInfo) *ReqRes
SetOptionAsync(types.RequestSetOption) *ReqRes
DeliverTxAsync(types.RequestDeliverTx) *ReqRes
CheckTxAsync(types.RequestCheckTx) *ReqRes
QueryAsync(types.RequestQuery) *ReqRes
CommitAsync() *ReqRes
InitChainAsync(types.RequestInitChain) *ReqRes
PrepareProposalAsync(types.RequestPrepareProposal) *ReqRes
BeginBlockAsync(types.RequestBeginBlock) *ReqRes
EndBlockAsync(types.RequestEndBlock) *ReqRes
ListSnapshotsAsync(types.RequestListSnapshots) *ReqRes
OfferSnapshotAsync(types.RequestOfferSnapshot) *ReqRes
LoadSnapshotChunkAsync(types.RequestLoadSnapshotChunk) *ReqRes
ApplySnapshotChunkAsync(types.RequestApplySnapshotChunk) *ReqRes
ProcessProposalAsync(types.RequestProcessProposal) *ReqRes
FlushSync() error
EchoSync(msg string) (*types.ResponseEcho, error)
InfoSync(types.RequestInfo) (*types.ResponseInfo, error)
SetOptionSync(types.RequestSetOption) (*types.ResponseSetOption, error)
DeliverTxSync(types.RequestDeliverTx) (*types.ResponseDeliverTx, error)
CheckTxSync(types.RequestCheckTx) (*types.ResponseCheckTx, error)
QuerySync(types.RequestQuery) (*types.ResponseQuery, error)
CommitSync() (*types.ResponseCommit, error)
InitChainSync(types.RequestInitChain) (*types.ResponseInitChain, error)
PrepareProposalSync(types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error)
BeginBlockSync(types.RequestBeginBlock) (*types.ResponseBeginBlock, error)
EndBlockSync(types.RequestEndBlock) (*types.ResponseEndBlock, error)
ListSnapshotsSync(types.RequestListSnapshots) (*types.ResponseListSnapshots, error)
OfferSnapshotSync(types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error)
LoadSnapshotChunkSync(types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error)
ApplySnapshotChunkSync(types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error)
ProcessProposalSync(types.RequestProcessProposal) (*types.ResponseProcessProposal, error)
}
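A hedged usage sketch of the Sync/Async split described above (the `pingApp` helper is illustrative and assumes it lives in this package with `fmt` imported):
// pingApp shows the two call styles: Sync calls block and return the protobuf
// response plus an error, while Async calls return a *ReqRes handle that is
// completed later through the client's response callback.
func pingApp(client Client) error {
	info, err := client.InfoSync(types.RequestInfo{})
	if err != nil {
		return err
	}
	fmt.Println("app version:", info.Version)
	reqRes := client.EchoAsync("ping") // queued; resolved asynchronously
	_ = reqRes
	return nil
}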
//----------------------------------------

View File

@@ -192,15 +192,6 @@ func (cli *grpcClient) InfoAsync(params types.RequestInfo) *ReqRes {
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Info{Info: res}})
}
func (cli *grpcClient) SetOptionAsync(params types.RequestSetOption) *ReqRes {
req := types.ToRequestSetOption(params)
res, err := cli.client.SetOption(context.Background(), req.GetSetOption(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_SetOption{SetOption: res}})
}
func (cli *grpcClient) DeliverTxAsync(params types.RequestDeliverTx) *ReqRes {
req := types.ToRequestDeliverTx(params)
res, err := cli.client.DeliverTx(context.Background(), req.GetDeliverTx(), grpc.WaitForReady(true))
@@ -300,6 +291,25 @@ func (cli *grpcClient) ApplySnapshotChunkAsync(params types.RequestApplySnapshot
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_ApplySnapshotChunk{ApplySnapshotChunk: res}})
}
func (cli *grpcClient) PrepareProposalAsync(params types.RequestPrepareProposal) *ReqRes {
req := types.ToRequestPrepareProposal(params)
res, err := cli.client.PrepareProposal(context.Background(), req.GetPrepareProposal(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_PrepareProposal{PrepareProposal: res}})
}
func (cli *grpcClient) ProcessProposalAsync(params types.RequestProcessProposal) *ReqRes {
req := types.ToRequestProcessProposal(params)
res, err := cli.client.ProcessProposal(context.Background(), req.GetProcessProposal(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_ProcessProposal{ProcessProposal: res}})
}
// finishAsyncCall creates a ReqRes for an async call, and immediately populates it
// with the response. We don't complete it until it's been ordered via the channel.
func (cli *grpcClient) finishAsyncCall(req *types.Request, res *types.Response) *ReqRes {
@@ -356,11 +366,6 @@ func (cli *grpcClient) InfoSync(req types.RequestInfo) (*types.ResponseInfo, err
return cli.finishSyncCall(reqres).GetInfo(), cli.Error()
}
func (cli *grpcClient) SetOptionSync(req types.RequestSetOption) (*types.ResponseSetOption, error) {
reqres := cli.SetOptionAsync(req)
return reqres.Response.GetSetOption(), cli.Error()
}
func (cli *grpcClient) DeliverTxSync(params types.RequestDeliverTx) (*types.ResponseDeliverTx, error) {
reqres := cli.DeliverTxAsync(params)
return cli.finishSyncCall(reqres).GetDeliverTx(), cli.Error()
@@ -417,3 +422,14 @@ func (cli *grpcClient) ApplySnapshotChunkSync(
reqres := cli.ApplySnapshotChunkAsync(params)
return cli.finishSyncCall(reqres).GetApplySnapshotChunk(), cli.Error()
}
func (cli *grpcClient) PrepareProposalSync(
params types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
reqres := cli.PrepareProposalAsync(params)
return cli.finishSyncCall(reqres).GetPrepareProposal(), cli.Error()
}
func (cli *grpcClient) ProcessProposalSync(params types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
reqres := cli.ProcessProposalAsync(params)
return cli.finishSyncCall(reqres).GetProcessProposal(), cli.Error()
}

View File

@@ -75,17 +75,6 @@ func (app *localClient) InfoAsync(req types.RequestInfo) *ReqRes {
)
}
func (app *localClient) SetOptionAsync(req types.RequestSetOption) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.SetOption(req)
return app.callback(
types.ToRequestSetOption(req),
types.ToResponseSetOption(res),
)
}
func (app *localClient) DeliverTxAsync(params types.RequestDeliverTx) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -207,6 +196,28 @@ func (app *localClient) ApplySnapshotChunkAsync(req types.RequestApplySnapshotCh
)
}
func (app *localClient) PrepareProposalAsync(req types.RequestPrepareProposal) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.PrepareProposal(req)
return app.callback(
types.ToRequestPrepareProposal(req),
types.ToResponsePrepareProposal(res),
)
}
func (app *localClient) ProcessProposalAsync(req types.RequestProcessProposal) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.ProcessProposal(req)
return app.callback(
types.ToRequestProcessProposal(req),
types.ToResponseProcessProposal(res),
)
}
//-------------------------------------------------------
func (app *localClient) FlushSync() error {
@@ -225,14 +236,6 @@ func (app *localClient) InfoSync(req types.RequestInfo) (*types.ResponseInfo, er
return &res, nil
}
func (app *localClient) SetOptionSync(req types.RequestSetOption) (*types.ResponseSetOption, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.SetOption(req)
return &res, nil
}
func (app *localClient) DeliverTxSync(req types.RequestDeliverTx) (*types.ResponseDeliverTx, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -323,6 +326,22 @@ func (app *localClient) ApplySnapshotChunkSync(
return &res, nil
}
func (app *localClient) PrepareProposalSync(req types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.PrepareProposal(req)
return &res, nil
}
func (app *localClient) ProcessProposalSync(req types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.ProcessProposal(req)
return &res, nil
}
//-------------------------------------------------------
func (app *localClient) callback(req *types.Request, res *types.Response) *ReqRes {

View File

@@ -1,4 +1,4 @@
// Code generated by mockery v1.1.1. DO NOT EDIT.
// Code generated by mockery. DO NOT EDIT.
package mocks
@@ -575,6 +575,84 @@ func (_m *Client) OnStop() {
_m.Called()
}
// PrepareProposalAsync provides a mock function with given fields: _a0
func (_m *Client) PrepareProposalAsync(_a0 types.RequestPrepareProposal) *abcicli.ReqRes {
ret := _m.Called(_a0)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(types.RequestPrepareProposal) *abcicli.ReqRes); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
}
// PrepareProposalSync provides a mock function with given fields: _a0
func (_m *Client) PrepareProposalSync(_a0 types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
ret := _m.Called(_a0)
var r0 *types.ResponsePrepareProposal
if rf, ok := ret.Get(0).(func(types.RequestPrepareProposal) *types.ResponsePrepareProposal); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponsePrepareProposal)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(types.RequestPrepareProposal) error); ok {
r1 = rf(_a0)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// ProcessProposalAsync provides a mock function with given fields: _a0
func (_m *Client) ProcessProposalAsync(_a0 types.RequestProcessProposal) *abcicli.ReqRes {
ret := _m.Called(_a0)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(types.RequestProcessProposal) *abcicli.ReqRes); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
}
// ProcessProposalSync provides a mock function with given fields: _a0
func (_m *Client) ProcessProposalSync(_a0 types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
ret := _m.Called(_a0)
var r0 *types.ResponseProcessProposal
if rf, ok := ret.Get(0).(func(types.RequestProcessProposal) *types.ResponseProcessProposal); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseProcessProposal)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(types.RequestProcessProposal) error); ok {
r1 = rf(_a0)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// QueryAsync provides a mock function with given fields: _a0
func (_m *Client) QueryAsync(_a0 types.RequestQuery) *abcicli.ReqRes {
ret := _m.Called(_a0)
@@ -649,45 +727,6 @@ func (_m *Client) SetLogger(_a0 log.Logger) {
_m.Called(_a0)
}
// SetOptionAsync provides a mock function with given fields: _a0
func (_m *Client) SetOptionAsync(_a0 types.RequestSetOption) *abcicli.ReqRes {
ret := _m.Called(_a0)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(types.RequestSetOption) *abcicli.ReqRes); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
}
// SetOptionSync provides a mock function with given fields: _a0
func (_m *Client) SetOptionSync(_a0 types.RequestSetOption) (*types.ResponseSetOption, error) {
ret := _m.Called(_a0)
var r0 *types.ResponseSetOption
if rf, ok := ret.Get(0).(func(types.RequestSetOption) *types.ResponseSetOption); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseSetOption)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(types.RequestSetOption) error); ok {
r1 = rf(_a0)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// SetResponseCallback provides a mock function with given fields: _a0
func (_m *Client) SetResponseCallback(_a0 abcicli.Callback) {
_m.Called(_a0)
@@ -734,3 +773,18 @@ func (_m *Client) String() string {
return r0
}
type mockConstructorTestingTNewClient interface {
mock.TestingT
Cleanup(func())
}
// NewClient creates a new instance of Client. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewClient(t mockConstructorTestingTNewClient) *Client {
mock := &Client{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
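A hedged sketch of how the generated mock is typically used in a test (assumes testify's `require` and the standard `testing` package; the expectation shown is illustrative):
func TestEchoWithMockedClient(t *testing.T) {
	client := NewClient(t) // expectations are asserted automatically at cleanup
	client.On("EchoSync", "hello").Return(&types.ResponseEcho{Message: "hello"}, nil)
	res, err := client.EchoSync("hello")
	require.NoError(t, err)
	require.Equal(t, "hello", res.Message)
}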

View File

@@ -231,10 +231,6 @@ func (cli *socketClient) InfoAsync(req types.RequestInfo) *ReqRes {
return cli.queueRequest(types.ToRequestInfo(req))
}
func (cli *socketClient) SetOptionAsync(req types.RequestSetOption) *ReqRes {
return cli.queueRequest(types.ToRequestSetOption(req))
}
func (cli *socketClient) DeliverTxAsync(req types.RequestDeliverTx) *ReqRes {
return cli.queueRequest(types.ToRequestDeliverTx(req))
}
@@ -279,6 +275,14 @@ func (cli *socketClient) ApplySnapshotChunkAsync(req types.RequestApplySnapshotC
return cli.queueRequest(types.ToRequestApplySnapshotChunk(req))
}
func (cli *socketClient) PrepareProposalAsync(req types.RequestPrepareProposal) *ReqRes {
return cli.queueRequest(types.ToRequestPrepareProposal(req))
}
func (cli *socketClient) ProcessProposalAsync(req types.RequestProcessProposal) *ReqRes {
return cli.queueRequest(types.ToRequestProcessProposal(req))
}
//----------------------------------------
func (cli *socketClient) FlushSync() error {
@@ -308,15 +312,6 @@ func (cli *socketClient) InfoSync(req types.RequestInfo) (*types.ResponseInfo, e
return reqres.Response.GetInfo(), cli.Error()
}
func (cli *socketClient) SetOptionSync(req types.RequestSetOption) (*types.ResponseSetOption, error) {
reqres := cli.queueRequest(types.ToRequestSetOption(req))
if err := cli.FlushSync(); err != nil {
return nil, err
}
return reqres.Response.GetSetOption(), cli.Error()
}
func (cli *socketClient) DeliverTxSync(req types.RequestDeliverTx) (*types.ResponseDeliverTx, error) {
reqres := cli.queueRequest(types.ToRequestDeliverTx(req))
if err := cli.FlushSync(); err != nil {
@@ -417,6 +412,24 @@ func (cli *socketClient) ApplySnapshotChunkSync(
return reqres.Response.GetApplySnapshotChunk(), cli.Error()
}
func (cli *socketClient) PrepareProposalSync(req types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
reqres := cli.queueRequest(types.ToRequestPrepareProposal(req))
if err := cli.FlushSync(); err != nil {
return nil, err
}
return reqres.Response.GetPrepareProposal(), cli.Error()
}
func (cli *socketClient) ProcessProposalSync(req types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
reqres := cli.queueRequest(types.ToRequestProcessProposal(req))
if err := cli.FlushSync(); err != nil {
return nil, err
}
return reqres.Response.GetProcessProposal(), cli.Error()
}
//----------------------------------------
func (cli *socketClient) queueRequest(req *types.Request) *ReqRes {
@@ -468,8 +481,6 @@ func resMatchesReq(req *types.Request, res *types.Response) (ok bool) {
_, ok = res.Value.(*types.Response_Flush)
case *types.Request_Info:
_, ok = res.Value.(*types.Response_Info)
case *types.Request_SetOption:
_, ok = res.Value.(*types.Response_SetOption)
case *types.Request_DeliverTx:
_, ok = res.Value.(*types.Response_DeliverTx)
case *types.Request_CheckTx:
@@ -492,6 +503,10 @@ func resMatchesReq(req *types.Request, res *types.Response) (ok bool) {
_, ok = res.Value.(*types.Response_ListSnapshots)
case *types.Request_OfferSnapshot:
_, ok = res.Value.(*types.Response_OfferSnapshot)
case *types.Request_PrepareProposal:
_, ok = res.Value.(*types.Response_PrepareProposal)
case *types.Request_ProcessProposal:
_, ok = res.Value.(*types.Response_ProcessProposal)
}
return ok
}

View File

@@ -83,10 +83,11 @@ var RootCmd = &cobra.Command{
// Structure for data passed to print response.
type response struct {
// generic abci response
Data []byte
Code uint32
Info string
Log string
Data []byte
Code uint32
Info string
Log string
Status int32
Query *queryResponse
}
@@ -138,12 +139,13 @@ func addCommands() {
RootCmd.AddCommand(consoleCmd)
RootCmd.AddCommand(echoCmd)
RootCmd.AddCommand(infoCmd)
RootCmd.AddCommand(setOptionCmd)
RootCmd.AddCommand(deliverTxCmd)
RootCmd.AddCommand(checkTxCmd)
RootCmd.AddCommand(commitCmd)
RootCmd.AddCommand(versionCmd)
RootCmd.AddCommand(testCmd)
RootCmd.AddCommand(prepareProposalCmd)
RootCmd.AddCommand(processProposalCmd)
addQueryFlags()
RootCmd.AddCommand(queryCmd)
@@ -164,7 +166,6 @@ you'd like to run:
where example.file looks something like:
set_option serial on
check_tx 0x00
check_tx 0xff
deliver_tx 0x00
@@ -186,7 +187,7 @@ This command opens an interactive console for running any of the other commands
without opening a new connection each time
`,
Args: cobra.ExactArgs(0),
ValidArgs: []string{"echo", "info", "set_option", "deliver_tx", "check_tx", "commit", "query"},
ValidArgs: []string{"echo", "info", "deliver_tx", "check_tx", "prepare_proposal", "process_proposal", "commit", "query"},
RunE: cmdConsole,
}
@@ -204,13 +205,6 @@ var infoCmd = &cobra.Command{
Args: cobra.ExactArgs(0),
RunE: cmdInfo,
}
var setOptionCmd = &cobra.Command{
Use: "set_option",
Short: "set an option on the application",
Long: "set an option on the application",
Args: cobra.ExactArgs(2),
RunE: cmdSetOption,
}
var deliverTxCmd = &cobra.Command{
Use: "deliver_tx",
@@ -247,6 +241,22 @@ var versionCmd = &cobra.Command{
},
}
var prepareProposalCmd = &cobra.Command{
Use: "prepare_proposal",
Short: "prepare proposal",
Long: "prepare proposal",
Args: cobra.MinimumNArgs(0),
RunE: cmdPrepareProposal,
}
var processProposalCmd = &cobra.Command{
Use: "process_proposal",
Short: "process proposal",
Long: "process proposal",
Args: cobra.MinimumNArgs(0),
RunE: cmdProcessProposal,
}
var queryCmd = &cobra.Command{
Use: "query",
Short: "query the application state",
@@ -304,7 +314,6 @@ func cmdTest(cmd *cobra.Command, args []string) error {
return compose(
[]func() error{
func() error { return servertest.InitChain(client) },
func() error { return servertest.SetOption(client, "serial", "on") },
func() error { return servertest.Commit(client, nil) },
func() error { return servertest.DeliverTx(client, []byte("abc"), code.CodeTypeBadNonce, nil) },
func() error { return servertest.Commit(client, nil) },
@@ -319,6 +328,16 @@ func cmdTest(cmd *cobra.Command, args []string) error {
return servertest.DeliverTx(client, []byte{0x00, 0x00, 0x06}, code.CodeTypeBadNonce, nil)
},
func() error { return servertest.Commit(client, []byte{0, 0, 0, 0, 0, 0, 0, 5}) },
func() error {
return servertest.PrepareProposal(client, [][]byte{
{0x01},
}, [][]byte{{0x01}}, nil)
},
func() error {
return servertest.ProcessProposal(client, [][]byte{
{0x01},
}, types.ResponseProcessProposal_ACCEPT)
},
})
}
@@ -419,8 +438,10 @@ func muxOnCommands(cmd *cobra.Command, pArgs []string) error {
return cmdInfo(cmd, actualArgs)
case "query":
return cmdQuery(cmd, actualArgs)
case "set_option":
return cmdSetOption(cmd, actualArgs)
case "prepare_proposal":
return cmdPrepareProposal(cmd, actualArgs)
case "process_proposal":
return cmdProcessProposal(cmd, actualArgs)
default:
return cmdUnimplemented(cmd, pArgs)
}
@@ -444,7 +465,6 @@ func cmdUnimplemented(cmd *cobra.Command, args []string) error {
fmt.Printf("%s: %s\n", deliverTxCmd.Use, deliverTxCmd.Short)
fmt.Printf("%s: %s\n", queryCmd.Use, queryCmd.Short)
fmt.Printf("%s: %s\n", commitCmd.Use, commitCmd.Short)
fmt.Printf("%s: %s\n", setOptionCmd.Use, setOptionCmd.Short)
fmt.Println("Use \"[command] --help\" for more information about a command.")
return nil
@@ -484,25 +504,6 @@ func cmdInfo(cmd *cobra.Command, args []string) error {
const codeBad uint32 = 10
// Set an option on the application
func cmdSetOption(cmd *cobra.Command, args []string) error {
if len(args) < 2 {
printResponse(cmd, args, response{
Code: codeBad,
Log: "want at least arguments of the form: <key> <value>",
})
return nil
}
key, val := args[0], args[1]
_, err := client.SetOptionSync(types.RequestSetOption{Key: key, Value: val})
if err != nil {
return err
}
printResponse(cmd, args, response{Log: "OK (SetOption doesn't return anything.)"}) // NOTE: Nothing to show...
return nil
}
// Append a new tx to application
func cmdDeliverTx(cmd *cobra.Command, args []string) error {
if len(args) == 0 {
@@ -605,17 +606,75 @@ func cmdQuery(cmd *cobra.Command, args []string) error {
return nil
}
func cmdPrepareProposal(cmd *cobra.Command, args []string) error {
txsBytesArray := make([][]byte, len(args))
for i, arg := range args {
txBytes, err := stringOrHexToBytes(arg)
if err != nil {
return err
}
txsBytesArray[i] = txBytes
}
res, err := client.PrepareProposalSync(types.RequestPrepareProposal{
Txs: txsBytesArray,
// The kvstore needs a non-zero MaxTxBytes here; with the default of 0 every tx would be rejected
MaxTxBytes: 65536,
})
if err != nil {
return err
}
resps := make([]response, 0, len(res.Txs))
for _, tx := range res.Txs {
resps = append(resps, response{
Code: code.CodeTypeOK,
Log: "Succeeded. Tx: " + string(tx),
})
}
printResponse(cmd, args, resps...)
return nil
}
func cmdProcessProposal(cmd *cobra.Command, args []string) error {
txsBytesArray := make([][]byte, len(args))
for i, arg := range args {
txBytes, err := stringOrHexToBytes(arg)
if err != nil {
return err
}
txsBytesArray[i] = txBytes
}
res, err := client.ProcessProposalSync(types.RequestProcessProposal{
Txs: txsBytesArray,
})
if err != nil {
return err
}
printResponse(cmd, args, response{
Status: int32(res.Status),
})
return nil
}
func cmdKVStore(cmd *cobra.Command, args []string) error {
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
// Create the application - in memory or persisted to disk
var app types.Application
if flagPersist == "" {
app = kvstore.NewApplication()
} else {
app = kvstore.NewPersistentKVStoreApplication(flagPersist)
app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore"))
var err error
flagPersist, err = os.MkdirTemp("", "persistent_kvstore_tmp")
if err != nil {
return err
}
}
app = kvstore.NewPersistentKVStoreApplication(flagPersist)
app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore"))
// Start the listener
srv, err := server.NewServer(flagAddress, flagAbci, app)
@@ -641,44 +700,49 @@ func cmdKVStore(cmd *cobra.Command, args []string) error {
//--------------------------------------------------------------------------------
func printResponse(cmd *cobra.Command, args []string, rsp response) {
func printResponse(cmd *cobra.Command, args []string, rsps ...response) {
if flagVerbose {
fmt.Println(">", cmd.Use, strings.Join(args, " "))
}
// Always print the status code.
if rsp.Code == types.CodeTypeOK {
fmt.Printf("-> code: OK\n")
} else {
fmt.Printf("-> code: %d\n", rsp.Code)
for _, rsp := range rsps {
// Always print the status code.
if rsp.Code == types.CodeTypeOK {
fmt.Printf("-> code: OK\n")
} else {
fmt.Printf("-> code: %d\n", rsp.Code)
}
}
if len(rsp.Data) != 0 {
// Do no print this line when using the commit command
// because the string comes out as gibberish
if cmd.Use != "commit" {
fmt.Printf("-> data: %s\n", rsp.Data)
if len(rsp.Data) != 0 {
// Do not print this line when using the commit command
// because the string comes out as gibberish
if cmd.Use != "commit" {
fmt.Printf("-> data: %s\n", rsp.Data)
}
fmt.Printf("-> data.hex: 0x%X\n", rsp.Data)
}
if rsp.Log != "" {
fmt.Printf("-> log: %s\n", rsp.Log)
}
if cmd.Use == "process_proposal" {
fmt.Printf("-> status: %s\n", types.ResponseProcessProposal_ProposalStatus_name[rsp.Status])
}
fmt.Printf("-> data.hex: 0x%X\n", rsp.Data)
}
if rsp.Log != "" {
fmt.Printf("-> log: %s\n", rsp.Log)
}
if rsp.Query != nil {
fmt.Printf("-> height: %d\n", rsp.Query.Height)
if rsp.Query.Key != nil {
fmt.Printf("-> key: %s\n", rsp.Query.Key)
fmt.Printf("-> key.hex: %X\n", rsp.Query.Key)
}
if rsp.Query.Value != nil {
fmt.Printf("-> value: %s\n", rsp.Query.Value)
fmt.Printf("-> value.hex: %X\n", rsp.Query.Value)
}
if rsp.Query.ProofOps != nil {
fmt.Printf("-> proof: %#v\n", rsp.Query.ProofOps)
if rsp.Query != nil {
fmt.Printf("-> height: %d\n", rsp.Query.Height)
if rsp.Query.Key != nil {
fmt.Printf("-> key: %s\n", rsp.Query.Key)
fmt.Printf("-> key.hex: %X\n", rsp.Query.Key)
}
if rsp.Query.Value != nil {
fmt.Printf("-> value: %s\n", rsp.Query.Value)
fmt.Printf("-> value.hex: %X\n", rsp.Query.Value)
}
if rsp.Query.ProofOps != nil {
fmt.Printf("-> proof: %#v\n", rsp.Query.ProofOps)
}
}
}
}

View File

@@ -7,4 +7,5 @@ const (
CodeTypeBadNonce uint32 = 2
CodeTypeUnauthorized uint32 = 3
CodeTypeUnknownError uint32 = 4
CodeTypeExecuted uint32 = 5
)

View File

@@ -68,6 +68,7 @@ type Application struct {
state State
RetainBlocks int64 // blocks to retain after commit (via ResponseCommit.RetainHeight)
txToRemove map[string]struct{}
}
func NewApplication() *Application {
@@ -87,15 +88,19 @@ func (app *Application) Info(req types.RequestInfo) (resInfo types.ResponseInfo)
// tx is either "key=value" or just arbitrary bytes
func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
var key, value []byte
if isReplacedTx(req.Tx) {
app.txToRemove[string(req.Tx)] = struct{}{}
}
var key, value string
parts := bytes.Split(req.Tx, []byte("="))
if len(parts) == 2 {
key, value = parts[0], parts[1]
key, value = string(parts[0]), string(parts[1])
} else {
key, value = req.Tx, req.Tx
key, value = string(req.Tx), string(req.Tx)
}
err := app.state.db.Set(prefixKey(key), value)
err := app.state.db.Set(prefixKey([]byte(key)), []byte(value))
if err != nil {
panic(err)
}
@@ -105,10 +110,10 @@ func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeli
{
Type: "app",
Attributes: []types.EventAttribute{
{Key: []byte("creator"), Value: []byte("Cosmoshi Netowoko"), Index: true},
{Key: []byte("key"), Value: key, Index: true},
{Key: []byte("index_key"), Value: []byte("index is working"), Index: true},
{Key: []byte("noindex_key"), Value: []byte("index is working"), Index: false},
{Key: "creator", Value: "Cosmoshi Netowoko", Index: true},
{Key: "key", Value: key, Index: true},
{Key: "index_key", Value: "index is working", Index: true},
{Key: "noindex_key", Value: "index is working", Index: false},
},
},
}
@@ -117,6 +122,11 @@ func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeli
}
func (app *Application) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
if req.Type == types.CheckTxType_Recheck {
if _, ok := app.txToRemove[string(req.Tx)]; ok {
return types.ResponseCheckTx{Code: code.CodeTypeExecuted, GasWanted: 1}
}
}
return types.ResponseCheckTx{Code: code.CodeTypeOK, GasWanted: 1}
}
@@ -126,6 +136,8 @@ func (app *Application) Commit() types.ResponseCommit {
binary.PutVarint(appHash, app.state.Size)
app.state.AppHash = appHash
app.state.Height++
// empty out the set of transactions to remove via rechecktx
saveState(app.state)
resp := types.ResponseCommit{Data: appHash}
@@ -170,3 +182,18 @@ func (app *Application) Query(reqQuery types.RequestQuery) (resQuery types.Respo
return resQuery
}
func (app *Application) BeginBlock(req types.RequestBeginBlock) types.ResponseBeginBlock {
app.txToRemove = map[string]struct{}{}
return types.ResponseBeginBlock{}
}
func (app *Application) ProcessProposal(
req types.RequestProcessProposal) types.ResponseProcessProposal {
for _, tx := range req.Txs {
if len(tx) == 0 {
return types.ResponseProcessProposal{Status: types.ResponseProcessProposal_REJECT}
}
}
return types.ResponseProcessProposal{Status: types.ResponseProcessProposal_ACCEPT}
}

View File

@@ -2,7 +2,7 @@ package kvstore
import (
"fmt"
"io/ioutil"
"os"
"sort"
"testing"
@@ -71,7 +71,7 @@ func TestKVStoreKV(t *testing.T) {
}
func TestPersistentKVStoreKV(t *testing.T) {
dir, err := ioutil.TempDir("/tmp", "abci-kvstore-test") // TODO
dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
if err != nil {
t.Fatal(err)
}
@@ -87,7 +87,7 @@ func TestPersistentKVStoreKV(t *testing.T) {
}
func TestPersistentKVStoreInfo(t *testing.T) {
dir, err := ioutil.TempDir("/tmp", "abci-kvstore-test") // TODO
dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
if err != nil {
t.Fatal(err)
}
@@ -119,7 +119,7 @@ func TestPersistentKVStoreInfo(t *testing.T) {
// add a validator, remove a validator, update a validator
func TestValUpdates(t *testing.T) {
dir, err := ioutil.TempDir("/tmp", "abci-kvstore-test") // TODO
dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
if err != nil {
t.Fatal(err)
}
@@ -162,7 +162,7 @@ func TestValUpdates(t *testing.T) {
makeApplyBlock(t, kvstore, 2, diff, tx1, tx2, tx3)
vals1 = append(vals[:nInit-2], vals[nInit+1]) // nolint: gocritic
vals1 = append(vals[:nInit-2], vals[nInit+1]) //nolint: gocritic
vals2 = kvstore.Validators()
valsEqual(t, vals1, vals2)

View File

@@ -45,7 +45,10 @@ func NewPersistentKVStoreApplication(dbDir string) *PersistentKVStoreApplication
state := loadState(db)
return &PersistentKVStoreApplication{
app: &Application{state: state},
app: &Application{
state: state,
txToRemove: map[string]struct{}{},
},
valAddrToPubKeyMap: make(map[string]pc.PublicKey),
logger: log.NewNopLogger(),
}
@@ -62,10 +65,6 @@ func (app *PersistentKVStoreApplication) Info(req types.RequestInfo) types.Respo
return res
}
func (app *PersistentKVStoreApplication) SetOption(req types.RequestSetOption) types.ResponseSetOption {
return app.app.SetOption(req)
}
// tx is either "val:pubkey!power" or "key=value" or just arbitrary bytes
func (app *PersistentKVStoreApplication) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
// if it starts with "val:", update the validator set
@@ -76,6 +75,10 @@ func (app *PersistentKVStoreApplication) DeliverTx(req types.RequestDeliverTx) t
return app.execValidatorTx(req.Tx)
}
if isPrepareTx(req.Tx) {
return app.execPrepareTx(req.Tx)
}
// otherwise, update the key-value store
return app.app.DeliverTx(req)
}
@@ -126,7 +129,7 @@ func (app *PersistentKVStoreApplication) BeginBlock(req types.RequestBeginBlock)
// Punish validators who committed equivocation.
for _, ev := range req.ByzantineValidators {
if ev.Type == types.EvidenceType_DUPLICATE_VOTE {
if ev.Type == types.MisbehaviorType_DUPLICATE_VOTE {
addr := string(ev.Validator.Address)
if pubKey, ok := app.valAddrToPubKeyMap[addr]; ok {
app.updateValidator(types.ValidatorUpdate{
@@ -142,7 +145,7 @@ func (app *PersistentKVStoreApplication) BeginBlock(req types.RequestBeginBlock)
}
}
return types.ResponseBeginBlock{}
return app.app.BeginBlock(req)
}
// Update the validator set
@@ -170,6 +173,21 @@ func (app *PersistentKVStoreApplication) ApplySnapshotChunk(
return types.ResponseApplySnapshotChunk{Result: types.ResponseApplySnapshotChunk_ABORT}
}
func (app *PersistentKVStoreApplication) PrepareProposal(
req types.RequestPrepareProposal) types.ResponsePrepareProposal {
return types.ResponsePrepareProposal{Txs: app.substPrepareTx(req.Txs, req.MaxTxBytes)}
}
func (app *PersistentKVStoreApplication) ProcessProposal(
req types.RequestProcessProposal) types.ResponseProcessProposal {
for _, tx := range req.Txs {
if len(tx) == 0 || isPrepareTx(tx) {
return types.ResponseProcessProposal{Status: types.ResponseProcessProposal_REJECT}
}
}
return types.ResponseProcessProposal{Status: types.ResponseProcessProposal_ACCEPT}
}
//---------------------------------------------
// update validators
@@ -284,3 +302,42 @@ func (app *PersistentKVStoreApplication) updateValidator(v types.ValidatorUpdate
return types.ResponseDeliverTx{Code: code.CodeTypeOK}
}
// -----------------------------
const (
PreparePrefix = "prepare"
ReplacePrefix = "replace"
)
func isPrepareTx(tx []byte) bool { return bytes.HasPrefix(tx, []byte(PreparePrefix)) }
func isReplacedTx(tx []byte) bool {
return bytes.HasPrefix(tx, []byte(ReplacePrefix))
}
// execPrepareTx is a no-op. The tx data is treated as a placeholder
// and is substituted during PrepareProposal.
func (app *PersistentKVStoreApplication) execPrepareTx(tx []byte) types.ResponseDeliverTx {
// noop
return types.ResponseDeliverTx{}
}
// substPrepareTx substitutes all the transactions prefixed with 'prepare' in the
// proposal for transactions with the prefix stripped.
func (app *PersistentKVStoreApplication) substPrepareTx(blockData [][]byte, maxTxBytes int64) [][]byte {
txs := make([][]byte, 0, len(blockData))
var totalBytes int64
for _, tx := range blockData {
txMod := tx
if isPrepareTx(tx) {
txMod = bytes.Replace(tx, []byte(PreparePrefix), []byte(ReplacePrefix), 1)
}
totalBytes += int64(len(txMod))
if totalBytes > maxTxBytes {
break
}
txs = append(txs, txMod)
}
return txs
}
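For illustration, the substitution above turns a placeholder such as "preparedef" into "replacedef", matching the CLI test expectations later in this comparison (a standalone sketch; assumes `fmt` is imported):
tx := []byte("preparedef")
if isPrepareTx(tx) {
	tx = bytes.Replace(tx, []byte(PreparePrefix), []byte(ReplacePrefix), 1)
}
fmt.Printf("%s\n", tx) // prints: replacedef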

View File

@@ -2,9 +2,8 @@
Package server is used to start a new ABCI server.
It contains two server implementations:
* gRPC server
* socket server
- gRPC server
- socket server
*/
package server

View File

@@ -200,9 +200,6 @@ func (s *SocketServer) handleRequest(req *types.Request, responses chan<- *types
case *types.Request_Info:
res := s.app.Info(*r.Info)
responses <- types.ToResponseInfo(res)
case *types.Request_SetOption:
res := s.app.SetOption(*r.SetOption)
responses <- types.ToResponseSetOption(res)
case *types.Request_DeliverTx:
res := s.app.DeliverTx(*r.DeliverTx)
responses <- types.ToResponseDeliverTx(res)
@@ -230,6 +227,12 @@ func (s *SocketServer) handleRequest(req *types.Request, responses chan<- *types
case *types.Request_OfferSnapshot:
res := s.app.OfferSnapshot(*r.OfferSnapshot)
responses <- types.ToResponseOfferSnapshot(res)
case *types.Request_PrepareProposal:
res := s.app.PrepareProposal(*r.PrepareProposal)
responses <- types.ToResponsePrepareProposal(res)
case *types.Request_ProcessProposal:
res := s.app.ProcessProposal(*r.ProcessProposal)
responses <- types.ToResponseProcessProposal(res)
case *types.Request_LoadSnapshotChunk:
res := s.app.LoadSnapshotChunk(*r.LoadSnapshotChunk)
responses <- types.ToResponseLoadSnapshotChunk(res)

View File

@@ -29,17 +29,6 @@ func InitChain(client abcicli.Client) error {
return nil
}
func SetOption(client abcicli.Client, key, value string) error {
_, err := client.SetOptionSync(types.RequestSetOption{Key: key, Value: value})
if err != nil {
fmt.Println("Failed test: SetOption")
fmt.Printf("error while setting %v=%v: \nerror: %v\n", key, value, err)
return err
}
fmt.Println("Passed test: SetOption")
return nil
}
func Commit(client abcicli.Client, hashExp []byte) error {
res, err := client.CommitSync()
data := res.Data
@@ -76,6 +65,32 @@ func DeliverTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []
return nil
}
func PrepareProposal(client abcicli.Client, txBytes [][]byte, txExpected [][]byte, dataExp []byte) error {
res, _ := client.PrepareProposalSync(types.RequestPrepareProposal{Txs: txBytes})
for i, tx := range res.Txs {
if !bytes.Equal(tx, txExpected[i]) {
fmt.Println("Failed test: PrepareProposal")
fmt.Printf("PrepareProposal transaction was unexpected. Got %x expected %x.",
tx, txExpected[i])
return errors.New("PrepareProposal error")
}
}
fmt.Println("Passed test: PrepareProposal")
return nil
}
func ProcessProposal(client abcicli.Client, txBytes [][]byte, statusExp types.ResponseProcessProposal_ProposalStatus) error {
res, _ := client.ProcessProposalSync(types.RequestProcessProposal{Txs: txBytes})
if res.Status != statusExp {
fmt.Println("Failed test: ProcessProposal")
fmt.Printf("ProcessProposal response status was unexpected. Got %v expected %v.",
res.Status, statusExp)
return errors.New("ProcessProposal error")
}
fmt.Println("Passed test: ProcessProposal")
return nil
}
func CheckTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
res, _ := client.CheckTxSync(types.RequestCheckTx{Tx: txBytes})
code, data, log := res.Code, res.Data, res.Log

View File

@@ -1,5 +1,7 @@
echo hello
info
prepare_proposal "abc"
process_proposal "abc"
commit
deliver_tx "abc"
info
@@ -8,3 +10,9 @@ query "abc"
deliver_tx "def=xyz"
commit
query "def"
prepare_proposal "preparedef"
process_proposal "replacedef"
process_proposal "preparedef"
prepare_proposal
process_proposal
commit

View File

@@ -8,6 +8,14 @@
-> data: {"size":0}
-> data.hex: 0x7B2273697A65223A307D
> prepare_proposal "abc"
-> code: OK
-> log: Succeeded. Tx: abc
> process_proposal "abc"
-> code: OK
-> status: ACCEPT
> commit
-> code: OK
-> data.hex: 0x0000000000000000
@@ -49,3 +57,25 @@
-> value: xyz
-> value.hex: 78797A
> prepare_proposal "preparedef"
-> code: OK
-> log: Succeeded. Tx: replacedef
> process_proposal "replacedef"
-> code: OK
-> status: ACCEPT
> process_proposal "preparedef"
-> code: OK
-> status: REJECT
> prepare_proposal
> process_proposal
-> code: OK
-> status: ACCEPT
> commit
-> code: OK
-> data.hex: 0x0400000000000000

View File

@@ -1,4 +1,3 @@
set_option serial on
check_tx 0x00
check_tx 0xff
deliver_tx 0x00

View File

@@ -1,7 +1,3 @@
> set_option serial on
-> code: OK
-> log: OK (SetOption doesn't return anything.)
> check_tx 0x00
-> code: OK
@@ -12,18 +8,16 @@
-> code: OK
> check_tx 0x00
-> code: 2
-> log: Invalid nonce. Expected >= 1, got 0
-> code: OK
> deliver_tx 0x01
-> code: OK
> deliver_tx 0x04
-> code: 2
-> log: Invalid nonce. Expected 2, got 4
-> code: OK
> info
-> code: OK
-> data: {"hashes":0,"txs":2}
-> data.hex: 0x7B22686173686573223A302C22747873223A327D
-> data: {"size":3}
-> data.hex: 0x7B2273697A65223A337D

View File

@@ -30,6 +30,8 @@ function testExample() {
cat "${INPUT}.out.new"
echo "Expected:"
cat "${INPUT}.out"
echo "Diff:"
diff "${INPUT}.out" "${INPUT}.out.new"
exit 1
fi
@@ -37,6 +39,7 @@ function testExample() {
}
testExample 1 tests/test_cli/ex1.abci abci-cli kvstore
testExample 2 tests/test_cli/ex2.abci abci-cli kvstore
echo ""
echo "PASS"

View File

@@ -4,21 +4,24 @@ import (
context "golang.org/x/net/context"
)
//go:generate ../../scripts/mockery_generate.sh Application
// Application is an interface that enables any finite, deterministic state machine
// to be driven by a blockchain-based replication engine via the ABCI.
// All methods take a RequestXxx argument and return a ResponseXxx argument,
// except Commit, which takes no argument.
type Application interface {
// Info/Query Connection
Info(RequestInfo) ResponseInfo // Return application info
SetOption(RequestSetOption) ResponseSetOption // Set application option
Query(RequestQuery) ResponseQuery // Query for state
Info(RequestInfo) ResponseInfo // Return application info
Query(RequestQuery) ResponseQuery // Query for state
// Mempool Connection
CheckTx(RequestCheckTx) ResponseCheckTx // Validate a tx for the mempool
// Consensus Connection
InitChain(RequestInitChain) ResponseInitChain // Initialize blockchain w validators/other info from TendermintCore
InitChain(RequestInitChain) ResponseInitChain // Initialize blockchain w validators/other info from TendermintCore
PrepareProposal(RequestPrepareProposal) ResponsePrepareProposal
ProcessProposal(RequestProcessProposal) ResponseProcessProposal
BeginBlock(RequestBeginBlock) ResponseBeginBlock // Signals the beginning of a block
DeliverTx(RequestDeliverTx) ResponseDeliverTx // Deliver a tx for full processing
EndBlock(RequestEndBlock) ResponseEndBlock // Signals the end of a block, returns changes to the validator set
@@ -47,10 +50,6 @@ func (BaseApplication) Info(req RequestInfo) ResponseInfo {
return ResponseInfo{}
}
func (BaseApplication) SetOption(req RequestSetOption) ResponseSetOption {
return ResponseSetOption{}
}
func (BaseApplication) DeliverTx(req RequestDeliverTx) ResponseDeliverTx {
return ResponseDeliverTx{Code: CodeTypeOK}
}
@@ -95,6 +94,24 @@ func (BaseApplication) ApplySnapshotChunk(req RequestApplySnapshotChunk) Respons
return ResponseApplySnapshotChunk{}
}
func (BaseApplication) PrepareProposal(req RequestPrepareProposal) ResponsePrepareProposal {
txs := make([][]byte, 0, len(req.Txs))
var totalBytes int64
for _, tx := range req.Txs {
totalBytes += int64(len(tx))
if totalBytes > req.MaxTxBytes {
break
}
txs = append(txs, tx)
}
return ResponsePrepareProposal{Txs: txs}
}
func (BaseApplication) ProcessProposal(req RequestProcessProposal) ResponseProcessProposal {
return ResponseProcessProposal{
Status: ResponseProcessProposal_ACCEPT}
}
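A hedged sketch of the default PrepareProposal behaviour above: with `MaxTxBytes` set to 3, only the transactions that fit are echoed back (the third tx below would push the total to 4 bytes):
app := BaseApplication{}
res := app.PrepareProposal(RequestPrepareProposal{
	Txs:        [][]byte{{0x01}, {0x02, 0x03}, {0x04}},
	MaxTxBytes: 3,
})
// res.Txs is [][]byte{{0x01}, {0x02, 0x03}}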
//-------------------------------------------------------
// GRPCApplication is a GRPC wrapper for Application
@@ -119,11 +136,6 @@ func (app *GRPCApplication) Info(ctx context.Context, req *RequestInfo) (*Respon
return &res, nil
}
func (app *GRPCApplication) SetOption(ctx context.Context, req *RequestSetOption) (*ResponseSetOption, error) {
res := app.app.SetOption(*req)
return &res, nil
}
func (app *GRPCApplication) DeliverTx(ctx context.Context, req *RequestDeliverTx) (*ResponseDeliverTx, error) {
res := app.app.DeliverTx(*req)
return &res, nil
@@ -182,3 +194,15 @@ func (app *GRPCApplication) ApplySnapshotChunk(
res := app.app.ApplySnapshotChunk(*req)
return &res, nil
}
func (app *GRPCApplication) PrepareProposal(
ctx context.Context, req *RequestPrepareProposal) (*ResponsePrepareProposal, error) {
res := app.app.PrepareProposal(*req)
return &res, nil
}
func (app *GRPCApplication) ProcessProposal(
ctx context.Context, req *RequestProcessProposal) (*ResponseProcessProposal, error) {
res := app.app.ProcessProposal(*req)
return &res, nil
}

View File

@@ -1,11 +1,11 @@
package types
import (
"bufio"
"encoding/binary"
"io"
"github.com/gogo/protobuf/proto"
"github.com/tendermint/tendermint/libs/protoio"
)
const (
@@ -14,57 +14,19 @@ const (
// WriteMessage writes a varint length-delimited protobuf message.
func WriteMessage(msg proto.Message, w io.Writer) error {
bz, err := proto.Marshal(msg)
protoWriter := protoio.NewDelimitedWriter(w)
_, err := protoWriter.WriteMsg(msg)
if err != nil {
return err
}
return encodeByteSlice(w, bz)
return nil
}
// ReadMessage reads a varint length-delimited protobuf message.
func ReadMessage(r io.Reader, msg proto.Message) error {
return readProtoMsg(r, msg, maxMsgSize)
}
func readProtoMsg(r io.Reader, msg proto.Message, maxSize int) error {
// binary.ReadVarint takes an io.ByteReader, eg. a bufio.Reader
reader, ok := r.(*bufio.Reader)
if !ok {
reader = bufio.NewReader(r)
}
length64, err := binary.ReadVarint(reader)
if err != nil {
return err
}
length := int(length64)
if length < 0 || length > maxSize {
return io.ErrShortBuffer
}
buf := make([]byte, length)
if _, err := io.ReadFull(reader, buf); err != nil {
return err
}
return proto.Unmarshal(buf, msg)
}
//-----------------------------------------------------------------------
// NOTE: we copied wire.EncodeByteSlice from go-wire rather than keep
// go-wire as a dep
func encodeByteSlice(w io.Writer, bz []byte) (err error) {
err = encodeVarint(w, int64(len(bz)))
if err != nil {
return
}
_, err = w.Write(bz)
return
}
func encodeVarint(w io.Writer, i int64) (err error) {
var buf [10]byte
n := binary.PutVarint(buf[:], i)
_, err = w.Write(buf[0:n])
return
_, err := protoio.NewDelimitedReader(r, maxMsgSize).ReadMsg(msg)
return err
}
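A hedged round-trip sketch of the two helpers above (in-package; assumes `bytes` is imported):
var buf bytes.Buffer
if err := WriteMessage(&RequestEcho{Message: "hi"}, &buf); err != nil {
	panic(err)
}
var echo RequestEcho
if err := ReadMessage(&buf, &echo); err != nil {
	panic(err)
}
// echo.Message == "hi"; on the wire the payload is prefixed with a uvarint length.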
//----------------------------------------
@@ -87,12 +49,6 @@ func ToRequestInfo(req RequestInfo) *Request {
}
}
func ToRequestSetOption(req RequestSetOption) *Request {
return &Request{
Value: &Request_SetOption{&req},
}
}
func ToRequestDeliverTx(req RequestDeliverTx) *Request {
return &Request{
Value: &Request_DeliverTx{&req},
@@ -159,6 +115,18 @@ func ToRequestApplySnapshotChunk(req RequestApplySnapshotChunk) *Request {
}
}
func ToRequestPrepareProposal(req RequestPrepareProposal) *Request {
return &Request{
Value: &Request_PrepareProposal{&req},
}
}
func ToRequestProcessProposal(req RequestProcessProposal) *Request {
return &Request{
Value: &Request_ProcessProposal{&req},
}
}
//----------------------------------------
func ToResponseException(errStr string) *Response {
@@ -185,12 +153,6 @@ func ToResponseInfo(res ResponseInfo) *Response {
}
}
func ToResponseSetOption(res ResponseSetOption) *Response {
return &Response{
Value: &Response_SetOption{&res},
}
}
func ToResponseDeliverTx(res ResponseDeliverTx) *Response {
return &Response{
Value: &Response_DeliverTx{&res},
@@ -256,3 +218,15 @@ func ToResponseApplySnapshotChunk(res ResponseApplySnapshotChunk) *Response {
Value: &Response_ApplySnapshotChunk{&res},
}
}
func ToResponsePrepareProposal(res ResponsePrepareProposal) *Response {
return &Response{
Value: &Response_PrepareProposal{&res},
}
}
func ToResponseProcessProposal(res ResponseProcessProposal) *Response {
return &Response{
Value: &Response_ProcessProposal{&res},
}
}

View File

@@ -25,7 +25,7 @@ func TestMarshalJSON(t *testing.T) {
{
Type: "testEvent",
Attributes: []EventAttribute{
{Key: []byte("pho"), Value: []byte("bo")},
{Key: "pho", Value: "bo"},
},
},
},
@@ -92,7 +92,7 @@ func TestWriteReadMessage2(t *testing.T) {
{
Type: "testEvent",
Attributes: []EventAttribute{
{Key: []byte("abc"), Value: []byte("def")},
{Key: "abc", Value: "def"},
},
},
},

View File

@@ -0,0 +1,224 @@
// Code generated by mockery. DO NOT EDIT.
package mocks
import (
mock "github.com/stretchr/testify/mock"
types "github.com/tendermint/tendermint/abci/types"
)
// Application is an autogenerated mock type for the Application type
type Application struct {
mock.Mock
}
// ApplySnapshotChunk provides a mock function with given fields: _a0
func (_m *Application) ApplySnapshotChunk(_a0 types.RequestApplySnapshotChunk) types.ResponseApplySnapshotChunk {
ret := _m.Called(_a0)
var r0 types.ResponseApplySnapshotChunk
if rf, ok := ret.Get(0).(func(types.RequestApplySnapshotChunk) types.ResponseApplySnapshotChunk); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseApplySnapshotChunk)
}
return r0
}
// BeginBlock provides a mock function with given fields: _a0
func (_m *Application) BeginBlock(_a0 types.RequestBeginBlock) types.ResponseBeginBlock {
ret := _m.Called(_a0)
var r0 types.ResponseBeginBlock
if rf, ok := ret.Get(0).(func(types.RequestBeginBlock) types.ResponseBeginBlock); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseBeginBlock)
}
return r0
}
// CheckTx provides a mock function with given fields: _a0
func (_m *Application) CheckTx(_a0 types.RequestCheckTx) types.ResponseCheckTx {
ret := _m.Called(_a0)
var r0 types.ResponseCheckTx
if rf, ok := ret.Get(0).(func(types.RequestCheckTx) types.ResponseCheckTx); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseCheckTx)
}
return r0
}
// Commit provides a mock function with given fields:
func (_m *Application) Commit() types.ResponseCommit {
ret := _m.Called()
var r0 types.ResponseCommit
if rf, ok := ret.Get(0).(func() types.ResponseCommit); ok {
r0 = rf()
} else {
r0 = ret.Get(0).(types.ResponseCommit)
}
return r0
}
// DeliverTx provides a mock function with given fields: _a0
func (_m *Application) DeliverTx(_a0 types.RequestDeliverTx) types.ResponseDeliverTx {
ret := _m.Called(_a0)
var r0 types.ResponseDeliverTx
if rf, ok := ret.Get(0).(func(types.RequestDeliverTx) types.ResponseDeliverTx); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseDeliverTx)
}
return r0
}
// EndBlock provides a mock function with given fields: _a0
func (_m *Application) EndBlock(_a0 types.RequestEndBlock) types.ResponseEndBlock {
ret := _m.Called(_a0)
var r0 types.ResponseEndBlock
if rf, ok := ret.Get(0).(func(types.RequestEndBlock) types.ResponseEndBlock); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseEndBlock)
}
return r0
}
// Info provides a mock function with given fields: _a0
func (_m *Application) Info(_a0 types.RequestInfo) types.ResponseInfo {
ret := _m.Called(_a0)
var r0 types.ResponseInfo
if rf, ok := ret.Get(0).(func(types.RequestInfo) types.ResponseInfo); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseInfo)
}
return r0
}
// InitChain provides a mock function with given fields: _a0
func (_m *Application) InitChain(_a0 types.RequestInitChain) types.ResponseInitChain {
ret := _m.Called(_a0)
var r0 types.ResponseInitChain
if rf, ok := ret.Get(0).(func(types.RequestInitChain) types.ResponseInitChain); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseInitChain)
}
return r0
}
// ListSnapshots provides a mock function with given fields: _a0
func (_m *Application) ListSnapshots(_a0 types.RequestListSnapshots) types.ResponseListSnapshots {
ret := _m.Called(_a0)
var r0 types.ResponseListSnapshots
if rf, ok := ret.Get(0).(func(types.RequestListSnapshots) types.ResponseListSnapshots); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseListSnapshots)
}
return r0
}
// LoadSnapshotChunk provides a mock function with given fields: _a0
func (_m *Application) LoadSnapshotChunk(_a0 types.RequestLoadSnapshotChunk) types.ResponseLoadSnapshotChunk {
ret := _m.Called(_a0)
var r0 types.ResponseLoadSnapshotChunk
if rf, ok := ret.Get(0).(func(types.RequestLoadSnapshotChunk) types.ResponseLoadSnapshotChunk); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseLoadSnapshotChunk)
}
return r0
}
// OfferSnapshot provides a mock function with given fields: _a0
func (_m *Application) OfferSnapshot(_a0 types.RequestOfferSnapshot) types.ResponseOfferSnapshot {
ret := _m.Called(_a0)
var r0 types.ResponseOfferSnapshot
if rf, ok := ret.Get(0).(func(types.RequestOfferSnapshot) types.ResponseOfferSnapshot); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseOfferSnapshot)
}
return r0
}
// PrepareProposal provides a mock function with given fields: _a0
func (_m *Application) PrepareProposal(_a0 types.RequestPrepareProposal) types.ResponsePrepareProposal {
ret := _m.Called(_a0)
var r0 types.ResponsePrepareProposal
if rf, ok := ret.Get(0).(func(types.RequestPrepareProposal) types.ResponsePrepareProposal); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponsePrepareProposal)
}
return r0
}
// ProcessProposal provides a mock function with given fields: _a0
func (_m *Application) ProcessProposal(_a0 types.RequestProcessProposal) types.ResponseProcessProposal {
ret := _m.Called(_a0)
var r0 types.ResponseProcessProposal
if rf, ok := ret.Get(0).(func(types.RequestProcessProposal) types.ResponseProcessProposal); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseProcessProposal)
}
return r0
}
// Query provides a mock function with given fields: _a0
func (_m *Application) Query(_a0 types.RequestQuery) types.ResponseQuery {
ret := _m.Called(_a0)
var r0 types.ResponseQuery
if rf, ok := ret.Get(0).(func(types.RequestQuery) types.ResponseQuery); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseQuery)
}
return r0
}
type mockConstructorTestingTNewApplication interface {
mock.TestingT
Cleanup(func())
}
// NewApplication creates a new instance of Application. It also registers a testing interface on the mock and a cleanup function to assert the mock's expectations.
func NewApplication(t mockConstructorTestingTNewApplication) *Application {
mock := &Application{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
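NewApplication above wires the mock into the test lifecycle, so expectations are asserted automatically on cleanup. A minimal usage sketch relying on the standard testify expectation API; the request and response values are illustrative only:

package mocks_test

import (
	"testing"

	"github.com/tendermint/tendermint/abci/types"
	"github.com/tendermint/tendermint/abci/types/mocks"
)

func TestInfoIsMocked(t *testing.T) {
	app := mocks.NewApplication(t)

	// Stub a single Info call; AssertExpectations runs via t.Cleanup.
	app.On("Info", types.RequestInfo{Version: "v0.37.0"}).
		Return(types.ResponseInfo{Data: "kvstore"})

	res := app.Info(types.RequestInfo{Version: "v0.37.0"})
	if res.Data != "kvstore" {
		t.Fatalf("unexpected Info response: %+v", res)
	}
}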

abci/types/mocks/base.go (new file, 187 lines)

@@ -0,0 +1,187 @@
package mocks
import (
types "github.com/tendermint/tendermint/abci/types"
)
// BaseMock provides a wrapper around the generated Application mock and a BaseApplication.
// BaseMock first tries to use the mock's implementation of the method.
// If no functionality was provided for the mock by the user, BaseMock dispatches
// to the BaseApplication and uses its functionality.
// BaseMock allows users to provide mocked functionality for only the methods that matter
// for their test while avoiding a panic if the code calls Application methods that are
// not relevant to the test.
type BaseMock struct {
base *types.BaseApplication
*Application
}
func NewBaseMock() BaseMock {
return BaseMock{
base: types.NewBaseApplication(),
Application: new(Application),
}
}
// Info/Query Connection
// Return application info
func (m BaseMock) Info(input types.RequestInfo) types.ResponseInfo {
var ret types.ResponseInfo
defer func() {
if r := recover(); r != nil {
ret = m.base.Info(input)
}
}()
ret = m.Application.Info(input)
return ret
}
func (m BaseMock) Query(input types.RequestQuery) types.ResponseQuery {
var ret types.ResponseQuery
defer func() {
if r := recover(); r != nil {
ret = m.base.Query(input)
}
}()
ret = m.Application.Query(input)
return ret
}
// Mempool Connection
// Validate a tx for the mempool
func (m BaseMock) CheckTx(input types.RequestCheckTx) types.ResponseCheckTx {
var ret types.ResponseCheckTx
defer func() {
if r := recover(); r != nil {
ret = m.base.CheckTx(input)
}
}()
ret = m.Application.CheckTx(input)
return ret
}
// Consensus Connection
// Initialize blockchain w validators/other info from TendermintCore
func (m BaseMock) InitChain(input types.RequestInitChain) types.ResponseInitChain {
var ret types.ResponseInitChain
defer func() {
if r := recover(); r != nil {
ret = m.base.InitChain(input)
}
}()
ret = m.Application.InitChain(input)
return ret
}
func (m BaseMock) PrepareProposal(input types.RequestPrepareProposal) types.ResponsePrepareProposal {
var ret types.ResponsePrepareProposal
defer func() {
if r := recover(); r != nil {
ret = m.base.PrepareProposal(input)
}
}()
ret = m.Application.PrepareProposal(input)
return ret
}
func (m BaseMock) ProcessProposal(input types.RequestProcessProposal) types.ResponseProcessProposal {
var ret types.ResponseProcessProposal
defer func() {
if r := recover(); r != nil {
ret = m.base.ProcessProposal(input)
}
}()
ret = m.Application.ProcessProposal(input)
return ret
}
// Commit the state and return the application Merkle root hash
func (m BaseMock) Commit() types.ResponseCommit {
var ret types.ResponseCommit
defer func() {
if r := recover(); r != nil {
ret = m.base.Commit()
}
}()
ret = m.Application.Commit()
return ret
}
// State Sync Connection
// List available snapshots
func (m BaseMock) ListSnapshots(input types.RequestListSnapshots) types.ResponseListSnapshots {
var ret types.ResponseListSnapshots
defer func() {
if r := recover(); r != nil {
ret = m.base.ListSnapshots(input)
}
}()
ret = m.Application.ListSnapshots(input)
return ret
}
func (m BaseMock) OfferSnapshot(input types.RequestOfferSnapshot) types.ResponseOfferSnapshot {
var ret types.ResponseOfferSnapshot
defer func() {
if r := recover(); r != nil {
ret = m.base.OfferSnapshot(input)
}
}()
ret = m.Application.OfferSnapshot(input)
return ret
}
func (m BaseMock) LoadSnapshotChunk(input types.RequestLoadSnapshotChunk) types.ResponseLoadSnapshotChunk {
var ret types.ResponseLoadSnapshotChunk
defer func() {
if r := recover(); r != nil {
ret = m.base.LoadSnapshotChunk(input)
}
}()
ret = m.Application.LoadSnapshotChunk(input)
return ret
}
func (m BaseMock) ApplySnapshotChunk(input types.RequestApplySnapshotChunk) types.ResponseApplySnapshotChunk {
var ret types.ResponseApplySnapshotChunk
defer func() {
if r := recover(); r != nil {
ret = m.base.ApplySnapshotChunk(input)
}
}()
ret = m.Application.ApplySnapshotChunk(input)
return ret
}
func (m BaseMock) BeginBlock(input types.RequestBeginBlock) types.ResponseBeginBlock {
var ret types.ResponseBeginBlock
defer func() {
if r := recover(); r != nil {
ret = m.base.BeginBlock(input)
}
}()
ret = m.Application.BeginBlock(input)
return ret
}
func (m BaseMock) DeliverTx(input types.RequestDeliverTx) types.ResponseDeliverTx {
var ret types.ResponseDeliverTx
defer func() {
if r := recover(); r != nil {
ret = m.base.DeliverTx(input)
}
}()
ret = m.Application.DeliverTx(input)
return ret
}
func (m BaseMock) EndBlock(input types.RequestEndBlock) types.ResponseEndBlock {
var ret types.ResponseEndBlock
defer func() {
if r := recover(); r != nil {
ret = m.base.EndBlock(input)
}
}()
ret = m.Application.EndBlock(input)
return ret
}
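To make the fallback behavior described in the BaseMock comment concrete: a test stubs only the methods it cares about; any other call panics inside the embedded Application mock, the panic is recovered, and the BaseApplication default is returned instead. A hypothetical sketch (the expectation values are illustrative):

package mocks_test

import (
	"testing"

	"github.com/tendermint/tendermint/abci/types"
	"github.com/tendermint/tendermint/abci/types/mocks"
)

func TestBaseMockFallback(t *testing.T) {
	m := mocks.NewBaseMock()

	// Only CheckTx is stubbed for this test.
	m.Application.On("CheckTx", types.RequestCheckTx{Tx: []byte("tx")}).
		Return(types.ResponseCheckTx{Code: 1})

	if got := m.CheckTx(types.RequestCheckTx{Tx: []byte("tx")}); got.Code != 1 {
		t.Fatalf("expected the stubbed CheckTx response, got %+v", got)
	}

	// Commit was never stubbed: the mock panics, BaseMock recovers and
	// returns BaseApplication's default ResponseCommit.
	_ = m.Commit()
}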


@@ -41,6 +41,16 @@ func (r ResponseQuery) IsErr() bool {
return r.Code != CodeTypeOK
}
// IsAccepted returns true if Status is ACCEPT
func (r ResponseProcessProposal) IsAccepted() bool {
return r.Status == ResponseProcessProposal_ACCEPT
}
// IsStatusUnknown returns true if Status is UNKNOWN
func (r ResponseProcessProposal) IsStatusUnknown() bool {
return r.Status == ResponseProcessProposal_UNKNOWN
}
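// handleProposalStatus is a hypothetical helper, shown here only to
// illustrate the intended use of the accessors above; it is not part of
// this change.
func handleProposalStatus(res ResponseProcessProposal) string {
	switch {
	case res.IsStatusUnknown():
		return "error: application did not set a status"
	case res.IsAccepted():
		return "accept: prevote the proposed block"
	default:
		return "reject: prevote nil"
	}
}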
//---------------------------------------------------------------------------
// override JSON marshaling so we emit defaults (ie. disable omitempty)
@@ -52,16 +62,6 @@ var (
jsonpbUnmarshaller = jsonpb.Unmarshaler{}
)
func (r *ResponseSetOption) MarshalJSON() ([]byte, error) {
s, err := jsonpbMarshaller.MarshalToString(r)
return []byte(s), err
}
func (r *ResponseSetOption) UnmarshalJSON(b []byte) error {
reader := bytes.NewBuffer(b)
return jsonpbUnmarshaller.Unmarshal(reader, r)
}
func (r *ResponseCheckTx) MarshalJSON() ([]byte, error) {
s, err := jsonpbMarshaller.MarshalToString(r)
return []byte(s), err
@@ -126,6 +126,5 @@ var _ jsonRoundTripper = (*ResponseCommit)(nil)
var _ jsonRoundTripper = (*ResponseQuery)(nil)
var _ jsonRoundTripper = (*ResponseDeliverTx)(nil)
var _ jsonRoundTripper = (*ResponseCheckTx)(nil)
var _ jsonRoundTripper = (*ResponseSetOption)(nil)
var _ jsonRoundTripper = (*EventAttribute)(nil)

File diff suppressed because it is too large.


@@ -1,4 +1,4 @@
package blockchain
package blocksync
import (
"errors"
@@ -6,7 +6,7 @@ import (
"github.com/gogo/protobuf/proto"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blockchain"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blocksync"
"github.com/tendermint/tendermint/types"
)


@@ -1,4 +1,4 @@
package blockchain
package blocksync_test
import (
"encoding/hex"
@@ -9,7 +9,8 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blockchain"
"github.com/tendermint/tendermint/blocksync"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blocksync"
"github.com/tendermint/tendermint/types"
)
@@ -28,7 +29,7 @@ func TestBcBlockRequestMessageValidateBasic(t *testing.T) {
tc := tc
t.Run(tc.testName, func(t *testing.T) {
request := bcproto.BlockRequest{Height: tc.requestHeight}
assert.Equal(t, tc.expectErr, ValidateMsg(&request) != nil, "Validate Basic had an unexpected result")
assert.Equal(t, tc.expectErr, blocksync.ValidateMsg(&request) != nil, "Validate Basic had an unexpected result")
})
}
}
@@ -48,14 +49,14 @@ func TestBcNoBlockResponseMessageValidateBasic(t *testing.T) {
tc := tc
t.Run(tc.testName, func(t *testing.T) {
nonResponse := bcproto.NoBlockResponse{Height: tc.nonResponseHeight}
assert.Equal(t, tc.expectErr, ValidateMsg(&nonResponse) != nil, "Validate Basic had an unexpected result")
assert.Equal(t, tc.expectErr, blocksync.ValidateMsg(&nonResponse) != nil, "Validate Basic had an unexpected result")
})
}
}
func TestBcStatusRequestMessageValidateBasic(t *testing.T) {
request := bcproto.StatusRequest{}
assert.NoError(t, ValidateMsg(&request))
assert.NoError(t, blocksync.ValidateMsg(&request))
}
func TestBcStatusResponseMessageValidateBasic(t *testing.T) {
@@ -73,12 +74,12 @@ func TestBcStatusResponseMessageValidateBasic(t *testing.T) {
tc := tc
t.Run(tc.testName, func(t *testing.T) {
response := bcproto.StatusResponse{Height: tc.responseHeight}
assert.Equal(t, tc.expectErr, ValidateMsg(&response) != nil, "Validate Basic had an unexpected result")
assert.Equal(t, tc.expectErr, blocksync.ValidateMsg(&response) != nil, "Validate Basic had an unexpected result")
})
}
}
// nolint:lll // ignore line length in tests
//nolint:lll // ignore line length in tests
func TestBlockchainMessageVectors(t *testing.T) {
block := types.MakeBlock(int64(3), []types.Tx{types.Tx("Hello World")}, nil, nil)
block.Version.Block = 11 // overwrite updated protocol version


@@ -1,4 +1,4 @@
package blockchain
package blocksync
import (
"errors"
@@ -58,7 +58,7 @@ var peerTimeout = 15 * time.Second // not const so we can override with tests
are not at peer limits, we can probably switch to consensus reactor
*/
// BlockPool keeps track of the fast sync peers, block requests and block responses.
// BlockPool keeps track of the block sync peers, block requests and block responses.
type BlockPool struct {
service.BaseService
startTime time.Time
@@ -410,6 +410,7 @@ func (pool *BlockPool) sendError(err error, peerID p2p.ID) {
}
// for debugging purposes
//
//nolint:unused
func (pool *BlockPool) debug() string {
pool.mtx.Lock()


@@ -1,4 +1,4 @@
package blockchain
package blocksync
import (
"fmt"


@@ -1,4 +1,4 @@
package blockchain
package blocksync
import (
"fmt"
@@ -7,15 +7,15 @@ import (
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/p2p"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blockchain"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blocksync"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
)
const (
// BlockchainChannel is a channel for blocks and status updates (`BlockStore` height)
BlockchainChannel = byte(0x40)
// BlocksyncChannel is a channel for blocks and status updates (`BlockStore` height)
BlocksyncChannel = byte(0x40)
trySyncIntervalMS = 10
@@ -30,7 +30,7 @@ const (
)
type consensusReactor interface {
// for when we switch from blockchain reactor and fast sync to
// for when we switch from blockchain reactor and block sync to
// the consensus machine
SwitchToConsensus(state sm.State, skipWAL bool)
}
@@ -54,7 +54,7 @@ type Reactor struct {
blockExec *sm.BlockExecutor
store *store.BlockStore
pool *BlockPool
fastSync bool
blockSync bool
requestsCh <-chan BlockRequest
errorsCh <-chan peerError
@@ -62,7 +62,7 @@ type Reactor struct {
// NewReactor returns new reactor instance.
func NewReactor(state sm.State, blockExec *sm.BlockExecutor, store *store.BlockStore,
fastSync bool) *Reactor {
blockSync bool) *Reactor {
if state.LastBlockHeight != store.Height() {
panic(fmt.Sprintf("state (%v) and store (%v) height mismatch", state.LastBlockHeight,
@@ -85,7 +85,7 @@ func NewReactor(state sm.State, blockExec *sm.BlockExecutor, store *store.BlockS
blockExec: blockExec,
store: store,
pool: pool,
fastSync: fastSync,
blockSync: blockSync,
requestsCh: requestsCh,
errorsCh: errorsCh,
}
@@ -101,7 +101,7 @@ func (bcR *Reactor) SetLogger(l log.Logger) {
// OnStart implements service.Service.
func (bcR *Reactor) OnStart() error {
if bcR.fastSync {
if bcR.blockSync {
err := bcR.pool.Start()
if err != nil {
return err
@@ -111,9 +111,9 @@ func (bcR *Reactor) OnStart() error {
return nil
}
// SwitchToFastSync is called by the state sync reactor when switching to fast sync.
func (bcR *Reactor) SwitchToFastSync(state sm.State) error {
bcR.fastSync = true
// SwitchToBlockSync is called by the state sync reactor when switching to block sync.
func (bcR *Reactor) SwitchToBlockSync(state sm.State) error {
bcR.blockSync = true
bcR.initialState = state
bcR.pool.height = state.LastBlockHeight + 1
@@ -127,7 +127,7 @@ func (bcR *Reactor) SwitchToFastSync(state sm.State) error {
// OnStop implements service.Service.
func (bcR *Reactor) OnStop() {
if bcR.fastSync {
if bcR.blockSync {
if err := bcR.pool.Stop(); err != nil {
bcR.Logger.Error("Error stopping pool", "err", err)
}
@@ -138,7 +138,7 @@ func (bcR *Reactor) OnStop() {
func (bcR *Reactor) GetChannels() []*p2p.ChannelDescriptor {
return []*p2p.ChannelDescriptor{
{
ID: BlockchainChannel,
ID: BlocksyncChannel,
Priority: 5,
SendQueueCapacity: 1000,
RecvBufferCapacity: 50 * 4096,
@@ -157,7 +157,7 @@ func (bcR *Reactor) AddPeer(peer p2p.Peer) {
return
}
peer.Send(BlockchainChannel, msgBytes)
peer.Send(BlocksyncChannel, msgBytes)
// it's OK if send fails. will try later in poolRoutine
// peer is added to the pool once we receive the first
@@ -188,7 +188,7 @@ func (bcR *Reactor) respondToPeer(msg *bcproto.BlockRequest,
return false
}
return src.TrySend(BlockchainChannel, msgBytes)
return src.TrySend(BlocksyncChannel, msgBytes)
}
bcR.Logger.Info("Peer asking for a block we don't have", "src", src, "height", msg.Height)
@@ -199,7 +199,7 @@ func (bcR *Reactor) respondToPeer(msg *bcproto.BlockRequest,
return false
}
return src.TrySend(BlockchainChannel, msgBytes)
return src.TrySend(BlocksyncChannel, msgBytes)
}
// Receive implements Reactor by handling 4 types of messages (look below).
@@ -239,7 +239,7 @@ func (bcR *Reactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
bcR.Logger.Error("could not convert msg to protobuf", "err", err)
return
}
src.TrySend(BlockchainChannel, msgBytes)
src.TrySend(BlocksyncChannel, msgBytes)
case *bcproto.StatusResponse:
// Got a peer status. Unverified.
bcR.pool.SetPeerRange(src.ID(), msg.Base, msg.Height)
@@ -291,7 +291,7 @@ func (bcR *Reactor) poolRoutine(stateSynced bool) {
continue
}
queued := peer.TrySend(BlockchainChannel, msgBytes)
queued := peer.TrySend(BlocksyncChannel, msgBytes)
if !queued {
bcR.Logger.Debug("Send queue is full, drop block request", "peer", peer.ID(), "height", request.Height)
}
@@ -303,7 +303,7 @@ func (bcR *Reactor) poolRoutine(stateSynced bool) {
case <-statusUpdateTicker.C:
// ask for status updates
go bcR.BroadcastStatusRequest() // nolint: errcheck
go bcR.BroadcastStatusRequest() //nolint: errcheck
}
}
@@ -359,14 +359,20 @@ FOR_LOOP:
didProcessCh <- struct{}{}
}
firstParts := first.MakePartSet(types.BlockPartSizeBytes)
firstParts, err := first.MakePartSet(types.BlockPartSizeBytes)
if err != nil {
bcR.Logger.Error("failed to make part set",
"height", first.Height,
"err", err.Error())
break FOR_LOOP
}
firstPartSetHeader := firstParts.Header()
firstID := types.BlockID{Hash: first.Hash(), PartSetHeader: firstPartSetHeader}
// Finally, verify the first block using the second's commit
// NOTE: we can probably make this more efficient, but note that calling
// first.Hash() doesn't verify the tx contents, so MakePartSet() is
// currently necessary.
err := state.Validators.VerifyCommitLight(
err = state.Validators.VerifyCommitLight(
chainID, firstID, first.Height, second.LastCommit)
if err == nil {
@@ -409,7 +415,7 @@ FOR_LOOP:
if blocksSynced%100 == 0 {
lastRate = 0.9*lastRate + 0.1*(100/time.Since(lastHundred).Seconds())
bcR.Logger.Info("Fast Sync Rate", "height", bcR.pool.height,
bcR.Logger.Info("Block Sync Rate", "height", bcR.pool.height,
"max_peer_height", bcR.pool.MaxPeerHeight(), "blocks/s", lastRate)
lastHundred = time.Now()
}
@@ -430,7 +436,7 @@ func (bcR *Reactor) BroadcastStatusRequest() error {
return fmt.Errorf("could not convert msg to proto: %w", err)
}
bcR.Switch.Broadcast(BlockchainChannel, bm)
bcR.Switch.Broadcast(BlocksyncChannel, bm)
return nil
}
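The block-handling hunk above shows that MakePartSet now returns an error instead of panicking on failure, so callers build the part set and the BlockID in two checked steps. A minimal sketch using only the identifiers visible in this diff (the package and function names are illustrative):

package example

import (
	"fmt"

	"github.com/tendermint/tendermint/types"
)

// blockIDFor derives a BlockID with the new two-step, error-checked pattern.
func blockIDFor(block *types.Block) (types.BlockID, error) {
	parts, err := block.MakePartSet(types.BlockPartSizeBytes)
	if err != nil {
		return types.BlockID{}, fmt.Errorf("failed to make part set at height %d: %w", block.Height, err)
	}
	return types.BlockID{Hash: block.Hash(), PartSetHeader: parts.Header()}, nil
}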


@@ -1,4 +1,4 @@
package blockchain
package blocksync
import (
"fmt"
@@ -8,6 +8,7 @@ import (
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
dbm "github.com/tendermint/tm-db"
@@ -15,7 +16,7 @@ import (
abci "github.com/tendermint/tendermint/abci/types"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/mempool/mock"
mpmocks "github.com/tendermint/tendermint/mempool/mocks"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
@@ -52,6 +53,7 @@ type ReactorPair struct {
}
func newReactor(
t *testing.T,
logger log.Logger,
genDoc *types.GenesisDoc,
privVals []types.PrivValidator,
@@ -62,7 +64,7 @@ func newReactor(
app := &testApp{}
cc := proxy.NewLocalClientCreator(app)
proxyApp := proxy.NewAppConns(cc)
proxyApp := proxy.NewAppConns(cc, proxy.NopMetrics())
err := proxyApp.Start()
if err != nil {
panic(fmt.Errorf("error start app: %w", err))
@@ -70,7 +72,9 @@ func newReactor(
blockDB := dbm.NewMemDB()
stateDB := dbm.NewMemDB()
stateStore := sm.NewStore(stateDB)
stateStore := sm.NewStore(stateDB, sm.StoreOptions{
DiscardABCIResponses: false,
})
blockStore := store.NewBlockStore(blockDB)
state, err := stateStore.LoadFromDBOrGenesisDoc(genDoc)
@@ -78,14 +82,28 @@ func newReactor(
panic(fmt.Errorf("error constructing state from genesis file: %w", err))
}
mp := &mpmocks.Mempool{}
mp.On("Lock").Return()
mp.On("Unlock").Return()
mp.On("FlushAppConn", mock.Anything).Return(nil)
mp.On("Update",
mock.Anything,
mock.Anything,
mock.Anything,
mock.Anything,
mock.Anything,
mock.Anything).Return(nil)
// Make the Reactor itself.
// NOTE we have to create and commit the blocks first because
// pool.height is determined from the store.
fastSync := true
db := dbm.NewMemDB()
stateStore = sm.NewStore(db)
stateStore = sm.NewStore(db, sm.StoreOptions{
DiscardABCIResponses: false,
})
blockExec := sm.NewBlockExecutor(stateStore, log.TestingLogger(), proxyApp.Consensus(),
mock.Mempool{}, sm.EmptyEvidencePool{})
mp, sm.EmptyEvidencePool{})
if err = stateStore.Save(state); err != nil {
panic(err)
}
@@ -112,9 +130,10 @@ func newReactor(
lastBlockMeta.BlockID, []types.CommitSig{vote.CommitSig()})
}
thisBlock := makeBlock(blockHeight, state, lastCommit)
thisBlock := state.MakeBlock(blockHeight, nil, lastCommit, nil, state.Validators.Proposer.Address)
thisParts := thisBlock.MakePartSet(types.BlockPartSizeBytes)
thisParts, err := thisBlock.MakePartSet(types.BlockPartSizeBytes)
require.NoError(t, err)
blockID := types.BlockID{Hash: thisBlock.Hash(), PartSetHeader: thisParts.Header()}
state, _, err = blockExec.ApplyBlock(state, blockID, thisBlock)
@@ -140,8 +159,8 @@ func TestNoBlockResponse(t *testing.T) {
reactorPairs := make([]ReactorPair, 2)
reactorPairs[0] = newReactor(log.TestingLogger(), genDoc, privVals, maxBlockHeight)
reactorPairs[1] = newReactor(log.TestingLogger(), genDoc, privVals, 0)
reactorPairs[0] = newReactor(t, log.TestingLogger(), genDoc, privVals, maxBlockHeight)
reactorPairs[1] = newReactor(t, log.TestingLogger(), genDoc, privVals, 0)
p2p.MakeConnectedSwitches(config.P2P, 2, func(i int, s *p2p.Switch) *p2p.Switch {
s.AddReactor("BLOCKCHAIN", reactorPairs[i].reactor)
@@ -202,7 +221,7 @@ func TestBadBlockStopsPeer(t *testing.T) {
// Other chain needs a different validator set
otherGenDoc, otherPrivVals := randGenesisDoc(1, false, 30)
otherChain := newReactor(log.TestingLogger(), otherGenDoc, otherPrivVals, maxBlockHeight)
otherChain := newReactor(t, log.TestingLogger(), otherGenDoc, otherPrivVals, maxBlockHeight)
defer func() {
err := otherChain.reactor.Stop()
@@ -213,10 +232,10 @@ func TestBadBlockStopsPeer(t *testing.T) {
reactorPairs := make([]ReactorPair, 4)
reactorPairs[0] = newReactor(log.TestingLogger(), genDoc, privVals, maxBlockHeight)
reactorPairs[1] = newReactor(log.TestingLogger(), genDoc, privVals, 0)
reactorPairs[2] = newReactor(log.TestingLogger(), genDoc, privVals, 0)
reactorPairs[3] = newReactor(log.TestingLogger(), genDoc, privVals, 0)
reactorPairs[0] = newReactor(t, log.TestingLogger(), genDoc, privVals, maxBlockHeight)
reactorPairs[1] = newReactor(t, log.TestingLogger(), genDoc, privVals, 0)
reactorPairs[2] = newReactor(t, log.TestingLogger(), genDoc, privVals, 0)
reactorPairs[3] = newReactor(t, log.TestingLogger(), genDoc, privVals, 0)
switches := p2p.MakeConnectedSwitches(config.P2P, 4, func(i int, s *p2p.Switch) *p2p.Switch {
s.AddReactor("BLOCKCHAIN", reactorPairs[i].reactor)
@@ -254,7 +273,7 @@ func TestBadBlockStopsPeer(t *testing.T) {
// race, but can't be easily avoided.
reactorPairs[3].reactor.store = otherChain.reactor.store
lastReactorPair := newReactor(log.TestingLogger(), genDoc, privVals, 0)
lastReactorPair := newReactor(t, log.TestingLogger(), genDoc, privVals, 0)
reactorPairs = append(reactorPairs, lastReactorPair)
switches = append(switches, p2p.MakeConnectedSwitches(config.P2P, 1, func(i int, s *p2p.Switch) *p2p.Switch {
@@ -278,21 +297,6 @@ func TestBadBlockStopsPeer(t *testing.T) {
assert.True(t, lastReactorPair.reactor.Switch.Peers().Size() < len(reactorPairs)-1)
}
//----------------------------------------------
// utility funcs
func makeTxs(height int64) (txs []types.Tx) {
for i := 0; i < 10; i++ {
txs = append(txs, types.Tx([]byte{byte(height), byte(i)}))
}
return txs
}
func makeBlock(height int64, state sm.State, lastCommit *types.Commit) *types.Block {
block, _ := state.MakeBlock(height, makeTxs(height), lastCommit, nil, state.Validators.GetProposer().Address)
return block
}
type testApp struct {
abci.BaseApplication
}


@@ -3,7 +3,6 @@ package debug
import (
"errors"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"time"
@@ -82,7 +81,7 @@ func dumpCmdHandler(_ *cobra.Command, args []string) error {
func dumpDebugData(outDir string, conf *cfg.Config, rpc *rpchttp.HTTP) {
start := time.Now().UTC()
tmpDir, err := ioutil.TempDir(outDir, "tendermint_debug_tmp")
tmpDir, err := os.MkdirTemp(outDir, "tendermint_debug_tmp")
if err != nil {
logger.Error("failed to create temporary directory", "dir", tmpDir, "error", err)
return


@@ -5,7 +5,6 @@ import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"path"
"path/filepath"
@@ -111,5 +110,5 @@ func writeStateJSONToFile(state interface{}, dir, filename string) error {
return fmt.Errorf("failed to encode state dump: %w", err)
}
return ioutil.WriteFile(path.Join(dir, filename), stateJSON, os.ModePerm)
return os.WriteFile(path.Join(dir, filename), stateJSON, os.ModePerm)
}


@@ -3,7 +3,6 @@ package debug
import (
"errors"
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
@@ -56,7 +55,7 @@ func killCmdHandler(cmd *cobra.Command, args []string) error {
// Create a temporary directory which will contain all the state dumps and
// relevant files and directories that will be compressed into a file.
tmpDir, err := ioutil.TempDir(os.TempDir(), "tendermint_debug_tmp")
tmpDir, err := os.MkdirTemp(os.TempDir(), "tendermint_debug_tmp")
if err != nil {
return fmt.Errorf("failed to create temporary directory: %w", err)
}
@@ -105,7 +104,7 @@ func killProc(pid uint64, dir string) error {
// pipe STDERR output from tailing the Tendermint process to a file
//
// NOTE: This will only work on UNIX systems.
cmd := exec.Command("tail", "-f", fmt.Sprintf("/proc/%d/fd/2", pid)) // nolint: gosec
cmd := exec.Command("tail", "-f", fmt.Sprintf("/proc/%d/fd/2", pid)) //nolint: gosec
outFile, err := os.Create(filepath.Join(dir, "stacktrace.out"))
if err != nil {


@@ -3,7 +3,7 @@ package debug
import (
"context"
"fmt"
"io/ioutil"
"io"
"net/http"
"os"
"path"
@@ -67,16 +67,16 @@ func copyConfig(home, dir string) error {
func dumpProfile(dir, addr, profile string, debug int) error {
endpoint := fmt.Sprintf("%s/debug/pprof/%s?debug=%d", addr, profile, debug)
resp, err := http.Get(endpoint) // nolint: gosec
resp, err := http.Get(endpoint) //nolint: gosec
if err != nil {
return fmt.Errorf("failed to query for %s profile: %w", profile, err)
}
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
body, err := io.ReadAll(resp.Body)
if err != nil {
return fmt.Errorf("failed to read %s profile response body: %w", profile, err)
}
return ioutil.WriteFile(path.Join(dir, fmt.Sprintf("%s.out", profile)), body, os.ModePerm)
return os.WriteFile(path.Join(dir, fmt.Sprintf("%s.out", profile)), body, os.ModePerm)
}


@@ -40,6 +40,9 @@ replace the backend. The default start-height is 0, meaning the tooling will sta
reindex from the base block height (inclusive); and the default end-height is 0, meaning
the tooling will reindex until the latest block height (inclusive). Users can omit
either or both arguments.
Note: This operation requires ABCI Responses. Do not set DiscardABCIResponses to true if you
want to use this command.
`,
Example: `
tendermint reindex-event


@@ -9,6 +9,8 @@ import (
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
dbm "github.com/tendermint/tm-db"
abcitypes "github.com/tendermint/tendermint/abci/types"
tmcfg "github.com/tendermint/tendermint/config"
prototmstate "github.com/tendermint/tendermint/proto/tendermint/state"
@@ -16,7 +18,6 @@ import (
"github.com/tendermint/tendermint/state/mocks"
txmocks "github.com/tendermint/tendermint/state/txindex/mocks"
"github.com/tendermint/tendermint/types"
dbm "github.com/tendermint/tm-db"
)
const (


@@ -77,7 +77,9 @@ func loadStateAndBlockStore(config *cfg.Config) (*store.BlockStore, state.Store,
if err != nil {
return nil, nil, err
}
stateStore := state.NewStore(stateDB)
stateStore := state.NewStore(stateDB, state.StoreOptions{
DiscardABCIResponses: config.Storage.DiscardABCIResponses,
})
return blockStore, stateStore, nil
}


@@ -53,6 +53,11 @@ func ParseConfig(cmd *cobra.Command) (*cfg.Config, error) {
if err := conf.ValidateBasic(); err != nil {
return nil, fmt.Errorf("error in config file: %v", err)
}
if warnings := conf.CheckDeprecated(); len(warnings) > 0 {
for _, warning := range warnings {
logger.Info("deprecated usage found in configuration file", "usage", warning)
}
}
return conf, nil
}


@@ -2,7 +2,6 @@ package commands
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strconv"
@@ -168,5 +167,5 @@ func WriteConfigVals(dir string, vals map[string]string) error {
data += fmt.Sprintf("%s = \"%s\"\n", k, v)
}
cfile := filepath.Join(dir, "config.toml")
return ioutil.WriteFile(cfile, []byte(data), 0600)
return os.WriteFile(cfile, []byte(data), 0600)
}


@@ -31,7 +31,7 @@ func AddNodeFlags(cmd *cobra.Command) {
"socket address to listen on for connections from external priv_validator process")
// node flags
cmd.Flags().Bool("fast_sync", config.FastSyncMode, "fast blockchain syncing")
cmd.Flags().Bool("block_sync", config.BlockSyncMode, "sync the block chain using the blocksync algorithm")
cmd.Flags().BytesHexVar(
&genesisHash,
"genesis_hash",


@@ -68,14 +68,18 @@ type Config struct {
BaseConfig `mapstructure:",squash"`
// Options for services
RPC *RPCConfig `mapstructure:"rpc"`
P2P *P2PConfig `mapstructure:"p2p"`
Mempool *MempoolConfig `mapstructure:"mempool"`
StateSync *StateSyncConfig `mapstructure:"statesync"`
FastSync *FastSyncConfig `mapstructure:"fastsync"`
Consensus *ConsensusConfig `mapstructure:"consensus"`
TxIndex *TxIndexConfig `mapstructure:"tx_index"`
Instrumentation *InstrumentationConfig `mapstructure:"instrumentation"`
RPC *RPCConfig `mapstructure:"rpc"`
P2P *P2PConfig `mapstructure:"p2p"`
Mempool *MempoolConfig `mapstructure:"mempool"`
StateSync *StateSyncConfig `mapstructure:"statesync"`
BlockSync *BlockSyncConfig `mapstructure:"blocksync"`
//TODO(williambanfield): remove this field once v0.37 is released.
// https://github.com/tendermint/tendermint/issues/9279
DeprecatedFastSyncConfig map[interface{}]interface{} `mapstructure:"fastsync"`
Consensus *ConsensusConfig `mapstructure:"consensus"`
Storage *StorageConfig `mapstructure:"storage"`
TxIndex *TxIndexConfig `mapstructure:"tx_index"`
Instrumentation *InstrumentationConfig `mapstructure:"instrumentation"`
}
// DefaultConfig returns a default configuration for a Tendermint node
@@ -86,8 +90,9 @@ func DefaultConfig() *Config {
P2P: DefaultP2PConfig(),
Mempool: DefaultMempoolConfig(),
StateSync: DefaultStateSyncConfig(),
FastSync: DefaultFastSyncConfig(),
BlockSync: DefaultBlockSyncConfig(),
Consensus: DefaultConsensusConfig(),
Storage: DefaultStorageConfig(),
TxIndex: DefaultTxIndexConfig(),
Instrumentation: DefaultInstrumentationConfig(),
}
@@ -101,8 +106,9 @@ func TestConfig() *Config {
P2P: TestP2PConfig(),
Mempool: TestMempoolConfig(),
StateSync: TestStateSyncConfig(),
FastSync: TestFastSyncConfig(),
BlockSync: TestBlockSyncConfig(),
Consensus: TestConsensusConfig(),
Storage: TestStorageConfig(),
TxIndex: TestTxIndexConfig(),
Instrumentation: TestInstrumentationConfig(),
}
@@ -136,8 +142,8 @@ func (cfg *Config) ValidateBasic() error {
if err := cfg.StateSync.ValidateBasic(); err != nil {
return fmt.Errorf("error in [statesync] section: %w", err)
}
if err := cfg.FastSync.ValidateBasic(); err != nil {
return fmt.Errorf("error in [fastsync] section: %w", err)
if err := cfg.BlockSync.ValidateBasic(); err != nil {
return fmt.Errorf("error in [blocksync] section: %w", err)
}
if err := cfg.Consensus.ValidateBasic(); err != nil {
return fmt.Errorf("error in [consensus] section: %w", err)
@@ -148,6 +154,17 @@ func (cfg *Config) ValidateBasic() error {
return nil
}
func (cfg *Config) CheckDeprecated() []string {
var warnings []string
if cfg.DeprecatedFastSyncConfig != nil {
warnings = append(warnings, "[fastsync] table detected. This section has been renamed to [blocksync]. The values in this deprecated section will be disregarded.")
}
if cfg.BaseConfig.DeprecatedFastSyncMode != nil {
warnings = append(warnings, "fast_sync key detected. This key has been renamed to block_sync. The value of this deprecated key will be disregarded.")
}
return warnings
}
//-----------------------------------------------------------------------------
// BaseConfig
@@ -167,10 +184,14 @@ type BaseConfig struct { //nolint: maligned
// A custom human readable name for this node
Moniker string `mapstructure:"moniker"`
// If this node is many blocks behind the tip of the chain, FastSync
// If this node is many blocks behind the tip of the chain, Blocksync
// allows them to catchup quickly by downloading blocks in parallel
// and verifying their commits
FastSyncMode bool `mapstructure:"fast_sync"`
BlockSyncMode bool `mapstructure:"block_sync"`
//TODO(williambanfield): remove this field once v0.37 is released.
// https://github.com/tendermint/tendermint/issues/9279
DeprecatedFastSyncMode interface{} `mapstructure:"fast_sync"`
// Database backend: goleveldb | cleveldb | boltdb | rocksdb
// * goleveldb (github.com/syndtr/goleveldb - most popular implementation)
@@ -238,7 +259,7 @@ func DefaultBaseConfig() BaseConfig {
ABCI: "socket",
LogLevel: DefaultLogLevel,
LogFormat: LogFormatPlain,
FastSyncMode: true,
BlockSyncMode: true,
FilterPeers: false,
DBBackend: "goleveldb",
DBPath: "data",
@@ -250,7 +271,7 @@ func TestBaseConfig() BaseConfig {
cfg := DefaultBaseConfig()
cfg.chainID = "tendermint_test"
cfg.ProxyApp = "kvstore"
cfg.FastSyncMode = false
cfg.BlockSyncMode = false
cfg.DBBackend = "memdb"
return cfg
}
@@ -817,7 +838,7 @@ func DefaultStateSyncConfig() *StateSyncConfig {
}
}
// TestFastSyncConfig returns a default configuration for the state sync service
// TestStateSyncConfig returns a default configuration for the state sync service
func TestStateSyncConfig() *StateSyncConfig {
return DefaultStateSyncConfig()
}
@@ -873,34 +894,34 @@ func (cfg *StateSyncConfig) ValidateBasic() error {
}
//-----------------------------------------------------------------------------
// FastSyncConfig
// BlockSyncConfig
// FastSyncConfig defines the configuration for the Tendermint fast sync service
type FastSyncConfig struct {
// BlockSyncConfig (formerly known as FastSync) defines the configuration for the Tendermint block sync service
type BlockSyncConfig struct {
Version string `mapstructure:"version"`
}
// DefaultFastSyncConfig returns a default configuration for the fast sync service
func DefaultFastSyncConfig() *FastSyncConfig {
return &FastSyncConfig{
// DefaultBlockSyncConfig returns a default configuration for the block sync service
func DefaultBlockSyncConfig() *BlockSyncConfig {
return &BlockSyncConfig{
Version: "v0",
}
}
// TestFastSyncConfig returns a default configuration for the fast sync.
func TestFastSyncConfig() *FastSyncConfig {
return DefaultFastSyncConfig()
// TestBlockSyncConfig returns a default configuration for the block sync.
func TestBlockSyncConfig() *BlockSyncConfig {
return DefaultBlockSyncConfig()
}
// ValidateBasic performs basic validation.
func (cfg *FastSyncConfig) ValidateBasic() error {
func (cfg *BlockSyncConfig) ValidateBasic() error {
switch cfg.Version {
case "v0":
return nil
case "v1", "v2":
return fmt.Errorf("fast sync version %s has been deprecated. Please use v0 instead", cfg.Version)
return fmt.Errorf("blocksync version %s has been deprecated. Please use v0 instead", cfg.Version)
default:
return fmt.Errorf("unknown fastsync version %s", cfg.Version)
return fmt.Errorf("unknown blocksync version %s", cfg.Version)
}
}
@@ -1069,11 +1090,41 @@ func (cfg *ConsensusConfig) ValidateBasic() error {
}
//-----------------------------------------------------------------------------
// StorageConfig
// StorageConfig allows more fine-grained control over certain storage-related
// behavior.
type StorageConfig struct {
// Set to false to ensure ABCI responses are persisted. ABCI responses are
// required for `/block_results` RPC queries, and to reindex events in the
// command-line tool.
DiscardABCIResponses bool `mapstructure:"discard_abci_responses"`
}
// DefaultStorageConfig returns the default configuration options relating to
// Tendermint storage optimization.
func DefaultStorageConfig() *StorageConfig {
return &StorageConfig{
DiscardABCIResponses: false,
}
}
// TestStorageConfig returns storage configuration that can be used for
// testing.
func TestStorageConfig() *StorageConfig {
return &StorageConfig{
DiscardABCIResponses: false,
}
}
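Elsewhere in this comparison the new option is threaded into state store construction through StoreOptions. A minimal sketch, assuming the NewStore signature shown in those hunks (the helper itself is illustrative):

package example

import (
	dbm "github.com/tendermint/tm-db"

	cfg "github.com/tendermint/tendermint/config"
	sm "github.com/tendermint/tendermint/state"
)

// newStateStore opens a state store that honors the [storage] setting.
func newStateStore(conf *cfg.Config, db dbm.DB) sm.Store {
	return sm.NewStore(db, sm.StoreOptions{
		DiscardABCIResponses: conf.Storage.DiscardABCIResponses,
	})
}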
// -----------------------------------------------------------------------------
// TxIndexConfig
// Remember that Event has the following structure:
// type: [
// key: value,
// ...
//
// key: value,
// ...
//
// ]
//
// CompositeKeys are constructed by `type.key`
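For example, an event of type "transfer" with an attribute key "sender" is addressed in queries by the composite key "transfer.sender". A small illustrative sketch (the event values are hypothetical):

package example

import (
	abci "github.com/tendermint/tendermint/abci/types"
)

// exampleEvent's attribute is queryable under the composite key "transfer.sender".
var exampleEvent = abci.Event{
	Type: "transfer",
	Attributes: []abci.EventAttribute{
		{Key: "sender", Value: "alice"},
	},
}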


@@ -128,8 +128,8 @@ func TestStateSyncConfigValidateBasic(t *testing.T) {
require.NoError(t, cfg.ValidateBasic())
}
func TestFastSyncConfigValidateBasic(t *testing.T) {
cfg := TestFastSyncConfig()
func TestBlockSyncConfigValidateBasic(t *testing.T) {
cfg := TestBlockSyncConfig()
assert.NoError(t, cfg.ValidateBasic())
// tamper with version
@@ -141,7 +141,7 @@ func TestFastSyncConfigValidateBasic(t *testing.T) {
}
func TestConsensusConfig_ValidateBasic(t *testing.T) {
// nolint: lll
//nolint: lll
testcases := map[string]struct {
modify func(*ConsensusConfig)
expectErr bool


@@ -3,7 +3,7 @@ package config
import (
"bytes"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
"text/template"
@@ -87,10 +87,10 @@ proxy_app = "{{ .BaseConfig.ProxyApp }}"
# A custom human readable name for this node
moniker = "{{ .BaseConfig.Moniker }}"
# If this node is many blocks behind the tip of the chain, FastSync
# If this node is many blocks behind the tip of the chain, BlockSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
fast_sync = {{ .BaseConfig.FastSyncMode }}
block_sync = {{ .BaseConfig.BlockSyncMode }}
# Database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb
# * goleveldb (github.com/syndtr/goleveldb - most popular implementation)
@@ -429,17 +429,17 @@ chunk_request_timeout = "{{ .StateSync.ChunkRequestTimeout }}"
chunk_fetchers = "{{ .StateSync.ChunkFetchers }}"
#######################################################
### Fast Sync Configuration Connections ###
### Block Sync Configuration Options ###
#######################################################
[fastsync]
[blocksync]
# Fast Sync version to use:
# Block Sync version to use:
#
# In v0.37, v1 and v2 of the fast sync protocol were deprecated.
# In v0.37, v1 and v2 of the block sync protocols were deprecated.
# Please use v0 instead.
#
# 1) "v0" - the default fast sync implementation
version = "{{ .FastSync.Version }}"
# 1) "v0" - the default block sync implementation
version = "{{ .BlockSync.Version }}"
#######################################################
### Consensus Configuration Options ###
@@ -482,6 +482,16 @@ create_empty_blocks_interval = "{{ .Consensus.CreateEmptyBlocksInterval }}"
peer_gossip_sleep_duration = "{{ .Consensus.PeerGossipSleepDuration }}"
peer_query_maj23_sleep_duration = "{{ .Consensus.PeerQueryMaj23SleepDuration }}"
#######################################################
### Storage Configuration Options ###
#######################################################
# Set to true to discard ABCI responses from the state store, which can save a
# considerable amount of disk space. Set to false to ensure ABCI responses are
# persisted. ABCI responses are required for /block_results RPC queries, and to
# reindex events in the command-line tool.
discard_abci_responses = {{ .Storage.DiscardABCIResponses}}
#######################################################
### Transaction Indexer Configuration Options ###
#######################################################
@@ -535,7 +545,7 @@ func ResetTestRoot(testName string) *Config {
func ResetTestRootWithChainID(testName string, chainID string) *Config {
// create a unique, concurrency-safe test directory under os.TempDir()
rootDir, err := ioutil.TempDir("", fmt.Sprintf("%s-%s_", chainID, testName))
rootDir, err := os.MkdirTemp("", fmt.Sprintf("%s-%s_", chainID, testName))
if err != nil {
panic(err)
}


@@ -1,7 +1,6 @@
package config
import (
"io/ioutil"
"os"
"path/filepath"
"strings"
@@ -23,7 +22,7 @@ func TestEnsureRoot(t *testing.T) {
require := require.New(t)
// setup temp dir for test
tmpDir, err := ioutil.TempDir("", "config-test")
tmpDir, err := os.MkdirTemp("", "config-test")
require.Nil(err)
defer os.RemoveAll(tmpDir)
@@ -31,7 +30,7 @@ func TestEnsureRoot(t *testing.T) {
EnsureRoot(tmpDir)
// make sure config is set properly
data, err := ioutil.ReadFile(filepath.Join(tmpDir, defaultConfigFilePath))
data, err := os.ReadFile(filepath.Join(tmpDir, defaultConfigFilePath))
require.Nil(err)
if !checkConfig(string(data)) {
@@ -52,7 +51,7 @@ func TestEnsureTestRoot(t *testing.T) {
rootDir := cfg.RootDir
// make sure config is set properly
data, err := ioutil.ReadFile(filepath.Join(rootDir, defaultConfigFilePath))
data, err := os.ReadFile(filepath.Join(rootDir, defaultConfigFilePath))
require.Nil(err)
if !checkConfig(string(data)) {


@@ -1,3 +1,3 @@
# Consensus
See the [consensus spec](https://github.com/tendermint/spec/tree/master/spec/consensus) and the [reactor consensus spec](https://github.com/tendermint/spec/tree/master/spec/reactors/consensus) for more information.
See the [consensus spec](https://github.com/tendermint/tendermint/tree/v0.37.x/spec/consensus) and the [reactor consensus spec](https://github.com/tendermint/tendermint/tree/v0.37.x/spec/reactors/consensus) for more information.


@@ -50,7 +50,9 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
for i := 0; i < nValidators; i++ {
logger := consensusLogger().With("test", "byzantine", "validator", i)
stateDB := dbm.NewMemDB() // each state needs its own db
stateStore := sm.NewStore(stateDB)
stateStore := sm.NewStore(stateDB, sm.StoreOptions{
DiscardABCIResponses: false,
})
state, _ := stateStore.LoadFromDBOrGenesisDoc(genDoc)
thisConfig := ResetConfig(fmt.Sprintf("%s_%d", testName, i))
defer os.RemoveAll(thisConfig.RootDir)
@@ -211,9 +213,11 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
}
proposerAddr := lazyProposer.privValidatorPubKey.Address()
block, blockParts := lazyProposer.blockExec.CreateProposalBlock(
lazyProposer.Height, lazyProposer.state, commit, proposerAddr,
)
block, err := lazyProposer.blockExec.CreateProposalBlock(
lazyProposer.Height, lazyProposer.state, commit, proposerAddr, nil)
require.NoError(t, err)
blockParts, err := block.MakePartSet(types.BlockPartSizeBytes)
require.NoError(t, err)
// Flush the WAL. Otherwise, we may not recompute the same proposal to sign,
// and the privValidator will refuse to sign anything.
@@ -461,7 +465,10 @@ func byzantineDecideProposalFunc(t *testing.T, height int64, round int32, cs *St
// Avoid sending on internalMsgQueue and running consensus state.
// Create a new proposal block from state/txs from the mempool.
block1, blockParts1 := cs.createProposalBlock()
block1, err := cs.createProposalBlock()
require.NoError(t, err)
blockParts1, err := block1.MakePartSet(types.BlockPartSizeBytes)
require.NoError(t, err)
polRound, propBlockID := cs.ValidRound, types.BlockID{Hash: block1.Hash(), PartSetHeader: blockParts1.Header()}
proposal1 := types.NewProposal(height, round, polRound, propBlockID)
p1 := proposal1.ToProto()
@@ -475,7 +482,10 @@ func byzantineDecideProposalFunc(t *testing.T, height int64, round int32, cs *St
deliverTxsRange(cs, 0, 1)
// Create a new proposal block from state/txs from the mempool.
block2, blockParts2 := cs.createProposalBlock()
block2, err := cs.createProposalBlock()
require.NoError(t, err)
blockParts2, err := block2.MakePartSet(types.BlockPartSizeBytes)
require.NoError(t, err)
polRound, propBlockID = cs.ValidRound, types.BlockID{Hash: block2.Hash(), PartSetHeader: blockParts2.Header()}
proposal2 := types.NewProposal(height, round, polRound, propBlockID)
p2 := proposal2.ToProto()


@@ -4,7 +4,6 @@ import (
"bytes"
"context"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"sort"
@@ -13,6 +12,7 @@ import (
"time"
"github.com/go-kit/log/term"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"path"
@@ -200,13 +200,17 @@ func startTestRound(cs *State, height int64, round int32) {
// Create proposal block from cs1 but sign it with vs.
func decideProposal(
t *testing.T,
cs1 *State,
vs *validatorStub,
height int64,
round int32,
) (proposal *types.Proposal, block *types.Block) {
cs1.mtx.Lock()
block, blockParts := cs1.createProposalBlock()
block, err := cs1.createProposalBlock()
require.NoError(t, err)
blockParts, err := block.MakePartSet(types.BlockPartSizeBytes)
require.NoError(t, err)
validRound := cs1.ValidRound
chainID := cs1.state.ChainID
cs1.mtx.Unlock()
@@ -427,7 +431,9 @@ func newStateWithConfigAndBlockStore(
// Make State
stateDB := blockDB
stateStore := sm.NewStore(stateDB)
stateStore := sm.NewStore(stateDB, sm.StoreOptions{
DiscardABCIResponses: false,
})
if err := stateStore.Save(state); err != nil { // for save height 1's validators info
panic(err)
}
@@ -457,12 +463,16 @@ func loadPrivValidator(config *cfg.Config) *privval.FilePV {
}
func randState(nValidators int) (*State, []*validatorStub) {
return randStateWithApp(nValidators, kvstore.NewApplication())
}
func randStateWithApp(nValidators int, app abci.Application) (*State, []*validatorStub) {
// Get State
state, privVals := randGenesisState(nValidators, false, 10)
vss := make([]*validatorStub, nValidators)
cs := newState(state, privVals[0], kvstore.NewApplication())
cs := newState(state, privVals[0], app)
for i := 0; i < nValidators; i++ {
vss[i] = newValidatorStub(privVals[i], int32(i))
@@ -677,6 +687,33 @@ func ensureVote(voteCh <-chan tmpubsub.Message, height int64, round int32,
}
}
func ensurePrevoteMatch(t *testing.T, voteCh <-chan tmpubsub.Message, height int64, round int32, hash []byte) {
t.Helper()
ensureVoteMatch(t, voteCh, height, round, hash, tmproto.PrevoteType)
}
func ensureVoteMatch(t *testing.T, voteCh <-chan tmpubsub.Message, height int64, round int32, hash []byte, voteType tmproto.SignedMsgType) {
t.Helper()
select {
case <-time.After(ensureTimeout):
t.Fatal("Timeout expired while waiting for NewVote event")
case msg := <-voteCh:
voteEvent, ok := msg.Data().(types.EventDataVote)
require.True(t, ok, "expected a EventDataVote, got %T. Wrong subscription channel?",
msg.Data())
vote := voteEvent.Vote
assert.Equal(t, height, vote.Height, "expected height %d, but got %d", height, vote.Height)
assert.Equal(t, round, vote.Round, "expected round %d, but got %d", round, vote.Round)
assert.Equal(t, voteType, vote.Type, "expected type %s, but got %s", voteType, vote.Type)
if hash == nil {
require.Nil(t, vote.BlockID.Hash, "Expected prevote to be for nil, got %X", vote.BlockID.Hash)
} else {
require.True(t, bytes.Equal(vote.BlockID.Hash, hash), "Expected prevote to be for %X, got %X", hash, vote.BlockID.Hash)
}
}
}
func ensurePrecommitTimeout(ch <-chan tmpubsub.Message) {
select {
case <-time.After(ensureTimeout):
@@ -717,7 +754,9 @@ func randConsensusNet(nValidators int, testName string, tickerFunc func() Timeou
configRootDirs := make([]string, 0, nValidators)
for i := 0; i < nValidators; i++ {
stateDB := dbm.NewMemDB() // each state needs its own db
stateStore := sm.NewStore(stateDB)
stateStore := sm.NewStore(stateDB, sm.StoreOptions{
DiscardABCIResponses: false,
})
state, _ := stateStore.LoadFromDBOrGenesisDoc(genDoc)
thisConfig := ResetConfig(fmt.Sprintf("%s_%d", testName, i))
configRootDirs = append(configRootDirs, thisConfig.RootDir)
@@ -755,7 +794,9 @@ func randConsensusNetWithPeers(
configRootDirs := make([]string, 0, nPeers)
for i := 0; i < nPeers; i++ {
stateDB := dbm.NewMemDB() // each state needs its own db
stateStore := sm.NewStore(stateDB)
stateStore := sm.NewStore(stateDB, sm.StoreOptions{
DiscardABCIResponses: false,
})
state, _ := stateStore.LoadFromDBOrGenesisDoc(genDoc)
thisConfig := ResetConfig(fmt.Sprintf("%s_%d", testName, i))
configRootDirs = append(configRootDirs, thisConfig.RootDir)
@@ -767,11 +808,11 @@ func randConsensusNetWithPeers(
if i < nValidators {
privVal = privVals[i]
} else {
tempKeyFile, err := ioutil.TempFile("", "priv_validator_key_")
tempKeyFile, err := os.CreateTemp("", "priv_validator_key_")
if err != nil {
panic(err)
}
tempStateFile, err := ioutil.TempFile("", "priv_validator_state_")
tempStateFile, err := os.CreateTemp("", "priv_validator_state_")
if err != nil {
panic(err)
}
@@ -887,7 +928,7 @@ func (m *mockTicker) Chan() <-chan timeoutInfo {
func (*mockTicker) SetLogger(log.Logger) {}
func newPersistentKVStore() abci.Application {
dir, err := ioutil.TempDir("", "persistent-kvstore")
dir, err := os.MkdirTemp("", "persistent-kvstore")
if err != nil {
panic(err)
}


@@ -113,7 +113,7 @@ func deliverTxsRange(cs *State, start, end int) {
func TestMempoolTxConcurrentWithCommit(t *testing.T) {
state, privVals := randGenesisState(1, false, 10)
blockDB := dbm.NewMemDB()
stateStore := sm.NewStore(blockDB)
stateStore := sm.NewStore(blockDB, sm.StoreOptions{DiscardABCIResponses: false})
cs := newStateWithConfigAndBlockStore(config, state, privVals[0], NewCounterApplication(), blockDB)
err := stateStore.Save(state)
require.NoError(t, err)
@@ -138,7 +138,7 @@ func TestMempoolRmBadTx(t *testing.T) {
state, privVals := randGenesisState(1, false, 10)
app := NewCounterApplication()
blockDB := dbm.NewMemDB()
stateStore := sm.NewStore(blockDB)
stateStore := sm.NewStore(blockDB, sm.StoreOptions{DiscardABCIResponses: false})
cs := newStateWithConfigAndBlockStore(config, state, privVals[0], app, blockDB)
err := stateStore.Save(state)
require.NoError(t, err)
@@ -256,3 +256,22 @@ func (app *CounterApplication) Commit() abci.ResponseCommit {
binary.BigEndian.PutUint64(hash, uint64(app.txCount))
return abci.ResponseCommit{Data: hash}
}
func (app *CounterApplication) PrepareProposal(
req abci.RequestPrepareProposal) abci.ResponsePrepareProposal {
txs := make([][]byte, 0, len(req.Txs))
var totalBytes int64
for _, tx := range req.Txs {
totalBytes += int64(len(tx))
if totalBytes > req.MaxTxBytes {
break
}
txs = append(txs, tx)
}
return abci.ResponsePrepareProposal{Txs: txs}
}
func (app *CounterApplication) ProcessProposal(
req abci.RequestProcessProposal) abci.ResponseProcessProposal {
return abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}
}

consensus/metrics.gen.go (new file, 223 lines)

@@ -0,0 +1,223 @@
// Code generated by metricsgen. DO NOT EDIT.
package consensus
import (
"github.com/go-kit/kit/metrics/discard"
prometheus "github.com/go-kit/kit/metrics/prometheus"
stdprometheus "github.com/prometheus/client_golang/prometheus"
)
func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
labels := []string{}
for i := 0; i < len(labelsAndValues); i += 2 {
labels = append(labels, labelsAndValues[i])
}
return &Metrics{
Height: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "height",
Help: "Height of the chain.",
}, labels).With(labelsAndValues...),
ValidatorLastSignedHeight: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "validator_last_signed_height",
Help: "Last height signed by this validator if the node is a validator.",
}, append(labels, "validator_address")).With(labelsAndValues...),
Rounds: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "rounds",
Help: "Number of rounds.",
}, labels).With(labelsAndValues...),
RoundDurationSeconds: prometheus.NewHistogramFrom(stdprometheus.HistogramOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "round_duration_seconds",
Help: "Histogram of round duration.",
Buckets: stdprometheus.ExponentialBucketsRange(0.1, 100, 8),
}, labels).With(labelsAndValues...),
Validators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "validators",
Help: "Number of validators.",
}, labels).With(labelsAndValues...),
ValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "validators_power",
Help: "Total power of all validators.",
}, labels).With(labelsAndValues...),
ValidatorPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "validator_power",
Help: "Power of a validator.",
}, append(labels, "validator_address")).With(labelsAndValues...),
ValidatorMissedBlocks: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "validator_missed_blocks",
Help: "Amount of blocks missed per validator.",
}, append(labels, "validator_address")).With(labelsAndValues...),
MissingValidators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "missing_validators",
Help: "Number of validators who did not sign.",
}, labels).With(labelsAndValues...),
MissingValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "missing_validators_power",
Help: "Total power of the missing validators.",
}, labels).With(labelsAndValues...),
ByzantineValidators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "byzantine_validators",
Help: "Number of validators who tried to double sign.",
}, labels).With(labelsAndValues...),
ByzantineValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "byzantine_validators_power",
Help: "Total power of the byzantine validators.",
}, labels).With(labelsAndValues...),
BlockIntervalSeconds: prometheus.NewHistogramFrom(stdprometheus.HistogramOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_interval_seconds",
Help: "Time between this and the last block.",
}, labels).With(labelsAndValues...),
NumTxs: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "num_txs",
Help: "Number of transactions.",
}, labels).With(labelsAndValues...),
BlockSizeBytes: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_size_bytes",
Help: "Size of the block.",
}, labels).With(labelsAndValues...),
TotalTxs: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "total_txs",
Help: "Total number of transactions.",
}, labels).With(labelsAndValues...),
CommittedHeight: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "latest_block_height",
Help: "The latest block height.",
}, labels).With(labelsAndValues...),
BlockSyncing: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_syncing",
Help: "Whether or not a node is block syncing. 1 if yes, 0 if no.",
}, labels).With(labelsAndValues...),
StateSyncing: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "state_syncing",
Help: "Whether or not a node is state syncing. 1 if yes, 0 if no.",
}, labels).With(labelsAndValues...),
BlockParts: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_parts",
Help: "Number of block parts transmitted by each peer.",
}, append(labels, "peer_id")).With(labelsAndValues...),
StepDurationSeconds: prometheus.NewHistogramFrom(stdprometheus.HistogramOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "step_duration_seconds",
Help: "Histogram of durations for each step in the consensus protocol.",
Buckets: stdprometheus.ExponentialBucketsRange(0.1, 100, 8),
}, append(labels, "step")).With(labelsAndValues...),
BlockGossipPartsReceived: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_gossip_parts_received",
Help: "Number of block parts received by the node, separated by whether the part was relevant to the block the node is trying to gather or not.",
}, append(labels, "matches_current")).With(labelsAndValues...),
QuorumPrevoteDelay: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "quorum_prevote_delay",
Help: "Interval in seconds between the proposal timestamp and the timestamp of the earliest prevote that achieved a quorum.",
}, append(labels, "proposer_address")).With(labelsAndValues...),
FullPrevoteDelay: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "full_prevote_delay",
Help: "Interval in seconds between the proposal timestamp and the timestamp of the latest prevote in a round where all validators voted.",
}, append(labels, "proposer_address")).With(labelsAndValues...),
ProposalReceiveCount: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "proposal_receive_count",
Help: "ProposalReceiveCount is the total number of proposals received by this node since process start. The metric is annotated by the status of the proposal from the application, either 'accepted' or 'rejected'.",
}, append(labels, "status")).With(labelsAndValues...),
ProposalCreateCount: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "proposal_create_count",
Help: "ProposalCreateCount is the total number of proposals created by this node since process start.",
}, labels).With(labelsAndValues...),
RoundVotingPowerPercent: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "round_voting_power_percent",
Help: "RoundVotingPowerPercent is the percentage of the total voting power received within a round. The value begins at 0 for each round and approaches 1.0 as additional voting power is observed. The metric is labeled by vote type.",
}, labels).With(labelsAndValues...),
LateVotes: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "late_votes",
Help: "LateVotes stores the number of votes that were received by this node that correspond to earlier heights and rounds than this node is currently in.",
}, labels).With(labelsAndValues...),
}
}
func NopMetrics() *Metrics {
return &Metrics{
Height: discard.NewGauge(),
ValidatorLastSignedHeight: discard.NewGauge(),
Rounds: discard.NewGauge(),
RoundDurationSeconds: discard.NewHistogram(),
Validators: discard.NewGauge(),
ValidatorsPower: discard.NewGauge(),
ValidatorPower: discard.NewGauge(),
ValidatorMissedBlocks: discard.NewGauge(),
MissingValidators: discard.NewGauge(),
MissingValidatorsPower: discard.NewGauge(),
ByzantineValidators: discard.NewGauge(),
ByzantineValidatorsPower: discard.NewGauge(),
BlockIntervalSeconds: discard.NewHistogram(),
NumTxs: discard.NewGauge(),
BlockSizeBytes: discard.NewGauge(),
TotalTxs: discard.NewGauge(),
CommittedHeight: discard.NewGauge(),
BlockSyncing: discard.NewGauge(),
StateSyncing: discard.NewGauge(),
BlockParts: discard.NewCounter(),
StepDurationSeconds: discard.NewHistogram(),
BlockGossipPartsReceived: discard.NewCounter(),
QuorumPrevoteDelay: discard.NewGauge(),
FullPrevoteDelay: discard.NewGauge(),
ProposalReceiveCount: discard.NewCounter(),
ProposalCreateCount: discard.NewCounter(),
RoundVotingPowerPercent: discard.NewGauge(),
LateVotes: discard.NewCounter(),
}
}
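The two generated constructors above differ only in their backends: PrometheusMetrics registers each field under the given namespace and the consensus subsystem, attaching any global label pairs passed as alternating key/value strings, while NopMetrics discards every observation. A minimal usage sketch, assuming the standard consensus package import path; the "chain_id" pair and the validator address are illustrative values only:

package main

import (
	cs "github.com/tendermint/tendermint/consensus"
)

func main() {
	// Global label pairs ("chain_id"/"test-chain" here) are attached to every
	// metric built by the generated Prometheus constructor.
	m := cs.PrometheusMetrics("tendermint", "chain_id", "test-chain")

	// Per-metric labels declared via struct tags (e.g. validator_address on
	// ValidatorMissedBlocks) are supplied at observation time with With().
	m.ValidatorMissedBlocks.With("validator_address", "AB0123").Set(3)

	// Tests and nodes without Prometheus use the no-op variant, which fills
	// the same struct while dropping every observation.
	_ = cs.NopMetrics()
}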

View File

@@ -5,12 +5,10 @@ import (
"time"
"github.com/go-kit/kit/metrics"
"github.com/go-kit/kit/metrics/discard"
cstypes "github.com/tendermint/tendermint/consensus/types"
"github.com/tendermint/tendermint/types"
prometheus "github.com/go-kit/kit/metrics/prometheus"
stdprometheus "github.com/prometheus/client_golang/prometheus"
cstypes "github.com/tendermint/tendermint/consensus/types"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/types"
)
const (
@@ -19,28 +17,30 @@ const (
MetricsSubsystem = "consensus"
)
//go:generate go run ../scripts/metricsgen -struct=Metrics
// Metrics contains metrics exposed by this package.
type Metrics struct {
// Height of the chain.
Height metrics.Gauge
// ValidatorLastSignedHeight of a validator.
ValidatorLastSignedHeight metrics.Gauge
// Last height signed by this validator if the node is a validator.
ValidatorLastSignedHeight metrics.Gauge `metrics_labels:"validator_address"`
// Number of rounds.
Rounds metrics.Gauge
// Histogram of round duration.
RoundDuration metrics.Histogram
RoundDurationSeconds metrics.Histogram `metrics_buckettype:"exprange" metrics_bucketsizes:"0.1, 100, 8"`
// Number of validators.
Validators metrics.Gauge
// Total power of all validators.
ValidatorsPower metrics.Gauge
// Power of a validator.
ValidatorPower metrics.Gauge
// Amount of blocks missed by a validator.
ValidatorMissedBlocks metrics.Gauge
ValidatorPower metrics.Gauge `metrics_labels:"validator_address"`
// Number of blocks missed per validator.
ValidatorMissedBlocks metrics.Gauge `metrics_labels:"validator_address"`
// Number of validators who did not sign.
MissingValidators metrics.Gauge
// Total power of the missing validators.
@@ -60,22 +60,22 @@ type Metrics struct {
// Total number of transactions.
TotalTxs metrics.Gauge
// The latest block height.
CommittedHeight metrics.Gauge
// Whether or not a node is fast syncing. 1 if yes, 0 if no.
FastSyncing metrics.Gauge
CommittedHeight metrics.Gauge `metrics_name:"latest_block_height"`
// Whether or not a node is block syncing. 1 if yes, 0 if no.
BlockSyncing metrics.Gauge
// Whether or not a node is state syncing. 1 if yes, 0 if no.
StateSyncing metrics.Gauge
// Number of blockparts transmitted by peer.
BlockParts metrics.Counter
// Number of block parts transmitted by each peer.
BlockParts metrics.Counter `metrics_labels:"peer_id"`
// Histogram of step duration.
StepDuration metrics.Histogram
stepStart time.Time
// Histogram of durations for each step in the consensus protocol.
StepDurationSeconds metrics.Histogram `metrics_labels:"step" metrics_buckettype:"exprange" metrics_bucketsizes:"0.1, 100, 8"`
stepStart time.Time
// Number of block parts received by the node, separated by whether the part
// was relevant to the block the node is trying to gather or not.
BlockGossipPartsReceived metrics.Counter
BlockGossipPartsReceived metrics.Counter `metrics_labels:"matches_current"`
// QuorumPrevoteMessageDelay is the interval in seconds between the proposal
// timestamp and the timestamp of the earliest prevote that achieved a quorum
@@ -86,208 +86,34 @@ type Metrics struct {
// be above 2/3 of the total voting power of the network defines the endpoint
// of the interval. Subtract the proposal timestamp from this endpoint
// to obtain the quorum delay.
QuorumPrevoteMessageDelay metrics.Gauge
//metrics:Interval in seconds between the proposal timestamp and the timestamp of the earliest prevote that achieved a quorum.
QuorumPrevoteDelay metrics.Gauge `metrics_labels:"proposer_address"`
// FullPrevoteMessageDelay is the interval in seconds between the proposal
// FullPrevoteDelay is the interval in seconds between the proposal
// timestamp and the timestamp of the latest prevote in a round where 100%
// of the voting power on the network issued prevotes.
FullPrevoteMessageDelay metrics.Gauge
}
//metrics:Interval in seconds between the proposal timestamp and the timestamp of the latest prevote in a round where all validators voted.
FullPrevoteDelay metrics.Gauge `metrics_labels:"proposer_address"`
// PrometheusMetrics returns Metrics built using the Prometheus client library.
// Optionally, labels can be provided along with their values ("foo",
// "fooValue").
func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
labels := []string{}
for i := 0; i < len(labelsAndValues); i += 2 {
labels = append(labels, labelsAndValues[i])
}
return &Metrics{
Height: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "height",
Help: "Height of the chain.",
}, labels).With(labelsAndValues...),
Rounds: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "rounds",
Help: "Number of rounds.",
}, labels).With(labelsAndValues...),
RoundDuration: prometheus.NewHistogramFrom(stdprometheus.HistogramOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "round_duration_seconds",
Help: "Time spent in a round.",
Buckets: stdprometheus.ExponentialBucketsRange(0.1, 100, 8),
}, labels).With(labelsAndValues...),
Validators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "validators",
Help: "Number of validators.",
}, labels).With(labelsAndValues...),
ValidatorLastSignedHeight: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "validator_last_signed_height",
Help: "Last signed height for a validator",
}, append(labels, "validator_address")).With(labelsAndValues...),
ValidatorMissedBlocks: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "validator_missed_blocks",
Help: "Total missed blocks for a validator",
}, append(labels, "validator_address")).With(labelsAndValues...),
ValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "validators_power",
Help: "Total power of all validators.",
}, labels).With(labelsAndValues...),
ValidatorPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "validator_power",
Help: "Power of a validator",
}, append(labels, "validator_address")).With(labelsAndValues...),
MissingValidators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "missing_validators",
Help: "Number of validators who did not sign.",
}, labels).With(labelsAndValues...),
MissingValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "missing_validators_power",
Help: "Total power of the missing validators.",
}, labels).With(labelsAndValues...),
ByzantineValidators: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "byzantine_validators",
Help: "Number of validators who tried to double sign.",
}, labels).With(labelsAndValues...),
ByzantineValidatorsPower: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "byzantine_validators_power",
Help: "Total power of the byzantine validators.",
}, labels).With(labelsAndValues...),
BlockIntervalSeconds: prometheus.NewHistogramFrom(stdprometheus.HistogramOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_interval_seconds",
Help: "Time between this and the last block.",
}, labels).With(labelsAndValues...),
NumTxs: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "num_txs",
Help: "Number of transactions.",
}, labels).With(labelsAndValues...),
BlockSizeBytes: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_size_bytes",
Help: "Size of the block.",
}, labels).With(labelsAndValues...),
TotalTxs: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "total_txs",
Help: "Total number of transactions.",
}, labels).With(labelsAndValues...),
CommittedHeight: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "latest_block_height",
Help: "The latest block height.",
}, labels).With(labelsAndValues...),
FastSyncing: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "fast_syncing",
Help: "Whether or not a node is fast syncing. 1 if yes, 0 if no.",
}, labels).With(labelsAndValues...),
StateSyncing: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "state_syncing",
Help: "Whether or not a node is state syncing. 1 if yes, 0 if no.",
}, labels).With(labelsAndValues...),
BlockParts: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_parts",
Help: "Number of blockparts transmitted by peer.",
}, append(labels, "peer_id")).With(labelsAndValues...),
BlockGossipPartsReceived: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_gossip_parts_received",
Help: "Number of block parts received by the node, labeled by whether the " +
"part was relevant to the block the node was currently gathering or not.",
}, append(labels, "matches_current")).With(labelsAndValues...),
StepDuration: prometheus.NewHistogramFrom(stdprometheus.HistogramOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "step_duration_seconds",
Help: "Time spent per step.",
Buckets: stdprometheus.ExponentialBucketsRange(0.1, 100, 8),
}, append(labels, "step")).With(labelsAndValues...),
QuorumPrevoteMessageDelay: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "quorum_prevote_message_delay",
Help: "Difference in seconds between the proposal timestamp and the timestamp " +
"of the latest prevote that achieved a quorum in the prevote step.",
}, labels).With(labelsAndValues...),
FullPrevoteMessageDelay: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "full_prevote_message_delay",
Help: "Difference in seconds between the proposal timestamp and the timestamp " +
"of the latest prevote that achieved 100% of the voting power in the prevote step.",
}, labels).With(labelsAndValues...),
}
}
// ProposalReceiveCount is the total number of proposals received by this node
// since process start.
// The metric is annotated by the status of the proposal from the application,
// either 'accepted' or 'rejected'.
ProposalReceiveCount metrics.Counter `metrics_labels:"status"`
// NopMetrics returns no-op Metrics.
func NopMetrics() *Metrics {
return &Metrics{
Height: discard.NewGauge(),
// ProposalCreateCount is the total number of proposals created by this node
// since process start.
ProposalCreateCount metrics.Counter
ValidatorLastSignedHeight: discard.NewGauge(),
// RoundVotingPowerPercent is the percentage of the total voting power received
// within a round. The value begins at 0 for each round and approaches 1.0 as
// additional voting power is observed. The metric is labeled by vote type.
RoundVotingPowerPercent metrics.Gauge
Rounds: discard.NewGauge(),
RoundDuration: discard.NewHistogram(),
StepDuration: discard.NewHistogram(),
Validators: discard.NewGauge(),
ValidatorsPower: discard.NewGauge(),
ValidatorPower: discard.NewGauge(),
ValidatorMissedBlocks: discard.NewGauge(),
MissingValidators: discard.NewGauge(),
MissingValidatorsPower: discard.NewGauge(),
ByzantineValidators: discard.NewGauge(),
ByzantineValidatorsPower: discard.NewGauge(),
BlockIntervalSeconds: discard.NewHistogram(),
NumTxs: discard.NewGauge(),
BlockSizeBytes: discard.NewGauge(),
TotalTxs: discard.NewGauge(),
CommittedHeight: discard.NewGauge(),
FastSyncing: discard.NewGauge(),
StateSyncing: discard.NewGauge(),
BlockParts: discard.NewCounter(),
BlockGossipPartsReceived: discard.NewCounter(),
QuorumPrevoteMessageDelay: discard.NewGauge(),
FullPrevoteMessageDelay: discard.NewGauge(),
}
// LateVotes stores the number of votes that were received by this node that
// correspond to earlier heights and rounds than this node is currently
// in.
LateVotes metrics.Counter
}
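The //go:generate directive and the struct tags above drive the generated file shown at the top of this compare: the doc comment becomes the Help string, the default metric name is the snake-cased field name unless metrics_name overrides it, metrics_labels appends per-metric labels, and the bucket tags feed ExponentialBucketsRange. A hypothetical field, shown only to illustrate the tag conventions (it is not part of this change):

package example

import "github.com/go-kit/kit/metrics"

// ExampleMetrics is a hypothetical struct used only to illustrate how a
// metricsgen-tagged field maps onto the generated constructor.
type ExampleMetrics struct {
	// Number of proposals that timed out, labeled by round.
	ProposalTimeouts metrics.Counter `metrics_labels:"round" metrics_name:"proposal_timeouts_total"`
}

// For a field like the one above, metricsgen would emit roughly:
//
//	ProposalTimeouts: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
//		Namespace: namespace,
//		Subsystem: MetricsSubsystem,
//		Name:      "proposal_timeouts_total",
//		Help:      "Number of proposals that timed out, labeled by round.",
//	}, append(labels, "round")).With(labelsAndValues...),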
// RecordConsMetrics is used for recording the block-related metrics during fast-sync.
@@ -298,17 +124,36 @@ func (m *Metrics) RecordConsMetrics(block *types.Block) {
m.CommittedHeight.Set(float64(block.Height))
}
func (m *Metrics) MarkProposalProcessed(accepted bool) {
status := "accepted"
if !accepted {
status = "rejected"
}
m.ProposalReceiveCount.With("status", status).Add(1)
}
func (m *Metrics) MarkVoteReceived(vt tmproto.SignedMsgType, power, totalPower int64) {
p := float64(power) / float64(totalPower)
n := strings.ToLower(strings.TrimPrefix(vt.String(), "SIGNED_MSG_TYPE_"))
m.RoundVotingPowerPercent.With("vote_type", n).Add(p)
}
func (m *Metrics) MarkRound(r int32, st time.Time) {
m.Rounds.Set(float64(r))
roundTime := time.Since(st).Seconds()
m.RoundDuration.Observe(roundTime)
m.RoundDurationSeconds.Observe(roundTime)
}
func (m *Metrics) MarkLateVote(vt tmproto.SignedMsgType) {
n := strings.ToLower(strings.TrimPrefix(vt.String(), "SIGNED_MSG_TYPE_"))
m.LateVotes.With("vote_type", n).Add(1)
}
func (m *Metrics) MarkStep(s cstypes.RoundStepType) {
if !m.stepStart.IsZero() {
stepTime := time.Since(m.stepStart).Seconds()
stepName := strings.TrimPrefix(s.String(), "RoundStep")
m.StepDuration.With("step", stepName).Observe(stepTime)
m.StepDurationSeconds.With("step", stepName).Observe(stepTime)
}
m.stepStart = time.Now()
}
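The recording helpers above derive their label values by trimming enum prefixes: vote types lose SIGNED_MSG_TYPE_ and are lower-cased, round steps lose the RoundStep prefix. A small sketch of just that derivation, with the Metrics plumbing omitted:

package main

import (
	"fmt"
	"strings"

	tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)

func main() {
	// MarkVoteReceived / MarkLateVote: SIGNED_MSG_TYPE_PREVOTE -> "prevote".
	vt := tmproto.PrevoteType
	fmt.Println(strings.ToLower(strings.TrimPrefix(vt.String(), "SIGNED_MSG_TYPE_")))

	// MarkStep: a cstypes.RoundStepType such as RoundStepPropose stringifies
	// to "RoundStepPropose", so the emitted label value is "Propose".
	fmt.Println(strings.TrimPrefix("RoundStepPropose", "RoundStep"))
}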

View File

@@ -314,7 +314,7 @@ func TestWALMsgProto(t *testing.T) {
}
}
// nolint:lll //ignore line length for tests
//nolint:lll //ignore line length for tests
func TestConsMsgsVectors(t *testing.T) {
date := time.Date(2018, 8, 30, 12, 0, 0, 0, time.UTC)
psh := types.PartSetHeader{

View File

@@ -72,7 +72,7 @@ func NewReactor(consensusState *State, waitSync bool, options ...ReactorOption)
}
// OnStart implements BaseService by subscribing to events, which later will be
// broadcasted to other peers and starting state if we're not in fast sync.
// broadcasted to other peers and starting state if we're not in block sync.
func (conR *Reactor) OnStart() error {
conR.Logger.Info("Reactor ", "waitSync", conR.WaitSync())
@@ -104,8 +104,8 @@ func (conR *Reactor) OnStop() {
}
}
// SwitchToConsensus switches from fast_sync mode to consensus mode.
// It resets the state, turns off fast_sync, and starts the consensus state-machine
// SwitchToConsensus switches from block_sync mode to consensus mode.
// It resets the state, turns off block_sync, and starts the consensus state-machine
func (conR *Reactor) SwitchToConsensus(state sm.State, skipWAL bool) {
conR.Logger.Info("SwitchToConsensus")
@@ -121,7 +121,7 @@ func (conR *Reactor) SwitchToConsensus(state sm.State, skipWAL bool) {
conR.mtx.Lock()
conR.waitSync = false
conR.mtx.Unlock()
conR.Metrics.FastSyncing.Set(0)
conR.Metrics.BlockSyncing.Set(0)
conR.Metrics.StateSyncing.Set(0)
if skipWAL {
@@ -198,7 +198,7 @@ func (conR *Reactor) AddPeer(peer p2p.Peer) {
go conR.queryMaj23Routine(peer, peerState)
// Send our state to peer.
// If we're fast_syncing, broadcast a RoundStepMessage later upon SwitchToConsensus().
// If we're block_syncing, broadcast a RoundStepMessage later upon SwitchToConsensus().
if !conR.WaitSync() {
conR.sendNewRoundStepMessage(peer)
}
@@ -218,7 +218,7 @@ func (conR *Reactor) RemovePeer(peer p2p.Peer, reason interface{}) {
}
// Receive implements Reactor
// NOTE: We process these messages even when we're fast_syncing.
// NOTE: We process these messages even when we're block_syncing.
// Messages affect either a peer state or the consensus state.
// Peer state updates can happen in parallel, but processing of
// proposals, block parts, and votes are ordered by the receiveRoutine
@@ -386,7 +386,7 @@ func (conR *Reactor) SetEventBus(b *types.EventBus) {
conR.conS.SetEventBus(b)
}
// WaitSync returns whether the consensus reactor is waiting for state/fast sync.
// WaitSync returns whether the consensus reactor is waiting for state/block sync.
func (conR *Reactor) WaitSync() bool {
conR.mtx.RLock()
defer conR.mtx.RUnlock()

View File

@@ -138,7 +138,9 @@ func TestReactorWithEvidence(t *testing.T) {
logger := consensusLogger()
for i := 0; i < nValidators; i++ {
stateDB := dbm.NewMemDB() // each state needs its own db
stateStore := sm.NewStore(stateDB)
stateStore := sm.NewStore(stateDB, sm.StoreOptions{
DiscardABCIResponses: false,
})
state, _ := stateStore.LoadFromDBOrGenesisDoc(genDoc)
thisConfig := ResetConfig(fmt.Sprintf("%s_%d", testName, i))
defer os.RemoveAll(thisConfig.RootDir)
@@ -188,7 +190,8 @@ func TestReactorWithEvidence(t *testing.T) {
// mock the evidence pool
// everyone includes evidence of another double signing
vIdx := (i + 1) % nValidators
ev := types.NewMockDuplicateVoteEvidenceWithValidator(1, defaultTestTime, privVals[vIdx], config.ChainID())
ev, err := types.NewMockDuplicateVoteEvidenceWithValidator(1, defaultTestTime, privVals[vIdx], config.ChainID())
require.NoError(t, err)
evpool := &statemocks.EvidencePool{}
evpool.On("CheckEvidence", mock.AnythingOfType("types.EvidenceList")).Return(nil)
evpool.On("PendingEvidence", mock.AnythingOfType("int64")).Return([]types.Evidence{
@@ -205,7 +208,7 @@ func TestReactorWithEvidence(t *testing.T) {
eventBus := types.NewEventBus()
eventBus.SetLogger(log.TestingLogger().With("module", "events"))
err := eventBus.Start()
err = eventBus.Start()
require.NoError(t, err)
cs.SetEventBus(eventBus)
@@ -689,7 +692,7 @@ func capture() {
// Ensure basic validation of structs is functioning
func TestNewRoundStepMessageValidateBasic(t *testing.T) {
testCases := []struct { // nolint: maligned
testCases := []struct { //nolint: maligned
expectErr bool
messageRound int32
messageLastCommitRound int32
@@ -728,7 +731,7 @@ func TestNewRoundStepMessageValidateBasic(t *testing.T) {
func TestNewRoundStepMessageValidateHeight(t *testing.T) {
initialHeight := int64(10)
testCases := []struct { // nolint: maligned
testCases := []struct { //nolint: maligned
expectErr bool
messageLastCommitRound int32
messageHeight int64
@@ -878,7 +881,7 @@ func TestHasVoteMessageValidateBasic(t *testing.T) {
invalidSignedMsgType tmproto.SignedMsgType = 0x03
)
testCases := []struct { // nolint: maligned
testCases := []struct { //nolint: maligned
expectErr bool
messageRound int32
messageIndex int32
@@ -923,7 +926,7 @@ func TestVoteSetMaj23MessageValidateBasic(t *testing.T) {
},
}
testCases := []struct { // nolint: maligned
testCases := []struct { //nolint: maligned
expectErr bool
messageRound int32
messageHeight int64

View File

@@ -307,12 +307,12 @@ func (h *Handshaker) ReplayBlocks(
}
validatorSet := types.NewValidatorSet(validators)
nextVals := types.TM2PB.ValidatorUpdates(validatorSet)
csParams := types.TM2PB.ConsensusParams(h.genDoc.ConsensusParams)
pbparams := h.genDoc.ConsensusParams.ToProto()
req := abci.RequestInitChain{
Time: h.genDoc.GenesisTime,
ChainId: h.genDoc.ChainID,
InitialHeight: h.genDoc.InitialHeight,
ConsensusParams: csParams,
ConsensusParams: &pbparams,
Validators: nextVals,
AppStateBytes: h.genDoc.AppState,
}
@@ -344,8 +344,8 @@ func (h *Handshaker) ReplayBlocks(
}
if res.ConsensusParams != nil {
state.ConsensusParams = types.UpdateConsensusParams(state.ConsensusParams, res.ConsensusParams)
state.Version.Consensus.App = state.ConsensusParams.Version.AppVersion
state.ConsensusParams = state.ConsensusParams.Update(res.ConsensusParams)
state.Version.Consensus.App = state.ConsensusParams.Version.App
}
// We update the last results hash with the empty hash, to conform with RFC-6962.
state.LastResultsHash = merkle.HashFromByteSlices(nil)
@@ -418,7 +418,7 @@ func (h *Handshaker) ReplayBlocks(
case appBlockHeight == storeBlockHeight:
// We ran Commit, but didn't save the state, so replayBlock with mock app.
abciResponses, err := h.stateStore.LoadABCIResponses(storeBlockHeight)
abciResponses, err := h.stateStore.LoadLastABCIResponse(storeBlockHeight)
if err != nil {
return nil, err
}

View File

@@ -297,7 +297,9 @@ func newConsensusStateForReplay(config cfg.BaseConfig, csConfig *cfg.ConsensusCo
if err != nil {
tmos.Exit(err.Error())
}
stateStore := sm.NewStore(stateDB)
stateStore := sm.NewStore(stateDB, sm.StoreOptions{
DiscardABCIResponses: false,
})
gdoc, err := sm.MakeGenesisDocFromFile(config.GenesisFile())
if err != nil {
tmos.Exit(err.Error())
@@ -309,7 +311,7 @@ func newConsensusStateForReplay(config cfg.BaseConfig, csConfig *cfg.ConsensusCo
// Create proxyAppConn connection (consensus, mempool, query)
clientCreator := proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir())
proxyApp := proxy.NewAppConns(clientCreator)
proxyApp := proxy.NewAppConns(clientCreator, proxy.NopMetrics())
err = proxyApp.Start()
if err != nil {
tmos.Exit(fmt.Sprintf("Error starting proxy app conns: %v", err))
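Two signature changes recur throughout this compare: sm.NewStore now takes an explicit StoreOptions value, and proxy.NewAppConns takes a proxy metrics argument. A minimal sketch of the updated call sites, using an in-memory DB and the example kvstore application purely for illustration:

package main

import (
	dbm "github.com/tendermint/tm-db"

	"github.com/tendermint/tendermint/abci/example/kvstore"
	"github.com/tendermint/tendermint/proxy"
	sm "github.com/tendermint/tendermint/state"
)

func main() {
	// DiscardABCIResponses: false preserves the pre-change behaviour of
	// keeping ABCI responses in the state store.
	stateStore := sm.NewStore(dbm.NewMemDB(), sm.StoreOptions{
		DiscardABCIResponses: false,
	})
	_ = stateStore

	// AppConns is now instrumented; NopMetrics keeps it unmetered, matching
	// the call sites updated in this compare.
	proxyApp := proxy.NewAppConns(
		proxy.NewLocalClientCreator(kvstore.NewApplication()),
		proxy.NopMetrics(),
	)
	_ = proxyApp
}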

View File

@@ -66,7 +66,7 @@ func newMockProxyApp(appHash []byte, abciResponses *tmstate.ABCIResponses) proxy
if err != nil {
panic(err)
}
return proxy.NewAppConnConsensus(cli)
return proxy.NewAppConnConsensus(cli, proxy.NopMetrics())
}
type mockProxyApp struct {

View File

@@ -5,7 +5,6 @@ import (
"context"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"runtime"
@@ -23,6 +22,7 @@ import (
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/crypto"
cryptoenc "github.com/tendermint/tendermint/crypto/encoding"
"github.com/tendermint/tendermint/internal/test"
"github.com/tendermint/tendermint/libs/log"
tmrand "github.com/tendermint/tendermint/libs/rand"
mempl "github.com/tendermint/tendermint/mempool"
@@ -78,7 +78,7 @@ func startNewStateAndWaitForBlock(t *testing.T, consensusReplayConfig *cfg.Confi
)
cs.SetLogger(logger)
bytes, _ := ioutil.ReadFile(cs.config.WalFile())
bytes, _ := os.ReadFile(cs.config.WalFile())
t.Logf("====== WAL: \n\r%X\n", bytes)
err := cs.Start()
@@ -159,7 +159,9 @@ LOOP:
logger := log.NewNopLogger()
blockDB := dbm.NewMemDB()
stateDB := blockDB
stateStore := sm.NewStore(stateDB)
stateStore := sm.NewStore(stateDB, sm.StoreOptions{
DiscardABCIResponses: false,
})
state, err := sm.MakeGenesisStateFromFile(consensusReplayConfig.GenesisFile())
require.NoError(t, err)
privValidator := loadPrivValidator(consensusReplayConfig)
@@ -290,7 +292,7 @@ func (w *crashingWAL) Start() error { return w.next.Start() }
func (w *crashingWAL) Stop() error { return w.next.Stop() }
func (w *crashingWAL) Wait() { w.next.Wait() }
//------------------------------------------------------------------------------------------
// ------------------------------------------------------------------------------------------
type testSim struct {
GenesisState sm.State
Config *cfg.Config
@@ -362,9 +364,11 @@ func TestSimulateValidatorsChange(t *testing.T) {
require.NoError(t, err)
newValidatorTx1 := kvstore.MakeValSetChangeTx(valPubKey1ABCI, testMinPower)
err = assertMempool(css[0].txNotifier).CheckTx(newValidatorTx1, nil, mempl.TxInfo{})
assert.Nil(t, err)
propBlock, _ := css[0].createProposalBlock() // changeProposer(t, cs1, vs2)
propBlockParts := propBlock.MakePartSet(partSize)
assert.NoError(t, err)
propBlock, err := css[0].createProposalBlock() // changeProposer(t, cs1, vs2)
require.NoError(t, err)
propBlockParts, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
blockID := types.BlockID{Hash: propBlock.Hash(), PartSetHeader: propBlockParts.Header()}
proposal := types.NewProposal(vss[1].Height, round, -1, blockID)
@@ -392,9 +396,11 @@ func TestSimulateValidatorsChange(t *testing.T) {
require.NoError(t, err)
updateValidatorTx1 := kvstore.MakeValSetChangeTx(updatePubKey1ABCI, 25)
err = assertMempool(css[0].txNotifier).CheckTx(updateValidatorTx1, nil, mempl.TxInfo{})
assert.Nil(t, err)
propBlock, _ = css[0].createProposalBlock() // changeProposer(t, cs1, vs2)
propBlockParts = propBlock.MakePartSet(partSize)
assert.NoError(t, err)
propBlock, err = css[0].createProposalBlock() // changeProposer(t, cs1, vs2)
require.NoError(t, err)
propBlockParts, err = propBlock.MakePartSet(partSize)
require.NoError(t, err)
blockID = types.BlockID{Hash: propBlock.Hash(), PartSetHeader: propBlockParts.Header()}
proposal = types.NewProposal(vss[2].Height, round, -1, blockID)
@@ -429,9 +435,11 @@ func TestSimulateValidatorsChange(t *testing.T) {
require.NoError(t, err)
newValidatorTx3 := kvstore.MakeValSetChangeTx(newVal3ABCI, testMinPower)
err = assertMempool(css[0].txNotifier).CheckTx(newValidatorTx3, nil, mempl.TxInfo{})
assert.Nil(t, err)
propBlock, _ = css[0].createProposalBlock() // changeProposer(t, cs1, vs2)
propBlockParts = propBlock.MakePartSet(partSize)
assert.NoError(t, err)
propBlock, err = css[0].createProposalBlock() // changeProposer(t, cs1, vs2)
require.NoError(t, err)
propBlockParts, err = propBlock.MakePartSet(partSize)
require.NoError(t, err)
blockID = types.BlockID{Hash: propBlock.Hash(), PartSetHeader: propBlockParts.Header()}
newVss := make([]*validatorStub, nVals+1)
copy(newVss, vss[:nVals+1])
@@ -504,9 +512,11 @@ func TestSimulateValidatorsChange(t *testing.T) {
incrementHeight(vss...)
removeValidatorTx3 := kvstore.MakeValSetChangeTx(newVal3ABCI, 0)
err = assertMempool(css[0].txNotifier).CheckTx(removeValidatorTx3, nil, mempl.TxInfo{})
assert.Nil(t, err)
propBlock, _ = css[0].createProposalBlock() // changeProposer(t, cs1, vs2)
propBlockParts = propBlock.MakePartSet(partSize)
assert.NoError(t, err)
propBlock, err = css[0].createProposalBlock() // changeProposer(t, cs1, vs2)
require.NoError(t, err)
propBlockParts, err = propBlock.MakePartSet(partSize)
require.NoError(t, err)
blockID = types.BlockID{Hash: propBlock.Hash(), PartSetHeader: propBlockParts.Header()}
newVss = make([]*validatorStub, nVals+3)
copy(newVss, vss[:nVals+3])
@@ -634,7 +644,7 @@ func TestMockProxyApp(t *testing.T) {
}
func tempWALWithData(data []byte) string {
walFile, err := ioutil.TempFile("", "wal")
walFile, err := os.CreateTemp("", "wal")
if err != nil {
panic(fmt.Sprintf("failed to create temp WAL file: %v", err))
}
@@ -665,7 +675,7 @@ func testHandshakeReplay(t *testing.T, config *cfg.Config, nBlocks int, mode uin
config = sim.Config
chain = append([]*types.Block{}, sim.Chain...) // copy chain
commits = sim.Commits
store = newMockBlockStore(config, genesisState.ConsensusParams)
store = newMockBlockStore(t, config, genesisState.ConsensusParams)
} else { // test single node
testConfig := ResetConfig(fmt.Sprintf("%s_%v_s", t.Name(), mode))
defer os.RemoveAll(testConfig.RootDir)
@@ -690,16 +700,18 @@ func testHandshakeReplay(t *testing.T, config *cfg.Config, nBlocks int, mode uin
require.NoError(t, err)
pubKey, err := privVal.GetPubKey()
require.NoError(t, err)
stateDB, genesisState, store = stateAndStore(config, pubKey, kvstore.ProtocolVersion)
stateDB, genesisState, store = stateAndStore(t, config, pubKey, kvstore.ProtocolVersion)
}
stateStore := sm.NewStore(stateDB)
stateStore := sm.NewStore(stateDB, sm.StoreOptions{
DiscardABCIResponses: false,
})
store.chain = chain
store.commits = commits
state := genesisState.Copy()
// run the chain through state.ApplyBlock to build up the tendermint state
state = buildTMStateFromChain(config, stateStore, state, chain, nBlocks, mode)
state = buildTMStateFromChain(t, config, stateStore, state, chain, nBlocks, mode)
latestAppHash := state.AppHash
// make a new client creator
@@ -710,12 +722,14 @@ func testHandshakeReplay(t *testing.T, config *cfg.Config, nBlocks int, mode uin
if nBlocks > 0 {
// run nBlocks against a new client to build up the app state.
// use a throwaway tendermint state
proxyApp := proxy.NewAppConns(clientCreator2)
proxyApp := proxy.NewAppConns(clientCreator2, proxy.NopMetrics())
stateDB1 := dbm.NewMemDB()
stateStore := sm.NewStore(stateDB1)
stateStore := sm.NewStore(stateDB1, sm.StoreOptions{
DiscardABCIResponses: false,
})
err := stateStore.Save(genesisState)
require.NoError(t, err)
buildAppStateFromChain(proxyApp, stateStore, genesisState, chain, nBlocks, mode)
buildAppStateFromChain(t, proxyApp, stateStore, genesisState, chain, nBlocks, mode)
}
// Prune block store if requested
@@ -730,7 +744,7 @@ func testHandshakeReplay(t *testing.T, config *cfg.Config, nBlocks int, mode uin
// now start the app using the handshake - it should sync
genDoc, _ := sm.MakeGenesisDocFromFile(config.GenesisFile())
handshaker := NewHandshaker(stateStore, state, store, genDoc)
proxyApp := proxy.NewAppConns(clientCreator2)
proxyApp := proxy.NewAppConns(clientCreator2, proxy.NopMetrics())
if err := proxyApp.Start(); err != nil {
t.Fatalf("Error starting proxy app connections: %v", err)
}
@@ -775,19 +789,19 @@ func testHandshakeReplay(t *testing.T, config *cfg.Config, nBlocks int, mode uin
}
}
func applyBlock(stateStore sm.Store, st sm.State, blk *types.Block, proxyApp proxy.AppConns) sm.State {
func applyBlock(t *testing.T, stateStore sm.Store, st sm.State, blk *types.Block, proxyApp proxy.AppConns) sm.State {
testPartSize := types.BlockPartSizeBytes
blockExec := sm.NewBlockExecutor(stateStore, log.TestingLogger(), proxyApp.Consensus(), mempool, evpool)
blkID := types.BlockID{Hash: blk.Hash(), PartSetHeader: blk.MakePartSet(testPartSize).Header()}
bps, err := blk.MakePartSet(testPartSize)
require.NoError(t, err)
blkID := types.BlockID{Hash: blk.Hash(), PartSetHeader: bps.Header()}
newState, _, err := blockExec.ApplyBlock(st, blkID, blk)
if err != nil {
panic(err)
}
require.NoError(t, err)
return newState
}
func buildAppStateFromChain(proxyApp proxy.AppConns, stateStore sm.Store,
func buildAppStateFromChain(t *testing.T, proxyApp proxy.AppConns, stateStore sm.Store,
state sm.State, chain []*types.Block, nBlocks int, mode uint) {
// start a new app without handshake, play nBlocks blocks
if err := proxyApp.Start(); err != nil {
@@ -809,18 +823,18 @@ func buildAppStateFromChain(proxyApp proxy.AppConns, stateStore sm.Store,
case 0:
for i := 0; i < nBlocks; i++ {
block := chain[i]
state = applyBlock(stateStore, state, block, proxyApp)
state = applyBlock(t, stateStore, state, block, proxyApp)
}
case 1, 2, 3:
for i := 0; i < nBlocks-1; i++ {
block := chain[i]
state = applyBlock(stateStore, state, block, proxyApp)
state = applyBlock(t, stateStore, state, block, proxyApp)
}
if mode == 2 || mode == 3 {
// update the kvstore height and apphash
// as if we ran commit but not
state = applyBlock(stateStore, state, chain[nBlocks-1], proxyApp)
state = applyBlock(t, stateStore, state, chain[nBlocks-1], proxyApp)
}
default:
panic(fmt.Sprintf("unknown mode %v", mode))
@@ -829,6 +843,7 @@ func buildAppStateFromChain(proxyApp proxy.AppConns, stateStore sm.Store,
}
func buildTMStateFromChain(
t *testing.T,
config *cfg.Config,
stateStore sm.Store,
state sm.State,
@@ -839,7 +854,7 @@ func buildTMStateFromChain(
clientCreator := proxy.NewLocalClientCreator(
kvstore.NewPersistentKVStoreApplication(
filepath.Join(config.DBDir(), fmt.Sprintf("replay_test_%d_%d_t", nBlocks, mode))))
proxyApp := proxy.NewAppConns(clientCreator)
proxyApp := proxy.NewAppConns(clientCreator, proxy.NopMetrics())
if err := proxyApp.Start(); err != nil {
panic(err)
}
@@ -859,19 +874,19 @@ func buildTMStateFromChain(
case 0:
// sync right up
for _, block := range chain {
state = applyBlock(stateStore, state, block, proxyApp)
state = applyBlock(t, stateStore, state, block, proxyApp)
}
case 1, 2, 3:
// sync up to the penultimate as if we stored the block.
// whether we commit or not depends on the appHash
for _, block := range chain[:len(chain)-1] {
state = applyBlock(stateStore, state, block, proxyApp)
state = applyBlock(t, stateStore, state, block, proxyApp)
}
// apply the final block to a state copy so we can
// get the right next appHash but keep the state back
applyBlock(stateStore, state, chain[len(chain)-1], proxyApp)
applyBlock(t, stateStore, state, chain[len(chain)-1], proxyApp)
default:
panic(fmt.Sprintf("unknown mode %v", mode))
}
@@ -890,12 +905,16 @@ func TestHandshakePanicsIfAppReturnsWrongAppHash(t *testing.T) {
const appVersion = 0x0
pubKey, err := privVal.GetPubKey()
require.NoError(t, err)
stateDB, state, store := stateAndStore(config, pubKey, appVersion)
stateStore := sm.NewStore(stateDB)
stateDB, state, store := stateAndStore(t, config, pubKey, appVersion)
stateStore := sm.NewStore(stateDB, sm.StoreOptions{
DiscardABCIResponses: false,
})
genDoc, _ := sm.MakeGenesisDocFromFile(config.GenesisFile())
state.LastValidators = state.Validators.Copy()
// mode = 0 for committing all the blocks
blocks := makeBlocks(3, &state, privVal)
blocks, err := test.MakeBlocks(3, state, []types.PrivValidator{privVal})
require.NoError(t, err)
store.chain = blocks
// 2. Tendermint must panic if app returns wrong hash for the first block
@@ -905,7 +924,7 @@ func TestHandshakePanicsIfAppReturnsWrongAppHash(t *testing.T) {
{
app := &badApp{numBlocks: 3, allHashesAreWrong: true}
clientCreator := proxy.NewLocalClientCreator(app)
proxyApp := proxy.NewAppConns(clientCreator)
proxyApp := proxy.NewAppConns(clientCreator, proxy.NopMetrics())
err := proxyApp.Start()
require.NoError(t, err)
t.Cleanup(func() {
@@ -929,7 +948,7 @@ func TestHandshakePanicsIfAppReturnsWrongAppHash(t *testing.T) {
{
app := &badApp{numBlocks: 3, onlyLastHashIsWrong: true}
clientCreator := proxy.NewLocalClientCreator(app)
proxyApp := proxy.NewAppConns(clientCreator)
proxyApp := proxy.NewAppConns(clientCreator, proxy.NopMetrics())
err := proxyApp.Start()
require.NoError(t, err)
t.Cleanup(func() {
@@ -947,52 +966,6 @@ func TestHandshakePanicsIfAppReturnsWrongAppHash(t *testing.T) {
}
}
func makeBlocks(n int, state *sm.State, privVal types.PrivValidator) []*types.Block {
blocks := make([]*types.Block, 0)
var (
prevBlock *types.Block
prevBlockMeta *types.BlockMeta
)
appHeight := byte(0x01)
for i := 0; i < n; i++ {
height := int64(i + 1)
block, parts := makeBlock(*state, prevBlock, prevBlockMeta, privVal, height)
blocks = append(blocks, block)
prevBlock = block
prevBlockMeta = types.NewBlockMeta(block, parts)
// update state
state.AppHash = []byte{appHeight}
appHeight++
state.LastBlockHeight = height
}
return blocks
}
func makeBlock(state sm.State, lastBlock *types.Block, lastBlockMeta *types.BlockMeta,
privVal types.PrivValidator, height int64) (*types.Block, *types.PartSet) {
lastCommit := types.NewCommit(height-1, 0, types.BlockID{}, nil)
if height > 1 {
vote, _ := types.MakeVote(
lastBlock.Header.Height,
lastBlockMeta.BlockID,
state.Validators,
privVal,
lastBlock.Header.ChainID,
time.Now())
lastCommit = types.NewCommit(vote.Height, vote.Round,
lastBlockMeta.BlockID, []types.CommitSig{vote.CommitSig()})
}
return state.MakeBlock(height, []types.Tx{}, lastCommit, nil, state.Validators.GetProposer().Address)
}
type badApp struct {
abci.BaseApplication
numBlocks byte
@@ -1059,7 +1032,7 @@ func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {
// if it's not the first one, we have a full block
if thisBlockParts != nil {
var pbb = new(tmproto.Block)
bz, err := ioutil.ReadAll(thisBlockParts.GetReader())
bz, err := io.ReadAll(thisBlockParts.GetReader())
if err != nil {
panic(err)
}
@@ -1098,7 +1071,7 @@ func makeBlockchainFromWAL(wal WAL) ([]*types.Block, []*types.Commit, error) {
}
}
// grab the last block too
bz, err := ioutil.ReadAll(thisBlockParts.GetReader())
bz, err := io.ReadAll(thisBlockParts.GetReader())
if err != nil {
panic(err)
}
@@ -1144,17 +1117,21 @@ func readPieceFromWAL(msg *TimedWALMessage) interface{} {
// fresh state and mock store
func stateAndStore(
t *testing.T,
config *cfg.Config,
pubKey crypto.PubKey,
appVersion uint64) (dbm.DB, sm.State, *mockBlockStore) {
appVersion uint64,
) (dbm.DB, sm.State, *mockBlockStore) {
stateDB := dbm.NewMemDB()
stateStore := sm.NewStore(stateDB)
state, _ := sm.MakeGenesisStateFromFile(config.GenesisFile())
stateStore := sm.NewStore(stateDB, sm.StoreOptions{
DiscardABCIResponses: false,
})
state, err := sm.MakeGenesisStateFromFile(config.GenesisFile())
require.NoError(t, err)
state.Version.Consensus.App = appVersion
store := newMockBlockStore(config, state.ConsensusParams)
if err := stateStore.Save(state); err != nil {
panic(err)
}
store := newMockBlockStore(t, config, state.ConsensusParams)
require.NoError(t, stateStore.Save(state))
return stateDB, state, store
}
@@ -1163,15 +1140,20 @@ func stateAndStore(
type mockBlockStore struct {
config *cfg.Config
params tmproto.ConsensusParams
params types.ConsensusParams
chain []*types.Block
commits []*types.Commit
base int64
t *testing.T
}
// TODO: NewBlockStore(db.NewMemDB) ...
func newMockBlockStore(config *cfg.Config, params tmproto.ConsensusParams) *mockBlockStore {
return &mockBlockStore{config, params, nil, nil, 0}
func newMockBlockStore(t *testing.T, config *cfg.Config, params types.ConsensusParams) *mockBlockStore {
return &mockBlockStore{
config: config,
params: params,
t: t,
}
}
func (bs *mockBlockStore) Height() int64 { return int64(len(bs.chain)) }
@@ -1184,8 +1166,10 @@ func (bs *mockBlockStore) LoadBlockByHash(hash []byte) *types.Block {
}
func (bs *mockBlockStore) LoadBlockMeta(height int64) *types.BlockMeta {
block := bs.chain[height-1]
bps, err := block.MakePartSet(types.BlockPartSizeBytes)
require.NoError(bs.t, err)
return &types.BlockMeta{
BlockID: types.BlockID{Hash: block.Hash(), PartSetHeader: block.MakePartSet(types.BlockPartSizeBytes).Header()},
BlockID: types.BlockID{Hash: block.Hash(), PartSetHeader: bps.Header()},
Header: block.Header,
}
}
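MakePartSet now returns an error alongside the part set, which is why the test call sites in this compare gain a require.NoError. A helper-style sketch of the new pattern (the helper name is made up for illustration):

package example

import "github.com/tendermint/tendermint/types"

// blockIDFor is a hypothetical helper; it shows the two-value MakePartSet
// call that replaces the old single-value form used in these tests.
func blockIDFor(block *types.Block) (types.BlockID, error) {
	bps, err := block.MakePartSet(types.BlockPartSizeBytes)
	if err != nil {
		return types.BlockID{}, err
	}
	return types.BlockID{Hash: block.Hash(), PartSetHeader: bps.Header()}, nil
}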
@@ -1224,15 +1208,17 @@ func TestHandshakeUpdatesValidators(t *testing.T) {
privVal := privval.LoadFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile())
pubKey, err := privVal.GetPubKey()
require.NoError(t, err)
stateDB, state, store := stateAndStore(config, pubKey, 0x0)
stateStore := sm.NewStore(stateDB)
stateDB, state, store := stateAndStore(t, config, pubKey, 0x0)
stateStore := sm.NewStore(stateDB, sm.StoreOptions{
DiscardABCIResponses: false,
})
oldValAddr := state.Validators.Validators[0].Address
// now start the app using the handshake - it should sync
genDoc, _ := sm.MakeGenesisDocFromFile(config.GenesisFile())
handshaker := NewHandshaker(stateStore, state, store, genDoc)
proxyApp := proxy.NewAppConns(clientCreator)
proxyApp := proxy.NewAppConns(clientCreator, proxy.NopMetrics())
if err := proxyApp.Start(); err != nil {
t.Fatalf("Error starting proxy app connections: %v", err)
}

View File

@@ -468,7 +468,6 @@ func (cs *State) AddVote(vote *types.Vote, peerID p2p.ID) (added bool, err error
// SetProposal inputs a proposal.
func (cs *State) SetProposal(proposal *types.Proposal, peerID p2p.ID) error {
if peerID == "" {
cs.internalMsgQueue <- msgInfo{&ProposalMessage{proposal}, ""}
} else {
@@ -481,7 +480,6 @@ func (cs *State) SetProposal(proposal *types.Proposal, peerID p2p.ID) error {
// AddProposalBlockPart inputs a part of the proposal block.
func (cs *State) AddProposalBlockPart(height int64, round int32, part *types.Part, peerID p2p.ID) error {
if peerID == "" {
cs.internalMsgQueue <- msgInfo{&BlockPartMessage{height, round, part}, ""}
} else {
@@ -499,7 +497,6 @@ func (cs *State) SetProposalAndBlock(
parts *types.PartSet,
peerID p2p.ID,
) error {
if err := cs.SetProposal(proposal, peerID); err != nil {
return err
}
@@ -918,7 +915,7 @@ func (cs *State) handleTimeout(ti timeoutInfo, rs cstypes.RoundState) {
cs.enterNewRound(ti.Height, 0)
case cstypes.RoundStepNewRound:
cs.enterPropose(ti.Height, 0)
cs.enterPropose(ti.Height, ti.Round)
case cstypes.RoundStepPropose:
if err := cs.eventBus.PublishEventTimeoutPropose(cs.RoundStateEvent()); err != nil {
@@ -945,7 +942,6 @@ func (cs *State) handleTimeout(ti timeoutInfo, rs cstypes.RoundState) {
default:
panic(fmt.Sprintf("invalid timeout step: %v", ti.Step))
}
}
func (cs *State) handleTxsAvailable() {
@@ -978,7 +974,9 @@ func (cs *State) handleTxsAvailable() {
// Used internally by handleTimeout and handleMsg to make state transitions
// Enter: `timeoutNewHeight` by startTime (commitTime+timeoutCommit),
// or, if SkipTimeoutCommit==true, after receiving all precommits from (height,round-1)
//
// or, if SkipTimeoutCommit==true, after receiving all precommits from (height,round-1)
//
// Enter: `timeoutPrecommits` after any +2/3 precommits from (height,round-1)
// Enter: +2/3 precommits for nil at (height,round-1)
// Enter: +2/3 prevotes any or +2/3 precommits for block or any from (height, round)
@@ -1060,7 +1058,9 @@ func (cs *State) needProofBlock(height int64) bool {
// Enter (CreateEmptyBlocks): from enterNewRound(height,round)
// Enter (CreateEmptyBlocks, CreateEmptyBlocksInterval > 0 ):
// after enterNewRound(height,round), after timeout of CreateEmptyBlocksInterval
//
// after enterNewRound(height,round), after timeout of CreateEmptyBlocksInterval
//
// Enter (!CreateEmptyBlocks) : after enterNewRound(height,round), once txs are in the mempool
func (cs *State) enterPropose(height int64, round int32) {
logger := cs.Logger.With("height", height, "round", round)
@@ -1136,8 +1136,18 @@ func (cs *State) defaultDecideProposal(height int64, round int32) {
block, blockParts = cs.ValidBlock, cs.ValidBlockParts
} else {
// Create a new proposal block from state/txs from the mempool.
block, blockParts = cs.createProposalBlock()
if block == nil {
var err error
block, err = cs.createProposalBlock()
if err != nil {
cs.Logger.Error("unable to create proposal block", "error", err)
return
} else if block == nil {
panic("Method createProposalBlock should not provide a nil block without errors")
}
cs.metrics.ProposalCreateCount.Add(1)
blockParts, err = block.MakePartSet(types.BlockPartSizeBytes)
if err != nil {
cs.Logger.Error("unable to create proposal block part set", "error", err)
return
}
}
@@ -1182,7 +1192,6 @@ func (cs *State) isProposalComplete() bool {
}
// if this is false the proposer is lying or we haven't received the POL yet
return cs.Votes.Prevotes(cs.Proposal.POLRound).HasTwoThirdsMajority()
}
// Create the next block to propose and return it. Returns nil block upon error.
@@ -1192,9 +1201,9 @@ func (cs *State) isProposalComplete() bool {
//
// NOTE: keep it side-effect free for clarity.
// CONTRACT: cs.privValidator is not nil.
func (cs *State) createProposalBlock() (block *types.Block, blockParts *types.PartSet) {
func (cs *State) createProposalBlock() (*types.Block, error) {
if cs.privValidator == nil {
panic("entered createProposalBlock with privValidator being nil")
return nil, errors.New("entered createProposalBlock with privValidator being nil")
}
var commit *types.Commit
@@ -1209,20 +1218,22 @@ func (cs *State) createProposalBlock() (block *types.Block, blockParts *types.Pa
commit = cs.LastCommit.MakeCommit()
default: // This shouldn't happen.
cs.Logger.Error("propose step; cannot propose anything without commit for the previous block")
return
return nil, errors.New("propose step; cannot propose anything without commit for the previous block")
}
if cs.privValidatorPubKey == nil {
// If this node is a validator & proposer in the current round, it will
// miss the opportunity to create a block.
cs.Logger.Error("propose step; empty priv validator public key", "err", errPubKeyIsNotSet)
return
return nil, fmt.Errorf("propose step; empty priv validator public key, error: %w", errPubKeyIsNotSet)
}
proposerAddr := cs.privValidatorPubKey.Address()
return cs.blockExec.CreateProposalBlock(cs.Height, cs.state, commit, proposerAddr)
ret, err := cs.blockExec.CreateProposalBlock(cs.Height, cs.state, commit, proposerAddr, cs.LastCommit.GetVotes())
if err != nil {
panic(err)
}
return ret, nil
}
// Enter: `timeoutPropose` after entering Propose.
@@ -1272,11 +1283,37 @@ func (cs *State) defaultDoPrevote(height int64, round int32) {
return
}
// Validate proposal block
// Validate proposal block, from Tendermint's perspective
err := cs.blockExec.ValidateBlock(cs.state, cs.ProposalBlock)
if err != nil {
// ProposalBlock is invalid, prevote nil.
logger.Error("prevote step: ProposalBlock is invalid", "err", err)
logger.Error("prevote step: consensus deems this block invalid; prevoting nil",
"err", err)
cs.signAddVote(tmproto.PrevoteType, nil, types.PartSetHeader{})
return
}
/*
Before prevoting on the block received from the proposer for the current round and height,
we request the Application, via the `ProcessProposal` ABCI call, to confirm that the block is
valid. If the Application does not accept the block, Tendermint prevotes `nil`.
WARNING: misuse of block rejection by the Application can seriously compromise Tendermint's
liveness properties. Please see `PrepareProposal`-`ProcessProposal` coherence and determinism
properties in the ABCI++ specification.
*/
isAppValid, err := cs.blockExec.ProcessProposal(cs.ProposalBlock, cs.state)
if err != nil {
panic(fmt.Sprintf(
"state machine returned an error (%v) when calling ProcessProposal", err,
))
}
cs.metrics.MarkProposalProcessed(isAppValid)
// Vote nil if the Application rejected the block
if !isAppValid {
logger.Error("prevote step: state machine rejected a proposed block; this should not happen: "+
"the proposer may be misbehaving; prevoting nil", "err", err)
cs.signAddVote(tmproto.PrevoteType, nil, types.PartSetHeader{})
return
}
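For context on the ProcessProposal call wired in above, here is a minimal application-side sketch, assuming the v0.37 ABCI Go types; it is not the application used in these tests, and the transaction budget is an arbitrary illustration:

package example

import (
	abci "github.com/tendermint/tendermint/abci/types"
)

type vetoApp struct {
	abci.BaseApplication
}

// ProcessProposal lets the application accept or reject a proposed block
// before this node prevotes on it. A rejection makes consensus prevote nil,
// so the rule must be deterministic across validators (see the warning above).
func (app *vetoApp) ProcessProposal(req abci.RequestProcessProposal) abci.ResponseProcessProposal {
	if len(req.Txs) > 1000 { // arbitrary, illustrative budget
		return abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_REJECT}
	}
	return abci.ResponseProcessProposal{Status: abci.ResponseProcessProposal_ACCEPT}
}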
@@ -1899,7 +1936,7 @@ func (cs *State) addProposalBlockPart(msg *BlockPartMessage, peerID p2p.ID) (add
return added, err
}
var pbb = new(tmproto.Block)
pbb := new(tmproto.Block)
err = proto.Unmarshal(bz, pbb)
if err != nil {
return added, err
@@ -1964,7 +2001,7 @@ func (cs *State) tryAddVote(vote *types.Vote, peerID p2p.ID) (bool, error) {
// If the vote height is off, we'll just ignore it,
// But if it's a conflicting sig, add it to the cs.evpool.
// If it's otherwise invalid, punish peer.
// nolint: gocritic
//nolint: gocritic
if voteErr, ok := err.(*types.ErrVoteConflictingVotes); ok {
if cs.privValidatorPubKey == nil {
return false, errPubKeyIsNotSet
@@ -2015,6 +2052,10 @@ func (cs *State) addVote(vote *types.Vote, peerID p2p.ID) (added bool, err error
"cs_height", cs.Height,
)
if vote.Height < cs.Height || (vote.Height == cs.Height && vote.Round < cs.Round) {
cs.metrics.MarkLateVote(vote.Type)
}
// A precommit for the previous height?
// These come in while we wait timeoutCommit
if vote.Height+1 == cs.Height && vote.Type == tmproto.PrecommitType {
@@ -2059,6 +2100,11 @@ func (cs *State) addVote(vote *types.Vote, peerID p2p.ID) (added bool, err error
// Either duplicate, or error upon cs.Votes.AddByIndex()
return
}
if vote.Round == cs.Round {
vals := cs.state.Validators
_, val := vals.GetByIndex(vote.ValidatorIndex)
cs.metrics.MarkVoteReceived(vote.Type, val.VotingPower, vals.TotalVotingPower())
}
if err := cs.eventBus.PublishEventVote(types.EventDataVote{Vote: vote}); err != nil {
return added, err
@@ -2220,11 +2266,13 @@ func (cs *State) signVote(
func (cs *State) voteTime() time.Time {
now := tmtime.Now()
minVoteTime := now
// Minimum time increment between blocks
const timeIota = time.Millisecond
// TODO: We should remove next line in case we don't vote for v in case cs.ProposalBlock == nil,
// even if cs.LockedBlock != nil. See https://docs.tendermint.com/master/spec/.
timeIota := time.Duration(cs.state.ConsensusParams.Block.TimeIotaMs) * time.Millisecond
// even if cs.LockedBlock != nil. See https://github.com/tendermint/tendermint/tree/v0.37.x/spec/.
if cs.LockedBlock != nil {
// See the BFT time spec https://docs.tendermint.com/master/spec/consensus/bft-time.html
// See the BFT time spec
// https://github.com/tendermint/tendermint/blob/v0.37.x/spec/consensus/bft-time.md
minVoteTime = cs.LockedBlock.Time.Add(timeIota)
} else if cs.ProposalBlock != nil {
minVoteTime = cs.ProposalBlock.Time.Add(timeIota)
@@ -2323,12 +2371,12 @@ func (cs *State) calculatePrevoteMessageDelayMetrics() {
_, val := cs.Validators.GetByAddress(v.ValidatorAddress)
votingPowerSeen += val.VotingPower
if votingPowerSeen >= cs.Validators.TotalVotingPower()*2/3+1 {
cs.metrics.QuorumPrevoteMessageDelay.Set(v.Timestamp.Sub(cs.Proposal.Timestamp).Seconds())
cs.metrics.QuorumPrevoteDelay.With("proposer_address", cs.Validators.GetProposer().Address.String()).Set(v.Timestamp.Sub(cs.Proposal.Timestamp).Seconds())
break
}
}
if ps.HasAll() {
cs.metrics.FullPrevoteMessageDelay.Set(pl[len(pl)-1].Timestamp.Sub(cs.Proposal.Timestamp).Seconds())
cs.metrics.FullPrevoteDelay.With("proposer_address", cs.Validators.GetProposer().Address.String()).Set(pl[len(pl)-1].Timestamp.Sub(cs.Proposal.Timestamp).Seconds())
}
}
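The loop above records QuorumPrevoteDelay at the first prevote that pushes the observed power past TotalVotingPower()*2/3+1 (integer arithmetic), and FullPrevoteDelay only once prevotes from the full validator set have been seen (ps.HasAll()). A one-line worked example of the cut-off:

package example

// quorumThreshold mirrors the cut-off used above: the smallest running sum of
// prevote power at which QuorumPrevoteDelay is recorded.
func quorumThreshold(totalVotingPower int64) int64 {
	return totalVotingPower*2/3 + 1
}

// quorumThreshold(10) == 7: with total power 10, the prevote that brings the
// running sum to 7 or more fixes the quorum delay for the round.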

View File

@@ -8,11 +8,15 @@ import (
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/abci/example/kvstore"
abci "github.com/tendermint/tendermint/abci/types"
abcimocks "github.com/tendermint/tendermint/abci/types/mocks"
cstypes "github.com/tendermint/tendermint/consensus/types"
"github.com/tendermint/tendermint/crypto/tmhash"
tmbytes "github.com/tendermint/tendermint/libs/bytes"
"github.com/tendermint/tendermint/libs/log"
tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
tmrand "github.com/tendermint/tendermint/libs/rand"
@@ -191,7 +195,8 @@ func TestStateBadProposal(t *testing.T) {
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
voteCh := subscribe(cs1.eventBus, types.EventQueryVote)
propBlock, _ := cs1.createProposalBlock() // changeProposer(t, cs1, vs2)
propBlock, err := cs1.createProposalBlock() // changeProposer(t, cs1, vs2)
require.NoError(t, err)
// make the second validator the proposer by incrementing round
round++
@@ -204,7 +209,8 @@ func TestStateBadProposal(t *testing.T) {
}
stateHash[0] = (stateHash[0] + 1) % 255
propBlock.AppHash = stateHash
propBlockParts := propBlock.MakePartSet(partSize)
propBlockParts, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
blockID := types.BlockID{Hash: propBlock.Hash(), PartSetHeader: propBlockParts.Header()}
proposal := types.NewProposal(vs2.Height, round, -1, blockID)
p := proposal.ToProto()
@@ -230,13 +236,19 @@ func TestStateBadProposal(t *testing.T) {
validatePrevote(t, cs1, round, vss[0], nil)
// add bad prevote from vs2 and wait for it
signAddVotes(cs1, tmproto.PrevoteType, propBlock.Hash(), propBlock.MakePartSet(partSize).Header(), vs2)
bps, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
signAddVotes(cs1, tmproto.PrevoteType, propBlock.Hash(), bps.Header(), vs2)
ensurePrevote(voteCh, height, round)
// wait for precommit
ensurePrecommit(voteCh, height, round)
validatePrecommit(t, cs1, round, -1, vss[0], nil, nil)
signAddVotes(cs1, tmproto.PrecommitType, propBlock.Hash(), propBlock.MakePartSet(partSize).Header(), vs2)
bps2, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
signAddVotes(cs1, tmproto.PrecommitType, propBlock.Hash(), bps2.Header(), vs2)
}
func TestStateOversizedBlock(t *testing.T) {
@@ -250,7 +262,8 @@ func TestStateOversizedBlock(t *testing.T) {
timeoutProposeCh := subscribe(cs1.eventBus, types.EventQueryTimeoutPropose)
voteCh := subscribe(cs1.eventBus, types.EventQueryVote)
propBlock, _ := cs1.createProposalBlock()
propBlock, err := cs1.createProposalBlock()
require.NoError(t, err)
propBlock.Data.Txs = []types.Tx{tmrand.Bytes(2001)}
propBlock.Header.DataHash = propBlock.Data.Hash()
@@ -258,7 +271,8 @@ func TestStateOversizedBlock(t *testing.T) {
round++
incrementRound(vss[1:]...)
propBlockParts := propBlock.MakePartSet(partSize)
propBlockParts, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
blockID := types.BlockID{Hash: propBlock.Hash(), PartSetHeader: propBlockParts.Header()}
proposal := types.NewProposal(height, round, -1, blockID)
p := proposal.ToProto()
@@ -290,11 +304,18 @@ func TestStateOversizedBlock(t *testing.T) {
// precommit on it
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], nil)
signAddVotes(cs1, tmproto.PrevoteType, propBlock.Hash(), propBlock.MakePartSet(partSize).Header(), vs2)
bps, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
signAddVotes(cs1, tmproto.PrevoteType, propBlock.Hash(), bps.Header(), vs2)
ensurePrevote(voteCh, height, round)
ensurePrecommit(voteCh, height, round)
validatePrecommit(t, cs1, round, -1, vss[0], nil, nil)
signAddVotes(cs1, tmproto.PrecommitType, propBlock.Hash(), propBlock.MakePartSet(partSize).Header(), vs2)
bps2, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
signAddVotes(cs1, tmproto.PrecommitType, propBlock.Hash(), bps2.Header(), vs2)
}
//----------------------------------------------------------------------------------------------------
@@ -466,9 +487,7 @@ func TestStateLockNoPOL(t *testing.T) {
rs := cs1.GetRoundState()
if rs.ProposalBlock != nil {
panic("Expected proposal block to be nil")
}
require.Nil(t, rs.ProposalBlock, "Expected proposal block to be nil")
// wait to finish prevote
ensurePrevote(voteCh, height, round)
@@ -476,7 +495,10 @@ func TestStateLockNoPOL(t *testing.T) {
validatePrevote(t, cs1, round, vss[0], rs.LockedBlock.Hash())
// add a conflicting prevote from the other validator
signAddVotes(cs1, tmproto.PrevoteType, hash, rs.LockedBlock.MakePartSet(partSize).Header(), vs2)
bps, err := rs.LockedBlock.MakePartSet(partSize)
require.NoError(t, err)
signAddVotes(cs1, tmproto.PrevoteType, hash, bps.Header(), vs2)
ensurePrevote(voteCh, height, round)
// now we're going to enter prevote again, but with invalid args
@@ -489,7 +511,9 @@ func TestStateLockNoPOL(t *testing.T) {
validatePrecommit(t, cs1, round, 0, vss[0], nil, theBlockHash)
// add conflicting precommit from vs2
signAddVotes(cs1, tmproto.PrecommitType, hash, rs.LockedBlock.MakePartSet(partSize).Header(), vs2)
bps2, err := rs.LockedBlock.MakePartSet(partSize)
require.NoError(t, err)
signAddVotes(cs1, tmproto.PrecommitType, hash, bps2.Header(), vs2)
ensurePrecommit(voteCh, height, round)
// (note we're entering precommit for a second time this round, but with invalid args
@@ -519,7 +543,9 @@ func TestStateLockNoPOL(t *testing.T) {
ensurePrevote(voteCh, height, round) // prevote
validatePrevote(t, cs1, round, vss[0], rs.LockedBlock.Hash())
signAddVotes(cs1, tmproto.PrevoteType, hash, rs.ProposalBlock.MakePartSet(partSize).Header(), vs2)
bps0, err := rs.ProposalBlock.MakePartSet(partSize)
require.NoError(t, err)
signAddVotes(cs1, tmproto.PrevoteType, hash, bps0.Header(), vs2)
ensurePrevote(voteCh, height, round)
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Prevote(round).Nanoseconds())
@@ -527,11 +553,13 @@ func TestStateLockNoPOL(t *testing.T) {
validatePrecommit(t, cs1, round, 0, vss[0], nil, theBlockHash) // precommit nil but be locked on proposal
bps1, err := rs.ProposalBlock.MakePartSet(partSize)
require.NoError(t, err)
signAddVotes(
cs1,
tmproto.PrecommitType,
hash,
rs.ProposalBlock.MakePartSet(partSize).Header(),
bps1.Header(),
vs2) // NOTE: conflicting precommits at same height
ensurePrecommit(voteCh, height, round)
@@ -539,7 +567,7 @@ func TestStateLockNoPOL(t *testing.T) {
cs2, _ := randState(2) // needed so generated block is different than locked block
// before we time out into new round, set next proposal block
prop, propBlock := decideProposal(cs2, vs2, vs2.Height, vs2.Round+1)
prop, propBlock := decideProposal(t, cs2, vs2, vs2.Height, vs2.Round+1)
if prop == nil || propBlock == nil {
t.Fatal("Failed to create proposal block with vs2")
}
@@ -555,7 +583,9 @@ func TestStateLockNoPOL(t *testing.T) {
// now we're on a new round and not the proposer
// so set the proposal block
if err := cs1.SetProposalAndBlock(prop, propBlock, propBlock.MakePartSet(partSize), ""); err != nil {
bps3, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
if err := cs1.SetProposalAndBlock(prop, propBlock, bps3, ""); err != nil {
t.Fatal(err)
}
@@ -565,18 +595,23 @@ func TestStateLockNoPOL(t *testing.T) {
validatePrevote(t, cs1, 3, vss[0], cs1.LockedBlock.Hash())
// prevote for proposed block
signAddVotes(cs1, tmproto.PrevoteType, propBlock.Hash(), propBlock.MakePartSet(partSize).Header(), vs2)
bps4, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
signAddVotes(cs1, tmproto.PrevoteType, propBlock.Hash(), bps4.Header(), vs2)
ensurePrevote(voteCh, height, round)
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Prevote(round).Nanoseconds())
ensurePrecommit(voteCh, height, round)
validatePrecommit(t, cs1, round, 0, vss[0], nil, theBlockHash) // precommit nil but locked on proposal
bps5, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
signAddVotes(
cs1,
tmproto.PrecommitType,
propBlock.Hash(),
propBlock.MakePartSet(partSize).Header(),
bps5.Header(),
vs2) // NOTE: conflicting precommits at same height
ensurePrecommit(voteCh, height, round)
}
@@ -631,11 +666,13 @@ func TestStateLockPOLRelock(t *testing.T) {
// before we timeout to the new round set the new proposal
cs2 := newState(cs1.state, vs2, kvstore.NewApplication())
prop, propBlock := decideProposal(cs2, vs2, vs2.Height, vs2.Round+1)
prop, propBlock := decideProposal(t, cs2, vs2, vs2.Height, vs2.Round+1)
if prop == nil || propBlock == nil {
t.Fatal("Failed to create proposal block with vs2")
}
propBlockParts := propBlock.MakePartSet(partSize)
propBlockParts, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
propBlockHash := propBlock.Hash()
require.NotEqual(t, propBlockHash, theBlockHash)
@@ -728,8 +765,9 @@ func TestStateLockPOLUnlock(t *testing.T) {
signAddVotes(cs1, tmproto.PrecommitType, theBlockHash, theBlockParts, vs3)
// before we time out into new round, set next proposal block
prop, propBlock := decideProposal(cs1, vs2, vs2.Height, vs2.Round+1)
propBlockParts := propBlock.MakePartSet(partSize)
prop, propBlock := decideProposal(t, cs1, vs2, vs2.Height, vs2.Round+1)
propBlockParts, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
// timeout to new round
ensureNewTimeout(timeoutWaitCh, height, round, cs1.config.Precommit(round).Nanoseconds())
@@ -816,11 +854,13 @@ func TestStateLockPOLUnlockOnUnknownBlock(t *testing.T) {
// before we timeout to the new round set the new proposal
cs2 := newState(cs1.state, vs2, kvstore.NewApplication())
prop, propBlock := decideProposal(cs2, vs2, vs2.Height, vs2.Round+1)
prop, propBlock := decideProposal(t, cs2, vs2, vs2.Height, vs2.Round+1)
if prop == nil || propBlock == nil {
t.Fatal("Failed to create proposal block with vs2")
}
secondBlockParts := propBlock.MakePartSet(partSize)
secondBlockParts, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
secondBlockHash := propBlock.Hash()
require.NotEqual(t, secondBlockHash, firstBlockHash)
@@ -860,11 +900,12 @@ func TestStateLockPOLUnlockOnUnknownBlock(t *testing.T) {
// before we timeout to the new round set the new proposal
cs3 := newState(cs1.state, vs3, kvstore.NewApplication())
prop, propBlock = decideProposal(cs3, vs3, vs3.Height, vs3.Round+1)
prop, propBlock = decideProposal(t, cs3, vs3, vs3.Height, vs3.Round+1)
if prop == nil || propBlock == nil {
t.Fatal("Failed to create proposal block with vs2")
}
thirdPropBlockParts := propBlock.MakePartSet(partSize)
thirdPropBlockParts, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
thirdPropBlockHash := propBlock.Hash()
require.NotEqual(t, secondBlockHash, thirdPropBlockHash)
@@ -928,7 +969,10 @@ func TestStateLockPOLSafety1(t *testing.T) {
validatePrevote(t, cs1, round, vss[0], propBlock.Hash())
// the others sign a polka but we don't see it
prevotes := signVotes(tmproto.PrevoteType, propBlock.Hash(), propBlock.MakePartSet(partSize).Header(), vs2, vs3, vs4)
bps, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
prevotes := signVotes(tmproto.PrevoteType, propBlock.Hash(), bps.Header(), vs2, vs3, vs4)
t.Logf("old prop hash %v", fmt.Sprintf("%X", propBlock.Hash()))
@@ -941,9 +985,10 @@ func TestStateLockPOLSafety1(t *testing.T) {
t.Log("### ONTO ROUND 1")
prop, propBlock := decideProposal(cs1, vs2, vs2.Height, vs2.Round+1)
prop, propBlock := decideProposal(t, cs1, vs2, vs2.Height, vs2.Round+1)
propBlockHash := propBlock.Hash()
propBlockParts := propBlock.MakePartSet(partSize)
propBlockParts, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
incrementRound(vs2, vs3, vs4)
@@ -1037,18 +1082,20 @@ func TestStateLockPOLSafety2(t *testing.T) {
// the block for R0: gets polkad but we miss it
// (even though we signed it, shhh)
_, propBlock0 := decideProposal(cs1, vss[0], height, round)
_, propBlock0 := decideProposal(t, cs1, vss[0], height, round)
propBlockHash0 := propBlock0.Hash()
propBlockParts0 := propBlock0.MakePartSet(partSize)
propBlockParts0, err := propBlock0.MakePartSet(partSize)
require.NoError(t, err)
propBlockID0 := types.BlockID{Hash: propBlockHash0, PartSetHeader: propBlockParts0.Header()}
// the others sign a polka but we don't see it
prevotes := signVotes(tmproto.PrevoteType, propBlockHash0, propBlockParts0.Header(), vs2, vs3, vs4)
// the block for round 1
prop1, propBlock1 := decideProposal(cs1, vs2, vs2.Height, vs2.Round+1)
prop1, propBlock1 := decideProposal(t, cs1, vs2, vs2.Height, vs2.Round+1)
propBlockHash1 := propBlock1.Hash()
propBlockParts1 := propBlock1.MakePartSet(partSize)
propBlockParts1, err := propBlock1.MakePartSet(partSize)
require.NoError(t, err)
incrementRound(vs2, vs3, vs4)
@@ -1146,7 +1193,9 @@ func TestProposeValidBlock(t *testing.T) {
validatePrevote(t, cs1, round, vss[0], propBlockHash)
// the others sign a polka
signAddVotes(cs1, tmproto.PrevoteType, propBlockHash, propBlock.MakePartSet(partSize).Header(), vs2, vs3, vs4)
bps, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
signAddVotes(cs1, tmproto.PrevoteType, propBlockHash, bps.Header(), vs2, vs3, vs4)
ensurePrecommit(voteCh, height, round)
// we should have precommitted
@@ -1230,7 +1279,8 @@ func TestSetValidBlockOnDelayedPrevote(t *testing.T) {
rs := cs1.GetRoundState()
propBlock := rs.ProposalBlock
propBlockHash := propBlock.Hash()
propBlockParts := propBlock.MakePartSet(partSize)
propBlockParts, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], propBlockHash)
@@ -1296,9 +1346,10 @@ func TestSetValidBlockOnDelayedProposal(t *testing.T) {
ensurePrevote(voteCh, height, round)
validatePrevote(t, cs1, round, vss[0], nil)
prop, propBlock := decideProposal(cs1, vs2, vs2.Height, vs2.Round+1)
prop, propBlock := decideProposal(t, cs1, vs2, vs2.Height, vs2.Round+1)
propBlockHash := propBlock.Hash()
propBlockParts := propBlock.MakePartSet(partSize)
propBlockParts, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
// vs2, vs3 and vs4 send prevote for propBlock
signAddVotes(cs1, tmproto.PrevoteType, propBlockHash, propBlockParts.Header(), vs2, vs3, vs4)
@@ -1321,6 +1372,55 @@ func TestSetValidBlockOnDelayedProposal(t *testing.T) {
assert.True(t, rs.ValidRound == round)
}
func TestProcessProposalAccept(t *testing.T) {
for _, testCase := range []struct {
name string
accept bool
expectedNilPrevote bool
}{
{
name: "accepted block is prevoted",
accept: true,
expectedNilPrevote: false,
},
{
name: "rejected block is not prevoted",
accept: false,
expectedNilPrevote: true,
},
} {
t.Run(testCase.name, func(t *testing.T) {
m := abcimocks.NewApplication(t)
status := abci.ResponseProcessProposal_REJECT
if testCase.accept {
status = abci.ResponseProcessProposal_ACCEPT
}
m.On("ProcessProposal", mock.Anything).Return(abci.ResponseProcessProposal{Status: status})
m.On("PrepareProposal", mock.Anything).Return(abci.ResponsePrepareProposal{}).Maybe()
cs1, _ := randStateWithApp(4, m)
height, round := cs1.Height, cs1.Round
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
pv1, err := cs1.privValidator.GetPubKey()
require.NoError(t, err)
addr := pv1.Address()
voteCh := subscribeToVoter(cs1, addr)
startTestRound(cs1, cs1.Height, round)
ensureNewRound(newRoundCh, height, round)
ensureNewProposal(proposalCh, height, round)
rs := cs1.GetRoundState()
var prevoteHash tmbytes.HexBytes
if !testCase.expectedNilPrevote {
prevoteHash = rs.ProposalBlock.Hash()
}
ensurePrevoteMatch(t, voteCh, height, round, prevoteHash)
})
}
}
// 4 vals, 3 Nil Precommits at P0
// What we want:
// P0 waits for timeoutPrecommit before starting next round
@@ -1456,9 +1556,10 @@ func TestEmitNewValidBlockEventOnCommitWithoutBlock(t *testing.T) {
newRoundCh := subscribe(cs1.eventBus, types.EventQueryNewRound)
validBlockCh := subscribe(cs1.eventBus, types.EventQueryValidBlock)
_, propBlock := decideProposal(cs1, vs2, vs2.Height, vs2.Round)
_, propBlock := decideProposal(t, cs1, vs2, vs2.Height, vs2.Round)
propBlockHash := propBlock.Hash()
propBlockParts := propBlock.MakePartSet(partSize)
propBlockParts, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
// start round in which PO is not proposer
startTestRound(cs1, height, round)
@@ -1489,9 +1590,10 @@ func TestCommitFromPreviousRound(t *testing.T) {
validBlockCh := subscribe(cs1.eventBus, types.EventQueryValidBlock)
proposalCh := subscribe(cs1.eventBus, types.EventQueryCompleteProposal)
prop, propBlock := decideProposal(cs1, vs2, vs2.Height, vs2.Round)
prop, propBlock := decideProposal(t, cs1, vs2, vs2.Height, vs2.Round)
propBlockHash := propBlock.Hash()
propBlockParts := propBlock.MakePartSet(partSize)
propBlockParts, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
// start round in which PO is not proposer
startTestRound(cs1, height, round)
@@ -1634,8 +1736,9 @@ func TestResetTimeoutPrecommitUponNewHeight(t *testing.T) {
ensureNewBlockHeader(newBlockHeader, height, theBlockHash)
prop, propBlock := decideProposal(cs1, vs2, height+1, 0)
propBlockParts := propBlock.MakePartSet(partSize)
prop, propBlock := decideProposal(t, cs1, vs2, height+1, 0)
propBlockParts, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
if err := cs1.SetProposalAndBlock(prop, propBlock, propBlockParts, "some peer"); err != nil {
t.Fatal(err)
@@ -1756,7 +1859,8 @@ func TestStateHalt1(t *testing.T) {
ensureNewProposal(proposalCh, height, round)
rs := cs1.GetRoundState()
propBlock := rs.ProposalBlock
propBlockParts := propBlock.MakePartSet(partSize)
propBlockParts, err := propBlock.MakePartSet(partSize)
require.NoError(t, err)
ensurePrevote(voteCh, height, round)


@@ -80,6 +80,15 @@ type RoundState struct {
LockedBlock *types.Block `json:"locked_block"`
LockedBlockParts *types.PartSet `json:"locked_block_parts"`
// The variables below starting with "Valid..." derive their name from
// the algorithm presented in this paper:
// [The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938).
// Therefore, "Valid...":
// * means that the block or round that the variable refers to has
// received 2/3+ non-`nil` prevotes (a.k.a. a *polka*)
// * has nothing to do with whether the Application returned "Accept" in its
// response to `ProcessProposal`, or "Reject"
// Last known round with POL for non-nil valid block.
ValidRound int32 `json:"valid_round"`
ValidBlock *types.Block `json:"valid_block"` // Last known block of POL mentioned above.
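The comment added above ties `ValidRound`/`ValidBlock` to the notion of a polka: 2/3+ non-nil prevotes for one block, independent of whether the Application accepted or rejected the proposal in `ProcessProposal`. A small, self-contained sketch of that threshold check (illustrative only; the types and function below are hypothetical, not consensus-package code):

```go
package main

import "fmt"

// prevote is a simplified, hypothetical prevote: which block hash (empty for
// nil) a validator with the given voting power prevoted for.
type prevote struct {
	blockHash string
	power     int64
}

// hasPolka reports whether prevotes for a specific non-nil block exceed 2/3 of
// the total voting power, i.e. the condition under which ValidRound/ValidBlock
// would be updated per the comment above.
func hasPolka(votes []prevote, blockHash string, totalPower int64) bool {
	if blockHash == "" {
		return false // a polka here means 2/3+ *non-nil* prevotes for one block
	}
	var sum int64
	for _, v := range votes {
		if v.blockHash == blockHash {
			sum += v.power
		}
	}
	return sum*3 > totalPower*2 // strictly more than two thirds
}

func main() {
	votes := []prevote{{"abcd", 10}, {"abcd", 10}, {"abcd", 10}, {"", 10}}
	fmt.Println(hasPolka(votes, "abcd", 40)) // true: 30 of 40 > 2/3
}
```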
@@ -186,8 +195,8 @@ func (rs *RoundState) StringIndented(indent string) string {
%s ProposalBlock: %v %v
%s LockedRound: %v
%s LockedBlock: %v %v
%s ValidRound: %v
%s ValidBlock: %v %v
%s ValidRound: %v
%s ValidBlock: %v %v
%s Votes: %v
%s LastCommit: %v
%s LastValidators:%v


@@ -47,7 +47,9 @@ func WALGenerateNBlocks(t *testing.T, wr io.Writer, numBlocks int) (err error) {
}
blockStoreDB := db.NewMemDB()
stateDB := blockStoreDB
stateStore := sm.NewStore(stateDB)
stateStore := sm.NewStore(stateDB, sm.StoreOptions{
DiscardABCIResponses: false,
})
state, err := sm.MakeGenesisState(genDoc)
if err != nil {
return fmt.Errorf("failed to make genesis state: %w", err)
@@ -59,7 +61,7 @@ func WALGenerateNBlocks(t *testing.T, wr io.Writer, numBlocks int) (err error) {
blockStore := store.NewBlockStore(blockStoreDB)
proxyApp := proxy.NewAppConns(proxy.NewLocalClientCreator(app))
proxyApp := proxy.NewAppConns(proxy.NewLocalClientCreator(app), proxy.NopMetrics())
proxyApp.SetLogger(logger.With("module", "proxy"))
if err := proxyApp.Start(); err != nil {
return fmt.Errorf("failed to start proxy app connections: %w", err)


@@ -3,7 +3,6 @@ package consensus
import (
"bytes"
"crypto/rand"
"io/ioutil"
"os"
"path/filepath"
@@ -27,7 +26,7 @@ const (
)
func TestWALTruncate(t *testing.T) {
walDir, err := ioutil.TempDir("", "wal")
walDir, err := os.MkdirTemp("", "wal")
require.NoError(t, err)
defer os.RemoveAll(walDir)
@@ -109,7 +108,7 @@ func TestWALEncoderDecoder(t *testing.T) {
}
func TestWALWrite(t *testing.T) {
walDir, err := ioutil.TempDir("", "wal")
walDir, err := os.MkdirTemp("", "wal")
require.NoError(t, err)
defer os.RemoveAll(walDir)
walFile := filepath.Join(walDir, "wal")
@@ -177,7 +176,7 @@ func TestWALSearchForEndHeight(t *testing.T) {
}
func TestWALPeriodicSync(t *testing.T) {
walDir, err := ioutil.TempDir("", "wal")
walDir, err := os.MkdirTemp("", "wal")
require.NoError(t, err)
defer os.RemoveAll(walDir)


@@ -12,7 +12,7 @@ For any specific algorithm, use its specific module e.g.
## Binary encoding
For Binary encoding, please refer to the [Tendermint encoding specification](https://docs.tendermint.com/master/spec/blockchain/encoding.html).
For Binary encoding, please refer to the [Tendermint encoding specification](https://github.com/tendermint/tendermint/blob/v0.37.x/spec/core/encoding.md).
## JSON Encoding


@@ -3,9 +3,9 @@ package armor
import (
"bytes"
"fmt"
"io/ioutil"
"io"
"golang.org/x/crypto/openpgp/armor" // nolint: staticcheck
"golang.org/x/crypto/openpgp/armor" //nolint: staticcheck
)
func EncodeArmor(blockType string, headers map[string]string, data []byte) string {
@@ -31,7 +31,7 @@ func DecodeArmor(armorStr string) (blockType string, headers map[string]string,
if err != nil {
return "", nil, nil, err
}
data, err = ioutil.ReadAll(block.Body)
data, err = io.ReadAll(block.Body)
if err != nil {
return "", nil, nil, err
}


@@ -12,20 +12,19 @@ second pre-image attacks. Hence, use this library with caution.
Otherwise you might run into similar issues as, e.g., in early Bitcoin:
https://bitcointalk.org/?topic=102395
                  *
                 / \
               /     \
             /         \
           /             \
          *               *
         / \             / \
        /   \           /   \
       /     \         /     \
      *       *       *       h6
     / \     / \     / \
    h0  h1  h2  h3  h4  h5
TODO(ismail): add 2nd pre-image protection or clarify further on how we use this and why this secure.
*/
package merkle
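The caution above concerns second pre-image attacks on naive Merkle trees, where leaves and inner nodes are hashed the same way. The usual mitigation, as in RFC 6962, is domain separation between leaf and inner hashes; here is a hedged sketch of that idea, not this package's actual code:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// RFC 6962-style domain separation: leaves and inner nodes are hashed with
// distinct prefixes, so an inner node can never be reinterpreted as a leaf.
var (
	leafPrefix  = []byte{0x00}
	innerPrefix = []byte{0x01}
)

func leafHash(data []byte) []byte {
	h := sha256.Sum256(append(leafPrefix, data...))
	return h[:]
}

func innerHash(left, right []byte) []byte {
	buf := make([]byte, 0, 1+len(left)+len(right))
	buf = append(buf, innerPrefix...)
	buf = append(buf, left...)
	buf = append(buf, right...)
	h := sha256.Sum256(buf)
	return h[:]
}

func main() {
	// Hash two of the leaves from the diagram, then combine them pairwise.
	h0, h1 := leafHash([]byte("h0")), leafHash([]byte("h1"))
	fmt.Printf("inner(h0,h1) = %x\n", innerHash(h0, h1))
}
```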


@@ -85,8 +85,8 @@ func (op ValueOp) Run(args [][]byte) ([][]byte, error) {
bz := new(bytes.Buffer)
// Wrap <op.Key, vhash> to hash the KVPair.
encodeByteSlice(bz, op.key) // nolint: errcheck // does not error
encodeByteSlice(bz, vhash) // nolint: errcheck // does not error
encodeByteSlice(bz, op.key) //nolint: errcheck // does not error
encodeByteSlice(bz, vhash) //nolint: errcheck // does not error
kvhash := leafHash(bz.Bytes())
if !bytes.Equal(kvhash, op.Proof.LeafHash) {


@@ -47,10 +47,10 @@ func HashFromByteSlices(items [][]byte) []byte {
//
// These preliminary results suggest:
//
// 1. The performance of the HashFromByteSlice is pretty good
// 2. Go has low overhead for recursive functions
// 3. The performance of the HashFromByteSlice routine is dominated
// by the actual hashing of data
// 1. The performance of the HashFromByteSlice is pretty good
// 2. Go has low overhead for recursive functions
// 3. The performance of the HashFromByteSlice routine is dominated
// by the actual hashing of data
//
// Although this work is in no way exhaustive, point #3 suggests that
// optimization of this routine would need to take an alternative


@@ -9,13 +9,13 @@ import (
"math/big"
secp256k1 "github.com/btcsuite/btcd/btcec"
"golang.org/x/crypto/ripemd160" // nolint: staticcheck // necessary for Bitcoin address format
"golang.org/x/crypto/ripemd160" //nolint: staticcheck // necessary for Bitcoin address format
"github.com/tendermint/tendermint/crypto"
tmjson "github.com/tendermint/tendermint/libs/json"
)
//-------------------------------------
// -------------------------------------
const (
PrivKeyName = "tendermint/PrivKeySecp256k1"
PubKeyName = "tendermint/PubKeySecp256k1"
@@ -124,8 +124,8 @@ func GenPrivKeySecp256k1(secret []byte) PrivKey {
// used to reject malleable signatures
// see:
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/crypto.go#L39
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/crypto.go#L39
var secp256k1halfN = new(big.Int).Rsh(secp256k1.S256().N, 1)
// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
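The comment and `secp256k1halfN` above are about rejecting malleable ECDSA signatures: any signature with S greater than N/2 has an equivalent counterpart N-S, so only the low-S form is accepted. A tiny illustrative sketch of that check, using a made-up small "curve order" rather than the real secp256k1 N:

```go
package main

import (
	"fmt"
	"math/big"
)

// isLowS reports whether s lies in the lower half of the curve order n.
// Rejecting (or normalizing) high-S values is the standard defence against
// ECDSA signature malleability mentioned in the comment above.
func isLowS(s, n *big.Int) bool {
	halfN := new(big.Int).Rsh(n, 1) // n / 2, same construction as secp256k1halfN
	return s.Cmp(halfN) <= 0
}

func main() {
	// Hypothetical tiny "curve order" purely for demonstration.
	n := big.NewInt(101)
	fmt.Println(isLowS(big.NewInt(40), n)) // true:  40 <= 50
	fmt.Println(isLowS(big.NewInt(60), n)) // false: 60 >  50
}
```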


@@ -1,14 +1,6 @@
module.exports = {
theme: 'cosmos',
title: 'Tendermint Core',
// locales: {
// "/": {
// lang: "en-US"
// },
// "/ru/": {
// lang: "ru"
// }
// },
base: process.env.VUEPRESS_BASE,
themeConfig: {
repo: 'tendermint/tendermint',
@@ -23,16 +15,12 @@ module.exports = {
},
versions: [
{
"label": "v0.33",
"key": "v0.33"
},
{
"label": "v0.34",
"label": "v0.34 (latest)",
"key": "v0.34"
},
{
"label": "v0.35",
"key": "v0.35"
"label": "v0.33",
"key": "v0.33"
}
],
topbar: {
@@ -45,10 +33,8 @@ module.exports = {
title: 'Resources',
children: [
{
// TODO(creachadair): Figure out how to make this per-branch.
// See: https://github.com/tendermint/tendermint/issues/7908
title: 'RPC',
path: 'https://docs.tendermint.com/v0.35/rpc/',
path: (process.env.VUEPRESS_BASE ? process.env.VUEPRESS_BASE : '/')+'rpc/',
static: true
},
]
@@ -59,9 +45,9 @@ module.exports = {
title: 'Help & Support',
editLink: true,
forum: {
title: 'Tendermint Forum',
text: 'Join the Tendermint forum to learn more',
url: 'https://forum.cosmos.network/c/tendermint',
title: 'Tendermint Discussions',
text: 'Join the Tendermint discussions to learn more',
url: 'https://github.com/tendermint/tendermint/discussions',
bg: '#0B7E0B',
logo: 'tendermint'
},
@@ -72,7 +58,7 @@ module.exports = {
},
footer: {
question: {
text: 'Chat with Tendermint developers in <a href=\'https://discord.gg/vcExX9T\' target=\'_blank\'>Discord</a> or reach out on the <a href=\'https://forum.cosmos.network/c/tendermint\' target=\'_blank\'>Tendermint Forum</a> to learn more.'
text: 'Chat with Tendermint developers in <a href=\'https://discord.gg/vcExX9T\' target=\'_blank\'>Discord</a> or reach out on <a href=\'https://github.com/tendermint/tendermint/discussions\' target=\'_blank\'>GitHub</a> to learn more.'
},
logo: '/logo-bw.svg',
textLink: {
@@ -129,8 +115,8 @@ module.exports = {
url: 'https://medium.com/@tendermint'
},
{
title: 'Forum',
url: 'https://forum.cosmos.network/c/tendermint'
title: 'GitHub Discussions',
url: 'https://github.com/tendermint/tendermint/discussions'
}
]
},


@@ -1 +1,65 @@
/master/ /v0.35/
/redirects/master/ /main/
/redirects/master/spec/core/state.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/core/state.md
/redirects/master/spec/core/encoding.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/core/encoding.md
/redirects/master/spec/core/genesis.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/core/genesis.md
/redirects/master/spec/core/data_structures.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/core/data_structures.md
/redirects/master/spec/core/index.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/core/readme.md
/redirects/master/spec/p2p/messages/pex.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/p2p/messages/pex.md
/redirects/master/spec/p2p/messages/mempool.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/p2p/messages/mempool.md
/redirects/master/spec/p2p/messages/index.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/p2p/messages/README.md
/redirects/master/spec/p2p/messages/block-sync.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/p2p/messages/block-sync.md
/redirects/master/spec/p2p/messages/state-sync.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/p2p/messages/state-sync.md
/redirects/master/spec/p2p/messages/consensus.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/p2p/messages/consensus.md
/redirects/master/spec/p2p/messages/evidence.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/p2p/messages/evidence.md
/redirects/master/spec/p2p/peer.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/p2p/peer.md
/redirects/master/spec/p2p/connection.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/p2p/connection.md
/redirects/master/spec/p2p/config.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/p2p/config.md
/redirects/master/spec/p2p/node.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/p2p/node.md
/redirects/master/spec/p2p/index.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/p2p/readme.md
/redirects/master/spec/index.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/README.md
/redirects/master/spec/ivy-proofs/index.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/ivy-proofs/README.md
/redirects/master/spec/consensus/proposer-selection.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/proposer-selection.md
/redirects/master/spec/consensus/creating-proposal.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/creating-proposal.md
/redirects/master/spec/consensus/proposer-based-timestamp/pbts-sysmodel_001_draft.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/proposer-based-timestamp/pbts-sysmodel_001_draft.md
/redirects/master/spec/consensus/proposer-based-timestamp/index.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/proposer-based-timestamp/README.md
/redirects/master/spec/consensus/proposer-based-timestamp/pbts-algorithm_001_draft.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/proposer-based-timestamp/pbts-algorithm_001_draft.md
/redirects/master/spec/consensus/proposer-based-timestamp/pbts_001_draft.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/proposer-based-timestamp/pbts_001_draft.md
/redirects/master/spec/consensus/light-client/index.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/light-client/README.md
/redirects/master/spec/consensus/light-client/accountability.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/light-client/accountability.md
/redirects/master/spec/consensus/light-client/detection.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/light-client/detection.md
/redirects/master/spec/consensus/light-client/verification.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/light-client/verification.md
/redirects/master/spec/consensus/consensus-paper/index.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/consensus-paper/README.md
/redirects/master/spec/consensus/signing.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/signing.md
/redirects/master/spec/consensus/consensus.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/consensus.md
/redirects/master/spec/consensus/evidence.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/evidence.md
/redirects/master/spec/consensus/bft-time.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/bft-time.md
/redirects/master/spec/consensus/wal.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/wal.md
/redirects/master/spec/consensus/index.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/consensus/readme.md
/redirects/master/spec/light-client/index.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/README.md
/redirects/master/spec/light-client/detection/index.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/detection/README.md
/redirects/master/spec/light-client/detection/req-ibc-detection.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/detection/req-ibc-detection.md
/redirects/master/spec/light-client/detection/detection_001_reviewed.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/detection/detection_001_reviewed.md
/redirects/master/spec/light-client/detection/draft-functions.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/detection/draft-functions.md
/redirects/master/spec/light-client/detection/discussions.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/detection/discussions.md
/redirects/master/spec/light-client/detection/detection_003_reviewed.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/detection/detection_003_reviewed.md
/redirects/master/spec/light-client/accountability/index.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/accountability/README.md
/redirects/master/spec/light-client/accountability/results/001indinv-apalache-report.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/accountability/results/001indinv-apalache-report.md
/redirects/master/spec/light-client/accountability/Synopsis.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/accountability/Synopsis.md
/redirects/master/spec/light-client/verification/verification_003_draft.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/verification/verification_003_draft.md
/redirects/master/spec/light-client/verification/index.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/verification/README.md
/redirects/master/spec/light-client/verification/verification_001_published.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/verification/verification_001_published.md
/redirects/master/spec/light-client/verification/verification_002_draft.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/verification/verification_002_draft.md
/redirects/master/spec/light-client/supervisor/supervisor_002_draft.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/supervisor/supervisor_002_draft.md
/redirects/master/spec/light-client/supervisor/supervisor_001_draft.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/supervisor/supervisor_001_draft.md
/redirects/master/spec/light-client/attacks/isolate-attackers_001_draft.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/attacks/isolate-attackers_001_draft.md
/redirects/master/spec/light-client/attacks/notes-on-evidence-handling.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/attacks/notes-on-evidence-handling.md
/redirects/master/spec/light-client/attacks/isolate-attackers_002_reviewed.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/light-client/attacks/isolate-attackers_002_reviewed.md
/redirects/master/spec/abci/index.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/abci/README.md
/redirects/master/spec/abci/client-server.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/abci/client-server.md
/redirects/master/spec/abci/apps.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/abci/apps.md
/redirects/master/spec/abci/abci.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/abci/abci.md
/redirects/master/spec/rpc/index.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/rpc/README.md
/redirects/master/spec/blockchain/state.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/blockchain/state.md
/redirects/master/spec/blockchain/encoding.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/blockchain/encoding.md
/redirects/master/spec/blockchain/blockchain.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/blockchain/blockchain.md
/redirects/master/spec/blockchain/index.html https://github.com/tendermint/tendermint/blob/v0.34.x/spec/blockchain/readme.md


@@ -2,39 +2,38 @@
The documentation for Tendermint Core is hosted at:
- <https://docs.tendermint.com/master/>
- <https://docs.tendermint.com>
built from the files in this (`/docs`) directory for
[master](https://github.com/tendermint/tendermint/tree/master/docs) respectively.
built from the files in this (`/docs`) directory.
## How It Works
There is a CircleCI job listening for changes in the `/docs` directory, on both
the `master` branch. Any updates to files in this directory
on those branches will automatically trigger a website deployment. Under the hood,
the private website repository has a `make build-docs` target consumed by a CircleCI job in that repo.
There is a [GitHub Action](../.github/workflows/docs-deployment.yml) that is
triggered by changes in the `/docs` directory on `main` as well as the branch of
each major supported version (e.g. `v0.34.x`). Any updates to files in this
directory on those branches will automatically trigger a website deployment.
## README
The [README.md](./README.md) is also the landing page for the documentation
on the website. During the Jenkins build, the current commit is added to the bottom
of the README.
The [README.md](./README.md) is also the landing page for the documentation on
the website.
## Config.js
The [config.js](./.vuepress/config.js) generates the sidebar and Table of Contents
on the website docs. Note the use of relative links and the omission of
file extensions. Additional features are available to improve the look
of the sidebar.
The [config.js](./.vuepress/config.js) generates the sidebar and Table of
Contents on the website docs. Note the use of relative links and the omission of
file extensions. Additional features are available to improve the look of the
sidebar.
## Links
**NOTE:** Strongly consider the existing links - both within this directory
and to the website docs - when moving or deleting files.
**NOTE:** Strongly consider the existing links - both within this directory and
to the website docs - when moving or deleting files.
Links to directories _MUST_ end in a `/`.
Relative links should be used nearly everywhere, having discovered and weighed the following:
Relative links should be used nearly everywhere, having discovered and weighed
the following:
### Relative
@@ -65,7 +64,8 @@ Make sure you are in the `docs` directory and run the following commands:
rm -rf node_modules
```
This command will remove old version of the visual theme and required packages. This step is optional.
This command will remove old version of the visual theme and required packages.
This step is optional.
```bash
npm install
@@ -79,17 +79,24 @@ npm run serve
<!-- markdown-link-check-disable -->
Run `pre` and `post` hooks and start a hot-reloading web-server. See output of this command for the URL (it is often <https://localhost:8080>).
Run `pre` and `post` hooks and start a hot-reloading web-server. See output of
this command for the URL (it is often <https://localhost:8080>).
<!-- markdown-link-check-enable -->
To build documentation as a static website run `npm run build`. You will find the website in `.vuepress/dist` directory.
To build documentation as a static website run `npm run build`. You will find
the website in `.vuepress/dist` directory.
## Search
We are using [Algolia](https://www.algolia.com) to power full-text search. This uses a public API search-only key in the `config.js` as well as a [tendermint.json](https://github.com/algolia/docsearch-configs/blob/master/configs/tendermint.json) configuration file that we can update with PRs.
We are using [Algolia](https://www.algolia.com) to power full-text search. This
uses a public API search-only key in the `config.js` as well as a
[tendermint.json](https://github.com/algolia/docsearch-configs/blob/master/configs/tendermint.json)
configuration file that we can update with PRs.
## Consistency
Because the build processes are identical (as is the information contained herein), this file should be kept in sync as
much as possible with its [counterpart in the Cosmos SDK repo](https://github.com/cosmos/cosmos-sdk/blob/master/docs/DOCS_README.md).
Because the build processes are identical (as is the information contained
herein), this file should be kept in sync as much as possible with its
[counterpart in the Cosmos SDK
repo](https://github.com/cosmos/cosmos-sdk/blob/master/docs/DOCS_README.md).


@@ -21,7 +21,7 @@ Tendermint?](introduction/what-is-tendermint.md).
To get started quickly with an example application, see the [quick start guide](introduction/quick-start.md).
To learn about application development on Tendermint, see the [Application Blockchain Interface](https://github.com/tendermint/spec/tree/master/spec/abci).
To learn about application development on Tendermint, see the [Application Blockchain Interface](https://github.com/tendermint/tendermint/tree/v0.37.x/spec/abci).
For more details on using Tendermint, see the respective documentation for
[Tendermint Core](tendermint-core/), [benchmarking and monitoring](tools/), and [network deployments](networks/).
@@ -30,4 +30,4 @@ To find out about the Tendermint ecosystem you can go [here](https://github.com/
## Contribute
To contribute to the documentation, see [this file](https://github.com/tendermint/tendermint/blob/master/docs/DOCS_README.md) for details of the build process and considerations when making changes.
To contribute to the documentation, see [this file](https://github.com/tendermint/tendermint/blob/v0.37.x/docs/DOCS_README.md) for details of the build process and considerations when making changes.


@@ -37,7 +37,6 @@ Available Commands:
help Help about any command
info Get some info about the application
query Query the application state
set_option Set an options on the application
Flags:
--abci string socket or grpc (default "socket")
@@ -137,7 +136,7 @@ response.
The server may be generic for a particular language, and we provide a
[reference implementation in
Golang](https://github.com/tendermint/tendermint/tree/master/abci/server). See the
Golang](https://github.com/tendermint/tendermint/tree/v0.37.x/abci/server). See the
[list of other ABCI implementations](https://github.com/tendermint/awesome#ecosystem) for servers in
other languages.
@@ -169,6 +168,14 @@ Try running these commands:
-> data: {"size":0}
-> data.hex: 0x7B2273697A65223A307D
> prepare_proposal "abc"
-> code: OK
-> log: Succeeded. Tx: abc action: UNMODIFIED
> process_proposal "abc"
-> code: OK
-> status: ACCEPT
> commit
-> code: OK
-> data.hex: 0x0000000000000000
@@ -189,6 +196,8 @@ Try running these commands:
-> code: OK
-> log: exists
-> height: 2
-> key: abc
-> key.hex: 616263
-> value: abc
-> value.hex: 616263
@@ -203,8 +212,34 @@ Try running these commands:
-> code: OK
-> log: exists
-> height: 3
-> key: def
-> key.hex: 646566
-> value: xyz
-> value.hex: 78797A
> prepare_proposal "preparedef"
-> code: OK
-> log: Succeeded. Tx: def action: ADDED
-> code: OK
-> log: Succeeded. Tx: preparedef action: REMOVED
> process_proposal "def"
-> code: OK
-> status: ACCEPT
> process_proposal "preparedef"
-> code: OK
-> status: REJECT
> prepare_proposal
> process_proposal
-> code: OK
-> status: ACCEPT
> commit
-> code: OK
-> data.hex: 0x0400000000000000
```
Note that if we do `deliver_tx "abc"` it will store `(abc, abc)`, but if


@@ -55,6 +55,6 @@ Tendermint.
See the following for more extensive documentation:
- [Interchain Standard for the Light-Client REST API](https://github.com/cosmos/cosmos-sdk/pull/1028)
- [Tendermint RPC Docs](https://docs.tendermint.com/master/rpc/)
- [Tendermint RPC Docs](https://docs.tendermint.com/v0.37/rpc/)
- [Tendermint in Production](../tendermint-core/running-in-production.md)
- [ABCI spec](https://github.com/tendermint/spec/tree/95cf253b6df623066ff7cd4074a94e7a3f147c7a/spec/abci)


@@ -15,7 +15,7 @@ the block itself is never stored.
Each event contains a type and a list of attributes, which are key-value pairs
denoting something about what happened during the method's execution. For more
details on `Events`, see the
[ABCI](https://github.com/tendermint/spec/blob/master/spec/abci/abci.md#events)
[ABCI](https://github.com/tendermint/tendermint/blob/v0.37.x/spec/abci/abci.md#events)
documentation.
An `Event` has a composite key associated with it. A `compositeKey` is
@@ -146,7 +146,7 @@ You can query for a paginated set of transaction by their events by calling the
curl "localhost:26657/tx_search?query=\"message.sender='cosmos1...'\"&prove=true"
```
Check out [API docs](https://docs.tendermint.com/master/rpc/#/Info/tx_search)
Check out [API docs](https://docs.tendermint.com/v0.37/rpc/#/Info/tx_search)
for more information on query syntax and other options.
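Queries such as `message.sender='cosmos1...'` above match on composite keys of the form `type.key` built from event attributes. A hedged sketch with simplified local types (not the actual `abci.Event` definition):

```go
package main

import "fmt"

// attribute and event are simplified, hypothetical mirrors of ABCI events,
// used only to illustrate how composite keys are formed.
type attribute struct {
	Key, Value string
	Index      bool // whether the node should index this attribute
}

type event struct {
	Type       string
	Attributes []attribute
}

// compositeKeys returns the "type.key" strings that queries like
// message.sender='cosmos1...' match against.
func compositeKeys(e event) []string {
	keys := make([]string, 0, len(e.Attributes))
	for _, a := range e.Attributes {
		keys = append(keys, e.Type+"."+a.Key)
	}
	return keys
}

func main() {
	e := event{
		Type: "message",
		Attributes: []attribute{
			{Key: "sender", Value: "cosmos1...", Index: true},
		},
	}
	fmt.Println(compositeKeys(e)) // [message.sender]
}
```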
## Subscribing to Transactions
@@ -165,7 +165,7 @@ a query to `/subscribe` RPC endpoint.
}
```
Check out [API docs](https://docs.tendermint.com/master/rpc/#subscribe) for more information
Check out [API docs](https://docs.tendermint.com/v0.37/rpc/#subscribe) for more information
on query syntax and other options.
## Querying Blocks Events
@@ -177,5 +177,5 @@ You can query for a paginated set of blocks by their events by calling the
curl "localhost:26657/block_search?query=\"block.height > 10 AND val_set.num_changed > 0\""
```
Check out [API docs](https://docs.tendermint.com/master/rpc/#/Info/block_search)
Check out [API docs](https://docs.tendermint.com/v0.37/rpc/#/Info/block_search)
for more information on query syntax and other options.


@@ -77,7 +77,7 @@ Note the context/background should be written in the present tense.
- [ADR-006: Trust-Metric](./adr-006-trust-metric.md)
- [ADR-024: Sign-Bytes](./adr-024-sign-bytes.md)
- [ADR-035: Documentation](./adr-035-documentation.md)
- [ADR-039: Peer-Behaviour](./adr-039-peer-behavior.md)
- [ADR-039: Peer-Behaviour](./adr-039-peer-behaviour.md)
- [ADR-060: Go-API-Stability](./adr-060-go-api-stability.md)
- [ADR-061: P2P-Refactor-Scope](./adr-061-p2p-refactor-scope.md)
- [ADR-065: Custom Event Indexing](./adr-065-custom-event-indexing.md)


@@ -7,7 +7,7 @@
## Context
ABCI tags were first described in [ADR 002](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-002-event-subscription.md).
ABCI tags were first described in [ADR 002](https://github.com/tendermint/tendermint/blob/v0.37.x/docs/architecture/adr-002-event-subscription.md).
They are key-value pairs that can be used to index transactions.
Currently, ABCI messages return a list of tags to describe an


@@ -16,7 +16,7 @@ The messages exchanged between tendermint and a remote signer currently live in
[privval/socket.go]: https://github.com/tendermint/tendermint/blob/d419fffe18531317c28c29a292ad7d253f6cafdf/privval/socket.go#L496-L502
[issue#1622]: https://github.com/tendermint/tendermint/issues/1622
[types]: https://github.com/tendermint/tendermint/tree/master/types
[types]: https://github.com/tendermint/tendermint/tree/v0.37.x/types
## Decision


@@ -9,7 +9,7 @@
## Context
The blockchain reactor is responsible for two high level processes: sending/receiving blocks from peers and FastSync-ing blocks to catch up a node that is far behind. The goal of [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md) was to refactor these two processes by separating business logic currently wrapped up in go-channels into pure `handle*` functions. While the ADR specified what the final form of the reactor might look like, it lacked guidance on intermediary steps to get there.
The blockchain reactor is responsible for two high level processes: sending/receiving blocks from peers and FastSync-ing blocks to catch up a node that is far behind. The goal of [ADR-40](https://github.com/tendermint/tendermint/blob/v0.37.x/docs/architecture/adr-040-blockchain-reactor-refactor.md) was to refactor these two processes by separating business logic currently wrapped up in go-channels into pure `handle*` functions. While the ADR specified what the final form of the reactor might look like, it lacked guidance on intermediary steps to get there.
The following diagram illustrates the state of the [blockchain-reorg](https://github.com/tendermint/tendermint/pull/3561) reactor which will be referred to as `v1`.
![v1 Blockchain Reactor Architecture
@@ -22,7 +22,7 @@ While `v1` of the blockchain reactor has shown significant improvements in terms
- Peer communication is spread over multiple components creating complex dependency graph which must be mocked out during testing.
- Timeouts modeled as stateful tickers introduce non-determinism in tests
This ADR is meant to specify the missing components and control necessary to achieve [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md).
This ADR is meant to specify the missing components and control necessary to achieve [ADR-40](https://github.com/tendermint/tendermint/blob/v0.37.x/docs/architecture/adr-040-blockchain-reactor-refactor.md).
## Decision
@@ -41,7 +41,7 @@ Diagram](https://github.com/tendermint/tendermint/blob/5cf570690f989646fb3b615b7
### Reactor changes in detail
The reactor will include a demultiplexing routine which will send each message to each sub routine for independent processing. Each sub routine will then select the messages it's interested in and call the handle specific function specified in [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md). The demuxRoutine acts as "pacemaker" setting the time in which events are expected to be handled.
The reactor will include a demultiplexing routine which will send each message to each sub routine for independent processing. Each sub routine will then select the messages it's interested in and call the handle specific function specified in [ADR-40](https://github.com/tendermint/tendermint/blob/v0.37.x/docs/architecture/adr-040-blockchain-reactor-refactor.md). The demuxRoutine acts as "pacemaker" setting the time in which events are expected to be handled.
```go
func demuxRoutine(msgs, scheduleMsgs, processorMsgs, ioMsgs) {
@@ -400,5 +400,5 @@ Implemented
## References
- [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md): The original blockchain reactor re-org proposal
- [ADR-40](https://github.com/tendermint/tendermint/blob/v0.37.x/docs/architecture/adr-040-blockchain-reactor-refactor.md): The original blockchain reactor re-org proposal
- [Blockchain re-org](https://github.com/tendermint/tendermint/pull/3561): The current blockchain reactor re-org implementation (v1)


@@ -32,7 +32,7 @@ fork the network at some point in its prior history. See Vitaliks post at
Subjectivity](https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/).
Currently, Tendermint provides a lite client implementation in the
[light](https://github.com/tendermint/tendermint/tree/master/light) package. This
[light](https://github.com/tendermint/tendermint/tree/v0.37.x/light) package. This
lite client implements a bisection algorithm that tries to use a binary search
to find the minimum number of block headers where the validator set voting
power changes are less than < 1/3rd. This interface does not support weak
@@ -84,7 +84,7 @@ The linear verification algorithm requires downloading all headers
between the `TrustHeight` and the `LatestHeight`. The lite client downloads the
full header for the provided `TrustHeight` and then proceeds to download `N+1`
headers and applies the [Tendermint validation
rules](https://github.com/tendermint/tendermint/tree/master/spec/light-client/verification/README.md)
rules](https://github.com/tendermint/tendermint/tree/v0.37.x/spec/light-client/verification/README.md)
to each block.
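As described above, linear verification starts from the trusted header and validates every successive header up to the target height. A minimal sketch of that loop under hypothetical types; `fetch` and `verifyAdjacent` are stand-ins, not the tendermint/light API:

```go
package main

import (
	"errors"
	"fmt"
)

// header is a hypothetical, stripped-down header used only for illustration;
// a real header also carries validator-set hashes, commits, etc.
type header struct {
	Height int64
	Hash   string
}

// verifyLinear walks from the trusted height to the target height, fetching
// and checking every header in between, as the linear algorithm above does.
func verifyLinear(
	trusted header,
	target int64,
	fetch func(int64) (header, error),
	verifyAdjacent func(trusted, next header) error,
) (header, error) {
	for h := trusted.Height + 1; h <= target; h++ {
		next, err := fetch(h)
		if err != nil {
			return trusted, err
		}
		if err := verifyAdjacent(trusted, next); err != nil {
			return trusted, fmt.Errorf("header %d failed verification: %w", h, err)
		}
		trusted = next // advance the trust anchor one height at a time
	}
	return trusted, nil
}

func main() {
	fetch := func(h int64) (header, error) { return header{Height: h, Hash: "ok"}, nil }
	verify := func(a, b header) error {
		if b.Height != a.Height+1 {
			return errors.New("non-adjacent headers")
		}
		return nil
	}
	latest, err := verifyLinear(header{Height: 100}, 105, fetch, verify)
	fmt.Println(latest.Height, err) // 105 <nil>
}
```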
### Bisecting Verification
@@ -119,7 +119,7 @@ network usage.
---
Check out the formal specification
[here](https://github.com/tendermint/tendermint/tree/master/spec/light-client).
[here](https://github.com/tendermint/tendermint/tree/v0.37.x/spec/light-client).
## Status

Some files were not shown because too many files have changed in this diff.