Compare commits


104 Commits

Author SHA1 Message Date
Jae Kwon
66ac0bf8b0 fix inaccuracies (#10144) 2025-03-06 15:39:57 -08:00
Callum Waters
85a584dd2e add retraction notice for v0.35 (#9308) 2022-08-24 15:57:05 +02:00
Daniel
d0c3457343 Documentation of v0.35 p2p layer: peer manager (#8982)
* Doc: documentation of new p2p layer, first commit

* Doc: p2p peer manager abstraction, first commit

* Doc: life cycle of a peer, first part

* Doc: life cycle of a p2p peer, picture added

* typos

* Doc: life cycle of a p2p peer picture updated

* Doc: life cycle of a p2p peer section refactored

* Doc: p2p connection policy and connection slots

* Doc: peer manager defines the connection policy

* Doc: peer manager connection slots upgrading

* Doc: peer manager eviction procedure introduced

* Doc: several corrections in peer manager documentation

* Doc: peer ranking mechanism documented

* Doc: EvictNext peer manager transition documented

* Doc: concept of candidate peer added to peer manager

* Doc: peer manager documentation, aesthetic changes

* Apply suggestions from code review (again)

Co-authored-by: Sergio Mena <sergio@informal.systems>

* Spec of v0.35 p2p layer moved to spec/p2p/v0.35

* Spec: p2p markdown links fixed

* Spec: addressing more issues on peer manager spec

* Spec: p2p peer manager DialNext algorithm

* Spec: p2p peer manager Dial and Accepted algorithms

* Spec: p2p router dialing peers thread

* Spec: p2p router accept peers threads

* Spec: p2p router evict peers routine

* Spec: p2p router routing messages routines

* Spec: p2p v0.35 readme points to other documents

* Spec: fixing markdown links

* Apply suggestions from Josef's code review

* They state that this is a work in progress, which was interrupted to focus on the specification of the p2p layer adopted by Tendermint v0.34.

Co-authored-by: Josef Widder <44643235+josef-widder@users.noreply.github.com>

* Spec: p2p v0.35 spec mentions new p2p layer

Co-authored-by: Jasmina Malicevic <jasmina.dustinac@gmail.com>
Co-authored-by: Sergio Mena <sergio@informal.systems>
Co-authored-by: Josef Widder <44643235+josef-widder@users.noreply.github.com>
Co-authored-by: Daniel Cason <daniel.cason@informal.systems>
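The router routines listed above (dialing, accepting, evicting) lend themselves to a compact illustration. A minimal Go sketch with hypothetical names that only mirror the transitions the spec documents (DialNext, EvictNext), not the actual tendermint code:

```go
package p2psketch

import "context"

// Router is a hypothetical stand-in for the v0.35 p2p router.
type Router struct{ /* peer manager, transports, channels elided */ }

func (r *Router) dialPeers(ctx context.Context)   {} // dial candidates returned by DialNext
func (r *Router) acceptPeers(ctx context.Context) {} // accept inbound peers into free connection slots
func (r *Router) evictPeers(ctx context.Context)  {} // disconnect peers selected by EvictNext

// run starts the independent routines the spec documents; each would loop
// until the context is cancelled.
func (r *Router) run(ctx context.Context) {
	go r.dialPeers(ctx)
	go r.acceptPeers(ctx)
	go r.evictPeers(ctx)
}
```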
2022-08-08 11:40:50 +02:00
Mark Rushakoff
d433ebe68d Improve handling of -short flag in tests (#9075)
As a small developer quality-of-life improvement, I found many individual unit tests that take longer than around a second to complete and set them to skip when run under `go test -short`.

On my machine, the wall timings for tests (with `go test -count=1 ./...` and optionally `-short` and `-race`) are roughly:

- Long tests, no race detector: about 1m42s
- Short tests, no race detector: about 17s
- Long tests, race detector enabled: about 2m1s
- Short tests, race detector enabled: about 28s

This PR is split into many commits each touching a single package, with commit messages detailing the approximate timing change per package.
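For reference, the pattern applied throughout these commits is the standard library's short-mode guard; the test name below is illustrative:

```go
package example

import "testing"

func TestExpensiveScenario(t *testing.T) {
	// Skip this long-running test when invoked as `go test -short`.
	if testing.Short() {
		t.Skip("skipping in short mode")
	}
	// ... assertions that take more than about a second ...
}
```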
2022-07-29 13:41:54 +00:00
Sam Kleinman
48147e1fb9 logging: implement lazy sprinting (#8898)
Shout out to @joeabbey for the inspiration. This makes the lazy
functions internal by default to prevent potential misuse by external
callers.

This should backport cleanly into 0.36, and I'll handle a messy merge into 0.35.
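A minimal sketch of the lazy-formatting idea, assuming a wrapper type that defers `fmt.Sprintf` until the logger actually stringifies the value (names illustrative, not the exact internal API):

```go
package lazylog

import "fmt"

// lazySprintf delays formatting until String is called, so a log line
// filtered out by level never pays the Sprintf cost.
type lazySprintf struct {
	format string
	args   []interface{}
}

func (l *lazySprintf) String() string {
	return fmt.Sprintf(l.format, l.args...)
}

// newLazySprintf is kept unexported, matching the PR's intent to prevent
// misuse by external callers.
func newLazySprintf(format string, args ...interface{}) fmt.Stringer {
	return &lazySprintf{format: format, args: args}
}
```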
2022-07-27 19:16:51 +00:00
William Banfield
b9d6bb4cd1 RELEASES: add a set of pre-release steps for Tendermint versions (#8786)
[Rendered](https://github.com/tendermint/tendermint/blob/wb/release-document/RELEASES.md)

This pull request adds a proposed set of steps to perform before each Tendermint minor version release. This represents an initial set of ideas that derives from conversations among members of the Tendermint team. If you have additional suggestions for pre-release steps, please leave a comment explaining the specific suggestion and detailing how it would help build confidence in the quality of the release of Tendermint.
2022-07-27 15:03:36 +00:00
Callum Waters
7a84425aec update dependabot frequencies (#9087) 2022-07-26 12:05:48 +02:00
dependabot[bot]
488e1d4bd2 build(deps): Bump github.com/creachadair/tomledit from 0.0.22 to 0.0.23 (#9084)
Bumps [github.com/creachadair/tomledit](https://github.com/creachadair/tomledit) from 0.0.22 to 0.0.23.
Commits:
- e7b6512 Release v0.0.23.
- fdb62db Add tomldiff command-line tool.
- ce86b66 Fix a comment typo.
- 3834adf Fix documentation.
- 525c9c1 cli: escape as basic strings by default
- Full diff: https://github.com/creachadair/tomledit/compare/v0.0.22...v0.0.23


2022-07-25 12:53:08 +00:00
dependabot[bot]
89246e993a build(deps): Bump docker/build-push-action from 3.0.0 to 3.1.0 (#9082) 2022-07-25 12:07:29 +02:00
Steven Ferrer
6c302218e3 Documentation: update go tutorials (#9048) 2022-07-25 10:15:18 +02:00
dependabot[bot]
023c21f307 build(deps): Bump github.com/golangci/golangci-lint from 1.47.1 to 1.47.2 (#9070)
Bumps [github.com/golangci/golangci-lint](https://github.com/golangci/golangci-lint) from 1.47.1 to 1.47.2.
Release notes, v1.47.2:
- 61673b34 revive: ignore slow rules (#2999)

Changelog, v1.47.2:
1. updated linters:
   - `revive`: ignore slow rules

Commits:
- 61673b3 revive: ignore slow rules (#2999)
- 3fb60a3 build(deps): bump terser from 5.12.0 to 5.14.2 in /docs (#2998)
- dc02378 docs: Update documentation and assets (#2996)
- Full diff: https://github.com/golangci/golangci-lint/compare/v1.47.1...v1.47.2


2022-07-22 14:16:44 +00:00
dependabot[bot]
65c0fba564 build(deps): Bump github.com/BurntSushi/toml from 1.1.0 to 1.2.0 (#9061)
Bumps [github.com/BurntSushi/toml](https://github.com/BurntSushi/toml) from 1.1.0 to 1.2.0.
- [Release notes](https://github.com/BurntSushi/toml/releases)
- [Commits](https://github.com/BurntSushi/toml/compare/v1.1.0...v1.2.0)

---
updated-dependencies:
- dependency-name: github.com/BurntSushi/toml
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-07-21 09:20:01 -04:00
M. J. Fromberger
87708b8ae7 Forward-port changelog for v0.35.9 to master. (#9059) 2022-07-21 08:22:23 +00:00
dependabot[bot]
d320b3088e build(deps): Bump github.com/golangci/golangci-lint from 1.47.0 to 1.47.1 (#9044)
Bumps [github.com/golangci/golangci-lint](https://github.com/golangci/golangci-lint) from 1.47.0 to 1.47.1.
Release notes, v1.47.1:
- a91463cd build(deps): bump github.com/daixiang0/gci from 0.4.2 to 0.4.3 (#2992)
- 4c8bdc70 build(deps): bump github.com/sivchari/tenv from 1.6.0 to 1.7.0 (#2988)
- 4e60e8a8 gci: fix options display (#2989)
- fd87bd1e gci: remove the use of stdin (#2984)

Changelog, v1.47.1:
1. updated linters:
   - `gci`: from 0.4.2 to 0.4.3
   - `gci`: remove the use of stdin
   - `gci`: fix options display
   - `tenv`: from 1.6.0 to 1.7.0
   - `unparam`: bump to HEAD

Commits:
- ebd6dcb Revert 'fix: generics (#2991)' (#2995)
- c39bf96 fix: disable structcheck for go >= 1.18 (#2994)
- a91463c build(deps): bump github.com/daixiang0/gci from 0.4.2 to 0.4.3 (#2992)
- eb16e70 fix: generics (#2991)
- 4e60e8a gci: fix options display (#2989)
- 4c8bdc7 build(deps): bump github.com/sivchari/tenv from 1.6.0 to 1.7.0 (#2988)
- fd87bd1 gci: remove the use of stdin (#2984)
- a913b3e docs: Update documentation and assets (#2981)
- Full diff: https://github.com/golangci/golangci-lint/compare/v1.47.0...v1.47.1


2022-07-21 01:50:14 +00:00
dependabot[bot]
e33635a601 build(deps): Bump terser from 4.8.0 to 4.8.1 in /docs (#9051) 2022-07-20 17:20:31 -07:00
M. J. Fromberger
fa32078ceb Fix unbounded heap growth in the priority mempool. (#9052)
This is a manual forward-port of #8944 and related fixes from v0.35.x.

One difference of note is that the CheckTx response messages no longer have a
field to record an error from the ABCI application. The code is set up so that
these could be reported directly to the CheckTx caller, but it would be an API
change, and right now a bunch of test plumbing depends on the existing semantics.

Also fix up tests relying on implementation-specific mempool behavior.

- Commit was setting the expected mempool size incorrectly.
- Fix sequence test not to depend on the incorrect size.
2022-07-20 14:37:13 -07:00
M. J. Fromberger
fc3a24a669 Disable upload-coverage-report workflow in CI. (#9056)
As far as I know, nobody looks at these reports anyway, and lately
the codecov API has been failing due to an expired certificate.
2022-07-20 12:21:14 -07:00
M. J. Fromberger
525624f1ce Fix a typo in the RFC ToC (#9047) 2022-07-20 10:01:23 +00:00
M. J. Fromberger
8a815417b4 RFC 021: The Future of the Socket Protocol (#8584) 2022-07-19 16:47:11 -07:00
dependabot[bot]
066b7a9999 build(deps): Bump github.com/golangci/golangci-lint from 1.46.0 to 1.47.0 (#9038)
Bumps [github.com/golangci/golangci-lint](https://github.com/golangci/golangci-lint) from 1.46.0 to 1.47.0.
Release notes, v1.47.0:
- b4154027 Add linter `asasalint` to lint pass []any as any (#2968)
- 1d8a15a0 add nosnakecase lint (#2828)
- 2a1edcef build(deps): bump github.com/Antonboom/errname from 0.1.6 to 0.1.7 (#2888)
- c766184c build(deps): bump github.com/GaijinEntertainment/go-exhaustruct/v2 from 2.1.0 to 2.2.0 (#2916)
- b8f1e2a5 build(deps): bump github.com/daixiang0/gci from 0.3.4 to 0.4.0 (#2965)
- 5e183652 build(deps): bump github.com/daixiang0/gci from 0.4.0 to 0.4.1 (#2973)
- e60937a1 build(deps): bump github.com/daixiang0/gci from 0.4.1 to 0.4.2 (#2979)
- 98c811d0 build(deps): bump github.com/firefart/nonamedreturns from 1.0.1 to 1.0.2 (#2929)
- 023e1c4f build(deps): bump github.com/firefart/nonamedreturns from 1.0.2 to 1.0.4 (#2944)
- 7fbb11ca build(deps): bump github.com/fzipp/gocyclo from 0.5.1 to 0.6.0 (#2926)
- db5d58cd build(deps): bump github.com/hashicorp/go-version from 1.4.0 to 1.5.0 (#2873)
- f75b1a8b build(deps): bump github.com/hashicorp/go-version from 1.5.0 to 1.6.0 (#2958)
- 75be924e build(deps): bump github.com/kisielk/errcheck from 1.6.0 to 1.6.1 (#2871)
- 33f4aeeb build(deps): bump github.com/kulti/thelper from 0.6.2 to 0.6.3 (#2872)
- 6a412d3d build(deps): bump github.com/kunwardeep/paralleltest from 1.0.3 to 1.0.4 (#2907)
- 97eea6ea build(deps): bump github.com/kunwardeep/paralleltest from 1.0.4 to 1.0.6 (#2918)
- 3a0f646e build(deps): bump github.com/maratori/testpackage from 1.0.1 to 1.1.0 (#2945)
- 92d7022d build(deps): bump github.com/nishanths/exhaustive from 0.7.11 to 0.8.1 (#2906)
- 97d7415b build(deps): bump github.com/quasilyte/go-ruleguard/dsl from 0.3.19 to 0.3.21 (#2874)
- 0e3730d3 build(deps): bump github.com/securego/gosec/v2 from 2.11.0 to 2.12.0 (#2925)
- ac99dbcc build(deps): bump github.com/shirou/gopsutil/v3 from 3.22.4 to 3.22.5 (#2908)
- 8e0a6725 build(deps): bump github.com/shirou/gopsutil/v3 from 3.22.5 to 3.22.6 (#2959)
- c8e38c4b build(deps): bump github.com/sivchari/tenv from 1.5.0 to 1.6.0 (#2927)
- f70bf666 build(deps): bump github.com/spf13/cobra from 1.4.0 to 1.5.0 (#2933)
- 153b4072 build(deps): bump github.com/spf13/viper from 1.11.0 to 1.12.0 (#2889)
- f03a5207 build(deps): bump github.com/stretchr/testify from 1.7.1 to 1.7.2 (#2917)
- e33e63ed build(deps): bump github.com/stretchr/testify from 1.7.2 to 1.7.4 (#2934)
- 44e9b34d build(deps): bump github.com/stretchr/testify from 1.7.4 to 1.7.5 (#2942)
- bb5b6625 build(deps): bump github.com/stretchr/testify from 1.7.5 to 1.8.0 (#2957)
- 2c30625c build(deps): bump github.com/tomarrell/wrapcheck/v2 from 2.6.1 to 2.6.2 (#2928)
- 9317da6c build(deps): bump github.com/uudashr/gocognit from 1.0.5 to 1.0.6 (#2962)
- 3071fecb build(deps): bump gitlab.com/bosi/decorder from 0.2.1 to 0.2.2 (#2943)
- d92f144d build(deps): bump goreleaser/goreleaser-action from 2 to 3 (#2876)
- ddee31ae build(deps): bump honnef.co/go/tools from 0.3.1 to 0.3.2 (#2870)
- 9ebc2d52 build(deps): bump moment from 2.29.2 to 2.29.4 in /.github/contributors (#2966)
- f9d81511 bump golang.org/x/tools to HEAD (#2875)
- de7cc56e chore: remove reviewers from dependabot configuration (#2932)
- 86bd8423 chore: spelling and grammar fixes (#2865)
- 4b218e66 config: spread go version on linter's configurations (#2913)
- ae2a9688 depguard: adjust phrasing (#2921)
- f2634d40 fix: codeQL scanning (#2882)
- 2f41c1f0 gci: fix issues and re-enable autofix (#2892)
- c531fc2a gosec: allow `global` config (#2880)
- 0abb2981 staticcheck: fix generics (#2976)

Release notes, v1.46.2:
- a3336890 build(deps): bump golangci/golangci-lint-action from 3.1.0 to 3.2.0 (#2858)

(... truncated)

Changelog, v1.47.0:
1. new linters:
   - `asasalint`: https://github.com/alingse/asasalint
   - `nosnakecase`: https://github.com/sivchari/nosnakecase
2. updated linters:
   - `errname`: from 0.1.6 to 0.1.7
   - `gci`: from 0.3.4 to 0.4.2
   - `nonamedreturns`: from 1.0.1 to 1.0.4
   - `gocyclo`: from 0.5.1 to 0.6.0
   - `go-exhaustruct`: from 2.1.0 to 2.2.0
   - `errcheck`: from 1.6.0 to 1.6.1
   - `thelper`: from 0.6.2 to 0.6.3
   - `paralleltest`: from 1.0.3 to 1.0.6
   - `testpackage`: from 1.0.1 to 1.1.0
   - `exhaustive`: from 0.7.11 to 0.8.1
   - `go-ruleguard`: from 0.3.19 to 0.3.21
   - `gosec`: from 2.11.0 to 2.12.0
   - `tenv`: from 1.5.0 to 1.6.0
   - `wrapcheck`: from 2.6.1 to 2.6.2
   - `gocognit`: from 1.0.5 to 1.0.6
   - `decorder`: from 0.2.1 to 0.2.2
   - `honnef.co/go/tools`: from 0.3.1 to 0.3.2
   - `golang.org/x/tools`: bump to HEAD
   - `gci`: fix issues and re-enable autofix
   - `gosec`: allow `global` config
   - `staticcheck`: fix generics
3. documentation:
   - add thanks page
   - add a clear explanation about the `staticcheck` integration
   - `depguard`: add `ignore-file-rules`
   - `depguard`: adjust phrasing
   - `gocritic`: add `enable` and `disable` ruleguard settings
   - `gomnd`: fix typo
   - `gosec`: add configs for all existing rules
   - `govet`: add settings for `shadow` and `unusedresult`
   - `thelper`: add `fuzz` config and description
   - linters: add defaults

Changelog, v1.46.2:
1. updated linters:
   - `execinquery`: bump from v1.2.0 to v1.2.1
   - `errorlint`: bump to v1.0.0
   - `thelper`: allow to disable one option
2. documentation:
   - rename `.golangci.example.yml` to `.golangci.reference.yml`
   - add `containedctx` linter to the list of available linters

(v1.46.1 entry truncated)

Commits:
- b415402 Add linter `asasalint` to lint pass []any as any (#2968)
- e60937a build(deps): bump github.com/daixiang0/gci from 0.4.1 to 0.4.2 (#2979)
- 27f921f dev: use directives instead of comments for tests (#2978)
- 0abb298 staticcheck: fix generics (#2976)
- d6a39ef dev: remove kortschak from generated team (#2974)
- 5e18365 build(deps): bump github.com/daixiang0/gci from 0.4.0 to 0.4.1 (#2973)
- ed4befe dev: change err to nil (#2971)
- 9ebc2d5 build(deps): bump moment from 2.29.2 to 2.29.4 in /.github/contributors (#2966)
- b050b42 build(deps): bump moment from 2.29.2 to 2.29.4 in /docs (#2967)
- b8f1e2a build(deps): bump github.com/daixiang0/gci from 0.3.4 to 0.4.0 (#2965)
- Additional commits: https://github.com/golangci/golangci-lint/compare/v1.46.0...v1.47.0


2022-07-19 13:13:02 +00:00
Rishabh Goel
48f3062d9d Updated potential errors in abci.md (#9003)
Co-authored-by: Callum Waters <cmwaters19@gmail.com>
Co-authored-by: Josef Widder <44643235+josef-widder@users.noreply.github.com>
2022-07-18 10:50:55 +02:00
Sam Kleinman
cc07318866 migration: scope key migration to stores (#9005) 2022-07-15 21:14:18 -04:00
dependabot[bot]
503ddf1c4d build(deps): Bump github.com/prometheus/common from 0.36.0 to 0.37.0 (#9013)
Bumps [github.com/prometheus/common](https://github.com/prometheus/common) from 0.36.0 to 0.37.0.
Release notes:
- sigv4/v0.1.0: initial release

Commits:
- 49b3603 Improve OAuth2 user agent handling (#391)
- Full diff: https://github.com/prometheus/common/compare/v0.36.0...v0.37.0


2022-07-15 16:10:24 +00:00
M. J. Fromberger
18b5a500da Extract a library from the confix command-line tool. (#9012)
Pull out the library functionality from scripts/confix and move it to
internal/libs/confix. Replace scripts/confix with a simple stub that has the
same command-line API, but uses the library instead.

Related:

- Move and update unit tests.
- Move scripts/confix/condiff to scripts/condiff.
- Update test data for v34, v35, and v36.
- Update reference diffs.
- Update testdata README.
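A sketch of the resulting stub shape, under stated assumptions: the package path comes from the commit, but the `Upgrade` entry point and the flag below are illustrative, not the library's actual API:

```go
package main

import (
	"flag"
	"log"

	confix "github.com/tendermint/tendermint/internal/libs/confix" // real package path per the commit
)

func main() {
	// Hypothetical flag; the stub keeps the old command-line surface
	// while delegating the work to the extracted library.
	config := flag.String("config", "", "path to the config file to update")
	flag.Parse()
	if err := confix.Upgrade(*config); err != nil { // assumed entry point, for illustration
		log.Fatal(err)
	}
}
```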
2022-07-15 07:29:34 -07:00
dependabot[bot]
d2db54ae9a build(deps): Bump pgregory.net/rapid from 0.4.7 to 0.4.8 (#9014)
Bumps [pgregory.net/rapid](https://github.com/flyingmutant/rapid) from 0.4.7 to 0.4.8.
Commits:
- 110d7a5 persist: bump rapid version
- 94a73e7 Remove shrinking-challenge tests
- 1a852a2 persist: bump rapid version
- bc396c3 persist: put failfiles under testdata/rapid/ subdirectories
- 5408033 ci: test on Go 1.18
- dd3e976 Document concurrent usage of *T
- a026755 Expose TB interface
- 32f9d9b ci: test on Go 1.17
- ef97f65 Avoid division in genFloat01()
- Full diff: https://github.com/flyingmutant/rapid/compare/v0.4.7...v0.4.8


2022-07-15 13:24:17 +00:00
Sergio Mena
31457ad361 typo (#9001) 2022-07-15 12:01:29 +02:00
M. J. Fromberger
4214d998f2 Forward-port point release changelogs from v0.35.x. (#9011) 2022-07-14 18:43:38 -07:00
William Banfield
c1c501ecd4 config: update config to reflect simple-priority queue (#9007)
Update the queue documentation to reflect the available queue types and the current default queue.
2022-07-14 20:19:53 +00:00
kuniseichi
b71ec8c83f doc: fix typos in quick-start.md. (#8990) 2022-07-13 07:04:17 -07:00
dependabot[bot]
b421138e53 build(deps): Bump google.golang.org/grpc from 1.47.0 to 1.48.0 (#8993)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.47.0 to 1.48.0.
Release notes, 1.48.0:

Bug fixes:
- xds/priority: fix bug that could prevent higher priorities from receiving config updates (#5417)
- RLS load balancer: don't propagate the status code returned on control plane RPCs to data plane RPCs (#5400)

New features:
- stats: add support for multiple stats handlers in a single client or server (#5347)
- gcp/observability: add experimental OpenCensus tracing/metrics support (#5372)
- xds: enable aggregate and logical DNS clusters by default (#5380)
- credentials/google (for xds): support xdstp C2P cluster names (#5399)

Commits:
- 6417495 Change version to 1.48.0 (#5482)
- 5770b1d xds: drop localities with zero weight at the xdsClient layer (#5476)
- 423cd8e interop: update proto to make vet happy (#5475)
- c9b16c8 transport: remove unused `bufWriter.onFlush()` (#5464)
- 755bf5a fix typo in the binary log (#5467)
- 15739b5 health: split imports into healthpb and healthgrpc (#5466)
- c075d20 interop client: provide new flag, --soak_min_time_ms_between_rpcs (#5421)
- 4b75005 clusterresolver: merge P(p)arseConfig functions (#5462)
- d883f3d test/xds: fail only when state changes to something other than READY and IDLE...
- c6ee1c7 xdsclient: only include nodeID in error strings, not the whole nodeProto (#5461)
- Additional commits: https://github.com/grpc/grpc-go/compare/v1.47.0...v1.48.0


2022-07-13 13:49:52 +00:00
Sam Kleinman
e9239e9ca8 p2p: switch default queue implementation (#8976) 2022-07-12 12:20:17 +00:00
dependabot[bot]
136b62762f build(deps): Bump github.com/prometheus/common from 0.35.0 to 0.36.0 (#8980)
Bumps [github.com/prometheus/common](https://github.com/prometheus/common) from 0.35.0 to 0.36.0.
- [Release notes](https://github.com/prometheus/common/releases)
- [Commits](https://github.com/prometheus/common/compare/v0.35.0...v0.36.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-07-12 08:00:55 -04:00
Sam Kleinman
d5fb82e414 p2p: make p2p.Channel an interface (#8967)
This is (#8446) pulled from the `main/libp2p` branch but without any
of the libp2p content. It is perhaps the easiest first step to enable
pluggability at the peer layer, and it makes it possible to hoist shims
(including, say, for 0.34) into tendermint without touching the reactors.
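A rough sketch of what such an interface might look like; the method set here is an assumption for illustration, not the actual `p2p.Channel` definition:

```go
package channelsketch

// Envelope wraps a routed message; fields elided.
type Envelope struct{ /* sender, receiver, payload */ }

// Channel is the reactor-facing abstraction. With an interface in place,
// a shim (say, backed by the 0.34 transport or by libp2p) can satisfy it
// without the reactors changing.
type Channel interface {
	Send(Envelope) error
	Receive() <-chan Envelope
	Error() error
}
```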
2022-07-11 20:22:40 +00:00
Rootul Patel
6902fa9282 Fix punctuation (#8972) 2022-07-11 07:14:54 -07:00
Sam Kleinman
61ce384d75 p2p: make peer gossiping coinflip safer (#8949)
Closes #8948
2022-07-08 14:06:57 +00:00
Sam Kleinman
636320f901 p2p: delete cruft (#8958)
I think the decision in #8806 is that we shouldn't do this yet, so it's best to just drop this.
2022-07-07 16:29:50 +00:00
Sam Kleinman
d1a16e8ff0 p2p: simpler priority queue (#8929) 2022-07-07 12:13:52 -04:00
Callum Waters
27c523dccb mempool: return error when mempool is full and inbound tx is rejected (#8942)
Addresses: https://github.com/tendermint/tendermint/issues/8928
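Sketch of the behavioral change, with assumed names rather than the real mempool code: a full mempool now returns an explicit error for an inbound transaction instead of dropping it silently:

```go
package mempoolsketch

import "errors"

// errMempoolIsFull is a hypothetical sentinel for the new behavior.
var errMempoolIsFull = errors.New("mempool is full")

// checkTx rejects an inbound tx with an explicit error when adding it
// would exceed the mempool's byte budget.
func checkTx(curBytes, maxBytes int64, tx []byte) error {
	if curBytes+int64(len(tx)) > maxBytes {
		return errMempoolIsFull // surfaced to the caller, e.g. the RPC layer
	}
	// ... normal CheckTx handling ...
	return nil
}
```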
2022-07-06 18:17:56 +00:00
yihuang
be6d74e657 Work around indexing problem for duplicate transactions (forward port: #8625) (#8945)
Port the bug fix terra-money#76 upstream. This is critical for the ethermint JSON-RPC to work.

fix: prevent duplicate tx index if it succeeded before
fix: use CodeTypeOk instead of 0
fix: handle duplicate txs within the same block
Co-authored-by: jess <jesse@soob.co>

ref: #5281

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
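The guard described by the fix bullets above can be sketched as follows (placeholder types; `CodeTypeOK` mirrors the ABCI success constant, which is 0):

```go
package indexsketch

// CodeTypeOK mirrors the ABCI success code.
const CodeTypeOK uint32 = 0

// txResult stands in for the indexed ABCI transaction result.
type txResult struct {
	Code uint32 // response code of the executed transaction
}

// shouldIndex keeps an already-indexed successful result rather than
// overwriting it with a later duplicate of the same transaction.
func shouldIndex(existing *txResult) bool {
	return existing == nil || existing.Code != CodeTypeOK
}
```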
2022-07-06 14:05:48 -04:00
samricotta
70e7372c4a added name to CO (#8947)
Co-authored-by: Samantha Ricotta <samantharicotta@Samanthas-MacBook-Pro.local>
2022-07-06 14:30:45 +02:00
Sergio Mena
2b5329ae47 Typos in spec (#8939) 2022-07-06 00:02:29 +02:00
Hernán Vanzetto
3cde9a0bbc abci-cli: Add process_proposal command to abci-cli (#8901)
* Add `process_proposal` command to abci-cli

* Added process proposal to the 'tutorial' examples

* Added entry in CHANGELOG_PENDING.md

* Allow empty blocks in PrepareProposal, ProcessProposal, and FinalizeBlock

* Fix minimum arguments

* Add tests for empty block

* Updated abci-cli doc

Co-authored-by: Sergio Mena <sergio@informal.systems>
Co-authored-by: Jasmina Malicevic <jasmina.dustinac@gmail.com>
2022-07-05 15:08:58 +02:00
dependabot[bot]
d8de2d85c0 build(deps): Bump github.com/libp2p/go-buffer-pool from 0.0.2 to 0.1.0 (#8932) 2022-07-05 12:12:15 +02:00
Sergio Mena
331860c2a8 Restore Commit to the ABCI++ spec, and other late modifications (#8796)
* Added VoteExtensionsEnableHeight

* Fix reference to `modified`

* Removed old pseudo-code, now included in spec

* Removed markdown warnings in abci++_basic_concepts_002_draft.md

* Restored `Commit` in the Methods section

* Addressed remaining markdown warnings

* Revisited intro and basic concepts section

* Extra pass at all spec sections to recover Commit, and other ABCI++ spec modifications

* Fixed links

* make proto-gen

* Remove _primes_ from spec notation

* Update proto/tendermint/abci/types.proto

Co-authored-by: Callum Waters <cmwaters19@gmail.com>

* Update spec/abci++/abci++_tmint_expected_behavior_002_draft.md

Co-authored-by: Callum Waters <cmwaters19@gmail.com>

* Addressed @cmwaters' comments

* Addressed @angbrav's and @mpoke's comments on spec

* make proto-gen

* Fix MD anchor reference

* Clarify throughout the spec when `ProcessProposal` and `VerifyVoteExtension` are called

* Update spec/abci++/abci++_app_requirements_002_draft.md

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

* Update spec/abci++/abci++_app_requirements_002_draft.md

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

* Update spec/abci++/abci++_app_requirements_002_draft.md

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Update spec/abci++/abci++_basic_concepts_002_draft.md

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Update spec/abci++/abci++_basic_concepts_002_draft.md

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

* Update spec/abci++/abci++_basic_concepts_002_draft.md

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Update spec/abci++/abci++_methods_002_draft.md

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

* Update spec/abci++/abci++_tmint_expected_behavior_002_draft.md

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Addressed comments

* Renamed 'draft' files

* Adapted links to new filenames

* Fixed links and minor cosmetic changes

* Renamed 'byzantine_validators' to 'misbehavior' in ABCI++ spec and protobufs

* make proto-gen

* Renamed 'byzantine_validators' to 'misbehavior' in the code

* Fixed link

* Update spec/abci++/abci++_basic_concepts.md

Co-authored-by: Daniel <daniel.cason@usi.ch>

* Update spec/abci++/abci++_methods.md

Co-authored-by: Daniel <daniel.cason@usi.ch>

* Addressed @cason's comments

* Clarified conditions for `ProcessProposal` call at proposer

Co-authored-by: Callum Waters <cmwaters19@gmail.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
Co-authored-by: Daniel <daniel.cason@usi.ch>
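A condensed Go sketch of the consensus-related method set the revised spec documents, with `Commit` restored alongside the ABCI++ methods; this is an illustration with placeholder request/response types, not the actual generated interface:

```go
package abcisketch

// Request/Response placeholders; the real messages live in the protobufs.
type (
	RequestPrepareProposal      struct{}
	ResponsePrepareProposal     struct{}
	RequestProcessProposal      struct{}
	ResponseProcessProposal     struct{}
	RequestExtendVote           struct{}
	ResponseExtendVote          struct{}
	RequestVerifyVoteExtension  struct{}
	ResponseVerifyVoteExtension struct{}
	RequestFinalizeBlock        struct{}
	ResponseFinalizeBlock       struct{}
	ResponseCommit              struct{}
)

// ConsensusApplication condenses the method set the revised spec describes,
// with Commit kept as its own method after block finalization.
type ConsensusApplication interface {
	PrepareProposal(RequestPrepareProposal) ResponsePrepareProposal
	ProcessProposal(RequestProcessProposal) ResponseProcessProposal
	ExtendVote(RequestExtendVote) ResponseExtendVote
	VerifyVoteExtension(RequestVerifyVoteExtension) ResponseVerifyVoteExtension
	FinalizeBlock(RequestFinalizeBlock) ResponseFinalizeBlock
	Commit() ResponseCommit
}
```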
2022-07-04 16:25:48 +02:00
dependabot[bot]
6d8079559b build(deps): Bump github.com/vektra/mockery/v2 from 2.13.1 to 2.14.0 (#8924)
Bumps [github.com/vektra/mockery/v2](https://github.com/vektra/mockery) from 2.13.1 to 2.14.0.
Release notes, v2.14.0 changelog:
- 8582bd6 Add test for getLocalizedPath
- 686b90c Apply PR comments
- de0cade Merge pull request #474 from RSid/add-flag-documentation
- 1fa7d2f Merge pull request #480 from vektra/LandonTClipp-patch-1
- 4d1f925 Merge pull request #484 from LouisBrunner/fix_generics_with_expecter
- 519a84f Merge pull request #486 from abhinavnair/replace-ioutil
- 2ca0b83 Merge pull request #488 from vektra/getLocalizedPath_test
- cc82d49 Replace deprecated ioutil pkg with os & io
- a420307 Update README.md
- 4e4a96b Update issue template
- fa182fe add documentation to readme
- e4954a2 fix: add support for with-expecter when using generics
- ca9ddd4 update issue template

Commits:
- 4d1f925 Merge pull request #484 from LouisBrunner/fix_generics_with_expecter
- de0cade Merge pull request #474 from RSid/add-flag-documentation
- 2ca0b83 Merge pull request #488 from vektra/getLocalizedPath_test
- 8582bd6 Add test for getLocalizedPath
- 519a84f Merge pull request #486 from abhinavnair/replace-ioutil
- cc82d49 Replace deprecated ioutil pkg with os & io
- e4954a2 fix: add support for with-expecter when using generics
- ca9ddd4 update issue template
- 686b90c Apply PR comments
- 1fa7d2f Merge pull request #480 from vektra/LandonTClipp-patch-1
- Additional commits: https://github.com/vektra/mockery/compare/v2.13.1...v2.14.0


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/vektra/mockery/v2&package-manager=go_modules&previous-version=2.13.1&new-version=2.14.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-07-01 14:17:45 +00:00
Ian Jungyong Um
1d96faa35a p2p: fix typo (#8922)
Fix minor typos in peermanager
2022-07-01 12:44:39 +00:00
William Banfield
921530c352 p2p: use correct context error (#8916)
handshakeCtx is the internal context carrying the timeout. Its error should be used for the error return.
2022-06-30 22:02:59 +00:00
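
As an illustration of the fix above: when a handshake runs under a derived context that carries the timeout, that derived context's `Err()` is the one that explains the failure. A minimal sketch under assumed names (`doHandshake` is a stand-in, not the actual router code):

```go
package p2p

import (
	"context"
	"net"
	"time"
)

// doHandshake stands in for the real peer handshake; illustrative only.
func doHandshake(ctx context.Context, conn net.Conn) error { return nil }

func handshake(ctx context.Context, conn net.Conn, timeout time.Duration) error {
	handshakeCtx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	errCh := make(chan error, 1)
	go func() { errCh <- doHandshake(handshakeCtx, conn) }()

	select {
	case err := <-errCh:
		return err
	case <-handshakeCtx.Done():
		// handshakeCtx carries the timeout, so its Err() reports why we
		// gave up (context.DeadlineExceeded rather than the parent's state).
		return handshakeCtx.Err()
	}
}
```
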
William Banfield
5274f80de4 p2p: fix flakey test due to disconnect cooldown (#8917)
This test was made flakey by #8839. The cooldown period means that the node in the test will not try to reconnect as quickly as the test expects. This change makes the cooldown shorter in the test so that the node quickly reconnects.
2022-06-30 21:48:10 +00:00
William Banfield
47cb30fc1d p2p: set outgoing connections to around 20% of total connections (#8913) 2022-06-30 16:51:16 -04:00
dependabot[bot]
5c26db733b build(deps): Bump github.com/stretchr/testify from 1.7.5 to 1.8.0 (#8907)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.7.5 to 1.8.0.
Commits:
- 181cea6 impr: `CallerInfo` should print full paths to the terminal (#1201)
- cf1284f Allow mock expectations to be ordered (#1106)
- 66eef0e fix: assert.MapSubset (or just support maps in assert.Subset) (#1178)
- 2fab6df Add WithinTimeRange method (#1188)
- See the full diff in the [compare view](https://github.com/stretchr/testify/compare/v1.7.5...v1.8.0).


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/stretchr/testify&package-manager=go_modules&previous-version=1.7.5&new-version=1.8.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-06-30 16:34:43 +00:00
Sam Kleinman
60881f1d06 p2p: stop mconn channel sends without timeout (#8906) 2022-06-30 09:01:02 -04:00
Thane Thomson
3bec1668c6 e2e: Extract Docker-specific functionality (#8754)
* e2e: Extract Docker-specific functionality

Extract Docker-specific functionality and put it behind an interface
that should hopefully, without too much modification, allow us to
implement a Digital Ocean-based infrastructure provider.

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Thread contexts through all potentially long-running functions

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Drop the "API" from interface/struct/var naming

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Simplify function returns

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Rename GenerateConfig to Setup to make it more generic

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Consolidate all infra functions into a single interface

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Localize linter directives

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Look up and use complete node in ShowNodeLogs and TailNodeLogs calls

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Restructure infra provider API into a separate package

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Rename interface again

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Rename exec functions for readability

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Relocate staticcheck lint directive

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Remove staticcheck lint directive

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Make testnet infra struct private

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Only pass testnetDir to Cleanup function

Signed-off-by: Thane Thomson <connect@thanethomson.com>
2022-06-29 08:02:05 -04:00
Sam Kleinman
37f9d59969 log: do not pre-process log results (#8895)
I was digging around in the zerolog functions, and the following functions, which use the `Fields()` method directly in zerolog,

- https://github.com/rs/zerolog/blob/v1.27.0/event.go#L161
- e9344a8c50/fields.go (L15)

have meaningfully equivalent semantics, and our pre-processing of values isn't getting us much (except forcing zerolog to always sort our keys, and nooping in the case when you miss the last field).

With this change we can also pass maps (or, practically, a single map) to the logger, which might be a less wacky-seeming interface.
2022-06-28 14:40:16 +00:00
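
A minimal sketch of the map-based interface described above, assuming zerolog v1.27 (where `Event.Fields` accepts a map of key/value pairs); the field names are illustrative:

```go
package main

import (
	"os"

	"github.com/rs/zerolog"
)

func main() {
	logger := zerolog.New(os.Stdout).With().Timestamp().Logger()

	// Hand the map straight to zerolog and let it do the encoding,
	// instead of pre-processing keys and values ourselves.
	logger.Info().Fields(map[string]interface{}{
		"height": 100,
		"hash":   "deadbeef",
	}).Msg("committed block")
}
```
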
Sam Kleinman
013b46a6c3 libs/strings: move to internal (#8890)
I think we were leaving this library public because the SDK depended
upon it, but the function the SDK was using was one that we'd removed
because *we* weren't using it any more, and I made a PR against the
SDK to clean that up.

ref: https://github.com/cosmos/cosmos-sdk/pull/12368
2022-06-27 21:44:50 +00:00
dependabot[bot]
373b262f35 build(deps): Bump styfle/cancel-workflow-action from 0.9.1 to 0.10.0 (#8881)
Bumps [styfle/cancel-workflow-action](https://github.com/styfle/cancel-workflow-action) from 0.9.1 to 0.10.0.
Release notes (sourced from styfle/cancel-workflow-action's releases):

0.10.0 changes:
- Feat(all): support for considering all workflows with one term: #165
- Chore: rebuild: 74a81dc1a9321342ebc12fa8670cc91600c8c494
- Chore: update `main.yml`: #78
- Bump `@vercel/ncc` from 0.28.6 to 0.29.1: #106
- Bump `@vercel/ncc` from 0.29.1 to 0.29.2: #109
- Bump `@vercel/ncc` from 0.29.2 to 0.30.0: #112
- Bump husky from 7.0.1 to 7.0.2: #110
- Bump prettier from 2.3.2 to 2.4.0: #116
- Bump `@vercel/ncc` from 0.30.0 to 0.31.1: #115
- Bump typescript from 4.3.5 to 4.4.3: #114
- Bump prettier from 2.4.0 to 2.4.1: #117
- Bump `@actions/github` from 4.0.0 to 5.0.0: #89
- Bump `@actions/core` from 1.3.0 to 1.6.0: #118
- Bump typescript from 4.4.3 to 4.4.4: #119
- Bump husky from 7.0.2 to 7.0.4: #120
- Bump typescript from 4.4.4 to 4.5.2: #124
- Bump `@vercel/ncc` from 0.31.1 to 0.32.0: #123
- Bump prettier from 2.4.1 to 2.5.0: #125
- Bump prettier from 2.5.0 to 2.5.1: #126
- Bump `@vercel/ncc` from 0.32.0 to 0.33.0: #127
- Bump typescript from 4.5.2 to 4.5.3: #128
- Bump `@vercel/ncc` from 0.33.0 to 0.33.1: #130
- Bump typescript from 4.5.3 to 4.5.4: #129
- Bump typescript from 4.5.4 to 4.5.5: #131
- Bump node-fetch from 2.6.5 to 2.6.7: #132
- Bump `@vercel/ncc` from 0.33.1 to 0.33.3: #138
- Bump actions/setup-node from 2 to 3.0.0: #140
- Bump actions/checkout from 2 to 3: #141
- Bump typescript from 4.5.5 to 4.6.2: #142
- Bump prettier from 2.5.1 to 2.6.0: #143
- Bump prettier from 2.6.0 to 2.6.1: #145
- Bump actions/setup-node from 3.0.0 to 3.1.0: #146
- Bump typescript from 4.6.2 to 4.6.3: #144
- Bump prettier from 2.6.1 to 2.6.2: #147
- Bump `@actions/github` from 5.0.0 to 5.0.1: #148
- Bump actions/setup-node from 3.1.0 to 3.1.1: #149
- Bump `@vercel/ncc` from 0.33.3 to 0.33.4: #151
- Bump `@actions/core` from 1.6.0 to 1.7.0: #153
- Bump typescript from 4.6.3 to 4.6.4: #154
- Bump husky from 7.0.4 to 8.0.1: #155
- Bump `@actions/core` from 1.7.0 to 1.8.0: #156
- Bump actions/setup-node from 3.1.1 to 3.2.0: #159
- Bump `@actions/github` from 5.0.1 to 5.0.3: #157
- Bump `@actions/core` from 1.8.0 to 1.8.2: #158
- Bump typescript from 4.6.4 to 4.7.2: #160
- Bump `@vercel/ncc` from 0.33.4 to 0.34.0: #161
- Bump typescript from 4.7.2 to 4.7.3: #163

... (truncated)

Commits:
- bb6001c 0.10.0
- 74a81dc chore: rebuild
- d2d941c feat(all): support for considering all workflows with one term (#165)
- 9cd53ca Bump `@actions/core` from 1.8.2 to 1.9.0 (#166)
- 4d9b633 Bump prettier from 2.6.2 to 2.7.1 (#168)
- 89c9307 Bump typescript from 4.7.3 to 4.7.4 (#167)
- f680a01 Bump actions/setup-node from 3.2.0 to 3.3.0 (#164)
- 4ba58d5 Bump typescript from 4.7.2 to 4.7.3 (#163)
- 0d0a9a5 Bump `@vercel/ncc` from 0.33.4 to 0.34.0 (#161)
- 8ca5a00 Bump typescript from 4.6.4 to 4.7.2 (#160)
- Additional commits viewable in the [compare view](https://github.com/styfle/cancel-workflow-action/compare/0.9.1...0.10.0).


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=styfle/cancel-workflow-action&package-manager=github_actions&previous-version=0.9.1&new-version=0.10.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-06-27 13:15:29 +00:00
dependabot[bot]
463cff456b build(deps): Bump bufbuild/buf-setup-action from 1.5.0 to 1.6.0 (#8883)
Bumps [bufbuild/buf-setup-action](https://github.com/bufbuild/buf-setup-action) from 1.5.0 to 1.6.0.
- [Release notes](https://github.com/bufbuild/buf-setup-action/releases)
- [Commits](https://github.com/bufbuild/buf-setup-action/compare/v1.5.0...v1.6.0)

---
updated-dependencies:
- dependency-name: bufbuild/buf-setup-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-27 09:13:25 -04:00
Sergio Mena
27ff2f46b8 Add @sergio-mena and @jmalicevic to list of spec reviewers (#8870)
Co-authored-by: Callum Waters <cmwaters19@gmail.com>
2022-06-24 20:53:31 +02:00
Sam Kleinman
52b6dc19ba p2p: remove dial sleep and provide disconnect cooldown (#8839)
Alternative proposal for #8826
2022-06-24 17:57:49 +00:00
Callum Waters
c4d24eed7d e2e: disable another network test (#8862)
Follow up on: https://github.com/tendermint/tendermint/pull/8849
2022-06-24 16:31:30 +00:00
William Banfield
409e057d73 fix light client select statement (#8871) 2022-06-24 12:10:27 -04:00
dependabot[bot]
6b5053046a build(deps): Bump github.com/stretchr/testify from 1.7.2 to 1.7.5 (#8864)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.7.2 to 1.7.5.
Commits:
- b5ce165 fixing panic in calls to assertion with nil m.mutex (#1212)
- c206b2e Mock can be deadlocked by a panic (#1157)
- 1b73601 suite: correctly set stats on test panic (#1195)
- ba1076d Add .Unset method to mock (#982)
- c31ea03 Support comparing byte slice (#1202)
- 48391ba Fix panic in AssertExpectations for mocks without expectations (#1207)
- 840cb80 arrays value types in a zero-initialized state are considered empty (#1126)
- 07dc7ee Bump actions/setup-go from 3.1.0 to 3.2.0 (#1191)
- c33fc8d Bump actions/checkout from 2 to 3 (#1163)
- 3c33e07 Added Go 1.18.1 as a build/supported version (#1182)
- Additional commits viewable in the [compare view](https://github.com/stretchr/testify/compare/v1.7.2...v1.7.5).


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/stretchr/testify&package-manager=go_modules&previous-version=1.7.2&new-version=1.7.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-06-24 13:12:17 +00:00
William Banfield
5f5e74798b p2p: set empty timeouts to small values. (#8847)
These timeouts default to 'do not time out' if they are not set. This ties up resources, potentially indefinitely. If the node on the other side of the handshake is up but unresponsive, the [handshake call](edec79448a/internal/p2p/router.go (L720)) will _never_ return.

These are proposed values that have not been validated. I intend to validate them in a production setting.
2022-06-23 22:33:21 +00:00
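
A hedged sketch of the pattern described above: substituting a small default when the configured timeout is zero, so a handshake can never block forever. The constant and function names are illustrative, not the actual router code, and the value is a placeholder (the commit notes real values still need validation):

```go
package p2p

import (
	"context"
	"time"
)

// defaultHandshakeTimeout is an illustrative placeholder value.
const defaultHandshakeTimeout = 5 * time.Second

// withHandshakeTimeout replaces a zero ("never time out") setting with
// a small bound before deriving the handshake context.
func withHandshakeTimeout(ctx context.Context, timeout time.Duration) (context.Context, context.CancelFunc) {
	if timeout <= 0 {
		timeout = defaultHandshakeTimeout
	}
	return context.WithTimeout(ctx, timeout)
}
```
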
Callum Waters
fb209136f8 e2e: add tolerance to peer discovery test (#8849) 2022-06-23 19:11:21 +02:00
Sam Kleinman
436a38f876 p2p: track peers by address (#8841) 2022-06-23 10:03:10 -04:00
Sam Kleinman
52b2efb827 e2e: report peer heights in error message (#8843) 2022-06-23 08:40:36 -04:00
Ian Jungyong Um
2e11760fbe p2p: fix typo (#8836) 2022-06-22 09:30:11 +02:00
William Banfield
8860e027a8 p2p: more dial routines (#8827)
The dial routines perform network i/o, which is a blocking call into the kernel. These routines are completely unable to do anything else while the dial occurs, so for most of their lifecycle they are sitting idle waiting for the tcp stack to hand them data. We should increase this value by _a lot_ to enable more concurrent dials. This is unlikely to cause CPU starvation because these routines sit idle most of the time. The current value causes dials to occur _way_ too slowly. 

Below is a graph demonstrating the before and after of this change in a testnetwork with many dead peers. You can observe that the rate that we connect to new, valid peers, is _much_ higher than previously. Change was deployed around the 31 minute mark on the graph.

![image](https://user-images.githubusercontent.com/4561443/174919007-50e4453a-edd8-41d0-97ee-dea8853d57f7.png)
2022-06-22 00:51:09 +00:00
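
To make the concurrency argument above concrete, here is a minimal sketch of a pool of dial workers, each of which mostly sits blocked in the kernel waiting on TCP; the worker count and dialer settings are illustrative, not the values used in the PR:

```go
package p2p

import (
	"context"
	"net"
	"time"
)

// dialWorkers is illustrative: dials block on network I/O, so many
// concurrent dialers add little CPU load but raise the rate at which
// live peers are found among dead addresses.
const dialWorkers = 32

func dialPeers(ctx context.Context, addrs <-chan string, dialed chan<- net.Conn) {
	for i := 0; i < dialWorkers; i++ {
		go func() {
			d := net.Dialer{Timeout: 5 * time.Second}
			for addr := range addrs {
				conn, err := d.DialContext(ctx, "tcp", addr)
				if err != nil {
					continue // likely a dead peer; try the next address
				}
				select {
				case dialed <- conn:
				case <-ctx.Done():
					conn.Close()
					return
				}
			}
		}()
	}
}
```
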
Sam Kleinman
cfd13825e2 p2p: add eviction metrics and cleanup dialing error handling (#8819) 2022-06-21 20:44:14 +00:00
dependabot[bot]
6f168df7e4 build(deps): Bump github.com/spf13/cobra from 1.4.0 to 1.5.0 (#8812)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.4.0 to 1.5.0.
Release notes (sourced from github.com/spf13/cobra's releases):

v1.5.0: Spring 2022 Release 🌥️

Hello everyone! Welcome to another release of cobra. Completions continue to get better and better. This release adds a few really cool new features. We also continue to patch versions of our dependencies as they become available via dependabot. Happy coding!

Active help 👐🏼

Shout out to @marckhouzam for a big value add: Active Help (spf13/cobra#1482). With active help, a program can provide some inline warnings or hints for users as they hit tab. Now, your CLIs can be even more intuitive to use!

Currently active help is only supported for bash V2 and zsh. Marc wrote a whole guide on how to do this, so make sure to give it a good read to learn how you can add this to your cobra code! https://github.com/spf13/cobra/blob/master/active_help.md

Group flags 🧑🏼‍🤝‍🧑🏼

Cobra now has the ability to mark flags as required or exclusive as a *group*. Shout out to our newest maintainer @johnSchnake for this (spf13/cobra#1654)! Let's say you have a `username` flag that *MUST* be partnered with a `password` flag. Well, now, you can enforce those as being required together:

```go
rootCmd.Flags().StringVarP(&u, "username", "u", "", "Username (required if password is set)")
rootCmd.Flags().StringVarP(&pw, "password", "p", "", "Password (required if username is set)")
rootCmd.MarkFlagsRequiredTogether("username", "password")
```

Flags may also be marked as "mutually exclusive" with the `MarkFlagsMutuallyExclusive(string, string ...)` command API. Refer to our [user guide documentation](https://github.com/spf13/cobra/blob/master/user_guide.md) for further info!

Completions 👀

- Add backwards-compatibility tests for legacyArgs() by @marckhouzam in spf13/cobra#1547
- feat: Add how to load completions in your current zsh session by @ondrejsika in spf13/cobra#1608
- Introduce FixedCompletions by @emersion in spf13/cobra#1574
- Add shell completion to flag groups by @marckhouzam in spf13/cobra#1659
- Modify brew prefix path in macOS system by @imxw in spf13/cobra#1719
- perf(bash-v2): use backslash escape string expansion for tab by @scop in spf13/cobra#1682
- style(bash-v2): out is not an array variable, do not refer to it as such by @scop in spf13/cobra#1681
- perf(bash-v2): standard completion optimizations by @scop in spf13/cobra#1683
- style(bash): out is not an array variable, do not refer to it as such by @scop in spf13/cobra#1684
- perf(bash-v2): short-circuit descriptionless candidate lists by @scop in spf13/cobra#1686
- perf(bash-v2): speed up filtering entries with descriptions by @scop in spf13/cobra#1689
- perf(bash-v2): speed up filtering menu-complete descriptions by @scop in spf13/cobra#1692
- fix(bash-v2): skip empty completions when filtering descriptions by @scop in spf13/cobra#1691
- perf(bash-v2): read directly to COMPREPLY on descriptionless short circuit by @scop in spf13/cobra#1700
- fix: Don't complete _command on zsh by @twpayne in spf13/cobra#1690
- Improve fish_completions code quality by @t29kida in spf13/cobra#1515
- Fix handling of descriptions for bash v3 by @marckhouzam in spf13/cobra#1735
- undefined or nil Args default to ArbitraryArgs by @umarcor in spf13/cobra#1612
- Add Command.SetContext by @joshcarp in spf13/cobra#1551
- Wrap printf tab with quotes by @PapaCharlie in spf13/cobra#1665

Documentation 📝

- Fixed typos in completions docs by @cuishuang in spf13/cobra#1625
- Removed `CHANGELOG.md` as it isn't updated by @johnSchnake in spf13/cobra#1634
- Minor typo fix in `shell_completion.md` by @danieldn in spf13/cobra#1678
- Changed branch name in the cobra generator link to 'main' by @skywalker2909 in spf13/cobra#1645
- Fix Command.Context comment by @katexochen in spf13/cobra#1639

... (truncated)

Commits:
- 06b06a9 Bump golangci/golangci-lint-action from 3.1.0 to 3.2.0 (#1697)
- 5f2ec3c Update shell completion to respect flag groups (#1659)
- b9ca594 use errors.Is() to check for errors (#1730)
- ea94a3d undefined or nil Args default to ArbitraryArgs (#1612)
- 7c9831d Fix handling of descriptions for bash v3 (#1735)
- ed7bb9d Add unit test for fish completion (#1515)
- f464d6c Add Active Help support (#1482)
- 7dc8b00 Bump actions/setup-go from 2 to 3 (#1660)
- 87ea180 Modify brew prefix path in macOS system (#1719)
- ca8e3c2 Add Pulumi as a project using cobra (#1720)
- Additional commits viewable in the [compare view](https://github.com/spf13/cobra/compare/v1.4.0...v1.5.0).


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/spf13/cobra&package-manager=go_modules&previous-version=1.4.0&new-version=1.5.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-06-21 17:46:01 +00:00
Sam Kleinman
28d3239958 p2p: wake dialing thread after sleep (#8803) 2022-06-20 15:47:56 +00:00
dependabot[bot]
acf97128f3 build(deps): Bump github.com/prometheus/common from 0.34.0 to 0.35.0 (#8800)
Bumps [github.com/prometheus/common](https://github.com/prometheus/common) from 0.34.0 to 0.35.0.
Commits:
- 26d4974 Add more mimetypes (#385)
- 627089d Set minimum version for go to 1.16 (#372)
- See the full diff in the [compare view](https://github.com/prometheus/common/compare/v0.34.0...v0.35.0).


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/prometheus/common&package-manager=go_modules&previous-version=0.34.0&new-version=0.35.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-06-20 12:04:19 +00:00
dependabot[bot]
2382b5c364 build(deps): Bump github.com/adlio/schema from 1.3.0 to 1.3.3 (#8798)
Bumps [github.com/adlio/schema](https://github.com/adlio/schema) from 1.3.0 to 1.3.3.
Release notes (sourced from github.com/adlio/schema's releases):

v1.3.3: Full Changelog at https://github.com/adlio/schema/compare/v1.3.1...v1.3.3

v1.3.1: What's Changed
- Update ory/dockertest Dependency to 3.9.1 by @adlio in adlio/schema#17

Full Changelog: https://github.com/adlio/schema/compare/v1.3.0...v1.3.1

Commits:
- 4775fe4 Fix badge link
- 95a1695 Update build badge.
- 41e0d31 CircleCI Ubuntu version.
- f5210da CircleCI image update
- 0a9113e Module updates
- 33628d5 Add circleci
- a81a7a3 Merge branch 'runc-112'
- 128f3b3 Fix go.mod and go.sum
- 471923f Merge branch 'runc-112'
- 40c6956 Fix ory/dockertest
- Additional commits viewable in the [compare view](https://github.com/adlio/schema/compare/v1.3.0...v1.3.3).


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/adlio/schema&package-manager=go_modules&previous-version=1.3.0&new-version=1.3.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-06-20 08:54:09 +00:00
Ian Jungyong Um
e3e162ff10 p2p: fix typo (#8793)
Fix the typo in router
2022-06-19 15:26:25 +00:00
Sam Kleinman
4d820ff4f5 p2p: peer score should not wrap around (#8790) 2022-06-17 22:27:38 +00:00
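A guess at the shape of the fix above, sketched as a saturating increment: clamping at a maximum keeps an unsigned score from wrapping past zero. The type and bound here are hypothetical, not the real peer manager's:

```go
package p2p

// PeerScore and MaxPeerScore are hypothetical stand-ins for the real types.
type PeerScore uint16

const MaxPeerScore PeerScore = 100

// addScore saturates at MaxPeerScore instead of wrapping around.
func addScore(s, delta PeerScore) PeerScore {
	if delta >= MaxPeerScore || s >= MaxPeerScore-delta {
		return MaxPeerScore
	}
	return s + delta
}
```
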
Sam Kleinman
82c1372f9e abci+test/e2e/app: add mutex for new methods (#8577)
These methods should be protected by a mutex.
2022-06-17 21:22:40 +00:00
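
A minimal sketch of the locking pattern named above, guarding shared application state so concurrent ABCI method calls cannot race; the struct and fields are hypothetical, not the e2e app's actual layout:

```go
package app

import "sync"

// Application is a hypothetical stand-in for the e2e test app.
type Application struct {
	mu    sync.Mutex
	state map[string]string
}

func NewApplication() *Application {
	return &Application{state: make(map[string]string)}
}

// Set is representative of a method touching shared state: it holds the
// mutex for the duration of the mutation.
func (a *Application) Set(key, value string) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.state[key] = value
}
```
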
Sam Kleinman
0ac03468d8 p2p: track peers stored on startup (#8787) 2022-06-17 18:10:10 +00:00
Sam Kleinman
9e5b13725d p2p: peer store and dialing changes (#8737) 2022-06-17 08:02:10 -04:00
Sergio Mena
a4f29bfd44 Don't check PBTS-timeliness when in replay mode (#8774)
Closes #8781

A temporary fix for this issue, so that e2e tests don't fail on it and potentially leave other problems uncovered.
2022-06-16 19:31:17 +00:00
Adolfo Olivera
7cf09399bb Fix typo in Using Tendermint section (#8780)
Modify using-tendermint.md to replace unsafe_reset_all with unsafe-reset-all. Closes #8779.
2022-06-16 18:36:58 +00:00
Callum Waters
8854ce4e68 e2e: reactivate network test (#8635) 2022-06-16 18:04:10 +02:00
Emmanuel T Odeke
56e329aa9e cmd/tendermint/commands/debug: guard against PID int overflows (#8764) 2022-06-15 20:46:50 -07:00
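A sketch of the kind of guard the title describes, assuming the PID arrives as a string: parsing with an explicit 32-bit size makes overflow an error instead of a silent truncation. The function name is hypothetical, not the actual debug command's code:

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePID rejects values that would overflow a 32-bit PID.
func parsePID(s string) (int, error) {
	pid, err := strconv.ParseInt(s, 10, 32) // returns a range error on overflow
	if err != nil {
		return 0, fmt.Errorf("invalid PID %q: %w", s, err)
	}
	if pid <= 0 {
		return 0, fmt.Errorf("PID must be positive, got %d", pid)
	}
	return int(pid), nil
}
```
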
Sam Kleinman
1062ae73d6 e2e/ci: add extra split to 0.36 (#8770) 2022-06-15 09:04:13 -04:00
dependabot[bot]
134bfefbe5 build(deps): Bump github.com/vektra/mockery/v2 from 2.13.0 to 2.13.1 (#8766)
Bumps [github.com/vektra/mockery/v2](https://github.com/vektra/mockery) from 2.13.0 to 2.13.1.
Release notes (sourced from github.com/vektra/mockery/v2's releases):

v2.13.1 changelog:
- f04b040 Fix infinity mocking caused interface in mock
- 9d7c819 Merge pull request #472 from grongor/fix-infinite-mocking

Commits: see the full diff in the [compare view](https://github.com/vektra/mockery/compare/v2.13.0...v2.13.1).


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/vektra/mockery/v2&package-manager=go_modules&previous-version=2.13.0&new-version=2.13.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-06-15 10:46:37 +00:00
Roman
f0b0f34f3f refactor: improve string representation for a vote against the proposal (#8745) 2022-06-15 12:32:22 +02:00
Sam Kleinman
51b3f111dc p2p: fix mconn transport accept test (#8762)
Fix a minor test incongruity missed earlier.
2022-06-14 23:48:48 +00:00
Sam Kleinman
979a6a1b13 p2p: accept should not abort on first error (#8759) 2022-06-14 19:12:53 -04:00
Sam Kleinman
bf1cb89bb7 Revert "p2p: self-add node should not error (tendermint#8753)" (#8757) 2022-06-14 20:55:10 +00:00
Sam Kleinman
7971f4a2fc p2p: self-add node should not error (#8753) 2022-06-14 16:45:05 +00:00
dependabot[bot]
a2908c29d5 build(deps): Bump github.com/vektra/mockery/v2 from 2.12.3 to 2.13.0 (#8748)
Bumps [github.com/vektra/mockery/v2](https://github.com/vektra/mockery) from 2.12.3 to 2.13.0.
Release notes (sourced from github.com/vektra/mockery/v2's releases):

v2.13.0 changelog:
- f9f6d38 Merge pull request #477 from LandonTClipp/generics_release
- 6498bd6 Updating dependencies

v2.13.0-beta.1 changelog:
- 9dba1fd Merge pull request #454 from Gevrai/gejo-move-expecter-test
- cde38d9 Move generated ExpecterTest to mocks directory

v2.13.0-beta.0 changelog:
- dc5539e Add support for generating mocks for generic interfaces
- 33c4bf3 Generics mocking fixes
- a727d70 Genrics support: rename and comment methods
- 4c0f6c8 Merge conflict resolution
- 46c61f0 Merge pull request #456 from cruickshankpg/mock-generic-interfaces
- ed55a47 Update x/tools to pick up fix for golang/go#51629

Commits: the changelog entries above, plus the full diff, are viewable in the [compare view](https://github.com/vektra/mockery/compare/v2.12.3...v2.13.0).


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/vektra/mockery/v2&package-manager=go_modules&previous-version=2.12.3&new-version=2.13.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-06-14 11:06:36 +00:00
Jeeyong Um
a4cf8939b8 mempool: fix error message check in test (#8750) 2022-06-14 06:54:35 -04:00
Jeeyong Um
21bbbe3e2a mempool: fix typos in test (#8746) 2022-06-14 10:50:55 +02:00
Marko
82907c84fa sink/psql: json marshal instead of proto (#8637)
Storing transaction records as JSON makes it simpler for clients of the index.
2022-06-13 10:20:54 -07:00
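With the records stored as plain JSON, a client of the index can decode them with any JSON library instead of linking the protobuf definitions. A minimal sketch of such a client, assuming the sink's `tx_results` table and its `tx_result` column plus a placeholder Postgres DSN:

```go
package main

import (
	"database/sql"
	"encoding/json"
	"fmt"
	"log"

	_ "github.com/lib/pq" // Postgres driver; any database/sql driver works
)

func main() {
	// Placeholder DSN; substitute your own connection parameters.
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/tendermint?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// After this change the tx_result column holds JSON rather than protobuf.
	var raw []byte
	if err := db.QueryRow(`SELECT tx_result FROM tx_results LIMIT 1`).Scan(&raw); err != nil {
		log.Fatal(err)
	}

	var result map[string]any
	if err := json.Unmarshal(raw, &result); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("decoded tx result: %v\n", result)
}
```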
Jasmina Malicevic
06175129ed abci-cli: added PrepareProposal command to cli (#8656)
* Prepare proposal CLI
2022-06-13 14:09:57 +02:00
Sam Kleinman
7172862786 e2e: split test cases to avoid hitting timeouts (#8741) 2022-06-11 08:21:55 -04:00
Callum Waters
b0e48ca5f3 remove unused config variables (#8738) 2022-06-10 17:17:33 +02:00
Sam Kleinman
8d9d19a113 e2e/generator: enlarge nightly test suite (#8731) 2022-06-09 18:22:47 -04:00
M. J. Fromberger
6983885835 Update config migration test data for v0.36.x (#8727)
This is a trivial update, but also a convenient way to make sure the mergify settings are working.
2022-06-08 20:34:14 +00:00
dependabot[bot]
bb0737ee3c build(deps): Bump github.com/rs/zerolog from 1.26.1 to 1.27.0 (#8724)
Bumps [github.com/rs/zerolog](https://github.com/rs/zerolog) from 1.26.1 to 1.27.0.
<details>
<summary>Commits</summary>
<ul>
<li><a href="e9344a8c50"><code>e9344a8</code></a> docs: add an example for Lshortfile-like implementation of CallerMarshalFunc ...</li>
<li><a href="263b0bde36"><code>263b0bd</code></a> <a href="https://github-redirect.dependabot.com/rs/zerolog/issues/411">#411</a> Add FieldsExclude parameter to console writer (<a href="https://github-redirect.dependabot.com/rs/zerolog/issues/418">#418</a>)</li>
<li><a href="588a61c2df"><code>588a61c</code></a> ctx: Modify WithContext to use a non-pointer receiver (<a href="https://github-redirect.dependabot.com/rs/zerolog/issues/409">#409</a>)</li>
<li><a href="361cdf616a"><code>361cdf6</code></a> Remove extra space in console when there is no message (<a href="https://github-redirect.dependabot.com/rs/zerolog/issues/413">#413</a>)</li>
<li><a href="fc26014bd4"><code>fc26014</code></a> MsgFunc function added to Event (<a href="https://github-redirect.dependabot.com/rs/zerolog/issues/406">#406</a>)</li>
<li><a href="025f9f1819"><code>025f9f1</code></a> journald: don't call Enabled before each write (<a href="https://github-redirect.dependabot.com/rs/zerolog/issues/407">#407</a>)</li>
<li><a href="3efdd82416"><code>3efdd82</code></a> call done function when logger is disabled (<a href="https://github-redirect.dependabot.com/rs/zerolog/issues/393">#393</a>)</li>
<li><a href="c0c2e11fc3"><code>c0c2e11</code></a> Consistent casing, redundancy, and spelling/grammar (<a href="https://github-redirect.dependabot.com/rs/zerolog/issues/391">#391</a>)</li>
<li><a href="665519c4da"><code>665519c</code></a> Fix ConsoleWriter color on Windows (<a href="https://github-redirect.dependabot.com/rs/zerolog/issues/390">#390</a>)</li>
<li><a href="0c8d3c0b10"><code>0c8d3c0</code></a> move the lint command to its own package (<a href="https://github-redirect.dependabot.com/rs/zerolog/issues/389">#389</a>)</li>
<li>See full diff in <a href="https://github.com/rs/zerolog/compare/v1.26.1...v1.27.0">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/rs/zerolog&package-manager=go_modules&previous-version=1.26.1&new-version=1.27.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-06-08 13:09:16 +00:00
M. J. Fromberger
e84ca617cf Set up automation settings for the v0.36.x backport branch. (#8714)
- Add Mergify settings for backport labels.
- Add e2e nightly workflow for v0.36.x.
- Update documentation on e2e workflows.
- Add v0.36.x to the Dependabot configs.
2022-06-07 12:25:16 -07:00
Sam Kleinman
931c98f7ad rpc: always close http bodies (#8712)
Closes #8686
2022-06-07 16:40:22 +00:00
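For context, the Go discipline this fix enforces: every `*http.Response` body must be closed on all return paths, and drained if the underlying connection should be reused. A generic sketch of the pattern (not the actual RPC client code):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func fetch(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	// Close the body on every path; leaking bodies exhausts file
	// descriptors and keeps connections from being returned to the pool.
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		// Drain what remains so the connection can be reused.
		io.Copy(io.Discard, resp.Body)
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	fmt.Printf("read %d bytes\n", len(body))
	return nil
}

func main() {
	if err := fetch("https://example.com"); err != nil {
		log.Fatal(err)
	}
}
```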
Elias Naur
3bb68b49f5 ci(fuzz): remove Go 1.18 workaround for OSS-Fuzz (#8711) 2022-06-07 06:11:06 -07:00
dependabot[bot]
618b841a4b build(deps): Bump github.com/stretchr/testify from 1.7.1 to 1.7.2 (#8708)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.7.1 to 1.7.2.
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.7.1...v1.7.2)

---
updated-dependencies:
- dependency-name: github.com/stretchr/testify
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-07 04:59:00 -04:00
M. J. Fromberger
3e97479ab8 Fix a "broken" markdown link. (#8706)
The link actually works in context because of the auto-redirect from GitHub,
but the link checker doesn't follow them. Splice out the redirect.
2022-06-07 08:13:05 +00:00
dependabot[bot]
85d1946602 build(deps): Bump bufbuild/buf-setup-action from 1.4.0 to 1.5.0 (#8701)
Bumps [bufbuild/buf-setup-action](https://github.com/bufbuild/buf-setup-action) from 1.4.0 to 1.5.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/bufbuild/buf-setup-action/releases">bufbuild/buf-setup-action's releases</a>.</em></p>
<blockquote>
<h2>v1.5.0</h2>
<ul>
<li>Set the default buf version to v1.4.0</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="8cc6b77780"><code>8cc6b77</code></a> Update to v1.5.0</li>
<li>See full diff in <a href="https://github.com/bufbuild/buf-setup-action/compare/v1.4.0...v1.5.0">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=bufbuild/buf-setup-action&package-manager=github_actions&previous-version=1.4.0&new-version=1.5.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-06-06 17:17:28 +00:00
327 changed files with 24057 additions and 5112 deletions

.github/CODEOWNERS vendored

@@ -7,7 +7,7 @@
# global owners are only requested if there isn't a more specific
# codeowner specified below. For this reason, the global codeowners
# are often repeated in package-level definitions.
* @ebuchman @cmwaters @tychoish @williambanfield @creachadair @sergio-mena @jmalicevic @thanethomson @ancazamfir
* @ebuchman @cmwaters @tychoish @williambanfield @creachadair @sergio-mena @jmalicevic @thanethomson @ancazamfir @samricotta
# Spec related changes can be approved by the protocol design team
/spec @josef-widder @milosevic @cason
/spec @josef-widder @milosevic @cason @sergio-mena @jmalicevic


@@ -4,7 +4,6 @@ updates:
directory: "/"
schedule:
interval: weekly
day: monday
target-branch: "master"
open-pull-requests-limit: 10
labels:
@@ -15,7 +14,16 @@ updates:
directory: "/"
schedule:
interval: weekly
day: monday
target-branch: "v0.34.x"
open-pull-requests-limit: 10
labels:
- T:dependencies
- S:automerge
- package-ecosystem: github-actions
directory: "/"
schedule:
interval: weekly
target-branch: "v0.35.x"
open-pull-requests-limit: 10
labels:
@@ -26,8 +34,7 @@ updates:
directory: "/"
schedule:
interval: weekly
day: monday
target-branch: "v0.34.x"
target-branch: "v0.36.x"
open-pull-requests-limit: 10
labels:
- T:dependencies
@@ -37,7 +44,6 @@ updates:
directory: "/docs"
schedule:
interval: weekly
day: monday
open-pull-requests-limit: 10
###################################
@@ -47,7 +53,7 @@ updates:
- package-ecosystem: gomod
directory: "/"
schedule:
interval: daily
interval: weekly
target-branch: "master"
open-pull-requests-limit: 10
labels:
@@ -57,7 +63,7 @@ updates:
- package-ecosystem: gomod
directory: "/"
schedule:
interval: daily
interval: weekly
target-branch: "v0.34.x"
open-pull-requests-limit: 10
labels:
@@ -67,9 +73,19 @@ updates:
- package-ecosystem: gomod
directory: "/"
schedule:
interval: daily
interval: weekly
target-branch: "v0.35.x"
open-pull-requests-limit: 10
labels:
- T:dependencies
- S:automerge
- package-ecosystem: gomod
directory: "/"
schedule:
interval: weekly
target-branch: "v0.36.x"
open-pull-requests-limit: 10
labels:
- T:dependencies
- S:automerge

.github/mergify.yml vendored

@@ -33,3 +33,11 @@ pull_request_rules:
backport:
branches:
- v0.35.x
- name: backport patches to v0.36.x branch
conditions:
- base=master
- label=S:backport-to-v0.36.x
actions:
backport:
branches:
- v0.36.x


@@ -49,7 +49,7 @@ jobs:
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Publish to Docker Hub
uses: docker/build-push-action@v3.0.0
uses: docker/build-push-action@v3.1.0
with:
context: .
file: ./DOCKER/Dockerfile


@@ -28,7 +28,7 @@ jobs:
- name: Install generator dependencies
run: |
apk add --no-cache make bash git npm
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
# We need to fetch full history so the backport branches for previous
# versions will be available for the build.
@@ -37,7 +37,7 @@ jobs:
run: |
git config --global --add safe.directory "$PWD"
make build-docs
- uses: actions/upload-artifact@v3
- uses: actions/upload-artifact@v4
with:
name: build-output
path: ~/output/
@@ -49,8 +49,8 @@ jobs:
permissions:
contents: write
steps:
- uses: actions/checkout@v3
- uses: actions/download-artifact@v3
- uses: actions/checkout@v4
- uses: actions/download-artifact@v4
with:
name: build-output
path: ~/output


@@ -11,7 +11,7 @@ jobs:
strategy:
fail-fast: false
matrix:
group: ['00', '01', '02', '03']
group: ['00', '01', '02', '03', '04']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
@@ -29,7 +29,7 @@ jobs:
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 4 -d networks/nightly/
run: ./build/generator -g 5 -d networks/nightly/
- name: Run ${{ matrix.p2p }} p2p testnets
working-directory: test/e2e


@@ -1,8 +1,7 @@
# Runs randomly generated E2E testnets nightly
# on the 0.34.x release branch
# Runs randomly generated E2E testnets nightly on the 0.34.x branch.
# !! If you change something in this file, you probably want
# to update the e2e-nightly-master workflow as well!
# !! This file should be kept in sync with the e2e-nightly-master.yml file,
# modulo changes to the version labels.
name: e2e-nightly-34x
on:


@@ -1,7 +1,7 @@
# Runs randomly generated E2E testnets nightly on v0.35.x.
# Runs randomly generated E2E testnets nightly on the v0.35.x branch.
# !! If you change something in this file, you probably want
# to update the e2e-nightly-master workflow as well!
# !! This file should be kept in sync with the e2e-nightly-master.yml file,
# modulo changes to the version labels.
name: e2e-nightly-35x
on:

.github/workflows/e2e-nightly-36x.yml vendored (new file)

@@ -0,0 +1,74 @@
# Runs randomly generated E2E testnets nightly on the v0.36.x branch.
# !! This file should be kept in sync with the e2e-nightly-master.yml file,
# modulo changes to the version labels.
name: e2e-nightly-36x
on:
schedule:
- cron: '0 2 * * *'
jobs:
e2e-nightly-test:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
group: ['00', '01', '02', '03', '04']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v3
with:
go-version: '1.18'
- uses: actions/checkout@v3
with:
ref: 'v0.36.x'
- name: Build
working-directory: test/e2e
# Run make jobs in parallel, since we can't run steps in parallel.
run: make -j2 docker generator runner tests
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 5 -d networks/nightly
- name: Run testnets in group ${{ matrix.group }}
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml
e2e-nightly-fail-2:
needs: e2e-nightly-test
if: ${{ failure() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly E2E Tests
SLACK_ICON_EMOJI: ':skull:'
SLACK_COLOR: danger
SLACK_MESSAGE: Nightly E2E tests failed on v0.36.x
SLACK_FOOTER: ''
e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test
if: ${{ success() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on success
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly E2E Tests
SLACK_ICON_EMOJI: ':white_check_mark:'
SLACK_COLOR: good
SLACK_MESSAGE: Nightly E2E tests passed on v0.36.x
SLACK_FOOTER: ''


@@ -1,7 +1,8 @@
# Runs randomly generated E2E testnets nightly on master
# !! If you change something in this file, you probably want
# to update the e2e-nightly-34x workflow as well!
# !! Relevant changes to this file should be propagated to the e2e-nightly-<V>x
# files for the supported backport branches, when appropriate, modulo version
# markers.
name: e2e-nightly-master
on:
@@ -15,7 +16,7 @@ jobs:
strategy:
fail-fast: false
matrix:
group: ['00', '01', '02', '03']
group: ['00', '01', '02', '03', "04"]
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
@@ -33,7 +34,7 @@ jobs:
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 4 -d networks/nightly/
run: ./build/generator -g 5 -d networks/nightly/
- name: Run ${{ matrix.p2p }} p2p testnets
working-directory: test/e2e


@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 3
steps:
- uses: styfle/cancel-workflow-action@0.9.1
- uses: styfle/cancel-workflow-action@0.10.0
with:
workflow_id: 1041851,1401230,2837803
access_token: ${{ github.token }}


@@ -15,7 +15,7 @@ jobs:
timeout-minutes: 5
steps:
- uses: actions/checkout@v3
- uses: bufbuild/buf-setup-action@v1.5.0
- uses: bufbuild/buf-setup-action@v1.6.0
- uses: bufbuild/buf-lint-action@v1
with:
input: 'proto'


@@ -32,44 +32,3 @@ jobs:
run: |
make test-group-${{ matrix.part }} NUM_SPLIT=6
if: env.GIT_DIFF
- uses: actions/upload-artifact@v3
with:
name: "${{ github.sha }}-${{ matrix.part }}-coverage"
path: ./build/${{ matrix.part }}.profile.out
upload-coverage-report:
runs-on: ubuntu-latest
needs: tests
steps:
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go
"!test/"
go.mod
go.sum
Makefile
- uses: actions/download-artifact@v3
with:
name: "${{ github.sha }}-00-coverage"
if: env.GIT_DIFF
- uses: actions/download-artifact@v3
with:
name: "${{ github.sha }}-01-coverage"
if: env.GIT_DIFF
- uses: actions/download-artifact@v3
with:
name: "${{ github.sha }}-02-coverage"
if: env.GIT_DIFF
- uses: actions/download-artifact@v3
with:
name: "${{ github.sha }}-03-coverage"
if: env.GIT_DIFF
- run: |
cat ./*profile.out | grep -v "mode: set" >> coverage.txt
if: env.GIT_DIFF
- uses: codecov/codecov-action@v3.1.0
with:
file: ./coverage.txt
if: env.GIT_DIFF


@@ -2,6 +2,88 @@
Friendly reminder: We have a [bug bounty program](https://hackerone.com/cosmos).
## v0.35.9
July 20, 2022
This release fixes a deadlock that could occur in some cases when using the
priority mempool with the ABCI socket client.
### BUG FIXES
- [mempool] [\#9030](https://github.com/tendermint/tendermint/pull/9030) rework lock discipline to mitigate callback deadlocks (@creachadair)
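The general shape of such a fix, for illustration: never invoke a callback while holding a lock the callback may try to re-acquire. A minimal sketch with hypothetical names, not the actual mempool code:

```go
package main

import (
	"fmt"
	"sync"
)

type notifier struct {
	mtx sync.Mutex
	cb  func(string)
}

func (n *notifier) SetCallback(cb func(string)) {
	n.mtx.Lock()
	defer n.mtx.Unlock()
	n.cb = cb
}

// Notify copies the callback under the lock and releases the lock BEFORE
// invoking it, so a callback that re-enters the notifier cannot deadlock.
func (n *notifier) Notify(event string) {
	n.mtx.Lock()
	cb := n.cb
	n.mtx.Unlock()

	if cb != nil {
		cb(event)
	}
}

func main() {
	n := &notifier{}
	n.SetCallback(func(e string) {
		fmt.Println("got event:", e)
		n.SetCallback(nil) // re-entrant call is safe: the lock is not held here
	})
	n.Notify("checktx-done")
}
```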
## v0.35.8
July 12, 2022
Special thanks to external contributors on this release: @joeabbey
This release fixes an unbounded heap growth issue in the implementation of the
priority mempool, and brings configuration, logging, and peer-dialing
improvements to the non-legacy p2p stack. It also adds a new opt-in
"simple-priority" value for the `p2p.queue-type` setting, which should improve
gossip performance for non-legacy peer networks.
### BREAKING CHANGES
- CLI/RPC/Config
- [node] [\#8902](https://github.com/tendermint/tendermint/pull/8902) Always start blocksync and avoid misconfiguration (@tychoish)
### FEATURES
- [cli] [\#8675](https://github.com/tendermint/tendermint/pull/8675) Add command to force compact goleveldb databases (@cmwaters)
### IMPROVEMENTS
- [p2p] [\#8914](https://github.com/tendermint/tendermint/pull/8914) [\#8875](https://github.com/tendermint/tendermint/pull/8875) Improvements to peer dialing (backported). (@tychoish)
- [p2p] [\#8820](https://github.com/tendermint/tendermint/pull/8820) add eviction metrics and cleanup dialing error handling (backport #8819) (@tychoish)
- [logging] [\#8896](https://github.com/tendermint/tendermint/pull/8896) Do not pre-process log results (backport #8895). (@tychoish)
- [p2p] [\#8956](https://github.com/tendermint/tendermint/pull/8956) Simpler priority queue (backport #8929). (@tychoish)
### BUG FIXES
- [mempool] [\#8944](https://github.com/tendermint/tendermint/pull/8944) Fix unbounded heap growth in the priority mempool. (@creachadair)
- [p2p] [\#8869](https://github.com/tendermint/tendermint/pull/8869) Set empty timeouts to configured values. (backport #8847). (@williambanfield)
## v0.35.7
June 16, 2022
### BUG FIXES
- [p2p] [\#8692](https://github.com/tendermint/tendermint/pull/8692) scale the number of stored peers by the configured maximum connections (#8684)
- [rpc] [\#8715](https://github.com/tendermint/tendermint/pull/8715) always close http bodies (backport #8712)
- [p2p] [\#8760](https://github.com/tendermint/tendermint/pull/8760) accept should not abort on first error (backport #8759)
### BREAKING CHANGES
- P2P Protocol
- [p2p] [\#8737](https://github.com/tendermint/tendermint/pull/8737) Introduce "inactive" peer label to avoid re-dialing incompatible peers. (@tychoish)
- [p2p] [\#8737](https://github.com/tendermint/tendermint/pull/8737) Increase frequency of dialing attempts to reduce latency for peer acquisition. (@tychoish)
- [p2p] [\#8737](https://github.com/tendermint/tendermint/pull/8737) Improvements to peer scoring and sorting to gossip a greater variety of peers during PEX. (@tychoish)
- [p2p] [\#8737](https://github.com/tendermint/tendermint/pull/8737) Track incoming and outgoing peers separately to ensure more peer slots open for incoming connections. (@tychoish)
## v0.35.6
June 3, 2022
### FEATURES
- [migrate] [\#8672](https://github.com/tendermint/tendermint/pull/8672) provide function for database production (backport #8614) (@tychoish)
### BUG FIXES
- [consensus] [\#8651](https://github.com/tendermint/tendermint/pull/8651) restructure peer catchup sleep (@tychoish)
- [pex] [\#8657](https://github.com/tendermint/tendermint/pull/8657) align max address thresholds (@cmwaters)
- [cmd] [\#8668](https://github.com/tendermint/tendermint/pull/8668) don't use the global config for reset commands (@cmwaters)
- [p2p] [\#8681](https://github.com/tendermint/tendermint/pull/8681) shed peers from store from other networks (backport #8678) (@tychoish)
## v0.35.5
May 26, 2022
@@ -665,7 +747,7 @@ Special thanks to external contributors on this release: @james-ray, @fedekunze,
- [light] [\#5347](https://github.com/tendermint/tendermint/pull/5347) `NewClient`, `NewHTTPClient`, `VerifyHeader` and `VerifyLightBlockAtHeight` now accept `context.Context` as 1st param (@melekes)
- [merkle] [\#5193](https://github.com/tendermint/tendermint/pull/5193) `HashFromByteSlices` and `ProofsFromByteSlices` now return a hash for empty inputs, following RFC6962 (@erikgrinaker)
- [proto] [\#5025](https://github.com/tendermint/tendermint/pull/5025) All proto files have been moved to `/proto` directory. (@marbar3778)
- Using the recommended file layout from buf, [see here for more info](https://buf.build/docs/lint-checkers#file_layout)
- Using the recommended file layout from buf, [see here for more info](https://docs.buf.build/lint/rules) <!-- markdown-link-check-disable-line -->
- [rpc/client] [\#4947](https://github.com/tendermint/tendermint/pull/4947) `Validators`, `TxSearch` `page`/`per_page` params become pointers (@melekes)
- `UnconfirmedTxs` `limit` param is a pointer
- [rpc/jsonrpc/server] [\#5141](https://github.com/tendermint/tendermint/pull/5141) Remove `WriteRPCResponseArrayHTTP` (use `WriteRPCResponseHTTP` instead) (@melekes)


@@ -31,6 +31,9 @@ Special thanks to external contributors on this release:
- [abci] \#7984 Remove the locks preventing concurrent use of ABCI applications by Tendermint. (@tychoish)
- [abci] \#8605 Remove info, log, events, gasUsed and mempoolError fields from ResponseCheckTx as they are not used by Tendermint. (@jmalicevic)
- [abci] \#8664 Move `app_hash` parameter from `Commit` to `FinalizeBlock`. (@sergio-mena)
- [abci] \#8656 Added cli command for `PrepareProposal`. (@jmalicevic)
- [sink/psql] \#8637 tx_results emitted from the psql sink are now JSON encoded; previously they were protobuf encoded
- [abci] \#8901 Added cli command for `ProcessProposal`. (@hvanz)
- P2P Protocol
@@ -98,3 +101,4 @@ Special thanks to external contributors on this release:
- [cli] \#8276 scmigrate: ensure target key is correctly renamed. (@creachadair)
- [cli] \#8294 keymigrate: ensure block hash keys are correctly translated. (@creachadair)
- [cli] \#8352 keymigrate: ensure transaction hash keys are correctly translated. (@creachadair)
- (indexer) \#8625 Fix overriding tx index of duplicated txs.


@@ -224,7 +224,7 @@ build-docs:
@cd docs && \
while read -r branch path_prefix; do \
( git checkout $${branch} && npm ci --quiet && \
VUEPRESS_BASE="/$${path_prefix}/" npm run build --quiet ) ; \
VUEPRESS_BASE="/$${path_prefix}/" NODE_OPTIONS="--openssl-legacy-provider" npm run build --quiet ) ; \
mkdir -p ~/output/$${path_prefix} ; \
cp -r .vuepress/dist/* ~/output/$${path_prefix}/ ; \
cp ~/output/$${path_prefix}/index.html ~/output ; \


@@ -36,20 +36,17 @@ Please do not depend on master as your production branch. Use [releases](https:/
Tendermint has been used in production in private and public environments, most notably in the blockchains of the Cosmos Network. We haven't released v1.0 yet since we are making breaking changes to the protocol and the APIs.
See below for more details about [versioning](#versioning).
In any case, if you intend to run Tendermint in production, we're happy to help. You can
contact us [over email](mailto:hello@interchain.io) or [join the chat](https://discord.gg/cosmosnetwork).
In any case, if you intend to run Tendermint in production, we're happy to help.
You can contact us [over email](mailto:hello@newtendermint.org) or [join the
chat](https://discord.gg/gnoland).
More on how releases are conducted can be found [here](./RELEASES.md).
## Security
To report a security vulnerability, see our [bug bounty
program](https://hackerone.com/cosmos).
To report a security vulnerability, please [email us](mailto:security@newtendermint.org).
For examples of the kinds of bugs we're looking for, see [our security policy](SECURITY.md).
We also maintain a dedicated mailing list for security updates. We will only ever use this mailing list
to notify you of vulnerabilities and fixes in Tendermint Core. You can subscribe [here](http://eepurl.com/gZ5hQD).
## Minimum requirements
| Requirement | Notes |
@@ -139,9 +136,9 @@ We keep a public up-to-date version of our roadmap [here](./docs/roadmap/roadmap
## Join us!
Tendermint Core is maintained by [Interchain GmbH](https://interchain.berlin).
If you'd like to work full-time on Tendermint Core, [we're hiring](https://interchain-gmbh.breezy.hr/)!
The development of Tendermint Core was led primarily by All in Bits, Inc. The
Tendermint trademark is owned by New Tendermint, LLC. If you'd like to work
full-time on Tendermint2 or [gno.land](https://gno.land), [we're
hiring](mailto:hiring@newtendermint.org)!
Funding for Tendermint Core development comes primarily from the [Interchain Foundation](https://interchain.io),
a Swiss non-profit. The Tendermint trademark is owned by [Tendermint Inc.](https://tendermint.com), the for-profit entity
that also maintains [tendermint.com](https://tendermint.com).


@@ -1,8 +1,9 @@
# Releases
Tendermint uses [semantic versioning](https://semver.org/) with each release following
a `vX.Y.Z` format. The `master` branch is used for active development and thus it's
advisable not to build against it.
Tendermint uses modified [semantic versioning](https://semver.org/) with each
release following a `vX.Y.Z` format. Tendermint is currently on major version
0 and uses the minor version to signal breaking changes. The `master` branch is
used for active development and thus it is not advisable to build against it.
The latest changes are always initially merged into `master`.
Releases are specified using tags and are built from long-lived "backport" branches
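For intuition on how `go mod` orders these tags: module versions follow semver, and a pre-release suffix such as `-dev` sorts before the release it precedes. A small illustration using `golang.org/x/mod/semver`:

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	tags := []string{"v0.36.0", "v0.35.9", "v0.36.0-dev", "v0.35.0"}
	semver.Sort(tags)
	// The -dev pre-release tag on master sorts after the v0.35.x line but
	// before the eventual v0.36.0 release.
	fmt.Println(tags) // [v0.35.0 v0.35.9 v0.36.0-dev v0.36.0]
}
```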
@@ -29,8 +30,8 @@ merging the pull request.
### Creating a backport branch
If this is the first release candidate for a major release, you get to have the
honor of creating the backport branch!
If this is the first release candidate for a minor version release, e.g.
v0.25.0, you get to have the honor of creating the backport branch!
Note that, after creating the backport branch, you'll also need to update the
tags on `master` so that `go mod` is able to order the branches correctly. You
@@ -77,7 +78,8 @@ the 0.35.x line.
After doing these steps, go back to `master` and do the following:
1. Tag `master` as the dev branch for the _next_ major release and push it up to GitHub.
1. Tag `master` as the dev branch for the _next_ minor version release and push
it up to GitHub.
For example:
```sh
git tag -a v0.36.0-dev -m "Development base for Tendermint v0.36."
@@ -99,7 +101,7 @@ After doing these steps, go back to `master` and do the following:
## Release candidates
Before creating an official release, especially a major release, we may want to create a
Before creating an official release, especially a minor release, we may want to create a
release candidate (RC) for our friends and partners to test out. We use git tags to
create RCs, and we build them off of backport branches.
@@ -109,7 +111,7 @@ Tags for RCs should follow the "standard" release naming conventions, with `-rcX
(Note that branches and tags _cannot_ have the same names, so it's important that these branches
have distinct names from the tags/release names.)
If this is the first RC for a major release, you'll have to make a new backport branch (see above).
If this is the first RC for a minor release, you'll have to make a new backport branch (see above).
Otherwise:
1. Start from the backport branch (e.g. `v0.35.x`).
@@ -140,11 +142,13 @@ Note that this process should only be used for "true" RCs--
release candidates that, if successful, will be the next release.
For more experimental "RCs," create a new, short-lived branch and tag that instead.
## Major release
## Minor release
This major release process assumes that this release was preceded by release candidates.
This minor release process assumes that this release was preceded by release candidates.
If there were no release candidates, begin by creating a backport branch, as described above.
Before performing these steps, be sure the [Minor Release Checklist](#minor-release-checklist) has been completed.
1. Start on the backport branch (e.g. `v0.35.x`)
2. Run integration tests (`make test_integrations`) and the e2e nightlies.
3. Prepare the release:
@@ -176,16 +180,16 @@ If there were no release candidates, begin by creating a backport branch, as des
- Commit these changes to `master` and backport them into the backport
branch for this release.
## Minor release (point releases)
## Patch release
Minor releases are done differently from major releases: They are built off of
Patch releases are done differently from minor releases: They are built off of
long-lived backport branches, rather than from master. As non-breaking changes
land on `master`, they should also be backported into these backport branches.
Minor releases don't have release candidates by default, although any tricky
Patch releases don't have release candidates by default, although any tricky
changes may merit a release candidate.
To create a minor release:
To create a patch release:
1. Checkout the long-lived backport branch: `git checkout v0.35.x`
2. Run integration tests (`make test_integrations`) and the nightlies.
@@ -197,11 +201,143 @@ To create a minor release:
- Bump the TMDefaultVersion in `version.go`
- Bump the ABCI version number, if necessary.
(Note that ABCI follows semver, and that ABCI versions are the only versions
which can change during minor releases, and only field additions are valid minor changes.)
which can change during patch releases, and only field additions are valid patch changes.)
4. Open a PR with these changes that will land them back on `v0.35.x`
5. Once this change has landed on the backport branch, make sure to pull it locally, then push a tag.
- `git tag -a v0.35.1 -m 'Release v0.35.1'`
- `git push origin v0.35.1`
6. Create a pull request back to master with the CHANGELOG & version changes from the latest release.
- Remove all `R:minor` labels from the pull requests that were included in the release.
- Remove all `R:patch` labels from the pull requests that were included in the release.
- Do not merge the backport branch into master.
## Minor Release Checklist
The following steps are performed on every release that increments the
_minor_ version, e.g. v0.25 to v0.26. These steps ensure that Tendermint is
well tested, stable, and suitable for adoption by the diverse projects
that rely on it.
### Feature Freeze
Ahead of any minor version release of Tendermint, the software enters 'Feature
Freeze' for at least two weeks. A feature freeze means that _no_ new features
are added to the code being prepared for release, and no changes are made to
that code unless they directly address pressing issues of code
quality. The following must not be merged during a feature freeze:
* Refactors that are not related to specific bug fixes.
* Dependency upgrades.
* New test code that does not test a discovered regression.
* New features of any kind.
* Documentation or spec improvements that are not related to the newly developed
code.
This period directly follows the creation of the [backport
branch](#creating-a-backport-branch). The Tendermint team instead directs all
attention to ensuring that the existing code is stable and reliable. Broken
tests are fixed, flaky tests are remedied, end-to-end test failures are
thoroughly diagnosed and all efforts of the team are aimed at improving the
quality of the code. During this period, the upgrade harness tests are run
repeatedly and a variety of in-house testnets are run to ensure Tendermint
functions at the scale it will be used by application developers and node
operators.
### Nightly End-To-End Tests
The Tendermint team maintains [a set of end-to-end
tests](https://github.com/tendermint/tendermint/blob/master/test/e2e/README.md#L1)
that run each night on the latest commit of the project and on the code in the
tip of each supported backport branch. These tests start a network of containerized
Tendermint processes and run automated checks that the network functions as
expected in both stable and unstable conditions. During the feature freeze,
these tests are run nightly and must pass consistently for a release of
Tendermint to be considered stable.
### Upgrade Harness
> TODO(williambanfield): Change to past tense and clarify this section once
> upgrade harness is complete.
The Tendermint team is creating an upgrade test harness to exercise the
workflow of stopping an instance of Tendermint running one version of the
software and starting up the same application running the next version. To
support upgrade testing, we will add the ability to terminate the Tendermint
process at specific pre-defined points in its execution so that we can verify
upgrades work in a representative sample of stop conditions.
### Large Scale Testnets
The Tendermint end-to-end tests run a small network (~10s of nodes) to exercise
basic consensus interactions. Real world deployments of Tendermint often have over
a hundred nodes just in the validator set, with many others acting as full
nodes and sentry nodes. To gain more assurance before a release, we will also run
larger-scale test networks to shake out emergent behaviors at scale.
Large-scale test networks are run on a set of virtual machines (VMs). Each VM
is equipped with 4 Gigabytes of RAM and 2 CPU cores. The network runs a very
simple key-value store application. The application adds artificial delays to
different ABCI calls to simulate a slow application. Each testnet is first
run briefly with no load to collect a performance baseline. Once the
baseline is captured, a consistent load is applied across the network:
10% of the running nodes each receive a steady
stream of two hundred transactions per minute.
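As a back-of-envelope check of what that load amounts to, using the figures above:

```go
package main

import "fmt"

func main() {
	const (
		nodes       = 200  // total nodes, as in the 200-node testnet below
		loadedShare = 0.10 // 10% of the running nodes receive load
		txPerMinute = 200  // transactions per minute per loaded node
	)
	loaded := int(nodes * loadedShare)
	total := loaded * txPerMinute
	fmt.Printf("%d loaded nodes x %d tx/min = %d tx/min (~%.1f tx/s)\n",
		loaded, txPerMinute, total, float64(total)/60)
	// Output: 20 loaded nodes x 200 tx/min = 4000 tx/min (~66.7 tx/s)
}
```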
During each testnet run, the following metrics are monitored and collected on each
node:
* Consensus rounds per height
* Maximum connected peers, Minimum connected peers, Rate of change of peer connections
* Memory resident set size
* CPU utilization
* Blocks produced per minute
* Seconds for each step of consensus (Propose, Prevote, Precommit, Commit)
* Latency to receive block proposals
For these tests we intentionally target low-powered host machines (with low core
counts and limited memory) to ensure we observe similar kinds of resource contention
and limitation that real-world deployments of Tendermint experience in production.
#### 200 Node Testnet
To test the stability and performance of Tendermint in a real world scenario,
a 200 node test network is run. The network comprises 5 seed nodes, 100
validators and 95 non-validating full nodes. All nodes begin by dialing
a subset of the seed nodes to discover peers. The network is run for several
days, with metrics being collected continuously. In cases of changes to performance
critical systems, testnets of larger sizes should be considered.
#### Rotating Node Testnet
Real-world deployments of Tendermint frequently see new nodes arrive and old
nodes exit the network. The rotating node testnet ensures that Tendermint is
able to handle this reliably. In this test, a network with 10 validators and
3 seed nodes is started. A rolling set of 25 full nodes are started and each
connects to the network by dialing one of the seed nodes. Once the node is able
to blocksync to the head of the chain and begins producing blocks using
Tendermint consensus, it is stopped. Once stopped, a new node is started and
takes its place. This network is run for several days.
#### Network Partition Testnet
Tendermint is expected to recover from network partitions. A partition where no
subset of the nodes is left with the super-majority of the stake is expected to
stop making blocks. Upon alleviation of the partition, the network is expected
to once again become fully connected and capable of producing blocks. The
network partition testnet ensures that Tendermint is able to handle this
reliably at scale. In this test, a network with 100 validators and 95 full
nodes is started. All validators have equal stake. Once the network is
producing blocks, a set of firewall rules is deployed to create a partitioned
network with 50% of the stake on one side and 50% on the other. Once the
network stops producing blocks, the firewall rules are removed and the nodes
are monitored to ensure they reconnect and that the network again begins
producing blocks.
#### Absent Stake Testnet
Tendermint networks often run with _some_ portion of the voting power offline.
The absent stake testnet ensures that large networks are able to handle this
reliably. A set of 150 validator nodes and three seed nodes is started. The set
of 150 validators is configured to only possess a cumulative stake of 67% of
the total stake. The remaining 33% of the stake is configured to belong to
a validator that is never actually run in the test network. The network is run
for multiple days, ensuring that it is able to produce blocks without issue.


@@ -17,9 +17,9 @@ by Tendermint itself. Right now, we return a regular error when this happens.
#### ABCI++
For information on how ABCI++ works, see the
[Specification](https://github.com/tendermint/tendermint/blob/master/spec/abci%2B%2B/README.md).
[Specification](spec/abci%2B%2B/README.md).
In particular, the simplest way to upgrade your application is described
[here](https://github.com/tendermint/tendermint/blob/master/spec/abci%2B%2B/abci++_tmint_expected_behavior_002_draft.md#adapting-existing-applications-that-use-abci).
[here](spec/abci%2B%2B/abci++_tmint_expected_behavior.md#adapting-existing-applications-that-use-abci).
#### Moving the `app_hash` parameter


@@ -3,7 +3,6 @@ package abciclient
import (
"context"
"errors"
"fmt"
"net"
"sync"
"time"
@@ -65,7 +64,7 @@ RETRY_LOOP:
if cli.mustConnect {
return err
}
cli.logger.Error(fmt.Sprintf("abci.grpcClient failed to connect to %v. Retrying...\n", cli.addr), "err", err)
cli.logger.Error("abci.grpcClient failed to connect, Retrying...", "addr", cli.addr, "err", err)
timer.Reset(time.Second * dialRetryIntervalSeconds)
select {
case <-ctx.Done():


@@ -67,8 +67,11 @@ func (cli *socketClient) OnStart(ctx context.Context) error {
if cli.mustConnect {
return err
}
cli.logger.Error(fmt.Sprintf("abci.socketClient failed to connect to %v. Retrying after %vs...",
cli.addr, dialRetryIntervalSeconds), "err", err)
cli.logger.Error("abci.socketClient failed to connect, retrying after",
"retry_after", dialRetryIntervalSeconds,
"target", cli.addr,
"err", err)
timer.Reset(time.Second * dialRetryIntervalSeconds)
select {
@@ -77,7 +80,6 @@ func (cli *socketClient) OnStart(ctx context.Context) error {
case <-timer.C:
continue
}
}
cli.conn = conn
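Both clients above share the same retry shape: arm a timer, block on either context cancellation or timer expiry, then attempt the dial again. A standalone sketch of that pattern, with hypothetical names and an example address:

```go
package main

import (
	"context"
	"net"
	"time"
)

// dialWithRetry keeps attempting the dial until it succeeds or ctx is
// canceled, waiting `interval` between attempts.
func dialWithRetry(ctx context.Context, addr string, interval time.Duration) (net.Conn, error) {
	timer := time.NewTimer(0) // fires immediately for the first attempt
	defer timer.Stop()
	for {
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-timer.C:
		}
		conn, err := net.Dial("tcp", addr)
		if err == nil {
			return conn, nil
		}
		timer.Reset(interval) // wait before the next attempt
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if conn, err := dialWithRetry(ctx, "127.0.0.1:26658", time.Second); err == nil {
		conn.Close()
	}
}
```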


@@ -2,6 +2,7 @@ package main
import (
"bufio"
"bytes"
"encoding/hex"
"errors"
"fmt"
@@ -78,10 +79,11 @@ func RootCmmand(logger log.Logger) *cobra.Command {
// Structure for data passed to print response.
type response struct {
// generic abci response
Data []byte
Code uint32
Info string
Log string
Data []byte
Code uint32
Info string
Log string
Status int32
Query *queryResponse
}
@@ -130,6 +132,8 @@ func addCommands(cmd *cobra.Command, logger log.Logger) {
cmd.AddCommand(commitCmd)
cmd.AddCommand(versionCmd)
cmd.AddCommand(testCmd)
cmd.AddCommand(prepareProposalCmd)
cmd.AddCommand(processProposalCmd)
cmd.AddCommand(getQueryCmd())
// examples
@@ -170,7 +174,7 @@ This command opens an interactive console for running any of the other commands
without opening a new connection each time
`,
Args: cobra.ExactArgs(0),
ValidArgs: []string{"echo", "info", "finalize_block", "check_tx", "commit", "query"},
ValidArgs: []string{"echo", "info", "query", "check_tx", "prepare_proposal", "process_proposal", "finalize_block", "commit"},
RunE: cmdConsole,
}
@@ -193,7 +197,7 @@ var finalizeBlockCmd = &cobra.Command{
Use: "finalize_block",
Short: "deliver a block of transactions to the application",
Long: "deliver a block of transactions to the application",
Args: cobra.MinimumNArgs(1),
Args: cobra.MinimumNArgs(0),
RunE: cmdFinalizeBlock,
}
@@ -224,6 +228,22 @@ var versionCmd = &cobra.Command{
},
}
var prepareProposalCmd = &cobra.Command{
Use: "prepare_proposal",
Short: "prepare proposal",
Long: "prepare proposal",
Args: cobra.MinimumNArgs(0),
RunE: cmdPrepareProposal,
}
var processProposalCmd = &cobra.Command{
Use: "process_proposal",
Short: "process proposal",
Long: "process proposal",
Args: cobra.MinimumNArgs(0),
RunE: cmdProcessProposal,
}
func getQueryCmd() *cobra.Command {
cmd := &cobra.Command{
Use: "query",
@@ -335,6 +355,18 @@ func cmdTest(cmd *cobra.Command, args []string) error {
}, nil, []byte{0, 0, 0, 0, 0, 0, 0, 5})
},
func() error { return servertest.Commit(ctx, client) },
func() error {
return servertest.PrepareProposal(ctx, client, [][]byte{
{0x01},
}, []types.TxRecord_TxAction{
types.TxRecord_UNMODIFIED,
}, nil)
},
func() error {
return servertest.ProcessProposal(ctx, client, [][]byte{
{0x01},
}, types.ResponseProcessProposal_ACCEPT)
},
})
}
@@ -435,6 +467,10 @@ func muxOnCommands(cmd *cobra.Command, pArgs []string) error {
return cmdInfo(cmd, actualArgs)
case "query":
return cmdQuery(cmd, actualArgs)
case "prepare_proposal":
return cmdPrepareProposal(cmd, actualArgs)
case "process_proposal":
return cmdProcessProposal(cmd, actualArgs)
default:
return cmdUnimplemented(cmd, pArgs)
}
@@ -498,13 +534,6 @@ const codeBad uint32 = 10
// Append new txs to application
func cmdFinalizeBlock(cmd *cobra.Command, args []string) error {
if len(args) == 0 {
printResponse(cmd, args, response{
Code: codeBad,
Log: "Must provide at least one transaction",
})
return nil
}
txs := make([][]byte, len(args))
for i, arg := range args {
txBytes, err := stringOrHexToBytes(arg)
@@ -605,6 +634,81 @@ func cmdQuery(cmd *cobra.Command, args []string) error {
return nil
}
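// inTxArray reports whether tx appears in txByteArray (byte-wise equality).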
func inTxArray(txByteArray [][]byte, tx []byte) bool {
for _, txTmp := range txByteArray {
if bytes.Equal(txTmp, tx) {
return true
}
}
return false
}
func cmdPrepareProposal(cmd *cobra.Command, args []string) error {
txsBytesArray := make([][]byte, len(args))
for i, arg := range args {
txBytes, err := stringOrHexToBytes(arg)
if err != nil {
return err
}
txsBytesArray[i] = txBytes
}
res, err := client.PrepareProposal(cmd.Context(), &types.RequestPrepareProposal{
Txs: txsBytesArray,
// kvstore has to have this parameter in order not to reject a tx as the default value is 0
MaxTxBytes: 65536,
})
if err != nil {
return err
}
resps := make([]response, 0, len(res.TxResults)+1)
for _, tx := range res.TxRecords {
existingTx := inTxArray(txsBytesArray, tx.Tx)
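// A record is flagged as bad if its action is UNKNOWN, if it ADDs a tx
// that was already in the request, or if it marks a tx that was never in
// the request as UNMODIFIED or REMOVED.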
if tx.Action == types.TxRecord_UNKNOWN ||
(existingTx && tx.Action == types.TxRecord_ADDED) ||
(!existingTx && (tx.Action == types.TxRecord_UNMODIFIED || tx.Action == types.TxRecord_REMOVED)) {
resps = append(resps, response{
Code: codeBad,
Log: "Failed. Tx: " + string(tx.GetTx()) + " action: " + tx.Action.String(),
})
} else {
resps = append(resps, response{
Code: code.CodeTypeOK,
Log: "Succeeded. Tx: " + string(tx.Tx) + " action: " + tx.Action.String(),
})
}
}
printResponse(cmd, args, resps...)
return nil
}
func cmdProcessProposal(cmd *cobra.Command, args []string) error {
txsBytesArray := make([][]byte, len(args))
for i, arg := range args {
txBytes, err := stringOrHexToBytes(arg)
if err != nil {
return err
}
txsBytesArray[i] = txBytes
}
res, err := client.ProcessProposal(cmd.Context(), &types.RequestProcessProposal{
Txs: txsBytesArray,
})
if err != nil {
return err
}
printResponse(cmd, args, response{
Status: int32(res.Status),
})
return nil
}
func makeKVStoreCmd(logger log.Logger) func(*cobra.Command, []string) error {
return func(cmd *cobra.Command, args []string) error {
// Create the application - in memory or persisted to disk
@@ -649,7 +753,6 @@ func printResponse(cmd *cobra.Command, args []string, rsps ...response) {
fmt.Printf("-> code: OK\n")
} else {
fmt.Printf("-> code: %d\n", rsp.Code)
}
if len(rsp.Data) != 0 {
@@ -663,6 +766,9 @@ func printResponse(cmd *cobra.Command, args []string, rsps ...response) {
if rsp.Log != "" {
fmt.Printf("-> log: %s\n", rsp.Log)
}
if cmd.Use == "process_proposal" {
fmt.Printf("-> status: %s\n", types.ResponseProcessProposal_ProposalStatus_name[rsp.Status])
}
if rsp.Query != nil {
fmt.Printf("-> height: %d\n", rsp.Query.Height)


@@ -175,7 +175,7 @@ func (app *Application) FinalizeBlock(_ context.Context, req *types.RequestFinal
app.ValUpdates = make([]types.ValidatorUpdate, 0)
// Punish validators who committed equivocation.
for _, ev := range req.ByzantineValidators {
for _, ev := range req.Misbehavior {
if ev.Type == types.MisbehaviorType_DUPLICATE_VOTE {
addr := string(ev.Validator.Address)
if pubKey, ok := app.valAddrToPubKeyMap[addr]; ok {
@@ -293,7 +293,7 @@ func (app *Application) PrepareProposal(_ context.Context, req *types.RequestPre
func (*Application) ProcessProposal(_ context.Context, req *types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
for _, tx := range req.Txs {
if len(tx) == 0 {
if len(tx) == 0 || isPrepareTx(tx) {
return &types.ResponseProcessProposal{Status: types.ResponseProcessProposal_REJECT}, nil
}
}


@@ -70,6 +70,32 @@ func FinalizeBlock(ctx context.Context, client abciclient.Client, txBytes [][]by
return nil
}
func PrepareProposal(ctx context.Context, client abciclient.Client, txBytes [][]byte, codeExp []types.TxRecord_TxAction, dataExp []byte) error {
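// NOTE: the RPC error is discarded here, mirroring the existing CheckTx
// helper below; only the returned record actions are validated.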
res, _ := client.PrepareProposal(ctx, &types.RequestPrepareProposal{Txs: txBytes})
for i, tx := range res.TxRecords {
if tx.Action != codeExp[i] {
fmt.Println("Failed test: PrepareProposal")
fmt.Printf("PrepareProposal response code was unexpected. Got %v expected %v.",
tx.Action, codeExp)
return errors.New("PrepareProposal error")
}
}
fmt.Println("Passed test: PrepareProposal")
return nil
}
func ProcessProposal(ctx context.Context, client abciclient.Client, txBytes [][]byte, statusExp types.ResponseProcessProposal_ProposalStatus) error {
res, _ := client.ProcessProposal(ctx, &types.RequestProcessProposal{Txs: txBytes})
if res.Status != statusExp {
fmt.Println("Failed test: ProcessProposal")
fmt.Printf("ProcessProposal response status was unexpected. Got %v expected %v.",
res.Status, statusExp)
return errors.New("ProcessProposal error")
}
fmt.Println("Passed test: ProcessProposal")
return nil
}
func CheckTx(ctx context.Context, client abciclient.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
res, _ := client.CheckTx(ctx, &types.RequestCheckTx{Tx: txBytes})
code, data := res.Code, res.Data


@@ -1,5 +1,7 @@
echo hello
info
prepare_proposal "abc"
process_proposal "abc"
finalize_block "abc"
commit
info
@@ -7,3 +9,10 @@ query "abc"
finalize_block "def=xyz" "ghi=123"
commit
query "def"
prepare_proposal "preparedef"
process_proposal "def"
process_proposal "preparedef"
prepare_proposal
process_proposal
finalize_block
commit


@@ -8,6 +8,14 @@
-> data: {"size":0}
-> data.hex: 0x7B2273697A65223A307D
> prepare_proposal "abc"
-> code: OK
-> log: Succeeded. Tx: abc action: UNMODIFIED
> process_proposal "abc"
-> code: OK
-> status: ACCEPT
> finalize_block "abc"
-> code: OK
-> code: OK
@@ -48,3 +56,30 @@
-> value: xyz
-> value.hex: 78797A
> prepare_proposal "preparedef"
-> code: OK
-> log: Succeeded. Tx: def action: ADDED
-> code: OK
-> log: Succeeded. Tx: preparedef action: REMOVED
> process_proposal "def"
-> code: OK
-> status: ACCEPT
> process_proposal "preparedef"
-> code: OK
-> status: REJECT
> prepare_proposal
> process_proposal
-> code: OK
-> status: ACCEPT
> finalize_block
-> code: OK
-> data.hex: 0x0600000000000000
> commit
-> code: OK


@@ -30,6 +30,8 @@ function testExample() {
cat "${INPUT}.out.new"
echo "Expected:"
cat "${INPUT}.out"
echo "Diff:"
diff "${INPUT}.out" "${INPUT}.out.new"
exit 1
fi


@@ -1120,12 +1120,12 @@ type RequestPrepareProposal struct {
MaxTxBytes int64 `protobuf:"varint,1,opt,name=max_tx_bytes,json=maxTxBytes,proto3" json:"max_tx_bytes,omitempty"`
// txs is an array of transactions that will be included in a block,
// sent to the app for possible modifications.
Txs [][]byte `protobuf:"bytes,2,rep,name=txs,proto3" json:"txs,omitempty"`
LocalLastCommit ExtendedCommitInfo `protobuf:"bytes,3,opt,name=local_last_commit,json=localLastCommit,proto3" json:"local_last_commit"`
ByzantineValidators []Misbehavior `protobuf:"bytes,4,rep,name=byzantine_validators,json=byzantineValidators,proto3" json:"byzantine_validators"`
Height int64 `protobuf:"varint,5,opt,name=height,proto3" json:"height,omitempty"`
Time time.Time `protobuf:"bytes,6,opt,name=time,proto3,stdtime" json:"time"`
NextValidatorsHash []byte `protobuf:"bytes,7,opt,name=next_validators_hash,json=nextValidatorsHash,proto3" json:"next_validators_hash,omitempty"`
Txs [][]byte `protobuf:"bytes,2,rep,name=txs,proto3" json:"txs,omitempty"`
LocalLastCommit ExtendedCommitInfo `protobuf:"bytes,3,opt,name=local_last_commit,json=localLastCommit,proto3" json:"local_last_commit"`
Misbehavior []Misbehavior `protobuf:"bytes,4,rep,name=misbehavior,proto3" json:"misbehavior"`
Height int64 `protobuf:"varint,5,opt,name=height,proto3" json:"height,omitempty"`
Time time.Time `protobuf:"bytes,6,opt,name=time,proto3,stdtime" json:"time"`
NextValidatorsHash []byte `protobuf:"bytes,7,opt,name=next_validators_hash,json=nextValidatorsHash,proto3" json:"next_validators_hash,omitempty"`
// address of the public key of the validator proposing the block.
ProposerAddress []byte `protobuf:"bytes,8,opt,name=proposer_address,json=proposerAddress,proto3" json:"proposer_address,omitempty"`
}
@@ -1184,9 +1184,9 @@ func (m *RequestPrepareProposal) GetLocalLastCommit() ExtendedCommitInfo {
return ExtendedCommitInfo{}
}
func (m *RequestPrepareProposal) GetByzantineValidators() []Misbehavior {
func (m *RequestPrepareProposal) GetMisbehavior() []Misbehavior {
if m != nil {
return m.ByzantineValidators
return m.Misbehavior
}
return nil
}
@@ -1220,9 +1220,9 @@ func (m *RequestPrepareProposal) GetProposerAddress() []byte {
}
type RequestProcessProposal struct {
Txs [][]byte `protobuf:"bytes,1,rep,name=txs,proto3" json:"txs,omitempty"`
ProposedLastCommit CommitInfo `protobuf:"bytes,2,opt,name=proposed_last_commit,json=proposedLastCommit,proto3" json:"proposed_last_commit"`
ByzantineValidators []Misbehavior `protobuf:"bytes,3,rep,name=byzantine_validators,json=byzantineValidators,proto3" json:"byzantine_validators"`
Txs [][]byte `protobuf:"bytes,1,rep,name=txs,proto3" json:"txs,omitempty"`
ProposedLastCommit CommitInfo `protobuf:"bytes,2,opt,name=proposed_last_commit,json=proposedLastCommit,proto3" json:"proposed_last_commit"`
Misbehavior []Misbehavior `protobuf:"bytes,3,rep,name=misbehavior,proto3" json:"misbehavior"`
// hash is the merkle root hash of the fields of the proposed block.
Hash []byte `protobuf:"bytes,4,opt,name=hash,proto3" json:"hash,omitempty"`
Height int64 `protobuf:"varint,5,opt,name=height,proto3" json:"height,omitempty"`
@@ -1279,9 +1279,9 @@ func (m *RequestProcessProposal) GetProposedLastCommit() CommitInfo {
return CommitInfo{}
}
func (m *RequestProcessProposal) GetByzantineValidators() []Misbehavior {
func (m *RequestProcessProposal) GetMisbehavior() []Misbehavior {
if m != nil {
return m.ByzantineValidators
return m.Misbehavior
}
return nil
}
@@ -1444,10 +1444,10 @@ func (m *RequestVerifyVoteExtension) GetVoteExtension() []byte {
}
type RequestFinalizeBlock struct {
Txs [][]byte `protobuf:"bytes,1,rep,name=txs,proto3" json:"txs,omitempty"`
DecidedLastCommit CommitInfo `protobuf:"bytes,2,opt,name=decided_last_commit,json=decidedLastCommit,proto3" json:"decided_last_commit"`
ByzantineValidators []Misbehavior `protobuf:"bytes,3,rep,name=byzantine_validators,json=byzantineValidators,proto3" json:"byzantine_validators"`
// hash is the merkle root hash of the fields of the proposed block.
Txs [][]byte `protobuf:"bytes,1,rep,name=txs,proto3" json:"txs,omitempty"`
DecidedLastCommit CommitInfo `protobuf:"bytes,2,opt,name=decided_last_commit,json=decidedLastCommit,proto3" json:"decided_last_commit"`
Misbehavior []Misbehavior `protobuf:"bytes,3,rep,name=misbehavior,proto3" json:"misbehavior"`
// hash is the merkle root hash of the fields of the decided block.
Hash []byte `protobuf:"bytes,4,opt,name=hash,proto3" json:"hash,omitempty"`
Height int64 `protobuf:"varint,5,opt,name=height,proto3" json:"height,omitempty"`
Time time.Time `protobuf:"bytes,6,opt,name=time,proto3,stdtime" json:"time"`
@@ -1503,9 +1503,9 @@ func (m *RequestFinalizeBlock) GetDecidedLastCommit() CommitInfo {
return CommitInfo{}
}
func (m *RequestFinalizeBlock) GetByzantineValidators() []Misbehavior {
func (m *RequestFinalizeBlock) GetMisbehavior() []Misbehavior {
if m != nil {
return m.ByzantineValidators
return m.Misbehavior
}
return nil
}
@@ -2381,7 +2381,6 @@ func (m *ResponseDeliverTx) GetCodespace() string {
}
type ResponseCommit struct {
// reserve 1
RetainHeight int64 `protobuf:"varint,3,opt,name=retain_height,json=retainHeight,proto3" json:"retain_height,omitempty"`
}
@@ -3829,211 +3828,211 @@ func init() {
func init() { proto.RegisterFile("tendermint/abci/types.proto", fileDescriptor_252557cfdd89a31a) }
var fileDescriptor_252557cfdd89a31a = []byte{
// 3253 bytes of a gzipped FileDescriptorProto
// 3257 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x5a, 0xcd, 0x73, 0x23, 0xd5,
0x11, 0xd7, 0xe8, 0x5b, 0xad, 0xaf, 0xf1, 0xb3, 0x59, 0xb4, 0x62, 0xd7, 0x36, 0x43, 0x01, 0xcb,
0x02, 0x36, 0xf1, 0x66, 0x61, 0xc9, 0x42, 0x28, 0x5b, 0xd6, 0x46, 0xf6, 0x7a, 0x6d, 0x33, 0x96,
0x4d, 0x91, 0x0f, 0x86, 0xb1, 0xf4, 0x6c, 0x0d, 0x2b, 0x69, 0x86, 0x99, 0x91, 0x91, 0x39, 0x26,
0xc5, 0x85, 0x43, 0x8a, 0x4b, 0x2a, 0x49, 0x55, 0xb8, 0x25, 0x55, 0xc9, 0x7f, 0x90, 0x5c, 0x72,
0xca, 0x81, 0x43, 0x0e, 0x9c, 0x52, 0x39, 0x91, 0x14, 0xdc, 0xf8, 0x07, 0x72, 0x4b, 0xa5, 0xde,
0xc7, 0x7c, 0x49, 0x33, 0xfa, 0x00, 0x8a, 0xaa, 0x54, 0x71, 0x9b, 0xd7, 0xd3, 0xdd, 0xef, 0xab,
0x5f, 0x77, 0xff, 0xfa, 0x3d, 0x78, 0xcc, 0xc6, 0xfd, 0x36, 0x36, 0x7b, 0x5a, 0xdf, 0x5e, 0x57,
0x4f, 0x5b, 0xda, 0xba, 0x7d, 0x69, 0x60, 0x6b, 0xcd, 0x30, 0x75, 0x5b, 0x47, 0x65, 0xef, 0xe7,
0x1a, 0xf9, 0x59, 0xbd, 0xee, 0xe3, 0x6e, 0x99, 0x97, 0x86, 0xad, 0xaf, 0x1b, 0xa6, 0xae, 0x9f,
0x31, 0xfe, 0xea, 0xb5, 0xf1, 0xdf, 0x0f, 0xf1, 0x25, 0xd7, 0x16, 0x10, 0xa6, 0xbd, 0xac, 0x1b,
0xaa, 0xa9, 0xf6, 0x9c, 0xdf, 0x2b, 0xe7, 0xba, 0x7e, 0xde, 0xc5, 0xeb, 0xb4, 0x75, 0x3a, 0x38,
0x5b, 0xb7, 0xb5, 0x1e, 0xb6, 0x6c, 0xb5, 0x67, 0x70, 0x86, 0xa5, 0x73, 0xfd, 0x5c, 0xa7, 0x9f,
0xeb, 0xe4, 0x8b, 0x51, 0xa5, 0xbf, 0xe4, 0x20, 0x23, 0xe3, 0x77, 0x07, 0xd8, 0xb2, 0xd1, 0x06,
0x24, 0x71, 0xab, 0xa3, 0x57, 0x84, 0x55, 0xe1, 0x46, 0x7e, 0xe3, 0xda, 0xda, 0xc8, 0xf0, 0xd7,
0x38, 0x5f, 0xbd, 0xd5, 0xd1, 0x1b, 0x31, 0x99, 0xf2, 0xa2, 0xdb, 0x90, 0x3a, 0xeb, 0x0e, 0xac,
0x4e, 0x25, 0x4e, 0x85, 0xae, 0x47, 0x09, 0xdd, 0x23, 0x4c, 0x8d, 0x98, 0xcc, 0xb8, 0x49, 0x57,
0x5a, 0xff, 0x4c, 0xaf, 0x24, 0x26, 0x77, 0xb5, 0xd3, 0x3f, 0xa3, 0x5d, 0x11, 0x5e, 0xb4, 0x05,
0xa0, 0xf5, 0x35, 0x5b, 0x69, 0x75, 0x54, 0xad, 0x5f, 0x49, 0x52, 0xc9, 0xc7, 0xa3, 0x25, 0x35,
0xbb, 0x46, 0x18, 0x1b, 0x31, 0x39, 0xa7, 0x39, 0x0d, 0x32, 0xdc, 0x77, 0x07, 0xd8, 0xbc, 0xac,
0xa4, 0x26, 0x0f, 0xf7, 0x75, 0xc2, 0x44, 0x86, 0x4b, 0xb9, 0xd1, 0x2b, 0x90, 0x6d, 0x75, 0x70,
0xeb, 0xa1, 0x62, 0x0f, 0x2b, 0x19, 0x2a, 0xb9, 0x12, 0x25, 0x59, 0x23, 0x7c, 0xcd, 0x61, 0x23,
0x26, 0x67, 0x5a, 0xec, 0x13, 0xdd, 0x81, 0x74, 0x4b, 0xef, 0xf5, 0x34, 0xbb, 0x02, 0x54, 0x76,
0x39, 0x52, 0x96, 0x72, 0x35, 0x62, 0x32, 0xe7, 0x47, 0xfb, 0x50, 0xea, 0x6a, 0x96, 0xad, 0x58,
0x7d, 0xd5, 0xb0, 0x3a, 0xba, 0x6d, 0x55, 0xf2, 0x54, 0xc3, 0x93, 0x51, 0x1a, 0xf6, 0x34, 0xcb,
0x3e, 0x72, 0x98, 0x1b, 0x31, 0xb9, 0xd8, 0xf5, 0x13, 0x88, 0x3e, 0xfd, 0xec, 0x0c, 0x9b, 0xae,
0xc2, 0x4a, 0x61, 0xb2, 0xbe, 0x03, 0xc2, 0xed, 0xc8, 0x13, 0x7d, 0xba, 0x9f, 0x80, 0x7e, 0x02,
0x8b, 0x5d, 0x5d, 0x6d, 0xbb, 0xea, 0x94, 0x56, 0x67, 0xd0, 0x7f, 0x58, 0x29, 0x52, 0xa5, 0xcf,
0x44, 0x0e, 0x52, 0x57, 0xdb, 0x8e, 0x8a, 0x1a, 0x11, 0x68, 0xc4, 0xe4, 0x85, 0xee, 0x28, 0x11,
0xbd, 0x05, 0x4b, 0xaa, 0x61, 0x74, 0x2f, 0x47, 0xb5, 0x97, 0xa8, 0xf6, 0x9b, 0x51, 0xda, 0x37,
0x89, 0xcc, 0xa8, 0x7a, 0xa4, 0x8e, 0x51, 0x51, 0x13, 0x44, 0xc3, 0xc4, 0x86, 0x6a, 0x62, 0xc5,
0x30, 0x75, 0x43, 0xb7, 0xd4, 0x6e, 0xa5, 0x4c, 0x75, 0x3f, 0x1d, 0xa5, 0xfb, 0x90, 0xf1, 0x1f,
0x72, 0xf6, 0x46, 0x4c, 0x2e, 0x1b, 0x41, 0x12, 0xd3, 0xaa, 0xb7, 0xb0, 0x65, 0x79, 0x5a, 0xc5,
0x69, 0x5a, 0x29, 0x7f, 0x50, 0x6b, 0x80, 0x84, 0xea, 0x90, 0xc7, 0x43, 0x22, 0xae, 0x5c, 0xe8,
0x36, 0xae, 0x2c, 0x50, 0x85, 0x52, 0xe4, 0x09, 0xa5, 0xac, 0x27, 0xba, 0x8d, 0x1b, 0x31, 0x19,
0xb0, 0xdb, 0x42, 0x2a, 0x3c, 0x72, 0x81, 0x4d, 0xed, 0xec, 0x92, 0xaa, 0x51, 0xe8, 0x1f, 0x4b,
0xd3, 0xfb, 0x15, 0x44, 0x15, 0x3e, 0x1b, 0xa5, 0xf0, 0x84, 0x0a, 0x11, 0x15, 0x75, 0x47, 0xa4,
0x11, 0x93, 0x17, 0x2f, 0xc6, 0xc9, 0xc4, 0xc4, 0xce, 0xb4, 0xbe, 0xda, 0xd5, 0xde, 0xc7, 0xca,
0x69, 0x57, 0x6f, 0x3d, 0xac, 0x2c, 0x4e, 0x36, 0xb1, 0x7b, 0x9c, 0x7b, 0x8b, 0x30, 0x13, 0x13,
0x3b, 0xf3, 0x13, 0xb6, 0x32, 0x90, 0xba, 0x50, 0xbb, 0x03, 0xbc, 0x9b, 0xcc, 0xa6, 0xc5, 0xcc,
0x6e, 0x32, 0x9b, 0x15, 0x73, 0xbb, 0xc9, 0x6c, 0x4e, 0x04, 0xe9, 0x69, 0xc8, 0xfb, 0x5c, 0x12,
0xaa, 0x40, 0xa6, 0x87, 0x2d, 0x4b, 0x3d, 0xc7, 0xd4, 0x83, 0xe5, 0x64, 0xa7, 0x29, 0x95, 0xa0,
0xe0, 0x77, 0x43, 0xd2, 0x47, 0x82, 0x2b, 0x49, 0x3c, 0x0c, 0x91, 0xbc, 0xc0, 0x26, 0x5d, 0x08,
0x2e, 0xc9, 0x9b, 0xe8, 0x09, 0x28, 0xd2, 0x49, 0x28, 0xce, 0x7f, 0xe2, 0xe6, 0x92, 0x72, 0x81,
0x12, 0x4f, 0x38, 0xd3, 0x0a, 0xe4, 0x8d, 0x0d, 0xc3, 0x65, 0x49, 0x50, 0x16, 0x30, 0x36, 0x0c,
0x87, 0xe1, 0x71, 0x28, 0x90, 0x19, 0xbb, 0x1c, 0x49, 0xda, 0x49, 0x9e, 0xd0, 0x38, 0x8b, 0xf4,
0xf7, 0x38, 0x88, 0xa3, 0xae, 0x0b, 0xdd, 0x81, 0x24, 0xf1, 0xe2, 0xdc, 0x21, 0x57, 0xd7, 0x98,
0x8b, 0x5f, 0x73, 0x5c, 0xfc, 0x5a, 0xd3, 0x71, 0xf1, 0x5b, 0xd9, 0x4f, 0x3e, 0x5b, 0x89, 0x7d,
0xf4, 0xaf, 0x15, 0x41, 0xa6, 0x12, 0xe8, 0x2a, 0x71, 0x58, 0xaa, 0xd6, 0x57, 0xb4, 0x36, 0x1d,
0x72, 0x8e, 0x78, 0x23, 0x55, 0xeb, 0xef, 0xb4, 0xd1, 0x1e, 0x88, 0x2d, 0xbd, 0x6f, 0xe1, 0xbe,
0x35, 0xb0, 0x14, 0x16, 0x42, 0xb8, 0x1b, 0x0e, 0x38, 0x53, 0x16, 0xc8, 0x6a, 0x0e, 0xe7, 0x21,
0x65, 0x94, 0xcb, 0xad, 0x20, 0x01, 0xdd, 0x03, 0xb8, 0x50, 0xbb, 0x5a, 0x5b, 0xb5, 0x75, 0xd3,
0xaa, 0x24, 0x57, 0x13, 0x37, 0xf2, 0x1b, 0xab, 0x63, 0x5b, 0x7d, 0xe2, 0xb0, 0x1c, 0x1b, 0x6d,
0xd5, 0xc6, 0x5b, 0x49, 0x32, 0x5c, 0xd9, 0x27, 0x89, 0x9e, 0x82, 0xb2, 0x6a, 0x18, 0x8a, 0x65,
0xab, 0x36, 0x56, 0x4e, 0x2f, 0x6d, 0x6c, 0x51, 0x17, 0x5d, 0x90, 0x8b, 0xaa, 0x61, 0x1c, 0x11,
0xea, 0x16, 0x21, 0xa2, 0x27, 0xa1, 0x44, 0xbc, 0xb9, 0xa6, 0x76, 0x95, 0x0e, 0xd6, 0xce, 0x3b,
0x76, 0x25, 0xbd, 0x2a, 0xdc, 0x48, 0xc8, 0x45, 0x4e, 0x6d, 0x50, 0xa2, 0xd4, 0x76, 0x77, 0x9c,
0x7a, 0x72, 0x84, 0x20, 0xd9, 0x56, 0x6d, 0x95, 0xae, 0x64, 0x41, 0xa6, 0xdf, 0x84, 0x66, 0xa8,
0x76, 0x87, 0xaf, 0x0f, 0xfd, 0x46, 0x57, 0x20, 0xcd, 0xd5, 0x26, 0xa8, 0x5a, 0xde, 0x42, 0x4b,
0x90, 0x32, 0x4c, 0xfd, 0x02, 0xd3, 0xad, 0xcb, 0xca, 0xac, 0x21, 0xc9, 0x50, 0x0a, 0x7a, 0x7d,
0x54, 0x82, 0xb8, 0x3d, 0xe4, 0xbd, 0xc4, 0xed, 0x21, 0x7a, 0x01, 0x92, 0x64, 0x21, 0x69, 0x1f,
0xa5, 0x90, 0x38, 0xc7, 0xe5, 0x9a, 0x97, 0x06, 0x96, 0x29, 0xa7, 0x54, 0x86, 0x62, 0x20, 0x1a,
0x48, 0x57, 0x60, 0x29, 0xcc, 0xb9, 0x4b, 0x1d, 0x97, 0x1e, 0x70, 0xd2, 0xe8, 0x36, 0x64, 0x5d,
0xef, 0xce, 0x0c, 0xe7, 0xea, 0x58, 0xb7, 0x0e, 0xb3, 0xec, 0xb2, 0x12, 0x8b, 0x21, 0x1b, 0xd0,
0x51, 0x79, 0x2c, 0x2f, 0xc8, 0x19, 0xd5, 0x30, 0x1a, 0xaa, 0xd5, 0x91, 0xde, 0x86, 0x4a, 0x94,
0xe7, 0xf6, 0x2d, 0x98, 0x40, 0xcd, 0xde, 0x59, 0xb0, 0x2b, 0x90, 0x3e, 0xd3, 0xcd, 0x9e, 0x6a,
0x53, 0x65, 0x45, 0x99, 0xb7, 0xc8, 0x42, 0x32, 0x2f, 0x9e, 0xa0, 0x64, 0xd6, 0x90, 0x14, 0xb8,
0x1a, 0xe9, 0xbd, 0x89, 0x88, 0xd6, 0x6f, 0x63, 0xb6, 0xac, 0x45, 0x99, 0x35, 0x3c, 0x45, 0x6c,
0xb0, 0xac, 0x41, 0xba, 0xb5, 0xe8, 0x5c, 0xa9, 0xfe, 0x9c, 0xcc, 0x5b, 0xd2, 0x9f, 0x12, 0x70,
0x25, 0xdc, 0x87, 0xa3, 0x55, 0x28, 0xf4, 0xd4, 0xa1, 0x62, 0x0f, 0xb9, 0xd9, 0x09, 0x74, 0xe3,
0xa1, 0xa7, 0x0e, 0x9b, 0x43, 0x66, 0x73, 0x22, 0x24, 0xec, 0xa1, 0x55, 0x89, 0xaf, 0x26, 0x6e,
0x14, 0x64, 0xf2, 0x89, 0x8e, 0x61, 0xa1, 0xab, 0xb7, 0xd4, 0xae, 0xd2, 0x55, 0x2d, 0x5b, 0xe1,
0xc1, 0x9d, 0x1d, 0xa2, 0x27, 0xc6, 0x16, 0x9b, 0x79, 0x63, 0xdc, 0x66, 0xfb, 0x49, 0x1c, 0x0e,
0xb7, 0xff, 0x32, 0xd5, 0xb1, 0xa7, 0x3a, 0x5b, 0x8d, 0x8e, 0x61, 0xe9, 0xf4, 0xf2, 0x7d, 0xb5,
0x6f, 0x6b, 0x7d, 0xac, 0x8c, 0x1d, 0xab, 0x71, 0xeb, 0x79, 0xa0, 0x59, 0xa7, 0xb8, 0xa3, 0x5e,
0x68, 0xba, 0xc9, 0x55, 0x2e, 0xba, 0xf2, 0x27, 0xde, 0xd9, 0xf2, 0xf6, 0x28, 0x15, 0x30, 0x6a,
0xc7, 0xbd, 0xa4, 0xe7, 0x76, 0x2f, 0x2f, 0xc0, 0x52, 0x1f, 0x0f, 0x6d, 0xdf, 0x18, 0x99, 0xe1,
0x64, 0xe8, 0x5e, 0x20, 0xf2, 0xcf, 0xeb, 0x9f, 0xd8, 0x10, 0x7a, 0x86, 0x86, 0x45, 0x43, 0xb7,
0xb0, 0xa9, 0xa8, 0xed, 0xb6, 0x89, 0x2d, 0xab, 0x92, 0xa5, 0xdc, 0x65, 0x87, 0xbe, 0xc9, 0xc8,
0xd2, 0x6f, 0xfd, 0x7b, 0x15, 0x0c, 0x83, 0x7c, 0x27, 0x04, 0x6f, 0x27, 0x8e, 0x60, 0x89, 0xcb,
0xb7, 0x03, 0x9b, 0xc1, 0xd2, 0xd1, 0xc7, 0xc6, 0x0f, 0xdc, 0xe8, 0x26, 0x20, 0x47, 0x7c, 0x86,
0x7d, 0x48, 0x7c, 0xbd, 0x7d, 0x40, 0x90, 0xa4, 0xab, 0x94, 0x64, 0x4e, 0x88, 0x7c, 0xff, 0xbf,
0xed, 0xcd, 0x6b, 0xb0, 0x30, 0x96, 0x63, 0xb8, 0xf3, 0x12, 0x42, 0xe7, 0x15, 0xf7, 0xcf, 0x4b,
0xfa, 0x9d, 0x00, 0xd5, 0xe8, 0xa4, 0x22, 0x54, 0xd5, 0xb3, 0xb0, 0xe0, 0xce, 0xc5, 0x1d, 0x1f,
0x3b, 0xf5, 0xa2, 0xfb, 0x83, 0x0f, 0x30, 0xd2, 0x81, 0x3f, 0x09, 0xa5, 0x91, 0x94, 0x87, 0xed,
0x42, 0xf1, 0xc2, 0xdf, 0xbf, 0xf4, 0xab, 0x84, 0xeb, 0x55, 0x03, 0x79, 0x49, 0x88, 0xe5, 0xbd,
0x0e, 0x8b, 0x6d, 0xdc, 0xd2, 0xda, 0x5f, 0xd5, 0xf0, 0x16, 0xb8, 0xf4, 0x77, 0x76, 0x37, 0x83,
0xdd, 0xfd, 0x12, 0x20, 0x2b, 0x63, 0xcb, 0x20, 0xd9, 0x07, 0xda, 0x82, 0x1c, 0x1e, 0xb6, 0xb0,
0x61, 0x3b, 0x09, 0x5b, 0x78, 0x2a, 0xcc, 0xb8, 0xeb, 0x0e, 0x27, 0x01, 0x82, 0xae, 0x18, 0xba,
0xc5, 0xb1, 0x6e, 0x34, 0x6c, 0xe5, 0xe2, 0x7e, 0xb0, 0xfb, 0xa2, 0x03, 0x76, 0x13, 0x91, 0x38,
0x8e, 0x49, 0x8d, 0xa0, 0xdd, 0x5b, 0x1c, 0xed, 0x26, 0xa7, 0x74, 0x16, 0x80, 0xbb, 0xb5, 0x00,
0xdc, 0x4d, 0x4d, 0x99, 0x66, 0x04, 0xde, 0x7d, 0xd1, 0xc1, 0xbb, 0xe9, 0x29, 0x23, 0x1e, 0x01,
0xbc, 0xaf, 0xfa, 0x00, 0x6f, 0x96, 0x8a, 0xae, 0x46, 0x8a, 0x86, 0x20, 0xde, 0x97, 0x5d, 0xc4,
0x9b, 0x8f, 0x44, 0xcb, 0x5c, 0x78, 0x14, 0xf2, 0x1e, 0x8c, 0x41, 0x5e, 0x06, 0x51, 0x9f, 0x8a,
0x54, 0x31, 0x05, 0xf3, 0x1e, 0x8c, 0x61, 0xde, 0xe2, 0x14, 0x85, 0x53, 0x40, 0xef, 0x4f, 0xc3,
0x41, 0x6f, 0x34, 0x2c, 0xe5, 0xc3, 0x9c, 0x0d, 0xf5, 0x2a, 0x11, 0xa8, 0xb7, 0x1c, 0x89, 0xd0,
0x98, 0xfa, 0x99, 0x61, 0xef, 0x71, 0x08, 0xec, 0x65, 0x00, 0xf5, 0x46, 0xa4, 0xf2, 0x19, 0x70,
0xef, 0x71, 0x08, 0xee, 0x5d, 0x98, 0xaa, 0x76, 0x2a, 0xf0, 0xbd, 0x17, 0x04, 0xbe, 0x28, 0x22,
0xc7, 0xf2, 0x4e, 0x7b, 0x04, 0xf2, 0x3d, 0x8d, 0x42, 0xbe, 0x0c, 0x9d, 0x3e, 0x17, 0xa9, 0x71,
0x0e, 0xe8, 0x7b, 0x30, 0x06, 0x7d, 0x97, 0xa6, 0x58, 0xda, 0xec, 0xd8, 0x37, 0x23, 0x66, 0x19,
0xea, 0xdd, 0x4d, 0x66, 0x41, 0xcc, 0x4b, 0xcf, 0x90, 0x40, 0x3c, 0xe2, 0xe1, 0x48, 0x4e, 0x8c,
0x4d, 0x53, 0x37, 0x39, 0x8a, 0x65, 0x0d, 0xe9, 0x06, 0xc1, 0x42, 0x9e, 0x37, 0x9b, 0x80, 0x93,
0x29, 0xf6, 0xf0, 0x79, 0x30, 0xe9, 0xcf, 0x82, 0x27, 0x4b, 0x91, 0xb2, 0x1f, 0x47, 0xe5, 0x38,
0x8e, 0xf2, 0xa1, 0xe7, 0x78, 0x10, 0x3d, 0xaf, 0x40, 0x9e, 0x60, 0x8a, 0x11, 0x60, 0xac, 0x1a,
0x2e, 0x30, 0xbe, 0x09, 0x0b, 0x34, 0x76, 0x32, 0x8c, 0xcd, 0x03, 0x52, 0x92, 0x06, 0xa4, 0x32,
0xf9, 0xc1, 0xd6, 0x85, 0x45, 0xa6, 0xe7, 0x61, 0xd1, 0xc7, 0xeb, 0x62, 0x15, 0x86, 0x12, 0x45,
0x97, 0x7b, 0x93, 0x83, 0x96, 0xbf, 0x09, 0xde, 0x0a, 0x79, 0x88, 0x3a, 0x0c, 0xfc, 0x0a, 0xdf,
0x10, 0xf8, 0x8d, 0x7f, 0x65, 0xf0, 0xeb, 0xc7, 0x5e, 0x89, 0x20, 0xf6, 0xfa, 0x8f, 0xe0, 0xed,
0x89, 0x0b, 0x65, 0x5b, 0x7a, 0x1b, 0x73, 0x34, 0x44, 0xbf, 0x49, 0x76, 0xd2, 0xd5, 0xcf, 0x39,
0xe6, 0x21, 0x9f, 0x84, 0xcb, 0x0d, 0x39, 0x39, 0x1e, 0x51, 0x5c, 0x20, 0xc5, 0x42, 0x3e, 0x07,
0x52, 0x22, 0x24, 0x1e, 0x62, 0x16, 0x20, 0x0a, 0x32, 0xf9, 0x24, 0x7c, 0xd4, 0xec, 0x78, 0xe8,
0x66, 0x0d, 0x74, 0x07, 0x72, 0xb4, 0x58, 0xad, 0xe8, 0x86, 0xc5, 0x63, 0x42, 0x20, 0xcb, 0x61,
0x15, 0xeb, 0xb5, 0x43, 0xc2, 0x73, 0x60, 0x58, 0x72, 0xd6, 0xe0, 0x5f, 0xbe, 0x5c, 0x23, 0x17,
0xc8, 0x35, 0xae, 0x41, 0x8e, 0x8c, 0xde, 0x32, 0xd4, 0x16, 0xa6, 0xa5, 0xd1, 0x9c, 0xec, 0x11,
0xa4, 0x4f, 0x04, 0x28, 0x8f, 0x84, 0x98, 0xd0, 0xb9, 0x3b, 0x26, 0x19, 0xf7, 0x41, 0xfb, 0xeb,
0x00, 0xe7, 0xaa, 0xa5, 0xbc, 0xa7, 0xf6, 0x6d, 0xdc, 0xe6, 0xd3, 0xcd, 0x9d, 0xab, 0xd6, 0x1b,
0x94, 0x10, 0xec, 0x38, 0x3b, 0xd2, 0xb1, 0x0f, 0x43, 0xe6, 0xfc, 0x18, 0x12, 0x55, 0x21, 0x6b,
0x98, 0x9a, 0x6e, 0x6a, 0xf6, 0x25, 0x1d, 0x6d, 0x42, 0x76, 0xdb, 0xbb, 0xc9, 0x6c, 0x42, 0x4c,
0xee, 0x26, 0xb3, 0x49, 0x31, 0xe5, 0x16, 0xaa, 0xd8, 0x91, 0xcd, 0x8b, 0x05, 0xe9, 0x83, 0xb8,
0x67, 0x8b, 0xdb, 0xb8, 0xab, 0x5d, 0x60, 0x73, 0x8e, 0xc9, 0xcc, 0xb6, 0xb9, 0xcb, 0x21, 0x53,
0xf6, 0x51, 0xc8, 0xe8, 0x49, 0x6b, 0x60, 0xe1, 0x36, 0x2f, 0x99, 0xb8, 0x6d, 0xd4, 0x80, 0x34,
0xbe, 0xc0, 0x7d, 0xdb, 0xaa, 0x64, 0xa8, 0x0d, 0x5f, 0x19, 0xc7, 0xb0, 0xe4, 0xf7, 0x56, 0x85,
0x58, 0xee, 0x97, 0x9f, 0xad, 0x88, 0x8c, 0xfb, 0x39, 0xbd, 0xa7, 0xd9, 0xb8, 0x67, 0xd8, 0x97,
0x32, 0x97, 0x9f, 0xbc, 0xb2, 0xd2, 0x6d, 0x28, 0x05, 0xe3, 0x3e, 0x7a, 0x02, 0x8a, 0x26, 0xb6,
0x55, 0xad, 0xaf, 0x04, 0xb2, 0xf6, 0x02, 0x23, 0xf2, 0x62, 0xce, 0x21, 0x3c, 0x12, 0x1a, 0xeb,
0xd1, 0x4b, 0x90, 0xf3, 0xd2, 0x04, 0x81, 0x0e, 0x7d, 0x42, 0xad, 0xc3, 0xe3, 0x95, 0xfe, 0x2a,
0x78, 0x2a, 0x83, 0xd5, 0x93, 0x3a, 0xa4, 0x4d, 0x6c, 0x0d, 0xba, 0xac, 0x9e, 0x51, 0xda, 0x78,
0x7e, 0xb6, 0x2c, 0x81, 0x50, 0x07, 0x5d, 0x5b, 0xe6, 0xc2, 0xd2, 0x5b, 0x90, 0x66, 0x14, 0x94,
0x87, 0xcc, 0xf1, 0xfe, 0xfd, 0xfd, 0x83, 0x37, 0xf6, 0xc5, 0x18, 0x02, 0x48, 0x6f, 0xd6, 0x6a,
0xf5, 0xc3, 0xa6, 0x28, 0xa0, 0x1c, 0xa4, 0x36, 0xb7, 0x0e, 0xe4, 0xa6, 0x18, 0x27, 0x64, 0xb9,
0xbe, 0x5b, 0xaf, 0x35, 0xc5, 0x04, 0x5a, 0x80, 0x22, 0xfb, 0x56, 0xee, 0x1d, 0xc8, 0x0f, 0x36,
0x9b, 0x62, 0xd2, 0x47, 0x3a, 0xaa, 0xef, 0x6f, 0xd7, 0x65, 0x31, 0x25, 0x7d, 0x0f, 0xae, 0x46,
0xe6, 0x15, 0x5e, 0x69, 0x44, 0xf0, 0x95, 0x46, 0xa4, 0xdf, 0xc4, 0x09, 0xf2, 0x8a, 0x4a, 0x16,
0xd0, 0xee, 0xc8, 0xc4, 0x37, 0xe6, 0xc8, 0x34, 0x46, 0x66, 0x4f, 0xc0, 0x96, 0x89, 0xcf, 0xb0,
0xdd, 0xea, 0xb0, 0xe4, 0x85, 0xf9, 0xc6, 0xa2, 0x5c, 0xe4, 0x54, 0x2a, 0x64, 0x31, 0xb6, 0x77,
0x70, 0xcb, 0x56, 0xd8, 0x09, 0x63, 0x40, 0x27, 0x47, 0xd8, 0x08, 0xf5, 0x88, 0x11, 0xa5, 0xb7,
0xe7, 0x5a, 0xcb, 0x1c, 0xa4, 0xe4, 0x7a, 0x53, 0x7e, 0x53, 0x4c, 0x20, 0x04, 0x25, 0xfa, 0xa9,
0x1c, 0xed, 0x6f, 0x1e, 0x1e, 0x35, 0x0e, 0xc8, 0x5a, 0x2e, 0x42, 0xd9, 0x59, 0x4b, 0x87, 0x98,
0x92, 0xfe, 0x11, 0x87, 0x47, 0x23, 0x52, 0x1d, 0x74, 0x07, 0xc0, 0x1e, 0x2a, 0x26, 0x6e, 0xe9,
0x66, 0x3b, 0xda, 0xc8, 0x9a, 0x43, 0x99, 0x72, 0xc8, 0x39, 0x9b, 0x7f, 0x59, 0x13, 0x2a, 0x6a,
0xe8, 0x15, 0xae, 0x94, 0xcc, 0xca, 0x81, 0x77, 0xd7, 0x43, 0x0a, 0x47, 0xb8, 0x45, 0x14, 0xd3,
0xb5, 0xa5, 0x8a, 0x29, 0x3f, 0x7a, 0xe0, 0x07, 0xc4, 0x03, 0x1a, 0x54, 0x66, 0x2e, 0xbd, 0xfa,
0x20, 0x33, 0x23, 0x58, 0xe8, 0x4d, 0x78, 0x74, 0x24, 0x26, 0xba, 0x4a, 0x53, 0xb3, 0x86, 0xc6,
0x47, 0x82, 0xa1, 0x91, 0xab, 0x96, 0x7e, 0x9f, 0xf0, 0x2f, 0x6c, 0x30, 0xb3, 0x3b, 0x80, 0xb4,
0x65, 0xab, 0xf6, 0xc0, 0xe2, 0x06, 0xf7, 0xd2, 0xac, 0x69, 0xe2, 0x9a, 0xf3, 0x71, 0x44, 0xc5,
0x65, 0xae, 0xe6, 0xbb, 0xf5, 0xb6, 0x88, 0x83, 0x0d, 0x2e, 0x4e, 0xf4, 0x91, 0xf1, 0x7c, 0x4e,
0x5c, 0xba, 0x0b, 0x68, 0x3c, 0x81, 0x0e, 0x29, 0x99, 0x08, 0x61, 0x25, 0x93, 0x3f, 0x08, 0xf0,
0xd8, 0x84, 0x64, 0x19, 0xbd, 0x3e, 0xb2, 0xcf, 0x2f, 0xcf, 0x93, 0x6a, 0xaf, 0x31, 0x5a, 0x70,
0xa7, 0xa5, 0x5b, 0x50, 0xf0, 0xd3, 0x67, 0x9b, 0xe4, 0x97, 0x71, 0xcf, 0xe7, 0x07, 0x6b, 0x3b,
0x5e, 0xf8, 0x13, 0xbe, 0x66, 0xf8, 0x0b, 0xda, 0x59, 0x7c, 0x4e, 0x3b, 0x3b, 0x0a, 0xb3, 0xb3,
0xc4, 0x5c, 0x59, 0xe5, 0x5c, 0xd6, 0x96, 0xfc, 0x7a, 0xd6, 0x16, 0x38, 0x70, 0xa9, 0x60, 0xda,
0xfa, 0x26, 0x80, 0x57, 0xf0, 0x22, 0x01, 0xc9, 0xd4, 0x07, 0xfd, 0x36, 0xb5, 0x80, 0x94, 0xcc,
0x1a, 0xe8, 0x36, 0xa4, 0x88, 0x25, 0x39, 0xeb, 0x34, 0xee, 0x54, 0x89, 0x25, 0xf8, 0x0a, 0x66,
0x8c, 0x5b, 0xd2, 0x00, 0x8d, 0x57, 0xd4, 0x23, 0xba, 0x78, 0x35, 0xd8, 0xc5, 0xe3, 0x91, 0xb5,
0xf9, 0xf0, 0xae, 0xde, 0x87, 0x14, 0xdd, 0x79, 0x92, 0x70, 0xd1, 0x6b, 0x1c, 0x0e, 0x7b, 0xc8,
0x37, 0xfa, 0x19, 0x80, 0x6a, 0xdb, 0xa6, 0x76, 0x3a, 0xf0, 0x3a, 0x58, 0x09, 0xb7, 0x9c, 0x4d,
0x87, 0x6f, 0xeb, 0x1a, 0x37, 0xa1, 0x25, 0x4f, 0xd4, 0x67, 0x46, 0x3e, 0x85, 0xd2, 0x3e, 0x94,
0x82, 0xb2, 0x4e, 0xa2, 0xce, 0xc6, 0x10, 0x4c, 0xd4, 0x19, 0xee, 0xe2, 0x89, 0xba, 0x9b, 0xe6,
0x27, 0xd8, 0x5d, 0x15, 0x6d, 0x48, 0xff, 0x15, 0xa0, 0xe0, 0x37, 0xbc, 0x6f, 0x38, 0xfd, 0x9c,
0x92, 0x71, 0x5f, 0x1d, 0xcb, 0x3e, 0x33, 0xe7, 0xaa, 0x75, 0xfc, 0x6d, 0x26, 0x9f, 0x1f, 0x08,
0x90, 0x75, 0x27, 0x1f, 0xbc, 0xb6, 0x0a, 0xdc, 0xf3, 0xb1, 0xb5, 0x8b, 0xfb, 0xef, 0x9a, 0xd8,
0xad, 0x5e, 0xc2, 0xbd, 0xd5, 0xbb, 0xeb, 0xe6, 0x4a, 0x51, 0x15, 0x3d, 0xff, 0x4a, 0x73, 0x9b,
0x72, 0x52, 0xc3, 0x5f, 0xf3, 0x71, 0x90, 0x24, 0x01, 0xfd, 0x00, 0xd2, 0x6a, 0xcb, 0xad, 0x63,
0x96, 0x42, 0x0a, 0x7c, 0x0e, 0xeb, 0x5a, 0x73, 0xb8, 0x49, 0x39, 0x65, 0x2e, 0xc1, 0x47, 0x15,
0x77, 0x46, 0x25, 0xbd, 0x46, 0xf4, 0x32, 0x9e, 0xa0, 0x47, 0x2c, 0x01, 0x1c, 0xef, 0x3f, 0x38,
0xd8, 0xde, 0xb9, 0xb7, 0x53, 0xdf, 0xe6, 0xd9, 0xd2, 0xf6, 0x76, 0x7d, 0x5b, 0x8c, 0x13, 0x3e,
0xb9, 0xfe, 0xe0, 0xe0, 0xa4, 0xbe, 0x2d, 0x26, 0xa4, 0xbb, 0x90, 0x73, 0xbd, 0x0a, 0x41, 0xf5,
0x4e, 0x4d, 0x56, 0xe0, 0x67, 0x9b, 0x97, 0xd8, 0x97, 0x20, 0x65, 0xe8, 0xef, 0xf1, 0x2b, 0xb6,
0x84, 0xcc, 0x1a, 0x52, 0x1b, 0xca, 0x23, 0x2e, 0x09, 0xdd, 0x85, 0x8c, 0x31, 0x38, 0x55, 0x1c,
0xa3, 0x1d, 0xa9, 0x60, 0x3b, 0x78, 0x71, 0x70, 0xda, 0xd5, 0x5a, 0xf7, 0xf1, 0xa5, 0xb3, 0x4c,
0xc6, 0xe0, 0xf4, 0x3e, 0xb3, 0x6d, 0xd6, 0x4b, 0xdc, 0xdf, 0xcb, 0x05, 0x64, 0x9d, 0xa3, 0x8a,
0x7e, 0x08, 0x39, 0xd7, 0xdb, 0xb9, 0x57, 0xe4, 0x91, 0x6e, 0x92, 0xab, 0xf7, 0x44, 0xd0, 0x4d,
0x58, 0xb0, 0xb4, 0xf3, 0xbe, 0x53, 0xbf, 0x67, 0x15, 0x9b, 0x38, 0x3d, 0x33, 0x65, 0xf6, 0x63,
0xcf, 0x29, 0x2a, 0x90, 0x20, 0x27, 0x8e, 0xfa, 0x8a, 0x6f, 0x73, 0x00, 0x21, 0xc1, 0x38, 0x11,
0x16, 0x8c, 0x7f, 0x11, 0x87, 0xbc, 0xef, 0x56, 0x00, 0x7d, 0xdf, 0xe7, 0xb8, 0x4a, 0x21, 0x51,
0xc4, 0xc7, 0xeb, 0xdd, 0x41, 0x07, 0x27, 0x16, 0x9f, 0x7f, 0x62, 0x51, 0x97, 0x30, 0xce, 0xe5,
0x42, 0x72, 0xee, 0xcb, 0x85, 0xe7, 0x00, 0xd9, 0xba, 0xad, 0x76, 0x95, 0x0b, 0xdd, 0xd6, 0xfa,
0xe7, 0x0a, 0x33, 0x0d, 0xe6, 0x66, 0x44, 0xfa, 0xe7, 0x84, 0xfe, 0x38, 0xa4, 0x56, 0xf2, 0x73,
0x01, 0xb2, 0x2e, 0xa2, 0x9b, 0xf7, 0x86, 0xfa, 0x0a, 0xa4, 0x39, 0x68, 0x61, 0x57, 0xd4, 0xbc,
0x15, 0x7a, 0x8b, 0x52, 0x85, 0x6c, 0x0f, 0xdb, 0x2a, 0xf5, 0x99, 0x2c, 0x02, 0xba, 0xed, 0x9b,
0x2f, 0x43, 0xde, 0x77, 0xbb, 0x4f, 0xdc, 0xe8, 0x7e, 0xfd, 0x0d, 0x31, 0x56, 0xcd, 0x7c, 0xf8,
0xf1, 0x6a, 0x62, 0x1f, 0xbf, 0x47, 0x4e, 0x98, 0x5c, 0xaf, 0x35, 0xea, 0xb5, 0xfb, 0xa2, 0x50,
0xcd, 0x7f, 0xf8, 0xf1, 0x6a, 0x46, 0xc6, 0xb4, 0x80, 0x7e, 0xf3, 0x3e, 0x94, 0x47, 0x36, 0x26,
0x78, 0xa0, 0x11, 0x94, 0xb6, 0x8f, 0x0f, 0xf7, 0x76, 0x6a, 0x9b, 0xcd, 0xba, 0x72, 0x72, 0xd0,
0xac, 0x8b, 0x02, 0x7a, 0x14, 0x16, 0xf7, 0x76, 0x7e, 0xd4, 0x68, 0x2a, 0xb5, 0xbd, 0x9d, 0xfa,
0x7e, 0x53, 0xd9, 0x6c, 0x36, 0x37, 0x6b, 0xf7, 0xc5, 0xf8, 0xc6, 0x1f, 0xf3, 0x50, 0xde, 0xdc,
0xaa, 0xed, 0x10, 0xd8, 0xa6, 0xb5, 0x54, 0xea, 0x1e, 0x6a, 0x90, 0xa4, 0xa5, 0xc0, 0x89, 0x6f,
0xfc, 0xaa, 0x93, 0x6f, 0x45, 0xd0, 0x3d, 0x48, 0xd1, 0x2a, 0x21, 0x9a, 0xfc, 0xe8, 0xaf, 0x3a,
0xe5, 0x9a, 0x84, 0x0c, 0x86, 0x1e, 0xa7, 0x89, 0xaf, 0x00, 0xab, 0x93, 0x6f, 0x4d, 0xd0, 0x1e,
0x64, 0x9c, 0x22, 0xd1, 0xb4, 0xa7, 0x79, 0xd5, 0xa9, 0x57, 0x19, 0x64, 0x6a, 0xac, 0xd8, 0x36,
0xf9, 0x81, 0x60, 0x75, 0xca, 0x7d, 0x0a, 0xda, 0x81, 0x34, 0x2f, 0x74, 0x4c, 0x79, 0xf3, 0x57,
0x9d, 0x76, 0x43, 0x82, 0x64, 0xc8, 0x79, 0x65, 0xcc, 0xe9, 0xcf, 0x1e, 0xab, 0x33, 0x5c, 0x15,
0xa1, 0xb7, 0xa0, 0x18, 0x2c, 0xa8, 0xcc, 0xf6, 0xae, 0xb0, 0x3a, 0xe3, 0x5d, 0x0c, 0xd1, 0x1f,
0xac, 0xae, 0xcc, 0xf6, 0xce, 0xb0, 0x3a, 0xe3, 0xd5, 0x0c, 0x7a, 0x07, 0x16, 0xc6, 0xab, 0x1f,
0xb3, 0x3f, 0x3b, 0xac, 0xce, 0x71, 0x59, 0x83, 0x7a, 0x80, 0x42, 0xaa, 0x26, 0x73, 0xbc, 0x42,
0xac, 0xce, 0x73, 0x77, 0x83, 0xda, 0x50, 0x1e, 0xad, 0x44, 0xcc, 0xfa, 0x2a, 0xb1, 0x3a, 0xf3,
0x3d, 0x0e, 0xeb, 0x25, 0x08, 0xcb, 0x67, 0x7d, 0xa5, 0x58, 0x9d, 0xf9, 0x5a, 0x07, 0x1d, 0x03,
0xf8, 0x60, 0xe5, 0x0c, 0xaf, 0x16, 0xab, 0xb3, 0x5c, 0xf0, 0x20, 0x03, 0x16, 0xc3, 0xf0, 0xe6,
0x3c, 0x8f, 0x18, 0xab, 0x73, 0xdd, 0xfb, 0x10, 0x7b, 0x0e, 0x22, 0xc7, 0xd9, 0x1e, 0x35, 0x56,
0x67, 0xbc, 0x00, 0xda, 0xaa, 0x7f, 0xf2, 0xf9, 0xb2, 0xf0, 0xe9, 0xe7, 0xcb, 0xc2, 0xbf, 0x3f,
0x5f, 0x16, 0x3e, 0xfa, 0x62, 0x39, 0xf6, 0xe9, 0x17, 0xcb, 0xb1, 0x7f, 0x7e, 0xb1, 0x1c, 0xfb,
0xf1, 0xb3, 0xe7, 0x9a, 0xdd, 0x19, 0x9c, 0xae, 0xb5, 0xf4, 0xde, 0xba, 0xff, 0x1d, 0x78, 0xd8,
0xeb, 0xf3, 0xd3, 0x34, 0x0d, 0xa8, 0xb7, 0xfe, 0x17, 0x00, 0x00, 0xff, 0xff, 0x4c, 0xb4, 0x7f,
0x53, 0x9d, 0x2e, 0x00, 0x00,
0x11, 0xd7, 0xe8, 0x5b, 0x2d, 0x4b, 0x1a, 0x3f, 0x9b, 0x45, 0x2b, 0x76, 0x6d, 0x33, 0x14, 0xb0,
0x2c, 0x60, 0x13, 0x6f, 0x16, 0x96, 0x2c, 0x84, 0x92, 0x65, 0x6d, 0x64, 0xaf, 0xd7, 0x36, 0x63,
0xd9, 0x14, 0xf9, 0x60, 0x18, 0x4b, 0xcf, 0xd6, 0xb0, 0x92, 0x66, 0x98, 0x19, 0x19, 0x99, 0x63,
0x12, 0xaa, 0x52, 0x1c, 0x52, 0xdc, 0xc2, 0x21, 0xdc, 0x92, 0xaa, 0xfc, 0x09, 0xc9, 0x25, 0xa7,
0x1c, 0x38, 0xe4, 0xc0, 0x29, 0x95, 0x13, 0x49, 0xc1, 0x8d, 0x7f, 0x20, 0xb7, 0x54, 0xea, 0x7d,
0xcc, 0x97, 0x34, 0xa3, 0x0f, 0xa0, 0xa8, 0x4a, 0x15, 0xb7, 0x79, 0x3d, 0xdd, 0xfd, 0xbe, 0xfa,
0x75, 0xf7, 0xaf, 0xdf, 0x83, 0xc7, 0x6c, 0xdc, 0x6f, 0x63, 0xb3, 0xa7, 0xf5, 0xed, 0x0d, 0xf5,
0xb4, 0xa5, 0x6d, 0xd8, 0x97, 0x06, 0xb6, 0xd6, 0x0d, 0x53, 0xb7, 0x75, 0x54, 0xf2, 0x7e, 0xae,
0x93, 0x9f, 0x95, 0xeb, 0x3e, 0xee, 0x96, 0x79, 0x69, 0xd8, 0xfa, 0x86, 0x61, 0xea, 0xfa, 0x19,
0xe3, 0xaf, 0x5c, 0x1b, 0xff, 0xfd, 0x10, 0x5f, 0x72, 0x6d, 0x01, 0x61, 0xda, 0xcb, 0x86, 0xa1,
0x9a, 0x6a, 0xcf, 0xf9, 0xbd, 0x7a, 0xae, 0xeb, 0xe7, 0x5d, 0xbc, 0x41, 0x5b, 0xa7, 0x83, 0xb3,
0x0d, 0x5b, 0xeb, 0x61, 0xcb, 0x56, 0x7b, 0x06, 0x67, 0x58, 0x3e, 0xd7, 0xcf, 0x75, 0xfa, 0xb9,
0x41, 0xbe, 0x18, 0x55, 0xfa, 0x4b, 0x0e, 0x32, 0x32, 0x7e, 0x77, 0x80, 0x2d, 0x1b, 0x6d, 0x42,
0x12, 0xb7, 0x3a, 0x7a, 0x59, 0x58, 0x13, 0x6e, 0xe4, 0x37, 0xaf, 0xad, 0x8f, 0x0c, 0x7f, 0x9d,
0xf3, 0xd5, 0x5b, 0x1d, 0xbd, 0x11, 0x93, 0x29, 0x2f, 0xba, 0x0d, 0xa9, 0xb3, 0xee, 0xc0, 0xea,
0x94, 0xe3, 0x54, 0xe8, 0x7a, 0x94, 0xd0, 0x3d, 0xc2, 0xd4, 0x88, 0xc9, 0x8c, 0x9b, 0x74, 0xa5,
0xf5, 0xcf, 0xf4, 0x72, 0x62, 0x72, 0x57, 0x3b, 0xfd, 0x33, 0xda, 0x15, 0xe1, 0x45, 0x5b, 0x00,
0x5a, 0x5f, 0xb3, 0x95, 0x56, 0x47, 0xd5, 0xfa, 0xe5, 0x24, 0x95, 0x7c, 0x3c, 0x5a, 0x52, 0xb3,
0x6b, 0x84, 0xb1, 0x11, 0x93, 0x73, 0x9a, 0xd3, 0x20, 0xc3, 0x7d, 0x77, 0x80, 0xcd, 0xcb, 0x72,
0x6a, 0xf2, 0x70, 0x5f, 0x27, 0x4c, 0x64, 0xb8, 0x94, 0x1b, 0xbd, 0x02, 0xd9, 0x56, 0x07, 0xb7,
0x1e, 0x2a, 0xf6, 0xb0, 0x9c, 0xa1, 0x92, 0xab, 0x51, 0x92, 0x35, 0xc2, 0xd7, 0x1c, 0x36, 0x62,
0x72, 0xa6, 0xc5, 0x3e, 0xd1, 0x1d, 0x48, 0xb7, 0xf4, 0x5e, 0x4f, 0xb3, 0xcb, 0x40, 0x65, 0x57,
0x22, 0x65, 0x29, 0x57, 0x23, 0x26, 0x73, 0x7e, 0xb4, 0x0f, 0xc5, 0xae, 0x66, 0xd9, 0x8a, 0xd5,
0x57, 0x0d, 0xab, 0xa3, 0xdb, 0x56, 0x39, 0x4f, 0x35, 0x3c, 0x19, 0xa5, 0x61, 0x4f, 0xb3, 0xec,
0x23, 0x87, 0xb9, 0x11, 0x93, 0x0b, 0x5d, 0x3f, 0x81, 0xe8, 0xd3, 0xcf, 0xce, 0xb0, 0xe9, 0x2a,
0x2c, 0x2f, 0x4c, 0xd6, 0x77, 0x40, 0xb8, 0x1d, 0x79, 0xa2, 0x4f, 0xf7, 0x13, 0xd0, 0xcf, 0x60,
0xa9, 0xab, 0xab, 0x6d, 0x57, 0x9d, 0xd2, 0xea, 0x0c, 0xfa, 0x0f, 0xcb, 0x05, 0xaa, 0xf4, 0x99,
0xc8, 0x41, 0xea, 0x6a, 0xdb, 0x51, 0x51, 0x23, 0x02, 0x8d, 0x98, 0xbc, 0xd8, 0x1d, 0x25, 0xa2,
0xb7, 0x60, 0x59, 0x35, 0x8c, 0xee, 0xe5, 0xa8, 0xf6, 0x22, 0xd5, 0x7e, 0x33, 0x4a, 0x7b, 0x95,
0xc8, 0x8c, 0xaa, 0x47, 0xea, 0x18, 0x15, 0x35, 0x41, 0x34, 0x4c, 0x6c, 0xa8, 0x26, 0x56, 0x0c,
0x53, 0x37, 0x74, 0x4b, 0xed, 0x96, 0x4b, 0x54, 0xf7, 0xd3, 0x51, 0xba, 0x0f, 0x19, 0xff, 0x21,
0x67, 0x6f, 0xc4, 0xe4, 0x92, 0x11, 0x24, 0x31, 0xad, 0x7a, 0x0b, 0x5b, 0x96, 0xa7, 0x55, 0x9c,
0xa6, 0x95, 0xf2, 0x07, 0xb5, 0x06, 0x48, 0xa8, 0x0e, 0x79, 0x3c, 0x24, 0xe2, 0xca, 0x85, 0x6e,
0xe3, 0xf2, 0x22, 0x55, 0x28, 0x45, 0x9e, 0x50, 0xca, 0x7a, 0xa2, 0xdb, 0xb8, 0x11, 0x93, 0x01,
0xbb, 0x2d, 0xa4, 0xc2, 0x23, 0x17, 0xd8, 0xd4, 0xce, 0x2e, 0xa9, 0x1a, 0x85, 0xfe, 0xb1, 0x34,
0xbd, 0x5f, 0x46, 0x54, 0xe1, 0xb3, 0x51, 0x0a, 0x4f, 0xa8, 0x10, 0x51, 0x51, 0x77, 0x44, 0x1a,
0x31, 0x79, 0xe9, 0x62, 0x9c, 0x4c, 0x4c, 0xec, 0x4c, 0xeb, 0xab, 0x5d, 0xed, 0x7d, 0xac, 0x9c,
0x76, 0xf5, 0xd6, 0xc3, 0xf2, 0xd2, 0x64, 0x13, 0xbb, 0xc7, 0xb9, 0xb7, 0x08, 0x33, 0x31, 0xb1,
0x33, 0x3f, 0x61, 0x2b, 0x03, 0xa9, 0x0b, 0xb5, 0x3b, 0xc0, 0xbb, 0xc9, 0x6c, 0x5a, 0xcc, 0xec,
0x26, 0xb3, 0x59, 0x31, 0xb7, 0x9b, 0xcc, 0xe6, 0x44, 0x90, 0x9e, 0x86, 0xbc, 0xcf, 0x25, 0xa1,
0x32, 0x64, 0x7a, 0xd8, 0xb2, 0xd4, 0x73, 0x4c, 0x3d, 0x58, 0x4e, 0x76, 0x9a, 0x52, 0x11, 0x16,
0xfc, 0x6e, 0x48, 0xfa, 0x48, 0x70, 0x25, 0x89, 0x87, 0x21, 0x92, 0x17, 0xd8, 0xa4, 0x0b, 0xc1,
0x25, 0x79, 0x13, 0x3d, 0x01, 0x05, 0x3a, 0x09, 0xc5, 0xf9, 0x4f, 0xdc, 0x5c, 0x52, 0x5e, 0xa0,
0xc4, 0x13, 0xce, 0xb4, 0x0a, 0x79, 0x63, 0xd3, 0x70, 0x59, 0x12, 0x94, 0x05, 0x8c, 0x4d, 0xc3,
0x61, 0x78, 0x1c, 0x16, 0xc8, 0x8c, 0x5d, 0x8e, 0x24, 0xed, 0x24, 0x4f, 0x68, 0x9c, 0x45, 0xfa,
0x7b, 0x1c, 0xc4, 0x51, 0xd7, 0x85, 0xee, 0x40, 0x92, 0x78, 0x71, 0xee, 0x90, 0x2b, 0xeb, 0xcc,
0xc5, 0xaf, 0x3b, 0x2e, 0x7e, 0xbd, 0xe9, 0xb8, 0xf8, 0xad, 0xec, 0xa7, 0x9f, 0xaf, 0xc6, 0x3e,
0xfa, 0xd7, 0xaa, 0x20, 0x53, 0x09, 0x74, 0x95, 0x38, 0x2c, 0x55, 0xeb, 0x2b, 0x5a, 0x9b, 0x0e,
0x39, 0x47, 0xbc, 0x91, 0xaa, 0xf5, 0x77, 0xda, 0x68, 0x0f, 0xc4, 0x96, 0xde, 0xb7, 0x70, 0xdf,
0x1a, 0x58, 0x0a, 0x0b, 0x21, 0xdc, 0x0d, 0x07, 0x9c, 0x29, 0x0b, 0x64, 0x35, 0x87, 0xf3, 0x90,
0x32, 0xca, 0xa5, 0x56, 0x90, 0x80, 0xee, 0x01, 0x5c, 0xa8, 0x5d, 0xad, 0xad, 0xda, 0xba, 0x69,
0x95, 0x93, 0x6b, 0x89, 0x1b, 0xf9, 0xcd, 0xb5, 0xb1, 0xad, 0x3e, 0x71, 0x58, 0x8e, 0x8d, 0xb6,
0x6a, 0xe3, 0xad, 0x24, 0x19, 0xae, 0xec, 0x93, 0x44, 0x4f, 0x41, 0x49, 0x35, 0x0c, 0xc5, 0xb2,
0x55, 0x1b, 0x2b, 0xa7, 0x97, 0x36, 0xb6, 0xa8, 0x8b, 0x5e, 0x90, 0x0b, 0xaa, 0x61, 0x1c, 0x11,
0xea, 0x16, 0x21, 0xa2, 0x27, 0xa1, 0x48, 0xbc, 0xb9, 0xa6, 0x76, 0x95, 0x0e, 0xd6, 0xce, 0x3b,
0x76, 0x39, 0xbd, 0x26, 0xdc, 0x48, 0xc8, 0x05, 0x4e, 0x6d, 0x50, 0xa2, 0xd4, 0x76, 0x77, 0x9c,
0x7a, 0x72, 0x84, 0x20, 0xd9, 0x56, 0x6d, 0x95, 0xae, 0xe4, 0x82, 0x4c, 0xbf, 0x09, 0xcd, 0x50,
0xed, 0x0e, 0x5f, 0x1f, 0xfa, 0x8d, 0xae, 0x40, 0x9a, 0xab, 0x4d, 0x50, 0xb5, 0xbc, 0x85, 0x96,
0x21, 0x65, 0x98, 0xfa, 0x05, 0xa6, 0x5b, 0x97, 0x95, 0x59, 0x43, 0x92, 0xa1, 0x18, 0xf4, 0xfa,
0xa8, 0x08, 0x71, 0x7b, 0xc8, 0x7b, 0x89, 0xdb, 0x43, 0xf4, 0x02, 0x24, 0xc9, 0x42, 0xd2, 0x3e,
0x8a, 0x21, 0x71, 0x8e, 0xcb, 0x35, 0x2f, 0x0d, 0x2c, 0x53, 0x4e, 0xa9, 0x04, 0x85, 0x40, 0x34,
0x90, 0xae, 0xc0, 0x72, 0x98, 0x73, 0x97, 0x3a, 0x2e, 0x3d, 0xe0, 0xa4, 0xd1, 0x6d, 0xc8, 0xba,
0xde, 0x9d, 0x19, 0xce, 0xd5, 0xb1, 0x6e, 0x1d, 0x66, 0xd9, 0x65, 0x25, 0x16, 0x43, 0x36, 0xa0,
0xa3, 0xf2, 0x58, 0xbe, 0x20, 0x67, 0x54, 0xc3, 0x68, 0xa8, 0x56, 0x47, 0x7a, 0x1b, 0xca, 0x51,
0x9e, 0xdb, 0xb7, 0x60, 0x02, 0x35, 0x7b, 0x67, 0xc1, 0xae, 0x40, 0xfa, 0x4c, 0x37, 0x7b, 0xaa,
0x4d, 0x95, 0x15, 0x64, 0xde, 0x22, 0x0b, 0xc9, 0xbc, 0x78, 0x82, 0x92, 0x59, 0x43, 0x52, 0xe0,
0x6a, 0xa4, 0xf7, 0x26, 0x22, 0x5a, 0xbf, 0x8d, 0xd9, 0xb2, 0x16, 0x64, 0xd6, 0xf0, 0x14, 0xb1,
0xc1, 0xb2, 0x06, 0xe9, 0xd6, 0xa2, 0x73, 0xa5, 0xfa, 0x73, 0x32, 0x6f, 0x49, 0x1f, 0x27, 0xe0,
0x4a, 0xb8, 0x0f, 0x47, 0x6b, 0xb0, 0xd0, 0x53, 0x87, 0x8a, 0x3d, 0xe4, 0x66, 0x27, 0xd0, 0x8d,
0x87, 0x9e, 0x3a, 0x6c, 0x0e, 0x99, 0xcd, 0x89, 0x90, 0xb0, 0x87, 0x56, 0x39, 0xbe, 0x96, 0xb8,
0xb1, 0x20, 0x93, 0x4f, 0x74, 0x0c, 0x8b, 0x5d, 0xbd, 0xa5, 0x76, 0x95, 0xae, 0x6a, 0xd9, 0x0a,
0x0f, 0xee, 0xec, 0x10, 0x3d, 0x31, 0xb6, 0xd8, 0xcc, 0x1b, 0xe3, 0x36, 0xdb, 0x4f, 0xe2, 0x70,
0xb8, 0xfd, 0x97, 0xa8, 0x8e, 0x3d, 0xd5, 0xd9, 0x6a, 0xb4, 0x0d, 0xf9, 0x9e, 0x66, 0x9d, 0xe2,
0x8e, 0x7a, 0xa1, 0xe9, 0x26, 0x3f, 0x4d, 0xe3, 0x46, 0xf3, 0xc0, 0xe3, 0xe1, 0x9a, 0xfc, 0x62,
0xbe, 0x2d, 0x49, 0x05, 0x6c, 0xd8, 0xf1, 0x26, 0xe9, 0xb9, 0xbd, 0xc9, 0x0b, 0xb0, 0xdc, 0xc7,
0x43, 0x5b, 0xf1, 0xce, 0x2b, 0xb3, 0x93, 0x0c, 0x5d, 0x7a, 0x44, 0xfe, 0xb9, 0x27, 0xdc, 0x22,
0x26, 0x83, 0x9e, 0xa1, 0x51, 0xd0, 0xd0, 0x2d, 0x6c, 0x2a, 0x6a, 0xbb, 0x6d, 0x62, 0xcb, 0x2a,
0x67, 0x29, 0x77, 0xc9, 0xa1, 0x57, 0x19, 0x59, 0xfa, 0x8d, 0x7f, 0x6b, 0x82, 0x51, 0x8f, 0x2f,
0xbc, 0xe0, 0x2d, 0xfc, 0x11, 0x2c, 0x73, 0xf9, 0x76, 0x60, 0xed, 0x59, 0xf6, 0xf9, 0xd8, 0xf8,
0xf9, 0x1a, 0x5d, 0x73, 0xe4, 0x88, 0x47, 0x2f, 0x7b, 0xe2, 0xeb, 0x2d, 0x3b, 0x82, 0x24, 0x5d,
0x94, 0x24, 0x73, 0x31, 0xe4, 0xfb, 0xff, 0x6d, 0x2b, 0x5e, 0x83, 0xc5, 0xb1, 0x0c, 0xc2, 0x9d,
0x97, 0x10, 0x3a, 0xaf, 0xb8, 0x7f, 0x5e, 0xd2, 0xef, 0x05, 0xa8, 0x44, 0xa7, 0x0c, 0xa1, 0xaa,
0x9e, 0x85, 0x45, 0x77, 0x2e, 0xee, 0xf8, 0xd8, 0x99, 0x16, 0xdd, 0x1f, 0x7c, 0x80, 0x91, 0xee,
0xf9, 0x49, 0x28, 0x8e, 0x24, 0x34, 0x6c, 0x17, 0x0a, 0x17, 0xfe, 0xfe, 0xa5, 0x5f, 0x27, 0x5c,
0x9f, 0x19, 0xc8, 0x3a, 0x42, 0x0c, 0xed, 0x75, 0x58, 0x6a, 0xe3, 0x96, 0xd6, 0xfe, 0xba, 0x76,
0xb6, 0xc8, 0xa5, 0xbf, 0x37, 0xb3, 0x71, 0x33, 0xfb, 0x2d, 0x40, 0x56, 0xc6, 0x96, 0x41, 0x52,
0x09, 0xb4, 0x05, 0x39, 0x3c, 0x6c, 0x61, 0xc3, 0x76, 0xb2, 0xaf, 0xf0, 0xbc, 0x96, 0x71, 0xd7,
0x1d, 0x4e, 0x82, 0xea, 0x5c, 0x31, 0x74, 0x8b, 0x03, 0xd7, 0x68, 0x0c, 0xca, 0xc5, 0xfd, 0xc8,
0xf5, 0x45, 0x07, 0xb9, 0x26, 0x22, 0x41, 0x19, 0x93, 0x1a, 0x81, 0xae, 0xb7, 0x38, 0x74, 0x4d,
0x4e, 0xe9, 0x2c, 0x80, 0x5d, 0x6b, 0x01, 0xec, 0x9a, 0x9a, 0x32, 0xcd, 0x08, 0xf0, 0xfa, 0xa2,
0x03, 0x5e, 0xd3, 0x53, 0x46, 0x3c, 0x82, 0x5e, 0x5f, 0xf5, 0xa1, 0xd7, 0x2c, 0x15, 0x5d, 0x8b,
0x14, 0x0d, 0x81, 0xaf, 0x2f, 0xbb, 0xf0, 0x35, 0x1f, 0x09, 0x7d, 0xb9, 0xf0, 0x28, 0x7e, 0x3d,
0x18, 0xc3, 0xaf, 0x0c, 0x6f, 0x3e, 0x15, 0xa9, 0x62, 0x0a, 0x80, 0x3d, 0x18, 0x03, 0xb0, 0x85,
0x29, 0x0a, 0xa7, 0x20, 0xd8, 0x9f, 0x87, 0x23, 0xd8, 0x68, 0x8c, 0xc9, 0x87, 0x39, 0x1b, 0x84,
0x55, 0x22, 0x20, 0x6c, 0x29, 0x12, 0x6e, 0x31, 0xf5, 0x33, 0x63, 0xd8, 0xe3, 0x10, 0x0c, 0xcb,
0xd0, 0xe6, 0x8d, 0x48, 0xe5, 0x33, 0x80, 0xd8, 0xe3, 0x10, 0x10, 0xbb, 0x38, 0x55, 0xed, 0x54,
0x14, 0x7b, 0x2f, 0x88, 0x62, 0x51, 0x44, 0xc2, 0xe4, 0x9d, 0xf6, 0x08, 0x18, 0x7b, 0x1a, 0x05,
0x63, 0x19, 0xd4, 0x7c, 0x2e, 0x52, 0xe3, 0x1c, 0x38, 0xf6, 0x60, 0x0c, 0xc7, 0x2e, 0x4f, 0xb1,
0xb4, 0xd9, 0x81, 0x6c, 0x46, 0xcc, 0x32, 0x08, 0xbb, 0x9b, 0xcc, 0x82, 0x98, 0x97, 0x9e, 0x21,
0x71, 0x77, 0xc4, 0xc3, 0x91, 0x04, 0x17, 0x9b, 0xa6, 0x6e, 0x72, 0x48, 0xca, 0x1a, 0xd2, 0x0d,
0x02, 0x6c, 0x3c, 0x6f, 0x36, 0x01, 0xf4, 0x52, 0x20, 0xe1, 0xf3, 0x60, 0xd2, 0x9f, 0x05, 0x4f,
0x96, 0xc2, 0x5e, 0x3f, 0x28, 0xca, 0x71, 0x50, 0xe4, 0x83, 0xc2, 0xf1, 0x20, 0x14, 0x5e, 0x85,
0x3c, 0x01, 0x08, 0x23, 0x28, 0x57, 0x35, 0x5c, 0x94, 0x7b, 0x13, 0x16, 0x69, 0xa8, 0x64, 0x80,
0x99, 0x07, 0xa4, 0x24, 0x0d, 0x48, 0x25, 0xf2, 0x83, 0xad, 0x0b, 0x8b, 0x4c, 0xcf, 0xc3, 0x92,
0x8f, 0xd7, 0x05, 0x1e, 0x0c, 0xf2, 0x89, 0x2e, 0x77, 0x95, 0x23, 0x90, 0xbf, 0x09, 0xde, 0x0a,
0x79, 0xf0, 0x38, 0x0c, 0xc9, 0x0a, 0xdf, 0x12, 0x92, 0x8d, 0x7f, 0x6d, 0x24, 0xeb, 0x07, 0x52,
0x89, 0x20, 0x90, 0xfa, 0x8f, 0xe0, 0xed, 0x89, 0x8b, 0x4b, 0x5b, 0x7a, 0x1b, 0x73, 0x68, 0x43,
0xbf, 0x49, 0x32, 0xd2, 0xd5, 0xcf, 0x39, 0x80, 0x21, 0x9f, 0x84, 0xcb, 0x0d, 0x39, 0x39, 0x1e,
0x51, 0x5c, 0x54, 0xc4, 0x42, 0x3e, 0x47, 0x45, 0x22, 0x24, 0x1e, 0x62, 0x16, 0x20, 0x16, 0x64,
0xf2, 0x49, 0xf8, 0xa8, 0xd9, 0xf1, 0xd0, 0xcd, 0x1a, 0xe8, 0x0e, 0xe4, 0x68, 0xe5, 0x59, 0xd1,
0x0d, 0x8b, 0xc7, 0x84, 0x40, 0x52, 0xc3, 0xca, 0xcf, 0xeb, 0x87, 0x84, 0xe7, 0xc0, 0xb0, 0xe4,
0xac, 0xc1, 0xbf, 0x7c, 0xb9, 0x46, 0x2e, 0x90, 0x6b, 0x5c, 0x83, 0x1c, 0x19, 0xbd, 0x65, 0xa8,
0x2d, 0x4c, 0xeb, 0x9c, 0x39, 0xd9, 0x23, 0x48, 0x9f, 0x0a, 0x50, 0x1a, 0x09, 0x31, 0xa1, 0x73,
0x77, 0x4c, 0x32, 0xee, 0xc3, 0xe9, 0xd7, 0x01, 0xce, 0x55, 0x4b, 0x79, 0x4f, 0xed, 0xdb, 0xb8,
0xcd, 0xa7, 0x9b, 0x3b, 0x57, 0xad, 0x37, 0x28, 0x21, 0xd8, 0x71, 0x76, 0xa4, 0x63, 0x1f, 0x20,
0xcc, 0xf9, 0x01, 0x21, 0xaa, 0x40, 0xd6, 0x30, 0x35, 0xdd, 0xd4, 0xec, 0x4b, 0x3a, 0xda, 0x84,
0xec, 0xb6, 0x77, 0x93, 0xd9, 0x84, 0x98, 0xdc, 0x4d, 0x66, 0x93, 0x62, 0xca, 0xad, 0x3a, 0xb1,
0x23, 0x9b, 0x17, 0x17, 0xa4, 0x0f, 0xe2, 0x9e, 0x2d, 0x6e, 0xe3, 0xae, 0x76, 0x81, 0xcd, 0x39,
0x26, 0x33, 0xdb, 0xe6, 0xae, 0x84, 0x4c, 0xd9, 0x47, 0x21, 0xa3, 0x27, 0xad, 0x81, 0x85, 0xdb,
0xbc, 0xfe, 0xe1, 0xb6, 0x51, 0x03, 0xd2, 0xf8, 0x02, 0xf7, 0x6d, 0xab, 0x9c, 0xa1, 0x36, 0x7c,
0x65, 0x1c, 0x90, 0x92, 0xdf, 0x5b, 0x65, 0x62, 0xb9, 0x5f, 0x7d, 0xbe, 0x2a, 0x32, 0xee, 0xe7,
0xf4, 0x9e, 0x66, 0xe3, 0x9e, 0x61, 0x5f, 0xca, 0x5c, 0x7e, 0xf2, 0xca, 0x4a, 0x55, 0x28, 0x06,
0xe3, 0x3e, 0x7a, 0x02, 0x0a, 0x26, 0xb6, 0x55, 0xad, 0xaf, 0x04, 0x92, 0xf4, 0x05, 0x46, 0x64,
0x27, 0x7f, 0x37, 0x99, 0x15, 0xc4, 0xf8, 0x6e, 0x32, 0x1b, 0x17, 0x13, 0xd2, 0x21, 0x3c, 0x12,
0x1a, 0xf7, 0xd1, 0x4b, 0x90, 0xf3, 0x52, 0x06, 0x81, 0x4e, 0x63, 0x42, 0x11, 0xc3, 0xe3, 0x95,
0xfe, 0x2a, 0x78, 0x2a, 0x83, 0x65, 0x91, 0x3a, 0xa4, 0x4d, 0x6c, 0x0d, 0xba, 0xac, 0x50, 0x51,
0xdc, 0x7c, 0x7e, 0xb6, 0x8c, 0x81, 0x50, 0x07, 0x5d, 0x5b, 0xe6, 0xc2, 0xd2, 0x5b, 0x90, 0x66,
0x14, 0x94, 0x87, 0xcc, 0xf1, 0xfe, 0xfd, 0xfd, 0x83, 0x37, 0xf6, 0xc5, 0x18, 0x02, 0x48, 0x57,
0x6b, 0xb5, 0xfa, 0x61, 0x53, 0x14, 0x50, 0x0e, 0x52, 0xd5, 0xad, 0x03, 0xb9, 0x29, 0xc6, 0x09,
0x59, 0xae, 0xef, 0xd6, 0x6b, 0x4d, 0x31, 0x81, 0x16, 0xa1, 0xc0, 0xbe, 0x95, 0x7b, 0x07, 0xf2,
0x83, 0x6a, 0x53, 0x4c, 0xfa, 0x48, 0x47, 0xf5, 0xfd, 0xed, 0xba, 0x2c, 0xa6, 0xa4, 0x1f, 0xc0,
0xd5, 0xc8, 0x1c, 0xc3, 0xab, 0x79, 0x08, 0xbe, 0x9a, 0x87, 0xf4, 0x71, 0x9c, 0x80, 0xae, 0xa8,
0xc4, 0x01, 0xed, 0x8e, 0x4c, 0x7c, 0x73, 0x8e, 0xac, 0x63, 0x64, 0xf6, 0x04, 0x67, 0x99, 0xf8,
0x0c, 0xdb, 0xad, 0x0e, 0x4b, 0x64, 0x98, 0x9f, 0x2c, 0xc8, 0x05, 0x4e, 0xa5, 0x42, 0x16, 0x63,
0x7b, 0x07, 0xb7, 0x6c, 0x85, 0x9d, 0x36, 0x8b, 0x82, 0x9d, 0x1c, 0x61, 0x23, 0xd4, 0x23, 0x46,
0x94, 0xde, 0x9e, 0x6b, 0x2d, 0x73, 0x90, 0x92, 0xeb, 0x4d, 0xf9, 0x4d, 0x31, 0x81, 0x10, 0x14,
0xe9, 0xa7, 0x72, 0xb4, 0x5f, 0x3d, 0x3c, 0x6a, 0x1c, 0x90, 0xb5, 0x5c, 0x82, 0x92, 0xb3, 0x96,
0x0e, 0x31, 0x25, 0xfd, 0x23, 0x0e, 0x8f, 0x46, 0xa4, 0x3d, 0xe8, 0x0e, 0x80, 0x3d, 0x54, 0x4c,
0xdc, 0xd2, 0xcd, 0x76, 0xb4, 0x91, 0x35, 0x87, 0x32, 0xe5, 0x90, 0x73, 0x36, 0xff, 0xb2, 0x26,
0x94, 0xca, 0xd0, 0x2b, 0x5c, 0x29, 0x99, 0x95, 0xc5, 0x21, 0xde, 0xf5, 0x90, 0x8a, 0x10, 0x6e,
0x11, 0xc5, 0x74, 0x6d, 0xa9, 0x62, 0xca, 0x8f, 0x1e, 0xf8, 0xb1, 0xf0, 0x80, 0x06, 0x98, 0x99,
0x6b, 0xaa, 0x3e, 0xb4, 0xcc, 0x08, 0x16, 0x7a, 0x13, 0x1e, 0x1d, 0x89, 0x8f, 0xae, 0xd2, 0xd4,
0xac, 0x61, 0xf2, 0x91, 0x60, 0x98, 0xe4, 0xaa, 0xa5, 0x3f, 0x24, 0xfc, 0x0b, 0x1b, 0xcc, 0xf2,
0x0e, 0x20, 0x6d, 0xd9, 0xaa, 0x3d, 0xb0, 0xb8, 0xc1, 0xbd, 0x34, 0x6b, 0xca, 0xb8, 0xee, 0x7c,
0x1c, 0x51, 0x71, 0x99, 0xab, 0xf9, 0x7e, 0xbd, 0x2d, 0xe9, 0x36, 0x14, 0x83, 0x8b, 0x13, 0x7d,
0x64, 0x3c, 0x9f, 0x13, 0x97, 0xee, 0x02, 0x1a, 0x4f, 0xa6, 0x43, 0xaa, 0x25, 0x42, 0x58, 0xb5,
0xe4, 0x8f, 0x02, 0x3c, 0x36, 0x21, 0x71, 0x46, 0xaf, 0x8f, 0xec, 0xf3, 0xcb, 0xf3, 0xa4, 0xdd,
0xeb, 0x8c, 0x16, 0xdc, 0x69, 0xe9, 0x16, 0x2c, 0xf8, 0xe9, 0xb3, 0x4d, 0xf2, 0xab, 0xb8, 0xe7,
0xf3, 0x83, 0x65, 0x1d, 0x2f, 0x14, 0x0a, 0xdf, 0x30, 0x14, 0x06, 0xed, 0x2c, 0x3e, 0xa7, 0x9d,
0x1d, 0x85, 0xd9, 0x59, 0x62, 0xae, 0x0c, 0x73, 0x2e, 0x6b, 0x4b, 0x7e, 0x33, 0x6b, 0x0b, 0x1c,
0xb8, 0x54, 0x30, 0x85, 0x7d, 0x13, 0xc0, 0xab, 0x75, 0x91, 0x80, 0x64, 0xea, 0x83, 0x7e, 0x9b,
0x5a, 0x40, 0x4a, 0x66, 0x0d, 0x74, 0x1b, 0x52, 0xc4, 0x92, 0x9c, 0x75, 0x1a, 0x77, 0xaa, 0xc4,
0x12, 0x7c, 0xb5, 0x32, 0xc6, 0x2d, 0x69, 0x80, 0xc6, 0x4b, 0xe5, 0x11, 0x5d, 0xbc, 0x1a, 0xec,
0xe2, 0xf1, 0xc8, 0xa2, 0x7b, 0x78, 0x57, 0xef, 0x43, 0x8a, 0xee, 0x3c, 0x49, 0xbe, 0xe8, 0xfd,
0x0c, 0x87, 0x40, 0xe4, 0x1b, 0xfd, 0x02, 0x40, 0xb5, 0x6d, 0x53, 0x3b, 0x1d, 0x78, 0x1d, 0xac,
0x86, 0x5b, 0x4e, 0xd5, 0xe1, 0xdb, 0xba, 0xc6, 0x4d, 0x68, 0xd9, 0x13, 0xf5, 0x99, 0x91, 0x4f,
0xa1, 0xb4, 0x0f, 0xc5, 0xa0, 0xac, 0x93, 0xb4, 0xb3, 0x31, 0x04, 0x93, 0x76, 0x86, 0xc1, 0x78,
0xd2, 0xee, 0xa6, 0xfc, 0x09, 0x76, 0x09, 0x45, 0x1b, 0xd2, 0x7f, 0x05, 0x58, 0xf0, 0x1b, 0xde,
0xb7, 0x9c, 0x8a, 0x4e, 0xc9, 0xbe, 0xaf, 0x8e, 0x65, 0xa2, 0x99, 0x73, 0xd5, 0x3a, 0xfe, 0x2e,
0x13, 0xd1, 0x0f, 0x04, 0xc8, 0xba, 0x93, 0x0f, 0xde, 0x47, 0x05, 0x2e, 0xf0, 0xd8, 0xda, 0xc5,
0xfd, 0x97, 0x48, 0xec, 0xba, 0x2e, 0xe1, 0x5e, 0xd7, 0xdd, 0x75, 0x73, 0xa5, 0xa8, 0xea, 0x9e,
0x7f, 0xa5, 0xb9, 0x4d, 0x39, 0xa9, 0xe1, 0xef, 0xf8, 0x38, 0x48, 0x92, 0x80, 0x7e, 0x04, 0x69,
0xb5, 0xe5, 0xd6, 0x34, 0x8b, 0x21, 0xc5, 0x3e, 0x87, 0x75, 0xbd, 0x39, 0xac, 0x52, 0x4e, 0x99,
0x4b, 0xf0, 0x51, 0xc5, 0x9d, 0x51, 0x49, 0xaf, 0x11, 0xbd, 0x8c, 0x27, 0xe8, 0x11, 0x8b, 0x00,
0xc7, 0xfb, 0x0f, 0x0e, 0xb6, 0x77, 0xee, 0xed, 0xd4, 0xb7, 0x79, 0xb6, 0xb4, 0xbd, 0x5d, 0xdf,
0x16, 0xe3, 0x84, 0x4f, 0xae, 0x3f, 0x38, 0x38, 0xa9, 0x6f, 0x8b, 0x09, 0xe9, 0x2e, 0xe4, 0x5c,
0xaf, 0x42, 0x10, 0xbe, 0x53, 0x9f, 0x15, 0xf8, 0xd9, 0xe6, 0xd5, 0xf5, 0x65, 0x48, 0x19, 0xfa,
0x7b, 0xfc, 0xee, 0x2c, 0x21, 0xb3, 0x86, 0xd4, 0x86, 0xd2, 0x88, 0x4b, 0x42, 0x77, 0x21, 0x63,
0x0c, 0x4e, 0x15, 0xc7, 0x68, 0x47, 0xaa, 0xd8, 0x0e, 0x76, 0x1c, 0x9c, 0x76, 0xb5, 0xd6, 0x7d,
0x7c, 0xe9, 0x2c, 0x93, 0x31, 0x38, 0xbd, 0xcf, 0x6c, 0x9b, 0xf5, 0x12, 0xf7, 0xf7, 0x72, 0x01,
0x59, 0xe7, 0xa8, 0xa2, 0x1f, 0x43, 0xce, 0xf5, 0x76, 0xee, 0xdd, 0x77, 0xa4, 0x9b, 0xe4, 0xea,
0x3d, 0x11, 0x74, 0x13, 0x16, 0x2d, 0xed, 0xbc, 0xef, 0x94, 0xee, 0x59, 0xf5, 0x26, 0x4e, 0xcf,
0x4c, 0x89, 0xfd, 0xd8, 0x73, 0x0a, 0x0c, 0x24, 0xc8, 0x89, 0xa3, 0xbe, 0xe2, 0xbb, 0x1c, 0x40,
0x48, 0x30, 0x4e, 0x84, 0x05, 0xe3, 0x5f, 0xc5, 0x21, 0xef, 0xbb, 0x19, 0x40, 0x3f, 0xf4, 0x39,
0xae, 0x62, 0x48, 0x14, 0xf1, 0xf1, 0x7a, 0x97, 0xcb, 0xc1, 0x89, 0xc5, 0xe7, 0x9f, 0x58, 0xd4,
0xfd, 0x8b, 0x73, 0xd1, 0x90, 0x9c, 0xfb, 0xa2, 0xe1, 0x39, 0x40, 0xb6, 0x6e, 0xab, 0x5d, 0xe5,
0x42, 0xb7, 0xb5, 0xfe, 0xb9, 0xc2, 0x4c, 0x83, 0xb9, 0x19, 0x91, 0xfe, 0x39, 0xa1, 0x3f, 0x0e,
0xa9, 0x95, 0xfc, 0x52, 0x80, 0xac, 0x8b, 0xe8, 0xe6, 0xbd, 0x7a, 0xbe, 0x02, 0x69, 0x0e, 0x5a,
0xd8, 0xdd, 0x33, 0x6f, 0x85, 0xde, 0xa8, 0x54, 0x20, 0xdb, 0xc3, 0xb6, 0x4a, 0x7d, 0x26, 0x8b,
0x80, 0x6e, 0xfb, 0xe6, 0xcb, 0x90, 0xf7, 0x5d, 0xdb, 0x13, 0x37, 0xba, 0x5f, 0x7f, 0x43, 0x8c,
0x55, 0x32, 0x1f, 0x7e, 0xb2, 0x96, 0xd8, 0xc7, 0xef, 0x91, 0x13, 0x26, 0xd7, 0x6b, 0x8d, 0x7a,
0xed, 0xbe, 0x28, 0x54, 0xf2, 0x1f, 0x7e, 0xb2, 0x96, 0x91, 0x31, 0x2d, 0xa6, 0xdf, 0xbc, 0x0f,
0xa5, 0x91, 0x8d, 0x09, 0x1e, 0x68, 0x04, 0xc5, 0xed, 0xe3, 0xc3, 0xbd, 0x9d, 0x5a, 0xb5, 0x59,
0x57, 0x4e, 0x0e, 0x9a, 0x75, 0x51, 0x40, 0x8f, 0xc2, 0xd2, 0xde, 0xce, 0x4f, 0x1a, 0x4d, 0xa5,
0xb6, 0xb7, 0x53, 0xdf, 0x6f, 0x2a, 0xd5, 0x66, 0xb3, 0x5a, 0xbb, 0x2f, 0xc6, 0x37, 0xff, 0x94,
0x87, 0x52, 0x75, 0xab, 0xb6, 0x43, 0x60, 0x9b, 0xd6, 0x52, 0xa9, 0x7b, 0xa8, 0x41, 0x92, 0x96,
0x05, 0x27, 0x3e, 0xde, 0xab, 0x4c, 0xbe, 0x21, 0x41, 0xf7, 0x20, 0x45, 0x2b, 0x86, 0x68, 0xf2,
0x6b, 0xbe, 0xca, 0x94, 0x2b, 0x13, 0x32, 0x18, 0x7a, 0x9c, 0x26, 0x3e, 0xef, 0xab, 0x4c, 0xbe,
0x41, 0x41, 0x7b, 0x90, 0x71, 0x0a, 0x46, 0xd3, 0xde, 0xdc, 0x55, 0xa6, 0x5e, 0x6b, 0x90, 0xa9,
0xb1, 0xc2, 0xdb, 0xe4, 0x97, 0x7f, 0x95, 0x29, 0x77, 0x2b, 0x68, 0x07, 0xd2, 0xbc, 0xe8, 0x31,
0xe5, 0x31, 0x5f, 0x65, 0xda, 0x6d, 0x09, 0x92, 0x21, 0xe7, 0x95, 0x34, 0xa7, 0xbf, 0x67, 0xac,
0xcc, 0x70, 0x6d, 0x84, 0xde, 0x82, 0x42, 0xb0, 0xa0, 0x32, 0xdb, 0x83, 0xc1, 0xca, 0x8c, 0xf7,
0x32, 0x44, 0x7f, 0xb0, 0xba, 0x32, 0xdb, 0x03, 0xc2, 0xca, 0x8c, 0xd7, 0x34, 0xe8, 0x1d, 0x58,
0x1c, 0xaf, 0x7e, 0xcc, 0xfe, 0x9e, 0xb0, 0x32, 0xc7, 0xc5, 0x0d, 0xea, 0x01, 0x0a, 0xa9, 0x9a,
0xcc, 0xf1, 0xbc, 0xb0, 0x32, 0xcf, 0x3d, 0x0e, 0x6a, 0x43, 0x69, 0xb4, 0x12, 0x31, 0xeb, 0x73,
0xc3, 0xca, 0xcc, 0x77, 0x3a, 0xac, 0x97, 0x20, 0x2c, 0x9f, 0xf5, 0xf9, 0x61, 0x65, 0xe6, 0x2b,
0x1e, 0x74, 0x0c, 0xe0, 0x83, 0x95, 0x33, 0x3c, 0x47, 0xac, 0xcc, 0x72, 0xd9, 0x83, 0x0c, 0x58,
0x0a, 0xc3, 0x9b, 0xf3, 0xbc, 0x4e, 0xac, 0xcc, 0x75, 0x07, 0x44, 0xec, 0x39, 0x88, 0x1c, 0x67,
0x7b, 0xad, 0x58, 0x99, 0xf1, 0x32, 0x68, 0xab, 0xfe, 0xe9, 0x17, 0x2b, 0xc2, 0x67, 0x5f, 0xac,
0x08, 0xff, 0xfe, 0x62, 0x45, 0xf8, 0xe8, 0xcb, 0x95, 0xd8, 0x67, 0x5f, 0xae, 0xc4, 0xfe, 0xf9,
0xe5, 0x4a, 0xec, 0xa7, 0xcf, 0x9e, 0x6b, 0x76, 0x67, 0x70, 0xba, 0xde, 0xd2, 0x7b, 0x1b, 0xfe,
0x07, 0xde, 0x61, 0xcf, 0xca, 0x4f, 0xd3, 0x34, 0xa0, 0xde, 0xfa, 0x5f, 0x00, 0x00, 0x00, 0xff,
0xff, 0xdc, 0x85, 0x19, 0x4e, 0x76, 0x2e, 0x00, 0x00,
}
// Reference imports to suppress errors if they are not otherwise used.
@@ -5510,10 +5509,10 @@ func (m *RequestPrepareProposal) MarshalToSizedBuffer(dAtA []byte) (int, error)
i--
dAtA[i] = 0x28
}
if len(m.ByzantineValidators) > 0 {
for iNdEx := len(m.ByzantineValidators) - 1; iNdEx >= 0; iNdEx-- {
if len(m.Misbehavior) > 0 {
for iNdEx := len(m.Misbehavior) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.ByzantineValidators[iNdEx].MarshalToSizedBuffer(dAtA[:i])
size, err := m.Misbehavior[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
@@ -5605,10 +5604,10 @@ func (m *RequestProcessProposal) MarshalToSizedBuffer(dAtA []byte) (int, error)
i--
dAtA[i] = 0x22
}
if len(m.ByzantineValidators) > 0 {
for iNdEx := len(m.ByzantineValidators) - 1; iNdEx >= 0; iNdEx-- {
if len(m.Misbehavior) > 0 {
for iNdEx := len(m.Misbehavior) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.ByzantineValidators[iNdEx].MarshalToSizedBuffer(dAtA[:i])
size, err := m.Misbehavior[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
@@ -5779,10 +5778,10 @@ func (m *RequestFinalizeBlock) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i--
dAtA[i] = 0x22
}
if len(m.ByzantineValidators) > 0 {
for iNdEx := len(m.ByzantineValidators) - 1; iNdEx >= 0; iNdEx-- {
if len(m.Misbehavior) > 0 {
for iNdEx := len(m.Misbehavior) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.ByzantineValidators[iNdEx].MarshalToSizedBuffer(dAtA[:i])
size, err := m.Misbehavior[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
@@ -8145,8 +8144,8 @@ func (m *RequestPrepareProposal) Size() (n int) {
}
l = m.LocalLastCommit.Size()
n += 1 + l + sovTypes(uint64(l))
if len(m.ByzantineValidators) > 0 {
for _, e := range m.ByzantineValidators {
if len(m.Misbehavior) > 0 {
for _, e := range m.Misbehavior {
l = e.Size()
n += 1 + l + sovTypes(uint64(l))
}
@@ -8181,8 +8180,8 @@ func (m *RequestProcessProposal) Size() (n int) {
}
l = m.ProposedLastCommit.Size()
n += 1 + l + sovTypes(uint64(l))
if len(m.ByzantineValidators) > 0 {
for _, e := range m.ByzantineValidators {
if len(m.Misbehavior) > 0 {
for _, e := range m.Misbehavior {
l = e.Size()
n += 1 + l + sovTypes(uint64(l))
}
@@ -8261,8 +8260,8 @@ func (m *RequestFinalizeBlock) Size() (n int) {
}
l = m.DecidedLastCommit.Size()
n += 1 + l + sovTypes(uint64(l))
if len(m.ByzantineValidators) > 0 {
for _, e := range m.ByzantineValidators {
if len(m.Misbehavior) > 0 {
for _, e := range m.Misbehavior {
l = e.Size()
n += 1 + l + sovTypes(uint64(l))
}
@@ -11139,7 +11138,7 @@ func (m *RequestPrepareProposal) Unmarshal(dAtA []byte) error {
iNdEx = postIndex
case 4:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field ByzantineValidators", wireType)
return fmt.Errorf("proto: wrong wireType = %d for field Misbehavior", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -11166,8 +11165,8 @@ func (m *RequestPrepareProposal) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.ByzantineValidators = append(m.ByzantineValidators, Misbehavior{})
if err := m.ByzantineValidators[len(m.ByzantineValidators)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
m.Misbehavior = append(m.Misbehavior, Misbehavior{})
if err := m.Misbehavior[len(m.Misbehavior)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -11408,7 +11407,7 @@ func (m *RequestProcessProposal) Unmarshal(dAtA []byte) error {
iNdEx = postIndex
case 3:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field ByzantineValidators", wireType)
return fmt.Errorf("proto: wrong wireType = %d for field Misbehavior", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -11435,8 +11434,8 @@ func (m *RequestProcessProposal) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.ByzantineValidators = append(m.ByzantineValidators, Misbehavior{})
if err := m.ByzantineValidators[len(m.ByzantineValidators)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
m.Misbehavior = append(m.Misbehavior, Misbehavior{})
if err := m.Misbehavior[len(m.Misbehavior)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -11985,7 +11984,7 @@ func (m *RequestFinalizeBlock) Unmarshal(dAtA []byte) error {
iNdEx = postIndex
case 3:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field ByzantineValidators", wireType)
return fmt.Errorf("proto: wrong wireType = %d for field Misbehavior", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -12012,8 +12011,8 @@ func (m *RequestFinalizeBlock) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.ByzantineValidators = append(m.ByzantineValidators, Misbehavior{})
if err := m.ByzantineValidators[len(m.ByzantineValidators)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
m.Misbehavior = append(m.Misbehavior, Misbehavior{})
if err := m.Misbehavior[len(m.Misbehavior)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
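For ABCI application authors, the practical upshot of all this generated-code churn is a single field rename: the evidence list on `RequestPrepareProposal`, `RequestProcessProposal`, and `RequestFinalizeBlock` is now `Misbehavior` rather than `ByzantineValidators`, with the element type `[]Misbehavior` unchanged. A minimal sketch of code touched by the rename (the helper name is illustrative, and exact handler signatures vary by Tendermint version):

```go
package main

import (
	"fmt"

	abci "github.com/tendermint/tendermint/abci/types"
)

// inspectProposal reads the renamed field; before this change the same
// data was accessed as req.ByzantineValidators.
func inspectProposal(req *abci.RequestProcessProposal) {
	for _, mb := range req.Misbehavior {
		fmt.Printf("misbehavior type=%v height=%d\n", mb.Type, mb.Height)
	}
}

func main() {
	inspectProposal(&abci.RequestProcessProposal{})
}
```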

View File

@@ -33,10 +33,14 @@ $ tendermint debug kill 34255 /path/to/tm-debug.zip`,
Args: cobra.ExactArgs(2),
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
pid, err := strconv.ParseInt(args[0], 10, 64)
// Use Atoi so the integer size is inferred automatically for the platform.
pid, err := strconv.Atoi(args[0])
if err != nil {
return err
}
if pid <= 0 {
return fmt.Errorf("PID value must be > 0; given value %q, got %d", args[0], pid)
}
outFile := args[1]
if outFile == "" {
@@ -95,7 +99,7 @@ $ tendermint debug kill 34255 /path/to/tm-debug.zip`,
}
logger.Info("killing Tendermint process")
if err := killProc(int(pid), tmpDir); err != nil {
if err := killProc(pid, tmpDir); err != nil {
return err
}
@@ -113,6 +117,9 @@ $ tendermint debug kill 34255 /path/to/tm-debug.zip`,
// if the output file cannot be created or the tail command cannot be started.
// An error is not returned if any subsequent syscall fails.
func killProc(pid int, dir string) error {
if pid <= 0 {
return fmt.Errorf("PID must be > 0, got %d", pid)
}
// pipe STDERR output from tailing the Tendermint process to a file
//
// NOTE: This will only work on UNIX systems.
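Pulled out of the command for clarity, the parse-and-validate pattern introduced here looks roughly like this (the `parsePID` helper is illustrative, not part of the diff):

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePID converts a command-line argument into a usable process ID,
// rejecting non-numeric and non-positive input up front.
func parsePID(arg string) (int, error) {
	pid, err := strconv.Atoi(arg) // int is sized for the platform automatically
	if err != nil {
		return 0, fmt.Errorf("invalid PID %q: %w", arg, err)
	}
	if pid <= 0 {
		return 0, fmt.Errorf("PID must be > 0, got %d", pid)
	}
	return pid, nil
}

func main() {
	pid, err := parsePID("34255")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("killing process", pid)
}
```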

View File

@@ -34,7 +34,6 @@ func RunDatabaseMigration(ctx context.Context, logger log.Logger, conf *config.C
// reduce the possibility of the
// ephemeral data overwriting later data
"tx_index",
"peerstore",
"light",
"blockstore",
"state",
@@ -57,7 +56,7 @@ func RunDatabaseMigration(ctx context.Context, logger log.Logger, conf *config.C
return fmt.Errorf("constructing database handle: %w", err)
}
if err = keymigrate.Migrate(ctx, db); err != nil {
if err = keymigrate.Migrate(ctx, dbctx, db); err != nil {
return fmt.Errorf("running migration for context %q: %w",
dbctx, err)
}
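Reading the two hunks together: `peerstore` drops out of the migration list, and `keymigrate.Migrate` now receives the database context name in addition to the handle. A rough sketch of the resulting loop, with the function name, import paths, and backend choice as assumptions rather than a verbatim excerpt (the context list is limited to what the hunk shows):

```go
package migrate

import (
	"context"
	"fmt"

	dbm "github.com/tendermint/tm-db"

	"github.com/tendermint/tendermint/scripts/keymigrate"
)

// migrateAll sketches the per-context loop implied by the hunks above:
// each named database is opened and migrated with its context name.
func migrateAll(ctx context.Context, dir string) error {
	for _, dbctx := range []string{"tx_index", "light", "blockstore", "state"} {
		db, err := dbm.NewGoLevelDB(dbctx, dir) // backend choice is an assumption
		if err != nil {
			return fmt.Errorf("constructing database handle: %w", err)
		}
		if err = keymigrate.Migrate(ctx, dbctx, db); err != nil {
			return fmt.Errorf("running migration for context %q: %w", dbctx, err)
		}
	}
	return nil
}
```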

View File

@@ -15,6 +15,10 @@ import (
)
func TestRollbackIntegration(t *testing.T) {
if testing.Short() {
t.Skip("skipping test in short mode")
}
var height int64
dir := t.TempDir()
ctx, cancel := context.WithCancel(context.Background())

View File

@@ -239,7 +239,6 @@ Example:
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.AllowDuplicateIP = true
if populatePersistentPeers {
persistentPeersWithoutSelf := make([]string, 0)
for j := 0; j < len(persistentPeers); j++ {

View File

@@ -642,9 +642,6 @@ type P2PConfig struct { //nolint: maligned
// other peers)
PrivatePeerIDs string `mapstructure:"private-peer-ids"`
// Toggle to disable guard against peers connecting from the same ip.
AllowDuplicateIP bool `mapstructure:"allow-duplicate-ip"`
// Time to wait before flushing messages out on the connection
FlushThrottleTimeout time.Duration `mapstructure:"flush-throttle-timeout"`
@@ -661,13 +658,9 @@ type P2PConfig struct { //nolint: maligned
HandshakeTimeout time.Duration `mapstructure:"handshake-timeout"`
DialTimeout time.Duration `mapstructure:"dial-timeout"`
// Testing params.
// Force dial to fail
TestDialFail bool `mapstructure:"test-dial-fail"`
// Makes it possible to configure which queue backend the p2p
// layer uses. Options are: "fifo" and "priority",
// with the default being "priority".
// layer uses. Options are: "fifo", "simple-priority", and "priority",
// with the default being "simple-priority".
QueueType string `mapstructure:"queue-type"`
}
@@ -678,7 +671,7 @@ func DefaultP2PConfig() *P2PConfig {
ExternalAddress: "",
UPNP: false,
MaxConnections: 64,
MaxOutgoingConnections: 32,
MaxOutgoingConnections: 12,
MaxIncomingConnectionAttempts: 100,
FlushThrottleTimeout: 100 * time.Millisecond,
// The MTU (Maximum Transmission Unit) for Ethernet is 1500 bytes.
@@ -690,11 +683,9 @@ func DefaultP2PConfig() *P2PConfig {
SendRate: 5120000, // 5 mB/s
RecvRate: 5120000, // 5 mB/s
PexReactor: true,
AllowDuplicateIP: false,
HandshakeTimeout: 20 * time.Second,
DialTimeout: 3 * time.Second,
TestDialFail: false,
QueueType: "priority",
QueueType: "simple-priority",
}
}
@@ -723,7 +714,6 @@ func (cfg *P2PConfig) ValidateBasic() error {
func TestP2PConfig() *P2PConfig {
cfg := DefaultP2PConfig()
cfg.ListenAddress = "tcp://127.0.0.1:36656"
cfg.AllowDuplicateIP = true
cfg.FlushThrottleTimeout = 10 * time.Millisecond
return cfg
}
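Because the default queue type changes from "priority" to "simple-priority", operators who want the previous behavior must opt back in explicitly. A minimal sketch using the constructor shown above:

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/config"
)

func main() {
	cfg := config.DefaultP2PConfig()
	fmt.Println(cfg.QueueType) // "simple-priority" after this change

	// Opting back into the previous default is a one-line override.
	cfg.QueueType = "priority"
	fmt.Println(cfg.QueueType)
}
```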

View File

@@ -25,5 +25,6 @@ type DBProvider func(*DBContext) (dbm.DB, error)
// specified in the Config.
func DefaultDBProvider(ctx *DBContext) (dbm.DB, error) {
dbType := dbm.BackendType(ctx.Config.DBBackend)
return dbm.NewDB(ctx.ID, dbType, ctx.Config.DBDir())
}
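As a usage note, anything matching the `DBProvider` signature can be swapped in. A minimal sketch that forces an in-memory backend for tests, assuming the `DBContext` type shown in the hunk lives in the `config` package:

```go
package main

import (
	dbm "github.com/tendermint/tm-db"

	"github.com/tendermint/tendermint/config"
)

// memDBProvider matches the DBProvider signature but ignores the configured
// backend, always returning an in-memory database -- handy in tests.
func memDBProvider(ctx *config.DBContext) (dbm.DB, error) {
	return dbm.NewDB(ctx.ID, dbm.MemDBBackend, ctx.Config.DBDir())
}

func main() {
	db, err := memDBProvider(&config.DBContext{ID: "state", Config: config.DefaultConfig()})
	if err != nil {
		panic(err)
	}
	defer db.Close()
}
```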

View File

@@ -282,7 +282,9 @@ pprof-laddr = "{{ .RPC.PprofListenAddress }}"
#######################################################
[p2p]
# Select the p2p internal queue
# Select the p2p internal queue.
# Options are: "fifo", "simple-priority", and "priority",
# with the default being "simple-priority".
queue-type = "{{ .P2P.QueueType }}"
# Address to listen for incoming connections
@@ -323,9 +325,6 @@ pex = {{ .P2P.PexReactor }}
# Warning: IPs will be exposed at /net_info, for more information https://github.com/tendermint/tendermint/issues/3055
private-peer-ids = "{{ .P2P.PrivatePeerIDs }}"
# Toggle to disable guard against peers connecting from the same ip.
allow-duplicate-ip = {{ .P2P.AllowDuplicateIP }}
# Peer connection configuration.
handshake-timeout = "{{ .P2P.HandshakeTimeout }}"
dial-timeout = "{{ .P2P.DialTimeout }}"

View File

@@ -106,7 +106,7 @@ module.exports = {
}
],
smallprint:
'The development of Tendermint Core is led primarily by [Interchain GmbH](https://interchain.berlin/). Funding for this development comes primarily from the Interchain Foundation, a Swiss non-profit. The Tendermint trademark is owned by Tendermint Inc, the for-profit entity that also maintains this website.',
'The development of Tendermint Core was led primarily by All in Bits, Inc. The Tendermint trademark is owned by New Tendermint, LLC.',
links: [
{
title: 'Documentation',

View File

@@ -31,3 +31,4 @@ To find out about the Tendermint ecosystem you can go [here](https://github.com/
## Contribute
To contribute to the documentation, see [this file](https://github.com/tendermint/tendermint/blob/master/docs/DOCS_README.md) for details of the build process and considerations when making changes.

View File

@@ -62,7 +62,7 @@ as `abci-cli` above. The kvstore just stores transactions in a merkle
tree.
Its code can be found
[here](https://github.com/tendermint/tendermint/blob/v0.36.x/abci/cmd/abci-cli/abci-cli.go)
[here](https://github.com/tendermint/tendermint/blob/master/abci/cmd/abci-cli/abci-cli.go)
and looks like:
```go
@@ -208,6 +208,33 @@ Try running these commands:
-> key.hex: 646566
-> value: xyz
-> value.hex: 78797A
> prepare_proposal "preparedef"
-> code: OK
-> log: Succeeded. Tx: def action: ADDED
-> code: OK
-> log: Succeeded. Tx: preparedef action: REMOVED
> process_proposal "def"
-> code: OK
-> status: ACCEPT
> process_proposal "preparedef"
-> code: OK
-> status: REJECT
> prepare_proposal
> process_proposal
-> code: OK
-> status: ACCEPT
> finalize_block
-> code: OK
-> data.hex: 0x0600000000000000
> commit
-> code: OK
```
Note that if we do `finalize_block "abc" ...` it will store `(abc, abc)`, but if
@@ -220,8 +247,8 @@ Similarly, you could put the commands in a file and run
Want to write an app in your favorite language?! We'd be happy
to add you to our [ecosystem](https://github.com/tendermint/awesome#ecosystem)!
See [funding](https://github.com/interchainio/funding) opportunities from the
[Interchain Foundation](https://interchain.io/) for implementations in new languages and more.
TODO link to bounties page.
The `abci-cli` is designed strictly for testing and debugging. In a real
deployment, the role of sending messages is taken by Tendermint, which

View File

@@ -15,7 +15,7 @@ the block itself is never stored.
Each event contains a type and a list of attributes, which are key-value pairs
denoting something about what happened during the method's execution. For more
details on `Events`, see the
[ABCI](https://github.com/tendermint/tendermint/blob/v0.36.x/spec/abci/abci.md#events)
[ABCI](https://github.com/tendermint/tendermint/blob/master/spec/abci/abci.md#events)
documentation.
An `Event` has a composite key associated with it. A `compositeKey` is

120
docs/architecture/README.md Normal file
View File

@@ -0,0 +1,120 @@
---
order: 1
parent:
order: false
---
# Architecture Decision Records (ADR)
This is a location to record all high-level architecture decisions in the tendermint project.
You can read more about the ADR concept in this [blog post](https://product.reverb.com/documenting-architecture-decisions-the-reverb-way-a3563bb24bd0#.78xhdix6t).
An ADR should provide:
- Context on the relevant goals and the current state
- Proposed changes to achieve the goals
- Summary of pros and cons
- References
- Changelog
Note the distinction between an ADR and a spec. The ADR provides the context, intuition, reasoning, and
justification for a change in architecture, or for the architecture of something
new. The spec is a much more compressed and streamlined summary of everything as
it stands today.
If recorded decisions turned out to be lacking, convene a discussion, record the new decisions here, and then modify the code to match.
Note that the context/background should be written in the present tense.
## Table of Contents
### Implemented
- [ADR-001: Logging](./adr-001-logging.md)
- [ADR-002: Event-Subscription](./adr-002-event-subscription.md)
- [ADR-003: ABCI-APP-RPC](./adr-003-abci-app-rpc.md)
- [ADR-004: Historical-Validators](./adr-004-historical-validators.md)
- [ADR-005: Consensus-Params](./adr-005-consensus-params.md)
- [ADR-008: Priv-Validator](./adr-008-priv-validator.md)
- [ADR-009: ABCI-Design](./adr-009-ABCI-design.md)
- [ADR-010: Crypto-Changes](./adr-010-crypto-changes.md)
- [ADR-011: Monitoring](./adr-011-monitoring.md)
- [ADR-014: Secp-Malleability](./adr-014-secp-malleability.md)
- [ADR-015: Crypto-Encoding](./adr-015-crypto-encoding.md)
- [ADR-016: Protocol-Versions](./adr-016-protocol-versions.md)
- [ADR-017: Chain-Versions](./adr-017-chain-versions.md)
- [ADR-018: ABCI-Validators](./adr-018-ABCI-Validators.md)
- [ADR-019: Multisigs](./adr-019-multisigs.md)
- [ADR-020: Block-Size](./adr-020-block-size.md)
- [ADR-021: ABCI-Events](./adr-021-abci-events.md)
- [ADR-025: Commit](./adr-025-commit.md)
- [ADR-026: General-Merkle-Proof](./adr-026-general-merkle-proof.md)
- [ADR-033: Pubsub](./adr-033-pubsub.md)
- [ADR-034: Priv-Validator-File-Structure](./adr-034-priv-validator-file-structure.md)
- [ADR-043: Blockchain-RiRi-Org](./adr-043-blockchain-riri-org.md)
- [ADR-044: Lite-Client-With-Weak-Subjectivity](./adr-044-lite-client-with-weak-subjectivity.md)
- [ADR-046: Light-Client-Implementation](./adr-046-light-client-implementation.md)
- [ADR-047: Handling-Evidence-From-Light-Client](./adr-047-handling-evidence-from-light-client.md)
- [ADR-051: Double-Signing-Risk-Reduction](./adr-051-double-signing-risk-reduction.md)
- [ADR-052: Tendermint-Mode](./adr-052-tendermint-mode.md)
- [ADR-053: State-Sync-Prototype](./adr-053-state-sync-prototype.md)
- [ADR-054: Crypto-Encoding-2](./adr-054-crypto-encoding-2.md)
- [ADR-055: Protobuf-Design](./adr-055-protobuf-design.md)
- [ADR-056: Light-Client-Amnesia-Attacks](./adr-056-light-client-amnesia-attacks.md)
- [ADR-059: Evidence-Composition-and-Lifecycle](./adr-059-evidence-composition-and-lifecycle.md)
- [ADR-062: P2P-Architecture](./adr-062-p2p-architecture.md)
- [ADR-063: Privval-gRPC](./adr-063-privval-grpc.md)
- [ADR-066: E2E-Testing](./adr-066-e2e-testing.md)
- [ADR-072: Restore Requests for Comments](./adr-072-request-for-comments.md)
- [ADR-077: Block Retention](./adr-077-block-retention.md)
- [ADR-078: Non-zero Genesis](./adr-078-nonzero-genesis.md)
- [ADR-079: ED25519 Verification](./adr-079-ed25519-verification.md)
- [ADR-080: Reverse Sync](./adr-080-reverse-sync.md)
### Accepted
- [ADR-006: Trust-Metric](./adr-006-trust-metric.md)
- [ADR-024: Sign-Bytes](./adr-024-sign-bytes.md)
- [ADR-035: Documentation](./adr-035-documentation.md)
- [ADR-039: Peer-Behaviour](./adr-039-peer-behaviour.md)
- [ADR-060: Go-API-Stability](./adr-060-go-api-stability.md)
- [ADR-061: P2P-Refactor-Scope](./adr-061-p2p-refactor-scope.md)
- [ADR-065: Custom Event Indexing](./adr-065-custom-event-indexing.md)
- [ADR-068: Reverse-Sync](./adr-068-reverse-sync.md)
- [ADR-067: Mempool Refactor](./adr-067-mempool-refactor.md)
- [ADR-075: RPC Event Subscription Interface](./adr-075-rpc-subscription.md)
- [ADR-076: Combine Spec and Tendermint Repositories](./adr-076-combine-spec-repo.md)
- [ADR-081: Protocol Buffers Management](./adr-081-protobuf-mgmt.md)
### Deprecated
None
### Rejected
- [ADR-023: ABCI-Propose-tx](./adr-023-ABCI-propose-tx.md)
- [ADR-029: Check-Tx-Consensus](./adr-029-check-tx-consensus.md)
- [ADR-058: Event-Hashing](./adr-058-event-hashing.md)
### Proposed
- [ADR-007: Trust-Metric-Usage](./adr-007-trust-metric-usage.md)
- [ADR-012: Peer-Transport](./adr-012-peer-transport.md)
- [ADR-013: Symmetric-Crypto](./adr-013-symmetric-crypto.md)
- [ADR-022: ABCI-Errors](./adr-022-abci-errors.md)
- [ADR-030: Consensus-Refactor](./adr-030-consensus-refactor.md)
- [ADR-036: Empty Blocks via ABCI](./adr-036-empty-blocks-abci.md)
- [ADR-037: Deliver-Block](./adr-037-deliver-block.md)
- [ADR-038: Non-Zero-Start-Height](./adr-038-non-zero-start-height.md)
- [ADR-040: Blockchain Reactor Refactor](./adr-040-blockchain-reactor-refactor.md)
- [ADR-041: Proposer-Selection-via-ABCI](./adr-041-proposer-selection-via-abci.md)
- [ADR-042: State Sync Design](./adr-042-state-sync.md)
- [ADR-045: ABCI-Evidence](./adr-045-abci-evidence.md)
- [ADR-050: Improved Trusted Peering](./adr-050-improved-trusted-peering.md)
- [ADR-057: RPC](./adr-057-RPC.md)
- [ADR-064: Batch Verification](./adr-064-batch-verification.md)
- [ADR-069: Node Initialization](./adr-069-flexible-node-initialization.md)
- [ADR-071: Proposer-Based Timestamps](./adr-071-proposer-based-timestamps.md)
- [ADR-073: Adopt LibP2P](./adr-073-libp2p.md)
- [ADR-074: Migrate Timeout Parameters to Consensus Parameters](./adr-074-timeout-params.md)


@@ -0,0 +1,216 @@
# ADR 1: Logging
## Context
The current logging system in Tendermint is very static and not flexible enough.
Issues: [358](https://github.com/tendermint/tendermint/issues/358), [375](https://github.com/tendermint/tendermint/issues/375).
What we want from the new system:
- per package dynamic log levels
- dynamic logger setting (logger tied to the processing struct)
- conventions
- be more visually appealing
"dynamic" here means the ability to set smth in runtime.
## Decision
### 1) An interface
First, we will need an interface for all of our libraries (`tmlibs`, Tendermint, etc.). My personal preference is the go-kit `Logger` interface (see Appendix A), but that is too big a change. Plus we will still need levels.
```go
// log.go
type Logger interface {
	Debug(msg string, keyvals ...interface{}) error
	Info(msg string, keyvals ...interface{}) error
	Error(msg string, keyvals ...interface{}) error

	With(keyvals ...interface{}) Logger
}
```
On a side note: the difference between `Info` and `Notice` is subtle. We probably
could do without `Notice`. I don't think we need `Panic` or `Fatal` as part of
the interface either. These funcs could be implemented as helpers. In fact, we already
have some in `tmlibs/common`.
- `Debug` - extended output for devs
- `Info` - all that is useful for a user
- `Error` - errors
`Notice` should become `Info`, `Warn` either `Error` or `Debug` depending on the message, `Crit` -> `Error`.
This interface should go into `tmlibs/log`. All libraries which are part of the core (tendermint/tendermint) should obey it.
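For illustration, a minimal sketch of an implementation of this interface (the `nopLogger` name is hypothetical, not part of the proposal):
```go
// nopLogger is a hypothetical no-op implementation, useful as a safe
// default before a real logger has been set.
type nopLogger struct{}

func (nopLogger) Debug(msg string, keyvals ...interface{}) error { return nil }
func (nopLogger) Info(msg string, keyvals ...interface{}) error  { return nil }
func (nopLogger) Error(msg string, keyvals ...interface{}) error { return nil }
func (l nopLogger) With(keyvals ...interface{}) Logger           { return l }
```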
### 2) Logger with our current formatting
On top of this interface, we will need to implement a stdout logger, which will be used when Tendermint is configured to output logs to STDOUT.
Many people say that they like the current output, so let's stick with it.
```
NOTE[2017-04-25|14:45:08] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0
```
A couple of minor changes:
```
I[2017-04-25|14:45:08.322] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0
```
Notice the level is now encoded using only one character, and the timestamp includes milliseconds.
Note: there are many other formats out there like [logfmt](https://brandur.org/logfmt).
This logger could be implemented using any logging library - [logrus](https://github.com/sirupsen/logrus), [go-kit/log](https://github.com/go-kit/kit/tree/master/log), [zap](https://github.com/uber-go/zap), log15 - so long as it:
a) supports colored output<br>
b) is moderately fast (buffering)<br>
c) conforms to the new interface, or an adapter could be written for it<br>
d) is somewhat configurable<br>
go-kit is my favorite so far. Check out how easy it is to color errors in red: https://github.com/go-kit/kit/blob/master/log/term/example_test.go#L12. Note, though, that coloring can only be applied to the whole string :(
```
go-kit +: flexible, modular
go-kit “-”: logfmt format https://brandur.org/logfmt
logrus +: popular, feature rich (hooks), API and output is more like what we want
logrus -: not so flexible
```
```go
// tm_logger.go

// NewTmLogger returns a logger that encodes keyvals to the Writer in
// tm format.
func NewTmLogger(w io.Writer) Logger {
	return &tmLogger{kitlog.NewLogfmtLogger(w)}
}

// The parameter is named lvl to avoid shadowing the go-kit level package.
func (l *tmLogger) SetLevel(lvl string) {
	switch lvl {
	case "debug":
		l.sourceLogger = level.NewFilter(l.sourceLogger, level.AllowDebug())
	}
}

func (l *tmLogger) Info(msg string, keyvals ...interface{}) error {
	return l.sourceLogger.Log(append([]interface{}{"msg", msg}, keyvals...)...)
}

// log.go

// With delegates to the Logger interface's own With method.
func With(logger Logger, keyvals ...interface{}) Logger {
	return logger.With(keyvals...)
}
```
Usage:
```go
logger := log.NewTmLogger(os.Stdout)
logger.SetLevel(config.GetString("log_level"))
node.SetLogger(log.With(logger, "node", Name))
```
**Other log formatters**
In the future, we may want other formatters like JSONFormatter.
```
{ "level": "notice", "time": "2017-04-25 14:45:08.562471297 -0400 EDT", "module": "consensus", "msg": "ABCI Replay Blocks", "appHeight": 0, "storeHeight": 0, "stateHeight": 0 }
```
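If we build on go-kit, a sketch of such a formatter could look like the following, assuming the `tmLogger` wrapper above (`NewTmJSONLogger` is an illustrative name):
```go
// NewTmJSONLogger is an illustrative JSON variant built on go-kit:
// kitlog.NewJSONLogger encodes each record as a single JSON object.
func NewTmJSONLogger(w io.Writer) Logger {
	return &tmLogger{kitlog.NewJSONLogger(w)}
}
```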
### 3) Dynamic logger setting
https://dave.cheney.net/2017/01/23/the-package-level-logger-anti-pattern
This is the hardest part and where the most work will be done. The logger should be tied to the processing struct, or to the context, if it adds some fields to the logger.
```go
type BaseService struct {
	log     log15.Logger
	name    string
	started uint32 // atomic
	stopped uint32 // atomic
	...
}
```
BaseService already contains a `log` field, so most of the structs embedding it should be fine. We should rename it to `logger`.
The only thing missing is the ability to set the logger:
```go
func (bs *BaseService) SetLogger(l log.Logger) {
	bs.logger = l
}
```
### 4) Conventions
Important keyvals should go first. Example:
```
correct
I[2017-04-25|14:45:08.322] ABCI Replay Blocks module=consensus instance=1 appHeight=0 storeHeight=0 stateHeight=0
```
not
```
wrong
I[2017-04-25|14:45:08.322] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0 instance=1
```
For that, in most cases you'll need to add the `instance` field to a logger when creating it, not when you log a particular message:
```go
colorFn := func(keyvals ...interface{}) term.FgBgColor {
	for i := 0; i < len(keyvals)-1; i += 2 {
		if keyvals[i] == "instance" && keyvals[i+1] == 1 {
			return term.FgBgColor{Fg: term.Blue}
		} else if keyvals[i] == "instance" && keyvals[i+1] == 2 {
			return term.FgBgColor{Fg: term.Red}
		}
	}
	return term.FgBgColor{}
}
logger := term.NewLogger(os.Stdout, log.NewTmLogger, colorFn)

c1 := NewConsensusReactor(...)
c1.SetLogger(log.With(logger, "instance", 1))

c2 := NewConsensusReactor(...)
c2.SetLogger(log.With(logger, "instance", 2))
```
## Status
Implemented
## Consequences
### Positive
A dynamic logger, which can be turned off for some modules at runtime, and a public interface for other projects using Tendermint libraries.
### Negative
We may lose the ability to color keys in key-value pairs. go-kit allows you to easily change the foreground/background colors of the whole string, but not of its parts.
### Neutral
## Appendix A.
I really like the minimalistic approach go-kit took with its logger (https://github.com/go-kit/kit/tree/master/log):
```go
type Logger interface {
	Log(keyvals ...interface{}) error
}
```
See [The Hunt for a Logger Interface](https://web.archive.org/web/20210902161539/https://go-talks.appspot.com/github.com/ChrisHines/talks/structured-logging/structured-logging.slide#1). The advantage is greater composability (check out how go-kit defines colored logging or log-leveled logging on top of this interface https://github.com/go-kit/kit/tree/master/log).


@@ -0,0 +1,88 @@
# ADR 2: Event Subscription
## Context
In the light client (or any other client), the user may want to **subscribe to
a subset of transactions** (rather than all of them) using `/subscribe?event=X`. For
example, I want to subscribe for all transactions associated with a particular
account. Same for fetching. The user may want to **fetch transactions based on
some filter** (rather than fetching all the blocks). For example, I want to get
all transactions for a particular account in the last two weeks (`tx's block time >= '2017-06-05'`).
Now you can't even subscribe to "all txs" in Tendermint.
The goal is a simple and easy to use API for doing that.
![Tx Send Flow Diagram](img/tags1.png)
## Decision
The ABCI app returns tags with a `DeliverTx` response inside the `data` field (_for
now; later we may create a separate field_). Tags are a list of key-value pairs,
protobuf encoded.
Example data:
```json
{
    "abci.account.name": "Igor",
    "abci.account.address": "0xdeadbeef",
    "tx.gas": 7
}
```
### Subscribing for transactions events
If the user wants to receive only a subset of transactions, the ABCI app must
return a list of tags with a `DeliverTx` response. These tags will be parsed and
matched with the current queries (subscribers). If a query matches the tags,
the subscriber will get the transaction event.
```
/subscribe?query="tm.event = Tx AND tx.hash = AB0023433CF0334223212243BDD AND abci.account.invoice.number = 22"
```
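The exact query language is not pinned down here; as a rough sketch of the matching step, assuming tags arrive as a string map and considering only `AND`-joined equality clauses (`matchQuery` is an illustrative name):
```go
import "strings"

// matchQuery is a sketch of the matching step, illustrative only: it
// handles AND-joined equality clauses and nothing else (no CONTAINS,
// no ranges). The real query language needs a proper parser.
func matchQuery(query string, tags map[string]string) bool {
	for _, clause := range strings.Split(query, " AND ") {
		parts := strings.SplitN(clause, " = ", 2)
		if len(parts) != 2 {
			return false // unsupported clause in this sketch
		}
		key := strings.TrimSpace(parts[0])
		want := strings.TrimSpace(parts[1])
		if tags[key] != want {
			return false
		}
	}
	return true
}
```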
A new package must be developed to replace the current `events` package. It
will allow clients to subscribe to different types of events in the future:
```
/subscribe?query="abci.account.invoice.number = 22"
/subscribe?query="abci.account.invoice.owner CONTAINS Igor"
```
### Fetching transactions
This is a bit tricky because a) we want to support a number of indexers, all of
which have different APIs, and b) we don't know whether tags will be sufficient
for most apps (I guess we'll see).
```
/txs/search?query="tx.hash = AB0023433CF0334223212243BDD AND abci.account.owner CONTAINS Igor"
/txs/search?query="abci.account.owner = Igor"
```
For historic queries we will need an indexing storage (Postgres, SQLite, ...).
### Issues
- https://github.com/tendermint/tendermint/issues/376
- https://github.com/tendermint/tendermint/issues/287
- https://github.com/tendermint/tendermint/issues/525 (related)
## Status
Implemented
## Consequences
### Positive
- same format for event notifications and search APIs
- powerful enough query
### Negative
- performance of the `match` function (when we have too many queries/subscribers)
- potential issues when there are too many txs in the DB
### Neutral


@@ -0,0 +1,34 @@
# ADR 3: Must an ABCI-app have an RPC server?
## Context
ABCI-server could expose its own RPC-server and act as a proxy to Tendermint.
The idea was for the Tendermint RPC to just be a transparent proxy to the app.
Clients need to talk to Tendermint for proofs, unless we burden all app devs
with exposing Tendermint proof stuff. Also seems less complex to lock down one
server than two, but granted it makes querying a bit more kludgy since it needs
to be passed as a `Query`. Also, **having a very standard rpc interface means
the light-client can work with all apps and handle proofs**. The only
app-specific logic is decoding the binary data to a more readable form (eg.
json). This is a huge advantage for code-reuse and standardization.
## Decision
We don't expose an RPC server on any of our ABCI-apps.
## Status
Implemented
## Consequences
### Positive
- Unified interface for all apps
### Negative
- `Query` interface
### Neutral


@@ -0,0 +1,38 @@
# ADR 004: Historical Validators
## Context
Right now, we can query the present validator set, but there is no history.
If you were offline for a long time, there is no way to reconstruct past validators. This is needed for the light client and we agreed needs enhancement of the API.
## Decision
For every block, store a new structure that contains either the latest validator set,
or the height of the last block for which the validator set changed. Note this is not
the height of the block which returned the validator set change itself, but the next block,
ie. the first block it comes into effect for.
Storing the validators will be handled by the `state` package.
At some point in the future, we may consider more efficient storage in the case where the validators
are updated frequently - for instance by only saving the diffs, rather than the whole set.
An alternative approach suggested keeping the validator set, or diffs of it, in a merkle IAVL tree.
While it might afford cheaper proofs that a validator set has not changed, it would be more complex,
and likely less efficient.
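A minimal sketch of such a per-height record, assuming the existing `types.ValidatorSet` (the `ValidatorsInfo` name is illustrative, not the final `state` package API):
```go
// ValidatorsInfo is an illustrative per-height record: it holds either
// the full validator set (at heights where it changed) or just the
// height at which the set last changed.
type ValidatorsInfo struct {
	ValidatorSet      *types.ValidatorSet // nil if unchanged since LastHeightChanged
	LastHeightChanged int64
}
```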
## Status
Implemented
## Consequences
### Positive
- Can query old validator sets, with proof.
### Negative
- Writes an extra structure to disk with every block.
### Neutral


@@ -0,0 +1,85 @@
# ADR 005: Consensus Params
## Context
Consensus critical parameters controlling blockchain capacity have until now been hard coded, loaded from a local config, or neglected.
Since they may need to be different in different networks, and potentially evolve over time within
networks, we seek to initialize them in a genesis file, and expose them through the ABCI.
While we have some specific parameters now, like maximum block and transaction size, we expect to have more in the future,
such as a period over which evidence is valid, or the frequency of checkpoints.
## Decision
### ConsensusParams
No consensus critical parameters should ever be found in the `config.toml`.
A new `ConsensusParams` is optionally included in the `genesis.json` file,
and loaded into the `State`. Any items not included are set to their default value.
A value of 0 is undefined (see ABCI, below). A value of -1 is used to indicate the parameter does not apply.
The parameters are used to determine the validity of a block (and tx) via the union of all relevant parameters.
```
type ConsensusParams struct {
    BlockSize
    TxSize
    BlockGossip
}

type BlockSize struct {
    MaxBytes int
    MaxTxs   int
    MaxGas   int
}

type TxSize struct {
    MaxBytes int
    MaxGas   int
}

type BlockGossip struct {
    BlockPartSizeBytes int
}
```
The `ConsensusParams` can evolve over time by adding new structs that cover different aspects of the consensus rules.
The `BlockPartSizeBytes` and the `BlockSize.MaxBytes` are enforced to be greater than 0.
The former because we need a part size, the latter so that we always have at least some sanity check over the size of blocks.
### ABCI
#### InitChain
InitChain currently takes the initial validator set. It should be extended to also take parts of the ConsensusParams.
There is some case to be made for it to take the entire Genesis, except there may be things in the genesis,
like the BlockPartSize, that the app shouldn't really know about.
#### EndBlock
The EndBlock response includes a `ConsensusParams`, which includes BlockSize and TxSize, but not BlockGossip.
Other param structs can be added to `ConsensusParams` in the future.
The `0` value is used to denote no change.
Any other value will update that parameter in the `State.ConsensusParams`, to be applied for the next block.
Tendermint should have hard-coded upper limits as sanity checks.
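A sketch of the update rule, assuming the structs above (`applyParamUpdates` is an illustrative name, not the final API):
```go
// applyParamUpdates is an illustrative sketch of the EndBlock rule:
// a zero value means "no change", so only non-zero fields overwrite
// the current consensus params for the next block.
func applyParamUpdates(cur, upd ConsensusParams) ConsensusParams {
	if upd.BlockSize.MaxBytes > 0 {
		cur.BlockSize.MaxBytes = upd.BlockSize.MaxBytes
	}
	if upd.BlockSize.MaxTxs > 0 {
		cur.BlockSize.MaxTxs = upd.BlockSize.MaxTxs
	}
	if upd.TxSize.MaxBytes > 0 {
		cur.TxSize.MaxBytes = upd.TxSize.MaxBytes
	}
	if upd.TxSize.MaxGas > 0 {
		cur.TxSize.MaxGas = upd.TxSize.MaxGas
	}
	return cur
}
```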
## Status
Implemented
## Consequences
### Positive
- Alternative capacity limits and consensus parameters can be specified without re-compiling the software.
- They can also change over time under the control of the application
### Negative
- More exposed parameters means more complexity
- Different rules at different heights in the blockchain complicates fast sync
### Neutral
- The TxSize, which checks validity, may be in conflict with the config's `max_block_size_tx`, which determines proposal sizes


@@ -0,0 +1,229 @@
# ADR 006: Trust Metric Design
## Context
The proposed trust metric will allow Tendermint to maintain local trust rankings for peers it has directly interacted with, which can then be used to implement soft security controls. The calculations were obtained from the [TrustGuard](https://dl.acm.org/citation.cfm?id=1060808) project.
### Background
The Tendermint Core project developers would like to improve Tendermint security and reliability by keeping track of the level of trustworthiness peers have demonstrated within the peer-to-peer network. This way, undesirable outcomes from peers will not immediately result in them being dropped from the network (potentially causing drastic changes to take place). Instead, peers' behavior can be monitored with appropriate metrics and they can be removed from the network once Tendermint Core is certain the peer is a threat. For example, when the PEXReactor makes a request for peers' network addresses from an already known peer, and the returned network addresses are unreachable, this untrustworthy behavior should be tracked. Returning a few bad network addresses probably shouldn't cause a peer to be dropped, while excessive amounts of this behavior do qualify the peer for being dropped.
Trust metrics can be circumvented by malicious nodes through the use of strategic oscillation techniques, which adapt the malicious node's behavior pattern in order to maximize its goals. For instance, if the malicious node learns that the time interval of the Tendermint trust metric is _X_ hours, then it could wait _X_ hours in-between malicious activities. We could try to combat this issue by increasing the interval length, yet this will make the system less adaptive to recent events.
Instead, having shorter intervals, but keeping a history of interval values, will give our metric the flexibility needed in order to keep the network stable, while also making it resilient against a strategic malicious node in the Tendermint peer-to-peer network. Also, the metric can access trust data over a rather long period of time while not greatly increasing its history size by aggregating older history values over a larger number of intervals, and at the same time, maintain great precision for the recent intervals. This approach is referred to as fading memories, and closely resembles the way human beings remember their experiences. The trade-off to using history data is that the interval values should be preserved in-between executions of the node.
### References
S. Mudhakar, L. Xiong, and L. Liu, “TrustGuard: Countering Vulnerabilities in Reputation Management for Decentralized Overlay Networks,” in _Proceedings of the 14th international conference on World Wide Web, pp. 422-431_, May 2005.
## Decision
The proposed trust metric will allow a developer to inform the trust metric store of all good and bad events relevant to a peer's behavior, and at any time, the metric can be queried for a peer's current trust ranking.
The three subsections below will cover the process being considered for calculating the trust ranking, the concept of the trust metric store, and the interface for the trust metric.
### Proposed Process
The proposed trust metric will count good and bad events relevant to the object, and calculate the percent of counters that are good over an interval with a predefined duration. This is the procedure that will continue for the life of the trust metric. When the trust metric is queried for the current **trust value**, a resilient equation will be utilized to perform the calculation.
The equation being proposed resembles a Proportional-Integral-Derivative (PID) controller used in control systems. The proportional component allows us to be sensitive to the value of the most recent interval, while the integral component allows us to incorporate trust values stored in the history data, and the derivative component allows us to give weight to sudden changes in the behavior of a peer. We compute the trust value of a peer in interval _i_ based on its current trust ranking, its trust rating history prior to interval _i_ (over the past _maxH_ number of intervals) and its trust ranking fluctuation. We will break up the equation into the three components.
```math
(1) Proportional Value = a * R[i]
```
where _R_[*i*] denotes the raw trust value at time interval _i_ (where _i_ == 0 being current time) and _a_ is the weight applied to the contribution of the current reports. The next component of our equation uses a weighted sum over the last _maxH_ intervals to calculate the history value for time _i_:
`H[i] =` ![formula1](img/formula1.png "Weighted Sum Formula")
The weights can be chosen either optimistically or pessimistically. An optimistic weight creates larger weights for newer history data values, while the pessimistic weight creates larger weights for time intervals with lower scores. The default weights used during the calculation of the history value are optimistic and calculated as _Wk_ = 0.8^_k_, for time interval _k_. With the history value available, we can now finish calculating the integral value:
```math
(2) Integral Value = b * H[i]
```
Where _H_[*i*] denotes the history value at time interval _i_ and _b_ is the weight applied to the contribution of past performance for the object being measured. The derivative component will be calculated as follows:
```math
D[i] = R[i] - H[i]
(3) Derivative Value = c(D[i]) * D[i]
```
Where the value of _c_ is selected based on the _D_[*i*] value relative to zero. The default selection process makes _c_ equal to 0 unless _D_[*i*] is a negative value, in which case c is equal to 1. The result is that the maximum penalty is applied when current behavior is lower than previously experienced behavior. If the current behavior is better than the previously experienced behavior, then the Derivative Value has no impact on the trust value. With the three components brought together, our trust value equation is calculated as follows:
```math
TrustValue[i] = a * R[i] + b * H[i] + c(D[i]) * D[i]
```
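For illustration, the full equation as straight-line code; the weights shown here are placeholders, not the tested defaults:
```go
// trustValue computes TrustValue[i] = a*R[i] + b*H[i] + c(D[i])*D[i].
// The weights are illustrative; the tested defaults live in DefaultConfig.
func trustValue(r, h float64) float64 {
	const a, b = 0.4, 0.6 // proportional and integral weights, a + b = 1
	d := r - h            // derivative term
	c := 0.0              // c(D) = 1 only when current behavior < history
	if d < 0 {
		c = 1.0
	}
	return a*r + b*h + c*d
}
```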
As a performance optimization that will keep the amount of raw interval data being saved to a reasonable size of _m_, while allowing us to represent 2^_m_ - 1 history intervals, we can employ the fading memories technique that will trade space and time complexity for the precision of the history data values by summarizing larger quantities of less recent values. While our equation above attempts to access up to _maxH_ (which can be 2^_m_ - 1), we will map those requests down to _m_ values using equation 4 below:
```math
(4) j = floor(log2(index)), where index > 0
```
Where _j_ is one of the _(0, 1, 2, …, m - 1)_ indices used to access history interval data. Now we can access the raw intervals using the following calculations:
```math
R[0] = raw data for current time interval
```
`R[j] =` ![formula2](img/formula2.png "Fading Memories Formula")
### Trust Metric Store
Similar to the P2P subsystem AddrBook, the trust metric store will maintain information relevant to Tendermint peers. Additionally, the trust metric store will ensure that trust metrics will only be active for peers that a node is currently and directly engaged with.
Reactors will provide a peer key to the trust metric store in order to retrieve the associated trust metric. The trust metric can then record new positive and negative events experienced by the reactor, as well as provide the current trust score calculated by the metric.
When the node is shutting down, the trust metric store will save history data for trust metrics associated with all known peers. This saved information allows experiences with a peer to be preserved across node executions, which can span a tracking window of days or weeks. The trust history data is loaded automatically during OnStart.
### Interface Detailed Design
Each trust metric allows for the recording of positive/negative events, querying the current trust value/score, and the stopping/pausing of tracking over time intervals. This can be seen below:
```go
// TrustMetric - keeps track of peer reliability
type TrustMetric struct {
	// Private elements.
}

// Pause tells the metric to pause recording data over time intervals.
// All method calls that indicate events will unpause the metric.
func (tm *TrustMetric) Pause() {}

// Stop tells the metric to stop recording data over time intervals.
func (tm *TrustMetric) Stop() {}

// BadEvents indicates that one or more undesirable events took place.
func (tm *TrustMetric) BadEvents(num int) {}

// GoodEvents indicates that one or more desirable events took place.
func (tm *TrustMetric) GoodEvents(num int) {}

// TrustValue gets the dependable trust value; always between 0 and 1.
func (tm *TrustMetric) TrustValue() float64 {}

// TrustScore gets a score based on the trust value, always between 0 and 100.
func (tm *TrustMetric) TrustScore() int {}

// NewMetric returns a trust metric with the default configuration.
func NewMetric() *TrustMetric {}

//------------------------------------------------------------------------------------------------
// For example
tm := NewMetric()
tm.BadEvents(1)
score := tm.TrustScore()
tm.Stop()
```
Some of the trust metric parameters can be configured. The weight values should probably be left alone in most cases, yet the time durations for the tracking window and individual time interval should be considered.
```go
// TrustMetricConfig - Configures the weight functions and time intervals for the metric
type TrustMetricConfig struct {
	// Determines the percentage given to current behavior
	ProportionalWeight float64

	// Determines the percentage given to prior behavior
	IntegralWeight float64

	// The window of time that the trust metric will track events across.
	// This can be set to cover many days without issue
	TrackingWindow time.Duration

	// Each interval should be short for adaptability.
	// Less than 30 seconds is too sensitive,
	// and greater than 5 minutes will make the metric numb
	IntervalLength time.Duration
}

// DefaultConfig returns a config with values that have been tested and produce desirable results
func DefaultConfig() TrustMetricConfig {}

// NewMetricWithConfig returns a trust metric with a custom configuration
func NewMetricWithConfig(tmc TrustMetricConfig) *TrustMetric {}

//------------------------------------------------------------------------------------------------
// For example
config := TrustMetricConfig{
	TrackingWindow: time.Minute * 60 * 24, // one day
	IntervalLength: time.Minute * 2,
}
tm := NewMetricWithConfig(config)
tm.BadEvents(10)
tm.Pause()
tm.GoodEvents(1) // becomes active again
```
A trust metric store should be created with a DB that has persistent storage so it can save history data across node executions. All trust metrics instantiated by the store will be created with the provided TrustMetricConfig configuration.
When you attempt to fetch the trust metric for a peer, and an entry does not exist in the trust metric store, a new metric is automatically created and the entry made within the store.
In addition to the fetching method, GetPeerTrustMetric, the trust metric store provides a method to call when a peer has disconnected from the node. This is so the metric can be paused (history data will not be saved) for periods of time when the node is not having direct experiences with the peer.
```go
// TrustMetricStore - Manages all trust metrics for peers
type TrustMetricStore struct {
	cmn.BaseService

	// Private elements
}

// OnStart implements Service
func (tms *TrustMetricStore) OnStart(context.Context) error { return nil }

// OnStop implements Service
func (tms *TrustMetricStore) OnStop() {}

// NewTrustMetricStore returns a store that saves data to the DB
// and uses the config when creating new trust metrics
func NewTrustMetricStore(db dbm.DB, tmc TrustMetricConfig) *TrustMetricStore {}

// Size returns the number of entries in the trust metric store
func (tms *TrustMetricStore) Size() int {}

// GetPeerTrustMetric returns a trust metric by peer key
func (tms *TrustMetricStore) GetPeerTrustMetric(key string) *TrustMetric {}

// PeerDisconnected pauses the trust metric associated with the peer identified by the key
func (tms *TrustMetricStore) PeerDisconnected(key string) {}

//------------------------------------------------------------------------------------------------
// For example
db := dbm.NewDB("trusthistory", "goleveldb", dirPathStr)
tms := NewTrustMetricStore(db, DefaultConfig())
tm := tms.GetPeerTrustMetric(key)
tm.BadEvents(1)
tms.PeerDisconnected(key)
```
## Status
Approved.
## Consequences
### Positive
- The trust metric will allow Tendermint to make non-binary security and reliability decisions
- Will help Tendermint implement deterrents that provide soft security controls, yet avoids disruption on the network
- Will provide useful profiling information when analyzing performance over time related to peer interaction
### Negative
- Requires saving the trust metric history data across node executions
### Neutral
- Keep in mind that, with this implementation, good events need to be recorded just as bad events are.


@@ -0,0 +1,106 @@
# ADR 007: Trust Metric Usage Guide
## Context
Tendermint is required to monitor peer quality in order to inform its peer dialing and peer exchange strategies.
When a node first connects to the network, it is important that it can quickly find good peers.
Thus, while a node has fewer connections, it should prioritize connecting to higher quality peers.
As the node becomes well connected to the rest of the network, it can dial lesser known or lesser
quality peers and help assess their quality. Similarly, when queried for peers, a node should make
sure it doesn't return low quality peers.
Peer quality can be tracked using a trust metric that flags certain behaviours as good or bad. When enough
bad behaviour accumulates, we can mark the peer as bad and disconnect.
For example, when the PEXReactor makes a request for peers' network addresses from an already known peer, and the returned network addresses are unreachable, this undesirable behavior should be tracked. Returning a few bad network addresses probably shouldn't cause a peer to be dropped, while excessive amounts of this behavior do qualify the peer for removal. The originally proposed approach and design document for the trust metric can be found in the [ADR 006](adr-006-trust-metric.md) document.
The trust metric implementation allows a developer to obtain a peer's trust metric from a trust metric store, and track good and bad events relevant to a peer's behavior, and at any time, the peer's metric can be queried for a current trust value. The current trust value is calculated with a formula that utilizes current behavior, previous behavior, and change between the two. Current behavior is calculated as the percentage of good behavior within a time interval. The time interval is short; probably set between 30 seconds and 5 minutes. On the other hand, the historic data can estimate a peer's behavior over days worth of tracking. At the end of a time interval, the current behavior becomes part of the historic data, and a new time interval begins with the good and bad counters reset to zero.
These are some important things to keep in mind regarding how the trust metrics handle time intervals and scoring:
- Each new time interval begins with a perfect score
- Bad events quickly bring the score down and good events cause the score to slowly rise
- When the time interval is over, the percentage of good events becomes historic data.
Some useful information about the inner workings of the trust metric:
- When a trust metric is first instantiated, a timer (ticker) periodically fires in order to handle transitions between trust metric time intervals
- If a peer is disconnected from a node, the timer should be paused, since the node is no longer connected to that peer
- The ability to pause the metric is supported with the store **PeerDisconnected** method and the metric **Pause** method
- After a pause, if a good or bad event method is called on a metric, it automatically becomes unpaused and begins a new time interval.
## Decision
The trust metric capability is now available, yet it still leaves the question: how should it be applied throughout Tendermint in order to properly track the quality of peers?
### Proposed Process
Peers are managed using an address book and a trust metric:
- The address book keeps a record of peers and provides selection methods
- The trust metric tracks the quality of the peers
#### Presence in Address Book
Outbound peers are added to the address book before they are dialed,
and inbound peers are added once the peer connection is set up.
Peers are also added to the address book when they are received in response to
a pexRequestMessage.
While a node has less than `needAddressThreshold`, it will periodically request more,
via pexRequestMessage, from randomly selected peers and from newly dialed outbound peers.
When a new address is added to an address book that has more than `0.5*needAddressThreshold` addresses,
then with some low probability, a randomly chosen low quality peer is removed.
#### Outbound Peers
Peers attempt to maintain a minimum number of outbound connections by
repeatedly querying the address book for peers to connect to.
While a node has few to no outbound connections, the address book is biased to return
higher quality peers. As the node increases the number of outbound connections,
the address book is biased to return less-vetted or lower-quality peers.
#### Inbound Peers
Peers also maintain a maximum number of total connections, MaxNumPeers.
If a peer has MaxNumPeers, new incoming connections will be accepted with low probability.
When such a new connection is accepted, the peer disconnects from a probabilistically chosen low ranking peer
so it does not exceed MaxNumPeers.
#### Peer Exchange
When a peer receives a pexRequestMessage, it returns a random sample of high quality peers from the address book. Peers with no score or low score should not be included in a response to pexRequestMessage.
#### Peer Quality
Peer quality is tracked in the connection and across the reactors by storing the TrustMetric in the peer's
thread safe Data store.
Peer behaviour is then defined as one of the following:
- Fatal - something outright malicious that causes us to disconnect the peer and ban it from the address book for some amount of time
- Bad - Any kind of timeout, messages that don't unmarshal, fail other validity checks, or messages we didn't ask for or aren't expecting (usually worth one bad event)
- Neutral - Unknown channels/message types/version upgrades (no good or bad events recorded)
- Correct - Normal correct behavior (worth one good event)
- Good - some random majority of peers per reactor sending us useful messages (worth more than one good event).
Note that Fatal behaviour causes us to remove the peer, and neutral behaviour does not affect the score.
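A sketch of how a reactor could translate these behaviour classes into trust metric events, reusing the ADR 006 interface (`recordBehaviour` and the event weights are illustrative):
```go
// recordBehaviour is an illustrative mapping from the behaviour classes
// above onto ADR 006 trust metric events. Fatal is handled separately
// (disconnect and ban), and Neutral records nothing.
func recordBehaviour(tm *TrustMetric, class string) {
	switch class {
	case "bad":
		tm.BadEvents(1)
	case "correct":
		tm.GoodEvents(1)
	case "good":
		tm.GoodEvents(2) // worth more than one good event
	}
}
```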
## Status
Proposed.
## Consequences
### Positive
- Bringing the address book and trust metric store together will cause the network to be built in a way that encourages greater security and reliability.
### Negative
- TBD
### Neutral
- Keep in mind that, with this implementation, good events need to be recorded just as bad events are.


@@ -0,0 +1,35 @@
# ADR 008: SocketPV
Tendermint nodes should support only two in-process PrivValidator
implementations:
- FilePV uses an unencrypted private key in a "priv_validator.json" file - no
configuration required (just `tendermint init validator`).
- TCPVal and IPCVal use TCP and Unix sockets respectively to send signing requests
to another process - the user is responsible for starting that process themselves.
Both TCPVal and IPCVal addresses can be provided via flags at the command line
or in the configuration file; TCPVal addresses must be of the form
`tcp://<ip_address>:<port>` and IPCVal addresses `unix:///path/to/file.sock` -
doing so will cause Tendermint to ignore any private validator files.
TCPVal will listen on the given address for incoming connections from an external
private validator process. It will halt any operation until at least one external
process has successfully connected.
The external priv_validator process will dial the address to connect to
Tendermint, and then Tendermint will send requests on the ensuing connection to
sign votes and proposals. Thus the external process initiates the connection,
but the Tendermint process makes all requests. In a later stage we're going to
support multiple validators for fault tolerance. To prevent double signing they
need to be synced, which is deferred to an external solution (see #1185).
Conversely, IPCVal will make an outbound connection to an existing socket opened
by the external validator process.
In addition, Tendermint will provide implementations that can be run in that
external process. These include:
- FilePV will encrypt the private key, and the user must enter a password to
  decrypt the key when the process is started.
- LedgerPV uses a Ledger Nano S to handle all signing.


@@ -0,0 +1,271 @@
# ADR 009: ABCI UX Improvements
## Changelog
23-06-2018: Some minor fixes from review
07-06-2018: Some updates based on discussion with Jae
07-06-2018: Initial draft to match what was released in ABCI v0.11
## Context
The ABCI was first introduced in late 2015. Its purpose is to be:
- a generic interface between state machines and their replication engines
- agnostic to the language the state machine is written in
- agnostic to the replication engine that drives it
This means ABCI should provide an interface for both pluggable applications and
pluggable consensus engines.
To achieve this, it uses Protocol Buffers (proto3) for message types. The dominant
implementation is in Go.
After some recent discussions with the community on github, the following were
identified as pain points:
- Amino encoded types
- Managing validator sets
- Imports in the protobuf file
See the [references](#references) for more.
### Imports
The native proto library in Go generates inflexible and verbose code.
Many in the Go community have adopted a fork called
[gogoproto](https://github.com/gogo/protobuf) that provides a
variety of features aimed to improve the developer experience.
While `gogoproto` is nice, it creates an additional dependency, and compiling
the protobuf types for other languages has been reported to fail when `gogoproto` is used.
### Amino
Amino is an encoding protocol designed to improve on the insufficiencies of protobuf.
Its goal is to be proto4.
Many people are frustrated by incompatibility with protobuf,
and with the requirement for Amino to be used at all within ABCI.
We intend to make Amino successful enough that we can eventually use it for ABCI
message types directly. By then it should be called proto4. In the meantime,
we want it to be easy to use.
### PubKey
PubKeys are encoded using Amino (and before that, go-wire).
Ideally, PubKeys are an interface type where we don't know all the
implementation types, so it's unfitting to use `oneof` or `enum`.
### Addresses
The address for ED25519 pubkey is the RIPEMD160 of the Amino
encoded pubkey. This introduces an Amino dependency in the address generation,
a functionality that is widely required and should be as easy to compute as
possible.
### Validators
To change the validator set, applications can return a list of validator updates
with ResponseEndBlock. In these updates, the public key _must_ be included,
because Tendermint requires the public key to verify validator signatures. This
means ABCI developers have to work with PubKeys. That said, it would also be
convenient to work with address information, and for it to be simple to do so.
### AbsentValidators
Tendermint also provides a list of validators in BeginBlock who did not sign the
last block. This allows applications to reflect availability behaviour in the
application, for instance by punishing validators for not having votes included
in commits.
### InitChain
Tendermint passes in a list of validators here, and nothing else. It would
benefit the application to be able to control the initial validator set. For
instance the genesis file could include application-based information about the
initial validator set that the application could process to determine the
initial validator set. Additionally, InitChain would benefit from getting all
the genesis information.
### Header
ABCI provides the Header in RequestBeginBlock so the application can have
important information about the latest state of the blockchain.
## Decision
### Imports
Move away from gogoproto. In the short term, we will just maintain a second
protobuf file without the gogoproto annotations. In the medium term, we will
make copies of all the structs in Golang and shuttle back and forth. In the long
term, we will use Amino.
### Amino
To simplify ABCI application development in the short term,
Amino will be completely removed from the ABCI:
- It will not be required for PubKey encoding
- It will not be required for computing PubKey addresses
That said, we are working to make Amino a huge success, and to become proto4.
To facilitate adoption and cross-language compatibility in the near-term, Amino
v1 will:
- be fully compatible with the subset of proto3 that excludes `oneof`
- use the Amino prefix system to provide interface types, as opposed to `oneof`
style union types.
That said, an Amino v2 will be worked on to improve the performance of the
format and its useability in cryptographic applications.
### PubKey
Encoding schemes infect software. As a generic middleware, ABCI aims to have
some cross scheme compatibility. For this it has no choice but to include opaque
bytes from time to time. While we will not enforce Amino encoding for these
bytes yet, we need to provide a type system. The simplest way to do this is to
use a type string.
PubKey will now look like:
```
message PubKey {
    string type
    bytes  data
}
```
where `type` can be:
- "ed225519", with `data = <raw 32-byte pubkey>`
- "secp256k1", with `data = <33-byte OpenSSL compressed pubkey>`
As we want to retain flexibility here, and since ideally, PubKey would be an
interface type, we do not use `enum` or `oneof`.
### Addresses
To simplify and improve computing addresses, we change it to the first 20 bytes of the SHA256
of the raw 32-byte public key.
We continue to use the Bitcoin address scheme for secp256k1 keys.
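A sketch of the proposed address derivation for ed25519 keys (only `crypto/sha256` from the standard library is assumed):
```go
import "crypto/sha256"

// address sketches the proposed scheme for ed25519 keys: the first
// 20 bytes of SHA256 of the raw 32-byte public key, with no Amino
// involved.
func address(pubKey [32]byte) []byte {
	h := sha256.Sum256(pubKey[:])
	return h[:20]
}
```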
### Validators
Add a `bytes address` field:
```
message Validator {
    bytes  address
    PubKey pub_key
    int64  power
}
```
### RequestBeginBlock and AbsentValidators
To simplify this, RequestBeginBlock will include the complete validator set,
including the address, and voting power of each validator, along
with a boolean for whether or not they voted:
```
message RequestBeginBlock {
    bytes  hash
    Header header
    LastCommitInfo last_commit_info
    repeated Evidence byzantine_validators
}

message LastCommitInfo {
    int32 CommitRound
    repeated SigningValidator validators
}

message SigningValidator {
    Validator validator
    bool      signed_last_block
}
```
Note that in Validators in RequestBeginBlock, we DO NOT include public keys. Public keys are
larger than addresses and in the future, with quantum computers, will be much
larger. The overhead of passing them, especially during fast-sync, is
significant.
Additionally, addresses are changing to be simpler to compute, further removing
the need to include pubkeys here.
In short, ABCI developers must be aware of both addresses and public keys.
### ResponseEndBlock
Since ResponseEndBlock includes Validator, it must now include their address.
### InitChain
Change RequestInitChain to give the app all the information from the genesis file:
```
message RequestInitChain {
    int64  time
    string chain_id
    ConsensusParams consensus_params
    repeated Validator validators
    bytes  app_state_bytes
}
```
Change ResponseInitChain to allow the app to specify the initial validator set
and consensus parameters.
```
message ResponseInitChain {
    ConsensusParams consensus_params
    repeated Validator validators
}
```
### Header
Now that Tendermint Amino will be compatible with proto3, the Header in ABCI
should exactly match the Tendermint header - they will then be encoded
identically in ABCI and in Tendermint Core.
## Status
Implemented
## Consequences
### Positive
- Easier for developers to build on the ABCI
- ABCI and Tendermint headers are identically serialized
### Negative
- Maintenance overhead of alternative type encoding scheme
- Performance overhead of passing all validator info every block (at least it's
only addresses, and not also pubkeys)
- Maintenance overhead of duplicate types
### Neutral
- ABCI developers must know about validator addresses
## References
- [ABCI v0.10.3 Specification (before this
proposal)](https://github.com/tendermint/abci/blob/v0.10.3/specification.rst)
- [ABCI v0.11.0 Specification (implementing first draft of this
proposal)](https://github.com/tendermint/abci/blob/v0.11.0/specification.md)
- [Ed25519 addresses](https://github.com/tendermint/go-crypto/issues/103)
- [InitChain contains the
Genesis](https://github.com/tendermint/abci/issues/216)
- [PubKeys](https://github.com/tendermint/tendermint/issues/1524)
- [Notes on
Header](https://github.com/tendermint/tendermint/issues/1605)
- [Gogoproto issues](https://github.com/tendermint/abci/issues/256)
- [Absent Validators](https://github.com/tendermint/abci/issues/231)


@@ -0,0 +1,77 @@
# ADR 010: Crypto Changes
## Context
Tendermint is a cryptographic protocol that uses and composes a variety of cryptographic primitives.
After nearly 4 years of development, Tendermint has recently undergone multiple security reviews to search for vulnerabilities and to assess the use and composition of cryptographic primitives.
### Hash Functions
Tendermint uses RIPEMD160 universally as a hash function, most notably in its Merkle tree implementation.
RIPEMD160 was chosen because it provides the shortest fingerprint that is long enough to be considered secure (ie. birthday bound of 80-bits).
It was also developed in the open academic community, unlike NSA-designed algorithms like SHA256.
That said, the cryptographic community appears to unanimously agree on the security of SHA256. It has become a universal standard, especially now that SHA1 is broken, being required in TLS connections and having optimized support in hardware.
### Merkle Trees
Tendermint uses a simple Merkle tree to compute digests of large structures like transaction batches
and even blockchain headers. The Merkle tree length prefixes byte arrays before concatenating and hashing them.
It uses RIPEMD160.
### Addresses
ED25519 addresses are computed using the RIPEMD160 of the Amino encoding of the public key.
RIPEMD160 is generally considered an outdated hash function, and is much slower
than more modern functions like SHA256 or Blake2.
### Authenticated Encryption
Tendermint P2P connections use authenticated encryption to provide privacy and authentication in the communications.
This is done using the simple Station-to-Station protocol with the NaCL Ed25519 library.
While there have been no vulnerabilities found in the implementation, there are some concerns:
- NaCL uses Salsa20, a not widely used and relatively outdated stream cipher that has been obsoleted by ChaCha20
- Connections use RIPEMD160 to compute a value that is used for the encryption nonce with subtle requirements on how it's used
## Decision
### Hash Functions
Use the first 20 bytes of the SHA256 hash instead of RIPEMD160 for everything.
### Merkle Trees
TODO
### Addresses
Compute ED25519 addresses as the first 20 bytes of the SHA256 of the raw 32-byte public key.
### Authenticated Encryption
Make the following changes:
- Use xChaCha20 instead of xSalsa20 - https://github.com/tendermint/tendermint/issues/1124
- Use an HKDF instead of RIPEMD160 to compute nonces - https://github.com/tendermint/tendermint/issues/1165
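A sketch of HKDF-based derivation, assuming `golang.org/x/crypto/hkdf`; the output layout and the info string are illustrative, not the final design:
```go
import (
	"crypto/sha256"
	"io"

	"golang.org/x/crypto/hkdf"
)

// deriveSecrets sketches expanding a shared secret with HKDF-SHA256
// instead of RIPEMD160. Splitting the stream into receive/send secrets
// and the info string are illustrative choices.
func deriveSecrets(sharedSecret []byte) (recvSecret, sendSecret [32]byte, err error) {
	r := hkdf.New(sha256.New, sharedSecret, nil, []byte("TENDERMINT_SECRET_CONNECTION"))
	if _, err = io.ReadFull(r, recvSecret[:]); err != nil {
		return
	}
	_, err = io.ReadFull(r, sendSecret[:])
	return
}
```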
## Status
Implemented
## Consequences
### Positive
- More modern and standard cryptographic functions with wider adoption and hardware acceleration
### Negative
- Exact authenticated encryption construction isn't already provided in a well-used library
### Neutral
## References


@@ -0,0 +1,116 @@
# ADR 011: Monitoring
## Changelog
08-06-2018: Initial draft
11-06-2018: Reorg after @xla comments
13-06-2018: Clarification about usage of labels
## Context
In order to bring more visibility into Tendermint, we would like it to report
metrics and, maybe later, traces of transactions and RPC queries. See
https://github.com/tendermint/tendermint/issues/986.
A few solutions were considered:
1. [Prometheus](https://prometheus.io)
a) Prometheus API
b) [go-kit metrics package](https://github.com/go-kit/kit/tree/master/metrics) as an interface plus Prometheus
c) [telegraf](https://github.com/influxdata/telegraf)
d) new service, which will listen to events emitted by pubsub and report metrics
2. [OpenCensus](https://opencensus.io/introduction/)
### 1. Prometheus
Prometheus seems to be the most popular product out there for monitoring. It has
a Go client library, powerful queries, alerts.
**a) Prometheus API**
We can commit to using Prometheus in Tendermint, but I think Tendermint users
should be free to choose whatever monitoring tool they feel will better suit
their needs (if they don't have an existing one already). So we should try to
abstract the interface enough that people can switch between Prometheus and other
similar tools.
**b) go-kit metrics package as an interface**
metrics package provides a set of uniform interfaces for service
instrumentation and offers adapters to popular metrics packages:
https://godoc.org/github.com/go-kit/kit/metrics#pkg-subdirectories
Compared to the Prometheus API, we're losing customisability and control, but gaining
freedom in choosing any instrument from the above list given we will extract
metrics creation into a separate function (see "providers" in node/node.go).
**c) telegraf**
Unlike the options already discussed, telegraf does not require modifying the Tendermint
source code. You create something called an input plugin, which polls the
Tendermint RPC every second and calculates the metrics itself.
While it may sound good, some metrics we want to report are not exposed via
RPC or pubsub, and therefore can't be accessed externally.
**d) service, listening to pubsub**
Same issue as the above.
### 2. OpenCensus
OpenCensus provides both metrics and tracing, which may be important in the
future. Its API looks different from go-kit's and Prometheus's, but it looks like it
covers everything we need.
Unfortunately, OpenCensus go client does not define any
interfaces, so if we want to abstract away metrics we
will need to write interfaces ourselves.
### List of metrics
| | Name | Type | Description |
| --- | ------------------------------------ | ------ | ----------------------------------------------------------------------------- |
| A | consensus_height | Gauge | |
| A | consensus_validators | Gauge | Number of validators who signed |
| A | consensus_validators_power | Gauge | Total voting power of all validators |
| A | consensus_missing_validators | Gauge | Number of validators who did not sign |
| A | consensus_missing_validators_power | Gauge | Total voting power of the missing validators |
| A | consensus_byzantine_validators | Gauge | Number of validators who tried to double sign |
| A | consensus_byzantine_validators_power | Gauge | Total voting power of the byzantine validators |
| A | consensus_block_interval | Timing | Time between this and last block (Block.Header.Time) |
| | consensus_block_time | Timing | Time to create a block (from creating a proposal to commit) |
| | consensus_time_between_blocks | Timing | Time between committing the last block and receiving/creating a proposal |
| A | consensus_rounds | Gauge | Number of rounds |
| | consensus_prevotes | Gauge | |
| | consensus_precommits | Gauge | |
| | consensus_prevotes_total_power | Gauge | |
| | consensus_precommits_total_power | Gauge | |
| A | consensus_num_txs | Gauge | |
| A | mempool_size | Gauge | |
| A | consensus_total_txs | Gauge | |
| A | consensus_block_size | Gauge | In bytes |
| A | p2p_peers | Gauge | Number of peers node's connected to |
`A` - will be implemented in the first place.
**Proposed solution**
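Option (b) appears to be the best fit given the trade-offs above. A minimal sketch of what it could look like, assuming go-kit's metrics interfaces and its Prometheus adapter (struct fields and metric names are illustrative):
```go
import (
	"github.com/go-kit/kit/metrics"
	prometheus "github.com/go-kit/kit/metrics/prometheus"
	stdprometheus "github.com/prometheus/client_golang/prometheus"
)

// Metrics groups the instruments a reactor needs behind go-kit's
// backend-agnostic interfaces; fields shown are illustrative.
type Metrics struct {
	Height metrics.Gauge // consensus_height
	Peers  metrics.Gauge // p2p_peers
}

// PrometheusMetrics is a provider that wires the gauges to Prometheus;
// a no-op provider could back the same struct for tests.
func PrometheusMetrics() *Metrics {
	return &Metrics{
		Height: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
			Subsystem: "consensus",
			Name:      "height",
			Help:      "Height of the chain.",
		}, nil),
		Peers: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
			Subsystem: "p2p",
			Name:      "peers",
			Help:      "Number of connected peers.",
		}, nil),
	}
}
```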
## Status
Implemented
## Consequences
### Positive
Better visibility, support of variety of monitoring backends
### Negative
One more library to audit, and metrics reporting code mixed in with the business domain.
### Neutral
-


@@ -0,0 +1,113 @@
# ADR 012: PeerTransport
## Context
One of the more apparent problems with the current architecture in the p2p
package is that there is no clear separation of concerns between different
components. Most notably the `Switch` is currently doing physical connection
handling. An artifact is the dependency of the Switch on
[`config.P2PConfig`](https://github.com/tendermint/tendermint/blob/05a76fb517f50da27b4bfcdc7b4cf185fc61eff6/config/config.go#L272-L339).
Addresses:
- [#2046](https://github.com/tendermint/tendermint/issues/2046)
- [#2047](https://github.com/tendermint/tendermint/issues/2047)
First iteration in [#2067](https://github.com/tendermint/tendermint/issues/2067)
## Decision
Transport concerns will be handled by a new component (`PeerTransport`) which
will provide Peers at its boundary to the caller. In turn `Switch` will use
this new component to accept new `Peer`s and dial them based on `NetAddress`.
### PeerTransport
Responsible for emitting and connecting to Peers. The implementation of `Peer`
is left to the transport, which implies that the chosen transport dictates the
characteristics of the implementation handed back to the `Switch`. Each
transport implementation is responsible for filtering establishing peers specific
to its domain; for the default multiplexed implementation, the following
guards[0] will apply:
- connections from our own node
- handshake fails
- upgrade to secret connection fails
- prevent duplicate ip
- prevent duplicate id
- nodeinfo incompatibility
```go
// PeerTransport proxies incoming and outgoing peer connections.
type PeerTransport interface {
	// Accept returns a newly connected Peer.
	Accept() (Peer, error)

	// Dial connects to a Peer.
	Dial(NetAddress) (Peer, error)
}

// EXAMPLE OF DEFAULT IMPLEMENTATION
// multiplexTransport accepts tcp connections and upgrades them to
// multiplexed peers.
type multiplexTransport struct {
	listener net.Listener

	acceptc chan accept
	closec  <-chan struct{}
	listenc <-chan struct{}

	dialTimeout      time.Duration
	handshakeTimeout time.Duration
	nodeAddr         NetAddress
	nodeInfo         NodeInfo
	nodeKey          NodeKey

	// TODO(xla): Remove when MConnection is refactored into mPeer.
	mConfig conn.MConnConfig
}

var _ PeerTransport = (*multiplexTransport)(nil)

// NewMTransport returns network connected multiplexed peers.
func NewMTransport(
	nodeAddr NetAddress,
	nodeInfo NodeInfo,
	nodeKey NodeKey,
) *multiplexTransport
```
### Switch
From now on, the Switch will depend on a fully set up `PeerTransport` to
retrieve/reach out to its peers. As the more low-level concerns are pushed to
the transport, we can omit passing the `config.P2PConfig` to the Switch.
```go
func NewSwitch(transport PeerTransport, opts ...SwitchOption) *Switch
```
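A sketch of how the Switch could drive the transport (`addPeer` is an illustrative method name, not an existing API):
```go
// acceptRoutine sketches how the Switch could consume the transport: a
// dedicated loop hands fully established Peers to the Switch.
func (sw *Switch) acceptRoutine(t PeerTransport) {
	for {
		p, err := t.Accept()
		if err != nil {
			return // transport closed or fatal accept error
		}
		sw.addPeer(p)
	}
}
```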
## Status
In Review.
## Consequences
### Positive
- free Switch from transport concerns - simpler implementation
- pluggable transport implementation - simpler test setup
- remove Switch dependency on P2PConfig - easier to test
### Negative
- more setup for tests which depend on Switches
### Neutral
- multiplexed will be the default implementation
[0] These guards could potentially be extended to be pluggable, much like
middlewares, to express different concerns required by differently configured
environments.


@@ -0,0 +1,99 @@
# ADR 013: Need for symmetric cryptography
## Context
We require symmetric ciphers to handle how we encrypt keys in the sdk,
and to potentially encrypt `priv_validator.json` in tendermint.
Currently we use AEADs to support symmetric encryption,
which is great since we want data integrity in addition to privacy and authenticity.
We don't currently have a scenario where we want to encrypt without data integrity,
so it is fine to optimize our code to just use AEADs.
Currently there is no way to switch out AEADs easily; this ADR outlines a way
to easily swap these out.
### How do we encrypt with AEADs
AEADs typically require a nonce in addition to the key.
For the purposes we require symmetric cryptography for,
we need encryption to be stateless.
Because of this we use random nonces.
(Thus the AEAD must support random nonces)
We currently construct a random nonce, and encrypt the data with it.
The returned value is `nonce || encrypted data`.
The limitation of this is that it does not provide a way to identify
which algorithm was used in encryption.
Consequently, decryption with multiple algorithms is sub-optimal.
(You have to try them all.)
## Decision
We should create the following methods in a new `crypto/encoding/symmetric` package:
```golang
func Encrypt(aead cipher.AEAD, plaintext []byte) (ciphertext []byte, err error)
func Decrypt(key []byte, ciphertext []byte) (plaintext []byte, err error)
func Register(aead cipher.AEAD, algo_name string, NewAead func(key []byte) (cipher.AEAD, error)) error
```
This allows you to specify the algorithm in encryption, but not have to specify
it in decryption.
This is intended for ease of use in downstream applications, in addition to people
looking at the file directly.
One downside is that for the encrypt function you must have already initialized an AEAD,
but I don't really see this as an issue.
If there is no error in encryption, Encrypt will return `algo_name || nonce || aead_ciphertext`.
`algo_name` should be length prefixed, using standard varuint encoding.
This will be binary data, but that's not a problem considering the nonce and ciphertext are also binary.
This solution requires a mapping from aead type to name.
We can achieve this via reflection.
```golang
func getType(myvar interface{}) string {
if t := reflect.TypeOf(myvar); t.Kind() == reflect.Ptr {
return "*" + t.Elem().Name()
} else {
return t.Name()
}
}
```
Then we maintain a map from the name returned from `getType(aead)` to `algo_name`.
In decryption, we read the `algo_name`, and then instantiate a new AEAD with the key.
Then we call the AEAD's decrypt method on the provided nonce/ciphertext.
`Register` allows a downstream user to add their own desired AEAD to the symmetric package.
It will error if the AEAD name is already registered.
This prevents a malicious import from modifying / nullifying an AEAD at runtime.
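To make the proposed format concrete, here is a hedged sketch of `Encrypt` under stated assumptions: a package-level `typeToName` map populated by `Register`, the `getType` helper above, and imports of `crypto/cipher`, `crypto/rand`, `encoding/binary`, and `fmt`. Names are illustrative, not a final API.
```golang
func Encrypt(aead cipher.AEAD, plaintext []byte) ([]byte, error) {
	algoName, ok := typeToName[getType(aead)]
	if !ok {
		return nil, fmt.Errorf("unregistered AEAD: %T", aead)
	}
	// Random nonce, since encryption must be stateless.
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Length-prefix algo_name with standard varuint encoding.
	prefix := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(prefix, uint64(len(algoName)))
	out := append(prefix[:n], algoName...)
	out = append(out, nonce...)
	// Final layout: algo_name || nonce || aead_ciphertext.
	return aead.Seal(out, nonce, plaintext, nil), nil
}
```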
## Implementation strategy
The golang implementation of what is proposed is rather straightforward.
The concern is that we will break existing private keys if we just switch to this.
If this is concerning, we can make a simple script, which doesn't require decoding privkeys,
for converting from the old format to the new one.
## Status
Proposed.
## Consequences
### Positive
- Allows us to support new AEAD's, in a way that makes decryption easier
- Allows downstream users to add their own AEAD
### Negative
- We will have to break all private keys stored on disk.
They can be recovered using seed words, and upgrade scripts are simple.
### Neutral
- Caller has to instantiate the AEAD with the private key.
However, it forces them to be aware of which algorithm they are using, which is a positive.

View File

@@ -0,0 +1,63 @@
# ADR 014: Secp256k1 Signature Malleability
## Context
Secp256k1 has two layers of malleability.
The signer has a random nonce, and thus can produce many different valid signatures.
This ADR is not concerned with that.
The second layer of malleability basically allows one who is given a signature
to produce exactly one more valid signature for the same message from the same public key.
(They don't even have to know the message!)
The math behind this will be explained in the subsequent section.
Note that in many downstream applications, signatures will appear in a transaction, and therefore in the tx hash.
This means that if someone broadcasts a transaction with secp256k1 signature, the signature can be altered into the other form by anyone in the p2p network.
Thus the tx hash will change, and this altered tx hash may be committed instead.
This breaks the assumption that you can broadcast a valid transaction and just wait for its hash to be included on chain.
One example is if you are broadcasting a tx in cosmos,
and you wait for it to appear on chain before incrementing your sequence number.
You may never increment your sequence number if a different tx hash got committed.
Removing this second layer of signature malleability concerns could ease downstream development.
### ECDSA context
Secp256k1 is ECDSA over a particular curve.
The signature is of the form `(r, s)`, where `s` is a field element.
(The particular field is the `Z_n`, where the elliptic curve has order `n`)
However, `(r, -s)` is also a valid signature.
Note that anyone can negate a group element, and therefore can get this second signature.
## Decision
We can just distinguish a canonical form for the ECDSA signatures.
Then we require that all ECDSA signatures be in the form which we defined as canonical.
We reject signatures in non-canonical form.
A canonical form is rather easy to define and check.
It would just be the smaller of the two values for `s`, defined lexicographically.
This is a simple check: instead of checking that `s < n`, check that `s <= (n - 1)/2`.
An example of another cryptosystem using this
is the parity definition here https://github.com/zkcrypto/pairing/pull/30#issuecomment-372910663.
This is the same solution Ethereum has chosen for solving secp malleability.
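A hedged Go sketch of the check, assuming the curve order `n` is available as a `*big.Int` (e.g. `btcec.S256().N`):
```golang
import "math/big"

// halfOrder = (n - 1) / 2; n is odd, so a right shift computes it.
var halfOrder = new(big.Int).Rsh(n, 1)

// isCanonical reports whether s is in the canonical (smaller) form.
func isCanonical(s *big.Int) bool {
	return s.Cmp(halfOrder) <= 0
}

// To canonicalize a signature with s > (n-1)/2, replace s with n - s.
```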
## Proposed Implementation
Fork https://github.com/btcsuite/btcd, and just update the [parse sig method](https://github.com/btcsuite/btcd/blob/11fcd83963ab0ecd1b84b429b1efc1d2cdc6d5c5/btcec/signature.go#L195) and serialize functions to enforce our canonical form.
## Status
Implemented
## Consequences
### Positive
- Lets us maintain the ability to expect a tx hash to appear in the blockchain.
### Negative
- More work in all future implementations (Though this is a very simple check)
- Requires us to maintain another fork
### Neutral

View File

@@ -0,0 +1,84 @@
# ADR 015: Crypto encoding
## Context
We must standardize our method for encoding public keys and signatures on chain.
Currently we amino encode the public keys and signatures.
The reason we are using amino here is primarily due to ease of support in
parsing for other languages.
We don't need its upgradability properties in cryptosystems, as a change in
the crypto that requires adapting the encoding, likely warrants being deemed
a new cryptosystem.
(I.e. using new public parameters)
## Decision
### Public keys
For public keys, we will continue to use amino encoding on the canonical
representation of the pubkey.
(Canonical as defined by the cryptosystem itself)
This has two significant drawbacks.
Amino encoding is less space-efficient, due to requiring support for upgradability.
Amino encoding support requires forking protobuf and adding this new interface support
option in the language of choice.
The reason for continuing to use amino, however, is that people can create code
more easily in languages that already have an up-to-date amino library.
It is possible that this will change in the future, if it is deemed that
requiring amino for interacting with Tendermint cryptography is unnecessary.
The arguments for space efficiency here are refuted on the basis that there are
far more egregious wastages of space in the SDK.
The space requirement of the public keys doesn't cause many problems beyond
increasing the space attached to each validator / account.
The alternative to using amino here would be for us to create an enum type.
Switching to just an enum type is worthy of investigation post-launch.
For reference, part of amino encoding interfaces is basically a 4 byte enum
type definition.
Enum types would just change that 4 bytes to be a variant, and it would remove
the protobuf overhead, but it would be hard to integrate into the existing API.
### Signatures
Signatures should be switched to be `[]byte`.
Spatial efficiency in the signatures is quite important,
as it directly affects the gas cost of every transaction,
and the throughput of the chain.
Signatures don't need to encode what type they are for (unlike public keys)
since public keys must already be known.
Therefore we can validate the signature without needing to encode its type.
When placed in state, signatures will still be amino encoded, but it will be the
primitive type `[]byte` getting encoded.
#### Ed25519
Use the canonical representation for signatures.
#### Secp256k1
There isn't a clear canonical representation here.
Signatures have two elements `r,s`.
These bytes are encoded as `r || s`, where `r` and `s` are both exactly
32 bytes long, encoded big-endian.
This is basically Ethereum's encoding, but without the leading recovery bit.
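A hedged sketch of this encoding, assuming `r` and `s` arrive as `*big.Int` values from the signer (the helper name is illustrative):
```golang
import "math/big"

// encodeSecp256k1Sig serializes a signature as r || s: each value
// exactly 32 bytes, big-endian, left-padded with zeros.
func encodeSecp256k1Sig(r, s *big.Int) []byte {
	sig := make([]byte, 64)
	r.FillBytes(sig[:32])
	s.FillBytes(sig[32:])
	return sig
}
```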
## Status
Implemented
## Consequences
### Positive
- More space efficient signatures
### Negative
- We have an amino dependency for cryptography.
### Neutral
- No change to public keys

View File

@@ -0,0 +1,308 @@
# ADR 016: Protocol Versions
## TODO
- How to / should we version the authenticated encryption handshake itself (ie.
upfront protocol negotiation for the P2PVersion)
- How to / should we version ABCI itself? Should it just be absorbed by the
BlockVersion?
## Changelog
- 18-09-2018: Updates after working a bit on implementation
- ABCI Handshake needs to happen independently of starting the app
conns so we can see the result
- Add question about ABCI protocol version
- 16-08-2018: Updates after discussion with SDK team
- Remove signalling for next version from Header/ABCI
- 03-08-2018: Updates from discussion with Jae:
- ProtocolVersion contains Block/AppVersion, not Current/Next
- signal upgrades to Tendermint using EndBlock fields
- don't restrict peer compatibility by version, to simplify syncing old nodes
- 28-07-2018: Updates from review
- split into two ADRs - one for protocol, one for chains
- include signalling for upgrades in header
- 16-07-2018: Initial draft - was originally joint ADR for protocol and chain
versions
## Context
Here we focus on software-agnostic protocol versions.
The Software Version is covered by SemVer and described elsewhere.
It is not relevant to the protocol description; suffice it to say that if any protocol version
changes, the software version changes, but not necessarily vice versa.
Software version should be included in NodeInfo for convenience/diagnostics.
We are also interested in versioning across different blockchains in a
meaningful way, for instance to differentiate branches of a contentious
hard-fork. We leave that for a later ADR.
## Requirements
We need to version components of the blockchain that may be independently upgraded.
We need to do it in a way that is scalable and maintainable - we can't just litter
the code with conditionals.
We can consider the complete version of the protocol to contain the following sub-versions:
BlockVersion, P2PVersion, AppVersion. These versions reflect the major sub-components
of the software that are likely to evolve together, at different rates, and in different ways,
as described below.
The BlockVersion defines the core of the blockchain data structures and
should change infrequently.
The P2PVersion defines how peers connect and communicate with each other - it's
not part of the blockchain data structures, but defines the protocols used to build the
blockchain. It may change gradually.
The AppVersion determines how we compute app specific information, like the
AppHash and the Results.
All of these versions may change over the life of a blockchain, and we need to
be able to help new nodes sync up across version changes. This means we must be willing
to connect to peers with older versions.
### BlockVersion
- All tendermint hashed data-structures (headers, votes, txs, responses, etc.).
- Note the semantic meaning of a transaction may change according to the AppVersion, but the way txs are merklized into the header is part of the BlockVersion
- It should be the least frequent/likely to change.
- Tendermint should be stabilizing - it's just Atomic Broadcast.
- We can start considering for Tendermint v2.0 in a year
- It's easy to determine the version of a block from its serialized form
### P2PVersion
- All p2p and reactor messaging (messages, detectable behaviour)
- Will change gradually as reactors evolve to improve performance and support new features - eg proposed new message types BatchTx in the mempool and HasBlockPart in the consensus
- It's easy to determine the version of a peer from its first serialized message/s
- New versions must be compatible with at least one old version to allow gradual upgrades
### AppVersion
- The ABCI state machine (txs, begin/endblock behaviour, commit hashing)
- Behaviour and message types will change abruptly in the course of the life of a chain
- Need to minimize complexity of the code for supporting different AppVersions at different heights
- Ideally, each version of the software supports only a _single_ AppVersion at one time
- this means we checkout different versions of the software at different heights instead of littering the code
with conditionals
- minimize the number of data migrations required across AppVersion (ie. most AppVersion should be able to read the same state from disk as previous AppVersion).
## Ideal
Each component of the software is independently versioned in a modular way, and it's easy to mix and match and upgrade.
## Proposal
Each of BlockVersion, AppVersion, P2PVersion, is a monotonically increasing uint64.
To use these versions, we need to update the block Header, the p2p NodeInfo, and the ABCI.
### Header
Block Header should include a `Version` struct as its first field like:
```
type Version struct {
Block uint64
App uint64
}
```
Here, `Version.Block` defines the rules for the current block, while
`Version.App` defines the app version that processed the last block and computed
the `AppHash` in the current block. Together they provide a complete description
of the consensus-critical protocol.
Since we have settled on a proto3 header, the ability to read the BlockVersion out of the serialized header is unambiguous.
Using a Version struct gives us more flexibility to add fields without breaking
the header.
The ProtocolVersion struct includes both the Block and App versions - it should
serve as a complete description of the consensus-critical protocol.
### NodeInfo
NodeInfo should include a Version struct as its first field like:
```
type Version struct {
P2P uint64
Block uint64
App uint64
Other []string
}
```
Note this effectively makes `Version.P2P` the first field in the NodeInfo, so it
should be easy to read this out of the serialized header if need be to facilitate an upgrade.
The `Version.Other` here should include additional information like the name of the software client and
its SemVer version - this is for convenience only. Eg.
`tendermint-core/v0.22.8`. It's a `[]string` so it can include information about
the version of Tendermint, of the app, of Tendermint libraries, etc.
### ABCI
Since the ABCI is responsible for keeping Tendermint and the App in sync, we
need to communicate version information through it.
On startup, we use Info to perform a basic handshake. It should include all the
version information.
We also need to be able to update versions in the life of a blockchain. The
natural place to do this is EndBlock.
Note that currently the result of the Handshake isn't exposed anywhere, as the
handshaking happens inside the `proxy.AppConns` abstraction. We will need to
remove the handshaking from the `proxy` package so we can call it independently
and get the result, which should contain the application version.
#### Info
RequestInfo should add support for protocol versions like:
```
message RequestInfo {
string version
uint64 block_version
uint64 p2p_version
}
```
Similarly, ResponseInfo should return the versions:
```
message ResponseInfo {
string data
string version
uint64 app_version
int64 last_block_height
bytes last_block_app_hash
}
```
The existing `version` fields should be called `software_version` but we leave
them for now to reduce the number of breaking changes.
#### EndBlock
Updating the version could be done either with new fields or by using the
existing `tags`. Since we're trying to communicate information that will be
included in Tendermint block Headers, it should be native to the ABCI, and not
something embedded through some scheme in the tags. Thus, version updates should
be communicated through EndBlock.
EndBlock already contains `ConsensusParams`. We can add version information to
the ConsensusParams as well:
```
message ConsensusParams {
BlockSize block_size
EvidenceParams evidence_params
VersionParams version
}
message VersionParams {
uint64 block_version
uint64 app_version
}
```
For now, the `block_version` will be ignored, as we do not allow block version
to be updated live. If the `app_version` is set, it signals that the app's
protocol version has changed, and the new `app_version` will be included in the
`Block.Header.Version.App` for the next block.
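A hedged sketch of that rule, with hypothetical state-handling names, showing where the new `app_version` lands:
```
// applyVersionParams handles ResponseEndBlock.ConsensusParams.Version.
func applyVersionParams(next *State, vp *VersionParams) {
	if vp == nil {
		return
	}
	// block_version is ignored for now: live updates to the block
	// version are not allowed.
	if vp.AppVersion != 0 {
		// Recorded so it appears in Block.Header.Version.App of the
		// next block.
		next.Version.App = vp.AppVersion
	}
}
```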
### BlockVersion
BlockVersion is included in both the Header and the NodeInfo.
Changing BlockVersion should happen quite infrequently and ideally only for
critical upgrades. For now, it is not encoded in ABCI, though it's always
possible to use tags to signal an external process to co-ordinate an upgrade.
Note Ethereum has not had to make an upgrade like this (everything has been at state machine level, AFAIK).
### P2PVersion
P2PVersion is not included in the block Header, just the NodeInfo.
P2PVersion is the first field in the NodeInfo. NodeInfo is also proto3 so this is easy to read out.
Note we need the peer/reactor protocols to take the versions of peers into account when sending messages:
- don't send messages they don't understand
- don't send messages they don't expect
Doing this will be specific to the upgrades being made.
Note we also include the list of reactor channels in the NodeInfo and already don't send messages for channels the peer doesn't understand.
If upgrades always use new channels, this simplifies the development cost of backwards compatibility.
Note NodeInfo is only exchanged after the authenticated encryption handshake to ensure that it's private.
Doing any version exchange before encrypting could be considered information leakage, though I'm not sure
how much that matters compared to being able to upgrade the protocol.
XXX: if needed, can we change the meaning of the first byte of the first message to encode a handshake version?
this is the first byte of a 32-byte ed25519 pubkey.
### AppVersion
AppVersion is also included in the block Header and the NodeInfo.
AppVersion essentially defines how the AppHash and LastResults are computed.
### Peer Compatibility
Restricting peer compatibility based on version is complicated by the need to
help old peers, possibly on older versions, sync the blockchain.
We might be tempted to say that we only connect to peers with the same
AppVersion and BlockVersion (since these define the consensus critical
computations), and a select list of P2PVersions (ie. those compatible with
ours), but then we'd need to make accommodations for connecting to peers with the
right Block/AppVersion for the height they're on.
For now, we will connect to peers with any version and restrict compatibility
solely based on the ChainID. We leave more restrictive rules on peer
compatibility to a future proposal.
### Future Changes
It may be valuable to support an `/unsafe_stop?height=_` endpoint to tell Tendermint to shut down at a given height.
This could be used by an external manager process that oversees upgrades by
checking out and installing new software versions and restarting the process. It
would subscribe to the relevant upgrade event (needs to be implemented) and call `/unsafe_stop` at
the correct height (of course, only after getting approval from its user!).
## Consequences
### Positive
- Make tendermint and application versions native to the ABCI to more clearly
communicate about them
- Distinguish clearly between protocol versions and software version to
facilitate implementations in other languages
- Versions included in key data structures in an easy-to-discern way
- Allows proposers to signal for upgrades and apps to decide when to actually change the
version (and start signalling for a new version)
### Neutral
- Unclear how to version the initial P2P handshake itself
- Versions aren't being used (yet) to restrict peer compatibility
- Signalling for a new version happens through the proposer and must be
tallied/tracked in the app.
### Negative
- Adds more fields to the ABCI
- Implies that a single codebase must be able to handle multiple versions

View File

@@ -0,0 +1,99 @@
# ADR 017: Chain Versions
## TODO
- clarify how to handle slashing when ChainID changes
## Changelog
- 28-07-2018: Updates from review
- split into two ADRs - one for protocol, one for chains
- 16-07-2018: Initial draft - was originally joint ADR for protocol and chain
versions
## Context
Software and Protocol versions are covered in a separate ADR.
Here we focus on chain versions.
## Requirements
We need to version blockchains across protocols, networks, forks, etc.
We need chain identifiers and descriptions so we can talk about a multitude of chains,
and especially the differences between them, in a meaningful way.
### Networks
We need to support many independent networks running the same version of the software,
even possibly starting from the same initial state.
They must have distinct identifiers so that peers know which one they are joining and so
validators and users can prevent replay attacks.
Call this the `NetworkName` (note we currently call this `ChainID` in the software. In this
ADR, ChainID has a different meaning).
It represents both the application being run and the community or intention
of running it.
Peers only connect to other peers with the same NetworkName.
### Forks
We need to support existing networks upgrading and forking, wherein they may do any of:
- revert back to some height, continue with the same versions but new blocks
- arbitrarily mutate state at some height, continue with the same versions (eg. Dao Fork)
- change the AppVersion at some height
Note that because of Tendermint's voting power threshold rules, a chain can only be extended under both the "original" rules and the new rules
if 1/3 or more of the voting power is double signing, which is expressly prohibited, and is supposed to result in their punishment on both chains. Since they can censor
the punishment, the chain is expected to be hardforked to remove the validators. Thus, if both branches are to continue after a fork,
they will each require a new identifier, and the old chain identifier will be retired (ie. only useful for syncing history, not for new blocks).
TODO: explain how to handle slashing when chain id changed!
We need a consistent way to describe forks.
## Proposal
### ChainDescription
ChainDescription is a complete immutable description of a blockchain. It takes the following form:
```
ChainDescription = <NetworkName>/<BlockVersion>/<AppVersion>/<StateHash>/<ValHash>/<ConsensusParamsHash>
```
Here, StateHash is the merkle root of the initial state, ValHash is the merkle root of the initial Tendermint validator set,
and ConsensusParamsHash is the merkle root of the initial Tendermint consensus parameters.
The `genesis.json` file must contain enough information to compute this value. It need not contain the StateHash or ValHash itself,
but must contain the state from which they can be computed with the given protocol versions.
NOTE: consider splitting NetworkName into NetworkName and AppName - this allows
folks to independently use the same application for different networks (ie we
could imagine multiple communities of validators wanting to put up a Hub using
the same app but having a distinct network name. Arguably not needed if
differences will come via different initial state / validators).
#### ChainID
Define `ChainID = TMHASH(ChainDescriptor)`. It's the unique ID of a blockchain.
It should be Bech32 encoded when handled by users, eg. with `cosmoschain` prefix.
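A hedged sketch of the derivation, assuming TMHASH is available via Tendermint's `crypto/tmhash` package (bech32 rendering omitted):
```
import "github.com/tendermint/tendermint/crypto/tmhash"

// ChainID is the hash of the complete, immutable ChainDescription.
func ChainID(chainDescription string) []byte {
	return tmhash.Sum([]byte(chainDescription))
}
```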
#### Forks and Upgrades
When a chain forks or upgrades but continues the same history, it takes a new ChainDescription as follows:
```
ChainDescription = <ChainID>/x/<Height>/<ForkDescription>
```
Where
- ChainID is the ChainID from the previous ChainDescription (ie. its hash)
- `x` denotes that a change occurred
- `Height` is the height the change occurred
- ForkDescription has the same form as ChainDescription but for the fork
- this allows forks to specify new versions for tendermint or the app, as well as arbitrary changes to the state or validator set

View File

@@ -0,0 +1,100 @@
# ADR 018: ABCI Validator Improvements
## Changelog
16-08-2018: Follow up from review:
  - Revert changes to commit round
  - Remind about justification for removing pubkey
  - Update pros/cons
05-08-2018: Initial draft
## Context
ADR 009 introduced major improvements to the ABCI around validators and the use
of Amino. Here we follow up with some additional changes to improve the naming
and expected use of Validator messages.
## Decision
### Validator
Currently a Validator contains `address` and `pub_key`, and one or the other is
optional/not-sent depending on the use case. Instead, we should have a
`Validator` (with just the address, used for RequestBeginBlock)
and a `ValidatorUpdate` (with the pubkey, used for ResponseEndBlock):
```
message Validator {
bytes address
int64 power
}
message ValidatorUpdate {
PubKey pub_key
int64 power
}
```
As noted in [ADR-009](adr-009-ABCI-design.md),
the `Validator` does not contain a pubkey because quantum public keys are
quite large and it would be wasteful to send them all over ABCI with every block.
Thus, applications that want to take advantage of the information in BeginBlock
are _required_ to store pubkeys in state (or use much less efficient lazy means
of verifying BeginBlock data).
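A hedged sketch of that pattern, with hypothetical types and helpers: the app maintains its own address-to-pubkey index from the `ValidatorUpdate`s it emits, so the address-only `Validator`s in RequestBeginBlock can be resolved later.
```
// pubkeyIndex resolves BeginBlock's address-only Validators to keys.
type pubkeyIndex map[string][]byte // hex address -> pubkey bytes

func (idx pubkeyIndex) applyUpdates(updates []ValidatorUpdate) {
	for _, u := range updates {
		addr := addressOf(u.PubKey) // hypothetical helper
		if u.Power == 0 {
			delete(idx, addr) // validator left the set
			continue
		}
		idx[addr] = u.PubKey
	}
}
```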
### RequestBeginBlock
LastCommitInfo currently has an array of `SigningValidator` that contains
information for each validator in the entire validator set.
Instead, this should be called `VoteInfo`, since it is information about the
validator votes.
Note that all votes in a commit must be from the same round.
```
message LastCommitInfo {
int64 round
repeated VoteInfo commit_votes
}
message VoteInfo {
Validator validator
bool signed_last_block
}
```
### ResponseEndBlock
Use ValidatorUpdates instead of Validators. Then it's clear we don't need an
address, and we do need a pubkey.
We could require the address here as well as a sanity check, but it doesn't seem
necessary.
### InitChain
Use ValidatorUpdates for both Request and Response. InitChain
is about setting/updating the initial validator set, unlike BeginBlock
which is just informational.
## Status
Implemented
## Consequences
### Positive
- Clarifies the distinction between the different uses of validator information
### Negative
- Apps must still store the public keys in state to utilize the RequestBeginBlock info
### Neutral
- ResponseEndBlock does not require an address
## References
- [Latest ABCI Spec](https://github.com/tendermint/tendermint/blob/v0.22.8/docs/app-dev/abci-spec.md)
- [ADR-009](https://github.com/tendermint/tendermint/blob/v0.22.8/docs/architecture/adr-009-ABCI-design.md)
- [Issue #1712 - Don't send PubKey in
RequestBeginBlock](https://github.com/tendermint/tendermint/issues/1712)

View File

@@ -0,0 +1,162 @@
# ADR 019: Encoding standard for Multisignatures
## Changelog
06-08-2018: Minor updates
27-07-2018: Update draft to use amino encoding
11-07-2018: Initial Draft
5-26-2021: Multisigs were moved into the Cosmos-sdk
## Context
Multisignatures, or technically _Accountable Subgroup Multisignatures_ (ASM),
are signature schemes which enable any subgroup of a set of signers to sign any message,
and reveal to the verifier exactly who the signers were.
This allows for complex conditionals of when to validate a signature.
Suppose the set of signers is of size _n_.
If we validate a signature if any subgroup of size _k_ signs a message,
this becomes what is commonly referred to as a _k of n multisig_ in Bitcoin.
This ADR specifies the encoding standard for general accountable subgroup multisignatures,
k of n accountable subgroup multisignatures, and its weighted variant.
In the future, we can also allow for more complex conditionals on the accountable subgroup.
## Proposed Solution
### New structs
Every ASM will then have its own struct, implementing the crypto.Pubkey interface.
This ADR assumes that [replacing crypto.Signature with []bytes](https://github.com/tendermint/tendermint/issues/1957) has been accepted.
#### K of N threshold signature
The pubkey is the following struct:
```golang
type ThresholdMultiSignaturePubKey struct { // K of N threshold multisig
K uint `json:"threshold"`
Pubkeys []crypto.Pubkey `json:"pubkeys"`
}
```
We will derive N from the length of pubkeys. (For spatial efficiency in encoding)
`Verify` will expect an `[]byte` encoded version of the Multisignature.
(Multisignature is described in the next section)
The multisignature will be rejected if the bitmap has fewer than k indices set,
or if any signature at a set index is not a valid signature from
the corresponding public key on the message.
(If more than k signatures are included, all must be valid.)
`Bytes` will be the amino encoded version of the pubkey.
Address will be `Hash(amino_encoded_pubkey)`.
The reason this doesn't use `log_8(n)` bytes per signer is that it heavily optimizes for the case where a very small number of signers are required;
e.g. for `n` of size `24`, that would only be more space efficient for `k < 3`.
This seems less likely, so it should not be the case optimized for.
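A hedged sketch of this verification rule; the `CryptoBitArray` helper methods are assumed, and `cdc` stands for the package's amino codec:
```golang
func (pk ThresholdMultiSignaturePubKey) VerifyBytes(msg, marshalledSig []byte) bool {
	var sig Multisignature
	if cdc.UnmarshalBinaryBare(marshalledSig, &sig) != nil {
		return false
	}
	verified := 0
	for i := 0; i < len(pk.Pubkeys); i++ {
		if !sig.BitArray.GetIndex(i) {
			continue // no signature claimed for signer i
		}
		// The signature at each set index must verify against the
		// public key at that same index.
		if !pk.Pubkeys[i].VerifyBytes(msg, sig.Sigs[verified]) {
			return false
		}
		verified++
	}
	return verified >= int(pk.K) // reject if fewer than K indices set
}
```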
#### Weighted threshold signature
The pubkey is the following struct:
```golang
type WeightedThresholdMultiSignaturePubKey struct {
Weights []uint `json:"weights"`
Threshold uint `json:"threshold"`
Pubkeys []crypto.Pubkey `json:"pubkeys"`
}
```
Weights and Pubkeys must be of the same length.
Everything else proceeds identically to the K of N multisig,
except the multisig fails if the sum of the weights is less than the threshold.
#### Multisignature
The intermediate form of the signature (as it accrues more signatures) will be the following struct:
```golang
type Multisignature struct {
	BitArray CryptoBitArray // Documented later
	Sigs     [][]byte
}
```
It is important to recall that each private key will output a signature on the provided message itself.
So no signing algorithm ever outputs the multisignature.
The UI will take a signature, cast it into a multisignature, and then keep adding
new signatures into it, and when done marshal it into `[]byte`.
This will require the following helper methods:
```golang
func SigToMultisig(sig []byte, n int)
func GetIndex(pk crypto.Pubkey, []crypto.Pubkey)
func AddSignature(sig Signature, index int, multiSig *Multisignature)
```
The multisignature will be converted to an `[]byte` using amino.MarshalBinaryBare.
#### Bit Array
We would be using a new implementation of a bitarray. The struct it would be encoded/decoded from is
```golang
type CryptoBitArray struct {
ExtraBitsStored byte `json:"extra_bits"` // The number of extra bits in elems.
Elems []byte `json:"elems"`
}
```
The reason for not using the BitArray currently implemented in `libs/common/bit_array.go`
is that it is less space efficient, due to a space / time trade-off.
Evidence for this is outlined in [this issue](https://github.com/tendermint/tendermint/issues/2077).
In the multisig, we will not be performing arithmetic operations,
so there is no performance increase with the current implementation,
and just loss of spatial efficiency.
Implementing this new bit array with `[]byte` _should_ be simple, as no
arithmetic operations between bit arrays are required, and save a couple of bytes.
(Explained in that same issue)
When this bit array is encoded, the number of elements is encoded, due to amino.
However, we may be encoding a full byte for what we actually only need 1-7 bits for.
We store that difference in ExtraBitsStored.
This allows us to have an unbounded number of signers, and is more space efficient than what is currently used in `libs/common`.
Again, the implementation of this space-saving feature is straightforward.
### Encoding the structs
We will use straightforward amino encoding. This is chosen for ease of compatibility with other languages.
### Future points of discussion
If desired, we can use ed25519 batch verification for all ed25519 keys.
This is a future point of discussion, but would be backwards compatible as this information won't need to be marshalled.
(There may even be cofactor concerns without ristretto)
Aggregation of pubkeys / sigs in Schnorr sigs / BLS sigs is not backwards compatible, and would need to be a new ASM type.
## Status
Implemented (moved to cosmos-sdk)
## Consequences
### Positive
- Supports multisignatures, in a way that won't require any special cases in our downstream verification code.
- Easy to serialize / deserialize
- Unbounded number of signers
### Negative
- Larger codebase; however, this should reside in a subfolder of tendermint/crypto, as it provides no new interfaces. (Ref: https://github.com/tendermint/go-crypto/issues/136)
- Space inefficient due to utilization of amino encoding
- Suggested implementation requires a new struct for every ASM.
### Neutral

View File

@@ -0,0 +1,104 @@
# ADR 020: Limiting txs size inside a block
## Changelog
13-08-2018: Initial Draft
15-08-2018: Second version after Dev's comments
28-08-2018: Third version after Ethan's comments
30-08-2018: AminoOverheadForBlock => MaxAminoOverheadForBlock
31-08-2018: Bounding evidence and chain ID
13-01-2019: Add section on MaxBytes vs MaxDataBytes
## Context
We currently use MaxTxs to reap txs from the mempool when proposing a block,
but enforce MaxBytes when unmarshaling a block, so we could easily propose a
block that's too large to be valid.
We should just remove MaxTxs altogether and stick with MaxBytes, and have a
`mempool.ReapMaxBytes`.
But we can't just reap BlockSize.MaxBytes, since MaxBytes is for the entire block,
not for the txs inside the block. There's extra amino overhead + the actual
headers on top of the actual transactions + evidence + last commit.
We could also consider using a MaxDataBytes instead of or in addition to MaxBytes.
## MaxBytes vs MaxDataBytes
The [PR #3045](https://github.com/tendermint/tendermint/pull/3045) suggested
additional clarity/justification was necessary here, with respect to the use
of MaxDataBytes in addition to, or instead of, MaxBytes.
MaxBytes provides a clear limit on the total size of a block that requires no
additional calculation if you want to use it to bound resource usage, and there
has been considerable discussion about optimizing tendermint around 1MB blocks.
Regardless, we need some maximum on the size of a block so we can avoid
unmarshaling blocks that are too big during the consensus, and it seems more
straightforward to provide a single fixed number for this rather than a
computation of "MaxDataBytes + everything else you need to make room for
(signatures, evidence, header)". MaxBytes provides a simple bound so we can
always say "blocks are less than X MB".
Having both MaxBytes and MaxDataBytes feels like unnecessary complexity. It's
not particularly surprising for MaxBytes to imply the maximum size of the
entire block (not just txs), one just has to know that a block includes header,
txs, evidence, votes. For more fine grained control over the txs included in the
block, there is the MaxGas. In practice, the MaxGas may be expected to do most of
the tx throttling, and the MaxBytes to just serve as an upper bound on the total
size. Applications can use MaxGas as a MaxDataBytes by just taking the gas for
every tx to be its size in bytes.
## Proposed solution
Therefore, we should
1) Get rid of MaxTxs.
2) Rename MaxTxsBytes to MaxBytes.
When we need to ReapMaxBytes from the mempool, we calculate the upper bound as follows:
```
ExactLastCommitBytes = {number of validators currently enabled} * {MaxVoteBytes}
MaxEvidenceBytesPerBlock = MaxBytes / 10
ExactEvidenceBytes = cs.evpool.PendingEvidence(MaxEvidenceBytesPerBlock) * MaxEvidenceBytes
mempool.ReapMaxBytes(MaxBytes - MaxAminoOverheadForBlock - ExactLastCommitBytes - ExactEvidenceBytes - MaxHeaderBytes)
```
where MaxVoteBytes, MaxEvidenceBytes, MaxHeaderBytes and MaxAminoOverheadForBlock
are constants defined inside the `types` package:
- MaxVoteBytes - 170 bytes
- MaxEvidenceBytes - 364 bytes
- MaxHeaderBytes - 476 bytes (~276 bytes hashes + 200 bytes - 50 UTF-8 encoded
symbols of chain ID 4 bytes each in the worst case + amino overhead)
- MaxAminoOverheadForBlock - 8 bytes (assuming MaxHeaderBytes includes amino
overhead for encoding header, MaxVoteBytes - for encoding vote, etc.)
ChainID needs to be bounded to 50 symbols max.
When reaping evidence, we use MaxBytes to calculate the upper bound (e.g. 1/10)
to save some space for transactions.
NOTE: while reaping the `max int` bytes in mempool, we should account for the fact that every
transaction will take `len(tx)+aminoOverhead`, where aminoOverhead=1-4 bytes.
We should write a test that fails if the underlying structs change but the
MaxXXX constants stay the same.
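For illustration only, with assumed inputs (MaxBytes = 1 MB, 100 validators, no pending evidence), the reap budget works out to:
```
// Illustrative arithmetic, not real code paths; all inputs assumed.
maxDataBytes := 1_000_000 - // MaxBytes
	8 -       // MaxAminoOverheadForBlock
	100*170 - // ExactLastCommitBytes: 100 validators * MaxVoteBytes
	0 -       // ExactEvidenceBytes: no pending evidence
	476       // MaxHeaderBytes
// maxDataBytes == 982,516 bytes available for txs in this scenario.
```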
## Status
Implemented
## Consequences
### Positive
* one way to limit the size of a block
* less variables to configure
### Negative
* constants that need to be adjusted if the underlying structs got changed
### Neutral

View File

@@ -0,0 +1,52 @@
# ADR 012: ABCI Events
## Changelog
- *2018-09-02* Remove ABCI errors component. Update description for events
- *2018-07-12* Initial version
## Context
ABCI tags were first described in [ADR 002](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-002-event-subscription.md).
They are key-value pairs that can be used to index transactions.
Currently, ABCI messages return a list of tags to describe an
"event" that took place during the Check/DeliverTx/Begin/EndBlock,
where each tag refers to a different property of the event, like the sending and receiving account addresses.
Since there is only one list of tags, recording data for multiple such events in
a single Check/DeliverTx/Begin/EndBlock must be done using prefixes in the key
space.
Alternatively, groups of tags that constitute an event can be separated by a
special tag that denotes a break between the events. This would allow
straightforward encoding of multiple events into a single list of tags without
prefixing, at the cost of these "special" tags to separate the different events.
TODO: brief description of how the indexing works
## Decision
Instead of returning a list of tags, return a list of events, where
each event is a list of tags. This way we naturally capture the concept of
multiple events happening during a single ABCI message.
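As a hypothetical illustration (names and values invented), a single DeliverTx that performs two transfers would return two events, each its own list of tags:
```
events = [
  [{"sender": "A"}, {"recipient": "B"}, {"amount": "10"}],
  [{"sender": "C"}, {"recipient": "D"}, {"amount": "7"}],
]
```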
TODO: describe impact on indexing and querying
## Status
Implemented
## Consequences
### Positive
- Ability to track distinct events separate from ABCI calls (DeliverTx/BeginBlock/EndBlock)
- More powerful query abilities
### Negative
- More complex query syntax
- More complex search implementation
### Neutral

View File

@@ -0,0 +1,63 @@
# ADR 022: ABCI Errors
## Changelog
- *2018-09-01* Initial version
## Context
ABCI errors should provide an abstraction between application details
and the client interface responsible for formatting & displaying errors to the user.
Currently, this abstraction consists of a single integer (the `code`), where any
`code > 0` is considered an error (ie. invalid transaction) and all type
information about the error is contained in the code. This integer is
expected to be decoded by the client into a known error string, where any
more specific data is contained in the `data`.
In a [previous conversation](https://github.com/tendermint/abci/issues/165#issuecomment-353704015),
it was suggested that not all non-zero codes need to be errors, hence it's called `code` and not `error code`.
It is unclear exactly how the semantics of the `code` field will evolve, though
better lite-client proofs (like discussed for tags
[here](https://github.com/tendermint/tendermint/issues/1007#issuecomment-413917763))
may play a role.
Note that having all type information in a single integer
precludes an easy coordination method between "module implementers" and "client
implementers", especially for apps with many "modules". With an unbounded error domain (such as a string), module
implementers can pick a globally unique prefix & error code set, so client
implementers could easily implement support for "module A" regardless of which
particular blockchain network it was running in and which other modules were running with it. With
only error codes, globally unique codes are difficult/impossible, as the space
is finite and collisions are likely without an easy way to coordinate.
For instance, while trying to build an ecosystem of modules that can be composed into a single
ABCI application, the Cosmos-SDK had to hack a higher level "codespace" into the
single integer so that each module could have its own space to express its
errors.
## Decision
Include a `string code_space` in all ABCI messages that have a `code`.
This allows applications to namespace the codes so they can experiment with
their own code schemes.
It is the responsibility of applications to limit the size of the `code_space`
string.
How the codespace is hashed into block headers (ie. so it can be queried
efficiently by lite clients) is left for a separate ADR.
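A hypothetical sketch of the change on one response type; the field numbers, and all fields other than `code` and `code_space`, are illustrative only:
```
message ResponseCheckTx {
  uint32 code       = 1;
  string code_space = 2; // e.g. "bank"; namespaces the code per module/app
  bytes  data       = 3;
  string log        = 4;
}
```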
## Consequences
### Positive
- No need for complex codespacing on a single integer
- More expressive type system for errors
### Negative
- Another field in the response needs to be accounted for
- Some redundancy with `code` field
- May encourage more error/code type info to move to the `codespace` string, which
could impact lite clients.

View File

@@ -0,0 +1,183 @@
# ADR 023: ABCI `ProposeTx` Method
## Changelog
25-06-2018: Initial draft based on [#1776](https://github.com/tendermint/tendermint/issues/1776)
## Context
[#1776](https://github.com/tendermint/tendermint/issues/1776) was
opened in relation to implementation of a Plasma child chain using Tendermint
Core as consensus/replication engine.
Due to the requirements of [Minimal Viable Plasma (MVP)](https://ethresear.ch/t/minimal-viable-plasma/426) and [Plasma Cash](https://ethresear.ch/t/plasma-cash-plasma-with-much-less-per-user-data-checking/1298), it is necessary for ABCI apps to have a mechanism to handle the following cases (more may emerge in the near future):
1. `deposit` transactions on the Root Chain, which must consist of a block
with a single transaction, where there are no inputs and only one output
made in favour of the depositor. In this case, a `block` consists of
a transaction with the following shape:
```
[0, 0, 0, 0, #input1 - zeroed out
0, 0, 0, 0, #input2 - zeroed out
<depositor_address>, <amount>, #output1 - in favour of depositor
0, 0, #output2 - zeroed out
<fee>,
]
```
`exit` transactions may also be treated in a similar manner, wherein the
input is the UTXO being exited on the Root Chain, and the output belongs to
a reserved "burn" address, e.g., `0x0`. In such cases, it is favourable for
the containing block to only hold a single transaction that may receive
special treatment.
2. Other "internal" transactions on the child chain, which may be initiated
unilaterally. The most basic example is a coinbase transaction
implementing validator node incentives, but may also be app-specific. In
these cases, it may be favourable for such transactions to
be ordered in a specific manner, e.g., coinbase transactions will always be
at index 0. In general, such strategies increase the determinism and
predictability of blockchain applications.
While it is possible to deal with the cases enumerated above using the
existing ABCI, the currently available options result in suboptimal workarounds. Two are
explained in greater detail below.
### Solution 1: App state-based Plasma chain
In this workaround, the app maintains a `PlasmaStore` with a corresponding
`Keeper`. The PlasmaStore is responsible for maintaining a second, separate
blockchain that complies with the MVP specification, including `deposit`
blocks and other "internal" transactions. These "virtual" blocks are then broadcast
to the Root Chain.
This naive approach is, however, fundamentally flawed, as it by definition
diverges from the canonical chain maintained by Tendermint. This is further
exacerbated if the business logic for generating such transactions is
potentially non-deterministic, as this should not even be done in
`Begin/EndBlock`, which may, as a result, break consensus guarantees.
Additionally, this has serious implications for "watchers" - independent third parties,
or even an auxiliary blockchain, responsible for ensuring that blocks recorded
on the Root Chain are consistent with the Plasma chain's. Since, in this case,
the Plasma chain is inconsistent with the canonical one maintained by Tendermint
Core, it seems that there exists no compact means of verifying the legitimacy of
the Plasma chain without replaying every state transition from genesis (!).
### Solution 2: Broadcast to Tendermint Core from ABCI app
This approach is inspired by `ethermint`, in which Ethereum transactions are
relayed to Tendermint Core. It requires the app to maintain a client connection
to the consensus engine.
Whenever an "internal" transaction needs to be created, the proposer of the
current block broadcasts the transaction or transactions to Tendermint as
needed in order to ensure that the Tendermint chain and Plasma chain are
completely consistent.
This allows "internal" transactions to pass through the full consensus
process, and can be validated in methods like `CheckTx`, i.e., signed by the
proposer, is the semantically correct, etc. Note that this involves informing
the ABCI app of the block proposer, which was temporarily hacked in as a means
of conducting this experiment, although this should not be necessary when the
current proposer is passed to `BeginBlock`.
It is much easier to relay these transactions directly to the Root
Chain smart contract and/or maintain a "compressed" auxiliary chain comprised
of Plasma-friendly blocks that 100% reflect the canonical (Tendermint)
blockchain. Unfortunately, this approach is not idiomatic (i.e., it utilises the
Tendermint consensus engine in unintended ways). Additionally, it does not
allow the application developer to:
- Control the _ordering_ of transactions in the proposed block (e.g., index 0,
or 0 to `n` for coinbase transactions)
- Control the _number_ of transactions in the block (e.g., when a `deposit`
block is required)
Since determinism is of utmost importance in blockchain engineering, this approach,
while more viable, should also not be considered as fit for production.
## Decision
### `ProposeTx`
In order to address the difficulties described above, the ABCI interface must
expose an additional method, tentatively named `ProposeTx`.
It should have the following signature:
```
ProposeTx(RequestProposeTx) ResponseProposeTx
```
Where `RequestProposeTx` and `ResponseProposeTx` are `message`s with the
following shapes:
```
message RequestProposeTx {
int64 next_block_height = 1; // height of the block the proposed tx would be part of
Validator proposer = 2; // the proposer details
}
message ResponseProposeTx {
int64 num_tx = 1; // the number of tx to include in proposed block
repeated bytes txs = 2; // ordered transaction data to include in block
bool exclusive = 3; // whether the block should include other transactions (from `mempool`)
}
```
`ProposeTx` would be called before `mempool.Reap` at this
[line](https://github.com/tendermint/tendermint/blob/9cd9f3338bc80a12590631632c23c8dbe3ff5c34/consensus/state.go#L935).
Depending on whether `exclusive` is `true` or `false`, the proposed
transactions are then pushed on top of the transactions received from
`mempool.Reap`.
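A hedged sketch of that call order; all names are hypothetical and this is not the implemented flow:
```
// Proposer-side block assembly under the proposed ABCI method.
res := app.ProposeTx(RequestProposeTx{
	NextBlockHeight: height,
	Proposer:        proposerValidator,
})
txs := res.Txs
if !res.Exclusive {
	// Non-exclusive: app-proposed txs go on top of the mempool's.
	txs = append(txs, mempool.Reap(maxBytes)...)
}
```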
### `DeliverTx`
Since the list of `tx` received from `ProposeTx` are _not_ passed through `CheckTx`,
it is probably a good idea to provide a means of differentiating "internal" transactions
from user-generated ones, in case the app developer needs/wants to take extra measures to
ensure validity of the proposed transactions.
Therefore, the `RequestDeliverTx` message should be changed to provide an additional flag, like so:
```
message RequestDeliverTx {
bytes tx = 1;
bool internal = 2;
}
```
Alternatively, an additional method `DeliverProposeTx` may be added as an accompaniment to
`ProposeTx`. However, it is not clear at this stage if this additional overhead is necessary
to preserve consensus guarantees, given that a simple flag may suffice for now.
## Status
Pending
## Consequences
### Positive
- Tendermint ABCI apps will be able to function as minimally viable Plasma chains.
- It will thereby become possible to add an extension to `cosmos-sdk` to enable
ABCI apps to support both IBC and Plasma, maximising interop.
- ABCI apps will have greater control and flexibility in managing blockchain state,
without having to resort to non-deterministic hacks and/or unsafe workarounds
### Negative
- Maintenance overhead of exposing additional ABCI method
- Potential security issues that may have been overlooked and must now be tested extensively
### Neutral
- ABCI developers must deal with increased (albeit nominal) API surface area.
## References
- [#1776 Plasma and "Internal" Transactions in ABCI Apps](https://github.com/tendermint/tendermint/issues/1776)
- [Minimal Viable Plasma](https://ethresear.ch/t/minimal-viable-plasma/426)
- [Plasma Cash: Plasma with much less per-user data checking](https://ethresear.ch/t/plasma-cash-plasma-with-much-less-per-user-data-checking/1298)

View File

@@ -0,0 +1,234 @@
# ADR 024: SignBytes and validator types in privval
## Context
Currently, the messages exchanged between tendermint and a (potentially remote) signer/validator,
namely votes, proposals, and heartbeats, are encoded as a JSON string
(e.g., via `Vote.SignBytes(...)`) and then
signed. JSON encoding is sub-optimal for both hardware wallets
and usage in Ethereum smart contracts. Both issues are laid out in detail in [issue#1622].
Also, there is currently no distinction between sign requests and replies, and no possibility
for a remote signer to include an error code or message in case something went wrong.
The messages exchanged between tendermint and a remote signer currently live in
[privval/socket.go] and encapsulate the corresponding types in [types].
[privval/socket.go]: https://github.com/tendermint/tendermint/blob/d419fffe18531317c28c29a292ad7d253f6cafdf/privval/socket.go#L496-L502
[issue#1622]: https://github.com/tendermint/tendermint/issues/1622
[types]: https://github.com/tendermint/tendermint/tree/master/types
## Decision
- restructure vote, proposal, and heartbeat such that their encoding is easily parseable by
hardware devices and smart contracts using a binary encoding format ([amino] in this case)
- split up the messages exchanged between tendermint and remote signers into requests and
responses (see details below)
- include an error type in responses
### Overview
```
+--------------+ +----------------+
| | SignXRequest | |
|Remote signer |<---------------------+ tendermint |
| (e.g. KMS) | | |
| +--------------------->| |
+--------------+ SignedXReply +----------------+
SignXRequest {
x: X
}
SignedXReply {
x: X
sig: Signature // []byte
err: Error{
code: int
desc: string
}
}
```
TODO: Alternatively, the type `X` might directly include the signature. A lot of places expect a vote with a
signature and do not necessarily deal with "Replies".
Still exploring what would work best here.
This would look like (exemplified using X = Vote):
```
Vote {
// all fields besides signature
}
SignedVote {
Vote Vote
Signature []byte
}
SignVoteRequest {
Vote Vote
}
SignedVoteReply {
Vote SignedVote
Err Error
}
```
**Note:** There was a related discussion around including a fingerprint of, or, the whole public-key
into each sign-request to tell the signer which corresponding private-key to
use to sign the message. This is particularly relevant in the context of the KMS
but is currently not considered in this ADR.
[amino]: https://github.com/tendermint/go-amino/
### Vote
As explained in [issue#1622] `Vote` will be changed to contain the following fields
(notation in protobuf-like syntax for easy readability):
```proto
// vanilla protobuf / amino encoded
message Vote {
Version fixed32
Height sfixed64
Round sfixed32
VoteType fixed32
Timestamp Timestamp // << using protobuf definition
BlockID BlockID // << as already defined
ChainID string // at the end because length could vary a lot
}
// this is an amino registered type; like currently privval.SignVoteMsg:
// registered with "tendermint/socketpv/SignVoteRequest"
message SignVoteRequest {
Vote vote
}
// amino registered type
// registered with "tendermint/socketpv/SignedVoteReply"
message SignedVoteReply {
Vote Vote
Signature Signature
Err Error
}
// we will use this type everywhere below
message Error {
Type uint // error code
Description string // optional description
}
```
The `ChainID` gets moved into the vote message directly. Previously, it was injected
using the [Signable] interface method `SignBytes(chainID string) []byte`. Also, the
signature won't be included directly, only in the corresponding `SignedVoteReply` message.
[Signable]: https://github.com/tendermint/tendermint/blob/d419fffe18531317c28c29a292ad7d253f6cafdf/types/signable.go#L9-L11
### Proposal
```proto
// vanilla protobuf / amino encoded
message Proposal {
Height sfixed64
Round sfixed32
Timestamp Timestamp // << using protobuf definition
BlockPartsHeader PartSetHeader // as already defined
POLRound sfixed32
POLBlockID BlockID // << as already defined
}
// amino registered with "tendermint/socketpv/SignProposalRequest"
message SignProposalRequest {
Proposal proposal
}
// amino registered with "tendermint/socketpv/SignProposalReply"
message SignProposalReply {
Prop Proposal
Sig Signature
Err Error // as defined above
}
```
### Heartbeat
**TODO**: clarify if heartbeat also needs a fixed offset and update the fields accordingly:
```proto
message Heartbeat {
ValidatorAddress Address
ValidatorIndex int
Height int64
Round int
Sequence int
}
// amino registered with "tendermint/socketpv/SignHeartbeatRequest"
message SignHeartbeatRequest {
Hb Heartbeat
}
// amino registered with "tendermint/socketpv/SignHeartbeatReply"
message SignHeartbeatReply {
Hb Heartbeat
Sig Signature
Err Error // as defined above
}
```
## PubKey
TBA - this needs further thought: e.g. what to do in the case of the KMS, which holds
several keys? How does it know with which key to reply?
## SignBytes
`SignBytes` will not require a `ChainID` parameter:
```golang
type Signable interface {
SignBytes() []byte
}
```
And the implementation for vote, heartbeat, proposal will look like:
```golang
// type T is one of vote, heartbeat, proposal
func (tp *T) SignBytes() []byte {
bz, err := cdc.MarshalBinary(tp)
if err != nil {
panic(err)
}
return bz
}
```
## Status
Partially Accepted
## Consequences
### Positive
The most relevant positive effect is that the signing bytes can easily be parsed by a
hardware module and a smart contract. Besides that:
- clearer separation between requests and responses
- added error messages enable better error handling
### Negative
- relatively huge change / refactoring touching quite some code
- lots of places assume a `Vote` with a signature included -> they will need to be updated
- need to modify some interfaces
### Neutral
not even the swiss are neutral

View File

@@ -0,0 +1,150 @@
# ADR 025 Commit
## Context
Currently the `Commit` structure contains a lot of potentially redundant or unnecessary data.
It contains a list of precommits from every validator, where the precommit
includes the whole `Vote` structure. Thus each of the commit height, round,
type, and blockID are repeated for every validator, and could be deduplicated,
leading to very significant savings in block size.
```
type Commit struct {
BlockID BlockID `json:"block_id"`
Precommits []*Vote `json:"precommits"`
}
type Vote struct {
ValidatorAddress Address `json:"validator_address"`
ValidatorIndex int `json:"validator_index"`
Height int64 `json:"height"`
Round int `json:"round"`
Timestamp time.Time `json:"timestamp"`
Type byte `json:"type"`
BlockID BlockID `json:"block_id"`
Signature []byte `json:"signature"`
}
```
The original tracking issue for this is [#1648](https://github.com/tendermint/tendermint/issues/1648).
We have discussed replacing the `Vote` type in `Commit` with a new `CommitSig`
type, which includes at minimum the vote signature. The `Vote` type will
continue to be used in the consensus reactor and elsewhere.
A primary question is what should be included in the `CommitSig` beyond the
signature. One current constraint is that we must include a timestamp, since
this is how we calculate BFT time, though we may be able to change this [in the
future](https://github.com/tendermint/tendermint/issues/2840).
Other concerns here include:
- Validator Address [#3596](https://github.com/tendermint/tendermint/issues/3596) -
Should the CommitSig include the validator address? It is very convenient to
do so, but likely not necessary. This was also discussed in [#2226](https://github.com/tendermint/tendermint/issues/2226).
- Absent Votes [#3591](https://github.com/tendermint/tendermint/issues/3591) -
How to represent absent votes? Currently they are just present as `nil` in the
Precommits list, which is actually problematic for serialization
- Other BlockIDs [#3485](https://github.com/tendermint/tendermint/issues/3485) -
How to represent votes for nil and for other block IDs? We currently allow
votes for nil and votes for alternative block ids, but just ignore them
## Decision
Deduplicate the fields and introduce `CommitSig`:
```
type Commit struct {
	Height     int64
	Round      int
	BlockID    BlockID     `json:"block_id"`
	Precommits []CommitSig `json:"precommits"`
}

type CommitSig struct {
	BlockID          BlockIDFlag
	ValidatorAddress Address
	Timestamp        time.Time
	Signature        []byte
}

// BlockIDFlag indicates which BlockID the signature is for.
type BlockIDFlag int

const (
	BlockIDFlagAbsent BlockIDFlag = iota // vote is not included in the Commit.Precommits
	BlockIDFlagCommit                    // voted for the Commit.BlockID
	BlockIDFlagNil                       // voted for nil
)
```
Re the concerns outlined in the context:
**Timestamp**: Leave the timestamp for now. Removing it and switching to
proposer based time will take more analysis and work, and will be left for a
future breaking change. In the meantime, the concerns with the current approach to
BFT time [can be
mitigated](https://github.com/tendermint/tendermint/issues/2840#issuecomment-529122431).
**ValidatorAddress**: we include it in the `CommitSig` for now. While this
does increase the block size unnecessarily (20 bytes per validator), it has some ergonomic and debugging advantages:
- `Commit` contains everything necessary to reconstruct `[]Vote`, and doesn't depend on additional access to a `ValidatorSet`
- Lite clients can check if they know the validators in a commit without
re-downloading the validator set
- Easy to see directly in a commit which validators signed what without having
to fetch the validator set
If and when we change the `CommitSig` again, for instance to remove the timestamp,
we can reconsider whether the ValidatorAddress should be removed.
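For illustration, a minimal sketch of how a `Vote` could be reconstructed from a `Commit` and one of its `CommitSig`s (helper and constant names are illustrative, not the ADR's API):
```go
// voteFromCommitSig rebuilds the Vote of the validator at index idx
// from a Commit, selecting the BlockID according to the BlockIDFlag.
func voteFromCommitSig(c *Commit, idx int) *Vote {
	cs := c.Precommits[idx]
	if cs.BlockID == BlockIDFlagAbsent {
		return nil // no vote was received from this validator
	}
	var blockID BlockID // zero value stands for a vote for nil
	if cs.BlockID == BlockIDFlagCommit {
		blockID = c.BlockID
	}
	return &Vote{
		ValidatorAddress: cs.ValidatorAddress,
		ValidatorIndex:   idx,
		Height:           c.Height,
		Round:            c.Round,
		Timestamp:        cs.Timestamp,
		Type:             precommitType, // assumed constant identifying precommit votes
		BlockID:          blockID,
		Signature:        cs.Signature,
	}
}
```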
**Absent Votes**: we include absent votes explicitly with no Signature or
Timestamp but with the ValidatorAddress. This should resolve the serialization
issues and make it easy to see which validator's votes failed to be included.
**Other BlockIDs**: We use a single byte to indicate which blockID a `CommitSig`
is for. The only options are:
- `Absent` - no vote was received from this validator, so there is no signature
- `Nil` - validator voted Nil - meaning they did not see a polka in time
- `Commit` - validator voted for this block
Note this means we don't allow votes for any other blockIDs. If a signature is
included in a commit, it is either for nil or the correct blockID. According to
the Tendermint protocol and assumptions, there is no way for a correct validator to
precommit for a conflicting blockID in the same round in which an actual commit was
created. This was the consensus from
[#3485](https://github.com/tendermint/tendermint/issues/3485)
We may want to consider supporting other blockIDs later, as a way to capture
evidence that might be helpful. We should clarify if/when/how doing so would
actually help first. To implement it, we could change the `Commit.BlockID`
field to a slice, where the first entry is the correct block ID and the other
entries are other BlockIDs that validators precommitted for before. The BlockIDFlag
enum can be extended to represent these additional block IDs on a per block
basis.
## Status
Implemented
## Consequences
### Positive
Removing the Type/Height/Round/Index and the BlockID saves roughly 80 bytes per precommit.
It varies because some integers are varint-encoded. The BlockID contains two 32-byte hashes and an integer,
and the Height is 8 bytes.
For a chain with 100 validators, that's up to 8kB in savings per block!
### Negative
- Large breaking change to the block and commit structure
- Requires differentiating in code between the Vote and CommitSig objects, which may add some complexity (votes need to be reconstructed to be verified and gossiped)
### Neutral
- Commit.Precommits no longer contains nil values

View File

@@ -0,0 +1,49 @@
# ADR 026: General Merkle Proof
## Context
We are using raw `[]byte` for Merkle proofs in `abci.ResponseQuery`. This makes it hard to handle multilayer Merkle proofs and general cases. Here, a new interface, `ProofOperator`, is defined. Users can define their own Merkle proof format and layer proofs easily.
Goals:
- Layer Merkle proofs without decoding/re-encoding
- Provide a general way to chain proofs
- Make the proof format extensible, allowing third-party proof types
## Decision
### ProofOperator
`type ProofOperator` is an interface for Merkle proofs. The definition is:
```go
type ProofOperator interface {
	Run([][]byte) ([][]byte, error)
	GetKey() []byte
	ProofOp() ProofOp
}
```
Since a proof can treat various data types, `Run()` takes `[][]byte` as its argument, not `[]byte`. For example, a range proof's `Run()` can take multiple key-value pairs as its argument. It will then return the root of the tree, calculated from the input values, for further processing.
`ProofOperator` does not have to be a Merkle proof - it can be a function that transforms the argument for an intermediate step, e.g., prepending the length to the `[]byte`.
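To make the layering concrete, here is a hedged sketch (not part of the ADR) of verifying a chain of operators against an expected root, assuming `bytes` and `fmt` are imported:
```go
// verifyChain runs the operators from leaf to root; each operator's
// output becomes the next operator's input.
func verifyChain(ops []ProofOperator, root []byte, leaf [][]byte) error {
	args := leaf
	for _, op := range ops {
		var err error
		if args, err = op.Run(args); err != nil {
			return err
		}
	}
	if len(args) != 1 {
		return fmt.Errorf("expected a single root value, got %d", len(args))
	}
	if !bytes.Equal(args[0], root) {
		return fmt.Errorf("computed root %X does not match expected root %X", args[0], root)
	}
	return nil
}
```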
### ProofOp
`type ProofOp` is a protobuf message which is a triple of `Type string`, `Key []byte`, and `Data []byte`. `ProofOperator` and `ProofOp` are interconvertible, using `ProofOperator.ProofOp()` and `OpDecoder()`, where `OpDecoder` is a function that each proof type can register for its own encoding scheme. For example, we can add a byte before the serialized proof to indicate the encoding scheme, supporting JSON decoding.
## Status
Implemented
## Consequences
### Positive
- Layering becomes easier (no encoding/decoding at each step)
- Third-party proof formats are available
### Negative
- Larger size for `abci.ResponseQuery`
- Unintuitive proof chaining (it is not clear what `Run()` is doing)
- Additional code for registering `OpDecoder`s

View File

@@ -0,0 +1,127 @@
# ADR 029: Check block txs before prevote
## Changelog
04-10-2018: Update with link to issue
[#2384](https://github.com/tendermint/tendermint/issues/2384) and reason for rejection
19-09-2018: Initial Draft
## Context
We currently check a tx's validity in two ways:
1. Through `checkTx` on the mempool connection.
2. Through `deliverTx` on the consensus connection.
The first is called when an external tx comes in, before the node (acting as a proposer) includes it in a block. The second is called when an external block comes in and reaches the commit phase; the node doesn't need to be the proposer of the block, but it should still check the txs in that block.
In the second situation, if there are many invalid txs in the block, it is too late when all nodes discover that most txs in the block are invalid, and we'd better not record invalid txs in the blockchain either.
## Proposed solution
Therefore, we should find a way to check the txs' validity before sending out a prevote. Currently we have `cs.isProposalComplete()` to judge whether a block is complete. We can have
```
func (blockExec *BlockExecutor) CheckBlock(block *types.Block) error {
	// Check the txs of the block.
	for _, tx := range block.Txs {
		reqRes := blockExec.proxyApp.CheckTxAsync(tx)
		reqRes.Wait()
		if reqRes.Response == nil || reqRes.Response.GetCheckTx() == nil || reqRes.Response.GetCheckTx().Code != abci.CodeTypeOK {
			return errors.Errorf("tx %v check failed. response: %v", tx, reqRes.Response)
		}
	}
	return nil
}
```
such a method in `BlockExecutor` to check the validity of all txs in that block.
However, the method should not be implemented like that, because `checkTx` shares the same state the mempool uses in the app. So we should define a new interface method `checkBlock` in `Application` to indicate that it should use the same state as `deliverTx`.
```
type Application interface {
	// Info/Query Connection
	Info(RequestInfo) ResponseInfo    // Return application info
	Query(RequestQuery) ResponseQuery // Query for state

	// Mempool Connection
	CheckTx(tx []byte) ResponseCheckTx // Validate a tx for the mempool

	// Consensus Connection
	InitChain(RequestInitChain) ResponseInitChain    // Initialize blockchain with validators and other info from TendermintCore
	CheckBlock(RequestCheckBlock) ResponseCheckBlock // Check all txs of a proposed block before prevoting
	BeginBlock(RequestBeginBlock) ResponseBeginBlock // Signals the beginning of a block
	DeliverTx(tx []byte) ResponseDeliverTx           // Deliver a tx for full processing
	EndBlock(RequestEndBlock) ResponseEndBlock       // Signals the end of a block, returns changes to the validator set
	Commit() ResponseCommit                          // Commit the state and return the application Merkle root hash
}
```
Every app should implement that method. For example, the counter app:
```
func (app *CounterApplication) CheckBlock(block types.Request_CheckBlock) types.ResponseCheckBlock {
	if app.serial {
		app.originalTxCount = app.txCount // back up the txCount state
		for _, tx := range block.CheckBlock.Block.Txs {
			if len(tx) > 8 {
				return types.ResponseCheckBlock{
					Code: code.CodeTypeEncodingError,
					Log:  fmt.Sprintf("Max tx size is 8 bytes, got %d", len(tx))}
			}
			tx8 := make([]byte, 8)
			copy(tx8[len(tx8)-len(tx):], tx)
			txValue := binary.BigEndian.Uint64(tx8)
			if txValue < uint64(app.txCount) {
				return types.ResponseCheckBlock{
					Code: code.CodeTypeBadNonce,
					Log:  fmt.Sprintf("Invalid nonce. Expected >= %v, got %v", app.txCount, txValue)}
			}
			app.txCount++
		}
	}
	return types.ResponseCheckBlock{Code: code.CodeTypeOK}
}
```
When entering the deliver phase, the app should restore the state to the original state backed up before checking the block:
```
func (app *CounterApplication) DeliverTx(tx []byte) types.ResponseDeliverTx {
	if app.serial {
		app.txCount = app.originalTxCount // restore the txCount state backed up in CheckBlock
	}
	app.txCount++
	return types.ResponseDeliverTx{Code: code.CodeTypeOK}
}
```
The txCount is like the nonce in ethermint; it should be restored when entering the deliverTx phase, while some operations, like checking the tx signature, need not be done again. So deliverTx can focus on how a tx is applied and skip re-checking it, because all the checking has already been done in the checkBlock phase.
An optional optimization is to alter deliverTx into deliverBlock. Since the block has already been checked by checkBlock, all the txs in it are valid, so the app can cache the block, and in the deliverBlock phase it just needs to apply the cached block. This optimization can save network traffic compared to deliverTx.
## Status
Rejected
## Decision
Performance impact is considered too great. See [#2384](https://github.com/tendermint/tendermint/issues/2384)
## Consequences
### Positive
- more robust defense against an adversary proposing a block full of invalid txs.
### Negative
- adds a new interface method; app logic needs to be adjusted to accommodate it.
- sends all the tx data over ABCI twice
- potentially redundant validations (eg. signature checks in both CheckBlock and
DeliverTx)
### Neutral

View File

@@ -0,0 +1,458 @@
# ADR 030: Consensus Refactor
## Context
One of the biggest challenges this project faces is to prove that the
implementations of the specifications are correct. Much like we strive to
formally verify our algorithms and protocols, we should work towards high
confidence about the correctness of our program code. One of those is the core
of Tendermint - Consensus - which currently resides in the `consensus` package.
Over time there has been high friction making changes to the package due to the
algorithm being scattered in a side-effectful container (the current
`ConsensusState`). In order to test the algorithm a large object-graph needs to
be set up, and even then the non-deterministic parts of the container prevent
high certainty. Ideally we would have a 1-to-1 representation of the
[spec](https://github.com/tendermint/spec), ready and easy to test for domain
experts.
Addresses:
- [#1495](https://github.com/tendermint/tendermint/issues/1495)
- [#1692](https://github.com/tendermint/tendermint/issues/1692)
## Decision
To remedy these issues we plan a gradual, non-invasive refactoring of the
`consensus` package. We start off by isolating the consensus algorithm into
a pure function and a finite state machine to address the most pressing issue,
the lack of confidence, while leaving the rest of the package intact
and following up with optional changes to improve the separation of concerns.
### Implementation changes
The core of Consensus can be modelled as a function with clearly defined inputs:
* `State` - data container for the current round, height, etc.
* `Event` - significant events in the network
producing clear outputs:
* `State` - updated input
* `Message` - signal of what actions to perform
```go
type Event int

const (
	EventUnknown Event = iota
	EventProposal
	Majority23PrevotesBlock
	Majority23PrecommitBlock
	Majority23PrevotesAny
	Majority23PrecommitAny
	TimeoutNewRound
	TimeoutPropose
	TimeoutPrevotes
	TimeoutPrecommit
)

type Message int

const (
	MessageUnknown Message = iota
	MessageProposal
	MessageVotes
	MessageDecision
)

type State struct {
	height      uint64
	round       uint64
	step        uint64
	lockedValue interface{} // TODO: Define proper type.
	lockedRound interface{} // TODO: Define proper type.
	validValue  interface{} // TODO: Define proper type.
	validRound  interface{} // TODO: Define proper type.
	// From the original notes: valid(v)
	valid interface{} // TODO: Define proper type.
	// From the original notes: proposer(h, r)
	proposer interface{} // TODO: Define proper type.
}

func Consensus(Event, State) (State, Message) {
	// Consolidate implementation.
}
```
Tracking of relevant information to feed `Event`s into the function and act on
the output is left to the `ConsensusExecutor` (formerly `ConsensusState`).
The benefits for testing surface nicely, as testing a sequence of events
against the algorithm could be as simple as the following example:
``` go
func TestConsensusXXX(t *testing.T) {
	type expected struct {
		message Message
		state   State
	}

	// Set up the order of events, initial state and expectations.
	var (
		events = []struct {
			event Event
			want  expected
		}{
			// ...
		}
		state = State{
			// ...
		}
	)

	for _, e := range events {
		var msg Message
		state, msg = Consensus(e.event, state)

		// Test message expectation.
		if msg != e.want.message {
			t.Fatalf("have %v, want %v", msg, e.want.message)
		}

		// Test state expectation.
		if !reflect.DeepEqual(state, e.want.state) {
			t.Fatalf("have %v, want %v", state, e.want.state)
		}
	}
}
```
## Consensus Executor
## Consensus Core
```go
type Event interface{}

type EventNewHeight struct {
	Height      int64
	ValidatorId int
}

type EventNewRound HeightAndRound

type EventProposal struct {
	Height    int64
	Round     int
	Timestamp Time
	BlockID   BlockID
	POLRound  int
	Sender    int
}

type Majority23PrevotesBlock struct {
	Height  int64
	Round   int
	BlockID BlockID
}

type Majority23PrecommitBlock struct {
	Height  int64
	Round   int
	BlockID BlockID
}

type HeightAndRound struct {
	Height int64
	Round  int
}

type Majority23PrevotesAny HeightAndRound
type Majority23PrecommitAny HeightAndRound
type TimeoutPropose HeightAndRound
type TimeoutPrevotes HeightAndRound
type TimeoutPrecommit HeightAndRound

type Message interface{}

type MessageProposal struct {
	Height   int64
	Round    int
	BlockID  BlockID
	POLRound int
}

type VoteType int

const (
	VoteTypeUnknown VoteType = iota
	Prevote
	Precommit
)

type MessageVote struct {
	Height  int64
	Round   int
	BlockID BlockID
	Type    VoteType
}

type MessageDecision struct {
	Height  int64
	Round   int
	BlockID BlockID
}

type TriggerTimeout struct {
	Height   int64
	Round    int
	Duration Duration
}

type RoundStep int

const (
	RoundStepUnknown RoundStep = iota
	RoundStepPropose
	RoundStepPrevote
	RoundStepPrecommit
	RoundStepCommit
)

type State struct {
	Height           int64
	Round            int
	Step             RoundStep
	LockedValue      BlockID
	LockedRound      int
	ValidValue       BlockID
	ValidRound       int
	ValidatorId      int
	ValidatorSetSize int
}

func proposer(height int64, round int) int {}
func getValue() BlockID                    {}
func Consensus(event Event, state State) (State, Message, TriggerTimeout) {
	msg = nil
	timeout = nil
	switch event := event.(type) {
	case EventNewHeight:
		if event.Height > state.Height {
			state.Height = event.Height
			state.Round = -1
			state.Step = RoundStepPropose
			state.LockedValue = nil
			state.LockedRound = -1
			state.ValidValue = nil
			state.ValidRound = -1
			state.ValidatorId = event.ValidatorId
		}
		return state, msg, timeout

	case EventNewRound:
		if event.Height == state.Height && event.Round > state.Round {
			state.Round = event.Round
			state.Step = RoundStepPropose
			if proposer(state.Height, state.Round) == state.ValidatorId {
				proposal = state.ValidValue
				if proposal == nil {
					proposal = getValue()
				}
				msg = MessageProposal{state.Height, state.Round, proposal, state.ValidRound}
			}
			timeout = TriggerTimeout{state.Height, state.Round, timeoutPropose(state.Round)}
		}
		return state, msg, timeout

	case EventProposal:
		if event.Height == state.Height && event.Round == state.Round &&
			event.Sender == proposer(state.Height, state.Round) && state.Step == RoundStepPropose {
			if event.POLRound >= state.LockedRound || event.BlockID == state.LockedValue || state.LockedRound == -1 {
				msg = MessageVote{state.Height, state.Round, event.BlockID, Prevote}
			}
			state.Step = RoundStepPrevote
		}
		return state, msg, timeout

	case TimeoutPropose:
		if event.Height == state.Height && event.Round == state.Round && state.Step == RoundStepPropose {
			msg = MessageVote{state.Height, state.Round, nil, Prevote}
			state.Step = RoundStepPrevote
		}
		return state, msg, timeout

	case Majority23PrevotesBlock:
		if event.Height == state.Height && event.Round == state.Round && state.Step >= RoundStepPrevote && event.Round > state.ValidRound {
			state.ValidRound = event.Round
			state.ValidValue = event.BlockID
			if state.Step == RoundStepPrevote {
				state.LockedRound = event.Round
				state.LockedValue = event.BlockID
				msg = MessageVote{state.Height, state.Round, event.BlockID, Precommit}
				state.Step = RoundStepPrecommit
			}
		}
		return state, msg, timeout

	case Majority23PrevotesAny:
		if event.Height == state.Height && event.Round == state.Round && state.Step == RoundStepPrevote {
			timeout = TriggerTimeout{state.Height, state.Round, timeoutPrevote(state.Round)}
		}
		return state, msg, timeout

	case TimeoutPrevotes:
		if event.Height == state.Height && event.Round == state.Round && state.Step == RoundStepPrevote {
			msg = MessageVote{state.Height, state.Round, nil, Precommit}
			state.Step = RoundStepPrecommit
		}
		return state, msg, timeout

	case Majority23PrecommitBlock:
		if event.Height == state.Height {
			state.Step = RoundStepCommit
			state.LockedValue = event.BlockID
		}
		return state, msg, timeout

	case Majority23PrecommitAny:
		if event.Height == state.Height && event.Round == state.Round {
			timeout = TriggerTimeout{state.Height, state.Round, timeoutPrecommit(state.Round)}
		}
		return state, msg, timeout

	case TimeoutPrecommit:
		if event.Height == state.Height && event.Round == state.Round {
			state.Round = state.Round + 1
		}
		return state, msg, timeout
	}
}
func ConsensusExecutor() {
	proposal = nil
	votes = HeightVoteSet{Height: 1}
	state = State{
		Height:      1,
		Round:       0,
		Step:        RoundStepPropose,
		LockedValue: nil,
		LockedRound: -1,
		ValidValue:  nil,
		ValidRound:  -1,
	}

	event = EventNewHeight{1, id}
	state, msg, timeout = Consensus(event, state)

	event = EventNewRound{state.Height, 0}
	state, msg, timeout = Consensus(event, state)

	if msg != nil {
		send msg
	}
	if timeout != nil {
		trigger timeout
	}

	for {
		select {
		case message := <-msgCh:
			switch msg := message.(type) {
			case MessageProposal:

			case MessageVote:
				if msg.Height == state.Height {
					newVote = votes.AddVote(msg)
					if newVote {
						switch msg.Type {
						case Prevote:
							prevotes = votes.Prevotes(msg.Round)
							if prevotes.WeakCertificate() && msg.Round > state.Round {
								event = EventNewRound{msg.Height, msg.Round}
								state, msg, timeout = Consensus(event, state)
								state = handleStateChange(state, msg, timeout)
							}

							if blockID, ok = prevotes.TwoThirdsMajority(); ok && blockID != nil {
								if msg.Round == state.Round && hasBlock(blockID) {
									event = Majority23PrevotesBlock{msg.Height, msg.Round, blockID}
									state, msg, timeout = Consensus(event, state)
									state = handleStateChange(state, msg, timeout)
								}
								if proposal != nil && proposal.POLRound == msg.Round && hasBlock(blockID) {
									event = EventProposal{
										Height:   state.Height,
										Round:    state.Round,
										BlockID:  blockID,
										POLRound: proposal.POLRound,
										Sender:   message.Sender,
									}
									state, msg, timeout = Consensus(event, state)
									state = handleStateChange(state, msg, timeout)
								}
							}

							if prevotes.HasTwoThirdsAny() && msg.Round == state.Round {
								event = Majority23PrevotesAny{msg.Height, msg.Round, blockID}
								state, msg, timeout = Consensus(event, state)
								state = handleStateChange(state, msg, timeout)
							}

						case Precommit:
						}
					}
				}
			}

		case timeout := <-timeoutCh:

		case block := <-blockCh:
		}
	}
}

func handleStateChange(state, msg, timeout) State {
	if state.Step == RoundStepCommit {
		state = ExecuteBlock(state.LockedValue)
	}
	if msg != nil {
		send msg
	}
	if timeout != nil {
		trigger timeout
	}
	return state
}
```
### Implementation roadmap
* implement the proposed design
* replace currently scattered calls in `ConsensusState` with calls to the new
`Consensus` function
* rename `ConsensusState` to `ConsensusExecutor` to avoid confusion
* propose a design for improved separation and clear information flow between
`ConsensusExecutor` and `ConsensusReactor`
## Status
Draft.
## Consequences
### Positive
- isolated implementation of the algorithm
- improved testability - simpler to prove correctness
- clearer separation of concerns - easier to reason about
### Negative
### Neutral

View File

@@ -0,0 +1,247 @@
# ADR 033: pubsub 2.0
Author: Anton Kaliaev (@melekes)
## Changelog
02-10-2018: Initial draft
16-01-2019: Second version based on our conversation with Jae
17-01-2019: Third version explaining how new design solves current issues
25-01-2019: Fourth version to treat buffered and unbuffered channels differently
## Context
Since the initial version of the pubsub, there's been a number of issues
raised: [#951], [#1879], [#1880]. Some of them are high-level issues questioning the
core design choices made. Others are minor and mostly about the interface of
`Subscribe()` / `Publish()` functions.
### Sync vs Async
Now, when publishing a message to subscribers, we can do it in a goroutine:
_using channels for data transmission_
```go
for each subscriber {
	out := subscriber.outc
	go func() {
		out <- msg
	}()
}
```
_by invoking callback functions_
```go
for each subscriber {
	go subscriber.callbackFn()
}
```
This gives us greater performance and allows us to avoid the "slow client problem"
(when other subscribers have to wait for a slow subscriber). A pool of
goroutines can be used to avoid uncontrolled memory growth.
In certain cases, this is what you want. But in our case, because we need
strict ordering of events (if event A was published before B, the guaranteed
delivery order will be A -> B), we can't publish msg in a new goroutine every time.
We can also have a goroutine per subscriber, although we'd need to be careful
with the number of subscribers. It's more difficult to implement as well, and it's
unclear whether we'll benefit from it (because we'd be forced to create N additional
channels to distribute msg to these goroutines).
### Non-blocking send
There is also the question of whether we should have a non-blocking send.
Currently, sends are blocking, so publishing to one client can block on
publishing to another. This means a slow or unresponsive client can halt the
system. Instead, we can use a non-blocking send:
```go
for each subscriber {
	out := subscriber.outc
	select {
	case out <- msg:
	default:
		log("subscriber %v buffer is full, skipping...")
	}
}
```
This fixes the "slow client problem", but there is no way for a slow client to
know if it had missed a message. We could return a second channel and close it
to indicate subscription termination. On the other hand, if we're going to
stick with blocking send, **devs must always ensure subscriber's handling code
does not block**, which is a hard task to put on their shoulders.
The interim option is to run a pool of goroutines for a single message and wait for all
goroutines to finish. This solves the "slow client problem", but we'd still
have to wait `max(goroutine_X_time)` before we can publish the next message.
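A minimal sketch of that interim option, assuming one output channel per subscriber:
```go
// publish sends msg to every subscriber concurrently and waits for all
// sends to finish; total latency is the slowest subscriber's receive time.
func publish(msg interface{}, outs []chan interface{}) {
	var wg sync.WaitGroup
	for _, out := range outs {
		wg.Add(1)
		go func(out chan interface{}) {
			defer wg.Done()
			out <- msg
		}(out)
	}
	wg.Wait()
}
```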
### Channels vs Callbacks
Yet another question is whether we should use channels for message transmission or
call subscriber-defined callback functions. Callback functions give subscribers
more flexibility - you can use mutexes in there, channels, spawn goroutines,
anything you really want. But they also capture local scope, which can result in
memory leaks and/or increased memory usage.
Go channels are the de-facto standard for carrying data between goroutines.
### Why does `Subscribe()` accept an `out` channel?
Because in our tests, we create buffered channels (cap: 1). Alternatively, we
can make capacity an argument and return a channel.
## Decision
### MsgAndTags
Use a `MsgAndTags` struct on the subscription channel to indicate what tags the
msg matched.
```go
type MsgAndTags struct {
	Msg  interface{}
	Tags TagMap
}
```
### Subscription Struct
Change `Subscribe()` function to return a `Subscription` struct:
```go
type Subscription struct {
	// private fields
}

func (s *Subscription) Out() <-chan MsgAndTags
func (s *Subscription) Canceled() <-chan struct{}
func (s *Subscription) Err() error
```
`Out()` returns a channel onto which messages and tags are published.
`Unsubscribe`/`UnsubscribeAll` do not close the channel, to prevent clients from
receiving a nil message.
`Canceled()` returns a channel that's closed when the subscription is terminated;
it is supposed to be used in a select statement.
If the channel returned by `Canceled()` is not closed yet, `Err()` returns nil.
If the channel is closed, `Err()` returns a non-nil error explaining why:
`ErrUnsubscribed` if the subscriber chose to unsubscribe,
`ErrOutOfCapacity` if the subscriber is not pulling messages fast enough and the channel returned by `Out()` became full.
After `Err()` returns a non-nil error, successive calls to `Err()` return the same error.
```go
subscription, err := pubsub.Subscribe(...)
if err != nil {
	// ...
}
for {
	select {
	case msgAndTags := <-subscription.Out():
		// ...
	case <-subscription.Canceled():
		return subscription.Err()
	}
}
```
### Capacity and Subscriptions
Make the `Out()` channel buffered (with capacity 1) by default. In most cases, we want to
terminate the slow subscriber. Only in rare cases do we want to block the pubsub
(e.g. when debugging consensus). This should lower the chances of the pubsub
being frozen.
```go
// outCap can be used to set the capacity of the Out channel
// (1 by default, must be greater than 0).
Subscribe(ctx context.Context, clientID string, query Query, outCap ...int) (Subscription, error)
```
Use a different function for an unbuffered channel:
```go
// Subscription uses an unbuffered channel. Publishing will block.
SubscribeUnbuffered(ctx context.Context, clientID string, query Query) (Subscription, error)
```
`SubscribeUnbuffered` should not be exposed to users.
### Blocking/Nonblocking
The publisher should treat these kinds of channels separately.
It should block on unbuffered channels (for use with internal consensus events
in the consensus tests) and not block on the buffered ones. If a client is too
slow to keep up with its messages, its subscription is terminated:
```go
for each subscription {
	out := subscription.outChan
	if cap(out) == 0 {
		// block on unbuffered channel
		out <- msg
	} else {
		// don't block on buffered channels
		select {
		case out <- msg:
		default:
			// set the error, notify on the cancel chan
			subscription.err = fmt.Errorf("client is too slow for msg")
			close(subscription.cancelChan)
			// ... unsubscribe and close out
		}
	}
}
```
### How does this new design solve the current issues?
[#951] ([#1880]):
Because of the non-blocking send, a situation where we deadlock is no longer
possible. If the client stops reading messages, it will be removed.
[#1879]:
MsgAndTags is used now instead of a plain message.
### Future problems and their possible solutions
[#2826]
One question I am still pondering: how to prevent the pubsub from slowing
down consensus. We can increase the pubsub queue size (which is 0 now). Also,
it's probably a good idea to limit the total number of subscribers.
This can be done automatically. Say we set the queue size to 1000 and, when it's >=
80% full, refuse new subscriptions.
## Status
Implemented
## Consequences
### Positive
- more idiomatic interface
- subscribers know what tags the msg was published with
- subscribers aware of the reason their subscription was canceled
### Negative
- (since v1) no concurrency when it comes to publishing messages
### Neutral
[#951]: https://github.com/tendermint/tendermint/issues/951
[#1879]: https://github.com/tendermint/tendermint/issues/1879
[#1880]: https://github.com/tendermint/tendermint/issues/1880
[#2826]: https://github.com/tendermint/tendermint/issues/2826

View File

@@ -0,0 +1,72 @@
# ADR 034: PrivValidator file structure
## Changelog
03-11-2018: Initial Draft
## Context
For now, the PrivValidator file `priv_validator.json` contains both mutable and immutable parts.
Even in an insecure mode which does not encrypt the private key on disk, it is reasonable to separate
the mutable part from the immutable part.
References:
[#1181](https://github.com/tendermint/tendermint/issues/1181)
[#2657](https://github.com/tendermint/tendermint/issues/2657)
[#2313](https://github.com/tendermint/tendermint/issues/2313)
## Proposed Solution
We can split mutable and immutable parts with two structs:
```go
// FilePVKey stores the immutable part of PrivValidator.
type FilePVKey struct {
	Address types.Address  `json:"address"`
	PubKey  crypto.PubKey  `json:"pub_key"`
	PrivKey crypto.PrivKey `json:"priv_key"`

	filePath string
}

// FilePVLastSignState stores the mutable part of PrivValidator.
type FilePVLastSignState struct {
	Height    int64        `json:"height"`
	Round     int          `json:"round"`
	Step      int8         `json:"step"`
	Signature []byte       `json:"signature,omitempty"`
	SignBytes cmn.HexBytes `json:"signbytes,omitempty"`

	filePath string
	mtx      sync.Mutex
}
```
Then we can combine `FilePVKey` with `FilePVLastSignState` to get the original `FilePV`.
```go
type FilePV struct {
	Key           FilePVKey
	LastSignState FilePVLastSignState
}
}
```
As discussed, `FilePV` should be located in `config`, and `FilePVLastSignState` should be stored in `data`. The
store path of each file should be specified in `config.yml`.
What we need to do next is change the methods of `FilePV`.
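A hedged sketch of what loading could then look like (the function names are illustrative, not the final API):
```go
// LoadFilePV reads the immutable key part from the config dir and the
// mutable last-sign-state part from the data dir, then combines them.
func LoadFilePV(keyFilePath, stateFilePath string) (*FilePV, error) {
	key, err := loadFilePVKey(keyFilePath) // hypothetical: reads and unmarshals FilePVKey
	if err != nil {
		return nil, err
	}
	state, err := loadFilePVLastSignState(stateFilePath) // hypothetical: reads and unmarshals FilePVLastSignState
	if err != nil {
		return nil, err
	}
	return &FilePV{Key: key, LastSignState: state}, nil
}
```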
## Status
Implemented
## Consequences
### Positive
- separates the mutable and immutable parts of PrivValidator
### Negative
- need to add more config options for the file paths
### Neutral

View File

@@ -0,0 +1,40 @@
# ADR 035: Documentation
Author: @zramsay (Zach Ramsay)
## Changelog
### November 2nd 2018
- initial write-up
## Context
The Tendermint documentation has undergone several changes until settling on the current model. Originally, the documentation was hosted on the website and had to be updated asynchronously from the code. Along with the other repositories requiring documentation, the whole stack moved to using Read The Docs to automatically generate, publish, and host the documentation. This, however, was insufficient; the RTD site had advertisements, it wasn't easily accessible to devs, didn't collect metrics, was another set of external links, etc.
## Decision
The decision was made to use VuePress for two reasons:
1) the ability to get metrics (implemented on both Tendermint and SDK)
2) hosting the documentation on the website as a `/docs` endpoint.
This is done while maintaining synchrony between the docs and code, i.e., the website is built whenever the docs are updated.
## Status
The two points above have been implemented; the `config.js` has a Google Analytics identifier and the documentation workflow has been up and running, largely without problems, for several months. Details about the documentation build & workflow can be found [here](../DOCS_README.md).
## Consequences
Because of the organizational separation between Tendermint & Cosmos, there is a challenge of "what goes where" for certain aspects of documentation.
### Positive
This architecture is largely positive relative to prior docs arrangements.
### Negative
A significant portion of the docs automation / build process is in private repos with limited access/visibility to devs. However, these tasks are handled by the SRE team.
### Neutral

View File

@@ -0,0 +1,38 @@
# ADR 036: Empty Blocks via ABCI
## Changelog
- {date}: {changelog}
## Context
> This section contains all the context one needs to understand the current state, and why there is a problem. It should be as succinct as possible and introduce the high level idea behind the solution.
## Decision
> This section explains all of the details of the proposed solution, including implementation details.
> It should also describe affects / corollary items that may need to be changed as a part of this.
> If the proposed change will be large, please also indicate a way to do the change to maximize ease of review.
> (e.g. the optimal split of things to do between separate PR's)
## Status
> A decision may be "proposed" if it hasn't been agreed upon yet, or "accepted" once it is agreed upon. If a later ADR changes or reverses a decision, it may be marked as "deprecated" or "superseded" with a reference to its replacement.
{Deprecated|Proposed|Accepted|Declined}
## Consequences
> This section describes the consequences, after applying the decision. All consequences should be summarized here, not just the "positive" ones.
### Positive
### Negative
### Neutral
## References
> Are there any relevant PR comments, issues that led up to this, or articles referenced for why we made the given design choice? If so link them here!
- {reference link}

View File

@@ -0,0 +1,100 @@
# ADR 037: Deliver Block
Author: Daniil Lashin (@danil-lashin)
## Changelog
13-03-2019: Initial draft
## Context
Initial conversation: https://github.com/tendermint/tendermint/issues/2901
Some applications can handle transactions in parallel, or at least some
part of tx processing can be parallelized. Right now it is not possible for a developer
to execute txs in parallel because Tendermint delivers them sequentially.
## Decision
Tendermint currently has `BeginBlock`, `EndBlock`, `Commit`, and `DeliverTx` steps
while executing a block. This doc proposes merging these steps into one `DeliverBlock`
step. It will allow developers of applications to decide how they want to
execute transactions (in parallel or sequentially). It will also simplify and
speed up communication between the application and Tendermint.
As @jaekwon [mentioned](https://github.com/tendermint/tendermint/issues/2901#issuecomment-477746128)
in the discussion, not all applications will benefit from this solution. In some cases,
when an application handles transactions sequentially, it may slow down the blockchain,
because the application needs to wait until the full block is transmitted before it can start
processing it. Also, a complete change of ABCI would force all apps
to change their implementation completely. That's why I propose to introduce one more ABCI
type.
## Implementation Changes
In addition to the default application interface, which now has this structure:
```go
type Application interface {
	// Info and Mempool methods...

	// Consensus Connection
	InitChain(RequestInitChain) ResponseInitChain    // Initialize blockchain with validators and other info from TendermintCore
	BeginBlock(RequestBeginBlock) ResponseBeginBlock // Signals the beginning of a block
	DeliverTx(tx []byte) ResponseDeliverTx           // Deliver a tx for full processing
	EndBlock(RequestEndBlock) ResponseEndBlock       // Signals the end of a block, returns changes to the validator set
	Commit() ResponseCommit                          // Commit the state and return the application Merkle root hash
}
```
this doc proposes to add one more:
```go
type Application interface {
	// Info and Mempool methods...

	// Consensus Connection
	InitChain(RequestInitChain) ResponseInitChain          // Initialize blockchain with validators and other info from TendermintCore
	DeliverBlock(RequestDeliverBlock) ResponseDeliverBlock // Deliver the full block
	Commit() ResponseCommit                                // Commit the state and return the application Merkle root hash
}

type RequestDeliverBlock struct {
	Hash                []byte
	Header              Header
	Txs                 Txs
	LastCommitInfo      LastCommitInfo
	ByzantineValidators []Evidence
}

type ResponseDeliverBlock struct {
	ValidatorUpdates      []ValidatorUpdate
	ConsensusParamUpdates *ConsensusParams
	Tags                  []kv.Pair
	TxResults             []ResponseDeliverTx
}
```
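To make the parallelism benefit concrete, a hedged sketch of an application executing a block's txs in parallel inside `DeliverBlock` (`execTx` is an assumed app-side helper that must be safe to call concurrently):
```go
func (app *MyApp) DeliverBlock(req RequestDeliverBlock) ResponseDeliverBlock {
	results := make([]ResponseDeliverTx, len(req.Txs))
	var wg sync.WaitGroup
	for i, tx := range req.Txs {
		wg.Add(1)
		go func(i int, tx []byte) {
			defer wg.Done()
			results[i] = app.execTx(tx) // app-specific tx execution
		}(i, tx)
	}
	wg.Wait()
	return ResponseDeliverBlock{TxResults: results}
}
```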
Also, we will need to add a new config param which specifies what kind of ABCI the application uses.
For example, it can be `abci_type`. Then we will have 2 types:
- `advanced` - the current ABCI
- `simple` - the proposed implementation
## Status
In review
## Consequences
### Positive
- much simpler introduction and tutorials for new developers (instead of implementing 5 methods they
will need to implement only 3)
- txs can be handled in parallel
- simpler interface
- faster communication between Tendermint and the application
### Negative
- Tendermint should now support 2 kinds of ABCI

View File

@@ -0,0 +1,38 @@
# ADR 038: Non-zero start height
## Changelog
- {date}: {changelog}
## Context
> This section contains all the context one needs to understand the current state, and why there is a problem. It should be as succinct as possible and introduce the high level idea behind the solution.
## Decision
> This section explains all of the details of the proposed solution, including implementation details.
> It should also describe affects / corollary items that may need to be changed as a part of this.
> If the proposed change will be large, please also indicate a way to do the change to maximize ease of review.
> (e.g. the optimal split of things to do between separate PR's)
## Status
> A decision may be "proposed" if it hasn't been agreed upon yet, or "accepted" once it is agreed upon. If a later ADR changes or reverses a decision, it may be marked as "deprecated" or "superseded" with a reference to its replacement.
{Deprecated|Proposed|Accepted|Declined}
## Consequences
> This section describes the consequences, after applying the decision. All consequences should be summarized here, not just the "positive" ones.
### Positive
### Negative
### Neutral
## References
> Are there any relevant PR comments, issues that led up to this, or articles referenced for why we made the given design choice? If so link them here!
- {reference link}

View File

@@ -0,0 +1,159 @@
# ADR 039: Peer Behaviour Interface
## Changelog
* 07-03-2019: Initial draft
* 14-03-2019: Updates from feedback
## Context
The responsibility for signaling and acting upon peer behaviour lacks a single
owning component and is heavily coupled with the network stack[<sup>1</sup>](#references). Reactors
maintain a reference to the `p2p.Switch` which they use to call
`switch.StopPeerForError(...)` when a peer misbehaves and
`switch.MarkAsGood(...)` when a peer contributes in some meaningful way.
While the switch handles `StopPeerForError` internally, the `MarkAsGood`
method delegates to another component, `p2p.AddrBook`. This scheme of delegation
across the Switch obscures the responsibility for handling peer behaviour
and ties up the reactors in a larger dependency graph when testing.
## Decision
Introduce a `PeerBehaviour` interface and concrete implementations which
provide methods for reactors to signal peer behaviour without directly
coupling to `p2p.Switch`. Introduce an `ErrorBehaviourPeer` type to provide
concrete reasons for stopping peers, and a `GoodBehaviourPeer` type to provide
concrete ways in which a peer contributes.
### Implementation Changes
PeerBehaviour then becomes an interface for signaling peer errors as well
as for marking peers as `good`.
```go
type PeerBehaviour interface {
	Behaved(peer Peer, reason GoodBehaviourPeer)
	Errored(peer Peer, reason ErrorBehaviourPeer)
}
```
Instead of signaling peers to stop with arbitrary reasons:
`reason interface{}`
We introduce a concrete error type `ErrorBehaviourPeer`:
```go
type ErrorBehaviourPeer int

const (
	ErrorBehaviourUnknown ErrorBehaviourPeer = iota
	ErrorBehaviourBadMessage
	ErrorBehaviourMessageOutOfOrder
	...
)
```
To provide additional information on the ways a peer contributed, we introduce
the GoodBehaviourPeer type.
```go
type GoodBehaviourPeer int

const (
	GoodBehaviourVote GoodBehaviourPeer = iota
	GoodBehaviourBlockPart
	...
)
```
As a first iteration we provide a concrete implementation which wraps
the switch:
```go
type SwitchedPeerBehaviour struct {
	sw *Switch
}

func (spb *SwitchedPeerBehaviour) Errored(peer Peer, reason ErrorBehaviourPeer) {
	spb.sw.StopPeerForError(peer, reason)
}

func (spb *SwitchedPeerBehaviour) Behaved(peer Peer, reason GoodBehaviourPeer) {
	spb.sw.MarkPeerAsGood(peer)
}

func NewSwitchedPeerBehaviour(sw *Switch) *SwitchedPeerBehaviour {
	return &SwitchedPeerBehaviour{
		sw: sw,
	}
}
```
Reactors, which are often difficult to unit test[<sup>2</sup>](#references), could use an implementation which exposes the signals produced by the reactor in
manufactured scenarios:
```go
type ErrorBehaviours map[Peer][]ErrorBehaviourPeer
type GoodBehaviours map[Peer][]GoodBehaviourPeer

type StorePeerBehaviour struct {
	eb ErrorBehaviours
	gb GoodBehaviours
}

func NewStorePeerBehaviour() *StorePeerBehaviour {
	return &StorePeerBehaviour{
		eb: make(ErrorBehaviours),
		gb: make(GoodBehaviours),
	}
}

func (spb *StorePeerBehaviour) Errored(peer Peer, reason ErrorBehaviourPeer) {
	if _, ok := spb.eb[peer]; !ok {
		spb.eb[peer] = []ErrorBehaviourPeer{reason}
	} else {
		spb.eb[peer] = append(spb.eb[peer], reason)
	}
}

func (spb *StorePeerBehaviour) GetErrored() ErrorBehaviours {
	return spb.eb
}

func (spb *StorePeerBehaviour) Behaved(peer Peer, reason GoodBehaviourPeer) {
	if _, ok := spb.gb[peer]; !ok {
		spb.gb[peer] = []GoodBehaviourPeer{reason}
	} else {
		spb.gb[peer] = append(spb.gb[peer], reason)
	}
}

func (spb *StorePeerBehaviour) GetBehaved() GoodBehaviours {
	return spb.gb
}
```
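A hedged sketch of how a test could assert on the recorded signals (`newTestPeer` is an illustrative stand-in for any value satisfying the `Peer` interface):
```go
func TestStorePeerBehaviour(t *testing.T) {
	pb := NewStorePeerBehaviour()
	peer := newTestPeer() // illustrative helper

	// The component under test would normally emit these signals.
	pb.Errored(peer, ErrorBehaviourBadMessage)
	pb.Behaved(peer, GoodBehaviourVote)

	if got := pb.GetErrored()[peer]; len(got) != 1 || got[0] != ErrorBehaviourBadMessage {
		t.Fatalf("expected [ErrorBehaviourBadMessage], got %v", got)
	}
	if got := pb.GetBehaved()[peer]; len(got) != 1 || got[0] != GoodBehaviourVote {
		t.Fatalf("expected [GoodBehaviourVote], got %v", got)
	}
}
```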
## Status
Accepted
## Consequences
### Positive
* De-couple signaling from acting upon peer behaviour.
* Reduce the coupling between reactors, the Switch, and the network
stack
* The responsibility of managing peer behaviour can be migrated to
a single component instead of split between the switch and the
address book.
### Negative
* The first iteration will simply wrap the Switch and introduce a
level of indirection.
### Neutral
## References
1. Issue [#2067](https://github.com/tendermint/tendermint/issues/2067): P2P Refactor
2. PR: [#3506](https://github.com/tendermint/tendermint/pull/3506): ADR 036: Blockchain Reactor Refactor

View File

@@ -0,0 +1,534 @@
# ADR 040: Blockchain Reactor Refactor
## Changelog
19-03-2019: Initial draft
## Context
The Blockchain Reactor's high level responsibility is to enable peers who are far behind the current state of the
blockchain to quickly catch up by downloading many blocks in parallel from its peers, verifying block correctness, and
executing them against the ABCI application. We call the protocol executed by the Blockchain Reactor `fast-sync`.
The current architecture diagram of the blockchain reactor can be found here:
![Blockchain Reactor Architecture Diagram](img/bc-reactor.png)
The current architecture consists of dozens of routines and is tightly coupled to the `Switch`, making writing
unit tests almost impossible. Current tests require setting up complex dependency graphs and dealing with concurrency.
Note that having dozens of routines is in this case overkill, as most of the time routines sit idle waiting for
something to happen (a message to arrive or a timeout to expire). Due to the dependency on the `Switch`, testing relatively
complex network scenarios and failures (for example adding and removing peers) is a very complex task and frequently leads
to complex tests with non-deterministic behavior ([#3400]). The inability to write proper tests lowers confidence in
the code, and this has resulted in several issues (some have been fixed in the meantime and some are still open):
[#3400], [#2897], [#2896], [#2699], [#2888], [#2457], [#2622], [#2026].
## Decision
To remedy these issues we plan a major refactor of the blockchain reactor. The proposed architecture is largely inspired
by ADR-30 and is presented on the following diagram:
![Blockchain Reactor Refactor Diagram](img/bc-reactor-refactor.png)
We suggest a concurrency architecture where the core algorithm (we call it `Controller`) is extracted into a finite
state machine. The active routine of the reactor is called `Executor` and is responsible for receiving and sending
messages from/to peers and triggering timeouts. What messages should be sent and timeouts triggered is determined mostly
by the `Controller`. The exception is the `Peer Heartbeat` mechanism, which is the `Executor`'s responsibility. The heartbeat
mechanism is used to remove slow and unresponsive peers from the peer list. Writing unit tests is simpler with
this architecture as most of the critical logic is part of the `Controller` function. We expect that the simpler concurrency
architecture will not have a significant negative effect on the performance of this reactor (to be confirmed by
experimental evaluation).
### Implementation changes
We assume the following system model for "fast sync" protocol:
* a node is connected to a random subset of all nodes that represents its peer set. Some nodes are correct and some
might be faulty. We don't make assumptions about the ratio of faulty nodes, i.e., it is possible that all nodes in some
peer set are faulty.
* we assume that communication between correct nodes is synchronous, i.e., if a correct node `p` sends a message `m` to
a correct node `q` at time `t`, then `q` will receive the message at the latest at time `t+Delta`, where `Delta` is a system
parameter that is known by network participants. `Delta` is normally chosen to be an order of magnitude higher than
the real (maximum) communication delay between correct nodes. Therefore, if a correct node `p` sends a request message
to a correct node `q` at time `t` and there is no corresponding reply by time `t + 2*Delta`, then `p` can assume
that `q` is faulty. Note that the network assumptions for the consensus reactor are different (we assume a partially
synchronous model there).
The requirements for the "fast sync" protocol are formally specified as follows:
- `Correctness`: If a correct node `p` is connected to a correct node `q` for a long enough period of time, then `p`
will eventually download all requested blocks from `q`.
- `Termination`: If the set of peers of a correct node `p` is stable (no new nodes are added to the peer set of `p`) for
a long enough period of time, then the protocol eventually terminates.
- `Fairness`: A correct node `p` sends requests for blocks to all peers from its peer set.
As explained above, the `Executor` is responsible for sending and receiving messages that are part of the `fast-sync`
protocol. The following messages are exchanged as part of `fast-sync` protocol:
``` go
type Message int
const (
MessageUnknown Message = iota
MessageStatusRequest
MessageStatusResponse
MessageBlockRequest
MessageBlockResponse
)
```
`MessageStatusRequest` is sent periodically to all peers as a request for a peer to provide its current height. It is
part of the `Peer Heartbeat` mechanism, and a failure to respond to this message in a timely manner results in a peer being removed
from the peer set. Note that the `Peer Heartbeat` mechanism is used only while a peer is in `fast-sync` mode. We assume
here the existence of a mechanism that gives a node the possibility to inform its peers that it is in `fast-sync` mode.
``` go
type MessageStatusRequest struct {
	SeqNum int64 // sequence number of the request
}
```
`MessageStatusResponse` is sent as a response to `MessageStatusRequest` to inform the requester about the peer's current
height.
``` go
type MessageStatusResponse struct {
	SeqNum int64 // sequence number of the corresponding request
	Height int64 // current peer height
}
```
`MessageBlockRequest` is used to make a request for a block and the corresponding commit certificate at a given height.
``` go
type MessageBlockRequest struct {
	Height int64
}
```
`MessageBlockResponse` is a response to the corresponding block request. In addition to providing the block and the
corresponding commit certificate, it also contains the current peer height.
``` go
type MessageBlockResponse struct {
	Height     int64
	Block      Block
	Commit     Commit
	PeerHeight int64
}
```
In addition to sending and receiving messages and running the `HeartBeat` mechanism, the `Executor` also manages timeouts
that are triggered upon `Controller` request. The `Controller` is then informed once a timeout expires.
``` go
type TimeoutTrigger int

const (
	TimeoutUnknown TimeoutTrigger = iota
	TimeoutResponseTrigger
	TimeoutTerminationTrigger
)
```
The `Controller` can be modelled as a function with clearly defined inputs:
* `State` - the current state of the node. Contains data about connected peers and their behavior, pending requests,
received blocks, etc.
* `Event` - significant events in the network.
producing clear outputs:
* `State` - the updated state of the node,
* `MessageToSend` - a signal of what message to send and to which peer,
* `TimeoutTrigger` - a signal that a timeout should be triggered.
We consider the following `Event` types:
``` go
type Event int

const (
	EventUnknown Event = iota
	EventStatusResponse
	EventBlockRequest
	EventBlockResponse
	EventRemovePeer
	EventTimeoutResponse
	EventTimeoutTermination
)
```
`EventStatusResponse` event is generated once `MessageStatusResponse` is received by the `Executor`.
``` go
type EventStatusResponse struct {
	PeerID ID
	Height int64
}
```
`EventBlockRequest` event is generated once `MessageBlockRequest` is received by the `Executor`.
``` go
type EventBlockRequest struct {
	Height int64
	PeerID p2p.ID
}
```
`EventBlockResponse` event is generated upon reception of `MessageBlockResponse` message by the `Executor`.
``` go
type EventBlockResponse struct {
	Height     int64
	Block      Block
	Commit     Commit
	PeerID     ID
	PeerHeight int64
}
```
`EventRemovePeer` is generated by `Executor` to signal that the connection to a peer is closed due to peer misbehavior.
``` go
type EventRemovePeer struct {
	PeerID ID
}
```
`EventTimeoutResponse` is generated by `Executor` to signal that a timeout triggered by `TimeoutResponseTrigger` has
expired.
``` go
type EventTimeoutResponse struct {
	PeerID ID
	Height int64
}
```
`EventTimeoutTermination` is generated by `Executor` to signal that a timeout triggered by `TimeoutTerminationTrigger`
has expired.
``` go
type EventTimeoutTermination struct {
	Height int64
}
```
`MessageToSend` is just a wrapper around the `Message` type that contains the ID of the peer to which the message should be sent.
``` go
type MessageToSend struct {
	PeerID  ID
	Message Message
}
```
The Controller state machine can be in two modes: `ModeFastSync`, when
a node is trying to catch up with the network by downloading committed blocks,
and `ModeConsensus`, in which it executes the Tendermint consensus protocol. We
consider that `fast sync` mode terminates once the Controller switches to
`ModeConsensus`.
``` go
type Mode int

const (
	ModeUnknown Mode = iota
	ModeFastSync
	ModeConsensus
)
```
The `Controller` manages the following state:
``` go
type ControllerState struct {
	Height             int64            // the first block that is not committed
	Mode               Mode             // mode of operation
	PeerMap            map[ID]PeerStats // map of peer IDs to peer statistics
	MaxRequestPending  int64            // maximum height of the pending requests
	FailedRequests     []int64          // list of failed block requests
	PendingRequestsNum int              // total number of pending requests
	Store              []BlockInfo      // contains a list of downloaded blocks
	Executor           BlockExecutor    // stores, verifies and executes blocks
}
```
The `PeerStats` data structure keeps, for every peer, its current height and the pending block request, if any (at most one request per peer).
``` go
type PeerStats struct {
	Height         int64
	PendingRequest int64 // height of the block requested from this peer, or -1
}
```
The `BlockInfo` data structure is used to store information (as part of the block store) about downloaded blocks: from which peer
a block and the corresponding commit certificate were received.
``` go
type BlockInfo struct {
	Block  Block
	Commit Commit
	PeerID ID // the peer from which we received the corresponding Block and Commit
}
```
The `Controller` is initialized by providing an initial height (`startHeight`) from which it will start downloading
blocks from peers and the current state of the `BlockExecutor`.
``` go
func NewControllerState(startHeight int64, executor BlockExecutor) ControllerState {
	state = ControllerState{}
	state.Height = startHeight
	state.Mode = ModeFastSync
	state.MaxRequestPending = startHeight - 1
	state.PendingRequestsNum = 0
	state.Executor = executor
	// initialize state.PeerMap, state.FailedRequests and state.Store to empty data structures
	return state
}
```
The core protocol logic is given with the following function:
``` go
func handleEvent(state ControllerState, event Event) (ControllerState, MessageToSend, TimeoutTrigger, Error) {
	msg = nil
	timeout = nil
	error = nil

	switch state.Mode {
	case ModeConsensus:
		switch event := event.(type) {
		case EventBlockRequest:
			msg = createBlockResponseMessage(state, event)
			return state, msg, timeout, error
		default:
			error = "Only respond to BlockRequests while in ModeConsensus!"
			return state, msg, timeout, error
		}

	case ModeFastSync:
		switch event := event.(type) {
		case EventBlockRequest:
			msg = createBlockResponseMessage(state, event)
			return state, msg, timeout, error
		case EventStatusResponse:
			return handleEventStatusResponse(event, state)
		case EventRemovePeer:
			return handleEventRemovePeer(event, state)
		case EventBlockResponse:
			return handleEventBlockResponse(event, state)
		case EventTimeoutResponse:
			return handleEventResponseTimeout(event, state)
		case EventTimeoutTermination:
			// The termination timeout is triggered in case of an empty peer set or when there are no pending requests.
			// If this timeout expires, and in the meantime no new peers were added and no new pending requests were
			// made, then `fast-sync` mode terminates by switching to `ModeConsensus`.
			// Note that the termination timeout should be higher than the response timeout.
			if state.Height == event.Height && state.PendingRequestsNum == 0 {
				state.Mode = ModeConsensus
			}
			return state, msg, timeout, error
		default:
			error = "Received unknown event type!"
			return state, msg, timeout, error
		}
	}
}
```
``` go
func createBlockResponseMessage(state ControllerState, event EventBlockRequest) MessageToSend {
	msgToSend = nil
	if _, ok := state.PeerMap[event.PeerID]; !ok {
		peerStats = PeerStats{-1, -1}
	} else {
		peerStats = state.PeerMap[event.PeerID]
	}
	if state.Executor.ContainsBlockWithHeight(event.Height) && event.Height > peerStats.Height {
		peerStats.Height = event.Height
		msg = BlockResponseMessage{
			Height:        event.Height,
			Block:         state.Executor.getBlock(event.Height),
			Commit:        state.Executor.getCommit(event.Height),
			PeerID:        event.PeerID,
			CurrentHeight: state.Height - 1,
		}
		msgToSend = MessageToSend{event.PeerID, msg}
	}
	state.PeerMap[event.PeerID] = peerStats
	return msgToSend
}
```
``` go
func handleEventStatusResponse(event EventStatusResponse, state ControllerState) (ControllerState, MessageToSend, TimeoutTrigger, Error) {
	if _, ok := state.PeerMap[event.PeerID]; !ok {
		peerStats = PeerStats{-1, -1}
	} else {
		peerStats = state.PeerMap[event.PeerID]
	}
	if event.Height > peerStats.Height {
		peerStats.Height = event.Height
	}
	// If there is no pending request for this peer, try to send it a request for a block.
	if peerStats.PendingRequest == -1 {
		msg = createBlockRequestMessage(state, event.PeerID, peerStats.Height)
		// msg is nil if no request for a block can be made to this peer at this point in time.
		if msg != nil {
			peerStats.PendingRequest = msg.Height
			state.PendingRequestsNum++
			// When a request for a block is sent to a peer, a response timeout is triggered. If no corresponding block
			// is sent by the peer during the response timeout period, then the peer is considered faulty and is
			// removed from the peer set.
			timeout = ResponseTimeoutTrigger{msg.PeerID, msg.Height, PeerTimeout}
		} else if state.PendingRequestsNum == 0 {
			// If there are no pending requests and no new request can be placed to the peer, the termination timeout
			// is triggered. If the termination timeout expires and we are still at the same height with no pending
			// requests, the "fast-sync" mode is finished and we switch to `ModeConsensus`.
			timeout = TerminationTimeoutTrigger{state.Height, TerminationTimeout}
		}
	}
	state.PeerMap[event.PeerID] = peerStats
	return state, msg, timeout, error
}
```
``` go
func handleEventRemovePeer(event EventRemovePeer, state ControllerState) (ControllerState, MessageToSend, TimeoutTrigger, Error) {
	if _, ok := state.PeerMap[event.PeerID]; ok {
		pendingRequest = state.PeerMap[event.PeerID].PendingRequest
		// If a peer is removed from the peer set, its pending request is declared failed and added to the
		// `FailedRequests` list so it can be retried.
		if pendingRequest != -1 {
			add(state.FailedRequests, pendingRequest)
			state.PendingRequestsNum--
		}
		delete(state.PeerMap, event.PeerID)
		// If the peer set is empty after removal of this peer, then the termination timeout is triggered.
		if state.PeerMap.isEmpty() {
			timeout = TerminationTimeoutTrigger{state.Height, TerminationTimeout}
		}
	} else {
		error = "Removing unknown peer!"
	}
	return state, msg, timeout, error
}
``` go
func handleEventBlockResponse(event EventBlockResponse, state ControllerState) (ControllerState, MessageToSend, TimeoutTrigger, Error)
if state.PeerMap[event.PeerID] {
peerStats = state.PeerMap[event.PeerID]
// when expected block arrives from a peer, it is added to the store so it can be verified and if correct executed after.
if peerStats.PendingRequest == event.Height {
peerStats.PendingRequest = -1
state.PendingRequestsNum--
if event.PeerHeight > peerStats.Height { peerStats.Height = event.PeerHeight }
state.Store[event.Height] = BlockInfo{ event.Block, event.Commit, event.PeerID }
// blocks are verified sequentially so adding a block to the store does not mean that it will be immediately verified
// as some of the previous blocks might be missing.
state = verifyBlocks(state) // it can lead to event.PeerID being removed from peer list
if _, ok := state.PeerMap[event.PeerID]; ok {
				// we try to identify a new block request that can be sent to the peer
				msg = createBlockRequestMessage(state, event.PeerID, peerStats.Height)
				if msg != nil {
					peerStats.PendingRequest = msg.Height
state.PendingRequestsNum++
// if request for block is made, response timeout is triggered
timeout = ResponseTimeoutTrigger{ msg.PeerID, msg.Height, PeerTimeout }
} else if state.PeerMap.isEmpty() || state.PendingRequestsNum == 0 {
// if the peer map is empty (the peer can be removed as block verification failed) or there are no pending requests
// termination timeout is triggered.
timeout = TerminationTimeoutTrigger{ state.Height, TerminationTimeout }
}
}
		} else { error = "Received Block from wrong peer!" }
		// the peer might have been removed by verifyBlocks, so update its stats only if it is still present
		if _, ok := state.PeerMap[event.PeerID]; ok { state.PeerMap[event.PeerID] = peerStats }
	} else { error = "Received Block from unknown peer!" }
return state, msg, timeout, error
}
```
``` go
func handleEventResponseTimeout(event EventResponseTimeout, state ControllerState) (ControllerState, MessageToSend, TimeoutTrigger, Error) {
if _, ok := state.PeerMap[event.PeerID]; ok {
peerStats = state.PeerMap[event.PeerID]
// if a response timeout expires and the peer hasn't delivered the block, the peer is removed from the peer list and
// the request is added to the `FailedRequests` so the block can be downloaded from other peer
if peerStats.PendingRequest == event.Height {
			add(state.FailedRequests, peerStats.PendingRequest)
delete(state.PeerMap, event.PeerID)
state.PendingRequestsNum--
// if peer set is empty, then termination timeout is triggered
if state.PeerMap.isEmpty() {
				timeout = TerminationTimeoutTrigger{ state.Height, TerminationTimeout }
}
}
}
return state, msg, timeout, error
}
```
``` go
func createBlockRequestMessage(state ControllerState, peerID ID, peerHeight int64) MessageToSend {
msg = nil
blockHeight = -1
	r = find request in state.FailedRequests such that r <= peerHeight // returns `nil` if there is no such request
	// if there is a height in failed requests that can be downloaded from the peer, send a request for it
	if r != nil {
		blockHeight = r
		delete(state.FailedRequests, r)
	} else if state.MaxRequestPending < peerHeight {
		// if the height of the maximum pending request is smaller than the peer height, then ask the peer for the next block
		state.MaxRequestPending++
		blockHeight = state.MaxRequestPending // increment state.MaxRequestPending and then use the new value
	}
	if blockHeight > -1 { msg = MessageToSend{ peerID, MessageBlockRequest{ blockHeight } } }
return msg
}
```
``` go
func verifyBlocks(state State) State {
	done = false
	for !done {
		block = state.Store[state.Height]
		if block != nil {
			verified = verify block.Block using block.Commit // returns `true` if verification succeeds, `false` otherwise
			if verified {
				block.Execute() // executing a block is a costly operation so it might make sense to execute it asynchronously
				state.Height++
			} else {
				// if block verification failed, then the height is added to `FailedRequests` and the peer is removed from the peer set
				add(state.FailedRequests, state.Height)
				state.Store[state.Height] = nil
				if _, ok := state.PeerMap[block.PeerID]; ok {
					pendingRequest = state.PeerMap[block.PeerID].PendingRequest
					// if there is a pending request sent to the peer that is about to be removed from the peer set, add it to `FailedRequests`
					if pendingRequest != -1 {
						add(state.FailedRequests, pendingRequest)
						state.PendingRequestsNum--
					}
					delete(state.PeerMap, block.PeerID)
				}
				done = true
			}
		} else { done = true }
	}
	return state
}
```
In the proposed architecture, the `Controller` is not an active task, i.e., it is called by the `Executor`. Depending on
the values returned by the `Controller`, the `Executor` will send a message to some peer (`msg != nil`), trigger a
timeout (`timeout != nil`) or deal with errors (`error != nil`).
In case a timeout is triggered, the `Executor` will provide the corresponding timeout event as an input to the
`Controller` once the timeout expires.
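As an illustration, a minimal sketch of such an `Executor` loop follows; the names `Event`, `handleEvent`, `send` and `schedule` are assumptions for illustration, not the actual implementation.
``` go
// Hypothetical sketch of the Executor loop driving the passive Controller.
func executorLoop(events chan Event, state ControllerState) {
	for event := range events {
		var msg MessageToSend
		var timeout TimeoutTrigger
		var err Error
		state, msg, timeout, err = handleEvent(event, state)
		if err != nil {
			// deal with the error, e.g., log it
		}
		if msg != nil {
			send(msg) // deliver the message to msg.PeerID over the p2p layer
		}
		if timeout != nil {
			// once the timer expires, the corresponding timeout event is fed
			// back into this loop as an input to the Controller
			schedule(timeout, events)
		}
	}
}
```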
## Status
Draft.
## Consequences
### Positive
- isolated implementation of the algorithm
- improved testability - simpler to prove correctness
- clearer separation of concerns - easier to reason
### Negative
### Neutral


@@ -0,0 +1,29 @@
# ADR 041: Application should be in charge of validator set
## Changelog
## Context
Currently, Tendermint is in charge of the validator set and proposer selection. The application can only update the validator set at EndBlock time.
To support light clients, the application should make sure at least 2/3 of the validators are the same at each round.
The application should have full control over validator set changes and proposer selection. In each round, the application can provide an ordered list of validators for the next rounds, together with their voting power. The proposer is the first in the list; in case the proposer is offline, the next one can propose, and so on.
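As a purely hypothetical sketch of the data the application could hand over (the type and field names below are illustrative assumptions, not an ABCI type):
```go
// Ordered validator list the application could provide each round.
// The first entry is the proposer; if it is offline, the next entry
// proposes, and so on.
type ValidatorWithPower struct {
	PubKey []byte // the validator's public key
	Power  int64  // its voting power
}

type NextRoundsValidators struct {
	Validators []ValidatorWithPower
}
```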
## Decision
## Status
## Consequences
Tendermint is no longer in charge of the validator set and its changes. The application should provide the correct information.
However, Tendermint can provide a pseudo-randomness algorithm to help the application select the proposer in each round.
### Positive
### Negative
### Neutral
## References


@@ -0,0 +1,235 @@
# ADR 042: State Sync Design
## Changelog
2019-06-27: Init by EB
2019-07-04: Follow up by brapse
## Context
StateSync is a feature which would allow a new node to receive a
snapshot of the application state without downloading blocks or going
through consensus. Once downloaded, the node could switch to FastSync
and eventually participate in consensus. The goal of StateSync is to
facilitate setting up a new node as quickly as possible.
## Considerations
Because Tendermint doesn't know anything about the application state,
StateSync will broker messages between nodes and through
the ABCI to an opaque application. The implementation will have multiple
touch points on both the Tendermint code base and the ABCI application.
* A StateSync reactor to facilitate peer communication - Tendermint
* A Set of ABCI messages to transmit application state to the reactor - Tendermint
* A Set of MultiStore APIs for exposing snapshot data to the ABCI - ABCI application
* A Storage format with validation and performance considerations - ABCI application
### Implementation Properties
Beyond the approach, any implementation of StateSync can be evaluated
across different criteria:
* Speed: Expected throughput of producing and consuming snapshots
* Safety: Cost of pushing invalid snapshots to a node
* Liveness: Cost of preventing a node from receiving/constructing a snapshot
* Effort: How much effort does an implementation require
### Implementation Questions
* What is the format of a snapshot
* Complete snapshot
* Ordered IAVL key ranges
* Individually compressed chunks which can be validated
* How is data validated
* Trust a peer with its data blindly
* Trust a majority of peers
* Use light client validation to validate each chunk against consensus
produced merkle tree root
* What are the performance characteristics
* Random vs sequential reads
* How parallelizable is the scheduling algorithm
### Proposals
Broadly speaking, there are two approaches to this problem which have had
varying degrees of discussion and progress. These approaches can be
summarized as:
**Lazy:** Where snapshots are produced dynamically at request time. This
solution would use the existing data structure.
**Eager:** Where snapshots are produced periodically and served from disk at
request time. This solution would create an auxiliary data structure
optimized for batch read/writes.
Additionally, the proposals tend to vary in how they provide safety
properties.
**LightClient** Where a client can acquire the merkle root from the block
headers synchronized from a trusted validator set. Subsets of the application state,
called chunks can therefore be validated on receipt to ensure each chunk
is part of the merkle root.
**Majority of Peers** Where manifests of chunks along with checksums are
downloaded and compared against versions provided by a majority of
peers.
#### Lazy StateSync
An initial specification was published by Alexis Sellier.
In this design, the state has a given `size` of primitive elements (like
keys or nodes), each element is assigned a number from 0 to `size-1`,
and chunks consist of a range of such elements. Ackratos raised
[some concerns](https://docs.google.com/document/d/1npGTAa1qxe8EQZ1wG0a0Sip9t5oX2vYZNUDwr_LVRR4/edit)
about this design, somewhat specific to the IAVL tree, and mainly concerning
performance of random reads and of iterating through the tree to determine element numbers
(ie. elements aren't indexed by the element number).
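To make the numbering scheme concrete, here is a minimal sketch (all names, and the fixed `chunkSize`, are assumptions for illustration) of how such element ranges could be carved into chunks:
```go
// ChunkRange covers a contiguous range of element numbers.
type ChunkRange struct {
	First, Last int64 // inclusive element numbers covered by this chunk
}

// chunkRanges splits elements 0..size-1 into fixed-size chunks.
func chunkRanges(size, chunkSize int64) []ChunkRange {
	var ranges []ChunkRange
	for first := int64(0); first < size; first += chunkSize {
		last := first + chunkSize - 1
		if last >= size {
			last = size - 1
		}
		ranges = append(ranges, ChunkRange{First: first, Last: last})
	}
	return ranges
}
```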
An alternative design was suggested by Jae Kwon in
[#3639](https://github.com/tendermint/tendermint/issues/3639) where chunking
happens lazily and in a dynamic way: nodes request key ranges from their peers,
and peers respond with some subset of the
requested range and with notes on how to request the rest in parallel from other
peers. Unlike chunk numbers, keys can be verified directly. And if some keys in the
range are omitted, proofs for the range will fail to verify.
This way a node can start by requesting the entire tree from one peer,
and that peer can respond with say the first few keys, and the ranges to request
from other peers.
Additionally, per-chunk validation tends to come more naturally to the
Lazy approach since it tends to use the existing structure of the tree
(ie. keys or nodes) rather than state-sync specific chunks. Such a
design for Tendermint was originally tracked in
[#828](https://github.com/tendermint/tendermint/issues/828).
#### Eager StateSync
Warp Sync, as implemented in OpenEthereum, rapidly downloads
both blocks and state snapshots from peers. Data is carved into ~4MB
chunks and snappy compressed. Hashes of snappy compressed chunks are stored in a
manifest file which co-ordinates the state-sync. Obtaining a correct manifest
file seems to require an honest majority of peers. This means you may not find
out the state is incorrect until you download the whole thing and compare it
with a verified block header.
A similar solution was implemented by Binance in
[#3594](https://github.com/tendermint/tendermint/pull/3594)
based on their initial implementation in
[PR #3243](https://github.com/tendermint/tendermint/pull/3243)
and [some learnings](https://docs.google.com/document/d/1npGTAa1qxe8EQZ1wG0a0Sip9t5oX2vYZNUDwr_LVRR4/edit).
Note this still requires the honest majority peer assumption.
As an eager protocol, warp-sync can efficiently compress larger, more
predictable chunks once per snapshot and service many new peers. By
comparison, lazy chunkers would have to compress each chunk at request
time.
### Analysis of Lazy vs Eager
Lazy and Eager have more in common than they differ. Both require
reactors on the Tendermint side, a set of ABCI messages and a method for
serializing/deserializing snapshots facilitated by a SnapshotFormat.
The biggest difference between Lazy and Eager proposals is in the
read/write patterns necessitated by serving a snapshot chunk.
Specifically, Lazy State Sync performs random reads to the underlying data
structure while Eager can optimize for sequential reads.
This distinction between approaches was demonstrated by Binance's
[ackratos](https://github.com/ackratos) in their implementation of [Lazy
State sync](https://github.com/tendermint/tendermint/pull/3243), the
[analysis](https://docs.google.com/document/d/1npGTAa1qxe8EQZ1wG0a0Sip9t5oX2vYZNUDwr_LVRR4/)
of the performance, and follow up implementation of [Warp
Sync](http://github.com/tendermint/tendermint/pull/3594).
#### Comparing Security Models
There are several different security models which have been
discussed/proposed in the past but generally fall into two categories.
Light client validation: In which the node receiving data is expected to
first perform a light client sync and have all the necessary block
headers. With a trusted block header (trusted in the sense of coming from a
validator set subject to [weak
subjectivity](https://github.com/tendermint/tendermint/pull/3795)), the node
can compare any subset of keys, called a chunk, against the merkle root.
The advantage of light client validation is that the block headers are
signed by validators which have something to lose for malicious
behaviour. If a validator were to provide an invalid proof, they can be
slashed.
Majority of peer validation: A manifest file containing a list of chunks
along with checksums of each chunk is downloaded from a
trusted source. That source can be a community resource similar to
[sum.golang.org](https://sum.golang.org) or downloaded from the majority
of peers. One disadvantage of the majority-of-peers security model is the
vulnerability to eclipse attacks, in which a malicious user looks to
saturate a target node's peer list and produce a manufactured picture of
a majority.
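As a rough sketch of this model (field and function names are assumptions, using the standard `crypto/sha256` and `bytes` packages), each downloaded chunk would be checked against the manifest entry agreed on by the majority:
```go
// Manifest lists the expected checksum of every chunk of a snapshot.
type Manifest struct {
	Height      int64    // height the snapshot was taken at
	ChunkHashes [][]byte // checksum of each chunk, in chunk order
}

// chunkValid accepts a chunk only if its hash matches the manifest entry.
func chunkValid(m Manifest, index int, chunk []byte) bool {
	if index < 0 || index >= len(m.ChunkHashes) {
		return false
	}
	sum := sha256.Sum256(chunk)
	return bytes.Equal(sum[:], m.ChunkHashes[index])
}
```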
A third option would be to include snapshot related data in the
block header. This could include the manifest with related checksums and be
secured through consensus. One challenge of this approach is to
ensure that creating snapshots does not put an undue burden on block
proposers by synchronizing snapshot creation and block creation. One
approach to minimizing the burden is for the snapshot for height
`H` to be included in block `H+n`, where `n` is some number of blocks away,
giving the block proposer enough time to complete the snapshot
asynchronously.
## Proposal: Eager StateSync With Per Chunk Light Client Validation
The conclusion after some consideration of the advantages/disadvantages of
eager/lazy and different security models is to produce a state sync
which eagerly produces snapshots and uses light client validation. This
approach has the performance advantages of pre-computing efficient
snapshots which can be streamed to new nodes on demand using sequential IO.
Secondly, by using light client validation we can validate each chunk on
receipt and avoid the potential eclipse attack of majority of peer based
security.
### Implementation
Tendermint is responsible for downloading and verifying chunks of
AppState from peers. ABCI Application is responsible for taking
AppStateChunk objects from TM and constructing a valid state tree whose
root corresponds with the AppHash of the syncing block. In particular, we
will need to implement:
* Build a new StateSync reactor that brokers message transmission between the peers
and the ABCI application
* A set of ABCI Messages
* Design SnapshotFormat as an interface (a sketch follows the diagram below) which can:
* validate chunks
* read/write chunks from file
* read/write chunks to/from application state store
* convert manifests into chunkRequest ABCI messages
* Implement SnapshotFormat for cosmos-hub with concrete implementation for:
* read/write chunks in a way which can be:
* parallelized across peers
* validated on receipt
* read/write to/from IAVL+ tree
![StateSync Architecture Diagram](img/state-sync.png)
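A hedged sketch of what such a SnapshotFormat interface might look like, derived from the bullet points above; all method names and types are illustrative assumptions:
```go
// ChunkRequest asks a peer for one chunk of a snapshot (hypothetical type).
type ChunkRequest struct {
	Index int // position of the requested chunk within the snapshot
}

type SnapshotFormat interface {
	// ValidateChunk checks a chunk against its expected hash or proof.
	ValidateChunk(chunk, proof []byte) error
	// ReadChunk and WriteChunk move chunks to and from files on disk.
	ReadChunk(index int) ([]byte, error)
	WriteChunk(index int, chunk []byte) error
	// ApplyChunk writes a validated chunk into the application state store.
	ApplyChunk(chunk []byte) error
	// ChunkRequests converts a manifest into per-chunk request messages.
	ChunkRequests(manifest []byte) ([]ChunkRequest, error)
}
```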
## Implementation Path
* Create StateSync reactor based on [#3753](https://github.com/tendermint/tendermint/pull/3753)
* Design SnapshotFormat with an eye towards cosmos-hub implementation
* ABCI message to send/receive SnapshotFormat
* IAVL+ changes to support SnapshotFormat
* Deliver Warp sync (no chunk validation)
* light client implementation for weak subjectivity
* Deliver StateSync with chunk validation
## Status
Proposed
## Consequences
### Neutral
### Positive
* Safe & performant state sync design substantiated with real world implementation experience
* General interfaces allowing application specific innovation
* Parallelizable implementation trajectory with reasonable engineering effort
### Negative
* Static Scheduling lacks opportunity for real time chunk availability optimizations
## References
[sync: Sync current state without full replay for Applications](https://github.com/tendermint/tendermint/issues/828) - original issue
[tendermint state sync proposal 2](https://docs.google.com/document/d/1npGTAa1qxe8EQZ1wG0a0Sip9t5oX2vYZNUDwr_LVRR4/edit) - ackratos proposal
[proposal 2 implementation](https://github.com/tendermint/tendermint/pull/3243) - ackratos implementation
[WIP General/Lazy State-Sync pseudo-spec](https://github.com/tendermint/tendermint/issues/3639) - Jae Proposal
[Warp Sync Implementation](https://github.com/tendermint/tendermint/pull/3594) - ackratos
[Chunk Proposal](https://github.com/tendermint/tendermint/pull/3799) - Bucky proposed


@@ -0,0 +1,404 @@
# ADR 043: Blockchain Reactor Riri-Org
## Changelog
- 18-06-2019: Initial draft
- 08-07-2019: Reviewed
- 29-11-2019: Implemented
- 14-02-2020: Updated with the implementation details
## Context
The blockchain reactor is responsible for two high-level processes: sending/receiving blocks from peers, and FastSync-ing blocks to catch up a node that is far behind. The goal of [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md) was to refactor these two processes by separating the business logic currently wrapped up in go-channels into pure `handle*` functions. While the ADR specified what the final form of the reactor might look like, it lacked guidance on the intermediary steps to get there.
The following diagram illustrates the state of the [blockchain-reorg](https://github.com/tendermint/tendermint/pull/3561) reactor which will be referred to as `v1`.
![v1 Blockchain Reactor Architecture
Diagram](https://github.com/tendermint/tendermint/blob/f9e556481654a24aeb689bdadaf5eab3ccd66829/docs/architecture/img/blockchain-reactor-v1.png)
While `v1` of the blockchain reactor has shown significant improvements in terms of simplifying the concurrency model, the current PR has run into a few roadblocks.
- The current PR is large and difficult to review.
- Block gossiping and fast sync processes are highly coupled to the shared `Pool` data structure.
- Peer communication is spread over multiple components, creating a complex dependency graph which must be mocked out during testing.
- Timeouts modeled as stateful tickers introduce non-determinism in tests.
This ADR is meant to specify the missing components and control necessary to achieve [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md).
## Decision
Partition the responsibilities of the blockchain reactor into a set of components which communicate exclusively with events. Events will contain timestamps allowing each component to track time as internal state. The internal state will be mutated by a set of `handle*` functions which will produce event(s). The integration between components will happen in the reactor, and reactor tests will then become integration tests between components. This design will be known as `v2`.
![v2 Blockchain Reactor Architecture
Diagram](https://github.com/tendermint/tendermint/blob/584e67ac3fac220c5c3e0652e3582eca8231e814/docs/architecture/img/blockchain-reactor-v2.png)
### Fast Sync Related Communication Channels
The diagram below shows the fast sync routines and the types of channels and queues used to communicate with each other.
In addition, the per-reactor channels used by the sendRoutine to send messages over the Peer MConnection are shown.
![v2 Blockchain Channels and Queues
Diagram](https://github.com/tendermint/tendermint/blob/5cf570690f989646fb3b615b734da503f038891f/docs/architecture/img/blockchain-v2-channels.png)
### Reactor changes in detail
The reactor will include a demultiplexing routine which will send each message to each sub-routine for independent processing. Each sub-routine will then select the messages it's interested in and call the specific handle function specified in [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md). The demuxRoutine acts as a "pacemaker", setting the time in which events are expected to be handled.
```go
func demuxRoutine(msgs, schedulerMsgs, processorMsgs, ioMsgs) {
timer := time.NewTicker(interval)
for {
select {
case <-timer.C:
now := evTimeCheck{time.Now()}
schedulerMsgs <- now
processorMsgs <- now
ioMsgs <- now
case msg:= <- msgs:
msg.time = time.Now()
// These channels should produce backpressure before
// being full to avoid starving each other
schedulerMsgs <- msg
processorMsgs <- msg
			ioMsgs <- msg
if msg == stop {
break;
}
}
}
}
func processRoutine(input chan Message, output chan Message) {
	processor := NewProcessor(..)
	for {
		msg := <-input
		switch msg := msg.(type) {
		case bcBlockRequestMessage:
			output <- processor.handleBlockRequest(msg)
			...
		case stop:
			processor.stop()
			break;
		}
	}
}
func scheduleRoutine(input chan Message, output chan Message) {
	scheduler := NewScheduler(...)
	for {
		msg := <-input
		switch msg := msg.(type) {
		case bcBlockResponseMessage:
			output <- scheduler.handleBlockResponse(msg)
			...
		case stop:
			scheduler.stop()
			break;
		}
	}
}
```
## Lifecycle management
A set of routines for individual processes allows processes to run in parallel with clear lifecycle management. `Start`, `Stop`, and `AddPeer` hooks currently present in the reactor will delegate to the sub-routines, allowing them to manage internal state independently without further coupling to the reactor.
```go
func (r *BlockChainReactor) Start() {
	r.msgs = make(chan Message, maxInFlight)
schedulerMsgs := make(chan Message)
processorMsgs := make(chan Message)
ioMsgs := make(chan Message)
go processorRoutine(processorMsgs, r.msgs)
go scheduleRoutine(schedulerMsgs, r.msgs)
go ioRoutine(ioMsgs, r.msgs)
...
}
func (bcR *BlockchainReactor) Receive(...) {
...
r.msgs <- msg
...
}
func (r *BlockchainReactor) Stop() {
...
r.msgs <- stop
...
}
...
func (r *BlockchainReactor) AddPeer(peer p2p.Peer) {
...
r.msgs <- bcAddPeerEv{peer.ID}
...
}
```
## IO handling
An io handling routine within the reactor will isolate peer communication. Messages going through the ioRoutine will usually be one-way, using `p2p` APIs. In the case in which `p2p` APIs such as `trySend` return errors, the ioRoutine can funnel those messages back to the demuxRoutine for distribution to the other routines. For instance, errors from the ioRoutine can be consumed by the scheduler to inform better peer selection implementations.
```go
func (r *BlockchainReactor) ioRoutine(ioMsgs chan Message, outMsgs chan Message) {
...
for {
msg := <-ioMsgs
switch msg := msg.(type) {
case scBlockRequestMessage:
queued := r.sendBlockRequestToPeer(...)
if queued {
outMsgs <- ioSendQueued{...}
}
		case scStatusRequestMessage:
			r.sendStatusRequestToPeer(...)
		case bcPeerError:
			r.Switch.StopPeerForError(msg.src)
			...
			...
		case bcFinished:
break;
}
}
}
```
### Processor Internals
The processor is responsible for ordering, verifying and executing blocks. The Processor will maintain an internal cursor `height` referring to the last processed block. As a set of blocks arrives unordered, the Processor will check if it has the block at `height+1` necessary to process the next block. The processor also maintains the map `blockPeers` of peers to heights, to keep track of which peer provided the block at `height`. `blockPeers` can be used in `handleRemovePeer(...)` to reschedule all unprocessed blocks provided by a peer who has errored.
```go
type Processor struct {
height int64 // the height cursor
state ...
	blocks map[int64]*Block // keep a set of blocks in memory until they are processed
	blockPeers map[int64]PeerID // keep track of which heights came from which peerID
lastTouch timestamp
}
func (proc *Processor) handleBlockResponse(peerID, block) {
	if block.height <= height {
		// block already processed; ignore it
	} else if blocks[block.height] != nil {
		return errDuplicateBlock{}
	} else {
		blocks[block.height] = block
	}
	if blocks[height] != nil && blocks[height+1] != nil {
... = state.Validators.VerifyCommit(...)
... = store.SaveBlock(...)
state, err = blockExec.ApplyBlock(...)
...
if err == nil {
			delete(blocks, height)
height++
lastTouch = msg.time
return pcBlockProcessed{height-1}
} else {
... // Delete all unprocessed block from the peer
return pcBlockProcessError{peerID, height}
}
}
}
func (proc *Processor) handleRemovePeer(peerID) {
events = []
// Delete all unprocessed blocks from peerID
for i = height; i < len(blocks); i++ {
		if blockPeers[i] == peerID {
			events = append(events, pcBlockReschedule{i})
			delete(blocks, i)
}
}
return events
}
func handleTimeCheckEv(time) {
if time - lastTouch > timeout {
// Timeout the processor
...
}
}
```
## Schedule
The Schedule maintains the internal state used for scheduling blockRequestMessages based on some scheduling algorithm. The schedule needs to maintain state on:
- The state `blockState` of every block seen up to the height of maxHeight
- The set of peers and their peer state `peerState`
- which peers have which blocks
- which blocks have been requested from which peers
```go
type blockState int
const (
	blockStateNew = iota
	blockStatePending
	blockStateReceived
	blockStateProcessed
)
type schedule struct {
	// a map of heights to their blockState
	blockStates map[int64]blockState
// a map of which blocks are available from which peers
	blockPeers map[int64]map[p2p.ID]scPeer
// a map of peerID to schedule specific peer struct `scPeer`
peers map[p2p.ID]scPeer
// a map of heights to the peer we are waiting for a response from
	pending map[int64]scPeer
targetPending int // the number of blocks we want in blockStatePending
targetReceived int // the number of blocks we want in blockStateReceived
peerTimeout int
peerMinSpeed int
}
func (sc *schedule) numBlockInState(state blockState) uint32 {
num := 0
for i := sc.minHeight(); i <= sc.maxHeight(); i++ {
		if sc.blockStates[i] == state {
num++
}
}
return num
}
func (sc *schedule) popSchedule(maxRequest int) []scBlockRequestMessage {
// We only want to schedule requests such that we have less than sc.targetPending and sc.targetReceived
// This ensures we don't saturate the network or flood the processor with unprocessed blocks
	todo := min(sc.targetPending - sc.numBlockInState(blockStatePending), sc.targetReceived - sc.numBlockInState(blockStateReceived))
events := []scBlockRequestMessage{}
	for i := sc.minHeight(); i <= sc.maxHeight(); i++ {
if todo == 0 {
break
}
		if sc.blockStates[i] == blockStateNew {
			peer = sc.selectPeer(sc.blockPeers[i])
sc.blockStates[i] = blockStatePending
sc.pending[i] = peer
events = append(events, scBlockRequestMessage{peerID: peer.peerID, height: i})
todo--
}
}
return events
}
...
type scPeer struct {
peerID p2p.ID
	numOutstandingRequest int
lastTouched time.Time
monitor flow.Monitor
}
```
## Scheduler
The scheduler is configured to maintain a target `n` of in-flight
messages and will use feedback from `_blockResponseMessage`,
`_statusResponseMessage` and `_peerError` to produce an optimal assignment
of scBlockRequestMessage at each `timeCheckEv`.
```go
func handleStatusResponse(peerID, height, time) {
schedule.touchPeer(peerID, time)
schedule.setPeerHeight(peerID, height)
}
func handleBlockResponseMessage(peerID, height, block, time) {
schedule.touchPeer(peerID, time)
schedule.markReceived(peerID, height, size(block))
}
func handleNoBlockResponseMessage(peerID, height, time) {
schedule.touchPeer(peerID, time)
// reschedule that block, punish peer...
...
}
func handlePeerError(peerID) {
// Remove the peer, reschedule the requests
...
}
func handleTimeCheckEv(time) {
// clean peer list
events = []
for peerID := range schedule.peersNotTouchedSince(time) {
pending = schedule.pendingFrom(peerID)
schedule.setPeerState(peerID, timedout)
schedule.resetBlocks(pending)
events = append(events, peerTimeout{peerID})
}
events = append(events, schedule.popSchedule())
return events
}
```
## Peer
The Peer stores per-peer state based on messages received by the scheduler.
```go
type Peer struct {
lastTouched timestamp
lastDownloaded timestamp
	pending map[int64]struct{}
	height int64 // max height for the peer
state {
pending, // we know the peer but not the height
active, // we know the height
timeout // the peer has timed out
}
}
```
## Status
Implemented
## Consequences
### Positive
- Tests become deterministic
- Simulation becomes a-temporal: no need to wait for a wall-time timeout
- Peer Selection can be independently tested/simulated
- Develop a general approach to refactoring reactors
### Negative
### Neutral
### Implementation Path
- Implement the scheduler, test the scheduler, review the scheduler
- Implement the processor, test the processor, review the processor
- Implement the demuxer, write integration test, review integration tests
## References
- [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md): The original blockchain reactor re-org proposal
- [Blockchain re-org](https://github.com/tendermint/tendermint/pull/3561): The current blockchain reactor re-org implementation (v1)


@@ -0,0 +1,141 @@
# ADR 044: Lite Client with Weak Subjectivity
## Changelog
* 13-07-2019: Initial draft
* 14-08-2019: Address cwgoes comments
## Context
The concept of light clients was introduced in the Bitcoin white paper. It
describes a watcher of a distributed consensus process that validates only the
consensus algorithm and not the state machine transactions within.
Tendermint light clients allow bandwidth & compute-constrained devices, such as smartphones, low-power embedded chips, or other blockchains to
efficiently verify the consensus of a Tendermint blockchain. This forms the
basis of safe and efficient state synchronization for new network nodes and
inter-blockchain communication (where a light client of one Tendermint instance
runs in another chain's state machine).
In a network that is expected to reliably punish validators for misbehavior
by slashing bonded stake and where the validator set changes
infrequently, clients can take advantage of this assumption to safely
synchronize a lite client without downloading the intervening headers.
Light clients (and full nodes) operating in the Proof Of Stake context need a
trusted block height from a trusted source that is no older than 1 unbonding
window plus a configurable evidence submission synchrony bound. This is called “weak subjectivity”.
Weak subjectivity is required in Proof of Stake blockchains because it is
costless for an attacker to buy up voting keys that are no longer bonded and
fork the network at some point in its prior history. See Vitalik's post at
[Proof of Stake: How I Learned to Love Weak
Subjectivity](https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/).
Currently, Tendermint provides a lite client implementation in the
[light](https://github.com/tendermint/tendermint/tree/master/light) package. This
lite client implements a bisection algorithm that tries to use a binary search
to find the minimum number of block headers where the validator set voting
power changes are less than < 1/3rd. This interface does not support weak
subjectivity at this time. The Cosmos SDK also does not support counterfactual
slashing, nor does the lite client have any capacity to report evidence making
these systems *theoretically unsafe*.
NOTE: Tendermint provides a somewhat different (stronger) light client model
than Bitcoin under eclipse, since the eclipsing node(s) can only fool the light
client if they have two-thirds of the private keys from the last root-of-trust.
## Decision
### The Weak Subjectivity Interface
Add the weak subjectivity interface for when a new light client connects to the
network or when a light client that has been offline for longer than the
unbonding period connects to the network. Specifically, the node needs to
initialize the following structure before syncing from user input:
```go
type TrustOptions struct {
// Required: only trust commits up to this old.
// Should be equal to the unbonding period minus some delta for evidence reporting.
TrustPeriod time.Duration `json:"trust-period"`
// Option 1: TrustHeight and TrustHash can both be provided
// to force the trusting of a particular height and hash.
// If the latest trusted height/hash is more recent, then this option is
// ignored.
TrustHeight int64 `json:"trust-height"`
TrustHash []byte `json:"trust-hash"`
// Option 2: Callback can be set to implement a confirmation
// step if the trust store is uninitialized, or expired.
Callback func(height int64, hash []byte) error
}
```
The expectation is that the user will get this information from a trusted source
like a validator, a friend, or a secure website. A more user-friendly
solution, with trust tradeoffs, would be to establish an https-based protocol with
a default endpoint that populates this information. An on-chain registry
of roots-of-trust (e.g. on the Cosmos Hub) also seems likely in the future.
### Linear Verification
The linear verification algorithm requires downloading all headers
between the `TrustHeight` and the `LatestHeight`. The lite client downloads the
full header for the provided `TrustHeight` and then proceeds to download `N+1`
headers and applies the [Tendermint validation
rules](https://github.com/tendermint/tendermint/tree/master/spec/light-client/verification/README.md)
to each block.
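A minimal sketch of this linear algorithm, assuming hypothetical helpers `fetchHeader` and `verifyAdjacent` (not the actual implementation):
```go
// linearVerify walks every header from trustHeight to latestHeight,
// applying the adjacent-header validation rules at each step.
func linearVerify(trustHeight, latestHeight int64) error {
	trusted, err := fetchHeader(trustHeight) // hypothetical helper
	if err != nil {
		return err
	}
	for h := trustHeight + 1; h <= latestHeight; h++ {
		next, err := fetchHeader(h)
		if err != nil {
			return err
		}
		if err := verifyAdjacent(trusted, next); err != nil { // hypothetical helper
			return err
		}
		trusted = next
	}
	return nil
}
```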
### Bisecting Verification
Bisecting Verification is a more bandwidth and compute intensive mechanism that
in the most optimistic case requires a light client to only download two block
headers to come into synchronization.
The bisection algorithm proceeds in the following fashion. The client downloads
and verifies the full block header for `TrustHeight` and then fetches the
`LatestHeight` block header. The client then attempts to verify the
`LatestHeight` header with voting powers taken from the `NextValidatorSet` in
the `TrustHeight` header. This verification will succeed if the validators from
`TrustHeight` still have > 2/3 +1 of voting power in the `LatestHeight`. If
this succeeds, the client is fully synchronized. If this fails, then the
following bisection algorithm should be executed.
The client tries to download the block header at the mid-point between
`LatestHeight` and `TrustHeight` and attempts the same algorithm as above
using `MidPointHeight` instead of `LatestHeight` and a different threshold -
1/3 +1 of voting power for *non-adjacent headers*. In the case of failure,
recursively perform the `MidPoint` verification until success, then start over
with an updated `NextValidatorSet` and `TrustHeight`.
If the client encounters a forged header, it should submit the header along
with some other intermediate headers as the evidence of misbehavior to other
full nodes. After that, it can retry the bisection using another full node. An
optimal client will cache trusted headers from the previous run to minimize
network usage.
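A hedged sketch of the recursive midpoint step described above; `SignedHeader`, `fetchHeader` and `verifyWithTrustLevel` are hypothetical names, not the actual API:
```go
// bisect verifies untrusted from trusted, recursing on the midpoint
// whenever direct verification with the trust level fails.
func bisect(trusted, untrusted *SignedHeader) error {
	// try to verify the untrusted header directly from the trusted one
	if err := verifyWithTrustLevel(trusted, untrusted); err == nil {
		return nil
	}
	if untrusted.Height == trusted.Height+1 {
		return errors.New("adjacent header failed verification")
	}
	// otherwise verify the midpoint first, then continue from there
	mid, err := fetchHeader((trusted.Height + untrusted.Height) / 2)
	if err != nil {
		return err
	}
	if err := bisect(trusted, mid); err != nil {
		return err
	}
	return bisect(mid, untrusted)
}
```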
---
Check out the formal specification
[here](https://github.com/tendermint/tendermint/tree/master/spec/light-client).
## Status
Implemented
## Consequences
### Positive
* light client which is safe to use (it can go offline, but not for too long)
### Negative
* complexity of bisection
### Neutral
* social consensus can be prone to errors (for cases where a new light client
joins a network or it has been offline for too long)


@@ -0,0 +1,140 @@
# ADR 45 - ABCI Evidence Handling
## Changelog
* 21-09-2019: Initial draft
## Context
Evidence is a distinct component in a Tendermint block and has its own reactor
for high-priority gossiping. Currently, Tendermint supports only a single form of evidence, an explicit
equivocation, where a validator signs conflicting blocks at the same
height/round. It is detected in real-time in the consensus reactor, and gossiped
through the evidence reactor. Evidence can also be submitted through the RPC.
Currently, Tendermint does not gracefully handle a fork on the main chain.
If a fork is detected, the node panics. At this point manual intervention and
social consensus are required to reconfigure. We'd like to do something more
graceful here, but that's for another day.
It's possible to fool lite clients without there being a fork on the
main chain - so called Fork-Lite. See the
[fork accountability](https://github.com/tendermint/tendermint/blob/master/spec/light-client/accountability/README.md)
document for more details. For a sequential lite client, this can happen via
equivocation or amnesia attacks. For a skipping lite client this can also happen
via lunatic validator attacks. There must be some way for applications to punish
all forms of misbehaviour.
The essential question is whether Tendermint should manage the evidence
verification, or whether it should treat evidence more like a transaction (ie.
arbitrary bytes) and let the application handle it (including all the signature
checking).
Currently, evidence verification is handled by Tendermint. Once committed,
[evidence is passed over
ABCI](https://github.com/tendermint/tendermint/blob/master/proto/tendermint/abci/types.proto#L354)
in BeginBlock in a reduced form that includes only
the type of evidence, its height and timestamp, the validator it's from, and the
total voting power of the validator set at the height. The app trusts Tendermint
to perform the evidence verification, as the ABCI evidence does not contain the
signatures and additional data for the app to verify itself.
Arguments in favor of leaving evidence handling in Tendermint:
1) Attacks on full nodes must be detectable by full nodes in real time, ie. within the consensus reactor.
So at the very least, any evidence involved in something that could fool a full
node must be handled natively by Tendermint as there would otherwise be no way
for the ABCI app to detect it (ie. we don't send all votes we receive during
consensus to the app ... ).
2) Amnesia attacks cannot be easily detected - they require an interactive
protocol among all the validators to submit justification for their past
votes. Our best notion of [how to do this
currently](https://github.com/tendermint/tendermint/blob/c67154232ca8be8f5c21dff65d154127adc4f7bb/docs/spec/consensus/fork-detection.md)
is via a centralized
monitor service that is trusted for liveness to aggregate data from
current and past validators, but which produces a proof of misbehaviour (ie.
via amnesia) that can be verified by anyone, including the blockchain.
Validators must submit all the votes they saw for the relevant consensus
height to justify their precommits. This is quite specific to the Tendermint
protocol and may change if the protocol is upgraded. Hence it would be awkward
to co-ordinate this from the app.
3) Evidence gossipping is similar to tx gossipping, but it should be higher
priority. Since the mempool does not support any notion of priority yet,
evidence is gossipped through a distinct Evidence reactor. If we just treated
evidence like any other transaction, leaving it entirely to the application,
Tendermint would have no way to know how to prioritize it, unless/until we
significantly upgrade the mempool. Thus we would need to continue to treat evidence
distinctly and update the ABCI to either support sending Evidence through
CheckTx/DeliverTx, or to introduce new CheckEvidence/DeliverEvidence methods.
In either case we'd need to make more changes to ABCI than if Tendermint
handled things and we just added support for another evidence type that could be included
in BeginBlock.
4) All ABCI application frameworks will benefit from most of the heavy lifting
being handled by Tendermint, rather than each of them needing to re-implement
all the evidence verification logic in each language.
Arguments in favor of moving evidence handling to the application:
5) Skipping lite clients require us to track the set of all validators that were
bonded over some period in case validators that are unbonding but still
slashable sign invalid headers to fool lite clients. The Cosmos-SDK
staking/slashing modules track this, as it's used for slashing.
Tendermint does not currently track this, though it does keep track of the
validator set at every height. This leans in favour of managing evidence in
the app to avoid redundantly managing the historical validator set data in
Tendermint
6) Applications supporting cross-chain validation will be required to process
evidence from other chains. This data will come in the form of a transaction,
but it means the app will be required to have all the functionality to process
evidence, even if the evidence for its own chain is handled directly by
Tendermint.
7) Evidence from lite clients may be large and constitute some form of DoS
vector against full nodes. Putting it in transactions allows it to engage the application's fee
mechanism to pay for the cost of execution in the event the evidence is false.
This means the evidence submitter must be able to afford the fees for the
submission, but of course it should be refunded if the evidence is valid.
That said, the burden is mostly on full nodes, which don't necessarily benefit
from fees.
## Decision
The above mostly seems to suggest that evidence detection belongs in Tendermint.
(5) does not impose particularly large obligations on Tendermint and (6) just
means the app can use Tendermint libraries. That said, (7) is potentially
cause for some concern, though it could still attack full nodes that weren't associated with validators
(ie. that don't benefit from fees). This could be handled out of band, for instance by
full nodes offering the light client service via payment channels or via some
other payment service. This can also be mitigated by banning client IPs if they
send bad data. Note the burden is on the client to actually send us a lot of
data in the first place.
A separate ADR will describe how Tendermint will handle these new forms of
evidence, in terms of how it will engage the monitoring protocol described in
the [fork
detection](https://github.com/tendermint/tendermint/blob/c67154232ca8be8f5c21dff65d154127adc4f7bb/docs/spec/consensus/fork-detection.md) document,
and how it will track past validators and manage DoS issues.
## Status
Proposed.
## Consequences
### Positive
- No real changes to ABCI
- Tendermint handles evidence for all apps
### Neutral
- Need to be careful about denial of service on the Tendermint RPC
### Negative
- Tendermint duplicates data by tracking all pubkeys that were validators during
the unbonding period


@@ -0,0 +1,169 @@
# ADR 046: Lite Client Implementation
## Changelog
* 13-02-2020: Initial draft
* 26-02-2020: Cross-checking the first header
* 28-02-2020: Bisection algorithm details
* 31-03-2020: `Verify` signature was changed
## Context
A `Client` struct represents a light client, connected to a single blockchain.
The user has an option to verify headers using `VerifyHeader` or
`VerifyHeaderAtHeight` or `Update` methods. The latter method downloads the
latest header from primary and compares it with the currently trusted one.
```go
type Client interface {
// verify new headers
VerifyHeaderAtHeight(height int64, now time.Time) (*types.SignedHeader, error)
VerifyHeader(newHeader *types.SignedHeader, newVals *types.ValidatorSet, now time.Time) error
Update(now time.Time) (*types.SignedHeader, error)
// get trusted headers & validators
TrustedHeader(height int64) (*types.SignedHeader, error)
TrustedValidatorSet(height int64) (valSet *types.ValidatorSet, heightUsed int64, err error)
LastTrustedHeight() (int64, error)
FirstTrustedHeight() (int64, error)
// query configuration options
ChainID() string
Primary() provider.Provider
Witnesses() []provider.Provider
Cleanup() error
}
```
A new light client can either be created from scratch (via `NewClient`) or
using the trusted store (via `NewClientFromTrustedStore`). When there's some
data in the trusted store and `NewClient` is called, the light client will a)
check if the stored header is more recent and b) optionally ask the user
whether it should roll back (no confirmation required by default).
```go
func NewClient(
chainID string,
trustOptions TrustOptions,
primary provider.Provider,
witnesses []provider.Provider,
trustedStore store.Store,
options ...Option) (*Client, error) {
```
`witnesses` as an argument (as opposed to an `Option`) is an intentional choice,
made to increase security by default. At least one witness is required,
although, right now, the light client does not check that primary != witness.
When cross-checking a new header with witnesses, the minimum number of witnesses
required to respond is 1. Note the very first header (`TrustOptions.Hash`) is
also cross-checked with witnesses for additional security.
Due to the nature of the bisection algorithm, some headers might be skipped. If the light
client does not have a header for height `X` and `VerifyHeaderAtHeight(X)` or
`VerifyHeader(H#X)` methods are called, these will perform either a) backwards
verification from the latest header back to the header at height `X` or b)
bisection verification from the first stored header to the header at height `X`.
`TrustedHeader`, `TrustedValidatorSet` only communicate with the trusted store.
If some header is not there, an error will be returned indicating that
verification is required.
```go
type Provider interface {
ChainID() string
SignedHeader(height int64) (*types.SignedHeader, error)
ValidatorSet(height int64) (*types.ValidatorSet, error)
}
```
A provider is usually a full node, but can be another light client. The above
interface is thin and can accommodate many implementations.
If a provider (primary or witness) becomes unavailable for a prolonged period of
time, it will be removed to ensure smooth operation.
Both `Client` and providers expose the chain ID to track whether they are on the same
chain. Note that when a chain upgrades or intentionally forks, its chain ID changes.
The light client stores headers & validators in the trusted store:
```go
type Store interface {
SaveSignedHeaderAndValidatorSet(sh *types.SignedHeader, valSet *types.ValidatorSet) error
DeleteSignedHeaderAndValidatorSet(height int64) error
SignedHeader(height int64) (*types.SignedHeader, error)
ValidatorSet(height int64) (*types.ValidatorSet, error)
LastSignedHeaderHeight() (int64, error)
FirstSignedHeaderHeight() (int64, error)
SignedHeaderAfter(height int64) (*types.SignedHeader, error)
Prune(size uint16) error
Size() uint16
}
```
At the moment, the only implementation is the `db` store (a wrapper around the KV
database used in Tendermint). In the future, remote adapters are possible
(e.g. `Postgresql`).
```go
func Verify(
chainID string,
trustedHeader *types.SignedHeader, // height=X
trustedVals *types.ValidatorSet, // height=X or height=X+1
untrustedHeader *types.SignedHeader, // height=Y
untrustedVals *types.ValidatorSet, // height=Y
trustingPeriod time.Duration,
now time.Time,
maxClockDrift time.Duration,
trustLevel tmmath.Fraction) error {
```
The pure function `Verify` is exposed for header verification. It handles both
the adjacent and non-adjacent cases. In the former case, it compares the
hashes directly (2/3+ signed transition). Otherwise, it verifies that 1/3+
(`trustLevel`) of the trusted validators are still present in the new validator
set. While the `Verify` function is certainly handy, `VerifyAdjacent` and
`VerifyNonAdjacent` should be used most often to avoid logic errors.
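As a usage illustration only (the values are made up and the call site may differ), verifying a non-adjacent header with a 1/3 trust level might look like:
```go
err := Verify(
	"my-chain-id",   // chainID (illustrative)
	trustedHeader,   // height=X
	trustedVals,     // height=X or X+1
	untrustedHeader, // height=Y
	untrustedVals,   // height=Y
	14*24*time.Hour, // trustingPeriod, e.g. two weeks
	time.Now(),      // now
	10*time.Second,  // maxClockDrift
	tmmath.Fraction{Numerator: 1, Denominator: 3}, // trustLevel
)
if err != nil {
	// the header could not be verified against the trusted state
}
```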
### Bisection algorithm details
Non-recursive bisection algorithm was implemented despite the spec containing
the recursive version. There are two major reasons:
1) Constant memory consumption => no risk of getting OOM (Out-Of-Memory) exceptions;
2) Faster finality (see Fig. 1).
_Fig. 1: Differences between recursive and non-recursive bisections_
![Fig. 1](./img/adr-046-fig1.png)
Specification of the non-recursive bisection can be found
[here](https://github.com/tendermint/spec/blob/zm_non-recursive-verification/spec/consensus/light-client/non-recursive-verification.md).
## Status
Implemented
## Consequences
### Positive
* single `Client` struct, which is easy to use
* flexible interfaces for header providers and trusted storage
### Negative
* `Verify` needs to be aligned with the current spec
### Neutral
* `Verify` function might be misused (called with non-adjacent headers in
incorrectly implemented sequential verification)


@@ -0,0 +1,254 @@
# ADR 047: Handling evidence from light client
## Changelog
* 18-02-2020: Initial draft
* 24-02-2020: Second version
* 13-04-2020: Add PotentialAmnesiaEvidence and a few remarks
* 31-07-2020: Remove PhantomValidatorEvidence
* 14-08-2020: Introduce light traces (listed now as an alternative approach)
* 20-08-2020: Light client produces evidence when detected instead of passing to full node
* 16-09-2020: Post-implementation revision
* 15-03-2021: Amends for the case of a forward lunatic attack
### Glossary of Terms
- a `LightBlock` is the unit of data that a light client receives, verifies and stores.
It is composed of a validator set, commit and header all at the same height.
- a **Trace** is seen as an array of light blocks across a range of heights that were
created as a result of skipping verification.
- a **Provider** is a full node that a light client is connected to and serves the light
client signed headers and validator sets.
- `VerifySkipping` (sometimes known as bisection or verify non-adjacent) is a method the
light client uses to verify a target header from a trusted header. The process involves verifying
intermediate headers in between the two by making sure that 1/3 of the validators that signed
the trusted header also signed the untrusted one.
- **Light Bifurcation Point**: If the light client was to run `VerifySkipping` with two providers
(i.e. a primary and a witness), the bifurcation point is the height that the headers
from each of these providers are different yet valid. This signals that one of the providers
may be trying to fool the light client.
## Context
The bisection method of header verification used by the light client exposes
itself to a potential attack if any block within the light client's trusted period has
a malicious group of validators with power that exceeds the light client's trust level
(default is 1/3). To improve light client (and overall network) security, the light
client has a detector component that compares the verified header provided by the
primary against witness headers. This ADR outlines the process of mitigating attacks
on the light client by using witness nodes to cross reference with.
## Alternative Approaches
A previously discussed approach to handling evidence was to pass all the data that the
light client had witnessed when it had observed diverging headers to the full node for
processing. This was known as a light trace and had the following structure:
```go
type ConflictingHeadersTrace struct {
Headers []*types.SignedHeader
}
```
This approach has the advantage of not requiring as much processing on the light
client side in the event that an attack happens. However, this is not a significant
difference, as the light client would in any case have to validate all the headers
from both witness and primary. Using traces would also consume a large amount of bandwidth
and add a DDoS vector to the full node.
## Decision
The light client will be divided into two components: a `Verifier` (either sequential or
skipping) and a `Detector` (see [Informal's Detector](https://github.com/informalsystems/tendermint-rs/blob/master/docs/spec/lightclient/detection/detection.md)).
The detector will take the trace of headers from the primary and check it against all
witnesses. For a witness with a diverging header, the detector will first verify the header
by bisecting through all the heights defined by the trace that the primary provided. If valid,
the light client will trawl through both traces and find the point of bifurcation where it
can proceed to extract any evidence (as is discussed in detail later).
Upon successfully detecting the evidence, the light client will send it to both primary and
witness before halting. It will not send evidence to other peers nor continue to verify the
primary's header against any other header.
## Detailed Design
The verification process of the light client will start from a trusted header and use a bisection
algorithm to verify up to a header at a given height. This becomes the verified header (does not
mean that it is trusted yet). All headers that were verified in between are cached and known as
intermediary headers and the entire array is sometimes referred to as a trace.
The light client's detector then takes all the headers and runs the detect function.
```golang
func (c *Client) detectDivergence(primaryTrace []*types.LightBlock, now time.Time) error
```
The function takes the last header it received, the target header and compares it against all the witnesses
it has through the following function:
```golang
func (c *Client) compareNewHeaderWithWitness(errc chan error, h *types.SignedHeader,
witness provider.Provider, witnessIndex int)
```
The err channel is used to send back all the outcomes so that they can be processed in parallel.
Invalid headers result in dropping the witness; a lack of response or missing headers
is ignored, as are headers that have the same hash. A header with a different hash,
however, triggers the detection process between the primary and that particular witness.
This begins with verification of the witness's header via skipping verification, which is run in tandem
with locating the Light Bifurcation Point:
![](../imgs/light-client-detector.png)
This is done with:
```golang
func (c *Client) examineConflictingHeaderAgainstTrace(
trace []*types.LightBlock,
targetBlock *types.LightBlock,
source provider.Provider,
now time.Time,
) ([]*types.LightBlock, *types.LightBlock, error)
```
which performs the following
1. Checking that the trusted header is the same. Currently, they should not theoretically be different
because witnesses cannot be added and removed after the client is initialized. But we do this anyway
as a sanity check. If this fails, we have to drop the witness.
2. Querying and verifying the witness's headers using bisection at the same heights of all the
intermediary headers of the primary (In the above example this is A, B, C, D, F, H). If bisection fails
or the witness stops responding then we can call the witness faulty and drop it.
3. We eventually reach a verified header by the witness which is not the same as the intermediary header
(In the above example this is E). This is the point of bifurcation (This could also be the last header).
4. There is a unique case where the trace that is being examined against has blocks that have a greater
height than the targetBlock. This can occur as part of a forward lunatic attack where the primary has
provided a light block that has a height greater than the head of the chain (see Appendix B). In this
case, the light client will verify the source's blocks up to the targetBlock and return the block in the
trace that is directly after the targetBlock in height as the `ConflictingBlock`.
This function then returns the trace of blocks from the witness node between the common header and the
divergent header of the primary as it is likely, as seen in the example to the right, that multiple
headers were required in order to verify the divergent one. This trace will
be used later (as is also described later in this document).
![](../imgs/bifurcation-point.png)
Now, that an attack has been detected, the light client must form evidence to prove it. There are
three types of attacks that either the primary or witness could have performed to try to fool the light client
into verifying the wrong header: Lunatic, Equivocation and Amnesia. As the consequence is the same and
the data required to prove it is also very similar, we bundle these attack styles together in a single
evidence:
```golang
type LightClientAttackEvidence struct {
ConflictingBlock *LightBlock
CommonHeight int64
}
```
The light client takes the stance of first suspecting the primary. Given the bifurcation point found
above, it takes the two divergent headers and compares whether the one from the primary is valid with
respect to the one from the witness. This is done by calling `isInvalidHeader()` which looks to see if
any one of the deterministically derived header fields differ from one another. This could be one of
`ValidatorsHash`, `NextValidatorsHash`, `ConsensusHash`, `AppHash`, and `LastResultsHash`.
In this case we know it's a Lunatic attack and, to help the witness verify it, we send the height
of the common header, which is 1 in the example above, or C in the example above that. If all these
hashes are the same then we can infer that it is either an Equivocation or an Amnesia attack. In this
case we send the height of the diverged headers because we know that the validator sets are the same,
hence the malicious nodes are still bonded at that height. In the example above, this is height 10, and
in the example above that it is the height at E.
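A hedged sketch of the field comparison `isInvalidHeader()` performs (the actual implementation may differ; standard `bytes` package assumed):
```golang
func isInvalidHeader(trusted, conflicting *types.Header) bool {
	// a header is "invalid" if any deterministically derived field diverges
	return !bytes.Equal(trusted.ValidatorsHash, conflicting.ValidatorsHash) ||
		!bytes.Equal(trusted.NextValidatorsHash, conflicting.NextValidatorsHash) ||
		!bytes.Equal(trusted.ConsensusHash, conflicting.ConsensusHash) ||
		!bytes.Equal(trusted.AppHash, conflicting.AppHash) ||
		!bytes.Equal(trusted.LastResultsHash, conflicting.LastResultsHash)
}
```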
The light client now has the evidence and broadcasts it to the witness.
However, it could be that the header the light client used from the witness against the primary
was forged, so before halting the light client swaps the process: it suspects the witness and
uses the primary to create evidence. It calls `examineConflictingHeaderAgainstTrace` again, this
time using the witness trace found earlier.
If the primary was malicious, it is likely that it will not respond, but if it is innocent then the
light client will produce the same evidence, although this time the conflicting
block will come from the witness node instead of the primary. The evidence is then formed and sent to
the primary node.
This then ends the process and the verify function that was called at the start returns the error to
the user.
For a detailed overview of how each of these three attacks can be conducted please refer to the
[fork accountability spec](https://github.com/tendermint/tendermint/blob/master/spec/consensus/light-client/accountability.md).
## Full Node Verification
When a full node receives evidence from the light client it will need to verify
it for itself before gossiping it to peers and trying to commit it on chain. This process is outlined
in [ADR-059](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-059-evidence-composition-and-lifecycle.md).
## Status
Implemented
## Consequences
### Positive
* Light client has increased security against Lunatic, Equivocation and Amnesia attacks.
* Do not need intermediate data structures to encapsulate the malicious behavior
* Generalized evidence makes the code simpler
### Negative
* Breaking change on the light client from versions 0.33.8 and below. Previous
versions will still send `ConflictingHeadersEvidence` but it won't be recognized
by the full node. Light clients will however still refuse the header and shut down.
* Amnesia attacks, although detected, cannot be punished as it is not
clear from the current information which nodes behaved maliciously.
* Evidence module must handle both individual and grouped evidence.
### Neutral
## References
* [Fork accountability spec](https://github.com/tendermint/tendermint/blob/master/spec/consensus/light-client/accountability.md)
* [ADR 056: Light client amnesia attacks](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-056-light-client-amnesia-attacks.md)
* [ADR-059: Evidence Composition and Lifecycle](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-059-evidence-composition-and-lifecycle.md)
* [Informal's Light Client Detector](https://github.com/informalsystems/tendermint-rs/blob/master/docs/spec/lightclient/detection/detection.md)
## Appendix A
PhantomValidatorEvidence was used to capture when a validator that was still staked
(i.e. within the bonded period) but not in the current validator set had voted for a block.
In later discussions it was argued that, although it was possible to keep phantom validator
evidence, in any case a phantom validator with the capacity to be involved
in fooling a light client would have to be aided by 1/3+ lunatic validators.
It would also be very unlikely that the new validators injected by the lunatic attack
would be validators that currently still have something staked.
Not only this, but a large degree of extra computation was required to store all
the currently staked validators that could possibly fall into the group of being
a phantom validator. Given this, it was removed.
## Appendix B
A unique flavor of lunatic attack is the forward lunatic attack. This is where a malicious
node provides a header with a height greater than the height of the blockchain. Thus there
are no witnesses capable of rebutting the malicious header. Such an attack also
requires an accomplice, i.e. at least one other witness to return the same forged header.
Although such attacks can be an arbitrary number of heights ahead, they must still remain within the
clock drift of the light client's real time. Therefore, to detect such an attack, a light
client will wait for a time
```
2 * MAX_CLOCK_DRIFT + LAG
```
for a witness to provide the latest block it has. Given the time constraints, if the witness
is operating at the head of the blockchain, it will have a header with an earlier height but
a later timestamp. This can be used to prove that the primary has submitted a lunatic header
which violates monotonically increasing time.
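As a sketch, assuming Tendermint's `types.LightBlock` (the actual check lives in the light client detector):
```golang
// Sketch: after waiting 2*MAX_CLOCK_DRIFT + LAG, if a witness operating at
// the head of the chain returns a latest block with a lower height but a
// later timestamp than the primary's block, the primary's header is provably
// forged, as block time must increase monotonically with height.
func isForwardLunatic(primary, witnessLatest *types.LightBlock) bool {
    return witnessLatest.Height < primary.Height &&
        witnessLatest.Time.After(primary.Time)
}
```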


@@ -0,0 +1,58 @@
# ADR 50: Improved Trusted Peering
## Changelog
* 22-10-2019: Initial draft
* 05-11-2019: Modify `maximum-dial-period` to `persistent-peers-max-dial-period`
## Context
When `max-num-inbound-peers` or `max-num-outbound-peers` of a node is reached, the node cannot spare any more slots for peers,
inbound or outbound. Therefore, after a certain period of disconnection, an important peering can be lost indefinitely
because all slots are consumed by other peers and the node stops trying to dial the peer.
This happens for two reasons: exponential backoff, and the absence of an unconditional peering feature for trusted peers.
## Decision
We would like to suggest solving this problem by introducing two parameters in `config.toml`, `unconditional-peer-ids` and
`persistent-peers-max-dial-period`.
1) `unconditional-peer-ids`
A node operator provides a list of IDs of peers which are allowed to connect, inbound or outbound, regardless of whether
`max-num-inbound-peers` or `max-num-outbound-peers` of the user's node has been reached.
2) `persistent-peers-max-dial-period`
The interval between dials to each persistent peer will not exceed `persistent-peers-max-dial-period` during exponential backoff.
Therefore, `dial-period` = min(`persistent-peers-max-dial-period`, `exponential-backoff-dial-period`), as sketched below.
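A minimal sketch of this capping rule (the function and parameter names here are illustrative, not the actual config or scheduler code):
```go
import "time"

// Sketch: the dial interval for a persistent peer grows exponentially with
// failed attempts but is capped at persistent-peers-max-dial-period.
func nextDialPeriod(failedAttempts uint, base, maxDialPeriod time.Duration) time.Duration {
    if failedAttempts > 30 {
        failedAttempts = 30 // avoid overflowing the shift below
    }
    backoff := base << failedAttempts // exponential-backoff-dial-period
    if maxDialPeriod > 0 && backoff > maxDialPeriod {
        return maxDialPeriod
    }
    return backoff
}
```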
### Alternative approach
Persistent-peers is only for outbound connections, therefore it is not enough to cover the full utility of `unconditional-peer-ids`.
[@creamers158](https://github.com/Creamers158) suggested putting ID-only items into persistent-peers to be handled as
`unconditional-peer-ids`, but this would require complicated special-casing for the differently structured items in persistent-peers.
Therefore we decided to have `unconditional-peer-ids` independently cover this use case.
## Status
Proposed
## Consequences
### Positive
A node operator can configure two new parameters in `config.toml` to ensure that Tendermint will allow connections
from/to peers in `unconditional-peer-ids`, and that every persistent peer will be dialed at least once in every
`persistent-peers-max-dial-period` interval. This achieves more stable and persistent peering for trusted peers.
### Negative
The new feature introduces two new parameters in `config.toml` which need explanation for node operators.
### Neutral
## References
* [Two p2p feature enhancement proposals](https://github.com/tendermint/tendermint/issues/4053)


@@ -0,0 +1,53 @@
# ADR 051: Double Signing Risk Reduction
## Changelog
* 27-11-2019: Initial draft
* 13-01-2020: Separated into 2 ADRs; this ADR covers only Double Signing Protection and ADR-052 handles Tendermint Mode
* 22-01-2020: change the title from "Double signing Protection" to "Double Signing Risk Reduction"
## Context
To provide a risk reduction method for double signing incidents mistakenly caused by validators:
- Validators often mistakenly run duplicated validator processes, causing double-signing incidents
- This proposed feature reduces the risk of mistaken double-signing incidents by checking the recent N blocks before voting begins
- Given the serious impact of a double-signing incident, it is very reasonable to have multiple risk reduction mechanisms built into the node daemon
## Decision
We would like to suggest a double signing risk reduction method.
- Methodology: query recent consensus results to find out whether the node's consensus key was recently used in consensus
- When to check
    - When the state machine starts `ConsensusReactor` after being fully synced
    - When the node is a validator (with privValidator)
    - When `cs.config.DoubleSignCheckHeight > 0`
- How to check
    1. When a validator transitions from syncing to fully synced status, the state machine checks the recent N blocks (from `latest_height - double_sign_check_height` to `latest_height`) to find out whether there exist consensus votes using the validator's consensus key, as sketched below
    2. If there exist votes from the validator's consensus key, exit the state machine program
- Configuration
    - We suggest introducing a `double_sign_check_height` parameter in `config.toml` and the CLI, specifying how many blocks the state machine looks back to check votes
        - <span v-pre>`double_sign_check_height = {{ .Consensus.DoubleSignCheckHeight }}`</span> in `config.toml`
        - `tendermint node --consensus.double_sign_check_height` in cli
        - The state machine skips the checking procedure when `double_sign_check_height == 0`
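A sketch of this look-back check (the `BlockStore` interface and the other names are stand-ins for the real types):
```go
// BlockStore is a stand-in for the node's block store.
type BlockStore interface {
    LoadSeenCommit(height int64) *types.Commit
}

// Sketch: scan the last checkHeight blocks for commit signatures made with
// this node's consensus key, and refuse to start if any are found.
func doubleSignCheck(store BlockStore, myAddr []byte, latestHeight, checkHeight int64) error {
    for h := latestHeight; h > latestHeight-checkHeight && h > 0; h-- {
        commit := store.LoadSeenCommit(h)
        if commit == nil {
            continue
        }
        for _, sig := range commit.Signatures {
            if bytes.Equal(sig.ValidatorAddress, myAddr) {
                return fmt.Errorf("consensus key was used to vote at height %d", h)
            }
        }
    }
    return nil
}
```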
## Status
Implemented
## Consequences
### Positive
- Validators can avoid double signing incidents caused by mistakes (e.g. if another validator node is voting in consensus, starting a new validator node with the same consensus key will cause a panic stop of the state machine, because consensus votes with that consensus key are found in recent blocks)
- We expect this method to prevent the majority of accidental double signing incidents.
### Negative
- When the risk reduction method is on, restarting a validator node will panic because the node itself voted in consensus with the same consensus key. Validators should therefore stop the state machine, wait for some blocks, and then restart the state machine to avoid the panic stop.
### Neutral
## References
- Issue [#4059](https://github.com/tendermint/tendermint/issues/4059) : double-signing protection


@@ -0,0 +1,85 @@
# ADR 052: Tendermint Mode
## Changelog
* 27-11-2019: Initial draft from ADR-051
* 13-01-2020: Separate ADR Tendermint Mode from ADR-051
* 29-03-2021: Update info regarding defaults
## Context
- Full mode: a full node does not have the capability to become a validator.
- Validator mode: this mode is exactly the same as the existing state machine behavior: sync without voting in consensus, and participate in consensus when fully synced.
- Seed mode: a lightweight seed node maintaining an address book over p2p, like [TenderSeed](https://gitlab.com/polychainlabs/tenderseed).
## Decision
We would like to suggest a simple Tendermint mode abstraction. These modes will live under one binary, and when initializing a node the user will be able to specify which node they would like to create.
- Which reactors and components to include for each node
    - full
        - switch, transport
        - reactors
            - mempool
            - consensus
            - evidence
            - blockchain
            - p2p/pex
            - statesync
        - rpc (safe connections only)
        - *~~no privValidator(priv_validator_key.json, priv_validator_state.json)~~*
    - validator
        - switch, transport
        - reactors
            - mempool
            - consensus
            - evidence
            - blockchain
            - p2p/pex
            - statesync
        - rpc (safe connections only)
        - with privValidator(priv_validator_key.json, priv_validator_state.json)
    - seed
        - switch, transport
        - reactor
            - p2p/pex
- Configuration, cli command
    - We suggest introducing a `mode` parameter in `config.toml` and the CLI
        - <span v-pre>`mode = "{{ .BaseConfig.Mode }}"`</span> in `config.toml`
        - `tendermint start --mode validator` in cli
        - full | validator | seednode
        - There will be no default. Users will need to specify one when they run `tendermint init`
- RPC modification
    - `host:26657/status`
        - return empty `validator_info` when in full mode
    - no rpc server in seednode
- Where to modify in codebase (a sketch follows this list)
    - Add a switch on `config.Mode` in `node/node.go:DefaultNewNode`
    - If `config.Mode==validator`, call the default `NewNode` (current logic)
    - If `config.Mode==full`, call `NewNode` with a `nil` `privValidator` (do not load or generate one)
        - Need to add exception routines for a `nil` `privValidator` in related functions
    - If `config.Mode==seed`, call `NewSeedNode` (a seed node version of `node/node.go:NewNode`)
        - Need to add exception routines for `nil` `reactor`, `component` in related functions
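A sketch of that switch (simplified; the real constructors take many more arguments, and `loadPrivValidator` is an illustrative helper):
```go
// Sketch of the mode switch in node/node.go:DefaultNewNode.
func DefaultNewNode(config *cfg.Config, logger log.Logger) (*Node, error) {
    switch config.Mode {
    case "validator":
        return NewNode(config, loadPrivValidator(config), logger) // current logic
    case "full":
        return NewNode(config, nil, logger) // nil privValidator: the node never signs
    case "seed":
        return NewSeedNode(config, logger) // pex reactor only, no RPC
    default:
        return nil, fmt.Errorf("unknown mode %q", config.Mode)
    }
}
```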
## Status
Implemented
## Consequences
### Positive
- Node operators can choose the mode in which to run the state machine according to the purpose of the node.
- Modes can prevent mistakes because users have to specify which mode they want to run via a flag (e.g. a user who wants to run a validator node must explicitly set the mode to validator).
- Different modes need different reactors, resulting in efficient resource usage.
### Negative
- Users need to learn how each mode operates and which capabilities it has.
### Neutral
## References
- Issue [#2237](https://github.com/tendermint/tendermint/issues/2237) : Tendermint "mode"
- [TenderSeed](https://gitlab.com/polychainlabs/tenderseed) : A lightweight Tendermint Seed Node.


@@ -0,0 +1,254 @@
# ADR 053: State Sync Prototype
State sync is now [merged](https://github.com/tendermint/tendermint/pull/4705). Up-to-date ABCI documentation is [available](https://github.com/tendermint/spec/pull/90), refer to it rather than this ADR for details.
This ADR outlines the plan for an initial state sync prototype, and is subject to change as we gain feedback and experience. It builds on discussions and findings in [ADR-042](./adr-042-state-sync.md), see that for background information.
## Changelog
* 2020-01-28: Initial draft (Erik Grinaker)
* 2020-02-18: Updates after initial prototype (Erik Grinaker)
    * ABCI: added missing `reason` fields.
    * ABCI: used 32-bit 1-based chunk indexes (was 64-bit 0-based).
    * ABCI: moved `RequestApplySnapshotChunk.chain_hash` to `RequestOfferSnapshot.app_hash`.
    * Gaia: snapshots must include node versions as well, both for inner and leaf nodes.
    * Added experimental prototype info.
    * Added open questions and implementation plan.
* 2020-03-29: Strengthened and simplified ABCI interface (Erik Grinaker)
    * ABCI: replaced `chunks` with `chunk_hashes` in `Snapshot`.
    * ABCI: removed `SnapshotChunk` message.
    * ABCI: renamed `GetSnapshotChunk` to `LoadSnapshotChunk`.
    * ABCI: chunks are now exchanged simply as `bytes`.
    * ABCI: chunks are now 0-indexed, for parity with `chunk_hashes` array.
    * Reduced maximum chunk size to 16 MB, and increased snapshot message size to 4 MB.
* 2020-04-29: Update with final released ABCI interface (Erik Grinaker)
## Context
State sync will allow a new node to receive a snapshot of the application state without downloading blocks or going through consensus. This bootstraps the node significantly faster than the current fast sync system, which replays all historical blocks.
Background discussions and justifications are detailed in [ADR-042](./adr-042-state-sync.md). Its recommendations can be summarized as:
* The application periodically takes full state snapshots (i.e. eager snapshots).
* The application splits snapshots into smaller chunks that can be individually verified against a chain app hash.
* Tendermint uses the light client to obtain a trusted chain app hash for verification.
* Tendermint discovers and downloads snapshot chunks in parallel from multiple peers, and passes them to the application via ABCI to be applied and verified against the chain app hash.
* Historical blocks are not backfilled, so state synced nodes will have a truncated block history.
## Tendermint Proposal
This describes the snapshot/restore process seen from Tendermint. The interface is kept as small and general as possible to give applications maximum flexibility.
### Snapshot Data Structure
A node can have multiple snapshots taken at various heights. Snapshots can be taken in different application-specified formats (e.g. MessagePack as format `1` and Protobuf as format `2`, or similarly for schema versioning). Each snapshot consists of multiple chunks containing the actual state data, for parallel downloads and reduced memory usage.
```proto
message Snapshot {
  uint64 height = 1;   // The height at which the snapshot was taken
  uint32 format = 2;   // The application-specific snapshot format
  uint32 chunks = 3;   // Number of chunks in the snapshot
  bytes  hash = 4;     // Arbitrary snapshot hash - should be equal only for identical snapshots
  bytes  metadata = 5; // Arbitrary application metadata
}
```
Chunks are exchanged simply as `bytes`, and cannot be larger than 16 MB. `Snapshot` messages should be less than 4 MB.
### ABCI Interface
```proto
// Lists available snapshots
message RequestListSnapshots {}

message ResponseListSnapshots {
  repeated Snapshot snapshots = 1;
}

// Offers a snapshot to the application
message RequestOfferSnapshot {
  Snapshot snapshot = 1; // snapshot offered by peers
  bytes app_hash = 2;    // light client-verified app hash for snapshot height
}

message ResponseOfferSnapshot {
  Result result = 1;

  enum Result {
    accept = 0;        // Snapshot accepted, apply chunks
    abort = 1;         // Abort all snapshot restoration
    reject = 2;        // Reject this specific snapshot, and try a different one
    reject_format = 3; // Reject all snapshots of this format, and try a different one
    reject_sender = 4; // Reject all snapshots from the sender(s), and try a different one
  }
}

// Loads a snapshot chunk
message RequestLoadSnapshotChunk {
  uint64 height = 1;
  uint32 format = 2;
  uint32 chunk = 3; // Zero-indexed
}

message ResponseLoadSnapshotChunk {
  bytes chunk = 1;
}

// Applies a snapshot chunk
message RequestApplySnapshotChunk {
  uint32 index = 1;
  bytes chunk = 2;
  string sender = 3;
}

message ResponseApplySnapshotChunk {
  Result result = 1;
  repeated uint32 refetch_chunks = 2; // Chunks to refetch and reapply (regardless of result)
  repeated string reject_senders = 3; // Chunk senders to reject and ban (regardless of result)

  enum Result {
    accept = 0;          // Chunk successfully accepted
    abort = 1;           // Abort all snapshot restoration
    retry = 2;           // Retry chunk, combine with refetch and reject as appropriate
    retry_snapshot = 3;  // Retry snapshot, combine with refetch and reject as appropriate
    reject_snapshot = 4; // Reject this snapshot, try a different one but keep sender rejections
  }
}
```
### Taking Snapshots
Tendermint is not aware of the snapshotting process at all; it is entirely an application concern. The following guarantees must be provided:
* **Periodic:** snapshots must be taken periodically, not on-demand, for faster restores, lower load, and less DoS risk.
* **Deterministic:** snapshots must be deterministic, and identical across all nodes - typically by taking a snapshot at given height intervals.
* **Consistent:** snapshots must be consistent, i.e. not affected by concurrent writes - typically by using a data store that supports versioning and/or snapshot isolation.
* **Asynchronous:** snapshots must be asynchronous, i.e. not halt block processing and state transitions.
* **Chunked:** snapshots must be split into chunks of reasonable size (on the order of megabytes), and each chunk must be verifiable against the chain app hash.
* **Garbage collected:** snapshots must be garbage collected periodically.
### Restoring Snapshots
Nodes should have options for enabling state sync and/or fast sync, and be provided a trusted header hash for the light client.
When starting an empty node with state sync and fast sync enabled, snapshots are restored as follows:
1. The node checks that it is empty, i.e. that it has no state nor blocks.
2. The node contacts the given seeds to discover peers.
3. The node contacts a set of full nodes, and verifies the trusted block header using the given hash via the light client.
4. The node requests available snapshots via P2P from peers, via `RequestListSnapshots`. Peers will return the 10 most recent snapshots, one message per snapshot.
5. The node aggregates snapshots from multiple peers, ordered by height and format (in reverse). If there are mismatches between different snapshots, the one hosted by the largest number of peers is chosen. The node iterates over all snapshots in reverse order by height and format until it finds one that satisfies all of the following conditions:
* The snapshot height's block is considered trustworthy by the light client (i.e. snapshot height is greater than trusted header and within unbonding period of the latest trustworthy block).
* The snapshot's height or format hasn't been explicitly rejected by an earlier `RequestOfferSnapshot`.
* The application accepts the `RequestOfferSnapshot` call.
6. The node downloads chunks in parallel from multiple peers, via `RequestLoadSnapshotChunk`. Chunk messages cannot exceed 16 MB.
7. The node passes chunks sequentially to the app via `RequestApplySnapshotChunk` (see the sketch after this list).
8. Once all chunks have been applied, the node compares the app hash to the chain app hash, and if they do not match it either errors or discards the state and starts over.
9. The node switches to fast sync to catch up blocks that were committed while restoring the snapshot.
10. The node switches to normal consensus mode.
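A sketch of steps 6-8 (the `ChunkSource` and `SnapshotApp` interfaces and the `Snapshot` struct fields are stand-ins mirroring the messages above; real chunk downloads happen in parallel):
```go
// Stand-in interfaces for the sketch; the real counterparts are the ABCI
// client and the state sync reactor's chunk fetching.
type ChunkSource interface {
    Fetch(height uint64, format, chunk uint32) (data []byte, sender string)
}
type SnapshotApp interface {
    ApplySnapshotChunk(index uint32, chunk []byte, sender string) (accepted bool)
    AppHash() []byte
}

// Sketch: apply chunks sequentially, then compare the restored app hash
// against the light-client-verified chain app hash.
func restore(app SnapshotApp, src ChunkSource, s Snapshot, trustedAppHash []byte) error {
    for i := uint32(0); i < s.Chunks; i++ {
        chunk, sender := src.Fetch(s.Height, s.Format, i)
        if !app.ApplySnapshotChunk(i, chunk, sender) {
            return fmt.Errorf("chunk %d not accepted", i)
        }
    }
    if !bytes.Equal(app.AppHash(), trustedAppHash) {
        return errors.New("restored app hash does not match chain app hash")
    }
    return nil // next: fast sync to catch up, then normal consensus
}
```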
## Gaia Proposal
This describes the snapshot process seen from Gaia, using format version `1`. The serialization format is unspecified, but likely to be compressed Amino or Protobuf.
### Snapshot Metadata
In the initial version there is no snapshot metadata, so it is set to an empty byte buffer.
Once all chunks have been successfully built, snapshot metadata should be stored in a database and served via `RequestListSnapshots`.
### Snapshot Chunk Format
The Gaia data structure consists of a set of named IAVL trees. A root hash is constructed by taking the root hashes of each of the IAVL trees, then constructing a Merkle tree of the sorted name/hash map.
IAVL trees are versioned, but a snapshot only contains the version relevant for the snapshot height. All historical versions are ignored.
IAVL trees are insertion-order dependent, so key/value pairs must be set in an appropriate insertion order to produce the same tree branching structure. This insertion order can be found by doing a breadth-first scan of all nodes (including inner nodes) and collecting unique keys in order. However, the node hash also depends on the node's version, so snapshots must contain the inner nodes' version numbers as well.
For the initial prototype, each chunk consists of a complete dump of all node data for all nodes in an entire IAVL tree. Thus the number of chunks equals the number of persistent stores in Gaia. No incremental verification of chunks is done, only a final app hash comparison at the end of the snapshot restoration.
For a production version, it should be sufficient to store key/value/version for all nodes (leaf and inner) in insertion order, chunked in some appropriate way. If per-chunk verification is required, the chunk must also contain enough information to reconstruct the Merkle proofs all the way up to the root of the multistore, e.g. by storing a complete subtree's key/value/version data plus Merkle hashes of all other branches up to the multistore root. The exact approach will depend on tradeoffs between size, time, and verification. IAVL RangeProofs are not recommended, since these include redundant data such as proofs for intermediate and leaf nodes that can be derived from the above data.
Chunks should be built greedily by collecting node data up to some size limit (e.g. 10 MB) and serializing it. Chunk data is stored in the file system as `snapshots/<height>/<format>/<chunk>`, and a SHA-256 checksum is stored along with the snapshot metadata.
### Snapshot Scheduling
Snapshots should be taken at some configurable height interval, e.g. every 1000 blocks. All nodes should preferably have the same snapshot schedule, such that all nodes can serve chunks for a given snapshot.
Taking consistent snapshots of IAVL trees is greatly simplified by them being versioned: simply snapshot the version that corresponds to the snapshot height, while concurrent writes create new versions. IAVL pruning must not prune a version that is being snapshotted.
Snapshots must also be garbage collected after some configurable time, e.g. by keeping the latest `n` snapshots.
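Putting these scheduling rules together, a sketch (assuming hypothetical `snapshotInterval` and `keepRecent` config values and illustrative helper functions):
```go
// Sketch: decide at commit time whether to snapshot, without halting block
// processing, and garbage collect older snapshots.
func maybeSnapshot(height, snapshotInterval uint64, keepRecent int) {
    if snapshotInterval == 0 || height%snapshotInterval != 0 {
        return
    }
    go func() {
        takeSnapshotAt(height)     // snapshot the IAVL version for this height
        pruneSnapshots(keepRecent) // keep only the latest keepRecent snapshots
    }()
}
```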
## Resolved Questions
* Is it OK for state-synced nodes to not have historical blocks nor historical IAVL versions?
> Yes, this is as intended. Maybe backfill blocks later.
* Do we need incremental chunk verification for first version?
> No, we'll start simple. Can add chunk verification via a new snapshot format without any breaking changes in Tendermint. For adversarial conditions, maybe consider support for whitelisting peers to download chunks from.
* Should the snapshot ABCI interface be a separate optional ABCI service, or mandatory?
> Mandatory, to keep things simple for now. It will therefore be a breaking change and push the release. For apps using the Cosmos SDK, we can provide a default implementation that does not serve snapshots and errors when trying to apply them.
* How can we make sure `ListSnapshots` data is valid? An adversary can provide fake/invalid snapshots to DoS peers.
> For now, just pick snapshots that are available on a large number of peers. Maybe support whitelisting. We may consider e.g. placing snapshot manifests on the blockchain later.
* Should we punish nodes that provide invalid snapshots? How?
> No, these are full nodes not validators, so we can't punish them. Just disconnect from them and ignore them.
* Should we call these snapshots? The SDK already uses the term "snapshot" for `PruningOptions.SnapshotEvery`, and state sync will introduce additional SDK options for snapshot scheduling and pruning that are not related to IAVL snapshotting or pruning.
> Yes. Hopefully these concepts are distinct enough that we can refer to state sync snapshots and IAVL snapshots without too much confusion.
* Should we store snapshot and chunk metadata in a database? Can we use the database for chunks?
> As a first approach, store metadata in a database and chunks in the filesystem.
* Should a snapshot at height H be taken before or after the block at H is processed? E.g. RPC `/commit` returns app_hash after _previous_ height, i.e. _before_ current height.
> After commit.
* Do we need to support all versions of blockchain reactor (i.e. fast sync)?
> We should remove the v1 reactor completely once v2 has stabilized.
* Should `ListSnapshots` be a streaming API instead of a request/response API?
> No, just use a max message size.
## Status
Implemented
## References
* [ADR-042](./adr-042-state-sync.md) and its references


@@ -0,0 +1,71 @@
# ADR 054: Crypto encoding (part 2)
## Changelog
2020-2-27: Created
2020-4-16: Update
## Context
Amino has been a pain point of many users in the ecosystem. While Tendermint does not suffer greatly from the performance degradation introduced by amino, we are making an effort in moving the encoding format to a widely adopted format, [Protocol Buffers](https://developers.google.com/protocol-buffers). With this migration a new standard is needed for the encoding of keys. This will cause ecosystem wide breaking changes.
Currently amino encodes keys as `<PrefixBytes> <Length> <ByteArray>`.
## Decision
Previously Tendermint defined all the key types for use in Tendermint and the Cosmos-SDK. Going forward the Cosmos-SDK will define its own protobuf type for keys. This will allow Tendermint to only define the keys that are being used in the codebase (ed25519).
There is the opportunity to only define the usage of ed25519 (`bytes`) and not have it be a `oneof`, but this would mean that the `oneof` work is only being postponed to a later date. When using the `oneof` protobuf type we will have to manually switch over the possible key types and then pass them to the required interface.
The approach that will be taken to minimize headaches for users is one where all encoding of keys will shift to protobuf; where amino encoding is relied on, there will be custom marshal and unmarshal functions.
Protobuf messages:
```proto
message PubKey {
  oneof key {
    bytes ed25519 = 1;
  }
}

message PrivKey {
  oneof sum {
    bytes ed25519 = 1;
  }
}
```
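For illustration, a conversion helper of the kind implied by the "custom marshal and unmarshal functions" above might look as follows (the generated `pc` package and its wrapper type names are assumptions based on the message above):
```go
// Sketch: convert a handwritten ed25519 public key into its protobuf form.
func PubKeyToProto(k crypto.PubKey) (pc.PubKey, error) {
    switch k := k.(type) {
    case ed25519.PubKey:
        return pc.PubKey{Key: &pc.PubKey_Ed25519{Ed25519: k}}, nil
    default:
        return pc.PubKey{}, fmt.Errorf("key type %T is not supported", k)
    }
}
```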
> Note: The places where backwards compatibility is needed is still unclear.
All modules currently do not rely on amino encoded bytes and keys are not amino encoded for genesis, therefore a hardfork upgrade is what will be needed to adopt these changes.
This work will be broken out into a few PRs and merged into a proto-breakage branch; all PRs will be reviewed prior to being merged:
1. Encoding of keys to protobuf and protobuf messages
2. Move Tendermint types to protobuf, mainly the ones that are being encoded.
3. Go one by one through the reactors and transition amino encoded messages to protobuf.
4. Test with cosmos-sdk and/or testnets repo.
## Status
Implemented
## Consequences
- Move keys to protobuf encoding; where backwards compatibility is needed, amino marshal and unmarshal functions will be used.
### Positive
- Protocol Buffer encoding will not change going forward.
- Removing amino overhead from keys will help with the KSM.
- Have a large ecosystem of supported languages.
### Negative
- Hardfork is required to integrate this into running chains.
### Neutral
## References


@@ -0,0 +1,61 @@
# ADR 055: Protobuf Design
## Changelog
- 2020-4-15: Created (@marbar3778)
- 2020-6-18: Updated (@marbar3778)
## Context
Currently we use [go-amino](https://github.com/tendermint/go-amino) throughout Tendermint. Amino is not being maintained anymore (April 15, 2020) by the Tendermint team and has been found to have issues:
- https://github.com/tendermint/go-amino/issues/286
- https://github.com/tendermint/go-amino/issues/230
- https://github.com/tendermint/go-amino/issues/121
These are a few of the known issues that users could run into.
Amino enables quick prototyping and development of features. While this is nice, amino does not provide the performance and developer convenience that is expected. For Tendermint to see wider adoption as a BFT protocol engine, a transition to an adopted encoding format is needed. Below are some possible options that can be explored.
There are a few options to pick from:
- `Protobuf`: Protocol Buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data: think XML, but smaller, faster, and simpler. It is supported in countless languages and has been proven in production for many years.
- `FlatBuffers`: FlatBuffers is an efficient cross-platform serialization library. FlatBuffers is more efficient than Protobuf due to the fact that there is no parsing/unpacking to a second representation. FlatBuffers has been tested and used in production but is not widely adopted.
- `CapnProto`: Cap'n Proto is an insanely fast data interchange format and capability-based RPC system. Cap'n Proto does not have an encoding/decoding step. It has not seen wide adoption throughout the industry.
- @erikgrinaker - https://github.com/tendermint/tendermint/pull/4623#discussion_r401163501
```
Cap'n'Proto is awesome. It was written by one of the original Protobuf developers to fix some of its issues, and supports e.g. random access to process huge messages without loading them into memory and an (opt-in) canonical form which would be very useful when determinism is needed (e.g. in the state machine). That said, I suspect Protobuf is the better choice due to wider adoption, although it makes me kind of sad since Cap'n'Proto is technically better.
```
## Decision
Transition Tendermint to Protobuf because of its performance and tooling. The ecosystem behind Protobuf is vast and has outstanding [support for many languages](https://developers.google.com/protocol-buffers/docs/tutorials).
We will make this possible by keeping the current types in their current form (handwritten) and creating a `/proto` directory in which all the `.proto` files will live. Where encoding is needed, on disk and over the wire, we will call util functions that translate the handwritten Go types to the protobuf generated types. This is in line with the recommended file structure from [buf](https://buf.build). You can find more information on this file structure [here](https://buf.build/docs/lint-checkers#file_layout).
By going with this design we will enable future changes to types and allow for a more modular codebase.
## Status
Implemented
## Consequences
### Positive
- Allows for modular types in the future
- Less refactoring
- Allows the proto files to be pulled into the spec repo in the future.
- Performance
- Tooling & support in multiple languages
### Negative
- When a developer is updating a type they need to make sure to update the proto type as well
### Neutral
## References


@@ -0,0 +1,170 @@
# ADR 056: Light client amnesia attacks
## Changelog
- 02.04.20: Initial Draft
- 06.04.20: Second Draft
- 10.06.20: Post Implementation Revision
- 19.08.20: Short Term Amnesia Alteration
- 01.10.20: Status of Amnesia for 0.34
## Context
Whilst most created evidence of malicious behavior is self-evident, such that any individual can verify it independently, there are types of evidence, known collectively as global evidence, that require further collaboration from the network in order to accumulate enough information to create evidence that is individually verifiable and can therefore be processed through consensus. The term [Fork Accountability](https://github.com/tendermint/tendermint/blob/master/spec/consensus/light-client/accountability.md) has been coined to describe the entire process of detecting, proving and punishing malicious behavior. This ADR addresses specifically what a light client amnesia attack is, how it can be proven, and the current decision around handling light client amnesia attacks. For information on evidence handling by the light client, it is recommended to read [ADR 47](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-047-handling-evidence-from-light-client.md).
### Amnesia Attack
The schematic below explains a scenario where an amnesia attack can occur such that two sets of honest nodes, C1 and C2, commit different blocks.
![](../imgs/tm-amnesia-attack.png)
1. C1 and F send PREVOTE messages for block A.
2. C1 sends PRECOMMIT for round 1 for block A.
3. A new round is started, C2 and F send PREVOTE messages for a different block B.
4. C2 and F then send PRECOMMIT messages for block B.
5. F later on creates PRECOMMITs for block A and combines them with those from C1 to form a block.
This forged block can then be used to fool light clients trying to verify it. It must be stressed that there are a few more hurdles or dimensions to the attack to consider. For a more detailed walkthrough refer to Appendix A.
## Decision
The decision surrounding amnesia attacks has both a short term and a long term component. In the long term, a more sturdy protocol will need to be fleshed out and implemented. There are already draft documents outlining what such a protocol would look like and the resources it would require (see references). Prior revisions however outlined a protocol which had been implemented (see Appendix B). It was agreed that it still required greater consideration and review given its importance. It was therefore discussed, within the limited time frame set before 0.34, whether the protocol should be completely removed or if there should remain some logic in handling the aforementioned scenarios.
The latter of the two options meant storing a record of all votes in any height with which there was more than one round. This information would then be accessible for applications if they wanted to perform some off-chain verification and punishment.
In summary, this seemed like too much to ask of applications to implement only on a temporary basis, given that they lack the domain-specific knowledge and considering such a difficult and unlikely attack. Therefore the short term decision is to identify when the attack has occurred and implement the detector algorithm highlighted in [ADR 47](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-047-handling-evidence-from-light-client.md), but not to implement any accountability protocol that would identify malicious validators and allow applications to punish them. This will hopefully change in the long term with the focus on eventually reaching a concrete and secure protocol for identifying and dealing with these attacks.
## Implications
- Light clients will still be able to detect amnesia attacks so long as the assumption of having at least one correct witness holds
- Light clients will gossip the attack to witnesses and halt thus failing to validate the incorrect block (and therefore not being fooled)
- Validators will propose and commit evidence of the amnesia attack on chain
- No evidence will be passed to the application indicting any malicious validators, thus meaning that no malicious validators will be punished for performing the attack
- If a light client's bubble of providers is entirely faulty, the light client will falsely validate amnesia attacks, as well as any other 1/3+ light client attack.
## Status
Implemented
## Consequences
### Positive
Light clients are still able to prevent falsely validating a block.
Already implemented.
### Negative
Light clients where all witnesses are faulty can be subject to an amnesia attack and verify a forged block that is not part of the chain.
### Neutral
## References
- [Fork accountability algorithm](https://docs.google.com/document/d/11ZhMsCj3y7zIZz4udO9l25xqb0kl7gmWqNpGVRzOeyY/edit)
- [Fork accountability spec](https://github.com/tendermint/tendermint/blob/master/spec/consensus/light-client/accountability.md)
## Appendix A: Detailed Walkthrough of Performing a Light Client Amnesia Attack
As the attacker, a prerequisite to this attack is first to observe or attempt to craft a block where a subset (less than ⅓) of correct validators sent precommit votes for a proposal in an earlier round and later received ⅔ prevotes for a different proposal thus changing their lock and correctly sending precommit votes (and later committing) for the proposal in the latter round. The second prerequisite is to have at least ⅓ validating power in that height (or enough voting power to have ⅔+ when combined with the precommits of the earlier round).
To go back to how one may craft such a block, we begin with one of the validators in this cabal being the proposer. They propose a block with all the txs that they want to fool a light client with. The proposer then only relays this to the members of their cabal and a controlled subset of correct validators (less than ⅓). We will call ourselves f for faulty and c1 for this correct subset.
Attackers need to rely on the assistance of some form of network partition or on the nature of the sporadic voting to conjure their desired environment. The attackers need at least ⅓ of the validating power of the remaining correct validators, which we shall denote as c2, to not see ⅔ prevotes and thus not be locked on a block when it comes to the next round. If we have fewer than ⅓ remaining validators that don't see this first proposal, then we will not have enough voting power to reach ⅔+ prevotes (the sum of f and c2) in the following round and thus change the lock of c1 such that we correctly commit the block in the latter round yet have enough precommits in the earlier round to fool the light client. Remember this is our desired scenario: to save all these precommit votes for a different (in this case earlier) proposed block.
To try to break this down even further lets go back to the first round. F sends c1 a proposal (and not c2), c1 in turn sends their prevotes to all whom they are connected to. This means that some will be received by c2. F then sends their prevotes just to c1. Now not all validators in c1 may be connected to each other, so perhaps some validators in c1 might not receive ⅔ (from their own cohort and from f) and thus not precommit. In other situations we may see a validator in c2 connected to all validators in c1. Therefore they too will receive ⅔ prevotes and thus precommit. We can conclude therefore that although targeting this c1 subset of validators, those that actually precommit may be somewhat different. The key is for the attackers to observe the n amount of precommits they need in round 1 where n is ⅔+ - f, whilst ensuring that n itself does not go over ⅓. If it does then less than ⅔ validators remain to be able to change the lock and commit the block in the later round.
An extra dimension to this puzzle is the timeouts. Whilst c1 is relaying votes to its peers, and these validators count closer towards the ⅔ threshold needed to send their precommit votes, at any moment the timeout could be reached, causing the nodes to precommit nil and ignore any late prevote messages.
This is all to say that such an attack is partly out of the attackers' hands. All they can do is tweak the subset of validators that they first choose to gossip the proposal to and modify the timings around when they send their prevotes until they reach the desired precondition: n precommits for an earlier proposal and ⅔ precommits for the later proposal. So it is up to the gods of non-deterministic behavior to help them out with their plight. I'm not going to allocate the hours to calculate the probability, but it could be in the magnitude of thousands of blocks of trying before the precondition is met.
Obviously, the probability becomes substantially higher as the cabal's voting power nears ⅔. This is because n decreases and there is greater tolerance to send prevotes to a greater number of validators without going overboard and reaching the ⅓ precommit threshold in the first round, which would mean they would have to try again.
Once we've got our n, we can then forge the remaining signatures for that block (from f), bundle them all together, and ta-da, we have a forged signed header.
Now that we've done that, it's time to find some light clients to fool.
Also critical to this type of attack is that the light client connected to our nodes must request a light block at the specific height at which we forged this signed header, but this shouldn't be hard to arrange. To bring this back to a real context, say our faulty cabal, f, bought some groceries using atoms and then wanted to prove that they did: the grocery owner whips out their phone, runs the light client, and f tells them the height at which they committed the transaction.
An important note here is that because the validator sets are the same between the canonical and the forged block, this attack also works on light clients that verify sequentially. In fact, they are especially vulnerable because they currently don't run the detector function afterwards.
However, if our grocery owner verifies using the skipping algorithm, they will then run the detector and therefore they will compare with other witness nodes. Ideally for our attackers, if f has a lot of nodes exposing their rpc endpoints, then there is a chance that all the witnesses the light client has are faulty and thus we have a successful attack and the grocery owner has been fooled into handing f a few apples and carrots.
However, there is a greater chance, especially if the light client is connected to quite a few other nodes that a divergence will be detected. The light client will figure out there was an amnesia attack and send the evidence to the witness to commit on chain. The grocery owner will see that verification failed and won't hand over the apples or carrots but also f won't be punished for their villainous behavior. This means that they can go over to the hairdressers and see if they can pull off the same stunt again.
So this brings to the fore the current defenses that are in place. As long as there has not been a cabal of validators with greater than 1/3 power (or the trust level), the light client's verification algorithm will prevent any attempts to deceive it. Greater than this threshold and we rely on the detector as a second layer of defense to pick up on any attack. Its security is chiefly tied to the assumption that at least one of the witnesses is correct. If this fails then, as illustrated above, the light client can be susceptible to amnesia (as well as equivocation and lunatic) attacks.
The outstanding problem, if we indeed consider it big enough to be one, therefore lies in the incentivisation mechanism which is how f and other malicious validators are punished. This is decided by the application but it's up to Tendermint to identify them. With other forms of attacks the evidence lies in the precommits. But because an amnesia attack uses precommits from another round, which is information that is discarded by the consensus engine once the block is committed, it is difficult to understand which validators were in fact faulty.
If we cast our minds back to what I previously wrote, part of an amnesia attack depends on getting n precommits from an earlier round. These are then bundled with the malicious validators' own signatures. This means that neither the light client nor full nodes are capable of distinguishing which of the signatures were correctly created as part of Tendermint consensus and which were forged later on.
## Appendix B: Prior Amnesia Evidence Accountability Implementation
As the distinction between these two attacks (amnesia and back to the past) can only be distinguished by confirming with all validators (to see if it is a full fork or a light fork), for the purpose of simplicity, these attacks will be treated as the same.
Currently, the evidence reactor is used to simply broadcast and store evidence. The idea of creating a new reactor for the specific task of verifying these attacks was briefly discussed, but it is decided that the current evidence reactor will be extended.
The process begins with a light client receiving conflicting headers (in the future this could also be a full node during fast sync or state sync), which it sends to a full node to analyze. As part of [evidence handling](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-047-handling-evidence-from-light-client.md), this is extracted into potential amnesia evidence when the validator voted in more than one round for a different block.
```golang
type PotentialAmnesiaEvidence struct {
    VoteA *types.Vote
    VoteB *types.Vote
    Heightstamp int64
}
```
*NOTE: There had been an earlier notion towards batching evidence against the entire set of validators all together but this has given way to individual processing predominantly to maintain consistency with the other forms of evidence. A more extensive breakdown can be found [here](https://github.com/tendermint/tendermint/issues/4729)*
The evidence will contain the precommit votes for a validator that voted in both rounds. If the validator voted in more than two rounds, then they will have multiple `PotentialAmnesiaEvidence` against them; hence it is possible that there are multiple pieces of evidence against a validator in a single height, but not for a single round. The votes should all be valid, and the height and time at which the infringement was made should be within:
`MaxEvidenceAge - ProofTrialPeriod`
This trial period will be discussed later.
Returning to the event of an amnesia attack, if we were to examine the behavior of the honest nodes, C1 and C2, in the schematic, C2 will not PRECOMMIT an earlier round, but it is likely, if a node in C1 were to receive +2/3 PREVOTE's or PRECOMMIT's for a higher round, that it would remove the lock and PREVOTE and PRECOMMIT for the later round. Therefore, unfortunately it is not a case of simply punishing all nodes that have double voted in the `PotentialAmnesiaEvidence`.
Instead we use the Proof of Lock Change (PoLC) referred to in the [consensus spec](https://github.com/tendermint/tendermint/blob/master/spec/consensus/consensus.md#terms). When an honest node votes again for a different block in a later round
(which will only occur in very rare cases), it will generate the PoLC and store it in the evidence reactor for a time equal to `MaxEvidenceAge`:
```golang
type ProofOfLockChange struct {
    Votes  []*types.Vote
    PubKey crypto.PubKey
}
```
This can be either evidence of +2/3 PREVOTEs or PRECOMMITs (either grants the honest node the right to vote) and is valid, among other checks, so long as the PRECOMMIT vote of the node in V2 came after all the votes in the `ProofOfLockChange`, i.e. it received +2/3 votes for a block and then voted for that block thereafter (F is unable to prove this).
In the event that an honest node receives `PotentialAmnesiaEvidence` it will first `ValidateBasic()` and `Verify()` it, and then check whether it is among the suspected nodes in the evidence. If so, it will retrieve the `ProofOfLockChange` and combine it with the `PotentialAmnesiaEvidence` to form `AmnesiaEvidence`. All honest nodes that are part of the indicted group will have a time, measured in blocks, equal to `ProofTrialPeriod`, the aforementioned evidence parameter, to gossip their `AmnesiaEvidence` with their `ProofOfLockChange`:
```golang
type AmnesiaEvidence struct {
    *types.PotentialAmnesiaEvidence
    Polc *types.ProofOfLockChange
}
```
If the node is not required to submit any proof then it will simply broadcast the `PotentialAmnesiaEvidence`, stamp the height at which it received the evidence, and begin to wait out the trial period. It will ignore other `PotentialAmnesiaEvidence` gossiped at the same height and round.
If a node receives `AmnesiaEvidence` that contains a valid `ProofOfLockChange` it will add it to the evidence store and replace any `PotentialAmnesiaEvidence` of the same height and round. At this stage, an amnesia evidence with a polc is ready to be submitted to the chain. If a node receives `AmnesiaEvidence` with an empty polc it will ignore it, as each honest node will conduct its own trial period to be sure that time was given for any other honest nodes to respond.
There can only be one `AmnesiaEvidence` and one `PotentialAmnesiaEvidence` stored for each attack (i.e. for each height).
When `state.LastBlockHeight > PotentialAmnesiaEvidence.timestamp + ProofTrialPeriod`, nodes will upgrade the corresponding `PotentialAmnesiaEvidence` and attach an empty `ProofOfLockChange`, as sketched below. Then honest validators of the current validator set can begin proposing the block that contains the `AmnesiaEvidence`.
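In rough terms the upgrade step might have looked like this (a sketch; `evpool`, `proofTrialPeriod` and the helper calls are illustrative, not the removed implementation):
```golang
// Sketch: once the trial period has elapsed without a PoLC appearing, upgrade
// the potential evidence with an empty polc so it can be proposed on-chain.
if state.LastBlockHeight > ev.Heightstamp+proofTrialPeriod {
    evpool.AddEvidence(&types.AmnesiaEvidence{
        PotentialAmnesiaEvidence: ev,
        Polc:                     &types.ProofOfLockChange{}, // empty polc
    })
}
```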
*NOTE: Even before the evidence is proposed and committed, the off-chain process of gossiping valid evidence could be
enough for honest nodes to recognize the fork and halt.*
Other validators will vote `nil` if:
- The Amnesia Evidence is not valid
- The Amnesia Evidence is not within their own trial period, i.e. it arrived too soon
- They don't have the Amnesia Evidence and it has an empty polc (each validator needs to run their own trial period for the evidence)
- It is an AmnesiaEvidence that has already been committed to the chain
Finally, it is important to stress that the protocol of having a trial period addresses attacks where a validator voted again for a different block at a later round and time. In the event, however, that the validator voted for an earlier round after voting for a later round, i.e. `VoteA.Timestamp < VoteB.Timestamp && VoteA.Round > VoteB.Round`, then this action is inexcusable and can be punished immediately without the need for a trial period. In this case, `PotentialAmnesiaEvidence` will be instantly upgraded to `AmnesiaEvidence`.


@@ -0,0 +1,90 @@
# ADR 057: RPC
## Changelog
- 19-05-2020: created
## Context
Currently the RPC layer of Tendermint is using a variant of the JSON-RPC protocol. This ADR is meant to serve as a pro/con list for possible alternatives and JSON-RPC.
There are currently two options being discussed: gRPC & JSON-RPC.
### JSON-RPC
JSON-RPC is a JSON-based RPC protocol. Tendermint has implemented its own variant of JSON-RPC which is not compatible with the [JSON-RPC 2.0 specification](https://www.jsonrpc.org/specification).
**Pros:**
- Easy to use & implement (by default)
- Well-known and well-understood by users and integrators
- Integrates reasonably well with web infrastructure (proxies, API gateways, service meshes, caches, etc)
- human readable encoding (by default)
**Cons:**
- No schema support
- RPC clients must be hand-written
- Streaming not built into protocol
- Underspecified types (e.g. numbers and timestamps)
- Tendermint has its own implementation (not standards compliant, maintenance overhead)
- High maintenance cost associated with this
- Stdlib `jsonrpc` package only supports JSON-RPC 1.0, no dominant package for JSON-RPC 2.0
- Tooling around documentation/specification (e.g. Swagger) could be better
- JSON data is larger (offset by HTTP compression)
- Serializing is slow ([~100% marshal, ~400% unmarshal](https://github.com/alecthomas/go_serialization_benchmarks)); insignificant in absolute terms
- Specification was last updated in 2013 and is way behind Swagger/OpenAPI
### gRPC + gRPC-gateway (REST + Swagger)
gRPC is a high-performance RPC framework. It has been battle tested by a large number of users and is heavily relied on and maintained by countless large corporations.
**Pros:**
- Efficient data retrieval for users, lite clients and other protocols
- Easily implemented in supported languages (Go, Dart, JS, TS, rust, Elixir, Haskell, ...)
- Defined schema with richer type system (Protocol Buffers)
- Can use common schemas and types across all protocols and data stores (RPC, ABCI, blocks, etc)
- Established conventions for forwards- and backwards-compatibility
- Bi-directional streaming
- Servers and clients can be autogenerated in many languages (e.g. Tendermint-rs)
- Auto-generated swagger documentation for REST API
- Backwards and forwards compatibility guarantees enforced at the protocol level.
- Can be used with different codecs (JSON, CBOR, ...)
**Cons:**
- Complex system involving cross-language schemas, code generation, and custom protocols
- Type system does not always map cleanly to native language type system; integration woes
- Many common types require Protobuf plugins (e.g. timestamps and duration)
- Generated code may be non-idiomatic and hard to use
- Migration will be disruptive and laborious
## Decision
> This section explains all of the details of the proposed solution, including implementation details.
> It should also describe affects / corollary items that may need to be changed as a part of this.
> If the proposed change will be large, please also indicate a way to do the change to maximize ease of review.
> (e.g. the optimal split of things to do between separate PR's)
## Status
> A decision may be "proposed" if it hasn't been agreed upon yet, or "accepted" once it is agreed upon. If a later ADR changes or reverses a decision, it may be marked as "deprecated" or "superseded" with a reference to its replacement.
{Deprecated|Proposed|Accepted}
## Consequences
> This section describes the consequences, after applying the decision. All consequences should be summarized here, not just the "positive" ones.
### Positive
### Negative
### Neutral
## References


@@ -0,0 +1,122 @@
# ADR 058: Event hashing
## Changelog
- 2020-07-17: initial version
- 2020-07-27: fixes after Ismail and Ethan's comments
- 2020-07-27: declined
## Context
Before [PR#4845](https://github.com/tendermint/tendermint/pull/4845),
`Header#LastResultsHash` was a root of the Merkle tree built from `DeliverTx`
results. Only the `Code` and `Data` fields were included because the `Info` and `Log`
fields are non-deterministic.
At some point, we've added events to `ResponseBeginBlock`, `ResponseEndBlock`,
and `ResponseDeliverTx` to give applications a way to attach some additional
information to blocks / transactions.
Many applications seem to have started using them since.
However, before [PR#4845](https://github.com/tendermint/tendermint/pull/4845)
there was no way to prove that certain events were a part of the result
(_unless the application developer includes them into the state tree_).
Hence, [PR#4845](https://github.com/tendermint/tendermint/pull/4845) was
opened. In it, `GasWanted` along with `GasUsed` are included when hashing
`DeliverTx` results. Also, events from `BeginBlock`, `EndBlock` and `DeliverTx`
results are hashed into the `LastResultsHash` as follows:
- Since we do not expect `BeginBlock` and `EndBlock` to contain many events,
these will be Protobuf encoded and included in the Merkle tree as leaves.
- `LastResultsHash` therefore is the root hash of a Merkle tree with 3 leaves:
proto-encoded `ResponseBeginBlock#Events`, the root hash of a Merkle tree built
from `ResponseDeliverTx` responses (Log, Info and Codespace fields are
ignored), and proto-encoded `ResponseEndBlock#Events`.
- Order of events is unchanged - same as received from the ABCI application.
[Spec PR](https://github.com/tendermint/spec/pull/97/files)
While it's certainly good to be able to prove something, introducing new events
or removing existing ones becomes difficult because it breaks the `LastResultsHash`. It
means that every time you add, remove or update an event, you'll need a
hard fork. And that is undoubtedly bad for applications, which are evolving and
don't have a stable event set.
## Decision
As a middle ground approach, the proposal is to add the
`Block#LastResultsEvents` consensus parameter that is a list of all events that
are to be hashed in the header.
```
@ proto/tendermint/abci/types.proto:295 @ message BlockParams {
  int64 max_bytes = 1;
  // Note: must be greater or equal to -1
  int64 max_gas = 2;
  // List of events, which will be hashed into the LastResultsHash
  repeated string last_results_events = 3;
}
```
Initially the list is empty. The ABCI application can change it via `InitChain`
or `EndBlock`.
Example:
```go
func (app *MyApp) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
    //...
    events := []abci.Event{
        {
            Type: "transfer",
            Attributes: []abci.EventAttribute{
                {Key: []byte("sender"), Value: []byte("Bob"), Index: true},
            },
        },
    }
    return types.ResponseDeliverTx{Code: code.CodeTypeOK, Events: events}
}
}
```
For "transfer" event to be hashed, the `LastResultsEvents` must contain a
string "transfer".
## Status
Declined
**Until there's more stability/motivation/use-cases/demand, the decision is to
push this entirely application side and just have apps which want events to be
provable to insert them into their application-side merkle trees. Of course
this puts more pressure on their application state and makes event proving
application specific, but it might help build up a better sense of use-cases
and how this ought to ultimately be done by Tendermint.**
## Consequences
### Positive
1. networks can perform parameter change proposals to update this list as new events are added
2. allows networks to avoid having to do hard-forks
3. events can still be added at-will to the application w/o breaking anything
### Negative
1. yet another consensus parameter
2. more things to track in the tendermint state
## References
- [ADR 021](./adr-021-abci-events.md)
- [Indexing transactions](../app-dev/indexing-transactions.md)
## Appendix A. Alternative proposals
The other proposal was to add a `Hash bool` flag to the `Event`, similar to the
`Index bool` field of `EventAttribute`. When `true`, Tendermint would hash the event into
the `LastResultsHash`. The downside is that the logic is implicit and depends
largely on the node's operator, who decides what application code to run. The
above proposal makes the logic explicit and easy to upgrade via
governance.


@@ -0,0 +1,306 @@
# ADR 059: Evidence Composition and Lifecycle
## Changelog
- 04/09/2020: Initial Draft (Unabridged)
- 07/09/2020: First Version
- 13/03/2021: Amendment to accommodate forward lunatic attack
- 29/06/2021: Add information about ABCI specific fields
## Scope
This document is designed to collate together and surface some predicaments involving evidence in Tendermint: both its composition and lifecycle. It then aims to find a solution to these. The scope does not extend to the verification nor detection of certain types of evidence but concerns itself mainly with the general form of evidence and how it moves from inception to application.
## Background
For a long time `DuplicateVoteEvidence`, formed in the consensus reactor, was the only evidence Tendermint had. It was produced whenever two votes from the same validator in the same round
were observed, and thus each piece of evidence was designed to concern a single validator. It was predicted that more forms of evidence would come, and thus `DuplicateVoteEvidence` was used as the model for the `Evidence` interface and also for the form of the evidence data sent to the application. It is important to note that Tendermint concerns itself just with the detection and reporting of evidence; it is the responsibility of the application to exercise punishment.
```go
type Evidence interface { // existing
    Height() int64                                     // height of the offense
    Time() time.Time                                   // time of the offense
    Address() []byte                                   // address of the offending validator
    Bytes() []byte                                     // bytes which comprise the evidence
    Hash() []byte                                      // hash of the evidence
    Verify(chainID string, pubKey crypto.PubKey) error // verify the evidence
    Equal(Evidence) bool                               // check equality of evidence
    ValidateBasic() error
    String() string
}
```
```go
type DuplicateVoteEvidence struct {
    VoteA *Vote
    VoteB *Vote

    timestamp time.Time // taken from the block time
}
```
Tendermint has now introduced a new type of evidence to protect light clients from being attacked. This `LightClientAttackEvidence` (see [here](https://github.com/informalsystems/tendermint-rs/blob/31ca3e64ce90786c1734caf186e30595832297a4/docs/spec/lightclient/attacks/evidence-handling.md) for more information) is vastly different from `DuplicateVoteEvidence`: it is physically much larger, containing a complete signed header and validator set. It is formed within the light client, not the consensus reactor, and requires a lot more information from state to verify (`VerifyLightClientAttack(commonHeader, trustedHeader *SignedHeader, commonVals *ValidatorSet)` vs `VerifyDuplicateVote(chainID string, pubKey PubKey)`). Finally, it batches validators together (a single piece of evidence that implicates multiple malicious validators at a height) as opposed to having individual evidence (each piece of evidence is per validator per height). This evidence stretches the existing mould that was used to accommodate new types of evidence and has thus caused us to reconsider how evidence should be formatted and processed.
```go
type LightClientAttackEvidence struct { // proposed struct in spec
    ConflictingBlock *LightBlock
    CommonHeight     int64
    Type             AttackType // enum: {Lunatic|Equivocation|Amnesia}

    timestamp time.Time // taken from the block time at the common height
}
```
*Note: These three attack types have been proven by the research team to be exhaustive*
## Possible Approaches for Evidence Composition
### Individual framework
Evidence remains on a per-validator basis. This causes the least disruption to the current processes but requires that we break `LightClientAttackEvidence` into several pieces of evidence for each malicious validator. This not only has performance consequences, in that there are n times as many database operations and the gossiping of evidence will require more bandwidth than necessary (by requiring a header for each piece), but it potentially impacts our ability to validate it. In batch form, the full node can run the same process the light client did to see that 1/3 of the validating power was present in both the common block and the conflicting block, whereas this becomes more difficult to verify individually without opening the possibility that malicious validators forge evidence against innocent validators. Not only that, but `LightClientAttackEvidence` also deals with amnesia attacks, which unfortunately have the characteristic where we know the set of validators involved but not the subset that were actually malicious (more to be said about this later). And finally, splitting the evidence into individual pieces makes it difficult to understand the severity of the attack (i.e. the total voting power involved in the attack)
#### An example of a possible implementation path
We would ignore amnesia evidence (as individually it's hard to make) and revert to the initial split we had before where `DuplicateVoteEvidence` is also used for light client equivocation attacks and thus we only need `LunaticEvidence`. We would also most likely need to remove `Verify` from the interface as this isn't really something that can be used.
``` go
type LunaticEvidence struct { // individual lunatic attack
    header       *Header
    commonHeight int64
    vote         *Vote

    timestamp time.Time // once again taken from the block time at the height of the common header
}
```
### Batch Framework
The last approach of this category would be to consider batch-only evidence. This works fine with `LightClientAttackEvidence` but would require alterations to `DuplicateVoteEvidence`: the consensus reactor would most likely send conflicting votes to a buffer in the evidence module, which would then wrap all the votes together per height before gossiping them to other nodes and trying to commit them on chain. At a glance this may improve IO and verification speed, and, perhaps more importantly, grouping validators gives the application and Tendermint a better overview of the severity of the attack.
However individual evidence has the advantage that it is easy to check if a node already has that evidence meaning we just need to check hashes to know that we've already verified this evidence before. Batching evidence would imply that each node may have a different combination of duplicate votes which may complicate things.
#### An example of a possible implementation path
`LightClientAttackEvidence` won't change, but the evidence interface will need to look like the proposed one above, and `DuplicateVoteEvidence` will need to change to encompass multiple double votes. A problem with batch evidence is that it needs to be unique, to prevent people from submitting different permutations of the same evidence.
## Decision
The decision is to adopt a hybrid design.
We allow individual and batch evidence to coexist together, meaning that verification is done depending on the evidence type and that the bulk of the work is done in the evidence pool itself (including forming the evidence to be sent to the application).
## Detailed Design
Evidence has the following simple interface:
```go
type Evidence interface { // proposed
    Height() int64 // height of the offense
    Bytes() []byte // bytes which comprise the evidence
    Hash() []byte  // hash of the evidence
    ValidateBasic() error
    String() string
}
```
The changing of the interface is backwards compatible as these methods are all present in the previous version of the interface. However, networks will need to upgrade to be able to process the new evidence as verification has changed.
We have two concrete types of evidence that fulfil this interface
```go
type LightClientAttackEvidence struct {
    ConflictingBlock *LightBlock
    CommonHeight     int64 // the last height at which the primary provider and witness provider had the same header

    // abci specific information
    ByzantineValidators []*Validator // validators in the validator set that misbehaved in creating the conflicting block
    TotalVotingPower    int64        // total voting power of the validator set at the common height
    Timestamp           time.Time    // timestamp of the block at the common height
}
```
where the `Hash()` is the hash of the header and commonHeight.
Note: It was also discussed whether to include the commit hash, which captures the validators that signed the header. However, this would open the opportunity for someone to propose multiple permutations of the same evidence (through different commit signatures), hence it was omitted. Consequently, when it comes to verifying evidence in a block, for `LightClientAttackEvidence` we can't just check the hashes, because someone could have the same hash as us but a different commit where fewer than 1/3 of the validators voted, which would be an invalid version of the evidence. (see `fastCheck` for more details)
```go
type DuplicateVoteEvidence struct {
    VoteA *Vote
    VoteB *Vote

    // abci specific information
    TotalVotingPower int64
    ValidatorPower   int64
    Timestamp        time.Time
}
```
where the `Hash()` is the hash of the two votes.
For both of these types of evidence, `Bytes()` represents the proto-encoded byte array format of the evidence and `ValidateBasic` is
an initial consistency check to make sure the evidence has a valid structure.
### The Evidence Pool
`LightClientAttackEvidence` is generated in the light client and `DuplicateVoteEvidence` in consensus. Both are sent to the evidence pool through `AddEvidence(ev Evidence) error`. The evidence pool's primary purpose is to verify evidence. It also gossips evidence to other peers' evidence pools and serves it to consensus so it can be committed on chain and the relevant information can be sent to the application in order to exercise punishment. When evidence is added, the pool first runs `Has(ev Evidence)` to check if it has already received it (by comparing hashes) and then `Verify(ev Evidence) error`. Once verified, the evidence pool stores it in its pending database. There are two databases: one for pending evidence that is not yet committed, and another for committed evidence (to avoid committing evidence twice)
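A condensed sketch of this flow follows; the `Pool` internals (`pendingDB`, `keyPending`) are illustrative rather than the actual implementation:

```go
func (evpool *Pool) AddEvidence(ev types.Evidence) error {
    // Skip evidence we have already received, compared by hash.
    if evpool.Has(ev) {
        return nil
    }
    // State-dependent verification (expiry, validator sets, signatures, ...).
    if err := evpool.Verify(ev); err != nil {
        return err
    }
    // Persist to the pending database under the key height/hash; the reactor
    // then gossips the evidence to peers.
    return evpool.pendingDB.Set(keyPending(ev.Height(), ev.Hash()), ev.Bytes())
}
```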
#### Verification
`Verify()` does the following:
- Use the hash to see if we already have this evidence in our committed database.
- Use the height to check if the evidence hasn't expired.
- If it has expired by height, then use the height to find the block header and check whether the time has also expired, in which case we drop the evidence
- Then proceed with a switch statement over the two evidence types (see the sketch after this list):
For `DuplicateVote`:
- Check that height, round, type and validator address are the same
- Check that the Block ID is different
- Check the look up table for addresses to make sure there already isn't evidence against this validator
- Fetch the validator set and confirm that the address is in the set at the height of the attack
- Check that the chain ID and signature are valid.
For `LightClientAttack`:
- Fetch the common signed header and val set from the common height and use skipping verification to verify the conflicting header
- Fetch the trusted signed header at the same height as the conflicting header and compare with the conflicting header to work out which type of attack it is and in doing so return the malicious validators. NOTE: If the node doesn't have the signed header at the height of the conflicting header, it instead fetches the latest header it has and checks to see if it can prove the evidence based on a violation of header time. This is known as forward lunatic attack.
  - If equivocation, return the validators that signed for the commits of both the trusted and signed header
  - If lunatic, return the validators from the common val set that signed in the conflicting block
  - If amnesia, return no validators (since we can't know which validators are malicious). This also means that we don't currently send amnesia evidence to the application, although we will introduce more robust amnesia evidence handling in future Tendermint Core releases
- Check that the hashes of the conflicting header and the trusted header are different
- In the case of a forward lunatic attack, where the trusted header height is less than the conflicting header height, the node checks that the time of the trusted header is later than the time of conflicting header. This proves that the conflicting header breaks monotonically increasing time. If the node doesn't have a trusted header with a later time then it is unable to validate the evidence for now.
- Lastly, for each validator, check the look up table to make sure there already isn't evidence against this validator
After verification we persist the evidence with the key `height/hash` to the pending evidence database in the evidence pool.
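The per-type dispatch described above might look roughly like the following sketch (helper names such as `isCommitted`, `isExpired`, and the `verify*` functions are illustrative, not the actual implementation):

```go
func (evpool *Pool) Verify(ev types.Evidence) error {
    if evpool.isCommitted(ev) { // hash lookup in the committed database
        return errors.New("evidence was already committed")
    }
    if evpool.isExpired(ev) { // by height first, then by block time
        return errors.New("evidence has expired")
    }
    switch e := ev.(type) {
    case *types.DuplicateVoteEvidence:
        return evpool.verifyDuplicateVote(e) // same height/round/type and validator, different BlockID, valid signatures
    case *types.LightClientAttackEvidence:
        return evpool.verifyLightClientAttack(e) // skipping verification from the common height
    default:
        return fmt.Errorf("unknown evidence type %T", e)
    }
}
```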
#### ABCI Evidence
Both evidence structures contain data (such as the timestamp) that must be passed to the application but does not strictly constitute evidence of misbehaviour. As such, these fields are verified last. If any of these fields are invalid to a node, i.e. they don't correspond with its state, the node will reconstruct a new evidence struct from the existing fields and repopulate the abci specific fields with its own state data.
#### Broadcasting and receiving evidence
The evidence pool also runs a reactor that broadcasts the newly validated
evidence to all connected peers.
Receiving evidence from other evidence reactors works in the same manner as receiving evidence from the consensus reactor or a light client.
#### Proposing evidence on the block
When it comes to prevoting and precommitting a proposal that contains evidence, the full node will once again
call upon the evidence pool to verify the evidence using `CheckEvidence(ev []Evidence)`, which
performs the following actions:
1. Loops through all the evidence to check that nothing has been duplicated
2. For each evidence, runs `fastCheck(ev evidence)`, which works similarly to `Has`, except that for `LightClientAttackEvidence` with a matching
hash it also checks that the validators it has are all signers in the commit of the conflicting header. If the evidence doesn't pass fastCheck (because it hasn't been seen before), it must be fully verified.
3. Runs `Verify(ev Evidence)`. Note: this also saves the evidence to the DB, as mentioned before.
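A sketch of `CheckEvidence`, under the same illustrative naming as the sketches above:

```go
func (evpool *Pool) CheckEvidence(evList []types.Evidence) error {
    seen := make(map[string]struct{})
    for _, ev := range evList {
        // 1. Reject evidence duplicated within the block.
        key := string(ev.Hash())
        if _, ok := seen[key]; ok {
            return errors.New("duplicate evidence in block")
        }
        seen[key] = struct{}{}
        // 2. fastCheck: cheap hash (and, for light client attacks, signer) check.
        if evpool.fastCheck(ev) {
            continue
        }
        // 3. Full verification, which also persists the evidence to the pending DB.
        if err := evpool.Verify(ev); err != nil {
            return err
        }
    }
    return nil
}
```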
#### Updating application and pool
The final part of the lifecycle is when the block is committed and the `BlockExecutor` then updates state. As part of this process, the `BlockExecutor` gets the evidence pool to create a simplified format for the evidence to be sent to the application. This happens in `ApplyBlock` where the executor calls `Update(Block, State) []abci.Evidence`.
```go
abciResponses.BeginBlock.ByzantineValidators = evpool.Update(block, state)
```
Here is the format of the evidence that the application will receive. As seen above, this is stored as an array within `BeginBlock`.
The changes to the application are minimal (evidence is still formed as one entry per malicious validator), with the exception of using an enum instead of a string for the evidence type.
```go
type Evidence struct {
    // either LightClientAttackEvidence or DuplicateVoteEvidence as an enum (abci.EvidenceType)
    Type EvidenceType `protobuf:"varint,1,opt,name=type,proto3,enum=tendermint.abci.EvidenceType" json:"type,omitempty"`
    // The offending validator
    Validator Validator `protobuf:"bytes,2,opt,name=validator,proto3" json:"validator"`
    // The height when the offense occurred
    Height int64 `protobuf:"varint,3,opt,name=height,proto3" json:"height,omitempty"`
    // The corresponding time where the offense occurred
    Time time.Time `protobuf:"bytes,4,opt,name=time,proto3,stdtime" json:"time"`
    // Total voting power of the validator set in case the ABCI application does
    // not store historical validators.
    // https://github.com/tendermint/tendermint/issues/4581
    TotalVotingPower int64 `protobuf:"varint,5,opt,name=total_voting_power,json=totalVotingPower,proto3" json:"total_voting_power,omitempty"`
}
```
This `Update()` function does the following:
- Increments state which keeps track of both the current time and height used for measuring expiry
- Marks evidence as committed and saves to db. This prevents validators from proposing committed evidence in the future
  Note: the db just saves the height and the hash. There is no need to save the entire committed evidence
- Forms ABCI evidence as such: (note for `DuplicateVoteEvidence` the validators array size is 1)
```go
for _, val := range evInfo.Validators {
    abciEv = append(abciEv, &abci.Evidence{
        Type:             evType,                  // either DuplicateVote or LightClientAttack
        Validator:        val,                     // the offending validator (which includes the address, pubkey and power)
        Height:           evInfo.ev.Height(),      // the height when the offense happened
        Time:             evInfo.time,             // the time when the offense happened
        TotalVotingPower: evInfo.totalVotingPower, // the total voting power of the validator set
    })
}
```
- Removes expired evidence from both pending and committed databases
The ABCI evidence is then sent via the `BlockExecutor` to the application.
#### Summary
To summarize, we can see the lifecycle of evidence as such:
![evidence_lifecycle](../imgs/evidence_lifecycle.png)
Evidence is first detected and created in the light client and consensus reactor. It is verified and stored as `EvidenceInfo` and gossiped to the evidence pools in other nodes. The consensus reactor later communicates with the evidence pool to either retrieve evidence to be put into a block, or verify the evidence the consensus reactor has retrieved in a block. Lastly, when a block is added to the chain, the block executor sends the committed evidence back to the evidence pool, so a pointer to the evidence can be stored in the evidence pool and it can update its height and time. Finally, the pool turns the committed evidence into ABCI evidence and, through the block executor, passes the evidence to the application so the application can handle it.
## Status
Implemented
## Consequences
<!-- > This section describes the consequences, after applying the decision. All consequences should be summarized here, not just the "positive" ones. -->
### Positive
- Evidence is better contained to the evidence pool / module
- LightClientAttack is kept together (easier for verification and bandwidth)
- Variations on commit sigs in LightClientAttack don't lead to multiple permutations and multiple evidence
- Address to evidence map prevents DOS attacks, where a single validator could DOS the network by flooding it with evidence submissions
### Negative
- Changes the `Evidence` interface and is thus a block-breaking change
- Changes the ABCI `Evidence` and is thus an ABCI-breaking change
- Unable to query evidence for address / time without evidence pool
### Neutral
## References
<!-- > Are there any relevant PR comments, issues that led up to this, or articles referenced for why we made the given design choice? If so link them here! -->
- [LightClientAttackEvidence](https://github.com/informalsystems/tendermint-rs/blob/31ca3e64ce90786c1734caf186e30595832297a4/docs/spec/lightclient/attacks/evidence-handling.md)


@@ -0,0 +1,193 @@
# ADR 060: Go API Stability
## Changelog
- 2020-09-08: Initial version. (@erikgrinaker)
- 2020-09-09: Tweak accepted changes, add initial public API packages, add consequences. (@erikgrinaker)
- 2020-09-17: Clarify initial public API. (@erikgrinaker)
## Context
With the release of Tendermint 1.0 we will adopt [semantic versioning](https://semver.org). One major implication is a guarantee that we will not make backwards-incompatible changes until Tendermint 2.0 (except in pre-release versions). In order to provide this guarantee for our Go API, we must clearly define which of our APIs are public, and what changes are considered backwards-compatible.
Currently, we list packages that we consider public in our [README](https://github.com/tendermint/tendermint#versioning), but since we are still at version 0.x we do not provide any backwards compatibility guarantees at all.
### Glossary
* **External project:** a different Git/VCS repository or code base.
* **External package:** a different Go package, can be a child or sibling package in the same project.
* **Internal code:** code not intended for use in external projects.
* **Internal directory:** code under `internal/` which cannot be imported in external projects.
* **Exported:** a Go identifier starting with an uppercase letter, which can therefore be accessed by an external package.
* **Private:** a Go identifier starting with a lowercase letter, which therefore cannot be accessed by an external package unless via an exported field, variable, or function/method return value.
* **Public API:** any Go identifier that can be imported or accessed by an external project, except test code in `_test.go` files.
* **Private API:** any Go identifier that is not accessible via a public API, including all code in the internal directory.
## Alternative Approaches
- Split all public APIs out to separate Go modules in separate Git repositories, and consider all Tendermint code internal and not subject to API backwards compatibility at all. This was rejected, since it has been attempted by the Tendermint project earlier, resulting in too much dependency management overhead.
- Simply document which APIs are public and which are private. This is the current approach, but users should not be expected to self-enforce this, the documentation is not always up-to-date, and external projects will often end up depending on internal code anyway.
## Decision
From Tendermint 1.0, all internal code (except private APIs) will be placed in a root-level [`internal` directory](https://golang.org/cmd/go/#hdr-Internal_Directories), which the Go compiler will block for use by external projects. All exported items outside of the `internal` directory are considered a public API and subject to backwards compatibility guarantees, except files ending in `_test.go`.
The `crypto` package may be split out to a separate module in a separate repo. This is the main general-purpose package used by external projects, and is the only Tendermint dependency in e.g. IAVL which can cause some problems for projects depending on both IAVL and Tendermint. This will be decided after further discussion.
The `tm-db` package will remain a separate module in a separate repo. The `crypto` package may possibly be split out, pending further discussion, as this is the main general-purpose package used by other projects.
## Detailed Design
### Public API
When preparing our public API for 1.0, we should keep these principles in mind:
- Limit the number of public APIs that we start out with - we can always add new APIs later, but we can't change or remove APIs once they're made public.
- Before an API is made public, do a thorough review of the API to make sure it covers any future needs, can accommodate expected changes, and follows good API design practices.
The following is the minimum set of public APIs that will be included in 1.0, in some form:
- `abci`
- packages used for constructing nodes: `config`, `libs/log`, and `version`
- Client APIs, i.e. `rpc/client`, `light`, and `privval`.
- `crypto` (possibly as a separate repo)
We may offer additional APIs as well, following further discussions internally and with other stakeholders. However, public APIs for providing custom components (e.g. reactors and mempools) are not planned for 1.0, but may be added in a later 1.x version if this is something we want to offer.
For comparison, the following are the number of Tendermint imports in the Cosmos SDK (excluding tests), which should be mostly satisfied by the planned APIs.
```
1 github.com/tendermint/tendermint/abci/server
73 github.com/tendermint/tendermint/abci/types
2 github.com/tendermint/tendermint/cmd/tendermint/commands
7 github.com/tendermint/tendermint/config
68 github.com/tendermint/tendermint/crypto
1 github.com/tendermint/tendermint/crypto/armor
10 github.com/tendermint/tendermint/crypto/ed25519
2 github.com/tendermint/tendermint/crypto/encoding
3 github.com/tendermint/tendermint/crypto/merkle
3 github.com/tendermint/tendermint/crypto/sr25519
8 github.com/tendermint/tendermint/crypto/tmhash
1 github.com/tendermint/tendermint/crypto/xsalsa20symmetric
11 github.com/tendermint/tendermint/libs/bytes
2 github.com/tendermint/tendermint/libs/bytes.HexBytes
15 github.com/tendermint/tendermint/libs/cli
2 github.com/tendermint/tendermint/libs/cli/flags
2 github.com/tendermint/tendermint/libs/json
30 github.com/tendermint/tendermint/libs/log
1 github.com/tendermint/tendermint/libs/math
11 github.com/tendermint/tendermint/libs/os
4 github.com/tendermint/tendermint/libs/rand
1 github.com/tendermint/tendermint/libs/strings
5 github.com/tendermint/tendermint/light
1 github.com/tendermint/tendermint/internal/mempool
3 github.com/tendermint/tendermint/node
5 github.com/tendermint/tendermint/internal/p2p
4 github.com/tendermint/tendermint/privval
10 github.com/tendermint/tendermint/proto/tendermint/crypto
1 github.com/tendermint/tendermint/proto/tendermint/libs/bits
24 github.com/tendermint/tendermint/proto/tendermint/types
3 github.com/tendermint/tendermint/proto/tendermint/version
2 github.com/tendermint/tendermint/proxy
3 github.com/tendermint/tendermint/rpc/client
1 github.com/tendermint/tendermint/rpc/client/http
2 github.com/tendermint/tendermint/rpc/client/local
3 github.com/tendermint/tendermint/rpc/core/types
1 github.com/tendermint/tendermint/rpc/jsonrpc/server
33 github.com/tendermint/tendermint/types
2 github.com/tendermint/tendermint/types/time
1 github.com/tendermint/tendermint/version
```
### Backwards-Compatible Changes
In Go, [almost all API changes are backwards-incompatible](https://blog.golang.org/module-compatibility) and thus exported items in public APIs generally cannot be changed until Tendermint 2.0. The only backwards-compatible changes we can make to public APIs are:
- Adding a package.
- Adding a new identifier to the package scope (e.g. const, var, func, struct, interface, etc.).
- Adding a new method to a struct.
- Adding a new field to a struct, if the zero-value preserves any old behavior.
- Changing the order of fields in a struct.
- Adding a variadic parameter to a named function or struct method, if the function type itself is not assignable in any public APIs (e.g. a callback).
- Adding a new method to an interface, or a variadic parameter to an interface method, _if the interface already has a private method_ (which prevents external packages from implementing it).
- Widening a numeric type as long as it is a named type (e.g. `type Number int32` can change to `int64`, but not `int8` or `uint32`).
Note that public APIs can expose private types (e.g. via an exported variable, field, or function/method return value), in which case the exported fields and methods on these private types are also part of the public API and covered by its backwards compatibility guarantees. In general, private types should never be accessible via public APIs unless wrapped in an exported interface.
Also note that if we accept, return, export, or embed types from a dependency, we assume the backwards compatibility responsibility for that dependency, and must make sure any dependency upgrades comply with the above constraints.
We should run CI linters for minor version branches to enforce this, e.g. [apidiff](https://go.googlesource.com/exp/+/refs/heads/master/apidiff/README.md), [breakcheck](https://github.com/gbbr/breakcheck), and [apicompat](https://github.com/bradleyfalzon/apicompat).
#### Accepted Breakage
The above changes can still break programs in a few ways - these are _not_ considered backwards-incompatible changes, and users are advised to avoid this usage:
- If a program uses unkeyed struct literals (e.g. `Foo{"bar", "baz"}`) and we add fields or change the field order, the program will no longer compile or may have logic errors.
- If a program embeds two structs in a struct, and we add a new field or method to an embedded Tendermint struct which also exists in the other embedded struct, the program will no longer compile.
- If a program compares two structs (e.g. with `==`), and we add a new field of an incomparable type (slice, map, func, or struct that contains these) to a Tendermint struct which is compared, the program will no longer compile.
- If a program assigns a Tendermint function to an identifier, and we add a variadic parameter to the function signature, the program will no longer compile.
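For instance, the first case above could be illustrated as follows (`Foo` is a hypothetical exported struct, used only for illustration):

```go
// Foo is a hypothetical exported struct.
type Foo struct {
    Bar string
    Baz string
    // Adding a new field here, or reordering fields, breaks the unkeyed literal below.
}

var broken = Foo{"bar", "baz"}           // unkeyed literal: fragile
var robust = Foo{Bar: "bar", Baz: "baz"} // keyed literal: tolerates compatible changes
```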
### Strategies for API Evolution
The API guarantees above can be fairly constraining, but are unavoidable given the Go language design. The following tricks can be employed where appropriate to allow us to make changes to the API:
- We can add a new function or method with a different name that takes additional parameters, and have the old function call the new one.
- Functions and methods can take an options struct instead of separate parameters, to allow adding new options - this is particularly suitable for functions that take many parameters and are expected to be extended, and especially for interfaces where we cannot add new methods with different parameters at all.
- Interfaces can include a private method, e.g. `interface { private() }`, to make them unimplementable by external packages and thus allow us to add new methods to the interface without breaking other programs. Of course, this can't be used for interfaces that should be implementable externally.
- We can use [interface upgrades](https://avtok.com/2014/11/05/interface-upgrades.html) to allow implementers of an existing interface to also implement a new interface, as long as the old interface can still be used - e.g. the new interface `BetterReader` may have a method `ReadBetter()`, and a function that takes a `Reader` interface as an input can check if the implementer also implements `BetterReader` and in that case call `ReadBetter()` instead of `Read()`.
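As an illustration of the interface-upgrade trick, using the hypothetical `Reader`/`BetterReader` names from the example above (a sketch, not an API we ship):

```go
type Reader interface {
    Read(p []byte) (n int, err error)
}

// BetterReader is a hypothetical upgraded interface.
type BetterReader interface {
    ReadBetter(p []byte) (n int, err error)
}

func process(r Reader, buf []byte) (int, error) {
    // Prefer the upgraded method when the implementation supports it.
    if br, ok := r.(BetterReader); ok {
        return br.ReadBetter(buf)
    }
    return r.Read(buf)
}
```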
## Status
Accepted
## Consequences
### Positive
- Users can safely upgrade with less fear of applications breaking, and know whether an upgrade only includes bug fixes or also functional enhancements
- External developers have a predictable and well-defined API to build on that will be supported for some time
- Less synchronization between teams, since there is a clearer contract and timeline for changes and they happen less frequently
- More documentation will remain accurate, since it's not chasing a moving target
- Less time will be spent on code churn and more time spent on functional improvements, both for the community and for our teams
### Negative
- Many improvements, changes, and bug fixes will have to be postponed until the next major version, possibly for a year or more
- The pace of development will slow down, since we must work within the existing API constraints, and spend more time planning public APIs
- External developers may lose access to some currently exported APIs and functionality
## References
- [#4451: Place internal APIs under internal package](https://github.com/tendermint/tendermint/issues/4451)
- [On Pluggability](https://docs.google.com/document/d/1G08LnwSyb6BAuCVSMF3EKn47CGdhZ5wPZYJQr4-bw58/edit?ts=5f609f11)


@@ -0,0 +1,109 @@
# ADR 061: P2P Refactor Scope
## Changelog
- 2020-10-30: Initial version (@erikgrinaker)
## Context
The `p2p` package responsible for peer-to-peer networking is rather old and has a number of weaknesses, including tight coupling, leaky abstractions, lack of tests, DoS vulnerabilities, poor performance, custom protocols, and incorrect behavior. A refactor has been discussed for several years ([#2067](https://github.com/tendermint/tendermint/issues/2067)).
Informal Systems are also building a Rust implementation of Tendermint, [Tendermint-rs](https://github.com/informalsystems/tendermint-rs), and plan to implement P2P networking support over the next year. As part of this work, they have requested adopting e.g. [QUIC](https://datatracker.ietf.org/doc/draft-ietf-quic-transport/) as a transport protocol instead of implementing the custom application-level `MConnection` stream multiplexing protocol that Tendermint currently uses.
This ADR summarizes recent discussion with stakeholders on the scope of a P2P refactor. Specific designs and implementations will be submitted as separate ADRs.
## Alternative Approaches
There have been recurring proposals to adopt [LibP2P](https://libp2p.io) instead of maintaining our own P2P networking stack (see [#3696](https://github.com/tendermint/tendermint/issues/3696)). While this appears to be a good idea in principle, it would be a highly breaking protocol change, there are indications that we might have to fork and modify LibP2P, and there are concerns about the abstractions used.
In discussions with Informal Systems we decided to begin with incremental improvements to the current P2P stack, add support for pluggable transports, and then gradually start experimenting with LibP2P as a transport layer. If this proves successful, we can consider adopting it for higher-level components at a later time.
## Decision
The P2P stack will be refactored and improved iteratively, in several phases:
* **Phase 1:** code and API refactoring, maintaining protocol compatibility as far as possible.
* **Phase 2:** additional transports and incremental protocol improvements.
* **Phase 3:** disruptive protocol changes.
The scope of phases 2 and 3 is still uncertain, and will be revisited once the preceding phases have been completed as we'll have a better sense of requirements and challenges.
## Detailed Design
Separate ADRs will be submitted for specific designs and changes in each phase, following research and prototyping. Below are objectives in order of priority.
### Phase 1: Code and API Refactoring
This phase will focus on improving the internal abstractions and implementations in the `p2p` package. As far as possible, it should not change the P2P protocol in a backwards-incompatible way.
* Cleaner, decoupled abstractions for e.g. `Reactor`, `Switch`, and `Peer`. [#2067](https://github.com/tendermint/tendermint/issues/2067) [#5287](https://github.com/tendermint/tendermint/issues/5287) [#3833](https://github.com/tendermint/tendermint/issues/3833)
* Reactors should receive messages in separate goroutines or via buffered channels. [#2888](https://github.com/tendermint/tendermint/issues/2888)
* Improved peer lifecycle management. [#3679](https://github.com/tendermint/tendermint/issues/3679) [#3719](https://github.com/tendermint/tendermint/issues/3719) [#3653](https://github.com/tendermint/tendermint/issues/3653) [#3540](https://github.com/tendermint/tendermint/issues/3540) [#3183](https://github.com/tendermint/tendermint/issues/3183) [#3081](https://github.com/tendermint/tendermint/issues/3081) [#1356](https://github.com/tendermint/tendermint/issues/1356)
* Peer prioritization. [#2860](https://github.com/tendermint/tendermint/issues/2860) [#2041](https://github.com/tendermint/tendermint/issues/2041)
* Pluggable transports, with `MConnection` as one implementation. [#5587](https://github.com/tendermint/tendermint/issues/5587) [#2430](https://github.com/tendermint/tendermint/issues/2430) [#805](https://github.com/tendermint/tendermint/issues/805)
* Improved peer address handling.
* Address book refactor. [#4848](https://github.com/tendermint/tendermint/issues/4848) [#2661](https://github.com/tendermint/tendermint/issues/2661)
* Transport-agnostic peer addressing. [#5587](https://github.com/tendermint/tendermint/issues/5587) [#3782](https://github.com/tendermint/tendermint/issues/3782) [#3692](https://github.com/tendermint/tendermint/issues/3692)
* Improved detection and advertisement of own address. [#5588](https://github.com/tendermint/tendermint/issues/5588) [#4260](https://github.com/tendermint/tendermint/issues/4260) [#3716](https://github.com/tendermint/tendermint/issues/3716) [#1727](https://github.com/tendermint/tendermint/issues/1727)
* Support multiple IPs per peer. [#1521](https://github.com/tendermint/tendermint/issues/1521) [#2317](https://github.com/tendermint/tendermint/issues/2317)
The refactor should attempt to address the following secondary objectives: testability, observability, performance, security, quality-of-service, backpressure, and DoS resilience. Much of this will be revisited as explicit objectives in phase 2.
Ideally, the refactor should happen incrementally, with regular merges to `master` every few weeks. This will take more time overall, and cause frequent breaking changes to internal Go APIs, but it reduces the branch drift and gets the code tested sooner and more broadly.
### Phase 2: Additional Transports and Protocol Improvements
This phase will focus on protocol improvements and other breaking changes. The following are considered proposals that will need to be evaluated separately once the refactor is done. Additional proposals are likely to be added during phase 1.
* QUIC transport. [#198](https://github.com/tendermint/spec/issues/198)
* Noise protocol for secret connection handshake. [#5589](https://github.com/tendermint/tendermint/issues/5589) [#3340](https://github.com/tendermint/tendermint/issues/3340)
* Peer ID in connection handshake. [#5590](https://github.com/tendermint/tendermint/issues/5590)
* Peer and service discovery (e.g. RPC nodes, state sync snapshots). [#5481](https://github.com/tendermint/tendermint/issues/5481) [#4583](https://github.com/tendermint/tendermint/issues/4583)
* Rate-limiting, backpressure, and QoS scheduling. [#4753](https://github.com/tendermint/tendermint/issues/4753) [#2338](https://github.com/tendermint/tendermint/issues/2338)
* Compression. [#2375](https://github.com/tendermint/tendermint/issues/2375)
* Improved metrics and tracing. [#3849](https://github.com/tendermint/tendermint/issues/3849) [#2600](https://github.com/tendermint/tendermint/issues/2600)
* Simplified P2P configuration options.
### Phase 3: Disruptive Protocol Changes
This phase covers speculative, wide-reaching proposals that are poorly defined and highly uncertain. They will be evaluated once the previous phases are done.
* Adopt LibP2P. [#3696](https://github.com/tendermint/tendermint/issues/3696)
* Allow cross-reactor communication, possibly without channels.
* Dynamic channel advertisement, as reactors are enabled/disabled. [#4394](https://github.com/tendermint/tendermint/issues/4394) [#1148](https://github.com/tendermint/tendermint/issues/1148)
* Pubsub-style networking topology and pattern.
* Support multiple chain IDs in the same network.
## Status
Accepted
## Consequences
### Positive
* Cleaner, simpler architecture that's easier to reason about and test, and thus hopefully less buggy.
* Improved performance and robustness.
* Reduced maintenance burden and increased interoperability by the possible adoption of standardized protocols such as QUIC and Noise.
* Improved usability, with better observability, simpler configuration, and more automation (e.g. peer/service/address discovery, rate-limiting, and backpressure).
### Negative
* Maintaining our own P2P networking stack is resource-intensive.
* Abstracting away the underlying transport may prevent usage of advanced transport features.
* Breaking changes to APIs and protocols are disruptive to users.
## References
See issue links above.
- [#2067: P2P Refactor](https://github.com/tendermint/tendermint/issues/2067)
- [P2P refactor brainstorm document](https://docs.google.com/document/d/1FUTADZyLnwA9z7ndayuhAdAFRKujhh_y73D0ZFdKiOQ/edit?pli=1#)


@@ -0,0 +1,615 @@
# ADR 062: P2P Architecture and Abstractions
## Changelog
- 2020-11-09: Initial version (@erikgrinaker)
- 2020-11-13: Remove stream IDs, move peer errors onto channel, note on moving PEX into core (@erikgrinaker)
- 2020-11-16: Notes on recommended reactor implementation patterns, approve ADR (@erikgrinaker)
- 2021-02-04: Update with new P2P core and Transport API changes (@erikgrinaker).
## Context
In [ADR 061](adr-061-p2p-refactor-scope.md) we decided to refactor the peer-to-peer (P2P) networking stack. The first phase is to redesign and refactor the internal P2P architecture, while retaining protocol compatibility as far as possible.
## Alternative Approaches
Several variations of the proposed design were considered, including e.g. calling interface methods instead of passing messages (like the current architecture), merging channels with streams, exposing the internal peer data structure to reactors, being message format-agnostic via arbitrary codecs, and so on. This design was chosen because it has very loose coupling, is simpler to reason about and more convenient to use, avoids race conditions and lock contention for internal data structures, gives reactors better control of message ordering and processing semantics, and allows for QoS scheduling and backpressure in a very natural way.
[multiaddr](https://github.com/multiformats/multiaddr) was considered as a transport-agnostic peer address format over regular URLs, but it does not appear to have very widespread adoption, and advanced features like protocol encapsulation and tunneling do not appear to be immediately useful to us.
There were also proposals to use LibP2P instead of maintaining our own P2P stack, which were rejected (for now) in [ADR 061](adr-061-p2p-refactor-scope.md).
The initial version of this ADR had a byte-oriented multi-stream transport API, but this had to be abandoned/postponed to maintain backwards-compatibility with the existing MConnection protocol which is message-oriented. See the rejected RFC in [tendermint/spec#227](https://github.com/tendermint/spec/pull/227) for details.
## Decision
The P2P stack will be redesigned as a message-oriented architecture, primarily relying on Go channels for communication and scheduling. It will use a message-oriented transport to exchange binary messages with individual peers, bidirectional peer-addressable channels to send and receive Protobuf messages, a router to route messages between reactors and peers, and a peer manager to manage peer lifecycle information. Message passing is asynchronous with at-most-once delivery.
## Detailed Design
This ADR is primarily concerned with the architecture and interfaces of the P2P stack, not implementation details. The interfaces described here should therefore be considered a rough architecture outline, not a complete and final design.
Primary design objectives have been:
* Loose coupling between components, for a simpler, more robust, and test-friendly architecture.
* Pluggable transports (not necessarily networked).
* Better scheduling of messages, with improved prioritization, backpressure, and performance.
* Centralized peer lifecycle and connection management.
* Better peer address detection, advertisement, and exchange.
* Wire-level backwards compatibility with current P2P network protocols, except where it proves too obstructive.
The main abstractions in the new stack are:
* `Transport`: An arbitrary mechanism to exchange binary messages with a peer across a `Connection`.
* `Channel`: A bidirectional channel to asynchronously exchange Protobuf messages with peers using node ID addressing.
* `Router`: Maintains transport connections to relevant peers and routes channel messages.
* `PeerManager`: Manages peer lifecycle information, e.g. deciding which peers to dial and when, using a `peerStore` for storage.
* Reactor: A design pattern loosely defined as "something which listens on a channel and reacts to messages".
These abstractions are illustrated in the following diagram (representing the internals of node A) and described in detail below.
![P2P Architecture Diagram](img/adr-062-architecture.svg)
### Transports
Transports are arbitrary mechanisms for exchanging binary messages with a peer. For example, a gRPC transport would connect to a peer over TCP/IP and send data using the gRPC protocol, while an in-memory transport might communicate with a peer running in another goroutine using internal Go channels. Note that transports don't have a notion of a "peer" or "node" as such - instead, they establish connections between arbitrary endpoint addresses (e.g. IP address and port number), to decouple them from the rest of the P2P stack.
Transports must satisfy the following requirements:
* Be connection-oriented, and support both listening for inbound connections and making outbound connections using endpoint addresses.
* Support sending binary messages with distinct channel IDs (although channels and channel IDs are a higher-level application protocol concept explained in the Router section, they are threaded through the transport layer as well for backwards compatibility with the existing MConnection protocol).
* Exchange the MConnection `NodeInfo` and public key via a node handshake, and possibly encrypt or sign the traffic as appropriate.
The initial transport is a port of the current MConnection protocol currently used by Tendermint, and should be backwards-compatible at the wire level. An in-memory transport for testing has also been implemented. There are plans to explore a QUIC transport that may replace the MConnection protocol.
The `Transport` interface is as follows:
```go
// Transport is a connection-oriented mechanism for exchanging data with a peer.
type Transport interface {
    // Protocols returns the protocols supported by the transport. The Router
    // uses this to pick a transport for an Endpoint.
    Protocols() []Protocol

    // Endpoints returns the local endpoints the transport is listening on, if any.
    // How to listen is transport-dependent, e.g. MConnTransport uses Listen() while
    // MemoryTransport starts listening via MemoryNetwork.CreateTransport().
    Endpoints() []Endpoint

    // Accept waits for the next inbound connection on a listening endpoint, blocking
    // until either a connection is available or the transport is closed. On closure,
    // io.EOF is returned and further Accept calls are futile.
    Accept() (Connection, error)

    // Dial creates an outbound connection to an endpoint.
    Dial(context.Context, Endpoint) (Connection, error)

    // Close stops accepting new connections, but does not close active connections.
    Close() error
}
```
How the transport configures listening is transport-dependent, and not covered by the interface. This typically happens during transport construction, where a single instance of the transport is created and set to listen on an appropriate network interface before being passed to the router.
#### Endpoints
`Endpoint` represents a transport endpoint (e.g. an IP address and port). A connection always has two endpoints: one at the local node and one at the remote peer. Outbound connections to remote endpoints are made via `Dial()`, and inbound connections to listening endpoints are returned via `Accept()`.
The `Endpoint` struct is:
```go
// Endpoint represents a transport connection endpoint, either local or remote.
//
// Endpoints are not necessarily networked (see e.g. MemoryTransport) but all
// networked endpoints must use IP as the underlying transport protocol to allow
// e.g. IP address filtering. Either IP or Path (or both) must be set.
type Endpoint struct {
    // Protocol specifies the transport protocol.
    Protocol Protocol

    // IP is an IP address (v4 or v6) to connect to. If set, this defines the
    // endpoint as a networked endpoint.
    IP net.IP

    // Port is a network port (either TCP or UDP). If 0, a default port may be
    // used depending on the protocol.
    Port uint16

    // Path is an optional transport-specific path or identifier.
    Path string
}

// Protocol identifies a transport protocol.
type Protocol string
```
Endpoints are arbitrary transport-specific addresses, but if they are networked they must use IP addresses and thus rely on IP as a fundamental packet routing protocol. This enables policies for address discovery, advertisement, and exchange - for example, a private `192.168.0.0/24` IP address should only be advertised to peers on that IP network, while the public address `8.8.8.8` may be advertised to all peers. Similarly, any port numbers if given must represent TCP and/or UDP port numbers, in order to use [UPnP](https://en.wikipedia.org/wiki/Universal_Plug_and_Play) to autoconfigure e.g. NAT gateways.
Non-networked endpoints (without an IP address) are considered local, and will only be advertised to other peers connecting via the same protocol. For example, the in-memory transport used for testing uses `Endpoint{Protocol: "memory", Path: "foo"}` as an address for the node "foo", and this should only be advertised to other nodes using `Protocol: "memory"`.
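For concreteness, these cases might be expressed as the following endpoint values (a sketch: the `"mconn"` protocol name and the port number, 26656 being the conventional Tendermint P2P port, are examples only):

```go
var (
    public  = Endpoint{Protocol: "mconn", IP: net.ParseIP("8.8.8.8"), Port: 26656}     // advertised to all peers
    private = Endpoint{Protocol: "mconn", IP: net.ParseIP("192.168.0.1"), Port: 26656} // advertised only within 192.168.0.0/24
    local   = Endpoint{Protocol: "memory", Path: "foo"}                                // non-networked, advertised only to "memory" peers
)
```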
#### Connections
A connection represents an established transport connection between two endpoints (i.e. two nodes), which can be used to exchange binary messages with logical channel IDs (corresponding to the higher-level channel IDs used in the router). Connections are set up either via `Transport.Dial()` (outbound) or `Transport.Accept()` (inbound).
Once a connection is established, `Connection.Handshake()` must be called to perform a node handshake, exchanging node info and public keys to verify node identities. Node handshakes should not really be part of the transport layer (it's an application protocol concern); this exists for backwards compatibility with the existing MConnection protocol, which conflates the two. `NodeInfo` is part of the existing MConnection protocol, but does not appear to be documented in the specification -- refer to the Go codebase for details.
The `Connection` interface is shown below. It omits certain additions that are currently implemented for backwards compatibility with the legacy P2P stack and are planned to be removed before the final release.
```go
// Connection represents an established connection between two endpoints.
type Connection interface {
    // Handshake executes a node handshake with the remote peer. It must be
    // called once the connection is established, and returns the remote peer's
    // node info and public key. The caller is responsible for validation.
    Handshake(context.Context, NodeInfo, crypto.PrivKey) (NodeInfo, crypto.PubKey, error)

    // ReceiveMessage returns the next message received on the connection,
    // blocking until one is available. Returns io.EOF if closed.
    ReceiveMessage() (ChannelID, []byte, error)

    // SendMessage sends a message on the connection. Returns io.EOF if closed.
    SendMessage(ChannelID, []byte) error

    // LocalEndpoint returns the local endpoint for the connection.
    LocalEndpoint() Endpoint

    // RemoteEndpoint returns the remote endpoint for the connection.
    RemoteEndpoint() Endpoint

    // Close closes the connection.
    Close() error
}
```
This ADR initially proposed a byte-oriented multi-stream connection API that follows more typical networking API conventions (using e.g. `io.Reader` and `io.Writer` interfaces which easily compose with other libraries). This would also allow moving the responsibility for message framing, node handshakes, and traffic scheduling to the common router instead of reimplementing this across transports, and would allow making better use of multi-stream protocols such as QUIC. However, this would require minor breaking changes to the MConnection protocol which were rejected, see [tendermint/spec#227](https://github.com/tendermint/spec/pull/227) for details. This should be revisited when starting work on a QUIC transport.
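A hypothetical accept loop shows how these pieces fit together; the validation step and peer hand-off are illustrative, not the actual implementation:

```go
func acceptPeers(ctx context.Context, transport Transport, selfInfo NodeInfo, selfKey crypto.PrivKey) {
    for {
        conn, err := transport.Accept()
        if err != nil {
            return // io.EOF once the transport is closed
        }
        go func(conn Connection) {
            // Handshake exchanges node info and public keys; the caller is
            // responsible for validating the results.
            peerInfo, peerKey, err := conn.Handshake(ctx, selfInfo, selfKey)
            if err != nil {
                _ = conn.Close()
                return
            }
            // Validate peerInfo/peerKey, then report the peer to the
            // PeerManager via Accepted() and start routing messages.
            _, _ = peerInfo, peerKey
        }(conn)
    }
}
```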
### Peer Management
Peers are other Tendermint nodes. Each peer is identified by a unique `NodeID` (tied to the node's private key).
#### Peer Addresses
Nodes have one or more `NodeAddress` addresses expressed as URLs that they can be reached at. Examples of node addresses might be e.g.:
* `mconn://nodeid@host.domain.com:25567/path`
* `memory:nodeid`
Addresses are resolved into one or more transport endpoints, e.g. by resolving DNS hostnames into IP addresses. Peers should always be expressed as address URLs rather than endpoints (which are a lower-level transport construct).
```go
// NodeID is a hex-encoded crypto.Address. It must be lowercased
// (for uniqueness) and of length 40.
type NodeID string

// NodeAddress is a node address URL. It differs from a transport Endpoint in
// that it contains the node's ID, and that the address hostname may be resolved
// into multiple IP addresses (and thus multiple endpoints).
//
// If the URL is opaque, i.e. of the form "scheme:opaque", then the opaque part
// is expected to contain a node ID.
type NodeAddress struct {
    NodeID   NodeID
    Protocol Protocol
    Hostname string
    Port     uint16
    Path     string
}

// ParseNodeAddress parses a node address URL into a NodeAddress, normalizing
// and validating it.
func ParseNodeAddress(urlString string) (NodeAddress, error)

// Resolve resolves a NodeAddress into a set of Endpoints, e.g. by expanding
// out a DNS hostname to IP addresses.
func (a NodeAddress) Resolve(ctx context.Context) ([]Endpoint, error)
```
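Hypothetical usage, resolving the first example address above into dialable endpoints:

```go
func endpointsFor(ctx context.Context) ([]Endpoint, error) {
    addr, err := ParseNodeAddress("mconn://nodeid@host.domain.com:25567/path")
    if err != nil {
        return nil, err
    }
    // One Endpoint per resolved IP address for the hostname.
    return addr.Resolve(ctx)
}
```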
#### Peer Manager
The P2P stack needs to track a lot of internal state about peers, such as their addresses, connection state, priorities, availability, failures, retries, and so on. This responsibility has been separated out to a `PeerManager`, which tracks this state for the `Router` (but does not maintain the actual transport connections themselves, as that is the router's responsibility).
The `PeerManager` is a synchronous state machine, where all state transitions are serialized (implemented as synchronous method calls holding an exclusive mutex lock). Most peer state is intentionally kept internal, stored in a `peerStore` database that persists it as appropriate, and the external interfaces pass the minimum amount of information necessary in order to avoid shared state between router goroutines. This design significantly simplifies the model, making it much easier to reason about and test than if it was baked into the asynchronous ball of concurrency that the P2P networking core must necessarily be. As peer lifecycle events are expected to be relatively infrequent, this should not significantly impact performance either.
The `Router` uses the `PeerManager` to request which peers to dial and evict, and reports in with peer lifecycle events such as connections, disconnections, and failures as they occur. The manager can reject these events (e.g. reject an inbound connection) by returning errors. This happens as follows:
* Outbound connections, via `Transport.Dial`:
* `DialNext()`: returns a peer address to dial, or blocks until one is available.
* `DialFailed()`: reports a peer dial failure.
* `Dialed()`: reports a peer dial success.
* `Ready()`: reports the peer as routed and ready.
* `Disconnected()`: reports a peer disconnection.
* Inbound connections, via `Transport.Accept`:
* `Accepted()`: reports an inbound peer connection.
* `Ready()`: reports the peer as routed and ready.
* `Disconnected()`: reports a peer disconnection.
* Evictions, via `Connection.Close`:
* `EvictNext()`: returns a peer to disconnect, or blocks until one is available.
* `Disconnected()`: reports a peer disconnection.
These calls have the following interface:
```go
// DialNext returns a peer address to dial, blocking until one is available.
func (m *PeerManager) DialNext(ctx context.Context) (NodeAddress, error)
// DialFailed reports a dial failure for the given address.
func (m *PeerManager) DialFailed(address NodeAddress) error
// Dialed reports a successful outbound connection to the given address.
func (m *PeerManager) Dialed(address NodeAddress) error
// Accepted reports a successful inbound connection from the given node.
func (m *PeerManager) Accepted(peerID NodeID) error
// Ready reports the peer as fully routed and ready for use.
func (m *PeerManager) Ready(peerID NodeID) error
// EvictNext returns a peer ID to disconnect, blocking until one is available.
func (m *PeerManager) EvictNext(ctx context.Context) (NodeID, error)
// Disconnected reports a peer disconnection.
func (m *PeerManager) Disconnected(peerID NodeID) error
```
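To show how these calls compose, here is a hedged sketch of what the router's outbound dial loop might look like; `dialAndHandshake` and `routePeer` are hypothetical stand-ins for router internals described later, not actual method names from the ADR:

```go
// dialPeers is an illustrative sketch of the router's outbound dial loop,
// driving the PeerManager transitions DialNext -> Dialed/DialFailed ->
// Ready -> Disconnected.
func (r *Router) dialPeers(ctx context.Context) {
	for {
		address, err := r.peerManager.DialNext(ctx)
		if err != nil {
			return // context canceled or manager shut down
		}
		go func() {
			conn, err := r.dialAndHandshake(ctx, address) // hypothetical helper
			if err != nil {
				_ = r.peerManager.DialFailed(address)
				return
			}
			if err := r.peerManager.Dialed(address); err != nil {
				_ = conn.Close() // e.g. rejected because a better peer took the slot
				return
			}
			_ = r.peerManager.Ready(address.NodeID)
			defer func() { _ = r.peerManager.Disconnected(address.NodeID) }()
			r.routePeer(address.NodeID, conn) // hypothetical routing loop
		}()
	}
}
```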
Internally, the `PeerManager` uses a numeric peer score to prioritize peers, e.g. when deciding which peers to dial next. The scoring policy has not yet been implemented, but should take into account e.g. node configuration such as `persistent_peers`, uptime and connection failures, performance, and so on. The manager will also attempt to automatically upgrade to better-scored peers by evicting lower-scored peers when a better one becomes available (e.g. when a persistent peer comes back online after an outage).
The `PeerManager` should also have an API for reporting peer behavior from reactors that affects its score (e.g. signing a block increases the score, double-voting decreases it or even bans the peer), but this has not yet been designed and implemented.
Additionally, the `PeerManager` provides `PeerUpdates` subscriptions that will receive `PeerUpdate` events whenever significant peer state changes happen. Reactors can use these e.g. to know when peers are connected or disconnected, and take appropriate action. This is currently fairly minimal:
```go
// Subscribe subscribes to peer updates. The caller must consume the peer updates
// in a timely fashion and close the subscription when done, to avoid stalling the
// PeerManager as delivery is semi-synchronous, guaranteed, and ordered.
func (m *PeerManager) Subscribe() *PeerUpdates
// PeerUpdate is a peer update event sent via PeerUpdates.
type PeerUpdate struct {
NodeID NodeID
Status PeerStatus
}
// PeerStatus is a peer status.
type PeerStatus string
const (
PeerStatusUp PeerStatus = "up" // Connected and ready.
PeerStatusDown PeerStatus = "down" // Disconnected.
)
// PeerUpdates is a real-time peer update subscription.
type PeerUpdates struct { ... }
// Updates returns a channel for consuming peer updates.
func (pu *PeerUpdates) Updates() <-chan PeerUpdate
// Close closes the peer updates subscription.
func (pu *PeerUpdates) Close()
```
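A reactor might consume the subscription like this (a minimal sketch using the documented API; reactor state handling is elided):

```go
// watchPeers logs peer status changes until the context is canceled.
func watchPeers(ctx context.Context, peerManager *p2p.PeerManager) {
	peerUpdates := peerManager.Subscribe()
	defer peerUpdates.Close()
	for {
		select {
		case update := <-peerUpdates.Updates():
			switch update.Status {
			case p2p.PeerStatusUp:
				fmt.Printf("peer %q is up\n", update.NodeID)
			case p2p.PeerStatusDown:
				fmt.Printf("peer %q is down\n", update.NodeID)
			}
		case <-ctx.Done():
			return
		}
	}
}
```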
The `PeerManager` will also be responsible for providing peer information to the PEX reactor that can be gossipped to other nodes. This requires an improved system for peer address detection and advertisement, that e.g. reliably detects peer and self addresses and only gossips private network addresses to other peers on the same network, but this system has not yet been fully designed and implemented.
### Channels
While low-level data exchange happens via the `Transport`, the high-level API is based on a bidirectional `Channel` that can send and receive Protobuf messages addressed by `NodeID`. A channel is identified by an arbitrary `ChannelID` identifier, and can exchange Protobuf messages of one specific type (since the type to unmarshal into must be predefined). Message delivery is asynchronous and at-most-once.
The channel can also be used to report peer errors, e.g. when receiving an invalid or malicious message. This may cause the peer to be disconnected or banned depending on `PeerManager` policy, but should probably be replaced by a broader peer behavior API that can also report good behavior.
A `Channel` has this interface:
```go
// ChannelID is an arbitrary channel ID.
type ChannelID uint16
// Channel is a bidirectional channel to exchange Protobuf messages with peers.
type Channel struct {
ID ChannelID // Channel ID.
In <-chan Envelope // Inbound messages (peers to reactors).
Out chan<- Envelope // Outbound messages (reactors to peers).
Error chan<- PeerError // Peer error reporting.
messageType proto.Message // Channel's message type, for e.g. unmarshaling.
}
// Close closes the channel, also closing Out and Error.
func (c *Channel) Close() error
// Envelope specifies the message receiver and sender.
type Envelope struct {
From NodeID // Sender (empty if outbound).
To NodeID // Receiver (empty if inbound).
Broadcast bool // Send to all connected peers, ignoring To.
Message proto.Message // Message payload.
}
// PeerError is a peer error reported via the Error channel.
type PeerError struct {
NodeID NodeID
Err error
}
```
A channel can reach any connected peer, and will automatically (un)marshal the Protobuf messages. Message scheduling and queueing is a `Router` implementation concern, and can use any number of algorithms such as FIFO, round-robin, priority queues, etc. Since message delivery is not guaranteed, both inbound and outbound messages may be dropped, buffered, reordered, or blocked as appropriate.
Since a channel can only exchange messages of a single type, it is often useful to use a wrapper message type with e.g. a Protobuf `oneof` field that specifies a set of inner message types that it can contain. The channel can automatically perform this (un)wrapping if the outer message type implements the `Wrapper` interface (see [Reactor Example](#reactor-example) for an example):
```go
// Wrapper is a Protobuf message that can contain a variety of inner messages.
// If a Channel's message type implements Wrapper, the channel will
// automatically (un)wrap passed messages using the container type, such that
// the channel can transparently support multiple message types.
type Wrapper interface {
proto.Message
// Wrap will take a message and wrap it in this one.
Wrap(proto.Message) error
// Unwrap will unwrap the inner message contained in this message.
Unwrap() (proto.Message, error)
}
```
### Routers
The router executes the P2P networking logic for a node, taking instructions from and reporting events to the `PeerManager`, maintaining transport connections to peers, and routing messages between channels and peers.
Practically all concurrency in the P2P stack has been moved into the router and reactors, while as many other responsibilities as possible have been moved into separate components such as the `Transport` and `PeerManager` that can remain largely synchronous. Limiting concurrency to a single core component makes it much easier to reason about since there is only a single concurrency structure, while the remaining components can be serial, simple, and easily testable.
The `Router` has a very minimal API, since it is mostly driven by `PeerManager` and `Transport` events:
```go
// Router maintains peer transport connections and routes messages between
// peers and channels.
type Router struct {
// Some details have been omitted below.
logger log.Logger
options RouterOptions
nodeInfo NodeInfo
privKey crypto.PrivKey
peerManager *PeerManager
transports []Transport
peerMtx sync.RWMutex
peerQueues map[NodeID]queue
channelMtx sync.RWMutex
channelQueues map[ChannelID]queue
}
// OpenChannel opens a new channel for the given message type. The caller must
// close the channel when done, before stopping the Router. messageType is the
// type of message passed through the channel.
func (r *Router) OpenChannel(id ChannelID, messageType proto.Message) (*Channel, error)
// Start starts the router, connecting to peers and routing messages.
func (r *Router) Start() error
// Stop stops the router, disconnecting from all peers and stopping message routing.
func (r *Router) Stop() error
```
All Go channel sends in the `Router` and reactors are blocking (the router also selects on signal channels for closure and shutdown). The responsibility for message scheduling, prioritization, backpressure, and load shedding is centralized in a core `queue` interface that is used at contention points (i.e. from all peers to a single channel, and from all channels to a single peer):
```go
// queue does QoS scheduling for Envelopes, enqueueing and dequeueing according
// to some policy. Queues are used at contention points, i.e.:
// - Receiving inbound messages to a single channel from all peers.
// - Sending outbound messages to a single peer from all channels.
type queue interface {
// enqueue returns a channel for submitting envelopes.
enqueue() chan<- Envelope
// dequeue returns a channel ordered according to some queueing policy.
dequeue() <-chan Envelope
// close closes the queue. After this call enqueue() will block, so the
// caller must select on closed() as well to avoid blocking forever. The
// enqueue() and dequeue() channels will not be closed.
close()
// closed returns a channel that's closed when the scheduler is closed.
closed() <-chan struct{}
}
```
The current implementation is `fifoQueue`, which is a simple unbuffered lossless queue that passes messages in the order they were received and blocks until the message is delivered (i.e. it is a Go channel). The router will need a more sophisticated queueing policy, but this has not yet been implemented.
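A minimal implementation consistent with that description might look as follows (a sketch, not necessarily the exact code; it assumes `Envelope` from the Channels section and `sync` from the standard library):

```go
// fifoQueue is an unbuffered lossless FIFO queue: enqueue() and dequeue()
// share a single unbuffered Go channel, so a send blocks until delivery.
type fifoQueue struct {
	queueCh   chan Envelope
	closeCh   chan struct{}
	closeOnce sync.Once
}

func newFIFOQueue() *fifoQueue {
	return &fifoQueue{
		queueCh: make(chan Envelope),
		closeCh: make(chan struct{}),
	}
}

func (q *fifoQueue) enqueue() chan<- Envelope { return q.queueCh }
func (q *fifoQueue) dequeue() <-chan Envelope { return q.queueCh }

func (q *fifoQueue) close() {
	q.closeOnce.Do(func() { close(q.closeCh) })
}

func (q *fifoQueue) closed() <-chan struct{} { return q.closeCh }
```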
The internal `Router` goroutine structure and design is described in the `Router` GoDoc, which is included below for reference:
```go
// On startup, three main goroutines are spawned to maintain peer connections:
//
// dialPeers(): in a loop, calls PeerManager.DialNext() to get the next peer
// address to dial and spawns a goroutine that dials the peer, handshakes
// with it, and begins to route messages if successful.
//
// acceptPeers(): in a loop, waits for an inbound connection via
// Transport.Accept() and spawns a goroutine that handshakes with it and
// begins to route messages if successful.
//
// evictPeers(): in a loop, calls PeerManager.EvictNext() to get the next
// peer to evict, and disconnects it by closing its message queue.
//
// When a peer is connected, an outbound peer message queue is registered in
// peerQueues, and routePeer() is called to spawn off two additional goroutines:
//
// sendPeer(): waits for an outbound message from the peerQueues queue,
// marshals it, and passes it to the peer transport which delivers it.
//
// receivePeer(): waits for an inbound message from the peer transport,
// unmarshals it, and passes it to the appropriate inbound channel queue
// in channelQueues.
//
// When a reactor opens a channel via OpenChannel, an inbound channel message
// queue is registered in channelQueues, and a channel goroutine is spawned:
//
// routeChannel(): waits for an outbound message from the channel, looks
// up the recipient peer's outbound message queue in peerQueues, and submits
// the message to it.
//
// All channel sends in the router are blocking. It is the responsibility of the
// queue interface in peerQueues and channelQueues to prioritize and drop
// messages as appropriate during contention to prevent stalls and ensure good
// quality of service.
```
### Reactor Example
While reactors are a first-class concept in the current P2P stack (i.e. there is an explicit `p2p.Reactor` interface), they will simply be a design pattern in the new stack, loosely defined as "something which listens on a channel and reacts to messages".
Since reactors have very few formal constraints, they can be implemented in a variety of ways. There is currently no recommended pattern for implementing reactors, to avoid overspecification and scope creep in this ADR. However, prototyping and developing a reactor pattern should be done early during implementation, to make sure reactors built using the `Channel` interface can satisfy the needs for convenience, deterministic tests, and reliability.
Below is a trivial example of a simple echo reactor implemented as a function. The reactor will exchange the following Protobuf messages:
```protobuf
message EchoMessage {
oneof inner {
PingMessage ping = 1;
PongMessage pong = 2;
}
}
message PingMessage {
string content = 1;
}
message PongMessage {
string content = 1;
}
```
Implementing the `Wrapper` interface for `EchoMessage` allows transparently passing `PingMessage` and `PongMessage` through the channel, where it will automatically be (un)wrapped in an `EchoMessage`:
```go
func (m *EchoMessage) Wrap(inner proto.Message) error {
switch inner := inner.(type) {
case *PingMessage:
m.Inner = &EchoMessage_PingMessage{Ping: inner}
case *PongMessage:
m.Inner = &EchoMessage_PongMessage{Pong: inner}
default:
return fmt.Errorf("unknown message %T", inner)
}
return nil
}
func (m *EchoMessage) Unwrap() (proto.Message, error) {
switch inner := m.Inner.(type) {
case *EchoMessage_PingMessage:
return inner.Ping, nil
case *EchoMessage_PongMessage:
return inner.Pong, nil
default:
return nil, fmt.Errorf("unknown message %T", inner)
}
}
```
The reactor itself would be implemented e.g. like this:
```go
// RunEchoReactor wires up an echo reactor to a router and runs it.
func RunEchoReactor(router *p2p.Router, peerManager *p2p.PeerManager) error {
channel, err := router.OpenChannel(1, &EchoMessage{})
if err != nil {
return err
}
defer channel.Close()
peerUpdates := peerManager.Subscribe()
defer peerUpdates.Close()
return EchoReactor(context.Background(), channel, peerUpdates)
}
// EchoReactor provides an echo service, pinging all known peers until the given
// context is canceled.
func EchoReactor(ctx context.Context, channel *p2p.Channel, peerUpdates *p2p.PeerUpdates) error {
ticker := time.NewTicker(5 * time.Second)
defer ticker.Stop()
for {
select {
// Send ping message to all known peers every 5 seconds.
case <-ticker.C:
channel.Out <- Envelope{
Broadcast: true,
Message: &PingMessage{Content: "👋"},
}
// When we receive a message from a peer, either respond to ping, output
// pong, or report peer error on unknown message type.
case envelope := <-channel.In:
switch msg := envelope.Message.(type) {
case *PingMessage:
channel.Out <- Envelope{
To: envelope.From,
Message: &PongMessage{Content: msg.Content},
}
case *PongMessage:
fmt.Printf("%q replied with %q\n", envelope.From, msg.Content)
default:
channel.Error <- PeerError{
NodeID: envelope.From,
Err: fmt.Errorf("unexpected message %T", msg),
}
}
// Output info about any peer status changes.
case peerUpdate := <-peerUpdates.Updates():
fmt.Printf("Peer %q changed status to %q\n", peerUpdate.NodeID, peerUpdate.Status)
// Exit when context is canceled.
case <-ctx.Done():
return nil
}
}
}
```
## Status
Partially implemented ([#5670](https://github.com/tendermint/tendermint/issues/5670))
## Consequences
### Positive
* Reduced coupling and simplified interfaces should lead to better understandability, increased reliability, and more testing.
* Using message passing via Go channels gives better control of backpressure and quality-of-service scheduling.
* Peer lifecycle and connection management is centralized in a single entity, making it easier to reason about.
* Detection, advertisement, and exchange of node addresses will be improved.
* Additional transports (e.g. QUIC) can be implemented and used in parallel with the existing MConn protocol.
* The P2P protocol will not be broken in the initial version, if possible.
### Negative
* Fully implementing the new design as intended is likely to require breaking changes to the P2P protocol at some point, although the initial implementation shouldn't.
* Gradually migrating the existing stack and maintaining backwards-compatibility will be more labor-intensive than simply replacing the entire stack.
* A complete overhaul of P2P internals is likely to cause temporary performance regressions and bugs as the implementation matures.
* Hiding peer management information inside the `PeerManager` may prevent certain functionality or require additional deliberate interfaces for information exchange, as a tradeoff to simplify the design, reduce coupling, and avoid race conditions and lock contention.
### Neutral
* Implementation details around e.g. peer management, message scheduling, and peer and endpoint advertisement are not yet determined.
## References
* [ADR 061: P2P Refactor Scope](adr-061-p2p-refactor-scope.md)
* [#5670 p2p: internal refactor and architecture redesign](https://github.com/tendermint/tendermint/issues/5670)


@@ -0,0 +1,109 @@
# ADR 063: Privval gRPC
## Changelog
- 23/11/2020: Initial Version (@marbar3778)
## Context
Validators use remote signers to help secure their keys. This system is Tendermint's recommended way to secure validators, but the path to integration with Tendermint's private validator client is plagued with custom protocols.
Tendermint uses its own custom secure connection protocol (`SecretConnection`) and a raw tcp/unix socket connection protocol. The secure connection protocol was until recently exposed to man-in-the-middle attacks and can take longer to integrate when not using Golang. The raw tcp connection protocol is less custom, but has been causing minor issues for users.
Migrating Tendermint's private validator client to a widely adopted protocol, gRPC, will ease the current maintenance and integration burden experienced with the current protocol.
## Decision
After discussing with multiple stakeholders, [gRPC](https://grpc.io/) was decided on to replace the current private validator protocol. gRPC is a widely adopted protocol in the micro-service and cloud infrastructure world. gRPC uses [protocol-buffers](https://developers.google.com/protocol-buffers) to describe its services, providing a language-agnostic implementation. Tendermint already uses protobuf for on-disk and over-the-wire encoding, making the integration with gRPC simpler.
## Alternative Approaches
- JSON-RPC: We did not consider JSON-RPC because Tendermint uses protobuf extensively making gRPC a natural choice.
## Detailed Design
With the recent integration of [Protobuf](https://developers.google.com/protocol-buffers) into Tendermint, the changes needed to migrate from the current private validator protocol to gRPC are not large.
The [service definition](https://grpc.io/docs/what-is-grpc/core-concepts/#service-definition) for gRPC will be defined as:
```proto
service PrivValidatorAPI {
  rpc GetPubKey(tendermint.proto.privval.PubKeyRequest) returns (tendermint.proto.privval.PubKeyResponse);
  rpc SignVote(tendermint.proto.privval.SignVoteRequest) returns (tendermint.proto.privval.SignedVoteResponse);
  rpc SignProposal(tendermint.proto.privval.SignProposalRequest) returns (tendermint.proto.privval.SignedProposalResponse);
}

// PubKeyRequest is a request for the public key.
message PubKeyRequest {
  string chain_id = 1;
}

// PubKeyResponse is a response message containing the public key.
message PubKeyResponse {
  tendermint.crypto.PublicKey pub_key = 1 [(gogoproto.nullable) = false];
}

// SignVoteRequest is a request to sign a vote.
message SignVoteRequest {
  tendermint.types.Vote vote = 1;
  string chain_id = 2;
}

// SignedVoteResponse is a response containing a signed vote or an error.
message SignedVoteResponse {
  tendermint.types.Vote vote = 1 [(gogoproto.nullable) = false];
}

// SignProposalRequest is a request to sign a proposal.
message SignProposalRequest {
  tendermint.types.Proposal proposal = 1;
  string chain_id = 2;
}

// SignedProposalResponse is a response containing a signed proposal or an error.
message SignedProposalResponse {
  tendermint.types.Proposal proposal = 1 [(gogoproto.nullable) = false];
}
```
> Note: Remote Signer errors are removed in favor of [grpc status error codes](https://grpc.io/docs/guides/error/).
In previous versions of the remote signer protocol, Tendermint acted as the server and the remote signer as the client: the client established a long-lived connection, providing a way for the server to make requests to the client. The new version simplifies this by making Tendermint the client and the remote signer the server. This follows conventional client-server architecture and simplifies the protocol.
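Assuming standard protoc-generated Go stubs for the service above, Tendermint's side might look like the sketch below. The `privvalpb` and `typespb` import aliases, the helper name, and the stub names (which follow the usual protoc-gen-go-grpc conventions) are illustrative, not taken from the ADR:

```go
// signVote sketches Tendermint calling the remote signer over gRPC.
func signVote(ctx context.Context, addr string, creds credentials.TransportCredentials,
	vote *typespb.Vote, chainID string) (*typespb.Vote, error) {
	conn, err := grpc.Dial(addr, grpc.WithTransportCredentials(creds))
	if err != nil {
		return nil, err
	}
	defer conn.Close()
	client := privvalpb.NewPrivValidatorAPIClient(conn)
	// Remote signer failures surface as gRPC status errors rather than
	// the old custom remote signer error messages.
	resp, err := client.SignVote(ctx, &privvalpb.SignVoteRequest{Vote: vote, ChainId: chainID})
	if err != nil {
		return nil, err
	}
	return &resp.Vote, nil
}
```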
#### Keep Alive
If you have worked on the private validator system you will see that we are removing the `PingRequest` and `PingResponse` messages. These messages were used to keep the connection alive. gRPC has a [keep alive feature](https://github.com/grpc/grpc/blob/master/doc/keepalive.md) that will be adopted alongside the integration to provide the same functionality.
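For reference, gRPC client-side keepalive is configured via dial options; a sketch with illustrative intervals (real values would be tuned, and possibly made configurable, during integration):

```go
import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/keepalive"
)

// dialSigner dials the remote signer with keepalive pings replacing the old
// PingRequest/PingResponse messages. The intervals below are illustrative.
func dialSigner(addr string, creds credentials.TransportCredentials) (*grpc.ClientConn, error) {
	return grpc.Dial(addr,
		grpc.WithTransportCredentials(creds),
		grpc.WithKeepaliveParams(keepalive.ClientParameters{
			Time:                10 * time.Second, // ping when the connection is idle this long
			Timeout:             5 * time.Second,  // wait this long for a ping ack
			PermitWithoutStream: true,             // ping even without active RPCs
		}))
}
```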
#### Metrics
Remote signers are crucial to operating secure and highly available validators. In the past there were no metrics to tell the operator that something was wrong, other than the node not signing. Metrics will be integrated into the client and the provided server with [prometheus](https://github.com/grpc-ecosystem/go-grpc-prometheus), and exposed through the node's Prometheus export for node operators.
#### Security
[TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security) is widely adopted with the use of gRPC. There are various forms of TLS (one-way & two-way). One-way TLS has the client verify the server's identity, while two-way TLS has both parties verify each other. For Tendermint's use case, having both parties identify each other adds an extra layer of security. This requires users to generate both client and server certificates for the TLS connection.
An insecure option will be provided for users who do not wish to secure the connection.
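A sketch of the client-side setup for two-way (mutual) TLS, using Go's standard library and gRPC's credentials package; the helper name and file paths are placeholders:

```go
import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"os"

	"google.golang.org/grpc/credentials"
)

// mutualTLSCredentials builds client-side credentials for two-way TLS:
// the client presents its own certificate and verifies the server's
// certificate against the given CA.
func mutualTLSCredentials(certFile, keyFile, caFile string) (credentials.TransportCredentials, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, errors.New("failed to parse CA certificate")
	}
	return credentials.NewTLS(&tls.Config{
		Certificates: []tls.Certificate{cert},
		RootCAs:      pool,
	}), nil
}
```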
#### Upgrade Path
This is a largely breaking change for validator operators. The optimal upgrade path is to release gRPC support in a minor release, allowing key management systems to migrate to the new protocol, and to remove the current system (raw tcp/unix socket) in the next major release. This lets users migrate to the new system without having to coordinate a key management system upgrade alongside a network upgrade.
The upgrade of [tmkms](https://github.com/iqlusioninc/tmkms) will be coordinated with Iqlusion. They will be able to make the necessary upgrades to allow users to migrate to gRPC from the current protocol.
## Status
Implemented
## Consequences
### Positive
- Use an adopted standard for secure communication. (TLS)
- Use an adopted communication protocol. (gRPC)
- Requests are multiplexed onto the tcp connection. (http/2)
- Language agnostic service definition.
### Negative
- Users will need to generate certificates to use TLS. (Added step)
- Users will need to find a key management system that supports gRPC
### Neutral


@@ -0,0 +1,90 @@
# ADR 064: Batch Verification
## Changelog
- January 28, 2021: Created (@marbar3778)
## Context
Tendermint uses public-key cryptography for validator signing. When a block is proposed and voted on, validators sign a message representing acceptance of the block; rejection is signaled via a nil vote. These signatures are also used to verify that previous blocks are correct when a node is syncing. Currently, Tendermint requires each signature to be verified individually, which slows down block times.
Batch verification is the process of taking many messages, keys, and signatures and verifying them all at once. The public keys can all be the same, which would mean a single user signed many messages, but in our case each public key is unique: each validator has its own key and contributes a unique message. The algorithm varies from curve to curve, but the performance benefit over verifying each message, public key, and signature individually is shared.
## Alternative Approaches
- Signature aggregation
- Signature aggregation is an alternative to batch verification. Signature aggregation leads to fast verification and smaller block sizes. At the time of writing this ADR there is ongoing work to enable signature aggregation in Tendermint. The reason we have opted not to introduce it at this time is that every validator signs a unique message.
Signing a unique message prevents aggregation before verification. For example, if we were to implement signature aggregation with BLS, there could be a potential slowdown of 10x-100x in verification speed.
## Decision
Adopt Batch Verification.
## Detailed Design
A new interface will be introduced. This interface will have two methods, `Add` and `Verify`, with a `NewBatchVerifier` constructor to create new verifiers.
```go
type BatchVerifier interface {
Add(key crypto.Pubkey, signature, message []byte) error // Add appends an entry into the BatchVerifier.
Verify() bool // Verify verifies all the entries in the BatchVerifier. If the verification fails it is unknown which entry failed and each entry will need to be verified individually.
}
```
- `NewBatchVerifier` creates a new verifier. This verifier will be populated with entries to be verified.
- `Add` adds an entry to the verifier. Add accepts a public key and two byte slices (signature and message).
- `Verify` verifies all the entries. At the end of `Verify`, if the underlying API does not reset the verifier to its initial (empty) state, it should be reset here. This prevents accidentally reusing the verifier with entries from a previous verification.
Above there is mention of an entry. An entry can be constructed in many ways depending on the needs of the underlying curve. A simple approach would be:
```go
type entry struct {
pubKey crypto.Pubkey
signature []byte
message []byte
}
```
The main reason this approach is being taken is to prevent simple mistakes. Some APIs let the user create three parallel slices and pass them to a single batch-verify function (see example below), but this relies on the user to safely generate all the slices. We would like to minimize the possibility of making a mistake.
```go
func Verify(keys []crypto.Pubkey, signatures, messages [][]byte) bool
```
This change will not affect users in any way other than faster verification times.
This new API will be used for verification in both consensus and block syncing. Within the current verify functions there will be a check to see if the key type supports the batch verification API. If it does, batch verification will be executed; if not, single-signature verification will be used.
#### Consensus
The process within consensus will be to wait for 2/3+ of the votes to be received. Once they are received, `Verify()` will be called to batch-verify all the messages. Messages that arrive after the 2/3+ threshold has been verified will be verified individually.
#### Block Sync & Light Client
The process for block sync & light client verification will be to verify only 2/3+ in a batch style. Since these processes are not participating in consensus there is no need to wait for more messages.
If batch verification fails for any reason, it will not be known which entry caused the failure. Verification will need to revert to single-signature verification.
Starting out, only ed25519 will support batch verification.
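As an illustration using the ed25519consensus library listed in the references (API details should be confirmed against the library), batch verification with the single-signature fallback described above might look like:

```go
import (
	"crypto/ed25519"

	"github.com/hdevalence/ed25519consensus"
)

// verifyBatch batch-verifies a set of ed25519 signatures, falling back to
// individual verification when the batch fails, since the failing entry
// within a batch cannot be identified.
func verifyBatch(keys []ed25519.PublicKey, msgs, sigs [][]byte) bool {
	v := ed25519consensus.NewBatchVerifier()
	for i := range keys {
		v.Add(keys[i], msgs[i], sigs[i])
	}
	if v.Verify() {
		return true
	}
	// The batch failed: verify each signature individually to find the bad entry.
	for i := range keys {
		if !ed25519.Verify(keys[i], msgs[i], sigs[i]) {
			return false
		}
	}
	return true
}
```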
## Status
Implemented
### Positive
- Faster verification times, if the curve supports it
### Negative
- No way to see which key failed verification
- A failure means reverting back to single signature verification.
### Neutral
## References
[Ed25519 Library](https://github.com/hdevalence/ed25519consensus)
[Ed25519 spec](https://ed25519.cr.yp.to/)
[Signature Aggregation for votes](https://github.com/tendermint/tendermint/issues/1319)
[Proposer-based timestamps](https://github.com/tendermint/tendermint/issues/2840)


@@ -0,0 +1,425 @@
# ADR 065: Custom Event Indexing
- [ADR 065: Custom Event Indexing](#adr-065-custom-event-indexing)
- [Changelog](#changelog)
- [Status](#status)
- [Context](#context)
- [Alternative Approaches](#alternative-approaches)
- [Decision](#decision)
- [Detailed Design](#detailed-design)
- [EventSink](#eventsink)
- [Supported Sinks](#supported-sinks)
- [`KVEventSink`](#kveventsink)
- [`PSQLEventSink`](#psqleventsink)
- [Configuration](#configuration)
- [Future Improvements](#future-improvements)
- [Consequences](#consequences)
- [Positive](#positive)
- [Negative](#negative)
- [Neutral](#neutral)
- [References](#references)
## Changelog
- April 1, 2021: Initial Draft (@alexanderbez)
- April 28, 2021: Specify search capabilities are only supported through the KV indexer (@marbar3778)
- May 19, 2021: Update the SQL schema and the eventsink interface (@jayt106)
- Aug 30, 2021: Update the SQL schema and the psql implementation (@creachadair)
- Oct 5, 2021: Clarify goals and implementation changes (@creachadair)
## Status
Accepted
## Context
Currently, Tendermint Core supports block and transaction event indexing through
the `tx_index.indexer` configuration. Events are captured in transactions and
are indexed via a `TxIndexer` type. Events are captured in blocks, specifically
from `BeginBlock` and `EndBlock` application responses, and are indexed via a
`BlockIndexer` type. Both of these types are managed by a single `IndexerService`
which is responsible for consuming events and sending those events off to be
indexed by the respective type.
In addition to indexing, Tendermint Core also supports the ability to query for
both indexed transaction and block events via Tendermint's RPC layer. The ability
to query for these indexed events facilitates a great multitude of upstream client
and application capabilities, e.g. block explorers, IBC relayers, and auxiliary
data availability and indexing services.
Currently, Tendermint only supports indexing via a `kv` indexer, which is supported
by an underlying embedded key/value store database. The `kv` indexer implements
its own indexing and query mechanisms. While the former is somewhat trivial,
providing a rich and flexible query layer is not as trivial and has caused many
issues and UX concerns for upstream clients and applications.
The fragile nature of the proprietary `kv` query engine and the potential
performance and scaling issues that arise when a large number of consumers are
introduced, motivate the need for a more robust and flexible indexing and query
solution.
## Alternative Approaches
With regards to alternative approaches to a more robust solution, the only serious
contender that was considered was to transition to using [SQLite](https://www.sqlite.org/index.html).
While the approach would work, it locks us into a specific query language and
storage layer, so in some ways it's only a bit better than our current approach.
In addition, the implementation would require the introduction of CGO into the
Tendermint Core stack, whereas right now CGO is only introduced depending on
the database used.
## Decision
We will adopt a similar approach to that of the Cosmos SDK's `KVStore` state
listening described in [ADR-038](https://github.com/cosmos/cosmos-sdk/blob/master/docs/architecture/adr-038-state-listening.md).
We will implement the following changes:
- Introduce a new interface, `EventSink`, that all data sinks must implement.
- Augment the existing `tx_index.indexer` configuration to now accept a series
of one or more indexer types, i.e., sinks.
- Combine the current `TxIndexer` and `BlockIndexer` into a single `KVEventSink`
that implements the `EventSink` interface.
- Introduce an additional `EventSink` implementation that is backed by
[PostgreSQL](https://www.postgresql.org/).
- Implement the necessary schemas to support both block and transaction event indexing.
- Update `IndexerService` to use a series of `EventSinks`.
In addition:
- The Postgres indexer implementation will _not_ implement the proprietary `kv`
query language. Users wishing to write queries against the Postgres indexer
will connect to the underlying DBMS directly and use SQL queries based on the
indexing schema.
Future custom indexer implementations will not be required to support the
proprietary query language either.
- For now, the existing `kv` indexer will be left in place with its current
query support, but will be marked as deprecated in a subsequent release, and
the documentation will be updated to encourage users who need to query the
event index to migrate to the Postgres indexer.
- In the future we may remove the `kv` indexer entirely, or replace it with a
different implementation; that decision is deferred as future work.
- In the future, we may remove the index query endpoints from the RPC service
entirely; that decision is deferred as future work, but recommended.
## Detailed Design
### EventSink
We introduce the `EventSink` interface type that all supported sinks must implement.
The interface is defined as follows:
```go
type EventSink interface {
IndexBlockEvents(types.EventDataNewBlockHeader) error
IndexTxEvents([]*abci.TxResult) error
SearchBlockEvents(context.Context, *query.Query) ([]int64, error)
SearchTxEvents(context.Context, *query.Query) ([]*abci.TxResult, error)
GetTxByHash([]byte) (*abci.TxResult, error)
HasBlock(int64) (bool, error)
Type() EventSinkType
Stop() error
}
```
The `IndexerService` will accept a list of one or more `EventSink` types. During
the `OnStart` method it will call the appropriate APIs on each `EventSink` to
index both block and transaction events.
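A sketch of how the service might fan events out to its sinks (illustrative only; the real service also manages event subscriptions and lifecycle):

```go
// indexAll dispatches a block's events to every configured sink, stopping at
// the first failure so indexing errors surface early.
func indexAll(sinks []EventSink, block types.EventDataNewBlockHeader, txs []*abci.TxResult) error {
	for _, sink := range sinks {
		if err := sink.IndexBlockEvents(block); err != nil {
			return fmt.Errorf("indexing block events in %q sink: %w", sink.Type(), err)
		}
		if len(txs) > 0 {
			if err := sink.IndexTxEvents(txs); err != nil {
				return fmt.Errorf("indexing tx events in %q sink: %w", sink.Type(), err)
			}
		}
	}
	return nil
}
```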
### Supported Sinks
We will initially support two `EventSink` types out of the box.
#### `KVEventSink`
This type of `EventSink` is a combination of the `TxIndexer` and `BlockIndexer`
indexers, both of which are backed by a single embedded key/value database.
The bulk of the existing business logic will remain the same, with the existing APIs
mapped to the new `EventSink` API. Both types will be removed in favor of a single
`KVEventSink` type.
The `KVEventSink` will be the only `EventSink` enabled by default, so from a UX
perspective, operators should not notice a difference apart from a configuration
change.
We omit `EventSink` implementation details as it should be fairly straightforward
to map the existing business logic to the new APIs.
#### `PSQLEventSink`
This type of `EventSink` indexes block and transaction events into a [PostgreSQL](https://www.postgresql.org/)
database. We define and automatically migrate the following schema when the
`IndexerService` starts.
The Postgres event sink will not support `tx_search`, `block_search`, `GetTxByHash` and `HasBlock`.
```sql
-- Table Definition ----------------------------------------------
-- The blocks table records metadata about each block.
-- The block record does not include its events or transactions (see tx_results).
CREATE TABLE blocks (
rowid BIGSERIAL PRIMARY KEY,
height BIGINT NOT NULL,
chain_id VARCHAR NOT NULL,
-- When this block header was logged into the sink, in UTC.
created_at TIMESTAMPTZ NOT NULL,
UNIQUE (height, chain_id)
);
-- Index blocks by height and chain, since we need to resolve block IDs when
-- indexing transaction records and transaction events.
CREATE INDEX idx_blocks_height_chain ON blocks(height, chain_id);
-- The tx_results table records metadata about transaction results. Note that
-- the events from a transaction are stored separately.
CREATE TABLE tx_results (
rowid BIGSERIAL PRIMARY KEY,
-- The block to which this transaction belongs.
block_id BIGINT NOT NULL REFERENCES blocks(rowid),
-- The sequential index of the transaction within the block.
index INTEGER NOT NULL,
-- When this result record was logged into the sink, in UTC.
created_at TIMESTAMPTZ NOT NULL,
-- The hex-encoded hash of the transaction.
tx_hash VARCHAR NOT NULL,
-- The protobuf wire encoding of the TxResult message.
tx_result BYTEA NOT NULL,
UNIQUE (block_id, index)
);
-- The events table records events. All events (both block and transaction) are
-- associated with a block ID; transaction events also have a transaction ID.
CREATE TABLE events (
rowid BIGSERIAL PRIMARY KEY,
-- The block and transaction this event belongs to.
-- If tx_id is NULL, this is a block event.
block_id BIGINT NOT NULL REFERENCES blocks(rowid),
tx_id BIGINT NULL REFERENCES tx_results(rowid),
-- The application-defined type label for the event.
type VARCHAR NOT NULL
);
-- The attributes table records event attributes.
CREATE TABLE attributes (
event_id BIGINT NOT NULL REFERENCES events(rowid),
key VARCHAR NOT NULL, -- bare key
composite_key VARCHAR NOT NULL, -- composed type.key
value VARCHAR NULL,
UNIQUE (event_id, key)
);
-- A joined view of events and their attributes. Events that do not have any
-- attributes are represented as a single row with empty key and value fields.
CREATE VIEW event_attributes AS
SELECT block_id, tx_id, type, key, composite_key, value
FROM events LEFT JOIN attributes ON (events.rowid = attributes.event_id);
-- A joined view of all block events (those having tx_id NULL).
CREATE VIEW block_events AS
SELECT blocks.rowid as block_id, height, chain_id, type, key, composite_key, value
FROM blocks JOIN event_attributes ON (blocks.rowid = event_attributes.block_id)
WHERE event_attributes.tx_id IS NULL;
-- A joined view of all transaction events.
CREATE VIEW tx_events AS
SELECT height, index, chain_id, type, key, composite_key, value, tx_results.created_at
FROM blocks JOIN tx_results ON (blocks.rowid = tx_results.block_id)
JOIN event_attributes ON (tx_results.rowid = event_attributes.tx_id)
WHERE event_attributes.tx_id IS NOT NULL;
```
The `PSQLEventSink` will implement the `EventSink` interface as follows
(some details omitted for brevity):
```go
func NewEventSink(connStr, chainID string) (*EventSink, error) {
db, err := sql.Open(driverName, connStr)
// ...
return &EventSink{
store: db,
chainID: chainID,
}, nil
}
func (es *EventSink) IndexBlockEvents(h types.EventDataNewBlockHeader) error {
ts := time.Now().UTC()
return runInTransaction(es.store, func(tx *sql.Tx) error {
// Add the block to the blocks table and report back its row ID for use
// in indexing the events for the block.
blockID, err := queryWithID(tx, `
INSERT INTO blocks (height, chain_id, created_at)
VALUES ($1, $2, $3)
ON CONFLICT DO NOTHING
RETURNING rowid;
`, h.Header.Height, es.chainID, ts)
// ...
// Insert the special block meta-event for height.
if err := insertEvents(tx, blockID, 0, []abci.Event{
makeIndexedEvent(types.BlockHeightKey, fmt.Sprint(h.Header.Height)),
}); err != nil {
return fmt.Errorf("block meta-events: %w", err)
}
// Insert all the block events. Order is important here.
if err := insertEvents(tx, blockID, 0, h.ResultBeginBlock.Events); err != nil {
return fmt.Errorf("begin-block events: %w", err)
}
if err := insertEvents(tx, blockID, 0, h.ResultEndBlock.Events); err != nil {
return fmt.Errorf("end-block events: %w", err)
}
return nil
})
}
func (es *EventSink) IndexTxEvents(txrs []*abci.TxResult) error {
ts := time.Now().UTC()
for _, txr := range txrs {
// Encode the result message in protobuf wire format for indexing.
resultData, err := proto.Marshal(txr)
// ...
// Index the hash of the underlying transaction as a hex string.
txHash := fmt.Sprintf("%X", types.Tx(txr.Tx).Hash())
if err := runInTransaction(es.store, func(tx *sql.Tx) error {
// Find the block associated with this transaction.
blockID, err := queryWithID(tx, `
SELECT rowid FROM blocks WHERE height = $1 AND chain_id = $2;
`, txr.Height, es.chainID)
// ...
// Insert a record for this tx_result and capture its ID for indexing events.
txID, err := queryWithID(tx, `
INSERT INTO tx_results (block_id, index, created_at, tx_hash, tx_result)
VALUES ($1, $2, $3, $4, $5)
ON CONFLICT DO NOTHING
RETURNING rowid;
`, blockID, txr.Index, ts, txHash, resultData)
// ...
// Insert the special transaction meta-events for hash and height.
if err := insertEvents(tx, blockID, txID, []abci.Event{
makeIndexedEvent(types.TxHashKey, txHash),
makeIndexedEvent(types.TxHeightKey, fmt.Sprint(txr.Height)),
}); err != nil {
return fmt.Errorf("indexing transaction meta-events: %w", err)
}
// Index any events packaged with the transaction.
if err := insertEvents(tx, blockID, txID, txr.Result.Events); err != nil {
return fmt.Errorf("indexing transaction events: %w", err)
}
return nil
}); err != nil {
return err
}
}
return nil
}
// SearchBlockEvents is not implemented by this sink, and reports an error for all queries.
func (es *EventSink) SearchBlockEvents(ctx context.Context, q *query.Query) ([]int64, error)
// SearchTxEvents is not implemented by this sink, and reports an error for all queries.
func (es *EventSink) SearchTxEvents(ctx context.Context, q *query.Query) ([]*abci.TxResult, error)
// GetTxByHash is not implemented by this sink, and reports an error for all queries.
func (es *EventSink) GetTxByHash(hash []byte) (*abci.TxResult, error)
// HasBlock is not implemented by this sink, and reports an error for all queries.
func (es *EventSink) HasBlock(h int64) (bool, error)
```
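Since the Postgres sink deliberately does not implement the `kv` query language, consumers would query the schema directly. A sketch of such a consumer-side query using `database/sql` with a Postgres driver (the helper name and query are illustrative, assuming the `tx_events` view defined above):

```go
// listTxEvents prints the attributes of all events of a given type at a
// given height, reading from the tx_events view.
func listTxEvents(ctx context.Context, db *sql.DB, height int64, eventType string) error {
	rows, err := db.QueryContext(ctx,
		`SELECT key, value FROM tx_events WHERE height = $1 AND type = $2;`,
		height, eventType)
	if err != nil {
		return err
	}
	defer rows.Close()
	for rows.Next() {
		var key string
		var value sql.NullString // attribute values may be NULL
		if err := rows.Scan(&key, &value); err != nil {
			return err
		}
		fmt.Printf("%s = %s\n", key, value.String)
	}
	return rows.Err()
}
```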
### Configuration
The current `tx_index.indexer` configuration would be changed to accept a list
of supported `EventSink` types instead of a single value.
Example:
```toml
[tx_index]
indexer = [
"kv",
"psql"
]
```
If the `indexer` list contains the `null` indexer, then no indexers will be used
regardless of what other values may exist.
Additional configuration parameters might be required depending on what event
sinks are supplied to `tx_index.indexer`. The `psql` sink will require an additional
connection configuration.
```toml
[tx_index]
indexer = [
"kv",
"psql"
]
psql_conn = "postgresql://<user>:<password>@<host>:<port>/<db>?<opts>"
```
Any invalid or misconfigured `tx_index` configuration should yield an error as
early as possible.
## Future Improvements
Although not technically required to maintain feature parity with the current
existing Tendermint indexer, it would be beneficial for operators to have a method
of performing a "re-index". Specifically, Tendermint operators could invoke an
RPC method that allows the Tendermint node to perform a re-indexing of all block
and transaction events between two given heights, H<sub>1</sub> and H<sub>2</sub>,
so long as the block store contains the blocks and transaction results for all
the heights specified in a given range.
## Consequences
### Positive
- A more robust and flexible indexing and query engine for indexing and searching
 block and transaction events.
- The ability to not have to support a custom indexing and query engine beyond
the legacy `kv` type.
- The ability to offload/proxy indexing and querying to the underlying sink.
- Scalability and reliability that essentially comes "for free" from the underlying
sink, if it supports it.
### Negative
- The need to support multiple and potentially a growing set of custom `EventSink`
types.
### Neutral
## References
- [Cosmos SDK ADR-038](https://github.com/cosmos/cosmos-sdk/blob/master/docs/architecture/adr-038-state-listening.md)
- [PostgreSQL](https://www.postgresql.org/)
- [SQLite](https://www.sqlite.org/index.html)


@@ -0,0 +1,140 @@
# ADR 66: End-to-End Testing
## Changelog
- 2020-09-07: Initial draft (@erikgrinaker)
- 2020-09-08: Minor improvements (@erikgrinaker)
- 2021-04-12: Renamed from RFC 001 (@tessr)
## Authors
- Erik Grinaker (@erikgrinaker)
## Context
The current set of end-to-end tests under `test/` are very limited, mostly focusing on P2P testing in a standard configuration. They do not test various configurations (e.g. fast sync reactor versions, state sync, block pruning, genesis vs InitChain setup), nor do they test various network topologies (e.g. sentry node architecture). This leads to poor test coverage, which has caused several serious bugs to go unnoticed.
We need an end-to-end test suite that can run a large number of combinations of configuration options, genesis settings, network topologies, ABCI interactions, and failure scenarios and check that the network is still functional. This ADR outlines the basic requirements and design for such a system.
This ADR will not cover comprehensive chaos testing, only a few simple scenarios (e.g. abrupt process termination and network partitioning). Chaos testing of the core consensus algorithm should be implemented e.g. via Jepsen tests or a similar framework, or alternatively be added to these end-to-end tests at a later time. Similarly, malicious or adversarial behavior is out of scope for the first implementation, but may be added later.
## Proposal
### Functional Coverage
The following lists the functionality we would like to test:
#### Environments
- **Topology:** single node, 4 nodes (seeds and persistent), sentry architecture, NAT (UPnP)
- **Networking:** IPv4, IPv6
- **ABCI connection:** UNIX socket, TCP, gRPC
- **PrivVal:** file, UNIX socket, TCP
#### Node/App Configurations
- **Database:** goleveldb, cleveldb, boltdb, rocksdb, badgerdb
- **Fast sync:** disabled, v0, v2
- **State sync:** disabled, enabled
- **Block pruning:** none, keep 20, keep 1, keep random
- **Role:** validator, full node
- **App persistence:** enabled, disabled
- **Node modes:** validator, full, light, seed
#### Geneses
- **Validators:** none (InitChain), given
- **Initial height:** 1, 1000
- **App state:** none, given
#### Behaviors
- **Recovery:** stop/start, power cycling, validator outage, network partition, total network loss
- **Validators:** add, remove, change power
- **Evidence:** injection of DuplicateVoteEvidence and LightClientAttackEvidence
### Functional Combinations
Running separate tests for all combinations of the above functionality is not feasible, as there are millions of them. However, the functionality can be grouped into three broad classes:
- **Global:** affects the entire network, needing a separate testnet for each combination (e.g. topology, network protocol, genesis settings)
- **Local:** affects a single node, and can be varied per node in a testnet (e.g. ABCI/privval connections, database backend, block pruning)
- **Temporal:** can be run after each other in the same testnet (e.g. recovery and validator changes)
Thus, we can run separate testnets for all combinations of global options (on the order of 100). In each testnet, we run nodes with randomly generated node configurations optimized for broad coverage (i.e. if one node is using GoLevelDB, then no other node should use it if possible). And in each testnet, we sequentially and randomly pick nodes to stop/start, power cycle, add/remove, disconnect, and so on.
All of the settings should be specified in a testnet configuration (or alternatively the seed that generated it) such that it can be retrieved from CI and debugged locally.
A custom ABCI application will have to be built that can exhibit the necessary behavior (e.g. make validator changes, prune blocks, enable/disable persistence, and so on).
### Test Stages
Given a test configuration, the test runner has the following stages:
- **Setup:** configures the Docker containers and networks, but does not start them.
- **Initialization:** starts the Docker containers, performs fast sync/state sync. Accommodates different start heights.
- **Perturbation:** adds/removes validators, restarts nodes, perturbs networking, etc., with liveness and readiness checked between each operation.
- **Testing:** runs RPC tests independently against all network nodes, making sure data matches expectations and invariants hold.
### Tests
The general approach will be to put the network through a sequence of operations (see stages above), check basic liveness and readiness after each operation, and then once the network stabilizes run an RPC test suite against each node in the network.
The test suite will do black-box testing against a single node's RPC service. We will be testing the behavior of the network as a whole, e.g. that a fast synced node correctly catches up to the chain head and serves basic block data via RPC. Thus the tests will not send e.g. P2P messages or examine the node database, as these are considered internal implementation details - if the network behaves correctly, presumably the internal components function correctly. Comprehensive component testing (e.g. each and every RPC method parameter) should be done via unit/integration tests.
The tests must take into account the node configuration (e.g. some nodes may be pruned, others may not be validators), and should somehow be provided access to expected data (i.e. complete block headers for the entire chain).
The test suite should use the Tendermint RPC client and the Tendermint light client, to exercise the client code as well.
### Implementation Considerations
The testnets should run in Docker Compose, both locally and in CI. This makes it easier to reproduce test failures locally. Supporting multiple test-runners (e.g. on VMs or Kubernetes) is out of scope. The same image should be used for all tests, with configuration passed via a mounted volume.
There do not appear to be any off-the-shelf solutions that would do this for us, so we will have to roll our own on top of Docker Compose. This gives us more flexibility, but is estimated to be a few weeks of work.
Testnets should be configured via a YAML file. These are used as inputs for the test runner, which e.g. generates Docker Compose configurations from them. An additional layer on top should generate these testnet configurations from a YAML file that specifies all the option combinations to test.
Comprehensive testnets should run against master nightly. However, a small subset of representative testnets should run for each pull request, e.g. a four-node IPv4 network with state sync and fast sync.
Tests should be written using the standard Go test framework (and e.g. Testify), with a helper function to fetch info from the test configuration. The test runner will run the tests separately for each network node, and the test must vary its expectations based on the node's configuration.
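A sketch of what such a per-node test might look like; `loadTestnet` and the `node.Client` accessor are hypothetical helpers for the pattern described, not an existing API:

```go
// TestNodeCaughtUp checks, for every node in the testnet, that the node has
// caught up with the chain head via its RPC status endpoint.
func TestNodeCaughtUp(t *testing.T) {
	testnet := loadTestnet(t) // hypothetical: reads the testnet configuration for this run
	for _, node := range testnet.Nodes {
		node := node
		t.Run(node.Name, func(t *testing.T) {
			client := node.Client() // hypothetical: Tendermint RPC client for this node
			status, err := client.Status(context.Background())
			require.NoError(t, err)
			// Expectations vary with node configuration; every node type
			// should at least have caught up with the chain head.
			require.False(t, status.SyncInfo.CatchingUp, "node should be caught up")
		})
	}
}
```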
It should be possible to launch a specific testnet and run individual test cases from the IDE or local terminal against it.
If possible, the existing `testnet` command should be extended to set up the network topologies needed by the end-to-end tests.
## Status
Implemented
## Consequences
### Positive
- Comprehensive end-to-end test coverage of basic Tendermint functionality, exercising common code paths in the same way that users would
- Test environments can easily be reproduced locally and debugged via standard tooling
### Negative
- Limited coverage of consensus correctness testing (e.g. Jepsen)
- No coverage of malicious or adversarial behavior
- Have to roll our own test framework, which takes engineering resources
- Possibly slower CI times, depending on which tests are run
- Operational costs and overhead, e.g. infrastructure costs and system maintenance
### Neutral
- No support for alternative infrastructure platforms, e.g. Kubernetes or VMs
## References
- [#5291: new end-to-end test suite](https://github.com/tendermint/tendermint/issues/5291)
