Compare commits

...

51 Commits

Author SHA1 Message Date
Callum Waters
e0417d0855 lint 2022-11-09 16:54:46 +01:00
Callum Waters
e885094080 Merge branch 'main' into cal/default-trust-level 2022-11-09 16:48:20 +01:00
Thane Thomson
c6a0dc8559 docs: Add supported versions to README (#9677)
Signed-off-by: Thane Thomson <connect@thanethomson.com>
2022-11-09 07:07:41 -05:00
Thane Thomson
3aa6c816e5 docs: Add new per-message type P2P metrics (#9676)
* docs: Monospace metric names

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* docs: Consistently capitalize metric types

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* docs: Monospace metric tags

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* docs: Fix underscores in metrics page

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* docs: Make metric description capitalization consistent

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* docs: Add new per-message P2P metrics

Signed-off-by: Thane Thomson <connect@thanethomson.com>
2022-11-08 17:43:17 -05:00
dependabot[bot]
ff0f98892f build(deps): Bump github.com/prometheus/client_golang from 1.13.0 to 1.13.1 (#9672)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.13.0 to 1.13.1.
**Release notes** (sourced from [github.com/prometheus/client_golang's releases](https://github.com/prometheus/client_golang/releases)):

**1.13.1 / 2022-11-02**

- [BUGFIX] Fix race condition with Exemplar in Counter. #1146
- [BUGFIX] Fix `CumulativeCount` value of `+Inf` bucket created from exemplar. #1148
- [BUGFIX] Fix double-counting bug in `promhttp.InstrumentRoundTripperCounter`. #1118

**Full Changelog**: https://github.com/prometheus/client_golang/compare/v1.13.0...v1.13.1
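The double-counting fix above concerns `promhttp.InstrumentRoundTripperCounter`, which wraps an `http.RoundTripper` so each outgoing request increments a counter. A minimal sketch of how that middleware is typically wired up; the metric name and target URL are illustrative, not taken from this repository:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Counter labelled by response code and HTTP method; "code" and "method"
	// are the label names promhttp knows how to fill in.
	apiRequests := prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "client_api_requests_total", // illustrative name
			Help: "Total outgoing API requests.",
		},
		[]string{"code", "method"},
	)
	prometheus.MustRegister(apiRequests)

	// Wrap the default transport; each request should be counted exactly once
	// (the v1.13.1 fix addresses a double-counting bug in this wrapper).
	client := &http.Client{
		Transport: promhttp.InstrumentRoundTripperCounter(apiRequests, http.DefaultTransport),
	}

	resp, err := client.Get("https://example.com/") // illustrative URL
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```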
**Commits**

- `53e51c4` Merge pull request #1157 from prometheus/cut-1.13.1
- `79ca0eb` Added tip from Björn + Grammarly.
- `078f11f` Cut 1.13.1 release (+ documenting release process).
- `ddd7f0e` Fix race condition with Exemplar in Counter (#1146)
- `1f93f64` Fix `CumulativeCount` value of `+Inf` bucket created from exemplar (#1148)
- `8cc2b6c` Fix double-counting bug in promhttp.InstrumentRoundTripperCounter (#1118)
- See full diff in [compare view](https://github.com/prometheus/client_golang/compare/v1.13.0...v1.13.1)
2022-11-07 13:44:11 +00:00
dependabot[bot]
0beac722b0 build(deps): Bump github.com/gofrs/uuid from 4.3.0+incompatible to 4.3.1+incompatible (#9671)
Bumps [github.com/gofrs/uuid](https://github.com/gofrs/uuid) from 4.3.0+incompatible to 4.3.1+incompatible.
**Release notes** (sourced from [github.com/gofrs/uuid's releases](https://github.com/gofrs/uuid/releases)):

**v4.3.1**

- Update UUIDv7 to use unix millisecond calculation that is friendly to legacy go versions by @convto
  Full Changelog: v4.3.0...v4.3.1
**Commits**

- `e1079f3` Use legacy go versions compatible unix millisecond calculation (#104)
- See full diff in [compare view](https://github.com/gofrs/uuid/compare/v4.3.0...v4.3.1)
2022-11-07 11:57:42 +00:00
Thane Thomson
45071d1f23 abci: Add unsynchronized local client (#9660)
* Remove extra interface cast

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Remove irrelevant comment

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* abci: Add unsynchronized local client

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* proxy: Add unsync local client creator

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* e2e: Add sync app for use with unsync local client

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* abci: Elaborate on mutex param in unsync local client

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* proxy: Remove unnecessary comment

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* abcicli: Remove unnecessary mutex param from unsync client

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* ci/e2e: Explicitly use sync app for validator04

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* e2e: Ensure app is definitely the E2E app

Signed-off-by: Thane Thomson <connect@thanethomson.com>
2022-11-07 06:46:55 -05:00
dependabot[bot]
5a9a84eb02 build(deps): Bump github.com/spf13/viper from 1.13.0 to 1.14.0 (#9670)
Bumps [github.com/spf13/viper](https://github.com/spf13/viper) from 1.13.0 to 1.14.0.
**Release notes** (sourced from [github.com/spf13/viper's releases](https://github.com/spf13/viper/releases)):

**v1.14.0: What's Changed**

Enhancements 🚀
- feat: make Viper compile on platforms unsupported by fsnotify by @sagikazarmark in spf13/viper#1457
- Fsnotify improvements by @sagikazarmark in spf13/viper#1458
- Disable watch on appengine by @sagikazarmark in spf13/viper#1460

Breaking Changes 🛠
- Drop support for Go 1.15 by @sagikazarmark in spf13/viper#1428

Dependency Updates ⬆️
- build(deps): bump github.com/spf13/afero from 1.8.2 to 1.9.2 by @dependabot in spf13/viper#1406
- build(deps): bump github.com/sagikazarmark/crypt from 0.6.0 to 0.7.0 by @dependabot in spf13/viper#1437
- build(deps): bump github.com/stretchr/testify from 1.8.0 to 1.8.1 by @dependabot in spf13/viper#1453
- build(deps): bump github.com/fsnotify/fsnotify from 1.5.4 to 1.6.0 by @dependabot in spf13/viper#1449
- chore: update crypt by @sagikazarmark in spf13/viper#1461

**Full Changelog**: https://github.com/spf13/viper/compare/v1.13.0...v1.14.0
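Several of the changes above revolve around viper's fsnotify-based config watching, which v1.14.0 disables cleanly on platforms fsnotify does not support instead of breaking the build. A minimal sketch of the watching API those changes affect; the config file name is illustrative:

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
	"github.com/spf13/viper"
)

func main() {
	viper.SetConfigFile("config.toml") // illustrative path

	if err := viper.ReadInConfig(); err != nil {
		log.Fatalf("read config: %v", err)
	}

	// OnConfigChange and WatchConfig are the fsnotify-backed entry points;
	// on platforms without fsnotify support, v1.14.0 still compiles and
	// simply leaves watching unavailable.
	viper.OnConfigChange(func(e fsnotify.Event) {
		log.Printf("config file changed: %s", e.Name)
	})
	viper.WatchConfig()

	select {} // block so the watcher keeps running in this sketch
}
```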
**Commits**

- `b89e554` chore: update crypt
- `db9f89a` chore: disable watch on appengine
- `4b8d148` refactor: use new Has fsnotify method for event matching
- `2e99a57` refactor: rename watch file to unsupported
- `dcb7f30` feat: fix compilation for all platforms unsupported by fsnotify
- `2e04739` ci: drop dedicated wasm build
- `b2234f2` ci: add build for aix
- `52009d3` feat: disable watcher on aix
- `b274f63` build(deps): bump github.com/fsnotify/fsnotify from 1.5.4 to 1.6.0
- `7c62cfd` build(deps): bump github.com/stretchr/testify from 1.8.0 to 1.8.1
- Additional commits viewable in [compare view](https://github.com/spf13/viper/compare/v1.13.0...v1.14.0)
2022-11-07 11:18:51 +00:00
dependabot[bot]
f008a275d1 build(deps): Bump slackapi/slack-github-action from 1.22.0 to 1.23.0 (#9669)
Bumps [slackapi/slack-github-action](https://github.com/slackapi/slack-github-action) from 1.22.0 to 1.23.0.
- [Release notes](https://github.com/slackapi/slack-github-action/releases)
- [Commits](https://github.com/slackapi/slack-github-action/compare/v1.22...v1.23.0)

---
updated-dependencies:
- dependency-name: slackapi/slack-github-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-11-07 06:08:39 -05:00
Thane Thomson
816c6bac00 rpc: Add caching support (#9650)
* Set cache control in the HTTP-RPC response header

* Add a simple cache policy to the RPC routes

* add a condition to check that the RPC request has default height settings

* fix cherry pick error

* update pending log

* use options struct instead of single parameter

* refactor FuncOptions to functional options

* add functional options in WebSocket RPC function

* revert doc

* replace deprecated function call

* revise functional options

* remove unused comment

* fix revised error

* adjust cache-control settings

* Update rpc/jsonrpc/server/http_json_handler.go

Co-authored-by: Thane Thomson <connect@thanethomson.com>

* linter: Fix false positive

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* rpc: Separate cacheable and non-cacheable HTTP response writers

Allows us to roll this change out in a non-API-breaking way, since this
is an additive change.

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* rpc: Ensure consistent caching strategy

Ensure a consistent caching strategy across both JSONRPC- and URI-based
requests.

This requires a bit of a refactor of the previous caching logic, which
is complicated a little by the complex reflection-based approach taken
in the Tendermint RPC.

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* rpc: Add more tests for caching

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update CHANGELOG_PENDING

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* light: Sync routes config with RPC core

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* rpc: Update OpenAPI docs

Signed-off-by: Thane Thomson <connect@thanethomson.com>
Co-authored-by: jayt106 <jaytseng106@gmail.com>
Co-authored-by: jay tseng <jay.tseng@crypto.com>
Co-authored-by: JayT106 <JayT106@users.noreply.github.com>
2022-11-03 13:19:44 -04:00
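The bullets above describe attaching a Cache-Control policy to RPC routes and separating cacheable from non-cacheable responses. The sketch below is not Tendermint's implementation; it is a generic net/http illustration of the idea, where a handler only marks a response cacheable when the request pins an explicit height. Handler names and the max-age value are hypothetical:

```go
package main

import (
	"fmt"
	"net/http"
)

// withCacheControl marks a response as cacheable only when the request
// carries an explicit "height" parameter; requests for the latest state
// (no height) keep the default no-cache behaviour.
func withCacheControl(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Query().Get("height") != "" {
			// Historical data is immutable, so it can be cached aggressively.
			w.Header().Set("Cache-Control", "public, max-age=31536000") // hypothetical policy
		}
		next(w, r)
	}
}

func blockHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, `{"height": %q}`, r.URL.Query().Get("height"))
}

func main() {
	http.HandleFunc("/block", withCacheControl(blockHandler))
	http.ListenAndServe(":8080", nil)
}
```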
Lasaro Camargos
9ec9085678 Moves the PBTS ADR to Accepted list (#9654)
ditto
---

#### PR checklist

- [x] Tests written/updated, or no tests needed
- [x] `CHANGELOG_PENDING.md` updated, or no changelog entry needed
- [x] Updated relevant documentation (`docs/`) and code comments, or no
      documentation updates needed
2022-11-03 17:07:45 +00:00
Lasaro Camargos
f58ba4d2f9 Removes space in hyperlink (#9653)
Simple formatting issue.

---

#### PR checklist

- [x] Tests written/updated, or no tests needed
- [x] `CHANGELOG_PENDING.md` updated, or no changelog entry needed
- [x] Updated relevant documentation (`docs/`) and code comments, or no
      documentation updates needed
2022-11-03 16:35:22 +00:00
Daniel
6bde634be2 Documentation of p2p layer in Tendermint v0.34 (#9348)
* spec: overview of p2p in v0.34 (#9120)

* Initial comments on v0.34 p2p

* Added conf, updated text

* Moved everything to spec

* Update README.md

* spec: overview of the p2p implementation in v0.34 (#9126)

* Spec: p2p v0.34 doc, switch initial documentation

* Spec: p2p v0.34 doc, list of source files

* Spec: p2p v0.34 doc, transport documentation

* Spec: p2p v0.34 doc, transport error handling

* Spec: p2p v0.34 doc, PEX initial documentation

   * PEX protocol documentation is a separate file
   * PEX reactor documentation with general documentation, including
     the address book and its role as (outbound) peer manager.

* Spec: p2p v0.34 doc, PEX protocol documentation

* Spec: p2p v0.34 doc, PEX protocol on seed nodes

* Spec: p2p v0.34 doc, address book

* Spec: p2p v0.34 doc, address book, more details

* Spec: p2p v0.34 doc, address book persistence

* Spec: p2p v0.34 doc, address book random samples

* Spec: p2p v0.34 doc, status of this documentation

* Spec: p2p v0.34 doc, pex reactor documentation

* Spec: p2p v0.34 doc, addressing PR #9126 comments

Co-authored-by: Jasmina Malicevic <jasmina.dustinac@gmail.com>

* Spec: p2p v0.34 doc, peer manager, outbound peers

Co-authored-by: Daniel Cason <cason@gandria>
Co-authored-by: Jasmina Malicevic <jasmina.dustinac@gmail.com>

* spec:p2p v0.34 introduction (#9319)

* restructure README.md initial

* Fix typos

* Reorganization

* spec: p2p v0.34, addressbook review

* spec: p2p v0.34, peer manager review

* spec: p2p v0.34, peer manager review

* spec: p2p v0.34, peer manager review

* spec: p2p v0.34, peer manager review

* spec: p2p v0.34, peer manager review

* spec: p2p v0.34, peer manager review

* spec: p2p v0.34, peer manager review

* Filled config description

* spec: p2p v0.34, transport review

* spec: p2p v0.34, switch review

* spec: p2p v0.34, overview, first version

* spec: p2p v0.34, peer manager review

* spec: p2p v0.34, shorter readme

* Configuration update

* Configuration update

* Shortened README

* spec: p2p v0.34, readme intro

* spec: p2p v0.34, readme contents

* spec: p2p v0.34, readme references

* spec: p2p readme points to v0.34

* spec: p2p, v0.34, fixing broken markdown links

* Markdown fix

* Apply suggestions from code review

Co-authored-by: Adi Seredinschi <adizere@gmail.com>
Co-authored-by: Zarko Milosevic <zarko@informal.systems>

* spec: p2p v0.34, address book new intro

* spec: p2p v0.34, address book buckets summary

* spec: p2p v0.34, peer manager, issue link

* spec: p2p v0.34, fixing links

* spec: p2p v0.34, addressing comments from reviews

* spec: p2p v0.34, addressing comments from reviews

* Apply suggestions from Jasmina's code review

Co-authored-by: Jasmina Malicevic <jasmina.dustinac@gmail.com>

* spec: p2p v0.34, addressing comments from reviews

* Apply suggestions from code review

Co-authored-by: Sergio Mena <sergio@informal.systems>

* Apply suggestions from code review

Co-authored-by: Jasmina Malicevic <jasmina.dustinac@gmail.com>
Co-authored-by: Sergio Mena <sergio@informal.systems>

* spec: p2p, v0.34, address book section reorganized

* spec: p2p, v0.34, addressing review comments

* Typos

* Typo

* spec: p2p, v0.34, address book markbad

Co-authored-by: Jasmina Malicevic <jasmina.dustinac@gmail.com>
Co-authored-by: Daniel Cason <cason@gandria>
Co-authored-by: Adi Seredinschi <adizere@gmail.com>
Co-authored-by: Zarko Milosevic <zarko@informal.systems>
Co-authored-by: Sergio Mena <sergio@informal.systems>
2022-11-03 12:46:42 -03:00
Rootul P
d704c0a0b6 docs: describe undocumented MempoolConfig fields (#9598)
* docs: describe undocumented MempoolConfig fields


Co-authored-by: Jasmina Malicevic <jasmina.dustinac@gmail.com>
2022-11-02 12:25:05 +01:00
Jasmina Malicevic
629cdc7f3d spec/abci: Removed reference to Finalize block (#9656)
* spec/abci: Removed reference to Finalize block

* Update spec/abci/abci++_methods.md

Co-authored-by: Sergio Mena <sergio@informal.systems>

Co-authored-by: Sergio Mena <sergio@informal.systems>
2022-11-02 10:49:25 +01:00
dependabot[bot]
c8f9f061fd build(deps): Bump github.com/vektra/mockery/v2 from 2.14.0 to 2.14.1 (#9649)
Bumps [github.com/vektra/mockery/v2](https://github.com/vektra/mockery) from 2.14.0 to 2.14.1.
**Release notes** (sourced from [github.com/vektra/mockery/v2's releases](https://github.com/vektra/mockery/releases)):

**v2.14.1 Changelog**

- 1361e94 Merge pull request #493 from CorentinClabaut/doc
- 546b334 Merge pull request #496 from ccoVeille/typos
- 94c17ff Merge pull request #511 from acln0/respect-dumb-terminal
- 178902b PR update
- 464ea71 Slightly improve documentation
- c60fce5 cmd: respect TERM=dumb by not using colors
- 4ca0450 fix typos and style in documentation, test and error reporting
**Commits**

- `94c17ff` Merge pull request #511 from acln0/respect-dumb-terminal
- `c60fce5` cmd: respect TERM=dumb by not using colors
- `546b334` Merge pull request #496 from ccoVeille/typos
- `4ca0450` fix typos and style in documentation, test and error reporting
- `1361e94` Merge pull request #493 from CorentinClabaut/doc
- `178902b` PR update
- `464ea71` Slightly improve documentation
- See full diff in [compare view](https://github.com/vektra/mockery/compare/v2.14.0...v2.14.1)
2022-10-31 20:04:34 +00:00
Thane Thomson
f138cb9c0c ci: Run Markdown link checker nightly (#9642)
* ci: Run Markdown link checker nightly

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* ci: Switch to Informal Systems fork of link checker

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Update link checker config to work with GitHub

As per https://github.com/tcort/markdown-link-check/issues/201#issuecomment-1110242146

Signed-off-by: Thane Thomson <connect@thanethomson.com>
2022-10-31 15:27:49 -04:00
Thane Thomson
83b7f4ad5b ci: Fix linter complaint (#9645)
Fixes a very silly linter complaint that makes absolutely no sense and is blocking the merging of several PRs.

---

#### PR checklist

- [x] Tests written/updated, or no tests needed
- [x] `CHANGELOG_PENDING.md` updated, or no changelog entry needed
- [x] Updated relevant documentation (`docs/`) and code comments, or no
      documentation updates needed
2022-10-28 15:01:16 +00:00
William Banfield
09b8708314 p2p: add a per-message type send and receive metric (#9622)
* p2p: resurrect the p2p envelope and use it to calculate message metrics

Co-authored-by: Callum Waters <cmwaters19@gmail.com>
2022-10-27 15:46:15 -04:00
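As an illustration of what a per-message-type send/receive metric generally looks like, here is a hedged Prometheus sketch; the metric and label names are hypothetical and are not the ones added by this PR:

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// Hypothetical counters keyed by message type, in the spirit of a
// per-message-type send/receive metric; names are illustrative only.
var (
	messagesSent = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "p2p_message_send_bytes_total",
			Help: "Bytes sent, labelled by message type.",
		},
		[]string{"message_type"},
	)
	messagesReceived = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "p2p_message_receive_bytes_total",
			Help: "Bytes received, labelled by message type.",
		},
		[]string{"message_type"},
	)
)

func main() {
	prometheus.MustRegister(messagesSent, messagesReceived)

	// A reactor wrapping its traffic in an envelope can record the concrete
	// payload type each time it sends or receives a message.
	messagesSent.WithLabelValues("consensus.Vote").Add(512)
	messagesReceived.WithLabelValues("mempool.Txs").Add(2048)

	fmt.Println("recorded hypothetical send/receive metrics")
}
```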
dependabot[bot]
d95e423756 build(deps): Bump github.com/spf13/cobra from 1.6.0 to 1.6.1 (#9635)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.6.0 to 1.6.1.
**Release notes** (sourced from [github.com/spf13/cobra's releases](https://github.com/spf13/cobra/releases)):

**v1.6.1: Bug fixes 🐛**

- Fixes a panic when `AddGroup` isn't called before `AddCommand(my-sub-command)` is executed. This can happen within more complex cobra file structures that have many different `init`s to be executed. Now, the check for groups has been moved to `ExecuteC` and provides more flexibility when working with grouped commands - @marckhouzam (and shout out to @aawsome, @andig and @KINGSABRI for a deep investigation into this! 👏🏼)
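The fix is about the ordering of `AddGroup` and `AddCommand` for grouped commands. A minimal sketch of the API involved (command and group names are illustrative): under v1.6.0 the group check ran when the sub-command was added and could panic if `init` ordering registered the sub-command first, while v1.6.1 defers the check to `ExecuteC`:

```go
package main

import "github.com/spf13/cobra"

func main() {
	rootCmd := &cobra.Command{Use: "app"} // illustrative CLI

	statusCmd := &cobra.Command{
		Use:     "status",
		GroupID: "ops", // refers to a group that may not be registered yet
		Run:     func(cmd *cobra.Command, args []string) {},
	}

	// With split init() functions, AddCommand can easily run before AddGroup.
	// Under v1.6.0 this ordering could panic; v1.6.1 validates groups in
	// ExecuteC instead, so registration order no longer matters.
	rootCmd.AddCommand(statusCmd)
	rootCmd.AddGroup(&cobra.Group{ID: "ops", Title: "Operations Commands:"})

	_ = rootCmd.Execute()
}
```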
**Commits**

- `b43be99` Check for group presence after full initialization (#1839) (#1841)
- See full diff in [compare view](https://github.com/spf13/cobra/compare/v1.6.0...v1.6.1)
2022-10-27 15:17:45 +00:00
Callum Waters
95bd4b6701 node: improve pprof lifecycle (#9628) 2022-10-27 16:50:27 +02:00
dependabot[bot]
98ad5f1bf3 build(deps): Bump github.com/btcsuite/btcd/btcec/v2 from 2.2.1 to 2.3.0 (#9634)
Bumps [github.com/btcsuite/btcd/btcec/v2](https://github.com/btcsuite/btcd) from 2.2.1 to 2.3.0.
**Commits**

- `2cc1908` Merge pull request #1894 from Roasbeef/musig2-1-0
- `eef9fbc` btcec/schnorr/musig2: always pass in priv key for early nonce gen
- `323871f` btcec/musig2: remove old canned test vector code
- `5d895bb` btcec/schnorr/musig2: add sig combine test vectors
- `ca28a98` btcec/schnorr/musig2: add sig verify+sign test vectors
- `cc12483` btcec/schnorr/musig2: add key tweak sign test vectors
- `4e55273` btcec/schnorr/musig2: update key agg test vectors to musig2 1.0.0
- `3d9f448` btcec/schnorr/musig: update nonce test vectors to musig2 1.0.0
- `1567f20` btcec/schnorr/musig2: update to musig 1.0.0
- `a34e777` btcec/schnorr/musig2: update musig2 impl to version 0.7.0
- Additional commits viewable in [compare view](https://github.com/btcsuite/btcd/compare/btcec/v2.2.1...btcec/v2.3.0)
2022-10-27 11:50:20 +00:00
Thane Thomson
82c29db2bc ci: Add Slack notifications when releases and pre-releases are cut (#9596)
Automatically notify the team when pre-releases and releases are cut.

[Pre-release notification rendered](https://app.slack.com/block-kit-builder/TREF53MTJ#%7B%22blocks%22:%5B%7B%22type%22:%22section%22,%22text%22:%7B%22type%22:%22mrkdwn%22,%22text%22:%22%20New%20Tendermint%20pre-release:%20%3Chttps://github.com/tendermint/tendermint/releases/tag/v0.37.0-rc1%7Cv0.37.0-rc1%3E%22%7D%7D%5D%7D)

[Release notification rendered](https://app.slack.com/block-kit-builder/TREF53MTJ#%7B%22blocks%22:%5B%7B%22type%22:%22section%22,%22text%22:%7B%22type%22:%22mrkdwn%22,%22text%22:%22🚀%20New%20Tendermint%20release:%20%3Chttps://github.com/tendermint/tendermint/releases/tag/v0.34.22%7Cv0.34.22%3E%22%7D%7D%5D%7D)

---

#### PR checklist

- [x] Tests written/updated, or no tests needed
- [x] `CHANGELOG_PENDING.md` updated, or no changelog entry needed
- [x] Updated relevant documentation (`docs/`) and code comments, or no
      documentation updates needed
2022-10-26 21:04:28 +00:00
Thane Thomson
160a33fdb1 ci: Only allow automated security-related dependency updates on release branches (#9600)
At present we allow automated dependency updates on release branches via Dependabot. This seems fine for `main`, but is risky for release branches.

This PR enables _daily_ checks for security-related dependency updates on release branches, but only performs automated non-security-related updates for `main` (weekly).

---

#### PR checklist

- [x] Tests written/updated, or no tests needed
- [x] `CHANGELOG_PENDING.md` updated, or no changelog entry needed
- [x] Updated relevant documentation (`docs/`) and code comments, or no
      documentation updates needed
2022-10-26 21:02:42 +00:00
William Banfield
13bd4b63f8 github: remove forked version of gosec (#9629) 2022-10-26 13:36:39 -04:00
dependabot[bot]
bc15531e1d build(deps): Bump docker/setup-buildx-action from 2.1.0 to 2.2.1 (#9612) 2022-10-26 14:01:17 +02:00
dependabot[bot]
716a624d57 build(deps): Bump bufbuild/buf-setup-action from 1.8.0 to 1.9.0 (#9613) 2022-10-26 13:48:19 +02:00
dependabot[bot]
58b9e4f03d build(deps): Bump github.com/bufbuild/buf from 1.8.0 to 1.9.0 (#9615)
Bumps [github.com/bufbuild/buf](https://github.com/bufbuild/buf) from 1.8.0 to 1.9.0.
**Release notes** (sourced from [github.com/bufbuild/buf's releases](https://github.com/bufbuild/buf/releases)):

**v1.9.0** (2022-10-19)

- New compiler that is faster and uses less memory than the outgoing one.
  - When generating source code info, the new compiler is 20% faster, and allocates 13% less memory.
  - If *not* generating source code info, the new compiler is 50% faster and allocates 35% less memory.
  - In addition to allocating less memory through the course of a compilation, the new compiler releases some memory much earlier, allowing it to be garbage collected much sooner. This means that by the end of a very large compilation process, less than half as much memory is live/pinned to the heap, decreasing overall memory pressure.

  The new compiler also addresses a few bugs where Buf would accept proto sources that protoc would reject:
  - In proto3 files, field and enum names undergo a validation that they are sufficiently different so that there will be no conflicts in JSON names.
  - Fully-qualified names of elements (like a message, enum, or service) may not conflict with package names.
  - A oneof or extend block may not contain empty statements.
  - Package names may not be >= 512 characters in length or contain > 100 dots.
  - Nesting depth of messages may not be > 32.
  - Field types and method input/output types may not refer to synthetic map entry messages.
- Push lint and breaking configuration to the registry.
- Include `LICENSE` file in the module on `buf push`.
- Formatter better edits/preserves whitespace around inline comments.
- Formatter correctly indents multi-line block (C-style) comments.
- Formatter now indents trailing comments at the end of an indented block body (including contents of message and array literals and elements in compact options) the same as the rest of the body (instead of out one level, like the closing punctuation).
- Formatter uses a compact, single-line representation for array and message literals in option values that are sufficiently simple (single scalar element or field).
- `buf beta convert` flags have changed from `--input` to `--from` and `--output`/`-o` to `--to`
- Fully qualified type names now must be passed to the `input` argument and `--type` flag separately
**Commits**

- `0d39fac` Update to v1.9.0 (#1511)
- `9033632` Update CHANGELOG.md (#1510)
- `3bafb05` Update CHANGELOG.md (#1501)
- `ff03a69` Use protocompile as the compiler (#1463)
- `bda6cf6` avoid panic in OptionExtensionDescriptor (#1506)
- `a39cbab` BSR-578/Protobuf definition for updating metadata (#1483)
- `e7889dc` move convert command tests to where commands are tested (#1507)
- `85bed09` Change field option packed type to optional
- `760da67` bufmodule: test NewModuleForBucket (#1486)
- `045f568` update makego and go dependencies (#1495)
- Additional commits viewable in [compare view](https://github.com/bufbuild/buf/compare/v1.8.0...v1.9.0)
2022-10-26 09:45:43 +00:00
dependabot[bot]
6a46a76163 build(deps): Bump github.com/prometheus/client_model from 0.2.0 to 0.3.0 (#9617) 2022-10-26 11:14:58 +02:00
Callum Waters
0d4db7aae6 remove trust package (#9625) 2022-10-26 10:31:06 +02:00
dependabot[bot]
40a59d1e10 build(deps): Bump github.com/stretchr/testify from 1.8.0 to 1.8.1 (#9614) 2022-10-25 17:30:54 +02:00
William Banfield
f6709208b0 e2e: configurable IP addresses for e2e testnet generator (#9592)
* add the infrastructure types

* add infra data to testnetload

* extract infrastructure generation from manifest creation

* add infrastructure type and data flags

* rename docker ifd constructor

* implement read ifd from file

* add 'provider' field to the infrastructure data file to disable ip range check

* return error from infrastructure from data file function

* remove ifd from Setup

* implement a basic infra provider with a simple setup command

* remove misbehavior remnants

* use manifest instead of file in all places

* include cidr block range in the infrastructure data

* nolint gosec

* gosec

* lint
2022-10-25 10:19:10 -04:00
dependabot[bot]
2c40ca52c1 build(deps): Bump github.com/BurntSushi/toml from 1.2.0 to 1.2.1 (#9616)
Bumps [github.com/BurntSushi/toml](https://github.com/BurntSushi/toml) from 1.2.0 to 1.2.1.
**Release notes** (sourced from [github.com/BurntSushi/toml's releases](https://github.com/BurntSushi/toml/releases)):

**v1.2.1**

This release fixes the `omitempty` struct tag on an uncomparable type panicking.
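The panic fixed here reportedly involved `omitempty` on a field whose type is not comparable, for example a struct containing a slice. A minimal sketch of the shape of code affected; the type names are hypothetical:

```go
package main

import (
	"os"

	"github.com/BurntSushi/toml"
)

// Endpoints is not a comparable type because it contains a slice.
type Endpoints struct {
	Peers []string `toml:"peers"`
}

type Config struct {
	Name string `toml:"name"`
	// omitempty on an uncomparable field is what triggered the panic in
	// v1.2.0; v1.2.1 silently ignores the tag for such types instead.
	Endpoints Endpoints `toml:"endpoints,omitempty"`
}

func main() {
	cfg := Config{Name: "node0", Endpoints: Endpoints{Peers: []string{"10.0.0.1:26656"}}}
	if err := toml.NewEncoder(os.Stdout).Encode(cfg); err != nil {
		panic(err)
	}
}
```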
**Commits**

- `74c008f` Test Go 1.19; gofmt 1.19
- `8de7f4a` Update tests a little bit and add comment
- `8bbca55` add a check for uncomparable empty structs
- `17ef72d` Tweak docs to use Go 1.19 syntax
- `1ba7f5b` Merge pull request #367 from zhsj/fix-32
- `473c10f` Fix test on 32 bit arch
- `360c9e3` Don't return error on uncomparable types: just silently ignore like before
- `929b0a7` Merge pull request #361 from BurntSushi/p-omitempty
- `8d9ffad` Don't panic with 'omitempty' and uncomparable type
- See full diff in [compare view](https://github.com/BurntSushi/toml/compare/v1.2.0...v1.2.1)
2022-10-25 14:06:35 +00:00
Sergio Mena
3136b7a084 Revert make proto-gen in #9590 (#9621) 2022-10-25 12:44:55 +02:00
Rootul P
af2981a2f7 docs: remove outdated comment (#9597)
https://github.com/tendermint/tendermint/issues/8775 was resolved and backported so I think this comment is no longer applicable.
2022-10-24 08:19:23 +00:00
Rootul P
3bd2153136 docs: clarify BlockIDFlag variants (#9590)
* docs: clarify BlockIDFlag variants

* Update proto/tendermint/types/types.proto

Co-authored-by: Sergio Mena <sergio@informal.systems>

* Update proto/tendermint/types/types.proto

Co-authored-by: Sergio Mena <sergio@informal.systems>

* Update spec/core/data_structures.md

Co-authored-by: Sergio Mena <sergio@informal.systems>

* Update spec/core/data_structures.md

Co-authored-by: Sergio Mena <sergio@informal.systems>

* make proto-gen

Co-authored-by: Sergio Mena <sergio@informal.systems>
2022-10-21 22:33:37 +02:00
dependabot[bot]
301211c2cb build(deps): Bump google.golang.org/grpc from 1.50.0 to 1.50.1 (#9567)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.50.0 to 1.50.1.
**Release notes** (sourced from [google.golang.org/grpc's releases](https://github.com/grpc/grpc-go/releases)):

**Release 1.50.1: New Features**

- gcp/observability: support new configuration defined in public preview user guide
**Commits**

- `4c776ec` Cherry-pick observability changes from master to v1.50.x and update version t...
- `6576007` Change version to 1.50.1-dev (#5686)
- See full diff in [compare view](https://github.com/grpc/grpc-go/compare/v1.50.0...v1.50.1)
2022-10-19 21:49:06 +00:00
dependabot[bot]
58ee42ca52 build(deps): Bump github.com/spf13/cobra from 1.5.0 to 1.6.0 (#9566)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.5.0 to 1.6.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/spf13/cobra/releases">github.com/spf13/cobra's releases</a>.</em></p>
<blockquote>
<h2>v1.6.0</h2>
<h3>Summer 2022 Release</h3>
<p>Some exciting changes make their way to Cobra! Command completions continue to get better and better (including adding <code>--help</code> and <code>--version</code> automatic flags to the completions list). Grouping is now possible in your help output as well! And you can now use the <code>OnFinalize</code> method to cleanup things when all &quot;work&quot; is done. Checkout the full changelog below:</p>
<hr />
<h4>Features 🌠</h4>
<ul>
<li>Add groups for commands in help: <a href="https://github.com/aawsome"><code>@​aawsome</code></a> <a href="https://github.com/marckhouzam"><code>@​marckhouzam</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1003">#1003</a></li>
<li>Support for case-insensitive command names: <a href="https://github.com/YuviGold"><code>@​YuviGold</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1802">#1802</a></li>
<li>Expose <code>ValidateRequiredFlags</code> and <code>ValidateFlagGroups</code>: <a href="https://github.com/skeetwu"><code>@​skeetwu</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1760">#1760</a></li>
<li>Add <code>--version</code> flag to help output: <a href="https://github.com/fnickels"><code>@​fnickels</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1707">#1707</a></li>
<li>Add <code>--help</code> and <code>--version</code> flag in completions: <a href="https://github.com/marckhouzam"><code>@​marckhouzam</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1813">#1813</a></li>
<li>Add <code>OnFinalize</code> method: <a href="https://github.com/yann-soubeyrand"><code>@​yann-soubeyrand</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1788">#1788</a></li>
<li>Allow user to add completion for powershell alias: <a href="https://github.com/marckhouzam"><code>@​marckhouzam</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1621">#1621</a></li>
<li>Make <code>InitDefaultcompletionCmd</code> public: <a href="https://github.com/gssbzn"><code>@​gssbzn</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1467">#1467</a></li>
</ul>
<h4>Deprecation 👎🏼</h4>
<ul>
<li><code>ExactValidArgs</code> is deprecated (but not being removed entirely). This is abit nuanced, so checkout <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1643">#1643</a> for further information and the <a href="https://github.com/spf13/cobra/blob/main/user_guide.md">updated <code>user_guide.md</code></a> on how this may affect you (and how you can take advantage of the <em>correct</em> behavior in the validators): <a href="https://github.com/umarcor"><code>@​umarcor</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1643">#1643</a></li>
</ul>
<h4>Bug fixes 🐛</h4>
<ul>
<li>Fix (bash-v2) <code>activeHelp</code> length check syntax: <a href="https://github.com/scop"><code>@​scop</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1762">#1762</a></li>
<li>Fix correct command path in <code>see_also</code> for yaml documentation: <a href="https://github.com/zregvart"><code>@​zregvart</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1771">#1771</a></li>
<li>Fix showing flags that shadow parent persistent flag in child help messaging: <a href="https://github.com/brianpursley"><code>@​brianpursley</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1776">#1776</a></li>
</ul>
<h4>Dependencies 🗳️</h4>
<ul>
<li>Upgrade to use <code>gopkg.in/yaml.v3</code>: <a href="https://github.com/tklauser"><code>@​tklauser</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1766">#1766</a></li>
</ul>
<h4>Testing 🤔</h4>
<ul>
<li>Test on Golang 1.19: <a href="https://github.com/umarcor"><code>@​umarcor</code></a> &amp; <a href="https://github.com/jpmcb"><code>@​jpmcb</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1782">#1782</a></li>
<li>Renamed powershell completion tests: <a href="https://github.com/marckhouzam"><code>@​marckhouzam</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1803">#1803</a></li>
<li>Use <code>action/setup-go</code> cache: <a href="https://github.com/umarcor"><code>@​umarcor</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1783">#1783</a></li>
<li>Add <code>workflow_dispatch</code> to CI actions: <a href="https://github.com/umarcor"><code>@​umarcor</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1387">#1387</a></li>
<li>Add minimum GitHub token permissions for workflows: <a href="https://github.com/varunsh-coder"><code>@​varunsh-coder</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1792">#1792</a></li>
</ul>
<h4>Docs ✏️</h4>
<ul>
<li>Fixup spelling for GitHub CLI: <a href="https://github.com/eltociear"><code>@​eltociear</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1744">#1744</a></li>
<li>Clarify <code>SetContext</code> documentation: <a href="https://github.com/katexochen"><code>@​katexochen</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1748">#1748</a></li>
<li>Instruct user to <code>go install</code> for binary: <a href="https://github.com/marckhouzam"><code>@​marckhouzam</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1726">#1726</a></li>
<li>User guide cleanup: <a href="https://github.com/marckhouzam"><code>@​marckhouzam</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1656">#1656</a></li>
<li>Document option to hide the default completion command: <a href="https://github.com/marckhouzam"><code>@​marckhouzam</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1779">#1779</a></li>
</ul>
<h4>Misc 💭</h4>
<ul>
<li>Add KubeVirt, CloudQuery, Cilium, Okteto, Zitadel, Allero to projects using cobra: <a href="https://github.com/maiqueb"><code>@​maiqueb</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1741">#1741</a>, <a href="https://github.com/yevgenypats"><code>@​yevgenypats</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1742">#1742</a>, <a href="https://github.com/tklauser"><code>@​tklauser</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1745">#1745</a>, <a href="https://github.com/jLopezbarb"><code>@​jLopezbarb</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1759">#1759</a>, <a href="https://github.com/fforootd"><code>@​fforootd</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1772">#1772</a>, <a href="https://github.com/dimabru"><code>@​dimabru</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1819">#1819</a></li>
<li>Use correct stale action <code>exempt</code> yaml keys: <a href="https://github.com/jpmcb"><code>@​jpmcb</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1800">#1800</a></li>
<li>Add missing license headers: <a href="https://github.com/umarcor"><code>@​umarcor</code></a> <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1809">#1809</a></li>
</ul>
<p><em>Note:</em> Per <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1804">#1804</a>, we will be moving away from &quot;seasonal&quot; releases and doing more generic point release targets. Continue to track the milestones and issues in the <code>spf13/cobra</code> GitHub repository for more information!</p>
<p>Great work everyone! Cobra would never be possible without your contributions! 🐍</p>

</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="860791844e"><code>8607918</code></a> feat: make InitDefaultCompletionCmd public (<a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1467">#1467</a>)</li>
<li><a href="2169adb574"><code>2169adb</code></a> Add groups for commands in help (<a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1003">#1003</a>)</li>
<li><a href="212ea40783"><code>212ea40</code></a> Include --help and --version flag in completion (<a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1813">#1813</a>)</li>
<li><a href="d4040ad8db"><code>d4040ad</code></a> Allow user to add completion for powershell alias (<a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1621">#1621</a>)</li>
<li><a href="23fc5e099f"><code>23fc5e0</code></a> ci: add minimum GitHub token permissions for workflows (<a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1792">#1792</a>)</li>
<li><a href="93d1913fb0"><code>93d1913</code></a> Add OnFinalize method (<a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1788">#1788</a>)</li>
<li><a href="07034fee49"><code>07034fe</code></a> build(deps): bump actions/stale from 5 to 6 (<a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1815">#1815</a>)</li>
<li><a href="3dc9761b36"><code>3dc9761</code></a> Add allero to list of projects using cobra (<a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1819">#1819</a>)</li>
<li><a href="7039e1fa21"><code>7039e1f</code></a> Add '--version' flag to Help output (<a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1707">#1707</a>)</li>
<li><a href="fce8d8aeb0"><code>fce8d8a</code></a> Expose ValidateRequiredFlags and ValidateFlagGroups (<a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1760">#1760</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/spf13/cobra/compare/v1.5.0...v1.6.0">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/spf13/cobra&package-manager=go_modules&previous-version=1.5.0&new-version=1.6.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-10-19 21:38:25 +00:00
dependabot[bot]
6e38fff9ed build(deps): Bump docker/login-action from 2.0.0 to 2.1.0 (#9565)
Bumps [docker/login-action](https://github.com/docker/login-action) from 2.0.0 to 2.1.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/login-action/releases">docker/login-action's releases</a>.</em></p>
<blockquote>
<h2>v2.1.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Ensure AWS temp credentials are redacted in workflow logs by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/275">#275</a>)</li>
<li>Bump <code>@​actions/core</code> from 1.6.0 to 1.10.0 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/252">#252</a> <a href="https://github-redirect.dependabot.com/docker/login-action/issues/292">#292</a>)</li>
<li>Bump <code>@​aws-sdk/client-ecr</code> from 3.53.0 to 3.186.0 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/298">#298</a>)</li>
<li>Bump <code>@​aws-sdk/client-ecr-public</code> from 3.53.0 to 3.186.0 (<a href="https://github-redirect.dependabot.com/docker/login-action/issues/299">#299</a>)</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/login-action/compare/v2.0.0...v2.1.0">https://github.com/docker/login-action/compare/v2.0.0...v2.1.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="f4ef78c080"><code>f4ef78c</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/login-action/issues/299">#299</a> from docker/dependabot/npm_and_yarn/aws-sdk/client-ec...</li>
<li><a href="9ad4ce3929"><code>9ad4ce3</code></a> Update generated content</li>
<li><a href="884eadd4f8"><code>884eadd</code></a> Bump <code>@​aws-sdk/client-ecr-public</code> from 3.53.0 to 3.186.0</li>
<li><a href="a266232f5c"><code>a266232</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/login-action/issues/298">#298</a> from docker/dependabot/npm_and_yarn/aws-sdk/client-ec...</li>
<li><a href="f97efcfbf9"><code>f97efcf</code></a> Update generated content</li>
<li><a href="5ae789beac"><code>5ae789b</code></a> Bump <code>@​aws-sdk/client-ecr</code> from 3.53.0 to 3.186.0</li>
<li><a href="71c23b5b34"><code>71c23b5</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/login-action/issues/292">#292</a> from docker/dependabot/npm_and_yarn/actions/core-1.10.0</li>
<li><a href="6401d70aab"><code>6401d70</code></a> Update generated content</li>
<li><a href="67e8909cc6"><code>67e8909</code></a> Bump <code>@​actions/core</code> from 1.9.1 to 1.10.0</li>
<li><a href="21f251affc"><code>21f251a</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/login-action/issues/275">#275</a> from crazy-max/redact-aws-creds</li>
<li>Additional commits viewable in <a href="https://github.com/docker/login-action/compare/v2.0.0...v2.1.0">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/login-action&package-manager=github_actions&previous-version=2.0.0&new-version=2.1.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-10-19 21:29:43 +00:00
dependabot[bot]
93ab364abc build(deps): Bump slackapi/slack-github-action from 1.22.0 to 1.23.0 (#9564)
Bumps [slackapi/slack-github-action](https://github.com/slackapi/slack-github-action) from 1.22.0 to 1.23.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/slackapi/slack-github-action/releases">slackapi/slack-github-action's releases</a>.</em></p>
<blockquote>
<h2>Slack Send V1.23.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Bump node from 12 to 16 by <a href="https://github.com/quinnjn"><code>@​quinnjn</code></a> in <a href="https://github-redirect.dependabot.com/slackapi/slack-github-action/pull/128">slackapi/slack-github-action#128</a></li>
<li>Bump eslint from 8.23.0 to 8.24.0 by <a href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a href="https://github-redirect.dependabot.com/slackapi/slack-github-action/pull/135">slackapi/slack-github-action#135</a></li>
<li>Bump <code>@​actions/core</code> from 1.9.1 to 1.10.0 by <a href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a href="https://github-redirect.dependabot.com/slackapi/slack-github-action/pull/134">slackapi/slack-github-action#134</a></li>
<li>Bump <code>@​actions/github</code> from 5.0.3 to 5.1.1 by <a href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a href="https://github-redirect.dependabot.com/slackapi/slack-github-action/pull/133">slackapi/slack-github-action#133</a></li>
<li>Use https proxy agent by <a href="https://github.com/EHitchcockIAG"><code>@​EHitchcockIAG</code></a> in <a href="https://github-redirect.dependabot.com/slackapi/slack-github-action/pull/132">slackapi/slack-github-action#132</a></li>
<li>Release v1.23.0 by <a href="https://github.com/hello-ashleyintech"><code>@​hello-ashleyintech</code></a> in <a href="https://github-redirect.dependabot.com/slackapi/slack-github-action/pull/139">slackapi/slack-github-action#139</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/quinnjn"><code>@​quinnjn</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/slackapi/slack-github-action/pull/128">slackapi/slack-github-action#128</a></li>
<li><a href="https://github.com/EHitchcockIAG"><code>@​EHitchcockIAG</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/slackapi/slack-github-action/pull/132">slackapi/slack-github-action#132</a></li>
<li><a href="https://github.com/hello-ashleyintech"><code>@​hello-ashleyintech</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/slackapi/slack-github-action/pull/139">slackapi/slack-github-action#139</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/slackapi/slack-github-action/compare/v1.22.0...v1.23.0">https://github.com/slackapi/slack-github-action/compare/v1.22.0...v1.23.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="007b2c3c75"><code>007b2c3</code></a> Automatic compilation</li>
<li><a href="60532b0844"><code>60532b0</code></a> Release v1.23.0 (<a href="https://github-redirect.dependabot.com/slackapi/slack-github-action/issues/139">#139</a>)</li>
<li><a href="acb114ffb5"><code>acb114f</code></a> Use https proxy agent (<a href="https://github-redirect.dependabot.com/slackapi/slack-github-action/issues/132">#132</a>)</li>
<li><a href="0ae8044e6f"><code>0ae8044</code></a> Improve README to clearly mention a channel ID is required for updating messages</li>
<li><a href="71bf093cd3"><code>71bf093</code></a> Bump <code>@​actions/github</code> from 5.0.3 to 5.1.1 (<a href="https://github-redirect.dependabot.com/slackapi/slack-github-action/issues/133">#133</a>)</li>
<li><a href="9dba6b6137"><code>9dba6b6</code></a> Bump <code>@​actions/core</code> from 1.9.1 to 1.10.0 (<a href="https://github-redirect.dependabot.com/slackapi/slack-github-action/issues/134">#134</a>)</li>
<li><a href="7190fb233e"><code>7190fb2</code></a> Bump eslint from 8.23.0 to 8.24.0 (<a href="https://github-redirect.dependabot.com/slackapi/slack-github-action/issues/135">#135</a>)</li>
<li><a href="a764c057f3"><code>a764c05</code></a> Bump node from 12 to 16 (<a href="https://github-redirect.dependabot.com/slackapi/slack-github-action/issues/128">#128</a>)</li>
<li><a href="eb1a153fad"><code>eb1a153</code></a> Add language to the maintainers guide about milestone management.</li>
<li>See full diff in <a href="https://github.com/slackapi/slack-github-action/compare/v1.22.0...v1.23.0">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=slackapi/slack-github-action&package-manager=github_actions&previous-version=1.22.0&new-version=1.23.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-10-19 21:28:32 +00:00
dependabot[bot]
1c60efc0bc build(deps): Bump styfle/cancel-workflow-action from 0.10.1 to 0.11.0 (#9561)
Bumps [styfle/cancel-workflow-action](https://github.com/styfle/cancel-workflow-action) from 0.10.1 to 0.11.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/styfle/cancel-workflow-action/releases">styfle/cancel-workflow-action's releases</a>.</em></p>
<blockquote>
<h2>0.11.0</h2>
<h3>Minor Changes</h3>
<ul>
<li>Update to Node 16: <a href="https://github-redirect.dependabot.com/styfle/cancel-workflow-action/issues/186">#186</a></li>
<li>Chore: rebuild: 1e0e690cd3756927cda56ad0033137ff1268c477</li>
<li>Chore(deps-dev): bump typescript from 4.8.3 to 4.8.4: <a href="https://github-redirect.dependabot.com/styfle/cancel-workflow-action/issues/181">#181</a></li>
<li>Chore(deps): bump <code>@​actions/github</code> from 5.1.0 to 5.1.1: <a href="https://github-redirect.dependabot.com/styfle/cancel-workflow-action/issues/182">#182</a></li>
<li>Chore(deps): bump <code>@​actions/core</code> from 1.9.1 to 1.10.0: <a href="https://github-redirect.dependabot.com/styfle/cancel-workflow-action/issues/183">#183</a></li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a href="https://github.com/mattjohnsonpint"><code>@​mattjohnsonpint</code></a> for helping!</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="b173b6ec01"><code>b173b6e</code></a> 0.11.0</li>
<li><a href="1e0e690cd3"><code>1e0e690</code></a> chore: rebuild</li>
<li><a href="4e668e5dc3"><code>4e668e5</code></a> Update to Node 16 (<a href="https://github-redirect.dependabot.com/styfle/cancel-workflow-action/issues/186">#186</a>)</li>
<li><a href="f78dcd888e"><code>f78dcd8</code></a> chore(deps): bump <code>@​actions/core</code> from 1.9.1 to 1.10.0 (<a href="https://github-redirect.dependabot.com/styfle/cancel-workflow-action/issues/183">#183</a>)</li>
<li><a href="6b6782c03d"><code>6b6782c</code></a> chore(deps): bump <code>@​actions/github</code> from 5.1.0 to 5.1.1 (<a href="https://github-redirect.dependabot.com/styfle/cancel-workflow-action/issues/182">#182</a>)</li>
<li><a href="1a300fe93c"><code>1a300fe</code></a> chore(deps-dev): bump typescript from 4.8.3 to 4.8.4 (<a href="https://github-redirect.dependabot.com/styfle/cancel-workflow-action/issues/181">#181</a>)</li>
<li>See full diff in <a href="https://github.com/styfle/cancel-workflow-action/compare/0.10.1...0.11.0">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=styfle/cancel-workflow-action&package-manager=github_actions&previous-version=0.10.1&new-version=0.11.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-10-19 21:27:11 +00:00
dependabot[bot]
6768b98568 build(deps): Bump docker/setup-buildx-action from 2.0.0 to 2.1.0 (#9563)
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 2.0.0 to 2.1.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/setup-buildx-action/releases">docker/setup-buildx-action's releases</a>.</em></p>
<blockquote>
<h2>v2.1.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Auth support for tls endpoint by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/164">#164</a>)</li>
<li>Nodes metadata JSON ouput by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/162">#162</a>)
<ul>
<li><code>endpoint</code>, <code>status</code> and <code>flags</code> outputs are deprecated. Use <code>nodes</code> output instead.</li>
</ul>
</li>
<li>Skip setting buildkitd flags and config for remote driver by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/161">#161</a>)</li>
<li>Move args logic to context module and add tests by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/169">#169</a>)</li>
<li>Remove workaround for <code>setOutput</code> by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/170">#170</a>)</li>
<li>Fix deprecated <code>fs.rmdir</code> by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/171">#171</a>)</li>
<li>Docs: clarify install option by <a href="https://github.com/rodrigc"><code>@​rodrigc</code></a> in (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/152">#152</a>)</li>
<li>Bump <code>@​actions/core</code> from 1.6.0 to 1.10.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/151">#151</a> <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/157">#157</a> <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/167">#167</a>)</li>
<li>Bump <code>@​actions/tool-cache</code> from 1.7.2 to 2.0.1 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/150">#150</a>)</li>
<li>Bump <code>@​actions/http-client</code> from 1.0.11 to 2.0.1 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/149">#149</a>)</li>
<li>Bump uuid from 8.3.2 to 9.0.0 (<a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/159">#159</a>)</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/setup-buildx-action/compare/v2.0.0...v2.1.0">https://github.com/docker/setup-buildx-action/compare/v2.0.0...v2.1.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="95cb08cb26"><code>95cb08c</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/171">#171</a> from crazy-max/rmsync</li>
<li><a href="eb5c2a6eea"><code>eb5c2a6</code></a> Fix deprecated fs.rmdir</li>
<li><a href="83612bea36"><code>83612be</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/170">#170</a> from crazy-max/setOutput</li>
<li><a href="40fefd8a58"><code>40fefd8</code></a> Remove workaround for setOutput</li>
<li><a href="90a1e4619e"><code>90a1e46</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/169">#169</a> from crazy-max/context-module</li>
<li><a href="5a9fc40575"><code>5a9fc40</code></a> move args logic to context module and add tests</li>
<li><a href="6c48dad5f0"><code>6c48dad</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/159">#159</a> from docker/dependabot/npm_and_yarn/uuid-9.0.0</li>
<li><a href="16c2ddbfa7"><code>16c2ddb</code></a> update generated content</li>
<li><a href="0fe8589bf4"><code>0fe8589</code></a> Bump uuid from 8.3.2 to 9.0.0</li>
<li><a href="f3692cbe43"><code>f3692cb</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/setup-buildx-action/issues/167">#167</a> from docker/dependabot/npm_and_yarn/actions/core-1.10.0</li>
<li>Additional commits viewable in <a href="https://github.com/docker/setup-buildx-action/compare/v2.0.0...v2.1.0">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/setup-buildx-action&package-manager=github_actions&previous-version=2.0.0&new-version=2.1.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-10-19 21:25:52 +00:00
dependabot[bot]
3cdfbda2eb build(deps): Bump docker/build-push-action from 3.1.1 to 3.2.0 (#9562)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 3.1.1 to 3.2.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/docker/build-push-action/releases">docker/build-push-action's releases</a>.</em></p>
<blockquote>
<h2>v3.2.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Remove workaround for <code>setOutput</code> by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/704">#704</a>)</li>
<li>Docs: fix Git context link and add more details about subdir support by <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/685">#685</a>)</li>
<li>Docs: named context by <a href="https://github.com/baibaratsky"><code>@​baibaratsky</code></a> and <a href="https://github.com/crazy-max"><code>@​crazy-max</code></a> (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/665">#665</a>)</li>
<li>Bump <code>@​actions/core</code> from 1.9.0 to 1.10.0 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/667">#667</a> <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/695">#695</a>)</li>
<li>Bump <code>@​actions/github</code> from 5.0.3 to 5.1.1 (<a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/696">#696</a>)</li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/docker/build-push-action/compare/v3.1.1...v3.2.0">https://github.com/docker/build-push-action/compare/v3.1.1...v3.2.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="c56af95754"><code>c56af95</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/704">#704</a> from crazy-max/setOutput</li>
<li><a href="75aaa63262"><code>75aaa63</code></a> Remove workaround for setOutput</li>
<li><a href="f97d6e2850"><code>f97d6e2</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/700">#700</a> from crazy-max/update-docs</li>
<li><a href="47c00d78bf"><code>47c00d7</code></a> ci: secret job to check for invalid secrets</li>
<li><a href="871b930e7a"><code>871b930</code></a> docs: update links and layout</li>
<li><a href="105bf59b00"><code>105bf59</code></a> docs: copy between registries with buildx</li>
<li><a href="48888e0b13"><code>48888e0</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/699">#699</a> from crazy-max/docs-outputs</li>
<li><a href="6b820ad47e"><code>6b820ad</code></a> docs: note about multiple outputs</li>
<li><a href="e1a10350ee"><code>e1a1035</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/docker/build-push-action/issues/665">#665</a> from baibaratsky/patch-1</li>
<li><a href="0f5a7d48d5"><code>0f5a7d4</code></a> docs: named contexts</li>
<li>Additional commits viewable in <a href="https://github.com/docker/build-push-action/compare/v3.1.1...v3.2.0">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=docker/build-push-action&package-manager=github_actions&previous-version=3.1.1&new-version=3.2.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

2022-10-19 21:24:16 +00:00
Thane Thomson
4552cfc271 Update changelog with v0.34.22 entry (#9588)
Adds the changelog entry from #9583 to the changelog on `main`.

---

#### PR checklist

- [x] Tests written/updated, or no tests needed
- [x] `CHANGELOG_PENDING.md` updated, or no changelog entry needed
- [x] Updated relevant documentation (`docs/`) and code comments, or no
      documentation updates needed
2022-10-19 15:25:03 +00:00
Sergio Mena
91fba07e49 Fix some broken links in docs (#9579)
Some links that the linter flagged as broken have been replaced with working ones that point to the same content

---

#### PR checklist

- [x] Tests written/updated, or no tests needed
- [x] `CHANGELOG_PENDING.md` updated, or no changelog entry needed
- [x] Updated relevant documentation (`docs/`) and code comments, or no
      documentation updates needed
2022-10-19 09:16:52 +00:00
Sergio Mena
5df9c410ff Fix tested version in 200 node test + added prometheus problem as found during QA (#9582) 2022-10-18 18:02:24 +02:00
Rootul P
c8f203293d fix: header link (#9574)
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
2022-10-18 10:08:55 -04:00
Sergio Mena
b06e1cea54 QA Process report for v0.37.x (and baseline for v0.34.x) (#9499)
* 1st version. 200 nodes. Missing rotating node

* Small fixes

* Addressed @jmalicevic's comment

* Explain in the method document how to set the Tendermint version to test; improve the results section

* 1st version of how to run the 'rotating node' testnet

* Apply suggestions from @williambanfield

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Addressed @williambanfield's comments

* Added reference to Unix load metric

* Added total TXs

* Fixed some 'png's that got swapped. Excluded '.*-node-exporter' processes from memory plots

* Report for rotating node

* Addressed remaining comments from @williambanfield

* Cosmetic

* Addressed some of @thanethomson's comments

* Re-executed the 200 node tests and updated the corresponding sections of the report

* Ignore Python virtualenv directories

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Add latency vs throughput script

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Add README for latency vs throughput script

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Fix local links to folders

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* v034: only have one level-1 heading

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Adjust headings

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* v0.37.x: add links to issues/PRs

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* v0.37.x: add note about bug being present in v0.34

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* method: adjust heading depths

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Show data points on latency vs throughput plot

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Add latency vs throughput plots

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Correct mentioning of v0.34.21 and add heading

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Refactor latency vs throughput script

Update the latency vs throughput script to rather generate plots from
the "raw" CSV output from the loadtime reporting tool as opposed to the
separated CSV files from the experimental method.

Also update the relevant documentation, and regenerate the images from
the raw CSV data (resulting in pretty much the same plots as the
previous ones).

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Remove unused default duration const

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Adjust experiment start time to be more accurate and re-plot latency vs throughput

Signed-off-by: Thane Thomson <connect@thanethomson.com>

* Addressed @williambanfield's comments

* Apply suggestions from code review

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>

* scripts: Update latency vs throughput readme for clarity

Signed-off-by: Thane Thomson <connect@thanethomson.com>

Signed-off-by: Thane Thomson <connect@thanethomson.com>
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
Co-authored-by: Thane Thomson <connect@thanethomson.com>
2022-10-17 22:08:51 +02:00
Thane Thomson
6ea968d576 ci: Update Slack nightly failure messages (#9551)
It's mostly not true that a particular commit _caused_ a failure, so I've changed the wording here.

---

#### PR checklist

- [x] Tests written/updated, or no tests needed
- [x] `CHANGELOG_PENDING.md` updated, or no changelog entry needed
- [x] Updated relevant documentation (`docs/`) and code comments, or no
      documentation updates needed
2022-10-17 11:42:08 +00:00
Callum Waters
788cf9db0c add changelog entry 2022-09-12 14:07:33 +02:00
Callum Waters
413728c367 light: update default trust level to 2/3 2022-09-12 13:30:34 +02:00
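The last two commits above raise the light client's default trust level to 2/3. As a rough sketch only (not code from this changeset), a caller who prefers to pin that value explicitly rather than rely on the default can pass the existing `light.SkippingVerification` option a `tmmath.Fraction`; the helper function below is hypothetical.

```go
package main

import (
	"fmt"

	tmmath "github.com/tendermint/tendermint/libs/math"
	"github.com/tendermint/tendermint/light"
)

// trustTwoThirds is a hypothetical helper that pins the light client's trust
// level to 2/3 instead of relying on the package default.
func trustTwoThirds() light.Option {
	return light.SkippingVerification(tmmath.Fraction{Numerator: 2, Denominator: 3})
}

func main() {
	// In real code the option would be passed to light.NewClient; here we only
	// show that the option value can be constructed.
	fmt.Printf("%T\n", trustTwoThirds())
}
```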
177 changed files with 6508 additions and 3261 deletions

@@ -53,10 +53,10 @@ updates:
- package-ecosystem: gomod
directory: "/"
schedule:
interval: weekly
interval: daily
target-branch: "v0.37.x"
# Only allow automated security-related dependency updates until we cut the
# final v0.37.0 release.
# Only allow automated security-related dependency updates on release
# branches.
open-pull-requests-limit: 0
labels:
- T:dependencies
@@ -65,9 +65,11 @@ updates:
- package-ecosystem: gomod
directory: "/"
schedule:
interval: weekly
interval: daily
target-branch: "v0.34.x"
open-pull-requests-limit: 10
# Only allow automated security-related dependency updates on release
# branches.
open-pull-requests-limit: 0
labels:
- T:dependencies
- S:automerge

@@ -41,17 +41,17 @@ jobs:
platforms: all
- name: Set up Docker Build
uses: docker/setup-buildx-action@v2.0.0
uses: docker/setup-buildx-action@v2.2.1
- name: Login to DockerHub
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@v2.0.0
uses: docker/login-action@v2.1.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Publish to Docker Hub
uses: docker/build-push-action@v3.1.1
uses: docker/build-push-action@v3.2.0
with:
context: .
file: ./DOCKER/Dockerfile

@@ -57,7 +57,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: slackapi/slack-github-action@v1.22.0
uses: slackapi/slack-github-action@v1.23.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
@@ -72,7 +72,7 @@ jobs:
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMIT_URL }}|commit> that caused the failure."
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMIT_URL }}|commit> related to the failure."
}
}
]

@@ -57,7 +57,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: slackapi/slack-github-action@v1.22.0
uses: slackapi/slack-github-action@v1.23.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
@@ -72,7 +72,7 @@ jobs:
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMIT_URL }}|commit> that caused the failure."
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMIT_URL }}|commit> related to the failure."
}
}
]

@@ -46,7 +46,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: slackapi/slack-github-action@v1.22.0
uses: slackapi/slack-github-action@v1.23.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
@@ -61,7 +61,7 @@ jobs:
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMIT_URL }}|commit> that caused the failure."
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMIT_URL }}|commit> related to the failure."
}
}
]

@@ -76,7 +76,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: slackapi/slack-github-action@v1.22.0
uses: slackapi/slack-github-action@v1.23.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK

@@ -1,41 +0,0 @@
name: Run Gosec
on:
pull_request:
paths:
- '**/*.go'
- 'go.mod'
- 'go.sum'
push:
branches:
- main
- 'feature/*'
- 'v0.37.x'
- 'v0.34.x'
paths:
- '**/*.go'
- 'go.mod'
- 'go.sum'
jobs:
Gosec:
permissions:
security-events: write
runs-on: ubuntu-latest
env:
GO111MODULE: on
steps:
- name: Checkout Source
uses: actions/checkout@v3
- name: Run Gosec Security Scanner
uses: cosmos/gosec@master
with:
# Let the report trigger a failure with the Github Security scanner features.
args: "-no-fail -fmt sarif -out results.sarif ./..."
- name: Upload SARIF file
uses: github/codeql-action/upload-sarif@v2
with:
# Path to SARIF file relative to the root of the repository
sarif_file: results.sarif

@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 3
steps:
- uses: styfle/cancel-workflow-action@0.10.1
- uses: styfle/cancel-workflow-action@0.11.0
with:
workflow_id: 1041851,1401230,2837803
access_token: ${{ github.token }}

@@ -31,10 +31,7 @@ jobs:
go.sum
- uses: golangci/golangci-lint-action@v3
with:
# Required: the version of golangci-lint is required and
# must be specified without patch version: we always use the
# latest patch version.
version: v1.47.3
version: v1.50.1
args: --timeout 10m
github-token: ${{ secrets.github_token }}
if: env.GIT_DIFF

@@ -1,23 +1,20 @@
name: Check Markdown links
on:
push:
branches:
- main
pull_request:
branches: [main]
schedule:
# 2am UTC daily
- cron: '0 2 * * *'
jobs:
markdown-link-check:
strategy:
matrix:
branch: ['main', 'v0.37.x', 'v0.34.x']
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.md
- uses: creachadair/github-action-markdown-link-check@master
ref: ${{ matrix.branch }}
- uses: informalsystems/github-action-markdown-link-check@main
with:
check-modified-files-only: 'yes'
config-file: '.md-link-check.json'
if: env.GIT_DIFF

@@ -8,7 +8,7 @@ on:
- "v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+" # e.g. v0.37.0-rc1, v0.38.0-rc10
jobs:
goreleaser:
prerelease:
runs-on: ubuntu-latest
steps:
- name: Checkout
@@ -38,3 +38,28 @@ jobs:
args: release --rm-dist --release-notes=../release_notes.md
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
prerelease-success:
needs: prerelease
if: ${{ success() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack upon pre-release
uses: slackapi/slack-github-action@v1.23.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
RELEASE_URL: "${{ github.server_url }}/${{ github.repository }}/releases/tag/${{ github.ref_name }}"
with:
payload: |
{
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":sparkles: New Tendermint pre-release: <${{ env.RELEASE_URL }}|${{ github.ref_name }}>"
}
}
]
}

@@ -15,7 +15,7 @@ jobs:
timeout-minutes: 5
steps:
- uses: actions/checkout@v3
- uses: bufbuild/buf-setup-action@v1.8.0
- uses: bufbuild/buf-setup-action@v1.9.0
- uses: bufbuild/buf-lint-action@v1
with:
input: 'proto'

@@ -6,7 +6,7 @@ on:
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
jobs:
goreleaser:
release:
runs-on: ubuntu-latest
steps:
- name: Checkout
@@ -35,3 +35,28 @@ jobs:
args: release --rm-dist --release-notes=../release_notes.md
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
release-success:
needs: release
if: ${{ success() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack upon release
uses: slackapi/slack-github-action@v1.23.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
RELEASE_URL: "${{ github.server_url }}/${{ github.repository }}/releases/tag/${{ github.ref_name }}"
with:
payload: |
{
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":rocket: New Tendermint release: <${{ env.RELEASE_URL }}|${{ github.ref_name }}>"
}
}
]
}

.gitignore vendored
@@ -55,3 +55,5 @@ proto/spec/**/*.pb.go
*.pdf
*.gz
*.dvi
# Python virtual environments
.venv

@@ -2,7 +2,6 @@ linters:
enable:
- asciicheck
- bodyclose
- deadcode
- depguard
- dogsled
- dupl
@@ -26,7 +25,6 @@ linters:
- typecheck
- unconvert
- unused
- varcheck
issues:
exclude-rules:

@@ -2,5 +2,16 @@
"retryOn429": true,
"retryCount": 5,
"fallbackRetryDelay": "30s",
"aliveStatusCodes": [200, 206, 503]
"aliveStatusCodes": [200, 206, 503],
"httpHeaders": [
{
"urls": [
"https://docs.github.com/",
"https://help.github.com/"
],
"headers": {
"Accept-Encoding": "zstd, br, gzip, deflate"
}
}
]
}

@@ -2,6 +2,36 @@
Friendly reminder, we have a [bug bounty program](https://hackerone.com/cosmos).
## v0.34.22
This release includes several bug fixes, [one of
which](https://github.com/tendermint/tendermint/pull/9518) we discovered while
building up a baseline for v0.34 against which to compare our upcoming v0.37
release during our [QA process](./docs/qa/).
Special thanks to external contributors on this release: @RiccardoM
### FEATURES
- [rpc] [\#9423](https://github.com/tendermint/tendermint/pull/9423) Support
HTTPS URLs from the WebSocket client (@RiccardoM, @cmwaters)
### BUG FIXES
- [config] [\#9483](https://github.com/tendermint/tendermint/issues/9483)
Calling `tendermint init` would incorrectly leave out the new `[storage]`
section delimiter in the generated configuration file - this has now been
fixed
- [p2p] [\#9500](https://github.com/tendermint/tendermint/issues/9500) Prevent
peers who have errored being added to the peer set (@jmalicevic)
- [indexer] [\#9473](https://github.com/tendermint/tendermint/issues/9473) Fix
bug that caused the psql indexer to index empty blocks whenever one of the
transactions returned a non zero code. The relevant deduplication logic has
been moved within the kv indexer only (@cmwaters)
- [blocksync] [\#9518](https://github.com/tendermint/tendermint/issues/9518) A
block sync stall was observed during our QA process whereby the node was
unable to make progress. Retrying block requests after a timeout fixes this.
## v0.34.21
Release highlights include:
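The blocksync bug-fix entry in the changelog excerpt above names the remedy only at a high level ("retry block requests after a timeout"). Purely as an illustrative sketch of that retry-after-timeout pattern, using a simplified response channel rather than the real reactor plumbing:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// requestBlock is a stand-in for asking a peer for a block: it waits for a
// response on a channel and re-issues the request when the wait times out,
// rather than stalling forever.
func requestBlock(height int64, responses <-chan int64, timeout time.Duration) (int64, error) {
	for attempt := 1; attempt <= 3; attempt++ {
		// A real implementation would (re)send the request to the peer here.
		select {
		case h := <-responses:
			return h, nil
		case <-time.After(timeout):
			fmt.Printf("block %d: attempt %d timed out, retrying\n", height, attempt)
		}
	}
	return 0, errors.New("gave up waiting for block")
}

func main() {
	responses := make(chan int64, 1)
	go func() {
		time.Sleep(150 * time.Millisecond) // slower than the first timeout below
		responses <- 10
	}()
	h, err := requestBlock(10, responses, 100*time.Millisecond)
	fmt.Println(h, err)
}
```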

@@ -11,6 +11,8 @@
- P2P Protocol
- Go API
- [p2p] \#9625 Remove unused p2p/trust package (@cmwaters)
- [light] \#9420 Default trust level has been modified to a more secure value of 2/3 (@cmwaters)
- Blockchain Protocol
@@ -27,6 +29,7 @@
- [pubsub] \#7319 Performance improvements for the event query API (@creachadair)
- [p2p/pex] \#6509 Improve addrBook.hash performance (@cuonglm)
- [crypto/merkle] \#6443 & \#6513 Improve HashAlternatives performance (@cuonglm, @marbar3778)
- [rpc] \#9650 Enable caching of RPC responses (@JayT106)
### BUG FIXES
@@ -38,6 +41,29 @@ Special thanks to external contributors on this release:
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
## v0.38.0
### BREAKING CHANGES
- CLI/RPC/Config
- Apps
- P2P Protocol
- Go API
- [light] \#9420 Default trust level has been modified to a more secure value of 2/3 (@cmwaters)
- Blockchain Protocol
### FEATURES
### IMPROVEMENTS
### BUG FIXES
## v0.37.0
### BREAKING CHANGES
- CLI/RPC/Config
@@ -96,4 +122,4 @@ Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermi
- [consensus] \#9229 fix round number of `enterPropose` when handling `RoundStepNewRound` timeout. (@fatcat22)
- [docker] \#9073 enable cross platform build using docker buildx
- [blocksync] \#9518 handle the case when the sending queue is full: retry block request after a timeout
- [blocksync] \#9518 handle the case when the sending queue is full: retry block request after a timeout

@@ -271,7 +271,7 @@ format:
lint:
@echo "--> Running linter"
@golangci-lint run
@go run github.com/golangci/golangci-lint/cmd/golangci-lint run
.PHONY: lint
DESTINATION = ./index.html.md

@@ -113,10 +113,15 @@ For more information on upgrading, see [UPGRADING.md](./UPGRADING.md).
### Supported Versions
Because we are a small core team, we only ship patch updates, including security
updates, to the most recent minor release and the second-most recent minor
release. Consequently, we strongly recommend keeping Tendermint up-to-date.
Upgrading instructions can be found in [UPGRADING.md](./UPGRADING.md).
Because we are a small core team, we have limited capacity to ship patch
updates, including security updates. Consequently, we strongly recommend keeping
Tendermint up-to-date. Upgrading instructions can be found in
[UPGRADING.md](./UPGRADING.md).
Currently supported versions include:
- v0.34.x
- v0.37.x (release candidate)
## Resources

View File

@@ -98,7 +98,7 @@ Sometimes it's necessary to rename libraries to avoid naming collisions or ambig
* Make use of table-driven testing where possible and not cumbersome
* [Inspiration](https://dave.cheney.net/2013/06/09/writing-table-driven-tests-in-go)
* Make use of [assert](https://godoc.org/github.com/stretchr/testify/assert) and [require](https://godoc.org/github.com/stretchr/testify/require)
* When using mocks, it is recommended to use Testify [mock] (<https://pkg.go.dev/github.com/stretchr/testify/mock>
* When using mocks, it is recommended to use Testify [mock](<https://pkg.go.dev/github.com/stretchr/testify/mock>
) along with [Mockery](https://github.com/vektra/mockery) for autogeneration
## Errors

View File

@@ -6,8 +6,6 @@ import (
tmsync "github.com/tendermint/tendermint/libs/sync"
)
var _ Client = (*localClient)(nil)
// NOTE: use defer to unlock mutex because Application might panic (e.g., in
// case of malicious tx or query). It only makes sense for publicly exposed
// methods like CheckTx (/broadcast_tx_* RPC endpoint) or Query (/abci_query
@@ -24,8 +22,6 @@ var _ Client = (*localClient)(nil)
// NewLocalClient creates a local client, which will be directly calling the
// methods of the given app.
//
// Both Async and Sync methods ignore the given context.Context parameter.
func NewLocalClient(mtx *tmsync.Mutex, app types.Application) Client {
if mtx == nil {
mtx = new(tmsync.Mutex)
@@ -309,7 +305,8 @@ func (app *localClient) OfferSnapshotSync(req types.RequestOfferSnapshot) (*type
}
func (app *localClient) LoadSnapshotChunkSync(
req types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
req types.RequestLoadSnapshotChunk,
) (*types.ResponseLoadSnapshotChunk, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -318,7 +315,8 @@ func (app *localClient) LoadSnapshotChunkSync(
}
func (app *localClient) ApplySnapshotChunkSync(
req types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
req types.RequestApplySnapshotChunk,
) (*types.ResponseApplySnapshotChunk, error) {
app.mtx.Lock()
defer app.mtx.Unlock()

View File

@@ -0,0 +1,263 @@
package abcicli
import (
"sync"
types "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/libs/service"
)
type unsyncLocalClient struct {
service.BaseService
types.Application
// This mutex is exclusively used to protect the callback.
mtx sync.RWMutex
Callback
}
var _ Client = (*unsyncLocalClient)(nil)
// NewUnsyncLocalClient creates an unsynchronized local client, which will be
// directly calling the methods of the given app.
//
// Unlike NewLocalClient, it does not hold a mutex around the application, so
// it is up to the application to manage its synchronization properly.
func NewUnsyncLocalClient(app types.Application) Client {
cli := &unsyncLocalClient{
Application: app,
}
cli.BaseService = *service.NewBaseService(nil, "unsyncLocalClient", cli)
return cli
}
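// Illustrative sketch (editorial note, not part of this file): constructing the
// client around an application value and issuing a call. Here `app` is assumed
// to implement types.Application; because no mutex is held by this client, the
// application itself must be safe for concurrent use when the client is shared
// across goroutines.
//
//	client := NewUnsyncLocalClient(app)
//	if err := client.Start(); err != nil {
//		// handle error
//	}
//	res, err := client.InfoSync(types.RequestInfo{Version: "v1"})
//	_ = res
//	_ = err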
func (app *unsyncLocalClient) SetResponseCallback(cb Callback) {
app.mtx.Lock()
defer app.mtx.Unlock()
app.Callback = cb
}
// TODO: change types.Application to include Error()?
func (app *unsyncLocalClient) Error() error {
return nil
}
func (app *unsyncLocalClient) FlushAsync() *ReqRes {
// Do nothing
return newLocalReqRes(types.ToRequestFlush(), nil)
}
func (app *unsyncLocalClient) EchoAsync(msg string) *ReqRes {
return app.callback(
types.ToRequestEcho(msg),
types.ToResponseEcho(msg),
)
}
func (app *unsyncLocalClient) InfoAsync(req types.RequestInfo) *ReqRes {
res := app.Application.Info(req)
return app.callback(
types.ToRequestInfo(req),
types.ToResponseInfo(res),
)
}
func (app *unsyncLocalClient) DeliverTxAsync(params types.RequestDeliverTx) *ReqRes {
res := app.Application.DeliverTx(params)
return app.callback(
types.ToRequestDeliverTx(params),
types.ToResponseDeliverTx(res),
)
}
func (app *unsyncLocalClient) CheckTxAsync(req types.RequestCheckTx) *ReqRes {
res := app.Application.CheckTx(req)
return app.callback(
types.ToRequestCheckTx(req),
types.ToResponseCheckTx(res),
)
}
func (app *unsyncLocalClient) QueryAsync(req types.RequestQuery) *ReqRes {
res := app.Application.Query(req)
return app.callback(
types.ToRequestQuery(req),
types.ToResponseQuery(res),
)
}
func (app *unsyncLocalClient) CommitAsync() *ReqRes {
res := app.Application.Commit()
return app.callback(
types.ToRequestCommit(),
types.ToResponseCommit(res),
)
}
func (app *unsyncLocalClient) InitChainAsync(req types.RequestInitChain) *ReqRes {
res := app.Application.InitChain(req)
return app.callback(
types.ToRequestInitChain(req),
types.ToResponseInitChain(res),
)
}
func (app *unsyncLocalClient) BeginBlockAsync(req types.RequestBeginBlock) *ReqRes {
res := app.Application.BeginBlock(req)
return app.callback(
types.ToRequestBeginBlock(req),
types.ToResponseBeginBlock(res),
)
}
func (app *unsyncLocalClient) EndBlockAsync(req types.RequestEndBlock) *ReqRes {
res := app.Application.EndBlock(req)
return app.callback(
types.ToRequestEndBlock(req),
types.ToResponseEndBlock(res),
)
}
func (app *unsyncLocalClient) ListSnapshotsAsync(req types.RequestListSnapshots) *ReqRes {
res := app.Application.ListSnapshots(req)
return app.callback(
types.ToRequestListSnapshots(req),
types.ToResponseListSnapshots(res),
)
}
func (app *unsyncLocalClient) OfferSnapshotAsync(req types.RequestOfferSnapshot) *ReqRes {
res := app.Application.OfferSnapshot(req)
return app.callback(
types.ToRequestOfferSnapshot(req),
types.ToResponseOfferSnapshot(res),
)
}
func (app *unsyncLocalClient) LoadSnapshotChunkAsync(req types.RequestLoadSnapshotChunk) *ReqRes {
res := app.Application.LoadSnapshotChunk(req)
return app.callback(
types.ToRequestLoadSnapshotChunk(req),
types.ToResponseLoadSnapshotChunk(res),
)
}
func (app *unsyncLocalClient) ApplySnapshotChunkAsync(req types.RequestApplySnapshotChunk) *ReqRes {
res := app.Application.ApplySnapshotChunk(req)
return app.callback(
types.ToRequestApplySnapshotChunk(req),
types.ToResponseApplySnapshotChunk(res),
)
}
func (app *unsyncLocalClient) PrepareProposalAsync(req types.RequestPrepareProposal) *ReqRes {
res := app.Application.PrepareProposal(req)
return app.callback(
types.ToRequestPrepareProposal(req),
types.ToResponsePrepareProposal(res),
)
}
func (app *unsyncLocalClient) ProcessProposalAsync(req types.RequestProcessProposal) *ReqRes {
res := app.Application.ProcessProposal(req)
return app.callback(
types.ToRequestProcessProposal(req),
types.ToResponseProcessProposal(res),
)
}
//-------------------------------------------------------
func (app *unsyncLocalClient) FlushSync() error {
return nil
}
func (app *unsyncLocalClient) EchoSync(msg string) (*types.ResponseEcho, error) {
return &types.ResponseEcho{Message: msg}, nil
}
func (app *unsyncLocalClient) InfoSync(req types.RequestInfo) (*types.ResponseInfo, error) {
res := app.Application.Info(req)
return &res, nil
}
func (app *unsyncLocalClient) DeliverTxSync(req types.RequestDeliverTx) (*types.ResponseDeliverTx, error) {
res := app.Application.DeliverTx(req)
return &res, nil
}
func (app *unsyncLocalClient) CheckTxSync(req types.RequestCheckTx) (*types.ResponseCheckTx, error) {
res := app.Application.CheckTx(req)
return &res, nil
}
func (app *unsyncLocalClient) QuerySync(req types.RequestQuery) (*types.ResponseQuery, error) {
res := app.Application.Query(req)
return &res, nil
}
func (app *unsyncLocalClient) CommitSync() (*types.ResponseCommit, error) {
res := app.Application.Commit()
return &res, nil
}
func (app *unsyncLocalClient) InitChainSync(req types.RequestInitChain) (*types.ResponseInitChain, error) {
res := app.Application.InitChain(req)
return &res, nil
}
func (app *unsyncLocalClient) BeginBlockSync(req types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
res := app.Application.BeginBlock(req)
return &res, nil
}
func (app *unsyncLocalClient) EndBlockSync(req types.RequestEndBlock) (*types.ResponseEndBlock, error) {
res := app.Application.EndBlock(req)
return &res, nil
}
func (app *unsyncLocalClient) ListSnapshotsSync(req types.RequestListSnapshots) (*types.ResponseListSnapshots, error) {
res := app.Application.ListSnapshots(req)
return &res, nil
}
func (app *unsyncLocalClient) OfferSnapshotSync(req types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error) {
res := app.Application.OfferSnapshot(req)
return &res, nil
}
func (app *unsyncLocalClient) LoadSnapshotChunkSync(
req types.RequestLoadSnapshotChunk,
) (*types.ResponseLoadSnapshotChunk, error) {
res := app.Application.LoadSnapshotChunk(req)
return &res, nil
}
func (app *unsyncLocalClient) ApplySnapshotChunkSync(
req types.RequestApplySnapshotChunk,
) (*types.ResponseApplySnapshotChunk, error) {
res := app.Application.ApplySnapshotChunk(req)
return &res, nil
}
func (app *unsyncLocalClient) PrepareProposalSync(req types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
res := app.Application.PrepareProposal(req)
return &res, nil
}
func (app *unsyncLocalClient) ProcessProposalSync(req types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
res := app.Application.ProcessProposal(req)
return &res, nil
}
//-------------------------------------------------------
func (app *unsyncLocalClient) callback(req *types.Request, res *types.Response) *ReqRes {
app.mtx.RLock()
defer app.mtx.RUnlock()
app.Callback(req, res)
rr := newLocalReqRes(req, res)
rr.callbackInvoked = true
return rr
}

View File

@@ -19,58 +19,6 @@ const (
BlockResponseMessageFieldKeySize
)
// EncodeMsg encodes a Protobuf message
func EncodeMsg(pb proto.Message) ([]byte, error) {
msg := bcproto.Message{}
switch pb := pb.(type) {
case *bcproto.BlockRequest:
msg.Sum = &bcproto.Message_BlockRequest{BlockRequest: pb}
case *bcproto.BlockResponse:
msg.Sum = &bcproto.Message_BlockResponse{BlockResponse: pb}
case *bcproto.NoBlockResponse:
msg.Sum = &bcproto.Message_NoBlockResponse{NoBlockResponse: pb}
case *bcproto.StatusRequest:
msg.Sum = &bcproto.Message_StatusRequest{StatusRequest: pb}
case *bcproto.StatusResponse:
msg.Sum = &bcproto.Message_StatusResponse{StatusResponse: pb}
default:
return nil, fmt.Errorf("unknown message type %T", pb)
}
bz, err := proto.Marshal(&msg)
if err != nil {
return nil, fmt.Errorf("unable to marshal %T: %w", pb, err)
}
return bz, nil
}
// DecodeMsg decodes a Protobuf message.
func DecodeMsg(bz []byte) (proto.Message, error) {
pb := &bcproto.Message{}
err := proto.Unmarshal(bz, pb)
if err != nil {
return nil, err
}
switch msg := pb.Sum.(type) {
case *bcproto.Message_BlockRequest:
return msg.BlockRequest, nil
case *bcproto.Message_BlockResponse:
return msg.BlockResponse, nil
case *bcproto.Message_NoBlockResponse:
return msg.NoBlockResponse, nil
case *bcproto.Message_StatusRequest:
return msg.StatusRequest, nil
case *bcproto.Message_StatusResponse:
return msg.StatusResponse, nil
default:
return nil, fmt.Errorf("unknown message type %T", msg)
}
}
// ValidateMsg validates a message.
func ValidateMsg(pb proto.Message) error {
if pb == nil {

View File

@@ -143,21 +143,20 @@ func (bcR *Reactor) GetChannels() []*p2p.ChannelDescriptor {
SendQueueCapacity: 1000,
RecvBufferCapacity: 50 * 4096,
RecvMessageCapacity: MaxMsgSize,
MessageType: &bcproto.Message{},
},
}
}
// AddPeer implements Reactor by sending our state to peer.
func (bcR *Reactor) AddPeer(peer p2p.Peer) {
msgBytes, err := EncodeMsg(&bcproto.StatusResponse{
Base: bcR.store.Base(),
Height: bcR.store.Height()})
if err != nil {
bcR.Logger.Error("could not convert msg to protobuf", "err", err)
return
}
peer.Send(BlocksyncChannel, msgBytes)
peer.Send(p2p.Envelope{
ChannelID: BlocksyncChannel,
Message: &bcproto.StatusResponse{
Base: bcR.store.Base(),
Height: bcR.store.Height(),
},
})
// it's OK if send fails. will try later in poolRoutine
// peer is added to the pool once we receive the first
@@ -182,69 +181,53 @@ func (bcR *Reactor) respondToPeer(msg *bcproto.BlockRequest,
return false
}
msgBytes, err := EncodeMsg(&bcproto.BlockResponse{Block: bl})
if err != nil {
bcR.Logger.Error("could not marshal msg", "err", err)
return false
}
return src.TrySend(BlocksyncChannel, msgBytes)
return src.TrySend(p2p.Envelope{
ChannelID: BlocksyncChannel,
Message: &bcproto.BlockResponse{Block: bl},
})
}
bcR.Logger.Info("Peer asking for a block we don't have", "src", src, "height", msg.Height)
msgBytes, err := EncodeMsg(&bcproto.NoBlockResponse{Height: msg.Height})
if err != nil {
bcR.Logger.Error("could not convert msg to protobuf", "err", err)
return false
}
return src.TrySend(BlocksyncChannel, msgBytes)
return src.TrySend(p2p.Envelope{
ChannelID: BlocksyncChannel,
Message: &bcproto.NoBlockResponse{Height: msg.Height},
})
}
// Receive implements Reactor by handling 4 types of messages (look below).
func (bcR *Reactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
msg, err := DecodeMsg(msgBytes)
if err != nil {
bcR.Logger.Error("Error decoding message", "src", src, "chId", chID, "err", err)
bcR.Switch.StopPeerForError(src, err)
func (bcR *Reactor) Receive(e p2p.Envelope) {
if err := ValidateMsg(e.Message); err != nil {
bcR.Logger.Error("Peer sent us invalid msg", "peer", e.Src, "msg", e.Message, "err", err)
bcR.Switch.StopPeerForError(e.Src, err)
return
}
if err = ValidateMsg(msg); err != nil {
bcR.Logger.Error("Peer sent us invalid msg", "peer", src, "msg", msg, "err", err)
bcR.Switch.StopPeerForError(src, err)
return
}
bcR.Logger.Debug("Receive", "e.Src", e.Src, "chID", e.ChannelID, "msg", e.Message)
bcR.Logger.Debug("Receive", "src", src, "chID", chID, "msg", msg)
switch msg := msg.(type) {
switch msg := e.Message.(type) {
case *bcproto.BlockRequest:
bcR.respondToPeer(msg, src)
bcR.respondToPeer(msg, e.Src)
case *bcproto.BlockResponse:
bi, err := types.BlockFromProto(msg.Block)
if err != nil {
bcR.Logger.Error("Block content is invalid", "err", err)
return
}
bcR.pool.AddBlock(src.ID(), bi, len(msgBytes))
bcR.pool.AddBlock(e.Src.ID(), bi, msg.Block.Size())
case *bcproto.StatusRequest:
// Send peer our state.
msgBytes, err := EncodeMsg(&bcproto.StatusResponse{
Height: bcR.store.Height(),
Base: bcR.store.Base(),
e.Src.TrySend(p2p.Envelope{
ChannelID: BlocksyncChannel,
Message: &bcproto.StatusResponse{
Height: bcR.store.Height(),
Base: bcR.store.Base(),
},
})
if err != nil {
bcR.Logger.Error("could not convert msg to protobut", "err", err)
return
}
src.TrySend(BlocksyncChannel, msgBytes)
case *bcproto.StatusResponse:
// Got a peer status. Unverified.
bcR.pool.SetPeerRange(src.ID(), msg.Base, msg.Height)
bcR.pool.SetPeerRange(e.Src.ID(), msg.Base, msg.Height)
case *bcproto.NoBlockResponse:
bcR.Logger.Debug("Peer does not have requested block", "peer", src, "height", msg.Height)
bcR.Logger.Debug("Peer does not have requested block", "peer", e.Src, "height", msg.Height)
default:
bcR.Logger.Error(fmt.Sprintf("Unknown message type %v", reflect.TypeOf(msg)))
}
@@ -285,13 +268,10 @@ func (bcR *Reactor) poolRoutine(stateSynced bool) {
if peer == nil {
continue
}
msgBytes, err := EncodeMsg(&bcproto.BlockRequest{Height: request.Height})
if err != nil {
bcR.Logger.Error("could not convert msg to proto", "err", err)
continue
}
queued := peer.TrySend(BlocksyncChannel, msgBytes)
queued := peer.TrySend(p2p.Envelope{
ChannelID: BlocksyncChannel,
Message: &bcproto.BlockRequest{Height: request.Height},
})
if !queued {
bcR.Logger.Debug("Send queue is full, drop block request", "peer", peer.ID(), "height", request.Height)
}
@@ -303,7 +283,7 @@ func (bcR *Reactor) poolRoutine(stateSynced bool) {
case <-statusUpdateTicker.C:
// ask for status updates
go bcR.BroadcastStatusRequest() //nolint: errcheck
go bcR.BroadcastStatusRequest()
}
}
@@ -429,14 +409,9 @@ FOR_LOOP:
}
// BroadcastStatusRequest broadcasts `BlockStore` base and height.
func (bcR *Reactor) BroadcastStatusRequest() error {
bm, err := EncodeMsg(&bcproto.StatusRequest{})
if err != nil {
bcR.Logger.Error("could not convert msg to proto", "err", err)
return fmt.Errorf("could not convert msg to proto: %w", err)
}
bcR.Switch.Broadcast(BlocksyncChannel, bm)
return nil
func (bcR *Reactor) BroadcastStatusRequest() {
bcR.Switch.Broadcast(p2p.Envelope{
ChannelID: BlocksyncChannel,
Message: &bcproto.StatusRequest{},
})
}
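The hunks above replace hand-marshalled byte slices with typed p2p.Envelope sends; marshalling now happens inside the p2p layer, with the MessageType registered on each channel descriptor telling the receiving side what to decode into. A minimal sketch of the new send pattern (illustrative only, assuming a peer of type p2p.Peer):

    envelope := p2p.Envelope{
        ChannelID: BlocksyncChannel,
        Message:   &bcproto.StatusRequest{}, // any proto message accepted on this channel
    }
    if !peer.TrySend(envelope) {
        // TrySend reports false when the peer's send queue is full;
        // callers log and retry later, as poolRoutine does above.
    }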

View File

@@ -67,7 +67,8 @@ func copyConfig(home, dir string) error {
func dumpProfile(dir, addr, profile string, debug int) error {
endpoint := fmt.Sprintf("%s/debug/pprof/%s?debug=%d", addr, profile, debug)
resp, err := http.Get(endpoint) //nolint: gosec
//nolint:gosec,nolintlint
resp, err := http.Get(endpoint)
if err != nil {
return fmt.Errorf("failed to query for %s profile: %w", profile, err)
}

View File

@@ -14,7 +14,7 @@ import (
"github.com/tendermint/tendermint/store"
)
var removeBlock bool = false
var removeBlock = false
func init() {
RollbackStateCmd.Flags().BoolVar(&removeBlock, "hard", false, "remove last block as well as state")

View File

@@ -423,6 +423,7 @@ type RPCConfig struct {
TLSKeyFile string `mapstructure:"tls_key_file"`
// pprof listen address (https://golang.org/pkg/net/http/pprof)
// FIXME: This should be moved under the instrumentation section
PprofListenAddress string `mapstructure:"pprof_laddr"`
}
@@ -506,6 +507,10 @@ func (cfg *RPCConfig) IsCorsEnabled() bool {
return len(cfg.CORSAllowedOrigins) != 0
}
func (cfg *RPCConfig) IsPprofEnabled() bool {
return len(cfg.PprofListenAddress) != 0
}
func (cfg RPCConfig) KeyFile() string {
path := cfg.TLSKeyFile
if filepath.IsAbs(path) {
@@ -703,14 +708,28 @@ type MempoolConfig struct {
// Mempool version to use:
// 1) "v0" - (default) FIFO mempool.
// 2) "v1" - prioritized mempool.
// WARNING: There's a known memory leak with the prioritized mempool
// that the team are working on. Read more here:
// https://github.com/tendermint/tendermint/issues/8775
Version string `mapstructure:"version"`
RootDir string `mapstructure:"home"`
Recheck bool `mapstructure:"recheck"`
Broadcast bool `mapstructure:"broadcast"`
WalPath string `mapstructure:"wal_dir"`
Version string `mapstructure:"version"`
// RootDir is the root directory for all data. This should be configured via
// the $TMHOME env variable or --home cmd flag rather than overriding this
// struct field.
RootDir string `mapstructure:"home"`
// Recheck (default: true) defines whether Tendermint should recheck the
// validity of all remaining transactions in the mempool after a block.
// Since a block affects the application state, some transactions in the
// mempool may become invalid. If this does not apply to your application,
// you can disable rechecking.
Recheck bool `mapstructure:"recheck"`
// Broadcast (default: true) defines whether the mempool should relay
// transactions to other peers. Setting this to false will stop the mempool
// from relaying transactions to other peers until they are included in a
// block. In other words, if Broadcast is disabled, only the peer you send
// the tx to will see it until it is included in a block.
Broadcast bool `mapstructure:"broadcast"`
// WalPath (default: "") configures the location of the Write Ahead Log
// (WAL) for the mempool. The WAL is disabled by default. To enable, set
// WalPath to where you want the WAL to be written (e.g.
// "data/mempool.wal").
WalPath string `mapstructure:"wal_dir"`
// Maximum number of transactions in the mempool
Size int `mapstructure:"size"`
// Limit the total size of all txs in the mempool.
@@ -1204,6 +1223,10 @@ func (cfg *InstrumentationConfig) ValidateBasic() error {
return nil
}
func (cfg *InstrumentationConfig) IsPrometheusEnabled() bool {
return cfg.Prometheus && cfg.PrometheusListenAddr != ""
}
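// Illustrative sketch (editorial note, not part of this diff): how a caller
// might combine the new IsPprofEnabled/IsPrometheusEnabled helpers with the
// documented mempool fields. DefaultConfig() is assumed from this package.
//
//	cfg := DefaultConfig()
//	cfg.Mempool.Recheck = true               // re-validate remaining txs after each block
//	cfg.Mempool.Broadcast = true             // relay txs to peers
//	cfg.Mempool.WalPath = "data/mempool.wal" // enable the mempool WAL
//	if cfg.RPC.IsPprofEnabled() {
//		// serve pprof on cfg.RPC.PprofListenAddress
//	}
//	if cfg.Instrumentation.IsPrometheusEnabled() {
//		// serve metrics on cfg.Instrumentation.PrometheusListenAddr
//	}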
//-----------------------------------------------------------------------------
// Utils

View File

@@ -349,8 +349,24 @@ dial_timeout = "{{ .P2P.DialTimeout }}"
# 2) "v1" - prioritized mempool.
version = "{{ .Mempool.Version }}"
# Recheck (default: true) defines whether Tendermint should recheck the
# validity of all remaining transactions in the mempool after a block.
# Since a block affects the application state, some transactions in the
# mempool may become invalid. If this does not apply to your application,
# you can disable rechecking.
recheck = {{ .Mempool.Recheck }}
# Broadcast (default: true) defines whether the mempool should relay
# transactions to other peers. Setting this to false will stop the mempool
# from relaying transactions to other peers until they are included in a
# block. In other words, if Broadcast is disabled, only the peer you send
# the tx to will see it until it is included in a block.
broadcast = {{ .Mempool.Broadcast }}
# WalPath (default: "") configures the location of the Write Ahead Log
# (WAL) for the mempool. The WAL is disabled by default. To enable, set
# WalPath to where you want the WAL to be written (e.g.
# "data/mempool.wal").
wal_dir = "{{ js .Mempool.WalPath }}"
# Maximum number of transactions in the mempool
@@ -436,7 +452,7 @@ chunk_fetchers = "{{ .StateSync.ChunkFetchers }}"
[blocksync]
# Block Sync version to use:
#
#
# In v0.37, v1 and v2 of the block sync protocols were deprecated.
# Please use v0 instead.
#

View File

@@ -26,6 +26,7 @@ import (
mempoolv0 "github.com/tendermint/tendermint/mempool/v0"
mempoolv1 "github.com/tendermint/tendermint/mempool/v1"
"github.com/tendermint/tendermint/p2p"
tmcons "github.com/tendermint/tendermint/proto/tendermint/consensus"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/store"
@@ -165,10 +166,16 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
for i, peer := range peerList {
if i < len(peerList)/2 {
bcs.Logger.Info("Signed and pushed vote", "vote", prevote1, "peer", peer)
peer.Send(VoteChannel, MustEncode(&VoteMessage{prevote1}))
peer.Send(p2p.Envelope{
Message: &tmcons.Vote{Vote: prevote1.ToProto()},
ChannelID: VoteChannel,
})
} else {
bcs.Logger.Info("Signed and pushed vote", "vote", prevote2, "peer", peer)
peer.Send(VoteChannel, MustEncode(&VoteMessage{prevote2}))
peer.Send(p2p.Envelope{
Message: &tmcons.Vote{Vote: prevote2.ToProto()},
ChannelID: VoteChannel,
})
}
}
} else {
@@ -520,18 +527,26 @@ func sendProposalAndParts(
parts *types.PartSet,
) {
// proposal
msg := &ProposalMessage{Proposal: proposal}
peer.Send(DataChannel, MustEncode(msg))
peer.Send(p2p.Envelope{
ChannelID: DataChannel,
Message: &tmcons.Proposal{Proposal: *proposal.ToProto()},
})
// parts
for i := 0; i < int(parts.Total()); i++ {
part := parts.GetPart(i)
msg := &BlockPartMessage{
Height: height, // This tells peer that this part applies to us.
Round: round, // This tells peer that this part applies to us.
Part: part,
pp, err := part.ToProto()
if err != nil {
panic(err) // TODO: wbanfield better error handling
}
peer.Send(DataChannel, MustEncode(msg))
peer.Send(p2p.Envelope{
ChannelID: DataChannel,
Message: &tmcons.BlockPart{
Height: height, // This tells peer that this part applies to us.
Round: round, // This tells peer that this part applies to us.
Part: *pp,
},
})
}
// votes
@@ -539,9 +554,14 @@ func sendProposalAndParts(
prevote, _ := cs.signVote(tmproto.PrevoteType, blockHash, parts.Header())
precommit, _ := cs.signVote(tmproto.PrecommitType, blockHash, parts.Header())
cs.mtx.Unlock()
peer.Send(VoteChannel, MustEncode(&VoteMessage{prevote}))
peer.Send(VoteChannel, MustEncode(&VoteMessage{precommit}))
peer.Send(p2p.Envelope{
ChannelID: VoteChannel,
Message: &tmcons.Vote{Vote: prevote.ToProto()},
})
peer.Send(p2p.Envelope{
ChannelID: VoteChannel,
Message: &tmcons.Vote{Vote: precommit.ToProto()},
})
}
//----------------------------------------
@@ -579,7 +599,7 @@ func (br *ByzantineReactor) AddPeer(peer p2p.Peer) {
func (br *ByzantineReactor) RemovePeer(peer p2p.Peer, reason interface{}) {
br.reactor.RemovePeer(peer, reason)
}
func (br *ByzantineReactor) Receive(chID byte, peer p2p.Peer, msgBytes []byte) {
br.reactor.Receive(chID, peer, msgBytes)
func (br *ByzantineReactor) Receive(e p2p.Envelope) {
br.reactor.Receive(e)
}
func (br *ByzantineReactor) InitPeer(peer p2p.Peer) p2p.Peer { return peer }

View File

@@ -7,6 +7,7 @@ import (
"github.com/tendermint/tendermint/libs/log"
tmrand "github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/p2p"
tmcons "github.com/tendermint/tendermint/proto/tendermint/consensus"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/types"
)
@@ -94,7 +95,10 @@ func invalidDoPrevoteFunc(t *testing.T, height int64, round int32, cs *State, sw
peers := sw.Peers().List()
for _, peer := range peers {
cs.Logger.Info("Sending bad vote", "block", blockHash, "peer", peer)
peer.Send(VoteChannel, MustEncode(&VoteMessage{precommit}))
peer.Send(p2p.Envelope{
Message: &tmcons.Vote{Vote: precommit.ToProto()},
ChannelID: VoteChannel,
})
}
}()
}

View File

@@ -5,7 +5,6 @@ import (
"fmt"
"github.com/cosmos/gogoproto/proto"
cstypes "github.com/tendermint/tendermint/consensus/types"
"github.com/tendermint/tendermint/libs/bits"
tmmath "github.com/tendermint/tendermint/libs/math"
@@ -15,173 +14,147 @@ import (
"github.com/tendermint/tendermint/types"
)
// MsgToProto takes a consensus message type and returns the proto defined consensus message
func MsgToProto(msg Message) (*tmcons.Message, error) {
// MsgToProto takes a consensus message type and returns the proto defined consensus message.
//
// TODO: This needs to be removed, but WALToProto depends on this.
func MsgToProto(msg Message) (proto.Message, error) {
if msg == nil {
return nil, errors.New("consensus: message is nil")
}
var pb tmcons.Message
var pb proto.Message
switch msg := msg.(type) {
case *NewRoundStepMessage:
pb = tmcons.Message{
Sum: &tmcons.Message_NewRoundStep{
NewRoundStep: &tmcons.NewRoundStep{
Height: msg.Height,
Round: msg.Round,
Step: uint32(msg.Step),
SecondsSinceStartTime: msg.SecondsSinceStartTime,
LastCommitRound: msg.LastCommitRound,
},
},
pb = &tmcons.NewRoundStep{
Height: msg.Height,
Round: msg.Round,
Step: uint32(msg.Step),
SecondsSinceStartTime: msg.SecondsSinceStartTime,
LastCommitRound: msg.LastCommitRound,
}
case *NewValidBlockMessage:
pbPartSetHeader := msg.BlockPartSetHeader.ToProto()
pbBits := msg.BlockParts.ToProto()
pb = tmcons.Message{
Sum: &tmcons.Message_NewValidBlock{
NewValidBlock: &tmcons.NewValidBlock{
Height: msg.Height,
Round: msg.Round,
BlockPartSetHeader: pbPartSetHeader,
BlockParts: pbBits,
IsCommit: msg.IsCommit,
},
},
pb = &tmcons.NewValidBlock{
Height: msg.Height,
Round: msg.Round,
BlockPartSetHeader: pbPartSetHeader,
BlockParts: pbBits,
IsCommit: msg.IsCommit,
}
case *ProposalMessage:
pbP := msg.Proposal.ToProto()
pb = tmcons.Message{
Sum: &tmcons.Message_Proposal{
Proposal: &tmcons.Proposal{
Proposal: *pbP,
},
},
pb = &tmcons.Proposal{
Proposal: *pbP,
}
case *ProposalPOLMessage:
pbBits := msg.ProposalPOL.ToProto()
pb = tmcons.Message{
Sum: &tmcons.Message_ProposalPol{
ProposalPol: &tmcons.ProposalPOL{
Height: msg.Height,
ProposalPolRound: msg.ProposalPOLRound,
ProposalPol: *pbBits,
},
},
pb = &tmcons.ProposalPOL{
Height: msg.Height,
ProposalPolRound: msg.ProposalPOLRound,
ProposalPol: *pbBits,
}
case *BlockPartMessage:
parts, err := msg.Part.ToProto()
if err != nil {
return nil, fmt.Errorf("msg to proto error: %w", err)
}
pb = tmcons.Message{
Sum: &tmcons.Message_BlockPart{
BlockPart: &tmcons.BlockPart{
Height: msg.Height,
Round: msg.Round,
Part: *parts,
},
},
pb = &tmcons.BlockPart{
Height: msg.Height,
Round: msg.Round,
Part: *parts,
}
case *VoteMessage:
vote := msg.Vote.ToProto()
pb = tmcons.Message{
Sum: &tmcons.Message_Vote{
Vote: &tmcons.Vote{
Vote: vote,
},
},
pb = &tmcons.Vote{
Vote: vote,
}
case *HasVoteMessage:
pb = tmcons.Message{
Sum: &tmcons.Message_HasVote{
HasVote: &tmcons.HasVote{
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
Index: msg.Index,
},
},
pb = &tmcons.HasVote{
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
Index: msg.Index,
}
case *VoteSetMaj23Message:
bi := msg.BlockID.ToProto()
pb = tmcons.Message{
Sum: &tmcons.Message_VoteSetMaj23{
VoteSetMaj23: &tmcons.VoteSetMaj23{
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
BlockID: bi,
},
},
pb = &tmcons.VoteSetMaj23{
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
BlockID: bi,
}
case *VoteSetBitsMessage:
bi := msg.BlockID.ToProto()
bits := msg.Votes.ToProto()
vsb := &tmcons.Message_VoteSetBits{
VoteSetBits: &tmcons.VoteSetBits{
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
BlockID: bi,
},
vsb := &tmcons.VoteSetBits{
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
BlockID: bi,
}
if bits != nil {
vsb.VoteSetBits.Votes = *bits
vsb.Votes = *bits
}
pb = tmcons.Message{
Sum: vsb,
}
pb = vsb
default:
return nil, fmt.Errorf("consensus: message not recognized: %T", msg)
}
return &pb, nil
return pb, nil
}
// MsgFromProto takes a consensus proto message and returns the native go type
func MsgFromProto(msg *tmcons.Message) (Message, error) {
if msg == nil {
func MsgFromProto(p proto.Message) (Message, error) {
if p == nil {
return nil, errors.New("consensus: nil message")
}
var pb Message
switch msg := msg.Sum.(type) {
case *tmcons.Message_NewRoundStep:
rs, err := tmmath.SafeConvertUint8(int64(msg.NewRoundStep.Step))
switch msg := p.(type) {
case *tmcons.NewRoundStep:
rs, err := tmmath.SafeConvertUint8(int64(msg.Step))
// deny message based on possible overflow
if err != nil {
return nil, fmt.Errorf("denying message due to possible overflow: %w", err)
}
pb = &NewRoundStepMessage{
Height: msg.NewRoundStep.Height,
Round: msg.NewRoundStep.Round,
Height: msg.Height,
Round: msg.Round,
Step: cstypes.RoundStepType(rs),
SecondsSinceStartTime: msg.NewRoundStep.SecondsSinceStartTime,
LastCommitRound: msg.NewRoundStep.LastCommitRound,
SecondsSinceStartTime: msg.SecondsSinceStartTime,
LastCommitRound: msg.LastCommitRound,
}
case *tmcons.Message_NewValidBlock:
pbPartSetHeader, err := types.PartSetHeaderFromProto(&msg.NewValidBlock.BlockPartSetHeader)
case *tmcons.NewValidBlock:
pbPartSetHeader, err := types.PartSetHeaderFromProto(&msg.BlockPartSetHeader)
if err != nil {
return nil, fmt.Errorf("parts to proto error: %w", err)
}
pbBits := new(bits.BitArray)
pbBits.FromProto(msg.NewValidBlock.BlockParts)
pbBits.FromProto(msg.BlockParts)
pb = &NewValidBlockMessage{
Height: msg.NewValidBlock.Height,
Round: msg.NewValidBlock.Round,
Height: msg.Height,
Round: msg.Round,
BlockPartSetHeader: *pbPartSetHeader,
BlockParts: pbBits,
IsCommit: msg.NewValidBlock.IsCommit,
IsCommit: msg.IsCommit,
}
case *tmcons.Message_Proposal:
pbP, err := types.ProposalFromProto(&msg.Proposal.Proposal)
case *tmcons.Proposal:
pbP, err := types.ProposalFromProto(&msg.Proposal)
if err != nil {
return nil, fmt.Errorf("proposal msg to proto error: %w", err)
}
@@ -189,26 +162,26 @@ func MsgFromProto(msg *tmcons.Message) (Message, error) {
pb = &ProposalMessage{
Proposal: pbP,
}
case *tmcons.Message_ProposalPol:
case *tmcons.ProposalPOL:
pbBits := new(bits.BitArray)
pbBits.FromProto(&msg.ProposalPol.ProposalPol)
pbBits.FromProto(&msg.ProposalPol)
pb = &ProposalPOLMessage{
Height: msg.ProposalPol.Height,
ProposalPOLRound: msg.ProposalPol.ProposalPolRound,
Height: msg.Height,
ProposalPOLRound: msg.ProposalPolRound,
ProposalPOL: pbBits,
}
case *tmcons.Message_BlockPart:
parts, err := types.PartFromProto(&msg.BlockPart.Part)
case *tmcons.BlockPart:
parts, err := types.PartFromProto(&msg.Part)
if err != nil {
return nil, fmt.Errorf("blockpart msg to proto error: %w", err)
}
pb = &BlockPartMessage{
Height: msg.BlockPart.Height,
Round: msg.BlockPart.Round,
Height: msg.Height,
Round: msg.Round,
Part: parts,
}
case *tmcons.Message_Vote:
vote, err := types.VoteFromProto(msg.Vote.Vote)
case *tmcons.Vote:
vote, err := types.VoteFromProto(msg.Vote)
if err != nil {
return nil, fmt.Errorf("vote msg to proto error: %w", err)
}
@@ -216,36 +189,36 @@ func MsgFromProto(msg *tmcons.Message) (Message, error) {
pb = &VoteMessage{
Vote: vote,
}
case *tmcons.Message_HasVote:
case *tmcons.HasVote:
pb = &HasVoteMessage{
Height: msg.HasVote.Height,
Round: msg.HasVote.Round,
Type: msg.HasVote.Type,
Index: msg.HasVote.Index,
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
Index: msg.Index,
}
case *tmcons.Message_VoteSetMaj23:
bi, err := types.BlockIDFromProto(&msg.VoteSetMaj23.BlockID)
case *tmcons.VoteSetMaj23:
bi, err := types.BlockIDFromProto(&msg.BlockID)
if err != nil {
return nil, fmt.Errorf("voteSetMaj23 msg to proto error: %w", err)
}
pb = &VoteSetMaj23Message{
Height: msg.VoteSetMaj23.Height,
Round: msg.VoteSetMaj23.Round,
Type: msg.VoteSetMaj23.Type,
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
BlockID: *bi,
}
case *tmcons.Message_VoteSetBits:
bi, err := types.BlockIDFromProto(&msg.VoteSetBits.BlockID)
case *tmcons.VoteSetBits:
bi, err := types.BlockIDFromProto(&msg.BlockID)
if err != nil {
return nil, fmt.Errorf("voteSetBits msg to proto error: %w", err)
}
bits := new(bits.BitArray)
bits.FromProto(&msg.VoteSetBits.Votes)
bits.FromProto(&msg.Votes)
pb = &VoteSetBitsMessage{
Height: msg.VoteSetBits.Height,
Round: msg.VoteSetBits.Round,
Type: msg.VoteSetBits.Type,
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
BlockID: *bi,
Votes: bits,
}
@@ -260,20 +233,6 @@ func MsgFromProto(msg *tmcons.Message) (Message, error) {
return pb, nil
}
// MustEncode takes the reactors msg, makes it proto and marshals it
// this mimics `MustMarshalBinaryBare` in that is panics on error
func MustEncode(msg Message) []byte {
pb, err := MsgToProto(msg)
if err != nil {
panic(err)
}
enc, err := proto.Marshal(pb)
if err != nil {
panic(err)
}
return enc
}
// WALToProto takes a WAL message and return a proto walMessage and error
func WALToProto(msg WALMessage) (*tmcons.WALMessage, error) {
var pb tmcons.WALMessage
@@ -294,10 +253,14 @@ func WALToProto(msg WALMessage) (*tmcons.WALMessage, error) {
if err != nil {
return nil, err
}
if w, ok := consMsg.(p2p.Wrapper); ok {
consMsg = w.Wrap()
}
cm := consMsg.(*tmcons.Message)
pb = tmcons.WALMessage{
Sum: &tmcons.WALMessage_MsgInfo{
MsgInfo: &tmcons.MsgInfo{
Msg: *consMsg,
Msg: *cm,
PeerID: string(msg.PeerID),
},
},
@@ -343,7 +306,11 @@ func WALFromProto(msg *tmcons.WALMessage) (WALMessage, error) {
Step: msg.EventDataRoundState.Step,
}
case *tmcons.WALMessage_MsgInfo:
walMsg, err := MsgFromProto(&msg.MsgInfo.Msg)
um, err := msg.MsgInfo.Msg.Unwrap()
if err != nil {
return nil, fmt.Errorf("unwrap message: %w", err)
}
walMsg, err := MsgFromProto(um)
if err != nil {
return nil, fmt.Errorf("msgInfo from proto error: %w", err)
}
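With the changes above, MsgToProto returns a bare proto.Message and MsgFromProto accepts one directly, so callers no longer build the tmcons.Message wrapper by hand (the WAL path wraps via p2p.Wrapper only where it still needs the wrapper type). A short round-trip sketch, illustrative only, using types already present in this file:

    orig := &HasVoteMessage{Height: 1, Round: 0, Type: tmproto.PrevoteType, Index: 2}
    pb, err := MsgToProto(orig) // e.g. *tmcons.HasVote
    if err != nil {
        panic(err)
    }
    back, err := MsgFromProto(pb)
    if err != nil {
        panic(err)
    }
    _ = back.(*HasVoteMessage) // the native message type is recovered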

View File

@@ -71,7 +71,7 @@ func TestMsgToProto(t *testing.T) {
testsCases := []struct {
testName string
msg Message
want *tmcons.Message
want proto.Message
wantErr bool
}{
{"successful NewRoundStepMessage", &NewRoundStepMessage{
@@ -80,17 +80,15 @@ func TestMsgToProto(t *testing.T) {
Step: 1,
SecondsSinceStartTime: 1,
LastCommitRound: 2,
}, &tmcons.Message{
Sum: &tmcons.Message_NewRoundStep{
NewRoundStep: &tmcons.NewRoundStep{
Height: 2,
Round: 1,
Step: 1,
SecondsSinceStartTime: 1,
LastCommitRound: 2,
},
},
}, false},
}, &tmcons.NewRoundStep{
Height: 2,
Round: 1,
Step: 1,
SecondsSinceStartTime: 1,
LastCommitRound: 2,
},
false},
{"successful NewValidBlockMessage", &NewValidBlockMessage{
Height: 1,
@@ -98,92 +96,78 @@ func TestMsgToProto(t *testing.T) {
BlockPartSetHeader: psh,
BlockParts: bits,
IsCommit: false,
}, &tmcons.Message{
Sum: &tmcons.Message_NewValidBlock{
NewValidBlock: &tmcons.NewValidBlock{
Height: 1,
Round: 1,
BlockPartSetHeader: pbPsh,
BlockParts: pbBits,
IsCommit: false,
},
},
}, false},
}, &tmcons.NewValidBlock{
Height: 1,
Round: 1,
BlockPartSetHeader: pbPsh,
BlockParts: pbBits,
IsCommit: false,
},
false},
{"successful BlockPartMessage", &BlockPartMessage{
Height: 100,
Round: 1,
Part: &parts,
}, &tmcons.Message{
Sum: &tmcons.Message_BlockPart{
BlockPart: &tmcons.BlockPart{
Height: 100,
Round: 1,
Part: *pbParts,
},
},
}, false},
}, &tmcons.BlockPart{
Height: 100,
Round: 1,
Part: *pbParts,
},
false},
{"successful ProposalPOLMessage", &ProposalPOLMessage{
Height: 1,
ProposalPOLRound: 1,
ProposalPOL: bits,
}, &tmcons.Message{
Sum: &tmcons.Message_ProposalPol{
ProposalPol: &tmcons.ProposalPOL{
Height: 1,
ProposalPolRound: 1,
ProposalPol: *pbBits,
},
}}, false},
}, &tmcons.ProposalPOL{
Height: 1,
ProposalPolRound: 1,
ProposalPol: *pbBits,
},
false},
{"successful ProposalMessage", &ProposalMessage{
Proposal: &proposal,
}, &tmcons.Message{
Sum: &tmcons.Message_Proposal{
Proposal: &tmcons.Proposal{
Proposal: *pbProposal,
},
},
}, false},
}, &tmcons.Proposal{
Proposal: *pbProposal,
},
false},
{"successful VoteMessage", &VoteMessage{
Vote: vote,
}, &tmcons.Message{
Sum: &tmcons.Message_Vote{
Vote: &tmcons.Vote{
Vote: pbVote,
},
},
}, false},
}, &tmcons.Vote{
Vote: pbVote,
},
false},
{"successful VoteSetMaj23", &VoteSetMaj23Message{
Height: 1,
Round: 1,
Type: 1,
BlockID: bi,
}, &tmcons.Message{
Sum: &tmcons.Message_VoteSetMaj23{
VoteSetMaj23: &tmcons.VoteSetMaj23{
Height: 1,
Round: 1,
Type: 1,
BlockID: pbBi,
},
},
}, false},
}, &tmcons.VoteSetMaj23{
Height: 1,
Round: 1,
Type: 1,
BlockID: pbBi,
},
false},
{"successful VoteSetBits", &VoteSetBitsMessage{
Height: 1,
Round: 1,
Type: 1,
BlockID: bi,
Votes: bits,
}, &tmcons.Message{
Sum: &tmcons.Message_VoteSetBits{
VoteSetBits: &tmcons.VoteSetBits{
Height: 1,
Round: 1,
Type: 1,
BlockID: pbBi,
Votes: *pbBits,
},
},
}, false},
}, &tmcons.VoteSetBits{
Height: 1,
Round: 1,
Type: 1,
BlockID: pbBi,
Votes: *pbBits,
},
false},
{"failure", nil, &tmcons.Message{}, true},
}
for _, tt := range testsCases {

View File

@@ -7,8 +7,6 @@ import (
"sync"
"time"
"github.com/cosmos/gogoproto/proto"
cstypes "github.com/tendermint/tendermint/consensus/types"
"github.com/tendermint/tendermint/libs/bits"
tmevents "github.com/tendermint/tendermint/libs/events"
@@ -148,6 +146,7 @@ func (conR *Reactor) GetChannels() []*p2p.ChannelDescriptor {
Priority: 6,
SendQueueCapacity: 100,
RecvMessageCapacity: maxMsgSize,
MessageType: &tmcons.Message{},
},
{
ID: DataChannel, // maybe split between gossiping current block and catchup stuff
@@ -156,6 +155,7 @@ func (conR *Reactor) GetChannels() []*p2p.ChannelDescriptor {
SendQueueCapacity: 100,
RecvBufferCapacity: 50 * 4096,
RecvMessageCapacity: maxMsgSize,
MessageType: &tmcons.Message{},
},
{
ID: VoteChannel,
@@ -163,6 +163,7 @@ func (conR *Reactor) GetChannels() []*p2p.ChannelDescriptor {
SendQueueCapacity: 100,
RecvBufferCapacity: 100 * 100,
RecvMessageCapacity: maxMsgSize,
MessageType: &tmcons.Message{},
},
{
ID: VoteSetBitsChannel,
@@ -170,6 +171,7 @@ func (conR *Reactor) GetChannels() []*p2p.ChannelDescriptor {
SendQueueCapacity: 2,
RecvBufferCapacity: 1024,
RecvMessageCapacity: maxMsgSize,
MessageType: &tmcons.Message{},
},
}
}
@@ -223,34 +225,33 @@ func (conR *Reactor) RemovePeer(peer p2p.Peer, reason interface{}) {
// Peer state updates can happen in parallel, but processing of
// proposals, block parts, and votes are ordered by the receiveRoutine
// NOTE: blocks on consensus state for proposals, block parts, and votes
func (conR *Reactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
func (conR *Reactor) Receive(e p2p.Envelope) {
if !conR.IsRunning() {
conR.Logger.Debug("Receive", "src", src, "chId", chID, "bytes", msgBytes)
conR.Logger.Debug("Receive", "src", e.Src, "chId", e.ChannelID)
return
}
msg, err := decodeMsg(msgBytes)
msg, err := MsgFromProto(e.Message)
if err != nil {
conR.Logger.Error("Error decoding message", "src", src, "chId", chID, "err", err)
conR.Switch.StopPeerForError(src, err)
conR.Logger.Error("Error decoding message", "src", e.Src, "chId", e.ChannelID, "err", err)
conR.Switch.StopPeerForError(e.Src, err)
return
}
if err = msg.ValidateBasic(); err != nil {
conR.Logger.Error("Peer sent us invalid msg", "peer", src, "msg", msg, "err", err)
conR.Switch.StopPeerForError(src, err)
conR.Logger.Error("Peer sent us invalid msg", "peer", e.Src, "msg", e.Message, "err", err)
conR.Switch.StopPeerForError(e.Src, err)
return
}
conR.Logger.Debug("Receive", "src", src, "chId", chID, "msg", msg)
conR.Logger.Debug("Receive", "src", e.Src, "chId", e.ChannelID, "msg", msg)
// Get peer states
ps, ok := src.Get(types.PeerStateKey).(*PeerState)
ps, ok := e.Src.Get(types.PeerStateKey).(*PeerState)
if !ok {
panic(fmt.Sprintf("Peer %v has no state", src))
panic(fmt.Sprintf("Peer %v has no state", e.Src))
}
switch chID {
switch e.ChannelID {
case StateChannel:
switch msg := msg.(type) {
case *NewRoundStepMessage:
@@ -258,8 +259,8 @@ func (conR *Reactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
initialHeight := conR.conS.state.InitialHeight
conR.conS.mtx.Unlock()
if err = msg.ValidateHeight(initialHeight); err != nil {
conR.Logger.Error("Peer sent us invalid msg", "peer", src, "msg", msg, "err", err)
conR.Switch.StopPeerForError(src, err)
conR.Logger.Error("Peer sent us invalid msg", "peer", e.Src, "msg", msg, "err", err)
conR.Switch.StopPeerForError(e.Src, err)
return
}
ps.ApplyNewRoundStepMessage(msg)
@@ -278,7 +279,7 @@ func (conR *Reactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
// Peer claims to have a maj23 for some BlockID at H,R,S,
err := votes.SetPeerMaj23(msg.Round, msg.Type, ps.peer.ID(), msg.BlockID)
if err != nil {
conR.Switch.StopPeerForError(src, err)
conR.Switch.StopPeerForError(e.Src, err)
return
}
// Respond with a VoteSetBitsMessage showing which votes we have.
@@ -292,13 +293,19 @@ func (conR *Reactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
default:
panic("Bad VoteSetBitsMessage field Type. Forgot to add a check in ValidateBasic?")
}
src.TrySend(VoteSetBitsChannel, MustEncode(&VoteSetBitsMessage{
eMsg := &tmcons.VoteSetBits{
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
BlockID: msg.BlockID,
Votes: ourVotes,
}))
BlockID: msg.BlockID.ToProto(),
}
if votes := ourVotes.ToProto(); votes != nil {
eMsg.Votes = *votes
}
e.Src.TrySend(p2p.Envelope{
ChannelID: VoteSetBitsChannel,
Message: eMsg,
})
default:
conR.Logger.Error(fmt.Sprintf("Unknown message type %v", reflect.TypeOf(msg)))
}
@@ -311,13 +318,13 @@ func (conR *Reactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
switch msg := msg.(type) {
case *ProposalMessage:
ps.SetHasProposal(msg.Proposal)
conR.conS.peerMsgQueue <- msgInfo{msg, src.ID()}
conR.conS.peerMsgQueue <- msgInfo{msg, e.Src.ID()}
case *ProposalPOLMessage:
ps.ApplyProposalPOLMessage(msg)
case *BlockPartMessage:
ps.SetHasProposalBlockPart(msg.Height, msg.Round, int(msg.Part.Index))
conR.Metrics.BlockParts.With("peer_id", string(src.ID())).Add(1)
conR.conS.peerMsgQueue <- msgInfo{msg, src.ID()}
conR.Metrics.BlockParts.With("peer_id", string(e.Src.ID())).Add(1)
conR.conS.peerMsgQueue <- msgInfo{msg, e.Src.ID()}
default:
conR.Logger.Error(fmt.Sprintf("Unknown message type %v", reflect.TypeOf(msg)))
}
@@ -337,7 +344,7 @@ func (conR *Reactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
ps.EnsureVoteBitArrays(height-1, lastCommitSize)
ps.SetHasVote(msg.Vote)
cs.peerMsgQueue <- msgInfo{msg, src.ID()}
cs.peerMsgQueue <- msgInfo{msg, e.Src.ID()}
default:
// don't punish (leave room for soft upgrades)
@@ -376,7 +383,7 @@ func (conR *Reactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
}
default:
conR.Logger.Error(fmt.Sprintf("Unknown chId %X", chID))
conR.Logger.Error(fmt.Sprintf("Unknown chId %X", e.ChannelID))
}
}
@@ -430,29 +437,39 @@ func (conR *Reactor) unsubscribeFromBroadcastEvents() {
func (conR *Reactor) broadcastNewRoundStepMessage(rs *cstypes.RoundState) {
nrsMsg := makeRoundStepMessage(rs)
conR.Switch.Broadcast(StateChannel, MustEncode(nrsMsg))
conR.Switch.Broadcast(p2p.Envelope{
ChannelID: StateChannel,
Message: nrsMsg,
})
}
func (conR *Reactor) broadcastNewValidBlockMessage(rs *cstypes.RoundState) {
csMsg := &NewValidBlockMessage{
psh := rs.ProposalBlockParts.Header()
csMsg := &tmcons.NewValidBlock{
Height: rs.Height,
Round: rs.Round,
BlockPartSetHeader: rs.ProposalBlockParts.Header(),
BlockParts: rs.ProposalBlockParts.BitArray(),
BlockPartSetHeader: psh.ToProto(),
BlockParts: rs.ProposalBlockParts.BitArray().ToProto(),
IsCommit: rs.Step == cstypes.RoundStepCommit,
}
conR.Switch.Broadcast(StateChannel, MustEncode(csMsg))
conR.Switch.Broadcast(p2p.Envelope{
ChannelID: StateChannel,
Message: csMsg,
})
}
// Broadcasts HasVoteMessage to peers that care.
func (conR *Reactor) broadcastHasVoteMessage(vote *types.Vote) {
msg := &HasVoteMessage{
msg := &tmcons.HasVote{
Height: vote.Height,
Round: vote.Round,
Type: vote.Type,
Index: vote.ValidatorIndex,
}
conR.Switch.Broadcast(StateChannel, MustEncode(msg))
conR.Switch.Broadcast(p2p.Envelope{
ChannelID: StateChannel,
Message: msg,
})
/*
// TODO: Make this broadcast more selective.
for _, peer := range conR.Switch.Peers().List() {
@@ -463,7 +480,11 @@ func (conR *Reactor) broadcastHasVoteMessage(vote *types.Vote) {
prs := ps.GetRoundState()
if prs.Height == vote.Height {
// TODO: Also filter on round?
peer.TrySend(StateChannel, struct{ ConsensusMessage }{msg})
e := p2p.Envelope{
ChannelID: StateChannel,
Message:   msg,
}
peer.TrySend(e)
} else {
// Height doesn't match
// TODO: check a field, maybe CatchupCommitRound?
@@ -473,11 +494,11 @@ func (conR *Reactor) broadcastHasVoteMessage(vote *types.Vote) {
*/
}
func makeRoundStepMessage(rs *cstypes.RoundState) (nrsMsg *NewRoundStepMessage) {
nrsMsg = &NewRoundStepMessage{
func makeRoundStepMessage(rs *cstypes.RoundState) (nrsMsg *tmcons.NewRoundStep) {
nrsMsg = &tmcons.NewRoundStep{
Height: rs.Height,
Round: rs.Round,
Step: rs.Step,
Step: uint32(rs.Step),
SecondsSinceStartTime: int64(time.Since(rs.StartTime).Seconds()),
LastCommitRound: rs.LastCommit.GetRound(),
}
@@ -487,7 +508,10 @@ func makeRoundStepMessage(rs *cstypes.RoundState) (nrsMsg *NewRoundStepMessage)
func (conR *Reactor) sendNewRoundStepMessage(peer p2p.Peer) {
rs := conR.getRoundState()
nrsMsg := makeRoundStepMessage(rs)
peer.Send(StateChannel, MustEncode(nrsMsg))
peer.Send(p2p.Envelope{
ChannelID: StateChannel,
Message: nrsMsg,
})
}
func (conR *Reactor) updateRoundStateRoutine() {
@@ -526,13 +550,19 @@ OUTER_LOOP:
if rs.ProposalBlockParts.HasHeader(prs.ProposalBlockPartSetHeader) {
if index, ok := rs.ProposalBlockParts.BitArray().Sub(prs.ProposalBlockParts.Copy()).PickRandom(); ok {
part := rs.ProposalBlockParts.GetPart(index)
msg := &BlockPartMessage{
Height: rs.Height, // This tells peer that this part applies to us.
Round: rs.Round, // This tells peer that this part applies to us.
Part: part,
parts, err := part.ToProto()
if err != nil {
panic(err)
}
logger.Debug("Sending block part", "height", prs.Height, "round", prs.Round)
if peer.Send(DataChannel, MustEncode(msg)) {
if peer.Send(p2p.Envelope{
ChannelID: DataChannel,
Message: &tmcons.BlockPart{
Height: rs.Height, // This tells peer that this part applies to us.
Round: rs.Round, // This tells peer that this part applies to us.
Part: *parts,
},
}) {
ps.SetHasProposalBlockPart(prs.Height, prs.Round, index)
}
continue OUTER_LOOP
@@ -578,9 +608,11 @@ OUTER_LOOP:
if rs.Proposal != nil && !prs.Proposal {
// Proposal: share the proposal metadata with peer.
{
msg := &ProposalMessage{Proposal: rs.Proposal}
logger.Debug("Sending proposal", "height", prs.Height, "round", prs.Round)
if peer.Send(DataChannel, MustEncode(msg)) {
if peer.Send(p2p.Envelope{
ChannelID: DataChannel,
Message: &tmcons.Proposal{Proposal: *rs.Proposal.ToProto()},
}) {
// NOTE[ZM]: A peer might have received different proposal msg so this Proposal msg will be rejected!
ps.SetHasProposal(rs.Proposal)
}
@@ -590,13 +622,15 @@ OUTER_LOOP:
// rs.Proposal was validated, so rs.Proposal.POLRound <= rs.Round,
// so we definitely have rs.Votes.Prevotes(rs.Proposal.POLRound).
if 0 <= rs.Proposal.POLRound {
msg := &ProposalPOLMessage{
Height: rs.Height,
ProposalPOLRound: rs.Proposal.POLRound,
ProposalPOL: rs.Votes.Prevotes(rs.Proposal.POLRound).BitArray(),
}
logger.Debug("Sending POL", "height", prs.Height, "round", prs.Round)
peer.Send(DataChannel, MustEncode(msg))
peer.Send(p2p.Envelope{
ChannelID: DataChannel,
Message: &tmcons.ProposalPOL{
Height: rs.Height,
ProposalPolRound: rs.Proposal.POLRound,
ProposalPol: *rs.Votes.Prevotes(rs.Proposal.POLRound).BitArray().ToProto(),
},
})
}
continue OUTER_LOOP
}
@@ -633,13 +667,20 @@ func (conR *Reactor) gossipDataForCatchup(logger log.Logger, rs *cstypes.RoundSt
return
}
// Send the part
msg := &BlockPartMessage{
Height: prs.Height, // Not our height, so it doesn't matter.
Round: prs.Round, // Not our height, so it doesn't matter.
Part: part,
}
logger.Debug("Sending block part for catchup", "round", prs.Round, "index", index)
if peer.Send(DataChannel, MustEncode(msg)) {
pp, err := part.ToProto()
if err != nil {
logger.Error("Could not convert part to proto", "index", index, "error", err)
return
}
if peer.Send(p2p.Envelope{
ChannelID: DataChannel,
Message: &tmcons.BlockPart{
Height: prs.Height, // Not our height, so it doesn't matter.
Round: prs.Round, // Not our height, so it doesn't matter.
Part: *pp,
},
}) {
ps.SetHasProposalBlockPart(prs.Height, prs.Round, index)
} else {
logger.Debug("Sending block part for catchup failed")
@@ -798,12 +839,16 @@ OUTER_LOOP:
prs := ps.GetRoundState()
if rs.Height == prs.Height {
if maj23, ok := rs.Votes.Prevotes(prs.Round).TwoThirdsMajority(); ok {
peer.TrySend(StateChannel, MustEncode(&VoteSetMaj23Message{
Height: prs.Height,
Round: prs.Round,
Type: tmproto.PrevoteType,
BlockID: maj23,
}))
peer.TrySend(p2p.Envelope{
ChannelID: StateChannel,
Message: &tmcons.VoteSetMaj23{
Height: prs.Height,
Round: prs.Round,
Type: tmproto.PrevoteType,
BlockID: maj23.ToProto(),
},
})
time.Sleep(conR.conS.config.PeerQueryMaj23SleepDuration)
}
}
@@ -815,12 +860,15 @@ OUTER_LOOP:
prs := ps.GetRoundState()
if rs.Height == prs.Height {
if maj23, ok := rs.Votes.Precommits(prs.Round).TwoThirdsMajority(); ok {
peer.TrySend(StateChannel, MustEncode(&VoteSetMaj23Message{
Height: prs.Height,
Round: prs.Round,
Type: tmproto.PrecommitType,
BlockID: maj23,
}))
peer.TrySend(p2p.Envelope{
ChannelID: StateChannel,
Message: &tmcons.VoteSetMaj23{
Height: prs.Height,
Round: prs.Round,
Type: tmproto.PrecommitType,
BlockID: maj23.ToProto(),
},
})
time.Sleep(conR.conS.config.PeerQueryMaj23SleepDuration)
}
}
@@ -832,12 +880,16 @@ OUTER_LOOP:
prs := ps.GetRoundState()
if rs.Height == prs.Height && prs.ProposalPOLRound >= 0 {
if maj23, ok := rs.Votes.Prevotes(prs.ProposalPOLRound).TwoThirdsMajority(); ok {
peer.TrySend(StateChannel, MustEncode(&VoteSetMaj23Message{
Height: prs.Height,
Round: prs.ProposalPOLRound,
Type: tmproto.PrevoteType,
BlockID: maj23,
}))
peer.TrySend(p2p.Envelope{
ChannelID: StateChannel,
Message: &tmcons.VoteSetMaj23{
Height: prs.Height,
Round: prs.ProposalPOLRound,
Type: tmproto.PrevoteType,
BlockID: maj23.ToProto(),
},
})
time.Sleep(conR.conS.config.PeerQueryMaj23SleepDuration)
}
}
@@ -852,12 +904,15 @@ OUTER_LOOP:
if prs.CatchupCommitRound != -1 && prs.Height > 0 && prs.Height <= conR.conS.blockStore.Height() &&
prs.Height >= conR.conS.blockStore.Base() {
if commit := conR.conS.LoadCommit(prs.Height); commit != nil {
peer.TrySend(StateChannel, MustEncode(&VoteSetMaj23Message{
Height: prs.Height,
Round: commit.Round,
Type: tmproto.PrecommitType,
BlockID: commit.BlockID,
}))
peer.TrySend(p2p.Envelope{
ChannelID: StateChannel,
Message: &tmcons.VoteSetMaj23{
Height: prs.Height,
Round: commit.Round,
Type: tmproto.PrecommitType,
BlockID: commit.BlockID.ToProto(),
},
})
time.Sleep(conR.conS.config.PeerQueryMaj23SleepDuration)
}
}
@@ -1071,9 +1126,13 @@ func (ps *PeerState) SetHasProposalBlockPart(height int64, round int32, index in
// Returns true if vote was sent.
func (ps *PeerState) PickSendVote(votes types.VoteSetReader) bool {
if vote, ok := ps.PickVoteToSend(votes); ok {
msg := &VoteMessage{vote}
ps.logger.Debug("Sending vote message", "ps", ps, "vote", vote)
if ps.peer.Send(VoteChannel, MustEncode(msg)) {
if ps.peer.Send(p2p.Envelope{
ChannelID: VoteChannel,
Message: &tmcons.Vote{
Vote: vote.ToProto(),
},
}) {
ps.SetHasVote(vote)
return true
}
@@ -1439,15 +1498,6 @@ func init() {
tmjson.RegisterType(&VoteSetBitsMessage{}, "tendermint/VoteSetBits")
}
func decodeMsg(bz []byte) (msg Message, err error) {
pb := &tmcons.Message{}
if err = proto.Unmarshal(bz, pb); err != nil {
return msg, err
}
return MsgFromProto(pb)
}
//-------------------------------------
// NewRoundStepMessage is sent for every step taken in the ConsensusState.

View File

@@ -33,6 +33,7 @@ import (
mempoolv1 "github.com/tendermint/tendermint/mempool/v1"
"github.com/tendermint/tendermint/p2p"
p2pmock "github.com/tendermint/tendermint/p2p/mock"
tmcons "github.com/tendermint/tendermint/proto/tendermint/consensus"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
sm "github.com/tendermint/tendermint/state"
statemocks "github.com/tendermint/tendermint/state/mocks"
@@ -265,15 +266,18 @@ func TestReactorReceiveDoesNotPanicIfAddPeerHasntBeenCalledYet(t *testing.T) {
var (
reactor = reactors[0]
peer = p2pmock.NewPeer(nil)
msg = MustEncode(&HasVoteMessage{Height: 1,
Round: 1, Index: 1, Type: tmproto.PrevoteType})
)
reactor.InitPeer(peer)
// simulate switch calling Receive before AddPeer
assert.NotPanics(t, func() {
reactor.Receive(StateChannel, peer, msg)
reactor.Receive(p2p.Envelope{
ChannelID: StateChannel,
Src: peer,
Message: &tmcons.HasVote{Height: 1,
Round: 1, Index: 1, Type: tmproto.PrevoteType},
})
reactor.AddPeer(peer)
})
}
@@ -288,15 +292,18 @@ func TestReactorReceivePanicsIfInitPeerHasntBeenCalledYet(t *testing.T) {
var (
reactor = reactors[0]
peer = p2pmock.NewPeer(nil)
msg = MustEncode(&HasVoteMessage{Height: 1,
Round: 1, Index: 1, Type: tmproto.PrevoteType})
)
// we should call InitPeer here
// simulate switch calling Receive before AddPeer
assert.Panics(t, func() {
reactor.Receive(StateChannel, peer, msg)
reactor.Receive(p2p.Envelope{
ChannelID: StateChannel,
Src: peer,
Message: &tmcons.HasVote{Height: 1,
Round: 1, Index: 1, Type: tmproto.PrevoteType},
})
})
}

View File

@@ -99,4 +99,4 @@ configuration file that we can update with PRs.
Because the build processes are identical (as is the information contained
herein), this file should be kept in sync as much as possible with its
[counterpart in the Cosmos SDK
repo](https://github.com/cosmos/cosmos-sdk/blob/master/docs/DOCS_README.md).
repo](https://github.com/cosmos/cosmos-sdk/blob/main/docs/README.md).

View File

@@ -78,6 +78,7 @@ Note the context/background should be written in the present tense.
- [ADR-039: Peer-Behaviour](./adr-039-peer-behaviour.md)
- [ADR-063: Privval-gRPC](./adr-063-privval-grpc.md)
- [ADR-067: Mempool Refactor](./adr-067-mempool-refactor.md)
- [ADR-071: Proposer-Based Timestamps](./adr-071-proposer-based-timestamps.md)
- [ADR-075: RPC Event Subscription Interface](./adr-075-rpc-subscription.md)
- [ADR-079: Ed25519 Verification](./adr-079-ed25519-verification.md)
- [ADR-081: Protocol Buffers Management](./adr-081-protobuf-mgmt.md)
@@ -114,7 +115,6 @@ None
- [ADR-064: Batch Verification](./adr-064-batch-verification.md)
- [ADR-068: Reverse-Sync](./adr-068-reverse-sync.md)
- [ADR-069: Node Initialization](./adr-069-flexible-node-initialization.md)
- [ADR-071: Proposer-Based Timestamps](./adr-071-proposer-based-timestamps.md)
- [ADR-073: Adopt LibP2P](./adr-073-libp2p.md)
- [ADR-074: Migrate Timeout Parameters to Consensus Parameters](./adr-074-timeout-params.md)
- [ADR-080: Reverse Sync](./adr-080-reverse-sync.md)

View File

@@ -61,7 +61,7 @@ The following protocols and application features require a reliable source of ti
* Tendermint Light Clients [rely on correspondence between their known time](https://github.com/tendermint/tendermint/blob/main/spec/light-client/verification/README.md#definitions-1) and the block time for block verification.
* Tendermint Evidence validity is determined [either in terms of heights or in terms of time](https://github.com/tendermint/tendermint/blob/8029cf7a0fcc89a5004e173ec065aa48ad5ba3c8/spec/consensus/evidence.md#verification).
* Unbonding of staked assets in the Cosmos Hub [occurs after a period of 21 days](https://github.com/cosmos/governance/blob/ce75de4019b0129f6efcbb0e752cd2cc9e6136d3/params-change/Staking.md#unbondingtime).
* IBC packets can use either a [timestamp or a height to timeout packet delivery](https://docs.cosmos.network/v0.44/ibc/overview.html#acknowledgements)
* IBC packets can use either a [timestamp or a height to timeout packet delivery](https://docs.cosmos.network/v0.45/ibc/overview.html#acknowledgements)
Finally, inflation distribution in the Cosmos Hub uses an approximation of time to calculate an annual percentage rate.
This approximation of time is calculated using [block heights with an estimated number of blocks produced in a year](https://github.com/cosmos/governance/blob/master/params-change/Mint.md#blocksperyear).

23
docs/qa/README.md Normal file
View File

@@ -0,0 +1,23 @@
---
order: 1
parent:
title: Tendermint Quality Assurance
description: This is a report on the process followed and results obtained when running v0.34.x on testnets
order: 2
---
# Tendermint Quality Assurance
This directory keeps track of the process followed by the Tendermint Core team
for Quality Assurance before cutting a release.
This directory is intended to live in multiple branches. On each release branch,
the contents of this directory reflect the status of the process
at the time the Quality Assurance process was applied for that release.
The file [method](./method.md) describes the process followed to obtain the results
used to decide whether a release passes the Quality Assurance process.
The results obtained in each release are stored in their own directory.
The following releases have undergone the Quality Assurance process:
* [v0.34.x](./v034/), which was tested just before releasing v0.34.22
* [v0.37.x](./v037/), with v0.34.x acting as a baseline

214
docs/qa/method.md Normal file
View File

@@ -0,0 +1,214 @@
---
order: 1
title: Method
---
# Method
This document provides a detailed description of the QA process.
It is intended to be used by engineers reproducing the experimental setup for future tests of Tendermint.
The (first iteration of the) QA process as described [in the RELEASES.md document][releases]
was applied to version v0.34.x in order to have a set of results acting as benchmarking baseline.
This baseline is then compared with results obtained in later versions.
Out of the testnet-based test cases described in [the releases document][releases], we focused on two:
the _200 Node Test_ and the _Rotating Nodes Test_.
[releases]: https://github.com/tendermint/tendermint/blob/v0.37.x/RELEASES.md#large-scale-testnets
## Software Dependencies
### Infrastructure Requirements to Run the Tests
* An account at Digital Ocean (DO), with a high droplet limit (>202)
* The machine to orchestrate the tests should have the following installed:
* A clone of the [testnet repository][testnet-repo]
* This repository contains all the scripts mentioned in the remainder of this section
* [Digital Ocean CLI][doctl]
* [Terraform CLI][Terraform]
* [Ansible CLI][Ansible]
[testnet-repo]: https://github.com/interchainio/tendermint-testnet
[Ansible]: https://docs.ansible.com/ansible/latest/index.html
[Terraform]: https://www.terraform.io/docs
[doctl]: https://docs.digitalocean.com/reference/doctl/how-to/install/
### Requirements for Result Extraction
* Matlab or Octave
* [Prometheus][prometheus] server installed
* blockstore DB of one of the full nodes in the testnet
* Prometheus DB
[prometheus]: https://prometheus.io/
## 200 Node Testnet
### Running the test
This section explains how the tests were carried out for reproducibility purposes.
1. [If you haven't done it before]
Follow steps 1-4 of the `README.md` at the top of the testnet repository to configure Terraform and `doctl`.
2. Copy file `testnets/testnet200.toml` onto `testnet.toml` (do NOT commit this change)
3. Set the variable `VERSION_TAG` in the `Makefile` to the git hash that is to be tested.
4. Follow steps 5-10 of the `README.md` to configure and start the 200 node testnet
* WARNING: Do NOT forget to run `make terraform-destroy` as soon as you are done with the tests (see step 9)
5. As a sanity check, connect to the Prometheus node's web interface and check the graph for the `tendermint_consensus_height` metric.
All nodes should be increasing their heights.
6. `ssh` into the `testnet-load-runner`, then copy script `script/200-node-loadscript.sh` and run it from the load runner node.
* Before running it, you need to edit the script to provide the IP address of a full node.
This node will receive all transactions from the load runner node.
* This script will take about 40 mins to run
* It runs 90-second-long experiments in a loop with different loads
7. Run `make retrieve-data` to gather all relevant data from the testnet into the orchestrating machine
8. Verify that the data was collected without errors
* at least one blockstore DB for a Tendermint validator
* the Prometheus database from the Prometheus node
* for extra care, you can run `zip -T` on the `prometheus.zip` file and (one of) the `blockstore.db.zip` file(s)
9. **Run `make terraform-destroy`**
* Don't forget to type `yes`! Otherwise you're in trouble.
### Result Extraction
The method for extracting the results described here is highly manual (and exploratory) at this stage.
The Core team should improve it at every iteration to increase the amount of automation.
#### Steps
1. Unzip the blockstore into a directory
2. Extract the latency report and the raw latencies for all the experiments. Run these commands from the directory containing the blockstore
* `go run github.com/tendermint/tendermint/test/loadtime/cmd/report@3ec6e424d --database-type goleveldb --data-dir ./ > results/report.txt`
* `go run github.com/tendermint/tendermint/test/loadtime/cmd/report@3ec6e424d --database-type goleveldb --data-dir ./ --csv results/raw.csv`
3. File `report.txt` contains an unordered list of experiments with varying concurrent connections and transaction rate
* Create files `report01.txt`, `report02.txt`, `report04.txt` and, for each experiment in file `report.txt`,
copy its related lines to the filename that matches the number of connections.
* Sort the experiments in `report01.txt` in ascending tx rate order. Likewise for `report02.txt` and `report04.txt`.
4. Generate file `report_tabbed.txt` by showing the contents of `report01.txt`, `report02.txt`, and `report04.txt` side by side
* This effectively creates a table where rows are a particular tx rate and columns are a particular number of websocket connections.
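One way to produce the side-by-side view is with `pr` from GNU coreutils (a sketch; the original file may have been produced differently, and `paste` would also work):
```bash
# Merge the three per-connection reports into parallel columns, omitting headers.
pr -m -t -w 200 report01.txt report02.txt report04.txt > report_tabbed.txt
```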
5. Extract the raw latencies from file `raw.csv` using the following bash loop. This creates a `.csv` file and a `.dat` file per experiment.
The format of the `.dat` files is amenable to loading them as matrices in Octave
```bash
uuids=($(cat report01.txt report02.txt report04.txt | grep '^Experiment ID: ' | awk '{ print $3 }'))
c=0  # bash arrays are zero-indexed, so start at the first experiment's UUID
for i in 01 02 04; do
for j in 0025 0050 0100 0200; do
echo $i $j $c "${uuids[$c]}"
filename=c${i}_r${j}
grep ${uuids[$c]} raw.csv > ${filename}.csv
cat ${filename}.csv | tr , ' ' | awk '{ print $2, $3 }' > ${filename}.dat
c=$(expr $c + 1)
done
done
```
6. Enter Octave
7. Load all `.dat` files generated in step 5 into matrices using this Octave code snippet
```octave
conns = { "01"; "02"; "04" };
rates = { "0025"; "0050"; "0100"; "0200" };
for i = 1:length(conns)
for j = 1:length(rates)
filename = strcat("c", conns{i}, "_r", rates{j}, ".dat");
load("-ascii", filename);
endfor
endfor
```
8. Set the variable `release` to the current release undergoing QA
```octave
release = "v0.34.x";
```
9. Generate a plot with all (or some) experiments, where the X axis is the experiment time,
and the Y axis is the latency of transactions.
The following snippet plots all experiments.
```octave
legends = {};
hold off;
for i = 1:length(conns)
for j = 1:length(rates)
data_name = strcat("c", conns{i}, "_r", rates{j});
l = strcat("c=", conns{i}, " r=", rates{j});
m = eval(data_name); plot((m(:,1) - min(m(:,1))) / 1e+9, m(:,2) / 1e+9, ".");
hold on;
legends(1, end+1) = l;
endfor
endfor
legend(legends, "location", "northeastoutside");
xlabel("experiment time (s)");
ylabel("latency (s)");
t = sprintf("200-node testnet - %s", release);
title(t);
```
10. Consider adjusting the axis, in case you want to compare your results to the baseline, for instance
```octave
axis([0, 100, 0, 30], "tic");
```
11. Use Octave's GUI menu to save the plot (e.g. as `.png`)
12. Repeat steps 9 and 10 to obtain as many plots as deemed necessary.
13. To generate a latency vs throughput plot, using the raw CSV file generated
in step 2, follow the instructions for the [`latency_throughput.py`] script.
[`latency_throughput.py`]: ../../scripts/qa/reporting/README.md
#### Extracting Prometheus Metrics
1. Stop the Prometheus server if it is running as a service (e.g. a `systemd` unit).
2. Unzip the Prometheus database retrieved from the testnet, and move it to replace the
local Prometheus database.
3. Start the Prometheus server and make sure no error logs appear at startup (steps 1-3 are sketched just after this list).
4. Introduce the metrics you want to gather or plot.
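A sketch of steps 1-3 follows; the `prometheus` unit name and the `/var/lib/prometheus/data` path are assumptions that depend on how Prometheus is installed on the orchestrating machine:
```bash
# 1. Stop the locally running Prometheus server
sudo systemctl stop prometheus
# 2. Replace the local database with the one retrieved from the testnet
unzip prometheus.zip -d ./testnet-prometheus-data
sudo mv /var/lib/prometheus/data /var/lib/prometheus/data.bak
sudo mv ./testnet-prometheus-data /var/lib/prometheus/data
# 3. Restart and check that no errors are logged at startup
sudo systemctl start prometheus
journalctl -u prometheus --since "5 minutes ago" --no-pager
```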
## Rotating Node Testnet
### Running the test
This section explains how the tests were carried out for reproducibility purposes.
1. [If you haven't done it before]
Follow steps 1-4 of the `README.md` at the top of the testnet repository to configure Terraform and `doctl`.
2. Copy file `testnet_rotating.toml` onto `testnet.toml` (do NOT commit this change)
3. Set variable `VERSION_TAG` to the git hash that is to be tested.
4. Run `make terraform-apply EPHEMERAL_SIZE=25`
* WARNING: Do NOT forget to run `make terraform-destroy` as soon as you are done with the tests
5. Follow steps 6-10 of the `README.md` to configure and start the "stable" part of the rotating node testnet
6. As a sanity check, connect to the Prometheus node's web interface and check the graph for the `tendermint_consensus_height` metric.
All nodes should be increasing their heights.
7. On a different shell,
* run `make runload ROTATE_CONNECTIONS=X ROTATE_TX_RATE=Y`
* `X` and `Y` should reflect a load below the saturation point (see, e.g.,
[this paragraph](./v034/README.md#finding-the-saturation-point) for further info); an example invocation is sketched just after this list
8. Run `make rotate` to start the script that creates the ephemeral nodes, and kills them when they are caught up.
* WARNING: If you run this command from your laptop, the laptop needs to be up and connected for the full length
of the experiment.
9. Wait until the height of the chain reaches 3000
10. When the rotate script has made two iterations (i.e., all ephemeral nodes have caught up twice)
after height 3000 was reached, stop `make rotate`
11. Run `make retrieve-data` to gather all relevant data from the testnet into the orchestrating machine
12. Verify that the data was collected without errors
* at least one blockstore DB for a Tendermint validator
* the Prometheus database from the Prometheus node
* for extra care, you can run `zip -T` on the `prometheus.zip` file and (one of) the `blockstore.db.zip` file(s)
13. **Run `make terraform-destroy`**
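For instance, using the load later reported as safely below saturation for this testnet size (`c=4,r=800`), step 7's invocation would look like this (a sketch):
```bash
# 4 websocket connections at 800 tx/s, a load below the saturation point
# for a testnet of this size (see the v0.34.x report).
make runload ROTATE_CONNECTIONS=4 ROTATE_TX_RATE=800
```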
Steps 8 to 10 are highly manual at the moment and will be improved in future iterations.
### Result Extraction
In order to obtain a latency plot, follow the instructions above for the 200 node experiment, but:
* The `results.txt` file contains only one experiment
* Therefore, no need for any `for` loops
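For instance, the single experiment's latencies can be extracted with something like the following (a sketch that assumes the same report and `raw.csv` formats as in the 200 node procedure; the output filenames are arbitrary):
```bash
# Grab the only experiment ID in the report and extract its raw latencies.
uuid=$(grep '^Experiment ID: ' results.txt | awk '{ print $3 }')
grep "${uuid}" raw.csv > rotating.csv
# Convert to a space-separated .dat file that Octave can load as a matrix.
tr , ' ' < rotating.csv | awk '{ print $2, $3 }' > rotating.dat
```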
As for Prometheus, the same method as for the 200 node experiment can be applied.

278
docs/qa/v034/README.md Normal file
View File

@@ -0,0 +1,278 @@
---
order: 1
parent:
title: Tendermint Quality Assurance Results for v0.34.x
description: This is a report on the results obtained when running v0.34.x on testnets
order: 2
---
# v0.34.x
## 200 Node Testnet
### Finding the Saturation Point
The first goal when examining the results of the tests is identifying the saturation point.
The saturation point is a setup with a transaction load big enough to prevent the testnet
from being stable: the load runner tries to produce slightly more transactions than can
be processed by the testnet.
The following table summarizes the results for v0.34.x, for the different experiments
(extracted from file [`v034_report_tabbed.txt`](./img/v034_report_tabbed.txt)).
The X axis of this table is `c`, the number of connections created by the load runner process to the target node.
The Y axis of this table is `r`, the rate or number of transactions issued per second.
| | c=1 | c=2 | c=4 |
| :--- | ----: | ----: | ----: |
| r=25 | 2225 | 4450 | 8900 |
| r=50 | 4450 | 8900 | 17800 |
| r=100 | 8900 | 17800 | 35600 |
| r=200 | 17800 | 35600 | 38660 |
The table shows the number of 1024-byte-long transactions that were produced by the load runner,
and processed by Tendermint, during the 90 seconds of the experiment's duration.
Each cell in the table refers to an experiment with a particular number of websocket connections (`c`)
to a chosen validator, and the number of transactions per second that the load runner
tries to produce (`r`). Note that the overall load that the tool attempts to generate is $c \cdot r$.
We can see that the saturation point is beyond the diagonal that spans cells
* `r=200,c=2`
* `r=100,c=4`
given that the total transactions should be close to the product of the rate, the number of connections,
and the experiment time (89 seconds, since the last batch never gets sent).
All experiments beyond the saturation diagonal (`r=200,c=4`) have in common that the total
number of transactions processed is noticeably less than the product $c \cdot r \cdot 89$,
which is the expected number of transactions when the system is able to deal well with the
load.
With `r=200,c=4`, we obtained 38660 whereas the theoretical number of transactions should
have been $200 \cdot 4 \cdot 89 = 71200$.
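As a quick sanity check, the expected totals for all cells can be reproduced with a small loop (a sketch):
```bash
# Expected number of transactions per cell: c * r * 89 (89 s of effective load).
for c in 1 2 4; do
  for r in 25 50 100 200; do
    echo "c=$c r=$r expected=$((c * r * 89))"
  done
done
```
Every cell on or inside the saturation diagonal matches its expected value exactly; only `r=200,c=4` falls short.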
At this point, we chose an experiment at the limit of the saturation diagonal,
in order to further study the performance of this release.
**The chosen experiment is `r=200,c=2`**.
This is a plot of the CPU load (average over 1 minute, as output by `top`) of the load runner for `r=200,c=2`,
where we can see that the load stays close to 0 most of the time.
![load-load-runner](./img/v034_r200c2_load-runner.png)
### Examining latencies
The method described [here](../method.md) allows us to plot the latencies of transactions
for all experiments.
![all-latencies](./img/v034_200node_latencies.png)
As we can see, even the experiments beyond the saturation diagonal managed to keep
transaction latency stable (i.e. not constantly increasing).
Our interpretation of this is that contention within Tendermint was propagated,
via the websockets, to the load runner,
hence the load runner could not produce the target load, but only a fraction of it.
Further examination of the Prometheus data (see below) showed that the mempool contained many transactions
at steady state, but whenever it grew beyond that level it quickly returned to the steady state. This demonstrates
that the transactions were processed by the Tendermint network at least as quickly as they
were submitted to the mempool. Finally, the test script made sure that, at the end of an experiment, the
mempool was empty so that all transactions submitted to the chain were processed.
Finally, the number of points present in the plot appears to be far smaller than expected given the
number of transactions in each experiment, particularly close to or above the saturation diagonal.
This is a visual effect of the plot: what appear to be single points are actually potentially huge
clusters of points. To corroborate this, we have zoomed into the plot above by setting (carefully chosen)
tiny axis intervals. The cluster shown below looks like a single point in the plot above.
![all-latencies-zoomed](./img/v034_200node_latencies_zoomed.png)
The plot of latencies can be used as a baseline to compare with other releases.
The following plot summarizes average latencies versus overall throughputs
across different numbers of WebSocket connections to the node into which
transactions are being loaded.
![latency-vs-throughput](./img/v034_latency_throughput.png)
### Prometheus Metrics on the Chosen Experiment
As mentioned [above](#finding-the-saturation-point), the chosen experiment is `r=200,c=2`.
This section further examines key metrics for this experiment extracted from Prometheus data.
#### Mempool Size
The mempool size, a count of the number of transactions in the mempool, was shown to be stable and homogeneous
at all full nodes. It did not exhibit any unconstrained growth.
The plot below shows the evolution over time of the cumulative number of transactions inside all full nodes' mempools
at a given time.
The two spikes that can be observed correspond to a period where consensus instances proceeded beyond the initial round
at some nodes.
![mempool-cumulative](./img/v034_r200c2_mempool_size.png)
The plot below shows evolution of the average over all full nodes, which oscillates between 1500 and 2000
outstanding transactions.
![mempool-avg](./img/v034_r200c2_mempool_size_avg.png)
The peaks observed coincide with the moments when some nodes proceeded beyond the initial round of consensus (see below).
#### Peers
The number of peers was stable at all nodes.
It was higher for the seed nodes (around 140) than for the rest (between 21 and 74).
The fact that non-seed nodes reach more than 50 peers is due to #9548.
![peers](./img/v034_r200c2_peers.png)
#### Consensus Rounds per Height
Most heights took just one round, but some nodes needed to advance to round 1 at some point.
![rounds](./img/v034_r200c2_rounds.png)
#### Blocks Produced per Minute, Transactions Processed per Minute
The blocks produced per minute are the slope of this plot.
![heights](./img/v034_r200c2_heights.png)
Over a period of 2 minutes, the height goes from 530 to 569.
This results in an average of 19.5 blocks produced per minute.
The transactions processed per minute are the slope of this plot.
![total-txs](./img/v034_r200c2_total-txs.png)
Over a period of 2 minutes, the total goes from 64525 to 100125 transactions,
resulting in 17800 transactions per minute. However, we can see in the plot that
all transactions in the load are processed long before the two minutes.
If we adjust the time window when transactions are processed (approx. 105 seconds),
we obtain 20343 transactions per minute.
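As a back-of-the-envelope check of the figures in this subsection (a sketch):
```bash
echo "(569 - 530) / 2" | bc -l              # 19.5 blocks/min over the 2 minute window
echo $(( (100125 - 64525) / 2 ))            # 17800 tx/min over the 2 minute window
echo "(100125 - 64525) * 60 / 105" | bc -l  # ~20343 tx/min over the ~105 s processing window
```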
#### Memory Resident Set Size
Resident Set Size of all monitored processes is plotted below.
![rss](./img/v034_r200c2_rss.png)
The average over all processes oscillates around 1.2 GiB and does not demonstrate unconstrained growth.
![rss-avg](./img/v034_r200c2_rss_avg.png)
#### CPU utilization
The best metric from Prometheus to gauge CPU utilization in a Unix machine is `load1`,
as it usually appears in the
[output of `top`](https://www.digitalocean.com/community/tutorials/load-average-in-linux).
![load1](./img/v034_r200c2_load1.png)
It is contained below 5 in most cases, which is generally considered acceptable load.
### Test Result
**Result: N/A** (v0.34.x is the baseline)
Date: 2022-10-14
Version: 3ec6e424d6ae4c96867c2dcf8310572156068bb6
## Rotating Node Testnet
For this testnet, we will use a load that can safely be considered below the saturation
point for the size of this testnet (between 13 and 38 full nodes): `c=4,r=800`.
N.B.: The version of Tendermint used for these tests is affected by #9539.
However, the reduced load that reaches the mempools is orthogonal to the functionality
we are focusing on here.
### Latencies
The plot of all latencies can be seen in the following plot.
![rotating-all-latencies](./img/v034_rotating_latencies.png)
We can observe some very high latencies towards the end of the test.
Suspecting that they were duplicate transactions, we examined the raw latencies
file and discovered more than 100K duplicate transactions.
The following plot shows the latencies file where all duplicate transactions have
been removed, i.e., only the first occurrence of a duplicate transaction is kept.
![rotating-all-latencies-uniq](./img/v034_rotating_latencies_uniq.png)
This problem, which is present in `v0.34.x`, will need to be addressed, perhaps in the same way
we addressed it when running the 200 node test with high loads: by increasing the `cache_size`
configuration parameter.
### Prometheus Metrics
The set of metrics shown here is smaller than for the 200 node experiment.
We are only interested in those on which the catch-up process (blocksync) may have an impact.
#### Blocks and Transactions per minute
Just as shown for the 200 node test, the blocks produced per minute are the gradient of this plot.
![rotating-heights](./img/v034_rotating_heights.png)
Over a period of 5229 seconds, the height goes from 2 to 3638.
This results in an average of 41 blocks produced per minute.
The following plot shows only the heights reported by ephemeral nodes
(which are also included in the plot above). Note that the _height_ metric
is only shown _once the node has switched to consensus_, hence the gaps
when nodes are killed, wiped out, started from scratch, and catching up.
![rotating-heights-ephe](./img/v034_rotating_heights_ephe.png)
The transactions processed per minute are the gradient of this plot.
![rotating-total-txs](./img/v034_rotating_total-txs.png)
The small lines we see periodically close to `y=0` are the transactions that
ephemeral nodes start processing when they are caught up.
Over a period of 5229 seconds, the total goes from 0 to 387697 transactions,
resulting in 4449 transactions per minute. We can see some abrupt changes in
the plot's gradient. This will need to be investigated.
#### Peers
The plot below shows the evolution in peers throughout the experiment.
The periodic changes observed are due to the ephemeral nodes being stopped,
wiped out, and recreated.
![rotating-peers](./img/v034_rotating_peers.png)
The validators' plots are concentrated at the higher part of the graph, whereas the ephemeral nodes
are mostly at the lower part.
#### Memory Resident Set Size
The average Resident Set Size (RSS) over all processes seems stable, and slightly growing toward the end.
This might be related to the increase in transaction load observed above.
![rotating-rss-avg](./img/v034_rotating_rss_avg.png)
The memory taken by the validators and the ephemeral nodes (when they are up) is comparable.
#### CPU utilization
The plot shows metric `load1` for all nodes.
![rotating-load1](./img/v034_rotating_load1.png)
It is contained under 5 most of the time, which is considered normal load.
The purple line, which follows a different pattern, is the validator receiving all
transactions, via RPC, from the load runner process.
### Test Result
**Result: N/A**
Date: 2022-10-10
Version: a28c987f5a604ff66b515dd415270063e6fb069d

(Binary image files added; contents not shown.)

View File

@@ -0,0 +1,52 @@
Experiment ID: 3d5cf4ef-1a1a-4b46-aa2d-da5643d2e81e │Experiment ID: 80e472ec-13a1-4772-a827-3b0c907fb51d │Experiment ID: 07aca6cf-c5a4-4696-988f-e3270fc6333b
│ │
Connections: 1 │ Connections: 2 │ Connections: 4
Rate: 25 │ Rate: 25 │ Rate: 25
Size: 1024 │ Size: 1024 │ Size: 1024
│ │
Total Valid Tx: 2225 │ Total Valid Tx: 4450 │ Total Valid Tx: 8900
Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0
Minimum Latency: 599.404362ms │ Minimum Latency: 448.145181ms │ Minimum Latency: 412.485729ms
Maximum Latency: 3.539686885s │ Maximum Latency: 3.237392049s │ Maximum Latency: 12.026665368s
Average Latency: 1.441485349s │ Average Latency: 1.441267946s │ Average Latency: 2.150192457s
Standard Deviation: 541.049869ms │ Standard Deviation: 525.040007ms │ Standard Deviation: 2.233852478s
│ │
Experiment ID: 953dc544-dd40-40e8-8712-20c34c3ce45e │Experiment ID: d31fc258-16e7-45cd-9dc8-13ab87bc0b0a │Experiment ID: 15d90a7e-b941-42f4-b411-2f15f857739e
│ │
Connections: 1 │ Connections: 2 │ Connections: 4
Rate: 50 │ Rate: 50 │ Rate: 50
Size: 1024 │ Size: 1024 │ Size: 1024
│ │
Total Valid Tx: 4450 │ Total Valid Tx: 8900 │ Total Valid Tx: 17800
Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0
Minimum Latency: 482.046942ms │ Minimum Latency: 435.458913ms │ Minimum Latency: 510.746448ms
Maximum Latency: 3.761483455s │ Maximum Latency: 7.175583584s │ Maximum Latency: 6.551497882s
Average Latency: 1.450408183s │ Average Latency: 1.681673116s │ Average Latency: 1.738083875s
Standard Deviation: 587.560056ms │ Standard Deviation: 1.147902047s │ Standard Deviation: 943.46522ms
│ │
Experiment ID: 9a0b9980-9ce6-4db5-a80a-65ca70294b87 │Experiment ID: df8fa4f4-80af-4ded-8a28-356d15018b43 │Experiment ID: d0e41c2c-89c0-4f38-8e34-ca07adae593a
│ │
Connections: 1 │ Connections: 2 │ Connections: 4
Rate: 100 │ Rate: 100 │ Rate: 100
Size: 1024 │ Size: 1024 │ Size: 1024
│ │
Total Valid Tx: 8900 │ Total Valid Tx: 17800 │ Total Valid Tx: 35600
Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0
Minimum Latency: 477.417219ms │ Minimum Latency: 564.29247ms │ Minimum Latency: 840.71089ms
Maximum Latency: 6.63744785s │ Maximum Latency: 6.988553219s │ Maximum Latency: 9.555312398s
Average Latency: 1.561216103s │ Average Latency: 1.76419063s │ Average Latency: 3.200941683s
Standard Deviation: 1.011333552s │ Standard Deviation: 1.068459423s │ Standard Deviation: 1.732346601s
│ │
Experiment ID: 493df3ee-4a36-4bce-80f8-6d65da66beda │Experiment ID: 13060525-f04f-46f6-8ade-286684b2fe50 │Experiment ID: 1777cbd2-8c96-42e4-9ec7-9b21f2225e4d
│ │
Connections: 1 │ Connections: 2 │ Connections: 4
Rate: 200 │ Rate: 200 │ Rate: 200
Size: 1024 │ Size: 1024 │ Size: 1024
│ │
Total Valid Tx: 17800 │ Total Valid Tx: 35600 │ Total Valid Tx: 38660
Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0
Minimum Latency: 493.705261ms │ Minimum Latency: 955.090573ms │ Minimum Latency: 1.9485821s
Maximum Latency: 7.440921872s │ Maximum Latency: 10.086673491s │ Maximum Latency: 17.73103976s
Average Latency: 1.875510582s │ Average Latency: 3.438130099s │ Average Latency: 8.143862237s
Standard Deviation: 1.304336995s │ Standard Deviation: 1.966391574s │ Standard Deviation: 3.943140002s

(Binary image files added; contents not shown.)

326
docs/qa/v037/README.md Normal file
View File

@@ -0,0 +1,326 @@
---
order: 1
parent:
title: Tendermint Quality Assurance Results for v0.37.x
description: This is a report on the results obtained when running v0.37.x on testnets
order: 2
---
# v0.37.x
## Issues discovered
During this iteration of the QA process, the following issues were found:
* (critical, fixed) [\#9533] - This bug caused full nodes to sometimes get stuck
when blocksyncing, requiring a manual restart to unblock them. Importantly,
this bug was also present in v0.34.x and the fix was also backported in
[\#9534].
* (critical, fixed) [\#9539] - `loadtime` is very likely to include more than
one "=" character in transactions, which is rejected by the e2e application.
* (critical, fixed) [\#9581] - An absent Prometheus label makes Tendermint crash
when Prometheus metric collection is enabled
* (non-critical, not fixed) [\#9548] - Full nodes can go over 50 connected
peers, which is not intended by the default configuration.
* (non-critical, not fixed) [\#9537] - With the default mempool cache setting,
duplicated transactions are not rejected when gossiped and eventually flood
all mempools. The 200 node testnets were thus run with a value of 200000 (as
opposed to the default 10000)
## 200 Node Testnet
### Finding the Saturation Point
The first goal is to identify the saturation point and compare it with the baseline (v0.34.x).
For further details, see [this paragraph](../v034/README.md#finding-the-saturation-point)
in the baseline version.
The following table summarizes the results for v0.37.x, for the different experiments
(extracted from file [`v037_report_tabbed.txt`](./img/v037_report_tabbed.txt)).
The X axis of this table is `c`, the number of connections created by the load runner process to the target node.
The Y axis of this table is `r`, the rate or number of transactions issued per second.
| | c=1 | c=2 | c=4 |
| :--- | ----: | ----: | ----: |
| r=25 | 2225 | 4450 | 8900 |
| r=50 | 4450 | 8900 | 17800 |
| r=100 | 8900 | 17800 | 35600 |
| r=200 | 17800 | 35600 | 38660 |
For comparison, this is the table with the baseline version.
| | c=1 | c=2 | c=4 |
| :--- | ----: | ----: | ----: |
| r=25 | 2225 | 4450 | 8900 |
| r=50 | 4450 | 8900 | 17800 |
| r=100 | 8900 | 17800 | 35400 |
| r=200 | 17800 | 35600 | 37358 |
The saturation point is beyond the diagonal:
* `r=200,c=2`
* `r=100,c=4`
which is at the same place as the baseline. For more details on the saturation point, see
[this paragraph](../v034/README.md#finding-the-saturation-point) in the baseline version.
The experiment chosen to examine Prometheus metrics is the same as in the baseline:
**`r=200,c=2`**.
The load runner's CPU load was negligible (near 0) when running `r=200,c=2`.
### Examining latencies
The method described [here](../method.md) allows us to plot the latencies of transactions
for all experiments.
![all-latencies](./img/v037_200node_latencies.png)
The data seen in the plot is similar to that of the baseline.
![all-latencies-bl](../v034/img/v034_200node_latencies.png)
Therefore, for further details on these plots,
see [this paragraph](../v034/README.md#examining-latencies) in the baseline version.
The following plot summarizes average latencies versus overall throughputs
across different numbers of WebSocket connections to the node into which
transactions are being loaded.
![latency-vs-throughput](./img/v037_latency_throughput.png)
This is similar to that of the baseline plot:
![latency-vs-throughput-bl](../v034/img/v034_latency_throughput.png)
### Prometheus Metrics on the Chosen Experiment
As mentioned [above](#finding-the-saturation-point), the chosen experiment is `r=200,c=2`.
This section further examines key metrics for this experiment extracted from Prometheus data.
#### Mempool Size
The mempool size, a count of the number of transactions in the mempool, was shown to be stable and homogeneous
at all full nodes. It did not exhibit any unconstrained growth.
The plot below shows the evolution over time of the cumulative number of transactions inside all full nodes' mempools
at a given time.
![mempool-cumulative](./img/v037_r200c2_mempool_size.png)
The plot below shows the evolution of the average over all full nodes, which oscillates between 1500 and 2000 outstanding transactions.
![mempool-avg](./img/v037_r200c2_mempool_size_avg.png)
The peaks observed coincide with the moments when some nodes reached round 1 of consensus (see below).
**These plots yield similar results to the baseline**:
![mempool-cumulative-bl](../v034/img/v034_r200c2_mempool_size.png)
![mempool-avg-bl](../v034/img/v034_r200c2_mempool_size_avg.png)
#### Peers
The number of peers was stable at all nodes.
It was higher for the seed nodes (around 140) than for the rest (between 16 and 78).
![peers](./img/v037_r200c2_peers.png)
Just as in the baseline, the fact that non-seed nodes reach more than 50 peers is due to #9548.
**This plot yields similar results to the baseline**:
![peers-bl](../v034/img/v034_r200c2_peers.png)
#### Consensus Rounds per Height
Most heights took just one round, but some nodes needed to advance to round 1 at some point.
![rounds](./img/v037_r200c2_rounds.png)
**This plot yields slightly better results than the baseline**:
![rounds-bl](../v034/img/v034_r200c2_rounds.png)
#### Blocks Produced per Minute, Transactions Processed per Minute
The blocks produced per minute are the gradient of this plot.
![heights](./img/v037_r200c2_heights.png)
Over a period of 2 minutes, the height goes from 477 to 524.
This results in an average of 23.5 blocks produced per minute.
The transactions processed per minute are the gradient of this plot.
![total-txs](./img/v037_r200c2_total-txs.png)
Over a period of 2 minutes, the total goes from 64525 to 100125 transactions,
resulting in 17800 transactions per minute. However, we can see in the plot that
all transactions in the load are processed long before the two minutes.
If we adjust the time window when transactions are processed (approx. 90 seconds),
we obtain 23733 transactions per minute.
**These plots yield similar results to the baseline**:
![heights-bl](../v034/img/v034_r200c2_heights.png)
![total-txs](../v034/img/v034_r200c2_total-txs.png)
#### Memory Resident Set Size
Resident Set Size of all monitored processes is plotted below.
![rss](./img/v037_r200c2_rss.png)
The average over all processes oscillates around 380 MiB and does not demonstrate unconstrained growth.
![rss-avg](./img/v037_r200c2_rss_avg.png)
**These plots yield similar results to the baseline**:
![rss-bl](../v034/img/v034_r200c2_rss.png)
![rss-avg-bl](../v034/img/v034_r200c2_rss_avg.png)
#### CPU utilization
The best metric from Prometheus to gauge CPU utilization in a Unix machine is `load1`,
as it usually appears in the
[output of `top`](https://www.digitalocean.com/community/tutorials/load-average-in-linux).
![load1](./img/v037_r200c2_load1.png)
It is contained below 5 on most nodes.
**This plot yields similar results to the baseline**:
![load1](../v034/img/v034_r200c2_load1.png)
### Test Result
**Result: PASS**
Date: 2022-10-14
Version: 1cf9d8e276afe8595cba960b51cd056514965fd1
## Rotating Node Testnet
We use the same load as in the baseline: `c=4,r=800`.
Just as in the baseline tests, the version of Tendermint used for these tests is affected by #9539.
See this paragraph in the [baseline report](../v034/README.md#rotating-node-testnet) for further details.
Finally, note that this setup allows for a fairer comparison between this version and the baseline.
### Latencies
The plot of all latencies can be seen here.
![rotating-all-latencies](./img/v037_rotating_latencies.png)
This is similar to the baseline.
![rotating-all-latencies-bl](../v034/img/v034_rotating_latencies_uniq.png)
Note that we are comparing against the baseline plot with _unique_
transactions. This is because the problem with duplicate transactions
detected during the baseline experiment did not show up for `v0.37`,
which is _not_ proof that the problem is not present in `v0.37`.
### Prometheus Metrics
The set of metrics shown here match those shown on the baseline (`v0.34`) for the same experiment.
We also show the baseline results for comparison.
#### Blocks and Transactions per minute
The blocks produced per minute are the gradient of this plot.
![rotating-heights](./img/v037_rotating_heights.png)
Over a period of 4446 seconds, the height goes from 5 to 3323.
This results in an average of 45 blocks produced per minute,
which is similar to the baseline, shown below.
![rotating-heights-bl](../v034/img/v034_rotating_heights.png)
The following two plots show only the heights reported by ephemeral nodes.
The second plot is the baseline plot for comparison.
![rotating-heights-ephe](./img/v037_rotating_heights_ephe.png)
![rotating-heights-ephe-bl](../v034/img/v034_rotating_heights_ephe.png)
By the length of the segments, we can see that ephemeral nodes in `v0.37`
catch up slightly faster.
The transactions processed per minute are the gradient of this plot.
![rotating-total-txs](./img/v037_rotating_total-txs.png)
Over a period of 3852 seconds, the total goes from 597 to 267298 transactions in one of the validators,
resulting in 4154 transactions per minute, which is slightly lower than the baseline,
although the baseline had to deal with duplicate transactions.
For comparison, this is the baseline plot.
![rotating-total-txs-bl](../v034/img/v034_rotating_total-txs.png)
#### Peers
The plot below shows the evolution of the number of peers throughout the experiment.
![rotating-peers](./img/v037_rotating_peers.png)
This is the baseline plot, for comparison.
![rotating-peers-bl](../v034/img/v034_rotating_peers.png)
The plotted values and their evolution are comparable in both plots.
For further details on these plots, see the baseline report.
#### Memory Resident Set Size
The average Resident Set Size (RSS) over all processes looks slightly more stable
on `v0.37` (first plot) than on the baseline (second plot).
![rotating-rss-avg](./img/v037_rotating_rss_avg.png)
![rotating-rss-avg-bl](../v034/img/v034_rotating_rss_avg.png)
The memory taken by the validators and the ephemeral nodes when they are up is comparable (not shown in the plots),
just as observed in the baseline.
#### CPU utilization
The plot shows metric `load1` for all nodes.
![rotating-load1](./img/v037_rotating_load1.png)
This is the baseline plot.
![rotating-load1-bl](../v034/img/v034_rotating_load1.png)
In both cases, it is contained under 5 most of the time, which is considered normal load.
The green line in the `v0.37` plot and the purple line in the baseline plot (`v0.34`)
correspond to the validators receiving all transactions, via RPC, from the load runner process.
In both cases, they oscillate around 5 (normal load). The main difference is that other
nodes are generally less loaded in `v0.37`.
### Test Result
**Result: PASS**
Date: 2022-10-10
Version: 155110007b9d8b83997a799016c1d0844c8efbaf
[\#9533]: https://github.com/tendermint/tendermint/pull/9533
[\#9534]: https://github.com/tendermint/tendermint/pull/9534
[\#9539]: https://github.com/tendermint/tendermint/issues/9539
[\#9548]: https://github.com/tendermint/tendermint/issues/9548
[\#9537]: https://github.com/tendermint/tendermint/issues/9537
[\#9581]: https://github.com/tendermint/tendermint/issues/9581

(Binary image files added; contents not shown.)

View File

@@ -0,0 +1,52 @@
Experiment ID: af129eae-7039-4c76-8c37-cff9ac636a84 │Experiment ID: 0f88bd33-9bf0-4197-8d1d-9a737c301ec6 │Experiment ID: 88227cad-2ba8-4eb6-b493-041d8120b46f
│ │
Connections: 1 │ Connections: 2 │ Connections: 4
Rate: 25 │ Rate: 25 │ Rate: 25
Size: 1024 │ Size: 1024 │ Size: 1024
│ │
Total Valid Tx: 2225 │ Total Valid Tx: 4450 │ Total Valid Tx: 8900
Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0
Minimum Latency: 506.248587ms │ Minimum Latency: 469.53452ms │ Minimum Latency: 588.900721ms
Maximum Latency: 3.032125789s │ Maximum Latency: 6.548830955s │ Maximum Latency: 6.533739843s
Average Latency: 1.427767726s │ Average Latency: 1.448582257s │ Average Latency: 1.717432341s
Standard Deviation: 524.11782ms │ Standard Deviation: 768.684133ms │ Standard Deviation: 1.000015768s
│ │
Experiment ID: f03d39bd-0233-4b3c-b461-543445ae1d4b │Experiment ID: 46674f1c-e591-4e36-bb9b-f375c19fc475 │Experiment ID: 5385c159-8d4d-455b-bced-dcd4a3209988
│ │
Connections: 1 │ Connections: 2 │ Connections: 4
Rate: 50 │ Rate: 50 │ Rate: 50
Size: 1024 │ Size: 1024 │ Size: 1024
│ │
Total Valid Tx: 4450 │ Total Valid Tx: 8900 │ Total Valid Tx: 17800
Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0
Minimum Latency: 477.46027ms │ Minimum Latency: 455.757111ms │ Minimum Latency: 594.749081ms
Maximum Latency: 2.483895394s │ Maximum Latency: 2.904715695s │ Maximum Latency: 9.294950389s
Average Latency: 1.407374662s │ Average Latency: 1.397385779s │ Average Latency: 2.621122536s
Standard Deviation: 505.150067ms │ Standard Deviation: 551.67603ms │ Standard Deviation: 1.772725794s
│ │
Experiment ID: 9161b4a7-d75c-455f-b82d-2b5235d533cf │Experiment ID: 993a13a8-9db1-4b2b-9c20-71a5b85e4bbf │Experiment ID: ad1eb9e1-f4d6-41fd-9ba7-0f1f7dde1e3e
│ │
Connections: 1 │ Connections: 2 │ Connections: 4
Rate: 100 │ Rate: 100 │ Rate: 100
Size: 1024 │ Size: 1024 │ Size: 1024
│ │
Total Valid Tx: 8900 │ Total Valid Tx: 17800 │ Total Valid Tx: 35400
Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0
Minimum Latency: 448.050467ms │ Minimum Latency: 605.436195ms │ Minimum Latency: 1.16816912s
Maximum Latency: 3.789711139s │ Maximum Latency: 7.292770222s │ Maximum Latency: 11.378681842s
Average Latency: 1.451342158s │ Average Latency: 2.07457999s │ Average Latency: 3.918384209s
Standard Deviation: 644.075973ms │ Standard Deviation: 1.230204022s │ Standard Deviation: 2.172400458s
│ │
Experiment ID: 3cbe9c3d-9c43-4c9f-b5ca-b567d20bbd57 │Experiment ID: af836c5e-d9b6-4d5d-971c-2fc7f07aa2a0 │Experiment ID: 77606397-4989-41d4-b13b-f1f4d1af063f
│ │
Connections: 1 │ Connections: 2 │ Connections: 4
Rate: 200 │ Rate: 200 │ Rate: 200
Size: 1024 │ Size: 1024 │ Size: 1024
│ │
Total Valid Tx: 17800 │ Total Valid Tx: 35600 │ Total Valid Tx: 37358
Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0
Minimum Latency: 519.984701ms │ Minimum Latency: 820.755087ms │ Minimum Latency: 1.712574804s
Maximum Latency: 12.609056712s │ Maximum Latency: 9.260798095s │ Maximum Latency: 25.739223696s
Average Latency: 2.717853101s │ Average Latency: 3.477731881s │ Average Latency: 8.547725264s
Standard Deviation: 2.390778155s │ Standard Deviation: 1.675000913s │ Standard Deviation: 4.76961569s

(Binary image files added; contents not shown.)

View File

@@ -18,50 +18,52 @@ Listen address can be changed in the config file (see
The following metrics are available:
| **Name** | **Type** | **Tags** | **Description** |
|----------------------------------------|-----------|-----------------|--------------------------------------------------------------------------------------------------------------------------------------------|
| abci_connection_method_timing_seconds | Histogram | method, type | Timings for each of the ABCI methods |
| consensus_height | Gauge | | Height of the chain |
| consensus_validators | Gauge | | Number of validators |
| consensus_validators_power | Gauge | | Total voting power of all validators |
| consensus_validator_power | Gauge | | Voting power of the node if in the validator set |
| consensus_validator_last_signed_height | Gauge | | Last height the node signed a block, if the node is a validator |
| consensus_validator_missed_blocks | Gauge | | Total amount of blocks missed for the node, if the node is a validator |
| consensus_missing_validators | Gauge | | Number of validators who did not sign |
| consensus_missing_validators_power | Gauge | | Total voting power of the missing validators |
| consensus_byzantine_validators | Gauge | | Number of validators who tried to double sign |
| consensus_byzantine_validators_power | Gauge | | Total voting power of the byzantine validators |
| consensus_block_interval_seconds | Histogram | | Time between this and last block (Block.Header.Time) in seconds |
| consensus_rounds | Gauge | | Number of rounds |
| consensus_num_txs | Gauge | | Number of transactions |
| consensus_total_txs | Gauge | | Total number of transactions committed |
| consensus_block_parts | counter | peer_id | number of blockparts transmitted by peer |
| consensus_latest_block_height | gauge | | /status sync_info number |
| consensus_block_syncing | gauge | | either 0 (not block syncing) or 1 (syncing) |
| consensus_state_syncing | gauge | | either 0 (not state syncing) or 1 (syncing) |
| consensus_block_size_bytes | Gauge | | Block size in bytes |
| consensus_step_duration | Histogram | step | Histogram of durations for each step in the consensus protocol |
| consensus_round_duration | Histogram | | Histogram of durations for all the rounds that have occurred since the process started |
| consensus_block_gossip_parts_received | Counter | matches_current | Number of block parts received by the node |
| consensus_quorum_prevote_delay | Gauge | | Interval in seconds between the proposal timestamp and the timestamp of the earliest prevote that achieved a quorum |
| consensus_full_prevote_delay | Gauge | | Interval in seconds between the proposal timestamp and the timestamp of the latest prevote in a round where all validators voted |
| consensus_proposal_receive_count | Counter | status | Total number of proposals received by the node since process start |
| consensus_proposal_create_count | Counter | | Total number of proposals created by the node since process start |
| consensus_round_voting_power_percent | Gauge | vote_type | A value between 0 and 1.0 representing the percentage of the total voting power per vote type received within a round |
| consensus_late_votes | Counter | vote_type | Number of votes received by the node since process start that correspond to earlier heights and rounds than this node is currently in. |
| p2p_peers | Gauge | | Number of peers node's connected to |
| p2p_peer_receive_bytes_total | counter | peer_id, chID | number of bytes per channel received from a given peer |
| p2p_peer_send_bytes_total | counter | peer_id, chID | number of bytes per channel sent to a given peer |
| p2p_peer_pending_send_bytes | gauge | peer_id | number of pending bytes to be sent to a given peer |
| p2p_num_txs | gauge | peer_id | number of transactions submitted by each peer_id |
| p2p_pending_send_bytes | gauge | peer_id | amount of data pending to be sent to peer |
| mempool_size | Gauge | | Number of uncommitted transactions |
| mempool_tx_size_bytes | histogram | | transaction sizes in bytes |
| mempool_failed_txs | counter | | number of failed transactions |
| mempool_recheck_times | counter | | number of transactions rechecked in the mempool |
| state_block_processing_time | histogram | | time between BeginBlock and EndBlock in ms |
| state_consensus_param_updates | Counter | | number of consensus parameter updates returned by the application since process start |
| state_validator_set_updates | Counter | | number of validator set updates returned by the application since process start |
| **Name** | **Type** | **Tags** | **Description** |
|------------------------------------------|-----------|-------------------|--------------------------------------------------------------------------------------------------------------------------------------------|
| `abci_connection_method_timing_seconds` | Histogram | `method`, `type` | Timings for each of the ABCI methods |
| `consensus_height` | Gauge | | Height of the chain |
| `consensus_validators` | Gauge | | Number of validators |
| `consensus_validators_power` | Gauge | | Total voting power of all validators |
| `consensus_validator_power` | Gauge | | Voting power of the node if in the validator set |
| `consensus_validator_last_signed_height` | Gauge | | Last height the node signed a block, if the node is a validator |
| `consensus_validator_missed_blocks` | Gauge | | Total amount of blocks missed for the node, if the node is a validator |
| `consensus_missing_validators` | Gauge | | Number of validators who did not sign |
| `consensus_missing_validators_power` | Gauge | | Total voting power of the missing validators |
| `consensus_byzantine_validators` | Gauge | | Number of validators who tried to double sign |
| `consensus_byzantine_validators_power` | Gauge | | Total voting power of the byzantine validators |
| `consensus_block_interval_seconds` | Histogram | | Time between this and last block (Block.Header.Time) in seconds |
| `consensus_rounds` | Gauge | | Number of rounds |
| `consensus_num_txs` | Gauge | | Number of transactions |
| `consensus_total_txs` | Gauge | | Total number of transactions committed |
| `consensus_block_parts` | Counter | `peer_id` | Number of blockparts transmitted by peer |
| `consensus_latest_block_height` | Gauge | | /status sync\_info number |
| `consensus_block_syncing` | Gauge | | Either 0 (not block syncing) or 1 (syncing) |
| `consensus_state_syncing` | Gauge | | Either 0 (not state syncing) or 1 (syncing) |
| `consensus_block_size_bytes` | Gauge | | Block size in bytes |
| `consensus_step_duration` | Histogram | `step` | Histogram of durations for each step in the consensus protocol |
| `consensus_round_duration` | Histogram | | Histogram of durations for all the rounds that have occurred since the process started |
| `consensus_block_gossip_parts_received` | Counter | `matches_current` | Number of block parts received by the node |
| `consensus_quorum_prevote_delay` | Gauge | | Interval in seconds between the proposal timestamp and the timestamp of the earliest prevote that achieved a quorum |
| `consensus_full_prevote_delay` | Gauge | | Interval in seconds between the proposal timestamp and the timestamp of the latest prevote in a round where all validators voted |
| `consensus_proposal_receive_count` | Counter | `status` | Total number of proposals received by the node since process start |
| `consensus_proposal_create_count` | Counter | | Total number of proposals created by the node since process start |
| `consensus_round_voting_power_percent` | Gauge | `vote_type` | A value between 0 and 1.0 representing the percentage of the total voting power per vote type received within a round |
| `consensus_late_votes` | Counter | `vote_type` | Number of votes received by the node since process start that correspond to earlier heights and rounds than this node is currently in. |
| `p2p_message_send_bytes_total` | Counter | `message_type` | Number of bytes sent to all peers per message type |
| `p2p_message_receive_bytes_total` | Counter | `message_type` | Number of bytes received from all peers per message type |
| `p2p_peers` | Gauge | | Number of peers node's connected to |
| `p2p_peer_receive_bytes_total` | Counter | `peer_id`, `chID` | Number of bytes per channel received from a given peer |
| `p2p_peer_send_bytes_total` | Counter | `peer_id`, `chID` | Number of bytes per channel sent to a given peer |
| `p2p_peer_pending_send_bytes` | Gauge | `peer_id` | Number of pending bytes to be sent to a given peer |
| `p2p_num_txs` | Gauge | `peer_id` | Number of transactions submitted by each peer\_id |
| `p2p_pending_send_bytes` | Gauge | `peer_id` | Amount of data pending to be sent to peer |
| `mempool_size` | Gauge | | Number of uncommitted transactions |
| `mempool_tx_size_bytes` | Histogram | | Transaction sizes in bytes |
| `mempool_failed_txs` | Counter | | Number of failed transactions |
| `mempool_recheck_times` | Counter | | Number of transactions rechecked in the mempool |
| `state_block_processing_time` | Histogram | | Time between BeginBlock and EndBlock in ms |
| `state_consensus_param_updates` | Counter | | Number of consensus parameter updates returned by the application since process start |
| `state_validator_set_updates` | Counter | | Number of validator set updates returned by the application since process start |
## Useful queries
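As a rough illustration of how the counters in the table above can be queried (a sketch, assuming the default `tendermint` metrics namespace and a Prometheus server at `localhost:9090`):
```bash
# Per-message-type P2P send rate over the last minute, via the Prometheus HTTP API.
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum by (message_type) (rate(tendermint_p2p_message_send_bytes_total[1m]))'
```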

View File

@@ -4,6 +4,7 @@ import (
"fmt"
"time"
"github.com/cosmos/gogoproto/proto"
clist "github.com/tendermint/tendermint/libs/clist"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/p2p"
@@ -55,6 +56,7 @@ func (evR *Reactor) GetChannels() []*p2p.ChannelDescriptor {
ID: EvidenceChannel,
Priority: 6,
RecvMessageCapacity: maxMsgSize,
MessageType: &tmproto.EvidenceList{},
},
}
}
@@ -66,11 +68,11 @@ func (evR *Reactor) AddPeer(peer p2p.Peer) {
// Receive implements Reactor.
// It adds any received evidence to the evpool.
func (evR *Reactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
evis, err := decodeMsg(msgBytes)
func (evR *Reactor) Receive(e p2p.Envelope) {
evis, err := evidenceListFromProto(e.Message)
if err != nil {
evR.Logger.Error("Error decoding message", "src", src, "chId", chID, "err", err)
evR.Switch.StopPeerForError(src, err)
evR.Logger.Error("Error decoding message", "src", e.Src, "chId", e.ChannelID, "err", err)
evR.Switch.StopPeerForError(e.Src, err)
return
}
@@ -80,7 +82,7 @@ func (evR *Reactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
case *types.ErrInvalidEvidence:
evR.Logger.Error(err.Error())
// punish peer
evR.Switch.StopPeerForError(src, err)
evR.Switch.StopPeerForError(e.Src, err)
return
case nil:
default:
@@ -126,11 +128,15 @@ func (evR *Reactor) broadcastEvidenceRoutine(peer p2p.Peer) {
evis := evR.prepareEvidenceMessage(peer, ev)
if len(evis) > 0 {
evR.Logger.Debug("Gossiping evidence to peer", "ev", ev, "peer", peer)
msgBytes, err := encodeMsg(evis)
evp, err := evidenceListToProto(evis)
if err != nil {
panic(err)
}
success := peer.Send(EvidenceChannel, msgBytes)
success := peer.Send(p2p.Envelope{
ChannelID: EvidenceChannel,
Message: evp,
})
if !success {
time.Sleep(peerRetryMessageIntervalMS * time.Millisecond)
continue
@@ -210,7 +216,7 @@ type PeerState interface {
// encodemsg takes a array of evidence
// returns the byte encoding of the List Message
func encodeMsg(evis []types.Evidence) ([]byte, error) {
func evidenceListToProto(evis []types.Evidence) (*tmproto.EvidenceList, error) {
evi := make([]tmproto.Evidence, len(evis))
for i := 0; i < len(evis); i++ {
ev, err := types.EvidenceToProto(evis[i])
@@ -222,19 +228,13 @@ func encodeMsg(evis []types.Evidence) ([]byte, error) {
epl := tmproto.EvidenceList{
Evidence: evi,
}
return epl.Marshal()
return &epl, nil
}
// decodemsg takes an array of bytes
// returns an array of evidence
func decodeMsg(bz []byte) (evis []types.Evidence, err error) {
lm := tmproto.EvidenceList{}
if err := lm.Unmarshal(bz); err != nil {
return nil, err
}
func evidenceListFromProto(m proto.Message) ([]types.Evidence, error) {
lm := m.(*tmproto.EvidenceList)
evis = make([]types.Evidence, len(lm.Evidence))
evis := make([]types.Evidence, len(lm.Evidence))
for i := 0; i < len(lm.Evidence); i++ {
ev, err := types.EvidenceFromProto(&lm.Evidence[i])
if err != nil {

View File

@@ -208,7 +208,10 @@ func TestReactorBroadcastEvidenceMemoryLeak(t *testing.T) {
// i.e. broadcastEvidenceRoutine finishes when peer is stopped
defer leaktest.CheckTimeout(t, 10*time.Second)()
p.On("Send", evidence.EvidenceChannel, mock.AnythingOfType("[]uint8")).Return(false)
p.On("Send", mock.MatchedBy(func(i interface{}) bool {
e, ok := i.(p2p.Envelope)
return ok && e.ChannelID == evidence.EvidenceChannel
})).Return(false)
quitChan := make(<-chan struct{})
p.On("Quit").Return(quitChan)
ps := peerState{2}

118
go.mod
View File

@@ -3,7 +3,7 @@ module github.com/tendermint/tendermint
go 1.18
require (
github.com/BurntSushi/toml v1.2.0
github.com/BurntSushi/toml v1.2.1
github.com/adlio/schema v1.3.3
github.com/cenkalti/backoff v2.2.1+incompatible // indirect
github.com/fortytw2/leaktest v1.3.0
@@ -11,7 +11,7 @@ require (
github.com/go-kit/log v0.2.1
github.com/go-logfmt/logfmt v0.5.1
github.com/golang/protobuf v1.5.2
github.com/golangci/golangci-lint v1.49.0
github.com/golangci/golangci-lint v1.50.1
github.com/google/orderedcode v0.0.1
github.com/gorilla/websocket v1.5.0
github.com/informalsystems/tm-load-test v1.0.0
@@ -21,42 +21,43 @@ require (
github.com/ory/dockertest v3.3.5+incompatible
github.com/pkg/errors v0.9.1
github.com/pointlander/peg v1.0.1
github.com/prometheus/client_golang v1.13.0
github.com/prometheus/client_model v0.2.0
github.com/prometheus/client_golang v1.13.1
github.com/prometheus/client_model v0.3.0
github.com/prometheus/common v0.37.0
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475
github.com/rs/cors v1.8.2
github.com/sasha-s/go-deadlock v0.3.1
github.com/snikch/goodman v0.0.0-20171125024755-10e37e294daa
github.com/spf13/cobra v1.5.0
github.com/spf13/viper v1.13.0
github.com/stretchr/testify v1.8.0
github.com/spf13/cobra v1.6.1
github.com/spf13/viper v1.14.0
github.com/stretchr/testify v1.8.1
github.com/tendermint/tm-db v0.6.6
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa
golang.org/x/net v0.0.0-20220812174116-3211cb980234
google.golang.org/grpc v1.50.0
golang.org/x/crypto v0.1.0
golang.org/x/net v0.1.0
google.golang.org/grpc v1.50.1
)
require (
github.com/bufbuild/buf v1.8.0
github.com/bufbuild/buf v1.9.0
github.com/creachadair/taskgroup v0.3.2
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7
)
require (
github.com/btcsuite/btcd/btcec/v2 v2.2.1
github.com/btcsuite/btcd/btcec/v2 v2.3.0
github.com/btcsuite/btcd/btcutil v1.1.2
github.com/cosmos/gogoproto v1.4.2
github.com/gofrs/uuid v4.3.0+incompatible
github.com/gofrs/uuid v4.3.1+incompatible
github.com/google/uuid v1.3.0
github.com/oasisprotocol/curve25519-voi v0.0.0-20220708102147-0a8a51822cae
github.com/vektra/mockery/v2 v2.14.0
github.com/vektra/mockery/v2 v2.14.1
gonum.org/v1/gonum v0.12.0
google.golang.org/protobuf v1.28.1
google.golang.org/protobuf v1.28.2-0.20220831092852-f930b1dc76e8
)
require (
4d63.com/gochecknoglobals v0.1.0 // indirect
github.com/Abirdcfly/dupword v0.0.7 // indirect
github.com/Antonboom/errname v0.1.7 // indirect
github.com/Antonboom/nilnil v0.1.1 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
@@ -64,9 +65,9 @@ require (
github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 // indirect
github.com/GaijinEntertainment/go-exhaustruct/v2 v2.3.0 // indirect
github.com/Masterminds/semver v1.5.0 // indirect
github.com/Microsoft/go-winio v0.5.2 // indirect
github.com/Microsoft/go-winio v0.6.0 // indirect
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5 // indirect
github.com/OpenPeeDeeP/depguard v1.1.0 // indirect
github.com/OpenPeeDeeP/depguard v1.1.1 // indirect
github.com/alexkohler/prealloc v1.0.0 // indirect
github.com/alingse/asasalint v0.0.11 // indirect
github.com/ashanbrown/forbidigo v1.3.0 // indirect
@@ -77,7 +78,8 @@ require (
github.com/bombsimon/wsl/v3 v3.3.0 // indirect
github.com/breml/bidichk v0.2.3 // indirect
github.com/breml/errchkjson v0.3.0 // indirect
github.com/bufbuild/connect-go v0.4.0 // indirect
github.com/bufbuild/connect-go v1.0.0 // indirect
github.com/bufbuild/protocompile v0.1.0 // indirect
github.com/butuzov/ireturn v0.1.1 // indirect
github.com/cespare/xxhash v1.1.0 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
@@ -87,8 +89,8 @@ require (
github.com/containerd/continuity v0.3.0 // indirect
github.com/containerd/typeurl v1.0.2 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/curioswitch/go-reassign v0.1.2 // indirect
github.com/daixiang0/gci v0.6.3 // indirect
github.com/curioswitch/go-reassign v0.2.0 // indirect
github.com/daixiang0/gci v0.8.1 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1 // indirect
github.com/denis-tingaikin/go-header v0.4.3 // indirect
@@ -96,24 +98,24 @@ require (
github.com/dgraph-io/ristretto v0.0.3-0.20200630154024-f66de99634de // indirect
github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2 // indirect
github.com/docker/distribution v2.8.1+incompatible // indirect
github.com/docker/docker v20.10.17+incompatible // indirect
github.com/docker/docker v20.10.19+incompatible // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/dustin/go-humanize v1.0.0 // indirect
github.com/esimonov/ifshort v1.0.4 // indirect
github.com/ettle/strcase v0.1.1 // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/fatih/structtag v1.2.0 // indirect
github.com/firefart/nonamedreturns v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.5.4 // indirect
github.com/fsnotify/fsnotify v1.6.0 // indirect
github.com/fzipp/gocyclo v0.6.0 // indirect
github.com/go-chi/chi/v5 v5.0.7 // indirect
github.com/go-critic/go-critic v0.6.4 // indirect
github.com/go-critic/go-critic v0.6.5 // indirect
github.com/go-logr/logr v1.2.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-toolsmith/astcast v1.0.0 // indirect
github.com/go-toolsmith/astcopy v1.0.1 // indirect
github.com/go-toolsmith/astequal v1.0.2 // indirect
github.com/go-toolsmith/astcopy v1.0.2 // indirect
github.com/go-toolsmith/astequal v1.0.3 // indirect
github.com/go-toolsmith/astfmt v1.0.0 // indirect
github.com/go-toolsmith/astp v1.0.0 // indirect
github.com/go-toolsmith/strparse v1.0.0 // indirect
@@ -127,14 +129,14 @@ require (
github.com/golangci/check v0.0.0-20180506172741-cfe4005ccda2 // indirect
github.com/golangci/dupl v0.0.0-20180902072040-3e9179ac440a // indirect
github.com/golangci/go-misc v0.0.0-20220329215616-d24fe342adfe // indirect
github.com/golangci/gofmt v0.0.0-20190930125516-244bba706f1a // indirect
github.com/golangci/gofmt v0.0.0-20220901101216-f2edd75033f2 // indirect
github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0 // indirect
github.com/golangci/maligned v0.0.0-20180506175553-b1d89398deca // indirect
github.com/golangci/misspell v0.3.5 // indirect
github.com/golangci/revgrep v0.0.0-20220804021717-745bb2f7c2e6 // indirect
github.com/golangci/unconvert v0.0.0-20180507085042-28b1c447d1f4 // indirect
github.com/google/btree v1.0.0 // indirect
github.com/google/go-cmp v0.5.8 // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/gordonklaus/ineffassign v0.0.0-20210914165742-4cc7213b9bc8 // indirect
github.com/gostaticanalysis/analysisutil v0.7.1 // indirect
github.com/gostaticanalysis/comment v1.4.2 // indirect
@@ -149,15 +151,14 @@ require (
github.com/inconshreveable/mousetrap v1.0.1 // indirect
github.com/jdxcode/netrc v0.0.0-20210204082910-926c7f70242a // indirect
github.com/jgautheron/goconst v1.5.1 // indirect
github.com/jhump/protocompile v0.0.0-20220812162104-d108583e055d // indirect
github.com/jhump/protoreflect v1.12.1-0.20220721211354-060cc04fc18b // indirect
github.com/jingyugao/rowserrcheck v1.1.1 // indirect
github.com/jirfag/go-printf-func-name v0.0.0-20200119135958-7558a9eaa5af // indirect
github.com/jmhodges/levigo v1.0.0 // indirect
github.com/julz/importas v0.1.0 // indirect
github.com/kisielk/errcheck v1.6.2 // indirect
github.com/kisielk/gotool v1.0.0 // indirect
github.com/klauspost/compress v1.15.9 // indirect
github.com/kkHAIKE/contextcheck v1.1.3 // indirect
github.com/klauspost/compress v1.15.11 // indirect
github.com/klauspost/pgzip v1.2.5 // indirect
github.com/kulti/thelper v0.6.3 // indirect
github.com/kunwardeep/paralleltest v1.0.6 // indirect
@@ -167,6 +168,7 @@ require (
github.com/leonklingele/grouper v1.1.0 // indirect
github.com/lufeee/execinquery v1.2.1 // indirect
github.com/magiconair/properties v1.8.6 // indirect
github.com/maratori/testableexamples v1.0.0 // indirect
github.com/maratori/testpackage v1.1.0 // indirect
github.com/matoous/godox v0.0.0-20210227103229-6504466cf951 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
@@ -174,20 +176,20 @@ require (
github.com/mattn/go-runewidth v0.0.9 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
github.com/mbilski/exhaustivestruct v1.2.0 // indirect
github.com/mgechev/revive v1.2.3 // indirect
github.com/mgechev/revive v1.2.4 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/moby/buildkit v0.10.3 // indirect
github.com/moby/buildkit v0.10.4 // indirect
github.com/moby/term v0.0.0-20220808134915-39b0c02b01ae // indirect
github.com/moricho/tparallel v0.2.1 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/nakabonne/nestif v0.3.1 // indirect
github.com/nbutton23/zxcvbn-go v0.0.0-20210217022336-fa2cb2858354 // indirect
github.com/nishanths/exhaustive v0.8.1 // indirect
github.com/nishanths/exhaustive v0.8.3 // indirect
github.com/nishanths/predeclared v0.2.2 // indirect
github.com/olekukonko/tablewriter v0.0.5 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 // indirect
github.com/opencontainers/image-spec v1.1.0-rc2 // indirect
github.com/opencontainers/runc v1.1.3 // indirect
github.com/pelletier/go-toml v1.9.5 // indirect
github.com/pelletier/go-toml/v2 v2.0.5 // indirect
@@ -198,10 +200,10 @@ require (
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/pointlander/compress v1.1.1-0.20190518213731-ff44bd196cc3 // indirect
github.com/pointlander/jetset v1.0.1-0.20190518214125-eee7eff80bd4 // indirect
github.com/polyfloyd/go-errorlint v1.0.2 // indirect
github.com/polyfloyd/go-errorlint v1.0.5 // indirect
github.com/prometheus/procfs v0.8.0 // indirect
github.com/quasilyte/go-ruleguard v0.3.17 // indirect
github.com/quasilyte/gogrep v0.0.0-20220120141003-628d8b3623b5 // indirect
github.com/quasilyte/go-ruleguard v0.3.18 // indirect
github.com/quasilyte/gogrep v0.0.0-20220828223005-86e4605de09f // indirect
github.com/quasilyte/regex/syntax v0.0.0-20200407221936-30656e2c4a95 // indirect
github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 // indirect
github.com/rs/zerolog v1.27.0 // indirect
@@ -210,7 +212,7 @@ require (
github.com/ryanrolds/sqlclosecheck v0.3.0 // indirect
github.com/sanposhiho/wastedassign/v2 v2.0.6 // indirect
github.com/sashamelentyev/interfacebloat v1.1.0 // indirect
github.com/sashamelentyev/usestdlibvars v1.13.0 // indirect
github.com/sashamelentyev/usestdlibvars v1.20.0 // indirect
github.com/satori/go.uuid v1.2.0 // indirect
github.com/securego/gosec/v2 v2.13.1 // indirect
github.com/shazow/go-diff v0.0.0-20160112020656-b6b7b6733b8c // indirect
@@ -220,22 +222,21 @@ require (
github.com/sivchari/tenv v1.7.0 // indirect
github.com/sonatard/noctx v0.0.1 // indirect
github.com/sourcegraph/go-diff v0.6.1 // indirect
github.com/spf13/afero v1.8.2 // indirect
github.com/spf13/afero v1.9.2 // indirect
github.com/spf13/cast v1.5.0 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/ssgreg/nlreturn/v2 v2.2.1 // indirect
github.com/stbenjam/no-sprintf-host-port v0.1.1 // indirect
github.com/stretchr/objx v0.4.0 // indirect
github.com/stretchr/objx v0.5.0 // indirect
github.com/subosito/gotenv v1.4.1 // indirect
github.com/sylvia7788/contextcheck v1.0.6 // indirect
github.com/tdakkota/asciicheck v0.1.1 // indirect
github.com/tecbot/gorocksdb v0.0.0-20191217155057-f0fad39f321c // indirect
github.com/tetafro/godot v1.4.11 // indirect
github.com/timakin/bodyclose v0.0.0-20210704033933-f49887972144 // indirect
github.com/timonwong/logrlint v0.1.0 // indirect
github.com/tomarrell/wrapcheck/v2 v2.6.2 // indirect
github.com/tommy-muehle/go-mnd/v2 v2.5.0 // indirect
github.com/timonwong/loggercheck v0.9.3 // indirect
github.com/tomarrell/wrapcheck/v2 v2.7.0 // indirect
github.com/tommy-muehle/go-mnd/v2 v2.5.1 // indirect
github.com/ultraware/funlen v0.0.3 // indirect
github.com/ultraware/whitespace v0.0.5 // indirect
github.com/uudashr/gocognit v1.0.6 // indirect
@@ -244,26 +245,27 @@ require (
gitlab.com/bosi/decorder v0.2.3 // indirect
go.etcd.io/bbolt v1.3.6 // indirect
go.opencensus.io v0.23.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.34.0 // indirect
go.opentelemetry.io/otel v1.9.0 // indirect
go.opentelemetry.io/otel/trace v1.9.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.36.3 // indirect
go.opentelemetry.io/otel v1.11.0 // indirect
go.opentelemetry.io/otel/metric v0.32.3 // indirect
go.opentelemetry.io/otel/trace v1.11.0 // indirect
go.uber.org/atomic v1.10.0 // indirect
go.uber.org/multierr v1.8.0 // indirect
go.uber.org/zap v1.22.0 // indirect
go.uber.org/zap v1.23.0 // indirect
golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e // indirect
golang.org/x/exp/typeparams v0.0.0-20220613132600-b0d781184e0d // indirect
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 // indirect
golang.org/x/sync v0.0.0-20220819030929-7fc1605a5dde // indirect
golang.org/x/sys v0.0.0-20220818161305-2296e01440c6 // indirect
golang.org/x/term v0.0.0-20220722155259-a9ba230a4035 // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/tools v0.1.12 // indirect
google.golang.org/genproto v0.0.0-20220725144611-272f38e5d71b // indirect
golang.org/x/exp/typeparams v0.0.0-20220827204233-334a2380cb91 // indirect
golang.org/x/mod v0.6.0 // indirect
golang.org/x/sync v0.1.0 // indirect
golang.org/x/sys v0.1.0 // indirect
golang.org/x/term v0.1.0 // indirect
golang.org/x/text v0.4.0 // indirect
golang.org/x/tools v0.2.0 // indirect
google.golang.org/genproto v0.0.0-20221024183307-1bc688fe9f3e // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
honnef.co/go/tools v0.3.3 // indirect
mvdan.cc/gofumpt v0.3.1 // indirect
mvdan.cc/gofumpt v0.4.0 // indirect
mvdan.cc/interfacer v0.0.0-20180901003855-c20040233aed // indirect
mvdan.cc/lint v0.0.0-20170908181259-adc824a0674b // indirect
mvdan.cc/unparam v0.0.0-20220706161116-678bad134442 // indirect

go.sum

@@ -23,14 +23,15 @@ cloud.google.com/go v0.75.0/go.mod h1:VGuuCn7PG0dwsd5XPVm2Mm3wlh3EL55/79EKB6hlPT
cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg=
cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8=
cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0=
cloud.google.com/go v0.100.2 h1:t9Iw5QH5v4XtlEQaCtUY7x6sCABps8sW0acw7e2WQ6Y=
cloud.google.com/go v0.104.0 h1:gSmWO7DY1vOm0MVU6DNXM11BWHHsTUmsC5cv1fuW5X8=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
cloud.google.com/go/compute v1.6.1 h1:2sMmt8prCn7DPaG4Pmh0N3Inmc8cT8ae5k1M6VJ9Wqc=
cloud.google.com/go/compute v1.12.1 h1:gKVJMEyqV5c/UnpzjjQbo3Rjvvqpr9B1DFSbJC4OXr0=
cloud.google.com/go/compute/metadata v0.2.1 h1:efOwf5ymceDhK6PKMnnrTHP4pppY5L22mle96M1yP48=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk=
@@ -45,6 +46,8 @@ cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RX
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
cloud.google.com/go/storage v1.14.0/go.mod h1:GrKmX003DSIwi9o29oFT7YDnHYwZoctc3fOKtUw0Xmo=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Abirdcfly/dupword v0.0.7 h1:z14n0yytA3wNO2gpCD/jVtp/acEXPGmYu0esewpBt6Q=
github.com/Abirdcfly/dupword v0.0.7/go.mod h1:K/4M1kj+Zh39d2aotRwypvasonOyAMH1c/IZJzE0dmk=
github.com/Antonboom/errname v0.1.7 h1:mBBDKvEYwPl4WFFNwec1CZO096G6vzK9vvDQzAwkako=
github.com/Antonboom/errname v0.1.7/go.mod h1:g0ONh16msHIPgJSGsecu1G/dcF2hlYR/0SddnIAGavU=
github.com/Antonboom/nilnil v0.1.1 h1:PHhrh5ANKFWRBh7TdYmyyq2gyT2lotnvFvvFbylF81Q=
@@ -53,8 +56,8 @@ github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v1.2.0 h1:Rt8g24XnyGTyglgET/PRUNlrUeu9F5L+7FilkXfZgs0=
github.com/BurntSushi/toml v1.2.0/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/BurntSushi/toml v1.2.1 h1:9F2/+DoOYIOksmaJFPw1tGFy1eDnIJXg+UHjuD8lTak=
github.com/BurntSushi/toml v1.2.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/ChainSafe/go-schnorrkel v0.0.0-20200405005733-88cbf1b4c40d/go.mod h1:URdX5+vg25ts3aCh8H5IFZybJYKWhJHYMTnf+ULtoC4=
github.com/DATA-DOG/go-sqlmock v1.5.0 h1:Shsta01QNfFxHCfpW6YH2STWB0MudeXXEWMr20OEh60=
@@ -72,14 +75,14 @@ github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3Q
github.com/Masterminds/semver v1.5.0/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y=
github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
github.com/Microsoft/go-winio v0.5.0/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.5.2 h1:a9IhgEQBCUEk6QCdml9CiJGhAws+YwffDHEMp1VMrpA=
github.com/Microsoft/go-winio v0.5.2/go.mod h1:WpS1mjBmmwHBEWmogvA2mj8546UReBk4v8QkMxJ6pZY=
github.com/Microsoft/go-winio v0.6.0 h1:slsWYD/zyx7lCXoZVlvQrj0hPTM1HI4+v1sIda2yDvg=
github.com/Microsoft/go-winio v0.6.0/go.mod h1:cTAf44im0RAYeL23bpB+fzCyDH2MJiz2BO69KH/soAE=
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5 h1:TngWCqHvy9oXAN6lEVMRuU21PR1EtLVZJmdB18Gu3Rw=
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5/go.mod h1:lmUJ/7eu/Q8D7ML55dXQrVaamCz2vxCfdQBasLZfHKk=
github.com/OneOfOne/xxhash v1.2.2 h1:KMrpdQIwFcEqXDklaen+P1axHaj9BSKzvpUUfnHldSE=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/OpenPeeDeeP/depguard v1.1.0 h1:pjK9nLPS1FwQYGGpPxoMYpe7qACHOhAWQMQzV71i49o=
github.com/OpenPeeDeeP/depguard v1.1.0/go.mod h1:JtAMzWkmFEzDPyAd+W0NHl1lvpQKTvT9jnRVsohBKpc=
github.com/OpenPeeDeeP/depguard v1.1.1 h1:TSUznLjvp/4IUP+OQ0t/4jF4QUyxIcVX8YnghZdunyA=
github.com/OpenPeeDeeP/depguard v1.1.1/go.mod h1:JtAMzWkmFEzDPyAd+W0NHl1lvpQKTvT9jnRVsohBKpc=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
github.com/VividCortex/gohistogram v1.0.0 h1:6+hBz+qvs0JOrrNhhmR7lFxo5sINxBCGXrdtl/UvroE=
@@ -149,8 +152,8 @@ github.com/btcsuite/btcd v0.23.0 h1:V2/ZgjfDFIygAX3ZapeigkVBoVUtOJKSwrhZdlpSvaA=
github.com/btcsuite/btcd v0.23.0/go.mod h1:0QJIIN1wwIXF/3G/m87gIwGniDMDQqjVn4SZgnFpsYY=
github.com/btcsuite/btcd/btcec/v2 v2.1.0/go.mod h1:2VzYrv4Gm4apmbVVsSq5bqf1Ec8v56E48Vt0Y/umPgA=
github.com/btcsuite/btcd/btcec/v2 v2.1.3/go.mod h1:ctjw4H1kknNJmRN4iP1R7bTQ+v3GJkZBd6mui8ZsAZE=
github.com/btcsuite/btcd/btcec/v2 v2.2.1 h1:xP60mv8fvp+0khmrN0zTdPC3cNm24rfeE6lh2R/Yv3E=
github.com/btcsuite/btcd/btcec/v2 v2.2.1/go.mod h1:9/CSmJxmuvqzX9Wh2fXMWToLOHhPd11lSPuIupwTkI8=
github.com/btcsuite/btcd/btcec/v2 v2.3.0 h1:S/6K1GEwlEsFzZP4cOOl5mg6PEd/pr0zz7hvXcaxhJ4=
github.com/btcsuite/btcd/btcec/v2 v2.3.0/go.mod h1:zYzJ8etWJQIv1Ogk7OzpWjowwOdXY1W/17j2MW85J04=
github.com/btcsuite/btcd/btcutil v1.0.0/go.mod h1:Uoxwv0pqYWhD//tfTiipkxNfdhG9UrLwaeswfjfdF0A=
github.com/btcsuite/btcd/btcutil v1.1.0/go.mod h1:5OapHB7A2hBBWLm48mmw4MOHNJCcUBTwmWH/0Jn8VHE=
github.com/btcsuite/btcd/btcutil v1.1.2 h1:XLMbX8JQEiwMcYft2EGi8zPUkoa0abKIU6/BJSRsjzQ=
@@ -169,10 +172,12 @@ github.com/btcsuite/snappy-go v0.0.0-20151229074030-0bdef8d06723/go.mod h1:8woku
github.com/btcsuite/snappy-go v1.0.0/go.mod h1:8woku9dyThutzjeg+3xrA5iCpBRH8XEEg3lh6TiUghc=
github.com/btcsuite/websocket v0.0.0-20150119174127-31079b680792/go.mod h1:ghJtEyQwv5/p4Mg4C0fgbePVuGr935/5ddU9Z3TmDRY=
github.com/btcsuite/winsvc v1.0.0/go.mod h1:jsenWakMcC0zFBFurPLEAyrnc/teJEM1O46fmI40EZs=
github.com/bufbuild/buf v1.8.0 h1:53qJ3QY/KOHwSjWgCQYkQaR3jGWst7aOfTXnFe8e+VQ=
github.com/bufbuild/buf v1.8.0/go.mod h1:tBzKkd1fzCcBV6KKSO7zo3rlhk3o1YQ0F2tQKSC2aNU=
github.com/bufbuild/connect-go v0.4.0 h1:fIMyUYG8mXSTH+nnlOx9KmRUf3mBF0R2uKK+BQBoOHE=
github.com/bufbuild/connect-go v0.4.0/go.mod h1:ZEtBnQ7J/m7bvWOW+H8T/+hKQCzPVfhhhICuvtcnjlI=
github.com/bufbuild/buf v1.9.0 h1:8a60qapVuRj6crerWR0rny4UUV/MhZSL5gagJuBxmx8=
github.com/bufbuild/buf v1.9.0/go.mod h1:1Q+rMHiMVcfgScEF/GOldxmu4o9TrQ2sQQh58K6MscE=
github.com/bufbuild/connect-go v1.0.0 h1:htSflKUT8y1jxhoPhPYTZMrsY3ipUXjjrbcZR5O2cVo=
github.com/bufbuild/connect-go v1.0.0/go.mod h1:9iNvh/NOsfhNBUH5CtvXeVUskQO1xsrEviH7ZArwZ3I=
github.com/bufbuild/protocompile v0.1.0 h1:HjgJBI85hY/qmW5tw/66sNDZ7z0UDdVSi/5r40WHw4s=
github.com/bufbuild/protocompile v0.1.0/go.mod h1:ix/MMMdsT3fzxfw91dvbfzKW3fRRnuPCP47kpAm5m/4=
github.com/butuzov/ireturn v0.1.1 h1:QvrO2QF2+/Cx1WA/vETCIYBKtRjc30vesdoPUNo1EbY=
github.com/butuzov/ireturn v0.1.1/go.mod h1:Wh6Zl3IMtTpaIKbmwzqi6olnM9ptYQxxVacMsOEFPoc=
github.com/casbin/casbin/v2 v2.1.2/go.mod h1:YcPU1XXisHhLzuxH9coDNf2FbKpjGlbCg3n9yuLkIJQ=
@@ -249,13 +254,13 @@ github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7Do
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.11 h1:07n33Z8lZxZ2qwegKbObQohDhXDQxiMMz1NOUGYlesw=
github.com/creack/pty v1.1.11/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/cristalhq/acmd v0.7.0/go.mod h1:LG5oa43pE/BbxtfMoImHCQN++0Su7dzipdgBjMCBVDQ=
github.com/curioswitch/go-reassign v0.1.2 h1:ekM07+z+VFT560Exz4mTv0/s1yU9gem6CJc/tlYpkmI=
github.com/curioswitch/go-reassign v0.1.2/go.mod h1:bFJIHgtTM3hRm2sKXSPkbwNjSFyGURQXyn4IXD2qwfQ=
github.com/cristalhq/acmd v0.8.1/go.mod h1:LG5oa43pE/BbxtfMoImHCQN++0Su7dzipdgBjMCBVDQ=
github.com/curioswitch/go-reassign v0.2.0 h1:G9UZyOcpk/d7Gd6mqYgd8XYWFMw/znxwGDUstnC9DIo=
github.com/curioswitch/go-reassign v0.2.0/go.mod h1:x6OpXuWvgfQaMGks2BZybTngWjT84hqJfKoO8Tt/Roc=
github.com/cyphar/filepath-securejoin v0.2.2/go.mod h1:FpkQEhXnPnOthhzymB7CGsFk2G9VLXONKD9G7QGMM+4=
github.com/cyphar/filepath-securejoin v0.2.3/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
github.com/daixiang0/gci v0.6.3 h1:wUAqXChk8HbwXn8AfxD9DYSCp9Bpz1L3e6Q4Roe+q9E=
github.com/daixiang0/gci v0.6.3/go.mod h1:EpVfrztufwVgQRXjnX4zuNinEpLj5OmMjtu/+MB0V0c=
github.com/daixiang0/gci v0.8.1 h1:T4xpSC+hmsi4CSyuYfIJdMZAr9o7xZmHpQVygMghGZ4=
github.com/daixiang0/gci v0.8.1/go.mod h1:EpVfrztufwVgQRXjnX4zuNinEpLj5OmMjtu/+MB0V0c=
github.com/davecgh/go-spew v0.0.0-20171005155431-ecdeabc65495/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
@@ -279,12 +284,13 @@ github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8
github.com/docker/cli v20.10.17+incompatible h1:eO2KS7ZFeov5UJeaDmIs1NFEDRf32PaqRpvoEkKBy5M=
github.com/docker/distribution v2.8.1+incompatible h1:Q50tZOPR6T/hjNsyc9g8/syEs6bk8XXApsHjKukMl68=
github.com/docker/distribution v2.8.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v20.10.17+incompatible h1:JYCuMrWaVNophQTOrMMoSwudOVEfcegoZZrleKc1xwE=
github.com/docker/docker v20.10.17+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v20.10.19+incompatible h1:lzEmjivyNHFHMNAFLXORMBXyGIhw/UP4DvJwvyKYq64=
github.com/docker/docker v20.10.19+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
@@ -331,15 +337,15 @@ github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM
github.com/frankban/quicktest v1.14.3 h1:FJKSZTDHjyhriyC81FLQ0LY93eSai0ZyR/ZIkd3ZUKE=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsnotify/fsnotify v1.5.4 h1:jRbGcIw6P2Meqdwuo0H1p6JVLbL5DHKAKlYndzMwVZI=
github.com/fsnotify/fsnotify v1.5.4/go.mod h1:OVB6XrOHzAwXMpEM7uPOzcehqUV2UqJxmVXmkdnm1bU=
github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY=
github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw=
github.com/fzipp/gocyclo v0.6.0 h1:lsblElZG7d3ALtGMx9fmxeTKZaLLpU8mET09yN4BBLo=
github.com/fzipp/gocyclo v0.6.0/go.mod h1:rXPyn8fnlpa0R2csP/31uerbiVBugk5whMdlyaLkLoA=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-chi/chi/v5 v5.0.7 h1:rDTPXLDHGATaeHvVlLcR4Qe0zftYethFucbjVQ1PxU8=
github.com/go-chi/chi/v5 v5.0.7/go.mod h1:DslCQbL2OYiznFReuXYUmQ2hGd1aDpCnlMNITLSKoi8=
github.com/go-critic/go-critic v0.6.4 h1:tucuG1pvOyYgpBIrVxw0R6gwO42lNa92Aq3VaDoIs+E=
github.com/go-critic/go-critic v0.6.4/go.mod h1:qL5SOlk7NtY6sJPoVCTKDIgzNOxHkkkOCVDyi9wJe1U=
github.com/go-critic/go-critic v0.6.5 h1:fDaR/5GWURljXwF8Eh31T2GZNz9X4jeboS912mWF8Uo=
github.com/go-critic/go-critic v0.6.5/go.mod h1:ezfP/Lh7MA6dBNn4c6ab5ALv3sKnZVLx37tr00uuaOY=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
@@ -368,13 +374,12 @@ github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/me
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/go-toolsmith/astcast v1.0.0 h1:JojxlmI6STnFVG9yOImLeGREv8W2ocNUM+iOhR6jE7g=
github.com/go-toolsmith/astcast v1.0.0/go.mod h1:mt2OdQTeAQcY4DQgPSArJjHCcOwlX+Wl/kwN+LbLGQ4=
github.com/go-toolsmith/astcopy v1.0.0/go.mod h1:vrgyG+5Bxrnz4MZWPF+pI4R8h3qKRjjyvV/DSez4WVQ=
github.com/go-toolsmith/astcopy v1.0.1 h1:l09oBhAPyV74kLJ3ZO31iBU8htZGTwr9LTjuMCyL8go=
github.com/go-toolsmith/astcopy v1.0.1/go.mod h1:4TcEdbElGc9twQEYpVo/aieIXfHhiuLh4aLAck6dO7Y=
github.com/go-toolsmith/astcopy v1.0.2 h1:YnWf5Rnh1hUudj11kei53kI57quN/VH6Hp1n+erozn0=
github.com/go-toolsmith/astcopy v1.0.2/go.mod h1:4TcEdbElGc9twQEYpVo/aieIXfHhiuLh4aLAck6dO7Y=
github.com/go-toolsmith/astequal v1.0.0/go.mod h1:H+xSiq0+LtiDC11+h1G32h7Of5O3CYFJ99GVbS5lDKY=
github.com/go-toolsmith/astequal v1.0.1/go.mod h1:4oGA3EZXTVItV/ipGiOx7NWkY5veFfcsOJVS2YxltLw=
github.com/go-toolsmith/astequal v1.0.2 h1:+XvaV8zNxua+9+Oa4AHmgmpo4RYAbwr/qjNppLfX2yM=
github.com/go-toolsmith/astequal v1.0.2/go.mod h1:9Ai4UglvtR+4up+bAD4+hCj7iTo4m/OXVTSLnCyTAx4=
github.com/go-toolsmith/astequal v1.0.3 h1:+LVdyRatFS+XO78SGV4I3TCEA0AC7fKEGma+fH+674o=
github.com/go-toolsmith/astequal v1.0.3/go.mod h1:9Ai4UglvtR+4up+bAD4+hCj7iTo4m/OXVTSLnCyTAx4=
github.com/go-toolsmith/astfmt v1.0.0 h1:A0vDDXt+vsvLEdbMFJAUBI/uTbRw1ffOPnxsILnFL6k=
github.com/go-toolsmith/astfmt v1.0.0/go.mod h1:cnWmsOAuq4jJY6Ct5YWlVLmcmLMn1JUPuQIHCY7CJDw=
github.com/go-toolsmith/astp v1.0.0 h1:alXE75TXgcmupDsMK1fRAy0YUzLzqPVvBKoyWV+KPXg=
@@ -394,8 +399,8 @@ github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5x
github.com/godbus/dbus/v5 v5.0.6/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gofrs/flock v0.8.1 h1:+gYjHKf32LDeiEEFhQaotPbLuUXjY5ZqxKgXy7n59aw=
github.com/gofrs/flock v0.8.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU=
github.com/gofrs/uuid v4.3.0+incompatible h1:CaSVZxm5B+7o45rtab4jC2G37WGYX1zQfuU2i6DSvnc=
github.com/gofrs/uuid v4.3.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
github.com/gofrs/uuid v4.3.1+incompatible h1:0/KbAdpx3UXAx1kEOWHJeOkpbgRFGHVgv+CFIY7dBJI=
github.com/gofrs/uuid v4.3.1+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
@@ -451,10 +456,10 @@ github.com/golangci/dupl v0.0.0-20180902072040-3e9179ac440a h1:w8hkcTqaFpzKqonE9
github.com/golangci/dupl v0.0.0-20180902072040-3e9179ac440a/go.mod h1:ryS0uhF+x9jgbj/N71xsEqODy9BN81/GonCZiOzirOk=
github.com/golangci/go-misc v0.0.0-20220329215616-d24fe342adfe h1:6RGUuS7EGotKx6J5HIP8ZtyMdiDscjMLfRBSPuzVVeo=
github.com/golangci/go-misc v0.0.0-20220329215616-d24fe342adfe/go.mod h1:gjqyPShc/m8pEMpk0a3SeagVb0kaqvhscv+i9jI5ZhQ=
github.com/golangci/gofmt v0.0.0-20190930125516-244bba706f1a h1:iR3fYXUjHCR97qWS8ch1y9zPNsgXThGwjKPrYfqMPks=
github.com/golangci/gofmt v0.0.0-20190930125516-244bba706f1a/go.mod h1:9qCChq59u/eW8im404Q2WWTrnBUQKjpNYKMbU4M7EFU=
github.com/golangci/golangci-lint v1.49.0 h1:I8WHOavragDttlLHtSraHn/h39C+R60bEQ5NoGcHQr8=
github.com/golangci/golangci-lint v1.49.0/go.mod h1:+V/7lLv449R6w9mQ3WdV0EKh7Je/jTylMeSwBZcLeWE=
github.com/golangci/gofmt v0.0.0-20220901101216-f2edd75033f2 h1:amWTbTGqOZ71ruzrdA+Nx5WA3tV1N0goTspwmKCQvBY=
github.com/golangci/gofmt v0.0.0-20220901101216-f2edd75033f2/go.mod h1:9wOXstvyDRshQ9LggQuzBCGysxs3b6Uo/1MvYCR2NMs=
github.com/golangci/golangci-lint v1.50.1 h1:C829clMcZXEORakZlwpk7M4iDw2XiwxxKaG504SZ9zY=
github.com/golangci/golangci-lint v1.50.1/go.mod h1:AQjHBopYS//oB8xs0y0M/dtxdKHkdhl0RvmjUct0/4w=
github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0 h1:MfyDlzVjl1hoaPzPD4Gpb/QgoRfSBR0jdhwGyAWwMSA=
github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0/go.mod h1:66R6K6P6VWk9I95jvqGxkqJxVWGFy9XlDwLwVz1RCFg=
github.com/golangci/maligned v0.0.0-20180506175553-b1d89398deca h1:kNY3/svz5T29MYHubXix4aDDuE3RWHkPvopM/EDv/MA=
@@ -480,8 +485,9 @@ github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
@@ -614,14 +620,7 @@ github.com/jessevdk/go-flags v0.0.0-20141203071132-1679536dcc89/go.mod h1:4FA24M
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jgautheron/goconst v1.5.1 h1:HxVbL1MhydKs8R8n/HE5NPvzfaYmQJA3o879lE4+WcM=
github.com/jgautheron/goconst v1.5.1/go.mod h1:aAosetZ5zaeC/2EfMeRswtxUFBpe2Hr7HzkgX4fanO4=
github.com/jhump/gopoet v0.0.0-20190322174617-17282ff210b3/go.mod h1:me9yfT6IJSlOL3FCfrg+L6yzUEZ+5jW6WHt4Sk+UPUI=
github.com/jhump/gopoet v0.1.0/go.mod h1:me9yfT6IJSlOL3FCfrg+L6yzUEZ+5jW6WHt4Sk+UPUI=
github.com/jhump/goprotoc v0.5.0/go.mod h1:VrbvcYrQOrTi3i0Vf+m+oqQWk9l72mjkJCYo7UvLHRQ=
github.com/jhump/protocompile v0.0.0-20220812162104-d108583e055d h1:1BLWxsvcb5w9/vGjtyEo//r3dwEPNg7z73nbQ/XV4/s=
github.com/jhump/protocompile v0.0.0-20220812162104-d108583e055d/go.mod h1:qr2b5kx4HbFS7/g4uYO5qv9ei8303JMsC7ESbYiqr2Q=
github.com/jhump/protoreflect v1.11.0/go.mod h1:U7aMIjN0NWq9swDP7xDdoMfRHb35uiuTd3Z9nFXJf5E=
github.com/jhump/protoreflect v1.12.1-0.20220721211354-060cc04fc18b h1:izTof8BKh/nE1wrKOrloNA5q4odOarjf+Xpe+4qow98=
github.com/jhump/protoreflect v1.12.1-0.20220721211354-060cc04fc18b/go.mod h1:JytZfP5d0r8pVNLZvai7U/MCuTWITgrI4tTg7puQFKI=
github.com/jhump/protoreflect v1.13.1-0.20220928232736-101791cb1b4c h1:XImQJfpJLmGEEd8ll5yPVyL/aEvmgGHW4WYTyNseLOM=
github.com/jingyugao/rowserrcheck v1.1.1 h1:zibz55j/MJtLsjP1OF4bSdgXxwL1b+Vn7Tjzq7gFzUs=
github.com/jingyugao/rowserrcheck v1.1.1/go.mod h1:4yvlZSDb3IyDTUZJUmpZfm2Hwok+Dtp+nu2qOq+er9c=
github.com/jirfag/go-printf-func-name v0.0.0-20200119135958-7558a9eaa5af h1:KA9BjwUk7KlCh6S9EAGWBt1oExIUv9WyNCiRz5amv48=
@@ -656,11 +655,13 @@ github.com/kisielk/errcheck v1.6.2 h1:uGQ9xI8/pgc9iOoCe7kWQgRE6SBTrCGmTSf0LrEtY7
github.com/kisielk/errcheck v1.6.2/go.mod h1:nXw/i/MfnvRHqXa7XXmQMUB0oNFGuBrNI8d8NLy0LPw=
github.com/kisielk/gotool v1.0.0 h1:AV2c/EiW3KqPNT9ZKl07ehoAGi4C5/01Cfbblndcapg=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kkHAIKE/contextcheck v1.1.3 h1:l4pNvrb8JSwRd51ojtcOxOeHJzHek+MtOyXbaR0uvmw=
github.com/kkHAIKE/contextcheck v1.1.3/go.mod h1:PG/cwd6c0705/LM0KTr1acO2gORUxkSVWyLJOFW5qoo=
github.com/kkdai/bstream v0.0.0-20161212061736-f391b8402d23/go.mod h1:J+Gs4SYgM6CZQHDETBtE9HaSEkGmuNXF86RwHhHUvq4=
github.com/klauspost/compress v1.13.4/go.mod h1:8dP1Hq4DHOhN9w426knH3Rhby4rFm6D8eO+e+Dq5Gzg=
github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.15.9 h1:wKRjX6JRtDdrE9qwa4b/Cip7ACOshUI4smpCQanqjSY=
github.com/klauspost/compress v1.15.9/go.mod h1:PhcZ0MbTNciWF3rruxRgKxI5NkcHHrHUDtV4Yw2GlzU=
github.com/klauspost/compress v1.15.11 h1:Lcadnb3RKGin4FYM/orgq0qde+nc15E5Cbqg4B9Sx9c=
github.com/klauspost/compress v1.15.11/go.mod h1:QPwzmACJjUTFsnSHH934V6woptycfrDDJnH7hvFVbGM=
github.com/klauspost/pgzip v1.2.5 h1:qnWYvvKqedOF2ulHpMG72XQol4ILEJ8k2wwRl/Km8oE=
github.com/klauspost/pgzip v1.2.5/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
@@ -704,6 +705,8 @@ github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czP
github.com/magiconair/properties v1.8.5/go.mod h1:y3VJvCyxH9uVvJTWEGAELF3aiYNyPKd5NZ3oSwXrF60=
github.com/magiconair/properties v1.8.6 h1:5ibWZ6iY0NctNGWo87LalDlEZ6R41TqbbDamhfG/Qzo=
github.com/magiconair/properties v1.8.6/go.mod h1:y3VJvCyxH9uVvJTWEGAELF3aiYNyPKd5NZ3oSwXrF60=
github.com/maratori/testableexamples v1.0.0 h1:dU5alXRrD8WKSjOUnmJZuzdxWOEQ57+7s93SLMxb2vI=
github.com/maratori/testableexamples v1.0.0/go.mod h1:4rhjL1n20TUTT4vdh3RDqSizKLyXp7K2u6HgraZCGzE=
github.com/maratori/testpackage v1.1.0 h1:GJY4wlzQhuBusMF1oahQCBtUV/AQ/k69IZ68vxaac2Q=
github.com/maratori/testpackage v1.1.0/go.mod h1:PeAhzU8qkCwdGEMTEupsHJNlQu2gZopMC6RjbhmHeDc=
github.com/matoous/godox v0.0.0-20210227103229-6504466cf951 h1:pWxk9e//NbPwfxat7RXkts09K+dEBJWakUWwICVqYbA=
@@ -737,8 +740,8 @@ github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182aff
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/mbilski/exhaustivestruct v1.2.0 h1:wCBmUnSYufAHO6J4AVWY6ff+oxWxsVFrwgOdMUQePUo=
github.com/mbilski/exhaustivestruct v1.2.0/go.mod h1:OeTBVxQWoEmB2J2JCHmXWPJ0aksxSUOUy+nvtVEfzXc=
github.com/mgechev/revive v1.2.3 h1:NzIEEa9+WimQ6q2Ov7OcNeySS/IOcwtkQ8RAh0R5UJ4=
github.com/mgechev/revive v1.2.3/go.mod h1:iAWlQishqCuj4yhV24FTnKSXGpbAA+0SckXB8GQMX/Q=
github.com/mgechev/revive v1.2.4 h1:+2Hd/S8oO2H0Ikq2+egtNwQsVhAeELHjxjIUFX5ajLI=
github.com/mgechev/revive v1.2.4/go.mod h1:iAWlQishqCuj4yhV24FTnKSXGpbAA+0SckXB8GQMX/Q=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso=
github.com/miekg/dns v1.1.43/go.mod h1:+evo5L0630/F6ca/Z9+GAqzhjGyn8/c+TBaOyfEl0V4=
@@ -761,8 +764,8 @@ github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RR
github.com/mitchellh/mapstructure v1.4.2/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/moby/buildkit v0.10.3 h1:/dGykD8FW+H4p++q5+KqKEo6gAkYKyBQHdawdjVwVAU=
github.com/moby/buildkit v0.10.3/go.mod h1:jxeOuly98l9gWHai0Ojrbnczrk/rf+o9/JqNhY+UCSo=
github.com/moby/buildkit v0.10.4 h1:FvC+buO8isGpUFZ1abdSLdGHZVqg9sqI4BbFL8tlzP4=
github.com/moby/buildkit v0.10.4/go.mod h1:Yajz9vt1Zw5q9Pp4pdb3TCSUXJBIroIQGQ3TTs/sLug=
github.com/moby/sys/mountinfo v0.4.1/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
github.com/moby/sys/mountinfo v0.5.0/go.mod h1:3bMD3Rg+zkqx8MRYPi7Pyb0Ie97QEBmdxbhnCLlSvSU=
github.com/moby/term v0.0.0-20220808134915-39b0c02b01ae h1:O4SWKdcHVCvYqyDV+9CJA1fcDN2L11Bule0iFy3YlAI=
@@ -798,8 +801,8 @@ github.com/nbutton23/zxcvbn-go v0.0.0-20210217022336-fa2cb2858354 h1:4kuARK6Y6Fx
github.com/nbutton23/zxcvbn-go v0.0.0-20210217022336-fa2cb2858354/go.mod h1:KSVJerMDfblTH7p5MZaTt+8zaT2iEk3AkVb9PQdZuE8=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/nishanths/exhaustive v0.8.1 h1:0QKNascWv9qIHY7zRoZSxeRr6kuk5aAT3YXLTiDmjTo=
github.com/nishanths/exhaustive v0.8.1/go.mod h1:qj+zJJUgJ76tR92+25+03oYUhzF4R7/2Wk7fGTfCHmg=
github.com/nishanths/exhaustive v0.8.3 h1:pw5O09vwg8ZaditDp/nQRqVnrMczSJDxRDJMowvhsrM=
github.com/nishanths/exhaustive v0.8.3/go.mod h1:qj+zJJUgJ76tR92+25+03oYUhzF4R7/2Wk7fGTfCHmg=
github.com/nishanths/predeclared v0.2.2 h1:V2EPdZPliZymNAn79T8RkNApBjMmVKh5XRpLm/w98Vk=
github.com/nishanths/predeclared v0.2.2/go.mod h1:RROzoN6TnGQupbC+lqggsOlcgysk3LMK/HI84Mp280c=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
@@ -831,8 +834,8 @@ github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQ
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 h1:rc3tiVYb5z54aKaDfakKn0dDjIyPpTtszkjuMzyt7ec=
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/image-spec v1.1.0-rc2 h1:2zx/Stx4Wc5pIPDvIxHXvXtQFW/7XWJGmnM7r3wg034=
github.com/opencontainers/image-spec v1.1.0-rc2/go.mod h1:3OVijpioIKYWTqjiG0zfF6wvoJ4fAXGbjdZuI2NgsRQ=
github.com/opencontainers/runc v0.1.1/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v1.0.2/go.mod h1:aTaHFFwQXuA71CiyxOdFFIorAoemI04suvGRQFzWTD0=
github.com/opencontainers/runc v1.1.3 h1:vIXrkId+0/J2Ymu2m7VjGvbSlAId9XNRPhn2p4b+d8w=
@@ -896,8 +899,8 @@ github.com/pointlander/jetset v1.0.1-0.20190518214125-eee7eff80bd4 h1:RHHRCZeaNy
github.com/pointlander/jetset v1.0.1-0.20190518214125-eee7eff80bd4/go.mod h1:RdR1j20Aj5pB6+fw6Y9Ur7lMHpegTEjY1vc19hEZL40=
github.com/pointlander/peg v1.0.1 h1:mgA/GQE8TeS9MdkU6Xn6iEzBmQUQCNuWD7rHCK6Mjs0=
github.com/pointlander/peg v1.0.1/go.mod h1:5hsGDQR2oZI4QoWz0/Kdg3VSVEC31iJw/b7WjqCBGRI=
github.com/polyfloyd/go-errorlint v1.0.2 h1:kp1yvHflYhTmw5m3MmBy8SCyQkKPjwDthVuMH0ug6Yk=
github.com/polyfloyd/go-errorlint v1.0.2/go.mod h1:APVvOesVSAnne5SClsPxPdfvZTVDojXh1/G3qb5wjGI=
github.com/polyfloyd/go-errorlint v1.0.5 h1:AHB5JRCjlmelh9RrLxT9sgzpalIwwq4hqE8EkwIwKdY=
github.com/polyfloyd/go-errorlint v1.0.5/go.mod h1:APVvOesVSAnne5SClsPxPdfvZTVDojXh1/G3qb5wjGI=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
@@ -910,15 +913,16 @@ github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP
github.com/prometheus/client_golang v1.8.0/go.mod h1:O9VU6huf47PktckDQfMTX0Y8tY0/7TSWwj+ITvv0TnM=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
github.com/prometheus/client_golang v1.13.0 h1:b71QUfeo5M8gq2+evJdTPfZhYMAU0uKPkyPJ7TPsloU=
github.com/prometheus/client_golang v1.13.0/go.mod h1:vTeo+zgvILHsnnj/39Ou/1fPN5nJFOEMgftOUOmlvYQ=
github.com/prometheus/client_golang v1.13.1 h1:3gMjIY2+/hzmqhtUC/aQNYldJA6DtH3CgQvwS+02K1c=
github.com/prometheus/client_golang v1.13.1/go.mod h1:vTeo+zgvILHsnnj/39Ou/1fPN5nJFOEMgftOUOmlvYQ=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.1.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.3.0 h1:UBgGFHqYdG/TPFD1B1ogZywDqEkwp3fBMvqdiQ7Xew4=
github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
@@ -945,14 +949,14 @@ github.com/prometheus/procfs v0.8.0 h1:ODq8ZFEaYeCaZOJlZZdJA2AbQR98dSHSM1KW/You5
github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0uaxHdg830/4=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/quasilyte/go-ruleguard v0.3.1-0.20210203134552-1b5a410e1cc8/go.mod h1:KsAh3x0e7Fkpgs+Q9pNLS5XpFSvYCEVl5gP9Pp1xp30=
github.com/quasilyte/go-ruleguard v0.3.17 h1:cDdoaSbQg11LXPDQqiCK54QmQXsEQQCTIgdcpeULGSI=
github.com/quasilyte/go-ruleguard v0.3.17/go.mod h1:sST5PvaR7yb/Az5ksX8oc88usJ4EGjmJv7cK7y3jyig=
github.com/quasilyte/go-ruleguard v0.3.18 h1:sd+abO1PEI9fkYennwzHn9kl3nqP6M5vE7FiOzZ+5CE=
github.com/quasilyte/go-ruleguard v0.3.18/go.mod h1:lOIzcYlgxrQ2sGJ735EHXmf/e9MJ516j16K/Ifcttvs=
github.com/quasilyte/go-ruleguard/dsl v0.3.0/go.mod h1:KeCP03KrjuSO0H1kTuZQCWlQPulDV6YMIXmpQss17rU=
github.com/quasilyte/go-ruleguard/dsl v0.3.21/go.mod h1:KeCP03KrjuSO0H1kTuZQCWlQPulDV6YMIXmpQss17rU=
github.com/quasilyte/go-ruleguard/rules v0.0.0-20201231183845-9e62ed36efe1/go.mod h1:7JTjp89EGyU1d6XfBiXihJNG37wB2VRkd125Q1u7Plc=
github.com/quasilyte/go-ruleguard/rules v0.0.0-20211022131956-028d6511ab71/go.mod h1:4cgAphtvu7Ftv7vOT2ZOYhC6CvBxZixcasr8qIOTA50=
github.com/quasilyte/gogrep v0.0.0-20220120141003-628d8b3623b5 h1:PDWGei+Rf2bBiuZIbZmM20J2ftEy9IeUCHA8HbQqed8=
github.com/quasilyte/gogrep v0.0.0-20220120141003-628d8b3623b5/go.mod h1:wSEyW6O61xRV6zb6My3HxrQ5/8ke7NE2OayqCHa3xRM=
github.com/quasilyte/gogrep v0.0.0-20220828223005-86e4605de09f h1:6Gtn2i04RD0gVyYf2/IUMTIs+qYleBt4zxDqkLTcu4U=
github.com/quasilyte/gogrep v0.0.0-20220828223005-86e4605de09f/go.mod h1:Cm9lpz9NZjEoL1tgZ2OgeUKPIxL1meE7eo60Z6Sk+Ng=
github.com/quasilyte/regex/syntax v0.0.0-20200407221936-30656e2c4a95 h1:L8QM9bvf68pVdQ3bCFZMDmnt9yqcMBro1pC7F+IPYMY=
github.com/quasilyte/regex/syntax v0.0.0-20200407221936-30656e2c4a95/go.mod h1:rlzQ04UMyJXu/aOvhd8qT+hvDrFpiwqp8MRXDY9szc0=
github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567 h1:M8mH9eK4OUR4lu7Gd+PU1fV2/qnDNfzT635KRSObncs=
@@ -964,7 +968,7 @@ github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475/go.mod h1:bCqn
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.8.1 h1:geMPLpDpQOgVyCg5z5GoRwLHepNdb71NXb67XFkP+Eg=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/rs/cors v1.7.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU=
github.com/rs/cors v1.8.2 h1:KCooALfAYGs415Cwu5ABvv9n9509fSiG5SQJn/AQo4U=
github.com/rs/cors v1.8.2/go.mod h1:XyqrcTp5zjWr1wsJ8PIRZssZ8b/WMcMf71DJnit4EMU=
@@ -988,8 +992,8 @@ github.com/sasha-s/go-deadlock v0.3.1 h1:sqv7fDNShgjcaxkO0JNcOAlr8B9+cV5Ey/OB71e
github.com/sasha-s/go-deadlock v0.3.1/go.mod h1:F73l+cr82YSh10GxyRI6qZiCgK64VaZjwesgfQ1/iLM=
github.com/sashamelentyev/interfacebloat v1.1.0 h1:xdRdJp0irL086OyW1H/RTZTr1h/tMEOsumirXcOJqAw=
github.com/sashamelentyev/interfacebloat v1.1.0/go.mod h1:+Y9yU5YdTkrNvoX0xHc84dxiN1iBi9+G8zZIhPVoNjQ=
github.com/sashamelentyev/usestdlibvars v1.13.0 h1:uObNudVEEHf6JbOJy5bgKJloA1bWjxR9fwgNFpPzKnI=
github.com/sashamelentyev/usestdlibvars v1.13.0/go.mod h1:D2Wb7niIYmTB+gB8z7kh8tyP5ccof1dQ+SFk+WW5NtY=
github.com/sashamelentyev/usestdlibvars v1.20.0 h1:K6CXjqqtSYSsuyRDDC7Sjn6vTMLiSJa4ZmDkiokoqtw=
github.com/sashamelentyev/usestdlibvars v1.20.0/go.mod h1:0GaP+ecfZMXShS0A94CJn6aEuPRILv8h/VuWI9n1ygg=
github.com/satori/go.uuid v1.2.0 h1:0uYX9dsZ2yD7q2RtLRtPSdGDWzjeM3TbMJP9utgA0ww=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
@@ -1031,8 +1035,8 @@ github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0b
github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.6.0/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
github.com/spf13/afero v1.8.2 h1:xehSyVa0YnHWsJ49JFljMpg1HX19V6NDZ1fkm1Xznbo=
github.com/spf13/afero v1.8.2/go.mod h1:CtAatgMJh6bJEIs48Ay/FOnkljP3WeGUG0MC1RfAqwo=
github.com/spf13/afero v1.9.2 h1:j49Hj62F0n+DaZ1dDCvhABaPNSGNkt32oRFxI33IEMw=
github.com/spf13/afero v1.9.2/go.mod h1:iUV7ddyEEZPO5gA3zD4fJt6iStLlL+Lg4m2cihcDf8Y=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.5.0 h1:rj3WzYc11XZaIZMPKmwP96zkFEnnAmV8s6XbB2aY32w=
@@ -1042,8 +1046,8 @@ github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tL
github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
github.com/spf13/cobra v1.1.1/go.mod h1:WnodtKOvamDL/PwE2M4iKs8aMDBZ5Q5klgD3qfVJQMI=
github.com/spf13/cobra v1.2.1/go.mod h1:ExllRjgxM/piMAM+3tAZvg8fsklGAf3tPfi+i8t68Nk=
github.com/spf13/cobra v1.5.0 h1:X+jTBEBqF0bHN+9cSMgmfuvv2VHJ9ezmFNf9Y/XstYU=
github.com/spf13/cobra v1.5.0/go.mod h1:dWXEIy2H428czQCjInthrTRUg7yKbok+2Qi/yBIJoUM=
github.com/spf13/cobra v1.6.1 h1:o94oiPyS4KD1mPy2fmcYYHHfCxLqYjJOhGsCHFZtEzA=
github.com/spf13/cobra v1.6.1/go.mod h1:IOw/AERYS7UzyrGinqmz6HLUo219MORXGxhbaJUqzrY=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
@@ -1056,8 +1060,8 @@ github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/y
github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
github.com/spf13/viper v1.7.1/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
github.com/spf13/viper v1.8.1/go.mod h1:o0Pch8wJ9BVSWGQMbra6iw0oQ5oktSIBaujf1rJH9Ns=
github.com/spf13/viper v1.13.0 h1:BWSJ/M+f+3nmdz9bxB+bWX28kkALN2ok11D0rSo8EJU=
github.com/spf13/viper v1.13.0/go.mod h1:Icm2xNL3/8uyh/wFuB1jI7TiTNKp8632Nwegu+zgdYw=
github.com/spf13/viper v1.14.0 h1:Rg7d3Lo706X9tHsJMUjdiwMpHB7W8WnSVOssIY+JElU=
github.com/spf13/viper v1.14.0/go.mod h1:WT//axPky3FdvXHzGw33dNdXXXfFQqmEalje+egj8As=
github.com/ssgreg/nlreturn/v2 v2.2.1 h1:X4XDI7jstt3ySqGU86YGAURbxw3oTDPK9sPEi6YEwQ0=
github.com/ssgreg/nlreturn/v2 v2.2.1/go.mod h1:E/iiPB78hV7Szg2YfRgyIrk1AD6JVMTRkkxBiELzh2I=
github.com/stbenjam/no-sprintf-host-port v0.1.1 h1:tYugd/yrm1O0dV+ThCbaKZh195Dfm07ysF0U6JQXczc=
@@ -1069,8 +1073,9 @@ github.com/streadway/handy v0.0.0-20190108123426-d5acb3125c2a/go.mod h1:qNTQ5P5J
github.com/streadway/handy v0.0.0-20200128134331-0f66f006fb2e/go.mod h1:qNTQ5P5JnDBl6z3cMAg/SywNDC5ABu5ApDIw6lUbRmI=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0 h1:M2gUjqZET1qApGOWNSnZ49BAIMX4F/1plDv3+l31EJ4=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.1.4/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
@@ -1079,13 +1084,12 @@ github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/subosito/gotenv v1.4.1 h1:jyEFiXpy21Wm81FBN71l9VoMMV8H8jG+qIK3GCpY6Qs=
github.com/subosito/gotenv v1.4.1/go.mod h1:ayKnFf/c6rvx/2iiLrJUk1e6plDbT3edrFNGqEflhK0=
github.com/sylvia7788/contextcheck v1.0.6 h1:o2EZgVPyMKE/Mtoqym61DInKEjwEbsmyoxg3VrmjNO4=
github.com/sylvia7788/contextcheck v1.0.6/go.mod h1:9XDxwvxyuKD+8N+a7Gs7bfWLityh5t70g/GjdEt2N2M=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/syndtr/goleveldb v1.0.1-0.20200815110645-5c35d600f0ca/go.mod h1:u2MKkTVTVJWe5D1rCvame8WqhBd88EuIwODJZ1VHCPM=
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 h1:epCh84lMvA70Z7CTTCmYQn2CKbY8j86K7/FAIr141uY=
@@ -1106,14 +1110,14 @@ github.com/tetafro/godot v1.4.11 h1:BVoBIqAf/2QdbFmSwAWnaIqDivZdOV0ZRwEm6jivLKw=
github.com/tetafro/godot v1.4.11/go.mod h1:LR3CJpxDVGlYOWn3ZZg1PgNZdTUvzsZWu8xaEohUpn8=
github.com/timakin/bodyclose v0.0.0-20210704033933-f49887972144 h1:kl4KhGNsJIbDHS9/4U9yQo1UcPQM0kOMJHn29EoH/Ro=
github.com/timakin/bodyclose v0.0.0-20210704033933-f49887972144/go.mod h1:Qimiffbc6q9tBWlVV6x0P9sat/ao1xEkREYPPj9hphk=
github.com/timonwong/logrlint v0.1.0 h1:phZCcypL/vtx6cGxObJgWZ5wexZF5SXFPLOM+ru0e/M=
github.com/timonwong/logrlint v0.1.0/go.mod h1:Zleg4Gw+kRxNej+Ra7o+tEaW5k1qthTaYKU7rSD39LU=
github.com/timonwong/loggercheck v0.9.3 h1:ecACo9fNiHxX4/Bc02rW2+kaJIAMAes7qJ7JKxt0EZI=
github.com/timonwong/loggercheck v0.9.3/go.mod h1:wUqnk9yAOIKtGA39l1KLE9Iz0QiTocu/YZoOf+OzFdw=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tomarrell/wrapcheck/v2 v2.6.2 h1:3dI6YNcrJTQ/CJQ6M/DUkc0gnqYSIk6o0rChn9E/D0M=
github.com/tomarrell/wrapcheck/v2 v2.6.2/go.mod h1:ao7l5p0aOlUNJKI0qVwB4Yjlqutd0IvAB9Rdwyilxvg=
github.com/tommy-muehle/go-mnd/v2 v2.5.0 h1:iAj0a8e6+dXSL7Liq0aXPox36FiN1dBbjA6lt9fl65s=
github.com/tommy-muehle/go-mnd/v2 v2.5.0/go.mod h1:WsUAkMJMYww6l/ufffCD3m+P7LEvr8TnZn9lwVDlgzw=
github.com/tomarrell/wrapcheck/v2 v2.7.0 h1:J/F8DbSKJC83bAvC6FoZaRjZiZ/iKoueSdrEkmGeacA=
github.com/tomarrell/wrapcheck/v2 v2.7.0/go.mod h1:ao7l5p0aOlUNJKI0qVwB4Yjlqutd0IvAB9Rdwyilxvg=
github.com/tommy-muehle/go-mnd/v2 v2.5.1 h1:NowYhSdyE/1zwK9QCLeRb6USWdoif80Ie+v+yU8u1Zw=
github.com/tommy-muehle/go-mnd/v2 v2.5.1/go.mod h1:WsUAkMJMYww6l/ufffCD3m+P7LEvr8TnZn9lwVDlgzw=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
@@ -1125,8 +1129,8 @@ github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijb
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/uudashr/gocognit v1.0.6 h1:2Cgi6MweCsdB6kpcVQp7EW4U23iBFQWfTXiWlyp842Y=
github.com/uudashr/gocognit v1.0.6/go.mod h1:nAIUuVBnYU7pcninia3BHOvQkpQCeO76Uscky5BOwcY=
github.com/vektra/mockery/v2 v2.14.0 h1:KZ1p5Hrn8tiY+LErRMr14HHle6khxo+JKOXLBW/yfqs=
github.com/vektra/mockery/v2 v2.14.0/go.mod h1:bnD1T8tExSgPD1ripLkDbr60JA9VtQeu12P3wgLZd7M=
github.com/vektra/mockery/v2 v2.14.1 h1:Xamr4zUkFBDGdZhJ6iCiJ1AwkGRmUgZd8zkwjRXt+TU=
github.com/vektra/mockery/v2 v2.14.1/go.mod h1:bnD1T8tExSgPD1ripLkDbr60JA9VtQeu12P3wgLZd7M=
github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb h1:zGWFAtiMcyryUHoUjUJX0/lt1H2+i2Ka2n+D3DImSNo=
@@ -1167,12 +1171,14 @@ go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.23.0 h1:gqCw0LfLxScz8irSi8exQc7fyQ0fKQU/qnC/X8+V/1M=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.34.0 h1:PNEMW4EvpNQ7SuoPFNkvbZqi1STkTPKq+8vfoMl/6AE=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.34.0/go.mod h1:fk1+icoN47ytLSgkoWHLJrtVTSQ+HgmkNgPTKrk/Nsc=
go.opentelemetry.io/otel v1.9.0 h1:8WZNQFIB2a71LnANS9JeyidJKKGOOremcUtb/OtHISw=
go.opentelemetry.io/otel v1.9.0/go.mod h1:np4EoPGzoPs3O67xUVNoPPcmSvsfOxNlNA4F4AC+0Eo=
go.opentelemetry.io/otel/trace v1.9.0 h1:oZaCNJUjWcg60VXWee8lJKlqhPbXAPB51URuR47pQYc=
go.opentelemetry.io/otel/trace v1.9.0/go.mod h1:2737Q0MuG8q1uILYm2YYVkAyLtOofiTNGg6VODnOiPo=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.36.3 h1:syAz40OyelLZo42+3U68Phisvrx4qh+4wpdZw7eUUdY=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.36.3/go.mod h1:Dts42MGkzZne2yCru741+bFiTMWkIj/LLRizad7b9tw=
go.opentelemetry.io/otel v1.11.0 h1:kfToEGMDq6TrVrJ9Vht84Y8y9enykSZzDDZglV0kIEk=
go.opentelemetry.io/otel v1.11.0/go.mod h1:H2KtuEphyMvlhZ+F7tg9GRhAOe60moNx61Ex+WmiKkk=
go.opentelemetry.io/otel/metric v0.32.3 h1:dMpnJYk2KULXr0j8ph6N7+IcuiIQXlPXD4kix9t7L9c=
go.opentelemetry.io/otel/metric v0.32.3/go.mod h1:pgiGmKohxHyTPHGOff+vrtIH39/R9fiO/WoenUQ3kcc=
go.opentelemetry.io/otel/trace v1.11.0 h1:20U/Vj42SX+mASlXLmSGBg6jpI1jQtv682lZtTAOVFI=
go.opentelemetry.io/otel/trace v1.11.0/go.mod h1:nyYjis9jy0gytE9LXGU+/m1sHTKbRY0fX0hulNNDP1U=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
@@ -1194,8 +1200,8 @@ go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
go.uber.org/zap v1.19.1/go.mod h1:j3DNczoxDZroyBnOT1L/Q79cfUMGZxlv/9dzN7SM1rI=
go.uber.org/zap v1.22.0 h1:Zcye5DUgBloQ9BaT4qc9BnjOFog5TvBSAGkJ3Nf70c0=
go.uber.org/zap v1.22.0/go.mod h1:H4siCOZOrAolnUPJEkfaSjDqyP+BDS0DdDWzwcgt3+U=
go.uber.org/zap v1.23.0 h1:OjGQ5KQDEUawVHxNwQgPpiypGHOxo2mNZsOqTak4fFY=
go.uber.org/zap v1.23.0/go.mod h1:D+nX8jyLsMHMYrln8A0rJjFt/T/9/bGgIhAqxv5URuY=
golang.org/x/crypto v0.0.0-20170930174604-9419663f5a44/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
@@ -1219,8 +1225,8 @@ golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e/go.mod h1:GvvjBRRGRdwPK5y
golang.org/x/crypto v0.0.0-20210915214749-c084706c2272/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20211108221036-ceb1ce70b4fa/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa h1:zuSxTR4o9y82ebqCUJYNGJbGPo6sKVl54f/TVDObg1c=
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU=
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20180807140117-3d87b88a115f/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
@@ -1237,8 +1243,8 @@ golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMk
golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e h1:+WEEuIdZHnUeJJmEUjyYC2gfUMj69yZXw17EnHg/otA=
golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e/go.mod h1:Kr81I6Kryrl9sr8s2FK3vxD90NdsKWRuOIl2O4CvYbA=
golang.org/x/exp/typeparams v0.0.0-20220428152302-39d4317da171/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
golang.org/x/exp/typeparams v0.0.0-20220613132600-b0d781184e0d h1:+W8Qf4iJtMGKkyAygcKohjxTk4JPsL9DpzApJ22m5Ic=
golang.org/x/exp/typeparams v0.0.0-20220613132600-b0d781184e0d/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
golang.org/x/exp/typeparams v0.0.0-20220827204233-334a2380cb91 h1:Ic/qN6TEifvObMGQy72k0n1LlJr7DjWWEi+MOsDOiSk=
golang.org/x/exp/typeparams v0.0.0-20220827204233-334a2380cb91/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
golang.org/x/image v0.0.0-20180708004352-c73c2afc3b81/go.mod h1:ux5Hcp/YLpHSI86hEcLt0YII63i6oz57MZXIpbrjZUs=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
@@ -1267,8 +1273,9 @@ golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.5.1/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 h1:6zppjxzCulZykYSLyVDYbneBfbaBIQPYMevg0bEwv2s=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.6.0 h1:b9gGHsz9/HhJ3HF5DHQytPpuwocVTChQJK3AvoLRD5I=
golang.org/x/mod v0.6.0/go.mod h1:4mET923SAdbXp2ki8ey+zGs1SLqsuM2Y0uvdZR/fUNI=
golang.org/x/net v0.0.0-20180719180050-a680a1efc54d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -1327,8 +1334,8 @@ golang.org/x/net v0.0.0-20211029224645-99673261e6eb/go.mod h1:9nx3DQGgdP8bBQD5qx
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.0.0-20220812174116-3211cb980234 h1:RDqmgfe7SvlMWoqC3xwQ2blLO3fcWcxMa3eBLRdRW7E=
golang.org/x/net v0.0.0-20220812174116-3211cb980234/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
golang.org/x/net v0.1.0 h1:hZ/3BUoy5aId7sCpA/Tc5lt8DkFgdVS2onTpJsZ/fl0=
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1343,7 +1350,7 @@ golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20210402161424-2e8d93401602/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5 h1:OSnWWcOd/CtWQC2cYSBgbTSJv3ciqd8r54ySIW2y3RE=
golang.org/x/oauth2 v0.0.0-20221014153046-6fdb5e3db783 h1:nt+Q6cXKz4MosCSpnbMtqiQ8Oz0pxTef2B4Vca2lvfk=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -1356,8 +1363,8 @@ golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220819030929-7fc1605a5dde h1:ejfdSekXMDxDLbRrJMwUk6KnSLZ2McaUCVcIKM+N6jc=
golang.org/x/sync v0.0.0-20220819030929-7fc1605a5dde/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -1452,19 +1459,19 @@ golang.org/x/sys v0.0.0-20211105183446-c75c47738b0c/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20211116061358-0a5406a5449c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220702020025-31831981b65f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220818161305-2296e01440c6 h1:Sx/u41w+OwrInGdEckYmEuU5gHoGSL4QbDz3S9s6j4U=
golang.org/x/sys v0.0.0-20220818161305-2296e01440c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0 h1:kunALQeHf1/185U1i0GOB/fy1IPRDDpuoOOqRReG57U=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.0.0-20220722155259-a9ba230a4035 h1:Q5284mrmYTpACcm+eAKjKJH48BBwSyfJqmmGDTtT8Vc=
golang.org/x/term v0.0.0-20220722155259-a9ba230a4035/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0 h1:g6Z6vPFA9dYBAF7DWcH6sCcOntplXsDKcliusYijMlw=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -1473,15 +1480,16 @@ golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.4.0 h1:BrVqGRd7+k1DiOgtnFvAkoQEWQvBc25ouMJM6429SFg=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20200416051211-89c76fbcd5d1/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac h1:7zkz7BUtwNFFqcowJ+RIgu2MaV/MapERkDIy+mwPyjs=
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20220609170525-579cf78fd858 h1:Dpdu/EMxGMFgq0CeYMh4fazTD2vtlZRYE7wyynxJb9U=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180525024113-a5b4c53f6e8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -1574,8 +1582,9 @@ golang.org/x/tools v0.1.9-0.20211228192929-ee1ca4ffc4da/go.mod h1:nABZi5QlRsZVlz
golang.org/x/tools v0.1.9/go.mod h1:nABZi5QlRsZVlzPpHl034qft6wpY4eDcsTt5AaioBiU=
golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E=
golang.org/x/tools v0.1.11/go.mod h1:SgwaegtQh8clINPpECJMqnxLv9I09HLqnW3RMqW0CA4=
golang.org/x/tools v0.1.12 h1:VveCTK38A2rkS8ZqFY25HIDFscX5X9OoEhJd3quQmXU=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.2.0 h1:G6AHpWxTMGY1KyEYoAQ5WTtIekUUvDNjan3ugu60JvE=
golang.org/x/tools v0.2.0/go.mod h1:y4OqIKeOV/fWJetJ8bXPU1sEVniLMIyDAZWeHdV+NTA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -1665,8 +1674,8 @@ google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaE
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20210917145530-b395a37504d4/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20211101144312-62acf1d99145/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20220725144611-272f38e5d71b h1:SfSkJugek6xm7lWywqth4r2iTrYLpD8lOj1nMIIhMNM=
google.golang.org/genproto v0.0.0-20220725144611-272f38e5d71b/go.mod h1:iHe1svFLAZg9VWz891+QbRMwUv9O/1Ww+/mngYeThbc=
google.golang.org/genproto v0.0.0-20221024183307-1bc688fe9f3e h1:S9GbmC1iCgvbLyAokVCwiO6tVIrU9Y7c5oMx1V/ki/Y=
google.golang.org/genproto v0.0.0-20221024183307-1bc688fe9f3e/go.mod h1:9qHF0xnpdSfF6knlcsnpzUu5y+rpwgbvsyGAZPBMg4s=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
@@ -1696,8 +1705,8 @@ google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQ
google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc v1.41.0/go.mod h1:U3l9uK9J0sini8mHphKoXyaqDA/8VyGnDee1zzIUK6k=
google.golang.org/grpc v1.42.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
google.golang.org/grpc v1.50.0 h1:fPVVDxY9w++VjTZsYvXWqEf9Rqar/e+9zYfxKK+W+YU=
google.golang.org/grpc v1.50.0/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI=
google.golang.org/grpc v1.50.1 h1:DS/BukOZWp8s6p4Dt/tOaJaTQyPyOoCcrjroHuCeLzY=
google.golang.org/grpc v1.50.1/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -1710,10 +1719,9 @@ google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGj
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.28.2-0.20220831092852-f930b1dc76e8 h1:KR8+MyP7/qOlV+8Af01LtjL04bu7on42eVsxT4EyBQk=
google.golang.org/protobuf v1.28.2-0.20220831092852-f930b1dc76e8/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -1761,8 +1769,8 @@ honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.3.3 h1:oDx7VAwstgpYpb3wv0oxiZlxY+foCpRAwY7Vk6XpAgA=
honnef.co/go/tools v0.3.3/go.mod h1:jzwdWgg7Jdq75wlfblQxO4neNaFFSvgc1tD5Wv8U0Yw=
mvdan.cc/gofumpt v0.3.1 h1:avhhrOmv0IuvQVK7fvwV91oFSGAk5/6Po8GXTzICeu8=
mvdan.cc/gofumpt v0.3.1/go.mod h1:w3ymliuxvzVx8DAutBnVyDqYb1Niy/yCJt/lk821YCE=
mvdan.cc/gofumpt v0.4.0 h1:JVf4NN1mIpHogBj7ABpgOyZc65/UUOkKQFkoURsz4MM=
mvdan.cc/gofumpt v0.4.0/go.mod h1:PljLOHDeZqgS8opHRKLzp2It2VBuSdteAgqUfzMTxlQ=
mvdan.cc/interfacer v0.0.0-20180901003855-c20040233aed h1:WX1yoOaKQfddO/mLzdV4wptyWgoH/6hwLs7QHTixo0I=
mvdan.cc/interfacer v0.0.0-20180901003855-c20040233aed/go.mod h1:Xkxe497xwlCKkIaQYRfC7CSLworTXY9RMqwhhCm+8Nc=
mvdan.cc/lint v0.0.0-20170908181259-adc824a0674b h1:DxJ5nJdkhDlLok9K6qO+5290kphDJbHOQO1DFFFTeBo=

View File

@@ -48,7 +48,7 @@ func (r *Rand) init() {
}
func (r *Rand) reset(seed int64) {
r.rand = mrand.New(mrand.NewSource(seed)) //nolint:gosec
r.rand = mrand.New(mrand.NewSource(seed))
}
//----------------------------------------

View File

@@ -57,6 +57,7 @@ func TestLightClientAttackEvidence_Lunatic(t *testing.T) {
dbs.New(dbm.NewMemDB(), chainID),
light.Logger(log.TestingLogger()),
light.MaxRetryAttempts(1),
light.SkippingVerification(minimumTrustLevel),
)
require.NoError(t, err)
@@ -92,7 +93,7 @@ func TestLightClientAttackEvidence_Lunatic(t *testing.T) {
func TestLightClientAttackEvidence_Equivocation(t *testing.T) {
verificationOptions := map[string]light.Option{
"sequential": light.SequentialVerification(),
"skipping": light.SkippingVerification(light.DefaultTrustLevel),
"skipping": light.SkippingVerification(minimumTrustLevel),
}
for s, verificationOption := range verificationOptions {
@@ -230,6 +231,7 @@ func TestLightClientAttackEvidence_ForwardLunatic(t *testing.T) {
light.Logger(log.TestingLogger()),
light.MaxClockDrift(1*time.Second),
light.MaxBlockLag(1*time.Second),
light.SkippingVerification(minimumTrustLevel),
)
require.NoError(t, err)
@@ -297,6 +299,7 @@ func TestLightClientAttackEvidence_ForwardLunatic(t *testing.T) {
light.Logger(log.TestingLogger()),
light.MaxClockDrift(1*time.Second),
light.MaxBlockLag(1*time.Second),
light.SkippingVerification(minimumTrustLevel),
)
require.NoError(t, err)
@@ -327,6 +330,7 @@ func TestClientDivergentTraces1(t *testing.T) {
dbs.New(dbm.NewMemDB(), chainID),
light.Logger(log.TestingLogger()),
light.MaxRetryAttempts(1),
light.SkippingVerification(minimumTrustLevel),
)
require.Error(t, err)
assert.Contains(t, err.Error(), "does not match primary")
@@ -351,6 +355,7 @@ func TestClientDivergentTraces2(t *testing.T) {
dbs.New(dbm.NewMemDB(), chainID),
light.Logger(log.TestingLogger()),
light.MaxRetryAttempts(1),
light.SkippingVerification(minimumTrustLevel),
)
require.NoError(t, err)
@@ -422,6 +427,7 @@ func TestClientDivergentTraces4(t *testing.T) {
[]provider.Provider{witness},
dbs.New(dbm.NewMemDB(), chainID),
light.Logger(log.TestingLogger()),
light.SkippingVerification(minimumTrustLevel),
)
require.NoError(t, err)

View File

@@ -21,22 +21,22 @@ func RPCRoutes(c *lrpc.Client) map[string]*rpcserver.RPCFunc {
"health": rpcserver.NewRPCFunc(makeHealthFunc(c), ""),
"status": rpcserver.NewRPCFunc(makeStatusFunc(c), ""),
"net_info": rpcserver.NewRPCFunc(makeNetInfoFunc(c), ""),
"blockchain": rpcserver.NewRPCFunc(makeBlockchainInfoFunc(c), "minHeight,maxHeight"),
"genesis": rpcserver.NewRPCFunc(makeGenesisFunc(c), ""),
"genesis_chunked": rpcserver.NewRPCFunc(makeGenesisChunkedFunc(c), ""),
"block": rpcserver.NewRPCFunc(makeBlockFunc(c), "height"),
"header": rpcserver.NewRPCFunc(makeHeaderFunc(c), "height"),
"header_by_hash": rpcserver.NewRPCFunc(makeHeaderByHashFunc(c), "hash"),
"block_by_hash": rpcserver.NewRPCFunc(makeBlockByHashFunc(c), "hash"),
"block_results": rpcserver.NewRPCFunc(makeBlockResultsFunc(c), "height"),
"commit": rpcserver.NewRPCFunc(makeCommitFunc(c), "height"),
"tx": rpcserver.NewRPCFunc(makeTxFunc(c), "hash,prove"),
"blockchain": rpcserver.NewRPCFunc(makeBlockchainInfoFunc(c), "minHeight,maxHeight", rpcserver.Cacheable()),
"genesis": rpcserver.NewRPCFunc(makeGenesisFunc(c), "", rpcserver.Cacheable()),
"genesis_chunked": rpcserver.NewRPCFunc(makeGenesisChunkedFunc(c), "", rpcserver.Cacheable()),
"block": rpcserver.NewRPCFunc(makeBlockFunc(c), "height", rpcserver.Cacheable("height")),
"header": rpcserver.NewRPCFunc(makeHeaderFunc(c), "height", rpcserver.Cacheable("height")),
"header_by_hash": rpcserver.NewRPCFunc(makeHeaderByHashFunc(c), "hash", rpcserver.Cacheable()),
"block_by_hash": rpcserver.NewRPCFunc(makeBlockByHashFunc(c), "hash", rpcserver.Cacheable()),
"block_results": rpcserver.NewRPCFunc(makeBlockResultsFunc(c), "height", rpcserver.Cacheable("height")),
"commit": rpcserver.NewRPCFunc(makeCommitFunc(c), "height", rpcserver.Cacheable("height")),
"tx": rpcserver.NewRPCFunc(makeTxFunc(c), "hash,prove", rpcserver.Cacheable()),
"tx_search": rpcserver.NewRPCFunc(makeTxSearchFunc(c), "query,prove,page,per_page,order_by"),
"block_search": rpcserver.NewRPCFunc(makeBlockSearchFunc(c), "query,page,per_page,order_by"),
"validators": rpcserver.NewRPCFunc(makeValidatorsFunc(c), "height,page,per_page"),
"validators": rpcserver.NewRPCFunc(makeValidatorsFunc(c), "height,page,per_page", rpcserver.Cacheable("height")),
"dump_consensus_state": rpcserver.NewRPCFunc(makeDumpConsensusStateFunc(c), ""),
"consensus_state": rpcserver.NewRPCFunc(makeConsensusStateFunc(c), ""),
"consensus_params": rpcserver.NewRPCFunc(makeConsensusParamsFunc(c), "height"),
"consensus_params": rpcserver.NewRPCFunc(makeConsensusParamsFunc(c), "height", rpcserver.Cacheable("height")),
"unconfirmed_txs": rpcserver.NewRPCFunc(makeUnconfirmedTxsFunc(c), "limit"),
"num_unconfirmed_txs": rpcserver.NewRPCFunc(makeNumUnconfirmedTxsFunc(c), ""),
@@ -47,7 +47,7 @@ func RPCRoutes(c *lrpc.Client) map[string]*rpcserver.RPCFunc {
// abci API
"abci_query": rpcserver.NewRPCFunc(makeABCIQueryFunc(c), "path,data,height,prove"),
"abci_info": rpcserver.NewRPCFunc(makeABCIInfoFunc(c), ""),
"abci_info": rpcserver.NewRPCFunc(makeABCIInfoFunc(c), "", rpcserver.Cacheable()),
// evidence API
"broadcast_evidence": rpcserver.NewRPCFunc(makeBroadcastEvidenceFunc(c), "evidence"),

View File

@@ -10,22 +10,30 @@ import (
"github.com/tendermint/tendermint/types"
)
var (
// DefaultTrustLevel - new header can be trusted if at least one correct
// validator signed it.
DefaultTrustLevel = tmmath.Fraction{Numerator: 1, Denominator: 3}
)
// DefaultTrustLevel - We default to requiring that 2/3 of the voting power of the validator set be in common
// between the trusted and untrusted validator sets. This guarantees that at least 1/3 of the signing voting power is honest.
//
// Trust level must at minimum be greater than 1/3 and less than or equal to 1. Assuming that
// 1/3 or less of the validator set are byzantine (Tendermint's standard security guarantee), this equates
// to at least 1 honest node that we can trust in the untrusted signer set.
//
// Practically speaking, a trust level greater than 2/3 does not provide significantly greater
// security, as a plethora of other attack vectors become apparent at that level of byzantine power.
// The lower the trust level, the fewer intermediary verification steps need to be taken
// between a trusted header and an untrusted header. This is also dependent on how much a validator
// set changes. In chains with low turnover, 2/3 should be a good balance of speed and security.
var DefaultTrustLevel = tmmath.Fraction{Numerator: 2, Denominator: 3}
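A minimal sketch of supplying a non-default trust level, assuming only the tmmath and light packages referenced here; the 1/2 value is illustrative, not a recommendation.

package sketch

import (
    tmmath "github.com/tendermint/tendermint/libs/math"
    "github.com/tendermint/tendermint/light"
)

// customTrustOption builds a light client Option with a 1/2 trust level;
// any fraction greater than 1/3 and at most 1 is accepted by the verifier.
func customTrustOption() light.Option {
    half := tmmath.Fraction{Numerator: 1, Denominator: 2}
    return light.SkippingVerification(half)
}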
// VerifyNonAdjacent verifies non-adjacent untrustedHeader against
// trustedHeader. It ensures that:
//
// a) trustedHeader can still be trusted (if not, ErrOldHeaderExpired is returned)
// b) untrustedHeader is valid (if not, ErrInvalidHeader is returned)
// c) trustLevel ([1/3, 1]) of trustedHeaderVals (or trustedHeaderNextVals)
// c) trustLevel ([1/3, 2/3]) of trustedHeaderVals (or trustedHeaderNextVals)
// signed correctly (if not, ErrNewValSetCantBeTrusted is returned)
// d) more than 2/3 of untrustedVals have signed h2
// (otherwise, ErrInvalidHeader is returned)
// e) headers are non-adjacent.
// e) headers are non-adjacent.
//
// maxClockDrift defines how much untrustedHeader.Time can drift into the
// future.
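For orientation, a hedged sketch of a call that matches the documented parameters, mirroring the invocation in the verifier tests later in this diff; the headers, validator sets, and durations are assumed to come from the caller.

package sketch

import (
    "time"

    "github.com/tendermint/tendermint/light"
    "github.com/tendermint/tendermint/types"
)

// verifySkip checks an untrusted header against a trusted one with a one-week
// trusting period, a 10s clock-drift allowance, and the default trust level.
func verifySkip(trusted *types.SignedHeader, trustedVals *types.ValidatorSet,
    untrusted *types.SignedHeader, untrustedVals *types.ValidatorSet) error {
    return light.VerifyNonAdjacent(trusted, trustedVals, untrusted, untrustedVals,
        7*24*time.Hour, time.Now(), 10*time.Second, light.DefaultTrustLevel)
}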

View File

@@ -12,9 +12,9 @@ import (
"github.com/tendermint/tendermint/types"
)
const (
maxClockDrift = 10 * time.Second
)
const maxClockDrift = 10 * time.Second
var minimumTrustLevel = tmmath.Fraction{Numerator: 1, Denominator: 3}
func TestVerifyAdjacentHeaders(t *testing.T) {
const (
@@ -271,7 +271,7 @@ func TestVerifyNonAdjacentHeaders(t *testing.T) {
t.Run(fmt.Sprintf("#%d", i), func(t *testing.T) {
err := light.VerifyNonAdjacent(header, vals, tc.newHeader, tc.newVals, tc.trustingPeriod,
tc.now, maxClockDrift,
light.DefaultTrustLevel)
minimumTrustLevel)
switch {
case tc.expErr != nil && assert.Error(t, err):

View File

@@ -134,6 +134,7 @@ func (memR *Reactor) GetChannels() []*p2p.ChannelDescriptor {
ID: mempool.MempoolChannel,
Priority: 5,
RecvMessageCapacity: batchMsg.Size(),
MessageType: &protomem.Message{},
},
}
}
@@ -154,27 +155,34 @@ func (memR *Reactor) RemovePeer(peer p2p.Peer, reason interface{}) {
// Receive implements Reactor.
// It adds any received transactions to the mempool.
func (memR *Reactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
msg, err := memR.decodeMsg(msgBytes)
if err != nil {
memR.Logger.Error("Error decoding message", "src", src, "chId", chID, "err", err)
memR.Switch.StopPeerForError(src, err)
return
}
memR.Logger.Debug("Receive", "src", src, "chId", chID, "msg", msg)
txInfo := mempool.TxInfo{SenderID: memR.ids.GetForPeer(src)}
if src != nil {
txInfo.SenderP2PID = src.ID()
}
for _, tx := range msg.Txs {
err = memR.mempool.CheckTx(tx, nil, txInfo)
if errors.Is(err, mempool.ErrTxInCache) {
memR.Logger.Debug("Tx already exists in cache", "tx", tx.String())
} else if err != nil {
memR.Logger.Info("Could not check tx", "tx", tx.String(), "err", err)
func (memR *Reactor) Receive(e p2p.Envelope) {
memR.Logger.Debug("Receive", "src", e.Src, "chId", e.ChannelID, "msg", e.Message)
switch msg := e.Message.(type) {
case *protomem.Txs:
protoTxs := msg.GetTxs()
if len(protoTxs) == 0 {
memR.Logger.Error("received empty txs from peer", "src", e.Src)
return
}
txInfo := mempool.TxInfo{SenderID: memR.ids.GetForPeer(e.Src)}
if e.Src != nil {
txInfo.SenderP2PID = e.Src.ID()
}
var err error
for _, tx := range protoTxs {
ntx := types.Tx(tx)
err = memR.mempool.CheckTx(ntx, nil, txInfo)
if errors.Is(err, mempool.ErrTxInCache) {
memR.Logger.Debug("Tx already exists in cache", "tx", ntx.String())
} else if err != nil {
memR.Logger.Info("Could not check tx", "tx", ntx.String(), "err", err)
}
}
default:
memR.Logger.Error("unknown message type", "src", e.Src, "chId", e.ChannelID, "msg", e.Message)
memR.Switch.StopPeerForError(e.Src, fmt.Errorf("mempool cannot handle message of type: %T", e.Message))
return
}
// broadcasting happens from go routines per peer
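For reactors outside this file, the visible change is the signature: Receive now takes an already-decoded p2p.Envelope instead of raw bytes. A minimal sketch under that assumption; EchoReactor is hypothetical.

package sketch

import (
    "github.com/tendermint/tendermint/p2p"
)

// EchoReactor is a hypothetical reactor that only logs what it receives.
type EchoReactor struct {
    p2p.BaseReactor
}

// Receive gets a decoded proto message in the envelope, so there is no manual
// Unmarshal step as there was with the old []byte-based signature.
func (r *EchoReactor) Receive(e p2p.Envelope) {
    r.Logger.Debug("received message", "src", e.Src, "chID", e.ChannelID, "msg", e.Message)
}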
@@ -234,18 +242,10 @@ func (memR *Reactor) broadcastTxRoutine(peer p2p.Peer) {
// https://github.com/tendermint/tendermint/issues/5796
if _, ok := memTx.senders.Load(peerID); !ok {
msg := protomem.Message{
Sum: &protomem.Message_Txs{
Txs: &protomem.Txs{Txs: [][]byte{memTx.tx}},
},
}
bz, err := msg.Marshal()
if err != nil {
panic(err)
}
success := peer.Send(mempool.MempoolChannel, bz)
success := peer.Send(p2p.Envelope{
ChannelID: mempool.MempoolChannel,
Message: &protomem.Txs{Txs: [][]byte{memTx.tx}},
})
if !success {
time.Sleep(mempool.PeerCatchupSleepIntervalMS * time.Millisecond)
continue
@@ -264,35 +264,6 @@ func (memR *Reactor) broadcastTxRoutine(peer p2p.Peer) {
}
}
func (memR *Reactor) decodeMsg(bz []byte) (TxsMessage, error) {
msg := protomem.Message{}
err := msg.Unmarshal(bz)
if err != nil {
return TxsMessage{}, err
}
var message TxsMessage
if i, ok := msg.Sum.(*protomem.Message_Txs); ok {
txs := i.Txs.GetTxs()
if len(txs) == 0 {
return message, errors.New("empty TxsMessage")
}
decoded := make([]types.Tx, len(txs))
for j, tx := range txs {
decoded[j] = types.Tx(tx)
}
message = TxsMessage{
Txs: decoded,
}
return message, nil
}
return message, fmt.Errorf("msg type: %T is not supported", msg)
}
// TxsMessage is a Message containing transactions.
type TxsMessage struct {
Txs []types.Tx

View File

@@ -264,6 +264,10 @@ func TestMempoolIDsPanicsIfNodeRequestsOvermaxActiveIDs(t *testing.T) {
})
}
// TODO: This test checks that we don't panic and are able to generate new
// PeerIDs for each peer we add. It seems as though we should be able to test
// this in a much more direct way.
// https://github.com/tendermint/tendermint/issues/9639
func TestDontExhaustMaxActiveIDs(t *testing.T) {
config := cfg.TestConfig()
const N = 1
@@ -279,7 +283,12 @@ func TestDontExhaustMaxActiveIDs(t *testing.T) {
for i := 0; i < mempool.MaxActiveIDs+1; i++ {
peer := mock.NewPeer(nil)
reactor.Receive(mempool.MempoolChannel, peer, []byte{0x1, 0x2, 0x3})
reactor.Receive(p2p.Envelope{
ChannelID: mempool.MempoolChannel,
Src: peer,
Message: &memproto.Message{}, // This deliberately uses the wrong message type so the reactor treats the peer as being in an error state and stops it.
},
)
reactor.AddPeer(peer)
}
}

View File

@@ -133,6 +133,7 @@ func (memR *Reactor) GetChannels() []*p2p.ChannelDescriptor {
ID: mempool.MempoolChannel,
Priority: 5,
RecvMessageCapacity: batchMsg.Size(),
MessageType: &protomem.Message{},
},
}
}
@@ -153,27 +154,36 @@ func (memR *Reactor) RemovePeer(peer p2p.Peer, reason interface{}) {
// Receive implements Reactor.
// It adds any received transactions to the mempool.
func (memR *Reactor) Receive(chID byte, src p2p.Peer, msgBytes []byte) {
msg, err := memR.decodeMsg(msgBytes)
if err != nil {
memR.Logger.Error("Error decoding message", "src", src, "chId", chID, "err", err)
memR.Switch.StopPeerForError(src, err)
func (memR *Reactor) Receive(e p2p.Envelope) {
memR.Logger.Debug("Receive", "src", e.Src, "chId", e.ChannelID, "msg", e.Message)
switch msg := e.Message.(type) {
case *protomem.Txs:
protoTxs := msg.GetTxs()
if len(protoTxs) == 0 {
memR.Logger.Error("received tmpty txs from peer", "src", e.Src)
return
}
txInfo := mempool.TxInfo{SenderID: memR.ids.GetForPeer(e.Src)}
if e.Src != nil {
txInfo.SenderP2PID = e.Src.ID()
}
var err error
for _, tx := range protoTxs {
ntx := types.Tx(tx)
err = memR.mempool.CheckTx(ntx, nil, txInfo)
if errors.Is(err, mempool.ErrTxInCache) {
memR.Logger.Debug("Tx already exists in cache", "tx", ntx.String())
} else if err != nil {
memR.Logger.Info("Could not check tx", "tx", ntx.String(), "err", err)
}
}
default:
memR.Logger.Error("unknown message type", "src", e.Src, "chId", e.ChannelID, "msg", e.Message)
memR.Switch.StopPeerForError(e.Src, fmt.Errorf("mempool cannot handle message of type: %T", e.Message))
return
}
memR.Logger.Debug("Receive", "src", src, "chId", chID, "msg", msg)
txInfo := mempool.TxInfo{SenderID: memR.ids.GetForPeer(src)}
if src != nil {
txInfo.SenderP2PID = src.ID()
}
for _, tx := range msg.Txs {
err = memR.mempool.CheckTx(tx, nil, txInfo)
if err == mempool.ErrTxInCache {
memR.Logger.Debug("Tx already exists in cache", "tx", tx.String())
} else if err != nil {
memR.Logger.Info("Could not check tx", "tx", tx.String(), "err", err)
}
}
// broadcasting happens from go routines per peer
}
@@ -233,18 +243,10 @@ func (memR *Reactor) broadcastTxRoutine(peer p2p.Peer) {
// NOTE: Transaction batching was disabled due to
// https://github.com/tendermint/tendermint/issues/5796
if !memTx.HasPeer(peerID) {
msg := protomem.Message{
Sum: &protomem.Message_Txs{
Txs: &protomem.Txs{Txs: [][]byte{memTx.tx}},
},
}
bz, err := msg.Marshal()
if err != nil {
panic(err)
}
success := peer.Send(mempool.MempoolChannel, bz)
success := peer.Send(p2p.Envelope{
ChannelID: mempool.MempoolChannel,
Message: &protomem.Txs{Txs: [][]byte{memTx.tx}},
})
if !success {
time.Sleep(mempool.PeerCatchupSleepIntervalMS * time.Millisecond)
continue
@@ -268,37 +270,6 @@ func (memR *Reactor) broadcastTxRoutine(peer p2p.Peer) {
//-----------------------------------------------------------------------------
// Messages
func (memR *Reactor) decodeMsg(bz []byte) (TxsMessage, error) {
msg := protomem.Message{}
err := msg.Unmarshal(bz)
if err != nil {
return TxsMessage{}, err
}
var message TxsMessage
if i, ok := msg.Sum.(*protomem.Message_Txs); ok {
txs := i.Txs.GetTxs()
if len(txs) == 0 {
return message, errors.New("empty TxsMessage")
}
decoded := make([]types.Tx, len(txs))
for j, tx := range txs {
decoded[j] = types.Tx(tx)
}
message = TxsMessage{
Txs: decoded,
}
return message, nil
}
return message, fmt.Errorf("msg type: %T is not supported", msg)
}
//-------------------------------------
// TxsMessage is a Message containing transactions.
type TxsMessage struct {
Txs []types.Tx

View File

@@ -1,35 +0,0 @@
package node
import (
"time"
"github.com/tendermint/tendermint/crypto"
)
type ID struct {
Name string
PubKey crypto.PubKey
}
type PrivNodeID struct {
ID
PrivKey crypto.PrivKey
}
type Greeting struct {
ID
Version string
ChainID string
Message string
Time time.Time
}
type SignedNodeGreeting struct {
Greeting
Signature []byte
}
func (pnid *PrivNodeID) SignGreeting() *SignedNodeGreeting {
// greeting := NodeGreeting{}
return nil
}

View File

@@ -1,49 +1,34 @@
package node
import (
"bytes"
"context"
"errors"
"fmt"
"net"
"net/http"
"strings"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/rs/cors"
dbm "github.com/tendermint/tm-db"
abci "github.com/tendermint/tendermint/abci/types"
bc "github.com/tendermint/tendermint/blocksync"
cfg "github.com/tendermint/tendermint/config"
cs "github.com/tendermint/tendermint/consensus"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/evidence"
tmjson "github.com/tendermint/tendermint/libs/json"
"github.com/tendermint/tendermint/libs/log"
tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/light"
mempl "github.com/tendermint/tendermint/mempool"
mempoolv0 "github.com/tendermint/tendermint/mempool/v0"
mempoolv1 "github.com/tendermint/tendermint/mempool/v1"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/p2p/pex"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/proxy"
rpccore "github.com/tendermint/tendermint/rpc/core"
grpccore "github.com/tendermint/tendermint/rpc/grpc"
rpcserver "github.com/tendermint/tendermint/rpc/jsonrpc/server"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/state/indexer"
blockidxkv "github.com/tendermint/tendermint/state/indexer/block/kv"
blockidxnull "github.com/tendermint/tendermint/state/indexer/block/null"
"github.com/tendermint/tendermint/state/indexer/sink/psql"
"github.com/tendermint/tendermint/state/txindex"
"github.com/tendermint/tendermint/state/txindex/kv"
"github.com/tendermint/tendermint/state/txindex/null"
"github.com/tendermint/tendermint/statesync"
"github.com/tendermint/tendermint/store"
@@ -51,94 +36,54 @@ import (
tmtime "github.com/tendermint/tendermint/types/time"
"github.com/tendermint/tendermint/version"
_ "net/http/pprof" //nolint: gosec // securely exposed on separate, optional port
_ "github.com/lib/pq" // provide the psql db driver
_ "net/http/pprof" //nolint: gosec
)
//------------------------------------------------------------------------------
// Node is the highest level interface to a full Tendermint node.
// It includes all configuration information and running services.
type Node struct {
service.BaseService
// DBContext specifies config information for loading a new DB.
type DBContext struct {
ID string
Config *cfg.Config
}
// config
config *cfg.Config
genesisDoc *types.GenesisDoc // initial validator set
privValidator types.PrivValidator // local node's validator key
// DBProvider takes a DBContext and returns an instantiated DB.
type DBProvider func(*DBContext) (dbm.DB, error)
// network
transport *p2p.MultiplexTransport
sw *p2p.Switch // p2p connections
addrBook pex.AddrBook // known peers
nodeInfo p2p.NodeInfo
nodeKey *p2p.NodeKey // our node privkey
isListening bool
const readHeaderTimeout = 10 * time.Second
// DefaultDBProvider returns a database using the DBBackend and DBDir
// specified in the ctx.Config.
func DefaultDBProvider(ctx *DBContext) (dbm.DB, error) {
dbType := dbm.BackendType(ctx.Config.DBBackend)
return dbm.NewDB(ctx.ID, dbType, ctx.Config.DBDir())
}
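A short usage sketch of the provider above, matching how initDBs later in this file opens its stores; the "blockstore" ID is the one used there.

package sketch

import (
    cfg "github.com/tendermint/tendermint/config"
    "github.com/tendermint/tendermint/node"
    dbm "github.com/tendermint/tm-db"
)

// openBlockStoreDB opens the block store database through the default provider.
func openBlockStoreDB(config *cfg.Config) (dbm.DB, error) {
    return node.DefaultDBProvider(&node.DBContext{ID: "blockstore", Config: config})
}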
// GenesisDocProvider returns a GenesisDoc.
// It allows the GenesisDoc to be pulled from sources other than the
// filesystem, for instance from a distributed key-value store cluster.
type GenesisDocProvider func() (*types.GenesisDoc, error)
// DefaultGenesisDocProviderFunc returns a GenesisDocProvider that loads
// the GenesisDoc from the config.GenesisFile() on the filesystem.
func DefaultGenesisDocProviderFunc(config *cfg.Config) GenesisDocProvider {
return func() (*types.GenesisDoc, error) {
return types.GenesisDocFromFile(config.GenesisFile())
}
}
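A hedged sketch of consuming the provider: DefaultGenesisDocProviderFunc returns a closure, so loading the genesis doc is a second call.

package sketch

import (
    cfg "github.com/tendermint/tendermint/config"
    "github.com/tendermint/tendermint/node"
    "github.com/tendermint/tendermint/types"
)

// genesisFromFile loads the genesis doc via the default file-based provider.
func genesisFromFile(config *cfg.Config) (*types.GenesisDoc, error) {
    provider := node.DefaultGenesisDocProviderFunc(config)
    return provider()
}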
// Provider takes a config and a logger and returns a ready to go Node.
type Provider func(*cfg.Config, log.Logger) (*Node, error)
// DefaultNewNode returns a Tendermint node with default settings for the
// PrivValidator, ClientCreator, GenesisDoc, and DBProvider.
// It implements NodeProvider.
func DefaultNewNode(config *cfg.Config, logger log.Logger) (*Node, error) {
nodeKey, err := p2p.LoadOrGenNodeKey(config.NodeKeyFile())
if err != nil {
return nil, fmt.Errorf("failed to load or gen node key %s: %w", config.NodeKeyFile(), err)
}
return NewNode(config,
privval.LoadOrGenFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile()),
nodeKey,
proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir()),
DefaultGenesisDocProviderFunc(config),
DefaultDBProvider,
DefaultMetricsProvider(config.Instrumentation),
logger,
)
}
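As a hedged usage note, DefaultNewNode is the one-call entry point; the config and logger below are illustrative, and a real deployment would point config.RootDir at an initialized home directory.

package sketch

import (
    "os"

    cfg "github.com/tendermint/tendermint/config"
    "github.com/tendermint/tendermint/libs/log"
    "github.com/tendermint/tendermint/node"
)

// newDefaultNode builds a node from a default config and a stdout logger.
func newDefaultNode() (*node.Node, error) {
    config := cfg.DefaultConfig()
    logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
    return node.DefaultNewNode(config, logger)
}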
// MetricsProvider returns a consensus, p2p and mempool Metrics.
type MetricsProvider func(chainID string) (*cs.Metrics, *p2p.Metrics, *mempl.Metrics, *sm.Metrics, *proxy.Metrics)
// DefaultMetricsProvider returns Metrics build using Prometheus client library
// if Prometheus is enabled. Otherwise, it returns no-op Metrics.
func DefaultMetricsProvider(config *cfg.InstrumentationConfig) MetricsProvider {
return func(chainID string) (*cs.Metrics, *p2p.Metrics, *mempl.Metrics, *sm.Metrics, *proxy.Metrics) {
if config.Prometheus {
return cs.PrometheusMetrics(config.Namespace, "chain_id", chainID),
p2p.PrometheusMetrics(config.Namespace, "chain_id", chainID),
mempl.PrometheusMetrics(config.Namespace, "chain_id", chainID),
sm.PrometheusMetrics(config.Namespace, "chain_id", chainID),
proxy.PrometheusMetrics(config.Namespace, "chain_id", chainID)
}
return cs.NopMetrics(), p2p.NopMetrics(), mempl.NopMetrics(), sm.NopMetrics(), proxy.NopMetrics()
}
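A hedged sketch of consuming the metrics provider; the chain ID is illustrative, and with Prometheus disabled the five returned structs are the no-op implementations.

package sketch

import (
    cfg "github.com/tendermint/tendermint/config"
    "github.com/tendermint/tendermint/node"
)

// metricsFor resolves the consensus, p2p, mempool, state, and proxy metrics
// for a single chain ID.
func metricsFor(config *cfg.Config, chainID string) {
    provider := node.DefaultMetricsProvider(config.Instrumentation)
    csM, p2pM, memplM, smM, proxyM := provider(chainID)
    _, _, _, _, _ = csM, p2pM, memplM, smM, proxyM // wire these into the reactors
}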
// services
eventBus *types.EventBus // pub/sub for services
stateStore sm.Store
blockStore *store.BlockStore // store the blockchain to disk
bcReactor p2p.Reactor // for block-syncing
mempoolReactor p2p.Reactor // for gossipping transactions
mempool mempl.Mempool
stateSync bool // whether the node should state sync on startup
stateSyncReactor *statesync.Reactor // for hosting and restoring state sync snapshots
stateSyncProvider statesync.StateProvider // provides state data for bootstrapping a node
stateSyncGenesis sm.State // provides the genesis state for state sync
consensusState *cs.State // latest consensus state
consensusReactor *cs.Reactor // for participating in the consensus
pexReactor *pex.Reactor // for exchanging peer addresses
evidencePool *evidence.Pool // tracking evidence
proxyApp proxy.AppConns // connection to the application
rpcListeners []net.Listener // rpc servers
txIndexer txindex.TxIndexer
blockIndexer indexer.BlockIndexer
indexerService *txindex.IndexerService
prometheusSrv *http.Server
pprofSrv *http.Server
}
// Option sets a parameter for the node.
type Option func(*Node)
// Temporary interface for switching to block sync, we should get rid of v0 and v1 reactors.
// See: https://github.com/tendermint/tendermint/issues/4595
type blockSyncReactor interface {
SwitchToBlockSync(sm.State) error
}
// CustomReactors allows you to add custom reactors (name -> p2p.Reactor) to
// the node's Switch.
//
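The option is typically passed to NewNode as sketched below; the "ECHO" name and the reactor value are hypothetical.

package sketch

import (
    "github.com/tendermint/tendermint/node"
    "github.com/tendermint/tendermint/p2p"
)

// withEchoReactor wires a custom reactor into the node's Switch under the
// name "ECHO".
func withEchoReactor(r p2p.Reactor) node.Option {
    return node.CustomReactors(map[string]p2p.Reactor{"ECHO": r})
}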
@@ -190,517 +135,6 @@ func StateProvider(stateProvider statesync.StateProvider) Option {
//------------------------------------------------------------------------------
// Node is the highest level interface to a full Tendermint node.
// It includes all configuration information and running services.
type Node struct {
service.BaseService
// config
config *cfg.Config
genesisDoc *types.GenesisDoc // initial validator set
privValidator types.PrivValidator // local node's validator key
// network
transport *p2p.MultiplexTransport
sw *p2p.Switch // p2p connections
addrBook pex.AddrBook // known peers
nodeInfo p2p.NodeInfo
nodeKey *p2p.NodeKey // our node privkey
isListening bool
// services
eventBus *types.EventBus // pub/sub for services
stateStore sm.Store
blockStore *store.BlockStore // store the blockchain to disk
bcReactor p2p.Reactor // for block-syncing
mempoolReactor p2p.Reactor // for gossipping transactions
mempool mempl.Mempool
stateSync bool // whether the node should state sync on startup
stateSyncReactor *statesync.Reactor // for hosting and restoring state sync snapshots
stateSyncProvider statesync.StateProvider // provides state data for bootstrapping a node
stateSyncGenesis sm.State // provides the genesis state for state sync
consensusState *cs.State // latest consensus state
consensusReactor *cs.Reactor // for participating in the consensus
pexReactor *pex.Reactor // for exchanging peer addresses
evidencePool *evidence.Pool // tracking evidence
proxyApp proxy.AppConns // connection to the application
rpcListeners []net.Listener // rpc servers
txIndexer txindex.TxIndexer
blockIndexer indexer.BlockIndexer
indexerService *txindex.IndexerService
prometheusSrv *http.Server
}
func initDBs(config *cfg.Config, dbProvider DBProvider) (blockStore *store.BlockStore, stateDB dbm.DB, err error) {
var blockStoreDB dbm.DB
blockStoreDB, err = dbProvider(&DBContext{"blockstore", config})
if err != nil {
return
}
blockStore = store.NewBlockStore(blockStoreDB)
stateDB, err = dbProvider(&DBContext{"state", config})
if err != nil {
return
}
return
}
func createAndStartProxyAppConns(clientCreator proxy.ClientCreator, logger log.Logger, metrics *proxy.Metrics) (proxy.AppConns, error) {
proxyApp := proxy.NewAppConns(clientCreator, metrics)
proxyApp.SetLogger(logger.With("module", "proxy"))
if err := proxyApp.Start(); err != nil {
return nil, fmt.Errorf("error starting proxy app connections: %v", err)
}
return proxyApp, nil
}
func createAndStartEventBus(logger log.Logger) (*types.EventBus, error) {
eventBus := types.NewEventBus()
eventBus.SetLogger(logger.With("module", "events"))
if err := eventBus.Start(); err != nil {
return nil, err
}
return eventBus, nil
}
func createAndStartIndexerService(
config *cfg.Config,
chainID string,
dbProvider DBProvider,
eventBus *types.EventBus,
logger log.Logger,
) (*txindex.IndexerService, txindex.TxIndexer, indexer.BlockIndexer, error) {
var (
txIndexer txindex.TxIndexer
blockIndexer indexer.BlockIndexer
)
switch config.TxIndex.Indexer {
case "kv":
store, err := dbProvider(&DBContext{"tx_index", config})
if err != nil {
return nil, nil, nil, err
}
txIndexer = kv.NewTxIndex(store)
blockIndexer = blockidxkv.New(dbm.NewPrefixDB(store, []byte("block_events")))
case "psql":
if config.TxIndex.PsqlConn == "" {
return nil, nil, nil, errors.New(`no psql-conn is set for the "psql" indexer`)
}
es, err := psql.NewEventSink(config.TxIndex.PsqlConn, chainID)
if err != nil {
return nil, nil, nil, fmt.Errorf("creating psql indexer: %w", err)
}
txIndexer = es.TxIndexer()
blockIndexer = es.BlockIndexer()
default:
txIndexer = &null.TxIndex{}
blockIndexer = &blockidxnull.BlockerIndexer{}
}
indexerService := txindex.NewIndexerService(txIndexer, blockIndexer, eventBus, false)
indexerService.SetLogger(logger.With("module", "txindex"))
if err := indexerService.Start(); err != nil {
return nil, nil, nil, err
}
return indexerService, txIndexer, blockIndexer, nil
}
func doHandshake(
stateStore sm.Store,
state sm.State,
blockStore sm.BlockStore,
genDoc *types.GenesisDoc,
eventBus types.BlockEventPublisher,
proxyApp proxy.AppConns,
consensusLogger log.Logger,
) error {
handshaker := cs.NewHandshaker(stateStore, state, blockStore, genDoc)
handshaker.SetLogger(consensusLogger)
handshaker.SetEventBus(eventBus)
if err := handshaker.Handshake(proxyApp); err != nil {
return fmt.Errorf("error during handshake: %v", err)
}
return nil
}
func logNodeStartupInfo(state sm.State, pubKey crypto.PubKey, logger, consensusLogger log.Logger) {
// Log the version info.
logger.Info("Version info",
"tendermint_version", version.TMCoreSemVer,
"abci", version.ABCISemVer,
"block", version.BlockProtocol,
"p2p", version.P2PProtocol,
"commit_hash", version.TMGitCommitHash,
)
// If the state and software differ in block version, at least log it.
if state.Version.Consensus.Block != version.BlockProtocol {
logger.Info("Software and state have different block protocols",
"software", version.BlockProtocol,
"state", state.Version.Consensus.Block,
)
}
addr := pubKey.Address()
// Log whether this node is a validator or an observer
if state.Validators.HasAddress(addr) {
consensusLogger.Info("This node is a validator", "addr", addr, "pubKey", pubKey)
} else {
consensusLogger.Info("This node is not a validator", "addr", addr, "pubKey", pubKey)
}
}
func onlyValidatorIsUs(state sm.State, pubKey crypto.PubKey) bool {
if state.Validators.Size() > 1 {
return false
}
addr, _ := state.Validators.GetByIndex(0)
return bytes.Equal(pubKey.Address(), addr)
}
func createMempoolAndMempoolReactor(
config *cfg.Config,
proxyApp proxy.AppConns,
state sm.State,
memplMetrics *mempl.Metrics,
logger log.Logger,
) (mempl.Mempool, p2p.Reactor) {
switch config.Mempool.Version {
case cfg.MempoolV1:
mp := mempoolv1.NewTxMempool(
logger,
config.Mempool,
proxyApp.Mempool(),
state.LastBlockHeight,
mempoolv1.WithMetrics(memplMetrics),
mempoolv1.WithPreCheck(sm.TxPreCheck(state)),
mempoolv1.WithPostCheck(sm.TxPostCheck(state)),
)
reactor := mempoolv1.NewReactor(
config.Mempool,
mp,
)
if config.Consensus.WaitForTxs() {
mp.EnableTxsAvailable()
}
return mp, reactor
case cfg.MempoolV0:
mp := mempoolv0.NewCListMempool(
config.Mempool,
proxyApp.Mempool(),
state.LastBlockHeight,
mempoolv0.WithMetrics(memplMetrics),
mempoolv0.WithPreCheck(sm.TxPreCheck(state)),
mempoolv0.WithPostCheck(sm.TxPostCheck(state)),
)
mp.SetLogger(logger)
reactor := mempoolv0.NewReactor(
config.Mempool,
mp,
)
if config.Consensus.WaitForTxs() {
mp.EnableTxsAvailable()
}
return mp, reactor
default:
return nil, nil
}
}
func createEvidenceReactor(config *cfg.Config, dbProvider DBProvider,
stateDB dbm.DB, blockStore *store.BlockStore, logger log.Logger,
) (*evidence.Reactor, *evidence.Pool, error) {
evidenceDB, err := dbProvider(&DBContext{"evidence", config})
if err != nil {
return nil, nil, err
}
evidenceLogger := logger.With("module", "evidence")
evidencePool, err := evidence.NewPool(evidenceDB, sm.NewStore(stateDB, sm.StoreOptions{
DiscardABCIResponses: config.Storage.DiscardABCIResponses,
}), blockStore)
if err != nil {
return nil, nil, err
}
evidenceReactor := evidence.NewReactor(evidencePool)
evidenceReactor.SetLogger(evidenceLogger)
return evidenceReactor, evidencePool, nil
}
func createBlocksyncReactor(config *cfg.Config,
state sm.State,
blockExec *sm.BlockExecutor,
blockStore *store.BlockStore,
blockSync bool,
logger log.Logger,
) (bcReactor p2p.Reactor, err error) {
switch config.BlockSync.Version {
case "v0":
bcReactor = bc.NewReactor(state.Copy(), blockExec, blockStore, blockSync)
case "v1", "v2":
return nil, fmt.Errorf("block sync version %s has been deprecated. Please use v0", config.BlockSync.Version)
default:
return nil, fmt.Errorf("unknown fastsync version %s", config.BlockSync.Version)
}
bcReactor.SetLogger(logger.With("module", "blocksync"))
return bcReactor, nil
}
func createConsensusReactor(config *cfg.Config,
state sm.State,
blockExec *sm.BlockExecutor,
blockStore sm.BlockStore,
mempool mempl.Mempool,
evidencePool *evidence.Pool,
privValidator types.PrivValidator,
csMetrics *cs.Metrics,
waitSync bool,
eventBus *types.EventBus,
consensusLogger log.Logger,
) (*cs.Reactor, *cs.State) {
consensusState := cs.NewState(
config.Consensus,
state.Copy(),
blockExec,
blockStore,
mempool,
evidencePool,
cs.StateMetrics(csMetrics),
)
consensusState.SetLogger(consensusLogger)
if privValidator != nil {
consensusState.SetPrivValidator(privValidator)
}
consensusReactor := cs.NewReactor(consensusState, waitSync, cs.ReactorMetrics(csMetrics))
consensusReactor.SetLogger(consensusLogger)
// services which will be publishing and/or subscribing for messages (events)
// consensusReactor will set it on consensusState and blockExecutor
consensusReactor.SetEventBus(eventBus)
return consensusReactor, consensusState
}
func createTransport(
config *cfg.Config,
nodeInfo p2p.NodeInfo,
nodeKey *p2p.NodeKey,
proxyApp proxy.AppConns,
) (
*p2p.MultiplexTransport,
[]p2p.PeerFilterFunc,
) {
var (
mConnConfig = p2p.MConnConfig(config.P2P)
transport = p2p.NewMultiplexTransport(nodeInfo, *nodeKey, mConnConfig)
connFilters = []p2p.ConnFilterFunc{}
peerFilters = []p2p.PeerFilterFunc{}
)
if !config.P2P.AllowDuplicateIP {
connFilters = append(connFilters, p2p.ConnDuplicateIPFilter())
}
// Filter peers by addr or pubkey with an ABCI query.
// If the query return code is OK, add peer.
if config.FilterPeers {
connFilters = append(
connFilters,
// ABCI query for address filtering.
func(_ p2p.ConnSet, c net.Conn, _ []net.IP) error {
res, err := proxyApp.Query().QuerySync(abci.RequestQuery{
Path: fmt.Sprintf("/p2p/filter/addr/%s", c.RemoteAddr().String()),
})
if err != nil {
return err
}
if res.IsErr() {
return fmt.Errorf("error querying abci app: %v", res)
}
return nil
},
)
peerFilters = append(
peerFilters,
// ABCI query for ID filtering.
func(_ p2p.IPeerSet, p p2p.Peer) error {
res, err := proxyApp.Query().QuerySync(abci.RequestQuery{
Path: fmt.Sprintf("/p2p/filter/id/%s", p.ID()),
})
if err != nil {
return err
}
if res.IsErr() {
return fmt.Errorf("error querying abci app: %v", res)
}
return nil
},
)
}
p2p.MultiplexTransportConnFilters(connFilters...)(transport)
// Limit the number of incoming connections.
max := config.P2P.MaxNumInboundPeers + len(splitAndTrimEmpty(config.P2P.UnconditionalPeerIDs, ",", " "))
p2p.MultiplexTransportMaxIncomingConnections(max)(transport)
return transport, peerFilters
}
func createSwitch(config *cfg.Config,
transport p2p.Transport,
p2pMetrics *p2p.Metrics,
peerFilters []p2p.PeerFilterFunc,
mempoolReactor p2p.Reactor,
bcReactor p2p.Reactor,
stateSyncReactor *statesync.Reactor,
consensusReactor *cs.Reactor,
evidenceReactor *evidence.Reactor,
nodeInfo p2p.NodeInfo,
nodeKey *p2p.NodeKey,
p2pLogger log.Logger,
) *p2p.Switch {
sw := p2p.NewSwitch(
config.P2P,
transport,
p2p.WithMetrics(p2pMetrics),
p2p.SwitchPeerFilters(peerFilters...),
)
sw.SetLogger(p2pLogger)
sw.AddReactor("MEMPOOL", mempoolReactor)
sw.AddReactor("BLOCKSYNC", bcReactor)
sw.AddReactor("CONSENSUS", consensusReactor)
sw.AddReactor("EVIDENCE", evidenceReactor)
sw.AddReactor("STATESYNC", stateSyncReactor)
sw.SetNodeInfo(nodeInfo)
sw.SetNodeKey(nodeKey)
p2pLogger.Info("P2P Node ID", "ID", nodeKey.ID(), "file", config.NodeKeyFile())
return sw
}
func createAddrBookAndSetOnSwitch(config *cfg.Config, sw *p2p.Switch,
p2pLogger log.Logger, nodeKey *p2p.NodeKey,
) (pex.AddrBook, error) {
addrBook := pex.NewAddrBook(config.P2P.AddrBookFile(), config.P2P.AddrBookStrict)
addrBook.SetLogger(p2pLogger.With("book", config.P2P.AddrBookFile()))
// Add ourselves to addrbook to prevent dialing ourselves
if config.P2P.ExternalAddress != "" {
addr, err := p2p.NewNetAddressString(p2p.IDAddressString(nodeKey.ID(), config.P2P.ExternalAddress))
if err != nil {
return nil, fmt.Errorf("p2p.external_address is incorrect: %w", err)
}
addrBook.AddOurAddress(addr)
}
if config.P2P.ListenAddress != "" {
addr, err := p2p.NewNetAddressString(p2p.IDAddressString(nodeKey.ID(), config.P2P.ListenAddress))
if err != nil {
return nil, fmt.Errorf("p2p.laddr is incorrect: %w", err)
}
addrBook.AddOurAddress(addr)
}
sw.SetAddrBook(addrBook)
return addrBook, nil
}
func createPEXReactorAndAddToSwitch(addrBook pex.AddrBook, config *cfg.Config,
sw *p2p.Switch, logger log.Logger,
) *pex.Reactor {
// TODO persistent peers ? so we can have their DNS addrs saved
pexReactor := pex.NewReactor(addrBook,
&pex.ReactorConfig{
Seeds: splitAndTrimEmpty(config.P2P.Seeds, ",", " "),
SeedMode: config.P2P.SeedMode,
// See consensus/reactor.go: blocksToContributeToBecomeGoodPeer 10000
// blocks assuming 10s blocks ~ 28 hours.
// TODO (melekes): make it dynamic based on the actual block latencies
// from the live network.
// https://github.com/tendermint/tendermint/issues/3523
SeedDisconnectWaitPeriod: 28 * time.Hour,
PersistentPeersMaxDialPeriod: config.P2P.PersistentPeersMaxDialPeriod,
})
pexReactor.SetLogger(logger.With("module", "pex"))
sw.AddReactor("PEX", pexReactor)
return pexReactor
}
// startStateSync starts an asynchronous state sync process, then switches to block sync mode.
func startStateSync(ssR *statesync.Reactor, bcR blockSyncReactor, conR *cs.Reactor,
stateProvider statesync.StateProvider, config *cfg.StateSyncConfig, blockSync bool,
stateStore sm.Store, blockStore *store.BlockStore, state sm.State,
) error {
ssR.Logger.Info("Starting state sync")
if stateProvider == nil {
var err error
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
stateProvider, err = statesync.NewLightClientStateProvider(
ctx,
state.ChainID, state.Version, state.InitialHeight,
config.RPCServers, light.TrustOptions{
Period: config.TrustPeriod,
Height: config.TrustHeight,
Hash: config.TrustHashBytes(),
}, ssR.Logger.With("module", "light"))
if err != nil {
return fmt.Errorf("failed to set up light client state provider: %w", err)
}
}
go func() {
state, commit, err := ssR.Sync(stateProvider, config.DiscoveryTime)
if err != nil {
ssR.Logger.Error("State sync failed", "err", err)
return
}
err = stateStore.Bootstrap(state)
if err != nil {
ssR.Logger.Error("Failed to bootstrap node with new state", "err", err)
return
}
err = blockStore.SaveSeenCommit(state.LastBlockHeight, commit)
if err != nil {
ssR.Logger.Error("Failed to store last seen commit", "err", err)
return
}
if blockSync {
// FIXME Very ugly to have these metrics bleed through here.
conR.Metrics.StateSyncing.Set(0)
conR.Metrics.BlockSyncing.Set(1)
err = bcR.SwitchToBlockSync(state)
if err != nil {
ssR.Logger.Error("Failed to switch to block sync", "err", err)
return
}
} else {
conR.SwitchToConsensus(state, true)
}
}()
return nil
}
// NewNode returns a new, ready to go, Tendermint Node.
func NewNode(config *cfg.Config,
privValidator types.PrivValidator,
@@ -798,7 +232,7 @@ func NewNode(config *cfg.Config,
mempool, mempoolReactor := createMempoolAndMempoolReactor(config, proxyApp, state, memplMetrics, logger)
// Make Evidence Reactor
evidenceReactor, evidencePool, err := createEvidenceReactor(config, dbProvider, stateDB, blockStore, logger)
evidenceReactor, evidencePool, err := createEvidenceReactor(config, dbProvider, stateStore, blockStore, logger)
if err != nil {
return nil, err
}
@@ -891,12 +325,8 @@ func NewNode(config *cfg.Config,
pexReactor = createPEXReactorAndAddToSwitch(addrBook, config, sw, logger)
}
if config.RPC.PprofListenAddress != "" {
go func() {
logger.Info("Starting pprof server", "laddr", config.RPC.PprofListenAddress)
logger.Error("pprof server error", "err", http.ListenAndServe(config.RPC.PprofListenAddress, nil))
}()
}
// Add private IDs to addrbook to block those peers being added
addrBook.AddPrivateIDs(splitAndTrimEmpty(config.P2P.PrivatePeerIDs, ",", " "))
node := &Node{
config: config,
@@ -945,8 +375,15 @@ func (n *Node) OnStart() error {
time.Sleep(genTime.Sub(now))
}
// Add private IDs to addrbook to block those peers being added
n.addrBook.AddPrivateIDs(splitAndTrimEmpty(n.config.P2P.PrivatePeerIDs, ",", " "))
// run pprof server if it is enabled
if n.config.RPC.IsPprofEnabled() {
n.pprofSrv = n.startPprofServer()
}
// begin prometheus metrics gathering if it is enabled
if n.config.Instrumentation.IsPrometheusEnabled() {
n.prometheusSrv = n.startPrometheusServer()
}
// Start the RPC server before the P2P server
// so we can eg. receive txs for the first block
@@ -958,11 +395,6 @@ func (n *Node) OnStart() error {
n.rpcListeners = listeners
}
if n.config.Instrumentation.Prometheus &&
n.config.Instrumentation.PrometheusListenAddr != "" {
n.prometheusSrv = n.startPrometheusServer(n.config.Instrumentation.PrometheusListenAddr)
}
// Start the transport.
addr, err := p2p.NewNetAddressString(p2p.IDAddressString(n.nodeKey.ID(), n.config.P2P.ListenAddress))
if err != nil {
@@ -1047,6 +479,11 @@ func (n *Node) OnStop() {
n.Logger.Error("Prometheus HTTP server Shutdown", "err", err)
}
}
if n.pprofSrv != nil {
if err := n.pprofSrv.Shutdown(context.Background()); err != nil {
n.Logger.Error("Pprof HTTP server Shutdown", "err", err)
}
}
if n.blockStore != nil {
if err := n.blockStore.Close(); err != nil {
n.Logger.Error("problem closing blockstore", "err", err)
@@ -1215,9 +652,9 @@ func (n *Node) startRPC() ([]net.Listener, error) {
// startPrometheusServer starts a Prometheus HTTP server, listening for metrics
// collectors on addr.
func (n *Node) startPrometheusServer(addr string) *http.Server {
func (n *Node) startPrometheusServer() *http.Server {
srv := &http.Server{
Addr: addr,
Addr: n.config.Instrumentation.PrometheusListenAddr,
Handler: promhttp.InstrumentMetricHandler(
prometheus.DefaultRegisterer, promhttp.HandlerFor(
prometheus.DefaultGatherer,
@@ -1235,6 +672,22 @@ func (n *Node) startPrometheusServer(addr string) *http.Server {
return srv
}
// startPprofServer starts a pprof HTTP server on the address configured in RPC.PprofListenAddress.
func (n *Node) startPprofServer() *http.Server {
srv := &http.Server{
Addr: n.config.RPC.PprofListenAddress,
Handler: nil,
ReadHeaderTimeout: readHeaderTimeout,
}
go func() {
if err := srv.ListenAndServe(); err != http.ErrServerClosed {
// Error starting or closing listener:
n.Logger.Error("pprof HTTP server ListenAndServe", "err", err)
}
}()
return srv
}
// Switch returns the Node's Switch.
func (n *Node) Switch() *p2p.Switch {
return n.sw
@@ -1368,123 +821,3 @@ func makeNodeInfo(
err := nodeInfo.Validate()
return nodeInfo, err
}
//------------------------------------------------------------------------------
var genesisDocKey = []byte("genesisDoc")
// LoadStateFromDBOrGenesisDocProvider attempts to load the state from the
// database, or creates one using the given genesisDocProvider. On success this also
// returns the genesis doc loaded through the given provider.
func LoadStateFromDBOrGenesisDocProvider(
stateDB dbm.DB,
genesisDocProvider GenesisDocProvider,
) (sm.State, *types.GenesisDoc, error) {
// Get genesis doc
genDoc, err := loadGenesisDoc(stateDB)
if err != nil {
genDoc, err = genesisDocProvider()
if err != nil {
return sm.State{}, nil, err
}
err = genDoc.ValidateAndComplete()
if err != nil {
return sm.State{}, nil, fmt.Errorf("error in genesis doc: %w", err)
}
// Save the genesis doc to prevent a certain class of user errors (e.g. when it
// is changed, accidentally or not). It also provides an audit trail.
if err := saveGenesisDoc(stateDB, genDoc); err != nil {
return sm.State{}, nil, err
}
}
stateStore := sm.NewStore(stateDB, sm.StoreOptions{
DiscardABCIResponses: false,
})
state, err := stateStore.LoadFromDBOrGenesisDoc(genDoc)
if err != nil {
return sm.State{}, nil, err
}
return state, genDoc, nil
}
// loadGenesisDoc loads the genesis document from the database; it panics if the bytes cannot be unmarshaled.
func loadGenesisDoc(db dbm.DB) (*types.GenesisDoc, error) {
b, err := db.Get(genesisDocKey)
if err != nil {
panic(err)
}
if len(b) == 0 {
return nil, errors.New("genesis doc not found")
}
var genDoc *types.GenesisDoc
err = tmjson.Unmarshal(b, &genDoc)
if err != nil {
panic(fmt.Sprintf("Failed to load genesis doc due to unmarshaling error: %v (bytes: %X)", err, b))
}
return genDoc, nil
}
// saveGenesisDoc stores the genesis document in the database, returning an error if marshaling fails.
func saveGenesisDoc(db dbm.DB, genDoc *types.GenesisDoc) error {
b, err := tmjson.Marshal(genDoc)
if err != nil {
return fmt.Errorf("failed to save genesis doc due to marshaling error: %w", err)
}
if err := db.SetSync(genesisDocKey, b); err != nil {
return err
}
return nil
}
func createAndStartPrivValidatorSocketClient(
listenAddr,
chainID string,
logger log.Logger,
) (types.PrivValidator, error) {
pve, err := privval.NewSignerListener(listenAddr, logger)
if err != nil {
return nil, fmt.Errorf("failed to start private validator: %w", err)
}
pvsc, err := privval.NewSignerClient(pve, chainID)
if err != nil {
return nil, fmt.Errorf("failed to start private validator: %w", err)
}
// try to get a pubkey from the private validator first
_, err = pvsc.GetPubKey()
if err != nil {
return nil, fmt.Errorf("can't get pubkey: %w", err)
}
const (
retries = 50 // 50 * 100ms = 5s total
timeout = 100 * time.Millisecond
)
pvscWithRetries := privval.NewRetrySignerClient(pvsc, retries, timeout)
return pvscWithRetries, nil
}
// splitAndTrimEmpty slices s into all substrings separated by sep and trims from
// each substring every leading and trailing Unicode code point contained in cutset.
// If sep is empty, it splits after each UTF-8 sequence, as strings.Split does.
// Empty strings are filtered out: only non-empty strings are returned.
func splitAndTrimEmpty(s, sep, cutset string) []string {
if s == "" {
return []string{}
}
spl := strings.Split(s, sep)
nonEmptyStrings := make([]string, 0, len(spl))
for i := 0; i < len(spl); i++ {
element := strings.Trim(spl[i], cutset)
if element != "" {
nonEmptyStrings = append(nonEmptyStrings, element)
}
}
return nonEmptyStrings
}


@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"net"
"net/http"
"os"
"syscall"
"testing"
@@ -135,6 +136,29 @@ func TestNodeSetAppVersion(t *testing.T) {
assert.Equal(t, n.nodeInfo.(p2p.DefaultNodeInfo).ProtocolVersion.App, appVersion)
}
func TestPprofServer(t *testing.T) {
config := test.ResetTestRoot("node_pprof_test")
defer os.RemoveAll(config.RootDir)
config.RPC.PprofListenAddress = testFreeAddr(t)
// should not work yet
_, err := http.Get("http://" + config.RPC.PprofListenAddress) //nolint: bodyclose
assert.Error(t, err)
n, err := DefaultNewNode(config, log.TestingLogger())
assert.NoError(t, err)
assert.NoError(t, n.Start())
defer func() {
require.NoError(t, n.Stop())
}()
assert.NotNil(t, n.pprofSrv)
resp, err := http.Get("http://" + config.RPC.PprofListenAddress + "/debug/pprof")
assert.NoError(t, err)
defer resp.Body.Close()
assert.Equal(t, 200, resp.StatusCode)
}
func TestNodeSetPrivValTCP(t *testing.T) {
addr := "tcp://" + testFreeAddr(t)

node/setup.go Normal file

@@ -0,0 +1,711 @@
package node
import (
"bytes"
"context"
"errors"
"fmt"
"net"
"strings"
"time"
dbm "github.com/tendermint/tm-db"
abci "github.com/tendermint/tendermint/abci/types"
bc "github.com/tendermint/tendermint/blocksync"
cfg "github.com/tendermint/tendermint/config"
cs "github.com/tendermint/tendermint/consensus"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/evidence"
tmjson "github.com/tendermint/tendermint/libs/json"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/light"
mempl "github.com/tendermint/tendermint/mempool"
mempoolv0 "github.com/tendermint/tendermint/mempool/v0"
mempoolv1 "github.com/tendermint/tendermint/mempool/v1"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/p2p/pex"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/state/indexer"
blockidxkv "github.com/tendermint/tendermint/state/indexer/block/kv"
blockidxnull "github.com/tendermint/tendermint/state/indexer/block/null"
"github.com/tendermint/tendermint/state/indexer/sink/psql"
"github.com/tendermint/tendermint/state/txindex"
"github.com/tendermint/tendermint/state/txindex/kv"
"github.com/tendermint/tendermint/state/txindex/null"
"github.com/tendermint/tendermint/statesync"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/version"
_ "github.com/lib/pq" // provide the psql db driver
)
// DBContext specifies config information for loading a new DB.
type DBContext struct {
ID string
Config *cfg.Config
}
// DBProvider takes a DBContext and returns an instantiated DB.
type DBProvider func(*DBContext) (dbm.DB, error)
const readHeaderTimeout = 10 * time.Second
// DefaultDBProvider returns a database using the DBBackend and DBDir
// specified in the ctx.Config.
func DefaultDBProvider(ctx *DBContext) (dbm.DB, error) {
dbType := dbm.BackendType(ctx.Config.DBBackend)
return dbm.NewDB(ctx.ID, dbType, ctx.Config.DBDir())
}
// GenesisDocProvider returns a GenesisDoc.
// It allows the GenesisDoc to be pulled from sources other than the
// filesystem, for instance from a distributed key-value store cluster.
type GenesisDocProvider func() (*types.GenesisDoc, error)
// DefaultGenesisDocProviderFunc returns a GenesisDocProvider that loads
// the GenesisDoc from the config.GenesisFile() on the filesystem.
func DefaultGenesisDocProviderFunc(config *cfg.Config) GenesisDocProvider {
return func() (*types.GenesisDoc, error) {
return types.GenesisDocFromFile(config.GenesisFile())
}
}
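// Illustrative sketch (not part of the original diff): a GenesisDocProvider need not
// read from the filesystem. A provider backed by a JSON blob already held in memory
// (e.g. fetched from a remote configuration service) could look like this; genDocJSON
// is a hypothetical []byte containing the JSON-encoded genesis document.
func genesisDocProviderFromBytes(genDocJSON []byte) GenesisDocProvider {
	return func() (*types.GenesisDoc, error) {
		return types.GenesisDocFromJSON(genDocJSON)
	}
}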
// Provider takes a config and a logger and returns a ready to go Node.
type Provider func(*cfg.Config, log.Logger) (*Node, error)
// DefaultNewNode returns a Tendermint node with default settings for the
// PrivValidator, ClientCreator, GenesisDoc, and DBProvider.
// It implements the Provider type.
func DefaultNewNode(config *cfg.Config, logger log.Logger) (*Node, error) {
nodeKey, err := p2p.LoadOrGenNodeKey(config.NodeKeyFile())
if err != nil {
return nil, fmt.Errorf("failed to load or gen node key %s: %w", config.NodeKeyFile(), err)
}
return NewNode(config,
privval.LoadOrGenFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile()),
nodeKey,
proxy.DefaultClientCreator(config.ProxyApp, config.ABCI, config.DBDir()),
DefaultGenesisDocProviderFunc(config),
DefaultDBProvider,
DefaultMetricsProvider(config.Instrumentation),
logger,
)
}
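// Illustrative usage sketch (not part of the original diff): booting a node from an
// already-loaded configuration typically goes through DefaultNewNode and then Start.
// nodeConfig is a hypothetical *cfg.Config loaded elsewhere (e.g. from config.toml).
func startNodeFromConfig(nodeConfig *cfg.Config, logger log.Logger) (*Node, error) {
	n, err := DefaultNewNode(nodeConfig, logger)
	if err != nil {
		return nil, err
	}
	if err := n.Start(); err != nil {
		return nil, err
	}
	return n, nil
}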
// MetricsProvider returns consensus, p2p, mempool, state, and proxy Metrics.
type MetricsProvider func(chainID string) (*cs.Metrics, *p2p.Metrics, *mempl.Metrics, *sm.Metrics, *proxy.Metrics)
// DefaultMetricsProvider returns Metrics built using the Prometheus client library
// if Prometheus is enabled. Otherwise, it returns no-op Metrics.
func DefaultMetricsProvider(config *cfg.InstrumentationConfig) MetricsProvider {
return func(chainID string) (*cs.Metrics, *p2p.Metrics, *mempl.Metrics, *sm.Metrics, *proxy.Metrics) {
if config.Prometheus {
return cs.PrometheusMetrics(config.Namespace, "chain_id", chainID),
p2p.PrometheusMetrics(config.Namespace, "chain_id", chainID),
mempl.PrometheusMetrics(config.Namespace, "chain_id", chainID),
sm.PrometheusMetrics(config.Namespace, "chain_id", chainID),
proxy.PrometheusMetrics(config.Namespace, "chain_id", chainID)
}
return cs.NopMetrics(), p2p.NopMetrics(), mempl.NopMetrics(), sm.NopMetrics(), proxy.NopMetrics()
}
}
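// Illustrative sketch (not part of the original diff): the provider is invoked once
// with the chain ID so that every metric family carries a consistent "chain_id" label;
// the chain ID value here is hypothetical.
func exampleP2PMetrics(ic *cfg.InstrumentationConfig) *p2p.Metrics {
	_, p2pMetrics, _, _, _ := DefaultMetricsProvider(ic)("test-chain-1")
	return p2pMetrics
}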
type blockSyncReactor interface {
SwitchToBlockSync(sm.State) error
}
//------------------------------------------------------------------------------
func initDBs(config *cfg.Config, dbProvider DBProvider) (blockStore *store.BlockStore, stateDB dbm.DB, err error) {
var blockStoreDB dbm.DB
blockStoreDB, err = dbProvider(&DBContext{"blockstore", config})
if err != nil {
return
}
blockStore = store.NewBlockStore(blockStoreDB)
stateDB, err = dbProvider(&DBContext{"state", config})
if err != nil {
return
}
return
}
func createAndStartProxyAppConns(clientCreator proxy.ClientCreator, logger log.Logger, metrics *proxy.Metrics) (proxy.AppConns, error) {
proxyApp := proxy.NewAppConns(clientCreator, metrics)
proxyApp.SetLogger(logger.With("module", "proxy"))
if err := proxyApp.Start(); err != nil {
return nil, fmt.Errorf("error starting proxy app connections: %v", err)
}
return proxyApp, nil
}
func createAndStartEventBus(logger log.Logger) (*types.EventBus, error) {
eventBus := types.NewEventBus()
eventBus.SetLogger(logger.With("module", "events"))
if err := eventBus.Start(); err != nil {
return nil, err
}
return eventBus, nil
}
func createAndStartIndexerService(
config *cfg.Config,
chainID string,
dbProvider DBProvider,
eventBus *types.EventBus,
logger log.Logger,
) (*txindex.IndexerService, txindex.TxIndexer, indexer.BlockIndexer, error) {
var (
txIndexer txindex.TxIndexer
blockIndexer indexer.BlockIndexer
)
switch config.TxIndex.Indexer {
case "kv":
store, err := dbProvider(&DBContext{"tx_index", config})
if err != nil {
return nil, nil, nil, err
}
txIndexer = kv.NewTxIndex(store)
blockIndexer = blockidxkv.New(dbm.NewPrefixDB(store, []byte("block_events")))
case "psql":
if config.TxIndex.PsqlConn == "" {
return nil, nil, nil, errors.New(`no psql-conn is set for the "psql" indexer`)
}
es, err := psql.NewEventSink(config.TxIndex.PsqlConn, chainID)
if err != nil {
return nil, nil, nil, fmt.Errorf("creating psql indexer: %w", err)
}
txIndexer = es.TxIndexer()
blockIndexer = es.BlockIndexer()
default:
txIndexer = &null.TxIndex{}
blockIndexer = &blockidxnull.BlockerIndexer{}
}
indexerService := txindex.NewIndexerService(txIndexer, blockIndexer, eventBus, false)
indexerService.SetLogger(logger.With("module", "txindex"))
if err := indexerService.Start(); err != nil {
return nil, nil, nil, err
}
return indexerService, txIndexer, blockIndexer, nil
}
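// Illustrative sketch (not part of the original diff): the indexer backend is selected
// purely from config before the node is built. Switching a node to the PostgreSQL event
// sink, for instance, only requires the indexer name and a connection string; the DSN
// below is a hypothetical example.
func enablePsqlIndexer(c *cfg.Config) {
	c.TxIndex.Indexer = "psql"
	c.TxIndex.PsqlConn = "postgresql://user:pass@localhost:5432/tendermint?sslmode=disable"
}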
func doHandshake(
stateStore sm.Store,
state sm.State,
blockStore sm.BlockStore,
genDoc *types.GenesisDoc,
eventBus types.BlockEventPublisher,
proxyApp proxy.AppConns,
consensusLogger log.Logger,
) error {
handshaker := cs.NewHandshaker(stateStore, state, blockStore, genDoc)
handshaker.SetLogger(consensusLogger)
handshaker.SetEventBus(eventBus)
if err := handshaker.Handshake(proxyApp); err != nil {
return fmt.Errorf("error during handshake: %v", err)
}
return nil
}
func logNodeStartupInfo(state sm.State, pubKey crypto.PubKey, logger, consensusLogger log.Logger) {
// Log the version info.
logger.Info("Version info",
"tendermint_version", version.TMCoreSemVer,
"abci", version.ABCISemVer,
"block", version.BlockProtocol,
"p2p", version.P2PProtocol,
"commit_hash", version.TMGitCommitHash,
)
// If the state and software differ in block version, at least log it.
if state.Version.Consensus.Block != version.BlockProtocol {
logger.Info("Software and state have different block protocols",
"software", version.BlockProtocol,
"state", state.Version.Consensus.Block,
)
}
addr := pubKey.Address()
// Log whether this node is a validator or an observer
if state.Validators.HasAddress(addr) {
consensusLogger.Info("This node is a validator", "addr", addr, "pubKey", pubKey)
} else {
consensusLogger.Info("This node is not a validator", "addr", addr, "pubKey", pubKey)
}
}
func onlyValidatorIsUs(state sm.State, pubKey crypto.PubKey) bool {
if state.Validators.Size() > 1 {
return false
}
addr, _ := state.Validators.GetByIndex(0)
return bytes.Equal(pubKey.Address(), addr)
}
func createMempoolAndMempoolReactor(
config *cfg.Config,
proxyApp proxy.AppConns,
state sm.State,
memplMetrics *mempl.Metrics,
logger log.Logger,
) (mempl.Mempool, p2p.Reactor) {
switch config.Mempool.Version {
case cfg.MempoolV1:
mp := mempoolv1.NewTxMempool(
logger,
config.Mempool,
proxyApp.Mempool(),
state.LastBlockHeight,
mempoolv1.WithMetrics(memplMetrics),
mempoolv1.WithPreCheck(sm.TxPreCheck(state)),
mempoolv1.WithPostCheck(sm.TxPostCheck(state)),
)
reactor := mempoolv1.NewReactor(
config.Mempool,
mp,
)
if config.Consensus.WaitForTxs() {
mp.EnableTxsAvailable()
}
return mp, reactor
case cfg.MempoolV0:
mp := mempoolv0.NewCListMempool(
config.Mempool,
proxyApp.Mempool(),
state.LastBlockHeight,
mempoolv0.WithMetrics(memplMetrics),
mempoolv0.WithPreCheck(sm.TxPreCheck(state)),
mempoolv0.WithPostCheck(sm.TxPostCheck(state)),
)
mp.SetLogger(logger)
reactor := mempoolv0.NewReactor(
config.Mempool,
mp,
)
if config.Consensus.WaitForTxs() {
mp.EnableTxsAvailable()
}
return mp, reactor
default:
return nil, nil
}
}
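// Illustrative sketch (not part of the original diff): which mempool is constructed
// above is driven entirely by config.Mempool.Version — cfg.MempoolV0 selects the
// CList mempool, cfg.MempoolV1 the TxMempool.
func selectMempoolV1(c *cfg.Config) {
	c.Mempool.Version = cfg.MempoolV1
}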
func createEvidenceReactor(config *cfg.Config, dbProvider DBProvider,
stateStore sm.Store, blockStore *store.BlockStore, logger log.Logger,
) (*evidence.Reactor, *evidence.Pool, error) {
evidenceDB, err := dbProvider(&DBContext{"evidence", config})
if err != nil {
return nil, nil, err
}
evidenceLogger := logger.With("module", "evidence")
evidencePool, err := evidence.NewPool(evidenceDB, stateStore, blockStore)
if err != nil {
return nil, nil, err
}
evidenceReactor := evidence.NewReactor(evidencePool)
evidenceReactor.SetLogger(evidenceLogger)
return evidenceReactor, evidencePool, nil
}
func createBlocksyncReactor(config *cfg.Config,
state sm.State,
blockExec *sm.BlockExecutor,
blockStore *store.BlockStore,
blockSync bool,
logger log.Logger,
) (bcReactor p2p.Reactor, err error) {
switch config.BlockSync.Version {
case "v0":
bcReactor = bc.NewReactor(state.Copy(), blockExec, blockStore, blockSync)
case "v1", "v2":
return nil, fmt.Errorf("block sync version %s has been deprecated. Please use v0", config.BlockSync.Version)
default:
return nil, fmt.Errorf("unknown fastsync version %s", config.BlockSync.Version)
}
bcReactor.SetLogger(logger.With("module", "blocksync"))
return bcReactor, nil
}
func createConsensusReactor(config *cfg.Config,
state sm.State,
blockExec *sm.BlockExecutor,
blockStore sm.BlockStore,
mempool mempl.Mempool,
evidencePool *evidence.Pool,
privValidator types.PrivValidator,
csMetrics *cs.Metrics,
waitSync bool,
eventBus *types.EventBus,
consensusLogger log.Logger,
) (*cs.Reactor, *cs.State) {
consensusState := cs.NewState(
config.Consensus,
state.Copy(),
blockExec,
blockStore,
mempool,
evidencePool,
cs.StateMetrics(csMetrics),
)
consensusState.SetLogger(consensusLogger)
if privValidator != nil {
consensusState.SetPrivValidator(privValidator)
}
consensusReactor := cs.NewReactor(consensusState, waitSync, cs.ReactorMetrics(csMetrics))
consensusReactor.SetLogger(consensusLogger)
// services which will be publishing and/or subscribing for messages (events)
// consensusReactor will set it on consensusState and blockExecutor
consensusReactor.SetEventBus(eventBus)
return consensusReactor, consensusState
}
func createTransport(
config *cfg.Config,
nodeInfo p2p.NodeInfo,
nodeKey *p2p.NodeKey,
proxyApp proxy.AppConns,
) (
*p2p.MultiplexTransport,
[]p2p.PeerFilterFunc,
) {
var (
mConnConfig = p2p.MConnConfig(config.P2P)
transport = p2p.NewMultiplexTransport(nodeInfo, *nodeKey, mConnConfig)
connFilters = []p2p.ConnFilterFunc{}
peerFilters = []p2p.PeerFilterFunc{}
)
if !config.P2P.AllowDuplicateIP {
connFilters = append(connFilters, p2p.ConnDuplicateIPFilter())
}
// Filter peers by addr or pubkey with an ABCI query.
// If the query return code is OK, add peer.
if config.FilterPeers {
connFilters = append(
connFilters,
// ABCI query for address filtering.
func(_ p2p.ConnSet, c net.Conn, _ []net.IP) error {
res, err := proxyApp.Query().QuerySync(abci.RequestQuery{
Path: fmt.Sprintf("/p2p/filter/addr/%s", c.RemoteAddr().String()),
})
if err != nil {
return err
}
if res.IsErr() {
return fmt.Errorf("error querying abci app: %v", res)
}
return nil
},
)
peerFilters = append(
peerFilters,
// ABCI query for ID filtering.
func(_ p2p.IPeerSet, p p2p.Peer) error {
res, err := proxyApp.Query().QuerySync(abci.RequestQuery{
Path: fmt.Sprintf("/p2p/filter/id/%s", p.ID()),
})
if err != nil {
return err
}
if res.IsErr() {
return fmt.Errorf("error querying abci app: %v", res)
}
return nil
},
)
}
p2p.MultiplexTransportConnFilters(connFilters...)(transport)
// Limit the number of incoming connections.
max := config.P2P.MaxNumInboundPeers + len(splitAndTrimEmpty(config.P2P.UnconditionalPeerIDs, ",", " "))
p2p.MultiplexTransportMaxIncomingConnections(max)(transport)
return transport, peerFilters
}
func createSwitch(config *cfg.Config,
transport p2p.Transport,
p2pMetrics *p2p.Metrics,
peerFilters []p2p.PeerFilterFunc,
mempoolReactor p2p.Reactor,
bcReactor p2p.Reactor,
stateSyncReactor *statesync.Reactor,
consensusReactor *cs.Reactor,
evidenceReactor *evidence.Reactor,
nodeInfo p2p.NodeInfo,
nodeKey *p2p.NodeKey,
p2pLogger log.Logger,
) *p2p.Switch {
sw := p2p.NewSwitch(
config.P2P,
transport,
p2p.WithMetrics(p2pMetrics),
p2p.SwitchPeerFilters(peerFilters...),
)
sw.SetLogger(p2pLogger)
sw.AddReactor("MEMPOOL", mempoolReactor)
sw.AddReactor("BLOCKSYNC", bcReactor)
sw.AddReactor("CONSENSUS", consensusReactor)
sw.AddReactor("EVIDENCE", evidenceReactor)
sw.AddReactor("STATESYNC", stateSyncReactor)
sw.SetNodeInfo(nodeInfo)
sw.SetNodeKey(nodeKey)
p2pLogger.Info("P2P Node ID", "ID", nodeKey.ID(), "file", config.NodeKeyFile())
return sw
}
func createAddrBookAndSetOnSwitch(config *cfg.Config, sw *p2p.Switch,
p2pLogger log.Logger, nodeKey *p2p.NodeKey,
) (pex.AddrBook, error) {
addrBook := pex.NewAddrBook(config.P2P.AddrBookFile(), config.P2P.AddrBookStrict)
addrBook.SetLogger(p2pLogger.With("book", config.P2P.AddrBookFile()))
// Add ourselves to addrbook to prevent dialing ourselves
if config.P2P.ExternalAddress != "" {
addr, err := p2p.NewNetAddressString(p2p.IDAddressString(nodeKey.ID(), config.P2P.ExternalAddress))
if err != nil {
return nil, fmt.Errorf("p2p.external_address is incorrect: %w", err)
}
addrBook.AddOurAddress(addr)
}
if config.P2P.ListenAddress != "" {
addr, err := p2p.NewNetAddressString(p2p.IDAddressString(nodeKey.ID(), config.P2P.ListenAddress))
if err != nil {
return nil, fmt.Errorf("p2p.laddr is incorrect: %w", err)
}
addrBook.AddOurAddress(addr)
}
sw.SetAddrBook(addrBook)
return addrBook, nil
}
func createPEXReactorAndAddToSwitch(addrBook pex.AddrBook, config *cfg.Config,
sw *p2p.Switch, logger log.Logger,
) *pex.Reactor {
// TODO: should persistent peers be included here as well, so their DNS addrs are saved?
pexReactor := pex.NewReactor(addrBook,
&pex.ReactorConfig{
Seeds: splitAndTrimEmpty(config.P2P.Seeds, ",", " "),
SeedMode: config.P2P.SeedMode,
// See consensus/reactor.go: blocksToContributeToBecomeGoodPeer is 10000
// blocks; assuming 10s blocks, that is ~28 hours.
// TODO (melekes): make it dynamic based on the actual block latencies
// from the live network.
// https://github.com/tendermint/tendermint/issues/3523
SeedDisconnectWaitPeriod: 28 * time.Hour,
PersistentPeersMaxDialPeriod: config.P2P.PersistentPeersMaxDialPeriod,
})
pexReactor.SetLogger(logger.With("module", "pex"))
sw.AddReactor("PEX", pexReactor)
return pexReactor
}
// startStateSync starts an asynchronous state sync process, then switches to block sync mode (or straight to consensus if block sync is disabled).
func startStateSync(ssR *statesync.Reactor, bcR blockSyncReactor, conR *cs.Reactor,
stateProvider statesync.StateProvider, config *cfg.StateSyncConfig, blockSync bool,
stateStore sm.Store, blockStore *store.BlockStore, state sm.State,
) error {
ssR.Logger.Info("Starting state sync")
if stateProvider == nil {
var err error
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
stateProvider, err = statesync.NewLightClientStateProvider(
ctx,
state.ChainID, state.Version, state.InitialHeight,
config.RPCServers, light.TrustOptions{
Period: config.TrustPeriod,
Height: config.TrustHeight,
Hash: config.TrustHashBytes(),
}, ssR.Logger.With("module", "light"))
if err != nil {
return fmt.Errorf("failed to set up light client state provider: %w", err)
}
}
go func() {
state, commit, err := ssR.Sync(stateProvider, config.DiscoveryTime)
if err != nil {
ssR.Logger.Error("State sync failed", "err", err)
return
}
err = stateStore.Bootstrap(state)
if err != nil {
ssR.Logger.Error("Failed to bootstrap node with new state", "err", err)
return
}
err = blockStore.SaveSeenCommit(state.LastBlockHeight, commit)
if err != nil {
ssR.Logger.Error("Failed to store last seen commit", "err", err)
return
}
if blockSync {
// FIXME Very ugly to have these metrics bleed through here.
conR.Metrics.StateSyncing.Set(0)
conR.Metrics.BlockSyncing.Set(1)
err = bcR.SwitchToBlockSync(state)
if err != nil {
ssR.Logger.Error("Failed to switch to block sync", "err", err)
return
}
} else {
conR.SwitchToConsensus(state, true)
}
}()
return nil
}
//------------------------------------------------------------------------------
var genesisDocKey = []byte("genesisDoc")
// LoadStateFromDBOrGenesisDocProvider attempts to load the state from the
// database, or creates one using the given genesisDocProvider. On success this also
// returns the genesis doc loaded through the given provider.
func LoadStateFromDBOrGenesisDocProvider(
stateDB dbm.DB,
genesisDocProvider GenesisDocProvider,
) (sm.State, *types.GenesisDoc, error) {
// Get genesis doc
genDoc, err := loadGenesisDoc(stateDB)
if err != nil {
genDoc, err = genesisDocProvider()
if err != nil {
return sm.State{}, nil, err
}
err = genDoc.ValidateAndComplete()
if err != nil {
return sm.State{}, nil, fmt.Errorf("error in genesis doc: %w", err)
}
// Save the genesis doc to prevent a certain class of user errors (e.g. when it
// is changed, accidentally or not). It also provides an audit trail.
if err := saveGenesisDoc(stateDB, genDoc); err != nil {
return sm.State{}, nil, err
}
}
stateStore := sm.NewStore(stateDB, sm.StoreOptions{
DiscardABCIResponses: false,
})
state, err := stateStore.LoadFromDBOrGenesisDoc(genDoc)
if err != nil {
return sm.State{}, nil, err
}
return state, genDoc, nil
}
// loadGenesisDoc loads the genesis document from the database; it panics if the bytes cannot be unmarshaled.
func loadGenesisDoc(db dbm.DB) (*types.GenesisDoc, error) {
b, err := db.Get(genesisDocKey)
if err != nil {
panic(err)
}
if len(b) == 0 {
return nil, errors.New("genesis doc not found")
}
var genDoc *types.GenesisDoc
err = tmjson.Unmarshal(b, &genDoc)
if err != nil {
panic(fmt.Sprintf("Failed to load genesis doc due to unmarshaling error: %v (bytes: %X)", err, b))
}
return genDoc, nil
}
// saveGenesisDoc stores the genesis document in the database, returning an error if marshaling fails.
func saveGenesisDoc(db dbm.DB, genDoc *types.GenesisDoc) error {
b, err := tmjson.Marshal(genDoc)
if err != nil {
return fmt.Errorf("failed to save genesis doc due to marshaling error: %w", err)
}
if err := db.SetSync(genesisDocKey, b); err != nil {
return err
}
return nil
}
func createAndStartPrivValidatorSocketClient(
listenAddr,
chainID string,
logger log.Logger,
) (types.PrivValidator, error) {
pve, err := privval.NewSignerListener(listenAddr, logger)
if err != nil {
return nil, fmt.Errorf("failed to start private validator: %w", err)
}
pvsc, err := privval.NewSignerClient(pve, chainID)
if err != nil {
return nil, fmt.Errorf("failed to start private validator: %w", err)
}
// try to get a pubkey from the private validator first
_, err = pvsc.GetPubKey()
if err != nil {
return nil, fmt.Errorf("can't get pubkey: %w", err)
}
const (
retries = 50 // 50 * 100ms = 5s total
timeout = 100 * time.Millisecond
)
pvscWithRetries := privval.NewRetrySignerClient(pvsc, retries, timeout)
return pvscWithRetries, nil
}
// splitAndTrimEmpty slices s into all substrings separated by sep and trims from
// each substring every leading and trailing Unicode code point contained in cutset.
// If sep is empty, it splits after each UTF-8 sequence, as strings.Split does.
// Empty strings are filtered out: only non-empty strings are returned.
func splitAndTrimEmpty(s, sep, cutset string) []string {
if s == "" {
return []string{}
}
spl := strings.Split(s, sep)
nonEmptyStrings := make([]string, 0, len(spl))
for i := 0; i < len(spl); i++ {
element := strings.Trim(spl[i], cutset)
if element != "" {
nonEmptyStrings = append(nonEmptyStrings, element)
}
}
return nonEmptyStrings
}
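// Illustrative usage sketch (not part of the original diff): splitAndTrimEmpty is how
// comma-separated config values such as the P2P seed list are parsed; the peer
// addresses below are hypothetical.
func exampleSplitSeeds() []string {
	raw := " id1@1.2.3.4:26656, ,id2@5.6.7.8:26656 "
	// Split on commas, trim surrounding spaces, and drop empty entries.
	return splitAndTrimEmpty(raw, ",", " ")
	// => []string{"id1@1.2.3.4:26656", "id2@5.6.7.8:26656"}
}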

Some files were not shown because too many files have changed in this diff.