Compare commits

...

50 Commits

Author SHA1 Message Date
William Banfield
c3c4274d67 re-arrange benchmark file 2021-07-21 09:22:13 -04:00
William Banfield
b6b286813a light: replace homegrown mock with mockery 2021-07-19 16:11:49 -04:00
William Banfield
f70396c6fd add and run make target for generating existing mocks (#6732)
There are many `//go:generate mockery` lines in the source code.
This change adds a make target that invokes these mock generations.
It also runs the generation and adds the resulting mocks to the repo (a short sketch of the pattern follows below).

Related to #5274
2021-07-18 00:46:04 +00:00
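As a hedged illustration of the pattern the commit above automates (the interface, its method, and the mockery flags here are made up, not the repository's actual code), an annotated interface that the new make target would pick up might look like this:

```go
package statesync

// Hypothetical interface, for illustration only. The new make target
// (go generate -run="mockery" ./...) executes directives like the one
// below and regenerates the corresponding mock under ./mocks.
//go:generate mockery --name Dispatcher

// Dispatcher asks peers for light blocks (illustrative signature).
type Dispatcher interface {
	LightBlock(height int64) ([]byte, error)
}
```

Committing the generated mocks alongside the directives keeps `go test` working without requiring every contributor to install mockery.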
William Banfield
fdc246e4a8 libs/clist: revert clear and detach changes while debugging (#6731) 2021-07-17 12:10:53 -04:00
Marko
78a0a5fe73 blockchain: error on v2 selection (#6730)
## Description

Remove v2 flag from toml
2021-07-16 10:19:55 +00:00
Marko
4f885209aa RPC: mark grpc as deprecated (#6725)
## Description

Mark gRPC as deprecated in the RPC layer. 

closes #6718
2021-07-15 22:05:21 +00:00
Callum Waters
6dd0cf92c8 router/statesync: add helpful log messages (#6724) 2021-07-15 19:26:35 +02:00
dependabot[bot]
626d9b4fbe build(deps): Bump actions/stale from 3.0.19 to 4 (#6726)
Bumps [actions/stale](https://github.com/actions/stale) from 3.0.19 to 4.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/stale/compare/v3.0.19...v4)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-07-15 10:17:51 -04:00
Sam Kleinman
8addf99f90 e2e: tweak sleep for perturbations (#6723)
This tweaks the sleeps around perturbations, based on a theory that in our
tests with "kill" perturbations the nodes restart fast enough that their peers
haven't yet marked them as down when they try to reconnect. In my local test
runs, this clears out *most* of the test failures that I've seen,
except for one evidence-related test-harness problem (which should be
handled separately).
2021-07-14 21:07:25 +00:00
Marko
76c6c67734 docs: fix broken links (#6719)
## Description 

Fix broken links

closes #6695
2021-07-14 14:31:08 +00:00
William Banfield
a46724e4f6 statesync: dispatcher test uses internal channel for timing (#6713)
This code change amends the dispatcher tests to read from the dispatcher's `requestCh`. This ensures that a request is already waiting when the test calls `dispatcher.respond` (a simplified sketch of the pattern follows below).
addresses: #6711
2021-07-14 14:16:09 +00:00
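A simplified sketch of the idea behind the test change above, with hypothetical types rather than the real dispatcher API: the test only responds after it has observed the request on an internal channel, so there is no sleep-based race between request and response.

```go
package dispatchertest

import "testing"

// request is an illustrative stand-in for whatever the component under
// test enqueues on its internal request channel.
type request struct {
	height int64
	respCh chan string
}

func TestRespondAfterRequestObserved(t *testing.T) {
	requestCh := make(chan request, 1)

	// The component under test would normally enqueue its request here.
	go func() {
		requestCh <- request{height: 10, respCh: make(chan string, 1)}
	}()

	// Receiving from requestCh guarantees a request is in flight,
	// so the response below can never arrive "too early".
	req := <-requestCh
	req.respCh <- "light block at height 10"

	if got := <-req.respCh; got == "" {
		t.Fatal("expected a response")
	}
}
```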
Callum Waters
40fba3960d add missing context catch and tests (#6701) 2021-07-14 11:23:15 +02:00
Callum Waters
36a859ae54 e2e: ensure evidence validator set matches nodes validator set (#6712) 2021-07-13 19:47:36 +02:00
Sam Kleinman
ab5c63eff3 statesync: increase dispatcher timeout (#6714) 2021-07-13 13:04:18 -04:00
Sam Kleinman
8228936155 e2e: extend timeouts in test harness (#6694) 2021-07-13 11:28:07 -04:00
Callum Waters
a12e2bbb60 statesync: use initial height as a floor to backfilling (#6709) 2021-07-13 16:36:16 +02:00
dependabot[bot]
11bebfb6a0 build(deps): Bump github.com/google/uuid from 1.2.0 to 1.3.0 (#6708)
Bumps [github.com/google/uuid](https://github.com/google/uuid) from 1.2.0 to 1.3.0.
- [Release notes](https://github.com/google/uuid/releases)
- [Commits](https://github.com/google/uuid/compare/v1.2.0...v1.3.0)

---
updated-dependencies:
- dependency-name: github.com/google/uuid
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-07-13 15:15:09 +02:00
William Banfield
4009102e2b statesync: remove outgoingCalls race condition in dispatcher (#6699)
* statesync: remove outgoing calls race condition
2021-07-12 19:05:47 -04:00
William Banfield
cabd916517 Revert "statesync: keep peer despite lightblock query fail (#6692)" (#6696)
* Revert "statesync: keep peer despite lightblock query fail (#6692)"

This reverts commit 50b00dff71.
2021-07-12 15:20:02 -04:00
Marko
363ea56680 abci: remove counter app (#6684)
* remove counter app

* remove unneeded ci

* lint fix

* modify tx sizes

* cleanup docs

* Update abci/cmd/abci-cli/abci-cli.go

Co-authored-by: Callum Waters <cmwaters19@gmail.com>

* Update docs/app-dev/getting-started.md

Co-authored-by: Callum Waters <cmwaters19@gmail.com>

* Update docs/app-dev/getting-started.md

Co-authored-by: Callum Waters <cmwaters19@gmail.com>

* bring back comment

* migrate to kvstore and not persistent

* remove unused func

* test persistent

Co-authored-by: Callum Waters <cmwaters19@gmail.com>
2021-07-12 14:55:32 +00:00
Callum Waters
aa4854ff8f docs: add docs file for the peer exchange (#6665) 2021-07-12 14:11:29 +02:00
William Banfield
581dd01d47 Update CODEOWNERS to include williambanfield (#6683) 2021-07-09 18:50:13 -04:00
William Banfield
50b00dff71 statesync: keep peer despite lightblock query fail (#6692)
When a peer responds with no lightblock for the height we queried, we call the [removePeer method](https://github.com/tendermint/tendermint/blob/master/internal/statesync/reactor.go#L339). This removes the peer from the [dispatcher's list of called peers](ad65883152/internal/statesync/dispatcher.go (L159)). When the dispatcher then receives responses from the removed peer, it [drops their responses](ad65883152/internal/statesync/dispatcher.go (L130)). These responses may be meaningful and contain a block or data that will help statesync proceed.

[The logs](https://gist.github.com/tychoish/34a1f61eaae3c36c23efc7d0001e805c) show three additional networking testnets passing when this change is applied.

addresses:  #6691
2021-07-09 21:20:25 +00:00
Callum Waters
051e127d38 light: correctly handle contexts (#6687) 2021-07-09 18:48:18 +02:00
Marko
5530726df8 tools: move tools.go to subdir (#6689)
## Description

Move tools to subdir to fix `go get`
2021-07-09 13:05:27 +00:00
Callum Waters
decac693ab p2p: remove annoying error log (#6688)
I put this error log here because I thought it might be a helpful indicator of when a reactor sends a message to a peer that doesn't have that channel open, but it turns out this happens all the time, and it's kind of annoying.
2021-07-09 12:48:33 +00:00
dependabot[bot]
7ca0f24040 build(deps): Bump github.com/golangci/golangci-lint (#6686)
Bumps [github.com/golangci/golangci-lint](https://github.com/golangci/golangci-lint) from 1.38.0 to 1.41.1.
- [Release notes](https://github.com/golangci/golangci-lint/releases)
- [Changelog](https://github.com/golangci/golangci-lint/blob/master/CHANGELOG.md)
- [Commits](https://github.com/golangci/golangci-lint/compare/v1.38.0...v1.41.1)

---
updated-dependencies:
- dependency-name: github.com/golangci/golangci-lint
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-07-09 13:47:01 +02:00
Marko
69848bef26 deps: run go mod tidy (#6677)
## Description

Run go mod tidy
2021-07-09 08:17:28 +00:00
Callum Waters
2c14d491f6 fix leaking statesync test (#6680) 2021-07-08 15:26:35 +02:00
Sam Kleinman
cd248576ea e2e: remove colorized output from docker-compose (#6670) 2021-07-08 12:54:13 +00:00
Callum Waters
c256edc622 fix evidence rpc test by extending wait time (#6678) 2021-07-08 14:43:41 +02:00
Callum Waters
9d9360774f adjust tx load (#6681) 2021-07-08 14:22:50 +02:00
dependabot[bot]
c7c11fc7d5 build(deps): Bump gaurav-nelson/github-action-markdown-link-check (#6679)
Bumps [gaurav-nelson/github-action-markdown-link-check](https://github.com/gaurav-nelson/github-action-markdown-link-check) from 1.0.12 to 1.0.13.
- [Release notes](https://github.com/gaurav-nelson/github-action-markdown-link-check/releases)
- [Commits](https://github.com/gaurav-nelson/github-action-markdown-link-check/compare/1.0.12...1.0.13)

---
updated-dependencies:
- dependency-name: gaurav-nelson/github-action-markdown-link-check
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-07-08 13:17:41 +02:00
Cuong Manh Le
37bc1d74df internal/blockchain/v0: prevent all possible race for blockchainCh.Out (#6637)
This commit extends the fix in #6518 so that all other goroutines that run
concurrently with processBlockchainCh can safely send data to the blockchain
out channel via a bridge channel. This eliminates every possible data race
between sending on and closing the blockchainCh.Out channel at the same
time (a minimal sketch of the pattern follows below).

Fixes #6516
2021-07-08 09:42:54 +00:00
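A minimal, self-contained sketch of the bridge-channel idea described above; the names `bridgeCh`, `out`, and `done` are illustrative only, not the reactor's actual fields. Producers write to the bridge channel, and a single goroutine owns the outbound channel, so sends can never race with its close.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	out := make(chan string)
	bridgeCh := make(chan string)
	done := make(chan struct{})

	// Sole owner of `out`: forwards messages and is the only goroutine
	// allowed to close it, so a send can never race with the close.
	go func() {
		defer close(out)
		for {
			select {
			case msg := <-bridgeCh:
				out <- msg
			case <-done:
				return
			}
		}
	}()

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Producers never touch `out` directly.
			select {
			case bridgeCh <- fmt.Sprintf("block %d", i):
			case <-done:
			}
		}(i)
	}

	for i := 0; i < 3; i++ {
		fmt.Println(<-out)
	}
	close(done)
	wg.Wait()
}
```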
William Banfield
d882f31569 use tools.go pattern for managing linter (#6643) 2021-07-07 14:52:10 -04:00
Tanya Bouman
ba3f7106b1 abci: Fix gitignore abci-cli (#6668)
Closes #6663
2021-07-07 16:01:38 +00:00
William Banfield
3ccfb26137 psql: close opened rows in tests (#6669) 2021-07-07 11:37:42 -04:00
Marko
96863decca deps: remove pkg errors (#6666)
## Description

remove pkg/errors since we already use the standard library's fmt.Errorf (a short example follows below)
2021-07-07 11:39:19 +00:00
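For illustration, the standard-library replacement pattern the commit above relies on; the file path and messages here are made up. `fmt.Errorf` with the `%w` verb preserves the wrapped cause that `errors.Wrap` used to carry, so `errors.Is`/`errors.As` keep working.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

func loadConfig(path string) error {
	if _, err := os.Stat(path); err != nil {
		// Before: errors.Wrap(err, "loading config")
		// After: fmt.Errorf with %w keeps the cause inspectable.
		return fmt.Errorf("loading config %q: %w", path, err)
	}
	return nil
}

func main() {
	err := loadConfig("/nonexistent/config.toml")
	fmt.Println(err)
	// errors.Is still sees the wrapped cause.
	fmt.Println(errors.Is(err, os.ErrNotExist))
}
```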
JayT106
d4cda544ae fastsync/rpc: add TotalSyncedTime & RemainingTime to SyncInfo in /status RPC (#6620) 2021-07-07 07:26:01 -04:00
Callum Waters
800cce80b7 e2e: allow variable tx size (#6659) 2021-07-07 12:59:27 +02:00
JayT106
e850863296 state/indexer: close row after query (#6664)
Closes: #6661 

Note: I saw another error during event indexing; I guess the raw tx size exceeds the limit?
```
3:17PM ERR failed to index block txs err="pq: index row size 2768 exceeds btree version 4 maximum 2704 for index \"tx_results_tx_result_key\"" height=5205112 module=txindex
```
(A generic sketch of the rows-closing fix follows below.)
2021-07-07 10:28:54 +00:00
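A generic sketch of the class of fix described above, using `database/sql` directly; the query and table are placeholders, not the indexer's actual schema. Every successful `Query` is paired with a deferred `rows.Close()` so the underlying connection is released.

```go
package main

import (
	"database/sql"
	"fmt"
)

// txHeights shows the pattern: defer rows.Close() immediately after a
// successful Query, then check rows.Err() at the end of iteration.
func txHeights(db *sql.DB) ([]int64, error) {
	rows, err := db.Query(`SELECT height FROM tx_results`)
	if err != nil {
		return nil, err
	}
	defer rows.Close() // without this, the connection leaks

	var heights []int64
	for rows.Next() {
		var h int64
		if err := rows.Scan(&h); err != nil {
			return nil, err
		}
		heights = append(heights, h)
	}
	return heights, rows.Err()
}

func main() {
	fmt.Println("see txHeights for the row-closing pattern")
}
```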
Aleksandr Bezobchuk
1dec3e139a add stacktrace to panic logs (#6662) 2021-07-06 14:26:18 -04:00
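A hedged sketch of what logging a stack trace alongside a recovered panic can look like; the standard library's `log` and `runtime/debug` are used here, not Tendermint's actual logger.

```go
package main

import (
	"log"
	"runtime/debug"
)

// safeGo runs fn in a goroutine and logs a stack trace if it panics.
func safeGo(fn func()) {
	go func() {
		defer func() {
			if r := recover(); r != nil {
				// Capturing the stack inside the recover makes goroutine
				// panics debuggable instead of just printing the value.
				log.Printf("recovered from panic: %v\n%s", r, debug.Stack())
			}
		}()
		fn()
	}()
}

func main() {
	done := make(chan struct{})
	safeGo(func() {
		defer close(done)
		panic("boom")
	})
	<-done
}
```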
Marko
11b920480f docs: add sentence about windows support (#6655)
## Description

Add a sentence about Windows support.

closes #1887
2021-07-06 13:15:39 +00:00
Aleksandr Bezobchuk
4f8bcb1cce docs: update events (#6658)
* docs: update events

* lint++

* lint++
2021-07-06 12:48:05 +00:00
Callum Waters
2d95e38986 Revert "consensus: skip all messages during sync (#6577)" (#6654)
This reverts commit 13b95e7127.
2021-07-06 14:27:20 +02:00
Callum Waters
6bb4b688e0 use grpc abci protocol in e2e tests (#6652) 2021-07-05 18:18:52 +02:00
Callum Waters
a1e1e6c290 test: fix non-deterministic backfill test (#6648) 2021-07-05 16:42:36 +02:00
rene
736364178a fix typo in log message (#6653)
Co-authored-by: Callum Waters <cmwaters19@gmail.com>
2021-07-05 14:00:09 +00:00
dependabot[bot]
a99c7188d7 build(deps): Bump github.com/spf13/cobra from 1.2.0 to 1.2.1 (#6650)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.2.0 to 1.2.1.
Release notes (sourced from [github.com/spf13/cobra's releases](https://github.com/spf13/cobra/releases)):

v1.2.1, Bug fixes:
- Quickfix for [spf13/cobra#1437](https://github-redirect.dependabot.com/spf13/cobra/issues/1437) after v1.2.0 where parallel use of the `cmd.RegisterFlagCompletionFunc()` (and subsequent map) now works correctly and flag completions now work again

Commits:
- [`de187e8`](de187e874d) Fix flag completion ([#1438](https://github-redirect.dependabot.com/spf13/cobra/issues/1438))
- See full diff in [compare view](https://github.com/spf13/cobra/compare/v1.2.0...v1.2.1)
2021-07-05 13:06:11 +00:00
dependabot[bot]
a56b10fbef build(deps): Bump github.com/go-kit/kit from 0.10.0 to 0.11.0 (#6651)
Bumps [github.com/go-kit/kit](https://github.com/go-kit/kit) from 0.10.0 to 0.11.0.
Release notes (sourced from [github.com/go-kit/kit's releases](https://github.com/go-kit/kit/releases)):

v0.11.0

A new release with several improvements and enhancements. The first one in a long while! Huge thanks to [@sagikazarmark](https://github.com/sagikazarmark) for putting in most of the gruntwork to make it happen! You're a superstar.

The biggest thing: package log has been extracted to a separate repository and module, [go-kit/log](https://github.com/go-kit/log). This means that if you or your project was importing go-kit/kit just to get package log, you can significantly reduce your go.mod and dep graph by switching to the new module. Note that we have no current plans to alias the existing go-kit/kit/log to the new go-kit/log module and package, nor to deprecate the current package in favor of the new one. They are two distinct packages with no defined relationship to each other. This may change in the future.

Major changes:

- The log package was extracted to a [separate repository](https://github.com/go-kit/log)
- Examples were moved to a separate [repository](https://github.com/go-kit/examples)
- Deprecated kitgen was removed

Thanks to the 22 contributors who contributed to this release! 🏌️‍♂️

Bug fixes:

- metrics/cloudwatch: log CloudWatch response error ([#961](https://github-redirect.dependabot.com/go-kit/kit/issues/961)) (thanks [@Trane9991](https://github.com/Trane9991))
- log: defer mutex unlocks for panic safety in SyncLogger ([#974](https://github-redirect.dependabot.com/go-kit/kit/issues/974))
- util/conn: close old connection before reconnect ([#982](https://github-redirect.dependabot.com/go-kit/kit/issues/982)) (thanks [@chikaku](https://github.com/chikaku))
- log/term: fix build on GOOS=js GOARCH=wasm ([#993](https://github-redirect.dependabot.com/go-kit/kit/issues/993)) (thanks [@mvdan](https://github.com/mvdan))
- transport/http/jsonrpc: move the ClientAfter calls to before the decode ([#1008](https://github-redirect.dependabot.com/go-kit/kit/issues/1008)) (thanks [@directionless](https://github.com/directionless))
- sd/etcdv3: fix etcdv3 client won't return error when no endpoint is available ([#1009](https://github-redirect.dependabot.com/go-kit/kit/issues/1009)) (thanks [@wayjam](https://github.com/wayjam))
- metrics/generic: fix uint64 alignment ([#1007](https://github-redirect.dependabot.com/go-kit/kit/issues/1007)) (thanks [@ldez](https://github.com/ldez))
- log: fix stdlibadapter when prefixed ([#1036](https://github-redirect.dependabot.com/go-kit/kit/issues/1036)) (thanks [@soven](https://github.com/soven))
- log: capture newlines in log stdlib ([#1041](https://github-redirect.dependabot.com/go-kit/kit/issues/1041)) (thanks [@SuperQ](https://github.com/SuperQ))

Enhancements:

- metrics/cloudwatch: use batch values API for CloudWatch PutMetric data call ([#960](https://github-redirect.dependabot.com/go-kit/kit/issues/960)) (thanks [@Trane9991](https://github.com/Trane9991))
- log: allow to use specific logrus level in the adaptor ([#962](https://github-redirect.dependabot.com/go-kit/kit/issues/962)) (thanks [@Trane9991](https://github.com/Trane9991))
- transport/http: add NewExplicitClient ([#971](https://github-redirect.dependabot.com/go-kit/kit/issues/971))
- transport/http/jsonrpc: add RequestID in error body when using the DefaultErrorEncoder ([#969](https://github-redirect.dependabot.com/go-kit/kit/issues/969)) (thanks [@esenac](https://github.com/esenac))
- transport/http/jsonrpc: add Version to JSON-RPC client request ([#990](https://github-redirect.dependabot.com/go-kit/kit/issues/990)) (thanks [@shirolimit](https://github.com/shirolimit))
- log: add WithSuffix to append key-value pairs to those passed to Log ([#992](https://github-redirect.dependabot.com/go-kit/kit/issues/992)) (thanks [@vinayvinay](https://github.com/vinayvinay))
- sd/consul: improve inconsistent Consul SD index handling ([#999](https://github-redirect.dependabot.com/go-kit/kit/issues/999)) (thanks [@vinayvinay](https://github.com/vinayvinay))
- all: dependency updates ([#1029](https://github-redirect.dependabot.com/go-kit/kit/issues/1029), [#1095](https://github-redirect.dependabot.com/go-kit/kit/issues/1095), [#1097](https://github-redirect.dependabot.com/go-kit/kit/issues/1097), [#1098](https://github-redirect.dependabot.com/go-kit/kit/issues/1098), [#1106](https://github-redirect.dependabot.com/go-kit/kit/issues/1106), [#1118](https://github-redirect.dependabot.com/go-kit/kit/issues/1118), [#1115](https://github-redirect.dependabot.com/go-kit/kit/issues/1115), [#1119](https://github-redirect.dependabot.com/go-kit/kit/issues/1119), [#1124](https://github-redirect.dependabot.com/go-kit/kit/issues/1124)) (thanks [@ChrisHines](https://github.com/ChrisHines), [@Enrico204](https://github.com/Enrico204), [@sagikazarmark](https://github.com/sagikazarmark))
- tracing/opencensus: add support for JSONRPC ([#1022](https://github-redirect.dependabot.com/go-kit/kit/issues/1022)) (thanks [@ryan-lang](https://github.com/ryan-lang))
- tracing/opentracing: improve endpoint middleware options ([#1072](https://github-redirect.dependabot.com/go-kit/kit/issues/1072)) (thanks [@alebabai](https://github.com/alebabai))
- auth/jwt: fix repetition of the word "token" in JWT ([#1070](https://github-redirect.dependabot.com/go-kit/kit/issues/1070)) (thanks [@amidam](https://github.com/amidam))
- sd/zk: replace unmaintained zk library with drop-in replacement ([#1120](https://github-redirect.dependabot.com/go-kit/kit/issues/1120)) (thanks [@sagikazarmark](https://github.com/sagikazarmark))
- cmd/kitgen: remove deprecated kitgen ([#1121](https://github-redirect.dependabot.com/go-kit/kit/issues/1121)) (thanks [@sagikazarmark](https://github.com/sagikazarmark))

Documentation, examples, tests:

- readme: change godoc to pkg.go.dev ([#963](https://github-redirect.dependabot.com/go-kit/kit/issues/963)) (thanks [@relunctance](https://github.com/relunctance))
- readme: add links to generator tools ([#964](https://github-redirect.dependabot.com/go-kit/kit/issues/964))
- metrics/cloudwatch: fix bad Gauge test ([#975](https://github-redirect.dependabot.com/go-kit/kit/issues/975)) (thanks [@Trane9991](https://github.com/Trane9991))
- readme: update the link and description for go-micro ([#989](https://github-redirect.dependabot.com/go-kit/kit/issues/989)) (thanks [@asim](https://github.com/asim))
- examples: add missing "to" preposition ([#1014](https://github-redirect.dependabot.com/go-kit/kit/issues/1014))
- tracing/opencensus: fix failing tests ([#1021](https://github-redirect.dependabot.com/go-kit/kit/issues/1021)) (thanks [@ryan-lang](https://github.com/ryan-lang))
- log: fix doc comment ([#1028](https://github-redirect.dependabot.com/go-kit/kit/issues/1028)) (thanks [@vrazdalovschi](https://github.com/vrazdalovschi))

... (truncated)

Commits:

- [`a6c5d58`](a6c5d5805b) Merge pull request [#1129](https://github-redirect.dependabot.com/go-kit/kit/issues/1129) from sagikazarmark/improve-example-references
- [`4c47fd8`](4c47fd8c8a) remove examples from gitignore
- [`908c5cf`](908c5cf02c) docs: fix example links
- [`d19ee33`](d19ee33dd5) Merge pull request [#1128](https://github-redirect.dependabot.com/go-kit/kit/issues/1128) from robbert229/patch-1
- [`ccf3d8d`](ccf3d8d333) fix a broken link to the addsvc example
- [`f80eb06`](f80eb06d27) Merge pull request [#1121](https://github-redirect.dependabot.com/go-kit/kit/issues/1121) from sagikazarmark/remove-kitgen
- [`32681cc`](32681cc0d6) remove deprecated kitgen
- [`2ca6ab2`](2ca6ab212f) Merge pull request [#1112](https://github-redirect.dependabot.com/go-kit/kit/issues/1112) from sagikazarmark/opentelemetry
- [`a119c95`](a119c95f09) Merge pull request [#1122](https://github-redirect.dependabot.com/go-kit/kit/issues/1122) from sagikazarmark/nats-test-panic
- [`2216160`](2216160e8e) Merge pull request [#1124](https://github-redirect.dependabot.com/go-kit/kit/issues/1124) from sagikazarmark/update-dependencies
- Additional commits viewable in [compare view](https://github.com/go-kit/kit/compare/v0.10.0...v0.11.0)
2021-07-05 11:28:34 +00:00
111 changed files with 1983 additions and 1587 deletions

.github/CODEOWNERS

@@ -7,5 +7,5 @@
# global owners are only requested if there isn't a more specific
# codeowner specified below. For this reason, the global codeowners
# are often repeated in package-level definitions.
* @alexanderbez @ebuchman @cmwaters @tessr @tychoish
* @alexanderbez @ebuchman @cmwaters @tessr @tychoish @williambanfield


@@ -7,6 +7,6 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2.3.4
- uses: gaurav-nelson/github-action-markdown-link-check@1.0.12
- uses: gaurav-nelson/github-action-markdown-link-check@1.0.13
with:
folder-path: "docs"


@@ -7,7 +7,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v3.0.19
- uses: actions/stale@v4
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-pr-message: "This pull request has been automatically marked as stale because it has not had


@@ -42,38 +42,6 @@ jobs:
key: ${{ runner.os }}-${{ github.sha }}-tm-binary
if: env.GIT_DIFF
test_abci_apps:
runs-on: ubuntu-latest
needs: build
timeout-minutes: 5
steps:
- uses: actions/setup-go@v2
with:
go-version: "1.16"
- uses: actions/checkout@v2.3.4
- uses: technote-space/get-diff-action@v4
with:
PATTERNS: |
**/**.go
go.mod
go.sum
- uses: actions/cache@v2.1.6
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
if: env.GIT_DIFF
- uses: actions/cache@v2.1.6
with:
path: ~/go/bin
key: ${{ runner.os }}-${{ github.sha }}-tm-binary
if: env.GIT_DIFF
- name: test_abci_apps
run: abci/tests/test_app/test.sh
shell: bash
if: env.GIT_DIFF
test_abci_cli:
runs-on: ubuntu-latest
needs: build

.gitignore

@@ -15,7 +15,7 @@
.vagrant
.vendor-new/
.vscode/
abci-cli
abci/abci-cli
addrbook.json
artifacts/*
build/*


@@ -21,7 +21,9 @@ Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermi
- [cli] \#6372 Introduce `BootstrapPeers` as part of the new p2p stack. Peers to be connected on startup (@cmwaters)
- [config] \#6462 Move `PrivValidator` configuration out of `BaseConfig` into its own section. (@tychoish)
- [rpc] \#6610 Add MaxPeerBlockHeight into /status rpc call (@JayT106)
- [libs/CList] \#6626 Automatically detach the prev/next elements in Remove function (@JayT106)
- [fastsync/rpc] \#6620 Add TotalSyncedTime & RemainingTime to SyncInfo in /status RPC (@JayT106)
- [rpc/grpc] \#6725 Mark gRPC in the RPC layer as deprecated.
- [blockchain/v2] \#6730 Fast Sync v2 is deprecated, please use v0
- Apps
- [ABCI] \#6408 Change the `key` and `value` fields from `[]byte` to `string` in the `EventAttribute` type. (@alexanderbez)
@@ -30,6 +32,7 @@ Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermi
- [ABCI] \#5818 Use protoio for msg length delimitation. Migrates from int64 to uint64 length delimiters.
- [Version] \#6494 `TMCoreSemVer` has been renamed to `TMVersion`.
- It is not required any longer to set ldflags to set version strings
- [abci/counter] \#6684 Delete counter example app
- P2P Protocol
@@ -147,3 +150,5 @@ Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermi
- [rpc] \#6507 fix RPC client doesn't handle url's without ports (@JayT106)
- [statesync] \#6463 Adds Reverse Sync feature to fetch historical light blocks after state sync in order to verify any evidence (@cmwaters)
- [fastsync] \#6590 Update the metrics during fast-sync (@JayT106)
- [gitignore] \#6668 Fix gitignore of abci-cli (@tanyabouman)
- [light] \#6687 Fix bug with incorrecly handled contexts in the light client (@cmwaters)


@@ -202,7 +202,7 @@ format:
lint:
@echo "--> Running linter"
@golangci-lint run
go run github.com/golangci/golangci-lint/cmd/golangci-lint run
.PHONY: lint
DESTINATION = ./index.html.md
@@ -231,6 +231,19 @@ build-docker: build-linux
rm -rf DOCKER/tendermint
.PHONY: build-docker
###############################################################################
### Testing ###
###############################################################################
mockery:
go generate -run="mockery" ./...
.PHONY: mockery
test:
go test ./...
.PHONY: test
###############################################################################
### Local testnet using docker ###
###############################################################################


@@ -32,6 +32,8 @@ This guide provides instructions for upgrading to specific versions of Tendermin
- configuration values starting with `priv-validator-` have moved to the new
`priv-validator` section, without the `priv-validator-` prefix.
* Fast Sync v2 has been deprecated, please use v0 to sync a node.
### CLI Changes
* You must now specify the node mode (validator|full|seed) in `tendermint init [mode]`
@@ -74,6 +76,10 @@ will need to change to accommodate these changes. Most notably:
longer exported and have been replaced with `node.New` and
`node.NewDefault` which provide more functional interfaces.
### RPC changes
Mark gRPC in the RPC layer as deprecated and to be removed in 0.36.
## v0.34.0
**Upgrading to Tendermint 0.34 requires a blockchain restart.**


@@ -1,4 +1,4 @@
// Code generated by mockery 2.7.4. DO NOT EDIT.
// Code generated by mockery v0.0.0-dev. DO NOT EDIT.
package mocks


@@ -17,7 +17,6 @@ import (
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/code"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
"github.com/tendermint/tendermint/abci/server"
servertest "github.com/tendermint/tendermint/abci/tests/server"
@@ -47,9 +46,6 @@ var (
flagHeight int
flagProve bool
// counter
flagSerial bool
// kvstore
flagPersist string
)
@@ -61,9 +57,7 @@ var RootCmd = &cobra.Command{
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
switch cmd.Use {
case "counter", "kvstore": // for the examples apps, don't pre-run
return nil
case "version": // skip running for version command
case "kvstore", "version":
return nil
}
@@ -135,10 +129,6 @@ func addQueryFlags() {
"whether or not to return a merkle proof of the query result")
}
func addCounterFlags() {
counterCmd.PersistentFlags().BoolVarP(&flagSerial, "serial", "", false, "enforce incrementing (serial) transactions")
}
func addKVStoreFlags() {
kvstoreCmd.PersistentFlags().StringVarP(&flagPersist, "persist", "", "", "directory to use for a database")
}
@@ -157,8 +147,6 @@ func addCommands() {
RootCmd.AddCommand(queryCmd)
// examples
addCounterFlags()
RootCmd.AddCommand(counterCmd)
addKVStoreFlags()
RootCmd.AddCommand(kvstoreCmd)
}
@@ -258,14 +246,6 @@ var queryCmd = &cobra.Command{
RunE: cmdQuery,
}
var counterCmd = &cobra.Command{
Use: "counter",
Short: "ABCI demo example",
Long: "ABCI demo example",
Args: cobra.ExactArgs(0),
RunE: cmdCounter,
}
var kvstoreCmd = &cobra.Command{
Use: "kvstore",
Short: "ABCI demo example",
@@ -593,32 +573,6 @@ func cmdQuery(cmd *cobra.Command, args []string) error {
return nil
}
func cmdCounter(cmd *cobra.Command, args []string) error {
app := counter.NewApplication(flagSerial)
logger := log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo, false)
// Start the listener
srv, err := server.NewServer(flagAddress, flagAbci, app)
if err != nil {
return err
}
srv.SetLogger(logger.With("module", "abci-server"))
if err := srv.Start(); err != nil {
return err
}
// Stop upon receiving SIGTERM or CTRL-C.
tmos.TrapSignal(logger, func() {
// Cleanup
if err := srv.Stop(); err != nil {
logger.Error("Error while stopping server", "err", err)
}
})
// Run forever.
select {}
}
func cmdKVStore(cmd *cobra.Command, args []string) error {
logger := log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo, false)


@@ -1,86 +0,0 @@
package counter
import (
"encoding/binary"
"fmt"
"github.com/tendermint/tendermint/abci/example/code"
"github.com/tendermint/tendermint/abci/types"
)
type Application struct {
types.BaseApplication
hashCount int
txCount int
serial bool
}
func NewApplication(serial bool) *Application {
return &Application{serial: serial}
}
func (app *Application) Info(req types.RequestInfo) types.ResponseInfo {
return types.ResponseInfo{Data: fmt.Sprintf("{\"hashes\":%v,\"txs\":%v}", app.hashCount, app.txCount)}
}
func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
if app.serial {
if len(req.Tx) > 8 {
return types.ResponseDeliverTx{
Code: code.CodeTypeEncodingError,
Log: fmt.Sprintf("Max tx size is 8 bytes, got %d", len(req.Tx))}
}
tx8 := make([]byte, 8)
copy(tx8[len(tx8)-len(req.Tx):], req.Tx)
txValue := binary.BigEndian.Uint64(tx8)
if txValue != uint64(app.txCount) {
return types.ResponseDeliverTx{
Code: code.CodeTypeBadNonce,
Log: fmt.Sprintf("Invalid nonce. Expected %v, got %v", app.txCount, txValue)}
}
}
app.txCount++
return types.ResponseDeliverTx{Code: code.CodeTypeOK}
}
func (app *Application) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
if app.serial {
if len(req.Tx) > 8 {
return types.ResponseCheckTx{
Code: code.CodeTypeEncodingError,
Log: fmt.Sprintf("Max tx size is 8 bytes, got %d", len(req.Tx))}
}
tx8 := make([]byte, 8)
copy(tx8[len(tx8)-len(req.Tx):], req.Tx)
txValue := binary.BigEndian.Uint64(tx8)
if txValue < uint64(app.txCount) {
return types.ResponseCheckTx{
Code: code.CodeTypeBadNonce,
Log: fmt.Sprintf("Invalid nonce. Expected >= %v, got %v", app.txCount, txValue)}
}
}
return types.ResponseCheckTx{Code: code.CodeTypeOK}
}
func (app *Application) Commit() (resp types.ResponseCommit) {
app.hashCount++
if app.txCount == 0 {
return types.ResponseCommit{}
}
hash := make([]byte, 8)
binary.BigEndian.PutUint64(hash, uint64(app.txCount))
return types.ResponseCommit{Data: hash}
}
func (app *Application) Query(reqQuery types.RequestQuery) types.ResponseQuery {
switch reqQuery.Path {
case "hash":
return types.ResponseQuery{Value: []byte(fmt.Sprintf("%v", app.hashCount))}
case "tx":
return types.ResponseQuery{Value: []byte(fmt.Sprintf("%v", app.txCount))}
default:
return types.ResponseQuery{Log: fmt.Sprintf("Invalid query path. Expected hash or tx, got %v", reqQuery.Path)}
}
}


@@ -1,55 +0,0 @@
package main
import (
"bytes"
"context"
"fmt"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/libs/log"
)
var ctx = context.Background()
func startClient(abciType string) abcicli.Client {
// Start client
client, err := abcicli.NewClient("tcp://127.0.0.1:26658", abciType, true)
if err != nil {
panic(err.Error())
}
logger := log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo, false)
client.SetLogger(logger.With("module", "abcicli"))
if err := client.Start(); err != nil {
panicf("connecting to abci_app: %v", err.Error())
}
return client
}
func commit(client abcicli.Client, hashExp []byte) {
res, err := client.CommitSync(ctx)
if err != nil {
panicf("client error: %v", err)
}
if !bytes.Equal(res.Data, hashExp) {
panicf("Commit hash was unexpected. Got %X expected %X", res.Data, hashExp)
}
}
func deliverTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []byte) {
res, err := client.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: txBytes})
if err != nil {
panicf("client error: %v", err)
}
if res.Code != codeExp {
panicf("DeliverTx response code was unexpected. Got %v expected %v. Log: %v", res.Code, codeExp, res.Log)
}
if !bytes.Equal(res.Data, dataExp) {
panicf("DeliverTx response data was unexpected. Got %X expected %X", res.Data, dataExp)
}
}
func panicf(format string, a ...interface{}) {
panic(fmt.Sprintf(format, a...))
}


@@ -1,93 +0,0 @@
package main
import (
"fmt"
"log"
"os"
"os/exec"
"time"
"github.com/tendermint/tendermint/abci/types"
)
var abciType string
func init() {
abciType = os.Getenv("ABCI")
if abciType == "" {
abciType = "socket"
}
}
func main() {
testCounter()
}
const (
maxABCIConnectTries = 10
)
func ensureABCIIsUp(typ string, n int) error {
var err error
cmdString := "abci-cli echo hello"
if typ == "grpc" {
cmdString = "abci-cli --abci grpc echo hello"
}
for i := 0; i < n; i++ {
cmd := exec.Command("bash", "-c", cmdString)
_, err = cmd.CombinedOutput()
if err == nil {
break
}
time.Sleep(500 * time.Millisecond)
}
return err
}
func testCounter() {
abciApp := os.Getenv("ABCI_APP")
if abciApp == "" {
panic("No ABCI_APP specified")
}
fmt.Printf("Running %s test with abci=%s\n", abciApp, abciType)
subCommand := fmt.Sprintf("abci-cli %s", abciApp)
cmd := exec.Command("bash", "-c", subCommand)
cmd.Stdout = os.Stdout
if err := cmd.Start(); err != nil {
log.Fatalf("starting %q err: %v", abciApp, err)
}
defer func() {
if err := cmd.Process.Kill(); err != nil {
log.Printf("error on process kill: %v", err)
}
if err := cmd.Wait(); err != nil {
log.Printf("error while waiting for cmd to exit: %v", err)
}
}()
if err := ensureABCIIsUp(abciType, maxABCIConnectTries); err != nil {
log.Fatalf("echo failed: %v", err) //nolint:gocritic
}
client := startClient(abciType)
defer func() {
if err := client.Stop(); err != nil {
log.Printf("error trying client stop: %v", err)
}
}()
// commit(client, nil)
// deliverTx(client, []byte("abc"), code.CodeTypeBadNonce, nil)
commit(client, nil)
deliverTx(client, []byte{0x00}, types.CodeTypeOK, nil)
commit(client, []byte{0, 0, 0, 0, 0, 0, 0, 1})
// deliverTx(client, []byte{0x00}, code.CodeTypeBadNonce, nil)
deliverTx(client, []byte{0x01}, types.CodeTypeOK, nil)
deliverTx(client, []byte{0x00, 0x02}, types.CodeTypeOK, nil)
deliverTx(client, []byte{0x00, 0x03}, types.CodeTypeOK, nil)
deliverTx(client, []byte{0x00, 0x00, 0x04}, types.CodeTypeOK, nil)
// deliverTx(client, []byte{0x00, 0x00, 0x06}, code.CodeTypeBadNonce, nil)
commit(client, []byte{0, 0, 0, 0, 0, 0, 0, 5})
}


@@ -1,28 +0,0 @@
#! /bin/bash
set -e
# These tests spawn the counter app and server by execing the ABCI_APP command and run some simple client tests against it
# Get the directory of where this script is.
export PATH="$GOBIN:$PATH"
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
# Change into that dir because we expect that.
cd "$DIR"
echo "RUN COUNTER OVER SOCKET"
# test golang counter
ABCI_APP="counter" go run -mod=readonly ./*.go
echo "----------------------"
echo "RUN COUNTER OVER GRPC"
# test golang counter via grpc
ABCI_APP="counter --abci=grpc" ABCI="grpc" go run -mod=readonly ./*.go
echo "----------------------"
# test nodejs counter
# TODO: fix node app
#ABCI_APP="node $GOPATH/src/github.com/tendermint/js-abci/example/app.js" go test -test.run TestCounter


@@ -37,7 +37,6 @@ function testExample() {
}
testExample 1 tests/test_cli/ex1.abci abci-cli kvstore
testExample 2 tests/test_cli/ex2.abci abci-cli counter
echo ""
echo "PASS"


@@ -48,9 +48,7 @@ func AddNodeFlags(cmd *cobra.Command) {
"proxy-app",
config.ProxyApp,
"proxy app address, or one of: 'kvstore',"+
" 'persistent_kvstore',"+
" 'counter',"+
" 'counter_serial' or 'noop' for local testing.")
" 'persistent_kvstore' or 'noop' for local testing.")
cmd.Flags().String("abci", config.ABCI, "specify abci transport (socket | grpc)")
// rpc flags


@@ -446,6 +446,7 @@ type RPCConfig struct {
// TCP or UNIX socket address for the gRPC server to listen on
// NOTE: This server only supports /broadcast_tx_commit
// Deprecated: gRPC in the RPC layer of Tendermint will be removed in 0.36.
GRPCListenAddress string `mapstructure:"grpc-laddr"`
// Maximum number of simultaneous connections.
@@ -453,6 +454,7 @@ type RPCConfig struct {
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
// Deprecated: gRPC in the RPC layer of Tendermint will be removed in 0.36.
GRPCMaxOpenConnections int `mapstructure:"grpc-max-open-connections"`
// Activate unsafe RPC commands like /dial-persistent-peers and /unsafe-flush-mempool
@@ -959,7 +961,7 @@ func (cfg *FastSyncConfig) ValidateBasic() error {
case BlockchainV0:
return nil
case BlockchainV2:
return nil
return errors.New("fastsync version v2 is no longer supported. Please use v0")
default:
return fmt.Errorf("unknown fastsync version %s", cfg.Version)
}


@@ -131,7 +131,7 @@ func TestFastSyncConfigValidateBasic(t *testing.T) {
// tamper with version
cfg.Version = "v2"
assert.NoError(t, cfg.ValidateBasic())
assert.Error(t, cfg.ValidateBasic())
cfg.Version = "invalid"
assert.Error(t, cfg.ValidateBasic())


@@ -200,6 +200,7 @@ cors-allowed-headers = [{{ range .RPC.CORSAllowedHeaders }}{{ printf "%q, " . }}
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
# Deprecated gRPC in the RPC layer of Tendermint will be deprecated in 0.36.
grpc-laddr = "{{ .RPC.GRPCListenAddress }}"
# Maximum number of simultaneous connections.
@@ -209,6 +210,7 @@ grpc-laddr = "{{ .RPC.GRPCListenAddress }}"
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
# 1024 - 40 - 10 - 50 = 924 = ~900
# Deprecated gRPC in the RPC layer of Tendermint will be deprecated in 0.36.
grpc-max-open-connections = {{ .RPC.GRPCMaxOpenConnections }}
# Activate unsafe RPC commands like /dial-seeds and /unsafe-flush-mempool
@@ -440,7 +442,7 @@ fetchers = "{{ .StateSync.Fetchers }}"
# Fast Sync version to use:
# 1) "v0" (default) - the legacy fast sync implementation
# 2) "v2" - complete redesign of v0, optimized for testability & readability
# 2) "v2" - DEPRECATED, please use v0
version = "{{ .FastSync.Version }}"
#######################################################


@@ -31,7 +31,6 @@ Available Commands:
check_tx Validate a tx
commit Commit the application state and return the Merkle root hash
console Start an interactive abci console for multiple commands
counter ABCI demo example
deliver_tx Deliver a new tx to the application
kvstore ABCI demo example
echo Have the application echo a message
@@ -214,137 +213,9 @@ we do `deliver_tx "abc=efg"` it will store `(abc, efg)`.
Similarly, you could put the commands in a file and run
`abci-cli --verbose batch < myfile`.
## Counter - Another Example
Now that we've got the hang of it, let's try another application, the
"counter" app.
Like the kvstore app, its code can be found
[here](https://github.com/tendermint/tendermint/blob/master/abci/cmd/abci-cli/abci-cli.go)
and looks like:
```go
func cmdCounter(cmd *cobra.Command, args []string) error {
app := counter.NewCounterApplication(flagSerial)
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
// Start the listener
srv, err := server.NewServer(flagAddrC, flagAbci, app)
if err != nil {
return err
}
srv.SetLogger(logger.With("module", "abci-server"))
if err := srv.Start(); err != nil {
return err
}
// Stop upon receiving SIGTERM or CTRL-C.
tmos.TrapSignal(logger, func() {
// Cleanup
srv.Stop()
})
// Run forever.
select {}
}
```
The counter app doesn't use a Merkle tree, it just counts how many times
we've sent a transaction, asked for a hash, or committed the state. The
result of `commit` is just the number of transactions sent.
This application has two modes: `serial=off` and `serial=on`.
When `serial=on`, transactions must be a big-endian encoded incrementing
integer, starting at 0.
If `serial=off`, there are no restrictions on transactions.
We can toggle the value of `serial` using the `set_option` ABCI message.
When `serial=on`, some transactions are invalid. In a live blockchain,
transactions collect in memory before they are committed into blocks. To
avoid wasting resources on invalid transactions, ABCI provides the
`check_tx` message, which application developers can use to accept or
reject transactions, before they are stored in memory or gossipped to
other peers.
In this instance of the counter app, `check_tx` only allows transactions
whose integer is greater than the last committed one.
Let's kill the console and the kvstore application, and start the
counter app:
```sh
abci-cli counter
```
In another window, start the `abci-cli console`:
```sh
> check_tx 0x00
-> code: OK
> check_tx 0xff
-> code: OK
> deliver_tx 0x00
-> code: OK
> check_tx 0x00
-> code: BadNonce
-> log: Invalid nonce. Expected >= 1, got 0
> deliver_tx 0x01
-> code: OK
> deliver_tx 0x04
-> code: BadNonce
-> log: Invalid nonce. Expected 2, got 4
> info
-> code: OK
-> data: {"hashes":0,"txs":2}
-> data.hex: 0x7B22686173686573223A302C22747873223A327D
```
This is a very simple application, but between `counter` and `kvstore`,
its easy to see how you can build out arbitrary application states on
top of the ABCI. [Hyperledger's
Burrow](https://github.com/hyperledger/burrow) also runs atop ABCI,
bringing with it Ethereum-like accounts, the Ethereum virtual-machine,
Monax's permissioning scheme, and native contracts extensions.
But the ultimate flexibility comes from being able to write the
application easily in any language.
We have implemented the counter in a number of languages [see the
example directory](https://github.com/tendermint/tendermint/tree/master/abci/example).
To run the Node.js version, fist download & install [the Javascript ABCI server](https://github.com/tendermint/js-abci):
```sh
git clone https://github.com/tendermint/js-abci.git
cd js-abci
npm install abci
```
Now you can start the app:
```sh
node example/counter.js
```
(you'll have to kill the other counter application process). In another
window, run the console and those previous ABCI commands. You should get
the same results as for the Go version.
## Bounties
Want to write the counter app in your favorite language?! We'd be happy
Want to write an app in your favorite language?! We'd be happy
to add you to our [ecosystem](https://github.com/tendermint/awesome#ecosystem)!
See [funding](https://github.com/interchainio/funding) opportunities from the
[Interchain Foundation](https://interchain.io/) for implementations in new languages and more.


@@ -37,8 +37,8 @@ cd $GOPATH/src/github.com/tendermint/tendermint
make install_abci
```
Now you should have the `abci-cli` installed; you'll see a couple of
commands (`counter` and `kvstore`) that are example applications written
Now you should have the `abci-cli` installed; you'll notice the `kvstore`
command, an example application written
in Go. See below for an application written in JavaScript.
Now, let's run some apps!
@@ -165,92 +165,6 @@ curl -s 'localhost:26657/abci_query?data="name"'
Try some other transactions and queries to make sure everything is
working!
## Counter - Another Example
Now that we've got the hang of it, let's try another application, the
`counter` app.
The counter app doesn't use a Merkle tree, it just counts how many times
we've sent a transaction, or committed the state.
This application has two modes: `serial=off` and `serial=on`.
When `serial=on`, transactions must be a big-endian encoded incrementing
integer, starting at 0.
If `serial=off`, there are no restrictions on transactions.
In a live blockchain, transactions collect in memory before they are
committed into blocks. To avoid wasting resources on invalid
transactions, ABCI provides the `CheckTx` message, which application
developers can use to accept or reject transactions, before they are
stored in memory or gossipped to other peers.
In this instance of the counter app, with `serial=on`, `CheckTx` only
allows transactions whose integer is greater than the last committed
one.
Let's kill the previous instance of `tendermint` and the `kvstore`
application, and start the counter app. We can enable `serial=on` with a
flag:
```sh
abci-cli counter --serial
```
In another window, reset then start Tendermint:
```sh
tendermint unsafe_reset_all
tendermint start
```
Once again, you can see the blocks streaming by. Let's send some
transactions. Since we have set `serial=on`, the first transaction must
be the number `0`:
```sh
curl localhost:26657/broadcast_tx_commit?tx=0x00
```
Note the empty (hence successful) response. The next transaction must be
the number `1`. If instead, we try to send a `5`, we get an error:
```json
> curl localhost:26657/broadcast_tx_commit?tx=0x05
{
"jsonrpc": "2.0",
"id": "",
"result": {
"check_tx": {},
"deliver_tx": {
"code": 2,
"log": "Invalid nonce. Expected 1, got 5"
},
"hash": "33B93DFF98749B0D6996A70F64071347060DC19C",
"height": 34
}
}
```
But if we send a `1`, it works again:
```json
> curl localhost:26657/broadcast_tx_commit?tx=0x01
{
"jsonrpc": "2.0",
"id": "",
"result": {
"check_tx": {},
"deliver_tx": {},
"hash": "F17854A977F6FA7EEA1BD758E296710B86F72F3D",
"height": 60
}
}
```
For more details on the `broadcast_tx` API, see [the guide on using
Tendermint](../tendermint-core/using-tendermint.md).
## CounterJS - Example in Another Language


@@ -31,24 +31,61 @@ For example:
would be equal to the composite key of `jack.account.number`.
Let's take a look at the `[tx_index]` config section:
By default, Tendermint will index all transactions by their respective hashes
and height and blocks by their height.
## Configuration
Operators can configure indexing via the `[tx_index]` section. The `indexer`
field takes a series of supported indexers. If `null` is included, indexing will
be turned off regardless of other values provided.
```toml
##### transactions indexer configuration options #####
[tx_index]
[tx-index]
# What indexer to use for transactions
# The backend database list to back the indexer.
# If list contains null, meaning no indexer service will be used.
#
# The application will set which txs to index. In some cases a node operator will be able
# to decide which txs to index based on configuration set in the application.
#
# Options:
# 1) "null"
# 2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "kv"
# - When "kv" is chosen "tx.height" and "tx.hash" will always be indexed.
# 3) "psql" - the indexer services backed by PostgreSQL.
# indexer = []
```
By default, Tendermint will index all transactions by their respective hashes
and height and blocks by their height.
### Supported Indexers
You can turn off indexing completely by setting `tx_index` to `null`.
#### KV
The `kv` indexer type is an embedded key-value store supported by the main
underling Tendermint database. Using the `kv` indexer type allows you to query
for block and transaction events directly against Tendermint's RPC. However, the
query syntax is limited and so this indexer type might be deprecated or removed
entirely in the future.
#### PostgreSQL
The `psql` indexer type allows an operator to enable block and transaction event
indexing by proxying it to an external PostgreSQL instance allowing for the events
to be stored in relational models. Since the events are stored in a RDBMS, operators
can leverage SQL to perform a series of rich and complex queries that are not
supported by the `kv` indexer type. Since operators can leverage SQL directly,
searching is not enabled for the `psql` indexer type via Tendermint's RPC -- any
such query will fail.
Note, the SQL schema is stored in `state/indexer/sink/psql/schema.sql` and operators
must explicitly create the relations prior to starting Tendermint and enabling
the `psql` indexer type.
Example:
```shell
$ psql ... -f state/indexer/sink/psql/schema.sql
```
## Default Indexes


@@ -61,7 +61,7 @@ Note the context/background should be written in the present tense.
- [ADR-053: State-Sync-Prototype](./adr-053-state-sync-prototype.md)
- [ADR-054: Crypto-Encoding-2](./adr-054-crypto-encoding-2.md)
- [ADR-055: Protobuf-Design](./adr-055-protobuf-design.md)
- [ADR-056: Light-Client-Amnesia-Attacks](./adr-056-light-client-amnesia-attacks)
- [ADR-056: Light-Client-Amnesia-Attacks](./adr-056-light-client-amnesia-attacks.md)
- [ADR-059: Evidence-Composition-and-Lifecycle](./adr-059-evidence-composition-and-lifecycle.md)
- [ADR-062: P2P-Architecture](./adr-062-p2p-architecture.md)
- [ADR-063: Privval-gRPC](./adr-063-privval-grpc.md)


@@ -97,8 +97,7 @@ design for tendermint was originally tracked in
[#828](https://github.com/tendermint/tendermint/issues/828).
#### Eager StateSync
Warp Sync as implemented in Parity
["Warp Sync"](https://wiki.parity.io/Warp-Sync-Snapshot-Format.html) to rapidly
Warp Sync as implemented in OpenEthereum to rapidly
download both blocks and state snapshots from peers. Data is carved into ~4MB
chunks and snappy compressed. Hashes of snappy compressed chunks are stored in a
manifest file which co-ordinates the state-sync. Obtaining a correct manifest
@@ -234,5 +233,3 @@ Proposed
[WIP General/Lazy State-Sync pseudo-spec](https://github.com/tendermint/tendermint/issues/3639) - Jae Proposal
[Warp Sync Implementation](https://github.com/tendermint/tendermint/pull/3594) - ackratos
[Chunk Proposal](https://github.com/tendermint/tendermint/pull/3799) - Bucky proposed


@@ -119,7 +119,7 @@ network usage.
---
Check out the formal specification
[here](https://docs.tendermint.com/master/spec/consensus/light-client.html).
[here](https://github.com/tendermint/spec/tree/master/spec/light-client).
## Status


@@ -18,7 +18,7 @@ graceful here, but that's for another day.
It's possible to fool lite clients without there being a fork on the
main chain - so called Fork-Lite. See the
[fork accountability](https://docs.tendermint.com/master/spec/consensus/fork-accountability.html)
[fork accountability](https://docs.tendermint.com/master/spec/light-client/accountability/)
document for more details. For a sequential lite client, this can happen via
equivocation or amnesia attacks. For a skipping lite client this can also happen
via lunatic validator attacks. There must be some way for applications to punish


@@ -179,7 +179,7 @@ This then ends the process and the verify function that was called at the start
the user.
For a detailed overview of how each of these three attacks can be conducted please refer to the
[fork accountability spec]((https://github.com/tendermint/spec/blob/master/spec/consensus/light-client/accountability.md)).
[fork accountability spec](https://github.com/tendermint/spec/blob/master/spec/consensus/light-client/accountability.md).
## Full Node Verification


@@ -32,7 +32,7 @@ tendermint start --log-level "info"
Here is the list of modules you may encounter in Tendermint's log and a
little overview what they do.
- `abci-client` As mentioned in [Application Development Guide](../app-dev/app-development.md), Tendermint acts as an ABCI
- `abci-client` As mentioned in [Application Architecture Guide](../app-dev/app-architecture.md), Tendermint acts as an ABCI
client with respect to the application and maintains 3 connections:
mempool, consensus and query. The code used by Tendermint Core can
be found [here](https://github.com/tendermint/tendermint/tree/master/abci/client).
@@ -45,12 +45,12 @@ little overview what they do.
from a crash.
[here](https://github.com/tendermint/tendermint/blob/master/types/events.go).
You can subscribe to them by calling `subscribe` RPC method. Refer
to [RPC docs](./rpc.md) for additional information.
to [RPC docs](../tendermint-core/rpc.md) for additional information.
- `mempool` Mempool module handles all incoming transactions, whenever
they are coming from peers or the application.
- `p2p` Provides an abstraction around peer-to-peer communication. For
more details, please check out the
[README](https://github.com/tendermint/tendermint/blob/master/p2p/README.md).
[README](https://github.com/tendermint/spec/tree/master/spec/p2p).
- `rpc-server` RPC server. For implementation details, please read the
[doc.go](https://github.com/tendermint/tendermint/blob/master/rpc/jsonrpc/doc.go).
- `state` Represents the latest state and execution submodule, which

View File

@@ -40,7 +40,7 @@ Default logging level (`log-level = "info"`) should suffice for
normal operation mode. Read [this
post](https://blog.cosmos.network/one-of-the-exciting-new-features-in-0-10-0-release-is-smart-log-level-flag-e2506b4ab756)
for details on how to configure `log-level` config variable. Some of the
modules can be found [here](../nodes/logging#list-of-modules). If
modules can be found [here](logging.md#list-of-modules). If
you're trying to debug Tendermint or asked to provide logs with debug
logging level, you can do so by running Tendermint with
`--log-level="debug"`.
@@ -114,7 +114,7 @@ just the votes seen at the current height.
If, after consulting with the logs and above endpoints, you still have no idea
what's happening, consider using `tendermint debug kill` sub-command. This
command will scrap all the available info and kill the process. See
[Debugging](../tools/debugging.md) for the exact format.
[Debugging](../tools/debugging/README.md) for the exact format.
You can inspect the resulting archive yourself or create an issue on
[Github](https://github.com/tendermint/tendermint). Before opening an issue
@@ -134,7 +134,7 @@ Tendermint also can report and serve Prometheus metrics. See
[Metrics](./metrics.md).
`tendermint debug dump` sub-command can be used to periodically dump useful
information into an archive. See [Debugging](../tools/debugging.md) for more
information into an archive. See [Debugging](../tools/debugging/README.md) for more
information.
## What happens when my app dies
@@ -268,6 +268,8 @@ While we do not favor any operation system, more secure and stable Linux server
distributions (like Centos) should be preferred over desktop operation systems
(like Mac OS).
Native Windows support is not provided. If you are using a Windows machine, you can try using the [bash shell](https://docs.microsoft.com/en-us/windows/wsl/install-win10).
### Miscellaneous
NOTE: if you are going to use Tendermint in a public domain, make sure
@@ -313,7 +315,7 @@ We want `skip-timeout-commit=false` when there is economics on the line
because proposers should wait to hear for more votes. But if you don't
care about that and want the fastest consensus, you can skip it. It will
be kept false by default for public deployments (e.g. [Cosmos
Hub](https://cosmos.network/intro/hub)) while for enterprise
Hub](https://hub.cosmos.network/main/hub-overview/overview.html)) while for enterprise
applications, setting it to true is not a problem.
- `consensus.peer-gossip-sleep-duration`

View File
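For reference, the setting discussed above lives in the consensus section of config.toml. A minimal excerpt showing only that option, with the default mentioned above (all other consensus settings omitted):

[consensus]
# Keep false on public networks so proposers wait for additional precommit votes;
# set true only when the fastest possible block time matters more than that.
skip-timeout-commit = false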

@@ -36,7 +36,7 @@ more information on query syntax and other options.
You can also use tags, given you had included them into DeliverTx
response, to query transaction results. See [Indexing
transactions](./indexing-transactions.md) for details.
transactions](../app-dev/indexing-transactions.md) for details.
## ValidatorSetUpdates

View File

@@ -552,8 +552,7 @@ To make a Tendermint network that can tolerate one of the validators
failing, you need at least four validator nodes (e.g., 2/3).
Updating validators in a live network is supported but must be
explicitly programmed by the application developer. See the [application
developers guide](../app-dev/app-development.md) for more details.
explicitly programmed by the application developer.
### Local Network

11
go.mod
View File

@@ -10,32 +10,31 @@ require (
github.com/btcsuite/btcd v0.22.0-beta
github.com/btcsuite/btcutil v1.0.3-0.20201208143702-a53e38424cce
github.com/fortytw2/leaktest v1.3.0
github.com/go-kit/kit v0.10.0
github.com/go-kit/kit v0.11.0
github.com/gogo/protobuf v1.3.2
github.com/golang/protobuf v1.5.2
github.com/golangci/golangci-lint v1.41.1
github.com/google/orderedcode v0.0.1
github.com/google/uuid v1.2.0
github.com/google/uuid v1.3.0
github.com/gorilla/websocket v1.4.2
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0
github.com/lib/pq v1.10.2
github.com/libp2p/go-buffer-pool v0.0.2
github.com/minio/highwayhash v1.0.2
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e // indirect
github.com/oasisprotocol/curve25519-voi v0.0.0-20210609091139-0a56a4bca00b
github.com/ory/dockertest v3.3.5+incompatible
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.11.0
github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0
github.com/rs/cors v1.8.0
github.com/rs/zerolog v1.23.0
github.com/sasha-s/go-deadlock v0.2.1-0.20190427202633-1595213edefa
github.com/snikch/goodman v0.0.0-20171125024755-10e37e294daa
github.com/spf13/cobra v1.2.0
github.com/spf13/cobra v1.2.1
github.com/spf13/viper v1.8.1
github.com/stretchr/testify v1.7.0
github.com/tendermint/tm-db v0.6.4
golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad
golang.org/x/crypto v0.0.0-20210314154223-e6e6c4f2bb5b
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4
google.golang.org/grpc v1.39.0
gopkg.in/check.v1 v1.0.0-20200902074654-038fdea0a05b // indirect

558
go.sum

File diff suppressed because it is too large

View File

@@ -83,6 +83,10 @@ type BlockPool struct {
requestsCh chan<- BlockRequest
errorsCh chan<- peerError
startHeight int64
lastHundredBlockTimeStamp time.Time
lastSyncRate float64
}
// NewBlockPool returns a new BlockPool with the height equal to start. Block
@@ -91,12 +95,14 @@ func NewBlockPool(start int64, requestsCh chan<- BlockRequest, errorsCh chan<- p
bp := &BlockPool{
peers: make(map[types.NodeID]*bpPeer),
requesters: make(map[int64]*bpRequester),
height: start,
numPending: 0,
requesters: make(map[int64]*bpRequester),
height: start,
startHeight: start,
numPending: 0,
requestsCh: requestsCh,
errorsCh: errorsCh,
requestsCh: requestsCh,
errorsCh: errorsCh,
lastSyncRate: 0,
}
bp.BaseService = *service.NewBaseService(nil, "BlockPool", bp)
return bp
@@ -106,6 +112,7 @@ func NewBlockPool(start int64, requestsCh chan<- BlockRequest, errorsCh chan<- p
// pool's start time.
func (pool *BlockPool) OnStart() error {
pool.lastAdvance = time.Now()
pool.lastHundredBlockTimeStamp = pool.lastAdvance
go pool.makeRequestersRoutine()
return nil
}
@@ -216,6 +223,19 @@ func (pool *BlockPool) PopRequest() {
delete(pool.requesters, pool.height)
pool.height++
pool.lastAdvance = time.Now()
// the lastSyncRate will be updated every 100 blocks, it uses the adaptive filter
// to smooth the block sync rate and the unit represents the number of blocks per second.
if (pool.height-pool.startHeight)%100 == 0 {
newSyncRate := 100 / time.Since(pool.lastHundredBlockTimeStamp).Seconds()
if pool.lastSyncRate == 0 {
pool.lastSyncRate = newSyncRate
} else {
pool.lastSyncRate = 0.9*pool.lastSyncRate + 0.1*newSyncRate
}
pool.lastHundredBlockTimeStamp = time.Now()
}
} else {
panic(fmt.Sprintf("Expected requester to pop, got nothing at height %v", pool.height))
}
@@ -428,6 +448,20 @@ func (pool *BlockPool) debug() string {
return str
}
func (pool *BlockPool) targetSyncBlocks() int64 {
pool.mtx.RLock()
defer pool.mtx.RUnlock()
return pool.maxPeerHeight - pool.startHeight + 1
}
func (pool *BlockPool) getLastSyncRate() float64 {
pool.mtx.RLock()
defer pool.mtx.RUnlock()
return pool.lastSyncRate
}
//-------------------------------------
type bpPeer struct {

View File

@@ -2,6 +2,7 @@ package v0
import (
"fmt"
"runtime/debug"
"sync"
"time"
@@ -10,6 +11,7 @@ import (
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
tmSync "github.com/tendermint/tendermint/libs/sync"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blockchain"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/store"
@@ -72,7 +74,7 @@ func (e peerError) Error() string {
return fmt.Sprintf("error with peer %v: %s", e.peerID, e.err.Error())
}
// BlockchainReactor handles long-term catchup syncing.
// Reactor handles long-term catchup syncing.
type Reactor struct {
service.BaseService
@@ -83,12 +85,19 @@ type Reactor struct {
store *store.BlockStore
pool *BlockPool
consReactor consensusReactor
fastSync bool
fastSync *tmSync.AtomicBool
blockchainCh *p2p.Channel
peerUpdates *p2p.PeerUpdates
peerUpdatesCh chan p2p.Envelope
closeCh chan struct{}
blockchainCh *p2p.Channel
// blockchainOutBridgeCh defines a channel that acts as a bridge between sending Envelope
// messages that the reactor will consume in processBlockchainCh and receiving messages
// from the peer updates channel and other goroutines. We do this instead of directly
// sending on blockchainCh.Out to avoid race conditions in the case where other goroutines
// send Envelopes directly to the to blockchainCh.Out channel, since processBlockchainCh
// may close the blockchainCh.Out channel at the same time that other goroutines send to
// blockchainCh.Out.
blockchainOutBridgeCh chan p2p.Envelope
peerUpdates *p2p.PeerUpdates
closeCh chan struct{}
requestsCh <-chan BlockRequest
errorsCh <-chan peerError
@@ -99,6 +108,8 @@ type Reactor struct {
poolWG sync.WaitGroup
metrics *cons.Metrics
syncStartTime time.Time
}
// NewReactor returns new reactor instance.
@@ -126,19 +137,20 @@ func NewReactor(
errorsCh := make(chan peerError, maxPeerErrBuffer) // NOTE: The capacity should be larger than the peer count.
r := &Reactor{
initialState: state,
blockExec: blockExec,
store: store,
pool: NewBlockPool(startHeight, requestsCh, errorsCh),
consReactor: consReactor,
fastSync: fastSync,
requestsCh: requestsCh,
errorsCh: errorsCh,
blockchainCh: blockchainCh,
peerUpdates: peerUpdates,
peerUpdatesCh: make(chan p2p.Envelope),
closeCh: make(chan struct{}),
metrics: metrics,
initialState: state,
blockExec: blockExec,
store: store,
pool: NewBlockPool(startHeight, requestsCh, errorsCh),
consReactor: consReactor,
fastSync: tmSync.NewBool(fastSync),
requestsCh: requestsCh,
errorsCh: errorsCh,
blockchainCh: blockchainCh,
blockchainOutBridgeCh: make(chan p2p.Envelope),
peerUpdates: peerUpdates,
closeCh: make(chan struct{}),
metrics: metrics,
syncStartTime: time.Time{},
}
r.BaseService = *service.NewBaseService(logger, "Blockchain", r)
@@ -153,7 +165,7 @@ func NewReactor(
// If fastSync is enabled, we also start the pool and the pool processing
// goroutine. If the pool fails to start, an error is returned.
func (r *Reactor) OnStart() error {
if r.fastSync {
if r.fastSync.IsSet() {
if err := r.pool.Start(); err != nil {
return err
}
@@ -171,7 +183,7 @@ func (r *Reactor) OnStart() error {
// OnStop stops the reactor by signaling to all spawned goroutines to exit and
// blocking until they all exit.
func (r *Reactor) OnStop() {
if r.fastSync {
if r.fastSync.IsSet() {
if err := r.pool.Stop(); err != nil {
r.Logger.Error("failed to stop pool", "err", err)
}
@@ -265,7 +277,11 @@ func (r *Reactor) handleMessage(chID p2p.ChannelID, envelope p2p.Envelope) (err
defer func() {
if e := recover(); e != nil {
err = fmt.Errorf("panic in processing message: %v", e)
r.Logger.Error("recovering from processing message panic", "err", err)
r.Logger.Error(
"recovering from processing message panic",
"err", err,
"stack", string(debug.Stack()),
)
}
}()
@@ -283,7 +299,7 @@ func (r *Reactor) handleMessage(chID p2p.ChannelID, envelope p2p.Envelope) (err
}
// processBlockchainCh initiates a blocking process where we listen for and handle
// envelopes on the BlockchainChannel and peerUpdatesCh. Any error encountered during
// envelopes on the BlockchainChannel and blockchainOutBridgeCh. Any error encountered during
// message execution will result in a PeerError being sent on the BlockchainChannel.
// When the reactor is stopped, we will catch the signal and close the p2p Channel
// gracefully.
@@ -301,8 +317,8 @@ func (r *Reactor) processBlockchainCh() {
}
}
case envelop := <-r.peerUpdatesCh:
r.blockchainCh.Out <- envelop
case envelope := <-r.blockchainOutBridgeCh:
r.blockchainCh.Out <- envelope
case <-r.closeCh:
r.Logger.Debug("stopped listening on blockchain channel; closing...")
@@ -324,7 +340,7 @@ func (r *Reactor) processPeerUpdate(peerUpdate p2p.PeerUpdate) {
switch peerUpdate.Status {
case p2p.PeerStatusUp:
// send a status update the newly added peer
r.peerUpdatesCh <- p2p.Envelope{
r.blockchainOutBridgeCh <- p2p.Envelope{
To: peerUpdate.NodeID,
Message: &bcproto.StatusResponse{
Base: r.store.Base(),
@@ -358,7 +374,7 @@ func (r *Reactor) processPeerUpdates() {
// SwitchToFastSync is called by the state sync reactor when switching to fast
// sync.
func (r *Reactor) SwitchToFastSync(state sm.State) error {
r.fastSync = true
r.fastSync.Set()
r.initialState = state
r.pool.height = state.LastBlockHeight + 1
@@ -366,6 +382,8 @@ func (r *Reactor) SwitchToFastSync(state sm.State) error {
return err
}
r.syncStartTime = time.Now()
r.poolWG.Add(1)
go r.poolRoutine(true)
@@ -388,7 +406,7 @@ func (r *Reactor) requestRoutine() {
return
case request := <-r.requestsCh:
r.blockchainCh.Out <- p2p.Envelope{
r.blockchainOutBridgeCh <- p2p.Envelope{
To: request.PeerID,
Message: &bcproto.BlockRequest{Height: request.Height},
}
@@ -405,7 +423,7 @@ func (r *Reactor) requestRoutine() {
go func() {
defer r.poolWG.Done()
r.blockchainCh.Out <- p2p.Envelope{
r.blockchainOutBridgeCh <- p2p.Envelope{
Broadcast: true,
Message: &bcproto.StatusRequest{},
}
@@ -478,6 +496,8 @@ FOR_LOOP:
r.Logger.Error("failed to stop pool", "err", err)
}
r.fastSync.UnSet()
if r.consReactor != nil {
r.consReactor.SwitchToConsensus(state, blocksSynced > 0 || stateSynced)
}
@@ -592,3 +612,27 @@ FOR_LOOP:
func (r *Reactor) GetMaxPeerBlockHeight() int64 {
return r.pool.MaxPeerHeight()
}
func (r *Reactor) GetTotalSyncedTime() time.Duration {
if !r.fastSync.IsSet() || r.syncStartTime.IsZero() {
return time.Duration(0)
}
return time.Since(r.syncStartTime)
}
func (r *Reactor) GetRemainingSyncTime() time.Duration {
if !r.fastSync.IsSet() {
return time.Duration(0)
}
targetSyncs := r.pool.targetSyncBlocks()
currentSyncs := r.store.Height() - r.pool.startHeight + 1
lastSyncRate := r.pool.getLastSyncRate()
if currentSyncs < 0 || lastSyncRate < 0.001 {
return time.Duration(0)
}
remain := float64(targetSyncs-currentSyncs) / lastSyncRate
return time.Duration(int64(remain * float64(time.Second)))
}

View File
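The new bookkeeping above amounts to an exponential moving average of the sync rate, sampled once per 100 blocks, which then feeds the remaining-time estimate. A self-contained sketch of the same arithmetic (smoothRate and estimateRemaining are illustrative names, not part of the reactor):

package main

import (
	"fmt"
	"time"
)

// smoothRate applies the same 0.9/0.1 adaptive filter used by the block pool:
// the first sample is taken as-is, later samples are blended in.
func smoothRate(last, observed float64) float64 {
	if last == 0 {
		return observed
	}
	return 0.9*last + 0.1*observed
}

// estimateRemaining mirrors GetRemainingSyncTime: blocks left divided by the
// smoothed blocks-per-second rate, with the same guard against a near-zero rate.
func estimateRemaining(targetBlocks, syncedBlocks int64, rate float64) time.Duration {
	if syncedBlocks < 0 || rate < 0.001 {
		return 0
	}
	remain := float64(targetBlocks-syncedBlocks) / rate
	return time.Duration(int64(remain * float64(time.Second)))
}

func main() {
	rate := smoothRate(0, 100/2.0)   // first 100 blocks took 2s -> 50 blocks/s
	rate = smoothRate(rate, 100/4.0) // next 100 blocks took 4s  -> 0.9*50 + 0.1*25 = 47.5
	fmt.Println(rate, estimateRemaining(10000, 4000, rate)) // 47.5, roughly 2m6s remaining
}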

@@ -215,6 +215,29 @@ func TestReactor_AbruptDisconnect(t *testing.T) {
rts.network.Nodes[rts.nodes[1]].PeerManager.Disconnected(rts.nodes[0])
}
func TestReactor_SyncTime(t *testing.T) {
config := cfg.ResetTestRoot("blockchain_reactor_test")
defer os.RemoveAll(config.RootDir)
genDoc, privVals := factory.RandGenesisDoc(config, 1, false, 30)
maxBlockHeight := int64(101)
rts := setup(t, genDoc, privVals[0], []int64{maxBlockHeight, 0}, 0)
require.Equal(t, maxBlockHeight, rts.reactors[rts.nodes[0]].store.Height())
rts.start(t)
require.Eventually(
t,
func() bool {
return rts.reactors[rts.nodes[1]].GetRemainingSyncTime() > time.Nanosecond &&
rts.reactors[rts.nodes[1]].pool.getLastSyncRate() > 0.001
},
10*time.Second,
10*time.Millisecond,
"expected node to be partially synced",
)
}
func TestReactor_NoBlockResponse(t *testing.T) {
config := cfg.ResetTestRoot("blockchain_reactor_test")
defer os.RemoveAll(config.RootDir)

View File

@@ -13,6 +13,7 @@ import (
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/sync"
bcproto "github.com/tendermint/tendermint/proto/tendermint/blockchain"
"github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
@@ -34,8 +35,8 @@ type blockStore interface {
type BlockchainReactor struct {
p2p.BaseReactor
fastSync bool // if true, enable fast sync on start
stateSynced bool // set to true when SwitchToFastSync is called by state sync
fastSync *sync.AtomicBool // enable fast sync on start when it's been Set
stateSynced bool // set to true when SwitchToFastSync is called by state sync
scheduler *Routine
processor *Routine
logger log.Logger
@@ -48,6 +49,10 @@ type BlockchainReactor struct {
reporter behavior.Reporter
io iIO
store blockStore
syncStartTime time.Time
syncStartHeight int64
lastSyncRate float64 // # blocks sync per sec base on the last 100 blocks
}
type blockApplier interface {
@@ -68,12 +73,15 @@ func newReactor(state state.State, store blockStore, reporter behavior.Reporter,
processor := newPcState(pContext)
return &BlockchainReactor{
scheduler: newRoutine("scheduler", scheduler.handle, chBufferSize),
processor: newRoutine("processor", processor.handle, chBufferSize),
store: store,
reporter: reporter,
logger: log.NewNopLogger(),
fastSync: fastSync,
scheduler: newRoutine("scheduler", scheduler.handle, chBufferSize),
processor: newRoutine("processor", processor.handle, chBufferSize),
store: store,
reporter: reporter,
logger: log.NewNopLogger(),
fastSync: sync.NewBool(fastSync),
syncStartHeight: initHeight,
syncStartTime: time.Time{},
lastSyncRate: 0,
}
}
@@ -129,7 +137,7 @@ func (r *BlockchainReactor) SetLogger(logger log.Logger) {
// Start implements cmn.Service interface
func (r *BlockchainReactor) Start() error {
r.reporter = behavior.NewSwitchReporter(r.BaseReactor.Switch)
if r.fastSync {
if r.fastSync.IsSet() {
err := r.startSync(nil)
if err != nil {
return fmt.Errorf("failed to start fast sync: %w", err)
@@ -175,7 +183,13 @@ func (r *BlockchainReactor) endSync() {
func (r *BlockchainReactor) SwitchToFastSync(state state.State) error {
r.stateSynced = true
state = state.Copy()
return r.startSync(&state)
err := r.startSync(&state)
if err == nil {
r.syncStartTime = time.Now()
}
return err
}
// reactor generated ticker events:
@@ -283,7 +297,6 @@ func (e bcResetState) String() string {
// Takes the channel as a parameter to avoid race conditions on r.events.
func (r *BlockchainReactor) demux(events <-chan Event) {
var lastRate = 0.0
var lastHundred = time.Now()
var (
@@ -414,10 +427,15 @@ func (r *BlockchainReactor) demux(events <-chan Event) {
switch event := event.(type) {
case pcBlockProcessed:
r.setSyncHeight(event.height)
if r.syncHeight%100 == 0 {
lastRate = 0.9*lastRate + 0.1*(100/time.Since(lastHundred).Seconds())
if (r.syncHeight-r.syncStartHeight)%100 == 0 {
newSyncRate := 100 / time.Since(lastHundred).Seconds()
if r.lastSyncRate == 0 {
r.lastSyncRate = newSyncRate
} else {
r.lastSyncRate = 0.9*r.lastSyncRate + 0.1*newSyncRate
}
r.logger.Info("Fast Sync Rate", "height", r.syncHeight,
"max_peer_height", r.maxPeerHeight, "blocks/s", lastRate)
"max_peer_height", r.maxPeerHeight, "blocks/s", r.lastSyncRate)
lastHundred = time.Now()
}
r.scheduler.send(event)
@@ -429,6 +447,7 @@ func (r *BlockchainReactor) demux(events <-chan Event) {
r.logger.Error("Failed to switch to consensus reactor")
}
r.endSync()
r.fastSync.UnSet()
return
case noOpEvent:
default:
@@ -596,3 +615,29 @@ func (r *BlockchainReactor) GetMaxPeerBlockHeight() int64 {
defer r.mtx.RUnlock()
return r.maxPeerHeight
}
func (r *BlockchainReactor) GetTotalSyncedTime() time.Duration {
if !r.fastSync.IsSet() || r.syncStartTime.IsZero() {
return time.Duration(0)
}
return time.Since(r.syncStartTime)
}
func (r *BlockchainReactor) GetRemainingSyncTime() time.Duration {
if !r.fastSync.IsSet() {
return time.Duration(0)
}
r.mtx.RLock()
defer r.mtx.RUnlock()
targetSyncs := r.maxPeerHeight - r.syncStartHeight
currentSyncs := r.syncHeight - r.syncStartHeight + 1
if currentSyncs < 0 || r.lastSyncRate < 0.001 {
return time.Duration(0)
}
remain := float64(targetSyncs-currentSyncs) / r.lastSyncRate
return time.Duration(int64(remain * float64(time.Second)))
}

View File

@@ -35,7 +35,7 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
prevoteHeight := int64(2)
testName := "consensus_byzantine_test"
tickerFunc := newMockTickerFunc(true)
appFunc := newCounter
appFunc := newKVStore
genDoc, privVals := factory.RandGenesisDoc(config, nValidators, false, 30)
states := make([]*State, nValidators)

View File

@@ -19,7 +19,6 @@ import (
dbm "github.com/tendermint/tm-db"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
abci "github.com/tendermint/tendermint/abci/types"
cfg "github.com/tendermint/tendermint/config"
@@ -449,7 +448,7 @@ func randState(config *cfg.Config, nValidators int) (*State, []*validatorStub) {
vss := make([]*validatorStub, nValidators)
cs := newState(state, privVals[0], counter.NewApplication(true))
cs := newState(state, privVals[0], kvstore.NewApplication())
for i := 0; i < nValidators; i++ {
vss[i] = newValidatorStub(privVals[i], int32(i))
@@ -862,10 +861,6 @@ func (m *mockTicker) Chan() <-chan timeoutInfo {
func (*mockTicker) SetLogger(log.Logger) {}
func newCounter() abci.Application {
return counter.NewApplication(true)
}
func newPersistentKVStore() abci.Application {
dir, err := ioutil.TempDir("", "persistent-kvstore")
if err != nil {
@@ -874,6 +869,10 @@ func newPersistentKVStore() abci.Application {
return kvstore.NewPersistentKVStoreApplication(dir)
}
func newKVStore() abci.Application {
return kvstore.NewApplication()
}
func newPersistentKVStoreWithPath(dbDir string) abci.Application {
return kvstore.NewPersistentKVStoreApplication(dbDir)
}

View File

@@ -18,7 +18,9 @@ func TestReactorInvalidPrecommit(t *testing.T) {
config := configSetup(t)
n := 4
states, cleanup := randConsensusState(t, config, n, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
states, cleanup := randConsensusState(t,
config, n, "consensus_reactor_test",
newMockTickerFunc(true), newKVStore)
t.Cleanup(cleanup)
for i := 0; i < 4; i++ {

View File

@@ -2,6 +2,7 @@ package consensus
import (
"fmt"
"runtime/debug"
"time"
cstypes "github.com/tendermint/tendermint/internal/consensus/types"
@@ -101,6 +102,14 @@ type FastSyncReactor interface {
SwitchToFastSync(sm.State) error
GetMaxPeerBlockHeight() int64
// GetTotalSyncedTime returns the time duration since fast sync started.
GetTotalSyncedTime() time.Duration
// GetRemainingSyncTime returns the estimated time until the node is fully synced.
// It returns 0 if fast sync is not running or if the number of blocks synced is
// too small (fewer than 100).
GetRemainingSyncTime() time.Duration
}
// Reactor defines a reactor for the consensus service.
@@ -1197,21 +1206,24 @@ func (r *Reactor) handleVoteSetBitsMessage(envelope p2p.Envelope, msgI Message)
// It will handle errors and any possible panics gracefully. A caller can handle
// any error returned by sending a PeerError on the respective channel.
//
// NOTE: We process these messages even when we're fast_syncing. Messages affect
// either a peer state or the consensus state. Peer state updates can happen in
// parallel, but processing of proposals, block parts, and votes are ordered by
// the p2p channel.
//
// NOTE: We block on consensus state for proposals, block parts, and votes.
func (r *Reactor) handleMessage(chID p2p.ChannelID, envelope p2p.Envelope) (err error) {
defer func() {
if e := recover(); e != nil {
err = fmt.Errorf("panic in processing message: %v", e)
r.Logger.Error("recovering from processing message panic", "err", err)
r.Logger.Error(
"recovering from processing message panic",
"err", err,
"stack", string(debug.Stack()),
)
}
}()
// Just skip the entire message during syncing so that we can
// process fewer messages.
if r.WaitSync() {
return
}
// We wrap the envelope's message in a Proto wire type so we can convert back
// the domain type that individual channel message handlers can work with. We
// do this here once to avoid having to do it for each individual message type.

View File
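A minimal sketch of how a caller inside the node could surface the two new interface methods. The helper reportSyncProgress is illustrative only (not part of this change) and assumes it sits somewhere FastSyncReactor and the libs/log Logger are already in scope:

// reportSyncProgress logs a snapshot of fast-sync progress using the new
// FastSyncReactor methods. GetRemainingSyncTime reports 0 until roughly the
// first 100 blocks have been synced, so early log lines may show no estimate.
func reportSyncProgress(r FastSyncReactor, logger log.Logger) {
	logger.Info(
		"fast sync progress",
		"max_peer_height", r.GetMaxPeerBlockHeight(),
		"total_synced_time", r.GetTotalSyncedTime(),
		"remaining_sync_time", r.GetRemainingSyncTime(),
	)
}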

@@ -257,7 +257,9 @@ func TestReactorBasic(t *testing.T) {
config := configSetup(t)
n := 4
states, cleanup := randConsensusState(t, config, n, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
states, cleanup := randConsensusState(t,
config, n, "consensus_reactor_test",
newMockTickerFunc(true), newKVStore)
t.Cleanup(cleanup)
rts := setup(t, n, states, 100) // buffer must be large enough to not deadlock
@@ -287,7 +289,7 @@ func TestReactorWithEvidence(t *testing.T) {
n := 4
testName := "consensus_reactor_test"
tickerFunc := newMockTickerFunc(true)
appFunc := newCounter
appFunc := newKVStore
genDoc, privVals := factory.RandGenesisDoc(config, n, false, 30)
states := make([]*State, n)
@@ -387,7 +389,7 @@ func TestReactorCreatesBlockWhenEmptyBlocksFalse(t *testing.T) {
n,
"consensus_reactor_test",
newMockTickerFunc(true),
newCounter,
newKVStore,
func(c *cfg.Config) {
c.Consensus.CreateEmptyBlocks = false
},
@@ -431,7 +433,9 @@ func TestReactorRecordsVotesAndBlockParts(t *testing.T) {
config := configSetup(t)
n := 4
states, cleanup := randConsensusState(t, config, n, "consensus_reactor_test", newMockTickerFunc(true), newCounter)
states, cleanup := randConsensusState(t,
config, n, "consensus_reactor_test",
newMockTickerFunc(true), newKVStore)
t.Cleanup(cleanup)
rts := setup(t, n, states, 100) // buffer must be large enough to not deadlock

View File

@@ -10,7 +10,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
"github.com/tendermint/tendermint/crypto/tmhash"
cstypes "github.com/tendermint/tendermint/internal/consensus/types"
p2pmock "github.com/tendermint/tendermint/internal/p2p/mock"
@@ -654,7 +654,7 @@ func TestStateLockPOLRelock(t *testing.T) {
signAddVotes(config, cs1, tmproto.PrecommitType, nil, types.PartSetHeader{}, vs2, vs3, vs4)
// before we timeout to the new round set the new proposal
cs2 := newState(cs1.state, vs2, counter.NewApplication(true))
cs2 := newState(cs1.state, vs2, kvstore.NewApplication())
prop, propBlock := decideProposal(cs2, vs2, vs2.Height, vs2.Round+1)
if prop == nil || propBlock == nil {
t.Fatal("Failed to create proposal block with vs2")
@@ -843,7 +843,7 @@ func TestStateLockPOLUnlockOnUnknownBlock(t *testing.T) {
signAddVotes(config, cs1, tmproto.PrecommitType, nil, types.PartSetHeader{}, vs2, vs3, vs4)
// before we timeout to the new round set the new proposal
cs2 := newState(cs1.state, vs2, counter.NewApplication(true))
cs2 := newState(cs1.state, vs2, kvstore.NewApplication())
prop, propBlock := decideProposal(cs2, vs2, vs2.Height, vs2.Round+1)
if prop == nil || propBlock == nil {
t.Fatal("Failed to create proposal block with vs2")
@@ -887,7 +887,7 @@ func TestStateLockPOLUnlockOnUnknownBlock(t *testing.T) {
signAddVotes(config, cs1, tmproto.PrecommitType, nil, types.PartSetHeader{}, vs2, vs3, vs4)
// before we timeout to the new round set the new proposal
cs3 := newState(cs1.state, vs3, counter.NewApplication(true))
cs3 := newState(cs1.state, vs3, kvstore.NewApplication())
prop, propBlock = decideProposal(cs3, vs3, vs3.Height, vs3.Round+1)
if prop == nil || propBlock == nil {
t.Fatal("Failed to create proposal block with vs2")

View File

@@ -489,14 +489,12 @@ func (evpool *Pool) batchExpiredPendingEvidence(batch dbm.Batch) (int64, time.Ti
func (evpool *Pool) removeEvidenceFromList(
blockEvidenceMap map[string]struct{}) {
el := evpool.evidenceList
var nextE *clist.CElement
for e := el.Front(); e != nil; e = nextE {
nextE = e.Next()
for e := evpool.evidenceList.Front(); e != nil; e = e.Next() {
// Remove from clist
ev := e.Value.(types.Evidence)
if _, ok := blockEvidenceMap[evMapKey(ev)]; ok {
evpool.evidenceList.Remove(e)
e.DetachPrev()
}
}
}

View File

@@ -2,6 +2,7 @@ package evidence
import (
"fmt"
"runtime/debug"
"sync"
"time"
@@ -165,6 +166,11 @@ func (r *Reactor) handleMessage(chID p2p.ChannelID, envelope p2p.Envelope) (err
defer func() {
if e := recover(); e != nil {
err = fmt.Errorf("panic in processing message: %v", e)
r.Logger.Error(
"recovering from processing message panic",
"err", err,
"stack", string(debug.Stack()),
)
}
}()
@@ -296,7 +302,11 @@ func (r *Reactor) broadcastEvidenceLoop(peerID types.NodeID, closer *tmsync.Clos
r.peerWG.Done()
if e := recover(); e != nil {
r.Logger.Error("recovering from broadcasting evidence loop", "err", e)
r.Logger.Error(
"recovering from broadcasting evidence loop",
"err", e,
"stack", string(debug.Stack()),
)
}
}()

View File

@@ -146,7 +146,7 @@ func (g *Group) OnStart() error {
func (g *Group) OnStop() {
g.ticker.Stop()
if err := g.FlushAndSync(); err != nil {
g.Logger.Error("Error flushin to disk", "err", err)
g.Logger.Error("Error flushing to disk", "err", err)
}
}
@@ -160,7 +160,7 @@ func (g *Group) Wait() {
// Close closes the head file. The group must be stopped by this moment.
func (g *Group) Close() {
if err := g.FlushAndSync(); err != nil {
g.Logger.Error("Error flushin to disk", "err", err)
g.Logger.Error("Error flushing to disk", "err", err)
}
g.mtx.Lock()

View File

@@ -196,26 +196,21 @@ func (e *CElement) SetPrev(newPrev *CElement) {
e.mtx.Unlock()
}
// Detach is a shortcut to mark the given element remove and detach the prev/next elements.
func (e *CElement) Detach() {
func (e *CElement) SetRemoved() {
e.mtx.Lock()
defer e.mtx.Unlock()
e.removed = true
// This wakes up anyone waiting in either direction.
if e.prev == nil {
e.prevWg.Done()
close(e.prevWaitCh)
} else {
e.prev = nil
}
if e.next == nil {
e.nextWg.Done()
close(e.nextWaitCh)
} else {
e.next = nil
}
e.mtx.Unlock()
}
//--------------------------------------------------------------------------------
@@ -353,26 +348,24 @@ func (l *CList) PushBack(v interface{}) *CElement {
return e
}
// Remove removes the given element in the CList
// CONTRACT: Caller must call e.DetachPrev() and/or e.DetachNext() to avoid memory leaks.
// NOTE: As per the contract of CList, removed elements cannot be added back.
// Because CList detaches the prev/next element when it removes the given element,
// please do not use CElement.Next() in the for loop postcondition; use
// a variable declared ahead of the for loop and then assign the Next() element
// to it in the loop as the postcondition.
func (l *CList) Remove(e *CElement) interface{} {
l.mtx.Lock()
defer l.mtx.Unlock()
prev := e.Prev()
next := e.Next()
if l.head == nil || l.tail == nil {
l.mtx.Unlock()
panic("Remove(e) on empty CList")
}
if prev == nil && l.head != e {
l.mtx.Unlock()
panic("Remove(e) with false head")
}
if next == nil && l.tail != e {
l.mtx.Unlock()
panic("Remove(e) with false tail")
}
@@ -397,53 +390,13 @@ func (l *CList) Remove(e *CElement) interface{} {
next.SetPrev(prev)
}
e.Detach()
// Set .Done() on e, otherwise waiters will wait forever.
e.SetRemoved()
l.mtx.Unlock()
return e.Value
}
// Clear removes all the elements in the CList
func (l *CList) Clear() {
l.mtx.Lock()
defer l.mtx.Unlock()
if l.head == nil || l.tail == nil {
return
}
for e := l.head; e != nil; e = l.head {
prevE := e.Prev()
nextE := e.Next()
if prevE == nil && e != l.head {
panic("CList.Clear failed due to nil prev element")
}
if nextE == nil && e != l.tail {
panic("CList.Clear failed due to nil next element")
}
if l.len == 1 {
l.wg = waitGroup1()
l.waitCh = make(chan struct{})
}
l.len--
if prevE == nil {
l.head = nextE
} else {
prevE.SetNext(nextE)
}
if nextE == nil {
l.tail = prevE
} else {
nextE.SetPrev(prevE)
}
e.Detach()
}
}
func waitGroup1() (wg *sync.WaitGroup) {
wg = &sync.WaitGroup{}
wg.Add(1)

View File

@@ -252,16 +252,9 @@ func TestScanRightDeleteRandom(t *testing.T) {
// time.Sleep(time.Second * 1)
// And remove all the elements.
// we detach the prev/next element in CList.Remove, so just assign l.Front directly.
halfElements := numElements / 2
for el := l.Front(); el != nil && halfElements > 0; el = l.Front() {
for el := l.Front(); el != nil; el = el.Next() {
l.Remove(el)
halfElements--
}
// remove the rest half elements in the CList
l.Clear()
if l.Len() != 0 {
t.Fatal("Failed to remove all elements from CList")
}

View File

@@ -164,7 +164,10 @@ func (mem *CListMempool) Flush() {
_ = atomic.SwapInt64(&mem.txsBytes, 0)
mem.cache.Reset()
mem.txs.Clear()
for e := mem.txs.Front(); e != nil; e = e.Next() {
mem.txs.Remove(e)
e.DetachPrev()
}
mem.txsMap.Range(func(key, _ interface{}) bool {
mem.txsMap.Delete(key)
@@ -335,6 +338,7 @@ func (mem *CListMempool) addTx(memTx *mempoolTx) {
// - resCbRecheck (lock not held) if tx was invalidated
func (mem *CListMempool) removeTx(tx types.Tx, elem *clist.CElement, removeFromCache bool) {
mem.txs.Remove(elem)
elem.DetachPrev()
mem.txsMap.Delete(mempool.TxKey(tx))
atomic.AddInt64(&mem.txsBytes, int64(-len(tx)))

View File

@@ -15,7 +15,6 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
abciserver "github.com/tendermint/tendermint/abci/server"
abci "github.com/tendermint/tendermint/abci/types"
@@ -217,7 +216,7 @@ func TestMempoolUpdate(t *testing.T) {
}
func TestMempool_KeepInvalidTxsInCache(t *testing.T) {
app := counter.NewApplication(true)
app := kvstore.NewApplication()
cc := proxy.NewLocalClientCreator(app)
wcfg := cfg.DefaultConfig()
wcfg.Mempool.KeepInvalidTxsInCache = true
@@ -309,7 +308,7 @@ func TestTxsAvailable(t *testing.T) {
}
func TestSerialReap(t *testing.T) {
app := counter.NewApplication(true)
app := kvstore.NewApplication()
cc := proxy.NewLocalClientCreator(app)
mp, cleanup := newMempoolWithApp(cc)
@@ -508,7 +507,7 @@ func TestMempoolTxsBytes(t *testing.T) {
}
// 6. zero after tx is rechecked and removed due to not being valid anymore
app2 := counter.NewApplication(true)
app2 := kvstore.NewApplication()
cc = proxy.NewLocalClientCreator(app2)
mp, cleanup = newMempoolWithApp(cc)
defer cleanup()
@@ -540,16 +539,16 @@ func TestMempoolTxsBytes(t *testing.T) {
// Pretend like we committed nothing so txBytes gets rechecked and removed.
err = mp.Update(1, []types.Tx{}, abciResponses(0, abci.CodeTypeOK), nil, nil)
require.NoError(t, err)
assert.EqualValues(t, 0, mp.SizeBytes())
assert.EqualValues(t, 8, mp.SizeBytes())
// 7. Test RemoveTxByKey function
err = mp.CheckTx(context.Background(), []byte{0x06}, nil, mempool.TxInfo{})
require.NoError(t, err)
assert.EqualValues(t, 1, mp.SizeBytes())
assert.EqualValues(t, 9, mp.SizeBytes())
mp.RemoveTxByKey(mempool.TxKey([]byte{0x07}), true)
assert.EqualValues(t, 1, mp.SizeBytes())
assert.EqualValues(t, 9, mp.SizeBytes())
mp.RemoveTxByKey(mempool.TxKey([]byte{0x06}), true)
assert.EqualValues(t, 0, mp.SizeBytes())
assert.EqualValues(t, 8, mp.SizeBytes())
}

View File

@@ -4,6 +4,7 @@ import (
"context"
"errors"
"fmt"
"runtime/debug"
"sync"
"time"
@@ -188,6 +189,11 @@ func (r *Reactor) handleMessage(chID p2p.ChannelID, envelope p2p.Envelope) (err
defer func() {
if e := recover(); e != nil {
err = fmt.Errorf("panic in processing message: %v", e)
r.Logger.Error(
"recovering from processing message panic",
"err", err,
"stack", string(debug.Stack()),
)
}
}()
@@ -312,7 +318,11 @@ func (r *Reactor) broadcastTxRoutine(peerID types.NodeID, closer *tmsync.Closer)
r.peerWG.Done()
if e := recover(); e != nil {
r.Logger.Error("recovering from broadcasting mempool loop", "err", e)
r.Logger.Error(
"recovering from broadcasting mempool loop",
"err", e,
"stack", string(debug.Stack()),
)
}
}()

View File

@@ -305,6 +305,7 @@ func (txmp *TxMempool) Flush() {
txmp.txStore.RemoveTx(wtx)
txmp.priorityIndex.RemoveTx(wtx)
txmp.gossipIndex.Remove(wtx.gossipEl)
wtx.gossipEl.DetachPrev()
}
}
@@ -742,6 +743,7 @@ func (txmp *TxMempool) removeTx(wtx *WrappedTx, removeFromCache bool) {
// Remove the transaction from the gossip index and cleanup the linked-list
// element so it can be garbage collected.
txmp.gossipIndex.Remove(wtx.gossipEl)
wtx.gossipEl.DetachPrev()
atomic.AddInt64(&txmp.sizeBytes, int64(-wtx.Size()))

View File

@@ -4,6 +4,7 @@ import (
"context"
"errors"
"fmt"
"runtime/debug"
"sync"
"time"
@@ -187,6 +188,11 @@ func (r *Reactor) handleMessage(chID p2p.ChannelID, envelope p2p.Envelope) (err
defer func() {
if e := recover(); e != nil {
err = fmt.Errorf("panic in processing message: %v", e)
r.Logger.Error(
"recovering from processing message panic",
"err", err,
"stack", string(debug.Stack()),
)
}
}()
@@ -311,7 +317,11 @@ func (r *Reactor) broadcastTxRoutine(peerID types.NodeID, closer *tmsync.Closer)
r.peerWG.Done()
if e := recover(); e != nil {
r.Logger.Error("recovering from broadcasting mempool loop", "err", e)
r.Logger.Error(
"recovering from broadcasting mempool loop",
"err", e,
"stack", string(debug.Stack()),
)
}
}()

View File

@@ -1,4 +1,4 @@
// Code generated by mockery 2.7.4. DO NOT EDIT.
// Code generated by mockery v0.0.0-dev. DO NOT EDIT.
package mocks

View File

@@ -1,4 +1,4 @@
// Code generated by mockery 2.7.4. DO NOT EDIT.
// Code generated by mockery v0.0.0-dev. DO NOT EDIT.
package mocks

View File

@@ -1,4 +1,4 @@
// Code generated by mockery 2.7.4. DO NOT EDIT.
// Code generated by mockery v0.0.0-dev. DO NOT EDIT.
package mocks

34
internal/p2p/pex/doc.go Normal file
View File

@@ -0,0 +1,34 @@
/*
Package PEX (Peer exchange) handles all the logic necessary for nodes to share
information about their peers with other nodes. Specifically, this is the exchange
of addresses that a peer can use to discover more peers within the network.
The PEX reactor is a continuous service which periodically requests addresses
and serves addresses to other peers. There are two versions of this service
aligning with the two p2p frameworks that Tendermint currently supports.
V1 is coupled with the Switch (which handles peer connections and routing of
messages) and, alongside exchanging peer information in the form of port/IP
pairs, also has the responsibility of dialing peers and ensuring that a
node has a sufficient number of peers connected.
V2 is embedded with the new p2p stack and uses the peer manager to advertise
peers as well as add new peers to the peer store. The V2 reactor passes a
different set of proto messages which include a list of
[urls](https://golang.org/pkg/net/url/#URL). These can be used to save a set of
endpoints that each peer uses. The V2 reactor has backwards compatibility with
V1. It can also handle V1 messages.
The V2 reactor is able to tweak the intensity of its search by decreasing or
increasing the interval between each request. It tracks connected peers via a
linked list, sending a request to the node at the front of the list and adding
it to the back of the list once a response is received. Using this method, a
node is able to spread out the load of requesting peers across all the peers it
is currently connected with.
With each inbound set of addresses, the reactor monitors the ratio of new
addresses to already-seen addresses and uses this information to dynamically
build a picture of the size of the network in order to ascertain how often the
node needs to search for new peers.
*/
package pex

View File
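The interval tuning described in the package documentation above is easy to picture with a small sketch. Everything below is illustrative only (the function name, parameters, and the 1.5 scaling constant are assumptions, not the reactor's actual code), but it captures the idea: a high ratio of new to already-seen addresses keeps requests frequent, while a low ratio backs the interval off.

// adjustInterval scales the next PEX request interval from the fraction of
// newly discovered addresses in the latest response (illustrative sketch).
func adjustInterval(base time.Duration, newAddrs, totalAddrs int) time.Duration {
	if totalAddrs == 0 {
		return base
	}
	discoveryRate := float64(newAddrs) / float64(totalAddrs) // 1.0 = all new, 0.0 = all seen before
	// Mostly-new addresses suggest a large, under-explored network: poll sooner.
	// Mostly-seen addresses suggest the node already knows the network: back off.
	return time.Duration(float64(base) * (1.5 - discoveryRate))
}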

@@ -3,6 +3,7 @@ package pex
import (
"context"
"fmt"
"runtime/debug"
"sync"
"time"
@@ -367,6 +368,11 @@ func (r *ReactorV2) handleMessage(chID p2p.ChannelID, envelope p2p.Envelope) (er
defer func() {
if e := recover(); e != nil {
err = fmt.Errorf("panic in processing message: %v", e)
r.Logger.Error(
"recovering from processing message panic",
"err", err,
"stack", string(debug.Stack()),
)
}
}()

View File

@@ -476,8 +476,10 @@ func (r *Router) routeChannel(
}
if !contains {
r.logger.Error("tried to send message across a channel that the peer doesn't have available",
"peer", envelope.To, "channel", chID)
// reactor tried to send a message across a channel that the
// peer doesn't have available. This is a known issue due to
// how peer subscriptions work:
// https://github.com/tendermint/tendermint/issues/6598
continue
}
@@ -1021,6 +1023,15 @@ func (r *Router) NodeInfo() types.NodeInfo {
// OnStart implements service.Service.
func (r *Router) OnStart() error {
netAddr, _ := r.nodeInfo.NetAddress()
r.Logger.Info(
"starting router",
"node_id", r.nodeInfo.NodeID,
"channels", r.nodeInfo.Channels,
"listen_addr", r.nodeInfo.ListenAddr,
"net_addr", netAddr,
)
go r.dialPeers()
go r.evictPeers()

View File

@@ -23,9 +23,10 @@ type blockQueue struct {
verifyHeight int64
// termination conditions
stopHeight int64
stopTime time.Time
terminal *types.LightBlock
initialHeight int64
stopHeight int64
stopTime time.Time
terminal *types.LightBlock
// track failed heights so we know what blocks to try fetch again
failed *maxIntHeap
@@ -45,21 +46,22 @@ type blockQueue struct {
}
func newBlockQueue(
startHeight, stopHeight int64,
startHeight, stopHeight, initialHeight int64,
stopTime time.Time,
maxRetries int,
) *blockQueue {
return &blockQueue{
stopHeight: stopHeight,
stopTime: stopTime,
fetchHeight: startHeight,
verifyHeight: startHeight,
pending: make(map[int64]lightBlockResponse),
failed: &maxIntHeap{},
retries: 0,
maxRetries: maxRetries,
waiters: make([]chan int64, 0),
doneCh: make(chan struct{}),
stopHeight: stopHeight,
initialHeight: initialHeight,
stopTime: stopTime,
fetchHeight: startHeight,
verifyHeight: startHeight,
pending: make(map[int64]lightBlockResponse),
failed: &maxIntHeap{},
retries: 0,
maxRetries: maxRetries,
waiters: make([]chan int64, 0),
doneCh: make(chan struct{}),
}
}
@@ -93,9 +95,10 @@ func (q *blockQueue) add(l lightBlockResponse) {
q.pending[l.block.Height] = l
}
// Lastly, if the incoming block is past the stop time and stop height then
// we mark it as the terminal block
if l.block.Height <= q.stopHeight && l.block.Time.Before(q.stopTime) {
// Lastly, if the incoming block is past the stop time and stop height or
// is equal to the initial height then we mark it as the terminal block.
if l.block.Height <= q.stopHeight && l.block.Time.Before(q.stopTime) ||
l.block.Height == q.initialHeight {
q.terminal = l.block
}
}
@@ -115,7 +118,7 @@ func (q *blockQueue) nextHeight() <-chan int64 {
return ch
}
if q.terminal == nil {
if q.terminal == nil && q.fetchHeight >= q.initialHeight {
// return and decrement the fetch height
ch <- q.fetchHeight
q.fetchHeight--

View File
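Concretely, the initial height now acts as a floor for backfill. For example, backfilling on a chain whose genesis (initial) height is 120 might construct the queue as below (the start and stop heights are illustrative values); nextHeight never hands out a height below 120, and the light block at height 120 is marked terminal even if neither the stop height nor the stop time has been reached:

	queue := newBlockQueue(500, 100, 120, stopTime, maxLightBlockRequestRetries)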

@@ -25,7 +25,7 @@ func TestBlockQueueBasic(t *testing.T) {
peerID, err := types.NewNodeID("0011223344556677889900112233445566778899")
require.NoError(t, err)
queue := newBlockQueue(startHeight, stopHeight, stopTime, 1)
queue := newBlockQueue(startHeight, stopHeight, 1, stopTime, 1)
wg := &sync.WaitGroup{}
// asynchronously fetch blocks and add it to the queue
@@ -72,7 +72,7 @@ func TestBlockQueueWithFailures(t *testing.T) {
peerID, err := types.NewNodeID("0011223344556677889900112233445566778899")
require.NoError(t, err)
queue := newBlockQueue(startHeight, stopHeight, stopTime, 200)
queue := newBlockQueue(startHeight, stopHeight, 1, stopTime, 200)
wg := &sync.WaitGroup{}
failureRate := 4
@@ -121,7 +121,7 @@ func TestBlockQueueWithFailures(t *testing.T) {
func TestBlockQueueBlocks(t *testing.T) {
peerID, err := types.NewNodeID("0011223344556677889900112233445566778899")
require.NoError(t, err)
queue := newBlockQueue(startHeight, stopHeight, stopTime, 2)
queue := newBlockQueue(startHeight, stopHeight, 1, stopTime, 2)
expectedHeight := startHeight
retryHeight := stopHeight + 2
@@ -168,7 +168,7 @@ loop:
func TestBlockQueueAcceptsNoMoreBlocks(t *testing.T) {
peerID, err := types.NewNodeID("0011223344556677889900112233445566778899")
require.NoError(t, err)
queue := newBlockQueue(startHeight, stopHeight, stopTime, 1)
queue := newBlockQueue(startHeight, stopHeight, 1, stopTime, 1)
defer queue.close()
loop:
@@ -194,7 +194,7 @@ func TestBlockQueueStopTime(t *testing.T) {
peerID, err := types.NewNodeID("0011223344556677889900112233445566778899")
require.NoError(t, err)
queue := newBlockQueue(startHeight, stopHeight, stopTime, 1)
queue := newBlockQueue(startHeight, stopHeight, 1, stopTime, 1)
wg := &sync.WaitGroup{}
baseTime := stopTime.Add(-50 * time.Second)
@@ -233,6 +233,46 @@ func TestBlockQueueStopTime(t *testing.T) {
}
}
func TestBlockQueueInitialHeight(t *testing.T) {
peerID, err := types.NewNodeID("0011223344556677889900112233445566778899")
require.NoError(t, err)
const initialHeight int64 = 120
queue := newBlockQueue(startHeight, stopHeight, initialHeight, stopTime, 1)
wg := &sync.WaitGroup{}
// asynchronously fetch blocks and add it to the queue
for i := 0; i <= numWorkers; i++ {
wg.Add(1)
go func() {
for {
select {
case height := <-queue.nextHeight():
require.GreaterOrEqual(t, height, initialHeight)
queue.add(mockLBResp(t, peerID, height, endTime))
case <-queue.done():
wg.Done()
return
}
}
}()
}
loop:
for {
select {
case <-queue.done():
wg.Wait()
require.NoError(t, queue.error())
break loop
case resp := <-queue.verifyNext():
require.GreaterOrEqual(t, resp.block.Height, initialHeight)
queue.success(resp.block.Height)
}
}
}
func mockLBResp(t *testing.T, peer types.NodeID, height int64, time time.Time) lightBlockResponse {
return lightBlockResponse{
block: mockLB(t, height, time, factory.MakeBlockID()),

View File

@@ -45,23 +45,26 @@ func newDispatcher(requestCh chan<- p2p.Envelope, timeout time.Duration) *dispat
}
}
// LightBlock uses the request channel to fetch a light block from the next peer
// in a list, tracks the call and waits for the reactor to pass along the response
func (d *dispatcher) LightBlock(ctx context.Context, height int64) (*types.LightBlock, types.NodeID, error) {
d.mtx.Lock()
outgoingCalls := len(d.calls)
d.mtx.Unlock()
// check to see that the dispatcher is connected to at least one peer
if d.availablePeers.Len() == 0 && outgoingCalls == 0 {
if d.availablePeers.Len() == 0 && len(d.calls) == 0 {
d.mtx.Unlock()
return nil, "", errNoConnectedPeers
}
d.mtx.Unlock()
// fetch the next peer id in the list and request a light block from that
// peer
peer := d.availablePeers.Pop()
peer := d.availablePeers.Pop(ctx)
lb, err := d.lightBlock(ctx, height, peer)
return lb, peer, err
}
// Providers turns the dispatcher into a set of providers (per peer) which can
// be used by a light client
func (d *dispatcher) Providers(chainID string, timeout time.Duration) []provider.Provider {
d.mtx.Lock()
defer d.mtx.Unlock()
@@ -272,7 +275,7 @@ func (l *peerlist) Len() int {
return len(l.peers)
}
func (l *peerlist) Pop() types.NodeID {
func (l *peerlist) Pop(ctx context.Context) types.NodeID {
l.mtx.Lock()
if len(l.peers) == 0 {
// if we don't have any peers in the list we block until a peer is
@@ -281,8 +284,13 @@ func (l *peerlist) Pop() types.NodeID {
l.waiting = append(l.waiting, wait)
// unlock whilst waiting so that the list can be appended to
l.mtx.Unlock()
peer := <-wait
return peer
select {
case peer := <-wait:
return peer
case <-ctx.Done():
return ""
}
}
peer := l.peers[0]

View File
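With the context-aware Pop above, callers can bound how long they wait for a peer to become available. A small usage sketch (the five-second timeout is illustrative):

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	peer := d.availablePeers.Pop(ctx) // returns an empty NodeID if the context is canceled first
	if peer == "" {
		// no peer became available in time
	}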

@@ -8,6 +8,7 @@ import (
"testing"
"time"
"github.com/fortytw2/leaktest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -17,6 +18,7 @@ import (
)
func TestDispatcherBasic(t *testing.T) {
t.Cleanup(leaktest.Check(t))
ch := make(chan p2p.Envelope, 100)
closeCh := make(chan struct{})
@@ -49,7 +51,83 @@ func TestDispatcherBasic(t *testing.T) {
wg.Wait()
}
func TestDispatcherReturnsNoBlock(t *testing.T) {
t.Cleanup(leaktest.Check(t))
ch := make(chan p2p.Envelope, 100)
d := newDispatcher(ch, 1*time.Second)
peerFromSet := createPeerSet(1)[0]
d.addPeer(peerFromSet)
doneCh := make(chan struct{})
go func() {
<-ch
require.NoError(t, d.respond(nil, peerFromSet))
close(doneCh)
}()
lb, peerResult, err := d.LightBlock(context.Background(), 1)
<-doneCh
require.Nil(t, lb)
require.Nil(t, err)
require.Equal(t, peerFromSet, peerResult)
}
func TestDispatcherErrorsWhenNoPeers(t *testing.T) {
t.Cleanup(leaktest.Check(t))
ch := make(chan p2p.Envelope, 100)
d := newDispatcher(ch, 1*time.Second)
lb, peerResult, err := d.LightBlock(context.Background(), 1)
require.Nil(t, lb)
require.Empty(t, peerResult)
require.Equal(t, errNoConnectedPeers, err)
}
func TestDispatcherReturnsBlockOncePeerAvailable(t *testing.T) {
t.Cleanup(leaktest.Check(t))
dispatcherRequestCh := make(chan p2p.Envelope, 100)
d := newDispatcher(dispatcherRequestCh, 1*time.Second)
peerFromSet := createPeerSet(1)[0]
d.addPeer(peerFromSet)
ctx := context.Background()
wrapped, cancelFunc := context.WithCancel(ctx)
doneCh := make(chan struct{})
go func() {
lb, peerResult, err := d.LightBlock(wrapped, 1)
require.Nil(t, lb)
require.Equal(t, peerFromSet, peerResult)
require.Nil(t, err)
// calls to dispatcher.LightBlock write into the dispatcher's requestCh.
// we read from the requestCh here to unblock the requestCh for future
// calls.
<-dispatcherRequestCh
close(doneCh)
}()
cancelFunc()
<-doneCh
go func() {
<-dispatcherRequestCh
lb := &types.LightBlock{}
asProto, err := lb.ToProto()
require.Nil(t, err)
err = d.respond(asProto, peerFromSet)
require.Nil(t, err)
}()
lb, peerResult, err := d.LightBlock(context.Background(), 1)
require.NotNil(t, lb)
require.Equal(t, peerFromSet, peerResult)
require.Nil(t, err)
}
func TestDispatcherProviders(t *testing.T) {
t.Cleanup(leaktest.Check(t))
ch := make(chan p2p.Envelope, 100)
chainID := "state-sync-test"
@@ -78,6 +156,7 @@ func TestDispatcherProviders(t *testing.T) {
}
func TestPeerListBasic(t *testing.T) {
t.Cleanup(leaktest.Check(t))
peerList := newPeerList()
assert.Zero(t, peerList.Len())
numPeers := 10
@@ -95,7 +174,7 @@ func TestPeerListBasic(t *testing.T) {
half := numPeers / 2
for i := 0; i < half; i++ {
assert.Equal(t, peerSet[i], peerList.Pop())
assert.Equal(t, peerSet[i], peerList.Pop(ctx))
}
assert.Equal(t, half, peerList.Len())
@@ -104,11 +183,56 @@ func TestPeerListBasic(t *testing.T) {
peerList.Remove(peerSet[half])
half++
assert.Equal(t, peerSet[half], peerList.Pop())
assert.Equal(t, peerSet[half], peerList.Pop(ctx))
}
func TestPeerListBlocksWhenEmpty(t *testing.T) {
t.Cleanup(leaktest.Check(t))
peerList := newPeerList()
require.Zero(t, peerList.Len())
doneCh := make(chan struct{})
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
go func() {
peerList.Pop(ctx)
close(doneCh)
}()
select {
case <-doneCh:
t.Error("empty peer list should not have returned result")
case <-time.After(100 * time.Millisecond):
}
}
func TestEmptyPeerListReturnsWhenContextCanceled(t *testing.T) {
t.Cleanup(leaktest.Check(t))
peerList := newPeerList()
require.Zero(t, peerList.Len())
doneCh := make(chan struct{})
ctx := context.Background()
wrapped, cancel := context.WithCancel(ctx)
go func() {
peerList.Pop(wrapped)
close(doneCh)
}()
select {
case <-doneCh:
t.Error("empty peer list should not have returned result")
case <-time.After(100 * time.Millisecond):
}
cancel()
select {
case <-doneCh:
case <-time.After(100 * time.Millisecond):
t.Error("peer list should have returned after context canceled")
}
}
func TestPeerListConcurrent(t *testing.T) {
t.Cleanup(leaktest.Check(t))
peerList := newPeerList()
numPeers := 10
@@ -117,7 +241,7 @@ func TestPeerListConcurrent(t *testing.T) {
// peer list hasn't been populated so each of these goroutines should block
for i := 0; i < numPeers/2; i++ {
go func() {
_ = peerList.Pop()
_ = peerList.Pop(ctx)
wg.Done()
}()
}
@@ -132,7 +256,7 @@ func TestPeerListConcurrent(t *testing.T) {
// we request the second half of the peer set
for i := 0; i < numPeers/2; i++ {
go func() {
_ = peerList.Pop()
_ = peerList.Pop(ctx)
wg.Done()
}()
}

View File

@@ -6,6 +6,7 @@ import (
"errors"
"fmt"
"reflect"
"runtime/debug"
"sort"
"time"
@@ -94,7 +95,7 @@ const (
// lightBlockResponseTimeout is how long the dispatcher waits for a peer to
// return a light block
lightBlockResponseTimeout = 10 * time.Second
lightBlockResponseTimeout = 30 * time.Second
// maxLightBlockRequestRetries is the amount of retries acceptable before
// the backfill process aborts
@@ -280,7 +281,9 @@ func (r *Reactor) Backfill(state sm.State) error {
return r.backfill(
context.Background(),
state.ChainID,
state.LastBlockHeight, stopHeight,
state.LastBlockHeight,
stopHeight,
state.InitialHeight,
state.LastBlockID,
stopTime,
)
@@ -289,7 +292,7 @@ func (r *Reactor) Backfill(state sm.State) error {
func (r *Reactor) backfill(
ctx context.Context,
chainID string,
startHeight, stopHeight int64,
startHeight, stopHeight, initialHeight int64,
trustedBlockID types.BlockID,
stopTime time.Time,
) error {
@@ -302,7 +305,7 @@ func (r *Reactor) backfill(
lastChangeHeight int64 = startHeight
)
queue := newBlockQueue(startHeight, stopHeight, stopTime, maxLightBlockRequestRetries)
queue := newBlockQueue(startHeight, stopHeight, initialHeight, stopTime, maxLightBlockRequestRetries)
// fetch light blocks across four workers. The aim with deploying concurrent
// workers is to equate the network messaging time with the verification
@@ -310,12 +313,17 @@ func (r *Reactor) backfill(
// waiting on blocks. If it takes 4s to retrieve a block and 1s to verify
// it, then steady state involves four workers.
for i := 0; i < int(r.cfg.Fetchers); i++ {
ctxWithCancel, cancel := context.WithCancel(ctx)
defer cancel()
go func() {
for {
select {
case height := <-queue.nextHeight():
r.Logger.Debug("fetching next block", "height", height)
lb, peer, err := r.dispatcher.LightBlock(ctx, height)
lb, peer, err := r.dispatcher.LightBlock(ctxWithCancel, height)
if errors.Is(err, context.Canceled) {
return
}
if err != nil {
queue.retry(height)
if errors.Is(err, errNoConnectedPeers) {
@@ -332,8 +340,8 @@ func (r *Reactor) backfill(
if lb == nil {
r.Logger.Info("backfill: peer didn't have block, fetching from another peer", "height", height)
queue.retry(height)
// as we are fetching blocks backwards, if this node doesn't have the block it likely doesn't
// have any prior ones, thus we remove it from the peer list
// As we are fetching blocks backwards, if this node doesn't have the block it likely doesn't
// have any prior ones, thus we remove it from the peer list.
r.dispatcher.removePeer(peer)
continue
}
@@ -452,10 +460,11 @@ func (r *Reactor) handleSnapshotMessage(envelope p2p.Envelope) error {
}
for _, snapshot := range snapshots {
logger.Debug(
logger.Info(
"advertising snapshot",
"height", snapshot.Height,
"format", snapshot.Format,
"peer", envelope.From,
)
r.snapshotCh.Out <- p2p.Envelope{
To: envelope.From,
@@ -622,7 +631,6 @@ func (r *Reactor) handleLightBlockMessage(envelope p2p.Envelope) error {
case *ssproto.LightBlockResponse:
if err := r.dispatcher.respond(msg.LightBlock, envelope.From); err != nil {
r.Logger.Error("error processing light block response", "err", err)
return err
}
default:
@@ -639,6 +647,11 @@ func (r *Reactor) handleMessage(chID p2p.ChannelID, envelope p2p.Envelope) (err
defer func() {
if e := recover(); e != nil {
err = fmt.Errorf("panic in processing message: %v", e)
r.Logger.Error(
"recovering from processing message panic",
"err", err,
"stack", string(debug.Stack()),
)
}
}()

View File

@@ -3,12 +3,11 @@ package statesync
import (
"context"
"fmt"
"math/rand"
"sync"
"testing"
"time"
// "github.com/fortytw2/leaktest"
"github.com/fortytw2/leaktest"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
dbm "github.com/tendermint/tm-db"
@@ -421,11 +420,11 @@ func TestReactor_Dispatcher(t *testing.T) {
func TestReactor_Backfill(t *testing.T) {
// test backfill algorithm with varying failure rates [0, 10]
failureRates := []int{0, 3, 9}
failureRates := []int{0, 2, 9}
for _, failureRate := range failureRates {
failureRate := failureRate
t.Run(fmt.Sprintf("failure rate: %d", failureRate), func(t *testing.T) {
// t.Cleanup(leaktest.Check(t))
t.Cleanup(leaktest.CheckTimeout(t, 1*time.Minute))
rts := setup(t, nil, nil, nil, 21)
var (
@@ -464,10 +463,11 @@ func TestReactor_Backfill(t *testing.T) {
factory.DefaultTestChainID,
startHeight,
stopHeight,
1,
factory.MakeBlockIDWithHash(chain[startHeight].Header.Hash()),
stopTime,
)
if failureRate > 5 {
if failureRate > 3 {
require.Error(t, err)
} else {
require.NoError(t, err)
@@ -506,6 +506,7 @@ func handleLightBlockRequests(t *testing.T,
close chan struct{},
failureRate int) {
requests := 0
errorCount := 0
for {
select {
case envelope := <-receiving:
@@ -520,7 +521,7 @@ func handleLightBlockRequests(t *testing.T,
},
}
} else {
switch rand.Intn(3) {
switch errorCount % 3 {
case 0: // send a different block
differntLB, err := mockLB(t, int64(msg.Height), factory.DefaultTestTime, factory.MakeBlockID()).ToProto()
require.NoError(t, err)
@@ -539,6 +540,7 @@ func handleLightBlockRequests(t *testing.T,
}
case 2: // don't do anything
}
errorCount++
}
}
case <-close:

View File

@@ -80,7 +80,7 @@ func newSnapshotPool(stateProvider StateProvider) *snapshotPool {
// snapshot height is verified using the light client, and the expected app hash
// is set for the snapshot.
func (p *snapshotPool) Add(peerID types.NodeID, snapshot *snapshot) (bool, error) {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
ctx, cancel := context.WithTimeout(context.TODO(), 30*time.Second)
defer cancel()
appHash, err := p.stateProvider.AppHash(ctx, snapshot.Height)


@@ -94,7 +94,7 @@ func NewLightClientStateProviderFromDispatcher(
trustOptions light.TrustOptions,
logger log.Logger,
) (StateProvider, error) {
providers := dispatcher.Providers(chainID, 10*time.Second)
providers := dispatcher.Providers(chainID, 30*time.Second)
if len(providers) < 2 {
return nil, fmt.Errorf("at least 2 peers are required, got %d", len(providers))
}


@@ -38,12 +38,16 @@ var (
errRejectFormat = errors.New("snapshot format was rejected")
// errRejectSender is returned by Sync() when the snapshot sender is rejected.
errRejectSender = errors.New("snapshot sender was rejected")
// errVerifyFailed is returned by Sync() when app hash or last height verification fails.
// errVerifyFailed is returned by Sync() when app hash or last height
// verification fails.
errVerifyFailed = errors.New("verification failed")
// errTimeout is returned by Sync() when we've waited too long to receive a chunk.
errTimeout = errors.New("timed out waiting for chunk")
// errNoSnapshots is returned by SyncAny() if no snapshots are found and discovery is disabled.
errNoSnapshots = errors.New("no suitable snapshots found")
// errStateCommitTimeout is returned by Sync() when the timeout for retrieving
// Tendermint state or the commit is exceeded.
errStateCommitTimeout = errors.New("timed out trying to retrieve state and commit")
)
// syncer runs a state sync against an ABCI app. Use either SyncAny() to automatically attempt to
@@ -226,6 +230,10 @@ func (s *syncer) SyncAny(
s.logger.Info("Snapshot sender rejected", "peer", peer)
}
case errors.Is(err, errStateCommitTimeout):
s.logger.Info("Timed out retrieving state and commit, rejecting and retrying...", "height", snapshot.Height)
s.snapshots.Reject(snapshot)
default:
return sm.State{}, nil, fmt.Errorf("snapshot restoration failed: %w", err)
}
@@ -269,16 +277,26 @@ func (s *syncer) Sync(ctx context.Context, snapshot *snapshot, chunks *chunkQueu
go s.fetchChunks(fetchCtx, snapshot, chunks)
}
pctx, pcancel := context.WithTimeout(ctx, 10*time.Second)
pctx, pcancel := context.WithTimeout(ctx, 30*time.Second)
defer pcancel()
// Optimistically build new state, so we don't discover any light client failures at the end.
state, err := s.stateProvider.State(pctx, snapshot.Height)
if err != nil {
// check if the provider context exceeded its deadline while the parent context is still live
if err == context.DeadlineExceeded && ctx.Err() == nil {
return sm.State{}, nil, errStateCommitTimeout
}
return sm.State{}, nil, fmt.Errorf("failed to build new state: %w", err)
}
commit, err := s.stateProvider.Commit(pctx, snapshot.Height)
if err != nil {
// check if the provider context exceeded its deadline while the parent context is still live
if err == context.DeadlineExceeded && ctx.Err() == nil {
return sm.State{}, nil, errStateCommitTimeout
}
return sm.State{}, nil, fmt.Errorf("failed to fetch commit: %w", err)
}

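The syncer change above wraps the state and commit fetches in their own deadline and then asks whether the deadline that fired belonged to that wrapper or to the caller: only when the parent is still live does it map the error to errStateCommitTimeout, so the snapshot can be rejected and retried. A self-contained sketch of the distinction, with errSlowProvider standing in for errStateCommitTimeout:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errSlowProvider = errors.New("timed out waiting on the provider")

// fetchWithDeadline gives fetch its own deadline. If only that deadline fires
// (the parent ctx is still alive), the failure is reported as a retryable
// provider timeout rather than a fatal error.
func fetchWithDeadline(ctx context.Context, timeout time.Duration, fetch func(context.Context) error) error {
	pctx, pcancel := context.WithTimeout(ctx, timeout)
	defer pcancel()

	err := fetch(pctx)
	if errors.Is(err, context.DeadlineExceeded) && ctx.Err() == nil {
		return errSlowProvider
	}
	return err
}

func main() {
	err := fetchWithDeadline(context.Background(), 50*time.Millisecond, func(ctx context.Context) error {
		<-ctx.Done() // simulate a provider that never answers
		return ctx.Err()
	})
	fmt.Println(err) // timed out waiting on the provider
}
```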
libs/sync/atomic_bool.go (new file)

@@ -0,0 +1,33 @@
package sync
import "sync/atomic"
// AtomicBool is an atomic Boolean.
// Its methods are all atomic, thus safe to be called by multiple goroutines simultaneously.
// Note: when embedding into a struct, one should always use *AtomicBool to avoid copying.
// It's a simple implementation based on https://github.com/tevino/abool.
type AtomicBool int32
// NewBool creates an AtomicBool with given default value.
func NewBool(ok bool) *AtomicBool {
ab := new(AtomicBool)
if ok {
ab.Set()
}
return ab
}
// Set sets the Boolean to true.
func (ab *AtomicBool) Set() {
atomic.StoreInt32((*int32)(ab), 1)
}
// UnSet sets the Boolean to false.
func (ab *AtomicBool) UnSet() {
atomic.StoreInt32((*int32)(ab), 0)
}
// IsSet returns whether the Boolean is true.
func (ab *AtomicBool) IsSet() bool {
return atomic.LoadInt32((*int32)(ab))&1 == 1
}


@@ -0,0 +1,27 @@
package sync
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestDefaultValue(t *testing.T) {
t.Parallel()
v := NewBool(false)
assert.False(t, v.IsSet())
v = NewBool(true)
assert.True(t, v.IsSet())
}
func TestSetUnSet(t *testing.T) {
t.Parallel()
v := NewBool(false)
v.Set()
assert.True(t, v.IsSet())
v.UnSet()
assert.False(t, v.IsSet())
}

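The comment in atomic_bool.go warns against embedding the value directly. A hedged usage sketch of why; the tmsync alias and the worker type are purely illustrative:

```go
package main

import (
	"fmt"

	tmsync "github.com/tendermint/tendermint/libs/sync"
)

type worker struct {
	stopped *tmsync.AtomicBool // a pointer, per the note in atomic_bool.go
}

func main() {
	w := worker{stopped: tmsync.NewBool(false)}

	copyOfW := w    // both values share the same flag because it is a pointer
	w.stopped.Set() // signal once...

	fmt.Println(copyOfW.stopped.IsSet()) // ...and every copy observes it: true
}
```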

@@ -909,6 +909,10 @@ func (c *Client) lightBlockFromPrimary(ctx context.Context, height int64) (*type
// Everything went smoothly. We reset the lightBlockRequests and return the light block
return l, nil
// catch canceled contexts or deadlines
case context.Canceled, context.DeadlineExceeded:
return nil, err
case provider.ErrNoResponse, provider.ErrLightBlockNotFound, provider.ErrHeightTooHigh:
// we find a new witness to replace the primary
c.logger.Debug("error from light block request from primary, replacing...",
@@ -1011,6 +1015,10 @@ func (c *Client) findNewPrimary(ctx context.Context, height int64, remove bool)
// return the light block that new primary responded with
return response.lb, nil
// catch canceled contexts or deadlines
case context.Canceled, context.DeadlineExceeded:
return nil, response.err
// process benign errors by logging them only
case provider.ErrNoResponse, provider.ErrLightBlockNotFound, provider.ErrHeightTooHigh:
lastError = response.err
@@ -1067,7 +1075,13 @@ and remove witness. Otherwise, use a different primary`, e.WitnessIndex), "witne
"witness", c.witnesses[e.WitnessIndex],
"err", err)
witnessesToRemove = append(witnessesToRemove, e.WitnessIndex)
default: // the witness either didn't respond or didn't have the block. We ignore it.
default:
// check for canceled contexts or deadlines
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return err
}
// the witness either didn't respond or didn't have the block. We ignore it.
c.logger.Debug("unable to compare first header with witness",
"err", err)
}

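The client.go hunks above carve context errors out of the provider error handling: a canceled or expired caller context aborts the request, while benign provider errors still trigger a primary or witness swap. A self-contained sketch of that triage, with the benign error values as illustrative placeholders:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

var (
	errNoResponse         = errors.New("no response")
	errLightBlockNotFound = errors.New("light block not found")
)

type action int

const (
	abort           action = iota // the caller gave up: stop probing providers
	replaceProvider               // benign: try a different primary or witness
	fail                          // anything else is a hard failure
)

func triage(err error) action {
	switch {
	case errors.Is(err, context.Canceled), errors.Is(err, context.DeadlineExceeded):
		return abort
	case errors.Is(err, errNoResponse), errors.Is(err, errLightBlockNotFound):
		return replaceProvider
	default:
		return fail
	}
}

func main() {
	fmt.Println(triage(context.Canceled) == abort)        // true
	fmt.Println(triage(errNoResponse) == replaceProvider) // true
	fmt.Println(triage(errors.New("boom")) == fail)       // true
}
```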

@@ -10,8 +10,8 @@ import (
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/light"
"github.com/tendermint/tendermint/light/provider"
mockp "github.com/tendermint/tendermint/light/provider/mock"
dbs "github.com/tendermint/tendermint/light/store/db"
"github.com/tendermint/tendermint/types"
)
// NOTE: block is produced every minute. Make sure the verification time
@@ -22,7 +22,7 @@ import (
//
// Remember that none of these benchmarks account for network latency.
var (
benchmarkFullNode = mockp.New(genMockNode(chainID, 1000, 100, 1, bTime))
benchmarkFullNode = newProviderBenchmarkImpl(genMockNode(chainID, 1000, 100, 1, bTime))
genesisBlock, _ = benchmarkFullNode.LightBlock(context.Background(), 1)
)
@@ -108,3 +108,42 @@ func BenchmarkBackwards(b *testing.B) {
}
}
}
type providerBenchmarkImpl struct {
chainID string
currentHeight int64
blocks map[int64]*types.LightBlock
}
func newProviderBenchmarkImpl(chainID string, headers map[int64]*types.SignedHeader,
vals map[int64]*types.ValidatorSet) provider.Provider {
impl := providerBenchmarkImpl{
chainID: chainID,
blocks: make(map[int64]*types.LightBlock, len(headers)),
}
for height, header := range headers {
if height > impl.currentHeight {
impl.currentHeight = height
}
impl.blocks[height] = &types.LightBlock{
SignedHeader: header,
ValidatorSet: vals[height],
}
}
return &impl
}
func (impl *providerBenchmarkImpl) LightBlock(ctx context.Context, height int64) (*types.LightBlock, error) {
if height == 0 {
return impl.blocks[impl.currentHeight], nil
}
lb, ok := impl.blocks[height]
if !ok {
return nil, provider.ErrLightBlockNotFound
}
return lb, nil
}
func (impl *providerBenchmarkImpl) ReportEvidence(_ context.Context, _ types.Evidence) error {
panic("not implemented")
}

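A hedged usage sketch for the benchmark provider above; it assumes the genMockNode, chainID and bTime helpers defined elsewhere in these test files, and the heights are illustrative. Height 0 is the "give me your latest block" convention the light client relies on.

```go
func sketchBenchmarkProvider() {
	_, headers, vals := genMockNode(chainID, 1000, 100, 1, bTime)
	node := newProviderBenchmarkImpl(chainID, headers, vals)

	latest, _ := node.LightBlock(context.Background(), 0) // height 0 -> block at height 1000
	_, err := node.LightBlock(context.Background(), 2000) // unknown height -> provider.ErrLightBlockNotFound
	_ = latest
	_ = err
}
```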

@@ -2,11 +2,14 @@ package light_test
import (
"context"
"errors"
"fmt"
"sync"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
dbm "github.com/tendermint/tm-db"
@@ -15,7 +18,7 @@ import (
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/light"
"github.com/tendermint/tendermint/light/provider"
mockp "github.com/tendermint/tendermint/light/provider/mock"
provider_mocks "github.com/tendermint/tendermint/light/provider/mocks"
dbs "github.com/tendermint/tendermint/light/store/db"
"github.com/tendermint/tendermint/types"
)
@@ -56,14 +59,9 @@ var (
// last header (3/3 signed)
3: h3,
}
l1 = &types.LightBlock{SignedHeader: h1, ValidatorSet: vals}
fullNode = mockp.New(
chainID,
headerSet,
valSet,
)
deadNode = mockp.NewDeadMock(chainID)
largeFullNode = mockp.New(genMockNode(chainID, 10, 3, 0, bTime))
l1 = &types.LightBlock{SignedHeader: h1, ValidatorSet: vals}
l2 = &types.LightBlock{SignedHeader: h2, ValidatorSet: vals}
l3 = &types.LightBlock{SignedHeader: h3, ValidatorSet: vals}
)
func TestValidateTrustOptions(t *testing.T) {
@@ -112,11 +110,6 @@ func TestValidateTrustOptions(t *testing.T) {
}
func TestMock(t *testing.T) {
l, _ := fullNode.LightBlock(ctx, 3)
assert.Equal(t, int64(3), l.Height)
}
func TestClient_SequentialVerification(t *testing.T) {
newKeys := genPrivKeys(4)
newVals := newKeys.ToValidators(10, 1)
@@ -215,22 +208,15 @@ func TestClient_SequentialVerification(t *testing.T) {
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
mockNode := mockNodeFromHeadersAndVals(tc.otherHeaders, tc.vals)
mockNode.On("LightBlock", mock.Anything, mock.Anything).Return(nil, provider.ErrLightBlockNotFound)
c, err := light.NewClient(
ctx,
chainID,
trustOptions,
mockp.New(
chainID,
tc.otherHeaders,
tc.vals,
),
[]provider.Provider{mockp.New(
chainID,
tc.otherHeaders,
tc.vals,
)},
mockNode,
[]provider.Provider{mockNode},
dbs.New(dbm.NewMemDB()),
light.SequentialVerification(),
light.Logger(log.TestingLogger()),
@@ -249,6 +235,7 @@ func TestClient_SequentialVerification(t *testing.T) {
} else {
assert.NoError(t, err)
}
mockNode.AssertExpectations(t)
})
}
}
@@ -342,20 +329,14 @@ func TestClient_SkippingVerification(t *testing.T) {
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
mockNode := mockNodeFromHeadersAndVals(tc.otherHeaders, tc.vals)
mockNode.On("LightBlock", mock.Anything, mock.Anything).Return(nil, provider.ErrLightBlockNotFound)
c, err := light.NewClient(
ctx,
chainID,
trustOptions,
mockp.New(
chainID,
tc.otherHeaders,
tc.vals,
),
[]provider.Provider{mockp.New(
chainID,
tc.otherHeaders,
tc.vals,
)},
mockNode,
[]provider.Provider{mockNode},
dbs.New(dbm.NewMemDB()),
light.SkippingVerification(light.DefaultTrustLevel),
light.Logger(log.TestingLogger()),
@@ -381,8 +362,17 @@ func TestClient_SkippingVerification(t *testing.T) {
// start from a large light block to make sure that the pivot height doesn't select a height outside
// the appropriate range
func TestClientLargeBisectionVerification(t *testing.T) {
veryLargeFullNode := mockp.New(genMockNode(chainID, 100, 3, 0, bTime))
trustedLightBlock, err := veryLargeFullNode.LightBlock(ctx, 5)
numBlocks := 100
_, mockHeaders, mockVals := genMockNode(chainID, int64(numBlocks), 3, 0, bTime)
mockNode := &provider_mocks.Provider{}
mockNode.On("LightBlock", mock.Anything, int64(100)).Return(&types.LightBlock{SignedHeader: mockHeaders[100], ValidatorSet: mockVals[100]}, nil)
mockNode.On("LightBlock", mock.Anything, int64(5)).Return(&types.LightBlock{SignedHeader: mockHeaders[5], ValidatorSet: mockVals[5]}, nil)
lastBlock, _ := mockNode.LightBlock(ctx, int64(numBlocks))
mockNode.On("LightBlock", mock.Anything, int64(0)).Return(lastBlock, nil)
trustedLightBlock, err := mockNode.LightBlock(ctx, 5)
require.NoError(t, err)
c, err := light.NewClient(
ctx,
@@ -392,20 +382,22 @@ func TestClientLargeBisectionVerification(t *testing.T) {
Height: trustedLightBlock.Height,
Hash: trustedLightBlock.Hash(),
},
veryLargeFullNode,
[]provider.Provider{veryLargeFullNode},
mockNode,
[]provider.Provider{mockNode},
dbs.New(dbm.NewMemDB()),
light.SkippingVerification(light.DefaultTrustLevel),
)
require.NoError(t, err)
h, err := c.Update(ctx, bTime.Add(100*time.Minute))
assert.NoError(t, err)
h2, err := veryLargeFullNode.LightBlock(ctx, 100)
h2, err := mockNode.LightBlock(ctx, 100)
require.NoError(t, err)
assert.Equal(t, h, h2)
mockNode.AssertExpectations(t)
}
func TestClientBisectionBetweenTrustedHeaders(t *testing.T) {
mockFullNode := mockNodeFromHeadersAndVals(headerSet, valSet)
c, err := light.NewClient(
ctx,
chainID,
@@ -414,8 +406,8 @@ func TestClientBisectionBetweenTrustedHeaders(t *testing.T) {
Height: 1,
Hash: h1.Hash(),
},
fullNode,
[]provider.Provider{fullNode},
mockFullNode,
[]provider.Provider{mockFullNode},
dbs.New(dbm.NewMemDB()),
light.SkippingVerification(light.DefaultTrustLevel),
)
@@ -431,15 +423,18 @@ func TestClientBisectionBetweenTrustedHeaders(t *testing.T) {
// verify using bisection the light block between the two trusted light blocks
_, err = c.VerifyLightBlockAtHeight(ctx, 2, bTime.Add(1*time.Hour))
assert.NoError(t, err)
mockFullNode.AssertExpectations(t)
}
func TestClient_Cleanup(t *testing.T) {
mockFullNode := &provider_mocks.Provider{}
mockFullNode.On("LightBlock", mock.Anything, int64(1)).Return(l1, nil)
c, err := light.NewClient(
ctx,
chainID,
trustOptions,
fullNode,
[]provider.Provider{fullNode},
mockFullNode,
[]provider.Provider{mockFullNode},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
@@ -454,12 +449,14 @@ func TestClient_Cleanup(t *testing.T) {
l, err := c.TrustedLightBlock(1)
assert.Error(t, err)
assert.Nil(t, l)
mockFullNode.AssertExpectations(t)
}
// trustedHeader.Height == options.Height
func TestClientRestoresTrustedHeaderAfterStartup(t *testing.T) {
// 1. options.Hash == trustedHeader.Hash
{
t.Run("hashes should match", func(t *testing.T) {
mockNode := &provider_mocks.Provider{}
trustedStore := dbs.New(dbm.NewMemDB())
err := trustedStore.SaveLightBlock(l1)
require.NoError(t, err)
@@ -468,8 +465,8 @@ func TestClientRestoresTrustedHeaderAfterStartup(t *testing.T) {
ctx,
chainID,
trustOptions,
fullNode,
[]provider.Provider{fullNode},
mockNode,
[]provider.Provider{mockNode},
trustedStore,
light.Logger(log.TestingLogger()),
)
@@ -480,10 +477,11 @@ func TestClientRestoresTrustedHeaderAfterStartup(t *testing.T) {
assert.NotNil(t, l)
assert.Equal(t, l.Hash(), h1.Hash())
assert.Equal(t, l.ValidatorSet.Hash(), h1.ValidatorsHash.Bytes())
}
mockNode.AssertExpectations(t)
})
// 2. options.Hash != trustedHeader.Hash
{
t.Run("hashes should not match", func(t *testing.T) {
trustedStore := dbs.New(dbm.NewMemDB())
err := trustedStore.SaveLightBlock(l1)
require.NoError(t, err)
@@ -491,15 +489,7 @@ func TestClientRestoresTrustedHeaderAfterStartup(t *testing.T) {
// header1 != h1
header1 := keys.GenSignedHeader(chainID, 1, bTime.Add(1*time.Hour), nil, vals, vals,
hash("app_hash"), hash("cons_hash"), hash("results_hash"), 0, len(keys))
primary := mockp.New(
chainID,
map[int64]*types.SignedHeader{
// trusted header
1: header1,
},
valSet,
)
mockNode := &provider_mocks.Provider{}
c, err := light.NewClient(
ctx,
@@ -509,8 +499,8 @@ func TestClientRestoresTrustedHeaderAfterStartup(t *testing.T) {
Height: 1,
Hash: header1.Hash(),
},
primary,
[]provider.Provider{primary},
mockNode,
[]provider.Provider{mockNode},
trustedStore,
light.Logger(log.TestingLogger()),
)
@@ -523,16 +513,21 @@ func TestClientRestoresTrustedHeaderAfterStartup(t *testing.T) {
assert.Equal(t, l.Hash(), l1.Hash())
assert.NoError(t, l.ValidateBasic(chainID))
}
}
mockNode.AssertExpectations(t)
})
}
func TestClient_Update(t *testing.T) {
mockFullNode := &provider_mocks.Provider{}
mockFullNode.On("LightBlock", mock.Anything, int64(0)).Return(l3, nil)
mockFullNode.On("LightBlock", mock.Anything, int64(1)).Return(l1, nil)
mockFullNode.On("LightBlock", mock.Anything, int64(3)).Return(l3, nil)
c, err := light.NewClient(
ctx,
chainID,
trustOptions,
fullNode,
[]provider.Provider{fullNode},
mockFullNode,
[]provider.Provider{mockFullNode},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
@@ -545,15 +540,19 @@ func TestClient_Update(t *testing.T) {
assert.EqualValues(t, 3, l.Height)
assert.NoError(t, l.ValidateBasic(chainID))
}
mockFullNode.AssertExpectations(t)
}
func TestClient_Concurrency(t *testing.T) {
mockFullNode := &provider_mocks.Provider{}
mockFullNode.On("LightBlock", mock.Anything, int64(2)).Return(l2, nil)
mockFullNode.On("LightBlock", mock.Anything, int64(1)).Return(l1, nil)
c, err := light.NewClient(
ctx,
chainID,
trustOptions,
fullNode,
[]provider.Provider{fullNode},
mockFullNode,
[]provider.Provider{mockFullNode},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
@@ -586,15 +585,21 @@ func TestClient_Concurrency(t *testing.T) {
}
wg.Wait()
mockFullNode.AssertExpectations(t)
}
func TestClientReplacesPrimaryWithWitnessIfPrimaryIsUnavailable(t *testing.T) {
mockFullNode := &provider_mocks.Provider{}
mockFullNode.On("LightBlock", mock.Anything, mock.Anything).Return(l1, nil)
mockDeadNode := &provider_mocks.Provider{}
mockDeadNode.On("LightBlock", mock.Anything, mock.Anything).Return(nil, provider.ErrNoResponse)
c, err := light.NewClient(
ctx,
chainID,
trustOptions,
deadNode,
[]provider.Provider{fullNode, fullNode},
mockDeadNode,
[]provider.Provider{mockFullNode, mockFullNode},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
@@ -604,16 +609,25 @@ func TestClientReplacesPrimaryWithWitnessIfPrimaryIsUnavailable(t *testing.T) {
require.NoError(t, err)
// the primary should no longer be the deadNode
assert.NotEqual(t, c.Primary(), deadNode)
assert.NotEqual(t, c.Primary(), mockDeadNode)
// we should still have the dead node as a witness because it
// hasn't repeatedly been unresponsive yet
assert.Equal(t, 2, len(c.Witnesses()))
mockDeadNode.AssertExpectations(t)
mockFullNode.AssertExpectations(t)
}
func TestClient_BackwardsVerification(t *testing.T) {
{
trustHeader, _ := largeFullNode.LightBlock(ctx, 6)
_, headers, vals := genMockNode(chainID, 9, 3, 0, bTime)
delete(headers, 1)
delete(headers, 2)
delete(vals, 1)
delete(vals, 2)
mockLargeFullNode := mockNodeFromHeadersAndVals(headers, vals)
trustHeader, _ := mockLargeFullNode.LightBlock(ctx, 6)
c, err := light.NewClient(
ctx,
chainID,
@@ -622,8 +636,8 @@ func TestClient_BackwardsVerification(t *testing.T) {
Height: trustHeader.Height,
Hash: trustHeader.Hash(),
},
largeFullNode,
[]provider.Provider{largeFullNode},
mockLargeFullNode,
[]provider.Provider{mockLargeFullNode},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
@@ -661,41 +675,36 @@ func TestClient_BackwardsVerification(t *testing.T) {
// so expect error
_, err = c.VerifyLightBlockAtHeight(ctx, 8, bTime.Add(12*time.Minute))
assert.Error(t, err)
mockLargeFullNode.AssertExpectations(t)
}
{
testCases := []struct {
provider provider.Provider
headers map[int64]*types.SignedHeader
vals map[int64]*types.ValidatorSet
}{
{
// 7) provides incorrect height
mockp.New(
chainID,
map[int64]*types.SignedHeader{
1: h1,
2: keys.GenSignedHeader(chainID, 1, bTime.Add(30*time.Minute), nil, vals, vals,
hash("app_hash"), hash("cons_hash"), hash("results_hash"), 0, len(keys)),
3: h3,
},
valSet,
),
headers: map[int64]*types.SignedHeader{
2: keys.GenSignedHeader(chainID, 1, bTime.Add(30*time.Minute), nil, vals, vals,
hash("app_hash"), hash("cons_hash"), hash("results_hash"), 0, len(keys)),
3: h3,
},
vals: valSet,
},
{
// 8) provides incorrect hash
mockp.New(
chainID,
map[int64]*types.SignedHeader{
1: h1,
2: keys.GenSignedHeader(chainID, 2, bTime.Add(30*time.Minute), nil, vals, vals,
hash("app_hash2"), hash("cons_hash23"), hash("results_hash30"), 0, len(keys)),
3: h3,
},
valSet,
),
headers: map[int64]*types.SignedHeader{
2: keys.GenSignedHeader(chainID, 2, bTime.Add(30*time.Minute), nil, vals, vals,
hash("app_hash2"), hash("cons_hash23"), hash("results_hash30"), 0, len(keys)),
3: h3,
},
vals: valSet,
},
}
for idx, tc := range testCases {
mockNode := mockNodeFromHeadersAndVals(tc.headers, tc.vals)
c, err := light.NewClient(
ctx,
chainID,
@@ -704,8 +713,8 @@ func TestClient_BackwardsVerification(t *testing.T) {
Height: 3,
Hash: h3.Hash(),
},
tc.provider,
[]provider.Provider{tc.provider},
mockNode,
[]provider.Provider{mockNode},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
@@ -713,6 +722,7 @@ func TestClient_BackwardsVerification(t *testing.T) {
_, err = c.VerifyLightBlockAtHeight(ctx, 2, bTime.Add(1*time.Hour).Add(1*time.Second))
assert.Error(t, err, idx)
mockNode.AssertExpectations(t)
}
}
}
@@ -722,60 +732,62 @@ func TestClient_NewClientFromTrustedStore(t *testing.T) {
db := dbs.New(dbm.NewMemDB())
err := db.SaveLightBlock(l1)
require.NoError(t, err)
mockNode := &provider_mocks.Provider{}
c, err := light.NewClientFromTrustedStore(
chainID,
trustPeriod,
deadNode,
[]provider.Provider{deadNode},
mockNode,
[]provider.Provider{mockNode},
db,
)
require.NoError(t, err)
// 2) Check light block exists (deadNode is being used to ensure we're not getting
// it from primary)
// 2) Check light block exists
h, err := c.TrustedLightBlock(1)
assert.NoError(t, err)
assert.EqualValues(t, l1.Height, h.Height)
mockNode.AssertExpectations(t)
}
func TestClientRemovesWitnessIfItSendsUsIncorrectHeader(t *testing.T) {
// different headers hash then primary plus less than 1/3 signed (no fork)
badProvider1 := mockp.New(
chainID,
map[int64]*types.SignedHeader{
1: h1,
2: keys.GenSignedHeaderLastBlockID(chainID, 2, bTime.Add(30*time.Minute), nil, vals, vals,
hash("app_hash2"), hash("cons_hash"), hash("results_hash"),
len(keys), len(keys), types.BlockID{Hash: h1.Hash()}),
},
map[int64]*types.ValidatorSet{
1: vals,
2: vals,
},
)
// header is empty
badProvider2 := mockp.New(
chainID,
map[int64]*types.SignedHeader{
1: h1,
2: h2,
},
map[int64]*types.ValidatorSet{
1: vals,
2: vals,
},
)
headers1 := map[int64]*types.SignedHeader{
1: h1,
2: keys.GenSignedHeaderLastBlockID(chainID, 2, bTime.Add(30*time.Minute), nil, vals, vals,
hash("app_hash2"), hash("cons_hash"), hash("results_hash"),
len(keys), len(keys), types.BlockID{Hash: h1.Hash()}),
}
vals1 := map[int64]*types.ValidatorSet{
1: vals,
2: vals,
}
mockBadNode1 := mockNodeFromHeadersAndVals(headers1, vals1)
mockBadNode1.On("LightBlock", mock.Anything, mock.Anything).Return(nil, provider.ErrLightBlockNotFound)
lb1, _ := badProvider1.LightBlock(ctx, 2)
// header is empty
headers2 := map[int64]*types.SignedHeader{
1: h1,
2: h2,
}
vals2 := map[int64]*types.ValidatorSet{
1: vals,
2: vals,
}
mockBadNode2 := mockNodeFromHeadersAndVals(headers2, vals2)
mockBadNode2.On("LightBlock", mock.Anything, mock.Anything).Return(nil, provider.ErrLightBlockNotFound)
mockFullNode := mockNodeFromHeadersAndVals(headerSet, valSet)
lb1, _ := mockBadNode1.LightBlock(ctx, 2)
require.NotEqual(t, lb1.Hash(), l1.Hash())
c, err := light.NewClient(
ctx,
chainID,
trustOptions,
fullNode,
[]provider.Provider{badProvider1, badProvider2},
mockFullNode,
[]provider.Provider{mockBadNode1, mockBadNode2},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
@@ -797,12 +809,13 @@ func TestClientRemovesWitnessIfItSendsUsIncorrectHeader(t *testing.T) {
}
// witness does not have a light block -> left in the list
assert.EqualValues(t, 1, len(c.Witnesses()))
mockBadNode1.AssertExpectations(t)
mockBadNode2.AssertExpectations(t)
}
func TestClient_TrustedValidatorSet(t *testing.T) {
differentVals, _ := factory.RandValidatorSet(10, 100)
badValSetNode := mockp.New(
chainID,
mockBadValSetNode := mockNodeFromHeadersAndVals(
map[int64]*types.SignedHeader{
1: h1,
// 3/3 signed, but validator set at height 2 below is invalid -> witness
@@ -810,21 +823,27 @@ func TestClient_TrustedValidatorSet(t *testing.T) {
2: keys.GenSignedHeaderLastBlockID(chainID, 2, bTime.Add(30*time.Minute), nil, vals, vals,
hash("app_hash2"), hash("cons_hash"), hash("results_hash"),
0, len(keys), types.BlockID{Hash: h1.Hash()}),
3: h3,
},
map[int64]*types.ValidatorSet{
1: vals,
2: differentVals,
3: differentVals,
})
mockFullNode := mockNodeFromHeadersAndVals(
map[int64]*types.SignedHeader{
1: h1,
2: h2,
},
)
map[int64]*types.ValidatorSet{
1: vals,
2: vals,
})
c, err := light.NewClient(
ctx,
chainID,
trustOptions,
fullNode,
[]provider.Provider{badValSetNode, fullNode},
mockFullNode,
[]provider.Provider{mockBadValSetNode, mockFullNode},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
@@ -834,15 +853,29 @@ func TestClient_TrustedValidatorSet(t *testing.T) {
_, err = c.VerifyLightBlockAtHeight(ctx, 2, bTime.Add(2*time.Hour).Add(1*time.Second))
assert.NoError(t, err)
assert.Equal(t, 1, len(c.Witnesses()))
mockBadValSetNode.AssertExpectations(t)
mockFullNode.AssertExpectations(t)
}
func TestClientPrunesHeadersAndValidatorSets(t *testing.T) {
mockFullNode := mockNodeFromHeadersAndVals(
map[int64]*types.SignedHeader{
1: h1,
3: h3,
0: h3,
},
map[int64]*types.ValidatorSet{
1: vals,
3: vals,
0: vals,
})
c, err := light.NewClient(
ctx,
chainID,
trustOptions,
fullNode,
[]provider.Provider{fullNode},
mockFullNode,
[]provider.Provider{mockFullNode},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
light.PruningSize(1),
@@ -857,6 +890,7 @@ func TestClientPrunesHeadersAndValidatorSets(t *testing.T) {
_, err = c.TrustedLightBlock(1)
assert.Error(t, err)
mockFullNode.AssertExpectations(t)
}
func TestClientEnsureValidHeadersAndValSets(t *testing.T) {
@@ -868,64 +902,137 @@ func TestClientEnsureValidHeadersAndValSets(t *testing.T) {
testCases := []struct {
headers map[int64]*types.SignedHeader
vals map[int64]*types.ValidatorSet
err bool
errorToThrow error
errorHeight int64
err bool
}{
{
headerSet,
valSet,
false,
},
{
headerSet,
map[int64]*types.ValidatorSet{
1: vals,
2: vals,
3: nil,
},
true,
},
{
map[int64]*types.SignedHeader{
headers: map[int64]*types.SignedHeader{
1: h1,
2: h2,
3: nil,
3: h3,
},
valSet,
true,
vals: map[int64]*types.ValidatorSet{
1: vals,
3: vals,
},
err: false,
},
{
headerSet,
map[int64]*types.ValidatorSet{
headers: map[int64]*types.SignedHeader{
1: h1,
},
vals: map[int64]*types.ValidatorSet{
1: vals,
},
errorToThrow: provider.ErrBadLightBlock{Reason: errors.New("nil header or vals")},
errorHeight: 3,
err: true,
},
{
headers: map[int64]*types.SignedHeader{
1: h1,
},
errorToThrow: provider.ErrBadLightBlock{Reason: errors.New("nil header or vals")},
errorHeight: 3,
vals: valSet,
err: true,
},
{
headers: map[int64]*types.SignedHeader{
1: h1,
3: h3,
},
vals: map[int64]*types.ValidatorSet{
1: vals,
2: vals,
3: emptyValSet,
},
true,
err: true,
},
}
for _, tc := range testCases {
badNode := mockp.New(
chainID,
tc.headers,
tc.vals,
)
c, err := light.NewClient(
ctx,
chainID,
trustOptions,
badNode,
[]provider.Provider{badNode, badNode},
dbs.New(dbm.NewMemDB()),
)
require.NoError(t, err)
for i, tc := range testCases {
t.Run(fmt.Sprintf("case: %d", i), func(t *testing.T) {
mockBadNode := mockNodeFromHeadersAndVals(tc.headers, tc.vals)
if tc.errorToThrow != nil {
mockBadNode.On("LightBlock", mock.Anything, tc.errorHeight).Return(nil, tc.errorToThrow)
}
_, err = c.VerifyLightBlockAtHeight(ctx, 3, bTime.Add(2*time.Hour))
if tc.err {
assert.Error(t, err)
} else {
assert.NoError(t, err)
}
c, err := light.NewClient(
ctx,
chainID,
trustOptions,
mockBadNode,
[]provider.Provider{mockBadNode, mockBadNode},
dbs.New(dbm.NewMemDB()),
)
require.NoError(t, err)
_, err = c.VerifyLightBlockAtHeight(ctx, 3, bTime.Add(2*time.Hour))
if tc.err {
assert.Error(t, err)
} else {
assert.NoError(t, err)
}
mockBadNode.AssertExpectations(t)
})
}
}
func TestClientHandlesContexts(t *testing.T) {
mockNode := &provider_mocks.Provider{}
mockNode.On("LightBlock", mock.MatchedBy(func(ctx context.Context) bool { return ctx.Err() == nil }), int64(1)).Return(l1, nil)
mockNode.On("LightBlock",
mock.MatchedBy(func(ctx context.Context) bool { return ctx.Err() == context.DeadlineExceeded }),
mock.Anything).Return(nil, context.DeadlineExceeded)
mockNode.On("LightBlock",
mock.MatchedBy(func(ctx context.Context) bool { return ctx.Err() == context.Canceled }),
mock.Anything).Return(nil, context.Canceled)
// instantiate the light client with a timeout
ctxTimeOut, cancel := context.WithTimeout(ctx, 1*time.Nanosecond)
defer cancel()
_, err := light.NewClient(
ctxTimeOut,
chainID,
trustOptions,
mockNode,
[]provider.Provider{mockNode, mockNode},
dbs.New(dbm.NewMemDB()),
)
require.Error(t, ctxTimeOut.Err())
require.Error(t, err)
require.True(t, errors.Is(err, context.DeadlineExceeded))
// instantiate the client for real
c, err := light.NewClient(
ctx,
chainID,
trustOptions,
mockNode,
[]provider.Provider{mockNode, mockNode},
dbs.New(dbm.NewMemDB()),
)
require.NoError(t, err)
// verify a block with a timeout
ctxTimeOutBlock, cancel := context.WithTimeout(ctx, 1*time.Nanosecond)
defer cancel()
_, err = c.VerifyLightBlockAtHeight(ctxTimeOutBlock, 100, bTime.Add(100*time.Minute))
require.Error(t, ctxTimeOutBlock.Err())
require.Error(t, err)
require.True(t, errors.Is(err, context.DeadlineExceeded))
// verify a block with a cancel
ctxCancel, cancel := context.WithCancel(ctx)
cancel()
_, err = c.VerifyLightBlockAtHeight(ctxCancel, 100, bTime.Add(100*time.Minute))
require.Error(t, ctxCancel.Err())
require.Error(t, err)
require.True(t, errors.Is(err, context.Canceled))
mockNode.AssertExpectations(t)
}

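TestClientHandlesContexts above keys the mock's behaviour off the state of the incoming context rather than a concrete argument, which is what mock.MatchedBy is for. A self-contained sketch of the mechanics with an illustrative Fetcher mock (not a Tendermint type):

```go
package main

import (
	"context"
	"errors"
	"testing"
	"time"

	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"
)

type fetcherMock struct{ mock.Mock }

func (f *fetcherMock) Fetch(ctx context.Context, height int64) (string, error) {
	args := f.Called(ctx, height)
	return args.String(0), args.Error(1)
}

func TestMatchedByContextState(t *testing.T) {
	m := &fetcherMock{}
	m.On("Fetch",
		mock.MatchedBy(func(ctx context.Context) bool { return ctx.Err() == nil }),
		mock.Anything).Return("block", nil)
	m.On("Fetch",
		mock.MatchedBy(func(ctx context.Context) bool { return ctx.Err() == context.DeadlineExceeded }),
		mock.Anything).Return("", context.DeadlineExceeded)

	_, err := m.Fetch(context.Background(), 1)
	require.NoError(t, err)

	ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
	defer cancel()
	time.Sleep(time.Millisecond) // let the deadline expire before calling
	_, err = m.Fetch(ctx, 1)
	require.True(t, errors.Is(err, context.DeadlineExceeded))

	m.AssertExpectations(t)
}
```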

@@ -78,7 +78,10 @@ func (c *Client) detectDivergence(ctx context.Context, primaryTrace []*types.Lig
"witness", c.witnesses[e.WitnessIndex], "err", err)
witnessesToRemove = append(witnessesToRemove, e.WitnessIndex)
default:
c.logger.Debug("error in light block request to witness", "err", err)
if errors.Is(e, context.Canceled) || errors.Is(e, context.DeadlineExceeded) {
return e
}
c.logger.Info("error in light block request to witness", "err", err)
}
}
@@ -115,7 +118,7 @@ func (c *Client) compareNewHeaderWithWitness(ctx context.Context, errc chan erro
// the witness hasn't been helpful in comparing headers, we mark the response and continue
// comparing with the rest of the witnesses
case provider.ErrNoResponse, provider.ErrLightBlockNotFound:
case provider.ErrNoResponse, provider.ErrLightBlockNotFound, context.DeadlineExceeded, context.Canceled:
errc <- err
return


@@ -1,10 +1,12 @@
package light_test
import (
"bytes"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
dbm "github.com/tendermint/tm-db"
@@ -12,7 +14,7 @@ import (
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/light"
"github.com/tendermint/tendermint/light/provider"
mockp "github.com/tendermint/tendermint/light/provider/mock"
provider_mocks "github.com/tendermint/tendermint/light/provider/mocks"
dbs "github.com/tendermint/tendermint/light/store/db"
"github.com/tendermint/tendermint/types"
)
@@ -20,18 +22,20 @@ import (
func TestLightClientAttackEvidence_Lunatic(t *testing.T) {
// primary performs a lunatic attack
var (
latestHeight = int64(10)
latestHeight = int64(3)
valSize = 5
divergenceHeight = int64(6)
divergenceHeight = int64(2)
primaryHeaders = make(map[int64]*types.SignedHeader, latestHeight)
primaryValidators = make(map[int64]*types.ValidatorSet, latestHeight)
)
witnessHeaders, witnessValidators, chainKeys := genMockNodeWithKeys(chainID, latestHeight, valSize, 2, bTime)
witness := mockp.New(chainID, witnessHeaders, witnessValidators)
// height 2 is never requested, so delete it to keep the mockery expectations satisfied
delete(witnessHeaders, 2)
forgedKeys := chainKeys[divergenceHeight-1].ChangeKeys(3) // we change 3 out of the 5 validators (still 2/5 remain)
forgedVals := forgedKeys.ToValidators(2, 0)
for height := int64(1); height <= latestHeight; height++ {
if height < divergenceHeight {
primaryHeaders[height] = witnessHeaders[height]
@@ -42,7 +46,35 @@ func TestLightClientAttackEvidence_Lunatic(t *testing.T) {
nil, forgedVals, forgedVals, hash("app_hash"), hash("cons_hash"), hash("results_hash"), 0, len(forgedKeys))
primaryValidators[height] = forgedVals
}
primary := mockp.New(chainID, primaryHeaders, primaryValidators)
delete(primaryHeaders, 2)
mockWitness := mockNodeFromHeadersAndVals(witnessHeaders, witnessValidators)
mockPrimary := mockNodeFromHeadersAndVals(primaryHeaders, primaryValidators)
mockWitness.On("ReportEvidence", mock.Anything, mock.MatchedBy(func(evidence types.Evidence) bool {
evAgainstPrimary := &types.LightClientAttackEvidence{
// after the divergence height the valset doesn't change so we expect the evidence to be for the latest height
ConflictingBlock: &types.LightBlock{
SignedHeader: primaryHeaders[latestHeight],
ValidatorSet: primaryValidators[latestHeight],
},
CommonHeight: 1,
}
return bytes.Equal(evidence.Hash(), evAgainstPrimary.Hash())
})).Return(nil)
mockPrimary.On("ReportEvidence", mock.Anything, mock.MatchedBy(func(evidence types.Evidence) bool {
evAgainstWitness := &types.LightClientAttackEvidence{
// when forming evidence against the witness we learn that the canonical chain continued to change
// validator sets, hence the conflicting block is at divergenceHeight+1
ConflictingBlock: &types.LightBlock{
SignedHeader: witnessHeaders[divergenceHeight+1],
ValidatorSet: witnessValidators[divergenceHeight+1],
},
CommonHeight: divergenceHeight - 1,
}
return bytes.Equal(evidence.Hash(), evAgainstWitness.Hash())
})).Return(nil)
c, err := light.NewClient(
ctx,
@@ -52,121 +84,132 @@ func TestLightClientAttackEvidence_Lunatic(t *testing.T) {
Height: 1,
Hash: primaryHeaders[1].Hash(),
},
primary,
[]provider.Provider{witness},
mockPrimary,
[]provider.Provider{mockWitness},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
require.NoError(t, err)
// Check verification returns an error.
_, err = c.VerifyLightBlockAtHeight(ctx, 10, bTime.Add(1*time.Hour))
_, err = c.VerifyLightBlockAtHeight(ctx, latestHeight, bTime.Add(1*time.Hour))
if assert.Error(t, err) {
assert.Equal(t, light.ErrLightClientAttack, err)
}
// Check evidence was sent to both full nodes.
evAgainstPrimary := &types.LightClientAttackEvidence{
// after the divergence height the valset doesn't change so we expect the evidence to be for height 10
ConflictingBlock: &types.LightBlock{
SignedHeader: primaryHeaders[10],
ValidatorSet: primaryValidators[10],
},
CommonHeight: 4,
}
assert.True(t, witness.HasEvidence(evAgainstPrimary))
evAgainstWitness := &types.LightClientAttackEvidence{
// when forming evidence against witness we learn that the canonical chain continued to change validator sets
// hence the conflicting block is at 7
ConflictingBlock: &types.LightBlock{
SignedHeader: witnessHeaders[7],
ValidatorSet: witnessValidators[7],
},
CommonHeight: 4,
}
assert.True(t, primary.HasEvidence(evAgainstWitness))
mockWitness.AssertExpectations(t)
mockPrimary.AssertExpectations(t)
}
func TestLightClientAttackEvidence_Equivocation(t *testing.T) {
verificationOptions := map[string]light.Option{
"sequential": light.SequentialVerification(),
"skipping": light.SkippingVerification(light.DefaultTrustLevel),
cases := []struct {
name string
lightOption light.Option
unusedWitnessBlockHeights []int64
unusedPrimaryBlockHeights []int64
latestHeight int64
divergenceHeight int64
}{
{
name: "sequential",
lightOption: light.SequentialVerification(),
unusedWitnessBlockHeights: []int64{4, 6},
latestHeight: int64(5),
divergenceHeight: int64(3),
},
{
name: "skipping",
lightOption: light.SkippingVerification(light.DefaultTrustLevel),
unusedWitnessBlockHeights: []int64{2, 4, 6},
unusedPrimaryBlockHeights: []int64{2, 4, 6},
latestHeight: int64(5),
divergenceHeight: int64(3),
},
}
for s, verificationOption := range verificationOptions {
t.Log("==> verification", s)
// primary performs an equivocation attack
var (
latestHeight = int64(10)
valSize = 5
divergenceHeight = int64(6)
primaryHeaders = make(map[int64]*types.SignedHeader, latestHeight)
primaryValidators = make(map[int64]*types.ValidatorSet, latestHeight)
)
// validators don't change in this network (however we still use a map just for convenience)
witnessHeaders, witnessValidators, chainKeys := genMockNodeWithKeys(chainID, latestHeight+2, valSize, 2, bTime)
witness := mockp.New(chainID, witnessHeaders, witnessValidators)
for height := int64(1); height <= latestHeight; height++ {
if height < divergenceHeight {
primaryHeaders[height] = witnessHeaders[height]
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
// primary performs an equivocation attack
var (
valSize = 5
primaryHeaders = make(map[int64]*types.SignedHeader, tc.latestHeight)
// validators don't change in this network (however we still use a map just for convenience)
primaryValidators = make(map[int64]*types.ValidatorSet, tc.latestHeight)
)
witnessHeaders, witnessValidators, chainKeys := genMockNodeWithKeys(chainID, tc.latestHeight+1, valSize, 2, bTime)
for height := int64(1); height <= tc.latestHeight; height++ {
if height < tc.divergenceHeight {
primaryHeaders[height] = witnessHeaders[height]
primaryValidators[height] = witnessValidators[height]
continue
}
// we don't have a network partition so we will make 4/5 (greater than 2/3) malicious and vote again for
// a different block (which we do by adding txs)
primaryHeaders[height] = chainKeys[height].GenSignedHeader(chainID, height,
bTime.Add(time.Duration(height)*time.Minute), []types.Tx{[]byte("abcd")},
witnessValidators[height], witnessValidators[height+1], hash("app_hash"),
hash("cons_hash"), hash("results_hash"), 0, len(chainKeys[height])-1)
primaryValidators[height] = witnessValidators[height]
continue
}
// we don't have a network partition so we will make 4/5 (greater than 2/3) malicious and vote again for
// a different block (which we do by adding txs)
primaryHeaders[height] = chainKeys[height].GenSignedHeader(chainID, height,
bTime.Add(time.Duration(height)*time.Minute), []types.Tx{[]byte("abcd")},
witnessValidators[height], witnessValidators[height+1], hash("app_hash"),
hash("cons_hash"), hash("results_hash"), 0, len(chainKeys[height])-1)
primaryValidators[height] = witnessValidators[height]
}
primary := mockp.New(chainID, primaryHeaders, primaryValidators)
c, err := light.NewClient(
ctx,
chainID,
light.TrustOptions{
Period: 4 * time.Hour,
Height: 1,
Hash: primaryHeaders[1].Hash(),
},
primary,
[]provider.Provider{witness},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
verificationOption,
)
require.NoError(t, err)
for _, height := range tc.unusedWitnessBlockHeights {
delete(witnessHeaders, height)
}
mockWitness := mockNodeFromHeadersAndVals(witnessHeaders, witnessValidators)
for _, height := range tc.unusedPrimaryBlockHeights {
delete(primaryHeaders, height)
}
mockPrimary := mockNodeFromHeadersAndVals(primaryHeaders, primaryValidators)
// Check verification returns an error.
_, err = c.VerifyLightBlockAtHeight(ctx, 10, bTime.Add(1*time.Hour))
if assert.Error(t, err) {
assert.Equal(t, light.ErrLightClientAttack, err)
}
// Check evidence was sent to both full nodes.
// Common height should be set to the height of the divergent header in the instance
// of an equivocation attack and the validator sets are the same as what the witness has
mockWitness.On("ReportEvidence", mock.Anything, mock.MatchedBy(func(evidence types.Evidence) bool {
evAgainstPrimary := &types.LightClientAttackEvidence{
ConflictingBlock: &types.LightBlock{
SignedHeader: primaryHeaders[tc.divergenceHeight],
ValidatorSet: primaryValidators[tc.divergenceHeight],
},
CommonHeight: tc.divergenceHeight,
}
return bytes.Equal(evidence.Hash(), evAgainstPrimary.Hash())
})).Return(nil)
mockPrimary.On("ReportEvidence", mock.Anything, mock.MatchedBy(func(evidence types.Evidence) bool {
evAgainstWitness := &types.LightClientAttackEvidence{
ConflictingBlock: &types.LightBlock{
SignedHeader: witnessHeaders[tc.divergenceHeight],
ValidatorSet: witnessValidators[tc.divergenceHeight],
},
CommonHeight: tc.divergenceHeight,
}
return bytes.Equal(evidence.Hash(), evAgainstWitness.Hash())
})).Return(nil)
// Check evidence was sent to both full nodes.
// Common height should be set to the height of the divergent header in the instance
// of an equivocation attack and the validator sets are the same as what the witness has
evAgainstPrimary := &types.LightClientAttackEvidence{
ConflictingBlock: &types.LightBlock{
SignedHeader: primaryHeaders[divergenceHeight],
ValidatorSet: primaryValidators[divergenceHeight],
},
CommonHeight: divergenceHeight,
}
assert.True(t, witness.HasEvidence(evAgainstPrimary))
c, err := light.NewClient(
ctx,
chainID,
light.TrustOptions{
Period: 4 * time.Hour,
Height: 1,
Hash: primaryHeaders[1].Hash(),
},
mockPrimary,
[]provider.Provider{mockWitness},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
tc.lightOption,
)
require.NoError(t, err)
evAgainstWitness := &types.LightClientAttackEvidence{
ConflictingBlock: &types.LightBlock{
SignedHeader: witnessHeaders[divergenceHeight],
ValidatorSet: witnessValidators[divergenceHeight],
},
CommonHeight: divergenceHeight,
}
assert.True(t, primary.HasEvidence(evAgainstWitness))
// Check verification returns an error.
_, err = c.VerifyLightBlockAtHeight(ctx, tc.latestHeight, bTime.Add(1*time.Hour))
if assert.Error(t, err) {
assert.Equal(t, light.ErrLightClientAttack, err)
}
mockWitness.AssertExpectations(t)
mockPrimary.AssertExpectations(t)
})
}
}
@@ -205,14 +248,37 @@ func TestLightClientAttackEvidence_ForwardLunatic(t *testing.T) {
0, len(forgedKeys),
)
witness := mockp.New(chainID, witnessHeaders, witnessValidators)
primary := mockp.New(chainID, primaryHeaders, primaryValidators)
for _, unusedHeader := range []int64{3, 5, 6, 8} {
delete(primaryHeaders, unusedHeader)
}
mockPrimary := mockNodeFromHeadersAndVals(primaryHeaders, primaryValidators)
lastBlock, _ := mockPrimary.LightBlock(ctx, forgedHeight)
mockPrimary.On("LightBlock", mock.Anything, int64(0)).Return(lastBlock, nil)
mockPrimary.On("LightBlock", mock.Anything, mock.Anything).Return(nil, provider.ErrLightBlockNotFound)
laggingWitness := witness.Copy("laggingWitness")
for _, unusedHeader := range []int64{3, 5, 6, 8} {
delete(witnessHeaders, unusedHeader)
}
mockWitness := mockNodeFromHeadersAndVals(witnessHeaders, witnessValidators)
lastBlock, _ = mockWitness.LightBlock(ctx, latestHeight)
mockWitness.On("LightBlock", mock.Anything, int64(0)).Return(lastBlock, nil).Once()
mockWitness.On("LightBlock", mock.Anything, int64(12)).Return(nil, provider.ErrHeightTooHigh)
mockWitness.On("ReportEvidence", mock.Anything, mock.MatchedBy(func(evidence types.Evidence) bool {
// Check evidence was sent to the witness against the full node
evAgainstPrimary := &types.LightClientAttackEvidence{
ConflictingBlock: &types.LightBlock{
SignedHeader: primaryHeaders[forgedHeight],
ValidatorSet: primaryValidators[forgedHeight],
},
CommonHeight: latestHeight,
}
return bytes.Equal(evidence.Hash(), evAgainstPrimary.Hash())
})).Return(nil).Twice()
// In order to perform the attack, the primary needs at least one accomplice as a witness to also
// send the forged block
accomplice := primary
accomplice := mockPrimary
c, err := light.NewClient(
ctx,
@@ -222,8 +288,8 @@ func TestLightClientAttackEvidence_ForwardLunatic(t *testing.T) {
Height: 1,
Hash: primaryHeaders[1].Hash(),
},
primary,
[]provider.Provider{witness, accomplice},
mockPrimary,
[]provider.Provider{mockWitness, accomplice},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
light.MaxClockDrift(1*time.Second),
@@ -251,7 +317,7 @@ func TestLightClientAttackEvidence_ForwardLunatic(t *testing.T) {
}
go func() {
time.Sleep(2 * time.Second)
witness.AddLightBlock(newLb)
mockWitness.On("LightBlock", mock.Anything, int64(0)).Return(newLb, nil)
}()
// Now assert that verification returns an error. We craft the light clients time to be a little ahead of the chain
@@ -261,26 +327,19 @@ func TestLightClientAttackEvidence_ForwardLunatic(t *testing.T) {
assert.Equal(t, light.ErrLightClientAttack, err)
}
// Check evidence was sent to the witness against the full node
evAgainstPrimary := &types.LightClientAttackEvidence{
ConflictingBlock: &types.LightBlock{
SignedHeader: primaryHeaders[forgedHeight],
ValidatorSet: primaryValidators[forgedHeight],
},
CommonHeight: latestHeight,
}
assert.True(t, witness.HasEvidence(evAgainstPrimary))
// We attempt the same call but now the supporting witness has a block which should
// immediately conflict in time with the primary
_, err = c.VerifyLightBlockAtHeight(ctx, forgedHeight, bTime.Add(time.Duration(forgedHeight)*time.Minute))
if assert.Error(t, err) {
assert.Equal(t, light.ErrLightClientAttack, err)
}
assert.True(t, witness.HasEvidence(evAgainstPrimary))
// Lastly we test the unfortunate case where the light clients supporting witness doesn't update
// in enough time
mockLaggingWitness := mockNodeFromHeadersAndVals(witnessHeaders, witnessValidators)
mockLaggingWitness.On("LightBlock", mock.Anything, int64(12)).Return(nil, provider.ErrHeightTooHigh)
lastBlock, _ = mockLaggingWitness.LightBlock(ctx, latestHeight)
mockLaggingWitness.On("LightBlock", mock.Anything, int64(0)).Return(lastBlock, nil)
c, err = light.NewClient(
ctx,
chainID,
@@ -289,8 +348,8 @@ func TestLightClientAttackEvidence_ForwardLunatic(t *testing.T) {
Height: 1,
Hash: primaryHeaders[1].Hash(),
},
primary,
[]provider.Provider{laggingWitness, accomplice},
mockPrimary,
[]provider.Provider{mockLaggingWitness, accomplice},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
light.MaxClockDrift(1*time.Second),
@@ -300,17 +359,20 @@ func TestLightClientAttackEvidence_ForwardLunatic(t *testing.T) {
_, err = c.Update(ctx, bTime.Add(time.Duration(forgedHeight)*time.Minute))
assert.NoError(t, err)
mockPrimary.AssertExpectations(t)
mockWitness.AssertExpectations(t)
}
// 1. Different nodes therefore a divergent header is produced.
// => light client returns an error upon creation because primary and witness
// have a different view.
func TestClientDivergentTraces1(t *testing.T) {
primary := mockp.New(genMockNode(chainID, 10, 5, 2, bTime))
firstBlock, err := primary.LightBlock(ctx, 1)
_, headers, vals := genMockNode(chainID, 1, 5, 2, bTime)
mockPrimary := mockNodeFromHeadersAndVals(headers, vals)
firstBlock, err := mockPrimary.LightBlock(ctx, 1)
require.NoError(t, err)
witness := mockp.New(genMockNode(chainID, 10, 5, 2, bTime))
_, headers, vals = genMockNode(chainID, 1, 5, 2, bTime)
mockWitness := mockNodeFromHeadersAndVals(headers, vals)
_, err = light.NewClient(
ctx,
@@ -320,20 +382,25 @@ func TestClientDivergentTraces1(t *testing.T) {
Hash: firstBlock.Hash(),
Period: 4 * time.Hour,
},
primary,
[]provider.Provider{witness},
mockPrimary,
[]provider.Provider{mockWitness},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
require.Error(t, err)
assert.Contains(t, err.Error(), "does not match primary")
mockWitness.AssertExpectations(t)
mockPrimary.AssertExpectations(t)
}
// 2. Two out of three nodes don't respond but the third has a header that matches
// => verification should be successful and all the witnesses should remain
func TestClientDivergentTraces2(t *testing.T) {
primary := mockp.New(genMockNode(chainID, 10, 5, 2, bTime))
firstBlock, err := primary.LightBlock(ctx, 1)
_, headers, vals := genMockNode(chainID, 2, 5, 2, bTime)
mockPrimaryNode := mockNodeFromHeadersAndVals(headers, vals)
mockDeadNode := &provider_mocks.Provider{}
mockDeadNode.On("LightBlock", mock.Anything, mock.Anything).Return(nil, provider.ErrNoResponse)
firstBlock, err := mockPrimaryNode.LightBlock(ctx, 1)
require.NoError(t, err)
c, err := light.NewClient(
ctx,
@@ -343,31 +410,33 @@ func TestClientDivergentTraces2(t *testing.T) {
Hash: firstBlock.Hash(),
Period: 4 * time.Hour,
},
primary,
[]provider.Provider{deadNode, deadNode, primary},
mockPrimaryNode,
[]provider.Provider{mockDeadNode, mockDeadNode, mockPrimaryNode},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
require.NoError(t, err)
_, err = c.VerifyLightBlockAtHeight(ctx, 10, bTime.Add(1*time.Hour))
_, err = c.VerifyLightBlockAtHeight(ctx, 2, bTime.Add(1*time.Hour))
assert.NoError(t, err)
assert.Equal(t, 3, len(c.Witnesses()))
mockDeadNode.AssertExpectations(t)
mockPrimaryNode.AssertExpectations(t)
}
// 3. witness has the same first header, but different second header
// => creation should succeed, but the verification should fail
func TestClientDivergentTraces3(t *testing.T) {
_, primaryHeaders, primaryVals := genMockNode(chainID, 10, 5, 2, bTime)
primary := mockp.New(chainID, primaryHeaders, primaryVals)
_, primaryHeaders, primaryVals := genMockNode(chainID, 2, 5, 2, bTime)
mockPrimary := mockNodeFromHeadersAndVals(primaryHeaders, primaryVals)
firstBlock, err := primary.LightBlock(ctx, 1)
firstBlock, err := mockPrimary.LightBlock(ctx, 1)
require.NoError(t, err)
_, mockHeaders, mockVals := genMockNode(chainID, 10, 5, 2, bTime)
_, mockHeaders, mockVals := genMockNode(chainID, 2, 5, 2, bTime)
mockHeaders[1] = primaryHeaders[1]
mockVals[1] = primaryVals[1]
witness := mockp.New(chainID, mockHeaders, mockVals)
mockWitness := mockNodeFromHeadersAndVals(mockHeaders, mockVals)
c, err := light.NewClient(
ctx,
@@ -377,33 +446,33 @@ func TestClientDivergentTraces3(t *testing.T) {
Hash: firstBlock.Hash(),
Period: 4 * time.Hour,
},
primary,
[]provider.Provider{witness},
mockPrimary,
[]provider.Provider{mockWitness},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
require.NoError(t, err)
_, err = c.VerifyLightBlockAtHeight(ctx, 10, bTime.Add(1*time.Hour))
_, err = c.VerifyLightBlockAtHeight(ctx, 2, bTime.Add(1*time.Hour))
assert.Error(t, err)
assert.Equal(t, 1, len(c.Witnesses()))
mockWitness.AssertExpectations(t)
mockPrimary.AssertExpectations(t)
}
// 4. Witness has a divergent header but can not produce a valid trace to back it up.
// It should be ignored
func TestClientDivergentTraces4(t *testing.T) {
_, primaryHeaders, primaryVals := genMockNode(chainID, 10, 5, 2, bTime)
primary := mockp.New(chainID, primaryHeaders, primaryVals)
_, primaryHeaders, primaryVals := genMockNode(chainID, 2, 5, 2, bTime)
mockPrimary := mockNodeFromHeadersAndVals(primaryHeaders, primaryVals)
firstBlock, err := primary.LightBlock(ctx, 1)
firstBlock, err := mockPrimary.LightBlock(ctx, 1)
require.NoError(t, err)
_, mockHeaders, mockVals := genMockNode(chainID, 10, 5, 2, bTime)
witness := primary.Copy("witness")
witness.AddLightBlock(&types.LightBlock{
SignedHeader: mockHeaders[10],
ValidatorSet: mockVals[10],
})
_, witnessHeaders, witnessVals := genMockNode(chainID, 2, 5, 2, bTime)
primaryHeaders[2] = witnessHeaders[2]
primaryVals[2] = witnessVals[2]
mockWitness := mockNodeFromHeadersAndVals(primaryHeaders, primaryVals)
c, err := light.NewClient(
ctx,
@@ -413,14 +482,16 @@ func TestClientDivergentTraces4(t *testing.T) {
Hash: firstBlock.Hash(),
Period: 4 * time.Hour,
},
primary,
[]provider.Provider{witness},
mockPrimary,
[]provider.Provider{mockWitness},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
require.NoError(t, err)
_, err = c.VerifyLightBlockAtHeight(ctx, 10, bTime.Add(1*time.Hour))
_, err = c.VerifyLightBlockAtHeight(ctx, 2, bTime.Add(1*time.Hour))
assert.Error(t, err)
assert.Equal(t, 1, len(c.Witnesses()))
mockWitness.AssertExpectations(t)
mockPrimary.AssertExpectations(t)
}


@@ -3,10 +3,12 @@ package light_test
import (
"time"
"github.com/stretchr/testify/mock"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
"github.com/tendermint/tendermint/crypto/tmhash"
tmtime "github.com/tendermint/tendermint/libs/time"
provider_mocks "github.com/tendermint/tendermint/light/provider/mocks"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/version"
@@ -239,6 +241,16 @@ func genMockNode(
return chainID, headers, valset
}
func mockNodeFromHeadersAndVals(headers map[int64]*types.SignedHeader,
vals map[int64]*types.ValidatorSet) *provider_mocks.Provider {
mockNode := &provider_mocks.Provider{}
for i, header := range headers {
lb := &types.LightBlock{SignedHeader: header, ValidatorSet: vals[i]}
mockNode.On("LightBlock", mock.Anything, i).Return(lb, nil)
}
return mockNode
}
func hash(s string) []byte {
return tmhash.Sum([]byte(s))
}

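One consequence of switching to the generated provider_mocks.Provider is visible throughout the diffs above: headers that the client never requests are deleted before the mock is built, because an On(...) expectation that is never exercised makes AssertExpectations fail. A hedged fragment illustrating that, reusing the fixture names (l1, l2, ctx) from these test files:

```go
func TestUnusedExpectationFails(t *testing.T) {
	mockNode := &provider_mocks.Provider{}
	mockNode.On("LightBlock", mock.Anything, int64(1)).Return(l1, nil)
	mockNode.On("LightBlock", mock.Anything, int64(2)).Return(l2, nil) // never exercised below

	_, err := mockNode.LightBlock(ctx, 1)
	require.NoError(t, err)

	mockNode.AssertExpectations(t) // fails: the height-2 expectation was never called
}
```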

@@ -19,7 +19,7 @@ import (
var defaultOptions = Options{
MaxRetryAttempts: 5,
Timeout: 3 * time.Second,
Timeout: 5 * time.Second,
NoBlockThreshold: 5,
NoResponseThreshold: 5,
}
@@ -125,7 +125,7 @@ func (p *http) LightBlock(ctx context.Context, height int64) (*types.LightBlock,
if sh.Header == nil {
return nil, provider.ErrBadLightBlock{
Reason: errors.New("header is nil unexpectedly"),
Reason: errors.New("returned header is nil unexpectedly"),
}
}
@@ -205,6 +205,11 @@ func (p *http) validatorSet(ctx context.Context, height *int64) (*types.Validato
return nil, p.parseRPCError(e)
default:
// check if the error stems from the context
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return nil, err
}
// If we don't know the error then by default we return an unreliable provider error and
// terminate the connection with the peer.
return nil, provider.ErrUnreliableProvider{Reason: e.Error()}
@@ -236,11 +241,19 @@ func (p *http) signedHeader(ctx context.Context, height *int64) (*types.SignedHe
return &commit.SignedHeader, nil
case *url.Error:
// check if the request timed out
if e.Timeout() {
// we wait and try again with exponential backoff
time.Sleep(backoffTimeout(attempt))
continue
}
// check if the connection was refused or dropped
if strings.Contains(e.Error(), "connection refused") {
return nil, provider.ErrConnectionClosed
}
// else, as a catch all, we return the error as a bad light block response
return nil, provider.ErrBadLightBlock{Reason: e}
case *rpctypes.RPCError:
@@ -248,6 +261,11 @@ func (p *http) signedHeader(ctx context.Context, height *int64) (*types.SignedHe
return nil, p.parseRPCError(e)
default:
// check if the error stems from the context
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return nil, err
}
// If we don't know the error then by default we return an unreliable provider error and
// terminate the connection with the peer.
return nil, provider.ErrUnreliableProvider{Reason: e.Error()}

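The provider hunks above retry a timed-out request with exponential backoff, turn "connection refused" into a dedicated error, and let context errors pass straight through. A self-contained sketch of that retry skeleton; the backoff curve and maxAttempts are assumptions, since the real backoffTimeout helper isn't shown in this diff:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"net/url"
	"strings"
	"time"
)

const maxAttempts = 5

func backoffTimeout(attempt int) time.Duration {
	return time.Duration(1<<attempt) * 100 * time.Millisecond // 100ms, 200ms, 400ms, ...
}

func fetchWithRetry(ctx context.Context, do func(context.Context) error) error {
	for attempt := 0; attempt < maxAttempts; attempt++ {
		err := do(ctx)
		if err == nil {
			return nil
		}
		// context errors always abort, never retry
		if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
			return err
		}
		var uerr *url.Error
		if errors.As(err, &uerr) {
			if uerr.Timeout() {
				time.Sleep(backoffTimeout(attempt)) // wait and try again
				continue
			}
			if strings.Contains(uerr.Error(), "connection refused") {
				return fmt.Errorf("connection closed: %w", err)
			}
		}
		return err // anything else is a hard failure
	}
	return errors.New("no response after retries")
}

func main() {
	err := fetchWithRetry(context.Background(), func(context.Context) error {
		return &url.Error{Op: "Get", URL: "http://example", Err: errors.New("connection refused")}
	})
	fmt.Println(err)
}
```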

@@ -4,13 +4,12 @@ import (
"context"
"fmt"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/abci/example/kvstore"
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/light/provider"
lighthttp "github.com/tendermint/tendermint/light/provider/http"
rpcclient "github.com/tendermint/tendermint/rpc/client"
@@ -33,30 +32,17 @@ func TestNewProvider(t *testing.T) {
require.Equal(t, fmt.Sprintf("%s", c), "http{http://153.200.0.1}")
}
// NodeSuite initiates and runs a full node instance in the
// background, stopping it once the test is completed
func NodeSuite(t *testing.T) (service.Service, *config.Config) {
t.Helper()
func TestProvider(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
conf := rpctest.CreateConfig(t.Name())
defer cancel()
cfg := rpctest.CreateConfig(t.Name())
// start a tendermint node in the background to test against
app := kvstore.NewApplication()
app.RetainBlocks = 9
node, closer, err := rpctest.StartTendermint(ctx, conf, app)
_, closer, err := rpctest.StartTendermint(ctx, cfg, app)
require.NoError(t, err)
t.Cleanup(func() {
_ = closer(ctx)
cancel()
})
return node, conf
}
func TestProvider(t *testing.T) {
_, cfg := NodeSuite(t)
rpcAddr := cfg.RPC.ListenAddress
genDoc, err := types.GenesisDocFromFile(cfg.GenesisFile())
require.NoError(t, err)
@@ -95,8 +81,9 @@ func TestProvider(t *testing.T) {
require.Nil(t, lb)
assert.Equal(t, provider.ErrHeightTooHigh, err)
_, err = p.LightBlock(context.Background(), 1)
lb, err = p.LightBlock(context.Background(), 1)
require.Error(t, err)
require.Nil(t, lb)
assert.Equal(t, provider.ErrLightBlockNotFound, err)
// if the provider is unable to provide four more blocks then we should return
@@ -105,4 +92,15 @@ func TestProvider(t *testing.T) {
_, err = p.LightBlock(context.Background(), 1)
}
assert.IsType(t, provider.ErrUnreliableProvider{}, err)
// shut down tendermint node
require.NoError(t, closer(ctx))
cancel()
time.Sleep(10 * time.Second)
lb, err = p.LightBlock(context.Background(), lower+2)
// we should see a connection refused
require.Error(t, err)
require.Nil(t, lb)
assert.Equal(t, provider.ErrConnectionClosed, err)
}


@@ -1,30 +0,0 @@
package mock
import (
"context"
"fmt"
"github.com/tendermint/tendermint/light/provider"
"github.com/tendermint/tendermint/types"
)
type deadMock struct {
id string
}
// NewDeadMock creates a mock provider that always errors. id is used in case of multiple providers.
func NewDeadMock(id string) provider.Provider {
return &deadMock{id: id}
}
func (p *deadMock) String() string {
return fmt.Sprintf("DeadMock-%s", p.id)
}
func (p *deadMock) LightBlock(_ context.Context, height int64) (*types.LightBlock, error) {
return nil, provider.ErrNoResponse
}
func (p *deadMock) ReportEvidence(_ context.Context, ev types.Evidence) error {
return provider.ErrNoResponse
}

View File

@@ -1,118 +0,0 @@
package mock
import (
"context"
"errors"
"fmt"
"strings"
"sync"
"github.com/tendermint/tendermint/light/provider"
"github.com/tendermint/tendermint/types"
)
type Mock struct {
id string
mtx sync.Mutex
headers map[int64]*types.SignedHeader
vals map[int64]*types.ValidatorSet
evidenceToReport map[string]types.Evidence // hash => evidence
latestHeight int64
}
var _ provider.Provider = (*Mock)(nil)
// New creates a mock provider with the given set of headers and validator
// sets.
func New(id string, headers map[int64]*types.SignedHeader, vals map[int64]*types.ValidatorSet) *Mock {
height := int64(0)
for h := range headers {
if h > height {
height = h
}
}
return &Mock{
id: id,
headers: headers,
vals: vals,
evidenceToReport: make(map[string]types.Evidence),
latestHeight: height,
}
}
func (p *Mock) String() string {
var headers strings.Builder
for _, h := range p.headers {
fmt.Fprintf(&headers, " %d:%X", h.Height, h.Hash())
}
var vals strings.Builder
for _, v := range p.vals {
fmt.Fprintf(&vals, " %X", v.Hash())
}
return fmt.Sprintf("Mock{id: %s, headers: %s, vals: %v}", p.id, headers.String(), vals.String())
}
func (p *Mock) LightBlock(_ context.Context, height int64) (*types.LightBlock, error) {
p.mtx.Lock()
defer p.mtx.Unlock()
var lb *types.LightBlock
if height > p.latestHeight {
return nil, provider.ErrHeightTooHigh
}
if height == 0 && len(p.headers) > 0 {
height = p.latestHeight
}
if _, ok := p.headers[height]; ok {
sh := p.headers[height]
vals := p.vals[height]
lb = &types.LightBlock{
SignedHeader: sh,
ValidatorSet: vals,
}
}
if lb == nil {
return nil, provider.ErrLightBlockNotFound
}
if lb.SignedHeader == nil || lb.ValidatorSet == nil {
return nil, provider.ErrBadLightBlock{Reason: errors.New("nil header or vals")}
}
if err := lb.ValidateBasic(lb.ChainID); err != nil {
return nil, provider.ErrBadLightBlock{Reason: err}
}
return lb, nil
}
func (p *Mock) ReportEvidence(_ context.Context, ev types.Evidence) error {
p.evidenceToReport[string(ev.Hash())] = ev
return nil
}
func (p *Mock) HasEvidence(ev types.Evidence) bool {
_, ok := p.evidenceToReport[string(ev.Hash())]
return ok
}
func (p *Mock) AddLightBlock(lb *types.LightBlock) {
p.mtx.Lock()
defer p.mtx.Unlock()
if err := lb.ValidateBasic(lb.ChainID); err != nil {
panic(fmt.Sprintf("unable to add light block, err: %v", err))
}
p.headers[lb.Height] = lb.SignedHeader
p.vals[lb.Height] = lb.ValidatorSet
if lb.Height > p.latestHeight {
p.latestHeight = lb.Height
}
}
func (p *Mock) Copy(id string) *Mock {
return New(id, p.headers, p.vals)
}

View File

@@ -6,6 +6,8 @@ import (
"github.com/tendermint/tendermint/types"
)
//go:generate mockery --case underscore --name Provider
// Provider provides information for the light client to sync (verification
// happens in the client).
type Provider interface {

View File
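
With the go:generate directive above, mockery can produce a Provider mock, so tests can stub the interface directly instead of maintaining a handwritten implementation. The mocks import path and generated type name below follow mockery defaults and are assumptions, not something shown in this diff.

```go
package provider_test

import (
	"context"
	"testing"

	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"

	"github.com/tendermint/tendermint/light/provider"
	provmocks "github.com/tendermint/tendermint/light/provider/mocks" // assumed generated path
)

func TestProviderMockSketch(t *testing.T) {
	p := &provmocks.Provider{}
	// stub a single call: height 1 is "not found"
	p.On("LightBlock", mock.Anything, int64(1)).Return(nil, provider.ErrLightBlockNotFound)

	lb, err := p.LightBlock(context.Background(), 1)
	require.Nil(t, lb)
	require.Equal(t, provider.ErrLightBlockNotFound, err)
	p.AssertExpectations(t)
}
```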

@@ -81,13 +81,14 @@ func createAndStartIndexerService(
eventSinks := []indexer.EventSink{}
// Check duplicated sinks.
// check for duplicated sinks
sinks := map[string]bool{}
for _, s := range config.TxIndex.Indexer {
sl := strings.ToLower(s)
if sinks[sl] {
return nil, nil, errors.New("found duplicated sinks, please check the tx-index section in the config.toml")
}
sinks[sl] = true
}
@@ -95,25 +96,31 @@ loop:
for k := range sinks {
switch k {
case string(indexer.NULL):
// when we see null in the config, the eventsinks will be reset with the nullEventSink.
// When we see null in the config, the eventsinks will be reset with the
// nullEventSink.
eventSinks = []indexer.EventSink{null.NewEventSink()}
break loop
case string(indexer.KV):
store, err := dbProvider(&cfg.DBContext{ID: "tx_index", Config: config})
if err != nil {
return nil, nil, err
}
eventSinks = append(eventSinks, kv.NewEventSink(store))
case string(indexer.PSQL):
conn := config.TxIndex.PsqlConn
if conn == "" {
return nil, nil, errors.New("the psql connection settings cannot be empty")
}
es, _, err := psql.NewEventSink(conn, chainID)
if err != nil {
return nil, nil, err
}
eventSinks = append(eventSinks, es)
default:
return nil, nil, errors.New("unsupported event sink type")
}
@@ -365,10 +372,7 @@ func createBlockchainReactor(
return reactorShim, reactor, nil
case cfg.BlockchainV2:
reactor := bcv2.NewBlockchainReactor(state.Copy(), blockExec, blockStore, fastSync, metrics)
reactor.SetLogger(logger)
return nil, reactor, nil
return nil, nil, errors.New("fastsync version v2 is no longer supported. Please use v0")
default:
return nil, nil, fmt.Errorf("unknown fastsync version %s", config.FastSync.Version)

View File

@@ -5,7 +5,6 @@ import (
"io"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
"github.com/tendermint/tendermint/abci/types"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
@@ -68,17 +67,13 @@ func (r *remoteClientCreator) NewABCIClient() (abcicli.Client, error) {
}
// DefaultClientCreator returns a default ClientCreator, which will create a
// local client if addr is one of: 'counter', 'counter_serial', 'kvstore',
// local client if addr is one of: 'kvstore',
// 'persistent_kvstore' or 'noop', otherwise - a remote client.
//
// The Closer is a noop except for persistent_kvstore applications,
// which will clean up the store.
func DefaultClientCreator(addr, transport, dbDir string) (ClientCreator, io.Closer) {
switch addr {
case "counter":
return NewLocalClientCreator(counter.NewApplication(false)), noopCloser{}
case "counter_serial":
return NewLocalClientCreator(counter.NewApplication(true)), noopCloser{}
case "kvstore":
return NewLocalClientCreator(kvstore.NewApplication()), noopCloser{}
case "persistent_kvstore":

View File
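
For reference, a hedged sketch of the remaining built-in names: "kvstore" resolves to a local, in-process client, and per the comment above the returned Closer only does real work for persistent_kvstore. The proxy import path and the claim that transport/dbDir are ignored for the in-process client are assumptions of this sketch.

```go
package proxysketch

import (
	"io"

	abcicli "github.com/tendermint/tendermint/abci/client"
	"github.com/tendermint/tendermint/proxy" // assumed import path
)

func newLocalKVStoreClient() (abcicli.Client, io.Closer, error) {
	// transport and dbDir are not consulted for the in-process kvstore client (assumption)
	creator, closer := proxy.DefaultClientCreator("kvstore", "socket", "")
	client, err := creator.NewABCIClient()
	if err != nil {
		_ = closer.Close()
		return nil, nil, err
	}
	return client, closer, nil
}
```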

@@ -120,7 +120,7 @@ func TestBroadcastEvidence_DuplicateVoteEvidence(t *testing.T) {
// previous versions of this test used a shared fixture with
// other tests, and in this version we give it a little time
// for the node to make progress before running the test
time.Sleep(10 * time.Millisecond)
time.Sleep(100 * time.Millisecond)
chainID := config.ChainID()

View File

@@ -3,6 +3,7 @@ package mock_test
import (
"context"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -23,6 +24,8 @@ func TestStatus(t *testing.T) {
LatestAppHash: bytes.HexBytes("app"),
LatestBlockHeight: 10,
MaxPeerBlockHeight: 20,
TotalSyncedTime: time.Second,
RemainingTime: time.Minute,
},
}},
}
@@ -36,6 +39,8 @@ func TestStatus(t *testing.T) {
assert.EqualValues("block", status.SyncInfo.LatestBlockHash)
assert.EqualValues(10, status.SyncInfo.LatestBlockHeight)
assert.EqualValues(20, status.SyncInfo.MaxPeerBlockHeight)
assert.EqualValues(time.Second, status.SyncInfo.TotalSyncedTime)
assert.EqualValues(time.Minute, status.SyncInfo.RemainingTime)
// make sure recorder works properly
require.Equal(1, len(r.Calls))
@@ -49,4 +54,6 @@ func TestStatus(t *testing.T) {
assert.EqualValues("block", st.SyncInfo.LatestBlockHash)
assert.EqualValues(10, st.SyncInfo.LatestBlockHeight)
assert.EqualValues(20, st.SyncInfo.MaxPeerBlockHeight)
assert.EqualValues(time.Second, status.SyncInfo.TotalSyncedTime)
assert.EqualValues(time.Minute, status.SyncInfo.RemainingTime)
}

View File

@@ -1,4 +1,4 @@
// Code generated by mockery 2.7.4. DO NOT EDIT.
// Code generated by mockery v0.0.0-dev. DO NOT EDIT.
package mocks

View File

@@ -71,6 +71,8 @@ func (env *Environment) Status(ctx *rpctypes.Context) (*ctypes.ResultStatus, err
EarliestBlockTime: time.Unix(0, earliestBlockTimeNano),
MaxPeerBlockHeight: env.FastSyncReactor.GetMaxPeerBlockHeight(),
CatchingUp: env.ConsensusReactor.WaitSync(),
TotalSyncedTime: env.FastSyncReactor.GetTotalSyncedTime(),
RemainingTime: env.FastSyncReactor.GetRemainingSyncTime(),
},
ValidatorInfo: validatorInfo,
}

View File

@@ -98,6 +98,9 @@ type SyncInfo struct {
MaxPeerBlockHeight int64 `json:"max_peer_block_height"`
CatchingUp bool `json:"catching_up"`
TotalSyncedTime time.Duration `json:"total_synced_time"`
RemainingTime time.Duration `json:"remaining_time"`
}
// Info about the node's validator

View File
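
A hedged client-side sketch of reading the two new fields; both are time.Duration values. The RPC HTTP client constructor and the node address below are illustrative placeholders rather than something defined in this diff.

```go
package statussketch

import (
	"context"
	"fmt"

	rpchttp "github.com/tendermint/tendermint/rpc/client/http"
)

func printSyncTimes(ctx context.Context, addr string) error {
	c, err := rpchttp.New(addr) // constructor signature assumed
	if err != nil {
		return err
	}
	st, err := c.Status(ctx)
	if err != nil {
		return err
	}
	fmt.Printf("catching_up=%v synced_for=%s remaining=%s\n",
		st.SyncInfo.CatchingUp, st.SyncInfo.TotalSyncedTime, st.SyncInfo.RemainingTime)
	return nil
}
```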

@@ -17,6 +17,7 @@ func (bapi *broadcastAPI) Ping(ctx context.Context, req *RequestPing) (*Response
return &ResponsePing{}, nil
}
// Deprecated: gRPC in the RPC layer of Tendermint will be removed in 0.36.
func (bapi *broadcastAPI) BroadcastTx(ctx context.Context, req *RequestBroadcastTx) (*ResponseBroadcastTx, error) {
// NOTE: there's no way to get client's remote address
// see https://stackoverflow.com/questions/33684570/session-and-remote-ip-address-in-grpc-go

View File

@@ -18,6 +18,7 @@ type Config struct {
// StartGRPCServer starts a new gRPC BroadcastAPIServer using the given
// net.Listener.
// NOTE: This function blocks; you may want to call it in a goroutine.
// Deprecated: gRPC in the RPC layer of Tendermint will be removed in 0.36
func StartGRPCServer(env *core.Environment, ln net.Listener) error {
grpcServer := grpc.NewServer()
RegisterBroadcastAPIServer(grpcServer, &broadcastAPI{env: env})

View File
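
Since StartGRPCServer blocks, the documented usage is to run it from its own goroutine. A minimal sketch, assuming the listener and core environment are already set up by the node; the import paths and logger are placeholders here.

```go
package grpcsketch

import (
	"net"

	"github.com/tendermint/tendermint/libs/log"
	core "github.com/tendermint/tendermint/rpc/core"  // assumed import path
	coregrpc "github.com/tendermint/tendermint/rpc/grpc" // assumed import path
)

func startDeprecatedGRPC(env *core.Environment, ln net.Listener, logger log.Logger) {
	go func() {
		// blocks until the listener is closed or the server errors
		if err := coregrpc.StartGRPCServer(env, ln); err != nil {
			logger.Error("gRPC broadcast API server terminated", "err", err)
		}
	}()
}
```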

@@ -1330,6 +1330,12 @@ components:
catching_up:
type: boolean
example: false
total_synced_time:
type: string
example: "1000000000"
remaining_time:
type: string
example: "0"
ValidatorInfo:
type: object
properties:

View File

@@ -18,7 +18,6 @@ import (
"sync"
"github.com/google/orderedcode"
"github.com/pkg/errors"
dbm "github.com/tendermint/tm-db"
)
@@ -395,7 +394,7 @@ func Migrate(ctx context.Context, db dbm.DB) error {
// check the error results
if len(errs) != 0 {
return errors.Errorf("encountered errors during migration: %v", errStrs)
return fmt.Errorf("encountered errors during migration: %v", errStrs)
}
return nil

View File
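
The change above is part of dropping github.com/pkg/errors in favor of the standard library. As a small hedged sketch: fmt.Errorf with %v covers the formatted case used here, while %w (not used in this hunk) would keep a single underlying cause inspectable via errors.Is and errors.As.

```go
package migratesketch

import "fmt"

func migrationError(errStrs []string, cause error) error {
	if cause != nil {
		// wrapping keeps the cause inspectable (illustrative; the hunk above uses %v only)
		return fmt.Errorf("migration step failed: %w", cause)
	}
	return fmt.Errorf("encountered errors during migration: %v", errStrs)
}
```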

@@ -110,6 +110,7 @@ func (es *EventSink) IndexTxEvents(txr []*abci.TxResult) error {
if err != nil {
return err
}
defer r.Close()
if !r.Next() {
return nil

View File

@@ -293,6 +293,7 @@ func verifyBlock(h int64) (bool, error) {
if err != nil {
return false, err
}
defer rows.Close()
if !rows.Next() {
return false, nil
@@ -308,6 +309,7 @@ func verifyBlock(h int64) (bool, error) {
if err != nil {
return false, err
}
defer rows.Close()
return rows.Next(), nil
}

View File
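
Both fixes above apply the same database/sql pattern: close the result set and check the iteration error. A generic sketch of that pattern, with a placeholder query and an assumed *sql.DB handle:

```go
package psqlsketch

import "database/sql"

func blockExists(db *sql.DB, h int64) (bool, error) {
	rows, err := db.Query(`SELECT height FROM blocks WHERE height = $1`, h) // placeholder query
	if err != nil {
		return false, err
	}
	defer rows.Close() // always release the result set, as the fixes above do
	found := rows.Next()
	if err := rows.Err(); err != nil { // surface iteration errors explicitly
		return false, err
	}
	return found, nil
}
```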

@@ -25,9 +25,7 @@ CREATE TABLE tx_events (
created_at TIMESTAMPTZ NOT NULL,
chain_id VARCHAR NOT NULL,
UNIQUE (hash, key),
FOREIGN KEY (tx_result_id)
REFERENCES tx_results(id)
ON DELETE CASCADE
FOREIGN KEY (tx_result_id) REFERENCES tx_results(id) ON DELETE CASCADE
);
CREATE INDEX idx_block_events_key_value ON block_events(key, value);
CREATE INDEX idx_tx_events_key_value ON tx_events(key, value);

View File

@@ -95,48 +95,6 @@ func (_m *Store) LoadConsensusParams(_a0 int64) (types.ConsensusParams, error) {
return r0, r1
}
// LoadFromDBOrGenesisDoc provides a mock function with given fields: _a0
func (_m *Store) LoadFromDBOrGenesisDoc(_a0 *types.GenesisDoc) (state.State, error) {
ret := _m.Called(_a0)
var r0 state.State
if rf, ok := ret.Get(0).(func(*types.GenesisDoc) state.State); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(state.State)
}
var r1 error
if rf, ok := ret.Get(1).(func(*types.GenesisDoc) error); ok {
r1 = rf(_a0)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// LoadFromDBOrGenesisFile provides a mock function with given fields: _a0
func (_m *Store) LoadFromDBOrGenesisFile(_a0 string) (state.State, error) {
ret := _m.Called(_a0)
var r0 state.State
if rf, ok := ret.Get(0).(func(string) state.State); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(state.State)
}
var r1 error
if rf, ok := ret.Get(1).(func(string) error); ok {
r1 = rf(_a0)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// LoadValidators provides a mock function with given fields: _a0
func (_m *Store) LoadValidators(_a0 int64) (*types.ValidatorSet, error) {
ret := _m.Called(_a0)

View File

@@ -10,8 +10,6 @@ and run the following tests in docker containers:
- includes test coverage
- app tests
- kvstore app over socket
- counter app over socket
- counter app over grpc
- persistence tests
- crash tendermint at each of many predefined points, restart, and ensure it syncs properly with the app

View File

@@ -2,9 +2,6 @@
set -ex
#- kvstore over socket, curl
#- counter over socket, curl
#- counter over grpc, curl
#- counter over grpc, grpc
# TODO: install everything
@@ -45,57 +42,6 @@ function kvstore_over_socket_reorder(){
kill -9 $pid_kvstore $pid_tendermint
}
function counter_over_socket() {
rm -rf $TMHOME
tendermint init validator
echo "Starting counter_over_socket"
abci-cli counter --serial > /dev/null &
pid_counter=$!
tendermint start --mode validator > tendermint.log &
pid_tendermint=$!
sleep 5
echo "running test"
bash test/app/counter_test.sh "Counter over Socket"
kill -9 $pid_counter $pid_tendermint
}
function counter_over_grpc() {
rm -rf $TMHOME
tendermint init validator
echo "Starting counter_over_grpc"
abci-cli counter --serial --abci grpc > /dev/null &
pid_counter=$!
tendermint start --mode validator --abci grpc > tendermint.log &
pid_tendermint=$!
sleep 5
echo "running test"
bash test/app/counter_test.sh "Counter over GRPC"
kill -9 $pid_counter $pid_tendermint
}
function counter_over_grpc_grpc() {
rm -rf $TMHOME
tendermint init validator
echo "Starting counter_over_grpc_grpc (ie. with grpc broadcast_tx)"
abci-cli counter --serial --abci grpc > /dev/null &
pid_counter=$!
sleep 1
GRPC_PORT=36656
tendermint start --mode validator --abci grpc --rpc.grpc-laddr tcp://localhost:$GRPC_PORT > tendermint.log &
pid_tendermint=$!
sleep 5
echo "running test"
GRPC_BROADCAST_TX=true bash test/app/counter_test.sh "Counter over GRPC via GRPC BroadcastTx"
kill -9 $pid_counter $pid_tendermint
}
case "$1" in
"kvstore_over_socket")
kvstore_over_socket
@@ -103,25 +49,10 @@ case "$1" in
"kvstore_over_socket_reorder")
kvstore_over_socket_reorder
;;
"counter_over_socket")
counter_over_socket
;;
"counter_over_grpc")
counter_over_grpc
;;
"counter_over_grpc_grpc")
counter_over_grpc_grpc
;;
*)
echo "Running all"
kvstore_over_socket
echo ""
kvstore_over_socket_reorder
echo ""
counter_over_socket
echo ""
counter_over_grpc
echo ""
counter_over_grpc_grpc
esac

View File

@@ -28,9 +28,8 @@ var (
}
// The following specify randomly chosen values for testnet nodes.
nodeDatabases = uniformChoice{"goleveldb", "cleveldb", "rocksdb", "boltdb", "badgerdb"}
// FIXME: grpc disabled due to https://github.com/tendermint/tendermint/issues/5439
nodeABCIProtocols = uniformChoice{"unix", "tcp", "builtin"} // "grpc"
nodeDatabases = uniformChoice{"goleveldb", "cleveldb", "rocksdb", "boltdb", "badgerdb"}
nodeABCIProtocols = uniformChoice{"unix", "tcp", "builtin", "grpc"}
nodePrivvalProtocols = uniformChoice{"file", "unix", "tcp", "grpc"}
// FIXME: v2 disabled due to flake
nodeFastSyncs = uniformChoice{"v0"} // "v2"
@@ -45,6 +44,7 @@ var (
"restart": 0.1,
}
evidence = uniformChoice{0, 1, 10}
txSize = uniformChoice{1024, 10240} // either 1kb or 10kb
)
// Generate generates random testnets using the given RNG.
@@ -93,6 +93,7 @@ func generateTestnet(r *rand.Rand, opt map[string]interface{}) (e2e.Manifest, er
KeyType: opt["keyType"].(string),
Evidence: evidence.Choose(r).(int),
QueueType: opt["queueType"].(string),
TxSize: int64(txSize.Choose(r).(int)),
}
var p2pNodeFactor int
@@ -128,7 +129,6 @@ func generateTestnet(r *rand.Rand, opt map[string]interface{}) (e2e.Manifest, er
// First we generate seed nodes, starting at the initial height.
for i := 1; i <= numSeeds; i++ {
node := generateNode(r, e2e.ModeSeed, 0, manifest.InitialHeight, false)
node.QueueType = manifest.QueueType
if p2pNodeFactor == 0 {
node.DisableLegacyP2P = manifest.DisableLegacyP2P
@@ -154,7 +154,6 @@ func generateTestnet(r *rand.Rand, opt map[string]interface{}) (e2e.Manifest, er
node := generateNode(
r, e2e.ModeValidator, startAt, manifest.InitialHeight, i <= 2)
node.QueueType = manifest.QueueType
if p2pNodeFactor == 0 {
node.DisableLegacyP2P = manifest.DisableLegacyP2P
} else if p2pNodeFactor%i == 0 {
@@ -190,7 +189,7 @@ func generateTestnet(r *rand.Rand, opt map[string]interface{}) (e2e.Manifest, er
nextStartAt += 5
}
node := generateNode(r, e2e.ModeFull, startAt, manifest.InitialHeight, false)
node.QueueType = manifest.QueueType
if p2pNodeFactor == 0 {
node.DisableLegacyP2P = manifest.DisableLegacyP2P
} else if p2pNodeFactor%i == 0 {

View File

@@ -52,12 +52,11 @@ seeds = ["seed02"]
[node.validator03]
database = "badgerdb"
seeds = ["seed01"]
# FIXME: should be grpc, disabled due to https://github.com/tendermint/tendermint/issues/5439
#abci_protocol = "grpc"
abci_protocol = "grpc"
persist_interval = 3
perturb = ["kill"]
privval_protocol = "grpc"
retain_blocks = 5
retain_blocks = 7
[node.validator04]
abci_protocol = "builtin"
@@ -70,8 +69,7 @@ database = "cleveldb"
fast_sync = "v0"
seeds = ["seed02"]
start_at = 1005 # Becomes part of the validator set at 1010
# FIXME: should be grpc, disabled due to https://github.com/tendermint/tendermint/issues/5439
#abci_protocol = "grpc"
abci_protocol = "grpc"
perturb = ["kill", "pause", "disconnect", "restart"]
privval_protocol = "tcp"
@@ -82,7 +80,7 @@ start_at = 1010
fast_sync = "v0"
persistent_peers = ["validator01", "validator02", "validator03", "validator04", "validator05"]
perturb = ["restart"]
retain_blocks = 5
retain_blocks = 7
[node.full02]
mode = "full"

Some files were not shown because too many files have changed in this diff Show More