Compare commits

279 Commits

Author SHA1 Message Date
Cyrus Goh
5182ffee25 docs: master → docs-staging (#5990)
* Makefile: always pull image in proto-gen-docker. (#5953)

The `proto-gen-docker` target didn't pull an updated Docker image, and would use a local image if present, which could be outdated and produce wrong results.

* test: fix TestPEXReactorRunning data race (#5955)

Fixes #5941.

Not entirely sure that this will fix the problem (couldn't reproduce), but in any case this is an artifact of a hack in the P2P transport refactor to make it work with the legacy P2P stack, and will be removed when the refactor is done anyway.

* test/fuzz: move fuzz tests into this repo (#5918)

Co-authored-by: Emmanuel T Odeke <emmanuel@orijtech.com>

Closes #5907

- add init-corpus to blockchain reactor
- remove validator-set FromBytes test
now that we have proto, we don't need to test it! bye amino
- simplify mempool test
do we want to test remote ABCI app?
- do not recreate mux on every crash in jsonrpc test
- update p2p pex reactor test
- remove p2p/listener test
the API has changed + I did not understand what it was testing anyway
- update secretconnection test
- add readme and makefile
- list inputs in readme
- add nightly workflow
- remove blockchain fuzz test
EncodeMsg / DecodeMsg no longer exist

* docker: don't login when in PR (#5961)

* docker: release Linux/ARM64 image (#5925)

Co-authored-by: Marko <marbar3778@yahoo.com>

* p2p: make PeerManager.DialNext() and EvictNext() block (#5947)

See #5936 and #5938 for background.

The plan was initially to have `DialNext()` and `EvictNext()` return a channel. However, implementing this became unnecessarily complicated and error-prone. As an example, the channel would be both consumed and populated (via method calls) by the same driving method (e.g. `Router.dialPeers()`) which could easily cause deadlocks where a method call blocked while sending on the channel that the caller itself was responsible for consuming (but couldn't since it was busy making the method call). It would also require a set of goroutines in the peer manager that would interact with the goroutines in the router in non-obvious ways, and fully populating the channel on startup could cause deadlocks with other startup tasks. Several issues like these made the solution hard to reason about.

I therefore simply made `DialNext()` and `EvictNext()` block until the next peer was available, using internal triggers to wake these methods up in a non-blocking fashion when any relevant state changes occurred. This proved much simpler to reason about, since there are no goroutines in the peer manager (except for trivial retry timers), nor any blocking channel sends, and it instead relies entirely on the existing goroutine structure of the router for concurrency. This also happens to be the same pattern used by the `Transport.Accept()` API, following Go stdlib conventions, so all router goroutines end up using a consistent pattern as well.
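
For illustration, here is a minimal sketch of the blocking-accessor pattern described above (hypothetical names, not the actual `PeerManager` code): callers block until a candidate is available, and state changes wake them up via a condition variable rather than dedicated goroutines or blocking channel sends.

```go
package p2p

import "sync"

// dialQueue is a minimal sketch of the blocking-accessor pattern described
// above: callers block in Next() until a candidate is available, and any
// state change wakes them via a condition variable rather than a goroutine.
type dialQueue struct {
	mtx        sync.Mutex
	cond       *sync.Cond
	candidates []string // peer IDs ready to be dialed (illustrative representation)
}

func newDialQueue() *dialQueue {
	q := &dialQueue{}
	q.cond = sync.NewCond(&q.mtx)
	return q
}

// Add registers a dial candidate and wakes any blocked Next() callers.
func (q *dialQueue) Add(peerID string) {
	q.mtx.Lock()
	defer q.mtx.Unlock()
	q.candidates = append(q.candidates, peerID)
	q.cond.Broadcast() // non-blocking wake-up on state change
}

// Next blocks until a dial candidate is available, then returns it.
func (q *dialQueue) Next() string {
	q.mtx.Lock()
	defer q.mtx.Unlock()
	for len(q.candidates) == 0 {
		q.cond.Wait()
	}
	next := q.candidates[0]
	q.candidates = q.candidates[1:]
	return next
}
```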

* libs/log: format []byte as hexadecimal string (uppercased) (#5960)

Closes: #5806 

Co-authored-by: Lanie Hei <heixx011@umn.edu>
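
As a rough illustration of the behavior described above (not the actual `libs/log` code), a sketch that renders `[]byte` values as uppercase hexadecimal before they reach the logger:

```go
package main

import (
	"fmt"
	"log"
)

// hexify renders []byte values as uppercase hexadecimal strings so that raw
// binary (hashes, addresses) is readable in log output; other values pass
// through unchanged. This mirrors the behavior described above, not the
// actual libs/log implementation.
func hexify(keyvals ...interface{}) []interface{} {
	out := make([]interface{}, len(keyvals))
	for i, v := range keyvals {
		if b, ok := v.([]byte); ok {
			out[i] = fmt.Sprintf("%X", b) // %X => uppercase hex
		} else {
			out[i] = v
		}
	}
	return out
}

func main() {
	hash := []byte{0xde, 0xad, 0xbe, 0xef}
	log.Println(hexify("block_hash", hash)...) // block_hash DEADBEEF
}
```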

* docs: log level docs (#5945)

## Description

add section on configuring log levels

Closes: #XXX

* .github: fix fuzz-nightly job (#5965)

outputs is a property of the job, not an individual step.

* e2e: add control over the log level of nodes (#5958)

* mempool: fix reactor tests (#5967)

## Description

Update the faux router to either drop channel errors or handle them based on an argument. This prevents deadlocks in tests where we try to send an error on the mempool channel but there is no reader.

Closes: #5956
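
A sketch of that pattern with hypothetical names (the real test helper differs): an error channel must either be consumed or drained, otherwise a blocking send from the reactor deadlocks the test.

```go
package mempool_test

import "testing"

// runFauxRouter is a sketch (hypothetical helper, not the actual test code) of
// the pattern described above: the mempool reactor sends errors on a channel,
// and if no goroutine reads that channel the send blocks and the test
// deadlocks. The helper either records errors or silently drains them.
func runFauxRouter(t *testing.T, errCh <-chan error, handleErrors bool, done <-chan struct{}) {
	go func() {
		for {
			select {
			case err := <-errCh:
				if handleErrors && err != nil {
					t.Errorf("unexpected peer error: %v", err) // Errorf is safe from a goroutine
				}
				// when handleErrors is false the error is simply dropped
			case <-done:
				return
			}
		}
	}()
}
```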

* p2p: improve peerStore prototype (#5954)

This improves the `peerStore` prototype by e.g.:

* Using a database with Protobuf for persistence, but also keeping full peer set in memory for performance.
* Simplifying the API, by taking/returning struct copies for safety, and removing errors for in-memory operations.
* Caching the ranked peer set, as a temporary solution until a better data structure is implemented.
* Adding `PeerManagerOptions.MaxPeers` and pruning the peer store (based on rank) when it's full.
* Rewriting `PeerAddress` to be independent of `url.URL`, normalizing it and tightening semantics.
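
A rough sketch of the resulting shape, with placeholder types standing in for the real database and Protobuf-encoded peer records (assumptions, not the actual API):

```go
package p2p

// kvDB is a minimal key-value interface standing in for the real database
// backend; it is part of this sketch, not the actual API.
type kvDB interface {
	Set(key, value []byte) error
}

// peerInfo and its encoding are placeholders for the Protobuf-backed peer
// record described above.
type peerInfo struct {
	ID    string
	Score int64
}

func (p peerInfo) marshal() []byte { return []byte(p.ID) } // stand-in for Protobuf marshaling

// peerStore keeps the full peer set in memory for fast reads and ranking,
// and writes every mutation through to the database for persistence.
type peerStore struct {
	db     kvDB
	peers  map[string]peerInfo
	ranked []peerInfo // cached ranked list, invalidated on writes
}

// Set stores a copy of the peer both in memory and on disk, and invalidates
// the cached ranking.
func (s *peerStore) Set(peer peerInfo) error {
	if err := s.db.Set([]byte("peers/"+peer.ID), peer.marshal()); err != nil {
		return err
	}
	s.peers[peer.ID] = peer
	s.ranked = nil
	return nil
}

// Get returns a copy, so callers cannot mutate the store's internal state.
func (s *peerStore) Get(id string) (peerInfo, bool) {
	p, ok := s.peers[id]
	return p, ok
}
```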

* p2p: simplify PeerManager upgrade logic (#5962)

Follow-up from #5947, branched off of #5954.

This simplifies the upgrade logic by adding explicit eviction requests, which can also be useful for other use-cases (e.g. if we need to ban a peer that's misbehaving). Changes:

* Add `evict` map which queues up peers to explicitly evict.
* `upgrading` now only tracks peers that we're upgrading via dialing (`DialNext` → `Dialed`/`DialFailed`).
* `Dialed` will unmark `upgrading`, and queue `evict` if still beyond capacity.
* `Accepted` will pick a random lower-scored peer to upgrade to, if appropriate, and doesn't care about `upgrading` (the dial will fail later, since it's already connected).
* `EvictNext` will return a peer scheduled in `evict` if any, otherwise if beyond capacity just evict the lowest-scored peer.

This limits all of the `upgrading` logic to `DialNext`, `Dialed`, and `DialFailed`, making it much simpler, and it should generally do the right thing in all cases I can think of.
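
For illustration, a simplified sketch of the `EvictNext` decision described above, using hypothetical field names:

```go
package p2p

// peerManager is a sketch of the bookkeeping described above; field names are
// illustrative, not the actual implementation.
type peerManager struct {
	maxConnected int
	connected    map[string]bool  // currently connected peers
	evict        map[string]bool  // peers explicitly queued for eviction
	scores       map[string]int64 // peer scores, higher is better
}

// evictNext returns the next peer to evict, if any: explicit eviction
// requests first, otherwise the lowest-scored peer when over capacity.
func (m *peerManager) evictNext() (string, bool) {
	for id := range m.evict {
		delete(m.evict, id)
		if m.connected[id] {
			return id, true
		}
	}
	if len(m.connected) <= m.maxConnected {
		return "", false
	}
	var lowest string
	found := false
	for id := range m.connected {
		if !found || m.scores[id] < m.scores[lowest] {
			lowest, found = id, true
		}
	}
	return lowest, found
}
```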

* p2p: add PeerManager.Advertise() (#5957)

Adds a naïve `PeerManager.Advertise()` method that the new PEX reactor can use to fetch addresses to advertise, as well as some other `FIXME`s on address advertisement.

* blockchain v0: fix waitgroup data race (#5970)

## Description

Fixes the data race in usage of `WaitGroup`. Specifically, the case where we invoke `Wait` _before_ the first delta `Add` call when the current waitgroup counter is zero. See https://golang.org/pkg/sync/#WaitGroup.Add.

Still not sure how this manifests itself in a test since the reactor has to be stopped virtually immediately after being started (I think?).

Regardless, this is the appropriate fix.

closes: #5968
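
The constraint in question, sketched in isolation (illustrative only, not the reactor code): `Add` calls that start when the counter is zero must not race with `Wait`.

```go
package main

import "sync"

// Racy pattern (the kind of bug described above): Wait may run concurrently
// with the first Add while the counter is zero, which the sync.WaitGroup
// documentation forbids.
func racy(work []func()) {
	var wg sync.WaitGroup
	go func() {
		for _, fn := range work {
			wg.Add(1) // Add happens in another goroutine...
			go func(f func()) { defer wg.Done(); f() }(fn)
		}
	}()
	wg.Wait() // ...and can race with this Wait while the counter is still zero.
}

// Fixed pattern: perform all Add calls before Wait can possibly observe a
// zero counter, then wait.
func fixed(work []func()) {
	var wg sync.WaitGroup
	wg.Add(len(work)) // counter is set up-front, before any Wait
	for _, fn := range work {
		go func(f func()) { defer wg.Done(); f() }(fn)
	}
	wg.Wait()
}

func main() {
	fixed([]func(){func() {}})
}
```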

* tests: fix `make test` (#5966)

## Description
 
- bump deadlock dep to master
  - fixes `make test` since we now use `deadlock.Once`

Closes: #XXX

* terminate go-fuzz gracefully (w/ SIGINT) (#5973)

and preserve exit code.

```
2021/01/26 03:34:49 workers: 2, corpus: 4 (8m28s ago), crashers: 0, restarts: 1/9976, execs: 11013732 (21596/sec), cover: 121, uptime: 8m30s
make: *** [fuzz-mempool] Terminated
Makefile:5: recipe for target 'fuzz-mempool' failed
Error: Process completed with exit code 124.
```

https://github.com/tendermint/tendermint/runs/1766661614

`continue-on-error` should make GH ignore any error codes.

* p2p: add prototype PEX reactor for new stack (#5971)

This adds a prototype PEX reactor for the new P2P stack.

* proto/p2p: rename PEX messages and fields (#5974)

Fixes #5899 by renaming a bunch of P2P Protobuf entities (while maintaining wire compatibility):

* `Message` to `PexMessage` (as it's only used for PEX messages).
* `PexAddrs` to `PexResponse`.
* `PexResponse.Addrs` to `PexResponse.Addresses`.
* `NetAddress` to `PexAddress` (as it's only used by PEX).

* p2p: resolve PEX addresses in PEX reactor (#5980)

This changes the new prototype PEX reactor to resolve peer address URLs into IP/port PEX addresses itself. Branched off of #5974.

I've spent some time thinking about address handling in the P2P stack. We currently use `PeerAddress` URLs everywhere, except for two places: when dialing a peer, and when exchanging addresses via PEX. We had two options:

1. Resolve addresses to endpoints inside `PeerManager`. This would introduce a lot of added complexity: we would have to track connection statistics per endpoint, have goroutines that asynchronously resolve and refresh these endpoints, deal with resolve scheduling before dialing (which is trickier than it sounds since it involves multiple goroutines in the peer manager and router and messes with peer rating order), handle IP address visibility issues, and so on.

2. Resolve addresses to endpoints (IP/port) only where they're used: when dialing, and in PEX. Everywhere else we use URLs.

I went with 2, because this significantly simplifies the handling of hostname resolution, and because I really think the PEX reactor should migrate to exchanging URLs instead of IP/port numbers anyway -- this allows operators to use DNS names for validators (and can easily migrate them to new IPs and/or load balance requests), and also allows different protocols (e.g. QUIC and `MemoryTransport`). Happy to discuss this.
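
A minimal sketch of option 2, assuming a URL-shaped peer address and a hypothetical `endpoint` type: the URL is kept everywhere and only resolved into IP/port pairs at the point of use (dialing, or building PEX responses).

```go
package p2p

import (
	"context"
	"net"
	"net/url"
	"strconv"
)

// endpoint is an illustrative IP/port pair, standing in for the transport
// endpoints described above.
type endpoint struct {
	IP   net.IP
	Port uint16
}

// resolveAddress is a sketch of option 2 above: peer addresses stay URLs, and
// are resolved into concrete IP/port endpoints only when dialing or when
// answering PEX requests.
func resolveAddress(ctx context.Context, addr string) ([]endpoint, error) {
	u, err := url.Parse(addr) // e.g. "mconn://nodeid@validator.example.com:26656" (hypothetical)
	if err != nil {
		return nil, err
	}
	port, err := strconv.ParseUint(u.Port(), 10, 16)
	if err != nil {
		return nil, err
	}
	ips, err := net.DefaultResolver.LookupIP(ctx, "ip", u.Hostname())
	if err != nil {
		return nil, err
	}
	endpoints := make([]endpoint, 0, len(ips))
	for _, ip := range ips {
		endpoints = append(endpoints, endpoint{IP: ip, Port: uint16(port)})
	}
	return endpoints, nil
}
```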

* test/p2p: close transports to avoid goroutine leak failures (#5982)

* mempool: fix TestReactorNoBroadcastToSender (#5984)

## Description

Looks like I missed a test in the original PR when fixing the tests.

Closes: #5956

* mempool: fix mempool tests timeout (#5988)

* p2p: use stopCtx when dialing peers in Router (#5983)

This ensures we don't leak dial goroutines when shutting down the router.

* docs: fix typo in state sync example (#5989)

Co-authored-by: Erik Grinaker <erik@interchain.berlin>
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
Co-authored-by: Marko <marbar3778@yahoo.com>
Co-authored-by: odidev <odidev@puresoftware.com>
Co-authored-by: Lanie Hei <heixx011@umn.edu>
Co-authored-by: Callum Waters <cmwaters19@gmail.com>
Co-authored-by: Aleksandr Bezobchuk <alexanderbez@users.noreply.github.com>
Co-authored-by: Sergey <52304443+c29r3@users.noreply.github.com>
2021-01-26 11:46:21 -08:00
Marko
a72fb2fbad docs: change v0.33 version (#5950)
## Description

- change version for v0.33.x

Closes: #XXX
2021-01-22 19:25:48 +00:00
Aleksandr Bezobchuk
68bd2116f0 mempool: p2p refactor (#5919) 2021-01-22 09:34:12 -05:00
Erik Grinaker
670e9b427b p2p: improve PeerManager prototype (#5936)
This improves the prototype peer manager by:

* Exporting `PeerManager`, making it accessible by e.g. reactors.
* Replacing `Router.SubscribePeerUpdates()` with `PeerManager.Subscribe()`.
* Tracking address/peer connection statistics, and retrying dial failures with exponential backoff.
* Prioritizing peers, with persistent peers configuration.
* Limiting simultaneous connections.
* Evicting peers and upgrading to higher-priority peers.
* Tracking peer heights, as a workaround for legacy shared peer state APIs.

This is getting to a point where we need to determine precise semantics and implement tests, so we should figure out whether it's a reasonable abstraction that we want to use. The main questions are around the API model (i.e. synchronous method calls with the router polling the manager, vs. an event-driven model using channels, vs. the peer manager calling methods on the router to connect/disconnect peers), and who should have the responsibility of managing actual connections (currently the router, while the manager only tracks peer state).
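
For the dial-retry part, a sketch of a typical exponential-backoff delay calculation (constants and names are illustrative, not the actual `PeerManager` fields):

```go
package p2p

import (
	"math"
	"time"
)

// retryDelay is a sketch of the exponential-backoff behavior described above:
// the delay doubles with each failed dial attempt and is capped at a maximum;
// persistent peers could use a higher cap so they keep being retried.
func retryDelay(failures uint32, base, max time.Duration) time.Duration {
	if failures == 0 {
		return 0 // dial immediately on the first attempt
	}
	d := time.Duration(float64(base) * math.Pow(2, float64(failures-1)))
	if d > max {
		return max
	}
	return d
}
```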
2021-01-21 18:07:54 +00:00
Aleksandr Bezobchuk
15c1936b85 p2p: revise shim log levels (#5940)
Downgrade some noisy logs to DEBUG.
2021-01-21 14:18:27 +00:00
Marko
c63854f732 proto: docker deployment (#5931) 2021-01-20 16:26:37 +01:00
Jack Yeh
527550f372 Update metrics.md (#5930) 2021-01-20 08:43:37 +01:00
Marko
64961e2267 e2e: releases nightly (#5906) 2021-01-19 21:38:43 +01:00
Tess Rinearson
eff1b16a0c changelog: update changelog for v0.34.3 (#5927)
(No pending changelog updates, since we all forgot to update the pending changelog with all these changes 🤡)
2021-01-19 17:01:59 +00:00
Tess Rinearson
ea77360ecf .github/workflows: enable manual dispatch for some workflows (#5929) 2021-01-19 17:14:00 +01:00
Tess Rinearson
5972105b06 docs: update package-lock.json (#5928) 2021-01-19 16:51:50 +01:00
Callum
af723eca8a use correct source of evidence time
Conflicting votes are now sent to the evidence pool to form duplicate vote evidence only once
the height at which the evidence occurred is finished and the block time has been finalized.
2021-01-19 16:00:02 +01:00
Aleksandr Bezobchuk
62d7a5d028 blockchain v0: p2p refactor (#5858) 2021-01-18 16:35:11 -05:00
Erik Grinaker
96215a06ed p2p: add prototype peer lifecycle manager (#5882)
This adds a prototype peer lifecycle manager, `peerManager`, which stores peer data in an internal `peerStore`. The overall idea here is to have methods for peer lifecycle events which exchange a very narrow subset of peer data, and to keep all of the peer metadata (i.e. the `peerInfo` struct) internal, to decouple this from the router and simplify concurrency control. See `peerManager` GoDoc for more information.

The router is still responsible for actually dialing and accepting peer connections, and routing messages across them, but the peer manager is responsible for determining which peers to dial next, preventing multiple connections being established for the same peer (e.g. both inbound and outbound), and making sure we don't dial the same peer several times in parallel. Later it will also track retries and exponential backoff, as well as peer and address quality. It also assumes responsibility for peer updates subscriptions.

It's a bit unclear to me whether we want the peer manager to take on the responsibility of actually dialing and accepting connections as well, or if it should only be tracking peer state for the router while the router is responsible for all transport concerns. Let's revisit this later.
2021-01-18 19:56:13 +01:00
Tess Rinearson
3ef0b90afd readme: add security mailing list (#5916)
No one knows we have this mailing list 🙈
2021-01-18 12:05:00 +00:00
dependabot[bot]
d34e7c5b51 build(deps): Bump vuepress-theme-cosmos from 1.0.179 to 1.0.180 in /docs (#5915)
Bumps [vuepress-theme-cosmos](https://github.com/cosmos/vuepress-theme-cosmos) from 1.0.179 to 1.0.180.
- [Commits](https://github.com/cosmos/vuepress-theme-cosmos/commits) (see full diff in the compare view)
2021-01-18 11:22:12 +00:00
Tess Rinearson
d8a2eb95bb config: fix mispellings (#5914)
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2021-01-17 20:26:00 +00:00
Marko
eaa948ab7d proto: bump gogoproto (1.3.2) (#5886)
## Description

- bump gogoproto (1.3.2)
- regenerate proto files

Closes: #XXX
2021-01-17 16:23:41 +00:00
Callum Waters
5cbb8263b4 patch concurrency issue with pruning in the light store (#5910) 2021-01-17 17:14:14 +01:00
Tess Rinearson
a2bd09253c .github/codeowners: add alexanderbez (#5913)
* .github/codeowners: add alexanderbez

* Update .github/CODEOWNERS

Co-authored-by: Marko <marbar3778@yahoo.com>

Co-authored-by: Marko <marbar3778@yahoo.com>
2021-01-15 16:39:21 +00:00
Callum Waters
ca285844ea light: fix light store deadlock (#5901) 2021-01-14 17:22:09 +01:00
dependabot[bot]
74bee8d834 build(deps): Bump google.golang.org/grpc from 1.34.0 to 1.35.0 (#5902)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.34.0 to 1.35.0.

Release notes (sourced from google.golang.org/grpc's releases):

Release 1.35.0

Behavior Changes
- roundrobin: strip attributes from addresses (#4024)
- balancer: set RPC metadata in address attributes, instead of Metadata field (#4041)

New Features
- support unix-abstract schema (#4079) (special thanks: @resec)
- xds: implement experimental RouteAction timeout support (#4116)
- xds: implement experimental circuit breaking support (#4050)

Bug Fixes
- xds: `server_features` should be a child of `xds_servers` and not a sibling (#4087)
- xds: NACK more invalid RDS responses (#4120)

Release 1.34.1
- xds client: Updated v3 type for http connection manager (#4137)
- lrs: use JSON for locality's String representation (#4135)
- eds/lrs: handle nil when LRS is disabled (#4086)
- client: fix "unix" scheme handling for some corner cases (#4021)

Commits
- 577eb69 Change version to 1.35.0 (#4140)
- fb40d83 xds interop: turn on circuit breaking test (#4144)
- 083393f xds/resolver: fix resource deletion (#4143)
- 85e55dc interop: update client for xds testing support (#4108)
- 6a318bb xds: add HTTP connection manager max_stream_duration support (#4122)
- 0bd76be lrs: use JSON for locality's String representation (#4135)
- ecc9a99 interop: remove test.proto clones/variants and use grpc-proto repo instead (#...
- 4f80d77 github: enable CodeQL checker (#4134)
- 829919d xds client: Updated v3 type for http connection manager (#4137)
- f4a20d2 xds: NACK more invalid RDS responses (#4120)
- Additional commits viewable in the [compare view](https://github.com/grpc/grpc-go/compare/v1.34.0...v1.35.0)
2021-01-14 15:59:39 +00:00
dependabot[bot]
211bc08217 build(deps): Bump github.com/stretchr/testify from 1.6.1 to 1.7.0 (#5897)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.6.1 to 1.7.0.

Release notes (sourced from github.com/stretchr/testify's releases): minor feature improvements and bug fixes.

Commits
- acba37e Only use repeatability if no repeatability left
- eb8c41e Add more tests to mock package
- a5830c5 Extract method to evaluate closest match
- 1962448 Use Repeatability as tie-breaker for closest match
- 92707c0 Fixed the link to not point to assert only
- 05dd0b2 Updated the readme to point to pkg.dev
- c26b7f3 Update assertions.go
- 8fb4b24 [Fix] The most recent changes to golang/protobuf breaks the spew Circular dat...
- dc8af72 add generated code for positive/negative assertion
- 1544508 add assert positive/negative
- Additional commits viewable in the [compare view](https://github.com/stretchr/testify/compare/v1.6.1...v1.7.0)
2021-01-14 10:06:23 +00:00
Tess Rinearson
178d421c77 changelog: update changelogs to reflect changes released in 0.34.2 2021-01-14 10:57:01 +01:00
Callum Waters
bada08c50c state sync: last consensus params height is not set (#5889) 2021-01-12 14:41:16 +01:00
Callum Waters
956b59af87 evidence: buffer evidence from consensus (#5890) 2021-01-12 12:50:49 +01:00
Marko
f05788e632 privval: Query validator key (#5876)
## Description

- Query validator key when a remote signer is used. This is supported for gRPC remote signing and filePV only.


Closes: #3009
2021-01-12 10:06:33 +00:00
Callum Waters
5b698ed13b tx indexer: use different field separator for keys (#5865) 2021-01-12 10:55:47 +01:00
dependabot[bot]
1d16e39c0e build(deps): Bump gaurav-nelson/github-action-markdown-link-check (#5884)
Bumps [gaurav-nelson/github-action-markdown-link-check](https://github.com/gaurav-nelson/github-action-markdown-link-check) from 1.0.11 to 1.0.12.
- [Release notes](https://github.com/gaurav-nelson/github-action-markdown-link-check/releases)
- [Commits](https://github.com/gaurav-nelson/github-action-markdown-link-check/compare/1.0.11...0fe4911067fa322422f325b002d2038ba5602170)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-01-11 14:46:30 +01:00
Tess Rinearson
78e8169750 docs: fix broken redirect links (#5881) 2021-01-09 15:05:34 +01:00
Callum Waters
03a6fb2777 state: prune states using an iterator (#5864) 2021-01-08 17:05:27 +01:00
Erik Grinaker
c61cd3fd05 p2p: add Router prototype (#5831)
Early but functional prototype of the new `p2p.Router`, see its GoDoc comment for details on how it works. Expect much of this logic to change and improve as we evolve the new P2P stack.

There is a simple test that sets up an in-memory network of four routers with reactors and passes messages between them, but otherwise no exhaustive tests since this is very much a work-in-progress.
2021-01-08 15:32:11 +00:00
Callum Waters
385ea1db7d store: use db iterators for pruning and range-based queries (#5848) 2021-01-08 13:12:54 +01:00
Erik Grinaker
66ba12d9bc test/e2e: tolerate up to 2/3 missed signatures for a validator (#5878)
E2E tests often fail due to fast sync stalls causing the validator to miss signing blocks. This increases the tolerance for missed signatures to 2/3 to allow validators to spend more time starting up.
2021-01-08 11:06:51 +00:00
Marko
09cf0bcb01 privval: add grpc (#5725)
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
2021-01-06 10:49:30 -08:00
Aleksandr Bezobchuk
e986602649 evidence: p2p refactor (#5747) 2021-01-06 11:53:18 -05:00
Tess Rinearson
2c95b0b5e0 changelog: update with changes released in 0.34.1 (#5875) 2021-01-06 16:56:13 +01:00
Erik Grinaker
a0d4d85375 os: simplify EnsureDir() (#5871)
#5852 fixed an issue with error propagation in `os.EnsureDir()`. However, this function is basically identical to `os.MkdirAll()`, and can be replaced entirely with a call to it. We keep the function for backwards compatibility.
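
A minimal sketch of what such a backwards-compatibility wrapper can look like (illustrative, not necessarily the exact `libs/os` code):

```go
package tmos

import (
	"fmt"
	"os"
)

// EnsureDir sketch: as noted above, the function is essentially a thin wrapper
// around os.MkdirAll, kept for backwards compatibility.
func EnsureDir(dir string, mode os.FileMode) error {
	if err := os.MkdirAll(dir, mode); err != nil {
		return fmt.Errorf("could not create directory %q: %w", dir, err)
	}
	return nil
}
```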
2021-01-06 15:27:35 +00:00
Aleksandr Bezobchuk
8bf77d9b1a statesync: do not recover panic on peer updates (#5869) 2021-01-06 10:07:10 -05:00
Erik Grinaker
1ccd23ca1d p2p: fix MConnection inbound traffic statistics and rate limiting (#5868)
Fixes #5866. Inbound traffic monitoring (and by extension inbound rate limiting) was inadvertently removed in 660e72a.
2021-01-06 15:38:23 +01:00
dependabot[bot]
47f5650615 build(deps): Bump codecov/codecov-action from v1.2.0 to v1.2.1
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from v1.2.0 to v1.2.1.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1.2.0...e156083f13aff6830c92fc5faa23505779fbf649)

Signed-off-by: dependabot[bot] <support@github.com>
2021-01-06 14:29:56 +01:00
dependabot[bot]
5b17c01e41 build(deps): Bump codecov/codecov-action from v1.1.1 to v1.2.0 (#5863)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Aleksandr Bezobchuk <alexanderbez@users.noreply.github.com>
Co-authored-by: Marko <marbar3778@yahoo.com>
2021-01-05 08:51:21 -08:00
Callum Waters
9b9222f461 store: order-preserving varint key encoding (#5771) 2021-01-05 16:53:26 +01:00
Erik Grinaker
0555772d3a blockchain/v0: stop tickers on poolRoutine exit (#5860)
Fixes #5841.
2021-01-05 14:45:24 +00:00
Erik Grinaker
1e1d087494 blockchain/v2: fix missing mutex unlock (#5862)
Fixes #5843.
2021-01-05 14:35:20 +00:00
Erik Grinaker
85353d9af5 test/consensus: improve WaitGroup handling in Byzantine tests (#5861)
Fixes #5845.
2021-01-05 10:44:03 +00:00
Erik Grinaker
1570d26f84 test/e2e: add conceptual overview (#5857)
This should be useful to understand the overall purpose and structure of the end-to-end tests.
2021-01-04 17:51:49 +00:00
Aleksandr Bezobchuk
c75dee5a02 state sync: Fix TestSyncer_SyncAny (#5835) 2021-01-04 10:31:20 -05:00
Erik Grinaker
17ca6c6c98 test/e2e: disable abci/grpc and blockchain/v2 due to flake (#5854)
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2021-01-04 15:19:07 +00:00
Erik Grinaker
46964f62db p2p: fix IPv6 address handling in new transport API (#5853)
The old code naïvely concatenated IP and port, which doesn't work for IPv6 addresses where `:` can be part of the IP as well.
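
A small self-contained example of the bug class and the standard-library fix (illustrative, not the actual transport code):

```go
package main

import (
	"fmt"
	"net"
)

// Naive concatenation breaks for IPv6, where ":" is part of the address;
// net.JoinHostPort adds the required brackets.
func main() {
	host, port := "2001:db8::1", "26656"

	naive := host + ":" + port
	safe := net.JoinHostPort(host, port)

	fmt.Println(naive) // 2001:db8::1:26656  (ambiguous: which part is the port?)
	fmt.Println(safe)  // [2001:db8::1]:26656

	// SplitHostPort round-trips the bracketed form correctly.
	h, p, err := net.SplitHostPort(safe)
	fmt.Println(h, p, err) // 2001:db8::1 26656 <nil>
}
```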
2021-01-04 15:05:43 +00:00
Erik Grinaker
9c47b572f7 libs/os: EnsureDir now returns IO errors and checks file type (#5852)
Fixes #5839.
2021-01-04 14:30:38 +00:00
Erik Grinaker
1b18d26644 abci/grpc: fix invalid mutex handling in StopForError() (#5849)
Fixes #5840.
2021-01-04 13:56:01 +00:00
Erik Grinaker
d39eb74daa tools/tm-signer-harness: fix listener leak in newTestHarnessListener() (#5850)
Fixes #5837.
2021-01-04 13:38:42 +00:00
Erik Grinaker
91bef75f62 p2p: rename PubKeyToID to NodeIDFromPubKey 2021-01-04 11:25:20 +01:00
Erik Grinaker
b4ce1de44a p2p: rename NodeInfo.DefaultNodeID to NodeID 2021-01-04 11:25:20 +01:00
Erik Grinaker
1b6df6783d p2p: replace PeerID with NodeID 2021-01-04 11:25:20 +01:00
Erik Grinaker
cc3c18a6a7 p2p: add NodeID.Validate(), replaces validateID() 2021-01-04 11:25:20 +01:00
Erik Grinaker
8e7d431f6f p2p: rename ID to NodeID 2021-01-04 11:25:20 +01:00
John Adler
3c1416b3d7 ABCI: Update readme to fix broken link to proto (#5847)
# Description

Proto definitions file was moved and link was broken. Fixed as relative link.
2021-01-03 19:17:21 -08:00
Anton Kaliaev
aef1ac7ba5 modify Reactor priorities (#5826)
blockchain/vX reactor priority was decreased because during normal operation
(i.e. when the node is not fast syncing) blockchain priority can't be
the same as consensus reactor priority. Otherwise, it's theoretically possible to
slow down consensus by constantly requesting blocks from the node.

NOTE: ideally blockchain/vX reactor priority would be dynamic, e.g. 10 (max) while
the node is fast syncing, decreasing to 5 once it is done (so it only serves blocks
for other nodes). But that's not possible now, therefore I decided to
focus on normal operation (priority = 5).

evidence and consensus critical messages are more important than
the mempool ones, hence priorities are bumped by 1 (from 5 to 6).

statesync reactor priority was changed from 1 to 5 to be the same as
blockchain/vX priority.

Refs https://github.com/tendermint/tendermint/issues/5816
2020-12-23 12:31:00 +00:00
Erik Grinaker
84ff991387 p2p: add MemoryTransport, an in-memory transport for testing (#5827) 2020-12-23 13:21:01 +01:00
Marko
bc1f1e5ffa ci: run goreleaser build (#5824)
# Description

- run build when on a branch matching RC[0-9]/**, in accordance with contributing.md release guide.

Closes: #5695
2020-12-22 08:06:37 -08:00
Callum Waters
c4730bb46a localnet: use 27000 port for prometheus (#5811) 2020-12-22 09:16:33 +01:00
dependabot[bot]
392acdc733 build(deps): Bump codecov/codecov-action from v1.0.15 to v1.1.1 (#5825)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from v1.0.15 to v1.1.1.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1.0.15...1fc7722ded4708880a5aea49f2bfafb9336f0c8d)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-12-22 09:13:37 +01:00
Marko
1128244f4f docs: specify master for tutorials (#5822)
## Description

Specify master for tutorials. 

I will have a followup PR for 0.34 that specifies 0.34

Ref: #5735
2020-12-21 17:36:07 +00:00
Marko
886442c111 abci: use protoio for length delimitation (#5818)
Migrate ABCI to use protoio (uint64 length delimitation) instead of specific int64 length delimiters.

Closes: #5783
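
A sketch of uvarint (uint64) length delimitation using only the standard library, to illustrate the framing scheme referred to above (the real code uses the protoio package):

```go
package main

import (
	"bufio"
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// writeDelimited frames a message with a varint-encoded uint64 length prefix,
// the scheme described above (a sketch, not the actual protoio code).
func writeDelimited(w *bufio.Writer, msg []byte) error {
	var lenBuf [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(lenBuf[:], uint64(len(msg)))
	if _, err := w.Write(lenBuf[:n]); err != nil {
		return err
	}
	_, err := w.Write(msg)
	return err
}

// readDelimited reads one varint-length-delimited message back.
func readDelimited(r *bufio.Reader) ([]byte, error) {
	length, err := binary.ReadUvarint(r)
	if err != nil {
		return nil, err
	}
	msg := make([]byte, length)
	_, err = io.ReadFull(r, msg)
	return msg, err
}

func main() {
	var buf bytes.Buffer
	w := bufio.NewWriter(&buf)
	_ = writeDelimited(w, []byte("abci request bytes"))
	_ = w.Flush()

	msg, _ := readDelimited(bufio.NewReader(&buf))
	fmt.Printf("%s\n", msg)
}
```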
2020-12-21 08:51:41 -08:00
Marko
c6691b91c2 ci: make timeout-minutes 8 for golangci (#5821) 2020-12-21 08:30:28 -08:00
dependabot[bot]
0a41711091 build(deps): Bump github.com/cosmos/iavl from 0.15.2 to 0.15.3 (#5814)
Bumps [github.com/cosmos/iavl](https://github.com/cosmos/iavl) from 0.15.2 to 0.15.3.

Release notes: v0.15.3 (see the CHANGELOG entry for 0.15.3, December 21, 2020).

Changelog (sourced from github.com/cosmos/iavl's changelog), 0.15.3 (December 21, 2020):
Special thanks to external contributors on this release: @odeke-em

Improvements
- #352 Reuse buffer to improve performance of `GetMembershipProof()` and `GetNonMembershipProof()`.

Commits
- 257e8b9 changelog: release 0.15.3 (#353)
- b2dffed convertVarIntToBytes: use reusable bytes array (#352)
- 9e510e5 github: run tests with 32-bit arch as well (#350)
- See full diff in the [compare view](https://github.com/cosmos/iavl/compare/v0.15.2...v0.15.3)
2020-12-21 15:28:56 +00:00
Anton Kaliaev
77deb710fb mempool: disable MaxBatchBytes (#5800)
@p4u from vocdoni.io reported that the mempool might behave incorrectly under a
high load. The consequences can range from pauses between blocks to the peers
disconnecting from this node.

My current theory is that the flowrate lib we're using to control flow
(multiplex over a single TCP connection) was not designed w/ large blobs
(1MB batch of txs) in mind.

I've tried decreasing the Mempool reactor priority, but that did not
have any visible effect. What actually worked is adding a time.Sleep
into mempool.Reactor#broadcastTxRoutine after each successful send,
which amounts to a crude manual form of flow control.

As a temporary remedy (until the mempool package is refactored),
max-batch-bytes was disabled. Transactions will be sent one by one,
without batching.

Closes #5796
2020-12-21 19:17:45 +04:00
Anton Kaliaev
6a056e050c mempool: introduce KeepInvalidTxsInCache config option (#5813)
When set to true, an invalid transaction will be kept in the cache (this may help some applications to protect against spam).

NOTE: this is a temporary config option. The more correct solution would be to add a TTL to each transaction (i.e. CheckTx may return a TTL in ResponseCheckTx).

Closes: #5751
2020-12-21 15:25:14 +04:00
dependabot[bot]
bdf688debc build(deps): Bump github.com/prometheus/client_golang from 1.8.0 to 1.9.0 (#5807)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.8.0 to 1.9.0.

Release notes / changelog (1.9.0 / 2020-12-17):
- [FEATURE] `NewPidFileFn` helper to create process collectors for processes whose PID is read from a file. (#804)
- [BUGFIX] promhttp: Prevent endless loop in `InstrumentHandler...` middlewares with invalid metric or label names. (#823)

Commits
- d89cf5a Merge pull request #826 from prometheus/beorn7/release
- 80ca9cd Cut release 1.9.0
- 8d16199 Update dependencies
- 8b73bd9 Merge pull request #823 from prometheus/beorn7/promhttp
- 98eb6cb promhttp: Correctly detect invalid metric and label names
- 37c26ed Merge pull request #821 from prometheus/beorn7/multierror
- 34ca120 Be more explicit about the multi-line properties of MultiError
- fd6d368 Merge pull request #819 from jubalh/sp
- cf6dc82 Correct spelling: possibilites -> possibilities
- 39b478e Added example api code showing how to add auth tokens and user agents to prom...
- Additional commits viewable in the [compare view](https://github.com/prometheus/client_golang/compare/v1.8.0...v1.9.0)
2020-12-19 06:44:11 +00:00
Erik Grinaker
72f041b759 p2p: fix data race in MakeSwitch test helper (#5810)
Fixes #5809.
2020-12-19 06:35:16 +00:00
Dev Ojha
8c0af72987 consensus: deprecate time iota ms (#5792)
time_iota_ms is intended to ensure that an honest validator always generates timestamps 
with time increasing monotonically. For this purpose, it always suffices to have this parameter
set to `1ms`. Allowing users to choose different numbers increases bug surface area.
Thus the code now ignores the user provided time_iota_ms parameter (marking it as unused), 
and uses 1ms internally.
2020-12-17 17:32:42 +01:00
Anton Kaliaev
ced66e4eb5 config: increase MaxPacketMsgPayloadSize to 1400
The MTU (Maximum Transmission Unit) for Ethernet is 1500 bytes.
The IP header and the TCP header take up at least 20 bytes each (more if
optional header fields are used), so the maximum payload for (non-Jumbo frame)
Ethernet is 1500 - 20 - 20 = 1460.
Source: https://stackoverflow.com/a/3074427/820520
2020-12-17 15:59:18 +04:00
Anton Kaliaev
be6c016664 consensus: change log level to error when adding vote 2020-12-17 15:59:18 +04:00
Anton Kaliaev
085fd66f33 p2p: do not format raw msg bytes
While debugging the mempool issue (#5796), I noticed we're spending
quite a bit of time encoding blobs of data which never get printed! The
reason is that filtering occurs at the level below, so the Go runtime rightfully
evaluates the function arguments.

I think it's okay to not format raw bytes.
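
A small example of the cost described above (illustrative, not the actual logger): arguments are evaluated by the caller before the logger decides to drop the entry, so formatting a large blob is paid even when the message is filtered.

```go
package main

import (
	"fmt"
	"time"
)

// filteredLogger drops Debug messages when debug logging is disabled, but the
// caller has already built the key/value arguments by then.
type filteredLogger struct {
	debugEnabled bool
}

func (l filteredLogger) Debug(msg string, keyvals ...interface{}) {
	if !l.debugEnabled {
		return // message dropped, but keyvals were already evaluated
	}
	fmt.Println(msg, keyvals)
}

func main() {
	l := filteredLogger{debugEnabled: false}
	blob := make([]byte, 1<<20) // e.g. a 1 MB transaction batch

	start := time.Now()
	l.Debug("received msg", "bytes", fmt.Sprintf("%X", blob)) // hex-encodes 1 MB for nothing
	fmt.Println("expensive call:", time.Since(start))

	start = time.Now()
	l.Debug("received msg", "bytes_len", len(blob)) // cheap: pass raw/derived values instead
	fmt.Println("cheap call:   ", time.Since(start))
}
```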
2020-12-17 15:59:18 +04:00
Anton Kaliaev
2bb2af19e4 localnet: expose 6060 (pprof) and 9090 (prometheus) on node0
This removes the need to do it yourself every time you want to debug an
issue or look at Prometheus graphs.
2020-12-17 15:59:18 +04:00
Callum Waters
ebff8a96a5 docs: use hyphens instead of snake case (#5802) 2020-12-17 08:59:58 +01:00
PaddyMc
6ef81c6074 localnet: fix node starting issue with --proxy-app flag (#5803) 2020-12-16 16:14:08 +04:00
Erik Grinaker
e198edf20e p2p: remove NodeInfo interface and rename DefaultNodeInfo struct (#5799)
The `NodeInfo` interface does not appear to serve any purpose at all, so I removed it and renamed the `DefaultNodeInfo` struct to `NodeInfo` (including the Protobuf representations). Let me know if this is actually needed for anything.

Only the Protobuf rename is listed in the changelog, since we do not officially support API stability of the `p2p` package (according to `README.md`). The on-wire protocol remains compatible.
2020-12-15 18:54:25 +00:00
Erik Grinaker
bcfc889f25 p2p: implement new Transport interface (#5791)
This implements a new `Transport` interface and related types for the P2P refactor in #5670. Previously, `conn.MConnection` was very tightly coupled to the `Peer` implementation -- in order to allow alternative non-multiplexed transports (e.g. QUIC), MConnection has now been moved below the `Transport` interface, as `MConnTransport`, and decoupled from the peer. Since the `p2p` package is not covered by our Go API stability, this is not considered a breaking change, and not listed in the changelog.

The initial approach was to implement the new interface in its final form (which also involved possible protocol changes, see https://github.com/tendermint/spec/pull/227). However, it turned out that this would require a large amount of changes to existing P2P code because of the previous tight coupling between `Peer` and `MConnection` and the reliance on subtleties in the MConnection behavior. Instead, I have broadened the `Transport` interface to expose much of the existing MConnection interface, preserved much of the existing MConnection logic and behavior in the transport implementation, and tried to make as few changes to the rest of the P2P stack as possible. We will instead reduce this interface gradually as we refactor other parts of the P2P stack.

The low-level transport code and protocol (e.g. MConnection, SecretConnection and so on) has not been significantly changed, and refactoring this is not a priority until we come up with a plan for QUIC adoption, as we may end up discarding the MConnection code entirely.

There are no tests of the new `MConnTransport`, as this code is likely to evolve as we proceed with the P2P refactor, but tests should be added before a final release. The E2E tests are sufficient for basic validation in the meanwhile.
2020-12-15 15:08:16 +00:00
dependabot[bot]
6dce4ef701 build(deps): Bump github.com/cosmos/iavl from 0.15.0 to 0.15.2
Bumps [github.com/cosmos/iavl](https://github.com/cosmos/iavl) from 0.15.0 to 0.15.2.
- [Release notes](https://github.com/cosmos/iavl/releases)
- [Changelog](https://github.com/cosmos/iavl/blob/master/CHANGELOG.md)
- [Commits](https://github.com/cosmos/iavl/compare/v0.15.0...v0.15.2)

Signed-off-by: dependabot[bot] <support@github.com>
2020-12-15 13:46:37 +01:00
Tess Rinearson
1547a7e6c1 readme: update discord link (#5795) 2020-12-14 21:52:50 +01:00
dependabot[bot]
e3b96107af build(deps): Bump gaurav-nelson/github-action-markdown-link-check (#5793)
Bumps [gaurav-nelson/github-action-markdown-link-check](https://github.com/gaurav-nelson/github-action-markdown-link-check) from 1.0.10 to 1.0.11.
- [Release notes](https://github.com/gaurav-nelson/github-action-markdown-link-check/releases)
- [Commits](https://github.com/gaurav-nelson/github-action-markdown-link-check/compare/1.0.10...2a60e0fe41b5361f446ccace6621a1a2a5c324cf)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-12-14 14:32:34 +01:00
Anton Kaliaev
5aa859c370 blockchain/v2: send status request when new peer joins (#5774)
Closes #5766

* memoize the scSchedulerFail error to avoid printing it every scheduleFreq
* blockchain/v2: modify switchIO funcs to accept peer instead of peerID
2020-12-14 11:25:28 +04:00
dependabot[bot]
41c067c474 build(deps): Bump gaurav-nelson/github-action-markdown-link-check (#5787)
Bumps [gaurav-nelson/github-action-markdown-link-check](https://github.com/gaurav-nelson/github-action-markdown-link-check) from 1.0.9 to 1.0.10.
- [Release notes](https://github.com/gaurav-nelson/github-action-markdown-link-check/releases)
- [Commits](https://github.com/gaurav-nelson/github-action-markdown-link-check/compare/1.0.9...72d871b8c64d67e2161dc16596734334b188429d)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-12-11 13:39:54 +01:00
Callum Waters
3283a84ab2 cmd: hyphen case cli and config (#5777) 2020-12-11 13:14:04 +01:00
Tess Rinearson
c1be58a27b readme: add links to job post (#5785)
Also adds a slightly modified version of the footer text from the docs website.
2020-12-10 23:22:22 +01:00
dependabot[bot]
a24e00dda6 build(deps): Bump gaurav-nelson/github-action-markdown-link-check from 1.0.8 to 1.0.9 (#5779)
Bumps [gaurav-nelson/github-action-markdown-link-check](https://github.com/gaurav-nelson/github-action-markdown-link-check) from 1.0.8 to 1.0.9.
- [Release notes](https://github.com/gaurav-nelson/github-action-markdown-link-check/releases)
- [Commits](https://github.com/gaurav-nelson/github-action-markdown-link-check/compare/1.0.8...7481451f70251762f149d69596e3e276ebf2b236)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-12-10 12:19:53 +01:00
dependabot[bot]
d01e6998a7 build(deps): Bump vuepress-theme-cosmos from 1.0.178 to 1.0.179 in /docs (#5780)
Bumps [vuepress-theme-cosmos](https://github.com/cosmos/vuepress-theme-cosmos) from 1.0.178 to 1.0.179.
- [Release notes](https://github.com/cosmos/vuepress-theme-cosmos/releases)
- [Commits](https://github.com/cosmos/vuepress-theme-cosmos/commits)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-12-10 12:12:12 +01:00
Erik Grinaker
d8ae450784 Makefile: use git 2.20-compatible branch detection (#5778)
`Makefile` used `git branch --show-current` for branch detection. This option was introduced in Git 2.22. However, the current Debian release (Buster), which is used by the `golang:1.15` Docker image, uses Git 2.20. This gives spurious errors e.g. when running the E2E tests:

```
error: unknown option `show-current'
```

This PR changes the branch detection to be compatible with Git 2.20. The behavior appears to be the same as `git branch --show-current`, both when on a branch, on a tag, and on a detached HEAD.
2020-12-10 09:45:35 +00:00
Anton Kaliaev
28e79a4d02 cmd: modify gen_node_key to print key to STDOUT (#5772)
closes: #5770
closes: #5769

also, include node ID in the output (#5769) and modify NodeKey to use
value semantics (it makes perfect sense for NodeKey to not be a
pointer).
2020-12-10 11:02:35 +04:00
Aleksandr Bezobchuk
0565eb5943 state sync: cleanup (#5776) 2020-12-09 10:29:28 -05:00
Aleksandr Bezobchuk
a879eb444d p2p: state sync reactor refactor (#5671) 2020-12-09 09:31:06 -05:00
Marko
cdc217357d adr: privval gRPC (#5712) 2020-12-09 13:06:35 +01:00
dependabot[bot]
f9c54d2710 build(deps): Bump rtCamp/action-slack-notify from ecc1353ce30ef086ce3fc3d1ea9ac2e32e150402 to 2.1.2 (#5767)
Bumps [rtCamp/action-slack-notify](https://github.com/rtCamp/action-slack-notify) from ecc1353ce30ef086ce3fc3d1ea9ac2e32e150402 to 2.1.2. This release includes the previously tagged commit.
- [Release notes](https://github.com/rtCamp/action-slack-notify/releases)
- [Commits](ecc1353ce3...ae42232590)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-12-08 12:29:08 +01:00
dependabot[bot]
afb50425b3 build(deps-dev): Bump watchpack from 2.0.1 to 2.1.0 in /docs (#5768)
Bumps [watchpack](https://github.com/webpack/watchpack) from 2.0.1 to 2.1.0.
- [Release notes](https://github.com/webpack/watchpack/releases)
- [Commits](https://github.com/webpack/watchpack/compare/v2.0.1...v2.1.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-12-08 12:14:50 +01:00
Anton Kaliaev
15bf3a1509 goreleaser: lowercase binary name (#5765)
and specify files that go into an archive
2020-12-08 14:22:33 +04:00
Anton Kaliaev
89e908e340 blockchain/v0: relax termination conditions and increase sync timeout (#5741)
Closes: #5718
2020-12-08 11:33:03 +04:00
Marko
8e80c26e80 docs: fix link (#5763) 2020-12-07 22:42:20 +01:00
Callum Waters
b35e9ff53e node: improve test coverage on proposal block (#5748)
## Description

Closes: #3156
2020-12-07 21:16:18 +00:00
Marko
4dcc9e979c docs: add version dropdown and v0.34 docs (#5762)
Co-authored-by: Cyrus Goh <hello@lovincyrus.com>
2020-12-07 22:07:02 +01:00
Marko
0d108adaec dep: bump ed25519consensus version (#5760)
## Description

Bump version to get performance updates:

```
benchmark                    old ns/op     new ns/op     delta
BenchmarkVerification-8      174857        78376         -55.18%
```

Closes: #XXX
2020-12-07 12:54:07 +00:00
dependabot[bot]
cdf1e251f7 build(deps): Bump vuepress-theme-cosmos from 1.0.177 to 1.0.178 in /docs (#5754)
Bumps [vuepress-theme-cosmos](https://github.com/cosmos/vuepress-theme-cosmos) from 1.0.177 to 1.0.178.
- [Commits](https://github.com/cosmos/vuepress-theme-cosmos/commits) (see full diff in the compare view)
2020-12-07 11:04:24 +00:00
dependabot[bot]
607565f34e build(deps): Bump vuepress-theme-cosmos from 1.0.176 to 1.0.177 in /docs (#5746)
Bumps [vuepress-theme-cosmos](https://github.com/cosmos/vuepress-theme-cosmos) from 1.0.176 to 1.0.177.
- [Commits](https://github.com/cosmos/vuepress-theme-cosmos/commits)


2020-12-04 18:44:25 +00:00
Tess Rinearson
42041f71b5 contributing: simplify our minor release process (#5749) 2020-12-04 19:12:02 +01:00
Anton Kaliaev
2c16ae99ee evidence: omit bytes field (#5745)
Follow-up to https://github.com/tendermint/tendermint/pull/5743
2020-12-04 11:33:36 +01:00
Tess Rinearson
79890d8393 reactors: omit incoming message bytes from reactor logs (#5743)
After a reactor has failed to parse an incoming message, it shouldn't output the "bad" data into the logs, as that data is unfiltered and could have anything in it. (We also don't think this information is helpful to have in the logs anyway.)
2020-12-03 22:12:08 +00:00
dependabot[bot]
5882489bba build(deps): Bump google.golang.org/grpc from 1.33.2 to 1.34.0 (#5737)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Aleksandr Bezobchuk <alexanderbez@users.noreply.github.com>
Co-authored-by: Marko <marbar3778@yahoo.com>
2020-12-03 17:38:20 +01:00
Marko
8a80b97865 UX: version configuration (#5740)
## Description

- When not on a tag, `tendermint version` will return "unreleased-`branchName`-`commitHash`" (see the sketch below)

Closes: #XXX
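
As a rough illustration only (the package and variable names below are hypothetical, not the repository's actual ones), the fallback string could be assembled from values injected at build time via `-ldflags "-X ..."`:

```go
package version

// These would be populated at build time, e.g. via
// go build -ldflags "-X .../version.branch=$(BRANCH) -X .../version.commit=$(COMMIT)".
var (
	semVer = "" // set from the git tag when building a tagged release
	branch = ""
	commit = ""
)

// Version returns the tagged version if present, and otherwise the
// "unreleased-<branch>-<commit>" fallback described above.
func Version() string {
	if semVer != "" {
		return semVer
	}
	return "unreleased-" + branch + "-" + commit
}
```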
2020-12-03 15:18:08 +00:00
Marko
9205e85a9b changelog: add entry back (#5738)
## Description

add back removed changelog entry

Closes: #XXX
2020-12-03 11:43:08 +00:00
Anton Kaliaev
243ff4b43d blockchain/v1: remove in favor of v2 (#5728) 2020-12-03 09:35:47 +04:00
Alessio Treglia
bcb7044d64 consensus: fix flaky tests (#5734)
Replace testing.T.Cleanup() with deferred function
calls in test helpers as those cleanup functions
need to be called once the helper returns and not
when the entire test ends.

This reverts a few of the changes introduced in #5723.

Thanks: @erikgrinaker for pointing this out.
Ref: #5732
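
A minimal, self-contained sketch of the pattern (an invented fixture, not the consensus helpers themselves): the helper hands back an explicit teardown so the caller can release the resource with `defer` as soon as it is done, instead of registering it with `t.Cleanup` and waiting for the whole test to finish.

```go
package consensus_test

import "testing"

// fixture stands in for a consensus state or similar test resource.
type fixture struct{ stopped bool }

func (f *fixture) stop() { f.stopped = true }

// newFixture returns the resource plus an explicit teardown function, so the
// caller controls exactly when it runs (typically via defer).
func newFixture(t *testing.T) (*fixture, func()) {
	t.Helper()
	f := &fixture{}
	return f, f.stop
}

func TestFixtureLifecycle(t *testing.T) {
	f, teardown := newFixture(t)
	defer teardown() // released as soon as this function returns
	if f.stopped {
		t.Fatal("fixture stopped too early")
	}
}
```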
2020-12-02 14:21:06 +00:00
Alessio Treglia
77d7328bc6 p2p/pex: fix flaky tests (#5733)
*testing.T.TempDir() causes test cases to fail when
it is unable to remove the temporary directory once
the test case execution terminates. This seems to
happen often with pex reactor test cases.
2020-12-02 13:47:59 +00:00
dependabot[bot]
e820e68acd build(deps): Bump vuepress-theme-cosmos from 1.0.175 to 1.0.176 in /docs (#5727)
Bumps [vuepress-theme-cosmos](https://github.com/cosmos/vuepress-theme-cosmos) from 1.0.175 to 1.0.176.
- [Release notes](https://github.com/cosmos/vuepress-theme-cosmos/releases)
- [Commits](https://github.com/cosmos/vuepress-theme-cosmos/commits)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-12-01 14:52:36 +01:00
Anton Kaliaev
33dbff61d3 blockchain/v1: fix deadlock (#5711)
I introduced a new variable, syncEnded, which is now used to prevent sending new events to channels (which would otherwise block) once the reactor has finished syncing.

Closes #4591
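
A minimal sketch of that guard, with invented names rather than the reactor's actual fields: once syncing has ended and the consumer is gone, event sends become no-ops instead of blocking forever.

```go
package main

import "fmt"

type fsm struct {
	events    chan string
	syncEnded bool
}

// send drops events once syncing has ended, so a send can never block on a
// channel that no longer has a reader.
func (f *fsm) send(ev string) {
	if f.syncEnded {
		return
	}
	f.events <- ev
}

func main() {
	f := &fsm{events: make(chan string, 1)}
	f.send("block received")
	fmt.Println(<-f.events)

	f.syncEnded = true
	f.send("late event") // dropped; would otherwise block with no reader
}
```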
2020-12-01 13:08:33 +00:00
Callum Waters
f368b91caf light: minor fixes / standardising errors (#5716)
## Description

I'm just doing a self-audit of the light client. There are a few things I've changed:

- Validate trust level in `VerifyNonAdjacent` function
- Make errNoWitnesses public (it's something people running software on top of a light client should be able to parse)
- Remove `ChainID` check of witnesses on start up. We do this already when we compare the first header with witnesses
- Remove `ChainID()` from provider interface

Closes: #4538
2020-12-01 12:53:53 +00:00
Anton Kaliaev
b1bbd37519 libs/bits: validate BitArray in FromProto (#5720)
Closes #5705
2020-12-01 12:44:56 +00:00
Marko
141d9c814d readme: remove circleci badge (#5729)
## Description

- remove circleci badge from readme

Closes: #XXX
2020-12-01 12:36:05 +00:00
Anton Kaliaev
e13b4386ff abci: modify Client interface and socket client (#5673)
`abci.Client`:

- Sync and Async methods now accept a context for cancellation
    * the grpc client uses the context to cancel both Sync and Async requests
    * the local client ignores the context parameter
    * the socket client uses the context to cancel Sync requests and to drop Async requests before sending them if the context was cancelled beforehand

- Async methods now return an error
    * the socket client returns an error immediately if the Async request queue is full
    * the local client always returns a nil error
    * the grpc client returns an error if the context was cancelled before we got a response or before the receiving queue had space for the response (not to be confused with the sending queue in the socket client)

- specify clients semantics in [doc.go](https://raw.githubusercontent.com/tendermint/tendermint/27112fffa62276bc016d56741f686f0f77931748/abci/client/doc.go)

`mempool.TxInfo`

- add optional `Context` to `TxInfo`, which can be used to cancel `CheckTx` request

Closes #5190
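
A rough sketch of the reshaped surface, reduced to a single Echo method pair with placeholder types; the names and signatures are illustrative, not copied from the package:

```go
package abciclient

import "context"

// Placeholder types standing in for the real request/response wrappers.
type (
	ResponseEcho struct{ Message string }
	ReqRes       struct{}
)

// Client illustrates the shape of the change: Sync and Async calls both take
// a context, and Async calls now report errors (full queue, cancelled
// context) instead of failing silently.
type Client interface {
	EchoSync(ctx context.Context, msg string) (*ResponseEcho, error)
	EchoAsync(ctx context.Context, msg string) (*ReqRes, error)
}
```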
2020-11-30 16:46:16 +04:00
Alessio Treglia
0de4bec862 use Cleanup(),TempDir() in test cases (#5723)
Replace defer with t.Cleanup().

Replace the combination of ioutil.TempDir, error checking
and defer os.RemoveAll() with Go testing.T's new TempDir()
helper.

Mark auxiliary functions as test helpers.
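
A before/after sketch of the same change, using a throwaway test; Go 1.15's `t.TempDir()` both creates the directory and registers its removal:

```go
package example_test

import (
	"io/ioutil"
	"os"
	"testing"
)

// Old pattern: create the directory, check the error, and remember to remove it.
func TestTempDirOld(t *testing.T) {
	dir, err := ioutil.TempDir("", "example")
	if err != nil {
		t.Fatal(err)
	}
	defer os.RemoveAll(dir)
	_ = dir
}

// New pattern: the testing package creates and removes the directory.
func TestTempDirNew(t *testing.T) {
	dir := t.TempDir()
	_ = dir
}
```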
2020-11-30 12:13:25 +00:00
Marko
781f4badc3 ci: build for 32 bit, libs: fix overflow (#5700) 2020-11-26 16:12:25 +01:00
Marko
27e8cea9ce ci: remove circle (#5714)
* remove circle

* remove prefix
2020-11-24 19:11:40 +00:00
Tess Rinearson
98234b1171 README: update link to Tendermint blog (#5713) 2020-11-24 17:57:42 +01:00
Erik Grinaker
6f9f8b58ae test: fix TestByzantinePrevoteEquivocation flake (#5710)
This fixes spurious `TestByzantinePrevoteEquivocation` failures by extending the block range and time spent waiting for evidence. I've seen many runs where the evidence isn't committed until e.g. height 27. Haven't looked into _why_ this happens, but as long as the evidence is committed eventually and the test doesn't spuriously fail I'm (mostly) happy. WDYT @cmwaters?
2020-11-24 14:49:10 +00:00
Marko
85e94161cd version: add abci version to handshake (#5706)
## Description

- add `AbciVersion` RequestInfo

Closes: #2804
2020-11-24 14:06:25 +00:00
Erik Grinaker
4988877f19 crypto: fix infinite recursion in Secp256k1 string formatting (#5707)
This caused stack overflow panics in E2E tests, e.g.:

```
2020-11-24T02:37:17.6085640Z validator04    | runtime: goroutine stack exceeds 1000000000-byte limit
2020-11-24T02:37:17.6087818Z validator04    | runtime: sp=0xc0234b23c0 stack=[0xc0234b2000, 0xc0434b2000]
2020-11-24T02:37:17.6088920Z validator04    | fatal error: stack overflow
2020-11-24T02:37:17.6089776Z validator04    | 
2020-11-24T02:37:17.6090569Z validator04    | runtime stack:
2020-11-24T02:37:17.6091677Z validator04    | runtime.throw(0x12dc476, 0xe)
2020-11-24T02:37:17.6093123Z validator04    | 	/usr/local/go/src/runtime/panic.go:1116 +0x72
2020-11-24T02:37:17.6094320Z validator04    | runtime.newstack()
2020-11-24T02:37:17.6095374Z validator04    | 	/usr/local/go/src/runtime/stack.go:1067 +0x78d
2020-11-24T02:37:17.6096381Z validator04    | runtime.morestack()
2020-11-24T02:37:17.6097657Z validator04    | 	/usr/local/go/src/runtime/asm_amd64.s:449 +0x8f
2020-11-24T02:37:17.6098505Z validator04    | 
2020-11-24T02:37:17.6099328Z validator04    | goroutine 88 [running]:
2020-11-24T02:37:17.6100470Z validator04    | runtime.heapBitsSetType(0xc009565380, 0x20, 0x18, 0x1137e00)
2020-11-24T02:37:17.6101961Z validator04    | 	/usr/local/go/src/runtime/mbitmap.go:911 +0xaa5 fp=0xc0234b23d0 sp=0xc0234b23c8 pc=0x432625
2020-11-24T02:37:17.6103906Z validator04    | runtime.mallocgc(0x20, 0x1137e00, 0x117b601, 0x11e9240)
2020-11-24T02:37:17.6105179Z validator04    | 	/usr/local/go/src/runtime/malloc.go:1090 +0x5a5 fp=0xc0234b2470 sp=0xc0234b23d0 pc=0x428b25
2020-11-24T02:37:17.6106540Z validator04    | runtime.convTslice(0xc002743710, 0x21, 0x21, 0xc0234b24e8)
2020-11-24T02:37:17.6107861Z validator04    | 	/usr/local/go/src/runtime/iface.go:385 +0x59 fp=0xc0234b24a0 sp=0xc0234b2470 pc=0x426379
2020-11-24T02:37:17.6109315Z validator04    | github.com/tendermint/tendermint/crypto/secp256k1.PubKey.String(...)
2020-11-24T02:37:17.6151692Z validator04    | 	/src/tendermint/crypto/secp256k1/secp256k1.go:161
2020-11-24T02:37:17.6153872Z validator04    | github.com/tendermint/tendermint/crypto/secp256k1.(*PubKey).String(0xc009565360, 0x11e9240, 0xc009565360)
2020-11-24T02:37:17.6157421Z validator04    | 	<autogenerated>:1 +0x65 fp=0xc0234b24f8 sp=0xc0234b24a0 pc=0x656965
2020-11-24T02:37:17.6159134Z validator04    | fmt.(*pp).handleMethods(0xc00956c680, 0x58, 0xc0234b2801)
2020-11-24T02:37:17.6161462Z validator04    | 	/usr/local/go/src/fmt/print.go:630 +0x30a fp=0xc0234b2768 sp=0xc0234b24f8 pc=0x518b8a
[...]
2020-11-24T02:37:17.6649685Z validator04    | 	/usr/local/go/src/fmt/print.go:630 +0x30a fp=0xc0234b7f48 sp=0xc0234b7cd8 pc=0x518b8a
2020-11-24T02:37:17.6651177Z validator04    | created by github.com/tendermint/tendermint/node.startStateSync
2020-11-24T02:37:17.6652521Z validator04    | 	/src/tendermint/node/node.go:587 +0x150

```
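
A minimal reproduction of this class of bug (illustrative, not the original code): a `String` method that formats its own receiver with a verb that consults `fmt.Stringer` recurses until the stack overflows, while formatting the raw bytes does not:

```go
package main

import "fmt"

type PubKey []byte

// Buggy version (commented out): %X on the receiver re-enters String(),
// because fmt consults the Stringer interface for this verb too.
// func (p PubKey) String() string { return fmt.Sprintf("PubKey{%X}", p) }

// Fixed version: convert to a plain []byte first so fmt never re-dispatches
// to the Stringer.
func (p PubKey) String() string { return fmt.Sprintf("PubKey{%X}", []byte(p)) }

func main() {
	fmt.Println(PubKey{0xAB, 0xCD}) // PubKey{ABCD}
}
```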
2020-11-24 11:37:49 +00:00
dependabot[bot]
ba256b383b build(deps): Bump github.com/cosmos/iavl from 0.15.0-rc5 to 0.15.0 (#5708)
Bumps [github.com/cosmos/iavl](https://github.com/cosmos/iavl) from 0.15.0-rc5 to 0.15.0.
- [Release notes](https://github.com/cosmos/iavl/releases)
- [Changelog](https://github.com/cosmos/iavl/blob/master/CHANGELOG.md)
- [Commits](https://github.com/cosmos/iavl/compare/v0.15.0-rc5...v0.15.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-11-24 12:14:27 +01:00
Marko
69dd8fea9d docs: add nodes section (#5604)
## Description

- separate docs related to running nodes into the nodes dir
- keep old files but don't display them
- bring over the "debugging like a pro" blog post

Closes: #XXX
2020-11-24 09:08:22 +00:00
Anton Kaliaev
170cb70e19 test/e2e: enable v1 and v2 blockchains (#5702)
* test/e2e: enable v1 and v2 blockchains

* modify networks/ci.toml
2020-11-23 13:32:22 +00:00
Anton Kaliaev
3ad1157451 blockchain/v1: handle peers without blocks (#5701)
Closes #5444

Now we record the fact that a peer does not have a requested block and later use this information to make a new request for the same block from another peer.
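
Illustrative bookkeeping for that behaviour, with hypothetical fields rather than the scheduler's real ones: remember which peers reported not having a height so the next request for that height goes elsewhere.

```go
package main

import "fmt"

type scheduler struct {
	// noBlock[height] is the set of peers known not to have that block.
	noBlock map[int64]map[string]struct{}
}

func (s *scheduler) markNoBlock(height int64, peer string) {
	if s.noBlock[height] == nil {
		s.noBlock[height] = map[string]struct{}{}
	}
	s.noBlock[height][peer] = struct{}{}
}

func (s *scheduler) canRequestFrom(height int64, peer string) bool {
	_, refused := s.noBlock[height][peer]
	return !refused
}

func main() {
	s := &scheduler{noBlock: map[int64]map[string]struct{}{}}
	s.markNoBlock(10, "peer-a")
	fmt.Println(s.canRequestFrom(10, "peer-a")) // false: ask someone else
	fmt.Println(s.canRequestFrom(10, "peer-b")) // true
}
```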
2020-11-23 11:59:34 +00:00
Marko
095e9cd7ef codecov: validate codecov.yml (#5699)
## Description

Closes: #XXX
2020-11-21 16:22:45 +00:00
Tess Rinearson
2c2120691c CONTRIBUTING: update to match the release flow used for 0.34.0 (#5697) 2020-11-20 18:43:03 +01:00
Tess Rinearson
8b29622fe2 .goreleaser: add windows, remove arm (32 bit) (#5692)
This updates the goreleaser tooling to build the same binaries that were built for the 0.34.0 release.
2020-11-19 19:10:45 +01:00
Tess Rinearson
6bee97160f changelog: squash changelog from 0.34 RCs into one (#5691)
"Squashes" the changelog from RCs 2-6 into one changelog message for 0.34.0, and adds the changelog pending.
2020-11-19 17:44:08 +01:00
Tess Rinearson
64101f5ac9 .gitignore: sort (#5690) 2020-11-19 16:34:38 +01:00
Tess Rinearson
3246283cf2 .vscode: remove directory (#5626) 2020-11-19 16:20:30 +01:00
Tess Rinearson
f97a498cee scripts: make linkifier default to 'pull' rather than 'issue' (#5689) 2020-11-19 14:23:08 +01:00
Callum Waters
68dc751a8c light: ensure required header fields are present for verification (#5677) 2020-11-19 12:02:58 +01:00
Tess Rinearson
b435d9aae5 upgrading: update 0.34 instructions with updates since RC4 (#5685)
Co-authored-by: Marko <marbar3778@yahoo.com>
2020-11-18 18:10:43 +01:00
Erik Grinaker
e3728e7709 p2p: remove unused MakePoWTarget() (#5684) 2020-11-18 14:27:47 +00:00
Anton Kaliaev
f2f6a78809 docs: warn developers about calling blocking funcs in Receive (#5679)
Refs #2888
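
A sketch of the non-blocking pattern the warning points at (a hypothetical reactor, not the documented API): hand incoming messages to a worker goroutine via a buffered channel, and drop rather than block when the queue is full.

```go
package main

import "fmt"

type reactor struct {
	work chan []byte
}

// Receive must return quickly: queue the message for a worker and drop it
// (or record the drop) if the queue is full, instead of blocking the p2p
// receive loop.
func (r *reactor) Receive(msg []byte) {
	select {
	case r.work <- msg:
	default:
		fmt.Println("queue full, dropping message")
	}
}

func main() {
	r := &reactor{work: make(chan []byte, 128)}
	go func() {
		for msg := range r.work {
			_ = msg // slow processing happens here, off the receive path
		}
	}()
	r.Receive([]byte("hello"))
}
```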
2020-11-17 15:37:35 +00:00
Marko
b2d72dce7e e2e: use ed25519 for secretConn (remote signer) (#5678)
## Description

Hardcode ed25519 to dialTCPFn in e2e tests. 

I will backport `DefaultRequestHandler` fixes

This will be replaced when grpc is implemented.
2020-11-17 13:55:23 +00:00
Callum Waters
909da42789 light: make fraction parts uint64, ensuring that it is always positive (#5655) 2020-11-17 14:23:16 +01:00
dependabot[bot]
4e71357808 build(deps): Bump codecov/codecov-action from v1.0.14 to v1.0.15 (#5676)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from v1.0.14 to v1.0.15.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Commits](https://github.com/codecov/codecov-action/compare/v1.0.14...239febf655bba88b16ff5dea1d3135ea8663a1f9)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-11-17 12:15:57 +01:00
Marko
fbf2309962 ci: remove add-path (#5674) 2020-11-17 11:49:57 +01:00
Marko
39e81807a3 buf: modify buf.yml, add buf generate (#5653) 2020-11-17 10:30:35 +01:00
Erik Grinaker
2d0fcf498d privval: duplicate SecretConnection from p2p package (#5672)
This is so that the `privval` package will not be affected when we refactor `p2p` (#5670). We will be migrating to gRPC shortly (#4698).
2020-11-16 18:16:10 +00:00
Erik Grinaker
deb4f60613 docs: add P2P architecture ADR (#5637)
[Rendered](https://github.com/tendermint/tendermint/blob/erik/adr-p2p-architecture/docs/architecture/adr-062-p2p-architecture.md)

ADR for the new P2P architecture and abstractions. Related to #2067.
2020-11-16 16:17:09 +00:00
Tess Rinearson
a736530e01 docs/architecture: add missing ADRs to README, update status of ADR 034 (#5663) 2020-11-16 13:54:52 +01:00
Marko
e0950515ff test/e2e: fix secp failures (#5649) 2020-11-16 12:31:32 +01:00
Aleksandr Bezobchuk
8aa47c7da5 rpc: fix content-type header (#5661) 2020-11-13 14:07:10 -05:00
Callum Waters
2f5e454892 remove unused version struct (#5656) 2020-11-13 13:09:38 +01:00
Erik Grinaker
e9294de946 go.mod: upgrade iavl and deps (#5657)
Bumps IAVL, which pulled in some other upgrades as well. I think they should be fine though.
2020-11-13 11:30:34 +00:00
dependabot[bot]
fdecfa177d build(deps): Bump rtCamp/action-slack-notify from e9db0ef to 2.1.1
Bumps [rtCamp/action-slack-notify](https://github.com/rtCamp/action-slack-notify) from e9db0ef to 2.1.1. This release includes the previously tagged commit.
- [Release notes](https://github.com/rtCamp/action-slack-notify/releases)
- [Commits](e9db0ef...ecc1353ce3)

Signed-off-by: dependabot[bot] <support@github.com>
2020-11-13 12:09:35 +01:00
Alessio Treglia
8bd3d5105f libs/os: remove unused aliases, add test cases (#5654)
Remove ReadFile (unused) and WriteFile (almost unused; an alias of ioutil.WriteFile).

Add testcases for Must{Read,Write}File.
2020-11-13 10:59:45 +00:00
Marko
95cff1efb4 proto: buf for everything (#5650)
## Description

- remove installation of protoc
- use buf protoc to generate proto stubs

Prior to approving, could someone test this locally? I restarted my Docker instance and it's been stuck for 20+ minutes.

Closes: #XXX
2020-11-12 11:15:18 +00:00
Alessio Treglia
eb0d353767 libs/os: add test case for TrapSignal (#5646) 2020-11-12 10:19:05 +04:00
dependabot[bot]
a399fae100 build(deps): Bump github.com/tendermint/tm-db from 0.6.2 to 0.6.3
Bumps [github.com/tendermint/tm-db](https://github.com/tendermint/tm-db) from 0.6.2 to 0.6.3.
- [Release notes](https://github.com/tendermint/tm-db/releases)
- [Changelog](https://github.com/tendermint/tm-db/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tendermint/tm-db/compare/v0.6.2...v0.6.3)

Signed-off-by: dependabot[bot] <support@github.com>
2020-11-11 12:19:30 +01:00
Marko
baa20a4b9c fix docker deployment (#5647) 2020-11-11 09:32:57 +01:00
Anton Kaliaev
af645ac778 privval: reset pingTimer to avoid sending unnecessary pings (#5642)
Refs #5550
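
A standalone sketch of the timer pattern (the interval and message names are made up): resetting the ping timer on every outbound message means a ping goes out only after a full interval of silence.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const pingInterval = 50 * time.Millisecond
	pingTimer := time.NewTimer(pingInterval)
	defer pingTimer.Stop()

	send := func(msg string) {
		fmt.Println("sent:", msg)
		// Drain and reset so the ping countdown starts over after real traffic.
		if !pingTimer.Stop() {
			select {
			case <-pingTimer.C:
			default:
			}
		}
		pingTimer.Reset(pingInterval)
	}

	send("sign-vote request")
	<-pingTimer.C // fires only after pingInterval of inactivity
	fmt.Println("idle: sending ping")
}
```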
2020-11-10 11:43:01 +00:00
Anton Kaliaev
adcfe80f09 remove Vagrantfile (#5641)
we're no longer using it for development
2020-11-10 11:33:21 +00:00
Anton Kaliaev
7041cae8e1 bump go version to 1.15 (#5639) 2020-11-10 15:19:52 +04:00
Anton Kaliaev
fa522ca323 privval: increase read/write timeout to 5s and calculate ping interval based on it (#5638)
Partially closes #5550
2020-11-10 11:10:12 +00:00
dependabot[bot]
1181cfbd77 build(deps): Bump google.golang.org/grpc from 1.33.1 to 1.33.2 (#5635)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.33.1 to 1.33.2.
Notable changes in 1.33.2:
- protobuf: update all generated code to google.golang.org/protobuf (#3932)
- xdsclient: populate error details for NACK (#3975)
- internal/credentials: fix a bug and add one more helper function SPIFFEIDFromCert (#3929)

- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.33.1...v1.33.2)
2020-11-10 10:50:42 +00:00
Marko
e7d7ad85d5 crypto: adopt zip215 ed25519 verification (#5632) 2020-11-10 11:39:52 +01:00
Aleksandr Bezobchuk
b508045eff Merge PR #5624: Types ValidateBasic Tests 2020-11-09 11:08:48 -05:00
Callum Waters
ca46cbc781 move broadcast_evidence rpc call from info to evidence (#5634) 2020-11-09 15:48:44 +01:00
Marko
bf35cc6443 cmd: add support for --key (#5612)
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
2020-11-09 15:22:36 +01:00
dependabot[bot]
27895a27a4 build(deps): Bump actions/cache from v2.1.2 to v2.1.3 (#5633)
Bumps [actions/cache](https://github.com/actions/cache) from v2.1.2 to v2.1.3.
- [Release notes](https://github.com/actions/cache/releases)
- [Commits](https://github.com/actions/cache/compare/v2.1.2...0781355a23dac32fd3bac414512f4b903437991a)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-11-09 12:03:53 +01:00
Callum Waters
9fe7b4fe77 remove misbehaviors from e2e generator (#5629) 2020-11-06 14:27:48 +01:00
Tess Rinearson
ec32df2d8a CHANGELOG: add breaking Version name change (#5628)
The good folks at Regen pointed out that this was an additional breaking change when they upgraded to use RC6: https://github.com/cosmos/cosmos-sdk/pull/7828#discussion_r518337441
2020-11-06 01:06:55 +01:00
Tess Rinearson
d0c87ff27e .github: move codecov config into .github 2020-11-06 01:02:52 +01:00
Tess Rinearson
e52f9de148 .github: move codecov.yml into .github 2020-11-06 01:02:52 +01:00
Tess Rinearson
8ae5c60637 scripts: move build.sh into scripts 2020-11-06 01:02:52 +01:00
Tess Rinearson
47687dba31 remove unused PHILOSOPHY file 2020-11-06 01:02:52 +01:00
Tess Rinearson
865234e113 remove appveyor.yml 2020-11-06 01:02:52 +01:00
Tess Rinearson
fdb7421ae8 .github: move mergify config 2020-11-06 01:02:52 +01:00
Tess Rinearson
a65c23a526 CHANGELOG: update to reflect v0.34.0-rc6 (#5622)
Note that this also deletes everything from CHANGELOG_PENDING that was included in RC6.
2020-11-05 23:24:26 +01:00
Anton Kaliaev
335e97433c blockchain/v2: remove peers from the processor (#5607)
after they were pruned by the scheduler

Closes #5513
2020-11-05 12:24:48 +00:00
Callum Waters
3922dde05d evidence: structs can independently form abci evidence (#5610) 2020-11-04 17:14:48 +01:00
dependabot[bot]
83c7bd17bf build(deps-dev): Bump watchpack from 2.0.0 to 2.0.1 in /docs (#5605)
Bumps [watchpack](https://github.com/webpack/watchpack) from 2.0.0 to 2.0.1.
- [Release notes](https://github.com/webpack/watchpack/releases)
- [Commits](https://github.com/webpack/watchpack/compare/v2.0.0...v2.0.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-11-04 10:55:04 +01:00
Cyrus Goh
f471affad5 docs: bump vuepress-theme-cosmos (#5614)
## Description

- bump to 1.0.175

### Related:
- https://github.com/cosmos/vuepress-theme-cosmos/pull/125
- https://github.com/cosmos/vuepress-theme-cosmos/pull/151
2020-11-04 05:39:29 +00:00
Marko
0e9798f39f ci: use gh pages (#5609) 2020-11-03 18:37:01 +01:00
Anton Kaliaev
627f7b5989 light: run detector for sequentially validating light client (#5538)
Closes #5445
2020-11-02 12:42:03 +04:00
Anton Kaliaev
8e6194626e light: model-based tests (#5461)
This is the first iteration of model-based testing in Go Tendermint. The test runner is using the static JSON fixtures located under the ./json directory. In the future, the Rust testgen binary will be used to generate those (given the static intermediate scenarios and the test seed, which will be published along with each testgen release).

Closes: #5322
2020-11-02 12:07:18 +04:00
Erik Grinaker
886235311f p2p: remove p2p.FuzzedConnection and its config settings (#5598)
Removes `p2p.FuzzedConnection`, since it does not appear to be in use. While these sorts of test wrappers may be useful, they should be injected directly instead of bleeding through into the main application configuration. We'll implement something similar if and when necessary, for the new P2P abstractions in #2067.
2020-10-30 14:47:21 +00:00
Erik Grinaker
b5d9da5d89 docs: add ADR on P2P refactor scope (#5592)
[Rendered](https://github.com/tendermint/tendermint/blob/erik/adr-p2p-refactor/docs/architecture/adr-061-p2p-refactor-scope.md)

This summarizes recent discussions on the scope of the upcoming P2P refactor.
2020-10-30 12:32:06 +00:00
Anton Kaliaev
bcf9b0aa39 blockchain/v2: make the removal of an already removed peer a noop (#5553)
also, since multiple StopPeerForError calls may be executed in parallel,
only execute StopPeerForError once

Closes #5541
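
A minimal sketch of the "only once" guard with invented names: wrapping the stop call in a `sync.Once` collapses concurrent error reports for the same peer into a single stop.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

type peerStopper struct {
	once sync.Once
}

func (p *peerStopper) stopPeerForError(err error) {
	p.once.Do(func() {
		fmt.Println("stopping peer:", err)
	})
}

func main() {
	var p peerStopper
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			p.stopPeerForError(errors.New("bad block"))
		}()
	}
	wg.Wait() // "stopping peer" is printed exactly once
}
```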
2020-10-30 10:31:22 +00:00
Marko
cafad28293 privval: make response values non nullable (#5583)
## Description

make response values non nullable in privval

Does this need a changelog for master?

Closes: #5581 

cc @tarcieri
2020-10-28 15:16:38 +00:00
dependabot[bot]
cd41091b18 build(deps): Bump codecov/codecov-action from v1.0.13 to v1.0.14 (#5582)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from v1.0.13 to v1.0.14.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Commits](https://github.com/codecov/codecov-action/compare/v1.0.13...7d5dfa54903bd909319c580a00535b483d1efcf3)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-10-28 12:32:25 +01:00
Erik Grinaker
53022220f6 test: fix various E2E test issues (#5576)
* Don't use state sync for nodes starting at initial height.
* Also remove stopped containers when cleaning up.
* Start nodes in order of startAt, mode, name to avoid full nodes starting before their seeds.
* Tweak network waiting to avoid halts caused by validator changes and perturbations.
* Disable most tests for seed nodes, which aren't always able to join consensus.
* Disable `blockchain/v2` due to known bugs.
2020-10-27 16:22:00 +00:00
Callum Waters
651d8f087b evidence: don't send committed evidence and ignore inbound evidence that is already committed (#5574) 2020-10-27 17:11:58 +01:00
Tess Rinearson
1488b0a33b docs: remove DEV_SESSIONS list (#5579) 2020-10-27 16:31:51 +01:00
Marko
3be4800810 docs: footer cleanup (#5457)
## Description

Switch maintainer information

Closes: #XXX
2020-10-27 15:13:13 +00:00
Marko
eeb92a632b ci: tests (#5577)
- use matrix builds to run multiple test jobs
- upload code coverage once not 4 times (produce more accurate codecov reports)
2020-10-27 15:44:19 +01:00
Marko
d0db59e16c ci: add goreleaser (#5527)
Co-authored-by: Erik Grinaker <erik@interchain.berlin>
Co-authored-by: Alessio Treglia <alessio@tendermint.com>
2020-10-27 13:54:53 +01:00
Marko
38587d83c4 types: move MakeBlock to block.go (#5573) 2020-10-27 10:00:31 +01:00
dependabot[bot]
80b9eb8f0f build(deps): Bump golangci/golangci-lint-action from v2.2.1 to v2.3.0 (#5571)
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from v2.2.1 to v2.3.0.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v2.2.1...e868220d9fd3b523f1a8fcfb69749e8c7521ba14)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-10-26 12:05:00 +01:00
Erik Grinaker
7daf6a1a03 test: disable E2E misbehaviors due to bugs (#5569)
Disables misbehaviors in E2E testnets due to failures caused by #5554 and #5560. Should be re-enabled once these are fixed.
2020-10-26 10:28:11 +00:00
Erik Grinaker
10dda219a1 test: fix handling of start height in generated E2E testnets (#5563)
In #5488 the E2E testnet generator changed to setting explicit `StartAt` heights for initial nodes. This broke the runner, which expected all initial nodes to have `StartAt: 0`, as well as validator set scheduling in the generator. Testnet loading now normalizes initial nodes to have `StartAt: 0`.

This also tweaks waiting for misbehavior heights to only use an additional wait if there actually is any misbehavior in the testnet, and to output information when waiting.
2020-10-26 10:09:46 +00:00
Callum Waters
d1ef5028a0 block: fix max commit sig size (#5567) 2020-10-26 10:18:59 +01:00
Erik Grinaker
20d66803c5 abci/grpc: fix ordering of sync/async callback combinations (#5556)
Fixes #5540, fixes #2965. This is a hack that patches over the problem, but really the whole async handling in gRPC should be redesigned, as should ReqRes callback dispatch.
2020-10-23 12:52:01 +00:00
Callum Waters
50b91867c3 test: add evidence e2e tests (#5488) 2020-10-23 12:33:08 +02:00
Erik Grinaker
d11e5993b1 test: tag E2E Docker resources and autoremove them (#5558)
Fixes #5555.
2020-10-23 08:17:15 +00:00
Erik Grinaker
99f645d200 github: only notify nightly E2E failures once (#5559) 2020-10-23 00:14:59 +02:00
Erik Grinaker
e0e006d10f test: run remaining E2E testnets on run-multiple.sh failure (#5557)
Fixes #5542.
2020-10-22 20:53:27 +00:00
dependabot[bot]
d4f906609a build(deps): Bump google.golang.org/grpc from 1.32.0 to 1.33.1 (#5544)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.32.0 to 1.33.1.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.32.0...v1.33.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Erik Grinaker <erik@interchain.berlin>
2020-10-21 18:25:08 +02:00
dependabot[bot]
a24a58207a build(deps): Bump gaurav-nelson/github-action-markdown-link-check from 1.0.7 to 1.0.8 (#5543)
Bumps [gaurav-nelson/github-action-markdown-link-check](https://github.com/gaurav-nelson/github-action-markdown-link-check) from 1.0.7 to 1.0.8.
- [Release notes](https://github.com/gaurav-nelson/github-action-markdown-link-check/releases)
- [Commits](https://github.com/gaurav-nelson/github-action-markdown-link-check/compare/1.0.7...e3c371c731b2f494f856dc5de7f61cea4d519907)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-10-21 13:14:15 +02:00
Erik Grinaker
8c5fe166a6 test: enable restart/kill perturbations in E2E tests (#5537)
When #5536 lands we can re-enable restart/kill perturbations in E2E tests.
2020-10-20 19:30:07 +00:00
Erik Grinaker
17383be202 consensus: open target WAL as read/write during autorepair (#5536)
Fixes #5422. That turned out to be a whole lot easier than expected.
2020-10-20 18:20:41 +00:00
Callum Waters
257b34b459 evidence: don't gossip consensus evidence too soon (#5528)
and don't return errors on seeing the same evidence twice
2020-10-20 18:47:40 +02:00
Erik Grinaker
9e6248c0d7 test: enable blockchain v2 in E2E testnet generator (#5533)
When #5499 and #5530 land, we can re-enable v2 in the E2E testnet generator (and thus the nightly E2E tests).
2020-10-20 11:44:22 +00:00
Anton Kaliaev
b4adeab8b9 blockchain/v2: fix panic: processed height X+1 but expected height X (#5530)
Before: scheduler receives psBlockProcessed event, but does not mark block as processed because peer timed out (or was removed for other reasons) and all associated blocks were rescheduled.

After: scheduler receives psBlockProcessed event and marks block as processed in any case (even if peer who provided this block errors).

Closes #5387
2020-10-20 14:29:36 +04:00
Anton Kaliaev
d785036e0b blockchain/v2: fix "panic: duplicate block enqueued by processor" (#5499)
When a peer is stopped due to some network issue, the Reactor calls scheduler#handleRemovePeer, which removes the peer from the scheduler. BUT the peer stays in the processor, which sometimes could lead to "duplicate block enqueued by processor" panic WHEN the same block is requested by the scheduler again from a different peer. The solution is to return scPeerError, which will be propagated to the processor. The processor will clean up the blocks associated with the peer in purgePeer.

Closes #5513, #5517
2020-10-20 14:19:00 +04:00
dependabot[bot]
c206d9b680 build(deps): Bump codecov/codecov-action from v1.0.13 to v1.0.14 (#5525)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from v1.0.13 to v1.0.14.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Commits](https://github.com/codecov/codecov-action/compare/v1.0.13...7d5dfa54903bd909319c580a00535b483d1efcf3)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Erik Grinaker <erik@interchain.berlin>
2020-10-20 11:04:59 +02:00
Erik Grinaker
3dabfbeae0 test: enable ABCI gRPC client in E2E testnets (#5521)
Once #5520 lands, we can re-enable gRPC ABCI protocol in the E2E testnets.
2020-10-20 08:27:58 +00:00
Erik Grinaker
047267bbc8 abci/grpc: return async responses in order (#5520)
Fixes #5439. This is really a workaround for #5519 (unless we require async implementations to return ordered responses, but that kind of defeats the purpose of having an async API).
2020-10-20 05:53:36 +00:00
dependabot[bot]
6e16df8547 build(deps): Bump github.com/spf13/cobra from 1.1.0 to 1.1.1 (#5526)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.1.0 to 1.1.1.
Notable changes in v1.1.1:
- Fix: yaml.v2 2.3.0 contained an unintended breaking change. This release reverts to yaml.v2 v2.2.8, which has recent critical CVE fixes but not the breaking changes. See spf13/cobra#1259 for context.
- Fix: correct internal formatting for go-md2man v2 (which caused man page generation to be broken). See spf13/cobra#1049 for context.

- [Release notes](https://github.com/spf13/cobra/releases)
- [Commits](https://github.com/spf13/cobra/compare/v1.1.0...v1.1.1)
2020-10-19 11:49:51 +00:00
Marko
a5d3e19b4a types: rename json parts to part_set_header (#5523) 2020-10-19 09:16:45 +02:00
dependabot[bot]
6ef7b316cd build(deps): Bump github.com/prometheus/client_golang from 1.7.1 to 1.8.0 (#5515)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.7.1 to 1.8.0.
Notable changes in 1.8.0 (2020-10-15):
- [CHANGE] API client: Use `time.Time` rather than `string` for timestamps in `RuntimeinfoResult`. #777
- [FEATURE] Export `MetricVec` to facilitate implementation of vectors of custom `Metric` types. #803
- [FEATURE] API client: Support the `/status/tsdb` endpoint. #773
- [ENHANCEMENT] API client: Enable GET fallback on status code 501. #802
- [ENHANCEMENT] Remove `Metric` references after reslicing to free up more memory. #784

- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/master/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.7.1...v1.8.0)
2020-10-16 13:58:26 +00:00
Erik Grinaker
e7184c499d statesync: check all necessary heights when adding snapshot to pool (#5516)
Fixes #5511.
2020-10-16 11:47:12 +00:00
Erik Grinaker
f0d4ddcf3c test: tweak E2E tests for nightly runs (#5512) 2020-10-16 12:10:29 +02:00
Tess Rinearson
7ccee61557 docs/tools: update url for kms repo (#5510)
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2020-10-16 09:33:49 +00:00
dependabot[bot]
2fff29f340 build(deps): Bump github.com/spf13/cobra from 1.0.0 to 1.1.0 (#5505)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.0.0 to 1.1.0.
Notable changes in v1.1.0:
- Extend Go completions and revamp zsh comp (#1070)
- Add completion for help command (#1136)
- Complete subcommands when TraverseChildren is set (#1171)
- Fix stderr printing functions (#894)
- fix: fish output redirection (#1247)

- [Release notes](https://github.com/spf13/cobra/releases)
- [Changelog](https://github.com/spf13/cobra/blob/master/CHANGELOG.md)
- [Commits](https://github.com/spf13/cobra/compare/v1.0.0...v1.1.0)
2020-10-16 09:25:18 +00:00
Erik Grinaker
b6979e7fbd test: clean up E2E test volumes using a container (#5509) 2020-10-15 20:59:16 +02:00
dependabot[bot]
ca1891944b build(deps): Bump github.com/golang/protobuf from 1.4.2 to 1.4.3 (#5506)
Bumps [github.com/golang/protobuf](https://github.com/golang/protobuf) from 1.4.2 to 1.4.3.
Notable changes in v1.4.3:
- jsonpb: Fix marshaling of Duration (#1221)
- proto: convert integer to rune before converting to string (#1210)

- [Release notes](https://github.com/golang/protobuf/releases)
- [Commits](https://github.com/golang/protobuf/compare/v1.4.2...v1.4.3)
2020-10-15 16:33:55 +00:00
Tess Rinearson
ce824d6fad contributing: include instructions for a release candidate (#5498) 2020-10-15 12:14:02 +02:00
Marko
c6f8f0aefc crypto: add in secp256k1 support (#5500)
Secp256k1 was removed in the protobuf migration; this PR adds it back in order to provide this functionality for users (band).

Closes: #5495
2020-10-15 10:10:06 +02:00
Erik Grinaker
8a67968416 github: add nightly E2E testnet action (#5480) 2020-10-14 16:55:02 +02:00
Erik Grinaker
61ee6b0b5d github: rename e2e jobs (#5502) 2020-10-14 16:35:03 +02:00
Marko
1a74e01d18 update changelog for rc5 (#5501)
## Description

update changelog with updates in rc5

Closes: #XXX
2020-10-14 12:55:46 +00:00
Marko
741a515f5b tx: reduce function to one parameter (#5493)
Co-authored-by: Callum Waters <cmwaters19@gmail.com>
2020-10-13 17:38:00 +02:00
Callum Waters
55e8ccab21 block: use commit sig size instead of vote size (#5490) 2020-10-13 17:21:37 +02:00
Marko
e1644d00c5 mempool: length prefix txs when getting them from mempool (#5483)
## Description

In protobuf, a `[]byte` field is length-prefixed with a varint. When adding txs to the block we were not taking this overhead into account.


Closes: #XXX
2020-10-13 10:33:21 +00:00
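
The accounting issue in the commit above can be shown without any Tendermint code: a protobuf `bytes` field carries a field tag and a varint-encoded length before the payload, so summing raw tx lengths undercounts the encoded block data. A minimal, self-contained sketch of the per-tx overhead follows; the single-byte field tag is an assumption for a field number below 16.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// protoEncodedTxSize returns the number of bytes a raw transaction occupies
// once embedded in a protobuf message: one field-tag byte (assuming a field
// number < 16), a varint-encoded length prefix, and the payload itself.
func protoEncodedTxSize(tx []byte) int {
	var buf [binary.MaxVarintLen64]byte
	prefix := binary.PutUvarint(buf[:], uint64(len(tx)))
	return 1 + prefix + len(tx)
}

func main() {
	txs := [][]byte{
		make([]byte, 100),
		make([]byte, 300), // a 300-byte tx needs a 2-byte varint length prefix
	}

	raw, encoded := 0, 0
	for _, tx := range txs {
		raw += len(tx)
		encoded += protoEncodedTxSize(tx)
	}

	// Reaping by raw size alone (400 bytes here) understates the
	// on-the-wire block data size (405 bytes here).
	fmt.Println("raw:", raw, "encoded:", encoded)
}
```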
Marko
346aa14db5 fix lint failures with 1.31 (#5489) 2020-10-13 10:22:53 +02:00
Anton Kaliaev
7121f68f25 light/rpc: fix ABCIQuery (#5375)
Closes #5106
2020-10-12 16:36:37 +04:00
dependabot[bot]
710ed63f55 build(deps): Bump technote-space/get-diff-action from v3 to v4 (#5485)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Marko Baricevic <marbar3778@yahoo.com>
2020-10-12 14:13:45 +02:00
dependabot[bot]
3de8d14baa build(deps): Bump golangci/golangci-lint-action from v2.2.0 to v2.2.1 (#5486)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Marko <marbar3778@yahoo.com>
2020-10-12 13:28:28 +02:00
dependabot[bot]
6de4bb1b6b build(deps): Bump actions/cache from v2.1.1 to v2.1.2 (#5487)
Bumps [actions/cache](https://github.com/actions/cache) from v2.1.1 to v2.1.2.
- [Release notes](https://github.com/actions/cache/releases)
- [Commits](https://github.com/actions/cache/compare/v2.1.1...d1255ad9362389eac595a9ae406b8e8cb3331f16)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-10-12 13:25:32 +02:00
Callum Waters
6a2a71be07 correctly calculate evidence data size (#5482) 2020-10-12 11:28:41 +02:00
Erik Grinaker
260cc5dd69 test/e2e: add random testnet generator (#5479)
Closes #5291. Adds a randomized testnet generator. The nightly CI job will be submitted separately. A few of the testnets can be a bit flaky, even after disabling known-faulty behavior and making minor tweaks, and the larger networks may be too resource-intensive to run in CI; this will be optimized separately.
2020-10-09 12:33:48 +00:00
Anton Kaliaev
12ebd7735a light: cross-check the very first header (#5429)
Closes #5428
2020-10-09 14:29:22 +04:00
Alessio Treglia
93b9bab932 simplified reproducible buildsystem (#5477) 2020-10-09 11:23:09 +02:00
Marko
82e4693cc5 abci: remove setOption (#5447)
Remove Response/Request SetOption from ABCI.

Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
2020-10-08 19:12:12 +02:00
dependabot[bot]
169fa6dcdb build(deps-dev): Bump watchpack from 1.7.4 to 2.0.0 in /docs (#5470)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Marko <marbar3778@yahoo.com>
2020-10-07 21:08:11 +02:00
Callum Waters
302aec6dcc evidence: use bytes instead of quantity to limit size (#5449)
## Description

Closes: #5432
2020-10-07 09:29:52 +00:00
Marko
10f30f8a99 docs: make /master the default (#5474) 2020-10-06 21:55:51 +02:00
Marko
07bb7fb445 ci/e2e: avoid running job when no go files are touched (#5471) 2020-10-06 14:29:29 +02:00
Cyrus Goh
46252a9b69 docs: docs-staging → master (#5468)
## Description

- Makefile: branches and path prefixes from `versions` work on `docs-staging` https://github.com/tendermint/tendermint/pull/5467
- Expects to have:
   - https://docs.tendermint.com/master
   - https://docs.tendermint.com/v0.32
   - https://docs.tendermint.com/v0.33
2020-10-06 07:36:46 +00:00
Erik Grinaker
8d28e7467c test: add E2E test for node peering (#5465)
This adds a test case that was missing from the old P2P tests removed in #5453: it makes sure that all nodes are able to peer with each other, regardless of how they discover peers.

Fixes #2795, since the default CI testnet uses a combination of (partially meshed) persistent peers and PEX-based seed nodes.
2020-10-06 06:00:20 +00:00
Erik Grinaker
b894f07380 test: remove P2P tests (#5453) 2020-10-05 17:52:01 +02:00
Callum Waters
6c77207055 docs: fix links to adr 56 (#5464)
## Description

fix broken link from a previous change
2020-10-05 15:29:35 +00:00
Erik Grinaker
a6b22cfa97 circleci: remove Gitian reproducible_builds job (#5462) 2020-10-05 17:16:05 +02:00
Marko
d7d0ffea13 fix RPC blockresults return (#5459)
## Description

In block_results we use the proto definition of abciResponses (2672b91ab0/rpc/core/blocks.go (L152-L155)). This leads to using the proto definition of the pubkey, which is an interface in Go (oneof). The interface must be registered with the JSON encoder for decoding to work correctly.

A clearer divide between proto types and native types is needed.

Closes: #XXX
2020-10-05 13:55:27 +00:00
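
The failure described above is the classic problem of decoding an interface-typed field (the `oneof` pubkey) from JSON: without a registry mapping a type tag to a concrete Go type, the decoder cannot pick an implementation. The sketch below is a generic, self-contained illustration of that registration pattern using only the standard library; it is not the Tendermint JSON codec, whose actual registration API is not shown in this diff.

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// PubKey is a stand-in for the interface embedded in the RPC response.
type PubKey interface{ TypeName() string }

type Ed25519PubKey struct{ Key string }

func (Ed25519PubKey) TypeName() string { return "ed25519" }

// registry maps a wire-level type tag to a concrete Go type, which is what
// a JSON codec needs in order to decode an interface field.
var registry = map[string]reflect.Type{}

func register(name string, v PubKey) { registry[name] = reflect.TypeOf(v) }

type taggedPubKey struct {
	Type  string          `json:"type"`
	Value json.RawMessage `json:"value"`
}

func decodePubKey(data []byte) (PubKey, error) {
	var t taggedPubKey
	if err := json.Unmarshal(data, &t); err != nil {
		return nil, err
	}
	typ, ok := registry[t.Type]
	if !ok {
		// This is the failure mode the PR fixes: an unregistered concrete type.
		return nil, fmt.Errorf("unregistered pubkey type %q", t.Type)
	}
	v := reflect.New(typ)
	if err := json.Unmarshal(t.Value, v.Interface()); err != nil {
		return nil, err
	}
	return v.Elem().Interface().(PubKey), nil
}

func main() {
	register("ed25519", Ed25519PubKey{})
	pk, err := decodePubKey([]byte(`{"type":"ed25519","value":{"Key":"abc"}}`))
	fmt.Println(pk, err)
}
```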
Erik Grinaker
7e27e9b852 test: add GitHub action for end-to-end tests (#5452)
Partial fix for #5291.
2020-10-05 12:44:35 +02:00
Erik Grinaker
090afe30f9 test: add basic end-to-end test cases (#5450)
Partial fix for #5291.

This adds a basic set of test cases for core network invariants. Although small, it is sufficient to replace and extend the current set of P2P tests. Further test cases can be added later.
2020-10-05 10:23:12 +00:00
Tess Rinearson
b9d98a61c2 changelog: add missing date to v0.33.5 release, fix indentation (#5454)
I forgot to add the date when we cut 0.33.5. This fixes that. It also fixes a header indentation issue for 0.33.8.
2020-10-05 11:55:25 +02:00
Erik Grinaker
250c3aa92e test: add end-to-end testing framework (#5435)
Partial fix for #5291. For details, see [README.md](https://github.com/tendermint/tendermint/blob/erik/e2e-tests/test/e2e/README.md) and [RFC-001](https://github.com/tendermint/tendermint/blob/master/docs/rfc/rfc-001-end-to-end-testing.md).

This only includes a single test case under `test/e2e/tests/`, as a proof of concept; additional test cases will be submitted separately. A randomized testnet generator will also be submitted separately; there are currently just a handful of static testnets under `test/e2e/networks/`. This will eventually replace the current P2P tests and run in CI.
2020-10-05 09:35:01 +00:00
Callum Waters
a4b7018732 light: expand on errors and docs (#5443) 2020-10-02 20:05:15 +02:00
Callum Waters
bf9e36d02d docs: revise ADR 56, documenting short term decision around amnesia evidence (#5440) 2020-10-02 19:21:22 +02:00
Erik Grinaker
08708046a7 privval: fix ping message encoding (#5441)
Fixes #5371.
2020-10-01 14:46:21 +00:00
Marko
dcdf9bbff8 ci: docker remove circleci and add github action (#5420) 2020-10-01 13:25:20 +02:00
Callum Waters
433bdf5063 consensus: check block parts don't exceed maximum block bytes (#5431) 2020-10-01 09:59:19 +02:00
Erik Grinaker
2a0fa665fd config: set statesync.rpc_servers when generating config file (#5433)
Required for #5291, to generate configuration files with state sync RPC servers.
2020-10-01 04:21:11 +00:00
Erik Grinaker
b0130b4661 privval: allow passing options to NewSignerDialerEndpoint (#5434)
Required for #5291 to set timeouts for remote signers.
2020-09-30 16:45:46 +00:00
Anton Kaliaev
01c32c62e8 docs: specify TM version in go tutorials (#5427)
Closes https://github.com/tendermint/tendermint/issues/5425
2020-09-30 09:10:34 +00:00
Anton Kaliaev
99aea7b079 mempool: fix nil pointer dereference (#5412)
Previously, the second `next` could return nil, which was the reason for the panic on line 275:

memTx := next.Value.(*mempoolTx)

Closes #5408
2020-09-29 12:15:54 +04:00
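
A self-contained sketch of the class of bug described above; the standard library's `container/list` stands in here for the mempool's concurrent list. Advancing past the last element yields `nil`, so the element must be checked before the `.Value` type assertion that panicked.

```go
package main

import (
	"container/list"
	"fmt"
)

type mempoolTx struct{ tx string }

func main() {
	l := list.New()
	l.PushBack(&mempoolTx{tx: "only element"})

	first := l.Front()

	// Advancing past the last element returns nil; dereferencing
	// next.Value without this check is the panic the fix addresses.
	next := first.Next()
	if next == nil {
		fmt.Println("no further txs; stop iterating instead of panicking")
		return
	}
	memTx := next.Value.(*mempoolTx)
	fmt.Println(memTx.tx)
}
```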
Anton Kaliaev
2672b91ab0 rpc/core: more docs and a test for /blockchain endpoint (#5417)
Closes #5339
2020-09-28 15:13:00 +00:00
Callum Waters
4f79930c12 blockchain: remove duplication of validate basic (#5418) 2020-09-28 17:02:46 +02:00
Anton Kaliaev
1635d1339c state: more test cases for block validation (#5415)
Closes #2589
2020-09-28 11:57:35 +00:00
Marko
9df66eaa7d codecov: disable annotations (#5413) 2020-09-28 11:16:25 +02:00
Marko
b802e9c9c3 docs: add explanation of p2p configuration options (#5397)
## Description

Add information on p2p settings in documentation 

Closes: #XXX
2020-09-28 07:33:09 +00:00
Marko
95367eaf51 blockchain/v1: add noBlockResponse handling (#5401)
## Description

Add simple `NoBlockResponse` handling to blockchain reactor v1. I tested before and after with Erik's e2e tests, and with the changes applied I was no longer able to reproduce the inability to sync.

Closes: #5394
2020-09-28 07:20:54 +00:00
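
A hedged, self-contained sketch of what the `NoBlockResponse` handling above means for a block-sync reactor: when a peer asks for a height the node does not have, reply with an explicit "no block" message instead of staying silent, so the requester can retry elsewhere rather than stall. The types and send function here are local stand-ins, not the reactor's real API.

```go
package main

import "fmt"

// Local stand-ins for the reactor's request/response messages; the real
// protobuf types are not reproduced here.
type blockRequest struct{ Height int64 }

type blockResponse struct {
	Height int64
	Block  []byte
}

type noBlockResponse struct{ Height int64 }

// blockStore is a minimal stand-in for the node's block store.
type blockStore map[int64][]byte

// handleBlockRequest answers a peer's request. When the requested height is
// missing it sends an explicit noBlockResponse, so the requesting peer can
// move on instead of waiting for a block that will never arrive.
func handleBlockRequest(store blockStore, req blockRequest, send func(msg interface{})) {
	if block, ok := store[req.Height]; ok {
		send(blockResponse{Height: req.Height, Block: block})
		return
	}
	send(noBlockResponse{Height: req.Height})
}

func main() {
	store := blockStore{10: []byte("block-10")}
	send := func(msg interface{}) { fmt.Printf("%T %+v\n", msg, msg) }

	handleBlockRequest(store, blockRequest{Height: 10}, send) // block exists
	handleBlockRequest(store, blockRequest{Height: 11}, send) // missing -> noBlockResponse
}
```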
Erik Grinaker
7eb4e5c0b1 docs: update state sync config with discovery_time (#5405) 2020-09-25 09:38:22 +00:00
Callum Waters
f02987e7bc simplify commit and validators rpc calls (#5393) 2020-09-25 11:19:04 +02:00
Anton Kaliaev
7d2b3e305e docs: minor tweaks (#5404)
* docs: fix /validators description

Refs https://github.com/tendermint/spec/pull/169

* consensus: remove nil err from logging statement

* update UPGRADING.md

* note about LightBlocks
2020-09-25 11:05:56 +04:00
Callum Waters
ca8a404c7c cli: light home dir should default to where the full node default is (#5392) 2020-09-25 08:45:56 +02:00
QuantumExplorer
ecd3cbc2bd fix a few typos (#5402) 2020-09-25 08:38:28 +04:00
1136 changed files with 57664 additions and 113363 deletions

23
.github/CODEOWNERS vendored
View File

@@ -1,12 +1,25 @@
# CODEOWNERS: https://help.github.com/articles/about-codeowners/
# Everything goes through the following "global owners" by default.
# Everything goes through the following "global owners" by default.
# Unless a later match takes precedence, these three will be
# requested for review when someone opens a PR.
# requested for review when someone opens a PR.
# Note that the last matching pattern takes precedence, so
# global owners are only requested if there isn't a more specific
# codeowner specified below. For this reason, the global codeowners
# codeowner specified below. For this reason, the global codeowners
# are often repeated in package-level definitions.
* @ebuchman @tendermint/tendermint-engineering
* @alexanderbez @ebuchman @erikgrinaker @melekes @tessr
# Overrides for tooling packages
.github/ @marbar3778 @alexanderbez @ebuchman @erikgrinaker @melekes @tessr
DOCKER/ @marbar3778 @alexanderbez @ebuchman @erikgrinaker @melekes @tessr
# Overrides for core Tendermint packages
abci/ @marbar3778 @alexanderbez @ebuchman @erikgrinaker @melekes @tessr
evidence/ @cmwaters @ebuchman @melekes @tessr
light/ @cmwaters @melekes @ebuchman @tessr
# Overrides for docs
*.md @marbar3778 @alexanderbez @ebuchman @erikgrinaker @melekes @tessr
docs/ @marbar3778 @alexanderbez @ebuchman @erikgrinaker @melekes @tessr
/spec @ebuchman @tendermint/tendermint-research @tendermint/tendermint-engineering

View File

@@ -1,18 +1,13 @@
---
name: Bug report
name: Bug Report
about: Create a report to help us squash bugs!
---
<!--
Please fill in as much of the template below as you can.
If you have general questions, please create a new discussion:
https://github.com/tendermint/tendermint/discussions
Be ready for followup questions, and please respond in a timely manner. We might
ask you to provide additional logs and data (tendermint & app).
Be ready for followup questions, and please respond in a timely
manner. We might ask you to provide additional logs and data (tendermint & app).
-->
**Tendermint version** (use `tendermint version` or `git rev-parse --verify HEAD` if installed from source):

View File

@@ -1,5 +0,0 @@
blank_issues_enabled: true
contact_links:
- name: Ask a question
url: https://github.com/tendermint/tendermint/discussions
about: Please ask and answer questions here

View File

@@ -1,5 +1,5 @@
---
name: Feature request
name: Feature Request
about: Create a proposal to request a feature
---
@@ -25,3 +25,12 @@ Are there any disadvantages of including this feature? -->
## Proposal
<!-- Detailed description of requirements of implementation -->
____
#### For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate contributors tagged
- [ ] Contributor assigned/self-assigned

View File

@@ -1,28 +0,0 @@
---
name: Protocol change proposal
about: Create a proposal to request a change to the protocol
---
<!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for opening an issue! ✰
v Before smashing the submit button please review the template.
v Word of caution: Under-specified proposals may be rejected summarily
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
# Protocol Change Proposal
## Summary
<!-- Short, concise description of the proposed change -->
## Problem Definition
<!-- Why do we need this change?
What problems may be addressed by introducing this change?
What benefits does Tendermint stand to gain by including this change?
Are there any disadvantages of including this change? -->
## Proposal
<!-- Detailed description of requirements of implementation -->

View File

@@ -1,32 +1,7 @@
<!--
## Description
Please add a reference to the issue that this PR addresses and indicate which
files are most critical to review. If it fully addresses a particular issue,
please include "Closes #XXX" (where "XXX" is the issue number).
_Please add a description of the changes that this PR introduces and the files that
are the most critical to review._
If this PR is non-trivial/large/complex, please ensure that you have either
created an issue that the team's had a chance to respond to, or had some
discussion with the team prior to submitting substantial pull requests. The team
can be reached via GitHub Discussions or the Cosmos Network Discord server in
the #tendermint-core channel. GitHub Discussions is preferred over Discord as it
allows us to keep track of conversations topically.
https://github.com/tendermint/tendermint/discussions
If the work in this PR is not aligned with the team's current priorities, please
be advised that it may take some time before it is merged - especially if it has
not yet been discussed with the team.
See the project board for the team's current priorities:
https://github.com/orgs/tendermint/projects/15/views/5
-->
---
#### PR checklist
- [ ] Tests written/updated, or no tests needed
- [ ] `CHANGELOG_PENDING.md` updated, or no changelog entry needed
- [ ] Updated relevant documentation (`docs/`) and code comments, or no
documentation updates needed
Closes: #XXX

16
.github/auto-comment.yml vendored Normal file
View File

@@ -0,0 +1,16 @@
pullRequestOpened: |
:wave: Thanks for creating a PR!
Before we can merge this PR, please make sure that all the following items have been
checked off. If any of the checklist items are not applicable, please leave them but
write a little note why.
- [ ] Wrote tests
- [ ] Updated CHANGELOG_PENDING.md
- [ ] Linked to Github issue with discussion and accepted design OR link to spec that describes this work.
- [ ] Updated relevant documentation (`docs/`) and code comments
- [ ] Re-reviewed `Files changed` in the Github PR explorer
- [ ] Applied Appropriate Labels
Thank you for your contribution to Tendermint! :rocket:

14
.github/codecov.yml vendored
View File

@@ -5,14 +5,19 @@ coverage:
status:
project:
default:
threshold: 20%
patch: off
threshold: 1%
patch: on
changes: off
github_checks:
annotations: false
comment: false
comment:
layout: "diff, files"
behavior: default
require_changes: no
require_base: no
require_head: yes
ignore:
- "docs"
@@ -20,6 +25,3 @@ ignore:
- "scripts"
- "**/*.pb.go"
- "libs/pubsub/query/query.peg.go"
- "*.md"
- "*.rst"
- "*.yml"

View File

@@ -3,73 +3,26 @@ updates:
- package-ecosystem: github-actions
directory: "/"
schedule:
interval: weekly
target-branch: "main"
interval: daily
time: "11:00"
open-pull-requests-limit: 10
labels:
- T:dependencies
- S:automerge
- package-ecosystem: github-actions
directory: "/"
schedule:
interval: weekly
target-branch: "v0.37.x"
open-pull-requests-limit: 10
labels:
- T:dependencies
- S:automerge
- package-ecosystem: github-actions
directory: "/"
schedule:
interval: weekly
target-branch: "v0.34.x"
open-pull-requests-limit: 10
labels:
- T:dependencies
- S:automerge
- package-ecosystem: npm
directory: "/docs"
schedule:
interval: weekly
interval: daily
time: "11:00"
open-pull-requests-limit: 10
###################################
##
## Update All Go Dependencies
- package-ecosystem: gomod
directory: "/"
schedule:
interval: weekly
target-branch: "main"
open-pull-requests-limit: 10
labels:
- T:dependencies
- S:automerge
reviewers:
- fadeev
- package-ecosystem: gomod
directory: "/"
schedule:
interval: daily
target-branch: "v0.37.x"
# Only allow automated security-related dependency updates on release
# branches.
open-pull-requests-limit: 0
time: "11:00"
open-pull-requests-limit: 10
reviewers:
- melekes
- tessr
- erikgrinaker
labels:
- T:dependencies
- S:automerge
- package-ecosystem: gomod
directory: "/"
schedule:
interval: daily
target-branch: "v0.34.x"
# Only allow automated security-related dependency updates on release
# branches.
open-pull-requests-limit: 0
labels:
- T:dependencies
- S:automerge

View File

@@ -1,6 +0,0 @@
<!--
If you want to ask a general question, please create a new discussion instead of
an issue: https://github.com/tendermint/tendermint/discussions
-->

8
.github/linter/markdownlint.yml vendored Normal file
View File

@@ -0,0 +1,8 @@
default: true,
MD007: { "indent": 4 }
MD013: false
MD024: { siblings_only: true }
MD025: false
MD033: { no-inline-html: false }
no-hard-tabs: false
whitespace: false

View File

@@ -1,15 +0,0 @@
# markdownlint configuration for Super-Linter
# - https://github.com/DavidAnson/markdownlint
# - https://github.com/github/super-linter
# Default state for all rules
default: true
# See https://github.com/DavidAnson/markdownlint#rules--aliases for rules
MD007: {"indent": 4}
MD013: false
MD024: {siblings_only: true}
MD025: false
MD033: {no-inline-html: false}
no-hard-tabs: false
whitespace: false

View File

@@ -1,9 +0,0 @@
---
# Default rules for YAML linting from super-linter.
# See: See https://yamllint.readthedocs.io/en/stable/rules.html
extends: default
rules:
document-end: disable
document-start: disable
line-length: disable
truthy: disable

35
.github/mergify.yml vendored
View File

@@ -1,35 +1,10 @@
queue_rules:
- name: default
conditions:
- base=main
- label=S:automerge
pull_request_rules:
- name: Automerge to main
- name: Automerge to master
conditions:
- base=main
- base=master
- label=S:automerge
actions:
queue:
merge:
method: squash
name: default
commit_message_template: |
{{ title }} (#{{ number }})
{{ body }}
- name: backport patches to v0.37.x branch
conditions:
- base=main
- label=S:backport-to-v0.37.x
actions:
backport:
branches:
- v0.37.x
- name: backport patches to v0.34.x branch
conditions:
- base=main
- label=S:backport-to-v0.34.x
actions:
backport:
branches:
- v0.34.x
strict: true
commit_message: title+body

View File

@@ -1,82 +0,0 @@
name: Build
# Tests runs different tests (test_abci_apps, test_abci_cli, test_apps)
# This workflow runs on every push to main or release branch and every pull requests
# All jobs will pass without running if no *{.go, .mod, .sum} files have been modified
on:
pull_request:
push:
branches:
- main
- release/**
jobs:
build:
name: Build
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
goarch: ["arm", "amd64"]
goos: ["linux"]
timeout-minutes: 5
steps:
- uses: actions/setup-go@v3
with:
go-version: "1.18"
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/*.go
"!test/"
go.mod
go.sum
Makefile
- name: install
run: GOOS=${{ matrix.goos }} GOARCH=${{ matrix.goarch }} make build
if: "env.GIT_DIFF != ''"
test_abci_cli:
runs-on: ubuntu-latest
needs: build
timeout-minutes: 5
steps:
- uses: actions/setup-go@v3
with:
go-version: "1.18"
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/*.go
go.mod
go.sum
- name: install
run: make install_abci
if: "env.GIT_DIFF != ''"
- run: abci/tests/test_cli/test.sh
shell: bash
if: "env.GIT_DIFF != ''"
test_apps:
runs-on: ubuntu-latest
needs: build
timeout-minutes: 5
steps:
- uses: actions/setup-go@v3
with:
go-version: "1.18"
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/*.go
go.mod
go.sum
- name: install
run: make install install_abci
if: "env.GIT_DIFF != ''"
- name: test_apps
run: test/app/test.sh
shell: bash
if: "env.GIT_DIFF != ''"

View File

@@ -1,73 +0,0 @@
# Verify that generated code is up-to-date.
#
# Note that we run these checks regardless whether the input files have
# changed, because generated code can change in response to toolchain updates
# even if no files in the repository are modified.
name: Check generated code
on:
pull_request:
branches:
- main
permissions:
contents: read
jobs:
check-mocks:
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v3
with:
go-version: "1.18"
- uses: actions/checkout@v3
- name: "Check generated mocks"
run: |
set -euo pipefail
make mockery 2>/dev/null
if ! git diff --stat --exit-code ; then
echo ">> ERROR:"
echo ">>"
echo ">> Generated mocks require update (either Mockery or source files may have changed)."
echo ">> Ensure your tools are up-to-date, re-run 'make mockery' and update this PR."
echo ">>"
exit 1
fi
check-proto:
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v3
with:
go-version: "1.18"
- uses: actions/checkout@v3
with:
fetch-depth: 1 # we need a .git directory to run git diff
- name: "Check protobuf generated code"
run: |
set -euo pipefail
# Install buf and gogo tools, so that differences that arise from
# toolchain differences are also caught.
readonly tools="$(mktemp -d)"
export PATH="${PATH}:${tools}/bin"
export GOBIN="${tools}/bin"
go install github.com/bufbuild/buf/cmd/buf
go install github.com/cosmos/gogoproto/protoc-gen-gogofaster@latest
make proto-gen
if ! git diff --stat --exit-code ; then
echo ">> ERROR:"
echo ">>"
echo ">> Protobuf generated code requires update (either tools or .proto files may have changed)."
echo ">> Ensure your tools are up-to-date, re-run 'make proto-gen' and update this PR."
echo ">>"
exit 1
fi

127
.github/workflows/coverage.yml vendored Normal file
View File

@@ -0,0 +1,127 @@
name: Test Coverage
on:
pull_request:
push:
branches:
- master
- release/**
jobs:
split-test-files:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Create a file with all the pkgs
run: go list ./... > pkgs.txt
- name: Split pkgs into 4 files
run: split -d -n l/4 pkgs.txt pkgs.txt.part.
# cache multiple
- uses: actions/upload-artifact@v2
with:
name: "${{ github.sha }}-00"
path: ./pkgs.txt.part.00
- uses: actions/upload-artifact@v2
with:
name: "${{ github.sha }}-01"
path: ./pkgs.txt.part.01
- uses: actions/upload-artifact@v2
with:
name: "${{ github.sha }}-02"
path: ./pkgs.txt.part.02
- uses: actions/upload-artifact@v2
with:
name: "${{ github.sha }}-03"
path: ./pkgs.txt.part.03
build-linux:
name: Build
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
goarch: ["arm", "amd64"]
timeout-minutes: 5
steps:
- uses: actions/setup-go@v2
with:
go-version: "1.15"
- uses: actions/checkout@v2
- uses: technote-space/get-diff-action@v4
with:
PATTERNS: |
**/**.go
go.mod
go.sum
- name: install
run: GOOS=linux GOARCH=${{ matrix.goarch }} make build
if: "env.GIT_DIFF != ''"
tests:
runs-on: ubuntu-latest
needs: split-test-files
strategy:
fail-fast: false
matrix:
part: ["00", "01", "02", "03"]
steps:
- uses: actions/setup-go@v2
with:
go-version: "1.15"
- uses: actions/checkout@v2
- uses: technote-space/get-diff-action@v4
with:
PATTERNS: |
**/**.go
go.mod
go.sum
- uses: actions/download-artifact@v2
with:
name: "${{ github.sha }}-${{ matrix.part }}"
if: env.GIT_DIFF
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.15
- name: test & coverage report creation
run: |
cat pkgs.txt.part.${{ matrix.part }} | xargs go test -mod=readonly -timeout 8m -race -coverprofile=${{ matrix.part }}profile.out -covermode=atomic
if: env.GIT_DIFF
- uses: actions/upload-artifact@v2
with:
name: "${{ github.sha }}-${{ matrix.part }}-coverage"
path: ./${{ matrix.part }}profile.out
upload-coverage-report:
runs-on: ubuntu-latest
needs: tests
steps:
- uses: actions/checkout@v2
- uses: technote-space/get-diff-action@v4
with:
PATTERNS: |
**/**.go
go.mod
go.sum
- uses: actions/download-artifact@v2
with:
name: "${{ github.sha }}-00-coverage"
if: env.GIT_DIFF
- uses: actions/download-artifact@v2
with:
name: "${{ github.sha }}-01-coverage"
if: env.GIT_DIFF
- uses: actions/download-artifact@v2
with:
name: "${{ github.sha }}-02-coverage"
if: env.GIT_DIFF
- uses: actions/download-artifact@v2
with:
name: "${{ github.sha }}-03-coverage"
if: env.GIT_DIFF
- run: |
cat ./*profile.out | grep -v "mode: atomic" >> coverage.txt
if: env.GIT_DIFF
- uses: codecov/codecov-action@v1.2.1
with:
file: ./coverage.txt
if: env.GIT_DIFF

View File

@@ -1,21 +1,20 @@
name: Docker
# Build & Push rebuilds the Tendermint docker image on every push to main and creation of tags
# and pushes the image to https://hub.docker.com/r/tendermint/tendermint
name: Build & Push
# Build & Push rebuilds the tendermint docker image on every push to master and creation of tags
# and pushes the image to https://hub.docker.com/r/interchainio/simapp/tags
on:
pull_request:
push:
branches:
- main
- master
tags:
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
- "v[0-9]+.[0-9]+.[0-9]+-alpha.[0-9]+" # e.g. v0.37.0-alpha.1, v0.38.0-alpha.10
- "v[0-9]+.[0-9]+.[0-9]+-beta.[0-9]+" # e.g. v0.37.0-beta.1, v0.38.0-beta.10
- "v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+" # e.g. v0.37.0-rc1, v0.38.0-rc10
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
- "v[0-9]+.[0-9]+.[0-9]+-rc*" # Push events to matching v*, i.e. v1.0-rc1, v20.15.10-rc5
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@master
- name: Prepare
id: prep
run: |
@@ -40,18 +39,18 @@ jobs:
with:
platforms: all
- name: Set up Docker Build
uses: docker/setup-buildx-action@v2.2.1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Login to DockerHub
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@v2.1.0
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Publish to Docker Hub
uses: docker/build-push-action@v3.2.0
uses: docker/build-push-action@v2
with:
context: .
file: ./DOCKER/Dockerfile

View File

@@ -1,63 +0,0 @@
# Build and deploy the docs.tendermint.com website content.
# The static content is published to GitHub Pages.
#
# For documentation build info, see docs/DOCS_README.md.
name: Build static documentation site
on:
workflow_dispatch: # allow manual updates
push:
branches:
- main
- v0.34.x
paths:
- docs/**
- spec/**
jobs:
# This is split into two jobs so that the build, which runs npm, does not
# have write access to anything. The deploy requires write access to publish
# to the branch used by GitHub Pages, however, so we can't just make the
# whole workflow read-only.
build:
name: VuePress build
runs-on: ubuntu-latest
container:
image: alpine:latest
permissions:
contents: read
steps:
- name: Install generator dependencies
run: |
apk add --no-cache make bash git npm
- uses: actions/checkout@v3
with:
# We need to fetch full history so the backport branches for previous
# versions will be available for the build.
fetch-depth: 0
- name: Build documentation
run: |
git config --global --add safe.directory "$PWD"
make build-docs
- uses: actions/upload-artifact@v3
with:
name: build-output
path: /tmp/tendermint-core-docs
deploy:
name: Deploy to GitHub Pages
runs-on: ubuntu-latest
needs: build
permissions:
contents: write
steps:
- uses: actions/checkout@v3
- uses: actions/download-artifact@v3
with:
name: build-output
path: ~/output
- name: Deploy to GitHub Pages
uses: JamesIves/github-pages-deploy-action@v4
with:
branch: 'docs-tendermint-com'
folder: ~/output
single-commit: true

View File

@@ -1,20 +0,0 @@
# Verify that important design docs have ToC entries.
name: Check documentation ToC
on:
pull_request:
push:
branches:
- main
jobs:
check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
docs/architecture/**
docs/rfc/**
- run: make check-docs-toc
if: env.GIT_DIFF

32
.github/workflows/docs.yml vendored Normal file
View File

@@ -0,0 +1,32 @@
name: Documentation
# This job builds and deploys documentation to github pages.
# It runs on every push to master, and can be manually triggered.
on:
workflow_dispatch: # allow running workflow manually
push:
branches:
- master
jobs:
build-and-deploy:
runs-on: ubuntu-latest
container:
image: tendermintdev/docker-website-deployment
steps:
- name: Checkout 🛎️
uses: actions/checkout@v2.3.1
with:
persist-credentials: false
fetch-depth: 0
- name: Install and Build 🔧
run: |
apk add rsync
make build-docs
- name: Deploy 🚀
uses: JamesIves/github-pages-deploy-action@3.7.1
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
BRANCH: gh-pages
FOLDER: ~/output

View File

@@ -1,36 +0,0 @@
# Runs randomly generated E2E testnets nightly on main
# manually run e2e tests
name: e2e-manual
on:
workflow_dispatch:
jobs:
e2e-nightly-test:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
group: ['00', '01', '02', '03']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v3
with:
go-version: '1.18'
- uses: actions/checkout@v3
- name: Build
working-directory: test/e2e
# Run make jobs in parallel, since we can't run steps in parallel.
run: make -j2 docker generator runner tests
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 5 -d networks/nightly/
- name: Run ${{ matrix.p2p }} p2p testnets
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml

View File

@@ -1,79 +0,0 @@
# Runs randomly generated E2E testnets nightly on the 0.34.x branch.
# !! This file should be kept in sync with the e2e-nightly-main.yml file,
# modulo changes to the version labels.
name: e2e-nightly-34x
on:
schedule:
- cron: '0 2 * * *'
jobs:
e2e-nightly-test:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
group: ['00', '01']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v3
with:
go-version: '1.18'
- uses: actions/checkout@v3
with:
ref: 'v0.34.x'
- name: Capture git repo info
id: git-info
run: |
echo "::set-output name=branch::`git branch --show-current`"
echo "::set-output name=commit::`git rev-parse HEAD`"
- name: Build
working-directory: test/e2e
# Run make jobs in parallel, since we can't run steps in parallel.
run: make -j2 docker generator runner
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 2 -d networks/nightly
- name: Run testnets in group ${{ matrix.group }}
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml
outputs:
git-branch: ${{ steps.git-info.outputs.branch }}
git-commit: ${{ steps.git-info.outputs.commit }}
e2e-nightly-fail:
needs: e2e-nightly-test
if: ${{ failure() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: slackapi/slack-github-action@v1.23.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
BRANCH: ${{ needs.e2e-nightly-test.outputs.git-branch }}
RUN_URL: "${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
COMMIT_URL: "${{ github.server_url }}/${{ github.repository }}/commit/${{ needs.e2e-nightly-test.outputs.git-commit }}"
with:
payload: |
{
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMIT_URL }}|commit> related to the failure."
}
}
]
}

View File

@@ -1,79 +0,0 @@
# Runs randomly generated E2E testnets nightly on the v0.37.x branch.
# !! This file should be kept in sync with the e2e-nightly-main.yml file,
# modulo changes to the version labels.
name: e2e-nightly-37x
on:
schedule:
- cron: '0 2 * * *'
jobs:
e2e-nightly-test:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
group: ['00', '01', '02', '03', "04"]
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v3
with:
go-version: '1.18'
- uses: actions/checkout@v3
with:
ref: 'v0.37.x'
- name: Capture git repo info
id: git-info
run: |
echo "::set-output name=branch::`git branch --show-current`"
echo "::set-output name=commit::`git rev-parse HEAD`"
- name: Build
working-directory: test/e2e
# Run make jobs in parallel, since we can't run steps in parallel.
run: make -j2 docker generator runner tests
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 5 -d networks/nightly/
- name: Run ${{ matrix.p2p }} p2p testnets
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml
outputs:
git-branch: ${{ steps.git-info.outputs.branch }}
git-commit: ${{ steps.git-info.outputs.commit }}
e2e-nightly-fail:
needs: e2e-nightly-test
if: ${{ failure() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: slackapi/slack-github-action@v1.23.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
BRANCH: ${{ needs.e2e-nightly-test.outputs.git-branch }}
RUN_URL: "${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
COMMIT_URL: "${{ github.server_url }}/${{ github.repository }}/commit/${{ needs.e2e-nightly-test.outputs.git-commit }}"
with:
payload: |
{
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMIT_URL }}|commit> related to the failure."
}
}
]
}

View File

@@ -1,68 +0,0 @@
# Runs randomly generated E2E testnets nightly on main
# !! Relevant changes to this file should be propagated to the e2e-nightly-<V>x
# files for the supported backport branches, when appropriate, modulo version
# markers.
name: e2e-nightly-main
on:
schedule:
- cron: '0 2 * * *'
jobs:
e2e-nightly-test:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
group: ['00', '01', '02', '03', "04"]
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v3
with:
go-version: '1.18'
- uses: actions/checkout@v3
- name: Build
working-directory: test/e2e
# Run make jobs in parallel, since we can't run steps in parallel.
run: make -j2 docker generator runner tests
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 5 -d networks/nightly/
- name: Run ${{ matrix.p2p }} p2p testnets
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml
e2e-nightly-fail:
needs: e2e-nightly-test
if: ${{ failure() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: slackapi/slack-github-action@v1.23.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
BRANCH: ${{ github.ref_name }}
RUN_URL: "${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
COMMIT_URL: "${{ github.server_url }}/${{ github.repository }}/commit/${{ github.sha }}"
with:
payload: |
{
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMIT_URL }}|commit> related to the failure."
}
}
]
}

57
.github/workflows/e2e-nightly.yml vendored Normal file
View File

@@ -0,0 +1,57 @@
# Runs randomly generated E2E testnets nightly.
name: e2e-nightly
on:
workflow_dispatch: # allow running workflow manually
schedule:
- cron: '0 2 * * *'
jobs:
e2e-nightly-test:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
group: ['00', '01', '02', '03']
# todo: expand to multiple versions after 0.35 release
branch: ['master', 'v0.34.x']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v2
with:
go-version: '1.15'
- uses: actions/checkout@v2
with:
ref: ${{ matrix.branch}}
- name: Build
working-directory: test/e2e
# Run make jobs in parallel, since we can't run steps in parallel.
run: make -j2 docker generator runner
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 4 -d networks/nightly
- name: Run testnets in group ${{ matrix.group }}
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml
e2e-nightly-fail:
needs: e2e-nightly-test
if: ${{ failure() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: rtCamp/action-slack-notify@ae4223259071871559b6e9d08b24a63d71b3f0c0
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly E2E Tests
SLACK_ICON_EMOJI: ':skull:'
SLACK_COLOR: danger
SLACK_MESSAGE: Nightly E2E tests failed
SLACK_FOOTER: ''

View File

@@ -1,12 +1,12 @@
name: e2e
# Runs the CI end-to-end test network on all pushes to main or release branches
# Runs the CI end-to-end test network on all pushes to master or release branches
# and every pull request, but only if any Go files have been changed.
on:
workflow_dispatch: # allow running workflow manually
workflow_dispatch: # allow running workflow manually
pull_request:
push:
branches:
- main
- master
- release/**
jobs:
@@ -14,11 +14,11 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: '1.18'
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
go-version: '1.15'
- uses: actions/checkout@v2
- uses: technote-space/get-diff-action@v4
with:
PATTERNS: |
**/**.go
@@ -28,10 +28,15 @@ jobs:
- name: Build
working-directory: test/e2e
# Run two make jobs in parallel, since we can't run steps in parallel.
run: make -j2 docker runner tests
run: make -j2 docker runner
if: "env.GIT_DIFF != ''"
- name: Run CI testnet
working-directory: test/e2e
run: ./run-multiple.sh networks/ci.toml
run: ./build/runner -f networks/ci.toml
if: "env.GIT_DIFF != ''"
- name: Emit logs on failure
if: ${{ failure() }}
working-directory: test/e2e
run: ./build/runner -f networks/ci.toml logs

View File

@@ -1,27 +1,23 @@
# Runs fuzzing nightly.
name: Fuzz Tests
name: fuzz-nightly
on:
workflow_dispatch: # allow running workflow manually
workflow_dispatch: # allow running workflow manually
schedule:
- cron: '0 3 * * *'
pull_request:
branches: [main]
paths:
- "test/fuzz/**/*.go"
jobs:
fuzz-nightly-test:
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: '1.18'
go-version: '1.15'
- uses: actions/checkout@v3
- uses: actions/checkout@v2
- name: Install go-fuzz
working-directory: test/fuzz
run: go install github.com/dvyukov/go-fuzz/go-fuzz@latest github.com/dvyukov/go-fuzz/go-fuzz-build@latest
run: go get -u github.com/dvyukov/go-fuzz/go-fuzz github.com/dvyukov/go-fuzz/go-fuzz-build
- name: Fuzz mempool
working-directory: test/fuzz
@@ -48,51 +44,26 @@ jobs:
run: timeout -s SIGINT --preserve-status 10m make fuzz-rpc-server
continue-on-error: true
- name: Archive crashers
uses: actions/upload-artifact@v3
with:
name: crashers
path: test/fuzz/**/crashers
retention-days: 3
- name: Archive suppressions
uses: actions/upload-artifact@v3
with:
name: suppressions
path: test/fuzz/**/suppressions
retention-days: 3
- name: Set crashers count
working-directory: test/fuzz
run: echo "::set-output name=count::$(find . -type d -name 'crashers' | xargs -I % sh -c 'ls % | wc -l' | awk '{total += $1} END {print total}')"
run: echo "::set-output name=crashers-count::$(find . -type d -name "crashers" | xargs -I % sh -c 'ls % | wc -l' | awk '{total += $1} END {print total}')"
id: set-crashers-count
outputs:
crashers-count: ${{ steps.set-crashers-count.outputs.count }}
crashers_count: ${{ steps.set-crashers-count.outputs.crashers-count }}
fuzz-nightly-fail:
needs: fuzz-nightly-test
if: ${{ needs.fuzz-nightly-test.outputs.crashers-count != 0 }}
if: ${{ needs.set-crashers-count.outputs.crashers-count != 0 }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: slackapi/slack-github-action@v1.23.0
- name: Notify Slack if any crashers
uses: rtCamp/action-slack-notify@ae4223259071871559b6e9d08b24a63d71b3f0c0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
BRANCH: ${{ github.ref_name }}
CRASHERS: ${{ needs.fuzz-nightly-test.outputs.crashers-count }}
RUN_URL: "${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
with:
payload: |
{
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":skull: Nightly fuzz tests for `${{ env.BRANCH }}` failed with ${{ env.CRASHERS }} crasher(s). See the <${{ env.RUN_URL }}|run details>."
}
}
]
}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly Fuzz Tests
SLACK_ICON_EMOJI: ':firecracker:'
SLACK_COLOR: danger
SLACK_MESSAGE: Crashers found in Nightly Fuzz tests
SLACK_FOOTER: ''

View File

@@ -1,16 +0,0 @@
name: Janitor
# Janitor cleans up previous runs of various workflows
# To add more workflows to cancel visit https://api.github.com/repos/tendermint/tendermint/actions/workflows and find the actions name
on:
pull_request:
jobs:
cancel:
name: "Cancel Previous Runs"
runs-on: ubuntu-latest
timeout-minutes: 3
steps:
- uses: styfle/cancel-workflow-action@0.11.0
with:
workflow_id: 1041851,1401230,2837803
access_token: ${{ github.token }}

12
.github/workflows/linkchecker.yml vendored Normal file
View File

@@ -0,0 +1,12 @@
name: Check Markdown links
on:
schedule:
- cron: '* */24 * * *'
jobs:
markdown-link-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- uses: gaurav-nelson/github-action-markdown-link-check@1.0.12
with:
folder-path: "docs"

29
.github/workflows/lint.yaml vendored Normal file
View File

@@ -0,0 +1,29 @@
name: Lint
# Lint runs golangci-lint over the entire Tendermint repository
# This workflow is run on every pull request and push to master
# The `golangci` job will pass without running if no *.{go, mod, sum} files have been modified.
on:
pull_request:
push:
branches:
- master
jobs:
golangci:
name: golangci-lint
runs-on: ubuntu-latest
timeout-minutes: 8
steps:
- uses: actions/checkout@v2
- uses: technote-space/get-diff-action@v4
with:
PATTERNS: |
**/**.go
go.mod
go.sum
- uses: golangci/golangci-lint-action@v2.3.0
with:
# Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
version: v1.31
args: --timeout 10m
github-token: ${{ secrets.github_token }}
if: env.GIT_DIFF

View File

@@ -1,37 +0,0 @@
name: Golang Linter
# Lint runs golangci-lint over the entire Tendermint repository.
#
# This workflow is run on every pull request and push to main.
#
# The `golangci` job will pass without running if no *.{go, mod, sum}
# files have been modified.
#
# To run this locally, simply run `make lint` from the root of the repo.
on:
pull_request:
push:
branches:
- main
jobs:
golangci:
name: golangci-lint
runs-on: ubuntu-latest
timeout-minutes: 8
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: '1.18'
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go
go.mod
go.sum
- uses: golangci/golangci-lint-action@v3
with:
version: v1.50.1
args: --timeout 10m
github-token: ${{ secrets.github_token }}
if: env.GIT_DIFF

View File

@@ -1,17 +1,16 @@
name: Markdown Linter
name: Lint
on:
push:
branches:
- main
- master
paths:
- "**.md"
- "**.yml"
- "**.yaml"
pull_request:
branches: [main]
branches: [master]
paths:
- "**.md"
- "**.yml"
jobs:
build:
@@ -19,14 +18,14 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v3
uses: actions/checkout@v2
- name: Lint Code Base
uses: docker://github/super-linter:v4
uses: docker://github/super-linter:v3
env:
LINTER_RULES_PATH: .
VALIDATE_ALL_CODEBASE: true
DEFAULT_BRANCH: main
DEFAULT_BRANCH: master
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
VALIDATE_MD: true
VALIDATE_OPENAPI: true
VALIDATE_OPAENAPI: true
VALIDATE_YAML: true
YAML_CONFIG_FILE: yaml-lint.yml

View File

@@ -1,20 +0,0 @@
name: Check Markdown links
on:
schedule:
# 2am UTC daily
- cron: '0 2 * * *'
jobs:
markdown-link-check:
strategy:
matrix:
branch: ['main', 'v0.37.x', 'v0.34.x']
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
ref: ${{ matrix.branch }}
- uses: informalsystems/github-action-markdown-link-check@main
with:
config-file: '.md-link-check.json'

View File

@@ -1,65 +0,0 @@
name: "Pre-release"
on:
push:
tags:
- "v[0-9]+.[0-9]+.[0-9]+-alpha.[0-9]+" # e.g. v0.37.0-alpha.1, v0.38.0-alpha.10
- "v[0-9]+.[0-9]+.[0-9]+-beta.[0-9]+" # e.g. v0.37.0-beta.1, v0.38.0-beta.10
- "v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+" # e.g. v0.37.0-rc1, v0.38.0-rc10
jobs:
prerelease:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- uses: actions/setup-go@v3
with:
go-version: '1.18'
- name: Build
uses: goreleaser/goreleaser-action@v3
if: ${{ github.event_name == 'pull_request' }}
with:
version: latest
args: build --skip-validate # skip validate skips initial sanity checks in order to be able to fully run
# Link to CHANGELOG_PENDING.md as release notes.
- run: echo https://github.com/tendermint/tendermint/blob/${GITHUB_REF#refs/tags/}/CHANGELOG_PENDING.md > ../release_notes.md
- name: Release
uses: goreleaser/goreleaser-action@v3
if: startsWith(github.ref, 'refs/tags/')
with:
version: latest
args: release --rm-dist --release-notes=../release_notes.md
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
prerelease-success:
needs: prerelease
if: ${{ success() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack upon pre-release
uses: slackapi/slack-github-action@v1.22.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
RELEASE_URL: "${{ github.server_url }}/${{ github.repository }}/releases/tag/${{ github.ref_name }}"
with:
payload: |
{
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":sparkles: New Tendermint pre-release: <${{ env.RELEASE_URL }}|${{ github.ref_name }}>"
}
}
]
}

51
.github/workflows/proto-docker.yml vendored Normal file
View File

@@ -0,0 +1,51 @@
name: Build & Push TM Proto Builder
on:
pull_request:
paths:
- "tools/proto/*"
push:
branches:
- master
paths:
- "tools/proto/*"
schedule:
# run this job once a month to recieve any go or buf updates
- cron: "* * 1 * *"
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- name: Prepare
id: prep
run: |
DOCKER_IMAGE=tendermintdev/docker-build-proto
VERSION=noop
if [[ $GITHUB_REF == refs/tags/* ]]; then
VERSION=${GITHUB_REF#refs/tags/}
elif [[ $GITHUB_REF == refs/heads/* ]]; then
VERSION=$(echo ${GITHUB_REF#refs/heads/} | sed -r 's#/+#-#g')
if [ "${{ github.event.repository.default_branch }}" = "$VERSION" ]; then
VERSION=latest
fi
fi
TAGS="${DOCKER_IMAGE}:${VERSION}"
echo ::set-output name=tags::${TAGS}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Publish to Docker Hub
uses: docker/build-push-action@v2
with:
context: ./tools/proto
file: ./tools/proto/Dockerfile
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.prep.outputs.tags }}

View File

@@ -1,21 +0,0 @@
name: Protobuf Lint
on:
pull_request:
paths:
- 'proto/**'
push:
branches:
- main
paths:
- 'proto/**'
jobs:
lint:
runs-on: ubuntu-latest
timeout-minutes: 5
steps:
- uses: actions/checkout@v3
- uses: bufbuild/buf-setup-action@v1.9.0
- uses: bufbuild/buf-lint-action@v1
with:
input: 'proto'

23
.github/workflows/proto.yml vendored Normal file
View File

@@ -0,0 +1,23 @@
name: Protobuf
# Protobuf runs buf (https://buf.build/) lint and check-breakage
# This workflow is only run when a .proto file has been modified
on:
workflow_dispatch: # allow running workflow manually
pull_request:
paths:
- "**.proto"
jobs:
proto-lint:
runs-on: ubuntu-latest
timeout-minutes: 4
steps:
- uses: actions/checkout@master
- name: lint
run: make proto-lint
proto-breakage:
runs-on: ubuntu-latest
timeout-minutes: 4
steps:
- uses: actions/checkout@master
- name: check-breakage
run: make proto-check-breaking-ci

View File

@@ -2,61 +2,39 @@ name: "Release"
on:
push:
branches:
- "RC[0-9]/**"
tags:
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
jobs:
release:
goreleaser:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v2
with:
fetch-depth: 0
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: '1.18'
go-version: '1.15'
- run: echo https://github.com/tendermint/tendermint/blob/${GITHUB_REF#refs/tags/}/CHANGELOG.md#${GITHUB_REF#refs/tags/} > ../release_notes.md
if: startsWith(github.ref, 'refs/tags/')
- name: Build
uses: goreleaser/goreleaser-action@v3
uses: goreleaser/goreleaser-action@v2
if: ${{ github.event_name == 'pull_request' }}
with:
version: latest
args: build --skip-validate # skip validate skips initial sanity checks in order to be able to fully run
- run: echo https://github.com/tendermint/tendermint/blob/${GITHUB_REF#refs/tags/}/CHANGELOG.md#${GITHUB_REF#refs/tags/} > ../release_notes.md
- name: Release
uses: goreleaser/goreleaser-action@v3
uses: goreleaser/goreleaser-action@v2
if: startsWith(github.ref, 'refs/tags/')
with:
version: latest
args: release --rm-dist --release-notes=../release_notes.md
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
release-success:
needs: release
if: ${{ success() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack upon release
uses: slackapi/slack-github-action@v1.22.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
RELEASE_URL: "${{ github.server_url }}/${{ github.repository }}/releases/tag/${{ github.ref_name }}"
with:
payload: |
{
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":rocket: New Tendermint release: <${{ env.RELEASE_URL }}|${{ github.ref_name }}>"
}
}
]
}

View File

@@ -7,14 +7,12 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v6
- uses: actions/stale@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-pr-message: "This pull request has been automatically marked as stale because it has not had
recent activity. It will be closed if no further activity occurs. Thank you
for your contributions."
days-before-stale: -1
days-before-close: -1
days-before-pr-stale: 10
days-before-pr-close: 4
days-before-stale: 10
days-before-close: 4
exempt-pr-labels: "S:wip"

View File

@@ -1,34 +1,146 @@
name: Test
name: Tests
# Tests runs different tests (test_abci_apps, test_abci_cli, test_apps)
# This workflow runs on every push to master or release branch and every pull requests
# All jobs will pass without running if no *{.go, .mod, .sum} files have been modified
on:
pull_request:
push:
paths:
- "**.go"
branches:
- main
- master
- release/**
jobs:
tests:
cleanup-runs:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
part: ["00", "01", "02", "03", "04", "05"]
steps:
- uses: actions/setup-go@v3
- uses: rokroskar/workflow-run-cleanup-action@master
env:
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
if: "!startsWith(github.ref, 'refs/tags/') && github.ref != 'refs/heads/master'"
build:
name: Build
runs-on: ubuntu-latest
timeout-minutes: 5
steps:
- uses: actions/setup-go@v2
with:
go-version: "1.18"
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
go-version: "1.15"
- uses: actions/checkout@v2
- uses: technote-space/get-diff-action@v4
with:
PATTERNS: |
**/**.go
"!test/"
go.mod
go.sum
Makefile
- name: Run Go Tests
run: |
make test-group-${{ matrix.part }} NUM_SPLIT=6
- name: install
run: make install install_abci
if: "env.GIT_DIFF != ''"
- uses: actions/cache@v2.1.3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
if: env.GIT_DIFF
# Cache binaries for use by other jobs
- uses: actions/cache@v2.1.3
with:
path: ~/go/bin
key: ${{ runner.os }}-${{ github.sha }}-tm-binary
if: env.GIT_DIFF
test_abci_apps:
runs-on: ubuntu-latest
needs: build
timeout-minutes: 5
steps:
- uses: actions/setup-go@v2
with:
go-version: "1.15"
- uses: actions/checkout@v2
- uses: technote-space/get-diff-action@v4
with:
PATTERNS: |
**/**.go
go.mod
go.sum
- uses: actions/cache@v2.1.3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
if: env.GIT_DIFF
- uses: actions/cache@v2.1.3
with:
path: ~/go/bin
key: ${{ runner.os }}-${{ github.sha }}-tm-binary
if: env.GIT_DIFF
- name: test_abci_apps
run: abci/tests/test_app/test.sh
shell: bash
if: env.GIT_DIFF
test_abci_cli:
runs-on: ubuntu-latest
needs: build
timeout-minutes: 5
steps:
- uses: actions/setup-go@v2
with:
go-version: "1.15"
- uses: actions/checkout@v2
- uses: technote-space/get-diff-action@v4
with:
PATTERNS: |
**/**.go
go.mod
go.sum
- uses: actions/cache@v2.1.3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
if: env.GIT_DIFF
- uses: actions/cache@v2.1.3
with:
path: ~/go/bin
key: ${{ runner.os }}-${{ github.sha }}-tm-binary
if: env.GIT_DIFF
- run: abci/tests/test_cli/test.sh
shell: bash
if: env.GIT_DIFF
test_apps:
runs-on: ubuntu-latest
needs: build
timeout-minutes: 5
steps:
- uses: actions/setup-go@v2
with:
go-version: "1.15"
- uses: actions/checkout@v2
- uses: technote-space/get-diff-action@v4
with:
PATTERNS: |
**/**.go
go.mod
go.sum
- uses: actions/cache@v2.1.3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
if: env.GIT_DIFF
- uses: actions/cache@v2.1.3
with:
path: ~/go/bin
key: ${{ runner.os }}-${{ github.sha }}-tm-binary
if: env.GIT_DIFF
- name: test_apps
run: test/app/test.sh
shell: bash
if: env.GIT_DIFF

16
.gitignore vendored
View File

@@ -15,7 +15,7 @@
.vagrant
.vendor-new/
.vscode/
abci/abci-cli
abci-cli
addrbook.json
artifacts/*
build/*
@@ -24,8 +24,6 @@ docs/.vuepress/dist
docs/_build
docs/dist
docs/node_modules/
docs/spec
docs/.vuepress/public/rpc
index.html.md
libs/pubsub/query/fuzz_test/output
profile\.out
@@ -37,23 +35,13 @@ shunit2
terraform.tfstate
terraform.tfstate.backup
terraform.tfstate.d
test/app/grpc_client
test/loadtime/build
test/e2e/build
test/e2e/networks/*/
test/logs
test/maverick/maverick
test/p2p/data/
vendor
test/fuzz/**/corpus
test/fuzz/**/crashers
test/fuzz/**/suppressions
test/fuzz/**/*.zip
proto/spec/**/*.pb.go
*.aux
*.bbl
*.blg
*.pdf
*.gz
*.dvi
# Python virtual environments
.venv

View File

@@ -1,44 +1,61 @@
linters:
enable:
- asciicheck
- bodyclose
- deadcode
- depguard
- dogsled
- dupl
- errcheck
- exportloopref
# - funlen
# - gochecknoglobals
# - gochecknoinits
- goconst
- gocritic
# - gocyclo
# - godox
- gofmt
- goimports
- revive
- golint
- gosec
- gosimple
- govet
- ineffassign
# - interfacer
- lll
- misspell
# - maligned
- nakedret
- nolintlint
- prealloc
- scopelint
- staticcheck
# - structcheck // to be fixed by golangci-lint
- structcheck
- stylecheck
- typecheck
- unconvert
# - unparam
- unused
- varcheck
# - whitespace
# - wsl
# - gocognit
- nolintlint
issues:
exclude-rules:
- path: _test\.go
linters:
- gosec
- linters:
- lll
source: "https://"
max-same-issues: 50
linters-settings:
dogsled:
max-blank-identifiers: 3
golint:
min-confidence: 0
maligned:
suggest-new: true
misspell:
locale: US
# govet:
# check-shadowing: true
golint:
min-confidence: 0

View File

@@ -1,4 +1,4 @@
project_name: tendermint
project_name: Tendermint
env:
# Require use of Go modules.
@@ -25,13 +25,12 @@ checksum:
algorithm: sha256
release:
prerelease: auto
name_template: "{{.Version}}"
name_template: "{{.Version}} (WARNING: BETA SOFTWARE)"
archives:
- files:
- LICENSE
- README.md
- UPGRADING.md
- SECURITY.md
- CHANGELOG.md
- LICENSE
- README.md
- UPGRADING.md
- SECURITY.md
- CHANGELOG.md

View File

@@ -1,17 +0,0 @@
# markdownlint configuration
# https://github.com/DavidAnson/markdownlint
# Default state for all rules
default: true
# See https://github.com/DavidAnson/markdownlint#rules--aliases for rules
MD001: false
MD007: {indent: 4}
MD013: false
MD024: {siblings_only: true}
MD025: false
MD033: false
MD036: false
MD010: false
MD012: false
MD028: false

View File

@@ -1,17 +0,0 @@
{
  "retryOn429": true,
  "retryCount": 5,
  "fallbackRetryDelay": "30s",
  "aliveStatusCodes": [200, 206, 503],
  "httpHeaders": [
    {
      "urls": [
        "https://docs.github.com/",
        "https://help.github.com/"
      ],
      "headers": {
        "Accept-Encoding": "zstd, br, gzip, deflate"
      }
    }
  ]
}

View File

@@ -1,339 +1,21 @@
# Changelog
Friendly reminder, we have a [bug bounty program](https://hackerone.com/cosmos).
## v0.34.22
This release includes several bug fixes, [one of
which](https://github.com/tendermint/tendermint/pull/9518) we discovered while
building up a baseline for v0.34 against which to compare our upcoming v0.37
release during our [QA process](./docs/qa/).
Special thanks to external contributors on this release: @RiccardoM
### FEATURES
- [rpc] [\#9423](https://github.com/tendermint/tendermint/pull/9423) Support
HTTPS URLs from the WebSocket client (@RiccardoM, @cmwaters)
### BUG FIXES
- [config] [\#9483](https://github.com/tendermint/tendermint/issues/9483)
Calling `tendermint init` would incorrectly leave out the new `[storage]`
section delimiter in the generated configuration file - this has now been
fixed
- [p2p] [\#9500](https://github.com/tendermint/tendermint/issues/9500) Prevent
peers who have errored from being added to the peer set (@jmalicevic)
- [indexer] [\#9473](https://github.com/tendermint/tendermint/issues/9473) Fix
bug that caused the psql indexer to index empty blocks whenever one of the
transactions returned a non-zero code. The relevant deduplication logic now
lives only within the kv indexer (@cmwaters)
- [blocksync] [\#9518](https://github.com/tendermint/tendermint/issues/9518) A
block sync stall was observed during our QA process whereby the node was
unable to make progress. Retrying block requests after a timeout fixes this.
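For readers curious what "retrying block requests after a timeout" amounts to, the Go sketch below shows the general pattern: instead of blocking forever on an outstanding request, the requester re-sends it when no response arrives within a deadline. The `fetchWithRetry` helper, its parameters, and the simulated peer are hypothetical illustrations, not the actual blocksync code.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// fetchWithRetry re-sends a block request whenever no response arrives
// within the timeout, instead of stalling forever on a lost request.
func fetchWithRetry(height int64, responses <-chan int64, request func(int64), timeout time.Duration, maxRetries int) error {
	for attempt := 0; attempt <= maxRetries; attempt++ {
		request(height) // (re-)send the request
		select {
		case h := <-responses:
			if h == height {
				return nil // got the block we asked for
			}
		case <-time.After(timeout):
			// no response in time: fall through and retry
		}
	}
	return errors.New("gave up after retries")
}

func main() {
	responses := make(chan int64, 1)
	// Simulate a peer that ignores the first request and answers the second.
	calls := 0
	request := func(h int64) {
		calls++
		if calls > 1 {
			responses <- h
		}
	}
	fmt.Println(fetchWithRetry(42, responses, request, 50*time.Millisecond, 3))
}
```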
## v0.34.21
Release highlights include:
- A new `[storage]` configuration section and flag `discard_abci_responses`,
which, if enabled, discards all ABCI responses except the latest one in order
to reduce disk space usage in the state store. When enabled, the
`block_results` RPC endpoint can no longer function and will return an error (see the sketch after this list).
- A new CLI command, `reindex-event`, to re-index block and tx events to the
event sinks. You can run this command when the event store backend has been
dropped or disconnected, or when you want to replace the backend. When
`discard_abci_responses` is enabled, this command cannot be used.
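As a rough illustration of the trade-off behind `discard_abci_responses`, the sketch below keeps only the latest height's ABCI responses, which is why historical `block_results` queries stop working when the flag is enabled. The `responseStore` type is a toy stand-in and assumes nothing about Tendermint's actual state-store layout.

```go
package main

import "fmt"

// responseStore is a toy stand-in for the state store's ABCI-responses bucket.
type responseStore struct {
	discardABCIResponses bool
	byHeight             map[int64]string
}

// Save records the responses for a height; with discarding enabled, every
// older height is dropped, so only the latest entry survives.
func (s *responseStore) Save(height int64, responses string) {
	s.byHeight[height] = responses
	if s.discardABCIResponses {
		for h := range s.byHeight {
			if h < height {
				delete(s.byHeight, h)
			}
		}
	}
}

// Load is what a block_results query would rely on.
func (s *responseStore) Load(height int64) (string, bool) {
	r, ok := s.byHeight[height]
	return r, ok
}

func main() {
	s := &responseStore{discardABCIResponses: true, byHeight: map[int64]string{}}
	s.Save(1, "resp@1")
	s.Save(2, "resp@2")

	_, ok := s.Load(1)
	fmt.Println("block_results for height 1 available:", ok) // false: discarded
	_, ok = s.Load(2)
	fmt.Println("block_results for height 2 available:", ok) // true: latest kept
}
```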
Special thanks to external contributors on this release: @rootwarp & @animart
### FEATURES
- [cli] [\#9083](https://github.com/tendermint/tendermint/issues/9083) Backport command to reindex missed events (@cmwaters)
- [cli] [\#9107](https://github.com/tendermint/tendermint/issues/9107) Add the `p2p.external-address` argument to set the node P2P external address (@amimart)
### IMPROVEMENTS
- [config] [\#9054](https://github.com/tendermint/tendermint/issues/9054) `discard_abci_responses` flag added to discard all ABCI
responses except the last in order to save on storage space in the state
store (@samricotta)
### BUG FIXES
- [mempool] [\#9033](https://github.com/tendermint/tendermint/issues/9033) Rework lock discipline to mitigate callback deadlocks in the
priority mempool
- [cli] [\#9103](https://github.com/tendermint/tendermint/issues/9103) fix unsafe-reset-all for working with home path (@rootwarp)
## v0.34.20
Special thanks to external contributors on this release: @joeabbey @yihuang
This release introduces a prioritized mempool. Further notes can be found in UPGRADING.md.
NOTE: There's a known issue when combining the prioritized mempool with the ABCI socket client, which the team is currently working to resolve. Read more about the issue [here](https://github.com/tendermint/tendermint/pull/9030).
### BUG FIXES
- [blocksync] [\#8496](https://github.com/tendermint/tendermint/pull/8496) validate block against state before persisting it to disk (@cmwaters)
- [indexer] [#8625](https://github.com/tendermint/tendermint/pull/8625) Fix overriding tx index of duplicated txs. (@yihuang)
- [mempool] [\#8962](https://github.com/tendermint/tendermint/issues/8962) Backport priority mempool fixes from v0.35.x to v0.34.x (@creachadair).
### FEATURES
- [cli] [\#8674] Add command to force compact goleveldb databases (@cmwaters)
- [mempool] [\#8695] Port back the priority mempool. (@alexanderbez, @jmalicevic, @cmwaters)
### IMPROVEMENTS
- [logging] [\#8845](https://github.com/tendermint/tendermint/issues/8845) Add "Lazy" Stringers to defer Sprintf and Hash until logs print. (@joeabbey)
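The "Lazy" Stringers mentioned above defer expensive formatting or hashing until a log line is actually rendered. A minimal sketch of the idea, using a hypothetical `lazyStringer` type rather than the real helper:

```go
package main

import "fmt"

// lazyStringer defers an expensive computation (for example hashing or a
// large Sprintf) until something actually formats the value with %v / %s.
type lazyStringer struct {
	fn func() string
}

func (l lazyStringer) String() string { return l.fn() }

func main() {
	calls := 0
	hash := lazyStringer{fn: func() string {
		calls++ // track how often the expensive work actually runs
		return "DEADBEEF"
	}}

	// Passing the value around is cheap; the closure has not run yet.
	fmt.Println("expensive calls so far:", calls) // prints 0

	// Only when the value is rendered does the work happen.
	fmt.Printf("block hash: %v\n", hash)
	fmt.Println("expensive calls so far:", calls) // prints 1
}
```

Because `fmt` only calls `String()` when the value is actually formatted, a logger that drops the line (for example, below the configured level) never pays for the Sprintf or Hash.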
## v0.34.19
### BUG FIXES
- [cli] [\#8270](https://github.com/tendermint/tendermint/issues/8270) fix reset commands (@alexanderbez).
## v0.34.18
### BREAKING CHANGES
- CLI/RPC/Config
- [cli] [\#8258](https://github.com/tendermint/tendermint/pull/8258) Fix a bug in the cli that caused `unsafe-reset-all` to panic
## v0.34.17
### BREAKING CHANGES
- CLI/RPC/Config
- [cli] [\#8081](https://github.com/tendermint/tendermint/issues/8081) make the reset command safe to use (@marbar3778).
### BUG FIXES
- [consensus] [\#8079](https://github.com/tendermint/tendermint/issues/8079) start the timeout ticker before relay (backport #7844) (@creachadair).
- [consensus] [\#7992](https://github.com/tendermint/tendermint/issues/7992) [\#7994](https://github.com/tendermint/tendermint/issues/7994) change lock handling in handleMsg and reactor to alleviate issues gossiping during long ABCI calls (@williambanfield).
## v0.34.16
Special thanks to external contributors on this release: @yihuang
### BUG FIXES
- [consensus] [\#7617](https://github.com/tendermint/tendermint/issues/7617) calculate prevote message delay metric (backport #7551) (@williambanfield).
- [consensus] [\#7631](https://github.com/tendermint/tendermint/issues/7631) check proposal non-nil in prevote message delay metric (backport #7625) (@williambanfield).
- [statesync] [\#7885](https://github.com/tendermint/tendermint/issues/7885) statesync: assert app version matches (backport #7856) (@cmwaters).
- [statesync] [\#7881](https://github.com/tendermint/tendermint/issues/7881) fix app hash in state rollback (backport #7837) (@cmwaters).
- [cli] [#7837](https://github.com/tendermint/tendermint/pull/7837) fix app hash in state rollback. (@yihuang).
## v0.34.15
Special thanks to external contributors on this release: @thanethomson
### BUG FIXES
- [\#7368](https://github.com/tendermint/tendermint/issues/7368) cmd: add integration test for rollback functionality (@cmwaters).
- [\#7309](https://github.com/tendermint/tendermint/issues/7309) pubsub: Report a non-nil error when shutting down (fixes #7306).
- [\#7057](https://github.com/tendermint/tendermint/pull/7057) Import Postgres driver support for the psql indexer (@creachadair).
- [\#7106](https://github.com/tendermint/tendermint/pull/7106) Revert mutex change to ABCI Clients (@tychoish).
### IMPROVEMENTS
- [config] [\#7230](https://github.com/tendermint/tendermint/issues/7230) rpc: Add experimental config params to allow for subscription buffer size control (@thanethomson).
## v0.34.14
This release backports the `rollback` feature to allow recovery in the event of an incorrect app hash.
### FEATURES
- [\#6982](https://github.com/tendermint/tendermint/pull/6982) The tendermint binary now has built-in support for running the end-to-end test application (with state sync support) (@cmwaters).
- [cli] [#7033](https://github.com/tendermint/tendermint/pull/7033) Add a `rollback` command to rollback to the previous tendermint state. This may be useful in the event of a non-deterministic app hash or when reverting an upgrade. @cmwaters
### IMPROVEMENTS
- [\#7103](https://github.com/tendermint/tendermint/pull/7104) Remove IAVL dependency (backport of #6550) (@cmwaters)
### BUG FIXES
- [\#7057](https://github.com/tendermint/tendermint/pull/7057) Import Postgres driver support for the psql indexer (@creachadair).
- [ABCI] [\#7110](https://github.com/tendermint/tendermint/issues/7110) Revert "change client to use multi-reader mutexes (#6873)" (@tychoish).
## v0.34.13
This release backports improvements to state synchronization and ABCI
performance under concurrent load, and the PostgreSQL event indexer.
### IMPROVEMENTS
- [statesync] [\#6881](https://github.com/tendermint/tendermint/issues/6881) improvements to stateprovider logic (@cmwaters)
- [ABCI] [\#6873](https://github.com/tendermint/tendermint/issues/6873) change client to use multi-reader mutexes (@tychoish)
- [indexing] [\#6906](https://github.com/tendermint/tendermint/issues/6906) enable the PostgreSQL indexer sink (@creachadair)
## v0.34.12
Special thanks to external contributors on this release: @JayT106.
### FEATURES
- [rpc] [\#6717](https://github.com/tendermint/tendermint/pull/6717) introduce
`/genesis_chunked` rpc endpoint for handling large genesis files by chunking them (@tychoish)
### IMPROVEMENTS
- [rpc] [\#6825](https://github.com/tendermint/tendermint/issues/6825) Remove egregious INFO log from `ABCI#Query` RPC. (@alexanderbez)
### BUG FIXES
- [light] [\#6685](https://github.com/tendermint/tendermint/pull/6685) fix bug
with incorrectly handling contexts that would occasionally freeze state sync. (@cmwaters)
- [privval] [\#6748](https://github.com/tendermint/tendermint/issues/6748) Fix vote timestamp to prevent chain halt (@JayT106)
## v0.34.11
*June 18, 2021*
This release improves the robustness of statesync by tweaking channel priorities and timeouts and
adding two new parameters to the state sync config.
### BREAKING CHANGES
- Apps
- [Version] [\#6494](https://github.com/tendermint/tendermint/issues/6494) `TMCoreSemVer` no longer needs to be set as an ldflag.
### IMPROVEMENTS
- [statesync] [\#6566](https://github.com/tendermint/tendermint/issues/6566) Allow state sync fetchers and request timeout to be configurable. (@alexanderbez)
- [statesync] [\#6378](https://github.com/tendermint/tendermint/issues/6378) Retry requests for snapshots and add a minimum discovery time (5s) for new snapshots. (@tychoish)
- [statesync] [\#6582](https://github.com/tendermint/tendermint/issues/6582) Increase chunk priority and add multiple retry chunk requests (@cmwaters)
### BUG FIXES
- [evidence] [\#6375](https://github.com/tendermint/tendermint/issues/6375) Fix bug with inconsistent LightClientAttackEvidence hashing (@cmwaters)
## v0.34.10
*April 14, 2021*
This release fixes a bug where peers would sometimes try to send messages
on incorrect channels. Special thanks to our friends at Oasis Labs for surfacing
this issue!
- [p2p/node] [\#6339](https://github.com/tendermint/tendermint/issues/6339) Fix bug with using custom channels (@cmwaters)
- [light] [\#6346](https://github.com/tendermint/tendermint/issues/6346) Correctly handle too high errors to improve client robustness (@cmwaters)
## v0.34.9
*April 8, 2021*
This release fixes a moderate severity security issue, Security Advisory Alderfly,
which impacts all networks that rely on Tendermint light clients.
Further details will be released once networks have upgraded.
This release also includes a small Go API-breaking change, to reduce panics in the RPC layer.
Special thanks to our external contributors on this release: @gchaincl
### BREAKING CHANGES
- Go API
- [rpc/jsonrpc/server] [\#6204](https://github.com/tendermint/tendermint/issues/6204) Modify `WriteRPCResponseHTTP(Error)` to return an error (@melekes)
### FEATURES
- [rpc] [\#6226](https://github.com/tendermint/tendermint/issues/6226) Index block events and expose a new RPC method, `/block_search`, to allow querying for blocks by `BeginBlock` and `EndBlock` events (@alexanderbez)
### BUG FIXES
- [rpc/jsonrpc/server] [\#6191](https://github.com/tendermint/tendermint/issues/6191) Correctly unmarshal `RPCRequest` when data is `null` (@melekes)
- [p2p] [\#6289](https://github.com/tendermint/tendermint/issues/6289) Fix "unknown channels" bug on CustomReactors (@gchaincl)
- [light/evidence] Adds logic to handle forward lunatic attacks (@cmwaters)
## v0.34.8
*February 25, 2021*
This release, in conjunction with [a fix in the Cosmos SDK](https://github.com/cosmos/cosmos-sdk/pull/8641),
introduces changes that should mean the logs are much, much quieter. 🎉
### IMPROVEMENTS
- [libs/log] [\#6174](https://github.com/tendermint/tendermint/issues/6174) Include timestamp (`ts` field; `time.RFC3339Nano` format) in JSON logger output (@melekes)
### BUG FIXES
- [abci] [\#6124](https://github.com/tendermint/tendermint/issues/6124) Fixes a panic condition during callback execution in `ReCheckTx` during high tx load. (@alexanderbez)
## v0.34.7
*February 18, 2021*
This release fixes a downstream security issue which impacts Cosmos SDK
users who are:
* Using Cosmos SDK v0.40.0 or later, AND
* Running validator nodes, AND
* Using the file-based `FilePV` implementation for their consensus keys
Users who fulfill all the above criteria were susceptible to leaking
private key material in the logs. All other users are unaffected.
The root cause was a discrepancy
between the Tendermint Core (untyped) logger and the Cosmos SDK (typed) logger:
Tendermint Core's logger automatically stringifies Go interfaces whenever possible;
however, the Cosmos SDK's logger uses reflection to log the fields within a Go interface.
The introduction of the typed logger meant that previously unlogged fields within
interfaces were now sometimes logged, including the private key material inside the
`FilePV` struct.
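To make the discrepancy concrete, here is a small self-contained Go sketch. The `FilePVLike` struct is a hypothetical stand-in for the real `FilePV`: a logger that stringifies values picks up the redacting `String()` method, while a logger that walks struct fields via reflection prints every field, including the sensitive one.

```go
package main

import (
	"fmt"
	"reflect"
)

// FilePVLike is a hypothetical stand-in for a struct holding a consensus key.
type FilePVLike struct {
	Address string
	PrivKey string // sensitive material
}

// String redacts the key, so any logger that stringifies the value is safe.
func (pv FilePVLike) String() string {
	return fmt.Sprintf("FilePV{Address:%s PrivKey:<redacted>}", pv.Address)
}

func main() {
	pv := FilePVLike{Address: "A1B2C3", PrivKey: "super-secret"}

	// Stringifying logger: the Stringer method is used, the key stays hidden.
	fmt.Println("stringified:", pv)

	// Reflection-based logger: fields are dumped one by one, bypassing
	// String() and exposing the sensitive field.
	v := reflect.ValueOf(pv)
	for i := 0; i < v.NumField(); i++ {
		fmt.Printf("reflected: %s=%v\n", v.Type().Field(i).Name, v.Field(i).Interface())
	}
}
```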
Tendermint Core v0.34.7 fixes this issue; however, we strongly recommend that all validators
use remote signer implementations instead of `FilePV` in production.
Thank you to @joe-bowman for his assistance with this vulnerability and a particular
shout-out to @marbar3778 for diagnosing it quickly.
### BUG FIXES
- [consensus] [\#6128](https://github.com/tendermint/tendermint/pull/6128) Remove privValidator from log call (@tessr)
## v0.34.6
*February 18, 2021*
_Tendermint Core v0.34.5 and v0.34.6 have been recalled due to build tooling problems._
## v0.34.4
*February 11, 2021*
This release includes a fix for a memory leak in the evidence reactor (see #6068, below).
All Tendermint clients are recommended to upgrade.
Thank you to our friends at Crypto.com for the initial report of this memory leak!
Special thanks to other external contributors on this release: @yayajacky, @odidev, @laniehei, and @c29r3!
### BUG FIXES
- [light] [\#6022](https://github.com/tendermint/tendermint/pull/6022) Fix a bug when the number of validators equals 100 (@melekes)
- [light] [\#6026](https://github.com/tendermint/tendermint/pull/6026) Fix a bug when height isn't provided for the rpc calls: `/commit` and `/validators` (@cmwaters)
- [evidence] [\#6068](https://github.com/tendermint/tendermint/pull/6068) Terminate broadcastEvidenceRoutine when peer is stopped (@melekes)
## v0.34.3
## v0.34.3
*January 19, 2021*
This release includes a fix for a high-severity security vulnerability,
a DoS-vector that impacted Tendermint Core v0.34.0-v0.34.2. For more details, see
[Security Advisory Mulberry](https://github.com/tendermint/tendermint/security/advisories/GHSA-p658-8693-mhvg)
or https://nvd.nist.gov/vuln/detail/CVE-2021-21271.
This release includes a fix for a high-severity security vulnerability.
More information on this vulnerability will be released on January 26, 2021
and this changelog will be updated.
Tendermint Core v0.34.3 also updates GoGo Protobuf to 1.3.2 in order to pick up the fix for
https://nvd.nist.gov/vuln/detail/CVE-2021-3121.
It also updates GoGo Protobuf to 1.3.2 in order to pick up the fix for
https://nvd.nist.gov/vuln/detail/CVE-2021-3121.
Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermint).
### BUG FIXES
- [evidence] [[security fix]](https://github.com/tendermint/tendermint/security/advisories/GHSA-p658-8693-mhvg) Use correct source of evidence time (@cmwaters)
- [evidence] [N/A] Use correct source of evidence time (@cmwaters)
- [proto] [\#5886](https://github.com/tendermint/tendermint/pull/5889) Bump gogoproto to 1.3.2 (@marbar3778)
## v0.34.2
@@ -344,6 +26,8 @@ This release fixes a substantial bug in evidence handling where evidence could
sometimes be broadcast before the block containing that evidence was fully committed,
resulting in some nodes panicking when trying to verify said evidence.
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES
- Go API
@@ -367,6 +51,8 @@ disconnecting from this node. As a temporary remedy (until the mempool package
is refactored), the `max-batch-bytes` was disabled. Transactions will be sent
one by one without batching.
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES
- CLI/RPC/Config
@@ -395,6 +81,8 @@ Holy smokes, this is a big one! For a more reader-friendly overview of the chang
Special thanks to external contributors on this release: @james-ray, @fedekunze, @favadi, @alessio,
@joe-bowman, @cuonglm, @SadPencil and @dongsam.
And as always, friendly reminder, that we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES
- CLI/RPC/Config
@@ -421,14 +109,14 @@ Special thanks to external contributors on this release: @james-ray, @fedekunze,
- [blockchain] [\#4637](https://github.com/tendermint/tendermint/pull/4637) Migrate blockchain reactor(s) to Protobuf encoding (@marbar3778)
- [evidence] [\#4949](https://github.com/tendermint/tendermint/pull/4949) Migrate evidence reactor to Protobuf encoding (@marbar3778)
- [mempool] [\#4940](https://github.com/tendermint/tendermint/pull/4940) Migrate mempool to Protobuf encoding (@marbar3778)
- [mempool] [\#5321](https://github.com/tendermint/tendermint/pull/5321) Batch transactions when broadcasting them to peers (@melekes)
- [mempool] [\#5321](https://github.com/tendermint/tendermint/pull/5321) Batch transactions when broadcasting them to peers (@melekes)
- `MaxBatchBytes` new config setting defines the max size of one batch.
- [p2p/pex] [\#4973](https://github.com/tendermint/tendermint/pull/4973) Migrate `p2p/pex` reactor to Protobuf encoding (@marbar3778)
- [statesync] [\#4943](https://github.com/tendermint/tendermint/pull/4943) Migrate state sync reactor to Protobuf encoding (@marbar3778)
- Blockchain Protocol
- [evidence] [\#4725](https://github.com/tendermint/tendermint/pull/4725) Remove `Pubkey` from `DuplicateVoteEvidence` (@marbar3778)
- [evidence] [\#4725](https://github.com/tendermint/tendermint/pull/4725) Remove `Pubkey` from `DuplicateVoteEvidence` (@marbar3778)
- [evidence] [\#5499](https://github.com/tendermint/tendermint/pull/5449) Cap evidence to a maximum number of bytes (supersedes [\#4780](https://github.com/tendermint/tendermint/pull/4780)) (@cmwaters)
- [merkle] [\#5193](https://github.com/tendermint/tendermint/pull/5193) Header hashes are no longer empty for empty inputs, notably `DataHash`, `EvidenceHash`, and `LastResultsHash` (@erikgrinaker)
- [state] [\#4845](https://github.com/tendermint/tendermint/pull/4845) Include `GasWanted` and `GasUsed` into `LastResultsHash` (@melekes)
@@ -487,7 +175,7 @@ Special thanks to external contributors on this release: @james-ray, @fedekunze,
- [types] [\#4852](https://github.com/tendermint/tendermint/pull/4852) Vote & Proposal `SignBytes` is now func `VoteSignBytes` & `ProposalSignBytes` (@marbar3778)
- [types] [\#4798](https://github.com/tendermint/tendermint/pull/4798) Simplify `VerifyCommitTrusting` func + remove extra validation (@melekes)
- [types] [\#4845](https://github.com/tendermint/tendermint/pull/4845) Remove `ABCIResult` (@melekes)
- [types] [\#5029](https://github.com/tendermint/tendermint/pull/5029) Rename all values from `PartsHeader` to `PartSetHeader` to have consistency (@marbar3778)
- [types] [\#5029](https://github.com/tendermint/tendermint/pull/5029) Rename all values from `PartsHeader` to `PartSetHeader` to have consistency (@marbar3778)
- [types] [\#4939](https://github.com/tendermint/tendermint/pull/4939) `Total` in `Parts` & `PartSetHeader` has been changed from a `int` to a `uint32` (@marbar3778)
- [types] [\#4939](https://github.com/tendermint/tendermint/pull/4939) Vote: `ValidatorIndex` & `Round` are now `int32` (@marbar3778)
- [types] [\#4939](https://github.com/tendermint/tendermint/pull/4939) Proposal: `POLRound` & `Round` are now `int32` (@marbar3778)
@@ -500,7 +188,7 @@ Special thanks to external contributors on this release: @james-ray, @fedekunze,
- [abci] [\#5174](https://github.com/tendermint/tendermint/pull/5174) Remove `MockEvidence` in favor of testing with actual evidence types (`DuplicateVoteEvidence` & `LightClientAttackEvidence`) (@cmwaters)
- [abci] [\#5191](https://github.com/tendermint/tendermint/pull/5191) Add `InitChain.InitialHeight` field giving the initial block height (@erikgrinaker)
- [abci] [\#5227](https://github.com/tendermint/tendermint/pull/5227) Add `ResponseInitChain.app_hash` which is recorded in genesis block (@erikgrinaker)
- [config] [\#5147](https://github.com/tendermint/tendermint/pull/5147) Add `--consensus.double_sign_check_height` flag and `DoubleSignCheckHeight` config variable. See [ADR-51](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-051-double-signing-risk-reduction.md) (@dongsam)
- [config] [\#5147](https://github.com/tendermint/tendermint/pull/5147) Add `--consensus.double_sign_check_height` flag and `DoubleSignCheckHeight` config variable. See [ADR-51](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-051-double-signing-risk-reduction.md) (@dongsam)
- [db] [\#5233](https://github.com/tendermint/tendermint/pull/5233) Add support for `badgerdb` database backend (@erikgrinaker)
- [evidence] [\#4532](https://github.com/tendermint/tendermint/pull/4532) Handle evidence from light clients (@melekes)
- [evidence] [#4821](https://github.com/tendermint/tendermint/pull/4821) Amnesia (light client attack) evidence can be detected, verified and committed (@cmwaters)
@@ -514,7 +202,7 @@ Special thanks to external contributors on this release: @james-ray, @fedekunze,
- [rpc] [\#5017](https://github.com/tendermint/tendermint/pull/5017) Add `/check_tx` endpoint to check transactions without executing them or adding them to the mempool (@melekes)
- [rpc] [\#5108](https://github.com/tendermint/tendermint/pull/5108) Subscribe using the websocket for new evidence events (@cmwaters)
- [statesync] Add state sync support, where a new node can be rapidly bootstrapped by fetching state snapshots from peers instead of replaying blocks. See the `[statesync]` config section.
- [evidence] [\#5361](https://github.com/tendermint/tendermint/pull/5361) Add LightClientAttackEvidence and refactor evidence lifecycle - for more information see [ADR-059](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-059-evidence-composition-and-lifecycle.md) (@cmwaters)
- [evidence] [\#5361](https://github.com/tendermint/tendermint/pull/5361) Add LightClientAttackEvidence and refactor evidence lifecycle - for more information see [ADR-059](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-059-evidence-composition-and-lifecycle.md) (@cmwaters)
### IMPROVEMENTS
@@ -525,7 +213,7 @@ Special thanks to external contributors on this release: @james-ray, @fedekunze,
- [evidence] [\#4722](https://github.com/tendermint/tendermint/pull/4722) Consolidate evidence store and pool types to improve evidence DB (@cmwaters)
- [evidence] [\#4839](https://github.com/tendermint/tendermint/pull/4839) Reject duplicate evidence from being proposed (@cmwaters)
- [evidence] [\#5219](https://github.com/tendermint/tendermint/pull/5219) Change the source of evidence time to block time (@cmwaters)
- [libs] [\#5126](https://github.com/tendermint/tendermint/pull/5126) Add a sync package which wraps sync.(RW)Mutex & deadlock.(RW)Mutex and use a build flag (deadlock) in order to enable deadlock checking (@marbar3778)
- [libs] [\#5126](https://github.com/tendermint/tendermint/pull/5126) Add a sync package which wraps sync.(RW)Mutex & deadlock.(RW)Mutex and use a build flag (deadlock) in order to enable deadlock checking (@marbar3778)
- [light] [\#4935](https://github.com/tendermint/tendermint/pull/4935) Fetch and compare a new header with witnesses in parallel (@melekes)
- [light] [\#4929](https://github.com/tendermint/tendermint/pull/4929) Compare header with witnesses only when doing bisection (@melekes)
- [light] [\#4916](https://github.com/tendermint/tendermint/pull/4916) Validate basic for inbound validator sets and headers before further processing them (@cmwaters)
@@ -594,7 +282,7 @@ This security release fixes:
Tendermint 0.33.0 and above allow block proposers to include signatures for the
wrong block. This may happen naturally if you start a network, have it run for
some time and restart it **without changing the chainID**. (It is a
[misconfiguration](https://docs.tendermint.com/v0.33/tendermint-core/using-tendermint.html)
[misconfiguration](https://docs.tendermint.com/master/tendermint-core/using-tendermint.html)
to reuse chainIDs.) Correct block proposers will accidentally include signatures
for the wrong block if they see these signatures, and then commits won't validate,
making all proposed blocks invalid. A malicious validator (even with a minimal
@@ -635,6 +323,9 @@ as 2/3+ of the signatures are checked._
Special thanks to @njmurarka at Bluzelle Networks for reporting this.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [consensus] Do not allow signatures for a wrong block in commits (@ebuchman)
@@ -650,6 +341,8 @@ need to update your code.**
Special thanks to external contributors on this release: @tau3,
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- Go API
@@ -709,6 +402,8 @@ Special thanks to external contributors on this release: @tau3,
Special thanks to external contributors on this release: @whylee259, @greg-szabo
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- Go API
@@ -795,6 +490,9 @@ Notes:
Special thanks to [fudongbai](https://hackerone.com/fudongbai) for finding
and reporting this.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [mempool] Reserve IDs in InitPeer instead of AddPeer (@tessr)
@@ -807,6 +505,8 @@ and reporting this.
Special thanks to external contributors on this release:
@antho1404, @michaelfig, @gterzian, @tau3, @Shivani912
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- CLI/RPC/Config
@@ -857,6 +557,9 @@ Special thanks to external contributors on this release:
Special thanks to external contributors on this release:
@princesinha19
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### FEATURES:
- [rpc] [\#3333](https://github.com/tendermint/tendermint/issues/3333) Add `order_by` to `/tx_search` endpoint, allowing to change default ordering from asc to desc (@princesinha19)
@@ -875,6 +578,9 @@ Special thanks to external contributors on this release:
Special thanks to external contributors on this release: @mrekucci, @PSalant726, @princesinha19, @greg-szabo, @dongsam, @cuonglm, @jgimeno, @yenkhoon
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
*January 14, 2020*
This release contains breaking changes to the `Block#Header`, specifically
@@ -893,7 +599,7 @@ and a validator address plus a timestamp. Note we may remove the validator
address & timestamp fields in the future (see ADR-25).
`lite2` package has been added to solve `lite` issues and introduce weak
subjectivity interface. Refer to the [spec](https://github.com/tendermint/tendermint/tree/main/spec/consensus/light-client) for complete details.
subjectivity interface. Refer to the [spec](https://github.com/tendermint/spec/blob/master/spec/consensus/light-client.md) for complete details.
`lite` package is now deprecated and will be removed in v0.34 release.
### BREAKING CHANGES:
@@ -1103,6 +809,9 @@ Notes:
Special thanks to [fudongbai](https://hackerone.com/fudongbai) for finding
and reporting this.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [mempool] Reserve IDs in InitPeer instead of AddPeer (@tessr)
@@ -1114,6 +823,9 @@ _January, 9, 2020_
Special thanks to external contributors on this release: @greg-szabo, @gregzaitsev, @yenkhoon
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### FEATURES:
- [rpc/lib] [\#4248](https://github.com/tendermint/tendermint/issues/4248) RPC client basic authentication support (@greg-szabo)
@@ -1135,6 +847,9 @@ Special thanks to external contributors on this release: @greg-szabo, @gregzaits
Special thanks to external contributors on this release: @erikgrinaker, @guagualvcha, @hsyis, @cosmostuba, @whunmr, @austinabell
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
@@ -1174,6 +889,9 @@ identified and fixed here.
Special thanks to [elvishacker](https://hackerone.com/elvishacker) for finding
and reporting this.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- Go API
@@ -1200,6 +918,9 @@ accepting new peers and only allowing `ed25519` pubkeys.
Special thanks to [fudongbai](https://hackerone.com/fudongbai) for pointing
this out.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [p2p] [\#4030](https://github.com/tendermint/tendermint/issues/4030) Only allow ed25519 pubkeys when connecting
@@ -1215,6 +936,9 @@ All clients are recommended to upgrade. See
Special thanks to [fudongbai](https://hackerone.com/fudongbai) for discovering
and reporting this issue.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [p2p] [\#4030](https://github.com/tendermint/tendermint/issues/4030) Fix for panic on nil public key sent to a peer
@@ -1225,6 +949,9 @@ and reporting this issue.
Special thanks to external contributors on this release: @jon-certik, @gracenoah, @PSalant726, @gchaincl
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- CLI/RPC/Config
@@ -1253,13 +980,16 @@ Special thanks to external contributors on this release: @jon-certik, @gracenoah
*August 28, 2019*
@climber73 wrote the [Writing a Tendermint Core application in Java
(gRPC)](https://docs.tendermint.com/v0.34/tutorials/java.html)
@climber73 wrote the [Writing a Tendermint Core application in Java
(gRPC)](https://github.com/tendermint/tendermint/blob/master/docs/guides/java.md)
guide.
Special thanks to external contributors on this release:
@gchaincl, @bluele, @climber73
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### IMPROVEMENTS:
- [consensus] [\#3839](https://github.com/tendermint/tendermint/issues/3839) Reduce "Error attempting to add vote" message severity (Error -> Info)
@@ -1280,6 +1010,9 @@ Special thanks to external contributors on this release:
Special thanks to external contributors on this release:
@ruseinov, @bluele, @guagualvcha
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- Go API
@@ -1287,7 +1020,7 @@ Special thanks to external contributors on this release:
### FEATURES:
- [blockchain] [\#3561](https://github.com/tendermint/tendermint/issues/3561) Add early version of the new blockchain reactor, which is supposed to be more modular and testable compared to the old version. To try it, you'll have to change `version` in the config file, [here](https://github.com/tendermint/tendermint/blob/main/config/toml.go#L303) NOTE: It's not ready for production yet. For further information, see [ADR-40](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-040-blockchain-reactor-refactor.md) & [ADR-43](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-043-blockchain-riri-org.md)
- [blockchain] [\#3561](https://github.com/tendermint/tendermint/issues/3561) Add early version of the new blockchain reactor, which is supposed to be more modular and testable compared to the old version. To try it, you'll have to change `version` in the config file, [here](https://github.com/tendermint/tendermint/blob/master/config/toml.go#L303) NOTE: It's not ready for production yet. For further information, see [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md) & [ADR-43](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-043-blockchain-riri-org.md)
- [mempool] [\#3826](https://github.com/tendermint/tendermint/issues/3826) Make `max_msg_bytes` configurable (@bluele)
- [node] [\#3846](https://github.com/tendermint/tendermint/pull/3846) Allow replacing existing p2p.Reactor(s) using [`CustomReactors`
option](https://godoc.org/github.com/tendermint/tendermint/node#CustomReactors).
@@ -1319,6 +1052,9 @@ This release contains a minor enhancement to the ABCI and some breaking changes
- CheckTx requests include a `CheckTxType` enum that can be set to `Recheck` to indicate to the application that this transaction was already checked/validated and certain expensive operations (like checking signatures) can be skipped
- Removed various functions from `libs` pkgs
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- Go API
@@ -1345,7 +1081,7 @@ This release contains a minor enhancement to the ABCI and some breaking changes
- [p2p] [\#3338](https://github.com/tendermint/tendermint/issues/3338) Prevent "sent next PEX request too soon" errors by not calling
ensurePeers outside of ensurePeersRoutine
- [behavior] [\3772](https://github.com/tendermint/tendermint/pull/3772) Return correct reason in MessageOutOfOrder (@jim380)
- [behaviour] [\3772](https://github.com/tendermint/tendermint/pull/3772) Return correct reason in MessageOutOfOrder (@jim380)
- [config] [\#3723](https://github.com/tendermint/tendermint/issues/3723) Add consensus_params to testnet config generation; document time_iota_ms (@ashleyvega)
@@ -1364,6 +1100,9 @@ and the RPC, namely:
[docs](https://github.com/tendermint/tendermint/blob/60827f75623b92eff132dc0eff5b49d2025c591e/docs/spec/abci/abci.md#events)
- Bind RPC to localhost by default, not to the public interface [UPGRADING/RPC_Changes](./UPGRADING.md#rpc_changes)
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* CLI/RPC/Config
@@ -1464,6 +1203,9 @@ Notes:
Special thanks to [fudongbai](https://hackerone.com/fudongbai) for finding
and reporting this.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [mempool] Reserve IDs in InitPeer instead of AddPeer (@tessr)
@@ -1483,6 +1225,9 @@ identified and fixed here.
Special thanks to [elvishacker](https://hackerone.com/elvishacker) for finding
and reporting this.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- Go API
@@ -1509,6 +1254,9 @@ accepting new peers and only allowing `ed25519` pubkeys.
Special thanks to [fudongbai](https://hackerone.com/fudongbai) for pointing
this out.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [p2p] [\#4030](https://github.com/tendermint/tendermint/issues/4030) Only allow ed25519 pubkeys when connecting
@@ -1524,6 +1272,9 @@ All clients are recommended to upgrade. See
Special thanks to [fudongbai](https://hackerone.com/fudongbai) for discovering
and reporting this issue.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [p2p] [\#4030](https://github.com/tendermint/tendermint/issues/4030) Fix for panic on nil public key sent to a peer
@@ -1604,7 +1355,7 @@ Special thanks to external contributors on this release:
- [libs/db] [\#3611](https://github.com/tendermint/tendermint/issues/3611) Conditional compilation
* Use `cleveldb` tag instead of `gcc` to compile Tendermint with CLevelDB or
use `make build_c` / `make install_c` (full instructions can be found at
<https://docs.tendermint.com>)
https://docs.tendermint.com/master/introduction/install.html#compile-with-cleveldb-support)
* Use `boltdb` tag to compile Tendermint with bolt db
- [node] [\#3362](https://github.com/tendermint/tendermint/issues/3362) Return an error if `persistent_peers` list is invalid (except
when IP lookup fails)
@@ -1665,7 +1416,7 @@ It brings back `NetAddress()` to `NodeInfo` and uses it instead of `SocketAddr`
Additionally, it improves response time on the `/validators` or `/status` RPC endpoints.
As a side-effect it makes these RPC endpoints more difficult to DoS and fixes a performance degradation in `ExecCommitBlock`.
Also, it contains an [ADR](https://github.com/tendermint/tendermint/pull/3539) that proposes decoupling the
responsibility for peer behavior from the `p2p.Switch` (by @brapse).
responsibility for peer behaviour from the `p2p.Switch` (by @brapse).
Special thanks to external contributors on this release:
@brapse, @guagualvcha, @mydring
@@ -1818,6 +1569,9 @@ See the [v0.31.0
Milestone](https://github.com/tendermint/tendermint/milestone/19?closed=1) for
more details.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* CLI/RPC/Config
@@ -1828,7 +1582,7 @@ more details.
- [rpc] [\#3269](https://github.com/tendermint/tendermint/issues/2826) Limit number of unique clientIDs with open subscriptions. Configurable via `rpc.max_subscription_clients`
- [rpc] [\#3269](https://github.com/tendermint/tendermint/issues/2826) Limit number of unique queries a given client can subscribe to at once. Configurable via `rpc.max_subscriptions_per_client`.
- [rpc] [\#3435](https://github.com/tendermint/tendermint/issues/3435) Default ReadTimeout and WriteTimeout changed to 10s. WriteTimeout can increased by setting `rpc.timeout_broadcast_tx_commit` in the config.
- [rpc/client] [\#3269](https://github.com/tendermint/tendermint/issues/3269) Update `EventsClient` interface to reflect new pubsub/eventBus API [ADR-33](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-033-pubsub.md). This includes `Subscribe`, `Unsubscribe`, and `UnsubscribeAll` methods.
- [rpc/client] [\#3269](https://github.com/tendermint/tendermint/issues/3269) Update `EventsClient` interface to reflect new pubsub/eventBus API [ADR-33](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-033-pubsub.md). This includes `Subscribe`, `Unsubscribe`, and `UnsubscribeAll` methods.
* Apps
- [abci] [\#3403](https://github.com/tendermint/tendermint/issues/3403) Remove `time_iota_ms` from BlockParams. This is a
@@ -1881,7 +1635,7 @@ more details.
- [blockchain] [\#3358](https://github.com/tendermint/tendermint/pull/3358) Fix timer leak in `BlockPool` (@guagualvcha)
- [cmd] [\#3408](https://github.com/tendermint/tendermint/issues/3408) Fix `testnet` command's panic when creating non-validator configs (using `--n` flag) (@srmo)
- [libs/db/remotedb/grpcdb] [\#3402](https://github.com/tendermint/tendermint/issues/3402) Close Iterator/ReverseIterator after use
- [libs/pubsub] [\#951](https://github.com/tendermint/tendermint/issues/951), [\#1880](https://github.com/tendermint/tendermint/issues/1880) Use non-blocking send when dispatching messages [ADR-33](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-033-pubsub.md)
- [libs/pubsub] [\#951](https://github.com/tendermint/tendermint/issues/951), [\#1880](https://github.com/tendermint/tendermint/issues/1880) Use non-blocking send when dispatching messages [ADR-33](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-033-pubsub.md)
- [lite] [\#3364](https://github.com/tendermint/tendermint/issues/3364) Fix `/validators` and `/abci_query` proxy endpoints
(@guagualvcha)
- [p2p/conn] [\#3347](https://github.com/tendermint/tendermint/issues/3347) Reject all-zero shared secrets in the Diffie-Hellman step of secret-connection
@@ -1948,7 +1702,7 @@ For more, see issues marked
This release also includes a fix to prevent Tendermint from including the same
piece of evidence in more than one block. This issue was reported by @chengwenxi in our
[bug bounty program](https://hackerone.com/cosmos).
[bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
@@ -2037,6 +1791,9 @@ This release contains two important fixes: one for p2p layer where we sometimes
were not closing connections and one for consensus layer where consensus with
no empty blocks (`create_empty_blocks = false`) could halt.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### IMPROVEMENTS:
- [pex] [\#3037](https://github.com/tendermint/tendermint/issues/3037) Only log "Reached max attempts to dial" once
- [rpc] [\#3159](https://github.com/tendermint/tendermint/issues/3159) Expose
@@ -2075,6 +1832,9 @@ While we are trying to stabilize the Block protocol to preserve compatibility
with old chains, there may be some final changes yet to come before Cosmos
launch as we continue to audit and test the software.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* CLI/RPC/Config
@@ -2122,6 +1882,9 @@ launch as we continue to audit and test the software.
Special thanks to external contributors on this release:
@HaoyangLiu
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BUG FIXES:
- [consensus] Fix consensus halt from proposing blocks with too much evidence
@@ -2249,6 +2012,9 @@ Special thanks to @dlguddus for discovering a [major
issue](https://github.com/tendermint/tendermint/issues/2718#issuecomment-440888677)
in the proposer selection algorithm.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
This release is primarily about fixes to the proposer selection algorithm
in preparation for the [Cosmos Game of
Stakes](https://blog.cosmos.network/the-game-of-stakes-is-open-for-registration-83a404746ee6).
@@ -2310,6 +2076,9 @@ Special thanks to external contributors on this release:
@ackratos, @goolAdapter, @james-ray, @joe-bowman, @kostko,
@nagarajmanjunath, @tomtau
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### FEATURES:
- [rpc] [\#2747](https://github.com/tendermint/tendermint/issues/2747) Enable subscription to tags emitted from `BeginBlock`/`EndBlock` (@kostko)
@@ -2333,8 +2102,8 @@ Special thanks to external contributors on this release:
- [blockchain] [\#2731](https://github.com/tendermint/tendermint/issues/2731) Retry both blocks if either is bad to avoid getting stuck during fast sync (@goolAdapter)
- [consensus] [\#2893](https://github.com/tendermint/tendermint/issues/2893) Use genDoc.Validators instead of state.NextValidators on replay when appHeight==0 (@james-ray)
- [log] [\#2868](https://github.com/tendermint/tendermint/issues/2868) Fix `module=main` setting overriding all others
- NOTE: this changes the default logging behavior to be much less verbose.
Set `log_level="info"` to restore the previous behavior.
- NOTE: this changes the default logging behaviour to be much less verbose.
Set `log_level="info"` to restore the previous behaviour.
- [rpc] [\#2808](https://github.com/tendermint/tendermint/issues/2808) Fix `accum` field in `/validators` by calling `IncrementAccum` if necessary
- [rpc] [\#2811](https://github.com/tendermint/tendermint/issues/2811) Allow integer IDs in JSON-RPC requests (@tomtau)
- [txindex/kv] [\#2759](https://github.com/tendermint/tendermint/issues/2759) Fix tx.height range queries
@@ -2348,6 +2117,9 @@ Special thanks to external contributors on this release:
Special thanks to external contributors on this release:
@danil-lashin, @kevlubkcm, @krhubert, @srmo
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* Go API
@@ -2391,6 +2163,8 @@ Special thanks to external contributors on this release:
Special thanks to external contributors on this release: @hleb-albau, @zhuzeyu
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### FEATURES:
- [rpc] [\#2582](https://github.com/tendermint/tendermint/issues/2582) Enable CORS on RPC API (@hleb-albau)
@@ -2408,6 +2182,8 @@ Special thanks to external contributors on this release: @hleb-albau, @zhuzeyu
Special thanks to external contributors on this release: @katakonst
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### IMPROVEMENTS:
- [consensus] [\#2704](https://github.com/tendermint/tendermint/issues/2704) Simplify valid POL round logic
@@ -2435,7 +2211,7 @@ Special thanks to external contributors on this release:
@james-ray, @overbool, @phymbert, @Slamper, @Uzair1995, @yutianwu.
Special thanks to @Slamper for a series of bug reports in our [bug bounty
program](https://hackerone.com/cosmos) which are fixed in this release.
program](https://hackerone.com/tendermint) which are fixed in this release.
This release is primarily about adding Version fields to various data structures,
optimizing consensus messages for signing and verification in
@@ -2465,7 +2241,7 @@ increasing attention to backwards compatibility. Thanks for bearing with us!
* [state] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Add Version field to State, breaking the format of State as
encoded on disk.
* [rpc] [\#2298](https://github.com/tendermint/tendermint/issues/2298) `/abci_query` takes `prove` argument instead of `trusted` and switches the default
behavior to `prove=false`
behaviour to `prove=false`
* [rpc] [\#2654](https://github.com/tendermint/tendermint/issues/2654) Remove all `node_info.other.*_version` fields in `/status` and
`/net_info`
* [rpc] [\#2636](https://github.com/tendermint/tendermint/issues/2636) Remove
@@ -2579,7 +2355,9 @@ Special thanks to external contributors on this release:
This release is mostly about the ConsensusParams - removing fields and enforcing MaxGas.
It also addresses some issues found via security audit, removes various unused
functions from `libs/common`, and implements
[ADR-012](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-012-peer-transport.md).
[ADR-012](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-012-peer-transport.md).
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
BREAKING CHANGES:
@@ -2610,7 +2388,7 @@ FEATURES:
- [libs] [\#2286](https://github.com/tendermint/tendermint/issues/2286) Panic if `autofile` or `db/fsdb` permissions change from 0600.
IMPROVEMENTS:
- [libs/db] [\#2371](https://github.com/tendermint/tendermint/issues/2371) Output error instead of panic when the given `db_backend` is not initialized (@bradyjoestar)
- [libs/db] [\#2371](https://github.com/tendermint/tendermint/issues/2371) Output error instead of panic when the given `db_backend` is not initialised (@bradyjoestar)
- [mempool] [\#2399](https://github.com/tendermint/tendermint/issues/2399) Make mempool cache a proper LRU (@bradyjoestar)
- [p2p] [\#2126](https://github.com/tendermint/tendermint/issues/2126) Introduce PeerTransport interface to improve isolation of concerns
- [libs/common] [\#2326](https://github.com/tendermint/tendermint/issues/2326) Service returns ErrNotStarted
@@ -2644,7 +2422,7 @@ are affected by a change.
A few more breaking changes are in the works - each will come with a clear
Architecture Decision Record (ADR) explaining the change. You can review ADRs
[here](https://github.com/tendermint/tendermint/tree/main/docs/architecture)
[here](https://github.com/tendermint/tendermint/tree/develop/docs/architecture)
or in the [open Pull Requests](https://github.com/tendermint/tendermint/pulls).
You can also check in on the [issues marked as
breaking](https://github.com/tendermint/tendermint/issues?q=is%3Aopen+is%3Aissue+label%3Abreaking).
@@ -2660,7 +2438,7 @@ BREAKING CHANGES:
- [abci] Added address of the original proposer of the block to Header
- [abci] Change ABCI Header to match Tendermint exactly
- [abci] [\#2159](https://github.com/tendermint/tendermint/issues/2159) Update use of `Validator` (see
[ADR-018](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-018-ABCI-Validators.md)):
[ADR-018](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-018-ABCI-Validators.md)):
- Remove PubKey from `Validator` (so it's just Address and Power)
- Introduce `ValidatorUpdate` (with just PubKey and Power)
- InitChain and EndBlock use ValidatorUpdate
@@ -2682,7 +2460,7 @@ BREAKING CHANGES:
- [state] [\#1815](https://github.com/tendermint/tendermint/issues/1815) Validator set changes are now delayed by one block (!)
- Add NextValidatorSet to State, changes on-disk representation of state
- [state] [\#2184](https://github.com/tendermint/tendermint/issues/2184) Enforce ConsensusParams.BlockSize.MaxBytes (See
[ADR-020](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-020-block-size.md)).
[ADR-020](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-020-block-size.md)).
- Remove ConsensusParams.BlockSize.MaxTxs
- Introduce maximum sizes for all components of a block, including ChainID
- [types] Updates to the block Header:
@@ -2693,7 +2471,7 @@ BREAKING CHANGES:
- [consensus] [\#2203](https://github.com/tendermint/tendermint/issues/2203) Implement BFT time
- Timestamp in block must be monotonic and equal the median of timestamps in block's LastCommit
- [crypto] [\#2239](https://github.com/tendermint/tendermint/issues/2239) Secp256k1 signature changes (See
[ADR-014](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-014-secp-malleability.md)):
[ADR-014](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-014-secp-malleability.md)):
- format changed from DER to `r || s`, both little endian encoded as 32 bytes.
- malleability removed by requiring `s` to be in canonical form.
@@ -2923,7 +2701,7 @@ BREAKING CHANGES:
FEATURES
- [cmd] Added metrics (served under `/metrics` using a Prometheus client;
disabled by default). See the new `instrumentation` section in the config and
[metrics](https://github.com/tendermint/tendermint/blob/main/docs/tendermint-core/metrics.md)
[metrics](https://tendermint.readthedocs.io/projects/tools/en/develop/metrics.html)
guide.
- [p2p] Add IPv6 support to peering.
- [p2p] Add `external_address` to config to allow specifying the address for
@@ -3037,7 +2815,7 @@ BREAKING:
FEATURES
- [rpc] the RPC documentation is now published to https://github.com/tendermint/tendermint/tree/main/spec/rpc
- [rpc] the RPC documentation is now published to https://tendermint.github.io/slate
- [p2p] AllowDuplicateIP config option to refuse connections from same IP.
- true by default for now, false by default in next breaking release
- [docs] Add docs for query, tx indexing, events, pubsub
@@ -3493,7 +3271,7 @@ Also includes the Grand Repo-Merge of 2017.
BREAKING CHANGES:
- Config and Flags:
- The `config` map is replaced with a [`Config` struct](https://github.com/tendermint/tendermint/blob/main/config/config.go#L11),
- The `config` map is replaced with a [`Config` struct](https://github.com/tendermint/tendermint/blob/master/config/config.go#L11),
containing substructs: `BaseConfig`, `P2PConfig`, `MempoolConfig`, `ConsensusConfig`, `RPCConfig`
- This affects the following flags:
- `--seeds` is now `--p2p.seeds`
@@ -3516,7 +3294,7 @@ containing substructs: `BaseConfig`, `P2PConfig`, `MempoolConfig`, `ConsensusCon
- Logger
- Replace static `log15` logger with a simple interface, and provide a new implementation using `go-kit`.
See our new [logging library](https://github.com/tendermint/tendermint/blob/main/libs/log/logger.go) and [blog post](https://blog.cosmos.network/abstracting-the-logger-interface-in-go-4cf96bf90bb7) for more details
See our new [logging library](https://github.com/tendermint/tmlibs/log) and [blog post](https://tendermint.com/blog/abstracting-the-logger-interface-in-go) for more details
- Levels `warn` and `notice` are removed (you may need to change them in your `config.toml`!)
- Change some [function and method signatures](https://gist.github.com/ebuchman/640d5fc6c2605f73497992fe107ebe0b) to accept a logger

View File

@@ -1,39 +1,6 @@
# Unreleased Changes
## v0.38.0
### BREAKING CHANGES
- CLI/RPC/Config
- Apps
- P2P Protocol
- Go API
- [p2p] \#9625 Remove unused p2p/trust package (@cmwaters)
- Blockchain Protocol
- Data Storage
- [state] \#6541 Move pruneBlocks from consensus/state to state/execution. (@JayT106)
- Tooling
- [tools/tm-signer-harness] \#6498 Use the OS home dir instead of the hardcoded PATH. (@JayT106)
### FEATURES
### IMPROVEMENTS
- [pubsub] \#7319 Performance improvements for the event query API (@creachadair)
- [p2p/pex] \#6509 Improve addrBook.hash performance (@cuonglm)
- [crypto/merkle] \#6443 & \#6513 Improve HashAlternatives performance (@cuonglm, @marbar3778)
### BUG FIXES
- [docker] \#9462 ensure Docker image uses consistent version of Go
## v0.37.0
## vX.X
Special thanks to external contributors on this release:
@@ -42,59 +9,53 @@ Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermi
### BREAKING CHANGES
- CLI/RPC/Config
- [config] \#9259 Rename the fastsync section and the fast_sync key to blocksync and block_sync respectively
- [config] \#5598 The `test_fuzz` and `test_fuzz_config` P2P settings have been removed. (@erikgrinaker)
- [config] \#5728 `fast_sync = "v1"` is no longer supported (@melekes)
- [cli] \#5772 `gen_node_key` prints JSON-encoded `NodeKey` rather than ID and does not save it to `node_key.json` (@melekes)
- [cli] \#5777 use hyphen-case instead of snake_case for all cli commands and config parameters (@cmwaters)
- Apps
- [abci/counter] \#6684 Delete counter example app
- [abci] \#5783 Make length delimiter encoding consistent (`uint64`) between ABCI and P2P wire-level protocols
- [abci] \#9145 Removes unused Response/Request `SetOption` from ABCI (@samricotta)
- [abci/params] \#9287 Deduplicate `ConsensusParams` and `BlockParams` so only `types` proto definitions are used (@cmwaters)
- Remove `TimeIotaMs` and use a hard-coded 1 millisecond value to ensure monotonically increasing block times.
- Rename `AppVersion` to `App` so as to not stutter.
- [types] \#9287 Reduce the use of protobuf types in core logic. (@cmwaters)
- `ConsensusParams`, `BlockParams`, `ValidatorParams`, `EvidenceParams`, `VersionParams` have become native types.
They still utilize protobuf when being sent over the wire or written to disk.
- Moved `ValidateConsensusParams` inside (now native type) `ConsensusParams`, and renamed it to `ValidateBasic`.
- [abci] \#9301 New ABCI methods `PrepareProposal` and `ProcessProposal` which give the app control over transactions proposed and allows for verification of proposed blocks.
- [abci] \#8216 Renamed `EvidenceType` to `MisbehaviorType` and `Evidence` to `Misbehavior` as a more accurate label of their contents. (@williambanfield, @sergio-mena)
- [abci] \#9122 Renamed `LastCommitInfo` to `CommitInfo` in preparation for vote extensions. (@cmwaters)
- [abci] \#8656, \#8901 Added cli commands for `PrepareProposal` and `ProcessProposal`. (@jmalicevic, @hvanz)
- [abci] \#6403 Change the `key` and `value` fields from `[]byte` to `string` in the `EventAttribute` type. (@alexanderbez)
- [ABCI] \#5447 Remove `SetOption` method from `ABCI.Client` interface
- [ABCI] \#5447 Reset `Oneof` indexes for `Request` and `Response`.
- [ABCI] \#5818 Use protoio for msg length delimitation. Migrates from int64 to uint64 length delimiters.
- P2P Protocol
- Go API
- [all] \#9144 Change spelling from British English to American (@cmwaters)
- Rename "Subscription.Cancelled()" to "Subscription.Canceled()" in libs/pubsub
- [crypto/sr25519] \#6526 Do not re-execute the Ed25519-style key derivation step when doing signing and verification. The derivation is now done once and only once. This breaks `sr25519.GenPrivKeyFromSecret` output compatibility. (@Yawning)
- [abci/client, proxy] \#5673 `Async` funcs return an error, `Sync` and `Async` funcs accept `context.Context` (@melekes)
- [p2p] Removed unused function `MakePoWTarget`. (@erikgrinaker)
- [libs/bits] \#5720 Validate `BitArray` in `FromProto`, which now returns an error (@melekes)
- [proto/p2p] Renamed `DefaultNodeInfo` and `DefaultNodeInfoOther` to `NodeInfo` and `NodeInfoOther` (@erikgrinaker)
- [proto/p2p] Rename `NodeInfo.default_node_id` to `node_id` (@erikgrinaker)
- [libs/os] Kill() and {Must,}{Read,Write}File() functions have been removed. (@alessio)
- [store] \#5848 Remove block store state in favor of using the db iterators directly (@cmwaters)
- [state] \#5864 Use an iterator when pruning state (@cmwaters)
- Blockchain Protocol
- Data Storage
- [store/state/evidence/light] \#5771 Use an order-preserving varint key encoding (@cmwaters)
### FEATURES
- [abci] \#9301 New ABCI methods `PrepareProposal` and `ProcessProposal` which give the app control over the transactions proposed and allow for verification of proposed blocks.
### IMPROVEMENTS
- [crypto] \#9250 Update to use btcec v2 and the latest btcutil. (@wcsiu)
- [cli] \#9171 add `--hard` flag to rollback command (and a boolean to the `RollbackState` method). This will rollback
state and remove the last block. This command can be triggered multiple times. The application must also rollback
state to the same height. (@tsutsu, @cmwaters)
- [proto] \#9356 Migrate from `gogo/protobuf` to `cosmos/gogoproto` (@julienrbrt)
- [rpc] \#9276 Added `header` and `header_by_hash` queries to the RPC client (@samricotta)
- [abci] \#5706 Added `AbciVersion` to `RequestInfo` allowing applications to check ABCI version when connecting to Tendermint. (@marbar3778)
- [node] \#6059 Validate and complete genesis doc before saving to state store (@silasdavis)
- [crypto/ed25519] \#5632 Adopt zip215 `ed25519` verification. (@marbar3778)
- [crypto/ed25519] \#6526 Use [curve25519-voi](https://github.com/oasisprotocol/curve25519-voi) for `ed25519` signing and verification. (@Yawning)
- [crypto/sr25519] \#6526 Use [curve25519-voi](https://github.com/oasisprotocol/curve25519-voi) for `sr25519` signing and verification. (@Yawning)
- [crypto] \#6120 Implement batch verification interface for ed25519 and sr25519. (@marbar3778 & @Yawning)
- [types] \#6120 use batch verification for verifying commits signatures. (@marbar3778 & @cmwaters & @Yawning)
- If the key type supports the batch verification API it will try to batch verify. If the verification fails we will single verify each signature.
- [state] \#9505 Added logic so that, when pruning, the evidence period is taken into consideration and only unnecessary data is deleted (@samricotta)
- [privval] \#5603 Add `--key` to `init`, `gen_validator`, `testnet` & `unsafe_reset_priv_validator` for use in generating `secp256k1` keys.
- [privval] \#5725 Add gRPC support to private validator.
- [privval] \#5876 `tendermint show-validator` will query the remote signer if gRPC is being used (@marbar3778)
- [abci/client] \#5673 `Async` requests return an error if queue is full (@melekes)
- [mempool] \#5673 Cancel `CheckTx` requests if RPC client disconnects or times out (@melekes)
- [blockchain/v1] \#5728 Remove in favor of v2 (@melekes)
- [blockchain/v0] \#5741 Relax termination conditions and increase sync timeout (@melekes)
- [cli] \#5772 `gen_node_key` output now contains node ID (`id` field) (@melekes)
- [blockchain/v2] \#5774 Send status request when new peer joins (@melekes)
- [consensus] \#5792 Deprecates the `time_iota_ms` consensus parameter, to reduce the bug surface. The parameter is no longer used. (@valardragon)
### BUG FIXES
- [consensus] \#9229 fix round number of `enterPropose` when handling `RoundStepNewRound` timeout. (@fatcat22)
- [docker] \#9073 enable cross-platform builds using docker buildx
- [blocksync] \#9518 handle the case when the sending queue is full: retry block request after a timeout
- [types] \#5523 Change json naming of `PartSetHeader` within `BlockID` from `parts` to `part_set_header` (@marbar3778)
- [privval] \#5638 Increase read/write timeout to 5s and calculate ping interval based on it (@JoeKash)
- [blockchain/v1] [\#5701](https://github.com/tendermint/tendermint/pull/5701) Handle peers without blocks (@melekes)
- [blockchain/v1] \#5711 Fix deadlock (@melekes)


@@ -1,109 +1,59 @@
# The Tendermint Code of Conduct
This code of conduct applies to all projects run by the Tendermint/COSMOS team
and hence to Tendermint.
----
# Conduct
## Contact: conduct@tendermint.com
* We are committed to providing a friendly, safe and welcoming environment for
all, regardless of level of experience, gender, gender identity and
expression, sexual orientation, disability, personal appearance, body size,
race, ethnicity, age, religion, nationality, or other similar characteristics.
* On Slack, please avoid using overtly sexual nicknames or other nicknames that
might detract from a friendly, safe and welcoming environment for all.
* Please be kind and courteous. There's no need to be mean or rude.
* Respect that people have differences of opinion and that every design or
implementation choice carries a trade-off and numerous costs. There is seldom
a right answer.
* Please keep unstructured critique to a minimum. If you have solid ideas you
want to experiment with, make a fork and see how it works.
* We will exclude you from interaction if you insult, demean or harass anyone.
That is not welcome behavior. We interpret the term “harassment” as including
the definition in the [Citizen Code of Conduct][ccoc]; if you have any lack of
clarity about what might be included in that concept, please read their
definition. In particular, we don't tolerate behavior that excludes people in
socially marginalized groups.
* Private harassment is also unacceptable. No matter who you are, if you feel
you have been or are being harassed or made uncomfortable by a community
member, please contact one of the channel admins or the person mentioned above
immediately. Whether you're a regular contributor or a newcomer, we care about
making this community a safe place for you and we've got your back.
* Likewise any spamming, trolling, flaming, baiting or other attention-stealing
behavior is not welcome.
----
# Moderation
These are the policies for upholding our community's standards of conduct. If
you feel that a thread needs moderation, please contact the above-mentioned
person.
1. Remarks that violate the Tendermint/COSMOS standards of conduct, including
hateful, hurtful, oppressive, or exclusionary remarks, are not allowed.
(Cursing is allowed, but never targeting another user, and never in a hateful
manner.)
2. Remarks that moderators find inappropriate, whether listed in the code of
conduct or not, are also not allowed.
3. Moderators will first respond to such remarks with a warning.
4. If the warning is unheeded, the user will be “kicked,” i.e., kicked out of
the communication channel to cool off.
5. If the user comes back and continues to make trouble, they will be banned,
i.e., indefinitely excluded.
6. Moderators may choose at their discretion to un-ban the user if it was a
first offense and they offer the offended party a genuine apology.
7. If a moderator bans someone and you think it was unjustified, please take it
up with that moderator, or with a different moderator, in private. Complaints
about bans in-channel are not allowed.
8. Moderators are held to a higher standard than other community members. If a
moderator creates an inappropriate situation, they should expect less leeway
than others.
In the Tendermint/COSMOS community we strive to go the extra step to look out
for each other. Don't just aim to be technically unimpeachable, try to be your
best self. In particular, avoid flirting with offensive or sensitive issues,
particularly if they're off-topic; this all too often leads to unnecessary
fights, hurt feelings, and damaged trust; worse, it can drive people away
from the community entirely.
And if someone takes issue with something you said or did, resist the urge to be
defensive. Just stop doing what it was they complained about and apologize. Even
if you feel you were misinterpreted or unfairly accused, chances are good there
was something you could've communicated better — remember that it's your
responsibility to make your fellow Cosmonauts comfortable. Everyone wants to
get along and we are all here first and foremost because we want to talk
about cool technology. You will find that people will be eager to assume
good intent and forgive as long as you earn their trust.
The enforcement policies listed above apply to all official Tendermint/COSMOS
venues. For other projects adopting the Tendermint/COSMOS Code of Conduct,
please contact the maintainers of those projects for enforcement. If you wish to
use this code of conduct for your own project, consider explicitly mentioning
your moderation policy or making a copy with your own moderation policy so as to
avoid confusion.
\*Adapted from the [Node.js Policy on Trolling][node-trolling-policy], the
[Contributor Covenant v1.3.0][ccov] and the [Rust Code of Conduct][rust-coc].
[ccoc]: https://github.com/stumpsyn/policies/blob/master/citizen_code_of_conduct.md
[node-trolling-policy]: http://blog.izs.me/post/30036893703/policy-on-trolling
[ccov]: http://contributor-covenant.org/version/1/3/0/
[rust-coc]: https://www.rust-lang.org/en-US/conduct.html


@@ -7,12 +7,12 @@ support permissionless value-carrying networks. While all contributions are
welcome, contributors should bear this goal in mind in deciding if they should
target the main Tendermint project or a potential fork. When targeting the
main Tendermint project, the following process leads to the best chance of
landing changes in `main`.
All work on the code base should be motivated by a [Github
Issue](https://github.com/tendermint/tendermint/issues).
[Search](https://github.com/tendermint/tendermint/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22)
is a good place to start when looking for places to contribute. If you
would like to work on an issue which already exists, please indicate so
by leaving a comment.
@@ -26,8 +26,7 @@ will indicate their support with a heartfelt emoji.
If the issue would benefit from thorough discussion, maintainers may
request that you create a [Request For
Comment](https://github.com/tendermint/tendermint/tree/main/docs/rfc)
in the Tendermint spec repo. Discussion
at the RFC stage will build collective understanding of the dimensions
of the problems and help structure conversations around trade-offs.
@@ -46,7 +45,7 @@ Find the largest existing ADR number and bump it by 1.
When the problem as well as proposed solution are well understood,
changes should start with a [draft
pull request](https://github.blog/2019-02-14-introducing-draft-pull-requests/)
against `main`. The draft signals that work is underway. When the work
is ready for feedback, hitting "Ready for Review" will signal to the
maintainers to take a look.
@@ -54,7 +53,7 @@ maintainers to take a look.
Each stage of the process is aimed at creating feedback cycles which align contributors and maintainers to make sure:
- Contributors don't waste their time implementing/proposing features which won't land in `main`.
- Maintainers have the necessary context in order to support and review contributions.
## Forking
@@ -73,19 +72,19 @@ For instance, to create a fork and work on a branch of it, I would:
- `git remote add origin git@github.com:ebuchman/basecoin.git`
Now `origin` refers to my fork and `upstream` refers to the Tendermint version.
So I can `git push -u origin main` to update my fork, and make pull requests to tendermint from there.
Of course, replace `ebuchman` with your git handle.
To pull in updates from the origin repo, run
- `git fetch upstream`
- `git rebase upstream/main` (or whatever branch you want)
## Dependencies
We use [go modules](https://github.com/golang/go/wiki/Modules) to manage dependencies.
That said, the `main` branch of every Tendermint repository should just build
with `go get`, which means they should be kept up-to-date with their
dependencies so we can get away with telling people they can just `go get` our
software.
@@ -105,45 +104,41 @@ specify exactly the dependency you want to update, eg.
## Protobuf
We use [Protocol Buffers](https://developers.google.com/protocol-buffers) along
with [`gogoproto`](https://github.com/cosmos/gogoproto) to generate code for use
across Tendermint Core.
To generate proto stubs, lint, and check protos for breaking changes, you will
need to install [buf](https://buf.build/) and `gogoproto`. Then, from the root
of the repository, run:
```bash
# Lint all of the .proto files in proto/tendermint
make proto-lint

# Check if any of your local changes (prior to committing to the Git repository)
# are breaking
make proto-check-breaking

# Generate Go code from the .proto files in proto/tendermint
make proto-gen
```
To automatically format `.proto` files, you will need
[`clang-format`](https://clang.llvm.org/docs/ClangFormat.html) installed. Once
installed, you can run:
```bash
make proto-format
```
### Visual Studio Code
If you are a VS Code user, you may want to add the following to your `.vscode/settings.json`:
```json
{
    "protoc": {
        "options": [
            "--proto_path=${workspaceRoot}/proto"
        ]
    }
}
```
@@ -152,47 +147,10 @@ If you are a VS Code user, you may want to add the following to your `.vscode/se
Every fix, improvement, feature, or breaking change should be made in a
pull-request that includes an update to the `CHANGELOG_PENDING.md` file.
A feature can also be worked on in a feature branch, if its size and/or risk
justifies it (see [Branching Model and Release](#branching-model-and-release) below).
### What does a good changelog entry look like?
Changelog entries should answer the question: "what is important about this
change for users to know?" or "what problem does this solve for users?". It
should not simply be a reiteration of the title of the associated PR, unless the
title of the PR _very_ clearly explains the benefit of a change to a user.
Some good examples of changelog entry descriptions:
```md
- [consensus] \#1111 Small transaction throughput improvement (approximately
3-5\% from preliminary tests) through refactoring the way we use channels
- [mempool] \#1112 Refactor Go API to be able to easily swap out the current
mempool implementation in Tendermint forks
- [p2p] \#1113 Automatically ban peers when their messages are unsolicited or
are received too frequently
```
Some bad examples of changelog entry descriptions:
```md
- [consensus] \#1111 Refactor channel usage
- [mempool] \#1112 Make API generic
- [p2p] \#1113 Ban for PEX message abuse
```
For more on how to write good changelog entries, see:
- <https://keepachangelog.com>
- <https://docs.gitlab.com/ee/development/changelog.html#writing-good-changelog-entries>
- <https://depfu.com/blog/what-makes-a-good-changelog>
### Changelog entry format
Changelog entries should be formatted as follows:
```md
- [module] \#xxx Some description of the change (@contributor)
```
Here, `module` is the part of the code that changed (typically a
@@ -213,46 +171,37 @@ Changes with multiple classifications should be doubly included (eg. a bug fix
that is also a breaking change should be recorded under both).
Breaking changes are further subdivided according to the APIs/users they impact.
Any change that affects multiple APIs/users should be recorded multiply - for
instance, a change to the `Blockchain Protocol` that removes a field from the
header should also be recorded under `CLI/RPC/Config` since the field will be
removed from the header in RPC responses as well.
## Branching Model and Release
The main development branch is `main`.
Every release is maintained in a release branch named `vX.Y.Z`.
Pending minor releases have long-lived release candidate ("RC") branches. Minor release changes should be merged to these long-lived RC branches at the same time that the changes are merged to `main`.
If a feature's size is big and/or its risk is high, it can be implemented in a feature branch.
While the feature work is in progress,
pull requests are open and squash merged against the feature branch.
Branch `main` is periodically merged (merge commit) into the feature branch,
to reduce branch divergence.
When the feature is complete, the feature branch is merged back (merge commit) into `main`.
The moment of the final merge can be carefully chosen
so as to land different features in different releases.
Note all pull requests should be squash merged except for merging to a release branch (named `vX.Y`). This keeps the commit history clean and makes it
easy to reference the pull request where a change was introduced.
### Development Procedure
The latest state of development is on `main`, which must never fail `make test`. _Never_ force push `main`, unless fixing broken git history (which we rarely do anyways).
To begin contributing, create a development branch either on `github.com/tendermint/tendermint`, or your fork (using `git remote add origin`).
Make changes, and before submitting a pull request, update the `CHANGELOG_PENDING.md` to record your change. Also, run either `git rebase` or `git merge` on top of the latest `main`. (Since pull requests are squash-merged, either is fine!)
Update the `UPGRADING.md` if the change you've made is breaking and the
instructions should be in place for a user on how they can upgrade their
software (ABCI application, Tendermint-based blockchain, light client, wallet).
Once you have submitted a pull request label the pull request with either `R:minor`, if the change should be included in the next minor release, or `R:major`, if the change is meant for a major release.
Sometimes (often!) pull requests get out-of-date with `main`, as other people merge different pull requests to `main`. It is our convention that pull request authors are responsible for updating their branches with `main`. (This also means that you shouldn't update someone else's branch for them; even if it seems like you're doing them a favor, you may be interfering with their git flow in some way!)
#### Merging Pull Requests
@@ -260,20 +209,20 @@ It is also our convention that authors merge their own pull requests, when possi
Before merging a pull request:
- Ensure pull branch is up-to-date with a recent `main` (GitHub won't let you merge without this!)
- Run `make test` to ensure that all tests pass
- [Squash](https://stackoverflow.com/questions/5189560/squash-my-last-x-commits-together-using-git) merge pull request
#### Pull Requests for Minor Releases
If your change should be included in a minor release, please also open a PR against the long-lived minor release candidate branch (e.g., `rc1/v0.33.5`) _immediately after your change has been merged to main_.
You can do this by cherry-picking your commit off `main`:
```sh
$ git checkout rc1/v0.33.5
$ git checkout -b {new branch name}
$ git cherry-pick {commit SHA from main}
# may need to fix conflicts, and then use git add and git cherry-pick --continue
$ git push origin {new branch name}
```
@@ -292,88 +241,127 @@ cmd/debug: execute p.Signal only when p is not nil
Fixes #nnnn
```
Each PR should have one commit once it lands on `main`; this can be accomplished by using the "squash and merge" button on Github. Be sure to edit your commit message, though!
### Release Procedure
#### Major Release
This major release process assumes that this release was preceded by release candidates.
If there were no release candidates, and you'd like to cut a major release directly from master, see below.
1. Start on the latest RC branch (`RCx/vX.X.0`).
2. Run integration tests.
3. Branch off of the RC branch (`git checkout -b release-prep`) and prepare the release:
- "Squash" changes from the changelog entries for the RCs into a single entry,
and add all changes included in `CHANGELOG_PENDING.md`.
(Squashing includes both combining all entries, as well as removing or simplifying
any intra-RC changes. It may also help to alphabetize the entries by package name.)
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all PRs
- Ensure that UPGRADING.md is up-to-date and includes notes on any breaking changes
or other upgrading flows.
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
- Add any release notes you would like to be added to the body of the release to `release_notes.md`.
4. Open a PR with these changes against the RC branch (`RCx/vX.X.0`).
5. Once these changes are on the RC branch, branch off of the RC branch again to create a release branch:
- `git checkout RCx/vX.X.0`
- `git checkout -b release/vX.X.0`
6. Push a tag with prepared release details. This will trigger the actual release `vX.X.0`.
- `git tag -a vX.X.0 -m 'Release vX.X.0'`
- `git push origin vX.X.0`
7. Make sure that `master` is updated with the latest `CHANGELOG.md`, `CHANGELOG_PENDING.md`, and `UPGRADING.md`.
8. Create the long-lived minor release branch `RC0/vX.X.1` for the next point release on this
new major release series.
##### Major Release (from `master`)
1. Start on `master`
2. Run integration tests (see `test_integrations` in Makefile)
3. Prepare release in a pull request against `master` (to be squash merged):
- Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`; if this release
had release candidates, squash all the RC updates into one
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all issues
- Run `bash ./scripts/authors.sh` to get a list of authors since the latest
release, and add the github aliases of external contributors to the top of
the changelog. To lookup an alias from an email, try `bash ./scripts/authors.sh <email>`
- Reset the `CHANGELOG_PENDING.md`
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
- Make sure all significant breaking changes are covered in `UPGRADING.md`
- Add any release notes you would like to be added to the body of the release to `release_notes.md`.
4. Push a tag with prepared release details (this will trigger the release `vX.X.0`)
- `git tag -a vX.X.x -m 'Release vX.X.x'`
- `git push origin vX.X.x`
5. Update the `CHANGELOG.md` file on master with the releases changelog.
6. Delete any RC branches and tags for this release (if applicable)
#### Minor Release (Point Releases)
Minor releases are done differently from major releases: They are built off of long-lived backport branches, rather than from master.
Each release "line" (e.g. 0.34 or 0.33) has its own long-lived backport branch, and
the backport branches have names like `v0.34.x` or `v0.33.x` (literally, `x`; it is not a placeholder in this case).
As non-breaking changes land on `master`, they should also be backported (cherry-picked) to these backport branches.
Minor releases don't have release candidates by default, although any tricky changes may merit a release candidate.
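As a rough sketch (the branch and commit names here are placeholders, not a prescribed workflow), backporting a change that has already landed on the development branch might look like:
```sh
# Make sure the long-lived backport branch is up to date locally.
git fetch origin
git checkout v0.34.x

# Create a short-lived branch and cherry-pick the original commit.
git checkout -b backport-my-fix
git cherry-pick {commit SHA from the development branch}

# Resolve any conflicts, push, and open a PR against v0.34.x.
git push origin backport-my-fix
```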
To create a minor release:
1. Checkout the long-lived backport branch: `git checkout vX.X.x`
2. Run integration tests: `make test_integrations`
3. Check out a new branch and prepare the release:
- Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for all issues
- Run `bash ./scripts/authors.sh` to get a list of authors since the latest release, and add the GitHub aliases of external contributors to the top of the CHANGELOG. To lookup an alias from an email, try `bash ./scripts/authors.sh <email>`
- Reset the `CHANGELOG_PENDING.md`
- Bump the ABCI version number, if necessary.
(Note that ABCI follows semver, and that ABCI versions are the only versions
which can change during minor releases, and only field additions are valid minor changes.)
- Add any release notes you would like to be added to the body of the release to `release_notes.md`.
4. Open a PR with these changes that will land them back on `vX.X.x`
5. Once this change has landed on the backport branch, make sure to pull it locally, then push a tag.
- `git tag -a vX.X.x -m 'Release vX.X.x'`
- `git push origin vX.X.x`
6. Create a pull request back to master with the CHANGELOG & version changes from the latest release.
- Remove all `R:minor` labels from the pull requests that were included in the release.
- Do not merge the backport branch into master.
#### Release Candidates
Before creating an official release, especially a major release, we may want to create a
release candidate (RC) for our friends and partners to test out. We use git tags to
create RCs, and we build them off of RC branches. RC branches typically have names formatted
like `RCX/vX.X.X` (or, concretely, `RC0/v0.34.0`), while the tags themselves follow
the "standard" release naming conventions, with `-rcX` at the end (`vX.X.X-rcX`).
(Note that branches and tags _cannot_ have the same names, so it's important that these branches
have distinct names from the tags/release names.)
1. Start from the RC branch (e.g. `RC0/v0.34.0`).
2. Create the new tag, specifying a name and a tag "message":
`git tag -a v0.34.0-rc0 -m "Release Candidate v0.34.0-rc0`
3. Push the tag back up to origin:
`git push origin v0.34.0-rc4`
Now the tag should be available on the repo's releases page.
4. Create a new release candidate branch for any possible updates to the RC:
`git checkout -b RC1/v0.34.0; git push origin RC1/v0.34.0`
## Testing
### Unit tests
All repos should be hooked up to [CircleCI](https://circleci.com/).
Unit tests are located in `_test.go` files as directed by [the Go testing
package](https://golang.org/pkg/testing/). If you're adding or removing a
function, please check there's a `TestType_Method` test for it.
Run: `make test`
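For illustration only, a minimal test following that naming convention might look like the sketch below (the `MyType` type and its `DoWork` method are hypothetical and would normally live in the package under test, not in the `_test.go` file):
```go
package example

import "testing"

// MyType and DoWork are hypothetical; they stand in for the code under test.
type MyType struct{}

func (MyType) DoWork(x int) int { return x * 2 }

// TestMyType_DoWork follows the TestType_Method convention, which makes the
// test for MyType.DoWork easy to find.
func TestMyType_DoWork(t *testing.T) {
	if got := (MyType{}).DoWork(2); got != 4 {
		t.Fatalf("DoWork(2) = %d, want 4", got)
	}
}
```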
### Integration tests
Integration tests are also located in `_test.go` files. What differentiates
them is a more complicated setup, which usually involves setting up two or more
components.
Run: `make test_integrations`
### End-to-end tests
End-to-end tests are used to verify a fully integrated Tendermint network.
See [README](./test/e2e/README.md) for details.
Run:
```sh
cd test/e2e && \
make && \
./build/runner -f networks/ci.toml
```
### Model-based tests (ADVANCED)
*NOTE: if you're just submitting your first PR, you won't need to touch these
most probably (99.9%)*.
For components, that have been [formally
verified](https://en.wikipedia.org/wiki/Formal_verification) using
[TLA+](https://en.wikipedia.org/wiki/TLA%2B), it may be possible to generate
tests using a combination of the [Apalache Model
Checker](https://apalache.informal.systems/) and [tendermint-rs testgen
util](https://github.com/informalsystems/tendermint-rs/tree/master/testgen).
Now, I know there's a lot to take in. If you want to learn more, check out [
this video](https://www.youtube.com/watch?v=aveoIMphzW8) by Andrey Kupriyanov
& Igor Konnov.
At the moment, we have model-based tests for the light client, located in the
`./light/mbt` directory.
Run: `cd light/mbt && go test`
### Fuzz tests (ADVANCED)
*NOTE: if you're just submitting your first PR, you won't need to touch these
most probably (99.9%)*.
[Fuzz tests](https://en.wikipedia.org/wiki/Fuzzing) can be found inside the
`./test/fuzz` directory. See [README.md](./test/fuzz/README.md) for details.
Run: `cd test/fuzz && make fuzz-{PACKAGE-COMPONENT}`
### Jepsen tests (ADVANCED)
*NOTE: if you're just submitting your first PR, you won't need to touch these
most probably (99.9%)*.
[Jepsen](http://jepsen.io/) tests are used to verify the
[linearizability](https://jepsen.io/consistency/models/linearizable) property
of the Tendermint consensus. They are located in a separate repository
-> <https://github.com/tendermint/jepsen>. Please refer to its README for more
information.
If they have `.go` files in the root directory, they will be automatically
tested by circle using `go test -v -race ./...`. If not, they will need a
`circle.yml`. Ideally, every repo has a `Makefile` that defines `make test` and
includes its continuous integration status using a badge in the `README.md`.
### RPC Testing
**If you contribute to the RPC endpoints it's important to document your
changes in the [Openapi file](./rpc/openapi/openapi.yaml)**.
To test your changes you must install `nodejs` and run:
```bash
npm i -g dredd
@@ -381,8 +369,4 @@ make build-linux build-contract-tests-hooks
make contract-tests
```
**WARNING: these are currently broken due to <https://github.com/apiaryio/dredd>
not supporting complete OpenAPI 3**.
This command will pop up a network and check every endpoint against what has
been documented.


@@ -1,18 +1,14 @@
# Use a build arg to ensure that both stages use the same,
# hopefully current, go version.
ARG GOLANG_BASE_IMAGE=golang:1.18-alpine
# stage 1 Generate Tendermint Binary
FROM --platform=$BUILDPLATFORM $GOLANG_BASE_IMAGE as builder
FROM golang:1.15-alpine as builder
RUN apk update && \
apk upgrade && \
apk --no-cache add make
COPY / /tendermint
WORKDIR /tendermint
RUN TARGETPLATFORM=$TARGETPLATFORM make build-linux
RUN make build-linux
# stage 2
FROM $GOLANG_BASE_IMAGE
FROM golang:1.15-alpine
LABEL maintainer="hello@tendermint.com"
# Tendermint will be looking for the genesis file in /tendermint/config/genesis.json
@@ -53,7 +49,7 @@ ENV PROXY_APP=kvstore MONIKER=dockernode CHAIN_ID=dockerchain
COPY ./DOCKER/docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["node"]
CMD ["start"]
# Expose the data directory as a volume since there's mutable state in there
VOLUME [ "$TMHOME" ]


@@ -6,9 +6,9 @@ DockerHub tags for official releases are [here](https://hub.docker.com/r/tenderm
Official releases can be found [here](https://github.com/tendermint/tendermint/releases).
The Dockerfile for Tendermint is not expected to change in the near future. The main file used for all builds can be found [here](https://raw.githubusercontent.com/tendermint/tendermint/main/DOCKER/Dockerfile).
Respective versioned files can be found at `https://raw.githubusercontent.com/tendermint/tendermint/vX.XX.XX/DOCKER/Dockerfile` (replace the Xs with the version number).
## Quick reference
@@ -20,9 +20,9 @@ Respective versioned files can be found at `https://raw.githubusercontent.com/te
Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine, written in any programming language, and securely replicates it on many machines.
For more background, see [the docs](https://docs.tendermint.com/main/introduction/#quick-start).
To get started developing applications, see the [application developers guide](https://docs.tendermint.com/main/introduction/quick-start.html).
## How to use this image
@@ -32,12 +32,12 @@ A quick example of a built-in app and Tendermint core in one container.
```sh
docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint init
docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint node --proxy_app=kvstore
docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint node --proxy-app=kvstore
```
## Local cluster
To run a 4-node network, see the `Makefile` in the root of [the repo](https://github.com/tendermint/tendermint/blob/main/Makefile) and run:
```sh
make build-linux
@@ -49,8 +49,8 @@ Note that this will build and use a different image than the ones provided here.
## License
- Tendermint's license is [Apache 2.0](https://github.com/tendermint/tendermint/blob/main/LICENSE).
## Contributing
Contributions are most welcome! See the [contributing file](https://github.com/tendermint/tendermint/blob/main/CONTRIBUTING.md) for more information.


@@ -6,11 +6,11 @@ if [ ! -d "$TMHOME/config" ]; then
tendermint init
sed -i \
-e "s/^proxy_app\s*=.*/proxy_app = \"$PROXY_APP\"/" \
-e "s/^proxy-app\s*=.*/proxy-app = \"$PROXY_APP\"/" \
-e "s/^moniker\s*=.*/moniker = \"$MONIKER\"/" \
-e 's/^addr_book_strict\s*=.*/addr_book_strict = false/' \
-e 's/^timeout_commit\s*=.*/timeout_commit = "500ms"/' \
-e 's/^index_all_tags\s*=.*/index_all_tags = true/' \
-e 's/^addr-book-strict\s*=.*/addr-book-strict = false/' \
-e 's/^timeout-commit\s*=.*/timeout-commit = "500ms"/' \
-e 's/^index-all-tags\s*=.*/index-all-tags = true/' \
-e 's,^laddr = "tcp://127.0.0.1:26657",laddr = "tcp://0.0.0.0:26657",' \
-e 's/^prometheus\s*=.*/prometheus = true/' \
"$TMHOME/config/config.toml"


@@ -1,3 +1,5 @@
Tendermint Core
License: Apache2.0
Apache License
Version 2.0, January 2004
@@ -179,7 +181,7 @@
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
@@ -187,7 +189,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Copyright 2016 All in Bits, Inc
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

Makefile

@@ -1,13 +1,21 @@
#!/usr/bin/make -f
PACKAGES=$(shell go list ./...)
BUILDDIR?=$(CURDIR)/build
OUTPUT?=$(BUILDDIR)/tendermint
BUILDDIR ?= $(CURDIR)/build
BUILD_TAGS?=tendermint
COMMIT_HASH := $(shell git rev-parse --short HEAD)
LD_FLAGS = -X github.com/tendermint/tendermint/version.TMGitCommitHash=$(COMMIT_HASH)
# If building a release, please checkout the version tag to get the correct version setting
ifneq ($(shell git symbolic-ref -q --short HEAD),)
VERSION := unreleased-$(shell git symbolic-ref -q --short HEAD)-$(shell git rev-parse HEAD)
else
VERSION := $(shell git describe)
endif
LD_FLAGS = -X github.com/tendermint/tendermint/version.TMCoreSemVer=$(VERSION)
BUILD_FLAGS = -mod=readonly -ldflags "$(LD_FLAGS)"
HTTPS_GIT := https://github.com/tendermint/tendermint.git
DOCKER_BUF := docker run -v $(shell pwd):/workspace --workdir /workspace bufbuild/buf
CGO_ENABLED ?= 0
# handle nostrip
@@ -47,152 +55,65 @@ endif
# allow users to pass additional flags via the conventional LDFLAGS variable
LD_FLAGS += $(LDFLAGS)
# Process Docker environment variable TARGETPLATFORM
# in order to build binary with correspondent ARCH
# by default will always build for linux/amd64
TARGETPLATFORM ?=
GOOS ?= linux
GOARCH ?= amd64
GOARM ?=
ifeq (linux/arm,$(findstring linux/arm,$(TARGETPLATFORM)))
GOOS=linux
GOARCH=arm
GOARM=7
endif
ifeq (linux/arm/v6,$(findstring linux/arm/v6,$(TARGETPLATFORM)))
GOOS=linux
GOARCH=arm
GOARM=6
endif
ifeq (linux/arm64,$(findstring linux/arm64,$(TARGETPLATFORM)))
GOOS=linux
GOARCH=arm64
GOARM=7
endif
ifeq (linux/386,$(findstring linux/386,$(TARGETPLATFORM)))
GOOS=linux
GOARCH=386
endif
ifeq (linux/amd64,$(findstring linux/amd64,$(TARGETPLATFORM)))
GOOS=linux
GOARCH=amd64
endif
ifeq (linux/mips,$(findstring linux/mips,$(TARGETPLATFORM)))
GOOS=linux
GOARCH=mips
endif
ifeq (linux/mipsle,$(findstring linux/mipsle,$(TARGETPLATFORM)))
GOOS=linux
GOARCH=mipsle
endif
ifeq (linux/mips64,$(findstring linux/mips64,$(TARGETPLATFORM)))
GOOS=linux
GOARCH=mips64
endif
ifeq (linux/mips64le,$(findstring linux/mips64le,$(TARGETPLATFORM)))
GOOS=linux
GOARCH=mips64le
endif
ifeq (linux/riscv64,$(findstring linux/riscv64,$(TARGETPLATFORM)))
GOOS=linux
GOARCH=riscv64
endif
all: check build test install
.PHONY: all
include tests.mk
# The below include contains the tools.
include tools/Makefile
include test/Makefile
###############################################################################
### Build Tendermint ###
### Build Tendermint ###
###############################################################################
build:
CGO_ENABLED=$(CGO_ENABLED) go build $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o $(OUTPUT) ./cmd/tendermint/
build: $(BUILDDIR)/
CGO_ENABLED=$(CGO_ENABLED) go build $(BUILD_FLAGS) -tags '$(BUILD_TAGS)' -o $(BUILDDIR)/ ./cmd/tendermint/
.PHONY: build
install:
CGO_ENABLED=$(CGO_ENABLED) go install $(BUILD_FLAGS) -tags $(BUILD_TAGS) ./cmd/tendermint
.PHONY: install
###############################################################################
### Metrics ###
###############################################################################
metrics: testdata-metrics
go generate -run="scripts/metricsgen" ./...
.PHONY: metrics
# By convention, the go tool ignores subdirectories of directories named
# 'testdata'. This command invokes the generate command on the folder directly
# to avoid this.
testdata-metrics:
ls ./scripts/metricsgen/testdata | xargs -I{} go generate -v -run="scripts/metricsgen" ./scripts/metricsgen/testdata/{}
.PHONY: testdata-metrics
###############################################################################
### Mocks ###
###############################################################################
mockery:
go generate -run="./scripts/mockery_generate.sh" ./...
.PHONY: mockery
$(BUILDDIR)/:
mkdir -p $@
###############################################################################
### Protobuf ###
###############################################################################
check-proto-deps:
ifeq (,$(shell which protoc-gen-gogofaster))
@go install github.com/cosmos/gogoproto/protoc-gen-gogofaster@latest
endif
.PHONY: check-proto-deps
proto-all: proto-gen proto-lint proto-check-breaking
.PHONY: proto-all
check-proto-format-deps:
ifeq (,$(shell which clang-format))
$(error "clang-format is required for Protobuf formatting. See instructions for your platform on how to install it.")
endif
.PHONY: check-proto-format-deps
proto-gen: check-proto-deps
@echo "Generating Protobuf files"
@go run github.com/bufbuild/buf/cmd/buf generate
@mv ./proto/tendermint/abci/types.pb.go ./abci/types/
@cp ./proto/tendermint/rpc/grpc/types.pb.go ./rpc/grpc
proto-gen:
## If you get the following error,
## "error while loading shared libraries: libprotobuf.so.14: cannot open shared object file: No such file or directory"
## See https://stackoverflow.com/a/25518702
## Note the $< here is substituted for the %.proto
## Note the $@ here is substituted for the %.pb.go
@sh scripts/protocgen.sh
.PHONY: proto-gen
# These targets are provided for convenience and are intended for local
# execution only.
proto-lint: check-proto-deps
@echo "Linting Protobuf files"
@go run github.com/bufbuild/buf/cmd/buf lint
proto-gen-docker:
@docker pull -q tendermintdev/docker-build-proto
@echo "Generating Protobuf files"
@docker run -v $(shell pwd):/workspace --workdir /workspace tendermintdev/docker-build-proto sh ./scripts/protocgen.sh
.PHONY: proto-gen-docker
proto-lint:
@$(DOCKER_BUF) check lint --error-format=json
.PHONY: proto-lint
proto-format: check-proto-format-deps
proto-format:
@echo "Formatting Protobuf files"
@find . -name '*.proto' -path "./proto/*" -exec clang-format -i {} \;
docker run -v $(shell pwd):/workspace --workdir /workspace tendermintdev/docker-build-proto find ./ -not -path "./third_party/*" -name *.proto -exec clang-format -i {} \;
.PHONY: proto-format
proto-check-breaking: check-proto-deps
@echo "Checking for breaking changes in Protobuf files against local branch"
@echo "Note: This is only useful if your changes have not yet been committed."
@echo " Otherwise read up on buf's \"breaking\" command usage:"
@echo " https://docs.buf.build/breaking/usage"
@go run github.com/bufbuild/buf/cmd/buf breaking --against ".git"
proto-check-breaking:
@$(DOCKER_BUF) check breaking --against-input .git#branch=master
.PHONY: proto-check-breaking
proto-check-breaking-ci:
@go run github.com/bufbuild/buf/cmd/buf breaking --against $(HTTPS_GIT)#branch=v0.34.x
@$(DOCKER_BUF) check breaking --against-input $(HTTPS_GIT)#branch=master
.PHONY: proto-check-breaking-ci
###############################################################################
@@ -207,6 +128,27 @@ install_abci:
@go install -mod=readonly ./abci/cmd/...
.PHONY: install_abci
###############################################################################
### Privval Server ###
###############################################################################
build_privval_server:
@go build -mod=readonly -o $(BUILDDIR)/ -i ./cmd/priv_val_server/...
.PHONY: build_privval_server
generate_test_cert:
# generate self-signing certificate authority
@certstrap init --common-name "root CA" --expires "20 years"
# generate server certificate
@certstrap request-cert -cn server -ip 127.0.0.1
# self-sign server certificate with rootCA
@certstrap sign server --CA "root CA"
# generate client certificate
@certstrap request-cert -cn client -ip 127.0.0.1
# self-sign client certificate with rootCA
@certstrap sign client --CA "root CA"
.PHONY: generate_test_cert
###############################################################################
### Distribution ###
###############################################################################
@@ -235,7 +177,7 @@ draw_deps:
get_deps_bin_size:
@# Copy of build recipe with additional flags to perform binary size analysis
$(eval $(shell go build -work -a $(BUILD_FLAGS) -tags $(BUILD_TAGS) -o $(OUTPUT) ./cmd/tendermint/ 2>&1))
$(eval $(shell go build -work -a $(BUILD_FLAGS) -tags $(BUILD_TAGS) -o $(BUILDDIR)/ ./cmd/tendermint/ 2>&1))
@find $(WORK) -type f -name "*.a" | xargs -I{} du -hxs "{}" | sort -rh | sed -e s:${WORK}/::g > deps_bin_size.log
@echo "Results can be found here: $(CURDIR)/deps_bin_size.log"
.PHONY: get_deps_bin_size
@@ -271,7 +213,7 @@ format:
lint:
@echo "--> Running linter"
@go run github.com/golangci/golangci-lint/cmd/golangci-lint run
@golangci-lint run
.PHONY: lint
DESTINATION = ./index.html.md
@@ -279,53 +221,23 @@ DESTINATION = ./index.html.md
###############################################################################
### Documentation ###
###############################################################################
DOCS_OUTPUT?=/tmp/tendermint-core-docs
# This builds a docs site for each branch/tag in `./docs/versions` and copies
# each site to a version prefixed path. The last entry inside the `versions`
# file will be the default root index.html. Only redirects that are built into
# the "redirects" folder of each of the branches will be copied out to the root
# of the build at the end.
# todo remove once tendermint.com DNS is solved
build-docs:
@cd docs && \
while read -r branch path_prefix; do \
(git checkout $${branch} && npm ci && VUEPRESS_BASE="/$${path_prefix}/" npm run build) ; \
mkdir -p $(DOCS_OUTPUT)/$${path_prefix} ; \
cp -r .vuepress/dist/* $(DOCS_OUTPUT)/$${path_prefix}/ ; \
cp $(DOCS_OUTPUT)/$${path_prefix}/index.html $(DOCS_OUTPUT) ; \
cp $(DOCS_OUTPUT)/$${path_prefix}/404.html $(DOCS_OUTPUT) ; \
cp -r $(DOCS_OUTPUT)/$${path_prefix}/redirects/* $(DOCS_OUTPUT) || true ; \
(git checkout $${branch} && npm install && VUEPRESS_BASE="/$${path_prefix}/" npm run build) ; \
mkdir -p ~/output/$${path_prefix} ; \
cp -r .vuepress/dist/* ~/output/$${path_prefix}/ ; \
cp ~/output/$${path_prefix}/index.html ~/output ; \
done < versions ;
.PHONY: build-docs
# Build and serve the local version of the docs on the current branch from
# http://0.0.0.0:8080
serve-docs:
@cd docs && \
npm ci && \
npm run serve
.PHONY: serve-docs
sync-docs:
cd ~/output && \
echo "role_arn = ${DEPLOYMENT_ROLE_ARN}" >> /root/.aws/config ; \
echo "CI job = ${CIRCLE_BUILD_URL}" >> version.html ; \
aws s3 sync . s3://${WEBSITE_BUCKET} --profile terraform --delete ; \
aws cloudfront create-invalidation --distribution-id ${CF_DISTRIBUTION_ID} --profile terraform --path "/*" ;
.PHONY: sync-docs
# Verify that important design docs have ToC entries.
check-docs-toc:
@./docs/presubmit.sh
.PHONY: check-docs-toc
###############################################################################
### Docker image ###
###############################################################################
build-docker: build-linux
cp $(OUTPUT) DOCKER/tendermint
cp $(BUILDDIR)/tendermint DOCKER/tendermint
docker build --label=tendermint --tag="tendermint/tendermint" DOCKER
rm -rf DOCKER/tendermint
.PHONY: build-docker
@@ -335,8 +247,8 @@ build-docker: build-linux
###############################################################################
# Build linux binary on other platforms
build-linux:
GOOS=$(GOOS) GOARCH=$(GOARCH) GOARM=$(GOARM) $(MAKE) build
build-linux: tools
GOOS=linux GOARCH=amd64 $(MAKE) build
.PHONY: build-linux
build-docker-localnode:
@@ -380,24 +292,16 @@ contract-tests:
dredd
.PHONY: contract-tests
# Implements test splitting and running. This is pulled directly from
# the github action workflows for better local reproducibility.
clean:
rm -rf $(CURDIR)/artifacts/ $(BUILDDIR)/
GO_TEST_FILES != find $(CURDIR) -name "*_test.go"
# default to four splits by default
NUM_SPLIT ?= 4
$(BUILDDIR):
mkdir -p $@
# The format statement filters out all packages that don't have tests.
# Note we need to check for both in-package tests (.TestGoFiles) and
# out-of-package tests (.XTestGoFiles).
$(BUILDDIR)/packages.txt:$(GO_TEST_FILES) $(BUILDDIR)
go list -f "{{ if (or .TestGoFiles .XTestGoFiles) }}{{ .ImportPath }}{{ end }}" ./... | sort > $@
split-test-packages:$(BUILDDIR)/packages.txt
split -d -n l/$(NUM_SPLIT) $< $<.
test-group-%:split-test-packages
cat $(BUILDDIR)/packages.txt.$* | xargs go test -mod=readonly -timeout=15m -race -coverprofile=$(BUILDDIR)/$*.profile.out
build-reproducible:
docker rm latest-build || true
docker run --volume=$(CURDIR):/sources:ro \
--env TARGET_PLATFORMS='linux/amd64 linux/arm64 darwin/amd64 windows/amd64' \
--env APP=tendermint \
--env COMMIT=$(shell git rev-parse --short=8 HEAD) \
--env VERSION=$(shell git describe --tags) \
--name latest-build cosmossdk/rbuilder:latest
docker cp -a latest-build:/home/builder/artifacts/ $(CURDIR)/
.PHONY: build-reproducible


@@ -1,158 +0,0 @@
# Design goals
The design goals for Tendermint (and the SDK and related libraries) are:
* Simplicity and Legibility
* Parallel performance, namely ability to utilize multicore architecture
* Ability to evolve the codebase bug-free
* Debuggability
* Complete correctness that considers all edge cases, esp in concurrency
* Future-proof modular architecture, message protocol, APIs, and encapsulation
## Justification
Legibility is key to maintaining bug-free software as it evolves toward more
optimizations, more ease of debugging, and additional features.
It is too easy to introduce bugs over time by replacing lines of code with
ones that may panic, which is why locks should ideally be unlocked by defer
statements.
For example,
```go
func (obj *MyObj) setSomething(other Something) {
	obj.mtx.Lock()
	obj.something = other
	obj.mtx.Unlock()
}
```
It is too easy to refactor the codebase in the future to replace `other` with
`other.String()` for example, and this may introduce a bug that causes a
deadlock. So as much as reasonably possible, we need to be using defer
statements, even though it introduces additional overhead.
If it is necessary to optimize the unlocking of mutex locks, the solution is
more modularity via smaller functions, so that defer'd unlocks are scoped
within a smaller function.
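As a small sketch building on the example above (the types are still hypothetical), the defer-based version keeps the unlock tied to the scope of a small setter:
```go
package main

import "sync"

// Something stands in for whatever value the schematic example assigns.
type Something struct{ s string }

func (v Something) String() string { return v.s }

// MyObj mirrors the schematic example above: a struct-level mutex guards its fields.
type MyObj struct {
	mtx       sync.Mutex
	something Something
}

// setSomething defers the unlock, so the mutex is released even if a later
// refactor introduces a call into the critical section that can panic.
func (obj *MyObj) setSomething(other Something) {
	obj.mtx.Lock()
	defer obj.mtx.Unlock()
	obj.something = other
}

func main() {
	obj := new(MyObj)
	obj.setSomething(Something{s: "other"})
}
```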
Similarly, idiomatic for-loops should always be preferred over those that use
custom counters, because it is too easy to evolve the body of a for-loop to
become more complicated over time, and it becomes more and more difficult to
assess the correctness of such a for-loop by visual inspection.
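A tiny illustrative sketch (the slice and the sums are made up) of why the idiomatic form stays easier to verify:
```go
package main

import "fmt"

func main() {
	xs := []int{3, 1, 4, 1, 5}

	// Idiomatic: the range clause owns the iteration state, so the body can
	// grow without risking a skipped or double increment.
	sum := 0
	for _, x := range xs {
		sum += x
	}

	// Harder to audit: a custom counter that every branch of the body must
	// remember to keep correct.
	i, total := 0, 0
	for i < len(xs) {
		total += xs[i]
		i++
	}

	fmt.Println(sum, total)
}
```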
## On performance
It doesn't matter whether there are alternative implementations that are 2x or
3x more performant, when the software doesn't work, deadlocks, or if bugs
cannot be debugged. By taking advantage of multicore concurrency, the
Tendermint implementation will be within an order of magnitude of what is
theoretically possible. The design philosophy of Tendermint, and the choice of
Go as the implementation language, are intended to make the Tendermint
implementation the standard specification for concurrent BFT software.
By focusing on the message protocols (e.g. ABCI, p2p messages) and on
encapsulation (e.g. the IAVL module and (relatively) independent reactors), we are both
implementing a standard implementation to be used as the specification for
future implementations in more optimizable languages like Rust, Java, and C++;
as well as creating sufficiently performant software. Tendermint Core will
never be as fast as future implementations of the Tendermint Spec, because Go
isn't designed to be as fast as possible. The advantage of using Go is that we
can develop the whole stack of modular components **faster** than in other
languages.
Furthermore, the real bottleneck is in the application layer, and it isn't
necessary to support more than a sufficiently decentralized set of validators
(e.g. 100 ~ 300 validators is sufficient, with delegated bonded PoS).
Instead of optimizing Tendermint performance down to the metal, let's focus on
other matters, namely the ability to ship feature-complete software
that works well enough, can be debugged and maintained, and can serve as a spec
for future implementations.
## On encapsulation
In order to create maintainable, forward-optimizable software, it is critical
to develop well-encapsulated objects that have well understood properties, and
to re-use these easy-to-use-correctly components as building blocks for further
encapsulated meta-objects.
For example, mutexes are cheap enough for Tendermint's design goals when there
isn't goroutine contention, so it is encouraged to create concurrency safe
structures with struct-level mutexes. If they are used in the context of
non-concurrent logic, then the performance is good enough. If they are used in
the context of concurrent logic, then it will still perform correctly.
Examples of this design principle can be seen in the types.ValidatorSet struct,
and the rand.Rand struct. It's one single struct declaration that can be used
in both concurrent and non-concurrent logic, and due to its good encapsulation,
it's easy to get the usage of the mutex right.
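A minimal sketch of such a struct-level mutex (a hypothetical `Counter`, not an actual type from the codebase); one declaration serves both concurrent and non-concurrent callers:
```go
package example

import "sync"

// Counter is safe to use from multiple goroutines, and costs little when used
// from only one.
type Counter struct {
	mtx sync.Mutex
	n   int
}

func (c *Counter) Inc() {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	c.n++
}

func (c *Counter) Value() int {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	return c.n
}
```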
### example: rand.Rand
`The default Source is safe for concurrent use by multiple goroutines, but
Sources created by NewSource are not`. The reason why the default
package-level source is safe for concurrent use is because it is protected (see
`lockedSource` in <https://golang.org/src/math/rand/rand.go>).
But we shouldn't rely on the global source; we should create our own
Rand/Source instances and using them, especially for determinism in testing.
So it is reasonable to have rand.Rand be protected by a mutex. Whether we want
our own implementation of Rand is another question, but the answer there is
also in the affirmative. Sometimes you want to know where Rand is being used
in your code, so it becomes a simple matter of dropping in a log statement to
inject inspectability into Rand usage. Also, it is nice to be able to extend
the functionality of Rand with custom methods. For these reasons, and for the
reasons outlined in this design philosophy document, we should
continue to use the rand.Rand object, with mutex protection.
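A minimal sketch of the idea (this is not the actual implementation in the repo's rand package): an owned source guarded by a struct-level mutex, so it is safe under concurrency and can still be seeded deterministically in tests.
```go
package example

import (
	"math/rand"
	"sync"
)

// Rand wraps an owned math/rand source behind a mutex.
type Rand struct {
	mtx sync.Mutex
	src *rand.Rand
}

func NewRand(seed int64) *Rand {
	return &Rand{src: rand.New(rand.NewSource(seed))}
}

// Int63 is safe for concurrent use; logging or other instrumentation can be
// added here to inject inspectability into every call site.
func (r *Rand) Int63() int64 {
	r.mtx.Lock()
	defer r.mtx.Unlock()
	return r.src.Int63()
}
```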
Another key aspect of good encapsulation is the choice of exposed vs unexposed
methods. It should be clear to the reader of the code which methods are
intended to be used in what context, and what safe usage is. Part of this is
solved by keeping such methods unexported. Another part of this is
naming conventions on the methods (e.g. underscores) with good documentation,
and code organization. If there are too many exposed methods and it isn't
clear what methods have what side effects, then there is something wrong about
the design of abstractions that should be revisited.
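One common way to make the intended usage visible (a hypothetical example, not from the codebase) is to pair an exported method that takes the lock with an unexported helper that assumes the lock is held:
```go
package example

import "sync"

type store struct {
	mtx  sync.Mutex
	data map[string]string
}

// Get is the exported, safe entry point: it takes the lock and delegates.
func (s *store) Get(key string) string {
	s.mtx.Lock()
	defer s.mtx.Unlock()
	return s.get(key)
}

// get assumes s.mtx is already held; keeping it unexported signals to readers
// that it is not part of the safe public surface.
func (s *store) get(key string) string {
	return s.data[key]
}
```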
## On concurrency
In order for Tendermint to remain relevant in the years to come, it is vital
for Tendermint to take advantage of multicore architectures. Due to the nature
of the problem, namely consensus across a concurrent p2p gossip network, and to
handle RPC requests for a large number of consuming subscribers, it is
unavoidable for Tendermint development to require expertise in concurrency
design, especially when it comes to the reactor design, and also for RPC
request handling.
# Guidelines
Here are some guidelines for designing for (sufficient) performance and concurrency:
* Mutex locks are cheap enough when there isn't contention.
* Do not optimize code without analytical or observed proof that it is in a hot path.
* Don't over-use channels when mutex locks w/ encapsulation are sufficient.
* The need to drain channels is often a hint of unconsidered edge cases.
* The creation of O(N) one-off goroutines is generally technical debt that
needs to be addressed sooner rather than later. Avoid creating many
goroutines as a patch around incomplete concurrency design, or at least be
aware of the debt and do not add to it. On the other hand, Tendermint
is designed to have a limited number of peers (e.g. 10 or 20), so the creation
of O(C) goroutines per O(P) peers is still O(C\*P) = constant (see the sketch
after this list).
* Use defer statements to unlock as much as possible. If you want to unlock sooner,
try to create more modular functions that make use of defer statements.
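The goroutine guideline above can be illustrated with a bounded worker pool (a hypothetical helper, not from the codebase): a fixed number of workers drain a queue instead of one goroutine being spawned per item.
```go
package example

import "sync"

// processAll runs handle over items using a fixed number of workers (O(C))
// rather than spawning one goroutine per item (O(N)). workers must be >= 1.
func processAll(items []string, workers int, handle func(string)) {
	ch := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for item := range ch {
				handle(item)
			}
		}()
	}
	for _, item := range items {
		ch <- item
	}
	close(ch)
	wg.Wait()
}
```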
# Mantras
* Premature optimization kills
* Readability is paramount
* Beautiful is better than fast.
* In the face of ambiguity, refuse the temptation to guess.
* In the face of bugs, refuse the temptation to cover the bug.
* There should be one-- and preferably only one --obvious way to do it.

README.md
View File

@@ -2,138 +2,155 @@
![banner](docs/tendermint-core-image.jpg)
[Byzantine-Fault Tolerant][bft] [State Machine Replication][smr]. Or
[Blockchain], for short.
[Byzantine-Fault Tolerant](https://en.wikipedia.org/wiki/Byzantine_fault_tolerance)
[State Machines](https://en.wikipedia.org/wiki/State_machine_replication).
Or [Blockchain](<https://en.wikipedia.org/wiki/Blockchain_(database)>), for short.
[![Version][version-badge]][version-url]
[![API Reference][api-badge]][api-url]
[![Go version][go-badge]][go-url]
[![Discord chat][discord-badge]][discord-url]
[![License][license-badge]][license-url]
[![Sourcegraph][sg-badge]][sg-url]
[![version](https://img.shields.io/github/tag/tendermint/tendermint.svg)](https://github.com/tendermint/tendermint/releases/latest)
[![API Reference](https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667)](https://pkg.go.dev/github.com/tendermint/tendermint)
[![Go version](https://img.shields.io/badge/go-1.15-blue.svg)](https://github.com/moovweb/gvm)
[![Discord chat](https://img.shields.io/discord/669268347736686612.svg)](https://discord.gg/vcExX9T)
[![license](https://img.shields.io/github/license/tendermint/tendermint.svg)](https://github.com/tendermint/tendermint/blob/master/LICENSE)
[![tendermint/tendermint](https://tokei.rs/b1/github/tendermint/tendermint?category=lines)](https://github.com/tendermint/tendermint)
[![Sourcegraph](https://sourcegraph.com/github.com/tendermint/tendermint/-/badge.svg)](https://sourcegraph.com/github.com/tendermint/tendermint?badge)
| Branch | Tests | Linting |
|--------|------------------------------------|---------------------------------|
| main | [![Tests][tests-badge]][tests-url] | [![Lint][lint-badge]][lint-url] |
| Branch | Tests | Coverage | Linting |
|--------|--------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------|
| master | ![Tests](https://github.com/tendermint/tendermint/workflows/Tests/badge.svg?branch=master) | [![codecov](https://codecov.io/gh/tendermint/tendermint/branch/master/graph/badge.svg)](https://codecov.io/gh/tendermint/tendermint) | ![Lint](https://github.com/tendermint/tendermint/workflows/Lint/badge.svg) |
Tendermint Core is a Byzantine Fault Tolerant (BFT) middleware that takes a
state transition machine - written in any programming language - and securely
replicates it on many machines.
Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine - written in any programming language -
and securely replicates it on many machines.
For protocol details, refer to the [Tendermint Specification](./spec/README.md).
For protocol details, see [the specification](https://github.com/tendermint/spec).
For detailed analysis of the consensus protocol, including safety and liveness
proofs, read our paper, "[The latest gossip on BFT
consensus](https://arxiv.org/abs/1807.04938)".
## Documentation
Complete documentation can be found on the
[website](https://docs.tendermint.com/).
For detailed analysis of the consensus protocol, including safety and liveness proofs,
see our recent paper, "[The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938)".
## Releases
Please do not depend on `main` as your production branch. Use
[releases](https://github.com/tendermint/tendermint/releases) instead.
Please do not depend on master as your production branch. Use [releases](https://github.com/tendermint/tendermint/releases) instead.
Tendermint has been in production in private and public environments, most
notably in the blockchains of the Cosmos Network. We haven't released v1.0 yet
since we are still making breaking changes to the protocol and the APIs. See below for
more details about [versioning](#versioning).
Tendermint is being used in production in both private and public environments,
most notably the blockchains of the [Cosmos Network](https://cosmos.network/).
However, we are still making breaking changes to the protocol and the APIs and have not yet released v1.0.
See below for more details about [versioning](#versioning).
In any case, if you intend to run Tendermint in production, we're happy to help.
You can contact us [over email](mailto:hello@interchain.io) or [join the
chat](https://discord.gg/cosmosnetwork).
More on how releases are conducted can be found [here](./RELEASES.md).
In any case, if you intend to run Tendermint in production, we're happy to help. You can
contact us [over email](mailto:hello@interchain.berlin) or [join the chat](https://discord.gg/vcExX9T).
## Security
To report a security vulnerability, see our [bug bounty
program](https://hackerone.com/cosmos). For examples of the kinds of bugs we're
looking for, see [our security policy](SECURITY.md).
program](https://hackerone.com/tendermint).
For examples of the kinds of bugs we're looking for, see [our security policy](SECURITY.md).
We also maintain a dedicated mailing list for security updates. We will only
ever use this mailing list to notify you of vulnerabilities and fixes in
Tendermint Core. You can subscribe [here](http://eepurl.com/gZ5hQD).
We also maintain a dedicated mailing list for security updates. We will only ever use this mailing list
to notify you of vulnerabilities and fixes in Tendermint Core. You can subscribe [here](http://eepurl.com/gZ5hQD).
## Minimum requirements
| Requirement | Notes |
|-------------|-------------------|
| Go version | Go 1.18 or higher |
| Requirement | Notes |
|-------------|------------------|
| Go version | Go1.15 or higher |
## Documentation
Complete documentation can be found on the [website](https://docs.tendermint.com/master/).
### Install
See the [install instructions](./docs/introduction/install.md).
See the [install instructions](/docs/introduction/install.md).
### Quick Start
- [Single node](./docs/introduction/quick-start.md)
- [Local cluster using docker-compose](./docs/networks/docker-compose.md)
- [Remote cluster using Terraform and Ansible](./docs/networks/terraform-and-ansible.md)
- [Single node](/docs/introduction/quick-start.md)
- [Local cluster using docker-compose](/docs/networks/docker-compose.md)
- [Remote cluster using Terraform and Ansible](/docs/networks/terraform-and-ansible.md)
- [Join the Cosmos testnet](https://cosmos.network/testnet)
## Contributing
Please abide by the [Code of Conduct](CODE_OF_CONDUCT.md) in all interactions.
Before contributing to the project, please take a look at the [contributing
guidelines](CONTRIBUTING.md) and the [style guide](STYLE_GUIDE.md). You may also
find it helpful to read the [specifications](./spec/README.md), and familiarize
yourself with our [Architectural Decision Records
(ADRs)](./docs/architecture/README.md) and
[Request For Comments (RFCs)](./docs/rfc/README.md).
Before contributing to the project, please take a look at the [contributing guidelines](CONTRIBUTING.md)
and the [style guide](STYLE_GUIDE.md). You may also find it helpful to read the
[specifications](https://github.com/tendermint/spec), watch the [Developer Sessions](/docs/DEV_SESSIONS.md),
and familiarize yourself with our
[Architectural Decision Records](https://github.com/tendermint/tendermint/tree/master/docs/architecture).
## Versioning
### Semantic Versioning
Tendermint uses [Semantic Versioning](http://semver.org/) to determine when and
how the version changes. According to SemVer, anything in the public API can
change at any time before version 1.0.0
Tendermint uses [Semantic Versioning](http://semver.org/) to determine when and how the version changes.
According to SemVer, anything in the public API can change at any time before version 1.0.0
To provide some stability to users of 0.X.X versions of Tendermint, the MINOR
version is used to signal breaking changes across Tendermint's API. This API
includes all publicly exposed types, functions, and methods in non-internal Go
packages as well as the types and methods accessible via the Tendermint RPC
interface.
To provide some stability to Tendermint users in these 0.X.X days, the MINOR version is used
to signal breaking changes across a subset of the total public API. This subset includes all
interfaces exposed to other processes (cli, rpc, p2p, etc.), but does not
include the Go APIs.
Breaking changes to these public APIs will be documented in the CHANGELOG.
That said, breaking changes in the following packages will be documented in the
CHANGELOG even if they don't lead to MINOR version bumps:
- crypto
- config
- libs
- bits
- bytes
- json
- log
- math
- net
- os
- protoio
- rand
- sync
- strings
- service
- node
- rpc/client
- types
### Upgrades
In an effort to avoid accumulating technical debt prior to 1.0.0, we do not
guarantee that breaking changes (ie. bumps in the MINOR version) will work with
existing Tendermint blockchains. In these cases you will have to start a new
blockchain, or write something custom to get the old data into the new chain.
However, any bump in the PATCH version should be compatible with existing
blockchain histories.
In an effort to avoid accumulating technical debt prior to 1.0.0,
we do not guarantee that breaking changes (ie. bumps in the MINOR version)
will work with existing Tendermint blockchains. In these cases you will
have to start a new blockchain, or write something custom to get the old
data into the new chain. However, any bump in the PATCH version should be
compatible with existing blockchain histories.
For more information on upgrading, see [UPGRADING.md](./UPGRADING.md).
### Supported Versions
Because we are a small core team, we only ship patch updates, including security
updates, to the most recent minor release and the second-most recent minor
release. Consequently, we strongly recommend keeping Tendermint up-to-date.
Upgrading instructions can be found in [UPGRADING.md](./UPGRADING.md).
Because we are a small core team, we only ship patch updates, including security updates,
to the most recent minor release and the second-most recent minor release. Consequently,
we strongly recommend keeping Tendermint up-to-date. Upgrading instructions can be found
in [UPGRADING.md](./UPGRADING.md).
## Resources
### Libraries
### Tendermint Core
- [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a framework for building
applications in Go
- [Tendermint in Rust](https://github.com/informalsystems/tendermint-rs)
- [ABCI Tower](https://github.com/penumbra-zone/tower-abci)
For details about the blockchain data structures and the p2p protocols, see the
[Tendermint specification](https://docs.tendermint.com/master/spec/).
For details on using the software, see the [documentation](/docs/) which is also
hosted at: <https://docs.tendermint.com/master/>
### Tools
Benchmarking is provided by [`tm-load-test`](https://github.com/informalsystems/tm-load-test).
Additional tooling can be found in [/docs/tools](/docs/tools).
### Applications
- [Cosmos Hub](https://hub.cosmos.network/)
- [Terra](https://www.terra.money/)
- [Celestia](https://celestia.org/)
- [Anoma](https://anoma.network/)
- [Vocdoni](https://docs.vocdoni.io/)
- [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a cryptocurrency application framework
- [Ethermint](http://github.com/cosmos/ethermint); Ethereum on Tendermint
- [Many more](https://tendermint.com/ecosystem)
### Research
@@ -145,31 +162,9 @@ Upgrading instructions can be found in [UPGRADING.md](./UPGRADING.md).
## Join us!
Tendermint Core is maintained by [Interchain GmbH](https://interchain.io).
If you'd like to work full-time on Tendermint Core,
[we're hiring](https://interchain-gmbh.breezy.hr/)!
Tendermint Core is maintained by [Interchain GmbH](https://interchain.berlin).
If you'd like to work full-time on Tendermint Core, [we're hiring](https://interchain-gmbh.breezy.hr/p/682fb7e8a6f601-software-engineer-tendermint-core)!
Funding for Tendermint Core development comes primarily from the
[Interchain Foundation](https://interchain.io), a Swiss non-profit. The
Tendermint trademark is owned by [Tendermint Inc.](https://tendermint.com), the
for-profit entity that also maintains [tendermint.com](https://tendermint.com).
[bft]: https://en.wikipedia.org/wiki/Byzantine_fault_tolerance
[smr]: https://en.wikipedia.org/wiki/State_machine_replication
[Blockchain]: https://en.wikipedia.org/wiki/Blockchain
[version-badge]: https://img.shields.io/github/v/release/tendermint/tendermint.svg
[version-url]: https://github.com/tendermint/tendermint/releases/latest
[api-badge]: https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667
[api-url]: https://pkg.go.dev/github.com/tendermint/tendermint
[go-badge]: https://img.shields.io/badge/go-1.18-blue.svg
[go-url]: https://github.com/moovweb/gvm
[discord-badge]: https://img.shields.io/discord/669268347736686612.svg
[discord-url]: https://discord.gg/cosmosnetwork
[license-badge]: https://img.shields.io/github/license/tendermint/tendermint.svg
[license-url]: https://github.com/tendermint/tendermint/blob/main/LICENSE
[sg-badge]: https://sourcegraph.com/github.com/tendermint/tendermint/-/badge.svg
[sg-url]: https://sourcegraph.com/github.com/tendermint/tendermint?badge
[tests-url]: https://github.com/tendermint/tendermint/actions/workflows/tests.yml
[tests-badge]: https://github.com/tendermint/tendermint/actions/workflows/tests.yml/badge.svg?branch=main
[lint-badge]: https://github.com/tendermint/tendermint/actions/workflows/lint.yml/badge.svg
[lint-url]: https://github.com/tendermint/tendermint/actions/workflows/lint.yml
Funding for Tendermint Core development comes primarily from the [Interchain Foundation](https://interchain.io),
a Swiss non-profit. The Tendermint trademark is owned by [Tendermint Inc.](https://tendermint.com), the for-profit entity
that also maintains [tendermint.com](https://tendermint.com).

View File

@@ -1,371 +0,0 @@
# Releases
Tendermint uses modified [semantic versioning](https://semver.org/) with each
release following a `vX.Y.Z` format. Tendermint is currently on major version 0
and uses the minor version to signal breaking changes. The `main` branch is
used for active development and thus it is not advisable to build against it.
The latest changes are always initially merged into `main`. Releases are
specified using tags and are built from long-lived "backport" branches that are
cut from `main` when the release process begins. Each release "line" (e.g.
0.34 or 0.33) has its own long-lived backport branch, and the backport branches
have names like `v0.34.x` or `v0.33.x` (literally, `x`; it is not a placeholder
in this case). Tendermint only maintains the last two releases at a time (the
oldest release is predominantly just security patches).
## Backporting
As non-breaking changes land on `main`, they should also be backported to
these backport branches.
We use Mergify's [backport feature](https://mergify.io/features/backports) to
automatically backport to the needed branch. There should be a label for any
backport branch that you'll be targeting. To notify the bot to backport a pull
request, mark the pull request with the label corresponding to the correct
backport branch. For example, to backport to v0.38.x, add the label
`S:backport-to-v0.38.x`. Once the original pull request is merged, the bot will
try to cherry-pick the pull request to the backport branch. If the bot fails to
backport, it will open a pull request. The author of the original pull request
is responsible for solving the conflicts and merging the pull request.
### Creating a backport branch
If this is the first release candidate for a minor version release, e.g.
v0.25.0, you get to have the honor of creating the backport branch!
Note that, after creating the backport branch, you'll also need to update the
tags on `main` so that `go mod` is able to order the branches correctly. You
should tag `main` with a "dev" tag that is "greater than" the backport
branches tags. See [#6072](https://github.com/tendermint/tendermint/pull/6072)
for more context.
In the following example, we'll assume that we're making a backport branch for
the 0.38.x line.
1. Start on `main`
2. Ensure that there is a [branch protection
rule](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/managing-a-branch-protection-rule) for the
branch you are about to create (you will need admin access to the repository
in order to do this).
3. Create and push the backport branch:
```sh
git checkout -b v0.38.x
git push origin v0.38.x
```
4. Create a PR to update the documentation directory for the backport branch.
We rewrite any URLs pointing to `main` to point to the backport branch,
so that generated documentation will link to the correct versions of files
elsewhere in the repository. The following files are to be excluded from this
search:
* [`README.md`](./README.md)
* [`CHANGELOG.md`](./CHANGELOG.md)
* [`UPGRADING.md`](./UPGRADING.md)
The following links are to always point to `main`, regardless of where they
occur in the codebase:
* `https://github.com/tendermint/tendermint/blob/main/LICENSE`
Be sure to search for all of the following links and replace `main` with your
corresponding branch label or version (e.g. `v0.38.x` or `v0.38`):
* `github.com/tendermint/tendermint/blob/main` ->
`github.com/tendermint/tendermint/blob/v0.38.x`
* `github.com/tendermint/tendermint/tree/main` ->
`github.com/tendermint/tendermint/tree/v0.38.x`
* `docs.tendermint.com/main` -> `docs.tendermint.com/v0.38`
Once you have updated all of the relevant documentation:
```sh
# Create and push the PR.
git checkout -b update-docs-v038x
git add -A  # stage the documentation updates made above
git commit -m "Update docs for v0.38.x backport branch."
git push -u origin update-docs-v038x
```
Be sure to merge this PR before making other changes on the newly-created
backport branch.
After doing these steps, go back to `main` and do the following:
1. Create a new workflow to run e2e nightlies for the new backport branch. (See
[e2e-nightly-main.yml][e2e] for an example.)
2. Add a new section to the Mergify config (`.github/mergify.yml`) to enable the
backport bot to work on this branch, and add a corresponding `S:backport-to-v0.38.x`
[label](https://github.com/tendermint/tendermint/labels) so the bot can be triggered.
3. Add a new section to the Dependabot config (`.github/dependabot.yml`) to
enable automatic update of Go dependencies on this branch. Copy and edit one
of the existing branch configurations to set the correct `target-branch`.
[e2e]: https://github.com/tendermint/tendermint/blob/main/.github/workflows/e2e-nightly-main.yml
## Pre-releases
Before creating an official release, especially a minor release, we may want to
create an alpha or beta version, or release candidate (RC) for our friends and
partners to test out. We use git tags to create pre-releases, and we build them
off of backport branches, for example:
- `v0.38.0-alpha.1` - The first alpha release of `v0.38.0`. Subsequent alpha
releases will be numbered `v0.38.0-alpha.2`, `v0.38.0-alpha.3`, etc.
Alpha releases are to be considered the _most_ unstable of pre-releases, and
are most likely not yet properly QA'd. These are made available to allow early
adopters to start integrating and testing new functionality before we're done
with QA.
- `v0.38.0-beta.1` - The first beta release of `v0.38.0`. Subsequent beta
releases will be numbered `v0.38.0-beta.2`, `v0.38.0-beta.3`, etc.
Beta releases can be considered more stable than alpha releases in that we
will have QA'd them better than alpha releases, but there still may be
minor breaking API changes if users have strong demands for such changes.
- `v0.38.0-rc1` - The first release candidate (RC) of `v0.38.0`. Subsequent RCs
will be numbered `v0.38.0-rc2`, `v0.38.0-rc3`, etc.
RCs are considered more stable than beta releases in that we will have
completed our QA on them. APIs will most likely be stable at this point. The
difference between an RC and a release is that there may still be small
changes (bug fixes, features) that may make their way into the series before
cutting a final release.
(Note that branches and tags _cannot_ have the same names, so it's important
that these branches have distinct names from the tags/release names.)
If this is the first pre-release for a minor release, you'll have to make a new
backport branch (see above). Otherwise:
1. Start from the backport branch (e.g. `v0.38.x`).
2. Run the integration tests and the E2E nightlies
(which can be triggered from the GitHub UI;
e.g., https://github.com/tendermint/tendermint/actions/workflows/e2e-nightly-37x.yml).
3. Prepare the pre-release documentation:
- Ensure that all relevant changes are in the `CHANGELOG_PENDING.md` file.
This file's contents must only be included in the `CHANGELOG.md` when we
cut final releases.
- Ensure that `UPGRADING.md` is up-to-date and includes notes on any breaking changes
or other upgrading flows.
4. Prepare the versioning:
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary.
Check the changelog for breaking changes in these components.
- Bump ABCI protocol version in `version.go`, if necessary
5. Open a PR with these changes against the backport branch.
6. Once these changes have landed on the backport branch, be sure to pull them back down locally.
7. Once you have the changes locally, create the new tag, specifying a name and a tag "message":
`git tag -a v0.38.0-rc1 -m "Release Candidate v0.38.0-rc1"`
8. Push the tag back up to origin:
`git push origin v0.38.0-rc1`
Now the tag should be available on the repo's releases page.
9. Future pre-releases will continue to be built off of this branch.
## Minor release
This minor release process assumes that this release was preceded by release
candidates. If there were no release candidates, begin by creating a backport
branch, as described above.
Before performing these steps, be sure the
[Minor Release Checklist](#minor-release-checklist) has been completed.
1. Start on the backport branch (e.g. `v0.38.x`)
2. Run integration tests (`make test_integrations`) and the e2e nightlies.
3. Prepare the release:
- "Squash" changes from the changelog entries for the pre-releases into a
single entry, and add all changes included in `CHANGELOG_PENDING.md`.
(Squashing includes both combining all entries, as well as removing or
simplifying any intra-pre-release changes. It may also help to alphabetize
the entries by package name.)
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all PRs
- Ensure that `UPGRADING.md` is up-to-date and includes notes on any breaking changes
or other upgrading flows.
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
4. Open a PR with these changes against the backport branch.
5. Once these changes are on the backport branch, push a tag with prepared release details.
This will trigger the actual release `v0.38.0`.
- `git tag -a v0.38.0 -m 'Release v0.38.0'`
- `git push origin v0.38.0`
6. Make sure that `main` is updated with the latest `CHANGELOG.md`, `CHANGELOG_PENDING.md`, and `UPGRADING.md`.
7. Add the release to the documentation site generator config (see
[DOCS\_README.md](./docs/DOCS_README.md) for more details). In summary:
- Start on branch `main`.
- Add a new line at the bottom of [`docs/versions`](./docs/versions) to
ensure the newest release is the default for the landing page.
- Add a new entry to `themeConfig.versions` in
[`docs/.vuepress/config.js`](./docs/.vuepress/config.js) to include the
release in the dropdown versions menu.
- Commit these changes to `main` and backport them into the backport
branch for this release.
## Patch release
Patch releases are done differently from minor releases: They are built off of
long-lived backport branches, rather than from main. As non-breaking changes
land on `main`, they should also be backported into these backport branches.
Patch releases don't have release candidates by default, although any tricky
changes may merit a release candidate.
To create a patch release:
1. Checkout the long-lived backport branch: `git checkout v0.38.x`
2. Run integration tests (`make test_integrations`) and the nightlies.
3. Check out a new branch and prepare the release:
- Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for all issues
- Run `bash ./scripts/authors.sh` to get a list of authors since the latest release, and add the GitHub aliases of external contributors to the top of the CHANGELOG. To lookup an alias from an email, try `bash ./scripts/authors.sh <email>`
- Reset the `CHANGELOG_PENDING.md`
- Bump the TMVersionDefault version in `version.go`
- Bump the ABCI version number, if necessary.
(Note that ABCI follows semver, and that ABCI versions are the only versions
which can change during patch releases, and only field additions are valid patch changes.)
4. Open a PR with these changes that will land them back on `v0.38.x`
5. Once this change has landed on the backport branch, make sure to pull it locally, then push a tag.
- `git tag -a v0.38.1 -m 'Release v0.38.1'`
- `git push origin v0.38.1`
6. Create a pull request back to main with the CHANGELOG & version changes from the latest release.
- Remove all `R:patch` labels from the pull requests that were included in the release.
- Do not merge the backport branch into main.
## Minor Release Checklist
The following set of steps is performed on all releases that increment the
_minor_ version, e.g. v0.25 to v0.26. These steps ensure that Tendermint is well
tested, stable, and suitable for adoption by the various diverse projects that
rely on Tendermint.
### Feature Freeze
Ahead of any minor version release of Tendermint, the software enters 'Feature
Freeze' for at least two weeks. A feature freeze means that _no_ new features
are added to the code being prepared for release. No code changes should be made
to the code being released that do not directly improve pressing issues of code
quality. The following must not be merged during a feature freeze:
* Refactors that are not related to specific bug fixes.
* Dependency upgrades.
* New test code that does not test a discovered regression.
* New features of any kind.
* Documentation or spec improvements that are not related to the newly developed
code.
This period directly follows the creation of the [backport
branch](#creating-a-backport-branch). The Tendermint team instead directs all
attention to ensuring that the existing code is stable and reliable. Broken
tests are fixed, flaky tests are remedied, end-to-end test failures are
thoroughly diagnosed and all efforts of the team are aimed at improving the
quality of the code. During this period, the upgrade harness tests are run
repeatedly and a variety of in-house testnets are run to ensure Tendermint
functions at the scale it will be used by application developers and node
operators.
### Nightly End-To-End Tests
The Tendermint team maintains [a set of end-to-end
tests](https://github.com/tendermint/tendermint/blob/main/test/e2e/README.md#L1)
that run each night on the latest commit of the project and on the code in the
tip of each supported backport branch. These tests start a network of
containerized Tendermint processes and run automated checks that the network
functions as expected in both stable and unstable conditions. During the feature
freeze, these tests are run nightly and must pass consistently for a release of
Tendermint to be considered stable.
### Upgrade Harness
> TODO(williambanfield): Change to past tense and clarify this section once
> upgrade harness is complete.
The Tendermint team is creating an upgrade test harness to exercise the workflow
of stopping an instance of Tendermint running one version of the software and
starting up the same application running the next version. To support upgrade
testing, we will add the ability to terminate the Tendermint process at specific
pre-defined points in its execution so that we can verify upgrades work in a
representative sample of stop conditions.
### Large Scale Testnets
The Tendermint end-to-end tests run a small network (~10s of nodes) to exercise
basic consensus interactions. Real world deployments of Tendermint often have
over a hundred nodes just in the validator set, with many others acting as full
nodes and sentry nodes. To gain more assurance before a release, we will also
run larger-scale test networks to shake out emergent behaviors at scale.
Large-scale test networks are run on a set of virtual machines (VMs). Each VM is
equipped with 4 Gigabytes of RAM and 2 CPU cores. The network runs a very simple
key-value store application. The application adds artificial delays to different
ABCI calls to simulate a slow application. Each testnet is briefly run with no
load being generated to collect baseline performance. Once the baseline is
captured, a consistent load is applied across the network. This load takes the
form of 10% of the running nodes all receiving a consistent stream of two
hundred transactions per minute each.
During each testnet run, the following metrics are monitored and collected on each
node:
* Consensus rounds per height
* Maximum connected peers, Minimum connected peers, Rate of change of peer connections
* Memory resident set size
* CPU utilization
* Blocks produced per minute
* Seconds for each step of consensus (Propose, Prevote, Precommit, Commit)
* Latency to receive block proposals
For these tests we intentionally target low-powered host machines (with low core
counts and limited memory) to ensure we observe similar kinds of resource contention
and limitation that real-world deployments of Tendermint experience in production.
#### 200 Node Testnet
To test the stability and performance of Tendermint in a real world scenario,
a 200 node test network is run. The network comprises 5 seed nodes, 100
validators and 95 non-validating full nodes. All nodes begin by dialing
a subset of the seed nodes to discover peers. The network is run for several
days, with metrics being collected continuously. In cases of changes to performance
critical systems, testnets of larger sizes should be considered.
#### Rotating Node Testnet
Real-world deployments of Tendermint frequently see new nodes arrive and old
nodes exit the network. The rotating node testnet ensures that Tendermint is
able to handle this reliably. In this test, a network with 10 validators and
3 seed nodes is started. A rolling set of 25 full nodes is started, and each
connects to the network by dialing one of the seed nodes. Once the node is able
to blocksync to the head of the chain and begins producing blocks using
Tendermint consensus, it is stopped. Once stopped, a new node is started and
takes its place. This network is run for several days.
#### Network Partition Testnet
Tendermint is expected to recover from network partitions. A partition where no
subset of the nodes is left with the super-majority of the stake is expected to
stop making blocks. Upon alleviation of the partition, the network is expected
to once again become fully connected and capable of producing blocks. The
network partition testnet ensures that Tendermint is able to handle this
reliably at scale. In this test, a network with 100 validators and 95 full
nodes is started. All validators have equal stake. Once the network is
producing blocks, a set of firewall rules is deployed to create a partitioned
network with 50% of the stake on one side and 50% on the other. Once the
network stops producing blocks, the firewall rules are removed and the nodes
are monitored to ensure they reconnect and that the network again begins
producing blocks.
#### Absent Stake Testnet
Tendermint networks often run with _some_ portion of the voting power offline.
The absent stake testnet ensures that large networks are able to handle this
reliably. A set of 150 validator nodes and three seed nodes is started. The set
of 150 validators is configured to only possess a cumulative stake of 67% of
the total stake. The remaining 33% of the stake is configured to belong to
a validator that is never actually run in the test network. The network is run
for multiple days, ensuring that it is able to produce blocks without issue.

View File

@@ -2,146 +2,98 @@
## Reporting a Bug
As part of our [Coordinated Vulnerability Disclosure Policy](https://tendermint.com/security),
we operate a [bug bounty][hackerone]. See the policy for more
details on submissions and rewards, and see "Example Vulnerabilities" (below)
for examples of the kinds of bugs we're most interested in.
As part of our [Coordinated Vulnerability Disclosure
Policy](https://tendermint.com/security), we operate a [bug
bounty](https://hackerone.com/tendermint).
See the policy for more details on submissions and rewards, and see "Example Vulnerabilities" (below) for examples of the kinds of bugs we're most interested in.
### Guidelines
We require that all researchers:
* Use the bug bounty to disclose all vulnerabilities, and avoid posting
vulnerability information in public places, including Github Issues, Discord
channels, and Telegram groups
* Make every effort to avoid privacy violations, degradation of user experience,
disruption to production systems (including but not limited to the Cosmos
Hub), and destruction of data
* Keep any information about vulnerabilities that you've discovered confidential
between yourself and the Tendermint Core engineering team until the issue has
been resolved and disclosed
* Use the bug bounty to disclose all vulnerabilities, and avoid posting vulnerability information in public places, including Github Issues, Discord channels, and Telegram groups
* Make every effort to avoid privacy violations, degradation of user experience, disruption to production systems (including but not limited to the Cosmos Hub), and destruction of data
* Keep any information about vulnerabilities that you've discovered confidential between yourself and the Tendermint Core engineering team until the issue has been resolved and disclosed
* Avoid posting personally identifiable information, privately or publicly
If you follow these guidelines when reporting an issue to us, we commit to:
* Not pursue or support any legal action related to your research on this
vulnerability
* Work with you to understand, resolve and ultimately disclose the issue in a
timely fashion
* Not pursue or support any legal action related to your research on this vulnerability
* Work with you to understand, resolve and ultimately disclose the issue in a timely fashion
## Disclosure Process
Tendermint Core uses the following disclosure process:
1. Once a security report is received, the Tendermint Core team works to verify
the issue and confirm its severity level using CVSS.
2. The Tendermint Core team collaborates with the Gaia team to determine the
vulnerability's potential impact on the Cosmos Hub.
3. Patches are prepared for eligible releases of Tendermint in private
repositories. See “Supported Releases” below for more information on which
releases are considered eligible.
4. If it is determined that a CVE-ID is required, we request a CVE through a CVE
Numbering Authority.
5. We notify the community that a security release is coming, to give users time
to prepare their systems for the update. Notifications can include forum
posts, tweets, and emails to partners and validators, including emails sent
to the [Tendermint Security Mailing List][tmsec-mailing].
6. 24 hours following this notification, the fixes are applied publicly and new
releases are issued.
7. Cosmos SDK and Gaia update their Tendermint Core dependencies to use these
releases, and then themselves issue new releases.
8. Once releases are available for Tendermint Core, Cosmos SDK and Gaia, we
notify the community, again, through the same channels as above. We also
publish a Security Advisory on Github and publish the CVE, as long as neither
the Security Advisory nor the CVE include any information on how to exploit
these vulnerabilities beyond what information is already available in the
patch itself.
9. Once the community is notified, we will pay out any relevant bug bounties to
submitters.
10. One week after the releases go out, we will publish a post with further
details on the vulnerability as well as our response to it.
1. Once a security report is received, the Tendermint Core team works to verify the issue and confirm its severity level using CVSS.
2. The Tendermint Core team collaborates with the Gaia team to determine the vulnerability's potential impact on the Cosmos Hub.
3. Patches are prepared for eligible releases of Tendermint in private repositories. See “Supported Releases” below for more information on which releases are considered eligible.
4. If it is determined that a CVE-ID is required, we request a CVE through a CVE Numbering Authority.
5. We notify the community that a security release is coming, to give users time to prepare their systems for the update. Notifications can include forum posts, tweets, and emails to partners and validators, including emails sent to the [Tendermint Security Mailing List](https://berlin.us4.list-manage.com/subscribe?u=431b35421ff7edcc77df5df10&id=3fe93307bc).
6. 24 hours following this notification, the fixes are applied publicly and new releases are issued.
7. Cosmos SDK and Gaia update their Tendermint Core dependencies to use these releases, and then themselves issue new releases.
8. Once releases are available for Tendermint Core, Cosmos SDK and Gaia, we notify the community, again, through the same channels as above. We also publish a Security Advisory on Github and publish the CVE, as long as neither the Security Advisory nor the CVE include any information on how to exploit these vulnerabilities beyond what information is already available in the patch itself.
9. Once the community is notified, we will pay out any relevant bug bounties to submitters.
10. One week after the releases go out, we will publish a post with further details on the vulnerability as well as our response to it.
This process can take some time. Every effort will be made to handle the bug in
as timely a manner as possible, however it's important that we follow the
process described above to ensure that disclosures are handled consistently and
to keep Tendermint Core and its downstream dependent projects--including but not
limited to Gaia and the Cosmos Hub--as secure as possible.
This process can take some time. Every effort will be made to handle the bug in as timely a manner as possible, however it's important that we follow the process described above to ensure that disclosures are handled consistently and to keep Tendermint Core and its downstream dependent projects--including but not limited to Gaia and the Cosmos Hub--as secure as possible.
### Example Timeline
The following is an example timeline for the triage and response. The required
roles and team members are described in parentheses after each task; however,
multiple people can play each role and each person may play multiple roles.
The following is an example timeline for the triage and response. The required roles and team members are described in parentheses after each task; however, multiple people can play each role and each person may play multiple roles.
#### 24+ Hours Before Release Time
#### > 24 Hours Before Release Time
1. Request CVE number (ADMIN)
2. Gather emails and other contact info for validators (COMMS LEAD)
3. Create patches in a private security repo, and ensure that PRs are open
targeting all relevant release branches (TENDERMINT ENG, TENDERMINT LEAD)
4. Test fixes on a testnet (TENDERMINT ENG, COSMOS SDK ENG)
5. Write “Security Advisory” for forum (TENDERMINT LEAD)
1. Request CVE number (ADMIN)
2. Gather emails and other contact info for validators (COMMS LEAD)
3. Test fixes on a testnet (TENDERMINT ENG, COSMOS ENG)
4. Write “Security Advisory” for forum (TENDERMINT LEAD)
#### 24 Hours Before Release Time
1. Post “Security Advisory” pre-notification on forum (TENDERMINT LEAD)
2. Post Tweet linking to forum post (COMMS LEAD)
3. Announce security advisory/link to post in various other social channels
(Telegram, Discord) (COMMS LEAD)
4. Send emails to validators or other users (PARTNERSHIPS LEAD)
1. Post “Security Advisory” pre-notification on forum (TENDERMINT LEAD)
2. Post Tweet linking to forum post (COMMS LEAD)
3. Announce security advisory/link to post in various other social channels (Telegram, Discord) (COMMS LEAD)
4. Send emails to validators or other users (PARTNERSHIPS LEAD)
#### Release Time
1. Cut Tendermint releases for eligible versions (TENDERMINT ENG, TENDERMINT
LEAD)
1. Cut Tendermint releases for eligible versions (TENDERMINT ENG, TENDERMINT LEAD)
2. Cut Cosmos SDK release for eligible versions (COSMOS ENG)
3. Cut Gaia release for eligible versions (GAIA ENG)
4. Post “Security releases” on forum (TENDERMINT LEAD)
5. Post new Tweet linking to forum post (COMMS LEAD)
6. Remind everyone via social channels (Telegram, Discord) that the release is
out (COMMS LEAD)
7. Send emails to validators or other users (COMMS LEAD)
8. Publish Security Advisory and CVE, if CVE has no sensitive information
(ADMIN)
6. Remind everyone via social channels (Telegram, Discord) that the release is out (COMMS LEAD)
7. Send emails to validators or other users (COMMS LEAD)
8. Publish Security Advisory and CVE, if CVE has no sensitive information (ADMIN)
#### After Release Time
1. Write forum post with exploit details (TENDERMINT LEAD)
2. Approve pay-out on HackerOne for submitter (ADMIN)
#### 7 Days After Release Time
1. Publish CVE if it has not yet been published (ADMIN)
2. Publish forum post with exploit details (TENDERMINT ENG, TENDERMINT LEAD)
## Supported Releases
The Tendermint Core team commits to releasing security patch releases for both
the latest minor release as well for the major/minor release that the Cosmos Hub
is running.
The Tendermint Core team commits to releasing security patch releases for both the latest minor release as well for the major/minor release that the Cosmos Hub is running.
If you are running older versions of Tendermint Core, we encourage you to
upgrade at your earliest opportunity so that you can receive security patches
directly from the Tendermint repo. While you are welcome to backport security
patches to older versions for your own use, we will not publish or promote these
backports.
If you are running older versions of Tendermint Core, we encourage you to upgrade at your earliest opportunity so that you can receive security patches directly from the Tendermint repo. While you are welcome to backport security patches to older versions for your own use, we will not publish or promote these backports.
## Scope
The full scope of our bug bounty program is outlined on our
[Hacker One program page][hackerone]. Please also note that, in the interest of
the safety of our users and staff, a few things are explicitly excluded from
scope:
The full scope of our bug bounty program is outlined on our [Hacker One program page](https://hackerone.com/tendermint). Please also note that, in the interest of the safety of our users and staff, a few things are explicitly excluded from scope:
* Any third-party services
* Findings from physical testing, such as office access
* Findings derived from social engineering (e.g., phishing)
## Example Vulnerabilities
The following is a list of examples of the kinds of vulnerabilities that we're
most interested in. It is not exhaustive: there are other kinds of issues we may
also be interested in!
The following is a list of examples of the kinds of vulnerabilities that we're most interested in. It is not exhaustive: there are other kinds of issues we may also be interested in!
### Specification
@@ -153,8 +105,7 @@ also be interested in!
Assuming less than 1/3 of the voting power is Byzantine (malicious):
* Validation of blockchain data structures, including blocks, block parts,
votes, and so on
* Validation of blockchain data structures, including blocks, block parts, votes, and so on
* Execution of blocks
* Validator set changes
* Proposer round robin
@@ -163,9 +114,6 @@ Assuming less than 1/3 of the voting power is Byzantine (malicious):
* A node halting (liveness failure)
* Syncing new and old nodes
Assuming more than 1/3 the voting power is Byzantine:
* Attacks that go unpunished (unhandled evidence)
### Networking
@@ -191,7 +139,7 @@ Attacks may come through the P2P network or the RPC layer:
### Libraries
* Serialization
* Serialization (Amino)
* Reading/Writing files and databases
### Cryptography
@@ -202,8 +150,5 @@ Attacks may come through the P2P network or the RPC layer:
### Light Client
* Core verification
* Bisection/sequential algorithms
[hackerone]: https://hackerone.com/cosmos
[tmsec-mailing]: https://berlin.us4.list-manage.com/subscribe?u=431b35421ff7edcc77df5df10&id=3fe93307bc

View File

@@ -1,59 +1,44 @@
# Upgrading Tendermint Core
This guide provides instructions for upgrading to specific versions of
Tendermint Core.
This guide provides instructions for upgrading to specific versions of Tendermint Core.
## Unreleased
### ABCI Changes
* The `ABCIVersion` is now `1.0.0`.
* Added `AbciVersion` to `RequestInfo`. Applications should check that the ABCI version they expect is being used in order to avoid errors from unimplemented changes (a sketch of such a check follows this list).
* Added new ABCI methods `PrepareProposal` and `ProcessProposal`. For details,
please see the [spec](spec/abci/README.md). Applications upgrading to
v0.37.0 must implement these methods, at the very minimum, as described
[here](./spec/abci/abci++_app_requirements.md)
* Deduplicated `ConsensusParams` and `BlockParams`.
In the v0.34 branch they are defined both in `abci/types.proto` and `types/params.proto`.
The definitions in `abci/types.proto` have been removed.
In-process applications should make sure they are not using the deleted
version of those structures.
* In v0.34, messages on the wire used to be length-delimited with `int64` varint
values, which was inconsistent with the `uint64` varint length delimiters used
in the P2P layer. Both now consistently use `uint64` varint length delimiters.
* Added `AbciVersion` to `RequestInfo`.
Applications should check that Tendermint's ABCI version matches the one they expect
in order to ensure compatibility.
* The `SetOption` method has been removed from the ABCI `Client` interface.
The corresponding Protobuf types have been deprecated.
* The `key` and `value` fields in the `EventAttribute` type have been changed
from type `bytes` to `string`. As per the [Protocol Buffers updating
guidelines](https://developers.google.com/protocol-buffers/docs/proto3#updating),
this should have no effect on the wire-level encoding for UTF8-encoded
strings.
* The method `SetOption` has been removed from the ABCI.Client interface. This feature was used in early ABCI implementations.
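As referenced in the `AbciVersion` item above, applications can verify the version during the `Info` handshake. A minimal sketch, assuming the `AbciVersion` field described above and a v0.34-style application built on the `abci/types` package (the `App` type and `expectedABCIVersion` constant are hypothetical):
```go
package example

import (
	"fmt"

	abci "github.com/tendermint/tendermint/abci/types"
)

// expectedABCIVersion is the ABCI version this application was built against.
const expectedABCIVersion = "1.0.0"

type App struct {
	abci.BaseApplication
	version string
}

// Info checks the ABCI version Tendermint reports before the node starts
// serving, instead of failing later on unimplemented methods.
func (app *App) Info(req abci.RequestInfo) abci.ResponseInfo {
	if req.AbciVersion != expectedABCIVersion {
		panic(fmt.Sprintf("expected ABCI version %s, got %s",
			expectedABCIVersion, req.AbciVersion))
	}
	return abci.ResponseInfo{Version: app.version}
}
```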
## v0.34.20
* Messages are written to a byte stream using uint64 length delimiters instead of int64.
### Feature: Priority Mempool
### Config Changes
This release backports an implementation of the Priority Mempool from the v0.35
branch. This implementation of the mempool permits the application to set a
priority on each transaction during CheckTx, and during block selection the
highest-priority transactions are chosen (subject to the constraints on size
and gas cost).
* `fast_sync = "v1"` is no longer supported. Please use `v2` instead.
Operators can enable the priority mempool by setting `mempool.version` to
`"v1"` in the `config.toml`. For more technical details about the priority
mempool, see [ADR 067: Mempool
Refactor](https://github.com/tendermint/tendermint/blob/main/docs/architecture/adr-067-mempool-refactor.md).
* All config parameters are now hyphen-case (also known as kebab-case) instead of snake_case. Before restarting the node make sure
you have updated all the variables in your `config.toml` file.
### CLI Changes
* If you had previously used `tendermint gen_node_key` to generate a new node
key, keep in mind that it no longer saves the output to a file. You can use
`tendermint init` or pipe the output of `tendermint gen_node_key` to
`$TMHOME/config/node_key.json`:
```
$ tendermint gen_node_key > $TMHOME/config/node_key.json
```
* CLI commands and flags are all now hyphen-case instead of snake_case.
Make sure to adjust any scripts that call a CLI command using snake_case.
## v0.34.0
**Upgrading to Tendermint 0.34 requires a blockchain restart.**
This release is not compatible with previous blockchains due to changes to
the encoding format (see "Protocol Buffers," below) and the block header (see "Blockchain Protocol").
Note also that Tendermint 0.34 also requires Go 1.16 or higher.
Note also that Tendermint 0.34 also requires Go 1.15 or higher.
### ABCI Changes
@@ -63,7 +48,7 @@ Note also that Tendermint 0.34 also requires Go 1.16 or higher.
were added to support the new State Sync feature.
Previously, syncing a new node to a preexisting network could take days; but with State Sync,
new nodes are able to join a network in a matter of seconds.
Read [the spec](https://github.com/tendermint/tendermint/blob/v0.34.x/spec/abci/apps.md#state-sync)
Read [the spec](https://docs.tendermint.com/master/spec/abci/apps.html#state-sync)
if you want to learn more about State Sync, or if you'd like your application to use it.
(If you don't want to support State Sync in your application, you can just implement these new
ABCI methods as no-ops, leaving them empty.)
@@ -79,7 +64,7 @@ Note also that Tendermint 0.34 also requires Go 1.16 or higher.
Applications should be able to handle these evidence types
(i.e., through slashing or other accountability measures).
* The [`PublicKey` type](https://github.com/tendermint/tendermint/blob/main/proto/tendermint/crypto/keys.proto#L13-L15)
* The [`PublicKey` type](https://github.com/tendermint/tendermint/blob/master/proto/tendermint/crypto/keys.proto#L13-L15)
(used in ABCI as part of `ValidatorUpdate`) now uses a `oneof` protobuf type.
Note that since Tendermint only supports ed25519 validator keys, there's only one
option in the `oneof`. For more, see "Protocol Buffers," below.
@@ -87,6 +72,8 @@ Note also that Tendermint 0.34 also requires Go 1.16 or higher.
* The field `Proof`, on the ABCI type `ResponseQuery`, is now named `ProofOps`.
For more, see "Crypto," below.
* The method `SetOption` has been removed from the ABCI.Client interface. This feature was used in the early ABCI implementations.
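For the `PublicKey` change above, a small sketch of building a `ValidatorUpdate` through the ed25519 branch of the `oneof` (this assumes the `Ed25519ValidatorUpdate` helper in `abci/types`):
```go
package app

import (
	"crypto/ed25519"

	abci "github.com/tendermint/tendermint/abci/types"
)

// buildValidatorUpdate wraps a raw 32-byte ed25519 public key in the ed25519
// branch of the protobuf oneof and attaches the new voting power.
func buildValidatorUpdate(pub ed25519.PublicKey, power int64) abci.ValidatorUpdate {
	return abci.Ed25519ValidatorUpdate(pub, power)
}
```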
### P2P Protocol
The default codec is now proto3, not amino. The schema files can be found in the `/proto`
@@ -95,8 +82,8 @@ directory. For more, see "Protobuf," below.
### Blockchain Protocol
* `Header#LastResultsHash`, which is the root hash of a Merkle tree built from
`ResponseDeliverTx(Code, Data)`, as of v0.34 also includes `GasWanted` and `GasUsed`
fields.
* Merkle hashes of empty trees previously returned nothing, but now return the hash of an empty input,
to conform with [RFC-6962](https://tools.ietf.org/html/rfc6962).
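To see the new empty-tree behaviour directly, a small sketch using the `crypto/merkle` helper (function name as in the Tendermint crypto package):
```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/crypto/merkle"
)

// As of v0.34 the Merkle root of zero leaves is the hash of an empty input
// (per RFC 6962) rather than nil; this prints that 32-byte SHA-256 value.
func main() {
	root := merkle.HashFromByteSlices(nil)
	fmt.Printf("%X\n", root)
}
```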
@@ -186,7 +173,7 @@ The `bech32` package has moved to the Cosmos SDK:
### CLI
The `tendermint lite` command has been renamed to `tendermint light` and has a slightly different API.
See [the docs](https://docs.tendermint.com/v0.33/tendermint-core/light-client-protocol.html#http-proxy) for details.
See [the docs](https://docs.tendermint.com/master/tendermint-core/light-client-protocol.html#http-proxy) for details.
### Light Client
@@ -200,7 +187,7 @@ Other user-relevant changes include:
* The `Verifier` was broken up into two pieces:
* Core verification logic (pure `VerifyX` functions)
* `Client` object, which represents the complete light client
* The new light client stores headers and validator sets as `LightBlock`s
* The new light clients stores headers & validator sets as `LightBlock`s
* The RPC client can be found in the `/rpc` directory.
* The HTTP(S) proxy is located in the `/proxy` directory.
@@ -342,7 +329,7 @@ Evidence Params has been changed to include duration.
### RPC Changes
* `/validators` is now paginated (default: 30 vals per page)
* `/block_results` response format updated [see RPC docs for details](https://docs.tendermint.com/v0.33/rpc/#/Info/block_results)
* `/block_results` response format updated [see RPC docs for details](https://docs.tendermint.com/master/rpc/#/Info/block_results)
* Event suffix has been removed from the ID in event responses
* IDs are now integers not `json-client-XYZ`
@@ -461,11 +448,11 @@ the compilation tag:
Use `cleveldb` tag instead of `gcc` to compile Tendermint with CLevelDB or
use `make build_c` / `make install_c` (full instructions can be found at
<https://docs.tendermint.com/v0.33/introduction/install.html#compile-with-cleveldb-support>)
<https://tendermint.com/docs/introduction/install.html#compile-with-cleveldb-support>)
## v0.31.0
This release contains a breaking change to the behavior of the pubsub system.
This release contains a breaking change to the behaviour of the pubsub system.
It also contains some minor breaking changes in the Go API and ABCI.
There are no changes to the block or p2p protocols, so v0.31.0 should work fine
with blockchains created from the v0.30 series.
@@ -483,7 +470,7 @@ In this case, the WS client will receive an error with description:
"error": {
"code": -32000,
"msg": "Server error",
"data": "subscription was canceled (reason: client is not pulling messages fast enough)" // or "subscription was canceled (reason: Tendermint exited)"
"data": "subscription was cancelled (reason: client is not pulling messages fast enough)" // or "subscription was cancelled (reason: Tendermint exited)"
}
}
@@ -536,14 +523,14 @@ due to changes in how various data structures are hashed.
Any implementations of Tendermint blockchain verification, including lite clients,
will need to be updated. For specific details:
* [Merkle tree](https://github.com/tendermint/tendermint/blob/main/spec/blockchain/encoding.md#merkle-trees)
* [ConsensusParams](https://github.com/tendermint/tendermint/blob/main/spec/blockchain/state.md#consensusparams)
* [Merkle tree](https://github.com/tendermint/spec/blob/master/spec/blockchain/encoding.md#merkle-trees)
* [ConsensusParams](https://github.com/tendermint/spec/blob/master/spec/blockchain/state.md#consensusparams)
There was also a small change to field ordering in the vote struct. Any
implementations of an out-of-process validator (like a Key-Management Server)
will need to be updated. For specific details:
* [Vote](https://github.com/tendermint/tendermint/blob/main/spec/consensus/signing.md#votes)
* [Vote](https://github.com/tendermint/spec/blob/master/spec/consensus/signing.md#votes)
Finally, the proposer selection algorithm continues to evolve. See the
[work-in-progress
@@ -664,7 +651,7 @@ to `timeout_propose = "3s"`.
### RPC Changes
The default behavior of `/abci_query` has been changed to not return a proof,
The default behaviour of `/abci_query` has been changed to not return a proof,
and the name of the parameter that controls this has been changed from `trusted`
to `prove`. To get proofs with your queries, ensure you set `prove=true`.
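As an illustration (using the current Go RPC client; the query path and key bytes are hypothetical), requesting a proof explicitly looks roughly like this:
```go
package main

import (
	"context"
	"fmt"

	rpcclient "github.com/tendermint/tendermint/rpc/client"
	rpchttp "github.com/tendermint/tendermint/rpc/client/http"
)

// Proofs are now opt-in: pass Prove: true via ABCIQueryOptions.
func main() {
	c, err := rpchttp.New("tcp://localhost:26657", "/websocket")
	if err != nil {
		panic(err)
	}
	res, err := c.ABCIQueryWithOptions(
		context.Background(),
		"/key",             // hypothetical query path
		[]byte("some-key"), // hypothetical key bytes
		rpcclient.ABCIQueryOptions{Prove: true},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(res.Response.Code, res.Response.ProofOps)
}
```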

Vagrantfile (vendored)

@@ -1,66 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/focal64"
config.vm.provider "virtualbox" do |v|
v.memory = 4096
v.cpus = 2
end
config.vm.provision "shell", inline: <<-SHELL
apt-get update
# install base requirements
apt-get install -y --no-install-recommends wget curl jq zip \
make shellcheck bsdmainutils psmisc
apt-get install -y language-pack-en
# install docker
apt-get install -y --no-install-recommends apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
apt-get update
apt-get install -y docker-ce
usermod -aG docker vagrant
# install go
wget -q https://dl.google.com/go/go1.15.linux-amd64.tar.gz
tar -xvf go1.15.linux-amd64.tar.gz
mv go /usr/local
rm -f go1.15.linux-amd64.tar.gz
# install nodejs (for docs)
curl -sL https://deb.nodesource.com/setup_11.x | bash -
apt-get install -y nodejs
# cleanup
apt-get autoremove -y
# set env variables
echo 'export GOROOT=/usr/local/go' >> /home/vagrant/.bash_profile
echo 'export GOPATH=/home/vagrant/go' >> /home/vagrant/.bash_profile
echo 'export PATH=$PATH:$GOROOT/bin:$GOPATH/bin' >> /home/vagrant/.bash_profile
echo 'export LC_ALL=en_US.UTF-8' >> /home/vagrant/.bash_profile
echo 'cd go/src/github.com/tendermint/tendermint' >> /home/vagrant/.bash_profile
mkdir -p /home/vagrant/go/bin
mkdir -p /home/vagrant/go/src/github.com/tendermint
ln -s /vagrant /home/vagrant/go/src/github.com/tendermint/tendermint
chown -R vagrant:vagrant /home/vagrant/go
chown vagrant:vagrant /home/vagrant/.bash_profile
# get all deps and tools, ready to install/test
su - vagrant -c 'source /home/vagrant/.bash_profile'
su - vagrant -c 'cd /home/vagrant/go/src/github.com/tendermint/tendermint && make tools'
SHELL
end


@@ -19,8 +19,8 @@ To get up and running quickly, see the [getting started guide](../docs/app-dev/g
A detailed description of the ABCI methods and message types is contained in:
- [The main spec](https://github.com/tendermint/tendermint/blob/main/spec/abci/README.md)
- [A protobuf file](../proto/tendermint/types/types.proto)
- [The main spec](https://github.com/tendermint/spec/blob/master/spec/abci/abci.md)
- [A protobuf file](../proto/tendermint/abci/types.proto)
- [A Go interface](./types/application.go)
## Protocol Buffers


@@ -1,6 +1,7 @@
package abcicli
import (
"context"
"fmt"
"sync"
@@ -14,52 +15,53 @@ const (
echoRetryIntervalSeconds = 1
)
//go:generate ../../scripts/mockery_generate.sh Client
//go:generate mockery --case underscore --name Client
// Client defines an interface for an ABCI client.
// All `Async` methods return a `ReqRes` object.
//
// All `Async` methods return a `ReqRes` object and an error.
// All `Sync` methods return the appropriate protobuf ResponseXxx struct and an error.
// Note these are client errors, eg. ABCI socket connectivity issues.
// Application-related errors are reflected in response via ABCI error codes and logs.
//
// NOTE these are client errors, eg. ABCI socket connectivity issues.
// Application-related errors are reflected in response via ABCI error codes
// and logs.
type Client interface {
service.Service
SetResponseCallback(Callback)
Error() error
FlushAsync() *ReqRes
EchoAsync(msg string) *ReqRes
InfoAsync(types.RequestInfo) *ReqRes
DeliverTxAsync(types.RequestDeliverTx) *ReqRes
CheckTxAsync(types.RequestCheckTx) *ReqRes
QueryAsync(types.RequestQuery) *ReqRes
CommitAsync() *ReqRes
InitChainAsync(types.RequestInitChain) *ReqRes
PrepareProposalAsync(types.RequestPrepareProposal) *ReqRes
BeginBlockAsync(types.RequestBeginBlock) *ReqRes
EndBlockAsync(types.RequestEndBlock) *ReqRes
ListSnapshotsAsync(types.RequestListSnapshots) *ReqRes
OfferSnapshotAsync(types.RequestOfferSnapshot) *ReqRes
LoadSnapshotChunkAsync(types.RequestLoadSnapshotChunk) *ReqRes
ApplySnapshotChunkAsync(types.RequestApplySnapshotChunk) *ReqRes
ProcessProposalAsync(types.RequestProcessProposal) *ReqRes
// Asynchronous requests
FlushAsync(context.Context) (*ReqRes, error)
EchoAsync(ctx context.Context, msg string) (*ReqRes, error)
InfoAsync(context.Context, types.RequestInfo) (*ReqRes, error)
DeliverTxAsync(context.Context, types.RequestDeliverTx) (*ReqRes, error)
CheckTxAsync(context.Context, types.RequestCheckTx) (*ReqRes, error)
QueryAsync(context.Context, types.RequestQuery) (*ReqRes, error)
CommitAsync(context.Context) (*ReqRes, error)
InitChainAsync(context.Context, types.RequestInitChain) (*ReqRes, error)
BeginBlockAsync(context.Context, types.RequestBeginBlock) (*ReqRes, error)
EndBlockAsync(context.Context, types.RequestEndBlock) (*ReqRes, error)
ListSnapshotsAsync(context.Context, types.RequestListSnapshots) (*ReqRes, error)
OfferSnapshotAsync(context.Context, types.RequestOfferSnapshot) (*ReqRes, error)
LoadSnapshotChunkAsync(context.Context, types.RequestLoadSnapshotChunk) (*ReqRes, error)
ApplySnapshotChunkAsync(context.Context, types.RequestApplySnapshotChunk) (*ReqRes, error)
FlushSync() error
EchoSync(msg string) (*types.ResponseEcho, error)
InfoSync(types.RequestInfo) (*types.ResponseInfo, error)
DeliverTxSync(types.RequestDeliverTx) (*types.ResponseDeliverTx, error)
CheckTxSync(types.RequestCheckTx) (*types.ResponseCheckTx, error)
QuerySync(types.RequestQuery) (*types.ResponseQuery, error)
CommitSync() (*types.ResponseCommit, error)
InitChainSync(types.RequestInitChain) (*types.ResponseInitChain, error)
PrepareProposalSync(types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error)
BeginBlockSync(types.RequestBeginBlock) (*types.ResponseBeginBlock, error)
EndBlockSync(types.RequestEndBlock) (*types.ResponseEndBlock, error)
ListSnapshotsSync(types.RequestListSnapshots) (*types.ResponseListSnapshots, error)
OfferSnapshotSync(types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error)
LoadSnapshotChunkSync(types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error)
ApplySnapshotChunkSync(types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error)
ProcessProposalSync(types.RequestProcessProposal) (*types.ResponseProcessProposal, error)
// Synchronous requests
FlushSync(context.Context) error
EchoSync(ctx context.Context, msg string) (*types.ResponseEcho, error)
InfoSync(context.Context, types.RequestInfo) (*types.ResponseInfo, error)
DeliverTxSync(context.Context, types.RequestDeliverTx) (*types.ResponseDeliverTx, error)
CheckTxSync(context.Context, types.RequestCheckTx) (*types.ResponseCheckTx, error)
QuerySync(context.Context, types.RequestQuery) (*types.ResponseQuery, error)
CommitSync(context.Context) (*types.ResponseCommit, error)
InitChainSync(context.Context, types.RequestInitChain) (*types.ResponseInitChain, error)
BeginBlockSync(context.Context, types.RequestBeginBlock) (*types.ResponseBeginBlock, error)
EndBlockSync(context.Context, types.RequestEndBlock) (*types.ResponseEndBlock, error)
ListSnapshotsSync(context.Context, types.RequestListSnapshots) (*types.ResponseListSnapshots, error)
OfferSnapshotSync(context.Context, types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error)
LoadSnapshotChunkSync(context.Context, types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error)
ApplySnapshotChunkSync(context.Context, types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error)
}
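A hedged sketch of calling the context-aware interface above (the two-second timeout and transaction bytes are arbitrary):
```go
package app

import (
	"context"
	"time"

	abcicli "github.com/tendermint/tendermint/abci/client"
	"github.com/tendermint/tendermint/abci/types"
)

// callCheckTx shows the new calling convention: every Sync/Async method now
// takes a context, so callers can bound how long they wait on the connection.
func callCheckTx(cli abcicli.Client, tx []byte) (*types.ResponseCheckTx, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	return cli.CheckTxSync(ctx, types.RequestCheckTx{Tx: tx})
}
```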
//----------------------------------------
@@ -78,22 +80,20 @@ func NewClient(addr, transport string, mustConnect bool) (client Client, err err
return
}
//----------------------------------------
type Callback func(*types.Request, *types.Response)
//----------------------------------------
type ReqRes struct {
*types.Request
*sync.WaitGroup
*types.Response // Not set atomically, so be sure to use WaitGroup.
mtx tmsync.Mutex
// callbackInvoked as a variable to track if the callback was already
// invoked during the regular execution of the request. This variable
// allows clients to set the callback simultaneously without potentially
// invoking the callback twice by accident, once when 'SetCallback' is
// called and once during the normal request.
callbackInvoked bool
cb func(*types.Response) // A single callback that may be set.
mtx tmsync.Mutex
done bool // Gets set to true once *after* WaitGroup.Done().
cb func(*types.Response) // A single callback that may be set.
}
func NewReqRes(req *types.Request) *ReqRes {
@@ -102,49 +102,39 @@ func NewReqRes(req *types.Request) *ReqRes {
WaitGroup: waitGroup1(),
Response: nil,
callbackInvoked: false,
cb: nil,
done: false,
cb: nil,
}
}
// SetCallback sets the callback. If reqRes is already done, it will call the cb
// immediately. Note, reqRes.cb should not change if reqRes.done and only one
// callback is supported.
func (r *ReqRes) SetCallback(cb func(res *types.Response)) {
r.mtx.Lock()
// Sets the callback for this ReqRes atomically.
// If reqRes is already done, calls cb immediately.
// NOTE: reqRes.cb should not change if reqRes.done.
// NOTE: only one callback is supported.
func (reqRes *ReqRes) SetCallback(cb func(res *types.Response)) {
reqRes.mtx.Lock()
if r.callbackInvoked {
r.mtx.Unlock()
cb(r.Response)
if reqRes.done {
reqRes.mtx.Unlock()
cb(reqRes.Response)
return
}
r.cb = cb
r.mtx.Unlock()
reqRes.cb = cb
reqRes.mtx.Unlock()
}
// InvokeCallback invokes a thread-safe execution of the configured callback
// if non-nil.
func (r *ReqRes) InvokeCallback() {
r.mtx.Lock()
defer r.mtx.Unlock()
if r.cb != nil {
r.cb(r.Response)
}
r.callbackInvoked = true
func (reqRes *ReqRes) GetCallback() func(*types.Response) {
reqRes.mtx.Lock()
defer reqRes.mtx.Unlock()
return reqRes.cb
}
// GetCallback returns the configured callback of the ReqRes object which may be
// nil. Note, it is not safe to concurrently call this in cases where it is
// marked done and SetCallback is called before calling GetCallback as that
// will invoke the callback twice and create a potential race condition.
//
// ref: https://github.com/tendermint/tendermint/issues/5439
func (r *ReqRes) GetCallback() func(*types.Response) {
r.mtx.Lock()
defer r.mtx.Unlock()
return r.cb
// NOTE: it should be safe to read reqRes.cb without locks after this.
func (reqRes *ReqRes) SetDone() {
reqRes.mtx.Lock()
reqRes.done = true
reqRes.mtx.Unlock()
}
func waitGroup1() (wg *sync.WaitGroup) {

abci/client/doc.go (new file)

@@ -0,0 +1,29 @@
// Package abcicli provides an ABCI implementation in Go.
//
// There are 3 clients available:
// 1. socket (unix or TCP)
// 2. local (in memory)
// 3. gRPC
//
// ## Socket client
//
// async: the client maintains an internal buffer of a fixed size. When the
// buffer becomes full, all Async calls will return an error immediately.
//
// sync: the client blocks on 1) enqueuing the Sync request 2) enqueuing the
// Flush requests 3) waiting for the Flush response
//
// ## Local client
//
// async: global mutex is locked during each call (meaning it's not really async!)
// sync: global mutex is locked during each call
//
// ## gRPC client
//
// async: gRPC is synchronous, but an internal buffer of a fixed size is used
// to store responses and later call callbacks (separate goroutine per
// response).
//
// sync: waits for all Async calls to complete (essentially what Flush does in
// the socket client) and calls Sync method.
package abcicli
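For orientation, a rough sketch of constructing one of the clients described above via the package's `NewClient` factory (the address and transport are placeholders):
```go
package main

import (
	"fmt"

	abcicli "github.com/tendermint/tendermint/abci/client"
)

// Dial a remote ABCI application over the socket protocol; "grpc" selects the
// gRPC client instead, and NewLocalClient wraps an in-process Application.
func main() {
	client, err := abcicli.NewClient("tcp://127.0.0.1:26658", "socket", true /* mustConnect */)
	if err != nil {
		panic(err)
	}
	if err := client.Start(); err != nil {
		panic(err)
	}
	defer client.Stop() //nolint:errcheck
	fmt.Println("ABCI client running:", client.IsRunning())
}
```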


@@ -1,12 +1,12 @@
package abcicli
import (
"context"
"fmt"
"net"
"sync"
"time"
"golang.org/x/net/context"
"google.golang.org/grpc"
"github.com/tendermint/tendermint/abci/types"
@@ -15,10 +15,7 @@ import (
tmsync "github.com/tendermint/tendermint/libs/sync"
)
var _ Client = (*grpcClient)(nil)
// A stripped copy of the remoteClient that makes
// synchronous calls using grpc
// A gRPC client.
type grpcClient struct {
service.BaseService
mustConnect bool
@@ -33,6 +30,18 @@ type grpcClient struct {
resCb func(*types.Request, *types.Response) // listens to all callbacks
}
var _ Client = (*grpcClient)(nil)
// NewGRPCClient creates a gRPC client, which will connect to addr upon the
// start. Note Client#Start returns an error if connection is unsuccessful and
// mustConnect is true.
//
// GRPC calls are synchronous, but some callbacks expect to be called
// asynchronously (eg. the mempool expects to be able to lock to remove bad txs
// from cache). To accommodate, we finish each call in its own go-routine,
// which is expensive, but easy - if you want something better, use the socket
// protocol! maybe one day, if people really want it, we use grpc streams, but
// hopefully not :D
func NewGRPCClient(addr string, mustConnect bool) Client {
cli := &grpcClient{
addr: addr,
@@ -54,10 +63,6 @@ func dialerFunc(ctx context.Context, addr string) (net.Conn, error) {
}
func (cli *grpcClient) OnStart() error {
if err := cli.BaseService.OnStart(); err != nil {
return err
}
// This processes asynchronous request/response messages and dispatches
// them to callbacks.
go func() {
@@ -66,6 +71,7 @@ func (cli *grpcClient) OnStart() error {
cli.mtx.Lock()
defer cli.mtx.Unlock()
reqres.SetDone()
reqres.Done()
// Notify client listener if set
@@ -74,7 +80,9 @@ func (cli *grpcClient) OnStart() error {
}
// Notify reqRes listener if set
reqres.InvokeCallback()
if cb := reqres.GetCallback(); cb != nil {
cb(reqres.Response)
}
}
for reqres := range cli.chReqRes {
if reqres != nil {
@@ -87,7 +95,6 @@ func (cli *grpcClient) OnStart() error {
RETRY_LOOP:
for {
//nolint:staticcheck // SA1019 Existing use of deprecated but supported dial option.
conn, err := grpc.Dial(cli.addr, grpc.WithInsecure(), grpc.WithContextDialer(dialerFunc))
if err != nil {
if cli.mustConnect {
@@ -118,8 +125,6 @@ RETRY_LOOP:
}
func (cli *grpcClient) OnStop() {
cli.BaseService.OnStop()
if cli.conn != nil {
cli.conn.Close()
}
@@ -158,165 +163,168 @@ func (cli *grpcClient) SetResponseCallback(resCb Callback) {
}
//----------------------------------------
// GRPC calls are synchronous, but some callbacks expect to be called asynchronously
// (eg. the mempool expects to be able to lock to remove bad txs from cache).
// To accommodate, we finish each call in its own go-routine,
// which is expensive, but easy - if you want something better, use the socket protocol!
// maybe one day, if people really want it, we use grpc streams,
// but hopefully not :D
func (cli *grpcClient) EchoAsync(msg string) *ReqRes {
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) EchoAsync(ctx context.Context, msg string) (*ReqRes, error) {
req := types.ToRequestEcho(msg)
res, err := cli.client.Echo(context.Background(), req.GetEcho(), grpc.WaitForReady(true))
res, err := cli.client.Echo(ctx, req.GetEcho(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
return nil, err
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Echo{Echo: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Echo{Echo: res}})
}
func (cli *grpcClient) FlushAsync() *ReqRes {
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
req := types.ToRequestFlush()
res, err := cli.client.Flush(context.Background(), req.GetFlush(), grpc.WaitForReady(true))
res, err := cli.client.Flush(ctx, req.GetFlush(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
return nil, err
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Flush{Flush: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Flush{Flush: res}})
}
func (cli *grpcClient) InfoAsync(params types.RequestInfo) *ReqRes {
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) InfoAsync(ctx context.Context, params types.RequestInfo) (*ReqRes, error) {
req := types.ToRequestInfo(params)
res, err := cli.client.Info(context.Background(), req.GetInfo(), grpc.WaitForReady(true))
res, err := cli.client.Info(ctx, req.GetInfo(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
return nil, err
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Info{Info: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Info{Info: res}})
}
func (cli *grpcClient) DeliverTxAsync(params types.RequestDeliverTx) *ReqRes {
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) DeliverTxAsync(ctx context.Context, params types.RequestDeliverTx) (*ReqRes, error) {
req := types.ToRequestDeliverTx(params)
res, err := cli.client.DeliverTx(context.Background(), req.GetDeliverTx(), grpc.WaitForReady(true))
res, err := cli.client.DeliverTx(ctx, req.GetDeliverTx(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
return nil, err
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_DeliverTx{DeliverTx: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_DeliverTx{DeliverTx: res}})
}
func (cli *grpcClient) CheckTxAsync(params types.RequestCheckTx) *ReqRes {
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) CheckTxAsync(ctx context.Context, params types.RequestCheckTx) (*ReqRes, error) {
req := types.ToRequestCheckTx(params)
res, err := cli.client.CheckTx(context.Background(), req.GetCheckTx(), grpc.WaitForReady(true))
res, err := cli.client.CheckTx(ctx, req.GetCheckTx(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
return nil, err
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_CheckTx{CheckTx: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_CheckTx{CheckTx: res}})
}
func (cli *grpcClient) QueryAsync(params types.RequestQuery) *ReqRes {
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) QueryAsync(ctx context.Context, params types.RequestQuery) (*ReqRes, error) {
req := types.ToRequestQuery(params)
res, err := cli.client.Query(context.Background(), req.GetQuery(), grpc.WaitForReady(true))
res, err := cli.client.Query(ctx, req.GetQuery(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
return nil, err
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Query{Query: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Query{Query: res}})
}
func (cli *grpcClient) CommitAsync() *ReqRes {
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) CommitAsync(ctx context.Context) (*ReqRes, error) {
req := types.ToRequestCommit()
res, err := cli.client.Commit(context.Background(), req.GetCommit(), grpc.WaitForReady(true))
res, err := cli.client.Commit(ctx, req.GetCommit(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
return nil, err
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_Commit{Commit: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Commit{Commit: res}})
}
func (cli *grpcClient) InitChainAsync(params types.RequestInitChain) *ReqRes {
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) InitChainAsync(ctx context.Context, params types.RequestInitChain) (*ReqRes, error) {
req := types.ToRequestInitChain(params)
res, err := cli.client.InitChain(context.Background(), req.GetInitChain(), grpc.WaitForReady(true))
res, err := cli.client.InitChain(ctx, req.GetInitChain(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
return nil, err
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_InitChain{InitChain: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_InitChain{InitChain: res}})
}
func (cli *grpcClient) BeginBlockAsync(params types.RequestBeginBlock) *ReqRes {
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) BeginBlockAsync(ctx context.Context, params types.RequestBeginBlock) (*ReqRes, error) {
req := types.ToRequestBeginBlock(params)
res, err := cli.client.BeginBlock(context.Background(), req.GetBeginBlock(), grpc.WaitForReady(true))
res, err := cli.client.BeginBlock(ctx, req.GetBeginBlock(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
return nil, err
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_BeginBlock{BeginBlock: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_BeginBlock{BeginBlock: res}})
}
func (cli *grpcClient) EndBlockAsync(params types.RequestEndBlock) *ReqRes {
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) EndBlockAsync(ctx context.Context, params types.RequestEndBlock) (*ReqRes, error) {
req := types.ToRequestEndBlock(params)
res, err := cli.client.EndBlock(context.Background(), req.GetEndBlock(), grpc.WaitForReady(true))
res, err := cli.client.EndBlock(ctx, req.GetEndBlock(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
return nil, err
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_EndBlock{EndBlock: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_EndBlock{EndBlock: res}})
}
func (cli *grpcClient) ListSnapshotsAsync(params types.RequestListSnapshots) *ReqRes {
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) ListSnapshotsAsync(ctx context.Context, params types.RequestListSnapshots) (*ReqRes, error) {
req := types.ToRequestListSnapshots(params)
res, err := cli.client.ListSnapshots(context.Background(), req.GetListSnapshots(), grpc.WaitForReady(true))
res, err := cli.client.ListSnapshots(ctx, req.GetListSnapshots(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
return nil, err
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_ListSnapshots{ListSnapshots: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_ListSnapshots{ListSnapshots: res}})
}
func (cli *grpcClient) OfferSnapshotAsync(params types.RequestOfferSnapshot) *ReqRes {
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) OfferSnapshotAsync(ctx context.Context, params types.RequestOfferSnapshot) (*ReqRes, error) {
req := types.ToRequestOfferSnapshot(params)
res, err := cli.client.OfferSnapshot(context.Background(), req.GetOfferSnapshot(), grpc.WaitForReady(true))
res, err := cli.client.OfferSnapshot(ctx, req.GetOfferSnapshot(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
return nil, err
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_OfferSnapshot{OfferSnapshot: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_OfferSnapshot{OfferSnapshot: res}})
}
func (cli *grpcClient) LoadSnapshotChunkAsync(params types.RequestLoadSnapshotChunk) *ReqRes {
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) LoadSnapshotChunkAsync(
ctx context.Context,
params types.RequestLoadSnapshotChunk,
) (*ReqRes, error) {
req := types.ToRequestLoadSnapshotChunk(params)
res, err := cli.client.LoadSnapshotChunk(context.Background(), req.GetLoadSnapshotChunk(), grpc.WaitForReady(true))
res, err := cli.client.LoadSnapshotChunk(ctx, req.GetLoadSnapshotChunk(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
return nil, err
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_LoadSnapshotChunk{LoadSnapshotChunk: res}})
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_LoadSnapshotChunk{LoadSnapshotChunk: res}})
}
func (cli *grpcClient) ApplySnapshotChunkAsync(params types.RequestApplySnapshotChunk) *ReqRes {
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) ApplySnapshotChunkAsync(
ctx context.Context,
params types.RequestApplySnapshotChunk,
) (*ReqRes, error) {
req := types.ToRequestApplySnapshotChunk(params)
res, err := cli.client.ApplySnapshotChunk(context.Background(), req.GetApplySnapshotChunk(), grpc.WaitForReady(true))
res, err := cli.client.ApplySnapshotChunk(ctx, req.GetApplySnapshotChunk(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
return nil, err
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_ApplySnapshotChunk{ApplySnapshotChunk: res}})
}
func (cli *grpcClient) PrepareProposalAsync(params types.RequestPrepareProposal) *ReqRes {
req := types.ToRequestPrepareProposal(params)
res, err := cli.client.PrepareProposal(context.Background(), req.GetPrepareProposal(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_PrepareProposal{PrepareProposal: res}})
}
func (cli *grpcClient) ProcessProposalAsync(params types.RequestProcessProposal) *ReqRes {
req := types.ToRequestProcessProposal(params)
res, err := cli.client.ProcessProposal(context.Background(), req.GetProcessProposal(), grpc.WaitForReady(true))
if err != nil {
cli.StopForError(err)
}
return cli.finishAsyncCall(req, &types.Response{Value: &types.Response_ProcessProposal{ProcessProposal: res}})
return cli.finishAsyncCall(
ctx,
req,
&types.Response{Value: &types.Response_ApplySnapshotChunk{ApplySnapshotChunk: res}},
)
}
// finishAsyncCall creates a ReqRes for an async call, and immediately populates it
// with the response. We don't complete it until it's been ordered via the channel.
func (cli *grpcClient) finishAsyncCall(req *types.Request, res *types.Response) *ReqRes {
func (cli *grpcClient) finishAsyncCall(ctx context.Context, req *types.Request, res *types.Response) (*ReqRes, error) {
reqres := NewReqRes(req)
reqres.Response = res
cli.chReqRes <- reqres // use channel for async responses, since they must be ordered
return reqres
select {
case cli.chReqRes <- reqres: // use channel for async responses, since they must be ordered
return reqres, nil
case <-ctx.Done():
return nil, ctx.Err()
}
}
// finishSyncCall waits for an async call to complete. It is necessary to call all
@@ -349,87 +357,150 @@ func (cli *grpcClient) finishSyncCall(reqres *ReqRes) *types.Response {
//----------------------------------------
func (cli *grpcClient) FlushSync() error {
reqres := cli.FlushAsync()
cli.finishSyncCall(reqres).GetFlush()
return cli.Error()
func (cli *grpcClient) FlushSync(ctx context.Context) error {
return nil
}
func (cli *grpcClient) EchoSync(msg string) (*types.ResponseEcho, error) {
reqres := cli.EchoAsync(msg)
// StopForError should already have been called if error is set
func (cli *grpcClient) EchoSync(ctx context.Context, msg string) (*types.ResponseEcho, error) {
reqres, err := cli.EchoAsync(ctx, msg)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetEcho(), cli.Error()
}
func (cli *grpcClient) InfoSync(req types.RequestInfo) (*types.ResponseInfo, error) {
reqres := cli.InfoAsync(req)
func (cli *grpcClient) InfoSync(
ctx context.Context,
req types.RequestInfo,
) (*types.ResponseInfo, error) {
reqres, err := cli.InfoAsync(ctx, req)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetInfo(), cli.Error()
}
func (cli *grpcClient) DeliverTxSync(params types.RequestDeliverTx) (*types.ResponseDeliverTx, error) {
reqres := cli.DeliverTxAsync(params)
func (cli *grpcClient) DeliverTxSync(
ctx context.Context,
params types.RequestDeliverTx,
) (*types.ResponseDeliverTx, error) {
reqres, err := cli.DeliverTxAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetDeliverTx(), cli.Error()
}
func (cli *grpcClient) CheckTxSync(params types.RequestCheckTx) (*types.ResponseCheckTx, error) {
reqres := cli.CheckTxAsync(params)
func (cli *grpcClient) CheckTxSync(
ctx context.Context,
params types.RequestCheckTx,
) (*types.ResponseCheckTx, error) {
reqres, err := cli.CheckTxAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetCheckTx(), cli.Error()
}
func (cli *grpcClient) QuerySync(req types.RequestQuery) (*types.ResponseQuery, error) {
reqres := cli.QueryAsync(req)
func (cli *grpcClient) QuerySync(
ctx context.Context,
req types.RequestQuery,
) (*types.ResponseQuery, error) {
reqres, err := cli.QueryAsync(ctx, req)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetQuery(), cli.Error()
}
func (cli *grpcClient) CommitSync() (*types.ResponseCommit, error) {
reqres := cli.CommitAsync()
func (cli *grpcClient) CommitSync(ctx context.Context) (*types.ResponseCommit, error) {
reqres, err := cli.CommitAsync(ctx)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetCommit(), cli.Error()
}
func (cli *grpcClient) InitChainSync(params types.RequestInitChain) (*types.ResponseInitChain, error) {
reqres := cli.InitChainAsync(params)
func (cli *grpcClient) InitChainSync(
ctx context.Context,
params types.RequestInitChain,
) (*types.ResponseInitChain, error) {
reqres, err := cli.InitChainAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetInitChain(), cli.Error()
}
func (cli *grpcClient) BeginBlockSync(params types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
reqres := cli.BeginBlockAsync(params)
func (cli *grpcClient) BeginBlockSync(
ctx context.Context,
params types.RequestBeginBlock,
) (*types.ResponseBeginBlock, error) {
reqres, err := cli.BeginBlockAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetBeginBlock(), cli.Error()
}
func (cli *grpcClient) EndBlockSync(params types.RequestEndBlock) (*types.ResponseEndBlock, error) {
reqres := cli.EndBlockAsync(params)
func (cli *grpcClient) EndBlockSync(
ctx context.Context,
params types.RequestEndBlock,
) (*types.ResponseEndBlock, error) {
reqres, err := cli.EndBlockAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetEndBlock(), cli.Error()
}
func (cli *grpcClient) ListSnapshotsSync(params types.RequestListSnapshots) (*types.ResponseListSnapshots, error) {
reqres := cli.ListSnapshotsAsync(params)
func (cli *grpcClient) ListSnapshotsSync(
ctx context.Context,
params types.RequestListSnapshots,
) (*types.ResponseListSnapshots, error) {
reqres, err := cli.ListSnapshotsAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetListSnapshots(), cli.Error()
}
func (cli *grpcClient) OfferSnapshotSync(params types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error) {
reqres := cli.OfferSnapshotAsync(params)
func (cli *grpcClient) OfferSnapshotSync(
ctx context.Context,
params types.RequestOfferSnapshot,
) (*types.ResponseOfferSnapshot, error) {
reqres, err := cli.OfferSnapshotAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetOfferSnapshot(), cli.Error()
}
func (cli *grpcClient) LoadSnapshotChunkSync(
ctx context.Context,
params types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
reqres := cli.LoadSnapshotChunkAsync(params)
reqres, err := cli.LoadSnapshotChunkAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetLoadSnapshotChunk(), cli.Error()
}
func (cli *grpcClient) ApplySnapshotChunkSync(
ctx context.Context,
params types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
reqres := cli.ApplySnapshotChunkAsync(params)
reqres, err := cli.ApplySnapshotChunkAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetApplySnapshotChunk(), cli.Error()
}
func (cli *grpcClient) PrepareProposalSync(
params types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
reqres := cli.PrepareProposalAsync(params)
return cli.finishSyncCall(reqres).GetPrepareProposal(), cli.Error()
}
func (cli *grpcClient) ProcessProposalSync(params types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
reqres := cli.ProcessProposalAsync(params)
return cli.finishSyncCall(reqres).GetProcessProposal(), cli.Error()
}


@@ -1,13 +1,13 @@
package abcicli
import (
"context"
types "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/libs/service"
tmsync "github.com/tendermint/tendermint/libs/sync"
)
var _ Client = (*localClient)(nil)
// NOTE: use defer to unlock mutex because Application might panic (e.g., in
// case of malicious tx or query). It only makes sense for publicly exposed
// methods like CheckTx (/broadcast_tx_* RPC endpoint) or Query (/abci_query
@@ -49,22 +49,22 @@ func (app *localClient) Error() error {
return nil
}
func (app *localClient) FlushAsync() *ReqRes {
func (app *localClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
// Do nothing
return newLocalReqRes(types.ToRequestFlush(), nil)
return newLocalReqRes(types.ToRequestFlush(), nil), nil
}
func (app *localClient) EchoAsync(msg string) *ReqRes {
func (app *localClient) EchoAsync(ctx context.Context, msg string) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
return app.callback(
types.ToRequestEcho(msg),
types.ToResponseEcho(msg),
)
), nil
}
func (app *localClient) InfoAsync(req types.RequestInfo) *ReqRes {
func (app *localClient) InfoAsync(ctx context.Context, req types.RequestInfo) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -72,10 +72,10 @@ func (app *localClient) InfoAsync(req types.RequestInfo) *ReqRes {
return app.callback(
types.ToRequestInfo(req),
types.ToResponseInfo(res),
)
), nil
}
func (app *localClient) DeliverTxAsync(params types.RequestDeliverTx) *ReqRes {
func (app *localClient) DeliverTxAsync(ctx context.Context, params types.RequestDeliverTx) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -83,10 +83,10 @@ func (app *localClient) DeliverTxAsync(params types.RequestDeliverTx) *ReqRes {
return app.callback(
types.ToRequestDeliverTx(params),
types.ToResponseDeliverTx(res),
)
), nil
}
func (app *localClient) CheckTxAsync(req types.RequestCheckTx) *ReqRes {
func (app *localClient) CheckTxAsync(ctx context.Context, req types.RequestCheckTx) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -94,10 +94,10 @@ func (app *localClient) CheckTxAsync(req types.RequestCheckTx) *ReqRes {
return app.callback(
types.ToRequestCheckTx(req),
types.ToResponseCheckTx(res),
)
), nil
}
func (app *localClient) QueryAsync(req types.RequestQuery) *ReqRes {
func (app *localClient) QueryAsync(ctx context.Context, req types.RequestQuery) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -105,10 +105,10 @@ func (app *localClient) QueryAsync(req types.RequestQuery) *ReqRes {
return app.callback(
types.ToRequestQuery(req),
types.ToResponseQuery(res),
)
), nil
}
func (app *localClient) CommitAsync() *ReqRes {
func (app *localClient) CommitAsync(ctx context.Context) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -116,10 +116,10 @@ func (app *localClient) CommitAsync() *ReqRes {
return app.callback(
types.ToRequestCommit(),
types.ToResponseCommit(res),
)
), nil
}
func (app *localClient) InitChainAsync(req types.RequestInitChain) *ReqRes {
func (app *localClient) InitChainAsync(ctx context.Context, req types.RequestInitChain) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -127,10 +127,10 @@ func (app *localClient) InitChainAsync(req types.RequestInitChain) *ReqRes {
return app.callback(
types.ToRequestInitChain(req),
types.ToResponseInitChain(res),
)
), nil
}
func (app *localClient) BeginBlockAsync(req types.RequestBeginBlock) *ReqRes {
func (app *localClient) BeginBlockAsync(ctx context.Context, req types.RequestBeginBlock) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -138,10 +138,10 @@ func (app *localClient) BeginBlockAsync(req types.RequestBeginBlock) *ReqRes {
return app.callback(
types.ToRequestBeginBlock(req),
types.ToResponseBeginBlock(res),
)
), nil
}
func (app *localClient) EndBlockAsync(req types.RequestEndBlock) *ReqRes {
func (app *localClient) EndBlockAsync(ctx context.Context, req types.RequestEndBlock) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -149,10 +149,10 @@ func (app *localClient) EndBlockAsync(req types.RequestEndBlock) *ReqRes {
return app.callback(
types.ToRequestEndBlock(req),
types.ToResponseEndBlock(res),
)
), nil
}
func (app *localClient) ListSnapshotsAsync(req types.RequestListSnapshots) *ReqRes {
func (app *localClient) ListSnapshotsAsync(ctx context.Context, req types.RequestListSnapshots) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -160,10 +160,10 @@ func (app *localClient) ListSnapshotsAsync(req types.RequestListSnapshots) *ReqR
return app.callback(
types.ToRequestListSnapshots(req),
types.ToResponseListSnapshots(res),
)
), nil
}
func (app *localClient) OfferSnapshotAsync(req types.RequestOfferSnapshot) *ReqRes {
func (app *localClient) OfferSnapshotAsync(ctx context.Context, req types.RequestOfferSnapshot) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -171,10 +171,13 @@ func (app *localClient) OfferSnapshotAsync(req types.RequestOfferSnapshot) *ReqR
return app.callback(
types.ToRequestOfferSnapshot(req),
types.ToResponseOfferSnapshot(res),
)
), nil
}
func (app *localClient) LoadSnapshotChunkAsync(req types.RequestLoadSnapshotChunk) *ReqRes {
func (app *localClient) LoadSnapshotChunkAsync(
ctx context.Context,
req types.RequestLoadSnapshotChunk,
) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -182,10 +185,13 @@ func (app *localClient) LoadSnapshotChunkAsync(req types.RequestLoadSnapshotChun
return app.callback(
types.ToRequestLoadSnapshotChunk(req),
types.ToResponseLoadSnapshotChunk(res),
)
), nil
}
func (app *localClient) ApplySnapshotChunkAsync(req types.RequestApplySnapshotChunk) *ReqRes {
func (app *localClient) ApplySnapshotChunkAsync(
ctx context.Context,
req types.RequestApplySnapshotChunk,
) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -193,42 +199,20 @@ func (app *localClient) ApplySnapshotChunkAsync(req types.RequestApplySnapshotCh
return app.callback(
types.ToRequestApplySnapshotChunk(req),
types.ToResponseApplySnapshotChunk(res),
)
}
func (app *localClient) PrepareProposalAsync(req types.RequestPrepareProposal) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.PrepareProposal(req)
return app.callback(
types.ToRequestPrepareProposal(req),
types.ToResponsePrepareProposal(res),
)
}
func (app *localClient) ProcessProposalAsync(req types.RequestProcessProposal) *ReqRes {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.ProcessProposal(req)
return app.callback(
types.ToRequestProcessProposal(req),
types.ToResponseProcessProposal(res),
)
), nil
}
//-------------------------------------------------------
func (app *localClient) FlushSync() error {
func (app *localClient) FlushSync(ctx context.Context) error {
return nil
}
func (app *localClient) EchoSync(msg string) (*types.ResponseEcho, error) {
func (app *localClient) EchoSync(ctx context.Context, msg string) (*types.ResponseEcho, error) {
return &types.ResponseEcho{Message: msg}, nil
}
func (app *localClient) InfoSync(req types.RequestInfo) (*types.ResponseInfo, error) {
func (app *localClient) InfoSync(ctx context.Context, req types.RequestInfo) (*types.ResponseInfo, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -236,7 +220,11 @@ func (app *localClient) InfoSync(req types.RequestInfo) (*types.ResponseInfo, er
return &res, nil
}
func (app *localClient) DeliverTxSync(req types.RequestDeliverTx) (*types.ResponseDeliverTx, error) {
func (app *localClient) DeliverTxSync(
ctx context.Context,
req types.RequestDeliverTx,
) (*types.ResponseDeliverTx, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -244,7 +232,10 @@ func (app *localClient) DeliverTxSync(req types.RequestDeliverTx) (*types.Respon
return &res, nil
}
func (app *localClient) CheckTxSync(req types.RequestCheckTx) (*types.ResponseCheckTx, error) {
func (app *localClient) CheckTxSync(
ctx context.Context,
req types.RequestCheckTx,
) (*types.ResponseCheckTx, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -252,7 +243,10 @@ func (app *localClient) CheckTxSync(req types.RequestCheckTx) (*types.ResponseCh
return &res, nil
}
func (app *localClient) QuerySync(req types.RequestQuery) (*types.ResponseQuery, error) {
func (app *localClient) QuerySync(
ctx context.Context,
req types.RequestQuery,
) (*types.ResponseQuery, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -260,7 +254,7 @@ func (app *localClient) QuerySync(req types.RequestQuery) (*types.ResponseQuery,
return &res, nil
}
func (app *localClient) CommitSync() (*types.ResponseCommit, error) {
func (app *localClient) CommitSync(ctx context.Context) (*types.ResponseCommit, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -268,7 +262,11 @@ func (app *localClient) CommitSync() (*types.ResponseCommit, error) {
return &res, nil
}
func (app *localClient) InitChainSync(req types.RequestInitChain) (*types.ResponseInitChain, error) {
func (app *localClient) InitChainSync(
ctx context.Context,
req types.RequestInitChain,
) (*types.ResponseInitChain, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -276,7 +274,11 @@ func (app *localClient) InitChainSync(req types.RequestInitChain) (*types.Respon
return &res, nil
}
func (app *localClient) BeginBlockSync(req types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
func (app *localClient) BeginBlockSync(
ctx context.Context,
req types.RequestBeginBlock,
) (*types.ResponseBeginBlock, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -284,7 +286,11 @@ func (app *localClient) BeginBlockSync(req types.RequestBeginBlock) (*types.Resp
return &res, nil
}
func (app *localClient) EndBlockSync(req types.RequestEndBlock) (*types.ResponseEndBlock, error) {
func (app *localClient) EndBlockSync(
ctx context.Context,
req types.RequestEndBlock,
) (*types.ResponseEndBlock, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -292,7 +298,11 @@ func (app *localClient) EndBlockSync(req types.RequestEndBlock) (*types.Response
return &res, nil
}
func (app *localClient) ListSnapshotsSync(req types.RequestListSnapshots) (*types.ResponseListSnapshots, error) {
func (app *localClient) ListSnapshotsSync(
ctx context.Context,
req types.RequestListSnapshots,
) (*types.ResponseListSnapshots, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -300,7 +310,11 @@ func (app *localClient) ListSnapshotsSync(req types.RequestListSnapshots) (*type
return &res, nil
}
func (app *localClient) OfferSnapshotSync(req types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error) {
func (app *localClient) OfferSnapshotSync(
ctx context.Context,
req types.RequestOfferSnapshot,
) (*types.ResponseOfferSnapshot, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -309,7 +323,9 @@ func (app *localClient) OfferSnapshotSync(req types.RequestOfferSnapshot) (*type
}
func (app *localClient) LoadSnapshotChunkSync(
ctx context.Context,
req types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -318,7 +334,9 @@ func (app *localClient) LoadSnapshotChunkSync(
}
func (app *localClient) ApplySnapshotChunkSync(
ctx context.Context,
req types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -326,33 +344,16 @@ func (app *localClient) ApplySnapshotChunkSync(
return &res, nil
}
func (app *localClient) PrepareProposalSync(req types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.PrepareProposal(req)
return &res, nil
}
func (app *localClient) ProcessProposalSync(req types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.ProcessProposal(req)
return &res, nil
}
//-------------------------------------------------------
func (app *localClient) callback(req *types.Request, res *types.Response) *ReqRes {
app.Callback(req, res)
rr := newLocalReqRes(req, res)
rr.callbackInvoked = true
return rr
return newLocalReqRes(req, res)
}
func newLocalReqRes(req *types.Request, res *types.Response) *ReqRes {
reqRes := NewReqRes(req)
reqRes.Response = res
reqRes.SetDone()
return reqRes
}


@@ -1,9 +1,12 @@
// Code generated by mockery. DO NOT EDIT.
// Code generated by mockery v2.3.0. DO NOT EDIT.
package mocks
import (
context "context"
abcicli "github.com/tendermint/tendermint/abci/client"
log "github.com/tendermint/tendermint/libs/log"
mock "github.com/stretchr/testify/mock"
@@ -16,29 +19,36 @@ type Client struct {
mock.Mock
}
// ApplySnapshotChunkAsync provides a mock function with given fields: _a0
func (_m *Client) ApplySnapshotChunkAsync(_a0 types.RequestApplySnapshotChunk) *abcicli.ReqRes {
ret := _m.Called(_a0)
// ApplySnapshotChunkAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) ApplySnapshotChunkAsync(_a0 context.Context, _a1 types.RequestApplySnapshotChunk) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(types.RequestApplySnapshotChunk) *abcicli.ReqRes); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestApplySnapshotChunk) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestApplySnapshotChunk) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// ApplySnapshotChunkSync provides a mock function with given fields: _a0
func (_m *Client) ApplySnapshotChunkSync(_a0 types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
ret := _m.Called(_a0)
// ApplySnapshotChunkSync provides a mock function with given fields: _a0, _a1
func (_m *Client) ApplySnapshotChunkSync(_a0 context.Context, _a1 types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseApplySnapshotChunk
if rf, ok := ret.Get(0).(func(types.RequestApplySnapshotChunk) *types.ResponseApplySnapshotChunk); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestApplySnapshotChunk) *types.ResponseApplySnapshotChunk); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseApplySnapshotChunk)
@@ -46,8 +56,8 @@ func (_m *Client) ApplySnapshotChunkSync(_a0 types.RequestApplySnapshotChunk) (*
}
var r1 error
if rf, ok := ret.Get(1).(func(types.RequestApplySnapshotChunk) error); ok {
r1 = rf(_a0)
if rf, ok := ret.Get(1).(func(context.Context, types.RequestApplySnapshotChunk) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
@@ -55,29 +65,36 @@ func (_m *Client) ApplySnapshotChunkSync(_a0 types.RequestApplySnapshotChunk) (*
return r0, r1
}
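For reference, a hedged sketch of how a test might drive this generated mock with testify's `On`/`Return` API (package layout and expectations are illustrative):
```go
package client_test

import (
	"context"
	"testing"

	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"

	"github.com/tendermint/tendermint/abci/client/mocks"
	"github.com/tendermint/tendermint/abci/types"
)

// TestCheckTxSyncMock stubs the context-aware CheckTxSync and checks that the
// stubbed response is returned; mock.Anything matches the context argument.
func TestCheckTxSyncMock(t *testing.T) {
	cli := &mocks.Client{}
	cli.On("CheckTxSync", mock.Anything, mock.Anything).
		Return(&types.ResponseCheckTx{Code: types.CodeTypeOK}, nil)

	res, err := cli.CheckTxSync(context.Background(), types.RequestCheckTx{Tx: []byte("tx")})
	require.NoError(t, err)
	require.Equal(t, types.CodeTypeOK, res.Code)
}
```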
// BeginBlockAsync provides a mock function with given fields: _a0
func (_m *Client) BeginBlockAsync(_a0 types.RequestBeginBlock) *abcicli.ReqRes {
ret := _m.Called(_a0)
// BeginBlockAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) BeginBlockAsync(_a0 context.Context, _a1 types.RequestBeginBlock) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(types.RequestBeginBlock) *abcicli.ReqRes); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestBeginBlock) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestBeginBlock) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// BeginBlockSync provides a mock function with given fields: _a0
func (_m *Client) BeginBlockSync(_a0 types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
ret := _m.Called(_a0)
// BeginBlockSync provides a mock function with given fields: _a0, _a1
func (_m *Client) BeginBlockSync(_a0 context.Context, _a1 types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseBeginBlock
if rf, ok := ret.Get(0).(func(types.RequestBeginBlock) *types.ResponseBeginBlock); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestBeginBlock) *types.ResponseBeginBlock); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseBeginBlock)
@@ -85,8 +102,8 @@ func (_m *Client) BeginBlockSync(_a0 types.RequestBeginBlock) (*types.ResponseBe
}
var r1 error
if rf, ok := ret.Get(1).(func(types.RequestBeginBlock) error); ok {
r1 = rf(_a0)
if rf, ok := ret.Get(1).(func(context.Context, types.RequestBeginBlock) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
@@ -94,29 +111,36 @@ func (_m *Client) BeginBlockSync(_a0 types.RequestBeginBlock) (*types.ResponseBe
return r0, r1
}
// CheckTxAsync provides a mock function with given fields: _a0
func (_m *Client) CheckTxAsync(_a0 types.RequestCheckTx) *abcicli.ReqRes {
ret := _m.Called(_a0)
// CheckTxAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) CheckTxAsync(_a0 context.Context, _a1 types.RequestCheckTx) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(types.RequestCheckTx) *abcicli.ReqRes); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestCheckTx) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestCheckTx) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// CheckTxSync provides a mock function with given fields: _a0
func (_m *Client) CheckTxSync(_a0 types.RequestCheckTx) (*types.ResponseCheckTx, error) {
ret := _m.Called(_a0)
// CheckTxSync provides a mock function with given fields: _a0, _a1
func (_m *Client) CheckTxSync(_a0 context.Context, _a1 types.RequestCheckTx) (*types.ResponseCheckTx, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseCheckTx
if rf, ok := ret.Get(0).(func(types.RequestCheckTx) *types.ResponseCheckTx); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestCheckTx) *types.ResponseCheckTx); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseCheckTx)
@@ -124,7 +148,30 @@ func (_m *Client) CheckTxSync(_a0 types.RequestCheckTx) (*types.ResponseCheckTx,
}
var r1 error
if rf, ok := ret.Get(1).(func(types.RequestCheckTx) error); ok {
if rf, ok := ret.Get(1).(func(context.Context, types.RequestCheckTx) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// CommitAsync provides a mock function with given fields: _a0
func (_m *Client) CommitAsync(_a0 context.Context) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context) *abcicli.ReqRes); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context) error); ok {
r1 = rf(_a0)
} else {
r1 = ret.Error(1)
@@ -133,29 +180,13 @@ func (_m *Client) CheckTxSync(_a0 types.RequestCheckTx) (*types.ResponseCheckTx,
return r0, r1
}
// CommitAsync provides a mock function with given fields:
func (_m *Client) CommitAsync() *abcicli.ReqRes {
ret := _m.Called()
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func() *abcicli.ReqRes); ok {
r0 = rf()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
}
// CommitSync provides a mock function with given fields:
func (_m *Client) CommitSync() (*types.ResponseCommit, error) {
ret := _m.Called()
// CommitSync provides a mock function with given fields: _a0
func (_m *Client) CommitSync(_a0 context.Context) (*types.ResponseCommit, error) {
ret := _m.Called(_a0)
var r0 *types.ResponseCommit
if rf, ok := ret.Get(0).(func() *types.ResponseCommit); ok {
r0 = rf()
if rf, ok := ret.Get(0).(func(context.Context) *types.ResponseCommit); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseCommit)
@@ -163,8 +194,8 @@ func (_m *Client) CommitSync() (*types.ResponseCommit, error) {
}
var r1 error
if rf, ok := ret.Get(1).(func() error); ok {
r1 = rf()
if rf, ok := ret.Get(1).(func(context.Context) error); ok {
r1 = rf(_a0)
} else {
r1 = ret.Error(1)
}
@@ -172,29 +203,36 @@ func (_m *Client) CommitSync() (*types.ResponseCommit, error) {
return r0, r1
}
// DeliverTxAsync provides a mock function with given fields: _a0
func (_m *Client) DeliverTxAsync(_a0 types.RequestDeliverTx) *abcicli.ReqRes {
ret := _m.Called(_a0)
// DeliverTxAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) DeliverTxAsync(_a0 context.Context, _a1 types.RequestDeliverTx) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(types.RequestDeliverTx) *abcicli.ReqRes); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestDeliverTx) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestDeliverTx) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// DeliverTxSync provides a mock function with given fields: _a0
func (_m *Client) DeliverTxSync(_a0 types.RequestDeliverTx) (*types.ResponseDeliverTx, error) {
ret := _m.Called(_a0)
// DeliverTxSync provides a mock function with given fields: _a0, _a1
func (_m *Client) DeliverTxSync(_a0 context.Context, _a1 types.RequestDeliverTx) (*types.ResponseDeliverTx, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseDeliverTx
if rf, ok := ret.Get(0).(func(types.RequestDeliverTx) *types.ResponseDeliverTx); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestDeliverTx) *types.ResponseDeliverTx); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseDeliverTx)
@@ -202,8 +240,8 @@ func (_m *Client) DeliverTxSync(_a0 types.RequestDeliverTx) (*types.ResponseDeli
}
var r1 error
if rf, ok := ret.Get(1).(func(types.RequestDeliverTx) error); ok {
r1 = rf(_a0)
if rf, ok := ret.Get(1).(func(context.Context, types.RequestDeliverTx) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
@@ -211,29 +249,36 @@ func (_m *Client) DeliverTxSync(_a0 types.RequestDeliverTx) (*types.ResponseDeli
return r0, r1
}
// EchoAsync provides a mock function with given fields: msg
func (_m *Client) EchoAsync(msg string) *abcicli.ReqRes {
ret := _m.Called(msg)
// EchoAsync provides a mock function with given fields: ctx, msg
func (_m *Client) EchoAsync(ctx context.Context, msg string) (*abcicli.ReqRes, error) {
ret := _m.Called(ctx, msg)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(string) *abcicli.ReqRes); ok {
r0 = rf(msg)
if rf, ok := ret.Get(0).(func(context.Context, string) *abcicli.ReqRes); ok {
r0 = rf(ctx, msg)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, string) error); ok {
r1 = rf(ctx, msg)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// EchoSync provides a mock function with given fields: msg
func (_m *Client) EchoSync(msg string) (*types.ResponseEcho, error) {
ret := _m.Called(msg)
// EchoSync provides a mock function with given fields: ctx, msg
func (_m *Client) EchoSync(ctx context.Context, msg string) (*types.ResponseEcho, error) {
ret := _m.Called(ctx, msg)
var r0 *types.ResponseEcho
if rf, ok := ret.Get(0).(func(string) *types.ResponseEcho); ok {
r0 = rf(msg)
if rf, ok := ret.Get(0).(func(context.Context, string) *types.ResponseEcho); ok {
r0 = rf(ctx, msg)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseEcho)
@@ -241,8 +286,8 @@ func (_m *Client) EchoSync(msg string) (*types.ResponseEcho, error) {
}
var r1 error
if rf, ok := ret.Get(1).(func(string) error); ok {
r1 = rf(msg)
if rf, ok := ret.Get(1).(func(context.Context, string) error); ok {
r1 = rf(ctx, msg)
} else {
r1 = ret.Error(1)
}
@@ -250,29 +295,36 @@ func (_m *Client) EchoSync(msg string) (*types.ResponseEcho, error) {
return r0, r1
}
// EndBlockAsync provides a mock function with given fields: _a0
func (_m *Client) EndBlockAsync(_a0 types.RequestEndBlock) *abcicli.ReqRes {
ret := _m.Called(_a0)
// EndBlockAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) EndBlockAsync(_a0 context.Context, _a1 types.RequestEndBlock) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(types.RequestEndBlock) *abcicli.ReqRes); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestEndBlock) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestEndBlock) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// EndBlockSync provides a mock function with given fields: _a0
func (_m *Client) EndBlockSync(_a0 types.RequestEndBlock) (*types.ResponseEndBlock, error) {
ret := _m.Called(_a0)
// EndBlockSync provides a mock function with given fields: _a0, _a1
func (_m *Client) EndBlockSync(_a0 context.Context, _a1 types.RequestEndBlock) (*types.ResponseEndBlock, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseEndBlock
if rf, ok := ret.Get(0).(func(types.RequestEndBlock) *types.ResponseEndBlock); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestEndBlock) *types.ResponseEndBlock); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseEndBlock)
@@ -280,8 +332,8 @@ func (_m *Client) EndBlockSync(_a0 types.RequestEndBlock) (*types.ResponseEndBlo
}
var r1 error
if rf, ok := ret.Get(1).(func(types.RequestEndBlock) error); ok {
r1 = rf(_a0)
if rf, ok := ret.Get(1).(func(context.Context, types.RequestEndBlock) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
@@ -303,42 +355,12 @@ func (_m *Client) Error() error {
return r0
}
// FlushAsync provides a mock function with given fields:
func (_m *Client) FlushAsync() *abcicli.ReqRes {
ret := _m.Called()
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func() *abcicli.ReqRes); ok {
r0 = rf()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
}
// FlushSync provides a mock function with given fields:
func (_m *Client) FlushSync() error {
ret := _m.Called()
var r0 error
if rf, ok := ret.Get(0).(func() error); ok {
r0 = rf()
} else {
r0 = ret.Error(0)
}
return r0
}
// InfoAsync provides a mock function with given fields: _a0
func (_m *Client) InfoAsync(_a0 types.RequestInfo) *abcicli.ReqRes {
// FlushAsync provides a mock function with given fields: _a0
func (_m *Client) FlushAsync(_a0 context.Context) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(types.RequestInfo) *abcicli.ReqRes); ok {
if rf, ok := ret.Get(0).(func(context.Context) *abcicli.ReqRes); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
@@ -346,24 +368,8 @@ func (_m *Client) InfoAsync(_a0 types.RequestInfo) *abcicli.ReqRes {
}
}
return r0
}
// InfoSync provides a mock function with given fields: _a0
func (_m *Client) InfoSync(_a0 types.RequestInfo) (*types.ResponseInfo, error) {
ret := _m.Called(_a0)
var r0 *types.ResponseInfo
if rf, ok := ret.Get(0).(func(types.RequestInfo) *types.ResponseInfo); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseInfo)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(types.RequestInfo) error); ok {
if rf, ok := ret.Get(1).(func(context.Context) error); ok {
r1 = rf(_a0)
} else {
r1 = ret.Error(1)
@@ -372,29 +378,96 @@ func (_m *Client) InfoSync(_a0 types.RequestInfo) (*types.ResponseInfo, error) {
return r0, r1
}
// InitChainAsync provides a mock function with given fields: _a0
func (_m *Client) InitChainAsync(_a0 types.RequestInitChain) *abcicli.ReqRes {
// FlushSync provides a mock function with given fields: _a0
func (_m *Client) FlushSync(_a0 context.Context) error {
ret := _m.Called(_a0)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(types.RequestInitChain) *abcicli.ReqRes); ok {
var r0 error
if rf, ok := ret.Get(0).(func(context.Context) error); ok {
r0 = rf(_a0)
} else {
r0 = ret.Error(0)
}
return r0
}
// InfoAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) InfoAsync(_a0 context.Context, _a1 types.RequestInfo) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestInfo) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestInfo) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// InitChainSync provides a mock function with given fields: _a0
func (_m *Client) InitChainSync(_a0 types.RequestInitChain) (*types.ResponseInitChain, error) {
ret := _m.Called(_a0)
// InfoSync provides a mock function with given fields: _a0, _a1
func (_m *Client) InfoSync(_a0 context.Context, _a1 types.RequestInfo) (*types.ResponseInfo, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseInfo
if rf, ok := ret.Get(0).(func(context.Context, types.RequestInfo) *types.ResponseInfo); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseInfo)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestInfo) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// InitChainAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) InitChainAsync(_a0 context.Context, _a1 types.RequestInitChain) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestInitChain) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestInitChain) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// InitChainSync provides a mock function with given fields: _a0, _a1
func (_m *Client) InitChainSync(_a0 context.Context, _a1 types.RequestInitChain) (*types.ResponseInitChain, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseInitChain
if rf, ok := ret.Get(0).(func(types.RequestInitChain) *types.ResponseInitChain); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestInitChain) *types.ResponseInitChain); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseInitChain)
@@ -402,8 +475,8 @@ func (_m *Client) InitChainSync(_a0 types.RequestInitChain) (*types.ResponseInit
}
var r1 error
if rf, ok := ret.Get(1).(func(types.RequestInitChain) error); ok {
r1 = rf(_a0)
if rf, ok := ret.Get(1).(func(context.Context, types.RequestInitChain) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
@@ -425,29 +498,36 @@ func (_m *Client) IsRunning() bool {
return r0
}
// ListSnapshotsAsync provides a mock function with given fields: _a0
func (_m *Client) ListSnapshotsAsync(_a0 types.RequestListSnapshots) *abcicli.ReqRes {
ret := _m.Called(_a0)
// ListSnapshotsAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) ListSnapshotsAsync(_a0 context.Context, _a1 types.RequestListSnapshots) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(types.RequestListSnapshots) *abcicli.ReqRes); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestListSnapshots) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestListSnapshots) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// ListSnapshotsSync provides a mock function with given fields: _a0
func (_m *Client) ListSnapshotsSync(_a0 types.RequestListSnapshots) (*types.ResponseListSnapshots, error) {
ret := _m.Called(_a0)
// ListSnapshotsSync provides a mock function with given fields: _a0, _a1
func (_m *Client) ListSnapshotsSync(_a0 context.Context, _a1 types.RequestListSnapshots) (*types.ResponseListSnapshots, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseListSnapshots
if rf, ok := ret.Get(0).(func(types.RequestListSnapshots) *types.ResponseListSnapshots); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestListSnapshots) *types.ResponseListSnapshots); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseListSnapshots)
@@ -455,8 +535,8 @@ func (_m *Client) ListSnapshotsSync(_a0 types.RequestListSnapshots) (*types.Resp
}
var r1 error
if rf, ok := ret.Get(1).(func(types.RequestListSnapshots) error); ok {
r1 = rf(_a0)
if rf, ok := ret.Get(1).(func(context.Context, types.RequestListSnapshots) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
@@ -464,29 +544,36 @@ func (_m *Client) ListSnapshotsSync(_a0 types.RequestListSnapshots) (*types.Resp
return r0, r1
}
// LoadSnapshotChunkAsync provides a mock function with given fields: _a0
func (_m *Client) LoadSnapshotChunkAsync(_a0 types.RequestLoadSnapshotChunk) *abcicli.ReqRes {
ret := _m.Called(_a0)
// LoadSnapshotChunkAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) LoadSnapshotChunkAsync(_a0 context.Context, _a1 types.RequestLoadSnapshotChunk) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(types.RequestLoadSnapshotChunk) *abcicli.ReqRes); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestLoadSnapshotChunk) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestLoadSnapshotChunk) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// LoadSnapshotChunkSync provides a mock function with given fields: _a0
func (_m *Client) LoadSnapshotChunkSync(_a0 types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
ret := _m.Called(_a0)
// LoadSnapshotChunkSync provides a mock function with given fields: _a0, _a1
func (_m *Client) LoadSnapshotChunkSync(_a0 context.Context, _a1 types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseLoadSnapshotChunk
if rf, ok := ret.Get(0).(func(types.RequestLoadSnapshotChunk) *types.ResponseLoadSnapshotChunk); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestLoadSnapshotChunk) *types.ResponseLoadSnapshotChunk); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseLoadSnapshotChunk)
@@ -494,8 +581,8 @@ func (_m *Client) LoadSnapshotChunkSync(_a0 types.RequestLoadSnapshotChunk) (*ty
}
var r1 error
if rf, ok := ret.Get(1).(func(types.RequestLoadSnapshotChunk) error); ok {
r1 = rf(_a0)
if rf, ok := ret.Get(1).(func(context.Context, types.RequestLoadSnapshotChunk) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
@@ -503,29 +590,36 @@ func (_m *Client) LoadSnapshotChunkSync(_a0 types.RequestLoadSnapshotChunk) (*ty
return r0, r1
}
// OfferSnapshotAsync provides a mock function with given fields: _a0
func (_m *Client) OfferSnapshotAsync(_a0 types.RequestOfferSnapshot) *abcicli.ReqRes {
ret := _m.Called(_a0)
// OfferSnapshotAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) OfferSnapshotAsync(_a0 context.Context, _a1 types.RequestOfferSnapshot) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(types.RequestOfferSnapshot) *abcicli.ReqRes); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestOfferSnapshot) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestOfferSnapshot) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// OfferSnapshotSync provides a mock function with given fields: _a0
func (_m *Client) OfferSnapshotSync(_a0 types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error) {
ret := _m.Called(_a0)
// OfferSnapshotSync provides a mock function with given fields: _a0, _a1
func (_m *Client) OfferSnapshotSync(_a0 context.Context, _a1 types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseOfferSnapshot
if rf, ok := ret.Get(0).(func(types.RequestOfferSnapshot) *types.ResponseOfferSnapshot); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestOfferSnapshot) *types.ResponseOfferSnapshot); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseOfferSnapshot)
@@ -533,8 +627,8 @@ func (_m *Client) OfferSnapshotSync(_a0 types.RequestOfferSnapshot) (*types.Resp
}
var r1 error
if rf, ok := ret.Get(1).(func(types.RequestOfferSnapshot) error); ok {
r1 = rf(_a0)
if rf, ok := ret.Get(1).(func(context.Context, types.RequestOfferSnapshot) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
@@ -575,38 +669,22 @@ func (_m *Client) OnStop() {
_m.Called()
}
// PrepareProposalAsync provides a mock function with given fields: _a0
func (_m *Client) PrepareProposalAsync(_a0 types.RequestPrepareProposal) *abcicli.ReqRes {
ret := _m.Called(_a0)
// QueryAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) QueryAsync(_a0 context.Context, _a1 types.RequestQuery) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(types.RequestPrepareProposal) *abcicli.ReqRes); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestQuery) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
}
// PrepareProposalSync provides a mock function with given fields: _a0
func (_m *Client) PrepareProposalSync(_a0 types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
ret := _m.Called(_a0)
var r0 *types.ResponsePrepareProposal
if rf, ok := ret.Get(0).(func(types.RequestPrepareProposal) *types.ResponsePrepareProposal); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponsePrepareProposal)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(types.RequestPrepareProposal) error); ok {
r1 = rf(_a0)
if rf, ok := ret.Get(1).(func(context.Context, types.RequestQuery) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
@@ -614,68 +692,13 @@ func (_m *Client) PrepareProposalSync(_a0 types.RequestPrepareProposal) (*types.
return r0, r1
}
// ProcessProposalAsync provides a mock function with given fields: _a0
func (_m *Client) ProcessProposalAsync(_a0 types.RequestProcessProposal) *abcicli.ReqRes {
ret := _m.Called(_a0)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(types.RequestProcessProposal) *abcicli.ReqRes); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
}
// ProcessProposalSync provides a mock function with given fields: _a0
func (_m *Client) ProcessProposalSync(_a0 types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
ret := _m.Called(_a0)
var r0 *types.ResponseProcessProposal
if rf, ok := ret.Get(0).(func(types.RequestProcessProposal) *types.ResponseProcessProposal); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseProcessProposal)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(types.RequestProcessProposal) error); ok {
r1 = rf(_a0)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// QueryAsync provides a mock function with given fields: _a0
func (_m *Client) QueryAsync(_a0 types.RequestQuery) *abcicli.ReqRes {
ret := _m.Called(_a0)
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(types.RequestQuery) *abcicli.ReqRes); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
return r0
}
// QuerySync provides a mock function with given fields: _a0
func (_m *Client) QuerySync(_a0 types.RequestQuery) (*types.ResponseQuery, error) {
ret := _m.Called(_a0)
// QuerySync provides a mock function with given fields: _a0, _a1
func (_m *Client) QuerySync(_a0 context.Context, _a1 types.RequestQuery) (*types.ResponseQuery, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseQuery
if rf, ok := ret.Get(0).(func(types.RequestQuery) *types.ResponseQuery); ok {
r0 = rf(_a0)
if rf, ok := ret.Get(0).(func(context.Context, types.RequestQuery) *types.ResponseQuery); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseQuery)
@@ -683,8 +706,8 @@ func (_m *Client) QuerySync(_a0 types.RequestQuery) (*types.ResponseQuery, error
}
var r1 error
if rf, ok := ret.Get(1).(func(types.RequestQuery) error); ok {
r1 = rf(_a0)
if rf, ok := ret.Get(1).(func(context.Context, types.RequestQuery) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
@@ -773,18 +796,3 @@ func (_m *Client) String() string {
return r0
}
type mockConstructorTestingTNewClient interface {
mock.TestingT
Cleanup(func())
}
// NewClient creates a new instance of Client. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewClient(t mockConstructorTestingTNewClient) *Client {
mock := &Client{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
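
For illustration only (not part of the diff): a minimal sketch of how a test could drive the regenerated, context-aware mock using the usual testify/mockery conventions. The mocks import path and the expected values are assumptions, not taken from this changeset.

package mocks_test

import (
	"context"
	"testing"

	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"

	"github.com/tendermint/tendermint/abci/client/mocks"
	"github.com/tendermint/tendermint/abci/types"
)

// Exercises the new two-argument signature: context first, then the request.
func TestCheckTxSyncMock(t *testing.T) {
	m := mocks.NewClient(t)

	m.On("CheckTxSync", mock.Anything, types.RequestCheckTx{Tx: []byte("tx")}).
		Return(&types.ResponseCheckTx{Code: 0}, nil)

	res, err := m.CheckTxSync(context.Background(), types.RequestCheckTx{Tx: []byte("tx")})
	require.NoError(t, err)
	require.EqualValues(t, 0, res.Code)
}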


@@ -3,6 +3,7 @@ package abcicli
import (
"bufio"
"container/list"
"context"
"errors"
"fmt"
"io"
@@ -18,10 +19,18 @@ import (
)
const (
reqQueueSize = 256 // TODO make configurable
flushThrottleMS = 20 // Don't wait longer than...
// reqQueueSize is the max number of queued async requests.
// (memory: 256MB max assuming 1MB transactions)
reqQueueSize = 256
// Don't wait longer than...
flushThrottleMS = 20
)
type reqResWithContext struct {
R *ReqRes
C context.Context // if C.Err() is non-nil, the ReqRes will be thrown away (ignored)
}
// This is goroutine-safe, but users should beware that the application in
// general is not meant to be interfaced with concurrent callers.
type socketClient struct {
@@ -31,7 +40,7 @@ type socketClient struct {
mustConnect bool
conn net.Conn
reqQueue chan *ReqRes
reqQueue chan *reqResWithContext
flushTimer *timer.ThrottleTimer
mtx tmsync.Mutex
@@ -47,7 +56,7 @@ var _ Client = (*socketClient)(nil)
// if it fails to connect.
func NewSocketClient(addr string, mustConnect bool) Client {
cli := &socketClient{
reqQueue: make(chan *ReqRes, reqQueueSize),
reqQueue: make(chan *reqResWithContext, reqQueueSize),
flushTimer: timer.NewThrottleTimer("socketClient", flushThrottleMS),
mustConnect: mustConnect,
@@ -123,15 +132,20 @@ func (cli *socketClient) sendRequestsRoutine(conn io.Writer) {
case reqres := <-cli.reqQueue:
// cli.Logger.Debug("Sent request", "requestType", reflect.TypeOf(reqres.Request), "request", reqres.Request)
cli.willSendReq(reqres)
err := types.WriteMessage(reqres.Request, w)
if reqres.C.Err() != nil {
cli.Logger.Debug("Request's context is done", "req", reqres.R, "err", reqres.C.Err())
continue
}
cli.willSendReq(reqres.R)
err := types.WriteMessage(reqres.R.Request, w)
if err != nil {
cli.stopForError(fmt.Errorf("write to buffer: %w", err))
return
}
// If it's a flush request, flush the current buffer.
if _, ok := reqres.Request.Value.(*types.Request_Flush); ok {
if _, ok := reqres.R.Request.Value.(*types.Request_Flush); ok {
err = w.Flush()
if err != nil {
cli.stopForError(fmt.Errorf("flush buffer: %w", err))
@@ -140,7 +154,7 @@ func (cli *socketClient) sendRequestsRoutine(conn io.Writer) {
}
case <-cli.flushTimer.Ch: // flush queue
select {
case cli.reqQueue <- NewReqRes(types.ToRequestFlush()):
case cli.reqQueue <- &reqResWithContext{R: NewReqRes(types.ToRequestFlush()), C: context.Background()}:
default:
// Probably will fill the buffer, or retry later.
}
@@ -212,231 +226,273 @@ func (cli *socketClient) didRecvResponse(res *types.Response) error {
//
// NOTE: It is possible this callback isn't set on the reqres object at this
// point, in which case it will be called later, when it is set.
reqres.InvokeCallback()
if cb := reqres.GetCallback(); cb != nil {
cb(res)
}
return nil
}
//----------------------------------------
func (cli *socketClient) EchoAsync(msg string) *ReqRes {
return cli.queueRequest(types.ToRequestEcho(msg))
func (cli *socketClient) EchoAsync(ctx context.Context, msg string) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestEcho(msg))
}
func (cli *socketClient) FlushAsync() *ReqRes {
return cli.queueRequest(types.ToRequestFlush())
func (cli *socketClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestFlush())
}
func (cli *socketClient) InfoAsync(req types.RequestInfo) *ReqRes {
return cli.queueRequest(types.ToRequestInfo(req))
func (cli *socketClient) InfoAsync(ctx context.Context, req types.RequestInfo) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestInfo(req))
}
func (cli *socketClient) DeliverTxAsync(req types.RequestDeliverTx) *ReqRes {
return cli.queueRequest(types.ToRequestDeliverTx(req))
func (cli *socketClient) DeliverTxAsync(ctx context.Context, req types.RequestDeliverTx) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestDeliverTx(req))
}
func (cli *socketClient) CheckTxAsync(req types.RequestCheckTx) *ReqRes {
return cli.queueRequest(types.ToRequestCheckTx(req))
func (cli *socketClient) CheckTxAsync(ctx context.Context, req types.RequestCheckTx) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestCheckTx(req))
}
func (cli *socketClient) QueryAsync(req types.RequestQuery) *ReqRes {
return cli.queueRequest(types.ToRequestQuery(req))
func (cli *socketClient) QueryAsync(ctx context.Context, req types.RequestQuery) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestQuery(req))
}
func (cli *socketClient) CommitAsync() *ReqRes {
return cli.queueRequest(types.ToRequestCommit())
func (cli *socketClient) CommitAsync(ctx context.Context) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestCommit())
}
func (cli *socketClient) InitChainAsync(req types.RequestInitChain) *ReqRes {
return cli.queueRequest(types.ToRequestInitChain(req))
func (cli *socketClient) InitChainAsync(ctx context.Context, req types.RequestInitChain) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestInitChain(req))
}
func (cli *socketClient) BeginBlockAsync(req types.RequestBeginBlock) *ReqRes {
return cli.queueRequest(types.ToRequestBeginBlock(req))
func (cli *socketClient) BeginBlockAsync(ctx context.Context, req types.RequestBeginBlock) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestBeginBlock(req))
}
func (cli *socketClient) EndBlockAsync(req types.RequestEndBlock) *ReqRes {
return cli.queueRequest(types.ToRequestEndBlock(req))
func (cli *socketClient) EndBlockAsync(ctx context.Context, req types.RequestEndBlock) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestEndBlock(req))
}
func (cli *socketClient) ListSnapshotsAsync(req types.RequestListSnapshots) *ReqRes {
return cli.queueRequest(types.ToRequestListSnapshots(req))
func (cli *socketClient) ListSnapshotsAsync(ctx context.Context, req types.RequestListSnapshots) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestListSnapshots(req))
}
func (cli *socketClient) OfferSnapshotAsync(req types.RequestOfferSnapshot) *ReqRes {
return cli.queueRequest(types.ToRequestOfferSnapshot(req))
func (cli *socketClient) OfferSnapshotAsync(ctx context.Context, req types.RequestOfferSnapshot) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestOfferSnapshot(req))
}
func (cli *socketClient) LoadSnapshotChunkAsync(req types.RequestLoadSnapshotChunk) *ReqRes {
return cli.queueRequest(types.ToRequestLoadSnapshotChunk(req))
func (cli *socketClient) LoadSnapshotChunkAsync(
ctx context.Context,
req types.RequestLoadSnapshotChunk,
) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestLoadSnapshotChunk(req))
}
func (cli *socketClient) ApplySnapshotChunkAsync(req types.RequestApplySnapshotChunk) *ReqRes {
return cli.queueRequest(types.ToRequestApplySnapshotChunk(req))
}
func (cli *socketClient) PrepareProposalAsync(req types.RequestPrepareProposal) *ReqRes {
return cli.queueRequest(types.ToRequestPrepareProposal(req))
}
func (cli *socketClient) ProcessProposalAsync(req types.RequestProcessProposal) *ReqRes {
return cli.queueRequest(types.ToRequestProcessProposal(req))
func (cli *socketClient) ApplySnapshotChunkAsync(
ctx context.Context,
req types.RequestApplySnapshotChunk,
) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestApplySnapshotChunk(req))
}
//----------------------------------------
func (cli *socketClient) FlushSync() error {
reqRes := cli.queueRequest(types.ToRequestFlush())
func (cli *socketClient) FlushSync(ctx context.Context) error {
reqRes, err := cli.queueRequest(ctx, types.ToRequestFlush(), true)
if err != nil {
return queueErr(err)
}
if err := cli.Error(); err != nil {
return err
}
reqRes.Wait() // NOTE: if we don't flush the queue, it's possible to get stuck here
return cli.Error()
gotResp := make(chan struct{})
go func() {
// NOTE: if we don't flush the queue, it's possible to get stuck here
reqRes.Wait()
close(gotResp)
}()
select {
case <-gotResp:
return cli.Error()
case <-ctx.Done():
return ctx.Err()
}
}
func (cli *socketClient) EchoSync(msg string) (*types.ResponseEcho, error) {
reqres := cli.queueRequest(types.ToRequestEcho(msg))
if err := cli.FlushSync(); err != nil {
func (cli *socketClient) EchoSync(ctx context.Context, msg string) (*types.ResponseEcho, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestEcho(msg))
if err != nil {
return nil, err
}
return reqres.Response.GetEcho(), cli.Error()
return reqres.Response.GetEcho(), nil
}
func (cli *socketClient) InfoSync(req types.RequestInfo) (*types.ResponseInfo, error) {
reqres := cli.queueRequest(types.ToRequestInfo(req))
if err := cli.FlushSync(); err != nil {
func (cli *socketClient) InfoSync(
ctx context.Context,
req types.RequestInfo,
) (*types.ResponseInfo, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestInfo(req))
if err != nil {
return nil, err
}
return reqres.Response.GetInfo(), cli.Error()
return reqres.Response.GetInfo(), nil
}
func (cli *socketClient) DeliverTxSync(req types.RequestDeliverTx) (*types.ResponseDeliverTx, error) {
reqres := cli.queueRequest(types.ToRequestDeliverTx(req))
if err := cli.FlushSync(); err != nil {
func (cli *socketClient) DeliverTxSync(
ctx context.Context,
req types.RequestDeliverTx,
) (*types.ResponseDeliverTx, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestDeliverTx(req))
if err != nil {
return nil, err
}
return reqres.Response.GetDeliverTx(), cli.Error()
return reqres.Response.GetDeliverTx(), nil
}
func (cli *socketClient) CheckTxSync(req types.RequestCheckTx) (*types.ResponseCheckTx, error) {
reqres := cli.queueRequest(types.ToRequestCheckTx(req))
if err := cli.FlushSync(); err != nil {
func (cli *socketClient) CheckTxSync(
ctx context.Context,
req types.RequestCheckTx,
) (*types.ResponseCheckTx, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestCheckTx(req))
if err != nil {
return nil, err
}
return reqres.Response.GetCheckTx(), cli.Error()
return reqres.Response.GetCheckTx(), nil
}
func (cli *socketClient) QuerySync(req types.RequestQuery) (*types.ResponseQuery, error) {
reqres := cli.queueRequest(types.ToRequestQuery(req))
if err := cli.FlushSync(); err != nil {
func (cli *socketClient) QuerySync(
ctx context.Context,
req types.RequestQuery,
) (*types.ResponseQuery, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestQuery(req))
if err != nil {
return nil, err
}
return reqres.Response.GetQuery(), cli.Error()
return reqres.Response.GetQuery(), nil
}
func (cli *socketClient) CommitSync() (*types.ResponseCommit, error) {
reqres := cli.queueRequest(types.ToRequestCommit())
if err := cli.FlushSync(); err != nil {
func (cli *socketClient) CommitSync(ctx context.Context) (*types.ResponseCommit, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestCommit())
if err != nil {
return nil, err
}
return reqres.Response.GetCommit(), cli.Error()
return reqres.Response.GetCommit(), nil
}
func (cli *socketClient) InitChainSync(req types.RequestInitChain) (*types.ResponseInitChain, error) {
reqres := cli.queueRequest(types.ToRequestInitChain(req))
if err := cli.FlushSync(); err != nil {
func (cli *socketClient) InitChainSync(
ctx context.Context,
req types.RequestInitChain,
) (*types.ResponseInitChain, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestInitChain(req))
if err != nil {
return nil, err
}
return reqres.Response.GetInitChain(), cli.Error()
return reqres.Response.GetInitChain(), nil
}
func (cli *socketClient) BeginBlockSync(req types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
reqres := cli.queueRequest(types.ToRequestBeginBlock(req))
if err := cli.FlushSync(); err != nil {
func (cli *socketClient) BeginBlockSync(
ctx context.Context,
req types.RequestBeginBlock,
) (*types.ResponseBeginBlock, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestBeginBlock(req))
if err != nil {
return nil, err
}
return reqres.Response.GetBeginBlock(), cli.Error()
return reqres.Response.GetBeginBlock(), nil
}
func (cli *socketClient) EndBlockSync(req types.RequestEndBlock) (*types.ResponseEndBlock, error) {
reqres := cli.queueRequest(types.ToRequestEndBlock(req))
if err := cli.FlushSync(); err != nil {
func (cli *socketClient) EndBlockSync(
ctx context.Context,
req types.RequestEndBlock,
) (*types.ResponseEndBlock, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestEndBlock(req))
if err != nil {
return nil, err
}
return reqres.Response.GetEndBlock(), cli.Error()
return reqres.Response.GetEndBlock(), nil
}
func (cli *socketClient) ListSnapshotsSync(req types.RequestListSnapshots) (*types.ResponseListSnapshots, error) {
reqres := cli.queueRequest(types.ToRequestListSnapshots(req))
if err := cli.FlushSync(); err != nil {
func (cli *socketClient) ListSnapshotsSync(
ctx context.Context,
req types.RequestListSnapshots,
) (*types.ResponseListSnapshots, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestListSnapshots(req))
if err != nil {
return nil, err
}
return reqres.Response.GetListSnapshots(), cli.Error()
return reqres.Response.GetListSnapshots(), nil
}
func (cli *socketClient) OfferSnapshotSync(req types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error) {
reqres := cli.queueRequest(types.ToRequestOfferSnapshot(req))
if err := cli.FlushSync(); err != nil {
func (cli *socketClient) OfferSnapshotSync(
ctx context.Context,
req types.RequestOfferSnapshot,
) (*types.ResponseOfferSnapshot, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestOfferSnapshot(req))
if err != nil {
return nil, err
}
return reqres.Response.GetOfferSnapshot(), cli.Error()
return reqres.Response.GetOfferSnapshot(), nil
}
func (cli *socketClient) LoadSnapshotChunkSync(
ctx context.Context,
req types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
reqres := cli.queueRequest(types.ToRequestLoadSnapshotChunk(req))
if err := cli.FlushSync(); err != nil {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestLoadSnapshotChunk(req))
if err != nil {
return nil, err
}
return reqres.Response.GetLoadSnapshotChunk(), cli.Error()
return reqres.Response.GetLoadSnapshotChunk(), nil
}
func (cli *socketClient) ApplySnapshotChunkSync(
ctx context.Context,
req types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
reqres := cli.queueRequest(types.ToRequestApplySnapshotChunk(req))
if err := cli.FlushSync(); err != nil {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestApplySnapshotChunk(req))
if err != nil {
return nil, err
}
return reqres.Response.GetApplySnapshotChunk(), cli.Error()
}
func (cli *socketClient) PrepareProposalSync(req types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
reqres := cli.queueRequest(types.ToRequestPrepareProposal(req))
if err := cli.FlushSync(); err != nil {
return nil, err
}
return reqres.Response.GetPrepareProposal(), cli.Error()
}
func (cli *socketClient) ProcessProposalSync(req types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
reqres := cli.queueRequest(types.ToRequestProcessProposal(req))
if err := cli.FlushSync(); err != nil {
return nil, err
}
return reqres.Response.GetProcessProposal(), cli.Error()
return reqres.Response.GetApplySnapshotChunk(), nil
}
//----------------------------------------
func (cli *socketClient) queueRequest(req *types.Request) *ReqRes {
// queueRequest enqueues req onto the queue. If the queue is full, it either
// returns an error (sync=false) or blocks (sync=true).
//
// When sync=true, ctx can be used to break early. When sync=false, ctx will be
// used later to determine if the request should be dropped (if ctx.Err is
// non-nil).
//
// The caller is responsible for checking cli.Error.
func (cli *socketClient) queueRequest(ctx context.Context, req *types.Request, sync bool) (*ReqRes, error) {
reqres := NewReqRes(req)
// TODO: set cli.err if reqQueue times out
cli.reqQueue <- reqres
if sync {
select {
case cli.reqQueue <- &reqResWithContext{R: reqres, C: context.Background()}:
case <-ctx.Done():
return nil, ctx.Err()
}
} else {
select {
case cli.reqQueue <- &reqResWithContext{R: reqres, C: ctx}:
default:
return nil, errors.New("buffer is full")
}
}
// Maybe auto-flush, or unset auto-flush
switch req.Value.(type) {
@@ -446,7 +502,41 @@ func (cli *socketClient) queueRequest(req *types.Request) *ReqRes {
cli.flushTimer.Set()
}
return reqres
return reqres, nil
}
func (cli *socketClient) queueRequestAsync(
ctx context.Context,
req *types.Request,
) (*ReqRes, error) {
reqres, err := cli.queueRequest(ctx, req, false)
if err != nil {
return nil, queueErr(err)
}
return reqres, cli.Error()
}
func (cli *socketClient) queueRequestAndFlushSync(
ctx context.Context,
req *types.Request,
) (*ReqRes, error) {
reqres, err := cli.queueRequest(ctx, req, true)
if err != nil {
return nil, queueErr(err)
}
if err := cli.FlushSync(ctx); err != nil {
return nil, err
}
return reqres, cli.Error()
}
func queueErr(e error) error {
return fmt.Errorf("can't queue req: %w", e)
}
func (cli *socketClient) flushQueue() {
@@ -464,7 +554,7 @@ LOOP:
for {
select {
case reqres := <-cli.reqQueue:
reqres.Done()
reqres.R.Done()
default:
break LOOP
}
@@ -503,10 +593,6 @@ func resMatchesReq(req *types.Request, res *types.Response) (ok bool) {
_, ok = res.Value.(*types.Response_ListSnapshots)
case *types.Request_OfferSnapshot:
_, ok = res.Value.(*types.Response_OfferSnapshot)
case *types.Request_PrepareProposal:
_, ok = res.Value.(*types.Response_PrepareProposal)
case *types.Request_ProcessProposal:
_, ok = res.Value.(*types.Response_ProcessProposal)
}
return ok
}
@@ -517,12 +603,10 @@ func (cli *socketClient) stopForError(err error) {
}
cli.mtx.Lock()
if cli.err == nil {
cli.err = err
}
cli.err = err
cli.mtx.Unlock()
cli.Logger.Error(fmt.Sprintf("Stopping abci.socketClient for error: %v", err.Error()))
cli.Logger.Info("Stopping abci.socketClient", "reason", err)
if err := cli.Stop(); err != nil {
cli.Logger.Error("Error stopping abci.socketClient", "err", err)
}
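
As a usage sketch (not part of the diff): the context-aware client API above can be driven as shown below. The package, helper name, and timeout are illustrative assumptions; only the Client interface calls come from this changeset.

package example

import (
	"context"
	"time"

	abcicli "github.com/tendermint/tendermint/abci/client"
	"github.com/tendermint/tendermint/abci/types"
)

// checkTxWithTimeout bounds both queueing the request and waiting for the
// response with a single context, which the socket client now honours.
func checkTxWithTimeout(cli abcicli.Client, tx []byte) (*types.ResponseCheckTx, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	return cli.CheckTxSync(ctx, types.RequestCheckTx{Tx: tx})
}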


@@ -1,8 +1,8 @@
package abcicli_test
import (
"context"
"fmt"
"sync"
"testing"
"time"
@@ -16,6 +16,8 @@ import (
"github.com/tendermint/tendermint/libs/service"
)
var ctx = context.Background()
func TestProperSyncCalls(t *testing.T) {
app := slowApp{}
@@ -34,11 +36,12 @@ func TestProperSyncCalls(t *testing.T) {
resp := make(chan error, 1)
go func() {
// This is BeginBlockSync unrolled....
reqres := c.BeginBlockAsync(types.RequestBeginBlock{})
err := c.FlushSync()
require.NoError(t, err)
reqres, err := c.BeginBlockAsync(ctx, types.RequestBeginBlock{})
assert.NoError(t, err)
err = c.FlushSync(context.Background())
assert.NoError(t, err)
res := reqres.Response.GetBeginBlock()
require.NotNil(t, res)
assert.NotNil(t, res)
resp <- c.Error()
}()
@@ -69,14 +72,16 @@ func TestHangingSyncCalls(t *testing.T) {
resp := make(chan error, 1)
go func() {
// Start BeginBlock and flush it
reqres := c.BeginBlockAsync(types.RequestBeginBlock{})
flush := c.FlushAsync()
reqres, err := c.BeginBlockAsync(ctx, types.RequestBeginBlock{})
assert.NoError(t, err)
flush, err := c.FlushAsync(ctx)
assert.NoError(t, err)
// wait 20 ms for all events to travel the socket, but
// no response yet from the server
time.Sleep(20 * time.Millisecond)
// kill the server, so the connections break
err := s.Stop()
require.NoError(t, err)
err = s.Stop()
assert.NoError(t, err)
// wait for the response from BeginBlock
reqres.Wait()
@@ -119,71 +124,3 @@ func (slowApp) BeginBlock(req types.RequestBeginBlock) types.ResponseBeginBlock
time.Sleep(200 * time.Millisecond)
return types.ResponseBeginBlock{}
}
// TestCallbackInvokedWhenSetLate ensures that the callback is invoked when
// set after the client completes the call into the app. Currently this
// test relies on the callback being allowed to be invoked twice if set multiple
// times, once when set early and once when set late.
func TestCallbackInvokedWhenSetLate(t *testing.T) {
wg := &sync.WaitGroup{}
wg.Add(1)
app := blockedABCIApplication{
wg: wg,
}
_, c := setupClientServer(t, app)
reqRes := c.CheckTxAsync(types.RequestCheckTx{})
done := make(chan struct{})
cb := func(_ *types.Response) {
close(done)
}
reqRes.SetCallback(cb)
app.wg.Done()
<-done
var called bool
cb = func(_ *types.Response) {
called = true
}
reqRes.SetCallback(cb)
require.True(t, called)
}
type blockedABCIApplication struct {
wg *sync.WaitGroup
types.BaseApplication
}
func (b blockedABCIApplication) CheckTx(r types.RequestCheckTx) types.ResponseCheckTx {
b.wg.Wait()
return b.BaseApplication.CheckTx(r)
}
// TestCallbackInvokedWhenSetEarly ensures that the callback is invoked when
// set before the client completes the call into the app.
func TestCallbackInvokedWhenSetEarly(t *testing.T) {
wg := &sync.WaitGroup{}
wg.Add(1)
app := blockedABCIApplication{
wg: wg,
}
_, c := setupClientServer(t, app)
reqRes := c.CheckTxAsync(types.RequestCheckTx{})
done := make(chan struct{})
cb := func(_ *types.Response) {
close(done)
}
reqRes.SetCallback(cb)
app.wg.Done()
called := func() bool {
select {
case <-done:
return true
default:
return false
}
}
require.Eventually(t, called, time.Second, time.Millisecond*25)
}


@@ -2,6 +2,7 @@ package main
import (
"bufio"
"context"
"encoding/hex"
"errors"
"fmt"
@@ -16,6 +17,7 @@ import (
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/code"
"github.com/tendermint/tendermint/abci/example/counter"
"github.com/tendermint/tendermint/abci/example/kvstore"
"github.com/tendermint/tendermint/abci/server"
servertest "github.com/tendermint/tendermint/abci/tests/server"
@@ -28,6 +30,8 @@ import (
var (
client abcicli.Client
logger log.Logger
ctx = context.Background()
)
// flags
@@ -43,6 +47,9 @@ var (
flagHeight int
flagProve bool
// counter
flagSerial bool
// kvstore
flagPersist string
)
@@ -54,7 +61,9 @@ var RootCmd = &cobra.Command{
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
switch cmd.Use {
case "kvstore", "version":
case "counter", "kvstore": // for the examples apps, don't pre-run
return nil
case "version": // skip running for version command
return nil
}
@@ -83,11 +92,10 @@ var RootCmd = &cobra.Command{
// Structure for data passed to print response.
type response struct {
// generic abci response
Data []byte
Code uint32
Info string
Log string
Status int32
Data []byte
Code uint32
Info string
Log string
Query *queryResponse
}
@@ -130,6 +138,10 @@ func addQueryFlags() {
"whether or not to return a merkle proof of the query result")
}
func addCounterFlags() {
counterCmd.PersistentFlags().BoolVarP(&flagSerial, "serial", "", false, "enforce incrementing (serial) transactions")
}
func addKVStoreFlags() {
kvstoreCmd.PersistentFlags().StringVarP(&flagPersist, "persist", "", "", "directory to use for a database")
}
@@ -144,12 +156,12 @@ func addCommands() {
RootCmd.AddCommand(commitCmd)
RootCmd.AddCommand(versionCmd)
RootCmd.AddCommand(testCmd)
RootCmd.AddCommand(prepareProposalCmd)
RootCmd.AddCommand(processProposalCmd)
addQueryFlags()
RootCmd.AddCommand(queryCmd)
// examples
addCounterFlags()
RootCmd.AddCommand(counterCmd)
addKVStoreFlags()
RootCmd.AddCommand(kvstoreCmd)
}
@@ -187,7 +199,7 @@ This command opens an interactive console for running any of the other commands
without opening a new connection each time
`,
Args: cobra.ExactArgs(0),
ValidArgs: []string{"echo", "info", "deliver_tx", "check_tx", "prepare_proposal", "process_proposal", "commit", "query"},
ValidArgs: []string{"echo", "info", "deliver_tx", "check_tx", "commit", "query"},
RunE: cmdConsole,
}
@@ -241,22 +253,6 @@ var versionCmd = &cobra.Command{
},
}
var prepareProposalCmd = &cobra.Command{
Use: "prepare_proposal",
Short: "prepare proposal",
Long: "prepare proposal",
Args: cobra.MinimumNArgs(0),
RunE: cmdPrepareProposal,
}
var processProposalCmd = &cobra.Command{
Use: "process_proposal",
Short: "process proposal",
Long: "process proposal",
Args: cobra.MinimumNArgs(0),
RunE: cmdProcessProposal,
}
var queryCmd = &cobra.Command{
Use: "query",
Short: "query the application state",
@@ -265,6 +261,14 @@ var queryCmd = &cobra.Command{
RunE: cmdQuery,
}
var counterCmd = &cobra.Command{
Use: "counter",
Short: "ABCI demo example",
Long: "ABCI demo example",
Args: cobra.ExactArgs(0),
RunE: cmdCounter,
}
var kvstoreCmd = &cobra.Command{
Use: "kvstore",
Short: "ABCI demo example",
@@ -328,16 +332,6 @@ func cmdTest(cmd *cobra.Command, args []string) error {
return servertest.DeliverTx(client, []byte{0x00, 0x00, 0x06}, code.CodeTypeBadNonce, nil)
},
func() error { return servertest.Commit(client, []byte{0, 0, 0, 0, 0, 0, 0, 5}) },
func() error {
return servertest.PrepareProposal(client, [][]byte{
{0x01},
}, [][]byte{{0x01}}, nil)
},
func() error {
return servertest.ProcessProposal(client, [][]byte{
{0x01},
}, types.ResponseProcessProposal_ACCEPT)
},
})
}
@@ -438,10 +432,6 @@ func muxOnCommands(cmd *cobra.Command, pArgs []string) error {
return cmdInfo(cmd, actualArgs)
case "query":
return cmdQuery(cmd, actualArgs)
case "prepare_proposal":
return cmdPrepareProposal(cmd, actualArgs)
case "process_proposal":
return cmdProcessProposal(cmd, actualArgs)
default:
return cmdUnimplemented(cmd, pArgs)
}
@@ -476,7 +466,7 @@ func cmdEcho(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
msg = args[0]
}
res, err := client.EchoSync(msg)
res, err := client.EchoSync(ctx, msg)
if err != nil {
return err
}
@@ -492,7 +482,7 @@ func cmdInfo(cmd *cobra.Command, args []string) error {
if len(args) == 1 {
version = args[0]
}
res, err := client.InfoSync(types.RequestInfo{Version: version})
res, err := client.InfoSync(ctx, types.RequestInfo{Version: version})
if err != nil {
return err
}
@@ -517,7 +507,7 @@ func cmdDeliverTx(cmd *cobra.Command, args []string) error {
if err != nil {
return err
}
res, err := client.DeliverTxSync(types.RequestDeliverTx{Tx: txBytes})
res, err := client.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: txBytes})
if err != nil {
return err
}
@@ -543,7 +533,7 @@ func cmdCheckTx(cmd *cobra.Command, args []string) error {
if err != nil {
return err
}
res, err := client.CheckTxSync(types.RequestCheckTx{Tx: txBytes})
res, err := client.CheckTxSync(ctx, types.RequestCheckTx{Tx: txBytes})
if err != nil {
return err
}
@@ -558,7 +548,7 @@ func cmdCheckTx(cmd *cobra.Command, args []string) error {
// Get application Merkle root hash
func cmdCommit(cmd *cobra.Command, args []string) error {
res, err := client.CommitSync()
res, err := client.CommitSync(ctx)
if err != nil {
return err
}
@@ -583,7 +573,7 @@ func cmdQuery(cmd *cobra.Command, args []string) error {
return err
}
resQuery, err := client.QuerySync(types.RequestQuery{
resQuery, err := client.QuerySync(ctx, types.RequestQuery{
Data: queryBytes,
Path: flagPath,
Height: int64(flagHeight),
@@ -606,59 +596,30 @@ func cmdQuery(cmd *cobra.Command, args []string) error {
return nil
}
func cmdPrepareProposal(cmd *cobra.Command, args []string) error {
txsBytesArray := make([][]byte, len(args))
func cmdCounter(cmd *cobra.Command, args []string) error {
app := counter.NewApplication(flagSerial)
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
for i, arg := range args {
txBytes, err := stringOrHexToBytes(arg)
if err != nil {
return err
}
txsBytesArray[i] = txBytes
}
res, err := client.PrepareProposalSync(types.RequestPrepareProposal{
Txs: txsBytesArray,
// kvstore has to have this parameter set; otherwise it rejects the tx, as the default value is 0
MaxTxBytes: 65536,
})
// Start the listener
srv, err := server.NewServer(flagAddress, flagAbci, app)
if err != nil {
return err
}
resps := make([]response, 0, len(res.Txs))
for _, tx := range res.Txs {
resps = append(resps, response{
Code: code.CodeTypeOK,
Log: "Succeeded. Tx: " + string(tx),
})
}
printResponse(cmd, args, resps...)
return nil
}
func cmdProcessProposal(cmd *cobra.Command, args []string) error {
txsBytesArray := make([][]byte, len(args))
for i, arg := range args {
txBytes, err := stringOrHexToBytes(arg)
if err != nil {
return err
}
txsBytesArray[i] = txBytes
}
res, err := client.ProcessProposalSync(types.RequestProcessProposal{
Txs: txsBytesArray,
})
if err != nil {
srv.SetLogger(logger.With("module", "abci-server"))
if err := srv.Start(); err != nil {
return err
}
printResponse(cmd, args, response{
Status: int32(res.Status),
// Stop upon receiving SIGTERM or CTRL-C.
tmos.TrapSignal(logger, func() {
// Cleanup
if err := srv.Stop(); err != nil {
logger.Error("Error while stopping server", "err", err)
}
})
return nil
// Run forever.
select {}
}
func cmdKVStore(cmd *cobra.Command, args []string) error {
@@ -667,14 +628,11 @@ func cmdKVStore(cmd *cobra.Command, args []string) error {
// Create the application - in memory or persisted to disk
var app types.Application
if flagPersist == "" {
var err error
flagPersist, err = os.MkdirTemp("", "persistent_kvstore_tmp")
if err != nil {
return err
}
app = kvstore.NewApplication()
} else {
app = kvstore.NewPersistentKVStoreApplication(flagPersist)
app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore"))
}
app = kvstore.NewPersistentKVStoreApplication(flagPersist)
app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore"))
// Start the listener
srv, err := server.NewServer(flagAddress, flagAbci, app)
@@ -700,49 +658,44 @@ func cmdKVStore(cmd *cobra.Command, args []string) error {
//--------------------------------------------------------------------------------
func printResponse(cmd *cobra.Command, args []string, rsps ...response) {
func printResponse(cmd *cobra.Command, args []string, rsp response) {
if flagVerbose {
fmt.Println(">", cmd.Use, strings.Join(args, " "))
}
for _, rsp := range rsps {
// Always print the status code.
if rsp.Code == types.CodeTypeOK {
fmt.Printf("-> code: OK\n")
} else {
fmt.Printf("-> code: %d\n", rsp.Code)
// Always print the status code.
if rsp.Code == types.CodeTypeOK {
fmt.Printf("-> code: OK\n")
} else {
fmt.Printf("-> code: %d\n", rsp.Code)
}
}
if len(rsp.Data) != 0 {
// Do not print this line when using the commit command
// because the string comes out as gibberish
if cmd.Use != "commit" {
fmt.Printf("-> data: %s\n", rsp.Data)
}
fmt.Printf("-> data.hex: 0x%X\n", rsp.Data)
}
if rsp.Log != "" {
fmt.Printf("-> log: %s\n", rsp.Log)
}
if cmd.Use == "process_proposal" {
fmt.Printf("-> status: %s\n", types.ResponseProcessProposal_ProposalStatus_name[rsp.Status])
if len(rsp.Data) != 0 {
// Do not print this line when using the commit command
// because the string comes out as gibberish
if cmd.Use != "commit" {
fmt.Printf("-> data: %s\n", rsp.Data)
}
fmt.Printf("-> data.hex: 0x%X\n", rsp.Data)
}
if rsp.Log != "" {
fmt.Printf("-> log: %s\n", rsp.Log)
}
if rsp.Query != nil {
fmt.Printf("-> height: %d\n", rsp.Query.Height)
if rsp.Query.Key != nil {
fmt.Printf("-> key: %s\n", rsp.Query.Key)
fmt.Printf("-> key.hex: %X\n", rsp.Query.Key)
}
if rsp.Query.Value != nil {
fmt.Printf("-> value: %s\n", rsp.Query.Value)
fmt.Printf("-> value.hex: %X\n", rsp.Query.Value)
}
if rsp.Query.ProofOps != nil {
fmt.Printf("-> proof: %#v\n", rsp.Query.ProofOps)
}
if rsp.Query != nil {
fmt.Printf("-> height: %d\n", rsp.Query.Height)
if rsp.Query.Key != nil {
fmt.Printf("-> key: %s\n", rsp.Query.Key)
fmt.Printf("-> key.hex: %X\n", rsp.Query.Key)
}
if rsp.Query.Value != nil {
fmt.Printf("-> value: %s\n", rsp.Query.Value)
fmt.Printf("-> value.hex: %X\n", rsp.Query.Value)
}
if rsp.Query.ProofOps != nil {
fmt.Printf("-> proof: %#v\n", rsp.Query.ProofOps)
}
}
}


@@ -7,5 +7,4 @@ const (
CodeTypeBadNonce uint32 = 2
CodeTypeUnauthorized uint32 = 3
CodeTypeUnknownError uint32 = 4
CodeTypeExecuted uint32 = 5
)


@@ -0,0 +1,86 @@
package counter
import (
"encoding/binary"
"fmt"
"github.com/tendermint/tendermint/abci/example/code"
"github.com/tendermint/tendermint/abci/types"
)
type Application struct {
types.BaseApplication
hashCount int
txCount int
serial bool
}
func NewApplication(serial bool) *Application {
return &Application{serial: serial}
}
func (app *Application) Info(req types.RequestInfo) types.ResponseInfo {
return types.ResponseInfo{Data: fmt.Sprintf("{\"hashes\":%v,\"txs\":%v}", app.hashCount, app.txCount)}
}
func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
if app.serial {
if len(req.Tx) > 8 {
return types.ResponseDeliverTx{
Code: code.CodeTypeEncodingError,
Log: fmt.Sprintf("Max tx size is 8 bytes, got %d", len(req.Tx))}
}
tx8 := make([]byte, 8)
copy(tx8[len(tx8)-len(req.Tx):], req.Tx)
txValue := binary.BigEndian.Uint64(tx8)
if txValue != uint64(app.txCount) {
return types.ResponseDeliverTx{
Code: code.CodeTypeBadNonce,
Log: fmt.Sprintf("Invalid nonce. Expected %v, got %v", app.txCount, txValue)}
}
}
app.txCount++
return types.ResponseDeliverTx{Code: code.CodeTypeOK}
}
func (app *Application) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
if app.serial {
if len(req.Tx) > 8 {
return types.ResponseCheckTx{
Code: code.CodeTypeEncodingError,
Log: fmt.Sprintf("Max tx size is 8 bytes, got %d", len(req.Tx))}
}
tx8 := make([]byte, 8)
copy(tx8[len(tx8)-len(req.Tx):], req.Tx)
txValue := binary.BigEndian.Uint64(tx8)
if txValue < uint64(app.txCount) {
return types.ResponseCheckTx{
Code: code.CodeTypeBadNonce,
Log: fmt.Sprintf("Invalid nonce. Expected >= %v, got %v", app.txCount, txValue)}
}
}
return types.ResponseCheckTx{Code: code.CodeTypeOK}
}
func (app *Application) Commit() (resp types.ResponseCommit) {
app.hashCount++
if app.txCount == 0 {
return types.ResponseCommit{}
}
hash := make([]byte, 8)
binary.BigEndian.PutUint64(hash, uint64(app.txCount))
return types.ResponseCommit{Data: hash}
}
func (app *Application) Query(reqQuery types.RequestQuery) types.ResponseQuery {
switch reqQuery.Path {
case "hash":
return types.ResponseQuery{Value: []byte(fmt.Sprintf("%v", app.hashCount))}
case "tx":
return types.ResponseQuery{Value: []byte(fmt.Sprintf("%v", app.txCount))}
default:
return types.ResponseQuery{Log: fmt.Sprintf("Invalid query path. Expected hash or tx, got %v", reqQuery.Path)}
}
}
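
A small sketch (not part of the diff) of the serial-nonce behaviour the re-added counter app enforces: in serial mode each tx must be the big-endian encoding of the current tx count, matching what `abci-cli counter --serial` serves. The driver program below is hypothetical.

package main

import (
	"encoding/binary"
	"fmt"

	"github.com/tendermint/tendermint/abci/example/code"
	"github.com/tendermint/tendermint/abci/example/counter"
	"github.com/tendermint/tendermint/abci/types"
)

func main() {
	app := counter.NewApplication(true) // serial mode

	for nonce := uint64(0); nonce < 3; nonce++ {
		tx := make([]byte, 8)
		binary.BigEndian.PutUint64(tx, nonce) // next expected count
		if res := app.DeliverTx(types.RequestDeliverTx{Tx: tx}); res.Code != code.CodeTypeOK {
			fmt.Println("rejected:", res.Log)
			return
		}
	}
	fmt.Printf("commit data: %x\n", app.Commit().Data) // big-endian tx count (3)
}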


@@ -1,6 +1,7 @@
package example
import (
"context"
"fmt"
"math/rand"
"net"
@@ -13,8 +14,6 @@ import (
"google.golang.org/grpc"
"golang.org/x/net/context"
"github.com/tendermint/tendermint/libs/log"
tmnet "github.com/tendermint/tendermint/libs/net"
@@ -45,7 +44,7 @@ func TestGRPC(t *testing.T) {
}
func testStream(t *testing.T, app types.Application) {
numDeliverTxs := 20000
const numDeliverTxs = 20000
socketFile := fmt.Sprintf("test-%08x.sock", rand.Int31n(1<<30))
defer os.Remove(socketFile)
socket := fmt.Sprintf("unix://%v", socketFile)
@@ -53,9 +52,8 @@ func testStream(t *testing.T, app types.Application) {
// Start the listener
server := abciserver.NewSocketServer(socket, app)
server.SetLogger(log.TestingLogger().With("module", "abci-server"))
if err := server.Start(); err != nil {
require.NoError(t, err, "Error starting socket server")
}
err := server.Start()
require.NoError(t, err)
t.Cleanup(func() {
if err := server.Stop(); err != nil {
t.Error(err)
@@ -65,9 +63,8 @@ func testStream(t *testing.T, app types.Application) {
// Connect to the socket
client := abcicli.NewSocketClient(socket, false)
client.SetLogger(log.TestingLogger().With("module", "abci-client"))
if err := client.Start(); err != nil {
t.Fatalf("Error starting socket client: %v", err.Error())
}
err = client.Start()
require.NoError(t, err)
t.Cleanup(func() {
if err := client.Stop(); err != nil {
t.Error(err)
@@ -101,22 +98,24 @@ func testStream(t *testing.T, app types.Application) {
}
})
ctx := context.Background()
// Write requests
for counter := 0; counter < numDeliverTxs; counter++ {
// Send request
reqRes := client.DeliverTxAsync(types.RequestDeliverTx{Tx: []byte("test")})
_ = reqRes
// check err ?
_, err = client.DeliverTxAsync(ctx, types.RequestDeliverTx{Tx: []byte("test")})
require.NoError(t, err)
// Sometimes send flush messages
if counter%123 == 0 {
client.FlushAsync()
// check err ?
if counter%128 == 0 {
err = client.FlushSync(context.Background())
require.NoError(t, err)
}
}
// Send final flush message
client.FlushAsync()
_, err = client.FlushAsync(ctx)
require.NoError(t, err)
<-done
}
@@ -130,7 +129,7 @@ func dialerFunc(ctx context.Context, addr string) (net.Conn, error) {
func testGRPCSync(t *testing.T, app types.ABCIApplicationServer) {
numDeliverTxs := 2000
socketFile := fmt.Sprintf("/tmp/test-%08x.sock", rand.Int31n(1<<30))
socketFile := fmt.Sprintf("test-%08x.sock", rand.Int31n(1<<30))
defer os.Remove(socketFile)
socket := fmt.Sprintf("unix://%v", socketFile)
@@ -148,7 +147,6 @@ func testGRPCSync(t *testing.T, app types.ABCIApplicationServer) {
})
// Connect to the socket
//nolint:staticcheck // SA1019 Existing use of deprecated but supported dial option.
conn, err := grpc.Dial(socket, grpc.WithInsecure(), grpc.WithContextDialer(dialerFunc))
if err != nil {
t.Fatalf("Error dialing GRPC server: %v", err.Error())

View File

@@ -68,7 +68,6 @@ type Application struct {
state State
RetainBlocks int64 // blocks to retain after commit (via ResponseCommit.RetainHeight)
txToRemove map[string]struct{}
}
func NewApplication() *Application {
@@ -79,7 +78,7 @@ func NewApplication() *Application {
func (app *Application) Info(req types.RequestInfo) (resInfo types.ResponseInfo) {
return types.ResponseInfo{
Data: fmt.Sprintf("{\"size\":%v}", app.state.Size),
Version: version.ABCISemVer,
Version: version.ABCIVersion,
AppVersion: ProtocolVersion,
LastBlockHeight: app.state.Height,
LastBlockAppHash: app.state.AppHash,
@@ -88,19 +87,15 @@ func (app *Application) Info(req types.RequestInfo) (resInfo types.ResponseInfo)
// tx is either "key=value" or just arbitrary bytes
func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
if isReplacedTx(req.Tx) {
app.txToRemove[string(req.Tx)] = struct{}{}
}
var key, value string
var key, value []byte
parts := bytes.Split(req.Tx, []byte("="))
if len(parts) == 2 {
key, value = string(parts[0]), string(parts[1])
key, value = parts[0], parts[1]
} else {
key, value = string(req.Tx), string(req.Tx)
key, value = req.Tx, req.Tx
}
err := app.state.db.Set(prefixKey([]byte(key)), []byte(value))
err := app.state.db.Set(prefixKey(key), value)
if err != nil {
panic(err)
}
@@ -110,10 +105,10 @@ func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeli
{
Type: "app",
Attributes: []types.EventAttribute{
{Key: "creator", Value: "Cosmoshi Netowoko", Index: true},
{Key: "key", Value: key, Index: true},
{Key: "index_key", Value: "index is working", Index: true},
{Key: "noindex_key", Value: "index is working", Index: false},
{Key: []byte("creator"), Value: []byte("Cosmoshi Netowoko"), Index: true},
{Key: []byte("key"), Value: key, Index: true},
{Key: []byte("index_key"), Value: []byte("index is working"), Index: true},
{Key: []byte("noindex_key"), Value: []byte("index is working"), Index: false},
},
},
}
@@ -122,11 +117,6 @@ func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeli
}
func (app *Application) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
if req.Type == types.CheckTxType_Recheck {
if _, ok := app.txToRemove[string(req.Tx)]; ok {
return types.ResponseCheckTx{Code: code.CodeTypeExecuted, GasWanted: 1}
}
}
return types.ResponseCheckTx{Code: code.CodeTypeOK, GasWanted: 1}
}
@@ -136,8 +126,6 @@ func (app *Application) Commit() types.ResponseCommit {
binary.PutVarint(appHash, app.state.Size)
app.state.AppHash = appHash
app.state.Height++
// empty out the set of transactions to remove via rechecktx
saveState(app.state)
resp := types.ResponseCommit{Data: appHash}
@@ -182,18 +170,3 @@ func (app *Application) Query(reqQuery types.RequestQuery) (resQuery types.Respo
return resQuery
}
func (app *Application) BeginBlock(req types.RequestBeginBlock) types.ResponseBeginBlock {
app.txToRemove = map[string]struct{}{}
return types.ResponseBeginBlock{}
}
func (app *Application) ProcessProposal(
req types.RequestProcessProposal) types.ResponseProcessProposal {
for _, tx := range req.Txs {
if len(tx) == 0 {
return types.ResponseProcessProposal{Status: types.ResponseProcessProposal_REJECT}
}
}
return types.ResponseProcessProposal{Status: types.ResponseProcessProposal_ACCEPT}
}
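
As the comment above notes, a kvstore tx is either "key=value" or arbitrary bytes stored under themselves. A minimal sketch, not part of this changeset, of that parsing rule as DeliverTx applies it:

package main

import (
	"bytes"
	"fmt"
)

// parseTx mirrors the kvstore rule: "key=value" sets that pair, anything else
// is stored under its own bytes.
func parseTx(tx []byte) (key, value []byte) {
	parts := bytes.Split(tx, []byte("="))
	if len(parts) == 2 {
		return parts[0], parts[1]
	}
	return tx, tx
}

func main() {
	k, v := parseTx([]byte("def=xyz"))
	fmt.Printf("key=%s value=%s\n", k, v) // key=def value=xyz
	k, v = parseTx([]byte("abc"))
	fmt.Printf("key=%s value=%s\n", k, v) // key=abc value=abc
}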

View File

@@ -1,8 +1,9 @@
package kvstore
import (
"context"
"fmt"
"os"
"io/ioutil"
"sort"
"testing"
@@ -23,6 +24,8 @@ const (
testValue = "def"
)
var ctx = context.Background()
func testKVStore(t *testing.T, app types.Application, tx []byte, key, value string) {
req := types.RequestDeliverTx{Tx: tx}
ar := app.DeliverTx(req)
@@ -71,7 +74,7 @@ func TestKVStoreKV(t *testing.T) {
}
func TestPersistentKVStoreKV(t *testing.T) {
dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
dir, err := ioutil.TempDir("/tmp", "abci-kvstore-test") // TODO
if err != nil {
t.Fatal(err)
}
@@ -87,7 +90,7 @@ func TestPersistentKVStoreKV(t *testing.T) {
}
func TestPersistentKVStoreInfo(t *testing.T) {
dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
dir, err := ioutil.TempDir("/tmp", "abci-kvstore-test") // TODO
if err != nil {
t.Fatal(err)
}
@@ -119,7 +122,7 @@ func TestPersistentKVStoreInfo(t *testing.T) {
// add a validator, remove a validator, update a validator
func TestValUpdates(t *testing.T) {
dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
dir, err := ioutil.TempDir("/tmp", "abci-kvstore-test") // TODO
if err != nil {
t.Fatal(err)
}
@@ -162,7 +165,7 @@ func TestValUpdates(t *testing.T) {
makeApplyBlock(t, kvstore, 2, diff, tx1, tx2, tx3)
vals1 = append(vals[:nInit-2], vals[nInit+1]) //nolint: gocritic
vals1 = append(vals[:nInit-2], vals[nInit+1]) // nolint: gocritic
vals2 = kvstore.Validators()
valsEqual(t, vals1, vals2)
@@ -293,7 +296,7 @@ func TestClientServer(t *testing.T) {
// set up grpc app
kvstore = NewApplication()
gclient, gserver, err := makeGRPCClientServer(kvstore, "/tmp/kvstore-grpc")
gclient, gserver, err := makeGRPCClientServer(kvstore, "kvstore-grpc")
require.NoError(t, err)
t.Cleanup(func() {
@@ -323,23 +326,23 @@ func runClientTests(t *testing.T, client abcicli.Client) {
}
func testClient(t *testing.T, app abcicli.Client, tx []byte, key, value string) {
ar, err := app.DeliverTxSync(types.RequestDeliverTx{Tx: tx})
ar, err := app.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: tx})
require.NoError(t, err)
require.False(t, ar.IsErr(), ar)
// repeating tx doesn't raise error
ar, err = app.DeliverTxSync(types.RequestDeliverTx{Tx: tx})
ar, err = app.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: tx})
require.NoError(t, err)
require.False(t, ar.IsErr(), ar)
// commit
_, err = app.CommitSync()
_, err = app.CommitSync(ctx)
require.NoError(t, err)
info, err := app.InfoSync(types.RequestInfo{})
info, err := app.InfoSync(ctx, types.RequestInfo{})
require.NoError(t, err)
require.NotZero(t, info.LastBlockHeight)
// make sure query is fine
resQuery, err := app.QuerySync(types.RequestQuery{
resQuery, err := app.QuerySync(ctx, types.RequestQuery{
Path: "/store",
Data: []byte(key),
})
@@ -350,7 +353,7 @@ func testClient(t *testing.T, app abcicli.Client, tx []byte, key, value string)
require.EqualValues(t, info.LastBlockHeight, resQuery.Height)
// make sure proof is fine
resQuery, err = app.QuerySync(types.RequestQuery{
resQuery, err = app.QuerySync(ctx, types.RequestQuery{
Path: "/store",
Data: []byte(key),
Prove: true,

View File

@@ -45,10 +45,7 @@ func NewPersistentKVStoreApplication(dbDir string) *PersistentKVStoreApplication
state := loadState(db)
return &PersistentKVStoreApplication{
app: &Application{
state: state,
txToRemove: map[string]struct{}{},
},
app: &Application{state: state},
valAddrToPubKeyMap: make(map[string]pc.PublicKey),
logger: log.NewNopLogger(),
}
@@ -75,10 +72,6 @@ func (app *PersistentKVStoreApplication) DeliverTx(req types.RequestDeliverTx) t
return app.execValidatorTx(req.Tx)
}
if isPrepareTx(req.Tx) {
return app.execPrepareTx(req.Tx)
}
// otherwise, update the key-value store
return app.app.DeliverTx(req)
}
@@ -129,7 +122,7 @@ func (app *PersistentKVStoreApplication) BeginBlock(req types.RequestBeginBlock)
// Punish validators who committed equivocation.
for _, ev := range req.ByzantineValidators {
if ev.Type == types.MisbehaviorType_DUPLICATE_VOTE {
if ev.Type == types.EvidenceType_DUPLICATE_VOTE {
addr := string(ev.Validator.Address)
if pubKey, ok := app.valAddrToPubKeyMap[addr]; ok {
app.updateValidator(types.ValidatorUpdate{
@@ -145,7 +138,7 @@ func (app *PersistentKVStoreApplication) BeginBlock(req types.RequestBeginBlock)
}
}
return app.app.BeginBlock(req)
return types.ResponseBeginBlock{}
}
// Update the validator set
@@ -173,21 +166,6 @@ func (app *PersistentKVStoreApplication) ApplySnapshotChunk(
return types.ResponseApplySnapshotChunk{Result: types.ResponseApplySnapshotChunk_ABORT}
}
func (app *PersistentKVStoreApplication) PrepareProposal(
req types.RequestPrepareProposal) types.ResponsePrepareProposal {
return types.ResponsePrepareProposal{Txs: app.substPrepareTx(req.Txs, req.MaxTxBytes)}
}
func (app *PersistentKVStoreApplication) ProcessProposal(
req types.RequestProcessProposal) types.ResponseProcessProposal {
for _, tx := range req.Txs {
if len(tx) == 0 || isPrepareTx(tx) {
return types.ResponseProcessProposal{Status: types.ResponseProcessProposal_REJECT}
}
}
return types.ResponseProcessProposal{Status: types.ResponseProcessProposal_ACCEPT}
}
//---------------------------------------------
// update validators
@@ -302,42 +280,3 @@ func (app *PersistentKVStoreApplication) updateValidator(v types.ValidatorUpdate
return types.ResponseDeliverTx{Code: code.CodeTypeOK}
}
// -----------------------------
const (
PreparePrefix = "prepare"
ReplacePrefix = "replace"
)
func isPrepareTx(tx []byte) bool { return bytes.HasPrefix(tx, []byte(PreparePrefix)) }
func isReplacedTx(tx []byte) bool {
return bytes.HasPrefix(tx, []byte(ReplacePrefix))
}
// execPrepareTx is a no-op. The tx data is treated as a placeholder
// and is substituted at PrepareProposal.
func (app *PersistentKVStoreApplication) execPrepareTx(tx []byte) types.ResponseDeliverTx {
// noop
return types.ResponseDeliverTx{}
}
// substPrepareTx substitutes every transaction prefixed with 'prepare' in the
// proposal with the same transaction carrying the 'replace' prefix instead.
func (app *PersistentKVStoreApplication) substPrepareTx(blockData [][]byte, maxTxBytes int64) [][]byte {
txs := make([][]byte, 0, len(blockData))
var totalBytes int64
for _, tx := range blockData {
txMod := tx
if isPrepareTx(tx) {
txMod = bytes.Replace(tx, []byte(PreparePrefix), []byte(ReplacePrefix), 1)
}
totalBytes += int64(len(txMod))
if totalBytes > maxTxBytes {
break
}
txs = append(txs, txMod)
}
return txs
}

View File

@@ -2,8 +2,9 @@
Package server is used to start a new ABCI server.
It contains two server implementations:
- gRPC server
- socket server
* gRPC server
* socket server
*/
package server
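
A minimal sketch, not part of this changeset, of wiring up the socket flavour described above, serving the kvstore example application on the address the test client uses:

package main

import (
	"os"

	"github.com/tendermint/tendermint/abci/example/kvstore"
	abciserver "github.com/tendermint/tendermint/abci/server"
	"github.com/tendermint/tendermint/libs/log"
)

func main() {
	app := kvstore.NewApplication()

	srv := abciserver.NewSocketServer("tcp://127.0.0.1:26658", app)
	srv.SetLogger(log.NewTMLogger(log.NewSyncWriter(os.Stdout)).With("module", "abci-server"))

	if err := srv.Start(); err != nil {
		panic(err)
	}
	// Block forever; a real binary would trap signals and call srv.Stop().
	select {}
}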

View File

@@ -227,12 +227,6 @@ func (s *SocketServer) handleRequest(req *types.Request, responses chan<- *types
case *types.Request_OfferSnapshot:
res := s.app.OfferSnapshot(*r.OfferSnapshot)
responses <- types.ToResponseOfferSnapshot(res)
case *types.Request_PrepareProposal:
res := s.app.PrepareProposal(*r.PrepareProposal)
responses <- types.ToResponsePrepareProposal(res)
case *types.Request_ProcessProposal:
res := s.app.ProcessProposal(*r.ProcessProposal)
responses <- types.ToResponseProcessProposal(res)
case *types.Request_LoadSnapshotChunk:
res := s.app.LoadSnapshotChunk(*r.LoadSnapshotChunk)
responses <- types.ToResponseLoadSnapshotChunk(res)

View File

@@ -19,12 +19,9 @@ func TestClientServerNoAddrPrefix(t *testing.T) {
assert.NoError(t, err, "expected no error on NewServer")
err = server.Start()
assert.NoError(t, err, "expected no error on server.Start")
defer func() { _ = server.Stop() }()
client, err := abciclient.NewClient(addr, transport, true)
assert.NoError(t, err, "expected no error on NewClient")
err = client.Start()
assert.NoError(t, err, "expected no error on client.Start")
_ = client.Stop()
}

View File

@@ -2,6 +2,7 @@ package testsuite
import (
"bytes"
"context"
"errors"
"fmt"
@@ -10,6 +11,8 @@ import (
tmrand "github.com/tendermint/tendermint/libs/rand"
)
var ctx = context.Background()
func InitChain(client abcicli.Client) error {
total := 10
vals := make([]types.ValidatorUpdate, total)
@@ -18,7 +21,7 @@ func InitChain(client abcicli.Client) error {
power := tmrand.Int()
vals[i] = types.UpdateValidator(pubkey, int64(power), "")
}
_, err := client.InitChainSync(types.RequestInitChain{
_, err := client.InitChainSync(ctx, types.RequestInitChain{
Validators: vals,
})
if err != nil {
@@ -30,7 +33,7 @@ func InitChain(client abcicli.Client) error {
}
func Commit(client abcicli.Client, hashExp []byte) error {
res, err := client.CommitSync()
res, err := client.CommitSync(ctx)
data := res.Data
if err != nil {
fmt.Println("Failed test: Commit")
@@ -47,7 +50,7 @@ func Commit(client abcicli.Client, hashExp []byte) error {
}
func DeliverTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
res, _ := client.DeliverTxSync(types.RequestDeliverTx{Tx: txBytes})
res, _ := client.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: txBytes})
code, data, log := res.Code, res.Data, res.Log
if code != codeExp {
fmt.Println("Failed test: DeliverTx")
@@ -65,34 +68,8 @@ func DeliverTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []
return nil
}
func PrepareProposal(client abcicli.Client, txBytes [][]byte, txExpected [][]byte, dataExp []byte) error {
res, _ := client.PrepareProposalSync(types.RequestPrepareProposal{Txs: txBytes})
for i, tx := range res.Txs {
if !bytes.Equal(tx, txExpected[i]) {
fmt.Println("Failed test: PrepareProposal")
fmt.Printf("PrepareProposal transaction was unexpected. Got %x expected %x.",
tx, txExpected[i])
return errors.New("PrepareProposal error")
}
}
fmt.Println("Passed test: PrepareProposal")
return nil
}
func ProcessProposal(client abcicli.Client, txBytes [][]byte, statusExp types.ResponseProcessProposal_ProposalStatus) error {
res, _ := client.ProcessProposalSync(types.RequestProcessProposal{Txs: txBytes})
if res.Status != statusExp {
fmt.Println("Failed test: ProcessProposal")
fmt.Printf("ProcessProposal response status was unexpected. Got %v expected %v.",
res.Status, statusExp)
return errors.New("ProcessProposal error")
}
fmt.Println("Passed test: ProcessProposal")
return nil
}
func CheckTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
res, _ := client.CheckTxSync(types.RequestCheckTx{Tx: txBytes})
res, _ := client.CheckTxSync(ctx, types.RequestCheckTx{Tx: txBytes})
code, data, log := res.Code, res.Data, res.Log
if code != codeExp {
fmt.Println("Failed test: CheckTx")

View File

@@ -0,0 +1,56 @@
package main
import (
"bytes"
"context"
"fmt"
"os"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/libs/log"
)
var ctx = context.Background()
func startClient(abciType string) abcicli.Client {
// Start client
client, err := abcicli.NewClient("tcp://127.0.0.1:26658", abciType, true)
if err != nil {
panic(err.Error())
}
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
client.SetLogger(logger.With("module", "abcicli"))
if err := client.Start(); err != nil {
panicf("connecting to abci_app: %v", err.Error())
}
return client
}
func commit(client abcicli.Client, hashExp []byte) {
res, err := client.CommitSync(ctx)
if err != nil {
panicf("client error: %v", err)
}
if !bytes.Equal(res.Data, hashExp) {
panicf("Commit hash was unexpected. Got %X expected %X", res.Data, hashExp)
}
}
func deliverTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []byte) {
res, err := client.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: txBytes})
if err != nil {
panicf("client error: %v", err)
}
if res.Code != codeExp {
panicf("DeliverTx response code was unexpected. Got %v expected %v. Log: %v", res.Code, codeExp, res.Log)
}
if !bytes.Equal(res.Data, dataExp) {
panicf("DeliverTx response data was unexpected. Got %X expected %X", res.Data, dataExp)
}
}
func panicf(format string, a ...interface{}) {
panic(fmt.Sprintf(format, a...))
}

View File

@@ -0,0 +1,93 @@
package main
import (
"fmt"
"log"
"os"
"os/exec"
"time"
"github.com/tendermint/tendermint/abci/types"
)
var abciType string
func init() {
abciType = os.Getenv("ABCI")
if abciType == "" {
abciType = "socket"
}
}
func main() {
testCounter()
}
const (
maxABCIConnectTries = 10
)
func ensureABCIIsUp(typ string, n int) error {
var err error
cmdString := "abci-cli echo hello"
if typ == "grpc" {
cmdString = "abci-cli --abci grpc echo hello"
}
for i := 0; i < n; i++ {
cmd := exec.Command("bash", "-c", cmdString)
_, err = cmd.CombinedOutput()
if err == nil {
break
}
time.Sleep(500 * time.Millisecond)
}
return err
}
func testCounter() {
abciApp := os.Getenv("ABCI_APP")
if abciApp == "" {
panic("No ABCI_APP specified")
}
fmt.Printf("Running %s test with abci=%s\n", abciApp, abciType)
subCommand := fmt.Sprintf("abci-cli %s", abciApp)
cmd := exec.Command("bash", "-c", subCommand)
cmd.Stdout = os.Stdout
if err := cmd.Start(); err != nil {
log.Fatalf("starting %q err: %v", abciApp, err)
}
defer func() {
if err := cmd.Process.Kill(); err != nil {
log.Printf("error on process kill: %v", err)
}
if err := cmd.Wait(); err != nil {
log.Printf("error while waiting for cmd to exit: %v", err)
}
}()
if err := ensureABCIIsUp(abciType, maxABCIConnectTries); err != nil {
log.Fatalf("echo failed: %v", err) //nolint:gocritic
}
client := startClient(abciType)
defer func() {
if err := client.Stop(); err != nil {
log.Printf("error trying client stop: %v", err)
}
}()
// commit(client, nil)
// deliverTx(client, []byte("abc"), code.CodeTypeBadNonce, nil)
commit(client, nil)
deliverTx(client, []byte{0x00}, types.CodeTypeOK, nil)
commit(client, []byte{0, 0, 0, 0, 0, 0, 0, 1})
// deliverTx(client, []byte{0x00}, code.CodeTypeBadNonce, nil)
deliverTx(client, []byte{0x01}, types.CodeTypeOK, nil)
deliverTx(client, []byte{0x00, 0x02}, types.CodeTypeOK, nil)
deliverTx(client, []byte{0x00, 0x03}, types.CodeTypeOK, nil)
deliverTx(client, []byte{0x00, 0x00, 0x04}, types.CodeTypeOK, nil)
// deliverTx(client, []byte{0x00, 0x00, 0x06}, code.CodeTypeBadNonce, nil)
commit(client, []byte{0, 0, 0, 0, 0, 0, 0, 5})
}

28
abci/tests/test_app/test.sh Executable file
View File

@@ -0,0 +1,28 @@
#! /bin/bash
set -e
# These tests spawn the counter app and server by execing the ABCI_APP command and run some simple client tests against it
# Get the directory of where this script is.
export PATH="$GOBIN:$PATH"
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
# Change into that dir because we expect that.
cd "$DIR"
echo "RUN COUNTER OVER SOCKET"
# test golang counter
ABCI_APP="counter" go run -mod=readonly ./*.go
echo "----------------------"
echo "RUN COUNTER OVER GRPC"
# test golang counter via grpc
ABCI_APP="counter --abci=grpc" ABCI="grpc" go run -mod=readonly ./*.go
echo "----------------------"
# test nodejs counter
# TODO: fix node app
#ABCI_APP="node $GOPATH/src/github.com/tendermint/js-abci/example/app.js" go test -test.run TestCounter

View File

@@ -1,7 +1,5 @@
echo hello
info
prepare_proposal "abc"
process_proposal "abc"
commit
deliver_tx "abc"
info
@@ -10,9 +8,3 @@ query "abc"
deliver_tx "def=xyz"
commit
query "def"
prepare_proposal "preparedef"
process_proposal "replacedef"
process_proposal "preparedef"
prepare_proposal
process_proposal
commit

View File

@@ -8,14 +8,6 @@
-> data: {"size":0}
-> data.hex: 0x7B2273697A65223A307D
> prepare_proposal "abc"
-> code: OK
-> log: Succeeded. Tx: abc
> process_proposal "abc"
-> code: OK
-> status: ACCEPT
> commit
-> code: OK
-> data.hex: 0x0000000000000000
@@ -57,25 +49,3 @@
-> value: xyz
-> value.hex: 78797A
> prepare_proposal "preparedef"
-> code: OK
-> log: Succeeded. Tx: replacedef
> process_proposal "replacedef"
-> code: OK
-> status: ACCEPT
> process_proposal "preparedef"
-> code: OK
-> status: REJECT
> prepare_proposal
> process_proposal
-> code: OK
-> status: ACCEPT
> commit
-> code: OK
-> data.hex: 0x0400000000000000

View File

@@ -18,6 +18,6 @@
> info
-> code: OK
-> data: {"size":3}
-> data.hex: 0x7B2273697A65223A337D
-> data: {"hashes":0,"txs":3}
-> data.hex: 0x7B22686173686573223A302C22747873223A337D

View File

@@ -30,8 +30,6 @@ function testExample() {
cat "${INPUT}.out.new"
echo "Expected:"
cat "${INPUT}.out"
echo "Diff:"
diff "${INPUT}.out" "${INPUT}.out.new"
exit 1
fi
@@ -39,7 +37,7 @@ function testExample() {
}
testExample 1 tests/test_cli/ex1.abci abci-cli kvstore
testExample 2 tests/test_cli/ex2.abci abci-cli kvstore
testExample 2 tests/test_cli/ex2.abci abci-cli counter
echo ""
echo "PASS"

View File

@@ -1,11 +1,9 @@
package types
import (
context "golang.org/x/net/context"
"context"
)
//go:generate ../../scripts/mockery_generate.sh Application
// Application is an interface that enables any finite, deterministic state machine
// to be driven by a blockchain-based replication engine via the ABCI.
// All methods take a RequestXxx argument and return a ResponseXxx argument,
@@ -19,9 +17,7 @@ type Application interface {
CheckTx(RequestCheckTx) ResponseCheckTx // Validate a tx for the mempool
// Consensus Connection
InitChain(RequestInitChain) ResponseInitChain // Initialize blockchain w validators/other info from TendermintCore
PrepareProposal(RequestPrepareProposal) ResponsePrepareProposal
ProcessProposal(RequestProcessProposal) ResponseProcessProposal
InitChain(RequestInitChain) ResponseInitChain // Initialize blockchain w validators/other info from TendermintCore
BeginBlock(RequestBeginBlock) ResponseBeginBlock // Signals the beginning of a block
DeliverTx(RequestDeliverTx) ResponseDeliverTx // Deliver a tx for full processing
EndBlock(RequestEndBlock) ResponseEndBlock // Signals the end of a block, returns changes to the validator set
@@ -94,24 +90,6 @@ func (BaseApplication) ApplySnapshotChunk(req RequestApplySnapshotChunk) Respons
return ResponseApplySnapshotChunk{}
}
func (BaseApplication) PrepareProposal(req RequestPrepareProposal) ResponsePrepareProposal {
txs := make([][]byte, 0, len(req.Txs))
var totalBytes int64
for _, tx := range req.Txs {
totalBytes += int64(len(tx))
if totalBytes > req.MaxTxBytes {
break
}
txs = append(txs, tx)
}
return ResponsePrepareProposal{Txs: txs}
}
func (BaseApplication) ProcessProposal(req RequestProcessProposal) ResponseProcessProposal {
return ResponseProcessProposal{
Status: ResponseProcessProposal_ACCEPT}
}
//-------------------------------------------------------
// GRPCApplication is a GRPC wrapper for Application
@@ -194,15 +172,3 @@ func (app *GRPCApplication) ApplySnapshotChunk(
res := app.app.ApplySnapshotChunk(*req)
return &res, nil
}
func (app *GRPCApplication) PrepareProposal(
ctx context.Context, req *RequestPrepareProposal) (*ResponsePrepareProposal, error) {
res := app.app.PrepareProposal(*req)
return &res, nil
}
func (app *GRPCApplication) ProcessProposal(
ctx context.Context, req *RequestProcessProposal) (*ResponseProcessProposal, error) {
res := app.app.ProcessProposal(*req)
return &res, nil
}
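
The Application interface above is easiest to satisfy by embedding types.BaseApplication, which supplies defaults for every method. A minimal sketch, not part of this changeset (EchoApp is a hypothetical application):

package main

import (
	"fmt"

	"github.com/tendermint/tendermint/abci/types"
)

// EchoApp overrides only Info and DeliverTx; every other ABCI method falls
// through to the embedded BaseApplication's default implementation.
type EchoApp struct {
	types.BaseApplication
	txCount int
}

func (app *EchoApp) Info(req types.RequestInfo) types.ResponseInfo {
	return types.ResponseInfo{Data: fmt.Sprintf("{\"txs\":%v}", app.txCount)}
}

func (app *EchoApp) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
	app.txCount++
	// Echo the tx bytes back in the response data.
	return types.ResponseDeliverTx{Code: types.CodeTypeOK, Data: req.Tx}
}

func main() {
	var app types.Application = &EchoApp{}
	app.DeliverTx(types.RequestDeliverTx{Tx: []byte("abc")})
	fmt.Printf("%s\n", app.Info(types.RequestInfo{}).Data) // {"txs":1}
}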

View File

@@ -3,7 +3,7 @@ package types
import (
"io"
"github.com/cosmos/gogoproto/proto"
"github.com/gogo/protobuf/proto"
"github.com/tendermint/tendermint/libs/protoio"
)
@@ -114,18 +114,6 @@ func ToRequestApplySnapshotChunk(req RequestApplySnapshotChunk) *Request {
}
}
func ToRequestPrepareProposal(req RequestPrepareProposal) *Request {
return &Request{
Value: &Request_PrepareProposal{&req},
}
}
func ToRequestProcessProposal(req RequestProcessProposal) *Request {
return &Request{
Value: &Request_ProcessProposal{&req},
}
}
//----------------------------------------
func ToResponseException(errStr string) *Response {
@@ -151,7 +139,6 @@ func ToResponseInfo(res ResponseInfo) *Response {
Value: &Response_Info{&res},
}
}
func ToResponseDeliverTx(res ResponseDeliverTx) *Response {
return &Response{
Value: &Response_DeliverTx{&res},
@@ -217,15 +204,3 @@ func ToResponseApplySnapshotChunk(res ResponseApplySnapshotChunk) *Response {
Value: &Response_ApplySnapshotChunk{&res},
}
}
func ToResponsePrepareProposal(res ResponsePrepareProposal) *Response {
return &Response{
Value: &Response_PrepareProposal{&res},
}
}
func ToResponseProcessProposal(res ResponseProcessProposal) *Response {
return &Response{
Value: &Response_ProcessProposal{&res},
}
}

View File

@@ -6,7 +6,7 @@ import (
"strings"
"testing"
"github.com/cosmos/gogoproto/proto"
"github.com/gogo/protobuf/proto"
"github.com/stretchr/testify/assert"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
@@ -25,7 +25,7 @@ func TestMarshalJSON(t *testing.T) {
{
Type: "testEvent",
Attributes: []EventAttribute{
{Key: "pho", Value: "bo"},
{Key: []byte("pho"), Value: []byte("bo")},
},
},
},
@@ -92,7 +92,7 @@ func TestWriteReadMessage2(t *testing.T) {
{
Type: "testEvent",
Attributes: []EventAttribute{
{Key: "abc", Value: "def"},
{Key: []byte("abc"), Value: []byte("def")},
},
},
},

View File

@@ -1,224 +0,0 @@
// Code generated by mockery. DO NOT EDIT.
package mocks
import (
mock "github.com/stretchr/testify/mock"
types "github.com/tendermint/tendermint/abci/types"
)
// Application is an autogenerated mock type for the Application type
type Application struct {
mock.Mock
}
// ApplySnapshotChunk provides a mock function with given fields: _a0
func (_m *Application) ApplySnapshotChunk(_a0 types.RequestApplySnapshotChunk) types.ResponseApplySnapshotChunk {
ret := _m.Called(_a0)
var r0 types.ResponseApplySnapshotChunk
if rf, ok := ret.Get(0).(func(types.RequestApplySnapshotChunk) types.ResponseApplySnapshotChunk); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseApplySnapshotChunk)
}
return r0
}
// BeginBlock provides a mock function with given fields: _a0
func (_m *Application) BeginBlock(_a0 types.RequestBeginBlock) types.ResponseBeginBlock {
ret := _m.Called(_a0)
var r0 types.ResponseBeginBlock
if rf, ok := ret.Get(0).(func(types.RequestBeginBlock) types.ResponseBeginBlock); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseBeginBlock)
}
return r0
}
// CheckTx provides a mock function with given fields: _a0
func (_m *Application) CheckTx(_a0 types.RequestCheckTx) types.ResponseCheckTx {
ret := _m.Called(_a0)
var r0 types.ResponseCheckTx
if rf, ok := ret.Get(0).(func(types.RequestCheckTx) types.ResponseCheckTx); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseCheckTx)
}
return r0
}
// Commit provides a mock function with given fields:
func (_m *Application) Commit() types.ResponseCommit {
ret := _m.Called()
var r0 types.ResponseCommit
if rf, ok := ret.Get(0).(func() types.ResponseCommit); ok {
r0 = rf()
} else {
r0 = ret.Get(0).(types.ResponseCommit)
}
return r0
}
// DeliverTx provides a mock function with given fields: _a0
func (_m *Application) DeliverTx(_a0 types.RequestDeliverTx) types.ResponseDeliverTx {
ret := _m.Called(_a0)
var r0 types.ResponseDeliverTx
if rf, ok := ret.Get(0).(func(types.RequestDeliverTx) types.ResponseDeliverTx); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseDeliverTx)
}
return r0
}
// EndBlock provides a mock function with given fields: _a0
func (_m *Application) EndBlock(_a0 types.RequestEndBlock) types.ResponseEndBlock {
ret := _m.Called(_a0)
var r0 types.ResponseEndBlock
if rf, ok := ret.Get(0).(func(types.RequestEndBlock) types.ResponseEndBlock); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseEndBlock)
}
return r0
}
// Info provides a mock function with given fields: _a0
func (_m *Application) Info(_a0 types.RequestInfo) types.ResponseInfo {
ret := _m.Called(_a0)
var r0 types.ResponseInfo
if rf, ok := ret.Get(0).(func(types.RequestInfo) types.ResponseInfo); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseInfo)
}
return r0
}
// InitChain provides a mock function with given fields: _a0
func (_m *Application) InitChain(_a0 types.RequestInitChain) types.ResponseInitChain {
ret := _m.Called(_a0)
var r0 types.ResponseInitChain
if rf, ok := ret.Get(0).(func(types.RequestInitChain) types.ResponseInitChain); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseInitChain)
}
return r0
}
// ListSnapshots provides a mock function with given fields: _a0
func (_m *Application) ListSnapshots(_a0 types.RequestListSnapshots) types.ResponseListSnapshots {
ret := _m.Called(_a0)
var r0 types.ResponseListSnapshots
if rf, ok := ret.Get(0).(func(types.RequestListSnapshots) types.ResponseListSnapshots); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseListSnapshots)
}
return r0
}
// LoadSnapshotChunk provides a mock function with given fields: _a0
func (_m *Application) LoadSnapshotChunk(_a0 types.RequestLoadSnapshotChunk) types.ResponseLoadSnapshotChunk {
ret := _m.Called(_a0)
var r0 types.ResponseLoadSnapshotChunk
if rf, ok := ret.Get(0).(func(types.RequestLoadSnapshotChunk) types.ResponseLoadSnapshotChunk); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseLoadSnapshotChunk)
}
return r0
}
// OfferSnapshot provides a mock function with given fields: _a0
func (_m *Application) OfferSnapshot(_a0 types.RequestOfferSnapshot) types.ResponseOfferSnapshot {
ret := _m.Called(_a0)
var r0 types.ResponseOfferSnapshot
if rf, ok := ret.Get(0).(func(types.RequestOfferSnapshot) types.ResponseOfferSnapshot); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseOfferSnapshot)
}
return r0
}
// PrepareProposal provides a mock function with given fields: _a0
func (_m *Application) PrepareProposal(_a0 types.RequestPrepareProposal) types.ResponsePrepareProposal {
ret := _m.Called(_a0)
var r0 types.ResponsePrepareProposal
if rf, ok := ret.Get(0).(func(types.RequestPrepareProposal) types.ResponsePrepareProposal); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponsePrepareProposal)
}
return r0
}
// ProcessProposal provides a mock function with given fields: _a0
func (_m *Application) ProcessProposal(_a0 types.RequestProcessProposal) types.ResponseProcessProposal {
ret := _m.Called(_a0)
var r0 types.ResponseProcessProposal
if rf, ok := ret.Get(0).(func(types.RequestProcessProposal) types.ResponseProcessProposal); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseProcessProposal)
}
return r0
}
// Query provides a mock function with given fields: _a0
func (_m *Application) Query(_a0 types.RequestQuery) types.ResponseQuery {
ret := _m.Called(_a0)
var r0 types.ResponseQuery
if rf, ok := ret.Get(0).(func(types.RequestQuery) types.ResponseQuery); ok {
r0 = rf(_a0)
} else {
r0 = ret.Get(0).(types.ResponseQuery)
}
return r0
}
type mockConstructorTestingTNewApplication interface {
mock.TestingT
Cleanup(func())
}
// NewApplication creates a new instance of Application. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewApplication(t mockConstructorTestingTNewApplication) *Application {
mock := &Application{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View File

@@ -1,187 +0,0 @@
package mocks
import (
types "github.com/tendermint/tendermint/abci/types"
)
// BaseMock provides a wrapper around the generated Application mock and a BaseApplication.
// BaseMock first tries to use the mock's implementation of the method.
// If no functionality was provided for the mock by the user, BaseMock dispatches
// to the BaseApplication and uses its functionality.
// BaseMock allows users to provide mocked functionality for only the methods that matter
// for their test while avoiding a panic if the code calls Application methods that are
// not relevant to the test.
type BaseMock struct {
base *types.BaseApplication
*Application
}
func NewBaseMock() BaseMock {
return BaseMock{
base: types.NewBaseApplication(),
Application: new(Application),
}
}
// Info/Query Connection
// Return application info
func (m BaseMock) Info(input types.RequestInfo) types.ResponseInfo {
var ret types.ResponseInfo
defer func() {
if r := recover(); r != nil {
ret = m.base.Info(input)
}
}()
ret = m.Application.Info(input)
return ret
}
func (m BaseMock) Query(input types.RequestQuery) types.ResponseQuery {
var ret types.ResponseQuery
defer func() {
if r := recover(); r != nil {
ret = m.base.Query(input)
}
}()
ret = m.Application.Query(input)
return ret
}
// Mempool Connection
// Validate a tx for the mempool
func (m BaseMock) CheckTx(input types.RequestCheckTx) types.ResponseCheckTx {
var ret types.ResponseCheckTx
defer func() {
if r := recover(); r != nil {
ret = m.base.CheckTx(input)
}
}()
ret = m.Application.CheckTx(input)
return ret
}
// Consensus Connection
// Initialize blockchain w validators/other info from TendermintCore
func (m BaseMock) InitChain(input types.RequestInitChain) types.ResponseInitChain {
var ret types.ResponseInitChain
defer func() {
if r := recover(); r != nil {
ret = m.base.InitChain(input)
}
}()
ret = m.Application.InitChain(input)
return ret
}
func (m BaseMock) PrepareProposal(input types.RequestPrepareProposal) types.ResponsePrepareProposal {
var ret types.ResponsePrepareProposal
defer func() {
if r := recover(); r != nil {
ret = m.base.PrepareProposal(input)
}
}()
ret = m.Application.PrepareProposal(input)
return ret
}
func (m BaseMock) ProcessProposal(input types.RequestProcessProposal) types.ResponseProcessProposal {
var ret types.ResponseProcessProposal
defer func() {
if r := recover(); r != nil {
ret = m.base.ProcessProposal(input)
}
}()
ret = m.Application.ProcessProposal(input)
return ret
}
// Commit the state and return the application Merkle root hash
func (m BaseMock) Commit() types.ResponseCommit {
var ret types.ResponseCommit
defer func() {
if r := recover(); r != nil {
ret = m.base.Commit()
}
}()
ret = m.Application.Commit()
return ret
}
// State Sync Connection
// List available snapshots
func (m BaseMock) ListSnapshots(input types.RequestListSnapshots) types.ResponseListSnapshots {
var ret types.ResponseListSnapshots
defer func() {
if r := recover(); r != nil {
ret = m.base.ListSnapshots(input)
}
}()
ret = m.Application.ListSnapshots(input)
return ret
}
func (m BaseMock) OfferSnapshot(input types.RequestOfferSnapshot) types.ResponseOfferSnapshot {
var ret types.ResponseOfferSnapshot
defer func() {
if r := recover(); r != nil {
ret = m.base.OfferSnapshot(input)
}
}()
ret = m.Application.OfferSnapshot(input)
return ret
}
func (m BaseMock) LoadSnapshotChunk(input types.RequestLoadSnapshotChunk) types.ResponseLoadSnapshotChunk {
var ret types.ResponseLoadSnapshotChunk
defer func() {
if r := recover(); r != nil {
ret = m.base.LoadSnapshotChunk(input)
}
}()
ret = m.Application.LoadSnapshotChunk(input)
return ret
}
func (m BaseMock) ApplySnapshotChunk(input types.RequestApplySnapshotChunk) types.ResponseApplySnapshotChunk {
var ret types.ResponseApplySnapshotChunk
defer func() {
if r := recover(); r != nil {
ret = m.base.ApplySnapshotChunk(input)
}
}()
ret = m.Application.ApplySnapshotChunk(input)
return ret
}
func (m BaseMock) BeginBlock(input types.RequestBeginBlock) types.ResponseBeginBlock {
var ret types.ResponseBeginBlock
defer func() {
if r := recover(); r != nil {
ret = m.base.BeginBlock(input)
}
}()
ret = m.Application.BeginBlock(input)
return ret
}
func (m BaseMock) DeliverTx(input types.RequestDeliverTx) types.ResponseDeliverTx {
var ret types.ResponseDeliverTx
defer func() {
if r := recover(); r != nil {
ret = m.base.DeliverTx(input)
}
}()
ret = m.Application.DeliverTx(input)
return ret
}
func (m BaseMock) EndBlock(input types.RequestEndBlock) types.ResponseEndBlock {
var ret types.ResponseEndBlock
defer func() {
if r := recover(); r != nil {
ret = m.base.EndBlock(input)
}
}()
ret = m.Application.EndBlock(input)
return ret
}
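
A minimal usage sketch for the BaseMock above, not part of this changeset: TestBaseMockFallback is hypothetical, the mocks import path is assumed to be abci/types/mocks, and the fallback relies on the testify mock panicking when a method has no expectation, which the recover blocks above convert into a BaseApplication call.

package mocks_test

import (
	"testing"

	"github.com/stretchr/testify/require"

	"github.com/tendermint/tendermint/abci/example/code"
	"github.com/tendermint/tendermint/abci/types"
	"github.com/tendermint/tendermint/abci/types/mocks"
)

func TestBaseMockFallback(t *testing.T) {
	app := mocks.NewBaseMock()

	// CheckTx has an explicit expectation, so the generated mock serves it.
	app.Application.On("CheckTx", types.RequestCheckTx{Tx: []byte("abc")}).
		Return(types.ResponseCheckTx{Code: code.CodeTypeBadNonce})
	res := app.CheckTx(types.RequestCheckTx{Tx: []byte("abc")})
	require.Equal(t, code.CodeTypeBadNonce, res.Code)

	// Info has no expectation: the mock panics internally, BaseMock recovers
	// and dispatches to the embedded BaseApplication instead.
	info := app.Info(types.RequestInfo{})
	require.Equal(t, types.ResponseInfo{}, info)
}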

View File

@@ -4,7 +4,7 @@ import (
"bytes"
"encoding/json"
"github.com/cosmos/gogoproto/jsonpb"
"github.com/gogo/protobuf/jsonpb"
)
const (
@@ -41,18 +41,8 @@ func (r ResponseQuery) IsErr() bool {
return r.Code != CodeTypeOK
}
// IsAccepted returns true if Status is ACCEPT
func (r ResponseProcessProposal) IsAccepted() bool {
return r.Status == ResponseProcessProposal_ACCEPT
}
// IsStatusUnknown returns true if Status is UNKNOWN
func (r ResponseProcessProposal) IsStatusUnknown() bool {
return r.Status == ResponseProcessProposal_UNKNOWN
}
//---------------------------------------------------------------------------
// override JSON marshaling so we emit defaults (ie. disable omitempty)
// override JSON marshalling so we emit defaults (ie. disable omitempty)
var (
jsonpbMarshaller = jsonpb.Marshaler{

File diff suppressed because it is too large.

View File

@@ -6,4 +6,4 @@ import (
// TODO: eliminate this after some version refactor
const Version = version.ABCISemVer
const Version = version.ABCIVersion

View File

@@ -1,12 +0,0 @@
version: 1.0.{build}
configuration: Release
platform:
- x64
- x86
clone_folder: c:\go\path\src\github.com\tendermint\tendermint
before_build:
- cmd: set GOPATH=%GOROOT%\path
- cmd: set PATH=%GOPATH%\bin;%PATH%
build_script:
- cmd: make test
test: off

42
behaviour/doc.go Normal file
View File

@@ -0,0 +1,42 @@
/*
Package behaviour provides a mechanism for reactors to report the behaviour of peers.
Instead of a reactor calling the switch directly, it calls the behaviour module, which
handles stopping the peer or marking it as good on behalf of the reactor.
There are four different behaviours a reactor can report.
1. bad message
type badMessage struct {
explanation string
}
This message will request the peer be stopped for an error
2. message out of order
type messageOutOfOrder struct {
explanation string
}
This message will request the peer be stopped for an error
3. consensus vote
type consensusVote struct {
explanation string
}
This message will request the peer be marked as good
4. block part
type blockPart struct {
explanation string
}
This message will request the peer be marked as good
*/
package behaviour

View File

@@ -0,0 +1,49 @@
package behaviour
import (
"github.com/tendermint/tendermint/p2p"
)
// PeerBehaviour is a struct describing a behaviour a peer performed.
// `peerID` identifies the peer and reason characterizes the specific
// behaviour performed by the peer.
type PeerBehaviour struct {
peerID p2p.NodeID
reason interface{}
}
type badMessage struct {
explanation string
}
// BadMessage returns a badMessage PeerBehaviour.
func BadMessage(peerID p2p.NodeID, explanation string) PeerBehaviour {
return PeerBehaviour{peerID: peerID, reason: badMessage{explanation}}
}
type messageOutOfOrder struct {
explanation string
}
// MessageOutOfOrder returns a messageOutOfOrder PeerBehaviour.
func MessageOutOfOrder(peerID p2p.NodeID, explanation string) PeerBehaviour {
return PeerBehaviour{peerID: peerID, reason: messageOutOfOrder{explanation}}
}
type consensusVote struct {
explanation string
}
// ConsensusVote returns a consensusVote PeerBehaviour.
func ConsensusVote(peerID p2p.NodeID, explanation string) PeerBehaviour {
return PeerBehaviour{peerID: peerID, reason: consensusVote{explanation}}
}
type blockPart struct {
explanation string
}
// BlockPart returns blockPart PeerBehaviour.
func BlockPart(peerID p2p.NodeID, explanation string) PeerBehaviour {
return PeerBehaviour{peerID: peerID, reason: blockPart{explanation}}
}

86
behaviour/reporter.go Normal file
View File

@@ -0,0 +1,86 @@
package behaviour
import (
"errors"
tmsync "github.com/tendermint/tendermint/libs/sync"
"github.com/tendermint/tendermint/p2p"
)
// Reporter provides an interface for reactors to report the behaviour
// of peers synchronously to other components.
type Reporter interface {
Report(behaviour PeerBehaviour) error
}
// SwitchReporter reports peer behaviour to an internal Switch.
type SwitchReporter struct {
sw *p2p.Switch
}
// NewSwitchReporter returns a new SwitchReporter instance which wraps the Switch.
func NewSwitchReporter(sw *p2p.Switch) *SwitchReporter {
return &SwitchReporter{
sw: sw,
}
}
// Report reports the behaviour of a peer to the Switch.
func (spbr *SwitchReporter) Report(behaviour PeerBehaviour) error {
peer := spbr.sw.Peers().Get(behaviour.peerID)
if peer == nil {
return errors.New("peer not found")
}
switch reason := behaviour.reason.(type) {
case consensusVote, blockPart:
spbr.sw.MarkPeerAsGood(peer)
case badMessage:
spbr.sw.StopPeerForError(peer, reason.explanation)
case messageOutOfOrder:
spbr.sw.StopPeerForError(peer, reason.explanation)
default:
return errors.New("unknown reason reported")
}
return nil
}
// MockReporter is a concrete implementation of the Reporter
// interface used in reactor tests to ensure reactors report the correct
// behaviour in manufactured scenarios.
type MockReporter struct {
mtx tmsync.RWMutex
pb map[p2p.NodeID][]PeerBehaviour
}
// NewMockReporter returns a Reporter which records all reported
// behaviours in memory.
func NewMockReporter() *MockReporter {
return &MockReporter{
pb: map[p2p.NodeID][]PeerBehaviour{},
}
}
// Report stores the PeerBehaviour produced by the peer identified by peerID.
func (mpbr *MockReporter) Report(behaviour PeerBehaviour) error {
mpbr.mtx.Lock()
defer mpbr.mtx.Unlock()
mpbr.pb[behaviour.peerID] = append(mpbr.pb[behaviour.peerID], behaviour)
return nil
}
// GetBehaviours returns all behaviours reported on the peer identified by peerID.
func (mpbr *MockReporter) GetBehaviours(peerID p2p.NodeID) []PeerBehaviour {
mpbr.mtx.RLock()
defer mpbr.mtx.RUnlock()
if items, ok := mpbr.pb[peerID]; ok {
result := make([]PeerBehaviour, len(items))
copy(result, items)
return result
}
return []PeerBehaviour{}
}
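
A minimal sketch, not part of this changeset, of how a reactor might report through the Reporter interface above instead of calling the switch directly; the Reactor type and handleVote are hypothetical, while BadMessage, ConsensusVote and Report come from the package shown.

package myreactor

import (
	bh "github.com/tendermint/tendermint/behaviour"
	"github.com/tendermint/tendermint/p2p"
)

// Reactor holds a behaviour Reporter rather than a *p2p.Switch, so tests can
// substitute bh.NewMockReporter().
type Reactor struct {
	reporter bh.Reporter
}

// handleVote stops the peer for an undecodable vote and marks it as good for a
// valid one, matching the behaviours listed in the package documentation.
func (r *Reactor) handleVote(peerID p2p.NodeID, valid bool) error {
	if !valid {
		return r.reporter.Report(bh.BadMessage(peerID, "undecodable vote message"))
	}
	return r.reporter.Report(bh.ConsensusVote(peerID, "valid consensus vote"))
}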

205
behaviour/reporter_test.go Normal file
View File

@@ -0,0 +1,205 @@
package behaviour_test
import (
"sync"
"testing"
bh "github.com/tendermint/tendermint/behaviour"
"github.com/tendermint/tendermint/p2p"
)
// TestMockReporter tests the MockReporter's ability to store reported
// peer behaviour in memory indexed by the peerID.
func TestMockReporter(t *testing.T) {
var peerID p2p.NodeID = "MockPeer"
pr := bh.NewMockReporter()
behaviours := pr.GetBehaviours(peerID)
if len(behaviours) != 0 {
t.Error("Expected to have no behaviours reported")
}
badMessage := bh.BadMessage(peerID, "bad message")
if err := pr.Report(badMessage); err != nil {
t.Error(err)
}
behaviours = pr.GetBehaviours(peerID)
if len(behaviours) != 1 {
t.Error("Expected the peer have one reported behaviour")
}
if behaviours[0] != badMessage {
t.Error("Expected Bad Message to have been reported")
}
}
type scriptItem struct {
peerID p2p.NodeID
behaviour bh.PeerBehaviour
}
// equalBehaviours returns true if a and b contain the same PeerBehaviours with
// the same frequencies, and false otherwise.
func equalBehaviours(a []bh.PeerBehaviour, b []bh.PeerBehaviour) bool {
aHistogram := map[bh.PeerBehaviour]int{}
bHistogram := map[bh.PeerBehaviour]int{}
for _, behaviour := range a {
aHistogram[behaviour]++
}
for _, behaviour := range b {
bHistogram[behaviour]++
}
if len(aHistogram) != len(bHistogram) {
return false
}
for _, behaviour := range a {
if aHistogram[behaviour] != bHistogram[behaviour] {
return false
}
}
for _, behaviour := range b {
if bHistogram[behaviour] != aHistogram[behaviour] {
return false
}
}
return true
}
// TestEqualPeerBehaviours tests that equalBehaviours can compare two slices of
// peer behaviours by the behaviours they contain and the frequencies with
// which those behaviours occur.
func TestEqualPeerBehaviours(t *testing.T) {
var (
peerID p2p.NodeID = "MockPeer"
consensusVote = bh.ConsensusVote(peerID, "voted")
blockPart = bh.BlockPart(peerID, "blocked")
equals = []struct {
left []bh.PeerBehaviour
right []bh.PeerBehaviour
}{
// Empty sets
{[]bh.PeerBehaviour{}, []bh.PeerBehaviour{}},
// Single behaviours
{[]bh.PeerBehaviour{consensusVote}, []bh.PeerBehaviour{consensusVote}},
// Equal Frequencies
{[]bh.PeerBehaviour{consensusVote, consensusVote},
[]bh.PeerBehaviour{consensusVote, consensusVote}},
// Equal frequencies different orders
{[]bh.PeerBehaviour{consensusVote, blockPart},
[]bh.PeerBehaviour{blockPart, consensusVote}},
}
unequals = []struct {
left []bh.PeerBehaviour
right []bh.PeerBehaviour
}{
// Comparing empty sets to non empty sets
{[]bh.PeerBehaviour{}, []bh.PeerBehaviour{consensusVote}},
// Different behaviours
{[]bh.PeerBehaviour{consensusVote}, []bh.PeerBehaviour{blockPart}},
// Same behaviour with different frequencies
{[]bh.PeerBehaviour{consensusVote},
[]bh.PeerBehaviour{consensusVote, consensusVote}},
}
)
for _, test := range equals {
if !equalBehaviours(test.left, test.right) {
t.Errorf("expected %#v and %#v to be equal", test.left, test.right)
}
}
for _, test := range unequals {
if equalBehaviours(test.left, test.right) {
t.Errorf("expected %#v and %#v to be unequal", test.left, test.right)
}
}
}
// TestMockPeerBehaviourReporterConcurrency constructs a scenario in which
// multiple goroutines use the same MockReporter instance.
// This test reproduces the conditions under which MockReporter will
// be used within a reactor's `Receive` method, to ensure thread safety.
func TestMockPeerBehaviourReporterConcurrency(t *testing.T) {
var (
behaviourScript = []struct {
peerID p2p.NodeID
behaviours []bh.PeerBehaviour
}{
{"1", []bh.PeerBehaviour{bh.ConsensusVote("1", "")}},
{"2", []bh.PeerBehaviour{bh.ConsensusVote("2", ""), bh.ConsensusVote("2", ""), bh.ConsensusVote("2", "")}},
{
"3",
[]bh.PeerBehaviour{bh.BlockPart("3", ""),
bh.ConsensusVote("3", ""),
bh.BlockPart("3", ""),
bh.ConsensusVote("3", "")}},
{
"4",
[]bh.PeerBehaviour{bh.ConsensusVote("4", ""),
bh.ConsensusVote("4", ""),
bh.ConsensusVote("4", ""),
bh.ConsensusVote("4", "")}},
{
"5",
[]bh.PeerBehaviour{bh.BlockPart("5", ""),
bh.ConsensusVote("5", ""),
bh.BlockPart("5", ""),
bh.ConsensusVote("5", "")}},
}
)
var receiveWg sync.WaitGroup
pr := bh.NewMockReporter()
scriptItems := make(chan scriptItem)
done := make(chan int)
numConsumers := 3
for i := 0; i < numConsumers; i++ {
receiveWg.Add(1)
go func() {
defer receiveWg.Done()
for {
select {
case pb := <-scriptItems:
if err := pr.Report(pb.behaviour); err != nil {
t.Error(err)
}
case <-done:
return
}
}
}()
}
var sendingWg sync.WaitGroup
sendingWg.Add(1)
go func() {
defer sendingWg.Done()
for _, item := range behaviourScript {
for _, reason := range item.behaviours {
scriptItems <- scriptItem{item.peerID, reason}
}
}
}()
sendingWg.Wait()
for i := 0; i < numConsumers; i++ {
done <- 1
}
receiveWg.Wait()
for _, items := range behaviourScript {
reported := pr.GetBehaviours(items.peerID)
if !equalBehaviours(reported, items.behaviours) {
t.Errorf("expected peer %s to have behaved \nExpected: %#v \nGot %#v \n",
items.peerID, items.behaviours, reported)
}
}
}

Some files were not shown because too many files have changed in this diff.