* initial proposerWaitsUntil implementation
* switch to duration for easier use with timeout scheduling
* add proposal step waiting time with tests
* minor aesthetic change to IsTimely
* minor language fix
* Update internal/consensus/state.go
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
* reword comment
* change accuracy to precision
* move tests to separate pbts test file
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
* state: add an IsTimely function to implement the check for timely in proposer-based timestamps
* move time checks into block.go and add time source mechanism
* timestamp params comment
* add todo related to pbts spec and timestamp params
* remove old istimely
* switch to using built in before function
* lint++
* wip
* move into proposal and create a default set of params
* defer using default cons params for now
* add failing test
* tweak comments in failing test
* failing test comment
* initial attempt at removing prevote locked block logic
* comment out broken function
* undo reset on prevotes
* fixing TestProposeValidBlock test
* update test for completed POL update
* comment updates
* further unlock testing
* update comments
* Update internal/consensus/state.go
* spacing nit
* comment cleanup
* nil check in addVote
* update unlock description
* update precommit on relock comment
* add ensure new timeout back
* rename IsZero to IsNil and replace uses of block len check with helper
* add testing.T to new assertions
* begin removing unlock condition
* fix TestStateProposerSelection2 to precommit for nil correctly
* remove erroneous sleep
* update TestStatePOL comment
* update relock test to be more clear
* add _ into test names
* rename slashing
* update no-relock function to be cleaner
* do not relock on old proposal test cleanup
* con state name update
* remove all references to unlock
* update test comments to include new
* add relock test
* add ensureRelock to common_test
* remove all event unlock
* remove unlock checks
* no lint add space
* lint ++
* add test for nil prevote on different proposal
* fix prevote nil condition
* fix defaultDoPrevote
* state_test.go fixes to accommodate prevoting for nil
* add failing test for POL from previous round case
* update prevote logic to prevote POL from previous round
* state.go comment fixes
* update validatePrevotes to correctly look for nil
* update new test name and comment
* update POLFromPreviousRound test
* fixes post merge
* fix spacing
* make the linter happy
* change prevote log message
* update prevote nil debug line
* update enterPrevote comment
* lint
* Update internal/consensus/state.go
Co-authored-by: Dev Ojha <ValarDragon@users.noreply.github.com>
* Update internal/consensus/state.go
Co-authored-by: Dev Ojha <ValarDragon@users.noreply.github.com>
* add english description of alg rules
* Update internal/consensus/state.go
Co-authored-by: Dev Ojha <ValarDragon@users.noreply.github.com>
* comment fixes from review
* fix comment
* fix comment
Co-authored-by: Dev Ojha <ValarDragon@users.noreply.github.com>
This is a very small change, but removes a method from the
`service.Service` interface (a win!) and forces callers to explicitly
pass loggers into objects during construction rather than (later)
injecting them. There's no real need for this kind of lazy
construction of loggers, and I think mutable loggers create a decent
potential for confusion.
The main concern I have is that this changes the constructor API for
ABCI clients. I think this is fine, and I suspect that as we plumb
contexts through and make changes to the RPC services, there'll be a
number of similar sorts of changes to various (quasi) public
interfaces, which I think we should welcome.
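A minimal sketch of the constructor-injection pattern described here, using the standard library logger and a hypothetical client type rather than the actual ABCI client:

```go
package main

import (
	"log"
	"os"
)

// Client is a hypothetical ABCI-style client that receives its logger at
// construction time instead of exposing a SetLogger method later.
type Client struct {
	logger *log.Logger
}

// NewClient requires the caller to supply a logger up front, so the field
// is never mutated after construction.
func NewClient(logger *log.Logger) *Client {
	return &Client{logger: logger}
}

func (c *Client) Start() {
	c.logger.Println("client started")
}

func main() {
	logger := log.New(os.Stdout, "abci-client: ", log.LstdFlags)
	c := NewClient(logger)
	c.Start()
}
```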
Since the doc site is built from the backport branches, the config changes for
a new major release also need to be replicated into the backport branch as well
as master. Update the release docs to mention that specifically, since I missed
it during the v0.35 release.
This pull request updates the `protocgen.sh` script to insert the `go_package` option into all of the downloaded proto files. A related pull request into the spec repo removes these options from the .proto files: https://github.com/tendermint/spec/pull/358
This pull request, along with the related spec PR, aims to move the creation of the `tendermintdev/docker-build-proto` container into the spec repo. This change also relies on several fixes to that container that are made in the PR into the spec repo.
I think calling os.Exit at arbitrary points is _bad_ and is worth
deleting. I think panics in the case of data corruption have a chance
of providing useful information.
* update the proposer-based timestamps spec per discussion with @cason
* add vote to list of changed structs
* sentence fix
* clarify crypto sig logic update
* language updates per feedback from @cason
* fix POL < 0 wording
* update timely description for non-polka
When dialing fails, we should reduce the score of the peer,
which puts the peer at (potentially) greater chances of being removed
from the peer manager, and reduces the chance of the peer being
gossiped by the PEX reactor.
The evidence test produces a set of mock evidence in the evidence pool of the 'Primary' node. The test then fills the evidence pools of secondaries with half of this mock evidence. Finally, the test waits until the secondary has an evidence pool as full as the primary's.
The assertions that are removed here were checking that the primary and secondaries' evidence channels were empty. However, nothing in the test actually ensures that the channels are empty. The test only waits for the secondaries to have received the complete set of evidence, and the secondaries already received half of the evidence at the beginning. It's more than possible that the secondaries can receive the complete set of evidence and not finish reading the duplicate evidence off the channels.
As a safety measure, don't allow a query string to be unreasonably
long. The query filter is not especially efficient, so a query that
needs more than basic detail should filter coarsely in the subscriber
and refine on the client side.
This affects Subscribe and TxSearch queries.
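A rough sketch of the kind of guard this adds, with a hypothetical length limit (the actual limit chosen may differ):

```go
package main

import (
	"errors"
	"fmt"
)

// maxQueryLength is a hypothetical cap; the real limit is set elsewhere.
const maxQueryLength = 512

// checkQueryLength rejects unreasonably long query strings before any
// parsing or filtering work is done.
func checkQueryLength(query string) error {
	if len(query) > maxQueryLength {
		return errors.New("query string is too long")
	}
	return nil
}

func main() {
	fmt.Println(checkQueryLength("tm.event = 'Tx'")) // <nil>
}
```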
Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.10.3 to 1.10.4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/lib/pq/releases">github.com/lib/pq's releases</a>.</em></p>
<blockquote>
<h2>v1.10.4</h2>
<ul>
<li>Keep track of (context cancelled) error on connection.</li>
<li>Fix android build</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="8446d16b89"><code>8446d16</code></a> issue 1062: Keep track of (context cancelled) error on connection, and make r...</li>
<li><a href="6a102c04ac"><code>6a102c0</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/lib/pq/issues/1060">#1060</a> from ian4hu/patch-1</li>
<li><a href="a54251e1b6"><code>a54251e</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/lib/pq/issues/1061">#1061</a> from mjl-/fix-flaky-TestConnPrepareContext</li>
<li><a href="2b4fa17b44"><code>2b4fa17</code></a> Fix flaky TestConnPrepareContext</li>
<li><a href="b33a1b722c"><code>b33a1b7</code></a> Fix android build</li>
<li><a href="16e9cadb5a"><code>16e9cad</code></a> Fix build in android</li>
<li><a href="26399a7687"><code>26399a7</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/lib/pq/issues/1057">#1057</a> from jfcg/master</li>
<li><a href="087077605f"><code>0870776</code></a> fix possible integer truncation</li>
<li><a href="c01ab77091"><code>c01ab77</code></a> Create codeql-analysis.yml</li>
<li>See full diff in <a href="https://github.com/lib/pq/compare/v1.10.3...v1.10.4">compare view</a></li>
</ul>
</details>
This follows the same model as we did in the p2p package.
Rework the indexer service constructor to take a struct of arguments,
which makes it easier to construct the optional settings.
Deprecate but do not remove the existing constructor.
Clean up node initialization a little bit.
This is part of the work described by #7156.
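A small sketch of the argument-struct constructor pattern; the type and field names here are illustrative, not the actual indexer API:

```go
package main

import "fmt"

// EventSink stands in for a configured indexing backend.
type EventSink interface{ Name() string }

// IndexerServiceArgs collects the optional settings for the indexer
// service; the field names are illustrative only.
type IndexerServiceArgs struct {
	Sinks   []EventSink
	Metrics bool
}

// IndexerService is a stand-in for the real service type.
type IndexerService struct {
	sinks []EventSink
}

// NewIndexerService constructs the service from an argument struct, which
// keeps call sites readable as optional settings grow.
func NewIndexerService(args IndexerServiceArgs) *IndexerService {
	return &IndexerService{sinks: args.Sinks}
}

func main() {
	svc := NewIndexerService(IndexerServiceArgs{})
	fmt.Println(len(svc.sinks))
}
```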
Remove "unbuffered subscriptions" from the pubsub service.
Replace them with a dedicated blocking "observer" mechanism.
Use the observer mechanism for indexing.
Add a SubscribeWithArgs method and deprecate the old Subscribe
method. Remove SubscribeUnbuffered entirely (breaking).
Rework the Subscription interface to eliminate exposed channels.
Subscriptions now use a context to manage lifecycle notifications.
Internalize the eventbus package.
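A rough sketch of what a channel-free, context-driven subscription loop can look like; the types and method names are illustrative, not the package's actual API:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Message is a placeholder for a published event.
type Message struct{ Data string }

// Subscription delivers messages without exposing an internal channel;
// lifecycle is tied to the context passed to Next.
type Subscription struct{ in chan Message }

// Next blocks until a message arrives or the context ends.
func (s *Subscription) Next(ctx context.Context) (Message, error) {
	select {
	case m := <-s.in:
		return m, nil
	case <-ctx.Done():
		return Message{}, ctx.Err()
	}
}

func main() {
	sub := &Subscription{in: make(chan Message, 1)}
	sub.in <- Message{Data: "NewBlock"}

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	m, err := sub.Next(ctx)
	fmt.Println(m.Data, err) // NewBlock <nil>
}
```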
This is intended to document some ergonomic and reliability issues with the
existing implementation of the event subscription service on the Tendermint
node, and to discuss possible approaches to improving them.
Prior to #7177, these benchmarks did not provide any useful data about the
performance of the pubsub system (in fact, prior to #7178, half of them did not
work at all).
Specifically, they create a bunch of subscribers with 1 buffer slot on a
default publisher and copy messages to them. But because the publisher is
single-threaded, and doesn't block for delivery, all this tested is how long it
takes to receive a single message from a channel and deliver it to another
channel. The resulting stat does not even vary meaningfully with batch size,
since it's testing a serial workload.
Since #7177 the benchmarks do correctly reflect delivery time (good), but still
do not tell us anything useful: The latencies that matter for pubsub are not
internal queuing, but the effects of backpressure on the publisher via the
subscribers. That's an integration problem, and simulating a fake workload does
not provide meaningful results.
On that basis, remove these benchmarks.
Updates #7156, and a follow-up to #7070.
Event subscriptions in Tendermint currently use a fixed-length Go
channel as a queue. When the channel fills up, the publisher
immediately terminates the subscription. This prevents slow
subscribers from creating memory pressure on the node by not
servicing their queue fast enough.
Replace the buffered channel used to deliver events to buffered
subscribers with an explicit queue. The queue provides a soft
quota and burst credit mechanism: Clients that usually keep up
can survive occasional bursts, without allowing truly slow
clients to hog resources indefinitely.
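A toy model of the soft quota and burst credit idea, with made-up numbers and names:

```go
package main

import (
	"errors"
	"fmt"
)

// queue models a per-subscriber queue with a soft quota and burst credit:
// items beyond the quota consume credit, and credit is returned as the
// subscriber drains its backlog.
type queue struct {
	items  []string
	quota  int // steady-state allowance
	credit int // remaining burst allowance
}

func (q *queue) push(item string) error {
	if len(q.items) >= q.quota {
		if q.credit == 0 {
			return errors.New("subscriber too slow: over quota and out of credit")
		}
		q.credit-- // burst: spend credit instead of terminating immediately
	}
	q.items = append(q.items, item)
	return nil
}

func (q *queue) pop() (string, bool) {
	if len(q.items) == 0 {
		return "", false
	}
	item := q.items[0]
	q.items = q.items[1:]
	if len(q.items) < q.quota && q.credit < q.quota {
		q.credit++ // regain credit while back under quota
	}
	return item, true
}

func main() {
	q := &queue{quota: 2, credit: 2}
	for i := 0; i < 5; i++ {
		fmt.Println(q.push(fmt.Sprintf("event-%d", i)))
	}
	fmt.Println(q.pop())
}
```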
Fixes #7176. Some of the benchmarks create a bunch of different subscriptions all sharing the same query. These were all using the same client ID, which violates one of the subscriber rules. Ensure each subscriber gets a unique ID.
This has been broken as long as this library has been in the repo—I tracked it back to bb9aa85d and it was already failing there, so I think this never really worked. I'm not sure these test anything useful, but at least now they run.
Bumps [url-parse](https://github.com/unshiftio/url-parse) from 1.5.1 to 1.5.3.
<details>
<summary>Commits</summary>
<ul>
<li><a href="ad44493166"><code>ad44493</code></a> [dist] 1.5.3</li>
<li><a href="c7984617e2"><code>c798461</code></a> [fix] Fix host parsing for file URLs (<a href="https://github-redirect.dependabot.com/unshiftio/url-parse/issues/210">#210</a>)</li>
<li><a href="201034b867"><code>201034b</code></a> [dist] 1.5.2</li>
<li><a href="2d9ac2c940"><code>2d9ac2c</code></a> [fix] Sanitize only special URLs (<a href="https://github-redirect.dependabot.com/unshiftio/url-parse/issues/209">#209</a>)</li>
<li><a href="fb128af4f4"><code>fb128af</code></a> [fix] Use <code>'null'</code> as <code>origin</code> for non special URLs</li>
<li><a href="fed6d9e338"><code>fed6d9e</code></a> [fix] Add a leading slash only if the URL is special</li>
<li><a href="94872e7ab9"><code>94872e7</code></a> [fix] Do not incorrectly set the <code>slashes</code> property to <code>true</code></li>
<li><a href="81ab967889"><code>81ab967</code></a> [fix] Ignore slashes after the protocol for special URLs</li>
<li><a href="ee22050a48"><code>ee22050</code></a> [ci] Use GitHub Actions</li>
<li><a href="d2979b586d"><code>d2979b5</code></a> [fix] Special case the <code>file:</code> protocol (<a href="https://github-redirect.dependabot.com/unshiftio/url-parse/issues/204">#204</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/unshiftio/url-parse/compare/1.5.1...1.5.3">compare view</a></li>
</ul>
</details>
Bumps [path-parse](https://github.com/jbgutierrez/path-parse) from 1.0.6 to 1.0.7.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a href="https://github.com/jbgutierrez/path-parse/commits/v1.0.7">compare view</a></li>
</ul>
</details>
Bumps [ws](https://github.com/websockets/ws) from 6.2.1 to 6.2.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/websockets/ws/releases">ws's releases</a>.</em></p>
<blockquote>
<h2>6.2.2</h2>
<h1>Bug fixes</h1>
<ul>
<li>Backported 00c425ec to the 6.x release line (78c676d2).</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="9bdb58070d"><code>9bdb580</code></a> [dist] 6.2.2</li>
<li><a href="78c676d2a1"><code>78c676d</code></a> [security] Fix ReDoS vulnerability</li>
<li>See full diff in <a href="https://github.com/websockets/ws/compare/6.2.1...6.2.2">compare view</a></li>
</ul>
</details>
Bumps [postcss](https://github.com/postcss/postcss) from 7.0.35 to 7.0.39.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/postcss/postcss/releases">postcss's releases</a>.</em></p>
<blockquote>
<h2>7.0.39</h2>
<ul>
<li>Reduce package size.</li>
<li>Backport <code>nanocolors</code> to <code>picocolors</code> migration.</li>
</ul>
<h2>7.0.38</h2>
<ul>
<li>Update <code>Processor#version</code>.</li>
</ul>
<h2>7.0.37</h2>
<ul>
<li>Backport <code>chalk</code> to <code>nanocolors</code> migration.</li>
</ul>
<h2>7.0.36</h2>
<ul>
<li>Backport ReDoS vulnerabilities from PostCSS 8.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/postcss/postcss/blob/7.0.39/CHANGELOG.md">postcss's changelog</a>.</em></p>
<blockquote>
<h2>7.0.39</h2>
<ul>
<li>Reduce package size.</li>
<li>Backport <code>nanocolors</code> to <code>picocolors</code> migration.</li>
</ul>
<h2>7.0.38</h2>
<ul>
<li>Update <code>Processor#version</code>.</li>
</ul>
<h2>7.0.37</h2>
<ul>
<li>Backport <code>chalk</code> to <code>nanocolors</code> migration.</li>
</ul>
<h2>7.0.36</h2>
<ul>
<li>Backport ReDoS vulnerabilities from PostCSS 8.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="e17c1ef762"><code>e17c1ef</code></a> Release 7.0.39 version</li>
<li><a href="6791bd3d5f"><code>6791bd3</code></a> Reduce npm package</li>
<li><a href="44c581a55a"><code>44c581a</code></a> Replace nanocolors with picocolors</li>
<li><a href="8ba21fd8f4"><code>8ba21fd</code></a> Remove eslint-ci</li>
<li><a href="3994c4aa3c"><code>3994c4a</code></a> Release 7.0.38 version</li>
<li><a href="6944e1dd80"><code>6944e1d</code></a> Remove development keys from package.json</li>
<li><a href="4dd0af024a"><code>4dd0af0</code></a> Release 7.0.37 version</li>
<li><a href="8408eb4105"><code>8408eb4</code></a> Add compilation step</li>
<li><a href="0c680639c3"><code>0c68063</code></a> Move tests to GitHub Actions</li>
<li><a href="98b61ba5b4"><code>98b61ba</code></a> Replace chalk to nanocolors</li>
<li>Additional commits viewable in <a href="https://github.com/postcss/postcss/compare/7.0.35...7.0.39">compare view</a></li>
</ul>
</details>
The main (and minor) win of this PR is that the transport is fully the
responsibility of the router and the node doesn't need to be responsible for its lifecycle.
I saw one of these tests fail and it looks like it was using code that
wasn't being called anywhere, so I deleted it, and avoided the package
name aliasing.
We stopped testing these configurations a while ago, and it doesn't
really make sense to allow nodes to run in this configuration. This
drops support for non-blocksync nodes and cleans up the
configuration/tests accordingly.
Closes: #6908
This pull request adds a new "message_type" label to the send/recv bytes metrics calculated in the p2p code.
Below is a snippet of the updated metrics that includes the updated label:
```
tendermint_p2p_peer_receive_bytes_total{chID="32",chain_id="ci",message_type="consensus_HasVote",peer_id="2551a13ed720101b271a5df4816d1e4b3d3bd133"} 652
tendermint_p2p_peer_receive_bytes_total{chID="32",chain_id="ci",message_type="consensus_HasVote",peer_id="4b1068420ef739db63377250553562b9a978708a"} 631
tendermint_p2p_peer_receive_bytes_total{chID="32",chain_id="ci",message_type="consensus_HasVote",peer_id="927c50a5e508c747830ce3ba64a3f70fdda58ef2"} 631
tendermint_p2p_peer_receive_bytes_total{chID="32",chain_id="ci",message_type="consensus_NewRoundStep",peer_id="2551a13ed720101b271a5df4816d1e4b3d3bd133"} 393
tendermint_p2p_peer_receive_bytes_total{chID="32",chain_id="ci",message_type="consensus_NewRoundStep",peer_id="4b1068420ef739db63377250553562b9a978708a"} 357
tendermint_p2p_peer_receive_bytes_total{chID="32",chain_id="ci",message_type="consensus_NewRoundStep",peer_id="927c50a5e508c747830ce3ba64a3f70fdda58ef2"} 386
```
Rework the internal plumbing of the server. This change does not modify the
exported interfaces or semantics of the package, and all the existing tests
still pass.
The main changes here are to:
- Simplify the interface for subscription indexing with a typed index rather
than a single nested map.
- Ensure orderly shutdown of channels, so that there is no longer a dynamic
race with concurrent publishers & subscribers at shutdown.
- Remove a layer of indirection between publishers and subscribers. This mainly
helps legibility.
- Remove order dependencies between registration and delivery.
- Add documentation comments where they seemed helpful, and clarified the
existing comments where it was practical.
Although performance was not a primary goal of this change, the simplifications
did very slightly reduce memory use and increase throughput on the existing
benchmarks, though the delta is not statistically significant.
```
BENCHMARK                BEFORE  AFTER  SPEEDUP (%)  B/op (B)  B/op (A)
Benchmark10Clients-12      5947   5566          6.4      2017      1942
Benchmark100Clients-12     6111   5762          5.7      1992      1910
Benchmark1000Clients-12    6983   6344          9.2      2046      1959
```
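For illustration, a simplified version of the "typed index rather than a single nested map" idea described above; the type names are invented:

```go
package main

import "fmt"

// subInfo describes one subscription.
type subInfo struct {
	clientID string
	query    string
}

// subIndex keeps subscriptions indexed along the axes the server needs,
// replacing a single nested map structure.
type subIndex struct {
	all      []*subInfo
	byClient map[string][]*subInfo
	byQuery  map[string][]*subInfo
}

func newSubIndex() *subIndex {
	return &subIndex{
		byClient: make(map[string][]*subInfo),
		byQuery:  make(map[string][]*subInfo),
	}
}

func (idx *subIndex) add(s *subInfo) {
	idx.all = append(idx.all, s)
	idx.byClient[s.clientID] = append(idx.byClient[s.clientID], s)
	idx.byQuery[s.query] = append(idx.byQuery[s.query], s)
}

func main() {
	idx := newSubIndex()
	idx.add(&subInfo{clientID: "c1", query: "tm.event = 'Tx'"})
	fmt.Println(len(idx.byClient["c1"]), len(idx.byQuery["tm.event = 'Tx'"]))
}
```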
This pull request fixes a panic that exists in both mempools. The panic occurs when the ABCI client misses a response from the ABCI application. This happens when the ABCI client drops the request as a result of a full client queue. The fix here was to loop through the ordered list of recheck-tx in the callback until one matches the currently observed recheck request.
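A simplified illustration of that fix: walk the ordered recheck list until the observed response matches, instead of assuming the head always matches. The names and types here are hypothetical:

```go
package main

import (
	"bytes"
	"fmt"
)

// tx is a placeholder for a mempool transaction awaiting recheck.
type tx struct{ payload []byte }

// advanceRecheck walks the ordered recheck list until it finds the
// transaction the current response refers to, skipping entries whose
// responses were dropped by the ABCI client. It returns the remaining list.
func advanceRecheck(pending []*tx, observed []byte) []*tx {
	for len(pending) > 0 {
		if bytes.Equal(pending[0].payload, observed) {
			return pending[1:] // matched: consume it and stop
		}
		// The response for this transaction was missed; skip it rather
		// than panicking on the mismatch.
		pending = pending[1:]
	}
	return pending
}

func main() {
	pending := []*tx{{[]byte("a")}, {[]byte("b")}, {[]byte("c")}}
	pending = advanceRecheck(pending, []byte("b"))
	fmt.Println(len(pending)) // 1
}
```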
This is, perhaps, the trivial final piece of #7075 that I've been
working on.
There's more work to be done:
- push more of the setup into the packages themselves
- move channel-based sending/filtering out of the
- simplify the buffering throughout the p2p stack.
This patch was needed to pass the buf breakage check for the proto file removed
in #7121, but now that master contains the change we no longer need the patch.
This change removes the partial gRPC interface to the RPC service, which was
deprecated in resolution of #6718.
Details:
- rpc: Remove the client and server interfaces and proto definitions.
- Remove the gRPC settings from the config library.
- Remove gRPC setup for the RPC service in the node startup.
- Fix various test helpers to remove gRPC bits.
- Remove the --rpc.grpc-laddr flag from the CLI.
Note that to satisfy the protobuf interface check, this change also includes a
temporary edit to buf.yaml, which I will revert after this is merged.
This metric describes itself as 'pending' but never actually decrements when messages are removed from the queue.
This change fixes that by decrementing the metric when the data is removed from the queue.
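A minimal sketch of the fix pattern using the Prometheus Go client; the metric name here is invented, not the p2p package's actual metric:

```go
package main

import "github.com/prometheus/client_golang/prometheus"

// pendingEnvelopes is an illustrative gauge of queued messages; the real
// metric name in the p2p package may differ.
var pendingEnvelopes = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "p2p_queue_pending_envelopes",
	Help: "Number of envelopes currently waiting in the send queue.",
})

func enqueue() { pendingEnvelopes.Inc() }

// dequeue now decrements the gauge when a message leaves the queue, so the
// metric tracks the queue depth instead of growing forever.
func dequeue() { pendingEnvelopes.Dec() }

func main() {
	prometheus.MustRegister(pendingEnvelopes)
	enqueue()
	dequeue()
}
```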
This PR adds an initial set of metrics for ABCI. The initial metrics enable the calculation of timing histograms and call counts for each of the ABCI methods. The metrics are also labeled as either 'sync' or 'async' to determine if the method call was performed using ABCI's `*Async` methods.
An example of these metrics is included here for reference:
```
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.0001"} 0
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.0004"} 5
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.002"} 12
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.009"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.02"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.1"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.65"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="2"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="6"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="25"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="+Inf"} 13
tendermint_abci_connection_method_timing_sum{chain_id="ci",method="commit",type="sync"} 0.007802058000000001
tendermint_abci_connection_method_timing_count{chain_id="ci",method="commit",type="sync"} 13
```
These metrics can easily be graphed using Prometheus's `histogram_quantile(...)` function to pick out a particular quantile to graph or examine. I chose buckets that are a rough estimate of the expected range of times for ABCI operations. They start at 0.0001 seconds and range to 25 seconds. The hope is that this range captures enough possible times to be useful for us and operators.
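As a rough sketch, a histogram of this shape can be declared with the Prometheus Go client using the bucket boundaries from the sample above; this is not the repository's actual metrics wiring:

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// methodTiming mirrors the shape of the metric shown above: a histogram of
// ABCI call durations labeled by chain, method, and sync/async type. The
// bucket boundaries are the ones from the sample output.
var methodTiming = prometheus.NewHistogramVec(prometheus.HistogramOpts{
	Namespace: "tendermint",
	Subsystem: "abci",
	Name:      "connection_method_timing",
	Help:      "Timing of ABCI method calls in seconds.",
	Buckets:   []float64{0.0001, 0.0004, 0.002, 0.009, 0.02, 0.1, 0.65, 2, 6, 25},
}, []string{"chain_id", "method", "type"})

// observeCommit records one synchronous commit call's duration.
func observeCommit(start time.Time) {
	methodTiming.With(prometheus.Labels{
		"chain_id": "ci",
		"method":   "commit",
		"type":     "sync",
	}).Observe(time.Since(start).Seconds())
}

func main() {
	prometheus.MustRegister(methodTiming)
	observeCommit(time.Now())
}
```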
Fixes #7068. The build-docker rule relies on being able to run make
build-linux, but did not pull the Makefile into the build context.
There are various ways to fix this, but this was probably the smallest.
Fixes #7098. The light client documentation moved to the spec repository.
I was not able to figure out what happened to light-client-protocol.md; it was removed in #5252 but no corresponding file exists in the spec repository. Since the spec also discusses the protocol, this change simply links to the spec and removes the non-functional reference.
Alternatively we could link to the top-level [light client doc](https://docs.tendermint.com/master/tendermint-core/light-client.html) if you think that's better.
This tweaks the connectivity of test configurations, in hopes that more will be viable.
It additionally reduces the prevalence of testing the legacy mempool.
My earlier p2p cleanup code removed support for the p2p tests from the
e2e generator and runner, but missed removing the CI
configuration. This patch remedies that.
While discussing a question about the indexing interface (#7044), we found some
confusion about the intent of the design decisions in ADR 065.
Based on discussion with the original authors of the ADR, this commit adds some
language to the Decisions section to spell out the intentions more clearly, and
to call out future work that this ADR did not explicitly decide about.
Addresses one of the concerns with #7041.
Provides a mechanism (via the RPC interface) to delete a single transaction, described by its hash, from the mempool. The method returns an error if the transaction cannot be found. Once the transaction is removed, it remains in the cache and cannot be resubmitted until the cache is cleared or it expires from the cache.
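A toy model of the described behaviour (error when the hash is unknown, delete otherwise); this is not the actual mempool API:

```go
package main

import (
	"errors"
	"fmt"
)

// mempool is a toy map from hash to raw transaction bytes.
type mempool struct {
	txs map[string][]byte
}

// removeTxByKey deletes a transaction by its hash and reports an error when
// no such transaction is queued, mirroring the behaviour described above.
func (m *mempool) removeTxByKey(hash string) error {
	if _, ok := m.txs[hash]; !ok {
		return errors.New("transaction not found in mempool")
	}
	delete(m.txs, hash)
	return nil
}

func main() {
	m := &mempool{txs: map[string][]byte{"ab12": []byte("tx")}}
	fmt.Println(m.removeTxByKey("ab12"), m.removeTxByKey("ab12"))
}
```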
This code hasn't been battle tested, and seems to have grown
increasingly flaky in tests. Given our general direction of reducing
queue complexity over the next couple of releases I think it makes
sense to remove it.
This PR tackles the case of using the e2e application in a long lived testnet. The application continually saves snapshots (usually every 100 blocks) which after a while bloats the size of the application. This PR prunes older snapshots so that only the most recent 10 snapshots remain.
A few notes:
- this is not all the deletion that we can do, but this is the most
"simple" case: it leaves in shims, and there's some trivial
additional cleanup to the transport that can happen but that
requires writing more code, and I wanted this to be easy to review
above all else.
- This should land *after* we cut the branch for 0.35, but I'm
anticipating that to happen soon, and I wanted to run this through
CI.
The race occurred as a result of a goroutine launched by `processPeerUpdate` racing with the `OnStop` method. The `processPeerUpdates` goroutine deletes from the map as `OnStop` is reading from it. This change updates the `OnStop` method to wait for the peer updates channel to be done before closing the peers. It also copies the map contents to a new map so that it will not conflict with the view of the map that the goroutine created in `processPeerUpdate` sees.
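A minimal sketch of the copy-under-lock pattern this describes, using a toy peer set:

```go
package main

import (
	"fmt"
	"sync"
)

// peerSet is a toy model of the reactor's peer map guarded by a mutex.
type peerSet struct {
	mtx   sync.RWMutex
	peers map[string]bool
}

// snapshot copies the current peers so that shutdown can iterate over the
// copy without racing with goroutines that delete from the live map.
func (ps *peerSet) snapshot() map[string]bool {
	ps.mtx.RLock()
	defer ps.mtx.RUnlock()

	out := make(map[string]bool, len(ps.peers))
	for id, v := range ps.peers {
		out[id] = v
	}
	return out
}

func main() {
	ps := &peerSet{peers: map[string]bool{"peer-a": true, "peer-b": true}}
	for id := range ps.snapshot() {
		fmt.Println("closing", id)
	}
}
```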
This commit should be one of the first to land as part of the v0.36
cycle *after* cutting the 0.35 branch.
The blocksync/v2 reactor was originally implemented as an experiment
to produce an implementation of the blocksync protocol that would be
easier to test and validate, but it was never appropriately
operationalized and this implementation was never fully debugged. When
the p2p layer was refactored as part of the 0.35 cycle, the v2
implementation was not refactored and it was left in the codebase but
not removed. This commit just removes all references to it.
This script is referenced from the release documentation, we should make sure it's functional. This is helpful in generating the "Special Thanks" section of the changelog.
This is intended to fix a test failure that occurs in the p2p state provider. The issue presents as the state provider timing out waiting for the consensus params response.
The reason that this can occur is that the statesync reactor may attempt to respond to the params request before the state provider is ready to read it. This results in the reactor hitting the `default` case seen here and then never sending on the channel. The state provider will then block waiting for a response and never receive one because the reactor opted not to send it.
When statesync is stopped during shutdown, it has the possibility of deadlocking. A dump of goroutines reveals that this is related to the peerUpdates channel not returning anything on its `Done()` channel when `OnStop` is called. As this is occurring, `processPeerUpdate` is attempting to acquire the reactor lock. It appears that this lock can never be acquired. I looked for the places where the lock may remain locked accidentally and cleaned them up in hopes of eradicating the issue. Dumps of the relevant goroutines may be found below. Note that the line numbers below are relative to the code in the `v0.35.0-rc1` tag.
```
goroutine 36 [chan receive]:
github.com/tendermint/tendermint/internal/statesync.(*Reactor).OnStop(0xc00058f200)
github.com/tendermint/tendermint/internal/statesync/reactor.go:243 +0x117
github.com/tendermint/tendermint/libs/service.(*BaseService).Stop(0xc00058f200, 0x0, 0x0)
github.com/tendermint/tendermint/libs/service/service.go:171 +0x323
github.com/tendermint/tendermint/node.(*nodeImpl).OnStop(0xc0001ea240)
github.com/tendermint/tendermint/node/node.go:769 +0x132
github.com/tendermint/tendermint/libs/service.(*BaseService).Stop(0xc0001ea240, 0x0, 0x0)
github.com/tendermint/tendermint/libs/service/service.go:171 +0x323
github.com/tendermint/tendermint/cmd/tendermint/commands.NewRunNodeCmd.func1.1()
github.com/tendermint/tendermint/cmd/tendermint/commands/run_node.go:143 +0x62
github.com/tendermint/tendermint/libs/os.TrapSignal.func1(0xc000629500, 0x7fdb52f96358, 0xc0002b5030, 0xc00000daa0)
github.com/tendermint/tendermint/libs/os/os.go:26 +0x102
created by github.com/tendermint/tendermint/libs/os.TrapSignal
github.com/tendermint/tendermint/libs/os/os.go:22 +0xe6
goroutine 188 [semacquire]:
sync.runtime_SemacquireMutex(0xc00026b1cc, 0x0, 0x1)
runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xc00026b1c8)
sync/mutex.go:138 +0x105
sync.(*Mutex).Lock(...)
sync/mutex.go:81
sync.(*RWMutex).Lock(0xc00026b1c8)
sync/rwmutex.go:111 +0x90
github.com/tendermint/tendermint/internal/statesync.(*Reactor).processPeerUpdate(0xc00026b080, 0xc000650008, 0x28, 0x124de90, 0x4)
github.com/tendermint/tendermint/internal/statesync/reactor.go:849 +0x1a5
github.com/tendermint/tendermint/internal/statesync.(*Reactor).processPeerUpdates(0xc00026b080)
github.com/tendermint/tendermint/internal/statesync/reactor.go:883 +0xab
created by github.com/tendermint/tendermint/internal/statesync.(*Reactor).OnStart
github.com/tendermint/tendermint/internal/statesync/reactor.go:219 +0xcd
```
When shutting down blocksync, it is observed that the process can hang completely. A dump of running goroutines reveals that this is due to goroutines not listening on the correct shutdown signal. Namely, the `poolRoutine` goroutine does not wait on `pool.Quit`. The `poolRoutine` does not receive any other shutdown signal during `OnStop` because it must stop before the `r.closeCh` is closed. Currently the `poolRoutine` listens on the `closeCh`, which will not close until the `poolRoutine` stops and calls `poolWG.Done()`.
This change also puts the `requestRoutine()` in the `OnStart` method to make it more visible since it does not rely on anything that is spawned in the `poolRoutine`.
```
goroutine 183 [semacquire]:
sync.runtime_Semacquire(0xc0000d3bd8)
runtime/sema.go:56 +0x45
sync.(*WaitGroup).Wait(0xc0000d3bd0)
sync/waitgroup.go:130 +0x65
github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).OnStop(0xc0000d3a00)
github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:193 +0x47
github.com/tendermint/tendermint/libs/service.(*BaseService).Stop(0xc0000d3a00, 0x0, 0x0)
github.com/tendermint/tendermint/libs/service/service.go:171 +0x323
github.com/tendermint/tendermint/node.(*nodeImpl).OnStop(0xc00052c000)
github.com/tendermint/tendermint/node/node.go:758 +0xc62
github.com/tendermint/tendermint/libs/service.(*BaseService).Stop(0xc00052c000, 0x0, 0x0)
github.com/tendermint/tendermint/libs/service/service.go:171 +0x323
github.com/tendermint/tendermint/cmd/tendermint/commands.NewRunNodeCmd.func1.1()
github.com/tendermint/tendermint/cmd/tendermint/commands/run_node.go:143 +0x62
github.com/tendermint/tendermint/libs/os.TrapSignal.func1(0xc000df6d20, 0x7f04a68da900, 0xc0004a8930, 0xc0005a72d8)
github.com/tendermint/tendermint/libs/os/os.go:26 +0x102
created by github.com/tendermint/tendermint/libs/os.TrapSignal
github.com/tendermint/tendermint/libs/os/os.go:22 +0xe6
goroutine 161 [select]:
github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).poolRoutine(0xc0000d3a00, 0x0)
github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:464 +0x2b3
created by github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).OnStart
github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:174 +0xf1
goroutine 162 [select]:
github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).processBlockSyncCh(0xc0000d3a00)
github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:310 +0x151
created by github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).OnStart
github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:177 +0x54
goroutine 163 [select]:
github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).processPeerUpdates(0xc0000d3a00)
github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:363 +0x12b
created by github.com/tendermint/tendermint/internal/blocksync/v0.(*Reactor).OnStart
github.com/tendermint/tendermint/internal/blocksync/v0/reactor.go:178 +0x76
```
This test reliably gets hung up on network configuration (which may
be a real issue), but its network setup is hand-cranked and we should
ensure that the test focuses on its core assertions and doesn't fail for
test-architecture reasons.
Fix the order of lines in docs/versions so that v0.34 is last (the current release).
Related changes:
- Update docs/DOCS_README.md to reflect the current state of how we publish the site.
- Fix the build-docs target in the Makefile so the build does not perturb or clobber package-lock.json.
I observed a couple of problems with the generator in some recent tests:
- there were a couple of hybrid test cases which did not have any
  legacy nodes (randomness and all). I changed the probability to
  produce more reliable results.
- added options to the generation to be able to add a max (to
  complement the earlier min) number of nodes for local testing.
- added an option to support reversing the sort order so "more
complex" networks were first, as well as tweaked some of the point
values.
- this refactored the generator's CLI parsing to be a bit clearer.
The main effect of this change is to flush the socket client and server message
encoding buffers immediately once the message is fully and correctly encoded.
This allows us to remove the timer and some other special cases, without
changing the observed behaviour of the system.
-- Background
The socket protocol client and server each use a buffered writer to encode
request and response messages onto the underlying connection. This reduces the
possibility of a single message being split across multiple writes, but has the
side-effect that a request may remain buffered for some time.
The implementation worked around this by keeping a ticker that occasionally
triggers a flush, and by flushing the writer in response to an explicit request
baked into the client/server protocol (see also #6994).
These workarounds are both unnecessary: Once a message has been dequeued for
sending and fully encoded in wire format, there is no real use keeping all or
part of it buffered locally. Moreover, using an asynchronous process to flush
the buffer makes the round-trip performance of the request unpredictable.
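The core of the change in miniature: flush the buffered writer as soon as a message is fully encoded, rather than waiting for a timer. The framing below is a stand-in, not the actual ABCI socket protocol:

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
)

// writeMsg encodes one length-prefixed message and flushes immediately,
// rather than waiting for a ticker or an explicit flush request.
func writeMsg(w *bufio.Writer, msg []byte) error {
	if err := w.WriteByte(byte(len(msg))); err != nil {
		return err
	}
	if _, err := w.Write(msg); err != nil {
		return err
	}
	// The message is complete at the wire-format level, so there is no
	// benefit to keeping it buffered locally.
	return w.Flush()
}

func main() {
	var conn bytes.Buffer
	w := bufio.NewWriter(&conn)
	if err := writeMsg(w, []byte("request")); err != nil {
		panic(err)
	}
	fmt.Println(conn.Len()) // 8: 1 length byte + 7 payload bytes
}
```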
-- Benchmarks
Code: https://play.golang.org/p/0ChUOxJOiHt
I found no pre-existing performance benchmarks to justify the flush pattern,
but a natural question is whether this will significantly harm client/server
performance. To test this, I implemented a simple benchmark that transfers
randomly-sized byte buffers from a no-op "client" to a no-op "server" over a
Unix-domain socket, using a buffered writer, both with and without explicit
flushes after each write.
As the following data show, flushing every time (FLUSH=true) does reduce raw
throughput, but not by a significant amount except for very small request
sizes, where the transfer time is already trivial (1.9μs). Given that the
client is calibrated for 1MiB transactions, the overhead is not meaningful.
The percentage in each section is the speedup for flushing only when the buffer
is full, relative to flushing every block. The benchmark uses the default
buffer size (4096 bytes), which is the same value used by the socket client and
server implementation:
```
FLUSH   NBLOCKS   MAX      AVG     TOTAL       ELAPSED       TIME/BLOCK
false   3957471   512      255     1011165416  2.00018873s   505ns
true    1068568   512      255     273064368   2.000217051s  1.871µs
(73%)
false   536096    4096     2048    1098066401  2.000229108s  3.731µs
true    477911    4096     2047    978746731   2.000177825s  4.185µs
(10.8%)
false   124595    16384    8181    1019340160  2.000235086s  16.053µs
true    120995    16384    8179    989703064   2.000329349s  16.532µs
(2.9%)
false   2114      1048576  525693  1111316541  2.000479928s  946.3µs
true    2083      1048576  526379  1096449173  2.001817137s  961.025µs
(1.5%)
```
Note also that the FLUSH=false baseline is actually faster than the production
code, which flushes more often than is required by the buffer filling up.
Moreover, the timer slows down the overall transaction rate of the client and
server, independent of how fast the socket transfer is, so the loss on a real
workload is probably much less.
I've been noticing that there are a number of situations where the
statesync reactor blocks waiting for peers (or similar), so I've moved
things around to improve outcomes in local tests.
In the last run, there were two problems returned from light nodes'
RPC endpoints. I think exercising the light client proxy RPC system is
something that can/should be done via unit testing, and that these
errors are likely transient in production and, in CI, very likely to
fail for test-environment reasons.
The code in the Tendermint repository makes heavy use of import aliasing.
This is made necessary by our extensive reuse of common base package names, and
by repetition of similar names across different subdirectories.
Unfortunately we have not been very consistent about which packages we alias in
various circumstances, and the aliases we use vary. In the spirit of the advice
in the style guide and https://github.com/golang/go/wiki/CodeReviewComments#imports,
this change makes an effort to clean up and normalize import aliasing.
This change makes no API or behavioral changes. It is a pure cleanup intended
to help make the code more readable to developers (including myself) trying to
understand what is being imported where.
Only unexported names have been modified, and the changes were generated and
applied mechanically with gofmt -r and comby, respecting the lexical and
syntactic rules of Go. Even so, I did not fix every inconsistency. Where the
changes would be too disruptive, I left it alone.
The principles I followed in this cleanup are:
- Remove aliases that restate the package name.
- Remove aliases where the base package name is unambiguous.
- Move overly-terse abbreviations from the import to the usage site.
- Fix lexical issues (remove underscores, remove capitalization).
- Fix import groupings to more closely match the style guide.
- Group blank (side-effecting) imports and ensure they are commented.
- Add aliases to multiple imports with the same base package name.
We moved some files further down in the directory structure in #6964, which
caused the relative paths to the mockery wrapper to stop working.
There does not seem to be an obvious way to get the module root as a default
environment variable, so for now I just added the extra up-slashes.
This package is not used in the tendermint repository since 31e7cdee.
Note that this is not the same package as rpc/client/mock (N.B. singular) which
is still used in some tests.
A search of GitHub turns up only 11 uses, all of which are in clones of the
tendermint repo at old commits.
* rpc: Strip down the base RPC client interface.
Prior to this change, the RPC client interface requires implementing the entire
Service interface, but most of the methods of Service are not needed by the
concrete clients. Dissociate the Client interface from the Service interface.
- Extract only those methods of Service that are necessary to make the existing
clients work.
- Update the clients to combine Start/OnStart and Stop/OnStop. This does not
change what the clients do to start or stop. Only the websocket clients make
use of this functionality anyway.
The websocket implementation uses some plumbing from the BaseService helper.
We should be able to excise that entirely, but the current interface
dependencies among the clients would require a much larger change, and one
that leaks into other (non-RPC) packages.
As a less-invasive intermediate step, preserve the existing client behaviour
(and tests) by extracting the necessary subset of the BaseService
functionality to an analogous RunState helper for clients. I plan to obsolete
that type in a future PR, but for now this makes a useful waypoint.
Related:
- Clean up client implementations.
- Update mocks.
## Description
- Add deprecated to config values in toml
- update config in configuration doc
- explain how to set up a node with the new network
- add sentence about not needing to fork tendermint for built-in tutorial
- closes #6865
- add note to use a released version of tendermint with the tutorials. This is to avoid unknown issues prior to a release.
If the e2e tests error, they leave all of the e2e state around, including containers, networks, etc.
We should clean this up when the test run shuts down, even if it exits in error.
This should address last night's failure. We've taken the perspective
of "the load generator shouldn't cause tests to fail" in recent
days/weeks, and I think this is just a next step along that line. The
e2e tests shouldn't test performance.
I included some comments indicating the ways that this isn't ideal
(and it perhaps is not), and I think that if test networks could make
assertions about the required rate, that might be a cool future
improvement (and good, perhaps, for system benchmarking).
This document attempts to capture and discuss some of the areas of Tendermint that seem to be cited as causing performance issues. I'm hoping to continue to gather feedback and input on this document to better understand what issues Tendermint performance may cause for our users.
The overall goal of this document is to allow the maintainers and community to get a better sense of these issues and to be better able to discuss them and weigh trade-offs about any proposed performance-focused changes. This document does not aim to propose any performance improvements. It does suggest useful places for benchmarks and places where additional metrics would be useful for diagnosing and further understanding Tendermint performance.
Please comment with areas where my reasoning seems off or with additional areas where Tendermint performance may be causing user pain.
I think the `Sync` check covers our primary use case, and perhaps we
can turn this back on in the future after some kind of event-system
rewrite, or an RPC rewrite that will avoid the server-side timeout.
These are mostly the timeouts that I think we're still hitting in CI.
At this point, the tests (on master) pass on my local machine (which is quite beefy) so I think this is just the first in (perhaps?) a sequence of changes that attempt to change timeouts and load patterns so that the tests pass in CI more reliably.
Communication in Tendermint among consensus nodes, applications, and operator
tools all use different message formats and transport mechanisms. In some
cases there are multiple options. Having all these options complicates both the
code and the developer experience, and hides bugs. To support a more robust,
trustworthy, and usable system, we should document which communication paths
are essential, which could be removed or reduced in scope, and what we can
improve for the most important use cases.
This document proposes a variety of possible improvements of varying size and
scope. Specific design proposals should get their own documentation.
The previous implementation of hybrid set testing, which was entirely my
own creation, was a bit peculiar, and I think this probably clears things up.
The previous implementation had far fewer legacy nodes in hybrid
networks, *and*, for some reason that I can't quite explain,
caused a test case to fail.
Bumps [github.com/rs/zerolog](https://github.com/rs/zerolog) from 1.24.0 to 1.25.0.
<details>
<summary>Commits</summary>
<ul>
<li><a href="65adfd88ec"><code>65adfd8</code></a> Make Fields method accept both map and slice (<a href="https://github-redirect.dependabot.com/rs/zerolog/issues/352">#352</a>)</li>
<li>See full diff in <a href="https://github.com/rs/zerolog/compare/v1.24.0...v1.25.0">compare view</a></li>
</ul>
</details>
The responses from node RPCs encode hash values as hexadecimal strings. This
behaviour is stipulated in our OpenAPI documentation. In some cases, however,
hashes received as JSON parameters were being decoded as byte buffers, as is
the convention for JSON.
This resulted in the confusing situation that a hash reported by one request
(e.g., broadcast_tx_commit) could not be passed as a parameter to another
(e.g., tx) via JSON, without translating the hex-encoded output hash into the
base64 encoding used by JSON for opaque bytes.
Fixes #6802.
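A minimal illustration of accepting a hex-encoded hash as a JSON string parameter; the parameter struct here is hypothetical:

```go
package main

import (
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// txParams models an RPC parameter that carries a transaction hash. Using a
// string plus explicit hex decoding keeps the JSON form consistent with the
// hex-encoded hashes the node reports, instead of JSON's base64 []byte rule.
type txParams struct {
	Hash string `json:"hash"`
}

func decodeHash(raw []byte) ([]byte, error) {
	var p txParams
	if err := json.Unmarshal(raw, &p); err != nil {
		return nil, err
	}
	return hex.DecodeString(p.Hash)
}

func main() {
	h, err := decodeHash([]byte(`{"hash":"DEADBEEF"}`))
	fmt.Printf("%x %v\n", h, err) // deadbeef <nil>
}
```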
The 0.35 release cycle renamed the 'fastsync' functionality to 'blocksync'. This change brings the configuration parameters in line with that change. Namely, it updates the configuration file `[fastsync]` field to be `[blocksync]` and changes the command line flag and config file parameters `--fast-sync` and `fast-sync` to `--enable-block-sync` and `enable-block-sync` respectively.
Error messages were added to help users encountering these changes quickly make the needed updates to their files/scripts.
When using the old command line argument for fast-sync, the following is printed
```
./build/tendermint start --proxy-app=kvstore --consensus.create-empty-blocks=false --fast-sync=false
ERROR: invalid argument "false" for "--fast-sync" flag: --fast-sync has been deprecated, please use --enable-block-sync
```
When using one of the old config file parameters, the following is printed:
```
./build/tendermint start --proxy-app=kvstore --consensus.create-empty-blocks=false
ERROR: error in config file: a configuration parameter named 'fast-sync' was found in the configuration file. The 'fast-sync' parameter has been renamed to 'enable-block-sync', please update the 'fast-sync' field in your configuration file to 'enable-block-sync'
```
* add information on upgrading to the new p2p library
* clarify p2p backwards compatibility
* reorder p2p queue list
* add demo for p2p selection
* fix spacing in upgrading
This is a cosmetic change that restores lexicographic order to the selected
linters in the CI config. No change to which linters we run, only putting them
back in order so it's easier to spot the one you care about.
This changes the focus of the e2e suite to (roughly) the
configurations that are most widely used. Most production users of
Tendermint run the ABCI application in process, and the gRPC/socket methods
cover the vast majority of the remaining use cases.
Perhaps we should consider dropping support for Unix domain sockets in a
future release, but in the meantime I think it's useful to have the
tests *mostly* focus on the primary use cases.
Update the schema and implementation of the Postgres event indexer to improve
certain types of queries against the index. These changes address the use cases
raised by #6843, and are partly inspired by the prototype schema in that issue.
In the old schema, events were flattened, making it difficult to find all the events
associated with a particular block or transaction. In addition, events with no key/value
attributes were entirely lost, since entries were generated only for attributes.
To address these issues, this new schema records blocks, transactions, events,
and attributes in separate tables, and provides views that join these tables to
give a more convenient query surface for block and transaction events.
- All events for a given block can be queried from the `block_events` view.
- All events for a given transaction can be queried from the `tx_events` view.
- Multiple events for the same key can be indexed for both blocks and transactions.
The tests have been reworked, but all of the existing test cases for the old schema
still pass with the new implementation. Various other minor cleanups are included,
and ADR-065 is also updated to reflect the new schema.
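As a rough sketch of how the new views could be queried from Go (the column names here are assumptions for illustration, not the actual schema):
```
package psqlquery

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq" // postgres driver
)

// blockEvents prints all event attributes recorded for a block height via
// the block_events view. Column names are illustrative.
func blockEvents(db *sql.DB, height int64) error {
	rows, err := db.Query(`SELECT key, value FROM block_events WHERE height = $1`, height)
	if err != nil {
		return err
	}
	defer rows.Close()
	for rows.Next() {
		var key, value string
		if err := rows.Scan(&key, &value); err != nil {
			return err
		}
		fmt.Printf("%s = %s\n", key, value)
	}
	return rows.Err()
}
```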
An explicit exit prevents the deferred cleanup code from running. In this case,
falling off the end of main will achieve the same goal as an explicit exit.
When reviewing #6807 I assumed that `probSetChoice` worked this way.
I think that the coverage of various configuration options should
generally track what we expect actual usage to be, so that the most test
coverage lands on the configurations that are the most prevalent.
This issue was reported in Osmosis, where the message is extremely long. Also, there is absolutely no reason to log the message IMO. If we must, we can log it at DEBUG.
This is just a configuration change to default to using the new stack
unless it is explicitly disabled (e.g. `UseLegacy`). It renames the
configuration value and makes the configuration logic clearer.
The legacy option is good to retain as a fallback if the new stack has
issues operationally, but we should make sure that most of the time
we're using the new stack.
In the transaction load generator, the e2e test harness previously distributed load randomly to hosts, which was a source of test non-determinism. This change distributes the load generation to the different nodes in the set in a round robin fashion, to produce more reliable results, but does not otherwise change the behavior of the test harness.
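The round-robin idea boils down to something like the sketch below (names are illustrative, not the harness's actual API):
```
package loadgen

// distribute sends each transaction to the next node in order, cycling
// through the list, instead of picking a random host per transaction.
func distribute(txs [][]byte, nodes []string, send func(node string, tx []byte) error) error {
	for i, tx := range txs {
		node := nodes[i%len(nodes)] // round robin
		if err := send(node, tx); err != nil {
			return err
		}
	}
	return nil
}
```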
Add documentation comments to the psql event sink package, and simplify the
constructor function so that it does not return the SQL database handle. The
handle is needed for testing, so expose that via a separate method on the
concrete type.
Update the tests and existing usage for the change. This change does not affect
the behaviour of the sink, so there are no functional changes, only syntactic
updates.
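Roughly, the shape of the change is as follows (signatures are illustrative, not the exact psql package API):
```
package psql

import (
	"database/sql"

	_ "github.com/lib/pq" // postgres driver
)

// EventSink indexes events into Postgres. (Illustrative sketch.)
type EventSink struct {
	store *sql.DB
}

// NewEventSink no longer returns the *sql.DB handle alongside the sink.
func NewEventSink(connStr string) (*EventSink, error) {
	db, err := sql.Open("postgres", connStr)
	if err != nil {
		return nil, err
	}
	return &EventSink{store: db}, nil
}

// DB exposes the underlying handle for tests that need it.
func (es *EventSink) DB() *sql.DB { return es.store }
```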
EDIT: Updated, see [comment below](https://github.com/tendermint/tendermint/pull/6785#issuecomment-897793175)
This change adds a sketch of the `Debug` mode.
This change adds a `Debug` struct to the node package. This `Debug` struct is intended to be created and started by a command in the `cmd` directory. The `Debug` struct runs the RPC server on the data directories: both the state store and the block store.
This change required a good deal of refactoring. Namely, a new `rpc.go` file was added to the `node` package. This file encapsulates functions for starting RPC servers used by nodes. A potential additional change is to further factor this code into shared code _in_ the `rpc` package.
Minor API tweaks were also made that seemed appropriate such as the mechanism for fetching routes from the `rpc/core` package.
Additional work is required to register the `Debug` service as a command in the `cmd` directory, but I am looking for feedback on whether this direction seems appropriate before diving much further.
closes: #5908
This is a very minor change, but I was looking through the code, and
this seems like it shouldn't be exported or used more broadly, so I've
moved it out.
This ADR restores a variation of the old Request for Comments documentation
that we previously used. The proposal differs from the original formulation,
and does not replace ADRs.
I realized after my last commit that my change made a following line of code a bit redundant
(alternatively, my last change was redundant with the existing code).
I took this opportunity to make some minor cleanups and logging changes to the node code, which I hope will make tests a bit clearer.
* docs: add indexer godoc
* docs++
* docs++
* docs++
* docs++
* docs++
* Update state/indexer/doc.go
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
* Update state/indexer/doc.go
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
* Update state/indexer/doc.go
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
* Update state/indexer/doc.go
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
* Update state/indexer/doc.go
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
* Update state/indexer/doc.go
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
* docs++
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
We INFO log every `ABCIQuery`. This can output a tremendous amount of noise in the logs, can cause cosmovisor to crash completely, and slows down the node due to I/O. This log is completely unnecessary.
Let's get this backported into v0.43 and included in the v0.43 and v0.42 releases of the SDK.
/cc @marbar3778
As written, the encoding step unnecessarily made and moved multiple copies of
the encoded representation. Reduce this to a single allocation and encode the
data in-place so that a shift is no longer required.
Also: Add a test to ensure letter digits are capitalized, which was previously not
verified but was expected downstream.
No functional changes.
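Something along these lines, as a sketch of the single-allocation, in-place approach (not the exact code):
```
package hexenc

import "encoding/hex"

// encodeUpper hex-encodes src into one pre-sized buffer and uppercases the
// letter digits in place, avoiding extra copies and a shift.
func encodeUpper(src []byte) string {
	dst := make([]byte, hex.EncodedLen(len(src)))
	hex.Encode(dst, src)
	for i, c := range dst {
		if c >= 'a' && c <= 'f' {
			dst[i] = c - ('a' - 'A')
		}
	}
	return string(dst)
}
```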
Adds a simple property test to the `clist` package. This test uses the [rapid](https://github.com/flyingmutant/rapid) library and works by modeling the internal clist as a simple array.
Follow-up from this morning's workshop with the Regen team.
This change adds a failing test for issue #6660. It achieves this by adding a transaction, starting the `broadcastTxRoutine` in a goroutine and then adding another transaction to the mempool. The `broadcastTxRoutine` can receive the second inserted transaction before `insertTx` returns. In that case, `broadcastTxRoutine` will dereference a nil pointer when referencing the `gossipEl` and panic.
This change aims to keep versions of mockery consistent across developer laptops.
This change adds mockery to the `tools.go` file so that its version can be managed consistently in the `go.mod` file.
Additionally, this change temporarily disables adding mockery's version number to generated files. There is an outstanding issue against the mockery project related to the version string behavior when running from `go get`. I have created a pull request to fix this issue in the mockery project.
see: https://github.com/vektra/mockery/issues/397
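For reference, the usual tools.go pattern looks roughly like this sketch; the build tag keeps the file out of normal builds (so the blank import of a command never actually compiles), while still pinning mockery's version in go.mod.
```
//go:build tools
// +build tools

// Package tools pins build-time tool dependencies such as mockery so that
// everyone generates mocks with the same version.
package tools

import (
	_ "github.com/vektra/mockery/v2"
)
```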
Update those break statements inside case clauses that are intended to reach an
enclosing for loop, so that they correctly exit the loop.
The candidate files for this change were located using:
% staticcheck -checks SA4011 ./... | cut -d: -f-2
This change is intended to preserve the intended semantics of the code, but
since the code as-written did not have its intended effect, some behaviour may
change. Specifically: Some loops may have run longer than they were supposed
to, prior to this change.
In one case I was not able to clearly determine the intended outcome. That case
has been commented but otherwise left as-written.
Fixes #6780.
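For illustration, the kind of fix involved is a labeled break, since a bare `break` inside a `select` or `switch` case only exits that case (a minimal standalone example, not taken from the diff):
```
package main

import "fmt"

// drain reads values until the channel is closed or empty; the label makes
// break exit the for loop instead of just the select statement.
func drain(ch <-chan int) []int {
	var out []int
Loop:
	for {
		select {
		case v, ok := <-ch:
			if !ok {
				break Loop
			}
			out = append(out, v)
		default:
			break Loop
		}
	}
	return out
}

func main() {
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
	close(ch)
	fmt.Println(drain(ch)) // [1 2]
}
```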
This pull request removes the homegrown mocks in `light/provider/mock` in favor of mockery mocks.
Adds a simple benchmark-only mock to avoid the overhead of `reflection` that `mockery` incurs.
part of #5274
This adds a test for closing the `pqueue` while the `pqueue` contains data that has not yet been dequeued.
This issue was found while debugging #6705
This test will fail until @cmwaters' fix for this condition is merged.
This change does two things:
1. It fixes the json fuzzer to account for receiving array results. Arrays are returned by the rpc server when the input data is an array.
2. Adds a `fuzz_test.go` file and corresponding `testdata` directory containing the failing test case.
This seems like a reasonable way to add and track previous crash issues in our fuzz test cases. The upcoming stdlib go fuzz tool effectively does this automatically.
This change adds additional coverage to the `mConnConnection.TrySendMessage` code path. Adds a test to ensure it returns `io.EOF` when closed.
Addresses: #6570
This change adds a `MempoolError` field to `ResponseCheckTx`. This will allow clients to understand that their transaction was rejected from the mempool despite passing the ABCI check.
This change also updates the code to make use of early returns to prevent highly nested code blocks. Namely, it returns when the type assertion fails at the beginning of the method, instead of wrapping the entire method in a large if statement. This has a somewhat large effect on the diff as rendered by GitHub.
addresses: #3546
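The early-return shape is roughly like this sketch (names and types are illustrative, not the actual mempool code):
```
package mempool

import "fmt"

// checkTxResult is an illustrative stand-in for the ABCI CheckTx response.
type checkTxResult struct {
	Code         uint32
	MempoolError string // set when the mempool rejects a tx that passed CheckTx
}

func handleCheckTx(res interface{}) (*checkTxResult, error) {
	r, ok := res.(*checkTxResult)
	if !ok {
		// Early return instead of wrapping the whole method in an if block.
		return nil, fmt.Errorf("unexpected response type %T", res)
	}
	if r.MempoolError != "" {
		return nil, fmt.Errorf("tx rejected by mempool: %s", r.MempoolError)
	}
	return r, nil
}
```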
The Makefile at the root of the repo [includes](cd19ef244e/Makefile (L61)) the Makefile under the `test` package. This fix removes the target defined in the root Makefile in favor of the included one.
Having looked at our network address parsing and connection code, it
really looks like we're not doing anything on top of what the standard
library is doing (both in terms of using `net.ParseIP` and
`net.Dial`), and I don't think we need to run the tests 2x the number
of times just to run through different areas of the standard
library. I think most of our users are going to be using IPv4, and
would be down to fully remove this dimension as well, if we find it's
making noise, but for now I think it's fine.
closes #2498
solves part of #3365
Note: it is difficult to test the event emitted in the SwitchToFastSync part; we might need to change `stateSyncReactor` to an interface in the `nodeImpl` struct.
There are many `//go:generate mockery` lines in the source code.
This change adds a make target to invoke these mock generations.
This change also runs the mock generation and adds the resulting mocks to the repo.
Related to #5274
This tweaks sleeps around perturbations, based on a theory that our
tests with "kill" perturbations restart the nodes fast enough that the peers
haven't marked them down when they try to reconnect. In my local test
runs, this clears out *most* of the test failures that I've seen,
except for one evidence-related test-harness problem (which should be
handled separately.)
This code change amends the dispatcher tests to read from the dispatcher's `requestCh`. This ensures that a request is waiting when the test calls `dispatcher.respond`.
addresses: #6711
I put this error log in here because I thought it might be a helpful indicator to see when a reactor sends a message to a peer that doesn't have that channel open, but it turns out this is happening all the time and it's kind of annoying.
This commit extends the fix in #6518 so that all other goroutines which run
concurrently with processBlockchainCh can safely send data to the blockchain
out channel via a bridge channel. This helps eliminate any possible
data race between sending on and closing the blockchainCh.Out channel at the same
time.
Fixes #6516
Closes: #6661
Note: I saw another error during the event indexing; I guess the raw tx size exceeds the limit?
```
3:17PM ERR failed to index block txs err="pq: index row size 2768 exceeds btree version 4 maximum 2704 for index \"tx_results_tx_result_key\"" height=5205112 module=txindex
```
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.2.0 to 1.2.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/spf13/cobra/releases">github.com/spf13/cobra's releases</a>.</em></p>
<blockquote>
<h2>v1.2.1</h2>
<h3>Bug fixes</h3>
<ul>
<li>Quickfix for <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1437">spf13/cobra#1437</a> after v1.2.0 where parallel use of the <code>cmd.RegisterFlagCompletionFunc()</code> (and subsequent map) now works correctly and flag completions now work again</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="de187e874d"><code>de187e8</code></a> Fix flag completion (<a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1438">#1438</a>)</li>
<li>See full diff in <a href="https://github.com/spf13/cobra/compare/v1.2.0...v1.2.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details>
## Description
Expose p2p functions for use in the SDK.
These functions could also be copied over to the SDK. I don't have a preference for which is better.
Bumps [github.com/spf13/viper](https://github.com/spf13/viper) from 1.8.0 to 1.8.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/spf13/viper/releases">github.com/spf13/viper's releases</a>.</em></p>
<blockquote>
<h2>v1.8.1</h2>
<p>This patch releases fixes two minor issues:</p>
<ul>
<li>Replace <code>%s</code> with <code>%w</code> when wrapping errors</li>
<li>Fix <code>pflag.StringArray</code> processing</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="bd03865899"><code>bd03865</code></a> Add a proper processing for pflag.StringArray</li>
<li><a href="3fcad43618"><code>3fcad43</code></a> update %s to %w</li>
<li>See full diff in <a href="https://github.com/spf13/viper/compare/v1.8.0...v1.8.1">compare view</a></li>
</ul>
</details>
## Description
It confused many people what they were supposed to add here. For chains with large genesis files, you will only see the error after InitGenesis. It's best to add a small sentence to provide better UX.
At Oasis we have spent some time writing a new Ed25519/X25519/sr25519 implementation called curve25519-voi. This PR switches the import from ed25519consensus/go-schnorrkel, which should lead to performance gains on most systems.
Summary of changes:
* curve25519-voi is now used for Ed25519 operations, following the existing ZIP-215 semantics.
* curve25519-voi's public key cache is enabled (hardcoded size of 4096 entries, should be tuned, see the code comment) to accelerate repeated Ed25519 verification with the same public key(s).
* (BREAKING) curve25519-voi is now used for sr25519 operations. This is a breaking change as the current sr25519 support does something decidedly non-standard when going from a MiniSecretKey to a SecretKey and/or PublicKey (the expansion routine is called twice). While I believe the new behavior (that expands once and only once) to be more "correct", this changes the semantics as implemented.
* curve25519-voi is now used for merlin since the included STROBE implementation produces much less garbage on the heap.
Side issues fixed:
* The version of go-schnorrkel that is currently imported by tendermint has a badly broken batch verification implementation. Upstream has fixed the issue after I reported it, so the version should be bumped in the interim.
Open design questions/issues:
* As noted, the public key cache size should be tuned. It is currently backed by a trivial thread-safe LRU cache, which is not scan-resistant, but replacing it with something better is a matter of implementing an interface.
* As far as I can tell, the only reason why serial verification on batch failure is necessary is to provide more detailed error messages (that are only used in some unit tests). If you trust the batch verification to be consistent with serial verification then the fallback can be eliminated entirely (the BatchVerifier provided by the new library supports an option that omits the fallback if this is chosen as the way forward).
* curve25519-voi's sr25519 support could use more optimization and more eyes on the code. The algorithm unfortunately is woefully under-specified, and the implementation was done primarily because I got really sad when I actually looked at go-schnorrkel, and we do not use the algorithm at this time.
Closes #6551
Simple PR to add the total gas used in the block by summing the gas used by all of its transactions.
This adds a `TotalGasUsed` field to `coretypes.ResultBlockResults`.
It's my first PR to the repo, so let me know if there is anything I am missing!
@fedekunze In case you want to take a look
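The computation itself is just a sum over the block's transaction results, roughly like this (field names are illustrative, not the exact coretypes definitions):
```
package main

import "fmt"

// txResult is an illustrative stand-in for a transaction's ABCI result.
type txResult struct {
	GasUsed int64
}

// totalGasUsed sums the gas used by every transaction in the block.
func totalGasUsed(results []txResult) int64 {
	var total int64
	for _, r := range results {
		total += r.GasUsed
	}
	return total
}

func main() {
	fmt.Println(totalGasUsed([]txResult{{GasUsed: 21000}, {GasUsed: 50000}})) // 71000
}
```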
This PR makes some tweaks to backfill after running e2e tests:
- Separates sync and backfill as two distinct processes that the node calls. The reason is that if sync fails then the node should fail, but if backfill fails it is still possible to proceed.
- Removes peers who don't have the block at a height from the local peer list. As the process goes backwards, if a node doesn't have a block at a height it is likely pruning blocks and thus won't have any prior ones either.
- Sleep when we've run out of peers, then try again.
There is a possible data race/panic between processBlockchainCh and
processPeerUpdates, since we send to blockchainCh.Out in one
goroutine and close the channel in the other. The race is seen in some
GitHub Actions runs.
This commit fixes the race by adding a peerUpdatesCh as a bridge between
processPeerUpdates and processBlockchainCh: the former sends to
this channel, and the latter listens and forwards the messages to the
blockchainCh.Out channel.
Updates #6516
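The shape of the bridge-channel pattern is roughly the following sketch (names illustrative): only one goroutine ever sends on, and eventually closes, the outbound channel.
```
package reactor

// forward is the only goroutine that sends on (and closes) out. Other
// goroutines send to bridge instead, so nothing can send on a closed channel.
func forward(bridge <-chan string, out chan<- string, done <-chan struct{}) {
	defer close(out)
	for {
		select {
		case msg, ok := <-bridge:
			if !ok {
				return
			}
			select {
			case out <- msg:
			case <-done:
				return
			}
		case <-done:
			return
		}
	}
}
```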
## Description
Change the block_size gauge to a histogram to observe block size over time.
This will help us see which chains have full blocks vs. empty ones.
closes #5752
## Description
Trying to debug a possible hashing issue, writing test vectors on 0.34 and then porting them to master to double-check it's not a hashing issue.
Per the tendermint spec, each Channel has a globally unique byte id, which
is mapped to uint8 in Go. However, the proto PacketMsg.ChannelID field
is declared as int32, and when receiving a packet we cast it to a byte
without checking for possible overflow. That leads to a malformed packet
with an invalid channel id being sent successfully.
To fix it, we just add a check for possible overflow and return an invalid
channel id error.
Fixed #6521
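The check itself is small, along these lines (a sketch, not the exact diff):
```
package conn

import (
	"fmt"
	"math"
)

// channelIDFromProto rejects proto channel IDs that do not fit in a byte
// before casting, instead of silently truncating them.
func channelIDFromProto(id int32) (byte, error) {
	if id < 0 || id > math.MaxUint8 {
		return 0, fmt.Errorf("invalid channel id %d: must fit in one byte", id)
	}
	return byte(id), nil
}
```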
By pre-creating the hasher, instead of creating a new one every time
addrbook.hash is called.
```
name old time/op new time/op delta
AddrBook_hash-8 181ns ±13% 80ns ± 1% -56.08% (p=0.000 n=10+10)
name old alloc/op new alloc/op delta
AddrBook_hash-8 216B ± 0% 8B ± 0% -96.30% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
AddrBook_hash-8 2.00 ± 0% 1.00 ± 0% -50.00% (p=0.000 n=10+10)
```
Fixed #6508
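The idea, sketched with sha256 standing in for the real keyed hash and ignoring the locking the real code needs:
```
package addrbook

import (
	"crypto/sha256"
	"hash"
)

// book keeps one hasher around and resets it per call, rather than
// constructing a new hasher on every hash() invocation.
type book struct {
	hasher hash.Hash
}

func newBook() *book {
	return &book{hasher: sha256.New()}
}

func (b *book) hash(data []byte) []byte {
	b.hasher.Reset()
	b.hasher.Write(data)
	return b.hasher.Sum(nil)
}
```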
## Description
Add version back to versions, but allow it to be overridden via an ldflag.
Reason:
Many users are not setting the ldflag, causing issues with tooling that relies on it (cosmjs).
closes #6488
cc @webmaster128
Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.10.1 to 1.10.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/lib/pq/releases">github.com/lib/pq's releases</a>.</em></p>
<blockquote>
<h2>v1.10.2</h2>
<ul>
<li>fix TimeTZ with second offsets</li>
<li>fix GOOS compilation</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="2da6713d67"><code>2da6713</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/lib/pq/issues/1039">#1039</a> from otan-cockroach/timetz_fix</li>
<li><a href="ad47bab1aa"><code>ad47bab</code></a> encode: fix TimeTZ with second offsets</li>
<li><a href="99af95f861"><code>99af95f</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/lib/pq/issues/1041">#1041</a> from otan-cockroach/libpq</li>
<li><a href="62fa4b32ec"><code>62fa4b3</code></a> .travis.yml: fix CI</li>
<li><a href="d2b13db12b"><code>d2b13db</code></a> Delete test.yml</li>
<li><a href="a1b1a43f73"><code>a1b1a43</code></a> Create test.yml</li>
<li><a href="b2cfb1abfd"><code>b2cfb1a</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/lib/pq/issues/1036">#1036</a> from bukforks/master</li>
<li><a href="6ed3b8ac03"><code>6ed3b8a</code></a> rm unused imports</li>
<li><a href="feb727accb"><code>feb727a</code></a> userCurrent for unsupported GOOS</li>
<li>See full diff in <a href="https://github.com/lib/pq/compare/v1.10.1...v1.10.2">compare view</a></li>
</ul>
</details>
## Description
Internalize some libs. This reduces the amount of public API Tendermint is supporting. The moved libraries are mainly ones that are used within Tendermint core.
To make sure finalizers run, we use a channel for synchronization and a
separate goroutine that triggers runtime.GC every second. In practice,
just two consecutive runtime.GC calls can make all finalizers run,
but using a separate goroutine makes the code more robust and not dependent
on the garbage collector's internal implementation.
Fixes #6452
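A minimal sketch of the pattern (not the actual test code):
```
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	done := make(chan struct{})

	obj := new(int)
	runtime.SetFinalizer(obj, func(*int) { close(done) })
	obj = nil // drop the only reference so the finalizer can eventually run

	// Keep nudging the garbage collector until the finalizer has signaled.
	go func() {
		for {
			select {
			case <-done:
				return
			default:
				runtime.GC()
				time.Sleep(time.Second)
			}
		}
	}()

	select {
	case <-done:
		fmt.Println("finalizer ran")
	case <-time.After(10 * time.Second):
		fmt.Println("timed out waiting for finalizer")
	}
}
```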
This change fixes a potentially exploitable vulnerability
that can cause the WAL to be consistently truncated by falsely
supplying the WAL path, which could be any arbitrary directory.
Fixes #6427
Somehow my previous attempt to fix this test was somewhat
non-deterministic. I think I misjudged how the "median" would impact
the test.
I ran the test in question 10 times without seeing a failure (which
would show up 10-30% of the time previously), so I'm pretty sure this
is fixed.
Bumps [github.com/confio/ics23/go](https://github.com/confio/ics23) from 0.6.3 to 0.6.6.
<details>
<summary>Commits</summary>
<ul>
<li><a href="53a3a58ab8"><code>53a3a58</code></a> Revert go mod</li>
<li><a href="b66f10fc78"><code>b66f10f</code></a> Bump to 0.6.5</li>
<li><a href="19f273dffb"><code>19f273d</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/confio/ics23/issues/40">#40</a> from confio/cleanup</li>
<li><a href="46f21260db"><code>46f2126</code></a> Clippy and cleanup in tests</li>
<li><a href="667ddb335e"><code>667ddb3</code></a> Fix clippy warnings</li>
<li><a href="ea8b91d186"><code>ea8b91d</code></a> cargo fmt</li>
<li><a href="267cfba090"><code>267cfba</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/confio/ics23/issues/39">#39</a> from kostko/kostko/feature/more-ops</li>
<li><a href="346d8d9b19"><code>346d8d9</code></a> Implement FIXED32_LITTLE length operation</li>
<li><a href="61321db422"><code>61321db</code></a> Add SHA-512/256 hash operation</li>
<li><a href="77277ad2f8"><code>77277ad</code></a> Bump Rust to 0.6.4</li>
<li>Additional commits viewable in <a href="https://github.com/confio/ics23/compare/v0.6.3...go/v0.6.6">compare view</a></li>
</ul>
</details>
In my testing, this seems to help the e2e state-sync
tests complete more reliably, by fixing some potential range-related
slice building, as well as the way the test app hashes snapshots.
Additionally (and I'm not sure if we want to do this), I added a
hook to the reactor that re-sends the request for snapshots during the
retry. In tests this helps prevent systems from getting stuck, but I
think in reality it might create more traffic, and operators would
just restart a state-syncing node to get a similar effect.
Per conversations earlier today, we'll consider all proposed implementation changes part of the ADR process rather than the RFC process (which will remain, for now, on the spec; this may get incorporated instead into the burgeoning "CIPS" process).
This change renames RFC 1 to ADR 66, leaving space for the not-yet-merged ADR 65.
* add time warping lunatic attack test
* create too-high and connection-refused errors and add them to the light client provider
* add height check to provider
* introduce block lag
* add detection logic for processing forward lunatic attack
* add node-side verification logic
* clean up tests and formatting
* update adr's
* update testing
* fix fetching the latest block
* format
* update changelog
* implement suggestions
* modify ADR's
* format
* clean up node evidence verification
Bumps [github.com/minio/highwayhash](https://github.com/minio/highwayhash) from 1.0.1 to 1.0.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/minio/highwayhash/releases">github.com/minio/highwayhash's releases</a>.</em></p>
<blockquote>
<h2>Version v1.0.2</h2>
<h2>Changelog</h2>
<h3>Fixed</h3>
<p>Issue <a href="https://github-redirect.dependabot.com/minio/highwayhash/issues/17">#17</a> - on arm64 (on Go 1.16) wrong hash values got computed due to incorrectly naming asm constants like regular Go functions. This probably confused the linker and caused the arm64 implementation to compute incorrect hash values. Fixed by 08ce0b4</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="08ce0b4fa7"><code>08ce0b4</code></a> Fix ARM64 assembly (<a href="https://github-redirect.dependabot.com/minio/highwayhash/issues/19">#19</a>)</li>
<li><a href="5311fe963f"><code>5311fe9</code></a> disable arm64 assembler and update CI to Go 1.16 (<a href="https://github-redirect.dependabot.com/minio/highwayhash/issues/18">#18</a>)</li>
<li>See full diff in <a href="https://github.com/minio/highwayhash/compare/v1.0.1...v1.0.2">compare view</a></li>
</ul>
</details>
Bumps [vuepress-theme-cosmos](https://github.com/cosmos/vuepress-theme-cosmos) from 1.0.180 to 1.0.181.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a href="https://github.com/cosmos/vuepress-theme-cosmos/commits">compare view</a></li>
</ul>
</details>
## Description
- Add `context.Context` to Privval interface
This PR does not introduce context into our custom privval connection protocol, because that protocol will be removed in the next release after this PR is released.
## Description
Since events are not hashed into the header, they can be non-deterministic. Changing an event is not consensus-breaking. Will update docs in the spec.
* test/fuzz: fix rpc, secret_connection and pex tests
- ignore empty data in rpc
- provide correct IP in pex
- spawn a goroutine for Write and do multiple Read(s)
* test/fuzz: fix init in pex test
* test/fuzz: assign NewServeMux to global var
* test/fuzz: only try to Unmarshal if blob is not empty
* run fuzz tests for PRs which modify fuzz tests themselves
* test/fuzz: move MakeSwitch into init
Introduces heuristics that track the number of non-responses or unavailable blocks a provider has, for more robust provider handling by the light client. Uses concurrent calls to all witnesses when a new primary is needed.
also
- replace `MaxReconnectAttempts`, `ReadWait`, `WriteWait` and `PingPeriod` options with `WSOptions` in `WSClient` (rpc/jsonrpc/client/ws_client.go).
- set default write wait to 10s for `WSClient` (rpc/jsonrpc/client/ws_client.go)
- unexpose `WSEvents` (rpc/client/http.go)
Closes #6162
## Description
Fixes a marshaling error in the SDK
closes https://github.com/cosmos/cosmos-sdk/issues/8578
The output stays the same; we are avoiding passing the callback because the SDK uses typed logging.
```
// unbuffered
out, err := httpClient.Subscribe(ctx, "event.type=NewTx and account.name=Jack", 0)
// buffered
out, err := httpClient.Subscribe(ctx, "event.type=NewTx AND account.name=Jack", 20)
```
Before: when the `out` channel is buffered and becomes full, we drop an event (+ log the error)
After: when the `out` channel is buffered and becomes full, we block
**Before it was not apparent to the app when an event was dropped (looking at the logs is manual task). After this PR, if the user does not read from `out` on 1 subscription, all other subscriptions will be stuck too.**
Closes #6161
I'm also going to add the retros for all previous security incidents to this directory - I'd like to have them somewhere more central than the Cosmos Forum, where they currently live.
Missed setting the buffer size on the subscription. Note, this doesn't really "fix" this test (a la ref: https://github.com/tendermint/tendermint/pull/5710).
However, I spent a good chunk of time looking at this test with many logs and I'm pretty sure this is mainly due to the fact that none of the nodes get the conflicting vote in time.
closes: #6127
Fixes the race condition between a callback being set and called during ReCheckTx. Note, I do not see equivalent logic in the gRPC client (anymore) as #5439 suggests, so only the socket client was updated.
closes: #5439
Bumps [watchpack](https://github.com/webpack/watchpack) from 2.1.0 to 2.1.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/webpack/watchpack/releases">watchpack's releases</a>.</em></p>
<blockquote>
<h2>v2.1.1</h2>
<h1>Bugfix</h1>
<ul>
<li>fix warnings with ENOENT when symlinks are resolved by watchpack</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="f1b5e2da2d"><code>f1b5e2d</code></a> 2.1.1</li>
<li><a href="cbfc11a8d7"><code>cbfc11a</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/webpack/watchpack/issues/188">#188</a> from Aghassi/fix/enoent-throwing</li>
<li><a href="7684df0846"><code>7684df0</code></a> fix: adds ENOENT for non windows errors</li>
<li>See full diff in <a href="https://github.com/webpack/watchpack/compare/v2.1.0...v2.1.1">compare view</a></li>
</ul>
</details>
I got tired of seeing the literal phrase "Closes #XXX" left in PR bodies.
Also, this template isn't usually viewed as rendered markdown, so I've removed the markdown formatting and the "Description" heading (which usually gets deleted anyways).
This cleans up the `Router` code and adds a bunch of tests. These sorts of systems are a real pain to test, since they have a bunch of asynchronous goroutines living their own lives, so the test coverage is decent but not fantastic. Luckily we've been able to move all of the complex peer management and transport logic outside of the router, as synchronous components that are much easier to test, so the core router logic is fairly small and simple.
This also provides some initial test tooling in `p2p/p2ptest` that automatically sets up in-memory networks and channels for use in integration tests. It also includes channel-oriented test asserters in `p2p/p2ptest/require.go`, but these have primarily been written for router testing and should probably be adapted or extended for reactor testing.
E2E tests often fail because validators miss signing or proposing blocks. Often this is because e.g. there's a lot of disruption in the network or it takes a long time to start up all the nodes.
This changes the test criteria to only check for 3 signed/proposed blocks, rather than a fraction of the expected blocks. This should be enough to catch most issues, apart from performance problems causing nodes to miss signing/proposing, but we may want separate tests for those sorts of things.
This renames `PeerAddress` to `NodeAddress`, moves it and `NodeID` into a separate file `address.go`, adds tests for them, and fixes a bunch of bugs and inconsistencies.
This revises the new P2P `Transport` interface and does some preliminary code cleanups and simplifications.
The major change here is to add `Connection.Handshake()` for performing node handshakes (once the stream transport API is implemented, this can be done entirely independent of the transport). This moves most of the handshaking logic into the `Router`, such as prevention of head-of-line blocking, validation of peer's `NodeInfo`, controlling timeouts, and so on. This significantly simplifies transports, completely removes the need for internal goroutines, and shares common logic across all transports. This also allows varying the handshake `NodeInfo` across peers, e.g. to vary `ListenAddr`. Similarly, connection filtering is also moved into the switch/router so that it can be shared between transports.
Fixes #5981, which was caused by changes in Router behavior after the introduction of the peer manager, leading to a race condition that could halt the test.
This is a temporary measure, I'll start tightening up the new P2P core tomorrow and write "real" tests with better test infrastructure.
This patches over a test data race where the logger would try to read struct internals via `reflect` while these were concurrently modified (specifically `MemoryTransport.closeOnce`).
Executed a local network using simapp and looked for logs that seemed superfluous. This isn't by any means an exhaustive grooming, but should drastically help legibility of logs.
ref: #5912
This test occasionally fails because the peer is already stopped. It is unclear to me exactly what this test is supposed to do, since calling `FlushStop()` will stop the peer, but the test asserts that the peer shouldn't have been stopped by `FlushStop()` since calling `Stop()` afterwards will error in that case.
The current PEX reactor will be removed in the new P2P stack anyway.
Fixes #5998. Sometimes the connection returns "use of closed network connection" instead, so for now we just accept any error. The switch is not long for this world anyway.
This test relied on connecting to the external site `foo-bar.net`, and (predictably) the site went down and broke all of our CI runs. This changes it to use local HTTP servers instead.
This changes the new prototype PEX reactor to resolve peer address URLs into IP/port PEX addresses itself. Branched off of #5974.
I've spent some time thinking about address handling in the P2P stack. We currently use `PeerAddress` URLs everywhere, except for two places: when dialing a peer, and when exchanging addresses via PEX. We had two options:
1. Resolve addresses to endpoints inside `PeerManager`. This would introduce a lot of added complexity: we would have to track connection statistics per endpoint, have goroutines that asynchronously resolve and refresh these endpoints, deal with resolve scheduling before dialing (which is trickier than it sounds since it involves multiple goroutines in the peer manager and router and messes with peer rating order), handle IP address visibility issues, and so on.
2. Resolve addresses to endpoints (IP/port) only where they're used: when dialing, and in PEX. Everywhere else we use URLs.
I went with 2, because this significantly simplifies the handling of hostname resolution, and because I really think the PEX reactor should migrate to exchanging URLs instead of IP/port numbers anyway -- this allows operators to use DNS names for validators (and can easily migrate them to new IPs and/or load balance requests), and also allows different protocols (e.g. QUIC and `MemoryTransport`). Happy to discuss this.
Fixes #5899 by renaming a bunch of P2P Protobuf entities (while maintaining wire compatibility):
* `Message` to `PexMessage` (as it's only used for PEX messages).
* `PexAddrs` to `PexResponse`.
* `PexResponse.Addrs` to `PexResponse.Addresses`.
* `NetAddress` to `PexAddress` (as it's only used by PEX).
## Description
Fixes the data race in usage of `WaitGroup`. Specifically, the case where we invoke `Wait` _before_ the first delta `Add` call when the current waitgroup counter is zero. See https://golang.org/pkg/sync/#WaitGroup.Add.
Still not sure how this manifests itself in a test since the reactor has to be stopped virtually immediately after being started (I think?).
Regardless, this is the appropriate fix.
closes: #5968
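The fix boils down to calling `Add` up front, before `Wait` can possibly run, along these lines (an illustrative sketch, not the reactor's actual code):
```
package reactor

import "sync"

// runWorkers adds to the WaitGroup before starting any goroutine, so Wait can
// never race with the first Add while the counter is zero.
func runWorkers(n int, work func(int)) {
	var wg sync.WaitGroup
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func(i int) {
			defer wg.Done()
			work(i)
		}(i)
	}
	wg.Wait()
}
```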
Adds a naïve `PeerManager.Advertise()` method that the new PEX reactor can use to fetch addresses to advertise, as well as some other `FIXME`s on address advertisement.
Follow-up from #5947, branched off of #5954.
This simplifies the upgrade logic by adding explicit eviction requests, which can also be useful for other use-cases (e.g. if we need to ban a peer that's misbehaving). Changes:
* Add `evict` map which queues up peers to explicitly evict.
* `upgrading` now only tracks peers that we're upgrading via dialing (`DialNext` → `Dialed`/`DialFailed`).
* `Dialed` will unmark `upgrading`, and queue `evict` if still beyond capacity.
* `Accepted` will pick a random lower-scored peer to upgrade to, if appropriate, and doesn't care about `upgrading` (the dial will fail later, since it's already connected).
* `EvictNext` will return a peer scheduled in `evict` if any, otherwise if beyond capacity just evict the lowest-scored peer.
This limits all of the `upgrading` logic to `DialNext`, `Dialed`, and `DialFailed`, making it much simpler, and it should generally do the right thing in all cases I can think of.
This improves the `peerStore` prototype by e.g.:
* Using a database with Protobuf for persistence, but also keeping full peer set in memory for performance.
* Simplifying the API, by taking/returning struct copies for safety, and removing errors for in-memory operations.
* Caching the ranked peer set, as a temporary solution until a better data structure is implemented.
* Adding `PeerManagerOptions.MaxPeers` and pruning the peer store (based on rank) when it's full.
* Rewriting `PeerAddress` to be independent of `url.URL`, normalizing it and tightening semantics.
## Description
Update the faux router to either drop channel errors or handle them based on an argument. This prevents deadlocks in tests where we try to send an error on the mempool channel but there is no reader.
Closes: #5956
See #5936 and #5938 for background.
The plan was initially to have `DialNext()` and `EvictNext()` return a channel. However, implementing this became unnecessarily complicated and error-prone. As an example, the channel would be both consumed and populated (via method calls) by the same driving method (e.g. `Router.dialPeers()`) which could easily cause deadlocks where a method call blocked while sending on the channel that the caller itself was responsible for consuming (but couldn't since it was busy making the method call). It would also require a set of goroutines in the peer manager that would interact with the goroutines in the router in non-obvious ways, and fully populating the channel on startup could cause deadlocks with other startup tasks. Several issues like these made the solution hard to reason about.
I therefore simply made `DialNext()` and `EvictNext()` block until the next peer was available, using internal triggers to wake these methods up in a non-blocking fashion when any relevant state changes occurred. This proved much simpler to reason about, since there are no goroutines in the peer manager (except for trivial retry timers), nor any blocking channel sends, and it instead relies entirely on the existing goroutine structure of the router for concurrency. This also happens to be the same pattern used by the `Transport.Accept()` API, following Go stdlib conventions, so all router goroutines end up using a consistent pattern as well.
Co-authored-by: Emmanuel T Odeke <emmanuel@orijtech.com>
Closes #5907
- add init-corpus to blockchain reactor
- remove validator-set FromBytes test
now that we have proto, we don't need to test it! bye amino
- simplify mempool test
do we want to test remote ABCI app?
- do not recreate mux on every crash in jsonrpc test
- update p2p pex reactor test
- remove p2p/listener test
the API has changed + I did not understand what it tested anyway
- update secretconnection test
- add readme and makefile
- list inputs in readme
- add nightly workflow
- remove blockchain fuzz test
EncodeMsg / DecodeMsg no longer exist
Fixes #5941.
Not entirely sure that this will fix the problem (couldn't reproduce), but in any case this is an artifact of a hack in the P2P transport refactor to make it work with the legacy P2P stack, and will be removed when the refactor is done anyway.
The `proto-gen-docker` target didn't pull an updated Docker image, and would use a local image if present which could be outdated and produce wrong results.
    description: '"--dup-validators" (multiple validators share the same key) and(or) "--super-byzantine-validators" (byzantine validators have just shy of 2/3 the voting weight)'
    required: false
    default: ''
  concurrency:
    description: 'How many workers should we run? Must be an integer and >= 10, optionally followed by n (e.g. 3n) to multiply by the number of nodes.'
    required: true
    default: 10
  timeLimit:
    description: 'Excluding setup and teardown, how long should a test run for, in seconds?'
    required: true
    default: 60
  tendermintUrl:
    description: 'Where to grab the Tendermint tarball (w/ linux/amd64 binary)'
@JoeKash, @githubsands, @jeebster, @crypto-facs, @liamsi, and @gotjoshua
Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermint).
### FEATURES
- [cli] [#7033](https://github.com/tendermint/tendermint/pull/7033) Add a `rollback` command to rollback to the previous tendermint state in the event of an incorrect app hash. (@cmwaters)
- [config] [\#7174](https://github.com/tendermint/tendermint/pull/7174) expose ability to write config to arbitrary paths. (@tychoish)
- [\#6982](https://github.com/tendermint/tendermint/pull/6982) tendermint binary has built-in support for running the e2e application (with state sync support) (@cmwaters).
- [config] Add `--mode` flag and config variable. See [ADR-52](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-052-tendermint-mode.md) @dongsam
- [rpc] [\#6329](https://github.com/tendermint/tendermint/pull/6329) Don't cap page size in unsafe mode (@gotjoshua, @cmwaters)
- [pex] [\#6305](https://github.com/tendermint/tendermint/pull/6305) v2 pex reactor with backwards compatibility. Introduces two new pex messages to
accommodate the new p2p stack. Removes the notion of seeds and crawling. All peer
exchange reactors behave the same. (@cmwaters)
- [crypto] [\#6376](https://github.com/tendermint/tendermint/pull/6376) Enable sr25519 as a validator key type
- [mempool] [\#6466](https://github.com/tendermint/tendermint/pull/6466) Introduction of a prioritized mempool. (@alexanderbez)
- `Priority` and `Sender` have been introduced into the `ResponseCheckTx` type, where the `priority` will determine the prioritization of
the transaction when a proposer reaps transactions for a block proposal. The `sender` field acts as an index.
- Operators may toggle between the legacy mempool reactor, `v0`, and the new prioritized reactor, `v1`, by setting the
`mempool.version` configuration, where `v1` is the default configuration.
- Applications that do not specify a priority, i.e. zero, will have transactions reaped by the order in which they are received by the node.
- Transactions are gossiped in FIFO order as they are in `v0`.
- [config/indexer] [\#6411](https://github.com/tendermint/tendermint/pull/6411) Introduce support for custom event indexing data sources, specifically PostgreSQL. (@JayT106)
- [blocksync/event] [\#6619](https://github.com/tendermint/tendermint/pull/6619) Emit blocksync status event when switching consensus/blocksync (@JayT106)
- [statesync/event] [\#6700](https://github.com/tendermint/tendermint/pull/6700) Emit statesync status start/end event (@JayT106)
- [inspect] [\#6785](https://github.com/tendermint/tendermint/pull/6785) Add a new `inspect` command for introspecting the state and block store of a crashed tendermint node. (@williambanfield)
### BUG FIXES
- [evidence] [N/A] Use correct source of evidence time (@cmwaters)
- [\#7106](https://github.com/tendermint/tendermint/pull/7106) Revert mutex change to ABCI Clients (@tychoish).
- [\#7142](https://github.com/tendermint/tendermint/pull/7142) mempool: remove panic when recheck-tx was not sent to ABCI application (@williambanfield).
- wait until peerUpdates channel is closed to close remaining peers (@williambanfield)
- [privval] [\#5638](https://github.com/tendermint/tendermint/pull/5638) Increase read/write timeout to 5s and calculate ping interval based on it (@JoeKash)
- [evidence] [\#6375](https://github.com/tendermint/tendermint/pull/6375) Fix bug with inconsistent LightClientAttackEvidence hashing (cmwaters)
- [rpc] [\#6507](https://github.com/tendermint/tendermint/pull/6507) Ensure RPC client can handle URLs without ports (@JayT106)
- [statesync] [\#6463](https://github.com/tendermint/tendermint/pull/6463) Adds Reverse Sync feature to fetch historical light blocks after state sync in order to verify any evidence (@cmwaters)
- [blocksync] [\#6590](https://github.com/tendermint/tendermint/pull/6590) Update the metrics during blocksync (@JayT106)
### BREAKING CHANGES
- Go API
- [crypto/armor]: [\#6963](https://github.com/tendermint/tendermint/pull/6963) remove package which is unused, and based on
deprecated fundamentals. Downstream users should maintain this
library. (@tychoish)
- [state] [store] [proxy] [rpc/core]: [\#6937](https://github.com/tendermint/tendermint/pull/6937) move packages to
`internal` to prevent consumption of these internal APIs by
external users. (@tychoish)
- [pubsub] [\#6634](https://github.com/tendermint/tendermint/pull/6634) The `Query#Matches` method along with other pubsub methods, now accepts a `[]abci.Event` instead of `map[string][]string`. (@alexanderbez)
- [p2p] [\#6618](https://github.com/tendermint/tendermint/pull/6618) [\#6583](https://github.com/tendermint/tendermint/pull/6583) Move `p2p.NodeInfo`, `p2p.NodeID` and `p2p.NetAddress` into `types` to support use in external packages. (@tychoish)
- [node] [\#6540](https://github.com/tendermint/tendermint/pull/6540) Reduce surface area of the `node` package by making most of the implementation details private. (@tychoish)
- [p2p] [\#6547](https://github.com/tendermint/tendermint/pull/6547) Move the entire `p2p` package and all reactor implementations into `internal`. (@tychoish)
- [libs/log] [\#6534](https://github.com/tendermint/tendermint/pull/6534) Remove the existing custom Tendermint logger backed by go-kit. The logging interface, `Logger`, remains. Tendermint still provides a default logger backed by the performant zerolog logger. (@alexanderbez)
- [libs/time] [\#6495](https://github.com/tendermint/tendermint/pull/6495) Move types/time to libs/time to improve consistency. (@tychoish)
- [mempool] [\#6529](https://github.com/tendermint/tendermint/pull/6529) The `Context` field has been removed from the `TxInfo` type. `CheckTx` now requires a `Context` argument. (@alexanderbez)
- [abci/client, proxy] [\#5673](https://github.com/tendermint/tendermint/pull/5673) `Async` funcs return an error, `Sync` and `Async` funcs accept `context.Context` (@melekes)
- [p2p] Remove unused function `MakePoWTarget`. (@erikgrinaker)
- [libs/bits] [\#5720](https://github.com/tendermint/tendermint/pull/5720) Validate `BitArray` in `FromProto`, which now returns an error (@melekes)
- [proto/p2p] Rename `DefaultNodeInfo` and `DefaultNodeInfoOther` to `NodeInfo` and `NodeInfoOther` (@erikgrinaker)
- [proto/p2p] Rename `NodeInfo.default_node_id` to `node_id` (@erikgrinaker)
- [libs/os] Kill() and {Must,}{Read,Write}File() functions have been removed. (@alessio)
- [store] [\#5848](https://github.com/tendermint/tendermint/pull/5848) Remove block store state in favor of using the db iterators directly (@cmwaters)
- [state] [\#5864](https://github.com/tendermint/tendermint/pull/5864) Use an iterator when pruning state (@cmwaters)
- [types] [\#6023](https://github.com/tendermint/tendermint/pull/6023) Remove `tm2pb.Header`, `tm2pb.BlockID`, `tm2pb.PartSetHeader` and `tm2pb.NewValidatorUpdate`.
- Each of the above types has a `ToProto` and `FromProto` method or function which replaced this logic.
- [light] [\#6054](https://github.com/tendermint/tendermint/pull/6054) Move `MaxRetryAttempt` option from client to provider.
- `NewWithOptions` now sets the max retry attempts and timeouts (@cmwaters)
- [all] [\#6077](https://github.com/tendermint/tendermint/pull/6077) Change spelling from British English to American (@cmwaters)
- Rename "Subscription.Cancelled()" to "Subscription.Canceled()" in libs/pubsub
- Rename "behaviour" pkg to "behavior" and internalized it in blocksync v2
- [rpc/client/http] [\#6176](https://github.com/tendermint/tendermint/pull/6176) Remove `endpoint` arg from `New`, `NewWithTimeout` and `NewWithClient` (@melekes)
- [rpc/jsonrpc/client/ws_client] [\#6176](https://github.com/tendermint/tendermint/pull/6176) `NewWS` no longer accepts options (use `NewWSWithOptions` and `OnReconnect` funcs to configure the client) (@melekes)
- [internal/libs] [\#6366](https://github.com/tendermint/tendermint/pull/6366) Move `autofile`, `clist`, `fail`, `flowrate`, `protoio`, `sync`, `tempfile`, `test` and `timer` lib packages to an internal folder
- [libs/rand] [\#6364](https://github.com/tendermint/tendermint/pull/6364) Remove most of libs/rand in favour of standard lib's `math/rand` (@liamsi)
- [mempool] [\#6466](https://github.com/tendermint/tendermint/pull/6466) The original mempool reactor has been versioned as `v0` and moved to a sub-package under the root `mempool` package.
Some core types have been kept in the `mempool` package, such as `TxCache` and its implementations, the `Mempool` interface itself,
and `TxInfo`. (@alexanderbez)
- [crypto/sr25519] [\#6526](https://github.com/tendermint/tendermint/pull/6526) Do not re-execute the Ed25519-style key derivation step when doing signing and verification. The derivation is now done once and only once. This breaks `sr25519.GenPrivKeyFromSecret` output compatibility. (@Yawning)
- [types] [\#6627](https://github.com/tendermint/tendermint/pull/6627) Move `NodeKey` to types to make the type public.
- [config] [\#6627](https://github.com/tendermint/tendermint/pull/6627) Extend `config` to contain methods `LoadNodeKeyID` and `LoadOrGenNodeKeyID`
- [blocksync] [\#6755](https://github.com/tendermint/tendermint/pull/6755) Rename `FastSync` and `Blockchain` package to `BlockSync` (@cmwaters)
- CLI/RPC/Config
- [pubsub/events] [\#6634](https://github.com/tendermint/tendermint/pull/6634) The `ResultEvent.Events` field is now of type `[]abci.Event` preserving event order instead of `map[string][]string`. (@alexanderbez)
- [config] [\#5598](https://github.com/tendermint/tendermint/pull/5598) The `test_fuzz` and `test_fuzz_config` P2P settings have been removed. (@erikgrinaker)
- [config] [\#5728](https://github.com/tendermint/tendermint/pull/5728) `fastsync.version = "v1"` is no longer supported (@melekes)
- [cli] [\#5772](https://github.com/tendermint/tendermint/pull/5772) `gen_node_key` prints JSON-encoded `NodeKey` rather than ID and does not save it to `node_key.json` (@melekes)
- [cli] [\#5777](https://github.com/tendermint/tendermint/pull/5777) use hyphen-case instead of snake_case for all cli commands and config parameters (@cmwaters)
- [rpc] [\#6019](https://github.com/tendermint/tendermint/pull/6019) standardise RPC errors and return the correct status code (@bipulprasad & @cmwaters)
- [rpc] [\#6168](https://github.com/tendermint/tendermint/pull/6168) Change default sorting to desc for `/tx_search` results (@melekes)
- [cli] [\#6282](https://github.com/tendermint/tendermint/pull/6282) User must specify the node mode when using `tendermint init` (@cmwaters)
- [state/indexer] [\#6382](https://github.com/tendermint/tendermint/pull/6382) reconstruct indexer, move txindex into the indexer package (@JayT106)
- [cli] [\#6372](https://github.com/tendermint/tendermint/pull/6372) Introduce `BootstrapPeers` as part of the new p2p stack. Peers to be connected on startup (@cmwaters)
- [config] [\#6462](https://github.com/tendermint/tendermint/pull/6462) Move `PrivValidator` configuration out of `BaseConfig` into its own section. (@tychoish)
- [rpc] [\#6610](https://github.com/tendermint/tendermint/pull/6610) Add MaxPeerBlockHeight into /status rpc call (@JayT106)
- [blocksync/rpc] [\#6620](https://github.com/tendermint/tendermint/pull/6620) Add TotalSyncedTime & RemainingTime to SyncInfo in /status RPC (@JayT106)
- [rpc/grpc] [\#6725](https://github.com/tendermint/tendermint/pull/6725) Mark gRPC in the RPC layer as deprecated.
- [blocksync/v2] [\#6730](https://github.com/tendermint/tendermint/pull/6730) Fast Sync v2 is deprecated; please use v0
- [rpc] Add genesis_chunked method to support paginated and parallel fetching of large genesis documents.
- [rpc/jsonrpc/server] [\#6785](https://github.com/tendermint/tendermint/pull/6785) `Listen` function updated to take an `int` argument, `maxOpenConnections`, instead of an entire config object. (@williambanfield)
- [rpc] [\#6820](https://github.com/tendermint/tendermint/pull/6820) Update RPC methods to reflect changes in the p2p layer, disabling support for `UnsafeDialSeeds` and `UnsafeDialPeers` when used with the new p2p layer, and changing the response format of the peer list in `NetInfo` for all users.
- [cli] [\#6854](https://github.com/tendermint/tendermint/pull/6854) Remove deprecated snake case commands. (@tychoish)
- [tools] [\#6498](https://github.com/tendermint/tendermint/pull/6498) Use the OS home directory instead of the hardcoded PATH. (@JayT106)
- [cli/indexer] [\#6676](https://github.com/tendermint/tendermint/pull/6676) Reindex events command line tooling. (@JayT106)
- Apps
- [ABCI] [\#6408](https://github.com/tendermint/tendermint/pull/6408) Change the `key` and `value` fields from `[]byte` to `string` in the `EventAttribute` type. (@alexanderbez)
- [ABCI] [\#5447](https://github.com/tendermint/tendermint/pull/5447) Remove `SetOption` method from `ABCI.Client` interface
- [ABCI] [\#5447](https://github.com/tendermint/tendermint/pull/5447) Reset `Oneof` indexes for `Request` and `Response`.
- [ABCI] [\#5818](https://github.com/tendermint/tendermint/pull/5818) Use protoio for msg length delimitation. Migrates from int64 to uint64 length delimiters.
- [ABCI] [\#3546](https://github.com/tendermint/tendermint/pull/3546) Add `mempool_error` field to `ResponseCheckTx`. This field will contain an error string if Tendermint encountered an error while adding a transaction to the mempool. (@williambanfield)
- [Version] [\#6494](https://github.com/tendermint/tendermint/pull/6494) `TMCoreSemVer` has been renamed to `TMVersion`.
- It is no longer required to set ldflags to set version strings
- [abci/counter] [\#6684](https://github.com/tendermint/tendermint/pull/6684) Delete counter example app
- Data Storage
- [store/state/evidence/light] [\#5771](https://github.com/tendermint/tendermint/pull/5771) Use an order-preserving varint key encoding (@cmwaters)
- [mempool] [\#6396](https://github.com/tendermint/tendermint/pull/6396) Remove mempool's write ahead log (WAL), (previously unused by the tendermint code). (@tychoish)
- [state] [\#6541](https://github.com/tendermint/tendermint/pull/6541) Move pruneBlocks from consensus/state to state/execution. (@JayT106)
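As a hedged illustration of the pubsub and `EventAttribute` changes listed above (queries matching `[]abci.Event`, string-valued attribute keys and values), the sketch below assumes the v0.35 `libs/pubsub/query` API; the query string and event contents are invented for the example:

```go
package app

import (
	"fmt"

	abci "github.com/tendermint/tendermint/abci/types"
	"github.com/tendermint/tendermint/libs/pubsub/query"
)

// matchExample compiles a query and matches it against a slice of ABCI
// events rather than the old map[string][]string form.
func matchExample() error {
	q, err := query.New("tx.height = 5")
	if err != nil {
		return err
	}
	events := []abci.Event{
		{
			Type: "tx",
			Attributes: []abci.EventAttribute{
				{Key: "height", Value: "5"}, // keys and values are now strings
			},
		},
	}
	matched, err := q.Matches(events)
	if err != nil {
		return err
	}
	fmt.Println("matched:", matched)
	return nil
}
```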
### IMPROVEMENTS
- [libs/log] Console log formatting changes as a result of [\#6534](https://github.com/tendermint/tendermint/pull/6534) and [\#6589](https://github.com/tendermint/tendermint/pull/6589). (@tychoish)
- [statesync] [\#6566](https://github.com/tendermint/tendermint/pull/6566) Allow state sync fetchers and request timeout to be configurable. (@alexanderbez)
- [types] [\#6478](https://github.com/tendermint/tendermint/pull/6478) Add `block_id` to `newblock` event (@jeebster)
- [crypto/ed25519] [\#6526](https://github.com/tendermint/tendermint/pull/6526) Use [curve25519-voi](https://github.com/oasisprotocol/curve25519-voi) for `ed25519` signing and verification. (@Yawning)
- [crypto/sr25519] [\#6526](https://github.com/tendermint/tendermint/pull/6526) Use [curve25519-voi](https://github.com/oasisprotocol/curve25519-voi) for `sr25519` signing and verification. (@Yawning)
- [privval] [\#5603](https://github.com/tendermint/tendermint/pull/5603) Add `--key` to `init`, `gen_validator`, `testnet`&`unsafe_reset_priv_validator` for use in generating `secp256k1` keys.
- [privval] [\#5725](https://github.com/tendermint/tendermint/pull/5725) Add gRPC support to private validator.
- [privval] [\#5876](https://github.com/tendermint/tendermint/pull/5876) `tendermint show-validator` will query the remote signer if gRPC is being used (@marbar3778)
- [abci/client] [\#5673](https://github.com/tendermint/tendermint/pull/5673) `Async` requests return an error if queue is full (@melekes)
- [mempool] [\#5673](https://github.com/tendermint/tendermint/pull/5673) Cancel `CheckTx` requests if RPC client disconnects or times out (@melekes)
- [abci] [\#5706](https://github.com/tendermint/tendermint/pull/5706) Added `AbciVersion` to `RequestInfo` allowing applications to check ABCI version when connecting to Tendermint. (@marbar3778)
- [cli] [\#5772](https://github.com/tendermint/tendermint/pull/5772) `gen_node_key` output now contains node ID (`id` field) (@melekes)
- [blocksync/v2] [\#5774](https://github.com/tendermint/tendermint/pull/5774) Send status request when new peer joins (@melekes)
- [store] [\#5888](https://github.com/tendermint/tendermint/pull/5888) store.SaveBlock saves using batches instead of transactions for now to improve ACID properties. This is a quick fix for underlying issues around tm-db and ACID guarantees. (@githubsands)
- [consensus] [\#5987](https://github.com/tendermint/tendermint/pull/5987) and [\#5792](https://github.com/tendermint/tendermint/pull/5792) Remove the `time_iota_ms` consensus parameter. Merge `tmproto.ConsensusParams` and `abci.ConsensusParams`. (@marbar3778, @valardragon)
- [types] [\#5994](https://github.com/tendermint/tendermint/pull/5994) Reduce the use of protobuf types in core logic. (@marbar3778)
- `ConsensusParams`, `BlockParams`, `ValidatorParams`, `EvidenceParams`, `VersionParams`, `sm.Version` and `version.Consensus` have become native types. They still utilize protobuf when being sent over the wire or written to disk.
- [rpc/client/http] [\#6163](https://github.com/tendermint/tendermint/pull/6163) Do not drop events even if the `out` channel is full (@melekes)
- [node] [\#6059](https://github.com/tendermint/tendermint/pull/6059) Validate and complete genesis doc before saving to state store (@silasdavis)
- [state] [\#6067](https://github.com/tendermint/tendermint/pull/6067) Batch save state data (@githubsands&@cmwaters)
- [crypto] [\#6120](https://github.com/tendermint/tendermint/pull/6120) Implement batch verification interface for ed25519 and sr25519. (@marbar3778)
- [types] [\#6120](https://github.com/tendermint/tendermint/pull/6120) Use batch verification for verifying commit signatures.
- If the key type supports the batch verification API, signatures are verified in a batch; if batch verification fails, each signature is verified individually (see the sketch at the end of this list).
- [privval/file] [\#6185](https://github.com/tendermint/tendermint/pull/6185) Return error on `LoadFilePV`, `LoadFilePVEmptyState`. Allows for better programmatic control of Tendermint.
- [privval] [\#6240](https://github.com/tendermint/tendermint/pull/6240) Add `context.Context` to privval interface.
- [rpc] [\#6265](https://github.com/tendermint/tendermint/pull/6265) set cache control in http-rpc response header (@JayT106)
- [statesync] [\#6378](https://github.com/tendermint/tendermint/pull/6378) Retry requests for snapshots and add a minimum discovery time (5s) for new snapshots.
- [node/state] [\#6370](https://github.com/tendermint/tendermint/pull/6370) graceful shutdown in the consensus reactor (@JayT106)
- [consensus/metrics] [\#6549](https://github.com/tendermint/tendermint/pull/6549) Change block_size gauge to a histogram for better observability over time (@marbar3778)
- [statesync] [\#6587](https://github.com/tendermint/tendermint/pull/6587) Increase chunk priority and re-request chunks that don't arrive (@cmwaters)
- [state/privval] [\#6578](https://github.com/tendermint/tendermint/pull/6578) No GetPubKey retry beyond the proposal/voting window (@JayT106)
- [rpc] [\#6615](https://github.com/tendermint/tendermint/pull/6615) Add TotalGasUsed to block_results response (@crypto-facs)
- [cmd/tendermint/commands] [\#6623](https://github.com/tendermint/tendermint/pull/6623) replace `$HOME/.some/test/dir` with `t.TempDir` (@tanyabouman)
- [statesync] [\#6807](https://github.com/tendermint/tendermint/pull/6807) Implement P2P state provider as an alternative to RPC (@cmwaters)
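The batch-verification entry above describes a try-batch-then-fall-back flow; the sketch below illustrates that pattern with a stand-in `BatchVerifier` interface (the concrete interface and constructor in the `crypto` packages may differ):

```go
package app

import "github.com/tendermint/tendermint/crypto"

// BatchVerifier is a stand-in for the batch verification API referenced in
// the changelog entry; it is assumed, not the exact tendermint interface.
type BatchVerifier interface {
	Add(pubKey crypto.PubKey, msg, sig []byte) error
	Verify() bool
}

// verifySignatures tries batch verification first and, if the batch cannot
// be built or fails, falls back to verifying each signature individually.
func verifySignatures(bv BatchVerifier, keys []crypto.PubKey, msgs, sigs [][]byte) bool {
	if bv != nil {
		batchOK := true
		for i := range keys {
			if err := bv.Add(keys[i], msgs[i], sigs[i]); err != nil {
				batchOK = false
				break
			}
		}
		if batchOK && bv.Verify() {
			return true
		}
	}
	// Fallback: verify every signature on its own.
	for i := range keys {
		if !keys[i].VerifySignature(msgs[i], sigs[i]) {
			return false
		}
	}
	return true
}
```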
## v0.34.14
This release backports the `rollback` feature to allow recovery in the event of an incorrect app hash.
### FEATURES
- [\#6982](https://github.com/tendermint/tendermint/pull/6982) The tendermint binary now has built-in support for running the end-to-end test application (with state sync support) (@cmwaters).
- [cli] [#7033](https://github.com/tendermint/tendermint/pull/7033) Add a `rollback` command to rollback to the previous tendermint state. This may be useful in the event of a non-deterministic app hash or when reverting an upgrade. (@cmwaters)
### IMPROVEMENTS
- [\#7104](https://github.com/tendermint/tendermint/pull/7104) Remove IAVL dependency (backport of #6550) (@cmwaters)
### BUG FIXES
- [\#7057](https://github.com/tendermint/tendermint/pull/7057) Import Postgres driver support for the psql indexer (@creachadair).
- [ABCI] [\#7110](https://github.com/tendermint/tendermint/issues/7110) Revert "change client to use multi-reader mutexes (#6873)" (@tychoish).
## v0.34.13
*September 6, 2021*
This release backports improvements to state synchronization and ABCI
performance under concurrent load, and the PostgreSQL event indexer.
### IMPROVEMENTS
- [statesync] [\#6881](https://github.com/tendermint/tendermint/issues/6881) improvements to stateprovider logic (@cmwaters)
- [ABCI] [\#6873](https://github.com/tendermint/tendermint/issues/6873) change client to use multi-reader mutexes (@tychoish)
- [indexing] [\#6906](https://github.com/tendermint/tendermint/issues/6906) enable the PostgreSQL indexer sink (@creachadair)
## v0.34.12
*August 17, 2021*
Special thanks to external contributors on this release: @JayT106.
with incorrectly handling contexts that would occasionally freeze state sync. (@cmwaters)
- [privval] [\#6748](https://github.com/tendermint/tendermint/issues/6748) Fix vote timestamp to prevent chain halt (@JayT106)
## v0.34.11
*June 18, 2021*
This release improves the robustness of statesync by tweaking channel priorities and timeouts and
adding two new parameters to the state sync config.
### BREAKING CHANGES
- Apps
- [Version] [\#6494](https://github.com/tendermint/tendermint/pull/6494) `TMCoreSemVer` is not required to be set as a ldflag any longer.
### IMPROVEMENTS
- [statesync] [\#6566](https://github.com/tendermint/tendermint/pull/6566) Allow state sync fetchers and request timeout to be configurable. (@alexanderbez)
- [statesync] [\#6378](https://github.com/tendermint/tendermint/pull/6378) Retry requests for snapshots and add a minimum discovery time (5s) for new snapshots. (@tychoish)
- [evidence] [\#6375](https://github.com/tendermint/tendermint/pull/6375) Fix bug with inconsistent LightClientAttackEvidence hashing (@cmwaters)
## v0.34.10
*April 14, 2021*
This release fixes a bug where peers would sometimes try to send messages
on incorrect channels. Special thanks to our friends at Oasis Labs for surfacing
this issue!
- [p2p/node] [\#6339](https://github.com/tendermint/tendermint/issues/6339) Fix bug with using custom channels (@cmwaters)
- [light] [\#6346](https://github.com/tendermint/tendermint/issues/6346) Correctly handle too high errors to improve client robustness (@cmwaters)
## v0.34.9
*April 8, 2021*
This release fixes a moderate severity security issue, Security Advisory Alderfly,
which impacts all networks that rely on Tendermint light clients.
Further details will be released once networks have upgraded.
This release also includes a small Go API-breaking change, to reduce panics in the RPC layer.
Special thanks to our external contributors on this release: @gchaincl
### BREAKING CHANGES
- Go API
- [rpc/jsonrpc/server] [\#6204](https://github.com/tendermint/tendermint/issues/6204) Modify `WriteRPCResponseHTTP(Error)` to return an error (@melekes)
### FEATURES
- [rpc] [\#6226](https://github.com/tendermint/tendermint/issues/6226) Index block events and expose a new RPC method, `/block_search`, to allow querying for blocks by `BeginBlock` and `EndBlock` events (@alexanderbez)
### BUG FIXES
- [rpc/jsonrpc/server] [\#6191](https://github.com/tendermint/tendermint/issues/6191) Correctly unmarshal `RPCRequest` when data is `null` (@melekes)
- [p2p] [\#6289](https://github.com/tendermint/tendermint/issues/6289) Fix "unknown channels" bug on CustomReactors (@gchaincl)
- [light/evidence] Adds logic to handle forward lunatic attacks (@cmwaters)
## v0.34.8
*February 25, 2021*
This release, in conjunction with [a fix in the Cosmos SDK](https://github.com/cosmos/cosmos-sdk/pull/8641),
introduces changes that should mean the logs are much, much quieter. 🎉
### IMPROVEMENTS
- [libs/log] [\#6174](https://github.com/tendermint/tendermint/issues/6174) Include timestamp (`ts` field; `time.RFC3339Nano` format) in JSON logger output (@melekes)
### BUG FIXES
- [abci] [\#6124](https://github.com/tendermint/tendermint/issues/6124) Fixes a panic condition during callback execution in `ReCheckTx` during high tx load. (@alexanderbez)
## v0.34.7
*February 18, 2021*
This release fixes a downstream security issue which impacts Cosmos SDK
users who are:
* Using Cosmos SDK v0.40.0 or later, AND
* Running validator nodes, AND
* Using the file-based `FilePV` implementation for their consensus keys
Users who fulfill all the above criteria were susceptible to leaking
private key material in the logs. All other users are unaffected.
The root cause was a discrepancy
between the Tendermint Core (untyped) logger and the Cosmos SDK (typed) logger:
Tendermint Core's logger automatically stringifies Go interfaces whenever possible;
however, the Cosmos SDK's logger uses reflection to log the fields within a Go interface.
The introduction of the typed logger meant that previously un-logged fields within
interfaces are now sometimes logged, including the private key material inside the
`FilePV` struct.
Tendermint Core v0.34.7 fixes this issue; however, we strongly recommend that all validators
use remote signer implementations instead of `FilePV` in production.
Thank you to @joe-bowman for his assistance with this vulnerability and a particular
shout-out to @marbar3778 for diagnosing it quickly.
### BUG FIXES
- [consensus] [\#6128](https://github.com/tendermint/tendermint/pull/6128) Remove privValidator from log call (@tessr)
## v0.34.6
*February 18, 2021*
_Tendermint Core v0.34.5 and v0.34.6 have been recalled due to release tooling problems._
## v0.34.4
*February 11, 2021*
This release includes a fix for a memory leak in the evidence reactor (see #6068, below).
All Tendermint clients are recommended to upgrade.
Thank you to our friends at Crypto.com for the initial report of this memory leak!
Special thanks to other external contributors on this release: @yayajacky, @odidev, @laniehei, and @c29r3!
### BUG FIXES
- [light] [\#6022](https://github.com/tendermint/tendermint/pull/6022) Fix a bug when the number of validators equals 100 (@melekes)
- [light] [\#6026](https://github.com/tendermint/tendermint/pull/6026) Fix a bug when height isn't provided for the rpc calls: `/commit` and `/validators` (@cmwaters)
- [evidence] [\#6068](https://github.com/tendermint/tendermint/pull/6068) Terminate broadcastEvidenceRoutine when peer is stopped (@melekes)
## v0.34.3
*January 19, 2021*
This release includes a fix for a high-severity security vulnerability,
a DoS-vector that impacted Tendermint Core v0.34.0-v0.34.2. For more details, see
https://nvd.nist.gov/vuln/detail/CVE-2021-21271.
Tendermint Core v0.34.3 also updates GoGo Protobuf to 1.3.2 in order to pick up the fix for
https://nvd.nist.gov/vuln/detail/CVE-2021-3121.
### BUG FIXES
- [evidence] [[security fix]](https://github.com/tendermint/tendermint/security/advisories/GHSA-p658-8693-mhvg) Use correct source of evidence time (@cmwaters)
- [proto] [\#5889](https://github.com/tendermint/tendermint/pull/5889) Bump gogoproto to 1.3.2 (@marbar3778)
## v0.34.2
This release fixes a substantial bug in evidence handling where evidence could
sometimes be broadcast before the block containing that evidence was fully committed,
resulting in some nodes panicking when trying to verify said evidence.
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES
- Go API
disconnecting from this node. As a temporary remedy (until the mempool package
is refactored), the `max-batch-bytes` was disabled. Transactions will be sent
one by one without batching.
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES
- CLI/RPC/Config
Holy smokes, this is a big one! For a more reader-friendly overview of the changes
Special thanks to external contributors on this release: @james-ray, @fedekunze, @favadi, @alessio,
@joe-bowman, @cuonglm, @SadPencil and @dongsam.
And as always, friendly reminder, that we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES
- CLI/RPC/Config
And as always, friendly reminder, that we have a [bug bounty program](https://hackerone.com/tendermint).
- [blockchain] [\#4637](https://github.com/tendermint/tendermint/pull/4637) Migrate blockchain reactor(s) to Protobuf encoding (@marbar3778)
- [evidence] [\#4949](https://github.com/tendermint/tendermint/pull/4949) Migrate evidence reactor to Protobuf encoding (@marbar3778)
- [mempool] [\#4940](https://github.com/tendermint/tendermint/pull/4940) Migrate mempool to Protobuf encoding (@marbar3778)
- [mempool] [\#5321](https://github.com/tendermint/tendermint/pull/5321) Batch transactions when broadcasting them to peers (@melekes)
- `MaxBatchBytes` new config setting defines the max size of one batch.
- [p2p/pex] [\#4973](https://github.com/tendermint/tendermint/pull/4973) Migrate `p2p/pex` reactor to Protobuf encoding (@marbar3778)
- [statesync] [\#4943](https://github.com/tendermint/tendermint/pull/4943) Migrate state sync reactor to Protobuf encoding (@marbar3778)
- Blockchain Protocol
- [evidence] [\#4725](https://github.com/tendermint/tendermint/pull/4725) Remove `Pubkey` from `DuplicateVoteEvidence` (@marbar3778)
- [evidence] [\#5499](https://github.com/tendermint/tendermint/pull/5449) Cap evidence to a maximum number of bytes (supersedes [\#4780](https://github.com/tendermint/tendermint/pull/4780)) (@cmwaters)
- [merkle] [\#5193](https://github.com/tendermint/tendermint/pull/5193) Header hashes are no longer empty for empty inputs, notably `DataHash`, `EvidenceHash`, and `LastResultsHash` (@erikgrinaker)
- [state] [\#4845](https://github.com/tendermint/tendermint/pull/4845) Include `GasWanted` and `GasUsed` into `LastResultsHash` (@melekes)
And as always, friendly reminder, that we have a [bug bounty program](https://hackerone.com/tendermint).
- [types] [\#4852](https://github.com/tendermint/tendermint/pull/4852) Vote & Proposal `SignBytes` is now func `VoteSignBytes`&`ProposalSignBytes` (@marbar3778)
- [types] [\#5029](https://github.com/tendermint/tendermint/pull/5029) Rename all values from `PartsHeader` to `PartSetHeader` to have consistency (@marbar3778)
- [types] [\#4939](https://github.com/tendermint/tendermint/pull/4939) `Total` in `Parts`&`PartSetHeader` has been changed from a `int` to a `uint32` (@marbar3778)
- [types] [\#4939](https://github.com/tendermint/tendermint/pull/4939) Vote: `ValidatorIndex`&`Round` are now `int32` (@marbar3778)
- [types] [\#4939](https://github.com/tendermint/tendermint/pull/4939) Proposal: `POLRound`&`Round` are now `int32` (@marbar3778)
And as always, friendly reminder, that we have a [bug bounty program](https://hackerone.com/tendermint).
- [evidence] [\#4722](https://github.com/tendermint/tendermint/pull/4722) Consolidate evidence store and pool types to improve evidence DB (@cmwaters)
- [evidence] [\#4839](https://github.com/tendermint/tendermint/pull/4839) Reject duplicate evidence from being proposed (@cmwaters)
- [evidence] [\#5219](https://github.com/tendermint/tendermint/pull/5219) Change the source of evidence time to block time (@cmwaters)
- [libs] [\#5126](https://github.com/tendermint/tendermint/pull/5126) Add a sync package which wraps sync.(RW)Mutex & deadlock.(RW)Mutex and use a build flag (deadlock) in order to enable deadlock checking (@marbar3778)
- [light] [\#4935](https://github.com/tendermint/tendermint/pull/4935) Fetch and compare a new header with witnesses in parallel (@melekes)
- [light] [\#4929](https://github.com/tendermint/tendermint/pull/4929) Compare header with witnesses only when doing bisection (@melekes)
- [light] [\#4916](https://github.com/tendermint/tendermint/pull/4916) Validate basic for inbound validator sets and headers before further processing them (@cmwaters)
as 2/3+ of the signatures are checked._
Special thanks to @njmurarka at Bluzelle Networks for reporting this.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [consensus] Do not allow signatures for a wrong block in commits (@ebuchman)
need to update your code.**
Special thanks to external contributors on this release: @tau3,
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- Go API
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
Special thanks to external contributors on this release: @whylee259, @greg-szabo
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- Go API
Notes:
Special thanks to [fudongbai](https://hackerone.com/fudongbai) for finding
and reporting this.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [mempool] Reserve IDs in InitPeer instead of AddPeer (@tessr)
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- CLI/RPC/Config
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
Special thanks to external contributors on this release:
@princesinha19
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### FEATURES:
- [rpc] [\#3333](https://github.com/tendermint/tendermint/issues/3333) Add `order_by` to `/tx_search` endpoint, allowing to change default ordering from asc to desc (@princesinha19)
Special thanks to external contributors on this release:
@ruseinov, @bluele, @guagualvcha
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- Go API
This release contains a minor enhancement to the ABCI and some breaking changes
- CheckTx requests include a `CheckTxType` enum that can be set to `Recheck` to indicate to the application that this transaction was already checked/validated and certain expensive operations (like checking signatures) can be skipped
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
BREAKING CHANGES:
* CLI/RPC/Config
FEATURES:
- [libs] [\#2286](https://github.com/tendermint/tendermint/issues/2286) Panic if `autofile` or `db/fsdb` permissions change from 0600.
IMPROVEMENTS:
- [libs/db] [\#2371](https://github.com/tendermint/tendermint/issues/2371) Output error instead of panic when the given `db_backend` is not initialized (@bradyjoestar)
- [mempool] [\#2399](https://github.com/tendermint/tendermint/issues/2399) Make mempool cache a proper LRU (@bradyjoestar)
- [p2p] [\#2126](https://github.com/tendermint/tendermint/issues/2126) Introduce PeerTransport interface to improve isolation of concerns
- [libs/common] [\#2326](https://github.com/tendermint/tendermint/issues/2326) Service returns ErrNotStarted
- [ABCI] \#5447 Remove `SetOption` method from `ABCI.Client` interface
- [ABCI] \#5447 Reset `Oneof` indexes for `Request` and `Response`.
- [ABCI] \#5818 Use protoio for msg length delimitation. Migrates from int64 to uint64 length delimiters.
- [proto/tendermint] \#6976 Remove core protobuf files in favor of only housing them in the [tendermint/spec](https://github.com/tendermint/spec) repository.
- [config] \#7169 `WriteConfigFile` now returns an error. (@tychoish)
- [libs/service] \#7288 Remove SetLogger method on `service.Service` interface. (@tychoish)
- Blockchain Protocol
- Data Storage
- [store/state/evidence/light] \#5771 Use an order-preserving varint key encoding (@cmwaters)
### FEATURES
- [cli] [#7033](https://github.com/tendermint/tendermint/pull/7033) Add a `rollback` command to rollback to the previous tendermint state in the event of a non-deterministic app hash or when reverting an upgrade.
- [mempool, rpc] \#7041 Add removeTx operation to the RPC layer. (@tychoish)
at the RFC stage will build collective understanding of the dimensions
of the problems and help structure conversations around trade-offs.
We use [Protocol Buffers](https://developers.google.com/protocol-buffers) along
For linting, checking breaking changes and generating proto stubs, we use [buf](https://buf.build/). If you would like to run linting and check whether the changes you have made are breaking, you will need to have Docker running locally. The linting command is `make proto-lint` and the breaking-changes check is `make proto-check-breaking`.
There are two ways to generate your proto stubs.
1. Use Docker: pull an image that will generate your proto stubs with no need to install anything, then run `make proto-gen-docker`.
2. Run `make proto-gen` after installing `buf` and `gogoproto`; you can install both by running `make protobuf`.
### Installation Instructions
To install `protoc`, download an appropriate release (<https://github.com/protocolbuffers/protobuf>) and then move the provided binaries into your PATH (follow instructions in README included with the download).
To install `gogoproto`, do the following:
```sh
go get github.com/gogo/protobuf/gogoproto
cd $GOPATH/pkg/mod/github.com/gogo/protobuf@v1.3.1 # or wherever go get installs things
make install
```
You should now be able to run `make proto-gen` from inside the root Tendermint directory to generate new files from proto files.
We use [Docker](https://www.docker.com/) to generate the protobuf stubs. To generate the stubs yourself, make sure docker is running then run `make proto-gen`. This command uses the spec repo to get the necessary protobuf files for generating the go code. If you are modifying the proto files manually for changes in the core data structures, you will need to clone them into the go repo and comment out lines 22-37 of the file `./scripts/protocgen.sh`.
### Visual Studio Code
If you are a VS Code user, you may want to add the following to your `.vscode/settings.json`:
```json
{
  "protoc": {
    "options": [
      "--proto_path=${workspaceRoot}/proto",
      "--proto_path=${workspaceRoot}/third_party/proto"
    ]
  }
}
```
Fixes #nnnn
Each PR should have one commit once it lands on `master`; this can be accomplished by using the "squash and merge" button on Github. Be sure to edit your commit message, though!
### Release Procedure
#### Major Release
This major release process assumes that this release was preceded by release candidates.
If there were no release candidates, and you'd like to cut a major release directly from master, see below.
1. Start on the latest RC branch (`RCx/vX.X.0`).
2. Run integration tests.
3. Branch off of the RC branch (`git checkout -b release-prep`) and prepare the release:
- "Squash" changes from the changelog entries for the RCs into a single entry,
and add all changes included in `CHANGELOG_PENDING.md`.
(Squashing includes both combining all entries, as well as removing or simplifying
any intra-RC changes. It may also help to alphabetize the entries by package name.)
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all PRs
- Ensure that UPGRADING.md is up-to-date and includes notes on any breaking changes
or other upgrading flows.
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
- Add any release notes you would like to be added to the body of the release to `release_notes.md`.
4. Open a PR with these changes against the RC branch (`RCx/vX.X.0`).
5. Once these changes are on the RC branch, branch off of the RC branch again to create a release branch:
- `git checkout RCx/vX.X.0`
- `git checkout -b release/vX.X.0`
6. Push a tag with prepared release details. This will trigger the actual release `vX.X.0`.
- `git tag -a vX.X.0 -m 'Release vX.X.0'`
- `git push origin vX.X.0`
7. Make sure that `master` is updated with the latest `CHANGELOG.md`, `CHANGELOG_PENDING.md`, and `UPGRADING.md`.
8. Create the long-lived minor release branch `RC0/vX.X.1` for the next point release on this
new major release series.
##### Major Release (from `master`)
1. Start on `master`
2. Run integration tests (see `test_integrations` in Makefile)
3. Prepare release in a pull request against `master` (to be squash merged):
- Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`; if this release
had release candidates, squash all the RC updates into one
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all issues
- Run `bash ./scripts/authors.sh` to get a list of authors since the latest
release, and add the github aliases of external contributors to the top of
the changelog. To lookup an alias from an email, try `bash ./scripts/authors.sh <email>`
- Reset the `CHANGELOG_PENDING.md`
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
- Make sure all significant breaking changes are covered in `UPGRADING.md`
- Add any release notes you would like to be added to the body of the release to `release_notes.md`.
4. Push a tag with prepared release details (this will trigger the release `vX.X.0`)
- `git tag -a vX.X.x -m 'Release vX.X.x'`
- `git push origin vX.X.x`
5. Update the `CHANGELOG.md` file on master with the release's changelog.
6. Delete any RC branches and tags for this release (if applicable)
#### Minor Release (Point Releases)
Minor releases are done differently from major releases: They are built off of long-lived backport branches, rather than from master.
Each release "line" (e.g. 0.34 or 0.33) has its own long-lived backport branch, and
the backport branches have names like `v0.34.x` or `v0.33.x` (literally, `x`; it is not a placeholder in this case).
As non-breaking changes land on `master`, they should also be backported (cherry-picked) to these backport branches.
Minor releases don't have release candidates by default, although any tricky changes may merit a release candidate.
To create a minor release:
1. Checkout the long-lived backport branch: `git checkout vX.X.x`
2. Run integration tests: `make test_integrations`
3. Check out a new branch and prepare the release:
- Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for all issues
- Run `bash ./scripts/authors.sh` to get a list of authors since the latest release, and add the GitHub aliases of external contributors to the top of the CHANGELOG. To lookup an alias from an email, try `bash ./scripts/authors.sh <email>`
- Reset the `CHANGELOG_PENDING.md`
- Bump the ABCI version number, if necessary.
(Note that ABCI follows semver, and that ABCI versions are the only versions
which can change during minor releases, and only field additions are valid minor changes.)
- Add any release notes you would like to be added to the body of the release to `release_notes.md`.
4. Open a PR with these changes that will land them back on `vX.X.x`
5. Once this change has landed on the backport branch, make sure to pull it locally, then push a tag.
- `git tag -a vX.X.x -m 'Release vX.X.x'`
- `git push origin vX.X.x`
6. Create a pull request back to master with the CHANGELOG & version changes from the latest release.
- Remove all `R:minor` labels from the pull requests that were included in the release.
- Do not merge the backport branch into master.
#### Release Candidates
Before creating an official release, especially a major release, we may want to create a
release candidate (RC) for our friends and partners to test out. We use git tags to
create RCs, and we build them off of RC branches. RC branches typically have names formatted
like `RCX/vX.X.X` (or, concretely, `RC0/v0.34.0`), while the tags themselves follow
the "standard" release naming conventions, with `-rcX` at the end (`vX.X.X-rcX`).
(Note that branches and tags _cannot_ have the same names, so it's important that these branches
have distinct names from the tags/release names.)
1. Start from the RC branch (e.g. `RC0/v0.34.0`).
2. Create the new tag, specifying a name and a tag "message":
`git tag -a v0.34.0-rc0 -m "Release Candidate v0.34.0-rc0"`
3. Push the tag back up to origin:
`git push origin v0.34.0-rc0`
Now the tag should be available on the repo's releases page.
4. Create a new release candidate branch for any possible updates to the RC:
Tendermint Core is a Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine - written in any programming language - and securely replicates it on many machines.
For protocol details, see [the specification](https://github.com/tendermint/spec).
see our recent paper, "[The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938)".
Please do not depend on master as your production branch. Use [releases](https://github.com/tendermint/tendermint/releases) instead.
Tendermint has been used in production in both private and public environments, most notably the blockchains of the Cosmos Network. We haven't released v1.0 yet since we are still making breaking changes to the protocol and the APIs.
See below for more details about [versioning](#versioning).
In any case, if you intend to run Tendermint in production, we're happy to help. You can
contact us [over email](mailto:hello@interchain.berlin) or [join the chat](https://discord.gg/cosmosnetwork).
More on how releases are conducted can be found [here](./RELEASES.md).
## Security
To report a security vulnerability, see our [bug bounty
program](https://hackerone.com/cosmos).
For examples of the kinds of bugs we're looking for, see [our security policy](SECURITY.md).
We also maintain a dedicated mailing list for security updates. We will only ever use this mailing list
to notify you of vulnerabilities and fixes in Tendermint Core. You can subscribe
| Requirement | Notes |
|-------------|------------------|
| Go version | Go1.16 or higher |
## Documentation
and familiarize yourself with our
Tendermint uses [Semantic Versioning](http://semver.org/) to determine when and how the version changes.
According to SemVer, anything in the public API can change at any time before version 1.0.0.
To provide some stability to users of 0.X.X versions of Tendermint, the MINOR version is used
to signal breaking changes across Tendermint's API. This API includes all
publicly exposed types, functions, and methods in non-internal Go packages as well as
the types and methods accessible via the Tendermint RPC interface.
That said, breaking changes in the following packages will be documented in the
CHANGELOG even if they don't lead to MINOR version bumps:
- crypto
- config
- libs
  - bits
  - bytes
  - json
  - log
  - math
  - net
  - os
  - protoio
  - rand
  - sync
  - strings
  - service
- node
- rpc/client
- types
Breaking changes to these public APIs will be documented in the CHANGELOG.
### Upgrades
in [UPGRADING.md](./UPGRADING.md).
### Tendermint Core
We keep a public up-to-date version of our roadmap [here](./docs/roadmap/roadmap.md).
For details about the blockchain data structures and the p2p protocols, see the
- Move the changes included in `CHANGELOG_PENDING.md` into `CHANGELOG.md`. Each RC should have
its own changelog section. These will be squashed when the final candidate is released.
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all PRs
- Ensure that `UPGRADING.md` is up-to-date and includes notes on any breaking changes
or other upgrading flows.
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary.
Check the changelog for breaking changes in these components.
- Bump ABCI protocol version in `version.go`, if necessary
4. Open a PR with these changes against the backport branch.
5. Once these changes have landed on the backport branch, be sure to pull them back down locally.
6. Once you have the changes locally, create the new tag, specifying a name and a tag "message":
`git tag -a v0.35.0-rc0 -m "Release Candidate v0.35.0-rc0"`
7. Push the tag back up to origin:
`git push origin v0.35.0-rc0`
Now the tag should be available on the repo's releases page.
8. Future RCs will continue to be built off of this branch.
Note that this process should only be used for "true" RCs--
release candidates that, if successful, will be the next release.
For more experimental "RCs," create a new, short-lived branch and tag that instead.
## Major release
This major release process assumes that this release was preceded by release candidates.
If there were no release candidates, begin by creating a backport branch, as described above.
1. Start on the backport branch (e.g. `v0.35.x`)
2. Run integration tests (`make test_integrations`) and the e2e nightlies.
3. Prepare the release:
- "Squash" changes from the changelog entries for the RCs into a single entry,
and add all changes included in `CHANGELOG_PENDING.md`.
(Squashing includes both combining all entries, as well as removing or simplifying
any intra-RC changes. It may also help to alphabetize the entries by package name.)
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all PRs
- Ensure that `UPGRADING.md` is up-to-date and includes notes on any breaking changes
or other upgrading flows.
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
4. Open a PR with these changes against the backport branch.
5. Once these changes are on the backport branch, push a tag with prepared release details.
This will trigger the actual release `v0.35.0`.
- `git tag -a v0.35.0 -m 'Release v0.35.0'`
- `git push origin v0.35.0`
6. Make sure that `master` is updated with the latest `CHANGELOG.md`, `CHANGELOG_PENDING.md`, and `UPGRADING.md`.
7. Add the release to the documentation site generator config (see
[DOCS_README.md](./docs/DOCS_README.md) for more details). In summary:
- Start on branch `master`.
- Add a new line at the bottom of [`docs/versions`](./docs/versions) to
ensure the newest release is the default for the landing page.
- Add a new entry to `themeConfig.versions` in
[`docs/.vuepress/config.js`](./docs/.vuepress/config.js) to include the
release in the dropdown versions menu.
- Commit these changes to `master` and backport them into the backport
branch for this release.
## Minor release (point releases)
Minor releases are done differently from major releases: They are built off of
long-lived backport branches, rather than from master. As non-breaking changes
land on `master`, they should also be backported into these backport branches.
Minor releases don't have release candidates by default, although any tricky
changes may merit a release candidate.
To create a minor release:
1. Checkout the long-lived backport branch: `git checkout v0.35.x`
2. Run integration tests (`make test_integrations`) and the nightlies.
3. Check out a new branch and prepare the release:
- Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for all issues
- Run `bash ./scripts/authors.sh` to get a list of authors since the latest release, and add the GitHub aliases of external contributors to the top of the CHANGELOG. To lookup an alias from an email, try `bash ./scripts/authors.sh <email>`
- Reset the `CHANGELOG_PENDING.md`
- Bump the TMVersionDefault version in `version.go`
- Bump the ABCI version number, if necessary.
(Note that ABCI follows semver, and that ABCI versions are the only versions
which can change during minor releases, and only field additions are valid minor changes.)
4. Open a PR with these changes that will land them back on `v0.35.x`
5. Once this change has landed on the backport branch, make sure to pull it locally, then push a tag.
- `git tag -a v0.35.1 -m 'Release v0.35.1'`
- `git push origin v0.35.1`
6. Create a pull request back to master with the CHANGELOG & version changes from the latest release.
- Remove all `R:minor` labels from the pull requests that were included in the release.
As part of our [Coordinated Vulnerability Disclosure
Policy](https://tendermint.com/security), we operate a [bug
bounty](https://hackerone.com/cosmos).
See the policy for more details on submissions and rewards, and see "Example Vulnerabilities" (below) for examples of the kinds of bugs we're most interested in.
### Guidelines
We require that all researchers:
* Use the bug bounty to disclose all vulnerabilities, and avoid posting vulnerability information in public places, including Github Issues, Discord channels, and Telegram groups
* Make every effort to avoid privacy violations, degradation of user experience, disruption to production systems (including but not limited to the Cosmos Hub), and destruction of data
* Keep any information about vulnerabilities that you’ve discovered confidential between yourself and the Tendermint Core engineering team until the issue has been resolved and disclosed
* Avoid posting personally identifiable information, privately or publicly
If you follow these guidelines when reporting an issue to us, we commit to:
* Not pursue or support any legal action related to your research on this vulnerability
* Work with you to understand, resolve and ultimately disclose the issue in a timely fashion
## Disclosure Process
Tendermint Core uses the following disclosure process:
1. Once a security report is received, the Tendermint Core team works to verify the issue and confirm its severity level using CVSS.
2. The Tendermint Core team collaborates with the Gaia team to determine the vulnerability’s potential impact on the Cosmos Hub.
3. Patches are prepared for eligible releases of Tendermint in private repositories. See “Supported Releases” below for more information on which releases are considered eligible.
4. If it is determined that a CVE-ID is required, we request a CVE through a CVE Numbering Authority.
5. We notify the community that a security release is coming, to give users time to prepare their systems for the update. Notifications can include forum posts, tweets, and emails to partners and validators, including emails sent to the [Tendermint Security Mailing List](https://berlin.us4.list-manage.com/subscribe?u=431b35421ff7edcc77df5df10&id=3fe93307bc).
6. 24 hours following this notification, the fixes are applied publicly and new releases are issued.
7. Cosmos SDK and Gaia update their Tendermint Core dependencies to use these releases, and then themselves issue new releases.
8. Once releases are available for Tendermint Core, Cosmos SDK and Gaia, we notify the community, again, through the same channels as above. We also publish a Security Advisory on Github and publish the CVE, as long as neither the Security Advisory nor the CVE include any information on how to exploit these vulnerabilities beyond what information is already available in the patch itself.
9. Once the community is notified, we will pay out any relevant bug bounties to submitters.
10. One week after the releases go out, we will publish a post with further details on the vulnerability as well as our response to it.
This process can take some time. Every effort will be made to handle the bug in as timely a manner as possible; however, it's important that we follow the process described above to ensure that disclosures are handled consistently and to keep Tendermint Core and its downstream dependent projects--including but not limited to Gaia and the Cosmos Hub--as secure as possible.
### Example Timeline
The following is an example timeline for the triage and response. The required roles and team members are described in parentheses after each task; however, multiple people can play each role and each person may play multiple roles.
#### 24+ Hours Before Release Time
1. Request CVE number (ADMIN)
2. Gather emails and other contact info for validators (COMMS LEAD)
3. Create patches in a private security repo, and ensure that PRs are open targeting all relevant release branches (TENDERMINT ENG, TENDERMINT LEAD)
4. Test fixes on a testnet (TENDERMINT ENG, COSMOS SDK ENG)
5. Write “Security Advisory” for forum (TENDERMINT LEAD)
#### 24 Hours Before Release Time
1. Post “Security Advisory” pre-notification on forum (TENDERMINT LEAD)
2. Post Tweet linking to forum post (COMMS LEAD)
3. Announce security advisory/link to post in various other social channels (Telegram, Discord) (COMMS LEAD)
4. Send emails to validators or other users (PARTNERSHIPS LEAD)
#### Release Time
4. Post “Security releases” on forum (TENDERMINT LEAD)
5. Post new Tweet linking to forum post (COMMS LEAD)
6. Remind everyone via social channels (Telegram, Discord) that the release is out (COMMS LEAD)
7. Send emails to validators or other users (COMMS LEAD)
8. Publish Security Advisory and CVE, if CVE has no sensitive information (ADMIN)
#### After Release Time
1. Write forum post with exploit details (TENDERMINT LEAD)
2. Approve pay-out on HackerOne for submitter (ADMIN)
#### 7 Days After Release Time
1. Publish CVE if it has not yet been published (ADMIN)
2. Publish forum post with exploit details (TENDERMINT ENG, TENDERMINT LEAD)
## Supported Releases
The Tendermint Core team commits to releasing security patch releases for both the latest minor release as well as for the major/minor release that the Cosmos Hub is running.
If you are running older versions of Tendermint Core, we encourage you to upgrade at your earliest opportunity so that you can receive security patches directly from the Tendermint repo. While you are welcome to backport security patches to older versions for your own use, we will not publish or promote these backports.
## Scope
The full scope of our bug bounty program is outlined on our [Hacker One program page](https://hackerone.com/cosmos). Please also note that, in the interest of the safety of our users and staff, a few things are explicitly excluded from scope:
* Any third-party services
* Findings from physical testing, such as office access
* Findings derived from social engineering (e.g., phishing)
## Example Vulnerabilities
The following is a list of examples of the kinds of vulnerabilities that we’re most interested in. It is not exhaustive: there are other kinds of issues we may also be interested in!
### Specification
Assuming less than 1/3 of the voting power is Byzantine (malicious):
* A node halting (liveness failure)
* Syncing new and old nodes
Assuming more than 1/3 of the voting power is Byzantine:
* Attacks that go unpunished (unhandled evidence)
### Networking
Attacks may come through the P2P network or the RPC layer:
### Libraries
* Serialization
* Reading/Writing files and databases
### Cryptography
This guide provides instructions for upgrading to specific versions of Tendermint Core.
## Unreleased
## v0.35
### ABCI Changes
* Added `AbciVersion` to `RequestInfo`. Applications should check that the ABCI version they expect is being used in order to avoid errors caused by unimplemented changes.
* The method `SetOption` has been removed from the ABCI `Client` interface. This feature was used in the early ABCI implementations.
* Messages are written to a byte stream using uint64 length delimiters instead of int64.
* When mempool `v1` is enabled, transactions broadcast via `sync` mode may return a successful
  response with a transaction hash, indicating that the transaction was successfully inserted into
  the mempool. While this is true for `v0`, the `v1` mempool reactor may at a later point in time
  evict or even drop this transaction after a hash has been returned. Thus, the user or client must
  query for that transaction to check if it is still in the mempool (see the sketch below).
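To make the last point concrete, here is a minimal, illustrative sketch that broadcasts a transaction through the plain HTTP RPC and then polls the `/tx` endpoint to see whether it was actually committed. The node address, transaction bytes, polling interval, and hash encoding are assumptions for illustration; the exact parameter encoding of the URI endpoints may differ between releases.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

// node is an illustrative RPC address; adjust for your setup.
const node = "http://localhost:26657"

// rpcCall performs a GET against a Tendermint RPC endpoint and decodes the
// JSON-RPC envelope into a generic map.
func rpcCall(path string, params url.Values) (map[string]interface{}, error) {
	resp, err := http.Get(node + path + "?" + params.Encode())
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	var out map[string]interface{}
	return out, json.Unmarshal(body, &out)
}

func main() {
	// Broadcast a (hypothetical) transaction in sync mode; a hash in the
	// result only means the tx entered the mempool at that moment.
	res, err := rpcCall("/broadcast_tx_sync", url.Values{"tx": {`"example-tx"`}})
	if err != nil {
		panic(err)
	}
	result, ok := res["result"].(map[string]interface{})
	if !ok {
		panic(fmt.Sprintf("broadcast failed: %v", res["error"]))
	}
	hash, _ := result["hash"].(string)

	// With the v1 mempool the tx may later be evicted, so poll /tx to see
	// whether it was actually committed before treating it as final.
	for i := 0; i < 10; i++ {
		time.Sleep(2 * time.Second)
		txRes, err := rpcCall("/tx", url.Values{"hash": {"0x" + hash}})
		if err == nil && txRes["error"] == nil {
			fmt.Println("transaction committed")
			return
		}
	}
	fmt.Println("transaction not committed yet; it may have been evicted")
}
```

Note that `/tx` only returns transactions that have already been committed to a block; to inspect the mempool itself you could instead list `/unconfirmed_txs` and search for the hash.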
### Config Changes
* The configuration file field `[fastsync]` has been renamed to `[blocksync]`.
* The top level configuration file field `fast-sync` has moved under the new `[blocksync]`
field as `blocksync.enable`.
* `blocksync.version = "v1"` and `blocksync.version = "v2"` (previously `fastsync`)
are no longer supported. Please use `v0` instead. During the v0.35 release cycle, `v0` was
determined to suit the existing needs, and the cost of maintaining the `v1` and `v2` modules
was judged to outweigh the benefit of keeping them.
* All config parameters are now hyphen-case (also known as kebab-case) instead of snake_case. Before restarting the node, make sure
you have updated all the variables in your `config.toml` file (see the example after this list).
* Added a `--mode` flag and a `mode` config variable in `config.toml` for setting the mode of the node: `full` | `validator` | `seed` (default: `full`)
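As a concrete illustration of the renames above, a sketch of the relevant `config.toml` fragment before and after the upgrade; the values shown (and the `mode` line) are illustrative, not a complete configuration.

```toml
# Before v0.35: top-level flag plus a [fastsync] section, snake_case keys.
# fast_sync = true
#
# [fastsync]
# version = "v0"

# v0.35: the flag and section move under [blocksync]; keys are hyphen-case.
mode = "validator"   # new top-level mode: full | validator | seed

[blocksync]
enable = true
version = "v0"
```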
* `Client` object, which represents the complete light client
* The new light client stores headers & validator sets as `LightBlock`s
* The RPC client can be found in the `/rpc` directory.
* The HTTP(S) proxy is located in the `/proxy` directory.
Evidence Params has been changed to include duration.
### Go API
* `libs/common` has been removed in favor of specific packages (see the import sketch after this list):
* `async`
* `service`
* `rand`
* `net`
* `strings`
* `cmap`
* removal of `errors` pkg
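To illustrate the `libs/common` split, a minimal sketch of how imports change; the exact sub-package paths (`libs/rand`, `libs/service`) are assumptions inferred from the package names listed above.

```go
package main

import (
	// Before the split, helpers lived in the catch-all package:
	//   "github.com/tendermint/tendermint/libs/common"
	//
	// After the split, import only the specific packages you actually use
	// (blank imports here just to show the paths).
	_ "github.com/tendermint/tendermint/libs/rand"
	_ "github.com/tendermint/tendermint/libs/service"
)

func main() {}
```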
### RPC Changes
Prior to the update, suppose your `ResponseDeliverTx` looked like:
```go
abci.ResponseDeliverTx{
  Tags: []kv.Pair{
    {Key: []byte("sender"), Value: []byte("foo")},
    {Key: []byte("recipient"), Value: []byte("bar")},
    {Key: []byte("amount"), Value: []byte("35")},
  },
}
```
After the update, the same response would instead contain the following `Events`:
```go
abci.ResponseDeliverTx{
  Events: []abci.Event{
    {
      Type: "transfer",
      Attributes: kv.Pairs{
        {Key: []byte("sender"), Value: []byte("foo")},
        {Key: []byte("recipient"), Value: []byte("bar")},
        {Key: []byte("amount"), Value: []byte("35")},
      },
    },
  },
}
```
In this case, the WS client will receive an error with description:
"jsonrpc": "2.0",
"id": "{ID}#event",
"error": {
"code": -32000,
"msg": "Server error",
"data": "subscription was cancelled (reason: client is not pulling messages fast enough)" // or "subscription was cancelled (reason: Tendermint exited)"
"code": -32000,
"msg": "Server error",
"data": "subscription was canceled (reason: client is not pulling messages fast enough)" // or "subscription was canceled (reason: Tendermint exited)"