If a subscriber arrives while the pubsub service is shutting down, the existing
code will return a nil subscription without error. With unlucky timing, this
may lead to a nil indirection panic in the RPC service.
To avoid that problem, make sure that when a subscription fails for this
reason, we report a non-nil error so that the client will detect it and give up
gracefully.
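A minimal, self-contained sketch of the guard described above, using placeholder types; the real pubsub package has a richer Server and Subscription, so the names here are illustrative only:

```go
package pubsub

import "errors"

// ErrServerStopped is an illustrative sentinel error to report instead of
// returning a nil subscription while the service is shutting down.
var ErrServerStopped = errors.New("pubsub server is stopped")

// Subscription and Server are placeholders with just enough state for the sketch.
type Subscription struct{}

type Server struct {
	running bool
}

// Subscribe returns a non-nil error when the server has stopped, so RPC
// callers can detect the failure and give up instead of dereferencing a
// nil subscription later.
func (s *Server) Subscribe() (*Subscription, error) {
	if !s.running {
		return nil, ErrServerStopped
	}
	return &Subscription{}, nil
}
```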
A workaround for #6729. Add parameters to control buffer sizes for
event subscription RPC clients. On some networks, buffering causes
clients to be dropped and/or events to be lost.
For additional context, see the discussion on #7188.
- Add experimental_subscription_buffer_size config parameter
- Add experimental_websocket_write_buffer_size config parameter
- Add experimental_close_on_slow_client config parameter
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
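A hedged sketch of how the three parameters listed above might surface in the RPC config struct; the Go field names and mapstructure tags are assumptions derived from the parameter names, not the exact definitions in the repo:

```go
package config

// RPCConfig sketch: only the new experimental knobs are shown.
type RPCConfig struct {
	// Buffer size for the per-subscriber event queue (assumed field name).
	ExperimentalSubscriptionBufferSize int `mapstructure:"experimental_subscription_buffer_size"`
	// Write buffer size for the websocket connection (assumed field name).
	ExperimentalWebsocketWriteBufferSize int `mapstructure:"experimental_websocket_write_buffer_size"`
	// If true, drop clients that cannot keep up instead of buffering (assumed field name).
	ExperimentalCloseOnSlowClient bool `mapstructure:"experimental_close_on_slow_client"`
}
```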
* e2e: abci protocol should be consistent across networks (#7078)
It seems weird in retrospect that we allow networks to contain
applications that use different ABCI protocols.
(cherry picked from commit f2a8f5e054)
This change backports the PostgreSQL indexing sink, addressing part of #6828.
Development on the main branch has diverged substantially since the v0.34.x
release. It includes package moves, breaking API and protobuf schema changes,
and new APIs, all of which together have a large footprint on the mapping
between the implementation at tip and the v0.34 release branch.
To avoid the need to retrofit all of those improvements, this change works by
injecting the new indexing sink into the existing (v0.34) indexing interfaces
by delegation. This means the backport does _not_ pull in all the newer APIs
for event handling, and thus has minimal impact on existing code written
against the v0.34 package structure.
This change includes the test for the `psql` implementation, and thus updates
some Go module dependencies. Because it does not interact with any other types,
however, I did not add any unit tests to other packages in this change.
Related changes:
* Update module dependencies for psql backport.
* Update test data to be type-compatible with the old protobuf types.
* Add config settings for the PostgreSQL indexer.
* Clean up some linter settings.
* Hook up the psql indexer in the node main.
This issue was reported in Osmosis, where the message is extremely long. Also, there is absolutely no reason to log the message IMO. If we must, we can log the message at DEBUG level.
(cherry picked from commit 58a6cfff9a)
Co-authored-by: Aleksandr Bezobchuk <alexanderbez@users.noreply.github.com>
I realized after my last commit that my change made a following line of code a bit redundant.
(alternatively, my last change was redundant to the existing code.)
I took this opportunity to make some minor cleanups and logging changes to the node code, which I hope will make the tests a bit clearer.
(cherry picked from commit a374f74f7c)
Co-authored-by: Sam Kleinman <garen@tychoish.com>
* p2p/conn: check for channel id overflow before processing receive msg (#6522)
Per the tendermint spec, each Channel has a globally unique byte id, which
is mapped to uint8 in Go. However, the proto PacketMsg.ChannelID field
is declared as int32, and when receiving a packet we cast it to a byte
without checking for possible overflow. That allows a malformed packet
with an invalid channel id to be accepted successfully.
To fix this, we add a check for possible overflow and return an invalid
channel id error.
Fixes #6521
(cherry picked from commit 1f46a4c90e)
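A minimal sketch of the overflow guard (not the exact Tendermint code; the function name and error text are illustrative):

```go
package p2p

import (
	"fmt"
	"math"
)

// checkChannelID rejects wire values that cannot fit in a single byte before
// the int32 PacketMsg.ChannelID is cast down to the uint8 channel id.
func checkChannelID(channelID int32) (byte, error) {
	if channelID < 0 || channelID > math.MaxUint8 {
		return 0, fmt.Errorf("invalid channel id %d: out of byte range", channelID)
	}
	return byte(channelID), nil
}
```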
* version: revert version through ldflag only (#6494)
Add the version back to the version package, but allow it to be overridden via an ldflag.
Reason:
Many users are not setting the ldflag, causing issues with tooling that relies on it (cosmjs).
Closes #6488
cc @webmaster128
* revert variable rename
* Update CHANGELOG_PENDING.md
This is an attempt to clean up the logging message as requested in #6269.
(cherry picked from commit 3f9066b290)
Co-authored-by: Sam Kleinman <garen@tychoish.com>
## Description
Since events are not hashed into the header, they can be non-deterministic. Changing an event is not consensus-breaking. Docs in the spec will be updated.
(cherry picked from commit 884d4d5252)
Co-authored-by: Marko <marbar3778@yahoo.com>
This reverts commit afd07096a7.
I had believed that this tooling change could have been what broke our
GoReleaser flow; I now know that it was actually the result of changes in Go 1.16
and an update to GoReleaser! GoReleaser has now been updated again,
and our flow should be working again.
Executed a local network using simapp and looked for logs that seemed superfluous. This isn't by any means an exhaustive grooming, but it should drastically improve the legibility of the logs.
ref: #5912
Description
We use Docker for all protobuf-related items. This makes it unnecessary to provide a way to download the tooling.
ref #6103
Co-authored-by: Tess Rinearson <tess.rinearson@gmail.com>
Co-authored-by: Marko <marbar3778@yahoo.com>
Bumps [watchpack](https://github.com/webpack/watchpack) from 2.1.0 to 2.1.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/webpack/watchpack/releases">watchpack's releases</a>.</em></p>
<blockquote>
<h2>v2.1.1</h2>
<h1>Bugfix</h1>
<ul>
<li>fix warnings with ENOENT when symlinks are resolved by watchpack</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="f1b5e2da2d"><code>f1b5e2d</code></a> 2.1.1</li>
<li><a href="cbfc11a8d7"><code>cbfc11a</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/webpack/watchpack/issues/188">#188</a> from Aghassi/fix/enoent-throwing</li>
<li><a href="7684df0846"><code>7684df0</code></a> fix: adds ENOENT for non windows errors</li>
<li>See full diff in <a href="https://github.com/webpack/watchpack/compare/v2.1.0...v2.1.1">compare view</a></li>
</ul>
</details>
Co-authored-by: Emmanuel T Odeke <emmanuel@orijtech.com>
Closes #5907
- add init-corpus to blockchain reactor
- remove validator-set FromBytes test
now that we have proto, we don't need to test it! bye amino
- simplify mempool test
do we want to test remote ABCI app?
- do not recreate mux on every crash in jsonrpc test
- update p2p pex reactor test
- remove p2p/listener test
the API has changed, and I did not understand what it tested anyway
- update secretconnection test
- add readme and makefile
- list inputs in readme
- add nightly workflow
- remove blockchain fuzz test
EncodeMsg / DecodeMsg no longer exist
The `proto-gen-docker` target didn't pull an updated Docker image, and would use a local image if present, which could be outdated and produce wrong results.
E2E tests often fail because validators miss signing or proposing blocks. Often this is because e.g. there's a lot of disruption in the network or it takes a long time to start up all the nodes.
This changes the test criteria to only check for 3 signed/proposed blocks, rather than a fraction of the expected blocks. This should be enough to catch most issues, apart from performance problems causing nodes to miss signing/proposing, but we may want separate tests for those sorts of things.
This test relied on connecting to the external site `foo-bar.net`, and (predictably) the site went down and broke all of our CI runs. This changes it to use local HTTP servers instead.
Co-authored-by: Erik Grinaker <erik@interchain.berlin>
Conflicting votes are now sent to the evidence pool to form duplicate vote evidence only once
the height at which the evidence occurred has completed and the block time has been finalised.
Bumps [vuepress-theme-cosmos](https://github.com/cosmos/vuepress-theme-cosmos) from 1.0.179 to 1.0.180.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a href="https://github.com/cosmos/vuepress-theme-cosmos/commits">compare view</a></li>
</ul>
</details>
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.6.1 to 1.7.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/stretchr/testify/releases">github.com/stretchr/testify's releases</a>.</em></p>
<blockquote>
<h2>Minor improvements and bug fixes</h2>
<p>Minor feature improvements and bug fixes</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="acba37e5db"><code>acba37e</code></a> Only use repeatability if no repeatability left</li>
<li><a href="eb8c41ec07"><code>eb8c41e</code></a> Add more tests to mock package</li>
<li><a href="a5830c56d3"><code>a5830c5</code></a> Extract method to evaluate closest match</li>
<li><a href="1962448488"><code>1962448</code></a> Use Repeatability as tie-breaker for closest match</li>
<li><a href="92707c0b2d"><code>92707c0</code></a> Fixed the link to not point to assert only</li>
<li><a href="05dd0b2b35"><code>05dd0b2</code></a> Updated the readme to point to pkg.dev</li>
<li><a href="c26b7f39f8"><code>c26b7f3</code></a> Update assertions.go</li>
<li><a href="8fb4b2442e"><code>8fb4b24</code></a> [Fix] The most recent changes to golang/protobuf breaks the spew Circular dat...</li>
<li><a href="dc8af7208c"><code>dc8af72</code></a> add generated code for positive/negative assertion</li>
<li><a href="1544508911"><code>1544508</code></a> add assert positive/negative</li>
<li>Additional commits viewable in <a href="https://github.com/stretchr/testify/compare/v1.6.1...v1.7.0">compare view</a></li>
</ul>
</details>
#5852 fixed an issue with error propagation in `os.EnsureDir()`. However, this function is basically identical to `os.MkdirAll()`, and can be replaced entirely with a call to it. We keep the function for backwards compatibility.
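A sketch of the simplified form described above, assuming the wrapper only needs to delegate to os.MkdirAll and annotate the error:

```go
package tmos

import (
	"fmt"
	"os"
)

// EnsureDir is kept for backwards compatibility; it now simply delegates to
// os.MkdirAll, which already handles the "directory exists" case.
func EnsureDir(dir string, mode os.FileMode) error {
	if err := os.MkdirAll(dir, mode); err != nil {
		return fmt.Errorf("could not create directory %q: %w", dir, err)
	}
	return nil
}
```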
The blockchain/vX reactor priority was decreased because during normal operation
(i.e. when the node is not fast syncing) the blockchain priority can't be
the same as the consensus reactor priority. Otherwise, it's theoretically possible to
slow down consensus by constantly requesting blocks from the node.
NOTE: ideally the blockchain/vX reactor priority would be dynamic, e.g. when
the node is fast syncing the priority is 10 (max), but once it's done
fast syncing the priority gets decreased to 5 (only to serve blocks
to other nodes). That's not possible now, therefore I decided to
focus on normal operation (priority = 5).
Evidence and consensus-critical messages are more important than
the mempool ones, hence their priorities are bumped by 1 (from 5 to 6).
The statesync reactor priority was changed from 1 to 5 to be the same as
the blockchain/vX priority.
Refs https://github.com/tendermint/tendermint/issues/5816
@p4u from vocdoni.io reported that the mempool might behave incorrectly under
high load. The consequences can range from pauses between blocks to peers
disconnecting from this node.
My current theory is that the flowrate lib we're using to control flow
(multiplexing over a single TCP connection) was not designed with large blobs
(a 1MB batch of txs) in mind.
I tried decreasing the mempool reactor priority, but that did not
have any visible effect. What actually worked was adding a time.Sleep
into mempool.Reactor#broadcastTxRoutine after each successful send, i.e.
a manual form of flow control.
As a temporary remedy (until the mempool package
is refactored), max-batch-bytes was disabled. Transactions will be sent
one by one, without batching.
Closes #5796
When set to true, an invalid transaction will be kept in the cache (this may help some applications to protect against spam).
NOTE: this is a temporary config option. The more correct solution would be to add a TTL to each transaction (i.e. CheckTx may return a TTL in ResponseCheckTx).
Closes: #5751
After a reactor has failed to parse an incoming message, it shouldn't output the "bad" data into the logs, as that data is unfiltered and could have anything in it. (We also don't think this information is helpful to have in the logs anyways.)
This fixes spurious `TestByzantinePrevoteEquivocation` failures by extending the block range and time spent waiting for evidence. I've seen many runs where the evidence isn't committed until e.g. height 27. Haven't looked into _why_ this happens, but as long as the evidence is committed eventually and the test doesn't spuriously fail I'm (mostly) happy. WDYT @cmwaters?
## Description
Hardcode ed25519 to dialTCPFn in e2e tests.
I will backport the `DefaultRequestHandler` fixes.
This will be replaced when gRPC is implemented.
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.33.1 to 1.33.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/grpc/grpc-go/releases">google.golang.org/grpc's releases</a>.</em></p>
<blockquote>
<h2>Release 1.33.2</h2>
<ul>
<li>protobuf: update all generated code to google.golang.org/protobuf (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/3932">#3932</a>)</li>
<li>xdsclient: populate error details for NACK (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/3975">#3975</a>)</li>
<li>internal/credentials: fix a bug and add one more helper function SPIFFEIDFromCert (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/3929">#3929</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="56d63285d5"><code>56d6328</code></a> github: remove advancedtls examples test</li>
<li><a href="6396e4b7d7"><code>6396e4b</code></a> vet: ignore proto deprecation warnings</li>
<li><a href="0afe9d28d8"><code>0afe9d2</code></a> github: add Github Actions workflow for tests; support in vet.sh (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4005">#4005</a>)</li>
<li><a href="8a0ca33b85"><code>8a0ca33</code></a> Change version to 1.33.2</li>
<li><a href="c1989b58a5"><code>c1989b5</code></a> protobuf: update all generated code to google.golang.org/protobuf (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/3932">#3932</a>)</li>
<li><a href="b205df69d4"><code>b205df6</code></a> xdsclient: populate error details for NACK (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/3975">#3975</a>)</li>
<li><a href="75e27683ed"><code>75e2768</code></a> internal/credentials: fix a bug and add one more helper function SPIFFEIDFrom...</li>
<li><a href="17493ac067"><code>17493ac</code></a> Change version to 1.33.2-dev</li>
<li>See full diff in <a href="https://github.com/grpc/grpc-go/compare/v1.33.1...v1.33.2">compare view</a></li>
</ul>
</details>
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.1.0 to 1.1.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/spf13/cobra/releases">github.com/spf13/cobra's releases</a>.</em></p>
<blockquote>
<h2>v1.1.1</h2>
<ul>
<li><strong>Fix:</strong> yaml.v2 2.3.0 contained a unintended breaking change. This release reverts to yaml.v2 v2.2.8 which has recent critical CVE fixes, but does not have the breaking changes. See <a href="https://github-redirect.dependabot.com/spf13/cobra/pull/1259">spf13/cobra#1259</a> for context.</li>
<li><strong>Fix:</strong> correct internal formatting for go-md2man v2 (which caused man page generation to be broken). See <a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1049">spf13/cobra#1049</a> for context.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="86f8bfd7fe"><code>86f8bfd</code></a> fix manpage building with new go-md2man (<a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1255">#1255</a>)</li>
<li><a href="f32f4ef15b"><code>f32f4ef</code></a> Don't use yaml.v2 2.3.0 which has a breaking change (<a href="https://github-redirect.dependabot.com/spf13/cobra/issues/1259">#1259</a>)</li>
<li>See full diff in <a href="https://github.com/spf13/cobra/compare/v1.1.0...v1.1.1">compare view</a></li>
</ul>
</details>
Bumps [github.com/golang/protobuf](https://github.com/golang/protobuf) from 1.4.2 to 1.4.3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/golang/protobuf/releases">github.com/golang/protobuf's releases</a>.</em></p>
<blockquote>
<h2>v1.4.3</h2>
<p>Notable changes:</p>
<p>(<a href="https://github-redirect.dependabot.com/golang/protobuf/issues/1221">#1221</a>) jsonpb: Fix marshaling of Duration
(<a href="https://github-redirect.dependabot.com/golang/protobuf/issues/1210">#1210</a>) proto: convert integer to rune before converting to string</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="4846b58453"><code>4846b58</code></a> jsonpb: Fix marshaling of Duration (<a href="https://github-redirect.dependabot.com/golang/protobuf/issues/1221">#1221</a>)</li>
<li><a href="91c84e0db1"><code>91c84e0</code></a> travis.yml: update tested versions of Go (<a href="https://github-redirect.dependabot.com/golang/protobuf/issues/1211">#1211</a>)</li>
<li><a href="3860b2764f"><code>3860b27</code></a> proto: convert integer to rune before converting to string (<a href="https://github-redirect.dependabot.com/golang/protobuf/issues/1210">#1210</a>)</li>
<li>See full diff in <a href="https://github.com/golang/protobuf/compare/v1.4.2...v1.4.3">compare view</a></li>
</ul>
</details>
* Don't use state sync for nodes starting at initial height.
* Also remove stopped containers when cleaning up.
* Start nodes in order of startAt, mode, name to avoid full nodes starting before their seeds.
* Tweak network waiting to avoid halts caused by validator changes and perturbations.
* Disable most tests for seed nodes, which aren't always able to join consensus.
* Disable `blockchain/v2` due to known bugs.
Fixes #5540, fixes #2965. This is a hack that patches over the problem, but really the whole async handling in gRPC should be redesigned, as should ReqRes callback dispatch.
In #5488 the E2E testnet generator changed to setting explicit `StartAt` heights for initial nodes. This broke the runner, which expected all initial nodes to have `StartAt: 0`, as well as validator set scheduling in the generator. Testnet loading now normalizes initial nodes to have `StartAt: 0`.
This also tweaks waiting for misbehavior heights to only use an additional wait if there actually is any misbehavior in the testnet, and to output information when waiting.
Closes #5291. Adds a randomized testnet generator. Nightly CI job will be submitted separately. A few of the testnets can be a bit flaky, even after disabling known-faulty behavior and making minor tweaks, and the larger networks may be too resource-intensive to run in CI; this will be optimized separately.
This was a missing test case from the old P2P tests removed in #5453, which makes sure that all nodes are able to peer with each other regardless of how they discover peers.
Fixes #2795, since the default CI testnet uses a combination of (partially meshed) persistent peers and PEX-based seed nodes.
Partial fix for #5291.
This adds a basic set of test cases for core network invariants. Although small, it is sufficient to replace and extend the current set of P2P tests. Further test cases can be added later.
## Description
Add simple `NoBlockResponse` handling to blockchain reactor v1. I tested before and after with Erik's e2e testing and was not able to reproduce the inability to sync after the changes were applied.
Closes: #5394
Before: scheduler receives psBlockProcessed event, but does not mark block as processed because peer timed out (or was removed for other reasons) and all associated blocks were rescheduled.
After: scheduler receives psBlockProcessed event and marks block as processed in any case (even if peer who provided this block errors).
Closes #5387
When a peer is stopped due to some network issue, the Reactor calls scheduler#handleRemovePeer, which removes the peer from the scheduler. But the peer stays in the processor, which can sometimes lead to a "duplicate block enqueued by processor" panic when the same block is requested by the scheduler again from a different peer. The solution is to return scPeerError, which will be propagated to the processor. The processor will then clean up the blocks associated with the peer in purgePeer.
Closes #5513, #5517
Fixes #5439. This is really a workaround for #5519 (unless we require async implementations to return ordered responses, but that kind of defeats the purpose of having an async API).
* mempool: length prefix txs when getting them from mempool (#5483)
* correctly calculate evidence data size (#5482)
* block: use commit sig size instead of vote size (#5490)
* tx: reduce function to one parameter (#5493)
This change adds some polish to the upgrading instructions for 0.34. The only substantive changes include:
* Calling out ABCI-impacting changes explicitly in the "ABCI Changes" section, even if those changes are also mentioned elsewhere
* Removes `ProofOfTrialPeriod` from consensus params; this change was introduced and then removed.
I found this issue when trying to decouple the mempool reactor from its
underlying clist implementation.
According to its comment, the test `TestReactorNoBroadcastToSender` is
intended to make sure that a peer doesn't send a tx back to its origin.
However, the current test forgot to initialize the peer state key, so the code
gets stuck waiting for the peer to catch up in state *instead of* skipping
sending the tx back:
b8d08b9ef4/mempool/reactor.go (L216-L226)
This PR fixes the issue by initializing the peer state key.
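An illustrative sketch of the fix, assuming the reactor looks up a peer-state value that exposes GetHeight; the helper type and the call mentioned in the comment are approximations of the test code, not its exact contents:

```go
package mempool

// peerState is a minimal stand-in for the state value the mempool reactor
// expects to find under the peer's state key (anything with GetHeight).
type peerState struct {
	height int64
}

func (ps peerState) GetHeight() int64 { return ps.height }

// In the test setup, each mock peer would get its state key initialized,
// e.g. peer.Set(types.PeerStateKey, peerState{height: 1}), so the reactor's
// "has the peer caught up?" check can proceed instead of blocking forever.
```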
Revert the JSON-RPC/WebSocket response serialization format to the
standard way (i.e. a single RPC response per WebSocket text message) to
avoid breaking clients.
Serialization format changes will be discussed in an upcoming ADR.
Closes: #5373
## Description
This PR adds support for badgerdb, rocksdb and boltdb within the build options.
RocksDB builds correctly.
```
make build TENDERMINT_BUILD_OPTIONS=rocksdb
CGO_ENABLED=1 go build -mod=readonly -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short=8 HEAD` -s -w " -trimpath -tags 'tendermint rocksdb' -o build/tendermint ./cmd/tendermint/
tendermint on master [$!] via 🐹 v1.15.2 took 19s
```
Closes: #5355
## Description
Add a sentence on `initial_height`.
There will be a section in the spec repo explaining the genesis file in more depth, as this is something that affects clients as well.
Closes: #XXX
## Description
Check block protocol version in header validate basic.
I tried searching for where we check the P2P protocol version but was unable to find it. When we check compatibility with a node we check we both have the same block protocol and are on the same network, but we do not check if we are on the same P2P protocol. It makes sense if there is a handshake change because we would not be able to establish a secure connection, but a p2p protocol version bump may be because of a p2p message change, which would go unnoticed until that message is sent over the wire. Is this purposeful?
Closes: #4790
* docs: goleveldb is much more stable now
Refs https://github.com/syndtr/goleveldb/issues/226#issuecomment-682495490
* rpc/core/events: make sure WS client receives every event
previously, if the write buffer was full, the response would've been
lost without any trace (log msg, etc.)
* rpc/jsonrpc/server: set defaultWSWriteChanCapacity to 1
Refs #3905, Closes #3829
Setting the write buffer capacity to 1 makes the transaction count per block
more stable and also reduces the length of pauses by ~20s.
before: https://github.com/tendermint/tendermint/issues/3905#issuecomment-681854328 net.Read - 20s
after: net.Read - 0.66s
* rpc/jsonrpc/server: buffer writes and avoid io.ReadAll during read
## Description
Part of the issue is to add metrics to the websocket connection. It seems this would require some moving around of things in the node pkg. I opted not to make this change now and to wait for when we do a node pkg refactor.
If someone disagrees with this approach please let me know; I can attempt to get metrics into the RPC layer.
There is no need to update documentation as it already states this metric is a histogram.
Closes: #1791
## Description
Add missing metrics.
`Blockchain/v2` exposes metrics for events. I don't consider these something a node operator should rely on, as they do not bring much insight.
Closes: #XXX
On startup, the peer-to-peer stack may have peers connected before the state sync process begins, causing these to not trigger `AddPeer` events and thus not be used for snapshot discovery. Broadcasting a snapshot request to these explicitly makes sure we discover snapshots from existing peers as well.
* config: rename prof_laddr to pprof_laddr and move it to rpc
also, remove `/unsafe_start_cpu_profiler`, `/unsafe_stop_cpu_profiler`
and `/unsafe_write_heap_profile` in favor of pprof server functionality.
Closes #5303
* update changelog
* log start
State sync broke in #5231 since the genesis state is not propagated explicitly from `NewNode()` to `Node.OnStart()` and further into the state sync initialization. This is a hack until we can clean up the node startup process.
* build(deps): Bump github.com/tendermint/tm-db from 0.6.1 to 0.6.2
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Marko Baricevic <marbar3778@yahoo.com>
## Description
When downloading mockery I ran into an issue where we were using the old version. This PR updates to a more recent version.
changelog?
Closes: #XXX
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.31.0 to 1.31.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/grpc/grpc-go/releases">google.golang.org/grpc's releases</a>.</em></p>
<blockquote>
<h2>Release 1.31.1</h2>
<ul>
<li>eds: fix priority timeout failure when EDS removes all priorities (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/3839">#3839</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="39ef2aaf62"><code>39ef2aa</code></a> Change version to 1.31.1</li>
<li><a href="644142cbf3"><code>644142c</code></a> eds: fix priority timeout failure when EDS removes all priorities (cherry pic...</li>
<li><a href="f3c2c5ed7e"><code>f3c2c5e</code></a> Change version to 1.31.1-dev (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/3756">#3756</a>)</li>
<li>See full diff in <a href="https://github.com/grpc/grpc-go/compare/v1.31.0...v1.31.1">compare view</a></li>
</ul>
</details>
* libs/bits: inline defer and change order of mutexes
Closes #3217
* abci/client: unexpose StopForError func
a) it's not called outside
b) the reason for exposing it in the first place is unclear
c) Stop already exists if someone from outside wants to stop the client
Bumps [vuepress-theme-cosmos](https://github.com/cosmos/vuepress-theme-cosmos) from 1.0.169 to 1.0.172.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a href="https://github.com/cosmos/vuepress-theme-cosmos/commits">compare view</a></li>
</ul>
</details>
## Description
Remove secp256k1 as discussed in the tendermint dev call. The implementation has been moved to the [Cosmos-SDK](443e0c1f89/crypto/keys/secp256k1)
Closes: #XXX
## Description
Version the docs. Unfortunately, because there is no dropdown in the theme, the way one can see docs related to v0.32 is through `/master`.
I am not the biggest fan of this approach, but unfortunately it is all we have until https://github.com/cosmos/vuepress-theme-cosmos/issues/91 is completed
Closes: #XXX
## Description
Currently, for CI testing, we upload the code coverage reports individually across multiple machines. Once Codecov receives one report, it sends a notification in the form of a comment on the pull request. This comment does not contain the full report from all the tests, therefore causing inaccurate reports. This PR aims at delaying the Codecov notification until at least 4 reports have been uploaded, reducing noise.
Closes: #XXX
## Description
This PR makes the Tendermint Architectural Overview not visible, mainly because it contains TODOs and duplicates some information.
Closes: #XXX
## Description
Add `make build-linux` to `make build-docker`. It is not documented that the command depends on the build folder, so this aids the user.
Closes: #XXX
## Description
Basecoin is a blast from the past; this PR removes it. It is outdated as well.
I didn't add a changelog entry as Basecoin is unmaintained and unusable.
Closes: #XXX
## Description
Add chainid to requests to privval. This is a non-breaking change and hardware devices can opt to ignore the field.
Closes: #4503
Took the approach of passing chainID to the client instead of modifying `GetPubKey` because it would lead to a larger change throughout the codebase and in some places it could get tricky to get chainID.
## Description
These changes are up for discussion.
The current ADR template leaves a lot of room for interpretation, and we currently do not have implementation-level specs. This change to the ADR template is meant to address both of these pitfalls.
Thank you to @tessr for providing an amazing template to follow. I only took a few things from it but we can add more if people would like more detail.
Closes: #XXX
## Description
This PR aims to make the crypto.PubKey interface more intuitive.
Changes:
- `VerifyBytes` -> `VerifySignature`
Before, `Bytes()` was amino-encoded; now that it is just the byte representation, should we get rid of it entirely?
EDIT: decided to keep `Bytes()` as it is useful if you are using the interface instead of the concrete key
Closes: #XXX
Follow-up from #5227. Instead of checking `ResponseInitChain.app_hash` against the genesis doc app hash, we now replace it. We should probably remove the genesis doc app hash completely and rely solely on the one from `InitChain`; I'll open a separate issue to discuss this.
Adds a genesis parameter `initial_height` which specifies the initial block height, as well as ABCI `RequestInitChain.InitialHeight` to pass it to the ABCI application, and `State.InitialHeight` to keep track of the initial height throughout the code. Fixes #2543, based on [RFC-002](https://github.com/tendermint/spec/pull/119). Spec changes in https://github.com/tendermint/spec/pull/135.
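A sketch of where the new fields land, with only the relevant pieces shown; see the actual GenesisDoc and State types for the full definitions (the surrounding fields here are illustrative):

```go
package types

// GenesisDoc sketch: only the fields relevant to this change are shown.
type GenesisDoc struct {
	ChainID       string `json:"chain_id"`
	InitialHeight int64  `json:"initial_height"` // first block height of the chain
	// ... remaining genesis fields elided ...
}

// State sketch: the initial height is carried through the state so components
// other than InitChain can consult it.
type State struct {
	InitialHeight   int64 // height of the first block
	LastBlockHeight int64
	// ... remaining state fields elided ...
}
```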
## Description
Add test vectors for all reactors
- [x] state-sync
- [x] privval
- [x] mempool
- [x] p2p
- [x] evidence
- [ ] light?
This PR is primarily oriented at test vectors for things going over the wire. Should we expand the test vectors into types as well?
Closes: #XXX
Fixes #5192.
@liamsi Can you verify that the test vectors match the Rust implementation? I updated `ProofsFromByteSlices()` as well, anything else that should be updated?
## Description
When doing a release, the former RC branch should be deleted, as the content of the branch is now on the release branch. After a release is made, a new RC branch should be created.
Closes: #XXX
## Description
This PR changes `GenPrivKeySecp256k1` to `GenPrivKeyFromSecret` to be consistent with the other keys. Also, the previous name was not descriptive of what it did.
Closes: #XXX
Bumps the Tendermint version to 0.34.0, and debumps the P2P and block protocol numbers since 0.33.6 has:
```go
// P2PProtocol versions all p2p behaviour and msgs.
// This includes proposer selection.
P2PProtocol Protocol = 7
// BlockProtocol versions all block data structures and processing.
// This includes validity of blocks and state updates.
BlockProtocol Protocol = 10
```
Solves #5138 by ensuring that if a validatorSet is nil or empty, we do not try to transform it to protobuf.
Co-authored-by: Callum Michael Waters <cmwaters19@gmail.com>
ValidateBasic() for PotentialAmnesiaEvidence checks that the rounds of the two votes are different and does not check the vote type.
ValidateBasic() now also ensures that the first block is not a nil block (else the validator hasn't actually locked onto a block).
## Description
This PR adds the missing ADR numbers based on what is in #2313.
It also adds empty templates that should later be filled in when the time comes to do the implementation.
There are still missing numbers; we can either fill them in when we write more ADRs, or not backfill numbers and only go forward.
Closes: #2313
While working on Tendermint, my colleague @jinmannwong fixed a few of the unit tests that we found to be flaky in our CI. We thought that you might find this useful; see below for comments.
Closes #1581
This fixes the error in #1581, and also documents the purpose of this line. It ensures that if a peer tells us about an address we know about, whose ID is the same as our current ID, we ignore it.
This removes the previous case where the IDs matched but the IPs did not, which could yield a potential overwrite of the IP associated with the address later on. (This would then yield an eclipse attack.)
This was not a vulnerability before though, thanks to a defensive check here: 95fc7e58ee/p2p/pex/addrbook.go (L522)
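A self-contained sketch of the guard described above; the types and function are placeholders, not the actual addrbook API:

```go
package pex

import "errors"

// NodeID and Address are placeholders for this sketch.
type NodeID string

type Address struct {
	ID NodeID
	IP string
}

var errAddrBookSelf = errors.New("ignoring address advertising our own node ID")

// addAddress ignores any advertised address carrying our own node ID, so a
// peer can never overwrite the IP we associate with ourselves (the eclipse
// vector mentioned above), regardless of whether the IP matches.
func addAddress(addr Address, ourID NodeID) error {
	if addr.ID == ourID {
		return errAddrBookSelf
	}
	// ... continue with the normal address-book insertion ...
	return nil
}
```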
## Description
This PR wraps the stdlib sync.(RW)Mutex and godeadlock.(RW)Mutex. This enables using go-deadlock via a build flag instead of using sed to replace sync with godeadlock in all files.
Closes: #3242
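A sketch of the build-flag approach, assuming the wrapper package is named sync and go-deadlock is pulled in only under a `deadlock` build tag; the file layout and names are illustrative, not the repo's exact ones:

```go
//go:build deadlock

// Package sync wraps mutexes so deadlock detection can be toggled at build
// time. A sibling file guarded by `//go:build !deadlock` declares the same
// Mutex and RWMutex types as thin wrappers around the standard library's
// sync.Mutex and sync.RWMutex.
package sync

import "github.com/sasha-s/go-deadlock"

// Mutex is a mutual exclusion lock with deadlock detection enabled.
type Mutex struct {
	deadlock.Mutex
}

// RWMutex is a reader/writer lock with deadlock detection enabled.
type RWMutex struct {
	deadlock.RWMutex
}
```

Building with `go build -tags deadlock` would then swap in the instrumented mutexes without any source changes.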
## Description
This log message was marked as not useful, and in the issue it was proposed to move it to debug. I am going with this path for now. After we have refactored the logger, we should go through the codebase to clean up our log statements.
Closes: #2101
Closes #4934
* light: do not compare trusted header w/ witnesses
we don't have trusted state to bisect from
* check header before checking height
otherwise you can get a nil panic
- drop Height & Base from StatusRequest
It does not make sense, nor is it used anywhere currently. Also, there
seems to be no trace of these fields in ADR-40 (blockchain reactor
v2).
- change PacketMsg#EOF type from int32 to bool
Reorganizes the Protobuf schemas. It is mostly bikeshedding, so if something is contentious or causes a lot of extra work then I'm fine with reverting. Some Protobuf and Go import paths will change.
* Move `abci/types/types.proto` to `abci/types.proto`.
* Move `crypto/keys/types.proto` and `crypto/merkle/types.proto` to `crypto/keys.proto` and `crypto/proof.proto`.
* Drop the use of `msgs` in filenames, as "message" is a very overloaded term (all Protobuf types are messages, and we also have `message Message`). Use `types.proto` as a catch-all, and otherwise name files by conceptual grouping instead of message kind.
Closes #5074
The old code does not work when --consensus.create_empty_blocks=false
(because it only calls tmos.Kill when ApplyBlock fails). The new code
listens to the ABCI clients for Quit and kills the Tendermint process if there
were any errors.
## Description
Codecov is having issues on upload, so upgrade to 1.0.7, where they claim it works better, and don't fail CI on failure to upload the coverage file.
Closes: #XXX
* fix #5086
* fixes #5082
- run `tendermint init` at runtime (if necessary)
* Address some feedback:
- restrict the entrypoint to only run `tendermint`
- move the script into /usr/local/bin
* make it also possible to run with an unmodified config again via:
`docker run -v $HOME/.tendermint:/tendermint tendermint/tendermint init`
* Update DOCKER/docker-entrypoint.sh
Co-authored-by: Greg Szabo <16846635+greg-szabo@users.noreply.github.com>
Co-authored-by: Greg Szabo <16846635+greg-szabo@users.noreply.github.com>
Check the bcR.fastSync flag in OnStop.
This fixes the `service/service.go:161 Not stopping BlockPool -- have not been started yet {"impl": "BlockPool"}` error when killing the process.
## Description
This ADR is meant to weigh the pros and cons of gRPC and JSON-RPC. It is fairly incomplete on the JSON-RPC side.
EDIT: Thank you to Erik for filling out the pros and cons!!
Work Towards: #3367
## Description
This PR adds test vectors for proto encoding. The main difference from amino was the removal of four bytes due to interface encoding.
Should I add more cases?
Closes: #XXX
Closes #4926
The dump consensus state had this:
"last_commit": {
"votes": [
"Vote{0:04CBBF43CA3E 385085/00/2(Precommit) 1B73DA9FC4C8 42C97B86D89D @ 2020-05-27T06:46:51.042392895Z}",
"Vote{1:055799E028FA 385085/00/2(Precommit) 652B08AD61EA 0D507D7FA3AB @ 2020-06-28T04:57:29.20793209Z}",
"Vote{2:056024CFA910 385085/00/2(Precommit) 652B08AD61EA C8E95532A4C3 @ 2020-06-28T04:57:29.452696998Z}",
"Vote{3:0741C95814DA 385085/00/2(Precommit) 652B08AD61EA 36D567615F7C @ 2020-06-28T04:57:29.279788593Z}",
Note there's a precommit in there from the first val from May (2020-05-27) while the rest are from today (2020-06-28). It suggests there's a validator from an old instance of the network at this height (they're using the same chain-id!). Obviously a single bad validator shouldn't be an issue. But the Commit refactor work introduced a bug.
When we propose a block, we get the block.LastCommit by calling MakeCommit on the set of precommits we saw for the last height. This set may include precommits for a different block, and hence the block.LastCommit we propose may include precommits that aren't actually for the last block (but of course +2/3 will be). Before v0.33, we just skipped over these precommits during verification. But in v0.33, we expect all signatures for a blockID to be for the same block ID! Thus we end up proposing a block that we can't verify.
Since the light client work introduced in v0.33 it appears full nodes
are no longer fully verifying commit signatures during block execution -
they stop after +2/3. See in VerifyCommit:
0c7fd316eb/types/validator_set.go (L700-L703)
This means proposers can propose blocks that contain valid +2/3
signatures and then the rest of the signatures can be whatever they
want. They can claim that all the other validators signed just by
including a CommitSig with arbitrary signature data. While this doesn't
seem to impact safety of Tendermint per se, it means that Commits may
contain a lot of invalid data. This is already true of blocks, since
they can include invalid txs filled with garbage, but in that case the
application knows they are invalid and can punish the proposer. But
since applications don't verify commit signatures directly (they trust
tendermint to do that), they won't be able to detect it.
This can impact incentivization logic in the application that depends on
the LastCommitInfo sent in BeginBlock, which includes which validators
signed. For instance, Gaia incentivizes proposers with a bonus for
including more than +2/3 of the signatures. But a proposer can now claim
that bonus just by including arbitrary data for the final -1/3 of
validators without actually waiting for their signatures. There may be
other tricks that can be played because of this.
In general, the full node should be a fully verifying machine. While
it's true that the light client can avoid verifying all signatures by
stopping after +2/3, the full node can not. Thus the light client and
full node should use distinct VerifyCommit functions if one is going to
stop after +2/3 or otherwise perform less validation (for instance light
clients can also skip verifying votes for nil while full nodes can not).
See a commit with a bad signature that verifies here: 56367fd. From what
I can tell, Tendermint will go on to think this commit is valid and
forward this data to the app, so the app will think the second validator
actually signed when it clearly did not.
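To make the distinction concrete, here is a self-contained sketch (not the actual types.ValidatorSet code) contrasting the full-node rule, which checks every signature, with the light-client shortcut, which may stop once +2/3 of the voting power has verified:

```go
package types

import "errors"

// Sig is a stand-in for a commit signature in this sketch.
type Sig struct {
	Valid       bool  // result of cryptographic verification (stubbed here)
	VotingPower int64
}

// verifyAll sketches the full-node rule: every signature in the commit must
// verify, and +2/3 of the total power must have signed.
func verifyAll(sigs []Sig, totalPower int64) error {
	var talliedPower int64
	for _, s := range sigs {
		if !s.Valid {
			return errors.New("invalid commit signature")
		}
		talliedPower += s.VotingPower
	}
	if talliedPower*3 <= totalPower*2 {
		return errors.New("insufficient voting power")
	}
	return nil
}

// verifyLight sketches the light-client shortcut: stop as soon as +2/3 of the
// power has verified, ignoring the remaining signatures.
func verifyLight(sigs []Sig, totalPower int64) error {
	var talliedPower int64
	for _, s := range sigs {
		if !s.Valid {
			return errors.New("invalid commit signature")
		}
		talliedPower += s.VotingPower
		if talliedPower*3 > totalPower*2 {
			return nil
		}
	}
	return errors.New("insufficient voting power")
}
```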
In order to have more control over the mempool implementation,
introduce a new exported function RemoveTxByKey.
Also export TxKey() and TxKeySize. Use the TxKeySize constant instead of
sha256.Size, so future changes to the hash function won't break the API.
This allows using a TxKey (a 32-byte reference) as a parameter instead of
the complete byte array, so the application layer does not need to
keep track of the whole transaction but only of its sha256 hash (32 bytes).
This function is useful when mempool.Recheck is disabled.
It allows the application layer to implement its own cleaning mechanism
without having to re-implement the whole mempool interface.
Mempool.Update() would probably also need to change from txBytes to txKey,
but that would require changing the interface and thus break backwards
compatibility. For now, RemoveTxByKey() looks like a good compromise:
it won't break anything and will help to solve some mempool issues from the
application layer.
Signed-off-by: p4u <pau@dabax.net>
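A runnable sketch of the idea from the application side; TxKeySize and TxKey mirror the descriptions above, while the RemoveTxByKey call mentioned in the comment is an assumption about the mempool API rather than its exact signature:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// TxKeySize is the size of the key used to reference a transaction in the
// mempool: the length of a sha256 digest.
const TxKeySize = sha256.Size

// TxKey computes the 32-byte key for a raw transaction; the application can
// keep just this key instead of the full transaction bytes.
func TxKey(tx []byte) [TxKeySize]byte {
	return sha256.Sum256(tx)
}

func main() {
	tx := []byte("raw transaction bytes")
	key := TxKey(tx)
	// With Recheck disabled, the application could later ask the mempool to
	// drop the tx by key, e.g. mp.RemoveTxByKey(key) (call shape is assumed).
	fmt.Printf("tx key: %x\n", key)
}
```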
## Description
Partial cleanup in preparation for errcheck.
I ignored a bunch of deferred errors in tests, but with the update to Go 1.14 we can use `t.Cleanup(func() { if err := <>; err != nil {..}})` to cover those errors; I will do this in PR number two of enabling errcheck.
ref #5059
## Description
To provide the ability to add more message types without needing to cause a breaking change, the mempool message was migrated to a oneof.
Closes: #XXX
- fix a bug so that PotentialAmnesiaEvidence is gossiped
- handle inbound amnesia evidence correctly
- add a method to check if potential amnesia evidence is on trial
- fix a bug with the height when we upgrade to amnesia evidence
- change evidence to use just pointers
- add more logging in the evidence module
Co-authored-by: Marko <marbar3778@yahoo.com>
* types: reject blocks w/ ConflictingHeadersEvidence
Closes #5037
* types: reject blocks w/ PotentialAmnesiaEvidence
as well
PotentialAmnesiaEvidence does not contribute anything on its own,
and therefore should not be committed on chain.
* fix lint issue
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.6.0 to 1.7.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/prometheus/client_golang/releases">github.com/prometheus/client_golang's releases</a>.</em></p>
<blockquote>
<h2>1.7.0 / 2020-06-17</h2>
<ul>
<li>[CHANGE] API client: Add start/end parameters to <code>LabelNames</code> and <code>LabelValues</code>. <a href="https://github-redirect.dependabot.com/prometheus/client_golang/issues/767">#767</a></li>
<li>[FEATURE] testutil: Add <code>GatherAndCount</code> and enable filtering in <code>CollectAndCount</code> <a href="https://github-redirect.dependabot.com/prometheus/client_golang/issues/753">#753</a></li>
<li>[FEATURE] API client: Add support for <code>status</code> and <code>runtimeinfo</code> endpoints. <a href="https://github-redirect.dependabot.com/prometheus/client_golang/issues/755">#755</a></li>
<li>[ENHANCEMENT] Wrapping <code>nil</code> with a <code>WrapRegistererWith...</code> function creates a no-op <code>Registerer</code>. <a href="https://github-redirect.dependabot.com/prometheus/client_golang/issues/764">#764</a></li>
<li>[ENHANCEMENT] promlint: Allow Kelvin as a base unit for cases like color temperature. <a href="https://github-redirect.dependabot.com/prometheus/client_golang/issues/761">#761</a></li>
<li>[BUGFIX] push: Properly handle empty job and label values. <a href="https://github-redirect.dependabot.com/prometheus/client_golang/issues/752">#752</a></li>
</ul>
</blockquote>
</details>
## Description
This PR removes the options for picking different pubkey types. We only ever supported ed25519 keys, so exposing other options was redundant.
Not sure if this needs a changelog entry?
Closes: #XXX
## Description
This PR moves all proto files under one dir (`/proto`). The generation script gains the ability to copy the files that still need to live in their original locations (abci & rpc).
Renames every proto package from `tendermint.proto.<pkg_name>` to `tendermint.<pkg_name>`.
Removes unneeded types in the privval proto directory.
Closes: #XXX
* test-vectors for backwards compatibility:
- copy & paste test-vectors from v0.33.5 to ensure
backwards compatibility for vote's SignBytes
* WIP: everything besides time seems to match :-/
* almost
* Found the culprit: field nums weren't consecutive ints ...
* fix order of partset header too
* this last votes-related test can easily be fixed
* some minor changes and fix last failing test
* move proto types back to stdtime, fix various linting
* use libs/protoio
* remove commented code
* add comments
* fix tests
* uncomment test cases
* don't ignore error panic
* fix signable test
* fix happy path testing
* fix comment
Co-authored-by: Marko Baricevic <marbar3778@yahoo.com>
## Description
Update gogoproto tools cmd to download the correct version. I still need to update the docker container and test that they generate the same output.
Closes: #XXX
## Description
Make sections in the toml stand out.
Making this PR because I found it hard to find the different sections in the generated `config.toml`; this makes it somewhat simpler.
Closes: #XXX
## Description
This PR removes the `Simple` prefix from all types in the crypto/merkle directory.
The two proto types `Proof` & `ProofOp` have been moved to the `proto/crypto/merkle` directory.
The proto message `Proof` was renamed to `ProofOps`, and the `SimpleProof` message to `Proof`.
Closes: #2755
Creates Amnesia Evidence, which is formed from Potential Amnesia Evidence either with a matching proof or after a period of time denoted as the Amnesia Trial Period. This also adds the code necessary so that Amnesia Evidence can be validated and committed in a block.
LastCommitRound should always be >= 0 for heights > 1.
In State.updateToState, the last precommit is computed only when the round
is greater than -1 and has votes, but "LastCommit" is always updated
regardless of that condition. If there is no last precommit, "LastCommit"
is set to (*types.VoteSet)(nil), which is why LastCommitRound can end up
as -1 for heights > 1.
To fix it, only update State.RoundState.LastCommit when there is a last
precommit.
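A minimal, self-contained sketch of the guarded update (the types and names below are illustrative stand-ins, not the actual consensus code):
```go
package main

import "fmt"

// VoteSet stands in for *types.VoteSet; a nil pointer is what used to be
// stored as "LastCommit" when no last precommits existed.
type VoteSet struct{ Round int32 }

// updateLastCommit keeps the previous value unless a non-nil precommit set is
// available, so heights > 1 never end up with a LastCommitRound of -1.
func updateLastCommit(current, lastPrecommits *VoteSet) *VoteSet {
	if lastPrecommits == nil {
		return current // no last precommit: leave LastCommit untouched
	}
	return lastPrecommits
}

func main() {
	prev := &VoteSet{Round: 0}
	fmt.Println(updateLastCommit(prev, nil).Round)                // 0, not -1
	fmt.Println(updateLastCommit(prev, &VoteSet{Round: 2}).Round) // 2
}
```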
Fixes#2737
Co-authored-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Migrates the p2p connections to Protobuf. Supersedes #4800.
gogoproto's `NewDelimitedReader()` uses an internal buffer, which makes it unsuitable for reading individual messages from a shared reader (since any remaining data in the buffer will be discarded). We therefore add a new `protoio` package with an unbuffered `NewDelimitedReader()`. Additionally, the `NewDelimitedWriter()` returns the number of bytes written, and we've added `MarshalDelimited()` and `UnmarshalDelimited()`, to ease migration of existing code.
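A rough, self-contained illustration of the idea (varint length-prefix framing that never reads ahead of the current message); it does not reproduce the actual `protoio` API or signatures:
```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// writeDelimited prefixes the payload with its uvarint length and reports the
// total number of bytes written, mirroring the behaviour described above.
func writeDelimited(w io.Writer, payload []byte) (int, error) {
	var hdr [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(hdr[:], uint64(len(payload)))
	if _, err := w.Write(hdr[:n]); err != nil {
		return 0, err
	}
	m, err := w.Write(payload)
	return n + m, err
}

// readDelimited reads exactly one length-prefixed message. The length prefix
// is read byte by byte, so no bytes belonging to the next message are
// consumed; that is what makes a shared reader safe.
func readDelimited(r io.Reader) ([]byte, error) {
	length, err := binary.ReadUvarint(byteReader{r})
	if err != nil {
		return nil, err
	}
	buf := make([]byte, length)
	_, err = io.ReadFull(r, buf)
	return buf, err
}

type byteReader struct{ io.Reader }

func (b byteReader) ReadByte() (byte, error) {
	var p [1]byte
	_, err := io.ReadFull(b.Reader, p[:])
	return p[0], err
}

func main() {
	var stream bytes.Buffer
	writeDelimited(&stream, []byte("first"))
	writeDelimited(&stream, []byte("second"))
	for i := 0; i < 2; i++ {
		msg, _ := readDelimited(&stream)
		fmt.Println(string(msg)) // first, then second
	}
}
```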
Migrates the `rpc` package to use new JSON encoder in #4955. Branched off of that PR.
Tests pass, but I haven't done any manual testing beyond that. This should be handled as part of broader 0.34 testing.
Amino-compatible JSON encoder/decoder, including bug compatibility. Interface types must be registered via `json.RegisterType()`. Unlike Amino, this allows floats to be encoded/decoded.
Partial fix for #4828, needs code migration.
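A toy sketch of the interface-registration pattern described above, using a type-name envelope; the real encoder's API and wire format are not reproduced here, and the names below are illustrative:
```go
package main

import (
	"encoding/json"
	"fmt"
)

// A toy registry: interface values are wrapped in a {"type": ..., "value": ...}
// envelope keyed by a registered name, so they can be decoded back into the
// correct concrete type.
var registry = map[string]func() interface{}{}

func registerType(name string, factory func() interface{}) { registry[name] = factory }

type envelope struct {
	Type  string          `json:"type"`
	Value json.RawMessage `json:"value"`
}

func marshalAny(name string, v interface{}) ([]byte, error) {
	raw, err := json.Marshal(v)
	if err != nil {
		return nil, err
	}
	return json.Marshal(envelope{Type: name, Value: raw})
}

func unmarshalAny(data []byte) (interface{}, error) {
	var env envelope
	if err := json.Unmarshal(data, &env); err != nil {
		return nil, err
	}
	factory, ok := registry[env.Type]
	if !ok {
		return nil, fmt.Errorf("unregistered type %q", env.Type)
	}
	v := factory()
	return v, json.Unmarshal(env.Value, v)
}

type PubKeyEd25519 struct {
	Key string `json:"key"`
}

func main() {
	registerType("tendermint/PubKeyEd25519", func() interface{} { return &PubKeyEd25519{} })
	bz, _ := marshalAny("tendermint/PubKeyEd25519", PubKeyEd25519{Key: "abcd"})
	fmt.Println(string(bz))
	v, _ := unmarshalAny(bz)
	fmt.Printf("%+v\n", v)
}
```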
Bumps [websocket-extensions](https://github.com/faye/websocket-extensions-node) from 0.1.3 to 0.1.4. **This update includes a security fix.**
Vulnerabilities fixed (from [The GitHub Security Advisory Database](https://github.com/advisories/GHSA-g78m-2chm-r7qv)):

**Regular Expression Denial of Service in websocket-extensions (NPM package)**

Impact: the ReDoS flaw allows an attacker to exhaust the server's capacity to process incoming requests by sending a WebSocket handshake request containing a header of the following form:

```
Sec-WebSocket-Extensions: a; b="\c\c\c\c\c\c\c\c\c\c ...
```

That is, a header containing an unclosed string parameter value whose content is a repeating two-byte sequence of a backslash and some other character. The parser takes exponential time to reject this header as invalid, and this will block the processing of any other work on the same thread. Thus if you are running a single-threaded server, such a request can render your service completely unavailable.

Patches: users should upgrade to version 0.1.4.

Workarounds: ... (truncated)

Affected versions: < 0.1.4

From [websocket-extensions's changelog](https://github.com/faye/websocket-extensions-node/blob/master/CHANGELOG.md), 0.1.4 / 2020-06-02:
- Remove a ReDoS vulnerability in the header parser (CVE-2020-7662, reported by Robert McLaughlin)
- Change license from MIT to Apache 2.0
## Description
These tests were made to test the compatibility of amino and protobuf. Since we are moving to protobuf they are not needed anymore.
The proto3 directory was created to be used only in these tests
Closes: #XXX
* lite2: check header w/ witnesses only when doing bisection
Closes#4872
We don't need to check witnesses if we're doing backwards hash chain
verification. I also think we don't need to do it when sequential
verification is being used.
* lite2: require 1 witness only when verificationMode=skipping
https://github.com/tendermint/tendermint/pull/4929#pullrequestreview-423256477
we don't need witnesses when performing sequential verification (except
when primary fails)
* proto: move mempool to proto
- changes according to moving the mempool reactor to proto
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
Closes: #2883
Closes#4837
- `/block_results`
before:
failed to update light client to 7: failed to obtain the header #7: signed header not found
after:
We can't return the latest block results because we won't be able to
prove them. Return the results for the previous block instead.
- /block_results?height=X`
no changes
Ethermint currently has to maintain a map of height → block hash in its store, as it needs to expose the eth_getBlockByHash JSON-RPC query for Web3 compatibility. This query is currently not supported by the Tendermint RPC client.
Bumps [vuepress-theme-cosmos](https://github.com/cosmos/vuepress-theme-cosmos) from 1.0.165 to 1.0.166.
## Description
I was able to reproduce this non-determinism locally.
After increasing the timeout from 5 seconds to 8, I was not able to reproduce it.
Closes: #2856
## Description
In consensus/state.go, when calculating metrics, retrieve the address (ergo, the pubkey) once prior to iterating over the validator set, to ensure we do not make excessive calls to the signer.
Partially closes: #4865
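A small illustrative sketch of the hoisting (the `Signer` type here is a hypothetical stand-in for the privValidator, whose address lookup may be expensive, e.g. when backed by a remote signer):
```go
package main

import "fmt"

type Validator struct{ Address string }

// Signer stands in for the privValidator; Address is the potentially costly
// call that used to be made once per validator inside the loop.
type Signer struct{ addr string }

func (s Signer) Address() string { return s.addr }

func labelValidators(signer Signer, validators []Validator) {
	addr := signer.Address() // resolved once, before iterating
	for _, v := range validators {
		if v.Address == addr {
			fmt.Println(v.Address, "is us")
		} else {
			fmt.Println(v.Address, "is a peer")
		}
	}
}

func main() {
	labelValidators(Signer{addr: "B"}, []Validator{{"A"}, {"B"}, {"C"}})
}
```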
Updates our development and release process to match the process outlined in #4860.
Also elaborates on a few git/Github conventions that we've discussed in the past.
## Description
Add timeouts to GitHub Actions jobs. The goal of adding timeouts is that if a job hangs on something, it gets killed and the author gets notified.
I picked these times based on previous CircleCI and GitHub Actions run times, then doubled (and in some places tripled) them.
Closes: #XXX
Mitigates race condition causing these test failures:
```
=== RUN TestTxEventsSentWithBroadcastTxAsync
=== RUN TestTxEventsSentWithBroadcastTxAsync/*http.HTTP
I[2020-05-25|12:29:08.477] Starting WSEvents service impl=WSEvents
TestTxEventsSentWithBroadcastTxAsync/*http.HTTP: event_test.go:124:
Error Trace: event_test.go:124
Error: Expected nil, but got: &errors.errorString{s:"timed out waiting for event"}
Test: TestTxEventsSentWithBroadcastTxAsync/*http.HTTP
Messages: 0: timed out waiting for event
```
Fixes the following test race condition:
```
=== RUN TestAppCalls
TestAppCalls: rpc_test.go:216:
Error Trace: rpc_test.go:216
Error: Expected value not to be nil.
Test: TestAppCalls
--- FAIL: TestAppCalls (2.20s)
```
Attempt to fix this the below test failure by waiting for the listener to get ready. I am not at all convinced that this is the correct fix - the below indicates that the TCP socket was closed after it was set up - but I'm unable to come up with an actionable hypothesis for what caused it.
```
2020/05/14 17:25:11 Failed to accept conn: accept tcp 127.0.0.1:44737: use of closed network connection
2020/05/14 17:25:11 Failed to accept conn: accept tcp 127.0.0.1:42589: use of closed network connection
2020/05/14 17:25:11 Failed to accept conn: accept tcp 127.0.0.1:40905: use of closed network connection
2020/05/14 17:25:12 Failed to accept conn: accept tcp 127.0.0.1:39847: use of closed network connection
2020/05/14 17:25:12 Failed to accept conn: accept tcp 127.0.0.1:39989: use of closed network connection
2020/05/14 17:25:12 Failed to accept conn: accept tcp 127.0.0.1:43587: use of closed network connection
2020/05/14 17:25:12 Failed to accept conn: accept tcp 127.0.0.1:35415: use of closed network connection
2020/05/14 17:25:12 Failed to accept conn: accept tcp 127.0.0.1:38657: use of closed network connection
2020/05/14 17:25:12 Failed to accept conn: accept tcp 127.0.0.1:38217: use of closed network connection
2020/05/14 17:25:13 Failed to accept conn: accept tcp 127.0.0.1:42247: use of closed network connection
2020/05/14 17:25:16 Failed to accept conn: accept tcp 127.0.0.1:39705: use of closed network connection
2020/05/14 17:25:16 Failed to accept conn: accept tcp 127.0.0.1:39491: use of closed network connection
2020/05/14 17:25:16 Failed to accept conn: accept tcp 127.0.0.1:37107: use of closed network connection
2020/05/14 17:25:16 Failed to accept conn: accept tcp 127.0.0.1:39909: use of closed network connection
2020/05/14 17:25:16 Failed to accept conn: accept tcp 127.0.0.1:37987: use of closed network connection
2020/05/14 17:25:16 Failed to accept conn: accept tcp 127.0.0.1:41505: use of closed network connection
2020/05/14 17:25:16 Failed to accept conn: accept tcp 127.0.0.1:39121: use of closed network connection
2020/05/14 17:25:16 Failed to accept conn: accept tcp 127.0.0.1:46569: use of closed network connection
2020/05/14 17:25:16 Failed to accept conn: accept tcp 127.0.0.1:45643: use of closed network connection
2020/05/14 17:25:16 Failed to accept conn: accept tcp 127.0.0.1:35289: use of closed network connection
--- FAIL: TestTransportMultiplexAcceptMultiple (0.43s)
transport_test.go:200: auth failure: handshake failed: EOF
FAIL
```
## Description
In https://github.com/tendermint/tendermint/pull/4466 a new type (EventAttribute) was created to replace KV.Pair. This made the Pair type dead code, so this PR removes it.
I don't think a changelog entry is needed since it is not being used anymore. But will add a section to the upgrading.md to let users know that it was replaced
Closes: #XXX
* p2p: log error in transport tests
* p2p: exit from fast peer only when handshake is done
TestTransportMultiplexAcceptNonBlocking
fixes panic: write to a closed channel
* p2p: increase timeout in TestTransportMultiplexConnFilterTimeout
Fixes https://github.com/tendermint/tendermint/issues/4854#issuecomment-630739200
* p2p: yield control to another goroutine manually
* increase timeout in TestTransportMultiplexAcceptNonBlocking
## Description
By adding paths, the CI jobs won't run if the relevant files aren't touched. This is a change from before, where they would always run just to check whether there were changes.
Closes: #XXX
## Description
bech32 is only used in the SDK. I moved Bech32 to the SDK, so we can remove it here. I don't think this needs to go into a minor release.
Closes: #XXX
Closes#4603
Commands used (VIM):
```
:args `rg -l errors.Wrap`
:argdo normal @q | update
```
where `q` is a macro rewriting `errors.Wrap` to `fmt.Errorf`.
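For reference, a runnable example of the shape the rewrite produces (the function and error names below are made up for illustration):
```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// loadState shows the post-migration form: fmt.Errorf with %w preserves the
// wrapped error so errors.Is/errors.As keep working, which is what
// errors.Wrap(err, "failed to load state") used to provide.
func loadState() error {
	return fmt.Errorf("failed to load state: %w", errNotFound)
}

func main() {
	err := loadState()
	fmt.Println(err)                         // failed to load state: not found
	fmt.Println(errors.Is(err, errNotFound)) // true
}
```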
## Description
regenerate proto stubs
I do find max_num to be vague; possibly renaming it to max_entries or max_evidence_entries would be more descriptive?
Closes: #XXX
Creates a proof of lock change, which is an array of votes proving the validator was permitted to change its locked block and vote again. This proof is stored in the evidence pool and is used as part of amnesia evidence.
Debian testing caused Docker image build failures:
```
The following packages have unmet dependencies:
libc6-dev : Breaks: libgcc-8-dev (< 8.4.0-2~) but 8.3.0-6 is to be installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
```
It does not appear that we actually need testing, so removing it.
Fix some linter issues to conform with the Protobuf style guide.
The state sync enum changes are ok to break since it's not released yet. Personally I find the uppercase kind of ugly, but that's what the guide says. Couldn't find a way to generate camel case in Go, short of specifying custom names for each and every enum variant.
Another option would be to simply disable the enum case lint.
Integrates the blockchain v2 reactor with state sync, fixes#4765. This mostly involves deferring fast syncing until after state sync completes. I tried a few different approaches, this was the least effort:
* `Reactor.events` is `nil` if no fast sync is in progress, in which case events are not dispatched - most importantly `AddPeer`.
* Accept status messages from unknown peers in the scheduler and register them as ready. On fast sync startup, broadcast status requests to all existing peers.
* When switching from state sync, first send a `bcResetState` message to the processor and scheduler to update their states - most importantly the initial block height.
* When fast sync completes, shut down event loop, scheduler and processor, and set `events` channel to `nil`.
Fixes#4802. The Go HTTP server has a global panic handler for requests, so it was not as severe as first thought.
This fix can still panic, since we try to send a `500` response - if that happens, the Go HTTP server will terminate the connection. Otherwise, the client will get a 200 response, which we should avoid. I'm sort of torn on whether it's even necessary to include this fix, instead of just letting the HTTP server deal with it.
Closes#4783
It looks like we're validating Commit twice. Also, height and blockID params were coming from the commit, so no need to pass them separately.
Predominantly following the discussions regarding the conventions of using mocks, I have decided to revert back to the previous state, where mocks were specialized and stored in the separate packages that used them, rather than having a generalized mock in the evidence package.
This was also a problem because the state package's tests were running too slowly and occasionally timing out unnecessarily.
For the replay file I renamed mockEvidencePool to emptyEvidencePool to illustrate that it was intentionally like this and not give the impression that testing software was being used in production
Closes: #4786
This is useful for custom state sync `StateProvider` implementations that need to build new states for bootstrapping the node. It will also be useful when implementing other mechanisms to bootstrap nodes, e.g. #4642 and #3713.
The event loop uses a `select` on multiple channels. However, reading from a closed channel in Go always yields the channel's zero value. The processor and scheduler close their channels when done, and since these channels are always ready to receive, the event loop keeps spinning on them.
This changes `routine.terminate()` to not close the channel, and also removes `stopDemux` and instead uses `events` channel closure to signal event loop termination.
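A minimal demonstration of the underlying Go behaviour that causes the spinning:
```go
package main

import "fmt"

func main() {
	events := make(chan int)
	close(events)

	// A receive on a closed channel never blocks: it immediately yields the
	// zero value with ok == false. A select looping over such a channel
	// therefore spins instead of waiting, which is why termination is better
	// signalled by closing a dedicated channel (or nil-ing the variable) than
	// by closing shared output channels.
	for i := 0; i < 3; i++ {
		v, ok := <-events
		fmt.Println("received", v, "ok =", ok) // received 0 ok = false, three times
	}
}
```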
Fixes#4687.
Fixes#828. Adds state sync, as outlined in [ADR-053](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-053-state-sync-prototype.md). See related PRs in Cosmos SDK (https://github.com/cosmos/cosmos-sdk/pull/5803) and Gaia (https://github.com/cosmos/gaia/pull/327).
This is split out of the previous PR #4645, and branched off of the ABCI interface in #4704.
* Adds a new P2P reactor which exchanges snapshots with peers, and bootstraps an empty local node from remote snapshots when requested.
* Adds a new configuration section `[statesync]` that enables state sync and configures the light client. Also enables `statesync:info` logging by default.
* Integrates state sync into node startup. Does not support the v2 blockchain reactor, since it needs some reorganization to defer startup.
- only run tests when .go, .mod, .sum files have been touched
- only run proto checks when .proto files have been touched
Signed-off-by: Marko Baricevic marbar3778@yahoo.com
to prevent data races
note: when we check the ConflictingHeadersEvidence, the copy is used.
Closes#4749
Also, iterate over valToLastHeight instead of loading validators. While it might be slower than accessing a single key in goleveldb, the new code is better adapted to ConsensusParams changes.
to prevent malicious nodes from sending us large messages (~21MB, which
is the default `RecvMessageCapacity`)
This allows us to remove unnecessary `maxMsgSize` check in `decodeMsg`. Since each channel has a msg capacity set to `maxMsgSize`, there's no need to check it again in `decodeMsg`.
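A hedged sketch of the per-channel capacity check being relied on (the names and channel ID below are illustrative, not the real p2p API):
```go
package main

import (
	"errors"
	"fmt"
)

// ChannelDescriptor mirrors the idea described above: each channel carries its
// own receive capacity, enforced at the connection layer, so message decoding
// does not need a second size check.
type ChannelDescriptor struct {
	ID                  byte
	RecvMessageCapacity int
}

func receivePacket(ch ChannelDescriptor, payload []byte) error {
	if len(payload) > ch.RecvMessageCapacity {
		return errors.New("message exceeds channel receive capacity")
	}
	// ... decode payload; no further size check is needed here.
	return nil
}

func main() {
	evidenceCh := ChannelDescriptor{ID: 0x01, RecvMessageCapacity: 1024}
	fmt.Println(receivePacket(evidenceCh, make([]byte, 512)))  // <nil>
	fmt.Println(receivePacket(evidenceCh, make([]byte, 4096))) // error
}
```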
Closes#1503
Fixes#2593
@alexanderbez This is used in a single place in the SDK, how upset are you about removing it?
The checklist and message in the default PR description were sometimes getting
included in commit messages. This change moves that text from the PR description
to a comment from a [Probot tool](https://probot.github.io/apps/auto-comment/).
Merged the existing store into the pool, consolidated the three buckets into two, used block height as a marker for committed evidence, evidence list recovers on start up, improved error handling.
Reduce the number of targets and make the buildsystem more
flexible by parsing the TENDERMINT_BUILD_OPTIONS command
line variable (a-la Debian, inspired by dpkg-buildpackage's
DEB_BUILD_OPTIONS), e.g:
$ make install TENDERMINT_BUILD_OPTIONS='cleveldb'
replaces the old:
$ make install_c
Options can be mix&match'd, e.g.:
$ make install TENDERMINT_BUILD_OPTIONS='cleveldb race nostrip'
Three options are available:
- nostrip: don't strip debugging symbols or DWARF tables.
- cleveldb: use cleveldb as the db backend instead of goleveldb;
it switches on the CGO_ENABLED Go environment variable.
- race: pass -race to go build and enable data race detection.
This changeset is a port of gaia pull request: cosmos/gaia#363.
Fixes#4739, kind of. See #4740 for the proper fix.
Merged the existing store into the pool, consolidated the three buckets into two, used block height as a marker for committed evidence, evidence list recovers on start up, improved error handling.
- Move core stateless validation of the Header type to a ValidateBasic method.
- Call header.ValidateBasic during a SignedHeader validation.
- Call header.ValidateBasic during a PhantomValidatorEvidence validation.
- Call header.ValidateBasic during a LunaticValidatorEvidence validation.
Lite tests are skipped since the package is deprecated; no need to waste time on it.
closes: #4572
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
- regenerate proto using the docker image
- on proto breakage, a different compiler is used to generate some additional helper functions; on master the same one is not used, so reverting back to the one used on master to avoid further confusion until proto-breakage is merged
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* Added "Edit this page" box
* Fixed link styles
* Fixed build-time errors
* Updated VuePress and dependencies
## Description
- this PR adds a check box to make sure labels are added to PRs
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
Closes: #4530
This PR contains logic for both submitting evidence from the light client (lite2 package) and receiving it on the Tendermint side (/broadcast_evidence RPC and/or EvidenceReactor#Receive). Upon receiving the ConflictingHeadersEvidence (introduced by this PR), Tendermint validates it, then breaks it down into smaller pieces (DuplicateVoteEvidence, LunaticValidatorEvidence, PhantomValidatorEvidence, PotentialAmnesiaEvidence). Afterwards, each piece of evidence is verified against the state of the full node and added to the pool, from which it is reaped upon block creation.
* rpc/client: do not pass height param if height ptr is nil
* rpc/core: validate incoming evidence!
* only accept ConflictingHeadersEvidence if one
of the headers is committed from this full node's perspective
This simplifies the code. Plus, if there are multiple forks, we're likely
to receive multiple ConflictingHeadersEvidence anyway.
* swap CommitSig with Vote in LunaticValidatorEvidence
Vote is needed to validate signature
* no need to embed client
http is a provider and should not be used as a client
Updates our Security Policy with takeaways from Security Advisory
Lavender and the associated security releases.
Thank you to @melekes and @alessio for reviewing earlier versions
of this document.
Closes: #4695
Verify /block_results and /validators responses from an HTTP client using the light client.
Added count and total to /validators response.
Refs #3113
in TestPEXReactorDialsPeerUpToMaxAttemptsInSeedMode
Closes#4668
## Description
move tests for abci_cli, abci_app and app_tests to github actions
Followup from #4588. Allow the first `SaveBlock()` call in an empty block store to be at any height, to start from a truncated block history. Subsequent `SaveBlock()` calls must be for contiguous blocks.
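A self-contained sketch of that invariant (field and method names are illustrative, not the real BlockStore):
```go
package main

import "fmt"

// blockStore is a stand-in with only the fields needed to show the invariant:
// an empty store (height == 0) accepts any starting height; afterwards each
// saved block must extend the store by exactly one.
type blockStore struct {
	base, height int64
}

func (bs *blockStore) saveBlock(h int64) error {
	if bs.height == 0 {
		bs.base, bs.height = h, h // first block may be at any height
		return nil
	}
	if h != bs.height+1 {
		return fmt.Errorf("non-contiguous block: store is at %d, got %d", bs.height, h)
	}
	bs.height = h
	return nil
}

func main() {
	bs := &blockStore{}
	fmt.Println(bs.saveBlock(1000)) // <nil>: truncated history starting at 1000
	fmt.Println(bs.saveBlock(1001)) // <nil>
	fmt.Println(bs.saveBlock(1005)) // error: non-contiguous
}
```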
The service logging can be a bit unclear. For example, with state sync it would log:
```
I[2020-04-20|08:40:47.366] Starting StateSync module=statesync impl=Reactor
I[2020-04-20|08:40:47.834] Starting state sync module=statesync
```
Where the first message is the reactor service startup, and the second message is the start of the actual state sync process. This clarifies the first message by changing it to `Starting StateSync service`.
See #4588 for original change.
I believe this is appropriate. Anything else that needs to be updated?
## Description
ADR to address the process for proving an amnesia attack (as a form of global evidence) from `PotentialAmnesiaEvidence` detected by light clients
## Description
The minor release process is changing in order to not have major release changes sitting in the pull request tab.
This changes from taking master and releasing from master to creating a branch that you cherry-pick commits into.
There are two options on labeling which pull requests to include in a minor release:
1. Use the label `R:minor` to know which pull requests to include then remove the label when those pull requests have been included in a release.
2. Create an issue where pull request numbers are added; the issue is then closed when the release is done.
This process should be followed after 0.33.3.
## Description
Fixes a bug where the reactor would broadcast a base with height=0.
Fixes an issue reported in https://github.com/tendermint/tendermint/issues/4595#issuecomment-612667441.
Not sure if this is sufficient to fully remove the reactor, but it fixes the immediate problem.
How to use it:
```
$ . <(tendermint completion)
```
Note that the completion command does not show up in the help screen,
though it comes with its own --help option.
This is a port of the feature provided by cosmos-sdk.
Add a Has function, improve error handling when adding evidence, and use error types.
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* mark unsolicited and too-frequent messages as bad
* add tests
* update changelog and fix error
* revised error types
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* Added BlockStore.DeleteBlock()
* Added initial block pruner prototype
* wip
* Added BlockStore.PruneBlocks()
* Added consensus setting for block pruning
* Added BlockStore base
* Error on replay if base does not have blocks
* Handle missing blocks when sending VoteSetMaj23Message
* Error message tweak
* Properly update blockstore state
* Error message fix again
* blockchain: ignore peer missing blocks
* Added FIXME
* Added test for block replay with truncated history
* Handle peer base in blockchain reactor
* Improved replay error handling
* Added tests for Store.PruneBlocks()
* Fix non-RPC handling of truncated block history
* Panic on missing block meta in needProofBlock()
* Updated changelog
* Handle truncated block history in RPC layer
* Added info about earliest block in /status RPC
* Reorder height and base in blockchain reactor messages
* Updated changelog
* Fix tests
* Appease linter
* Minor review fixes
* Non-empty BlockStores should always have base > 0
* Update code to assume base > 0 invariant
* Added blockstore tests for pruning to 0
* Make sure we don't prune below the current base
* Added BlockStore.Size()
* config: added retain_blocks recommendations
* Update v1 blockchain reactor to handle blockstore base
* Added state database pruning
* Propagate errors on missing validator sets
* Comment tweaks
* Improved error message
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* use ABCI field ResponseCommit.retain_height instead of retain-blocks config option
* remove State.RetainHeight, return value instead
* fix minor issues
* rename pruneHeights() to pruneBlocks()
* noop to fix GitHub borkage
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
Closes: #4537
Uses SignedHeaderBefore to find the header before the unverified header and then bisection to verify it. This is used only when the header is between the first and last trusted header heights; if it is before the first trusted header height, regular backwards verification is used instead.
* proto: use docker to generate stubs
- provide an option to developers to use docker to generate proto stubs
closes#4579
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
Previously, many reactors were initialized with the name "Reactor," which made it difficult to log which reactor was doing what. This changes those reactors' names to something more descriptive.
- defined what variables need to be changed in the `config.toml` in order to run a validator
- briefly explained what a sentry node architecture should look like
- added a section explaining the importance of key security
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* tools: remove need to install buf
- using buf docker image instead of needing devs to install it
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* fix ci for lint and break checking
Closes: #4546
The algorithm uses an array to store the headers and validators and populates it at every bisection (i.e., every unsuccessful verification). When a successful verification finally occurs, it updates the new trusted header, trims that header from the cache (the array) and sets the depth pointer back to 0. Instead of retrieving new headers it uses the cached headers, incrementing the depth until it reaches the end of the cache, at which point it starts retrieving new headers from the provider again.
Mathematically, this method doesn't properly bisect after the first round, but it will always choose a pivot header that is within 1/8th of the upper header's height. I.e., if we are trying to jump 128 headers, the maximum offset from the bisection height (64) is 64 + 16 (128/8) = 80; a better heuristic would therefore be to take the new pivot header height as the middle of these two numbers, which means multiplying by 9/16 instead of 1/2 (sorry, this might be a bit complicated in writing, but I can explain further if someone is interested). Therefore I would also, upon consensus, propose that we change the pivot height to 9/16ths of the previous height.
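A small worked example of that arithmetic, assuming a 128-header jump:
```go
package main

import "fmt"

// Worked example of the proposed heuristic for a 128-header jump between a
// trusted height and the target height (numbers follow the reasoning above).
func main() {
	trusted, target := int64(0), int64(128)
	gap := target - trusted

	plainPivot := trusted + gap/2  // 64: plain bisection midpoint
	maxDrift := plainPivot + gap/8 // 80: furthest the cached pivot can land
	proposed := trusted + gap*9/16 // 72: midpoint of 64 and 80

	fmt.Println(plainPivot, maxDrift, proposed) // 64 80 72
}
```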
* blockchain: enable v2 to be set
- enable v2 to be set via config params
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* replace tab with space
* correctly spell usability
* format: add format cmd & goimport repo
- replaced format command
- added goimports to format command
- ran goimports
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* fix outliers & undo proto file changes
* update theme
* Update version
* Updated Questions section in the footer
* Remove links to Riot chat
* Typo
* Add Discord link
* Update docs theme to the latest version
* Use docs-staging branch for staging website
* Resolve merge conflicts
* Update version
* Add google analytics
Co-authored-by: Marko <marbar3778@yahoo.com>
* deps: bump deps that bot cant
- bumping deps that dependabot does not handle.
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* run go mod tidy
* fix go.sum
* adr: crypto encoding for proto work
- this ADR is meant to help decide how to move forward with keys in Tendermint.
* minor change
* fix gomod
* add a third option
* fix spelling
* add first part of decision
* breakdown keys and where they are used
* add some wording
* minor wording fix
* question
* change proto messages
* minor update
* undo go.mod changes
* add a few things based on comments
* push, push it real good
* minor explanation on interface type
* touch up
Closes: #4420
Created a new error, ErrInvalidHeader, which can be produced during the verification process in verifier.go and results in the replacement of the primary provider with a witness by executing replacePrimaryProvider().
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
An error on its own is not enough, since it only signals whether there were any
errors. Either a (types.SignedHeader) or a (success bool) is needed to
indicate the status of the operation. Returning a header is optimal,
since most clients will want to get the newly verified header anyway.
We first introduced auto-update as a separate struct AutoClient, which
was wrapping Client and calling Update periodically.
```go
// AutoClient can auto update itself by fetching headers every N seconds.
type AutoClient struct {
	base           *Client
	updatePeriod   time.Duration
	quit           chan struct{}
	trustedHeaders chan *types.SignedHeader
	errs           chan error
}

// NewAutoClient creates a new client and starts a polling goroutine.
func NewAutoClient(base *Client, updatePeriod time.Duration) *AutoClient {
	c := &AutoClient{
		base:           base,
		updatePeriod:   updatePeriod,
		quit:           make(chan struct{}),
		trustedHeaders: make(chan *types.SignedHeader),
		errs:           make(chan error),
	}
	go c.autoUpdate()
	return c
}

// TrustedHeaders returns a channel onto which new trusted headers are posted.
func (c *AutoClient) TrustedHeaders() <-chan *types.SignedHeader {
	return c.trustedHeaders
}

// Errs returns a channel onto which errors are posted.
func (c *AutoClient) Errs() <-chan error {
	return c.errs
}

// Stop stops the client.
func (c *AutoClient) Stop() {
	close(c.quit)
}

func (c *AutoClient) autoUpdate() {
	ticker := time.NewTicker(c.updatePeriod)
	defer ticker.Stop()

	for {
		select {
		case <-ticker.C:
			lastTrustedHeight, err := c.base.LastTrustedHeight()
			if err != nil {
				c.errs <- err
				continue
			}
			if lastTrustedHeight == -1 {
				// no headers yet => wait
				continue
			}
			newTrustedHeader, err := c.base.Update(time.Now())
			if err != nil {
				c.errs <- err
				continue
			}
			if newTrustedHeader != nil {
				c.trustedHeaders <- newTrustedHeader
			}
		case <-c.quit:
			return
		}
	}
}
```
Later we merged it into the Client itself, with the assumption that most clients would want it.
But now I am not sure. Neither IBC nor cosmos/relayer are using it, and it increases complexity (Start/Stop methods).
That said, I think it makes sense to remove it until we see a need for it (until we better understand usage behavior). We can always introduce it later 😅. Maybe in the form of AutoClient.
* fix: fix proto-breakage
- this is aimed at fixing proto breakage for consumers
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* fix for importing third_party everywhere
* undo change
* test breakage change
* test ssh
* test https
* change ssh to https
* fix phony
* rpc: use BlockStoreRPC instead of BlockStore
BlockStoreRPC is a limited version of BlockStore interface, which does
not include SaveBlock method.
Closes#4159
* remove BlockStoreRPC interface in favor of single BlockStore
interface
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* lite2: fix tendermint lite sub command
- better logging
- chainID as an argument
- more examples
* one more log msg
* lite2: fire update right away after start
* turn off auto update in verification tests
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* test functions take time.Now and other minor changes
* updated remaining test files
* Update validation_test.go
* fix typo
* go fmt
* import time
Co-authored-by: Marko <marbar3778@yahoo.com>
closes#4469
Improved the speed of cleanup by using SignedHeaderAfter instead of TrustedHeader to jump from header to header.
Prune() is now called when a new header and validator set are saved, and is handled by the database itself.
## Commits:
* prune headers and vals
* modified cleanup and tests
* fixes after my own review
* implement Prune func
* make db ops concurrently safe
* use Iterator in SignedHeaderAfter
we should iterate from height+1, not from the end!
* simplify cleanup
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
closes: #4455
Verifying backwards checks that the trustedHeader hasn't expired both before and after the loop in case of verifying many headers (a longer operation), but not during the loop itself.
TrustedHeader() no longer checks whether the header saved in the store has expired.
Tests have been updated to reflect the changes
## Commits:
* verify headers backwards out of trust period
* removed expiration check in trusted header func
* modified tests to reflect changes
* wrote new tests for backwards verification
* modified TrustedHeader and TrustedValSet functions
* condensed test functions
* condensed test functions further
* fix build error
* update doc
* add comments
* remove unnecessary declaration
* extract latestHeight check into a separate func
Co-authored-by: Callum Waters <cmwaters19@gmail.com>
Before we were storing trustedHeader (height=1) and trustedNextVals
(height=2).
After this change, we will be storing trustedHeader (height=1) and
trustedVals (height=1). This a) simplifies the code b) fixes#4399
inconsistent pairing issue c) gives a relayer access to the current
validator set #4470.
The only downside is more jumps during bisection. If the validator set
changes between trustedHeader and the next header (by 2/3 or more), the
light client will be forced to download the next header and check that
2/3+ signed the transition. But we don't expect the validator set to change too
much or too often, so it's an acceptable compromise.
Closes#4470 and #4399
closes#4426
The sequence and bisection methods no longer save the intermediate headers and validator sets that they require to verify a currently untrusted header.
## Commits:
* sequence and bisection don't save intermediate headers and vals
* check the next validator hash matches the header
* check expired header at start of backwards verification
* added tests
* handled cleanup warning
* lint fix
* removed redundant code
* tweaked minor errors
* avoided premature trusting of nextVals
* fix test error
* updated trustedHeader and Vals together
* fixed bisection error
* fixed sequence error for different vals and made test
* fixes after my own review
* reorder vars to be consistent
with the rest of the code
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
closes#4413 and #4419
When VerifyHeaderAtHeight() is called, TrustedHeader is initially run to check whether the header has already been verified and, if so, returns it.
If the new header height is less than the light client's latestTrustedHeader height, backwards verification is performed; otherwise either sequence or bisection is used.
Refactored a test to reflect the changes.
* use trustedHeader func for already verified Headers
* remove fetch missing header from TrustedHeader
* check for already trusted Header in VerifyHeaderAtHeight
* replace updateTrustedHeaderAndVals to updateTrustedHeaderAndNextVals
* rename trustedHeader and trustedNextVals
* refactored backwards and included it in VerifyHeader
* cleaned up test to match changes
* lite2: fixes after my own review
Refs https://github.com/tendermint/tendermint/pull/4428#pullrequestreview-361730169
* fix ineffectual assignment
* lite2: check that header exists in VerifyHeader
* extract function
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
The work includes the reactor, which ties together all the separate routines involved in the design of the blockchain v2 refactor. This PR replaces #4067, which got far too large and messy after a failed attempt to rebase.
## Commits:
* Blockchain v2 reactor:
+ A cleaner copy of the work done in #4067, which fell too far behind and was a nightmare to rebase.
+ The work includes the reactor, which ties together all the separate routines involved in the design of the blockchain v2 refactor.
* fixes after merge
* reorder iIO interface methodset
* change iO -> IO
* panic before send nil block
* rename switchToConsensus -> trySwitchToConsensus
* rename tdState -> tmState
* Update blockchain/v2/reactor.go
Co-Authored-By: Bot from GolangCI <42910462+golangcibot@users.noreply.github.com>
* remove peer when it sends a block unsolicited
* check for not ready in markReceived
* fix error
* fix the pcFinished event
* typo fix
* add documentation for processor fields
* simplify time.Since
* try and make the linter happy
* some doc updates
* fix channel diagram
* Update adr-043-blockchain-riri-org.md
* panic on nil switch
* linting fixes
* account for nil block in bBlockResponseMessage
* panic on duplicate block enqueued by processor
* linting
* goimport reactor_test.go
Co-authored-by: Bot from GolangCI <42910462+golangcibot@users.noreply.github.com>
Co-authored-by: Anca Zamfir <ancazamfir@users.noreply.github.com>
Co-authored-by: Marko <marbar3778@yahoo.com>
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
refs #4329
As opposed to using recursion to implement the bisection method of verifying a header, which could have problems with memory allocation (especially for smaller devices), the bisection algorithm now uses a for loop.
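A rough, self-contained sketch of the iterative shape (the verification rule below is a toy stand-in, not the real light-client checks):
```go
package main

import "fmt"

// verifyBisection sketches the loop-based approach: keep a "current trusted"
// height and walk a pivot toward it until one hop verifies, then advance the
// trusted height, repeating until the target is reached. verify stands in for
// the real adjacent/non-adjacent header verification.
func verifyBisection(trusted, target int64, verify func(from, to int64) bool) bool {
	for trusted < target {
		pivot := target
		for !verify(trusted, pivot) {
			pivot = trusted + (pivot-trusted)/2 // halve the jump and retry
			if pivot == trusted {
				return false // cannot make progress
			}
		}
		trusted = pivot // the pivot is now trusted; continue toward the target
	}
	return true
}

func main() {
	// Toy rule: a hop verifies only if it spans at most 16 heights.
	canVerify := func(from, to int64) bool { return to-from <= 16 }
	fmt.Println(verifyBisection(0, 128, canVerify)) // true

	impossible := func(from, to int64) bool { return to-from <= 0 }
	fmt.Println(verifyBisection(0, 128, impossible)) // false
}
```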
* modified bisection to loop
* made lint changes
* made lint changes
* move note to VerifyHeader
since it applies both for sequence and bisection
* test bisection jumps to header signed by 1/3+
of old validator set
* update labels in debug log calls
* copy tc
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
Closes#4398
* divided verify functions
* extracted method
* renamed functions. Created standard Verify function
* checked non-adjacency. separated VerifyCommit
* lint fixes
* fix godoc documentation for VerifyAdjacent and VerifyNonAdjacent
* add a comment about VerifyCommit being the last check
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
When using the WebSocket server, if the same client calls the subscribe method multiple times, only the last subscription receives the events of all the previous ones.
example:
subscription1 = tm.event = 'NewBlock'
subscription2 = tm.event = 'Tx'
In this case, subscription2 will receive the new blocks, but subscription1 will not.
This came from the WebSocket handler, where the declaration of the rpcrequest had been moved outside the read loop, so it was overwritten for every request and passed into the JSONReq client context (meaning the id of the subscription was not the right one).
This fixes the issue by simply declaring the rpcrequest inside the loop, so every request creates a new object without overwriting the previous one.
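The bug pattern in miniature, as a simplified sketch rather than the actual handler code:
```go
package main

import "fmt"

type rpcRequest struct{ ID int }

func main() {
	// Buggy shape: one variable declared outside the loop, so every handler
	// spawned later closes over the same value and sees the last request's ID.
	var shared rpcRequest
	buggy := make([]func() int, 0, 3)
	for i := 1; i <= 3; i++ {
		shared = rpcRequest{ID: i}
		buggy = append(buggy, func() int { return shared.ID })
	}

	// Fixed shape: declare the request inside the loop, so each handler keeps
	// its own copy (and therefore its own subscription ID).
	fixed := make([]func() int, 0, 3)
	for i := 1; i <= 3; i++ {
		req := rpcRequest{ID: i}
		fixed = append(fixed, func() int { return req.ID })
	}

	for i := range buggy {
		// buggy prints 3 every time; fixed prints 1, 2, 3.
		fmt.Println("buggy:", buggy[i](), "fixed:", fixed[i]())
	}
}
```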
* update theme
* Update version
* Updated Questions section in the footer
* Remove links to Riot chat
* Typo
* Add Discord link
Co-authored-by: Marko <marbar3778@yahoo.com>
Closes#4385
* extract TrustOptions into its own file
* print trusted hash before asking whenever to rollback or not
so the user could reset the light client with the trusted header
* do not return an error if rollback is aborted
reason: we trust the old header presumably, so can continue from it.
* add note about time of initial header
* improve logging and add comments
* cross-check newHeader after LC verified it
* check if header is not nil
so we don't crash on the next line
* remove witness if it sends us incorrect header
* require at least one witness
* fix build and tests
* rename tests and assert for specific error
* wrote a test
* fix linter errors
* only check 1/3 if headers diverge
Currently the sequence function always starts from the trustedHeader and trustedNextVals stored in the lite client, whereas the bisection one allows the method to be started from any combination of header and validator set. I opened up the sequence verification method to do the same.
The .PHONY targets in the Makefile are usually placed far away from the actual targets, and thus aren't always updated. Placing the .PHONY targets right next to the targets they cover makes them more visible and thus more likely to be updated when necessary.
* adr: light client implementation
Closes #2133
* note on chain IDs
* explain why witnesses are required
* if chain forks maliciously, chain ID stays the same
* add a note about min witnesses while cross-checking
* make: remove sentry setup cmds
Removal of make commands for sentry setup. It was unclear if they were being maintained, and there has not been a mention of people using them.
- closes #4379
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* remove deprecated readme
* add not being maintained section to docs
* proto: minor linting
minor linting after working with the proto files in the SDK.
There are no logic changes, just spacing fixes.
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* hardcore linting
- Erik fixed many of the broken links; this just fixes two outstanding ones.
- closes #4381
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
* witnesses are dropped after no response
* test witness dropout
* corrected import structure
* moved non responsiveness check to compare function
* removed dropout test as witnesses are never dropped
* created test to compare witnesses
* proto: add buf and protogen script
- add buf with minimal changes
- add protogen script to easier generate proto files
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* add protoc needs
* add some needed shell cmds
* remove buf from tools as it is not needed every time
* add proto lint and breakage to ci
* add section in changelog and upgrading files
* address pr comments
* remove space in circle config
* remove spaces in makefile comment
* add section on contributing on how to work with proto
* bump buf to 0.7
* test bufbuild image
* test install make in bufbuild image
* revert to tendermintdev image
* Update Makefile
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
* validate trust options
* add NewClientFromTrustedStore func
* make maxRetryAttempts an option
Closes #4370
* hash size should be equal to tmhash.Size
* make maxRetryAttempts uint
* make maxRetryAttempts uint16
maxRetryAttempts possible - 68 years
* we do not store trustingPeriod
* added test to create client from trusted store
* remove header and vals from primary
to make sure we're restoring them from the DB
As opposed to checking a random witness, all witnesses provided should be used as a reference against the header provided by the primary node. This increases security (at the tradeoff of speed) but also gives control to the user. The more witnesses provided, the more secure the lite client can be.
Closes #4328
When TrustedHeader(height) is called and the height is less than the trusted height but the header is not in the trusted store, a function finds the closest lower height with a trusted header and performs forwards sequential verification up to the header at the requested height. If no error is found, it updates the trusted store with the header and validator set for that height and can then return them to the user.
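A compact sketch of that flow, with hypothetical store/fetch helpers standing in for the real lite2 interfaces: walk forward from the closest trusted header below the requested height, verifying sequentially, and cache the result on success.
```go
package light

import "errors"

type header struct{ Height int64 }

// store is a hypothetical trimmed-down trusted store.
type store interface {
	SignedHeader(height int64) (*header, bool)       // exact height, if stored
	ClosestTrustedBelow(height int64) (*header, bool) // nearest lower trusted header
	Save(h *header)
}

// verifySequential is assumed to check h against its immediate predecessor.
func verifySequential(prev, h *header) error { return nil }

// trustedHeader returns the header at an already-elapsed height, filling the
// gap from the closest lower trusted header by forward sequential
// verification and caching the result on success.
func trustedHeader(s store, fetch func(int64) (*header, error), height int64) (*header, error) {
	if h, ok := s.SignedHeader(height); ok {
		return h, nil
	}
	prev, ok := s.ClosestTrustedBelow(height)
	if !ok {
		return nil, errors.New("no trusted header below the requested height")
	}
	for next := prev.Height + 1; next <= height; next++ {
		h, err := fetch(next)
		if err != nil {
			return nil, err
		}
		if err := verifySequential(prev, h); err != nil {
			return nil, err
		}
		prev = h
	}
	s.Save(prev) // prev now holds the header at the requested height
	return prev, nil
}
```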
Commits:
* drafted trusted header
* created function to find previous trusted height
* updates missing headers less than the trusted height
* minor cosmetic tweaks
* incorporated suggestions
* lite2: implement Backwards verification
and add SignedHeaderAfter func to Store interface
Refs https://github.com/tendermint/tendermint/issues/4328#issuecomment-581878549
* remove unused method
* write tests
* start with next height in SignedHeaderAfter func
* fix linter errors
* address Callum's comments
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
Closes issue #4338
Uses a wrapper function around both the signedHeader and validatorSet calls to the primary provider. It attempts to retrieve the information 5 times before deeming the provider unavailable and replacing the primary with the first alternative, then tries again recursively (until all alternatives are depleted).
Employs a mutex lock for any operations involving the providers of the light client, to ensure no operation occurs whilst the new primary is chosen.
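A rough sketch of that retry-and-replace pattern, using hypothetical provider/header types (not the lite2 API): five attempts against the primary, then promote the first witness under the mutex and recurse until none are left.
```go
package light

import (
	"errors"
	"sync"
)

type header struct{ Height int64 }

// provider is a hypothetical stand-in for a header source.
type provider interface {
	SignedHeader(height int64) (*header, error)
}

type client struct {
	mtx       sync.Mutex
	primary   provider
	witnesses []provider
}

// signedHeader retries the primary up to 5 times, then swaps in the first
// witness as the new primary and tries again recursively until all
// alternatives are depleted.
func (c *client) signedHeader(height int64) (*header, error) {
	for attempt := 0; attempt < 5; attempt++ {
		c.mtx.Lock()
		p := c.primary
		c.mtx.Unlock()

		if h, err := p.SignedHeader(height); err == nil {
			return h, nil
		}
	}

	// Primary deemed unavailable: replace it while holding the mutex so no
	// other operation runs while the new primary is chosen.
	c.mtx.Lock()
	if len(c.witnesses) == 0 {
		c.mtx.Unlock()
		return nil, errors.New("primary unavailable and no witnesses left")
	}
	c.primary, c.witnesses = c.witnesses[0], c.witnesses[1:]
	c.mtx.Unlock()

	return c.signedHeader(height)
}
```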
Commits:
* created swapProvider function
* eliminates old primary provider after replacement. Uses a mutex when changing providers
* renamed to replaceProvider
* created wrapped functions for signed header and val set
* created test for primary provider replacement
* implemented suggested revisions
* created Witnesses() and Primary()
* modified backoffAndJitterTime
* modified backoffAndJitterTime
* changed backoff base and jitter to functional arguments
* implemented suggested changes
* removed backoff function
* changed exp function to match go version
* halved the backoff time
* removed seeding and added comments
* fixed incorrect test
* extract backoff timeout calc into a function
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
* update guides with correct path to libs/kv proto files
* Apply suggestions from code review
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* format something to rerun ci
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
* lite2: add Start method
There are a few reasons to do that:
1) separation of state and dynamics (some users will want to delay starting the light client; that should not stop them from creating a light client object)
2) less important, but some users might not need the autoUpdateRoutine and removeNoLongerTrustedHeadersRoutine routines
* lite2: wait till routines are finished in Stop
because they are started in Start, it feels more natural to wait for
them to finish in Stop.
* lite2: add TrustedValidatorSet func
* refactor cleanup code
* changed restore header and val function to handle negative height
* reverted restoreTrustedHeaderAndNextVals() functionality
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
* docs: update links to rpc
- links to rpc have not been updated. thank you @okwme
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* Update docs/app-dev/indexing-transactions.md
* docs: minor doc fixes
- minor doc fixes that i ran into while reading things
- test if we have github actions
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* no github actions yet
* add with
* revert and change wording
* lite2: advance to latest header
without any exponential steps
rename autoUpdate to autoUpdateRoutine
* lite2: wait in Cleanup until goroutines finished running
* lite2: move AutoClient into Client
Most users will want the auto-update feature, so it makes sense to move it into the Client itself rather than having a separate abstraction (that made the code cleaner, but introduced an extra thing the user needed to learn).
Also, add `FirstTrustedHeight` func to Client to get first trusted height.
* fix db store tests
* separate examples for auto and manual clients
* AutoUpdate tries to update to latest state
NOT 1 header at a time
* fix errors
* lite2: make Logger an option
remove SetLogger func
* fix lite cmd
* lite2: make concurrency assumptions explicit
* fixes after my own review
* no need for nextHeightFn
sequence func will download intermediate headers
* correct comment
* Separate ADR Tendermint Mode from ADR-051
* Update docs/architecture/adr-052-tendermint-mode.md
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* Apply suggestions from code review
Co-Authored-By: Marko <marbar3778@yahoo.com>
* Apply suggestions from code review
Co-Authored-By: Marko <marbar3778@yahoo.com>
* remove line of mode info of rpc
* Add link to ADR table of contents
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
Co-authored-by: Marko <marbar3778@yahoo.com>
Fullnode mode: a full node does not have any capability to participate in consensus.
Validator mode: this mode is exactly the same as the existing state machine behavior: sync without voting on consensus, and participate in consensus when fully synced.
Seed mode: a lightweight seed mode only for maintaining an address book, at the p2p level like TenderSeed.
Separate ADR Tendermint Mode from ADR-051 #4262
* dep: update tm-db to 0.4.0
- update to tm-db 0.4.0 as it is a breaking change and cannot be handled by dependabot
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* more work towards error handling
* error and empty bytes handling
* work on tests
* add changelog entry, change some error handling
* address some pr comments
* panic in a few more places
* move error higher up
* redo some error handling
* fix some bz == nil to len(bz) == 0
* change statebytes
* evidence: introduce time.Duration to evidence params
- add time.Duration to evidence params
- this PR takes PR #2606 and updates it to use both time and height
- closes #2565
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* fix testing and genesis cfg in signer harness
* remove debugging fmt
* change maxageheight to maxagenumblocks, rename other things to block instead of height
* further check of duration
* check duration to not send peers outdated evidence
* change some lines, onward and upward
* refactor evidence package
* add a changelog pending entry
* make mockbadevidence have time and use it
* add what could possibly be called a test case
* remove mockbadevidence and mockgoodevidence in favor of mockevidence
* add a comment for err that is returned
* add a changelog for removal of good & bad evidence
* add a test for adding evidence
* fix test
* add ev to types in testcase
* Update evidence/pool_test.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* Update evidence/pool_test.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* fix tests
* fix linting
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
* fix raw import sr25519
* add sr25519 to multisig codec
* bump go-schnorrkel
Co-authored-by: Marko <marbar3778@yahoo.com>
Fixes sr25519 pubkey generation and signing when importing from raw bytes
* prometheus/metrics: two new metrics for consensus
- add consensus_validator_power metric so if a node is a validator it can see its own power in prometheus
- add last_signed_height metric so if a node is a validator it can be aware of at which height the most recent time it signed.
- closes: #3773
- closes: #3083
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* check if signature is present
* minor change and check if sig is not absent
* add changelog entry
* Update consensus/state.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* Update CHANGELOG_PENDING.md
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* Update CHANGELOG_PENDING.md
* add label with validator address
* change address to vali_address
* add metric missed blocks
* add changelog entry for missed blocks metric
* address naming & add docs
also
- do not send any LastCommitInfo.Votes for 1st block.
- use cs.LastValidators (not cs.Validators) in recordMetrics when counting missing validators
note:
remember that the first LastCommit is intentionally empty, so it makes
sense for LastCommitInfo.Votes to also be empty. Because we can't really
tell if validator signed or not. Similar for ^ metrics, we can't say if
particular validator was missing or not.
Closes #4192
* lint: golint issue fixes
- on my local machine golint is a lot stricter than the bot so slowly going through and fixing things.
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* more fixes from golint
* remove isPeerPersistentFn
* add changelog entry
## Issue
Implement a new subcommand: tendermint debug. This subcommand itself has two subcommands:
$ tendermint debug kill <pid> </path/to/out.zip> --home=</path/to/app.d>
Writes debug info into a compressed archive. The archive will contain the following:
├── config.toml
├── consensus_state.json
├── net_info.json
├── stacktrace.out
├── status.json
└── wal
The Tendermint process will be killed.
$ tendermint debug dump </path/to/out> --home=</path/to/app.d>
This command performs similarly to kill, except it only polls the node and dumps debugging data every `frequency` seconds to a compressed archive under a given destination directory. Each archive will contain:
├── consensus_state.json
├── goroutine.out
├── heap.out
├── net_info.json
├── status.json
└── wal
Note:
goroutine.out and heap.out will only be written if a profile address is provided and is operational.
This command is blocking and will log any error.
replaces: #3327, closes: #3249
## Commits:
* Implement debug tool command stubs
* Implement net getters and zip logic
* Update zip dir API and add home flag
* Add simple godocs for kill aux functions
* Move IO util to new file and implement copy WAL func
* Implement copy config function
* Implement killProc
* Remove debug fmt
* Validate output file input
* Direct STDERR to file
* Godoc updates
* Sleep prior to killing tail proc
* Minor cleanup of godocs
* Move debug command and add make target
* Rename command handler function
* Add example to command long descr
* Move current implementation to cmd/tendermint/commands/debug
* Update kill cmd long description
* Implement dump command
* Add pending log entry
* Add gosec nolint
* Add error check for Mkdir
* Add os.IsNotExist(err)
* Add to debugging section in running-in-prod doc
* libs/common: Refactor libs/common 5
- move mathematical functions and types out of `libs/common` to math pkg
- move net functions out of `libs/common` to net pkg
- move string functions out of `libs/common` to strings pkg
- move async functions out of `libs/common` to async pkg
- move bit functions out of `libs/common` to bits pkg
- move cmap functions out of `libs/common` to cmap pkg
- move os functions out of `libs/common` to os pkg
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* fix testing issues
* fix tests
closes #41417
woooooooooooooooooo kill the cmn pkg
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* add changelog entry
* fix goimport issues
* run gofmt
Closes #4202
- nobody's supporting them
- users who want to try TM should use pre-built binaries
- devs should be able to install using git clone (or use Vagrant)
- validators don't use them (my guess)
* libs/common: Refactor libs/common 4
- move byte function out of cmn to its own pkg
- move tempfile out of cmn to its own pkg
- move throttletimer to its own pkg
ref #4147
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* add changelog entry
* fix linting issues
Fixes#4014
The reason being when you call any endpoint supporting optional height
and use `height=0`, it will return an error. For example:
```
$ curl localhost:2667/consensus_params?height=0
{
"jsonrpc": "2.0",
"id": -1,
"error": {
"code": -32603,
"message": "Internal error",
"data": "height must be greater than 0"
}
}
```
* libs/common: refactor libs common 3
- move nil.go into types folder and make private
- move service & baseservice out of common into service pkg
ref #4147
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* add changelog entry
* libs/common: refactor libs/common 2
- move random functions to their own pkg
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* change imports and usage throughout repo
* fix goimports
* add changelog entry
* libs/common: Refactor libs/common 01
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* regenerate proto files, move intslice to where its used
* update kv.KVPair(s) to kv.Pair(s)
* add changelog entry
* make intInSlice private
I put it in the awesome repo ecosystem section, as this is currently the only place where there is Tendermint-only ecosystem information.
- closes #4198
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* Rename Tag(s) to Event(s)
- tag was replaced with event, but some places still mention tag; it would be easier to understand if we replaced it with event everywhere, so as not to confuse people.
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* more changes from tag -> event
* rename events to compositeKeys and keys
* Apply suggestions from code review
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* add minor documentation on how composite keys are constructed
* rename eventkey to compositekey
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* add changelog entry & add info about regenerating the config to the changelog entry
To execute integration.sh, you need to run `source ./integration.sh`; just executing `./integration.sh` will not work.
This is because `source ~/.profile` executes in a separate shell from the integration.sh script.
```
# Turn on Go modules (the default is auto). The value ends up off if the
# Tendermint source code is downloaded under the $GOPATH/src directory.
echo "export GO111MODULE=on" >> ~/.profile

# Use git clone instead of go get: once modules are on, go get downloads the
# source code to a version-specific directory under $GOPATH/pkg/mod and the
# make script will not work.
git clone https://github.com/tendermint/tendermint.git

# The package needs to be installed, otherwise `tendermint testnet` will not execute.
make install
```
implementation spec of Improved Trusted Peering ADR-050 by B-Harvest
- add unconditional_peer_ids and persistent_peers_max_dial_period to config
- add unconditionalPeerIDs map to Switch struct
default config value of persistent_peers_max_dial_period is 0s (disabled)
Refs #4072, #4053
* Add processor prototype
* Change processor API
+ expose a simple `handle` function which mutates internal state
* schedule event handling
* rename schedule -> scheduler
* fill in handle function
* processor tests
* fix gofmt and other golangci issues
* scopelint var on range scope
* add check for short block received
* small test reorg
* ci fix changes
* go.mod revert
* some cleanup and review comments
* scheduler fixes and unit tests, also small processor changes.
changed scPeerPruned to include a list of pruned peers
touchPeer to check peer state and remove the blocks from blockStates if the peer removal causes the max peer height to be lower.
remove the block at sc.initHeight
changed peersInactiveSince, peersSlowerThan, getPeersAtHeight to check peer state
prunablePeers to return a sorted list of peers
lastRate in markReceived() attempted to divide by 0, temp fix.
fixed allBlocksProcessed conditions
maxHeight() and minHeight() to return sc.initHeight if no ready peers present
make selectPeer() deterministic.
added handleBlockProcessError()
added termination cond. (sc.allBlocksProcessed()) to handleTryPrunePeer() and others.
changed pcBlockVerificationFailure to include peer of H+2 block along with the one for H+1
changed the processor to call purgePeer on block verification failure.
fixed processor tests
added scheduler tests.
* typo and ci fixes
* remove height from scBlockRequest, golangci fixes
* limit on blockState map, updated tests
* remove unused
* separate test for maxHeight(), used for sched. validation
* use Math.Min
* fix golangci
* Document the semantics of blockStates in the scheduler
* better docs
* distinguish between unknown and invalid blockstate
* Standardize peer filtering methods
* feedback
* s/getPeersAtHeight/getPeersAtHeightOrAbove
* small notes
* Update blockchain/v2/scheduler.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* Update comments based on feedback
* Add enum offset
* panic on nil block in processor
* remove unused max height calculation
* format shorter line
Fix for #4164
The general problem is that in certain conditions an overflow warning is issued when attempting to update a validator set, even if the final set's total voting power is not over the maximum allowed.
The root cause is that in verifyUpdates(), updates are verified with respect to the total voting power in the order of validator address. It is then possible that a low-address validator may increase its power such that the temporary total voting power count goes over MaxTotalVotingPower.
Scenarios where validators with high voting power are removed and added/updated in the same update operation cause the same false warning, and the updates are not applied.
The main changes to fix this are in verifyUpdates(), which now does the verification starting with the decreases in power. It also takes into account the removals that are part of the update.
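A small sketch of the ordering idea with hypothetical types (the real code lives in the types package and also folds removals in): verifying in order of power delta, decreases first, keeps the running total from spiking above the cap when the final total is actually fine.
```go
package valset

import (
	"math"
	"sort"
)

// maxTotalVotingPower is an illustrative cap for this sketch only.
const maxTotalVotingPower = int64(math.MaxInt64) / 8

// update is a hypothetical pre-processed change: Delta is the new power minus
// the current power (negative for decreases and removals).
type update struct {
	Address string
	Delta   int64
}

// verifyUpdates applies deltas smallest-first so decreases and removals free
// up power before increases are counted, avoiding false overflow warnings.
func verifyUpdates(currentTotal int64, updates []update) bool {
	sorted := make([]update, len(updates))
	copy(sorted, updates)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].Delta < sorted[j].Delta })

	total := currentTotal
	for _, u := range sorted {
		total += u.Delta
		if total > maxTotalVotingPower {
			return false // genuine overflow, not an ordering artifact
		}
	}
	return true
}
```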
## Commits:
* tests for overflow detection and prevention
* test fix
* more tests
* fix the false overflow warnings and golint
* scopelint warning fix
* review comments
* variant with using sort by amount of change in power
* compute separately number new validators in update
* types: use a switch in processChanges
* more review comments
* types: use HasAddress in numNewValidators
* types: refactor verifyUpdates
copy updates, sort them by delta and use resulting slice to calculate
tvpAfterUpdatesBeforeRemovals.
* remove unused structs
* review comments
* update changelog
* p2p/conn: simplify secret connection handshake malleability fix with merlin
Introduces new dependencies on github.com/gtank/merlin and sha3 as a cryptographic primitive
This also only uses the transcript hash as a MAC.
* p2p/conn: avoid string to byte conversion
https://github.com/uber-go/guide/blob/master/style.md#avoid-string-to-byte-conversion
* types: change `Commit` to consist of just signatures
These are final changes towards removing votes from commit and leaving
only signatures (see ADR-25)
Fixes #1648
* bring back TestCommitToVoteSetWithVotesForAnotherBlockOrNilBlock
+ add absent flag to Vote to indicate that it's for another block
* encode nil votes as CommitSig with BlockIDFlagAbsent
+ make Commit#Precommits array of non-pointers
because precommit will never be nil
* add NewCommitSigAbsent and Absent() funcs
* uncomment validation in CommitSig#ValidateBasic
* add comments to ValidatorSet funcs
* add a changelog entry
* break instead of continue
continue does not make sense in these cases
* types: rename Commit#Precommits to Signatures
* swagger: fix /commit response
* swagger: change block_id_flag type
* fix merge conflicts
Refs #1771
ADR: https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-044-lite-client-with-weak-subjectivity.md
## Commits:
* add Verifier and VerifyCommitTrusting
* add two more checks
make trustLevel an option
* float32 for trustLevel
* check newHeader time
* started writing lite Client
* unify Verify methods
* ensure h2.Header.bfttime < h1.Header.bfttime + tp
* move trust checks into Verify function
* add more comments
* more docs
* started writing tests
* unbonding period failures
* tests are green
* export ErrNewHeaderTooFarIntoFuture
* make golangci happy
* test for non-adjusted headers
* more precision
* providers and stores
* VerifyHeader and VerifyHeaderAtHeight funcs
* fix compile errors
* remove lastVerifiedHeight, persist new trusted header
* sequential verification
* remove TrustedStore option
* started writing tests for light client
* cover basic cases for linear verification
* bisection tests PASS
* rename BisectingVerification to SkippingVerification
* refactor the code
* add TrustedHeader method
* consolidate sequential verification tests
* consolidate skipping verification tests
* rename trustedVals to trustedNextVals
* start writing docs
* ValidateTrustLevel func and ErrOldHeaderExpired error
* AutoClient and example tests
* fix errors
* update doc
* remove ErrNewHeaderTooFarIntoFuture
This check is unnecessary given the existing a) ErrOldHeaderExpired and b) h2.Time > now checks.
* return an error if we're at more recent height
* add comments
* add LastSignedHeaderHeight method to Store
I think it's fine if Store tracks last height
* copy over proxy from old lite package
* make TrustedHeader return latest if height=0
* modify LastSignedHeaderHeight to return an error if no headers exist
* copy over proxy impl
* refactor proxy and start http lite client
* Tx and BlockchainInfo methods
* Block method
* commit method
* code compiles again
* lite client compiles
* extract updateLiteClientIfNeededTo func
* move final parts
* add placeholder for tests
* force usage of lite http client in proxy
* comment out query tests for now
* explicitly mention tp: trusting period
* verify nextVals in VerifyHeader
* refactor bisection
* move the NextValidatorsHash check into updateTrustedHeaderAndVals
+ update the comment
* add ConsensusParams method to RPC client
* add ConsensusParams to rpc/mock/client
* change trustLevel type to a new cmn.Fraction type
+ update SkippingVerification comment
* stress out trustLevel is only used for non-adjusted headers
* fixes after Fede's review
Co-authored-by: Federico Kunze <31522760+fedekunze@users.noreply.github.com>
* compare newHeader with a header from an alternative provider
* save pivot header
Refs https://github.com/tendermint/tendermint/pull/3989#discussion_r349122824
* check header can still be trusted in TrustedHeader
Refs https://github.com/tendermint/tendermint/pull/3989#discussion_r349101424
* lite: update Validators and Block endpoints
- Block no longer contains BlockMeta
- Validators now accept two additional params: page and perPage
* make linter happy
* docs: remove specs, they live in spec repo (#4172)
* docs: remove specs, they live in spec repo
- moving specs to spec repo
- https://github.com/tendermint/spec/pull/62 PR for updating them
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* add makefile command to copy in specs from the spec repo
- move cloning of spec repo to pre and post scripts
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
Fixes#3986
This pull request prefixes all the types in proto to avoid conflicts.
When a Go application uses Tendermint as a library and also defines similar types in gogo proto, conflicts might occur (as types is a common package name in Go).
Prefixing the types with tendermint greatly reduces the risk of conflicts.
BREAKING CHANGE.
This modification breaks the ABCI Application endpoint.
What was accessible before with `/types.ABCIApplication/Flush` is now accessible with `/tendermint.abci.types.ABCIApplication/Flush`.
- change link for bounties for different lang abci servers to interchainio funding repo
- link awesome repo ecosystem section in main docs readme
- closes #4110
- closes #4125
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
- tm-bench has had a deprecation warning for 5 releases now; with the major release coming, I removed the file and updated the docs to point to `tm-load-test`, located in the interchainio repo
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* libs/common/errors: remove package
- remove errors file from cmn pkg
- use errorf instead of wrap in async function
- add changelog entry
- closes#3862
https://www.jsonrpc.org/specification
What is done in this PR:
- JSONRPCClient: validate that Response.ID matches Request.ID. I wanted to do the same for the WSClient, but since we're sending events as responses, not notifications, checking IDs would require storing them in memory indefinitely (and we won't be able to remove them when a client unsubscribes, because the ID is different then).
- Request.ID is now optional. A notification is a Request without an ID. Previously "" or 0 were considered notifications.
- Remove the #event suffix from the ID of an event response (partially fixes #2949). The ID must be either a string, an int or null, AND must be equal to the request's ID. Now, because we've implemented events as responses, WS clients trip when they see Response.ID("0#event") != Request.ID("0"). Implementing events as requests would require a lot of time (~2 days to completely rewrite the WS client and server).
- Generate a unique ID for each request.
- Switch to integer IDs instead of "json-client-XYZ", e.g.:
id=0 method=/subscribe
id=0 result=...
id=1 method=/abci_query
id=1 result=...
> send events (resulting from /subscribe) as requests+notifications (not
responses)
this will require a lot of work. probably not worth it
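A minimal sketch of the client-side ID check, with a hypothetical trimmed-down response type: the response must echo the integer ID the request was sent with, and anything else is treated as unsolicited.
```go
package jsonrpcdemo

import (
	"encoding/json"
	"fmt"
)

// rpcResponse is a hypothetical trimmed-down JSON-RPC response type.
type rpcResponse struct {
	ID     json.RawMessage `json:"id"`
	Result json.RawMessage `json:"result"`
}

// checkResponseID enforces the rule the client now relies on: a response must
// echo the ID of the request it answers, with no "#event" suffix or other
// decoration.
func checkResponseID(sentID int, resp rpcResponse) error {
	var gotID int
	if err := json.Unmarshal(resp.ID, &gotID); err != nil {
		return fmt.Errorf("response ID is not an integer: %w", err)
	}
	if gotID != sentID {
		return fmt.Errorf("unsolicited response: sent id=%d, got id=%d", sentID, gotID)
	}
	return nil
}
```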
* rpc: generate an unique ID for each request
in conformance with JSON-RPC spec
* WSClient: check for unsolicited responses
* fix golangci warnings
* save commit
* fix errors
* remove ID from responses from subscribe
Refs #2949
* clients are safe for concurrent access
* tm-bench: switch to int ID
* fixes after my own review
* comment out sentIDs in WSClient
see commit body for the reason
* remove body.Close
it will be closed automatically
* stop ws connection outside of write/read routines
also, use t.Rate in tm-bench indexer when calculating ID
fix gocritic issues
* update swagger.yaml
* Apply suggestions from code review
* fix stylecheck and golint linter warnings
* update changelog
* update changelog2
* Add pagination to /validators
- closes #3472
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* add swagger params, default returns all
* address pr comments
* golint fix
* swagger default change, change to default in comment
* swagger.yaml: replace x-example with example
https://swagger.io/docs/specification/adding-examples/
* Revert "swagger.yaml: replace x-example with example"
This reverts commit 9df1b006de.
* update changelog and remove extra body close
## Issue:
Removed BlockMeta in ResultBlock in favor of BlockId for /block
Added block_size to BlockMeta; this is reflected in /blockchain
fixes #3188
added breaking, as some clients may be using the header from BlockMeta instead of Block in /block
## Commits:
* cleanup block path
Remove duplication of data in `/block`
fixes #3188
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* Remove duplication of data in `/block`
- Created a secondary type to be used for /block
fixes #3188
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* remove commented out code
* fix ci
* add changelog_pending entry
* remove extra variable
* update swagger
* change int to uint for blocksize
* fix swagger
* remove extensive comments
* update changelog
* fix conflicts after merge
* use int for BlockSize and NumTxs in BlockMeta
- with 99.9% guarantee, the size of either will never reach int32
- most of the Go "Size" stdlib functions return int
* Removal of TotalTx & NumTx
- Removed totalTx and numTx
closes #2521
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* abci proto changes
* proto number fix
* txfilter_test fix
* comments on PR
* further changes
* bring back metrics
* fix indexer
* fix TestBlockMaxDataBytes and TestBlockMaxDataBytesUnknownEvidence
* indexer service back to header
* statistics.go fix
* fix ci
* listen for blocks, not headers
to be able to record txs throughput
* fix TestNetworkNewBlock
* fix tests
* fix tests in types package
* fixes after Anton's review
* fix tests
* bring back `consensus_total_txs` metric
I mistakenly thought it was removed.
* improve changelog
* remove LastBlockTotalTx from state
* docs: remove getNumTxs from BeginBlock Java example
## Issue:
This is an approach to fixing secret connection that is more noise-ish than actually noise.
but it essentially fixes the problem that #3315 is trying to solve by making the secret connection handshake non-malleable. It's easy to understand and I think will be acceptable to @jaekwon
.. the formal reasoning is basically: if the "view" of the transcript diverges between the sender and the receiver at any point in the protocol, the handshake will terminate.
The base protocol of Station to Station mistakenly assumes that if the sender and receiver arrive at a shared secret they have the same view. This is only true for a DH on prime order groups.
This robustly solves the problem by having each cryptographic operation commit to each party's view of the protocol.
Another nice thing about a transcript is it provides the basis for "secure" (barring cryptographic breakages, horrible design flaws, or implementation bugs) downgrades, where a backwards compatible handshake can be used to offer newer protocol features/extensions, peers agree to the common subset of what they support, and both sides have to agree on what the other offered for the transcript MAC to verify.
With something like Protos/Amino you already get "extensions" for free (TLS uses a simple TLV format https://tools.ietf.org/html/rfc8446#section-4.2 for extensions not too far off from Protos/Amino), so as long as you cryptographically commit to what they contain in the transcript, it should be possible to extend the protocol in a backwards-compatible manner.
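A conceptual sketch of the transcript idea, not the actual merlin-based implementation: every handshake message is absorbed into a running hash, and a MAC derived over that hash only verifies if both parties processed exactly the same sequence of messages.
```go
package handshake

import (
	"crypto/hmac"
	"hash"

	"golang.org/x/crypto/sha3"
)

// transcript accumulates every handshake message seen so far. If the two
// sides' views diverge at any point, their transcript hashes differ and the
// final MAC check fails.
type transcript struct{ h hash.Hash }

func newTranscript(protocolLabel string) *transcript {
	t := &transcript{h: sha3.New256()}
	t.append("protocol", []byte(protocolLabel))
	return t
}

// append commits a labeled handshake message to the running transcript hash.
func (t *transcript) append(label string, msg []byte) {
	t.h.Write([]byte(label))
	t.h.Write(msg)
}

// mac binds a shared key to the current view of the protocol; comparing MACs
// proves both parties committed to the same transcript.
func (t *transcript) mac(sharedKey []byte) []byte {
	m := hmac.New(sha3.New256, sharedKey)
	m.Write(t.h.Sum(nil))
	return m.Sum(nil)
}
```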
## Commits:
* Minimal changes to remove malleability of the secret connection; removes the need to check for lower order points.
Breaks compatibility. Secret connections that have not been updated will fail
* Remove the redundant blacklist
* remove remainders of blacklist in tests to make the code compile again
Signed-off-by: Ismail Khoffi <Ismail.Khoffi@gmail.com>
* Apply suggestions from code review
Apply Ismail's error handling
Co-Authored-By: Ismail Khoffi <Ismail.Khoffi@gmail.com>
* fix error check for io.ReadFull
Signed-off-by: Ismail Khoffi <Ismail.Khoffi@gmail.com>
* Update p2p/conn/secret_connection.go
Co-Authored-By: Ismail Khoffi <Ismail.Khoffi@gmail.com>
* Update p2p/conn/secret_connection.go
Co-Authored-By: Bot from GolangCI <42910462+golangcibot@users.noreply.github.com>
* update changelog and format the code
* move hkdfInit closer to where it's used
BREAKING
Example response:
```json
{
"jsonrpc": "2.0",
"id": "",
"result": {
"height": "2109",
"txs_results": null,
"begin_block_events": null,
"end_block_events": null,
"validator_updates": null,
"consensus_param_updates": null
}
}
```
Old result consisted of ABCIResponses struct and height. Exposing
internal ABCI structures (which we store in state package) in RPC seems
bad to me for the following reasons:
1) high risk of breaking the API when somebody changes internal structs
(HAPPENED HERE!)
2) RPC is aware of ABCI, which I'm not sure we want
Fixes #4051
The function `parseRemoteAddr` forces the HTTP and HTTPS protocols to tcp, which causes the bug in issue #4051.
The tcp value is only needed where `net.Dial` is called, so I moved the switch into makeHTTPDialer.
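A sketch of the resulting shape, with hypothetical function names: the parsed address keeps its original protocol, and the translation to a dialable network happens only inside the dialer where net.Dial is invoked.
```go
package rpcclient

import "net"

// dialableNetwork maps an RPC address protocol to something net.Dial accepts;
// http and https are collapsed to tcp only at this point, not during parsing.
func dialableNetwork(protocol string) string {
	switch protocol {
	case "http", "https", "tcp":
		return "tcp"
	case "unix":
		return "unix"
	default:
		return "tcp"
	}
}

// makeDialer returns a dial function for the given (protocol, address) pair.
func makeDialer(protocol, address string) func() (net.Conn, error) {
	return func() (net.Conn, error) {
		return net.Dial(dialableNetwork(protocol), address)
	}
}
```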
## Issue:
Hey, not sure if this is disallowed for any reason specifically, but it would be very beneficial to define additional types to decode tendermint key implementations from bytes, since it uses a static codec. If this is okay, let me know and I will add documentation.
Context: For Ethermint to switch to using Cosmos' keybase, decoding the keys requires this codec to be updated
Just to document: I did experiment with creating a mapping from string to objects to keep track of the key types added, so that it could be used in the RegisterAmino(..) call, but because of how Go is compiled, Cosmos would just use the base types. This may be a useful feature for someone building directly on top of Tendermint rather than going through Cosmos, but to not add confusion or unnecessary complexity, I left it out.
## Commits:
* Exposes amino codec to be able to decode pk bytes in application
* Change how codec is modified
* Remove unneeded comment
* Fix comment
* Fix comment
* Add registered type to nametable
* Add pending changelog entry
* Reorder change
* Added check if type is registered and added test
* Make test type private
* Remove unnecessary duplicate exists check
Added a small function to be able to change the default retry interval for the privval. The default is 100ms; this function allows changing it to any time.Duration.
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
## Issue:
This PR adds an "EXISTS" condition to the event query grammar. It enables querying for the occurrence of an event without having to provide a condition for one of its attributes.
As an example, someone interested in all slashing events might currently catch them with a query such as slash.power > 0.
With this PR the event can be captured with slash.power EXISTS or just slash EXISTS to catch by event type.
## Examples:
`slash EXISTS`
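A small usage sketch against libs/pubsub/query, assuming its MustParse helper accepts the extended grammar:
```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/libs/pubsub/query"
)

func main() {
	// Catch every slashing event, whatever attributes it carries.
	byType := query.MustParse("slash EXISTS")

	// Catch slashing events that carry a power attribute.
	byAttr := query.MustParse("slash.power EXISTS")

	// The pre-EXISTS workaround required guessing at an attribute value.
	workaround := query.MustParse("slash.power > 0")

	fmt.Println(byType, byAttr, workaround)
}
```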
## Commits:
* Add EXISTS condition to query grammar
* Gofmt files
* Move PEG instructions out of auto-generated file to prevent overwrite
* Update libs/pubsub/query/query.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* Update changelog and add test case
* Merge with other changes in PR #4070
* Add EXISTS to Conditions() func
* Apply gofmt
* Addressing PR comments
This PR fixes error handling when performing a txindex search, i.e. when the user searches for a tx (hash=X).
TxIndex.Get returns
(txresult, nil) if the transaction is found.
(nil, nil) if the transaction is not found.
(nil, error) if error is occurred.
Therefore, if res is not nil, I think TxIndex.Search should return (txresult, nil).
Previously, however, this was not a problem because errors.Wrap returns nil if its first argument err is nil.
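A sketch of the corrected branch, with hypothetical getter and result types rather than the real TxIndex: a found result is returned as (result, nil), not-found is an empty result with no error, and only a real lookup failure propagates an error.
```go
package txindex

import "fmt"

// txResult is a hypothetical stand-in for the indexed transaction result.
type txResult struct{ Hash []byte }

// searchByHash mirrors the hash=X branch of a txindex search.
func searchByHash(get func(hash []byte) (*txResult, error), hash []byte) ([]*txResult, error) {
	res, err := get(hash)
	switch {
	case err != nil:
		return nil, fmt.Errorf("get tx by hash %X: %w", hash, err)
	case res == nil:
		return []*txResult{}, nil // not found is not an error
	default:
		return []*txResult{res}, nil
	}
}
```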
## Issue
Why this PR:
When restarting a chain node, we sometimes lost the tx index for recent (around 80) blocks, and some clients complained that they couldn't find a tx via the RPC call (tx_search) even though the tx does exist in a block.
I try to partially fix this issue in a simple way by writing the index data synchronously.
There is no performance difference under 1K TPS according to our test.
It is still possible to lose index data after restarting the node, but at most 2 blocks' worth of data will be lost.
I tried to fix this completely in https://github.com/tendermint/tendermint/pull/3847/files, but this one is simple and solves most of the issue. Please review this first, thanks.
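A sketch under an assumed key-value batch interface (hypothetical, not the exact tm-db API): committing the index batch with a synchronous write bounds how much index data can be lost on an unclean restart, at the benchmark cost discussed in the comments below.
```go
package indexer

// batch is a hypothetical write batch that distinguishes buffered commits
// from fsync'd ones.
type batch interface {
	Set(key, value []byte)
	Write() error     // buffered; may be lost if the process dies before a flush
	WriteSync() error // fsync before returning
}

// commitIndex writes a block's index entries synchronously so that at most
// the in-flight block's index can be lost across a restart.
func commitIndex(b batch, entries map[string][]byte) error {
	for k, v := range entries {
		b.Set([]byte(k), v)
	}
	return b.WriteSync()
}
```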
## Comments
Anton:
BEFORE:
```
BenchmarkTxIndex1-2        100000      12434 ns/op
BenchmarkTxIndex500-2         300    5151564 ns/op
BenchmarkTxIndex1000-2        100   15053910 ns/op
BenchmarkTxIndex2000-2        100   18238892 ns/op
BenchmarkTxIndex10000-2        20  124287930 ns/op
```
AFTER:
```
BenchmarkTxIndex1-2          2000     795431 ns/op
BenchmarkTxIndex500-2         200    6385124 ns/op
BenchmarkTxIndex1000-2        100   11388219 ns/op
BenchmarkTxIndex2000-2        100   20514873 ns/op
BenchmarkTxIndex10000-2        20  107456004 ns/op
```
Performance drop is pretty steep, but I think it's the right thing to do UNTIL we have a WAL.
* crypto: expose MaxAunts for documentation purposes
* types: update godoc for new maxes
* docs: make hard-coded limits more explicit
* wal: add todo to clarify max size
* shorten lines in test
* cs: panic only when WAL#WriteSync fails
- modify WAL#Write and WAL#WriteSync to return an error
* fix test
* types: validate Part#Proof
add ValidateBasic to crypto/merkle/SimpleProof
* cs: limit max bit array size and block parts count
* cs: test new limits
* cs: only assert important stuff
* update changelog and bump version to 0.32.7
* fixes after Ethan's review
* align max wal msg and max consensus msg sizes
* fix tests
* fix test
* add change log for 31.11
Some linting/cleanup missed from the initial events refactor
Don't panic; instead, return false, error when matching breaks unexpectedly
Strip non-numeric chars from values when attempting to match against query values
Have the server log during send upon error
* cleanup/lint Query#Conditions and do not panic
* cleanup/lint Query#Matches and do not panic
* cleanup/lint matchValue and do not panic
* revert to panic in Query#Conditions
* linting
* strip alpha chars when attempting to match
* add pending log entries
* Update libs/pubsub/query/query.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* build: update variable names
* update matchValue to return an error
* update Query#Matches to return an error
* update TestMatches
* log error in send
* Fix tests
* Fix TestEmptyQueryMatchesAnything
* fix linting
* Update libs/pubsub/query/query.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* Update libs/pubsub/query/query.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* Update libs/pubsub/query/query.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* Update libs/pubsub/query/query.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* Update libs/pubsub/query/query.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* Update libs/pubsub/pubsub.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* add missing errors pkg import
* update Query#Conditions to return an error
* update query pkg unit tests
* update TxIndex#Search
* update pending changelog
* Fix long line errors in abci, crypto, and libs packages
* Fix long lines in p2p and rpc packages
* Fix long lines in abci, state, and tools packages
* Fix long lines in behaviour and blockchain packages
* Fix long lines in cmd and config packages
* Begin fixing long lines in consensus package
* Finish fixing long lines in consensus package
* Add lll exclusion for lines containing URLs
* Fix long lines in crypto package
* Fix long lines in evidence package
* Fix long lines in mempool and node packages
* Fix long lines in libs package
* Fix long lines in lite package
* Fix new long line in node package
* Fix long lines in p2p package
* Ignore gocritic warning
* Fix long lines in privval package
* Fix long lines in rpc package
* Fix long lines in scripts package
* Fix long lines in state package
* Fix long lines in tools package
* Fix long lines in types package
* Enable lll linter
* Include sender when logging rejected txns
* Log as peerID to be consistent with other log messages
* Updated CHANGELOG_PENDING
* Handle nil source
* Updated PR link in CHANGELOG_PENDING
* Renamed TxInfo.SenderAddress and peerAddress to PeerFullID
* Renamed PeerFullID to PeerP2PID
* Forgot to rename a couple of references
* Add processor prototype
* Change processor API
+ expose a simple `handle` function which mutates internal state
* processor tests
* fix gofmt and other golangci issues
* scopelint var on range scope
* add check for short block received
* fix formatting
* small test reorg
* ignore unused for now
* ci fix changes
* go.mod revert
* New lint version upgrade
- linter was upgraded
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* enable-all is deprecated
* minor change
* another try
* some more changes
* some more changes
* reenable prealloc
* add version till bot is fixed
* docs theme
* vuepress-theme-cosmos
* version bump
* changes to docs
* more code changes
* sidebar order fix
* moar changes
* fixed dev sessions title
* fixed dev sessions title, again
* specs should show up in sidebar
* contents cards
* version bump
* sidebar, rpc
* version bump
* custom footer and super naive search
* version
* minor change to vuepress
* move swagger file
* pre and post scripts
* build
* changed docs build process
* added deployment config
* updated versions file and added deployment filters
* Add stale bot
- added stale bot to only pull requests for now
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* change code owner from xla to tess
* Remove traces of `github.com/tendermint/abci`
- removed abci dockerfile as it was still referencing `github.com/tendermint/abci`
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* minor change to install of abci
* use abci-cli instead of tendermint node
* remove traces of Dockerfile.develop
also use latest Go in Dockerfile.testing
* update docker readme
* remove wrapping because it will look awful on docker hub
* Quick clean up of docs
- removed a few files that have been deprecated and/or relocated
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* cleanup a few docs
* Add Version to golangci
- added version to golangci because of recent update
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* remove unused for 1.17
* disable new linters
* tm-monitor: tweaked formatting of start time and avg tx throughput.
* tm-monitor: update health when validator number is updated.
* Updated CHANGELOG_PENDING
* Added PR number to CHANGELOG_PENDING.
Improves `tm-monitor` formatting of start time (RFC1123 without unnecessary precision) and avg tx throughput (three decimal places). The old tx throughput display was confusing during local testing where the tx rate is low and displayed as 0.
Also updates the monitor health whenever the validator number changes. It otherwise starts with moderate health and fails to update this once it discovers the validators, leading to incorrect health reporting and invalid uptime statistics. Let me know if you would like me to submit this as a separate PR.
### Before:
```
2019-09-29 20:40:00.992834 +0200 CEST m=+0.024057059 up -92030989600.42%
Height: 2518
Avg block time: 1275.496 ms
Avg tx throughput: 0 per sec
Avg block latency: 2.464 ms
Active nodes: 4/4 (health: moderate) Validators: 4
NAME HEIGHT BLOCK LATENCY ONLINE VALIDATOR
localhost:26657 2518 0.935 ms true true
localhost:26660 2518 0.710 ms true true
localhost:26662 2518 0.708 ms true true
localhost:26664 2518 0.717 ms true true
```
### After:
```
Sun, 29 Sep 2019 20:21:59 +0200 up 100.00%
Height: 2480
Avg block time: 1361.445 ms
Avg tx throughput: 0.735 per sec
Avg block latency: 4.232 ms
Active nodes: 4/4 (health: full) Validators: 4
NAME HEIGHT BLOCK LATENCY ONLINE VALIDATOR
localhost:26657 2480 1.174 ms true true
localhost:26660 2480 1.037 ms true true
localhost:26662 2480 0.981 ms true true
localhost:26664 2480 0.995 ms true true
```
* Develop -> Master
- Some places still had develop instead of master.
closes #4107
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* add one more
Afaik, nobody is using it. I've never used it myself.
1) atomic, os.Write, map, codec benchmarks all seem temporary and to be
of no use in the future.
2) syncing benchmark may be useful, but we're trying to move away from
Bash. Also, the blocks are empty there.
* Remove omitempty from *pb.go
- remove omitempty from *pb.go files
- added command to makefile for every time `make protoc_all` is run
- open question:
- Do we want to further remove omitempty from other places
- https://github.com/tendermint/tendermint/blob/master/rpc/lib/types/types.go#L151
- and other places
ref #3882
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* bring back omitempty to *pb.go
* Update types/tx.go
* custom marshalers
* undo benchmark `omitempty`
* golangci lint fix
* cleanup comments
* changelog_pending entry
- Added a deprecation warning for tm-bench in favor of tm-load-test
- With the merging of this pr we can close tm-bench related issues.
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
- goimports is not used as a tool anymore
- correct me if wrong
- rename devtools folder to merely tools.mk
- remove slate_header.txt
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* Update makefile
- these two files were taken from the sdk with adjustments for usage in TM
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* .phony
* remove slate
* more tools
* circleci update 1/2
* fixed typo
* fixed bash error
* '<<' must be escaped in 2.1
* fixed invalid config
* removed golangci-lint from tools. runs as github app now
* one more issue in Circle config
* some more work on the Circle config
* minor changes
* fiddling with restore cache config
* fiddling with restore cache config
* added commands in circle config. makefile changes
* bash shenanigans
* bash shenanigans
* fighting the config demons
* fighting the config demons, v2
* fighting the config demons, v3
* updating p2p dockerfile
* updating p2p dockerfile
* updating p2p test
* updating p2p test
* testing circle config
* testing circle config
* remove dontcover command
* it's the weekend, custom docker image
* types: add test for block commits with votes for the wrong blockID
* remove
* respond to feedback
* verify the commits as well
* testing table
* Update types/block_test.go
Co-Authored-By: Bot from GolangCI <42910462+golangcibot@users.noreply.github.com>
* Update types/block_test.go
Co-Authored-By: Bot from GolangCI <42910462+golangcibot@users.noreply.github.com>
* gofmt
* test panic case
* Add cleveldb build for Amazon Linux
In attempting to build Tendermint binaries with cleveldb support that we
can use for load testing (see https://github.com/interchainio/got), it
became apparent that we need a bit of a simpler build process for this
kind of executable. Since we're basing our load testing infrastructure
on Amazon Linux, it makes sense to support such a build process for
Amazon Linux in a platform-independent way.
This PR allows one to simply build the Amazon Linux-compatible binary
using Docker on one's local machine. It first builds an Amazon
Linux-based build image with Go v1.12.9, and then it uses that image to
build the cleveldb version of Tendermint.
This should, in theory, be compatible with CentOS too, but that's yet to
be tested.
* Add comment describing the new Makefile target
* Add missing PHONY entry for new Makefile target
* Expand on Makefile comment
* Pin range scope vars
* Don't disable scopelint
This PR repairs linter errors seen when running the following commands:
golangci-lint run --no-config --disable-all=true --enable=scopelint
Contributes to #3262
* Added badger error for Windows
* Update go-built-in.md
* Update docs/guides/go-built-in.md
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* Update docs/guides/go.md
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
When restarting the app, it throws an error saying that it requires truncation of the badger value log because it is corrupted. This seems to be an issue with badger DB, and it is reported here. Setting the truncate value to true solves the problem, but this obviously results in loss of data. I am not sure if this is a concern or not. However, it'd be great if this could be documented for Windows users!
Fixes #3954
This should've been a part of
https://github.com/tendermint/tendermint/pull/3960, but I forgot about
it while reviewing.
A good programmer is someone who always looks both ways before crossing
a one-way street. - Doug Linder
* Correct memory alignment for 32-bit machine.
Switch the `txsBytes` and `rechecking` fields of `CListMempool` around to ensure the correct memory alignment for `atomic.LoadInt64` on 32-bit machines.
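A short sketch of the layout rule, with a hypothetical struct rather than the real CListMempool: fields accessed via sync/atomic 64-bit operations are placed first, because on 32-bit platforms the struct itself is only guaranteed 4-byte alignment.
```go
package mempool

import "sync/atomic"

// stats is a hypothetical struct illustrating the fix: 64-bit atomically
// accessed fields go first so they stay 8-byte aligned on 386/ARM.
type stats struct {
	txsBytes   int64 // accessed with atomic.LoadInt64 / atomic.AddInt64
	rechecking int32 // narrower atomic field follows

	name string // non-atomic fields last
}

func (s *stats) TxsBytes() int64  { return atomic.LoadInt64(&s.txsBytes) }
func (s *stats) AddBytes(n int64) { atomic.AddInt64(&s.txsBytes, n) }
```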
* Update CHANGELOG_PENDING.md for #3968
Fixed #3968: `mempool` memory loading error on 32-bit Ubuntu 16.04 machine.
* Update CHANGELOG_PENDING.md
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* cs: check for SkipTimeoutCommit or wait timeout in handleTxsAvailable
Previously, if create_empty_blocks was set to false, TM sometimes could
create 2 consecutive blocks without actually waiting for timeoutCommit
(1 sec. by default).
```
I[2019-08-29|07:32:43.874] Executed block module=state height=47 validTxs=10 invalidTxs=0
I[2019-08-29|07:32:43.876] Committed state module=state height=47 txs=10 appHash=F88C010000000000
I[2019-08-29|07:32:44.885] Executed block module=state height=48 validTxs=2 invalidTxs=0
I[2019-08-29|07:32:44.887] Committed state module=state height=48 txs=2 appHash=FC8C010000000000
I[2019-08-29|07:32:44.908] Executed block module=state height=49 validTxs=8 invalidTxs=0
I[2019-08-29|07:32:44.909] Committed state module=state height=49 txs=8 appHash=8C8D010000000000
I[2019-08-29|07:32:45.886] Executed block module=state height=50 validTxs=2 invalidTxs=0
I[2019-08-29|07:32:45.895] Committed state module=state height=50 txs=2 appHash=908D010000000000
I[2019-08-29|07:32:45.908] Executed block module=state height=51 validTxs=8 invalidTxs=0
I[2019-08-29|07:32:45.909] Committed state module=state height=51 txs=8 appHash=A08D010000000000
```
This commit fixes that by adding a check to handleTxsAvailable.
Fixes #3908
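A sketch of the guard with hypothetical fields, not the real consensus state: proceed straight to propose only when timeoutCommit may be skipped or has already elapsed; otherwise schedule the remaining wait.
```go
package consensus

import "time"

// state is a hypothetical stand-in carrying only what the check needs.
type state struct {
	skipTimeoutCommit bool
	startTime         time.Time // earliest time the next round may start
	scheduleTimeout   func(d time.Duration)
	enterPropose      func()
}

// handleTxsAvailable no longer jumps straight into proposing a block: unless
// SkipTimeoutCommit is set or timeoutCommit has elapsed, it waits.
func (s *state) handleTxsAvailable() {
	if !s.skipTimeoutCommit && time.Now().Before(s.startTime) {
		s.scheduleTimeout(time.Until(s.startTime))
		return
	}
	s.enterPropose()
}
```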
* update changelog
* schedule timeoutCommit if StartTime is in the future
or SkipTimeoutCommit=true && we DON'T have all the votes
* fix TestReactorCreatesBlockWhenEmptyBlocksFalse
by checking if we have LastCommit or not
* address Ethan's comments
* Remove unnecessary type conversions
* Consolidate repeated strings into consts
* Clothe return statements
* Update blockchain/v1/reactor_fsm_test.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
This PR repairs linter errors seen when running the following commands:
golangci-lint run --no-config --disable-all=true --enable=unconvert
golangci-lint run --no-config --disable-all=true --enable=goconst
golangci-lint run --no-config --disable-all=true --enable=nakedret
Contributes to #3262
- dependabot couldn't update a few deps as they needed to be updated together.
- PRs #3946 and #3938 can be closed with this PR
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* update gogo/protobuf version from v1.2.1 to v1.3.0
Also errcheck from v1.1.0 to v1.2.0
See full diff in
- https://github.com/gogo/protobuf/compare/v1.2.1...v1.3.0
- https://github.com/kisielk/errcheck/compare/v1.1.0...v1.2.0
Changelog:
Tested versions:
go 1.12.9
protoc 3.7.1
Improvements:
plugin/stringer - Handle repeated and/or nullable types a bit better now.
plugin/size - Remove the loop in sovXXX by using bit twiddling.
Thanks: https://github.com/apelisse
plugin/marshalto - Implemented a reverse marshal strategy which allows for faster marshalling. This now avoids a recursive (and repeated) call to Size().
Thanks: https://github.com/apelisse
plugin/compare - Added support for oneof types.
Bug fixes:
protoc-gen-gogo/generator - Fix assignment to entry in nil map.
Thanks: https://github.com/tgulacsi
protoc-gen-gogo/generator - Allows plugins to call RecordTypeUse without panicking.
Thanks: https://github.com/fedenusy
proto/extensions - Fixed set extension regression. We did not clear the extensions before setting.
io/uint32 - fix uint32reader bug that causes ReadMsg to recreate buffer when lengths are the same.
Thanks: https://github.com/SebiSujar
proto/table_merge: Fix merge of non-nullable slices.
Thanks: https://github.com/euroelessar
Upstream commits:
merged in golang/protobuf commit 318d17de72747ed1c16502681db4b2bb709a92d0 - Add UnimplementedServer for server interface
merged in golang/protobuf commit b85cd75de734650db18a99a943fe351d41387800 - protoc-gen-go/grpc: inline errUnimplemented function
merged in golang/protobuf commit d3c38a4eb4970272b87a425ae00ccc4548e2f9bb - protoc-gen-go/grpc: use status and code packages only if needed
merged in golang/protobuf commit e91709a02e0e8ff8b86b7aa913fdc9ae9498e825 - fix indentation in jsonpb with Any messages
merged in golang/protobuf commit 8d0c54c1246661d9a51ca0ba455d22116d485eaa - protoc-gen-go: generate XXX_OneofWrappers instead of XXX_OneofFuncs
Misc:
extensions.md - Markdown update.
Thanks: https://github.com/TennyZhuang
Readme.md - Added user.
go/protoc update - Updated to go1.12.x and protoc 3.7.1
Makefile update - fix go vet shadow tool reference
test/mixbench - Update mixbench tool. Expose runnable benchmarks via flags.
* update certstrap
* update golangci-lint from v1.13.2 to v1.17.1
* update gox as well
* test
* comment out golangci temporary
* comment out go-deadlock deps
When using the RPC client in my test suite (with -race enabled), I do a lot of Subscribe/Unsubscribe operations; at some point (randomly) the race detector returns the following warning:
```
WARNING: DATA RACE
Read at 0x00c0009dbe30 by goroutine 31:
  runtime.mapiterinit()
      /usr/local/go/src/runtime/map.go:804 +0x0
  github.com/tendermint/tendermint/rpc/client.(*WSEvents).redoSubscriptionsAfter()
      /go/pkg/mod/github.com/tendermint/tendermint@v0.31.5/rpc/client/httpclient.go:364 +0xc0
  github.com/tendermint/tendermint/rpc/client.(*WSEvents).eventListener()
      /go/pkg/mod/github.com/tendermint/tendermint@v0.31.5/rpc/client/httpclient.go:393 +0x3c6
```
It turns out that redoSubscriptionsAfter is not protecting access to the subscriptions map.
The following change protects read access to the subscriptions map behind the mutex.
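A sketch of the guarded read, with a hypothetical trimmed-down type instead of the real WSEvents: the same mutex the writers use is read-locked before iterating the subscriptions map.
```go
package wsclient

import "sync"

// events is a hypothetical stand-in for the WebSocket event client.
type events struct {
	mtx           sync.RWMutex
	subscriptions map[string]string // query -> subscription ID
}

// redoSubscriptionsAfter snapshots the queries under the lock, then
// resubscribes outside it, so map iteration never races with writers.
func (e *events) redoSubscriptionsAfter(resubscribe func(query string)) {
	e.mtx.RLock()
	queries := make([]string, 0, len(e.subscriptions))
	for q := range e.subscriptions {
		queries = append(queries, q)
	}
	e.mtx.RUnlock()

	for _, q := range queries {
		resubscribe(q)
	}
}
```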
* manually swagging
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* three definitions with polymorphism
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* added blockchain and block
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* low quality generation, commit, block_response and validators
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* genesis and consensus states endpoints
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* fix indentation
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* consensus parameters
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* fix indentation
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* add height to consensus parameters endpoint
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* unconfirmed_txs and num_unconfirmed_txs
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* add missing query parameter
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* add ABCI queries
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* added index document for swagger documentation
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* add missing routes
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* contract tests added on CCI
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* contract tests job should be in the test suite
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* simplify requirements to test
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* typo
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* build is a prerequisite to start localnet
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* reduce nodejs size, move goodman to get_tools, add docs, fix comments
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* Update scripts/get_tools.sh
That's cleaner, thanks!
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* xz not supported by cci image, let's keep it simple
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* REMOVE-indirect debug of CCI paths
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* dirty experiment, volume is empty but binary has been produced
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* dirty experiment, volume is empty but binary has been produced
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* dirty experiment going on
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* locally works, CCI has difficulties with second-layer container volumes
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* restore experiment, use machine instead of docker for contract tests
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* simplify a bit
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* rollback on machine golang
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* Document the changes
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* Changelog
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* comments
Signed-off-by: Karoly Albert Szabo <szabo.karoly.a@gmail.com>
* Initial commit of lite client with weak subjectivity ADR
* Apply suggestions from code review
Co-Authored-By: Marko <marbar3778@yahoo.com>
* Update docs/architecture/adr-044-lite-client-with-weak-subjectivity.md
Co-Authored-By: Christopher Goes <cwgoes@pluranimity.org>
* Apply suggestions from code review
Co-Authored-By: Christopher Goes <cwgoes@pluranimity.org>
* Apply suggestions from code review
Co-Authored-By: Anca Zamfir <ancazamfir@users.noreply.github.com>
* fix typo and format the code block
* address cwgoes comments
* add Positive/Negative points
* change status to accepted
* Apply suggestions from code review
Co-Authored-By: Christopher Goes <cwgoes@pluranimity.org>
* link the spec and explain headers caching
* Add test for bcBlockRequestMessage ValidateBasic method
* Add test for bcNoBlockResponseMessage ValidateBasic method
* Add test for bcStatusRequestMessage ValidateBasic method
* Add test for bcStatusResponseMessage ValidateBasic method
* Add blockchain v1 reactor ValidateBasic tests
* Add test for NewRoundStepMessage ValidateBasic method
* Add test for NewValidBlockMessage ValidateBasic method
* Test BlockParts Size
* Import cmn package
* Add test for ProposalPOLMessage ValidateBasic method
* Add test for BlockPartMessage ValidateBasic method
* Add test for HasVoteMessage ValidateBasic method
* Add test for VoteSetMaj23Message ValidateBasic method
* Add test for VoteSetBitsMessage ValidateBasic method
* Fix linter errors
* Improve readability
* Add test for BaseConfig ValidateBasic method
* Add test for RPCConfig ValidateBasic method
* Add test for P2PConfig ValidateBasic method
* Add test for MempoolConfig ValidateBasic method
* Add test for FastSyncConfig ValidateBasic method
* Add test for ConsensusConfig ValidateBasic method
* Add test for InstrumentationConfig ValidateBasic method
* Add test for BlockID ValidateBasic method
* Add test for SignedHeader ValidateBasic method
* Add test for MockGoodEvidence and MockBadEvidence ValidateBasic methods
* Remove debug logging
Co-Authored-By: Marko <marbar3778@yahoo.com>
* Update MempoolConfig field
* Test a single struct field at a time, for maintainability
Fixes#2740
* init of (2/2) common errors
* Remove instances of cmn.Error (2/2)
- Replace usage of cmnError and errorWrap
- ref #3862
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* comment wording
* simplify IsErrXXX functions
* log panic along with stopping the MConnection
+ use `trySend` to replicate peer sending
+ expose `next()` as a chan of events as output
+ expose `final()` as a chan of error, for the final error
+ add `ready()` as chan struct when routine is ready
* (1/2) of replace errors.go with github.com/pkg/errors
ref #3862
- step one in removing instances of errors.go in favor of github.com/pkg/errors
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* gofmt
* add in /store
+ ensure that we stop accepting messages once `stop` has been called
to avoid the case in which we attempt to write to a channel which
has already been closed
+ `routine.send` returns false when routine is not running
+ this will prevent panics sending to channels which have been
closed
+ Make output channels routine-specific, removing the risk of someone
writing to a channel which was closed by another routine.
+ consistency changes between the routines and the demuxer
* Add Schedule:
+ The schedule is a data structure used to determine the optimal
schedule of requests to the optimal set of peers in order to perform
fast-sync as fast and efficiently as possible.
* Add some doc strings
* fix golangci
* remove globals from tests
This PR is related to #3107 and a continuation of #3351
It is important to emphasise that in the original privval design, client/server and listening/dialing roles are inverted and do not follow a conventional interaction.
Given two hosts A and B:
Host A is listener/client
Host B is dialer/server (contains the secret key)
When A requires a signature, it needs to wait for B to dial in before it can issue a request.
A only accepts a single connection and any failure leads to dropping the connection and waiting for B to reconnect.
The original rationale behind this design was based on security.
Host B only allows outbound connections to a list of whitelisted hosts.
It is not possible to reach B unless B dials in. There are no listening/open ports in B.
This PR results in the following changes:
Refactors ping/heartbeat to avoid previously existing race conditions.
Separates transport (dialer/listener) from signing (client/server) concerns to simplify workflow.
Unifies and abstracts away the differences between unix and tcp sockets.
A single signer endpoint implementation unifies connection handling code (read/write/close/connection obj)
The signer request handler (server side) is customizable to increase testability.
Updates and extends unit tests
A high level overview of the classes is as follows:
Transport (endpoints): The following classes take care of establishing a connection
SignerDialerEndpoint
SignerListeningEndpoint
SignerEndpoint groups common functionality (read/write/timeouts/etc.)
Signing (client/server): The following classes take care of exchanging request/responses
SignerClient
SignerServer
This PR also closes #3601
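For orientation, a rough structural sketch of how the classes listed above relate (names approximated from the overview and the file names in the commits below; fields and methods simplified and assumed; this is not the real privval source):

```go
package privval

import (
	"net"
	"time"
)

// signerEndpoint groups the connection handling shared by both endpoints:
// a single conn plus read/write helpers with timeouts.
type signerEndpoint struct {
	conn    net.Conn
	timeout time.Duration
}

func (e *signerEndpoint) write(b []byte) error {
	if err := e.conn.SetWriteDeadline(time.Now().Add(e.timeout)); err != nil {
		return err
	}
	_, err := e.conn.Write(b)
	return err
}

// SignerDialerEndpoint dials out to the other host (host B's side).
type SignerDialerEndpoint struct {
	signerEndpoint
	dial func() (net.Conn, error)
}

// SignerListenerEndpoint waits for the other host to dial in (host A's side).
type SignerListenerEndpoint struct {
	signerEndpoint
	listener net.Listener
}

// SignerClient issues signing requests over an endpoint.
type SignerClient struct{ endpoint *SignerListenerEndpoint }

// SignerServer answers signing requests; the handler is pluggable to
// increase testability, as described above.
type SignerServer struct {
	endpoint *SignerDialerEndpoint
	handle   func(req []byte) ([]byte, error)
}
```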
Commits:
* refactoring - work in progress
* reworking unit tests
* Encapsulating and fixing unit tests
* Improve tests
* Clean up
* Fix/improve unit tests
* clean up tests
* Improving service endpoint
* fixing unit test
* fix linter issues
* avoid invalid cache values (improve later?)
* complete implementation
* wip
* improved connection loop
* Improve reconnections + fixing unit tests
* addressing comments
* small formatting changes
* clean up
* Update node/node.go
Co-Authored-By: jleni <juan.leni@zondax.ch>
* Update privval/signer_client.go
Co-Authored-By: jleni <juan.leni@zondax.ch>
* Update privval/signer_client_test.go
Co-Authored-By: jleni <juan.leni@zondax.ch>
* check during initialization
* dropping connecting when writing fails
* removing break
* use t.log instead
* unifying and using cmn.GetFreePort()
* review fixes
* reordering and unifying drop connection
* closing instead of signalling
* refactored service loop
* removed superfluous brackets
* GetPubKey can return errors
* Revert "GetPubKey can return errors"
This reverts commit 68c06f19b4.
* adding entry to changelog
* Update CHANGELOG_PENDING.md
Co-Authored-By: jleni <juan.leni@zondax.ch>
* Update privval/signer_client.go
Co-Authored-By: jleni <juan.leni@zondax.ch>
* Update privval/signer_dialer_endpoint.go
Co-Authored-By: jleni <juan.leni@zondax.ch>
* Update privval/signer_dialer_endpoint.go
Co-Authored-By: jleni <juan.leni@zondax.ch>
* Update privval/signer_dialer_endpoint.go
Co-Authored-By: jleni <juan.leni@zondax.ch>
* Update privval/signer_dialer_endpoint.go
Co-Authored-By: jleni <juan.leni@zondax.ch>
* Update privval/signer_listener_endpoint_test.go
Co-Authored-By: jleni <juan.leni@zondax.ch>
* updating node.go
* review fixes
* fixes linter
* fixing unit test
* small fixes in comments
* addressing review comments
* addressing review comments 2
* reverting suggestion
* Update privval/signer_client_test.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* Update privval/signer_client_test.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* Update privval/signer_listener_endpoint_test.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* do not expose brokenSignerDialerEndpoint
* clean up logging
* unifying methods
shorten test time
signer also drops
* reenabling pings
* improving testability + unit test
* fixing go fmt + unit test
* remove unused code
* Addressing review comments
* simplifying connection workflow
* fix linter/go import issue
* using base service quit
* updating comment
* Simplifying design + adjusting names
* fixing linter issues
* refactoring test harness + fixes
* Addressing review comments
* cleaning up
* adding additional error check
Add gocritic as a linter
The linting is not complete; should I complete it in this PR or in a follow-up?
23 files have been touched, so it may be better to do it in a follow-up PR.
Commits:
* Add gocritic to linting
- Added gocritic to linting
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* gocritic
* pr comments
* remove switch in cmdBatch
* node: allow replacing existing p2p.Reactor(s)
using [`CustomReactors`
option](https://godoc.org/github.com/tendermint/tendermint/node#CustomReactors).
Warning: beware of accidental name clashes. Here is the list of existing
reactors: MEMPOOL, BLOCKCHAIN, CONSENSUS, EVIDENCE, PEX.
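For illustration, a hedged sketch of how a caller might use the option (the reactor values here are placeholders; in a real program they would be concrete p2p.Reactor implementations, and the option would be passed to node.NewNode):

```go
package main

import (
	"github.com/tendermint/tendermint/node"
	"github.com/tendermint/tendermint/p2p"
)

func main() {
	// Hypothetical reactors; in a real program these would be concrete
	// implementations of p2p.Reactor.
	var myMempoolReactor, myAppReactor p2p.Reactor

	// Replace the built-in MEMPOOL reactor and register an extra one.
	// Beware of name clashes with MEMPOOL, BLOCKCHAIN, CONSENSUS,
	// EVIDENCE and PEX, as noted above.
	opt := node.CustomReactors(map[string]p2p.Reactor{
		"MEMPOOL": myMempoolReactor,
		"MYAPP":   myAppReactor,
	})

	// opt would then be passed as one of the options to node.NewNode(...).
	_ = opt
}
```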
* check the absence of "CUSTOM" prefix
* merge 2 tests
* add doc.go to node package
* Do not write 'Couldn't connect to any seeds' if there are no seeds
* changelog
* remove privValUpgrade
* Fix typo in changelog
* Update CHANGELOG_PENDING.md
Co-Authored-By: Marko <marbar3778@yahoo.com>
I'm setting up all peers dynamically by calling dial_peers, so p2p.seeds in configs is empty, and I'm seeing this error logged a lot.
* p2p: fix false-positive error logging when stopping connections
This changeset fixes two types of false-positive errors occurring during
connection shutdown.
The first occurs when the process invokes FlushStop() or Stop() on a
connection. While the previous behavior did properly wait for the sendRoutine
to finish, it did not notify the recvRoutine that the connection was shutting
down. This would cause the recvRoutine to receive an error when reading and
log this error. The changeset fixes this by notifying the recvRoutine that
the connection is shutting down.
The second occurs when the connection is terminated (gracefully) by the other side.
The recvRoutine would get an EOF error during the read, log it, and stop the connection
with an error. The changeset detects EOF and gracefully shuts down the connection.
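A simplified sketch of the receive-loop pattern described above (not the actual MConnection code): treat EOF and an explicit quit signal as graceful shutdown rather than errors.

```go
package conn

import (
	"errors"
	"io"
	"net"
)

// recvLoop illustrates the two fixes: stop quietly when the connection is
// being shut down locally (quit closed) or when the remote side closes the
// connection cleanly (io.EOF); report anything else as a real error.
func recvLoop(c net.Conn, quit <-chan struct{}, onError func(error)) {
	buf := make([]byte, 1024)
	for {
		select {
		case <-quit:
			return // local shutdown in progress: not an error
		default:
		}

		n, err := c.Read(buf)
		if err != nil {
			if errors.Is(err, io.EOF) {
				return // remote closed gracefully: not an error
			}
			onError(err) // genuine failure: remove the peer, etc.
			return
		}
		_ = buf[:n] // hand the bytes off to the demultiplexer in real code
	}
}
```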
* bring back the comment about flushing
* add changelog entry
* listen for quitRecvRoutine too
* we have to call stopForError
Otherwise peer won't be removed from the peer set and maybe readded
later.
cleanup to add linter
grpc change:
https://godoc.org/google.golang.org/grpc#WithContextDialer
https://godoc.org/google.golang.org/grpc#WithDialer
grpc/grpc-go#2627
prometheous change:
due to UninstrumentedHandler, being deprecated in the future
empty branch = empty if or else statement
didn't delete them entirely, just commented them out
couldn't find a reason to have them
could not replicate the issue #3406
but if we want to keep the body commented out, then we should comment out the if statement as well
* Renamed wire.go to codec.go
- Wire was the previous name of amino
- Codec describes the file better than `wire` & `amino`
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* ide error
* rename amino.go to codec.go
* go routines in blockchain reactor
* Added reference to the go routine diagram
* Initial commit
* cleanup
* Undo testing_logger change, committed by mistake
* Fix the test loggers
* pulled some fsm code into pool.go
* added pool tests
* changes to the design
added block requests under peer
moved the request trigger in the reactor poolRoutine, triggered now by a ticker
in general moved everything required for making block requests smarter in the poolRoutine
added a simple map of heights to keep track of what will need to be requested next
added a few more tests
* send errors to FSM in a different channel than blocks
send errors (RemovePeer) from switch on a different channel than the
one receiving blocks
renamed channels
added more pool tests
* more pool tests
* lint errors
* more tests
* more tests
* switch fast sync to new implementation
* fixed data race in tests
* cleanup
* finished fsm tests
* address golangci comments :)
* address golangci comments :)
* Added timeout on next block needed to advance
* updating docs and cleanup
* fix issue in test from previous cleanup
* cleanup
* Added termination scenarios, tests and more cleanup
* small fixes to adr, comments and cleanup
* Fix bug in sendRequest()
If we tried to send a request to a peer not present in the switch, a
missing continue statement caused the request to be blackholed in a peer
that was removed and never retried.
While this bug was manifesting, the reactor kept asking for other
blocks that would be stored and never consumed. Added the number of
unconsumed blocks in the math for requesting blocks ahead of current
processing height so eventually there will be no more blocks requested
until the already received ones are consumed.
* remove bpPeer's didTimeout field
* Use distinct err codes for peer timeout and FSM timeouts
* Don't allow peers to update with lower height
* review comments from Ethan and Zarko
* some cleanup, renaming, comments
* Move block execution in separate goroutine
* Remove pool's numPending
* review comments
* fix lint, remove old blockchain reactor and duplicates in fsm tests
* small reorg around peer after review comments
* add the reactor spec
* verify block only once
* review comments
* change to int for max number of pending requests
* cleanup and godoc
* Add configuration flag fast sync version
* golangci fixes
* fix config template
* move both reactor versions under blockchain
* cleanup, golint, renaming stuff
* updated documentation, fixed more golint warnings
* integrate with behavior package
* sync with master
* gofmt
* add changelog_pending entry
* move to improvements
* suggestion to changelog entry
* ADR TOC in readme.md
* Added A TOC to the Readme.md of ADR Section
- Added a table of contents to the Readme of the architecture section.
- Easier to traverse when you know what is there.
- If the ADRs become viewable online, it would help guide the user
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* add tm-cmn to subprojects
* normalize word
* docs: go built-in guide
* fix package imports, add badger db, simplify Query
* newTendermint function
* working example
* finish the first guide
* add one more note
* add the second Golang guide - external ABCI app
* fix typos
* Remove go func {}()
closes #357
- Remove go func(){}() that caused a race condition
- To reproduce
- add -race in make file to `install_abci`
- Remove `CGO_ENABLED=0` & add -race to `install`
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* remove -race
* fix data race
also, reorder callbacks similarly to socket client
* Release branch for v0.32.1
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* add links to changelog
* wording change
* Apply suggestions from code review
Comment from PR
Co-Authored-By: Ethan Buchman <ethan@coinculture.info>
* add link bump abci Version
* include doc change for abci/app
* moved abci change to features as it doesn't break the ABCI
* pr comments (#3803)
* pr comments
* abci changelog change
* Marko/update release1 (#3806)
* pr comments
* remove empty space
* more minor cleanup of libs
Remove unused `version.go`, `assert.go` and `libs/circle.yml`
* Update types/vote_set_test.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* spelling change
- The removed functions are not used in the iavl, cosmos-sdk and tendermint repos
- Code hygiene `whoop whoop`
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* p2p: extract ID validation into a separate func
- NewNetAddress panics if ID is invalid
- NetAddress#Valid returns an error
- remove ErrAddrBookInvalidAddrNoID
Fixes#2722
* p2p: remove repetitive check in ReceiveAddrs
* fix netaddress test
* Remove file heap.go
- cmn.Heap is not being used in cosmos-sdk, iavl nor tendermint repo.
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* changelog entry
* Update CHANGELOG_PENDING.md
closes#2432
* Add deterministic buildsystem
* Update CircleCI config
* Enable build on all branches for testing purposes
* Revert "Enable build on all branches for testing purposes"
This reverts commit bf5cf66da9.
* Remove develop from branch filters
* Remove dangling reference to develop
* Upload binaries too
* Build for stable branches too
* Quick link fixes throughout docs and repo
- used markdown link tester to find broken links
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* minor change remove slash
* pr comments
* minor fix
* remove docker.develop
* remove master tag
* [adr] First draft on adr-042
* fix diagram urls
* Update docs/architecture/adr-042-blockchain-riri-org.md
Co-Authored-By: Marko <marbar3778@yahoo.com>
* Update docs/architecture/adr-042-blockchain-riri-org.md
Co-Authored-By: Marko <marbar3778@yahoo.com>
* Update docs/architecture/adr-042-blockchain-riri-org.md
Co-Authored-By: Marko <marbar3778@yahoo.com>
* add go syntax highlight
* more highlighting
* consistency fixes
* Add references
* new adr number
* Fixes based on feedback
* aditional state info
* Add details on getSchedule
* replace spaces with tabs
* fixes based on feedback
* add clarity around r.msgs
* clarify block processing
* fix off by one error
* additional details on ioRoutine
* Update docs/architecture/adr-043-blockchain-riri-org.md
Co-Authored-By: Anca Zamfir <ancazamfir@users.noreply.github.com>
* change invocation of NewNode across
* custom reactor name are prefixed with CUSTOM_
* upgate changelog pending
* improve comments
* node: refactor NewNode to use functional options
Follow up from #3512
Specifically:
cli.conn.Close() need not be under the mutex (#3512 (comment))
call the reqRes callback after the resCb so they always happen in the same order (#3512)
Fixes#3513
Calling ensurePeers outside of ensurePeersRoutine can lead to nodes
disconnecting from us due to "sent next PEX request too soon" error.
Solution is to just dial addrs we got from src instead of calling
ensurePeers.
Refs #2093, Fixes #3338
As per #2127, this refactors the RequestCheckTx ProtoBuf struct to allow for a flag indicating whether a query is a recheck or not (and allows for possible future, more nuanced states).
In order to pass this extended information through to the ABCI app, the proxy.AppConnMempool (and, for consistency, the proxy.AppConnConsensus) interface seems to need to be refactored along with abcicli.Client.
And, as per this comment, I've made the following modification to the protobuf definition for the RequestCheckTx structure:
enum CheckTxType {
  New = 0;
  Recheck = 1;
}
message RequestCheckTx {
  bytes tx = 1;
  CheckTxType type = 2;
}
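On the application side, the new field lets CheckTx distinguish fresh submissions from rechecks. A minimal sketch, assuming the v0.32-era abci/types package (the helper functions below are hypothetical placeholders, not part of the library):

```go
package app

import (
	"github.com/tendermint/tendermint/abci/types"
)

type App struct {
	types.BaseApplication
}

// CheckTx skips expensive validation on rechecks: those txs were already
// fully checked when they first entered the mempool.
func (a *App) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
	if req.Type == types.CheckTxType_Recheck {
		// Hypothetical cheap re-validation against current state only.
		if !stillApplicable(req.Tx) {
			return types.ResponseCheckTx{Code: 1, Log: "no longer applicable"}
		}
		return types.ResponseCheckTx{Code: 0}
	}

	// CheckTxType_New: run the full validation path.
	if !validateTx(req.Tx) {
		return types.ResponseCheckTx{Code: 1, Log: "invalid tx"}
	}
	return types.ResponseCheckTx{Code: 0}
}

// stillApplicable and validateTx are placeholders for application logic.
func stillApplicable(tx []byte) bool { return len(tx) > 0 }
func validateTx(tx []byte) bool      { return len(tx) > 0 }
```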
* Refactor ABCI CheckTx to notify of recheck
As per #2127, this refactors the `RequestCheckTx` ProtoBuf struct to allow for:
1. a flag indicating whether a query is a recheck or not (and allows for
possible future, more nuanced states)
2. an `additional_data` bytes array to provide information for those more
nuanced states.
In order to pass this extended information through to the ABCI app, the
`proxy.AppConnMempool` (and, for consistency, the
`proxy.AppConnConsensus`) interface seems to need to be refactored.
Commits:
* Fix linting issue
* Add CHANGELOG_PENDING entry
* Remove extraneous explicit initialization
* Update ABCI spec doc to include new CheckTx params
* Rename method param for consistency
* Rename CheckTxType enum values and remove additional_data param
* Update to contributing.md
- Add in section for Draft PR, instead of WIP
- Add make lint as part of the commit process.
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* mistake
* minor word change
* changelog and version
* Add section on ABCI changes
* Update ABCI upgrade section to include events examples
* update upgrading
* more upgrading and changelog
* update changelog from pending
* refer to #rpc_changes
* minor word changes
* Added a disclaimer so the user is aware of the binary
- added disclaimer for `-s -w`
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* move up to compile
* Added flags '-s -w' to buildflags
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* added a condition for no strip
* minor changes
* on call discussions
* on call discussion v2
* Expose priv validators for use in testing
* Generalize block header validation test past height 1
* Remove ineffectual assignment
* Remove redundant SaveState call
* Reorder comment for clarity
* Use the block executor ApplyBlock function instead of implementing a stripped-down version of it
* Remove commented-out code
* Remove unnecessary test
The required tests already appear to be implemented (implicitly) through
the TestValidateBlockHeader test.
* Allow for catching of specific error types during TestValidateBlockCommit
* Make return error testable
* Clean up and add TestValidateBlockCommit code
* Fix formatting
* Extract function to create a new mock test app
* Update comment for clarity
* Fix comment
* Add skeleton code for evidence-related test
* Allow for addressing priv val by address
* Generalize test beyond a single validator
* Generalize TestValidateBlockEvidence past first height
* Reorder code to clearly separate tests and utility code
* Use a common constant for stop height for testing in state/validation_test.go
* Refactor errors to resemble existing conventions
* Fix formatting
* Extract common helper functions
Having the tests littered with helper functions makes them less easily
readable imho, so I've pulled them out into a separate file. This also
makes it easier to see what helper functions are available during
testing, so we minimize the chance of duplication when writing new
tests.
* Remove unused parameter
* Remove unused parameters
* Add field keys
* Remove unused height constant
* Fix typo
* Fix incorrect return error
* Add field keys
* Use separate package for tests
This refactors all of the state package's tests into a state_test
package, so as to keep any usage of the state package's internal methods
explicit.
Any internal methods/constants used by tests are now explicitly exported
in state/export_test.go
* Refactor: extract helper function to make, validate, execute and commit a block
* Rename state function to makeState
* Remove redundant constant for number of validators
* Refactor mock evidence registration into TestMain
* Remove extraneous nVals variable
* Replace function-level TODOs with file-level TODO and explanation
* Remove extraneous comment
* Fix linting issues brought up by GolangCI (pulled in from latest merge from develop)
* Refactor signature of Application.CheckTx
* Refactor signature of Application.DeliverTx
* Refactor example variable names for clarity and consistency
* Rename method variables for consistency
* Rename method variables for consistency
* add a changelog entry
* update docs
* node: fix a bug where `nil` is recorded as node's address
Solution
AddOurAddress when we know it
sw.NetAddress is nil in createAddrBookAndSetOnSwitch
it's set by n.transport.Listen function, which is called during start
Fixes#3716
* use addr instead of n.sw.NetAddress
* add both ExternalAddress and ListenAddress as our addresses
In Python 3 the command outlined in this doc `import codecs; codecs.decode("YWJjZA==", 'base64').decode('ascii')` throws an error:
TypeError: decoding with 'base64' codec failed (TypeError: expected bytes-like object, not str); you need to add 'b' before the encoded string:
`import codecs; codecs.decode(b"YWJjZA==", 'base64').decode('ascii')` to make it work
## PR
This PR introduces a fundamental breaking change to the structure of ABCI response and tx tags and the way they're processed. Namely, the SDK can support more complex and aggregated events for distribution and slashing. In addition, block responses can include duplicate keys in events.
Implement new Event type. An event has a type and a list of KV pairs (ie. list-of-lists). Typical events may look like:
"rewards": [{"amount": "5000uatom", "validator": "...", "recipient": "..."}]
"sender": [{"address": "...", "balance": "100uatom"}]
The events are indexed by {event.type}.{event.attribute[i].key}/.... In this case a client would subscribe or query for rewards.recipient='...'
ABCI response types and related types now include Events []Event instead of Tags []cmn.KVPair.
PubSub logic now publishes/matches against map[string][]string instead of map[string]string to support duplicate keys in response events (from #1385). A match is successful if the value is found in the slice of strings.
closes: #1859, closes: #2905
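To make the matching change concrete without pulling in the pubsub package, here is a self-contained sketch of the new semantics: each composite key maps to a slice of values, and a condition matches if any value in the slice matches.

```go
package main

import "fmt"

// events maps "{event.type}.{attribute.key}" to every value seen for that
// key in the block/tx results; duplicates are preserved.
type events map[string][]string

// matches reports whether the query condition key = value is satisfied:
// with the new semantics, a single matching element in the slice is enough.
func matches(evs events, key, value string) bool {
	for _, v := range evs[key] {
		if v == value {
			return true
		}
	}
	return false
}

func main() {
	evs := events{
		"rewards.recipient": {"addr1", "addr2"}, // duplicate keys now allowed
		"sender.address":    {"addr3"},
	}
	fmt.Println(matches(evs, "rewards.recipient", "addr2")) // true
	fmt.Println(matches(evs, "rewards.recipient", "addr9")) // false
}
```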
## Commits:
* Implement Event ABCI type and updates responses to use events
* Update messages_test.go
* Update kvstore.go
* Update event_bus.go
* Update subscription.go
* Update pubsub.go
* Update kvstore.go
* Update query logic to handle slice of strings in events
* Update Empty#Matches and unit tests
* Update pubsub logic
* Update EventBus#Publish
* Update kv tx indexer
* Update godocs
* Update ResultEvent to use slice of strings; update RPC
* Update more tests
* Update abci.md
* Check for key in validateAndStringifyEvents
* Fix KV indexer to skip empty keys
* Fix linting errors
* Update CHANGELOG_PENDING.md
* Update docs/spec/abci/abci.md
Co-Authored-By: Federico Kunze <31522760+fedekunze@users.noreply.github.com>
* Update abci/types/types.proto
Co-Authored-By: Ethan Buchman <ethan@coinculture.info>
* Update docs/spec/abci/abci.md
Co-Authored-By: Ethan Buchman <ethan@coinculture.info>
* Update libs/pubsub/query/query.go
Co-Authored-By: Ethan Buchman <ethan@coinculture.info>
* Update match function to match if ANY value matches
* Implement TestSubscribeDuplicateKeys
* Update TestMatches to include multi-key test cases
* Update events.go
* Update Query interface godoc
* Update match godoc
* Add godoc for matchValue
* DRY-up tx indexing
* Return error from PublishWithEvents in EventBus#Publish
* Update PublishEventNewBlockHeader to return an error
* Fix build
* Update events doc in ABCI
* Update ABCI events godoc
* Implement TestEventBusPublishEventTxDuplicateKeys
* Update TestSubscribeDuplicateKeys to be table-driven
* Remove mod file
* Remove markdown from events godoc
* Implement TestTxSearchDeprecatedIndexing test
* Update to using go mod from dep
* Remove references to make get_vendor_deps
* Specify go version
* Set GO111MODULE=on and add -mod=readonly
* Fix exported env
* switch to using go1.12 everywhere
* Fix test scripts
* Typo:
* Prepend GO111MODULE=on
* remove dep cache
* Revert "remove dep cache"
This reverts commit 45117bda
Signed-off-by: Ismail Khoffi <Ismail.Khoffi@gmail.com>
* bring back the dependency cache and change it to cache modules instead
of vendored deps; also:
- bump version for dependency cache
- bump version on pkg-cache (includes modules directory)
Signed-off-by: Ismail Khoffi <Ismail.Khoffi@gmail.com>
* remove some more traces of dep:
- remove Gopkg.(toml | lock)
- update contributing guidelines
- set global default in circleci (GO111MODULE=on)
Signed-off-by: Ismail Khoffi <Ismail.Khoffi@gmail.com>
* global var failed for `test_cover` with
`go: unknown environment setting GO111MODULE=true`
although the var was `GO111MODULE: on`
Signed-off-by: Ismail Khoffi <Ismail.Khoffi@gmail.com>
* Changelog pending entry
Signed-off-by: Ismail Khoffi <Ismail.Khoffi@gmail.com>
* Add bbolt dependency to go.mod
Signed-off-by: Ismail Khoffi <Ismail.Khoffi@gmail.com>
* move -mod=readonly to build flags
BoltDB's accessors will return slices that are only valid for the
lifetime of the transaction. This adds copies where required to prevent
hard to debug crashes (among other things).
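A small sketch of the copy-on-read pattern the fix applies, written against the bbolt API directly (the database path, bucket name and key below are placeholders):

```go
package main

import (
	"fmt"
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	db, err := bolt.Open("example.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var value []byte
	err = db.View(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("tm")) // placeholder bucket name
		if b == nil {
			return nil
		}
		v := b.Get([]byte("some-key"))
		// v is only valid inside this transaction; copy it out.
		value = append([]byte(nil), v...)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%q\n", value)
}
```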
* Move peer behaviour into its own package
* refactor wip
* Adjust API and fix tests
* remove unused test struct
* Better error message
* Restructure:
+ Now that behaviour is its own package, we don't need to include
PeerBehaviour in every type.
+ Split up behaviours and reporters into separate files
* doc string fixes
* Fix minor typos
* Update behaviour/reporter.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* Update behaviour/reporter.go
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
Fixes#3521
The function NewNetAddressStringWithOptionalID is from a time when peer
IDs were optional. They're not anymore. So this should be renamed to
NewNetAddressString and should ensure the ID is provided.
* update changelog
* use NewNetAddress in transport tests
* use NewNetAddress in TestTransportMultiplexAcceptMultiple
So we can have an error code when rpc fails.
* Update CHANGELOG_PENDING.md
Co-Authored-By: Anton Kaliaev <anton.kalyaev@gmail.com>
* wrap correct error
* format code
* Move GenesisDocProvider and DefaultGenesisDocProviderFunc
GenesisDocProvider, being a provider of *types.GenesisDoc, makes sense
to be part of the types package.
DefaultGenesisDocProviderFunc, which relies on *config.Config to produce
a types.GenesisDocProvider, makes sense being part of the config
package.
* Add aliases to avoid breaking node package API
* Revert to original structure
After discussion, it appears as though the best place for the relocated
structures is still in the node package. This means that for the v0.31.6
release and into the future, there will be no changes to these two
entities' APIs.
* update changelog
* Update changelog
- less detail about internal stuff like the `mempool`
- focus BUG FIX entries more on what was fixed rather than how
* minor cleanup.
- remove entry for GenesisDocProvider, it's being undone
* minor fix
* Peer behaviour test tweaks:
Address the remaining test issues mentioned in #3552, notably:
+ switch to p2p_test package
+ Use `Error` instead of `Errorf` when not using formatting
+ Add expected/got errors to
`TestMockPeerBehaviourReporterConcurrency` test
* Peer behaviour equal behaviours test
+ slices of PeerBehaviours should be compared as histograms to
ensure they have the same set of PeerBehaviours at the same
frequency.
* TestEqualPeerBehaviours:
+ Add tests for the equivalence between sets of PeerBehaviours
* Allow testnet hostnames to be overridden
This allows one to specify the `--hostname` flag multiple times, each
time providing an additional custom hostname for a respective peer
(validator or non-validator). This overrides any of the
`--hostname-prefix` or `--starting-ip-address` flags.
The string array approach is taken instead of the string slice approach
(see the pflag docs:
https://godoc.org/github.com/spf13/pflag#StringArray) because the string
slice approach (a comma-separated string) doesn't allow for cleaner
multi-line BASH scripts - where this feature is intended to be used.
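For context, a tiny sketch of the two flag styles using spf13/pflag (not the actual tendermint testnet command):

```go
package main

import (
	"fmt"

	flag "github.com/spf13/pflag"
)

func main() {
	// StringArray: each --hostname flag adds one entry, so multi-line BASH
	// invocations can pass one hostname per flag.
	hostnames := flag.StringArray("hostname", nil,
		"custom hostname for a peer; repeat the flag once per node")

	// A StringSlice flag (the alternative) would instead parse a single
	// comma-separated value: --hostname=host0,host1
	flag.Parse()

	fmt.Println(*hostnames)
}
```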
* Reorder conditional for clarity with simpler earlier return
* Allow for specifying peer hostname suffix
* Quote values in help strings for greater clarity
* Fix command switch
* Add CHANGELOG_PENDING entry for PR
* Allow for unique monikers
The current approach to generating monikers for testnet nodes assigns
the local hostname of the machine on which the testnet config was
generated to all nodes. This results in the same moniker for each and
every node.
This commit makes use of the supplied `--hostname-prefix` and
`--hostname-suffix`, or `--hostname` parameters to generate unique
monikers for each node. Alternatively, another parameter
(`--random-monikers`) allows one to forcibly override all of the other
options with random hexadecimal strings.
* Update CHANGELOG_PENDING entry for new command line switch
* fix peer state init too late
Peer does not have a state yet. We set it in AddPeer.
We need a new interface before mconnection is started
* pex message too soon
fix reconnection pex send too fast:
the error is caused because lastReceivedRequests is still
not deleted when a peer reconnects
* add test case for initpeer
* add prove case
* remove potentially infinite loop
* Update consensus/reactor.go
Co-Authored-By: guagualvcha <baifudong@lancai.cn>
* Update consensus/reactor_test.go
Co-Authored-By: guagualvcha <baifudong@lancai.cn>
* document Reactor interface better
* refactor TestReactorReceiveDoesNotPanicIfAddPeerHasntBeenCalledYet
* fix merge conflicts
* blockchain: remove peer's ID from the pool in InitPeer
Refs #3338
* pex: resetPeersRequestsInfo both upon InitPeer and RemovePeer
* ensure RemovePeer is always called before InitPeer
by removing the peer from the switch last (after we've stopped it and
removed from all reactors)
* add some comments for ConsensusReactor#InitPeer
* fix pex reactor
* format code
* fix spelling
* update changelog
* remove unused methods
* do not clear lastReceivedRequests upon error
only in RemovePeer
* call InitPeer before we start the peer!
* add a comment to InitPeer
* write a test
* use waitUntilSwitchHasAtLeastNPeers func
* bring back timeouts
* Test to ensure Receive panics if InitPeer has not been called
* p2p/pex: consult seeds in crawlPeersRoutine
This changeset alters the startup behavior for crawlPeersRoutine. Previously
the routine would crawl a random selection of peers on startup. For a
new seed node, there are no peers. As a result, new seed nodes are unable
to bootstrap themselves with a list of peers until another node with a list
of peers connects to the seed. If this node relies on the seed node for peers,
then the two will not discover more peers.
This changeset makes the startup behavior for crawlPeersRoutine connect to
any seed nodes. Upon connecting, a request for peers will be sent to the seed node
thus helping bootstrap our seed node.
* p2p/pex: Adjust error message for no peers
Co-Authored-By: Ethan Buchman <ethan@coinculture.info>
* p2p: initial implementation of peer behaviour
* [p2p] re-use newMockPeer
* p2p: add inline docs for peer_behaviour interface
* [p2p] Align PeerBehaviour interface (#3558)
* [p2p] make switchedPeerBehaviour private
* [p2p] make storePeerBehaviour private
* [p2p] Add CHANGELOG_PENDING entry for PeerBehaviour
* [p2p] Adjustment naming for PeerBehaviour
* [p2p] Add coarse lock around storedPeerBehaviour
* [p2p]: Fix non-pointer methods in storedPeerBehaviour
+ Structs with embedded locks must specify all methods with pointer
receivers to avoid creating a copy of the embedded lock.
* [p2p] Thorough refactoring based on comments in #3552
+ Decouple PeerBehaviour interface from Peer by parametrizing
methods with `p2p.ID` instead of `p2p.Peer`
+ Setter methods wrapped in a write lock
+ Getter methods wrapped in a read lock
+ Getter methods on storedPeerBehaviour now take a `p2p.ID`
+ Getter methods return a copy of underlying stored behaviours
+ Add doc strings to public types and methods
* [p2p] make structs public
* [p2p] Test empty StoredPeerBehaviour
* [p2p] typo fix
* [p2p] add TestStoredPeerBehaviourConcurrency
+ Add a test which uses StoredPeerBehaviour in multiple goroutines
to ensure thread-safety.
* Update p2p/peer_behaviour.go
Co-Authored-By: brapse <brapse@gmail.com>
* Update p2p/peer_behaviour.go
Co-Authored-By: brapse <brapse@gmail.com>
* Update p2p/peer_behaviour.go
Co-Authored-By: brapse <brapse@gmail.com>
* Update p2p/peer_behaviour.go
Co-Authored-By: brapse <brapse@gmail.com>
* Update p2p/peer_behaviour.go
Co-Authored-By: brapse <brapse@gmail.com>
* Update p2p/peer_behaviour_test.go
Co-Authored-By: brapse <brapse@gmail.com>
* Update p2p/peer_behaviour.go
Co-Authored-By: brapse <brapse@gmail.com>
* Update p2p/peer_behaviour.go
Co-Authored-By: brapse <brapse@gmail.com>
* Update p2p/peer_behaviour.go
Co-Authored-By: brapse <brapse@gmail.com>
* Update p2p/peer_behaviour.go
Co-Authored-By: brapse <brapse@gmail.com>
* [p2p] field ordering convention
* p2p: peer behaviour refactor
+ Change naming of reporting behaviour to `Report`
+ Remove the responsibility of distinguishing between the categories
of good and bad behaviour and instead focus on reporting behaviour.
* p2p: rename PeerReporter -> PeerBehaviourReporter
for storing batch keys and values in boltDBBatch.
NOTE: batch does not have to be safe for concurrent access. Delete may
be slow, but given it should not be used often, it's ok.
Fixes#3631
* libs/db: conditional compilation
For cleveldb: go build -tags cleveldb
For boltdb: go build -tags boltdb
Fixes#3611
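As a reminder of how tag-gated backends are typically wired up, a sketch follows (the registration hook and constructor below are illustrative stand-ins, not the real libs/db functions):

```go
//go:build cleveldb
// +build cleveldb

// This file only compiles when built with `go build -tags cleveldb`;
// a sibling file guarded by the boltdb tag would register the BoltDB
// backend the same way.
package db

// registerDBCreator and newCLevelDB are illustrative stand-ins for the real
// registration hook and constructor in the db package.
func registerDBCreator(name string, creator func(name, dir string) (interface{}, error)) {}

func newCLevelDB(name, dir string) (interface{}, error) { return nil, nil }

func init() {
	registerDBCreator("cleveldb", newCLevelDB)
}
```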
* document db_backend param better
* remove deprecated LevelDBBackend
* update changelog
* add missing lines
* add new line
* fix TestRemoteDB
* add a line about boltdb tag
* Revert "remove deprecated LevelDBBackend"
This reverts commit 1aa85453f7.
* make PR non breaking
* change DEPRECATED label format
https://stackoverflow.com/a/36360323/820520
* mempool: remove only valid (Code==0) txs on Update
so evil proposers can't drop valid txs in Commit stage.
Also remove invalid (Code!=0) txs from the cache so they can be
resubmitted.
Fixes#3322
@rickyyangz:
In the end of commit stage, we will update mempool to remove all the txs
in current block.
// Update mempool.
err = blockExec.mempool.Update(
    block.Height,
    block.Txs,
    TxPreCheck(state),
    TxPostCheck(state),
)
Assume an account has 3 transactions in the mempool with sequences 100, 101
and 102. An evil proposer can package only the 101 and 102 transactions into
its proposal block and leave 100 still in the mempool; the two txs will then
be removed from all validators' mempools on commit, so the account loses the
two valid txs.
@ebuchman:
In the longer term we may want to do something like #2639 so we can
validate txs before we commit the block. But even in this case we'd only
want to run the equivalent of CheckTx, which means the DeliverTx could
still fail even if the CheckTx passes depending on how the app handles
the ABCI Code semantics. So more work will be required around the ABCI
code. See also #2185
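A condensed sketch of the Update behaviour described above (names and types are illustrative, not the CListMempool internals): only valid (Code==0) txs are removed from the mempool, and invalid ones are dropped from the cache so they can be resubmitted.

```go
package mempool

// deliverTxResponse is a stand-in for the per-tx DeliverTx result.
type deliverTxResponse struct {
	Code uint32
}

type tx = string // illustrative; real txs are byte slices

type pool struct {
	pending map[tx]bool // txs currently in the mempool
	cache   map[tx]bool // txs seen before (used to reject duplicates)
}

// update sketches the described behaviour: valid committed txs (Code == 0)
// are removed from the mempool; invalid ones are removed from the cache so
// the sender can resubmit them once their prerequisites have been committed.
func (p *pool) update(blockTxs []tx, results []deliverTxResponse) {
	for i, t := range blockTxs {
		if results[i].Code == 0 {
			delete(p.pending, t)
		} else {
			delete(p.cache, t)
		}
	}
}
```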
* add changelog entry and tests
* improve changelog message
* reformat code
## Description
also
handle errors from DialPeersAsync
remove nil addr from log msg
fix TestPEXReactorDoesNotDisconnectFromPersistentPeerInSeedMode
This is a follow-up from
#3593 (review)
Fixes most of the #3617, except #3593 (comment)
## Commits
* rpc: /dial_peers: only mark peers as persistent if flag is on
also
- handle errors from DialPeersAsync
- remove nil addr from log msg
- fix TestPEXReactorDoesNotDisconnectFromPersistentPeerInSeedMode
This is a follow-up from
https://github.com/tendermint/tendermint/pull/3593#pullrequestreview-233556909
* remove a call to AddPersistentPeers
TestDialFail will trigger a reconnect
Description:
Add a boltdb implementation of the db interface.
BoltDB is a safe, high-performance embedded KV database based on a B+tree and mmap.
It is now maintained by the etcd-io org: https://github.com/etcd-io/bbolt
It performs much better than an LSM tree (LevelDB) for random and scan reads.
Replaced #3604
Refs #1835
Commits:
* add boltdb for storage
* update boltdb in gopkg.toml
* update bbolt in Gopkg.lock
* dep update Gopkg.lock
* add boltdb bucket when creating BoltDB
* some modifies
* address my own comments
* fix most of the Iterator tests for boltdb
* bolt does not support concurrent writes while iterating
* add WARNING word
* note that ReadOnly option is not supported
* fixes after Ismail's review
Co-Authored-By: melekes <anton.kalyaev@gmail.com>
* panic if View returns error
plus specify bucket in the top comment
* VoteSignBytes builds CanonicalVote
* CommitVotes implements VoteSetReader
- new CommitVotes struct holds both the Commit and the ValidatorSet and
implements VoteSetReader
- ToVote takes a ValidatorSet
* fix TestCommit
* use CommitSig.BlockID
Commits may include votes for a different BlockID, could be nil,
or different altogether. This means we can't use `commit.BlockID`
for reconstructing the sign bytes, since up to -1/3 of the commits
might be for independent BlockIDs. This means CommitSig will need to
include an indicator for what BlockID it signed - if it's not the
committed one or nil, it will need to include it fully in order to be
verified. This is unfortunate but unavoidable so long as we include
votes for non-committed BlockIDs (which we do to track validator
liveness)
* fixes from review
* remove CommitVotes. CommitSig contains address
* remove commit.canonicalVote method
* toVote -> getVote, takes valIdx
* update adr-025
* commit.ToVoteSet -> CommitToVoteSet
* add test
* fix from review
## Description
Refs #2659
Breaking changes in the mempool package:
[mempool] #2659 Mempool now an interface
old Mempool renamed to CListMempool
NewMempool renamed to NewCListMempool
Option renamed to CListOption
MempoolReactor renamed to Reactor
NewMempoolReactor renamed to NewReactor
unexpose TxID method
TxInfo.PeerID renamed to SenderID
unexpose MempoolReactor.Mempool
Breaking changes in the state package:
[state] #2659 Mempool interface moved to mempool package
MockMempool moved to top-level mock package and renamed to Mempool
Non Breaking changes in the node package:
[node] #2659 Add Mempool method, which allows you to access mempool
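A trimmed-down sketch of how the renamed pieces fit together (method set abridged and signatures approximate; see the mempool package for the real interface):

```go
package mempool

// Mempool is an abridged sketch of the new interface described above.
type Mempool interface {
	// CheckTx adds a tx to the pool if it passes the app's CheckTx.
	CheckTx(tx []byte, callback func(code uint32)) error

	// ReapMaxTxs returns up to max txs for inclusion in a proposal.
	ReapMaxTxs(max int) [][]byte

	// Update is called after a block is committed.
	Update(height int64, txs [][]byte) error

	Size() int
	Flush()
}

// CListMempool stands in for the old concrete Mempool; the assertion below
// mirrors the "assert CListMempool impl Mempool" commit.
type CListMempool struct{ /* clist-backed implementation elided */ }

func (m *CListMempool) CheckTx(tx []byte, cb func(code uint32)) error { return nil }
func (m *CListMempool) ReapMaxTxs(max int) [][]byte                   { return nil }
func (m *CListMempool) Update(height int64, txs [][]byte) error       { return nil }
func (m *CListMempool) Size() int                                     { return 0 }
func (m *CListMempool) Flush()                                        {}

var _ Mempool = (*CListMempool)(nil)
```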
## Commits
* move Mempool interface into mempool package
Refs #2659
Breaking changes in the mempool package:
- Mempool now an interface
- old Mempool renamed to CListMempool
Breaking changes to state package:
- MockMempool moved to mempool/mock package and renamed to Mempool
- Mempool interface moved to mempool package
* assert CListMempool impl Mempool
* gofmt code
* rename MempoolReactor to Reactor
- combine everything into one interface
- rename TxInfo.PeerID to TxInfo.SenderID
- unexpose MempoolReactor.Mempool
* move mempool mock into top-level mock package
* add a fixme
TxsFront should not be a part of the Mempool interface
because it leaks implementation details. Instead, we need to come up
with general interface for querying the mempool so the MempoolReactor
can fetch and broadcast txs to peers.
* change node#Mempool to return interface
* save commit = new reactor arch
* Revert "save commit = new reactor arch"
This reverts commit 1bfceacd9d.
* require CListMempool in mempool.Reactor
* add two changelog entries
* fixes after my own review
* quote interfaces, structs and functions
* fixes after Ismail's review
* make node's mempool an interface
* make InitWAL/CloseWAL methods a part of Mempool interface
* fix merge conflicts
* make node's mempool an interface
## Description
Previously only outbound peers could be persistent.
Now, even if the peer is inbound, if it's marked as persistent, when/if conn is lost,
Tendermint will try to reconnect. This part is actually optional and can be reverted.
Plus, seed won't disconnect from inbound peer if it's marked as
persistent. Fixes#3362
## Commits
* make persistent prop independent of conn direction
Previously only outbound peers could be persistent. Now, even if the peer
is inbound, if it's marked as persistent, when/if conn is lost,
Tendermint will try to reconnect.
Plus, seed won't disconnect from inbound peer if it's marked as
persistent. Fixes#3362
* fix TestPEXReactorDialPeer test
* add a changelog entry
* update changelog
* add two tests
* reformat code
* test UnsafeDialPeers and UnsafeDialSeeds
* add TestSwitchDialPeersAsync
* spec: update p2p/config spec
* fixes after Ismail's review
* Apply suggestions from code review
Co-Authored-By: melekes <anton.kalyaev@gmail.com>
* fix merge conflict
* remove sleep from TestPEXReactorDoesNotDisconnectFromPersistentPeerInSeedMode
We don't need it actually.
If we are low on addresses for peering, we need to discover more peers. The
previous behavior would query existing peers; however, if an existing peer
does not participate in peer exchange, then our node will not discover more peers.
This change consults both existing peers as well as seeds when there is a deficit
in address book addresses. This allows for discovering peers through existing channels
as well as via seeds if existing peers do not share addresses.
The node.NewNode method is pretty complex at the moment, and in order to address issues like #3156, we need to simplify the interface for partial node instantiation. In some places, we don't need to build up a full node (like in the node.TestCreateProposalBlock test), but the complexity of such partial instantiation needs to be reduced.
This PR aims to eventually make this easier/simpler.
See also this gist https://gist.github.com/thanethomson/56e1640d057a26186e38ad678a1d114c for some background work done when starting to refactor here.
## Commits:
* [WIP] Refactor node.NewNode to simplify
The `node.NewNode` method is pretty complex at the moment, and in order
to address issues like #3156, we need to simplify the interface for
partial node instantiation. In some places, we don't need to build up a
full node (like in the `node.TestCreateProposalBlock` test), but the
complexity of such partial instantiation needs to be reduced.
This PR aims to eventually make this easier/simpler.
* Refactor state loading and genesis doc provider into state package
* Refactor for clarity of return parameters
* Fix incorrect capitalization of error messages
* Simplify extracted functions' names
* Document optionally-prefixed functions
* Refactor optionallyFastSync for clarity of separation of concerns
* Restructure function for early return
* Restructure function for early return
* Remove dependence on deprecated panic functions
* refactor code a bit more
plus, expose PEXReactor on node
* align logger names
* add a changelog entry
* align logger names 2
* add a note about PEXReactor returning nil
* Remove deprecated PanicXXX functions from codebase
As per discussion over
[here](https://github.com/tendermint/tendermint/pull/3456#discussion_r278423492),
we need to remove these `PanicXXX` functions and eliminate our
dependence on them. In this PR, each and every `PanicXXX` function call
is replaced with a simple `panic` call.
* add a changelog entry
Should fix#3451, #2723 and #3317.
Test TestResetTimeoutPrecommitUponNewHeight is simplified so it reduces the risk of timeout failure. Furthermore, the timeout we wait for TimeoutEvents is increased, and the timeout value is more precisely computed. This should hopefully decrease the chance of non-deterministic test failures.
This assertion is problematic to ensure consistently due to its dependency on the scheduler. On the other hand, if I am not wrong, the order in which messages are read from the channel respects the order in which messages are written. Therefore, the process will receive 2f+1 precommits that are not all for v (one is for nil), so TriggeredTimeoutPrecommit would be set to true, and we don't need to assert it. I know that it would be better to still assert it, but I don't know how to do it without sleep, and that is ugly and is causing us non-deterministic failures.
* check every block appHash
Fixes#3460
Not really needed, but it would detect if the app changed in a way it
shouldn't have.
* add a changelog entry
* no need to return an error if we panic
* rename methods
* rename methods once again
* add a test
* correct error msg
plus fix a few go-lint warnings
* better panic msg
This should fix#3576 (ran it many times locally but only time will tell). The test actually only checked for the opcode of the error. From the name of the test we actually want to test if we see a timeout after a pre-defined time.
## Commits:
* increase readWrite timeout as it is also used in the `Accept` of the tcp
listener:
- before this caused the readWriteTimeout to kick in (rarely) while Accept
- as a side-effect: remove obsolete time.Sleep: in both listener cases
the Accept will only return after successfully accepting and the timeout
that is supposed to be tested here will be triggered because there is a
read without a write
- see if we actually run into a timeout error (the whole purpose of
this test AFAIU)
Signed-off-by: Ismail Khoffi <Ismail.Khoffi@gmail.com>
* make local test-vars `const`
Signed-off-by: Ismail Khoffi <Ismail.Khoffi@gmail.com>
## Additional comments:
@xla: Confusing how an accept could take longer than that, but assuming a noisy environment full of little docker whales will be slower than what 50 years of silicon are capable of. The only thing I'd be wary of is that we mask structural issues in the code by just bumping the timeout; if we are sensitive towards that, it warrants investigation, but again this might only be true in the environment our CI runs in.
(#2611) had suggested that an iterative version of
SimpleHashFromByteSlice would be faster, presumably because
we can envision some overhead accumulating from stack
frames and function calls. Additionally, a recursive algorithm risks
hitting the stack limit and causing a stack overflow should the tree
be too large.
Provided here is an iterative alternative, a simple test to assert
correctness and a benchmark. On the performance side, there appears to
be no overall difference:
```
BenchmarkSimpleHashAlternatives/recursive-4 20000 77677 ns/op
BenchmarkSimpleHashAlternatives/iterative-4 20000 76802 ns/op
```
On the surface it might seem that the additional overhead is due to
the different allocation patterns of the implementations. The recursive
version uses a single `[][]byte` slice which it then re-slices at each level of the tree.
The iterative version produces a `[][]byte` once within the function and
then rewrites sub-slices of that array at each level of the tree.
Experimenting by modifying the code to simply calculate the
hash and not store the result shows little to no difference in performance.
These preliminary results suggest:
1. The performance of the current implementation is pretty good
2. Go has low overhead for recursive functions
3. The performance of the SimpleHashFromByteSlice routine is dominated
by the actual hashing of data
Although this work is in no way exhaustive, point #3 suggests that
optimizations of this routine would need to take an alternative
approach to make significant improvements on the current performance.
Finally, considering that the recursive implementation is easier to
read, it might not be worthwhile to switch to a less intuitive
implementation for so little benefit.
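For readers who want to see the shape of the iterative approach, a generic bottom-up sketch follows. It pairs adjacent nodes per level, promotes an unpaired node, and reuses a single working slice, echoing the description above; it is illustrative only and not byte-compatible with Tendermint's SimpleHashFromByteSlices (different split rule, no leaf/inner prefixes).

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// merkleRootIterative computes a Merkle root bottom-up, without recursion.
// It pairs adjacent nodes at each level and promotes an unpaired node as-is.
func merkleRootIterative(items [][]byte) []byte {
	if len(items) == 0 {
		return nil
	}

	// Hash the leaves once into a working level.
	level := make([][]byte, len(items))
	for i, item := range items {
		h := sha256.Sum256(item)
		level[i] = h[:]
	}

	// Repeatedly combine pairs until a single root remains, rewriting the
	// same backing array in place at each level.
	for len(level) > 1 {
		next := level[:0]
		for i := 0; i < len(level); i += 2 {
			if i+1 == len(level) {
				next = append(next, level[i]) // odd node moves up unchanged
				continue
			}
			h := sha256.Sum256(append(append([]byte{}, level[i]...), level[i+1]...))
			next = append(next, h[:])
		}
		level = next
	}
	return level[0]
}

func main() {
	root := merkleRootIterative([][]byte{[]byte("a"), []byte("b"), []byte("c")})
	fmt.Printf("%x\n", root)
}
```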
* re-add slice re-writing
* [crypto] Document SimpleHashFromByteSlicesIterative
Continues from #3280 in building support for batched requests/responses in the JSON RPC (as per issue #3213).
* Add JSON RPC batching for client and server
As per #3213, this adds support for [JSON RPC batch requests and
responses](https://www.jsonrpc.org/specification#batch).
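For reference, on the wire a batch is just a JSON array of request objects answered by an array of responses matched by id. A hand-rolled sketch (endpoint URL and method parameters are placeholders):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
)

type rpcRequest struct {
	JSONRPC string            `json:"jsonrpc"`
	ID      int               `json:"id"`
	Method  string            `json:"method"`
	Params  map[string]string `json:"params"`
}

func main() {
	// Two requests sent in one round trip; the server replies with a JSON
	// array containing one response per request (matched by id).
	batch := []rpcRequest{
		{JSONRPC: "2.0", ID: 1, Method: "status", Params: map[string]string{}},
		{JSONRPC: "2.0", ID: 2, Method: "block", Params: map[string]string{"height": "10"}},
	}

	body, err := json.Marshal(batch)
	if err != nil {
		panic(err)
	}

	resp, err := http.Post("http://localhost:26657", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```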
* Add additional checks to ensure client responses are the same as results
* Fix case where a notification is sent and no response is expected
* Add test to check that JSON RPC notifications in a batch are left out in responses
* Update CHANGELOG_PENDING.md
* Update PR number now that PR has been created
* Make errors start with lowercase letter
* Refactor batch functionality to be standalone
This refactors the batching functionality to rather act in a standalone
way. In light of supporting concurrent goroutines making use of the same
client, it would make sense to have batching functionality where one
could create a batch of requests per goroutine and send that batch
without interfering with a batch from another goroutine.
* Add examples for simple and batch HTTP client usage
* Check errors from writer and remove nolinter directives
* Make error strings start with lowercase letter
* Refactor examples to make them testable
* Use safer deferred shutdown for example Tendermint test node
* Recompose rpcClient interface from pre-existing interface components
* Rename WaitGroup for brevity
* Replace empty ID string with request ID
* Remove extraneous test case
* Convert first letter of errors.Wrap() messages to lowercase
* Remove extraneous function parameter
* Make variable declaration terse
* Reorder WaitGroup.Done call to help prevent race conditions in the face of failure
* Swap mutex to value representation and remove initialization
* Restore empty JSONRPC string ID in response to prevent nil
* Make JSONRPCBufferedRequest private
* Revert PR hard link in CHANGELOG_PENDING
* Add client ID for JSONRPCClient
This adds code to automatically generate a randomized client ID for the
JSONRPCClient, and adds a check of the IDs in the responses (if one was
set in the requests).
* Extract response ID validation into separate function
* Remove extraneous comments
* Reorder fields to indicate clearly which are protected by the mutex
* Refactor for loop to remove indexing
* Restructure and combine loop
* Flatten conditional block for better readability
* Make multi-variable declaration slightly more readable
* Change for loop style
* Compress error check statements
* Make function description more generic to show that we support different protocols
* Preallocate memory for request and result objects
* use dialPeer function in a seed mode
Fixes#3532
by storing the number of connection attempts in memory and
removing the address from the addrbook when the number of attempts > 16
Fixes#3457
The issue is this: writing a BlockRequest into the requestsCh channel also creates a timer that stops the peer 15s later if no block has been received. But popping a BlockRequest from requestsCh and sending it out may be delayed by more than 15s, so the peer will be stopped with the error ("send nothing to us").
Extracting the requestsCh handling into its own goroutine makes sure that every BlockRequest is handled in a timely manner.
Instead of the requestsCh handling, we should probably pull the didProcessCh handling into a separate goroutine since this is the one "starving" the other channel handlers. I believe the way it is right now, we still have issues with high delays in errorsCh handling that might cause sending requests to invalid/disconnected peers.
What happened:
New code was supposed to fall back to last height changed when/if it
failed to find validators at checkpoint height (to make release
non-breaking).
But because we did not check if validator set is empty, the fall back
logic was never executed => resulting in LoadValidators returning an
empty validator set for cases where `lastStoredHeight` is checkpoint
height (i.e. almost all heights if the application does not change
validator set often).
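In rough pseudocode terms (a sketch, not the actual state package), the missing piece was an emptiness check before deciding whether to fall back:

```go
package state

// validatorsInfo is a stand-in for the record stored at checkpoint heights.
type validatorsInfo struct {
	ValidatorSet      []string // illustrative; really a validator set type
	LastHeightChanged int64
}

// loadValidators sketches the fixed logic: if the record at the requested
// (checkpoint) height carries an empty validator set, follow
// LastHeightChanged and load the set actually stored there.
func loadValidators(load func(height int64) *validatorsInfo, height int64) []string {
	info := load(height)
	if info == nil {
		return nil
	}
	if len(info.ValidatorSet) == 0 {
		// The previously missing branch: fall back to the height at which
		// the validator set last changed.
		info = load(info.LastHeightChanged)
		if info == nil {
			return nil
		}
	}
	return info.ValidatorSet
}
```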
How it was found:
one of our users - @sunboshan reported a bug here
https://github.com/tendermint/tendermint/pull/3537#issuecomment-482711833
* use last height changed in validator set is empty
* add a changelog entry
A prior change to address accidental DNS lookups introduced the
SocketAddr on peer, which was then used to add it to the address book.
This in turn swallowed the self-reported port of the peer, which is
important on a reconnect. This change revives the NetAddress on NodeInfo
which the Peer carries, but now returns an error to avoid nil
dereferencing, another issue observed in the past. Additionally we could
potentially address #3532, yet the original problem statement of that
issue stands.
As a drive-by optimisation `MarkAsGood` now takes only a `p2p.ID` which
makes it interface a bit stricter and leaner.
* rpc: store validator info periodically
* increase ValidatorSetStoreInterval
also
- unexpose it
- add a comment
- refactor code
- add a benchmark, which shows that 100000 results in ~ 100ms to get 100
validators
* make the change non-breaking
* expand comment
* rename valSetStoreInterval to valSetCheckpointInterval
* change the panic msg
* add a test and changelog entry
* update changelog entry
* update changelog entry
* add a link to PR
* fix test
* Update CHANGELOG_PENDING.md
Co-Authored-By: melekes <anton.kalyaev@gmail.com>
* update comment
* use MaxInt64 func
* add actionable advice for ErrAddrBookNonRoutable err
Should replace https://github.com/tendermint/tendermint/pull/3463
* reorder checks in addrbook#addAddress so
ErrAddrBookPrivate is returned first
and do not log error in DialPeersAsync if the address is private
because it's not an error
ListOfKnownAddresses is removed
panic if addrbook size is less than zero
CrawlPeers does not attempt to connect to existing or peers we're currently dialing
various perf. fixes
improved tests (though not complete)
move IsDialingOrExistingAddress check into DialPeerWithAddress (Fixes#2716)
* addrbook: preallocate memory when saving addrbook to file
* addrbook: remove oldestFirst struct and check for ID
* oldestFirst replaced with sort.Slice
* ID is now mandatory, so no need to check
* addrbook: remove ListOfKnownAddresses
GetSelection is used instead in seed mode.
* addrbook: panic if size is less than 0
* rewrite addrbook#saveToFile to not use a counter
* test AttemptDisconnects func
* move IsDialingOrExistingAddress check into DialPeerWithAddress
* save and cleanup crawl peer data
* get rid of DefaultSeedDisconnectWaitPeriod
* make linter happy
* fix TestPEXReactorSeedMode
* fix comment
* add a changelog entry
* Apply suggestions from code review
Co-Authored-By: melekes <anton.kalyaev@gmail.com>
* rename ErrDialingOrExistingAddress to ErrCurrentlyDialingOrExistingAddress
* lowercase errors
* do not persist seed data
pros:
- no extra files
- less IO
cons:
- if the node crashes, seed might crawl a peer too soon
* fixes after Ethan's review
* add a changelog entry
* we should only consult Switch about peers
checking addrbook size does not make sense since only PEX reactor uses
it for dialing peers!
https://github.com/tendermint/tendermint/pull/3011#discussion_r270948875
* OriginalAddr -> SocketAddr
OriginalAddr records the originally dialed address for outbound peers,
rather than the peer's self reported address. For inbound peers, it was
nil. Here, we rename it to SocketAddr and for inbound peers, set it to
the RemoteAddr of the connection.
* use SocketAddr
Numerous places in the code call peer.NodeInfo().NetAddress().
However, this call to NetAddress() may perform a DNS lookup if the
reported NodeInfo.ListenAddr includes a name. Failure of this lookup
returns a nil address, which can lead to panics in the code.
Instead, call peer.SocketAddr() to return the static address of the
connection.
* remove nodeInfo.NetAddress()
Expose `transport.NetAddress()`, a static result determined
when the transport is created. Removing NetAddress() from the nodeInfo
prevents accidental DNS lookups.
* fixes from review
* linter
* fixes from review
* docs: fix broken links (#3482)
A bunch of links were broken in the documentation as they included the
`docs` prefix.
* Update CHANGELOG_PENDING
* docs: switch to relative links for github compatibility (#3482)
* mempool: add a safety check, write tests for mempoolIDs
and document 65536 limit in the mempool reactor spec
follow-up to https://github.com/tendermint/tendermint/pull/2778
* rename the test
* fixes after Ismail's review
Why submit this PR:
We have suffered from an infinite-loop bug in the addrbook, and it took us a long time to figure out why the process had become a zombie peer. It has been fixed in #3232, but ADDRS_LOOP is still there, so the risk of an infinite loop still exists.
The algorithm that randomly picks a bucket is not stable, which means the peer may unluckily keep choosing the wrong bucket for a long time, wasting time and CPU for nothing.
A simple improvement:
shuffle bucketsNew and bucketsOld, and pick the necessary number of addresses from them. A stable
algorithm.
I think it's nice when the Client interface has all the methods. If someone does not need a particular method/set of methods, she can use individual interfaces (e.g. NetworkClient, MempoolClient) or write her own interface.
technically breaking
Fixes#3458
Refs #3306, irisnet@fdbb676
I ran an irishub validator. After the validator node had been running for several days, I dumped the whole goroutine stack. I found that there were hundreds of broadcastTxRoutine goroutines, even though the number of connected peers was less than 30. So I believe there must be a broadcastTxRoutine leak.
According to my analysis, the root cause of this issue lies in the code below:
select {
case <-next.NextWaitChan():
// see the start of the for loop for nil check
next = next.Next()
case <-peer.Quit():
return
case <-memR.Quit():
return
}
As we know, if multiple cases are ready at the same time, a random one will be selected. Suppose that next.NextWaitChan() and peer.Quit() are both ready, and next.NextWaitChan() is chosen.
// send memTx
msg := &TxMessage{Tx: memTx.tx}
success := peer.Send(MempoolChannel, cdc.MustMarshalBinaryBare(msg))
if !success {
time.Sleep(peerCatchupSleepIntervalMS * time.Millisecond)
continue
}
Then next will be non-nil and the peer send operation won't succeed. As a result, this goroutine gets stuck in an infinite loop and is never released.
My proposal is to check peer.Quit() and memR.Quit() on every loop iteration, no matter whether next is nil.
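A minimal sketch of that proposal follows (the channel and function names are simplified stand-ins, not the actual mempool reactor): the quit channels are consulted on every iteration, including the back-off path after a failed send, so the goroutine can always exit.
```
package main

import (
	"fmt"
	"time"
)

// broadcastLoop is a simplified stand-in for broadcastTxRoutine: the
// peer/reactor quit channels are selected on in every iteration, even while
// waiting for the next element or retrying a failed send.
func broadcastLoop(nextWait <-chan struct{}, peerQuit, reactorQuit <-chan struct{}, send func() bool) {
	haveNext := false // stand-in for `next != nil`
	for {
		if !haveNext {
			select {
			case <-nextWait:
				haveNext = true
			case <-peerQuit:
				return
			case <-reactorQuit:
				return
			}
			continue
		}
		if !send() {
			// Failed send: back off, but keep the quit channels in the select
			// so we never spin here forever.
			select {
			case <-peerQuit:
				return
			case <-reactorQuit:
				return
			case <-time.After(100 * time.Millisecond):
			}
			continue
		}
		haveNext = false
	}
}

func main() {
	nextWait := make(chan struct{}, 1)
	nextWait <- struct{}{}
	peerQuit := make(chan struct{})
	go func() { time.Sleep(50 * time.Millisecond); close(peerQuit) }()
	broadcastLoop(nextWait, peerQuit, make(chan struct{}), func() bool { return false })
	fmt.Println("broadcast goroutine exited cleanly")
}
```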
Closes#1798
This is done by making every mempool tx maintain a list of the peers it has received the tx from. Instead of using the 20-byte peer ID, it uses a local map from peer ID to a uint16 counter, so every peer adds 2 bytes (word-aligned, probably making it 8 bytes).
This also required resetting the callback function on every CheckTx. This likely has performance ramifications for instruction caching. The actual setting operation isn't costly with the removal of defers in this PR.
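A rough sketch of that bookkeeping, with illustrative names rather than the actual mempool types: each peer gets a compact uint16 handle, and only the handle is stored per tx.
```
package main

import (
	"fmt"
	"sync"
)

// mempoolIDs assigns each peer a small uint16 handle so a tx only records the
// handles of its senders instead of 20-byte p2p IDs. Illustrative sketch only.
type mempoolIDs struct {
	mtx     sync.Mutex
	peerMap map[string]uint16 // p2p ID -> compact handle
	nextID  uint16            // 0 is kept reserved for "unknown peer"
}

func newMempoolIDs() *mempoolIDs {
	return &mempoolIDs{peerMap: make(map[string]uint16), nextID: 1}
}

// ReserveForPeer returns the existing compact id for a peer, or assigns one.
func (ids *mempoolIDs) ReserveForPeer(peerID string) uint16 {
	ids.mtx.Lock()
	defer ids.mtx.Unlock()
	if id, ok := ids.peerMap[peerID]; ok {
		return id
	}
	id := ids.nextID
	ids.nextID++
	ids.peerMap[peerID] = id
	return id
}

func main() {
	ids := newMempoolIDs()
	fmt.Println(ids.ReserveForPeer("peerA"), ids.ReserveForPeer("peerB"), ids.ReserveForPeer("peerA"))
}
```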
* Make the mempool not gossip txs back to peers it received them from
* Fix adversarial memleak
* Don't break interface
* Update changelog
* Forgot to add a mtx
* forgot a mutex
* Update mempool/reactor.go
Co-Authored-By: ValarDragon <ValarDragon@users.noreply.github.com>
* Update mempool/mempool.go
Co-Authored-By: ValarDragon <ValarDragon@users.noreply.github.com>
* Use unknown peer ID
Co-Authored-By: ValarDragon <ValarDragon@users.noreply.github.com>
* fix compilation
* use next wait chan logic when skipping
* Minor fixes
* Add TxInfo
* Add reverse map
* Make activeID's auto-reserve 0
* 0 -> UnknownPeerID
Co-Authored-By: ValarDragon <ValarDragon@users.noreply.github.com>
* Switch to making the normal case set a callback on the reqres object
The recheck case is still done via the global callback, and stats
are also set via global callback
* fix merge conflict
* Address comments
* Add cache tests
* add cache tests
* minor fixes
* update metrics in reqResCb and reformat code
* goimport -w mempool/reactor.go
* mempool: update memTx senders
I had to introduce txsMap for quick mempoolTx lookups.
* change senders type from []uint16 to sync.Map
Fixes DATA RACE:
```
Read at 0x00c0013fcd3a by goroutine 183:
github.com/tendermint/tendermint/mempool.(*MempoolReactor).broadcastTxRoutine()
/go/src/github.com/tendermint/tendermint/mempool/reactor.go:195 +0x3c7
Previous write at 0x00c0013fcd3a by D[2019-02-27|10:10:49.058] Read PacketMsg switch=3 peer=35bc1e3558c182927b31987eeff3feb3d58a0fc5@127.0.0.1
:46552 conn=MConn{pipe} packet="PacketMsg{30:2B06579D0A143EB78F3D3299DE8213A51D4E11FB05ACE4D6A14F T:1}"
goroutine 190:
github.com/tendermint/tendermint/mempool.(*Mempool).CheckTxWithInfo()
/go/src/github.com/tendermint/tendermint/mempool/mempool.go:387 +0xdc1
github.com/tendermint/tendermint/mempool.(*MempoolReactor).Receive()
/go/src/github.com/tendermint/tendermint/mempool/reactor.go:134 +0xb04
github.com/tendermint/tendermint/p2p.createMConnection.func1()
/go/src/github.com/tendermint/tendermint/p2p/peer.go:374 +0x25b
github.com/tendermint/tendermint/p2p/conn.(*MConnection).recvRoutine()
/go/src/github.com/tendermint/tendermint/p2p/conn/connection.go:599 +0xcce
Goroutine 183 (running) created at:
D[2019-02-27|10:10:49.058] Send switch=2 peer=1efafad5443abeea4b7a8155218e4369525d987e@127.0.0.1:46193 channel=48 conn=MConn{pipe} m
sgBytes=2B06579D0A146194480ADAE00C2836ED7125FEE65C1D9DD51049
github.com/tendermint/tendermint/mempool.(*MempoolReactor).AddPeer()
/go/src/github.com/tendermint/tendermint/mempool/reactor.go:105 +0x1b1
github.com/tendermint/tendermint/p2p.(*Switch).startInitPeer()
/go/src/github.com/tendermint/tendermint/p2p/switch.go:683 +0x13b
github.com/tendermint/tendermint/p2p.(*Switch).addPeer()
/go/src/github.com/tendermint/tendermint/p2p/switch.go:650 +0x585
github.com/tendermint/tendermint/p2p.(*Switch).addPeerWithConnection()
/go/src/github.com/tendermint/tendermint/p2p/test_util.go:145 +0x939
github.com/tendermint/tendermint/p2p.Connect2Switches.func2()
/go/src/github.com/tendermint/tendermint/p2p/test_util.go:109 +0x50
I[2019-02-27|10:10:49.058] Added good transaction validator=0 tx=43B4D1F0F03460BD262835C4AA560DB860CFBBE85BD02386D83DAC38C67B3AD7 res="&{CheckTx:gas_w
anted:1 }" height=0 total=375
Goroutine 190 (running) created at:
github.com/tendermint/tendermint/p2p/conn.(*MConnection).OnStart()
/go/src/github.com/tendermint/tendermint/p2p/conn/connection.go:210 +0x313
github.com/tendermint/tendermint/libs/common.(*BaseService).Start()
/go/src/github.com/tendermint/tendermint/libs/common/service.go:139 +0x4df
github.com/tendermint/tendermint/p2p.(*peer).OnStart()
/go/src/github.com/tendermint/tendermint/p2p/peer.go:179 +0x56
github.com/tendermint/tendermint/libs/common.(*BaseService).Start()
/go/src/github.com/tendermint/tendermint/libs/common/service.go:139 +0x4df
github.com/tendermint/tendermint/p2p.(*peer).Start()
<autogenerated>:1 +0x43
github.com/tendermint/tendermint/p2p.(*Switch).startInitPeer()
```
* explain the choice of a map DS for senders
* extract ids pool/mapper to a separate struct
* fix literal copies lock value from senders: sync.Map contains sync.Mutex
* use sync.Map#LoadOrStore instead of Load
* fixes after Ismail's review
* rename resCbNormal to resCbFirstTime
In order to re-enable the test harness for the KMS (see
tendermint/kms#227), we need some marginally more realistic proposals
and votes. This is because the KMS does some additional sanity checks
now to ensure the height and round are increasing over time.
* p2p: refactor Switch#Broadcast func
- call wg.Add only once
- do not call peers.List twice!
* bad for performance
* peers list can change in between calls!
Refs #3306
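A sketch of the refactored shape (minimal stand-ins for Switch and Peer, not the real p2p types): the peer list is read once and wg.Add is called once with the full count.
```
package main

import (
	"fmt"
	"sync"
)

// Sketch only: snapshot the peer list once, Add the full count once, fan out.
type peer struct{ id string }

func (p *peer) Send(msg string) bool { fmt.Println(p.id, "<-", msg); return true }

type sw struct{ peers []*peer }

func (s *sw) Broadcast(msg string) {
	peers := s.peers // take the list once; it can change between calls
	var wg sync.WaitGroup
	wg.Add(len(peers)) // a single Add instead of one per iteration
	for _, p := range peers {
		go func(p *peer) {
			defer wg.Done()
			p.Send(msg)
		}(p)
	}
	wg.Wait()
}

func main() {
	s := &sw{peers: []*peer{{id: "a"}, {id: "b"}}}
	s.Broadcast("hello")
}
```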
* p2p: use time.Ticker instead of RepeatTimer
no need in RepeatTimer since we don't Reset them
Refs #3306
* libs/common: remove RepeatTimer (also TimerMaker and Ticker interface)
"ancient code that’s caused no end of trouble" Ethan
I believe there's a much simpler way to write a ticker that can be reset
https://medium.com/@arpith/resetting-a-ticker-in-go-63858a2c17ec
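For illustration, the RepeatTimer-free shape looks roughly like this (a standalone sketch, not the actual reactor code): a plain time.Ticker drives the select loop, and since we never Reset it, nothing more is needed.
```
package main

import (
	"fmt"
	"time"
)

// run shows a plain time.Ticker driving a loop until a stop signal arrives.
func run(stop <-chan struct{}) {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case t := <-ticker.C:
			fmt.Println("tick at", t.Format(time.StampMilli))
		case <-stop:
			return
		}
	}
}

func main() {
	stop := make(chan struct{})
	go func() { time.Sleep(350 * time.Millisecond); close(stop) }()
	run(stop)
}
```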
* Update proposer-selection.md
* Fixed typos
* fixed typos
* Attempt to address some comments
* Update proposer-selection.md
* Update proposer-selection.md
* Update proposer-selection.md
Added the normalization step.
* Addressed review comments
* New example for normalization section
Added a new example to better show the need for normalization
Added requirement for changing validator set
Addressed review comments
* Fixed problem with R2
* fixed the math for new validator
* test
* more small updates
* Moved the centering above the round-robin election
- the centering is now done before the actual round-robin block
- updated examples
- cleanup
* change to reflect new implementation for new validator
* Make sure config.TimeoutBroadcastTxCommit < rpcserver.WriteTimeout()
* remove redundant comment
* libs/rpc/http_server: move Read/WriteTimeout into Config
* increase defaults for read/write timeouts
Based on this article
https://www.digitalocean.com/community/tutorials/how-to-optimize-nginx-configuration
* WriteTimeout should be larger than TimeoutBroadcastTxCommit
* set a deadline for subscribing to txs
* extract duration into const
* add two changelog entries
* Update CHANGELOG_PENDING.md
Co-Authored-By: melekes <anton.kalyaev@gmail.com>
* Update CHANGELOG_PENDING.md
Co-Authored-By: melekes <anton.kalyaev@gmail.com>
* 12 -> 10
* changelog
* changelog
See https://github.com/tendermint/tendermint/pull/3403/files#r266208947
In #3403 we unexposed BlockTimeIota from the ABCI, but it's still part
of the ConsensusParams struct, so we have to remember to add it back
after calling PB2TM.ConsensusParams. Instead, PB2TM.ConsensusParams
should take it as an argument
Fixes #3432
* bump ABCIVersion due to renaming BlockSizeParams -> BlockParams
(https://github.com/tendermint/tendermint/pull/3417#discussion_r264974791)
* Move changelog on consensus params entry to breaking
* Add @melekes' suggestion for breaking change in pubsub into upgrading.md
* Add changelog entry for #3351
* Add changelog entry for #3358 & #3359
* Add changelog entry for #3397
* remove changelog entry for #3397 (was already released in 0.30.2)
* move 3351 to improvements
* Update changelog comment
* limit number of /subscribe clients and queries per client
Add the following config variables (under [rpc] section):
* max_subscription_clients
* max_subscriptions_per_client
* timeout_broadcast_tx_commit
Fixes#2826
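A sketch of how the two limits could be enforced on /subscribe; only the config keys above come from this change, everything else (field names, error strings) is illustrative.
```
package main

import (
	"errors"
	"fmt"
	"sync"
)

var (
	errMaxClients       = errors.New("max_subscription_clients reached")
	errMaxSubscriptions = errors.New("max_subscriptions_per_client reached")
)

// subLimiter tracks how many subscriptions each client holds and rejects new
// ones once either configured limit would be exceeded. Sketch only.
type subLimiter struct {
	mtx                      sync.Mutex
	maxClients, maxPerClient int
	subsPerClient            map[string]int
}

func (l *subLimiter) addSubscription(clientID string) error {
	l.mtx.Lock()
	defer l.mtx.Unlock()
	n, known := l.subsPerClient[clientID]
	if !known && len(l.subsPerClient) >= l.maxClients {
		return errMaxClients
	}
	if n >= l.maxPerClient {
		return errMaxSubscriptions
	}
	l.subsPerClient[clientID] = n + 1
	return nil
}

func main() {
	l := &subLimiter{maxClients: 1, maxPerClient: 2, subsPerClient: map[string]int{}}
	fmt.Println(l.addSubscription("a"), l.addSubscription("a"), l.addSubscription("a"), l.addSubscription("b"))
}
```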
new HTTPClient interface for subscriptions
finalize HTTPClient events interface
remove EventSubscriber
fix data race
```
WARNING: DATA RACE
Read at 0x00c000a36060 by goroutine 129:
github.com/tendermint/tendermint/rpc/client.(*Local).Subscribe.func1()
/go/src/github.com/tendermint/tendermint/rpc/client/localclient.go:168 +0x1f0
Previous write at 0x00c000a36060 by goroutine 132:
github.com/tendermint/tendermint/rpc/client.(*Local).Subscribe()
/go/src/github.com/tendermint/tendermint/rpc/client/localclient.go:191 +0x4e0
github.com/tendermint/tendermint/rpc/client.WaitForOneEvent()
/go/src/github.com/tendermint/tendermint/rpc/client/helpers.go:64 +0x178
github.com/tendermint/tendermint/rpc/client_test.TestTxEventsSentWithBroadcastTxSync.func1()
/go/src/github.com/tendermint/tendermint/rpc/client/event_test.go:139 +0x298
testing.tRunner()
/usr/local/go/src/testing/testing.go:827 +0x162
Goroutine 129 (running) created at:
github.com/tendermint/tendermint/rpc/client.(*Local).Subscribe()
/go/src/github.com/tendermint/tendermint/rpc/client/localclient.go:164 +0x4b7
github.com/tendermint/tendermint/rpc/client.WaitForOneEvent()
/go/src/github.com/tendermint/tendermint/rpc/client/helpers.go:64 +0x178
github.com/tendermint/tendermint/rpc/client_test.TestTxEventsSentWithBroadcastTxSync.func1()
/go/src/github.com/tendermint/tendermint/rpc/client/event_test.go:139 +0x298
testing.tRunner()
/usr/local/go/src/testing/testing.go:827 +0x162
Goroutine 132 (running) created at:
testing.(*T).Run()
/usr/local/go/src/testing/testing.go:878 +0x659
github.com/tendermint/tendermint/rpc/client_test.TestTxEventsSentWithBroadcastTxSync()
/go/src/github.com/tendermint/tendermint/rpc/client/event_test.go:119 +0x186
testing.tRunner()
/usr/local/go/src/testing/testing.go:827 +0x162
==================
```
lite client works (tested manually)
godoc comments
httpclient: do not close the out channel
use TimeoutBroadcastTxCommit
no timeout for unsubscribe
but 1s Local (5s HTTP) timeout for resubscribe
format code
change Subscribe#out cap to 1
and replace config vars with RPCConfig
TimeoutBroadcastTxCommit can't be greater than rpcserver.WriteTimeout
rpc: Context as first parameter to all functions
reformat code
fixes after my own review
fixes after Ethan's review
add test stubs
fix config.toml
* fixes after manual testing
- rpc: do not recommend using BroadcastTxCommit because it's slow and wastes
Tendermint resources (pubsub)
- rpc: better error in Subscribe and BroadcastTxCommit
- HTTPClient: do not resubscribe if err = ErrAlreadySubscribed
* fixes after Ismail's review
* Update rpc/grpc/grpc_test.go
Co-Authored-By: melekes <anton.kalyaev@gmail.com>
Also
- init substructures to avoid panic in pb2tm.ConsensusParams
Before: if csp.Block is nil and we later try to access/write to it,
we'll panic.
After: if csp.Block is nil and we later try to access/write to it,
there'll be no panic.
When scaling and averaging is invoked, it is possible to have validators
with close priorities ending up with the same priority. With the current code,
this makes it impossible to verify the priority orders before and after updates.
Fixes#3383
Refs #3262
This fixes two small bugs:
1) lite/dbprovider: return `ok` instead of true in parse* functions. It's weird that we were ignoring the `ok` value before.
2) consensus/state: previously, because of the shadowing, we almost never output "Error with msg". Now we declare both `added` and `err` at the beginning of the function, so there's no shadowing.
Although the version we were pinning to is from Nov. 2016, there were no substantial changes:
jmhodges/levigo@2b8c778 added go-modules support (no code changes)
jmhodges/levigo@853d788 added a badge to the readme
closes#3381
We're pinning repos without releases because it's very easy to upgrade all the dependencies by executing dep ensure --upgrade. Instead, we should just never run this command directly, only dep ensure --upgrade <some repo>. And we can defend that in PRs.
Refs #3374
The problem with pinning to exact revisions: people who import Tendermint as a library (e.g. abci/types) are stuck with these revisions even though the code they import may not even use them.
https://tools.ietf.org/html/rfc6962#section-2.1
"The largest power of two less than the number of items" is actually correct!
For n > 1, let k be the largest power of two smaller than n (i.e.,
k < n <= 2k).
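For illustration, the split point can be computed like this (a standalone sketch, not the actual simple-merkle code):
```
package main

import "fmt"

// getSplitPoint returns the largest power of two strictly less than n, the
// RFC 6962 split point discussed above (assumes n > 1).
func getSplitPoint(n int) int {
	k := 1
	for k*2 < n {
		k *= 2
	}
	return k
}

func main() {
	for _, n := range []int{2, 3, 4, 5, 8, 9} {
		fmt.Printf("n=%d split=%d\n", n, getSplitPoint(n))
	}
}
```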
* fix failure in TestProposerFrequency
* Add test to check priority order after updates
* Changed applyRemovals() and removed Remove()
Changed applyRemovals() similar to applyUpdates()
Removed function Remove()
Updated comments
* review comments
* simplify applyRemovals and add more comments
* small correction in comment
* Fix check in test
* Fix priority check for centering, address review comments
* fix assert for priority centering
* review comments
* review comments
* cleanup and review comments
added upper limit check for validator voting power
moved check for empty validator set earlier
moved panic on potential negative set length in verifyRemovals
added more tests
* review comments
Fixes: #3378
* Add stats to cleveldb implementation
* update changelog
* remove TODO
also
- sort keys
- preallocate memory
* fix const initializer []string literal is not a constant
* add test
* make BlockTimeIota a consensus parameter, not a locally configurable option
Refs #2920
* make TimeIota int64 ms
Refs #2920
* update Gopkg.toml
* fixes after Ethan's review
* fix TestRemoteSignerProposalSigningFailed
* update changelog
Before, we were using it to get a round state in tests. Now it can be done
by calling csX.GetRoundState. We will need to rewrite
TestStateSlashingPrevotes and TestStateSlashingPrecommits, which are
commented out right now, so they do not rely on the
EventDataRoundState#RoundState field.
Refs #1527
1."abci_query": rpcserver.NewRPCFunc(c.ABCIQuery, "path,data,prove")
"validators": rpcserver.NewRPCFunc(c.Validators, "height"),
the parameters and function do not match, cause index out of range error.
2. the prove of query is forced to be true, while default option is false.
3. fix the wrong key of merkle
When a peer is removed, the block pool simply removes the bpPeer
but does not stop its timer, which causes a stopError for reconnected
peers. Stop the timer when removing the peer from the pool.
* fix dirty data in peerset
startInitPeer was called before PeerSet added the peer;
once the mconnection started and Receive of one
Reactor failed, we would try to remove the peer from
the PeerSet while the PeerSet did not yet contain it.
Fix this by changing the order.
* fix test FilterDuplicate
* fix start/stop race
* fix err
This issue is related to #3107
This is a first renaming/refactoring step before reworking and removing heartbeats.
As discussed with @Liamsi , we preferred to go for a couple of independent and separate PRs to simplify review work.
The changes:
Help to clarify the relation between the validator and remote signer endpoints
Differentiate between timeouts and deadlines
Prepare to encapsulate networking related code behind RemoteSigner in the next PR
My intention is to separate and encapsulate the "network related" code from the actual signer.
SignerRemote ---(uses/contains)--> SignerValidatorEndpoint <--(connects to)--> SignerServiceEndpoint ---> SignerService (future.. not here yet but would like to decouple too)
All reconnection/heartbeat/whatever code goes in the endpoints. Signer[Remote/Service] do not need to know about that.
I agree Endpoint may not be the perfect name. I tried to find something "Go-ish" enough. It is a common name in go-kit, kubernetes, etc.
Right now:
SignerValidatorEndpoint:
handles the listener
contains SignerRemote
Implements the PrivValidator interface
connects and sets a connection object in a contained SignerRemote
delegates some PrivValidator calls to SignerRemote, which in turn uses the conn object that was set externally
SignerRemote:
Implements the PrivValidator interface
read/writes from a connection object directly
handles heartbeats
SignerServiceEndpoint:
Does most things in a single place
delegates to a PrivValidator IIRC.
* cleanup
* Refactoring step 1
* Refactoring step 2
* move messages to another file
* mark for future work / next steps
* mark deprecated classes in docs
* Fix linter problems
* additional linter fixes
* docs: explain create_empty_blocks configurations
Closes#3307
* Vagrantfile: install nodejs for docs
* update docs instructions
npm install does not make sense since there's no packages.json file
* explain broadcast_tx_* tx format
Closes#536
* docs: explain how transaction ordering works
Closes#2904
* bring in consensus parameters explained
* example for create_empty_blocks_interval
* bring in explanation from https://github.com/tendermint/tendermint/issues/2487#issuecomment-424899799
* link to formatting instead of duplicating info
* libs/common: TrapSignal accepts logger as a first parameter
and does not block anymore
* previously it was dumping "captured ..." msg to os.Stdout
* TrapSignal should not be responsible for blocking the thread of execution
Refs #3238
* exit with zero (0) code upon receiving SIGTERM/SIGINT
Refs #3238
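A sketch of the new shape, assuming a standard-library logger in place of the project's log.Logger (names and signature here are illustrative): the handler is registered in a goroutine, logging goes through the supplied logger, the process exits with code 0, and the caller is left responsible for blocking.
```
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
	"time"
)

// trapSignal registers the handler and returns immediately; it logs through
// the supplied logger instead of writing to os.Stdout. Sketch only.
func trapSignal(logger *log.Logger, cb func()) {
	c := make(chan os.Signal, 1)
	signal.Notify(c, os.Interrupt, syscall.SIGTERM)
	go func() {
		sig := <-c
		logger.Printf("captured %v, exiting...", sig)
		if cb != nil {
			cb()
		}
		os.Exit(0) // exit with code 0 on SIGTERM/SIGINT, as described above
	}()
}

func main() {
	trapSignal(log.New(os.Stderr, "", log.LstdFlags), func() {})
	// The caller now decides how to block, e.g. on the node's Quit channel.
	time.Sleep(500 * time.Millisecond)
}
```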
* fix formatting in docs/app-dev/abci-cli.md
Co-Authored-By: melekes <anton.kalyaev@gmail.com>
* fix formatting in docs/app-dev/abci-cli.md
Co-Authored-By: melekes <anton.kalyaev@gmail.com>
* bound mempool memory usage
Closes#3079
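A minimal sketch of the bound (field names are illustrative, not the actual config keys): a tx is rejected when either the tx count or the running total of tx bytes would exceed its limit.
```
package main

import "fmt"

// mempool is a stand-in showing the double bound: number of txs and their
// total byte size are both capped. Sketch only.
type mempool struct {
	maxTxs        int
	maxTxsBytes   int64
	txs           [][]byte
	txsTotalBytes int64
}

func (m *mempool) checkTx(tx []byte) error {
	if len(m.txs)+1 > m.maxTxs || m.txsTotalBytes+int64(len(tx)) > m.maxTxsBytes {
		return fmt.Errorf("mempool is full: %d txs (max %d), %d bytes (max %d)",
			len(m.txs), m.maxTxs, m.txsTotalBytes, m.maxTxsBytes)
	}
	m.txs = append(m.txs, tx)
	m.txsTotalBytes += int64(len(tx))
	return nil
}

func main() {
	m := &mempool{maxTxs: 2, maxTxsBytes: 10}
	fmt.Println(m.checkTx([]byte("aaaa")), m.checkTx([]byte("bbbbbbbb")), m.checkTx([]byte("c")))
}
```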
* rename SizeBytes to TxsTotalBytes
and other small fixes after Zarko's review
* rename MaxBytes to MaxTxsTotalBytes
* make ErrMempoolIsFull more informative
* expose mempool's txs_total_bytes via RPC
* test full response
* fixes after Ethan's review
* config: rename mempool.size to mempool.max_txs
https://github.com/tendermint/tendermint/pull/3248#discussion_r254034004
* test more cases
https://github.com/tendermint/tendermint/pull/3248#discussion_r254036532
* simplify test
* Revert "config: rename mempool.size to mempool.max_txs"
This reverts commit 39bfa36961.
* rename count back to n_txs
to make a change non-breaking
* rename max_txs_total_bytes to max_txs_bytes
* format code
* fix TestWALPeriodicSync
The test was sometimes failing due to processFlushTicks being called too
early. The solution is to call wal#Start later in the test.
* Apply suggestions from code review
* reject the shared secret if it is all zeros, in case the blacklist was not
sufficient
* Add test that verifies lower order pub-keys are rejected at the DH step
* Update changelog
* fix typo in test-comment
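A sketch of that guard (standalone, not the actual secret-connection handshake): the shared secret is compared against zero in constant time and rejected if it is all zeroes.
```
package main

import (
	"crypto/subtle"
	"errors"
	"fmt"
)

// hasAllZeroes reports whether the DH shared secret is all zeros, the extra
// guard for low-order points that slip past a blacklist. Sketch only.
func hasAllZeroes(sharedSecret []byte) bool {
	zeros := make([]byte, len(sharedSecret))
	return subtle.ConstantTimeCompare(sharedSecret, zeros) == 1
}

func checkSharedSecret(sharedSecret []byte) error {
	if hasAllZeroes(sharedSecret) {
		return errors.New("shared secret is all zeroes")
	}
	return nil
}

func main() {
	fmt.Println(checkSharedSecret(make([]byte, 32)))            // rejected
	fmt.Println(checkSharedSecret(append(make([]byte, 31), 1))) // accepted
}
```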
* green pubsub tests :OK:
* get rid of clientToQueryMap
* Subscribe and SubscribeUnbuffered
* start adapting other pkgs to new pubsub
* nope
* rename MsgAndTags to Message
* remove TagMap
it does not bring any additional benefits
* bring back EventSubscriber
* fix test
* fix data race in TestStartNextHeightCorrectly
```
Write at 0x00c0001c7418 by goroutine 796:
github.com/tendermint/tendermint/consensus.TestStartNextHeightCorrectly()
/go/src/github.com/tendermint/tendermint/consensus/state_test.go:1296 +0xad
testing.tRunner()
/usr/local/go/src/testing/testing.go:827 +0x162
Previous read at 0x00c0001c7418 by goroutine 858:
github.com/tendermint/tendermint/consensus.(*ConsensusState).addVote()
/go/src/github.com/tendermint/tendermint/consensus/state.go:1631 +0x1366
github.com/tendermint/tendermint/consensus.(*ConsensusState).tryAddVote()
/go/src/github.com/tendermint/tendermint/consensus/state.go:1476 +0x8f
github.com/tendermint/tendermint/consensus.(*ConsensusState).handleMsg()
/go/src/github.com/tendermint/tendermint/consensus/state.go:667 +0xa1e
github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine()
/go/src/github.com/tendermint/tendermint/consensus/state.go:628 +0x794
Goroutine 796 (running) created at:
testing.(*T).Run()
/usr/local/go/src/testing/testing.go:878 +0x659
testing.runTests.func1()
/usr/local/go/src/testing/testing.go:1119 +0xa8
testing.tRunner()
/usr/local/go/src/testing/testing.go:827 +0x162
testing.runTests()
/usr/local/go/src/testing/testing.go:1117 +0x4ee
testing.(*M).Run()
/usr/local/go/src/testing/testing.go:1034 +0x2ee
main.main()
_testmain.go:214 +0x332
Goroutine 858 (running) created at:
github.com/tendermint/tendermint/consensus.(*ConsensusState).startRoutines()
/go/src/github.com/tendermint/tendermint/consensus/state.go:334 +0x221
github.com/tendermint/tendermint/consensus.startTestRound()
/go/src/github.com/tendermint/tendermint/consensus/common_test.go:122 +0x63
github.com/tendermint/tendermint/consensus.TestStateFullRound1()
/go/src/github.com/tendermint/tendermint/consensus/state_test.go:255 +0x397
testing.tRunner()
/usr/local/go/src/testing/testing.go:827 +0x162
```
* fixes after my own review
* fix formatting
* wait 100ms before kicking a subscriber out
+ a test for indexer_service
* fixes after my second review
* no timeout
* add changelog entries
* fix merge conflicts
* fix typos after Thane's review
Co-Authored-By: melekes <anton.kalyaev@gmail.com>
* reformat code
* rewrite indexer service in the attempt to fix failing test
https://github.com/tendermint/tendermint/pull/3227/#issuecomment-462316527
* Revert "rewrite indexer service in the attempt to fix failing test"
This reverts commit 0d9107a098.
* another attempt to fix indexer
* fixes after Ethan's review
* use unbuffered channel when indexing transactions
Refs https://github.com/tendermint/tendermint/pull/3227#discussion_r258786716
* add a comment for EventBus#SubscribeUnbuffered
* format code
As per #3043, this adds a ticker to sync the WAL every 2s while the WAL is running.
* Flush WAL every 2s
This adds a ticker that flushes the WAL every 2s while the WAL is
running. This is related to #3043.
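A sketch of the ticker shape described here (a simplified stand-in, not the consensus WAL itself): a goroutine flushes the buffered writer on an interval until the WAL stops.
```
package main

import (
	"bufio"
	"fmt"
	"os"
	"sync"
	"time"
)

// walSketch shows the periodic-flush idea: a ticker goroutine flushes the
// buffered writer on an interval while the WAL is running. Sketch only.
type walSketch struct {
	mtx  sync.Mutex
	buf  *bufio.Writer
	quit chan struct{}
}

func (w *walSketch) start(interval time.Duration) {
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				w.mtx.Lock()
				if err := w.buf.Flush(); err != nil {
					fmt.Fprintln(os.Stderr, "periodic WAL flush failed:", err)
				}
				w.mtx.Unlock()
			case <-w.quit:
				return
			}
		}
	}()
}

func main() {
	w := &walSketch{buf: bufio.NewWriter(os.Stdout), quit: make(chan struct{})}
	w.start(2 * time.Second) // the interval is made configurable later in this PR
	w.mtx.Lock()
	fmt.Fprintln(w.buf, "height=1 msg=...")
	w.mtx.Unlock()
	time.Sleep(2500 * time.Millisecond) // wait for one flush, then stop
	close(w.quit)
}
```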
* Fix spelling
* Increase timeout to 2mins for slower build environments
* Make WAL sync interval configurable
* Add TODO to replace testChan with more comprehensive testBus
* Remove extraneous debug statement
* Remove testChan in favour of using system time
As per
https://github.com/tendermint/tendermint/pull/3300#discussion_r255886586,
this removes the `testChan` WAL member and replaces the approach with a
system time-oriented one. In this new approach, we keep track of the
system time at which each flush and periodic flush successfully
occurred.
The naming of the various functions is also updated here to be more
consistent with "flushing" as opposed to "sync'ing".
* Update naming convention and ensure lock for timestamp update
* Add Flush method as part of WAL interface
Adds a `Flush` method as part of the WAL interface to enforce the idea
that we can manually trigger a WAL flush from outside of the WAL. This
is employed in the consensus state management to flush the WAL prior to
signing votes/proposals, as per https://github.com/tendermint/tendermint/issues/3043#issuecomment-453853630
* Update CHANGELOG_PENDING
* Remove mutex approach and replace with DI
The dependency injection approach to dealing with testing concerns could
allow similar effects to some kind of "testing bus"-based approach. This
commit introduces an example of this, where instead of relying on
(potentially fragile) timing of things between the code and the test, we
inject code into the function under test that can signal the test
through a channel.
This allows us to avoid the `time.Sleep()`-based approach previously
employed.
* Update comment on WAL flushing during vote signing
Co-Authored-By: thanethomson <connect@thanethomson.com>
* Simplify flush interval definition
Co-Authored-By: thanethomson <connect@thanethomson.com>
* Expand commentary on WAL disk flushing
Co-Authored-By: thanethomson <connect@thanethomson.com>
* Add broken test to illustrate WAL sync test problem
Removes test-related state (dependency injection code) from the WAL data
structure and adds test code to illustrate the problem with using
`WALGenerateNBlocks` and `wal.SearchForEndHeight` to test periodic
sync'ing.
* Fix test error messages
* Use WAL group buffer size to check for flush
A function is added to `libs/autofile/group.go#Group` in order to return
the size of the buffered data (i.e. data that has not yet been flushed
to disk). The test now checks that, prior to a `time.Sleep`, the group
buffer has data in it. After the `time.Sleep` (during which time the
periodic flush should have been called), the buffer should be empty.
* Remove config root dir removal from #3291
* Add godoc for NewWAL mentioning periodic sync
ref: [#3010 (comment)](https://github.com/tendermint/tendermint/issues/3010#issuecomment-464287627)
> I tried searching for code where we authenticate a peer against its NetAddress.ID and couldn't find it. I don't see a reason to switch to Noise, but a need to ensure that the node's ID is authenticated e.g. after dialing from the address book.
* p2p: check secret conn id matches dialed id
* Fix all p2p tests & make code compile
* add simple test for dialing with wrong ID
* update changelog
* address review comments
* yet another place where to use IDAddressString and fix
testSetupMultiplexTransport
* changelog: use issue number instead of PR number
* follow up to #3291
- rpc/test/helpers.go add StopTendermint(node) func
- remove ensureDir(filepath.Dir(walFile), 0700)
- mempool/mempool_test.go add type cleanupFunc func()
* cmd/show_validator: wrap err to make it more clear
* improve ResetTestRootWithChainID() concurrency safety
Rely on ioutil.TempDir() to create test root directories and ensure
multiple same-chain id test cases can run in parallel.
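A sketch of the TempDir-based approach (illustrative names, not the actual config test helper): each call creates its own root under the OS tempdir and returns a cleanup func.
```
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
)

// resetTestRoot gives every caller its own ioutil.TempDir-based root, so
// same-chain-id tests can run in parallel without sharing directories.
func resetTestRoot(testName, chainID string) (rootDir string, cleanup func()) {
	rootDir, err := ioutil.TempDir("", fmt.Sprintf("%s-%s_", chainID, testName))
	if err != nil {
		log.Fatal(err)
	}
	if err := os.MkdirAll(filepath.Join(rootDir, "config"), 0700); err != nil {
		log.Fatal(err)
	}
	return rootDir, func() { os.RemoveAll(rootDir) }
}

func main() {
	root, cleanup := resetTestRoot("TestExample", "test-chain")
	defer cleanup()
	fmt.Println("test root:", root)
}
```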
* Update config/toml.go
Co-Authored-By: alessio <quadrispro@ubuntu.com>
* clean up test directories after completion
Closes: #1034
* Remove redundant EnsureDir call
* s/PanicSafety()/panic()/s
* Put create dir functionality back in ResetTestRootWithChainID
* Place test directories in OS's tempdir
In modern UNIX and UNIX-like systems /tmp is very often
mounted as tmpfs. This might speed test execution a bit.
* Set 0700 to a const
* rootsDirs -> configRootDirs
* Don't double remove directories
* Avoid global variables
* Fix consensus tests
* Reduce defer stack
* Address review comments
* Try to fix tests
* Update CHANGELOG_PENDING.md
Co-Authored-By: alessio <quadrispro@ubuntu.com>
* Update consensus/common_test.go
Co-Authored-By: alessio <quadrispro@ubuntu.com>
* Update consensus/common_test.go
Co-Authored-By: alessio <quadrispro@ubuntu.com>
Earlier this week somebody posted this in GoS Riot chat:
```
E[2019-02-12|10:38:37.596] Corrupted entry. Skipping... module=consensus wal=/home/gaia/.gaiad/data/cs.wal/wal err="DataCorruptionError[length 878916964 exceeded maximum possible value of 1048576 bytes]"
E[2019-02-12|10:38:37.596] Corrupted entry. Skipping... module=consensus wal=/home/gaia/.gaiad/data/cs.wal/wal err="DataCorruptionError[length 825701731 exceeded maximum possible value of 1048576 bytes]"
E[2019-02-12|10:38:37.596] Corrupted entry. Skipping... module=consensus wal=/home/gaia/.gaiad/data/cs.wal/wal err="DataCorruptionError[length 1631073634 exceeded maximum possible value of 1048576 bytes]"
E[2019-02-12|10:38:37.596] Corrupted entry. Skipping... module=consensus wal=/home/gaia/.gaiad/data/cs.wal/wal err="DataCorruptionError[length 912418148 exceeded maximum possible value of 1048576 bytes]"
E[2019-02-12|10:38:37.600] Corrupted entry. Skipping... module=consensus wal=/home/gaia/.gaiad/data/cs.wal/wal err="DataCorruptionError[failed to read data: EOF]"
E[2019-02-12|10:38:37.600] Error on catchup replay. Proceeding to start ConsensusState anyway module=consensus err="Cannot replay height 7242. WAL does not contain #ENDHEIGHT for 7241"
E[2019-02-12|10:38:37.861] Error dialing peer module=p2p err="dial tcp 35.183.126.181:26656: i/o timeout
```
Note the length error messages. What has probably happened is that the length field got corrupted. I've looked at the code and noticed that we don't check the msg size during encoding. This PR fixes that. It also improves a few error messages in WALDecoder.
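A sketch of the missing check (the real encoder also writes a CRC and uses its own buffer; only the length guard is shown, and the constant name is illustrative):
```
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// maxMsgSizeBytes mirrors the 1 MB cap mentioned in the error messages above.
const maxMsgSizeBytes = 1024 * 1024

// encodeWALMessage refuses to write a record whose length exceeds what the
// decoder will accept, instead of producing an entry that is later reported
// as corrupted. Sketch only.
func encodeWALMessage(w *bytes.Buffer, data []byte) error {
	if len(data) > maxMsgSizeBytes {
		return fmt.Errorf("msg is too big: %d bytes, max: %d bytes", len(data), maxMsgSizeBytes)
	}
	var length [4]byte
	binary.BigEndian.PutUint32(length[:], uint32(len(data)))
	w.Write(length[:])
	w.Write(data)
	return nil
}

func main() {
	var buf bytes.Buffer
	fmt.Println(encodeWALMessage(&buf, []byte("a small record")))
	fmt.Println(encodeWALMessage(&buf, make([]byte, maxMsgSizeBytes+1)))
}
```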
* rpc/net_info: change RemoteIP type from net.IP to String
Before:
"AAAAAAAAAAAAAP//rB8ktw=="
which is amino-encoded net.IP byte slice
After:
"192.0.2.1"
Fixes#3251
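For illustration, the change amounts to exposing `net.IP.String()` in the response (a sketch, not the actual net_info types):
```
package main

import (
	"encoding/json"
	"fmt"
	"net"
)

// peerInfo exposes the remote IP as a plain string via net.IP.String(), so the
// JSON response is human-readable instead of an amino/base64 byte slice.
type peerInfo struct {
	RemoteIP string `json:"remote_ip"`
}

func main() {
	ip := net.ParseIP("192.0.2.1")
	out, _ := json.Marshal(peerInfo{RemoteIP: ip.String()})
	fmt.Println(string(out)) // {"remote_ip":"192.0.2.1"}
}
```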
* rpc/net_info: non-empty response in docs
* not related to linter: remove obsolete constants:
- `Insecure` and `Secure` and type `Security` are not used anywhere
* not related to linter: update example
- NewInsecure was deleted; change example to NewRemoteDB
* address: Binds to all network interfaces (gosec):
- bind to localhost instead of 0.0.0.0
- regenerate test key and cert for this purpose (was valid for ::) and
otherwise we would see:
transport: authentication handshake failed: x509: certificate is
valid for ::, not 127.0.0.1\"
(used https://github.com/google/keytransparency/blob/master/scripts/gen_server_keys.sh
to regenerate certs)
* use sha256 in tests instead of md5; time difference is negligible
* nolint usage of math/rand in test and add comment on its import
- crypto/rand is slower and we do not need something more secure in tests
* enable linter in circle-ci
* another nolint math/rand in test
* replace another occurrence of md5
* consistent comment about importing math/rand
* types.NewCommit
* use types.NewCommit everywhere
* fix log in unsafe_reset
* memoize height and round in constructor
* notes about deprecating toVote
* bring back memoizeHeightRound
* evidence: NewEvidencePool takes evidenceDB
* evidence: failing TestStoreCommitDuplicate
tendermint/security#35
* GetEvidence -> GetEvidenceInfo
* fix TestStoreCommitDuplicate
* comment in VerifyEvidence
* add check if evidence was already seen
- modify EvidencePool interface (EvidenceStore is not known in ApplyBlock):
- add IsCommitted method to iface
- add test
* update changelog
* fix TestStoreMark:
- priority in evidence info gets reset to zero after evidence gets committed
* review comments: simplify EvidencePool.IsCommitted
- delete obsolete EvidenceStore.IsCommitted
* add simple test for IsCommitted
* update changelog: this is actually breaking (PR number still missing)
* fix TestStoreMark:
- priority in evidence info gets reset to zero after evidence gets
committed
* review suggestion: simplify return
* Initial commit for 3181..still early
* unit test updates
* unit test updates
* fix check of dups across updates and deletes
* simplify the processChange() func
* added overflow check utest
* Added checks for empty valset, new utest
* deepcopy changes in processUpdate()
* moved to new API, fixed tests
* test cleanup
* address review comments
* make sure votePower > 0
* gofmt fixes
* handle duplicates and invalid values
* more work on tests, review comments
* Renamed and explained K
* make TestVal private
* split verifyUpdatesAndComputeNewPriorities.., added check for deletes
* return error if validator set is empty after processing changes
* address review comments
* lint err
* Fixed the total voting power and added comments
* fix lint
* fix lint
* failing test
* fix infinite loop in addrbook
There are cases where we only have a small number of addresses marked
good ("old"), but the selection mechanism keeps trying to select more of these
addresses, and hence ends up in an infinite loop. Here we fix this to
only try and select such "old" addresses if we have enough of them. Note this
means, if we don't have enough of them, we may return more "new"
addresses than otherwise expected by the newSelectionBias.
This whole GetSelectionWithBias method probably needs to be rewritten,
but this is a quick fix for the issue.
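A rough sketch of the guard (illustrative names, not the actual GetSelectionWithBias code): fall back to the other bucket type when the preferred one has run out of addresses.
```
package main

import "fmt"

// selectBucketType only picks from the "old" buckets while old addresses
// remain; otherwise it falls back to "new" addresses, which means the
// selection may hold more "new" addresses than the bias alone would suggest.
func selectBucketType(wantOld bool, numOldLeft, numNewLeft int) string {
	if wantOld && numOldLeft == 0 {
		wantOld = false // no "old" addresses left: fall back to "new"
	}
	if !wantOld && numNewLeft == 0 && numOldLeft > 0 {
		wantOld = true // symmetric guard for the "new" side
	}
	if wantOld {
		return "old"
	}
	return "new"
}

func main() {
	fmt.Println(selectBucketType(true, 0, 42)) // "new": no old addresses left
	fmt.Println(selectBucketType(false, 7, 0)) // "old": no new addresses left
}
```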
* changelog
* fix infinite loop if not enough new addrs
* fix another potential infinite loop
if a.nNew == 0 -> pickFromOldBucket=true, but we don't have enough items
(a.nOld > len(oldBucketToAddrsMap) false)
* Revert "fix another potential infinite loop"
This reverts commit 146540c112.
* check num addresses instead of buckets, new test
* fixed the int division
* add slack to bias % in test, lint fixes
* Added checks for selection content in test
* test cleanup
* Apply suggestions from code review
Co-Authored-By: ebuchman <ethan@coinculture.info>
* address review comments
* change after Anton's review comments
* use the same docker image we use for testing
when building a binary for localnet
* switch back to circleci classic
* more review comments
* more review comments
* refactor addrbook_test
* build linux binary inside docker
in attempt to fix
```
--> Running dep
+ make build-linux
GOOS=linux GOARCH=amd64 make build
make[1]: Entering directory `/home/circleci/.go_workspace/src/github.com/tendermint/tendermint'
CGO_ENABLED=0 go build -ldflags "-X github.com/tendermint/tendermint/version.GitCommit=`git rev-parse --short=8 HEAD`" -tags 'tendermint' -o build/tendermint ./cmd/tendermint/
p2p/pex/addrbook.go:373:13: undefined: math.Round
```
* change dir from /usr to /go
* use concrete Go version for localnet binary
* check for nil addresses just to be sure
* WIP: Starts adding remote signer test harness
This commit adds a new command to Tendermint to allow for us to build a
standalone binary to test remote signers such as KMS
(https://github.com/tendermint/kms).
Right now, all it does is test that the local public key matches the
public key reported by the client, and fails at the point where it
attempts to get the client to sign a proposal.
* Fixes typo
* Fixes proposal validation test
This commit fixes the proposal validation test as per #3149. It also
moves the test harness into its own internal package to isolate its
exports from the `privval` package.
* Adds vote signing validation
* Applying recommendations from #3149
* Adds function descriptions for test harness
* Adds ability to ask remote signer to shut down
Prior to this commit, the remote signer needs to manually be shut down,
which is not ideal for automated testing. This commit allows us to send
a poison pill message to the KMS to let it shut down gracefully once
testing is done (whether the tests pass or fail).
* Adds tests for remote signer test harness
This commit makes some minor modifications to a few files to allow for
testing of the remote signer test harness. Two tests are added here:
checking for a fully successful (the ideal) case, and for the case where
the maximum number of retries has been reached when attempting to accept
incoming connections from the remote signer.
* Condenses serialization of proposals and votes using existing Tendermint functions
* Removes now-unnecessary amino import and codec
* Adds error message for vote signing failure
* Adds key extraction command for integration test
Took the code from here:
https://gist.github.com/Liamsi/a80993f24bff574bbfdbbfa9efa84bc7 to
create a simple utility command to extract a key from a local Tendermint
validator for use in KMS integration testing.
* Makes path expansion success non-compulsory
* Fixes segfault on SIGTERM
We need an additional variable to keep track of whether we're
successfully connected, otherwise hitting Ctrl+Break during execution
causes a segmentation fault. This now allows for a clean shutdown.
* Consolidates shutdown checks
* Adds comments indicating codes for easy lookup
* Adds Docker build for remote signer harness
Updates the `DOCKER/build.sh` and `DOCKER/push.sh` files to allow one to
override the image name and Dockerfile using environment variables.
Updates the primary `Makefile` as well as the `DOCKER/Makefile` to allow
for building the `remote_val_harness` Docker image.
* Adds build_remote_val_harness_docker_image to .PHONY
* Removes remote signer poison pill messaging functionality
* Reduces fluff code in command line parsing
As per
https://github.com/tendermint/tendermint/pull/3149#pullrequestreview-196171788,
this reduces the amount of fluff code in the PR down to the bare
minimum.
* Fixes ordering of error check and info log
* Moves remote_val_harness cmd into tools folder
It seems to make sense to rather keep the remote signer test harness in
its own tool folder (now rather named `tm-signer-harness` to keep with
the tool naming convention). It is actually a separate tool, not meant
to be one of the core binaries, but supplementary and supportive.
* Updates documentation for tm-signer-harness
* Refactors flag parsing to be more compact and less redundant
* Adds version sub-command help
* Removes extraneous flags parsing
* Adds CHANGELOG_PENDING entry for tm-signer-harness
* Improves test coverage
Adds a few extra parameters to the `MockPV` type to fake broken vote and
proposal signing. Also adds some more tests for the test harness so as
to increase coverage for failed cases.
* Fixes formatting for CHANGELOG_PENDING.md
* Fix formatting for documentation config
* Point users towards official Tendermint docs for tools documentation
* Point users towards official Tendermint docs for tm-signer-harness
* Remove extraneous constant
* Rename TestHarness.sc to TestHarness.spv for naming consistency
* Refactor to remove redundant goroutine
* Refactor conditional to cleaner switch statement and better error handling for listener protocol
* Remove extraneous goroutine
* Add note about installing tmkms via Cargo
* Fix typo in naming of output signing key
* Add note about where to find chain ID
* Replace /home/user with ~/ for brevity
* Fixes "signer.key" typo
* Minor edits clarifying the tm-signer-harness build/setup process
* p2p/conn: fix deadlock in FlushStop/OnStop
* makefile: set_with_deadlock
* close doneSendRoutine at end of sendRoutine
* conn: initialize channs in OnStart
* node: decrease retry conn timeout in test
Should fix#3256
The retry timeout was set to the default, which is the same as the
accept timeout, so it's no wonder this would fail. Here we decrease the
retry timeout so we can try many times before the accept timeout.
* p2p: increase handshake timeout in test
This fails sometimes, presumably because the handshake timeout is so low
(only 50ms). So increase it to 1s. Should fix#3187
* privval: fix race with ping. closes#3237
Pings happen in a go-routine and can happen concurrently with other
messages. Since we use a request/response protocol, we expect to send a
request and get back the corresponding response. But with pings
happening concurrently, this assumption could be violated. We were using
a mutex, but only a RWMutex, where the RLock was being held for sending
messages - this was to allow the underlying connection to be replaced if
it fails. Turns out we actually need to use a full lock (not just a read
lock) to prevent multiple requests from happening concurrently.
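A sketch of the locking change (a simplified stand-in for the remote signer client): a full mutex is held around each request/response pair, so a concurrent Ping cannot interleave with a signing request and read the other call's response.
```
package main

import (
	"fmt"
	"sync"
)

// signerClient holds a full sync.Mutex (not an RWMutex read lock) around each
// request/response exchange, serialising Ping and signing calls. Sketch only.
type signerClient struct {
	mtx  sync.Mutex // was an RWMutex with RLock held while sending
	send func(req string) (resp string)
}

func (c *signerClient) call(req string) string {
	c.mtx.Lock() // exclusive: one request/response in flight at a time
	defer c.mtx.Unlock()
	return c.send(req)
}

func main() {
	c := &signerClient{send: func(req string) string { return "resp-for-" + req }}
	var wg sync.WaitGroup
	for _, req := range []string{"ping", "sign-vote"} {
		wg.Add(1)
		go func(r string) { defer wg.Done(); fmt.Println(c.call(r)) }(req)
	}
	wg.Wait()
}
```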
* node: fix test name. DelayedStop -> DelayedStart
* autofile: Wait() method
In the TestWALTruncate in consensus/wal_test.go we remove the WAL
directory at the end of the test. However the wal.Stop() does not
properly wait for the autofile group to finish shutting down. Hence it
was possible that the group's go-routine is still running when the
cleanup happens, which causes a panic since the directory disappeared.
Here we add a Wait() method to properly wait until the go-routine exits
so we can safely clean up. This fixes#2852.
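A sketch of the Wait() pattern (a generic service stand-in, not the autofile group itself): the goroutine closes a done channel on exit and Wait blocks on it.
```
package main

import (
	"fmt"
	"time"
)

// service: the processing goroutine closes `done` when it exits, and Wait
// blocks on it so callers only clean up after the goroutine has finished.
type service struct {
	quit chan struct{}
	done chan struct{}
}

func (s *service) start() {
	go func() {
		defer close(s.done)
		for {
			select {
			case <-s.quit:
				return
			case <-time.After(10 * time.Millisecond):
				// periodic work, e.g. flushing the autofile group
			}
		}
	}()
}

func (s *service) Stop() { close(s.quit) }
func (s *service) Wait() { <-s.done }

func main() {
	s := &service{quit: make(chan struct{}), done: make(chan struct{})}
	s.start()
	s.Stop()
	s.Wait() // now it is safe to remove the WAL directory
	fmt.Println("goroutine finished; safe to clean up")
}
```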
* types: memoize height/round in commit instead of first vote
* types: commit.ValidateBasic in VerifyCommit
* types: new CommitSig alias for Vote
In preparation for reducing the redundancy in Commits, we introduce the
CommitSig as an alias for Vote. This is non-breaking on the protocol,
and minor breaking on the Go API, as Commit now contains a list of
CommitSig instead of Vote.
* remove dependence on ToVote
* update some comments
* fix tests
* fix tests
* fixes from review
* switch from fork (tendermint/btcd) to orig package (btcsuite/btcd); also
- remove obsolete check in test `size != -1` is always true
- WIP as the serialization still needs to be wrapped
* WIP: wrap signature & privkey, pubkey needs to be wrapped as well
* wrap pubkey too
* use "github.com/ethereum/go-ethereum/crypto/secp256k1" if cgo is
available, else use "github.com/btcsuite/btcd/btcec" and take care of
lower-S when verifying
Annoyingly, had to disable pruning when importing
github.com/ethereum/go-ethereum/ :-/
* update comment
* update comment
* emulate signature_nocgo.go for additional benchmarks:
592bf6a59c/crypto/signature_nocgo.go (L60-L76)
* use our format (r || s) in lower-s form when in the non-cgo case
* remove comment about using the C library directly
* vendor github.com/btcsuite/btcd too
* Add test for the !cgo case
* update changelog pending
Closes #3162, #3163
Refs #1958, #2091, tendermint/btcd#1
> Implement a check for the blacklisted low order points, ala the X25519 has_small_order() function in libsodium
(#3010 (comment))
resolves first half of #3010
* close peer's connection to avoid fd leak
Fixes#2967
* rename peer#Addr to RemoteAddr
* fix test
* fixes after Ethan's review
* bring back the check
* changelog entry
* write a test for switch#acceptRoutine
* increase timeouts? :(
* remove extra assertNPeersWithTimeout
* simplify test
* assert number of peers (just to be safe)
* Cleanup in OnStop
* run tests with verbose flag on CircleCI
* spawn a reading routine to prevent connection from closing
* get port from the listener
random port is faster, but often results in
```
panic: listen tcp 127.0.0.1:44068: bind: address already in use [recovered]
panic: listen tcp 127.0.0.1:44068: bind: address already in use
goroutine 79 [running]:
testing.tRunner.func1(0xc0001bd600)
/usr/local/go/src/testing/testing.go:792 +0x387
panic(0x974d20, 0xc0001b0500)
/usr/local/go/src/runtime/panic.go:513 +0x1b9
github.com/tendermint/tendermint/p2p.MakeSwitch(0xc0000f42a0, 0x0, 0x9fb9cc, 0x9, 0x9fc346, 0xb, 0xb42128, 0x0, 0x0, 0x0, ...)
/home/vagrant/go/src/github.com/tendermint/tendermint/p2p/test_util.go:182 +0xa28
github.com/tendermint/tendermint/p2p.MakeConnectedSwitches(0xc0000f42a0, 0x2, 0xb42128, 0xb41eb8, 0x4f1205, 0xc0001bed80, 0x4f16ed)
/home/vagrant/go/src/github.com/tendermint/tendermint/p2p/test_util.go:75 +0xf9
github.com/tendermint/tendermint/p2p.MakeSwitchPair(0xbb8d20, 0xc0001bd600, 0xb42128, 0x2f7, 0x4f16c0)
/home/vagrant/go/src/github.com/tendermint/tendermint/p2p/switch_test.go:94 +0x4c
github.com/tendermint/tendermint/p2p.TestSwitches(0xc0001bd600)
/home/vagrant/go/src/github.com/tendermint/tendermint/p2p/switch_test.go:117 +0x58
testing.tRunner(0xc0001bd600, 0xb42038)
/usr/local/go/src/testing/testing.go:827 +0xbf
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:878 +0x353
exit status 2
FAIL github.com/tendermint/tendermint/p2p 0.350s
```
* base verifier: bc->bv and check chainid
* improve some comments
* comments in dynamic verifier
* fix comment in doc about BaseVerifier
It requires the validator set to perfectly match.
* failing test for #2862
* move errTooMuchChange to types. fixes#2862
* changelog, comments
* ic -> dv
* update comment, link to issue
* more proposer priority tests
- test that we don't reset to zero when updating / adding
- test that same power validators alternate
* add another test to track / simulate similar behaviour as in #2960
* address some of Chris' review comments
* address some more of Chris' review comments
* temporarily pushing branch with the following changes:
The total power might change if:
- a validator is added
- a validator is removed
- a validator is updated
Decrement the accums (of all validators) directly after any of these events
(by the inverse of the change)
* Fix 2960 by re-normalizing / scaling priorities to be in bounds of total
power, additionally:
- remove heap where it doesn't make sense
- avg. only at the end of IncrementProposerPriority instead of each
iteration
- update (and slightly improve)
TestAveragingInIncrementProposerPriorityWithVotingPower to reflect
above changes
* Fix 2960 by re-normalizing / scaling priorities to be in bounds of total
power, additionally:
- remove heap where it doesn't make sense
- avg. only at the end of IncrementProposerPriority instead of each
iteration
- update (and slightly improve)
TestAveragingInIncrementProposerPriorityWithVotingPower to reflect
above changes
* fix tests
* add comment
* update changelog pending & some minor changes
* comment about division will floor the result & fix typo
* Update TestLargeGenesisValidator:
- remove TODO and increase large genesis validator's voting power
accordingly
* move changelog entry to P2P Protocol
* Ceil instead of flooring when dividing & update test
* quickly fix failing TestProposerPriorityDoesNotGetResetToZero:
- divide by Ceil((maxPriority - minPriority) / 2*totalVotingPower)
* fix typo: rename getValWitMostPriority -> getValWithMostPriority
* test proposer frequencies
* return absolute value for diff. keep testing
* use for loop for div
* cleanup, more tests
* spellcheck
* get rid of using floats: manually ceil where necessary
* Remove float, simplify, fix tests to match chris's proof (#3157)
* consensus: createProposalBlock function
* blockExecutor.CreateProposalBlock
- factored out of consensus pkg into a method on blockExec
- new private interfaces for mempool ("txNotifier") and evpool with one function each
- consensus tests still require more mempool methods
* failing test for CreateProposalBlock
* Fix bug in including evidence into block
* evidence: change maxBytes to maxSize
* MaxEvidencePerBlock
- changed to return both the max number and the max bytes
- preparation for #2590
* changelog
* fix linter
* Fix from review
Co-Authored-By: ebuchman <ethan@coinculture.info>
* Consolidates deadline tests for privval Unix/TCP
Following on from #3115 and #3132, this converts fundamental timeout
errors from the client's `acceptConnection()` method so that these can
be detected by the test for the TCP connection.
Timeout deadlines are now tested for both TCP and Unix domain socket
connections.
There is also no need for the additional secret connection code: the
connection will time out at the `acceptConnection()` phase for TCP
connections, and will time out when attempting to obtain the
`RemoteSigner`'s public key for Unix domain socket connections.
* Removes extraneous logging
* Adds IsConnTimeout helper function
This commit adds a helper function to detect whether an error is either
a fundamental networking timeout error, or an `ErrConnTimeout` error
specific to the `RemoteSigner` class.
* Adds a test for the IsConnTimeout() helper function
* Separates tests logically for IsConnTimeout
* Adds a random suffix to temporary Unix sockets during testing
* Adds Unix domain socket tests for client (FAILING)
This adds Unix domain socket tests for the privval client. Right now,
one of the tests (TestRemoteSignerRetry) fails, probably because the
Unix domain socket state is known instantaneously on both sides by the
OS. Committing this to collaborate on the error.
* Removes extraneous logging
* Completes testing of Unix sockets client version
This completes the testing of the client connecting via Unix sockets.
There are two specific tests (TestSocketPVDeadline and
TestRemoteSignerRetryTCPOnly) that are only relevant to TCP connections.
* Renames test to show TCP-specificity
* Adds testing into closures for consistency (forgot previously)
* Moves test specific to RemoteSigner into own file
As per discussion on #3132, `TestRemoteSignerRetryTCPOnly` doesn't
really belong with the client tests. This moves it into its own file
related to the `RemoteSigner` class.
This cuts out two tests by constructing test cases and iterating through
them, rather than having separate sets of tests for TCP and Unix listeners.
This is as per the feedback from #3121.
* Begin simple merkle compatibility PR
* Fix query_test
* Use trillian test vectors
* Change the split point per RFC 6962
* update spec
* refactor innerhash to match spec
* Update changelog
* Address @liamsi's comments
* Write the comment requested by @liamsi
* Consistent order of Timestamp/BlockID fields in CanonicalVote and
CanonicalProposal
* update spec too
* Introduce and use IsZero & IsComplete:
- update IsZero method according to spec and introduce IsComplete
- use methods in validate basic to validate: proposals come with a
"complete" blockId and votes are either complete or empty
- update spec: BlockID.IsNil() -> BlockID.IsZero() and fix typo
* BlockID comes first
* fix tests
* more proposer priority tests
- test that we don't reset to zero when updating / adding
- test that same power validators alternate
* add another test to track / simulate similar behaviour as in #2960
* address some of Chris' review comments
* address some more of Chris' review comments
* Validating that there are txs in the query results before looping through the array
* Created tests to validate the error has been fixed
* Added comments
* Fixing misspelling
* check if the variable "skipCount" is bigger than zero. If it is not, we set it to 0. If it is, we do not do anything.
* using function that validates the skipCount variable
* undo Gopkg.lock changes
* Close and recreate a RemoteSigner on err
* Update changelog
* Address Anton's comments / suggestions:
- update changelog
- restart TCPVal
- shut down on `ErrUnexpectedResponse`
* re-init remote signer client with fresh connection if Ping fails
- add/update TODOs in secret connection
- rename tcp.go -> tcp_client.go, same with ipc to clarify their purpose
* account for the fact that `conn` returned by `waitConnection` can be `nil`
- also add TODO about RemoteSigner conn field
* Tests for retrying: IPC / TCP
- shorter info log on success
- set conn and use it in tests to close conn
* Tests for retrying: IPC / TCP
- shorter info log on success
- set conn and use it in tests to close conn
- add rwmutex for conn field in IPC
* comments and doc.go
* fix ipc tests. fixes#2677
* use constants for tests
* cleanup some error statements
* fixes#2784, race in tests
* remove print statement
* minor fixes from review
* update comment on sts spec
* cosmetics
* p2p/conn: add failing tests
* p2p/conn: make SecretConnection thread safe
* changelog
* IPCVal signer refactor
- use a .reset() method
- don't use embedded RemoteSignerClient
- guard RemoteSignerClient with mutex
- drop the .conn
- expose Close() on RemoteSignerClient
* apply IPCVal refactor to TCPVal
* remove mtx from RemoteSignerClient
* consolidate IPCVal and TCPVal, fixes#3104
- done in tcp_client.go
- now called SocketVal
- takes a listener in the constructor
- make tcpListener and unixListener contain all the differences
* delete ipc files
* introduce unix and tcp dialer for RemoteSigner
* rename files
- drop tcp_ prefix
- rename priv_validator.go to file.go
* bring back listener options
* fix node
* fix priv_val_server
* fix node test
* minor cleanup and comments
* Don't use pointer receivers for PubKeyMultisigThreshold
* test that showcases panic when PubKeyMultisigThreshold are used in sdk:
- deserialization will fail in `readInfo` which tries to read a
`crypto.PubKey` into a `localInfo` (called by
cosmos-sdk/client/keys.GetKeyInfo)
* Update changelog
* Rename routeTable to nameTable, multisig key is no longer a pointer
* sed -i 's/PubKeyAminoRoute/PubKeyAminoName/g' `grep -lrw PubKeyAminoRoute .`
upon Jae's request
* AminoRoutes -> AminoNames
* sed -e 's/PrivKeyAminoRoute/PrivKeyAminoName/g'
* Update crypto/encoding/amino/amino.go
Co-Authored-By: alessio <quadrispro@ubuntu.com>
* fix build scripts
Search for the right variable when introspecting Go code. `Version` was
renamed to `TMCoreSemVer`.
This is a regression introduced in
b95ac688af
* fix all `Version` introspections.
Use `TMCoreSemVer` instead of `Version`
* split immutable and mutable parts of priv_validator.json
* fix bugs
* minor changes
* retrig test
* delete scripts/wire2amino.go
* fix test
* fixes from review
* privval: remove mtx
* rearrange priv_validator.go
* upgrade path
* write tests for the upgrade
* fix for unsafe_reset_all
* add test
* add reset test
* config: cors options are arrays of strings, not strings
Fixes #2980
* docs: update tendermint-core/configuration.html page
* set allow_duplicate_ip to false
* in `tendermint testnet`, set allow_duplicate_ip to true
Refs #2712
* fixes after Ismail's review
* crypto: revert to mainline Go crypto lib
We used to use a fork for a modified bcrypt so we could pass our own
randomness but this was largely unnecessary, unused, and a burden.
So now we just use the mainline Go crypto lib.
* changelog
* fix tests
* version and changelog
* config: cors options are arrays of strings, not strings
Fixes #2980
* docs: update tendermint-core/configuration.html page
* set allow_duplicate_ip to false
* in `tendermint testnet`, set allow_duplicate_ip to true
Refs #2712
* fixes after Ismail's review
* Revert "set allow_duplicate_ip to false"
This reverts commit 24c1094ebc.
* optimize addProposalBlockPart
* optimize addProposalBlockPart
* if ProposalBlockParts and LockedBlockParts both exist, let LockedBlockParts overwrite ProposalBlockParts.
* fix tryAddBlock
* broadcast lockedBlockParts in higher priority
* when appHeight==0, it's better to fetch genDoc than state.validators.
* not save state if replay from height 1
* only save state if replay from height 1 when stateHeight is also 1
* only save state if replay from height 1 when stateHeight is also 1
* only save state if replay from height 0 when stateHeight is also 0
* handshake info's response version only update when stateHeight==0
* save the handshake responseInfo appVersion
(left after committing a block)
Fixes #2961
--------------
ORIGINAL ISSUE
Tendermint version : 0.26.4-b771798d
ABCI app : kv-store
Environment:
OS (e.g. from /etc/os-release): macOS 10.14.1
What happened: Set mempool.recheck = false and create empty block = false in
config.toml. When transactions are added right after a new empty block is
proposed but before it is committed, the proposer won't propose a new block
for those transactions immediately afterwards. Those transactions are stuck in
the mempool until a new transaction is added and triggers the proposer.
What you expected to happen: If there is a transaction left in the
mempool, a new block should be proposed immediately.
Have you tried the latest version: yes
How to reproduce it (as minimally and precisely as possible): Fire two
transactions using broadcast_tx_sync with a specific delay between them.
(You may need to try multiple times before the right delay is found; on
my machine the delay is 0.98s.)
Logs (paste a small part showing an error (< 10 lines) or link a
pastebin, gist, etc. containing more of the log file):
https://pastebin.com/0Wt6uhPF
Config (you can paste only the changes you've made): [mempool] recheck =
false create_empty_block = false
Anything else we need to know: In mempool.go, we found that the proposer
will immediately propose a new block if the last committed block has some
transaction (causing the AppHash to change) or mem.notifyTxsAvailable() is
called. Our scenario is as follows.
A transaction is fired; it creates 1 block with 1 tx (lines 1-4 in
the log) and 1 empty block. After the empty block is proposed but before
it is committed, a second transaction is fired and added to the mempool
(lines 8-16). Now, since the last committed block is empty and
mem.notifyTxsAvailable() is called only if mempool.recheck = true,
the proposer won't immediately propose a new block, causing the second
transaction to be stuck in the mempool until another transaction is added
and triggers mem.notifyTxsAvailable().
Fixes #2715
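A minimal sketch of the kind of fix this points at, with hypothetical names (`Mempool`, `notifyTxsAvailable`, `recheck`) rather than the real mempool API: signal the proposer after every Update whenever transactions remain, instead of relying on the recheck path alone.
```go
package main

import "fmt"

type Tx []byte

// Mempool is a cut-down stand-in for the real mempool; fields are assumptions.
type Mempool struct {
	txs          []Tx
	recheck      bool          // mirrors mempool.recheck from config.toml
	txsAvailable chan struct{} // the proposer waits on this channel
}

// Update is called after a block is committed.
func (mem *Mempool) Update(committed []Tx) {
	mem.removeCommitted(committed)

	if mem.recheck {
		mem.recheckTxs() // this path already ends up signalling availability
	}

	// The fix described above: even with recheck disabled, signal that txs
	// are available whenever some remain, so the proposer does not stall
	// until the next incoming transaction.
	if len(mem.txs) > 0 {
		mem.notifyTxsAvailable()
	}
}

func (mem *Mempool) notifyTxsAvailable() {
	select {
	case mem.txsAvailable <- struct{}{}:
	default: // already notified
	}
}

func (mem *Mempool) removeCommitted(committed []Tx) { /* elided */ }
func (mem *Mempool) recheckTxs()                    { /* elided */ }

func main() {
	mem := &Mempool{
		txs:          []Tx{Tx("stuck tx")},
		recheck:      false,
		txsAvailable: make(chan struct{}, 1),
	}
	mem.Update(nil)
	<-mem.txsAvailable // the proposer side wakes up because a tx is pending
	fmt.Println("proposer notified")
}
```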
In crawlPeersRoutine, which runs when seedMode is enabled, there is
logic that disconnects peers at 3-hour intervals based on a duration
value. The duration is calculated from the created value of the
MConnection. When the MConnection is first created, the created value is
never initialized, so instead of being disconnected every 3 hours, peers
are disconnected immediately. As a result, normal nodes connect to the
seed node and are disconnected right away, and address exchange does not
work properly.
https://github.com/tendermint/tendermint/blob/master/p2p/pex/pex_reactor.go#L629
This point does not work correctly.
I think the `created` variable at
https://github.com/tendermint/tendermint/blob/master/p2p/conn/connection.go#L148
is missing the current-time initialization.
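A minimal sketch of the suggested fix, assuming a hypothetical cut-down `MConnection` (not the real p2p/conn type): stamp `created` with the current time when the connection is constructed, so the 3-hour crawl interval is measured from a real timestamp.
```go
package main

import (
	"fmt"
	"time"
)

// MConnection is a stand-in showing only the field relevant to the bug.
type MConnection struct {
	created time.Time // time the connection was established
	// ... other fields elided
}

func NewMConnection() *MConnection {
	return &MConnection{
		created: time.Now(), // the fix: initialize so age checks are meaningful
	}
}

// Age is what a crawl routine could use to decide whether a peer has been
// connected long enough (e.g. 3 hours) to be disconnected and recycled.
func (c *MConnection) Age() time.Duration {
	return time.Since(c.created)
}

func main() {
	c := NewMConnection()
	fmt.Println("connection younger than 3h:", c.Age() < 3*time.Hour) // true right after creation
}
```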
* p2p: panic on transport error
Addresses #2823. Currently, the acceptRoutine exits if the transport returns
an error trying to accept a new connection. Once this happens, the node
can't accept any new connections. So here, we panic instead. While we
could potentially be more intelligent by rerunning the acceptRoutine, the
error may indicate something more fundamental (eg. file descriptor limit)
that requires a restart anyways. We can leave it to process managers to
handle that restart, and notify operators about the panic.
* changelog
deliverTxResCh, like any other eventBus (pubsub) channel, is closed when
the eventBus is stopped. We must check if the channel is still open. The
alternative approach is to not close any channels, which seems a bit
odd.
Fixes #2408
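A minimal sketch of the guard being described, with hypothetical names: when receiving from a channel that the event bus may close during shutdown, use the two-value receive and turn a closed channel into an error rather than reading a zero value.
```go
package main

import (
	"errors"
	"fmt"
)

// waitForDeliverTx stands in for the RPC code waiting on deliverTxResCh;
// the real code uses the event bus message types.
func waitForDeliverTx(deliverTxResCh <-chan string) (string, error) {
	res, ok := <-deliverTxResCh
	if !ok {
		// The channel was closed because the eventBus was stopped
		// (e.g. the node is shutting down); report it instead of
		// returning an empty result.
		return "", errors.New("event bus stopped before DeliverTx result arrived")
	}
	return res, nil
}

func main() {
	ch := make(chan string, 1)
	close(ch) // simulate the event bus shutting down
	if _, err := waitForDeliverTx(ch); err != nil {
		fmt.Println("handled gracefully:", err)
	}
}
```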
* WIP: tests for #2785
* rebase onto develop
* add Bucky's test without changing ValidatorSet.Update
* make TestValidatorSetBasic fail
* add ProposerPriority preserving fix to ValidatorSet.Update to fix
TestValidatorSetBasic
* fix randValidator_ to stay in bounds of MaxTotalVotingPower
* check for expected proposer and remove some duplicate code
* actually limit the voting power of random validator ...
* fix test
* types: ValidatorSet.Update preserves ProposerPriority
This solves the other issue discovered as part of #2718,
where Accum (now called ProposerPriority) is reset to
0 every time a validator is updated.
* update changelog
* add test
* update comment
* Update types/validator_set_test.go
Co-Authored-By: ebuchman <ethan@coinculture.info>
* set the accum of a new validator to (-total voting power):
- disincentivize validators to unbond, then rebond to reset their
negative Accum to zero
additional unrelated changes:
- do not capitalize error msgs
- fix typo
* review comments: (re)capitalize errors & delete obsolete comments
* More changes suggested by @melekes
* WIP: do not batch clip (#2809)
* subtract avgAccum on each iteration
- temporarily skip test
* remove unused method safeMulClip / safeMul
* always subtract the avg accum
- temp. skip another test
* remove overflow / underflow tests & add tests for avgAccum:
- add test for computeAvgAccum
- as we subtract the avgAccum now we will not trivially over/underflow
* address @cwgoes' comments
* shift by avg at the end of IncrementAccum
* Add comment to MaxTotalVotingPower
* Guard inputs to not exceed MaxTotalVotingPower
* Address review comments:
- do not fetch current validator from set again
- update error message
* Address a few review comments:
- fix typo
- extract variable
* address more review comments:
- clarify 1.125*totalVotingPower == totalVotingPower + (totalVotingPower >> 3)
* review comments: panic instead of "clipping":
- total voting power is guarded to not exceed MaxTotalVotingPower ->
panic if this invariant is violated
* fix failing test
* Enforce validators can only use the correct pubkey type
* adapt to variable renames
* Address comments from #2636
* separate updating and validation logic
* update spec
* Add test case for TestStringSliceEqual, clarify slice copying code
* Address @ebuchman's comments
* Split up testing validator update execution, and its validation
* update docs
- make install_c cmd (install)
- explain node IDs (quick-start)
- update UPGRADING section (using-tendermint)
* use git clone with JS example
JS devs may not have Go installed and we should not force them to.
* rewrite sentence
Modify lookForHeight to return a height only if there's an equality operator.
Previously, it was returning a height even for range conditions: "height
< 10000".
Fixes #2759
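A minimal sketch of that behaviour, with hypothetical condition types standing in for the tx indexer's query package: only an equality condition on the height tag yields a height; range operators such as `<` are ignored.
```go
package main

import "fmt"

// Hypothetical stand-ins for the query condition types.
type Operator int

const (
	OpEqual Operator = iota
	OpGreater
	OpLess
)

type Condition struct {
	Tag     string
	Op      Operator
	Operand int64
}

// lookForHeight returns a height only when there is an equality condition on
// tx.height; range conditions such as "height < 10000" must not short-circuit
// the search to a single height.
func lookForHeight(conditions []Condition) (height int64) {
	for _, c := range conditions {
		if c.Tag == "tx.height" && c.Op == OpEqual {
			return c.Operand
		}
	}
	return 0
}

func main() {
	fmt.Println(lookForHeight([]Condition{{Tag: "tx.height", Op: OpLess, Operand: 10000}})) // 0
	fmt.Println(lookForHeight([]Condition{{Tag: "tx.height", Op: OpEqual, Operand: 42}}))   // 42
}
```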
* Fixed accepting integer IDs in requests for Tendermint RPC server (#2366)
* added a wrapper interface `jsonrpcid` that represents both string and int IDs in JSON-RPC requests/responses + custom JSON unmarshallers (sketched below)
* changed client-side code in RPC that uses it
* added extra tests for integer IDs
* updated CHANGELOG_PENDING, as suggested by PR instructions
* addressed PR comments
* added table driven tests for request type marshalling/unmarshalling
* expanded handler test to check IDs
* changed pending changelog note
* changed json rpc request/response unmarshalling to use empty interfaces and type switches on ID
* some cleanup
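A minimal sketch of such a wrapper, with hypothetical type names (`jsonrpcid`, `JSONRPCStringID`, `JSONRPCIntID`) rather than the actual RPC library types: a small interface implemented by both string and int ID types, plus a helper that type-switches on the decoded value.
```go
package main

import (
	"encoding/json"
	"fmt"
)

// jsonrpcid is a hypothetical wrapper interface for JSON-RPC request IDs,
// which the spec allows to be either strings or numbers.
type jsonrpcid interface{ isJSONRPCID() }

type JSONRPCStringID string
type JSONRPCIntID int

func (JSONRPCStringID) isJSONRPCID() {}
func (JSONRPCIntID) isJSONRPCID()    {}

// idFromInterface converts a decoded JSON value into a jsonrpcid, rejecting
// anything that is neither a string nor a number.
func idFromInterface(v interface{}) (jsonrpcid, error) {
	switch id := v.(type) {
	case string:
		return JSONRPCStringID(id), nil
	case float64:
		// encoding/json decodes JSON numbers into float64 by default.
		return JSONRPCIntID(int(id)), nil
	default:
		return nil, fmt.Errorf("unsupported JSON-RPC id type %T", v)
	}
}

func main() {
	var raw map[string]interface{}
	_ = json.Unmarshal([]byte(`{"jsonrpc":"2.0","id":1,"method":"status"}`), &raw)
	id, err := idFromInterface(raw["id"])
	fmt.Println(id, err) // 1 <nil>
}
```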
Within every tx in the mempool, we store a 64 bit counter,
as an index for when it was inserted into the mempool.
This counter doesn't really serve any purpose.
It was likely added for debugging at one point.
Removing the counter reclaims memory, which enables greater mempool sizes
and reduces resource use at the same size.
Closes #2835
* #2815 do not broadcast heartbeat proposal when we are non-validator
* #2815 adding preliminary changelog entry
* #2815 cosmetics and added test
* #2815 missed a little detail
- it's enough to call getAddress() once here
* #2815 remove debug logging from tests
* #2815 OK. I seem to be doing something fundamentally wrong here
* #2815 next iteration of proposalHeartbeat tests
- try and use "ensure" pattern in common_test
* #2815 incorporate review comments
* Replaces our current HTTP servers, where connections stay open forever, with ones that have timeouts, to prevent file descriptor exhaustion
* Use the correct handler
* Put in go routines
* fix err
* changelog
* rpc: export Read/WriteTimeout
The `broadcast_tx_commit` endpoint has its own timeout.
If this is longer than the http server's WriteTimeout, the
user will receive an error. Here, we export the WriteTimeout
and set the broadcast_tx_commit timeout to be less than it.
In the future, we should use a config struct for the timeouts
to avoid the need to export. The broadcast_tx_commit timeout
may also become configurable, but we must check that it's less
than the server's WriteTimeout.
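A minimal sketch of the constraint described above, with illustrative names and values (the real timeout lives in the rpc server package): derive the `broadcast_tx_commit` wait from the exported write timeout so the HTTP server never cuts the response off first.
```go
package main

import (
	"fmt"
	"time"
)

// WriteTimeout stands in for the HTTP write timeout exported from the rpc
// server package; the name and value here are illustrative.
const WriteTimeout = 10 * time.Second

func main() {
	// The broadcast_tx_commit handler's own wait must stay below the server's
	// WriteTimeout, or the server closes the connection before the result is
	// written and the caller sees an error.
	broadcastTxCommitTimeout := WriteTimeout - 1*time.Second
	if broadcastTxCommitTimeout >= WriteTimeout {
		panic("broadcast_tx_commit timeout must be less than WriteTimeout")
	}
	fmt.Println("waiting up to", broadcastTxCommitTimeout, "for the tx to be committed")
}
```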
* p2p/conn: FlushStop. Use in pex. Closes #2092
In seed mode, we call StopPeer immediately after Send.
Since flushing msgs to the peer happens in the background,
the peer connection is often closed before the messages are
actually sent out. The new FlushStop method allows all msgs
to first be written and flushed out on the conn before it is closed.
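A minimal sketch of the ordering FlushStop enables, using a toy connection type rather than the real MConnection: buffered messages are written out before the connection is closed, instead of stopping the peer while sends are still queued.
```go
package main

import "fmt"

// conn is a stand-in for the peer connection; the real MConnection buffers
// sends and flushes them from a background routine.
type conn struct{ pending []string }

func (c *conn) Send(msg string) { c.pending = append(c.pending, msg) }

// FlushStop writes out everything that is still buffered and only then closes,
// which is the behaviour the commit above introduces.
func (c *conn) FlushStop() {
	for _, msg := range c.pending {
		fmt.Println("flushed:", msg)
	}
	c.pending = nil
	fmt.Println("connection closed")
}

func main() {
	c := &conn{}
	c.Send("pex addrs")
	// Previously: stopping here could drop "pex addrs" because the background
	// flush had not run yet. With FlushStop the message goes out first.
	c.FlushStop()
}
```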
* fix dummy peer
* typo
* fixes from review
* more comments
* ensure pex doesn't call FlushStop more than once
FlushStop is not safe to call more than once,
but we call it from Receive in a go-routine so Receive
doesn't block.
To ensure we only call it once, we use the lastReceivedRequests
map - if an entry already exists, then FlushStop should already have
been called and we can return.
* Make "Update to validators" msg value pretty #2765
* New format for logging validator updates
* Refactor logging validator updates
* Fix changelog item
* fix merge conflict
* Vagrantfile: install dev_tools
Follow-up on https://github.com/tendermint/tendermint/pull/2824
* update consensus params spec
* fix test name
* rpc_test: panic if failed to start listener
also
- remove http_server#MustListen
- align StartHTTPServer and StartHTTPAndTLSServer functions
* dep: allow minor releases for grpc
* add proposer info to EventCompleteProposal
* create separate EventData structure for CompleteProposal
* can't use rs.Proposal to get BlockID because it is not guaranteed to be set yet
* copying RoundState isn't helping us here
* add Step back to make compatible with original RoundState event. update changelog
* add NewRound event
* fix test
* remove unneeded RoundState
* put height round step into a struct
* pull out ValidatorInfo struct. add ensureProposal assert
* remove height-round-state sub-struct refactor
* minor fixes from review
* Decouple StartHTTP{,AndTLS}Server from Listen()
This should help solve cosmos/cosmos-sdk#2715
* Fix small mistake
* Update StartGRPCServer
* s/rpc/rpcserver/
* Start grpccore.StartGRPCServer in a goroutine
* Reinstate l.Close()
* Fix rpc/lib/test/main.go
* Update code comment
* update changelog and comments
* fix tm-monitor. more comments
because
- they are locked in .lock file already
- individual dependencies can be updated with `dep ensure -update XXX`
- review process (and ^^^) should help us prevent accidental updates
Closes #2798
* use READ lock/unlock in ConsensusState#GetLastHeight
Refs #2721
* do not use defers when there's no need
* fix peer formatting (output its address instead of the pointer)
```
[54310]: E[11-02|11:59:39.851] Connection failed @ sendRoutine module=p2p peer=0xb78f00 conn=MConn{74.207.236.148:26656} err="pong timeout"
```
https://github.com/tendermint/tendermint/issues/2721#issuecomment-435326581
* panic if peer has no state
https://github.com/tendermint/tendermint/issues/2721#issuecomment-435347165
It's confusing that sometimes we check if a peer has a state, but most of
the time we expect it to be there:
1. add79700b5/mempool/reactor.go (L138)
2. add79700b5/rpc/core/consensus.go (L196)
I will change everything to always assume a peer has a state and panic
otherwise; that should help identify issues earlier.
* abci/localclient: extend lock on app callback
App callback should be protected by the lock as well (note this was already
done for InitChainAsync, why not for the others?). Otherwise, when we
execute the block, a tx might come in and call the callback at the same
time we're updating it in execBlockOnProxyApp => DATA RACE
Fixes #2721
Consensus state is locked
```
goroutine 113333 [semacquire, 309 minutes]:
sync.runtime_SemacquireMutex(0xc00180009c, 0xc0000c7e00)
/usr/local/go/src/runtime/sema.go:71 +0x3d
sync.(*RWMutex).RLock(0xc001800090)
/usr/local/go/src/sync/rwmutex.go:50 +0x4e
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).GetRoundState(0xc001800000, 0x0)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:218 +0x46
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusReactor).queryMaj23Routine(0xc0017def80, 0x11104a0, 0xc0072488f0, 0xc007248
9c0)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/reactor.go:735 +0x16d
created by github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusReactor).AddPeer
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/reactor.go:172 +0x236
```
because localClient is locked
```
goroutine 1899 [semacquire, 309 minutes]:
sync.runtime_SemacquireMutex(0xc00003363c, 0xc0000cb500)
/usr/local/go/src/runtime/sema.go:71 +0x3d
sync.(*Mutex).Lock(0xc000033638)
/usr/local/go/src/sync/mutex.go:134 +0xff
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/abci/client.(*localClient).SetResponseCallback(0xc0001fb560, 0xc007868540)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/abci/client/local_client.go:32 +0x33
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/proxy.(*appConnConsensus).SetResponseCallback(0xc00002f750, 0xc007868540)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/proxy/app_conn.go:57 +0x40
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/state.execBlockOnProxyApp(0x1104e20, 0xc002ca0ba0, 0x11092a0, 0xc00002f750, 0xc0001fe960, 0xc000bfc660, 0x110cfe0, 0xc000090330, 0xc9d12, 0xc000d9d5a0, ...)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/state/execution.go:230 +0x1fd
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/state.(*BlockExecutor).ApplyBlock(0xc002c2a230, 0x7, 0x0, 0xc000eae880, 0x6, 0xc002e52c60, 0x16, 0x1f927, 0xc9d12, 0xc000d9d5a0, ...)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/state/execution.go:96 +0x142
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).finalizeCommit(0xc001800000, 0x1f928)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:1339 +0xa3e
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).tryFinalizeCommit(0xc001800000, 0x1f928)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:1270 +0x451
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).enterCommit.func1(0xc001800000, 0x0, 0x1f928)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:1218 +0x90
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).enterCommit(0xc001800000, 0x1f928, 0x0)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:1247 +0x6b8
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).addVote(0xc001800000, 0xc003d8dea0, 0xc000cf4cc0, 0x28, 0xf1, 0xc003bc7ad0, 0xc003bc7b10)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:1659 +0xbad
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).tryAddVote(0xc001800000, 0xc003d8dea0, 0xc000cf4cc0, 0x28, 0xf1, 0xf1, 0xf1)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:1517 +0x59
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).handleMsg(0xc001800000, 0xd98200, 0xc0070dbed0, 0xc000cf4cc0, 0x28)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:660 +0x64b
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).receiveRoutine(0xc001800000, 0x0)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:617 +0x670
created by github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus.(*ConsensusState).OnStart
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/consensus/state.go:311 +0x132
```
tx comes in and CheckTx is executed right when we execute the block
```
goroutine 111044 [semacquire, 309 minutes]:
sync.runtime_SemacquireMutex(0xc00003363c, 0x0)
/usr/local/go/src/runtime/sema.go:71 +0x3d
sync.(*Mutex).Lock(0xc000033638)
/usr/local/go/src/sync/mutex.go:134 +0xff
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/abci/client.(*localClient).CheckTxAsync(0xc0001fb0e0, 0xc002d94500, 0x13f, 0x280, 0x0)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/abci/client/local_client.go:85 +0x47
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/proxy.(*appConnMempool).CheckTxAsync(0xc00002f720, 0xc002d94500, 0x13f, 0x280, 0x1)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/proxy/app_conn.go:114 +0x51
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/mempool.(*Mempool).CheckTx(0xc002d3a320, 0xc002d94500, 0x13f, 0x280, 0xc0072355f0, 0x0, 0x0)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/mempool/mempool.go:316 +0x17b
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/rpc/core.BroadcastTxSync(0xc002d94500, 0x13f, 0x280, 0x0, 0x0, 0x0)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/rpc/core/mempool.go:93 +0xb8
reflect.Value.call(0xd85560, 0x10326c0, 0x13, 0xec7b8b, 0x4, 0xc00663f180, 0x1, 0x1, 0xc00663f180, 0xc00663f188, ...)
/usr/local/go/src/reflect/value.go:447 +0x449
reflect.Value.Call(0xd85560, 0x10326c0, 0x13, 0xc00663f180, 0x1, 0x1, 0x0, 0x0, 0xc005cc9344)
/usr/local/go/src/reflect/value.go:308 +0xa4
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/rpc/lib/server.makeHTTPHandler.func2(0x1102060, 0xc00663f100, 0xc0082d7900)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/rpc/lib/server/handlers.go:269 +0x188
net/http.HandlerFunc.ServeHTTP(0xc002c81f20, 0x1102060, 0xc00663f100, 0xc0082d7900)
/usr/local/go/src/net/http/server.go:1964 +0x44
net/http.(*ServeMux).ServeHTTP(0xc002c81b60, 0x1102060, 0xc00663f100, 0xc0082d7900)
/usr/local/go/src/net/http/server.go:2361 +0x127
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/rpc/lib/server.maxBytesHandler.ServeHTTP(0x10f8a40, 0xc002c81b60, 0xf4240, 0x1102060, 0xc00663f100, 0xc0082d7900)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/rpc/lib/server/http_server.go:219 +0xcf
github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/rpc/lib/server.RecoverAndLogHandler.func1(0x1103220, 0xc00121e620, 0xc0082d7900)
/root/go/src/github.com/MinterTeam/minter-go-node/vendor/github.com/tendermint/tendermint/rpc/lib/server/http_server.go:192 +0x394
net/http.HandlerFunc.ServeHTTP(0xc002c06ea0, 0x1103220, 0xc00121e620, 0xc0082d7900)
/usr/local/go/src/net/http/server.go:1964 +0x44
net/http.serverHandler.ServeHTTP(0xc001a1aa90, 0x1103220, 0xc00121e620, 0xc0082d7900)
/usr/local/go/src/net/http/server.go:2741 +0xab
net/http.(*conn).serve(0xc00785a3c0, 0x11041a0, 0xc000f844c0)
/usr/local/go/src/net/http/server.go:1847 +0x646
created by net/http.(*Server).Serve
/usr/local/go/src/net/http/server.go:2851 +0x2f5
```
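A minimal sketch of the locking pattern described above, not the actual abci/client implementation: hold the client mutex both when updating the callback and when calling into the application, and release it with defer so a panicking application cannot leave it locked.
```go
package main

import "sync"

type Response struct{ Data string }
type Callback func(*Response)

// localClient is a cut-down stand-in for the local ABCI client.
type localClient struct {
	mtx sync.Mutex
	cb  Callback
}

// SetResponseCallback updates the callback under the lock, so it cannot race
// with a CheckTx/DeliverTx that is invoking it concurrently.
func (c *localClient) SetResponseCallback(cb Callback) {
	c.mtx.Lock()
	defer c.mtx.Unlock() // defer, in case anything panics while the lock is held
	c.cb = cb
}

// CheckTxAsync holds the same lock while calling back into the application,
// which is the "extend lock on app callback" part of the change.
func (c *localClient) CheckTxAsync(tx []byte) {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	if c.cb != nil {
		c.cb(&Response{Data: string(tx)})
	}
}

func main() {
	c := &localClient{}
	c.SetResponseCallback(func(r *Response) { _ = r })
	c.CheckTxAsync([]byte("tx"))
}
```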
* consensus: use read lock in Receive#VoteMessage
* use defer to unlock mutex because application might panic
* use defer in every method of the localClient
* add a changelog entry
* drain channels before Unsubscribe(All)
Read 55362ed766/libs/pubsub/pubsub.go (L13)
for the detailed explanation of the issue.
We'll need to fix it someday. Make sure to keep an eye on
https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-033-pubsub.md
* retry instead of panic when peer has no state in reactors other than consensus
in /dump_consensus_state RPC endpoint, skip a peer with no state
* rpc/core/mempool: simplify error messages
* rpc/core/mempool: use time.After instead of timer
also, do not log DeliverTx result (to be consistent with other methods)
* unlock before calling the callback in reqRes#SetCallback
* fix amino overhead computation for Tx (sketched below):
- also count the fieldnum / typ3
- add method to compute overhead per Tx
- slightly clarify comment on MaxAminoOverheadForBlock
- add tests
* fix TestReapMaxBytesMaxGas according to amino overhead
* fix TestMempoolFilters according to amino overhead
* address review comments:
- add a note about fieldNum = 1
- add forgotten godoc comment
* fix and use sm.TxPreCheck
* fix test
* remove print statement
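A minimal sketch of the per-Tx overhead idea referenced above (hedged: the exact constants live in the types package and may differ): count one field-key byte for field number 1 with the length-delimited typ3, plus the uvarint length prefix of the tx bytes.
```go
package main

import (
	"encoding/binary"
	"fmt"
)

// uvarintSize returns how many bytes the uvarint encoding of u occupies.
func uvarintSize(u uint64) int64 {
	var buf [binary.MaxVarintLen64]byte
	return int64(binary.PutUvarint(buf[:], u))
}

// computeAminoOverheadForTx sketches the "overhead per Tx" idea: one field key
// (field number 1, length-delimited typ3) plus the uvarint length prefix for
// the tx bytes themselves.
func computeAminoOverheadForTx(tx []byte) int64 {
	const fieldKeyBytes = 1 // field number 1 with the length-delimited typ3 fits in one byte
	return fieldKeyBytes + uvarintSize(uint64(len(tx)))
}

func main() {
	tx := make([]byte, 300)
	fmt.Println(computeAminoOverheadForTx(tx)) // 1 + 2 = 3 bytes of overhead
}
```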
* p2p: re-check after sleeps
* use NodeInfo as an interface
* Revert "use NodeInfo as an interface"
This reverts commit 5f7d055e6c.
* Revert "p2p: re-check after sleeps"
This reverts commit 7f41070da0.
* preserve dial to itself
* ignore ensured connections while re-connecting
* re-check after sleep
* keep protocol definition on net addresses
* decrease log level
* Revert "preserve dial to itself"
This reverts commit 0c6e0fc58d.
* correct func comment according to modification
Co-Authored-By: mgurevin <mehmet@gurevin.net>
* Require addressbook to only store addresses with valid ID
* Do not shut down peer immediately after sending pex addrs in SeedMode
* p2p: fix #2773
* seed mode: use go-routine to sleep before stopping peer
* dedup with spec/abci/client-server
* fixup abci/readme
* link to getting started in abci/README
* https
* spec/abci: some deduplication
* docs: remove extraneous comment
* mempool: ErrPreCheck and more log info
* change Pre/PostCheckFunc to return errors
also, continue execution when checking txs in mempool_test if
err=PreCheckErr
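A minimal sketch of what returning errors from these hooks looks like, with illustrative names rather than the real mempool API: a `PreCheckFunc` that reports why a tx was rejected instead of a bare boolean.
```go
package main

import "fmt"

type Tx []byte

// PreCheckFunc, as described above, returns an error explaining why a tx is
// rejected instead of a bool (names are a sketch, not the actual API).
type PreCheckFunc func(Tx) error

// PreCheckMaxBytes is an example check: reject txs larger than maxBytes.
func PreCheckMaxBytes(maxBytes int) PreCheckFunc {
	return func(tx Tx) error {
		if len(tx) > maxBytes {
			return fmt.Errorf("tx size %d exceeds max %d", len(tx), maxBytes)
		}
		return nil
	}
}

func main() {
	check := PreCheckMaxBytes(3)
	fmt.Println(check(Tx("toolong"))) // tx size 7 exceeds max 3
	fmt.Println(check(Tx("ok")))      // <nil>
}
```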
* validate reactor messages
Refs #2683
* validate blockchain messages
Refs #2683
* validate evidence messages
Refs #2683
* todo
* check ProposalPOL and signature sizes
* add a changelog entry
* check addr is valid when we add it to the addrbook
* validate incoming netAddr (not just nil check!)
* fixes after Bucky's review
* check timestamps
* beef up block#ValidateBasic
* move some checks into bcBlockResponseMessage
* update Gopkg.lock
Fix
```
grouped write of manifest, lock and vendor: failed to export github.com/tendermint/go-amino: fatal: failed to unpack tree object 6dcc6ddc143e116455c94b25c1004c99e0d0ca12
```
by running `dep ensure -update`
* bump year since now we check it
* generate test/p2p/data on the fly using tendermint testnet
* allow sync chains older than 1 year
* use full path when creating a testnet
* move testnet gen to test/docker/Dockerfile
* relax LastCommitRound check
Refs #2737
* fix conflicts after merge
* add small comment
* some ValidateBasic updates
* fixes
* AppHash length is not fixed
* Introduce EventValidBlock for informing peer about wanted block
* Merge with develop
* Add isCommit flag to NewValidBlock message
- Add test for the case of +2/3 Precommit from the previous round
This release is primarily about adding Version fields to various data structures,
optimizing consensus messages for signing and verification in
restricted environments (like HSMs and the Ethereum Virtual Machine), and
aligning the consensus code with the [specification](https://arxiv.org/abs/1807.04938).
It also includes our first take at a generalized merkle proof system.
See the [UPGRADING.md](UPGRADING.md#v0.26.0) for details on upgrading to the new
version.
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
BREAKING CHANGES:
* CLI/RPC/Config
* [config] [\#2232](https://github.com/tendermint/tendermint/issues/2232) timeouts as time.Duration, not ints
* [config] [\#2505](https://github.com/tendermint/tendermint/issues/2505) Remove Mempool.RecheckEmpty (it was effectively useless anyways)
* [config] [\#2490](https://github.com/tendermint/tendermint/issues/2490) `mempool.wal` is disabled by default
* [privval] [\#2459](https://github.com/tendermint/tendermint/issues/2459) Split `SocketPVMsg`s implementations into Request and Response, where the Response may contain an error message (returned by the remote signer)
* [state] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Add Version field to State, breaking the format of State as
encoded on disk.
* [rpc] [\#2298](https://github.com/tendermint/tendermint/issues/2298) `/abci_query` takes `prove` argument instead of `trusted` and switches the default
behaviour to `prove=false`
* [rpc] [\#2654](https://github.com/tendermint/tendermint/issues/2654) Remove all `node_info.other.*_version` fields in `/status` and
`/net_info`
* Apps
* [abci] [\#2298](https://github.com/tendermint/tendermint/issues/2298) ResponseQuery.Proof is now a structured merkle.Proof, not just
arbitrary bytes
* [abci] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Add Version to Header and shift all fields by one
* [abci] [\#2662](https://github.com/tendermint/tendermint/issues/2662) Bump the field numbers for some `ResponseInfo` fields to make room for
`AppVersion`
* Go API
* [config] [\#2232](https://github.com/tendermint/tendermint/issues/2232) timeouts as time.Duration, not ints
* [crypto/merkle & lite] [\#2298](https://github.com/tendermint/tendermint/issues/2298) Various changes to accommodate General Merkle trees
* [crypto/merkle] [\#2595](https://github.com/tendermint/tendermint/issues/2595) Remove all Hasher objects in favor of byte slices
* [crypto/merkle] [\#2635](https://github.com/tendermint/tendermint/issues/2635) merkle.SimpleHashFromTwoHashes is no longer exported
* [types] [\#2298](https://github.com/tendermint/tendermint/issues/2298) Remove `Index` and `Total` fields from `TxProof`.
* [types] [\#2598](https://github.com/tendermint/tendermint/issues/2598) `VoteTypeXxx` are now of type `SignedMsgType byte` and named `XxxType`, eg. `PrevoteType`,
`PrecommitType`.
* Blockchain Protocol
* [types] Update SignBytes for `Vote`/`Proposal`/`Heartbeat`:
* [\#2459](https://github.com/tendermint/tendermint/issues/2459) Use amino encoding instead of JSON in `SignBytes`.
* [\#2598](https://github.com/tendermint/tendermint/issues/2598) Reorder fields and use fixed sized encoding.
* [\#2598](https://github.com/tendermint/tendermint/issues/2598) Change `Type` field from `string` to `byte` and use new
`SignedMsgType` to enumerate.
* [types] [\#2512](https://github.com/tendermint/tendermint/issues/2512) Remove the pubkey field from the validator hash
* [types] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Add Version struct to Header
* [types] [\#2609](https://github.com/tendermint/tendermint/issues/2609) ConsensusParams.Hash() is the hash of the amino encoded
struct instead of the Merkle tree of the fields
* [state] [\#2587](https://github.com/tendermint/tendermint/issues/2587) Require block.Time of the first block to be genesis time
* [state] [\#2644](https://github.com/tendermint/tendermint/issues/2644) Require block.Version to match state.Version
* [types] [\#2670](https://github.com/tendermint/tendermint/issues/2670) Header.Hash() builds Merkle tree out of fields in the same
order they appear in the header, instead of sorting by field name
* P2P Protocol
* [p2p] [\#2654](https://github.com/tendermint/tendermint/issues/2654) Add `ProtocolVersion` struct with protocol versions to top of
DefaultNodeInfo and require `ProtocolVersion.Block` to match during peer handshake
FEATURES:
- [abci] [\#2557](https://github.com/tendermint/tendermint/issues/2557) Add `Codespace` field to `Response{CheckTx, DeliverTx, Query}`
- [abci] [\#2662](https://github.com/tendermint/tendermint/issues/2662) Add `BlockVersion` and `P2PVersion` to `RequestInfo`
- [crypto/merkle] [\#2298](https://github.com/tendermint/tendermint/issues/2298) General Merkle Proof scheme for chaining various types of Merkle trees together
This code of conduct applies to all projects run by the Tendermint/COSMOS team and hence to tendermint.
# Conduct
## Contact: adrian@tendermint.com
## Contact: conduct@tendermint.com
* We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar characteristic.
# Moderation
These are the policies for upholding our community’s standards of conduct. If you feel that a thread needs moderation, please contact the above mentioned person.
1. Remarks that violate the Tendermint/COSMOS standards of conduct, including hateful, hurtful, oppressive, or exclusionary remarks, are not allowed. (Cursing is allowed, but never targeting another user, and never in a hateful manner.)
Thank you for considering making contributions to Tendermint and related repositories! Start by taking a look at the [coding repo](https://github.com/tendermint/coding) for overall information on repository workflow and standards.
Thank you for your interest in contributing to Tendermint! Before
contributing, it may be helpful to understand the goal of the project. The goal
of Tendermint is to develop a BFT consensus engine robust enough to
support permissionless value-carrying networks. While all contributions are
welcome, contributors should bear this goal in mind in deciding if they should
target the main Tendermint project or a potential fork. When targeting the
main Tendermint project, the following process leads to the best chance of
landing changes in master.
Please follow standard github best practices: fork the repo, branch from the tip of develop, make some commits, and submit a pull request to develop. See the [open issues](https://github.com/tendermint/tendermint/issues) for things we need help with!
All work on the code base should be motivated by a [Github
Now `origin` refers to my fork and `upstream` refers to the Tendermint version.
So I can `git push -u origin master` to update my fork, and make pull requests to tendermint from there.
Of course, replace `ebuchman` with your git handle.
To pull in updates from the origin repo, run
* `git fetch upstream`
* `git rebase upstream/master` (or whatever branch you want)
Please don't make Pull Requests to `master`.
## Dependencies
We use [go modules](https://github.com/golang/go/wiki/Modules) to manage dependencies.
That said, the master branch of every Tendermint repository should just build
with `go get`, which means they should be kept up-to-date with their
dependencies so we can get away with telling people they can just `go get` our
software.
Since some dependencies are not under our control, a third party may break our
build, in which case we can fall back on `go mod tidy`. Even for
dependencies under our control, go helps us to
keep multiple repos in sync as they evolve. Anything with an executable, such
as apps, tools, and the core, should use dep.
Run `go list -u -m all` to get a list of dependencies that may not be
up-to-date.
When updating dependencies, please only update the particular dependencies you
need. Instead of running `go get -u=patch`, which will update anything,
specify exactly the dependency you want to update, eg.
`GO111MODULE=on go get -u github.com/tendermint/go-amino@master`.
## Protobuf
We use [Protocol Buffers](https://developers.google.com/protocol-buffers) along with [gogoproto](https://github.com/gogo/protobuf) to generate code for use across Tendermint Core.
For linting and checking breaking changes, we use [buf](https://buf.build/). If you would like to run linting and check if the changes you have made are breaking then you will need to have docker running locally. Then the linting cmd will be `make proto-lint` and the breaking changes check will be `make proto-check-breaking`.
We use [Docker](https://www.docker.com/) to generate the protobuf stubs. To generate the stubs yourself, make sure docker is running then run `make proto-gen`.
## Vagrant
If you are a [Vagrant](https://www.vagrantup.com/) user, you can get started
hacking Tendermint with the commands below.
NOTE: In case you installed Vagrant in 2017, you might need to run
`vagrant box update` to upgrade to the latest `ubuntu/xenial64`.
```sh
vagrant up
vagrant ssh
make test
```
## Testing
All repos should be hooked up to [CircleCI](https://circleci.com/).
If they have `.go` files in the root directory, they will be automatically
tested by circle using `go test -v -race ./...`. If not, they will need a
`circle.yml`. Ideally, every repo has a `Makefile` that defines `make test` and
includes its continuous integration status using a badge in the `README.md`.
## Changelog
Every fix, improvement, feature, or breaking change should be made in a
pull-request that includes an update to the `CHANGELOG_PENDING.md` file.
Changelog entries should be formatted as follows:
```md
- [module] \#xxx Some description about the change (@contributor)
```
Here, `module` is the part of the code that changed (typically a
top-level Go package), `xxx` is the pull-request number, and `contributor`
is the author/s of the change.
It's also acceptable for `xxx` to refer to the relevant issue number, but pull-request
numbers are preferred.
Note this means pull-requests should be opened first so the changelog can then
be updated with the pull-request's number.
There is no need to include the full link, as this will be added
automatically during release. But please include the backslash and pound, eg. `\#2313`.
Changelog entries should be ordered alphabetically according to the
`module`, and numerically according to the pull-request number.
Changes with multiple classifications should be doubly included (eg. a bug fix
that is also a breaking change should be recorded under both).
Breaking changes are further subdivided according to the APIs/users they impact.
Any change that affects multiple APIs/users should be recorded multiply - for
instance, a change to the `Blockchain Protocol` that removes a field from the
header should also be recorded under `CLI/RPC/Config` since the field will be
removed from the header in RPC responses as well.
## Branching Model and Release
User-facing repos should adhere to the branching model: http://nvie.com/posts/a-successful-git-branching-model/.
That is, these repos should be well versioned, and any merge to master requires a version bump and tagged release.
The main development branch is master.
Libraries need not follow the model strictly, but would be wise to,
especially `go-p2p` and `go-rpc`, as their versions are referenced in tendermint core.
Every release is maintained in a release branch named `vX.Y.Z`.
### Development Procedure:
- the latest state of development is on `develop`
- `develop` must never fail `make test`
- no --force onto `develop` (except when reverting a broken commit, which should seldom happen)
- create a development branch either on github.com/tendermint/tendermint, or your fork (using `git remote add origin`)
- before submitting a pull request, begin `git rebase` on top of `develop`
Pending minor releases have long-lived release candidate ("RC") branches. Minor release changes should be merged to these long-lived RC branches at the same time that the changes are merged to master.
### Pull Merge Procedure:
- ensure pull branch is rebased on develop
- run `make test` to ensure that all tests pass
- merge pull request
- the `unstable` branch may be used to aggregate pull merges before testing once
- push master may request that pull requests be rebased on top of `unstable`
Note all pull requests should be squash merged except for merging to a release branch (named `vX.Y`). This keeps the commit history clean and makes it
easy to reference the pull request where a change was introduced.
### Release Procedure:
- start on `develop`
- run integration tests (see `test_integrations` in Makefile)
- prepare changelog/release issue
- bump versions
- push to release-vX.X.X to run the extended integration tests on the CI
- merge to master
- merge master back to develop
### Development Procedure
### Hotfix Procedure:
- start on `master`
- checkout a new branch named hotfix-vX.X.X
- make the required changes
- these changes should be small and an absolute necessity
- add a note to CHANGELOG.md
- bump versions
- push to hotfix-vX.X.X to run the extended integration tests on the CI
- merge hotfix-vX.X.X to master
- merge hotfix-vX.X.X to develop
- delete the hotfix-vX.X.X branch
The latest state of development is on `master`, which must never fail `make test`. _Never_ force push `master`, unless fixing broken git history (which we rarely do anyways).
To begin contributing, create a development branch either on `github.com/tendermint/tendermint`, or your fork (using `git remote add origin`).
Make changes, and before submitting a pull request, update the `CHANGELOG_PENDING.md` to record your change. Also, run either `git rebase` or `git merge` on top of the latest `master`. (Since pull requests are squash-merged, either is fine!)
Update the `UPGRADING.md` if the change you've made is breaking, so that
users have instructions for how to upgrade.
Once you have submitted a pull request, label it with either `R:minor`, if the change should be included in the next minor release, or `R:major`, if the change is meant for a major release.
Sometimes (often!) pull requests get out-of-date with master, as other people merge different pull requests to master. It is our convention that pull request authors are responsible for updating their branches with master. (This also means that you shouldn't update someone else's branch for them; even if it seems like you're doing them a favor, you may be interfering with their git flow in some way!)
#### Merging Pull Requests
It is also our convention that authors merge their own pull requests, when possible. External contributors may not have the necessary permissions to do this, in which case, a member of the core team will merge the pull request once it's been approved.
Before merging a pull request:
- Ensure pull branch is up-to-date with a recent `master` (GitHub won't let you merge without this!)
If your change should be included in a minor release, please also open a PR against the long-lived minor release candidate branch (e.g., `rc1/v0.33.5`) _immediately after your change has been merged to master_.
You can do this by cherry-picking your commit off master:
```sh
$ git checkout rc1/v0.33.5
$ git checkout -b {new branch name}
$ git cherry-pick {commit SHA from master}
# may need to fix conflicts, and then use git add and git cherry-pick --continue
$ git push origin {new branch name}
```
After this, you can open a PR. Please note in the PR body if there were merge conflicts so that reviewers can be sure to take a thorough look.
### Git Commit Style
We follow the [Go style guide on commit messages](https://tip.golang.org/doc/contribute.html#commit_messages). Write concise commits that start with the package name and have a description that finishes the sentence "This change modifies Tendermint to...". For example,
```sh
cmd/debug: execute p.Signal only when p is not nil
[potentially longer description in the body]
Fixes #nnnn
```
Each PR should have one commit once it lands on `master`; this can be accomplished by using the "squash and merge" button on Github. Be sure to edit your commit message, though!
### Release Procedure
#### Major Release
1. Start on `master`
2. Run integration tests (see `test_integrations` in Makefile)
3. Prepare release in a pull request against `master` (to be squash merged):
- Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`; if this release
had release candidates, squash all the RC updates into one
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all issues
- run `bash ./scripts/authors.sh` to get a list of authors since the latest
release, and add the github aliases of external contributors to the top of
the changelog. To lookup an alias from an email, try `bash ./scripts/authors.sh <email>`
- Reset the `CHANGELOG_PENDING.md`
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
- Make sure all significant breaking changes are covered in `UPGRADING.md`
- Add any release notes you would like to be added to the body of the release to `release_notes.md`.
4. Push a tag with prepared release details (this will trigger the release `vX.X.0`)
- `git tag -a vX.X.x -m 'Release vX.X.x'`
- `git push origin vX.X.x`
5. Update the changelog.md file on master with the release's changelog.
6. Delete any RC branches and tags for this release (if applicable)
#### Minor Release
Minor releases are done differently from major releases: They are built off of long-lived release candidate branches, rather than from master.
1. Checkout the long-lived release candidate branch: `git checkout rcX/vX.X.X`
2. Run integration tests: `make test_integrations`
3. Prepare the release:
- copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
- run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for all issues
- run `bash ./scripts/authors.sh` to get a list of authors since the latest release, and add the GitHub aliases of external contributors to the top of the CHANGELOG. To lookup an alias from an email, try `bash ./scripts/authors.sh <email>`
- reset the `CHANGELOG_PENDING.md`
- bump P2P and block protocol versions in `version.go`, if necessary
- bump ABCI protocol version in `version.go`, if necessary
- make sure all significant breaking changes are covered in `UPGRADING.md`
- Add any release notes you would like to be added to the body of the release to `release_notes.md`.
4. Create a release branch `release/vX.X.x` off the release candidate branch:
- `git checkout -b release/vX.X.x`
- `git push -u origin release/vX.X.x`
- Note that all branches prefixed with `release` are protected once pushed. You will need admin help to make any changes to the branch.
5. Once the release branch has been approved, make sure to pull it locally, then push a tag.
- `git tag -a vX.X.x -m 'Release vX.X.x'`
- `git push origin vX.X.x`
6. Create a pull request back to master with the CHANGELOG & version changes from the latest release.
- Remove all `R:minor` labels from the pull requests that were included in the release.
- Do not merge the release branch into master.
7. Delete the former long lived release candidate branch once the release has been made.
8. Create a new release candidate branch to be used for the next release.
#### Backport Release
1. start from the existing release branch you want to backport changes to (e.g. v0.30)
Branch to a release/vX.X.X branch locally (e.g. release/v0.30.7)
2. Cherry pick the commit(s) that contain the changes you want to backport (usually these commits are from squash-merged PRs which were already reviewed)
3. Follow steps 2 and 3 from [Major Release](#major-release)
4. Push changes to release/vX.X.X branch
5. Open a PR against the existing vX.X branch
#### Release Candidates
Before creating an official release, especially a major release, we may want to create a
release candidate (RC) for our friends and partners to test out. We use git tags to
create RCs, and we build them off of RC branches. RC branches typically have names formatted
like `RCX/vX.X.X` (or, concretely, `RC0/v0.34.0`), while the tags themselves follow
the "standard" release naming conventions, with `-rcX` at the end (`vX.X.X-rcX`).
(Note that branches and tags _cannot_ have the same names, so it's important that these branches
have distinct names from the tags/release names.)
1. Start from the RC branch (e.g. `RC0/v0.34.0`).
2. Create the new tag, specifying a name and a tag "message":
`git tag -a v0.34.0-rc0 -m "Release Candidate v0.34.0-rc0"`
3. Push the tag back up to origin:
`git push origin v0.34.0-rc0`
Now the tag should be available on the repo's releases page.
4. Create a new release candidate branch for any possible updates to the RC:
DockerHub tags for official releases are [here](https://hub.docker.com/r/tendermint/tendermint/tags/). The "latest" tag will always point to the highest version number.
`develop` tag points to the [develop](https://github.com/tendermint/tendermint/tree/develop) branch.
Official releases can be found [here](https://github.com/tendermint/tendermint/releases).
The Dockerfile for tendermint is not expected to change in the near future. The master file used for all builds can be found [here](https://raw.githubusercontent.com/tendermint/tendermint/master/DOCKER/Dockerfile).
Respective versioned files can be found <https://raw.githubusercontent.com/tendermint/tendermint/vX.XX.XX/DOCKER/Dockerfile> (replace the Xs with the version number).
## Quick reference
- **Where to get help:** <https://tendermint.com/>
- **Where to file issues:** <https://github.com/tendermint/tendermint/issues>
- **Supported Docker versions:** [the latest release](https://github.com/moby/moby/releases) (down to 1.6 on a best-effort basis)
## Tendermint
Tendermint Core is Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine, written in any programming language, and securely replicates it on many machines.
For more background, see the [the docs](https://docs.tendermint.com/master/introduction/#quick-start).
To get started developing applications, see the [application developers guide](https://docs.tendermint.com/master/introduction/quick-start.html).
## How to use this image
A quick example of a built-in app and Tendermint core in one container.
```sh
docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint init
docker run -it --rm -v "/tmp:/tendermint" tendermint/tendermint node --proxy_app=kvstore
```
## Local cluster
To run a 4-node network, see the `Makefile` in the root of [the repo](https://github.com/tendermint/tendermint/blob/master/Makefile) and run:
```sh
make build-linux
make build-docker-localnode
make localnet-start
```
Note that this will build and use a different image than the ones provided here.
## License
- Tendermint's license is [Apache 2.0](https://github.com/tendermint/tendermint/blob/master/LICENSE).
cd networks/remote/ansible && ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i inventory/digital_ocean.py -l sentrynet install.yml
@echo "Next step: Add your validator setup in the genesis.json and config.tml files and run \"make sentry-config\". (Public key of validator, chain ID, peer IP and node ID.)"
As part of our [Coordinated Vulnerability Disclosure
Policy](https://tendermint.com/security), we operate a [bug
bounty](https://hackerone.com/tendermint).
See the policy for more details on submissions and rewards, and see "Example Vulnerabilities" (below) for examples of the kinds of bugs we're most interested in.
### Guidelines
We require that all researchers:
* Use the bug bounty to disclose all vulnerabilities, and avoid posting vulnerability information in public places, including Github Issues, Discord channels, and Telegram groups
* Make every effort to avoid privacy violations, degradation of user experience, disruption to production systems (including but not limited to the Cosmos Hub), and destruction of data
* Keep any information about vulnerabilities that you’ve discovered confidential between yourself and the Tendermint Core engineering team until the issue has been resolved and disclosed
* Avoid posting personally identifiable information, privately or publicly
If you follow these guidelines when reporting an issue to us, we commit to:
* Not pursue or support any legal action related to your research on this vulnerability
* Work with you to understand, resolve and ultimately disclose the issue in a timely fashion
## Disclosure Process
Tendermint Core uses the following disclosure process:
1. Once a security report is received, the Tendermint Core team works to verify the issue and confirm its severity level using CVSS.
2. The Tendermint Core team collaborates with the Gaia team to determine the vulnerability’s potential impact on the Cosmos Hub.
3. Patches are prepared for eligible releases of Tendermint in private repositories. See “Supported Releases” below for more information on which releases are considered eligible.
4. If it is determined that a CVE-ID is required, we request a CVE through a CVE Numbering Authority.
5. We notify the community that a security release is coming, to give users time to prepare their systems for the update. Notifications can include forum posts, tweets, and emails to partners and validators, including emails sent to the [Tendermint Security Mailing List](https://berlin.us4.list-manage.com/subscribe?u=431b35421ff7edcc77df5df10&id=3fe93307bc).
6. 24 hours following this notification, the fixes are applied publicly and new releases are issued.
7. Cosmos SDK and Gaia update their Tendermint Core dependencies to use these releases, and then themselves issue new releases.
8. Once releases are available for Tendermint Core, Cosmos SDK and Gaia, we notify the community, again, through the same channels as above. We also publish a Security Advisory on Github and publish the CVE, as long as neither the Security Advisory nor the CVE include any information on how to exploit these vulnerabilities beyond what information is already available in the patch itself.
9. Once the community is notified, we will pay out any relevant bug bounties to submitters.
10. One week after the releases go out, we will publish a post with further details on the vulnerability as well as our response to it.
This process can take some time. Every effort will be made to handle the bug in as timely a manner as possible, however it's important that we follow the process described above to ensure that disclosures are handled consistently and to keep Tendermint Core and its downstream dependent projects--including but not limited to Gaia and the Cosmos Hub--as secure as possible.
### Example Timeline
The following is an example timeline for the triage and response. The required roles and team members are described in parentheses after each task; however, multiple people can play each role and each person may play multiple roles.
#### > 24 Hours Before Release Time
1. Request CVE number (ADMIN)
2. Gather emails and other contact info for validators (COMMS LEAD)
3. Test fixes on a testnet (TENDERMINT ENG, COSMOS ENG)
4. Write “Security Advisory” for forum (TENDERMINT LEAD)
#### 24 Hours Before Release Time
1. Post “Security Advisory” pre-notification on forum (TENDERMINT LEAD)
2. Post Tweet linking to forum post (COMMS LEAD)
3. Announce security advisory/link to post in various other social channels (Telegram, Discord) (COMMS LEAD)
4. Send emails to validators or other users (PARTNERSHIPS LEAD)
2. Cut Cosmos SDK release for eligible versions (COSMOS ENG)
3. Cut Gaia release for eligible versions (GAIA ENG)
4. Post “Security releases” on forum (TENDERMINT LEAD)
5. Post new Tweet linking to forum post (COMMS LEAD)
6. Remind everyone via social channels (Telegram, Discord) that the release is out (COMMS LEAD)
7. Send emails to validators or other users (COMMS LEAD)
8. Publish Security Advisory and CVE, if CVE has no sensitive information (ADMIN)
#### After Release Time
1. Write forum post with exploit details (TENDERMINT LEAD)
2. Approve pay-out on HackerOne for submitter (ADMIN)
#### 7 Days After Release Time
1. Publish CVE if it has not yet been published (ADMIN)
2. Publish forum post with exploit details (TENDERMINT ENG, TENDERMINT LEAD)
## Supported Releases
The Tendermint Core team commits to releasing security patch releases for both the latest minor release as well as for the major/minor release that the Cosmos Hub is running.
If you are running older versions of Tendermint Core, we encourage you to upgrade at your earliest opportunity so that you can receive security patches directly from the Tendermint repo. While you are welcome to backport security patches to older versions for your own use, we will not publish or promote these backports.
## Scope
The full scope of our bug bounty program is outlined on our [Hacker One program page](https://hackerone.com/tendermint). Please also note that, in the interest of the safety of our users and staff, a few things are explicitly excluded from scope:
* Any third-party services
* Findings from physical testing, such as office access
* Findings derived from social engineering (e.g., phishing)
## Example Vulnerabilities
The following is a list of examples of the kinds of vulnerabilities that we’re most interested in. It is not exhaustive: there are other kinds of issues we may also be interested in!
### Specification
* Conceptual flaws
* Ambiguities, inconsistencies, or incorrect statements
* Mismatch between the specification and the implementation of any component
### Consensus
Assuming less than 1/3 of the voting power is Byzantine (malicious):
* Validation of blockchain data structures, including blocks, block parts, votes, and so on
* Execution of blocks
* Validator set changes
* Proposer round robin
* Two nodes committing conflicting blocks for the same height (safety failure)
* A correct node signing conflicting votes
* A node halting (liveness failure)
* Syncing new and old nodes
### Networking
* Authenticated encryption (MITM, information leakage)
* Eclipse attacks
* Sybil attacks
* Long-range attacks
* Denial-of-Service
### RPC
* Write-access to anything besides sending transactions
* Denial-of-Service
* Leakage of secrets
### Denial-of-Service
Attacks may come through the P2P network or the RPC:
* Amplification attacks
* Resource abuse
* Deadlocks and race conditions
* Panics and unhandled errors
### Libraries
* Serialization (Amino)
* Reading/Writing files and databases
* Logging and monitoring
### Cryptography
* Elliptic curves for validator signatures
* Hash algorithms and Merkle trees for block validation
* Authenticated encryption for P2P connections
### Light Client
* Validation of blockchain data structures
* Correctly validating an incorrect proof
* Incorrectly validating a correct proof
* Syncing validator set changes
## v0.34.0
Tendermint 0.34 replaces Amino with Protocol Buffers for encoding.
This migration is extensive and results in a number of changes; however,
Tendermint only uses the types generated from Protocol Buffers for disk and
wire serialization.
**This means that these changes should not affect you as a Tendermint user.**
However, Tendermint users and contributors may note the following changes:
* Directory layout changes: All proto files have been moved under one directory, `/proto`.
This is in line with the recommended file layout by [Buf](https://buf.build).
For more, see the [Buf documentation](https://buf.build/docs/lint-checkers#file_layout).
* ABCI Changes: As noted in the "ABCI Changes" section above, the `PublicKey` type now uses
a `oneof` type.
For more on the Protobuf changes, please see our [blog post on this migration](https://medium.com/tendermint/tendermint-0-34-protocol-buffers-and-you-8c40558939ae).
### Consensus Parameters
Tendermint 0.34 includes new and updated consensus parameters.
#### Version Parameters (New)
* `AppVersion`, which is the version of the ABCI application.
#### Evidence Parameters
* `MaxBytes`, which caps the total amount of evidence. The default is 1048576 (1 MB).
### Crypto
#### Keys
* Keys no longer include a type prefix. For example, ed25519 pubkeys have been renamed from
`PubKeyEd25519` to `PubKey`. This reduces stutter (e.g., `ed25519.PubKey`).
* Keys are now byte slices (`[]byte`) instead of byte arrays (`[<size>]byte`).
* The multisig functionality that was previously part of Tendermint has moved to the Cosmos SDK.
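As a small illustration of the renamed key types, here is a minimal sketch assuming the `github.com/tendermint/tendermint/crypto/ed25519` package path; it is not taken from the upgrade notes themselves:
```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/crypto/ed25519"
)

func main() {
	// ed25519.PrivKey / ed25519.PubKey replace the old PrivKeyEd25519 /
	// PubKeyEd25519 names; both are now backed by byte slices.
	priv := ed25519.GenPrivKey()
	pub := priv.PubKey() // crypto.PubKey interface; concrete type is ed25519.PubKey
	fmt.Printf("%T %X\n", pub, pub.Bytes())
}
```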
### Evidence
Evidence Params has been changed to include a duration:
* `consensus_params.evidence.max_age_duration`.
* Renamed `consensus_params.evidence.max_age` to `max_age_num_blocks`.
### Go API
* `libs/common` has been removed in favor of specific packages:
  * `async`
  * `service`
  * `rand`
  * `net`
  * `strings`
  * `cmap`
* The `errors` package has been removed.
### RPC Changes
* `/validators` is now paginated (default: 30 vals per page)
* `/block_results` response format updated [see RPC docs for details](https://docs.tendermint.com/master/rpc/#/Info/block_results)
* Event suffix has been removed from the ID in event responses
* IDs are now integers not `json-client-XYZ`
## v0.32.0
This release is compatible with previous blockchains; however, the new ABCI
Events mechanism may create some complexity for nodes wishing to continue
operation with v0.32 from a previous version.
There are some minor breaking changes to the RPC.
### Config Changes
If you have `db_backend` set to `leveldb` in your config file, please change it
to `goleveldb` or `cleveldb`.
### RPC Changes
The default listen address for the RPC is now `127.0.0.1`. If you want to expose
it publicly, you have to explicitly configure it. Note exposing the RPC to the
public internet may not be safe - endpoints which return a lot of data may
enable resource exhaustion attacks on your node, causing the process to crash.
Any consumers of `/block_results` need to be mindful of the change in all field
names from CamelCase to snake_case, e.g. `results.DeliverTx` is now `results.deliver_tx`.
This is a fix, but it's a breaking change.
### ABCI Changes
ABCI responses which previously had a `Tags` field now have an `Events` field
instead. The original `Tags` field was simply a list of key-value pairs, where
each key effectively represented some attribute of an event occurring in the
blockchain, like `sender`, `receiver`, or `amount`. However, it was difficult to
represent the occurrence of multiple events (for instance, multiple transfers) in a single list.
The new `Events` field contains a list of `Event`, where each `Event` is itself a list
of key-value pairs, allowing for a more natural expression of multiple events in,
e.g., a single DeliverTx or EndBlock. Note that each `Event` also includes a `Type`,
which is meant to categorize the event.
For transaction indexing, the index key is
prefixed with the event type: `{eventType}.{attributeKey}`.
If the same event type and attribute key appear multiple times, the values are
appended in a list.
To make queries, include the event type as a prefix. For instance if you
previously queried for `recipient = 'XYZ'`, and after the upgrade you name your event `transfer`,
the new query would be for `transfer.recipient = 'XYZ'`.
Note that transactions indexed on a node before upgrading to v0.32 will still be indexed
using the old scheme. For instance, if a node upgraded at height 100,
transactions before 100 would be queried with `recipient = 'XYZ'` and
transactions after 100 would be queried with `transfer.recipient = 'XYZ'`.
While this presents additional complexity to clients, it avoids the need to
reindex. Of course, you can reset the node and sync from scratch to re-index
entirely using the new scheme.
We illustrate further with a more complete example.
Prior to the update, suppose your `ResponseDeliverTx` looked like:
```go
abci.ResponseDeliverTx{
    Tags: []kv.Pair{
        {Key: []byte("sender"), Value: []byte("foo")},
        {Key: []byte("recipient"), Value: []byte("bar")},
        {Key: []byte("amount"), Value: []byte("35")},
    },
}
```
The following queries would match this transaction:
```go
query.MustParse("tm.event = 'Tx' AND sender = 'foo'")
query.MustParse("tm.event = 'Tx' AND recipient = 'bar'")
query.MustParse("tm.event = 'Tx' AND sender = 'foo' AND recipient = 'bar'")
```
Following the upgrade, your `ResponseDeliverTx` would contain the following `Events`:
```go
abci.ResponseDeliverTx{
    Events: []abci.Event{
        {
            Type: "transfer",
            Attributes: kv.Pairs{
                {Key: []byte("sender"), Value: []byte("foo")},
                {Key: []byte("recipient"), Value: []byte("bar")},
                {Key: []byte("amount"), Value: []byte("35")},
            },
        },
    },
}
```
Now the following queries would match this transaction:
```go
query.MustParse("tm.event = 'Tx' AND transfer.sender = 'foo'")
query.MustParse("tm.event = 'Tx' AND transfer.recipient = 'bar'")
query.MustParse("tm.event = 'Tx' AND transfer.sender = 'foo' AND transfer.recipient = 'bar'")
```
For further documentation on `Events`, see the [docs](https://github.com/tendermint/tendermint/blob/60827f75623b92eff132dc0eff5b49d2025c591e/docs/spec/abci/abci.md#events).
### Go Applications
The ABCI Application interface changed slightly so the CheckTx and DeliverTx
methods now take Request structs. The contents of these structs are just the raw
tx bytes, which were previously passed in as the argument.
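As a hedged sketch of the new method shape (assuming the `github.com/tendermint/tendermint/abci/types` import path and the `BaseApplication` helper; this is not the example code shipped with the release), an application now reads the raw bytes from the request struct:
```go
package myapp

import (
	abci "github.com/tendermint/tendermint/abci/types"
)

// App embeds BaseApplication so the remaining ABCI methods keep their
// default no-op implementations.
type App struct {
	abci.BaseApplication
}

// Previously: CheckTx(tx []byte) ResponseCheckTx.
// Now the raw transaction bytes arrive as req.Tx.
func (app *App) CheckTx(req abci.RequestCheckTx) abci.ResponseCheckTx {
	_ = req.Tx // validate the raw tx bytes here
	return abci.ResponseCheckTx{Code: 0}
}

// Previously: DeliverTx(tx []byte) ResponseDeliverTx.
func (app *App) DeliverTx(req abci.RequestDeliverTx) abci.ResponseDeliverTx {
	_ = req.Tx // apply the raw tx bytes to state here
	return abci.ResponseDeliverTx{Code: 0}
}
```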
## v0.31.6
There are no breaking changes in this release except for the Go API of the p2p and
mempool packages. However, if you're using cleveldb, you'll need to change
the compilation tag:
Use `cleveldb` tag instead of `gcc` to compile Tendermint with CLevelDB or
use `make build_c` / `make install_c` (full instructions can be found in the installation documentation).
## v0.31.0
This release contains a breaking change to the behaviour of the pubsub system.
It also contains some minor breaking changes in the Go API and ABCI.
There are no changes to the block or p2p protocols, so v0.31.0 should work fine
with blockchains created from the v0.30 series.
### RPC
The pubsub no longer blocks on publishing. This may cause some WebSocket (WS) clients to stop working as expected.
If your WS client is not consuming events fast enough, Tendermint can terminate the subscription.
In this case, the WS client will receive an error with description:
```json
{
"jsonrpc": "2.0",
"id": "{ID}#event",
"error": {
"code": -32000,
"msg": "Server error",
"data": "subscription was cancelled (reason: client is not pulling messages fast enough)" // or "subscription was cancelled (reason: Tendermint exited)"
}
}
```
Additionally, there are now limits on the number of subscribers and
subscriptions that can be active at once. See the new
`rpc.max_subscription_clients` and `rpc.max_subscriptions_per_client` values to
configure this.
### Applications
Simple rename of `ConsensusParams.BlockSize` to `ConsensusParams.Block`.
The `ConsensusParams.Block.TimeIotaMS` field was also removed. It's configured
in the `ConsensusParams` in genesis.
### Go API
See the [CHANGELOG](CHANGELOG.md). These are relatively straightforward.
## v0.30.0
This release contains a breaking change to both the block and p2p protocols,
however it may be compatible with blockchains created with v0.29.0 depending on
the chain history. If your blockchain has not included any pieces of evidence,
or no piece of evidence has been included in more than one block,
and if your application has never returned multiple updates
for the same validator in a single block, then v0.30.0 will work fine with
blockchains created with v0.29.0.
The p2p protocol change is to fix the proposer selection algorithm again.
Note that proposer selection is purely a p2p concern right
now since the algorithm is only relevant during real time consensus.
This change is thus compatible with v0.29.0, but
all nodes must be upgraded to avoid disagreements on the proposer.
### Applications
Applications must ensure they do not return duplicates in
`ResponseEndBlock.ValidatorUpdates`. A pubkey must only appear once per set of
updates. Duplicates will cause irrecoverable failure. If you have a very good
reason why we shouldn't do this, please open an issue.
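For illustration, here is a minimal, hedged sketch of de-duplicating updates by public key before returning them from EndBlock; it assumes the `abci/types` package layout and is not code from the release itself:
```go
package myapp

import (
	abci "github.com/tendermint/tendermint/abci/types"
)

// dedupeValidatorUpdates keeps only the last update supplied for each pubkey,
// guaranteeing that ResponseEndBlock.ValidatorUpdates never lists a key twice.
func dedupeValidatorUpdates(updates []abci.ValidatorUpdate) []abci.ValidatorUpdate {
	last := make(map[string]abci.ValidatorUpdate)
	var order []string
	for _, u := range updates {
		key := u.PubKey.Type + "/" + string(u.PubKey.Data)
		if _, seen := last[key]; !seen {
			order = append(order, key)
		}
		last[key] = u // later updates overwrite earlier ones for the same key
	}
	out := make([]abci.ValidatorUpdate, 0, len(order))
	for _, k := range order {
		out = append(out, last[k])
	}
	return out
}
```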
## v0.29.0
This release contains some breaking changes to the block and p2p protocols,
and will not be compatible with any previous versions of the software, primarily
due to changes in how various data structures are hashed.
Any implementations of Tendermint blockchain verification, including lite clients, will need to be updated.
Blockchains are systems for multi-master state machine replication.
**ABCI** is an interface that defines the boundary between the replication engine (the blockchain),
and the state machine (the application).
By using a socket protocol, a consensus engine running in one process can manage an application state running in another.
Previously, the ABCI was referred to as TMSP.
The community has provided a number of additional implementations, see the [Tendermint Ecosystem](https://github.com/tendermint/awesome#ecosystem)
## Installation & Usage
To get up and running quickly, see the [getting started guide](../docs/app-dev/getting-started.md) along with the [abci-cli documentation](../docs/app-dev/abci-cli.md) which will go through the examples found in the [examples](./example/) directory.
## Specification
A detailed description of the ABCI methods and message types is contained in the ABCI specification under `docs/spec/abci`.
We provide three implementations of the ABCI in Go:
- Golang in-process
- ABCI-socket
- GRPC
Note the GRPC version is maintained primarily to simplify onboarding and prototyping and does not receive the same
attention to security and performance as the others.
### In Process
The simplest implementation just uses function calls within Go.
This means ABCI applications written in Golang can be compiled with Tendermint Core and run as a single binary.
See the [examples](#examples) below for more information.
### Socket (TSP)
ABCI is best implemented as a streaming protocol.
The socket implementation provides for asynchronous, ordered message passing over unix or tcp.
Messages are serialized using Protobuf3 and length-prefixed with a [signed Varint](https://developers.google.com/protocol-buffers/docs/encoding?csw=1#signed-integers).
For example, if the Protobuf3 encoded ABCI message is `0xDEADBEEF` (4 bytes), the length-prefixed message is `0x08DEADBEEF`, since `0x08` is the signed varint
encoding of `4`. If the Protobuf3 encoded ABCI message is 65535 bytes long, the length-prefixed message would be like `0xFEFF07...`.
Note the benefit of using this `varint` encoding over the old version (where integers were encoded as `<len of len><big endian len>`) is that
it is the standard way to encode integers in Protobuf. It is also generally shorter.
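To make the prefix computation concrete, here is a small sketch using Go's `encoding/binary` package, whose `PutVarint` applies the same zigzag signed-varint scheme described above (the helper name is ours, not part of the protocol code):
```go
package main

import (
	"encoding/binary"
	"fmt"
)

// lengthPrefix prepends the signed-varint encoding of len(msg) to msg.
func lengthPrefix(msg []byte) []byte {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutVarint(buf, int64(len(msg))) // zigzag varint of the length
	return append(buf[:n], msg...)
}

func main() {
	msg := []byte{0xDE, 0xAD, 0xBE, 0xEF}
	fmt.Printf("% X\n", lengthPrefix(msg)) // prints: 08 DE AD BE EF
}
```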
### GRPC
GRPC is an RPC framework native to Protocol Buffers with support in many languages.
Implementing the ABCI using GRPC can allow for faster prototyping, but is expected to be much slower than
the ordered, asynchronous socket protocol. The implementation has also not received as much testing or review.
Note the length-prefixing used in the socket implementation does not apply for GRPC.
## Usage
The `abci-cli` tool wraps an ABCI client and can be used for probing/testing an ABCI server.
For instance, `abci-cli test` will run a test sequence against a listening server running the Counter application (see below).
It can also be used to run some example applications.
See [the documentation](https://tendermint.com/docs/) for more details.
### Examples
Check out the variety of example applications in the [example directory](example/).
It also contains the code referred to by the `counter` and `kvstore` apps; these apps come
built into the `abci-cli` binary.
#### Counter
The `abci-cli counter` application illustrates nonce checking in transactions; its full code lives in the [example directory](example/) mentioned above, and the sketch below outlines the idea.
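As a rough, simplified sketch of the nonce-checking idea (this is not the actual counter example code; the field and method shapes here are assumptions for illustration):
```go
package example

import (
	"encoding/binary"

	abci "github.com/tendermint/tendermint/abci/types"
)

// App tracks how many transactions have been committed; new transactions must
// carry a value at least as large as that count.
type App struct {
	abci.BaseApplication
	txCount uint64
}

func (app *App) CheckTx(req abci.RequestCheckTx) abci.ResponseCheckTx {
	if len(req.Tx) != 8 { // assume a fixed 8-byte big-endian nonce for simplicity
		return abci.ResponseCheckTx{Code: 1, Log: "tx must be 8 bytes"}
	}
	nonce := binary.BigEndian.Uint64(req.Tx)
	if nonce < app.txCount {
		return abci.ResponseCheckTx{Code: 2, Log: "nonce too low"}
	}
	return abci.ResponseCheckTx{Code: 0}
}
```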