Compare commits

...

196 Commits

Author SHA1 Message Date
William Banfield
5ae962e99e Update changelog for release v0.34.17 2022-04-01 12:42:29 -04:00
dependabot[bot]
06e8620621 build(deps): Bump github.com/adlio/schema from 1.2.3 to 1.3.0 (#8200) 2022-03-28 09:39:22 -04:00
dependabot[bot]
b2dd100a76 build(deps): Bump github.com/stretchr/testify from 1.7.0 to 1.7.1 (#8130) 2022-03-16 10:08:30 -04:00
dependabot[bot]
314b139ac3 build(deps): Bump github.com/spf13/cobra from 1.3.0 to 1.4.0 (#8108)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.3.0 to 1.4.0.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Changelog](https://github.com/spf13/cobra/blob/master/CHANGELOG.md)
- [Commits](https://github.com/spf13/cobra/compare/v1.3.0...v1.4.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-03-11 11:17:18 -08:00
dependabot[bot]
551072c962 build(deps): Bump google.golang.org/grpc from 1.44.0 to 1.45.0 (#8102) 2022-03-10 08:43:15 -05:00
William Banfield
4b5472c387 consensus: change lock handling in reactor for RoundState (#7994)
This change updates the lock handling in the consensus reactor: the reactor now periodically fetches the RoundState, and the gossip routines operate on that fetched copy instead of acquiring the latest copy on each iteration of the gossip loop.
2022-03-09 17:38:57 -05:00
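A minimal, self-contained sketch of the pattern described in the entry above (names and the refresh interval are assumptions, not the actual reactor code): one goroutine periodically snapshots the RoundState under a local lock, and the gossip routines read that snapshot instead of contending for the consensus lock.

```
package gossipsketch

import (
	"sync"
	"time"
)

// RoundState is a stand-in for the consensus RoundState type.
type RoundState struct {
	Height int64
	Round  int32
}

type reactor struct {
	mtx      sync.RWMutex
	snapshot *RoundState
}

// updateRoundStateRoutine refreshes the shared snapshot on a fixed interval;
// fetch is assumed to take the consensus lock internally.
func (r *reactor) updateRoundStateRoutine(fetch func() *RoundState, every time.Duration) {
	for range time.Tick(every) {
		rs := fetch()
		r.mtx.Lock()
		r.snapshot = rs
		r.mtx.Unlock()
	}
}

// getRoundState is what each gossip routine calls on every iteration; it
// never touches the consensus lock.
func (r *reactor) getRoundState() *RoundState {
	r.mtx.RLock()
	defer r.mtx.RUnlock()
	return r.snapshot
}
```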
William Banfield
fd3bfb38e7 consensus: change lock handling in 'handleMsg' method (#7992)
* change lock handling in consensus state file

* add comment explaining the unlock

* comment fix

* Update consensus/state.go

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

* spelling fix

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2022-03-09 12:57:55 -05:00
mergify[bot]
186e0e4df2 cmd: make reset more safe (backport #8081) (#8089)
Backport notes:

- Revert command declaration to the old explicit format.
- Remove threading of the keyType argument.
- Fix function naming collision.
- Fix error handling.
- Restore snake-case deprecation warnings.
2022-03-08 09:39:19 -08:00
William Banfield
a97bb37d44 consensus: start the timeout ticker before replay (backport #7844) (#8079)
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
Co-authored-by: Sam Kleinman <garen@tychoish.com>
2022-03-08 10:55:28 -05:00
mergify[bot]
9e8837ad63 Revert "Remove master from versions and copy it from the latest." (backport #8053) (#8056)
This reverts commit f939f962b1.

A lot of inbound links are still broken, so we will need to find a different
approach to suppressing unreleased docs.

(cherry picked from commit 59eaa4dba0)
2022-03-02 09:24:58 -08:00
Marko
6b4e9078de crypto: Remove build flags from secp256k1 (#8051)
Manual backport of #7823.

* remove cgo build flags
* remove nocgo file
2022-03-02 07:06:19 -08:00
M. J. Fromberger
1d25a3f0bc Prepare changelog for release v0.34.16. (#8000) 2022-02-25 08:12:20 -08:00
M. J. Fromberger
96085df7c1 Add manual e2e workflow to v0.34.x. (#8005) 2022-02-25 07:45:51 -08:00
mergify[bot]
cb6baad5ac docs: point docs/master to the same content as the latest release (backport #7980) (#7997)
* Remove master from versions and copy it from the latest. (#7980)

(cherry picked from commit f939f962b1)
2022-02-24 16:14:23 -08:00
mergify[bot]
db60bbad54 statesync: assert app version matches (backport #7856) (#7885) 2022-02-23 12:17:12 +01:00
mergify[bot]
5487718cff Restore building docs for master on docs.tendermint.com. (#7969) (#7970)
There are a lot of existing links to the master section of the site, and my
attempts to get a redirector working have so far not succeeded. While it still
makes sense to not publish docs for unreleased code, a 404 is almost certainly
more disruptive than seeing docs for unreleased stuff.

This includes the docs in the build again, but does not add them back to the
selector menu. That allows URLs to resolve but encourages folks to use the
released versions when they have a choice.

I left the redirect for the RPC link in place, since that's still useful.

Updates #7935.

(cherry picked from commit 926c469fcc)

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2022-02-22 09:26:16 -08:00
M. J. Fromberger
cf58c4191b docs: fix cosmos theme version. (#7967)
The various package locks got out of sync; this reunifies them.
2022-02-22 08:39:17 -08:00
Callum Waters
ce70b10f81 docs: remove spec section from v0.34 docs (#7940) 2022-02-22 17:09:41 +01:00
mergify[bot]
98c75c9429 docs: redirect master links to the latest release version (backport #7936) (#7953)
* docs: redirect master links to the latest release version (#7936)

(cherry picked from commit 70ee282d9e)

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2022-02-22 05:56:00 -08:00
mergify[bot]
9fe245025f docs: Pin the RPC docs to v0.35 instead of master (backport #7909) (#7910)
* docs: Pin the RPC docs to v0.35 instead of master (#7909)

(cherry picked from commit 3b20931da3)

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2022-02-21 08:00:23 -08:00
mergify[bot]
de423678eb Remove master from the docs site version config. (backport #7874) (#7902)
* Remove master from the docs site version config. (#7874)

(cherry picked from commit 351adf8ddb)

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2022-02-21 06:24:20 -08:00
M. J. Fromberger
6a14fc2105 Update absolute links in v0.34.x to reference that branch. (#7871) 2022-02-21 04:36:57 -08:00
mergify[bot]
89bb82617a fix app hash in state rollback (backport #7837) (#7881)
When testing the rollback feature in the Cosmos SDK, we found that the app hash
in Tendermint after a rollback was the value from after the latest block, rather
than from before it.

Co-authored-by: Callum Waters <cmwaters19@gmail.com>
Co-authored-by: yihuang <huang@crypto.com>

(cherry picked from commit 8a238fdcb4)

Inline factory function that does not exist in this branch.

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2022-02-19 08:08:52 -08:00
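A simplified sketch of the invariant behind the fix above (types and names are placeholders, not the actual `state` package API): block H's header carries the app hash produced by applying block H-1, so the state after rolling back from H to H-1 should take its AppHash from the latest block's header, not from the results of applying block H.

```
package rollbacksketch

// Header and State are simplified stand-ins for the Tendermint types.
type Header struct {
	Height  int64
	AppHash []byte // result of applying the block at Height-1
}

type State struct {
	LastBlockHeight int64
	AppHash         []byte
}

// rollback returns the state as of one block before latest. Using
// latest.AppHash is the fix: it is the app hash from *before* the latest
// block, whereas the buggy version reported the hash produced *after* it.
func rollback(latest Header, s State) State {
	s.LastBlockHeight = latest.Height - 1
	s.AppHash = latest.AppHash
	return s
}
```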
M. J. Fromberger
df9b1676f9 Remove ADR and RFC docs from the v0.34.x backport branch. (#7867) 2022-02-18 07:13:53 -08:00
dependabot[bot]
f88aad5903 build(deps): Bump github.com/gorilla/websocket from 1.4.2 to 1.5.0 (#7831) 2022-02-16 10:46:59 +01:00
dependabot[bot]
75c6af7dcf build(deps): Bump github.com/prometheus/client_golang (#7730) 2022-01-31 10:21:50 +01:00
dependabot[bot]
bc63f213da build(deps): Bump google.golang.org/grpc from 1.43.0 to 1.44.0 (#7694) 2022-01-26 10:10:31 +01:00
mergify[bot]
80f656d8d7 consensus: check proposal non-nil in prevote message delay metric (backport #7625) (#7631)
* consensus: check proposal non-nil in prevote message delay metric (#7625)

(cherry picked from commit b6307c42e0)

# Conflicts:
#	consensus/state.go

* fix merge conflicts

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
Co-authored-by: William Banfield <wbanfield@gmail.com>
2022-01-19 13:16:03 -05:00
mergify[bot]
f36cc80568 consensus: calculate prevote message delay metric (backport #7551) (#7617)
* consensus: calculate prevote message delay metric (#7551)

## What does this pull request do?
This pull request adds two metrics intended for use in calculating an experimental value for `MessageDelay`.

The metrics are as follows:
```
# HELP tendermint_consensus_complete_prevote_message_delay Difference in seconds between the proposal timestamp and the timestamp of the prevote that achieved 100% of the voting power in the prevote step.
# TYPE tendermint_consensus_complete_prevote_message_delay gauge
tendermint_consensus_complete_prevote_message_delay{chain_id="test-chain-aZbwF1"} 0.013025505

# HELP tendermint_consensus_quorum_prevote_message_delay Difference in seconds between the proposal timestamp and the timestamp of the prevote that achieved a quorum in the prevote step.
# TYPE tendermint_consensus_quorum_prevote_message_delay gauge
tendermint_consensus_quorum_prevote_message_delay{chain_id="test-chain-aZbwF1"} 0.013025505
```

## Why this change?

For more information on what these metrics calculate, see #7202. The aim is to backport these metrics to v0.34, run nodes on a few popular chains with them to determine experimental values for `MessageDelay` on those chains, and use those values to select our default `SynchronyParams.MessageDelay` value.

## Why Gauges for the metrics?
Gauges allow us to overwrite the metric on each successive observation. We can then capture these metrics over time to track the highest and lowest observed value.

(cherry picked from commit 0c82ceaa5f)

# Conflicts:
#	consensus/metrics.go
#	consensus/state.go

* fix merge conflicts

Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
Co-authored-by: William Banfield <wbanfield@gmail.com>
2022-01-19 12:10:18 -05:00
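For illustration, a sketch of how such a gauge can be registered and overwritten using `github.com/prometheus/client_golang` (the actual metrics live in `consensus/metrics.go` and go through Tendermint's metrics wrapper; this standalone version only shows the gauge-overwrite behavior discussed above):

```
package metricsketch

import "github.com/prometheus/client_golang/prometheus"

var quorumPrevoteMessageDelay = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Namespace: "tendermint",
		Subsystem: "consensus",
		Name:      "quorum_prevote_message_delay",
		Help: "Difference in seconds between the proposal timestamp and the " +
			"timestamp of the prevote that achieved a quorum in the prevote step.",
	},
	[]string{"chain_id"},
)

func init() { prometheus.MustRegister(quorumPrevoteMessageDelay) }

// recordQuorumDelay overwrites the previous observation on each call, which
// is exactly the gauge semantics the commit message argues for.
func recordQuorumDelay(chainID string, seconds float64) {
	quorumPrevoteMessageDelay.WithLabelValues(chainID).Set(seconds)
}
```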
dependabot[bot]
c477c810f3 build(deps): Bump github.com/prometheus/client_golang (#7638)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.11.0 to 1.12.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.11.0...v1.12.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-01-19 08:46:25 -05:00
dependabot[bot]
3757810247 build(deps): Bump github.com/BurntSushi/toml from 0.4.1 to 1.0.0 (#7561) 2022-01-12 15:01:32 +01:00
dependabot[bot]
3b467f951d build(deps): Bump github.com/rs/cors from 1.8.0 to 1.8.2 (#7483)
Bumps [github.com/rs/cors](https://github.com/rs/cors) from 1.8.0 to 1.8.2.
- [Release notes](https://github.com/rs/cors/releases)
- [Commits](https://github.com/rs/cors/compare/v1.8.0...v1.8.2)

---
updated-dependencies:
- dependency-name: github.com/rs/cors
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-22 08:18:55 -08:00
Marko
b14dc70664 Reduce p2p log noise (#7465)
* reduce some logs

* reduce error logs

* remove debug
2021-12-17 11:32:08 +01:00
dependabot[bot]
2cbb35f980 build(deps): Bump github.com/spf13/viper from 1.10.0 to 1.10.1 (#7462)
Bumps [github.com/spf13/viper](https://github.com/spf13/viper) from 1.10.0 to 1.10.1.
- [Release notes](https://github.com/spf13/viper/releases)
- [Commits](https://github.com/spf13/viper/compare/v1.10.0...v1.10.1)

---
updated-dependencies:
- dependency-name: github.com/spf13/viper
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-16 09:15:09 -05:00
dependabot[bot]
c9c570e151 build(deps): Bump google.golang.org/grpc from 1.42.0 to 1.43.0 (#7454)
* build(deps): Bump google.golang.org/grpc from 1.42.0 to 1.43.0

Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.42.0 to 1.43.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.42.0...v1.43.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2021-12-15 08:08:27 -08:00
dependabot[bot]
e6700355f6 build(deps): Bump github.com/spf13/cobra from 1.2.1 to 1.3.0 (#7453)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.2.1 to 1.3.0.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Changelog](https://github.com/spf13/cobra/blob/master/CHANGELOG.md)
- [Commits](https://github.com/spf13/cobra/compare/v1.2.1...v1.3.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-15 09:49:37 -05:00
dependabot[bot]
40f18b8d8f build(deps): Bump github.com/adlio/schema from 1.2.2 to 1.2.3 (#7431)
Bumps [github.com/adlio/schema](https://github.com/adlio/schema) from 1.2.2 to 1.2.3.
- [Release notes](https://github.com/adlio/schema/releases)
- [Commits](https://github.com/adlio/schema/compare/v1.2.2...v1.2.3)

---
updated-dependencies:
- dependency-name: github.com/adlio/schema
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-13 06:24:05 -08:00
dependabot[bot]
4d0b6e7c5a build(deps): Bump github.com/spf13/viper from 1.9.0 to 1.10.0 (#7433)
Bumps [github.com/spf13/viper](https://github.com/spf13/viper) from 1.9.0 to 1.10.0.
- [Release notes](https://github.com/spf13/viper/releases)
- [Commits](https://github.com/spf13/viper/compare/v1.9.0...v1.10.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/viper
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-13 08:47:33 -05:00
dependabot[bot]
6695e525f9 build(deps): Bump github.com/adlio/schema from 1.1.15 to 1.2.2 (#7421)
* build(deps): Bump github.com/adlio/schema from 1.1.15 to 1.2.2

Bumps [github.com/adlio/schema](https://github.com/adlio/schema) from 1.1.15 to 1.2.2.
- [Release notes](https://github.com/adlio/schema/releases)
- [Commits](https://github.com/adlio/schema/compare/v1.1.15...v1.2.2)

---
updated-dependencies:
- dependency-name: github.com/adlio/schema
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update usage of Migrator API.

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2021-12-10 12:48:02 -08:00
dependabot[bot]
6eeb1b3a5d build(deps): Bump github.com/adlio/schema from 1.1.14 to 1.1.15 (#7405)
Bumps [github.com/adlio/schema](https://github.com/adlio/schema) from 1.1.14 to 1.1.15.
- [Release notes](https://github.com/adlio/schema/releases)
- [Commits](https://github.com/adlio/schema/compare/v1.1.14...v1.1.15)

---
updated-dependencies:
- dependency-name: github.com/adlio/schema
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-08 09:03:42 -05:00
mergify[bot]
cac59a7677 Update Mergify configuration. (backport #7388) (#7389)
Per https://docs.mergify.com/actions/merge/#commit-message, the
commit_message option is deprecated and will be removed in 2022.
Replace it with the template suggested here:

https://docs.mergify.com/actions/queue/

(cherry picked from commit 02d456b8b8)
2021-12-06 13:34:29 -08:00
mergify[bot]
dfd5bae784 Update mergify configuration. (backport #7385) (#7386)
Per https://blog.mergify.com/strict-mode-deprecation/, the strict mode
has been deprecated and will be turned off on 10-Jan-2022. This updates
the config to use the new, approved thing instead of the old thing.

(cherry picked from commit 2d4844f97f)
2021-12-06 12:43:01 -08:00
M. J. Fromberger
41c176ccc6 Prepare release v0.34.15. (#7371) 2021-12-02 09:32:55 -08:00
mergify[bot]
05340ca069 cmd: add integration test for rollback functionality (backport #7315) (#7368)
* cmd: add integration test and fix bug in rollback command (#7315)

(cherry picked from commit bca2080c01)

Co-authored-by: Callum Waters <cmwaters19@gmail.com>
2021-12-02 08:30:57 -08:00
M. J. Fromberger
9994396e59 pubsub: Report a non-nil error when shutting down. (#7309)
If a subscriber arrives while the pubsub service is shutting down, the existing
code will return a nil subscription without error. With unlucky timing, this
may lead to a nil indirection panic in the RPC service.

To avoid that problem, make sure that when a subscription fails for this
reason, we report a non-nil error so that the client will detect it and give up
gracefully.
2021-11-23 12:25:59 -08:00
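A minimal sketch of the guard described above, with placeholder types (the real pubsub service has a richer API): the key point is that `Subscribe` must never return a nil subscription together with a nil error.

```
package pubsubsketch

import (
	"errors"
	"sync"
)

var ErrServerStopped = errors.New("pubsub server is shutting down")

type Subscription struct{}

type Server struct {
	mtx     sync.RWMutex
	stopped bool
}

func (s *Server) Subscribe(clientID string) (*Subscription, error) {
	s.mtx.RLock()
	defer s.mtx.RUnlock()
	if s.stopped {
		// Returning (nil, nil) here is what allowed the nil indirection
		// panic in the RPC service; a sentinel error lets callers give
		// up gracefully instead.
		return nil, ErrServerStopped
	}
	return &Subscription{}, nil
}
```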
dependabot[bot]
c4834df3f3 build(deps): Bump github.com/tendermint/tm-db from 0.6.4 to 0.6.6 (#7286)
Bumps [github.com/tendermint/tm-db](https://github.com/tendermint/tm-db) from 0.6.4 to 0.6.6.
- [Release notes](https://github.com/tendermint/tm-db/releases)
- [Changelog](https://github.com/tendermint/tm-db/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tendermint/tm-db/compare/v0.6.4...v0.6.6)

---
updated-dependencies:
- dependency-name: github.com/tendermint/tm-db
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-16 12:21:59 -08:00
Thane Thomson
12e3419f2b rpc: Add experimental config params to allow for subscription buffer size control (tm v0.34.x) (#7230)
A workaround for #6729. Add parameters to control buffer sizes for
event subscription RPC clients. On some networks, buffering causes
clients to be dropped and/or events to be lost.

For additional context, see the discussion on #7188.

- Add experimental_subscription_buffer_size config parameter
- Add experimental_websocket_write_buffer_size config parameter
- Add experimental_close_on_slow_client config parameter

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2021-11-09 12:35:45 -08:00
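A sketch of how these parameters might appear on a config struct; the snake_case keys are the ones listed above, while the Go field names and comments are assumptions:

```
package configsketch

// RPCConfig shows only the three experimental fields added by this change.
type RPCConfig struct {
	// Events buffered per subscription before the client is considered slow.
	SubscriptionBufferSize int `mapstructure:"experimental_subscription_buffer_size"`

	// Size of the websocket write buffer used to fan events out to clients.
	WebSocketWriteBufferSize int `mapstructure:"experimental_websocket_write_buffer_size"`

	// If true, close connections that cannot keep up with the event stream.
	CloseOnSlowClient bool `mapstructure:"experimental_close_on_slow_client"`
}
```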
dependabot[bot]
9ec863f948 build(deps): Bump github.com/lib/pq from 1.10.3 to 1.10.4 (#7259) 2021-11-09 12:47:41 +01:00
dependabot[bot]
d0031b0503 build(deps): Bump github.com/go-kit/kit from 0.10.0 to 0.12.0 (#7213)
* build(deps): Bump github.com/go-kit/kit from 0.10.0 to 0.12.0

Bumps [github.com/go-kit/kit](https://github.com/go-kit/kit) from 0.10.0 to 0.12.0.
- [Release notes](https://github.com/go-kit/kit/releases)
- [Commits](https://github.com/go-kit/kit/compare/v0.10.0...v0.12.0)

---
updated-dependencies:
- dependency-name: github.com/go-kit/kit
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* add nolint

* fix lint

* fix build

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: tycho garen <garen@tychoish.com>
2021-11-08 14:39:00 -05:00
dependabot[bot]
d35b50b528 build(deps): Bump github.com/spf13/viper from 1.7.1 to 1.9.0 (#7211)
* build(deps): Bump github.com/spf13/viper from 1.7.1 to 1.9.0

Bumps [github.com/spf13/viper](https://github.com/spf13/viper) from 1.7.1 to 1.9.0.
- [Release notes](https://github.com/spf13/viper/releases)
- [Commits](https://github.com/spf13/viper/compare/v1.7.1...v1.9.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/viper
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* add nolint

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: tycho garen <garen@tychoish.com>
2021-11-08 14:19:54 -05:00
dependabot[bot]
bd48acb2ca build(deps): Bump google.golang.org/grpc from 1.38.0 to 1.42.0 (#7232)
* build(deps): Bump google.golang.org/grpc from 1.38.0 to 1.42.0

Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.38.0 to 1.42.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.38.0...v1.42.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Fix Unix-domain socket paths.

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
Co-authored-by: tycho garen <garen@tychoish.com>
2021-11-08 12:39:15 -05:00
dependabot[bot]
0b835bea7a build(deps): Bump github.com/minio/highwayhash from 1.0.1 to 1.0.2 (#7234)
Bumps [github.com/minio/highwayhash](https://github.com/minio/highwayhash) from 1.0.1 to 1.0.2.
- [Release notes](https://github.com/minio/highwayhash/releases)
- [Commits](https://github.com/minio/highwayhash/compare/v1.0.1...v1.0.2)

---
updated-dependencies:
- dependency-name: github.com/minio/highwayhash
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-05 11:35:05 +01:00
dependabot[bot]
12ecfb0383 build(deps): Bump github.com/prometheus/client_golang (#7233)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.8.0 to 1.11.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/master/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.8.0...v1.11.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-04 17:48:23 +01:00
dependabot[bot]
3e7fc468e4 build(deps): Bump github.com/Workiva/go-datastructures (#7236)
Bumps [github.com/Workiva/go-datastructures](https://github.com/Workiva/go-datastructures) from 1.0.52 to 1.0.53.
- [Release notes](https://github.com/Workiva/go-datastructures/releases)
- [Commits](https://github.com/Workiva/go-datastructures/compare/v1.0.52...v1.0.53)

---
updated-dependencies:
- dependency-name: github.com/Workiva/go-datastructures
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-04 15:36:27 +01:00
dependabot[bot]
113118ec00 build(deps): Bump github.com/BurntSushi/toml from 0.3.1 to 0.4.1 (#7235)
Bumps [github.com/BurntSushi/toml](https://github.com/BurntSushi/toml) from 0.3.1 to 0.4.1.
- [Release notes](https://github.com/BurntSushi/toml/releases)
- [Commits](https://github.com/BurntSushi/toml/compare/v0.3.1...v0.4.1)

---
updated-dependencies:
- dependency-name: github.com/BurntSushi/toml
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-04 09:18:51 -04:00
Sam Kleinman
4ef140f6ca lint: cleanup pending lint errors (#7237) 2021-11-04 08:08:55 -04:00
Sam Kleinman
61831cf5ef codeowners: backport master codeowners (#7229) 2021-11-03 12:49:01 -04:00
Sam Kleinman
8a2dcbafae ci: backport lint configuration changes (#7225) 2021-11-03 12:43:22 -04:00
dependabot[bot]
3e119fc6c4 build(deps): Bump github.com/spf13/cobra from 1.1.1 to 1.2.1 (#7215) 2021-11-03 16:57:32 +01:00
dependabot[bot]
f721bf5154 build(deps): Bump github.com/btcsuite/btcd (#7209)
Bumps [github.com/btcsuite/btcd](https://github.com/btcsuite/btcd) from 0.21.0-beta to 0.22.0-beta.
- [Release notes](https://github.com/btcsuite/btcd/releases)
- [Changelog](https://github.com/btcsuite/btcd/blob/master/CHANGES)
- [Commits](https://github.com/btcsuite/btcd/compare/v0.21.0-beta...v0.22.0-beta)

---
updated-dependencies:
- dependency-name: github.com/btcsuite/btcd
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-03 11:15:23 -04:00
dependabot[bot]
3567d3ab38 build(deps): Bump github.com/rs/cors from 1.7.0 to 1.8.0 (#7208)
Bumps [github.com/rs/cors](https://github.com/rs/cors) from 1.7.0 to 1.8.0.
- [Release notes](https://github.com/rs/cors/releases)
- [Commits](https://github.com/rs/cors/compare/v1.7.0...v1.8.0)

---
updated-dependencies:
- dependency-name: github.com/rs/cors
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-03 11:14:55 -04:00
dependabot[bot]
46a6691e11 build(deps): Bump github.com/golang/protobuf from 1.5.0 to 1.5.2 (#7207)
Bumps [github.com/golang/protobuf](https://github.com/golang/protobuf) from 1.5.0 to 1.5.2.
- [Release notes](https://github.com/golang/protobuf/releases)
- [Commits](https://github.com/golang/protobuf/compare/v1.5.0...v1.5.2)

---
updated-dependencies:
- dependency-name: github.com/golang/protobuf
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-03 11:14:41 -04:00
dependabot[bot]
876b3c0dbe build(deps): Bump github.com/adlio/schema from 1.1.13 to 1.1.14 (#7212)
Bumps [github.com/adlio/schema](https://github.com/adlio/schema) from 1.1.13 to 1.1.14.
- [Release notes](https://github.com/adlio/schema/releases)
- [Commits](https://github.com/adlio/schema/compare/v1.1.13...v1.1.14)

---
updated-dependencies:
- dependency-name: github.com/adlio/schema
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-03 10:23:19 -04:00
dependabot[bot]
31b3e279fc build(deps): Bump github.com/go-logfmt/logfmt from 0.5.0 to 0.5.1 (#7214)
Bumps [github.com/go-logfmt/logfmt](https://github.com/go-logfmt/logfmt) from 0.5.0 to 0.5.1.
- [Release notes](https://github.com/go-logfmt/logfmt/releases)
- [Changelog](https://github.com/go-logfmt/logfmt/blob/master/CHANGELOG.md)
- [Commits](https://github.com/go-logfmt/logfmt/compare/v0.5.0...v0.5.1)

---
updated-dependencies:
- dependency-name: github.com/go-logfmt/logfmt
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-03 10:22:26 -04:00
Callum Waters
85870def7b release: prepare changelog for 0.34.14 (#7105) 2021-10-13 10:47:16 +02:00
Callum Waters
ff2758b32e dep: remove IAVL dependency (backport #6550) (#7104) 2021-10-12 18:09:08 +02:00
mergify[bot]
a82cb7dcda Revert "abci: change client to use multi-reader mutexes (#6306)" (backport #7106) (#7109) 2021-10-12 18:00:22 +02:00
mergify[bot]
1dfb3451ea e2e: light nodes should use builtin abci app (#7095) (#7096)
(cherry picked from commit befd669794)

Co-authored-by: Sam Kleinman <garen@tychoish.com>
2021-10-09 00:32:41 -04:00
mergify[bot]
9f13b9b083 e2e: abci protocol should be consistent across networks (backport #7078) (#7085)
* e2e: abci protocol should be consistent across networks (#7078)

It seems weird in retrospect that we allow networks to contain
applications that use different ABCI protocols.

(cherry picked from commit f2a8f5e054)
2021-10-08 10:37:12 -04:00
mergify[bot]
16ba782fa6 cli: allow node operator to rollback last state (backport #7033) (#7080) 2021-10-08 14:35:13 +02:00
M. J. Fromberger
474ed04273 Import Postgres driver support for the psql indexer (backport). (#7057)
I accidentally omitted this from the backport in #6906.
Fixes #7043.
2021-10-04 16:40:12 -07:00
Callum Waters
2d8287d0f7 e2e: allow running of single node using the e2e app (backport) (#7024) 2021-09-29 16:17:32 +02:00
Sam Kleinman
294a9695b4 e2e: backport minor reliability improvements (#6967) 2021-09-21 17:29:56 -04:00
M. J. Fromberger
849461aab2 Release v0.34.13
https://github.com/tendermint/tendermint/blob/v0.34.13/CHANGELOG.md#v0.34.12
2021-09-08 15:09:15 -04:00
M. J. Fromberger
8ba6d218e4 Backport the psql indexer into v0.34.x (#6906)
This change backports the PostgreSQL indexing sink, addressing part of #6828.

Development on the main branch has diverged substantially since the v0.34.x
release. It includes package moves, breaking API and protobuf schema changes,
and new APIs, all of which together have a large footprint on the mapping
between the implementation at tip and the v0.34 release branch.

To avoid the need to retrofit all of those improvements, this change works by
injecting the new indexing sink into the existing (v0.34) indexing interfaces
by delegation. This means the backport does _not_ pull in all the newer APIs
for event handling, and thus has minimal impact on existing code written
against the v0.34 package structure.

This change includes the test for the `psql` implementation, and thus updates
some Go module dependencies. Because it does not interact with any other types,
however, I did not add any unit tests to other packages in this change.

Related changes:
 * Update module dependencies for psql backport.
 * Update test data to be type-compatible with the old protobuf types.
 * Add config settings for the PostgreSQL indexer.
 * Clean up some linter settings.
 * Hook up the psql indexer in the node main.
2021-09-07 18:57:44 -04:00
Callum Waters
0f8932f4ef light: fix early erroring (#6905) 2021-09-07 12:02:16 +02:00
Callum Waters
73ef2675ce statesync: improve stateprovider handling in the syncer (backport) (#6881) 2021-09-01 16:18:07 +02:00
mergify[bot]
e0c6199aae abci: change client to use multi-reader mutexes (backport #6306) (#6873) 2021-08-30 11:57:39 -04:00
mergify[bot]
0c05841902 internal/consensus: update error log (#6863) (#6867)
Reported in Osmosis, where the message is extremely long. There is also no real reason to log the message itself; if we must, we can log it at DEBUG level.

(cherry picked from commit 58a6cfff9a)

Co-authored-by: Aleksandr Bezobchuk <alexanderbez@users.noreply.github.com>
2021-08-26 09:35:26 -04:00
mergify[bot]
4023580a25 e2e: cleanup node start function (#6842) (#6848)
I realized after my last commit that my change made the following line of code a bit redundant.

(Alternatively, my last change was redundant to the existing code.)

I took this opportunity to make some minor cleanups and logging changes to the node start code, which I hope will make tests a bit clearer.

(cherry picked from commit a374f74f7c)

Co-authored-by: Sam Kleinman <garen@tychoish.com>
2021-08-20 16:12:13 -04:00
mergify[bot]
2db1e422d8 e2e: avoid starting nodes from the future (#6835) (#6838)
(cherry picked from commit a4cc8317da)

Co-authored-by: Sam Kleinman <garen@tychoish.com>
2021-08-18 14:42:27 -04:00
William Banfield
093961ae2d test: install abci-cli when running make tests_integrations (#6834) 2021-08-17 11:46:09 -04:00
Tess Rinearson
d030cddca0 version: bump for 0.34.12 (#6832) 2021-08-17 16:37:25 +02:00
Tess Rinearson
3dff227c5b changelog: prepare for v0.34.12 (#6831) 2021-08-17 16:18:15 +02:00
Tess Rinearson
e290bd624f changelog_pending: add missing entry (#6830) 2021-08-17 16:05:36 +02:00
mergify[bot]
0366c2b688 rpc: log update (backport #6825) (#6826) 2021-08-14 09:54:02 -04:00
mergify[bot]
6fde228e9d state/privval: vote timestamp fix (backport #6748) (#6783) 2021-07-30 17:48:49 +02:00
mergify[bot]
b69ac23fd2 light: add case to catch cancelled contexts within the detector (backport #6701) (#6720) 2021-07-14 15:26:03 +02:00
mergify[bot]
da9eefd111 rpc: add chunked rpc interface (backport #6445) (#6717)
* rpc: add chunked rpc interface (#6445)

(cherry picked from commit d9134063e7)

# Conflicts:
#	light/proxy/routes.go
#	node/node.go
#	rpc/core/net.go
#	rpc/core/routes.go

* fix conflicts

Co-authored-by: Sam Kleinman <garen@tychoish.com>
Co-authored-by: marbar3778 <marbar3778@yahoo.com>
2021-07-14 09:22:53 +00:00
Callum Waters
2c2f511f24 light: correctly handle contexts (backport -> v0.34.x) (#6685) 2021-07-09 14:30:33 +02:00
Callum Waters
8b84c7c168 e2e: disable app tests for light client (#6672) 2021-07-07 20:06:55 +02:00
mergify[bot]
0712063ec8 config: add example on external_address (backport #6621) (#6624) 2021-06-30 15:52:19 +02:00
Callum Waters
c2908ef785 release: prepare changelog for v0.34.11 (#6597) 2021-06-18 11:44:39 +02:00
Callum Waters
d515bbcf1d statesync: increase chunk priority and robustness (#6582) 2021-06-18 09:59:52 +02:00
mergify[bot]
be8c9833ca state sync: tune request timeout and chunkers (backport #6566) (#6581)
* state sync: tune request timeout and chunkers (#6566)

(cherry picked from commit 7d961b55b2)

# Conflicts:
#	CHANGELOG_PENDING.md
#	config/config.go
#	internal/statesync/reactor.go
#	internal/statesync/reactor_test.go
#	node/node.go
#	statesync/syncer.go

* fix build

* fix config

* fix config

Co-authored-by: Aleksandr Bezobchuk <alexanderbez@users.noreply.github.com>
Co-authored-by: Aleksandr Bezobchuk <aleks.bezobchuk@gmail.com>
2021-06-15 15:10:16 -04:00
mergify[bot]
358b1f23c0 p2p/conn: check for channel id overflow before processing receive msg (backport #6522) (#6528)
* p2p/conn: check for channel id overflow before processing receive msg (#6522)

Per the Tendermint spec, each Channel has a globally unique byte ID, which maps
to uint8 in Go. However, the proto PacketMsg.ChannelID field is declared as
int32, and when receiving a packet we cast it to a byte without checking for
possible overflow. As a result, a malformed packet with an invalid channel ID
could be sent successfully.

To fix this, we add a check for overflow and return an invalid channel ID
error.

Fixed #6521

(cherry picked from commit 1f46a4c90e)
2021-06-04 20:20:36 -04:00
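A minimal sketch of the overflow guard described above (the function name is hypothetical): the wire format carries the channel ID as an int32, so the narrowing cast to byte must be bounds-checked.

```
package p2psketch

import (
	"fmt"
	"math"
)

// unpackChannelID validates a proto PacketMsg.ChannelID-style int32 before
// narrowing it to the byte channel ID used internally.
func unpackChannelID(wireID int32) (byte, error) {
	if wireID < 0 || wireID > math.MaxUint8 {
		return 0, fmt.Errorf("invalid channel id %d: out of byte range", wireID)
	}
	return byte(wireID), nil
}
```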
Marko
c376b44f1c Backport: #6494 (#6506)
* version: revert version through ldflag only (#6494)

Add version back to versions, but allow it to be overridden via a ldflag.

Reason:

Many users are not setting the ldflag causing issues with tooling that relies on it (cosmjs)

closes #6488

cc @webmaster128

* revert variable rename

* Update CHANGELOG_PENDING.md
2021-05-31 21:15:12 +00:00
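The ldflag pattern referenced above, sketched (the variable name and package path are assumptions about the layout): the version is a plain package-level string with a sane default, and release builds may override it at link time with `-X`.

```
package version

// TMCoreSemVer is the default semantic version reported by the node. Builds
// can override it at link time, e.g.:
//
//	go build -ldflags "-X github.com/tendermint/tendermint/version.TMCoreSemVer=0.34.11"
var TMCoreSemVer = "0.34.11"
```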
Callum Waters
8dd8a4e8ea libs/os: avoid CopyFile truncating destination before checking if regular file (backport: #6428) (#6436) 2021-05-10 13:24:33 +02:00
mergify[bot]
353e3a3243 evidence: fix bug with hashes (backport #6375) (#6381) 2021-04-22 15:05:56 +02:00
Tess Rinearson
a9b4fac610 .github: make core team codeowners (#6384) 2021-04-21 13:38:07 -07:00
mergify[bot]
1614e12035 statesync: improve e2e test outcomes (backport #6378) (#6380)
(cherry picked from commit d36a5905a6)

Co-authored-by: Sam Kleinman <garen@tychoish.com>
2021-04-21 12:30:17 -04:00
Tess Rinearson
68eceda0b5 changelog: update for 0.34.10 (#6357) 2021-04-14 13:46:14 -07:00
Callum Waters
b878326396 e2e: relax timeouts (#6356)
* remove duplicate light error

* quieten handling of txs that already exist in the mempool

* notch back e2e timeouts
2021-04-14 19:53:54 +02:00
mergify[bot]
693e11c6c6 e2e: tx load to use broadcast sync instead of commit (backport #6347) (#6352) 2021-04-14 10:09:49 +02:00
mergify[bot]
6cc3e23a95 light: handle too high errors correctly (backport #6346) (#6351) 2021-04-13 14:46:54 +02:00
Callum Waters
a9ac63510d p2p: fix using custom channels (#6339) 2021-04-13 14:05:36 +02:00
mergify[bot]
bd968aba1f build(deps): Bump google.golang.org/grpc from 1.36.1 to 1.37.0 (bp #6330) (#6335) 2021-04-09 12:20:20 +02:00
Tess Rinearson
e54fdb6204 changelog: prepare changelog for 0.34.9 release (#6333) 2021-04-08 10:05:23 -07:00
Callum Waters
7869f5ec1d light/evidence: handle FLA backport (#6331) 2021-04-08 09:49:25 -07:00
mergify[bot]
af35ca9cf4 state: fix block event indexing reserved key check (#6314) (#6315) 2021-04-05 08:42:17 -04:00
Gustavo Chaín
c9966cd6be p2p: Fix "Unknown Channel" bug on CustomReactors (#6297) 2021-03-30 09:35:00 -04:00
mergify[bot]
6c0c27320c change index block log to info (#6290) (#6294)
## Description

Change log from error to info for indexing blocks

(cherry picked from commit 32ee737d42)

Co-authored-by: Marko <marbar3778@yahoo.com>
2021-03-29 13:57:57 +00:00
mergify[bot]
b7a4d5e7ba fix: jsonrpc url parsing and dial function (#6264) (#6288)
This PR fixes how the JSON-RPC client parses URLs, and how the dial function connects to the RPC server.

Closes: https://github.com/tendermint/tendermint/issues/6260

(cherry picked from commit 9ecfcc93a6)

Co-authored-by: Frojdi Dymylja <33157909+fdymylja@users.noreply.github.com>
2021-03-29 11:05:03 +00:00
mergify[bot]
0682337de2 logging: shorten precommit log message (#6270) (#6274)
This is an attempt to clean up the logging message as requested in #6269.

(cherry picked from commit 3f9066b290)

Co-authored-by: Sam Kleinman <garen@tychoish.com>
2021-03-25 16:19:50 -04:00
mergify[bot]
b00cac9368 rpc: index block events to support block event queries (bp #6226) (#6261) 2021-03-22 15:01:25 -04:00
mergify[bot]
b2f01448be e2e: integrate light clients (bp #6196)
integrate light clients (#6196)
fix e2e app test (#6223)
fix light client generator (#6236)
2021-03-18 13:02:05 +01:00
mergify[bot]
4e25703d58 rpc/jsonrpc/server: return an error in WriteRPCResponseHTTP(Error) (bp #6204) (#6230)
* rpc/jsonrpc/server: return an error in WriteRPCResponseHTTP(Error) (#6204)

instead of panicking
Closes #5529

(cherry picked from commit 00b9524168)

# Conflicts:
#	CHANGELOG_PENDING.md
#	rpc/jsonrpc/server/http_json_handler.go
#	rpc/jsonrpc/server/http_server.go
#	rpc/jsonrpc/server/http_server_test.go
#	rpc/jsonrpc/server/http_uri_handler.go

* resolve conflicts

* fix linting

* fix conflict

Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
Co-authored-by: Marko Baricevic <marbar3778@yahoo.com>
2021-03-17 14:55:05 +00:00
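A simplified sketch of the new behavior (the signature and types here are assumptions): write failures are returned to the HTTP handler instead of panicking.

```
package rpcsketch

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// WriteRPCResponseHTTP marshals res and writes it, reporting any failure to
// the caller rather than panicking inside the handler.
func WriteRPCResponseHTTP(w http.ResponseWriter, res interface{}) error {
	body, err := json.Marshal(res)
	if err != nil {
		return fmt.Errorf("marshal response: %w", err)
	}
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusOK)
	_, err = w.Write(body) // previously, an error here caused a panic
	return err
}
```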
mergify[bot]
d004a584f8 use error.Is to check for nondeterminstic vote error type (#6237) (#6239)
(cherry picked from commit bf8cce83db)

Co-authored-by: Callum Waters <cmwaters19@gmail.com>
2021-03-15 11:20:33 +01:00
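A minimal illustration of the `errors.Is` pattern named above, with a hypothetical sentinel: unlike a direct `==` comparison, it still matches when the error has been wrapped.

```
package votesketch

import (
	"errors"
	"fmt"
	"log"
)

var ErrNonDeterministicVote = errors.New("non-deterministic vote")

func handleVoteError(err error) {
	// Matches ErrNonDeterministicVote even when wrapped, e.g. by fmt.Errorf
	// with %w somewhere up the stack; a plain == comparison would miss it.
	if errors.Is(err, ErrNonDeterministicVote) {
		log.Printf("ignoring non-deterministic vote: %v", err)
		return
	}
	log.Printf("vote error: %v", err)
}

func example() {
	handleVoteError(fmt.Errorf("peer 7: %w", ErrNonDeterministicVote))
}
```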
mergify[bot]
11523b1302 note: add nondeterministic note to events (#6220) (#6225)
## Description

Since events are not hashed into the header, they can be non-deterministic. Changing an event is not consensus-breaking. Docs in the spec will be updated.

(cherry picked from commit 884d4d5252)

Co-authored-by: Marko <marbar3778@yahoo.com>
2021-03-09 16:39:19 +04:00
mergify[bot]
8bb85856d0 e2e: add benchmarking functionality (bp #6210) (#6216) 2021-03-05 15:30:18 +01:00
mergify[bot]
b9cdd0e28e indexer: remove info log (#6194)
Co-authored-by: Aleksandr Bezobchuk <alexanderbez@users.noreply.github.com>
Co-authored-by: Marko <marbar3778@yahoo.com>
2021-03-04 14:47:42 +00:00
mergify[bot]
1b5697a41d mempool/rpc: log grooming (bp #6201) (#6203) 2021-03-04 09:04:13 -05:00
mergify[bot]
a047a4a70f logs: cleanup (#6198)
Co-authored-by: Marko <marbar3778@yahoo.com>
2021-03-04 10:42:19 +00:00
mergify[bot]
52b1d90f56 rpc/jsonrpc: Unmarshal RPCRequest correctly (bp #6191) (#6193)
* rpc/jsonrpc: Unmarshal RPCRequest correctly (#6191)

i.e., without a double pointer. With a double pointer, it was possible to
submit a `null` value, which would crash the server.

```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x189ddc0]

goroutine 1 [running]:
github.com/tendermint/tendermint/rpc/jsonrpc/types.(*RPCRequest).UnmarshalJSON(0xc0000147e0, 0xc00029f201, 0x4, 0x1ff, 0x883baa0, 0xc0000147e0)
        /Users/anton/go/src/github.com/tendermint/tendermint/rpc/jsonrpc/types/types.go:70 +0x100
encoding/json.(*decodeState).literalStore(0xc000216bb0, 0xc00029f201, 0x4, 0x1ff, 0x1998800, 0xc0000147e0, 0x199, 0xc000231700, 0x10e0a5e, 0x197)
        /usr/local/Cellar/go/1.16/libexec/src/encoding/json/decode.go:860 +0x30ce
encoding/json.(*decodeState).value(0xc000216bb0, 0x1998800, 0xc0000147e0, 0x199, 0x1998800, 0xc0000147e0)
        /usr/local/Cellar/go/1.16/libexec/src/encoding/json/decode.go:384 +0x40c
encoding/json.(*decodeState).array(0xc000216bb0, 0x18df040, 0xc0001be540, 0x16, 0xc000216bd8, 0x10e405b)
        /usr/local/Cellar/go/1.16/libexec/src/encoding/json/decode.go:558 +0x365
encoding/json.(*decodeState).value(0xc000216bb0, 0x18df040, 0xc0001be540, 0x16, 0x16, 0x6e)
        /usr/local/Cellar/go/1.16/libexec/src/encoding/json/decode.go:360 +0x22f
encoding/json.(*decodeState).unmarshal(0xc000216bb0, 0x18df040, 0xc0001be540, 0xc000216bd8, 0x0)
        /usr/local/Cellar/go/1.16/libexec/src/encoding/json/decode.go:180 +0x2c9
encoding/json.Unmarshal(0xc00029f200, 0x6, 0x200, 0x18df040, 0xc0001be540, 0x0, 0x0)
        /usr/local/Cellar/go/1.16/libexec/src/encoding/json/decode.go:107 +0x15d
```

(cherry picked from commit fe4e97afe0)

# Conflicts:
#	CHANGELOG_PENDING.md

* fix conflict

Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
2021-03-02 14:46:48 +04:00
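The hazard in miniature (simplified types, not the actual `RPCRequest`): unmarshaling into a pointer-to-pointer lets a JSON `null` set the inner pointer to nil, and the first use of it panics; unmarshaling into a value leaves a zero struct instead.

```
package jsonrpcsketch

import "encoding/json"

type Request struct {
	Method string `json:"method"`
}

func decodeCrashy(data []byte) (*Request, error) {
	var req *Request
	// For input `null`, json.Unmarshal sets req to nil and returns no
	// error; any later use of req then panics with a nil dereference.
	if err := json.Unmarshal(data, &req); err != nil {
		return nil, err
	}
	return req, nil
}

func decodeSafe(data []byte) (Request, error) {
	var req Request
	// Without the double pointer, `null` is a no-op and req stays the
	// zero value, so the server cannot be crashed by a null request.
	err := json.Unmarshal(data, &req)
	return req, err
}
```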
mergify[bot]
28bebe3ddb docs/tutorials: fix sample code #6186
Co-authored-by: winor <12413150+winor30@users.noreply.github.com>
Co-authored-by: Marko Baricevic <marbar3778@yahoo.com>
2021-03-01 08:41:49 +00:00
Tess Rinearson
dea73e08b3 changelog: update for 0.34.8 (#6181) 2021-02-25 12:30:29 +01:00
mergify[bot]
28ce355656 libs/log: [JSON format] include timestamp (bp #6174) (#6179)
Closes #6146
2021-02-25 11:27:49 +04:00
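For illustration only (the field name and format are assumptions, not the tendermint libs/log implementation), JSON log output with a timestamp field looks roughly like:

```
package logsketch

import (
	"encoding/json"
	"fmt"
	"time"
)

func logJSON(level, msg string) {
	entry := map[string]interface{}{
		"level": level,
		"ts":    time.Now().UTC().Format(time.RFC3339),
		"msg":   msg,
	}
	b, _ := json.Marshal(entry)
	fmt.Println(string(b))
}
```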
mergify[bot]
55ae781efa logging: print string instead of callback (#6178)
## Description

Fixes marshaling error in sdk

closes https://github.com/cosmos/cosmos-sdk/issues/8578

The output stays the same; we avoid passing the callback because the SDK uses typed logging.

Co-authored-by: Marko <marbar3778@yahoo.com>
2021-02-24 19:08:05 +00:00
mergify[bot]
0191a22636 state executor: groom logs (bp #6152) (#6172) 2021-02-24 09:50:46 -05:00
Tess Rinearson
9d9b947b02 goreleaser: reintroduce arm64 build instructions 2021-02-23 11:20:19 +01:00
Tess Rinearson
c6e0d20d4b Revert "Revert "tooling: remove tools/Makefile (bp #6102) (#6106)""
This reverts commit afd07096a7.

I had believed that this tooling change could have been what broke our
GoReleaser flow; I now know that it was a result of changes in Go 1.16
and an update to GoReleaser! GoReleaser has now been updated again
and our flow should be un-broken.
2021-02-23 11:20:06 +01:00
Tess Rinearson
efd9d07257 changelog: fix changelog pending version numbering (#6149) 2021-02-19 14:51:18 +01:00
mergify[bot]
a0f376127d consensus: more log grooming (bp #6140) (#6143) 2021-02-18 14:23:12 -05:00
mergify[bot]
8d3c36ccc3 abci: Fix ReCheckTx for Socket Client (bp #6124) (#6125) 2021-02-18 08:36:05 -05:00
Tess Rinearson
15eb2c2211 .goreleaser: remove arm64 build instructions and bump changelog again (#6131) 2021-02-18 03:04:16 +01:00
Tess Rinearson
e4d2893ff6 changelog: bump to v0.34.6 2021-02-18 02:36:01 +01:00
Tess Rinearson
afd07096a7 Revert "tooling: remove tools/Makefile (bp #6102) (#6106)"
This reverts commit 1b2174a0da.
2021-02-18 02:36:01 +01:00
Tess Rinearson
340071d81b changelog: update for 0.34.5 (#6129) 2021-02-18 02:09:16 +01:00
Tess Rinearson
53d40e1092 consensus: remove privValidator from log call (#6128) 2021-02-18 01:47:55 +01:00
Aleksandr Bezobchuk
bedb00d252 consensus: Groom Logs (#5917)
Executed a local network using simapp and looked for logs that seemed superfluous. This isn't by any means an exhaustive grooming, but it should drastically improve log legibility.

ref: #5912
2021-02-17 10:05:13 +00:00
mergify[bot]
1030072dd0 changelog: update 0.34.3 changelog with details on security vuln (bp #6108) (#6110)
* changelog: update 0.34.3 changelog with details on security vuln (#6108)

Closes #6095.

(cherry picked from commit df0b868415)

# Conflicts:
#	CHANGELOG.md

* solve conflicts

Co-authored-by: Tess Rinearson <tess.rinearson@gmail.com>
Co-authored-by: Marko Baricevic <marbar3778@yahoo.com>
2021-02-15 14:51:54 +01:00
mergify[bot]
1b2174a0da tooling: remove tools/Makefile (bp #6102) (#6106)
Description

We use Docker for all protobuf-related tasks, so providing a way to download tooling is unnecessary.

ref #6103

Co-authored-by: Tess Rinearson <tess.rinearson@gmail.com>
Co-authored-by: Marko <marbar3778@yahoo.com>
2021-02-12 10:09:29 +00:00
Tess Rinearson
6bac9d9f43 makefile: remove call to tools (#6104) 2021-02-11 22:31:17 +01:00
Tess Rinearson
5efbbab789 changelog: improve with suggestions from @melekes (#6097) 2021-02-11 20:47:43 +01:00
Tess Rinearson
4a0fab041b changelog: update for v0.34.4 (#6096) 2021-02-11 19:13:40 +01:00
Callum Waters
5ee2ada942 .github: remove erik as reviewer from dependapot (#6076) 2021-02-11 17:29:52 +01:00
Callum Waters
fbf2c3815d check block store base is non negative before sending block meta or commits (#6042) 2021-02-11 17:29:52 +01:00
dependabot[bot]
cc57a560e7 build(deps-dev): Bump watchpack from 2.1.0 to 2.1.1 in /docs (#6063)
Bumps [watchpack](https://github.com/webpack/watchpack) from 2.1.0 to 2.1.1.
- [Release notes](https://github.com/webpack/watchpack/releases)
- [Commits](https://github.com/webpack/watchpack/compare/v2.1.0...v2.1.1)

v2.1.1 bugfix: fix warnings with ENOENT when symlinks are resolved by watchpack.
2021-02-11 17:18:45 +01:00
Erik Grinaker
950c9f71b5 CODEOWNERS: remove erikgrinaker (#6057) 2021-02-11 17:18:45 +01:00
dependabot[bot]
90a2c33285 build(deps): Bump actions/cache from v2.1.3 to v2.1.4 (#6055)
Bumps [actions/cache](https://github.com/actions/cache) from v2.1.3 to v2.1.4.
- [Release notes](https://github.com/actions/cache/releases)
- [Commits](https://github.com/actions/cache/compare/v2.1.3...26968a09c0ea4f3e233fdddbafd1166051a095f6)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-02-11 17:18:45 +01:00
Anton Kaliaev
093dcfc8a0 goreleaser: downcase archive and binary names (#6029)
before:

```
Tendermint_0.34.3_darwin_amd64.tar.gz

-rw-r--r--  0 runner docker 192329 Jan 19 19:30 CHANGELOG.md
-rw-r--r--  0 runner docker    321 Jan 19 19:30 CHANGELOG_PENDING.md
-rw-r--r--  0 runner docker  11382 Jan 19 19:30 LICENSE
-rw-r--r--  0 runner docker   8165 Jan 19 19:30 README.md
-rwxr-xr-x  0 runner docker 23224320 Jan 19 19:30 tendermint
```

after:

```
tendermint_0.34.3_darwin_amd64.tar.gz

-rw-r--r--  0 runner docker 192329 Jan 19 19:30 CHANGELOG.md
-rw-r--r--  0 runner docker    321 Jan 19 19:30 CHANGELOG_PENDING.md
-rw-r--r--  0 runner docker  11382 Jan 19 19:30 LICENSE
-rw-r--r--  0 runner docker   8165 Jan 19 19:30 README.md
-rwxr-xr-x  0 runner docker 23224320 Jan 19 19:30 tendermint
```
2021-02-11 17:09:10 +01:00
Anton Kaliaev
72851a12d3 libs/log: format []byte as hexadecimal string (uppercased) (#5960)
Closes: #5806

Co-authored-by: Lanie Hei <heixx011@umn.edu>
2021-02-11 17:02:38 +01:00
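The uppercase-hex formatting is standard `fmt` behavior, e.g.:

```
package main

import "fmt"

func main() {
	b := []byte{0xde, 0xad, 0xbe, 0xef}
	fmt.Printf("%X\n", b) // DEADBEEF (uppercase); %x would print deadbeef
}
```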
dependabot[bot]
07979d88d0 build(deps): Bump github.com/tendermint/tm-db from 0.6.3 to 0.6.4 (#6073)
Bumps [github.com/tendermint/tm-db](https://github.com/tendermint/tm-db) from 0.6.3 to 0.6.4.
- [Release notes](https://github.com/tendermint/tm-db/releases)
- [Changelog](https://github.com/tendermint/tm-db/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tendermint/tm-db/compare/v0.6.3...v0.6.4)

0.6.4 (2021-02-09): bump protobuf to 1.3.2 and grpc to 1.35.0.
2021-02-11 16:56:50 +01:00
Marko Baricevic
12eac92738 docs: fix typo in state sync example (#5989) 2021-02-11 15:08:23 +00:00
Aleksandr Bezobchuk
73375b0912 backport v0.34.x: 6000 & 6001 2021-02-11 09:50:18 -05:00
Marko
e3a79d4e2e tests: fix make test (#5966)
## Description
 
- bump deadlock dep to master
  - fixes `make test` since we now use `deadlock.Once`

Closes: #XXX
2021-02-11 14:44:19 +01:00
Marko
fa3287c012 maverick: reduce some duplication (#6052)
- Reduce duplication in messages and metrics.
- merge WAL interfaces. Meant to push the developer to make changes in both places.
2021-02-11 14:44:19 +01:00
Marko
cb7c9564a4 docker: dont login when in PR (#5961) 2021-02-11 14:44:19 +01:00
odidev
9df5fcf1f1 docker: release Linux/ARM64 image (#5925)
Co-authored-by: Marko <marbar3778@yahoo.com>
2021-02-11 14:44:19 +01:00
Anton Kaliaev
d575f8a38f fix build 2021-02-11 16:10:28 +04:00
Anton Kaliaev
1e355b6b56 .github: use job ID (not step ID) inside if condition (#6060)
https://stackoverflow.com/a/66073112/820520
2021-02-11 16:10:28 +04:00
Anton Kaliaev
108073077b .github: fix fuzz-nightly job (#5965)
outputs is a property of the job, not an individual step.
2021-02-11 16:10:28 +04:00
Anton Kaliaev
8b48d23084 terminate go-fuzz gracefully (w/ SIGINT) (#5973)
and preserve exit code.

```
2021/01/26 03:34:49 workers: 2, corpus: 4 (8m28s ago), crashers: 0, restarts: 1/9976, execs: 11013732 (21596/sec), cover: 121, uptime: 8m30s
make: *** [fuzz-mempool] Terminated
Makefile:5: recipe for target 'fuzz-mempool' failed
Error: Process completed with exit code 124.
```

https://github.com/tendermint/tendermint/runs/1766661614

`continue-on-error` should make GH ignore any error codes.
2021-02-11 16:10:28 +04:00
Anton Kaliaev
c3d2f68c05 .github: archive crashers and fix set-crashers-count step (#5992) 2021-02-11 16:10:28 +04:00
Anton Kaliaev
0f58a8470a .github: rename crashers output (fuzz-nightly-test) (#5993) 2021-02-11 16:10:28 +04:00
Anton Kaliaev
197b746f8d test/fuzz: move fuzz tests into this repo (#5918)
Co-authored-by: Emmanuel T Odeke <emmanuel@orijtech.com>

Closes #5907

- add init-corpus to blockchain reactor
- remove validator-set FromBytes test
now that we have proto, we don't need to test it! bye amino
- simplify mempool test
do we want to test remote ABCI app?
- do not recreate mux on every crash in jsonrpc test
- update p2p pex reactor test
- remove p2p/listener test
the API has changed + I did not understand what it tested anyway
- update secretconnection test
- add readme and makefile
- list inputs in readme
- add nightly workflow
- remove blockchain fuzz test
EncodeMsg / DecodeMsg no longer exist
2021-02-11 16:10:28 +04:00
Marko Baricevic
06623202f0 Update metrics.md (#5930) 2021-02-11 10:55:29 +00:00
Marko
a3a9398971 proto: docker deployment (#5931) 2021-02-11 10:55:29 +00:00
Marko
7b7d6e1f98 docs: change v0.33 version (#5950)
- change version for v0.33.x

Closes: #XXX
2021-02-11 10:55:29 +00:00
Erik Grinaker
98be3f2aab Makefile: always pull image in proto-gen-docker. (#5953)
The `proto-gen-docker` target didn't pull an updated Docker image and would use a local image if present, which could be outdated and produce wrong results.
2021-02-11 10:55:29 +00:00
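The actual fix lives in the Makefile; the step below is a hypothetical workflow equivalent of the same idea, reusing the builder image name that appears in the proto-docker workflow further down:

```yaml
- name: Generate proto stubs
  # hypothetical sketch: pull first so a stale local copy of the
  # builder image is never silently reused
  run: |
    docker pull tendermintdev/docker-build-proto
    make proto-gen-docker
```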
Tess Rinearson
3e41bb57d6 .github/workflows: cleanup yaml for e2e nightlies (#6049) 2021-02-11 11:43:19 +01:00
Tess Rinearson
6252b63e53 .github/workflows: fix whitespace in e2e config file (#6043) 2021-02-11 11:43:19 +01:00
Tess Rinearson
591e55b301 .github/workflows: separate e2e workflows for 0.34.x and master (#6041)
Co-authored-by: Erik Grinaker <erik@interchain.berlin>
Co-authored-by: Marko <marbar3778@yahoo.com>
2021-02-11 11:43:19 +01:00
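The key to the split is that the 0.34.x workflow pins its checkout to the release branch, as the e2e-nightly diff further down shows; a minimal sketch:

```yaml
# e2e-nightly-34x always tests the release branch,
# regardless of which ref triggered the scheduled run
- uses: actions/checkout@v2
  with:
    ref: 'v0.34.x'
```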
Erik Grinaker
0028ac38ed test/e2e: increase validator tolerances (#6037) 2021-02-11 11:43:19 +01:00
Tess Rinearson
57aed01639 .github/workflows: try different e2e nightly test set (#6036) 2021-02-11 11:43:19 +01:00
Erik Grinaker
8788673a3e test/e2e: increase sign/propose tolerances (#6033)
E2E tests often fail because validators miss signing or proposing blocks. Often this is because e.g. there's a lot of disruption in the network or it takes a long time to start up all the nodes.

This changes the test criteria to only check for 3 signed/proposed blocks, rather than a fraction of the expected blocks. This should be enough to catch most issues, apart from performance problems causing nodes to miss signing/proposing, but we may want separate tests for those sorts of things.
2021-02-11 11:43:19 +01:00
Tess Rinearson
f009a1a731 Revert "e2e: releases nightly (#5906)" (#6031)
This reverts commit 64961e2267, to see if it will make the workflow dispatch trigger reappear and fix our Slack notification link.
2021-02-11 11:43:19 +01:00
Anton Kaliaev
33fb03fcc8 test/e2e: enable pprof server to help debugging failures (#6003) 2021-02-11 11:43:19 +01:00
Marko
eb09376ba0 e2e: releases nightly (#5906) 2021-02-11 11:43:19 +01:00
Anton Kaliaev
f48b154751 evidence: terminate broadcastEvidenceRoutine when peer is stopped (#6068) 2021-02-09 11:36:36 +04:00
Callum Waters
2dd5cbfb5c light: remove witnesses in order of decreasing index (#6065) 2021-02-08 17:36:21 +01:00
Callum Waters
3c22ed8320 light: fix panic with RPC calls to commit and validator when height is nil (#6040) 2021-02-04 15:17:34 +01:00
Anton Kaliaev
7f02d8971c light/provider/http: fix Validators (#6024)
Closes #6010
2021-02-04 13:28:59 +04:00
Callum Waters
b021ad5b7a test: don't use foo-bar.net in TestHTTPClientMakeHTTPDialer (#5997) (#6047)
This test relied on connecting to the external site `foo-bar.net`, and (predictably) the site went down and broke all of our CI runs. This changes it to use local HTTP servers instead.

Co-authored-by: Erik Grinaker <erik@interchain.berlin>
2021-02-04 13:11:07 +04:00
Cyrus Goh
f89eca427a docs: bump package-lock.json of v0.34.x (#5952) 2021-01-22 20:45:04 +00:00
Marko
0213e544e0 docs: package-lock.json fix (#5948) 2021-01-22 19:03:31 +00:00
Tess Rinearson
6b2ab0f0e1 changelog: update for 0.34.3 (#5926) 2021-01-19 16:12:47 +01:00
Callum
a2a6852ab9 use correct source of evidence time
Conflicting votes are now sent to the evidence pool to form duplicate vote evidence only once the evidence's height has completed and the block time has been finalized.
2021-01-19 16:00:02 +01:00
Tess Rinearson
7ea4dc52ed readme: add security mailing list (#5916)
No one knows we have this mailing list 🙈
2021-01-19 12:58:35 +01:00
dependabot[bot]
d969a5ed1b build(deps): Bump vuepress-theme-cosmos from 1.0.179 to 1.0.180 in /docs (#5915)
Bumps [vuepress-theme-cosmos](https://github.com/cosmos/vuepress-theme-cosmos) from 1.0.179 to 1.0.180.
2021-01-19 12:58:35 +01:00
Tess Rinearson
0def3a964a config: fix mispellings (#5914)
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2021-01-19 12:58:35 +01:00
Marko
54338a52fa proto: bump gogoproto (1.3.2) (#5886)
- bump gogoproto (1.3.2)
- regenerate proto files

Closes: #XXX
2021-01-19 12:41:35 +01:00
Tess Rinearson
bf45df0b2b mod: go mod tidy 2021-01-19 12:17:29 +01:00
Tess Rinearson
46fa6e666c .github/codeowners: add alexanderbez (#5913)
* .github/codeowners: add alexanderbez

* Update .github/CODEOWNERS

Co-authored-by: Marko <marbar3778@yahoo.com>

Co-authored-by: Marko <marbar3778@yahoo.com>
2021-01-19 12:17:29 +01:00
dependabot[bot]
a18e3de3ac build(deps): Bump google.golang.org/grpc from 1.34.0 to 1.35.0 (#5902)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.34.0 to 1.35.0.
2021-01-19 12:17:29 +01:00
dependabot[bot]
e8d35597df build(deps): Bump github.com/stretchr/testify from 1.6.1 to 1.7.0 (#5897)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.6.1 to 1.7.0.
2021-01-19 12:17:29 +01:00
369 changed files with 28888 additions and 15402 deletions

.github/CODEOWNERS vendored

@@ -1,27 +1,10 @@
# CODEOWNERS: https://help.github.com/articles/about-codeowners/
# Everything goes through the following "global owners" by default.
# Unless a later match takes precedence, these three will be
# requested for review when someone opens a PR.
# Note that the last matching pattern takes precedence, so
# global owners are only requested if there isn't a more specific
# codeowner specified below. For this reason, the global codeowners
# are often repeated in package-level definitions.
* @ebuchman @erikgrinaker @melekes @tessr
# Overrides for tooling packages
.circleci/ @marbar3778 @ebuchman @erikgrinaker @melekes @tessr
.github/ @marbar3778 @ebuchman @erikgrinaker @melekes @tessr
DOCKER/ @marbar3778 @ebuchman @erikgrinaker @melekes @tessr
# Overrides for core Tendermint packages
abci/ @marbar3778 @ebuchman @erikgrinaker @melekes @tessr
evidence/ @cmwaters @ebuchman @erikgrinaker @melekes @tessr
light/ @cmwaters @melekes @ebuchman
# Overrides for docs
*.md @marbar3778 @ebuchman @erikgrinaker @melekes @tessr
docs/ @marbar3778 @ebuchman @erikgrinaker @melekes @tessr
* @ebuchman @cmwaters @tychoish @williambanfield @creachadair

.github/dependabot.yml vendored

@@ -23,6 +23,5 @@ updates:
reviewers:
- melekes
- tessr
- erikgrinaker
labels:
- T:dependencies

.github/mergify.yml vendored Normal file

@@ -0,0 +1,19 @@
queue_rules:
- name: default
conditions:
- base=v0.34.x
- label=S:automerge
pull_request_rules:
- name: Automerge to v0.34.x
conditions:
- base=v0.34.x
- label=S:automerge
actions:
queue:
method: squash
name: default
commit_message_template: |
{{ title }} (#{{ number }})
{{ body }}

.github/workflows/docker.yml vendored

@@ -14,9 +14,6 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v2
with:
go-version: "^1.15.4"
- uses: actions/checkout@master
- name: Prepare
id: prep
@@ -37,23 +34,26 @@ jobs:
fi
echo ::set-output name=tags::${TAGS}
- name: Set up QEMU
uses: docker/setup-qemu-action@master
with:
platforms: all
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Login to DockerHub
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build Tendermint
run: |
make build-linux && cp build/tendermint DOCKER/tendermint
- name: Publish to Docker Hub
uses: docker/build-push-action@v2
with:
context: ./DOCKER
context: .
file: ./DOCKER/Dockerfile
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.prep.outputs.tags }}

.github/workflows/e2e-manual.yml vendored Normal file

@@ -0,0 +1,35 @@
# Manually run randomly generated E2E testnets (as nightly).
name: e2e-manual
on:
workflow_dispatch:
jobs:
e2e-nightly-test:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
group: ['00', '01', '02', '03']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v2
with:
go-version: '1.17'
- uses: actions/checkout@v2.4.0
- name: Build
working-directory: test/e2e
# Run make jobs in parallel, since we can't run steps in parallel.
run: make -j2 docker generator runner tests
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 4 -d networks/nightly/
- name: Run ${{ matrix.p2p }} p2p testnets
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml

.github/workflows/e2e-nightly-34x.yml vendored

@@ -1,7 +1,12 @@
# Runs randomly generated E2E testnets nightly.
name: e2e-nightly
# Runs randomly generated E2E testnets nightly
# on the 0.34.x release branch
# !! If you change something in this file, you probably want
# to update the e2e-nightly-master workflow as well!
name: e2e-nightly-34x
on:
workflow_dispatch: # allow running workflow manually
workflow_dispatch: # allow running workflow manually, in theory
schedule:
- cron: '0 2 * * *'
@@ -21,6 +26,8 @@ jobs:
go-version: '^1.15.4'
- uses: actions/checkout@v2
with:
ref: 'v0.34.x'
- name: Build
working-directory: test/e2e
@@ -49,5 +56,21 @@ jobs:
SLACK_USERNAME: Nightly E2E Tests
SLACK_ICON_EMOJI: ':skull:'
SLACK_COLOR: danger
SLACK_MESSAGE: Nightly E2E tests failed
SLACK_MESSAGE: Nightly E2E tests failed on v0.34.x
SLACK_FOOTER: ''
e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test
if: ${{ success() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on success
uses: rtCamp/action-slack-notify@ae4223259071871559b6e9d08b24a63d71b3f0c0
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly E2E Tests
SLACK_ICON_EMOJI: ':white_check_mark:'
SLACK_COLOR: good
SLACK_MESSAGE: Nightly E2E tests passed on v0.34.x
SLACK_FOOTER: ''

.github/workflows/e2e-nightly-master.yml vendored Normal file

@@ -0,0 +1,73 @@
# Runs randomly generated E2E testnets nightly on master
# !! If you change something in this file, you probably want
# to update the e2e-nightly-34x workflow as well!
name: e2e-nightly-master
on:
workflow_dispatch: # allow running workflow manually
schedule:
- cron: '0 2 * * *'
jobs:
e2e-nightly-test-2:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
group: ['00', '01', '02', '03']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v2
with:
go-version: '1.15'
- uses: actions/checkout@v2
- name: Build
working-directory: test/e2e
# Run make jobs in parallel, since we can't run steps in parallel.
run: make -j2 docker generator runner
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 4 -d networks/nightly
- name: Run testnets in group ${{ matrix.group }}
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml
e2e-nightly-fail-2:
needs: e2e-nightly-test-2
if: ${{ failure() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: rtCamp/action-slack-notify@ae4223259071871559b6e9d08b24a63d71b3f0c0
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly E2E Tests
SLACK_ICON_EMOJI: ':skull:'
SLACK_COLOR: danger
SLACK_MESSAGE: Nightly E2E tests failed on master
SLACK_FOOTER: ''
e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test-2
if: ${{ success() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on success
uses: rtCamp/action-slack-notify@ae4223259071871559b6e9d08b24a63d71b3f0c0
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly E2E Tests
SLACK_ICON_EMOJI: ':white_check_mark:'
SLACK_COLOR: good
SLACK_MESSAGE: Nightly E2E tests passed on master
SLACK_FOOTER: ''

.github/workflows/fuzz-nightly.yml vendored Normal file

@@ -0,0 +1,83 @@
# Runs fuzzing nightly.
name: fuzz-nightly
on:
workflow_dispatch: # allow running workflow manually
schedule:
- cron: '0 3 * * *'
jobs:
fuzz-nightly-test:
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v2
with:
go-version: '1.15'
- uses: actions/checkout@v2
- name: Install go-fuzz
working-directory: test/fuzz
run: go get -u github.com/dvyukov/go-fuzz/go-fuzz github.com/dvyukov/go-fuzz/go-fuzz-build
- name: Fuzz mempool
working-directory: test/fuzz
run: timeout -s SIGINT --preserve-status 10m make fuzz-mempool
continue-on-error: true
- name: Fuzz p2p-addrbook
working-directory: test/fuzz
run: timeout -s SIGINT --preserve-status 10m make fuzz-p2p-addrbook
continue-on-error: true
- name: Fuzz p2p-pex
working-directory: test/fuzz
run: timeout -s SIGINT --preserve-status 10m make fuzz-p2p-pex
continue-on-error: true
- name: Fuzz p2p-sc
working-directory: test/fuzz
run: timeout -s SIGINT --preserve-status 10m make fuzz-p2p-sc
continue-on-error: true
- name: Fuzz p2p-rpc-server
working-directory: test/fuzz
run: timeout -s SIGINT --preserve-status 10m make fuzz-rpc-server
continue-on-error: true
- name: Archive crashers
uses: actions/upload-artifact@v2
with:
name: crashers
path: test/fuzz/**/crashers
retention-days: 1
- name: Archive suppressions
uses: actions/upload-artifact@v2
with:
name: suppressions
path: test/fuzz/**/suppressions
retention-days: 1
- name: Set crashers count
working-directory: test/fuzz
run: echo "::set-output name=count::$(find . -type d -name 'crashers' | xargs -I % sh -c 'ls % | wc -l' | awk '{total += $1} END {print total}')"
id: set-crashers-count
outputs:
crashers-count: ${{ steps.set-crashers-count.outputs.count }}
fuzz-nightly-fail:
needs: fuzz-nightly-test
if: ${{ needs.fuzz-nightly-test.outputs.crashers-count != 0 }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack if any crashers
uses: rtCamp/action-slack-notify@ae4223259071871559b6e9d08b24a63d71b3f0c0
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly Fuzz Tests
SLACK_ICON_EMOJI: ':firecracker:'
SLACK_COLOR: danger
SLACK_MESSAGE: Crashers found in Nightly Fuzz tests
SLACK_FOOTER: ''

.github/workflows/lint.yml vendored

@@ -11,19 +11,19 @@ jobs:
golangci:
name: golangci-lint
runs-on: ubuntu-latest
timeout-minutes: 4
timeout-minutes: 8
steps:
- uses: actions/checkout@v2
- uses: technote-space/get-diff-action@v4
- uses: actions/checkout@v2.4.0
- uses: technote-space/get-diff-action@v5
with:
PATTERNS: |
**/**.go
go.mod
go.sum
- uses: golangci/golangci-lint-action@v2.2.1
- uses: golangci/golangci-lint-action@v2.5.2
with:
# Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
version: v1.31
version: v1.42.1
args: --timeout 10m
github-token: ${{ secrets.github_token }}
if: env.GIT_DIFF

.github/workflows/linter.yml vendored

@@ -11,6 +11,7 @@ on:
branches: [master]
paths:
- "**.md"
- "**.yml"
jobs:
build:
@@ -18,7 +19,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v2
uses: actions/checkout@v2.4.0
- name: Lint Code Base
uses: docker://github/super-linter:v3
env:
@@ -27,6 +28,5 @@ jobs:
DEFAULT_BRANCH: master
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
VALIDATE_MD: true
MARKDOWN_CONFIG_FILE: .markdownlint.yml
VALIDATE_OPAENAPI: true
VALIDATE_OPENAPI: true
VALIDATE_YAML: true

.github/workflows/proto-docker.yml vendored Normal file

@@ -0,0 +1,51 @@
name: Build & Push TM Proto Builder
on:
pull_request:
paths:
- "tools/proto/*"
push:
branches:
- master
paths:
- "tools/proto/*"
schedule:
# run this job once a month to recieve any go or buf updates
- cron: "* * 1 * *"
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@master
- name: Prepare
id: prep
run: |
DOCKER_IMAGE=tendermintdev/docker-build-proto
VERSION=noop
if [[ $GITHUB_REF == refs/tags/* ]]; then
VERSION=${GITHUB_REF#refs/tags/}
elif [[ $GITHUB_REF == refs/heads/* ]]; then
VERSION=$(echo ${GITHUB_REF#refs/heads/} | sed -r 's#/+#-#g')
if [ "${{ github.event.repository.default_branch }}" = "$VERSION" ]; then
VERSION=latest
fi
fi
TAGS="${DOCKER_IMAGE}:${VERSION}"
echo ::set-output name=tags::${TAGS}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Publish to Docker Hub
uses: docker/build-push-action@v2
with:
context: ./tools/proto
file: ./tools/proto/Dockerfile
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.prep.outputs.tags }}

.github/workflows/tests.yml vendored

@@ -36,7 +36,7 @@ jobs:
- name: install
run: make install install_abci
if: "env.GIT_DIFF != ''"
- uses: actions/cache@v2.1.2
- uses: actions/cache@v2.1.4
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
@@ -44,7 +44,7 @@ jobs:
${{ runner.os }}-go-
if: env.GIT_DIFF
# Cache binaries for use by other jobs
- uses: actions/cache@v2.1.2
- uses: actions/cache@v2.1.4
with:
path: ~/go/bin
key: ${{ runner.os }}-${{ github.sha }}-tm-binary
@@ -65,14 +65,14 @@ jobs:
**/**.go
go.mod
go.sum
- uses: actions/cache@v2.1.3
- uses: actions/cache@v2.1.4
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
if: env.GIT_DIFF
- uses: actions/cache@v2.1.2
- uses: actions/cache@v2.1.4
with:
path: ~/go/bin
key: ${{ runner.os }}-${{ github.sha }}-tm-binary
@@ -97,14 +97,14 @@ jobs:
**/**.go
go.mod
go.sum
- uses: actions/cache@v2.1.3
- uses: actions/cache@v2.1.4
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
if: env.GIT_DIFF
- uses: actions/cache@v2.1.2
- uses: actions/cache@v2.1.4
with:
path: ~/go/bin
key: ${{ runner.os }}-${{ github.sha }}-tm-binary
@@ -128,14 +128,14 @@ jobs:
**/**.go
go.mod
go.sum
- uses: actions/cache@v2.1.3
- uses: actions/cache@v2.1.4
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
if: env.GIT_DIFF
- uses: actions/cache@v2.1.2
- uses: actions/cache@v2.1.4
with:
path: ~/go/bin
key: ${{ runner.os }}-${{ github.sha }}-tm-binary

.gitignore vendored

@@ -45,5 +45,14 @@ addrbook.json
terraform.tfstate
terraform.tfstate.backup
terraform.tfstate.d
profile\.out
test/e2e/build
test/e2e/networks/*/
test/logs
test/maverick/maverick
test/p2p/data/
vendor
test/fuzz/**/corpus
test/fuzz/**/crashers
test/fuzz/**/suppressions
test/fuzz/**/*.zip

.golangci.yml

@@ -1,44 +1,45 @@
linters:
enable:
- asciicheck
- bodyclose
- deadcode
- depguard
- dogsled
- dupl
- errcheck
- exportloopref
# - funlen
# - gochecknoglobals
# - gochecknoinits
# - gocognit
- goconst
- gocritic
# - gocritic
# - gocyclo
# - godox
- gofmt
- goimports
- golint
- revive
- gosec
- gosimple
- govet
- ineffassign
# - interfacer
- lll
- misspell
# - maligned
# - misspell
- nakedret
- nolintlint
- prealloc
- scopelint
- staticcheck
- structcheck
- stylecheck
- typecheck
# - typecheck
- unconvert
# - unparam
- unused
- varcheck
# - whitespace
# - wsl
# - gocognit
- nolintlint
issues:
exclude-rules:
@@ -57,5 +58,13 @@ linters-settings:
suggest-new: true
# govet:
# check-shadowing: true
golint:
revive:
min-confidence: 0
maligned:
suggest-new: true
misspell:
locale: US
ignore-words:
- behaviour

.goreleaser.yml

@@ -1,4 +1,4 @@
project_name: Tendermint
project_name: tendermint
env:
# Require use of Go modules.

CHANGELOG.md

@@ -1,5 +1,243 @@
# Changelog
Friendly reminder, we have a [bug bounty program](https://hackerone.com/cosmos).
## v0.34.17
### BREAKING CHANGES
- CLI/RPC/Config
- [cli] [\#8081](https://github.com/tendermint/tendermint/issues/8081) make the reset command safe to use (@marbar3778).
### BUG FIXES
- [consensus] [\#8079](https://github.com/tendermint/tendermint/issues/8079) start the timeout ticker before replay (backport #7844) (@creachadair).
- [consensus] [\#7992](https://github.com/tendermint/tendermint/issues/7992) [\#7994](https://github.com/tendermint/tendermint/issues/7994) change lock handling in `handleMsg` and in the reactor to alleviate gossip issues during long ABCI calls (@williambanfield).
## v0.34.16
Special thanks to external contributors on this release: @yihuang
### BUG FIXES
- [consensus] [\#7617](https://github.com/tendermint/tendermint/issues/7617) calculate prevote message delay metric (backport #7551) (@williambanfield).
- [consensus] [\#7631](https://github.com/tendermint/tendermint/issues/7631) check proposal non-nil in prevote message delay metric (backport #7625) (@williambanfield).
- [statesync] [\#7885](https://github.com/tendermint/tendermint/issues/7885) statesync: assert app version matches (backport #7856) (@cmwaters).
- [statesync] [\#7881](https://github.com/tendermint/tendermint/issues/7881) fix app hash in state rollback (backport #7837) (@cmwaters).
- [cli] [#7837](https://github.com/tendermint/tendermint/pull/7837) fix app hash in state rollback. (@yihuang).
## v0.34.15
Special thanks to external contributors on this release: @thanethomson
### BUG FIXES
- [\#7368](https://github.com/tendermint/tendermint/issues/7368) cmd: add integration test for rollback functionality (@cmwaters).
- [\#7309](https://github.com/tendermint/tendermint/issues/7309) pubsub: Report a non-nil error when shutting down (fixes #7306).
- [\#7057](https://github.com/tendermint/tendermint/pull/7057) Import Postgres driver support for the psql indexer (@creachadair).
- [\#7106](https://github.com/tendermint/tendermint/pull/7106) Revert mutex change to ABCI Clients (@tychoish).
### IMPROVEMENTS
- [config] [\#7230](https://github.com/tendermint/tendermint/issues/7230) rpc: Add experimental config params to allow for subscription buffer size control (@thanethomson).
## v0.34.14
This release backports the `rollback` feature to allow recovery in the event of an incorrect app hash.
### FEATURES
- [\#6982](https://github.com/tendermint/tendermint/pull/6982) The tendermint binary now has built-in support for running the end-to-end test application (with state sync support) (@cmwaters).
- [cli] [#7033](https://github.com/tendermint/tendermint/pull/7033) Add a `rollback` command to roll back to the previous tendermint state. This may be useful in the event of a non-deterministic app hash or when reverting an upgrade. @cmwaters
### IMPROVEMENTS
- [\#7103](https://github.com/tendermint/tendermint/pull/7104) Remove IAVL dependency (backport of #6550) (@cmwaters)
### BUG FIXES
- [\#7057](https://github.com/tendermint/tendermint/pull/7057) Import Postgres driver support for the psql indexer (@creachadair).
- [ABCI] [\#7110](https://github.com/tendermint/tendermint/issues/7110) Revert "change client to use multi-reader mutexes (#6873)" (@tychoish).
## v0.34.13
This release backports improvements to state synchronization and ABCI
performance under concurrent load, and the PostgreSQL event indexer.
### IMPROVEMENTS
- [statesync] [\#6881](https://github.com/tendermint/tendermint/issues/6881) improvements to stateprovider logic (@cmwaters)
- [ABCI] [\#6873](https://github.com/tendermint/tendermint/issues/6873) change client to use multi-reader mutexes (@tychoish)
- [indexing] [\#6906](https://github.com/tendermint/tendermint/issues/6906) enable the PostgreSQL indexer sink (@creachadair)
## v0.34.12
Special thanks to external contributors on this release: @JayT106.
### FEATURES
- [rpc] [\#6717](https://github.com/tendermint/tendermint/pull/6717) introduce
`/genesis_chunked` rpc endpoint for handling large genesis files by chunking them (@tychoish)
### IMPROVEMENTS
- [rpc] [\#6825](https://github.com/tendermint/tendermint/issues/6825) Remove egregious INFO log from `ABCI#Query` RPC. (@alexanderbez)
### BUG FIXES
- [light] [\#6685](https://github.com/tendermint/tendermint/pull/6685) fix bug
with incorrectly handling contexts that would occasionally freeze state sync. (@cmwaters)
- [privval] [\#6748](https://github.com/tendermint/tendermint/issues/6748) Fix vote timestamp to prevent chain halt (@JayT106)
## v0.34.11
*June 18, 2021*
This release improves the robustness of statesync; tweaking channel priorities and timeouts and
adding two new parameters to the state sync config.
### BREAKING CHANGES
- Apps
- [Version] [\#6494](https://github.com/tendermint/tendermint/issues/6494) `TMCoreSemVer` is not required to be set as a ldflag any longer.
### IMPROVEMENTS
- [statesync] [\#6566](https://github.com/tendermint/tendermint/issues/6566) Allow state sync fetchers and request timeout to be configurable. (@alexanderbez)
- [statesync] [\#6378](https://github.com/tendermint/tendermint/issues/6378) Retry requests for snapshots and add a minimum discovery time (5s) for new snapshots. (@tychoish)
- [statesync] [\#6582](https://github.com/tendermint/tendermint/issues/6582) Increase chunk priority and add multiple retry chunk requests (@cmwaters)
### BUG FIXES
- [evidence] [\#6375](https://github.com/tendermint/tendermint/issues/6375) Fix bug with inconsistent LightClientAttackEvidence hashing (@cmwaters)
## v0.34.10
*April 14, 2021*
This release fixes a bug where peers would sometimes try to send messages
on incorrect channels. Special thanks to our friends at Oasis Labs for surfacing
this issue!
- [p2p/node] [\#6339](https://github.com/tendermint/tendermint/issues/6339) Fix bug with using custom channels (@cmwaters)
- [light] [\#6346](https://github.com/tendermint/tendermint/issues/6346) Correctly handle too high errors to improve client robustness (@cmwaters)
## v0.34.9
*April 8, 2021*
This release fixes a moderate severity security issue, Security Advisory Alderfly,
which impacts all networks that rely on Tendermint light clients.
Further details will be released once networks have upgraded.
This release also includes a small Go API-breaking change, to reduce panics in the RPC layer.
Special thanks to our external contributors on this release: @gchaincl
### BREAKING CHANGES
- Go API
- [rpc/jsonrpc/server] [\#6204](https://github.com/tendermint/tendermint/issues/6204) Modify `WriteRPCResponseHTTP(Error)` to return an error (@melekes)
### FEATURES
- [rpc] [\#6226](https://github.com/tendermint/tendermint/issues/6226) Index block events and expose a new RPC method, `/block_search`, to allow querying for blocks by `BeginBlock` and `EndBlock` events (@alexanderbez)
### BUG FIXES
- [rpc/jsonrpc/server] [\#6191](https://github.com/tendermint/tendermint/issues/6191) Correctly unmarshal `RPCRequest` when data is `null` (@melekes)
- [p2p] [\#6289](https://github.com/tendermint/tendermint/issues/6289) Fix "unknown channels" bug on CustomReactors (@gchaincl)
- [light/evidence] Adds logic to handle forward lunatic attacks (@cmwaters)
## v0.34.8
*February 25, 2021*
This release, in conjunction with [a fix in the Cosmos SDK](https://github.com/cosmos/cosmos-sdk/pull/8641),
introduces changes that should mean the logs are much, much quieter. 🎉
### IMPROVEMENTS
- [libs/log] [\#6174](https://github.com/tendermint/tendermint/issues/6174) Include timestamp (`ts` field; `time.RFC3339Nano` format) in JSON logger output (@melekes)
### BUG FIXES
- [abci] [\#6124](https://github.com/tendermint/tendermint/issues/6124) Fixes a panic condition during callback execution in `ReCheckTx` during high tx load. (@alexanderbez)
## v0.34.7
*February 18, 2021*
This release fixes a downstream security issue which impacts Cosmos SDK
users who are:
* Using Cosmos SDK v0.40.0 or later, AND
* Running validator nodes, AND
* Using the file-based `FilePV` implementation for their consensus keys
Users who fulfill all the above criteria were susceptible to leaking
private key material in the logs. All other users are unaffected.
The root cause was a discrepancy
between the Tendermint Core (untyped) logger and the Cosmos SDK (typed) logger:
Tendermint Core's logger automatically stringifies Go interfaces whenever possible;
however, the Cosmos SDK's logger uses reflection to log the fields within a Go interface.
The introduction of the typed logger meant that previously un-logged fields within
interfaces are now sometimes logged, including the private key material inside the
`FilePV` struct.
Tendermint Core v0.34.7 fixes this issue; however, we strongly recommend that all validators
use remote signer implementations instead of `FilePV` in production.
Thank you to @joe-bowman for his assistance with this vulnerability and a particular
shout-out to @marbar3778 for diagnosing it quickly.
### BUG FIXES
- [consensus] [\#6128](https://github.com/tendermint/tendermint/pull/6128) Remove privValidator from log call (@tessr)
## v0.34.6
*February 18, 2021*
_Tendermint Core v0.34.5 and v0.34.6 have been recalled due to build tooling problems._
## v0.34.4
*February 11, 2021*
This release includes a fix for a memory leak in the evidence reactor (see #6068, below).
All Tendermint clients are recommended to upgrade.
Thank you to our friends at Crypto.com for the initial report of this memory leak!
Special thanks to other external contributors on this release: @yayajacky, @odidev, @laniehei, and @c29r3!
### BUG FIXES
- [light] [\#6022](https://github.com/tendermint/tendermint/pull/6022) Fix a bug when the number of validators equals 100 (@melekes)
- [light] [\#6026](https://github.com/tendermint/tendermint/pull/6026) Fix a bug when height isn't provided for the rpc calls: `/commit` and `/validators` (@cmwaters)
- [evidence] [\#6068](https://github.com/tendermint/tendermint/pull/6068) Terminate broadcastEvidenceRoutine when peer is stopped (@melekes)
## v0.34.3
*January 19, 2021*
This release includes a fix for a high-severity security vulnerability,
a DoS-vector that impacted Tendermint Core v0.34.0-v0.34.2. For more details, see
[Security Advisory Mulberry](https://github.com/tendermint/tendermint/security/advisories/GHSA-p658-8693-mhvg)
or https://nvd.nist.gov/vuln/detail/CVE-2021-21271.
Tendermint Core v0.34.3 also updates GoGo Protobuf to 1.3.2 in order to pick up the fix for
https://nvd.nist.gov/vuln/detail/CVE-2021-3121.
### BUG FIXES
- [evidence] [[security fix]](https://github.com/tendermint/tendermint/security/advisories/GHSA-p658-8693-mhvg) Use correct source of evidence time (@cmwaters)
- [proto] [\#5886](https://github.com/tendermint/tendermint/pull/5889) Bump gogoproto to 1.3.2 (@marbar3778)
## v0.34.2
*January 12, 2021*
@@ -8,8 +246,6 @@ This release fixes a substantial bug in evidence handling where evidence could
sometimes be broadcast before the block containing that evidence was fully committed,
resulting in some nodes panicking when trying to verify said evidence.
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES
- Go API
@@ -33,8 +269,6 @@ disconnecting from this node. As a temporary remedy (until the mempool package
is refactored), the `max-batch-bytes` was disabled. Transactions will be sent
one by one without batching.
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES
- CLI/RPC/Config
@@ -63,8 +297,6 @@ Holy smokes, this is a big one! For a more reader-friendly overview of the chang
Special thanks to external contributors on this release: @james-ray, @fedekunze, @favadi, @alessio,
@joe-bowman, @cuonglm, @SadPencil and @dongsam.
And as always, friendly reminder, that we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES
- CLI/RPC/Config
@@ -305,9 +537,6 @@ as 2/3+ of the signatures are checked._
Special thanks to @njmurarka at Bluzelle Networks for reporting this.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [consensus] Do not allow signatures for a wrong block in commits (@ebuchman)
@@ -323,8 +552,6 @@ need to update your code.**
Special thanks to external contributors on this release: @tau3,
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- Go API
@@ -384,8 +611,6 @@ Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermi
Special thanks to external contributors on this release: @whylee259, @greg-szabo
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- Go API
@@ -472,9 +697,6 @@ Notes:
Special thanks to [fudongbai](https://hackerone.com/fudongbai) for finding
and reporting this.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [mempool] Reserve IDs in InitPeer instead of AddPeer (@tessr)
@@ -487,8 +709,6 @@ program](https://hackerone.com/tendermint).
Special thanks to external contributors on this release:
@antho1404, @michaelfig, @gterzian, @tau3, @Shivani912
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- CLI/RPC/Config
@@ -539,9 +759,6 @@ Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermi
Special thanks to external contributors on this release:
@princesinha19
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### FEATURES:
- [rpc] [\#3333](https://github.com/tendermint/tendermint/issues/3333) Add `order_by` to `/tx_search` endpoint, allowing to change default ordering from asc to desc (@princesinha19)
@@ -560,9 +777,6 @@ program](https://hackerone.com/tendermint).
Special thanks to external contributors on this release: @mrekucci, @PSalant726, @princesinha19, @greg-szabo, @dongsam, @cuonglm, @jgimeno, @yenkhoon
Friendly reminder, we have a [bug bounty
program.](https://hackerone.com/tendermint).
*January 14, 2020*
This release contains breaking changes to the `Block#Header`, specifically
@@ -791,9 +1005,6 @@ Notes:
Special thanks to [fudongbai](https://hackerone.com/fudongbai) for finding
and reporting this.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [mempool] Reserve IDs in InitPeer instead of AddPeer (@tessr)
@@ -805,9 +1016,6 @@ _January, 9, 2020_
Special thanks to external contributors on this release: @greg-szabo, @gregzaitsev, @yenkhoon
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### FEATURES:
- [rpc/lib] [\#4248](https://github.com/tendermint/tendermint/issues/4248) RPC client basic authentication support (@greg-szabo)
@@ -829,9 +1037,6 @@ program](https://hackerone.com/tendermint).
Special thanks to external contributors on this release: @erikgrinaker, @guagualvcha, @hsyis, @cosmostuba, @whunmr, @austinabell
Friendly reminder, we have a [bug bounty
program.](https://hackerone.com/tendermint).
### BREAKING CHANGES:
@@ -871,9 +1076,6 @@ identified and fixed here.
Special thanks to [elvishacker](https://hackerone.com/elvishacker) for finding
and reporting this.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- Go API
@@ -900,9 +1102,6 @@ accepting new peers and only allowing `ed25519` pubkeys.
Special thanks to [fudongbai](https://hackerone.com/fudongbai) for pointing
this out.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [p2p] [\#4030](https://github.com/tendermint/tendermint/issues/4030) Only allow ed25519 pubkeys when connecting
@@ -918,9 +1117,6 @@ All clients are recommended to upgrade. See
Special thanks to [fudongbai](https://hackerone.com/fudongbai) for discovering
and reporting this issue.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [p2p] [\#4030](https://github.com/tendermint/tendermint/issues/4030) Fix for panic on nil public key send to a peer
@@ -931,9 +1127,6 @@ program](https://hackerone.com/tendermint).
Special thanks to external contributors on this release: @jon-certik, @gracenoah, @PSalant726, @gchaincl
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- CLI/RPC/Config
@@ -969,9 +1162,6 @@ guide.
Special thanks to external contributors on this release:
@gchaincl, @bluele, @climber73
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### IMPROVEMENTS:
- [consensus] [\#3839](https://github.com/tendermint/tendermint/issues/3839) Reduce "Error attempting to add vote" message severity (Error -> Info)
@@ -992,9 +1182,6 @@ program](https://hackerone.com/tendermint).
Special thanks to external contributors on this release:
@ruseinov, @bluele, @guagualvcha
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- Go API
@@ -1034,9 +1221,6 @@ This release contains a minor enhancement to the ABCI and some breaking changes
- CheckTx requests include a `CheckTxType` enum that can be set to `Recheck` to indicate to the application that this transaction was already checked/validated and certain expensive operations (like checking signatures) can be skipped
- Removed various functions from `libs` pkgs
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- Go API
@@ -1082,9 +1266,6 @@ and the RPC, namely:
[docs](https://github.com/tendermint/tendermint/blob/60827f75623b92eff132dc0eff5b49d2025c591e/docs/spec/abci/abci.md#events)
- Bind RPC to localhost by default, not to the public interface [UPGRADING/RPC_Changes](./UPGRADING.md#rpc_changes)
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* CLI/RPC/Config
@@ -1185,9 +1366,6 @@ Notes:
Special thanks to [fudongbai](https://hackerone.com/fudongbai) for finding
and reporting this.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [mempool] Reserve IDs in InitPeer instead of AddPeer (@tessr)
@@ -1207,9 +1385,6 @@ identified and fixed here.
Special thanks to [elvishacker](https://hackerone.com/elvishacker) for finding
and reporting this.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
- Go API
@@ -1236,9 +1411,6 @@ accepting new peers and only allowing `ed25519` pubkeys.
Special thanks to [fudongbai](https://hackerone.com/fudongbai) for pointing
this out.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [p2p] [\#4030](https://github.com/tendermint/tendermint/issues/4030) Only allow ed25519 pubkeys when connecting
@@ -1254,9 +1426,6 @@ All clients are recommended to upgrade. See
Special thanks to [fudongbai](https://hackerone.com/fudongbai) for discovering
and reporting this issue.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### SECURITY:
- [p2p] [\#4030](https://github.com/tendermint/tendermint/issues/4030) Fix for panic on nil public key send to a peer
@@ -1551,9 +1720,6 @@ See the [v0.31.0
Milestone](https://github.com/tendermint/tendermint/milestone/19?closed=1) for
more details.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* CLI/RPC/Config
@@ -1684,7 +1850,7 @@ For more, see issues marked
This release also includes a fix to prevent Tendermint from including the same
piece of evidence in more than one block. This issue was reported by @chengwenxi in our
[bug bounty program](https://hackerone.com/tendermint).
[bug bounty program](https://hackerone.com/cosmos).
### BREAKING CHANGES:
@@ -1773,9 +1939,6 @@ This release contains two important fixes: one for p2p layer where we sometimes
were not closing connections and one for consensus layer where consensus with
no empty blocks (`create_empty_blocks = false`) could halt.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### IMPROVEMENTS:
- [pex] [\#3037](https://github.com/tendermint/tendermint/issues/3037) Only log "Reached max attempts to dial" once
- [rpc] [\#3159](https://github.com/tendermint/tendermint/issues/3159) Expose
@@ -1814,9 +1977,6 @@ While we are trying to stabilize the Block protocol to preserve compatibility
with old chains, there may be some final changes yet to come before Cosmos
launch as we continue to audit and test the software.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* CLI/RPC/Config
@@ -1864,9 +2024,6 @@ program](https://hackerone.com/tendermint).
Special thanks to external contributors on this release:
@HaoyangLiu
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BUG FIXES:
- [consensus] Fix consensus halt from proposing blocks with too much evidence
@@ -1994,9 +2151,6 @@ Special thanks to @dlguddus for discovering a [major
issue](https://github.com/tendermint/tendermint/issues/2718#issuecomment-440888677)
in the proposer selection algorithm.
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
This release is primarily about fixes to the proposer selection algorithm
in preparation for the [Cosmos Game of
Stakes](https://blog.cosmos.network/the-game-of-stakes-is-open-for-registration-83a404746ee6).
@@ -2058,9 +2212,6 @@ Special thanks to external contributors on this release:
@ackratos, @goolAdapter, @james-ray, @joe-bowman, @kostko,
@nagarajmanjunath, @tomtau
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### FEATURES:
- [rpc] [\#2747](https://github.com/tendermint/tendermint/issues/2747) Enable subscription to tags emitted from `BeginBlock`/`EndBlock` (@kostko)
@@ -2099,9 +2250,6 @@ program](https://hackerone.com/tendermint).
Special thanks to external contributors on this release:
@danil-lashin, @kevlubkcm, @krhubert, @srmo
Friendly reminder, we have a [bug bounty
program](https://hackerone.com/tendermint).
### BREAKING CHANGES:
* Go API
@@ -2145,8 +2293,6 @@ program](https://hackerone.com/tendermint).
Special thanks to external contributors on this release: @hleb-albau, @zhuzeyu
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### FEATURES:
- [rpc] [\#2582](https://github.com/tendermint/tendermint/issues/2582) Enable CORS on RPC API (@hleb-albau)
@@ -2164,8 +2310,6 @@ Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermi
Special thanks to external contributors on this release: @katakonst
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
### IMPROVEMENTS:
- [consensus] [\#2704](https://github.com/tendermint/tendermint/issues/2704) Simplify valid POL round logic
@@ -2193,7 +2337,7 @@ Special thanks to external contributors on this release:
@james-ray, @overbool, @phymbert, @Slamper, @Uzair1995, @yutianwu.
Special thanks to @Slamper for a series of bug reports in our [bug bounty
program](https://hackerone.com/tendermint) which are fixed in this release.
program](https://hackerone.com/cosmos) which are fixed in this release.
This release is primarily about adding Version fields to various data structures,
optimizing consensus messages for signing and verification in
@@ -2339,8 +2483,6 @@ It also addresses some issues found via security audit, removes various unused
functions from `libs/common`, and implements
[ADR-012](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-012-peer-transport.md).
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
BREAKING CHANGES:
* CLI/RPC/Config

CHANGELOG_PENDING.md

@@ -1,6 +1,6 @@
# Unreleased Changes
## v0.34.3
## v0.34.18
Special thanks to external contributors on this release:

CONTRIBUTING.md

@@ -108,24 +108,7 @@ We use [Protocol Buffers](https://developers.google.com/protocol-buffers) along
For linting and checking breaking changes, we use [buf](https://buf.build/). If you would like to run linting and check if the changes you have made are breaking then you will need to have docker running locally. Then the linting cmd will be `make proto-lint` and the breaking changes check will be `make proto-check-breaking`.
There are two ways to generate your proto stubs.
1. Use Docker, pull an image that will generate your proto stubs with no need to install anything. `make proto-gen-docker`
2. Run `make proto-gen` after installing `protoc` and gogoproto, you can do this by running `make protobuf`.
### Installation Instructions
To install `protoc`, download an appropriate release (<https://github.com/protocolbuffers/protobuf>) and then move the provided binaries into your PATH (follow instructions in README included with the download).
To install `gogoproto`, do the following:
```sh
go get github.com/gogo/protobuf/gogoproto
cd $GOPATH/pkg/mod/github.com/gogo/protobuf@v1.3.1 # or wherever go get installs things
make install
```
You should now be able to run `make proto-gen` from inside the root Tendermint directory to generate new files from proto files.
We use [Docker](https://www.docker.com/) to generate the protobuf stubs. To generate the stubs yourself, make sure docker is running then run `make proto-gen`.
## Vagrant
@@ -257,6 +240,7 @@ Each PR should have one commit once it lands on `master`; this can be accomplish
release, and add the github aliases of external contributors to the top of
the changelog. To lookup an alias from an email, try `bash ./scripts/authors.sh <email>`
- Reset the `CHANGELOG_PENDING.md`
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
- Make sure all significant breaking changes are covered in `UPGRADING.md`
@@ -326,12 +310,86 @@ have distinct names from the tags/release names.)
## Testing
All repos should be hooked up to [CircleCI](https://circleci.com/).
### Unit tests
If they have `.go` files in the root directory, they will be automatically
tested by CircleCI using `go test -v -race ./...`. If not, they will need a
`circle.yml`. Ideally, every repo has a `Makefile` that defines `make test` and
includes its continuous integration status using a badge in the `README.md`.
Unit tests are located in `_test.go` files as directed by [the Go testing
package](https://golang.org/pkg/testing/). If you're adding or removing a
function, please check there's a `TestType_Method` test for it.
Run: `make test`
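As a quick illustration of the `TestType_Method` convention, here is a hedged, self-contained sketch; `Counter` and `Inc` are hypothetical names for illustration only, not types from the Tendermint codebase:

```go
package example

import "testing"

// Counter is a hypothetical type used only to illustrate test naming.
type Counter struct{ n int }

// Inc increments the counter by one.
func (c *Counter) Inc() { c.n++ }

// TestCounter_Inc follows the TestType_Method convention: one focused test,
// named after the type and the method it exercises.
func TestCounter_Inc(t *testing.T) {
	c := &Counter{}
	c.Inc()
	if c.n != 1 {
		t.Fatalf("expected n=1, got %d", c.n)
	}
}
```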
### Integration tests
Integration tests are also located in `_test.go` files. What differentiates
them is a more complicated setup, which usually involves setting up two or more
components.
Run: `make test_integrations`
### End-to-end tests
End-to-end tests are used to verify a fully integrated Tendermint network.
See [README](./test/e2e/README.md) for details.
Run:
```sh
cd test/e2e && \
make && \
./build/runner -f networks/ci.toml
```
### Maverick
**If you're changing the code in `consensus` package, please make sure to
replicate all the changes in `./test/maverick/consensus`**. Maverick is a
byzantine node used to assert that the validator gets punished for malicious
behavior.
See [README](./test/maverick/README.md) for details.
### Model-based tests (ADVANCED)
*NOTE: if you're just submitting your first PR, you most probably (99.9%)
won't need to touch these.*
For components that have been [formally
verified](https://en.wikipedia.org/wiki/Formal_verification) using
[TLA+](https://en.wikipedia.org/wiki/TLA%2B), it may be possible to generate
tests using a combination of the [Apalache Model
Checker](https://apalache.informal.systems/) and [tendermint-rs testgen
util](https://github.com/informalsystems/tendermint-rs/tree/master/testgen).
Now, I know there's a lot to take in. If you want to learn more, check out
[this video](https://www.youtube.com/watch?v=aveoIMphzW8) by Andrey Kupriyanov
& Igor Konnov.
At the moment, we have model-based tests for the light client, located in the
`./light/mbt` directory.
Run: `cd light/mbt && go test`
### Fuzz tests (ADVANCED)
*NOTE: if you're just submitting your first PR, you most probably (99.9%)
won't need to touch these.*
[Fuzz tests](https://en.wikipedia.org/wiki/Fuzzing) can be found inside the
`./test/fuzz` directory. See [README.md](./test/fuzz/README.md) for details.
Run: `cd test/fuzz && make fuzz-{PACKAGE-COMPONENT}`
### Jepsen tests (ADVANCED)
*NOTE: if you're just submitting your first PR, you most probably (99.9%)
won't need to touch these.*
[Jepsen](http://jepsen.io/) tests are used to verify the
[linearizability](https://jepsen.io/consistency/models/linearizable) property
of the Tendermint consensus. They are located in a separate repository
-> <https://github.com/tendermint/jepsen>. Please refer to its README for more
information.
### RPC Testing
@@ -344,4 +402,8 @@ make build-linux build-contract-tests-hooks
make contract-tests
```
This command will pop up a network and check every endpoint against what has been documented
**WARNING: these are currently broken due to <https://github.com/apiaryio/dredd>
not supporting complete OpenAPI 3**.
This command will pop up a network and check every endpoint against what has
been documented.

View File

@@ -1,4 +1,14 @@
FROM alpine:3.9
# stage 1 Generate Tendermint Binary
FROM golang:1.15-alpine as builder
RUN apk update && \
apk upgrade && \
apk --no-cache add make
COPY / /tendermint
WORKDIR /tendermint
RUN make build-linux
# stage 2
FROM golang:1.15-alpine
LABEL maintainer="hello@tendermint.com"
# Tendermint will be looking for the genesis file in /tendermint/config/genesis.json
@@ -29,15 +39,14 @@ EXPOSE 26656 26657 26660
STOPSIGNAL SIGTERM
ARG BINARY=tendermint
COPY $BINARY /usr/bin/tendermint
COPY --from=builder /tendermint/build/tendermint /usr/bin/tendermint
# You can overwrite these before the first run to influence
# config.json and genesis.json. Additionally, you can override
# CMD to add parameters to `tendermint node`.
ENV PROXY_APP=kvstore MONIKER=dockernode CHAIN_ID=dockerchain
COPY ./docker-entrypoint.sh /usr/local/bin/
COPY ./DOCKER/docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["node"]

View File

@@ -2,7 +2,14 @@ PACKAGES=$(shell go list ./...)
OUTPUT?=build/tendermint
BUILD_TAGS?=tendermint
VERSION := $(shell git describe --always)
# If building a release, please check out the version tag to get the correct version setting
ifneq ($(shell git symbolic-ref -q --short HEAD),)
VERSION := unreleased-$(shell git symbolic-ref -q --short HEAD)-$(shell git rev-parse HEAD)
else
VERSION := $(shell git describe)
endif
LD_FLAGS = -X github.com/tendermint/tendermint/version.TMCoreSemVer=$(VERSION)
BUILD_FLAGS = -mod=readonly -ldflags "$(LD_FLAGS)"
HTTPS_GIT := https://github.com/tendermint/tendermint.git
@@ -49,8 +56,6 @@ LD_FLAGS += $(LDFLAGS)
all: check build test install
.PHONY: all
# The below include contains the tools.
include tools.mk
include tests.mk
###############################################################################
@@ -73,18 +78,10 @@ proto-all: proto-gen proto-lint proto-check-breaking
.PHONY: proto-all
proto-gen:
## If you get the following error,
## "error while loading shared libraries: libprotobuf.so.14: cannot open shared object file: No such file or directory"
## See https://stackoverflow.com/a/25518702
## Note the $< here is substituted for the %.proto
## Note the $@ here is substituted for the %.pb.go
@sh scripts/protocgen.sh
.PHONY: proto-gen
proto-gen-docker:
@docker pull -q tendermintdev/docker-build-proto
@echo "Generating Protobuf files"
@docker run -v $(shell pwd):/workspace --workdir /workspace tendermintdev/docker-build-proto sh ./scripts/protocgen.sh
.PHONY: proto-gen-docker
.PHONY: proto-gen
proto-lint:
@$(DOCKER_BUF) check lint --error-format=json
@@ -189,12 +186,12 @@ DESTINATION = ./index.html.md
###############################################################################
build-docs:
cd docs && \
while read p; do \
(git checkout $${p} . && npm install && VUEPRESS_BASE="/$${p}/" npm run build) ; \
mkdir -p ~/output/$${p} ; \
cp -r .vuepress/dist/* ~/output/$${p}/ ; \
cp ~/output/$${p}/index.html ~/output ; \
@cd docs && \
while read -r branch path_prefix; do \
(git checkout $${branch} && npm ci && VUEPRESS_BASE="/$${path_prefix}/" npm run build) ; \
mkdir -p ~/output/$${path_prefix} ; \
cp -r .vuepress/dist/* ~/output/$${path_prefix}/ ; \
cp ~/output/$${path_prefix}/index.html ~/output ; \
done < versions ;
.PHONY: build-docs
@@ -221,7 +218,7 @@ build-docker: build-linux
###############################################################################
# Build linux binary on other platforms
build-linux: tools
build-linux:
GOOS=linux GOARCH=amd64 $(MAKE) build
.PHONY: build-linux

View File

@@ -44,6 +44,9 @@ To report a security vulnerability, see our [bug bounty
program](https://hackerone.com/tendermint).
For examples of the kinds of bugs we're looking for, see [our security policy](SECURITY.md)
We also maintain a dedicated mailing list for security updates. We will only ever use this mailing list
to notify you of vulnerabilities and fixes in Tendermint Core. You can subscribe [here](http://eepurl.com/gZ5hQD).
## Minimum requirements
| Requirement | Notes |

View File

@@ -74,12 +74,8 @@ func NewClient(addr, transport string, mustConnect bool) (client Client, err err
return
}
//----------------------------------------
type Callback func(*types.Request, *types.Response)
//----------------------------------------
type ReqRes struct {
*types.Request
*sync.WaitGroup
@@ -101,34 +97,50 @@ func NewReqRes(req *types.Request) *ReqRes {
}
}
// Sets the callback for this ReqRes atomically.
// If reqRes is already done, calls cb immediately.
// NOTE: reqRes.cb should not change if reqRes.done.
// NOTE: only one callback is supported.
func (reqRes *ReqRes) SetCallback(cb func(res *types.Response)) {
reqRes.mtx.Lock()
// SetCallback sets the callback. If reqRes is already done, it will call the cb
// immediately. Note, reqRes.cb should not change if reqRes.done and only one
// callback is supported.
func (r *ReqRes) SetCallback(cb func(res *types.Response)) {
r.mtx.Lock()
if reqRes.done {
reqRes.mtx.Unlock()
cb(reqRes.Response)
if r.done {
r.mtx.Unlock()
cb(r.Response)
return
}
reqRes.cb = cb
reqRes.mtx.Unlock()
r.cb = cb
r.mtx.Unlock()
}
func (reqRes *ReqRes) GetCallback() func(*types.Response) {
reqRes.mtx.Lock()
defer reqRes.mtx.Unlock()
return reqRes.cb
// InvokeCallback invokes a thread-safe execution of the configured callback
// if non-nil.
func (r *ReqRes) InvokeCallback() {
r.mtx.Lock()
defer r.mtx.Unlock()
if r.cb != nil {
r.cb(r.Response)
}
}
// NOTE: it should be safe to read reqRes.cb without locks after this.
func (reqRes *ReqRes) SetDone() {
reqRes.mtx.Lock()
reqRes.done = true
reqRes.mtx.Unlock()
// GetCallback returns the configured callback of the ReqRes object, which may
// be nil. Note that it is not safe to call this concurrently when the request
// is marked done and SetCallback is called before GetCallback, as that would
// invoke the callback twice and create a potential race condition.
//
// ref: https://github.com/tendermint/tendermint/issues/5439
func (r *ReqRes) GetCallback() func(*types.Response) {
r.mtx.Lock()
defer r.mtx.Unlock()
return r.cb
}
// SetDone marks the ReqRes object as done.
func (r *ReqRes) SetDone() {
r.mtx.Lock()
r.done = true
r.mtx.Unlock()
}
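For orientation, here is a rough usage sketch of the revised `ReqRes` API shown above. It is not part of the diff; it assumes the `client` package context and uses `types.ToRequestEcho`/`types.ToResponseEcho` from `abci/types` purely for illustration:

```go
package client

import (
	"fmt"

	"github.com/tendermint/tendermint/abci/types"
)

// callbackSketch walks through the intended call order of the revised API.
func callbackSketch() {
	reqRes := NewReqRes(types.ToRequestEcho("ping"))

	// Register the callback; if the request were already done,
	// SetCallback would invoke it immediately.
	reqRes.SetCallback(func(res *types.Response) {
		fmt.Println("response received:", res)
	})

	// Transport side: record the response, mark the request done, then
	// fire the callback exactly once under the ReqRes mutex.
	reqRes.Response = types.ToResponseEcho("pong")
	reqRes.SetDone()
	reqRes.InvokeCallback()
}
```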
func waitGroup1() (wg *sync.WaitGroup) {

View File

@@ -90,6 +90,7 @@ func (cli *grpcClient) OnStart() error {
RETRY_LOOP:
for {
//nolint:staticcheck // SA1019 Existing use of deprecated but supported dial option.
conn, err := grpc.Dial(cli.addr, grpc.WithInsecure(), grpc.WithContextDialer(dialerFunc))
if err != nil {
if cli.mustConnect {

View File

@@ -20,6 +20,12 @@ type localClient struct {
Callback
}
var _ Client = (*localClient)(nil)
// NewLocalClient creates a local client, which will be directly calling the
// methods of the given app.
//
// Both Async and Sync methods ignore the given context.Context parameter.
func NewLocalClient(mtx *tmsync.Mutex, app types.Application) Client {
if mtx == nil {
mtx = new(tmsync.Mutex)

View File

@@ -212,9 +212,7 @@ func (cli *socketClient) didRecvResponse(res *types.Response) error {
//
// NOTE: It is possible this callback isn't set on the reqres object at this
// point, in which case it will be called later, when it is set.
if cb := reqres.GetCallback(); cb != nil {
cb(res)
}
reqres.InvokeCallback()
return nil
}

View File

@@ -130,7 +130,7 @@ func dialerFunc(ctx context.Context, addr string) (net.Conn, error) {
func testGRPCSync(t *testing.T, app types.ABCIApplicationServer) {
numDeliverTxs := 2000
socketFile := fmt.Sprintf("test-%08x.sock", rand.Int31n(1<<30))
socketFile := fmt.Sprintf("/tmp/test-%08x.sock", rand.Int31n(1<<30))
defer os.Remove(socketFile)
socket := fmt.Sprintf("unix://%v", socketFile)
@@ -148,6 +148,7 @@ func testGRPCSync(t *testing.T, app types.ABCIApplicationServer) {
})
// Connect to the socket
//nolint:staticcheck // SA1019 Existing use of deprecated but supported dial option.
conn, err := grpc.Dial(socket, grpc.WithInsecure(), grpc.WithContextDialer(dialerFunc))
if err != nil {
t.Fatalf("Error dialing GRPC server: %v", err.Error())

View File

@@ -293,7 +293,7 @@ func TestClientServer(t *testing.T) {
// set up grpc app
kvstore = NewApplication()
gclient, gserver, err := makeGRPCClientServer(kvstore, "kvstore-grpc")
gclient, gserver, err := makeGRPCClientServer(kvstore, "/tmp/kvstore-grpc")
require.NoError(t, err)
t.Cleanup(func() {

View File

@@ -42,7 +42,7 @@ func (r ResponseQuery) IsErr() bool {
}
//---------------------------------------------------------------------------
// override JSON marshalling so we emit defaults (ie. disable omitempty)
// override JSON marshaling so we emit defaults (ie. disable omitempty)
var (
jsonpbMarshaller = jsonpb.Marshaler{

View File

@@ -8315,10 +8315,7 @@ func (m *Request) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -8400,10 +8397,7 @@ func (m *RequestEcho) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -8453,10 +8447,7 @@ func (m *RequestFlush) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -8576,10 +8567,7 @@ func (m *RequestInfo) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -8693,10 +8681,7 @@ func (m *RequestSetOption) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -8934,10 +8919,7 @@ func (m *RequestInitChain) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -9092,10 +9074,7 @@ func (m *RequestQuery) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -9279,10 +9258,7 @@ func (m *RequestBeginBlock) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -9385,10 +9361,7 @@ func (m *RequestCheckTx) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -9472,10 +9445,7 @@ func (m *RequestDeliverTx) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -9544,10 +9514,7 @@ func (m *RequestEndBlock) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -9597,10 +9564,7 @@ func (m *RequestCommit) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -9650,10 +9614,7 @@ func (m *RequestListSnapshots) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -9773,10 +9734,7 @@ func (m *RequestOfferSnapshot) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -9883,10 +9841,7 @@ func (m *RequestLoadSnapshotChunk) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -10021,10 +9976,7 @@ func (m *RequestApplySnapshotChunk) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -10634,10 +10586,7 @@ func (m *Response) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -10719,10 +10668,7 @@ func (m *ResponseException) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -10804,10 +10750,7 @@ func (m *ResponseEcho) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -10857,10 +10800,7 @@ func (m *ResponseFlush) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -11046,10 +10986,7 @@ func (m *ResponseInfo) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -11182,10 +11119,7 @@ func (m *ResponseSetOption) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -11339,10 +11273,7 @@ func (m *ResponseInitChain) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -11649,10 +11580,7 @@ func (m *ResponseQuery) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -11736,10 +11664,7 @@ func (m *ResponseBeginBlock) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -12010,10 +11935,7 @@ func (m *ResponseCheckTx) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -12284,10 +12206,7 @@ func (m *ResponseDeliverTx) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -12441,10 +12360,7 @@ func (m *ResponseEndBlock) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -12547,10 +12463,7 @@ func (m *ResponseCommit) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -12634,10 +12547,7 @@ func (m *ResponseListSnapshots) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -12706,10 +12616,7 @@ func (m *ResponseOfferSnapshot) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -12793,10 +12700,7 @@ func (m *ResponseLoadSnapshotChunk) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -12973,10 +12877,7 @@ func (m *ResponseApplySnapshotChunk) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -13170,10 +13071,7 @@ func (m *ConsensusParams) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -13261,10 +13159,7 @@ func (m *BlockParams) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -13367,10 +13262,7 @@ func (m *LastCommitInfo) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -13486,10 +13378,7 @@ func (m *Event) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -13627,10 +13516,7 @@ func (m *EventAttribute) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -13785,10 +13671,7 @@ func (m *TxResult) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -13891,10 +13774,7 @@ func (m *Validator) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -13996,10 +13876,7 @@ func (m *ValidatorUpdate) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -14102,10 +13979,7 @@ func (m *VoteInfo) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -14278,10 +14152,7 @@ func (m *Evidence) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
@@ -14456,10 +14327,7 @@ func (m *Snapshot) Unmarshal(dAtA []byte) error {
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) < 0 {
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {

View File

@@ -59,19 +59,22 @@ func (mp mockPeer) TrySend(byte, []byte) bool { return true }
func (mp mockPeer) Set(string, interface{}) {}
func (mp mockPeer) Get(string) interface{} { return struct{}{} }
//nolint:unused
// nolint:unused // ignore
type mockBlockStore struct {
blocks map[int64]*types.Block
}
// nolint:unused // ignore
func (ml *mockBlockStore) Height() int64 {
return int64(len(ml.blocks))
}
// nolint:unused // ignore
func (ml *mockBlockStore) LoadBlock(height int64) *types.Block {
return ml.blocks[height]
}
// nolint:unused // ignore
func (ml *mockBlockStore) SaveBlock(block *types.Block, part *types.PartSet, commit *types.Commit) {
ml.blocks[block.Height] = block
}

View File

@@ -8,7 +8,6 @@ import (
"net/http"
"os"
"path/filepath"
"regexp"
"strings"
"time"
@@ -16,7 +15,6 @@ import (
dbm "github.com/tendermint/tm-db"
"github.com/tendermint/tendermint/crypto/merkle"
"github.com/tendermint/tendermint/libs/log"
tmmath "github.com/tendermint/tendermint/libs/math"
tmos "github.com/tendermint/tendermint/libs/os"
@@ -24,7 +22,6 @@ import (
lproxy "github.com/tendermint/tendermint/light/proxy"
lrpc "github.com/tendermint/tendermint/light/rpc"
dbs "github.com/tendermint/tendermint/light/store/db"
rpchttp "github.com/tendermint/tendermint/rpc/client/http"
rpcserver "github.com/tendermint/tendermint/rpc/jsonrpc/server"
)
@@ -204,11 +201,6 @@ func runProxy(cmd *cobra.Command, args []string) error {
return err
}
rpcClient, err := rpchttp.New(primaryAddr, "/websocket")
if err != nil {
return fmt.Errorf("http client for %s: %w", primaryAddr, err)
}
cfg := rpcserver.DefaultConfig()
cfg.MaxBodyBytes = config.RPC.MaxBodyBytes
cfg.MaxHeaderBytes = config.RPC.MaxHeaderBytes
@@ -220,12 +212,11 @@ func runProxy(cmd *cobra.Command, args []string) error {
cfg.WriteTimeout = config.RPC.TimeoutBroadcastTxCommit + 1*time.Second
}
p := lproxy.Proxy{
Addr: listenAddr,
Config: cfg,
Client: lrpc.NewClient(rpcClient, c, lrpc.KeyPathFn(defaultMerkleKeyPathFn())),
Logger: logger,
p, err := lproxy.NewProxy(c, listenAddr, primaryAddr, cfg, logger, lrpc.KeyPathFn(lrpc.DefaultMerkleKeyPathFn()))
if err != nil {
return err
}
// Stop upon receiving SIGTERM or CTRL-C.
tmos.TrapSignal(logger, func() {
p.Listener.Close()
@@ -264,21 +255,3 @@ func saveProviders(db dbm.DB, primaryAddr, witnessesAddrs string) error {
}
return nil
}
func defaultMerkleKeyPathFn() lrpc.KeyPathFunc {
// regexp for extracting store name from /abci_query path
storeNameRegexp := regexp.MustCompile(`\/store\/(.+)\/key`)
return func(path string, key []byte) (merkle.KeyPath, error) {
matches := storeNameRegexp.FindStringSubmatch(path)
if len(matches) != 2 {
return nil, fmt.Errorf("can't find store name in %s using %s", path, storeNameRegexp)
}
storeName := matches[1]
kp := merkle.KeyPath{}
kp = kp.AppendKey([]byte(storeName), merkle.KeyEncodingURL)
kp = kp.AppendKey(key, merkle.KeyEncodingURL)
return kp, nil
}
}

View File

@@ -2,6 +2,7 @@ package commands
import (
"os"
"path/filepath"
"github.com/spf13/cobra"
@@ -16,12 +17,22 @@ var ResetAllCmd = &cobra.Command{
Use: "unsafe-reset-all",
Aliases: []string{"unsafe_reset_all"},
Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state",
Run: resetAll,
RunE: resetAllCmd,
PreRun: deprecateSnakeCase,
}
var keepAddrBook bool
// ResetStateCmd removes the database of the specified Tendermint core instance.
var ResetStateCmd = &cobra.Command{
Use: "reset-state",
Short: "Remove all the data and WAL",
RunE: func(cmd *cobra.Command, args []string) error {
return resetState(config.DBDir(), logger)
},
PreRun: deprecateSnakeCase,
}
func init() {
ResetAllCmd.Flags().BoolVar(&keepAddrBook, "keep-addr-book", false, "keep the address book intact")
}
@@ -37,8 +48,8 @@ var ResetPrivValidatorCmd = &cobra.Command{
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetAll(cmd *cobra.Command, args []string) {
ResetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidatorKeyFile(),
func resetAllCmd(cmd *cobra.Command, args []string) error {
return resetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidatorKeyFile(),
config.PrivValidatorStateFile(), logger)
}
@@ -48,9 +59,8 @@ func resetPrivValidator(cmd *cobra.Command, args []string) {
resetFilePV(config.PrivValidatorKeyFile(), config.PrivValidatorStateFile(), logger)
}
// ResetAll removes address book files plus all data, and resets the privValidator data.
// Exported so other CLI tools can use it.
func ResetAll(dbDir, addrBookFile, privValKeyFile, privValStateFile string, logger log.Logger) {
// resetAll removes address book files plus all data, and resets the privValidator data.
func resetAll(dbDir, addrBookFile, privValKeyFile, privValStateFile string, logger log.Logger) error {
if keepAddrBook {
logger.Info("The address book remains intact")
} else {
@@ -61,11 +71,71 @@ func ResetAll(dbDir, addrBookFile, privValKeyFile, privValStateFile string, logg
} else {
logger.Error("Error removing all blockchain history", "dir", dbDir, "err", err)
}
// recreate the dbDir since the privVal state needs to live there
resetFilePV(privValKeyFile, privValStateFile, logger)
return nil
}
// resetState removes address book files plus all databases.
func resetState(dbDir string, logger log.Logger) error {
blockdb := filepath.Join(dbDir, "blockstore.db")
state := filepath.Join(dbDir, "state.db")
wal := filepath.Join(dbDir, "cs.wal")
evidence := filepath.Join(dbDir, "evidence.db")
txIndex := filepath.Join(dbDir, "tx_index.db")
peerstore := filepath.Join(dbDir, "peerstore.db")
if tmos.FileExists(blockdb) {
if err := os.RemoveAll(blockdb); err == nil {
logger.Info("Removed all blockstore.db", "dir", blockdb)
} else {
logger.Error("error removing all blockstore.db", "dir", blockdb, "err", err)
}
}
if tmos.FileExists(state) {
if err := os.RemoveAll(state); err == nil {
logger.Info("Removed all state.db", "dir", state)
} else {
logger.Error("error removing all state.db", "dir", state, "err", err)
}
}
if tmos.FileExists(wal) {
if err := os.RemoveAll(wal); err == nil {
logger.Info("Removed all cs.wal", "dir", wal)
} else {
logger.Error("error removing all cs.wal", "dir", wal, "err", err)
}
}
if tmos.FileExists(evidence) {
if err := os.RemoveAll(evidence); err == nil {
logger.Info("Removed all evidence.db", "dir", evidence)
} else {
logger.Error("error removing all evidence.db", "dir", evidence, "err", err)
}
}
if tmos.FileExists(txIndex) {
if err := os.RemoveAll(txIndex); err == nil {
logger.Info("Removed tx_index.db", "dir", txIndex)
} else {
logger.Error("error removing tx_index.db", "dir", txIndex, "err", err)
}
}
if tmos.FileExists(peerstore) {
if err := os.RemoveAll(peerstore); err == nil {
logger.Info("Removed peerstore.db", "dir", peerstore)
} else {
logger.Error("error removing peerstore.db", "dir", peerstore, "err", err)
}
}
if err := tmos.EnsureDir(dbDir, 0700); err != nil {
logger.Error("unable to recreate dbDir", "err", err)
}
resetFilePV(privValKeyFile, privValStateFile, logger)
return nil
}
func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger) {

View File

@@ -0,0 +1,73 @@
package commands
import (
"fmt"
"github.com/spf13/cobra"
dbm "github.com/tendermint/tm-db"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/store"
)
var RollbackStateCmd = &cobra.Command{
Use: "rollback",
Short: "rollback tendermint state by one height",
Long: `
A state rollback is performed to recover from an incorrect application state transition,
when Tendermint has persisted an incorrect app hash and is thus unable to make
progress. Rollback overwrites a state at height n with the state at height n - 1.
The application should also roll back to height n - 1. No blocks are removed, so upon
restarting Tendermint the transactions in block n will be re-executed against the
application.
`,
RunE: func(cmd *cobra.Command, args []string) error {
height, hash, err := RollbackState(config)
if err != nil {
return fmt.Errorf("failed to rollback state: %w", err)
}
fmt.Printf("Rolled back state to height %d and hash %v", height, hash)
return nil
},
}
// RollbackState takes the state at the current height n and overwrites it with the state
// at height n - 1. Note that state here refers to Tendermint state, not application state.
// Returns the latest state height and app hash alongside an error if there was one.
func RollbackState(config *cfg.Config) (int64, []byte, error) {
// use the parsed config to load the block and state store
blockStore, stateStore, err := loadStateAndBlockStore(config)
if err != nil {
return -1, nil, err
}
defer func() {
_ = blockStore.Close()
_ = stateStore.Close()
}()
// rollback the last state
return state.Rollback(blockStore, stateStore)
}
func loadStateAndBlockStore(config *cfg.Config) (*store.BlockStore, state.Store, error) {
dbType := dbm.BackendType(config.DBBackend)
// Get BlockStore
blockStoreDB, err := dbm.NewDB("blockstore", dbType, config.DBDir())
if err != nil {
return nil, nil, err
}
blockStore := store.NewBlockStore(blockStoreDB)
// Get StateStore
stateDB, err := dbm.NewDB("state", dbType, config.DBDir())
if err != nil {
return nil, nil, err
}
stateStore := state.NewStore(stateDB)
return blockStore, stateStore, nil
}
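For illustration, a minimal sketch of driving this rollback programmatically, mirroring what `tendermint rollback` does; the home directory path is an example, and error handling is kept deliberately short:

```go
package main

import (
	"fmt"
	"log"

	"github.com/tendermint/tendermint/cmd/tendermint/commands"
	cfg "github.com/tendermint/tendermint/config"
)

func main() {
	// Example home directory; point this at the node's actual root.
	config := cfg.DefaultConfig()
	config.SetRoot("/home/user/.tendermint")

	// Overwrite the state at height n with the state at height n - 1.
	height, hash, err := commands.RollbackState(config)
	if err != nil {
		log.Fatalf("failed to rollback state: %v", err)
	}
	fmt.Printf("Rolled back state to height %d and hash %X\n", height, hash)
}
```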

View File

@@ -51,20 +51,25 @@ var RootCmd = &cobra.Command{
if cmd.Name() == VersionCmd.Name() {
return nil
}
config, err = ParseConfig()
if err != nil {
return err
}
if config.LogFormat == cfg.LogFormatJSON {
logger = log.NewTMJSONLogger(log.NewSyncWriter(os.Stdout))
}
logger, err = tmflags.ParseLogLevel(config.LogLevel, logger, cfg.DefaultLogLevel())
logger, err = tmflags.ParseLogLevel(config.LogLevel, logger, cfg.DefaultLogLevel)
if err != nil {
return err
}
if viper.GetBool(cli.TraceFlag) {
logger = log.NewTracingLogger(logger)
}
logger = logger.With("module", "main")
return nil
},

View File

@@ -46,9 +46,7 @@ func AddNodeFlags(cmd *cobra.Command) {
"proxy_app",
config.ProxyApp,
"proxy app address, or one of: 'kvstore',"+
" 'persistent_kvstore',"+
" 'counter',"+
" 'counter_serial' or 'noop' for local testing.")
" 'persistent_kvstore', 'counter', 'e2e' or 'noop' for local testing.")
cmd.Flags().String("abci", config.ABCI, "specify abci transport (socket | grpc)")
// rpc flags

View File

@@ -22,11 +22,13 @@ func main() {
cmd.ReplayConsoleCmd,
cmd.ResetAllCmd,
cmd.ResetPrivValidatorCmd,
cmd.ResetStateCmd,
cmd.ShowValidatorCmd,
cmd.TestnetFilesCmd,
cmd.ShowNodeIDCmd,
cmd.GenNodeKeyCmd,
cmd.VersionCmd,
cmd.RollbackStateCmd,
debug.DebugCmd,
cli.NewCompletionCmd(rootCmd, true),
)

View File

@@ -20,6 +20,9 @@ const (
LogFormatPlain = "plain"
// LogFormatJSON is a format for json output
LogFormatJSON = "json"
// DefaultLogLevel defines a default log level as INFO.
DefaultLogLevel = "info"
)
// NOTE: Most of the structs & relevant comments + the
@@ -49,6 +52,9 @@ var (
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName)
minSubscriptionBufferSize = 100
defaultSubscriptionBufferSize = 200
)
// Config defines the top level configuration for a Tendermint node
@@ -225,7 +231,7 @@ func DefaultBaseConfig() BaseConfig {
Moniker: defaultMoniker,
ProxyApp: "tcp://127.0.0.1:26658",
ABCI: "socket",
LogLevel: DefaultPackageLogLevels(),
LogLevel: DefaultLogLevel,
LogFormat: LogFormatPlain,
FastSyncMode: true,
FilterPeers: false,
@@ -284,17 +290,6 @@ func (cfg BaseConfig) ValidateBasic() error {
return nil
}
// DefaultLogLevel returns a default log level of "error"
func DefaultLogLevel() string {
return "error"
}
// DefaultPackageLogLevels returns a default log level setting so all packages
// log at "error", while the `state` and `main` packages log at "info"
func DefaultPackageLogLevels() string {
return fmt.Sprintf("main:info,state:info,statesync:info,*:%s", DefaultLogLevel())
}
//-----------------------------------------------------------------------------
// RPCConfig
@@ -350,6 +345,29 @@ type RPCConfig struct {
// to the estimated maximum number of broadcast_tx_commit calls per block.
MaxSubscriptionsPerClient int `mapstructure:"max_subscriptions_per_client"`
// The number of events that can be buffered per subscription before
// returning `ErrOutOfCapacity`.
SubscriptionBufferSize int `mapstructure:"experimental_subscription_buffer_size"`
// The maximum number of responses that can be buffered per WebSocket
// client. If clients cannot read from the WebSocket endpoint fast enough,
// they will be disconnected, so increasing this parameter may reduce the
// chances of them being disconnected (but will cause the node to use more
// memory).
//
// Must be at least the same as `SubscriptionBufferSize`, otherwise
// connections may be dropped unnecessarily.
WebSocketWriteBufferSize int `mapstructure:"experimental_websocket_write_buffer_size"`
// If a WebSocket client cannot read fast enough, at present we may
// silently drop events instead of generating an error or disconnecting the
// client.
//
// Enabling this parameter will cause the WebSocket connection to be closed
// instead if it cannot read fast enough, allowing for greater
// predictability in subscription behaviour.
CloseOnSlowClient bool `mapstructure:"experimental_close_on_slow_client"`
// How long to wait for a tx to be committed during /broadcast_tx_commit
// WARNING: Using a value larger than 10s will result in increasing the
// global HTTP write timeout, which applies to all connections and endpoints.
@@ -363,7 +381,7 @@ type RPCConfig struct {
MaxHeaderBytes int `mapstructure:"max_header_bytes"`
// The path to a file containing certificate that is used to create the HTTPS server.
// Migth be either absolute path or path related to tendermint's config directory.
// Might be either absolute path or path related to Tendermint's config directory.
//
// If the certificate is signed by a certificate authority,
// the certFile should be the concatenation of the server's certificate, any intermediates,
@@ -374,7 +392,7 @@ type RPCConfig struct {
TLSCertFile string `mapstructure:"tls_cert_file"`
// The path to a file containing matching private key that is used to create the HTTPS server.
// Migth be either absolute path or path related to tendermint's config directory.
// Might be either absolute path or path related to tendermint's config directory.
//
// NOTE: both tls_cert_file and tls_key_file must be present for Tendermint to create HTTPS server.
// Otherwise, HTTP server is run.
@@ -399,7 +417,9 @@ func DefaultRPCConfig() *RPCConfig {
MaxSubscriptionClients: 100,
MaxSubscriptionsPerClient: 5,
SubscriptionBufferSize: defaultSubscriptionBufferSize,
TimeoutBroadcastTxCommit: 10 * time.Second,
WebSocketWriteBufferSize: defaultSubscriptionBufferSize,
MaxBodyBytes: int64(1000000), // 1MB
MaxHeaderBytes: 1 << 20, // same as the net/http default
@@ -433,6 +453,18 @@ func (cfg *RPCConfig) ValidateBasic() error {
if cfg.MaxSubscriptionsPerClient < 0 {
return errors.New("max_subscriptions_per_client can't be negative")
}
if cfg.SubscriptionBufferSize < minSubscriptionBufferSize {
return fmt.Errorf(
"experimental_subscription_buffer_size must be >= %d",
minSubscriptionBufferSize,
)
}
if cfg.WebSocketWriteBufferSize < cfg.SubscriptionBufferSize {
return fmt.Errorf(
"experimental_websocket_write_buffer_size must be >= experimental_subscription_buffer_size (%d)",
cfg.SubscriptionBufferSize,
)
}
if cfg.TimeoutBroadcastTxCommit < 0 {
return errors.New("timeout_broadcast_tx_commit can't be negative")
}
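A hedged example of the invariant these new checks enforce; the buffer values below are arbitrary but satisfy both `experimental_subscription_buffer_size >= 100` and `experimental_websocket_write_buffer_size >= experimental_subscription_buffer_size`:

```go
package main

import (
	"log"

	"github.com/tendermint/tendermint/config"
)

func main() {
	// Example values only; they respect the two checks added above.
	rpcCfg := config.DefaultRPCConfig()
	rpcCfg.SubscriptionBufferSize = 400  // >= minSubscriptionBufferSize (100)
	rpcCfg.WebSocketWriteBufferSize = 800 // >= SubscriptionBufferSize

	if err := rpcCfg.ValidateBasic(); err != nil {
		log.Fatalf("invalid RPC config: %v", err)
	}
}
```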
@@ -724,13 +756,15 @@ func (cfg *MempoolConfig) ValidateBasic() error {
// StateSyncConfig defines the configuration for the Tendermint state sync service
type StateSyncConfig struct {
Enable bool `mapstructure:"enable"`
TempDir string `mapstructure:"temp_dir"`
RPCServers []string `mapstructure:"rpc_servers"`
TrustPeriod time.Duration `mapstructure:"trust_period"`
TrustHeight int64 `mapstructure:"trust_height"`
TrustHash string `mapstructure:"trust_hash"`
DiscoveryTime time.Duration `mapstructure:"discovery_time"`
Enable bool `mapstructure:"enable"`
TempDir string `mapstructure:"temp_dir"`
RPCServers []string `mapstructure:"rpc_servers"`
TrustPeriod time.Duration `mapstructure:"trust_period"`
TrustHeight int64 `mapstructure:"trust_height"`
TrustHash string `mapstructure:"trust_hash"`
DiscoveryTime time.Duration `mapstructure:"discovery_time"`
ChunkRequestTimeout time.Duration `mapstructure:"chunk_request_timeout"`
ChunkFetchers int32 `mapstructure:"chunk_fetchers"`
}
func (cfg *StateSyncConfig) TrustHashBytes() []byte {
@@ -745,8 +779,10 @@ func (cfg *StateSyncConfig) TrustHashBytes() []byte {
// DefaultStateSyncConfig returns a default configuration for the state sync service
func DefaultStateSyncConfig() *StateSyncConfig {
return &StateSyncConfig{
TrustPeriod: 168 * time.Hour,
DiscoveryTime: 15 * time.Second,
TrustPeriod: 168 * time.Hour,
DiscoveryTime: 15 * time.Second,
ChunkRequestTimeout: 10 * time.Second,
ChunkFetchers: 4,
}
}
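As a sketch (server addresses, trust height, and trust hash are placeholders), enabling state sync with the new chunk-tuning fields might look like:

```go
package main

import (
	"fmt"
	"time"

	"github.com/tendermint/tendermint/config"
)

func main() {
	// Placeholders throughout; the trust hash must come from a trusted
	// source and be hex-encoded.
	ss := config.DefaultStateSyncConfig()
	ss.Enable = true
	ss.RPCServers = []string{"rpc-a.example.com:26657", "rpc-b.example.com:26657"}
	ss.TrustHeight = 5000
	ss.TrustHash = "deadbeef" // placeholder hex value
	ss.ChunkRequestTimeout = 15 * time.Second // must be at least 5s
	ss.ChunkFetchers = 8

	if err := ss.ValidateBasic(); err != nil {
		fmt.Println("config rejected:", err)
	}
}
```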
@@ -761,28 +797,47 @@ func (cfg *StateSyncConfig) ValidateBasic() error {
if len(cfg.RPCServers) == 0 {
return errors.New("rpc_servers is required")
}
if len(cfg.RPCServers) < 2 {
return errors.New("at least two rpc_servers entries is required")
}
for _, server := range cfg.RPCServers {
if len(server) == 0 {
return errors.New("found empty rpc_servers entry")
}
}
if cfg.DiscoveryTime != 0 && cfg.DiscoveryTime < 5*time.Second {
return errors.New("discovery time must be 0s or greater than five seconds")
}
if cfg.TrustPeriod <= 0 {
return errors.New("trusted_period is required")
}
if cfg.TrustHeight <= 0 {
return errors.New("trusted_height is required")
}
if len(cfg.TrustHash) == 0 {
return errors.New("trusted_hash is required")
}
_, err := hex.DecodeString(cfg.TrustHash)
if err != nil {
return fmt.Errorf("invalid trusted_hash: %w", err)
}
if cfg.ChunkRequestTimeout < 5*time.Second {
return errors.New("chunk_request_timeout must be at least 5 seconds")
}
if cfg.ChunkFetchers <= 0 {
return errors.New("chunk_fetchers is required")
}
}
return nil
}
@@ -1002,7 +1057,12 @@ type TxIndexConfig struct {
// 1) "null"
// 2) "kv" (default) - the simplest possible indexer,
// backed by key-value storage (defaults to levelDB; see DBBackend).
// 3) "psql" - the indexer services backed by PostgreSQL.
Indexer string `mapstructure:"indexer"`
// The PostgreSQL connection configuration, the connection format:
// postgresql://<user>:<password>@<host>:<port>/<db>?<opts>
PsqlConn string `mapstructure:"psql-conn"`
}
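A brief, hedged sketch of pointing the indexer at PostgreSQL using the documented connection format; credentials and host are placeholders:

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/config"
)

func main() {
	// Switch the tx indexer from the default "kv" backend to PostgreSQL.
	txCfg := config.DefaultTxIndexConfig()
	txCfg.Indexer = "psql"
	txCfg.PsqlConn = "postgresql://user:secret@localhost:5432/tendermint?sslmode=disable"

	fmt.Println("indexer:", txCfg.Indexer)
}
```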
// DefaultTxIndexConfig returns a default configuration for the transaction indexer.

View File

@@ -206,6 +206,33 @@ max_subscription_clients = {{ .RPC.MaxSubscriptionClients }}
# the estimated maximum number of broadcast_tx_commit calls per block.
max_subscriptions_per_client = {{ .RPC.MaxSubscriptionsPerClient }}
# Experimental parameter to specify the maximum number of events a node will
# buffer, per subscription, before returning an error and closing the
# subscription. Must be set to at least 100, but higher values will accommodate
# higher event throughput rates (and will use more memory).
experimental_subscription_buffer_size = {{ .RPC.SubscriptionBufferSize }}
# Experimental parameter to specify the maximum number of RPC responses that
# can be buffered per WebSocket client. If clients cannot read from the
# WebSocket endpoint fast enough, they will be disconnected, so increasing this
# parameter may reduce the chances of them being disconnected (but will cause
# the node to use more memory).
#
# Must be at least the same as "experimental_subscription_buffer_size",
# otherwise connections could be dropped unnecessarily. This value should
# ideally be somewhat higher than "experimental_subscription_buffer_size" to
# accommodate non-subscription-related RPC responses.
experimental_websocket_write_buffer_size = {{ .RPC.WebSocketWriteBufferSize }}
# If a WebSocket client cannot read fast enough, at present we may
# silently drop events instead of generating an error or disconnecting the
# client.
#
# Enabling this experimental parameter will cause the WebSocket connection to
# be closed instead if it cannot read fast enough, allowing for greater
# predictability in subscription behaviour.
experimental_close_on_slow_client = {{ .RPC.CloseOnSlowClient }}
# How long to wait for a tx to be committed during /broadcast_tx_commit.
# WARNING: Using a value larger than 10s will result in increasing the
# global HTTP write timeout, which applies to all connections and endpoints.
@@ -219,7 +246,7 @@ max_body_bytes = {{ .RPC.MaxBodyBytes }}
max_header_bytes = {{ .RPC.MaxHeaderBytes }}
# The path to a file containing certificate that is used to create the HTTPS server.
# Migth be either absolute path or path related to tendermint's config directory.
# Might be either absolute path or path related to Tendermint's config directory.
# If the certificate is signed by a certificate authority,
# the certFile should be the concatenation of the server's certificate, any intermediates,
# and the CA's certificate.
@@ -228,8 +255,8 @@ max_header_bytes = {{ .RPC.MaxHeaderBytes }}
tls_cert_file = "{{ .RPC.TLSCertFile }}"
# The path to a file containing matching private key that is used to create the HTTPS server.
# Migth be either absolute path or path related to tendermint's config directory.
# NOTE: both tls_cert_file and tls_key_file must be present for Tendermint to create HTTPS server.
# Might be either absolute path or path related to Tendermint's config directory.
# NOTE: both tls-cert-file and tls-key-file must be present for Tendermint to create HTTPS server.
# Otherwise, HTTP server is run.
tls_key_file = "{{ .RPC.TLSKeyFile }}"
@@ -247,7 +274,8 @@ laddr = "{{ .P2P.ListenAddress }}"
# Address to advertise to peers for them to dial
# If empty, will use the same port as the laddr,
# and will introspect on the listener or use UPnP
# to figure out the address.
# to figure out the address. IP and port are required.
# example: 159.89.10.97:26656
external_address = "{{ .P2P.ExternalAddress }}"
# Comma separated list of seed nodes to connect to
@@ -310,7 +338,7 @@ handshake_timeout = "{{ .P2P.HandshakeTimeout }}"
dial_timeout = "{{ .P2P.DialTimeout }}"
#######################################################
### Mempool Configurattion Option ###
### Mempool Configuration Option ###
#######################################################
[mempool]
@@ -372,6 +400,13 @@ discovery_time = "{{ .StateSync.DiscoveryTime }}"
# Will create a new, randomly named directory within, and remove it when done.
temp_dir = "{{ .StateSync.TempDir }}"
# The timeout duration before re-requesting a chunk, possibly from a different
# peer (default: 10 seconds).
chunk_request_timeout = "{{ .StateSync.ChunkRequestTimeout }}"
# The number of concurrent chunk fetchers to run (default: 4).
chunk_fetchers = "{{ .StateSync.ChunkFetchers }}"
#######################################################
### Fast Sync Configuration Connections ###
#######################################################

View File

@@ -154,6 +154,72 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
}
}
// Introducing a lazy proposer means that the time of the committed block is different from the
// timestamp that the other nodes have. This test ensures that the evidence that finally gets
// proposed will have a valid timestamp.
lazyProposer := css[1]
lazyProposer.decideProposal = func(height int64, round int32) {
lazyProposer.Logger.Info("Lazy Proposer proposing condensed commit")
if lazyProposer.privValidator == nil {
panic("entered createProposalBlock with privValidator being nil")
}
var commit *types.Commit
switch {
case lazyProposer.Height == lazyProposer.state.InitialHeight:
// We're creating a proposal for the first block.
// The commit is empty, but not nil.
commit = types.NewCommit(0, 0, types.BlockID{}, nil)
case lazyProposer.LastCommit.HasTwoThirdsMajority():
// Make the commit from LastCommit
commit = lazyProposer.LastCommit.MakeCommit()
default: // This shouldn't happen.
lazyProposer.Logger.Error("enterPropose: Cannot propose anything: No commit for the previous block")
return
}
// omit the last signature in the commit
commit.Signatures[len(commit.Signatures)-1] = types.NewCommitSigAbsent()
if lazyProposer.privValidatorPubKey == nil {
// If this node is a validator & proposer in the current round, it will
// miss the opportunity to create a block.
lazyProposer.Logger.Error(fmt.Sprintf("enterPropose: %v", errPubKeyIsNotSet))
return
}
proposerAddr := lazyProposer.privValidatorPubKey.Address()
block, blockParts := lazyProposer.blockExec.CreateProposalBlock(
lazyProposer.Height, lazyProposer.state, commit, proposerAddr,
)
// Flush the WAL. Otherwise, we may not recompute the same proposal to sign,
// and the privValidator will refuse to sign anything.
if err := lazyProposer.wal.FlushAndSync(); err != nil {
lazyProposer.Logger.Error("Error flushing to disk")
}
// Make proposal
propBlockID := types.BlockID{Hash: block.Hash(), PartSetHeader: blockParts.Header()}
proposal := types.NewProposal(height, round, lazyProposer.ValidRound, propBlockID)
p := proposal.ToProto()
if err := lazyProposer.privValidator.SignProposal(lazyProposer.state.ChainID, p); err == nil {
proposal.Signature = p.Signature
// send proposal and block parts on internal msg queue
lazyProposer.sendInternalMessage(msgInfo{&ProposalMessage{proposal}, ""})
for i := 0; i < int(blockParts.Total()); i++ {
part := blockParts.GetPart(i)
lazyProposer.sendInternalMessage(msgInfo{&BlockPartMessage{lazyProposer.Height, lazyProposer.Round, part}, ""})
}
lazyProposer.Logger.Info("Signed proposal", "height", height, "round", round, "proposal", proposal)
lazyProposer.Logger.Debug(fmt.Sprintf("Signed proposal block: %v", block))
} else if !lazyProposer.replayMode {
lazyProposer.Logger.Error("enterPropose: Error signing proposal", "height", height, "round", round, "err", err)
}
}
// start the consensus reactors
for i := 0; i < nValidators; i++ {
s := reactors[i].conS.GetState()

View File

@@ -12,7 +12,7 @@ import (
"testing"
"time"
"github.com/go-kit/kit/log/term"
"github.com/go-kit/log/term"
"github.com/stretchr/testify/require"
"path"
@@ -74,6 +74,7 @@ type validatorStub struct {
Round int32
types.PrivValidator
VotingPower int64
lastVote *types.Vote
}
var testMinPower int64 = 10
@@ -106,8 +107,18 @@ func (vs *validatorStub) signVote(
BlockID: types.BlockID{Hash: hash, PartSetHeader: header},
}
v := vote.ToProto()
err = vs.PrivValidator.SignVote(config.ChainID(), v)
if err := vs.PrivValidator.SignVote(config.ChainID(), v); err != nil {
return nil, fmt.Errorf("sign vote failed: %w", err)
}
// ref: signVote in FilePV; the vote should reuse the previous vote info when the sign data is the same.
if signDataIsEqual(vs.lastVote, v) {
v.Signature = vs.lastVote.Signature
v.Timestamp = vs.lastVote.Timestamp
}
vote.Signature = v.Signature
vote.Timestamp = v.Timestamp
return vote, err
}
@@ -118,6 +129,9 @@ func signVote(vs *validatorStub, voteType tmproto.SignedMsgType, hash []byte, he
if err != nil {
panic(fmt.Errorf("failed to sign vote: %v", err))
}
vs.lastVote = v
return v
}
@@ -866,3 +880,16 @@ func newPersistentKVStore() abci.Application {
func newPersistentKVStoreWithPath(dbDir string) abci.Application {
return kvstore.NewPersistentKVStoreApplication(dbDir)
}
func signDataIsEqual(v1 *types.Vote, v2 *tmproto.Vote) bool {
if v1 == nil || v2 == nil {
return false
}
return v1.Type == v2.Type &&
bytes.Equal(v1.BlockID.Hash, v2.BlockID.GetHash()) &&
v1.Height == v2.GetHeight() &&
v1.Round == v2.Round &&
bytes.Equal(v1.ValidatorAddress.Bytes(), v2.GetValidatorAddress()) &&
v1.ValidatorIndex == v2.GetValidatorIndex()
}

View File

@@ -60,6 +60,22 @@ type Metrics struct {
// Number of blockparts transmitted by peer.
BlockParts metrics.Counter
// QuorumPrevoteMessageDelay is the interval in seconds between the proposal
// timestamp and the timestamp of the earliest prevote that achieved a quorum
// during the prevote step.
//
// To compute it, sum the voting power over each prevote received, in increasing
// order of timestamp. The timestamp of the first prevote to increase the sum to
// be above 2/3 of the total voting power of the network defines the endpoint
// of the interval. Subtract the proposal timestamp from this endpoint to
// obtain the quorum delay.
QuorumPrevoteMessageDelay metrics.Gauge
// FullPrevoteMessageDelay is the interval in seconds between the proposal
// timestamp and the timestamp of the latest prevote in a round where 100%
// of the voting power on the network issued prevotes.
FullPrevoteMessageDelay metrics.Gauge
}
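To make the computation in the `QuorumPrevoteMessageDelay` comment concrete, here is a hedged, stand-alone sketch; `prevote` and `quorumPrevoteDelay` are illustrative names, not the actual consensus-state implementation:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// prevote is an illustrative stand-in for the real vote type; only the
// fields needed for the delay computation are modeled.
type prevote struct {
	Timestamp time.Time
	Power     int64
}

// quorumPrevoteDelay walks the prevotes in increasing timestamp order; the
// first prevote that pushes the accumulated power strictly above 2/3 of the
// total voting power ends the measured interval.
func quorumPrevoteDelay(proposalTime time.Time, votes []prevote, totalPower int64) (time.Duration, bool) {
	sort.Slice(votes, func(i, j int) bool {
		return votes[i].Timestamp.Before(votes[j].Timestamp)
	})
	var sum int64
	for _, v := range votes {
		sum += v.Power
		if sum*3 > totalPower*2 { // strictly above 2/3 of total power
			return v.Timestamp.Sub(proposalTime), true
		}
	}
	return 0, false // quorum never reached in this round
}

func main() {
	base := time.Now()
	votes := []prevote{
		{base.Add(200 * time.Millisecond), 40}, // after sorting, quorum is crossed here (70/100 > 2/3)
		{base.Add(500 * time.Millisecond), 30},
		{base.Add(100 * time.Millisecond), 30},
	}
	if d, ok := quorumPrevoteDelay(base, votes, 100); ok {
		fmt.Println("quorum prevote delay:", d) // 200ms
	}
}
```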
// PrometheusMetrics returns Metrics build using Prometheus client library.
@@ -186,6 +202,20 @@ func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
Name: "block_parts",
Help: "Number of blockparts transmitted by peer.",
}, append(labels, "peer_id")).With(labelsAndValues...),
QuorumPrevoteMessageDelay: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "quorum_prevote_message_delay",
Help: "Difference in seconds between the proposal timestamp and the timestamp " +
"of the latest prevote that achieved a quorum in the prevote step.",
}, labels).With(labelsAndValues...),
FullPrevoteMessageDelay: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "full_prevote_message_delay",
Help: "Difference in seconds between the proposal timestamp and the timestamp " +
"of the latest prevote that achieved 100% of the voting power in the prevote step.",
}, labels).With(labelsAndValues...),
}
}
@@ -209,12 +239,14 @@ func NopMetrics() *Metrics {
BlockIntervalSeconds: discard.NewHistogram(),
NumTxs: discard.NewGauge(),
BlockSizeBytes: discard.NewGauge(),
TotalTxs: discard.NewGauge(),
CommittedHeight: discard.NewGauge(),
FastSyncing: discard.NewGauge(),
StateSyncing: discard.NewGauge(),
BlockParts: discard.NewCounter(),
NumTxs: discard.NewGauge(),
BlockSizeBytes: discard.NewGauge(),
TotalTxs: discard.NewGauge(),
CommittedHeight: discard.NewGauge(),
FastSyncing: discard.NewGauge(),
StateSyncing: discard.NewGauge(),
BlockParts: discard.NewCounter(),
QuorumPrevoteMessageDelay: discard.NewGauge(),
FullPrevoteMessageDelay: discard.NewGauge(),
}
}

View File

@@ -46,6 +46,7 @@ type Reactor struct {
mtx tmsync.RWMutex
waitSync bool
eventBus *types.EventBus
rs *cstypes.RoundState
Metrics *Metrics
}
@@ -58,6 +59,7 @@ func NewReactor(consensusState *State, waitSync bool, options ...ReactorOption)
conR := &Reactor{
conS: consensusState,
waitSync: waitSync,
rs: consensusState.GetRoundState(),
Metrics: NopMetrics(),
}
conR.BaseReactor = *p2p.NewBaseReactor("Consensus", conR)
@@ -78,6 +80,7 @@ func (conR *Reactor) OnStart() error {
go conR.peerStatsRoutine()
conR.subscribeToBroadcastEvents()
go conR.updateRoundStateRoutine()
if !conR.WaitSync() {
err := conR.conS.Start()
@@ -482,11 +485,31 @@ func makeRoundStepMessage(rs *cstypes.RoundState) (nrsMsg *NewRoundStepMessage)
}
func (conR *Reactor) sendNewRoundStepMessage(peer p2p.Peer) {
rs := conR.conS.GetRoundState()
rs := conR.getRoundState()
nrsMsg := makeRoundStepMessage(rs)
peer.Send(StateChannel, MustEncode(nrsMsg))
}
func (conR *Reactor) updateRoundStateRoutine() {
t := time.NewTicker(100 * time.Microsecond)
defer t.Stop()
for range t.C {
if !conR.IsRunning() {
return
}
rs := conR.conS.GetRoundState()
conR.mtx.Lock()
conR.rs = rs
conR.mtx.Unlock()
}
}
func (conR *Reactor) getRoundState() *cstypes.RoundState {
conR.mtx.RLock()
defer conR.mtx.RUnlock()
return conR.rs
}
func (conR *Reactor) gossipDataRoutine(peer p2p.Peer, ps *PeerState) {
logger := conR.Logger.With("peer", peer)
@@ -494,10 +517,9 @@ OUTER_LOOP:
for {
// Manage disconnects from self or peer.
if !peer.IsRunning() || !conR.IsRunning() {
logger.Info("Stopping gossipDataRoutine for peer")
return
}
rs := conR.conS.GetRoundState()
rs := conR.getRoundState()
prs := ps.GetRoundState()
// Send proposal Block parts?
@@ -518,7 +540,8 @@ OUTER_LOOP:
}
// If the peer is on a previous height that we have, help catch up.
if (0 < prs.Height) && (prs.Height < rs.Height) && (prs.Height >= conR.conS.blockStore.Base()) {
blockStoreBase := conR.conS.blockStore.Base()
if blockStoreBase > 0 && 0 < prs.Height && prs.Height < rs.Height && prs.Height >= blockStoreBase {
heightLogger := logger.With("height", prs.Height)
// if we never received the commit message from the peer, the block parts won't be initialized
@@ -526,7 +549,7 @@ OUTER_LOOP:
blockMeta := conR.conS.blockStore.LoadBlockMeta(prs.Height)
if blockMeta == nil {
heightLogger.Error("Failed to load block meta",
"blockstoreBase", conR.conS.blockStore.Base(), "blockstoreHeight", conR.conS.blockStore.Height())
"blockstoreBase", blockStoreBase, "blockstoreHeight", conR.conS.blockStore.Height())
time.Sleep(conR.conS.config.PeerGossipSleepDuration)
} else {
ps.InitProposalBlockParts(blockMeta.BlockID.PartSetHeader)
@@ -637,10 +660,9 @@ OUTER_LOOP:
for {
// Manage disconnects from self or peer.
if !peer.IsRunning() || !conR.IsRunning() {
logger.Info("Stopping gossipVotesRoutine for peer")
return
}
rs := conR.conS.GetRoundState()
rs := conR.getRoundState()
prs := ps.GetRoundState()
switch sleeping {
@@ -672,7 +694,8 @@ OUTER_LOOP:
// Catchup logic
// If peer is lagging by more than 1, send Commit.
if prs.Height != 0 && rs.Height >= prs.Height+2 && prs.Height >= conR.conS.blockStore.Base() {
blockStoreBase := conR.conS.blockStore.Base()
if blockStoreBase > 0 && prs.Height != 0 && rs.Height >= prs.Height+2 && prs.Height >= blockStoreBase {
// Load the block commit for prs.Height,
// which contains precommit signatures for prs.Height.
if commit := conR.conS.blockStore.LoadBlockCommit(prs.Height); commit != nil {
@@ -761,19 +784,17 @@ func (conR *Reactor) gossipVotesForHeight(
// NOTE: `queryMaj23Routine` has a simple crude design since it only comes
// into play for liveness when there's a signature DDoS attack happening.
func (conR *Reactor) queryMaj23Routine(peer p2p.Peer, ps *PeerState) {
logger := conR.Logger.With("peer", peer)
OUTER_LOOP:
for {
// Manage disconnects from self or peer.
if !peer.IsRunning() || !conR.IsRunning() {
logger.Info("Stopping queryMaj23Routine for peer")
return
}
// Maybe send Height/Round/Prevotes
{
rs := conR.conS.GetRoundState()
rs := conR.getRoundState()
prs := ps.GetRoundState()
if rs.Height == prs.Height {
if maj23, ok := rs.Votes.Prevotes(prs.Round).TwoThirdsMajority(); ok {
@@ -790,7 +811,7 @@ OUTER_LOOP:
// Maybe send Height/Round/Precommits
{
rs := conR.conS.GetRoundState()
rs := conR.getRoundState()
prs := ps.GetRoundState()
if rs.Height == prs.Height {
if maj23, ok := rs.Votes.Precommits(prs.Round).TwoThirdsMajority(); ok {
@@ -807,7 +828,7 @@ OUTER_LOOP:
// Maybe send Height/Round/ProposalPOL
{
rs := conR.conS.GetRoundState()
rs := conR.getRoundState()
prs := ps.GetRoundState()
if rs.Height == prs.Height && prs.ProposalPOLRound >= 0 {
if maj23, ok := rs.Votes.Prevotes(prs.ProposalPOLRound).TwoThirdsMajority(); ok {
@@ -1474,7 +1495,7 @@ func (m *NewRoundStepMessage) ValidateHeight(initialHeight int64) error {
m.LastCommitRound, initialHeight)
}
if m.Height > initialHeight && m.LastCommitRound < 0 {
return fmt.Errorf("LastCommitRound can only be negative for initial height %v", // nolint
return fmt.Errorf("LastCommitRound can only be negative for initial height %v",
initialHeight)
}
return nil

View File

@@ -254,7 +254,7 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
h.logger.Info("ABCI Handshake App Info",
"height", blockHeight,
"hash", fmt.Sprintf("%X", appHash),
"hash", appHash,
"software-version", res.Version,
"protocol-version", res.AppVersion,
)
@@ -271,7 +271,7 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
}
h.logger.Info("Completed ABCI Handshake - Tendermint and App are synced",
"appHeight", blockHeight, "appHash", fmt.Sprintf("%X", appHash))
"appHeight", blockHeight, "appHash", appHash)
// TODO: (on restart) replay mempool

File diff suppressed because it is too large Load Diff

View File

@@ -1877,6 +1877,26 @@ func TestStateOutputVoteStats(t *testing.T) {
}
func TestSignSameVoteTwice(t *testing.T) {
_, vss := randState(2)
randBytes := tmrand.Bytes(tmhash.Size)
vote := signVote(vss[1],
tmproto.PrecommitType,
randBytes,
types.PartSetHeader{Total: 10, Hash: randBytes},
)
vote2 := signVote(vss[1],
tmproto.PrecommitType,
randBytes,
types.PartSetHeader{Total: 10, Hash: randBytes},
)
require.Equal(t, vote, vote2)
}
// subscribe subscribes test client to the given query and returns a channel with cap = 1.
func subscribe(eventBus *types.EventBus, q tmpubsub.Query) <-chan tmpubsub.Message {
sub, err := eventBus.Subscribe(context.Background(), testSubscriber, q)

View File

@@ -1,3 +1,4 @@
//go:build gofuzz
// +build gofuzz
package consensus

View File

@@ -5,7 +5,7 @@ import (
"fmt"
"io/ioutil"
"golang.org/x/crypto/openpgp/armor"
"golang.org/x/crypto/openpgp/armor" // nolint: staticcheck
)
func EncodeArmor(blockType string, headers map[string]string, data []byte) string {

View File

@@ -6,6 +6,6 @@ import (
func Sha256(bytes []byte) []byte {
hasher := sha256.New()
hasher.Write(bytes) //nolint:errcheck // ignore error
hasher.Write(bytes)
return hasher.Sum(nil)
}

View File

@@ -80,13 +80,13 @@ func (op ValueOp) Run(args [][]byte) ([][]byte, error) {
}
value := args[0]
hasher := tmhash.New()
hasher.Write(value) //nolint: errcheck // does not error
hasher.Write(value)
vhash := hasher.Sum(nil)
bz := new(bytes.Buffer)
// Wrap <op.Key, vhash> to hash the KVPair.
encodeByteSlice(bz, op.key) //nolint: errcheck // does not error
encodeByteSlice(bz, vhash) //nolint: errcheck // does not error
encodeByteSlice(bz, op.key) // nolint: errcheck // does not error
encodeByteSlice(bz, vhash) // nolint: errcheck // does not error
kvhash := leafHash(bz.Bytes())
if !bytes.Equal(kvhash, op.Proof.LeafHash) {

View File

@@ -122,6 +122,26 @@ func GenPrivKeySecp256k1(secret []byte) PrivKey {
return PrivKey(privKey32)
}
// used to reject malleable signatures
// see:
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/crypto.go#L39
var secp256k1halfN = new(big.Int).Rsh(secp256k1.S256().N, 1)
// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
// The returned signature will be of the form R || S (in lower-S form).
func (privKey PrivKey) Sign(msg []byte) ([]byte, error) {
priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey)
sig, err := priv.Sign(crypto.Sha256(msg))
if err != nil {
return nil, err
}
sigBytes := serializeSig(sig)
return sigBytes, nil
}
//-------------------------------------
var _ crypto.PubKey = PubKey{}
@@ -152,7 +172,7 @@ func (pubKey PubKey) Address() crypto.Address {
return crypto.Address(hasherRIPEMD160.Sum(nil))
}
// Bytes returns the pubkey marshalled with amino encoding.
// Bytes returns the pubkey marshaled with amino encoding.
func (pubKey PubKey) Bytes() []byte {
return []byte(pubKey)
}
@@ -171,3 +191,47 @@ func (pubKey PubKey) Equals(other crypto.PubKey) bool {
func (pubKey PubKey) Type() string {
return KeyType
}
// VerifySignature verifies a signature of the form R || S.
// It rejects signatures which are not in lower-S form.
func (pubKey PubKey) VerifySignature(msg []byte, sigStr []byte) bool {
if len(sigStr) != 64 {
return false
}
pub, err := secp256k1.ParsePubKey(pubKey, secp256k1.S256())
if err != nil {
return false
}
// parse the signature:
signature := signatureFromBytes(sigStr)
// Reject malleable signatures. libsecp256k1 does this check but btcec doesn't.
// see: https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
if signature.S.Cmp(secp256k1halfN) > 0 {
return false
}
return signature.Verify(crypto.Sha256(msg), pub)
}
// Read Signature struct from R || S. Caller needs to ensure
// that len(sigStr) == 64.
func signatureFromBytes(sigStr []byte) *secp256k1.Signature {
return &secp256k1.Signature{
R: new(big.Int).SetBytes(sigStr[:32]),
S: new(big.Int).SetBytes(sigStr[32:64]),
}
}
// Serialize signature to R || S.
// R, S are padded to 32 bytes respectively.
func serializeSig(sig *secp256k1.Signature) []byte {
rBytes := sig.R.Bytes()
sBytes := sig.S.Bytes()
sigBytes := make([]byte, 64)
// 0 pad the byte arrays from the left if they aren't big enough.
copy(sigBytes[32-len(rBytes):32], rBytes)
copy(sigBytes[64-len(sBytes):64], sBytes)
return sigBytes
}
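For orientation, a minimal sketch of how this package is exercised end to end: key generation, signing, and lower-S verification. `GenPrivKey` is the package's random-key constructor; the message and the deliberate corruption below are illustrative only.

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/crypto/secp256k1"
)

func main() {
	priv := secp256k1.GenPrivKey() // random private key
	msg := []byte("example message")

	sig, err := priv.Sign(msg) // 64-byte R || S, lower-S form
	if err != nil {
		panic(err)
	}

	pub := priv.PubKey()
	fmt.Println("valid:", pub.VerifySignature(msg, sig)) // true

	sig[0] ^= 0xff                                       // corrupt R
	fmt.Println("valid:", pub.VerifySignature(msg, sig)) // false
}
```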

View File

@@ -1,75 +0,0 @@
// +build !libsecp256k1
package secp256k1
import (
"math/big"
secp256k1 "github.com/btcsuite/btcd/btcec"
"github.com/tendermint/tendermint/crypto"
)
// used to reject malleable signatures
// see:
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/crypto.go#L39
var secp256k1halfN = new(big.Int).Rsh(secp256k1.S256().N, 1)
// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
// The returned signature will be of the form R || S (in lower-S form).
func (privKey PrivKey) Sign(msg []byte) ([]byte, error) {
priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey)
sig, err := priv.Sign(crypto.Sha256(msg))
if err != nil {
return nil, err
}
sigBytes := serializeSig(sig)
return sigBytes, nil
}
// VerifySignature verifies a signature of the form R || S.
// It rejects signatures which are not in lower-S form.
func (pubKey PubKey) VerifySignature(msg []byte, sigStr []byte) bool {
if len(sigStr) != 64 {
return false
}
pub, err := secp256k1.ParsePubKey(pubKey, secp256k1.S256())
if err != nil {
return false
}
// parse the signature:
signature := signatureFromBytes(sigStr)
// Reject malleable signatures. libsecp256k1 does this check but btcec doesn't.
// see: https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
if signature.S.Cmp(secp256k1halfN) > 0 {
return false
}
return signature.Verify(crypto.Sha256(msg), pub)
}
// Read Signature struct from R || S. Caller needs to ensure
// that len(sigStr) == 64.
func signatureFromBytes(sigStr []byte) *secp256k1.Signature {
return &secp256k1.Signature{
R: new(big.Int).SetBytes(sigStr[:32]),
S: new(big.Int).SetBytes(sigStr[32:64]),
}
}
// Serialize signature to R || S.
// R, S are padded to 32 bytes respectively.
func serializeSig(sig *secp256k1.Signature) []byte {
rBytes := sig.R.Bytes()
sBytes := sig.S.Bytes()
sigBytes := make([]byte, 64)
// 0 pad the byte arrays from the left if they aren't big enough.
copy(sigBytes[32-len(rBytes):32], rBytes)
copy(sigBytes[64-len(sBytes):64], sBytes)
return sigBytes
}

View File

@@ -36,7 +36,7 @@ func TestPubKeySecp256k1Address(t *testing.T) {
addrBbz, _, _ := base58.CheckDecode(d.addr)
addrB := crypto.Address(addrBbz)
var priv secp256k1.PrivKey = secp256k1.PrivKey(privB)
priv := secp256k1.PrivKey(privB)
pubKey := priv.PubKey()
pubT, _ := pubKey.(secp256k1.PubKey)

View File

@@ -83,7 +83,7 @@ func TestRandom(t *testing.T) {
}
}
// AFOREMENTIONED LICENCE
// AFOREMENTIONED LICENSE
// Copyright (c) 2009 The Go Authors. All rights reserved.
//
// Redistribution and use in source and binary forms, with or without

View File

@@ -21,6 +21,20 @@ module.exports = {
key: "59f0e2deb984aa9cdf2b3a5fd24ac501",
index: "tendermint"
},
versions: [
{
"label": "v0.33",
"key": "v0.33"
},
{
"label": "v0.34",
"key": "v0.34"
},
{
"label": "v0.35",
"key": "v0.35"
}
],
topbar: {
banner: false,
},
@@ -35,19 +49,10 @@ module.exports = {
path: '/DEV_SESSIONS.html'
},
{
// TODO(creachadair): Figure out how to make this per-branch.
// See: https://github.com/tendermint/tendermint/issues/7908
title: 'RPC',
path: 'https://docs.tendermint.com/master/rpc/',
static: true
},
// TODO: remove once https://github.com/cosmos/vuepress-theme-cosmos/issues/91 is closed
{
title: 'Version 0.32',
path: '/v0.32',
static: true
},
{
title: 'Version 0.33',
path: '/v0.33',
path: 'https://docs.tendermint.com/v0.35/rpc/',
static: true
},
]
@@ -71,7 +76,7 @@ module.exports = {
},
footer: {
question: {
text: 'Chat with Tendermint developers in <a href=\'https://discord.gg/cr7N47p\' target=\'_blank\'>Discord</a> or reach out on the <a href=\'https://forum.cosmos.network/c/tendermint\' target=\'_blank\'>Tendermint Forum</a> to learn more.'
text: 'Chat with Tendermint developers in <a href=\'https://discord.gg/vcExX9T\' target=\'_blank\'>Discord</a> or reach out on the <a href=\'https://forum.cosmos.network/c/tendermint\' target=\'_blank\'>Tendermint Forum</a> to learn more.'
},
logo: '/logo-bw.svg',
textLink: {
@@ -105,7 +110,7 @@ module.exports = {
}
],
smallprint:
'The development of the Tendermint project is led primarily by Tendermint Inc., the for-profit entity which also maintains this website. Funding for this development comes primarily from the Interchain Foundation, a Swiss non-profit.',
'The development of Tendermint Core is led primarily by [Interchain GmbH](https://interchain.berlin/). Funding for this development comes primarily from the Interchain Foundation, a Swiss non-profit. The Tendermint trademark is owned by Tendermint Inc, the for-profit entity that also maintains this website.',
links: [
{
title: 'Documentation',
@@ -159,6 +164,12 @@ module.exports = {
{
ga: 'UA-51029217-11'
}
],
[
'@vuepress/plugin-html-redirect',
{
countdown: 0
}
]
],
]
};

docs/.vuepress/redirects Normal file
View File

@@ -0,0 +1 @@
/master/ /v0.35/

View File

@@ -17,7 +17,6 @@ Next, install the `abci-cli` tool and example applications:
```sh
git clone https://github.com/tendermint/tendermint.git
cd tendermint
make tools
make install_abci
```
@@ -64,7 +63,7 @@ as `abci-cli` above. The kvstore just stores transactions in a merkle
tree.
Its code can be found
[here](https://github.com/tendermint/tendermint/blob/master/abci/cmd/abci-cli/abci-cli.go)
[here](https://github.com/tendermint/tendermint/blob/v0.34.x/abci/cmd/abci-cli/abci-cli.go)
and looks like:
```go
@@ -221,7 +220,7 @@ Now that we've got the hang of it, let's try another application, the
"counter" app.
Like the kvstore app, its code can be found
[here](https://github.com/tendermint/tendermint/blob/master/abci/cmd/abci-cli/abci-cli.go)
[here](https://github.com/tendermint/tendermint/blob/v0.34.x/abci/cmd/abci-cli/abci-cli.go)
and looks like:
```go

View File

@@ -34,7 +34,6 @@ Then run
```sh
go get github.com/tendermint/tendermint
cd $GOPATH/src/github.com/tendermint/tendermint
make tools
make install_abci
```

View File

@@ -4,12 +4,13 @@ order: 6
# Indexing Transactions
Tendermint allows you to index transactions and later query or subscribe to their results.
Events can be used to index transactions and blocks according to what happened
during their execution. Note that the set of events returned for a block from
`BeginBlock` and `EndBlock` are merged. In case both methods return the same
type, only the key-value pairs defined in `EndBlock` are used.
Tendermint allows you to index transactions and blocks and later query or
subscribe to their results. Transactions are indexed by `TxResult.Events` and
blocks are indexed by `Response(Begin|End)Block.Events`. However, transactions
are also indexed by a primary key which includes the transaction hash and maps
to and stores the corresponding `TxResult`. Blocks are indexed by a primary key
which includes the block height and maps to and stores the block height, i.e.
the block itself is never stored.
Each event contains a type and a list of attributes, which are key-value pairs
denoting something about what happened during the method's execution. For more
@@ -17,7 +18,7 @@ details on `Events`, see the
[ABCI](https://github.com/tendermint/spec/blob/master/spec/abci/abci.md#events)
documentation.
An Event has a composite key associated with it. A `compositeKey` is
An `Event` has a composite key associated with it. A `compositeKey` is
constructed by its type and key separated by a dot.
For example:
@@ -30,24 +31,79 @@ For example:
would be equal to the composite key of `jack.account.number`.
Let's take a look at the `[tx_index]` config section:
By default, Tendermint will index all transactions by their respective hashes
and height and blocks by their height.
## Configuration
Operators can configure indexing via the `[tx_index]` section. The `indexer`
field takes a series of supported indexers. If `null` is included, indexing will
be turned off regardless of other values provided.
```toml
##### transactions indexer configuration options #####
[tx_index]
[tx-index]
# What indexer to use for transactions
# The backend database to back the indexer.
# If indexer is "null", no indexer service will be used.
#
# The application will set which txs to index. In some cases a node operator will be able
# to decide which txs to index based on configuration set in the application.
#
# Options:
# 1) "null"
# 2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
indexer = "kv"
# - When "kv" is chosen "tx.height" and "tx.hash" will always be indexed.
# 3) "psql" - the indexer services backed by PostgreSQL.
# indexer = "kv"
```
By default, Tendermint will index all transactions by their respective
hashes and height using an embedded simple indexer.
### Supported Indexers
You can turn off indexing completely by setting `tx_index` to `null`.
#### KV
The `kv` indexer type is an embedded key-value store supported by the main
underlying Tendermint database. Using the `kv` indexer type allows you to query
for block and transaction events directly against Tendermint's RPC. However, the
query syntax is limited and so this indexer type might be deprecated or removed
entirely in the future.
#### PostgreSQL
The `psql` indexer type allows an operator to enable block and transaction event
indexing by proxying it to an external PostgreSQL instance allowing for the events
to be stored in relational models. Since the events are stored in a RDBMS, operators
can leverage SQL to perform a series of rich and complex queries that are not
supported by the `kv` indexer type. Since operators can leverage SQL directly,
searching is not enabled for the `psql` indexer type via Tendermint's RPC -- any
such query will fail.
Note, the SQL schema is stored in `state/indexer/sink/psql/schema.sql` and operators
must explicitly create the relations prior to starting Tendermint and enabling
the `psql` indexer type.
Example:
```shell
$ psql ... -f state/indexer/sink/psql/schema.sql
```
## Default Indexes
The Tendermint tx and block event indexer indexes a few select reserved events
by default.
### Transactions
The following keys are indexed by default:
- `tx.height`
- `tx.hash`
### Blocks
The following keys are indexed by default:
- `block.height`
## Adding Events
@@ -77,19 +133,21 @@ func (app *KVStoreApplication) DeliverTx(req types.RequestDeliverTx) types.Resul
}
```
The transaction will be indexed (if the indexer is not `null`) with a certain attribute if the attribute's `Index` field is set to `true`.
In the above example, all attributes will be indexed.
If the indexer is not `null`, the transaction will be indexed. Each event is
indexed using a composite key in the form of `{eventType}.{eventAttribute}={eventValue}`,
e.g. `transfer.sender=bob`.
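As a sketch of what produces such a key (assuming the v0.34-era ABCI types, where attribute keys and values are byte slices and only attributes with `Index: true` are indexed), the `transfer.sender=bob` composite key above could come from an event like:

```go
import abci "github.com/tendermint/tendermint/abci/types"

event := abci.Event{
	Type: "transfer",
	Attributes: []abci.EventAttribute{
		// Index: true marks this attribute for indexing by the kv indexer.
		{Key: []byte("sender"), Value: []byte("bob"), Index: true},
	},
}
```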
## Querying Transactions
## Querying Transaction Events
You can query the transaction results by calling `/tx_search` RPC endpoint:
You can query for a paginated set of transactions by their events by calling the
`/tx_search` RPC endpoint:
```bash
curl "localhost:26657/tx_search?query=\"account.name='igor'\"&prove=true"
curl "localhost:26657/tx_search?query=\"message.sender='cosmos1...'\"&prove=true"
```
Check out [API docs](https://docs.tendermint.com/master/rpc/#/Info/tx_search) for more information
on query syntax and other options.
Check out [API docs](https://docs.tendermint.com/master/rpc/#/Info/tx_search)
for more information on query syntax and other options.
## Subscribing to Transactions
@@ -102,10 +160,22 @@ a query to `/subscribe` RPC endpoint.
"method": "subscribe",
"id": "0",
"params": {
"query": "account.name='igor'"
"query": "message.sender='cosmos1...'"
}
}
```
Check out [API docs](https://docs.tendermint.com/master/rpc/#subscribe) for more information
on query syntax and other options.
## Querying Block Events
You can query for a paginated set of blocks by their events by calling the
`/block_search` RPC endpoint:
```bash
curl "localhost:26657/block_search?query=\"block.height > 10 AND val_set.num_changed > 0\""
```
Check out [API docs](https://docs.tendermint.com/master/rpc/#/Info/block_search)
for more information on query syntax and other options.

View File

@@ -1,81 +0,0 @@
---
order: 1
parent:
order: false
---
# Architecture Decision Records (ADR)
This is a location to record all high-level architecture decisions in the tendermint project.
You can read more about the ADR concept in this [blog post](https://product.reverb.com/documenting-architecture-decisions-the-reverb-way-a3563bb24bd0#.78xhdix6t).
An ADR should provide:
- Context on the relevant goals and the current state
- Proposed changes to achieve the goals
- Summary of pros and cons
- References
- Changelog
Note the distinction between an ADR and a spec. The ADR provides the context, intuition, reasoning, and
justification for a change in architecture, or for the architecture of something
new. The spec is a much more compressed and streamlined summary of everything as
it stands today.
If recorded decisions turned out to be lacking, convene a discussion, record the new decisions here, and then modify the code to match.
Note the context/background should be written in the present tense.
### Table of Contents:
- [ADR-001-Logging](./adr-001-logging.md)
- [ADR-002-Event-Subscription](./adr-002-event-subscription.md)
- [ADR-003-ABCI-APP-RPC](./adr-003-abci-app-rpc.md)
- [ADR-004-Historical-Validators](./adr-004-historical-validators.md)
- [ADR-005-Consensus-Params](./adr-005-consensus-params.md)
- [ADR-006-Trust-Metric](./adr-006-trust-metric.md)
- [ADR-007-Trust-Metric-Usage](./adr-007-trust-metric-usage.md)
- [ADR-008-Priv-Validator](./adr-008-priv-validator.md)
- [ADR-009-ABCI-Design](./adr-009-ABCI-design.md)
- [ADR-010-Crypto-Changes](./adr-010-crypto-changes.md)
- [ADR-011-Monitoring](./adr-011-monitoring.md)
- [ADR-012-Peer-Transport](./adr-012-peer-transport.md)
- [ADR-013-Symmetric-Crypto](./adr-013-symmetric-crypto.md)
- [ADR-014-Secp-Malleability](./adr-014-secp-malleability.md)
- [ADR-015-Crypto-Encoding](./adr-015-crypto-encoding.md)
- [ADR-016-Protocol-Versions](./adr-016-protocol-versions.md)
- [ADR-017-Chain-Versions](./adr-017-chain-versions.md)
- [ADR-018-ABCI-Validators](./adr-018-ABCI-Validators.md)
- [ADR-019-Multisigs](./adr-019-multisigs.md)
- [ADR-020-Block-Size](./adr-020-block-size.md)
- [ADR-021-ABCI-Events](./adr-021-abci-events.md)
- [ADR-022-ABCI-Errors](./adr-022-abci-errors.md)
- [ADR-023-ABCI-Propose-tx](./adr-023-ABCI-propose-tx.md)
- [ADR-024-Sign-Bytes](./adr-024-sign-bytes.md)
- [ADR-025-Commit](./adr-025-commit.md)
- [ADR-026-General-Merkle-Proof](./adr-026-general-merkle-proof.md)
- [ADR-028-libp2p](./adr-028-libp2p.md)
- [ADR-029-Check-Tx-Consensus](./adr-029-check-tx-consensus.md)
- [ADR-030-Consensus-Refactor](./adr-030-consensus-refactor.md)
- [ADR-031-Changelog-structure](./adr-031-changelog.md)
- [ADR-033-Pubsub](./adr-033-pubsub.md)
- [ADR-034-Priv-Validator-File-Structure](./adr-034-priv-validator-file-structure.md)
- [ADR-035-Documentation](./adr-035-documentation.md)
- [ADR-037-Deliver-Block](./adr-037-deliver-block.md)
- [ADR-038-non-zero-start-height](./adr-038-non-zero-start-height.md)
- [ADR-039-Peer-Behaviour](./adr-039-peer-behaviour.md)
- [ADR-041-Proposer-Selection-via-ABCI](./adr-041-proposer-selection-via-abci.md)
- [ADR-043-Blockchain-RiRi-Org](./adr-043-blockchain-riri-org.md)
- [ADR-044-Lite-Client-With-Weak-Subjectivity](./adr-044-lite-client-with-weak-subjectivity.md)
- [ADR-045-ABCI-Evidence](./adr-045-abci-evidence.md)
- [ADR-046-Light-Client-Implementation](./adr-046-light-client-implementation.md)
- [ADR-047-Handling-Evidence-From-Light-Client](./adr-047-handling-evidence-from-light-client.md)
- [ADR-051-Double-Signing-Risk-Reduction](./adr-051-double-signing-risk-reduction.md)
- [ADR-052-Tendermint-Mode](./adr-052-tendermint-mode.md)
- [ADR-053-State-Sync-Prototype](./adr-053-state-sync-prototype.md)
- [ADR-054-crypto-encoding-2](./adr-054-crypto-encoding-2.md)
- [ADR-055-protobuf-design](./adr-055-protobuf-design.md)
- [ADR-056-proving-amnesia-attacks](./adr-056-proving-amnesia-attacks.md)
- [ADR-057-RPC](./adr-057-RPC.md)
- [ADR-058-event-hashing](./adr-058-event-hashing.md)

View File

@@ -1,216 +0,0 @@
# ADR 1: Logging
## Context
The current logging system in Tendermint is very static and not flexible enough.
Issues: [358](https://github.com/tendermint/tendermint/issues/358), [375](https://github.com/tendermint/tendermint/issues/375).
What we want from the new system:
- per package dynamic log levels
- dynamic logger setting (logger tied to the processing struct)
- conventions
- be more visually appealing
"dynamic" here means the ability to set smth in runtime.
## Decision
### 1) An interface
First, we will need an interface for all of our libraries (`tmlibs`, Tendermint, etc.). My personal preference is the go-kit `Logger` interface (see Appendix A.), but that is too big a change. Plus we will still need levels.
```go
# log.go
type Logger interface {
Debug(msg string, keyvals ...interface{}) error
Info(msg string, keyvals ...interface{}) error
Error(msg string, keyvals ...interface{}) error
With(keyvals ...interface{}) Logger
}
```
On a side note: difference between `Info` and `Notice` is subtle. We probably
could do without `Notice`. Don't think we need `Panic` or `Fatal` as a part of
the interface. These funcs could be implemented as helpers. In fact, we already
have some in `tmlibs/common`.
- `Debug` - extended output for devs
- `Info` - all that is useful for a user
- `Error` - errors
`Notice` should become `Info`, `Warn` either `Error` or `Debug` depending on the message, `Crit` -> `Error`.
This interface should go into `tmlibs/log`. All libraries which are part of the core (tendermint/tendermint) should obey it.
### 2) Logger with our current formatting
On top of this interface, we will need to implement a stdout logger, which will be used when Tendermint is configured to output logs to STDOUT.
Many people say that they like the current output, so let's stick with it.
```
NOTE[2017-04-25|14:45:08] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0
```
Couple of minor changes:
```
I[2017-04-25|14:45:08.322] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0
```
Notice the level is encoded using only one char plus milliseconds.
Note: there are many other formats out there like [logfmt](https://brandur.org/logfmt).
This logger could be implemented using any logger - [logrus](https://github.com/sirupsen/logrus), [go-kit/log](https://github.com/go-kit/kit/tree/master/log), [zap](https://github.com/uber-go/zap), log15 - so long as it
a) supports coloring output<br>
b) is moderately fast (buffering) <br>
c) conforms to the new interface or adapter could be written for it <br>
d) is somewhat configurable<br>
go-kit is my favorite so far. Check out how easy it is to color errors in red https://github.com/go-kit/kit/blob/master/log/term/example_test.go#L12. Although, coloring could only be applied to the whole string :(
```
go-kit +: flexible, modular
go-kit “-”: logfmt format https://brandur.org/logfmt
logrus +: popular, feature rich (hooks), API and output is more like what we want
logrus -: not so flexible
```
```go
# tm_logger.go
// NewTmLogger returns a logger that encodes keyvals to the Writer in
// tm format.
func NewTmLogger(w io.Writer) Logger {
return &tmLogger{kitlog.NewLogfmtLogger(w)}
}
func (l tmLogger) SetLevel(lvl string) {
switch lvl {
case "debug":
l.sourceLogger = level.NewFilter(l.sourceLogger, level.AllowDebug())
}
}
func (l tmLogger) Info(msg string, keyvals ...interface{}) error {
return l.sourceLogger.Log(append([]interface{}{"msg", msg}, keyvals...)...)
}
# log.go
func With(logger Logger, keyvals ...interface{}) Logger {
return &tmLogger{kitlog.With(logger.(*tmLogger).sourceLogger, keyvals...)}
}
```
Usage:
```go
logger := log.NewTmLogger(os.Stdout)
logger.SetLevel(config.GetString("log_level"))
node.SetLogger(log.With(logger, "node", Name))
```
**Other log formatters**
In the future, we may want other formatters like JSONFormatter.
```
{ "level": "notice", "time": "2017-04-25 14:45:08.562471297 -0400 EDT", "module": "consensus", "msg": "ABCI Replay Blocks", "appHeight": 0, "storeHeight": 0, "stateHeight": 0 }
```
### 3) Dynamic logger setting
https://dave.cheney.net/2017/01/23/the-package-level-logger-anti-pattern
This is the hardest part and where the most work will be done. logger should be tied to the processing struct, or the context if it adds some fields to the logger.
```go
type BaseService struct {
log log15.Logger
name string
started uint32 // atomic
stopped uint32 // atomic
...
}
```
BaseService already contains `log` field, so most of the structs embedding it should be fine. We should rename it to `logger`.
The only thing missing is the ability to set logger:
```
func (bs *BaseService) SetLogger(l log.Logger) {
bs.logger = l
}
```
### 4) Conventions
Important keyvals should go first. Example:
```
correct
I[2017-04-25|14:45:08.322] ABCI Replay Blocks module=consensus instance=1 appHeight=0 storeHeight=0 stateHeight=0
```
not
```
wrong
I[2017-04-25|14:45:08.322] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0 instance=1
```
For that, in most cases, you'll need to add the `instance` field to a logger upon creation, not when you log a particular message:
```go
colorFn := func(keyvals ...interface{}) term.FgBgColor {
for i := 1; i < len(keyvals); i += 2 {
if keyvals[i] == "instance" && keyvals[i+1] == "1" {
return term.FgBgColor{Fg: term.Blue}
} else if keyvals[i] == "instance" && keyvals[i+1] == "2" {
return term.FgBgColor{Fg: term.Red}
}
}
return term.FgBgColor{}
}
logger := term.NewLogger(os.Stdout, log.NewTmLogger, colorFn)
c1 := NewConsensusReactor(...)
c1.SetLogger(log.With(logger, "instance", 1))
c2 := NewConsensusReactor(...)
c2.SetLogger(log.With(logger, "instance", 2))
```
## Status
proposed
## Consequences
### Positive
Dynamic logger, which could be turned off for some modules at runtime. Public interface for other projects using Tendermint libraries.
### Negative
We may lose the ability to color keys in key-value pairs. go-kit allows you to easily change foreground / background colors of the whole string, but not its parts.
### Neutral
## Appendix A.
I really like the minimalistic approach go-kit took with its logger https://github.com/go-kit/kit/tree/master/log:
```
type Logger interface {
Log(keyvals ...interface{}) error
}
```
See [The Hunt for a Logger Interface](https://go-talks.appspot.com/github.com/ChrisHines/talks/structured-logging/structured-logging.slide). The advantage is greater composability (check out how go-kit defines colored logging or log-leveled logging on top of this interface https://github.com/go-kit/kit/tree/master/log).

View File

@@ -1,88 +0,0 @@
# ADR 2: Event Subscription
## Context
In the light client (or any other client), the user may want to **subscribe to
a subset of transactions** (rather than all of them) using `/subscribe?event=X`. For
example, I want to subscribe for all transactions associated with a particular
account. Same for fetching. The user may want to **fetch transactions based on
some filter** (rather than fetching all the blocks). For example, I want to get
all transactions for a particular account in the last two weeks (`tx's block time >= '2017-06-05'`).
Now you can't even subscribe to "all txs" in Tendermint.
The goal is a simple and easy to use API for doing that.
![Tx Send Flow Diagram](img/tags1.png)
## Decision
The ABCI app returns tags with a `DeliverTx` response inside the `data` field (_for
now; later we may create a separate field_). Tags are a list of key-value pairs,
protobuf encoded.
Example data:
```json
{
"abci.account.name": "Igor",
"abci.account.address": "0xdeadbeef",
"tx.gas": 7
}
```
### Subscribing for transactions events
If the user wants to receive only a subset of transactions, the ABCI-app must
return a list of tags with a `DeliverTx` response. These tags will be parsed and
matched against the current queries (subscribers). If a query matches the tags,
the subscriber will get the transaction event.
```
/subscribe?query="tm.event = Tx AND tx.hash = AB0023433CF0334223212243BDD AND abci.account.invoice.number = 22"
```
A new package must be developed to replace the current `events` package. It
will allow clients to subscribe to different types of events in the future:
```
/subscribe?query="abci.account.invoice.number = 22"
/subscribe?query="abci.account.invoice.owner CONTAINS Igor"
```
### Fetching transactions
This is a bit tricky because a) we want to support a number of indexers, all of
which have different APIs, and b) we don't know whether tags will be sufficient
for most apps (I guess we'll see).
```
/txs/search?query="tx.hash = AB0023433CF0334223212243BDD AND abci.account.owner CONTAINS Igor"
/txs/search?query="abci.account.owner = Igor"
```
For historic queries we will need an indexing store (Postgres, SQLite, ...).
### Issues
- https://github.com/tendermint/tendermint/issues/376
- https://github.com/tendermint/tendermint/issues/287
- https://github.com/tendermint/tendermint/issues/525 (related)
## Status
proposed
## Consequences
### Positive
- same format for event notifications and search APIs
- powerful enough query
### Negative
- performance of the `match` function (where we have too many queries / subscribers)
- there is an issue where there are too many txs in the DB
### Neutral

View File

@@ -1,34 +0,0 @@
# ADR 3: Must an ABCI-app have an RPC server?
## Context
ABCI-server could expose its own RPC-server and act as a proxy to Tendermint.
The idea was for the Tendermint RPC to just be a transparent proxy to the app.
Clients need to talk to Tendermint for proofs, unless we burden all app devs
with exposing Tendermint proof stuff. Also seems less complex to lock down one
server than two, but granted it makes querying a bit more kludgy since it needs
to be passed as a `Query`. Also, **having a very standard rpc interface means
the light-client can work with all apps and handle proofs**. The only
app-specific logic is decoding the binary data to a more readable form (eg.
json). This is a huge advantage for code-reuse and standardization.
## Decision
We don't expose an RPC server on any of our ABCI-apps.
## Status
accepted
## Consequences
### Positive
- Unified interface for all apps
### Negative
- `Query` interface
### Neutral

View File

@@ -1,38 +0,0 @@
# ADR 004: Historical Validators
## Context
Right now, we can query the present validator set, but there is no history.
If you were offline for a long time, there is no way to reconstruct past validators. This is needed for the light client, and we agreed it needs an enhancement of the API.
## Decision
For every block, store a new structure that contains either the latest validator set,
or the height of the last block for which the validator set changed. Note this is not
the height of the block which returned the validator set change itself, but the next block,
ie. the first block it comes into effect for.
Storing the validators will be handled by the `state` package.
At some point in the future, we may consider more efficient storage in the case where the validators
are updated frequently - for instance by only saving the diffs, rather than the whole set.
An alternative approach suggested keeping the validator set, or diffs of it, in a merkle IAVL tree.
While it might afford cheaper proofs that a validator set has not changed, it would be more complex,
and likely less efficient.
## Status
Accepted.
## Consequences
### Positive
- Can query old validator sets, with proof.
### Negative
- Writes an extra structure to disk with every block.
### Neutral

View File

@@ -1,85 +0,0 @@
# ADR 005: Consensus Params
## Context
Consensus critical parameters controlling blockchain capacity have until now been hard coded, loaded from a local config, or neglected.
Since they may need to be different in different networks, and potentially evolve over time within
networks, we seek to initialize them in a genesis file, and expose them through the ABCI.
While we have some specific parameters now, like maximum block and transaction size, we expect to have more in the future,
such as a period over which evidence is valid, or the frequency of checkpoints.
## Decision
### ConsensusParams
No consensus critical parameters should ever be found in the `config.toml`.
A new `ConsensusParams` is optionally included in the `genesis.json` file,
and loaded into the `State`. Any items not included are set to their default value.
A value of 0 is undefined (see ABCI, below). A value of -1 is used to indicate the parameter does not apply.
The parameters are used to determine the validity of a block (and tx) via the union of all relevant parameters.
```
type ConsensusParams struct {
BlockSize
TxSize
BlockGossip
}
type BlockSize struct {
MaxBytes int
MaxTxs int
MaxGas int
}
type TxSize struct {
MaxBytes int
MaxGas int
}
type BlockGossip struct {
BlockPartSizeBytes int
}
```
The `ConsensusParams` can evolve over time by adding new structs that cover different aspects of the consensus rules.
The `BlockPartSizeBytes` and the `BlockSize.MaxBytes` are enforced to be greater than 0.
The former because we need a part size, the latter so that we always have at least some sanity check over the size of blocks.
### ABCI
#### InitChain
InitChain currently takes the initial validator set. It should be extended to also take parts of the ConsensusParams.
There is some case to be made for it to take the entire Genesis, except there may be things in the genesis,
like the BlockPartSize, that the app shouldn't really know about.
#### EndBlock
The EndBlock response includes a `ConsensusParams`, which includes BlockSize and TxSize, but not BlockGossip.
Other param structs can be added to `ConsensusParams` in the future.
The `0` value is used to denote no change.
Any other value will update that parameter in the `State.ConsensusParams`, to be applied for the next block.
Tendermint should have hard-coded upper limits as sanity checks.
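To illustrate (the types here are hypothetical sketches following the structs above, not any concrete ABCI version), an application raising only the block size limit at EndBlock would leave every other field at its zero value:

```go
// Zero-valued fields mean "no change"; only BlockSize.MaxBytes is updated.
func (app *ExampleApp) EndBlock(height int64) ResponseEndBlock {
	return ResponseEndBlock{
		ConsensusParams: &ConsensusParams{
			BlockSize: BlockSize{MaxBytes: 2097152}, // 2 MiB; MaxTxs and MaxGas stay 0 (unchanged)
		},
	}
}
```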
## Status
Proposed.
## Consequences
### Positive
- Alternative capacity limits and consensus parameters can be specified without re-compiling the software.
- They can also change over time under the control of the application
### Negative
- More exposed parameters is more complexity
- Different rules at different heights in the blockchain complicates fast sync
### Neutral
- The TxSize, which checks validity, may be in conflict with the config's `max_block_size_tx`, which determines proposal sizes

View File

@@ -1,229 +0,0 @@
# ADR 006: Trust Metric Design
## Context
The proposed trust metric will allow Tendermint to maintain local trust rankings for peers it has directly interacted with, which can then be used to implement soft security controls. The calculations were obtained from the [TrustGuard](https://dl.acm.org/citation.cfm?id=1060808) project.
### Background
The Tendermint Core project developers would like to improve Tendermint security and reliability by keeping track of the level of trustworthiness peers have demonstrated within the peer-to-peer network. This way, undesirable outcomes from peers will not immediately result in them being dropped from the network (potentially causing drastic changes to take place). Instead, peers' behavior can be monitored with appropriate metrics, and a peer can be removed from the network once Tendermint Core is certain it is a threat. For example, when the PEXReactor makes a request for peers' network addresses from an already known peer, and the returned network addresses are unreachable, this untrustworthy behavior should be tracked. Returning a few bad network addresses probably shouldn't cause a peer to be dropped, while excessive amounts of this behavior do qualify the peer for being dropped.
Trust metrics can be circumvented by malicious nodes through the use of strategic oscillation techniques, which adapt the malicious node's behavior pattern in order to maximize its goals. For instance, if the malicious node learns that the time interval of the Tendermint trust metric is _X_ hours, then it could wait _X_ hours in-between malicious activities. We could try to combat this issue by increasing the interval length, yet this will make the system less adaptive to recent events.
Instead, having shorter intervals, but keeping a history of interval values, will give our metric the flexibility needed in order to keep the network stable, while also making it resilient against a strategic malicious node in the Tendermint peer-to-peer network. Also, the metric can access trust data over a rather long period of time while not greatly increasing its history size by aggregating older history values over a larger number of intervals, and at the same time, maintain great precision for the recent intervals. This approach is referred to as fading memories, and closely resembles the way human beings remember their experiences. The trade-off to using history data is that the interval values should be preserved in-between executions of the node.
### References
S. Mudhakar, L. Xiong, and L. Liu, “TrustGuard: Countering Vulnerabilities in Reputation Management for Decentralized Overlay Networks,” in _Proceedings of the 14th international conference on World Wide Web, pp. 422-431_, May 2005.
## Decision
The proposed trust metric will allow a developer to inform the trust metric store of all good and bad events relevant to a peer's behavior, and at any time, the metric can be queried for a peer's current trust ranking.
The three subsections below will cover the process being considered for calculating the trust ranking, the concept of the trust metric store, and the interface for the trust metric.
### Proposed Process
The proposed trust metric will count good and bad events relevant to the object, and calculate the percent of counters that are good over an interval with a predefined duration. This is the procedure that will continue for the life of the trust metric. When the trust metric is queried for the current **trust value**, a resilient equation will be utilized to perform the calculation.
The equation being proposed resembles a Proportional-Integral-Derivative (PID) controller used in control systems. The proportional component allows us to be sensitive to the value of the most recent interval, while the integral component allows us to incorporate trust values stored in the history data, and the derivative component allows us to give weight to sudden changes in the behavior of a peer. We compute the trust value of a peer in interval i based on its current trust ranking, its trust rating history prior to interval _i_ (over the past _maxH_ number of intervals) and its trust ranking fluctuation. We will break up the equation into the three components.
```math
(1) Proportional Value = a * R[i]
```
where _R_[*i*] denotes the raw trust value at time interval _i_ (with _i_ == 0 being the current time) and _a_ is the weight applied to the contribution of the current reports. The next component of our equation uses a weighted sum over the last _maxH_ intervals to calculate the history value for time _i_:
`H[i] =` ![formula1](img/formula1.png "Weighted Sum Formula")
The weights can be chosen either optimistically or pessimistically. An optimistic weight creates larger weights for newer history data values, while the pessimistic weight creates larger weights for time intervals with lower scores. The default weights used during the calculation of the history value are optimistic and calculated as _Wk_ = 0.8^_k_, for time interval _k_. With the history value available, we can now finish calculating the integral value:
```math
(2) Integral Value = b * H[i]
```
Where _H_[*i*] denotes the history value at time interval _i_ and _b_ is the weight applied to the contribution of past performance for the object being measured. The derivative component will be calculated as follows:
```math
D[i] = R[i] - H[i]
(3) Derivative Value = c(D[i]) * D[i]
```
Where the value of _c_ is selected based on the _D_[*i*] value relative to zero. The default selection process makes _c_ equal to 0 unless _D_[*i*] is a negative value, in which case c is equal to 1. The result is that the maximum penalty is applied when current behavior is lower than previously experienced behavior. If the current behavior is better than the previously experienced behavior, then the Derivative Value has no impact on the trust value. With the three components brought together, our trust value equation is calculated as follows:
```math
TrustValue[i] = a * R[i] + b * H[i] + c(D[i]) * D[i]
```
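To make the combination concrete, a small sketch of the final calculation (names follow the equations above; computing _R_ and _H_ from the raw counters and history is assumed to happen elsewhere):

```go
// trustValue combines the proportional, integral, and derivative components.
// r is the raw value for the current interval, h the weighted history value,
// and a, b the proportional and integral weights.
func trustValue(a, b, r, h float64) float64 {
	d := r - h
	c := 0.0
	if d < 0 { // maximum penalty only when current behavior is worse than history
		c = 1.0
	}
	return a*r + b*h + c*d
}
```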
As a performance optimization that will keep the amount of raw interval data being saved to a reasonable size of _m_, while allowing us to represent 2^_m_ - 1 history intervals, we can employ the fading memories technique that will trade space and time complexity for the precision of the history data values by summarizing larger quantities of less recent values. While our equation above attempts to access up to _maxH_ (which can be 2^_m_ - 1), we will map those requests down to _m_ values using equation 4 below:
```math
(4) j = index, where index > 0
```
Where _j_ is one of the _(0, 1, 2, … , m - 1)_ indices used to access history interval data. Now we can access the raw intervals using the following calculations:
```math
R[0] = raw data for current time interval
```
`R[j] =` ![formula2](img/formula2.png "Fading Memories Formula")
### Trust Metric Store
Similar to the P2P subsystem AddrBook, the trust metric store will maintain information relevant to Tendermint peers. Additionally, the trust metric store will ensure that trust metrics will only be active for peers that a node is currently and directly engaged with.
Reactors will provide a peer key to the trust metric store in order to retrieve the associated trust metric. The trust metric can then record new positive and negative events experienced by the reactor, as well as provide the current trust score calculated by the metric.
When the node is shutting down, the trust metric store will save history data for trust metrics associated with all known peers. This saved information allows experiences with a peer to be preserved across node executions, which can span a tracking window of days or weeks. The trust history data is loaded automatically during OnStart.
### Interface Detailed Design
Each trust metric allows for the recording of positive/negative events, querying the current trust value/score, and the stopping/pausing of tracking over time intervals. This can be seen below:
```go
// TrustMetric - keeps track of peer reliability
type TrustMetric struct {
// Private elements.
}
// Pause tells the metric to pause recording data over time intervals.
// All method calls that indicate events will unpause the metric
func (tm *TrustMetric) Pause() {}
// Stop tells the metric to stop recording data over time intervals
func (tm *TrustMetric) Stop() {}
// BadEvents indicates that an undesirable event(s) took place
func (tm *TrustMetric) BadEvents(num int) {}
// GoodEvents indicates that a desirable event(s) took place
func (tm *TrustMetric) GoodEvents(num int) {}
// TrustValue gets the dependable trust value; always between 0 and 1
func (tm *TrustMetric) TrustValue() float64 {}
// TrustScore gets a score based on the trust value always between 0 and 100
func (tm *TrustMetric) TrustScore() int {}
// NewMetric returns a trust metric with the default configuration
func NewMetric() *TrustMetric {}
//------------------------------------------------------------------------------------------------
// For example
tm := NewMetric()
tm.BadEvents(1)
score := tm.TrustScore()
tm.Stop()
```
Some of the trust metric parameters can be configured. The weight values should probably be left alone in most cases, yet the time durations for the tracking window and individual time interval should be considered.
```go
// TrustMetricConfig - Configures the weight functions and time intervals for the metric
type TrustMetricConfig struct {
// Determines the percentage given to current behavior
ProportionalWeight float64
// Determines the percentage given to prior behavior
IntegralWeight float64
// The window of time that the trust metric will track events across.
// This can be set to cover many days without issue
TrackingWindow time.Duration
// Each interval should be short for adaptability.
// Less than 30 seconds is too sensitive,
// and greater than 5 minutes will make the metric numb
IntervalLength time.Duration
}
// DefaultConfig returns a config with values that have been tested and produce desirable results
func DefaultConfig() TrustMetricConfig {}
// NewMetricWithConfig returns a trust metric with a custom configuration
func NewMetricWithConfig(tmc TrustMetricConfig) *TrustMetric {}
//------------------------------------------------------------------------------------------------
// For example
config := TrustMetricConfig{
TrackingWindow: time.Minute * 60 * 24, // one day
IntervalLength: time.Minute * 2,
}
tm := NewMetricWithConfig(config)
tm.BadEvents(10)
tm.Pause()
tm.GoodEvents(1) // becomes active again
```
A trust metric store should be created with a DB that has persistent storage so it can save history data across node executions. All trust metrics instantiated by the store will be created with the provided TrustMetricConfig configuration.
When you attempt to fetch the trust metric for a peer, and an entry does not exist in the trust metric store, a new metric is automatically created and the entry made within the store.
In addition to the fetching method, GetPeerTrustMetric, the trust metric store provides a method to call when a peer has disconnected from the node. This is so the metric can be paused (history data will not be saved) for periods of time when the node is not having direct experiences with the peer.
```go
// TrustMetricStore - Manages all trust metrics for peers
type TrustMetricStore struct {
cmn.BaseService
// Private elements
}
// OnStart implements Service
func (tms *TrustMetricStore) OnStart() error {}
// OnStop implements Service
func (tms *TrustMetricStore) OnStop() {}
// NewTrustMetricStore returns a store that saves data to the DB
// and uses the config when creating new trust metrics
func NewTrustMetricStore(db dbm.DB, tmc TrustMetricConfig) *TrustMetricStore {}
// Size returns the number of entries in the trust metric store
func (tms *TrustMetricStore) Size() int {}
// GetPeerTrustMetric returns a trust metric by peer key
func (tms *TrustMetricStore) GetPeerTrustMetric(key string) *TrustMetric {}
// PeerDisconnected pauses the trust metric associated with the peer identified by the key
func (tms *TrustMetricStore) PeerDisconnected(key string) {}
//------------------------------------------------------------------------------------------------
// For example
db := dbm.NewDB("trusthistory", "goleveldb", dirPathStr)
tms := NewTrustMetricStore(db, DefaultConfig())
tm := tms.GetPeerTrustMetric(key)
tm.BadEvents(1)
tms.PeerDisconnected(key)
```
## Status
Approved.
## Consequences
### Positive
- The trust metric will allow Tendermint to make non-binary security and reliability decisions
- Will help Tendermint implement deterrents that provide soft security controls, yet avoids disruption on the network
- Will provide useful profiling information when analyzing performance over time related to peer interaction
### Negative
- Requires saving the trust metric history data across node executions
### Neutral
- Keep in mind that good events need to be recorded just as bad events do with this implementation

View File

@@ -1,106 +0,0 @@
# ADR 007: Trust Metric Usage Guide
## Context
Tendermint is required to monitor peer quality in order to inform its peer dialing and peer exchange strategies.
When a node first connects to the network, it is important that it can quickly find good peers.
Thus, while a node has fewer connections, it should prioritize connecting to higher quality peers.
As the node becomes well connected to the rest of the network, it can dial lesser known or lesser
quality peers and help assess their quality. Similarly, when queried for peers, a node should make
sure they don't return low quality peers.
Peer quality can be tracked using a trust metric that flags certain behaviours as good or bad. When enough
bad behaviour accumulates, we can mark the peer as bad and disconnect.
For example, when the PEXReactor makes a request for peers' network addresses from an already known peer, and the returned network addresses are unreachable, this undesirable behavior should be tracked. Returning a few bad network addresses probably shouldn't cause a peer to be dropped, while excessive amounts of this behavior do qualify the peer for removal. The originally proposed approach and design document for the trust metric can be found in the [ADR 006](adr-006-trust-metric.md) document.
The trust metric implementation allows a developer to obtain a peer's trust metric from a trust metric store, and track good and bad events relevant to a peer's behavior, and at any time, the peer's metric can be queried for a current trust value. The current trust value is calculated with a formula that utilizes current behavior, previous behavior, and change between the two. Current behavior is calculated as the percentage of good behavior within a time interval. The time interval is short; probably set between 30 seconds and 5 minutes. On the other hand, the historic data can estimate a peer's behavior over days worth of tracking. At the end of a time interval, the current behavior becomes part of the historic data, and a new time interval begins with the good and bad counters reset to zero.
These are some important things to keep in mind regarding how the trust metrics handle time intervals and scoring:
- Each new time interval begins with a perfect score
- Bad events quickly bring the score down and good events cause the score to slowly rise
- When the time interval is over, the percentage of good events becomes historic data.
Some useful information about the inner workings of the trust metric:
- When a trust metric is first instantiated, a timer (ticker) periodically fires in order to handle transitions between trust metric time intervals
- If a peer is disconnected from a node, the timer should be paused, since the node is no longer connected to that peer
- The ability to pause the metric is supported with the store **PeerDisconnected** method and the metric **Pause** method
- After a pause, if a good or bad event method is called on a metric, it automatically becomes unpaused and begins a new time interval.
## Decision
The trust metric capability is now available, yet it still leaves the question of how it should be applied throughout Tendermint in order to properly track the quality of peers.
### Proposed Process
Peers are managed using an address book and a trust metric:
- The address book keeps a record of peers and provides selection methods
- The trust metric tracks the quality of the peers
#### Presence in Address Book
Outbound peers are added to the address book before they are dialed,
and inbound peers are added once the peer connection is set up.
Peers are also added to the address book when they are received in response to
a pexRequestMessage.
While a node has less than `needAddressThreshold`, it will periodically request more,
via pexRequestMessage, from randomly selected peers and from newly dialed outbound peers.
When a new address is added to an address book that has more than `0.5*needAddressThreshold` addresses,
then with some low probability, a randomly chosen low quality peer is removed.
#### Outbound Peers
Peers attempt to maintain a minimum number of outbound connections by
repeatedly querying the address book for peers to connect to.
While a node has few to no outbound connections, the address book is biased to return
higher quality peers. As the node increases the number of outbound connections,
the address book is biased to return less-vetted or lower-quality peers.
#### Inbound Peers
Peers also maintain a maximum number of total connections, MaxNumPeers.
If a peer has MaxNumPeers, new incoming connections will be accepted with low probability.
When such a new connection is accepted, the peer disconnects from a probabilistically chosen low ranking peer
so it does not exceed MaxNumPeers.
#### Peer Exchange
When a peer receives a pexRequestMessage, it returns a random sample of high quality peers from the address book. Peers with no score or a low score should not be included in a response to pexRequestMessage.
#### Peer Quality
Peer quality is tracked in the connection and across the reactors by storing the TrustMetric in the peer's
thread-safe Data store.
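As a sketch, assuming the `Peer` exposes `Set`/`Get` on its Data store (the key name and types here are illustrative):

```go
// On connection setup: attach the peer's trust metric.
peer.Set("trust_metric", tm)

// Later, in any reactor handling a message from this peer:
if tm, ok := peer.Get("trust_metric").(*trust.TrustMetric); ok {
	tm.BadEvents(1) // e.g. the message failed to unmarshal
}
```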
Peer behaviour is then defined as one of the following:
- Fatal - something outright malicious that causes us to disconnect the peer and ban it from the address book for some amount of time
- Bad - Any kind of timeout, messages that don't unmarshal, fail other validity checks, or messages we didn't ask for or aren't expecting (usually worth one bad event)
- Neutral - Unknown channels/message types/version upgrades (no good or bad events recorded)
- Correct - Normal correct behavior (worth one good event)
- Good - some random majority of peers per reactor sending us useful messages (worth more than one good event).
Note that Fatal behaviour causes us to remove the peer, and neutral behaviour does not affect the score.
## Status
Proposed.
## Consequences
### Positive
- Bringing the address book and trust metric store together will cause the network to be built in a way that encourages greater security and reliability.
### Negative
- TBD
### Neutral
- Keep in mind that good events need to be recorded just as bad events do with this implementation.


@@ -1,35 +0,0 @@
# ADR 008: SocketPV
Tendermint nodes should support only two in-process PrivValidator
implementations:
- FilePV uses an unencrypted private key in a "priv_validator.json" file - no
configuration required (just `tendermint init`).
- TCPVal and IPCVal use TCP and Unix sockets respectively to send signing requests
to another process - the user is responsible for starting that process themselves.
Both TCPVal and IPCVal addresses can be provided via flags at the command line
or in the configuration file; TCPVal addresses must be of the form
`tcp://<ip_address>:<port>` and IPCVal addresses `unix:///path/to/file.sock` -
doing so will cause Tendermint to ignore any private validator files.
TCPVal will listen on the given address for incoming connections from an external
private validator process, and will halt all operation until at least one external
process has successfully connected.
The external priv_validator process will dial the address to connect to
Tendermint, and then Tendermint will send requests on the ensuing connection to
sign votes and proposals. Thus the external process initiates the connection,
but the Tendermint process makes all requests. In a later stage we're going to
support multiple validators for fault tolerance. To prevent double signing they
need to be synced, which is deferred to an external solution (see #1185).
Conversely, IPCVal will make an outbound connection to an existing socket opened
by the external validator process.
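To make the two connection directions concrete, a standard-library sketch (the address, socket path, and error handling are illustrative only):

```go
package main

import "net"

func main() {
	// TCPVal: Tendermint listens; the external signer dials in.
	ln, err := net.Listen("tcp", "127.0.0.1:26659")
	if err != nil {
		panic(err)
	}
	signerConn, _ := ln.Accept() // blocks until a signer connects

	// IPCVal: Tendermint dials a socket the signer already opened.
	ipcConn, err := net.Dial("unix", "/path/to/file.sock")
	if err != nil {
		panic(err)
	}

	// Signing requests for votes and proposals flow over these conns.
	_, _ = signerConn, ipcConn
}
```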
In addition, Tendermint will provide implementations that can be run in that
external process. These include:
- FilePV will encrypt the private key, and the user must enter a password to
decrypt the key when the process is started.
- LedgerPV uses a Ledger Nano S to handle all signing.


@@ -1,271 +0,0 @@
# ADR 009: ABCI UX Improvements
## Changelog
23-06-2018: Some minor fixes from review
07-06-2018: Some updates based on discussion with Jae
07-06-2018: Initial draft to match what was released in ABCI v0.11
## Context
The ABCI was first introduced in late 2015. Its purpose is to be:
- a generic interface between state machines and their replication engines
- agnostic to the language the state machine is written in
- agnostic to the replication engine that drives it
This means ABCI should provide an interface for both pluggable applications and
pluggable consensus engines.
To achieve this, it uses Protocol Buffers (proto3) for message types. The dominant
implementation is in Go.
After some recent discussions with the community on github, the following were
identified as pain points:
- Amino encoded types
- Managing validator sets
- Imports in the protobuf file
See the [references](#references) for more.
### Imports
The native proto library in Go generates inflexible and verbose code.
Many in the Go community have adopted a fork called
[gogoproto](https://github.com/gogo/protobuf) that provides a
variety of features aimed to improve the developer experience.
While `gogoproto` is nice, it creates an additional dependency, and compiling
the protobuf types for other languages has been reported to fail when `gogoproto` is used.
### Amino
Amino is an encoding protocol designed to improve over the insufficiencies of protobuf.
Its goal is to be proto4.
Many people are frustrated by incompatibility with protobuf,
and with the requirement for Amino to be used at all within ABCI.
We intend to make Amino successful enough that we can eventually use it for ABCI
message types directly. By then it should be called proto4. In the meantime,
we want it to be easy to use.
### PubKey
PubKeys are encoded using Amino (and before that, go-wire).
Ideally, PubKeys are an interface type where we don't know all the
implementation types, so it is unfitting to use `oneof` or `enum`.
### Addresses
The address for an ED25519 pubkey is the RIPEMD160 of the Amino-encoded
pubkey. This introduces an Amino dependency in address generation, a
functionality that is widely required and should be as easy to compute as
possible.
### Validators
To change the validator set, applications can return a list of validator updates
with ResponseEndBlock. In these updates, the public key _must_ be included,
because Tendermint requires the public key to verify validator signatures. This
means ABCI developers have to work with PubKeys. That said, it would also be
convenient to work with address information, and for it to be simple to do so.
### AbsentValidators
Tendermint also provides a list of validators in BeginBlock who did not sign the
last block. This allows applications to reflect validator availability, for
instance by punishing validators for not having their votes included
in commits.
### InitChain
Tendermint passes in a list of validators here, and nothing else. It would
benefit the application to be able to control the initial validator set. For
instance the genesis file could include application-based information about the
initial validator set that the application could process to determine the
initial validator set. Additionally, InitChain would benefit from getting all
the genesis information.
### Header
ABCI provides the Header in RequestBeginBlock so the application can have
important information about the latest state of the blockchain.
## Decision
### Imports
Move away from gogoproto. In the short term, we will just maintain a second
protobuf file without the gogoproto annotations. In the medium term, we will
make copies of all the structs in Golang and shuttle back and forth. In the long
term, we will use Amino.
### Amino
To simplify ABCI application development in the short term,
Amino will be completely removed from the ABCI:
- It will not be required for PubKey encoding
- It will not be required for computing PubKey addresses
That said, we are working to make Amino a huge success, and to become proto4.
To facilitate adoption and cross-language compatibility in the near-term, Amino
v1 will:
- be fully compatible with the subset of proto3 that excludes `oneof`
- use the Amino prefix system to provide interface types, as opposed to `oneof`
style union types.
That said, an Amino v2 will be worked on to improve the performance of the
format and its useability in cryptographic applications.
### PubKey
Encoding schemes infect software. As a generic middleware, ABCI aims to have
some cross scheme compatibility. For this it has no choice but to include opaque
bytes from time to time. While we will not enforce Amino encoding for these
bytes yet, we need to provide a type system. The simplest way to do this is to
use a type string.
PubKey will now look like:
```
message PubKey {
  string type
  bytes data
}
```
where `type` can be:
- "ed225519", with `data = <raw 32-byte pubkey>`
- "secp256k1", with `data = <33-byte OpenSSL compressed pubkey>`
As we want to retain flexibility here, and since ideally, PubKey would be an
interface type, we do not use `enum` or `oneof`.
### Addresses
To simplify and improve computing addresses, we change it to the first 20-bytes of the SHA256
of the raw 32-byte public key.
We continue to use the Bitcoin address scheme for secp256k1 keys.
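A short sketch of the new ed25519 address rule, using only the Go standard library:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// address returns the first 20 bytes of SHA256(raw 32-byte pubkey).
func address(pubKey [32]byte) []byte {
	h := sha256.Sum256(pubKey[:])
	return h[:20]
}

func main() {
	var pk [32]byte // a real key would come from ed25519 key generation
	fmt.Printf("%X\n", address(pk))
}
```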
### Validators
Add a `bytes address` field:
```
message Validator {
  bytes address
  PubKey pub_key
  int64 power
}
```
### RequestBeginBlock and AbsentValidators
To simplify this, RequestBeginBlock will include the complete validator set,
including the address, and voting power of each validator, along
with a boolean for whether or not they voted:
```
message RequestBeginBlock {
  bytes hash
  Header header
  LastCommitInfo last_commit_info
  repeated Evidence byzantine_validators
}

message LastCommitInfo {
  int32 CommitRound
  repeated SigningValidator validators
}

message SigningValidator {
  Validator validator
  bool signed_last_block
}
```
Note that in Validators in RequestBeginBlock, we DO NOT include public keys. Public keys are
larger than addresses and in the future, with quantum computers, will be much
larger. The overhead of passing them, especially during fast-sync, is
significant.
Additionally, addresses are changing to be simpler to compute, further removing
the need to include pubkeys here.
In short, ABCI developers must be aware of both addresses and public keys.
### ResponseEndBlock
Since ResponseEndBlock includes Validator, it must now include their address.
### InitChain
Change RequestInitChain to give the app all the information from the genesis file:
```
message RequestInitChain {
  int64 time
  string chain_id
  ConsensusParams consensus_params
  repeated Validator validators
  bytes app_state_bytes
}
```
Change ResponseInitChain to allow the app to specify the initial validator set
and consensus parameters.
```
message ResponseInitChain {
  ConsensusParams consensus_params
  repeated Validator validators
}
```
### Header
Now that Tendermint Amino will be compatible with proto3, the Header in ABCI
should exactly match the Tendermint header - they will then be encoded
identically in ABCI and in Tendermint Core.
## Status
Accepted.
## Consequences
### Positive
- Easier for developers to build on the ABCI
- ABCI and Tendermint headers are identically serialized
### Negative
- Maintenance overhead of alternative type encoding scheme
- Performance overhead of passing all validator info every block (at least it's
only addresses, and not also pubkeys)
- Maintenance overhead of duplicate types
### Neutral
- ABCI developers must know about validator addresses
## References
- [ABCI v0.10.3 Specification (before this
proposal)](https://github.com/tendermint/abci/blob/v0.10.3/specification.rst)
- [ABCI v0.11.0 Specification (implementing first draft of this
proposal)](https://github.com/tendermint/abci/blob/v0.11.0/specification.md)
- [Ed25519 addresses](https://github.com/tendermint/go-crypto/issues/103)
- [InitChain contains the
Genesis](https://github.com/tendermint/abci/issues/216)
- [PubKeys](https://github.com/tendermint/tendermint/issues/1524)
- [Notes on
Header](https://github.com/tendermint/tendermint/issues/1605)
- [Gogoproto issues](https://github.com/tendermint/abci/issues/256)
- [Absent Validators](https://github.com/tendermint/abci/issues/231)


@@ -1,75 +0,0 @@
# ADR 010: Crypto Changes
## Context
Tendermint is a cryptographic protocol that uses and composes a variety of cryptographic primitives.
After nearly 4 years of development, Tendermint has recently undergone multiple security reviews to search for vulnerabilities and to assess the use and composition of cryptographic primitives.
### Hash Functions
Tendermint uses RIPEMD160 universally as a hash function, most notably in its Merkle tree implementation.
RIPEMD160 was chosen because it provides the shortest fingerprint that is long enough to be considered secure (ie. birthday bound of 80-bits).
It was also developed in the open academic community, unlike NSA-designed algorithms like SHA256.
That said, the cryptographic community appears to unanimously agree on the security of SHA256. It has become a universal standard, especially now that SHA1 is broken, being required in TLS connections and having optimized support in hardware.
### Merkle Trees
Tendermint uses a simple Merkle tree to compute digests of large structures like transaction batches
and even blockchain headers. The Merkle tree length prefixes byte arrays before concatenating and hashing them.
It uses RIPEMD160.
### Addresses
ED25519 addresses are computed using the RIPEMD160 of the Amino encoding of the public key.
RIPEMD160 is generally considered an outdated hash function, and is much slower
than more modern functions like SHA256 or Blake2.
### Authenticated Encryption
Tendermint P2P connections use authenticated encryption to provide privacy and authentication in the communications.
This is done using the simple Station-to-Station protocol with the NaCL Ed25519 library.
While there have been no vulnerabilities found in the implementation, there are some concerns:
- NaCL uses Salsa20, a not widely used and relatively outdated stream cipher that has been obsoleted by ChaCha20
- Connections use RIPEMD160 to compute a value that is used for the encryption nonce with subtle requirements on how it's used
## Decision
### Hash Functions
Use the first 20-bytes of the SHA256 hash instead of RIPEMD160 for everything
### Merkle Trees
TODO
### Addresses
Compute ED25519 addresses as the first 20-bytes of the SHA256 of the raw 32-byte public key
### Authenticated Encryption
Make the following changes (see the sketch after this list):
- Use xChaCha20 instead of xSalsa20 - https://github.com/tendermint/tendermint/issues/1124
- Use an HKDF instead of RIPEMD160 to compute nonces - https://github.com/tendermint/tendermint/issues/1165
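A hedged sketch of that direction using `golang.org/x/crypto` (this is not the actual Tendermint implementation; the key-derivation inputs are placeholders):

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"io"

	"golang.org/x/crypto/chacha20poly1305"
	"golang.org/x/crypto/hkdf"
)

// seal derives a key from a shared secret via HKDF-SHA256 and encrypts
// with XChaCha20-Poly1305, prepending the random 24-byte nonce.
func seal(sharedSecret, plaintext []byte) ([]byte, error) {
	key := make([]byte, chacha20poly1305.KeySize)
	if _, err := io.ReadFull(hkdf.New(sha256.New, sharedSecret, nil, nil), key); err != nil {
		return nil, err
	}
	aead, err := chacha20poly1305.NewX(key) // XChaCha20-Poly1305
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return aead.Seal(nonce, nonce, plaintext, nil), nil
}

func main() {
	if _, err := seal([]byte("dh-shared-secret"), []byte("hello")); err != nil {
		panic(err)
	}
}
```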
## Status
## Consequences
### Positive
- More modern and standard cryptographic functions with wider adoption and hardware acceleration
### Negative
- Exact authenticated encryption construction isn't already provided in a well-used library
### Neutral
## References


@@ -1,116 +0,0 @@
# ADR 011: Monitoring
## Changelog
08-06-2018: Initial draft
11-06-2018: Reorg after @xla comments
13-06-2018: Clarification about usage of labels
## Context
In order to bring more visibility into Tendermint, we would like it to report
metrics and, maybe later, traces of transactions and RPC queries. See
https://github.com/tendermint/tendermint/issues/986.
A few solutions were considered:
1. [Prometheus](https://prometheus.io)
a) Prometheus API
b) [go-kit metrics package](https://github.com/go-kit/kit/tree/master/metrics) as an interface plus Prometheus
c) [telegraf](https://github.com/influxdata/telegraf)
d) new service, which will listen to events emitted by pubsub and report metrics
2. [OpenCensus](https://opencensus.io/introduction/)
### 1. Prometheus
Prometheus seems to be the most popular product out there for monitoring. It has
a Go client library, powerful queries, and alerting.
**a) Prometheus API**
We can commit to using Prometheus in Tendermint, but I think Tendermint users
should be free to choose whatever monitoring tool better suits their needs (if
they don't already have one). So we should abstract the interface enough that
people can switch between Prometheus and other similar tools.
**b) go-kit metrics package as an interface**
The metrics package provides a set of uniform interfaces for service
instrumentation and offers adapters to popular metrics packages:
https://godoc.org/github.com/go-kit/kit/metrics#pkg-subdirectories
Compared to the Prometheus API, we lose customisability and control, but gain
the freedom to choose any instrument from the above list, given we extract
metrics creation into a separate function (see "providers" in node/node.go).
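A sketch of how option (b) could look: reactors depend only on the go-kit interfaces, while provider functions pick the backend. The `Metrics` struct and metric names are illustrative; the go-kit and Prometheus package paths are the real ones.

```go
package consensus

import (
	"github.com/go-kit/kit/metrics"
	"github.com/go-kit/kit/metrics/discard"
	prometheus "github.com/go-kit/kit/metrics/prometheus"
	stdprometheus "github.com/prometheus/client_golang/prometheus"
)

// Metrics holds the instruments a reactor updates; reactors only ever
// see the go-kit interfaces, never the backend.
type Metrics struct {
	Height metrics.Gauge
}

// PrometheusMetrics backs the instruments with Prometheus collectors.
func PrometheusMetrics() *Metrics {
	return &Metrics{
		Height: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
			Subsystem: "consensus",
			Name:      "height",
			Help:      "Height of the chain.",
		}, nil),
	}
}

// NopMetrics discards all measurements, e.g. when monitoring is disabled.
func NopMetrics() *Metrics {
	return &Metrics{Height: discard.NewGauge()}
}
```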
**c) telegraf**
Unlike the options already discussed, telegraf does not require modifying Tendermint
source code. You create something called an input plugin, which polls
Tendermint RPC every second and calculates the metrics itself.
While this may sound good, some metrics we want to report are not exposed via
RPC or pubsub, and therefore can't be accessed externally.
**d) service, listening to pubsub**
Same issue as the above.
### 2. opencensus
opencensus provides both metrics and tracing, which may be important in the
future. Its API looks different from go-kit and Prometheus, but it looks like it
covers everything we need.
Unfortunately, OpenCensus go client does not define any
interfaces, so if we want to abstract away metrics we
will need to write interfaces ourselves.
### List of metrics
| | Name | Type | Description |
| --- | ------------------------------------ | ------ | ----------------------------------------------------------------------------- |
| A | consensus_height | Gauge | |
| A | consensus_validators | Gauge | Number of validators who signed |
| A | consensus_validators_power | Gauge | Total voting power of all validators |
| A | consensus_missing_validators | Gauge | Number of validators who did not sign |
| A | consensus_missing_validators_power | Gauge | Total voting power of the missing validators |
| A | consensus_byzantine_validators | Gauge | Number of validators who tried to double sign |
| A | consensus_byzantine_validators_power | Gauge | Total voting power of the byzantine validators |
| A | consensus_block_interval | Timing | Time between this and last block (Block.Header.Time) |
| | consensus_block_time | Timing | Time to create a block (from creating a proposal to commit) |
| | consensus_time_between_blocks | Timing | Time between committing the last block and receiving/creating a proposal |
| A | consensus_rounds | Gauge | Number of rounds |
| | consensus_prevotes | Gauge | |
| | consensus_precommits | Gauge | |
| | consensus_prevotes_total_power | Gauge | |
| | consensus_precommits_total_power | Gauge | |
| A | consensus_num_txs | Gauge | |
| A | mempool_size | Gauge | |
| A | consensus_total_txs | Gauge | |
| A | consensus_block_size | Gauge | In bytes |
| A | p2p_peers | Gauge | Number of peers node's connected to |
`A` - will be implemented first.
**Proposed solution**
## Status
Proposed.
## Consequences
### Positive
Better visibility, support for a variety of monitoring backends
### Negative
One more library to audit, and metrics-reporting code mixed in with the business domain.
### Neutral
-


@@ -1,113 +0,0 @@
# ADR 012: PeerTransport
## Context
One of the more apparent problems with the current architecture in the p2p
package is that there is no clear separation of concerns between different
components. Most notably the `Switch` is currently doing physical connection
handling. An artifact is the dependency of the Switch on
[`config.P2PConfig`](https://github.com/tendermint/tendermint/blob/05a76fb517f50da27b4bfcdc7b4cf185fc61eff6/config/config.go#L272-L339).
Addresses:
- [#2046](https://github.com/tendermint/tendermint/issues/2046)
- [#2047](https://github.com/tendermint/tendermint/issues/2047)
First iteration in [#2067](https://github.com/tendermint/tendermint/issues/2067)
## Decision
Transport concerns will be handled by a new component (`PeerTransport`) which
will provide Peers at its boundary to the caller. In turn, the `Switch` will use
this new component to accept new `Peer`s and to dial them based on `NetAddress`.
### PeerTransport
Responsible for emitting and connecting to Peers. The implementation of `Peer`
is left to the transport, which implies that the chosen transport dictates the
characteristics of the implementation handed back to the `Switch`. Each
transport implementation is responsible for filtering establishing peers specific
to its domain; for the default multiplexed implementation the following filters
will apply:
- connections from our own node
- handshake fails
- upgrade to secret connection fails
- prevent duplicate ip
- prevent duplicate id
- nodeinfo incompatibility
```go
// PeerTransport proxies incoming and outgoing peer connections.
type PeerTransport interface {
	// Accept returns a newly connected Peer.
	Accept() (Peer, error)

	// Dial connects to a Peer.
	Dial(NetAddress) (Peer, error)
}

// EXAMPLE OF DEFAULT IMPLEMENTATION
// multiplexTransport accepts tcp connections and upgrades to multiplexed
// peers.
type multiplexTransport struct {
	listener net.Listener

	acceptc chan accept
	closec  <-chan struct{}
	listenc <-chan struct{}

	dialTimeout      time.Duration
	handshakeTimeout time.Duration
	nodeAddr         NetAddress
	nodeInfo         NodeInfo
	nodeKey          NodeKey

	// TODO(xla): Remove when MConnection is refactored into mPeer.
	mConfig conn.MConnConfig
}

var _ PeerTransport = (*multiplexTransport)(nil)

// NewMTransport returns network connected multiplexed peers.
func NewMTransport(
	nodeAddr NetAddress,
	nodeInfo NodeInfo,
	nodeKey NodeKey,
) *multiplexTransport
```
### Switch
From now on, the Switch will depend on a fully set up `PeerTransport` to
retrieve/reach out to its peers. As the more low-level concerns are pushed into
the transport, we can omit passing the `config.P2PConfig` to the Switch.
```go
func NewSwitch(transport PeerTransport, opts ...SwitchOption) *Switch
```
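Hypothetical wiring, to show the inversion of control: the transport is built first and handed to the Switch, which never sees `config.P2PConfig` (the accept-loop method name is illustrative):

```go
transport := NewMTransport(nodeAddr, nodeInfo, nodeKey)
sw := NewSwitch(transport)

// The Switch consumes fully set up Peers from the transport.
go func() {
	for {
		p, err := transport.Accept()
		if err != nil {
			return
		}
		sw.addPeer(p) // illustrative; the actual method may differ
	}
}()
```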
## Status
In Review.
## Consequences
### Positive
- free Switch from transport concerns - simpler implementation
- pluggable transport implementation - simpler test setup
- remove Switch dependency on P2PConfig - easier to test
### Negative
- more setup for tests which depend on Switches
### Neutral
- multiplexed will be the default implementation
[0] These guards could potentially be extended to be pluggable, much like
middlewares, to express different concerns required by differently configured
environments.


@@ -1,99 +0,0 @@
# ADR 013: Need for symmetric cryptography
## Context
We require symmetric ciphers to handle how we encrypt keys in the sdk,
and to potentially encrypt `priv_validator.json` in tendermint.
Currently we use AEADs to support symmetric encryption,
which is great since we want data integrity in addition to privacy and authenticity.
We don't currently have a scenario where we want to encrypt without data integrity,
so it is fine to optimize our code to just use AEADs.
Currently there is no easy way to switch out AEADs; this ADR outlines a way
to easily swap these out.
### How do we encrypt with AEAD's
AEADs typically require a nonce in addition to the key.
For the purposes we require symmetric cryptography for,
we need encryption to be stateless,
so we use random nonces.
(Thus the AEAD must support random nonces.)
We currently construct a random nonce, and encrypt the data with it.
The returned value is `nonce || encrypted data`.
The limitation of this is that it does not provide a way to identify
which algorithm was used in encryption.
Consequently, decryption with multiple algorithms is sub-optimal.
(You have to try them all.)
## Decision
We should create the following methods in a new `crypto/encoding/symmetric` package:
```golang
func Encrypt(aead cipher.AEAD, plaintext []byte) (ciphertext []byte, err error)
func Decrypt(key []byte, ciphertext []byte) (plaintext []byte, err error)
func Register(aead cipher.AEAD, algo_name string, NewAead func(key []byte) (cipher.AEAD, error)) error
```
This allows you to specify the algorithm in encryption, but not have to specify
it in decryption.
This is intended for ease of use in downstream applications, in addition to people
looking at the file directly.
One downside is that for the encrypt function you must have already initialized an AEAD,
but I don't really see this as an issue.
If there is no error in encryption, Encrypt will return `algo_name || nonce || aead_ciphertext`.
`algo_name` should be length prefixed, using standard varuint encoding.
This will be binary data, but that's not a problem considering the nonce and ciphertext are also binary.
This solution requires a mapping from aead type to name.
We can achieve this via reflection.
```golang
func getType(myvar interface{}) string {
	if t := reflect.TypeOf(myvar); t.Kind() == reflect.Ptr {
		return "*" + t.Elem().Name()
	} else {
		return t.Name()
	}
}
```
Then we maintain a map from the name returned from `getType(aead)` to `algo_name`.
In decryption, we read the `algo_name`, and then instantiate a new AEAD with the key.
Then we call the AEAD's decrypt method on the provided nonce/ciphertext.
`Register` allows a downstream user to add their own desired AEAD to the symmetric package.
It will error if the AEAD name is already registered.
This prevents a malicious import from modifying / nullifying an AEAD at runtime.
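A hedged sketch of Encrypt along these lines, where `typeToName` is the reflection-based map described above, and imports of `crypto/cipher`, `crypto/rand`, `encoding/binary`, and `errors` are assumed:

```golang
func Encrypt(aead cipher.AEAD, plaintext []byte) (ciphertext []byte, err error) {
	name, ok := typeToName[getType(aead)]
	if !ok {
		return nil, errors.New("symmetric: unregistered AEAD")
	}

	// Random nonce, since encryption must be stateless.
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}

	// Length-prefix algo_name with standard varuint encoding.
	prefix := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(prefix, uint64(len(name)))

	// Output layout: algo_name || nonce || aead_ciphertext.
	out := append(prefix[:n], name...)
	out = append(out, nonce...)
	return aead.Seal(out, nonce, plaintext, nil), nil
}
```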
## Implementation strategy
The golang implementation of what is proposed is rather straightforward.
The concern is that we will break existing private keys if we just switch to this.
If this is concerning, we can make a simple script, which doesn't require decoding privkeys,
for converting from the old format to the new one.
## Status
Proposed.
## Consequences
### Positive
- Allows us to support new AEAD's, in a way that makes decryption easier
- Allows downstream users to add their own AEAD
### Negative
- We will have to break all private keys stored on disk.
They can be recovered using seed words, and upgrade scripts are simple.
### Neutral
- Caller has to instantiate the AEAD with the private key.
However it forces them to be aware of what signing algorithm they are using, which is a positive.


@@ -1,63 +0,0 @@
# ADR 014: Secp256k1 Signature Malleability
## Context
Secp256k1 has two layers of malleability.
The signer has a random nonce, and thus can produce many different valid signatures.
This ADR is not concerned with that.
The second layer of malleability basically allows one who is given a signature
to produce exactly one more valid signature for the same message from the same public key.
(They don't even have to know the message!)
The math behind this will be explained in the subsequent section.
Note that in many downstream applications, signatures will appear in a transaction, and therefore in the tx hash.
This means that if someone broadcasts a transaction with secp256k1 signature, the signature can be altered into the other form by anyone in the p2p network.
Thus the tx hash will change, and this altered tx hash may be committed instead.
This breaks the assumption that you can broadcast a valid transaction and just wait for its hash to be included on chain.
One example is if you are broadcasting a tx in cosmos,
and you wait for it to appear on chain before incrementing your sequence number.
You may never increment your sequence number if a different tx hash got committed.
Removing this second layer of signature malleability concerns could ease downstream development.
### ECDSA context
Secp256k1 is ECDSA over a particular curve.
The signature is of the form `(r, s)`, where `s` is a field element.
(The particular field is `Z_n`, where `n` is the order of the elliptic curve.)
However `(r, -s)` is also another valid solution.
Note that anyone can negate a group element, and therefore can get this second signature.
## Decision
We can just distinguish a canonical form for the ECDSA signatures.
Then we require that all ECDSA signatures be in the form which we defined as canonical.
We reject signatures in non-canonical form.
A canonical form is rather easy to define and check.
It would just be the smaller of the two values for `s`, defined lexicographically.
This is a simple check: instead of checking that `s < n`, check that `s <= (n - 1)/2`.
An example of another cryptosystem using this
is the parity definition here https://github.com/zkcrypto/pairing/pull/30#issuecomment-372910663.
This is the same solution Ethereum has chosen for solving secp malleability.
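A sketch of the check in Go, where `curveN` is assumed to hold the secp256k1 group order:

```go
// halfOrder = (n - 1) / 2 for the odd group order n.
var halfOrder = new(big.Int).Rsh(curveN, 1)

// isCanonical reports whether s is in the canonical (lower) form.
func isCanonical(s *big.Int) bool {
	return s.Cmp(halfOrder) <= 0
}

// normalize maps a valid signature's s into canonical form: -s mod n.
func normalize(s *big.Int) *big.Int {
	if isCanonical(s) {
		return s
	}
	return new(big.Int).Sub(curveN, s)
}
```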
## Proposed Implementation
Fork https://github.com/btcsuite/btcd, and just update the [parse sig method](https://github.com/btcsuite/btcd/blob/11fcd83963ab0ecd1b84b429b1efc1d2cdc6d5c5/btcec/signature.go#L195) and serialize functions to enforce our canonical form.
## Status
Implemented
## Consequences
### Positive
- Lets us maintain the ability to expect a tx hash to appear in the blockchain.
### Negative
- More work in all future implementations (Though this is a very simple check)
- Requires us to maintain another fork
### Neutral


@@ -1,84 +0,0 @@
# ADR 015: Crypto encoding
## Context
We must standardize our method for encoding public keys and signatures on chain.
Currently we amino encode the public keys and signatures.
The reason we are using amino here is primarily due to ease of support in
parsing for other languages.
We don't need its upgradability properties in cryptosystems, as a change in
the crypto that requires adapting the encoding, likely warrants being deemed
a new cryptosystem.
(I.e. using new public parameters)
## Decision
### Public keys
For public keys, we will continue to use amino encoding on the canonical
representation of the pubkey.
(Canonical as defined by the cryptosystem itself)
This has two significant drawbacks.
Amino encoding is less space-efficient, due to requiring support for upgradability.
Amino encoding support requires forking protobuf and adding this new interface support
option in the language of choice.
The reason for continuing to use amino, however, is that people can more easily
create code in languages that already have an up-to-date amino library.
It is possible that this will change in the future, if it is deemed that
requiring amino for interacting with Tendermint cryptography is unnecessary.
The arguments for space efficiency here are refuted on the basis that there are
far more egregious wastages of space in the SDK.
The space requirement of the public keys doesn't cause many problems beyond
increasing the space attached to each validator / account.
The alternative to using amino here would be for us to create an enum type.
Switching to just an enum type is worthy of investigation post-launch.
For reference, part of amino encoding interfaces is basically a 4 byte enum
type definition.
Enum types would just change that 4 bytes to be a variant, and it would remove
the protobuf overhead, but it would be hard to integrate into the existing API.
### Signatures
Signatures should be switched to be `[]byte`.
Spatial efficiency in the signatures is quite important,
as it directly affects the gas cost of every transaction,
and the throughput of the chain.
Signatures don't need to encode what type they are for (unlike public keys)
since public keys must already be known.
Therefore we can validate the signature without needing to encode its type.
When placed in state, signatures will still be amino encoded, but it will be the
primitive type `[]byte` getting encoded.
#### Ed25519
Use the canonical representation for signatures.
#### Secp256k1
There isn't a clear canonical representation here.
Signatures have two elements `r,s`.
These bytes are encoded as `r || s`, where `r` and `s` are both exactly
32 bytes long, encoded big-endian.
This is basically Ethereum's encoding, but without the leading recovery bit.
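A sketch of this fixed 64-byte encoding (`math/big`; `FillBytes` requires Go 1.15+):

```go
// encodeSig serializes (r, s) as r || s, each left-padded to 32
// big-endian bytes.
func encodeSig(r, s *big.Int) []byte {
	sig := make([]byte, 64)
	r.FillBytes(sig[:32])
	s.FillBytes(sig[32:])
	return sig
}
```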
## Status
Implemented
## Consequences
### Positive
- More space efficient signatures
### Negative
- We have an amino dependency for cryptography.
### Neutral
- No change to public keys


@@ -1,308 +0,0 @@
# ADR 016: Protocol Versions
## TODO
- How to / should we version the authenticated encryption handshake itself (ie.
upfront protocol negotiation for the P2PVersion)
- How to / should we version ABCI itself? Should it just be absorbed by the
BlockVersion?
## Changelog
- 18-09-2018: Updates after working a bit on implementation
- ABCI Handshake needs to happen independently of starting the app
conns so we can see the result
- Add question about ABCI protocol version
- 16-08-2018: Updates after discussion with SDK team
- Remove signalling for next version from Header/ABCI
- 03-08-2018: Updates from discussion with Jae:
- ProtocolVersion contains Block/AppVersion, not Current/Next
- signal upgrades to Tendermint using EndBlock fields
- don't restrict peer compatibility by version to simplify syncing old nodes
- 28-07-2018: Updates from review
- split into two ADRs - one for protocol, one for chains
- include signalling for upgrades in header
- 16-07-2018: Initial draft - was originally joint ADR for protocol and chain
versions
## Context
Here we focus on software-agnostic protocol versions.
The Software Version is covered by SemVer and described elsewhere.
It is not relevant to the protocol description; suffice it to say that if any protocol version
changes, the software version changes, but not necessarily vice versa.
Software version should be included in NodeInfo for convenience/diagnostics.
We are also interested in versioning across different blockchains in a
meaningful way, for instance to differentiate branches of a contentious
hard-fork. We leave that for a later ADR.
## Requirements
We need to version components of the blockchain that may be independently upgraded.
We need to do it in a way that is scalable and maintainable - we can't just litter
the code with conditionals.
We can consider the complete version of the protocol to contain the following sub-versions:
BlockVersion, P2PVersion, AppVersion. These versions reflect the major sub-components
of the software that are likely to evolve together, at different rates, and in different ways,
as described below.
The BlockVersion defines the core of the blockchain data structures and
should change infrequently.
The P2PVersion defines how peers connect and communicate with each other - it's
not part of the blockchain data structures, but defines the protocols used to build the
blockchain. It may change gradually.
The AppVersion determines how we compute app specific information, like the
AppHash and the Results.
All of these versions may change over the life of a blockchain, and we need to
be able to help new nodes sync up across version changes. This means we must be willing
to connect to peers with older versions.
### BlockVersion
- All tendermint hashed data-structures (headers, votes, txs, responses, etc.).
- Note the semantic meaning of a transaction may change according to the AppVersion, but the way txs are merklized into the header is part of the BlockVersion
- It should be the least frequent/likely to change.
- Tendermint should be stabilizing - it's just Atomic Broadcast.
- We can start considering for Tendermint v2.0 in a year
- It's easy to determine the version of a block from its serialized form
### P2PVersion
- All p2p and reactor messaging (messages, detectable behaviour)
- Will change gradually as reactors evolve to improve performance and support new features - eg proposed new message types BatchTx in the mempool and HasBlockPart in the consensus
- It's easy to determine the version of a peer from its first serialized message/s
- New versions must be compatible with at least one old version to allow gradual upgrades
### AppVersion
- The ABCI state machine (txs, begin/endblock behaviour, commit hashing)
- Behaviour and message types will change abruptly in the course of the life of a chain
- Need to minimize complexity of the code for supporting different AppVersions at different heights
- Ideally, each version of the software supports only a _single_ AppVersion at one time
- this means we checkout different versions of the software at different heights instead of littering the code
with conditionals
- minimize the number of data migrations required across AppVersion (ie. most AppVersion should be able to read the same state from disk as previous AppVersion).
## Ideal
Each component of the software is independently versioned in a modular way, and it's easy to mix and match and upgrade.
## Proposal
Each of BlockVersion, AppVersion, P2PVersion, is a monotonically increasing uint64.
To use these versions, we need to update the block Header, the p2p NodeInfo, and the ABCI.
### Header
Block Header should include a `Version` struct as its first field like:
```
type Version struct {
    Block uint64
    App   uint64
}
```
Here, `Version.Block` defines the rules for the current block, while
`Version.App` defines the app version that processed the last block and computed
the `AppHash` in the current block. Together they provide a complete description
of the consensus-critical protocol.
Since we have settled on a proto3 header, the ability to read the BlockVersion out of the serialized header is unambiguous.
Using a Version struct gives us more flexibility to add fields without breaking
the header.
The ProtocolVersion struct includes both the Block and App versions - it should
serve as a complete description of the consensus-critical protocol.
### NodeInfo
NodeInfo should include a Version struct as its first field like:
```
type Version struct {
    P2P   uint64
    Block uint64
    App   uint64
    Other []string
}
```
Note this effectively makes `Version.P2P` the first field in the NodeInfo, so it
should be easy to read this out of the serialized header if need be to facilitate an upgrade.
The `Version.Other` here should include additional information like the name of the software client and
its SemVer version - this is for convenience only. Eg.
`tendermint-core/v0.22.8`. It's a `[]string` so it can include information about
the version of Tendermint, of the app, of Tendermint libraries, etc.
### ABCI
Since the ABCI is responsible for keeping Tendermint and the App in sync, we
need to communicate version information through it.
On startup, we use Info to perform a basic handshake. It should include all the
version information.
We also need to be able to update versions in the life of a blockchain. The
natural place to do this is EndBlock.
Note that currently the result of the Handshake isn't exposed anywhere, as the
handshaking happens inside the `proxy.AppConns` abstraction. We will need to
remove the handshaking from the `proxy` package so we can call it independently
and get the result, which should contain the application version.
#### Info
RequestInfo should add support for protocol versions like:
```
message RequestInfo {
  string version
  uint64 block_version
  uint64 p2p_version
}
```
Similarly, ResponseInfo should return the versions:
```
message ResponseInfo {
  string data
  string version
  uint64 app_version
  int64 last_block_height
  bytes last_block_app_hash
}
```
The existing `version` fields should be called `software_version` but we leave
them for now to reduce the number of breaking changes.
#### EndBlock
Updating the version could be done either with new fields or by using the
existing `tags`. Since we're trying to communicate information that will be
included in Tendermint block Headers, it should be native to the ABCI, and not
something embedded through some scheme in the tags. Thus, version updates should
be communicated through EndBlock.
EndBlock already contains `ConsensusParams`. We can add version information to
the ConsensusParams as well:
```
message ConsensusParams {
  BlockSize block_size
  EvidenceParams evidence_params
  VersionParams version
}

message VersionParams {
  uint64 block_version
  uint64 app_version
}
```
For now, the `block_version` will be ignored, as we do not allow block version
to be updated live. If the `app_version` is set, it signals that the app's
protocol version has changed, and the new `app_version` will be included in the
`Block.Header.Version.App` for the next block.
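A hedged sketch of an app signalling an upgrade this way; the Go type and field names are illustrative renderings of the messages above, and `upgradeHeight` is an assumed app-level parameter:

```go
func (app *App) EndBlock(req RequestEndBlock) ResponseEndBlock {
	resp := ResponseEndBlock{}
	if req.Height == app.upgradeHeight {
		// Signal the bump; Tendermint will include the new AppVersion
		// in Block.Header.Version.App for the next block.
		resp.ConsensusParamUpdates = &ConsensusParams{
			Version: &VersionParams{AppVersion: app.appVersion + 1},
		}
	}
	return resp
}
```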
### BlockVersion
BlockVersion is included in both the Header and the NodeInfo.
Changing BlockVersion should happen quite infrequently and ideally only for
critical upgrades. For now, it is not encoded in ABCI, though it's always
possible to use tags to signal an external process to co-ordinate an upgrade.
Note Ethereum has not had to make an upgrade like this (everything has been at state machine level, AFAIK).
### P2PVersion
P2PVersion is not included in the block Header, just the NodeInfo.
P2PVersion is the first field in the NodeInfo. NodeInfo is also proto3 so this is easy to read out.
Note we need the peer/reactor protocols to take the versions of peers into account when sending messages:
- don't send messages they don't understand
- don't send messages they don't expect
Doing this will be specific to the upgrades being made.
Note we also include the list of reactor channels in the NodeInfo and already don't send messages for channels the peer doesn't understand.
If upgrades always use new channels, this simplifies the development cost of backwards compatibility.
Note NodeInfo is only exchanged after the authenticated encryption handshake to ensure that it's private.
Doing any version exchange before encrypting could be considered information leakage, though I'm not sure
how much that matters compared to being able to upgrade the protocol.
XXX: if needed, can we change the meaning of the first byte of the first message to encode a handshake version?
this is the first byte of a 32-byte ed25519 pubkey.
### AppVersion
AppVersion is also included in the block Header and the NodeInfo.
AppVersion essentially defines how the AppHash and LastResults are computed.
### Peer Compatibility
Restricting peer compatibility based on version is complicated by the need to
help old peers, possibly on older versions, sync the blockchain.
We might be tempted to say that we only connect to peers with the same
AppVersion and BlockVersion (since these define the consensus critical
computations), and a select list of P2PVersions (ie. those compatible with
ours), but then we'd need to make accommodations for connecting to peers with the
right Block/AppVersion for the height they're on.
For now, we will connect to peers with any version and restrict compatibility
solely based on the ChainID. We leave more restrictive rules on peer
compatibility to a future proposal.
### Future Changes
It may be valuable to support an `/unsafe_stop?height=_` endpoint to tell Tendermint to shutdown at a given height.
This could be used by an external manager process that oversees upgrades by
checking out and installing new software versions and restarting the process. It
would subscribe to the relevant upgrade event (needs to be implemented) and call `/unsafe_stop` at
the correct height (of course, only after getting approval from its user!).
## Consequences
### Positive
- Make tendermint and application versions native to the ABCI to more clearly
communicate about them
- Distinguish clearly between protocol versions and software version to
facilitate implementations in other languages
- Versions included in key data structures in easy to discern way
- Allows proposers to signal for upgrades and apps to decide when to actually change the
version (and start signalling for a new version)
### Neutral
- Unclear how to version the initial P2P handshake itself
- Versions aren't being used (yet) to restrict peer compatibility
- Signalling for a new version happens through the proposer and must be
tallied/tracked in the app.
### Negative
- Adds more fields to the ABCI
- Implies that a single codebase must be able to handle multiple versions


@@ -1,99 +0,0 @@
# ADR 017: Chain Versions
## TODO
- clarify how to handle slashing when ChainID changes
## Changelog
- 28-07-2018: Updates from review
- split into two ADRs - one for protocol, one for chains
- 16-07-2018: Initial draft - was originally joint ADR for protocol and chain
versions
## Context
Software and Protocol versions are covered in a separate ADR.
Here we focus on chain versions.
## Requirements
We need to version blockchains across protocols, networks, forks, etc.
We need chain identifiers and descriptions so we can talk about a multitude of chains,
and especially the differences between them, in a meaningful way.
### Networks
We need to support many independent networks running the same version of the software,
even possibly starting from the same initial state.
They must have distinct identifiers so that peers know which one they are joining and so
validators and users can prevent replay attacks.
Call this the `NetworkName` (note we currently call this `ChainID` in the software. In this
ADR, ChainID has a different meaning).
It represents both the application being run and the community or intention
of running it.
Peers only connect to other peers with the same NetworkName.
### Forks
We need to support existing networks upgrading and forking, wherein they may do any of:
- revert back to some height, continue with the same versions but new blocks
- arbitrarily mutate state at some height, continue with the same versions (eg. Dao Fork)
- change the AppVersion at some height
Note that because of Tendermint's voting power threshold rules, a chain can only be extended under both the "original" rules and
the new rules if 1/3 or more of the voting power is double signing, which is expressly prohibited and is supposed to result in punishment on both chains. Since the offending validators can censor
the punishment, the chain is expected to be hard-forked to remove them. Thus, if both branches are to continue after a fork,
they will each require a new identifier, and the old chain identifier will be retired (ie. only useful for syncing history, not for new blocks).
TODO: explain how to handle slashing when chain id changed!
We need a consistent way to describe forks.
## Proposal
### ChainDescription
ChainDescription is a complete immutable description of a blockchain. It takes the following form:
```
ChainDescription = <NetworkName>/<BlockVersion>/<AppVersion>/<StateHash>/<ValHash>/<ConsensusParamsHash>
```
Here, StateHash is the merkle root of the initial state, ValHash is the merkle root of the initial Tendermint validator set,
and ConsensusParamsHash is the merkle root of the initial Tendermint consensus parameters.
The `genesis.json` file must contain enough information to compute this value. It need not contain the StateHash or ValHash itself,
but must contain the state from which they can be computed with the given protocol versions.
NOTE: consider splitting NetworkName into NetworkName and AppName - this allows
folks to independently use the same application for different networks (ie we
could imagine multiple communities of validators wanting to put up a Hub using
the same app but having a distinct network name. Arguably not needed if
differences will come via different initial state / validators).
#### ChainID
Define `ChainID = TMHASH(ChainDescriptor)`. It's the unique ID of a blockchain.
It should be Bech32 encoded when handled by users, eg. with `cosmoschain` prefix.
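A sketch of the derivation, assuming TMHASH is SHA256 (Bech32 display encoding omitted):

```go
// chainID hashes the complete immutable chain description.
func chainID(chainDescription string) [32]byte {
	return sha256.Sum256([]byte(chainDescription))
}
```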
#### Forks and Upgrades
When a chain forks or upgrades but continues the same history, it takes a new ChainDescription as follows:
```
ChainDescription = <ChainID>/x/<Height>/<ForkDescription>
```
Where
- ChainID is the ChainID from the previous ChainDescription (ie. its hash)
- `x` denotes that a change occurred
- `Height` is the height at which the change occurred
- ForkDescription has the same form as ChainDescription but for the fork
- this allows forks to specify new versions for tendermint or the app, as well as arbitrary changes to the state or validator set


@@ -1,100 +0,0 @@
# ADR 018: ABCI Validator Improvements
## Changelog
16-08-2018: Follow-up from review: revert changes to commit round; remind about justification for removing pubkey; update pros/cons
05-08-2018: Initial draft
## Context
ADR 009 introduced major improvements to the ABCI around validators and the use
of Amino. Here we follow up with some additional changes to improve the naming
and expected use of Validator messages.
## Decision
### Validator
Currently a Validator contains `address` and `pub_key`, and one or the other is
optional/not-sent depending on the use case. Instead, we should have a
`Validator` (with just the address, used for RequestBeginBlock)
and a `ValidatorUpdate` (with the pubkey, used for ResponseEndBlock):
```
message Validator {
  bytes address
  int64 power
}

message ValidatorUpdate {
  PubKey pub_key
  int64 power
}
```
As noted in [ADR-009](adr-009-ABCI-design.md),
the `Validator` does not contain a pubkey because quantum public keys are
quite large and it would be wasteful to send them all over ABCI with every block.
Thus, applications that want to take advantage of the information in BeginBlock
are _required_ to store pubkeys in state (or use much less efficient lazy means
of verifying BeginBlock data).
### RequestBeginBlock
LastCommitInfo currently has an array of `SigningValidator` that contains
information for each validator in the entire validator set.
Instead, this should be called `VoteInfo`, since it is information about the
validator votes.
Note that all votes in a commit must be from the same round.
```
message LastCommitInfo {
  int64 round
  repeated VoteInfo commit_votes
}

message VoteInfo {
  Validator validator
  bool signed_last_block
}
```
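A hedged sketch of an application consuming this, using Go renderings of the fields above (`CommitVotes`, `SignedLastBlock`); the `markAbsent` helper is hypothetical, and a real app would look the validator's pubkey up in state by address:

```go
func (app *App) BeginBlock(req RequestBeginBlock) ResponseBeginBlock {
	for _, vote := range req.LastCommitInfo.CommitVotes {
		if !vote.SignedLastBlock {
			// Punish or record the absence, keyed by address.
			app.markAbsent(vote.Validator.Address)
		}
	}
	return ResponseBeginBlock{}
}
```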
### ResponseEndBlock
Use ValidatorUpdates instead of Validators. Then it's clear we don't need an
address, and we do need a pubkey.
We could require the address here as well as a sanity check, but it doesn't seem
necessary.
### InitChain
Use ValidatorUpdates for both Request and Response. InitChain
is about setting/updating the initial validator set, unlike BeginBlock
which is just informational.
## Status
Proposed.
## Consequences
### Positive
- Clarifies the distinction between the different uses of validator information
### Negative
- Apps must still store the public keys in state to utilize the RequestBeginBlock info
### Neutral
- ResponseEndBlock does not require an address
## References
- [Latest ABCI Spec](https://github.com/tendermint/tendermint/blob/v0.22.8/docs/app-dev/abci-spec.md)
- [ADR-009](https://github.com/tendermint/tendermint/blob/v0.22.8/docs/architecture/adr-009-ABCI-design.md)
- [Issue #1712 - Don't send PubKey in
RequestBeginBlock](https://github.com/tendermint/tendermint/issues/1712)


@@ -1,160 +0,0 @@
# ADR 019: Encoding standard for Multisignatures
## Changelog
06-08-2018: Minor updates
27-07-2018: Update draft to use amino encoding
11-07-2018: Initial Draft
## Context
Multisignatures, or technically _Accountable Subgroup Multisignatures_ (ASM),
are signature schemes which enable any subgroup of a set of signers to sign any message,
and reveal to the verifier exactly who the signers were.
This allows for complex conditionals of when to validate a signature.
Suppose the set of signers is of size _n_.
If we accept a signature whenever any subgroup of size _k_ signs a message,
this becomes what is commonly referred to as a _k of n multisig_ in Bitcoin.
This ADR specifies the encoding standard for general accountable subgroup multisignatures,
k of n accountable subgroup multisignatures, and its weighted variant.
In the future, we can also allow for more complex conditionals on the accountable subgroup.
## Proposed Solution
### New structs
Every ASM will then have its own struct, implementing the crypto.Pubkey interface.
This ADR assumes that [replacing crypto.Signature with []bytes](https://github.com/tendermint/tendermint/issues/1957) has been accepted.
#### K of N threshold signature
The pubkey is the following struct:
```golang
type ThresholdMultiSignaturePubKey struct { // K of N threshold multisig
	K       uint            `json:"threshold"`
	Pubkeys []crypto.Pubkey `json:"pubkeys"`
}
```
We will derive N from the length of pubkeys. (For spatial efficiency in encoding)
`Verify` will expect an `[]byte` encoded version of the Multisignature.
(Multisignature is described in the next section)
The multisignature will be rejected if the bitmap has fewer than k indices set,
or if any signature at a set index is not a valid signature from
the corresponding public key on the message.
(If more than k signatures are included, all must be valid.)
`Bytes` will be the amino encoded version of the pubkey.
Address will be `Hash(amino_encoded_pubkey)`
The reason this doesn't use `log_8(n)` bytes per signer is that doing so heavily optimizes for the case where a very small number of signers is required.
e.g. for `n` of size `24`, that would only be more space efficient for `k < 3`.
This seems less likely, and not the case to optimize for.
#### Weighted threshold signature
The pubkey is the following struct:
```golang
type WeightedThresholdMultiSignaturePubKey struct {
	Weights   []uint          `json:"weights"`
	Threshold uint            `json:"threshold"`
	Pubkeys   []crypto.Pubkey `json:"pubkeys"`
}
```
Weights and Pubkeys must be of the same length.
Everything else proceeds identically to the K of N multisig,
except the multisig fails if the sum of the weights is less than the threshold.
#### Multisignature
The intermediate form of the signature (as it accrues more signatures) will be the following struct:
```golang
type Multisignature struct {
	BitArray CryptoBitArray // Documented later
	Sigs     [][]byte
}
```
It is important to recall that each private key will output a signature on the provided message itself.
So no signing algorithm ever outputs the multisignature.
The UI will take a signature, cast it into a multisignature, and then keep adding
new signatures to it, and when done marshal it into `[]byte`.
This will require the following helper methods:
```golang
func SigToMultisig(sig []byte, n int)
func GetIndex(pk crypto.Pubkey, []crypto.Pubkey)
func AddSignature(sig Signature, index int, multiSig *Multisignature)
```
The multisignature will be converted to an `[]byte` using amino.MarshalBinaryBare. \*
#### Bit Array
We would be using a new implementation of a bitarray. The struct it would be encoded/decoded from is
```golang
type CryptoBitArray struct {
	ExtraBitsStored byte   `json:"extra_bits"` // The number of extra bits in elems.
	Elems           []byte `json:"elems"`
}
```
The reason for not using the BitArray currently implemented in `libs/common/bit_array.go`
is that it is less space efficient, due to a space / time trade-off.
Evidence for this is outlined in [this issue](https://github.com/tendermint/tendermint/issues/2077).
In the multisig, we will not be performing arithmetic operations,
so there is no performance increase with the current implementation,
and just loss of spatial efficiency.
Implementing this new bit array with `[]byte` _should_ be simple, as no
arithmetic operations between bit arrays are required, and it saves a couple of bytes.
(Explained in that same issue)
When this bit array is encoded, the number of elements is encoded too, due to amino.
However, we may be encoding a full byte for what we actually only need 1-7 bits for.
We store that difference in ExtraBitsStored.
This allows for us to have an unbounded number of signers, and is more space efficient than what is currently used in `libs/common`.
Again, the implementation of this space saving feature is straightforward.
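For instance, the index operations reduce to byte masking (a sketch, assuming bit `i` lives in `Elems[i/8]` at position `i%8`):

```golang
func (b *CryptoBitArray) SetIndex(i int, v bool) {
	if v {
		b.Elems[i/8] |= 1 << uint(i%8)
	} else {
		b.Elems[i/8] &^= 1 << uint(i%8)
	}
}

func (b *CryptoBitArray) GetIndex(i int) bool {
	return b.Elems[i/8]&(1<<uint(i%8)) != 0
}
```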
### Encoding the structs
We will use straightforward amino encoding. This is chosen for ease of compatibility in other languages.
### Future points of discussion
If desired, we can use ed25519 batch verification for all ed25519 keys.
This is a future point of discussion, but would be backwards compatible as this information won't need to be marshalled.
(There may even be cofactor concerns without ristretto)
Aggregation of pubkeys / sigs in Schnorr sigs / BLS sigs is not backwards compatible, and would need to be a new ASM type.
## Status
Proposed.
## Consequences
### Positive
- Supports multisignatures, in a way that won't require any special cases in our downstream verification code.
- Easy to serialize / deserialize
- Unbounded number of signers
### Negative
- Larger codebase, however this should reside in a subfolder of tendermint/crypto, as it provides no new interfaces. (Ref #https://github.com/tendermint/go-crypto/issues/136)
- Space inefficient due to utilization of amino encoding
- Suggested implementation requires a new struct for every ASM.
### Neutral


@@ -1,104 +0,0 @@
# ADR 020: Limiting txs size inside a block
## Changelog
13-08-2018: Initial Draft
15-08-2018: Second version after Dev's comments
28-08-2018: Third version after Ethan's comments
30-08-2018: AminoOverheadForBlock => MaxAminoOverheadForBlock
31-08-2018: Bounding evidence and chain ID
13-01-2019: Add section on MaxBytes vs MaxDataBytes
## Context
We currently use MaxTxs to reap txs from the mempool when proposing a block,
but enforce MaxBytes when unmarshalling a block, so we could easily propose a
block that's too large to be valid.
We should just remove MaxTxs altogether and stick with MaxBytes, and have a
`mempool.ReapMaxBytes`.
But we can't just reap BlockSize.MaxBytes, since MaxBytes is for the entire block,
not for the txs inside the block. There's extra amino overhead + the actual
headers on top of the actual transactions + evidence + last commit.
We could also consider using a MaxDataBytes instead of or in addition to MaxBytes.
## MaxBytes vs MaxDataBytes
The [PR #3045](https://github.com/tendermint/tendermint/pull/3045) suggested
additional clarity/justification was necessary here, with respect to the use
of MaxDataBytes in addition to, or instead of, MaxBytes.
MaxBytes provides a clear limit on the total size of a block that requires no
additional calculation if you want to use it to bound resource usage, and there
has been considerable discussion about optimizing tendermint around 1MB blocks.
Regardless, we need some maximum on the size of a block so we can avoid
unmarshalling blocks that are too big during the consensus, and it seems more
straightforward to provide a single fixed number for this rather than a
computation of "MaxDataBytes + everything else you need to make room for
(signatures, evidence, header)". MaxBytes provides a simple bound so we can
always say "blocks are less than X MB".
Having both MaxBytes and MaxDataBytes feels like unnecessary complexity. It's
not particularly surprising for MaxBytes to imply the maximum size of the
entire block (not just txs), one just has to know that a block includes header,
txs, evidence, votes. For more fine grained control over the txs included in the
block, there is the MaxGas. In practice, the MaxGas may be expected to do most of
the tx throttling, and the MaxBytes to just serve as an upper bound on the total
size. Applications can use MaxGas as a MaxDataBytes by just taking the gas for
every tx to be its size in bytes.
## Proposed solution
Therefore, we should
1) Get rid of MaxTxs.
2) Rename MaxTxsBytes to MaxBytes.
When we need to ReapMaxBytes from the mempool, we calculate the upper bound as follows:
```
ExactLastCommitBytes = {number of validators currently enabled} * {MaxVoteBytes}
MaxEvidenceBytesPerBlock = MaxBytes / 10
ExactEvidenceBytes = cs.evpool.PendingEvidence(MaxEvidenceBytesPerBlock) * MaxEvidenceBytes
mempool.ReapMaxBytes(MaxBytes - MaxAminoOverheadForBlock - ExactLastCommitBytes - ExactEvidenceBytes - MaxHeaderBytes)
```
where MaxVoteBytes, MaxEvidenceBytes, MaxHeaderBytes and MaxAminoOverheadForBlock
are constants defined inside the `types` package:
- MaxVoteBytes - 170 bytes
- MaxEvidenceBytes - 364 bytes
- MaxHeaderBytes - 476 bytes (~276 bytes of hashes + 200 bytes for the chain ID, i.e. 50 UTF-8
encoded symbols at 4 bytes each in the worst case, + amino overhead)
- MaxAminoOverheadForBlock - 8 bytes (assuming MaxHeaderBytes includes amino
overhead for encoding the header, MaxVoteBytes for encoding a vote, etc.)
ChainID needs to be bounded to 50 symbols max.
When reaping evidence, we use MaxBytes to calculate the upper bound (e.g. 1/10)
to save some space for transactions.
NOTE: while reaping the `max int` bytes in the mempool, we should account for the fact that every
transaction will take `len(tx)+aminoOverhead`, where aminoOverhead=1-4 bytes.
We should write a test that fails if the underlying structs change but
MaxXXX stays the same.
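To make the reap-bound arithmetic concrete, here is a sketch of the computation, assuming the constant values listed above (the real constants live in the `types` package) and that the exact evidence size is already known:
```go
package main

import "fmt"

// Illustrative values from the list above.
const (
    MaxVoteBytes             = 170
    MaxEvidenceBytes         = 364
    MaxHeaderBytes           = 476
    MaxAminoOverheadForBlock = 8
)

// maxTxBytes sketches the upper bound handed to mempool.ReapMaxBytes.
func maxTxBytes(maxBytes, exactEvidenceBytes, numValidators int64) int64 {
    exactLastCommitBytes := numValidators * MaxVoteBytes
    return maxBytes - MaxAminoOverheadForBlock - exactLastCommitBytes -
        exactEvidenceBytes - MaxHeaderBytes
}

func main() {
    // A 1 MB block with 100 validators and no pending evidence leaves
    // 1048576 - 8 - 17000 - 0 - 476 = 1031092 bytes for transactions.
    fmt.Println(maxTxBytes(1024*1024, 0, 100))
}
```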
## Status
Accepted.
## Consequences
### Positive
* one way to limit the size of a block
* less variables to configure
### Negative
* constants that need to be adjusted if the underlying structs got changed
### Neutral


@@ -1,52 +0,0 @@
# ADR 012: ABCI Events
## Changelog
- *2018-09-02* Remove ABCI errors component. Update description for events
- *2018-07-12* Initial version
## Context
ABCI tags were first described in [ADR 002](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-002-event-subscription.md).
They are key-value pairs that can be used to index transactions.
Currently, ABCI messages return a list of tags to describe an
"event" that took place during the Check/DeliverTx/Begin/EndBlock,
where each tag refers to a different property of the event, like the sending and receiving account addresses.
Since there is only one list of tags, recording data for multiple such events in
a single Check/DeliverTx/Begin/EndBlock must be done using prefixes in the key
space.
Alternatively, groups of tags that constitute an event can be separated by a
special tag that denotes a break between the events. This would allow
straightforward encoding of multiple events into a single list of tags without
prefixing, at the cost of needing these "special" tags to separate the different events.
TODO: brief description of how the indexing works
## Decision
Instead of returning a list of tags, return a list of events, where
each event is a list of tags. This way we naturally capture the concept of
multiple events happening during a single ABCI message.
TODO: describe impact on indexing and querying
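For illustration, a sketch of the proposed shape in Go (the real types are protobuf messages; the names here are assumptions):
```go
package main

import "fmt"

// Illustrative shapes only; the actual ABCI types are protobuf messages.
type Tag struct{ Key, Value string }

// An event groups the tags that describe one logical occurrence.
type Event struct{ Tags []Tag }

func main() {
    // Two transfers inside a single DeliverTx stay cleanly separated,
    // with no key-prefixing needed.
    events := []Event{
        {Tags: []Tag{{"sender", "addr1"}, {"recipient", "addr2"}}},
        {Tags: []Tag{{"sender", "addr3"}, {"recipient", "addr4"}}},
    }
    for i, e := range events {
        fmt.Printf("event %d: %v\n", i, e.Tags)
    }
}
```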
## Status
Proposed
## Consequences
### Positive
- Ability to track distinct events separate from ABCI calls (DeliverTx/BeginBlock/EndBlock)
- More powerful query abilities
### Negative
- More complex query syntax
- More complex search implementation
### Neutral


@@ -1,64 +0,0 @@
# ADR 023: ABCI Codespaces
## Changelog
- *2018-09-01* Initial version
## Context
ABCI errors should provide an abstraction between application details
and the client interface responsible for formatting & displaying errors to the user.
Currently, this abstraction consists of a single integer (the `code`), where any
`code > 0` is considered an error (ie. invalid transaction) and all type
information about the error is contained in the code. This integer is
expected to be decoded by the client into a known error string, where any
more specific data is contained in the `data`.
In a [previous conversation](https://github.com/tendermint/abci/issues/165#issuecomment-353704015),
it was suggested that not all non-zero codes need to be errors, hence why it's called `code` and not `error code`.
It is unclear exactly how the semantics of the `code` field will evolve, though
better lite-client proofs (like discussed for tags
[here](https://github.com/tendermint/tendermint/issues/1007#issuecomment-413917763))
may play a role.
Note that having all type information in a single integer
precludes an easy coordination method between "module implementers" and "client
implementers", especially for apps with many "modules". With an unbounded error domain (such as a string), module
implementers can pick a globally unique prefix & error code set, so client
implementers could easily implement support for "module A" regardless of which
particular blockchain network it was running in and which other modules were running with it. With
only error codes, globally unique codes are difficult/impossible, as the space
is finite and collisions are likely without an easy way to coordinate.
For instance, while trying to build an ecosystem of modules that can be composed into a single
ABCI application, the Cosmos-SDK had to hack a higher level "codespace" into the
single integer so that each module could have its own space to express its
errors.
## Decision
Include a `string code_space` in all ABCI messages that have a `code`.
This allows applications to namespace the codes so they can experiment with
their own code schemes.
It is the responsibility of applications to limit the size of the `code_space`
string.
How the codespace is hashed into block headers (ie. so it can be queried
efficiently by lite clients) is left for a separate ADR.
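As a client-side illustration of why the namespace helps, a sketch of a registry keyed by (codespace, code); all names and values here are hypothetical:
```go
package main

import "fmt"

type errKey struct {
    Codespace string
    Code      uint32
}

// Module implementers pick a globally unique codespace prefix; client
// implementers register human-readable strings per (codespace, code) pair.
var errRegistry = map[errKey]string{
    {"bank", 1}: "insufficient funds",
    {"auth", 1}: "invalid signature",
}

func describe(codespace string, code uint32) string {
    if code == 0 {
        return "OK"
    }
    if msg, ok := errRegistry[errKey{codespace, code}]; ok {
        return msg
    }
    return fmt.Sprintf("unknown error (codespace=%q, code=%d)", codespace, code)
}

func main() {
    fmt.Println(describe("bank", 1)) // insufficient funds
}
```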
## Consequences
### Positive
- No need for complex codespacing on a single integer
- More expressive type system for errors
### Negative
- Another field in the response needs to be accounted for
- Some redundancy with `code` field
- May encourage more error/code type info to move to the `codespace` string, which
could impact lite clients.


@@ -1,183 +0,0 @@
# ADR 012: ABCI `ProposeTx` Method
## Changelog
25-06-2018: Initial draft based on [#1776](https://github.com/tendermint/tendermint/issues/1776)
## Context
[#1776](https://github.com/tendermint/tendermint/issues/1776) was
opened in relation to implementation of a Plasma child chain using Tendermint
Core as consensus/replication engine.
Due to the requirements of [Minimal Viable Plasma (MVP)](https://ethresear.ch/t/minimal-viable-plasma/426) and [Plasma Cash](https://ethresear.ch/t/plasma-cash-plasma-with-much-less-per-user-data-checking/1298), it is necessary for ABCI apps to have a mechanism to handle the following cases (more may emerge in the near future):
1. `deposit` transactions on the Root Chain, which must consist of a block
with a single transaction, where there are no inputs and only one output
made in favour of the depositor. In this case, a `block` consists of
a transaction with the following shape:
```
[0, 0, 0, 0, #input1 - zeroed out
0, 0, 0, 0, #input2 - zeroed out
<depositor_address>, <amount>, #output1 - in favour of depositor
0, 0, #output2 - zeroed out
<fee>,
]
```
`exit` transactions may also be treated in a similar manner, wherein the
input is the UTXO being exited on the Root Chain, and the output belongs to
a reserved "burn" address, e.g., `0x0`. In such cases, it is favourable for
the containing block to only hold a single transaction that may receive
special treatment.
2. Other "internal" transactions on the child chain, which may be initiated
unilaterally. The most basic example of is a coinbase transaction
implementing validator node incentives, but may also be app-specific. In
these cases, it may be favourable for such transactions to
be ordered in a specific manner, e.g., coinbase transactions will always be
at index 0. In general, such strategies increase the determinism and
predictability of blockchain applications.
While it is possible to deal with the cases enumerated above using the
existing ABCI, the currently available options result in suboptimal workarounds.
Two are explained in greater detail below.
### Solution 1: App state-based Plasma chain
In this workaround, the app maintains a `PlasmaStore` with a corresponding
`Keeper`. The PlasmaStore is responsible for maintaining a second, separate
blockchain that complies with the MVP specification, including `deposit`
blocks and other "internal" transactions. These "virtual" blocks are then broadcast
to the Root Chain.
This naive approach is, however, fundamentally flawed, as it by definition
diverges from the canonical chain maintained by Tendermint. This is further
exacerbated if the business logic for generating such transactions is
potentially non-deterministic, as this should not even be done in
`Begin/EndBlock`, which may, as a result, break consensus guarantees.
Additionally, this has serious implications for "watchers" - independent third parties,
or even an auxiliary blockchain, responsible for ensuring that blocks recorded
on the Root Chain are consistent with the Plasma chain's. Since, in this case,
the Plasma chain is inconsistent with the canonical one maintained by Tendermint
Core, it seems that there exists no compact means of verifying the legitimacy of
the Plasma chain without replaying every state transition from genesis (!).
### Solution 2: Broadcast to Tendermint Core from ABCI app
This approach is inspired by `ethermint`, in which Ethereum transactions are
relayed to Tendermint Core. It requires the app to maintain a client connection
to the consensus engine.
Whenever an "internal" transaction needs to be created, the proposer of the
current block broadcasts the transaction or transactions to Tendermint as
needed in order to ensure that the Tendermint chain and Plasma chain are
completely consistent.
This allows "internal" transactions to pass through the full consensus
process, and can be validated in methods like `CheckTx`, i.e., signed by the
proposer, is the semantically correct, etc. Note that this involves informing
the ABCI app of the block proposer, which was temporarily hacked in as a means
of conducting this experiment, although this should not be necessary when the
current proposer is passed to `BeginBlock`.
It is much easier to relay these transactions directly to the Root
Chain smart contract and/or maintain a "compressed" auxiliary chain comprised
of Plasma-friendly blocks that 100% reflect the canonical (Tendermint)
blockchain. Unfortunately, this approach is not idiomatic (i.e., it utilises the
Tendermint consensus engine in unintended ways). Additionally, it does not
allow the application developer to:
- Control the _ordering_ of transactions in the proposed block (e.g., index 0,
or 0 to `n` for coinbase transactions)
- Control the _number_ of transactions in the block (e.g., when a `deposit`
block is required)
Since determinism is of utmost importance in blockchain engineering, this approach,
while more viable, should also not be considered as fit for production.
## Decision
### `ProposeTx`
In order to address the difficulties described above, the ABCI interface must
expose an additional method, tentatively named `ProposeTx`.
It should have the following signature:
```
ProposeTx(RequestProposeTx) ResponseProposeTx
```
Where `RequestProposeTx` and `ResponseProposeTx` are `message`s with the
following shapes:
```
message RequestProposeTx {
    int64 next_block_height = 1; // height of the block the proposed tx would be part of
    Validator proposer = 2;      // the proposer details
}

message ResponseProposeTx {
    int64 num_tx = 1;       // the number of tx to include in proposed block
    repeated bytes txs = 2; // ordered transaction data to include in block
    bool exclusive = 3;     // whether the block should include other transactions (from `mempool`)
}
```
`ProposeTx` would be called before `mempool.Reap` at this
[line](https://github.com/tendermint/tendermint/blob/9cd9f3338bc80a12590631632c23c8dbe3ff5c34/consensus/state.go#L935).
Depending on whether `exclusive` is `true` or `false`, the proposed
transactions are then pushed on top of the transactions received from
`mempool.Reap`.
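A sketch of that call site, assuming the message shapes above; the interface names and helper are hypothetical:
```go
package proposer

// Hypothetical Go mirrors of the proto messages above.
type RequestProposeTx struct{ NextBlockHeight int64 }
type ResponseProposeTx struct {
    Txs       [][]byte
    Exclusive bool
}

type abciClient interface {
    ProposeTx(RequestProposeTx) ResponseProposeTx
}

type mempool interface {
    ReapMaxBytes(max int64) [][]byte
}

// gatherBlockTxs sketches the step just before block assembly.
func gatherBlockTxs(app abciClient, mp mempool, height, maxBytes int64) [][]byte {
    res := app.ProposeTx(RequestProposeTx{NextBlockHeight: height})
    if res.Exclusive {
        // e.g. a single-tx `deposit` block: no mempool txs at all.
        return res.Txs
    }
    // Proposed txs are pushed on top of whatever the mempool yields.
    return append(res.Txs, mp.ReapMaxBytes(maxBytes)...)
}
```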
### `DeliverTx`
Since the list of `tx` received from `ProposeTx` is _not_ passed through `CheckTx`,
it is probably a good idea to provide a means of differentiating "internal" transactions
from user-generated ones, in case the app developer needs/wants to take extra measures to
ensure the validity of the proposed transactions.
Therefore, the `RequestDeliverTx` message should be changed to provide an additional flag, like so:
```
message RequestDeliverTx {
    bytes tx = 1;
    bool internal = 2;
}
```
Alternatively, an additional method `DeliverProposeTx` may be added as an accompaniment to
`ProposeTx`. However, it is not clear at this stage whether this additional overhead is necessary
to preserve consensus guarantees, given that a simple flag may suffice for now.
## Status
Pending
## Consequences
### Positive
- Tendermint ABCI apps will be able to function as minimally viable Plasma chains.
- It will thereby become possible to add an extension to `cosmos-sdk` to enable
ABCI apps to support both IBC and Plasma, maximising interop.
- ABCI apps will have greater control and flexibility in managing blockchain state,
without having to resort to non-deterministic hacks and/or unsafe workarounds
### Negative
- Maintenance overhead of exposing additional ABCI method
- Potential security issues that may have been overlooked and must now be tested extensively
### Neutral
- ABCI developers must deal with increased (albeit nominal) API surface area.
## References
- [#1776 Plasma and "Internal" Transactions in ABCI Apps](https://github.com/tendermint/tendermint/issues/1776)
- [Minimal Viable Plasma](https://ethresear.ch/t/minimal-viable-plasma/426)
- [Plasma Cash: Plasma with much less per-user data checking](https://ethresear.ch/t/plasma-cash-plasma-with-much-less-per-user-data-checking/1298)


@@ -1,234 +0,0 @@
# ADR 024: SignBytes and validator types in privval
## Context
Currently, the messages exchanged between tendermint and a (potentially remote) signer/validator,
namely votes, proposals, and heartbeats, are encoded as JSON strings
(e.g., via `Vote.SignBytes(...)`) and then
signed. JSON encoding is sub-optimal for both hardware wallets
and for usage in ethereum smart contracts. Both issues are laid out in detail in [issue#1622].
Also, there are currently no differences between sign-requests and -replies, and there is no possibility
for a remote signer to include an error code or message in case something went wrong.
The messages exchanged between tendermint and a remote signer currently live in
[privval/socket.go] and encapsulate the corresponding types in [types].
[privval/socket.go]: https://github.com/tendermint/tendermint/blob/d419fffe18531317c28c29a292ad7d253f6cafdf/privval/socket.go#L496-L502
[issue#1622]: https://github.com/tendermint/tendermint/issues/1622
[types]: https://github.com/tendermint/tendermint/tree/master/types
## Decision
- restructure vote, proposal, and heartbeat such that their encoding is easily parseable by
hardware devices and smart contracts using a binary encoding format ([amino] in this case)
- split up the messages exchanged between tendermint and remote signers into requests and
responses (see details below)
- include an error type in responses
### Overview
```
+--------------+                      +----------------+
|              |     SignXRequest     |                |
|Remote signer |<---------------------+   tendermint   |
| (e.g. KMS)   |                      |                |
|              +--------------------->|                |
+--------------+     SignedXReply     +----------------+


SignXRequest {
    x: X
}

SignedXReply {
    x: X
    sig: Signature // []byte
    err: Error{
        code: int
        desc: string
    }
}
```
TODO: Alternatively, the type `X` might directly include the signature. A lot of places expect a vote with a
signature and do not necessarily deal with "Replies".
Still exploring what would work best here.
This would look like (exemplified using X = Vote):
```
Vote {
    // all fields besides signature
}

SignedVote {
    Vote      Vote
    Signature []byte
}

SignVoteRequest {
    Vote Vote
}

SignedVoteReply {
    Vote SignedVote
    Err  Error
}
**Note:** There was a related discussion around including a fingerprint of, or, the whole public-key
into each sign-request to tell the signer which corresponding private-key to
use to sign the message. This is particularly relevant in the context of the KMS
but is currently not considered in this ADR.
[amino]: https://github.com/tendermint/go-amino/
### Vote
As explained in [issue#1622] `Vote` will be changed to contain the following fields
(notation in protobuf-like syntax for easy readability):
```proto
// vanilla protobuf / amino encoded
message Vote {
    Version   fixed32
    Height    sfixed64
    Round     sfixed32
    VoteType  fixed32
    Timestamp Timestamp // << using protobuf definition
    BlockID   BlockID   // << as already defined
    ChainID   string    // at the end because length could vary a lot
}

// this is an amino registered type; like currently privval.SignVoteMsg:
// registered with "tendermint/socketpv/SignVoteRequest"
message SignVoteRequest {
    Vote vote
}

// amino registered type
// registered with "tendermint/socketpv/SignedVoteReply"
message SignedVoteReply {
    Vote      Vote
    Signature Signature
    Err       Error
}

// we will use this type everywhere below
message Error {
    Type        uint   // error code
    Description string // optional description
}
```
The `ChainID` gets moved into the vote message directly. Previously, it was injected
using the [Signable] interface method `SignBytes(chainID string) []byte`. Also, the
signature won't be included directly, only in the corresponding `SignedVoteReply` message.
[Signable]: https://github.com/tendermint/tendermint/blob/d419fffe18531317c28c29a292ad7d253f6cafdf/types/signable.go#L9-L11
### Proposal
```proto
// vanilla protobuf / amino encoded
message Proposal {
    Height           sfixed64
    Round            sfixed32
    Timestamp        Timestamp     // << using protobuf definition
    BlockPartsHeader PartSetHeader // as already defined
    POLRound         sfixed32
    POLBlockID       BlockID       // << as already defined
}

// amino registered with "tendermint/socketpv/SignProposalRequest"
message SignProposalRequest {
    Proposal proposal
}

// amino registered with "tendermint/socketpv/SignProposalReply"
message SignProposalReply {
    Prop Proposal
    Sig  Signature
    Err  Error // as defined above
}
```
### Heartbeat
**TODO**: clarify if heartbeat also needs a fixed offset and update the fields accordingly:
```proto
message Heartbeat {
    ValidatorAddress Address
    ValidatorIndex   int
    Height           int64
    Round            int
    Sequence         int
}

// amino registered with "tendermint/socketpv/SignHeartbeatRequest"
message SignHeartbeatRequest {
    Hb Heartbeat
}

// amino registered with "tendermint/socketpv/SignHeartbeatReply"
message SignHeartbeatReply {
    Hb  Heartbeat
    Sig Signature
    Err Error // as defined above
}
```
## PubKey
TBA - this needs further thought: e.g., what to do in the case of the KMS, which holds
several keys? How does it know with which key to reply?
## SignBytes
`SignBytes` will not require a `ChainID` parameter:
```golang
type Signable interface {
    SignBytes() []byte
}
```
And the implementation for vote, heartbeat, proposal will look like:
```golang
// type T is one of vote, heartbeat, proposal
func (tp *T) SignBytes() []byte {
    bz, err := cdc.MarshalBinary(tp)
    if err != nil {
        panic(err)
    }
    return bz
}
```
## Status
DRAFT
## Consequences
### Positive
The most relevant positive effect is that the signing bytes can easily be parsed by a
hardware module and a smart contract. Besides that:
- clearer separation between requests and responses
- added error messages enable better error handling
### Negative
- relatively huge change / refactoring touching quite some code
- lots of places assume a `Vote` with a signature included -> they will need to be adapted
- need to modify some interfaces
### Neutral
not even the Swiss are neutral


@@ -1,150 +0,0 @@
# ADR 025 Commit
## Context
Currently the `Commit` structure contains a lot of potentially redundant or unnecessary data.
It contains a list of precommits from every validator, where the precommit
includes the whole `Vote` structure. Thus each of the commit height, round,
type, and blockID are repeated for every validator, and could be deduplicated,
leading to very significant savings in block size.
```
type Commit struct {
    BlockID    BlockID `json:"block_id"`
    Precommits []*Vote `json:"precommits"`
}

type Vote struct {
    ValidatorAddress Address   `json:"validator_address"`
    ValidatorIndex   int       `json:"validator_index"`
    Height           int64     `json:"height"`
    Round            int       `json:"round"`
    Timestamp        time.Time `json:"timestamp"`
    Type             byte      `json:"type"`
    BlockID          BlockID   `json:"block_id"`
    Signature        []byte    `json:"signature"`
}
```
The original tracking issue for this is [#1648](https://github.com/tendermint/tendermint/issues/1648).
We have discussed replacing the `Vote` type in `Commit` with a new `CommitSig`
type, which includes at minimum the vote signature. The `Vote` type will
continue to be used in the consensus reactor and elsewhere.
A primary question is what should be included in the `CommitSig` beyond the
signature. One current constraint is that we must include a timestamp, since
this is how we calculate BFT time, though we may be able to change this [in the
future](https://github.com/tendermint/tendermint/issues/2840).
Other concerns here include:
- Validator Address [#3596](https://github.com/tendermint/tendermint/issues/3596) -
Should the CommitSig include the validator address? It is very convenient to
do so, but likely not necessary. This was also discussed in [#2226](https://github.com/tendermint/tendermint/issues/2226).
- Absent Votes [#3591](https://github.com/tendermint/tendermint/issues/3591) -
How to represent absent votes? Currently they are just present as `nil` in the
Precommits list, which is actually problematic for serialization
- Other BlockIDs [#3485](https://github.com/tendermint/tendermint/issues/3485) -
How to represent votes for nil and for other block IDs? We currently allow
votes for nil and votes for alternative block ids, but just ignore them
## Decision
Deduplicate the fields and introduce `CommitSig`:
```
type Commit struct {
    Height     int64
    Round      int
    BlockID    BlockID     `json:"block_id"`
    Precommits []CommitSig `json:"precommits"`
}

type CommitSig struct {
    BlockID          BlockIDFlag
    ValidatorAddress Address
    Timestamp        time.Time
    Signature        []byte
}

// indicate which BlockID the signature is for
type BlockIDFlag int

const (
    BlockIDFlagAbsent BlockIDFlag = iota // vote is not included in the Commit.Precommits
    BlockIDFlagCommit                    // voted for the Commit.BlockID
    BlockIDFlagNil                       // voted for nil
)
```
Re the concerns outlined in the context:
**Timestamp**: Leave the timestamp for now. Removing it and switching to
proposer based time will take more analysis and work, and will be left for a
future breaking change. In the meantime, the concerns with the current approach to
BFT time [can be
mitigated](https://github.com/tendermint/tendermint/issues/2840#issuecomment-529122431).
**ValidatorAddress**: we include it in the `CommitSig` for now. While this
does increase the block size unnecessarily (20 bytes per validator), it has some ergonomic and debugging advantages:
- `Commit` contains everything necessary to reconstruct `[]Vote`, and doesn't depend on additional access to a `ValidatorSet`
- Lite clients can check if they know the validators in a commit without
re-downloading the validator set
- Easy to see directly in a commit which validators signed what without having
to fetch the validator set
If and when we change the `CommitSig` again, for instance to remove the timestamp,
we can reconsider whether the ValidatorAddress should be removed.
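As a sketch of the first point above - reconstructing a `Vote` from a `Commit` without a `ValidatorSet` - assuming the `Commit`/`CommitSig` definitions above and the original `Vote` struct (the precommit type constant is a stand-in):
```go
// voteFromCommitSig is illustrative only; it assumes the Commit, CommitSig,
// and Vote types shown in this ADR.
const precommitType byte = 2 // stand-in for the real vote-type constant

func voteFromCommitSig(c Commit, cs CommitSig, valIdx int) Vote {
    var blockID BlockID
    if cs.BlockID == BlockIDFlagCommit {
        blockID = c.BlockID // signed the committed block
    } // BlockIDFlagNil / BlockIDFlagAbsent leave blockID empty

    return Vote{
        ValidatorAddress: cs.ValidatorAddress,
        ValidatorIndex:   valIdx,
        Height:           c.Height,
        Round:            c.Round,
        Timestamp:        cs.Timestamp,
        Type:             precommitType,
        BlockID:          blockID,
        Signature:        cs.Signature,
    }
}
```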
**Absent Votes**: we include absent votes explicitly with no Signature or
Timestamp but with the ValidatorAddress. This should resolve the serialization
issues and make it easy to see which validator's votes failed to be included.
**Other BlockIDs**: We use a single byte to indicate which blockID a `CommitSig`
is for. The only options are:
- `Absent` - no vote received from this validator, so no signature
- `Nil` - validator voted Nil - meaning they did not see a polka in time
- `Commit` - validator voted for this block
Note this means we don't allow votes for any other blockIDs. If a signature is
included in a commit, it is either for nil or the correct blockID. According to
the Tendermint protocol and assumptions, there is no way for a correct validator to
precommit for a conflicting blockID in the same round an actual commit was
created. This was the consensus from
[#3485](https://github.com/tendermint/tendermint/issues/3485)
We may want to consider supporting other blockIDs later, as a way to capture
evidence that might be helpful. We should clarify if/when/how doing so would
actually help first. To implement it, we could change the `Commit.BlockID`
field to a slice, where the first entry is the correct block ID and the other
entries are other BlockIDs that validators precommited before. The BlockIDFlag
enum can be extended to represent these additional block IDs on a per block
basis.
## Status
Accepted
## Consequences
### Positive
Removing the Type/Height/Round/Index and the BlockID saves roughly 80 bytes per precommit.
It varies because some integers are varint. The BlockID contains two 32-byte hashes and an integer,
and the Height is 8 bytes.
For a chain with 100 validators, that's up to 8kB in savings per block!
### Negative
- Large breaking change to the block and commit structure
- Requires differentiating in code between the Vote and CommitSig objects, which may add some complexity (votes need to be reconstructed to be verified and gossiped)
### Neutral
- Commit.Precommits no longer contains nil values


@@ -1,47 +0,0 @@
# ADR 026: General Merkle Proof
## Context
We are using raw `[]byte` for merkle proofs in `abci.ResponseQuery`. This makes it hard to handle multilayer merkle proofs and general cases. Here, a new interface `ProofOperator` is defined. Users can define their own Merkle proof format and layer proofs easily.
Goals:
- Layer Merkle proofs without decoding/reencoding
- Provide general way to chain proofs
- Make the proof format extensible, allowing third-party proof types
## Decision
### ProofOperator
`type ProofOperator` is an interface for Merkle proofs. The definition is:
```go
type ProofOperator interface {
    Run([][]byte) ([][]byte, error)
    GetKey() []byte
    ProofOp() ProofOp
}
```
Since a proof can treat various data types, `Run()` takes `[][]byte` as its argument, not `[]byte`. For example, a range proof's `Run()` can take multiple key-values as its argument. It will then return the root of the tree for further processing, calculated from the input values.
`ProofOperator` does not have to be a Merkle proof - it can be a function that transforms the argument for an intermediate process, e.g. prepending the length to the `[]byte`.
### ProofOp
`type ProofOp` is a protobuf message which is a triple of `Type string`, `Key []byte`, and `Data []byte`. `ProofOperator` and `ProofOp` are interconvertible, using `ProofOperator.ProofOp()` and `OpDecoder()`, where `OpDecoder` is a function that each proof type can register for its own encoding scheme. For example, we can add a byte for the encoding scheme before the serialized proof, supporting JSON decoding.
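A sketch of how layered operators could be chained, assuming the `ProofOperator` interface above (`verifyChain` is an illustrative helper, not part of the proposal):
```go
package merkle

import (
    "bytes"
    "errors"
)

type ProofOp struct {
    Type string
    Key  []byte
    Data []byte
}

type ProofOperator interface {
    Run([][]byte) ([][]byte, error)
    GetKey() []byte
    ProofOp() ProofOp
}

// verifyChain feeds each operator's output into the next one and compares
// the final value against the trusted root, with no intermediate
// decoding/reencoding.
func verifyChain(ops []ProofOperator, root []byte, leaf [][]byte) error {
    args := leaf
    var err error
    for _, op := range ops {
        if args, err = op.Run(args); err != nil {
            return err
        }
    }
    if len(args) != 1 || !bytes.Equal(args[0], root) {
        return errors.New("computed root does not match the trusted root")
    }
    return nil
}
```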
## Status
## Consequences
### Positive
- Layering becomes easier (no encoding/decoding at each step)
- Third-party proof formats become available
### Negative
- Larger size for abci.ResponseQuery
- Unintuitive proof chaining (it is not clear what `Run()` is doing)
- Additional code for registering `OpDecoder`s


@@ -1,38 +0,0 @@
# ADR 028: LibP2P Integration
## Changelog
- {date}: {changelog}
## Context
> This section contains all the context one needs to understand the current state, and why there is a problem. It should be as succinct as possible and introduce the high level idea behind the solution.
## Decision
> This section explains all of the details of the proposed solution, including implementation details.
> It should also describe affects / corollary items that may need to be changed as a part of this.
> If the proposed change will be large, please also indicate a way to do the change to maximize ease of review.
> (e.g. the optimal split of things to do between separate PR's)
## Status
> A decision may be "proposed" if it hasn't been agreed upon yet, or "accepted" once it is agreed upon. If a later ADR changes or reverses a decision, it may be marked as "deprecated" or "superseded" with a reference to its replacement.
{Deprecated|Proposed|Accepted|Declined}
## Consequences
> This section describes the consequences, after applying the decision. All consequences should be summarized here, not just the "positive" ones.
### Positive
### Negative
### Neutral
## References
> Are there any relevant PR comments, issues that led up to this, or articles referenced for why we made the given design choice? If so link them here!
- {reference link}


@@ -1,128 +0,0 @@
# ADR 029: Check block txs before prevote
## Changelog
04-10-2018: Update with link to issue
[#2384](https://github.com/tendermint/tendermint/issues/2384) and reason for rejection
19-09-2018: Initial Draft
## Context
We currently check a tx's validity in 2 ways:
1. Through checkTx in the mempool connection.
2. Through deliverTx in the consensus connection.
The 1st is called when an external tx comes in, so the node would be the proposer in this case. The 2nd is called when an external block comes in and reaches the commit phase; the node doesn't need to be the proposer of the block, but it should still check the txs in that block.
In the 2nd situation, if there are many invalid txs in the block, it would be too late for all nodes to discover that most txs in the block are invalid, and we'd rather not record invalid txs in the blockchain either.
## Proposed solution
Therefore, we should find a way to check the txs' validity before sending out a prevote. Currently we have cs.isProposalComplete() to judge whether a block is complete. We can add
```
func (blockExec *BlockExecutor) CheckBlock(block *types.Block) error {
    // check txs of block.
    for _, tx := range block.Txs {
        reqRes := blockExec.proxyApp.CheckTxAsync(tx)
        reqRes.Wait()
        if reqRes.Response == nil || reqRes.Response.GetCheckTx() == nil || reqRes.Response.GetCheckTx().Code != abci.CodeTypeOK {
            return errors.Errorf("tx %v check failed. response: %v", tx, reqRes.Response)
        }
    }
    return nil
}
```
such a method in BlockExecutor to check all txs' validity in that block.
However, the method should not be implemented exactly like that, because checkTx shares the same state the mempool uses in the app. So we should define a new interface method checkBlock in Application to indicate that it should use the same state as deliverTx.
```
type Application interface {
    // Info/Query Connection
    Info(RequestInfo) ResponseInfo                // Return application info
    SetOption(RequestSetOption) ResponseSetOption // Set application option
    Query(RequestQuery) ResponseQuery             // Query for state

    // Mempool Connection
    CheckTx(tx []byte) ResponseCheckTx // Validate a tx for the mempool

    // Consensus Connection
    InitChain(RequestInitChain) ResponseInitChain // Initialize blockchain with validators and other info from TendermintCore
    CheckBlock(RequestCheckBlock) ResponseCheckBlock
    BeginBlock(RequestBeginBlock) ResponseBeginBlock // Signals the beginning of a block
    DeliverTx(tx []byte) ResponseDeliverTx           // Deliver a tx for full processing
    EndBlock(RequestEndBlock) ResponseEndBlock       // Signals the end of a block, returns changes to the validator set
    Commit() ResponseCommit                          // Commit the state and return the application Merkle root hash
}
```
All apps should implement that method. For example, the counter app:
```
func (app *CounterApplication) CheckBlock(block types.Request_CheckBlock) types.ResponseCheckBlock {
    if app.serial {
        app.originalTxCount = app.txCount // backup the txCount state
        for _, tx := range block.CheckBlock.Block.Txs {
            if len(tx) > 8 {
                return types.ResponseCheckBlock{
                    Code: code.CodeTypeEncodingError,
                    Log:  fmt.Sprintf("Max tx size is 8 bytes, got %d", len(tx))}
            }
            tx8 := make([]byte, 8)
            copy(tx8[len(tx8)-len(tx):], tx)
            txValue := binary.BigEndian.Uint64(tx8)
            if txValue < uint64(app.txCount) {
                return types.ResponseCheckBlock{
                    Code: code.CodeTypeBadNonce,
                    Log:  fmt.Sprintf("Invalid nonce. Expected >= %v, got %v", app.txCount, txValue)}
            }
            app.txCount++
        }
    }
    return types.ResponseCheckBlock{Code: code.CodeTypeOK}
}
```
In BeginBlock, the app should restore the state to the original state before checking the block:
```
func (app *CounterApplication) DeliverTx(tx []byte) types.ResponseDeliverTx {
    if app.serial {
        app.txCount = app.originalTxCount // restore the txCount state
    }
    app.txCount++
    return types.ResponseDeliverTx{Code: code.CodeTypeOK}
}
```
The txCount is like the nonce in ethermint; it should be restored when entering the deliverTx phase, while some operations, like checking the tx signature, need not be done again. So deliverTx can focus on how a tx is applied, ignoring the checking of the tx, because all the checking has already been done in the checkBlock phase.
An optional optimization is to alter deliverTx into deliverBlock. Since the block has already been checked by checkBlock, all the txs in it are valid, so the app can cache the block, and in the deliverBlock phase it just needs to apply the block from the cache. This optimization can save network traffic in deliverTx.
## Status
Rejected
## Decision
Performance impact is considered too great. See [#2384](https://github.com/tendermint/tendermint/issues/2384)
## Consequences
### Positive
- more robust in defending against an adversary proposing a block full of invalid txs.
### Negative
- adds a new interface method; app logic needs to adjust to accommodate it.
- sending all the tx data over the ABCI twice
- potentially redundant validations (eg. signature checks in both CheckBlock and
DeliverTx)
### Neutral


@@ -1,458 +0,0 @@
# ADR 030: Consensus Refactor
## Context
One of the biggest challenges this project faces is to prove that the
implementations of the specifications are correct. Much like we strive to
formally verify our algorithms and protocols, we should work towards high
confidence in the correctness of our program code. One of those is the core
of Tendermint - Consensus - which currently resides in the `consensus` package.
Over time there has been high friction making changes to the package due to the
algorithm being scattered in a side-effectful container (the current
`ConsensusState`). In order to test the algorithm, a large object graph needs to
be set up, and even then the non-deterministic parts of the container
prevent high certainty. Ideally, we would have a 1-to-1 representation of the
[spec](https://github.com/tendermint/spec), ready and easy to test for domain
experts.
Addresses:
- [#1495](https://github.com/tendermint/tendermint/issues/1495)
- [#1692](https://github.com/tendermint/tendermint/issues/1692)
## Decision
To remedy these issues we plan a gradual, non-invasive refactoring of the
`consensus` package. We start off by isolating the consensus algorithm into
a pure function and a finite state machine to address the most pressing issue,
the lack of confidence. We do so while leaving the rest of the package intact,
with follow-up optional changes to improve the separation of concerns.
### Implementation changes
The core of Consensus can be modelled as a function with clearly defined inputs:
* `State` - data container for current round, height, etc.
* `Event` - significant events in the network
producing clear outputs:
* `State` - updated input
* `Message` - signal what actions to perform
```go
type Event int

const (
    EventUnknown Event = iota
    EventProposal
    Majority23PrevotesBlock
    Majority23PrecommitBlock
    Majority23PrevotesAny
    Majority23PrecommitAny
    TimeoutNewRound
    TimeoutPropose
    TimeoutPrevotes
    TimeoutPrecommit
)

type Message int

const (
    MessageUnknown Message = iota
    MessageProposal
    MessageVotes
    MessageDecision
)

type State struct {
    height      uint64
    round       uint64
    step        uint64
    lockedValue interface{} // TODO: Define proper type.
    lockedRound interface{} // TODO: Define proper type.
    validValue  interface{} // TODO: Define proper type.
    validRound  interface{} // TODO: Define proper type.
    // From the original notes: valid(v)
    valid interface{} // TODO: Define proper type.
    // From the original notes: proposer(h, r)
    proposer interface{} // TODO: Define proper type.
}

func Consensus(Event, State) (State, Message) {
    // Consolidate implementation.
}
```
Tracking of relevant information to feed `Event` into the function and acting on
the output is left to the `ConsensusExecutor` (formerly `ConsensusState`).
The benefits for testing surface nicely, as testing a sequence of events
against the algorithm could be as simple as the following example:
``` go
func TestConsensusXXX(t *testing.T) {
    type expected struct {
        message Message
        state   State
    }

    // Setup order of events, initial state and expectation.
    var (
        events = []struct {
            event Event
            want  expected
        }{
            // ...
        }
        state = State{
            // ...
        }
    )

    var msg Message
    for _, e := range events {
        state, msg = Consensus(e.event, state)

        // Test message expectation.
        if msg != e.want.message {
            t.Fatalf("have %v, want %v", msg, e.want.message)
        }

        // Test state expectation.
        if !reflect.DeepEqual(state, e.want.state) {
            t.Fatalf("have %v, want %v", state, e.want.state)
        }
    }
}
```
## Consensus Executor
## Consensus Core
```go
type Event interface{}

type EventNewHeight struct {
    Height      int64
    ValidatorId int
}

type EventNewRound HeightAndRound

type EventProposal struct {
    Height    int64
    Round     int
    Timestamp Time
    BlockID   BlockID
    POLRound  int
    Sender    int
}

type Majority23PrevotesBlock struct {
    Height  int64
    Round   int
    BlockID BlockID
}

type Majority23PrecommitBlock struct {
    Height  int64
    Round   int
    BlockID BlockID
}

type HeightAndRound struct {
    Height int64
    Round  int
}

type Majority23PrevotesAny HeightAndRound
type Majority23PrecommitAny HeightAndRound
type TimeoutPropose HeightAndRound
type TimeoutPrevotes HeightAndRound
type TimeoutPrecommit HeightAndRound

type Message interface{}

type MessageProposal struct {
    Height   int64
    Round    int
    BlockID  BlockID
    POLRound int
}

type VoteType int

const (
    VoteTypeUnknown VoteType = iota
    Prevote
    Precommit
)

type MessageVote struct {
    Height  int64
    Round   int
    BlockID BlockID
    Type    VoteType
}

type MessageDecision struct {
    Height  int64
    Round   int
    BlockID BlockID
}

type TriggerTimeout struct {
    Height   int64
    Round    int
    Duration Duration
}

type RoundStep int

const (
    RoundStepUnknown RoundStep = iota
    RoundStepPropose
    RoundStepPrevote
    RoundStepPrecommit
    RoundStepCommit
)

type State struct {
    Height           int64
    Round            int
    Step             RoundStep
    LockedValue      BlockID
    LockedRound      int
    ValidValue       BlockID
    ValidRound       int
    ValidatorId      int
    ValidatorSetSize int
}
func proposer(height int64, round int) int {}
func getValue() BlockID {}

func Consensus(event Event, state State) (State, Message, TriggerTimeout) {
    msg = nil
    timeout = nil
    switch event := event.(type) {
    case EventNewHeight:
        if event.Height > state.Height {
            state.Height = event.Height
            state.Round = -1
            state.Step = RoundStepPropose
            state.LockedValue = nil
            state.LockedRound = -1
            state.ValidValue = nil
            state.ValidRound = -1
            state.ValidatorId = event.ValidatorId
        }
        return state, msg, timeout

    case EventNewRound:
        if event.Height == state.Height and event.Round > state.Round {
            state.Round = event.Round
            state.Step = RoundStepPropose
            if proposer(state.Height, state.Round) == state.ValidatorId {
                proposal = state.ValidValue
                if proposal == nil {
                    proposal = getValue()
                }
                msg = MessageProposal{state.Height, state.Round, proposal, state.ValidRound}
            }
            timeout = TriggerTimeout{state.Height, state.Round, timeoutPropose(state.Round)}
        }
        return state, msg, timeout

    case EventProposal:
        if event.Height == state.Height and event.Round == state.Round and
            event.Sender == proposer(state.Height, state.Round) and state.Step == RoundStepPropose {
            if event.POLRound >= state.LockedRound or event.BlockID == state.BlockID or state.LockedRound == -1 {
                msg = MessageVote{state.Height, state.Round, event.BlockID, Prevote}
            }
            state.Step = RoundStepPrevote
        }
        return state, msg, timeout

    case TimeoutPropose:
        if event.Height == state.Height and event.Round == state.Round and state.Step == RoundStepPropose {
            msg = MessageVote{state.Height, state.Round, nil, Prevote}
            state.Step = RoundStepPrevote
        }
        return state, msg, timeout

    case Majority23PrevotesBlock:
        if event.Height == state.Height and event.Round == state.Round and state.Step >= RoundStepPrevote and event.Round > state.ValidRound {
            state.ValidRound = event.Round
            state.ValidValue = event.BlockID
            if state.Step == RoundStepPrevote {
                state.LockedRound = event.Round
                state.LockedValue = event.BlockID
                msg = MessageVote{state.Height, state.Round, event.BlockID, Precommit}
                state.Step = RoundStepPrecommit
            }
        }
        return state, msg, timeout

    case Majority23PrevotesAny:
        if event.Height == state.Height and event.Round == state.Round and state.Step == RoundStepPrevote {
            timeout = TriggerTimeout{state.Height, state.Round, timeoutPrevote(state.Round)}
        }
        return state, msg, timeout

    case TimeoutPrevote:
        if event.Height == state.Height and event.Round == state.Round and state.Step == RoundStepPrevote {
            msg = MessageVote{state.Height, state.Round, nil, Precommit}
            state.Step = RoundStepPrecommit
        }
        return state, msg, timeout

    case Majority23PrecommitBlock:
        if event.Height == state.Height {
            state.Step = RoundStepCommit
            state.LockedValue = event.BlockID
        }
        return state, msg, timeout

    case Majority23PrecommitAny:
        if event.Height == state.Height and event.Round == state.Round {
            timeout = TriggerTimeout{state.Height, state.Round, timeoutPrecommit(state.Round)}
        }
        return state, msg, timeout

    case TimeoutPrecommit:
        if event.Height == state.Height and event.Round == state.Round {
            state.Round = state.Round + 1
        }
        return state, msg, timeout
    }
}
func ConsensusExecutor() {
    proposal = nil
    votes = HeightVoteSet{Height: 1}
    state = State{
        Height:      1,
        Round:       0,
        Step:        RoundStepPropose,
        LockedValue: nil,
        LockedRound: -1,
        ValidValue:  nil,
        ValidRound:  -1,
    }

    event = EventNewHeight{1, id}
    state, msg, timeout = Consensus(event, state)

    event = EventNewRound{state.Height, 0}
    state, msg, timeout = Consensus(event, state)

    if msg != nil {
        send msg
    }
    if timeout != nil {
        trigger timeout
    }

    for {
        select {
        case message := <-msgCh:
            switch msg := message.(type) {
            case MessageProposal:

            case MessageVote:
                if msg.Height == state.Height {
                    newVote = votes.AddVote(msg)
                    if newVote {
                        switch msg.Type {
                        case Prevote:
                            prevotes = votes.Prevotes(msg.Round)
                            if prevotes.WeakCertificate() and msg.Round > state.Round {
                                event = EventNewRound{msg.Height, msg.Round}
                                state, msg, timeout = Consensus(event, state)
                                state = handleStateChange(state, msg, timeout)
                            }

                            if blockID, ok = prevotes.TwoThirdsMajority(); ok and blockID != nil {
                                if msg.Round == state.Round and hasBlock(blockID) {
                                    event = Majority23PrevotesBlock{msg.Height, msg.Round, blockID}
                                    state, msg, timeout = Consensus(event, state)
                                    state = handleStateChange(state, msg, timeout)
                                }
                                if proposal != nil and proposal.POLRound == msg.Round and hasBlock(blockID) {
                                    event = EventProposal{
                                        Height:   state.Height,
                                        Round:    state.Round,
                                        BlockID:  blockID,
                                        POLRound: proposal.POLRound,
                                        Sender:   message.Sender,
                                    }
                                    state, msg, timeout = Consensus(event, state)
                                    state = handleStateChange(state, msg, timeout)
                                }
                            }

                            if prevotes.HasTwoThirdsAny() and msg.Round == state.Round {
                                event = Majority23PrevotesAny{msg.Height, msg.Round, blockID}
                                state, msg, timeout = Consensus(event, state)
                                state = handleStateChange(state, msg, timeout)
                            }

                        case Precommit:
                        }
                    }
                }
            }
        case timeout := <-timeoutCh:

        case block := <-blockCh:
        }
    }
}

func handleStateChange(state, msg, timeout) State {
    if state.Step == Commit {
        state = ExecuteBlock(state.LockedValue)
    }
    if msg != nil {
        send msg
    }
    if timeout != nil {
        trigger timeout
    }
}
```
### Implementation roadmap
* implement proposed implementation
* replace currently scattered calls in `ConsensusState` with calls to the new
`Consensus` function
* rename `ConsensusState` to `ConsensusExecutor` to avoid confusion
* propose design for improved separation and clear information flow between
`ConsensusExecutor` and `ConsensusReactor`
## Status
Draft.
## Consequences
### Positive
- isolated implementation of the algorithm
- improved testability - simpler to prove correctness
- clearer separation of concerns - easier to reason about
### Negative
### Neutral


@@ -1,38 +0,0 @@
# ADR 031: Changelog Structure
## Changelog
- {date}: {changelog}
## Context
> This section contains all the context one needs to understand the current state, and why there is a problem. It should be as succinct as possible and introduce the high level idea behind the solution.
## Decision
> This section explains all of the details of the proposed solution, including implementation details.
> It should also describe affects / corollary items that may need to be changed as a part of this.
> If the proposed change will be large, please also indicate a way to do the change to maximize ease of review.
> (e.g. the optimal split of things to do between separate PR's)
## Status
> A decision may be "proposed" if it hasn't been agreed upon yet, or "accepted" once it is agreed upon. If a later ADR changes or reverses a decision, it may be marked as "deprecated" or "superseded" with a reference to its replacement.
{Deprecated|Proposed|Accepted|Declined}
## Consequences
> This section describes the consequences, after applying the decision. All consequences should be summarized here, not just the "positive" ones.
### Positive
### Negative
### Neutral
## References
> Are there any relevant PR comments, issues that led up to this, or articles referenced for why we made the given design choice? If so link them here!
- {reference link}


@@ -1,247 +0,0 @@
# ADR 033: pubsub 2.0
Author: Anton Kaliaev (@melekes)
## Changelog
02-10-2018: Initial draft
16-01-2019: Second version based on our conversation with Jae
17-01-2019: Third version explaining how new design solves current issues
25-01-2019: Fourth version to treat buffered and unbuffered channels differently
## Context
Since the initial version of the pubsub, a number of issues have been
raised: [#951], [#1879], [#1880]. Some of them are high-level issues questioning the
core design choices made. Others are minor and mostly about the interface of the
`Subscribe()` / `Publish()` functions.
### Sync vs Async
Now, when publishing a message to subscribers, we can do it in a goroutine:
_using channels for data transmission_
```go
for each subscriber {
    out := subscriber.outc
    go func() {
        out <- msg
    }()
}
```
_by invoking callback functions_
```go
for each subscriber {
    go subscriber.callbackFn()
}
```
This gives us greater performance and allows us to avoid the "slow client problem"
(when other subscribers have to wait for a slow subscriber). A pool of
goroutines can be used to avoid uncontrolled memory growth.
In certain cases, this is what you want. But in our case, because we need
strict ordering of events (if event A was published before B, the guaranteed
delivery order will be A -> B), we can't publish a msg in a new goroutine every time.
We can also have a goroutine per subscriber, although we'd need to be careful
with the number of subscribers. It's also more difficult to implement, and it's
unclear if we'd benefit from it (because we'd be forced to create N additional
channels to distribute msgs to these goroutines).
### Non-blocking send
There is also the question of whether we should have a non-blocking send.
Currently, sends are blocking, so publishing to one client can block on
publishing to another. This means a slow or unresponsive client can halt the
system. Instead, we can use a non-blocking send:
```go
for each subscriber {
    out := subscriber.outc
    select {
    case out <- msg:
    default:
        log("subscriber %v buffer is full, skipping...")
    }
}
```
This fixes the "slow client problem", but there is no way for a slow client to
know if it had missed a message. We could return a second channel and close it
to indicate subscription termination. On the other hand, if we're going to
stick with blocking send, **devs must always ensure subscriber's handling code
does not block**, which is a hard task to put on their shoulders.
The interim option is to run goroutines pool for a single message, wait for all
goroutines to finish. This will solve "slow client problem", but we'd still
have to wait `max(goroutine_X_time)` before we can publish the next message.
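A sketch of that interim option, assuming subscribers are represented by their output channels:
```go
package pubsub

import "sync"

// publishAll fans one message out to all subscribers in parallel and then
// waits, so ordering across messages is preserved while a slow subscriber
// delays publishing by at most max(goroutine time), not the sum.
func publishAll(subscribers []chan<- interface{}, msg interface{}) {
    var wg sync.WaitGroup
    for _, out := range subscribers {
        wg.Add(1)
        go func(out chan<- interface{}) {
            defer wg.Done()
            out <- msg
        }(out)
    }
    wg.Wait() // the next message cannot be published until every send completes
}
```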
### Channels vs Callbacks
Yet another question is whether we should use channels for message transmission or
call subscriber-defined callback functions. Callback functions give subscribers
more flexibility - you can use mutexes in there, channels, spawn goroutines,
anything you really want. But they also capture local scope, which can result in
memory leaks and/or increased memory usage.
Go channels are the de-facto standard for carrying data between goroutines.
### Why does `Subscribe()` accept an `out` channel?
Because in our tests, we create buffered channels (cap: 1). Alternatively, we
could make capacity an argument and return a channel.
## Decision
### MsgAndTags
Use a `MsgAndTags` struct on the subscription channel to indicate what tags the
msg matched.
```go
type MsgAndTags struct {
    Msg  interface{}
    Tags TagMap
}
```
### Subscription Struct
Change `Subscribe()` function to return a `Subscription` struct:
```go
type Subscription struct {
    // private fields
}

func (s *Subscription) Out() <-chan MsgAndTags
func (s *Subscription) Cancelled() <-chan struct{}
func (s *Subscription) Err() error
```
`Out()` returns a channel onto which messages and tags are published.
`Unsubscribe`/`UnsubscribeAll` does not close the channel, to prevent clients
from receiving a nil message.
`Cancelled()` returns a channel that's closed when the subscription is terminated
and is supposed to be used in a select statement.
If the channel returned by `Cancelled()` is not closed yet, `Err()` returns nil.
If the channel is closed, `Err()` returns a non-nil error explaining why:
`ErrUnsubscribed` if the subscriber chose to unsubscribe,
`ErrOutOfCapacity` if the subscriber is not pulling messages fast enough and the channel returned by `Out()` became full.
After `Err()` returns a non-nil error, successive calls to `Err()` return the same error.
```go
subscription, err := pubsub.Subscribe(...)
if err != nil {
    // ...
}
for {
    select {
    case msgAndTags := <-subscription.Out():
        // ...
    case <-subscription.Cancelled():
        return subscription.Err()
    }
}
```
### Capacity and Subscriptions
Make the `Out()` channel buffered (with capacity 1) by default. In most cases, we want to
terminate the slow subscriber. Only in rare cases do we want to block the pubsub
(e.g. when debugging consensus). This should lower the chances of the pubsub
being frozen.
```go
// outCap can be used to set capacity of Out channel
// (1 by default, must be greater than 0).
Subscribe(ctx context.Context, clientID string, query Query, outCap ...int) (Subscription, error) {
```
Use a different function for an unbuffered channel:
```go
// Subscription uses an unbuffered channel. Publishing will block.
SubscribeUnbuffered(ctx context.Context, clientID string, query Query) (Subscription, error) {
```
SubscribeUnbuffered should not be exposed to users.
### Blocking/Nonblocking
The publisher should treat these kinds of channels separately.
It should block on unbuffered channels (for use with internal consensus events
in the consensus tests) and not block on the buffered ones. If a client is too
slow to keep up with its messages, its subscription is terminated:
```go
for each subscription {
    out := subscription.outChan
    if cap(out) == 0 {
        // block on unbuffered channel
        out <- msg
    } else {
        // don't block on buffered channels
        select {
        case out <- msg:
        default:
            // set the error, notify on the cancel chan
            subscription.err = fmt.Errorf("client is too slow for msg")
            close(subscription.cancelChan)
            // ... unsubscribe and close out
        }
    }
}
```
### How does this new design solve the current issues?
[#951] ([#1880]):
Because of the non-blocking send, a situation where we deadlock is no longer
possible. If the client stops reading messages, it will be removed.
[#1879]:
MsgAndTags is used now instead of a plain message.
### Future problems and their possible solutions
[#2826]
One question I am still pondering: how to prevent pubsub from slowing
down consensus. We can increase the pubsub queue size (which is 0 now). Also,
it's probably a good idea to limit the total number of subscribers.
This can be done automatically. Say we set the queue size to 1000 and, when it's >=
80% full, refuse new subscriptions.
## Status
In review
## Consequences
### Positive
- more idiomatic interface
- subscribers know what tags a msg was published with
- subscribers are aware of the reason their subscription was cancelled
### Negative
- (since v1) no concurrency when it comes to publishing messages
### Neutral
[#951]: https://github.com/tendermint/tendermint/issues/951
[#1879]: https://github.com/tendermint/tendermint/issues/1879
[#1880]: https://github.com/tendermint/tendermint/issues/1880
[#2826]: https://github.com/tendermint/tendermint/issues/2826


@@ -1,72 +0,0 @@
# ADR 034: PrivValidator file structure
## Changelog
03-11-2018: Initial Draft
## Context
For now, the PrivValidator file `priv_validator.json` contains both mutable and immutable parts.
Even in an insecure mode which does not encrypt the private key on disk, it is reasonable to separate
the mutable and immutable parts.
References:
[#1181](https://github.com/tendermint/tendermint/issues/1181)
[#2657](https://github.com/tendermint/tendermint/issues/2657)
[#2313](https://github.com/tendermint/tendermint/issues/2313)
## Proposed Solution
We can split mutable and immutable parts with two structs:
```go
// FilePVKey stores the immutable part of PrivValidator
type FilePVKey struct {
    Address types.Address  `json:"address"`
    PubKey  crypto.PubKey  `json:"pub_key"`
    PrivKey crypto.PrivKey `json:"priv_key"`

    filePath string
}

// FilePVLastSignState stores the mutable part of PrivValidator
type FilePVLastSignState struct {
    Height    int64        `json:"height"`
    Round     int          `json:"round"`
    Step      int8         `json:"step"`
    Signature []byte       `json:"signature,omitempty"`
    SignBytes cmn.HexBytes `json:"signbytes,omitempty"`

    filePath string
    mtx      sync.Mutex
}
```
Then we can combine `FilePVKey` with `FilePVLastSignState` and get the original `FilePV`.
```go
type FilePV struct {
    Key           FilePVKey
    LastSignState FilePVLastSignState
}
```
As discussed, `FilePV` should be located in `config`, and `FilePVLastSignState` should be stored in `data`. The
store path of each file should be specified in `config.yml`.
What we need to do next is to change the methods of `FilePV`.
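A minimal sketch of loading and combining the two files, assuming the struct definitions above; the helper name and imports (`encoding/json`, `os`) are illustrative:
```go
// LoadFilePV is an illustrative sketch, assuming the FilePVKey,
// FilePVLastSignState, and FilePV structs defined above: read the immutable
// key file (under `config`) and the mutable last-sign-state file (under
// `data`), then combine them.
func LoadFilePV(keyPath, statePath string) (*FilePV, error) {
    pv := new(FilePV)

    keyBytes, err := os.ReadFile(keyPath)
    if err != nil {
        return nil, err
    }
    if err := json.Unmarshal(keyBytes, &pv.Key); err != nil {
        return nil, err
    }

    stateBytes, err := os.ReadFile(statePath)
    if err != nil {
        return nil, err
    }
    if err := json.Unmarshal(stateBytes, &pv.LastSignState); err != nil {
        return nil, err
    }
    return pv, nil
}
```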
## Status
Draft.
## Consequences
### Positive
- separates the mutable and immutable parts of PrivValidator
### Negative
- need to add more config for file path
### Neutral


@@ -1,40 +0,0 @@
# ADR 035: Documentation
Author: @zramsay (Zach Ramsay)
## Changelog
### November 2nd 2018
- initial write-up
## Context
The Tendermint documentation has undergone several changes until settling on the current model. Originally, the documentation was hosted on the website and had to be updated asynchronously from the code. Along with the other repositories requiring documentation, the whole stack moved to using Read The Docs to automatically generate, publish, and host the documentation. This, however, was insufficient; the RTD site had advertisements, it wasn't easily accessible to devs, didn't collect metrics, was another set of external links, etc.
## Decision
For two reasons, the decision was made to use VuePress:
1) the ability to get metrics (implemented on both Tendermint and the SDK)
2) the ability to host the documentation on the website as a `/docs` endpoint.
This is done while maintaining synchrony between the docs and code, i.e., the website is built whenever the docs are updated.
## Status
The two points above have been implemented; the `config.js` has a Google Analytics identifier and the documentation workflow has been up and running largely without problems for several months. Details about the documentation build & workflow can be found [here](../DOCS_README.md).
## Consequences
Because of the organizational separation between Tendermint & Cosmos, there is a challenge of "what goes where" for certain aspects of documentation.
### Positive
This architecture is largely positive relative to prior docs arrangements.
### Negative
A significant portion of the docs automation / build process is in private repos with limited access/visibility to devs. However, these tasks are handled by the SRE team.
### Neutral


@@ -1,38 +0,0 @@
# ADR 036: Empty Blocks via ABCI
## Changelog
- {date}: {changelog}
## Context
> This section contains all the context one needs to understand the current state, and why there is a problem. It should be as succinct as possible and introduce the high level idea behind the solution.
## Decision
> This section explains all of the details of the proposed solution, including implementation details.
> It should also describe effects / corollary items that may need to be changed as a part of this.
> If the proposed change will be large, please also indicate a way to do the change to maximize ease of review.
> (e.g. the optimal split of things to do between separate PR's)
## Status
> A decision may be "proposed" if it hasn't been agreed upon yet, or "accepted" once it is agreed upon. If a later ADR changes or reverses a decision, it may be marked as "deprecated" or "superseded" with a reference to its replacement.
{Deprecated|Proposed|Accepted|Declined}
## Consequences
> This section describes the consequences, after applying the decision. All consequences should be summarized here, not just the "positive" ones.
### Positive
### Negative
### Neutral
## References
> Are there any relevant PR comments, issues that led up to this, or articles referenced for why we made the given design choice? If so link them here!
- {reference link}


@@ -1,100 +0,0 @@
# ADR 037: Deliver Block
Author: Daniil Lashin (@danil-lashin)
## Changelog
13-03-2019: Initial draft
## Context
Initial conversation: https://github.com/tendermint/tendermint/issues/2901
Some applications can handle transactions in parallel, or at least some
part of tx processing can be parallelized. Currently it is not possible for developers
to execute txs in parallel because Tendermint delivers them sequentially.
## Decision
Tendermint currently has `BeginBlock`, `EndBlock`, `Commit` and `DeliverTx` steps
while executing a block. This doc proposes merging these steps into one `DeliverBlock`
step. It will allow developers of applications to decide how they want to
execute transactions (in parallel or sequentially). It will also simplify and
speed up communications between the application and Tendermint.
As @jaekwon [mentioned](https://github.com/tendermint/tendermint/issues/2901#issuecomment-477746128)
in the discussion, not all applications will benefit from this solution. In some cases,
when an application handles transactions sequentially, it may slow down the blockchain,
because it needs to wait until the full block is transmitted to the application before it can start
processing it. Also, in the case of a complete change of ABCI, we would need to force all apps
to change their implementation completely. That's why I propose to introduce one more ABCI
type.
## Implementation Changes
In addition to the default application interface, which currently has this structure
```go
type Application interface {
	// Info and Mempool methods...

	// Consensus Connection
	InitChain(RequestInitChain) ResponseInitChain    // Initialize blockchain with validators and other info from TendermintCore
	BeginBlock(RequestBeginBlock) ResponseBeginBlock // Signals the beginning of a block
	DeliverTx(tx []byte) ResponseDeliverTx           // Deliver a tx for full processing
	EndBlock(RequestEndBlock) ResponseEndBlock       // Signals the end of a block, returns changes to the validator set
	Commit() ResponseCommit                          // Commit the state and return the application Merkle root hash
}
```
this doc proposes to add one more:
```go
type Application interface {
	// Info and Mempool methods...

	// Consensus Connection
	InitChain(RequestInitChain) ResponseInitChain          // Initialize blockchain with validators and other info from TendermintCore
	DeliverBlock(RequestDeliverBlock) ResponseDeliverBlock // Deliver full block
	Commit() ResponseCommit                                // Commit the state and return the application Merkle root hash
}

type RequestDeliverBlock struct {
	Hash                []byte
	Header              Header
	Txs                 Txs
	LastCommitInfo      LastCommitInfo
	ByzantineValidators []Evidence
}

type ResponseDeliverBlock struct {
	ValidatorUpdates      []ValidatorUpdate
	ConsensusParamUpdates *ConsensusParams
	Tags                  []kv.Pair
	TxResults             []ResponseDeliverTx
}
```
Also, we will need to add a new config param specifying which kind of ABCI the application uses.
For example, it could be `abci_type`. Then we would have two types:
- `advanced` - current ABCI
- `simple` - proposed implementation
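To illustrate the flexibility this buys applications, here is a hedged sketch of a `simple`-type application fanning transactions out to goroutines inside `DeliverBlock`. `MyApp` and `processTx` are hypothetical; detecting conflicts between concurrently executed transactions remains the application's responsibility:
```go
import "sync"

// Sketch only: under this proposal the application, not Tendermint,
// decides how the block's txs are executed.
func (app *MyApp) DeliverBlock(req RequestDeliverBlock) ResponseDeliverBlock {
	results := make([]ResponseDeliverTx, len(req.Txs))
	var wg sync.WaitGroup
	for i, tx := range req.Txs {
		wg.Add(1)
		go func(i int, tx []byte) {
			defer wg.Done()
			results[i] = app.processTx(tx) // hypothetical app-defined execution
		}(i, tx)
	}
	wg.Wait() // all txs processed before returning the block result
	return ResponseDeliverBlock{TxResults: results}
}
```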
## Status
In review
## Consequences
### Positive
- much simpler introduction and tutorials for new developers (instead of implementing 5 methods they
will need to implement only 3)
- txs can be handled in parallel
- simpler interface
- faster communications between Tendermint and application
### Negative
- Tendermint should now support 2 kinds of ABCI


@@ -1,38 +0,0 @@
# ADR 038: Non-zero start height
## Changelog
- {date}: {changelog}
## Context
> This section contains all the context one needs to understand the current state, and why there is a problem. It should be as succinct as possible and introduce the high level idea behind the solution.
## Decision
> This section explains all of the details of the proposed solution, including implementation details.
> It should also describe effects / corollary items that may need to be changed as a part of this.
> If the proposed change will be large, please also indicate a way to do the change to maximize ease of review.
> (e.g. the optimal split of things to do between separate PR's)
## Status
> A decision may be "proposed" if it hasn't been agreed upon yet, or "accepted" once it is agreed upon. If a later ADR changes or reverses a decision, it may be marked as "deprecated" or "superseded" with a reference to its replacement.
{Deprecated|Proposed|Accepted|Declined}
## Consequences
> This section describes the consequences, after applying the decision. All consequences should be summarized here, not just the "positive" ones.
### Positive
### Negative
### Neutral
## References
> Are there any relevant PR comments, issues that led up to this, or articles referenced for why we made the given design choice? If so link them here!
- {reference link}


@@ -1,159 +0,0 @@
# ADR 039: Peer Behaviour Interface
## Changelog
* 07-03-2019: Initial draft
* 14-03-2019: Updates from feedback
## Context
The responsibility for signaling and acting upon peer behaviour lacks a single
owning component and is heavily coupled with the network stack[<sup>1</sup>](#references). Reactors
maintain a reference to the `p2p.Switch`, which they use to call
`switch.StopPeerForError(...)` when a peer misbehaves and
`switch.MarkAsGood(...)` when a peer contributes in some meaningful way.
While the switch handles `StopPeerForError` internally, the `MarkAsGood`
method delegates to another component, `p2p.AddrBook`. This scheme of delegation
across the Switch obscures the responsibility for handling peer behaviour
and ties the reactors up in a larger dependency graph when testing.
## Decision
Introduce a `PeerBehaviour` interface and concrete implementations which
provide methods for reactors to signal peer behaviour without directly
coupling to the `p2p.Switch`. Introduce an `ErrorBehaviourPeer` type to provide
concrete reasons for stopping peers, and a `GoodBehaviourPeer` type to provide
concrete ways in which a peer contributes.
### Implementation Changes
PeerBehaviour then becomes an interface for signaling peer errors as well
as for marking peers as `good`.
```go
type PeerBehaviour interface {
	Behaved(peer Peer, reason GoodBehaviourPeer)
	Errored(peer Peer, reason ErrorBehaviourPeer)
}
```
Instead of signaling peers to stop with arbitrary reasons (`reason interface{}`),
we introduce a concrete error type `ErrorBehaviourPeer`:
```go
type ErrorBehaviourPeer int

const (
	ErrorBehaviourUnknown = iota
	ErrorBehaviourBadMessage
	ErrorBehaviourMessageOutofOrder
	...
)
```
To provide additional information on the ways a peer contributed, we introduce
the GoodBehaviourPeer type.
```go
type GoodBehaviourPeer int

const (
	GoodBehaviourVote = iota
	GoodBehaviourBlockPart
	...
)
```
As a first iteration we provide a concrete implementation which wraps
the switch:
```go
type SwitchedPeerBehaviour struct {
	sw *Switch
}

func (spb *SwitchedPeerBehaviour) Errored(peer Peer, reason ErrorBehaviourPeer) {
	spb.sw.StopPeerForError(peer, reason)
}

func (spb *SwitchedPeerBehaviour) Behaved(peer Peer, reason GoodBehaviourPeer) {
	spb.sw.MarkPeerAsGood(peer)
}

func NewSwitchedPeerBehaviour(sw *Switch) *SwitchedPeerBehaviour {
	return &SwitchedPeerBehaviour{
		sw: sw,
	}
}
```
Reactors, which are often difficult to unit test[<sup>2</sup>](#references), could use an implementation which exposes the signals produced by the reactor in
manufactured scenarios:
```go
type ErrorBehaviours map[Peer][]ErrorBehaviourPeer
type GoodBehaviours map[Peer][]GoodBehaviourPeer

type StorePeerBehaviour struct {
	eb ErrorBehaviours
	gb GoodBehaviours
}

func NewStorePeerBehaviour() *StorePeerBehaviour {
	return &StorePeerBehaviour{
		eb: make(ErrorBehaviours),
		gb: make(GoodBehaviours),
	}
}

func (spb *StorePeerBehaviour) Errored(peer Peer, reason ErrorBehaviourPeer) {
	if _, ok := spb.eb[peer]; !ok {
		spb.eb[peer] = []ErrorBehaviourPeer{reason}
	} else {
		spb.eb[peer] = append(spb.eb[peer], reason)
	}
}

func (spb *StorePeerBehaviour) GetErrored() ErrorBehaviours {
	return spb.eb
}

func (spb *StorePeerBehaviour) Behaved(peer Peer, reason GoodBehaviourPeer) {
	if _, ok := spb.gb[peer]; !ok {
		spb.gb[peer] = []GoodBehaviourPeer{reason}
	} else {
		spb.gb[peer] = append(spb.gb[peer], reason)
	}
}

func (spb *StorePeerBehaviour) GetBehaved() GoodBehaviours {
	return spb.gb
}
```
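A sketch of how a reactor test might use it, assuming a hypothetical `newTestPeer` helper and a reactor wired to accept a `PeerBehaviour` (this is an illustration, not existing test code):
```go
import "testing"

func TestReactorRecordsBadMessage(t *testing.T) {
	pb := NewStorePeerBehaviour()
	peer := newTestPeer() // hypothetical test helper

	// ... drive the reactor under test so it signals on pb; here we
	// simulate the signal directly for brevity ...
	pb.Errored(peer, ErrorBehaviourBadMessage)

	got := pb.GetErrored()[peer]
	if len(got) != 1 || got[0] != ErrorBehaviourBadMessage {
		t.Fatalf("expected one ErrorBehaviourBadMessage signal, got %v", got)
	}
}
```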
## Status
Accepted
## Consequences
### Positive
* De-couple signaling from acting upon peer behaviour.
* Reduce the coupling of reactors to the Switch and the network stack
* The responsibility of managing peer behaviour can be migrated to
a single component instead of split between the switch and the
address book.
### Negative
* The first iteration will simply wrap the Switch and introduce a
level of indirection.
### Neutral
## References
1. Issue [#2067](https://github.com/tendermint/tendermint/issues/2067): P2P Refactor
2. PR: [#3506](https://github.com/tendermint/tendermint/pull/3506): ADR 036: Blockchain Reactor Refactor


@@ -1,534 +0,0 @@
# ADR 040: Blockchain Reactor Refactor
## Changelog
19-03-2019: Initial draft
## Context
The Blockchain Reactor's high level responsibility is to enable peers who are far behind the current state of the
blockchain to quickly catch up by downloading many blocks in parallel from its peers, verifying block correctness, and
executing them against the ABCI application. We call the protocol executed by the Blockchain Reactor `fast-sync`.
The current architecture diagram of the blockchain reactor can be found here:
![Blockchain Reactor Architecture Diagram](img/bc-reactor.png)
The current architecture consists of dozens of routines and is tightly dependent on the `Switch`, making writing
unit tests almost impossible. Current tests require setting up complex dependency graphs and dealing with concurrency.
Note that having dozens of routines is in this case overkill, as most of the time routines sit idle waiting for
something to happen (a message to arrive or a timeout to expire). Due to the dependency on the `Switch`, testing relatively
complex network scenarios and failures (for example adding and removing peers) is a very complex task and frequently leads
to complex tests with non-deterministic behavior ([#3400]). The impossibility of writing proper tests keeps confidence in
the code low, and this has resulted in several issues (some have been fixed in the meantime and some are still open):
[#3400], [#2897], [#2896], [#2699], [#2888], [#2457], [#2622], [#2026].
## Decision
To remedy these issues we plan a major refactor of the blockchain reactor. The proposed architecture is largely inspired
by ADR-30 and is presented on the following diagram:
![Blockchain Reactor Refactor Diagram](img/bc-reactor-refactor.png)
We suggest a concurrency architecture where the core algorithm (we call it `Controller`) is extracted into a finite
state machine. The active routine of the reactor is called `Executor` and is responsible for receiving and sending
messages from/to peers and for triggering timeouts. Which messages should be sent and which timeouts triggered is determined
mostly by the `Controller`. The exception is the `Peer Heartbeat` mechanism, which is the `Executor`'s responsibility. The
heartbeat mechanism is used to remove slow and unresponsive peers from the peer list. Writing unit tests is simpler with
this architecture, as most of the critical logic is part of the `Controller` function. We expect that the simpler concurrency
architecture will not have a significant negative effect on the performance of this reactor (to be confirmed by
experimental evaluation).
### Implementation changes
We assume the following system model for the "fast sync" protocol:
* a node is connected to a random subset of all nodes that represents its peer set. Some nodes are correct and some
might be faulty. We don't make assumptions about the ratio of faulty nodes, i.e., it is possible that all nodes in some
peer set are faulty.
* we assume that communication between correct nodes is synchronous, i.e., if a correct node `p` sends a message `m` to
a correct node `q` at time `t`, then `q` will receive the message at time `t+Delta` at the latest, where `Delta` is a system
parameter that is known by network participants. `Delta` is normally chosen to be an order of magnitude higher than
the real (maximum) communication delay between correct nodes. Therefore, if a correct node `p` sends a request message
to a correct node `q` at time `t` and there is no corresponding reply by time `t + 2*Delta`, then `p` can assume
that `q` is faulty. Note that the network assumptions for the consensus reactor are different (we assume a partially
synchronous model there).
The requirements for the "fast sync" protocol are formally specified as follows:
- `Correctness`: If a correct node `p` is connected to a correct node `q` for a long enough period of time, then `p`
will eventually download all requested blocks from `q`.
- `Termination`: If the set of peers of a correct node `p` is stable (no new nodes are added to the peer set of `p`) for
a long enough period of time, then the protocol eventually terminates.
- `Fairness`: A correct node `p` sends requests for blocks to all peers from its peer set.
As explained above, the `Executor` is responsible for sending and receiving messages that are part of the `fast-sync`
protocol. The following messages are exchanged as part of `fast-sync` protocol:
``` go
type Message int
const (
MessageUnknown Message = iota
MessageStatusRequest
MessageStatusResponse
MessageBlockRequest
MessageBlockResponse
)
```
`MessageStatusRequest` is sent periodically to all peers as a request for a peer to provide its current height. It is
part of the `Peer Heartbeat` mechanism, and a failure to respond to this message in a timely manner results in a peer being removed
from the peer set. Note that the `Peer Heartbeat` mechanism is used only while a peer is in `fast-sync` mode. We assume
here the existence of a mechanism that allows a node to inform its peers that it is in `fast-sync` mode.
``` go
type MessageStatusRequest struct {
	SeqNum int64 // sequence number of the request
}
```
`MessageStatusResponse` is sent as a response to `MessageStatusRequest` to inform the requester about the peer's current
height.
``` go
type MessageStatusResponse struct {
	SeqNum int64 // sequence number of the corresponding request
	Height int64 // current peer height
}
```
`MessageBlockRequest` is used to make a request for a block and the corresponding commit certificate at a given height.
``` go
type MessageBlockRequest struct {
	Height int64
}
```
`MessageBlockResponse` is a response to the corresponding block request. In addition to providing the block and the
corresponding commit certificate, it also contains the current peer height.
``` go
type MessageBlockResponse struct {
	Height     int64
	Block      Block
	Commit     Commit
	PeerHeight int64
}
```
In addition to sending and receiving messages and the `HeartBeat` mechanism, the `Executor` also manages timeouts
that are triggered upon `Controller` request. The `Controller` is informed once a timeout expires.
``` go
type TimeoutTrigger int

const (
	TimeoutUnknown TimeoutTrigger = iota
	TimeoutResponseTrigger
	TimeoutTerminationTrigger
)
```
The `Controller` can be modelled as a function with clearly defined inputs:
* `State` - the current state of the node. Contains data about connected peers and their behavior, pending requests, received blocks, etc.
* `Event` - significant events in the network.
producing clear outputs:
* `State` - the updated state of the node,
* `MessageToSend` - a signal of what message to send and to which peer,
* `TimeoutTrigger` - a signal that a timeout should be triggered.
We consider the following `Event` types:
``` go
type Event int

const (
	EventUnknown Event = iota
	EventStatusResponse
	EventBlockRequest
	EventBlockResponse
	EventRemovePeer
	EventTimeoutResponse
	EventTimeoutTermination
)
```
`EventStatusResponse` is generated once `MessageStatusResponse` is received by the `Executor`.
``` go
type EventStatusResponse struct {
	PeerID ID
	Height int64
}
```
`EventBlockRequest` event is generated once `MessageBlockRequest` is received by the `Executor`.
``` go
type EventBlockRequest struct {
	Height int64
	PeerID p2p.ID
}
```
`EventBlockResponse` event is generated upon reception of `MessageBlockResponse` message by the `Executor`.
``` go
type EventBlockResponse struct {
	Height     int64
	Block      Block
	Commit     Commit
	PeerID     ID
	PeerHeight int64
}
```
`EventRemovePeer` is generated by `Executor` to signal that the connection to a peer is closed due to peer misbehavior.
``` go
type EventRemovePeer struct {
	PeerID ID
}
```
`EventTimeoutResponse` is generated by `Executor` to signal that a timeout triggered by `TimeoutResponseTrigger` has
expired.
``` go
type EventTimeoutResponse struct {
	PeerID ID
	Height int64
}
```
`EventTimeoutTermination` is generated by `Executor` to signal that a timeout triggered by `TimeoutTerminationTrigger`
has expired.
``` go
type EventTimeoutTermination struct {
	Height int64
}
```
`MessageToSend` is just a wrapper around the `Message` type that contains the ID of the peer to which the message should be sent.
``` go
type MessageToSend struct {
	PeerID  ID
	Message Message
}
```
The Controller state machine can be in two modes: `ModeFastSync`, when
a node is trying to catch up with the network by downloading committed blocks,
and `ModeConsensus`, in which it executes the Tendermint consensus protocol. We
consider that `fast-sync` mode terminates once the Controller switches to
`ModeConsensus`.
``` go
type Mode int

const (
	ModeUnknown Mode = iota
	ModeFastSync
	ModeConsensus
)
```
`Controller` manages the following state:
``` go
type ControllerState struct {
	Height             int64            // the first block that is not committed
	Mode               Mode             // mode of operation
	PeerMap            map[ID]PeerStats // map of peer IDs to peer statistics
	MaxRequestPending  int64            // maximum height of the pending requests
	FailedRequests     []int64          // list of failed block requests
	PendingRequestsNum int              // total number of pending requests
	Store              []BlockInfo      // contains a list of downloaded blocks
	Executor           BlockExecutor    // stores, verifies and executes blocks
}
```
The `PeerStats` data structure keeps, for every peer, its current height and the pending block request, if any.
``` go
type PeerStats struct {
	Height         int64
	PendingRequest int64 // the height of a request sent to this peer, or -1
}
```
The `BlockInfo` data structure is used to store information (as part of the block store) about downloaded blocks: from which peer
a block and the corresponding commit certificate were received.
``` go
type BlockInfo struct {
	Block  Block
	Commit Commit
	PeerID ID // the peer from which we received the corresponding Block and Commit
}
```
The `Controller` is initialized by providing an initial height (`startHeight`) from which it will start downloading
blocks from peers and the current state of the `BlockExecutor`.
``` go
func NewControllerState(startHeight int64, executor BlockExecutor) ControllerState {
	state = ControllerState{}
	state.Height = startHeight
	state.Mode = ModeFastSync
	state.MaxRequestPending = startHeight - 1
	state.PendingRequestsNum = 0
	state.Executor = executor
	// initialize state.PeerMap, state.FailedRequests and state.Store to empty data structures
	return state
}
```
The core protocol logic is given with the following function:
``` go
func handleEvent(state ControllerState, event Event) (ControllerState, Message, TimeoutTrigger, Error) {
	msg = nil
	timeout = nil
	error = nil

	switch state.Mode {
	case ModeConsensus:
		switch event := event.(type) {
		case EventBlockRequest:
			msg = createBlockResponseMessage(state, event)
			return state, msg, timeout, error
		default:
			error = "Only respond to BlockRequests while in ModeConsensus!"
			return state, msg, timeout, error
		}

	case ModeFastSync:
		switch event := event.(type) {
		case EventBlockRequest:
			msg = createBlockResponseMessage(state, event)
			return state, msg, timeout, error
		case EventStatusResponse:
			return handleEventStatusResponse(event, state)
		case EventRemovePeer:
			return handleEventRemovePeer(event, state)
		case EventBlockResponse:
			return handleEventBlockResponse(event, state)
		case EventTimeoutResponse:
			return handleEventResponseTimeout(event, state)
		case EventTimeoutTermination:
			// The termination timeout is triggered in case of an empty peer set or when there are no pending requests.
			// If this timeout expires and in the meantime no new peers were added and no new pending requests were made,
			// then `fast-sync` mode terminates by switching to `ModeConsensus`.
			// Note that the termination timeout should be higher than the response timeout.
			if state.Height == event.Height && state.PendingRequestsNum == 0 {
				state.Mode = ModeConsensus
			}
			return state, msg, timeout, error
		default:
			error = "Received unknown event type!"
			return state, msg, timeout, error
		}
	}
}
```
``` go
func createBlockResponseMessage(state ControllerState, event BlockRequest) MessageToSend {
	msgToSend = nil
	if _, ok := state.PeerMap[event.PeerID]; !ok {
		peerStats = PeerStats{-1, -1}
	} else {
		peerStats = state.PeerMap[event.PeerID]
	}
	if state.Executor.ContainsBlockWithHeight(event.Height) && event.Height > peerStats.Height {
		peerStats.Height = event.Height
		msg = MessageBlockResponse{
			Height:     event.Height,
			Block:      state.Executor.getBlock(event.Height),
			Commit:     state.Executor.getCommit(event.Height),
			PeerHeight: state.Height - 1,
		}
		msgToSend = MessageToSend{event.PeerID, msg}
	}
	state.PeerMap[event.PeerID] = peerStats
	return msgToSend
}
```
``` go
func handleEventStatusResponse(event EventStatusResponse, state ControllerState) (ControllerState, MessageToSend, TimeoutTrigger, Error) {
	if _, ok := state.PeerMap[event.PeerID]; !ok {
		peerStats = PeerStats{-1, -1}
	} else {
		peerStats = state.PeerMap[event.PeerID]
	}
	if event.Height > peerStats.Height {
		peerStats.Height = event.Height
	}
	// if there are no pending requests for this peer, try to send it a request for a block
	if peerStats.PendingRequest == -1 {
		msg = createBlockRequestMessage(state, event.PeerID, peerStats.Height)
		// msg is nil if no request for a block can be made to this peer at this point in time
		if msg != nil {
			peerStats.PendingRequest = msg.Height
			state.PendingRequestsNum++
			// when a request for a block is sent to a peer, a response timeout is triggered. If no corresponding block
			// is sent by the peer during the response timeout period, the peer is considered faulty and is removed from
			// the peer set.
			timeout = ResponseTimeoutTrigger{msg.PeerID, msg.Height, PeerTimeout}
		} else if state.PendingRequestsNum == 0 {
			// if there are no pending requests and no new request can be placed to the peer, the termination timeout
			// is triggered. If the termination timeout expires and we are still at the same height with no pending
			// requests, "fast-sync" mode is finished and we switch to `ModeConsensus`.
			timeout = TerminationTimeoutTrigger{state.Height, TerminationTimeout}
		}
	}
	state.PeerMap[event.PeerID] = peerStats
	return state, msg, timeout, error
}
```
``` go
func handleEventRemovePeer(event EventRemovePeer, state ControllerState) (ControllerState, MessageToSend, TimeoutTrigger, Error) {
	if _, ok := state.PeerMap[event.PeerID]; ok {
		pendingRequest = state.PeerMap[event.PeerID].PendingRequest
		// if a peer is removed from the peer set, its pending request is declared failed and added to the
		// `FailedRequests` list so it can be retried.
		if pendingRequest != -1 {
			add(state.FailedRequests, pendingRequest)
			state.PendingRequestsNum--
		}
		delete(state.PeerMap, event.PeerID)
		// if the peer set is empty after removal of this peer, the termination timeout is triggered.
		if state.PeerMap.isEmpty() {
			timeout = TerminationTimeoutTrigger{state.Height, TerminationTimeout}
		}
	} else {
		error = "Removing unknown peer!"
	}
	return state, msg, timeout, error
}
```
``` go
func handleEventBlockResponse(event EventBlockResponse, state ControllerState) (ControllerState, MessageToSend, TimeoutTrigger, Error) {
	if _, ok := state.PeerMap[event.PeerID]; ok {
		peerStats = state.PeerMap[event.PeerID]
		// when the expected block arrives from a peer, it is added to the store so it can be verified and, if correct, executed afterwards.
		if peerStats.PendingRequest == event.Height {
			peerStats.PendingRequest = -1
			state.PendingRequestsNum--
			if event.PeerHeight > peerStats.Height {
				peerStats.Height = event.PeerHeight
			}
			state.Store[event.Height] = BlockInfo{event.Block, event.Commit, event.PeerID}
			// blocks are verified sequentially, so adding a block to the store does not mean that it will be immediately
			// verified, as some of the previous blocks might be missing.
			state = verifyBlocks(state) // this can lead to event.PeerID being removed from the peer list
			if _, ok := state.PeerMap[event.PeerID]; ok {
				// we try to identify a new request for a block that can be sent to this peer
				msg = createBlockRequestMessage(state, event.PeerID, peerStats.Height)
				if msg != nil {
					peerStats.PendingRequest = msg.Height
					state.PendingRequestsNum++
					// if a request for a block is made, a response timeout is triggered
					timeout = ResponseTimeoutTrigger{msg.PeerID, msg.Height, PeerTimeout}
				} else if state.PeerMap.isEmpty() || state.PendingRequestsNum == 0 {
					// if the peer map is empty (the peer can be removed if block verification failed) or there are no
					// pending requests, the termination timeout is triggered.
					timeout = TerminationTimeoutTrigger{state.Height, TerminationTimeout}
				}
			}
			state.PeerMap[event.PeerID] = peerStats
		} else {
			error = "Received Block from wrong peer!"
		}
	} else {
		error = "Received Block from unknown peer!"
	}
	return state, msg, timeout, error
}
```
``` go
func handleEventResponseTimeout(event, state) {
	if _, ok := state.PeerMap[event.PeerID]; ok {
		peerStats = state.PeerMap[event.PeerID]
		// if a response timeout expires and the peer hasn't delivered the block, the peer is removed from the peer list
		// and the request is added to `FailedRequests` so the block can be downloaded from another peer
		if peerStats.PendingRequest == event.Height {
			add(state.FailedRequests, peerStats.PendingRequest)
			delete(state.PeerMap, event.PeerID)
			state.PendingRequestsNum--
			// if the peer set is empty, the termination timeout is triggered
			if state.PeerMap.isEmpty() {
				timeout = TerminationTimeoutTrigger{state.Height, TerminationTimeout}
			}
		}
	}
	return state, msg, timeout, error
}
```
``` go
func createBlockRequestMessage(state ControllerState, peerID ID, peerHeight int64) MessageToSend {
	msg = nil
	blockHeight = -1
	r = find request in state.FailedRequests such that r <= peerHeight // returns `nil` if there is no such request
	// if there is a height in the failed requests that can be downloaded from this peer, send a request for it
	if r != nil {
		blockHeight = r
		delete(state.FailedRequests, r)
	} else if state.MaxRequestPending < peerHeight {
		// if the height of the maximum pending request is smaller than the peer height, ask the peer for the next block
		state.MaxRequestPending++
		blockHeight = state.MaxRequestPending
	}
	if blockHeight > -1 {
		msg = MessageToSend{peerID, MessageBlockRequest{blockHeight}}
	}
	return msg
}
```
``` go
func verifyBlocks(state State) State {
	done = false
	for !done {
		block = state.Store[state.Height]
		if block != nil {
			verified = verify block.Block using block.Commit // returns `true` if verification succeeds, `false` otherwise
			if verified {
				block.Execute() // executing a block is a costly operation, so it might make sense to execute it asynchronously
				state.Height++
			} else {
				// if block verification failed, the height is added to `FailedRequests` and the peer is removed from the peer set
				add(state.FailedRequests, state.Height)
				state.Store[state.Height] = nil
				if _, ok := state.PeerMap[block.PeerID]; ok {
					pendingRequest = state.PeerMap[block.PeerID].PendingRequest
					// if there is a pending request sent to the peer that is about to be removed from the peer set, add it to `FailedRequests`
					if pendingRequest != -1 {
						add(state.FailedRequests, pendingRequest)
						state.PendingRequestsNum--
					}
					delete(state.PeerMap, block.PeerID)
				}
				done = true
			}
		} else {
			done = true
		}
	}
	return state
}
```
In the proposed architecture, the `Controller` is not an active task, i.e., it is called by the `Executor`. Depending on
the return values of the `Controller`, the `Executor` will send a message to some peer (`msg != nil`), trigger a
timeout (`timeout != nil`) or deal with errors (`error != nil`).
In case a timeout is triggered, the `Executor` will provide the corresponding timeout event as an input to the `Controller` once the
timeout expires.
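Because `handleEvent` is a pure function of `(state, event)`, tests can drive the whole protocol deterministically, with no goroutines or timers. A hedged sketch following the pseudocode above (the executor stub and `import "testing"` are assumed, not part of this ADR):
``` go
func TestTerminationTimeoutSwitchesToConsensus(t *testing.T) {
	state := NewControllerState(1, newMockExecutor()) // hypothetical executor stub

	// With an empty peer set and no pending requests, a termination timeout
	// at the current height must switch the controller to consensus mode.
	ev := EventTimeoutTermination{Height: state.Height}
	state, _, _, err := handleEvent(state, ev)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if state.Mode != ModeConsensus {
		t.Fatalf("expected ModeConsensus after termination timeout, got %v", state.Mode)
	}
}
```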
## Status
Draft.
## Consequences
### Positive
- isolated implementation of the algorithm
- improved testability - simpler to prove correctness
- clearer separation of concerns - easier to reason about
### Negative
### Neutral


@@ -1,29 +0,0 @@
# ADR 041: Application should be in charge of validator set
## Changelog
## Context
Currently Tendermint is in charge of the validator set and proposer selection. The application can only update the validator set at EndBlock time.
To support light clients, the application should make sure that at least 2/3 of the validators are the same in each round.
The application should have full control over validator set changes and proposer selection. In each round, the application can provide the list of validators for the next rounds, in order, with their voting power. The proposer is the first in the list; in case the proposer is offline, the next one can propose the proposal, and so on.
## Decision
## Status
## Consequences
Tendermint is no longer in charge of the validator set and its changes; the application should provide the correct information.
However, Tendermint can provide a pseudo-randomness algorithm to help the application select the proposer in each round.
### Positive
### Negative
### Neutral
## References


@@ -1,238 +0,0 @@
# ADR 042: State Sync Design
## Changelog
2019-06-27: Init by EB
2019-07-04: Follow up by brapse
## Context
StateSync is a feature which would allow a new node to receive a
snapshot of the application state without downloading blocks or going
through consensus. Once downloaded, the node could switch to FastSync
and eventually participate in consensus. The goal of StateSync is to
facilitate setting up a new node as quickly as possible.
## Considerations
Because Tendermint doesn't know anything about the application state,
StateSync will broker messages between nodes and through
the ABCI to an opaque application. The implementation will have multiple
touch points on both the Tendermint code base and the ABCI application:
* A StateSync reactor to facilitate peer communication - Tendermint
* A Set of ABCI messages to transmit application state to the reactor - Tendermint
* A Set of MultiStore APIs for exposing snapshot data to the ABCI - ABCI application
* A Storage format with validation and performance considerations - ABCI application
### Implementation Properties
Beyond the approach, any implementation of StateSync can be evaluated
across different criteria:
* Speed: Expected throughput of producing and consuming snapshots
* Safety: Cost of pushing invalid snapshots to a node
* Liveness: Cost of preventing a node from receiving/constructing a snapshot
* Effort: How much effort an implementation requires
### Implementation Questions
* What is the format of a snapshot
    * Complete snapshot
    * Ordered IAVL key ranges
    * Individually compressed chunks which can be validated
* How is data validated
    * Trust a peer with its data blindly
    * Trust a majority of peers
    * Use light client validation to validate each chunk against a consensus-produced merkle tree root
* What are the performance characteristics
    * Random vs sequential reads
    * How parallelizable is the scheduling algorithm
### Proposals
Broadly speaking, there are two approaches to this problem which have had
varying degrees of discussion and progress. These approaches can be
summarized as:
**Lazy:** Where snapshots are produced dynamically at request time. This
solution would use the existing data structure.
**Eager:** Where snapshots are produced periodically and served from disk at
request time. This solution would create an auxiliary data structure
optimized for batch read/writes.
Additionally, the proposals tend to vary on how they provide safety
properties.
**LightClient** Where a client can acquire the merkle root from block
headers synchronized from a trusted validator set. Subsets of the application state,
called chunks, can therefore be validated on receipt to ensure each chunk
is part of the merkle root.
**Majority of Peers** Where manifests of chunks along with checksums are
downloaded and compared against versions provided by a majority of
peers.
#### Lazy StateSync
An initial specification was published by Alexis Sellier.
In this design, the state has a given `size` of primitive elements (like
keys or nodes), each element is assigned a number from 0 to `size-1`,
and chunks consist of a range of such elements. Ackratos raised
[some concerns](https://docs.google.com/document/d/1npGTAa1qxe8EQZ1wG0a0Sip9t5oX2vYZNUDwr_LVRR4/edit)
about this design, somewhat specific to the IAVL tree and mainly concerning the
performance of random reads and of iterating through the tree to determine element numbers
(i.e., elements aren't indexed by element number).
An alternative design was suggested by Jae Kwon in
[#3639](https://github.com/tendermint/tendermint/issues/3639) where chunking
happens lazily and in a dynamic way: nodes request key ranges from their peers,
and peers respond with some subset of the
requested range and with notes on how to request the rest in parallel from other
peers. Unlike chunk numbers, keys can be verified directly, and if some keys in the
range are omitted, proofs for the range will fail to verify.
This way a node can start by requesting the entire tree from one peer,
and that peer can respond with say the first few keys, and the ranges to request
from other peers.
Additionally, per-chunk validation tends to come more naturally to the
Lazy approach, since it tends to use the existing structure of the tree
(i.e., keys or nodes) rather than state-sync specific chunks. Such a
design for Tendermint was originally tracked in
[#828](https://github.com/tendermint/tendermint/issues/828).
#### Eager StateSync
Warp Sync, as implemented in Parity
(["Warp Sync"](https://wiki.parity.io/Warp-Sync-Snapshot-Format.html)), rapidly
downloads both blocks and state snapshots from peers. Data is carved into ~4MB
chunks and snappy compressed. Hashes of the snappy-compressed chunks are stored in a
manifest file which coordinates the state sync. Obtaining a correct manifest
file seems to require an honest majority of peers. This means you may not find
out the state is incorrect until you download the whole thing and compare it
with a verified block header.
A similar solution was implemented by Binance in
[#3594](https://github.com/tendermint/tendermint/pull/3594)
based on their initial implementation in
[PR #3243](https://github.com/tendermint/tendermint/pull/3243)
and [some learnings](https://docs.google.com/document/d/1npGTAa1qxe8EQZ1wG0a0Sip9t5oX2vYZNUDwr_LVRR4/edit).
Note this still requires the honest majority peer assumption.
As an eager protocol, warp-sync can efficiently compress larger, more
predictable chunks once per snapshot and service many new peers. By
comparison, lazy chunkers would have to compress each chunk at request
time.
### Analysis of Lazy vs Eager
Lazy and Eager have more in common than they differ. Both require
reactors on the Tendermint side, a set of ABCI messages and a method for
serializing/deserializing snapshots facilitated by a SnapshotFormat.
The biggest difference between the Lazy and Eager proposals is in the
read/write patterns necessitated by serving a snapshot chunk.
Specifically, Lazy State Sync performs random reads to the underlying data
structure while Eager can optimize for sequential reads.
This distinction between approaches was demonstrated by Binance's
[ackratos](https://github.com/ackratos) in their implementation of [Lazy
State Sync](https://github.com/tendermint/tendermint/pull/3243), the
[analysis](https://docs.google.com/document/d/1npGTAa1qxe8EQZ1wG0a0Sip9t5oX2vYZNUDwr_LVRR4/)
of its performance, and the follow-up implementation of [Warp
Sync](http://github.com/tendermint/tendermint/pull/3594).
#### Comparing Security Models
There are several different security models which have been
discussed/proposed in the past, but they generally fall into two categories.
Light client validation: The node receiving data is expected to
first perform a light client sync and have all the necessary block
headers. With a trusted block header (trusted in the sense of being signed by a
validator set subject to [weak
subjectivity](https://github.com/tendermint/tendermint/pull/3795)), the node
can compare any subset of keys, called a chunk, against the merkle root.
The advantage of light client validation is that the block headers are
signed by validators, which have something to lose for malicious
behaviour. If a validator were to provide an invalid proof, they can be
slashed.
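A hedged sketch of what per-chunk validation might look like; `ChunkProof` stands in for whatever inclusion-proof format the snapshot design ships with each chunk, and none of these names are a committed API:
```go
import "crypto/sha256"

// ChunkProof abstracts the inclusion proof delivered alongside each chunk;
// its concrete format is up to the snapshot design, so this is a sketch.
type ChunkProof interface {
	// Verify returns nil iff leafHash is committed under rootHash.
	Verify(rootHash, leafHash []byte) error
}

// VerifyChunk checks a received chunk against the AppHash obtained from a
// light-client-verified header, so a forged chunk is rejected on receipt.
func VerifyChunk(trustedAppHash, chunk []byte, proof ChunkProof) error {
	chunkHash := sha256.Sum256(chunk)
	return proof.Verify(trustedAppHash, chunkHash[:])
}
```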
Majority of peer validation: A manifest file containing a list of chunks
along with checksums of each chunk is downloaded from a
trusted source. That source can be a community resource similar to
[sum.golang.org](https://sum.golang.org), or the manifest can be downloaded from the majority
of peers. One disadvantage of the majority-of-peers security model is its
vulnerability to eclipse attacks, in which a malicious user looks to
saturate a target node's peer list and produce a manufactured picture of a
majority.
A third option would be to include snapshot-related data in the
block header. This could include the manifest with related checksums and be
secured through consensus. One challenge of this approach is to
ensure that creating snapshots does not put undue burden on block
proposers by synchronizing snapshot creation and block creation. One
approach to minimizing the burden is for the snapshot for height
`H` to be included in block `H+n`, where `n` is some number of blocks away,
giving the block proposer enough time to complete the snapshot
asynchronously.
## Proposal: Eager StateSync With Per Chunk Light Client Validation
The conclusion after some consideration of the advantages/disadvantages of
eager/lazy and the different security models is to produce a state sync
which eagerly produces snapshots and uses light client validation. This
approach has the performance advantages of pre-computing efficient
snapshots which can be streamed to new nodes on demand using sequential IO.
Secondly, by using light client validation we can validate each chunk on
receipt and avoid the potential eclipse attack of majority-of-peers-based
security.
### Implementation
Tendermint is responsible for downloading and verifying chunks of
AppState from peers. The ABCI application is responsible for taking
AppStateChunk objects from TM and constructing a valid state tree whose
root corresponds with the AppHash of the syncing block. In particular we
will need to implement:
* Build a new StateSync reactor which brokers message transmission between the peers
and the ABCI application
* A set of ABCI Messages
* Design SnapshotFormat as an interface (sketched after this list) which can:
* validate chunks
* read/write chunks from file
* read/write chunks to/from application state store
* convert manifests into chunkRequest ABCI messages
* Implement SnapshotFormat for cosmos-hub with concrete implementation for:
* read/write chunks in a way which can be:
* parallelized across peers
* validated on receipt
* read/write to/from IAVL+ tree
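As a sketch, the SnapshotFormat interface from the list above might look roughly as follows; all names and signatures, including the `Manifest` and `RequestChunk` types, are illustrative rather than decided:
```go
type SnapshotFormat interface {
	// ValidateChunk checks a chunk against the light-client-verified app hash.
	ValidateChunk(chunk []byte, appHash []byte) error

	// ReadChunk and WriteChunk move chunks to and from local files.
	ReadChunk(path string, index int) ([]byte, error)
	WriteChunk(path string, index int, chunk []byte) error

	// ApplyChunk writes a validated chunk into the application state store.
	ApplyChunk(chunk []byte) error

	// ChunkRequests converts a snapshot manifest into chunkRequest ABCI messages.
	ChunkRequests(manifest Manifest) []RequestChunk
}
```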
![StateSync Architecture Diagram](img/state-sync.png)
## Implementation Path
* Create StateSync reactor based on [#3753](https://github.com/tendermint/tendermint/pull/3753)
* Design SnapshotFormat with an eye towards cosmos-hub implementation
* ABCI message to send/receive SnapshotFormat
* IAVL+ changes to support SnapshotFormat
* Deliver Warp sync (no chunk validation)
* light client implementation for weak subjectivity
* Deliver StateSync with chunk validation
## Status
Proposed
## Consequences
### Neutral
### Positive
* Safe & performant state sync design substantiated with real-world implementation experience
* General interfaces allowing application-specific innovation
* Parallelizable implementation trajectory with reasonable engineering effort
### Negative
* Static Scheduling lacks opportunity for real time chunk availability optimizations
## References
[sync: Sync current state without full replay for Applications](https://github.com/tendermint/tendermint/issues/828) - original issue
[tendermint state sync proposal 2](https://docs.google.com/document/d/1npGTAa1qxe8EQZ1wG0a0Sip9t5oX2vYZNUDwr_LVRR4/edit) - ackratos proposal
[proposal 2 implementation](https://github.com/tendermint/tendermint/pull/3243) - ackratos implementation
[WIP General/Lazy State-Sync pseudo-spec](https://github.com/tendermint/tendermint/issues/3639) - Jae Proposal
[Warp Sync Implementation](https://github.com/tendermint/tendermint/pull/3594) - ackratos
[Chunk Proposal](https://github.com/tendermint/tendermint/pull/3799) - Bucky's proposal


@@ -1,410 +0,0 @@
# ADR 043: Blockchain Reactor Riri-Org
## Changelog
- 18-06-2019: Initial draft
- 08-07-2019: Reviewed
- 29-11-2019: Implemented
- 14-02-2020: Updated with the implementation details
## Context
The blockchain reactor is responsible for two high level processes: sending/receiving blocks from peers, and fast-syncing blocks to catch up a node that is far behind. The goal of [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md) was to refactor these two processes by separating the business logic currently wrapped up in go-channels into pure `handle*` functions. While the ADR specified what the final form of the reactor might look like, it lacked guidance on the intermediary steps to get there.
The following diagram illustrates the state of the [blockchain-reorg](https://github.com/tendermint/tendermint/pull/3561) reactor which will be referred to as `v1`.
![v1 Blockchain Reactor Architecture
Diagram](https://github.com/tendermint/tendermint/blob/f9e556481654a24aeb689bdadaf5eab3ccd66829/docs/architecture/img/blockchain-reactor-v1.png)
While `v1` of the blockchain reactor has shown significant improvements in terms of simplifying the concurrency model, the current PR has run into a few roadblocks.
- The current PR is large and difficult to review.
- Block gossiping and fast sync processes are highly coupled to the shared `Pool` data structure.
- Peer communication is spread over multiple components, creating a complex dependency graph which must be mocked out during testing.
- Timeouts modeled as stateful tickers introduce non-determinism in tests.
This ADR is meant to specify the missing components and control necessary to achieve [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md).
## Decision
Partition the responsibilities of the blockchain reactor into a set of components which communicate exclusively via events. Events will contain timestamps, allowing each component to track time as internal state. The internal state will be mutated by a set of `handle*` functions which will produce event(s). The integration between components will happen in the reactor, and reactor tests will then become integration tests between components. This design will be known as `v2`.
![v2 Blockchain Reactor Architecture
Diagram](https://github.com/tendermint/tendermint/blob/584e67ac3fac220c5c3e0652e3582eca8231e814/docs/architecture/img/blockchain-reactor-v2.png)
### Fast Sync Related Communication Channels
The diagram below shows the fast sync routines and the types of channels and queues used to communicate with each other.
In addition, the per-reactor channels used by the sendRoutine to send messages over the Peer MConnection are shown.
![v2 Blockchain Channels and Queues
Diagram](https://github.com/tendermint/tendermint/blob/5cf570690f989646fb3b615b734da503f038891f/docs/architecture/img/blockchain-v2-channels.png)
### Reactor changes in detail
The reactor will include a demultiplexing routine which will send each message to each sub-routine for independent processing. Each sub-routine will then select the messages it's interested in and call the specific `handle*` function specified in [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md). The demuxRoutine acts as a "pacemaker", setting the time in which events are expected to be handled.
```go
func demuxRoutine(msgs, schedulerMsgs, processorMsgs, ioMsgs chan Message) {
	timer := time.NewTicker(interval)
	for {
		select {
		case <-timer.C:
			now := evTimeCheck{time.Now()}
			schedulerMsgs <- now
			processorMsgs <- now
			ioMsgs <- now
		case msg := <-msgs:
			msg.time = time.Now()
			// These channels should produce backpressure before
			// being full to avoid starving each other
			schedulerMsgs <- msg
			processorMsgs <- msg
			ioMsgs <- msg
			if msg == stop {
				break
			}
		}
	}
}

func processRoutine(input chan Message, output chan Message) {
	processor := NewProcessor(...)
	for {
		msg := <-input
		switch msg := msg.(type) {
		case bcBlockRequestMessage:
			output <- processor.handleBlockRequest(msg)
			...
		case stop:
			processor.stop()
			break
		}
	}
}

func scheduleRoutine(input chan Message, output chan Message) {
	scheduler := NewScheduler(...)
	for {
		msg := <-input
		switch msg := msg.(type) {
		case bcBlockResponseMessage:
			output <- scheduler.handleBlockResponse(msg)
			...
		case stop:
			scheduler.stop()
			break
		}
	}
}
```
## Lifecycle management
A set of routines for individual processes allows processes to run in parallel with clear lifecycle management. `Start`, `Stop`, and `AddPeer` hooks currently present in the reactor will delegate to the sub-routines, allowing them to manage internal state independently without further coupling to the reactor.
```go
func (r *BlockchainReactor) Start() {
	r.msgs = make(chan Message, maxInFlight)
	schedulerMsgs := make(chan Message)
	processorMsgs := make(chan Message)
	ioMsgs := make(chan Message)

	go processorRoutine(processorMsgs, r.msgs)
	go scheduleRoutine(schedulerMsgs, r.msgs)
	go ioRoutine(ioMsgs, r.msgs)
	...
}

func (r *BlockchainReactor) Receive(...) {
	...
	r.msgs <- msg
	...
}

func (r *BlockchainReactor) Stop() {
	...
	r.msgs <- stop
	...
}

func (r *BlockchainReactor) AddPeer(peer p2p.Peer) {
	...
	r.msgs <- bcAddPeerEv{peer.ID}
	...
}
```
## IO handling
An IO handling routine within the reactor will isolate peer communication. Messages going through the ioRoutine will usually be one-way, using the `p2p` APIs. In the case where a `p2p` API such as `trySend` returns an error, the ioRoutine can funnel those messages back to the demuxRoutine for distribution to the other routines. For instance, errors from the ioRoutine can be consumed by the scheduler to inform better peer selection implementations.
```go
func (r *BlockchainReactor) ioRoutine(ioMsgs chan Message, outMsgs chan Message) {
	...
	for {
		msg := <-ioMsgs
		switch msg := msg.(type) {
		case scBlockRequestMessage:
			queued := r.sendBlockRequestToPeer(...)
			if queued {
				outMsgs <- ioSendQueued{...}
			}
		case scStatusRequestMessage:
			r.sendStatusRequestToPeer(...)
		case bcPeerError:
			r.Switch.StopPeerForError(msg.src)
			...
		case bcFinished:
			break
		}
	}
}
```
### Processor Internals
The processor is responsible for ordering, verifying and executing blocks. The Processor will maintain an internal cursor `height` referring to the last processed block. As blocks arrive unordered, the Processor will check if it has the block at `height+1` necessary to process the next block. The processor also maintains the map `blockPeers` from heights to peers, keeping track of which peer provided the block at each `height`. `blockPeers` can be used in `handleRemovePeer(...)` to reschedule all unprocessed blocks provided by a peer who has errored.
```go
type Processor struct {
	height     int64 // the height cursor
	state      ...
	blocks     [height]*Block // keep a set of blocks in memory until they are processed
	blockPeers [height]PeerID // keep track of which heights came from which peerID
	lastTouch  timestamp
}

func (proc *Processor) handleBlockResponse(peerID, block) {
	if block.height <= height {
		// ignore blocks at or below the current height cursor
	} else if blocks[block.height] {
		return errDuplicateBlock{}
	} else {
		blocks[block.height] = block
	}

	if blocks[height] && blocks[height+1] {
		... = state.Validators.VerifyCommit(...)
		... = store.SaveBlock(...)
		state, err = blockExec.ApplyBlock(...)
		...
		if err == nil {
			delete blocks[height]
			height++
			lastTouch = msg.time
			return pcBlockProcessed{height - 1}
		} else {
			... // Delete all unprocessed blocks from the peer
			return pcBlockProcessError{peerID, height}
		}
	}
}

func (proc *Processor) handleRemovePeer(peerID) {
	events = []
	// Delete all unprocessed blocks from peerID
	for i = height; i < len(blocks); i++ {
		if blockPeers[i] == peerID {
			events = append(events, pcBlockReschedule{i})
			delete blocks[i]
		}
	}
	return events
}

func handleTimeCheckEv(time) {
	if time - lastTouch > timeout {
		// Timeout the processor
		...
	}
}
```
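Because time arrives as event data (`evTimeCheck`) rather than being read from the wall clock, a processor test can advance time synthetically. A hedged sketch, where the constructor, the `processorTimeout` constant, and `import "testing"` are illustrative assumptions:
```go
func TestProcessorTimesOutWithoutProgress(t *testing.T) {
	proc := newTestProcessor() // hypothetical: lastTouch initialized to t0
	t0 := time.Now()

	// No block responses are delivered; jump time past the timeout in a
	// single injected event instead of sleeping in the test.
	ev := proc.handleTimeCheckEv(t0.Add(2 * processorTimeout))
	if ev == nil {
		t.Fatal("expected a timeout event once lastTouch is exceeded")
	}
}
```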
## Schedule
The Schedule maintains the internal state used for scheduling blockRequestMessages based on some scheduling algorithm. The schedule needs to maintain state on:
- The state `blockState` of every block seen up to the height `maxHeight`
- The set of peers and their peer state `peerState`
- Which peers have which blocks
- Which blocks have been requested from which peers
```go
type blockState int

const (
	blockStateNew blockState = iota
	blockStatePending
	blockStateReceived
	blockStateProcessed
)

type schedule struct {
	// the blockState of every block
	blockStates map[height]blockState

	// a map of which blocks are available from which peers
	blockPeers map[height]map[p2p.ID]scPeer

	// a map of peerID to schedule-specific peer struct `scPeer`
	peers map[p2p.ID]scPeer

	// a map of heights to the peer we are waiting for a response from
	pending map[height]scPeer

	targetPending  int // the number of blocks we want in blockStatePending
	targetReceived int // the number of blocks we want in blockStateReceived

	peerTimeout  int
	peerMinSpeed int
}

func (sc *schedule) numBlockInState(state blockState) uint32 {
	num := uint32(0)
	for i := sc.minHeight(); i <= sc.maxHeight(); i++ {
		if sc.blockStates[i] == state {
			num++
		}
	}
	return num
}

func (sc *schedule) popSchedule(maxRequest int) []scBlockRequestMessage {
	// We only want to schedule requests such that we have less than sc.targetPending and sc.targetReceived
	// This ensures we don't saturate the network or flood the processor with unprocessed blocks
	todo := min(sc.targetPending-sc.numBlockInState(blockStatePending), sc.numBlockInState(blockStateReceived))
	events := []scBlockRequestMessage{}
	for i := sc.minHeight(); i < sc.maxHeight(); i++ {
		if todo == 0 {
			break
		}
		if sc.blockStates[i] == blockStateNew {
			peer := sc.selectPeer(sc.blockPeers[i])
			sc.blockStates[i] = blockStatePending
			sc.pending[i] = peer
			events = append(events, scBlockRequestMessage{peerID: peer.peerID, height: i})
			todo--
		}
	}
	return events
}

...

type scPeer struct {
	peerID                p2p.ID
	numOutstandingRequest int
	lastTouched           time.Time
	monitor               flow.Monitor
}
```
## Scheduler
The scheduler is configured to maintain a target `n` of in-flight
messages and will use feedback from `_blockResponseMessage`,
`_statusResponseMessage` and `_peerError` to produce an optimal assignment
of scBlockRequestMessage at each `timeCheckEv`.
```go
func handleStatusResponse(peerID, height, time) {
	schedule.touchPeer(peerID, time)
	schedule.setPeerHeight(peerID, height)
}

func handleBlockResponseMessage(peerID, height, block, time) {
	schedule.touchPeer(peerID, time)
	schedule.markReceived(peerID, height, size(block))
}

func handleNoBlockResponseMessage(peerID, height, time) {
	schedule.touchPeer(peerID, time)
	// reschedule that block, punish peer...
	...
}

func handlePeerError(peerID) {
	// Remove the peer, reschedule the requests
	...
}

func handleTimeCheckEv(time) {
	// clean peer list
	events = []
	for peerID := range schedule.peersNotTouchedSince(time) {
		pending = schedule.pendingFrom(peerID)
		schedule.setPeerState(peerID, timedout)
		schedule.resetBlocks(pending)
		events = append(events, peerTimeout{peerID})
	}
	events = append(events, schedule.popSchedule())
	return events
}
```
## Peer
The Peer stores per-peer state based on messages received by the scheduler.
```go
type Peer struct {
	lastTouched    timestamp
	lastDownloaded timestamp
	pending        map[height]struct{}
	height         height // max height for the peer
	state          {
		pending, // we know the peer but not the height
		active,  // we know the height
		timeout  // the peer has timed out
	}
}
```
## Status
This design is under active development. The implementation has been
staged in the following PRs:
- [Routine](https://github.com/tendermint/tendermint/pull/3878)
- [Processor](https://github.com/tendermint/tendermint/pull/4012)
- [Scheduler](https://github.com/tendermint/tendermint/pull/4043)
- [Reactor](https://github.com/tendermint/tendermint/pull/4067)
## Consequences
### Positive
- Tests become deterministic
- Simulation becomes a-temporal: no need to wait for a wall-time timeout
- Peer selection can be independently tested/simulated
- Develop a general approach to refactoring reactors
### Negative
### Neutral
### Implementation Path
- Implement the scheduler, test the scheduler, review the scheduler
- Implement the processor, test the processor, review the processor
- Implement the demuxer, write integration test, review integration tests
## References
- [ADR-40](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-040-blockchain-reactor-refactor.md): The original blockchain reactor re-org proposal
- [Blockchain re-org](https://github.com/tendermint/tendermint/pull/3561): The current blockchain reactor re-org implementation (v1)
