Compare commits


2 Commits

Author               SHA1        Message                      Date
Callum Waters        508b7f9758  split up packages            2021-08-09 13:30:27 +02:00
Aleksandr Bezobchuk  e5ffe132ae  pkg/block: move from types/  2021-08-06 18:01:11 -04:00
111 changed files with 4465 additions and 5509 deletions

.github/CODEOWNERS

@@ -7,4 +7,5 @@
# global owners are only requested if there isn't a more specific
# codeowner specified below. For this reason, the global codeowners
# are often repeated in package-level definitions.
* @alexanderbez @ebuchman @cmwaters @tessr @tychoish @williambanfield @creachadair
* @alexanderbez @ebuchman @cmwaters @tessr @tychoish @williambanfield


@@ -50,7 +50,7 @@ jobs:
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Publish to Docker Hub
uses: docker/build-push-action@v2.7.0
uses: docker/build-push-action@v2.6.1
with:
context: .
file: ./DOCKER/Dockerfile


@@ -43,7 +43,7 @@ jobs:
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Publish to Docker Hub
uses: docker/build-push-action@v2.7.0
uses: docker/build-push-action@v2.6.1
with:
context: ./tools/proto
file: ./tools/proto/Dockerfile


@@ -2,27 +2,6 @@
Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermint).
## v0.34.12
*August 17, 2021*
Special thanks to external contributors on this release: @JayT106.
### FEATURES
- [rpc] [\#6717](https://github.com/tendermint/tendermint/pull/6717) introduce
`/genesis_chunked` rpc endpoint for handling large genesis files by chunking them (@tychoish)
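The idea behind the `/genesis_chunked` endpoint above is plain fixed-size chunking of a document too large to return in one response. A minimal sketch of that splitting step (the helper name and chunk size are illustrative, not the endpoint's actual implementation):

```go
package main

import "fmt"

// chunk splits a large document into fixed-size pieces, the idea behind
// serving genesis files via /genesis_chunked. The size here is arbitrary;
// the real endpoint chooses its own chunk size.
func chunk(doc []byte, size int) [][]byte {
	var out [][]byte
	for len(doc) > 0 {
		n := size
		if len(doc) < n {
			n = len(doc)
		}
		out = append(out, doc[:n])
		doc = doc[n:]
	}
	return out
}

func main() {
	// A 10-byte document split into 4-byte chunks yields 4+4+2 bytes.
	fmt.Println(len(chunk(make([]byte, 10), 4))) // 3
}
```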
### IMPROVEMENTS
- [rpc] [\#6825](https://github.com/tendermint/tendermint/issues/6825) Remove egregious INFO log from `ABCI#Query` RPC. (@alexanderbez)
### BUG FIXES
- [light] [\#6685](https://github.com/tendermint/tendermint/pull/6685) fix bug
with incorrectly handling contexts that would occasionally freeze state sync. (@cmwaters)
- [privval] [\#6748](https://github.com/tendermint/tendermint/issues/6748) Fix vote timestamp to prevent chain halt (@JayT106)
## v0.34.11
*June 18, 2021*
@@ -33,25 +12,25 @@ adding two new parameters to the state sync config.
### BREAKING CHANGES
- Apps
- [Version] [\#6494](https://github.com/tendermint/tendermint/pull/6494) `TMCoreSemVer` is not required to be set as a ldflag any longer.
- [Version] \#6494 `TMCoreSemVer` is not required to be set as a ldflag any longer.
### IMPROVEMENTS
- [statesync] [\#6566](https://github.com/tendermint/tendermint/pull/6566) Allow state sync fetchers and request timeout to be configurable. (@alexanderbez)
- [statesync] [\#6378](https://github.com/tendermint/tendermint/pull/6378) Retry requests for snapshots and add a minimum discovery time (5s) for new snapshots. (@tychoish)
- [statesync] [\#6582](https://github.com/tendermint/tendermint/pull/6582) Increase chunk priority and add multiple retry chunk requests (@cmwaters)
- [statesync] \#6566 Allow state sync fetchers and request timeout to be configurable. (@alexanderbez)
- [statesync] \#6378 Retry requests for snapshots and add a minimum discovery time (5s) for new snapshots. (@tychoish)
- [statesync] \#6582 Increase chunk priority and add multiple retry chunk requests (@cmwaters)
### BUG FIXES
- [evidence] [\#6375](https://github.com/tendermint/tendermint/pull/6375) Fix bug with inconsistent LightClientAttackEvidence hashing (@cmwaters)
- [evidence] \#6375 Fix bug with inconsistent LightClientAttackEvidence hashing (@cmwaters)
## v0.34.10
*April 14, 2021*
This release fixes a bug where peers would sometimes try to send messages
on incorrect channels. Special thanks to our friends at Oasis Labs for surfacing
this issue!
- [p2p/node] [\#6339](https://github.com/tendermint/tendermint/issues/6339) Fix bug with using custom channels (@cmwaters)
- [light] [\#6346](https://github.com/tendermint/tendermint/issues/6346) Correctly handle too high errors to improve client robustness (@cmwaters)
@@ -60,7 +39,7 @@ this issue!
*April 8, 2021*
This release fixes a moderate severity security issue, Security Advisory Alderfly,
which impacts all networks that rely on Tendermint light clients.
Further details will be released once networks have upgraded.
@@ -133,7 +112,7 @@ shout-out to @marbar3778 for diagnosing it quickly.
## v0.34.6
*February 18, 2021*
_Tendermint Core v0.34.5 and v0.34.6 have been recalled due to release tooling problems._
@@ -141,9 +120,9 @@ _Tendermint Core v0.34.5 and v0.34.6 have been recalled due to release tooling p
*February 11, 2021*
This release includes a fix for a memory leak in the evidence reactor (see #6068, below).
All Tendermint clients are recommended to upgrade.
Thank you to our friends at Crypto.com for the initial report of this memory leak!
Special thanks to other external contributors on this release: @yayajacky, @odidev, @laniehei, and @c29r3!
@@ -153,17 +132,17 @@ Special thanks to other external contributors on this release: @yayajacky, @odid
- [light] [\#6026](https://github.com/tendermint/tendermint/pull/6026) Fix a bug when height isn't provided for the rpc calls: `/commit` and `/validators` (@cmwaters)
- [evidence] [\#6068](https://github.com/tendermint/tendermint/pull/6068) Terminate broadcastEvidenceRoutine when peer is stopped (@melekes)
## v0.34.3
*January 19, 2021*
This release includes a fix for a high-severity security vulnerability,
a DoS-vector that impacted Tendermint Core v0.34.0-v0.34.2. For more details, see
[Security Advisory Mulberry](https://github.com/tendermint/tendermint/security/advisories/GHSA-p658-8693-mhvg)
or https://nvd.nist.gov/vuln/detail/CVE-2021-21271.
Tendermint Core v0.34.3 also updates GoGo Protobuf to 1.3.2 in order to pick up the fix for
https://nvd.nist.gov/vuln/detail/CVE-2021-3121.
### BUG FIXES
@@ -255,14 +234,14 @@ Special thanks to external contributors on this release: @james-ray, @fedekunze,
- [blockchain] [\#4637](https://github.com/tendermint/tendermint/pull/4637) Migrate blockchain reactor(s) to Protobuf encoding (@marbar3778)
- [evidence] [\#4949](https://github.com/tendermint/tendermint/pull/4949) Migrate evidence reactor to Protobuf encoding (@marbar3778)
- [mempool] [\#4940](https://github.com/tendermint/tendermint/pull/4940) Migrate mempool to Protobuf encoding (@marbar3778)
- [mempool] [\#5321](https://github.com/tendermint/tendermint/pull/5321) Batch transactions when broadcasting them to peers (@melekes)
- `MaxBatchBytes` new config setting defines the max size of one batch.
- [p2p/pex] [\#4973](https://github.com/tendermint/tendermint/pull/4973) Migrate `p2p/pex` reactor to Protobuf encoding (@marbar3778)
- [statesync] [\#4943](https://github.com/tendermint/tendermint/pull/4943) Migrate state sync reactor to Protobuf encoding (@marbar3778)
- Blockchain Protocol
- [evidence] [\#4725](https://github.com/tendermint/tendermint/pull/4725) Remove `Pubkey` from `DuplicateVoteEvidence` (@marbar3778)
- [evidence] [\#5499](https://github.com/tendermint/tendermint/pull/5449) Cap evidence to a maximum number of bytes (supersedes [\#4780](https://github.com/tendermint/tendermint/pull/4780)) (@cmwaters)
- [merkle] [\#5193](https://github.com/tendermint/tendermint/pull/5193) Header hashes are no longer empty for empty inputs, notably `DataHash`, `EvidenceHash`, and `LastResultsHash` (@erikgrinaker)
- [state] [\#4845](https://github.com/tendermint/tendermint/pull/4845) Include `GasWanted` and `GasUsed` into `LastResultsHash` (@melekes)
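The mempool batching entry above (#5321) caps each broadcast batch with a new `MaxBatchBytes` setting. A minimal sketch of such size-capped batching, using a hypothetical helper rather than the Tendermint implementation:

```go
package main

import "fmt"

// batchTxs groups raw transactions into batches whose total size stays
// under maxBatchBytes, mirroring the idea behind MaxBatchBytes. An
// illustrative helper, not the Tendermint mempool code.
func batchTxs(txs [][]byte, maxBatchBytes int) [][][]byte {
	var batches [][][]byte
	var cur [][]byte
	size := 0
	for _, tx := range txs {
		// Flush the current batch if adding this tx would exceed the cap.
		if len(cur) > 0 && size+len(tx) > maxBatchBytes {
			batches = append(batches, cur)
			cur, size = nil, 0
		}
		cur = append(cur, tx)
		size += len(tx)
	}
	if len(cur) > 0 {
		batches = append(batches, cur)
	}
	return batches
}

func main() {
	txs := [][]byte{[]byte("aaaa"), []byte("bbbb"), []byte("cc")}
	// With an 8-byte cap: [aaaa bbbb] fills a batch, [cc] starts another.
	fmt.Println(len(batchTxs(txs, 8))) // 2
}
```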
@@ -321,7 +300,7 @@ Special thanks to external contributors on this release: @james-ray, @fedekunze,
- [types] [\#4852](https://github.com/tendermint/tendermint/pull/4852) Vote & Proposal `SignBytes` is now func `VoteSignBytes` & `ProposalSignBytes` (@marbar3778)
- [types] [\#4798](https://github.com/tendermint/tendermint/pull/4798) Simplify `VerifyCommitTrusting` func + remove extra validation (@melekes)
- [types] [\#4845](https://github.com/tendermint/tendermint/pull/4845) Remove `ABCIResult` (@melekes)
- [types] [\#5029](https://github.com/tendermint/tendermint/pull/5029) Rename all values from `PartsHeader` to `PartSetHeader` to have consistency (@marbar3778)
- [types] [\#4939](https://github.com/tendermint/tendermint/pull/4939) `Total` in `Parts` & `PartSetHeader` has been changed from a `int` to a `uint32` (@marbar3778)
- [types] [\#4939](https://github.com/tendermint/tendermint/pull/4939) Vote: `ValidatorIndex` & `Round` are now `int32` (@marbar3778)
- [types] [\#4939](https://github.com/tendermint/tendermint/pull/4939) Proposal: `POLRound` & `Round` are now `int32` (@marbar3778)
@@ -359,7 +338,7 @@ Special thanks to external contributors on this release: @james-ray, @fedekunze,
- [evidence] [\#4722](https://github.com/tendermint/tendermint/pull/4722) Consolidate evidence store and pool types to improve evidence DB (@cmwaters)
- [evidence] [\#4839](https://github.com/tendermint/tendermint/pull/4839) Reject duplicate evidence from being proposed (@cmwaters)
- [evidence] [\#5219](https://github.com/tendermint/tendermint/pull/5219) Change the source of evidence time to block time (@cmwaters)
- [libs] [\#5126](https://github.com/tendermint/tendermint/pull/5126) Add a sync package which wraps sync.(RW)Mutex & deadlock.(RW)Mutex and use a build flag (deadlock) in order to enable deadlock checking (@marbar3778)
- [light] [\#4935](https://github.com/tendermint/tendermint/pull/4935) Fetch and compare a new header with witnesses in parallel (@melekes)
- [light] [\#4929](https://github.com/tendermint/tendermint/pull/4929) Compare header with witnesses only when doing bisection (@melekes)
- [light] [\#4916](https://github.com/tendermint/tendermint/pull/4916) Validate basic for inbound validator sets and headers before further processing them (@cmwaters)


@@ -24,7 +24,7 @@ Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermi
- [fastsync/rpc] \#6620 Add TotalSyncedTime & RemainingTime to SyncInfo in /status RPC (@JayT106)
- [rpc/grpc] \#6725 Mark gRPC in the RPC layer as deprecated.
- [blockchain/v2] \#6730 Fast Sync v2 is deprecated, please use v0
- [rpc] \#6820 Update RPC methods to reflect changes in the p2p layer, disabling support for `UnsafeDialSeeds` and `UnsafeDialPeers` when used with the new p2p layer, and changing the response format of the peer list in `NetInfo` for all users.
- [rpc] Add genesis_chunked method to support paginated and parallel fetching of large genesis documents.
- Apps
- [ABCI] \#6408 Change the `key` and `value` fields from `[]byte` to `string` in the `EventAttribute` type. (@alexanderbez)
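The `EventAttribute` change in #6408 is a mechanical `[]byte`-to-`string` conversion for callers. A sketch using simplified local stand-ins (the real types live in Tendermint's abci package):

```go
package main

import "fmt"

// Hypothetical stand-ins for the old and new shapes of EventAttribute.
type oldAttr struct{ Key, Value []byte }
type newAttr struct{ Key, Value string }

// upgradeAttr shows the conversion callers need after #6408:
// byte slices become plain strings.
func upgradeAttr(a oldAttr) newAttr {
	return newAttr{Key: string(a.Key), Value: string(a.Value)}
}

func main() {
	fmt.Println(upgradeAttr(oldAttr{Key: []byte("sender"), Value: []byte("foo")}))
	// {sender foo}
}
```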
@@ -33,7 +33,7 @@ Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermi
- [ABCI] \#5818 Use protoio for msg length delimitation. Migrates from int64 to uint64 length delimiters.
- [ABCI] \#3546 Add `mempool_error` field to `ResponseCheckTx`. This field will contain an error string if Tendermint encountered an error while adding a transaction to the mempool. (@williambanfield)
- [Version] \#6494 `TMCoreSemVer` has been renamed to `TMVersion`.
- It is no longer required to set ldflags to set version strings
- [abci/counter] \#6684 Delete counter example app
- P2P Protocol
@@ -56,25 +56,25 @@ Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermi
- [store] \#5848 Remove block store state in favor of using the db iterators directly (@cmwaters)
- [state] \#5864 Use an iterator when pruning state (@cmwaters)
- [types] \#6023 Remove `tm2pb.Header`, `tm2pb.BlockID`, `tm2pb.PartSetHeader` and `tm2pb.NewValidatorUpdate`.
- Each of the above types has a `ToProto` and `FromProto` method or function which replaced this logic.
- [light] \#6054 Move `MaxRetryAttempt` option from client to provider.
- `NewWithOptions` now sets the max retry attempts and timeouts (@cmwaters)
- [all] \#6077 Change spelling from British English to American (@cmwaters)
- Rename "Subscription.Cancelled()" to "Subscription.Canceled()" in libs/pubsub
- Rename "behaviour" pkg to "behavior" and internalized it in blockchain v2
- [rpc/client/http] \#6176 Remove `endpoint` arg from `New`, `NewWithTimeout` and `NewWithClient` (@melekes)
- [rpc/client/http] \#6176 Unexpose `WSEvents` (@melekes)
- [rpc/jsonrpc/client/ws_client] \#6176 `NewWS` no longer accepts options (use `NewWSWithOptions` and `OnReconnect` funcs to configure the client) (@melekes)
- [internal/libs] \#6366 Move `autofile`, `clist`,`fail`,`flowrate`, `protoio`, `sync`, `tempfile`, `test` and `timer` lib packages to an internal folder
- [libs/rand] \#6364 Remove most of libs/rand in favour of standard lib's `math/rand` (@liamsi)
- [mempool] \#6466 The original mempool reactor has been versioned as `v0` and moved to a sub-package under the root `mempool` package.
Some core types have been kept in the `mempool` package such as `TxCache` and its implementations, the `Mempool` interface itself
and `TxInfo`. (@alexanderbez)
- [crypto/sr25519] \#6526 Do not re-execute the Ed25519-style key derivation step when doing signing and verification. The derivation is now done once and only once. This breaks `sr25519.GenPrivKeyFromSecret` output compatibility. (@Yawning)
- [types] \#6627 Move `NodeKey` to types to make the type public.
- [config] \#6627 Extend `config` to contain methods `LoadNodeKeyID` and `LoadorGenNodeKeyID`
- [blocksync] \#6755 Rename `FastSync` and `Blockchain` package to `BlockSync`
(@cmwaters)
- Blockchain Protocol
@@ -107,7 +107,6 @@ Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermi
- [statesync/event] \#6700 Emit statesync status start/end event (@JayT106)
### IMPROVEMENTS
- [libs/log] Console log formatting changes as a result of \#6534 and \#6589. (@tychoish)
- [statesync] \#6566 Allow state sync fetchers and request timeout to be configurable. (@alexanderbez)
- [types] \#6478 Add `block_id` to `newblock` event (@jeebster)
@@ -155,7 +154,8 @@ Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermi
- [blockchain/v1] [\#5701](https://github.com/tendermint/tendermint/pull/5701) Handle peers without blocks (@melekes)
- [blockchain/v1] \#5711 Fix deadlock (@melekes)
- [evidence] \#6375 Fix bug with inconsistent LightClientAttackEvidence hashing (@cmwaters)
- [rpc] \#6507 Ensure RPC client can handle URLs without ports (@JayT106)
- [rpc] \#6507 fix RPC client doesn't handle url's without ports (@JayT106)
- [statesync] \#6463 Adds Reverse Sync feature to fetch historical light blocks after state sync in order to verify any evidence (@cmwaters)
- [fastsync] \#6590 Update the metrics during fast-sync (@JayT106)
- [gitignore] \#6668 Fix gitignore of abci-cli (@tanyabouman)
- [light] \#6687 Fix bug with incorrectly handled contexts in the light client (@cmwaters)


@@ -227,96 +227,16 @@ Fixes #nnnn
Each PR should have one commit once it lands on `master`; this can be accomplished by using the "squash and merge" button on Github. Be sure to edit your commit message, though!
### Release procedure
### Release Procedure
#### A note about backport branches
Tendermint's `master` branch is under active development.
Releases are specified using tags and are built from long-lived "backport" branches.
Each release "line" (e.g. 0.34 or 0.33) has its own long-lived backport branch,
and the backport branches have names like `v0.34.x` or `v0.33.x`
(literally, `x`; it is not a placeholder in this case).
As non-breaking changes land on `master`, they should also be backported (cherry-picked)
to these backport branches.
We use Mergify's [backport feature](https://mergify.io/features/backports) to automatically backport
to the needed branch. There should be a label for any backport branch that you'll be targeting.
To notify the bot to backport a pull request, mark the pull request with
the label `S:backport-to-<backport_branch>`.
Once the original pull request is merged, the bot will try to cherry-pick the pull request
to the backport branch. If the bot fails to backport, it will open a pull request.
The author of the original pull request is responsible for solving the conflicts and
merging the pull request.
#### Creating a backport branch
If this is the first release candidate for a major release, you get to have the honor of creating
the backport branch!
Note that, after creating the backport branch, you'll also need to update the tags on `master`
so that `go mod` is able to order the branches correctly. You should tag `master` with a "dev" tag
that is "greater than" the backport branches tags. See #6072 for more context.
In the following example, we'll assume that we're making a backport branch for
the 0.35.x line.
1. Start on `master`
2. Create the backport branch:
`git checkout -b v0.35.x`
3. Go back to master and tag it as the dev branch for the _next_ major release and push it back up:
`git tag -a v0.36.0-dev; git push origin v0.36.0-dev`
4. Create a new workflow to run the e2e nightlies for this backport branch.
(See https://github.com/tendermint/tendermint/blob/master/.github/workflows/e2e-nightly-34x.yml
for an example.)
#### Release candidates
Before creating an official release, especially a major release, we may want to create a
release candidate (RC) for our friends and partners to test out. We use git tags to
create RCs, and we build them off of backport branches.
Tags for RCs should follow the "standard" release naming conventions, with `-rcX` at the end
(for example, `v0.35.0-rc0`).
(Note that branches and tags _cannot_ have the same names, so it's important that these branches
have distinct names from the tags/release names.)
If this is the first RC for a major release, you'll have to make a new backport branch (see above).
Otherwise:
1. Start from the backport branch (e.g. `v0.35.x`).
1. Run the integration tests and the e2e nightlies
(which can be triggered from the Github UI;
e.g., https://github.com/tendermint/tendermint/actions/workflows/e2e-nightly-34x.yml).
1. Prepare the changelog:
- Move the changes included in `CHANGELOG_PENDING.md` into `CHANGELOG.md`.
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all PRs
- Ensure that UPGRADING.md is up-to-date and includes notes on any breaking changes
or other upgrading flows.
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
1. Open a PR with these changes against the backport branch.
1. Once these changes have landed on the backport branch, be sure to pull them back down locally.
2. Once you have the changes locally, create the new tag, specifying a name and a tag "message":
`git tag -a v0.35.0-rc0 -m "Release Candidate v0.35.0-rc0"`
3. Push the tag back up to origin:
`git push origin v0.35.0-rc0`
Now the tag should be available on the repo's releases page.
4. Future RCs will continue to be built off of this branch.
Note that this process should only be used for "true" RCs--
release candidates that, if successful, will be the next release.
For more experimental "RCs," create a new, short-lived branch and tag that instead.
#### Major release
#### Major Release
This major release process assumes that this release was preceded by release candidates.
If there were no release candidates, begin by creating a backport branch, as described above.
If there were no release candidates, and you'd like to cut a major release directly from master, see below.
1. Start on the backport branch (e.g. `v0.35.x`)
2. Run integration tests and the e2e nightlies.
3. Prepare the release:
1. Start on the latest RC branch (`RCx/vX.X.0`).
2. Run integration tests.
3. Branch off of the RC branch (`git checkout -b release-prep`) and prepare the release:
- "Squash" changes from the changelog entries for the RCs into a single entry,
and add all changes included in `CHANGELOG_PENDING.md`.
(Squashing includes both combining all entries, as well as removing or simplifying
@@ -329,24 +249,57 @@ If there were no release candidates, begin by creating a backport branch, as des
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
- Add any release notes you would like to be added to the body of the release to `release_notes.md`.
4. Open a PR with these changes against the backport branch.
5. Once these changes are on the backport branch, push a tag with prepared release details.
This will trigger the actual release `v0.35.0`.
- `git tag -a v0.35.0 -m 'Release v0.35.0'`
- `git push origin v0.35.0`
4. Open a PR with these changes against the RC branch (`RCx/vX.X.0`).
5. Once these changes are on the RC branch, branch off of the RC branch again to create a release branch:
- `git checkout RCx/vX.X.0`
- `git checkout -b release/vX.X.0`
6. Push a tag with prepared release details. This will trigger the actual release `vX.X.0`.
- `git tag -a vX.X.0 -m 'Release vX.X.0'`
- `git push origin vX.X.0`
7. Make sure that `master` is updated with the latest `CHANGELOG.md`, `CHANGELOG_PENDING.md`, and `UPGRADING.md`.
8. Create the long-lived minor release branch `RC0/vX.X.1` for the next point release on this
new major release series.
#### Minor release (point releases)
##### Major Release (from `master`)
1. Start on `master`
2. Run integration tests (see `test_integrations` in Makefile)
3. Prepare release in a pull request against `master` (to be squash merged):
- Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`; if this release
had release candidates, squash all the RC updates into one
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all issues
- Run `bash ./scripts/authors.sh` to get a list of authors since the latest
release, and add the github aliases of external contributors to the top of
the changelog. To look up an alias from an email, try `bash ./scripts/authors.sh <email>`
- Reset the `CHANGELOG_PENDING.md`
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
- Make sure all significant breaking changes are covered in `UPGRADING.md`
- Add any release notes you would like to be added to the body of the release to `release_notes.md`.
4. Push a tag with prepared release details (this will trigger the release `vX.X.0`)
- `git tag -a vX.X.x -m 'Release vX.X.x'`
- `git push origin vX.X.x`
5. Update the `CHANGELOG.md` file on master with the release's changelog.
6. Delete any RC branches and tags for this release (if applicable)
#### Minor Release (Point Releases)
Minor releases are done differently from major releases: They are built off of long-lived backport branches, rather than from master.
Each release "line" (e.g. 0.34 or 0.33) has its own long-lived backport branch, and
the backport branches have names like `v0.34.x` or `v0.33.x` (literally, `x`; it is not a placeholder in this case).
As non-breaking changes land on `master`, they should also be backported (cherry-picked) to these backport branches.
We use Mergify's [backport feature](https://mergify.io/features/backports) to automatically backport to the needed branch. Depending on which backport branch you need to backport to there will be labels for them. To notify the bot to backport a pull request, mark the pull request with the label `backport-to-<backport_branch>`. Once the original pull request is merged, the bot will try to cherry-pick the pull request to the backport branch. If the bot fails to backport, it will open a pull request. The author of the original pull request is responsible for solving the conflicts and merging the pull request.
Minor releases don't have release candidates by default, although any tricky changes may merit a release candidate.
To create a minor release:
1. Checkout the long-lived backport branch: `git checkout v0.35.x`
2. Run integration tests (`make test_integrations`) and the nightlies.
1. Checkout the long-lived backport branch: `git checkout vX.X.x`
2. Run integration tests: `make test_integrations`
3. Check out a new branch and prepare the release:
- Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for all issues
@@ -356,14 +309,34 @@ To create a minor release:
(Note that ABCI follows semver, and that ABCI versions are the only versions
which can change during minor releases, and only field additions are valid minor changes.)
- Add any release notes you would like to be added to the body of the release to `release_notes.md`.
4. Open a PR with these changes that will land them back on `v0.35.x`
4. Open a PR with these changes that will land them back on `vX.X.x`
5. Once this change has landed on the backport branch, make sure to pull it locally, then push a tag.
- `git tag -a v0.35.1 -m 'Release v0.35.1'`
- `git push origin v0.35.1`
- `git tag -a vX.X.x -m 'Release vX.X.x'`
- `git push origin vX.X.x`
6. Create a pull request back to master with the CHANGELOG & version changes from the latest release.
- Remove all `R:minor` labels from the pull requests that were included in the release.
- Do not merge the backport branch into master.
#### Release Candidates
Before creating an official release, especially a major release, we may want to create a
release candidate (RC) for our friends and partners to test out. We use git tags to
create RCs, and we build them off of RC branches. RC branches typically have names formatted
like `RCX/vX.X.X` (or, concretely, `RC0/v0.34.0`), while the tags themselves follow
the "standard" release naming conventions, with `-rcX` at the end (`vX.X.X-rcX`).
(Note that branches and tags _cannot_ have the same names, so it's important that these branches
have distinct names from the tags/release names.)
1. Start from the RC branch (e.g. `RC0/v0.34.0`).
2. Create the new tag, specifying a name and a tag "message":
`git tag -a v0.34.0-rc0 -m "Release Candidate v0.34.0-rc0"`
3. Push the tag back up to origin:
`git push origin v0.34.0-rc0`
Now the tag should be available on the repo's releases page.
4. Create a new release candidate branch for any possible updates to the RC:
`git checkout -b RC1/v0.34.0; git push origin RC1/v0.34.0`
## Testing
### Unit tests


@@ -24,10 +24,10 @@ This guide provides instructions for upgrading to specific versions of Tendermin
* Added `--mode` flag and `mode` config variable on `config.toml` for setting Mode of the Node: `full` | `validator` | `seed` (default: `full`)
[ADR-52](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-052-tendermint-mode.md)
* `BootstrapPeers` has been added as part of the new p2p stack. This will eventually replace
`Seeds`. Bootstrap peers are connected to on startup if needed for peer discovery. Unlike
persistent peers, there's no guarantee that the node will remain connected with these peers.
* configuration values starting with `priv-validator-` have moved to the new
`priv-validator` section, without the `priv-validator-` prefix.
@@ -35,27 +35,6 @@ This guide provides instructions for upgrading to specific versions of Tendermin
* The fast sync process as well as the blockchain package and service has all
been renamed to block sync
### Key Format Changes
The format of all tendermint on-disk database keys changes in
0.35. Upgrading nodes must either re-sync all data or run a migration
script provided in this release. The script located in
`github.com/tendermint/tendermint/scripts/keymigrate/migrate.go`
provides the function `Migrate(context.Context, db.DB)` which you can
operationalize as makes sense for your deployment.
For ease of use the `tendermint` command includes a CLI version of the
migration script, which you can invoke, as in:
tendermint key-migrate
This reads the configuration file as normal and allows the
`--db-backend` and `--db-dir` flags to change database operations as
needed.
The migration operation is idempotent and can be run more than once,
if needed.
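The idempotency claim can be illustrated with a toy in-memory store; everything here (`toyMigrate`, `memDB`, the key prefixes) is a hypothetical stand-in, not the real `keymigrate` package, which operates on an actual database via `Migrate(context.Context, db.DB)`:

```go
package main

import "fmt"

// memDB is a toy stand-in for the node's on-disk key/value store.
type memDB map[string]string

// toyMigrate rewrites keys from an "old/" prefix to a "new/" prefix.
// Like the real script, running it again is a no-op: already-migrated
// keys are left untouched, which is what "idempotent" means here.
func toyMigrate(db memDB) {
	for k, v := range db {
		if len(k) > 4 && k[:4] == "old/" {
			db["new/"+k[4:]] = v
			delete(db, k)
		}
	}
}

func main() {
	db := memDB{"old/height": "42"}
	toyMigrate(db)
	toyMigrate(db) // second run changes nothing
	fmt.Println(db["new/height"]) // 42
}
```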
### CLI Changes
* You must now specify the node mode (validator|full|seed) in `tendermint init [mode]`
@@ -87,7 +66,7 @@ are:
- `blockchain`
- `evidence`
Accordingly, the `node` package was changed to reduce access to
tendermint internals: applications that use tendermint as a library
will need to change to accommodate these changes. Most notably:
@@ -100,32 +79,8 @@ will need to change to accommodate these changes. Most notably:
### RPC changes
#### gRPC Support
Mark gRPC in the RPC layer as deprecated and to be removed in 0.36.
#### Peer Management Interface
When running with the new p2p layer, the `UnsafeDialSeeds` and
`UnsafeDialPeers` RPC methods will always return an error. They are
deprecated and will be removed in 0.36 when the legacy peer stack is
removed.
Additionally, the format of the peer list returned in the `NetInfo`
method changes in this release to accommodate the different way that
the new stack tracks data about peers. This change affects users of
both stacks.
### Support for Custom Reactor and Mempool Implementations
The changes to the p2p layer removed existing support for custom
reactors. Based on our understanding of how this functionality was
used, the introduction of the prioritized mempool covers nearly all of
the use cases for custom reactors. If you are currently running custom
reactors and mempools and are having trouble seeing the migration path
for your project please feel free to reach out to the Tendermint Core
development team directly.
## v0.34.0
**Upgrading to Tendermint 0.34 requires a blockchain restart.**
@@ -279,8 +234,8 @@ Other user-relevant changes include:
* The old `lite` package was removed; the new light client uses the `light` package.
* The `Verifier` was broken up into two pieces:
  * Core verification logic (pure `VerifyX` functions)
  * `Client` object, which represents the complete light client
* The new light clients stores headers & validator sets as `LightBlock`s
* The RPC client can be found in the `/rpc` directory.
* The HTTP(S) proxy is located in the `/proxy` directory.
@@ -412,12 +367,12 @@ Evidence Params has been changed to include duration.
### Go API
* `libs/common` has been removed in favor of specific pkgs.
* `async`
* `service`
* `rand`
* `net`
* `strings`
* `cmap`
* `async`
* `service`
* `rand`
* `net`
* `strings`
* `cmap`
* removal of `errors` pkg
### RPC Changes
@@ -486,9 +441,9 @@ Prior to the update, suppose your `ResponseDeliverTx` look like:
```go
abci.ResponseDeliverTx{
Tags: []kv.Pair{
{Key: []byte("sender"), Value: []byte("foo")},
{Key: []byte("recipient"), Value: []byte("bar")},
{Key: []byte("amount"), Value: []byte("35")},
{Key: []byte("sender"), Value: []byte("foo")},
{Key: []byte("recipient"), Value: []byte("bar")},
{Key: []byte("amount"), Value: []byte("35")},
}
}
```
@@ -507,14 +462,14 @@ the following `Events`:
```go
abci.ResponseDeliverTx{
Events: []abci.Event{
{
Type: "transfer",
Attributes: kv.Pairs{
{Key: []byte("sender"), Value: []byte("foo")},
{Key: []byte("recipient"), Value: []byte("bar")},
{Key: []byte("amount"), Value: []byte("35")},
},
}
{
Type: "transfer",
Attributes: kv.Pairs{
{Key: []byte("sender"), Value: []byte("foo")},
{Key: []byte("recipient"), Value: []byte("bar")},
{Key: []byte("amount"), Value: []byte("35")},
},
}
}
```
@@ -562,9 +517,9 @@ In this case, the WS client will receive an error with description:
"jsonrpc": "2.0",
"id": "{ID}#event",
"error": {
"code": -32000,
"msg": "Server error",
"data": "subscription was canceled (reason: client is not pulling messages fast enough)" // or "subscription was canceled (reason: Tendermint exited)"
"code": -32000,
"msg": "Server error",
"data": "subscription was canceled (reason: client is not pulling messages fast enough)" // or "subscription was canceled (reason: Tendermint exited)"
}
}
@@ -770,9 +725,9 @@ just the `Data` field set:
```go
[]ProofOp{
ProofOp{
Data: <proof bytes>,
}
ProofOp{
Data: <proof bytes>,
}
}
```


@@ -1,64 +0,0 @@
package commands
import (
"context"
"fmt"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/scripts/keymigrate"
)
func MakeKeyMigrateCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "key-migrate",
Short: "Run Database key migration",
RunE: func(cmd *cobra.Command, args []string) error {
ctx, cancel := context.WithCancel(cmd.Context())
defer cancel()
contexts := []string{
// this is ordered to put the
// (presumably) biggest/most important
// subsets first.
"blockstore",
"state",
"peerstore",
"tx_index",
"evidence",
"light",
}
for idx, dbctx := range contexts {
logger.Info("beginning a key migration",
"dbctx", dbctx,
"num", idx+1,
"total", len(contexts),
)
db, err := cfg.DefaultDBProvider(&cfg.DBContext{
ID: dbctx,
Config: config,
})
if err != nil {
return fmt.Errorf("constructing database handle: %w", err)
}
if err = keymigrate.Migrate(ctx, db); err != nil {
return fmt.Errorf("running migration for context %q: %w",
dbctx, err)
}
}
logger.Info("completed database migration successfully")
return nil
},
}
// allow database info to be overridden via cli
addDBFlags(cmd)
return cmd
}


@@ -83,10 +83,7 @@ func AddNodeFlags(cmd *cobra.Command) {
config.Consensus.CreateEmptyBlocksInterval.String(),
"the possible interval between empty blocks")
addDBFlags(cmd)
}
func addDBFlags(cmd *cobra.Command) {
// db flags
cmd.Flags().String(
"db-backend",
config.DBBackend,


@@ -28,7 +28,6 @@ func main() {
cmd.ShowNodeIDCmd,
cmd.GenNodeKeyCmd,
cmd.VersionCmd,
cmd.MakeKeyMigrateCommand(),
debug.DebugCmd,
cli.NewCompletionCmd(rootCmd, true),
)


@@ -13,7 +13,7 @@ import (
tmjson "github.com/tendermint/tendermint/libs/json"
// necessary for Bitcoin address format
"golang.org/x/crypto/ripemd160" // nolint
"golang.org/x/crypto/ripemd160" // nolint: staticcheck
)
//-------------------------------------


@@ -97,4 +97,3 @@ Note the context/background should be written in the present tense.
- [ADR-041: Proposer-Selection-via-ABCI](./adr-041-proposer-selection-via-abci.md)
- [ADR-045: ABCI-Evidence](./adr-045-abci-evidence.md)
- [ADR-057: RPC](./adr-057-RPC.md)
- [ADR-069: Node Initialization](./adr-069-flexible-node-initialization.md)


@@ -1,273 +0,0 @@
# ADR 069: Flexible Node Initialization
## Changelog
- 2021-06-09: Initial Draft (@tychoish)
- 2021-07-21: Major Revision (@tychoish)
## Status
Proposed.
## Context
In an effort to support [Go-API-Stability](./adr-060-go-api-stability.md),
during the 0.35 development cycle, we have attempted to reduce the API
surface area by moving most of the interface of the `node` package into
unexported functions, as well as moving the reactors to an `internal`
package. Having this coincide with the 0.35 release made a lot of sense
because these interfaces were _already_ changing as a result of the `p2p`
[refactor](./adr-061-p2p-refactor-scope.md), so it made sense to think a bit
more about how tendermint exposes this API.
While the interfaces of the P2P layer and most of the node package are already
internalized, this precludes some operational patterns that are important to
users who use tendermint as a library. Specifically, introspecting the
tendermint node service and replacing components is not supported in the latest
version of the code, and some of these use cases would require maintaining a
vendor copy of the code. Adding these features requires rather extensive
(internal/implementation) changes to the `node` and `rpc` packages, and this
ADR describes a model for changing the way that tendermint nodes initialize, in
service of providing this kind of functionality.
We consider node initialization because the current implementation
creates strong connections between all components, as well as between
the components of the node and the RPC layer; being able to reason
about the interactions of these components will help enable these
features and help define the requirements of the node package.
## Alternative Approaches
These alternatives are presented to frame the design space and to
contextualize the decision in terms of product requirements. These
ideas are not inherently bad, and may even be possible or desirable
in the (distant) future; they merely provide additional context for how
we, in the moment, came to our decision(s).
### Do Nothing
The current implementation is functional and sufficient for the vast
majority of use cases (e.g., all users of the Cosmos-SDK as well as
anyone who runs tendermint and the ABCI application in separate
processes). In the current implementation, and even previous versions,
modifying node initialization or injecting custom components required
copying most of the `node` package, which required such users
to maintain a vendored copy of tendermint.
While this is (likely) not tenable in the long term, as users do want
more modularity, and the current service implementation is brittle and
difficult to maintain, in the short term it may be possible to delay
implementation somewhat. Eventually, however, we will need to make the
`node` package easier to maintain and reason about.
### Generic Service Pluggability
One possible system design would export interfaces (in the Golang
sense) for all components of the system, to permit runtime dependency
injection of all components in the system, so that users can compose
tendermint nodes of arbitrary user-supplied components.
Although this level of customization would provide benefits, it would be a huge
undertaking (particularly with regards to API design work) that we do not have
scope for at the moment. Eventually providing support for some kinds of
pluggability may be useful, so the current solution does not explicitly
foreclose the possibility of this alternative.
### Abstract Dependency Based Startup and Shutdown
The main proposal in this document makes tendermint node initialization simpler
and more abstract, but the system lacks a number of
features which daemon/service initialization could provide, such as a
system allowing the authors of services to control initialization and shutdown order
of components using dependency relationships.
Such a system could work by allowing services to declare
initialization order dependencies to other reactors (by ID, perhaps)
so that the node could decide the initialization based on the
dependencies declared by services rather than requiring the node to
encode this logic directly.
This level of configuration is probably more complicated than is needed. Given
that the authors of components in the current implementation of tendermint
already *do* need to know about other components, a dependency-based system
would probably be overly-abstract at this stage.
## Decisions
- To the greatest extent possible, factor the code base so that
packages are responsible for their own initialization, and minimize
the amount of code in the `node` package itself.
- As a design goal, reduce direct coupling and dependencies between
components in the implementation of `node`.
- Begin iterating on a more-flexible internal framework for
initializing tendermint nodes to make the initialization process
less hard-coded by the implementation of the node objects.
- Reactors should not need to expose their interfaces *within* the
implementation of the node type
- This refactoring should be entirely opaque to users.
- These node initialization changes should not require a
reevaluation of the `service.Service` or a generic initialization
orchestration framework.
- Do not proactively provide a system for injecting
components/services within a tendermint node, though make it
possible to retrofit this kind of pluggability in the future if
needed.
- Prioritize implementation of the p2p-based statesync reactor to
obviate the need for users to inject a custom state-sync provider.
## Detailed Design
The [current
nodeImpl](https://github.com/tendermint/tendermint/blob/master/node/node.go#L47)
includes direct references to the implementations of each of the
reactors, which should be replaced by references to `service.Service`
objects. This will require moving construction of the [rpc
service](https://github.com/tendermint/tendermint/blob/master/node/node.go#L771)
into the constructor of
[makeNode](https://github.com/tendermint/tendermint/blob/master/node/node.go#L126). One
possible implementation of this would be to eliminate the current
`ConfigureRPC` method on the node package and instead [configure it
here](https://github.com/tendermint/tendermint/pull/6798/files#diff-375d57e386f20eaa5f09f02bb9d28bfc48ac3dca18d0325f59492208219e5618R441).
To avoid adding complexity to the `node` package, we will add a
composite service implementation to the `service` package
that implements `service.Service` and is composed of a sequence of
underlying `service.Service` objects and handles their
startup/shutdown in the specified sequential order.
Consensus, blocksync (*née* fast sync), and statesync all depend on
each other, and have significant initialization dependencies that are
presently encoded in the `node` package. As part of this change, a
new package/component (likely named `blocks` located at
`internal/blocks`) will encapsulate the initialization of these block
management areas of the code.
### Injectable Component Option
This section briefly describes a possible implementation for
user-supplied services running within a node. This should not be
implemented unless user-supplied components are a hard requirement for
a user.
In order to allow components to be replaced, a new public function
will be added to the public interface of `node` with a signature that
resembles the following:
```go
func NewWithServices(conf *config.Config,
logger log.Logger,
cf proxy.ClientCreator,
gen *types.GenesisDoc,
srvs []service.Service,
) (service.Service, error) {
```
The `service.Service` objects will be initialized in the order supplied, after
all pre-configured/default services have started (and shut down in reverse
order). The given services may implement additional interfaces, allowing them
to replace specific default services. `NewWithServices` will validate input
service lists with the following rules:
- None of the services may already be running.
- The caller may not supply more than one replacement reactor for a given
default service type.
If callers violate any of these rules, `NewWithServices` will return
an error. To retract support for this kind of operation in the future,
the function can be modified to *always* return an error.
## Consequences
### Positive
- The node package will become easier to maintain.
- It will become easier to add additional services within tendermint
nodes.
- It will become possible to replace default components in the node
package without vendoring the tendermint repo and modifying internal
code.
- The current end-to-end (e2e) test suite will be able to prevent any
regressions, and the new functionality can be thoroughly unit tested.
- The scope of this project is very narrow, which minimizes risk.
### Negative
- This increases our reliance on the `service.Service` interface which
is probably not an interface that we want to fully commit to.
- This proposal implements a fairly minimal set of functionality and
leaves open the possibility for many additional features which are
not included in the scope of this proposal.
### Neutral
N/A
## Open Questions
- To what extent does this new initialization framework need to accommodate
the legacy p2p stack? Would it be possible to delay a great deal of this
work to the 0.36 cycle to avoid this complexity?
- Answer: _depends on timing_, and the requirement to ship pluggable reactors in 0.35.
- Where should additional public types be exported for the 0.35
release?
Related to the general project of API stabilization we want to deprecate
the `types` package, and move its contents into a new `pkg` hierarchy;
however, the design of the `pkg` interface is currently underspecified.
If `types` is going to remain for the 0.35 release, then we should consider
the impact of using multiple organizing modalities for this code within a
single release.
## Future Work
- Improve or simplify the `service.Service` interface. There are some
pretty clear limitations with this interface as written (there's no
way to timeout slow startup or shut down, the cycle between the
`service.BaseService` and `service.Service` implementations is
troubling, the default panic in `OnReset` seems troubling.)
- As part of the refactor of `service.Service` have all services/nodes
respect the lifetime of a `context.Context` object, and avoid the
current practice of creating `context.Context` objects in p2p and
reactor code. This would be required for in-process multi-tenancy.
- Support explicit dependencies between components and allow for
parallel startup, so that different reactors can startup at the same
time, where possible.
## References
- [this
branch](https://github.com/tendermint/tendermint/tree/tychoish/scratch-node-minimize)
contains experimental work in the implementation of the node package
to unwind some of the hard dependencies between components.
- [the component
graph](https://peter.bourgon.org/go-for-industrial-programming/#the-component-graph)
as a framing for internal service construction.
## Appendix
### Dependencies
There's a relationship between the blockchain and consensus reactors,
described by the following dependency graph, that makes replacing some
of these components more difficult relative to other reactors or
components.
![consensus blockchain dependency graph](./img/consensus_blockchain.png)

View File

@@ -1,445 +0,0 @@
# ADR 71: Proposer-Based Timestamps
* [Changelog](#changelog)
* [Status](#status)
* [Context](#context)
* [Alternative Approaches](#alternative-approaches)
* [Remove timestamps altogether](#remove-timestamps-altogether)
* [Decision](#decision)
* [Detailed Design](#detailed-design)
* [Overview](#overview)
* [Proposal Timestamp and Block Timestamp](#proposal-timestamp-and-block-timestamp)
* [Saving the timestamp across heights](#saving-the-timestamp-across-heights)
* [Changes to `CommitSig`](#changes-to-commitsig)
* [Changes to `Commit`](#changes-to-commit)
* [Changes to `Vote` messages](#changes-to-vote-messages)
* [New consensus parameters](#new-consensus-parameters)
* [Changes to `Header`](#changes-to-header)
* [Changes to the block proposal step](#changes-to-the-block-proposal-step)
* [Proposer selects proposal timestamp](#proposer-selects-proposal-timestamp)
* [Proposer selects block timestamp](#proposer-selects-block-timestamp)
* [Proposer waits](#proposer-waits)
* [Changes to the propose step timeout](#changes-to-the-propose-step-timeout)
* [Changes to validation rules](#changes-to-validation-rules)
* [Proposal timestamp validation](#proposal-timestamp-validation)
* [Block timestamp validation](#block-timestamp-validation)
* [Changes to the prevote step](#changes-to-the-prevote-step)
* [Changes to the precommit step](#changes-to-the-precommit-step)
* [Changes to locking a block](#changes-to-locking-a-block)
* [Remove voteTime Completely](#remove-votetime-completely)
* [Future Improvements](#future-improvements)
* [Consequences](#consequences)
* [Positive](#positive)
* [Neutral](#neutral)
* [Negative](#negative)
* [References](#references)
## Changelog
- July 15 2021: Created by @williambanfield
- Aug 4 2021: Draft completed by @williambanfield
- Aug 5 2021: Draft updated to include data structure changes by @williambanfield
- Aug 20 2021: Language edits completed by @williambanfield
## Status
**Accepted**
## Context
Tendermint currently provides a monotonically increasing source of time known as [BFTTime](https://github.com/tendermint/spec/blob/master/spec/consensus/bft-time.md).
This mechanism for producing a source of time is reasonably simple.
Each correct validator adds a timestamp to each `Precommit` message it sends.
The timestamp it sends is either the validator's current known Unix time or one millisecond greater than the previous block time, depending on which value is greater.
When a block is produced, the proposer chooses the block timestamp as the weighted median of the times in all of the `Precommit` messages the proposer received.
The weighting is proportional to the amount of voting power, or stake, a validator has on the network.
This mechanism for producing timestamps is both deterministic and byzantine fault tolerant.
This current mechanism for producing timestamps has a few drawbacks.
Validators do not have to agree at all on how close the selected block timestamp is to their own currently known Unix time.
Additionally, any amount of voting power `>1/3` may directly control the block timestamp.
As a result, it is quite possible that the timestamp is not particularly meaningful.
These drawbacks present issues in the Tendermint protocol.
Timestamps are used by light clients to verify blocks.
Light clients rely on correspondence between their own currently known Unix time and the block timestamp to verify blocks they see;
however, their currently known Unix time may be greatly divergent from the block timestamp as a result of the limitations of `BFTTime`.
The proposer-based timestamps specification suggests an alternative approach for producing block timestamps that remedies these issues.
Proposer-based timestamps alter the current mechanism for producing block timestamps in two main ways:
1. The block proposer is amended to offer up its currently known Unix time as the timestamp for the next block.
1. Correct validators only approve the proposed block timestamp if it is close enough to their own currently known Unix time.
The result of these changes is a more meaningful timestamp that cannot be controlled by `<= 2/3` of the validator voting power.
This document outlines the necessary code changes in Tendermint to implement the corresponding [proposer-based timestamps specification](https://github.com/tendermint/spec/tree/master/spec/consensus/proposer-based-timestamp).
## Alternative Approaches
### Remove timestamps altogether
Computer clocks are bound to skew for a variety of reasons.
Using timestamps in our protocol means either accepting the timestamps as not reliable or impacting the protocol's liveness guarantees.
This design requires impacting the protocol's liveness in order to make the timestamps more reliable.
An alternate approach is to remove timestamps altogether from the block protocol.
`BFTTime` is deterministic but may be arbitrarily inaccurate.
However, having a reliable source of time is quite useful for applications and protocols built on top of a blockchain.
We therefore decided not to remove the timestamp.
Applications often wish for some transactions to occur on a certain day, on a regular period, or after some time following a different event.
All of these require some meaningful representation of agreed upon time.
The following protocols and application features require a reliable source of time:
* Tendermint Light Clients [rely on correspondence between their known time](https://github.com/tendermint/spec/blob/master/spec/light-client/verification/README.md#definitions-1) and the block time for block verification.
* Tendermint Evidence validity is determined [either in terms of heights or in terms of time](https://github.com/tendermint/spec/blob/8029cf7a0fcc89a5004e173ec065aa48ad5ba3c8/spec/consensus/evidence.md#verification).
* Unbonding of staked assets in the Cosmos Hub [occurs after a period of 21 days](https://github.com/cosmos/governance/blob/ce75de4019b0129f6efcbb0e752cd2cc9e6136d3/params-change/Staking.md#unbondingtime).
* IBC packets can use either a [timestamp or a height to timeout packet delivery](https://docs.cosmos.network/v0.43/ibc/overview.html#acknowledgements).
Finally, inflation distribution in the Cosmos Hub uses an approximation of time to calculate an annual percentage rate.
This approximation of time is calculated using [block heights with an estimated number of blocks produced in a year](https://github.com/cosmos/governance/blob/master/params-change/Mint.md#blocksperyear).
Proposer-based timestamps will allow this inflation calculation to use a more meaningful and accurate source of time.
## Decision
Implement proposer-based timestamps and remove `BFTTime`.
## Detailed Design
### Overview
Implementing proposer-based timestamps will require a few changes to Tendermint's code.
These changes will be to the following components:
* The `internal/consensus/` package.
* The `state/` package.
* The `Vote`, `CommitSig`, `Commit` and `Header` types.
* The consensus parameters.
### Proposal Timestamp and Block Timestamp
This design discusses two timestamps: (1) The timestamp in the block and (2) the timestamp in the proposal message.
The existence and use of both of these timestamps can get a bit confusing, so some background is given here to clarify their uses.
The [proposal message currently has a timestamp](https://github.com/tendermint/tendermint/blob/e5312942e30331e7c42b75426da2c6c9c00ae476/types/proposal.go#L31).
This timestamp is the current Unix time known to the proposer when sending the `Proposal` message.
This timestamp is not currently used as part of consensus.
The changes in this ADR will begin using the proposal message timestamp as part of consensus.
We will refer to this as the **proposal timestamp** throughout this design.
The block has a timestamp field [in the header](https://github.com/tendermint/tendermint/blob/dc7c212c41a360bfe6eb38a6dd8c709bbc39aae7/types/block.go#L338).
This timestamp is currently set as part of Tendermint's `BFTtime` algorithm.
It is set when a block is proposed and it is checked by the validators when they are deciding to prevote the block.
This field will continue to be used but the logic for creating and validating this timestamp will change.
We will refer to this as the **block timestamp** throughout this design.
At a high level, the proposal timestamp from height `H` is used as the block timestamp at height `H+1`.
The following image shows this relationship.
The rest of this document describes the code changes that will make this possible.
![](./img/pbts-message.png)
### Saving the timestamp across heights
Currently, `BFTtime` uses `LastCommit` to construct the block timestamp.
The `LastCommit` is created at height `H-1` and is saved in the state store to be included in the block at height `H`.
`BFTtime` takes the weighted median of the timestamps in `LastCommit.CommitSig` to build the timestamp for height `H`.
For proposer-based timestamps, the `LastCommit.CommitSig` timestamps will no longer be used to build the timestamps for height `H`.
Instead, the proposal timestamp from height `H-1` will become the block timestamp for height `H`.
To enable this, we will add a `Timestamp` field to the `Commit` struct.
This field will be populated at each height with the proposal timestamp decided on at the previous height.
This timestamp will also be saved with the rest of the commit in the state store [when the commit is finalized](https://github.com/tendermint/tendermint/blob/e8013281281985e3ada7819f42502b09623d24a0/internal/consensus/state.go#L1611) so that it can be recovered if Tendermint crashes.
Changes to the `CommitSig` and `Commit` struct are detailed below.
### Changes to `CommitSig`
The [CommitSig](https://github.com/tendermint/tendermint/blob/a419f4df76fe4aed668a6c74696deabb9fe73211/types/block.go#L604) struct currently contains a timestamp.
This timestamp is the current Unix time known to the validator when it issued a `Precommit` for the block.
This timestamp is no longer used and will be removed in this change.
`CommitSig` will be updated as follows:
```diff
type CommitSig struct {
BlockIDFlag BlockIDFlag `json:"block_id_flag"`
ValidatorAddress Address `json:"validator_address"`
-- Timestamp time.Time `json:"timestamp"`
Signature []byte `json:"signature"`
}
```
### Changes to `Commit`
The [Commit](https://github.com/tendermint/tendermint/blob/a419f4df76fe4aed668a6c74696deabb9fe73211/types/block.go#L746) struct does not currently contain a timestamp.
The timestamps in the `Commit.CommitSig` entries are currently used to build the block timestamp.
With these timestamps removed, the commit time will instead be stored in the `Commit` struct.
`Commit` will be updated as follows.
```diff
type Commit struct {
Height int64 `json:"height"`
Round int32 `json:"round"`
++ Timestamp time.Time `json:"timestamp"`
BlockID BlockID `json:"block_id"`
Signatures []CommitSig `json:"signatures"`
}
```
### Changes to `Vote` messages
`Precommit` and `Prevote` messages use a common [Vote struct](https://github.com/tendermint/tendermint/blob/a419f4df76fe4aed668a6c74696deabb9fe73211/types/vote.go#L50).
This struct currently contains a timestamp.
This timestamp is set using the [voteTime](https://github.com/tendermint/tendermint/blob/e8013281281985e3ada7819f42502b09623d24a0/internal/consensus/state.go#L2241) function and therefore vote times correspond to the current Unix time known to the validator.
For precommits, this timestamp is used to construct the [CommitSig that is included in the block in the LastCommit](https://github.com/tendermint/tendermint/blob/e8013281281985e3ada7819f42502b09623d24a0/types/block.go#L754) field.
For prevotes, this field is unused.
Proposer-based timestamps will use the [RoundState.Proposal](https://github.com/tendermint/tendermint/blob/c3ae6f5b58e07b29c62bfdc5715b6bf8ae5ee951/internal/consensus/types/round_state.go#L76) timestamp to construct the `signedBytes` `CommitSig`.
This timestamp is therefore no longer useful and will be dropped.
`Vote` will be updated as follows:
```diff
type Vote struct {
Type tmproto.SignedMsgType `json:"type"`
Height int64 `json:"height"`
Round int32 `json:"round"`
BlockID BlockID `json:"block_id"` // zero if vote is nil.
-- Timestamp time.Time `json:"timestamp"`
ValidatorAddress Address `json:"validator_address"`
ValidatorIndex int32 `json:"validator_index"`
Signature []byte `json:"signature"`
}
```
### New consensus parameters
The proposer-based timestamp specification includes multiple new parameters that must be the same among all validators.
These parameters are `PRECISION`, `MSGDELAY`, and `ACCURACY`.
The `PRECISION` and `MSGDELAY` parameters are used to determine if the proposed timestamp is acceptable.
A validator will only Prevote a proposal if the proposal timestamp is considered `timely`.
A proposal timestamp is considered `timely` if it is within `PRECISION` and `MSGDELAY` of the Unix time known to the validator.
More specifically, a proposal timestamp is `timely` if `validatorLocalTime - PRECISION < proposalTime < validatorLocalTime + PRECISION + MSGDELAY`.
Because the `PRECISION` and `MSGDELAY` parameters must be the same across all validators, they will be added to the [consensus parameters](https://github.com/tendermint/tendermint/blob/master/proto/tendermint/types/params.proto#L13) as [durations](https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#google.protobuf.Duration).
The proposer-based timestamp specification also includes a [new ACCURACY parameter](https://github.com/tendermint/spec/blob/master/spec/consensus/proposer-based-timestamp/pbts-sysmodel_001_draft.md#pbts-clocksync-external0).
Intuitively, `ACCURACY` represents the difference between the real time and the currently known time of correct validators.
The currently known Unix time of any validator is always somewhat different from real time.
`ACCURACY` is the largest such difference between each validator's time and real time taken as an absolute value.
This is not something a computer can determine on its own and must be specified as an estimate by the community running a Tendermint-based chain.
It is used in the new algorithm to [calculate a timeout for the propose step](https://github.com/tendermint/spec/blob/master/spec/consensus/proposer-based-timestamp/pbts-algorithm_001_draft.md#pbts-alg-startround0).
`ACCURACY` is assumed to be the same across all validators and therefore should be included as a consensus parameter.
The consensus parameters will be updated to include this `Timestamp` field as follows:
```diff
type ConsensusParams struct {
Block BlockParams `json:"block"`
Evidence EvidenceParams `json:"evidence"`
Validator ValidatorParams `json:"validator"`
Version VersionParams `json:"version"`
++ Timestamp TimestampParams `json:"timestamp"`
}
```
```go
type TimestampParams struct {
Accuracy time.Duration `json:"accuracy"`
Precision time.Duration `json:"precision"`
MsgDelay time.Duration `json:"msg_delay"`
}
```
### Changes to `Header`
The [Header](https://github.com/tendermint/tendermint/blob/a419f4df76fe4aed668a6c74696deabb9fe73211/types/block.go#L338) struct currently contains a timestamp.
This timestamp is set as the `BFTtime` derived from the block's `LastCommit.CommitSig` timestamps.
This timestamp will no longer be derived from the `LastCommit.CommitSig` timestamps and will instead be included directly into the block's `LastCommit`.
This timestamp will therefore be identical in both the `Header` and the `LastCommit`.
To clarify that the timestamp in the header corresponds to the `LastCommit`'s time, we will rename this timestamp field to `last_timestamp`.
`Header` will be updated as follows:
```diff
type Header struct {
// basic block info
Version version.Consensus `json:"version"`
ChainID string `json:"chain_id"`
Height int64 `json:"height"`
-- Time time.Time `json:"time"`
++ LastTimestamp time.Time `json:"last_timestamp"`
// prev block info
LastBlockID BlockID `json:"last_block_id"`
// hashes of block data
LastCommitHash tmbytes.HexBytes `json:"last_commit_hash"`
DataHash tmbytes.HexBytes `json:"data_hash"`
// hashes from the app output from the prev block
ValidatorsHash tmbytes.HexBytes `json:"validators_hash"`
NextValidatorsHash tmbytes.HexBytes `json:"next_validators_hash"`
ConsensusHash tmbytes.HexBytes `json:"consensus_hash"`
AppHash tmbytes.HexBytes `json:"app_hash"`
// root hash of all results from the txs from the previous block
LastResultsHash tmbytes.HexBytes `json:"last_results_hash"`
// consensus info
EvidenceHash tmbytes.HexBytes `json:"evidence_hash"`
ProposerAddress Address `json:"proposer_address"`
}
```
### Changes to the block proposal step
#### Proposer selects proposal timestamp
The proposal logic already [sets the Unix time known to the validator](https://github.com/tendermint/tendermint/blob/2abfe20114ee3bb3adfee817589033529a804e4d/types/proposal.go#L44) into the `Proposal` message.
This satisfies the proposer-based timestamp specification and does not need to change.
#### Proposer selects block timestamp
The proposal timestamp that was decided in height `H-1` will be stored in the `RoundState.LastCommit` field.
The proposer will select this timestamp to use as the block timestamp at height `H`.
#### Proposer waits
Block timestamps must be monotonically increasing.
In `BFTTime`, if a validator's clock was behind, the [validator added 1 millisecond to the previous block's time and used that in its vote messages](https://github.com/tendermint/tendermint/blob/e8013281281985e3ada7819f42502b09623d24a0/internal/consensus/state.go#L2246).
A goal of adding proposer-based timestamps is to enforce some degree of clock synchronization, so a mechanism that completely ignores the validator's Unix time no longer works.
Validator clocks will not be perfectly in sync.
Therefore, the proposer's current known Unix time may be less than the `LastCommit.Timestamp`.
If the proposer's current known Unix time is less than the `LastCommit.Timestamp`, the proposer will sleep until its known Unix time exceeds `LastCommit.Timestamp`.
This change will require amending the [defaultDecideProposal](https://github.com/tendermint/tendermint/blob/822893615564cb20b002dd5cf3b42b8d364cb7d9/internal/consensus/state.go#L1180) method.
This method should now block until the proposer's time is greater than `LastCommit.Timestamp`.
#### Changes to the propose step timeout
Currently, a validator waiting for a proposal will proceed past the propose step if the configured propose timeout is reached and no proposal is seen.
Proposer-based timestamps requires changing this timeout logic.
The proposer will now wait until its current known Unix time exceeds the `LastCommit.Timestamp` to propose a block.
The validators must now take this and some other factors into account when deciding when to timeout the propose step.
Specifically, the propose step timeout must also take into account potential inaccuracy in the validator's clock and in the clock of the proposer.
Additionally, there may be a delay communicating the proposal message from the proposer to the other validators.
Therefore, validators waiting for a proposal must wait until after the `LastCommit.Timestamp` before timing out.
To account for possible inaccuracy in its own clock, inaccuracy in the proposer's clock, and message delay, validators waiting for a proposal will wait until `LastCommit.Timestamp + 2*ACCURACY + MSGDELAY`.
The spec defines this as `waitingTime`.
The [propose step's timeout is set in enterPropose](https://github.com/tendermint/tendermint/blob/822893615564cb20b002dd5cf3b42b8d364cb7d9/internal/consensus/state.go#L1108) in `state.go`.
`enterPropose` will be changed to calculate waiting time using the new consensus parameters.
The timeout in `enterPropose` will then be set as the maximum of `waitingTime` and the [configured proposal step timeout](https://github.com/tendermint/tendermint/blob/dc7c212c41a360bfe6eb38a6dd8c709bbc39aae7/config/config.go#L1013).
### Changes to validation rules
The rules for validating that a proposal is valid will need slight modification to implement proposer-based timestamps.
Specifically, we will change the validation logic to ensure that the proposal timestamp is `timely` and we will modify the way the block timestamp is validated as well.
#### Proposal timestamp validation
Adding proposal timestamp validation is a reasonably straightforward change.
The current Unix time known to the proposer is already included in the [Proposal message](https://github.com/tendermint/tendermint/blob/dc7c212c41a360bfe6eb38a6dd8c709bbc39aae7/types/proposal.go#L31).
Once the proposal is received, the complete message is stored in the `RoundState.Proposal` field.
The precommit and prevote validation logic does not currently use this timestamp.
This validation logic will be updated to check that the proposal timestamp is within `PRECISION` of the current Unix time known to the validators.
If the timestamp is not within `PRECISION` of the current Unix time known to the validator, the proposal will not be considered valid.
The validator will also check that the proposal time is greater than the block timestamp from the previous height.
If no valid proposal is received by the proposal timeout, the validator will prevote nil.
This is identical to the current logic.
#### Block timestamp validation
The [validBlock function](https://github.com/tendermint/tendermint/blob/c3ae6f5b58e07b29c62bfdc5715b6bf8ae5ee951/state/validation.go#L14) currently [validates the proposed block timestamp in three ways](https://github.com/tendermint/tendermint/blob/c3ae6f5b58e07b29c62bfdc5715b6bf8ae5ee951/state/validation.go#L118).
First, the validation logic checks that this timestamp is greater than the previous block's timestamp.
Additionally, it validates that the block timestamp is correctly calculated as the weighted median of the timestamps in the [block's LastCommit](https://github.com/tendermint/tendermint/blob/e8013281281985e3ada7819f42502b09623d24a0/types/block.go#L48).
Finally, the logic also authenticates the timestamps in the `LastCommit`.
The cryptographic signature in each `CommitSig` is created by signing a hash of fields in the block with the validator's private key.
One of the items in this `signedBytes` hash is derived from the timestamp in the `CommitSig`.
To authenticate the `CommitSig` timestamp, the validator builds a hash of fields that includes the timestamp and checks this hash against the provided signature.
This takes place in the [VerifyCommit function](https://github.com/tendermint/tendermint/blob/e8013281281985e3ada7819f42502b09623d24a0/types/validation.go#L25).
The logic to validate that the block timestamp is greater than the previous block's timestamp also works for proposer-based timestamps and will not change.
`BFTTime` validation is no longer applicable and will be removed.
Validators will no longer check that the block timestamp is a weighted median of `LastCommit` timestamps.
This will mean removing the call to [MedianTime in the validateBlock function](https://github.com/tendermint/tendermint/blob/4db71da68e82d5cb732b235eeb2fd69d62114b45/state/validation.go#L117).
The `MedianTime` function can be completely removed.
The `LastCommit` timestamps may also be removed.
The `signedBytes` validation logic in `VerifyCommit` will be slightly altered.
The `CommitSig`s in the block's `LastCommit` will no longer each contain a timestamp.
The validation logic will instead include the `LastCommit.Timestamp` in the hash of fields for generating the `signedBytes`.
The cryptographic signatures included in the `CommitSig`s will then be checked against this `signedBytes` hash to authenticate the timestamp.
Specifically, the `VerifyCommit` function will be updated to use this new timestamp.
### Changes to the prevote step
Currently, a validator will prevote a proposal in one of three cases:
* Case 1: Validator has no locked block and receives a valid proposal.
* Case 2: Validator has a locked block and receives a valid proposal matching its locked block.
* Case 3: Validator has a locked block, sees a valid proposal not matching its locked block but sees +⅔ prevotes for the new proposals block.
The only change we will make to the prevote step is to what a validator considers a valid proposal as detailed above.
### Changes to the precommit step
The precommit step will not require much modification.
Its proposal validation rules will change in the same ways that validation will change in the prevote step.
### Changes to locking a block
When a validator receives a valid proposed block and +2/3 prevotes for that block, it stores the block as its locked block in the [RoundState.ValidBlock](https://github.com/tendermint/tendermint/blob/e8013281281985e3ada7819f42502b09623d24a0/internal/consensus/types/round_state.go#L85) field.
In each subsequent round it will prevote that block.
A validator will only change which block it has locked if it sees +2/3 prevotes for a different block.
This mechanism will remain largely unchanged.
The only difference is the addition of proposal timestamp validation.
A validator will prevote nil in a round if the proposal message it received is not `timely`.
Prevoting nil in this case will not cause a validator to unlock its locked block.
This difference is an incidental result of the changes to prevote validation.
It is included in this design for completeness and to clarify that no additional changes will be made to block locking.
### Remove voteTime Completely
[voteTime](https://github.com/tendermint/tendermint/blob/822893615564cb20b002dd5cf3b42b8d364cb7d9/internal/consensus/state.go#L2229) is a mechanism for calculating the next `BFTTime` given both the validator's current known Unix time and the previous block timestamp.
If the previous block timestamp is greater than the validator's current known Unix time, then voteTime returns a value one millisecond greater than the previous block timestamp.
This logic is used in multiple places and is no longer needed for proposer-based timestamps.
It should therefore be removed completely.
## Future Improvements
* Implement BLS signature aggregation.
By removing fields from the `Precommit` messages, we are able to aggregate signatures.
## Consequences
### Positive
* `<2/3` of validators can no longer influence block timestamps.
* Block timestamp will have stronger correspondence to real time.
* Improves the reliability of light client block verification.
* Enables BLS signature aggregation.
* Enables evidence handling to use time instead of height for evidence validity.
### Neutral
* Alters Tendermint's liveness properties.
Liveness now requires that all correct validators have synchronized clocks within a bound.
Liveness will now also require that validators' clocks move forward, which was not required under `BFTTime`.
### Negative
* May increase the length of the propose step if there is a large skew between the previous proposer's and the current proposer's local Unix time.
This skew will be bound by the `PRECISION` value, so it is unlikely to be too large.
* Current chains with block timestamps far in the future will either need to pause consensus until after the erroneous block timestamp or must maintain synchronized but very inaccurate clocks.
## References
* [PBTS Spec](https://github.com/tendermint/spec/tree/master/spec/consensus/proposer-based-timestamp)
* [BFTTime spec](https://github.com/tendermint/spec/blob/master/spec/consensus/bft-time.md)
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4 h1:4nGaVu0QrbjT/AK2PRLuQfQuh6DJve+pELhqTdAj3x0=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781 h1:DzZ89McO9/gWPsQXS/FVKAlG02ZjaQ6AlZRBimEYOd0=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1099,7 +1094,6 @@ golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190922100055-0a153f010e69/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1139,17 +1133,16 @@ golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210217105451-b926d437f341/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40 h1:JWgyZ1qgdTaF3N3oxC+MdTV7qvEEgHo3otj+HB5CM7Q=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c h1:F1jZWGFhYfh0Ci55sIpILtKKK8p3i2/krTr0H1rg74I=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 h1:v+OssWQX+hTHEmOBgwxdZxK4zHq3yOs8F9J7mk0PY8E=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
@@ -1159,9 +1152,8 @@ golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5 h1:i6eZZ+zk0SOf0xgBpEpPD18qWcJda6q1sxt3S0kzyUQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -1247,6 +1239,7 @@ golang.org/x/tools v0.0.0-20200831203904-5a2aa26beb65/go.mod h1:Cj7w3i3Rnn0Xh82u
golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE=
golang.org/x/tools v0.0.0-20201001104356-43ebab892c4c/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
golang.org/x/tools v0.0.0-20201002184944-ecd9fd270d5d/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
golang.org/x/tools v0.0.0-20201011145850-ed2f50202694/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
golang.org/x/tools v0.0.0-20201022035929-9cf592e881e9/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201023174141-c8cfbd0f21e6/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201028025901-8cd080b735b3/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
@@ -1263,10 +1256,8 @@ golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.3 h1:L69ShwSZEyCsLKoAxDKeMvLDZkumEe8gXUZAjab0tX8=
golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.5 h1:ouewzE6p+/VEB31YYnTbEJdi8pFqKp4P4n85vwo3DHA=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -1376,8 +1367,8 @@ google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAG
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.40.0 h1:AGJ0Ih4mHjSeibYkFGh1dD9KJ/eOtZ93I6hoHhukQ5Q=
google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc v1.39.0 h1:Klz8I9kdtkIN6EpHHUOMLCYhTn/2WAe5a0s1hcBkdTI=
google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -1436,8 +1427,8 @@ honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWh
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.2.1 h1:/EPr//+UMMXwMTkXvCCoaJDq8cpjMO80Ou+L4PDo2mY=
honnef.co/go/tools v0.2.1/go.mod h1:lPVVZ2BS5TfnjLyizF7o7hv7j9/L+8cZY2hLyjP9cGY=
honnef.co/go/tools v0.2.0 h1:ws8AfbgTX3oIczLPNPCu5166oBg9ST2vNs0rcht+mDE=
honnef.co/go/tools v0.2.0/go.mod h1:lPVVZ2BS5TfnjLyizF7o7hv7j9/L+8cZY2hLyjP9cGY=
mvdan.cc/gofumpt v0.1.1 h1:bi/1aS/5W00E2ny5q65w9SnKpWEF/UIOqDYBILpo9rA=
mvdan.cc/gofumpt v0.1.1/go.mod h1:yXG1r1WqZVKWbVRtBWKWX9+CxGYfA51nSomhM0woR48=
mvdan.cc/interfacer v0.0.0-20180901003855-c20040233aed h1:WX1yoOaKQfddO/mLzdV4wptyWgoH/6hwLs7QHTixo0I=
@@ -1446,8 +1437,6 @@ mvdan.cc/lint v0.0.0-20170908181259-adc824a0674b h1:DxJ5nJdkhDlLok9K6qO+5290kphD
mvdan.cc/lint v0.0.0-20170908181259-adc824a0674b/go.mod h1:2odslEg/xrtNQqCYg2/jCoyKnw3vv5biOc3JnIcYfL4=
mvdan.cc/unparam v0.0.0-20210104141923-aac4ce9116a7 h1:HT3e4Krq+IE44tiN36RvVEb6tvqeIdtsVSsxmNPqlFU=
mvdan.cc/unparam v0.0.0-20210104141923-aac4ce9116a7/go.mod h1:hBpJkZE8H/sb+VRFvw2+rBpHNsTBcvSpk61hr8mzXZE=
pgregory.net/rapid v0.4.7 h1:MTNRktPuv5FNqOO151TM9mDTa+XHcX6ypYeISDVD14g=
pgregory.net/rapid v0.4.7/go.mod h1:UYpPVyjFHzYBGHIxLFoupi8vwk6rXNzRY9OMvVxFIOU=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=


@@ -29,10 +29,10 @@ var (
// TODO: Remove once p2p refactor is complete.
// ref: https://github.com/tendermint/tendermint/issues/5670
ChannelShims = map[p2p.ChannelID]*p2p.ChannelDescriptorShim{
BlockSyncChannel: {
BlockchainChannel: {
MsgType: new(bcproto.Message),
Descriptor: &p2p.ChannelDescriptor{
ID: byte(BlockSyncChannel),
ID: byte(BlockchainChannel),
Priority: 5,
SendQueueCapacity: 1000,
RecvBufferCapacity: 1024,
@@ -44,8 +44,8 @@ var (
)
const (
// BlockSyncChannel is a channel for blocks and status updates
BlockSyncChannel = p2p.ChannelID(0x40)
// BlockchainChannel is a channel for blocks and status updates
BlockchainChannel = p2p.ChannelID(0x40)
trySyncIntervalMS = 10
@@ -60,7 +60,7 @@ const (
)
type consensusReactor interface {
// For when we switch from block sync reactor to the consensus
// For when we switch from blockchain reactor and block sync to the consensus
// machine.
SwitchToConsensus(state sm.State, skipWAL bool)
}
@@ -87,17 +87,17 @@ type Reactor struct {
consReactor consensusReactor
blockSync *tmSync.AtomicBool
blockSyncCh *p2p.Channel
// blockSyncOutBridgeCh defines a channel that acts as a bridge between sending Envelope
// messages that the reactor will consume in processBlockSyncCh and receiving messages
blockchainCh *p2p.Channel
// blockchainOutBridgeCh defines a channel that acts as a bridge between sending Envelope
// messages that the reactor will consume in processBlockchainCh and receiving messages
// from the peer updates channel and other goroutines. We do this instead of directly
// sending on blockSyncCh.Out to avoid race conditions in the case where other goroutines
// send Envelopes directly to the blockSyncCh.Out channel, since processBlockSyncCh
// may close the blockSyncCh.Out channel at the same time that other goroutines send to
// blockSyncCh.Out.
blockSyncOutBridgeCh chan p2p.Envelope
peerUpdates *p2p.PeerUpdates
closeCh chan struct{}
// sending on blockchainCh.Out to avoid race conditions in the case where other goroutines
// send Envelopes directly to the blockchainCh.Out channel, since processBlockchainCh
// may close the blockchainCh.Out channel at the same time that other goroutines send to
// blockchainCh.Out.
blockchainOutBridgeCh chan p2p.Envelope
peerUpdates *p2p.PeerUpdates
closeCh chan struct{}
requestsCh <-chan BlockRequest
errorsCh <-chan peerError
@@ -119,7 +119,7 @@ func NewReactor(
blockExec *sm.BlockExecutor,
store *store.BlockStore,
consReactor consensusReactor,
blockSyncCh *p2p.Channel,
blockchainCh *p2p.Channel,
peerUpdates *p2p.PeerUpdates,
blockSync bool,
metrics *cons.Metrics,
@@ -137,23 +137,23 @@ func NewReactor(
errorsCh := make(chan peerError, maxPeerErrBuffer) // NOTE: The capacity should be larger than the peer count.
r := &Reactor{
initialState: state,
blockExec: blockExec,
store: store,
pool: NewBlockPool(startHeight, requestsCh, errorsCh),
consReactor: consReactor,
blockSync: tmSync.NewBool(blockSync),
requestsCh: requestsCh,
errorsCh: errorsCh,
blockSyncCh: blockSyncCh,
blockSyncOutBridgeCh: make(chan p2p.Envelope),
peerUpdates: peerUpdates,
closeCh: make(chan struct{}),
metrics: metrics,
syncStartTime: time.Time{},
initialState: state,
blockExec: blockExec,
store: store,
pool: NewBlockPool(startHeight, requestsCh, errorsCh),
consReactor: consReactor,
blockSync: tmSync.NewBool(blockSync),
requestsCh: requestsCh,
errorsCh: errorsCh,
blockchainCh: blockchainCh,
blockchainOutBridgeCh: make(chan p2p.Envelope),
peerUpdates: peerUpdates,
closeCh: make(chan struct{}),
metrics: metrics,
syncStartTime: time.Time{},
}
r.BaseService = *service.NewBaseService(logger, "BlockSync", r)
r.BaseService = *service.NewBaseService(logger, "Blockchain", r)
return r, nil
}
@@ -174,7 +174,7 @@ func (r *Reactor) OnStart() error {
go r.poolRoutine(false)
}
go r.processBlockSyncCh()
go r.processBlockchainCh()
go r.processPeerUpdates()
return nil
@@ -199,7 +199,7 @@ func (r *Reactor) OnStop() {
// Wait for all p2p Channels to be closed before returning. This ensures we
// can easily reason about synchronization of all p2p Channels and ensure no
// panics will occur.
<-r.blockSyncCh.Done()
<-r.blockchainCh.Done()
<-r.peerUpdates.Done()
}
@@ -214,7 +214,7 @@ func (r *Reactor) respondToPeer(msg *bcproto.BlockRequest, peerID types.NodeID)
return
}
r.blockSyncCh.Out <- p2p.Envelope{
r.blockchainCh.Out <- p2p.Envelope{
To: peerID,
Message: &bcproto.BlockResponse{Block: blockProto},
}
@@ -223,16 +223,16 @@ func (r *Reactor) respondToPeer(msg *bcproto.BlockRequest, peerID types.NodeID)
}
r.Logger.Info("peer requesting a block we do not have", "peer", peerID, "height", msg.Height)
r.blockSyncCh.Out <- p2p.Envelope{
r.blockchainCh.Out <- p2p.Envelope{
To: peerID,
Message: &bcproto.NoBlockResponse{Height: msg.Height},
}
}
// handleBlockSyncMessage handles envelopes sent from peers on the
// BlockSyncChannel. It returns an error only if the Envelope.Message is unknown
// handleBlockchainMessage handles envelopes sent from peers on the
// BlockchainChannel. It returns an error only if the Envelope.Message is unknown
// for this channel. This should never be called outside of handleMessage.
func (r *Reactor) handleBlockSyncMessage(envelope p2p.Envelope) error {
func (r *Reactor) handleBlockchainMessage(envelope p2p.Envelope) error {
logger := r.Logger.With("peer", envelope.From)
switch msg := envelope.Message.(type) {
@@ -249,7 +249,7 @@ func (r *Reactor) handleBlockSyncMessage(envelope p2p.Envelope) error {
r.pool.AddBlock(envelope.From, block, block.Size())
case *bcproto.StatusRequest:
r.blockSyncCh.Out <- p2p.Envelope{
r.blockchainCh.Out <- p2p.Envelope{
To: envelope.From,
Message: &bcproto.StatusResponse{
Height: r.store.Height(),
@@ -288,8 +288,8 @@ func (r *Reactor) handleMessage(chID p2p.ChannelID, envelope p2p.Envelope) (err
r.Logger.Debug("received message", "message", envelope.Message, "peer", envelope.From)
switch chID {
case BlockSyncChannel:
err = r.handleBlockSyncMessage(envelope)
case BlockchainChannel:
err = r.handleBlockchainMessage(envelope)
default:
err = fmt.Errorf("unknown channel ID (%d) for envelope (%v)", chID, envelope)
@@ -298,30 +298,30 @@ func (r *Reactor) handleMessage(chID p2p.ChannelID, envelope p2p.Envelope) (err
return err
}
// processBlockSyncCh initiates a blocking process where we listen for and handle
// envelopes on the BlockSyncChannel and blockSyncOutBridgeCh. Any error encountered during
// message execution will result in a PeerError being sent on the BlockSyncChannel.
// processBlockchainCh initiates a blocking process where we listen for and handle
// envelopes on the BlockchainChannel and blockchainOutBridgeCh. Any error encountered during
// message execution will result in a PeerError being sent on the BlockchainChannel.
// When the reactor is stopped, we will catch the signal and close the p2p Channel
// gracefully.
func (r *Reactor) processBlockSyncCh() {
defer r.blockSyncCh.Close()
func (r *Reactor) processBlockchainCh() {
defer r.blockchainCh.Close()
for {
select {
case envelope := <-r.blockSyncCh.In:
if err := r.handleMessage(r.blockSyncCh.ID, envelope); err != nil {
r.Logger.Error("failed to process message", "ch_id", r.blockSyncCh.ID, "envelope", envelope, "err", err)
r.blockSyncCh.Error <- p2p.PeerError{
case envelope := <-r.blockchainCh.In:
if err := r.handleMessage(r.blockchainCh.ID, envelope); err != nil {
r.Logger.Error("failed to process message", "ch_id", r.blockchainCh.ID, "envelope", envelope, "err", err)
r.blockchainCh.Error <- p2p.PeerError{
NodeID: envelope.From,
Err: err,
}
}
case envelope := <-r.blockSyncOutBridgeCh:
r.blockSyncCh.Out <- envelope
case envelope := <-r.blockchainOutBridgeCh:
r.blockchainCh.Out <- envelope
case <-r.closeCh:
r.Logger.Debug("stopped listening on block sync channel; closing...")
r.Logger.Debug("stopped listening on blockchain channel; closing...")
return
}
@@ -340,7 +340,7 @@ func (r *Reactor) processPeerUpdate(peerUpdate p2p.PeerUpdate) {
switch peerUpdate.Status {
case p2p.PeerStatusUp:
// send a status update the newly added peer
r.blockSyncOutBridgeCh <- p2p.Envelope{
r.blockchainOutBridgeCh <- p2p.Envelope{
To: peerUpdate.NodeID,
Message: &bcproto.StatusResponse{
Base: r.store.Base(),
@@ -406,13 +406,13 @@ func (r *Reactor) requestRoutine() {
return
case request := <-r.requestsCh:
r.blockSyncOutBridgeCh <- p2p.Envelope{
r.blockchainOutBridgeCh <- p2p.Envelope{
To: request.PeerID,
Message: &bcproto.BlockRequest{Height: request.Height},
}
case pErr := <-r.errorsCh:
r.blockSyncCh.Error <- p2p.PeerError{
r.blockchainCh.Error <- p2p.PeerError{
NodeID: pErr.peerID,
Err: pErr.err,
}
@@ -423,7 +423,7 @@ func (r *Reactor) requestRoutine() {
go func() {
defer r.poolWG.Done()
r.blockSyncOutBridgeCh <- p2p.Envelope{
r.blockchainOutBridgeCh <- p2p.Envelope{
Broadcast: true,
Message: &bcproto.StatusRequest{},
}
@@ -554,14 +554,14 @@ FOR_LOOP:
// NOTE: We've already removed the peer's request, but we still need
// to clean up the rest.
peerID := r.pool.RedoRequest(first.Height)
r.blockSyncCh.Error <- p2p.PeerError{
r.blockchainCh.Error <- p2p.PeerError{
NodeID: peerID,
Err: err,
}
peerID2 := r.pool.RedoRequest(second.Height)
if peerID2 != peerID {
r.blockSyncCh.Error <- p2p.PeerError{
r.blockchainCh.Error <- p2p.PeerError{
NodeID: peerID2,
Err: err,
}


@@ -32,9 +32,9 @@ type reactorTestSuite struct {
reactors map[types.NodeID]*Reactor
app map[types.NodeID]proxy.AppConns
blockSyncChannels map[types.NodeID]*p2p.Channel
peerChans map[types.NodeID]chan p2p.PeerUpdate
peerUpdates map[types.NodeID]*p2p.PeerUpdates
blockchainChannels map[types.NodeID]*p2p.Channel
peerChans map[types.NodeID]chan p2p.PeerUpdate
peerUpdates map[types.NodeID]*p2p.PeerUpdates
blockSync bool
}
@@ -53,19 +53,19 @@ func setup(
"must specify at least one block height (nodes)")
rts := &reactorTestSuite{
logger: log.TestingLogger().With("module", "block_sync", "testCase", t.Name()),
network: p2ptest.MakeNetwork(t, p2ptest.NetworkOptions{NumNodes: numNodes}),
nodes: make([]types.NodeID, 0, numNodes),
reactors: make(map[types.NodeID]*Reactor, numNodes),
app: make(map[types.NodeID]proxy.AppConns, numNodes),
blockSyncChannels: make(map[types.NodeID]*p2p.Channel, numNodes),
peerChans: make(map[types.NodeID]chan p2p.PeerUpdate, numNodes),
peerUpdates: make(map[types.NodeID]*p2p.PeerUpdates, numNodes),
blockSync: true,
logger: log.TestingLogger().With("module", "blockchain", "testCase", t.Name()),
network: p2ptest.MakeNetwork(t, p2ptest.NetworkOptions{NumNodes: numNodes}),
nodes: make([]types.NodeID, 0, numNodes),
reactors: make(map[types.NodeID]*Reactor, numNodes),
app: make(map[types.NodeID]proxy.AppConns, numNodes),
blockchainChannels: make(map[types.NodeID]*p2p.Channel, numNodes),
peerChans: make(map[types.NodeID]chan p2p.PeerUpdate, numNodes),
peerUpdates: make(map[types.NodeID]*p2p.PeerUpdates, numNodes),
blockSync: true,
}
chDesc := p2p.ChannelDescriptor{ID: byte(BlockSyncChannel)}
rts.blockSyncChannels = rts.network.MakeChannelsNoCleanup(t, chDesc, new(bcproto.Message), int(chBuf))
chDesc := p2p.ChannelDescriptor{ID: byte(BlockchainChannel)}
rts.blockchainChannels = rts.network.MakeChannelsNoCleanup(t, chDesc, new(bcproto.Message), int(chBuf))
i := 0
for nodeID := range rts.network.Nodes {
@@ -161,7 +161,7 @@ func (rts *reactorTestSuite) addNode(t *testing.T,
blockExec,
blockStore,
nil,
rts.blockSyncChannels[nodeID],
rts.blockchainChannels[nodeID],
rts.peerUpdates[nodeID],
rts.blockSync,
cons.NopMetrics())
@@ -181,7 +181,7 @@ func (rts *reactorTestSuite) start(t *testing.T) {
}
func TestReactor_AbruptDisconnect(t *testing.T) {
config := cfg.ResetTestRoot("block_sync_reactor_test")
config := cfg.ResetTestRoot("blockchain_reactor_test")
defer os.RemoveAll(config.RootDir)
genDoc, privVals := factory.RandGenesisDoc(config, 1, false, 30)
@@ -216,7 +216,7 @@ func TestReactor_AbruptDisconnect(t *testing.T) {
}
func TestReactor_SyncTime(t *testing.T) {
config := cfg.ResetTestRoot("block_sync_reactor_test")
config := cfg.ResetTestRoot("blockchain_reactor_test")
defer os.RemoveAll(config.RootDir)
genDoc, privVals := factory.RandGenesisDoc(config, 1, false, 30)
@@ -239,7 +239,7 @@ func TestReactor_SyncTime(t *testing.T) {
}
func TestReactor_NoBlockResponse(t *testing.T) {
config := cfg.ResetTestRoot("block_sync_reactor_test")
config := cfg.ResetTestRoot("blockchain_reactor_test")
defer os.RemoveAll(config.RootDir)
genDoc, privVals := factory.RandGenesisDoc(config, 1, false, 30)
@@ -286,7 +286,7 @@ func TestReactor_BadBlockStopsPeer(t *testing.T) {
// See: https://github.com/tendermint/tendermint/issues/6005
t.SkipNow()
config := cfg.ResetTestRoot("block_sync_reactor_test")
config := cfg.ResetTestRoot("blockchain_reactor_test")
defer os.RemoveAll(config.RootDir)
maxBlockHeight := int64(48)


@@ -1,72 +0,0 @@
package clist_test
import (
"testing"
"github.com/stretchr/testify/require"
"pgregory.net/rapid"
"github.com/tendermint/tendermint/internal/libs/clist"
)
func TestCListProperties(t *testing.T) {
rapid.Check(t, rapid.Run(&clistModel{}))
}
// clistModel is used by the rapid state machine testing framework.
// clistModel contains both the clist that is being tested and a slice of *clist.CElements
// that will be used to model the expected clist behavior.
type clistModel struct {
clist *clist.CList
model []*clist.CElement
}
// Init is a method used by the rapid state machine testing library.
// Init is called when the test starts to initialize the data that will be used
// in the state machine test.
func (m *clistModel) Init(t *rapid.T) {
m.clist = clist.New()
m.model = []*clist.CElement{}
}
// PushBack defines an action that will be randomly selected by the rapid state
// machine testing library. Every call to PushBack calls PushBack on the clist and
// performs a similar action on the model data.
func (m *clistModel) PushBack(t *rapid.T) {
value := rapid.String().Draw(t, "value").(string)
el := m.clist.PushBack(value)
m.model = append(m.model, el)
}
// Remove defines an action that will be randomly selected by the rapid state
// machine testing library. Every call to Remove selects an element from the model
// and calls Remove on the CList with that element. The same element is removed from
// the model to keep the objects in sync.
func (m *clistModel) Remove(t *rapid.T) {
if len(m.model) == 0 {
return
}
ix := rapid.IntRange(0, len(m.model)-1).Draw(t, "index").(int)
value := m.model[ix]
m.model = append(m.model[:ix], m.model[ix+1:]...)
m.clist.Remove(value)
}
// Check is a method required by the rapid state machine testing library.
// Check is run after each action and is used to verify that the state of the object,
// in this case a clist.CList matches the state of the objec.
func (m *clistModel) Check(t *rapid.T) {
require.Equal(t, len(m.model), m.clist.Len())
if len(m.model) == 0 {
return
}
require.Equal(t, m.model[0], m.clist.Front())
require.Equal(t, m.model[len(m.model)-1], m.clist.Back())
iter := m.clist.Front()
for _, val := range m.model {
require.Equal(t, val, iter)
iter = iter.Next()
}
}


@@ -5,7 +5,7 @@ import (
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/tmhash"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/pkg/meta"
"github.com/tendermint/tendermint/version"
)
@@ -32,14 +32,14 @@ func RandomHash() []byte {
return crypto.CRandBytes(tmhash.Size)
}
func MakeBlockID() types.BlockID {
func MakeBlockID() meta.BlockID {
return MakeBlockIDWithHash(RandomHash())
}
func MakeBlockIDWithHash(hash []byte) types.BlockID {
return types.BlockID{
func MakeBlockIDWithHash(hash []byte) meta.BlockID {
return meta.BlockID{
Hash: hash,
PartSetHeader: types.PartSetHeader{
PartSetHeader: meta.PartSetHeader{
Total: 100,
Hash: RandomHash(),
},
@@ -48,7 +48,7 @@ func MakeBlockIDWithHash(hash []byte) types.BlockID {
// MakeHeader fills the rest of the contents of the header such that it passes
// ValidateBasic.
func MakeHeader(h *types.Header) (*types.Header, error) {
func MakeHeader(h *meta.Header) (*meta.Header, error) {
if h.Version.Block == 0 {
h.Version.Block = version.BlockProtocol
}
@@ -92,8 +92,8 @@ func MakeHeader(h *types.Header) (*types.Header, error) {
return h, h.ValidateBasic()
}
func MakeRandomHeader() *types.Header {
h, err := MakeHeader(&types.Header{})
func MakeRandomHeader() *meta.Header {
h, err := MakeHeader(&meta.Header{})
if err != nil {
panic(err)
}


@@ -5,12 +5,24 @@ import (
"fmt"
"time"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/types"
)
func MakeCommit(blockID types.BlockID, height int64, round int32,
voteSet *types.VoteSet, validators []types.PrivValidator, now time.Time) (*types.Commit, error) {
func MakeRandomCommit(time time.Time) *meta.Commit {
lastID := MakeBlockID()
h := int64(3)
voteSet, _, vals := RandVoteSet(h-1, 1, tmproto.PrecommitType, 10, 1)
commit, err := MakeCommit(lastID, h-1, 1, voteSet, vals, time)
if err != nil {
panic(err)
}
return commit
}
func MakeCommit(blockID meta.BlockID, height int64, round int32,
voteSet *consensus.VoteSet, validators []consensus.PrivValidator, now time.Time) (*meta.Commit, error) {
// all sign
for i := 0; i < len(validators); i++ {
@@ -18,7 +30,7 @@ func MakeCommit(blockID types.BlockID, height int64, round int32,
if err != nil {
return nil, fmt.Errorf("can't get pubkey: %w", err)
}
vote := &types.Vote{
vote := &consensus.Vote{
ValidatorAddress: pubKey.Address(),
ValidatorIndex: int32(i),
Height: height,
@@ -28,7 +40,7 @@ func MakeCommit(blockID types.BlockID, height int64, round int32,
Timestamp: now,
}
_, err = signAddVote(validators[i], vote, voteSet)
_, err = SignAddVote(validators[i], vote, voteSet)
if err != nil {
return nil, err
}
@@ -37,7 +49,7 @@ func MakeCommit(blockID types.BlockID, height int64, round int32,
return voteSet.MakeCommit(), nil
}
func signAddVote(privVal types.PrivValidator, vote *types.Vote, voteSet *types.VoteSet) (signed bool, err error) {
func SignAddVote(privVal consensus.PrivValidator, vote *consensus.Vote, voteSet *consensus.VoteSet) (signed bool, err error) {
v := vote.ToProto()
err = privVal.SignVote(context.Background(), voteSet.ChainID(), v)
if err != nil {


@@ -5,10 +5,10 @@ import (
"github.com/stretchr/testify/assert"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/pkg/meta"
)
func TestMakeHeader(t *testing.T) {
_, err := MakeHeader(&types.Header{})
_, err := MakeHeader(&meta.Header{})
assert.NoError(t, err)
}


@@ -5,28 +5,28 @@ import (
cfg "github.com/tendermint/tendermint/config"
tmtime "github.com/tendermint/tendermint/libs/time"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/pkg/consensus"
)
func RandGenesisDoc(
config *cfg.Config,
numValidators int,
randPower bool,
minPower int64) (*types.GenesisDoc, []types.PrivValidator) {
minPower int64) (*consensus.GenesisDoc, []consensus.PrivValidator) {
validators := make([]types.GenesisValidator, numValidators)
privValidators := make([]types.PrivValidator, numValidators)
validators := make([]consensus.GenesisValidator, numValidators)
privValidators := make([]consensus.PrivValidator, numValidators)
for i := 0; i < numValidators; i++ {
val, privVal := RandValidator(randPower, minPower)
validators[i] = types.GenesisValidator{
validators[i] = consensus.GenesisValidator{
PubKey: val.PubKey,
Power: val.VotingPower,
}
privValidators[i] = privVal
}
sort.Sort(types.PrivValidatorsByAddress(privValidators))
sort.Sort(consensus.PrivValidatorsByAddress(privValidators))
return &types.GenesisDoc{
return &consensus.GenesisDoc{
GenesisTime: tmtime.Now(),
InitialHeight: 1,
ChainID: config.ChainID(),


@@ -1,16 +1,16 @@
package factory
import "github.com/tendermint/tendermint/types"
import "github.com/tendermint/tendermint/pkg/mempool"
// MakeTxs is a helper function that generates mock transactions given the block
// height and the number of transactions.
func MakeTxs(height int64, num int) (txs []types.Tx) {
func MakeTxs(height int64, num int) (txs []mempool.Tx) {
for i := 0; i < num; i++ {
txs = append(txs, types.Tx([]byte{byte(height), byte(i)}))
txs = append(txs, mempool.Tx([]byte{byte(height), byte(i)}))
}
return txs
}
func MakeTenTxs(height int64) (txs []types.Tx) {
func MakeTenTxs(height int64) (txs []mempool.Tx) {
return MakeTxs(height, 10)
}

View File

@@ -6,11 +6,11 @@ import (
"math/rand"
"sort"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/pkg/consensus"
)
func RandValidator(randPower bool, minPower int64) (*types.Validator, types.PrivValidator) {
privVal := types.NewMockPV()
func RandValidator(randPower bool, minPower int64) (*consensus.Validator, consensus.PrivValidator) {
privVal := consensus.NewMockPV()
votePower := minPower
if randPower {
// nolint:gosec // G404: Use of weak random number generator
@@ -20,14 +20,14 @@ func RandValidator(randPower bool, minPower int64) (*types.Validator, types.Priv
if err != nil {
panic(fmt.Errorf("could not retrieve pubkey %w", err))
}
val := types.NewValidator(pubKey, votePower)
val := consensus.NewValidator(pubKey, votePower)
return val, privVal
}
func RandValidatorSet(numValidators int, votingPower int64) (*types.ValidatorSet, []types.PrivValidator) {
func RandValidatorSet(numValidators int, votingPower int64) (*consensus.ValidatorSet, []consensus.PrivValidator) {
var (
valz = make([]*types.Validator, numValidators)
privValidators = make([]types.PrivValidator, numValidators)
valz = make([]*consensus.Validator, numValidators)
privValidators = make([]consensus.PrivValidator, numValidators)
)
for i := 0; i < numValidators; i++ {
@@ -36,7 +36,7 @@ func RandValidatorSet(numValidators int, votingPower int64) (*types.ValidatorSet
privValidators[i] = privValidator
}
sort.Sort(types.PrivValidatorsByAddress(privValidators))
sort.Sort(consensus.PrivValidatorsByAddress(privValidators))
return types.NewValidatorSet(valz), privValidators
return consensus.NewValidatorSet(valz), privValidators
}

View File

@@ -2,27 +2,32 @@ package factory
import (
"context"
"fmt"
"sort"
"time"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/types"
)
func MakeVote(
val types.PrivValidator,
val consensus.PrivValidator,
chainID string,
valIndex int32,
height int64,
round int32,
step int,
blockID types.BlockID,
blockID meta.BlockID,
time time.Time,
) (*types.Vote, error) {
) (*consensus.Vote, error) {
pubKey, err := val.GetPubKey(context.Background())
if err != nil {
return nil, err
}
v := &types.Vote{
v := &consensus.Vote{
ValidatorAddress: pubKey.Address(),
ValidatorIndex: valIndex,
Height: height,
@@ -40,3 +45,71 @@ func MakeVote(
v.Signature = vpb.Signature
return v, nil
}
func RandVoteSet(
height int64,
round int32,
signedMsgType tmproto.SignedMsgType,
numValidators int,
votingPower int64,
) (*consensus.VoteSet, *consensus.ValidatorSet, []consensus.PrivValidator) {
valSet, privValidators := RandValidatorPrivValSet(numValidators, votingPower)
return consensus.NewVoteSet("test_chain_id", height, round, signedMsgType, valSet), valSet, privValidators
}
func RandValidatorPrivValSet(numValidators int, votingPower int64) (*consensus.ValidatorSet, []consensus.PrivValidator) {
var (
valz = make([]*consensus.Validator, numValidators)
privValidators = make([]consensus.PrivValidator, numValidators)
)
for i := 0; i < numValidators; i++ {
val, privValidator := RandValidator(false, votingPower)
valz[i] = val
privValidators[i] = privValidator
}
sort.Sort(consensus.PrivValidatorsByAddress(privValidators))
return consensus.NewValidatorSet(valz), privValidators
}
func DeterministicVoteSet(
height int64,
round int32,
signedMsgType tmproto.SignedMsgType,
votingPower int64,
) (*consensus.VoteSet, *consensus.ValidatorSet, []consensus.PrivValidator) {
valSet, privValidators := DeterministicValidatorSet()
return consensus.NewVoteSet("test_chain_id", height, round, signedMsgType, valSet), valSet, privValidators
}
func DeterministicValidatorSet() (*consensus.ValidatorSet, []consensus.PrivValidator) {
var (
valz = make([]*consensus.Validator, 10)
privValidators = make([]consensus.PrivValidator, 10)
)
for i := 0; i < 10; i++ {
// val, privValidator := DeterministicValidator(ed25519.PrivKey([]byte(deterministicKeys[i])))
val, privValidator := DeterministicValidator(ed25519.GenPrivKeyFromSecret([]byte(fmt.Sprintf("key: %x", i))))
valz[i] = val
privValidators[i] = privValidator
}
sort.Sort(consensus.PrivValidatorsByAddress(privValidators))
return consensus.NewValidatorSet(valz), privValidators
}
func DeterministicValidator(key crypto.PrivKey) (*consensus.Validator, consensus.PrivValidator) {
privVal := consensus.NewMockPV()
privVal.PrivKey = key
var votePower int64 = 50
pubKey, err := privVal.GetPubKey(context.TODO())
if err != nil {
panic(fmt.Errorf("could not retrieve pubkey %w", err))
}
val := consensus.NewValidator(pubKey, votePower)
return val, privVal
}

View File

@@ -27,22 +27,15 @@ func (bz *HexBytes) Unmarshal(data []byte) error {
return nil
}
// MarshalJSON implements the json.Marshaler interface. The encoding is a JSON
// quoted string of hexadecimal digits.
// MarshalJSON implements the json.Marshaler interface. The hex bytes are
// encoded as a quoted, uppercase hexadecimal string.
func (bz HexBytes) MarshalJSON() ([]byte, error) {
size := hex.EncodedLen(len(bz)) + 2 // +2 for quotation marks
buf := make([]byte, size)
hex.Encode(buf[1:], []byte(bz))
buf[0] = '"'
buf[size-1] = '"'
// Ensure letter digits are capitalized.
for i := 1; i < size-1; i++ {
if buf[i] >= 'a' && buf[i] <= 'f' {
buf[i] = 'A' + (buf[i] - 'a')
}
}
return buf, nil
s := strings.ToUpper(hex.EncodeToString(bz))
jbz := make([]byte, len(s)+2)
jbz[0] = '"'
copy(jbz[1:], s)
jbz[len(jbz)-1] = '"'
return jbz, nil
}
// UnmarshalJSON implements the json.Unmarshaler interface.

View File

@@ -37,7 +37,6 @@ func TestJSONMarshal(t *testing.T) {
{[]byte(``), `{"B1":"","B2":""}`},
{[]byte(`a`), `{"B1":"YQ==","B2":"61"}`},
{[]byte(`abc`), `{"B1":"YWJj","B2":"616263"}`},
{[]byte("\x1a\x2b\x3c"), `{"B1":"Gis8","B2":"1A2B3C"}`},
}
for i, tc := range cases {

View File

@@ -9,6 +9,78 @@ var ErrOverflowInt32 = errors.New("int32 overflow")
var ErrOverflowUint8 = errors.New("uint8 overflow")
var ErrOverflowInt8 = errors.New("int8 overflow")
// SafeAdd adds two int64 numbers. If there is an overflow,
// the function will return -1, true
func SafeAdd(a, b int64) (int64, bool) {
if b > 0 && a > math.MaxInt64-b {
return -1, true
} else if b < 0 && a < math.MinInt64-b {
return -1, true
}
return a + b, false
}
// SafeSub subtracts two int64 numbers. If there is an overflow,
// the function will return -1, true
func SafeSub(a, b int64) (int64, bool) {
if b > 0 && a < math.MinInt64+b {
return -1, true
} else if b < 0 && a > math.MaxInt64+b {
return -1, true
}
return a - b, false
}
// SafeAddClip performs SafeAdd, but on overflow it clips the
// result to math.MinInt64 or math.MaxInt64 as appropriate
func SafeAddClip(a, b int64) int64 {
c, overflow := SafeAdd(a, b)
if overflow {
if b < 0 {
return math.MinInt64
}
return math.MaxInt64
}
return c
}
// SafeSubClip performs SafeSub but will clip the result to either
// the min or max int64 number
func SafeSubClip(a, b int64) int64 {
c, overflow := SafeSub(a, b)
if overflow {
if b > 0 {
return math.MinInt64
}
return math.MaxInt64
}
return c
}
// SafeMul multiplies two int64 numbers. It returns
// true if the result overflows.
func SafeMul(a, b int64) (int64, bool) {
if a == 0 || b == 0 {
return 0, false
}
absOfB := b
if b < 0 {
absOfB = -b
}
absOfA := a
if a < 0 {
absOfA = -a
}
if absOfA > math.MaxInt64/absOfB {
return 0, true
}
return a * b, false
}
// SafeAddInt32 adds two int32 integers
// If there is an overflow this will panic
func SafeAddInt32(a, b int32) int32 {

View File

@@ -0,0 +1,60 @@
package math_test
import (
"math"
"testing"
"testing/quick"
"github.com/stretchr/testify/assert"
tmmath "github.com/tendermint/tendermint/libs/math"
)
func TestSafeAdd(t *testing.T) {
f := func(a, b int64) bool {
c, overflow := tmmath.SafeAdd(a, b)
return overflow || (!overflow && c == a+b)
}
if err := quick.Check(f, nil); err != nil {
t.Error(err)
}
}
func TestSafeAddClip(t *testing.T) {
assert.EqualValues(t, math.MaxInt64, tmmath.SafeAddClip(math.MaxInt64, 10))
assert.EqualValues(t, math.MaxInt64, tmmath.SafeAddClip(math.MaxInt64, math.MaxInt64))
assert.EqualValues(t, math.MinInt64, tmmath.SafeAddClip(math.MinInt64, -10))
}
func TestSafeSubClip(t *testing.T) {
assert.EqualValues(t, math.MinInt64, tmmath.SafeSubClip(math.MinInt64, 10))
assert.EqualValues(t, 0, tmmath.SafeSubClip(math.MinInt64, math.MinInt64))
assert.EqualValues(t, math.MinInt64, tmmath.SafeSubClip(math.MinInt64, math.MaxInt64))
assert.EqualValues(t, math.MaxInt64, tmmath.SafeSubClip(math.MaxInt64, -10))
}
func TestSafeMul(t *testing.T) {
testCases := []struct {
a int64
b int64
c int64
overflow bool
}{
0: {0, 0, 0, false},
1: {1, 0, 0, false},
2: {2, 3, 6, false},
3: {2, -3, -6, false},
4: {-2, -3, 6, false},
5: {-2, 3, -6, false},
6: {math.MaxInt64, 1, math.MaxInt64, false},
7: {math.MaxInt64 / 2, 2, math.MaxInt64 - 1, false},
8: {math.MaxInt64 / 2, 3, 0, true},
9: {math.MaxInt64, 2, 0, true},
}
for i, tc := range testCases {
c, overflow := tmmath.SafeMul(tc.a, tc.b)
assert.Equal(t, tc.c, c, "#%d", i)
assert.Equal(t, tc.overflow, overflow, "#%d", i)
}
}

View File

@@ -231,45 +231,34 @@ func (s *Server) Unsubscribe(ctx context.Context, args UnsubscribeArgs) error {
return err
}
var qs string
if args.Query != nil {
qs = args.Query.String()
}
clientSubscriptions, err := func() (map[string]string, error) {
s.mtx.RLock()
defer s.mtx.RUnlock()
s.mtx.RLock()
clientSubscriptions, ok := s.subscriptions[args.Subscriber]
if args.ID != "" {
qs, ok = clientSubscriptions[args.ID]
clientSubscriptions, ok := s.subscriptions[args.Subscriber]
if args.ID != "" {
qs, ok = clientSubscriptions[args.ID]
if ok && args.Query == nil {
var err error
args.Query, err = query.New(qs)
if err != nil {
return nil, err
}
if ok && args.Query == nil {
var err error
args.Query, err = query.New(qs)
if err != nil {
return err
}
} else if qs != "" {
args.ID, ok = clientSubscriptions[qs]
}
} else if qs != "" {
args.ID, ok = clientSubscriptions[qs]
}
if !ok {
return nil, ErrSubscriptionNotFound
}
return clientSubscriptions, nil
}()
if err != nil {
return err
s.mtx.RUnlock()
if !ok {
return ErrSubscriptionNotFound
}
select {
case s.cmds <- cmd{op: unsub, clientID: args.Subscriber, query: args.Query, subscription: &Subscription{id: args.ID}}:
s.mtx.Lock()
defer s.mtx.Unlock()
delete(clientSubscriptions, args.ID)
delete(clientSubscriptions, qs)
@@ -277,6 +266,7 @@ func (s *Server) Unsubscribe(ctx context.Context, args UnsubscribeArgs) error {
if len(clientSubscriptions) == 0 {
delete(s.subscriptions, args.Subscriber)
}
s.mtx.Unlock()
return nil
case <-ctx.Done():
return ctx.Err()
@@ -298,10 +288,8 @@ func (s *Server) UnsubscribeAll(ctx context.Context, clientID string) error {
select {
case s.cmds <- cmd{op: unsub, clientID: clientID}:
s.mtx.Lock()
defer s.mtx.Unlock()
delete(s.subscriptions, clientID)
s.mtx.Unlock()
return nil
case <-ctx.Done():
return ctx.Err()
@@ -507,10 +495,7 @@ func (state *state) send(msg interface{}, events []types.Event) error {
for clientID, subscription := range clientSubscriptions {
if cap(subscription.out) == 0 {
// block on unbuffered channel
select {
case subscription.out <- NewMessage(subscription.id, msg, events):
case <-subscription.canceled:
}
subscription.out <- NewMessage(subscription.id, msg, events)
} else {
// don't block on buffered channels
select {

View File

@@ -18,6 +18,7 @@ import (
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/crypto"
cs "github.com/tendermint/tendermint/internal/consensus"
"github.com/tendermint/tendermint/internal/evidence"
"github.com/tendermint/tendermint/internal/mempool"
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/internal/p2p/pex"
@@ -36,6 +37,7 @@ import (
grpccore "github.com/tendermint/tendermint/rpc/grpc"
rpcserver "github.com/tendermint/tendermint/rpc/jsonrpc/server"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/state/indexer"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
)
@@ -69,12 +71,16 @@ type nodeImpl struct {
mempool mempool.Mempool
stateSync bool // whether the node should state sync on startup
stateSyncReactor *statesync.Reactor // for hosting and restoring state sync snapshots
consensusState *cs.State // latest consensus state
consensusReactor *cs.Reactor // for participating in the consensus
pexReactor service.Service // for exchanging peer addresses
evidenceReactor service.Service
pexReactor *pex.Reactor // for exchanging peer addresses
pexReactorV2 *pex.ReactorV2 // for exchanging peer addresses
evidenceReactor *evidence.Reactor
evidencePool *evidence.Pool // tracking evidence
proxyApp proxy.AppConns // connection to the application
rpcListeners []net.Listener // rpc servers
indexerService service.Service
rpcEnv *rpccore.Environment
eventSinks []indexer.EventSink
indexerService *indexer.Service
prometheusSrv *http.Server
}
@@ -365,43 +371,46 @@ func makeNode(config *cfg.Config,
// Note we currently use the addrBook regardless at least for AddOurAddress
var (
pexReactor service.Service
sw *p2p.Switch
addrBook pex.AddrBook
pexReactor *pex.Reactor
pexReactorV2 *pex.ReactorV2
sw *p2p.Switch
addrBook pex.AddrBook
)
pexCh := pex.ChannelDescriptor()
transport.AddChannelDescriptors([]*p2p.ChannelDescriptor{&pexCh})
if config.P2P.DisableLegacy {
addrBook = nil
pexReactor, err = createPEXReactorV2(config, logger, peerManager, router)
if err != nil {
return nil, err
}
} else {
// setup Transport and Switch
sw = createSwitch(
config, transport, p2pMetrics, mpReactorShim, bcReactorForSwitch,
stateSyncReactorShim, csReactorShim, evReactorShim, proxyApp, nodeInfo, nodeKey, p2pLogger,
)
if config.P2P.PexReactor {
if config.P2P.DisableLegacy {
addrBook = nil
pexReactorV2, err = createPEXReactorV2(config, logger, peerManager, router)
if err != nil {
return nil, err
}
} else {
// setup Transport and Switch
sw = createSwitch(
config, transport, p2pMetrics, mpReactorShim, bcReactorForSwitch,
stateSyncReactorShim, csReactorShim, evReactorShim, proxyApp, nodeInfo, nodeKey, p2pLogger,
)
err = sw.AddPersistentPeers(strings.SplitAndTrimEmpty(config.P2P.PersistentPeers, ",", " "))
if err != nil {
return nil, fmt.Errorf("could not add peers from persistent-peers field: %w", err)
}
err = sw.AddPersistentPeers(strings.SplitAndTrimEmpty(config.P2P.PersistentPeers, ",", " "))
if err != nil {
return nil, fmt.Errorf("could not add peers from persistent-peers field: %w", err)
}
err = sw.AddUnconditionalPeerIDs(strings.SplitAndTrimEmpty(config.P2P.UnconditionalPeerIDs, ",", " "))
if err != nil {
return nil, fmt.Errorf("could not add peer ids from unconditional_peer_ids field: %w", err)
}
err = sw.AddUnconditionalPeerIDs(strings.SplitAndTrimEmpty(config.P2P.UnconditionalPeerIDs, ",", " "))
if err != nil {
return nil, fmt.Errorf("could not add peer ids from unconditional_peer_ids field: %w", err)
}
addrBook, err = createAddrBookAndSetOnSwitch(config, sw, p2pLogger, nodeKey)
if err != nil {
return nil, fmt.Errorf("could not create addrbook: %w", err)
}
addrBook, err = createAddrBookAndSetOnSwitch(config, sw, p2pLogger, nodeKey)
if err != nil {
return nil, fmt.Errorf("could not create addrbook: %w", err)
}
pexReactor = createPEXReactorAndAddToSwitch(addrBook, config, sw, logger)
pexReactor = createPEXReactorAndAddToSwitch(addrBook, config, sw, logger)
}
}
if config.RPC.PprofListenAddress != "" {
@@ -429,39 +438,19 @@ func makeNode(config *cfg.Config,
bcReactor: bcReactor,
mempoolReactor: mpReactor,
mempool: mp,
consensusState: csState,
consensusReactor: csReactor,
stateSyncReactor: stateSyncReactor,
stateSync: stateSync,
pexReactor: pexReactor,
pexReactorV2: pexReactorV2,
evidenceReactor: evReactor,
evidencePool: evPool,
proxyApp: proxyApp,
indexerService: indexerService,
eventBus: eventBus,
rpcEnv: &rpccore.Environment{
ProxyAppQuery: proxyApp.Query(),
ProxyAppMempool: proxyApp.Mempool(),
StateStore: stateStore,
BlockStore: blockStore,
EvidencePool: evPool,
ConsensusState: csState,
BlockSyncReactor: bcReactor.(cs.BlockSyncReactor),
P2PPeers: sw,
PeerManager: peerManager,
GenDoc: genDoc,
EventSinks: eventSinks,
ConsensusReactor: csReactor,
EventBus: eventBus,
Mempool: mp,
Logger: logger.With("module", "rpc"),
Config: *config.RPC,
},
eventSinks: eventSinks,
}
node.rpcEnv.P2PTransport = node
node.BaseService = *service.NewBaseService(logger, "Node", node)
return node, nil
@@ -494,6 +483,25 @@ func makeSeedNode(config *cfg.Config,
p2pMetrics := p2p.PrometheusMetrics(config.Instrumentation.Namespace, "chain_id", genDoc.ChainID)
p2pLogger := logger.With("module", "p2p")
transport := createTransport(p2pLogger, config)
sw := createSwitch(
config, transport, p2pMetrics, nil, nil,
nil, nil, nil, nil, nodeInfo, nodeKey, p2pLogger,
)
err = sw.AddPersistentPeers(strings.SplitAndTrimEmpty(config.P2P.PersistentPeers, ",", " "))
if err != nil {
return nil, fmt.Errorf("could not add peers from persistent_peers field: %w", err)
}
err = sw.AddUnconditionalPeerIDs(strings.SplitAndTrimEmpty(config.P2P.UnconditionalPeerIDs, ",", " "))
if err != nil {
return nil, fmt.Errorf("could not add peer ids from unconditional_peer_ids field: %w", err)
}
addrBook, err := createAddrBookAndSetOnSwitch(config, sw, p2pLogger, nodeKey)
if err != nil {
return nil, fmt.Errorf("could not create addrbook: %w", err)
}
peerManager, err := createPeerManager(config, dbProvider, p2pLogger, nodeKey.ID)
if err != nil {
@@ -507,9 +515,8 @@ func makeSeedNode(config *cfg.Config,
}
var (
pexReactor service.Service
sw *p2p.Switch
addrBook pex.AddrBook
pexReactor *pex.Reactor
pexReactorV2 *pex.ReactorV2
)
// add the pex reactor
@@ -519,31 +526,11 @@ func makeSeedNode(config *cfg.Config,
pexCh := pex.ChannelDescriptor()
transport.AddChannelDescriptors([]*p2p.ChannelDescriptor{&pexCh})
if config.P2P.DisableLegacy {
pexReactor, err = createPEXReactorV2(config, logger, peerManager, router)
pexReactorV2, err = createPEXReactorV2(config, logger, peerManager, router)
if err != nil {
return nil, err
}
} else {
sw = createSwitch(
config, transport, p2pMetrics, nil, nil,
nil, nil, nil, nil, nodeInfo, nodeKey, p2pLogger,
)
err = sw.AddPersistentPeers(strings.SplitAndTrimEmpty(config.P2P.PersistentPeers, ",", " "))
if err != nil {
return nil, fmt.Errorf("could not add peers from persistent_peers field: %w", err)
}
err = sw.AddUnconditionalPeerIDs(strings.SplitAndTrimEmpty(config.P2P.UnconditionalPeerIDs, ",", " "))
if err != nil {
return nil, fmt.Errorf("could not add peer ids from unconditional_peer_ids field: %w", err)
}
addrBook, err = createAddrBookAndSetOnSwitch(config, sw, p2pLogger, nodeKey)
if err != nil {
return nil, fmt.Errorf("could not create addrbook: %w", err)
}
pexReactor = createPEXReactorAndAddToSwitch(addrBook, config, sw, logger)
}
@@ -566,7 +553,8 @@ func makeSeedNode(config *cfg.Config,
peerManager: peerManager,
router: router,
pexReactor: pexReactor,
pexReactor: pexReactor,
pexReactorV2: pexReactorV2,
}
node.BaseService = *service.NewBaseService(logger, "SeedNode", node)
@@ -607,22 +595,23 @@ func (n *nodeImpl) OnStart() error {
}
n.isListening = true
n.Logger.Info("p2p service", "legacy_enabled", !n.config.P2P.DisableLegacy)
if n.config.P2P.DisableLegacy {
if err = n.router.Start(); err != nil {
return err
}
err = n.router.Start()
} else {
// Add private IDs to addrbook to block those peers being added
n.addrBook.AddPrivateIDs(strings.SplitAndTrimEmpty(n.config.P2P.PrivatePeerIDs, ",", " "))
if err = n.sw.Start(); err != nil {
return err
}
err = n.sw.Start()
}
if err != nil {
return err
}
if n.config.Mode != cfg.ModeSeed {
if n.config.BlockSync.Version == cfg.BlockSyncV0 {
// Start the real blockchain reactor separately since the switch uses the shim.
if err := n.bcReactor.Start(); err != nil {
return err
}
@@ -649,8 +638,8 @@ func (n *nodeImpl) OnStart() error {
}
}
if n.config.P2P.DisableLegacy {
if err := n.pexReactor.Start(); err != nil {
if n.config.P2P.DisableLegacy && n.pexReactorV2 != nil {
if err := n.pexReactorV2.Start(); err != nil {
return err
}
} else {
@@ -659,6 +648,7 @@ func (n *nodeImpl) OnStart() error {
if err != nil {
return fmt.Errorf("could not dial peers from persistent-peers field: %w", err)
}
}
// Run state sync
@@ -733,8 +723,10 @@ func (n *nodeImpl) OnStop() {
}
}
if err := n.pexReactor.Stop(); err != nil {
n.Logger.Error("failed to stop the PEX v2 reactor", "err", err)
if n.config.P2P.DisableLegacy && n.pexReactorV2 != nil {
if err := n.pexReactorV2.Stop(); err != nil {
n.Logger.Error("failed to stop the PEX v2 reactor", "err", err)
}
}
if n.config.P2P.DisableLegacy {
@@ -775,23 +767,55 @@ func (n *nodeImpl) OnStop() {
}
}
func (n *nodeImpl) startRPC() ([]net.Listener, error) {
// ConfigureRPC makes sure RPC has all the objects it needs to operate.
func (n *nodeImpl) ConfigureRPC() (*rpccore.Environment, error) {
rpcCoreEnv := rpccore.Environment{
ProxyAppQuery: n.proxyApp.Query(),
ProxyAppMempool: n.proxyApp.Mempool(),
StateStore: n.stateStore,
BlockStore: n.blockStore,
EvidencePool: n.evidencePool,
ConsensusState: n.consensusState,
P2PPeers: n.sw,
P2PTransport: n,
GenDoc: n.genesisDoc,
EventSinks: n.eventSinks,
ConsensusReactor: n.consensusReactor,
EventBus: n.eventBus,
Mempool: n.mempool,
Logger: n.Logger.With("module", "rpc"),
Config: *n.config.RPC,
BlockSyncReactor: n.bcReactor.(cs.BlockSyncReactor),
}
if n.config.Mode == cfg.ModeValidator {
pubKey, err := n.privValidator.GetPubKey(context.TODO())
if pubKey == nil || err != nil {
return nil, fmt.Errorf("can't get pubkey: %w", err)
}
n.rpcEnv.PubKey = pubKey
rpcCoreEnv.PubKey = pubKey
}
if err := n.rpcEnv.InitGenesisChunks(); err != nil {
if err := rpcCoreEnv.InitGenesisChunks(); err != nil {
return nil, err
}
return &rpcCoreEnv, nil
}
func (n *nodeImpl) startRPC() ([]net.Listener, error) {
env, err := n.ConfigureRPC()
if err != nil {
return nil, err
}
listenAddrs := strings.SplitAndTrimEmpty(n.config.RPC.ListenAddress, ",", " ")
routes := n.rpcEnv.GetRoutes()
routes := env.GetRoutes()
if n.config.RPC.Unsafe {
n.rpcEnv.AddUnsafe(routes)
env.AddUnsafe(routes)
}
config := rpcserver.DefaultConfig()
@@ -888,7 +912,7 @@ func (n *nodeImpl) startRPC() ([]net.Listener, error) {
return nil, err
}
go func() {
if err := grpccore.StartGRPCServer(n.rpcEnv, listener); err != nil {
if err := grpccore.StartGRPCServer(env, listener); err != nil {
n.Logger.Error("Error starting gRPC server", "err", err)
}
}()
@@ -921,16 +945,46 @@ func (n *nodeImpl) startPrometheusServer(addr string) *http.Server {
return srv
}
// Switch returns the Node's Switch.
func (n *nodeImpl) Switch() *p2p.Switch {
return n.sw
}
// BlockStore returns the Node's BlockStore.
func (n *nodeImpl) BlockStore() *store.BlockStore {
return n.blockStore
}
// ConsensusState returns the Node's ConsensusState.
func (n *nodeImpl) ConsensusState() *cs.State {
return n.consensusState
}
// ConsensusReactor returns the Node's ConsensusReactor.
func (n *nodeImpl) ConsensusReactor() *cs.Reactor {
return n.consensusReactor
}
// MempoolReactor returns the Node's mempool reactor.
func (n *nodeImpl) MempoolReactor() service.Service {
return n.mempoolReactor
}
// Mempool returns the Node's mempool.
func (n *nodeImpl) Mempool() mempool.Mempool {
return n.mempool
}
// PEXReactor returns the Node's PEXReactor. It returns nil if PEX is disabled.
func (n *nodeImpl) PEXReactor() *pex.Reactor {
return n.pexReactor
}
// EvidencePool returns the Node's EvidencePool.
func (n *nodeImpl) EvidencePool() *evidence.Pool {
return n.evidencePool
}
// EventBus returns the Node's EventBus.
func (n *nodeImpl) EventBus() *types.EventBus {
return n.eventBus
@@ -947,9 +1001,19 @@ func (n *nodeImpl) GenesisDoc() *types.GenesisDoc {
return n.genesisDoc
}
// RPCEnvironment makes sure RPC has all the objects it needs to operate.
func (n *nodeImpl) RPCEnvironment() *rpccore.Environment {
return n.rpcEnv
// ProxyApp returns the Node's AppConns, representing its connections to the ABCI application.
func (n *nodeImpl) ProxyApp() proxy.AppConns {
return n.proxyApp
}
// Config returns the Node's config.
func (n *nodeImpl) Config() *cfg.Config {
return n.config
}
// EventSinks returns the Node's event indexing sinks.
func (n *nodeImpl) EventSinks() []indexer.EventSink {
return n.eventSinks
}
//------------------------------------------------------------------------------

View File

@@ -513,50 +513,36 @@ func TestNodeSetEventSink(t *testing.T) {
config := cfg.ResetTestRoot("node_app_version_test")
defer os.RemoveAll(config.RootDir)
logger := log.TestingLogger()
setupTest := func(t *testing.T, conf *cfg.Config) []indexer.EventSink {
eventBus, err := createAndStartEventBus(logger)
require.NoError(t, err)
n := getTestNode(t, config, log.TestingLogger())
genDoc, err := types.GenesisDocFromFile(config.GenesisFile())
require.NoError(t, err)
indexService, eventSinks, err := createAndStartIndexerService(config,
cfg.DefaultDBProvider, eventBus, logger, genDoc.ChainID)
require.NoError(t, err)
t.Cleanup(func() { require.NoError(t, indexService.Stop()) })
return eventSinks
}
eventSinks := setupTest(t, config)
assert.Equal(t, 1, len(eventSinks))
assert.Equal(t, indexer.KV, eventSinks[0].Type())
assert.Equal(t, 1, len(n.eventSinks))
assert.Equal(t, indexer.KV, n.eventSinks[0].Type())
config.TxIndex.Indexer = []string{"null"}
eventSinks = setupTest(t, config)
n = getTestNode(t, config, log.TestingLogger())
assert.Equal(t, 1, len(eventSinks))
assert.Equal(t, indexer.NULL, eventSinks[0].Type())
assert.Equal(t, 1, len(n.eventSinks))
assert.Equal(t, indexer.NULL, n.eventSinks[0].Type())
config.TxIndex.Indexer = []string{"null", "kv"}
eventSinks = setupTest(t, config)
n = getTestNode(t, config, log.TestingLogger())
assert.Equal(t, 1, len(eventSinks))
assert.Equal(t, indexer.NULL, eventSinks[0].Type())
assert.Equal(t, 1, len(n.eventSinks))
assert.Equal(t, indexer.NULL, n.eventSinks[0].Type())
config.TxIndex.Indexer = []string{"kvv"}
ns, err := newDefaultNode(config, logger)
ns, err := newDefaultNode(config, log.TestingLogger())
assert.Nil(t, ns)
assert.Equal(t, errors.New("unsupported event sink type"), err)
config.TxIndex.Indexer = []string{}
eventSinks = setupTest(t, config)
n = getTestNode(t, config, log.TestingLogger())
assert.Equal(t, 1, len(eventSinks))
assert.Equal(t, indexer.NULL, eventSinks[0].Type())
assert.Equal(t, 1, len(n.eventSinks))
assert.Equal(t, indexer.NULL, n.eventSinks[0].Type())
config.TxIndex.Indexer = []string{"psql"}
ns, err = newDefaultNode(config, logger)
ns, err = newDefaultNode(config, log.TestingLogger())
assert.Nil(t, ns)
assert.Equal(t, errors.New("the psql connection settings cannot be empty"), err)
@@ -564,46 +550,46 @@ func TestNodeSetEventSink(t *testing.T) {
config.TxIndex.Indexer = []string{"psql"}
config.TxIndex.PsqlConn = psqlConn
eventSinks = setupTest(t, config)
assert.Equal(t, 1, len(eventSinks))
assert.Equal(t, indexer.PSQL, eventSinks[0].Type())
n = getTestNode(t, config, log.TestingLogger())
assert.Equal(t, 1, len(n.eventSinks))
assert.Equal(t, indexer.PSQL, n.eventSinks[0].Type())
n.OnStop()
config.TxIndex.Indexer = []string{"psql", "kv"}
config.TxIndex.PsqlConn = psqlConn
eventSinks = setupTest(t, config)
assert.Equal(t, 2, len(eventSinks))
n = getTestNode(t, config, log.TestingLogger())
assert.Equal(t, 2, len(n.eventSinks))
// we use a map to filter duplicate sinks, so the order in which sinks are appended is not guaranteed.
if eventSinks[0].Type() == indexer.KV {
assert.Equal(t, indexer.PSQL, eventSinks[1].Type())
if n.eventSinks[0].Type() == indexer.KV {
assert.Equal(t, indexer.PSQL, n.eventSinks[1].Type())
} else {
assert.Equal(t, indexer.PSQL, eventSinks[0].Type())
assert.Equal(t, indexer.KV, eventSinks[1].Type())
assert.Equal(t, indexer.PSQL, n.eventSinks[0].Type())
assert.Equal(t, indexer.KV, n.eventSinks[1].Type())
}
n.OnStop()
config.TxIndex.Indexer = []string{"kv", "psql"}
config.TxIndex.PsqlConn = psqlConn
eventSinks = setupTest(t, config)
assert.Equal(t, 2, len(eventSinks))
if eventSinks[0].Type() == indexer.KV {
assert.Equal(t, indexer.PSQL, eventSinks[1].Type())
n = getTestNode(t, config, log.TestingLogger())
assert.Equal(t, 2, len(n.eventSinks))
if n.eventSinks[0].Type() == indexer.KV {
assert.Equal(t, indexer.PSQL, n.eventSinks[1].Type())
} else {
assert.Equal(t, indexer.PSQL, eventSinks[0].Type())
assert.Equal(t, indexer.KV, eventSinks[1].Type())
assert.Equal(t, indexer.PSQL, n.eventSinks[0].Type())
assert.Equal(t, indexer.KV, n.eventSinks[1].Type())
}
n.OnStop()
var e = errors.New("found duplicated sinks, please check the tx-index section in the config.toml")
config.TxIndex.Indexer = []string{"psql", "kv", "Kv"}
config.TxIndex.PsqlConn = psqlConn
_, err = newDefaultNode(config, logger)
_, err = newDefaultNode(config, log.TestingLogger())
require.Error(t, err)
assert.Equal(t, e, err)
config.TxIndex.Indexer = []string{"Psql", "kV", "kv", "pSql"}
config.TxIndex.PsqlConn = psqlConn
_, err = newDefaultNode(config, logger)
_, err = newDefaultNode(config, log.TestingLogger())
require.Error(t, err)
assert.Equal(t, e, err)
}

View File

@@ -362,7 +362,7 @@ func createBlockchainReactor(
reactor, err := bcv0.NewReactor(
logger, state.Copy(), blockExec, blockStore, csReactor,
channels[bcv0.BlockSyncChannel], peerUpdates, blockSync,
channels[bcv0.BlockchainChannel], peerUpdates, blockSync,
metrics,
)
if err != nil {
@@ -700,7 +700,7 @@ func createPEXReactorV2(
logger log.Logger,
peerManager *p2p.PeerManager,
router *p2p.Router,
) (service.Service, error) {
) (*pex.ReactorV2, error) {
channel, err := router.OpenChannel(pex.ChannelDescriptor(), &protop2p.PexMessage{}, 128)
if err != nil {
@@ -727,7 +727,7 @@ func makeNodeInfo(
var bcChannel byte
switch config.BlockSync.Version {
case cfg.BlockSyncV0:
bcChannel = byte(bcv0.BlockSyncChannel)
bcChannel = byte(bcv0.BlockchainChannel)
case cfg.BlockSyncV2:
bcChannel = bcv2.BlockchainChannel

491
pkg/block/block.go Normal file
View File

@@ -0,0 +1,491 @@
package block
import (
"bytes"
"errors"
"fmt"
"strings"
"github.com/gogo/protobuf/proto"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
tmbytes "github.com/tendermint/tendermint/libs/bytes"
tmmath "github.com/tendermint/tendermint/libs/math"
"github.com/tendermint/tendermint/pkg/evidence"
"github.com/tendermint/tendermint/pkg/mempool"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/version"
)
// Block defines the atomic unit of a Tendermint blockchain.
type Block struct {
mtx tmsync.Mutex
meta.Header `json:"header"`
Data `json:"data"`
Evidence EvidenceData `json:"evidence"`
LastCommit *meta.Commit `json:"last_commit"`
}
// ValidateBasic performs basic validation that doesn't involve state data.
// It checks the internal consistency of the block.
// Further validation is done using state#ValidateBlock.
func (b *Block) ValidateBasic() error {
if b == nil {
return errors.New("nil block")
}
b.mtx.Lock()
defer b.mtx.Unlock()
if err := b.Header.ValidateBasic(); err != nil {
return fmt.Errorf("invalid header: %w", err)
}
// Validate the last commit and its hash.
if b.LastCommit == nil {
return errors.New("nil LastCommit")
}
if err := b.LastCommit.ValidateBasic(); err != nil {
return fmt.Errorf("wrong LastCommit: %v", err)
}
if w, g := b.LastCommit.Hash(), b.LastCommitHash; !bytes.Equal(w, g) {
return fmt.Errorf("wrong Header.LastCommitHash. Expected %X, got %X", w, g)
}
// NOTE: b.Data.Txs may be nil, but b.Data.Hash() still works fine.
if w, g := b.Data.Hash(), b.DataHash; !bytes.Equal(w, g) {
return fmt.Errorf("wrong Header.DataHash. Expected %X, got %X", w, g)
}
// NOTE: b.Evidence.Evidence may be nil, but we're just looping.
for i, ev := range b.Evidence.Evidence {
if err := ev.ValidateBasic(); err != nil {
return fmt.Errorf("invalid evidence (#%d): %v", i, err)
}
}
if w, g := b.Evidence.Hash(), b.EvidenceHash; !bytes.Equal(w, g) {
return fmt.Errorf("wrong Header.EvidenceHash. Expected %X, got %X", w, g)
}
return nil
}
// fillHeader fills in any remaining header fields that are a function of the block data
func (b *Block) fillHeader() {
if b.LastCommitHash == nil {
b.LastCommitHash = b.LastCommit.Hash()
}
if b.DataHash == nil {
b.DataHash = b.Data.Hash()
}
if b.EvidenceHash == nil {
b.EvidenceHash = b.Evidence.Hash()
}
}
// Hash computes and returns the block hash.
// If the block is incomplete, block hash is nil for safety.
func (b *Block) Hash() tmbytes.HexBytes {
if b == nil {
return nil
}
b.mtx.Lock()
defer b.mtx.Unlock()
if b.LastCommit == nil {
return nil
}
b.fillHeader()
return b.Header.Hash()
}
// MakePartSet returns a PartSet containing parts of a serialized block.
// This is the form in which the block is gossiped to peers.
// CONTRACT: partSize is greater than zero.
func (b *Block) MakePartSet(partSize uint32) *meta.PartSet {
if b == nil {
return nil
}
b.mtx.Lock()
defer b.mtx.Unlock()
pbb, err := b.ToProto()
if err != nil {
panic(err)
}
bz, err := proto.Marshal(pbb)
if err != nil {
panic(err)
}
return meta.NewPartSetFromData(bz, partSize)
}
// HashesTo is a convenience function that checks if a block hashes to the given argument.
// Returns false if the block is nil or the hash is empty.
func (b *Block) HashesTo(hash []byte) bool {
if len(hash) == 0 {
return false
}
if b == nil {
return false
}
return bytes.Equal(b.Hash(), hash)
}
// Size returns size of the block in bytes.
func (b *Block) Size() int {
pbb, err := b.ToProto()
if err != nil {
return 0
}
return pbb.Size()
}
// String returns a string representation of the block
//
// See StringIndented.
func (b *Block) String() string {
return b.StringIndented("")
}
// StringIndented returns an indented String.
//
// Header
// Data
// Evidence
// LastCommit
// Hash
func (b *Block) StringIndented(indent string) string {
if b == nil {
return "nil-Block"
}
return fmt.Sprintf(`Block{
%s %v
%s %v
%s %v
%s %v
%s}#%v`,
indent, b.Header.StringIndented(indent+" "),
indent, b.Data.StringIndented(indent+" "),
indent, b.Evidence.StringIndented(indent+" "),
indent, b.LastCommit.StringIndented(indent+" "),
indent, b.Hash())
}
// StringShort returns a shortened string representation of the block.
func (b *Block) StringShort() string {
if b == nil {
return "nil-Block"
}
return fmt.Sprintf("Block#%X", b.Hash())
}
// ToProto converts Block to protobuf
func (b *Block) ToProto() (*tmproto.Block, error) {
if b == nil {
return nil, errors.New("nil Block")
}
pb := new(tmproto.Block)
pb.Header = *b.Header.ToProto()
pb.LastCommit = b.LastCommit.ToProto()
pb.Data = b.Data.ToProto()
protoEvidence, err := b.Evidence.ToProto()
if err != nil {
return nil, err
}
pb.Evidence = *protoEvidence
return pb, nil
}
// BlockFromProto constructs a Block from its protobuf representation.
// It returns an error if the resulting block is invalid.
func BlockFromProto(bp *tmproto.Block) (*Block, error) {
if bp == nil {
return nil, errors.New("nil block")
}
b := new(Block)
h, err := meta.HeaderFromProto(&bp.Header)
if err != nil {
return nil, err
}
b.Header = h
data, err := DataFromProto(&bp.Data)
if err != nil {
return nil, err
}
b.Data = data
if err := b.Evidence.FromProto(&bp.Evidence); err != nil {
return nil, err
}
if bp.LastCommit != nil {
lc, err := meta.CommitFromProto(bp.LastCommit)
if err != nil {
return nil, err
}
b.LastCommit = lc
}
return b, b.ValidateBasic()
}
//-----------------------------------------------------------------------------
// MaxDataBytes returns the maximum size of block's data.
//
// XXX: Panics on negative result.
func MaxDataBytes(maxBytes, evidenceBytes int64, valsCount int) int64 {
maxDataBytes := maxBytes -
meta.MaxOverheadForBlock -
meta.MaxHeaderBytes -
meta.MaxCommitBytes(valsCount) -
evidenceBytes
if maxDataBytes < 0 {
panic(fmt.Sprintf(
"Negative MaxDataBytes. Block.MaxBytes=%d is too small to accommodate header&lastCommit&evidence=%d",
maxBytes,
-(maxDataBytes - maxBytes),
))
}
return maxDataBytes
}
// MaxDataBytesNoEvidence returns the maximum size of a block's data when
// no evidence is included in the block.
//
// XXX: Panics on negative result.
func MaxDataBytesNoEvidence(maxBytes int64, valsCount int) int64 {
maxDataBytes := maxBytes -
meta.MaxOverheadForBlock -
meta.MaxHeaderBytes -
meta.MaxCommitBytes(valsCount)
if maxDataBytes < 0 {
panic(fmt.Sprintf(
"Negative MaxDataBytesUnknownEvidence. Block.MaxBytes=%d is too small to accommodate header&lastCommit&evidence=%d",
maxBytes,
-(maxDataBytes - maxBytes),
))
}
return maxDataBytes
}
// MakeBlock returns a new block with an empty header, except for the
// fields that can be computed from the block itself.
// It populates the same set of fields validated by ValidateBasic.
func MakeBlock(height int64, txs []mempool.Tx, lastCommit *meta.Commit, evidence []evidence.Evidence) *Block {
block := &Block{
Header: meta.Header{
Version: version.Consensus{Block: version.BlockProtocol, App: 0},
Height: height,
},
Data: Data{
Txs: txs,
},
Evidence: EvidenceData{Evidence: evidence},
LastCommit: lastCommit,
}
block.fillHeader()
return block
}
//-----------------------------------------------------------------------------
// Data contains the set of transactions included in the block
type Data struct {
// Txs that will be applied by state @ block.Height+1.
// NOTE: not all txs here are valid. We're just agreeing on the order first.
// This means that block.AppHash does not include these txs.
Txs mempool.Txs `json:"txs"`
// Volatile
hash tmbytes.HexBytes
}
// Hash returns the hash of the data
func (data *Data) Hash() tmbytes.HexBytes {
if data == nil {
return (mempool.Txs{}).Hash()
}
if data.hash == nil {
data.hash = data.Txs.Hash() // NOTE: leaves of merkle tree are TxIDs
}
return data.hash
}
// StringIndented returns an indented string representation of the transactions.
func (data *Data) StringIndented(indent string) string {
if data == nil {
return "nil-Data"
}
txStrings := make([]string, tmmath.MinInt(len(data.Txs), 21))
for i, tx := range data.Txs {
if i == 20 {
txStrings[i] = fmt.Sprintf("... (%v total)", len(data.Txs))
break
}
txStrings[i] = fmt.Sprintf("%X (%d bytes)", tx.Hash(), len(tx))
}
return fmt.Sprintf(`Data{
%s %v
%s}#%v`,
indent, strings.Join(txStrings, "\n"+indent+" "),
indent, data.hash)
}
// ToProto converts Data to protobuf
func (data *Data) ToProto() tmproto.Data {
tp := new(tmproto.Data)
if len(data.Txs) > 0 {
txBzs := make([][]byte, len(data.Txs))
for i := range data.Txs {
txBzs[i] = data.Txs[i]
}
tp.Txs = txBzs
}
return *tp
}
// ClearCache removes the saved hash. This is predominantly used for testing.
func (data *Data) ClearCache() {
data.hash = nil
}
// DataFromProto takes a protobuf representation of Data and
// returns the native type.
func DataFromProto(dp *tmproto.Data) (Data, error) {
if dp == nil {
return Data{}, errors.New("nil data")
}
data := new(Data)
if len(dp.Txs) > 0 {
txBzs := make(mempool.Txs, len(dp.Txs))
for i := range dp.Txs {
txBzs[i] = mempool.Tx(dp.Txs[i])
}
data.Txs = txBzs
} else {
data.Txs = mempool.Txs{}
}
return *data, nil
}
// ComputeProtoSizeForTxs wraps the transactions in tmproto.Data{} and calculates the size.
// https://developers.google.com/protocol-buffers/docs/encoding
func ComputeProtoSizeForTxs(txs []mempool.Tx) int64 {
data := Data{Txs: txs}
pdData := data.ToProto()
return int64(pdData.Size())
}
//-----------------------------------------------------------------------------
// EvidenceData contains any evidence of malicious wrongdoing by validators
type EvidenceData struct {
Evidence evidence.EvidenceList `json:"evidence"`
// Volatile. Used as cache
hash tmbytes.HexBytes
byteSize int64
}
// Hash returns the hash of the data.
func (data *EvidenceData) Hash() tmbytes.HexBytes {
if data.hash == nil {
data.hash = data.Evidence.Hash()
}
return data.hash
}
// ByteSize returns the total byte size of all the evidence
func (data *EvidenceData) ByteSize() int64 {
if data.byteSize == 0 && len(data.Evidence) != 0 {
pb, err := data.ToProto()
if err != nil {
panic(err)
}
data.byteSize = int64(pb.Size())
}
return data.byteSize
}
// StringIndented returns a string representation of the evidence.
func (data *EvidenceData) StringIndented(indent string) string {
if data == nil {
return "nil-Evidence"
}
evStrings := make([]string, tmmath.MinInt(len(data.Evidence), 21))
for i, ev := range data.Evidence {
if i == 20 {
evStrings[i] = fmt.Sprintf("... (%v total)", len(data.Evidence))
break
}
evStrings[i] = fmt.Sprintf("Evidence:%v", ev)
}
return fmt.Sprintf(`EvidenceData{
%s %v
%s}#%v`,
indent, strings.Join(evStrings, "\n"+indent+" "),
indent, data.hash)
}
// ToProto converts EvidenceData to protobuf
func (data *EvidenceData) ToProto() (*tmproto.EvidenceList, error) {
if data == nil {
return nil, errors.New("nil evidence data")
}
evi := new(tmproto.EvidenceList)
eviBzs := make([]tmproto.Evidence, len(data.Evidence))
for i := range data.Evidence {
protoEvi, err := evidence.EvidenceToProto(data.Evidence[i])
if err != nil {
return nil, err
}
eviBzs[i] = *protoEvi
}
evi.Evidence = eviBzs
return evi, nil
}
// FromProto populates the EvidenceData from a protobuf EvidenceList.
func (data *EvidenceData) FromProto(eviData *tmproto.EvidenceList) error {
if eviData == nil {
return errors.New("nil evidenceData")
}
eviBzs := make(evidence.EvidenceList, len(eviData.Evidence))
for i := range eviData.Evidence {
evi, err := evidence.EvidenceFromProto(&eviData.Evidence[i])
if err != nil {
return err
}
eviBzs[i] = evi
}
data.Evidence = eviBzs
data.byteSize = int64(eviData.Size())
return nil
}
//--------------------------------------------------------------------------------

View File

@@ -1,25 +1,26 @@
package types
package block
import (
"bytes"
"errors"
"fmt"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
// BlockMeta contains meta information about a block.
type BlockMeta struct {
BlockID BlockID `json:"block_id"`
BlockSize int `json:"block_size"`
Header Header `json:"header"`
NumTxs int `json:"num_txs"`
BlockID meta.BlockID `json:"block_id"`
BlockSize int `json:"block_size"`
Header meta.Header `json:"header"`
NumTxs int `json:"num_txs"`
}
// NewBlockMeta returns a new BlockMeta.
func NewBlockMeta(block *Block, blockParts *PartSet) *BlockMeta {
func NewBlockMeta(block *Block, blockParts *meta.PartSet) *BlockMeta {
return &BlockMeta{
BlockID: BlockID{block.Hash(), blockParts.Header()},
BlockID: meta.BlockID{block.Hash(), blockParts.Header()},
BlockSize: block.Size(),
Header: block.Header,
NumTxs: len(block.Data.Txs),
@@ -47,12 +48,12 @@ func BlockMetaFromProto(pb *tmproto.BlockMeta) (*BlockMeta, error) {
bm := new(BlockMeta)
bi, err := BlockIDFromProto(&pb.BlockID)
bi, err := meta.BlockIDFromProto(&pb.BlockID)
if err != nil {
return nil, err
}
h, err := HeaderFromProto(&pb.Header)
h, err := meta.HeaderFromProto(&pb.Header)
if err != nil {
return nil, err
}

View File

@@ -1,4 +1,4 @@
package types
package block_test
import (
"testing"
@@ -6,23 +6,26 @@ import (
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/crypto/tmhash"
test "github.com/tendermint/tendermint/internal/test/factory"
tmrand "github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/meta"
)
func TestBlockMeta_ToProto(t *testing.T) {
h := MakeRandHeader()
bi := BlockID{Hash: h.Hash(), PartSetHeader: PartSetHeader{Total: 123, Hash: tmrand.Bytes(tmhash.Size)}}
h := test.MakeRandomHeader()
bi := meta.BlockID{Hash: h.Hash(), PartSetHeader: meta.PartSetHeader{Total: 123, Hash: tmrand.Bytes(tmhash.Size)}}
bm := &BlockMeta{
bm := &block.BlockMeta{
BlockID: bi,
BlockSize: 200,
Header: h,
Header: *h,
NumTxs: 0,
}
tests := []struct {
testName string
bm *BlockMeta
bm *block.BlockMeta
expErr bool
}{
{"success", bm, false},
@@ -34,7 +37,7 @@ func TestBlockMeta_ToProto(t *testing.T) {
t.Run(tt.testName, func(t *testing.T) {
pb := tt.bm.ToProto()
bm, err := BlockMetaFromProto(pb)
bm, err := block.BlockMetaFromProto(pb)
if !tt.expErr {
require.NoError(t, err, tt.testName)
@@ -47,37 +50,37 @@ func TestBlockMeta_ToProto(t *testing.T) {
}
func TestBlockMeta_ValidateBasic(t *testing.T) {
h := MakeRandHeader()
bi := BlockID{Hash: h.Hash(), PartSetHeader: PartSetHeader{Total: 123, Hash: tmrand.Bytes(tmhash.Size)}}
bi2 := BlockID{Hash: tmrand.Bytes(tmhash.Size),
PartSetHeader: PartSetHeader{Total: 123, Hash: tmrand.Bytes(tmhash.Size)}}
bi3 := BlockID{Hash: []byte("incorrect hash"),
PartSetHeader: PartSetHeader{Total: 123, Hash: []byte("incorrect hash")}}
h := test.MakeRandomHeader()
bi := meta.BlockID{Hash: h.Hash(), PartSetHeader: meta.PartSetHeader{Total: 123, Hash: tmrand.Bytes(tmhash.Size)}}
bi2 := meta.BlockID{Hash: tmrand.Bytes(tmhash.Size),
PartSetHeader: meta.PartSetHeader{Total: 123, Hash: tmrand.Bytes(tmhash.Size)}}
bi3 := meta.BlockID{Hash: []byte("incorrect hash"),
PartSetHeader: meta.PartSetHeader{Total: 123, Hash: []byte("incorrect hash")}}
bm := &BlockMeta{
bm := &block.BlockMeta{
BlockID: bi,
BlockSize: 200,
Header: h,
Header: *h,
NumTxs: 0,
}
bm2 := &BlockMeta{
bm2 := &block.BlockMeta{
BlockID: bi2,
BlockSize: 200,
Header: h,
Header: *h,
NumTxs: 0,
}
bm3 := &BlockMeta{
bm3 := &block.BlockMeta{
BlockID: bi3,
BlockSize: 200,
Header: h,
Header: *h,
NumTxs: 0,
}
tests := []struct {
name string
bm *BlockMeta
bm *block.BlockMeta
wantErr bool
}{
{"success", bm, false},

364
pkg/block/block_test.go Normal file
View File

@@ -0,0 +1,364 @@
package block_test
import (
"encoding/hex"
"math"
mrand "math/rand"
"os"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/crypto"
test "github.com/tendermint/tendermint/internal/test/factory"
"github.com/tendermint/tendermint/libs/bytes"
tmrand "github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/evidence"
"github.com/tendermint/tendermint/pkg/mempool"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
func TestMain(m *testing.M) {
code := m.Run()
os.Exit(code)
}
func TestBlockAddEvidence(t *testing.T) {
txs := []mempool.Tx{mempool.Tx("foo"), mempool.Tx("bar")}
lastID := test.MakeBlockID()
h := int64(3)
voteSet, _, vals := test.RandVoteSet(h-1, 1, tmproto.PrecommitType, 10, 1)
commit, err := test.MakeCommit(lastID, h-1, 1, voteSet, vals, time.Now())
require.NoError(t, err)
ev := evidence.NewMockDuplicateVoteEvidenceWithValidator(h, time.Now(), vals[0], "block-test-chain")
evList := []evidence.Evidence{ev}
b := block.MakeBlock(h, txs, commit, evList)
require.NotNil(t, b)
require.Equal(t, 1, len(b.Evidence.Evidence))
require.NotNil(t, b.EvidenceHash)
}
func TestBlockValidateBasic(t *testing.T) {
require.Error(t, (*block.Block)(nil).ValidateBasic())
txs := []mempool.Tx{mempool.Tx("foo"), mempool.Tx("bar")}
lastID := test.MakeBlockID()
h := int64(3)
voteSet, valSet, vals := test.RandVoteSet(h-1, 1, tmproto.PrecommitType, 10, 1)
commit, err := test.MakeCommit(lastID, h-1, 1, voteSet, vals, time.Now())
require.NoError(t, err)
ev := evidence.NewMockDuplicateVoteEvidenceWithValidator(h, time.Now(), vals[0], "block-test-chain")
evList := []evidence.Evidence{ev}
testCases := []struct {
testName string
malleateBlock func(*block.Block)
expErr bool
}{
{"Make Block", func(blk *block.Block) {}, false},
{"Make Block w/ proposer Addr", func(blk *block.Block) { blk.ProposerAddress = valSet.GetProposer().Address }, false},
{"Negative Height", func(blk *block.Block) { blk.Height = -1 }, true},
{"Remove 1/2 the commits", func(blk *block.Block) {
blk.LastCommit.Signatures = commit.Signatures[:commit.Size()/2]
blk.LastCommit.ClearCache() // clear hash or the change won't be noticed
}, true},
{"Remove LastCommitHash", func(blk *block.Block) { blk.LastCommitHash = []byte("something else") }, true},
{"Tampered Data", func(blk *block.Block) {
blk.Data.Txs[0] = mempool.Tx("something else")
blk.Data.ClearCache() // clear hash or the change won't be noticed
}, true},
{"Tampered DataHash", func(blk *block.Block) {
blk.DataHash = tmrand.Bytes(len(blk.DataHash))
}, true},
{"Tampered EvidenceHash", func(blk *block.Block) {
blk.EvidenceHash = tmrand.Bytes(len(blk.EvidenceHash))
}, true},
{"Incorrect block protocol version", func(blk *block.Block) {
blk.Version.Block = 1
}, true},
{"Missing LastCommit", func(blk *block.Block) {
blk.LastCommit = nil
}, true},
{"Invalid LastCommit", func(blk *block.Block) {
blk.LastCommit = meta.NewCommit(-1, 0, test.MakeBlockID(), nil)
}, true},
{"Invalid Evidence", func(blk *block.Block) {
emptyEv := &evidence.DuplicateVoteEvidence{}
blk.Evidence = block.EvidenceData{Evidence: []evidence.Evidence{emptyEv}}
}, true},
}
for i, tc := range testCases {
tc := tc
i := i
t.Run(tc.testName, func(t *testing.T) {
block := block.MakeBlock(h, txs, commit, evList)
block.ProposerAddress = valSet.GetProposer().Address
tc.malleateBlock(block)
err = block.ValidateBasic()
t.Log(err)
assert.Equal(t, tc.expErr, err != nil, "#%d: %v", i, err)
})
}
}
func TestBlockHash(t *testing.T) {
assert.Nil(t, (*block.Block)(nil).Hash())
assert.Nil(t, block.MakeBlock(int64(3), []mempool.Tx{mempool.Tx("Hello World")}, nil, nil).Hash())
}
func TestBlockMakePartSet(t *testing.T) {
assert.Nil(t, (*block.Block)(nil).MakePartSet(2))
partSet := block.MakeBlock(int64(3), []mempool.Tx{mempool.Tx("Hello World")}, nil, nil).MakePartSet(1024)
assert.NotNil(t, partSet)
assert.EqualValues(t, 1, partSet.Total())
}
func TestBlockMakePartSetWithEvidence(t *testing.T) {
assert.Nil(t, (*block.Block)(nil).MakePartSet(2))
lastID := test.MakeBlockID()
h := int64(3)
voteSet, _, vals := test.RandVoteSet(h-1, 1, tmproto.PrecommitType, 10, 1)
commit, err := test.MakeCommit(lastID, h-1, 1, voteSet, vals, time.Now())
require.NoError(t, err)
ev := evidence.NewMockDuplicateVoteEvidenceWithValidator(h, time.Now(), vals[0], "block-test-chain")
evList := []evidence.Evidence{ev}
partSet := block.MakeBlock(h, []mempool.Tx{mempool.Tx("Hello World")}, commit, evList).MakePartSet(512)
assert.NotNil(t, partSet)
assert.EqualValues(t, 4, partSet.Total())
}
func TestBlockHashesTo(t *testing.T) {
assert.False(t, (*block.Block)(nil).HashesTo(nil))
lastID := test.MakeBlockID()
h := int64(3)
voteSet, valSet, vals := test.RandVoteSet(h-1, 1, tmproto.PrecommitType, 10, 1)
commit, err := test.MakeCommit(lastID, h-1, 1, voteSet, vals, time.Now())
require.NoError(t, err)
ev := evidence.NewMockDuplicateVoteEvidenceWithValidator(h, time.Now(), vals[0], "block-test-chain")
evList := []evidence.Evidence{ev}
block := block.MakeBlock(h, []mempool.Tx{mempool.Tx("Hello World")}, commit, evList)
block.ValidatorsHash = valSet.Hash()
assert.False(t, block.HashesTo([]byte{}))
assert.False(t, block.HashesTo([]byte("something else")))
assert.True(t, block.HashesTo(block.Hash()))
}
func TestBlockSize(t *testing.T) {
size := block.MakeBlock(int64(3), []mempool.Tx{mempool.Tx("Hello World")}, nil, nil).Size()
if size <= 0 {
t.Fatal("Size of the block is zero or negative")
}
}
func TestBlockString(t *testing.T) {
assert.Equal(t, "nil-Block", (*block.Block)(nil).String())
assert.Equal(t, "nil-Block", (*block.Block)(nil).StringIndented(""))
assert.Equal(t, "nil-Block", (*block.Block)(nil).StringShort())
block := block.MakeBlock(int64(3), []mempool.Tx{mempool.Tx("Hello World")}, nil, nil)
assert.NotEqual(t, "nil-Block", block.String())
assert.NotEqual(t, "nil-Block", block.StringIndented(""))
assert.NotEqual(t, "nil-Block", block.StringShort())
}
func hexBytesFromString(s string) bytes.HexBytes {
b, err := hex.DecodeString(s)
if err != nil {
panic(err)
}
return bytes.HexBytes(b)
}
func TestBlockMaxDataBytes(t *testing.T) {
testCases := []struct {
maxBytes int64
valsCount int
evidenceBytes int64
panics bool
result int64
}{
0: {-10, 1, 0, true, 0},
1: {10, 1, 0, true, 0},
2: {841, 1, 0, true, 0},
3: {842, 1, 0, false, 0},
4: {843, 1, 0, false, 1},
5: {954, 2, 0, false, 1},
6: {1053, 2, 100, false, 0},
}
for i, tc := range testCases {
tc := tc
if tc.panics {
assert.Panics(t, func() {
block.MaxDataBytes(tc.maxBytes, tc.evidenceBytes, tc.valsCount)
}, "#%v", i)
} else {
assert.Equal(t,
tc.result,
block.MaxDataBytes(tc.maxBytes, tc.evidenceBytes, tc.valsCount),
"#%v", i)
}
}
}
func TestBlockMaxDataBytesNoEvidence(t *testing.T) {
testCases := []struct {
maxBytes int64
valsCount int
panics bool
result int64
}{
0: {-10, 1, true, 0},
1: {10, 1, true, 0},
2: {841, 1, true, 0},
3: {842, 1, false, 0},
4: {843, 1, false, 1},
}
for i, tc := range testCases {
tc := tc
if tc.panics {
assert.Panics(t, func() {
block.MaxDataBytesNoEvidence(tc.maxBytes, tc.valsCount)
}, "#%v", i)
} else {
assert.Equal(t,
tc.result,
block.MaxDataBytesNoEvidence(tc.maxBytes, tc.valsCount),
"#%v", i)
}
}
}
func TestBlockProtoBuf(t *testing.T) {
h := mrand.Int63()
c1 := test.MakeRandomCommit(time.Now())
b1 := block.MakeBlock(h, []mempool.Tx{mempool.Tx([]byte{1})}, &meta.Commit{Signatures: []meta.CommitSig{}}, []evidence.Evidence{})
b1.ProposerAddress = tmrand.Bytes(crypto.AddressSize)
b2 := block.MakeBlock(h, []mempool.Tx{mempool.Tx([]byte{1})}, c1, []evidence.Evidence{})
b2.ProposerAddress = tmrand.Bytes(crypto.AddressSize)
evidenceTime := time.Date(2019, 1, 1, 0, 0, 0, 0, time.UTC)
evi := evidence.NewMockDuplicateVoteEvidence(h, evidenceTime, "block-test-chain")
b2.Evidence = block.EvidenceData{Evidence: evidence.EvidenceList{evi}}
b2.EvidenceHash = b2.Evidence.Hash()
b3 := block.MakeBlock(h, []mempool.Tx{}, c1, []evidence.Evidence{})
b3.ProposerAddress = tmrand.Bytes(crypto.AddressSize)
testCases := []struct {
msg string
b1 *block.Block
expPass bool
expPass2 bool
}{
{"nil block", nil, false, false},
{"b1", b1, true, true},
{"b2", b2, true, true},
{"b3", b3, true, true},
}
for _, tc := range testCases {
pb, err := tc.b1.ToProto()
if tc.expPass {
require.NoError(t, err, tc.msg)
} else {
require.Error(t, err, tc.msg)
}
block, err := block.BlockFromProto(pb)
if tc.expPass2 {
require.NoError(t, err, tc.msg)
require.EqualValues(t, tc.b1.Header, block.Header, tc.msg)
require.EqualValues(t, tc.b1.Data, block.Data, tc.msg)
require.EqualValues(t, tc.b1.Evidence.Evidence, block.Evidence.Evidence, tc.msg)
require.EqualValues(t, *tc.b1.LastCommit, *block.LastCommit, tc.msg)
} else {
require.Error(t, err, tc.msg)
}
}
}
func TestDataProtoBuf(t *testing.T) {
data := &block.Data{Txs: mempool.Txs{mempool.Tx([]byte{1}), mempool.Tx([]byte{2}), mempool.Tx([]byte{3})}}
data2 := &block.Data{Txs: mempool.Txs{}}
testCases := []struct {
msg string
data1 *block.Data
expPass bool
}{
{"success", data, true},
{"success data2", data2, true},
}
for _, tc := range testCases {
protoData := tc.data1.ToProto()
d, err := block.DataFromProto(&protoData)
if tc.expPass {
require.NoError(t, err, tc.msg)
require.EqualValues(t, tc.data1, &d, tc.msg)
} else {
require.Error(t, err, tc.msg)
}
}
}
// TestEvidenceDataProtoBuf ensures parity in converting to and from proto.
func TestEvidenceDataProtoBuf(t *testing.T) {
const chainID = "mychain"
ev := evidence.NewMockDuplicateVoteEvidence(math.MaxInt64, time.Now(), chainID)
data := &block.EvidenceData{Evidence: evidence.EvidenceList{ev}}
_ = data.ByteSize()
testCases := []struct {
msg string
data1 *block.EvidenceData
expPass1 bool
expPass2 bool
}{
{"success", data, true, true},
{"empty evidenceData", &block.EvidenceData{Evidence: evidence.EvidenceList{}}, true, true},
{"fail nil Data", nil, false, false},
}
for _, tc := range testCases {
protoData, err := tc.data1.ToProto()
if tc.expPass1 {
require.NoError(t, err, tc.msg)
} else {
require.Error(t, err, tc.msg)
}
eviD := new(block.EvidenceData)
err = eviD.FromProto(protoData)
if tc.expPass2 {
require.NoError(t, err, tc.msg)
require.Equal(t, tc.data1, eviD, tc.msg)
} else {
require.Error(t, err, tc.msg)
}
}
}
// This follows RFC-6962, i.e. `echo -n '' | sha256sum`
var emptyBytes = []byte{0xe3, 0xb0, 0xc4, 0x42, 0x98, 0xfc, 0x1c, 0x14, 0x9a, 0xfb, 0xf4, 0xc8,
0x99, 0x6f, 0xb9, 0x24, 0x27, 0xae, 0x41, 0xe4, 0x64, 0x9b, 0x93, 0x4c, 0xa4, 0x95, 0x99, 0x1b,
0x78, 0x52, 0xb8, 0x55}
func TestNilDataHashDoesntCrash(t *testing.T) {
assert.Equal(t, emptyBytes, []byte((*block.Data)(nil).Hash()))
assert.Equal(t, emptyBytes, []byte(new(block.Data).Hash()))
}

View File

@@ -0,0 +1,32 @@
package consensus
import (
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
// CanonicalizeProposal transforms the given Proposal to a CanonicalProposal.
func CanonicalizeProposal(chainID string, proposal *tmproto.Proposal) tmproto.CanonicalProposal {
return tmproto.CanonicalProposal{
Type: tmproto.ProposalType,
Height: proposal.Height, // encoded as sfixed64
Round: int64(proposal.Round), // encoded as sfixed64
POLRound: int64(proposal.PolRound),
BlockID: meta.CanonicalizeBlockID(proposal.BlockID),
Timestamp: proposal.Timestamp,
ChainID: chainID,
}
}
// CanonicalizeVote transforms the given Vote to a CanonicalVote, which does
// not contain ValidatorIndex and ValidatorAddress fields.
func CanonicalizeVote(chainID string, vote *tmproto.Vote) tmproto.CanonicalVote {
return tmproto.CanonicalVote{
Type: vote.Type,
Height: vote.Height, // encoded as sfixed64
Round: int64(vote.Round), // encoded as sfixed64
BlockID: meta.CanonicalizeBlockID(vote.BlockID),
Timestamp: vote.Timestamp,
ChainID: chainID,
}
}

View File

@@ -1,4 +1,4 @@
package types
package consensus
import "fmt"

View File

@@ -1,4 +1,4 @@
package types
package consensus
import (
"bytes"
@@ -12,11 +12,7 @@ import (
tmbytes "github.com/tendermint/tendermint/libs/bytes"
tmjson "github.com/tendermint/tendermint/libs/json"
tmtime "github.com/tendermint/tendermint/libs/time"
)
const (
// MaxChainIDLen is a maximum length of the chain ID.
MaxChainIDLen = 50
"github.com/tendermint/tendermint/pkg/meta"
)
//------------------------------------------------------------
@@ -70,8 +66,8 @@ func (genDoc *GenesisDoc) ValidateAndComplete() error {
if genDoc.ChainID == "" {
return errors.New("genesis doc must include non-empty chain_id")
}
if len(genDoc.ChainID) > MaxChainIDLen {
return fmt.Errorf("chain_id in genesis doc is too long (max: %d)", MaxChainIDLen)
if len(genDoc.ChainID) > meta.MaxChainIDLen {
return fmt.Errorf("chain_id in genesis doc is too long (max: %d)", meta.MaxChainIDLen)
}
if genDoc.InitialHeight < 0 {
return fmt.Errorf("initial_height cannot be negative (got %v)", genDoc.InitialHeight)

View File

@@ -1,4 +1,4 @@
package types
package consensus_test
import (
"io/ioutil"
@@ -11,6 +11,7 @@ import (
"github.com/tendermint/tendermint/crypto/ed25519"
tmjson "github.com/tendermint/tendermint/libs/json"
tmtime "github.com/tendermint/tendermint/libs/time"
"github.com/tendermint/tendermint/pkg/consensus"
)
func TestGenesisBad(t *testing.T) {
@@ -52,7 +53,7 @@ func TestGenesisBad(t *testing.T) {
}
for _, testCase := range testCases {
_, err := GenesisDocFromJSON(testCase)
_, err := consensus.GenesisDocFromJSON(testCase)
assert.Error(t, err, "expected error for empty genDoc json")
}
}
@@ -74,20 +75,20 @@ func TestGenesisGood(t *testing.T) {
"app_state":{"account_owner": "Bob"}
}`,
)
_, err := GenesisDocFromJSON(genDocBytes)
_, err := consensus.GenesisDocFromJSON(genDocBytes)
assert.NoError(t, err, "expected no error for good genDoc json")
pubkey := ed25519.GenPrivKey().PubKey()
// create a base gendoc from struct
baseGenDoc := &GenesisDoc{
baseGenDoc := &consensus.GenesisDoc{
ChainID: "abc",
Validators: []GenesisValidator{{pubkey.Address(), pubkey, 10, "myval"}},
Validators: []consensus.GenesisValidator{{pubkey.Address(), pubkey, 10, "myval"}},
}
genDocBytes, err = tmjson.Marshal(baseGenDoc)
assert.NoError(t, err, "error marshaling genDoc")
// test base gendoc and check consensus params were filled
genDoc, err := GenesisDocFromJSON(genDocBytes)
genDoc, err := consensus.GenesisDocFromJSON(genDocBytes)
assert.NoError(t, err, "expected no error for valid genDoc json")
assert.NotNil(t, genDoc.ConsensusParams, "expected consensus params to be filled in")
@@ -97,14 +98,14 @@ func TestGenesisGood(t *testing.T) {
// create json with consensus params filled
genDocBytes, err = tmjson.Marshal(genDoc)
assert.NoError(t, err, "error marshaling genDoc")
genDoc, err = GenesisDocFromJSON(genDocBytes)
genDoc, err = consensus.GenesisDocFromJSON(genDocBytes)
assert.NoError(t, err, "expected no error for valid genDoc json")
// test with invalid consensus params
genDoc.ConsensusParams.Block.MaxBytes = 0
genDocBytes, err = tmjson.Marshal(genDoc)
assert.NoError(t, err, "error marshaling genDoc")
_, err = GenesisDocFromJSON(genDocBytes)
_, err = consensus.GenesisDocFromJSON(genDocBytes)
assert.Error(t, err, "expected error for genDoc json with block size of 0")
// Genesis doc from raw json
@@ -116,7 +117,7 @@ func TestGenesisGood(t *testing.T) {
}
for _, tc := range missingValidatorsTestCases {
_, err := GenesisDocFromJSON(tc)
_, err := consensus.GenesisDocFromJSON(tc)
assert.NoError(t, err)
}
}
@@ -141,7 +142,7 @@ func TestGenesisSaveAs(t *testing.T) {
require.NoError(t, err)
// load
genDoc2, err := GenesisDocFromFile(tmpfile.Name())
genDoc2, err := consensus.GenesisDocFromFile(tmpfile.Name())
require.NoError(t, err)
assert.EqualValues(t, genDoc2, genDoc)
assert.Equal(t, genDoc2.Validators, genDoc.Validators)
@@ -152,14 +153,14 @@ func TestGenesisValidatorHash(t *testing.T) {
assert.NotEmpty(t, genDoc.ValidatorHash())
}
func randomGenesisDoc() *GenesisDoc {
func randomGenesisDoc() *consensus.GenesisDoc {
pubkey := ed25519.GenPrivKey().PubKey()
return &GenesisDoc{
return &consensus.GenesisDoc{
GenesisTime: tmtime.Now(),
ChainID: "abc",
InitialHeight: 1000,
Validators: []GenesisValidator{{pubkey.Address(), pubkey, 10, "myval"}},
ConsensusParams: DefaultConsensusParams(),
Validators: []consensus.GenesisValidator{{pubkey.Address(), pubkey, 10, "myval"}},
ConsensusParams: consensus.DefaultConsensusParams(),
AppHash: []byte{1, 2, 3},
}
}

View File

@@ -1,4 +1,4 @@
package types
package consensus
// UNSTABLE
var (

View File

@@ -1,4 +1,4 @@
package types
package consensus
import (
"errors"
@@ -10,19 +10,11 @@ import (
"github.com/tendermint/tendermint/crypto/sr25519"
"github.com/tendermint/tendermint/crypto/tmhash"
tmstrings "github.com/tendermint/tendermint/libs/strings"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
const (
// MaxBlockSizeBytes is the maximum permitted size of the blocks.
MaxBlockSizeBytes = 104857600 // 100MB
// BlockPartSizeBytes is the size of one block part.
BlockPartSizeBytes uint32 = 65536 // 64kB
// MaxBlockPartsCount is the maximum number of block parts.
MaxBlockPartsCount = (MaxBlockSizeBytes / BlockPartSizeBytes) + 1
ABCIPubKeyTypeEd25519 = ed25519.KeyType
ABCIPubKeyTypeSecp256k1 = secp256k1.KeyType
ABCIPubKeyTypeSr25519 = sr25519.KeyType
@@ -129,16 +121,16 @@ func (val *ValidatorParams) IsValidPubkeyType(pubkeyType string) bool {
// allowed limits, and returns an error if they are not.
func (params ConsensusParams) ValidateConsensusParams() error {
if params.Block.MaxBytes <= 0 {
return fmt.Errorf("block.MaxBytes must be greater than 0. Got %d",
return fmt.Errorf("meta.MaxBytes must be greater than 0. Got %d",
params.Block.MaxBytes)
}
if params.Block.MaxBytes > MaxBlockSizeBytes {
return fmt.Errorf("block.MaxBytes is too big. %d > %d",
params.Block.MaxBytes, MaxBlockSizeBytes)
if params.Block.MaxBytes > meta.MaxBlockSizeBytes {
return fmt.Errorf("meta.MaxBytes is too big. %d > %d",
params.Block.MaxBytes, meta.MaxBlockSizeBytes)
}
if params.Block.MaxGas < -1 {
return fmt.Errorf("block.MaxGas must be greater or equal to -1. Got %d",
return fmt.Errorf("meta.MaxGas must be greater or equal to -1. Got %d",
params.Block.MaxGas)
}

View File

@@ -1,4 +1,4 @@
package types
package consensus
import (
"bytes"

View File

@@ -1,4 +1,4 @@
package types
package consensus
import (
"bytes"

View File

@@ -1,4 +1,4 @@
package types
package consensus
import (
"errors"
@@ -8,6 +8,7 @@ import (
"github.com/tendermint/tendermint/internal/libs/protoio"
tmbytes "github.com/tendermint/tendermint/libs/bytes"
tmtime "github.com/tendermint/tendermint/libs/time"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
@@ -24,17 +25,17 @@ var (
// If POLRound >= 0, then BlockID corresponds to the block that is locked in POLRound.
type Proposal struct {
Type tmproto.SignedMsgType
Height int64 `json:"height"`
Round int32 `json:"round"` // rounds cannot exceed 2_147_483_647
POLRound int32 `json:"pol_round"` // -1 if null.
BlockID BlockID `json:"block_id"`
Timestamp time.Time `json:"timestamp"`
Signature []byte `json:"signature"`
Height int64 `json:"height"`
Round int32 `json:"round"` // rounds cannot exceed 2_147_483_647
POLRound int32 `json:"pol_round"` // -1 if null.
BlockID meta.BlockID `json:"block_id"`
Timestamp time.Time `json:"timestamp"`
Signature []byte `json:"signature"`
}
// NewProposal returns a new Proposal.
// If there is no POLRound, polRound should be -1.
func NewProposal(height int64, round int32, polRound int32, blockID BlockID) *Proposal {
func NewProposal(height int64, round int32, polRound int32, blockID meta.BlockID) *Proposal {
return &Proposal{
Type: tmproto.ProposalType,
Height: height,
@@ -73,8 +74,8 @@ func (p *Proposal) ValidateBasic() error {
return errors.New("signature is missing")
}
if len(p.Signature) > MaxSignatureSize {
return fmt.Errorf("signature is too big (max: %d)", MaxSignatureSize)
if len(p.Signature) > meta.MaxSignatureSize {
return fmt.Errorf("signature is too big (max: %d)", meta.MaxSignatureSize)
}
return nil
}
@@ -96,7 +97,7 @@ func (p *Proposal) String() string {
p.BlockID,
p.POLRound,
tmbytes.Fingerprint(p.Signature),
CanonicalTime(p.Timestamp))
meta.CanonicalTime(p.Timestamp))
}
// ProposalSignBytes returns the proto-encoding of the canonicalized Proposal,
@@ -144,7 +145,7 @@ func ProposalFromProto(pp *tmproto.Proposal) (*Proposal, error) {
p := new(Proposal)
blockID, err := BlockIDFromProto(&pp.BlockID)
blockID, err := meta.BlockIDFromProto(&pp.BlockID)
if err != nil {
return nil, err
}
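
The signature checks that `Proposal.ValidateBasic` now delegates to `meta.MaxSignatureSize` can be sketched standalone. The concrete size bound below is an assumption for illustration; only the check structure and error messages come from the hunk above.

```go
package main

import (
	"errors"
	"fmt"
)

// maxSignatureSize stands in for meta.MaxSignatureSize referenced above;
// the value here is an assumption for the sketch, not the real constant.
const maxSignatureSize = 64

// checkProposalSignature sketches the two signature checks performed by
// Proposal.ValidateBasic in the hunk above.
func checkProposalSignature(sig []byte) error {
	if len(sig) == 0 {
		return errors.New("signature is missing")
	}
	if len(sig) > maxSignatureSize {
		return fmt.Errorf("signature is too big (max: %d)", maxSignatureSize)
	}
	return nil
}

func main() {
	fmt.Println(checkProposalSignature(make([]byte, 64)) == nil) // true: fits the bound
	fmt.Println(checkProposalSignature(nil) == nil)              // false: missing signature
}
```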

View File

@@ -1,4 +1,4 @@
package types
package consensus_test
import (
"context"
@@ -13,24 +13,26 @@ import (
"github.com/tendermint/tendermint/crypto/tmhash"
"github.com/tendermint/tendermint/internal/libs/protoio"
tmrand "github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
var (
testProposal *Proposal
testProposal *consensus.Proposal
pbp *tmproto.Proposal
)
func init() {
var stamp, err = time.Parse(TimeFormat, "2018-02-11T07:09:22.765Z")
var stamp, err = time.Parse(meta.TimeFormat, "2018-02-11T07:09:22.765Z")
if err != nil {
panic(err)
}
testProposal = &Proposal{
testProposal = &consensus.Proposal{
Height: 12345,
Round: 23456,
BlockID: BlockID{Hash: []byte("--June_15_2020_amino_was_removed"),
PartSetHeader: PartSetHeader{Total: 111, Hash: []byte("--June_15_2020_amino_was_removed")}},
BlockID: meta.BlockID{Hash: []byte("--June_15_2020_amino_was_removed"),
PartSetHeader: meta.PartSetHeader{Total: 111, Hash: []byte("--June_15_2020_amino_was_removed")}},
POLRound: -1,
Timestamp: stamp,
}
@@ -39,8 +41,8 @@ func init() {
func TestProposalSignable(t *testing.T) {
chainID := "test_chain_id"
signBytes := ProposalSignBytes(chainID, pbp)
pb := CanonicalizeProposal(chainID, pbp)
signBytes := consensus.ProposalSignBytes(chainID, pbp)
pb := consensus.CanonicalizeProposal(chainID, pbp)
expected, err := protoio.MarshalDelimited(&pb)
require.NoError(t, err)
@@ -56,15 +58,15 @@ func TestProposalString(t *testing.T) {
}
func TestProposalVerifySignature(t *testing.T) {
privVal := NewMockPV()
privVal := consensus.NewMockPV()
pubKey, err := privVal.GetPubKey(context.Background())
require.NoError(t, err)
prop := NewProposal(
prop := consensus.NewProposal(
4, 2, 2,
BlockID{tmrand.Bytes(tmhash.Size), PartSetHeader{777, tmrand.Bytes(tmhash.Size)}})
meta.BlockID{tmrand.Bytes(tmhash.Size), meta.PartSetHeader{777, tmrand.Bytes(tmhash.Size)}})
p := prop.ToProto()
signBytes := ProposalSignBytes("test_chain_id", p)
signBytes := consensus.ProposalSignBytes("test_chain_id", p)
// sign it
err = privVal.SignProposal(context.Background(), "test_chain_id", p)
@@ -85,11 +87,11 @@ func TestProposalVerifySignature(t *testing.T) {
err = proto.Unmarshal(bs, newProp)
require.NoError(t, err)
np, err := ProposalFromProto(newProp)
np, err := consensus.ProposalFromProto(newProp)
require.NoError(t, err)
// verify the transmitted proposal
newSignBytes := ProposalSignBytes("test_chain_id", pb)
newSignBytes := consensus.ProposalSignBytes("test_chain_id", pb)
require.Equal(t, string(signBytes), string(newSignBytes))
valid = pubKey.VerifySignature(newSignBytes, np.Signature)
require.True(t, valid)
@@ -97,12 +99,12 @@ func TestProposalVerifySignature(t *testing.T) {
func BenchmarkProposalWriteSignBytes(b *testing.B) {
for i := 0; i < b.N; i++ {
ProposalSignBytes("test_chain_id", pbp)
consensus.ProposalSignBytes("test_chain_id", pbp)
}
}
func BenchmarkProposalSign(b *testing.B) {
privVal := NewMockPV()
privVal := consensus.NewMockPV()
for i := 0; i < b.N; i++ {
err := privVal.SignProposal(context.Background(), "test_chain_id", pbp)
if err != nil {
@@ -112,46 +114,45 @@ func BenchmarkProposalSign(b *testing.B) {
}
func BenchmarkProposalVerifySignature(b *testing.B) {
privVal := NewMockPV()
privVal := consensus.NewMockPV()
err := privVal.SignProposal(context.Background(), "test_chain_id", pbp)
require.NoError(b, err)
pubKey, err := privVal.GetPubKey(context.Background())
require.NoError(b, err)
for i := 0; i < b.N; i++ {
pubKey.VerifySignature(ProposalSignBytes("test_chain_id", pbp), testProposal.Signature)
pubKey.VerifySignature(consensus.ProposalSignBytes("test_chain_id", pbp), testProposal.Signature)
}
}
func TestProposalValidateBasic(t *testing.T) {
privVal := NewMockPV()
privVal := consensus.NewMockPV()
testCases := []struct {
testName string
malleateProposal func(*Proposal)
malleateProposal func(*consensus.Proposal)
expectErr bool
}{
{"Good Proposal", func(p *Proposal) {}, false},
{"Invalid Type", func(p *Proposal) { p.Type = tmproto.PrecommitType }, true},
{"Invalid Height", func(p *Proposal) { p.Height = -1 }, true},
{"Invalid Round", func(p *Proposal) { p.Round = -1 }, true},
{"Invalid POLRound", func(p *Proposal) { p.POLRound = -2 }, true},
{"Invalid BlockId", func(p *Proposal) {
p.BlockID = BlockID{[]byte{1, 2, 3}, PartSetHeader{111, []byte("blockparts")}}
{"Good Proposal", func(p *consensus.Proposal) {}, false},
{"Invalid Type", func(p *consensus.Proposal) { p.Type = tmproto.PrecommitType }, true},
{"Invalid Height", func(p *consensus.Proposal) { p.Height = -1 }, true},
{"Invalid Round", func(p *consensus.Proposal) { p.Round = -1 }, true},
{"Invalid POLRound", func(p *consensus.Proposal) { p.POLRound = -2 }, true},
{"Invalid BlockId", func(p *consensus.Proposal) {
p.BlockID = meta.BlockID{[]byte{1, 2, 3}, meta.PartSetHeader{111, []byte("blockparts")}}
}, true},
{"Invalid Signature", func(p *Proposal) {
{"Invalid Signature", func(p *consensus.Proposal) {
p.Signature = make([]byte, 0)
}, true},
{"Too big Signature", func(p *Proposal) {
p.Signature = make([]byte, MaxSignatureSize+1)
{"Too big Signature", func(p *consensus.Proposal) {
p.Signature = make([]byte, meta.MaxSignatureSize+1)
}, true},
}
blockID := makeBlockID(tmhash.Sum([]byte("blockhash")), math.MaxInt32, tmhash.Sum([]byte("partshash")))
blockID := meta.BlockID{tmhash.Sum([]byte("blockhash")), meta.PartSetHeader{math.MaxInt32, tmhash.Sum([]byte("partshash"))}}
for _, tc := range testCases {
tc := tc
t.Run(tc.testName, func(t *testing.T) {
prop := NewProposal(
prop := consensus.NewProposal(
4, 2, 2,
blockID)
p := prop.ToProto()
@@ -165,24 +166,24 @@ func TestProposalValidateBasic(t *testing.T) {
}
func TestProposalProtoBuf(t *testing.T) {
proposal := NewProposal(1, 2, 3, makeBlockID([]byte("hash"), 2, []byte("part_set_hash")))
proposal := consensus.NewProposal(1, 2, 3, meta.BlockID{[]byte("hash"), meta.PartSetHeader{2, []byte("part_set_hash")}})
proposal.Signature = []byte("sig")
proposal2 := NewProposal(1, 2, 3, BlockID{})
proposal2 := consensus.NewProposal(1, 2, 3, meta.BlockID{})
testCases := []struct {
msg string
p1 *Proposal
p1 *consensus.Proposal
expPass bool
}{
{"success", proposal, true},
{"success", proposal2, false}, // blcokID cannot be empty
{"empty proposal failure validatebasic", &Proposal{}, false},
{"empty proposal failure validatebasic", &consensus.Proposal{}, false},
{"nil proposal", nil, false},
}
for _, tc := range testCases {
protoProposal := tc.p1.ToProto()
p, err := ProposalFromProto(protoProposal)
p, err := consensus.ProposalFromProto(protoProposal)
if tc.expPass {
require.NoError(t, err)
require.Equal(t, tc.p1, p, tc.msg)

View File

@@ -1,4 +1,4 @@
package types
package consensus
import (
abci "github.com/tendermint/tendermint/abci/types"

View File

@@ -1,4 +1,4 @@
package types
package consensus_test
import (
"testing"
@@ -10,11 +10,12 @@ import (
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
cryptoenc "github.com/tendermint/tendermint/crypto/encoding"
"github.com/tendermint/tendermint/pkg/consensus"
)
func TestABCIPubKey(t *testing.T) {
pkEd := ed25519.GenPrivKey().PubKey()
err := testABCIPubKey(t, pkEd, ABCIPubKeyTypeEd25519)
err := testABCIPubKey(t, pkEd, consensus.ABCIPubKeyTypeEd25519)
assert.NoError(t, err)
}
@@ -31,23 +32,23 @@ func TestABCIValidators(t *testing.T) {
pkEd := ed25519.GenPrivKey().PubKey()
// correct validator
tmValExpected := NewValidator(pkEd, 10)
tmValExpected := consensus.NewValidator(pkEd, 10)
tmVal := NewValidator(pkEd, 10)
tmVal := consensus.NewValidator(pkEd, 10)
abciVal := TM2PB.ValidatorUpdate(tmVal)
tmVals, err := PB2TM.ValidatorUpdates([]abci.ValidatorUpdate{abciVal})
abciVal := consensus.TM2PB.ValidatorUpdate(tmVal)
tmVals, err := consensus.PB2TM.ValidatorUpdates([]abci.ValidatorUpdate{abciVal})
assert.Nil(t, err)
assert.Equal(t, tmValExpected, tmVals[0])
abciVals := TM2PB.ValidatorUpdates(NewValidatorSet(tmVals))
abciVals := consensus.TM2PB.ValidatorUpdates(consensus.NewValidatorSet(tmVals))
assert.Equal(t, []abci.ValidatorUpdate{abciVal}, abciVals)
// val with address
tmVal.Address = pkEd.Address()
abciVal = TM2PB.ValidatorUpdate(tmVal)
tmVals, err = PB2TM.ValidatorUpdates([]abci.ValidatorUpdate{abciVal})
abciVal = consensus.TM2PB.ValidatorUpdate(tmVal)
tmVals, err = consensus.PB2TM.ValidatorUpdates([]abci.ValidatorUpdate{abciVal})
assert.Nil(t, err)
assert.Equal(t, tmValExpected, tmVals[0])
}
@@ -55,7 +56,7 @@ func TestABCIValidators(t *testing.T) {
func TestABCIValidatorWithoutPubKey(t *testing.T) {
pkEd := ed25519.GenPrivKey().PubKey()
abciVal := TM2PB.Validator(NewValidator(pkEd, 10))
abciVal := consensus.TM2PB.Validator(consensus.NewValidator(pkEd, 10))
// pubkey must be nil
tmValExpected := abci.Validator{

View File

@@ -1,17 +1,17 @@
package types
package consensus
import (
"errors"
"fmt"
"github.com/tendermint/tendermint/crypto/batch"
"github.com/tendermint/tendermint/crypto/tmhash"
tmmath "github.com/tendermint/tendermint/libs/math"
"github.com/tendermint/tendermint/pkg/meta"
)
const batchVerifyThreshold = 2
func shouldBatchVerify(vals *ValidatorSet, commit *Commit) bool {
func shouldBatchVerify(vals *ValidatorSet, commit *meta.Commit) bool {
return len(commit.Signatures) >= batchVerifyThreshold && batch.SupportsBatchVerifier(vals.GetProposer().PubKey)
}
@@ -22,8 +22,8 @@ func shouldBatchVerify(vals *ValidatorSet, commit *Commit) bool {
// application that depends on the LastCommitInfo sent in BeginBlock, which
// includes which validators signed. For instance, Gaia incentivizes proposers
// with a bonus for including more than +2/3 of the signatures.
func VerifyCommit(chainID string, vals *ValidatorSet, blockID BlockID,
height int64, commit *Commit) error {
func VerifyCommit(chainID string, vals *ValidatorSet, blockID meta.BlockID,
height int64, commit *meta.Commit) error {
// run a basic validation of the arguments
if err := verifyBasicValsAndCommit(vals, commit, height, blockID); err != nil {
return err
@@ -34,10 +34,10 @@ func VerifyCommit(chainID string, vals *ValidatorSet, blockID BlockID,
votingPowerNeeded := vals.TotalVotingPower() * 2 / 3
// ignore all absent signatures
ignore := func(c CommitSig) bool { return c.Absent() }
ignore := func(c meta.CommitSig) bool { return c.Absent() }
// only count the signatures that are for the block
count := func(c CommitSig) bool { return c.ForBlock() }
count := func(c meta.CommitSig) bool { return c.ForBlock() }
// attempt to batch verify
if shouldBatchVerify(vals, commit) {
@@ -56,8 +56,8 @@ func VerifyCommit(chainID string, vals *ValidatorSet, blockID BlockID,
//
// This method is primarily used by the light client and does not check all the
// signatures.
func VerifyCommitLight(chainID string, vals *ValidatorSet, blockID BlockID,
height int64, commit *Commit) error {
func VerifyCommitLight(chainID string, vals *ValidatorSet, blockID meta.BlockID,
height int64, commit *meta.Commit) error {
// run a basic validation of the arguments
if err := verifyBasicValsAndCommit(vals, commit, height, blockID); err != nil {
return err
@@ -67,10 +67,10 @@ func VerifyCommitLight(chainID string, vals *ValidatorSet, blockID BlockID,
votingPowerNeeded := vals.TotalVotingPower() * 2 / 3
// ignore all commit signatures that are not for the block
ignore := func(c CommitSig) bool { return !c.ForBlock() }
ignore := func(c meta.CommitSig) bool { return !c.ForBlock() }
// count all the remaining signatures
count := func(c CommitSig) bool { return true }
count := func(c meta.CommitSig) bool { return true }
// attempt to batch verify
if shouldBatchVerify(vals, commit) {
@@ -91,7 +91,7 @@ func VerifyCommitLight(chainID string, vals *ValidatorSet, blockID BlockID,
//
// This method is primarily used by the light client and does not check all the
// signatures.
func VerifyCommitLightTrusting(chainID string, vals *ValidatorSet, commit *Commit, trustLevel tmmath.Fraction) error {
func VerifyCommitLightTrusting(chainID string, vals *ValidatorSet, commit *meta.Commit, trustLevel tmmath.Fraction) error {
// sanity checks
if vals == nil {
return errors.New("nil validator set")
@@ -104,17 +104,17 @@ func VerifyCommitLightTrusting(chainID string, vals *ValidatorSet, commit *Commi
}
// safely calculate voting power needed.
totalVotingPowerMulByNumerator, overflow := safeMul(vals.TotalVotingPower(), int64(trustLevel.Numerator))
totalVotingPowerMulByNumerator, overflow := tmmath.SafeMul(vals.TotalVotingPower(), int64(trustLevel.Numerator))
if overflow {
return errors.New("int64 overflow while calculating voting power needed. please provide smaller trustLevel numerator")
}
votingPowerNeeded := totalVotingPowerMulByNumerator / int64(trustLevel.Denominator)
// ignore all commit signatures that are not for the block
ignore := func(c CommitSig) bool { return !c.ForBlock() }
ignore := func(c meta.CommitSig) bool { return !c.ForBlock() }
// count all the remaining signatures
count := func(c CommitSig) bool { return true }
count := func(c meta.CommitSig) bool { return true }
// attempt to batch verify commit. As the validator set doesn't necessarily
// correspond with the validator set that signed the block we need to look
@@ -129,18 +129,6 @@ func VerifyCommitLightTrusting(chainID string, vals *ValidatorSet, commit *Commi
ignore, count, false, false)
}
// ValidateHash returns an error if the hash is not empty, but its
// size != tmhash.Size.
func ValidateHash(h []byte) error {
if len(h) > 0 && len(h) != tmhash.Size {
return fmt.Errorf("expected size to be %d bytes, got %d bytes",
tmhash.Size,
len(h),
)
}
return nil
}
// Batch verification
// verifyCommitBatch batch verifies commits. This routine is equivalent
@@ -152,10 +140,10 @@ func ValidateHash(h []byte) error {
func verifyCommitBatch(
chainID string,
vals *ValidatorSet,
commit *Commit,
commit *meta.Commit,
votingPowerNeeded int64,
ignoreSig func(CommitSig) bool,
countSig func(CommitSig) bool,
ignoreSig func(meta.CommitSig) bool,
countSig func(meta.CommitSig) bool,
countAllSignatures bool,
lookUpByIndex bool,
) error {
@@ -203,7 +191,7 @@ func verifyCommitBatch(
}
// Validate signature.
voteSignBytes := commit.VoteSignBytes(chainID, int32(idx))
voteSignBytes := VoteSignBytesFromCommit(commit, chainID, int32(idx))
// add the key, sig and message to the verifier
if err := bv.Add(val.PubKey, voteSignBytes, commitSig.Signature); err != nil {
@@ -265,10 +253,10 @@ func verifyCommitBatch(
func verifyCommitSingle(
chainID string,
vals *ValidatorSet,
commit *Commit,
commit *meta.Commit,
votingPowerNeeded int64,
ignoreSig func(CommitSig) bool,
countSig func(CommitSig) bool,
ignoreSig func(meta.CommitSig) bool,
countSig func(meta.CommitSig) bool,
countAllSignatures bool,
lookUpByIndex bool,
) error {
@@ -306,7 +294,7 @@ func verifyCommitSingle(
seenVals[valIdx] = idx
}
voteSignBytes = commit.VoteSignBytes(chainID, int32(idx))
voteSignBytes = VoteSignBytesFromCommit(commit, chainID, int32(idx))
if !val.PubKey.VerifySignature(voteSignBytes, commitSig.Signature) {
return fmt.Errorf("wrong signature (#%d): %X", idx, commitSig.Signature)
@@ -331,7 +319,7 @@ func verifyCommitSingle(
return nil
}
func verifyBasicValsAndCommit(vals *ValidatorSet, commit *Commit, height int64, blockID BlockID) error {
func verifyBasicValsAndCommit(vals *ValidatorSet, commit *meta.Commit, height int64, blockID meta.BlockID) error {
if vals == nil {
return errors.New("nil validator set")
}
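
The `ignore`/`count` closures threaded through the verify functions above implement one tallying loop parameterized per verification mode. A simplified standalone sketch of that pattern (the `commitSig` type and uniform voting power here are assumptions for illustration):

```go
package main

import "fmt"

// commitSig is a simplified stand-in for meta.CommitSig.
type commitSig struct{ forBlock, absent bool }

// talliedPower mimics the shared tallying loop: skip signatures the mode
// ignores, and add voting power for the ones it counts.
func talliedPower(sigs []commitSig, power int64,
	ignore, count func(commitSig) bool) int64 {
	var tallied int64
	for _, s := range sigs {
		if ignore(s) {
			continue
		}
		if count(s) {
			tallied += power // uniform voting power for the sketch
		}
	}
	return tallied
}

func main() {
	sigs := []commitSig{{forBlock: true}, {absent: true}, {forBlock: false}}

	// VerifyCommit mode: ignore absent signatures, count only for-block votes.
	full := talliedPower(sigs, 10,
		func(c commitSig) bool { return c.absent },
		func(c commitSig) bool { return c.forBlock })

	// VerifyCommitLight mode: ignore everything not for the block, count the rest.
	light := talliedPower(sigs, 10,
		func(c commitSig) bool { return !c.forBlock },
		func(c commitSig) bool { return true })

	fmt.Println(full, light) // 10 10
}
```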

View File

@@ -1,4 +1,4 @@
package types
package consensus_test
import (
"context"
@@ -8,7 +8,10 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
test "github.com/tendermint/tendermint/internal/test/factory"
tmmath "github.com/tendermint/tendermint/libs/math"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
@@ -19,7 +22,7 @@ func TestValidatorSet_VerifyCommit_All(t *testing.T) {
round = int32(0)
height = int64(100)
blockID = makeBlockID([]byte("blockhash"), 1000, []byte("partshash"))
blockID = test.MakeBlockIDWithHash([]byte("blockhash"))
chainID = "Lalande21185"
trustLevel = tmmath.Fraction{Numerator: 2, Denominator: 3}
)
@@ -29,7 +32,7 @@ func TestValidatorSet_VerifyCommit_All(t *testing.T) {
// vote chainID
chainID string
// vote blockID
blockID BlockID
blockID meta.BlockID
valSize int
// height of the commit
@@ -46,7 +49,7 @@ func TestValidatorSet_VerifyCommit_All(t *testing.T) {
{"good (single verification)", chainID, blockID, 1, height, 1, 0, 0, false},
{"wrong signature (#0)", "EpsilonEridani", blockID, 2, height, 2, 0, 0, true},
{"wrong block ID", chainID, makeBlockIDRandom(), 2, height, 2, 0, 0, true},
{"wrong block ID", chainID, test.MakeBlockID(), 2, height, 2, 0, 0, true},
{"wrong height", chainID, blockID, 1, height - 1, 1, 0, 0, true},
{"wrong set size: 4 vs 3", chainID, blockID, 4, height, 3, 0, 0, true},
@@ -60,20 +63,20 @@ func TestValidatorSet_VerifyCommit_All(t *testing.T) {
for _, tc := range testCases {
tc := tc
t.Run(tc.description, func(t *testing.T) {
_, valSet, vals := randVoteSet(tc.height, round, tmproto.PrecommitType, tc.valSize, 10)
_, valSet, vals := test.RandVoteSet(tc.height, round, tmproto.PrecommitType, tc.valSize, 10)
totalVotes := tc.blockVotes + tc.absentVotes + tc.nilVotes
sigs := make([]CommitSig, totalVotes)
sigs := make([]meta.CommitSig, totalVotes)
vi := 0
// add absent sigs first
for i := 0; i < tc.absentVotes; i++ {
sigs[vi] = NewCommitSigAbsent()
sigs[vi] = meta.NewCommitSigAbsent()
vi++
}
for i := 0; i < tc.blockVotes+tc.nilVotes; i++ {
pubKey, err := vals[vi%len(vals)].GetPubKey(context.Background())
require.NoError(t, err)
vote := &Vote{
vote := &consensus.Vote{
ValidatorAddress: pubKey.Address(),
ValidatorIndex: int32(vi),
Height: tc.height,
@@ -83,7 +86,7 @@ func TestValidatorSet_VerifyCommit_All(t *testing.T) {
Timestamp: time.Now(),
}
if i >= tc.blockVotes {
vote.BlockID = BlockID{}
vote.BlockID = meta.BlockID{}
}
v := vote.ToProto()
@@ -95,7 +98,7 @@ func TestValidatorSet_VerifyCommit_All(t *testing.T) {
vi++
}
commit := NewCommit(tc.height, round, tc.blockID, sigs)
commit := meta.NewCommit(tc.height, round, tc.blockID, sigs)
err := valSet.VerifyCommit(chainID, blockID, height, commit)
if tc.expErr {
@@ -135,11 +138,11 @@ func TestValidatorSet_VerifyCommit_CheckAllSignatures(t *testing.T) {
var (
chainID = "test_chain_id"
h = int64(3)
blockID = makeBlockIDRandom()
blockID = test.MakeBlockID()
)
voteSet, valSet, vals := randVoteSet(h, 0, tmproto.PrecommitType, 4, 10)
commit, err := makeCommit(blockID, h, 0, voteSet, vals, time.Now())
voteSet, valSet, vals := test.RandVoteSet(h, 0, tmproto.PrecommitType, 4, 10)
commit, err := test.MakeCommit(blockID, h, 0, voteSet, vals, time.Now())
require.NoError(t, err)
require.NoError(t, valSet.VerifyCommit(chainID, blockID, h, commit))
@@ -161,11 +164,11 @@ func TestValidatorSet_VerifyCommitLight_ReturnsAsSoonAsMajorityOfVotingPowerSign
var (
chainID = "test_chain_id"
h = int64(3)
blockID = makeBlockIDRandom()
blockID = test.MakeBlockID()
)
voteSet, valSet, vals := randVoteSet(h, 0, tmproto.PrecommitType, 4, 10)
commit, err := makeCommit(blockID, h, 0, voteSet, vals, time.Now())
voteSet, valSet, vals := test.RandVoteSet(h, 0, tmproto.PrecommitType, 4, 10)
commit, err := test.MakeCommit(blockID, h, 0, voteSet, vals, time.Now())
require.NoError(t, err)
require.NoError(t, valSet.VerifyCommit(chainID, blockID, h, commit))
@@ -185,11 +188,11 @@ func TestValidatorSet_VerifyCommitLightTrusting_ReturnsAsSoonAsTrustLevelOfVotin
var (
chainID = "test_chain_id"
h = int64(3)
blockID = makeBlockIDRandom()
blockID = test.MakeBlockID()
)
voteSet, valSet, vals := randVoteSet(h, 0, tmproto.PrecommitType, 4, 10)
commit, err := makeCommit(blockID, h, 0, voteSet, vals, time.Now())
voteSet, valSet, vals := test.RandVoteSet(h, 0, tmproto.PrecommitType, 4, 10)
commit, err := test.MakeCommit(blockID, h, 0, voteSet, vals, time.Now())
require.NoError(t, err)
require.NoError(t, valSet.VerifyCommit(chainID, blockID, h, commit))
@@ -207,15 +210,15 @@ func TestValidatorSet_VerifyCommitLightTrusting_ReturnsAsSoonAsTrustLevelOfVotin
func TestValidatorSet_VerifyCommitLightTrusting(t *testing.T) {
var (
blockID = makeBlockIDRandom()
voteSet, originalValset, vals = randVoteSet(1, 1, tmproto.PrecommitType, 6, 1)
commit, err = makeCommit(blockID, 1, 1, voteSet, vals, time.Now())
newValSet, _ = randValidatorPrivValSet(2, 1)
blockID = test.MakeBlockID()
voteSet, originalValset, vals = test.RandVoteSet(1, 1, tmproto.PrecommitType, 6, 1)
commit, err = test.MakeCommit(blockID, 1, 1, voteSet, vals, time.Now())
newValSet, _ = test.RandValidatorPrivValSet(2, 1)
)
require.NoError(t, err)
testCases := []struct {
valSet *ValidatorSet
valSet *consensus.ValidatorSet
err bool
}{
// good
@@ -230,7 +233,7 @@ func TestValidatorSet_VerifyCommitLightTrusting(t *testing.T) {
},
// good - first two are different but the rest of the same -> >1/3
2: {
valSet: NewValidatorSet(append(newValSet.Validators, originalValset.Validators...)),
valSet: consensus.NewValidatorSet(append(newValSet.Validators, originalValset.Validators...)),
err: false,
},
}
@@ -248,9 +251,9 @@ func TestValidatorSet_VerifyCommitLightTrusting(t *testing.T) {
func TestValidatorSet_VerifyCommitLightTrustingErrorsOnOverflow(t *testing.T) {
var (
blockID = makeBlockIDRandom()
voteSet, valSet, vals = randVoteSet(1, 1, tmproto.PrecommitType, 1, MaxTotalVotingPower)
commit, err = makeCommit(blockID, 1, 1, voteSet, vals, time.Now())
blockID = test.MakeBlockID()
voteSet, valSet, vals = test.RandVoteSet(1, 1, tmproto.PrecommitType, 1, consensus.MaxTotalVotingPower)
commit, err = test.MakeCommit(blockID, 1, 1, voteSet, vals, time.Now())
)
require.NoError(t, err)

View File

@@ -1,4 +1,4 @@
package types
package consensus
import (
"bytes"

View File

@@ -1,4 +1,4 @@
package types
package consensus
import (
"bytes"
@@ -11,6 +11,7 @@ import (
"github.com/tendermint/tendermint/crypto/merkle"
tmmath "github.com/tendermint/tendermint/libs/math"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
@@ -166,13 +167,13 @@ func (vals *ValidatorSet) RescalePriorities(diffMax int64) {
func (vals *ValidatorSet) incrementProposerPriority() *Validator {
for _, val := range vals.Validators {
// Check for overflow for sum.
newPrio := safeAddClip(val.ProposerPriority, val.VotingPower)
newPrio := tmmath.SafeAddClip(val.ProposerPriority, val.VotingPower)
val.ProposerPriority = newPrio
}
// Decrement the validator with most ProposerPriority.
mostest := vals.getValWithMostPriority()
// Mind the underflow.
mostest.ProposerPriority = safeSubClip(mostest.ProposerPriority, vals.TotalVotingPower())
mostest.ProposerPriority = tmmath.SafeSubClip(mostest.ProposerPriority, vals.TotalVotingPower())
return mostest
}
@@ -229,7 +230,7 @@ func (vals *ValidatorSet) shiftByAvgProposerPriority() {
}
avgProposerPriority := vals.computeAvgProposerPriority()
for _, val := range vals.Validators {
val.ProposerPriority = safeSubClip(val.ProposerPriority, avgProposerPriority)
val.ProposerPriority = tmmath.SafeSubClip(val.ProposerPriority, avgProposerPriority)
}
}
@@ -299,7 +300,7 @@ func (vals *ValidatorSet) updateTotalVotingPower() {
sum := int64(0)
for _, val := range vals.Validators {
// mind overflow
sum = safeAddClip(sum, val.VotingPower)
sum = tmmath.SafeAddClip(sum, val.VotingPower)
if sum > MaxTotalVotingPower {
panic(fmt.Sprintf(
"Total voting power should be guarded to not exceed %v; got: %v",
@@ -654,22 +655,22 @@ func (vals *ValidatorSet) UpdateWithChangeSet(changes []*Validator) error {
// VerifyCommit verifies +2/3 of the set had signed the given commit and all
// other signatures are valid
func (vals *ValidatorSet) VerifyCommit(chainID string, blockID BlockID,
height int64, commit *Commit) error {
func (vals *ValidatorSet) VerifyCommit(chainID string, blockID meta.BlockID,
height int64, commit *meta.Commit) error {
return VerifyCommit(chainID, vals, blockID, height, commit)
}
// LIGHT CLIENT VERIFICATION METHODS
// VerifyCommitLight verifies +2/3 of the set had signed the given commit.
func (vals *ValidatorSet) VerifyCommitLight(chainID string, blockID BlockID,
height int64, commit *Commit) error {
func (vals *ValidatorSet) VerifyCommitLight(chainID string, blockID meta.BlockID,
height int64, commit *meta.Commit) error {
return VerifyCommitLight(chainID, vals, blockID, height, commit)
}
// VerifyCommitLightTrusting verifies that trustLevel of the validator set signed
// this commit.
func (vals *ValidatorSet) VerifyCommitLightTrusting(chainID string, commit *Commit, trustLevel tmmath.Fraction) error {
func (vals *ValidatorSet) VerifyCommitLightTrusting(chainID string, commit *meta.Commit, trustLevel tmmath.Fraction) error {
return VerifyCommitLightTrusting(chainID, vals, commit, trustLevel)
}
@@ -865,69 +866,3 @@ func ValidatorSetFromExistingValidators(valz []*Validator) (*ValidatorSet, error
sort.Sort(ValidatorsByVotingPower(vals.Validators))
return vals, nil
}
//----------------------------------------
// safe addition/subtraction/multiplication
func safeAdd(a, b int64) (int64, bool) {
if b > 0 && a > math.MaxInt64-b {
return -1, true
} else if b < 0 && a < math.MinInt64-b {
return -1, true
}
return a + b, false
}
func safeSub(a, b int64) (int64, bool) {
if b > 0 && a < math.MinInt64+b {
return -1, true
} else if b < 0 && a > math.MaxInt64+b {
return -1, true
}
return a - b, false
}
func safeAddClip(a, b int64) int64 {
c, overflow := safeAdd(a, b)
if overflow {
if b < 0 {
return math.MinInt64
}
return math.MaxInt64
}
return c
}
func safeSubClip(a, b int64) int64 {
c, overflow := safeSub(a, b)
if overflow {
if b > 0 {
return math.MinInt64
}
return math.MaxInt64
}
return c
}
func safeMul(a, b int64) (int64, bool) {
if a == 0 || b == 0 {
return 0, false
}
absOfB := b
if b < 0 {
absOfB = -b
}
absOfA := a
if a < 0 {
absOfA = -a
}
if absOfA > math.MaxInt64/absOfB {
return 0, true
}
return a * b, false
}

View File

@@ -0,0 +1,159 @@
package consensus_test
import (
"fmt"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/crypto/ed25519"
test "github.com/tendermint/tendermint/internal/test/factory"
tmmath "github.com/tendermint/tendermint/libs/math"
"github.com/tendermint/tendermint/pkg/consensus"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
//-------------------------------------
// Benchmark tests
//
func BenchmarkUpdates(b *testing.B) {
const (
n = 100
m = 2000
)
// Init with n validators
vs := make([]*consensus.Validator, n)
for j := 0; j < n; j++ {
vs[j] = consensus.NewValidator(ed25519.GenPrivKey().PubKey(), 100)
}
valSet := consensus.NewValidatorSet(vs)
// Make m new validators
newValList := make([]*consensus.Validator, m)
for j := 0; j < m; j++ {
newValList[j] = consensus.NewValidator(ed25519.GenPrivKey().PubKey(), 1000)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
// Add m validators to valSetCopy
valSetCopy := valSet.Copy()
assert.NoError(b, valSetCopy.UpdateWithChangeSet(newValList))
}
}
func BenchmarkValidatorSet_VerifyCommit_Ed25519(b *testing.B) {
for _, n := range []int{1, 8, 64, 1024} {
n := n
var (
chainID = "test_chain_id"
h = int64(3)
blockID = test.MakeBlockID()
)
b.Run(fmt.Sprintf("valset size %d", n), func(b *testing.B) {
b.ReportAllocs()
// generate n validators
voteSet, valSet, vals := test.RandVoteSet(h, 0, tmproto.PrecommitType, n, int64(n*5))
// create a commit with n validators
commit, err := test.MakeCommit(blockID, h, 0, voteSet, vals, time.Now())
require.NoError(b, err)
for i := 0; i < b.N/n; i++ {
err = valSet.VerifyCommit(chainID, blockID, h, commit)
assert.NoError(b, err)
}
})
}
}
func BenchmarkValidatorSet_VerifyCommitLight_Ed25519(b *testing.B) {
for _, n := range []int{1, 8, 64, 1024} {
n := n
var (
chainID = "test_chain_id"
h = int64(3)
blockID = test.MakeBlockID()
)
b.Run(fmt.Sprintf("valset size %d", n), func(b *testing.B) {
b.ReportAllocs()
// generate n validators
voteSet, valSet, vals := test.RandVoteSet(h, 0, tmproto.PrecommitType, n, int64(n*5))
// create a commit with n validators
commit, err := test.MakeCommit(blockID, h, 0, voteSet, vals, time.Now())
require.NoError(b, err)
for i := 0; i < b.N/n; i++ {
err = valSet.VerifyCommitLight(chainID, blockID, h, commit)
assert.NoError(b, err)
}
})
}
}
func BenchmarkValidatorSet_VerifyCommitLightTrusting_Ed25519(b *testing.B) {
for _, n := range []int{1, 8, 64, 1024} {
n := n
var (
chainID = "test_chain_id"
h = int64(3)
blockID = test.MakeBlockID()
)
b.Run(fmt.Sprintf("valset size %d", n), func(b *testing.B) {
b.ReportAllocs()
// generate n validators
voteSet, valSet, vals := test.RandVoteSet(h, 0, tmproto.PrecommitType, n, int64(n*5))
// create a commit with n validators
commit, err := test.MakeCommit(blockID, h, 0, voteSet, vals, time.Now())
require.NoError(b, err)
for i := 0; i < b.N/n; i++ {
err = valSet.VerifyCommitLightTrusting(chainID, commit, tmmath.Fraction{Numerator: 1, Denominator: 3})
assert.NoError(b, err)
}
})
}
}
func TestValidatorSetProtoBuf(t *testing.T) {
valset, _ := test.RandValidatorPrivValSet(10, 100)
valset2, _ := test.RandValidatorPrivValSet(10, 100)
valset2.Validators[0] = &consensus.Validator{}
valset3, _ := test.RandValidatorPrivValSet(10, 100)
valset3.Proposer = nil
valset4, _ := test.RandValidatorPrivValSet(10, 100)
valset4.Proposer = &consensus.Validator{}
testCases := []struct {
msg string
v1 *consensus.ValidatorSet
expPass1 bool
expPass2 bool
}{
{"success", valset, true, true},
{"fail valSet2, pubkey empty", valset2, false, false},
{"fail nil Proposer", valset3, false, false},
{"fail empty Proposer", valset4, false, false},
{"fail empty valSet", &consensus.ValidatorSet{}, true, false},
{"false nil", nil, true, false},
}
for _, tc := range testCases {
protoValSet, err := tc.v1.ToProto()
if tc.expPass1 {
require.NoError(t, err, tc.msg)
} else {
require.Error(t, err, tc.msg)
}
valSet, err := consensus.ValidatorSetFromProto(protoValSet)
if tc.expPass2 {
require.NoError(t, err, tc.msg)
require.EqualValues(t, tc.v1, valSet, tc.msg)
} else {
require.Error(t, err, tc.msg)
}
}
}


@@ -1,4 +1,4 @@
package types
package consensus
import (
"bytes"
@@ -9,14 +9,12 @@ import (
"sort"
"strings"
"testing"
"testing/quick"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
tmmath "github.com/tendermint/tendermint/libs/math"
tmrand "github.com/tendermint/tendermint/libs/rand"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
@@ -654,29 +652,6 @@ func TestAveragingInIncrementProposerPriorityWithVotingPower(t *testing.T) {
}
}
func TestSafeAdd(t *testing.T) {
f := func(a, b int64) bool {
c, overflow := safeAdd(a, b)
return overflow || (!overflow && c == a+b)
}
if err := quick.Check(f, nil); err != nil {
t.Error(err)
}
}
func TestSafeAddClip(t *testing.T) {
assert.EqualValues(t, math.MaxInt64, safeAddClip(math.MaxInt64, 10))
assert.EqualValues(t, math.MaxInt64, safeAddClip(math.MaxInt64, math.MaxInt64))
assert.EqualValues(t, math.MinInt64, safeAddClip(math.MinInt64, -10))
}
func TestSafeSubClip(t *testing.T) {
assert.EqualValues(t, math.MinInt64, safeSubClip(math.MinInt64, 10))
assert.EqualValues(t, 0, safeSubClip(math.MinInt64, math.MinInt64))
assert.EqualValues(t, math.MinInt64, safeSubClip(math.MinInt64, math.MaxInt64))
assert.EqualValues(t, math.MaxInt64, safeSubClip(math.MaxInt64, -10))
}
//-------------------------------------------------------------------
func TestEmptySet(t *testing.T) {
@@ -771,7 +746,7 @@ func valSetTotalProposerPriority(valSet *ValidatorSet) int64 {
sum := int64(0)
for _, val := range valSet.Validators {
// mind overflow
sum = safeAddClip(sum, val.ProposerPriority)
sum = tmmath.SafeAddClip(sum, val.ProposerPriority)
}
return sum
}
@@ -1382,74 +1357,6 @@ func TestValSetUpdateOverflowRelated(t *testing.T) {
}
}
func TestSafeMul(t *testing.T) {
testCases := []struct {
a int64
b int64
c int64
overflow bool
}{
0: {0, 0, 0, false},
1: {1, 0, 0, false},
2: {2, 3, 6, false},
3: {2, -3, -6, false},
4: {-2, -3, 6, false},
5: {-2, 3, -6, false},
6: {math.MaxInt64, 1, math.MaxInt64, false},
7: {math.MaxInt64 / 2, 2, math.MaxInt64 - 1, false},
8: {math.MaxInt64 / 2, 3, 0, true},
9: {math.MaxInt64, 2, 0, true},
}
for i, tc := range testCases {
c, overflow := safeMul(tc.a, tc.b)
assert.Equal(t, tc.c, c, "#%d", i)
assert.Equal(t, tc.overflow, overflow, "#%d", i)
}
}
func TestValidatorSetProtoBuf(t *testing.T) {
valset, _ := randValidatorPrivValSet(10, 100)
valset2, _ := randValidatorPrivValSet(10, 100)
valset2.Validators[0] = &Validator{}
valset3, _ := randValidatorPrivValSet(10, 100)
valset3.Proposer = nil
valset4, _ := randValidatorPrivValSet(10, 100)
valset4.Proposer = &Validator{}
testCases := []struct {
msg string
v1 *ValidatorSet
expPass1 bool
expPass2 bool
}{
{"success", valset, true, true},
{"fail valSet2, pubkey empty", valset2, false, false},
{"fail nil Proposer", valset3, false, false},
{"fail empty Proposer", valset4, false, false},
{"fail empty valSet", &ValidatorSet{}, true, false},
{"false nil", nil, true, false},
}
for _, tc := range testCases {
protoValSet, err := tc.v1.ToProto()
if tc.expPass1 {
require.NoError(t, err, tc.msg)
} else {
require.Error(t, err, tc.msg)
}
valSet, err := ValidatorSetFromProto(protoValSet)
if tc.expPass2 {
require.NoError(t, err, tc.msg)
require.EqualValues(t, tc.v1, valSet, tc.msg)
} else {
require.Error(t, err, tc.msg)
}
}
}
//---------------------
// Sort validators by priority and address
type validatorsByPriority []*Validator
@@ -1490,129 +1397,3 @@ func (tvals testValsByVotingPower) Less(i, j int) bool {
func (tvals testValsByVotingPower) Swap(i, j int) {
tvals[i], tvals[j] = tvals[j], tvals[i]
}
//-------------------------------------
// Benchmark tests
//
func BenchmarkUpdates(b *testing.B) {
const (
n = 100
m = 2000
)
// Init with n validators
vs := make([]*Validator, n)
for j := 0; j < n; j++ {
vs[j] = newValidator([]byte(fmt.Sprintf("v%d", j)), 100)
}
valSet := NewValidatorSet(vs)
l := len(valSet.Validators)
// Make m new validators
newValList := make([]*Validator, m)
for j := 0; j < m; j++ {
newValList[j] = newValidator([]byte(fmt.Sprintf("v%d", j+l)), 1000)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
// Add m validators to valSetCopy
valSetCopy := valSet.Copy()
assert.NoError(b, valSetCopy.UpdateWithChangeSet(newValList))
}
}
func BenchmarkValidatorSet_VerifyCommit_Ed25519(b *testing.B) {
for _, n := range []int{1, 8, 64, 1024} {
n := n
var (
chainID = "test_chain_id"
h = int64(3)
blockID = makeBlockIDRandom()
)
b.Run(fmt.Sprintf("valset size %d", n), func(b *testing.B) {
b.ReportAllocs()
// generate n validators
voteSet, valSet, vals := randVoteSet(h, 0, tmproto.PrecommitType, n, int64(n*5))
// create a commit with n validators
commit, err := makeCommit(blockID, h, 0, voteSet, vals, time.Now())
require.NoError(b, err)
for i := 0; i < b.N/n; i++ {
err = valSet.VerifyCommit(chainID, blockID, h, commit)
assert.NoError(b, err)
}
})
}
}
func BenchmarkValidatorSet_VerifyCommitLight_Ed25519(b *testing.B) {
for _, n := range []int{1, 8, 64, 1024} {
n := n
var (
chainID = "test_chain_id"
h = int64(3)
blockID = makeBlockIDRandom()
)
b.Run(fmt.Sprintf("valset size %d", n), func(b *testing.B) {
b.ReportAllocs()
// generate n validators
voteSet, valSet, vals := randVoteSet(h, 0, tmproto.PrecommitType, n, int64(n*5))
// create a commit with n validators
commit, err := makeCommit(blockID, h, 0, voteSet, vals, time.Now())
require.NoError(b, err)
for i := 0; i < b.N/n; i++ {
err = valSet.VerifyCommitLight(chainID, blockID, h, commit)
assert.NoError(b, err)
}
})
}
}
func BenchmarkValidatorSet_VerifyCommitLightTrusting_Ed25519(b *testing.B) {
for _, n := range []int{1, 8, 64, 1024} {
n := n
var (
chainID = "test_chain_id"
h = int64(3)
blockID = makeBlockIDRandom()
)
b.Run(fmt.Sprintf("valset size %d", n), func(b *testing.B) {
b.ReportAllocs()
// generate n validators
voteSet, valSet, vals := randVoteSet(h, 0, tmproto.PrecommitType, n, int64(n*5))
// create a commit with n validators
commit, err := makeCommit(blockID, h, 0, voteSet, vals, time.Now())
require.NoError(b, err)
for i := 0; i < b.N/n; i++ {
err = valSet.VerifyCommitLightTrusting(chainID, commit, tmmath.Fraction{Numerator: 1, Denominator: 3})
assert.NoError(b, err)
}
})
}
}
// Testing Utils
// deterministicValidatorSet returns a deterministic validator set (size: +numValidators+),
// where each validator has a power of 50
//
// EXPOSED FOR TESTING.
func deterministicValidatorSet() (*ValidatorSet, []PrivValidator) {
var (
valz = make([]*Validator, 10)
privValidators = make([]PrivValidator, 10)
)
for i := 0; i < 10; i++ {
// val, privValidator := DeterministicValidator(ed25519.PrivKey([]byte(deterministicKeys[i])))
val, privValidator := deterministicValidator(ed25519.GenPrivKeyFromSecret([]byte(fmt.Sprintf("key: %x", i))))
valz[i] = val
privValidators[i] = privValidator
}
sort.Sort(PrivValidatorsByAddress(privValidators))
return NewValidatorSet(valz), privValidators
}


@@ -1,25 +1,26 @@
package types
package consensus_test
import (
"context"
"fmt"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/crypto"
test "github.com/tendermint/tendermint/internal/test/factory"
"github.com/tendermint/tendermint/pkg/consensus"
)
func TestValidatorProtoBuf(t *testing.T) {
val, _ := randValidator(true, 100)
val, _ := test.RandValidator(true, 100)
testCases := []struct {
msg string
v1 *Validator
v1 *consensus.Validator
expPass1 bool
expPass2 bool
}{
{"success validator", val, true, true},
{"failure empty", &Validator{}, false, false},
{"failure empty", &consensus.Validator{}, false, false},
{"failure nil", nil, false, false},
}
for _, tc := range testCases {
@@ -31,7 +32,7 @@ func TestValidatorProtoBuf(t *testing.T) {
require.Error(t, err, tc.msg)
}
val, err := ValidatorFromProto(protoVal)
val, err := consensus.ValidatorFromProto(protoVal)
if tc.expPass2 {
require.NoError(t, err, tc.msg)
require.Equal(t, tc.v1, val, tc.msg)
@@ -42,15 +43,15 @@ func TestValidatorProtoBuf(t *testing.T) {
}
func TestValidatorValidateBasic(t *testing.T) {
priv := NewMockPV()
priv := consensus.NewMockPV()
pubKey, _ := priv.GetPubKey(context.Background())
testCases := []struct {
val *Validator
val *consensus.Validator
err bool
msg string
}{
{
val: NewValidator(pubKey, 1),
val: consensus.NewValidator(pubKey, 1),
err: false,
msg: "",
},
@@ -60,19 +61,19 @@ func TestValidatorValidateBasic(t *testing.T) {
msg: "nil validator",
},
{
val: &Validator{
val: &consensus.Validator{
PubKey: nil,
},
err: true,
msg: "validator does not have a public key",
},
{
val: NewValidator(pubKey, -1),
val: consensus.NewValidator(pubKey, -1),
err: true,
msg: "validator has negative voting power",
},
{
val: &Validator{
val: &consensus.Validator{
PubKey: pubKey,
Address: nil,
},
@@ -80,7 +81,7 @@ func TestValidatorValidateBasic(t *testing.T) {
msg: "validator address is the wrong size: ",
},
{
val: &Validator{
val: &consensus.Validator{
PubKey: pubKey,
Address: []byte{'a'},
},
@@ -100,19 +101,3 @@ func TestValidatorValidateBasic(t *testing.T) {
}
}
}
// Testing util functions
// deterministicValidator returns a deterministic validator, useful for testing.
// UNSTABLE
func deterministicValidator(key crypto.PrivKey) (*Validator, PrivValidator) {
privVal := NewMockPV()
privVal.PrivKey = key
var votePower int64 = 50
pubKey, err := privVal.GetPubKey(context.TODO())
if err != nil {
panic(fmt.Errorf("could not retrieve pubkey %w", err))
}
val := NewValidator(pubKey, votePower)
return val, privVal
}


@@ -1,4 +1,4 @@
package types
package consensus
import (
"bytes"
@@ -9,6 +9,7 @@ import (
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/internal/libs/protoio"
tmbytes "github.com/tendermint/tendermint/libs/bytes"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
@@ -51,7 +52,7 @@ type Vote struct {
Type tmproto.SignedMsgType `json:"type"`
Height int64 `json:"height"`
Round int32 `json:"round"` // assume there will not be greater than 2_147_483_647 rounds
BlockID BlockID `json:"block_id"` // zero if vote is nil.
BlockID meta.BlockID `json:"block_id"` // zero if vote is nil.
Timestamp time.Time `json:"timestamp"`
ValidatorAddress Address `json:"validator_address"`
ValidatorIndex int32 `json:"validator_index"`
@@ -59,22 +60,22 @@ type Vote struct {
}
// CommitSig converts the Vote to a CommitSig.
func (vote *Vote) CommitSig() CommitSig {
func (vote *Vote) CommitSig() meta.CommitSig {
if vote == nil {
return NewCommitSigAbsent()
return meta.NewCommitSigAbsent()
}
var blockIDFlag BlockIDFlag
var blockIDFlag meta.BlockIDFlag
switch {
case vote.BlockID.IsComplete():
blockIDFlag = BlockIDFlagCommit
blockIDFlag = meta.BlockIDFlagCommit
case vote.BlockID.IsZero():
blockIDFlag = BlockIDFlagNil
blockIDFlag = meta.BlockIDFlagNil
default:
panic(fmt.Sprintf("Invalid vote %v - expected BlockID to be either empty or complete", vote))
}
return CommitSig{
return meta.CommitSig{
BlockIDFlag: blockIDFlag,
ValidatorAddress: vote.ValidatorAddress,
Timestamp: vote.Timestamp,
@@ -82,6 +83,23 @@ func (vote *Vote) CommitSig() CommitSig {
}
}
// GetVoteFromCommit converts the CommitSig for the given valIdx to a Vote.
// Returns nil if the precommit at valIdx is nil.
// Panics if valIdx >= commit.Size().
func GetVoteFromCommit(commit *meta.Commit, valIdx int32) *Vote {
commitSig := commit.Signatures[valIdx]
return &Vote{
Type: tmproto.PrecommitType,
Height: commit.Height,
Round: commit.Round,
BlockID: commitSig.BlockID(commit.BlockID),
Timestamp: commitSig.Timestamp,
ValidatorAddress: commitSig.ValidatorAddress,
ValidatorIndex: valIdx,
Signature: commitSig.Signature,
}
}
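GetVoteFromCommit above rebuilds a Vote by combining the commit-wide fields (height, round, block ID) with the per-validator data stored in the CommitSig at valIdx. A self-contained sketch of that reassembly using stripped-down stand-in types — hypothetical, omitting the BlockID, timestamp, and vote-type fields the real function carries:

```go
package main

import "fmt"

// Minimal stand-ins for meta.Commit, meta.CommitSig, and Vote.
type commitSig struct {
	ValidatorAddress string
	Signature        []byte
}

type commit struct {
	Height     int64
	Round      int32
	Signatures []commitSig
}

type vote struct {
	Height           int64
	Round            int32
	ValidatorAddress string
	ValidatorIndex   int32
	Signature        []byte
}

// voteFromCommit mirrors the shape of GetVoteFromCommit: the shared
// fields come from the commit, the per-validator fields from the
// signature at valIdx, and valIdx itself becomes ValidatorIndex.
func voteFromCommit(c *commit, valIdx int32) *vote {
	cs := c.Signatures[valIdx]
	return &vote{
		Height:           c.Height,
		Round:            c.Round,
		ValidatorAddress: cs.ValidatorAddress,
		ValidatorIndex:   valIdx,
		Signature:        cs.Signature,
	}
}

func main() {
	c := &commit{Height: 3, Round: 0, Signatures: []commitSig{
		{ValidatorAddress: "val0", Signature: []byte{0x01}},
	}}
	v := voteFromCommit(c, 0)
	fmt.Println(v.Height, v.ValidatorAddress, v.ValidatorIndex) // 3 val0 0
}
```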
// VoteSignBytes returns the proto-encoding of the canonicalized Vote, for
// signing. Panics if the marshaling fails.
//
@@ -105,6 +123,20 @@ func (vote *Vote) Copy() *Vote {
return &voteCopy
}
// VoteSignBytesFromCommit returns the bytes of the Vote corresponding to valIdx for
// signing.
//
// The only unique part is the Timestamp - all other fields signed over are
// otherwise the same for all validators.
//
// Panics if valIdx >= commit.Size().
//
// See VoteSignBytes
func VoteSignBytesFromCommit(commit *meta.Commit, chainID string, valIdx int32) []byte {
v := GetVoteFromCommit(commit, valIdx).ToProto()
return VoteSignBytes(chainID, v)
}
// String returns a string representation of Vote.
//
// 1. validator index
@@ -140,7 +172,7 @@ func (vote *Vote) String() string {
typeString,
tmbytes.Fingerprint(vote.BlockID.Hash),
tmbytes.Fingerprint(vote.Signature),
CanonicalTime(vote.Timestamp),
meta.CanonicalTime(vote.Timestamp),
)
}
@@ -194,8 +226,8 @@ func (vote *Vote) ValidateBasic() error {
return errors.New("signature is missing")
}
if len(vote.Signature) > MaxSignatureSize {
return fmt.Errorf("signature is too big (max: %d)", MaxSignatureSize)
if len(vote.Signature) > meta.MaxSignatureSize {
return fmt.Errorf("signature is too big (max: %d)", meta.MaxSignatureSize)
}
return nil
@@ -227,7 +259,7 @@ func VoteFromProto(pv *tmproto.Vote) (*Vote, error) {
return nil, errors.New("nil vote")
}
blockID, err := BlockIDFromProto(&pv.BlockID)
blockID, err := meta.BlockIDFromProto(&pv.BlockID)
if err != nil {
return nil, err
}
@@ -244,3 +276,13 @@ func VoteFromProto(pv *tmproto.Vote) (*Vote, error) {
return vote, vote.ValidateBasic()
}
// IsVoteTypeValid returns true if t is a valid vote type.
func IsVoteTypeValid(t tmproto.SignedMsgType) bool {
switch t {
case tmproto.PrevoteType, tmproto.PrecommitType:
return true
default:
return false
}
}


@@ -1,4 +1,4 @@
package types
package consensus
import (
"bytes"
@@ -8,6 +8,7 @@ import (
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/libs/bits"
tmjson "github.com/tendermint/tendermint/libs/json"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
@@ -41,10 +42,10 @@ type P2PID string
the first vote seen, but when a 2/3 majority is found, votes for that get
priority and are copied over from `.votesByBlock`.
`.votesByBlock` keeps track of a list of votes for a particular block. There
`.votesByBlock` keeps track of a list of votes for a particular block. There
are two ways a &blockVotes{} gets created in `.votesByBlock`.
1. the first vote seen by a validator was for the particular block.
2. a peer claims to have seen 2/3 majority for the particular block.
1. the first vote seen by a validator was for the particular block.
2. a peer claims to have seen 2/3 majority for the particular block.
Since the first vote from a validator will always get added in `.votesByBlock`
, all votes in `.votes` will have a corresponding entry in `.votesByBlock`.
@@ -69,9 +70,9 @@ type VoteSet struct {
votesBitArray *bits.BitArray
votes []*Vote // Primary votes to share
sum int64 // Sum of voting power for seen votes, discounting conflicts
maj23 *BlockID // First 2/3 majority seen
maj23 *meta.BlockID // First 2/3 majority seen
votesByBlock map[string]*blockVotes // string(blockHash|blockParts) -> blockVotes
peerMaj23s map[P2PID]BlockID // Maj23 for each peer
peerMaj23s map[P2PID]meta.BlockID // Maj23 for each peer
}
// Constructs a new VoteSet struct used to accumulate votes for given height/round.
@@ -91,10 +92,27 @@ func NewVoteSet(chainID string, height int64, round int32,
sum: 0,
maj23: nil,
votesByBlock: make(map[string]*blockVotes, valSet.Size()),
peerMaj23s: make(map[P2PID]BlockID),
peerMaj23s: make(map[P2PID]meta.BlockID),
}
}
// VoteSetFromCommit constructs a VoteSet from the Commit and validator set.
// Panics if signatures from the commit can't be added to the voteset.
// Inverse of VoteSet.MakeCommit().
func VoteSetFromCommit(chainID string, commit *meta.Commit, vals *ValidatorSet) *VoteSet {
voteSet := NewVoteSet(chainID, commit.Height, commit.Round, tmproto.PrecommitType, vals)
for idx, commitSig := range commit.Signatures {
if commitSig.Absent() {
continue // OK, some precommits can be missing.
}
added, err := voteSet.AddVote(GetVoteFromCommit(commit, int32(idx)))
if !added || err != nil {
panic(fmt.Sprintf("Failed to reconstruct LastCommit: %v", err))
}
}
return voteSet
}
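The loop in VoteSetFromCommit above tolerates missing precommits: absent signatures are skipped rather than treated as errors, and every present signature must add cleanly or the reconstruction panics. A stripped-down sketch of that skip-absent pattern with stand-in types and no real signature handling:

```go
package main

import "fmt"

// sig is a minimal stand-in for meta.CommitSig.
type sig struct {
	Absent bool
	Val    string
}

// rebuild mirrors the VoteSetFromCommit loop: skip absent signatures,
// collect every present one.
func rebuild(sigs []sig) []string {
	var added []string
	for _, s := range sigs {
		if s.Absent {
			continue // OK, some precommits can be missing.
		}
		added = append(added, s.Val)
	}
	return added
}

func main() {
	sigs := []sig{{Val: "v0"}, {Absent: true}, {Val: "v2"}}
	fmt.Println(rebuild(sigs)) // [v0 v2]
}
```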
func (voteSet *VoteSet) ChainID() string {
return voteSet.chainID
}
@@ -306,7 +324,7 @@ func (voteSet *VoteSet) addVerifiedVote(
// this can cause memory issues.
// TODO: implement ability to remove peers too
// NOTE: VoteSet must not be nil
func (voteSet *VoteSet) SetPeerMaj23(peerID P2PID, blockID BlockID) error {
func (voteSet *VoteSet) SetPeerMaj23(peerID P2PID, blockID meta.BlockID) error {
if voteSet == nil {
panic("SetPeerMaj23() on nil VoteSet")
}
@@ -351,7 +369,7 @@ func (voteSet *VoteSet) BitArray() *bits.BitArray {
return voteSet.votesBitArray.Copy()
}
func (voteSet *VoteSet) BitArrayByBlockID(blockID BlockID) *bits.BitArray {
func (voteSet *VoteSet) BitArrayByBlockID(blockID meta.BlockID) *bits.BitArray {
if voteSet == nil {
return nil
}
@@ -427,16 +445,16 @@ func (voteSet *VoteSet) HasAll() bool {
// If there was a +2/3 majority for blockID, return blockID and true.
// Else, return the empty BlockID{} and false.
func (voteSet *VoteSet) TwoThirdsMajority() (blockID BlockID, ok bool) {
func (voteSet *VoteSet) TwoThirdsMajority() (blockID meta.BlockID, ok bool) {
if voteSet == nil {
return BlockID{}, false
return meta.BlockID{}, false
}
voteSet.mtx.Lock()
defer voteSet.mtx.Unlock()
if voteSet.maj23 != nil {
return *voteSet.maj23, true
}
return BlockID{}, false
return meta.BlockID{}, false
}
//--------------------------------------------------------------------------------
@@ -503,9 +521,9 @@ func (voteSet *VoteSet) MarshalJSON() ([]byte, error) {
// NOTE: insufficient for unmarshaling from (compressed votes)
// TODO: make the peerMaj23s nicer to read (eg just the block hash)
type VoteSetJSON struct {
Votes []string `json:"votes"`
VotesBitArray string `json:"votes_bit_array"`
PeerMaj23s map[P2PID]BlockID `json:"peer_maj_23s"`
Votes []string `json:"votes"`
VotesBitArray string `json:"votes_bit_array"`
PeerMaj23s map[P2PID]meta.BlockID `json:"peer_maj_23s"`
}
// Return the bit-array of votes including
@@ -589,8 +607,8 @@ func (voteSet *VoteSet) sumTotalFrac() (int64, int64, float64) {
// for the block, which has 2/3+ majority, and nil.
//
// Panics if the vote type is not PrecommitType or if there's no +2/3 votes for
// a single block.
func (voteSet *VoteSet) MakeCommit() *Commit {
// a single block.
func (voteSet *VoteSet) MakeCommit() *meta.Commit {
if voteSet.signedMsgType != tmproto.PrecommitType {
panic("Cannot MakeCommit() unless VoteSet.Type is PrecommitType")
}
@@ -603,17 +621,17 @@ func (voteSet *VoteSet) MakeCommit() *Commit {
}
// For every validator, get the precommit
commitSigs := make([]CommitSig, len(voteSet.votes))
commitSigs := make([]meta.CommitSig, len(voteSet.votes))
for i, v := range voteSet.votes {
commitSig := v.CommitSig()
// if block ID exists but doesn't match, exclude sig
if commitSig.ForBlock() && !v.BlockID.Equals(*voteSet.maj23) {
commitSig = NewCommitSigAbsent()
commitSig = meta.NewCommitSigAbsent()
}
commitSigs[i] = commitSig
}
return NewCommit(voteSet.GetHeight(), voteSet.GetRound(), *voteSet.maj23, commitSigs)
return meta.NewCommit(voteSet.GetHeight(), voteSet.GetRound(), *voteSet.maj23, commitSigs)
}
//--------------------------------------------------------------------------------


@@ -1,23 +1,26 @@
package types
package consensus_test
import (
"bytes"
"context"
"sort"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/crypto"
test "github.com/tendermint/tendermint/internal/test/factory"
tmrand "github.com/tendermint/tendermint/libs/rand"
tmtime "github.com/tendermint/tendermint/libs/time"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
func TestVoteSet_AddVote_Good(t *testing.T) {
height, round := int64(1), int32(0)
voteSet, _, privValidators := randVoteSet(height, round, tmproto.PrevoteType, 10, 1)
voteSet, _, privValidators := test.RandVoteSet(height, round, tmproto.PrevoteType, 10, 1)
val0 := privValidators[0]
val0p, err := val0.GetPubKey(context.Background())
@@ -29,16 +32,16 @@ func TestVoteSet_AddVote_Good(t *testing.T) {
blockID, ok := voteSet.TwoThirdsMajority()
assert.False(t, ok || !blockID.IsZero(), "there should be no 2/3 majority")
vote := &Vote{
vote := &consensus.Vote{
ValidatorAddress: val0Addr,
ValidatorIndex: 0, // since privValidators are in order
Height: height,
Round: round,
Type: tmproto.PrevoteType,
Timestamp: tmtime.Now(),
BlockID: BlockID{nil, PartSetHeader{}},
BlockID: meta.BlockID{nil, meta.PartSetHeader{}},
}
_, err = signAddVote(val0, vote, voteSet)
_, err = test.SignAddVote(val0, vote, voteSet)
require.NoError(t, err)
assert.NotNil(t, voteSet.GetByAddress(val0Addr))
@@ -49,16 +52,16 @@ func TestVoteSet_AddVote_Good(t *testing.T) {
func TestVoteSet_AddVote_Bad(t *testing.T) {
height, round := int64(1), int32(0)
voteSet, _, privValidators := randVoteSet(height, round, tmproto.PrevoteType, 10, 1)
voteSet, _, privValidators := test.RandVoteSet(height, round, tmproto.PrevoteType, 10, 1)
voteProto := &Vote{
voteProto := &consensus.Vote{
ValidatorAddress: nil,
ValidatorIndex: -1,
Height: height,
Round: round,
Timestamp: tmtime.Now(),
Type: tmproto.PrevoteType,
BlockID: BlockID{nil, PartSetHeader{}},
BlockID: meta.BlockID{nil, meta.PartSetHeader{}},
}
// val0 votes for nil.
@@ -67,19 +70,19 @@ func TestVoteSet_AddVote_Bad(t *testing.T) {
require.NoError(t, err)
addr := pubKey.Address()
vote := withValidator(voteProto, addr, 0)
added, err := signAddVote(privValidators[0], vote, voteSet)
added, err := test.SignAddVote(privValidators[0], vote, voteSet)
if !added || err != nil {
t.Errorf("expected VoteSet.Add to succeed")
}
}
// val0 votes again for some block.
// val0 votes again for some block.
{
pubKey, err := privValidators[0].GetPubKey(context.Background())
require.NoError(t, err)
addr := pubKey.Address()
vote := withValidator(voteProto, addr, 0)
added, err := signAddVote(privValidators[0], withBlockHash(vote, tmrand.Bytes(32)), voteSet)
added, err := test.SignAddVote(privValidators[0], withBlockHash(vote, tmrand.Bytes(32)), voteSet)
if added || err == nil {
t.Errorf("expected VoteSet.Add to fail, conflicting vote.")
}
@@ -91,7 +94,7 @@ func TestVoteSet_AddVote_Bad(t *testing.T) {
require.NoError(t, err)
addr := pubKey.Address()
vote := withValidator(voteProto, addr, 1)
added, err := signAddVote(privValidators[1], withHeight(vote, height+1), voteSet)
added, err := test.SignAddVote(privValidators[1], withHeight(vote, height+1), voteSet)
if added || err == nil {
t.Errorf("expected VoteSet.Add to fail, wrong height")
}
@@ -103,7 +106,7 @@ func TestVoteSet_AddVote_Bad(t *testing.T) {
require.NoError(t, err)
addr := pubKey.Address()
vote := withValidator(voteProto, addr, 2)
added, err := signAddVote(privValidators[2], withRound(vote, round+1), voteSet)
added, err := test.SignAddVote(privValidators[2], withRound(vote, round+1), voteSet)
if added || err == nil {
t.Errorf("expected VoteSet.Add to fail, wrong round")
}
@@ -115,7 +118,7 @@ func TestVoteSet_AddVote_Bad(t *testing.T) {
require.NoError(t, err)
addr := pubKey.Address()
vote := withValidator(voteProto, addr, 3)
added, err := signAddVote(privValidators[3], withType(vote, byte(tmproto.PrecommitType)), voteSet)
added, err := test.SignAddVote(privValidators[3], withType(vote, byte(tmproto.PrecommitType)), voteSet)
if added || err == nil {
t.Errorf("expected VoteSet.Add to fail, wrong type")
}
@@ -124,16 +127,16 @@ func TestVoteSet_AddVote_Bad(t *testing.T) {
func TestVoteSet_2_3Majority(t *testing.T) {
height, round := int64(1), int32(0)
voteSet, _, privValidators := randVoteSet(height, round, tmproto.PrevoteType, 10, 1)
voteSet, _, privValidators := test.RandVoteSet(height, round, tmproto.PrevoteType, 10, 1)
voteProto := &Vote{
voteProto := &consensus.Vote{
ValidatorAddress: nil, // NOTE: must fill in
ValidatorIndex: -1, // NOTE: must fill in
Height: height,
Round: round,
Type: tmproto.PrevoteType,
Timestamp: tmtime.Now(),
BlockID: BlockID{nil, PartSetHeader{}},
BlockID: meta.BlockID{nil, meta.PartSetHeader{}},
}
// 6 out of 10 voted for nil.
for i := int32(0); i < 6; i++ {
@@ -141,7 +144,7 @@ func TestVoteSet_2_3Majority(t *testing.T) {
require.NoError(t, err)
addr := pubKey.Address()
vote := withValidator(voteProto, addr, i)
_, err = signAddVote(privValidators[i], vote, voteSet)
_, err = test.SignAddVote(privValidators[i], vote, voteSet)
require.NoError(t, err)
}
blockID, ok := voteSet.TwoThirdsMajority()
@@ -153,7 +156,7 @@ func TestVoteSet_2_3Majority(t *testing.T) {
require.NoError(t, err)
addr := pubKey.Address()
vote := withValidator(voteProto, addr, 6)
_, err = signAddVote(privValidators[6], withBlockHash(vote, tmrand.Bytes(32)), voteSet)
_, err = test.SignAddVote(privValidators[6], withBlockHash(vote, tmrand.Bytes(32)), voteSet)
require.NoError(t, err)
blockID, ok = voteSet.TwoThirdsMajority()
assert.False(t, ok || !blockID.IsZero(), "there should be no 2/3 majority")
@@ -165,7 +168,7 @@ func TestVoteSet_2_3Majority(t *testing.T) {
require.NoError(t, err)
addr := pubKey.Address()
vote := withValidator(voteProto, addr, 7)
_, err = signAddVote(privValidators[7], vote, voteSet)
_, err = test.SignAddVote(privValidators[7], vote, voteSet)
require.NoError(t, err)
blockID, ok = voteSet.TwoThirdsMajority()
assert.True(t, ok || blockID.IsZero(), "there should be 2/3 majority for nil")
@@ -174,20 +177,20 @@ func TestVoteSet_2_3Majority(t *testing.T) {
func TestVoteSet_2_3MajorityRedux(t *testing.T) {
height, round := int64(1), int32(0)
voteSet, _, privValidators := randVoteSet(height, round, tmproto.PrevoteType, 100, 1)
voteSet, _, privValidators := test.RandVoteSet(height, round, tmproto.PrevoteType, 100, 1)
blockHash := crypto.CRandBytes(32)
blockPartsTotal := uint32(123)
blockPartSetHeader := PartSetHeader{blockPartsTotal, crypto.CRandBytes(32)}
blockPartSetHeader := meta.PartSetHeader{blockPartsTotal, crypto.CRandBytes(32)}
voteProto := &Vote{
voteProto := &consensus.Vote{
ValidatorAddress: nil, // NOTE: must fill in
ValidatorIndex: -1, // NOTE: must fill in
Height: height,
Round: round,
Timestamp: tmtime.Now(),
Type: tmproto.PrevoteType,
BlockID: BlockID{blockHash, blockPartSetHeader},
BlockID: meta.BlockID{blockHash, blockPartSetHeader},
}
// 66 out of 100 voted for nil.
@@ -196,7 +199,7 @@ func TestVoteSet_2_3MajorityRedux(t *testing.T) {
require.NoError(t, err)
addr := pubKey.Address()
vote := withValidator(voteProto, addr, i)
_, err = signAddVote(privValidators[i], vote, voteSet)
_, err = test.SignAddVote(privValidators[i], vote, voteSet)
require.NoError(t, err)
}
blockID, ok := voteSet.TwoThirdsMajority()
@@ -209,7 +212,7 @@ func TestVoteSet_2_3MajorityRedux(t *testing.T) {
require.NoError(t, err)
addr := pubKey.Address()
vote := withValidator(voteProto, addr, 66)
_, err = signAddVote(privValidators[66], withBlockHash(vote, nil), voteSet)
_, err = test.SignAddVote(privValidators[66], withBlockHash(vote, nil), voteSet)
require.NoError(t, err)
blockID, ok = voteSet.TwoThirdsMajority()
assert.False(t, ok || !blockID.IsZero(),
@@ -222,8 +225,8 @@ func TestVoteSet_2_3MajorityRedux(t *testing.T) {
require.NoError(t, err)
addr := pubKey.Address()
vote := withValidator(voteProto, addr, 67)
blockPartsHeader := PartSetHeader{blockPartsTotal, crypto.CRandBytes(32)}
_, err = signAddVote(privValidators[67], withBlockPartSetHeader(vote, blockPartsHeader), voteSet)
blockPartsHeader := meta.PartSetHeader{blockPartsTotal, crypto.CRandBytes(32)}
_, err = test.SignAddVote(privValidators[67], withBlockPartSetHeader(vote, blockPartsHeader), voteSet)
require.NoError(t, err)
blockID, ok = voteSet.TwoThirdsMajority()
assert.False(t, ok || !blockID.IsZero(),
@@ -236,8 +239,8 @@ func TestVoteSet_2_3MajorityRedux(t *testing.T) {
require.NoError(t, err)
addr := pubKey.Address()
vote := withValidator(voteProto, addr, 68)
blockPartsHeader := PartSetHeader{blockPartsTotal + 1, blockPartSetHeader.Hash}
_, err = signAddVote(privValidators[68], withBlockPartSetHeader(vote, blockPartsHeader), voteSet)
blockPartsHeader := meta.PartSetHeader{blockPartsTotal + 1, blockPartSetHeader.Hash}
_, err = test.SignAddVote(privValidators[68], withBlockPartSetHeader(vote, blockPartsHeader), voteSet)
require.NoError(t, err)
blockID, ok = voteSet.TwoThirdsMajority()
assert.False(t, ok || !blockID.IsZero(),
@@ -250,7 +253,7 @@ func TestVoteSet_2_3MajorityRedux(t *testing.T) {
require.NoError(t, err)
addr := pubKey.Address()
vote := withValidator(voteProto, addr, 69)
_, err = signAddVote(privValidators[69], withBlockHash(vote, tmrand.Bytes(32)), voteSet)
_, err = test.SignAddVote(privValidators[69], withBlockHash(vote, tmrand.Bytes(32)), voteSet)
require.NoError(t, err)
blockID, ok = voteSet.TwoThirdsMajority()
assert.False(t, ok || !blockID.IsZero(),
@@ -263,28 +266,28 @@ func TestVoteSet_2_3MajorityRedux(t *testing.T) {
require.NoError(t, err)
addr := pubKey.Address()
vote := withValidator(voteProto, addr, 70)
_, err = signAddVote(privValidators[70], vote, voteSet)
_, err = test.SignAddVote(privValidators[70], vote, voteSet)
require.NoError(t, err)
blockID, ok = voteSet.TwoThirdsMajority()
assert.True(t, ok && blockID.Equals(BlockID{blockHash, blockPartSetHeader}),
assert.True(t, ok && blockID.Equals(meta.BlockID{blockHash, blockPartSetHeader}),
"there should be 2/3 majority")
}
}
func TestVoteSet_Conflicts(t *testing.T) {
height, round := int64(1), int32(0)
voteSet, _, privValidators := randVoteSet(height, round, tmproto.PrevoteType, 4, 1)
voteSet, _, privValidators := test.RandVoteSet(height, round, tmproto.PrevoteType, 4, 1)
blockHash1 := tmrand.Bytes(32)
blockHash2 := tmrand.Bytes(32)
voteProto := &Vote{
voteProto := &consensus.Vote{
ValidatorAddress: nil,
ValidatorIndex: -1,
Height: height,
Round: round,
Timestamp: tmtime.Now(),
Type: tmproto.PrevoteType,
BlockID: BlockID{nil, PartSetHeader{}},
BlockID: meta.BlockID{nil, meta.PartSetHeader{}},
}
val0, err := privValidators[0].GetPubKey(context.Background())
@@ -294,7 +297,7 @@ func TestVoteSet_Conflicts(t *testing.T) {
// val0 votes for nil.
{
vote := withValidator(voteProto, val0Addr, 0)
added, err := signAddVote(privValidators[0], vote, voteSet)
added, err := test.SignAddVote(privValidators[0], vote, voteSet)
if !added || err != nil {
t.Errorf("expected VoteSet.Add to succeed")
}
@@ -303,31 +306,31 @@ func TestVoteSet_Conflicts(t *testing.T) {
// val0 votes again for blockHash1.
{
vote := withValidator(voteProto, val0Addr, 0)
added, err := signAddVote(privValidators[0], withBlockHash(vote, blockHash1), voteSet)
added, err := test.SignAddVote(privValidators[0], withBlockHash(vote, blockHash1), voteSet)
assert.False(t, added, "conflicting vote")
assert.Error(t, err, "conflicting vote")
}
// start tracking blockHash1
err = voteSet.SetPeerMaj23("peerA", BlockID{blockHash1, PartSetHeader{}})
err = voteSet.SetPeerMaj23("peerA", meta.BlockID{blockHash1, meta.PartSetHeader{}})
require.NoError(t, err)
// val0 votes again for blockHash1.
{
vote := withValidator(voteProto, val0Addr, 0)
added, err := signAddVote(privValidators[0], withBlockHash(vote, blockHash1), voteSet)
added, err := test.SignAddVote(privValidators[0], withBlockHash(vote, blockHash1), voteSet)
assert.True(t, added, "called SetPeerMaj23()")
assert.Error(t, err, "conflicting vote")
}
// attempt tracking blockHash2, should fail because already set for peerA.
err = voteSet.SetPeerMaj23("peerA", BlockID{blockHash2, PartSetHeader{}})
err = voteSet.SetPeerMaj23("peerA", meta.BlockID{blockHash2, meta.PartSetHeader{}})
require.Error(t, err)
// val0 votes again for blockHash1.
{
vote := withValidator(voteProto, val0Addr, 0)
added, err := signAddVote(privValidators[0], withBlockHash(vote, blockHash2), voteSet)
added, err := test.SignAddVote(privValidators[0], withBlockHash(vote, blockHash2), voteSet)
assert.False(t, added, "duplicate SetPeerMaj23() from peerA")
assert.Error(t, err, "conflicting vote")
}
@@ -338,7 +341,7 @@ func TestVoteSet_Conflicts(t *testing.T) {
assert.NoError(t, err)
addr := pv.Address()
vote := withValidator(voteProto, addr, 1)
added, err := signAddVote(privValidators[1], withBlockHash(vote, blockHash1), voteSet)
added, err := test.SignAddVote(privValidators[1], withBlockHash(vote, blockHash1), voteSet)
if !added || err != nil {
t.Errorf("expected VoteSet.Add to succeed")
}
@@ -358,7 +361,7 @@ func TestVoteSet_Conflicts(t *testing.T) {
assert.NoError(t, err)
addr := pv.Address()
vote := withValidator(voteProto, addr, 2)
added, err := signAddVote(privValidators[2], withBlockHash(vote, blockHash2), voteSet)
added, err := test.SignAddVote(privValidators[2], withBlockHash(vote, blockHash2), voteSet)
if !added || err != nil {
t.Errorf("expected VoteSet.Add to succeed")
}
@@ -373,7 +376,7 @@ func TestVoteSet_Conflicts(t *testing.T) {
}
// now attempt tracking blockHash1
err = voteSet.SetPeerMaj23("peerB", BlockID{blockHash1, PartSetHeader{}})
err = voteSet.SetPeerMaj23("peerB", meta.BlockID{blockHash1, meta.PartSetHeader{}})
require.NoError(t, err)
// val2 votes for blockHash1.
@@ -382,7 +385,7 @@ func TestVoteSet_Conflicts(t *testing.T) {
assert.NoError(t, err)
addr := pv.Address()
vote := withValidator(voteProto, addr, 2)
added, err := signAddVote(privValidators[2], withBlockHash(vote, blockHash1), voteSet)
added, err := test.SignAddVote(privValidators[2], withBlockHash(vote, blockHash1), voteSet)
assert.True(t, added)
assert.Error(t, err, "conflicting vote")
}
@@ -402,26 +405,26 @@ func TestVoteSet_Conflicts(t *testing.T) {
func TestVoteSet_MakeCommit(t *testing.T) {
height, round := int64(1), int32(0)
voteSet, _, privValidators := randVoteSet(height, round, tmproto.PrecommitType, 10, 1)
blockHash, blockPartSetHeader := crypto.CRandBytes(32), PartSetHeader{123, crypto.CRandBytes(32)}
voteSet, _, privValidators := test.RandVoteSet(height, round, tmproto.PrecommitType, 10, 1)
blockHash, blockPartSetHeader := crypto.CRandBytes(32), meta.PartSetHeader{123, crypto.CRandBytes(32)}
voteProto := &Vote{
voteProto := &consensus.Vote{
ValidatorAddress: nil,
ValidatorIndex: -1,
Height: height,
Round: round,
Timestamp: tmtime.Now(),
Type: tmproto.PrecommitType,
BlockID: BlockID{blockHash, blockPartSetHeader},
BlockID: meta.BlockID{blockHash, blockPartSetHeader},
}
// 6 out of 10 voted for some block.
// 6 out of 10 voted for some block.
for i := int32(0); i < 6; i++ {
pv, err := privValidators[i].GetPubKey(context.Background())
assert.NoError(t, err)
addr := pv.Address()
vote := withValidator(voteProto, addr, i)
_, err = signAddVote(privValidators[i], vote, voteSet)
_, err = test.SignAddVote(privValidators[i], vote, voteSet)
if err != nil {
t.Error(err)
}
@@ -430,16 +433,16 @@ func TestVoteSet_MakeCommit(t *testing.T) {
// MakeCommit should fail.
assert.Panics(t, func() { voteSet.MakeCommit() }, "Doesn't have +2/3 majority")
// 7th voted for some other block.
// 7th voted for some other block.
{
pv, err := privValidators[6].GetPubKey(context.Background())
assert.NoError(t, err)
addr := pv.Address()
vote := withValidator(voteProto, addr, 6)
vote = withBlockHash(vote, tmrand.Bytes(32))
vote = withBlockPartSetHeader(vote, PartSetHeader{123, tmrand.Bytes(32)})
vote = withBlockPartSetHeader(vote, meta.PartSetHeader{123, tmrand.Bytes(32)})
_, err = signAddVote(privValidators[6], vote, voteSet)
_, err = test.SignAddVote(privValidators[6], vote, voteSet)
require.NoError(t, err)
}
@@ -449,7 +452,7 @@ func TestVoteSet_MakeCommit(t *testing.T) {
assert.NoError(t, err)
addr := pv.Address()
vote := withValidator(voteProto, addr, 7)
_, err = signAddVote(privValidators[7], vote, voteSet)
_, err = test.SignAddVote(privValidators[7], vote, voteSet)
require.NoError(t, err)
}
@@ -459,9 +462,9 @@ func TestVoteSet_MakeCommit(t *testing.T) {
assert.NoError(t, err)
addr := pv.Address()
vote := withValidator(voteProto, addr, 8)
vote.BlockID = BlockID{}
vote.BlockID = meta.BlockID{}
_, err = signAddVote(privValidators[8], vote, voteSet)
_, err = test.SignAddVote(privValidators[8], vote, voteSet)
require.NoError(t, err)
}
@@ -476,47 +479,91 @@ func TestVoteSet_MakeCommit(t *testing.T) {
}
}
// NOTE: privValidators are in order
func randVoteSet(
height int64,
round int32,
signedMsgType tmproto.SignedMsgType,
numValidators int,
votingPower int64,
) (*VoteSet, *ValidatorSet, []PrivValidator) {
valSet, privValidators := randValidatorPrivValSet(numValidators, votingPower)
return NewVoteSet("test_chain_id", height, round, signedMsgType, valSet), valSet, privValidators
func TestCommitToVoteSet(t *testing.T) {
lastID := test.MakeBlockID()
h := int64(3)
voteSet, valSet, vals := test.RandVoteSet(h-1, 1, tmproto.PrecommitType, 10, 1)
commit, err := test.MakeCommit(lastID, h-1, 1, voteSet, vals, time.Now())
assert.NoError(t, err)
chainID := voteSet.ChainID()
voteSet2 := consensus.VoteSetFromCommit(chainID, commit, valSet)
for i := int32(0); int(i) < len(vals); i++ {
vote1 := voteSet.GetByIndex(i)
vote2 := voteSet2.GetByIndex(i)
vote3 := consensus.GetVoteFromCommit(commit, i)
vote1bz, err := vote1.ToProto().Marshal()
require.NoError(t, err)
vote2bz, err := vote2.ToProto().Marshal()
require.NoError(t, err)
vote3bz, err := vote3.ToProto().Marshal()
require.NoError(t, err)
assert.Equal(t, vote1bz, vote2bz)
assert.Equal(t, vote1bz, vote3bz)
}
}
func deterministicVoteSet(
height int64,
round int32,
signedMsgType tmproto.SignedMsgType,
votingPower int64,
) (*VoteSet, *ValidatorSet, []PrivValidator) {
valSet, privValidators := deterministicValidatorSet()
return NewVoteSet("test_chain_id", height, round, signedMsgType, valSet), valSet, privValidators
}
func TestCommitToVoteSetWithVotesForNilBlock(t *testing.T) {
blockID := test.MakeBlockIDWithHash([]byte("blockhash"))
func randValidatorPrivValSet(numValidators int, votingPower int64) (*ValidatorSet, []PrivValidator) {
var (
valz = make([]*Validator, numValidators)
privValidators = make([]PrivValidator, numValidators)
const (
height = int64(3)
round = 0
)
for i := 0; i < numValidators; i++ {
val, privValidator := randValidator(false, votingPower)
valz[i] = val
privValidators[i] = privValidator
type commitVoteTest struct {
blockIDs []meta.BlockID
numVotes []int // must sum to numValidators
numValidators int
valid bool
}
sort.Sort(PrivValidatorsByAddress(privValidators))
testCases := []commitVoteTest{
{[]meta.BlockID{blockID, {}}, []int{67, 33}, 100, true},
}
return NewValidatorSet(valz), privValidators
for _, tc := range testCases {
voteSet, valSet, vals := test.RandVoteSet(height-1, round, tmproto.PrecommitType, tc.numValidators, 1)
vi := int32(0)
for n := range tc.blockIDs {
for i := 0; i < tc.numVotes[n]; i++ {
pubKey, err := vals[vi].GetPubKey(context.Background())
require.NoError(t, err)
vote := &consensus.Vote{
ValidatorAddress: pubKey.Address(),
ValidatorIndex: vi,
Height: height - 1,
Round: round,
Type: tmproto.PrecommitType,
BlockID: tc.blockIDs[n],
Timestamp: tmtime.Now(),
}
added, err := test.SignAddVote(vals[vi], vote, voteSet)
assert.NoError(t, err)
assert.True(t, added)
vi++
}
}
if tc.valid {
commit := voteSet.MakeCommit() // panics without > 2/3 valid votes
assert.NotNil(t, commit)
err := valSet.VerifyCommit(voteSet.ChainID(), blockID, height-1, commit)
assert.Nil(t, err)
} else {
assert.Panics(t, func() { voteSet.MakeCommit() })
}
}
}
// Convenience: Return new vote with different validator address/index
func withValidator(vote *Vote, addr []byte, idx int32) *Vote {
func withValidator(vote *consensus.Vote, addr []byte, idx int32) *consensus.Vote {
vote = vote.Copy()
vote.ValidatorAddress = addr
vote.ValidatorIndex = idx
@@ -524,35 +571,35 @@ func withValidator(vote *Vote, addr []byte, idx int32) *Vote {
}
// Convenience: Return new vote with different height
func withHeight(vote *Vote, height int64) *Vote {
func withHeight(vote *consensus.Vote, height int64) *consensus.Vote {
vote = vote.Copy()
vote.Height = height
return vote
}
// Convenience: Return new vote with different round
func withRound(vote *Vote, round int32) *Vote {
func withRound(vote *consensus.Vote, round int32) *consensus.Vote {
vote = vote.Copy()
vote.Round = round
return vote
}
// Convenience: Return new vote with different type
func withType(vote *Vote, signedMsgType byte) *Vote {
func withType(vote *consensus.Vote, signedMsgType byte) *consensus.Vote {
vote = vote.Copy()
vote.Type = tmproto.SignedMsgType(signedMsgType)
return vote
}
// Convenience: Return new vote with different blockHash
func withBlockHash(vote *Vote, blockHash []byte) *Vote {
func withBlockHash(vote *consensus.Vote, blockHash []byte) *consensus.Vote {
vote = vote.Copy()
vote.BlockID.Hash = blockHash
return vote
}
// Convenience: Return new vote with different blockParts
func withBlockPartSetHeader(vote *Vote, blockPartsHeader PartSetHeader) *Vote {
func withBlockPartSetHeader(vote *consensus.Vote, blockPartsHeader meta.PartSetHeader) *consensus.Vote {
vote = vote.Copy()
vote.BlockID.PartSetHeader = blockPartsHeader
return vote


@@ -1,4 +1,4 @@
package types
package consensus
import (
"context"
@@ -13,6 +13,7 @@ import (
"github.com/tendermint/tendermint/crypto/ed25519"
"github.com/tendermint/tendermint/crypto/tmhash"
"github.com/tendermint/tendermint/internal/libs/protoio"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
@@ -25,7 +26,7 @@ func examplePrecommit() *Vote {
}
func exampleVote(t byte) *Vote {
var stamp, err = time.Parse(TimeFormat, "2017-12-25T03:00:01.234Z")
var stamp, err = time.Parse(meta.TimeFormat, "2017-12-25T03:00:01.234Z")
if err != nil {
panic(err)
}
@@ -35,9 +36,9 @@ func exampleVote(t byte) *Vote {
Height: 12345,
Round: 2,
Timestamp: stamp,
BlockID: BlockID{
BlockID: meta.BlockID{
Hash: tmhash.Sum([]byte("blockID_hash")),
PartSetHeader: PartSetHeader{
PartSetHeader: meta.PartSetHeader{
Total: 1000000,
Hash: tmhash.Sum([]byte("blockID_part_set_header_hash")),
},
@@ -244,12 +245,12 @@ func TestVoteValidateBasic(t *testing.T) {
{"Negative Height", func(v *Vote) { v.Height = -1 }, true},
{"Negative Round", func(v *Vote) { v.Round = -1 }, true},
{"Invalid BlockID", func(v *Vote) {
v.BlockID = BlockID{[]byte{1, 2, 3}, PartSetHeader{111, []byte("blockparts")}}
v.BlockID = meta.BlockID{[]byte{1, 2, 3}, meta.PartSetHeader{111, []byte("blockparts")}}
}, true},
{"Invalid Address", func(v *Vote) { v.ValidatorAddress = make([]byte, 1) }, true},
{"Invalid ValidatorIndex", func(v *Vote) { v.ValidatorIndex = -1 }, true},
{"Invalid Signature", func(v *Vote) { v.Signature = nil }, true},
{"Too big Signature", func(v *Vote) { v.Signature = make([]byte, MaxSignatureSize+1) }, true},
{"Too big Signature", func(v *Vote) { v.Signature = make([]byte, meta.MaxSignatureSize+1) }, true},
}
for _, tc := range testCases {
tc := tc


@@ -1,4 +1,4 @@
package types
package events
import (
"context"
@@ -9,6 +9,7 @@ import (
"github.com/tendermint/tendermint/libs/log"
tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/pkg/mempool"
)
const defaultCapacity = 0
@@ -178,7 +179,7 @@ func (b *EventBus) PublishEventTx(data EventDataTx) error {
Attributes: []types.EventAttribute{
{
Key: tokens[1],
Value: fmt.Sprintf("%X", Tx(data.Tx).Hash()),
Value: fmt.Sprintf("%X", mempool.Tx(data.Tx).Hash()),
},
},
})


@@ -1,4 +1,4 @@
package types
package events_test
import (
"context"
@@ -13,10 +13,15 @@ import (
abci "github.com/tendermint/tendermint/abci/types"
tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
tmquery "github.com/tendermint/tendermint/libs/pubsub/query"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/events"
"github.com/tendermint/tendermint/pkg/evidence"
"github.com/tendermint/tendermint/pkg/mempool"
"github.com/tendermint/tendermint/pkg/meta"
)
func TestEventBusPublishEventTx(t *testing.T) {
eventBus := NewEventBus()
eventBus := events.NewEventBus()
err := eventBus.Start()
require.NoError(t, err)
t.Cleanup(func() {
@@ -25,7 +30,7 @@ func TestEventBusPublishEventTx(t *testing.T) {
}
})
tx := Tx("foo")
tx := mempool.Tx("foo")
result := abci.ResponseDeliverTx{
Data: []byte("bar"),
Events: []abci.Event{
@@ -41,7 +46,7 @@ func TestEventBusPublishEventTx(t *testing.T) {
done := make(chan struct{})
go func() {
msg := <-txsSub.Out()
edt := msg.Data().(EventDataTx)
edt := msg.Data().(events.EventDataTx)
assert.Equal(t, int64(1), edt.Height)
assert.Equal(t, uint32(0), edt.Index)
assert.EqualValues(t, tx, edt.Tx)
@@ -49,7 +54,7 @@ func TestEventBusPublishEventTx(t *testing.T) {
close(done)
}()
err = eventBus.PublishEventTx(EventDataTx{abci.TxResult{
err = eventBus.PublishEventTx(events.EventDataTx{abci.TxResult{
Height: 1,
Index: 0,
Tx: tx,
@@ -65,7 +70,7 @@ func TestEventBusPublishEventTx(t *testing.T) {
}
func TestEventBusPublishEventNewBlock(t *testing.T) {
eventBus := NewEventBus()
eventBus := events.NewEventBus()
err := eventBus.Start()
require.NoError(t, err)
t.Cleanup(func() {
@@ -74,8 +79,8 @@ func TestEventBusPublishEventNewBlock(t *testing.T) {
}
})
block := MakeBlock(0, []Tx{}, nil, []Evidence{})
blockID := BlockID{Hash: block.Hash(), PartSetHeader: block.MakePartSet(BlockPartSizeBytes).Header()}
block := block.MakeBlock(0, []mempool.Tx{}, nil, []evidence.Evidence{})
blockID := meta.BlockID{Hash: block.Hash(), PartSetHeader: block.MakePartSet(meta.BlockPartSizeBytes).Header()}
resultBeginBlock := abci.ResponseBeginBlock{
Events: []abci.Event{
{Type: "testType", Attributes: []abci.EventAttribute{{Key: "baz", Value: "1"}}},
@@ -95,7 +100,7 @@ func TestEventBusPublishEventNewBlock(t *testing.T) {
done := make(chan struct{})
go func() {
msg := <-blocksSub.Out()
edt := msg.Data().(EventDataNewBlock)
edt := msg.Data().(events.EventDataNewBlock)
assert.Equal(t, block, edt.Block)
assert.Equal(t, blockID, edt.BlockID)
assert.Equal(t, resultBeginBlock, edt.ResultBeginBlock)
@@ -103,7 +108,7 @@ func TestEventBusPublishEventNewBlock(t *testing.T) {
close(done)
}()
err = eventBus.PublishEventNewBlock(EventDataNewBlock{
err = eventBus.PublishEventNewBlock(events.EventDataNewBlock{
Block: block,
BlockID: blockID,
ResultBeginBlock: resultBeginBlock,
@@ -119,7 +124,7 @@ func TestEventBusPublishEventNewBlock(t *testing.T) {
}
func TestEventBusPublishEventTxDuplicateKeys(t *testing.T) {
eventBus := NewEventBus()
eventBus := events.NewEventBus()
err := eventBus.Start()
require.NoError(t, err)
t.Cleanup(func() {
@@ -128,7 +133,7 @@ func TestEventBusPublishEventTxDuplicateKeys(t *testing.T) {
}
})
tx := Tx("foo")
tx := mempool.Tx("foo")
result := abci.ResponseDeliverTx{
Data: []byte("bar"),
Events: []abci.Event{
@@ -194,7 +199,7 @@ func TestEventBusPublishEventTxDuplicateKeys(t *testing.T) {
go func() {
select {
case msg := <-sub.Out():
data := msg.Data().(EventDataTx)
data := msg.Data().(events.EventDataTx)
assert.Equal(t, int64(1), data.Height)
assert.Equal(t, uint32(0), data.Index)
assert.EqualValues(t, tx, data.Tx)
@@ -205,7 +210,7 @@ func TestEventBusPublishEventTxDuplicateKeys(t *testing.T) {
}
}()
err = eventBus.PublishEventTx(EventDataTx{abci.TxResult{
err = eventBus.PublishEventTx(events.EventDataTx{abci.TxResult{
Height: 1,
Index: 0,
Tx: tx,
@@ -227,7 +232,7 @@ func TestEventBusPublishEventTxDuplicateKeys(t *testing.T) {
}
func TestEventBusPublishEventNewBlockHeader(t *testing.T) {
eventBus := NewEventBus()
eventBus := events.NewEventBus()
err := eventBus.Start()
require.NoError(t, err)
t.Cleanup(func() {
@@ -236,7 +241,7 @@ func TestEventBusPublishEventNewBlockHeader(t *testing.T) {
}
})
block := MakeBlock(0, []Tx{}, nil, []Evidence{})
block := block.MakeBlock(0, []mempool.Tx{}, nil, []evidence.Evidence{})
resultBeginBlock := abci.ResponseBeginBlock{
Events: []abci.Event{
{Type: "testType", Attributes: []abci.EventAttribute{{Key: "baz", Value: "1"}}},
@@ -256,14 +261,14 @@ func TestEventBusPublishEventNewBlockHeader(t *testing.T) {
done := make(chan struct{})
go func() {
msg := <-headersSub.Out()
edt := msg.Data().(EventDataNewBlockHeader)
edt := msg.Data().(events.EventDataNewBlockHeader)
assert.Equal(t, block.Header, edt.Header)
assert.Equal(t, resultBeginBlock, edt.ResultBeginBlock)
assert.Equal(t, resultEndBlock, edt.ResultEndBlock)
close(done)
}()
err = eventBus.PublishEventNewBlockHeader(EventDataNewBlockHeader{
err = eventBus.PublishEventNewBlockHeader(events.EventDataNewBlockHeader{
Header: block.Header,
ResultBeginBlock: resultBeginBlock,
ResultEndBlock: resultEndBlock,
@@ -278,7 +283,7 @@ func TestEventBusPublishEventNewBlockHeader(t *testing.T) {
}
func TestEventBusPublishEventNewEvidence(t *testing.T) {
eventBus := NewEventBus()
eventBus := events.NewEventBus()
err := eventBus.Start()
require.NoError(t, err)
t.Cleanup(func() {
@@ -287,7 +292,7 @@ func TestEventBusPublishEventNewEvidence(t *testing.T) {
}
})
ev := NewMockDuplicateVoteEvidence(1, time.Now(), "test-chain-id")
ev := evidence.NewMockDuplicateVoteEvidence(1, time.Now(), "test-chain-id")
query := "tm.event='NewEvidence'"
evSub, err := eventBus.Subscribe(context.Background(), "test", tmquery.MustParse(query))
@@ -296,13 +301,13 @@ func TestEventBusPublishEventNewEvidence(t *testing.T) {
done := make(chan struct{})
go func() {
msg := <-evSub.Out()
edt := msg.Data().(EventDataNewEvidence)
edt := msg.Data().(events.EventDataNewEvidence)
assert.Equal(t, ev, edt.Evidence)
assert.Equal(t, int64(4), edt.Height)
close(done)
}()
err = eventBus.PublishEventNewEvidence(EventDataNewEvidence{
err = eventBus.PublishEventNewEvidence(events.EventDataNewEvidence{
Evidence: ev,
Height: 4,
})
@@ -316,7 +321,7 @@ func TestEventBusPublishEventNewEvidence(t *testing.T) {
}
func TestEventBusPublish(t *testing.T) {
eventBus := NewEventBus()
eventBus := events.NewEventBus()
err := eventBus.Start()
require.NoError(t, err)
t.Cleanup(func() {
@@ -342,37 +347,37 @@ func TestEventBusPublish(t *testing.T) {
}
}()
err = eventBus.Publish(EventNewBlockHeaderValue, EventDataNewBlockHeader{})
err = eventBus.Publish(events.EventNewBlockHeaderValue, events.EventDataNewBlockHeader{})
require.NoError(t, err)
err = eventBus.PublishEventNewBlock(EventDataNewBlock{})
err = eventBus.PublishEventNewBlock(events.EventDataNewBlock{})
require.NoError(t, err)
err = eventBus.PublishEventNewBlockHeader(EventDataNewBlockHeader{})
err = eventBus.PublishEventNewBlockHeader(events.EventDataNewBlockHeader{})
require.NoError(t, err)
err = eventBus.PublishEventVote(EventDataVote{})
err = eventBus.PublishEventVote(events.EventDataVote{})
require.NoError(t, err)
err = eventBus.PublishEventNewRoundStep(EventDataRoundState{})
err = eventBus.PublishEventNewRoundStep(events.EventDataRoundState{})
require.NoError(t, err)
err = eventBus.PublishEventTimeoutPropose(EventDataRoundState{})
err = eventBus.PublishEventTimeoutPropose(events.EventDataRoundState{})
require.NoError(t, err)
err = eventBus.PublishEventTimeoutWait(EventDataRoundState{})
err = eventBus.PublishEventTimeoutWait(events.EventDataRoundState{})
require.NoError(t, err)
err = eventBus.PublishEventNewRound(EventDataNewRound{})
err = eventBus.PublishEventNewRound(events.EventDataNewRound{})
require.NoError(t, err)
err = eventBus.PublishEventCompleteProposal(EventDataCompleteProposal{})
err = eventBus.PublishEventCompleteProposal(events.EventDataCompleteProposal{})
require.NoError(t, err)
err = eventBus.PublishEventPolka(EventDataRoundState{})
err = eventBus.PublishEventPolka(events.EventDataRoundState{})
require.NoError(t, err)
err = eventBus.PublishEventUnlock(EventDataRoundState{})
err = eventBus.PublishEventUnlock(events.EventDataRoundState{})
require.NoError(t, err)
err = eventBus.PublishEventRelock(EventDataRoundState{})
err = eventBus.PublishEventRelock(events.EventDataRoundState{})
require.NoError(t, err)
err = eventBus.PublishEventLock(EventDataRoundState{})
err = eventBus.PublishEventLock(events.EventDataRoundState{})
require.NoError(t, err)
err = eventBus.PublishEventValidatorSetUpdates(EventDataValidatorSetUpdates{})
err = eventBus.PublishEventValidatorSetUpdates(events.EventDataValidatorSetUpdates{})
require.NoError(t, err)
err = eventBus.PublishEventBlockSyncStatus(EventDataBlockSyncStatus{})
err = eventBus.PublishEventBlockSyncStatus(events.EventDataBlockSyncStatus{})
require.NoError(t, err)
err = eventBus.PublishEventStateSyncStatus(EventDataStateSyncStatus{})
err = eventBus.PublishEventStateSyncStatus(events.EventDataStateSyncStatus{})
require.NoError(t, err)
select {
@@ -418,7 +423,7 @@ func benchmarkEventBus(numClients int, randQueries bool, randEvents bool, b *tes
// for random* functions
mrand.Seed(time.Now().Unix())
eventBus := NewEventBusWithBufferCapacity(0) // set buffer capacity to 0 so we are not testing cache
eventBus := events.NewEventBusWithBufferCapacity(0) // set buffer capacity to 0 so we are not testing cache
err := eventBus.Start()
if err != nil {
b.Error(err)
@@ -430,7 +435,7 @@ func benchmarkEventBus(numClients int, randQueries bool, randEvents bool, b *tes
})
ctx := context.Background()
q := EventQueryNewBlock
q := events.EventQueryNewBlock
for i := 0; i < numClients; i++ {
if randQueries {
@@ -451,7 +456,7 @@ func benchmarkEventBus(numClients int, randQueries bool, randEvents bool, b *tes
}()
}
eventValue := EventNewBlockValue
eventValue := events.EventNewBlockValue
b.ReportAllocs()
b.ResetTimer()
@@ -460,50 +465,50 @@ func benchmarkEventBus(numClients int, randQueries bool, randEvents bool, b *tes
eventValue = randEventValue()
}
err := eventBus.Publish(eventValue, EventDataString("Gamora"))
err := eventBus.Publish(eventValue, events.EventDataString("Gamora"))
if err != nil {
b.Error(err)
}
}
}
var events = []string{
EventNewBlockValue,
EventNewBlockHeaderValue,
EventNewRoundValue,
EventNewRoundStepValue,
EventTimeoutProposeValue,
EventCompleteProposalValue,
EventPolkaValue,
EventUnlockValue,
EventLockValue,
EventRelockValue,
EventTimeoutWaitValue,
EventVoteValue,
EventBlockSyncStatusValue,
EventStateSyncStatusValue,
var allEvents = []string{
events.EventNewBlockValue,
events.EventNewBlockHeaderValue,
events.EventNewRoundValue,
events.EventNewRoundStepValue,
events.EventTimeoutProposeValue,
events.EventCompleteProposalValue,
events.EventPolkaValue,
events.EventUnlockValue,
events.EventLockValue,
events.EventRelockValue,
events.EventTimeoutWaitValue,
events.EventVoteValue,
events.EventBlockSyncStatusValue,
events.EventStateSyncStatusValue,
}
func randEventValue() string {
return events[mrand.Intn(len(events))]
return allEvents[mrand.Intn(len(allEvents))]
}
var queries = []tmpubsub.Query{
EventQueryNewBlock,
EventQueryNewBlockHeader,
EventQueryNewRound,
EventQueryNewRoundStep,
EventQueryTimeoutPropose,
EventQueryCompleteProposal,
EventQueryPolka,
EventQueryUnlock,
EventQueryLock,
EventQueryRelock,
EventQueryTimeoutWait,
EventQueryVote,
EventQueryBlockSyncStatus,
EventQueryStateSyncStatus,
events.EventQueryNewBlock,
events.EventQueryNewBlockHeader,
events.EventQueryNewRound,
events.EventQueryNewRoundStep,
events.EventQueryTimeoutPropose,
events.EventQueryCompleteProposal,
events.EventQueryPolka,
events.EventQueryUnlock,
events.EventQueryLock,
events.EventQueryRelock,
events.EventQueryTimeoutWait,
events.EventQueryVote,
events.EventQueryBlockSyncStatus,
events.EventQueryStateSyncStatus,
}
func randQuery() tmpubsub.Query {


@@ -1,4 +1,4 @@
package types
package events
import (
"fmt"
@@ -8,6 +8,12 @@ import (
tmjson "github.com/tendermint/tendermint/libs/json"
tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
tmquery "github.com/tendermint/tendermint/libs/pubsub/query"
"github.com/tendermint/tendermint/pkg/block"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/evidence"
"github.com/tendermint/tendermint/pkg/mempool"
"github.com/tendermint/tendermint/pkg/meta"
)
// Reserved event types (alphabetically sorted).
@@ -112,15 +118,15 @@ func init() {
// but some (an input to a call tx or a receive) are more exotic
type EventDataNewBlock struct {
Block *Block `json:"block"`
BlockID BlockID `json:"block_id"`
Block *block.Block `json:"block"`
BlockID meta.BlockID `json:"block_id"`
ResultBeginBlock abci.ResponseBeginBlock `json:"result_begin_block"`
ResultEndBlock abci.ResponseEndBlock `json:"result_end_block"`
}
type EventDataNewBlockHeader struct {
Header Header `json:"header"`
Header meta.Header `json:"header"`
NumTxs int64 `json:"num_txs"` // Number of txs in a block
ResultBeginBlock abci.ResponseBeginBlock `json:"result_begin_block"`
@@ -128,7 +134,7 @@ type EventDataNewBlockHeader struct {
}
type EventDataNewEvidence struct {
Evidence Evidence `json:"evidence"`
Evidence evidence.Evidence `json:"evidence"`
Height int64 `json:"height"`
}
@@ -146,8 +152,8 @@ type EventDataRoundState struct {
}
type ValidatorInfo struct {
Address Address `json:"address"`
Index int32 `json:"index"`
Address consensus.Address `json:"address"`
Index int32 `json:"index"`
}
type EventDataNewRound struct {
@@ -163,17 +169,17 @@ type EventDataCompleteProposal struct {
Round int32 `json:"round"`
Step string `json:"step"`
BlockID BlockID `json:"block_id"`
BlockID meta.BlockID `json:"block_id"`
}
type EventDataVote struct {
Vote *Vote
Vote *consensus.Vote
}
type EventDataString string
type EventDataValidatorSetUpdates struct {
ValidatorUpdates []*Validator `json:"validator_updates"`
ValidatorUpdates []*consensus.Validator `json:"validator_updates"`
}
// EventDataBlockSyncStatus shows the fastsync status and the
@@ -231,7 +237,7 @@ var (
EventQueryStateSyncStatus = QueryForEvent(EventStateSyncStatusValue)
)
func EventQueryTxFor(tx Tx) tmpubsub.Query {
func EventQueryTxFor(tx mempool.Tx) tmpubsub.Query {
return tmquery.MustParse(fmt.Sprintf("%s='%s' AND %s='%X'", EventTypeKey, EventTxValue, TxHashKey, tx.Hash()))
}


@@ -1,14 +1,16 @@
package types
package events
import (
"fmt"
"testing"
"github.com/stretchr/testify/assert"
"github.com/tendermint/tendermint/pkg/mempool"
)
func TestQueryTxFor(t *testing.T) {
tx := Tx("foo")
tx := mempool.Tx("foo")
assert.Equal(t,
fmt.Sprintf("tm.event='Tx' AND tx.hash='%X'", tx.Hash()),
EventQueryTxFor(tx).String(),


@@ -1,4 +1,4 @@
package types
package evidence
import (
"bytes"
@@ -15,6 +15,9 @@ import (
"github.com/tendermint/tendermint/crypto/tmhash"
tmjson "github.com/tendermint/tendermint/libs/json"
tmrand "github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/light"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
@@ -30,12 +33,49 @@ type Evidence interface {
ValidateBasic() error // basic consistency check
}
//-----------------------------------------------------------------------------
// EvidenceList is a list of Evidence. Evidences is not a word.
type EvidenceList []Evidence
// Hash returns the simple merkle root hash of the EvidenceList.
func (evl EvidenceList) Hash() []byte {
// These allocations are required because Evidence is not of type Bytes, and
// golang slices can't be typed cast. This shouldn't be a performance problem since
// the Evidence size is capped.
evidenceBzs := make([][]byte, len(evl))
for i := 0; i < len(evl); i++ {
// TODO: We should change this to the hash. Using bytes contains some unexported data that
// may cause different hashes
evidenceBzs[i] = evl[i].Bytes()
}
return merkle.HashFromByteSlices(evidenceBzs)
}
func (evl EvidenceList) String() string {
s := ""
for _, e := range evl {
s += fmt.Sprintf("%s\t\t", e)
}
return s
}
// Has returns true if the evidence is in the EvidenceList.
func (evl EvidenceList) Has(evidence Evidence) bool {
for _, ev := range evl {
if bytes.Equal(evidence.Hash(), ev.Hash()) {
return true
}
}
return false
}
//--------------------------------------------------------------------------------------
// DuplicateVoteEvidence contains evidence of a single validator signing two conflicting votes.
type DuplicateVoteEvidence struct {
VoteA *Vote `json:"vote_a"`
VoteB *Vote `json:"vote_b"`
VoteA *consensus.Vote `json:"vote_a"`
VoteB *consensus.Vote `json:"vote_b"`
// abci specific information
TotalVotingPower int64
@@ -47,8 +87,8 @@ var _ Evidence = &DuplicateVoteEvidence{}
// NewDuplicateVoteEvidence creates DuplicateVoteEvidence with right ordering given
// two conflicting votes. If one of the votes is nil, evidence returned is nil as well
func NewDuplicateVoteEvidence(vote1, vote2 *Vote, blockTime time.Time, valSet *ValidatorSet) *DuplicateVoteEvidence {
var voteA, voteB *Vote
func NewDuplicateVoteEvidence(vote1, vote2 *consensus.Vote, blockTime time.Time, valSet *consensus.ValidatorSet) *DuplicateVoteEvidence {
var voteA, voteB *consensus.Vote
if vote1 == nil || vote2 == nil || valSet == nil {
return nil
}
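The "right ordering" mentioned in the `NewDuplicateVoteEvidence` comment above, plus the nil-in/nil-out guard shown in the fragment, can be sketched on its own. The tiebreak used here (compare block-ID bytes) is an assumption for illustration; the real constructor's canonical ordering may differ.

```go
package main

import (
	"bytes"
	"fmt"
)

// vote is a tiny hypothetical stand-in for consensus.Vote, carrying
// only the field this sketch needs.
type vote struct{ blockID []byte }

// orderVotes returns the two conflicting votes in a canonical order so
// that equivalent evidence serializes (and hashes) identically no
// matter which vote was observed first. The block-ID comparison is an
// illustrative tiebreak, not necessarily the production rule.
func orderVotes(v1, v2 *vote) (voteA, voteB *vote) {
	if v1 == nil || v2 == nil {
		return nil, nil // mirror the nil-in, nil-out behavior described above
	}
	if bytes.Compare(v1.blockID, v2.blockID) <= 0 {
		return v1, v2
	}
	return v2, v1
}

func main() {
	a, b := orderVotes(&vote{[]byte{2}}, &vote{[]byte{1}})
	fmt.Println(a.blockID[0], b.blockID[0]) // canonical order regardless of argument order
}
```

Canonicalizing at construction time means two honest nodes that see the same pair of conflicting votes in opposite orders still produce byte-identical evidence.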
@@ -143,8 +183,8 @@ func (dve *DuplicateVoteEvidence) ValidateBasic() error {
// ValidateABCI validates the ABCI component of the evidence by checking the
// timestamp, validator power and total voting power.
func (dve *DuplicateVoteEvidence) ValidateABCI(
val *Validator,
valSet *ValidatorSet,
val *consensus.Validator,
valSet *consensus.ValidatorSet,
evidenceTime time.Time,
) error {
@@ -169,8 +209,8 @@ func (dve *DuplicateVoteEvidence) ValidateABCI(
// GenerateABCI populates the ABCI component of the evidence. This includes the
// validator power, timestamp and total voting power.
func (dve *DuplicateVoteEvidence) GenerateABCI(
val *Validator,
valSet *ValidatorSet,
val *consensus.Validator,
valSet *consensus.ValidatorSet,
evidenceTime time.Time,
) {
dve.ValidatorPower = val.VotingPower
@@ -198,12 +238,12 @@ func DuplicateVoteEvidenceFromProto(pb *tmproto.DuplicateVoteEvidence) (*Duplica
return nil, errors.New("nil duplicate vote evidence")
}
vA, err := VoteFromProto(pb.VoteA)
vA, err := consensus.VoteFromProto(pb.VoteA)
if err != nil {
return nil, err
}
vB, err := VoteFromProto(pb.VoteB)
vB, err := consensus.VoteFromProto(pb.VoteB)
if err != nil {
return nil, err
}
@@ -230,13 +270,13 @@ func DuplicateVoteEvidenceFromProto(pb *tmproto.DuplicateVoteEvidence) (*Duplica
// CommonHeight is used to indicate the type of attack. If the height is different to the conflicting block
// height, then nodes will treat this as of the Lunatic form, else it is of the Equivocation form.
type LightClientAttackEvidence struct {
ConflictingBlock *LightBlock
ConflictingBlock *light.LightBlock
CommonHeight int64
// abci specific information
ByzantineValidators []*Validator // validators in the validator set that misbehaved in creating the conflicting block
TotalVotingPower int64 // total voting power of the validator set at the common height
Timestamp time.Time // timestamp of the block at the common height
ByzantineValidators []*consensus.Validator // validators in the validator set that misbehaved in creating the conflicting block
TotalVotingPower int64 // total voting power of the validator set at the common height
Timestamp time.Time // timestamp of the block at the common height
}
var _ Evidence = &LightClientAttackEvidence{}
@@ -247,7 +287,7 @@ func (l *LightClientAttackEvidence) ABCI() []abci.Evidence {
for idx, val := range l.ByzantineValidators {
abciEv[idx] = abci.Evidence{
Type: abci.EvidenceType_LIGHT_CLIENT_ATTACK,
Validator: TM2PB.Validator(val),
Validator: consensus.TM2PB.Validator(val),
Height: l.Height(),
Time: l.Timestamp,
TotalVotingPower: l.TotalVotingPower,
@@ -272,9 +312,9 @@ func (l *LightClientAttackEvidence) Bytes() []byte {
// GetByzantineValidators finds out what style of attack LightClientAttackEvidence was and then works out who
// the malicious validators were and returns them. This is used both for forming the ByzantineValidators
// field and for validating that it is correct. Validators are ordered based on validator power
func (l *LightClientAttackEvidence) GetByzantineValidators(commonVals *ValidatorSet,
trusted *SignedHeader) []*Validator {
var validators []*Validator
func (l *LightClientAttackEvidence) GetByzantineValidators(commonVals *consensus.ValidatorSet,
trusted *light.SignedHeader) []*consensus.Validator {
var validators []*consensus.Validator
// First check if the header is invalid. This means that it is a lunatic attack and therefore we take the
// validators who are in the commonVals and voted for the lunatic header
if l.ConflictingHeaderIsInvalid(trusted.Header) {
@@ -290,7 +330,7 @@ func (l *LightClientAttackEvidence) GetByzantineValidators(commonVals *Validator
}
validators = append(validators, val)
}
sort.Sort(ValidatorsByVotingPower(validators))
sort.Sort(consensus.ValidatorsByVotingPower(validators))
return validators
} else if trusted.Commit.Round == l.ConflictingBlock.Commit.Round {
// This is an equivocation attack as both commits are in the same round. We then find the validators
@@ -311,7 +351,7 @@ func (l *LightClientAttackEvidence) GetByzantineValidators(commonVals *Validator
_, val := l.ConflictingBlock.ValidatorSet.GetByAddress(sigA.ValidatorAddress)
validators = append(validators, val)
}
sort.Sort(ValidatorsByVotingPower(validators))
sort.Sort(consensus.ValidatorsByVotingPower(validators))
return validators
}
// if the rounds are different then this is an amnesia attack. Unfortunately, given the nature of the attack,
@@ -324,7 +364,7 @@ func (l *LightClientAttackEvidence) GetByzantineValidators(commonVals *Validator
// to determine whether the conflicting header was the product of a valid state transition
// or not. If it is then all the deterministic fields of the header should be the same.
// If not, it is an invalid header and constitutes a lunatic attack.
func (l *LightClientAttackEvidence) ConflictingHeaderIsInvalid(trustedHeader *Header) bool {
func (l *LightClientAttackEvidence) ConflictingHeaderIsInvalid(trustedHeader *meta.Header) bool {
return !bytes.Equal(trustedHeader.ValidatorsHash, l.ConflictingBlock.ValidatorsHash) ||
!bytes.Equal(trustedHeader.NextValidatorsHash, l.ConflictingBlock.NextValidatorsHash) ||
!bytes.Equal(trustedHeader.ConsensusHash, l.ConflictingBlock.ConsensusHash) ||
@@ -413,8 +453,8 @@ func (l *LightClientAttackEvidence) ValidateBasic() error {
// components are validated separately because they can be regenerated if
// invalid.
func (l *LightClientAttackEvidence) ValidateABCI(
commonVals *ValidatorSet,
trustedHeader *SignedHeader,
commonVals *consensus.ValidatorSet,
trustedHeader *light.SignedHeader,
evidenceTime time.Time,
) error {
@@ -469,8 +509,8 @@ func (l *LightClientAttackEvidence) ValidateABCI(
// GenerateABCI populates the ABCI component of the evidence: the timestamp,
// total voting power and byzantine validators
func (l *LightClientAttackEvidence) GenerateABCI(
commonVals *ValidatorSet,
trustedHeader *SignedHeader,
commonVals *consensus.ValidatorSet,
trustedHeader *light.SignedHeader,
evidenceTime time.Time,
) {
l.Timestamp = evidenceTime
@@ -509,14 +549,14 @@ func LightClientAttackEvidenceFromProto(lpb *tmproto.LightClientAttackEvidence)
return nil, errors.New("empty light client attack evidence")
}
conflictingBlock, err := LightBlockFromProto(lpb.ConflictingBlock)
conflictingBlock, err := light.LightBlockFromProto(lpb.ConflictingBlock)
if err != nil {
return nil, err
}
byzVals := make([]*Validator, len(lpb.ByzantineValidators))
byzVals := make([]*consensus.Validator, len(lpb.ByzantineValidators))
for idx, valpb := range lpb.ByzantineValidators {
val, err := ValidatorFromProto(valpb)
val, err := consensus.ValidatorFromProto(valpb)
if err != nil {
return nil, err
}
@@ -534,43 +574,6 @@ func LightClientAttackEvidenceFromProto(lpb *tmproto.LightClientAttackEvidence)
return l, l.ValidateBasic()
}
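The `FromProto` functions in this file follow one pattern: decode the wire form, rebuild the domain value, then gate the result behind `ValidateBasic` so callers never receive malformed evidence. A minimal sketch of that pattern, using `encoding/json` as a stand-in for protobuf (every type and field name here is hypothetical):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// duplicateVoteWire is a hypothetical wire representation; json stands in
// for the protobuf types used by the real code.
type duplicateVoteWire struct {
	VoteA            string
	VoteB            string
	TotalVotingPower int64
}

type duplicateVote struct {
	voteA, voteB     string
	totalVotingPower int64
}

func (d *duplicateVote) validateBasic() error {
	if d.voteA == "" || d.voteB == "" {
		return errors.New("duplicate vote evidence: missing vote")
	}
	return nil
}

// duplicateVoteFromWire returns the decoded value alongside the validation
// error, matching the `return l, l.ValidateBasic()` shape in the diff.
func duplicateVoteFromWire(raw []byte) (*duplicateVote, error) {
	var w duplicateVoteWire
	if err := json.Unmarshal(raw, &w); err != nil {
		return nil, err
	}
	d := &duplicateVote{voteA: w.VoteA, voteB: w.VoteB, totalVotingPower: w.TotalVotingPower}
	return d, d.validateBasic()
}

func main() {
	d, err := duplicateVoteFromWire([]byte(`{"VoteA":"a","VoteB":"b","TotalVotingPower":30}`))
	fmt.Println(err == nil, d.totalVotingPower) // true 30
}
```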
//------------------------------------------------------------------------------------------
// EvidenceList is a list of Evidence. Evidences is not a word.
type EvidenceList []Evidence
// Hash returns the simple merkle root hash of the EvidenceList.
func (evl EvidenceList) Hash() []byte {
// These allocations are required because Evidence is not of type Bytes, and
// golang slices can't be typed cast. This shouldn't be a performance problem since
// the Evidence size is capped.
evidenceBzs := make([][]byte, len(evl))
for i := 0; i < len(evl); i++ {
// TODO: We should change this to the hash. Using bytes contains some unexported data that
// may cause different hashes
evidenceBzs[i] = evl[i].Bytes()
}
return merkle.HashFromByteSlices(evidenceBzs)
}
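The `Hash` method above serializes each evidence item to bytes and feeds the slices to a Merkle root. The shape of that computation can be sketched with the standard library. Note this is NOT Tendermint's exact tree (which follows RFC 6962 with distinct leaf/inner-node prefixes); it only illustrates the pattern:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// merkleRoot folds a list of byte slices into a simple binary Merkle root:
// hash each serialized item into a leaf, then combine the leaves pairwise
// until one root remains. An odd node is carried up unchanged.
func merkleRoot(items [][]byte) [sha256.Size]byte {
	if len(items) == 0 {
		return sha256.Sum256(nil)
	}
	level := make([][sha256.Size]byte, len(items))
	for i, item := range items {
		level[i] = sha256.Sum256(item)
	}
	for len(level) > 1 {
		next := make([][sha256.Size]byte, 0, (len(level)+1)/2)
		for i := 0; i < len(level); i += 2 {
			if i+1 == len(level) {
				// odd leaf is carried up unchanged
				next = append(next, level[i])
				continue
			}
			next = append(next, sha256.Sum256(append(level[i][:], level[i+1][:]...)))
		}
		level = next
	}
	return level[0]
}

func main() {
	root := merkleRoot([][]byte{[]byte("evidence-a"), []byte("evidence-b")})
	fmt.Println(hex.EncodeToString(root[:]))
}
```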
func (evl EvidenceList) String() string {
s := ""
for _, e := range evl {
s += fmt.Sprintf("%s\t\t", e)
}
return s
}
// Has returns true if the evidence is in the EvidenceList.
func (evl EvidenceList) Has(evidence Evidence) bool {
for _, ev := range evl {
if bytes.Equal(evidence.Hash(), ev.Hash()) {
return true
}
}
return false
}
//------------------------------------------ PROTO --------------------------------------
// EvidenceToProto is a generalized function for encoding evidence that conforms to the
@@ -667,15 +670,15 @@ func (err *ErrEvidenceOverflow) Error() string {
// assumes the round to be 0 and the validator index to be 0
func NewMockDuplicateVoteEvidence(height int64, time time.Time, chainID string) *DuplicateVoteEvidence {
val := NewMockPV()
val := consensus.NewMockPV()
return NewMockDuplicateVoteEvidenceWithValidator(height, time, val, chainID)
}
// assumes voting power to be 10 and validator to be the only one in the set
func NewMockDuplicateVoteEvidenceWithValidator(height int64, time time.Time,
pv PrivValidator, chainID string) *DuplicateVoteEvidence {
pv consensus.PrivValidator, chainID string) *DuplicateVoteEvidence {
pubKey, _ := pv.GetPubKey(context.Background())
val := NewValidator(pubKey, 10)
val := consensus.NewValidator(pubKey, 10)
voteA := makeMockVote(height, 0, 0, pubKey.Address(), randBlockID(), time)
vA := voteA.ToProto()
_ = pv.SignVote(context.Background(), chainID, vA)
@@ -684,12 +687,12 @@ func NewMockDuplicateVoteEvidenceWithValidator(height int64, time time.Time,
vB := voteB.ToProto()
_ = pv.SignVote(context.Background(), chainID, vB)
voteB.Signature = vB.Signature
return NewDuplicateVoteEvidence(voteA, voteB, time, NewValidatorSet([]*Validator{val}))
return NewDuplicateVoteEvidence(voteA, voteB, time, consensus.NewValidatorSet([]*consensus.Validator{val}))
}
func makeMockVote(height int64, round, index int32, addr Address,
blockID BlockID, time time.Time) *Vote {
return &Vote{
func makeMockVote(height int64, round, index int32, addr consensus.Address,
blockID meta.BlockID, time time.Time) *consensus.Vote {
return &consensus.Vote{
Type: tmproto.SignedMsgType(2),
Height: height,
Round: round,
@@ -700,10 +703,10 @@ func makeMockVote(height int64, round, index int32, addr Address,
}
}
func randBlockID() BlockID {
return BlockID{
func randBlockID() meta.BlockID {
return meta.BlockID{
Hash: tmrand.Bytes(tmhash.Size),
PartSetHeader: PartSetHeader{
PartSetHeader: meta.PartSetHeader{
Total: 1,
Hash: tmrand.Bytes(tmhash.Size),
},

View File

@@ -1,4 +1,4 @@
package types
package evidence_test
import (
"context"
@@ -14,7 +14,12 @@ import (
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
"github.com/tendermint/tendermint/crypto/tmhash"
test "github.com/tendermint/tendermint/internal/test/factory"
tmrand "github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/evidence"
"github.com/tendermint/tendermint/pkg/light"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/version"
)
@@ -23,19 +28,19 @@ var defaultVoteTime = time.Date(2019, 1, 1, 0, 0, 0, 0, time.UTC)
func TestEvidenceList(t *testing.T) {
ev := randomDuplicateVoteEvidence(t)
evl := EvidenceList([]Evidence{ev})
evl := evidence.EvidenceList([]evidence.Evidence{ev})
assert.NotNil(t, evl.Hash())
assert.True(t, evl.Has(ev))
assert.False(t, evl.Has(&DuplicateVoteEvidence{}))
assert.False(t, evl.Has(&evidence.DuplicateVoteEvidence{}))
}
func randomDuplicateVoteEvidence(t *testing.T) *DuplicateVoteEvidence {
val := NewMockPV()
blockID := makeBlockID([]byte("blockhash"), 1000, []byte("partshash"))
blockID2 := makeBlockID([]byte("blockhash2"), 1000, []byte("partshash"))
func randomDuplicateVoteEvidence(t *testing.T) *evidence.DuplicateVoteEvidence {
val := consensus.NewMockPV()
blockID := test.MakeBlockIDWithHash([]byte("blockhash"))
blockID2 := test.MakeBlockIDWithHash([]byte("blockhash2"))
const chainID = "mychain"
return &DuplicateVoteEvidence{
return &evidence.DuplicateVoteEvidence{
VoteA: makeVote(t, val, chainID, 0, 10, 2, 1, blockID, defaultVoteTime),
VoteB: makeVote(t, val, chainID, 0, 10, 2, 1, blockID2, defaultVoteTime.Add(1*time.Minute)),
TotalVotingPower: 30,
@@ -46,34 +51,34 @@ func randomDuplicateVoteEvidence(t *testing.T) *DuplicateVoteEvidence {
func TestDuplicateVoteEvidence(t *testing.T) {
const height = int64(13)
ev := NewMockDuplicateVoteEvidence(height, time.Now(), "mock-chain-id")
ev := evidence.NewMockDuplicateVoteEvidence(height, time.Now(), "mock-chain-id")
assert.Equal(t, ev.Hash(), tmhash.Sum(ev.Bytes()))
assert.NotNil(t, ev.String())
assert.Equal(t, ev.Height(), height)
}
func TestDuplicateVoteEvidenceValidation(t *testing.T) {
val := NewMockPV()
blockID := makeBlockID(tmhash.Sum([]byte("blockhash")), math.MaxInt32, tmhash.Sum([]byte("partshash")))
blockID2 := makeBlockID(tmhash.Sum([]byte("blockhash2")), math.MaxInt32, tmhash.Sum([]byte("partshash")))
val := consensus.NewMockPV()
blockID := test.MakeBlockIDWithHash(tmhash.Sum([]byte("blockhash")))
blockID2 := test.MakeBlockIDWithHash(tmhash.Sum([]byte("blockhash2")))
const chainID = "mychain"
testCases := []struct {
testName string
malleateEvidence func(*DuplicateVoteEvidence)
malleateEvidence func(*evidence.DuplicateVoteEvidence)
expectErr bool
}{
{"Good DuplicateVoteEvidence", func(ev *DuplicateVoteEvidence) {}, false},
{"Nil vote A", func(ev *DuplicateVoteEvidence) { ev.VoteA = nil }, true},
{"Nil vote B", func(ev *DuplicateVoteEvidence) { ev.VoteB = nil }, true},
{"Nil votes", func(ev *DuplicateVoteEvidence) {
{"Good DuplicateVoteEvidence", func(ev *evidence.DuplicateVoteEvidence) {}, false},
{"Nil vote A", func(ev *evidence.DuplicateVoteEvidence) { ev.VoteA = nil }, true},
{"Nil vote B", func(ev *evidence.DuplicateVoteEvidence) { ev.VoteB = nil }, true},
{"Nil votes", func(ev *evidence.DuplicateVoteEvidence) {
ev.VoteA = nil
ev.VoteB = nil
}, true},
{"Invalid vote type", func(ev *DuplicateVoteEvidence) {
{"Invalid vote type", func(ev *evidence.DuplicateVoteEvidence) {
ev.VoteA = makeVote(t, val, chainID, math.MaxInt32, math.MaxInt64, math.MaxInt32, 0, blockID2, defaultVoteTime)
}, true},
{"Invalid vote order", func(ev *DuplicateVoteEvidence) {
{"Invalid vote order", func(ev *evidence.DuplicateVoteEvidence) {
swap := ev.VoteA.Copy()
ev.VoteA = ev.VoteB.Copy()
ev.VoteB = swap
@@ -84,8 +89,8 @@ func TestDuplicateVoteEvidenceValidation(t *testing.T) {
t.Run(tc.testName, func(t *testing.T) {
vote1 := makeVote(t, val, chainID, math.MaxInt32, math.MaxInt64, math.MaxInt32, 0x02, blockID, defaultVoteTime)
vote2 := makeVote(t, val, chainID, math.MaxInt32, math.MaxInt64, math.MaxInt32, 0x02, blockID2, defaultVoteTime)
valSet := NewValidatorSet([]*Validator{val.ExtractIntoValidator(10)})
ev := NewDuplicateVoteEvidence(vote1, vote2, defaultVoteTime, valSet)
valSet := consensus.NewValidatorSet([]*consensus.Validator{val.ExtractIntoValidator(10)})
ev := evidence.NewDuplicateVoteEvidence(vote1, vote2, defaultVoteTime, valSet)
tc.malleateEvidence(ev)
assert.Equal(t, tc.expectErr, ev.ValidateBasic() != nil, "Validate Basic had an unexpected result")
})
@@ -96,15 +101,15 @@ func TestLightClientAttackEvidenceBasic(t *testing.T) {
height := int64(5)
commonHeight := height - 1
nValidators := 10
voteSet, valSet, privVals := randVoteSet(height, 1, tmproto.PrecommitType, nValidators, 1)
voteSet, valSet, privVals := test.RandVoteSet(height, 1, tmproto.PrecommitType, nValidators, 1)
header := makeHeaderRandom()
header.Height = height
blockID := makeBlockID(tmhash.Sum([]byte("blockhash")), math.MaxInt32, tmhash.Sum([]byte("partshash")))
commit, err := makeCommit(blockID, height, 1, voteSet, privVals, defaultVoteTime)
blockID := test.MakeBlockIDWithHash(tmhash.Sum([]byte("blockhash")))
commit, err := test.MakeCommit(blockID, height, 1, voteSet, privVals, defaultVoteTime)
require.NoError(t, err)
lcae := &LightClientAttackEvidence{
ConflictingBlock: &LightBlock{
SignedHeader: &SignedHeader{
lcae := &evidence.LightClientAttackEvidence{
ConflictingBlock: &light.LightBlock{
SignedHeader: &light.SignedHeader{
Header: header,
Commit: commit,
},
@@ -123,18 +128,18 @@ func TestLightClientAttackEvidenceBasic(t *testing.T) {
// malleate evidence to test hash uniqueness
testCases := []struct {
testName string
malleateEvidence func(*LightClientAttackEvidence)
malleateEvidence func(*evidence.LightClientAttackEvidence)
}{
{"Different header", func(ev *LightClientAttackEvidence) { ev.ConflictingBlock.Header = makeHeaderRandom() }},
{"Different common height", func(ev *LightClientAttackEvidence) {
{"Different header", func(ev *evidence.LightClientAttackEvidence) { ev.ConflictingBlock.Header = makeHeaderRandom() }},
{"Different common height", func(ev *evidence.LightClientAttackEvidence) {
ev.CommonHeight = height + 1
}},
}
for _, tc := range testCases {
lcae := &LightClientAttackEvidence{
ConflictingBlock: &LightBlock{
SignedHeader: &SignedHeader{
lcae := &evidence.LightClientAttackEvidence{
ConflictingBlock: &light.LightBlock{
SignedHeader: &light.SignedHeader{
Header: header,
Commit: commit,
},
@@ -155,16 +160,16 @@ func TestLightClientAttackEvidenceValidation(t *testing.T) {
height := int64(5)
commonHeight := height - 1
nValidators := 10
voteSet, valSet, privVals := randVoteSet(height, 1, tmproto.PrecommitType, nValidators, 1)
voteSet, valSet, privVals := test.RandVoteSet(height, 1, tmproto.PrecommitType, nValidators, 1)
header := makeHeaderRandom()
header.Height = height
header.ValidatorsHash = valSet.Hash()
blockID := makeBlockID(header.Hash(), math.MaxInt32, tmhash.Sum([]byte("partshash")))
commit, err := makeCommit(blockID, height, 1, voteSet, privVals, time.Now())
blockID := test.MakeBlockIDWithHash(header.Hash())
commit, err := test.MakeCommit(blockID, height, 1, voteSet, privVals, time.Now())
require.NoError(t, err)
lcae := &LightClientAttackEvidence{
ConflictingBlock: &LightBlock{
SignedHeader: &SignedHeader{
lcae := &evidence.LightClientAttackEvidence{
ConflictingBlock: &light.LightBlock{
SignedHeader: &light.SignedHeader{
Header: header,
Commit: commit,
},
@@ -179,32 +184,32 @@ func TestLightClientAttackEvidenceValidation(t *testing.T) {
testCases := []struct {
testName string
malleateEvidence func(*LightClientAttackEvidence)
malleateEvidence func(*evidence.LightClientAttackEvidence)
expectErr bool
}{
{"Good LightClientAttackEvidence", func(ev *LightClientAttackEvidence) {}, false},
{"Negative height", func(ev *LightClientAttackEvidence) { ev.CommonHeight = -10 }, true},
{"Height is greater than divergent block", func(ev *LightClientAttackEvidence) {
{"Good LightClientAttackEvidence", func(ev *evidence.LightClientAttackEvidence) {}, false},
{"Negative height", func(ev *evidence.LightClientAttackEvidence) { ev.CommonHeight = -10 }, true},
{"Height is greater than divergent block", func(ev *evidence.LightClientAttackEvidence) {
ev.CommonHeight = height + 1
}, true},
{"Height is equal to the divergent block", func(ev *LightClientAttackEvidence) {
{"Height is equal to the divergent block", func(ev *evidence.LightClientAttackEvidence) {
ev.CommonHeight = height
}, false},
{"Nil conflicting header", func(ev *LightClientAttackEvidence) { ev.ConflictingBlock.Header = nil }, true},
{"Nil conflicting block", func(ev *LightClientAttackEvidence) { ev.ConflictingBlock = nil }, true},
{"Nil validator set", func(ev *LightClientAttackEvidence) {
ev.ConflictingBlock.ValidatorSet = &ValidatorSet{}
{"Nil conflicting header", func(ev *evidence.LightClientAttackEvidence) { ev.ConflictingBlock.Header = nil }, true},
{"Nil conflicting block", func(ev *evidence.LightClientAttackEvidence) { ev.ConflictingBlock = nil }, true},
{"Nil validator set", func(ev *evidence.LightClientAttackEvidence) {
ev.ConflictingBlock.ValidatorSet = &consensus.ValidatorSet{}
}, true},
{"Negative total voting power", func(ev *LightClientAttackEvidence) {
{"Negative total voting power", func(ev *evidence.LightClientAttackEvidence) {
ev.TotalVotingPower = -1
}, true},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.testName, func(t *testing.T) {
lcae := &LightClientAttackEvidence{
ConflictingBlock: &LightBlock{
SignedHeader: &SignedHeader{
lcae := &evidence.LightClientAttackEvidence{
ConflictingBlock: &light.LightBlock{
SignedHeader: &light.SignedHeader{
Header: header,
Commit: commit,
},
@@ -227,16 +232,16 @@ func TestLightClientAttackEvidenceValidation(t *testing.T) {
}
func TestMockEvidenceValidateBasic(t *testing.T) {
goodEvidence := NewMockDuplicateVoteEvidence(int64(1), time.Now(), "mock-chain-id")
goodEvidence := evidence.NewMockDuplicateVoteEvidence(int64(1), time.Now(), "mock-chain-id")
assert.Nil(t, goodEvidence.ValidateBasic())
}
func makeVote(
t *testing.T, val PrivValidator, chainID string, valIndex int32, height int64, round int32, step int, blockID BlockID,
time time.Time) *Vote {
t *testing.T, val consensus.PrivValidator, chainID string, valIndex int32, height int64, round int32, step int, blockID meta.BlockID,
time time.Time) *consensus.Vote {
pubKey, err := val.GetPubKey(context.Background())
require.NoError(t, err)
v := &Vote{
v := &consensus.Vote{
ValidatorAddress: pubKey.Address(),
ValidatorIndex: valIndex,
Height: height,
@@ -255,13 +260,13 @@ func makeVote(
return v
}
func makeHeaderRandom() *Header {
return &Header{
func makeHeaderRandom() *meta.Header {
return &meta.Header{
Version: version.Consensus{Block: version.BlockProtocol, App: 1},
ChainID: tmrand.Str(12),
Height: int64(mrand.Uint32() + 1),
Time: time.Now(),
LastBlockID: makeBlockIDRandom(),
LastBlockID: test.MakeBlockID(),
LastCommitHash: crypto.CRandBytes(tmhash.Size),
DataHash: crypto.CRandBytes(tmhash.Size),
ValidatorsHash: crypto.CRandBytes(tmhash.Size),
@@ -276,36 +281,36 @@ func makeHeaderRandom() *Header {
func TestEvidenceProto(t *testing.T) {
// -------- Votes --------
val := NewMockPV()
blockID := makeBlockID(tmhash.Sum([]byte("blockhash")), math.MaxInt32, tmhash.Sum([]byte("partshash")))
blockID2 := makeBlockID(tmhash.Sum([]byte("blockhash2")), math.MaxInt32, tmhash.Sum([]byte("partshash")))
val := consensus.NewMockPV()
blockID := test.MakeBlockIDWithHash(tmhash.Sum([]byte("blockhash")))
blockID2 := test.MakeBlockIDWithHash(tmhash.Sum([]byte("blockhash2")))
const chainID = "mychain"
v := makeVote(t, val, chainID, math.MaxInt32, math.MaxInt64, 1, 0x01, blockID, defaultVoteTime)
v2 := makeVote(t, val, chainID, math.MaxInt32, math.MaxInt64, 2, 0x01, blockID2, defaultVoteTime)
tests := []struct {
testName string
evidence Evidence
evidence evidence.Evidence
toProtoErr bool
fromProtoErr bool
}{
{"nil fail", nil, true, true},
{"DuplicateVoteEvidence empty fail", &DuplicateVoteEvidence{}, false, true},
{"DuplicateVoteEvidence nil voteB", &DuplicateVoteEvidence{VoteA: v, VoteB: nil}, false, true},
{"DuplicateVoteEvidence nil voteA", &DuplicateVoteEvidence{VoteA: nil, VoteB: v}, false, true},
{"DuplicateVoteEvidence success", &DuplicateVoteEvidence{VoteA: v2, VoteB: v}, false, false},
{"DuplicateVoteEvidence empty fail", &evidence.DuplicateVoteEvidence{}, false, true},
{"DuplicateVoteEvidence nil voteB", &evidence.DuplicateVoteEvidence{VoteA: v, VoteB: nil}, false, true},
{"DuplicateVoteEvidence nil voteA", &evidence.DuplicateVoteEvidence{VoteA: nil, VoteB: v}, false, true},
{"DuplicateVoteEvidence success", &evidence.DuplicateVoteEvidence{VoteA: v2, VoteB: v}, false, false},
}
for _, tt := range tests {
tt := tt
t.Run(tt.testName, func(t *testing.T) {
pb, err := EvidenceToProto(tt.evidence)
pb, err := evidence.EvidenceToProto(tt.evidence)
if tt.toProtoErr {
assert.Error(t, err, tt.testName)
return
}
assert.NoError(t, err, tt.testName)
evi, err := EvidenceFromProto(pb)
evi, err := evidence.EvidenceFromProto(pb)
if tt.fromProtoErr {
assert.Error(t, err, tt.testName)
return
@@ -317,10 +322,10 @@ func TestEvidenceProto(t *testing.T) {
func TestEvidenceVectors(t *testing.T) {
// Votes for duplicateEvidence
val := NewMockPV()
val := consensus.NewMockPV()
val.PrivKey = ed25519.GenPrivKeyFromSecret([]byte("it's a secret")) // deterministic key
blockID := makeBlockID(tmhash.Sum([]byte("blockhash")), math.MaxInt32, tmhash.Sum([]byte("partshash")))
blockID2 := makeBlockID(tmhash.Sum([]byte("blockhash2")), math.MaxInt32, tmhash.Sum([]byte("partshash")))
blockID := test.MakeBlockIDWithHash(tmhash.Sum([]byte("blockhash")))
blockID2 := test.MakeBlockIDWithHash(tmhash.Sum([]byte("blockhash2")))
const chainID = "mychain"
v := makeVote(t, val, chainID, math.MaxInt32, math.MaxInt64, 1, 0x01, blockID, defaultVoteTime)
v2 := makeVote(t, val, chainID, math.MaxInt32, math.MaxInt64, 2, 0x01, blockID2, defaultVoteTime)
@@ -329,13 +334,13 @@ func TestEvidenceVectors(t *testing.T) {
height := int64(5)
commonHeight := height - 1
nValidators := 10
voteSet, valSet, privVals := deterministicVoteSet(height, 1, tmproto.PrecommitType, 1)
header := &Header{
voteSet, valSet, privVals := test.DeterministicVoteSet(height, 1, tmproto.PrecommitType, 1)
header := &meta.Header{
Version: version.Consensus{Block: 1, App: 1},
ChainID: chainID,
Height: height,
Time: time.Date(math.MaxInt64, 0, 0, 0, 0, 0, math.MaxInt64, time.UTC),
LastBlockID: BlockID{},
LastBlockID: meta.BlockID{},
LastCommitHash: []byte("f2564c78071e26643ae9b3e2a19fa0dc10d4d9e873aa0be808660123f11a1e78"),
DataHash: []byte("f2564c78071e26643ae9b3e2a19fa0dc10d4d9e873aa0be808660123f11a1e78"),
ValidatorsHash: valSet.Hash(),
@@ -348,12 +353,12 @@ func TestEvidenceVectors(t *testing.T) {
EvidenceHash: []byte("f2564c78071e26643ae9b3e2a19fa0dc10d4d9e873aa0be808660123f11a1e78"),
ProposerAddress: []byte("2915b7b15f979e48ebc61774bb1d86ba3136b7eb"),
}
blockID3 := makeBlockID(header.Hash(), math.MaxInt32, tmhash.Sum([]byte("partshash")))
commit, err := makeCommit(blockID3, height, 1, voteSet, privVals, defaultVoteTime)
blockID3 := test.MakeBlockIDWithHash(header.Hash())
commit, err := test.MakeCommit(blockID3, height, 1, voteSet, privVals, defaultVoteTime)
require.NoError(t, err)
lcae := &LightClientAttackEvidence{
ConflictingBlock: &LightBlock{
SignedHeader: &SignedHeader{
lcae := &evidence.LightClientAttackEvidence{
ConflictingBlock: &light.LightBlock{
SignedHeader: &light.SignedHeader{
Header: header,
Commit: commit,
},
@@ -368,19 +373,19 @@ func TestEvidenceVectors(t *testing.T) {
testCases := []struct {
testName string
evList EvidenceList
evList evidence.EvidenceList
expBytes string
}{
{"duplicateVoteEvidence",
EvidenceList{&DuplicateVoteEvidence{VoteA: v2, VoteB: v}},
evidence.EvidenceList{&evidence.DuplicateVoteEvidence{VoteA: v2, VoteB: v}},
"a9ce28d13bb31001fc3e5b7927051baf98f86abdbd64377643a304164c826923",
},
{"LightClientAttackEvidence",
EvidenceList{lcae},
evidence.EvidenceList{lcae},
"2f8782163c3905b26e65823ababc977fe54e97b94e60c0360b1e4726b668bb8e",
},
{"LightClientAttackEvidence & DuplicateVoteEvidence",
EvidenceList{&DuplicateVoteEvidence{VoteA: v2, VoteB: v}, lcae},
evidence.EvidenceList{&evidence.DuplicateVoteEvidence{VoteA: v2, VoteB: v}, lcae},
"eedb4b47d6dbc9d43f53da8aa50bb826e8d9fc7d897da777c8af6a04aa74163e",
},
}

View File

@@ -1,10 +1,12 @@
package types
package light
import (
"bytes"
"errors"
"fmt"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
@@ -12,7 +14,7 @@ import (
// It is the basis of the light client
type LightBlock struct {
*SignedHeader `json:"signed_header"`
ValidatorSet *ValidatorSet `json:"validator_set"`
ValidatorSet *consensus.ValidatorSet `json:"validator_set"`
}
// ValidateBasic checks that the data is correct and consistent
@@ -101,7 +103,7 @@ func LightBlockFromProto(pb *tmproto.LightBlock) (*LightBlock, error) {
}
if pb.ValidatorSet != nil {
vals, err := ValidatorSetFromProto(pb.ValidatorSet)
vals, err := consensus.ValidatorSetFromProto(pb.ValidatorSet)
if err != nil {
return nil, err
}
@@ -115,9 +117,9 @@ func LightBlockFromProto(pb *tmproto.LightBlock) (*LightBlock, error) {
// SignedHeader is a header along with the commits that prove it.
type SignedHeader struct {
*Header `json:"header"`
*meta.Header `json:"header"`
Commit *Commit `json:"commit"`
Commit *meta.Commit `json:"commit"`
}
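The core consistency rule a SignedHeader must satisfy is that the commit sits at the header's height and commits to the header's hash. A minimal sketch of that check, where the types and hashing scheme are stand-ins rather than Tendermint's:

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"errors"
	"fmt"
)

// header and commit are hypothetical stand-ins for meta.Header and
// meta.Commit; the hash is an ad-hoc digest, not Tendermint's header hash.
type header struct {
	chainID string
	height  int64
}

func (h header) hash() []byte {
	sum := sha256.Sum256([]byte(fmt.Sprintf("%s/%d", h.chainID, h.height)))
	return sum[:]
}

type commit struct {
	height    int64
	blockHash []byte
}

// validateSignedHeader enforces the two invariants described above: the
// commit's height matches the header, and the commit signs this header.
func validateSignedHeader(h header, c commit) error {
	if h.height != c.height {
		return errors.New("header and commit height mismatch")
	}
	if !bytes.Equal(h.hash(), c.blockHash) {
		return errors.New("commit does not sign this header")
	}
	return nil
}

func main() {
	h := header{chainID: "mychain", height: 5}
	fmt.Println(validateSignedHeader(h, commit{height: 5, blockHash: h.hash()})) // <nil>
}
```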
// ValidateBasic does basic consistency checks and makes sure the header
@@ -202,7 +204,7 @@ func SignedHeaderFromProto(shp *tmproto.SignedHeader) (*SignedHeader, error) {
sh := new(SignedHeader)
if shp.Header != nil {
h, err := HeaderFromProto(shp.Header)
h, err := meta.HeaderFromProto(shp.Header)
if err != nil {
return nil, err
}
@@ -210,7 +212,7 @@ func SignedHeaderFromProto(shp *tmproto.SignedHeader) (*SignedHeader, error) {
}
if shp.Commit != nil {
c, err := CommitFromProto(shp.Commit)
c, err := meta.CommitFromProto(shp.Commit)
if err != nil {
return nil, err
}

View File

@@ -1,4 +1,4 @@
package types
package light_test
import (
"math"
@@ -6,43 +6,48 @@ import (
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/crypto"
test "github.com/tendermint/tendermint/internal/test/factory"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/light"
"github.com/tendermint/tendermint/pkg/meta"
"github.com/tendermint/tendermint/version"
)
func TestLightBlockValidateBasic(t *testing.T) {
header := MakeRandHeader()
commit := randCommit(time.Now())
vals, _ := randValidatorPrivValSet(5, 1)
header := test.MakeRandomHeader()
commit := test.MakeRandomCommit(time.Now())
vals, _ := test.RandValidatorPrivValSet(5, 1)
header.Height = commit.Height
header.LastBlockID = commit.BlockID
header.ValidatorsHash = vals.Hash()
header.Version.Block = version.BlockProtocol
vals2, _ := randValidatorPrivValSet(3, 1)
vals2, _ := test.RandValidatorPrivValSet(3, 1)
vals3 := vals.Copy()
vals3.Proposer = &Validator{}
vals3.Proposer = &consensus.Validator{}
commit.BlockID.Hash = header.Hash()
sh := &SignedHeader{
Header: &header,
sh := &light.SignedHeader{
Header: header,
Commit: commit,
}
testCases := []struct {
name string
sh *SignedHeader
vals *ValidatorSet
sh *light.SignedHeader
vals *consensus.ValidatorSet
expectErr bool
}{
{"valid light block", sh, vals, false},
{"hashes don't match", sh, vals2, true},
{"invalid validator set", sh, vals3, true},
{"invalid signed header", &SignedHeader{Header: &header, Commit: randCommit(time.Now())}, vals, true},
{"invalid signed header", &light.SignedHeader{Header: header, Commit: test.MakeRandomCommit(time.Now())}, vals, true},
}
for _, tc := range testCases {
lightBlock := LightBlock{
lightBlock := light.LightBlock{
SignedHeader: tc.sh,
ValidatorSet: tc.vals,
}
@@ -57,37 +62,37 @@ func TestLightBlockValidateBasic(t *testing.T) {
}
func TestLightBlockProtobuf(t *testing.T) {
header := MakeRandHeader()
commit := randCommit(time.Now())
vals, _ := randValidatorPrivValSet(5, 1)
header := test.MakeRandomHeader()
commit := test.MakeRandomCommit(time.Now())
vals, _ := test.RandValidatorPrivValSet(5, 1)
header.Height = commit.Height
header.LastBlockID = commit.BlockID
header.Version.Block = version.BlockProtocol
header.ValidatorsHash = vals.Hash()
vals3 := vals.Copy()
vals3.Proposer = &Validator{}
vals3.Proposer = &consensus.Validator{}
commit.BlockID.Hash = header.Hash()
sh := &SignedHeader{
Header: &header,
sh := &light.SignedHeader{
Header: header,
Commit: commit,
}
testCases := []struct {
name string
sh *SignedHeader
vals *ValidatorSet
sh *light.SignedHeader
vals *consensus.ValidatorSet
toProtoErr bool
toBlockErr bool
}{
{"valid light block", sh, vals, false, false},
{"empty signed header", &SignedHeader{}, vals, false, false},
{"empty validator set", sh, &ValidatorSet{}, false, true},
{"empty light block", &SignedHeader{}, &ValidatorSet{}, false, true},
{"empty signed header", &light.SignedHeader{}, vals, false, false},
{"empty validator set", sh, &consensus.ValidatorSet{}, false, true},
{"empty light block", &light.SignedHeader{}, &consensus.ValidatorSet{}, false, true},
}
for _, tc := range testCases {
lightBlock := &LightBlock{
lightBlock := &light.LightBlock{
SignedHeader: tc.sh,
ValidatorSet: tc.vals,
}
@@ -98,7 +103,7 @@ func TestLightBlockProtobuf(t *testing.T) {
assert.NoError(t, err, tc.name)
}
lb, err := LightBlockFromProto(lbp)
lb, err := light.LightBlockFromProto(lbp)
if tc.toBlockErr {
assert.Error(t, err, tc.name)
} else {
@@ -110,10 +115,10 @@ func TestLightBlockProtobuf(t *testing.T) {
}
func TestSignedHeaderValidateBasic(t *testing.T) {
commit := randCommit(time.Now())
commit := test.MakeRandomCommit(time.Now())
chainID := "𠜎"
timestamp := time.Date(math.MaxInt64, 0, 0, 0, 0, 0, math.MaxInt64, time.UTC)
h := Header{
h := meta.Header{
Version: version.Consensus{Block: version.BlockProtocol, App: math.MaxInt64},
ChainID: chainID,
Height: commit.Height,
@@ -130,14 +135,14 @@ func TestSignedHeaderValidateBasic(t *testing.T) {
ProposerAddress: crypto.AddressHash([]byte("proposer_address")),
}
validSignedHeader := SignedHeader{Header: &h, Commit: commit}
validSignedHeader := light.SignedHeader{Header: &h, Commit: commit}
validSignedHeader.Commit.BlockID.Hash = validSignedHeader.Hash()
invalidSignedHeader := SignedHeader{}
invalidSignedHeader := light.SignedHeader{}
testCases := []struct {
testName string
shHeader *Header
shCommit *Commit
shHeader *meta.Header
shCommit *meta.Commit
expectErr bool
}{
{"Valid Signed Header", validSignedHeader.Header, validSignedHeader.Commit, false},
@@ -148,7 +153,7 @@ func TestSignedHeaderValidateBasic(t *testing.T) {
for _, tc := range testCases {
tc := tc
t.Run(tc.testName, func(t *testing.T) {
sh := SignedHeader{
sh := light.SignedHeader{
Header: tc.shHeader,
Commit: tc.shCommit,
}
@@ -163,3 +168,32 @@ func TestSignedHeaderValidateBasic(t *testing.T) {
})
}
}
func TestSignedHeaderProtoBuf(t *testing.T) {
commit := test.MakeRandomCommit(time.Now())
h := test.MakeRandomHeader()
sh := light.SignedHeader{Header: h, Commit: commit}
testCases := []struct {
msg string
sh1 *light.SignedHeader
expPass bool
}{
{"empty SignedHeader 2", &light.SignedHeader{}, true},
{"success", &sh, true},
{"failure nil", nil, false},
}
for _, tc := range testCases {
protoSignedHeader := tc.sh1.ToProto()
sh, err := light.SignedHeaderFromProto(protoSignedHeader)
if tc.expPass {
require.NoError(t, err, tc.msg)
require.Equal(t, tc.sh1, sh, tc.msg)
} else {
require.Error(t, err, tc.msg)
}
}
}

View File

@@ -1,4 +1,4 @@
package types
package mempool
import (
"bytes"
@@ -137,11 +137,3 @@ func TxProofFromProto(pb tmproto.TxProof) (TxProof, error) {
return pbtp, nil
}
// ComputeProtoSizeForTxs wraps the transactions in tmproto.Data{} and calculates the size.
// https://developers.google.com/protocol-buffers/docs/encoding
func ComputeProtoSizeForTxs(txs []Tx) int64 {
data := Data{Txs: txs}
pdData := data.ToProto()
return int64(pdData.Size())
}

View File

@@ -1,4 +1,4 @@
package types
package mempool
import (
"bytes"

View File

@@ -1,4 +1,4 @@
package types
package meta
import (
"time"
@@ -38,32 +38,6 @@ func CanonicalizePartSetHeader(psh tmproto.PartSetHeader) tmproto.CanonicalPartS
return tmproto.CanonicalPartSetHeader(psh)
}
// CanonicalizeVote transforms the given Proposal to a CanonicalProposal.
func CanonicalizeProposal(chainID string, proposal *tmproto.Proposal) tmproto.CanonicalProposal {
return tmproto.CanonicalProposal{
Type: tmproto.ProposalType,
Height: proposal.Height, // encoded as sfixed64
Round: int64(proposal.Round), // encoded as sfixed64
POLRound: int64(proposal.PolRound),
BlockID: CanonicalizeBlockID(proposal.BlockID),
Timestamp: proposal.Timestamp,
ChainID: chainID,
}
}
// CanonicalizeVote transforms the given Vote to a CanonicalVote, which does
// not contain ValidatorIndex and ValidatorAddress fields.
func CanonicalizeVote(chainID string, vote *tmproto.Vote) tmproto.CanonicalVote {
return tmproto.CanonicalVote{
Type: vote.Type,
Height: vote.Height, // encoded as sfixed64
Round: int64(vote.Round), // encoded as sfixed64
BlockID: CanonicalizeBlockID(vote.BlockID),
Timestamp: vote.Timestamp,
ChainID: chainID,
}
}
// CanonicalTime can be used to stringify time in a canonical way.
func CanonicalTime(t time.Time) string {
// Note that sending time over amino resets it to


@@ -1,4 +1,4 @@
package types
package meta_test
import (
"reflect"
@@ -6,6 +6,7 @@ import (
"github.com/tendermint/tendermint/crypto/tmhash"
tmrand "github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
@@ -31,7 +32,7 @@ func TestCanonicalizeBlockID(t *testing.T) {
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
if got := CanonicalizeBlockID(tt.args); !reflect.DeepEqual(got, tt.want) {
if got := meta.CanonicalizeBlockID(tt.args); !reflect.DeepEqual(got, tt.want) {
t.Errorf("CanonicalizeBlockID() = %v, want %v", got, tt.want)
}
})

pkg/meta/commit.go (new file, 379 lines)

@@ -0,0 +1,379 @@
package meta
import (
"crypto/ed25519"
"errors"
"fmt"
"strings"
"time"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/merkle"
"github.com/tendermint/tendermint/libs/bits"
tmbytes "github.com/tendermint/tendermint/libs/bytes"
tmmath "github.com/tendermint/tendermint/libs/math"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
var (
// MaxSignatureSize is a maximum allowed signature size for the Proposal
// and Vote.
// XXX: secp256k1 does not have Size nor MaxSize defined.
MaxSignatureSize = tmmath.MaxInt(ed25519.SignatureSize, 64)
)
//-------------------------------------
const (
// Max size of commit without any commitSigs -> 82 for BlockID, 8 for Height, 4 for Round.
MaxCommitOverheadBytes int64 = 94
// Commit sig size is made up of 64 bytes for the signature, 20 bytes for the address,
// 1 byte for the flag and 14 bytes for the timestamp
MaxCommitSigBytes int64 = 109
)
// CommitSig is a part of the Vote included in a Commit.
type CommitSig struct {
BlockIDFlag BlockIDFlag `json:"block_id_flag"`
ValidatorAddress crypto.Address `json:"validator_address"`
Timestamp time.Time `json:"timestamp"`
Signature []byte `json:"signature"`
}
// NewCommitSigForBlock returns a new CommitSig with BlockIDFlagCommit.
func NewCommitSigForBlock(signature []byte, valAddr crypto.Address, ts time.Time) CommitSig {
return CommitSig{
BlockIDFlag: BlockIDFlagCommit,
ValidatorAddress: valAddr,
Timestamp: ts,
Signature: signature,
}
}
func MaxCommitBytes(valCount int) int64 {
// From the repeated commit sig field
var protoEncodingOverhead int64 = 2
return MaxCommitOverheadBytes + ((MaxCommitSigBytes + protoEncodingOverhead) * int64(valCount))
}
// NewCommitSigAbsent returns a new CommitSig with BlockIDFlagAbsent. All
// other fields are empty.
func NewCommitSigAbsent() CommitSig {
return CommitSig{
BlockIDFlag: BlockIDFlagAbsent,
}
}
// ForBlock returns true if CommitSig is for the block.
func (cs CommitSig) ForBlock() bool {
return cs.BlockIDFlag == BlockIDFlagCommit
}
// Absent returns true if CommitSig is absent.
func (cs CommitSig) Absent() bool {
return cs.BlockIDFlag == BlockIDFlagAbsent
}
// String returns a string representation of CommitSig.
//
// 1. first 6 bytes of signature
// 2. first 6 bytes of validator address
// 3. block ID flag
// 4. timestamp
func (cs CommitSig) String() string {
return fmt.Sprintf("CommitSig{%X by %X on %v @ %s}",
tmbytes.Fingerprint(cs.Signature),
tmbytes.Fingerprint(cs.ValidatorAddress),
cs.BlockIDFlag,
CanonicalTime(cs.Timestamp))
}
// BlockID returns the Commit's BlockID if CommitSig indicates signing,
// otherwise an empty BlockID.
func (cs CommitSig) BlockID(commitBlockID BlockID) BlockID {
var blockID BlockID
switch cs.BlockIDFlag {
case BlockIDFlagAbsent:
blockID = BlockID{}
case BlockIDFlagCommit:
blockID = commitBlockID
case BlockIDFlagNil:
blockID = BlockID{}
default:
panic(fmt.Sprintf("Unknown BlockIDFlag: %v", cs.BlockIDFlag))
}
return blockID
}
// ValidateBasic performs basic validation.
func (cs CommitSig) ValidateBasic() error {
switch cs.BlockIDFlag {
case BlockIDFlagAbsent:
case BlockIDFlagCommit:
case BlockIDFlagNil:
default:
return fmt.Errorf("unknown BlockIDFlag: %v", cs.BlockIDFlag)
}
switch cs.BlockIDFlag {
case BlockIDFlagAbsent:
if len(cs.ValidatorAddress) != 0 {
return errors.New("validator address is present")
}
if !cs.Timestamp.IsZero() {
return errors.New("time is present")
}
if len(cs.Signature) != 0 {
return errors.New("signature is present")
}
default:
if len(cs.ValidatorAddress) != crypto.AddressSize {
return fmt.Errorf("expected ValidatorAddress size to be %d bytes, got %d bytes",
crypto.AddressSize,
len(cs.ValidatorAddress),
)
}
// NOTE: Timestamp validation is subtle and handled elsewhere.
if len(cs.Signature) == 0 {
return errors.New("signature is missing")
}
if len(cs.Signature) > MaxSignatureSize {
return fmt.Errorf("signature is too big (max: %d)", MaxSignatureSize)
}
}
return nil
}
// ToProto converts CommitSig to protobuf
func (cs *CommitSig) ToProto() *tmproto.CommitSig {
if cs == nil {
return nil
}
return &tmproto.CommitSig{
BlockIdFlag: tmproto.BlockIDFlag(cs.BlockIDFlag),
ValidatorAddress: cs.ValidatorAddress,
Timestamp: cs.Timestamp,
Signature: cs.Signature,
}
}
// FromProto sets a protobuf CommitSig to the given pointer.
// It returns an error if the CommitSig is invalid.
func (cs *CommitSig) FromProto(csp tmproto.CommitSig) error {
cs.BlockIDFlag = BlockIDFlag(csp.BlockIdFlag)
cs.ValidatorAddress = csp.ValidatorAddress
cs.Timestamp = csp.Timestamp
cs.Signature = csp.Signature
return cs.ValidateBasic()
}
//-------------------------------------
// Commit contains the evidence that a block was committed by a set of validators.
// NOTE: Commit is empty for height 1, but never nil.
type Commit struct {
// NOTE: The signatures are in order of address to preserve the bonded
// ValidatorSet order.
// Any peer with a block can gossip signatures by index with a peer without
// recalculating the active ValidatorSet.
Height int64 `json:"height"`
Round int32 `json:"round"`
BlockID BlockID `json:"block_id"`
Signatures []CommitSig `json:"signatures"`
// Memoized in first call to corresponding method.
// NOTE: can't memoize in constructor because constructor isn't used for
// unmarshaling.
hash tmbytes.HexBytes
bitArray *bits.BitArray
}
// NewCommit returns a new Commit.
func NewCommit(height int64, round int32, blockID BlockID, commitSigs []CommitSig) *Commit {
return &Commit{
Height: height,
Round: round,
BlockID: blockID,
Signatures: commitSigs,
}
}
// Type returns the vote type of the commit, which is always tmproto.PrecommitType.
// Implements VoteSetReader.
func (commit *Commit) Type() byte {
return byte(tmproto.PrecommitType)
}
// GetHeight returns height of the commit.
// Implements VoteSetReader.
func (commit *Commit) GetHeight() int64 {
return commit.Height
}
// GetRound returns the round of the commit.
// Implements VoteSetReader.
func (commit *Commit) GetRound() int32 {
return commit.Round
}
// Size returns the number of signatures in the commit.
// Implements VoteSetReader.
func (commit *Commit) Size() int {
if commit == nil {
return 0
}
return len(commit.Signatures)
}
// ClearCache removes the saved hash. This is predominantly used for testing.
func (commit *Commit) ClearCache() {
commit.hash = nil
commit.bitArray = nil
}
// BitArray returns a BitArray of which validators voted for BlockID or nil in this commit.
// Implements VoteSetReader.
func (commit *Commit) BitArray() *bits.BitArray {
if commit.bitArray == nil {
commit.bitArray = bits.NewBitArray(len(commit.Signatures))
for i, commitSig := range commit.Signatures {
// TODO: need to check the BlockID otherwise we could be counting conflicts,
// not just the one with +2/3 !
commit.bitArray.SetIndex(i, !commitSig.Absent())
}
}
return commit.bitArray
}
// IsCommit returns true if there is at least one signature.
// Implements VoteSetReader.
func (commit *Commit) IsCommit() bool {
return len(commit.Signatures) != 0
}
// ValidateBasic performs basic validation that doesn't involve state data.
// Does not actually check the cryptographic signatures.
func (commit *Commit) ValidateBasic() error {
if commit.Height < 0 {
return errors.New("negative Height")
}
if commit.Round < 0 {
return errors.New("negative Round")
}
if commit.Height >= 1 {
if commit.BlockID.IsZero() {
return errors.New("commit cannot be for nil block")
}
if len(commit.Signatures) == 0 {
return errors.New("no signatures in commit")
}
for i, commitSig := range commit.Signatures {
if err := commitSig.ValidateBasic(); err != nil {
return fmt.Errorf("wrong CommitSig #%d: %v", i, err)
}
}
}
return nil
}
// Hash returns the hash of the commit
func (commit *Commit) Hash() tmbytes.HexBytes {
if commit == nil {
return nil
}
if commit.hash == nil {
bs := make([][]byte, len(commit.Signatures))
for i, commitSig := range commit.Signatures {
pbcs := commitSig.ToProto()
bz, err := pbcs.Marshal()
if err != nil {
panic(err)
}
bs[i] = bz
}
commit.hash = merkle.HashFromByteSlices(bs)
}
return commit.hash
}
// StringIndented returns a string representation of the commit.
func (commit *Commit) StringIndented(indent string) string {
if commit == nil {
return "nil-Commit"
}
commitSigStrings := make([]string, len(commit.Signatures))
for i, commitSig := range commit.Signatures {
commitSigStrings[i] = commitSig.String()
}
return fmt.Sprintf(`Commit{
%s Height: %d
%s Round: %d
%s BlockID: %v
%s Signatures:
%s %v
%s}#%v`,
indent, commit.Height,
indent, commit.Round,
indent, commit.BlockID,
indent,
indent, strings.Join(commitSigStrings, "\n"+indent+" "),
indent, commit.hash)
}
// ToProto converts Commit to protobuf
func (commit *Commit) ToProto() *tmproto.Commit {
if commit == nil {
return nil
}
c := new(tmproto.Commit)
sigs := make([]tmproto.CommitSig, len(commit.Signatures))
for i := range commit.Signatures {
sigs[i] = *commit.Signatures[i].ToProto()
}
c.Signatures = sigs
c.Height = commit.Height
c.Round = commit.Round
c.BlockID = commit.BlockID.ToProto()
return c
}
// CommitFromProto creates a Commit from the given protobuf representation.
// It returns an error if the commit is invalid.
func CommitFromProto(cp *tmproto.Commit) (*Commit, error) {
if cp == nil {
return nil, errors.New("nil Commit")
}
var (
commit = new(Commit)
)
bi, err := BlockIDFromProto(&cp.BlockID)
if err != nil {
return nil, err
}
sigs := make([]CommitSig, len(cp.Signatures))
for i := range cp.Signatures {
if err := sigs[i].FromProto(cp.Signatures[i]); err != nil {
return nil, err
}
}
commit.Signatures = sigs
commit.Height = cp.Height
commit.Round = cp.Round
commit.BlockID = *bi
return commit, commit.ValidateBasic()
}
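The `MaxCommitBytes` bound above is pure arithmetic over the two constants defined in this file. As a standalone sketch (the constant values are copied from `pkg/meta/commit.go`; this is an illustration outside any tendermint package, not the package's own API), the per-validator growth looks like this:

```go
package main

import "fmt"

// maxCommitBytes mirrors the arithmetic in pkg/meta/commit.go:
// 94 bytes of fixed overhead (BlockID + Height + Round), plus
// (109 + 2) bytes per commit signature, where 2 bytes account for
// the repeated-field protobuf encoding overhead.
func maxCommitBytes(valCount int) int64 {
	const (
		maxCommitOverheadBytes int64 = 94
		maxCommitSigBytes      int64 = 109
		protoEncodingOverhead  int64 = 2
	)
	return maxCommitOverheadBytes + (maxCommitSigBytes+protoEncodingOverhead)*int64(valCount)
}

func main() {
	fmt.Println(maxCommitBytes(1))   // 205
	fmt.Println(maxCommitBytes(100)) // 11194
}
```

The bound is linear in the validator count, which is what lets `TestMaxCommitBytes` below check it exactly against a maximally sized proto-encoded commit.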

pkg/meta/commit_test.go (new file, 284 lines)

@@ -0,0 +1,284 @@
package meta_test
import (
"math"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/tmhash"
test "github.com/tendermint/tendermint/internal/test/factory"
"github.com/tendermint/tendermint/libs/bits"
"github.com/tendermint/tendermint/pkg/consensus"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
func TestCommit(t *testing.T) {
lastID := test.MakeBlockID()
h := int64(3)
voteSet, _, vals := test.RandVoteSet(h-1, 1, tmproto.PrecommitType, 10, 1)
commit, err := test.MakeCommit(lastID, h-1, 1, voteSet, vals, time.Now())
require.NoError(t, err)
assert.Equal(t, h-1, commit.Height)
assert.EqualValues(t, 1, commit.Round)
assert.Equal(t, tmproto.PrecommitType, tmproto.SignedMsgType(commit.Type()))
if commit.Size() <= 0 {
t.Fatalf("commit %v has a zero or negative size: %d", commit, commit.Size())
}
require.NotNil(t, commit.BitArray())
assert.Equal(t, bits.NewBitArray(10).Size(), commit.BitArray().Size())
assert.Equal(t, voteSet.GetByIndex(0), consensus.GetVoteFromCommit(commit, 0))
assert.True(t, commit.IsCommit())
}
func TestCommitValidateBasic(t *testing.T) {
testCases := []struct {
testName string
malleateCommit func(*meta.Commit)
expectErr bool
}{
{"Random Commit", func(com *meta.Commit) {}, false},
{"Incorrect signature", func(com *meta.Commit) { com.Signatures[0].Signature = []byte{0} }, false},
{"Incorrect height", func(com *meta.Commit) { com.Height = int64(-100) }, true},
{"Incorrect round", func(com *meta.Commit) { com.Round = -100 }, true},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.testName, func(t *testing.T) {
com := test.MakeRandomCommit(time.Now())
tc.malleateCommit(com)
assert.Equal(t, tc.expectErr, com.ValidateBasic() != nil, "Validate Basic had an unexpected result")
})
}
}
func TestCommit_ValidateBasic(t *testing.T) {
testCases := []struct {
name string
commit *meta.Commit
expectErr bool
errString string
}{
{
"invalid height",
&meta.Commit{Height: -1},
true, "negative Height",
},
{
"invalid round",
&meta.Commit{Height: 1, Round: -1},
true, "negative Round",
},
{
"invalid block ID",
&meta.Commit{
Height: 1,
Round: 1,
BlockID: meta.BlockID{},
},
true, "commit cannot be for nil block",
},
{
"no signatures",
&meta.Commit{
Height: 1,
Round: 1,
BlockID: meta.BlockID{
Hash: make([]byte, tmhash.Size),
PartSetHeader: meta.PartSetHeader{
Hash: make([]byte, tmhash.Size),
},
},
},
true, "no signatures in commit",
},
{
"invalid signature",
&meta.Commit{
Height: 1,
Round: 1,
BlockID: meta.BlockID{
Hash: make([]byte, tmhash.Size),
PartSetHeader: meta.PartSetHeader{
Hash: make([]byte, tmhash.Size),
},
},
Signatures: []meta.CommitSig{
{
BlockIDFlag: meta.BlockIDFlagCommit,
ValidatorAddress: make([]byte, crypto.AddressSize),
Signature: make([]byte, meta.MaxSignatureSize+1),
},
},
},
true, "wrong CommitSig",
},
{
"valid commit",
&meta.Commit{
Height: 1,
Round: 1,
BlockID: meta.BlockID{
Hash: make([]byte, tmhash.Size),
PartSetHeader: meta.PartSetHeader{
Hash: make([]byte, tmhash.Size),
},
},
Signatures: []meta.CommitSig{
{
BlockIDFlag: meta.BlockIDFlagCommit,
ValidatorAddress: make([]byte, crypto.AddressSize),
Signature: make([]byte, meta.MaxSignatureSize),
},
},
},
false, "",
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
err := tc.commit.ValidateBasic()
if tc.expectErr {
require.Error(t, err)
require.Contains(t, err.Error(), tc.errString)
} else {
require.NoError(t, err)
}
})
}
}
func TestMaxCommitBytes(t *testing.T) {
// time is varint encoded so need to pick the max.
// year int, month Month, day, hour, min, sec, nsec int, loc *Location
timestamp := time.Date(math.MaxInt64, 0, 0, 0, 0, 0, math.MaxInt64, time.UTC)
cs := meta.CommitSig{
BlockIDFlag: meta.BlockIDFlagNil,
ValidatorAddress: crypto.AddressHash([]byte("validator_address")),
Timestamp: timestamp,
Signature: crypto.CRandBytes(meta.MaxSignatureSize),
}
pbSig := cs.ToProto()
// test that a single commit sig doesn't exceed max commit sig bytes
assert.EqualValues(t, meta.MaxCommitSigBytes, pbSig.Size())
// check size with a single commit
commit := &meta.Commit{
Height: math.MaxInt64,
Round: math.MaxInt32,
BlockID: meta.BlockID{
Hash: tmhash.Sum([]byte("blockID_hash")),
PartSetHeader: meta.PartSetHeader{
Total: math.MaxInt32,
Hash: tmhash.Sum([]byte("blockID_part_set_header_hash")),
},
},
Signatures: []meta.CommitSig{cs},
}
pb := commit.ToProto()
assert.EqualValues(t, meta.MaxCommitBytes(1), int64(pb.Size()))
// check the upper bound of the commit size
for i := 1; i < consensus.MaxVotesCount; i++ {
commit.Signatures = append(commit.Signatures, cs)
}
pb = commit.ToProto()
assert.EqualValues(t, meta.MaxCommitBytes(consensus.MaxVotesCount), int64(pb.Size()))
}
func TestCommitSig_ValidateBasic(t *testing.T) {
testCases := []struct {
name string
cs meta.CommitSig
expectErr bool
errString string
}{
{
"invalid ID flag",
meta.CommitSig{BlockIDFlag: meta.BlockIDFlag(0xFF)},
true, "unknown BlockIDFlag",
},
{
"BlockIDFlagAbsent validator address present",
meta.CommitSig{BlockIDFlag: meta.BlockIDFlagAbsent, ValidatorAddress: crypto.Address("testaddr")},
true, "validator address is present",
},
{
"BlockIDFlagAbsent timestamp present",
meta.CommitSig{BlockIDFlag: meta.BlockIDFlagAbsent, Timestamp: time.Now().UTC()},
true, "time is present",
},
{
"BlockIDFlagAbsent signatures present",
meta.CommitSig{BlockIDFlag: meta.BlockIDFlagAbsent, Signature: []byte{0xAA}},
true, "signature is present",
},
{
"BlockIDFlagAbsent valid BlockIDFlagAbsent",
meta.CommitSig{BlockIDFlag: meta.BlockIDFlagAbsent},
false, "",
},
{
"non-BlockIDFlagAbsent invalid validator address",
meta.CommitSig{BlockIDFlag: meta.BlockIDFlagCommit, ValidatorAddress: make([]byte, 1)},
true, "expected ValidatorAddress size",
},
{
"non-BlockIDFlagAbsent invalid signature (zero)",
meta.CommitSig{
BlockIDFlag: meta.BlockIDFlagCommit,
ValidatorAddress: make([]byte, crypto.AddressSize),
Signature: make([]byte, 0),
},
true, "signature is missing",
},
{
"non-BlockIDFlagAbsent invalid signature (too large)",
meta.CommitSig{
BlockIDFlag: meta.BlockIDFlagCommit,
ValidatorAddress: make([]byte, crypto.AddressSize),
Signature: make([]byte, meta.MaxSignatureSize+1),
},
true, "signature is too big",
},
{
"non-BlockIDFlagAbsent valid",
meta.CommitSig{
BlockIDFlag: meta.BlockIDFlagCommit,
ValidatorAddress: make([]byte, crypto.AddressSize),
Signature: make([]byte, meta.MaxSignatureSize),
},
false, "",
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
err := tc.cs.ValidateBasic()
if tc.expectErr {
require.Error(t, err)
require.Contains(t, err.Error(), tc.errString)
} else {
require.NoError(t, err)
}
})
}
}

pkg/meta/header.go (new file, 293 lines)

@@ -0,0 +1,293 @@
package meta
import (
"errors"
"fmt"
"time"
gogotypes "github.com/gogo/protobuf/types"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/merkle"
tmbytes "github.com/tendermint/tendermint/libs/bytes"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/version"
)
const (
// MaxHeaderBytes is a maximum header size.
// NOTE: Because the app hash can be of arbitrary size, the header is not
// strictly capped in size; this number should be treated as a soft maximum.
MaxHeaderBytes int64 = 626
// MaxOverheadForBlock - maximum overhead to encode a block (up to
// MaxBlockSizeBytes in size) not including its parts except Data.
// This means it also excludes the overhead for individual transactions.
//
// Uvarint length of MaxBlockSizeBytes: 4 bytes
// 2 fields (2 embedded): 2 bytes
// Uvarint length of Data.Txs: 4 bytes
// Data.Txs field: 1 byte
MaxOverheadForBlock int64 = 11
// MaxChainIDLen is a maximum length of the chain ID.
MaxChainIDLen = 50
// MaxBlockSizeBytes is the maximum permitted size of the blocks.
MaxBlockSizeBytes = 104857600 // 100MB
)
// Header defines the structure of a Tendermint block header.
// NOTE: changes to the Header should be duplicated in:
// - header.Hash()
// - abci.Header
// - https://github.com/tendermint/spec/blob/master/spec/blockchain/blockchain.md
type Header struct {
// basic block info
Version version.Consensus `json:"version"`
ChainID string `json:"chain_id"`
Height int64 `json:"height"`
Time time.Time `json:"time"`
// prev block info
LastBlockID BlockID `json:"last_block_id"`
// hashes of block data
LastCommitHash tmbytes.HexBytes `json:"last_commit_hash"` // commit from validators from the last block
DataHash tmbytes.HexBytes `json:"data_hash"` // transactions
// hashes from the app output from the prev block
ValidatorsHash tmbytes.HexBytes `json:"validators_hash"` // validators for the current block
NextValidatorsHash tmbytes.HexBytes `json:"next_validators_hash"` // validators for the next block
ConsensusHash tmbytes.HexBytes `json:"consensus_hash"` // consensus params for current block
AppHash tmbytes.HexBytes `json:"app_hash"` // state after txs from the previous block
// root hash of all results from the txs from the previous block
// see `deterministicResponseDeliverTx` to understand which parts of a tx are hashed into here
LastResultsHash tmbytes.HexBytes `json:"last_results_hash"`
// consensus info
EvidenceHash tmbytes.HexBytes `json:"evidence_hash"` // evidence included in the block
ProposerAddress crypto.Address `json:"proposer_address"` // original proposer of the block
}
// Populate the Header with state-derived data.
// Call this after MakeBlock to complete the Header.
func (h *Header) Populate(
version version.Consensus, chainID string,
timestamp time.Time, lastBlockID BlockID,
valHash, nextValHash []byte,
consensusHash, appHash, lastResultsHash []byte,
proposerAddress crypto.Address,
) {
h.Version = version
h.ChainID = chainID
h.Time = timestamp
h.LastBlockID = lastBlockID
h.ValidatorsHash = valHash
h.NextValidatorsHash = nextValHash
h.ConsensusHash = consensusHash
h.AppHash = appHash
h.LastResultsHash = lastResultsHash
h.ProposerAddress = proposerAddress
}
// ValidateBasic performs stateless validation on a Header returning an error
// if any validation fails.
//
// NOTE: Timestamp validation is subtle and handled elsewhere.
func (h Header) ValidateBasic() error {
if h.Version.Block != version.BlockProtocol {
return fmt.Errorf("block protocol is incorrect: got: %d, want: %d ", h.Version.Block, version.BlockProtocol)
}
if len(h.ChainID) > MaxChainIDLen {
return fmt.Errorf("chainID is too long; got: %d, max: %d", len(h.ChainID), MaxChainIDLen)
}
if h.Height < 0 {
return errors.New("negative Height")
} else if h.Height == 0 {
return errors.New("zero Height")
}
if err := h.LastBlockID.ValidateBasic(); err != nil {
return fmt.Errorf("wrong LastBlockID: %w", err)
}
if err := ValidateHash(h.LastCommitHash); err != nil {
return fmt.Errorf("wrong LastCommitHash: %v", err)
}
if err := ValidateHash(h.DataHash); err != nil {
return fmt.Errorf("wrong DataHash: %v", err)
}
if err := ValidateHash(h.EvidenceHash); err != nil {
return fmt.Errorf("wrong EvidenceHash: %v", err)
}
if len(h.ProposerAddress) != crypto.AddressSize {
return fmt.Errorf(
"invalid ProposerAddress length; got: %d, expected: %d",
len(h.ProposerAddress), crypto.AddressSize,
)
}
// Basic validation of hashes related to application data.
// Will validate fully against state in state#ValidateBlock.
if err := ValidateHash(h.ValidatorsHash); err != nil {
return fmt.Errorf("wrong ValidatorsHash: %v", err)
}
if err := ValidateHash(h.NextValidatorsHash); err != nil {
return fmt.Errorf("wrong NextValidatorsHash: %v", err)
}
if err := ValidateHash(h.ConsensusHash); err != nil {
return fmt.Errorf("wrong ConsensusHash: %v", err)
}
// NOTE: AppHash is arbitrary length
if err := ValidateHash(h.LastResultsHash); err != nil {
return fmt.Errorf("wrong LastResultsHash: %v", err)
}
return nil
}
// Hash returns the hash of the header.
// It computes a Merkle tree from the header fields
// ordered as they appear in the Header.
// Returns nil if ValidatorsHash is missing,
// since a Header is not valid unless there is
// a ValidatorsHash (corresponding to the validator set).
func (h *Header) Hash() tmbytes.HexBytes {
if h == nil || len(h.ValidatorsHash) == 0 {
return nil
}
hpb := h.Version.ToProto()
hbz, err := hpb.Marshal()
if err != nil {
return nil
}
pbt, err := gogotypes.StdTimeMarshal(h.Time)
if err != nil {
return nil
}
pbbi := h.LastBlockID.ToProto()
bzbi, err := pbbi.Marshal()
if err != nil {
return nil
}
return merkle.HashFromByteSlices([][]byte{
hbz,
CdcEncode(h.ChainID),
CdcEncode(h.Height),
pbt,
bzbi,
CdcEncode(h.LastCommitHash),
CdcEncode(h.DataHash),
CdcEncode(h.ValidatorsHash),
CdcEncode(h.NextValidatorsHash),
CdcEncode(h.ConsensusHash),
CdcEncode(h.AppHash),
CdcEncode(h.LastResultsHash),
CdcEncode(h.EvidenceHash),
CdcEncode(h.ProposerAddress),
})
}
// StringIndented returns an indented string representation of the header.
func (h *Header) StringIndented(indent string) string {
if h == nil {
return "nil-Header"
}
return fmt.Sprintf(`Header{
%s Version: %v
%s ChainID: %v
%s Height: %v
%s Time: %v
%s LastBlockID: %v
%s LastCommit: %v
%s Data: %v
%s Validators: %v
%s NextValidators: %v
%s App: %v
%s Consensus: %v
%s Results: %v
%s Evidence: %v
%s Proposer: %v
%s}#%v`,
indent, h.Version,
indent, h.ChainID,
indent, h.Height,
indent, h.Time,
indent, h.LastBlockID,
indent, h.LastCommitHash,
indent, h.DataHash,
indent, h.ValidatorsHash,
indent, h.NextValidatorsHash,
indent, h.AppHash,
indent, h.ConsensusHash,
indent, h.LastResultsHash,
indent, h.EvidenceHash,
indent, h.ProposerAddress,
indent, h.Hash())
}
// ToProto converts Header to protobuf
func (h *Header) ToProto() *tmproto.Header {
if h == nil {
return nil
}
return &tmproto.Header{
Version: h.Version.ToProto(),
ChainID: h.ChainID,
Height: h.Height,
Time: h.Time,
LastBlockId: h.LastBlockID.ToProto(),
ValidatorsHash: h.ValidatorsHash,
NextValidatorsHash: h.NextValidatorsHash,
ConsensusHash: h.ConsensusHash,
AppHash: h.AppHash,
DataHash: h.DataHash,
EvidenceHash: h.EvidenceHash,
LastResultsHash: h.LastResultsHash,
LastCommitHash: h.LastCommitHash,
ProposerAddress: h.ProposerAddress,
}
}
// HeaderFromProto creates a Header from the given protobuf representation.
// It returns an error if the header is invalid.
func HeaderFromProto(ph *tmproto.Header) (Header, error) {
if ph == nil {
return Header{}, errors.New("nil Header")
}
h := new(Header)
bi, err := BlockIDFromProto(&ph.LastBlockId)
if err != nil {
return Header{}, err
}
h.Version = version.Consensus{Block: ph.Version.Block, App: ph.Version.App}
h.ChainID = ph.ChainID
h.Height = ph.Height
h.Time = ph.Time
h.LastBlockID = *bi
h.ValidatorsHash = ph.ValidatorsHash
h.NextValidatorsHash = ph.NextValidatorsHash
h.ConsensusHash = ph.ConsensusHash
h.AppHash = ph.AppHash
h.DataHash = ph.DataHash
h.EvidenceHash = ph.EvidenceHash
h.LastResultsHash = ph.LastResultsHash
h.LastCommitHash = ph.LastCommitHash
h.ProposerAddress = ph.ProposerAddress
return *h, h.ValidateBasic()
}
//-------------------------------------
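`Header.Hash` above passes the fourteen encoded fields, in struct order, to `merkle.HashFromByteSlices`. As a rough self-contained illustration of that call, here is a sketch of an RFC 6962-style Merkle root (0x00 leaf / 0x01 inner-node domain separation, the scheme tendermint's merkle package is based on). It is an assumption-laden sketch for explanation only and is not guaranteed to be byte-compatible with `Header.Hash()`:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// leafHash and innerHash apply RFC 6962 domain separation:
// 0x00 prefix for leaves, 0x01 prefix for inner nodes.
func leafHash(b []byte) []byte {
	h := sha256.Sum256(append([]byte{0}, b...))
	return h[:]
}

func innerHash(l, r []byte) []byte {
	h := sha256.Sum256(append(append([]byte{1}, l...), r...))
	return h[:]
}

// hashFromByteSlices builds the root recursively, splitting at the
// largest power of two smaller than the item count, so each encoded
// header field becomes one leaf in struct order.
func hashFromByteSlices(items [][]byte) []byte {
	switch len(items) {
	case 0:
		h := sha256.Sum256(nil)
		return h[:]
	case 1:
		return leafHash(items[0])
	default:
		k := largestPowerOfTwoSmallerThan(len(items))
		return innerHash(hashFromByteSlices(items[:k]), hashFromByteSlices(items[k:]))
	}
}

func largestPowerOfTwoSmallerThan(n int) int {
	k := 1
	for k < n {
		k <<= 1
	}
	return k >> 1
}

func main() {
	root := hashFromByteSlices([][]byte{[]byte("chainId"), []byte("height")})
	fmt.Printf("%x\n", root)
}
```

The domain separation is what prevents a leaf from being reinterpreted as an inner node (a second-preimage attack on naive Merkle trees), which matters here because light clients verify headers by these roots.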

pkg/meta/header_test.go (new file, 496 lines)

@@ -0,0 +1,496 @@
package meta_test
import (
"encoding/hex"
"math"
"reflect"
"testing"
"time"
gogotypes "github.com/gogo/protobuf/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/merkle"
"github.com/tendermint/tendermint/crypto/tmhash"
test "github.com/tendermint/tendermint/internal/test/factory"
"github.com/tendermint/tendermint/libs/bytes"
"github.com/tendermint/tendermint/pkg/meta"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
tmversion "github.com/tendermint/tendermint/proto/tendermint/version"
"github.com/tendermint/tendermint/version"
)
var nilBytes []byte
func TestNilHeaderHashDoesntCrash(t *testing.T) {
assert.Equal(t, nilBytes, []byte((*meta.Header)(nil).Hash()))
assert.Equal(t, nilBytes, []byte((new(meta.Header)).Hash()))
}
func TestHeaderHash(t *testing.T) {
testCases := []struct {
desc string
header *meta.Header
expectHash bytes.HexBytes
}{
{"Generates expected hash", &meta.Header{
Version: version.Consensus{Block: 1, App: 2},
ChainID: "chainId",
Height: 3,
Time: time.Date(2019, 10, 13, 16, 14, 44, 0, time.UTC),
LastBlockID: test.MakeBlockIDWithHash(make([]byte, tmhash.Size)),
LastCommitHash: tmhash.Sum([]byte("last_commit_hash")),
DataHash: tmhash.Sum([]byte("data_hash")),
ValidatorsHash: tmhash.Sum([]byte("validators_hash")),
NextValidatorsHash: tmhash.Sum([]byte("next_validators_hash")),
ConsensusHash: tmhash.Sum([]byte("consensus_hash")),
AppHash: tmhash.Sum([]byte("app_hash")),
LastResultsHash: tmhash.Sum([]byte("last_results_hash")),
EvidenceHash: tmhash.Sum([]byte("evidence_hash")),
ProposerAddress: crypto.AddressHash([]byte("proposer_address")),
}, hexBytesFromString("F740121F553B5418C3EFBD343C2DBFE9E007BB67B0D020A0741374BAB65242A4")},
{"nil header yields nil", nil, nil},
{"nil ValidatorsHash yields nil", &meta.Header{
Version: version.Consensus{Block: 1, App: 2},
ChainID: "chainId",
Height: 3,
Time: time.Date(2019, 10, 13, 16, 14, 44, 0, time.UTC),
LastBlockID: test.MakeBlockIDWithHash(make([]byte, tmhash.Size)),
LastCommitHash: tmhash.Sum([]byte("last_commit_hash")),
DataHash: tmhash.Sum([]byte("data_hash")),
ValidatorsHash: nil,
NextValidatorsHash: tmhash.Sum([]byte("next_validators_hash")),
ConsensusHash: tmhash.Sum([]byte("consensus_hash")),
AppHash: tmhash.Sum([]byte("app_hash")),
LastResultsHash: tmhash.Sum([]byte("last_results_hash")),
EvidenceHash: tmhash.Sum([]byte("evidence_hash")),
ProposerAddress: crypto.AddressHash([]byte("proposer_address")),
}, nil},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.desc, func(t *testing.T) {
assert.Equal(t, tc.expectHash, tc.header.Hash())
// We also make sure that all fields are hashed in struct order, and that all
// fields in the test struct are non-zero.
if tc.header != nil && tc.expectHash != nil {
byteSlices := [][]byte{}
s := reflect.ValueOf(*tc.header)
for i := 0; i < s.NumField(); i++ {
f := s.Field(i)
assert.False(t, f.IsZero(), "Found zero-valued field %v",
s.Type().Field(i).Name)
switch f := f.Interface().(type) {
case int64, bytes.HexBytes, string:
byteSlices = append(byteSlices, meta.CdcEncode(f))
case time.Time:
bz, err := gogotypes.StdTimeMarshal(f)
require.NoError(t, err)
byteSlices = append(byteSlices, bz)
case version.Consensus:
pbc := tmversion.Consensus{
Block: f.Block,
App: f.App,
}
bz, err := pbc.Marshal()
require.NoError(t, err)
byteSlices = append(byteSlices, bz)
case meta.BlockID:
pbbi := f.ToProto()
bz, err := pbbi.Marshal()
require.NoError(t, err)
byteSlices = append(byteSlices, bz)
default:
t.Errorf("unknown type %T", f)
}
}
assert.Equal(t,
bytes.HexBytes(merkle.HashFromByteSlices(byteSlices)), tc.header.Hash())
}
})
}
}
func TestMaxHeaderBytes(t *testing.T) {
// Construct a UTF-8 string of MaxChainIDLen length using the supplementary
// characters.
// Each supplementary character takes 4 bytes.
// http://www.i18nguy.com/unicode/supplementary-test.html
maxChainID := ""
for i := 0; i < meta.MaxChainIDLen; i++ {
maxChainID += "𠜎"
}
// time is varint encoded so need to pick the max.
// year int, month Month, day, hour, min, sec, nsec int, loc *Location
timestamp := time.Date(math.MaxInt64, 0, 0, 0, 0, 0, math.MaxInt64, time.UTC)
h := meta.Header{
Version: version.Consensus{Block: math.MaxInt64, App: math.MaxInt64},
ChainID: maxChainID,
Height: math.MaxInt64,
Time: timestamp,
LastBlockID: meta.BlockID{make([]byte, tmhash.Size), meta.PartSetHeader{math.MaxInt32, make([]byte, tmhash.Size)}},
LastCommitHash: tmhash.Sum([]byte("last_commit_hash")),
DataHash: tmhash.Sum([]byte("data_hash")),
ValidatorsHash: tmhash.Sum([]byte("validators_hash")),
NextValidatorsHash: tmhash.Sum([]byte("next_validators_hash")),
ConsensusHash: tmhash.Sum([]byte("consensus_hash")),
AppHash: tmhash.Sum([]byte("app_hash")),
LastResultsHash: tmhash.Sum([]byte("last_results_hash")),
EvidenceHash: tmhash.Sum([]byte("evidence_hash")),
ProposerAddress: crypto.AddressHash([]byte("proposer_address")),
}
bz, err := h.ToProto().Marshal()
require.NoError(t, err)
assert.EqualValues(t, meta.MaxHeaderBytes, int64(len(bz)))
}
func randCommit(now time.Time) *meta.Commit {
lastID := test.MakeBlockID()
h := int64(3)
voteSet, _, vals := test.RandVoteSet(h-1, 1, tmproto.PrecommitType, 10, 1)
commit, err := test.MakeCommit(lastID, h-1, 1, voteSet, vals, now)
if err != nil {
panic(err)
}
return commit
}
func hexBytesFromString(s string) bytes.HexBytes {
b, err := hex.DecodeString(s)
if err != nil {
panic(err)
}
return bytes.HexBytes(b)
}
func TestHeader_ValidateBasic(t *testing.T) {
testCases := []struct {
name string
header meta.Header
expectErr bool
errString string
}{
{
"invalid version block",
meta.Header{Version: version.Consensus{Block: version.BlockProtocol + 1}},
true, "block protocol is incorrect",
},
{
"invalid chain ID length",
meta.Header{
Version: version.Consensus{Block: version.BlockProtocol},
ChainID: string(make([]byte, meta.MaxChainIDLen+1)),
},
true, "chainID is too long",
},
{
"invalid height (negative)",
meta.Header{
Version: version.Consensus{Block: version.BlockProtocol},
ChainID: string(make([]byte, meta.MaxChainIDLen)),
Height: -1,
},
true, "negative Height",
},
{
"invalid height (zero)",
meta.Header{
Version: version.Consensus{Block: version.BlockProtocol},
ChainID: string(make([]byte, meta.MaxChainIDLen)),
Height: 0,
},
true, "zero Height",
},
{
"invalid block ID hash",
meta.Header{
Version: version.Consensus{Block: version.BlockProtocol},
ChainID: string(make([]byte, meta.MaxChainIDLen)),
Height: 1,
LastBlockID: meta.BlockID{
Hash: make([]byte, tmhash.Size+1),
},
},
true, "wrong Hash",
},
{
"invalid block ID parts header hash",
meta.Header{
Version: version.Consensus{Block: version.BlockProtocol},
ChainID: string(make([]byte, meta.MaxChainIDLen)),
Height: 1,
LastBlockID: meta.BlockID{
Hash: make([]byte, tmhash.Size),
PartSetHeader: meta.PartSetHeader{
Hash: make([]byte, tmhash.Size+1),
},
},
},
true, "wrong PartSetHeader",
},
{
"invalid last commit hash",
meta.Header{
Version: version.Consensus{Block: version.BlockProtocol},
ChainID: string(make([]byte, meta.MaxChainIDLen)),
Height: 1,
LastBlockID: meta.BlockID{
Hash: make([]byte, tmhash.Size),
PartSetHeader: meta.PartSetHeader{
Hash: make([]byte, tmhash.Size),
},
},
LastCommitHash: make([]byte, tmhash.Size+1),
},
true, "wrong LastCommitHash",
},
{
"invalid data hash",
meta.Header{
Version: version.Consensus{Block: version.BlockProtocol},
ChainID: string(make([]byte, meta.MaxChainIDLen)),
Height: 1,
LastBlockID: meta.BlockID{
Hash: make([]byte, tmhash.Size),
PartSetHeader: meta.PartSetHeader{
Hash: make([]byte, tmhash.Size),
},
},
LastCommitHash: make([]byte, tmhash.Size),
DataHash: make([]byte, tmhash.Size+1),
},
true, "wrong DataHash",
},
{
"invalid evidence hash",
meta.Header{
Version: version.Consensus{Block: version.BlockProtocol},
ChainID: string(make([]byte, meta.MaxChainIDLen)),
Height: 1,
LastBlockID: meta.BlockID{
Hash: make([]byte, tmhash.Size),
PartSetHeader: meta.PartSetHeader{
Hash: make([]byte, tmhash.Size),
},
},
LastCommitHash: make([]byte, tmhash.Size),
DataHash: make([]byte, tmhash.Size),
EvidenceHash: make([]byte, tmhash.Size+1),
},
true, "wrong EvidenceHash",
},
{
"invalid proposer address",
meta.Header{
Version: version.Consensus{Block: version.BlockProtocol},
ChainID: string(make([]byte, meta.MaxChainIDLen)),
Height: 1,
LastBlockID: meta.BlockID{
Hash: make([]byte, tmhash.Size),
PartSetHeader: meta.PartSetHeader{
Hash: make([]byte, tmhash.Size),
},
},
LastCommitHash: make([]byte, tmhash.Size),
DataHash: make([]byte, tmhash.Size),
EvidenceHash: make([]byte, tmhash.Size),
ProposerAddress: make([]byte, crypto.AddressSize+1),
},
true, "invalid ProposerAddress length",
},
{
"invalid validator hash",
meta.Header{
Version: version.Consensus{Block: version.BlockProtocol},
ChainID: string(make([]byte, meta.MaxChainIDLen)),
Height: 1,
LastBlockID: meta.BlockID{
Hash: make([]byte, tmhash.Size),
PartSetHeader: meta.PartSetHeader{
Hash: make([]byte, tmhash.Size),
},
},
LastCommitHash: make([]byte, tmhash.Size),
DataHash: make([]byte, tmhash.Size),
EvidenceHash: make([]byte, tmhash.Size),
ProposerAddress: make([]byte, crypto.AddressSize),
ValidatorsHash: make([]byte, tmhash.Size+1),
},
true, "wrong ValidatorsHash",
},
{
"invalid next validator hash",
meta.Header{
Version: version.Consensus{Block: version.BlockProtocol},
ChainID: string(make([]byte, meta.MaxChainIDLen)),
Height: 1,
LastBlockID: meta.BlockID{
Hash: make([]byte, tmhash.Size),
PartSetHeader: meta.PartSetHeader{
Hash: make([]byte, tmhash.Size),
},
},
LastCommitHash: make([]byte, tmhash.Size),
DataHash: make([]byte, tmhash.Size),
EvidenceHash: make([]byte, tmhash.Size),
ProposerAddress: make([]byte, crypto.AddressSize),
ValidatorsHash: make([]byte, tmhash.Size),
NextValidatorsHash: make([]byte, tmhash.Size+1),
},
true, "wrong NextValidatorsHash",
},
{
"invalid consensus hash",
meta.Header{
Version: version.Consensus{Block: version.BlockProtocol},
ChainID: string(make([]byte, meta.MaxChainIDLen)),
Height: 1,
LastBlockID: meta.BlockID{
Hash: make([]byte, tmhash.Size),
PartSetHeader: meta.PartSetHeader{
Hash: make([]byte, tmhash.Size),
},
},
LastCommitHash: make([]byte, tmhash.Size),
DataHash: make([]byte, tmhash.Size),
EvidenceHash: make([]byte, tmhash.Size),
ProposerAddress: make([]byte, crypto.AddressSize),
ValidatorsHash: make([]byte, tmhash.Size),
NextValidatorsHash: make([]byte, tmhash.Size),
ConsensusHash: make([]byte, tmhash.Size+1),
},
true, "wrong ConsensusHash",
},
{
"invalid last results hash",
meta.Header{
Version: version.Consensus{Block: version.BlockProtocol},
ChainID: string(make([]byte, meta.MaxChainIDLen)),
Height: 1,
LastBlockID: meta.BlockID{
Hash: make([]byte, tmhash.Size),
PartSetHeader: meta.PartSetHeader{
Hash: make([]byte, tmhash.Size),
},
},
LastCommitHash: make([]byte, tmhash.Size),
DataHash: make([]byte, tmhash.Size),
EvidenceHash: make([]byte, tmhash.Size),
ProposerAddress: make([]byte, crypto.AddressSize),
ValidatorsHash: make([]byte, tmhash.Size),
NextValidatorsHash: make([]byte, tmhash.Size),
ConsensusHash: make([]byte, tmhash.Size),
LastResultsHash: make([]byte, tmhash.Size+1),
},
true, "wrong LastResultsHash",
},
{
"valid header",
meta.Header{
Version: version.Consensus{Block: version.BlockProtocol},
ChainID: string(make([]byte, meta.MaxChainIDLen)),
Height: 1,
LastBlockID: meta.BlockID{
Hash: make([]byte, tmhash.Size),
PartSetHeader: meta.PartSetHeader{
Hash: make([]byte, tmhash.Size),
},
},
LastCommitHash: make([]byte, tmhash.Size),
DataHash: make([]byte, tmhash.Size),
EvidenceHash: make([]byte, tmhash.Size),
ProposerAddress: make([]byte, crypto.AddressSize),
ValidatorsHash: make([]byte, tmhash.Size),
NextValidatorsHash: make([]byte, tmhash.Size),
ConsensusHash: make([]byte, tmhash.Size),
LastResultsHash: make([]byte, tmhash.Size),
},
false, "",
},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
err := tc.header.ValidateBasic()
if tc.expectErr {
require.Error(t, err)
require.Contains(t, err.Error(), tc.errString)
} else {
require.NoError(t, err)
}
})
}
}
func TestHeaderProto(t *testing.T) {
h1 := test.MakeRandomHeader()
tc := []struct {
msg string
h1 *meta.Header
expPass bool
}{
{"success", h1, true},
{"failure empty Header", &meta.Header{}, false},
}
for _, tt := range tc {
tt := tt
t.Run(tt.msg, func(t *testing.T) {
pb := tt.h1.ToProto()
h, err := meta.HeaderFromProto(pb)
if tt.expPass {
require.NoError(t, err, tt.msg)
require.Equal(t, tt.h1, &h, tt.msg)
} else {
require.Error(t, err, tt.msg)
}
})
}
}
func TestHeaderHashVector(t *testing.T) {
chainID := "test"
h := meta.Header{
Version: version.Consensus{Block: 1, App: 1},
ChainID: chainID,
Height: 50,
Time: time.Date(math.MaxInt64, 0, 0, 0, 0, 0, math.MaxInt64, time.UTC),
LastBlockID: meta.BlockID{},
LastCommitHash: []byte("f2564c78071e26643ae9b3e2a19fa0dc10d4d9e873aa0be808660123f11a1e78"),
DataHash: []byte("f2564c78071e26643ae9b3e2a19fa0dc10d4d9e873aa0be808660123f11a1e78"),
ValidatorsHash: []byte("f2564c78071e26643ae9b3e2a19fa0dc10d4d9e873aa0be808660123f11a1e78"),
NextValidatorsHash: []byte("f2564c78071e26643ae9b3e2a19fa0dc10d4d9e873aa0be808660123f11a1e78"),
ConsensusHash: []byte("f2564c78071e26643ae9b3e2a19fa0dc10d4d9e873aa0be808660123f11a1e78"),
AppHash: []byte("f2564c78071e26643ae9b3e2a19fa0dc10d4d9e873aa0be808660123f11a1e78"),
LastResultsHash: []byte("f2564c78071e26643ae9b3e2a19fa0dc10d4d9e873aa0be808660123f11a1e78"),
EvidenceHash: []byte("f2564c78071e26643ae9b3e2a19fa0dc10d4d9e873aa0be808660123f11a1e78"),
ProposerAddress: []byte("2915b7b15f979e48ebc61774bb1d86ba3136b7eb"),
}
testCases := []struct {
header meta.Header
expBytes string
}{
{header: h, expBytes: "87b6117ac7f827d656f178a3d6d30b24b205db2b6a3a053bae8baf4618570bfc"},
}
for _, tc := range testCases {
hash := tc.header.Hash()
require.Equal(t, tc.expBytes, hex.EncodeToString(hash))
}
}

pkg/meta/id.go (new file, 124 lines)

@@ -0,0 +1,124 @@
package meta
import (
"bytes"
"errors"
"fmt"
"github.com/tendermint/tendermint/crypto/tmhash"
tmbytes "github.com/tendermint/tendermint/libs/bytes"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
)
// BlockID identifies a block by its hash and part-set header.
type BlockID struct {
Hash tmbytes.HexBytes `json:"hash"`
PartSetHeader PartSetHeader `json:"parts"`
}
// BlockIDFlag indicates which BlockID the signature is for.
type BlockIDFlag byte
const (
// BlockIDFlagAbsent - no vote was received from a validator.
BlockIDFlagAbsent BlockIDFlag = iota + 1
// BlockIDFlagCommit - voted for the Commit.BlockID.
BlockIDFlagCommit
// BlockIDFlagNil - voted for nil.
BlockIDFlagNil
)
// Equals returns true if the BlockID matches the given BlockID
func (blockID BlockID) Equals(other BlockID) bool {
return bytes.Equal(blockID.Hash, other.Hash) &&
blockID.PartSetHeader.Equals(other.PartSetHeader)
}
// Key returns a machine-readable string representation of the BlockID
func (blockID BlockID) Key() string {
pbph := blockID.PartSetHeader.ToProto()
bz, err := pbph.Marshal()
if err != nil {
panic(err)
}
return fmt.Sprint(string(blockID.Hash), string(bz))
}
// ValidateBasic performs basic validation.
func (blockID BlockID) ValidateBasic() error {
// Hash can be empty in case of POLBlockID in Proposal.
if err := ValidateHash(blockID.Hash); err != nil {
return fmt.Errorf("wrong Hash: %w", err)
}
if err := blockID.PartSetHeader.ValidateBasic(); err != nil {
return fmt.Errorf("wrong PartSetHeader: %w", err)
}
return nil
}
// IsZero returns true if this is the BlockID of a nil block.
func (blockID BlockID) IsZero() bool {
return len(blockID.Hash) == 0 &&
blockID.PartSetHeader.IsZero()
}
// IsComplete returns true if this is a valid BlockID of a non-nil block.
func (blockID BlockID) IsComplete() bool {
return len(blockID.Hash) == tmhash.Size &&
blockID.PartSetHeader.Total > 0 &&
len(blockID.PartSetHeader.Hash) == tmhash.Size
}
// String returns a human readable string representation of the BlockID.
//
// 1. hash
// 2. part set header
//
// See PartSetHeader#String
func (blockID BlockID) String() string {
return fmt.Sprintf(`%v:%v`, blockID.Hash, blockID.PartSetHeader)
}
// ToProto converts BlockID to protobuf
func (blockID *BlockID) ToProto() tmproto.BlockID {
if blockID == nil {
return tmproto.BlockID{}
}
return tmproto.BlockID{
Hash: blockID.Hash,
PartSetHeader: blockID.PartSetHeader.ToProto(),
}
}
// BlockIDFromProto converts a protobuf BlockID to a BlockID pointer.
// It returns an error if the block id is invalid.
func BlockIDFromProto(bID *tmproto.BlockID) (*BlockID, error) {
if bID == nil {
return nil, errors.New("nil BlockID")
}
blockID := new(BlockID)
ph, err := PartSetHeaderFromProto(&bID.PartSetHeader)
if err != nil {
return nil, err
}
blockID.PartSetHeader = *ph
blockID.Hash = bID.Hash
return blockID, blockID.ValidateBasic()
}
// ValidateHash returns an error if the hash is not empty, but its
// size != tmhash.Size.
func ValidateHash(h []byte) error {
if len(h) > 0 && len(h) != tmhash.Size {
return fmt.Errorf("expected size to be %d bytes, got %d bytes",
tmhash.Size,
len(h),
)
}
return nil
}

pkg/meta/id_test.go (new file, 91 lines)

@@ -0,0 +1,91 @@
package meta_test
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
test "github.com/tendermint/tendermint/internal/test/factory"
"github.com/tendermint/tendermint/libs/bytes"
"github.com/tendermint/tendermint/pkg/meta"
)
func TestBlockIDEquals(t *testing.T) {
var (
blockID = meta.BlockID{[]byte("hash"), meta.PartSetHeader{2, []byte("part_set_hash")}}
blockIDDuplicate = meta.BlockID{[]byte("hash"), meta.PartSetHeader{2, []byte("part_set_hash")}}
blockIDDifferent = meta.BlockID{[]byte("different_hash"), meta.PartSetHeader{2, []byte("part_set_hash")}}
blockIDEmpty = meta.BlockID{}
)
assert.True(t, blockID.Equals(blockIDDuplicate))
assert.False(t, blockID.Equals(blockIDDifferent))
assert.False(t, blockID.Equals(blockIDEmpty))
assert.True(t, blockIDEmpty.Equals(blockIDEmpty))
assert.False(t, blockIDEmpty.Equals(blockIDDifferent))
}
func TestBlockIDValidateBasic(t *testing.T) {
validBlockID := meta.BlockID{
Hash: bytes.HexBytes{},
PartSetHeader: meta.PartSetHeader{
Total: 1,
Hash: bytes.HexBytes{},
},
}
invalidBlockID := meta.BlockID{
Hash: []byte{0},
PartSetHeader: meta.PartSetHeader{
Total: 1,
Hash: []byte{0},
},
}
testCases := []struct {
testName string
blockIDHash bytes.HexBytes
blockIDPartSetHeader meta.PartSetHeader
expectErr bool
}{
{"Valid BlockID", validBlockID.Hash, validBlockID.PartSetHeader, false},
{"Invalid BlockID", invalidBlockID.Hash, validBlockID.PartSetHeader, true},
{"Invalid BlockID", validBlockID.Hash, invalidBlockID.PartSetHeader, true},
}
for _, tc := range testCases {
tc := tc
t.Run(tc.testName, func(t *testing.T) {
blockID := meta.BlockID{
Hash: tc.blockIDHash,
PartSetHeader: tc.blockIDPartSetHeader,
}
assert.Equal(t, tc.expectErr, blockID.ValidateBasic() != nil, "Validate Basic had an unexpected result")
})
}
}
func TestBlockIDProtoBuf(t *testing.T) {
blockID := test.MakeBlockIDWithHash([]byte("hash"))
testCases := []struct {
msg string
bid1 *meta.BlockID
expPass bool
}{
{"success", &blockID, true},
{"success empty", &meta.BlockID{}, true},
{"failure BlockID nil", nil, false},
}
for _, tc := range testCases {
protoBlockID := tc.bid1.ToProto()
bi, err := meta.BlockIDFromProto(&protoBlockID)
if tc.expPass {
require.NoError(t, err)
require.Equal(t, tc.bid1, bi, tc.msg)
} else {
require.NotEqual(t, tc.bid1, bi, tc.msg)
}
}
}


@@ -1,4 +1,4 @@
package types
package meta
import (
"bytes"
@@ -20,6 +20,14 @@ var (
ErrPartSetInvalidProof = errors.New("error part set invalid proof")
)
const (
// BlockPartSizeBytes is the size of one block part.
BlockPartSizeBytes uint32 = 65536 // 64kB
// MaxBlockPartsCount is the maximum number of block parts.
MaxBlockPartsCount = (MaxBlockSizeBytes / BlockPartSizeBytes) + 1
)
type Part struct {
Index uint32 `json:"index"`
Bytes tmbytes.HexBytes `json:"bytes"`


@@ -1,4 +1,4 @@
package types
package meta
import (
"io/ioutil"

pkg/meta/utils.go (new file, 75 lines)

@@ -0,0 +1,75 @@
package meta
import (
"reflect"
gogotypes "github.com/gogo/protobuf/types"
tmbytes "github.com/tendermint/tendermint/libs/bytes"
)
// Go lacks a simple and safe way to see if something is a typed nil.
// See:
// - https://dave.cheney.net/2017/08/09/typed-nils-in-go-2
// - https://groups.google.com/forum/#!topic/golang-nuts/wnH302gBa4I/discussion
// - https://github.com/golang/go/issues/21538
func isTypedNil(o interface{}) bool {
rv := reflect.ValueOf(o)
switch rv.Kind() {
case reflect.Chan, reflect.Func, reflect.Map, reflect.Ptr, reflect.Slice:
return rv.IsNil()
default:
return false
}
}
// isEmpty returns true if o has zero length.
func isEmpty(o interface{}) bool {
rv := reflect.ValueOf(o)
switch rv.Kind() {
case reflect.Array, reflect.Chan, reflect.Map, reflect.Slice, reflect.String:
return rv.Len() == 0
default:
return false
}
}
// CdcEncode returns nil if the input is nil, otherwise returns
// proto.Marshal(<type>Value{Value: item})
func CdcEncode(item interface{}) []byte {
if item != nil && !isTypedNil(item) && !isEmpty(item) {
switch item := item.(type) {
case string:
i := gogotypes.StringValue{
Value: item,
}
bz, err := i.Marshal()
if err != nil {
return nil
}
return bz
case int64:
i := gogotypes.Int64Value{
Value: item,
}
bz, err := i.Marshal()
if err != nil {
return nil
}
return bz
case tmbytes.HexBytes:
i := gogotypes.BytesValue{
Value: item,
}
bz, err := i.Marshal()
if err != nil {
return nil
}
return bz
default:
return nil
}
}
return nil
}
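The typed-nil guard used by CdcEncode exists because a non-nil interface can wrap a nil pointer, in which case comparing the interface against nil reports false. A self-contained illustration of the pitfall that isTypedNil works around (stdlib only, with the same reflection logic as pkg/meta/utils.go):

```go
package main

import (
	"fmt"
	"reflect"
)

// isTypedNil reports whether o is a non-nil interface wrapping a nil
// value of a nilable kind, as in pkg/meta/utils.go.
func isTypedNil(o interface{}) bool {
	rv := reflect.ValueOf(o)
	switch rv.Kind() {
	case reflect.Chan, reflect.Func, reflect.Map, reflect.Ptr, reflect.Slice:
		return rv.IsNil()
	default:
		return false
	}
}

func main() {
	var p *int            // a typed nil pointer
	var i interface{} = p // interface now holds (type=*int, value=nil)

	fmt.Println(i == nil)      // false: the interface itself is non-nil
	fmt.Println(isTypedNil(i)) // true: the wrapped pointer is nil
}
```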


@@ -1,4 +1,4 @@
package types
package p2p
import (
"fmt"


@@ -2,7 +2,7 @@
// Originally Copyright (c) 2013-2014 Conformal Systems LLC.
// https://github.com/conformal/btcd/blob/master/LICENSE
package types
package p2p
import (
"errors"


@@ -1,4 +1,4 @@
package types
package p2p
import (
"net"


@@ -1,4 +1,4 @@
package types
package p2p
import (
"encoding/hex"


@@ -1,4 +1,4 @@
package types
package p2p
import (
"errors"


@@ -1,4 +1,4 @@
package types
package p2p
import (
"fmt"


@@ -1,4 +1,4 @@
package types
package p2p
import (
"io/ioutil"


@@ -1,4 +1,4 @@
package types_test
package p2p_test
import (
"os"
@@ -8,16 +8,16 @@ import (
"github.com/stretchr/testify/require"
tmrand "github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/pkg/p2p"
)
func TestLoadOrGenNodeKey(t *testing.T) {
filePath := filepath.Join(os.TempDir(), tmrand.Str(12)+"_peer_id.json")
nodeKey, err := types.LoadOrGenNodeKey(filePath)
nodeKey, err := p2p.LoadOrGenNodeKey(filePath)
require.Nil(t, err)
nodeKey2, err := types.LoadOrGenNodeKey(filePath)
nodeKey2, err := p2p.LoadOrGenNodeKey(filePath)
require.Nil(t, err)
require.Equal(t, nodeKey, nodeKey2)
}
@@ -25,13 +25,13 @@ func TestLoadOrGenNodeKey(t *testing.T) {
func TestLoadNodeKey(t *testing.T) {
filePath := filepath.Join(os.TempDir(), tmrand.Str(12)+"_peer_id.json")
_, err := types.LoadNodeKey(filePath)
_, err := p2p.LoadNodeKey(filePath)
require.True(t, os.IsNotExist(err))
_, err = types.LoadOrGenNodeKey(filePath)
_, err = p2p.LoadOrGenNodeKey(filePath)
require.NoError(t, err)
nodeKey, err := types.LoadNodeKey(filePath)
nodeKey, err := p2p.LoadNodeKey(filePath)
require.NoError(t, err)
require.NotNil(t, nodeKey)
}
@@ -40,7 +40,7 @@ func TestNodeKeySaveAs(t *testing.T) {
filePath := filepath.Join(os.TempDir(), tmrand.Str(12)+"_peer_id.json")
require.NoFileExists(t, filePath)
nodeKey := types.GenNodeKey()
nodeKey := p2p.GenNodeKey()
require.NoError(t, nodeKey.SaveAs(filePath))
require.FileExists(t, filePath)
}


@@ -2,7 +2,6 @@ package local
import (
"context"
"errors"
"fmt"
"time"
@@ -47,15 +46,15 @@ type Local struct {
// NodeService describes the portion of the node interface that the
// local RPC client constructor needs to build a local client.
type NodeService interface {
RPCEnvironment() *rpccore.Environment
ConfigureRPC() (*rpccore.Environment, error)
EventBus() *types.EventBus
}
// New configures a client that calls the Node directly.
func New(node NodeService) (*Local, error) {
env := node.RPCEnvironment()
if env == nil {
return nil, errors.New("rpc is nil")
env, err := node.ConfigureRPC()
if err != nil {
return nil, err
}
return &Local{
EventBus: node.EventBus(),


@@ -26,7 +26,7 @@ func (env *Environment) ABCIQuery(
if err != nil {
return nil, err
}
env.Logger.Info("ABCIQuery", "path", path, "data", data, "result", resQuery)
return &ctypes.ResultABCIQuery{Response: *resQuery}, nil
}
@@ -37,6 +37,5 @@ func (env *Environment) ABCIInfo(ctx *rpctypes.Context) (*ctypes.ResultABCIInfo,
if err != nil {
return nil, err
}
return &ctypes.ResultABCIInfo{Response: *resInfo}, nil
}


@@ -1,8 +1,6 @@
package core
import (
"errors"
cm "github.com/tendermint/tendermint/internal/consensus"
tmmath "github.com/tendermint/tendermint/libs/math"
ctypes "github.com/tendermint/tendermint/rpc/core/types"
@@ -56,56 +54,24 @@ func (env *Environment) Validators(
// More: https://docs.tendermint.com/master/rpc/#/Info/dump_consensus_state
func (env *Environment) DumpConsensusState(ctx *rpctypes.Context) (*ctypes.ResultDumpConsensusState, error) {
// Get Peer consensus states.
var peerStates []ctypes.PeerStateInfo
switch {
case env.P2PPeers != nil:
peers := env.P2PPeers.Peers().List()
peerStates = make([]ctypes.PeerStateInfo, 0, len(peers))
for _, peer := range peers {
peerState, ok := peer.Get(types.PeerStateKey).(*cm.PeerState)
if !ok { // peer does not have a state yet
continue
}
peerStateJSON, err := peerState.ToJSON()
if err != nil {
return nil, err
}
peerStates = append(peerStates, ctypes.PeerStateInfo{
// Peer basic info.
NodeAddress: peer.SocketAddr().String(),
// Peer consensus state.
PeerState: peerStateJSON,
})
peers := env.P2PPeers.Peers().List()
peerStates := make([]ctypes.PeerStateInfo, len(peers))
for i, peer := range peers {
peerState, ok := peer.Get(types.PeerStateKey).(*cm.PeerState)
if !ok { // peer does not have a state yet
continue
}
case env.PeerManager != nil:
peers := env.PeerManager.Peers()
peerStates = make([]ctypes.PeerStateInfo, 0, len(peers))
for _, pid := range peers {
peerState, ok := env.ConsensusReactor.GetPeerState(pid)
if !ok {
continue
}
peerStateJSON, err := peerState.ToJSON()
if err != nil {
return nil, err
}
addr := env.PeerManager.Addresses(pid)
if len(addr) >= 1 {
peerStates = append(peerStates, ctypes.PeerStateInfo{
// Peer basic info.
NodeAddress: addr[0].String(),
// Peer consensus state.
PeerState: peerStateJSON,
})
}
peerStateJSON, err := peerState.ToJSON()
if err != nil {
return nil, err
}
peerStates[i] = ctypes.PeerStateInfo{
// Peer basic info.
NodeAddress: peer.SocketAddr().String(),
// Peer consensus state.
PeerState: peerStateJSON,
}
default:
return nil, errors.New("no peer system configured")
}
// Get self round state.
roundState, err := env.ConsensusState.GetRoundStateJSON()
if err != nil {


@@ -36,7 +36,7 @@ const (
//----------------------------------------------
// These interfaces are used by RPC and must be thread safe
type consensusState interface {
type Consensus interface {
GetState() sm.State
GetValidators() (int64, []*types.Validator)
GetLastHeight() int64
@@ -58,11 +58,6 @@ type peers interface {
Peers() p2p.IPeerSet
}
type peerManager interface {
Peers() []types.NodeID
Addresses(types.NodeID) []p2p.NodeAddress
}
//----------------------------------------------
// Environment contains objects and interfaces used by the RPC. It is expected
// to be setup once during startup.
@@ -75,14 +70,9 @@ type Environment struct {
StateStore sm.Store
BlockStore sm.BlockStore
EvidencePool sm.EvidencePool
ConsensusState consensusState
ConsensusState Consensus
P2PPeers peers
// Legacy p2p stack
P2PTransport transport
// interfaces for new p2p interfaces
PeerManager peerManager
P2PTransport transport
// objects
PubKey crypto.PubKey


@@ -13,35 +13,19 @@ import (
// NetInfo returns network info.
// More: https://docs.tendermint.com/master/rpc/#/Info/net_info
func (env *Environment) NetInfo(ctx *rpctypes.Context) (*ctypes.ResultNetInfo, error) {
var peers []ctypes.Peer
switch {
case env.P2PPeers != nil:
peersList := env.P2PPeers.Peers().List()
peers = make([]ctypes.Peer, 0, len(peersList))
for _, peer := range peersList {
peers = append(peers, ctypes.Peer{
ID: peer.ID(),
URL: peer.SocketAddr().String(),
})
}
case env.PeerManager != nil:
peerList := env.PeerManager.Peers()
for _, peer := range peerList {
addrs := env.PeerManager.Addresses(peer)
if len(addrs) == 0 {
continue
}
peers = append(peers, ctypes.Peer{
ID: peer,
URL: addrs[0].String(),
})
}
default:
return nil, errors.New("peer management system does not support NetInfo responses")
peersList := env.P2PPeers.Peers().List()
peers := make([]ctypes.Peer, 0, len(peersList))
for _, peer := range peersList {
peers = append(peers, ctypes.Peer{
NodeInfo: peer.NodeInfo(),
IsOutbound: peer.IsOutbound(),
ConnectionStatus: peer.Status(),
RemoteIP: peer.RemoteIP().String(),
})
}
// TODO: Should we include PersistentPeers and Seeds in here?
// PRO: useful info
// CON: privacy
return &ctypes.ResultNetInfo{
Listening: env.P2PTransport.IsListening(),
Listeners: env.P2PTransport.Listeners(),
@@ -52,10 +36,6 @@ func (env *Environment) NetInfo(ctx *rpctypes.Context) (*ctypes.ResultNetInfo, e
// UnsafeDialSeeds dials the given seeds (comma-separated id@IP:PORT).
func (env *Environment) UnsafeDialSeeds(ctx *rpctypes.Context, seeds []string) (*ctypes.ResultDialSeeds, error) {
if env.P2PPeers == nil {
return nil, errors.New("peer management system does not support this operation")
}
if len(seeds) == 0 {
return &ctypes.ResultDialSeeds{}, fmt.Errorf("%w: no seeds provided", ctypes.ErrInvalidRequest)
}
@@ -73,10 +53,6 @@ func (env *Environment) UnsafeDialPeers(
peers []string,
persistent, unconditional, private bool) (*ctypes.ResultDialPeers, error) {
if env.P2PPeers == nil {
return nil, errors.New("peer management system does not support this operation")
}
if len(peers) == 0 {
return &ctypes.ResultDialPeers{}, fmt.Errorf("%w: no peers provided", ctypes.ErrInvalidRequest)
}


@@ -7,6 +7,7 @@ import (
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/libs/bytes"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/types"
@@ -144,8 +145,10 @@ type ResultDialPeers struct {
// A peer
type Peer struct {
ID types.NodeID `json:"node_id"`
URL string `json:"url"`
NodeInfo types.NodeInfo `json:"node_info"`
IsOutbound bool `json:"is_outbound"`
ConnectionStatus p2p.ConnectionStatus `json:"connection_status"`
RemoteIP string `json:"remote_ip"`
}
// Validators for a height.


@@ -837,7 +837,7 @@ paths:
- Info
description: |
Get genesis document in a paginated/chunked format to make it
easier to iterate through larger genesis structures.
easier to iterate through larger gensis structures.
parameters:
- in: query
name: chunkID
@@ -1042,7 +1042,7 @@ paths:
- Info
responses:
"200":
description: List of transactions
description: List of unconfirmed transactions
content:
application/json:
schema:
@@ -1132,10 +1132,10 @@ paths:
tags:
- Info
description: |
Get a transaction
Get a trasasction
responses:
"200":
description: Get a transaction
description: Get a transaction`
content:
application/json:
schema:
@@ -1476,12 +1476,16 @@ components:
Peer:
type: object
properties:
node_id:
node_info:
$ref: "#/components/schemas/NodeInfo"
is_outbound:
type: boolean
example: true
connection_status:
$ref: "#/components/schemas/ConnectionStatus"
remote_ip:
type: string
example: ""
url:
type: string
example: "<id>@95.179.155.35:2385>"
example: "95.179.155.35"
NetInfo:
type: object
properties:
@@ -1942,14 +1946,14 @@ components:
- "chunk"
- "total"
- "data"
properties:
properties:
chunk:
type: integer
example: 0
total:
type: integer
example: 1
data:
data:
type: string
example: "Z2VuZXNpcwo="


@@ -1,72 +0,0 @@
/*
Package indexer defines Tendermint's block and transaction event indexing logic.
Tendermint supports two primary means of block and transaction event indexing:
1. A key-value sink via an embedded database with a proprietary query language.
2. A Postgres-based sink.
An ABCI application can emit events during block and transaction execution in the form
<abci.Event.Type>.<abci.EventAttributeKey>=<abci.EventAttributeValue>
for example "transfer.amount=10000".
An operator can enable one or both of the supported indexing sinks via the
'tx-index.indexer' Tendermint configuration.
Example:
[tx-index]
indexer = ["kv", "psql"]
If an operator wants to completely disable indexing, they may simply just provide
the "null" sink option in the configuration. All other sinks will be ignored if
"null" is provided.
If indexing is enabled, the indexer.Service will iterate over all enabled sinks
and invoke block and transaction indexing via the appropriate IndexBlockEvents
and IndexTxEvents methods.
Note, the "kv" sink is considered deprecated and its query functionality is very
limited, but does allow users to directly query for block and transaction events
against Tendermint's RPC. Instead, operators are encouraged to use the "psql"
indexing sink when more complex queries are required and for reliability purposes
as PostgreSQL can scale.
Prior to starting Tendermint with the "psql" indexing sink enabled, operators
must ensure the following:
1. The "psql" indexing sink is provided in Tendermint's configuration.
2. A 'tx-index.psql-conn' value is provided that contains the PostgreSQL connection URI.
3. The block and transaction event schemas have been created in the PostgreSQL database.
Tendermint provides the block and transaction event schemas in the following
path: state/indexer/sink/psql/schema.sql
To create the schema in a PostgreSQL database, perform the schema query
manually or invoke schema creation via the CLI:
$ psql <flags> -f state/indexer/sink/psql/schema.sql
The "psql" indexing sink prohibits queries via RPC. When using a PostgreSQL sink,
queries can and should be made directly against the database using SQL.
The following are some example SQL queries against the database schema:
* Query for all transaction events for a given transaction hash:
SELECT * FROM tx_events WHERE hash = '3E7D1F...';
* Query for all transaction events for a given block height:
SELECT * FROM tx_events WHERE height = 25;
* Query for transaction events that have a given type (i.e. value wildcard):
SELECT * FROM tx_events WHERE key LIKE '%transfer.recipient%';
Note that if a complete abci.TxResult is needed, you will need to join "tx_events" with
"tx_results" via a foreign key to obtain the raw protobuf-encoded abci.TxResult.
*/
package indexer

Some files were not shown because too many files have changed in this diff.