Mirror of https://github.com/tendermint/tendermint.git (synced 2026-01-13 16:22:53 +00:00)

Compare commits: wb/undo-qu ... wb/validat (42 commits)
| Author | SHA1 | Date |
|---|---|---|
|  | 32cd294d92 |  |
|  | 5d0671e95f |  |
|  | b853afe2ce |  |
|  | 4d1ad8ec38 |  |
|  | c49f8bc596 |  |
|  | fadd16985e |  |
|  | e2ec93350e |  |
|  | b18d06bd42 |  |
|  | 07336712ad |  |
|  | 2a7497fd9f |  |
|  | d1fe539c2e |  |
|  | c7f6d80148 |  |
|  | 02f7d0ae6e |  |
|  | dd0d273223 |  |
|  | d33a83f194 |  |
|  | d5aaa1af75 |  |
|  | 82d46fe871 |  |
|  | e30434ad5a |  |
|  | 40b7093e85 |  |
|  | 8810d038d9 |  |
|  | 88db4cdf7e |  |
|  | 330eb8ded0 |  |
|  | 9f52d9fbd7 |  |
|  | 8eec8447e8 |  |
|  | 9e56a95473 |  |
|  | d444a71ec7 |  |
|  | 47e0223356 |  |
|  | 2ff233d944 |  |
|  | e17178d70d |  |
|  | 14f4db7c19 |  |
|  | fa1afef125 |  |
|  | 3022f9192a |  |
|  | 315ee5aac5 |  |
|  | e7f77049eb |  |
|  | f73c247a0b |  |
|  | 32008ae1d8 |  |
|  | 89e3f43cbc |  |
|  | 7e17892650 |  |
|  | b20bad14ae |  |
|  | 1f3c3c6848 |  |
|  | 6d2cb8ba8e |  |
|  | 76c935e325 |  |
37  .github/ISSUE_TEMPLATE/proposal.md (vendored)
@@ -1,37 +0,0 @@
---
name: Protocol Change Proposal
about: Create a proposal to request a change to the protocol

---

<!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for opening an issue! ✰
v Before smashing the submit button please review the template.
v Word of caution: Under-specified proposals may be rejected summarily
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->

# Protocol Change Proposal

## Summary

<!-- Short, concise description of the proposed change -->

## Problem Definition

<!-- Why do we need this change?
What problems may be addressed by introducing this change?
What benefits does Tendermint stand to gain by including this change?
Are there any disadvantages of including this change? -->

## Proposal

<!-- Detailed description of requirements of implementation -->

____

#### For Admin Use

- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate contributors tagged
- [ ] Contributor assigned/self-assigned
20  .github/auto-comment.yml (vendored)
@@ -1,16 +1,16 @@
pullRequestOpened: |
:wave: Thanks for creating a PR!
:wave: Thanks for creating a PR!

Before we can merge this PR, please make sure that all the following items have been
Before we can merge this PR, please make sure that all the following items have been
checked off. If any of the checklist items are not applicable, please leave them but
write a little note why.
write a little note why.

- [ ] Wrote tests
- [ ] Updated CHANGELOG_PENDING.md
- [ ] Linked to Github issue with discussion and accepted design OR link to spec that describes this work.
- [ ] Updated relevant documentation (`docs/`) and code comments
- [ ] Re-reviewed `Files changed` in the Github PR explorer
- [ ] Applied Appropriate Labels
- [ ] Wrote tests
- [ ] Updated CHANGELOG_PENDING.md
- [ ] Linked to Github issue with discussion and accepted design OR link to spec that describes this work.
- [ ] Updated relevant documentation (`docs/`) and code comments
- [ ] Re-reviewed `Files changed` in the Github PR explorer
- [ ] Applied Appropriate Labels


Thank you for your contribution to Tendermint! :rocket:
Thank you for your contribution to Tendermint! :rocket:
8  .github/linter/markdownlint.yml (vendored, new file)
@@ -0,0 +1,8 @@
default: true,
MD007: { "indent": 4 }
MD013: false
MD024: { siblings_only: true }
MD025: false
MD033: { no-inline-html: false }
no-hard-tabs: false
whitespace: false
8  .github/linters/markdownlint.yml (vendored)
@@ -1,8 +0,0 @@
default: true,
MD007: {"indent": 4}
MD013: false
MD024: {siblings_only: true}
MD025: false
MD033: {no-inline-html: false}
no-hard-tabs: false
whitespace: false
9  .github/linters/yaml-lint.yml (vendored)
@@ -1,9 +0,0 @@
---
# Default rules for YAML linting from super-linter.
# See: See https://yamllint.readthedocs.io/en/stable/rules.html
extends: default
rules:
document-end: disable
document-start: disable
line-length: disable
truthy: disable
8  .github/workflows/docker.yml (vendored)
@@ -1,13 +1,13 @@
name: Docker
# Build & Push rebuilds the tendermint docker image on every push to master and creation of tags
# Build & Push rebuilds the tendermint docker image on every push to master and creation of tags
# and pushes the image to https://hub.docker.com/r/interchainio/simapp/tags
on:
push:
branches:
- master
tags:
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
- "v[0-9]+.[0-9]+.[0-9]+-rc*" # Push events to matching v*, i.e. v1.0-rc1, v20.15.10-rc5
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
- "v[0-9]+.[0-9]+.[0-9]+-rc*" # Push events to matching v*, i.e. v1.0-rc1, v20.15.10-rc5

jobs:
build:
@@ -49,7 +49,7 @@ jobs:
password: ${{ secrets.DOCKERHUB_TOKEN }}

- name: Publish to Docker Hub
uses: docker/build-push-action@v2.9.0
uses: docker/build-push-action@v2.7.0
with:
context: .
file: ./DOCKER/Dockerfile
36  .github/workflows/e2e-manual.yml (vendored)
@@ -1,36 +0,0 @@
# Runs randomly generated E2E testnets nightly on master
# manually run e2e tests
name: e2e-manual
on:
workflow_dispatch:

jobs:
e2e-nightly-test:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
group: ['00', '01', '02', '03']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v2
with:
go-version: '1.17'

- uses: actions/checkout@v2.4.0

- name: Build
working-directory: test/e2e
# Run make jobs in parallel, since we can't run steps in parallel.
run: make -j2 docker generator runner tests

- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 4 -d networks/nightly/

- name: Run ${{ matrix.p2p }} p2p testnets
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml
17  .github/workflows/e2e-nightly-34x.yml (vendored)
@@ -6,6 +6,7 @@

name: e2e-nightly-34x
on:
workflow_dispatch: # allow running workflow manually, in theory
schedule:
- cron: '0 2 * * *'

@@ -57,3 +58,19 @@ jobs:
SLACK_COLOR: danger
SLACK_MESSAGE: Nightly E2E tests failed on v0.34.x
SLACK_FOOTER: ''

e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test
if: ${{ success() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on success
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly E2E Tests
SLACK_ICON_EMOJI: ':white_check_mark:'
SLACK_COLOR: good
SLACK_MESSAGE: Nightly E2E tests passed on v0.34.x
SLACK_FOOTER: ''
3  .github/workflows/e2e-nightly-35x.yml (vendored)
@@ -5,6 +5,7 @@

name: e2e-nightly-35x
on:
workflow_dispatch: # allow running workflow manually
schedule:
- cron: '0 2 * * *'

@@ -58,7 +59,7 @@ jobs:
SLACK_MESSAGE: Nightly E2E tests failed on v0.35.x
SLACK_FOOTER: ''

e2e-nightly-success: # may turn this off once they seem to pass consistently
e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test
if: ${{ success() }}
runs-on: ubuntu-latest
3  .github/workflows/e2e-nightly-master.yml (vendored)
@@ -5,6 +5,7 @@

name: e2e-nightly-master
on:
workflow_dispatch: # allow running workflow manually
schedule:
- cron: '0 2 * * *'

@@ -55,7 +56,7 @@ jobs:
SLACK_MESSAGE: Nightly E2E tests failed on master
SLACK_FOOTER: ''

e2e-nightly-success: # may turn this off once they seem to pass consistently
e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test
if: ${{ success() }}
runs-on: ubuntu-latest
3  .github/workflows/e2e.yml (vendored)
@@ -2,7 +2,7 @@ name: e2e
# Runs the CI end-to-end test network on all pushes to master or release branches
# and every pull request, but only if any Go files have been changed.
on:
workflow_dispatch: # allow running workflow manually
workflow_dispatch: # allow running workflow manually
pull_request:
push:
branches:
@@ -35,3 +35,4 @@ jobs:
working-directory: test/e2e
run: ./run-multiple.sh networks/ci.toml
if: "env.GIT_DIFF != ''"

2  .github/workflows/fuzz-nightly.yml (vendored)
@@ -1,7 +1,7 @@
# Runs fuzzing nightly.
name: Fuzz Tests
on:
workflow_dispatch: # allow running workflow manually
workflow_dispatch: # allow running workflow manually
schedule:
- cron: '0 3 * * *'
pull_request:
2  .github/workflows/linkchecker.yml (vendored)
@@ -1,5 +1,5 @@
name: Check Markdown links
on:
on:
schedule:
- cron: '* */24 * * *'
jobs:
2  .github/workflows/lint.yml (vendored)
@@ -1,4 +1,4 @@
name: Golang Linter
name: Lint
# Lint runs golangci-lint over the entire Tendermint repository
# This workflow is run on every pull request and push to master
# The `golangci` job will pass without running if no *.{go, mod, sum} files have been modified.
4  .github/workflows/linter.yml (vendored)
@@ -1,4 +1,4 @@
name: Markdown Linter
name: Lint
on:
push:
branches:
@@ -23,10 +23,10 @@ jobs:
- name: Lint Code Base
uses: docker://github/super-linter:v4
env:
LINTER_RULES_PATH: .
VALIDATE_ALL_CODEBASE: true
DEFAULT_BRANCH: master
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
VALIDATE_MD: true
VALIDATE_OPENAPI: true
VALIDATE_YAML: true
YAML_CONFIG_FILE: yaml-lint.yml
18  .github/workflows/markdown-links.yml (vendored)
@@ -1,18 +0,0 @@
# Currently disabled until all links have been fixed
# name: Check Markdown links

# on:
# push:
# branches:
# - master
# pull_request:
# branches: [master]

# jobs:
# markdown-link-check:
# runs-on: ubuntu-latest
# steps:
# - uses: actions/checkout@master
# - uses: gaurav-nelson/github-action-markdown-link-check@1.0.13
# with:
# check-modified-files-only: 'yes'
24  .github/workflows/proto-check.yml (vendored)
@@ -1,24 +0,0 @@
name: Proto Check
# Protobuf runs buf (https://buf.build/) lint and check-breakage
# This workflow is only run when a file in the proto directory
# has been modified.
on:
workflow_dispatch: # allow running workflow manually
pull_request:
paths:
- "proto/*"
jobs:
proto-lint:
runs-on: ubuntu-latest
timeout-minutes: 4
steps:
- uses: actions/checkout@v2.4.0
- name: lint
run: make proto-lint
proto-breakage:
runs-on: ubuntu-latest
timeout-minutes: 4
steps:
- uses: actions/checkout@v2.4.0
- name: check-breakage
run: make proto-check-breaking-ci
64  .github/workflows/proto-dockerfile.yml (vendored)
@@ -1,64 +0,0 @@
# This workflow (re)builds and pushes a Docker image containing the
# protobuf build tools used by the other workflows.
#
# When making changes that require updates to the builder image, you
# should merge the updates first and wait for this workflow to complete,
# so that the changes will be available for the dependent workflows.
#

name: Build & Push Proto Builder Image
on:
pull_request:
paths:
- "proto/*"
push:
branches:
- master
paths:
- "proto/*"
schedule:
# run this job once a month to recieve any go or buf updates
- cron: "0 9 1 * *"

env:
REGISTRY: ghcr.io
IMAGE_NAME: tendermint/docker-build-proto

jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2.4.0
- name: Check out and assign tags
id: prep
run: |
DOCKER_IMAGE="${REGISTRY}/${IMAGE_NAME}"
VERSION=noop
if [[ "$GITHUB_REF" == "refs/tags/*" ]]; then
VERSION="${GITHUB_REF#refs/tags/}"
elif [[ "$GITHUB_REF" == "refs/heads/*" ]]; then
VERSION="$(echo "${GITHUB_REF#refs/heads/}" | sed -r 's#/+#-#g')"
if [[ "${{ github.event.repository.default_branch }}" = "$VERSION" ]]; then
VERSION=latest
fi
fi
TAGS="${DOCKER_IMAGE}:${VERSION}"
echo ::set-output name=tags::"${TAGS}"

- name: Set up docker buildx
uses: docker/setup-buildx-action@v1.6.0

- name: Log in to the container registry
uses: docker/login-action@v1.12.0
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}

- name: Build and publish image
uses: docker/build-push-action@v2.9.0
with:
context: ./proto
file: ./proto/Dockerfile
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.prep.outputs.tags }}
2  .github/workflows/release.yml (vendored)
@@ -5,7 +5,7 @@ on:
branches:
- "RC[0-9]/**"
tags:
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10

jobs:
goreleaser:
15  .gitignore (vendored)
@@ -47,11 +47,10 @@ test/fuzz/**/corpus
test/fuzz/**/crashers
test/fuzz/**/suppressions
test/fuzz/**/*.zip
proto/spec/**/*.pb.go
*.aux
*.bbl
*.blg
*.log
*.pdf
*.gz
*.dvi
proto/tendermint/blocksync/types.proto
proto/tendermint/consensus/types.proto
proto/tendermint/mempool/*.proto
proto/tendermint/p2p/*.proto
proto/tendermint/statesync/*.proto
proto/tendermint/types/*.proto
proto/tendermint/version/*.proto
@@ -29,8 +29,8 @@ release:

archives:
- files:
- LICENSE
- README.md
- UPGRADING.md
- SECURITY.md
- CHANGELOG.md
- LICENSE
- README.md
- UPGRADING.md
- SECURITY.md
- CHANGELOG.md
@@ -1,11 +0,0 @@
default: true
MD001: false
MD007: {indent: 4}
MD013: false
MD024: {siblings_only: true}
MD025: false
MD033: false
MD036: false
MD010: false
MD012: false
MD028: false
35  CHANGELOG.md
@@ -2,41 +2,6 @@

Friendly reminder: We have a [bug bounty program](https://hackerone.com/cosmos).

## v0.35.1

January 26, 2022

Special thanks to external contributors on this release: @altergui, @odeke-em,
@thanethomson

### BREAKING CHANGES

- CLI/RPC/Config

- [config] [\#7276](https://github.com/tendermint/tendermint/pull/7276) rpc: Add experimental config params to allow for subscription buffer size control (@thanethomson).

- P2P Protocol

- [p2p] [\#7265](https://github.com/tendermint/tendermint/pull/7265) Peer manager reduces peer score for each failed dial attempts for peers that have not successfully dialed. (@tychoish)
- [p2p] [\#7594](https://github.com/tendermint/tendermint/pull/7594) always advertise self, to enable mutual address discovery. (@altergui)

### FEATURES

- [rpc] [\#7270](https://github.com/tendermint/tendermint/pull/7270) Add `header` and `header_by_hash` RPC Client queries. (@fedekunze) (@cmwaters)

### IMPROVEMENTS

- [internal/protoio] [\#7325](https://github.com/tendermint/tendermint/pull/7325) Optimized `MarshalDelimited` by inlining the common case and using a `sync.Pool` in the worst case. (@odeke-em)
- [\#7338](https://github.com/tendermint/tendermint/pull/7338) pubsub: Performance improvements for the event query API (backport of #7319) (@creachadair)
- [\#7252](https://github.com/tendermint/tendermint/pull/7252) Add basic metrics to the indexer package. (@creachadair)
- [\#7338](https://github.com/tendermint/tendermint/pull/7338) Performance improvements for the event query API. (@creachadair)

### BUG FIXES

- [\#7310](https://github.com/tendermint/tendermint/issues/7310) pubsub: Report a non-nil error when shutting down (fixes #7306).
- [\#7355](https://github.com/tendermint/tendermint/pull/7355) Fix incorrect tests using the PSQL sink. (@creachadair)
- [\#7683](https://github.com/tendermint/tendermint/pull/7683) rpc: check error code for broadcast_tx_commit. (@tychoish)

## v0.35.0

November 4, 2021
@@ -12,15 +12,14 @@ Special thanks to external contributors on this release:

- CLI/RPC/Config

- [rpc] \#7575 Rework how RPC responses are written back via HTTP. (@creachadair)
- [rpc] \#7121 Remove the deprecated gRPC interface to the RPC service. (@creachadair)
- [blocksync] \#7159 Remove support for disabling blocksync in any circumstance. (@tychoish)
- [mempool] \#7171 Remove legacy mempool implementation. (@tychoish)
- [rpc] \#7575 Rework how RPC responses are written back via HTTP. (@creachadair)
- [rpc] \#7713 Remove unused options for websocket clients. (@creachadair)

- Apps

- [tendermint/spec] \#7804 Migrate spec from [spec repo](https://github.com/tendermint/spec).
- [proto/tendermint] \#6976 Remove core protobuf files in favor of only housing them in the [tendermint/spec](https://github.com/tendermint/spec) repository.

- P2P Protocol

@@ -40,15 +39,13 @@ Special thanks to external contributors on this release:
- [p2p] \#7064 Remove WDRR queue implementation. (@tychoish)
- [config] \#7169 `WriteConfigFile` now returns an error. (@tychoish)
- [libs/service] \#7288 Remove SetLogger method on `service.Service` interface. (@tychoish)
- [abci/client] \#7607 Simplify client interface (removes most "async" methods). (@creachadair)
- [libs/json] \#7673 Remove the libs/json (tmjson) library. (@creachadair)


- Blockchain Protocol

### FEATURES

- [rpc] [\#7270](https://github.com/tendermint/tendermint/pull/7270) Add `header` and `header_by_hash` RPC Client queries. (@fedekunze)
- [rpc] [\#7701] Add `ApplicationInfo` to `status` rpc call which contains the application version. (@jonasbostoen)
- [cli] [#7033](https://github.com/tendermint/tendermint/pull/7033) Add a `rollback` command to rollback to the previous tendermint state in the event of non-determinstic app hash or reverting an upgrade.
- [mempool, rpc] \#7041 Add removeTx operation to the RPC layer. (@tychoish)
- [consensus] \#7354 add a new `synchrony` field to the `ConsensusParameter` struct for controlling the parameters of the proposer-based timestamp algorithm. (@williambanfield)
@@ -56,21 +53,15 @@ Special thanks to external contributors on this release:
- [consensus] \#7391 Use the proposed block timestamp as the proposal timestamp. Update the block validation logic to ensure that the proposed block's timestamp matches the timestamp in the proposal message. (@williambanfield)
- [consensus] \#7415 Update proposal validation logic to Prevote nil if a proposal does not meet the conditions for Timelyness per the proposer-based timestamp specification. (@anca)
- [consensus] \#7382 Update block validation to no longer require the block timestamp to be the median of the timestamps of the previous commit. (@anca)
- [consensus] \#7711 Use the proposer timestamp for the first height instead of the genesis time. Chains will still start consensus at the genesis time. (@anca)

### IMPROVEMENTS

- [internal/protoio] \#7325 Optimized `MarshalDelimited` by inlining the common case and using a `sync.Pool` in the worst case. (@odeke-em)
- [consensus] \#6969 remove logic to 'unlock' a locked block.
- [evidence] \#7700 Evidence messages contain single Evidence instead of EvidenceList (@jmalicevic)
- [pubsub] \#7319 Performance improvements for the event query API (@creachadair)
- [node] \#7521 Define concrete type for seed node implementation (@spacech1mp)
- [rpc] \#7612 paginate mempool /unconfirmed_txs rpc endpoint (@spacech1mp)
- [light] [\#7536](https://github.com/tendermint/tendermint/pull/7536) rpc /status call returns info about the light client (@jmalicevic)
- [types] \#7765 Replace EvidenceData with EvidenceList to avoid unnecessary nesting of evidence fields within a block. (@jmalicevic)

### BUG FIXES

- fix: assignment copies lock value in `BitArray.UnmarshalJSON()` (@lklimek)
- [light] \#7640 Light Client: fix absence proof verification (@ashcherbakov)
- [light] \#7641 Light Client: fix querying against the latest height (@ashcherbakov)
29  Makefile
@@ -73,42 +73,22 @@ install:

$(BUILDDIR)/:
mkdir -p $@
# The Docker image containing the generator, formatter, and linter.
# This is generated by proto/Dockerfile. To update tools, make changes
# there and run the Build & Push Proto Builder Image workflow.
IMAGE := ghcr.io/tendermint/docker-build-proto:latest
DOCKER_PROTO_BUILDER := docker run -v $(shell pwd):/workspace --workdir /workspace $(IMAGE)
HTTPS_GIT := https://github.com/tendermint/tendermint.git

###############################################################################
### Protobuf ###
###############################################################################

proto-all: proto-lint proto-check-breaking
.PHONY: proto-all

proto-gen:
@docker pull -q tendermintdev/docker-build-proto
@echo "Generating Protobuf files"
@$(DOCKER_PROTO_BUILDER) buf generate --template=./buf.gen.yaml --config ./buf.yaml
@$(DOCKER_PROTO_BUILDER) sh ./scripts/protocgen.sh
.PHONY: proto-gen

proto-lint:
@$(DOCKER_PROTO_BUILDER) buf lint --error-format=json --config ./buf.yaml
.PHONY: proto-lint

proto-format:
@echo "Formatting Protobuf files"
@$(DOCKER_PROTO_BUILDER) find . -name '*.proto' -path "./proto/*" -exec clang-format -i {} \;
@$(DOCKER_PROTO_BUILDER) find ./ -not -path "./third_party/*" -name *.proto -exec clang-format -i {} \;
.PHONY: proto-format

proto-check-breaking:
@$(DOCKER_PROTO_BUILDER) buf breaking --against .git --config ./buf.yaml
.PHONY: proto-check-breaking

proto-check-breaking-ci:
@$(DOCKER_PROTO_BUILDER) buf breaking --against $(HTTPS_GIT) --config ./buf.yaml
.PHONY: proto-check-breaking-ci

###############################################################################
### Build ABCI ###
###############################################################################

@@ -328,5 +308,4 @@ $(BUILDDIR)/packages.txt:$(GO_TEST_FILES) $(BUILDDIR)
split-test-packages:$(BUILDDIR)/packages.txt
split -d -n l/$(NUM_SPLIT) $< $<.
test-group-%:split-test-packages
cat $(BUILDDIR)/packages.txt.$* | xargs go test -mod=readonly -timeout=5m -race -coverprofile=$(BUILDDIR)/$*.profile.out
cat $(BUILDDIR)/packages.txt.$* | xargs go test -mod=readonly -timeout=8m -race -coverprofile=$(BUILDDIR)/$*.profile.out
@@ -28,20 +28,25 @@ const (
type Client interface {
service.Service

SetResponseCallback(Callback)
Error() error

// Asynchronous requests
FlushAsync(context.Context) (*ReqRes, error)
DeliverTxAsync(context.Context, types.RequestDeliverTx) (*ReqRes, error)
CheckTxAsync(context.Context, types.RequestCheckTx) (*ReqRes, error)

// Synchronous requests
Flush(context.Context) error
Echo(ctx context.Context, msg string) (*types.ResponseEcho, error)
Info(context.Context, types.RequestInfo) (*types.ResponseInfo, error)
DeliverTx(context.Context, types.RequestDeliverTx) (*types.ResponseDeliverTx, error)
CheckTx(context.Context, types.RequestCheckTx) (*types.ResponseCheckTx, error)
Query(context.Context, types.RequestQuery) (*types.ResponseQuery, error)
Commit(context.Context) (*types.ResponseCommit, error)
InitChain(context.Context, types.RequestInitChain) (*types.ResponseInitChain, error)
PrepareProposal(context.Context, types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error)
ProcessProposal(context.Context, types.RequestProcessProposal) (*types.ResponseProcessProposal, error)
ExtendVote(context.Context, types.RequestExtendVote) (*types.ResponseExtendVote, error)
VerifyVoteExtension(context.Context, types.RequestVerifyVoteExtension) (*types.ResponseVerifyVoteExtension, error)
FinalizeBlock(context.Context, types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error)
BeginBlock(context.Context, types.RequestBeginBlock) (*types.ResponseBeginBlock, error)
EndBlock(context.Context, types.RequestEndBlock) (*types.ResponseEndBlock, error)
ListSnapshots(context.Context, types.RequestListSnapshots) (*types.ResponseListSnapshots, error)
OfferSnapshot(context.Context, types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error)
LoadSnapshotChunk(context.Context, types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error)
@@ -64,21 +69,26 @@ func NewClient(logger log.Logger, addr, transport string, mustConnect bool) (cli
return
}

type Callback func(*types.Request, *types.Response)

type ReqRes struct {
*types.Request
*sync.WaitGroup
*types.Response // Not set atomically, so be sure to use WaitGroup.

mtx sync.Mutex
signal chan struct{}
cb func(*types.Response) // A single callback that may be set.
mtx sync.Mutex
done bool // Gets set to true once *after* WaitGroup.Done().
cb func(*types.Response) // A single callback that may be set.
}

func NewReqRes(req *types.Request) *ReqRes {
return &ReqRes{
Request: req,
Response: nil,
signal: make(chan struct{}),
cb: nil,
Request: req,
WaitGroup: waitGroup1(),
Response: nil,

done: false,
cb: nil,
}
}

@@ -88,14 +98,14 @@ func NewReqRes(req *types.Request) *ReqRes {
func (r *ReqRes) SetCallback(cb func(res *types.Response)) {
r.mtx.Lock()

select {
case <-r.signal:
if r.done {
r.mtx.Unlock()
cb(r.Response)
default:
r.cb = cb
r.mtx.Unlock()
return
}

r.cb = cb
r.mtx.Unlock()
}

// InvokeCallback invokes a thread-safe execution of the configured callback
@@ -109,10 +119,27 @@ func (r *ReqRes) InvokeCallback() {
}
}

// GetCallback returns the configured callback of the ReqRes object which may be
// nil. Note, it is not safe to concurrently call this in cases where it is
// marked done and SetCallback is called before calling GetCallback as that
// will invoke the callback twice and create a potential race condition.
//
// ref: https://github.com/tendermint/tendermint/issues/5439
func (r *ReqRes) GetCallback() func(*types.Response) {
r.mtx.Lock()
defer r.mtx.Unlock()
return r.cb
}

// SetDone marks the ReqRes object as done.
func (r *ReqRes) SetDone() {
r.mtx.Lock()
defer r.mtx.Unlock()

close(r.signal)
r.done = true
r.mtx.Unlock()
}

func waitGroup1() (wg *sync.WaitGroup) {
wg = &sync.WaitGroup{}
wg.Add(1)
return
}
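The hunks above replace the `ReqRes` signal channel with an embedded `*sync.WaitGroup` (created with a count of one by `waitGroup1`) plus a `done` flag, so callers block on the wait group rather than on a channel. A minimal sketch of how a caller might consume a `ReqRes` under that design — the `waitForResponse` helper and the surrounding package are illustrative, not part of the diff:

```go
package example

import (
	"context"
	"fmt"

	abciclient "github.com/tendermint/tendermint/abci/client"
	"github.com/tendermint/tendermint/abci/types"
)

// waitForResponse blocks until the client marks the request done
// (SetDone + WaitGroup.Done in the hunks above) and then returns the
// response stored on the ReqRes. Hypothetical helper, for illustration only.
func waitForResponse(reqres *abciclient.ReqRes) *types.Response {
	reqres.Wait() // promoted from the embedded *sync.WaitGroup
	return reqres.Response
}

// checkTxExample issues an async CheckTx and waits for its result.
func checkTxExample(ctx context.Context, cli abciclient.Client, tx []byte) error {
	reqres, err := cli.CheckTxAsync(ctx, types.RequestCheckTx{Tx: tx})
	if err != nil {
		return err
	}
	res := waitForResponse(reqres)
	if checkTx := res.GetCheckTx(); checkTx != nil {
		fmt.Println("CheckTx code:", checkTx.Code)
	}
	return nil
}
```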
@@ -2,6 +2,7 @@ package abciclient

import (
"fmt"
"sync"

"github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/libs/log"
@@ -13,8 +14,10 @@ type Creator func(log.Logger) (Client, error)
// NewLocalCreator returns a Creator for the given app,
// which will be running locally.
func NewLocalCreator(app types.Application) Creator {
mtx := new(sync.Mutex)

return func(logger log.Logger) (Client, error) {
return NewLocalClient(logger, app), nil
return NewLocalClient(logger, mtx, app), nil
}
}

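With this change, every client produced by the same `Creator` shares one mutex, so their calls into the wrapped `Application` are serialized; passing a nil mutex to `NewLocalClient` still yields a private one. A small usage sketch under those assumptions — the helper functions here are illustrative, not taken from the repository:

```go
package example

import (
	"sync"

	abciclient "github.com/tendermint/tendermint/abci/client"
	"github.com/tendermint/tendermint/abci/types"
	"github.com/tendermint/tendermint/libs/log"
)

// buildClients shows that every client built from one Creator guards the
// Application with the same mutex, serializing their calls.
func buildClients(app types.Application, logger log.Logger) (abciclient.Client, abciclient.Client, error) {
	creator := abciclient.NewLocalCreator(app)

	consensusClient, err := creator(logger) // shares the creator's mutex
	if err != nil {
		return nil, nil, err
	}
	mempoolClient, err := creator(logger) // same mutex: calls are serialized
	if err != nil {
		return nil, nil, err
	}
	return consensusClient, mempoolClient, nil
}

// standalone shows the nil-mutex path added in the diff: NewLocalClient
// allocates its own mutex when none is supplied.
func standalone(app types.Application, logger log.Logger) abciclient.Client {
	var mtx *sync.Mutex // nil on purpose
	return abciclient.NewLocalClient(logger, mtx, app)
}
```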
@@ -28,9 +28,10 @@ type grpcClient struct {
conn *grpc.ClientConn
chReqRes chan *ReqRes // dispatches "async" responses to callbacks *in order*, needed by mempool

mtx sync.Mutex
addr string
err error
mtx sync.Mutex
addr string
err error
resCb func(*types.Request, *types.Response) // listens to all callbacks
}

var _ Client = (*grpcClient)(nil)
@@ -76,9 +77,17 @@ func (cli *grpcClient) OnStart(ctx context.Context) error {
defer cli.mtx.Unlock()

reqres.SetDone()
reqres.Done()

// Notify client listener if set
if cli.resCb != nil {
cli.resCb(reqres.Request, reqres.Response)
}

// Notify reqRes listener if set
reqres.InvokeCallback()
if cb := reqres.GetCallback(); cb != nil {
cb(reqres.Response)
}
}

for {
@@ -153,7 +162,9 @@ func (cli *grpcClient) StopForError(err error) {
cli.mtx.Unlock()

cli.logger.Error("Stopping abci.grpcClient for error", "err", err)
cli.Stop()
if err := cli.Stop(); err != nil {
cli.logger.Error("error stopping abci.grpcClient", "err", err)
}
}

func (cli *grpcClient) Error() error {
@@ -162,66 +173,198 @@ func (cli *grpcClient) Error() error {
return cli.err
}

// Set listener for all responses
// NOTE: callback may get internally generated flush responses.
func (cli *grpcClient) SetResponseCallback(resCb Callback) {
cli.mtx.Lock()
cli.resCb = resCb
cli.mtx.Unlock()
}

//----------------------------------------

// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
req := types.ToRequestFlush()
res, err := cli.client.Flush(ctx, req.GetFlush(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Flush{Flush: res}})
}

// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) DeliverTxAsync(ctx context.Context, params types.RequestDeliverTx) (*ReqRes, error) {
req := types.ToRequestDeliverTx(params)
res, err := cli.client.DeliverTx(ctx, req.GetDeliverTx(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_DeliverTx{DeliverTx: res}})
}

// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) CheckTxAsync(ctx context.Context, params types.RequestCheckTx) (*ReqRes, error) {
req := types.ToRequestCheckTx(params)
res, err := cli.client.CheckTx(ctx, req.GetCheckTx(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_CheckTx{CheckTx: res}})
}

// finishAsyncCall creates a ReqRes for an async call, and immediately populates it
// with the response. We don't complete it until it's been ordered via the channel.
func (cli *grpcClient) finishAsyncCall(ctx context.Context, req *types.Request, res *types.Response) (*ReqRes, error) {
reqres := NewReqRes(req)
reqres.Response = res
select {
case cli.chReqRes <- reqres: // use channel for async responses, since they must be ordered
return reqres, nil
case <-ctx.Done():
return nil, ctx.Err()
}
}

// finishSyncCall waits for an async call to complete. It is necessary to call all
// sync calls asynchronously as well, to maintain call and response ordering via
// the channel, and this method will wait until the async call completes.
func (cli *grpcClient) finishSyncCall(reqres *ReqRes) *types.Response {
// It's possible that the callback is called twice, since the callback can
// be called immediately on SetCallback() in addition to after it has been
// set. This is because completing the ReqRes happens in a separate critical
// section from the one where the callback is called: there is a race where
// SetCallback() is called between completing the ReqRes and dispatching the
// callback.
//
// We also buffer the channel with 1 response, since SetCallback() will be
// called synchronously if the reqres is already completed, in which case
// it will block on sending to the channel since it hasn't gotten around to
// receiving from it yet.
//
// ReqRes should really handle callback dispatch internally, to guarantee
// that it's only called once and avoid the above race conditions.
var once sync.Once
ch := make(chan *types.Response, 1)
reqres.SetCallback(func(res *types.Response) {
once.Do(func() {
ch <- res
})
})
return <-ch
}

//----------------------------------------

func (cli *grpcClient) Flush(ctx context.Context) error { return nil }

func (cli *grpcClient) Echo(ctx context.Context, msg string) (*types.ResponseEcho, error) {
return cli.client.Echo(ctx, types.ToRequestEcho(msg).GetEcho(), grpc.WaitForReady(true))
req := types.ToRequestEcho(msg)
return cli.client.Echo(ctx, req.GetEcho(), grpc.WaitForReady(true))
}

func (cli *grpcClient) Info(ctx context.Context, params types.RequestInfo) (*types.ResponseInfo, error) {
return cli.client.Info(ctx, types.ToRequestInfo(params).GetInfo(), grpc.WaitForReady(true))
func (cli *grpcClient) Info(
ctx context.Context,
params types.RequestInfo,
) (*types.ResponseInfo, error) {
req := types.ToRequestInfo(params)
return cli.client.Info(ctx, req.GetInfo(), grpc.WaitForReady(true))
}

func (cli *grpcClient) CheckTx(ctx context.Context, params types.RequestCheckTx) (*types.ResponseCheckTx, error) {
return cli.client.CheckTx(ctx, types.ToRequestCheckTx(params).GetCheckTx(), grpc.WaitForReady(true))
func (cli *grpcClient) DeliverTx(
ctx context.Context,
params types.RequestDeliverTx,
) (*types.ResponseDeliverTx, error) {

reqres, err := cli.DeliverTxAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetDeliverTx(), cli.Error()
}

func (cli *grpcClient) Query(ctx context.Context, params types.RequestQuery) (*types.ResponseQuery, error) {
return cli.client.Query(ctx, types.ToRequestQuery(params).GetQuery(), grpc.WaitForReady(true))
func (cli *grpcClient) CheckTx(
ctx context.Context,
params types.RequestCheckTx,
) (*types.ResponseCheckTx, error) {

reqres, err := cli.CheckTxAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetCheckTx(), cli.Error()
}

func (cli *grpcClient) Query(
ctx context.Context,
params types.RequestQuery,
) (*types.ResponseQuery, error) {
req := types.ToRequestQuery(params)
return cli.client.Query(ctx, req.GetQuery(), grpc.WaitForReady(true))
}

func (cli *grpcClient) Commit(ctx context.Context) (*types.ResponseCommit, error) {
return cli.client.Commit(ctx, types.ToRequestCommit().GetCommit(), grpc.WaitForReady(true))
req := types.ToRequestCommit()
return cli.client.Commit(ctx, req.GetCommit(), grpc.WaitForReady(true))
}

func (cli *grpcClient) InitChain(ctx context.Context, params types.RequestInitChain) (*types.ResponseInitChain, error) {
return cli.client.InitChain(ctx, types.ToRequestInitChain(params).GetInitChain(), grpc.WaitForReady(true))
func (cli *grpcClient) InitChain(
ctx context.Context,
params types.RequestInitChain,
) (*types.ResponseInitChain, error) {

req := types.ToRequestInitChain(params)
return cli.client.InitChain(ctx, req.GetInitChain(), grpc.WaitForReady(true))
}

func (cli *grpcClient) ListSnapshots(ctx context.Context, params types.RequestListSnapshots) (*types.ResponseListSnapshots, error) {
return cli.client.ListSnapshots(ctx, types.ToRequestListSnapshots(params).GetListSnapshots(), grpc.WaitForReady(true))
func (cli *grpcClient) BeginBlock(
ctx context.Context,
params types.RequestBeginBlock,
) (*types.ResponseBeginBlock, error) {

req := types.ToRequestBeginBlock(params)
return cli.client.BeginBlock(ctx, req.GetBeginBlock(), grpc.WaitForReady(true))
}

func (cli *grpcClient) OfferSnapshot(ctx context.Context, params types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error) {
return cli.client.OfferSnapshot(ctx, types.ToRequestOfferSnapshot(params).GetOfferSnapshot(), grpc.WaitForReady(true))
func (cli *grpcClient) EndBlock(
ctx context.Context,
params types.RequestEndBlock,
) (*types.ResponseEndBlock, error) {

req := types.ToRequestEndBlock(params)
return cli.client.EndBlock(ctx, req.GetEndBlock(), grpc.WaitForReady(true))
}

func (cli *grpcClient) LoadSnapshotChunk(ctx context.Context, params types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
return cli.client.LoadSnapshotChunk(ctx, types.ToRequestLoadSnapshotChunk(params).GetLoadSnapshotChunk(), grpc.WaitForReady(true))
func (cli *grpcClient) ListSnapshots(
ctx context.Context,
params types.RequestListSnapshots,
) (*types.ResponseListSnapshots, error) {

req := types.ToRequestListSnapshots(params)
return cli.client.ListSnapshots(ctx, req.GetListSnapshots(), grpc.WaitForReady(true))
}

func (cli *grpcClient) ApplySnapshotChunk(ctx context.Context, params types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
return cli.client.ApplySnapshotChunk(ctx, types.ToRequestApplySnapshotChunk(params).GetApplySnapshotChunk(), grpc.WaitForReady(true))
func (cli *grpcClient) OfferSnapshot(
ctx context.Context,
params types.RequestOfferSnapshot,
) (*types.ResponseOfferSnapshot, error) {

req := types.ToRequestOfferSnapshot(params)
return cli.client.OfferSnapshot(ctx, req.GetOfferSnapshot(), grpc.WaitForReady(true))
}

func (cli *grpcClient) PrepareProposal(ctx context.Context, params types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
return cli.client.PrepareProposal(ctx, types.ToRequestPrepareProposal(params).GetPrepareProposal(), grpc.WaitForReady(true))
func (cli *grpcClient) LoadSnapshotChunk(
ctx context.Context,
params types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {

req := types.ToRequestLoadSnapshotChunk(params)
return cli.client.LoadSnapshotChunk(ctx, req.GetLoadSnapshotChunk(), grpc.WaitForReady(true))
}

func (cli *grpcClient) ProcessProposal(ctx context.Context, params types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
return cli.client.ProcessProposal(ctx, types.ToRequestProcessProposal(params).GetProcessProposal(), grpc.WaitForReady(true))
}
func (cli *grpcClient) ApplySnapshotChunk(
ctx context.Context,
params types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {

func (cli *grpcClient) ExtendVote(ctx context.Context, params types.RequestExtendVote) (*types.ResponseExtendVote, error) {
return cli.client.ExtendVote(ctx, types.ToRequestExtendVote(params).GetExtendVote(), grpc.WaitForReady(true))
}

func (cli *grpcClient) VerifyVoteExtension(ctx context.Context, params types.RequestVerifyVoteExtension) (*types.ResponseVerifyVoteExtension, error) {
return cli.client.VerifyVoteExtension(ctx, types.ToRequestVerifyVoteExtension(params).GetVerifyVoteExtension(), grpc.WaitForReady(true))
}

func (cli *grpcClient) FinalizeBlock(ctx context.Context, params types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error) {
return cli.client.FinalizeBlock(ctx, types.ToRequestFinalizeBlock(params).GetFinalizeBlock(), grpc.WaitForReady(true))
req := types.ToRequestApplySnapshotChunk(params)
return cli.client.ApplySnapshotChunk(ctx, req.GetApplySnapshotChunk(), grpc.WaitForReady(true))
}
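The comment block inside `finishSyncCall` above explains why the callback may fire twice and why the result channel is buffered. Stripped of the ABCI types, the pattern it relies on looks like this (the names here are illustrative, not part of the Tendermint API):

```go
package example

import "sync"

// waitOnce shows, in isolation, the idiom finishSyncCall uses: a callback
// that may legitimately fire twice (once when it is set on an already
// completed request, and once when the request is completed) still delivers
// exactly one result to the waiting caller.
func waitOnce(setCallback func(func(result string))) string {
	var once sync.Once
	ch := make(chan string, 1) // buffered so a synchronous callback never blocks

	setCallback(func(result string) {
		once.Do(func() { ch <- result })
	})
	return <-ch
}
```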
@@ -16,8 +16,9 @@ import (
type localClient struct {
service.BaseService

mtx sync.Mutex
mtx *sync.Mutex
types.Application
Callback
}

var _ Client = (*localClient)(nil)
@@ -26,8 +27,12 @@ var _ Client = (*localClient)(nil)
// methods of the given app.
//
// Both Async and Sync methods ignore the given context.Context parameter.
func NewLocalClient(logger log.Logger, app types.Application) Client {
func NewLocalClient(logger log.Logger, mtx *sync.Mutex, app types.Application) Client {
if mtx == nil {
mtx = new(sync.Mutex)
}
cli := &localClient{
mtx: mtx,
Application: app,
}
cli.BaseService = *service.NewBaseService(logger, "localClient", cli)
@@ -37,11 +42,44 @@ func NewLocalClient(logger log.Logger, app types.Application) Client {
func (*localClient) OnStart(context.Context) error { return nil }
func (*localClient) OnStop() {}

func (app *localClient) SetResponseCallback(cb Callback) {
app.mtx.Lock()
defer app.mtx.Unlock()
app.Callback = cb
}

// TODO: change types.Application to include Error()?
func (app *localClient) Error() error {
return nil
}

func (app *localClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
// Do nothing
return newLocalReqRes(types.ToRequestFlush(), nil), nil
}

func (app *localClient) DeliverTxAsync(ctx context.Context, params types.RequestDeliverTx) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()

res := app.Application.DeliverTx(params)
return app.callback(
types.ToRequestDeliverTx(params),
types.ToResponseDeliverTx(res),
), nil
}

func (app *localClient) CheckTxAsync(ctx context.Context, req types.RequestCheckTx) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()

res := app.Application.CheckTx(req)
return app.callback(
types.ToRequestCheckTx(req),
types.ToResponseCheckTx(res),
), nil
}

//-------------------------------------------------------

func (app *localClient) Flush(ctx context.Context) error {
@@ -60,6 +98,18 @@ func (app *localClient) Info(ctx context.Context, req types.RequestInfo) (*types
return &res, nil
}

func (app *localClient) DeliverTx(
ctx context.Context,
req types.RequestDeliverTx,
) (*types.ResponseDeliverTx, error) {

app.mtx.Lock()
defer app.mtx.Unlock()

res := app.Application.DeliverTx(req)
return &res, nil
}

func (app *localClient) CheckTx(
ctx context.Context,
req types.RequestCheckTx,
@@ -102,6 +152,30 @@ func (app *localClient) InitChain(
return &res, nil
}

func (app *localClient) BeginBlock(
ctx context.Context,
req types.RequestBeginBlock,
) (*types.ResponseBeginBlock, error) {

app.mtx.Lock()
defer app.mtx.Unlock()

res := app.Application.BeginBlock(req)
return &res, nil
}

func (app *localClient) EndBlock(
ctx context.Context,
req types.RequestEndBlock,
) (*types.ResponseEndBlock, error) {

app.mtx.Lock()
defer app.mtx.Unlock()

res := app.Application.EndBlock(req)
return &res, nil
}

func (app *localClient) ListSnapshots(
ctx context.Context,
req types.RequestListSnapshots,
@@ -148,57 +222,16 @@ func (app *localClient) ApplySnapshotChunk(
return &res, nil
}

func (app *localClient) PrepareProposal(
ctx context.Context,
req types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
//-------------------------------------------------------

app.mtx.Lock()
defer app.mtx.Unlock()

res := app.Application.PrepareProposal(req)
return &res, nil
func (app *localClient) callback(req *types.Request, res *types.Response) *ReqRes {
app.Callback(req, res)
return newLocalReqRes(req, res)
}

func (app *localClient) ProcessProposal(
ctx context.Context,
req types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {

app.mtx.Lock()
defer app.mtx.Unlock()

res := app.Application.ProcessProposal(req)
return &res, nil
}

func (app *localClient) ExtendVote(
ctx context.Context,
req types.RequestExtendVote) (*types.ResponseExtendVote, error) {

app.mtx.Lock()
defer app.mtx.Unlock()

res := app.Application.ExtendVote(req)
return &res, nil
}

func (app *localClient) VerifyVoteExtension(
ctx context.Context,
req types.RequestVerifyVoteExtension) (*types.ResponseVerifyVoteExtension, error) {

app.mtx.Lock()
defer app.mtx.Unlock()

res := app.Application.VerifyVoteExtension(req)
return &res, nil
}

func (app *localClient) FinalizeBlock(
ctx context.Context,
req types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error) {

app.mtx.Lock()
defer app.mtx.Unlock()

res := app.Application.FinalizeBlock(req)
return &res, nil
func newLocalReqRes(req *types.Request, res *types.Response) *ReqRes {
reqRes := NewReqRes(req)
reqRes.Response = res
reqRes.SetDone()
return reqRes
}
@@ -40,6 +40,29 @@ func (_m *Client) ApplySnapshotChunk(_a0 context.Context, _a1 types.RequestApply
return r0, r1
}

// BeginBlock provides a mock function with given fields: _a0, _a1
func (_m *Client) BeginBlock(_a0 context.Context, _a1 types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
ret := _m.Called(_a0, _a1)

var r0 *types.ResponseBeginBlock
if rf, ok := ret.Get(0).(func(context.Context, types.RequestBeginBlock) *types.ResponseBeginBlock); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseBeginBlock)
}
}

var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestBeginBlock) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}

return r0, r1
}

// CheckTx provides a mock function with given fields: _a0, _a1
func (_m *Client) CheckTx(_a0 context.Context, _a1 types.RequestCheckTx) (*types.ResponseCheckTx, error) {
ret := _m.Called(_a0, _a1)
@@ -109,6 +132,52 @@ func (_m *Client) Commit(_a0 context.Context) (*types.ResponseCommit, error) {
return r0, r1
}

// DeliverTx provides a mock function with given fields: _a0, _a1
func (_m *Client) DeliverTx(_a0 context.Context, _a1 types.RequestDeliverTx) (*types.ResponseDeliverTx, error) {
ret := _m.Called(_a0, _a1)

var r0 *types.ResponseDeliverTx
if rf, ok := ret.Get(0).(func(context.Context, types.RequestDeliverTx) *types.ResponseDeliverTx); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseDeliverTx)
}
}

var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestDeliverTx) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}

return r0, r1
}

// DeliverTxAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) DeliverTxAsync(_a0 context.Context, _a1 types.RequestDeliverTx) (*abciclient.ReqRes, error) {
ret := _m.Called(_a0, _a1)

var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestDeliverTx) *abciclient.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
}
}

var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestDeliverTx) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}

return r0, r1
}

// Echo provides a mock function with given fields: ctx, msg
func (_m *Client) Echo(ctx context.Context, msg string) (*types.ResponseEcho, error) {
ret := _m.Called(ctx, msg)
@@ -132,6 +201,29 @@ func (_m *Client) Echo(ctx context.Context, msg string) (*types.ResponseEcho, er
return r0, r1
}

// EndBlock provides a mock function with given fields: _a0, _a1
func (_m *Client) EndBlock(_a0 context.Context, _a1 types.RequestEndBlock) (*types.ResponseEndBlock, error) {
ret := _m.Called(_a0, _a1)

var r0 *types.ResponseEndBlock
if rf, ok := ret.Get(0).(func(context.Context, types.RequestEndBlock) *types.ResponseEndBlock); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseEndBlock)
}
}

var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestEndBlock) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}

return r0, r1
}

// Error provides a mock function with given fields:
func (_m *Client) Error() error {
ret := _m.Called()
@@ -146,52 +238,6 @@ func (_m *Client) Error() error {
return r0
}

// ExtendVote provides a mock function with given fields: _a0, _a1
func (_m *Client) ExtendVote(_a0 context.Context, _a1 types.RequestExtendVote) (*types.ResponseExtendVote, error) {
ret := _m.Called(_a0, _a1)

var r0 *types.ResponseExtendVote
if rf, ok := ret.Get(0).(func(context.Context, types.RequestExtendVote) *types.ResponseExtendVote); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseExtendVote)
}
}

var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestExtendVote) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}

return r0, r1
}

// FinalizeBlock provides a mock function with given fields: _a0, _a1
func (_m *Client) FinalizeBlock(_a0 context.Context, _a1 types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error) {
ret := _m.Called(_a0, _a1)

var r0 *types.ResponseFinalizeBlock
if rf, ok := ret.Get(0).(func(context.Context, types.RequestFinalizeBlock) *types.ResponseFinalizeBlock); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseFinalizeBlock)
}
}

var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestFinalizeBlock) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}

return r0, r1
}

// Flush provides a mock function with given fields: _a0
func (_m *Client) Flush(_a0 context.Context) error {
ret := _m.Called(_a0)
@@ -206,6 +252,29 @@ func (_m *Client) Flush(_a0 context.Context) error {
return r0
}

// FlushAsync provides a mock function with given fields: _a0
func (_m *Client) FlushAsync(_a0 context.Context) (*abciclient.ReqRes, error) {
ret := _m.Called(_a0)

var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context) *abciclient.ReqRes); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
}
}

var r1 error
if rf, ok := ret.Get(1).(func(context.Context) error); ok {
r1 = rf(_a0)
} else {
r1 = ret.Error(1)
}

return r0, r1
}

// Info provides a mock function with given fields: _a0, _a1
func (_m *Client) Info(_a0 context.Context, _a1 types.RequestInfo) (*types.ResponseInfo, error) {
ret := _m.Called(_a0, _a1)
@@ -335,52 +404,6 @@ func (_m *Client) OfferSnapshot(_a0 context.Context, _a1 types.RequestOfferSnaps
return r0, r1
}

// PrepareProposal provides a mock function with given fields: _a0, _a1
func (_m *Client) PrepareProposal(_a0 context.Context, _a1 types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
ret := _m.Called(_a0, _a1)

var r0 *types.ResponsePrepareProposal
if rf, ok := ret.Get(0).(func(context.Context, types.RequestPrepareProposal) *types.ResponsePrepareProposal); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponsePrepareProposal)
}
}

var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestPrepareProposal) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}

return r0, r1
}

// ProcessProposal provides a mock function with given fields: _a0, _a1
func (_m *Client) ProcessProposal(_a0 context.Context, _a1 types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
ret := _m.Called(_a0, _a1)

var r0 *types.ResponseProcessProposal
if rf, ok := ret.Get(0).(func(context.Context, types.RequestProcessProposal) *types.ResponseProcessProposal); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseProcessProposal)
}
}

var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestProcessProposal) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}

return r0, r1
}

// Query provides a mock function with given fields: _a0, _a1
func (_m *Client) Query(_a0 context.Context, _a1 types.RequestQuery) (*types.ResponseQuery, error) {
ret := _m.Called(_a0, _a1)
@@ -404,6 +427,11 @@ func (_m *Client) Query(_a0 context.Context, _a1 types.RequestQuery) (*types.Res
return r0, r1
}

// SetResponseCallback provides a mock function with given fields: _a0
func (_m *Client) SetResponseCallback(_a0 abciclient.Callback) {
_m.Called(_a0)
}

// Start provides a mock function with given fields: _a0
func (_m *Client) Start(_a0 context.Context) error {
ret := _m.Called(_a0)
@@ -418,27 +446,18 @@ func (_m *Client) Start(_a0 context.Context) error {
return r0
}

// VerifyVoteExtension provides a mock function with given fields: _a0, _a1
func (_m *Client) VerifyVoteExtension(_a0 context.Context, _a1 types.RequestVerifyVoteExtension) (*types.ResponseVerifyVoteExtension, error) {
ret := _m.Called(_a0, _a1)
// String provides a mock function with given fields:
func (_m *Client) String() string {
ret := _m.Called()

var r0 *types.ResponseVerifyVoteExtension
if rf, ok := ret.Get(0).(func(context.Context, types.RequestVerifyVoteExtension) *types.ResponseVerifyVoteExtension); ok {
r0 = rf(_a0, _a1)
var r0 string
if rf, ok := ret.Get(0).(func() string); ok {
r0 = rf()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseVerifyVoteExtension)
}
r0 = ret.Get(0).(string)
|
||||
}
|
||||
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(1).(func(context.Context, types.RequestVerifyVoteExtension) error); ok {
|
||||
r1 = rf(_a0, _a1)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
return r0
|
||||
}
|
||||
|
||||
// Wait provides a mock function with given fields:
|
||||
|
||||
@@ -24,6 +24,11 @@ const (
	reqQueueSize = 256
)

type reqResWithContext struct {
	R *ReqRes
	C context.Context // if context.Err is not nil, reqRes will be thrown away (ignored)
}

// This is goroutine-safe, but users should beware that the application in
|
||||
// general is not meant to be interfaced with concurrent callers.
|
||||
type socketClient struct {
|
||||
@@ -34,7 +39,7 @@ type socketClient struct {
|
||||
mustConnect bool
|
||||
conn net.Conn
|
||||
|
||||
reqQueue chan *ReqRes
|
||||
reqQueue chan *reqResWithContext
|
||||
|
||||
mtx sync.Mutex
|
||||
err error
|
||||
@@ -50,7 +55,7 @@ var _ Client = (*socketClient)(nil)
|
||||
func NewSocketClient(logger log.Logger, addr string, mustConnect bool) Client {
|
||||
cli := &socketClient{
|
||||
logger: logger,
|
||||
reqQueue: make(chan *ReqRes, reqQueueSize),
|
||||
reqQueue: make(chan *reqResWithContext, reqQueueSize),
|
||||
mustConnect: mustConnect,
|
||||
addr: addr,
|
||||
reqSent: list.New(),
|
||||
@@ -94,10 +99,7 @@ func (cli *socketClient) OnStop() {
|
||||
cli.conn.Close()
|
||||
}
|
||||
|
||||
// this timeout is arbitrary.
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
|
||||
defer cancel()
|
||||
cli.drainQueue(ctx)
|
||||
cli.drainQueue()
|
||||
}
|
||||
|
||||
// Error returns an error if the client was stopped abruptly.
|
||||
@@ -107,6 +109,16 @@ func (cli *socketClient) Error() error {
	return cli.err
}

// SetResponseCallback sets a callback, which will be executed for each
// non-error & non-empty response from the server.
//
// NOTE: callback may get internally generated flush responses.
func (cli *socketClient) SetResponseCallback(resCb Callback) {
	cli.mtx.Lock()
	defer cli.mtx.Unlock()
	cli.resCb = resCb
}

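For orientation, here is a minimal caller-side sketch (not part of the diff, assuming an already-started `abciclient.Client`) of registering a global response callback, following the same pattern the stream test later in this document uses:

```go
package example

import (
	"fmt"

	abciclient "github.com/tendermint/tendermint/abci/client"
	"github.com/tendermint/tendermint/abci/types"
)

// RegisterLoggingCallback attaches a global response callback to a client.
// The callback also receives internally generated flush responses, so those
// are matched and ignored explicitly.
func RegisterLoggingCallback(client abciclient.Client) {
	client.SetResponseCallback(func(req *types.Request, res *types.Response) {
		switch r := res.Value.(type) {
		case *types.Response_DeliverTx:
			fmt.Println("deliver_tx code:", r.DeliverTx.Code)
		case *types.Response_Flush:
			// ignore internally generated flush responses
		}
	})
}
```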
//----------------------------------------
|
||||
|
||||
func (cli *socketClient) sendRequestsRoutine(ctx context.Context, conn io.Writer) {
|
||||
@@ -120,9 +132,13 @@ func (cli *socketClient) sendRequestsRoutine(ctx context.Context, conn io.Writer
|
||||
return
|
||||
}
|
||||
|
||||
cli.willSendReq(reqres)
|
||||
if reqres.C.Err() != nil {
|
||||
cli.logger.Debug("Request's context is done", "req", reqres.R, "err", reqres.C.Err())
|
||||
continue
|
||||
}
|
||||
cli.willSendReq(reqres.R)
|
||||
|
||||
if err := types.WriteMessage(reqres.Request, bw); err != nil {
|
||||
if err := types.WriteMessage(reqres.R.Request, bw); err != nil {
|
||||
cli.stopForError(fmt.Errorf("write to buffer: %w", err))
|
||||
return
|
||||
}
|
||||
@@ -187,7 +203,7 @@ func (cli *socketClient) didRecvResponse(res *types.Response) error {
|
||||
}
|
||||
|
||||
reqres.Response = res
|
||||
reqres.SetDone() // release waiters
|
||||
reqres.Done() // release waiters
|
||||
cli.reqSent.Remove(next) // pop first item from linked list
|
||||
|
||||
// Notify client listener if set (global callback).
|
||||
@@ -206,8 +222,22 @@ func (cli *socketClient) didRecvResponse(res *types.Response) error {
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
func (cli *socketClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
|
||||
return cli.queueRequestAsync(ctx, types.ToRequestFlush())
|
||||
}
|
||||
|
||||
func (cli *socketClient) DeliverTxAsync(ctx context.Context, req types.RequestDeliverTx) (*ReqRes, error) {
|
||||
return cli.queueRequestAsync(ctx, types.ToRequestDeliverTx(req))
|
||||
}
|
||||
|
||||
func (cli *socketClient) CheckTxAsync(ctx context.Context, req types.RequestCheckTx) (*ReqRes, error) {
|
||||
return cli.queueRequestAsync(ctx, types.ToRequestCheckTx(req))
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
func (cli *socketClient) Flush(ctx context.Context) error {
|
||||
reqRes, err := cli.queueRequest(ctx, types.ToRequestFlush())
|
||||
reqRes, err := cli.queueRequest(ctx, types.ToRequestFlush(), true)
|
||||
if err != nil {
|
||||
return queueErr(err)
|
||||
}
|
||||
@@ -216,8 +246,15 @@ func (cli *socketClient) Flush(ctx context.Context) error {
|
||||
return err
|
||||
}
|
||||
|
||||
	gotResp := make(chan struct{})
	go func() {
		// NOTE: if we don't flush the queue, it's possible to get stuck here
		reqRes.Wait()
		close(gotResp)
	}()
|
||||
select {
|
||||
case <-reqRes.signal:
|
||||
case <-gotResp:
|
||||
return cli.Error()
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
@@ -243,6 +280,18 @@ func (cli *socketClient) Info(
|
||||
return reqres.Response.GetInfo(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) DeliverTx(
|
||||
ctx context.Context,
|
||||
req types.RequestDeliverTx,
|
||||
) (*types.ResponseDeliverTx, error) {
|
||||
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestDeliverTx(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetDeliverTx(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) CheckTx(
|
||||
ctx context.Context,
|
||||
req types.RequestCheckTx,
|
||||
@@ -285,6 +334,30 @@ func (cli *socketClient) InitChain(
|
||||
return reqres.Response.GetInitChain(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) BeginBlock(
|
||||
ctx context.Context,
|
||||
req types.RequestBeginBlock,
|
||||
) (*types.ResponseBeginBlock, error) {
|
||||
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestBeginBlock(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetBeginBlock(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) EndBlock(
|
||||
ctx context.Context,
|
||||
req types.RequestEndBlock,
|
||||
) (*types.ResponseEndBlock, error) {
|
||||
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestEndBlock(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetEndBlock(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) ListSnapshots(
|
||||
ctx context.Context,
|
||||
req types.RequestListSnapshots,
|
||||
@@ -331,91 +404,55 @@ func (cli *socketClient) ApplySnapshotChunk(
|
||||
return reqres.Response.GetApplySnapshotChunk(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) PrepareProposal(
|
||||
ctx context.Context,
|
||||
req types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
|
||||
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestPrepareProposal(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetPrepareProposal(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) ProcessProposal(
|
||||
ctx context.Context,
|
||||
req types.RequestProcessProposal,
|
||||
) (*types.ResponseProcessProposal, error) {
|
||||
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestProcessProposal(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetProcessProposal(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) ExtendVote(
|
||||
ctx context.Context,
|
||||
req types.RequestExtendVote) (*types.ResponseExtendVote, error) {
|
||||
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestExtendVote(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetExtendVote(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) VerifyVoteExtension(
|
||||
ctx context.Context,
|
||||
req types.RequestVerifyVoteExtension) (*types.ResponseVerifyVoteExtension, error) {
|
||||
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestVerifyVoteExtension(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetVerifyVoteExtension(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) FinalizeBlock(
|
||||
ctx context.Context,
|
||||
req types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error) {
|
||||
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestFinalizeBlock(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetFinalizeBlock(), nil
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
// queueRequest enqueues req onto the queue. The request can break early if
// the context is canceled. If the queue is full, this method blocks to allow
// the request to be placed onto the queue. This has the effect of creating an
// unbounded queue of goroutines waiting to write to this queue which is a bit
// antithetical to the purposes of a queue, however, undoing this behavior has
// dangerous upstream implications as a result of the usage of this behavior upstream.
// Remove at your peril.
// queueRequest enqueues req onto the queue. If the queue is full, it either
// returns an error (sync=false) or blocks (sync=true).
//
// When sync=true, ctx can be used to break early. When sync=false, ctx will be
// used later to determine if the request should be dropped (if ctx.Err is
// non-nil).
//
// The caller is responsible for checking cli.Error.
func (cli *socketClient) queueRequest(ctx context.Context, req *types.Request) (*ReqRes, error) {
func (cli *socketClient) queueRequest(ctx context.Context, req *types.Request, sync bool) (*ReqRes, error) {
reqres := NewReqRes(req)
|
||||
|
||||
select {
|
||||
case cli.reqQueue <- reqres:
|
||||
case <-ctx.Done():
|
||||
return nil, ctx.Err()
|
||||
if sync {
|
||||
select {
|
||||
case cli.reqQueue <- &reqResWithContext{R: reqres, C: context.Background()}:
|
||||
case <-ctx.Done():
|
||||
return nil, ctx.Err()
|
||||
}
|
||||
} else {
|
||||
select {
|
||||
case cli.reqQueue <- &reqResWithContext{R: reqres, C: ctx}:
|
||||
default:
|
||||
return nil, errors.New("buffer is full")
|
||||
}
|
||||
}
|
||||
|
||||
return reqres, nil
|
||||
}
|
||||
|
||||
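As a usage-level illustration of the sync/async split above, here is a hedged sketch (not part of the diff) of how a caller might exercise both paths, assuming a started client whose `Client` interface exposes `Flush` and `CheckTxAsync` as the socket client does; the one-second timeout and the tx bytes are arbitrary:

```go
package example

import (
	"context"
	"fmt"
	"time"

	abciclient "github.com/tendermint/tendermint/abci/client"
	"github.com/tendermint/tendermint/abci/types"
)

// DemoQueueing exercises both queueing modes of the socket client.
func DemoQueueing(client abciclient.Client) error {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Synchronous path (sync=true): blocks while the request queue is full,
	// but the context timeout lets the caller break out early.
	if err := client.Flush(ctx); err != nil {
		return fmt.Errorf("flush failed or timed out: %w", err)
	}

	// Asynchronous path (sync=false): never blocks; a full queue surfaces
	// immediately as a "buffer is full" error instead.
	if _, err := client.CheckTxAsync(ctx, types.RequestCheckTx{Tx: []byte("tx")}); err != nil {
		return fmt.Errorf("could not enqueue CheckTx: %w", err)
	}
	return nil
}
```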
func (cli *socketClient) queueRequestAsync(
|
||||
ctx context.Context,
|
||||
req *types.Request,
|
||||
) (*ReqRes, error) {
|
||||
|
||||
reqres, err := cli.queueRequest(ctx, req, false)
|
||||
if err != nil {
|
||||
return nil, queueErr(err)
|
||||
}
|
||||
|
||||
return reqres, cli.Error()
|
||||
}
|
||||
|
||||
func (cli *socketClient) queueRequestAndFlush(
|
||||
ctx context.Context,
|
||||
req *types.Request,
|
||||
) (*ReqRes, error) {
|
||||
|
||||
reqres, err := cli.queueRequest(ctx, req)
|
||||
reqres, err := cli.queueRequest(ctx, req, true)
|
||||
if err != nil {
|
||||
return nil, queueErr(err)
|
||||
}
|
||||
@@ -433,14 +470,14 @@ func queueErr(e error) error {
|
||||
|
||||
// drainQueue marks as complete and discards all remaining pending requests
|
||||
// from the queue.
|
||||
func (cli *socketClient) drainQueue(ctx context.Context) {
|
||||
func (cli *socketClient) drainQueue() {
|
||||
cli.mtx.Lock()
|
||||
defer cli.mtx.Unlock()
|
||||
|
||||
// mark all in-flight messages as resolved (they will get cli.Error())
|
||||
for req := cli.reqSent.Front(); req != nil; req = req.Next() {
|
||||
reqres := req.Value.(*ReqRes)
|
||||
reqres.SetDone()
|
||||
reqres.Done()
|
||||
}
|
||||
|
||||
// Mark all queued messages as resolved.
|
||||
@@ -450,10 +487,8 @@ func (cli *socketClient) drainQueue(ctx context.Context) {
|
||||
// See https://github.com/tendermint/tendermint/issues/6996.
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return
|
||||
case reqres := <-cli.reqQueue:
|
||||
reqres.SetDone()
|
||||
reqres.R.Done()
|
||||
default:
|
||||
return
|
||||
}
|
||||
@@ -470,6 +505,8 @@ func resMatchesReq(req *types.Request, res *types.Response) (ok bool) {
|
||||
_, ok = res.Value.(*types.Response_Flush)
|
||||
case *types.Request_Info:
|
||||
_, ok = res.Value.(*types.Response_Info)
|
||||
case *types.Request_DeliverTx:
|
||||
_, ok = res.Value.(*types.Response_DeliverTx)
|
||||
case *types.Request_CheckTx:
|
||||
_, ok = res.Value.(*types.Response_CheckTx)
|
||||
case *types.Request_Commit:
|
||||
@@ -478,12 +515,10 @@ func resMatchesReq(req *types.Request, res *types.Response) (ok bool) {
|
||||
_, ok = res.Value.(*types.Response_Query)
|
||||
case *types.Request_InitChain:
|
||||
_, ok = res.Value.(*types.Response_InitChain)
|
||||
case *types.Request_PrepareProposal:
|
||||
_, ok = res.Value.(*types.Response_PrepareProposal)
|
||||
case *types.Request_ExtendVote:
|
||||
_, ok = res.Value.(*types.Response_ExtendVote)
|
||||
case *types.Request_VerifyVoteExtension:
|
||||
_, ok = res.Value.(*types.Response_VerifyVoteExtension)
|
||||
case *types.Request_BeginBlock:
|
||||
_, ok = res.Value.(*types.Response_BeginBlock)
|
||||
case *types.Request_EndBlock:
|
||||
_, ok = res.Value.(*types.Response_EndBlock)
|
||||
case *types.Request_ApplySnapshotChunk:
|
||||
_, ok = res.Value.(*types.Response_ApplySnapshotChunk)
|
||||
case *types.Request_LoadSnapshotChunk:
|
||||
@@ -492,8 +527,6 @@ func resMatchesReq(req *types.Request, res *types.Response) (ok bool) {
|
||||
_, ok = res.Value.(*types.Response_ListSnapshots)
|
||||
case *types.Request_OfferSnapshot:
|
||||
_, ok = res.Value.(*types.Response_OfferSnapshot)
|
||||
case *types.Request_FinalizeBlock:
|
||||
_, ok = res.Value.(*types.Response_FinalizeBlock)
|
||||
}
|
||||
return ok
|
||||
}
|
||||
@@ -508,5 +541,7 @@ func (cli *socketClient) stopForError(err error) {
|
||||
cli.mtx.Unlock()
|
||||
|
||||
cli.logger.Info("Stopping abci.socketClient", "reason", err)
|
||||
cli.Stop()
|
||||
if err := cli.Stop(); err != nil {
|
||||
cli.logger.Error("error stopping abci.socketClient", "err", err)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -29,7 +29,7 @@ func TestProperSyncCalls(t *testing.T) {
|
||||
|
||||
resp := make(chan error, 1)
|
||||
go func() {
|
||||
rsp, err := c.FinalizeBlock(ctx, types.RequestFinalizeBlock{})
|
||||
rsp, err := c.BeginBlock(ctx, types.RequestBeginBlock{})
|
||||
assert.NoError(t, err)
|
||||
assert.NoError(t, c.Flush(ctx))
|
||||
assert.NotNil(t, rsp)
|
||||
@@ -79,7 +79,7 @@ type slowApp struct {
|
||||
types.BaseApplication
|
||||
}
|
||||
|
||||
func (slowApp) FinalizeBlock(req types.RequestFinalizeBlock) types.ResponseFinalizeBlock {
|
||||
func (slowApp) BeginBlock(req types.RequestBeginBlock) types.ResponseBeginBlock {
|
||||
time.Sleep(200 * time.Millisecond)
|
||||
return types.ResponseFinalizeBlock{}
|
||||
return types.ResponseBeginBlock{}
|
||||
}
|
||||
|
||||
@@ -14,7 +14,6 @@ import (
|
||||
"github.com/spf13/cobra"
|
||||
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
"github.com/tendermint/tendermint/version"
|
||||
|
||||
abciclient "github.com/tendermint/tendermint/abci/client"
|
||||
"github.com/tendermint/tendermint/abci/example/code"
|
||||
@@ -22,12 +21,14 @@ import (
|
||||
"github.com/tendermint/tendermint/abci/server"
|
||||
servertest "github.com/tendermint/tendermint/abci/tests/server"
|
||||
"github.com/tendermint/tendermint/abci/types"
|
||||
"github.com/tendermint/tendermint/abci/version"
|
||||
"github.com/tendermint/tendermint/proto/tendermint/crypto"
|
||||
)
|
||||
|
||||
// client is a global variable so it can be reused by the console
|
||||
var (
|
||||
client abciclient.Client
|
||||
logger log.Logger
|
||||
)
|
||||
|
||||
// flags
|
||||
@@ -47,32 +48,34 @@ var (
|
||||
flagPersist string
|
||||
)
|
||||
|
||||
func RootCmmand(logger log.Logger) *cobra.Command {
|
||||
return &cobra.Command{
|
||||
Use: "abci-cli",
|
||||
Short: "the ABCI CLI tool wraps an ABCI client",
|
||||
Long: "the ABCI CLI tool wraps an ABCI client and is used for testing ABCI servers",
|
||||
PersistentPreRunE: func(cmd *cobra.Command, args []string) (err error) {
|
||||
var RootCmd = &cobra.Command{
|
||||
Use: "abci-cli",
|
||||
Short: "the ABCI CLI tool wraps an ABCI client",
|
||||
Long: "the ABCI CLI tool wraps an ABCI client and is used for testing ABCI servers",
|
||||
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
|
||||
|
||||
switch cmd.Use {
|
||||
case "kvstore", "version":
|
||||
return nil
|
||||
}
|
||||
|
||||
if client == nil {
|
||||
var err error
|
||||
client, err = abciclient.NewClient(logger.With("module", "abci-client"), flagAddress, flagAbci, false)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := client.Start(cmd.Context()); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
switch cmd.Use {
|
||||
case "kvstore", "version":
|
||||
return nil
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
if logger == nil {
|
||||
logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
|
||||
}
|
||||
|
||||
if client == nil {
|
||||
var err error
|
||||
client, err = abciclient.NewClient(logger.With("module", "abci-client"), flagAddress, flagAbci, false)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := client.Start(cmd.Context()); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
},
|
||||
}
|
||||
|
||||
// Structure for data passed to print response.
|
||||
@@ -94,46 +97,56 @@ type queryResponse struct {
|
||||
}
|
||||
|
||||
func Execute() error {
|
||||
logger, err := log.NewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
cmd := RootCmmand(logger)
|
||||
addGlobalFlags(cmd)
|
||||
addCommands(cmd, logger)
|
||||
return cmd.Execute()
|
||||
addGlobalFlags()
|
||||
addCommands()
|
||||
return RootCmd.Execute()
|
||||
}
|
||||
|
||||
func addGlobalFlags(cmd *cobra.Command) {
|
||||
cmd.PersistentFlags().StringVarP(&flagAddress,
|
||||
func addGlobalFlags() {
|
||||
RootCmd.PersistentFlags().StringVarP(&flagAddress,
|
||||
"address",
|
||||
"",
|
||||
"tcp://0.0.0.0:26658",
|
||||
"address of application socket")
|
||||
cmd.PersistentFlags().StringVarP(&flagAbci, "abci", "", "socket", "either socket or grpc")
|
||||
cmd.PersistentFlags().BoolVarP(&flagVerbose,
|
||||
RootCmd.PersistentFlags().StringVarP(&flagAbci, "abci", "", "socket", "either socket or grpc")
|
||||
RootCmd.PersistentFlags().BoolVarP(&flagVerbose,
|
||||
"verbose",
|
||||
"v",
|
||||
false,
|
||||
"print the command and results as if it were a console session")
|
||||
cmd.PersistentFlags().StringVarP(&flagLogLevel, "log_level", "", "debug", "set the logger level")
|
||||
RootCmd.PersistentFlags().StringVarP(&flagLogLevel, "log_level", "", "debug", "set the logger level")
|
||||
}
|
||||
|
||||
func addCommands(cmd *cobra.Command, logger log.Logger) {
|
||||
cmd.AddCommand(batchCmd)
|
||||
cmd.AddCommand(consoleCmd)
|
||||
cmd.AddCommand(echoCmd)
|
||||
cmd.AddCommand(infoCmd)
|
||||
cmd.AddCommand(deliverTxCmd)
|
||||
cmd.AddCommand(checkTxCmd)
|
||||
cmd.AddCommand(commitCmd)
|
||||
cmd.AddCommand(versionCmd)
|
||||
cmd.AddCommand(testCmd)
|
||||
cmd.AddCommand(getQueryCmd())
|
||||
func addQueryFlags() {
|
||||
queryCmd.PersistentFlags().StringVarP(&flagPath, "path", "", "/store", "path to prefix query with")
|
||||
queryCmd.PersistentFlags().IntVarP(&flagHeight, "height", "", 0, "height to query the blockchain at")
|
||||
queryCmd.PersistentFlags().BoolVarP(&flagProve,
|
||||
"prove",
|
||||
"",
|
||||
false,
|
||||
"whether or not to return a merkle proof of the query result")
|
||||
}
|
||||
|
||||
func addKVStoreFlags() {
|
||||
kvstoreCmd.PersistentFlags().StringVarP(&flagPersist, "persist", "", "", "directory to use for a database")
|
||||
}
|
||||
|
||||
func addCommands() {
|
||||
RootCmd.AddCommand(batchCmd)
|
||||
RootCmd.AddCommand(consoleCmd)
|
||||
RootCmd.AddCommand(echoCmd)
|
||||
RootCmd.AddCommand(infoCmd)
|
||||
RootCmd.AddCommand(deliverTxCmd)
|
||||
RootCmd.AddCommand(checkTxCmd)
|
||||
RootCmd.AddCommand(commitCmd)
|
||||
RootCmd.AddCommand(versionCmd)
|
||||
RootCmd.AddCommand(testCmd)
|
||||
addQueryFlags()
|
||||
RootCmd.AddCommand(queryCmd)
|
||||
|
||||
// examples
|
||||
cmd.AddCommand(getKVStoreCmd(logger))
|
||||
addKVStoreFlags()
|
||||
RootCmd.AddCommand(kvstoreCmd)
|
||||
}
|
||||
|
||||
var batchCmd = &cobra.Command{
|
||||
@@ -193,7 +206,7 @@ var deliverTxCmd = &cobra.Command{
|
||||
Short: "deliver a new transaction to the application",
|
||||
Long: "deliver a new transaction to the application",
|
||||
Args: cobra.ExactArgs(1),
|
||||
RunE: cmdFinalizeBlock,
|
||||
RunE: cmdDeliverTx,
|
||||
}
|
||||
|
||||
var checkTxCmd = &cobra.Command{
|
||||
@@ -218,43 +231,25 @@ var versionCmd = &cobra.Command{
|
||||
Long: "print ABCI console version",
|
||||
Args: cobra.ExactArgs(0),
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
fmt.Println(version.ABCIVersion)
|
||||
fmt.Println(version.Version)
|
||||
return nil
|
||||
},
|
||||
}
|
||||
|
||||
func getQueryCmd() *cobra.Command {
|
||||
cmd := &cobra.Command{
|
||||
Use: "query",
|
||||
Short: "query the application state",
|
||||
Long: "query the application state",
|
||||
Args: cobra.ExactArgs(1),
|
||||
RunE: cmdQuery,
|
||||
}
|
||||
|
||||
cmd.PersistentFlags().StringVarP(&flagPath, "path", "", "/store", "path to prefix query with")
|
||||
cmd.PersistentFlags().IntVarP(&flagHeight, "height", "", 0, "height to query the blockchain at")
|
||||
cmd.PersistentFlags().BoolVarP(&flagProve,
|
||||
"prove",
|
||||
"",
|
||||
false,
|
||||
"whether or not to return a merkle proof of the query result")
|
||||
|
||||
return cmd
|
||||
var queryCmd = &cobra.Command{
|
||||
Use: "query",
|
||||
Short: "query the application state",
|
||||
Long: "query the application state",
|
||||
Args: cobra.ExactArgs(1),
|
||||
RunE: cmdQuery,
|
||||
}
|
||||
|
||||
func getKVStoreCmd(logger log.Logger) *cobra.Command {
|
||||
cmd := &cobra.Command{
|
||||
Use: "kvstore",
|
||||
Short: "ABCI demo example",
|
||||
Long: "ABCI demo example",
|
||||
Args: cobra.ExactArgs(0),
|
||||
RunE: makeKVStoreCmd(logger),
|
||||
}
|
||||
|
||||
cmd.PersistentFlags().StringVarP(&flagPersist, "persist", "", "", "directory to use for a database")
|
||||
return cmd
|
||||
|
||||
var kvstoreCmd = &cobra.Command{
|
||||
Use: "kvstore",
|
||||
Short: "ABCI demo example",
|
||||
Long: "ABCI demo example",
|
||||
Args: cobra.ExactArgs(0),
|
||||
RunE: cmdKVStore,
|
||||
}
|
||||
|
||||
var testCmd = &cobra.Command{
|
||||
@@ -300,38 +295,17 @@ func cmdTest(cmd *cobra.Command, args []string) error {
|
||||
[]func() error{
|
||||
func() error { return servertest.InitChain(ctx, client) },
|
||||
func() error { return servertest.Commit(ctx, client, nil) },
|
||||
func() error {
|
||||
return servertest.FinalizeBlock(ctx, client, [][]byte{
|
||||
[]byte("abc"),
|
||||
}, []uint32{
|
||||
code.CodeTypeBadNonce,
|
||||
}, nil)
|
||||
},
|
||||
func() error { return servertest.DeliverTx(ctx, client, []byte("abc"), code.CodeTypeBadNonce, nil) },
|
||||
func() error { return servertest.Commit(ctx, client, nil) },
|
||||
func() error {
|
||||
return servertest.FinalizeBlock(ctx, client, [][]byte{
|
||||
{0x00},
|
||||
}, []uint32{
|
||||
code.CodeTypeOK,
|
||||
}, nil)
|
||||
},
|
||||
func() error { return servertest.DeliverTx(ctx, client, []byte{0x00}, code.CodeTypeOK, nil) },
|
||||
func() error { return servertest.Commit(ctx, client, []byte{0, 0, 0, 0, 0, 0, 0, 1}) },
|
||||
func() error { return servertest.DeliverTx(ctx, client, []byte{0x00}, code.CodeTypeBadNonce, nil) },
|
||||
func() error { return servertest.DeliverTx(ctx, client, []byte{0x01}, code.CodeTypeOK, nil) },
|
||||
func() error { return servertest.DeliverTx(ctx, client, []byte{0x00, 0x02}, code.CodeTypeOK, nil) },
|
||||
func() error { return servertest.DeliverTx(ctx, client, []byte{0x00, 0x03}, code.CodeTypeOK, nil) },
|
||||
func() error { return servertest.DeliverTx(ctx, client, []byte{0x00, 0x00, 0x04}, code.CodeTypeOK, nil) },
|
||||
func() error {
|
||||
return servertest.FinalizeBlock(ctx, client, [][]byte{
|
||||
{0x00},
|
||||
{0x01},
|
||||
{0x00, 0x02},
|
||||
{0x00, 0x03},
|
||||
{0x00, 0x00, 0x04},
|
||||
{0x00, 0x00, 0x06},
|
||||
}, []uint32{
|
||||
code.CodeTypeBadNonce,
|
||||
code.CodeTypeOK,
|
||||
code.CodeTypeOK,
|
||||
code.CodeTypeOK,
|
||||
code.CodeTypeOK,
|
||||
code.CodeTypeBadNonce,
|
||||
}, nil)
|
||||
return servertest.DeliverTx(ctx, client, []byte{0x00, 0x00, 0x06}, code.CodeTypeBadNonce, nil)
|
||||
},
|
||||
func() error { return servertest.Commit(ctx, client, []byte{0, 0, 0, 0, 0, 0, 0, 5}) },
|
||||
})
|
||||
@@ -427,7 +401,7 @@ func muxOnCommands(cmd *cobra.Command, pArgs []string) error {
|
||||
case "commit":
|
||||
return cmdCommit(cmd, actualArgs)
|
||||
case "deliver_tx":
|
||||
return cmdFinalizeBlock(cmd, actualArgs)
|
||||
return cmdDeliverTx(cmd, actualArgs)
|
||||
case "echo":
|
||||
return cmdEcho(cmd, actualArgs)
|
||||
case "info":
|
||||
@@ -451,9 +425,12 @@ func cmdUnimplemented(cmd *cobra.Command, args []string) error {
|
||||
})
|
||||
|
||||
fmt.Println("Available commands:")
|
||||
for _, cmd := range cmd.Commands() {
|
||||
fmt.Printf("%s: %s\n", cmd.Use, cmd.Short)
|
||||
}
|
||||
fmt.Printf("%s: %s\n", echoCmd.Use, echoCmd.Short)
|
||||
fmt.Printf("%s: %s\n", infoCmd.Use, infoCmd.Short)
|
||||
fmt.Printf("%s: %s\n", checkTxCmd.Use, checkTxCmd.Short)
|
||||
fmt.Printf("%s: %s\n", deliverTxCmd.Use, deliverTxCmd.Short)
|
||||
fmt.Printf("%s: %s\n", queryCmd.Use, queryCmd.Short)
|
||||
fmt.Printf("%s: %s\n", commitCmd.Use, commitCmd.Short)
|
||||
fmt.Println("Use \"[command] --help\" for more information about a command.")
|
||||
|
||||
return nil
|
||||
@@ -496,7 +473,7 @@ func cmdInfo(cmd *cobra.Command, args []string) error {
|
||||
const codeBad uint32 = 10
|
||||
|
||||
// Append a new tx to the application
|
||||
func cmdFinalizeBlock(cmd *cobra.Command, args []string) error {
|
||||
func cmdDeliverTx(cmd *cobra.Command, args []string) error {
|
||||
if len(args) == 0 {
|
||||
printResponse(cmd, args, response{
|
||||
Code: codeBad,
|
||||
@@ -508,18 +485,16 @@ func cmdFinalizeBlock(cmd *cobra.Command, args []string) error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
res, err := client.FinalizeBlock(cmd.Context(), types.RequestFinalizeBlock{Txs: [][]byte{txBytes}})
|
||||
res, err := client.DeliverTx(cmd.Context(), types.RequestDeliverTx{Tx: txBytes})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
for _, tx := range res.Txs {
|
||||
printResponse(cmd, args, response{
|
||||
Code: tx.Code,
|
||||
Data: tx.Data,
|
||||
Info: tx.Info,
|
||||
Log: tx.Log,
|
||||
})
|
||||
}
|
||||
printResponse(cmd, args, response{
|
||||
Code: res.Code,
|
||||
Data: res.Data,
|
||||
Info: res.Info,
|
||||
Log: res.Log,
|
||||
})
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -599,34 +574,33 @@ func cmdQuery(cmd *cobra.Command, args []string) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func makeKVStoreCmd(logger log.Logger) func(*cobra.Command, []string) error {
|
||||
return func(cmd *cobra.Command, args []string) error {
|
||||
// Create the application - in memory or persisted to disk
|
||||
var app types.Application
|
||||
if flagPersist == "" {
|
||||
app = kvstore.NewApplication()
|
||||
} else {
|
||||
app = kvstore.NewPersistentKVStoreApplication(logger, flagPersist)
|
||||
}
|
||||
func cmdKVStore(cmd *cobra.Command, args []string) error {
|
||||
logger := log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
|
||||
|
||||
// Start the listener
|
||||
srv, err := server.NewServer(logger.With("module", "abci-server"), flagAddress, flagAbci, app)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
|
||||
defer cancel()
|
||||
|
||||
if err := srv.Start(ctx); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Run forever.
|
||||
<-ctx.Done()
|
||||
return nil
|
||||
// Create the application - in memory or persisted to disk
|
||||
var app types.Application
|
||||
if flagPersist == "" {
|
||||
app = kvstore.NewApplication()
|
||||
} else {
|
||||
app = kvstore.NewPersistentKVStoreApplication(logger, flagPersist)
|
||||
}
|
||||
|
||||
// Start the listener
|
||||
srv, err := server.NewServer(logger.With("module", "abci-server"), flagAddress, flagAbci, app)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
|
||||
defer cancel()
|
||||
|
||||
if err := srv.Start(ctx); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Run forever.
|
||||
<-ctx.Done()
|
||||
return nil
|
||||
}
|
||||
|
||||
//--------------------------------------------------------------------------------
|
||||
|
||||
@@ -6,6 +6,7 @@ import (
|
||||
"math/rand"
|
||||
"net"
|
||||
"os"
|
||||
"reflect"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
@@ -34,7 +35,7 @@ func TestKVStore(t *testing.T) {
|
||||
logger := log.NewTestingLogger(t)
|
||||
|
||||
logger.Info("### Testing KVStore")
|
||||
testBulk(ctx, t, logger, kvstore.NewApplication())
|
||||
testStream(ctx, t, logger, kvstore.NewApplication())
|
||||
}
|
||||
|
||||
func TestBaseApp(t *testing.T) {
|
||||
@@ -43,7 +44,7 @@ func TestBaseApp(t *testing.T) {
|
||||
logger := log.NewTestingLogger(t)
|
||||
|
||||
logger.Info("### Testing BaseApp")
|
||||
testBulk(ctx, t, logger, types.NewBaseApplication())
|
||||
testStream(ctx, t, logger, types.NewBaseApplication())
|
||||
}
|
||||
|
||||
func TestGRPC(t *testing.T) {
|
||||
@@ -56,10 +57,10 @@ func TestGRPC(t *testing.T) {
|
||||
testGRPCSync(ctx, t, logger, types.NewGRPCApplication(types.NewBaseApplication()))
|
||||
}
|
||||
|
||||
func testBulk(ctx context.Context, t *testing.T, logger log.Logger, app types.Application) {
|
||||
func testStream(ctx context.Context, t *testing.T, logger log.Logger, app types.Application) {
|
||||
t.Helper()
|
||||
|
||||
const numDeliverTxs = 700000
|
||||
const numDeliverTxs = 20000
|
||||
socketFile := fmt.Sprintf("test-%08x.sock", rand.Int31n(1<<30))
|
||||
defer os.Remove(socketFile)
|
||||
socket := fmt.Sprintf("unix://%v", socketFile)
|
||||
@@ -76,22 +77,51 @@ func testBulk(ctx context.Context, t *testing.T, logger log.Logger, app types.Ap
|
||||
err = client.Start(ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Construct request
|
||||
rfb := types.RequestFinalizeBlock{Txs: make([][]byte, numDeliverTxs)}
|
||||
done := make(chan struct{})
|
||||
counter := 0
|
||||
client.SetResponseCallback(func(req *types.Request, res *types.Response) {
|
||||
// Process response
|
||||
switch r := res.Value.(type) {
|
||||
case *types.Response_DeliverTx:
|
||||
counter++
|
||||
if r.DeliverTx.Code != code.CodeTypeOK {
|
||||
t.Error("DeliverTx failed with ret_code", r.DeliverTx.Code)
|
||||
}
|
||||
if counter > numDeliverTxs {
|
||||
t.Fatalf("Too many DeliverTx responses. Got %d, expected %d", counter, numDeliverTxs)
|
||||
}
|
||||
if counter == numDeliverTxs {
|
||||
go func() {
|
||||
time.Sleep(time.Second * 1) // Wait for a bit to allow counter overflow
|
||||
close(done)
|
||||
}()
|
||||
return
|
||||
}
|
||||
case *types.Response_Flush:
|
||||
// ignore
|
||||
default:
|
||||
t.Error("Unexpected response type", reflect.TypeOf(res.Value))
|
||||
}
|
||||
})
|
||||
|
||||
// Write requests
|
||||
for counter := 0; counter < numDeliverTxs; counter++ {
|
||||
rfb.Txs[counter] = []byte("test")
|
||||
}
|
||||
// Send bulk request
|
||||
res, err := client.FinalizeBlock(ctx, rfb)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, numDeliverTxs, len(res.Txs), "Number of txs doesn't match")
|
||||
for _, tx := range res.Txs {
|
||||
require.Equal(t, tx.Code, code.CodeTypeOK, "Tx failed")
|
||||
// Send request
|
||||
_, err = client.DeliverTxAsync(ctx, types.RequestDeliverTx{Tx: []byte("test")})
|
||||
require.NoError(t, err)
|
||||
|
||||
// Sometimes send flush messages
|
||||
if counter%128 == 0 {
|
||||
err = client.Flush(ctx)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
}
|
||||
|
||||
// Send final flush message
|
||||
err = client.Flush(ctx)
|
||||
_, err = client.FlushAsync(ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
<-done
|
||||
}
|
||||
|
||||
//-------------------------
|
||||
@@ -103,7 +133,7 @@ func dialerFunc(ctx context.Context, addr string) (net.Conn, error) {
|
||||
|
||||
func testGRPCSync(ctx context.Context, t *testing.T, logger log.Logger, app types.ABCIApplicationServer) {
|
||||
t.Helper()
|
||||
numDeliverTxs := 680000
|
||||
numDeliverTxs := 2000
|
||||
socketFile := fmt.Sprintf("/tmp/test-%08x.sock", rand.Int31n(1<<30))
|
||||
defer os.Remove(socketFile)
|
||||
socket := fmt.Sprintf("unix://%v", socketFile)
|
||||
@@ -112,7 +142,7 @@ func testGRPCSync(ctx context.Context, t *testing.T, logger log.Logger, app type
|
||||
server := abciserver.NewGRPCServer(logger.With("module", "abci-server"), socket, app)
|
||||
|
||||
require.NoError(t, server.Start(ctx))
|
||||
t.Cleanup(server.Wait)
|
||||
t.Cleanup(func() { server.Wait() })
|
||||
|
||||
// Connect to the socket
|
||||
conn, err := grpc.Dial(socket,
|
||||
@@ -129,17 +159,25 @@ func testGRPCSync(ctx context.Context, t *testing.T, logger log.Logger, app type
|
||||
|
||||
client := types.NewABCIApplicationClient(conn)
|
||||
|
||||
// Construct request
|
||||
rfb := types.RequestFinalizeBlock{Txs: make([][]byte, numDeliverTxs)}
|
||||
// Write requests
|
||||
for counter := 0; counter < numDeliverTxs; counter++ {
|
||||
rfb.Txs[counter] = []byte("test")
|
||||
}
|
||||
// Send request
|
||||
response, err := client.DeliverTx(ctx, &types.RequestDeliverTx{Tx: []byte("test")})
|
||||
require.NoError(t, err, "Error in GRPC DeliverTx")
|
||||
|
||||
counter++
|
||||
if response.Code != code.CodeTypeOK {
|
||||
t.Error("DeliverTx failed with ret_code", response.Code)
|
||||
}
|
||||
if counter > numDeliverTxs {
|
||||
t.Fatal("Too many DeliverTx responses")
|
||||
}
|
||||
t.Log("response", counter)
|
||||
if counter == numDeliverTxs {
|
||||
go func() {
|
||||
time.Sleep(time.Second * 1) // Wait for a bit to allow counter overflow
|
||||
}()
|
||||
}
|
||||
|
||||
// Send request
|
||||
response, err := client.FinalizeBlock(ctx, &rfb)
|
||||
require.NoError(t, err, "Error in GRPC FinalizeBlock")
|
||||
require.Equal(t, numDeliverTxs, len(response.Txs), "Number of txs returned via GRPC doesn't match")
|
||||
for _, tx := range response.Txs {
|
||||
require.Equal(t, tx.Code, code.CodeTypeOK, "Tx failed")
|
||||
}
|
||||
}
|
||||
|
||||
@@ -4,7 +4,7 @@ There are two app's here: the KVStoreApplication and the PersistentKVStoreApplic

## KVStoreApplication

The KVStoreApplication is a simple merkle key-value store.
Transactions of the form `key=value` are stored as key-value pairs in the tree.
Transactions without an `=` sign set the value to the key.
The app has no replay protection (other than what the mempool provides).
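As a quick illustration of the `key=value` behavior above, here is a hedged sketch (not part of this diff) that drives the in-memory application directly, in the same style as the kvstore tests further down; the key and value strings are arbitrary:

```go
package example

import (
	"fmt"

	"github.com/tendermint/tendermint/abci/example/kvstore"
	"github.com/tendermint/tendermint/abci/types"
)

// StoreAndQuery delivers a "key=value" tx and reads the value back.
func StoreAndQuery() {
	app := kvstore.NewApplication()

	// "name=satoshi" stores "satoshi" under key "name";
	// a bare "name" tx would store "name" under key "name".
	res := app.DeliverTx(types.RequestDeliverTx{Tx: []byte("name=satoshi")})
	fmt.Println("deliver_tx code:", res.Code)

	app.Commit()

	qres := app.Query(types.RequestQuery{Data: []byte("name")})
	fmt.Println("stored value:", string(qres.Value))
}
```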
@@ -12,7 +12,7 @@ The app has no replay protection (other than what the mempool provides).
## PersistentKVStoreApplication

The PersistentKVStoreApplication wraps the KVStoreApplication
and provides three additional features:
and provides two additional features:

1) persistence of state across app restarts (using Tendermint's ABCI-Handshake mechanism)
2) validator set changes
@@ -27,4 +27,4 @@ Validator set changes are effected using the following transaction format:

where `pubkeyN` is a base64-encoded 32-byte ed25519 key and `powerN` is a new voting power for the validator with `pubkeyN` (possibly a new one).
To remove a validator from the validator set, set power to `0`.
There is no sybil protection against new validators joining.
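To make the transaction format above concrete, here is a hedged sketch (not part of this diff) of assembling such a transaction; the `val:` prefix and the base64 `pubkey!power` layout follow the `execValidatorTx` parser shown later in this document, and the all-zero key is a placeholder:

```go
package example

import (
	"encoding/base64"
	"fmt"
)

// BuildValidatorTx assembles a "val:<base64 pubkey>!<power>" transaction,
// the format parsed by execValidatorTx in the persistent kvstore app.
func BuildValidatorTx(pubkey []byte, power int64) []byte {
	return []byte(fmt.Sprintf("val:%s!%d", base64.StdEncoding.EncodeToString(pubkey), power))
}

// Example inputs: a placeholder 32-byte key; power 0 removes the validator.
var (
	examplePubKey = make([]byte, 32)
	addTx         = BuildValidatorTx(examplePubKey, 10)
	removeTx      = BuildValidatorTx(examplePubKey, 0)
)
```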
@@ -86,13 +86,14 @@ func (app *Application) Info(req types.RequestInfo) (resInfo types.ResponseInfo)
|
||||
}
|
||||
|
||||
// tx is either "key=value" or just arbitrary bytes
|
||||
func (app *Application) HandleTx(tx []byte) *types.ResponseDeliverTx {
|
||||
func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
|
||||
var key, value string
|
||||
parts := bytes.Split(tx, []byte("="))
|
||||
|
||||
parts := bytes.Split(req.Tx, []byte("="))
|
||||
if len(parts) == 2 {
|
||||
key, value = string(parts[0]), string(parts[1])
|
||||
} else {
|
||||
key, value = string(tx), string(tx)
|
||||
key, value = string(req.Tx), string(req.Tx)
|
||||
}
|
||||
|
||||
err := app.state.db.Set(prefixKey([]byte(key)), []byte(value))
|
||||
@@ -113,15 +114,7 @@ func (app *Application) HandleTx(tx []byte) *types.ResponseDeliverTx {
|
||||
},
|
||||
}
|
||||
|
||||
return &types.ResponseDeliverTx{Code: code.CodeTypeOK, Events: events}
|
||||
}
|
||||
|
||||
func (app *Application) FinalizeBlock(req types.RequestFinalizeBlock) types.ResponseFinalizeBlock {
|
||||
txs := make([]*types.ResponseDeliverTx, len(req.Txs))
|
||||
for i, tx := range req.Txs {
|
||||
txs[i] = app.HandleTx(tx)
|
||||
}
|
||||
return types.ResponseFinalizeBlock{Txs: txs}
|
||||
return types.ResponseDeliverTx{Code: code.CodeTypeOK, Events: events}
|
||||
}
|
||||
|
||||
func (app *Application) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
|
||||
@@ -178,9 +171,3 @@ func (app *Application) Query(reqQuery types.RequestQuery) (resQuery types.Respo
|
||||
|
||||
return resQuery
|
||||
}
|
||||
|
||||
func (app *Application) PrepareProposal(
|
||||
req types.RequestPrepareProposal) types.ResponsePrepareProposal {
|
||||
return types.ResponsePrepareProposal{
|
||||
BlockData: req.BlockData}
|
||||
}
|
||||
|
||||
@@ -3,6 +3,7 @@ package kvstore
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"sort"
|
||||
"testing"
|
||||
|
||||
@@ -24,14 +25,12 @@ const (
|
||||
)
|
||||
|
||||
func testKVStore(t *testing.T, app types.Application, tx []byte, key, value string) {
|
||||
req := types.RequestFinalizeBlock{Txs: [][]byte{tx}}
|
||||
ar := app.FinalizeBlock(req)
|
||||
require.Equal(t, 1, len(ar.Txs))
|
||||
require.False(t, ar.Txs[0].IsErr())
|
||||
req := types.RequestDeliverTx{Tx: tx}
|
||||
ar := app.DeliverTx(req)
|
||||
require.False(t, ar.IsErr(), ar)
|
||||
// repeating tx doesn't raise error
|
||||
ar = app.FinalizeBlock(req)
|
||||
require.Equal(t, 1, len(ar.Txs))
|
||||
require.False(t, ar.Txs[0].IsErr())
|
||||
ar = app.DeliverTx(req)
|
||||
require.False(t, ar.IsErr(), ar)
|
||||
// commit
|
||||
app.Commit()
|
||||
|
||||
@@ -73,7 +72,10 @@ func TestKVStoreKV(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestPersistentKVStoreKV(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
logger := log.NewTestingLogger(t)
|
||||
|
||||
kvstore := NewPersistentKVStoreApplication(logger, dir)
|
||||
@@ -88,7 +90,10 @@ func TestPersistentKVStoreKV(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestPersistentKVStoreInfo(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
logger := log.NewTestingLogger(t)
|
||||
|
||||
kvstore := NewPersistentKVStoreApplication(logger, dir)
|
||||
@@ -106,7 +111,8 @@ func TestPersistentKVStoreInfo(t *testing.T) {
|
||||
header := tmproto.Header{
|
||||
Height: height,
|
||||
}
|
||||
kvstore.FinalizeBlock(types.RequestFinalizeBlock{Hash: hash, Header: header, Height: height})
|
||||
kvstore.BeginBlock(types.RequestBeginBlock{Hash: hash, Header: header})
|
||||
kvstore.EndBlock(types.RequestEndBlock{Height: header.Height})
|
||||
kvstore.Commit()
|
||||
|
||||
resInfo = kvstore.Info(types.RequestInfo{})
|
||||
@@ -118,7 +124,10 @@ func TestPersistentKVStoreInfo(t *testing.T) {
|
||||
|
||||
// add a validator, remove a validator, update a validator
|
||||
func TestValUpdates(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
logger := log.NewTestingLogger(t)
|
||||
|
||||
kvstore := NewPersistentKVStoreApplication(logger, dir)
|
||||
@@ -195,16 +204,16 @@ func makeApplyBlock(
|
||||
Height: height,
|
||||
}
|
||||
|
||||
resFinalizeBlock := kvstore.FinalizeBlock(types.RequestFinalizeBlock{
|
||||
Hash: hash,
|
||||
Header: header,
|
||||
Height: height,
|
||||
Txs: txs,
|
||||
})
|
||||
|
||||
kvstore.BeginBlock(types.RequestBeginBlock{Hash: hash, Header: header})
|
||||
for _, tx := range txs {
|
||||
if r := kvstore.DeliverTx(types.RequestDeliverTx{Tx: tx}); r.IsErr() {
|
||||
t.Fatal(r)
|
||||
}
|
||||
}
|
||||
resEndBlock := kvstore.EndBlock(types.RequestEndBlock{Height: header.Height})
|
||||
kvstore.Commit()
|
||||
|
||||
valsEqual(t, diff, resFinalizeBlock.ValidatorUpdates)
|
||||
valsEqual(t, diff, resEndBlock.ValidatorUpdates)
|
||||
|
||||
}
|
||||
|
||||
@@ -321,15 +330,13 @@ func runClientTests(ctx context.Context, t *testing.T, client abciclient.Client)
|
||||
}
|
||||
|
||||
func testClient(ctx context.Context, t *testing.T, app abciclient.Client, tx []byte, key, value string) {
|
||||
ar, err := app.FinalizeBlock(ctx, types.RequestFinalizeBlock{Txs: [][]byte{tx}})
|
||||
ar, err := app.DeliverTx(ctx, types.RequestDeliverTx{Tx: tx})
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, 1, len(ar.Txs))
|
||||
require.False(t, ar.Txs[0].IsErr())
|
||||
// repeating FinalizeBlock doesn't raise error
|
||||
ar, err = app.FinalizeBlock(ctx, types.RequestFinalizeBlock{Txs: [][]byte{tx}})
|
||||
require.False(t, ar.IsErr(), ar)
|
||||
// repeating tx doesn't raise error
|
||||
ar, err = app.DeliverTx(ctx, types.RequestDeliverTx{Tx: tx})
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, 1, len(ar.Txs))
|
||||
require.False(t, ar.Txs[0].IsErr())
|
||||
require.False(t, ar.IsErr(), ar)
|
||||
// commit
|
||||
_, err = app.Commit(ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
@@ -14,7 +14,6 @@ import (
|
||||
"github.com/tendermint/tendermint/crypto/encoding"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
cryptoproto "github.com/tendermint/tendermint/proto/tendermint/crypto"
|
||||
ptypes "github.com/tendermint/tendermint/proto/tendermint/types"
|
||||
)
|
||||
|
||||
const (
|
||||
@@ -64,21 +63,17 @@ func (app *PersistentKVStoreApplication) Info(req types.RequestInfo) types.Respo
|
||||
}
|
||||
|
||||
// tx is either "val:pubkey!power" or "key=value" or just arbitrary bytes
|
||||
func (app *PersistentKVStoreApplication) HandleTx(tx []byte) *types.ResponseDeliverTx {
|
||||
func (app *PersistentKVStoreApplication) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
|
||||
// if it starts with "val:", update the validator set
|
||||
// format is "val:pubkey!power"
|
||||
if isValidatorTx(tx) {
|
||||
if isValidatorTx(req.Tx) {
|
||||
// update validators in the merkle tree
|
||||
// and in app.ValUpdates
|
||||
return app.execValidatorTx(tx)
|
||||
}
|
||||
|
||||
if isPrepareTx(tx) {
|
||||
return app.execPrepareTx(tx)
|
||||
return app.execValidatorTx(req.Tx)
|
||||
}
|
||||
|
||||
// otherwise, update the key-value store
|
||||
return app.app.HandleTx(tx)
|
||||
return app.app.DeliverTx(req)
|
||||
}
|
||||
|
||||
func (app *PersistentKVStoreApplication) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
|
||||
@@ -121,9 +116,7 @@ func (app *PersistentKVStoreApplication) InitChain(req types.RequestInitChain) t
|
||||
}
|
||||
|
||||
// Track the block hash and header information
|
||||
// Execute transactions
|
||||
// Update the validator set
|
||||
func (app *PersistentKVStoreApplication) FinalizeBlock(req types.RequestFinalizeBlock) types.ResponseFinalizeBlock {
|
||||
func (app *PersistentKVStoreApplication) BeginBlock(req types.RequestBeginBlock) types.ResponseBeginBlock {
|
||||
// reset valset changes
|
||||
app.ValUpdates = make([]types.ValidatorUpdate, 0)
|
||||
|
||||
@@ -145,12 +138,12 @@ func (app *PersistentKVStoreApplication) FinalizeBlock(req types.RequestFinalize
|
||||
}
|
||||
}
|
||||
|
||||
respTxs := make([]*types.ResponseDeliverTx, len(req.Txs))
|
||||
for i, tx := range req.Txs {
|
||||
respTxs[i] = app.HandleTx(tx)
|
||||
}
|
||||
return types.ResponseBeginBlock{}
|
||||
}
|
||||
|
||||
return types.ResponseFinalizeBlock{Txs: respTxs, ValidatorUpdates: app.ValUpdates}
|
||||
// Update the validator set
|
||||
func (app *PersistentKVStoreApplication) EndBlock(req types.RequestEndBlock) types.ResponseEndBlock {
|
||||
return types.ResponseEndBlock{ValidatorUpdates: app.ValUpdates}
|
||||
}
|
||||
|
||||
func (app *PersistentKVStoreApplication) ListSnapshots(
|
||||
@@ -173,34 +166,6 @@ func (app *PersistentKVStoreApplication) ApplySnapshotChunk(
|
||||
return types.ResponseApplySnapshotChunk{Result: types.ResponseApplySnapshotChunk_ABORT}
|
||||
}
|
||||
|
||||
func (app *PersistentKVStoreApplication) ExtendVote(
|
||||
req types.RequestExtendVote) types.ResponseExtendVote {
|
||||
return types.ResponseExtendVote{
|
||||
VoteExtension: ConstructVoteExtension(req.Vote.ValidatorAddress),
|
||||
}
|
||||
}
|
||||
|
||||
func (app *PersistentKVStoreApplication) VerifyVoteExtension(
|
||||
req types.RequestVerifyVoteExtension) types.ResponseVerifyVoteExtension {
|
||||
return types.RespondVerifyVoteExtension(
|
||||
app.verifyExtension(req.Vote.ValidatorAddress, req.Vote.VoteExtension))
|
||||
}
|
||||
|
||||
func (app *PersistentKVStoreApplication) PrepareProposal(
|
||||
req types.RequestPrepareProposal) types.ResponsePrepareProposal {
|
||||
return types.ResponsePrepareProposal{BlockData: app.substPrepareTx(req.BlockData)}
|
||||
}
|
||||
|
||||
func (app *PersistentKVStoreApplication) ProcessProposal(
|
||||
req types.RequestProcessProposal) types.ResponseProcessProposal {
|
||||
for _, tx := range req.Txs {
|
||||
if len(tx) == 0 {
|
||||
return types.ResponseProcessProposal{Result: types.ResponseProcessProposal_REJECT}
|
||||
}
|
||||
}
|
||||
return types.ResponseProcessProposal{Result: types.ResponseProcessProposal_ACCEPT}
|
||||
}
|
||||
|
||||
//---------------------------------------------
|
||||
// update validators
|
||||
|
||||
@@ -240,13 +205,13 @@ func isValidatorTx(tx []byte) bool {
|
||||
|
||||
// format is "val:pubkey!power"
|
||||
// pubkey is a base64-encoded 32-byte ed25519 key
|
||||
func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) *types.ResponseDeliverTx {
|
||||
func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) types.ResponseDeliverTx {
|
||||
tx = tx[len(ValidatorSetChangePrefix):]
|
||||
|
||||
// get the pubkey and power
|
||||
pubKeyAndPower := strings.Split(string(tx), "!")
|
||||
if len(pubKeyAndPower) != 2 {
|
||||
return &types.ResponseDeliverTx{
|
||||
return types.ResponseDeliverTx{
|
||||
Code: code.CodeTypeEncodingError,
|
||||
Log: fmt.Sprintf("Expected 'pubkey!power'. Got %v", pubKeyAndPower)}
|
||||
}
|
||||
@@ -255,7 +220,7 @@ func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) *types.Respo
|
||||
// decode the pubkey
|
||||
pubkey, err := base64.StdEncoding.DecodeString(pubkeyS)
|
||||
if err != nil {
|
||||
return &types.ResponseDeliverTx{
|
||||
return types.ResponseDeliverTx{
|
||||
Code: code.CodeTypeEncodingError,
|
||||
Log: fmt.Sprintf("Pubkey (%s) is invalid base64", pubkeyS)}
|
||||
}
|
||||
@@ -263,7 +228,7 @@ func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) *types.Respo
|
||||
// decode the power
|
||||
power, err := strconv.ParseInt(powerS, 10, 64)
|
||||
if err != nil {
|
||||
return &types.ResponseDeliverTx{
|
||||
return types.ResponseDeliverTx{
|
||||
Code: code.CodeTypeEncodingError,
|
||||
Log: fmt.Sprintf("Power (%s) is not an int", powerS)}
|
||||
}
|
||||
@@ -273,7 +238,7 @@ func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) *types.Respo
|
||||
}
|
||||
|
||||
// add, update, or remove a validator
|
||||
func (app *PersistentKVStoreApplication) updateValidator(v types.ValidatorUpdate) *types.ResponseDeliverTx {
|
||||
func (app *PersistentKVStoreApplication) updateValidator(v types.ValidatorUpdate) types.ResponseDeliverTx {
|
||||
pubkey, err := encoding.PubKeyFromProto(v.PubKey)
|
||||
if err != nil {
|
||||
panic(fmt.Errorf("can't decode public key: %w", err))
|
||||
@@ -288,7 +253,7 @@ func (app *PersistentKVStoreApplication) updateValidator(v types.ValidatorUpdate
|
||||
}
|
||||
if !hasKey {
|
||||
pubStr := base64.StdEncoding.EncodeToString(pubkey.Bytes())
|
||||
return &types.ResponseDeliverTx{
|
||||
return types.ResponseDeliverTx{
|
||||
Code: code.CodeTypeUnauthorized,
|
||||
Log: fmt.Sprintf("Cannot remove non-existent validator %s", pubStr)}
|
||||
}
|
||||
@@ -300,7 +265,7 @@ func (app *PersistentKVStoreApplication) updateValidator(v types.ValidatorUpdate
|
||||
// add or update validator
|
||||
value := bytes.NewBuffer(make([]byte, 0))
|
||||
if err := types.WriteMessage(&v, value); err != nil {
|
||||
return &types.ResponseDeliverTx{
|
||||
return types.ResponseDeliverTx{
|
||||
Code: code.CodeTypeEncodingError,
|
||||
Log: fmt.Sprintf("error encoding validator: %v", err)}
|
||||
}
|
||||
@@ -313,55 +278,5 @@ func (app *PersistentKVStoreApplication) updateValidator(v types.ValidatorUpdate
|
||||
// we only update the changes array if we successfully updated the tree
|
||||
app.ValUpdates = append(app.ValUpdates, v)
|
||||
|
||||
return &types.ResponseDeliverTx{Code: code.CodeTypeOK}
|
||||
}
|
||||
|
||||
// -----------------------------
|
||||
|
||||
const PreparePrefix = "prepare"
|
||||
|
||||
func isPrepareTx(tx []byte) bool {
|
||||
return strings.HasPrefix(string(tx), PreparePrefix)
|
||||
}
|
||||
|
||||
// execPrepareTx is a no-op. The tx data is treated as a placeholder
// and is substituted during PrepareProposal.
func (app *PersistentKVStoreApplication) execPrepareTx(tx []byte) *types.ResponseDeliverTx {
|
||||
// noop
|
||||
return &types.ResponseDeliverTx{}
|
||||
}
|
||||
|
||||
// substPrepareTx substitutes every prepare tx in the block data
// with a zeroed placeholder (it could be any arbitrary string).
func (app *PersistentKVStoreApplication) substPrepareTx(blockData [][]byte) [][]byte {
|
||||
// TODO: this mechanism will change with the current spec of PrepareProposal
|
||||
// We now have a special type for marking a tx as changed
|
||||
for i, tx := range blockData {
|
||||
if isPrepareTx(tx) {
|
||||
blockData[i] = make([]byte, len(tx))
|
||||
}
|
||||
}
|
||||
|
||||
return blockData
|
||||
}
|
||||
|
||||
func ConstructVoteExtension(valAddr []byte) *ptypes.VoteExtension {
|
||||
return &ptypes.VoteExtension{
|
||||
AppDataToSign: valAddr,
|
||||
AppDataSelfAuthenticating: valAddr,
|
||||
}
|
||||
}
|
||||
|
||||
func (app *PersistentKVStoreApplication) verifyExtension(valAddr []byte, ext *ptypes.VoteExtension) bool {
|
||||
if ext == nil {
|
||||
return false
|
||||
}
|
||||
canonical := ConstructVoteExtension(valAddr)
|
||||
if !bytes.Equal(canonical.AppDataToSign, ext.AppDataToSign) {
|
||||
return false
|
||||
}
|
||||
if !bytes.Equal(canonical.AppDataSelfAuthenticating, ext.AppDataSelfAuthenticating) {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
return types.ResponseDeliverTx{Code: code.CodeTypeOK}
|
||||
}
|
||||
|
||||
@@ -213,6 +213,9 @@ func (s *SocketServer) handleRequest(req *types.Request, responses chan<- *types
case *types.Request_Info:
res := s.app.Info(*r.Info)
responses <- types.ToResponseInfo(res)
case *types.Request_DeliverTx:
res := s.app.DeliverTx(*r.DeliverTx)
responses <- types.ToResponseDeliverTx(res)
case *types.Request_CheckTx:
res := s.app.CheckTx(*r.CheckTx)
responses <- types.ToResponseCheckTx(res)
@@ -225,33 +228,24 @@ func (s *SocketServer) handleRequest(req *types.Request, responses chan<- *types
case *types.Request_InitChain:
res := s.app.InitChain(*r.InitChain)
responses <- types.ToResponseInitChain(res)
case *types.Request_BeginBlock:
res := s.app.BeginBlock(*r.BeginBlock)
responses <- types.ToResponseBeginBlock(res)
case *types.Request_EndBlock:
res := s.app.EndBlock(*r.EndBlock)
responses <- types.ToResponseEndBlock(res)
case *types.Request_ListSnapshots:
res := s.app.ListSnapshots(*r.ListSnapshots)
responses <- types.ToResponseListSnapshots(res)
case *types.Request_OfferSnapshot:
res := s.app.OfferSnapshot(*r.OfferSnapshot)
responses <- types.ToResponseOfferSnapshot(res)
case *types.Request_PrepareProposal:
res := s.app.PrepareProposal(*r.PrepareProposal)
responses <- types.ToResponsePrepareProposal(res)
case *types.Request_ProcessProposal:
res := s.app.ProcessProposal(*r.ProcessProposal)
responses <- types.ToResponseProcessProposal(res)
case *types.Request_LoadSnapshotChunk:
res := s.app.LoadSnapshotChunk(*r.LoadSnapshotChunk)
responses <- types.ToResponseLoadSnapshotChunk(res)
case *types.Request_ApplySnapshotChunk:
res := s.app.ApplySnapshotChunk(*r.ApplySnapshotChunk)
responses <- types.ToResponseApplySnapshotChunk(res)
case *types.Request_ExtendVote:
res := s.app.ExtendVote(*r.ExtendVote)
responses <- types.ToResponseExtendVote(res)
case *types.Request_VerifyVoteExtension:
res := s.app.VerifyVoteExtension(*r.VerifyVoteExtension)
responses <- types.ToResponseVerifyVoteExtension(res)
case *types.Request_FinalizeBlock:
res := s.app.FinalizeBlock(*r.FinalizeBlock)
responses <- types.ToResponseFinalizeBlock(res)
default:
responses <- types.ToResponseException("Unknown request")
}

@@ -49,24 +49,22 @@ func Commit(ctx context.Context, client abciclient.Client, hashExp []byte) error
return nil
}

func FinalizeBlock(ctx context.Context, client abciclient.Client, txBytes [][]byte, codeExp []uint32, dataExp []byte) error {
res, _ := client.FinalizeBlock(ctx, types.RequestFinalizeBlock{Txs: txBytes})
for i, tx := range res.Txs {
code, data, log := tx.Code, tx.Data, tx.Log
if code != codeExp[i] {
fmt.Println("Failed test: FinalizeBlock")
fmt.Printf("FinalizeBlock response code was unexpected. Got %v expected %v. Log: %v\n",
code, codeExp, log)
return errors.New("FinalizeBlock error")
}
if !bytes.Equal(data, dataExp) {
fmt.Println("Failed test: FinalizeBlock")
fmt.Printf("FinalizeBlock response data was unexpected. Got %X expected %X\n",
data, dataExp)
return errors.New("FinalizeBlock error")
}
func DeliverTx(ctx context.Context, client abciclient.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
res, _ := client.DeliverTx(ctx, types.RequestDeliverTx{Tx: txBytes})
code, data, log := res.Code, res.Data, res.Log
if code != codeExp {
fmt.Println("Failed test: DeliverTx")
fmt.Printf("DeliverTx response code was unexpected. Got %v expected %v. Log: %v\n",
code, codeExp, log)
return errors.New("deliverTx error")
}
fmt.Println("Passed test: FinalizeBlock")
if !bytes.Equal(data, dataExp) {
fmt.Println("Failed test: DeliverTx")
fmt.Printf("DeliverTx response data was unexpected. Got %X expected %X\n",
data, dataExp)
return errors.New("deliverTx error")
}
fmt.Println("Passed test: DeliverTx")
return nil
}

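For orientation, a hedged example of calling the FinalizeBlock test helper above from a test; the connected abciclient.Client and the expectation of a zero (OK) status code with empty result data are assumptions, not part of the diff.

// Sketch only: deliver one tx through FinalizeBlock and expect code 0 and empty data.
txs := [][]byte{[]byte("key=value")}
if err := FinalizeBlock(ctx, client, txs, []uint32{0}, nil); err != nil {
	t.Fatal(err)
}
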
@@ -17,17 +17,11 @@ type Application interface {
CheckTx(RequestCheckTx) ResponseCheckTx // Validate a tx for the mempool

// Consensus Connection
InitChain(RequestInitChain) ResponseInitChain // Initialize blockchain w validators/other info from TendermintCore
PrepareProposal(RequestPrepareProposal) ResponsePrepareProposal
ProcessProposal(RequestProcessProposal) ResponseProcessProposal
// Commit the state and return the application Merkle root hash
Commit() ResponseCommit
// Create application specific vote extension
ExtendVote(RequestExtendVote) ResponseExtendVote
// Verify application's vote extension data
VerifyVoteExtension(RequestVerifyVoteExtension) ResponseVerifyVoteExtension
// Deliver the decided block with its txs to the Application
FinalizeBlock(RequestFinalizeBlock) ResponseFinalizeBlock
InitChain(RequestInitChain) ResponseInitChain // Initialize blockchain w validators/other info from TendermintCore
BeginBlock(RequestBeginBlock) ResponseBeginBlock // Signals the beginning of a block
DeliverTx(RequestDeliverTx) ResponseDeliverTx // Deliver a tx for full processing
EndBlock(RequestEndBlock) ResponseEndBlock // Signals the end of a block, returns changes to the validator set
Commit() ResponseCommit // Commit the state and return the application Merkle root hash

// State Sync Connection
ListSnapshots(RequestListSnapshots) ResponseListSnapshots // List available snapshots
@@ -52,6 +46,10 @@ func (BaseApplication) Info(req RequestInfo) ResponseInfo {
return ResponseInfo{}
}

func (BaseApplication) DeliverTx(req RequestDeliverTx) ResponseDeliverTx {
return ResponseDeliverTx{Code: CodeTypeOK}
}

func (BaseApplication) CheckTx(req RequestCheckTx) ResponseCheckTx {
return ResponseCheckTx{Code: CodeTypeOK}
}
@@ -60,16 +58,6 @@ func (BaseApplication) Commit() ResponseCommit {
return ResponseCommit{}
}

func (BaseApplication) ExtendVote(req RequestExtendVote) ResponseExtendVote {
return ResponseExtendVote{}
}

func (BaseApplication) VerifyVoteExtension(req RequestVerifyVoteExtension) ResponseVerifyVoteExtension {
return ResponseVerifyVoteExtension{
Result: ResponseVerifyVoteExtension_ACCEPT,
}
}

func (BaseApplication) Query(req RequestQuery) ResponseQuery {
return ResponseQuery{Code: CodeTypeOK}
}
@@ -78,6 +66,14 @@ func (BaseApplication) InitChain(req RequestInitChain) ResponseInitChain {
return ResponseInitChain{}
}

func (BaseApplication) BeginBlock(req RequestBeginBlock) ResponseBeginBlock {
return ResponseBeginBlock{}
}

func (BaseApplication) EndBlock(req RequestEndBlock) ResponseEndBlock {
return ResponseEndBlock{}
}

func (BaseApplication) ListSnapshots(req RequestListSnapshots) ResponseListSnapshots {
return ResponseListSnapshots{}
}
@@ -94,24 +90,6 @@ func (BaseApplication) ApplySnapshotChunk(req RequestApplySnapshotChunk) Respons
return ResponseApplySnapshotChunk{}
}

func (BaseApplication) PrepareProposal(req RequestPrepareProposal) ResponsePrepareProposal {
return ResponsePrepareProposal{}
}

func (BaseApplication) ProcessProposal(req RequestProcessProposal) ResponseProcessProposal {
return ResponseProcessProposal{}
}

func (BaseApplication) FinalizeBlock(req RequestFinalizeBlock) ResponseFinalizeBlock {
txs := make([]*ResponseDeliverTx, len(req.Txs))
for i := range req.Txs {
txs[i] = &ResponseDeliverTx{Code: CodeTypeOK}
}
return ResponseFinalizeBlock{
Txs: txs,
}
}

//-------------------------------------------------------

// GRPCApplication is a GRPC wrapper for Application
@@ -136,6 +114,11 @@ func (app *GRPCApplication) Info(ctx context.Context, req *RequestInfo) (*Respon
return &res, nil
}

func (app *GRPCApplication) DeliverTx(ctx context.Context, req *RequestDeliverTx) (*ResponseDeliverTx, error) {
res := app.app.DeliverTx(*req)
return &res, nil
}

func (app *GRPCApplication) CheckTx(ctx context.Context, req *RequestCheckTx) (*ResponseCheckTx, error) {
res := app.app.CheckTx(*req)
return &res, nil
@@ -156,6 +139,16 @@ func (app *GRPCApplication) InitChain(ctx context.Context, req *RequestInitChain
return &res, nil
}

func (app *GRPCApplication) BeginBlock(ctx context.Context, req *RequestBeginBlock) (*ResponseBeginBlock, error) {
res := app.app.BeginBlock(*req)
return &res, nil
}

func (app *GRPCApplication) EndBlock(ctx context.Context, req *RequestEndBlock) (*ResponseEndBlock, error) {
res := app.app.EndBlock(*req)
return &res, nil
}

func (app *GRPCApplication) ListSnapshots(
ctx context.Context, req *RequestListSnapshots) (*ResponseListSnapshots, error) {
res := app.app.ListSnapshots(*req)
@@ -179,33 +172,3 @@ func (app *GRPCApplication) ApplySnapshotChunk(
res := app.app.ApplySnapshotChunk(*req)
return &res, nil
}

func (app *GRPCApplication) ExtendVote(
ctx context.Context, req *RequestExtendVote) (*ResponseExtendVote, error) {
res := app.app.ExtendVote(*req)
return &res, nil
}

func (app *GRPCApplication) VerifyVoteExtension(
ctx context.Context, req *RequestVerifyVoteExtension) (*ResponseVerifyVoteExtension, error) {
res := app.app.VerifyVoteExtension(*req)
return &res, nil
}

func (app *GRPCApplication) PrepareProposal(
ctx context.Context, req *RequestPrepareProposal) (*ResponsePrepareProposal, error) {
res := app.app.PrepareProposal(*req)
return &res, nil
}

func (app *GRPCApplication) ProcessProposal(
ctx context.Context, req *RequestProcessProposal) (*ResponseProcessProposal, error) {
res := app.app.ProcessProposal(*req)
return &res, nil
}

func (app *GRPCApplication) FinalizeBlock(
ctx context.Context, req *RequestFinalizeBlock) (*ResponseFinalizeBlock, error) {
res := app.app.FinalizeBlock(*req)
return &res, nil
}

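The BaseApplication defaults above let an application implement only the methods it cares about. A minimal sketch (not part of the diff) of an app that embeds BaseApplication and overrides FinalizeBlock; the counter logic is purely illustrative.

// Sketch only: BaseApplication supplies no-op defaults for everything not overridden.
type CounterApp struct {
	types.BaseApplication

	count int
}

func (app *CounterApp) FinalizeBlock(req types.RequestFinalizeBlock) types.ResponseFinalizeBlock {
	txs := make([]*types.ResponseDeliverTx, len(req.Txs))
	for i := range req.Txs {
		app.count++
		txs[i] = &types.ResponseDeliverTx{Code: types.CodeTypeOK}
	}
	return types.ResponseFinalizeBlock{Txs: txs}
}
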
@@ -4,7 +4,6 @@ import (
"io"

"github.com/gogo/protobuf/proto"

"github.com/tendermint/tendermint/internal/libs/protoio"
)

@@ -45,6 +44,12 @@ func ToRequestInfo(req RequestInfo) *Request {
}
}

func ToRequestDeliverTx(req RequestDeliverTx) *Request {
return &Request{
Value: &Request_DeliverTx{&req},
}
}

func ToRequestCheckTx(req RequestCheckTx) *Request {
return &Request{
Value: &Request_CheckTx{&req},
@@ -69,6 +74,18 @@ func ToRequestInitChain(req RequestInitChain) *Request {
}
}

func ToRequestBeginBlock(req RequestBeginBlock) *Request {
return &Request{
Value: &Request_BeginBlock{&req},
}
}

func ToRequestEndBlock(req RequestEndBlock) *Request {
return &Request{
Value: &Request_EndBlock{&req},
}
}

func ToRequestListSnapshots(req RequestListSnapshots) *Request {
return &Request{
Value: &Request_ListSnapshots{&req},
@@ -93,36 +110,6 @@ func ToRequestApplySnapshotChunk(req RequestApplySnapshotChunk) *Request {
}
}

func ToRequestExtendVote(req RequestExtendVote) *Request {
return &Request{
Value: &Request_ExtendVote{&req},
}
}

func ToRequestVerifyVoteExtension(req RequestVerifyVoteExtension) *Request {
return &Request{
Value: &Request_VerifyVoteExtension{&req},
}
}

func ToRequestPrepareProposal(req RequestPrepareProposal) *Request {
return &Request{
Value: &Request_PrepareProposal{&req},
}
}

func ToRequestProcessProposal(req RequestProcessProposal) *Request {
return &Request{
Value: &Request_ProcessProposal{&req},
}
}

func ToRequestFinalizeBlock(req RequestFinalizeBlock) *Request {
return &Request{
Value: &Request_FinalizeBlock{&req},
}
}

//----------------------------------------

func ToResponseException(errStr string) *Response {
@@ -148,6 +135,11 @@ func ToResponseInfo(res ResponseInfo) *Response {
Value: &Response_Info{&res},
}
}
func ToResponseDeliverTx(res ResponseDeliverTx) *Response {
return &Response{
Value: &Response_DeliverTx{&res},
}
}

func ToResponseCheckTx(res ResponseCheckTx) *Response {
return &Response{
@@ -173,6 +165,18 @@ func ToResponseInitChain(res ResponseInitChain) *Response {
}
}

func ToResponseBeginBlock(res ResponseBeginBlock) *Response {
return &Response{
Value: &Response_BeginBlock{&res},
}
}

func ToResponseEndBlock(res ResponseEndBlock) *Response {
return &Response{
Value: &Response_EndBlock{&res},
}
}

func ToResponseListSnapshots(res ResponseListSnapshots) *Response {
return &Response{
Value: &Response_ListSnapshots{&res},
@@ -196,33 +200,3 @@ func ToResponseApplySnapshotChunk(res ResponseApplySnapshotChunk) *Response {
Value: &Response_ApplySnapshotChunk{&res},
}
}

func ToResponseExtendVote(res ResponseExtendVote) *Response {
return &Response{
Value: &Response_ExtendVote{&res},
}
}

func ToResponseVerifyVoteExtension(res ResponseVerifyVoteExtension) *Response {
return &Response{
Value: &Response_VerifyVoteExtension{&res},
}
}

func ToResponsePrepareProposal(res ResponsePrepareProposal) *Response {
return &Response{
Value: &Response_PrepareProposal{&res},
}
}

func ToResponseProcessProposal(res ResponseProcessProposal) *Response {
return &Response{
Value: &Response_ProcessProposal{&res},
}
}

func ToResponseFinalizeBlock(res ResponseFinalizeBlock) *Response {
return &Response{
Value: &Response_FinalizeBlock{&res},
}
}

@@ -5,8 +5,6 @@ import (
"encoding/json"

"github.com/gogo/protobuf/jsonpb"

types "github.com/tendermint/tendermint/proto/tendermint/types"
)

const (
@@ -43,26 +41,6 @@ func (r ResponseQuery) IsErr() bool {
return r.Code != CodeTypeOK
}

// IsUnknown returns true if Code is Unknown
func (r ResponseVerifyVoteExtension) IsUnknown() bool {
return r.Result == ResponseVerifyVoteExtension_UNKNOWN
}

// IsOK returns true if Code is OK
func (r ResponseVerifyVoteExtension) IsOK() bool {
return r.Result == ResponseVerifyVoteExtension_ACCEPT
}

// IsErr returns true if Code is something other than OK.
func (r ResponseVerifyVoteExtension) IsErr() bool {
return r.Result != ResponseVerifyVoteExtension_ACCEPT
}

// IsOK returns true if Code is OK
func (r ResponseProcessProposal) IsOK() bool {
return r.Result == ResponseProcessProposal_ACCEPT
}

//---------------------------------------------------------------------------
// override JSON marshaling so we emit defaults (ie. disable omitempty)

@@ -140,25 +118,3 @@ var _ jsonRoundTripper = (*ResponseDeliverTx)(nil)
var _ jsonRoundTripper = (*ResponseCheckTx)(nil)

var _ jsonRoundTripper = (*EventAttribute)(nil)

// -----------------------------------------------
// construct Result data

func RespondExtendVote(appDataToSign, appDataSelfAuthenticating []byte) ResponseExtendVote {
return ResponseExtendVote{
VoteExtension: &types.VoteExtension{
AppDataToSign: appDataToSign,
AppDataSelfAuthenticating: appDataSelfAuthenticating,
},
}
}

func RespondVerifyVoteExtension(ok bool) ResponseVerifyVoteExtension {
result := ResponseVerifyVoteExtension_REJECT
if ok {
result = ResponseVerifyVoteExtension_ACCEPT
}
return ResponseVerifyVoteExtension{
Result: result,
}
}

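A short sketch of how the result helpers and predicates above compose on the calling side; the boolean extensionLooksValid is a hypothetical stand-in for whatever application-specific check produced it.

// Sketch only: RespondVerifyVoteExtension maps a bool onto ACCEPT/REJECT,
// and the predicates make the outcome easy to branch on.
resp := types.RespondVerifyVoteExtension(extensionLooksValid)
if resp.IsErr() {
	// extension rejected; resp.IsOK() would report acceptance instead
}
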
File diff suppressed because it is too large
9
abci/version/version.go
Normal file
@@ -0,0 +1,9 @@
package version

import (
"github.com/tendermint/tendermint/version"
)

// TODO: eliminate this after some version refactor

const Version = version.ABCIVersion
14
buf.gen.yaml
@@ -1,14 +0,0 @@
# The version of the generation template (required).
# The only currently-valid value is v1beta1.
version: v1beta1

# The plugins to run.
plugins:
# The name of the plugin.
- name: gogofaster
# The directory where the generated proto output will be written.
# The directory is relative to where the generation tool was run.
out: proto
# Set options to assign import paths to the well-known types
# and to enable service generation.
opt: Mgoogle/protobuf/timestamp.proto=github.com/gogo/protobuf/types,Mgoogle/protobuf/duration.proto=github.com/golang/protobuf/ptypes/duration,plugins=grpc,paths=source_relative
16
buf.yaml
@@ -1,16 +0,0 @@
version: v1beta1

build:
roots:
- proto
- third_party/proto
lint:
use:
- BASIC
- FILE_LOWER_SNAKE_CASE
- UNARY_RPC
ignore:
- gogoproto
breaking:
use:
- FILE
@@ -45,16 +45,12 @@ func main() {
keyFile = flag.String("keyfile", "", "absolute path to server key")
rootCA = flag.String("rootcafile", "", "absolute path to root CA")
prometheusAddr = flag.String("prometheus-addr", "", "address for prometheus endpoint (host:port)")

logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo).
With("module", "priv_val")
)
flag.Parse()

logger, err := log.NewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
if err != nil {
fmt.Fprintf(os.Stderr, "failed to construct logger: %v", err)
os.Exit(1)
}
logger = logger.With("module", "priv_val")

ctx, cancel := context.WithCancel(context.Background())
defer cancel()

@@ -1,46 +0,0 @@
package commands

import (
"fmt"

"github.com/spf13/cobra"
)

// NewCompletionCmd returns a cobra.Command that generates bash and zsh
// completion scripts for the given root command. If hidden is true, the
// command will not show up in the root command's list of available commands.
func NewCompletionCmd(rootCmd *cobra.Command, hidden bool) *cobra.Command {
flagZsh := "zsh"
cmd := &cobra.Command{
Use: "completion",
Short: "Generate shell completion scripts",
Long: fmt.Sprintf(`Generate Bash and Zsh completion scripts and print them to STDOUT.

Once saved to file, a completion script can be loaded in the shell's
current session as shown:

$ . <(%s completion)

To configure your bash shell to load completions for each session add to
your $HOME/.bashrc or $HOME/.profile the following instruction:

. <(%s completion)
`, rootCmd.Use, rootCmd.Use),
RunE: func(cmd *cobra.Command, _ []string) error {
zsh, err := cmd.Flags().GetBool(flagZsh)
if err != nil {
return err
}
if zsh {
return rootCmd.GenZshCompletion(cmd.OutOrStdout())
}
return rootCmd.GenBashCompletion(cmd.OutOrStdout())
},
Hidden: hidden,
Args: cobra.NoArgs,
}

cmd.Flags().Bool(flagZsh, false, "Generate Zsh completion script")

return cmd
}
@@ -1,11 +1,11 @@
package commands

import (
"encoding/json"
"fmt"

"github.com/spf13/cobra"

tmjson "github.com/tendermint/tendermint/libs/json"
"github.com/tendermint/tendermint/types"
)

@@ -20,7 +20,7 @@ var GenNodeKeyCmd = &cobra.Command{
func genNodeKey(cmd *cobra.Command, args []string) error {
nodeKey := types.GenNodeKey()

bz, err := json.Marshal(nodeKey)
bz, err := tmjson.Marshal(nodeKey)
if err != nil {
return fmt.Errorf("nodeKey -> json: %w", err)
}

@@ -1,41 +1,41 @@
package commands

import (
"encoding/json"
"fmt"

"github.com/spf13/cobra"

tmjson "github.com/tendermint/tendermint/libs/json"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/types"
)

// GenValidatorCmd allows the generation of a keypair for a
// validator.
func MakeGenValidatorCommand() *cobra.Command {
var keyType string
cmd := &cobra.Command{
Use: "gen-validator",
Short: "Generate new validator keypair",
RunE: func(cmd *cobra.Command, args []string) error {
pv, err := privval.GenFilePV("", "", keyType)
if err != nil {
return err
}
var GenValidatorCmd = &cobra.Command{
Use: "gen-validator",
Short: "Generate new validator keypair",
RunE: genValidator,
}

jsbz, err := json.Marshal(pv)
if err != nil {
return fmt.Errorf("validator -> json: %w", err)
}
func init() {
GenValidatorCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")
}

fmt.Printf("%v\n", string(jsbz))

return nil
},
func genValidator(cmd *cobra.Command, args []string) error {
pv, err := privval.GenFilePV("", "", keyType)
if err != nil {
return err
}

cmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")
jsbz, err := tmjson.Marshal(pv)
if err != nil {
return fmt.Errorf("validator -> json: %w", err)
}

return cmd
fmt.Printf(`%v
`, string(jsbz))

return nil
}

@@ -7,8 +7,7 @@ import (

"github.com/spf13/cobra"

"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/log"
cfg "github.com/tendermint/tendermint/config"
tmos "github.com/tendermint/tendermint/libs/os"
tmrand "github.com/tendermint/tendermint/libs/rand"
tmtime "github.com/tendermint/tendermint/libs/time"
@@ -16,40 +15,43 @@ import (
"github.com/tendermint/tendermint/types"
)

// MakeInitFilesCommand returns the command to initialize a fresh Tendermint Core instance.
func MakeInitFilesCommand(conf *config.Config, logger log.Logger) *cobra.Command {
var keyType string
cmd := &cobra.Command{
Use: "init [full|validator|seed]",
Short: "Initializes a Tendermint node",
ValidArgs: []string{"full", "validator", "seed"},
// We allow for zero args so we can throw a more informative error
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) == 0 {
return errors.New("must specify a node type: tendermint init [validator|full|seed]")
}
conf.Mode = args[0]
return initFilesWithConfig(cmd.Context(), conf, logger, keyType)
},
}

cmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")

return cmd
// InitFilesCmd initializes a fresh Tendermint Core instance.
var InitFilesCmd = &cobra.Command{
Use: "init [full|validator|seed]",
Short: "Initializes a Tendermint node",
ValidArgs: []string{"full", "validator", "seed"},
// We allow for zero args so we can throw a more informative error
Args: cobra.MaximumNArgs(1),
RunE: initFiles,
}

func initFilesWithConfig(ctx context.Context, conf *config.Config, logger log.Logger, keyType string) error {
var (
keyType string
)

func init() {
InitFilesCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")
}

func initFiles(cmd *cobra.Command, args []string) error {
if len(args) == 0 {
return errors.New("must specify a node type: tendermint init [validator|full|seed]")
}
config.Mode = args[0]
return initFilesWithConfig(cmd.Context(), config)
}

func initFilesWithConfig(ctx context.Context, config *cfg.Config) error {
var (
pv *privval.FilePV
err error
)

if conf.Mode == config.ModeValidator {
if config.Mode == cfg.ModeValidator {
// private validator
privValKeyFile := conf.PrivValidator.KeyFile()
privValStateFile := conf.PrivValidator.StateFile()
privValKeyFile := config.PrivValidator.KeyFile()
privValStateFile := config.PrivValidator.StateFile()
if tmos.FileExists(privValKeyFile) {
pv, err = privval.LoadFilePV(privValKeyFile, privValStateFile)
if err != nil {
@@ -71,7 +73,7 @@ func initFilesWithConfig(ctx context.Context, conf *config.Config, logger log.Lo
}
}

nodeKeyFile := conf.NodeKeyFile()
nodeKeyFile := config.NodeKeyFile()
if tmos.FileExists(nodeKeyFile) {
logger.Info("Found node key", "path", nodeKeyFile)
} else {
@@ -82,7 +84,7 @@ func initFilesWithConfig(ctx context.Context, conf *config.Config, logger log.Lo
}

// genesis file
genFile := conf.GenesisFile()
genFile := config.GenesisFile()
if tmos.FileExists(genFile) {
logger.Info("Found genesis file", "path", genFile)
} else {
@@ -121,10 +123,10 @@ func initFilesWithConfig(ctx context.Context, conf *config.Config, logger log.Lo
}

// write config file
if err := config.WriteConfigFile(conf.RootDir, conf); err != nil {
if err := cfg.WriteConfigFile(config.RootDir, config); err != nil {
return err
}
logger.Info("Generated config", "mode", conf.Mode)
logger.Info("Generated config", "mode", config.Mode)

return nil
}

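init.go is one instance of the changeset-wide move from package-level cobra command variables to constructors that take the config and logger explicitly. A hedged sketch of how a caller might wire the new constructor into a root command follows; the root-command setup itself is assumed, not taken from the diff.

// Sketch only: build dependencies once and hand them to the command constructor.
conf := config.DefaultConfig()
logger, err := log.NewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
if err != nil {
	panic(err)
}
rootCmd := &cobra.Command{Use: "tendermint"}
rootCmd.AddCommand(commands.MakeInitFilesCommand(conf, logger))
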
@@ -6,17 +6,14 @@ import (

"github.com/spf13/cobra"

"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/inspect"
"github.com/tendermint/tendermint/libs/log"
)

// InspectCmd constructs the command to start an inspect server.
func MakeInspectCommand(conf *config.Config, logger log.Logger) *cobra.Command {
cmd := &cobra.Command{
Use: "inspect",
Short: "Run an inspect server for investigating Tendermint state",
Long: `
// InspectCmd is the command for starting an inspect server.
var InspectCmd = &cobra.Command{
Use: "inspect",
Short: "Run an inspect server for investigating Tendermint state",
Long: `
inspect runs a subset of Tendermint's RPC endpoints that are useful for debugging
issues with Tendermint.

@@ -25,27 +22,33 @@ func MakeInspectCommand(conf *config.Config, logger log.Logger) *cobra.Command {
The inspect command can be used to query the block and state store using Tendermint
RPC calls to debug issues of inconsistent state.
`,
RunE: func(cmd *cobra.Command, args []string) error {
ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM, syscall.SIGINT)
defer cancel()

ins, err := inspect.NewFromConfig(logger, conf)
if err != nil {
return err
}

logger.Info("starting inspect server")
if err := ins.Run(ctx); err != nil {
return err
}
return nil
},
}
cmd.Flags().String("rpc.laddr",
conf.RPC.ListenAddress, "RPC listener address. Port required")
cmd.Flags().String("db-backend",
conf.DBBackend, "database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb")
cmd.Flags().String("db-dir", conf.DBPath, "database directory")

return cmd
RunE: runInspect,
}

func init() {
InspectCmd.Flags().
String("rpc.laddr",
config.RPC.ListenAddress, "RPC listener address. Port required")
InspectCmd.Flags().
String("db-backend",
config.DBBackend, "database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb")
InspectCmd.Flags().
String("db-dir", config.DBPath, "database directory")
}

func runInspect(cmd *cobra.Command, args []string) error {
ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM, syscall.SIGINT)
defer cancel()

ins, err := inspect.NewFromConfig(logger, config)
if err != nil {
return err
}

logger.Info("starting inspect server")
if err := ins.Run(ctx); err != nil {
return err
}
return nil
}

@@ -5,13 +5,11 @@ import (
"fmt"

"github.com/spf13/cobra"

cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/scripts/keymigrate"
)

func MakeKeyMigrateCommand(conf *cfg.Config, logger log.Logger) *cobra.Command {
func MakeKeyMigrateCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "key-migrate",
Short: "Run Database key migration",
@@ -40,7 +38,7 @@ func MakeKeyMigrateCommand(conf *cfg.Config, logger log.Logger) *cobra.Command {

db, err := cfg.DefaultDBProvider(&cfg.DBContext{
ID: dbctx,
Config: conf,
Config: config,
})

if err != nil {
@@ -60,7 +58,7 @@ func MakeKeyMigrateCommand(conf *cfg.Config, logger log.Logger) *cobra.Command {
}

// allow database info to be overridden via cli
addDBFlags(cmd, conf)
addDBFlags(cmd)

return cmd
}

@@ -15,7 +15,6 @@ import (
|
||||
"github.com/spf13/cobra"
|
||||
dbm "github.com/tendermint/tm-db"
|
||||
|
||||
"github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
tmmath "github.com/tendermint/tendermint/libs/math"
|
||||
"github.com/tendermint/tendermint/light"
|
||||
@@ -25,69 +24,20 @@ import (
|
||||
rpcserver "github.com/tendermint/tendermint/rpc/jsonrpc/server"
|
||||
)
|
||||
|
||||
// LightCmd constructs the base command called when invoked without any subcommands.
|
||||
func MakeLightCommand(conf *config.Config, logger log.Logger) *cobra.Command {
|
||||
var (
|
||||
listenAddr string
|
||||
primaryAddr string
|
||||
witnessAddrsJoined string
|
||||
chainID string
|
||||
dir string
|
||||
maxOpenConnections int
|
||||
|
||||
sequential bool
|
||||
trustingPeriod time.Duration
|
||||
trustedHeight int64
|
||||
trustedHash []byte
|
||||
trustLevelStr string
|
||||
|
||||
logLevel string
|
||||
logFormat string
|
||||
|
||||
primaryKey = []byte("primary")
|
||||
witnessesKey = []byte("witnesses")
|
||||
)
|
||||
|
||||
checkForExistingProviders := func(db dbm.DB) (string, []string, error) {
|
||||
primaryBytes, err := db.Get(primaryKey)
|
||||
if err != nil {
|
||||
return "", []string{""}, err
|
||||
}
|
||||
witnessesBytes, err := db.Get(witnessesKey)
|
||||
if err != nil {
|
||||
return "", []string{""}, err
|
||||
}
|
||||
witnessesAddrs := strings.Split(string(witnessesBytes), ",")
|
||||
return string(primaryBytes), witnessesAddrs, nil
|
||||
}
|
||||
|
||||
saveProviders := func(db dbm.DB, primaryAddr, witnessesAddrs string) error {
|
||||
err := db.Set(primaryKey, []byte(primaryAddr))
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to save primary provider: %w", err)
|
||||
}
|
||||
err = db.Set(witnessesKey, []byte(witnessesAddrs))
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to save witness providers: %w", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
cmd := &cobra.Command{
|
||||
Use: "light [chainID]",
|
||||
Short: "Run a light client proxy server, verifying Tendermint rpc",
|
||||
Long: `Run a light client proxy server, verifying Tendermint rpc.
|
||||
// LightCmd represents the base command when called without any subcommands
|
||||
var LightCmd = &cobra.Command{
|
||||
Use: "light [chainID]",
|
||||
Short: "Run a light client proxy server, verifying Tendermint rpc",
|
||||
Long: `Run a light client proxy server, verifying Tendermint rpc.
|
||||
|
||||
All calls that can be tracked back to a block header by a proof
|
||||
will be verified before passing them back to the caller. Other than
|
||||
that, it will present the same interface as a full Tendermint node.
|
||||
|
||||
In addition to the chainID, a fresh instance of a light client will
|
||||
need a primary RPC address and a trusted hash and height. It is also highly
|
||||
recommended to provide additional witness RPC addresses, especially if
|
||||
not using sequential verification.
|
||||
|
||||
To restart the node, thereafter only the chainID is required.
|
||||
need a primary RPC address, a trusted hash and height and witness RPC addresses
|
||||
(if not using sequential verification). To restart the node, thereafter
|
||||
only the chainID is required.
|
||||
|
||||
When /abci_query is called, the Merkle key path format is:
|
||||
|
||||
@@ -96,138 +46,185 @@ When /abci_query is called, the Merkle key path format is:
|
||||
Please verify with your application that this Merkle key format is used (true
|
||||
for applications built w/ Cosmos SDK).
|
||||
`,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
chainID = args[0]
|
||||
logger.Info("Creating client...", "chainID", chainID)
|
||||
|
||||
var witnessesAddrs []string
|
||||
if witnessAddrsJoined != "" {
|
||||
witnessesAddrs = strings.Split(witnessAddrsJoined, ",")
|
||||
}
|
||||
|
||||
lightDB, err := dbm.NewGoLevelDB("light-client-db", dir)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't create a db: %w", err)
|
||||
}
|
||||
// create a prefixed db on the chainID
|
||||
db := dbm.NewPrefixDB(lightDB, []byte(chainID))
|
||||
|
||||
if primaryAddr == "" { // check to see if we can start from an existing state
|
||||
var err error
|
||||
primaryAddr, witnessesAddrs, err = checkForExistingProviders(db)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to retrieve primary or witness from db: %w", err)
|
||||
}
|
||||
if primaryAddr == "" {
|
||||
return errors.New("no primary address was provided nor found. Please provide a primary (using -p)." +
|
||||
" Run the command: tendermint light --help for more information")
|
||||
}
|
||||
} else {
|
||||
err := saveProviders(db, primaryAddr, witnessAddrsJoined)
|
||||
if err != nil {
|
||||
logger.Error("Unable to save primary and or witness addresses", "err", err)
|
||||
}
|
||||
}
|
||||
|
||||
if len(witnessesAddrs) < 1 && !sequential {
|
||||
logger.Info("In skipping verification mode it is highly recommended to provide at least one witness")
|
||||
}
|
||||
|
||||
trustLevel, err := tmmath.ParseFraction(trustLevelStr)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't parse trust level: %w", err)
|
||||
}
|
||||
|
||||
options := []light.Option{light.Logger(logger)}
|
||||
|
||||
vo := light.SkippingVerification(trustLevel)
|
||||
if sequential {
|
||||
vo = light.SequentialVerification()
|
||||
}
|
||||
options = append(options, vo)
|
||||
|
||||
// Initiate the light client. If the trusted store already has blocks in it, this
|
||||
// will be used else we use the trusted options.
|
||||
c, err := light.NewHTTPClient(
|
||||
context.Background(),
|
||||
chainID,
|
||||
light.TrustOptions{
|
||||
Period: trustingPeriod,
|
||||
Height: trustedHeight,
|
||||
Hash: trustedHash,
|
||||
},
|
||||
primaryAddr,
|
||||
witnessesAddrs,
|
||||
dbs.New(db),
|
||||
options...,
|
||||
)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
cfg := rpcserver.DefaultConfig()
|
||||
cfg.MaxBodyBytes = conf.RPC.MaxBodyBytes
|
||||
cfg.MaxHeaderBytes = conf.RPC.MaxHeaderBytes
|
||||
cfg.MaxOpenConnections = maxOpenConnections
|
||||
// If necessary adjust global WriteTimeout to ensure it's greater than
|
||||
// TimeoutBroadcastTxCommit.
|
||||
// See https://github.com/tendermint/tendermint/issues/3435
|
||||
if cfg.WriteTimeout <= conf.RPC.TimeoutBroadcastTxCommit {
|
||||
cfg.WriteTimeout = conf.RPC.TimeoutBroadcastTxCommit + 1*time.Second
|
||||
}
|
||||
|
||||
p, err := lproxy.NewProxy(c, listenAddr, primaryAddr, cfg, logger, lrpc.KeyPathFn(lrpc.DefaultMerkleKeyPathFn()))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
|
||||
defer cancel()
|
||||
|
||||
go func() {
|
||||
<-ctx.Done()
|
||||
p.Listener.Close()
|
||||
}()
|
||||
|
||||
logger.Info("Starting proxy...", "laddr", listenAddr)
|
||||
if err := p.ListenAndServe(ctx); err != http.ErrServerClosed {
|
||||
// Error starting or closing listener:
|
||||
logger.Error("proxy ListenAndServe", "err", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
},
|
||||
Args: cobra.ExactArgs(1),
|
||||
Example: `light cosmoshub-3 -p http://52.57.29.196:26657 -w http://public-seed-node.cosmoshub.certus.one:26657
|
||||
RunE: runProxy,
|
||||
Args: cobra.ExactArgs(1),
|
||||
Example: `light cosmoshub-3 -p http://52.57.29.196:26657 -w http://public-seed-node.cosmoshub.certus.one:26657
|
||||
--height 962118 --hash 28B97BE9F6DE51AC69F70E0B7BFD7E5C9CD1A595B7DC31AFF27C50D4948020CD`,
|
||||
}
|
||||
}
|
||||
|
||||
cmd.Flags().StringVar(&listenAddr, "laddr", "tcp://localhost:8888",
|
||||
var (
|
||||
listenAddr string
|
||||
primaryAddr string
|
||||
witnessAddrsJoined string
|
||||
chainID string
|
||||
dir string
|
||||
maxOpenConnections int
|
||||
|
||||
sequential bool
|
||||
trustingPeriod time.Duration
|
||||
trustedHeight int64
|
||||
trustedHash []byte
|
||||
trustLevelStr string
|
||||
|
||||
logLevel string
|
||||
logFormat string
|
||||
|
||||
primaryKey = []byte("primary")
|
||||
witnessesKey = []byte("witnesses")
|
||||
)
|
||||
|
||||
func init() {
|
||||
LightCmd.Flags().StringVar(&listenAddr, "laddr", "tcp://localhost:8888",
|
||||
"serve the proxy on the given address")
|
||||
cmd.Flags().StringVarP(&primaryAddr, "primary", "p", "",
|
||||
LightCmd.Flags().StringVarP(&primaryAddr, "primary", "p", "",
|
||||
"connect to a Tendermint node at this address")
|
||||
cmd.Flags().StringVarP(&witnessAddrsJoined, "witnesses", "w", "",
|
||||
LightCmd.Flags().StringVarP(&witnessAddrsJoined, "witnesses", "w", "",
|
||||
"tendermint nodes to cross-check the primary node, comma-separated")
|
||||
cmd.Flags().StringVarP(&dir, "dir", "d", os.ExpandEnv(filepath.Join("$HOME", ".tendermint-light")),
|
||||
LightCmd.Flags().StringVarP(&dir, "dir", "d", os.ExpandEnv(filepath.Join("$HOME", ".tendermint-light")),
|
||||
"specify the directory")
|
||||
cmd.Flags().IntVar(
|
||||
LightCmd.Flags().IntVar(
|
||||
&maxOpenConnections,
|
||||
"max-open-connections",
|
||||
900,
|
||||
"maximum number of simultaneous connections (including WebSocket).")
|
||||
cmd.Flags().DurationVar(&trustingPeriod, "trusting-period", 168*time.Hour,
|
||||
LightCmd.Flags().DurationVar(&trustingPeriod, "trusting-period", 168*time.Hour,
|
||||
"trusting period that headers can be verified within. Should be significantly less than the unbonding period")
|
||||
cmd.Flags().Int64Var(&trustedHeight, "height", 1, "Trusted header's height")
|
||||
cmd.Flags().BytesHexVar(&trustedHash, "hash", []byte{}, "Trusted header's hash")
|
||||
cmd.Flags().StringVar(&logLevel, "log-level", log.LogLevelInfo, "The logging level (debug|info|warn|error|fatal)")
|
||||
cmd.Flags().StringVar(&logFormat, "log-format", log.LogFormatPlain, "The logging format (text|json)")
|
||||
cmd.Flags().StringVar(&trustLevelStr, "trust-level", "1/3",
|
||||
LightCmd.Flags().Int64Var(&trustedHeight, "height", 1, "Trusted header's height")
|
||||
LightCmd.Flags().BytesHexVar(&trustedHash, "hash", []byte{}, "Trusted header's hash")
|
||||
LightCmd.Flags().StringVar(&logLevel, "log-level", log.LogLevelInfo, "The logging level (debug|info|warn|error|fatal)")
|
||||
LightCmd.Flags().StringVar(&logFormat, "log-format", log.LogFormatPlain, "The logging format (text|json)")
|
||||
LightCmd.Flags().StringVar(&trustLevelStr, "trust-level", "1/3",
|
||||
"trust level. Must be between 1/3 and 3/3",
|
||||
)
|
||||
cmd.Flags().BoolVar(&sequential, "sequential", false,
|
||||
LightCmd.Flags().BoolVar(&sequential, "sequential", false,
|
||||
"sequential verification. Verify all headers sequentially as opposed to using skipping verification",
|
||||
)
|
||||
|
||||
return cmd
|
||||
|
||||
}
|
||||
|
||||
func runProxy(cmd *cobra.Command, args []string) error {
|
||||
logger, err := log.NewDefaultLogger(logFormat, logLevel)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
chainID = args[0]
|
||||
logger.Info("Creating client...", "chainID", chainID)
|
||||
|
||||
witnessesAddrs := []string{}
|
||||
if witnessAddrsJoined != "" {
|
||||
witnessesAddrs = strings.Split(witnessAddrsJoined, ",")
|
||||
}
|
||||
|
||||
lightDB, err := dbm.NewGoLevelDB("light-client-db", dir)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't create a db: %w", err)
|
||||
}
|
||||
// create a prefixed db on the chainID
|
||||
db := dbm.NewPrefixDB(lightDB, []byte(chainID))
|
||||
|
||||
if primaryAddr == "" { // check to see if we can start from an existing state
|
||||
var err error
|
||||
primaryAddr, witnessesAddrs, err = checkForExistingProviders(db)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to retrieve primary or witness from db: %w", err)
|
||||
}
|
||||
if primaryAddr == "" {
|
||||
return errors.New("no primary address was provided nor found. Please provide a primary (using -p)." +
|
||||
" Run the command: tendermint light --help for more information")
|
||||
}
|
||||
} else {
|
||||
err := saveProviders(db, primaryAddr, witnessAddrsJoined)
|
||||
if err != nil {
|
||||
logger.Error("Unable to save primary and or witness addresses", "err", err)
|
||||
}
|
||||
}
|
||||
|
||||
trustLevel, err := tmmath.ParseFraction(trustLevelStr)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't parse trust level: %w", err)
|
||||
}
|
||||
|
||||
options := []light.Option{light.Logger(logger)}
|
||||
|
||||
if sequential {
|
||||
options = append(options, light.SequentialVerification())
|
||||
} else {
|
||||
options = append(options, light.SkippingVerification(trustLevel))
|
||||
}
|
||||
|
||||
// Initiate the light client. If the trusted store already has blocks in it, this
|
||||
// will be used else we use the trusted options.
|
||||
c, err := light.NewHTTPClient(
|
||||
context.Background(),
|
||||
chainID,
|
||||
light.TrustOptions{
|
||||
Period: trustingPeriod,
|
||||
Height: trustedHeight,
|
||||
Hash: trustedHash,
|
||||
},
|
||||
primaryAddr,
|
||||
witnessesAddrs,
|
||||
dbs.New(db),
|
||||
options...,
|
||||
)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
cfg := rpcserver.DefaultConfig()
|
||||
cfg.MaxBodyBytes = config.RPC.MaxBodyBytes
|
||||
cfg.MaxHeaderBytes = config.RPC.MaxHeaderBytes
|
||||
cfg.MaxOpenConnections = maxOpenConnections
|
||||
// If necessary adjust global WriteTimeout to ensure it's greater than
|
||||
// TimeoutBroadcastTxCommit.
|
||||
// See https://github.com/tendermint/tendermint/issues/3435
|
||||
if cfg.WriteTimeout <= config.RPC.TimeoutBroadcastTxCommit {
|
||||
cfg.WriteTimeout = config.RPC.TimeoutBroadcastTxCommit + 1*time.Second
|
||||
}
|
||||
|
||||
p, err := lproxy.NewProxy(c, listenAddr, primaryAddr, cfg, logger, lrpc.KeyPathFn(lrpc.DefaultMerkleKeyPathFn()))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
|
||||
defer cancel()
|
||||
|
||||
go func() {
|
||||
<-ctx.Done()
|
||||
p.Listener.Close()
|
||||
}()
|
||||
|
||||
logger.Info("Starting proxy...", "laddr", listenAddr)
|
||||
if err := p.ListenAndServe(ctx); err != http.ErrServerClosed {
|
||||
// Error starting or closing listener:
|
||||
logger.Error("proxy ListenAndServe", "err", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func checkForExistingProviders(db dbm.DB) (string, []string, error) {
|
||||
primaryBytes, err := db.Get(primaryKey)
|
||||
if err != nil {
|
||||
return "", []string{""}, err
|
||||
}
|
||||
witnessesBytes, err := db.Get(witnessesKey)
|
||||
if err != nil {
|
||||
return "", []string{""}, err
|
||||
}
|
||||
witnessesAddrs := strings.Split(string(witnessesBytes), ",")
|
||||
return string(primaryBytes), witnessesAddrs, nil
|
||||
}
|
||||
|
||||
func saveProviders(db dbm.DB, primaryAddr, witnessesAddrs string) error {
|
||||
err := db.Set(primaryKey, []byte(primaryAddr))
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to save primary provider: %w", err)
|
||||
}
|
||||
err = db.Set(witnessesKey, []byte(witnessesAddrs))
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to save witness providers: %w", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -17,7 +17,6 @@ import (
|
||||
"github.com/tendermint/tendermint/internal/state/indexer/sink/kv"
|
||||
"github.com/tendermint/tendermint/internal/state/indexer/sink/psql"
|
||||
"github.com/tendermint/tendermint/internal/store"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
"github.com/tendermint/tendermint/libs/os"
|
||||
"github.com/tendermint/tendermint/rpc/coretypes"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
@@ -27,68 +26,59 @@ const (
|
||||
reindexFailed = "event re-index failed: "
|
||||
)
|
||||
|
||||
// MakeReindexEventCommand constructs a command to re-index events in a block height interval.
|
||||
func MakeReindexEventCommand(conf *tmcfg.Config, logger log.Logger) *cobra.Command {
|
||||
var (
|
||||
startHeight int64
|
||||
endHeight int64
|
||||
)
|
||||
|
||||
cmd := &cobra.Command{
|
||||
Use: "reindex-event",
|
||||
Short: "reindex events to the event store backends",
|
||||
Long: `
|
||||
// ReIndexEventCmd allows re-indexing events over a given block height interval
|
||||
var ReIndexEventCmd = &cobra.Command{
|
||||
Use: "reindex-event",
|
||||
Short: "reindex events to the event store backends",
|
||||
Long: `
|
||||
reindex-event is an offline tool to re-index block and tx events to the event sinks,
you can run this command when the event store backend has been dropped/disconnected or you want to
replace the backend. The default start-height is 0, meaning the tool will start
reindexing from the base block height (inclusive); and the default end-height is 0, meaning
you can run this command when the event store backend has been dropped/disconnected or you want to
replace the backend. The default start-height is 0, meaning the tool will start
reindexing from the base block height (inclusive); and the default end-height is 0, meaning
the tool will reindex up to the latest block height (inclusive). The user can omit
either or both arguments.
|
||||
`,
|
||||
Example: `
|
||||
Example: `
|
||||
tendermint reindex-event
|
||||
tendermint reindex-event --start-height 2
|
||||
tendermint reindex-event --end-height 10
|
||||
tendermint reindex-event --start-height 2 --end-height 10
|
||||
`,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
bs, ss, err := loadStateAndBlockStore(conf)
|
||||
if err != nil {
|
||||
return fmt.Errorf("%s: %w", reindexFailed, err)
|
||||
}
|
||||
Run: func(cmd *cobra.Command, args []string) {
|
||||
bs, ss, err := loadStateAndBlockStore(config)
|
||||
if err != nil {
|
||||
fmt.Println(reindexFailed, err)
|
||||
return
|
||||
}
|
||||
|
||||
cvhArgs := checkValidHeightArgs{
|
||||
startHeight: startHeight,
|
||||
endHeight: endHeight,
|
||||
}
|
||||
if err := checkValidHeight(bs, cvhArgs); err != nil {
|
||||
return fmt.Errorf("%s: %w", reindexFailed, err)
|
||||
}
|
||||
if err := checkValidHeight(bs); err != nil {
|
||||
fmt.Println(reindexFailed, err)
|
||||
return
|
||||
}
|
||||
|
||||
es, err := loadEventSinks(conf)
|
||||
if err != nil {
|
||||
return fmt.Errorf("%s: %w", reindexFailed, err)
|
||||
}
|
||||
es, err := loadEventSinks(config)
|
||||
if err != nil {
|
||||
fmt.Println(reindexFailed, err)
|
||||
return
|
||||
}
|
||||
|
||||
riArgs := eventReIndexArgs{
|
||||
startHeight: startHeight,
|
||||
endHeight: endHeight,
|
||||
sinks: es,
|
||||
blockStore: bs,
|
||||
stateStore: ss,
|
||||
}
|
||||
if err := eventReIndex(cmd, riArgs); err != nil {
|
||||
return fmt.Errorf("%s: %w", reindexFailed, err)
|
||||
}
|
||||
if err = eventReIndex(cmd, es, bs, ss); err != nil {
|
||||
fmt.Println(reindexFailed, err)
|
||||
return
|
||||
}
|
||||
|
||||
logger.Info("event re-index finished")
|
||||
return nil
|
||||
},
|
||||
}
|
||||
fmt.Println("event re-index finished")
|
||||
},
|
||||
}
|
||||
|
||||
cmd.Flags().Int64Var(&startHeight, "start-height", 0, "the block height would like to start for re-index")
|
||||
cmd.Flags().Int64Var(&endHeight, "end-height", 0, "the block height would like to finish for re-index")
|
||||
return cmd
|
||||
var (
|
||||
startHeight int64
|
||||
endHeight int64
|
||||
)
|
||||
|
||||
func init() {
|
||||
ReIndexEventCmd.Flags().Int64Var(&startHeight, "start-height", 0, "the block height would like to start for re-index")
|
||||
ReIndexEventCmd.Flags().Int64Var(&endHeight, "end-height", 0, "the block height would like to finish for re-index")
|
||||
}
|
||||
|
||||
func loadEventSinks(cfg *tmcfg.Config) ([]indexer.EventSink, error) {
|
||||
@@ -119,7 +109,7 @@ func loadEventSinks(cfg *tmcfg.Config) ([]indexer.EventSink, error) {
|
||||
if conn == "" {
|
||||
return nil, errors.New("the psql connection settings cannot be empty")
|
||||
}
|
||||
es, err := psql.NewEventSink(conn, cfg.ChainID())
|
||||
es, err := psql.NewEventSink(conn, chainID)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -169,58 +159,52 @@ func loadStateAndBlockStore(cfg *tmcfg.Config) (*store.BlockStore, state.Store,
|
||||
return blockStore, stateStore, nil
|
||||
}
|
||||
|
||||
type eventReIndexArgs struct {
|
||||
startHeight int64
|
||||
endHeight int64
|
||||
sinks []indexer.EventSink
|
||||
blockStore state.BlockStore
|
||||
stateStore state.Store
|
||||
}
|
||||
func eventReIndex(cmd *cobra.Command, es []indexer.EventSink, bs state.BlockStore, ss state.Store) error {
|
||||
|
||||
func eventReIndex(cmd *cobra.Command, args eventReIndexArgs) error {
|
||||
var bar progressbar.Bar
|
||||
bar.NewOption(args.startHeight-1, args.endHeight)
|
||||
bar.NewOption(startHeight-1, endHeight)
|
||||
|
||||
fmt.Println("start re-indexing events:")
|
||||
defer bar.Finish()
|
||||
for i := args.startHeight; i <= args.endHeight; i++ {
|
||||
for i := startHeight; i <= endHeight; i++ {
|
||||
select {
|
||||
case <-cmd.Context().Done():
|
||||
return fmt.Errorf("event re-index terminated at height %d: %w", i, cmd.Context().Err())
|
||||
default:
|
||||
b := args.blockStore.LoadBlock(i)
|
||||
b := bs.LoadBlock(i)
|
||||
if b == nil {
|
||||
return fmt.Errorf("not able to load block at height %d from the blockstore", i)
|
||||
}
|
||||
|
||||
r, err := args.stateStore.LoadABCIResponses(i)
|
||||
r, err := ss.LoadABCIResponses(i)
|
||||
if err != nil {
|
||||
return fmt.Errorf("not able to load ABCI Response at height %d from the statestore", i)
|
||||
}
|
||||
|
||||
e := types.EventDataNewBlockHeader{
|
||||
Header: b.Header,
|
||||
NumTxs: int64(len(b.Txs)),
|
||||
ResultFinalizeBlock: *r.FinalizeBlock,
|
||||
Header: b.Header,
|
||||
NumTxs: int64(len(b.Txs)),
|
||||
ResultBeginBlock: *r.BeginBlock,
|
||||
ResultEndBlock: *r.EndBlock,
|
||||
}
|
||||
|
||||
var batch *indexer.Batch
|
||||
if e.NumTxs > 0 {
|
||||
batch = indexer.NewBatch(e.NumTxs)
|
||||
|
||||
for i := range b.Data.Txs {
|
||||
for i, tx := range b.Data.Txs {
|
||||
tr := abcitypes.TxResult{
|
||||
Height: b.Height,
|
||||
Index: uint32(i),
|
||||
Tx: b.Data.Txs[i],
|
||||
Result: *(r.FinalizeBlock.Txs[i]),
|
||||
Tx: tx,
|
||||
Result: *(r.DeliverTxs[i]),
|
||||
}
|
||||
|
||||
_ = batch.Add(&tr)
|
||||
}
|
||||
}
|
||||
|
||||
for _, sink := range args.sinks {
|
||||
for _, sink := range es {
|
||||
if err := sink.IndexBlockEvents(e); err != nil {
|
||||
return fmt.Errorf("block event re-index at height %d failed: %w", i, err)
|
||||
}
|
||||
@@ -239,45 +223,40 @@ func eventReIndex(cmd *cobra.Command, args eventReIndexArgs) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
type checkValidHeightArgs struct {
|
||||
startHeight int64
|
||||
endHeight int64
|
||||
}
|
||||
|
||||
func checkValidHeight(bs state.BlockStore, args checkValidHeightArgs) error {
|
||||
func checkValidHeight(bs state.BlockStore) error {
|
||||
base := bs.Base()
|
||||
|
||||
if args.startHeight == 0 {
|
||||
args.startHeight = base
|
||||
if startHeight == 0 {
|
||||
startHeight = base
|
||||
fmt.Printf("set the start block height to the base height of the blockstore %d \n", base)
|
||||
}
|
||||
|
||||
if args.startHeight < base {
|
||||
if startHeight < base {
|
||||
return fmt.Errorf("%s (requested start height: %d, base height: %d)",
|
||||
coretypes.ErrHeightNotAvailable, args.startHeight, base)
|
||||
coretypes.ErrHeightNotAvailable, startHeight, base)
|
||||
}
|
||||
|
||||
height := bs.Height()
|
||||
|
||||
if args.startHeight > height {
|
||||
if startHeight > height {
|
||||
return fmt.Errorf(
|
||||
"%s (requested start height: %d, store height: %d)", coretypes.ErrHeightNotAvailable, args.startHeight, height)
|
||||
"%s (requested start height: %d, store height: %d)", coretypes.ErrHeightNotAvailable, startHeight, height)
|
||||
}
|
||||
|
||||
if args.endHeight == 0 || args.endHeight > height {
|
||||
args.endHeight = height
|
||||
if endHeight == 0 || endHeight > height {
|
||||
endHeight = height
|
||||
fmt.Printf("set the end block height to the latest height of the blockstore %d \n", height)
|
||||
}
|
||||
|
||||
if args.endHeight < base {
|
||||
if endHeight < base {
|
||||
return fmt.Errorf(
|
||||
"%s (requested end height: %d, base height: %d)", coretypes.ErrHeightNotAvailable, args.endHeight, base)
|
||||
"%s (requested end height: %d, base height: %d)", coretypes.ErrHeightNotAvailable, endHeight, base)
|
||||
}
|
||||
|
||||
if args.endHeight < args.startHeight {
|
||||
if endHeight < startHeight {
|
||||
return fmt.Errorf(
|
||||
"%s (requested the end height: %d is less than the start height: %d)",
|
||||
coretypes.ErrInvalidRequest, args.startHeight, args.endHeight)
|
||||
coretypes.ErrInvalidRequest, startHeight, endHeight)
|
||||
}
|
||||
|
||||
return nil
|
||||
|
||||
@@ -9,15 +9,13 @@ import (
|
||||
"github.com/stretchr/testify/mock"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
dbm "github.com/tendermint/tm-db"
|
||||
|
||||
abcitypes "github.com/tendermint/tendermint/abci/types"
|
||||
"github.com/tendermint/tendermint/config"
|
||||
tmcfg "github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/internal/state/indexer"
|
||||
"github.com/tendermint/tendermint/internal/state/mocks"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
prototmstate "github.com/tendermint/tendermint/proto/tendermint/state"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
dbm "github.com/tendermint/tm-db"
|
||||
|
||||
_ "github.com/lib/pq" // for the psql sink
|
||||
)
|
||||
@@ -27,11 +25,9 @@ const (
|
||||
base int64 = 2
|
||||
)
|
||||
|
||||
func setupReIndexEventCmd(ctx context.Context, conf *config.Config, logger log.Logger) *cobra.Command {
|
||||
cmd := MakeReindexEventCommand(conf, logger)
|
||||
|
||||
func setupReIndexEventCmd(ctx context.Context) *cobra.Command {
|
||||
reIndexEventCmd := &cobra.Command{
|
||||
Use: cmd.Use,
|
||||
Use: ReIndexEventCmd.Use,
|
||||
Run: func(cmd *cobra.Command, args []string) {},
|
||||
}
|
||||
|
||||
@@ -72,7 +68,10 @@ func TestReIndexEventCheckHeight(t *testing.T) {
|
||||
}
|
||||
|
||||
for _, tc := range testCases {
|
||||
err := checkValidHeight(mockBlockStore, checkValidHeightArgs{startHeight: tc.startHeight, endHeight: tc.endHeight})
|
||||
startHeight = tc.startHeight
|
||||
endHeight = tc.endHeight
|
||||
|
||||
err := checkValidHeight(mockBlockStore)
|
||||
if tc.validHeight {
|
||||
require.NoError(t, err)
|
||||
} else {
|
||||
@@ -98,7 +97,7 @@ func TestLoadEventSink(t *testing.T) {
|
||||
}
|
||||
|
||||
for _, tc := range testCases {
|
||||
cfg := config.TestConfig()
|
||||
cfg := tmcfg.TestConfig()
|
||||
cfg.TxIndex.Indexer = tc.sinks
|
||||
cfg.TxIndex.PsqlConn = tc.connURL
|
||||
_, err := loadEventSinks(cfg)
|
||||
@@ -111,7 +110,7 @@ func TestLoadEventSink(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestLoadBlockStore(t *testing.T) {
|
||||
testCfg, err := config.ResetTestRoot(t.TempDir(), t.Name())
|
||||
testCfg, err := tmcfg.ResetTestRoot(t.Name())
|
||||
require.NoError(t, err)
|
||||
testCfg.DBBackend = "goleveldb"
|
||||
_, _, err = loadStateAndBlockStore(testCfg)
|
||||
@@ -155,9 +154,9 @@ func TestReIndexEvent(t *testing.T) {
|
||||
|
||||
dtx := abcitypes.ResponseDeliverTx{}
|
||||
abciResp := &prototmstate.ABCIResponses{
|
||||
FinalizeBlock: &abcitypes.ResponseFinalizeBlock{
|
||||
Txs: []*abcitypes.ResponseDeliverTx{&dtx},
|
||||
},
|
||||
DeliverTxs: []*abcitypes.ResponseDeliverTx{&dtx},
|
||||
EndBlock: &abcitypes.ResponseEndBlock{},
|
||||
BeginBlock: &abcitypes.ResponseBeginBlock{},
|
||||
}
|
||||
|
||||
mockStateStore.
|
||||
@@ -180,20 +179,12 @@ func TestReIndexEvent(t *testing.T) {
|
||||
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
logger := log.NewNopLogger()
|
||||
conf := config.DefaultConfig()
|
||||
|
||||
for _, tc := range testCases {
|
||||
err := eventReIndex(
|
||||
setupReIndexEventCmd(ctx, conf, logger),
|
||||
eventReIndexArgs{
|
||||
sinks: []indexer.EventSink{mockEventSink},
|
||||
blockStore: mockBlockStore,
|
||||
stateStore: mockStateStore,
|
||||
startHeight: tc.startHeight,
|
||||
endHeight: tc.endHeight,
|
||||
})
|
||||
startHeight = tc.startHeight
|
||||
endHeight = tc.endHeight
|
||||
|
||||
err := eventReIndex(setupReIndexEventCmd(ctx), []indexer.EventSink{mockEventSink}, mockBlockStore, mockStateStore)
|
||||
if tc.reIndexErr {
|
||||
require.Error(t, err)
|
||||
} else {
|
||||
|
||||
@@ -2,30 +2,24 @@ package commands

import (
"github.com/spf13/cobra"

"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/consensus"
"github.com/tendermint/tendermint/libs/log"
)

// MakeReplayCommand constructs a command to replay messages from the WAL into consensus.
func MakeReplayCommand(conf *config.Config, logger log.Logger) *cobra.Command {
return &cobra.Command{
Use: "replay",
Short: "Replay messages from WAL",
RunE: func(cmd *cobra.Command, args []string) error {
return consensus.RunReplayFile(cmd.Context(), logger, conf.BaseConfig, conf.Consensus, false)
},
}
// ReplayCmd allows replaying of messages from the WAL.
var ReplayCmd = &cobra.Command{
Use: "replay",
Short: "Replay messages from WAL",
RunE: func(cmd *cobra.Command, args []string) error {
return consensus.RunReplayFile(cmd.Context(), logger, config.BaseConfig, config.Consensus, false)
},
}

// MakeReplayConsoleCommand constructs a command to replay WAL messages to stdout.
func MakeReplayConsoleCommand(conf *config.Config, logger log.Logger) *cobra.Command {
return &cobra.Command{
Use: "replay-console",
Short: "Replay messages from WAL in a console",
RunE: func(cmd *cobra.Command, args []string) error {
return consensus.RunReplayFile(cmd.Context(), logger, conf.BaseConfig, conf.Consensus, true)
},
}
// ReplayConsoleCmd allows replaying of messages from the WAL in a
// console.
var ReplayConsoleCmd = &cobra.Command{
Use: "replay-console",
Short: "Replay messages from WAL in a console",
RunE: func(cmd *cobra.Command, args []string) error {
return consensus.RunReplayFile(cmd.Context(), logger, config.BaseConfig, config.Consensus, true)
},
}
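This replay hunk illustrates the refactor that runs through the whole compare: package-level cobra command variables that read a global config and logger are replaced by Make*Command constructors that close over an injected *config.Config and log.Logger. Below is a minimal sketch of that closure pattern; the Config and Logger types are simplified placeholders rather than the real Tendermint packages, and only the cobra wiring mirrors the diff.

package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// Config and Logger are placeholders for the real config.Config and
// log.Logger values the Tendermint constructors receive.
type Config struct{ WALFile string }

type Logger struct{ prefix string }

func (l Logger) Info(msg string) { fmt.Println(l.prefix, msg) }

// MakeReplayCommand closes over its dependencies instead of reading
// package-level globals, so the caller decides which config and logger to use.
func MakeReplayCommand(conf *Config, logger Logger) *cobra.Command {
	return &cobra.Command{
		Use:   "replay",
		Short: "Replay messages from WAL",
		RunE: func(cmd *cobra.Command, args []string) error {
			logger.Info("replaying " + conf.WALFile)
			return nil
		},
	}
}

func main() {
	root := &cobra.Command{Use: "demo"}
	root.AddCommand(MakeReplayCommand(&Config{WALFile: "cs.wal"}, Logger{prefix: "[replay]"}))
	root.SetArgs([]string{"replay"})
	if err := root.Execute(); err != nil {
		fmt.Println(err)
	}
}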
@@ -5,58 +5,51 @@ import (
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
|
||||
"github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
tmos "github.com/tendermint/tendermint/libs/os"
|
||||
"github.com/tendermint/tendermint/privval"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
// MakeResetAllCommand constructs a command that removes the database of
|
||||
// the specified Tendermint core instance.
|
||||
func MakeResetAllCommand(conf *config.Config, logger log.Logger) *cobra.Command {
|
||||
var keyType string
|
||||
|
||||
cmd := &cobra.Command{
|
||||
Use: "unsafe-reset-all",
|
||||
Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state",
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return resetAll(conf.DBDir(), conf.PrivValidator.KeyFile(),
|
||||
conf.PrivValidator.StateFile(), logger, keyType)
|
||||
},
|
||||
}
|
||||
cmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
|
||||
"Key type to generate privval file with. Options: ed25519, secp256k1")
|
||||
|
||||
return cmd
|
||||
// ResetAllCmd removes the database of this Tendermint core
|
||||
// instance.
|
||||
var ResetAllCmd = &cobra.Command{
|
||||
Use: "unsafe-reset-all",
|
||||
Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state",
|
||||
RunE: resetAll,
|
||||
}
|
||||
|
||||
func MakeResetPrivateValidatorCommand(conf *config.Config, logger log.Logger) *cobra.Command {
|
||||
var keyType string
|
||||
var keepAddrBook bool
|
||||
|
||||
cmd := &cobra.Command{
|
||||
Use: "unsafe-reset-priv-validator",
|
||||
Short: "(unsafe) Reset this node's validator to genesis state",
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return resetFilePV(conf.PrivValidator.KeyFile(), conf.PrivValidator.StateFile(), logger, keyType)
|
||||
},
|
||||
}
|
||||
|
||||
cmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
|
||||
func init() {
|
||||
ResetAllCmd.Flags().BoolVar(&keepAddrBook, "keep-addr-book", false, "keep the address book intact")
|
||||
ResetPrivValidatorCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
|
||||
"Key type to generate privval file with. Options: ed25519, secp256k1")
|
||||
return cmd
|
||||
}
|
||||
|
||||
// ResetPrivValidatorCmd resets the private validator files.
|
||||
var ResetPrivValidatorCmd = &cobra.Command{
|
||||
Use: "unsafe-reset-priv-validator",
|
||||
Short: "(unsafe) Reset this node's validator to genesis state",
|
||||
RunE: resetPrivValidator,
|
||||
}
|
||||
|
||||
// XXX: this is totally unsafe.
|
||||
// it's only suitable for testnets.
|
||||
func resetAll(cmd *cobra.Command, args []string) error {
|
||||
return ResetAll(config.DBDir(), config.PrivValidator.KeyFile(),
|
||||
config.PrivValidator.StateFile(), logger)
|
||||
}
|
||||
|
||||
// XXX: this is totally unsafe.
|
||||
// it's only suitable for testnets.
|
||||
func resetPrivValidator(cmd *cobra.Command, args []string) error {
|
||||
return resetFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile(), logger)
|
||||
}
|
||||
|
||||
// resetAll removes address book files plus all data, and resets the privValidator data.
// ResetAll removes address book files plus all data, and resets the privValidator data.
|
||||
// Exported so other CLI tools can use it.
|
||||
func resetAll(dbDir, privValKeyFile, privValStateFile string, logger log.Logger, keyType string) error {
|
||||
func ResetAll(dbDir, privValKeyFile, privValStateFile string, logger log.Logger) error {
|
||||
if err := os.RemoveAll(dbDir); err == nil {
|
||||
logger.Info("Removed all blockchain history", "dir", dbDir)
|
||||
} else {
|
||||
@@ -66,10 +59,10 @@ func resetAll(dbDir, privValKeyFile, privValStateFile string, logger log.Logger,
|
||||
if err := tmos.EnsureDir(dbDir, 0700); err != nil {
|
||||
logger.Error("unable to recreate dbDir", "err", err)
|
||||
}
|
||||
return resetFilePV(privValKeyFile, privValStateFile, logger, keyType)
|
||||
return resetFilePV(privValKeyFile, privValStateFile, logger)
|
||||
}
|
||||
|
||||
func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger, keyType string) error {
|
||||
func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger) error {
|
||||
if _, err := os.Stat(privValKeyFile); err == nil {
|
||||
pv, err := privval.LoadFilePVEmptyState(privValKeyFile, privValStateFile)
|
||||
if err != nil {
|
||||
|
||||
@@ -5,15 +5,14 @@ import (

"github.com/spf13/cobra"

"github.com/tendermint/tendermint/config"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/state"
)

func MakeRollbackStateCommand(conf *config.Config) *cobra.Command {
return &cobra.Command{
Use: "rollback",
Short: "rollback tendermint state by one height",
Long: `
var RollbackStateCmd = &cobra.Command{
Use: "rollback",
Short: "rollback tendermint state by one height",
Long: `
A state rollback is performed to recover from an incorrect application state transition,
when Tendermint has persisted an incorrect app hash and is thus unable to make
progress. Rollback overwrites a state at height n with the state at height n - 1.
@@ -21,23 +20,21 @@ The application should also roll back to height n - 1. No blocks are removed, so
restarting Tendermint the transactions in block n will be re-executed against the
application.
`,
RunE: func(cmd *cobra.Command, args []string) error {
height, hash, err := RollbackState(conf)
if err != nil {
return fmt.Errorf("failed to rollback state: %w", err)
}

fmt.Printf("Rolled back state to height %d and hash %X", height, hash)
return nil
},
}
RunE: func(cmd *cobra.Command, args []string) error {
height, hash, err := RollbackState(config)
if err != nil {
return fmt.Errorf("failed to rollback state: %w", err)
}

fmt.Printf("Rolled back state to height %d and hash %X", height, hash)
return nil
},
}

// RollbackState takes the state at the current height n and overwrites it with the state
// at height n - 1. Note state here refers to tendermint state not application state.
// Returns the latest state height and app hash alongside an error if there was one.
func RollbackState(config *config.Config) (int64, []byte, error) {
func RollbackState(config *cfg.Config) (int64, []byte, error) {
// use the parsed config to load the block and state store
blockStore, stateStore, err := loadStateAndBlockStore(config)
if err != nil {
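As the command's long description above explains, rollback overwrites the Tendermint state at height n with the state at height n - 1 while keeping the blocks, so block n is re-executed when the node restarts. The integration test later in this compare drives RollbackState programmatically; here is a minimal sketch of that call sequence, where the home directory and the surrounding main function are illustrative assumptions.

package main

import (
	"fmt"
	"log"

	"github.com/tendermint/tendermint/cmd/tendermint/commands"
	"github.com/tendermint/tendermint/config"
)

func main() {
	// Build the node's config; DefaultConfig plus a hard-coded root dir is
	// only an illustrative stand-in for however the operator loads it.
	conf := config.DefaultConfig()
	conf.SetRoot("/tmp/tendermint-home") // hypothetical home directory

	// RollbackState rewinds the state by one height and reports the
	// resulting height and app hash (signature shown in this diff).
	height, appHash, err := commands.RollbackState(conf)
	if err != nil {
		log.Fatalf("failed to rollback state: %v", err)
	}
	fmt.Printf("Rolled back state to height %d and hash %X\n", height, appHash)
}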
@@ -19,12 +19,10 @@ func TestRollbackIntegration(t *testing.T) {
|
||||
dir := t.TempDir()
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
cfg, err := rpctest.CreateConfig(t, t.Name())
|
||||
cfg, err := rpctest.CreateConfig(t.Name())
|
||||
require.NoError(t, err)
|
||||
cfg.BaseConfig.DBBackend = "goleveldb"
|
||||
|
||||
app, err := e2e.NewApplication(e2e.DefaultConfig(dir))
|
||||
require.NoError(t, err)
|
||||
|
||||
t.Run("First run", func(t *testing.T) {
|
||||
ctx, cancel := context.WithCancel(ctx)
|
||||
@@ -32,29 +30,27 @@ func TestRollbackIntegration(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
node, _, err := rpctest.StartTendermint(ctx, cfg, app, rpctest.SuppressStdout)
|
||||
require.NoError(t, err)
|
||||
require.True(t, node.IsRunning())
|
||||
|
||||
time.Sleep(3 * time.Second)
|
||||
cancel()
|
||||
node.Wait()
|
||||
|
||||
require.False(t, node.IsRunning())
|
||||
})
|
||||
|
||||
t.Run("Rollback", func(t *testing.T) {
|
||||
time.Sleep(time.Second)
|
||||
require.NoError(t, app.Rollback())
|
||||
height, _, err = commands.RollbackState(cfg)
|
||||
require.NoError(t, err, "%d", height)
|
||||
})
|
||||
t.Run("Restart", func(t *testing.T) {
|
||||
require.True(t, height > 0, "%d", height)
|
||||
require.NoError(t, err)
|
||||
|
||||
})
|
||||
|
||||
t.Run("Restart", func(t *testing.T) {
|
||||
ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
|
||||
defer cancel()
|
||||
node2, _, err2 := rpctest.StartTendermint(ctx, cfg, app, rpctest.SuppressStdout)
|
||||
require.NoError(t, err2)
|
||||
|
||||
logger := log.NewNopLogger()
|
||||
logger := log.NewTestingLogger(t)
|
||||
|
||||
client, err := local.New(logger, node2.(local.NodeService))
|
||||
require.NoError(t, err)
|
||||
|
||||
@@ -2,62 +2,65 @@ package commands
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"time"
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
"github.com/spf13/viper"
|
||||
|
||||
"github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/libs/cli"
|
||||
cfg "github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
)
|
||||
|
||||
const ctxTimeout = 4 * time.Second
|
||||
var (
|
||||
config = cfg.DefaultConfig()
|
||||
logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
|
||||
ctxTimeout = 4 * time.Second
|
||||
)
|
||||
|
||||
func init() {
|
||||
registerFlagsRootCmd(RootCmd)
|
||||
}
|
||||
|
||||
func registerFlagsRootCmd(cmd *cobra.Command) {
|
||||
cmd.PersistentFlags().String("log-level", config.LogLevel, "log level")
|
||||
}
|
||||
|
||||
// ParseConfig retrieves the default environment configuration,
|
||||
// sets up the Tendermint root and ensures that the root exists
|
||||
func ParseConfig(conf *config.Config) (*config.Config, error) {
|
||||
if err := viper.Unmarshal(conf); err != nil {
|
||||
func ParseConfig() (*cfg.Config, error) {
|
||||
conf := cfg.DefaultConfig()
|
||||
err := viper.Unmarshal(conf)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
conf.SetRoot(conf.RootDir)
|
||||
|
||||
cfg.EnsureRoot(conf.RootDir)
|
||||
if err := conf.ValidateBasic(); err != nil {
|
||||
return nil, fmt.Errorf("error in config file: %w", err)
|
||||
}
|
||||
return conf, nil
|
||||
}
|
||||
|
||||
// RootCommand constructs the root command-line entry point for Tendermint core.
|
||||
func RootCommand(conf *config.Config, logger log.Logger) *cobra.Command {
|
||||
cmd := &cobra.Command{
|
||||
Use: "tendermint",
|
||||
Short: "BFT state machine replication for applications in any programming languages",
|
||||
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
|
||||
if cmd.Name() == VersionCmd.Name() {
|
||||
return nil
|
||||
}
|
||||
|
||||
if err := cli.BindFlagsLoadViper(cmd, args); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
pconf, err := ParseConfig(conf)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
*conf = *pconf
|
||||
config.EnsureRoot(conf.RootDir)
|
||||
|
||||
// RootCmd is the root command for Tendermint core.
|
||||
var RootCmd = &cobra.Command{
|
||||
Use: "tendermint",
|
||||
Short: "BFT state machine replication for applications in any programming languages",
|
||||
PersistentPreRunE: func(cmd *cobra.Command, args []string) (err error) {
|
||||
if cmd.Name() == VersionCmd.Name() {
|
||||
return nil
|
||||
},
|
||||
}
|
||||
cmd.PersistentFlags().StringP(cli.HomeFlag, "", os.ExpandEnv(filepath.Join("$HOME", config.DefaultTendermintDir)), "directory for config and data")
|
||||
cmd.PersistentFlags().Bool(cli.TraceFlag, false, "print out full stack trace on errors")
|
||||
cmd.PersistentFlags().String("log-level", conf.LogLevel, "log level")
|
||||
cobra.OnInitialize(func() { cli.InitEnv("TM") })
|
||||
return cmd
|
||||
}
|
||||
|
||||
config, err = ParseConfig()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
logger, err = log.NewDefaultLogger(config.LogFormat, config.LogLevel)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
logger = logger.With("module", "main")
|
||||
return nil
|
||||
},
|
||||
}
|
||||
|
||||
@@ -1,10 +1,10 @@
|
||||
package commands
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strconv"
|
||||
"testing"
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
@@ -14,54 +14,43 @@ import (
|
||||
|
||||
cfg "github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/libs/cli"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
tmos "github.com/tendermint/tendermint/libs/os"
|
||||
)
|
||||
|
||||
// writeConfigVals writes a toml file with the given values.
|
||||
// It returns an error if writing was impossible.
|
||||
func writeConfigVals(dir string, vals map[string]string) error {
|
||||
data := ""
|
||||
for k, v := range vals {
|
||||
data += fmt.Sprintf("%s = \"%s\"\n", k, v)
|
||||
}
|
||||
cfile := filepath.Join(dir, "config.toml")
|
||||
return os.WriteFile(cfile, []byte(data), 0600)
|
||||
}
|
||||
|
||||
// clearConfig clears env vars, the given root dir, and resets viper.
|
||||
func clearConfig(t *testing.T, dir string) *cfg.Config {
|
||||
func clearConfig(t *testing.T, dir string) {
|
||||
t.Helper()
|
||||
require.NoError(t, os.Unsetenv("TMHOME"))
|
||||
require.NoError(t, os.Unsetenv("TM_HOME"))
|
||||
require.NoError(t, os.RemoveAll(dir))
|
||||
|
||||
viper.Reset()
|
||||
conf := cfg.DefaultConfig()
|
||||
conf.RootDir = dir
|
||||
return conf
|
||||
config = cfg.DefaultConfig()
|
||||
}
|
||||
|
||||
// prepare new rootCmd
|
||||
func testRootCmd(conf *cfg.Config) *cobra.Command {
|
||||
logger := log.NewNopLogger()
|
||||
cmd := RootCommand(conf, logger)
|
||||
cmd.RunE = func(cmd *cobra.Command, args []string) error { return nil }
|
||||
|
||||
func testRootCmd() *cobra.Command {
|
||||
rootCmd := &cobra.Command{
|
||||
Use: RootCmd.Use,
|
||||
PersistentPreRunE: RootCmd.PersistentPreRunE,
|
||||
Run: func(cmd *cobra.Command, args []string) {},
|
||||
}
|
||||
registerFlagsRootCmd(rootCmd)
|
||||
var l string
|
||||
cmd.PersistentFlags().String("log", l, "Log")
|
||||
return cmd
|
||||
rootCmd.PersistentFlags().String("log", l, "Log")
|
||||
return rootCmd
|
||||
}
|
||||
|
||||
func testSetup(ctx context.Context, t *testing.T, conf *cfg.Config, args []string, env map[string]string) error {
|
||||
func testSetup(t *testing.T, rootDir string, args []string, env map[string]string) error {
|
||||
t.Helper()
|
||||
clearConfig(t, rootDir)
|
||||
|
||||
cmd := testRootCmd(conf)
|
||||
viper.Set(cli.HomeFlag, conf.RootDir)
|
||||
rootCmd := testRootCmd()
|
||||
cmd := cli.PrepareBaseCmd(rootCmd, "TM", rootDir)
|
||||
|
||||
// run with the args and env
|
||||
args = append([]string{cmd.Use}, args...)
|
||||
return cli.RunWithArgs(ctx, cmd, args, env)
|
||||
args = append([]string{rootCmd.Use}, args...)
|
||||
return cli.RunWithArgs(cmd, args, env)
|
||||
}
|
||||
|
||||
func TestRootHome(t *testing.T) {
|
||||
@@ -77,29 +66,23 @@ func TestRootHome(t *testing.T) {
|
||||
{nil, map[string]string{"TMHOME": newRoot}, newRoot},
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
|
||||
for i, tc := range cases {
|
||||
t.Run(fmt.Sprint(i), func(t *testing.T) {
|
||||
conf := clearConfig(t, tc.root)
|
||||
idxString := strconv.Itoa(i)
|
||||
|
||||
err := testSetup(ctx, t, conf, tc.args, tc.env)
|
||||
require.NoError(t, err)
|
||||
err := testSetup(t, defaultRoot, tc.args, tc.env)
|
||||
require.NoError(t, err, idxString)
|
||||
|
||||
require.Equal(t, tc.root, conf.RootDir)
|
||||
require.Equal(t, tc.root, conf.P2P.RootDir)
|
||||
require.Equal(t, tc.root, conf.Consensus.RootDir)
|
||||
require.Equal(t, tc.root, conf.Mempool.RootDir)
|
||||
})
|
||||
assert.Equal(t, tc.root, config.RootDir, idxString)
|
||||
assert.Equal(t, tc.root, config.P2P.RootDir, idxString)
|
||||
assert.Equal(t, tc.root, config.Consensus.RootDir, idxString)
|
||||
assert.Equal(t, tc.root, config.Mempool.RootDir, idxString)
|
||||
}
|
||||
}
|
||||
|
||||
func TestRootFlagsEnv(t *testing.T) {
|
||||
|
||||
// defaults
|
||||
defaults := cfg.DefaultConfig()
|
||||
defaultDir := t.TempDir()
|
||||
|
||||
defaultLogLvl := defaults.LogLevel
|
||||
|
||||
cases := []struct {
|
||||
@@ -114,25 +97,18 @@ func TestRootFlagsEnv(t *testing.T) {
|
||||
{nil, map[string]string{"TM_LOG_LEVEL": "debug"}, "debug"}, // right env
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
|
||||
defaultRoot := t.TempDir()
|
||||
for i, tc := range cases {
|
||||
t.Run(fmt.Sprint(i), func(t *testing.T) {
|
||||
conf := clearConfig(t, defaultDir)
|
||||
idxString := strconv.Itoa(i)
|
||||
|
||||
err := testSetup(ctx, t, conf, tc.args, tc.env)
|
||||
require.NoError(t, err)
|
||||
|
||||
assert.Equal(t, tc.logLevel, conf.LogLevel)
|
||||
})
|
||||
err := testSetup(t, defaultRoot, tc.args, tc.env)
|
||||
require.NoError(t, err, idxString)
|
||||
|
||||
assert.Equal(t, tc.logLevel, config.LogLevel, idxString)
|
||||
}
|
||||
}
|
||||
|
||||
func TestRootConfig(t *testing.T) {
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
|
||||
// write non-default config
|
||||
nonDefaultLogLvl := "debug"
|
||||
@@ -141,8 +117,9 @@ func TestRootConfig(t *testing.T) {
|
||||
}
|
||||
|
||||
cases := []struct {
|
||||
args []string
|
||||
env map[string]string
|
||||
args []string
|
||||
env map[string]string
|
||||
|
||||
logLvl string
|
||||
}{
|
||||
{nil, nil, nonDefaultLogLvl}, // should load config
|
||||
@@ -151,30 +128,29 @@ func TestRootConfig(t *testing.T) {
|
||||
}
|
||||
|
||||
for i, tc := range cases {
|
||||
t.Run(fmt.Sprint(i), func(t *testing.T) {
|
||||
defaultRoot := t.TempDir()
|
||||
conf := clearConfig(t, defaultRoot)
|
||||
conf.LogLevel = tc.logLvl
|
||||
defaultRoot := t.TempDir()
|
||||
idxString := strconv.Itoa(i)
|
||||
clearConfig(t, defaultRoot)
|
||||
|
||||
// XXX: path must match cfg.defaultConfigPath
|
||||
configFilePath := filepath.Join(defaultRoot, "config")
|
||||
err := tmos.EnsureDir(configFilePath, 0700)
|
||||
require.NoError(t, err)
|
||||
// XXX: path must match cfg.defaultConfigPath
|
||||
configFilePath := filepath.Join(defaultRoot, "config")
|
||||
err := tmos.EnsureDir(configFilePath, 0700)
|
||||
require.NoError(t, err)
|
||||
|
||||
// write the non-defaults to a different path
|
||||
// TODO: support writing sub configs so we can test that too
|
||||
err = writeConfigVals(configFilePath, cvals)
|
||||
require.NoError(t, err)
|
||||
// write the non-defaults to a different path
|
||||
// TODO: support writing sub configs so we can test that too
|
||||
err = WriteConfigVals(configFilePath, cvals)
|
||||
require.NoError(t, err)
|
||||
|
||||
cmd := testRootCmd(conf)
|
||||
rootCmd := testRootCmd()
|
||||
cmd := cli.PrepareBaseCmd(rootCmd, "TM", defaultRoot)
|
||||
|
||||
// run with the args and env
|
||||
tc.args = append([]string{cmd.Use}, tc.args...)
|
||||
err = cli.RunWithArgs(ctx, cmd, tc.args, tc.env)
|
||||
require.NoError(t, err)
|
||||
// run with the args and env
|
||||
tc.args = append([]string{rootCmd.Use}, tc.args...)
|
||||
err = cli.RunWithArgs(cmd, tc.args, tc.env)
|
||||
require.NoError(t, err, idxString)
|
||||
|
||||
require.Equal(t, tc.logLvl, conf.LogLevel)
|
||||
})
|
||||
assert.Equal(t, tc.logLvl, config.LogLevel, idxString)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -12,26 +12,25 @@ import (
|
||||
"github.com/spf13/cobra"
|
||||
|
||||
cfg "github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
)
|
||||
|
||||
var (
|
||||
genesisHash []byte
|
||||
)
|
||||
|
||||
// AddNodeFlags exposes some common configuration options from conf in the flag
|
||||
// set for cmd. This is a convenience for commands embedding a Tendermint node.
|
||||
func AddNodeFlags(cmd *cobra.Command, conf *cfg.Config) {
|
||||
// AddNodeFlags exposes some common configuration options on the command-line
|
||||
// These are exposed for convenience of commands embedding a tendermint node
|
||||
func AddNodeFlags(cmd *cobra.Command) {
|
||||
// bind flags
|
||||
cmd.Flags().String("moniker", conf.Moniker, "node name")
|
||||
cmd.Flags().String("moniker", config.Moniker, "node name")
|
||||
|
||||
// mode flags
|
||||
cmd.Flags().String("mode", conf.Mode, "node mode (full | validator | seed)")
|
||||
cmd.Flags().String("mode", config.Mode, "node mode (full | validator | seed)")
|
||||
|
||||
// priv val flags
|
||||
cmd.Flags().String(
|
||||
"priv-validator-laddr",
|
||||
conf.PrivValidator.ListenAddr,
|
||||
config.PrivValidator.ListenAddr,
|
||||
"socket address to listen on for connections from external priv-validator process")
|
||||
|
||||
// node flags
|
||||
@@ -41,74 +40,74 @@ func AddNodeFlags(cmd *cobra.Command, conf *cfg.Config) {
|
||||
"genesis-hash",
|
||||
[]byte{},
|
||||
"optional SHA-256 hash of the genesis file")
|
||||
cmd.Flags().Int64("consensus.double-sign-check-height", conf.Consensus.DoubleSignCheckHeight,
|
||||
cmd.Flags().Int64("consensus.double-sign-check-height", config.Consensus.DoubleSignCheckHeight,
|
||||
"how many blocks to look back to check existence of the node's "+
|
||||
"consensus votes before joining consensus")
|
||||
|
||||
// abci flags
|
||||
cmd.Flags().String(
|
||||
"proxy-app",
|
||||
conf.ProxyApp,
|
||||
config.ProxyApp,
|
||||
"proxy app address, or one of: 'kvstore',"+
|
||||
" 'persistent_kvstore', 'e2e' or 'noop' for local testing.")
|
||||
cmd.Flags().String("abci", conf.ABCI, "specify abci transport (socket | grpc)")
|
||||
cmd.Flags().String("abci", config.ABCI, "specify abci transport (socket | grpc)")
|
||||
|
||||
// rpc flags
|
||||
cmd.Flags().String("rpc.laddr", conf.RPC.ListenAddress, "RPC listen address. Port required")
|
||||
cmd.Flags().Bool("rpc.unsafe", conf.RPC.Unsafe, "enabled unsafe rpc methods")
|
||||
cmd.Flags().String("rpc.pprof-laddr", conf.RPC.PprofListenAddress, "pprof listen address (https://golang.org/pkg/net/http/pprof)")
|
||||
cmd.Flags().String("rpc.laddr", config.RPC.ListenAddress, "RPC listen address. Port required")
|
||||
cmd.Flags().Bool("rpc.unsafe", config.RPC.Unsafe, "enabled unsafe rpc methods")
|
||||
cmd.Flags().String("rpc.pprof-laddr", config.RPC.PprofListenAddress, "pprof listen address (https://golang.org/pkg/net/http/pprof)")
|
||||
|
||||
// p2p flags
|
||||
cmd.Flags().String(
|
||||
"p2p.laddr",
|
||||
conf.P2P.ListenAddress,
|
||||
config.P2P.ListenAddress,
|
||||
"node listen address. (0.0.0.0:0 means any interface, any port)")
|
||||
cmd.Flags().String("p2p.seeds", conf.P2P.Seeds, "comma-delimited ID@host:port seed nodes") //nolint: staticcheck
|
||||
cmd.Flags().String("p2p.persistent-peers", conf.P2P.PersistentPeers, "comma-delimited ID@host:port persistent peers")
|
||||
cmd.Flags().Bool("p2p.upnp", conf.P2P.UPNP, "enable/disable UPNP port forwarding")
|
||||
cmd.Flags().Bool("p2p.pex", conf.P2P.PexReactor, "enable/disable Peer-Exchange")
|
||||
cmd.Flags().String("p2p.private-peer-ids", conf.P2P.PrivatePeerIDs, "comma-delimited private peer IDs")
|
||||
cmd.Flags().String("p2p.seeds", config.P2P.Seeds, "comma-delimited ID@host:port seed nodes") //nolint: staticcheck
|
||||
cmd.Flags().String("p2p.persistent-peers", config.P2P.PersistentPeers, "comma-delimited ID@host:port persistent peers")
|
||||
cmd.Flags().Bool("p2p.upnp", config.P2P.UPNP, "enable/disable UPNP port forwarding")
|
||||
cmd.Flags().Bool("p2p.pex", config.P2P.PexReactor, "enable/disable Peer-Exchange")
|
||||
cmd.Flags().String("p2p.private-peer-ids", config.P2P.PrivatePeerIDs, "comma-delimited private peer IDs")
|
||||
|
||||
// consensus flags
|
||||
cmd.Flags().Bool(
|
||||
"consensus.create-empty-blocks",
|
||||
conf.Consensus.CreateEmptyBlocks,
|
||||
config.Consensus.CreateEmptyBlocks,
|
||||
"set this to false to only produce blocks when there are txs or when the AppHash changes")
|
||||
cmd.Flags().String(
|
||||
"consensus.create-empty-blocks-interval",
|
||||
conf.Consensus.CreateEmptyBlocksInterval.String(),
|
||||
config.Consensus.CreateEmptyBlocksInterval.String(),
|
||||
"the possible interval between empty blocks")
|
||||
|
||||
addDBFlags(cmd, conf)
|
||||
addDBFlags(cmd)
|
||||
}
|
||||
|
||||
func addDBFlags(cmd *cobra.Command, conf *cfg.Config) {
|
||||
func addDBFlags(cmd *cobra.Command) {
|
||||
cmd.Flags().String(
|
||||
"db-backend",
|
||||
conf.DBBackend,
|
||||
config.DBBackend,
|
||||
"database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb")
|
||||
cmd.Flags().String(
|
||||
"db-dir",
|
||||
conf.DBPath,
|
||||
config.DBPath,
|
||||
"database directory")
|
||||
}
|
||||
|
||||
// NewRunNodeCmd returns the command that allows the CLI to start a node.
|
||||
// It can be used with a custom PrivValidator and in-process ABCI application.
|
||||
func NewRunNodeCmd(nodeProvider cfg.ServiceProvider, conf *cfg.Config, logger log.Logger) *cobra.Command {
|
||||
func NewRunNodeCmd(nodeProvider cfg.ServiceProvider) *cobra.Command {
|
||||
cmd := &cobra.Command{
|
||||
Use: "start",
|
||||
Aliases: []string{"node", "run"},
|
||||
Short: "Run the tendermint node",
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
if err := checkGenesisHash(conf); err != nil {
|
||||
if err := checkGenesisHash(config); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
|
||||
defer cancel()
|
||||
|
||||
n, err := nodeProvider(ctx, conf, logger)
|
||||
n, err := nodeProvider(ctx, config, logger)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create node: %w", err)
|
||||
}
|
||||
@@ -117,14 +116,14 @@ func NewRunNodeCmd(nodeProvider cfg.ServiceProvider, conf *cfg.Config, logger lo
|
||||
return fmt.Errorf("failed to start node: %w", err)
|
||||
}
|
||||
|
||||
logger.Info("started node", "chain", conf.ChainID())
|
||||
logger.Info("started node", "node", n.String())
|
||||
|
||||
<-ctx.Done()
|
||||
return nil
|
||||
},
|
||||
}
|
||||
|
||||
AddNodeFlags(cmd, conf)
|
||||
AddNodeFlags(cmd)
|
||||
return cmd
|
||||
}
|
||||
|
||||
|
||||
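The run_node.go hunk above changes AddNodeFlags to take the config explicitly, which also makes it straightforward for an application that embeds a Tendermint node to reuse the same flag set under its own command. A sketch of that use against the two-argument form; the custom command, its name, and the printed output are illustrative assumptions.

package main

import (
	"fmt"

	"github.com/spf13/cobra"

	"github.com/tendermint/tendermint/cmd/tendermint/commands"
	"github.com/tendermint/tendermint/config"
)

func main() {
	conf := config.DefaultConfig()

	// A custom command from an embedding application; AddNodeFlags binds the
	// usual node flags (moniker, mode, rpc.laddr, p2p.laddr, ...) whose
	// defaults come from the injected config rather than a package global.
	runCmd := &cobra.Command{
		Use:   "run-my-app",
		Short: "Run the application with an embedded Tendermint node",
		RunE: func(cmd *cobra.Command, args []string) error {
			moniker, _ := cmd.Flags().GetString("moniker")
			fmt.Println("would start a node with moniker:", moniker)
			return nil
		},
	}
	commands.AddNodeFlags(runCmd, conf)

	runCmd.SetArgs([]string{"--moniker", "demo-node"})
	if err := runCmd.Execute(); err != nil {
		fmt.Println(err)
	}
}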
@@ -4,23 +4,21 @@ import (
"fmt"

"github.com/spf13/cobra"

"github.com/tendermint/tendermint/config"
)

// MakeShowNodeIDCommand constructs a command to dump the node ID to stdout.
func MakeShowNodeIDCommand(conf *config.Config) *cobra.Command {
return &cobra.Command{
Use: "show-node-id",
Short: "Show this node's ID",
RunE: func(cmd *cobra.Command, args []string) error {
nodeKeyID, err := conf.LoadNodeKeyID()
if err != nil {
return err
}

fmt.Println(nodeKeyID)
return nil
},
}
// ShowNodeIDCmd dumps node's ID to the standard output.
var ShowNodeIDCmd = &cobra.Command{
Use: "show-node-id",
Short: "Show this node's ID",
RunE: showNodeID,
}

func showNodeID(cmd *cobra.Command, args []string) error {
nodeKeyID, err := config.LoadNodeKeyID()
if err != nil {
return err
}

fmt.Println(nodeKeyID)
return nil
}
@@ -6,78 +6,75 @@ import (
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
|
||||
"github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/internal/jsontypes"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
tmjson "github.com/tendermint/tendermint/libs/json"
|
||||
tmnet "github.com/tendermint/tendermint/libs/net"
|
||||
tmos "github.com/tendermint/tendermint/libs/os"
|
||||
"github.com/tendermint/tendermint/privval"
|
||||
tmgrpc "github.com/tendermint/tendermint/privval/grpc"
|
||||
)
|
||||
|
||||
// MakeShowValidatorCommand constructs a command to show the validator info.
|
||||
func MakeShowValidatorCommand(conf *config.Config, logger log.Logger) *cobra.Command {
|
||||
return &cobra.Command{
|
||||
Use: "show-validator",
|
||||
Short: "Show this node's validator info",
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
var (
|
||||
pubKey crypto.PubKey
|
||||
err error
|
||||
bctx = cmd.Context()
|
||||
)
|
||||
//TODO: remove once gRPC is the only supported protocol
|
||||
protocol, _ := tmnet.ProtocolAndAddress(conf.PrivValidator.ListenAddr)
|
||||
switch protocol {
|
||||
case "grpc":
|
||||
pvsc, err := tmgrpc.DialRemoteSigner(
|
||||
bctx,
|
||||
conf.PrivValidator,
|
||||
conf.ChainID(),
|
||||
logger,
|
||||
conf.Instrumentation.Prometheus,
|
||||
)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't connect to remote validator %w", err)
|
||||
}
|
||||
// ShowValidatorCmd adds capabilities for showing the validator info.
|
||||
var ShowValidatorCmd = &cobra.Command{
|
||||
Use: "show-validator",
|
||||
Short: "Show this node's validator info",
|
||||
RunE: showValidator,
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(bctx, ctxTimeout)
|
||||
defer cancel()
|
||||
func showValidator(cmd *cobra.Command, args []string) error {
|
||||
var (
|
||||
pubKey crypto.PubKey
|
||||
err error
|
||||
bctx = cmd.Context()
|
||||
)
|
||||
//TODO: remove once gRPC is the only supported protocol
|
||||
protocol, _ := tmnet.ProtocolAndAddress(config.PrivValidator.ListenAddr)
|
||||
switch protocol {
|
||||
case "grpc":
|
||||
pvsc, err := tmgrpc.DialRemoteSigner(
|
||||
bctx,
|
||||
config.PrivValidator,
|
||||
config.ChainID(),
|
||||
logger,
|
||||
config.Instrumentation.Prometheus,
|
||||
)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't connect to remote validator %w", err)
|
||||
}
|
||||
|
||||
pubKey, err = pvsc.GetPubKey(ctx)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't get pubkey: %w", err)
|
||||
}
|
||||
default:
|
||||
ctx, cancel := context.WithTimeout(bctx, ctxTimeout)
|
||||
defer cancel()
|
||||
|
||||
keyFilePath := conf.PrivValidator.KeyFile()
|
||||
if !tmos.FileExists(keyFilePath) {
|
||||
return fmt.Errorf("private validator file %s does not exist", keyFilePath)
|
||||
}
|
||||
pubKey, err = pvsc.GetPubKey(ctx)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't get pubkey: %w", err)
|
||||
}
|
||||
default:
|
||||
|
||||
pv, err := privval.LoadFilePV(keyFilePath, conf.PrivValidator.StateFile())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
keyFilePath := config.PrivValidator.KeyFile()
|
||||
if !tmos.FileExists(keyFilePath) {
|
||||
return fmt.Errorf("private validator file %s does not exist", keyFilePath)
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(bctx, ctxTimeout)
|
||||
defer cancel()
|
||||
pv, err := privval.LoadFilePV(keyFilePath, config.PrivValidator.StateFile())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
pubKey, err = pv.GetPubKey(ctx)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't get pubkey: %w", err)
|
||||
}
|
||||
}
|
||||
ctx, cancel := context.WithTimeout(bctx, ctxTimeout)
|
||||
defer cancel()
|
||||
|
||||
bz, err := jsontypes.Marshal(pubKey)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to marshal private validator pubkey: %w", err)
|
||||
}
|
||||
|
||||
fmt.Println(string(bz))
|
||||
return nil
|
||||
},
|
||||
pubKey, err = pv.GetPubKey(ctx)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't get pubkey: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
bz, err := tmjson.Marshal(pubKey)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to marshal private validator pubkey: %w", err)
|
||||
}
|
||||
|
||||
fmt.Println(string(bz))
|
||||
return nil
|
||||
}
|
||||
|
||||
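The show-validator change keeps the same two code paths: when the privval listen address uses the grpc scheme it dials the remote signer, otherwise it falls back to the file-based private validator. The sketch below only illustrates that scheme dispatch; protocolAndAddress is a simplified local stand-in for tmnet.ProtocolAndAddress, not the real helper.

package main

import (
	"fmt"
	"strings"
)

// protocolAndAddress is a simplified stand-in: it splits "scheme://address"
// and defaults the scheme to "tcp" when none is given.
func protocolAndAddress(listenAddr string) (string, string) {
	parts := strings.SplitN(listenAddr, "://", 2)
	if len(parts) == 2 {
		return parts[0], parts[1]
	}
	return "tcp", listenAddr
}

func pubKeySource(listenAddr string) string {
	protocol, _ := protocolAndAddress(listenAddr)
	switch protocol {
	case "grpc":
		// show-validator would dial the remote signer here.
		return "remote signer over gRPC"
	default:
		// otherwise it reads priv_validator_key.json from disk.
		return "local file private validator"
	}
}

func main() {
	fmt.Println(pubKeySource("grpc://127.0.0.1:26659"))
	fmt.Println(pubKeySource("tcp://127.0.0.1:26659"))
	fmt.Println(pubKeySource("127.0.0.1:26659"))
}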
@@ -13,23 +13,76 @@ import (
|
||||
|
||||
cfg "github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/libs/bytes"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
tmrand "github.com/tendermint/tendermint/libs/rand"
|
||||
tmtime "github.com/tendermint/tendermint/libs/time"
|
||||
"github.com/tendermint/tendermint/privval"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
var (
|
||||
nValidators int
|
||||
nNonValidators int
|
||||
initialHeight int64
|
||||
configFile string
|
||||
outputDir string
|
||||
nodeDirPrefix string
|
||||
|
||||
populatePersistentPeers bool
|
||||
hostnamePrefix string
|
||||
hostnameSuffix string
|
||||
startingIPAddress string
|
||||
hostnames []string
|
||||
p2pPort int
|
||||
randomMonikers bool
|
||||
)
|
||||
|
||||
const (
|
||||
nodeDirPerm = 0755
|
||||
)
|
||||
|
||||
// MakeTestnetFilesCommand constructs a command to generate testnet config files.
|
||||
func MakeTestnetFilesCommand(conf *cfg.Config, logger log.Logger) *cobra.Command {
|
||||
cmd := &cobra.Command{
|
||||
Use: "testnet",
|
||||
Short: "Initialize files for a Tendermint testnet",
|
||||
Long: `testnet will create "v" + "n" number of directories and populate each with
|
||||
func init() {
|
||||
TestnetFilesCmd.Flags().IntVar(&nValidators, "v", 4,
|
||||
"number of validators to initialize the testnet with")
|
||||
TestnetFilesCmd.Flags().StringVar(&configFile, "config", "",
|
||||
"config file to use (note some options may be overwritten)")
|
||||
TestnetFilesCmd.Flags().IntVar(&nNonValidators, "n", 0,
|
||||
"number of non-validators to initialize the testnet with")
|
||||
TestnetFilesCmd.Flags().StringVar(&outputDir, "o", "./mytestnet",
|
||||
"directory to store initialization data for the testnet")
|
||||
TestnetFilesCmd.Flags().StringVar(&nodeDirPrefix, "node-dir-prefix", "node",
|
||||
"prefix the directory name for each node with (node results in node0, node1, ...)")
|
||||
TestnetFilesCmd.Flags().Int64Var(&initialHeight, "initial-height", 0,
|
||||
"initial height of the first block")
|
||||
|
||||
TestnetFilesCmd.Flags().BoolVar(&populatePersistentPeers, "populate-persistent-peers", true,
|
||||
"update config of each node with the list of persistent peers build using either"+
|
||||
" hostname-prefix or"+
|
||||
" starting-ip-address")
|
||||
TestnetFilesCmd.Flags().StringVar(&hostnamePrefix, "hostname-prefix", "node",
|
||||
"hostname prefix (\"node\" results in persistent peers list ID0@node0:26656, ID1@node1:26656, ...)")
|
||||
TestnetFilesCmd.Flags().StringVar(&hostnameSuffix, "hostname-suffix", "",
|
||||
"hostname suffix ("+
|
||||
"\".xyz.com\""+
|
||||
" results in persistent peers list ID0@node0.xyz.com:26656, ID1@node1.xyz.com:26656, ...)")
|
||||
TestnetFilesCmd.Flags().StringVar(&startingIPAddress, "starting-ip-address", "",
|
||||
"starting IP address ("+
|
||||
"\"192.168.0.1\""+
|
||||
" results in persistent peers list ID0@192.168.0.1:26656, ID1@192.168.0.2:26656, ...)")
|
||||
TestnetFilesCmd.Flags().StringArrayVar(&hostnames, "hostname", []string{},
|
||||
"manually override all hostnames of validators and non-validators (use --hostname multiple times for multiple hosts)")
|
||||
TestnetFilesCmd.Flags().IntVar(&p2pPort, "p2p-port", 26656,
|
||||
"P2P Port")
|
||||
TestnetFilesCmd.Flags().BoolVar(&randomMonikers, "random-monikers", false,
|
||||
"randomize the moniker for each generated node")
|
||||
TestnetFilesCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
|
||||
"Key type to generate privval file with. Options: ed25519, secp256k1")
|
||||
}
|
||||
|
||||
// TestnetFilesCmd allows initialisation of files for a Tendermint testnet.
|
||||
var TestnetFilesCmd = &cobra.Command{
|
||||
Use: "testnet",
|
||||
Short: "Initialize files for a Tendermint testnet",
|
||||
Long: `testnet will create "v" + "n" number of directories and populate each with
|
||||
necessary files (private validator, genesis, config, etc.).
|
||||
|
||||
Note, strict routability for addresses is turned off in the config file.
|
||||
@@ -40,292 +93,205 @@ Example:
|
||||
|
||||
tendermint testnet --v 4 --o ./output --populate-persistent-peers --starting-ip-address 192.168.10.2
|
||||
`,
|
||||
}
|
||||
var (
|
||||
nValidators int
|
||||
nNonValidators int
|
||||
initialHeight int64
|
||||
configFile string
|
||||
outputDir string
|
||||
nodeDirPrefix string
|
||||
RunE: testnetFiles,
|
||||
}
|
||||
|
||||
populatePersistentPeers bool
|
||||
hostnamePrefix string
|
||||
hostnameSuffix string
|
||||
startingIPAddress string
|
||||
hostnames []string
|
||||
p2pPort int
|
||||
randomMonikers bool
|
||||
keyType string
|
||||
)
|
||||
|
||||
cmd.Flags().IntVar(&nValidators, "v", 4,
|
||||
"number of validators to initialize the testnet with")
|
||||
cmd.Flags().StringVar(&configFile, "config", "",
|
||||
"config file to use (note some options may be overwritten)")
|
||||
cmd.Flags().IntVar(&nNonValidators, "n", 0,
|
||||
"number of non-validators to initialize the testnet with")
|
||||
cmd.Flags().StringVar(&outputDir, "o", "./mytestnet",
|
||||
"directory to store initialization data for the testnet")
|
||||
cmd.Flags().StringVar(&nodeDirPrefix, "node-dir-prefix", "node",
|
||||
"prefix the directory name for each node with (node results in node0, node1, ...)")
|
||||
cmd.Flags().Int64Var(&initialHeight, "initial-height", 0,
|
||||
"initial height of the first block")
|
||||
|
||||
cmd.Flags().BoolVar(&populatePersistentPeers, "populate-persistent-peers", true,
|
||||
"update config of each node with the list of persistent peers build using either"+
|
||||
" hostname-prefix or"+
|
||||
" starting-ip-address")
|
||||
cmd.Flags().StringVar(&hostnamePrefix, "hostname-prefix", "node",
|
||||
"hostname prefix (\"node\" results in persistent peers list ID0@node0:26656, ID1@node1:26656, ...)")
|
||||
cmd.Flags().StringVar(&hostnameSuffix, "hostname-suffix", "",
|
||||
"hostname suffix ("+
|
||||
"\".xyz.com\""+
|
||||
" results in persistent peers list ID0@node0.xyz.com:26656, ID1@node1.xyz.com:26656, ...)")
|
||||
cmd.Flags().StringVar(&startingIPAddress, "starting-ip-address", "",
|
||||
"starting IP address ("+
|
||||
"\"192.168.0.1\""+
|
||||
" results in persistent peers list ID0@192.168.0.1:26656, ID1@192.168.0.2:26656, ...)")
|
||||
cmd.Flags().StringArrayVar(&hostnames, "hostname", []string{},
|
||||
"manually override all hostnames of validators and non-validators (use --hostname multiple times for multiple hosts)")
|
||||
cmd.Flags().IntVar(&p2pPort, "p2p-port", 26656,
|
||||
"P2P Port")
|
||||
cmd.Flags().BoolVar(&randomMonikers, "random-monikers", false,
|
||||
"randomize the moniker for each generated node")
|
||||
cmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
|
||||
"Key type to generate privval file with. Options: ed25519, secp256k1")
|
||||
|
||||
cmd.RunE = func(cmd *cobra.Command, args []string) error {
|
||||
if len(hostnames) > 0 && len(hostnames) != (nValidators+nNonValidators) {
|
||||
return fmt.Errorf(
|
||||
"testnet needs precisely %d hostnames (number of validators plus non-validators) if --hostname parameter is used",
|
||||
nValidators+nNonValidators,
|
||||
)
|
||||
}
|
||||
|
||||
// set mode to validator for testnet
|
||||
config := cfg.DefaultValidatorConfig()
|
||||
|
||||
// overwrite default config if set and valid
|
||||
if configFile != "" {
|
||||
viper.SetConfigFile(configFile)
|
||||
if err := viper.ReadInConfig(); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := viper.Unmarshal(config); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := config.ValidateBasic(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
genVals := make([]types.GenesisValidator, nValidators)
|
||||
ctx := cmd.Context()
|
||||
for i := 0; i < nValidators; i++ {
|
||||
nodeDirName := fmt.Sprintf("%s%d", nodeDirPrefix, i)
|
||||
nodeDir := filepath.Join(outputDir, nodeDirName)
|
||||
config.SetRoot(nodeDir)
|
||||
|
||||
err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
|
||||
if err := initFilesWithConfig(ctx, config, logger, keyType); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
pvKeyFile := filepath.Join(nodeDir, config.PrivValidator.Key)
|
||||
pvStateFile := filepath.Join(nodeDir, config.PrivValidator.State)
|
||||
pv, err := privval.LoadFilePV(pvKeyFile, pvStateFile)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(ctx, ctxTimeout)
|
||||
defer cancel()
|
||||
|
||||
pubKey, err := pv.GetPubKey(ctx)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't get pubkey: %w", err)
|
||||
}
|
||||
genVals[i] = types.GenesisValidator{
|
||||
Address: pubKey.Address(),
|
||||
PubKey: pubKey,
|
||||
Power: 1,
|
||||
Name: nodeDirName,
|
||||
}
|
||||
}
|
||||
|
||||
for i := 0; i < nNonValidators; i++ {
|
||||
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i+nValidators))
|
||||
config.SetRoot(nodeDir)
|
||||
|
||||
err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
|
||||
err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
|
||||
if err := initFilesWithConfig(ctx, conf, logger, keyType); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Generate genesis doc from generated validators
|
||||
genDoc := &types.GenesisDoc{
|
||||
ChainID: "chain-" + tmrand.Str(6),
|
||||
GenesisTime: tmtime.Now(),
|
||||
InitialHeight: initialHeight,
|
||||
Validators: genVals,
|
||||
ConsensusParams: types.DefaultConsensusParams(),
|
||||
}
|
||||
if keyType == "secp256k1" {
|
||||
genDoc.ConsensusParams.Validator = types.ValidatorParams{
|
||||
PubKeyTypes: []string{types.ABCIPubKeyTypeSecp256k1},
|
||||
}
|
||||
}
|
||||
|
||||
// Write genesis file.
|
||||
for i := 0; i < nValidators+nNonValidators; i++ {
|
||||
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
|
||||
if err := genDoc.SaveAs(filepath.Join(nodeDir, config.BaseConfig.Genesis)); err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Gather persistent peer addresses.
|
||||
var (
|
||||
persistentPeers = make([]string, 0)
|
||||
err error
|
||||
func testnetFiles(cmd *cobra.Command, args []string) error {
|
||||
if len(hostnames) > 0 && len(hostnames) != (nValidators+nNonValidators) {
|
||||
return fmt.Errorf(
|
||||
"testnet needs precisely %d hostnames (number of validators plus non-validators) if --hostname parameter is used",
|
||||
nValidators+nNonValidators,
|
||||
)
|
||||
tpargs := testnetPeerArgs{
|
||||
numValidators: nValidators,
|
||||
numNonValidators: nNonValidators,
|
||||
peerToPeerPort: p2pPort,
|
||||
nodeDirPrefix: nodeDirPrefix,
|
||||
outputDir: outputDir,
|
||||
hostnames: hostnames,
|
||||
startingIPAddr: startingIPAddress,
|
||||
hostnamePrefix: hostnamePrefix,
|
||||
hostnameSuffix: hostnameSuffix,
|
||||
randomMonikers: randomMonikers,
|
||||
}
|
||||
|
||||
// set mode to validator for testnet
|
||||
config := cfg.DefaultValidatorConfig()
|
||||
|
||||
// overwrite default config if set and valid
|
||||
if configFile != "" {
|
||||
viper.SetConfigFile(configFile)
|
||||
if err := viper.ReadInConfig(); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := viper.Unmarshal(config); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := config.ValidateBasic(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
genVals := make([]types.GenesisValidator, nValidators)
|
||||
ctx := cmd.Context()
|
||||
for i := 0; i < nValidators; i++ {
|
||||
nodeDirName := fmt.Sprintf("%s%d", nodeDirPrefix, i)
|
||||
nodeDir := filepath.Join(outputDir, nodeDirName)
|
||||
config.SetRoot(nodeDir)
|
||||
|
||||
err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
|
||||
if err := initFilesWithConfig(ctx, config); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
pvKeyFile := filepath.Join(nodeDir, config.PrivValidator.Key)
|
||||
pvStateFile := filepath.Join(nodeDir, config.PrivValidator.State)
|
||||
pv, err := privval.LoadFilePV(pvKeyFile, pvStateFile)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(ctx, ctxTimeout)
|
||||
defer cancel()
|
||||
|
||||
pubKey, err := pv.GetPubKey(ctx)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't get pubkey: %w", err)
|
||||
}
|
||||
genVals[i] = types.GenesisValidator{
|
||||
Address: pubKey.Address(),
|
||||
PubKey: pubKey,
|
||||
Power: 1,
|
||||
Name: nodeDirName,
|
||||
}
|
||||
}
|
||||
|
||||
for i := 0; i < nNonValidators; i++ {
|
||||
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i+nValidators))
|
||||
config.SetRoot(nodeDir)
|
||||
|
||||
err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
|
||||
err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
|
||||
if err := initFilesWithConfig(ctx, config); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Generate genesis doc from generated validators
|
||||
genDoc := &types.GenesisDoc{
|
||||
ChainID: "chain-" + tmrand.Str(6),
|
||||
GenesisTime: tmtime.Now(),
|
||||
InitialHeight: initialHeight,
|
||||
Validators: genVals,
|
||||
ConsensusParams: types.DefaultConsensusParams(),
|
||||
}
|
||||
if keyType == "secp256k1" {
|
||||
genDoc.ConsensusParams.Validator = types.ValidatorParams{
|
||||
PubKeyTypes: []string{types.ABCIPubKeyTypeSecp256k1},
|
||||
}
|
||||
}
|
||||
|
||||
// Write genesis file.
|
||||
for i := 0; i < nValidators+nNonValidators; i++ {
|
||||
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
|
||||
if err := genDoc.SaveAs(filepath.Join(nodeDir, config.BaseConfig.Genesis)); err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Gather persistent peer addresses.
|
||||
var (
|
||||
persistentPeers = make([]string, 0)
|
||||
err error
|
||||
)
|
||||
if populatePersistentPeers {
|
||||
persistentPeers, err = persistentPeersArray(config)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Overwrite default config.
|
||||
for i := 0; i < nValidators+nNonValidators; i++ {
|
||||
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
|
||||
config.SetRoot(nodeDir)
|
||||
config.P2P.AllowDuplicateIP = true
|
||||
if populatePersistentPeers {
|
||||
|
||||
persistentPeers, err = persistentPeersArray(config, tpargs)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Overwrite default config.
|
||||
for i := 0; i < nValidators+nNonValidators; i++ {
|
||||
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
|
||||
config.SetRoot(nodeDir)
|
||||
config.P2P.AllowDuplicateIP = true
|
||||
if populatePersistentPeers {
|
||||
persistentPeersWithoutSelf := make([]string, 0)
|
||||
for j := 0; j < len(persistentPeers); j++ {
|
||||
if j == i {
|
||||
continue
|
||||
}
|
||||
persistentPeersWithoutSelf = append(persistentPeersWithoutSelf, persistentPeers[j])
|
||||
persistentPeersWithoutSelf := make([]string, 0)
|
||||
for j := 0; j < len(persistentPeers); j++ {
|
||||
if j == i {
|
||||
continue
|
||||
}
|
||||
config.P2P.PersistentPeers = strings.Join(persistentPeersWithoutSelf, ",")
|
||||
}
|
||||
config.Moniker = tpargs.moniker(i)
|
||||
|
||||
if err := cfg.WriteConfigFile(nodeDir, config); err != nil {
|
||||
return err
|
||||
persistentPeersWithoutSelf = append(persistentPeersWithoutSelf, persistentPeers[j])
|
||||
}
|
||||
config.P2P.PersistentPeers = strings.Join(persistentPeersWithoutSelf, ",")
|
||||
}
|
||||
config.Moniker = moniker(i)
|
||||
|
||||
fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators)
|
||||
return nil
|
||||
if err := cfg.WriteConfigFile(nodeDir, config); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return cmd
|
||||
fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators)
|
||||
return nil
|
||||
}
|
||||
|
||||
type testnetPeerArgs struct {
|
||||
numValidators int
|
||||
numNonValidators int
|
||||
peerToPeerPort int
|
||||
nodeDirPrefix string
|
||||
outputDir string
|
||||
hostnames []string
|
||||
startingIPAddr string
|
||||
hostnamePrefix string
|
||||
hostnameSuffix string
|
||||
randomMonikers bool
|
||||
}
|
||||
|
||||
func (args *testnetPeerArgs) hostnameOrIP(i int) (string, error) {
if len(args.hostnames) > 0 && i < len(args.hostnames) {
return args.hostnames[i], nil
func hostnameOrIP(i int) string {
if len(hostnames) > 0 && i < len(hostnames) {
return hostnames[i]
}
if args.startingIPAddr == "" {
return fmt.Sprintf("%s%d%s", args.hostnamePrefix, i, args.hostnameSuffix), nil
if startingIPAddress == "" {
return fmt.Sprintf("%s%d%s", hostnamePrefix, i, hostnameSuffix)
}
ip := net.ParseIP(args.startingIPAddr)
ip := net.ParseIP(startingIPAddress)
ip = ip.To4()
if ip == nil {
return "", fmt.Errorf("%v is non-ipv4 address", args.startingIPAddr)
fmt.Printf("%v: non ipv4 address\n", startingIPAddress)
os.Exit(1)
}

for j := 0; j < i; j++ {
ip[3]++
}
return ip.String(), nil

return ip.String()
}
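One side of this hunk also stops calling os.Exit(1) on a non-IPv4 starting address and returns an error to the caller instead. A self-contained sketch of the address derivation, advancing the last octet of the starting IPv4 address once per node; the function name and the sample addresses are illustrative.

package main

import (
	"fmt"
	"net"
)

// nthAddress returns startingIP with its last octet advanced by i, which is
// how the testnet command derives per-node peer addresses; a non-IPv4 input
// is reported as an error rather than terminating the process.
func nthAddress(startingIP string, i int) (string, error) {
	ip := net.ParseIP(startingIP).To4()
	if ip == nil {
		return "", fmt.Errorf("%v is non-ipv4 address", startingIP)
	}
	for j := 0; j < i; j++ {
		ip[3]++
	}
	return ip.String(), nil
}

func main() {
	for i := 0; i < 3; i++ {
		addr, err := nthAddress("192.168.0.1", i)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Printf("node%d -> %s:26656\n", i, addr)
	}
}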
// get an array of persistent peers
|
||||
func persistentPeersArray(config *cfg.Config, args testnetPeerArgs) ([]string, error) {
|
||||
peers := make([]string, args.numValidators+args.numNonValidators)
|
||||
for i := 0; i < len(peers); i++ {
|
||||
nodeDir := filepath.Join(args.outputDir, fmt.Sprintf("%s%d", args.nodeDirPrefix, i))
|
||||
func persistentPeersArray(config *cfg.Config) ([]string, error) {
|
||||
peers := make([]string, nValidators+nNonValidators)
|
||||
for i := 0; i < nValidators+nNonValidators; i++ {
|
||||
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
|
||||
config.SetRoot(nodeDir)
|
||||
nodeKey, err := config.LoadNodeKeyID()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return []string{}, err
|
||||
}
|
||||
addr, err := args.hostnameOrIP(i)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
peers[i] = nodeKey.AddressString(fmt.Sprintf("%s:%d", addr, args.peerToPeerPort))
|
||||
peers[i] = nodeKey.AddressString(fmt.Sprintf("%s:%d", hostnameOrIP(i), p2pPort))
|
||||
}
|
||||
return peers, nil
|
||||
}
|
||||
|
||||
func (args *testnetPeerArgs) moniker(i int) string {
|
||||
if args.randomMonikers {
|
||||
func moniker(i int) string {
|
||||
if randomMonikers {
|
||||
return randomMoniker()
|
||||
}
|
||||
if len(args.hostnames) > 0 && i < len(args.hostnames) {
|
||||
return args.hostnames[i]
|
||||
if len(hostnames) > 0 && i < len(hostnames) {
|
||||
return hostnames[i]
|
||||
}
|
||||
if args.startingIPAddr == "" {
|
||||
return fmt.Sprintf("%s%d%s", args.hostnamePrefix, i, args.hostnameSuffix)
|
||||
if startingIPAddress == "" {
|
||||
return fmt.Sprintf("%s%d%s", hostnamePrefix, i, hostnameSuffix)
|
||||
}
|
||||
return randomMoniker()
|
||||
}
|
||||
|
||||
@@ -1,50 +1,37 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
"path/filepath"
|
||||
|
||||
"github.com/tendermint/tendermint/cmd/tendermint/commands"
|
||||
cmd "github.com/tendermint/tendermint/cmd/tendermint/commands"
|
||||
"github.com/tendermint/tendermint/cmd/tendermint/commands/debug"
|
||||
"github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/libs/cli"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
"github.com/tendermint/tendermint/node"
|
||||
)
|
||||
|
||||
func main() {
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
|
||||
conf, err := commands.ParseConfig(config.DefaultConfig())
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
logger, err := log.NewDefaultLogger(conf.LogFormat, conf.LogLevel)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
rcmd := commands.RootCommand(conf, logger)
|
||||
rcmd.AddCommand(
|
||||
commands.MakeGenValidatorCommand(),
|
||||
commands.MakeReindexEventCommand(conf, logger),
|
||||
commands.MakeInitFilesCommand(conf, logger),
|
||||
commands.MakeLightCommand(conf, logger),
|
||||
commands.MakeReplayCommand(conf, logger),
|
||||
commands.MakeReplayConsoleCommand(conf, logger),
|
||||
commands.MakeResetAllCommand(conf, logger),
|
||||
commands.MakeResetPrivateValidatorCommand(conf, logger),
|
||||
commands.MakeShowValidatorCommand(conf, logger),
|
||||
commands.MakeTestnetFilesCommand(conf, logger),
|
||||
commands.MakeShowNodeIDCommand(conf),
|
||||
commands.GenNodeKeyCmd,
|
||||
commands.VersionCmd,
|
||||
commands.MakeInspectCommand(conf, logger),
|
||||
commands.MakeRollbackStateCommand(conf),
|
||||
commands.MakeKeyMigrateCommand(conf, logger),
|
||||
rootCmd := cmd.RootCmd
|
||||
rootCmd.AddCommand(
|
||||
cmd.GenValidatorCmd,
|
||||
cmd.ReIndexEventCmd,
|
||||
cmd.InitFilesCmd,
|
||||
cmd.LightCmd,
|
||||
cmd.ReplayCmd,
|
||||
cmd.ReplayConsoleCmd,
|
||||
cmd.ResetAllCmd,
|
||||
cmd.ResetPrivValidatorCmd,
|
||||
cmd.ShowValidatorCmd,
|
||||
cmd.TestnetFilesCmd,
|
||||
cmd.ShowNodeIDCmd,
|
||||
cmd.GenNodeKeyCmd,
|
||||
cmd.VersionCmd,
|
||||
cmd.InspectCmd,
|
||||
cmd.RollbackStateCmd,
|
||||
cmd.MakeKeyMigrateCommand(),
|
||||
debug.DebugCmd,
|
||||
commands.NewCompletionCmd(rcmd, true),
|
||||
cli.NewCompletionCmd(rootCmd, true),
|
||||
)
|
||||
|
||||
// NOTE:
|
||||
@@ -58,9 +45,10 @@ func main() {
|
||||
nodeFunc := node.NewDefault
|
||||
|
||||
// Create & start node
|
||||
rcmd.AddCommand(commands.NewRunNodeCmd(nodeFunc, conf, logger))
|
||||
rootCmd.AddCommand(cmd.NewRunNodeCmd(nodeFunc))
|
||||
|
||||
if err := cli.RunWithTrace(ctx, rcmd); err != nil {
|
||||
cmd := cli.PrepareBaseCmd(rootCmd, "TM", os.ExpandEnv(filepath.Join("$HOME", config.DefaultTendermintDir)))
|
||||
if err := cmd.Execute(); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
||||
|
||||
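The new `main.go` builds the root command with `commands.RootCommand(conf, logger)` and hands the parsed config and logger to each subcommand constructor, rather than relying on package-level state. A rough, self-contained sketch of that constructor style using spf13/cobra (command name and config fields are illustrative):

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// Config stands in for the parsed Tendermint configuration.
type Config struct{ RootDir string }

// makeShowRootCommand returns a subcommand that closes over the config it was
// given, instead of reading a package-level variable at run time.
func makeShowRootCommand(conf *Config) *cobra.Command {
	return &cobra.Command{
		Use:   "show-root",
		Short: "print the configured root directory",
		RunE: func(cmd *cobra.Command, args []string) error {
			fmt.Println(conf.RootDir)
			return nil
		},
	}
}

func main() {
	conf := &Config{RootDir: "/tmp/tendermint-home"}

	root := &cobra.Command{Use: "demo"}
	root.AddCommand(makeShowRootCommand(conf))

	if err := root.Execute(); err != nil {
		panic(err)
	}
}
```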
@@ -2,7 +2,6 @@ package config
|
||||
|
||||
import (
|
||||
"encoding/hex"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"net/http"
|
||||
@@ -10,6 +9,7 @@ import (
|
||||
"path/filepath"
|
||||
"time"
|
||||
|
||||
tmjson "github.com/tendermint/tendermint/libs/json"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
tmos "github.com/tendermint/tendermint/libs/os"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
@@ -270,7 +270,7 @@ func (cfg BaseConfig) LoadNodeKeyID() (types.NodeID, error) {
|
||||
return "", err
|
||||
}
|
||||
nodeKey := types.NodeKey{}
|
||||
err = json.Unmarshal(jsonBytes, &nodeKey)
|
||||
err = tmjson.Unmarshal(jsonBytes, &nodeKey)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
@@ -504,13 +504,13 @@ namespace = "{{ .Instrumentation.Namespace }}"
|
||||
|
||||
/****** these are for test settings ***********/
|
||||
|
||||
func ResetTestRoot(dir, testName string) (*Config, error) {
|
||||
return ResetTestRootWithChainID(dir, testName, "")
|
||||
func ResetTestRoot(testName string) (*Config, error) {
|
||||
return ResetTestRootWithChainID(testName, "")
|
||||
}
|
||||
|
||||
func ResetTestRootWithChainID(dir, testName string, chainID string) (*Config, error) {
|
||||
func ResetTestRootWithChainID(testName string, chainID string) (*Config, error) {
|
||||
// create a unique, concurrency-safe test directory under os.TempDir()
|
||||
rootDir, err := os.MkdirTemp(dir, fmt.Sprintf("%s-%s_", chainID, testName))
|
||||
rootDir, err := os.MkdirTemp("", fmt.Sprintf("%s-%s_", chainID, testName))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@@ -20,7 +20,9 @@ func ensureFiles(t *testing.T, rootDir string, files ...string) {
|
||||
|
||||
func TestEnsureRoot(t *testing.T) {
|
||||
// setup temp dir for test
|
||||
tmpDir := t.TempDir()
|
||||
tmpDir, err := os.MkdirTemp("", "config-test")
|
||||
require.NoError(t, err)
|
||||
defer os.RemoveAll(tmpDir)
|
||||
|
||||
// create root dir
|
||||
EnsureRoot(tmpDir)
|
||||
@@ -40,7 +42,7 @@ func TestEnsureTestRoot(t *testing.T) {
|
||||
testName := "ensureTestRoot"
|
||||
|
||||
// create root dir
|
||||
cfg, err := ResetTestRoot(t.TempDir(), testName)
|
||||
cfg, err := ResetTestRoot(testName)
|
||||
require.NoError(t, err)
|
||||
defer os.RemoveAll(cfg.RootDir)
|
||||
rootDir := cfg.RootDir
|
||||
|
||||
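The test changes above swap hand-rolled `os.MkdirTemp` plus `defer os.RemoveAll` plumbing for `t.TempDir()`, and let `ResetTestRoot` take the directory instead of always writing under the system temp dir. A small sketch of the difference (test names are illustrative):

```go
package config_test

import (
	"os"
	"testing"
)

func TestOldStyle(t *testing.T) {
	// The caller owns creation and cleanup of the directory.
	dir, err := os.MkdirTemp("", "config-test")
	if err != nil {
		t.Fatal(err)
	}
	defer os.RemoveAll(dir)
	_ = dir
}

func TestNewStyle(t *testing.T) {
	// t.TempDir creates a unique per-test directory and removes it
	// automatically when the test completes.
	dir := t.TempDir()
	_ = dir
}
```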
@@ -6,7 +6,6 @@ import (
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto/internal/benchmarking"
|
||||
)
|
||||
|
||||
@@ -13,6 +13,7 @@ import (
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto/tmhash"
|
||||
"github.com/tendermint/tendermint/internal/jsontypes"
|
||||
tmjson "github.com/tendermint/tendermint/libs/json"
|
||||
)
|
||||
|
||||
//-------------------------------------
|
||||
@@ -56,6 +57,9 @@ const (
|
||||
)
|
||||
|
||||
func init() {
|
||||
tmjson.RegisterType(PubKey{}, PubKeyName)
|
||||
tmjson.RegisterType(PrivKey{}, PrivKeyName)
|
||||
|
||||
jsontypes.MustRegister(PubKey{})
|
||||
jsontypes.MustRegister(PrivKey{})
|
||||
}
|
||||
|
||||
@@ -7,14 +7,14 @@ import (
|
||||
"github.com/tendermint/tendermint/crypto/ed25519"
|
||||
"github.com/tendermint/tendermint/crypto/secp256k1"
|
||||
"github.com/tendermint/tendermint/crypto/sr25519"
|
||||
"github.com/tendermint/tendermint/internal/jsontypes"
|
||||
"github.com/tendermint/tendermint/libs/json"
|
||||
cryptoproto "github.com/tendermint/tendermint/proto/tendermint/crypto"
|
||||
)
|
||||
|
||||
func init() {
|
||||
jsontypes.MustRegister((*cryptoproto.PublicKey)(nil))
|
||||
jsontypes.MustRegister((*cryptoproto.PublicKey_Ed25519)(nil))
|
||||
jsontypes.MustRegister((*cryptoproto.PublicKey_Secp256K1)(nil))
|
||||
json.RegisterType((*cryptoproto.PublicKey)(nil), "tendermint.crypto.PublicKey")
|
||||
json.RegisterType((*cryptoproto.PublicKey_Ed25519)(nil), "tendermint.crypto.PublicKey_Ed25519")
|
||||
json.RegisterType((*cryptoproto.PublicKey_Secp256K1)(nil), "tendermint.crypto.PublicKey_Secp256K1")
|
||||
}
|
||||
|
||||
// PubKeyToProto takes crypto.PubKey and transforms it to a protobuf Pubkey
|
||||
|
||||
@@ -24,10 +24,10 @@ const (
|
||||
// everything. This also affects the generalized proof system as
|
||||
// well.
|
||||
type Proof struct {
|
||||
Total int64 `json:"total,string"` // Total number of items.
|
||||
Index int64 `json:"index,string"` // Index of item to prove.
|
||||
LeafHash []byte `json:"leaf_hash"` // Hash of item value.
|
||||
Aunts [][]byte `json:"aunts"` // Hashes from leaf's sibling to a root's child.
|
||||
Total int64 `json:"total"` // Total number of items.
|
||||
Index int64 `json:"index"` // Index of item to prove.
|
||||
LeafHash []byte `json:"leaf_hash"` // Hash of item value.
|
||||
Aunts [][]byte `json:"aunts"` // Hashes from leaf's sibling to a root's child.
|
||||
}
|
||||
|
||||
// ProofsFromByteSlices computes inclusion proof for given items.
|
||||
|
||||
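The `Proof` change adds the `,string` option to the int64 fields, so `Total` and `Index` are emitted as quoted decimal strings rather than JSON numbers, which keeps 64-bit values safe for JavaScript consumers. A quick illustration with the standard-library encoder:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type asNumber struct {
	Total int64 `json:"total"`
}

type asString struct {
	Total int64 `json:"total,string"`
}

func main() {
	a, _ := json.Marshal(asNumber{Total: 9007199254740993}) // above JavaScript's 2^53-1
	b, _ := json.Marshal(asString{Total: 9007199254740993})
	fmt.Println(string(a)) // {"total":9007199254740993}
	fmt.Println(string(b)) // {"total":"9007199254740993"}
}
```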
@@ -9,12 +9,12 @@ import (
|
||||
"math/big"
|
||||
|
||||
secp256k1 "github.com/btcsuite/btcd/btcec"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/internal/jsontypes"
|
||||
tmjson "github.com/tendermint/tendermint/libs/json"
|
||||
|
||||
// necessary for Bitcoin address format
|
||||
"golang.org/x/crypto/ripemd160" //nolint:staticcheck
|
||||
"golang.org/x/crypto/ripemd160" // nolint
|
||||
)
|
||||
|
||||
//-------------------------------------
|
||||
@@ -27,6 +27,9 @@ const (
|
||||
)
|
||||
|
||||
func init() {
|
||||
tmjson.RegisterType(PubKey{}, PubKeyName)
|
||||
tmjson.RegisterType(PrivKey{}, PrivKeyName)
|
||||
|
||||
jsontypes.MustRegister(PubKey{})
|
||||
jsontypes.MustRegister(PrivKey{})
|
||||
}
|
||||
@@ -179,67 +182,3 @@ func (pubKey PubKey) Equals(other crypto.PubKey) bool {
|
||||
func (pubKey PubKey) Type() string {
|
||||
return KeyType
|
||||
}
|
||||
|
||||
// used to reject malleable signatures
|
||||
// see:
|
||||
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
|
||||
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/crypto.go#L39
|
||||
var secp256k1halfN = new(big.Int).Rsh(secp256k1.S256().N, 1)
|
||||
|
||||
// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
|
||||
// The returned signature will be of the form R || S (in lower-S form).
|
||||
func (privKey PrivKey) Sign(msg []byte) ([]byte, error) {
|
||||
priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey)
|
||||
|
||||
sig, err := priv.Sign(crypto.Sha256(msg))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
sigBytes := serializeSig(sig)
|
||||
return sigBytes, nil
|
||||
}
|
||||
|
||||
// VerifySignature verifies a signature of the form R || S.
|
||||
// It rejects signatures which are not in lower-S form.
|
||||
func (pubKey PubKey) VerifySignature(msg []byte, sigStr []byte) bool {
|
||||
if len(sigStr) != 64 {
|
||||
return false
|
||||
}
|
||||
|
||||
pub, err := secp256k1.ParsePubKey(pubKey, secp256k1.S256())
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
|
||||
// parse the signature:
|
||||
signature := signatureFromBytes(sigStr)
|
||||
// Reject malleable signatures. libsecp256k1 does this check but btcec doesn't.
|
||||
// see: https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
|
||||
if signature.S.Cmp(secp256k1halfN) > 0 {
|
||||
return false
|
||||
}
|
||||
|
||||
return signature.Verify(crypto.Sha256(msg), pub)
|
||||
}
|
||||
|
||||
// Read Signature struct from R || S. Caller needs to ensure
|
||||
// that len(sigStr) == 64.
|
||||
func signatureFromBytes(sigStr []byte) *secp256k1.Signature {
|
||||
return &secp256k1.Signature{
|
||||
R: new(big.Int).SetBytes(sigStr[:32]),
|
||||
S: new(big.Int).SetBytes(sigStr[32:64]),
|
||||
}
|
||||
}
|
||||
|
||||
// Serialize signature to R || S.
|
||||
// R, S are padded to 32 bytes respectively.
|
||||
func serializeSig(sig *secp256k1.Signature) []byte {
|
||||
rBytes := sig.R.Bytes()
|
||||
sBytes := sig.S.Bytes()
|
||||
sigBytes := make([]byte, 64)
|
||||
// 0 pad the byte arrays from the left if they aren't big enough.
|
||||
copy(sigBytes[32-len(rBytes):32], rBytes)
|
||||
copy(sigBytes[64-len(sBytes):64], sBytes)
|
||||
return sigBytes
|
||||
}
|
||||
|
||||
76
crypto/secp256k1/secp256k1_nocgo.go
Normal file
@@ -0,0 +1,76 @@
|
||||
//go:build !libsecp256k1
|
||||
// +build !libsecp256k1
|
||||
|
||||
package secp256k1
|
||||
|
||||
import (
|
||||
"math/big"
|
||||
|
||||
secp256k1 "github.com/btcsuite/btcd/btcec"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
)
|
||||
|
||||
// used to reject malleable signatures
|
||||
// see:
|
||||
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
|
||||
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/crypto.go#L39
|
||||
var secp256k1halfN = new(big.Int).Rsh(secp256k1.S256().N, 1)
|
||||
|
||||
// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
|
||||
// The returned signature will be of the form R || S (in lower-S form).
|
||||
func (privKey PrivKey) Sign(msg []byte) ([]byte, error) {
|
||||
priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey)
|
||||
|
||||
sig, err := priv.Sign(crypto.Sha256(msg))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
sigBytes := serializeSig(sig)
|
||||
return sigBytes, nil
|
||||
}
|
||||
|
||||
// VerifySignature verifies a signature of the form R || S.
|
||||
// It rejects signatures which are not in lower-S form.
|
||||
func (pubKey PubKey) VerifySignature(msg []byte, sigStr []byte) bool {
|
||||
if len(sigStr) != 64 {
|
||||
return false
|
||||
}
|
||||
|
||||
pub, err := secp256k1.ParsePubKey(pubKey, secp256k1.S256())
|
||||
if err != nil {
|
||||
return false
|
||||
}
|
||||
|
||||
// parse the signature:
|
||||
signature := signatureFromBytes(sigStr)
|
||||
// Reject malleable signatures. libsecp256k1 does this check but btcec doesn't.
|
||||
// see: https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
|
||||
if signature.S.Cmp(secp256k1halfN) > 0 {
|
||||
return false
|
||||
}
|
||||
|
||||
return signature.Verify(crypto.Sha256(msg), pub)
|
||||
}
|
||||
|
||||
// Read Signature struct from R || S. Caller needs to ensure
|
||||
// that len(sigStr) == 64.
|
||||
func signatureFromBytes(sigStr []byte) *secp256k1.Signature {
|
||||
return &secp256k1.Signature{
|
||||
R: new(big.Int).SetBytes(sigStr[:32]),
|
||||
S: new(big.Int).SetBytes(sigStr[32:64]),
|
||||
}
|
||||
}
|
||||
|
||||
// Serialize signature to R || S.
|
||||
// R, S are padded to 32 bytes respectively.
|
||||
func serializeSig(sig *secp256k1.Signature) []byte {
|
||||
rBytes := sig.R.Bytes()
|
||||
sBytes := sig.S.Bytes()
|
||||
sigBytes := make([]byte, 64)
|
||||
// 0 pad the byte arrays from the left if they aren't big enough.
|
||||
copy(sigBytes[32-len(rBytes):32], rBytes)
|
||||
copy(sigBytes[64-len(sBytes):64], sBytes)
|
||||
return sigBytes
|
||||
}
|
||||
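The `secp256k1halfN` check above rejects any signature whose S value lies in the upper half of the group order, because (R, S) and (R, N-S) verify equally and would otherwise give two valid encodings of the same signature. A minimal sketch of the normalization a signer can apply so its signatures pass that check; the hex constant is the standard secp256k1 group order, and this is an illustration of the idea, not the package's API:

```go
package main

import (
	"fmt"
	"math/big"
)

// Order of the secp256k1 group, the same value as secp256k1.S256().N above.
var curveN, _ = new(big.Int).SetString(
	"FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", 16)

var halfN = new(big.Int).Rsh(curveN, 1)

// normalizeS folds S into the lower half of the order. Verifiers that reject
// S > N/2 (as VerifySignature does above) then accept exactly one of the two
// equivalent encodings, which removes signature malleability.
func normalizeS(s *big.Int) *big.Int {
	if s.Cmp(halfN) > 0 {
		return new(big.Int).Sub(curveN, s)
	}
	return s
}

func main() {
	high := new(big.Int).Sub(curveN, big.NewInt(1)) // an S in the upper half
	fmt.Println(normalizeS(high))                   // 1, the canonical lower-S form
}
```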
@@ -6,7 +6,6 @@ import (
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/crypto/internal/benchmarking"
|
||||
)
|
||||
|
||||
@@ -2,6 +2,7 @@ package sr25519
|
||||
|
||||
import (
|
||||
"github.com/tendermint/tendermint/internal/jsontypes"
|
||||
tmjson "github.com/tendermint/tendermint/libs/json"
|
||||
)
|
||||
|
||||
const (
|
||||
@@ -10,6 +11,9 @@ const (
|
||||
)
|
||||
|
||||
func init() {
|
||||
tmjson.RegisterType(PubKey{}, PubKeyName)
|
||||
tmjson.RegisterType(PrivKey{}, PrivKeyName)
|
||||
|
||||
jsontypes.MustRegister(PubKey{})
|
||||
jsontypes.MustRegister(PrivKey{})
|
||||
}
|
||||
|
||||
132
crypto/xchacha20poly1305/vector_test.go
Normal file
@@ -0,0 +1,132 @@
|
||||
package xchacha20poly1305
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/hex"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func toHex(bits []byte) string {
|
||||
return hex.EncodeToString(bits)
|
||||
}
|
||||
|
||||
func fromHex(bits string) ([]byte, error) {
|
||||
b, err := hex.DecodeString(bits)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b, nil
|
||||
}
|
||||
|
||||
func check(t *testing.T, fn func(string) ([]byte, error), hex string) []byte {
|
||||
t.Helper()
|
||||
|
||||
res, err := fn(hex)
|
||||
require.NoError(t, err)
|
||||
return res
|
||||
}
|
||||
|
||||
func TestHChaCha20(t *testing.T) {
|
||||
var hChaCha20Vectors = []struct {
|
||||
key, nonce, keystream []byte
|
||||
}{
|
||||
{
|
||||
check(t, fromHex, "0000000000000000000000000000000000000000000000000000000000000000"),
|
||||
check(t, fromHex, "000000000000000000000000000000000000000000000000"),
|
||||
check(t, fromHex, "1140704c328d1d5d0e30086cdf209dbd6a43b8f41518a11cc387b669b2ee6586"),
|
||||
},
|
||||
{
|
||||
check(t, fromHex, "8000000000000000000000000000000000000000000000000000000000000000"),
|
||||
check(t, fromHex, "000000000000000000000000000000000000000000000000"),
|
||||
check(t, fromHex, "7d266a7fd808cae4c02a0a70dcbfbcc250dae65ce3eae7fc210f54cc8f77df86"),
|
||||
},
|
||||
{
|
||||
check(t, fromHex, "0000000000000000000000000000000000000000000000000000000000000001"),
|
||||
check(t, fromHex, "000000000000000000000000000000000000000000000002"),
|
||||
check(t, fromHex, "e0c77ff931bb9163a5460c02ac281c2b53d792b1c43fea817e9ad275ae546963"),
|
||||
},
|
||||
{
|
||||
check(t, fromHex, "000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f"),
|
||||
check(t, fromHex, "000102030405060708090a0b0c0d0e0f1011121314151617"),
|
||||
check(t, fromHex, "51e3ff45a895675c4b33b46c64f4a9ace110d34df6a2ceab486372bacbd3eff6"),
|
||||
},
|
||||
{
|
||||
check(t, fromHex, "24f11cce8a1b3d61e441561a696c1c1b7e173d084fd4812425435a8896a013dc"),
|
||||
check(t, fromHex, "d9660c5900ae19ddad28d6e06e45fe5e"),
|
||||
check(t, fromHex, "5966b3eec3bff1189f831f06afe4d4e3be97fa9235ec8c20d08acfbbb4e851e3"),
|
||||
},
|
||||
}
|
||||
|
||||
for i, v := range hChaCha20Vectors {
|
||||
var key [32]byte
|
||||
var nonce [16]byte
|
||||
copy(key[:], v.key)
|
||||
copy(nonce[:], v.nonce)
|
||||
|
||||
HChaCha20(&key, &nonce, &key)
|
||||
if !bytes.Equal(key[:], v.keystream) {
|
||||
t.Errorf("test %d: keystream mismatch:\n \t got: %s\n \t want: %s", i, toHex(key[:]), toHex(v.keystream))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestVectors(t *testing.T) {
|
||||
for i, v := range vectors {
|
||||
if len(v.plaintext) == 0 {
|
||||
v.plaintext = make([]byte, len(v.ciphertext))
|
||||
}
|
||||
|
||||
var nonce [24]byte
|
||||
copy(nonce[:], v.nonce)
|
||||
|
||||
aead, err := New(v.key)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
dst := aead.Seal(nil, nonce[:], v.plaintext, v.ad)
|
||||
if !bytes.Equal(dst, v.ciphertext) {
|
||||
t.Errorf("test %d: ciphertext mismatch:\n \t got: %s\n \t want: %s", i, toHex(dst), toHex(v.ciphertext))
|
||||
}
|
||||
open, err := aead.Open(nil, nonce[:], dst, v.ad)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
if !bytes.Equal(open, v.plaintext) {
|
||||
t.Errorf("test %d: plaintext mismatch:\n \t got: %s\n \t want: %s", i, string(open), string(v.plaintext))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var vectors = []struct {
|
||||
key, nonce, ad, plaintext, ciphertext []byte
|
||||
}{
|
||||
{
|
||||
[]byte{
|
||||
0x80, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87, 0x88, 0x89, 0x8a,
|
||||
0x8b, 0x8c, 0x8d, 0x8e, 0x8f, 0x90, 0x91, 0x92, 0x93, 0x94, 0x95,
|
||||
0x96, 0x97, 0x98, 0x99, 0x9a, 0x9b, 0x9c, 0x9d, 0x9e, 0x9f,
|
||||
},
|
||||
[]byte{0x07, 0x00, 0x00, 0x00, 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, 0x4a, 0x4b},
|
||||
[]byte{0x50, 0x51, 0x52, 0x53, 0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7},
|
||||
[]byte(
|
||||
"Ladies and Gentlemen of the class of '99: If I could offer you only one tip for the future, sunscreen would be it.",
|
||||
),
|
||||
[]byte{
|
||||
0x45, 0x3c, 0x06, 0x93, 0xa7, 0x40, 0x7f, 0x04, 0xff, 0x4c, 0x56,
|
||||
0xae, 0xdb, 0x17, 0xa3, 0xc0, 0xa1, 0xaf, 0xff, 0x01, 0x17, 0x49,
|
||||
0x30, 0xfc, 0x22, 0x28, 0x7c, 0x33, 0xdb, 0xcf, 0x0a, 0xc8, 0xb8,
|
||||
0x9a, 0xd9, 0x29, 0x53, 0x0a, 0x1b, 0xb3, 0xab, 0x5e, 0x69, 0xf2,
|
||||
0x4c, 0x7f, 0x60, 0x70, 0xc8, 0xf8, 0x40, 0xc9, 0xab, 0xb4, 0xf6,
|
||||
0x9f, 0xbf, 0xc8, 0xa7, 0xff, 0x51, 0x26, 0xfa, 0xee, 0xbb, 0xb5,
|
||||
0x58, 0x05, 0xee, 0x9c, 0x1c, 0xf2, 0xce, 0x5a, 0x57, 0x26, 0x32,
|
||||
0x87, 0xae, 0xc5, 0x78, 0x0f, 0x04, 0xec, 0x32, 0x4c, 0x35, 0x14,
|
||||
0x12, 0x2c, 0xfc, 0x32, 0x31, 0xfc, 0x1a, 0x8b, 0x71, 0x8a, 0x62,
|
||||
0x86, 0x37, 0x30, 0xa2, 0x70, 0x2b, 0xb7, 0x63, 0x66, 0x11, 0x6b,
|
||||
0xed, 0x09, 0xe0, 0xfd, 0x5c, 0x6d, 0x84, 0xb6, 0xb0, 0xc1, 0xab,
|
||||
0xaf, 0x24, 0x9d, 0x5d, 0xd0, 0xf7, 0xf5, 0xa7, 0xea,
|
||||
},
|
||||
},
|
||||
}
|
||||
259
crypto/xchacha20poly1305/xchachapoly.go
Normal file
@@ -0,0 +1,259 @@
|
||||
// Package xchacha20poly1305 creates an AEAD using hchacha, chacha, and poly1305
|
||||
// This allows for randomized nonces to be used in conjunction with chacha.
|
||||
package xchacha20poly1305
|
||||
|
||||
import (
|
||||
"crypto/cipher"
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"fmt"
|
||||
|
||||
"golang.org/x/crypto/chacha20poly1305"
|
||||
)
|
||||
|
||||
// Implements crypto.AEAD
|
||||
type xchacha20poly1305 struct {
|
||||
key [KeySize]byte
|
||||
}
|
||||
|
||||
const (
|
||||
// KeySize is the size of the key used by this AEAD, in bytes.
|
||||
KeySize = 32
|
||||
// NonceSize is the size of the nonce used with this AEAD, in bytes.
|
||||
NonceSize = 24
|
||||
// TagSize is the size added from poly1305
|
||||
TagSize = 16
|
||||
// MaxPlaintextSize is the max size that can be passed into a single call of Seal
|
||||
MaxPlaintextSize = (1 << 38) - 64
|
||||
// MaxCiphertextSize is the max size that can be passed into a single call of Open,
|
||||
// this differs from plaintext size due to the tag
|
||||
MaxCiphertextSize = (1 << 38) - 48
|
||||
|
||||
// sigma are constants used in xchacha.
|
||||
// Unrolled from a slice so that they can be inlined, as slices can't be constants.
|
||||
sigma0 = uint32(0x61707865)
|
||||
sigma1 = uint32(0x3320646e)
|
||||
sigma2 = uint32(0x79622d32)
|
||||
sigma3 = uint32(0x6b206574)
|
||||
)
|
||||
|
||||
// New returns a new xchachapoly1305 AEAD
|
||||
func New(key []byte) (cipher.AEAD, error) {
|
||||
if len(key) != KeySize {
|
||||
return nil, errors.New("xchacha20poly1305: bad key length")
|
||||
}
|
||||
ret := new(xchacha20poly1305)
|
||||
copy(ret.key[:], key)
|
||||
return ret, nil
|
||||
}
|
||||
|
||||
func (c *xchacha20poly1305) NonceSize() int {
|
||||
return NonceSize
|
||||
}
|
||||
|
||||
func (c *xchacha20poly1305) Overhead() int {
|
||||
return TagSize
|
||||
}
|
||||
|
||||
func (c *xchacha20poly1305) Seal(dst, nonce, plaintext, additionalData []byte) []byte {
|
||||
if len(nonce) != NonceSize {
|
||||
panic("xchacha20poly1305: bad nonce length passed to Seal")
|
||||
}
|
||||
|
||||
if uint64(len(plaintext)) > MaxPlaintextSize {
|
||||
panic("xchacha20poly1305: plaintext too large")
|
||||
}
|
||||
|
||||
var subKey [KeySize]byte
|
||||
var hNonce [16]byte
|
||||
var subNonce [chacha20poly1305.NonceSize]byte
|
||||
copy(hNonce[:], nonce[:16])
|
||||
|
||||
HChaCha20(&subKey, &hNonce, &c.key)
|
||||
|
||||
// This can't error because we always provide a correctly sized key
|
||||
chacha20poly1305, _ := chacha20poly1305.New(subKey[:])
|
||||
|
||||
copy(subNonce[4:], nonce[16:])
|
||||
|
||||
return chacha20poly1305.Seal(dst, subNonce[:], plaintext, additionalData)
|
||||
}
|
||||
|
||||
func (c *xchacha20poly1305) Open(dst, nonce, ciphertext, additionalData []byte) ([]byte, error) {
|
||||
if len(nonce) != NonceSize {
|
||||
return nil, fmt.Errorf("xchacha20poly1305: bad nonce length passed to Open")
|
||||
}
|
||||
if uint64(len(ciphertext)) > MaxCiphertextSize {
|
||||
return nil, fmt.Errorf("xchacha20poly1305: ciphertext too large")
|
||||
}
|
||||
var subKey [KeySize]byte
|
||||
var hNonce [16]byte
|
||||
var subNonce [chacha20poly1305.NonceSize]byte
|
||||
copy(hNonce[:], nonce[:16])
|
||||
|
||||
HChaCha20(&subKey, &hNonce, &c.key)
|
||||
|
||||
// This can't error because we always provide a correctly sized key
|
||||
chacha20poly1305, _ := chacha20poly1305.New(subKey[:])
|
||||
|
||||
copy(subNonce[4:], nonce[16:])
|
||||
|
||||
return chacha20poly1305.Open(dst, subNonce[:], ciphertext, additionalData)
|
||||
}
|
||||
|
||||
// HChaCha exported from
|
||||
// https://github.com/aead/chacha20/blob/8b13a72661dae6e9e5dea04f344f0dc95ea29547/chacha/chacha_generic.go#L194
|
||||
// TODO: Add support for the different assembly instructions used there.
|
||||
|
||||
// The MIT License (MIT)
|
||||
|
||||
// Copyright (c) 2016 Andreas Auernhammer
|
||||
|
||||
// Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
// of this software and associated documentation files (the "Software"), to deal
|
||||
// in the Software without restriction, including without limitation the rights
|
||||
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
// copies of the Software, and to permit persons to whom the Software is
|
||||
// furnished to do so, subject to the following conditions:
|
||||
|
||||
// The above copyright notice and this permission notice shall be included in all
|
||||
// copies or substantial portions of the Software.
|
||||
|
||||
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
// SOFTWARE.
|
||||
|
||||
// HChaCha20 generates 32 pseudo-random bytes from a 128 bit nonce and a 256 bit secret key.
|
||||
// It can be used as a key-derivation-function (KDF).
|
||||
func HChaCha20(out *[32]byte, nonce *[16]byte, key *[32]byte) { hChaCha20Generic(out, nonce, key) }
|
||||
|
||||
func hChaCha20Generic(out *[32]byte, nonce *[16]byte, key *[32]byte) {
|
||||
v00 := sigma0
|
||||
v01 := sigma1
|
||||
v02 := sigma2
|
||||
v03 := sigma3
|
||||
v04 := binary.LittleEndian.Uint32(key[0:])
|
||||
v05 := binary.LittleEndian.Uint32(key[4:])
|
||||
v06 := binary.LittleEndian.Uint32(key[8:])
|
||||
v07 := binary.LittleEndian.Uint32(key[12:])
|
||||
v08 := binary.LittleEndian.Uint32(key[16:])
|
||||
v09 := binary.LittleEndian.Uint32(key[20:])
|
||||
v10 := binary.LittleEndian.Uint32(key[24:])
|
||||
v11 := binary.LittleEndian.Uint32(key[28:])
|
||||
v12 := binary.LittleEndian.Uint32(nonce[0:])
|
||||
v13 := binary.LittleEndian.Uint32(nonce[4:])
|
||||
v14 := binary.LittleEndian.Uint32(nonce[8:])
|
||||
v15 := binary.LittleEndian.Uint32(nonce[12:])
|
||||
|
||||
for i := 0; i < 20; i += 2 {
|
||||
v00 += v04
|
||||
v12 ^= v00
|
||||
v12 = (v12 << 16) | (v12 >> 16)
|
||||
v08 += v12
|
||||
v04 ^= v08
|
||||
v04 = (v04 << 12) | (v04 >> 20)
|
||||
v00 += v04
|
||||
v12 ^= v00
|
||||
v12 = (v12 << 8) | (v12 >> 24)
|
||||
v08 += v12
|
||||
v04 ^= v08
|
||||
v04 = (v04 << 7) | (v04 >> 25)
|
||||
v01 += v05
|
||||
v13 ^= v01
|
||||
v13 = (v13 << 16) | (v13 >> 16)
|
||||
v09 += v13
|
||||
v05 ^= v09
|
||||
v05 = (v05 << 12) | (v05 >> 20)
|
||||
v01 += v05
|
||||
v13 ^= v01
|
||||
v13 = (v13 << 8) | (v13 >> 24)
|
||||
v09 += v13
|
||||
v05 ^= v09
|
||||
v05 = (v05 << 7) | (v05 >> 25)
|
||||
v02 += v06
|
||||
v14 ^= v02
|
||||
v14 = (v14 << 16) | (v14 >> 16)
|
||||
v10 += v14
|
||||
v06 ^= v10
|
||||
v06 = (v06 << 12) | (v06 >> 20)
|
||||
v02 += v06
|
||||
v14 ^= v02
|
||||
v14 = (v14 << 8) | (v14 >> 24)
|
||||
v10 += v14
|
||||
v06 ^= v10
|
||||
v06 = (v06 << 7) | (v06 >> 25)
|
||||
v03 += v07
|
||||
v15 ^= v03
|
||||
v15 = (v15 << 16) | (v15 >> 16)
|
||||
v11 += v15
|
||||
v07 ^= v11
|
||||
v07 = (v07 << 12) | (v07 >> 20)
|
||||
v03 += v07
|
||||
v15 ^= v03
|
||||
v15 = (v15 << 8) | (v15 >> 24)
|
||||
v11 += v15
|
||||
v07 ^= v11
|
||||
v07 = (v07 << 7) | (v07 >> 25)
|
||||
v00 += v05
|
||||
v15 ^= v00
|
||||
v15 = (v15 << 16) | (v15 >> 16)
|
||||
v10 += v15
|
||||
v05 ^= v10
|
||||
v05 = (v05 << 12) | (v05 >> 20)
|
||||
v00 += v05
|
||||
v15 ^= v00
|
||||
v15 = (v15 << 8) | (v15 >> 24)
|
||||
v10 += v15
|
||||
v05 ^= v10
|
||||
v05 = (v05 << 7) | (v05 >> 25)
|
||||
v01 += v06
|
||||
v12 ^= v01
|
||||
v12 = (v12 << 16) | (v12 >> 16)
|
||||
v11 += v12
|
||||
v06 ^= v11
|
||||
v06 = (v06 << 12) | (v06 >> 20)
|
||||
v01 += v06
|
||||
v12 ^= v01
|
||||
v12 = (v12 << 8) | (v12 >> 24)
|
||||
v11 += v12
|
||||
v06 ^= v11
|
||||
v06 = (v06 << 7) | (v06 >> 25)
|
||||
v02 += v07
|
||||
v13 ^= v02
|
||||
v13 = (v13 << 16) | (v13 >> 16)
|
||||
v08 += v13
|
||||
v07 ^= v08
|
||||
v07 = (v07 << 12) | (v07 >> 20)
|
||||
v02 += v07
|
||||
v13 ^= v02
|
||||
v13 = (v13 << 8) | (v13 >> 24)
|
||||
v08 += v13
|
||||
v07 ^= v08
|
||||
v07 = (v07 << 7) | (v07 >> 25)
|
||||
v03 += v04
|
||||
v14 ^= v03
|
||||
v14 = (v14 << 16) | (v14 >> 16)
|
||||
v09 += v14
|
||||
v04 ^= v09
|
||||
v04 = (v04 << 12) | (v04 >> 20)
|
||||
v03 += v04
|
||||
v14 ^= v03
|
||||
v14 = (v14 << 8) | (v14 >> 24)
|
||||
v09 += v14
|
||||
v04 ^= v09
|
||||
v04 = (v04 << 7) | (v04 >> 25)
|
||||
}
|
||||
|
||||
binary.LittleEndian.PutUint32(out[0:], v00)
|
||||
binary.LittleEndian.PutUint32(out[4:], v01)
|
||||
binary.LittleEndian.PutUint32(out[8:], v02)
|
||||
binary.LittleEndian.PutUint32(out[12:], v03)
|
||||
binary.LittleEndian.PutUint32(out[16:], v12)
|
||||
binary.LittleEndian.PutUint32(out[20:], v13)
|
||||
binary.LittleEndian.PutUint32(out[24:], v14)
|
||||
binary.LittleEndian.PutUint32(out[28:], v15)
|
||||
}
|
||||
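For callers, the point of this construction is that the 24-byte nonce is large enough to be chosen at random for every message. The same XChaCha20-Poly1305 scheme is also available as `chacha20poly1305.NewX` in x/crypto; a small usage sketch of the randomized-nonce pattern this package enables:

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/chacha20poly1305"
)

func main() {
	key := make([]byte, chacha20poly1305.KeySize) // 32-byte key
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}

	// NewX is the same construction sketched above: HChaCha20 derives a subkey
	// from the first 16 nonce bytes, then ChaCha20-Poly1305 runs under it.
	aead, err := chacha20poly1305.NewX(key)
	if err != nil {
		panic(err)
	}

	nonce := make([]byte, chacha20poly1305.NonceSizeX) // 24 bytes, safe to pick at random
	if _, err := rand.Read(nonce); err != nil {
		panic(err)
	}

	ct := aead.Seal(nil, nonce, []byte("hello"), nil)
	pt, err := aead.Open(nil, nonce, ct, nil)
	fmt.Println(string(pt), err) // hello <nil>
}
```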
113
crypto/xchacha20poly1305/xchachapoly_test.go
Normal file
@@ -0,0 +1,113 @@
|
||||
package xchacha20poly1305
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
crand "crypto/rand"
|
||||
mrand "math/rand"
|
||||
"testing"
|
||||
)
|
||||
|
||||
// The following test is taken from
|
||||
// https://github.com/golang/crypto/blob/master/chacha20poly1305/chacha20poly1305_test.go#L69
|
||||
// It requires the below copyright notice, where "this source code" refers to the following function.
|
||||
// Copyright 2016 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found at the bottom of this file.
|
||||
func TestRandom(t *testing.T) {
|
||||
// Some random tests to verify Open(Seal) == Plaintext
|
||||
for i := 0; i < 256; i++ {
|
||||
var nonce [24]byte
|
||||
var key [32]byte
|
||||
|
||||
al := mrand.Intn(128)
|
||||
pl := mrand.Intn(16384)
|
||||
ad := make([]byte, al)
|
||||
plaintext := make([]byte, pl)
|
||||
_, err := crand.Read(key[:])
|
||||
if err != nil {
|
||||
t.Errorf("error on read: %w", err)
|
||||
}
|
||||
_, err = crand.Read(nonce[:])
|
||||
if err != nil {
|
||||
t.Errorf("error on read: %w", err)
|
||||
}
|
||||
_, err = crand.Read(ad)
|
||||
if err != nil {
|
||||
t.Errorf("error on read: %w", err)
|
||||
}
|
||||
_, err = crand.Read(plaintext)
|
||||
if err != nil {
|
||||
t.Errorf("error on read: %w", err)
|
||||
}
|
||||
|
||||
aead, err := New(key[:])
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
ct := aead.Seal(nil, nonce[:], plaintext, ad)
|
||||
|
||||
plaintext2, err := aead.Open(nil, nonce[:], ct, ad)
|
||||
if err != nil {
|
||||
t.Errorf("random #%d: Open failed", i)
|
||||
continue
|
||||
}
|
||||
|
||||
if !bytes.Equal(plaintext, plaintext2) {
|
||||
t.Errorf("random #%d: plaintext's don't match: got %x vs %x", i, plaintext2, plaintext)
|
||||
continue
|
||||
}
|
||||
|
||||
if len(ad) > 0 {
|
||||
alterAdIdx := mrand.Intn(len(ad))
|
||||
ad[alterAdIdx] ^= 0x80
|
||||
if _, err := aead.Open(nil, nonce[:], ct, ad); err == nil {
|
||||
t.Errorf("random #%d: Open was successful after altering additional data", i)
|
||||
}
|
||||
ad[alterAdIdx] ^= 0x80
|
||||
}
|
||||
|
||||
alterNonceIdx := mrand.Intn(aead.NonceSize())
|
||||
nonce[alterNonceIdx] ^= 0x80
|
||||
if _, err := aead.Open(nil, nonce[:], ct, ad); err == nil {
|
||||
t.Errorf("random #%d: Open was successful after altering nonce", i)
|
||||
}
|
||||
nonce[alterNonceIdx] ^= 0x80
|
||||
|
||||
alterCtIdx := mrand.Intn(len(ct))
|
||||
ct[alterCtIdx] ^= 0x80
|
||||
if _, err := aead.Open(nil, nonce[:], ct, ad); err == nil {
|
||||
t.Errorf("random #%d: Open was successful after altering ciphertext", i)
|
||||
}
|
||||
ct[alterCtIdx] ^= 0x80
|
||||
}
|
||||
}
|
||||
|
||||
// AFOREMENTIONED LICENSE
|
||||
// Copyright (c) 2009 The Go Authors. All rights reserved.
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
//
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
// * Neither the name of Google Inc. nor the names of its
|
||||
// contributors may be used to endorse or promote products derived from
|
||||
// this software without specific prior written permission.
|
||||
//
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
54
crypto/xsalsa20symmetric/symmetric.go
Normal file
@@ -0,0 +1,54 @@
|
||||
package xsalsa20symmetric
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
|
||||
"golang.org/x/crypto/nacl/secretbox"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
)
|
||||
|
||||
// TODO, make this into a struct that implements crypto.Symmetric.
|
||||
|
||||
const nonceLen = 24
|
||||
const secretLen = 32
|
||||
|
||||
// secret must be 32 bytes long. Use something like Sha256(Bcrypt(passphrase))
|
||||
// The ciphertext is (secretbox.Overhead + 24) bytes longer than the plaintext.
|
||||
func EncryptSymmetric(plaintext []byte, secret []byte) (ciphertext []byte) {
|
||||
if len(secret) != secretLen {
|
||||
panic(fmt.Sprintf("Secret must be 32 bytes long, got len %v", len(secret)))
|
||||
}
|
||||
nonce := crypto.CRandBytes(nonceLen)
|
||||
nonceArr := [nonceLen]byte{}
|
||||
copy(nonceArr[:], nonce)
|
||||
secretArr := [secretLen]byte{}
|
||||
copy(secretArr[:], secret)
|
||||
ciphertext = make([]byte, nonceLen+secretbox.Overhead+len(plaintext))
|
||||
copy(ciphertext, nonce)
|
||||
secretbox.Seal(ciphertext[nonceLen:nonceLen], plaintext, &nonceArr, &secretArr)
|
||||
return ciphertext
|
||||
}
|
||||
|
||||
// secret must be 32 bytes long. Use something like Sha256(Bcrypt(passphrase))
|
||||
// The ciphertext is (secretbox.Overhead + 24) bytes longer than the plaintext.
|
||||
func DecryptSymmetric(ciphertext []byte, secret []byte) (plaintext []byte, err error) {
|
||||
if len(secret) != secretLen {
|
||||
panic(fmt.Sprintf("Secret must be 32 bytes long, got len %v", len(secret)))
|
||||
}
|
||||
if len(ciphertext) <= secretbox.Overhead+nonceLen {
|
||||
return nil, errors.New("ciphertext is too short")
|
||||
}
|
||||
nonce := ciphertext[:nonceLen]
|
||||
nonceArr := [nonceLen]byte{}
|
||||
copy(nonceArr[:], nonce)
|
||||
secretArr := [secretLen]byte{}
|
||||
copy(secretArr[:], secret)
|
||||
plaintext = make([]byte, len(ciphertext)-nonceLen-secretbox.Overhead)
|
||||
_, ok := secretbox.Open(plaintext[:0], ciphertext[nonceLen:], &nonceArr, &secretArr)
|
||||
if !ok {
|
||||
return nil, errors.New("ciphertext decryption failed")
|
||||
}
|
||||
return plaintext, nil
|
||||
}
|
||||
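`EncryptSymmetric` prepends the 24-byte nonce to the secretbox output, so a ciphertext is always `len(plaintext) + 24 + secretbox.Overhead` bytes and the nonce never has to be transmitted separately. A short round-trip sketch of that layout using x/crypto's secretbox directly:

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/nacl/secretbox"
)

func main() {
	var key [32]byte
	var nonce [24]byte
	if _, err := rand.Read(key[:]); err != nil {
		panic(err)
	}
	if _, err := rand.Read(nonce[:]); err != nil {
		panic(err)
	}

	msg := []byte("sometext")

	// Same layout as above: nonce || secretbox(plaintext).
	ct := make([]byte, 24, 24+secretbox.Overhead+len(msg))
	copy(ct, nonce[:])
	ct = secretbox.Seal(ct, msg, &nonce, &key)
	fmt.Println(len(ct) == 24+secretbox.Overhead+len(msg)) // true

	pt, ok := secretbox.Open(nil, ct[24:], &nonce, &key)
	fmt.Println(string(pt), ok) // sometext true
}
```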
40
crypto/xsalsa20symmetric/symmetric_test.go
Normal file
@@ -0,0 +1,40 @@
|
||||
package xsalsa20symmetric
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"golang.org/x/crypto/bcrypt"
|
||||
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
)
|
||||
|
||||
func TestSimple(t *testing.T) {
|
||||
|
||||
plaintext := []byte("sometext")
|
||||
secret := []byte("somesecretoflengththirtytwo===32")
|
||||
ciphertext := EncryptSymmetric(plaintext, secret)
|
||||
plaintext2, err := DecryptSymmetric(ciphertext, secret)
|
||||
|
||||
require.NoError(t, err, "%+v", err)
|
||||
assert.Equal(t, plaintext, plaintext2)
|
||||
}
|
||||
|
||||
func TestSimpleWithKDF(t *testing.T) {
|
||||
|
||||
plaintext := []byte("sometext")
|
||||
secretPass := []byte("somesecret")
|
||||
secret, err := bcrypt.GenerateFromPassword(secretPass, 12)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
secret = crypto.Sha256(secret)
|
||||
|
||||
ciphertext := EncryptSymmetric(plaintext, secret)
|
||||
plaintext2, err := DecryptSymmetric(ciphertext, secret)
|
||||
|
||||
require.NoError(t, err, "%+v", err)
|
||||
assert.Equal(t, plaintext, plaintext2)
|
||||
}
|
||||
@@ -65,5 +65,5 @@ networks:
|
||||
ipam:
|
||||
driver: default
|
||||
config:
|
||||
-
|
||||
subnet: 192.167.10.0/16
|
||||
-
|
||||
subnet: 192.167.10.0/16
|
||||
@@ -22,6 +22,10 @@ module.exports = {
|
||||
index: "tendermint"
|
||||
},
|
||||
versions: [
|
||||
{
|
||||
"label": "v0.32",
|
||||
"key": "v0.32"
|
||||
},
|
||||
{
|
||||
"label": "v0.33",
|
||||
"key": "v0.33"
|
||||
|
||||
@@ -21,10 +21,10 @@ Tendermint?](introduction/what-is-tendermint.md).
|
||||
|
||||
To get started quickly with an example application, see the [quick start guide](introduction/quick-start.md).
|
||||
|
||||
To learn about application development on Tendermint, see the [Application Blockchain Interface](https://github.com/tendermint/tendermint/tree/master/spec/abci).
|
||||
To learn about application development on Tendermint, see the [Application Blockchain Interface](https://github.com/tendermint/spec/tree/master/spec/abci).
|
||||
|
||||
For more details on using Tendermint, see the respective documentation for
|
||||
[Tendermint Core](tendermint-core/), [benchmarking and monitoring](tools/), and [network deployments](nodes/).
|
||||
[Tendermint Core](tendermint-core/), [benchmarking and monitoring](tools/), and [network deployments](networks/).
|
||||
|
||||
To find out about the Tendermint ecosystem you can go [here](https://github.com/tendermint/awesome#ecosystem). If you are a project that is using Tendermint you are welcome to make a PR to add your project to the list.
|
||||
|
||||
|
||||
@@ -57,4 +57,4 @@ See the following for more extensive documentation:
|
||||
- [Interchain Standard for the Light-Client REST API](https://github.com/cosmos/cosmos-sdk/pull/1028)
|
||||
- [Tendermint RPC Docs](https://docs.tendermint.com/master/rpc/)
|
||||
- [Tendermint in Production](../tendermint-core/running-in-production.md)
|
||||
- [ABCI spec](https://github.com/tendermint/tendermint/tree/95cf253b6df623066ff7cd4074a94e7a3f147c7a/spec/abci)
|
||||
- [ABCI spec](https://github.com/tendermint/spec/tree/95cf253b6df623066ff7cd4074a94e7a3f147c7a/spec/abci)
|
||||
|
||||
@@ -68,7 +68,7 @@ tendermint start
|
||||
```
|
||||
|
||||
If you have used Tendermint, you may want to reset the data for a new
|
||||
blockchain by running `tendermint unsafe-reset-all`. Then you can run
|
||||
blockchain by running `tendermint unsafe_reset_all`. Then you can run
|
||||
`tendermint start` to start Tendermint, and connect to the app. For more
|
||||
details, see [the guide on using Tendermint](../tendermint-core/using-tendermint.md).
|
||||
|
||||
@@ -96,21 +96,25 @@ like:
|
||||
|
||||
```json
|
||||
{
|
||||
"check_tx": { ... },
|
||||
"deliver_tx": {
|
||||
"tags": [
|
||||
{
|
||||
"key": "YXBwLmNyZWF0b3I=",
|
||||
"value": "amFl"
|
||||
},
|
||||
{
|
||||
"key": "YXBwLmtleQ==",
|
||||
"value": "YWJjZA=="
|
||||
}
|
||||
]
|
||||
},
|
||||
"hash": "9DF66553F98DE3C26E3C3317A3E4CED54F714E39",
|
||||
"height": 14
|
||||
"jsonrpc": "2.0",
|
||||
"id": "",
|
||||
"result": {
|
||||
"check_tx": {},
|
||||
"deliver_tx": {
|
||||
"tags": [
|
||||
{
|
||||
"key": "YXBwLmNyZWF0b3I=",
|
||||
"value": "amFl"
|
||||
},
|
||||
{
|
||||
"key": "YXBwLmtleQ==",
|
||||
"value": "YWJjZA=="
|
||||
}
|
||||
]
|
||||
},
|
||||
"hash": "9DF66553F98DE3C26E3C3317A3E4CED54F714E39",
|
||||
"height": 14
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
@@ -125,11 +129,15 @@ The result should look like:
|
||||
|
||||
```json
|
||||
{
|
||||
"response": {
|
||||
"log": "exists",
|
||||
"index": "-1",
|
||||
"key": "YWJjZA==",
|
||||
"value": "YWJjZA=="
|
||||
"jsonrpc": "2.0",
|
||||
"id": "",
|
||||
"result": {
|
||||
"response": {
|
||||
"log": "exists",
|
||||
"index": "-1",
|
||||
"key": "YWJjZA==",
|
||||
"value": "YWJjZA=="
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
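Both responses above come from the node's JSON-RPC interface; the `broadcast_tx_commit` and `abci_query` results are now wrapped in the usual `jsonrpc`/`id`/`result` envelope. A small sketch of issuing the query over HTTP, assuming a local node with the default RPC address `localhost:26657`:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Roughly equivalent to: curl 'localhost:26657/abci_query?data="abcd"'
	q := url.Values{}
	q.Set("data", `"abcd"`)

	resp, err := http.Get("http://localhost:26657/abci_query?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // the jsonrpc/id/result envelope shown above
}
```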
@@ -182,7 +190,7 @@ node example/counter.js
|
||||
In another window, reset and start `tendermint`:
|
||||
|
||||
```sh
|
||||
tendermint unsafe-reset-all
|
||||
tendermint unsafe_reset_all
|
||||
tendermint start
|
||||
```
|
||||
|
||||
|
||||
@@ -15,7 +15,7 @@ the block itself is never stored.
|
||||
Each event contains a type and a list of attributes, which are key-value pairs
|
||||
denoting something about what happened during the method's execution. For more
|
||||
details on `Events`, see the
|
||||
[ABCI](https://github.com/tendermint/tendermint/blob/master/spec/abci/abci.md#events)
|
||||
[ABCI](https://github.com/tendermint/spec/blob/master/spec/abci/abci.md#events)
|
||||
documentation.
|
||||
|
||||
An `Event` has a composite key associated with it. A `compositeKey` is
|
||||
|
||||
@@ -79,8 +79,6 @@ Note the context/background should be written in the present tense.
|
||||
- [ADR-065: Custom Event Indexing](./adr-065-custom-event-indexing.md)
|
||||
- [ADR-068: Reverse-Sync](./adr-068-reverse-sync.md)
|
||||
- [ADR-067: Mempool Refactor](./adr-067-mempool-refactor.md)
|
||||
- [ADR-075: RPC Event Subscription Interface](./adr-075-rpc-subscription.md)
|
||||
- [ADR-076: Combine Spec and Tendermint Repositories](./adr-076-combine-spec-repo.md)
|
||||
|
||||
### Rejected
|
||||
|
||||
@@ -104,4 +102,3 @@ Note the context/background should be written in the present tense.
|
||||
- [ADR-069: Node Initialization](./adr-069-flexible-node-initialization.md)
|
||||
- [ADR-071: Proposer-Based Timestamps](adr-071-proposer-based-timestamps.md)
|
||||
- [ADR-074: Migrate Timeout Parameters to Consensus Parameters](./adr-074-timeout-params.md)
|
||||
|
||||
|
||||
@@ -213,4 +213,4 @@ type Logger interface {
|
||||
}
|
||||
```
|
||||
|
||||
See [The Hunt for a Logger Interface](https://web.archive.org/web/20210902161539/https://go-talks.appspot.com/github.com/ChrisHines/talks/structured-logging/structured-logging.slide#1). The advantage is greater composability (check out how go-kit defines colored logging or log-leveled logging on top of this interface https://github.com/go-kit/kit/tree/master/log).
|
||||
See [The Hunt for a Logger Interface](https://go-talks.appspot.com/github.com/ChrisHines/talks/structured-logging/structured-logging.slide). The advantage is greater composability (check out how go-kit defines colored logging or log-leveled logging on top of this interface https://github.com/go-kit/kit/tree/master/log).
|
||||
|
||||
@@ -84,7 +84,7 @@ The linear verification algorithm requires downloading all headers
|
||||
between the `TrustHeight` and the `LatestHeight`. The lite client downloads the
|
||||
full header for the provided `TrustHeight` and then proceeds to download `N+1`
|
||||
headers and applies the [Tendermint validation
|
||||
rules](https://docs.tendermint.com/master/spec/light-client/verification/)
|
||||
rules](https://docs.tendermint.com/master/spec/blockchain/blockchain.html#validation)
|
||||
to each block.
|
||||
|
||||
### Bisecting Verification
|
||||
@@ -119,7 +119,7 @@ network usage.
|
||||
---
|
||||
|
||||
Check out the formal specification
|
||||
[here](https://github.com/tendermint/tendermint/tree/master/spec/light-client).
|
||||
[here](https://github.com/tendermint/spec/tree/master/spec/light-client).
|
||||
|
||||
## Status
|
||||
|
||||
|
||||
@@ -108,7 +108,7 @@ This is done with:
|
||||
func (c *Client) examineConflictingHeaderAgainstTrace(
|
||||
trace []*types.LightBlock,
|
||||
targetBlock *types.LightBlock,
|
||||
source provider.Provider,
|
||||
source provider.Provider,
|
||||
now time.Time,
|
||||
) ([]*types.LightBlock, *types.LightBlock, error)
|
||||
```
|
||||
@@ -123,17 +123,17 @@ as a sanity check. If this fails we have to drop the witness.
|
||||
intermediary headers of the primary (In the above example this is A, B, C, D, F, H). If bisection fails
|
||||
or the witness stops responding then we can call the witness faulty and drop it.
|
||||
|
||||
3. We eventually reach a verified header by the witness which is not the same as the intermediary header
|
||||
3. We eventually reach a verified header by the witness which is not the same as the intermediary header
|
||||
(In the above example this is E). This is the point of bifurcation (This could also be the last header).
|
||||
|
||||
4. There is a unique case where the trace that is being examined against has blocks that have a greater
|
||||
height than the targetBlock. This can occur as part of a forward lunatic attack where the primary has
|
||||
provided a light block that has a height greater than the head of the chain (see Appendix B). In this
|
||||
case, the light client will verify the sources blocks up to the targetBlock and return the block in the
|
||||
4. There is a unique case where the trace that is being examined against has blocks that have a greater
|
||||
height than the targetBlock. This can occur as part of a forward lunatic attack where the primary has
|
||||
provided a light block that has a height greater than the head of the chain (see Appendix B). In this
|
||||
case, the light client will verify the sources blocks up to the targetBlock and return the block in the
|
||||
trace that is directly after the targetBlock in height as the `ConflictingBlock`
|
||||
|
||||
This function then returns the trace of blocks from the witness node between the common header and the
|
||||
divergent header of the primary as it is likely, as seen in the example to the right, that multiple
|
||||
divergent header of the primary as it is likely, as seen in the example to the right, that multiple
|
||||
headers where required in order to verify the divergent one. This trace will
|
||||
be used later (as is also described later in this document).
|
||||
|
||||
@@ -179,7 +179,7 @@ This then ends the process and the verify function that was called at the start
|
||||
the user.
|
||||
|
||||
For a detailed overview of how each of these three attacks can be conducted please refer to the
|
||||
[fork accountability spec](https://github.com/tendermint/tendermint/blob/master/spec/consensus/light-client/accountability.md).
|
||||
[fork accountability spec](https://github.com/tendermint/spec/blob/master/spec/consensus/light-client/accountability.md).
|
||||
|
||||
## Full Node Verification
|
||||
|
||||
@@ -212,7 +212,7 @@ clear from the current information which nodes behaved maliciously.
|
||||
|
||||
## References
|
||||
|
||||
* [Fork accountability spec](https://github.com/tendermint/tendermint/blob/master/spec/consensus/light-client/accountability.md)
|
||||
* [Fork accountability spec](https://github.com/tendermint/spec/blob/master/spec/consensus/light-client/accountability.md)
|
||||
* [ADR 056: Light client amnesia attacks](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-056-light-client-amnesia-attacks.md)
|
||||
* [ADR-059: Evidence Composition and Lifecycle](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-059-evidence-composition-and-lifecycle.md)
|
||||
* [Informal's Light Client Detector](https://github.com/informalsystems/tendermint-rs/blob/master/docs/spec/lightclient/detection/detection.md)
|
||||
@@ -238,7 +238,7 @@ a phantom validator. Given this, it was removed.
|
||||
|
||||
A unique flavor of lunatic attack is a forward lunatic attack. This is where a malicious
|
||||
node provides a header with a height greater than the height of the blockchain. Thus there
|
||||
are no witnesses capable of rebutting the malicious header. Such an attack will also
|
||||
are no witnesses capable of rebutting the malicious header. Such an attack will also
|
||||
require an accomplice, i.e. at least one other witness to also return the same forged header.
|
||||
Although such attacks can be any arbitrary height ahead, they must still remain within the
|
||||
clock drift of the light clients real time. Therefore, to detect such an attack, a light
|
||||
@@ -251,4 +251,4 @@ client will wait for a time
|
||||
for a witness to provide the latest block it has. Given the time constraints, if the witness
|
||||
is operating at the head of the blockchain, it will have a header with an earlier height but
|
||||
a later timestamp. This can be used to prove that the primary has submitted a lunatic header
|
||||
which violates monotonically increasing time.
|
||||
which violates monotonically increasing time.
|
||||
|
||||
@@ -10,7 +10,7 @@
|
||||
|
||||
## Context
|
||||
|
||||
Whilst most created evidence of malicious behavior is self evident such that any individual can verify them independently there are types of evidence, known collectively as global evidence, that require further collaboration from the network in order to accumulate enough information to create evidence that is individually verifiable and can therefore be processed through consensus. [Fork Accountability](https://github.com/tendermint/tendermint/blob/master/spec/consensus/light-client/accountability.md) has been coined to describe the entire process of detection, proving and punishing of malicious behavior. This ADR addresses specifically what a light client amnesia attack is and how it can be proven and the current decision around handling light client amnesia attacks. For information on evidence handling by the light client, it is recommended to read [ADR 47](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-047-handling-evidence-from-light-client.md).
|
||||
Whilst most created evidence of malicious behavior is self evident such that any individual can verify them independently there are types of evidence, known collectively as global evidence, that require further collaboration from the network in order to accumulate enough information to create evidence that is individually verifiable and can therefore be processed through consensus. [Fork Accountability](https://github.com/tendermint/spec/blob/master/spec/consensus/light-client/accountability.md) has been coined to describe the entire process of detection, proving and punishing of malicious behavior. This ADR addresses specifically what a light client amnesia attack is and how it can be proven and the current decision around handling light client amnesia attacks. For information on evidence handling by the light client, it is recommended to read [ADR 47](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-047-handling-evidence-from-light-client.md).
|
||||
|
||||
### Amnesia Attack
|
||||
|
||||
@@ -65,7 +65,7 @@ Light clients where all witnesses are faulty can be subject to an amnesia attack
|
||||
## References
|
||||
|
||||
- [Fork accountability algorithm](https://docs.google.com/document/d/11ZhMsCj3y7zIZz4udO9l25xqb0kl7gmWqNpGVRzOeyY/edit)
|
||||
- [Fork accountability spec](https://github.com/tendermint/tendermint/blob/master/spec/consensus/light-client/accountability.md)
|
||||
- [Fork accountability spec](https://github.com/tendermint/spec/blob/master/spec/consensus/light-client/accountability.md)
|
||||
|
||||
## Appendix A: Detailed Walkthrough of Performing a Light Client Amnesia Attack
|
||||
|
||||
@@ -128,7 +128,7 @@ This trial period will be discussed later.
|
||||
|
||||
Returning to the event of an amnesia attack, if we were to examine the behavior of the honest nodes, C1 and C2, in the schematic, C2 will not PRECOMMIT an earlier round, but it is likely, if a node in C1 were to receive +2/3 PREVOTE's or PRECOMMIT's for a higher round, that it would remove the lock and PREVOTE and PRECOMMIT for the later round. Therefore, unfortunately it is not a case of simply punishing all nodes that have double voted in the `PotentialAmnesiaEvidence`.
|
||||
|
||||
Instead we use the Proof of Lock Change (PoLC) referred to in the [consensus spec](https://github.com/tendermint/tendermint/blob/master/spec/consensus/consensus.md#terms). When an honest node votes again for a different block in a later round
|
||||
Instead we use the Proof of Lock Change (PoLC) referred to in the [consensus spec](https://github.com/tendermint/spec/blob/master/spec/consensus/consensus.md#terms). When an honest node votes again for a different block in a later round
|
||||
(which will only occur in very rare cases), it will generate the PoLC and store it in the evidence reactor for a time equal to the `MaxEvidenceAge`
|
||||
|
||||
```golang
|
||||
|
||||
@@ -106,7 +106,7 @@ invasive in the required set of protocol and implementation changes, which
|
||||
simply extends the existing `CheckTx` ABCI method. The second candidate essentially
|
||||
involves the introduction of new ABCI method(s) and would require a higher degree
|
||||
of complexity in protocol and implementation changes, some of which may either
|
||||
overlap or conflict with the upcoming introduction of [ABCI++](https://github.com/tendermint/tendermint/blob/master/docs/rfc/rfc-013-abci%2B%2B.md).
|
||||
overlap or conflict with the upcoming introduction of [ABCI++](https://github.com/tendermint/spec/blob/master/rfc/004-abci%2B%2B.md).
|
||||
|
||||
For more information on the various approaches and proposals, please see the
|
||||
[mempool discussion](https://github.com/tendermint/tendermint/discussions/6295).
|
||||
@@ -171,7 +171,7 @@ message ResponseCheckTx {
|
||||
```
|
||||
|
||||
It is entirely up to the application in determining how these fields are populated
|
||||
and with what values, e.g. the `sender` could be the signer and fee payer
|
||||
and with what values, e.g. the `sender` could be the signer and fee payer
|
||||
of the transaction, the `priority` could be the cumulative sum of the fee(s).
|
||||
|
||||
Only `sender` is required, while `priority` can be omitted which would result in
|
||||
@@ -289,7 +289,7 @@ non-contentious and backwards compatible manner.
|
||||
trying again at a later point in time or by ensuring the "child" priority is
|
||||
lower than the "parent" priority. In other words, if parents always have
|
||||
priorities that are higher than their children, then the new mempool design will
|
||||
maintain causal ordering.
|
||||
maintain causal ordering.
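
A toy sketch of the ordering rule this implies: reap by priority (here, the cumulative fee suggested above), breaking ties by arrival order, so a child transaction keeps its place behind a higher-priority parent from the same sender (types and fields are illustrative, not the mempool's actual data structures):

```go
package main

import (
	"fmt"
	"sort"
)

type tx struct {
	sender   string
	nonce    int   // per-sender sequence; a child has a higher nonce than its parent
	priority int64 // e.g. the cumulative fee, as suggested above
}

// orderForBlock sorts by priority, highest first; SliceStable keeps arrival
// order for equal priorities.
func orderForBlock(txs []tx) []tx {
	out := append([]tx(nil), txs...)
	sort.SliceStable(out, func(i, j int) bool { return out[i].priority > out[j].priority })
	return out
}

func main() {
	mempool := []tx{
		{"alice", 1, 50}, // parent
		{"alice", 2, 20}, // child: lower priority than its parent, so causal order holds
		{"bob", 1, 30},
	}
	fmt.Println(orderForBlock(mempool))
	// [{alice 1 50} {bob 1 30} {alice 2 20}]
}
```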
|
||||
|
||||
### Neutral
|
||||
|
||||
@@ -299,5 +299,5 @@ non-contentious and backwards compatible manner.
|
||||
|
||||
## References
|
||||
|
||||
- [ABCI++](https://github.com/tendermint/tendermint/blob/master/docs/rfc/rfc-013-abci%2B%2B.md)
|
||||
- [ABCI++](https://github.com/tendermint/spec/blob/master/rfc/004-abci%2B%2B.md)
|
||||
- [Mempool Discussion](https://github.com/tendermint/tendermint/discussions/6295)
|
||||
|
||||
@@ -10,15 +10,15 @@ Accepted
|
||||
|
||||
## Context
|
||||
|
||||
The advent of state sync and block pruning gave rise to the opportunity for full nodes to participate in consensus without needing complete block history. This also introduced a problem with respect to evidence handling. Nodes that didn't have all the blocks within the evidence age were incapable of validating evidence, thus halting if that evidence was committed on chain.
|
||||
The advent of state sync and block pruning gave rise to the opportunity for full nodes to participate in consensus without needing complete block history. This also introduced a problem with respect to evidence handling. Nodes that didn't have all the blocks within the evidence age were incapable of validating evidence, thus halting if that evidence was committed on chain.
|
||||
|
||||
[ADR 068](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-068-reverse-sync.md) was published in response to this problem and modified the spec to add a minimum block history invariant. This predominantly sought to extend state sync so that it was capable of fetching and storing the `Header`, `Commit` and `ValidatorSet` (essentially a `LightBlock`) of the last `n` heights, where `n` was calculated based from the evidence age.
|
||||
[RFC005](https://github.com/tendermint/spec/blob/master/rfc/005-reverse-sync.md) was published in response to this problem and modified the spec to add a minimum block history invariant. This predominantly sought to extend state sync so that it was capable of fetching and storing the `Header`, `Commit` and `ValidatorSet` (essentially a `LightBlock`) of the last `n` heights, where `n` was calculated based from the evidence age.
|
||||
|
||||
This ADR sets out to describe the design of this state sync extension as well as modifications to the light client provider and the merging of tm store.
|
||||
|
||||
## Decision
|
||||
|
||||
The state sync reactor will be extended by introducing 2 new P2P messages (and a new channel).
|
||||
The state sync reactor will be extended by introducing 2 new P2P messages (and a new channel).
|
||||
|
||||
```protobuf
|
||||
message LightBlockRequest {
|
||||
@@ -26,7 +26,7 @@ message LightBlockRequest {
|
||||
}
|
||||
|
||||
message LightBlockResponse {
|
||||
tendermint.types.LightBlock light_block = 1;
|
||||
tendermint.types.LightBlock light_block = 1;
|
||||
}
|
||||
```
|
||||
|
||||
@@ -93,5 +93,5 @@ This ADR tries to remain within the scope of extending state sync, however the c

## References

- [Reverse Sync RFC](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-068-reverse-sync.md)
- [Original Issue](https://github.com/tendermint/tendermint/issues/5617)

@@ -252,6 +252,11 @@ N/A

## References

- [this
  branch](https://github.com/tendermint/tendermint/tree/tychoish/scratch-node-minimize)
  contains experimental work in the implementation of the node package
  to unwind some of the hard dependencies between components.

- [the component
  graph](https://peter.bourgon.org/go-for-industrial-programming/#the-component-graph)
  as a framing for internal service construction.

@@ -2,13 +2,12 @@

## Changelog

- July 15 2021: Created by @williambanfield
- Aug 4 2021: Draft completed by @williambanfield
- Aug 5 2021: Draft updated to include data structure changes by @williambanfield
- Aug 20 2021: Language edits completed by @williambanfield
- Oct 25 2021: Update the ADR to match updated spec from @cason by @williambanfield
- Nov 10 2021: Additional language updates by @williambanfield per feedback from @cason
- Feb 2 2022: Synchronize logic for timely with latest version of the spec by @williambanfield

## Status

@@ -16,14 +15,14 @@

## Context

Tendermint currently provides a monotonically increasing source of time known as [BFTTime](https://github.com/tendermint/tendermint/blob/master/spec/consensus/bft-time.md).
This mechanism for producing a source of time is reasonably simple.
Each correct validator adds a timestamp to each `Precommit` message it sends.
The timestamp it sends is either the validator's current known Unix time or one millisecond greater than the previous block time, depending on which value is greater.
When a block is produced, the proposer chooses the block timestamp as the weighted median of the times in all of the `Precommit` messages the proposer received.
The weighting is proportional to the amount of voting power, or stake, a validator has on the network.
This mechanism for producing timestamps is both deterministic and byzantine fault tolerant.

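For reference, the weighted median can be sketched as follows. This is an illustrative reconstruction of the rule just described, not the `types.MedianTime` implementation; the `weightedVote` type is invented for the example.

```go
package bfttime

import (
	"sort"
	"time"
)

// weightedVote pairs a Precommit timestamp with the voting power of the
// validator that sent it.
type weightedVote struct {
	Time  time.Time
	Power int64
}

// weightedMedianTime returns the timestamp t such that validators holding
// more than half of the total voting power reported a time <= t.
func weightedMedianTime(votes []weightedVote) time.Time {
	sort.Slice(votes, func(i, j int) bool { return votes[i].Time.Before(votes[j].Time) })

	var total int64
	for _, v := range votes {
		total += v.Power
	}

	var cumulative int64
	for _, v := range votes {
		cumulative += v.Power
		if cumulative*2 > total {
			return v.Time
		}
	}
	return time.Time{} // unreachable for a non-empty slice with positive power
}
```
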
This current mechanism for producing timestamps has a few drawbacks.
Validators do not have to agree at all on how close the selected block timestamp is to their own currently known Unix time.
Additionally, any amount of voting power `>1/3` may directly control the block timestamp.
@@ -31,17 +30,17 @@ As a result, it is quite possible that the timestamp is not particularly meaning

These drawbacks present issues in the Tendermint protocol.
Timestamps are used by light clients to verify blocks.
Light clients rely on correspondence between their own currently known Unix time and the block timestamp to verify blocks they see.
However, their currently known Unix time may be greatly divergent from the block timestamp as a result of the limitations of `BFTTime`.

The proposer-based timestamps specification suggests an alternative approach for producing block timestamps that remedies these issues.
Proposer-based timestamps alter the current mechanism for producing block timestamps in two main ways:

1. The block proposer is amended to offer up its currently known Unix time as the timestamp for the next block instead of the `BFTTime`.
1. Correct validators only approve the proposed block timestamp if it is close enough to their own currently known Unix time.

The result of these changes is a more meaningful timestamp that cannot be controlled by `<= 2/3` of the validator voting power.
This document outlines the necessary code changes in Tendermint to implement the corresponding [proposer-based timestamps specification](https://github.com/tendermint/tendermint/tree/master/spec/consensus/proposer-based-timestamp).

## Alternative Approaches

@@ -54,17 +53,17 @@ An alternate approach is to remove timestamps altogether from the block protocol
`BFTTime` is deterministic but may be arbitrarily inaccurate.
However, having a reliable source of time is quite useful for applications and protocols built on top of a blockchain.

We therefore decided not to remove the timestamp.
Applications often wish for some transactions to occur on a certain day, on a regular period, or after some time following a different event.
All of these require some meaningful representation of agreed upon time.
The following protocols and application features require a reliable source of time:

* Tendermint Light Clients [rely on correspondence between their known time](https://github.com/tendermint/tendermint/blob/master/spec/light-client/verification/README.md#definitions-1) and the block time for block verification.
* Tendermint Evidence validity is determined [either in terms of heights or in terms of time](https://github.com/tendermint/tendermint/blob/8029cf7a0fcc89a5004e173ec065aa48ad5ba3c8/spec/consensus/evidence.md#verification).
* Unbonding of staked assets in the Cosmos Hub [occurs after a period of 21 days](https://github.com/cosmos/governance/blob/ce75de4019b0129f6efcbb0e752cd2cc9e6136d3/params-change/Staking.md#unbondingtime).
* IBC packets can use either a [timestamp or a height to timeout packet delivery](https://docs.cosmos.network/v0.44/ibc/overview.html#acknowledgements).

Finally, inflation distribution in the Cosmos Hub uses an approximation of time to calculate an annual percentage rate.
This approximation of time is calculated using [block heights with an estimated number of blocks produced in a year](https://github.com/cosmos/governance/blob/master/params-change/Mint.md#blocksperyear).
Proposer-based timestamps will allow this inflation calculation to use a more meaningful and accurate source of time.

@@ -85,7 +84,7 @@ These changes will be to the following components:

### Changes to `CommitSig`

The [CommitSig](https://github.com/tendermint/tendermint/blob/a419f4df76fe4aed668a6c74696deabb9fe73211/types/block.go#L604) struct currently contains a timestamp.
This timestamp is the current Unix time known to the validator when it issued a `Precommit` for the block.
This timestamp is no longer used and will be removed in this change.

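For reference, a sketch of the struct after the change, with the removed field left as a comment; field names follow the linked `CommitSig` definition.

```go
type CommitSig struct {
	BlockIDFlag      BlockIDFlag `json:"block_id_flag"`
	ValidatorAddress Address     `json:"validator_address"`
	// Timestamp     time.Time   `json:"timestamp"` // removed: no longer used by consensus
	Signature        []byte      `json:"signature"`
}
```
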
@@ -104,11 +103,11 @@ type CommitSig struct {

`Precommit` and `Prevote` messages use a common [Vote struct](https://github.com/tendermint/tendermint/blob/a419f4df76fe4aed668a6c74696deabb9fe73211/types/vote.go#L50).
This struct currently contains a timestamp.
This timestamp is set using the [voteTime](https://github.com/tendermint/tendermint/blob/e8013281281985e3ada7819f42502b09623d24a0/internal/consensus/state.go#L2241) function and therefore vote times correspond to the current Unix time known to the validator, provided this time is greater than the timestamp of the previous block.
For precommits, this timestamp is used to construct the [CommitSig that is included in the block in the LastCommit](https://github.com/tendermint/tendermint/blob/e8013281281985e3ada7819f42502b09623d24a0/types/block.go#L754) field.
For prevotes, this field is currently unused.
Proposer-based timestamps will use the timestamp that the proposer sets into the block and will therefore no longer require that a timestamp be included in the vote messages.
This timestamp is therefore no longer useful as part of consensus and may optionally be dropped from the message.

`Vote` will be updated as follows:

@@ -116,7 +115,7 @@ This timestamp is therefore no longer useful as part of consensus and may option
type Vote struct {
    Type             tmproto.SignedMsgType `json:"type"`
    Height           int64                 `json:"height"`
    Round            int32                 `json:"round"`
    BlockID          BlockID               `json:"block_id"` // zero if vote is nil.
--  Timestamp        time.Time             `json:"timestamp"`
    ValidatorAddress Address               `json:"validator_address"`
@@ -127,17 +126,25 @@ type Vote struct {

### New consensus parameters

The proposer-based timestamp specification includes multiple new parameters that must be the same among all validators.
These parameters are `PRECISION`, `MSGDELAY`, and `ACCURACY`.

The `PRECISION` and `MSGDELAY` parameters are used to determine if the proposed timestamp is acceptable.
A validator will only Prevote a proposal if the proposal timestamp is considered `timely`.
A proposal timestamp is considered `timely` if it is within `PRECISION` and `MSGDELAY` of the Unix time known to the validator.
More specifically, a proposal timestamp is `timely` if `proposalTimestamp - PRECISION ≤ validatorLocalTime ≤ proposalTimestamp + PRECISION + MSGDELAY`.
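
A minimal sketch of that predicate, assuming only the standard library `time` package (the names mirror the spec's pseudo-variables and this is not the Tendermint implementation):

```go
package main

import (
	"fmt"
	"time"
)

// isTimely reports whether a proposal timestamp is timely for a validator
// whose clock currently reads localTime:
//   proposalTime - PRECISION <= localTime <= proposalTime + PRECISION + MSGDELAY
func isTimely(proposalTime, localTime time.Time, precision, msgDelay time.Duration) bool {
	lower := proposalTime.Add(-precision)
	upper := proposalTime.Add(precision + msgDelay)
	return !localTime.Before(lower) && !localTime.After(upper)
}

func main() {
	now := time.Now()
	precision := 500 * time.Millisecond
	msgDelay := 3 * time.Second
	// A proposal stamped two seconds ago is timely; one stamped a minute ago is not.
	fmt.Println(isTimely(now.Add(-2*time.Second), now, precision, msgDelay)) // true
	fmt.Println(isTimely(now.Add(-time.Minute), now, precision, msgDelay))   // false
}
```
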
Because the `PRECISION` and `MSGDELAY` parameters must be the same across all validators, they will be added to the [consensus parameters](https://github.com/tendermint/tendermint/blob/master/proto/tendermint/types/params.proto#L13) as [durations](https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#google.protobuf.Duration).

The proposer-based timestamp specification also includes a [new ACCURACY parameter](https://github.com/tendermint/spec/blob/master/spec/consensus/proposer-based-timestamp/pbts-sysmodel_001_draft.md#pbts-clocksync-external0).
Intuitively, `ACCURACY` represents the difference between the ‘real’ time and the currently known time of correct validators.
The currently known Unix time of any validator is always somewhat different from real time.
`ACCURACY` is the largest such difference between each validator's time and real time taken as an absolute value.
This is not something a computer can determine on its own and must be specified as an estimate by the community running a Tendermint-based chain.
It is used in the new algorithm to [calculate a timeout for the propose step](https://github.com/tendermint/spec/blob/master/spec/consensus/proposer-based-timestamp/pbts-algorithm_001_draft.md#pbts-alg-startround0).
`ACCURACY` is assumed to be the same across all validators and therefore should be included as a consensus parameter.

The consensus parameters will be updated to include this `Timestamp` field as follows:

```diff
type ConsensusParams struct {
@@ -145,14 +152,15 @@ type ConsensusParams struct {
   Evidence  EvidenceParams  `json:"evidence"`
   Validator ValidatorParams `json:"validator"`
   Version   VersionParams   `json:"version"`
++ Timestamp TimestampParams `json:"timestamp"`
}
```

```go
type TimestampParams struct {
  Accuracy  time.Duration `json:"accuracy"`
  Precision time.Duration `json:"precision"`
  MsgDelay  time.Duration `json:"msg_delay"`
}
```
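
As a sketch of the kind of sanity checking these parameters would need (assuming the `TimestampParams` struct above and the standard `errors` package; the validation rules are illustrative, not specified by this ADR):

```go
// ValidateBasic rejects obviously unusable parameter values. Illustrative only.
func (p TimestampParams) ValidateBasic() error {
	if p.Accuracy <= 0 {
		return errors.New("timestamp.accuracy must be greater than 0")
	}
	if p.Precision <= 0 {
		return errors.New("timestamp.precision must be greater than 0")
	}
	if p.MsgDelay <= 0 {
		return errors.New("timestamp.msg_delay must be greater than 0")
	}
	return nil
}
```
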
@@ -168,14 +176,14 @@ The timestamp the proposer sets into the `Header` will change depending on if th

#### Proposal of a block that has not previously received a polka

If a proposer is proposing a new block, then it will set the Unix time currently known to the proposer into the `Header.Timestamp` field.
The proposer will also set this same timestamp into the `Timestamp` field of the `Proposal` message that it issues.

#### Re-proposal of a block that has previously received a polka

If a proposer is re-proposing a block that has previously received a polka on the network, then the proposer does not update the `Header.Timestamp` of that block.
Instead, the proposer simply re-proposes the exact same block.
This way, the proposed block has the exact same block ID as the previously proposed block and the validators that have already received that block do not need to attempt to receive it again.

The proposer will set the re-proposed block's `Header.Timestamp` as the `Proposal` message's `Timestamp`.

@@ -184,23 +192,42 @@ The proposer will set the re-proposed block's `Header.Timestamp` as the `Proposa

Block timestamps must be monotonically increasing.
In `BFTTime`, if a validator’s clock was behind, the [validator added 1 millisecond to the previous block’s time and used that in its vote messages](https://github.com/tendermint/tendermint/blob/e8013281281985e3ada7819f42502b09623d24a0/internal/consensus/state.go#L2246).
A goal of adding proposer-based timestamps is to enforce some degree of clock synchronization, so having a mechanism that completely ignores the Unix time of the validator no longer works.

Validator clocks will not be perfectly in sync.
Therefore, the proposer’s current known Unix time may be less than the previous block's `Header.Time`.
If the proposer’s current known Unix time is less than the previous block's `Header.Time`, the proposer will sleep until its known Unix time exceeds it.

This change will require amending the [defaultDecideProposal](https://github.com/tendermint/tendermint/blob/822893615564cb20b002dd5cf3b42b8d364cb7d9/internal/consensus/state.go#L1180) method.
This method should now schedule a timeout that fires when the proposer’s time is greater than the previous block's `Header.Time`.
When the timeout fires, the proposer will finally issue the `Proposal` message.

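A sketch of that waiting behaviour, assuming the standard library `time` package; the function shape and the one-millisecond cushion are illustrative assumptions, not the `defaultDecideProposal` implementation:

```go
// scheduleProposal delays issuing the Proposal until the proposer's clock has
// passed the previous block's Header.Time, so the new timestamp is strictly
// greater than the previous one.
func scheduleProposal(prevBlockTime time.Time, now func() time.Time, propose func()) {
	wait := prevBlockTime.Sub(now())
	if wait < 0 {
		wait = 0
	}
	time.AfterFunc(wait+time.Millisecond, propose)
}
```
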
#### Changes to the propose step timeout

Currently, a validator waiting for a proposal will proceed past the propose step if the configured propose timeout is reached and no proposal is seen.
Proposer-based timestamps require changing this timeout logic.

The proposer will now wait until its current known Unix time exceeds the previous block's `Header.Time` to propose a block.
The validators must now take this and some other factors into account when deciding when to timeout the propose step.
Specifically, the propose step timeout must also take into account potential inaccuracy in the validator’s clock and in the clock of the proposer.
Additionally, there may be a delay communicating the proposal message from the proposer to the other validators.

Therefore, validators waiting for a proposal must wait until after the previous block's `Header.Time` before timing out.
To account for possible inaccuracy in its own clock, inaccuracy in the proposer’s clock, and message delay, validators waiting for a proposal will wait until the previous block's `Header.Time + 2*ACCURACY + MSGDELAY`.
The spec defines this as `waitingTime`.

The [propose step’s timeout is set in enterPropose](https://github.com/tendermint/tendermint/blob/822893615564cb20b002dd5cf3b42b8d364cb7d9/internal/consensus/state.go#L1108) in `state.go`.
`enterPropose` will be changed to calculate waiting time using the new consensus parameters.
The timeout in `enterPropose` will then be set as the maximum of `waitingTime` and the [configured proposal step timeout](https://github.com/tendermint/tendermint/blob/dc7c212c41a360bfe6eb38a6dd8c709bbc39aae7/config/config.go#L1013).

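A sketch of that calculation, assuming the standard library `time` package; parameter names are illustrative:

```go
// proposeStepTimeout computes how long to wait in the propose step:
// waitingTime is the portion of (previous block time + 2*ACCURACY + MSGDELAY)
// that has not yet elapsed on the validator's clock, and the result is the
// larger of waitingTime and the operator-configured propose timeout.
func proposeStepTimeout(prevBlockTime, now time.Time, accuracy, msgDelay, configuredTimeout time.Duration) time.Duration {
	waitingTime := prevBlockTime.Add(2*accuracy + msgDelay).Sub(now)
	if waitingTime > configuredTimeout {
		return waitingTime
	}
	return configuredTimeout
}
```
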
### Changes to proposal validation rules

The rules for validating a proposed block will be modified to implement proposer-based timestamps.
We will change the validation logic to ensure that a proposal is `timely`.

Per the proposer-based timestamps spec, `timely` only needs to be checked if a block has not received a +2/3 majority of `Prevotes` in a round.
If a block previously received a +2/3 majority of prevotes in a previous round, then +2/3 of the voting power considered the block's timestamp near enough to their own currently known Unix time in that round.

The validation logic will be updated to check `timely` for blocks that did not previously receive +2/3 prevotes in a round.
Receiving +2/3 prevotes in a round is frequently referred to as a 'polka' and we will use this term for simplicity.

#### Current timestamp validation logic

@@ -223,17 +250,17 @@ This takes place in the [VerifyCommit function](https://github.com/tendermint/te

`BFTTime` validation is no longer applicable and will be removed.
This means that validators will no longer check that the block timestamp is a weighted median of `LastCommit` timestamps.
Specifically, we will remove the call to [MedianTime in the validateBlock function](https://github.com/tendermint/tendermint/blob/4db71da68e82d5cb732b235eeb2fd69d62114b45/state/validation.go#L117).
The `MedianTime` function can be completely removed.

Since `CommitSig`s will no longer contain a timestamp, the validator authenticating a commit will no longer include the `CommitSig` timestamp in the hash of fields it builds to check against the cryptographic signature.

#### Timestamp validation when a block has not received a polka

The [POLRound](https://github.com/tendermint/tendermint/blob/68ca65f5d79905abd55ea999536b1a3685f9f19d/types/proposal.go#L29) in the `Proposal` message indicates which round the block received a polka.
A negative value in the `POLRound` field indicates that the block has not previously been proposed on the network.
Therefore, the validation logic will check that the proposal is `timely` when `POLRound < 0`.

When a validator receives a `Proposal` message, the validator will check that the `Proposal.Timestamp` is at most `PRECISION` greater than the current Unix time known to the validator, and at maximum `PRECISION + MSGDELAY` less than the current Unix time known to the validator.
If the timestamp is not within these bounds, the proposed block will not be considered `timely`.

Once a full block matching the `Proposal` message is received, the validator will also check that the timestamp in the `Header.Timestamp` of the block matches this `Proposal.Timestamp`.

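A sketch of those two checks, assuming the standard library `time` and `errors` packages; the function and error messages are illustrative, not the implementation:

```go
// validateProposedTime checks a proposal with POLRound < 0: the
// Proposal.Timestamp must be timely for the validator's clock, and the
// received block's Header.Timestamp must match it exactly.
func validateProposedTime(headerTime, proposalTime, localTime time.Time, precision, msgDelay time.Duration) error {
	lower := proposalTime.Add(-precision)
	upper := proposalTime.Add(precision + msgDelay)
	if localTime.Before(lower) || localTime.After(upper) {
		return errors.New("proposal timestamp is not timely")
	}
	if !headerTime.Equal(proposalTime) {
		return errors.New("block Header.Timestamp does not match Proposal.Timestamp")
	}
	return nil
}
```
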
@@ -248,33 +275,13 @@ When a block is re-proposed that has already received a +2/3 majority of `Prevot

A validator will not check that the `Proposal` is `timely` if the propose message has a non-negative `POLRound`.
If the `POLRound` is non-negative, each validator will simply ensure that it received the `Prevote` messages for the proposed block in the round indicated by `POLRound`.

If the validator did not receive `Prevote` messages for the proposed block in `POLRound`, then it will prevote nil.
Validators already check that +2/3 prevotes were seen in `POLRound`, so this does not represent a change to the prevote logic.

A validator will also check that the proposed timestamp is greater than the timestamp of the block for the previous height.
If the timestamp is not greater than the previous block's timestamp, the block will not be considered valid, which is the same as the current logic.

Additionally, this validation logic can be updated to check that the `Proposal.Timestamp` matches the `Header.Timestamp` of the proposed block, but it is less relevant since checking that votes were received is sufficient to ensure the block timestamp is correct.

#### Relaxation of the 'Timely' check

The `MessageDelay` and `Precision` parameters provide a means to bound the timestamp of a proposed block.
Selecting values that are too small presents a possible liveness issue for the network.
If a Tendermint network selects a `MessageDelay` parameter that does not accurately reflect the time to broadcast a proposal message to all of the validators on the network, nodes will begin rejecting proposals from otherwise correct proposers because these proposals will appear to be too far in the past.

`MessageDelay` and `Precision` are planned to be configured as `ConsensusParams`.
A very common way to update `ConsensusParams` is by executing a transaction included in a block that specifies new values for them.
However, if the network is unable to produce blocks because of this liveness issue, no such transaction may be executed.
To prevent this dangerous condition, we will add a relaxation mechanism to the `Timely` predicate.
If consensus takes more than 10 rounds to produce a block for any reason, the `MessageDelay` will be doubled.
This doubling will continue for each subsequent 10 rounds of consensus.
This will enable chains that selected too small of a value for the `MessageDelay` parameter to eventually issue a transaction and readjust the parameters to more accurately reflect the broadcast time.

This liveness issue is not as problematic for chains with very small `Precision` values.
Operators can more easily readjust local validator clocks to be more aligned.
Additionally, chains that wish to increase a small `Precision` value can still take advantage of the `MessageDelay` relaxation, waiting for the `MessageDelay` value to grow significantly and issuing proposals with timestamps that are far in the past of their peers.

For more discussion of this, see [issue 371](https://github.com/tendermint/spec/issues/371).

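A sketch of that relaxation rule, assuming the standard library `time` package; the function is illustrative only:

```go
// relaxedMessageDelay doubles the effective MessageDelay for every 10 rounds
// that consensus has spent trying to produce the block at the current height.
func relaxedMessageDelay(baseDelay time.Duration, round int32) time.Duration {
	delay := baseDelay
	for i := int32(0); i < round/10; i++ {
		delay *= 2
	}
	return delay
}
```
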
### Changes to the prevote step

@@ -282,7 +289,7 @@ Currently, a validator will prevote a proposal in one of three cases:

* Case 1: Validator has no locked block and receives a valid proposal.
* Case 2: Validator has a locked block and receives a valid proposal matching its locked block.
* Case 3: Validator has a locked block, sees a valid proposal not matching its locked block but sees +2/3 prevotes for the proposal’s block, either in the current round or in a round greater than or equal to the round in which it locked its locked block.

The only change we will make to the prevote step is to what a validator considers a valid proposal as detailed above.

@@ -328,6 +335,5 @@ This skew will be bound by the `PRECISION` value, so it is unlikely to be too la

## References

* [PBTS Spec](https://github.com/tendermint/tendermint/tree/master/spec/consensus/proposer-based-timestamp)
* [BFTTime spec](https://github.com/tendermint/spec/blob/master/spec/consensus/bft-time.md)
* [Issue 371](https://github.com/tendermint/spec/issues/371)

@@ -232,4 +232,4 @@ the implementation timeline.

[adr61]: ./adr-061-p2p-refactor-scope.md
[adr62]: ./adr-062-p2p-architecture.md
[rfc]: ../rfc/rfc-000-p2p-roadmap.rst