Mirror of https://github.com/tendermint/tendermint.git (synced 2026-01-13 00:02:52 +00:00)

Compare commits: abci++_reb...wb/undo-qu (349 commits)
.github/ISSUE_TEMPLATE/proposal.md (vendored, new file, 37 lines)

```diff
@@ -0,0 +1,37 @@
+---
+name: Protocol Change Proposal
+about: Create a proposal to request a change to the protocol
+
+---
+
+<!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
+v ✰ Thanks for opening an issue! ✰
+v Before smashing the submit button please review the template.
+v Word of caution: Under-specified proposals may be rejected summarily
+☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
+
+# Protocol Change Proposal
+
+## Summary
+
+<!-- Short, concise description of the proposed change -->
+
+## Problem Definition
+
+<!-- Why do we need this change?
+What problems may be addressed by introducing this change?
+What benefits does Tendermint stand to gain by including this change?
+Are there any disadvantages of including this change? -->
+
+## Proposal
+
+<!-- Detailed description of requirements of implementation -->
+
+____
+
+#### For Admin Use
+
+- [ ] Not duplicate issue
+- [ ] Appropriate labels applied
+- [ ] Appropriate contributors tagged
+- [ ] Contributor assigned/self-assigned
```
.github/workflows/docker.yml (vendored, 2 lines changed)

```diff
@@ -49,7 +49,7 @@ jobs:
           password: ${{ secrets.DOCKERHUB_TOKEN }}
 
       - name: Publish to Docker Hub
-        uses: docker/build-push-action@v2.8.0
+        uses: docker/build-push-action@v2.9.0
        with:
           context: .
           file: ./DOCKER/Dockerfile
```
.github/workflows/e2e-manual.yml (vendored, new file, 36 lines)

```diff
@@ -0,0 +1,36 @@
+# Runs randomly generated E2E testnets nightly on master
+# manually run e2e tests
+name: e2e-manual
+on:
+  workflow_dispatch:
+
+jobs:
+  e2e-nightly-test:
+    # Run parallel jobs for the listed testnet groups (must match the
+    # ./build/generator -g flag)
+    strategy:
+      fail-fast: false
+      matrix:
+        group: ['00', '01', '02', '03']
+    runs-on: ubuntu-latest
+    timeout-minutes: 60
+    steps:
+      - uses: actions/setup-go@v2
+        with:
+          go-version: '1.17'
+
+      - uses: actions/checkout@v2.4.0
+
+      - name: Build
+        working-directory: test/e2e
+        # Run make jobs in parallel, since we can't run steps in parallel.
+        run: make -j2 docker generator runner tests
+
+      - name: Generate testnets
+        working-directory: test/e2e
+        # When changing -g, also change the matrix groups above
+        run: ./build/generator -g 4 -d networks/nightly/
+
+      - name: Run ${{ matrix.p2p }} p2p testnets
+        working-directory: test/e2e
+        run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml
```
.github/workflows/e2e-nightly-34x.yml (vendored, 17 lines changed)

```diff
@@ -6,7 +6,6 @@

name: e2e-nightly-34x
on:
  workflow_dispatch: # allow running workflow manually, in theory
  schedule:
    - cron: '0 2 * * *'

@@ -58,19 +57,3 @@ jobs:
           SLACK_COLOR: danger
           SLACK_MESSAGE: Nightly E2E tests failed on v0.34.x
           SLACK_FOOTER: ''
-
-  e2e-nightly-success: # may turn this off once they seem to pass consistently
-    needs: e2e-nightly-test
-    if: ${{ success() }}
-    runs-on: ubuntu-latest
-    steps:
-      - name: Notify Slack on success
-        uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
-        env:
-          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
-          SLACK_CHANNEL: tendermint-internal
-          SLACK_USERNAME: Nightly E2E Tests
-          SLACK_ICON_EMOJI: ':white_check_mark:'
-          SLACK_COLOR: good
-          SLACK_MESSAGE: Nightly E2E tests passed on v0.34.x
-          SLACK_FOOTER: ''
```
.github/workflows/e2e-nightly-35x.yml (vendored, 1 line changed)

```diff
@@ -5,7 +5,6 @@

name: e2e-nightly-35x
on:
  workflow_dispatch: # allow running workflow manually
  schedule:
    - cron: '0 2 * * *'

```
.github/workflows/e2e-nightly-master.yml (vendored, 1 line changed)

```diff
@@ -5,7 +5,6 @@

name: e2e-nightly-master
on:
  workflow_dispatch: # allow running workflow manually
  schedule:
    - cron: '0 2 * * *'

```
.github/workflows/lint.yml (vendored, 2 lines changed)

```diff
@@ -1,4 +1,4 @@
-name: Lint
+name: Golang Linter
 # Lint runs golangci-lint over the entire Tendermint repository
 # This workflow is run on every pull request and push to master
 # The `golangci` job will pass without running if no *.{go, mod, sum} files have been modified.
```
.github/workflows/linter.yml (vendored, 2 lines changed)

```diff
@@ -1,4 +1,4 @@
-name: Lint
+name: Markdown Linter
 on:
   push:
     branches:
```
.github/workflows/markdown-links.yml (vendored, new file, 18 lines)

```diff
@@ -0,0 +1,18 @@
+# Currently disabled until all links have been fixed
+# name: Check Markdown links
+
+# on:
+#   push:
+#     branches:
+#       - master
+#   pull_request:
+#     branches: [master]
+
+# jobs:
+#   markdown-link-check:
+#     runs-on: ubuntu-latest
+#     steps:
+#     - uses: actions/checkout@master
+#     - uses: gaurav-nelson/github-action-markdown-link-check@1.0.13
+#       with:
+#         check-modified-files-only: 'yes'
```
.github/workflows/proto-check.yml (vendored, new file, 24 lines)

```diff
@@ -0,0 +1,24 @@
+name: Proto Check
+# Protobuf runs buf (https://buf.build/) lint and check-breakage
+# This workflow is only run when a file in the proto directory
+# has been modified.
+on:
+  workflow_dispatch: # allow running workflow manually
+  pull_request:
+    paths:
+      - "proto/*"
+jobs:
+  proto-lint:
+    runs-on: ubuntu-latest
+    timeout-minutes: 4
+    steps:
+      - uses: actions/checkout@v2.4.0
+      - name: lint
+        run: make proto-lint
+  proto-breakage:
+    runs-on: ubuntu-latest
+    timeout-minutes: 4
+    steps:
+      - uses: actions/checkout@v2.4.0
+      - name: check-breakage
+        run: make proto-check-breaking-ci
```
.github/workflows/proto-dockerfile.yml (vendored, new file, 64 lines)

```diff
@@ -0,0 +1,64 @@
+# This workflow (re)builds and pushes a Docker image containing the
+# protobuf build tools used by the other workflows.
+#
+# When making changes that require updates to the builder image, you
+# should merge the updates first and wait for this workflow to complete,
+# so that the changes will be available for the dependent workflows.
+#
+
+name: Build & Push Proto Builder Image
+on:
+  pull_request:
+    paths:
+      - "proto/*"
+  push:
+    branches:
+      - master
+    paths:
+      - "proto/*"
+  schedule:
+    # run this job once a month to recieve any go or buf updates
+    - cron: "0 9 1 * *"
+
+env:
+  REGISTRY: ghcr.io
+  IMAGE_NAME: tendermint/docker-build-proto
+
+jobs:
+  build:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2.4.0
+      - name: Check out and assign tags
+        id: prep
+        run: |
+          DOCKER_IMAGE="${REGISTRY}/${IMAGE_NAME}"
+          VERSION=noop
+          if [[ "$GITHUB_REF" == "refs/tags/*" ]]; then
+            VERSION="${GITHUB_REF#refs/tags/}"
+          elif [[ "$GITHUB_REF" == "refs/heads/*" ]]; then
+            VERSION="$(echo "${GITHUB_REF#refs/heads/}" | sed -r 's#/+#-#g')"
+            if [[ "${{ github.event.repository.default_branch }}" = "$VERSION" ]]; then
+              VERSION=latest
+            fi
+          fi
+          TAGS="${DOCKER_IMAGE}:${VERSION}"
+          echo ::set-output name=tags::"${TAGS}"
+
+      - name: Set up docker buildx
+        uses: docker/setup-buildx-action@v1.6.0
+
+      - name: Log in to the container registry
+        uses: docker/login-action@v1.12.0
+        with:
+          registry: ${{ env.REGISTRY }}
+          username: ${{ github.actor }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+
+      - name: Build and publish image
+        uses: docker/build-push-action@v2.9.0
+        with:
+          context: ./proto
+          file: ./proto/Dockerfile
+          push: ${{ github.event_name != 'pull_request' }}
+          tags: ${{ steps.prep.outputs.tags }}
```
.gitignore (vendored, 15 lines changed)

```diff
@@ -47,10 +47,11 @@ test/fuzz/**/corpus
 test/fuzz/**/crashers
 test/fuzz/**/suppressions
 test/fuzz/**/*.zip
-proto/tendermint/blocksync/types.proto
-proto/tendermint/consensus/types.proto
-proto/tendermint/mempool/*.proto
-proto/tendermint/p2p/*.proto
-proto/tendermint/statesync/*.proto
-proto/tendermint/types/*.proto
-proto/tendermint/version/*.proto
+proto/spec/**/*.pb.go
+*.aux
+*.bbl
+*.blg
+*.log
+*.pdf
+*.gz
+*.dvi
```
.markdownlint.yml (new file, 11 lines)

```diff
@@ -0,0 +1,11 @@
+default: true
+MD001: false
+MD007: {indent: 4}
+MD013: false
+MD024: {siblings_only: true}
+MD025: false
+MD033: false
+MD036: false
+MD010: false
+MD012: false
+MD028: false
```
```diff
@@ -20,7 +20,7 @@ Special thanks to external contributors on this release:
 
 - Apps
 
-  - [proto/tendermint] \#6976 Remove core protobuf files in favor of only housing them in the [tendermint/spec](https://github.com/tendermint/spec) repository.
+  - [tendermint/spec] \#7804 Migrate spec from [spec repo](https://github.com/tendermint/spec).
 
 - P2P Protocol
 
@@ -62,10 +62,12 @@ Special thanks to external contributors on this release:
 
 - [internal/protoio] \#7325 Optimized `MarshalDelimited` by inlining the common case and using a `sync.Pool` in the worst case. (@odeke-em)
 - [consensus] \#6969 remove logic to 'unlock' a locked block.
+- [evidence] \#7700 Evidence messages contain single Evidence instead of EvidenceList (@jmalicevic)
 - [pubsub] \#7319 Performance improvements for the event query API (@creachadair)
 - [node] \#7521 Define concrete type for seed node implementation (@spacech1mp)
 - [rpc] \#7612 paginate mempool /unconfirmed_txs rpc endpoint (@spacech1mp)
 - [light] [\#7536](https://github.com/tendermint/tendermint/pull/7536) rpc /status call returns info about the light client (@jmalicevic)
+- [types] \#7765 Replace EvidenceData with EvidenceList to avoid unnecessary nesting of evidence fields within a block. (@jmalicevic)
 
 ### BUG FIXES
 
```
Makefile (27 lines changed)

```diff
@@ -73,22 +73,42 @@ install:

$(BUILDDIR)/:
	mkdir -p $@
# The Docker image containing the generator, formatter, and linter.
# This is generated by proto/Dockerfile. To update tools, make changes
# there and run the Build & Push Proto Builder Image workflow.
IMAGE := ghcr.io/tendermint/docker-build-proto:latest
DOCKER_PROTO_BUILDER := docker run -v $(shell pwd):/workspace --workdir /workspace $(IMAGE)
HTTPS_GIT := https://github.com/tendermint/tendermint.git

###############################################################################
###                                Protobuf                                 ###
###############################################################################

proto-all: proto-lint proto-check-breaking
.PHONY: proto-all

proto-gen:
	@docker pull -q tendermintdev/docker-build-proto
	@echo "Generating Protobuf files"
	@$(DOCKER_PROTO_BUILDER) sh ./scripts/protocgen.sh
	@$(DOCKER_PROTO_BUILDER) buf generate --template=./buf.gen.yaml --config ./buf.yaml
.PHONY: proto-gen

proto-lint:
	@$(DOCKER_PROTO_BUILDER) buf lint --error-format=json --config ./buf.yaml
.PHONY: proto-lint

proto-format:
	@echo "Formatting Protobuf files"
	@$(DOCKER_PROTO_BUILDER) find ./ -not -path "./third_party/*" -name *.proto -exec clang-format -i {} \;
	@$(DOCKER_PROTO_BUILDER) find . -name '*.proto' -path "./proto/*" -exec clang-format -i {} \;
.PHONY: proto-format

proto-check-breaking:
	@$(DOCKER_PROTO_BUILDER) buf breaking --against .git --config ./buf.yaml
.PHONY: proto-check-breaking

proto-check-breaking-ci:
	@$(DOCKER_PROTO_BUILDER) buf breaking --against $(HTTPS_GIT) --config ./buf.yaml
.PHONY: proto-check-breaking-ci

###############################################################################
###                               Build ABCI                                ###
###############################################################################
@@ -309,3 +329,4 @@ split-test-packages:$(BUILDDIR)/packages.txt
	split -d -n l/$(NUM_SPLIT) $< $<.
test-group-%:split-test-packages
	cat $(BUILDDIR)/packages.txt.$* | xargs go test -mod=readonly -timeout=5m -race -coverprofile=$(BUILDDIR)/$*.profile.out

```
```diff
@@ -28,28 +28,20 @@ const (
 type Client interface {
 	service.Service
 
 	SetResponseCallback(Callback)
 	Error() error
-
-	// Asynchronous requests
-	FlushAsync(context.Context) (*ReqRes, error)
-	DeliverTxAsync(context.Context, types.RequestDeliverTx) (*ReqRes, error)
-	CheckTxAsync(context.Context, types.RequestCheckTx) (*ReqRes, error)
-
-	// Synchronous requests
 	Flush(context.Context) error
 	Echo(ctx context.Context, msg string) (*types.ResponseEcho, error)
 	Info(context.Context, types.RequestInfo) (*types.ResponseInfo, error)
-	DeliverTx(context.Context, types.RequestDeliverTx) (*types.ResponseDeliverTx, error)
 	CheckTx(context.Context, types.RequestCheckTx) (*types.ResponseCheckTx, error)
 	Query(context.Context, types.RequestQuery) (*types.ResponseQuery, error)
 	Commit(context.Context) (*types.ResponseCommit, error)
 	InitChain(context.Context, types.RequestInitChain) (*types.ResponseInitChain, error)
 	PrepareProposal(context.Context, types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error)
+	ProcessProposal(context.Context, types.RequestProcessProposal) (*types.ResponseProcessProposal, error)
 	ExtendVote(context.Context, types.RequestExtendVote) (*types.ResponseExtendVote, error)
 	VerifyVoteExtension(context.Context, types.RequestVerifyVoteExtension) (*types.ResponseVerifyVoteExtension, error)
-	BeginBlock(context.Context, types.RequestBeginBlock) (*types.ResponseBeginBlock, error)
-	EndBlock(context.Context, types.RequestEndBlock) (*types.ResponseEndBlock, error)
+	FinalizeBlock(context.Context, types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error)
 	ListSnapshots(context.Context, types.RequestListSnapshots) (*types.ResponseListSnapshots, error)
 	OfferSnapshot(context.Context, types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error)
 	LoadSnapshotChunk(context.Context, types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error)
```
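Taken together, the interface now exposes only blocking calls: the three `*Async` methods and the per-step DeliverTx/BeginBlock/EndBlock are gone, while ProcessProposal and FinalizeBlock fill out the ABCI++ surface. A minimal caller-side sketch of the narrowed API (the `runHeight` helper and its field values are hypothetical, not code from this diff):

```go
// Sketch: driving one height through the synchronous-only Client.
// Assumes cli is a started abciclient.Client and types is
// github.com/tendermint/tendermint/abci/types.
func runHeight(ctx context.Context, cli abciclient.Client, txs [][]byte) error {
	for _, tx := range txs {
		// CheckTx now blocks until the response arrives; there is no CheckTxAsync.
		if _, err := cli.CheckTx(ctx, types.RequestCheckTx{Tx: tx}); err != nil {
			return err
		}
	}
	// One FinalizeBlock call stands in for BeginBlock/DeliverTx/EndBlock.
	if _, err := cli.FinalizeBlock(ctx, types.RequestFinalizeBlock{Txs: txs}); err != nil {
		return err
	}
	_, err := cli.Commit(ctx)
	return err
}
```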
```diff
@@ -72,26 +64,21 @@ func NewClient(logger log.Logger, addr, transport string, mustConnect bool) (cli
 	return
 }
 
 type Callback func(*types.Request, *types.Response)
 
 type ReqRes struct {
 	*types.Request
-	*sync.WaitGroup
 	*types.Response // Not set atomically, so be sure to use WaitGroup.
 
-	mtx  sync.Mutex
-	done bool                  // Gets set to true once *after* WaitGroup.Done().
-	cb   func(*types.Response) // A single callback that may be set.
+	mtx    sync.Mutex
+	signal chan struct{}
+	cb     func(*types.Response) // A single callback that may be set.
 }
 
 func NewReqRes(req *types.Request) *ReqRes {
 	return &ReqRes{
-		Request:   req,
-		WaitGroup: waitGroup1(),
-		Response:  nil,
-
-		done: false,
-		cb:   nil,
+		Request:  req,
+		Response: nil,
+		signal:   make(chan struct{}),
+		cb:       nil,
 	}
 }
 
@@ -101,14 +88,14 @@ func NewReqRes(req *types.Request) *ReqRes {
 func (r *ReqRes) SetCallback(cb func(res *types.Response)) {
 	r.mtx.Lock()
 
-	if r.done {
+	select {
+	case <-r.signal:
 		r.mtx.Unlock()
 		cb(r.Response)
 		return
-	}
-
-	r.cb = cb
-	r.mtx.Unlock()
+	default:
+		r.cb = cb
+		r.mtx.Unlock()
+	}
 }
 
 // InvokeCallback invokes a thread-safe execution of the configured callback
@@ -122,27 +109,10 @@ func (r *ReqRes) InvokeCallback() {
 	}
 }
 
-// GetCallback returns the configured callback of the ReqRes object which may be
-// nil. Note, it is not safe to concurrently call this in cases where it is
-// marked done and SetCallback is called before calling GetCallback as that
-// will invoke the callback twice and create a potential race condition.
-//
-// ref: https://github.com/tendermint/tendermint/issues/5439
-func (r *ReqRes) GetCallback() func(*types.Response) {
-	r.mtx.Lock()
-	defer r.mtx.Unlock()
-	return r.cb
-}
-
 // SetDone marks the ReqRes object as done.
 func (r *ReqRes) SetDone() {
 	r.mtx.Lock()
-	r.done = true
-	r.mtx.Unlock()
-}
+	defer r.mtx.Unlock()
 
-func waitGroup1() (wg *sync.WaitGroup) {
-	wg = &sync.WaitGroup{}
-	wg.Add(1)
-	return
+	close(r.signal)
 }
```
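The replaced `done` flag plus embedded WaitGroup amounted to a one-shot completion event; a channel that is closed exactly once expresses the same thing more directly, because a receive on a closed channel always succeeds and can sit inside a `select`. A self-contained sketch of the pattern (illustrative only, not the exact ReqRes code):

```go
package main

import (
	"fmt"
	"sync"
)

// oneShot mirrors the ReqRes idea: SetDone closes signal exactly once,
// and SetCallback either runs cb immediately (already done) or stores it.
type oneShot struct {
	mtx    sync.Mutex
	signal chan struct{}
	cb     func()
}

func newOneShot() *oneShot { return &oneShot{signal: make(chan struct{})} }

func (o *oneShot) SetDone() {
	o.mtx.Lock()
	defer o.mtx.Unlock()
	close(o.signal) // wakes every waiter; receives on a closed channel never block
	if o.cb != nil {
		o.cb()
	}
}

func (o *oneShot) SetCallback(cb func()) {
	o.mtx.Lock()
	select {
	case <-o.signal: // already done: run the callback right away
		o.mtx.Unlock()
		cb()
	default: // not done yet: store it for SetDone to invoke
		o.cb = cb
		o.mtx.Unlock()
	}
}

func main() {
	o := newOneShot()
	o.SetCallback(func() { fmt.Println("completed") })
	o.SetDone()
	<-o.signal // waiting again is fine: the channel stays closed
}
```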
```diff
@@ -2,7 +2,6 @@ package abciclient
 
 import (
 	"fmt"
-	"sync"
 
 	"github.com/tendermint/tendermint/abci/types"
 	"github.com/tendermint/tendermint/libs/log"
@@ -14,10 +13,8 @@ type Creator func(log.Logger) (Client, error)
 // NewLocalCreator returns a Creator for the given app,
 // which will be running locally.
 func NewLocalCreator(app types.Application) Creator {
-	mtx := new(sync.Mutex)
-
 	return func(logger log.Logger) (Client, error) {
-		return NewLocalClient(logger, mtx, app), nil
+		return NewLocalClient(logger, app), nil
 	}
 }
```
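Since the local client now allocates its own mutex, callers simply drop the middle argument. A hedged usage sketch (the kvstore example app and nop logger are stand-ins chosen for illustration):

```go
// Sketch: constructing a local ABCI client after the signature change.
app := kvstore.NewApplication()
creator := abciclient.NewLocalCreator(app) // no *sync.Mutex parameter anymore
client, err := creator(log.NewNopLogger())
if err != nil {
	panic(err) // real code would propagate the error
}
_ = client // satisfies abciclient.Client
```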
```diff
@@ -28,10 +28,9 @@ type grpcClient struct {
 	conn     *grpc.ClientConn
 	chReqRes chan *ReqRes // dispatches "async" responses to callbacks *in order*, needed by mempool
 
-	mtx   sync.Mutex
-	addr  string
-	err   error
-	resCb func(*types.Request, *types.Response) // listens to all callbacks
+	mtx  sync.Mutex
+	addr string
+	err  error
 }
 
 var _ Client = (*grpcClient)(nil)
@@ -77,17 +76,9 @@ func (cli *grpcClient) OnStart(ctx context.Context) error {
 		defer cli.mtx.Unlock()
 
 		reqres.SetDone()
-		reqres.Done()
-
-		// Notify client listener if set
-		if cli.resCb != nil {
-			cli.resCb(reqres.Request, reqres.Response)
-		}
-
-		// Notify reqRes listener if set
-		if cb := reqres.GetCallback(); cb != nil {
-			cb(reqres.Response)
-		}
+		reqres.InvokeCallback()
 	}
 
 	for {
@@ -162,9 +153,7 @@ func (cli *grpcClient) StopForError(err error) {
 	cli.mtx.Unlock()
 
 	cli.logger.Error("Stopping abci.grpcClient for error", "err", err)
-	if err := cli.Stop(); err != nil {
-		cli.logger.Error("error stopping abci.grpcClient", "err", err)
-	}
+	cli.Stop()
 }
 
 func (cli *grpcClient) Error() error {
@@ -173,222 +162,66 @@ func (cli *grpcClient) Error() error {
 	return cli.err
 }
 
-// Set listener for all responses
-// NOTE: callback may get internally generated flush responses.
-func (cli *grpcClient) SetResponseCallback(resCb Callback) {
-	cli.mtx.Lock()
-	cli.resCb = resCb
-	cli.mtx.Unlock()
-}
-
-//----------------------------------------
-
-// NOTE: call is synchronous, use ctx to break early if needed
-func (cli *grpcClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
-	req := types.ToRequestFlush()
-	res, err := cli.client.Flush(ctx, req.GetFlush(), grpc.WaitForReady(true))
-	if err != nil {
-		return nil, err
-	}
-	return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Flush{Flush: res}})
-}
-
-// NOTE: call is synchronous, use ctx to break early if needed
-func (cli *grpcClient) DeliverTxAsync(ctx context.Context, params types.RequestDeliverTx) (*ReqRes, error) {
-	req := types.ToRequestDeliverTx(params)
-	res, err := cli.client.DeliverTx(ctx, req.GetDeliverTx(), grpc.WaitForReady(true))
-	if err != nil {
-		return nil, err
-	}
-	return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_DeliverTx{DeliverTx: res}})
-}
-
-// NOTE: call is synchronous, use ctx to break early if needed
-func (cli *grpcClient) CheckTxAsync(ctx context.Context, params types.RequestCheckTx) (*ReqRes, error) {
-	req := types.ToRequestCheckTx(params)
-	res, err := cli.client.CheckTx(ctx, req.GetCheckTx(), grpc.WaitForReady(true))
-	if err != nil {
-		return nil, err
-	}
-	return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_CheckTx{CheckTx: res}})
-}
-
-// finishAsyncCall creates a ReqRes for an async call, and immediately populates it
-// with the response. We don't complete it until it's been ordered via the channel.
-func (cli *grpcClient) finishAsyncCall(ctx context.Context, req *types.Request, res *types.Response) (*ReqRes, error) {
-	reqres := NewReqRes(req)
-	reqres.Response = res
-	select {
-	case cli.chReqRes <- reqres: // use channel for async responses, since they must be ordered
-		return reqres, nil
-	case <-ctx.Done():
-		return nil, ctx.Err()
-	}
-}
-
-// finishSyncCall waits for an async call to complete. It is necessary to call all
-// sync calls asynchronously as well, to maintain call and response ordering via
-// the channel, and this method will wait until the async call completes.
-func (cli *grpcClient) finishSyncCall(reqres *ReqRes) *types.Response {
-	// It's possible that the callback is called twice, since the callback can
-	// be called immediately on SetCallback() in addition to after it has been
-	// set. This is because completing the ReqRes happens in a separate critical
-	// section from the one where the callback is called: there is a race where
-	// SetCallback() is called between completing the ReqRes and dispatching the
-	// callback.
-	//
-	// We also buffer the channel with 1 response, since SetCallback() will be
-	// called synchronously if the reqres is already completed, in which case
-	// it will block on sending to the channel since it hasn't gotten around to
-	// receiving from it yet.
-	//
-	// ReqRes should really handle callback dispatch internally, to guarantee
-	// that it's only called once and avoid the above race conditions.
-	var once sync.Once
-	ch := make(chan *types.Response, 1)
-	reqres.SetCallback(func(res *types.Response) {
-		once.Do(func() {
-			ch <- res
-		})
-	})
-	return <-ch
-}
-
 //----------------------------------------
 
 func (cli *grpcClient) Flush(ctx context.Context) error { return nil }
 
 func (cli *grpcClient) Echo(ctx context.Context, msg string) (*types.ResponseEcho, error) {
-	req := types.ToRequestEcho(msg)
-	return cli.client.Echo(ctx, req.GetEcho(), grpc.WaitForReady(true))
+	return cli.client.Echo(ctx, types.ToRequestEcho(msg).GetEcho(), grpc.WaitForReady(true))
 }
 
-func (cli *grpcClient) Info(
-	ctx context.Context,
-	params types.RequestInfo,
-) (*types.ResponseInfo, error) {
-	req := types.ToRequestInfo(params)
-	return cli.client.Info(ctx, req.GetInfo(), grpc.WaitForReady(true))
+func (cli *grpcClient) Info(ctx context.Context, params types.RequestInfo) (*types.ResponseInfo, error) {
+	return cli.client.Info(ctx, types.ToRequestInfo(params).GetInfo(), grpc.WaitForReady(true))
 }
 
-func (cli *grpcClient) DeliverTx(
-	ctx context.Context,
-	params types.RequestDeliverTx,
-) (*types.ResponseDeliverTx, error) {
-
-	reqres, err := cli.DeliverTxAsync(ctx, params)
-	if err != nil {
-		return nil, err
-	}
-	return cli.finishSyncCall(reqres).GetDeliverTx(), cli.Error()
-}
-
-func (cli *grpcClient) CheckTx(
-	ctx context.Context,
-	params types.RequestCheckTx,
-) (*types.ResponseCheckTx, error) {
-
-	reqres, err := cli.CheckTxAsync(ctx, params)
-	if err != nil {
-		return nil, err
-	}
-	return cli.finishSyncCall(reqres).GetCheckTx(), cli.Error()
+func (cli *grpcClient) CheckTx(ctx context.Context, params types.RequestCheckTx) (*types.ResponseCheckTx, error) {
+	return cli.client.CheckTx(ctx, types.ToRequestCheckTx(params).GetCheckTx(), grpc.WaitForReady(true))
 }
 
-func (cli *grpcClient) Query(
-	ctx context.Context,
-	params types.RequestQuery,
-) (*types.ResponseQuery, error) {
-	req := types.ToRequestQuery(params)
-	return cli.client.Query(ctx, req.GetQuery(), grpc.WaitForReady(true))
+func (cli *grpcClient) Query(ctx context.Context, params types.RequestQuery) (*types.ResponseQuery, error) {
+	return cli.client.Query(ctx, types.ToRequestQuery(params).GetQuery(), grpc.WaitForReady(true))
 }
 
 func (cli *grpcClient) Commit(ctx context.Context) (*types.ResponseCommit, error) {
-	req := types.ToRequestCommit()
-	return cli.client.Commit(ctx, req.GetCommit(), grpc.WaitForReady(true))
+	return cli.client.Commit(ctx, types.ToRequestCommit().GetCommit(), grpc.WaitForReady(true))
 }
 
-func (cli *grpcClient) InitChain(
-	ctx context.Context,
-	params types.RequestInitChain,
-) (*types.ResponseInitChain, error) {
-
-	req := types.ToRequestInitChain(params)
-	return cli.client.InitChain(ctx, req.GetInitChain(), grpc.WaitForReady(true))
+func (cli *grpcClient) InitChain(ctx context.Context, params types.RequestInitChain) (*types.ResponseInitChain, error) {
+	return cli.client.InitChain(ctx, types.ToRequestInitChain(params).GetInitChain(), grpc.WaitForReady(true))
 }
 
-func (cli *grpcClient) BeginBlock(
-	ctx context.Context,
-	params types.RequestBeginBlock,
-) (*types.ResponseBeginBlock, error) {
-
-	req := types.ToRequestBeginBlock(params)
-	return cli.client.BeginBlock(ctx, req.GetBeginBlock(), grpc.WaitForReady(true))
+func (cli *grpcClient) ListSnapshots(ctx context.Context, params types.RequestListSnapshots) (*types.ResponseListSnapshots, error) {
+	return cli.client.ListSnapshots(ctx, types.ToRequestListSnapshots(params).GetListSnapshots(), grpc.WaitForReady(true))
 }
 
-func (cli *grpcClient) EndBlock(
-	ctx context.Context,
-	params types.RequestEndBlock,
-) (*types.ResponseEndBlock, error) {
-
-	req := types.ToRequestEndBlock(params)
-	return cli.client.EndBlock(ctx, req.GetEndBlock(), grpc.WaitForReady(true))
+func (cli *grpcClient) OfferSnapshot(ctx context.Context, params types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error) {
+	return cli.client.OfferSnapshot(ctx, types.ToRequestOfferSnapshot(params).GetOfferSnapshot(), grpc.WaitForReady(true))
 }
 
-func (cli *grpcClient) ListSnapshots(
-	ctx context.Context,
-	params types.RequestListSnapshots,
-) (*types.ResponseListSnapshots, error) {
-
-	req := types.ToRequestListSnapshots(params)
-	return cli.client.ListSnapshots(ctx, req.GetListSnapshots(), grpc.WaitForReady(true))
+func (cli *grpcClient) LoadSnapshotChunk(ctx context.Context, params types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
	return cli.client.LoadSnapshotChunk(ctx, types.ToRequestLoadSnapshotChunk(params).GetLoadSnapshotChunk(), grpc.WaitForReady(true))
 }
 
-func (cli *grpcClient) OfferSnapshot(
-	ctx context.Context,
-	params types.RequestOfferSnapshot,
-) (*types.ResponseOfferSnapshot, error) {
-
-	req := types.ToRequestOfferSnapshot(params)
-	return cli.client.OfferSnapshot(ctx, req.GetOfferSnapshot(), grpc.WaitForReady(true))
+func (cli *grpcClient) ApplySnapshotChunk(ctx context.Context, params types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
+	return cli.client.ApplySnapshotChunk(ctx, types.ToRequestApplySnapshotChunk(params).GetApplySnapshotChunk(), grpc.WaitForReady(true))
 }
 
-func (cli *grpcClient) LoadSnapshotChunk(
-	ctx context.Context,
-	params types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
-
-	req := types.ToRequestLoadSnapshotChunk(params)
-	return cli.client.LoadSnapshotChunk(ctx, req.GetLoadSnapshotChunk(), grpc.WaitForReady(true))
+func (cli *grpcClient) PrepareProposal(ctx context.Context, params types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
+	return cli.client.PrepareProposal(ctx, types.ToRequestPrepareProposal(params).GetPrepareProposal(), grpc.WaitForReady(true))
 }
 
-func (cli *grpcClient) ApplySnapshotChunk(
-	ctx context.Context,
-	params types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
-
-	req := types.ToRequestApplySnapshotChunk(params)
-	return cli.client.ApplySnapshotChunk(ctx, req.GetApplySnapshotChunk(), grpc.WaitForReady(true))
+func (cli *grpcClient) ProcessProposal(ctx context.Context, params types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
+	return cli.client.ProcessProposal(ctx, types.ToRequestProcessProposal(params).GetProcessProposal(), grpc.WaitForReady(true))
 }
 
-func (cli *grpcClient) PrepareProposal(
-	ctx context.Context,
-	params types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
-
-	req := types.ToRequestPrepareProposal(params)
-	return cli.client.PrepareProposal(ctx, req.GetPrepareProposal(), grpc.WaitForReady(true))
+func (cli *grpcClient) ExtendVote(ctx context.Context, params types.RequestExtendVote) (*types.ResponseExtendVote, error) {
+	return cli.client.ExtendVote(ctx, types.ToRequestExtendVote(params).GetExtendVote(), grpc.WaitForReady(true))
 }
 
-func (cli *grpcClient) ExtendVote(
-	ctx context.Context,
-	params types.RequestExtendVote) (*types.ResponseExtendVote, error) {
-
-	req := types.ToRequestExtendVote(params)
-	return cli.client.ExtendVote(ctx, req.GetExtendVote(), grpc.WaitForReady(true))
+func (cli *grpcClient) VerifyVoteExtension(ctx context.Context, params types.RequestVerifyVoteExtension) (*types.ResponseVerifyVoteExtension, error) {
+	return cli.client.VerifyVoteExtension(ctx, types.ToRequestVerifyVoteExtension(params).GetVerifyVoteExtension(), grpc.WaitForReady(true))
 }
 
-func (cli *grpcClient) VerifyVoteExtension(
-	ctx context.Context,
-	params types.RequestVerifyVoteExtension) (*types.ResponseVerifyVoteExtension, error) {
-
-	req := types.ToRequestVerifyVoteExtension(params)
-	return cli.client.VerifyVoteExtension(ctx, req.GetVerifyVoteExtension(), grpc.WaitForReady(true))
+func (cli *grpcClient) FinalizeBlock(ctx context.Context, params types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error) {
+	return cli.client.FinalizeBlock(ctx, types.ToRequestFinalizeBlock(params).GetFinalizeBlock(), grpc.WaitForReady(true))
 }
```
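Every surviving method now has the same shape: wrap the params, extract the typed payload, and call the generated stub with `grpc.WaitForReady(true)`. Because WaitForReady blocks while the connection is down, callers that need an upper bound should pass a deadline. A small caller-side sketch (the `ping` helper is hypothetical):

```go
// Sketch: bounding a WaitForReady-backed call with a context deadline.
func ping(cli abciclient.Client) (string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel() // releases the timer even on the happy path

	res, err := cli.Echo(ctx, "ping") // blocks until connected or ctx expires
	if err != nil {
		return "", err // a missed deadline surfaces as a context error
	}
	return res.Message, nil
}
```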
```diff
@@ -16,9 +16,8 @@ import (
 type localClient struct {
 	service.BaseService
 
-	mtx *sync.Mutex
+	mtx sync.Mutex
 	types.Application
 	Callback
 }
 
 var _ Client = (*localClient)(nil)
@@ -27,12 +26,8 @@ var _ Client = (*localClient)(nil)
 // methods of the given app.
 //
 // Both Async and Sync methods ignore the given context.Context parameter.
-func NewLocalClient(logger log.Logger, mtx *sync.Mutex, app types.Application) Client {
-	if mtx == nil {
-		mtx = new(sync.Mutex)
-	}
+func NewLocalClient(logger log.Logger, app types.Application) Client {
 	cli := &localClient{
-		mtx:         mtx,
 		Application: app,
 	}
 	cli.BaseService = *service.NewBaseService(logger, "localClient", cli)
@@ -42,44 +37,11 @@ func NewLocalClient(logger log.Logger, mtx *sync.Mutex, app types.Application) C
 func (*localClient) OnStart(context.Context) error { return nil }
 func (*localClient) OnStop()                       {}
 
 func (app *localClient) SetResponseCallback(cb Callback) {
 	app.mtx.Lock()
 	defer app.mtx.Unlock()
 	app.Callback = cb
 }
 
 // TODO: change types.Application to include Error()?
 func (app *localClient) Error() error {
 	return nil
 }
 
-func (app *localClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
-	// Do nothing
-	return newLocalReqRes(types.ToRequestFlush(), nil), nil
-}
-
-func (app *localClient) DeliverTxAsync(ctx context.Context, params types.RequestDeliverTx) (*ReqRes, error) {
-	app.mtx.Lock()
-	defer app.mtx.Unlock()
-
-	res := app.Application.DeliverTx(params)
-	return app.callback(
-		types.ToRequestDeliverTx(params),
-		types.ToResponseDeliverTx(res),
-	), nil
-}
-
-func (app *localClient) CheckTxAsync(ctx context.Context, req types.RequestCheckTx) (*ReqRes, error) {
-	app.mtx.Lock()
-	defer app.mtx.Unlock()
-
-	res := app.Application.CheckTx(req)
-	return app.callback(
-		types.ToRequestCheckTx(req),
-		types.ToResponseCheckTx(res),
-	), nil
-}
-
 //-------------------------------------------------------
 
 func (app *localClient) Flush(ctx context.Context) error {
@@ -98,18 +60,6 @@ func (app *localClient) Info(ctx context.Context, req types.RequestInfo) (*types
 	return &res, nil
 }
 
-func (app *localClient) DeliverTx(
-	ctx context.Context,
-	req types.RequestDeliverTx,
-) (*types.ResponseDeliverTx, error) {
-
-	app.mtx.Lock()
-	defer app.mtx.Unlock()
-
-	res := app.Application.DeliverTx(req)
-	return &res, nil
-}
-
 func (app *localClient) CheckTx(
 	ctx context.Context,
 	req types.RequestCheckTx,
@@ -152,30 +102,6 @@ func (app *localClient) InitChain(
 	return &res, nil
 }
 
-func (app *localClient) BeginBlock(
-	ctx context.Context,
-	req types.RequestBeginBlock,
-) (*types.ResponseBeginBlock, error) {
-
-	app.mtx.Lock()
-	defer app.mtx.Unlock()
-
-	res := app.Application.BeginBlock(req)
-	return &res, nil
-}
-
-func (app *localClient) EndBlock(
-	ctx context.Context,
-	req types.RequestEndBlock,
-) (*types.ResponseEndBlock, error) {
-
-	app.mtx.Lock()
-	defer app.mtx.Unlock()
-
-	res := app.Application.EndBlock(req)
-	return &res, nil
-}
-
 func (app *localClient) ListSnapshots(
 	ctx context.Context,
 	req types.RequestListSnapshots,
@@ -233,6 +159,17 @@ func (app *localClient) PrepareProposal(
 	return &res, nil
 }
 
+func (app *localClient) ProcessProposal(
+	ctx context.Context,
+	req types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
+
+	app.mtx.Lock()
+	defer app.mtx.Unlock()
+
+	res := app.Application.ProcessProposal(req)
+	return &res, nil
+}
+
 func (app *localClient) ExtendVote(
 	ctx context.Context,
 	req types.RequestExtendVote) (*types.ResponseExtendVote, error) {
@@ -255,16 +192,13 @@ func (app *localClient) VerifyVoteExtension(
 	return &res, nil
 }
 
-//-------------------------------------------------------
-
-func (app *localClient) callback(req *types.Request, res *types.Response) *ReqRes {
-	app.Callback(req, res)
-	return newLocalReqRes(req, res)
-}
-
-func newLocalReqRes(req *types.Request, res *types.Response) *ReqRes {
-	reqRes := NewReqRes(req)
-	reqRes.Response = res
-	reqRes.SetDone()
-	return reqRes
+func (app *localClient) FinalizeBlock(
+	ctx context.Context,
+	req types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error) {
+
+	app.mtx.Lock()
+	defer app.mtx.Unlock()
+
+	res := app.Application.FinalizeBlock(req)
+	return &res, nil
 }
```
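All of the localClient methods, including the new ProcessProposal and FinalizeBlock, take the client's mutex before touching the Application, so a locally wired app sees at most one call at a time. A sketch of an Application that leans on that guarantee (`counterApp` is hypothetical, and BaseApplication is assumed to provide no-op defaults for the remaining methods in this revision):

```go
// Sketch: an Application that relies on localClient's per-call locking.
// Because localClient holds its mutex across each call, this counter
// needs no synchronization of its own when used behind NewLocalClient.
type counterApp struct {
	types.BaseApplication
	txCount int
}

func (a *counterApp) FinalizeBlock(req types.RequestFinalizeBlock) types.ResponseFinalizeBlock {
	a.txCount += len(req.Txs) // safe: calls are serialized by the client
	return types.ResponseFinalizeBlock{}
}
```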
```diff
@@ -40,29 +40,6 @@ func (_m *Client) ApplySnapshotChunk(_a0 context.Context, _a1 types.RequestApply
 	return r0, r1
 }
 
-// BeginBlock provides a mock function with given fields: _a0, _a1
-func (_m *Client) BeginBlock(_a0 context.Context, _a1 types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
-	ret := _m.Called(_a0, _a1)
-
-	var r0 *types.ResponseBeginBlock
-	if rf, ok := ret.Get(0).(func(context.Context, types.RequestBeginBlock) *types.ResponseBeginBlock); ok {
-		r0 = rf(_a0, _a1)
-	} else {
-		if ret.Get(0) != nil {
-			r0 = ret.Get(0).(*types.ResponseBeginBlock)
-		}
-	}
-
-	var r1 error
-	if rf, ok := ret.Get(1).(func(context.Context, types.RequestBeginBlock) error); ok {
-		r1 = rf(_a0, _a1)
-	} else {
-		r1 = ret.Error(1)
-	}
-
-	return r0, r1
-}
-
 // CheckTx provides a mock function with given fields: _a0, _a1
 func (_m *Client) CheckTx(_a0 context.Context, _a1 types.RequestCheckTx) (*types.ResponseCheckTx, error) {
 	ret := _m.Called(_a0, _a1)
@@ -132,52 +109,6 @@ func (_m *Client) Commit(_a0 context.Context) (*types.ResponseCommit, error) {
 	return r0, r1
 }
 
-// DeliverTx provides a mock function with given fields: _a0, _a1
-func (_m *Client) DeliverTx(_a0 context.Context, _a1 types.RequestDeliverTx) (*types.ResponseDeliverTx, error) {
-	ret := _m.Called(_a0, _a1)
-
-	var r0 *types.ResponseDeliverTx
-	if rf, ok := ret.Get(0).(func(context.Context, types.RequestDeliverTx) *types.ResponseDeliverTx); ok {
-		r0 = rf(_a0, _a1)
-	} else {
-		if ret.Get(0) != nil {
-			r0 = ret.Get(0).(*types.ResponseDeliverTx)
-		}
-	}
-
-	var r1 error
-	if rf, ok := ret.Get(1).(func(context.Context, types.RequestDeliverTx) error); ok {
-		r1 = rf(_a0, _a1)
-	} else {
-		r1 = ret.Error(1)
-	}
-
-	return r0, r1
-}
-
-// DeliverTxAsync provides a mock function with given fields: _a0, _a1
-func (_m *Client) DeliverTxAsync(_a0 context.Context, _a1 types.RequestDeliverTx) (*abciclient.ReqRes, error) {
-	ret := _m.Called(_a0, _a1)
-
-	var r0 *abciclient.ReqRes
-	if rf, ok := ret.Get(0).(func(context.Context, types.RequestDeliverTx) *abciclient.ReqRes); ok {
-		r0 = rf(_a0, _a1)
-	} else {
-		if ret.Get(0) != nil {
-			r0 = ret.Get(0).(*abciclient.ReqRes)
-		}
-	}
-
-	var r1 error
-	if rf, ok := ret.Get(1).(func(context.Context, types.RequestDeliverTx) error); ok {
-		r1 = rf(_a0, _a1)
-	} else {
-		r1 = ret.Error(1)
-	}
-
-	return r0, r1
-}
-
 // Echo provides a mock function with given fields: ctx, msg
 func (_m *Client) Echo(ctx context.Context, msg string) (*types.ResponseEcho, error) {
 	ret := _m.Called(ctx, msg)
@@ -201,29 +132,6 @@ func (_m *Client) Echo(ctx context.Context, msg string) (*types.ResponseEcho, er
 	return r0, r1
 }
 
-// EndBlock provides a mock function with given fields: _a0, _a1
-func (_m *Client) EndBlock(_a0 context.Context, _a1 types.RequestEndBlock) (*types.ResponseEndBlock, error) {
-	ret := _m.Called(_a0, _a1)
-
-	var r0 *types.ResponseEndBlock
-	if rf, ok := ret.Get(0).(func(context.Context, types.RequestEndBlock) *types.ResponseEndBlock); ok {
-		r0 = rf(_a0, _a1)
-	} else {
-		if ret.Get(0) != nil {
-			r0 = ret.Get(0).(*types.ResponseEndBlock)
-		}
-	}
-
-	var r1 error
-	if rf, ok := ret.Get(1).(func(context.Context, types.RequestEndBlock) error); ok {
-		r1 = rf(_a0, _a1)
-	} else {
-		r1 = ret.Error(1)
-	}
-
-	return r0, r1
-}
-
 // Error provides a mock function with given fields:
 func (_m *Client) Error() error {
 	ret := _m.Called()
@@ -261,6 +169,29 @@ func (_m *Client) ExtendVote(_a0 context.Context, _a1 types.RequestExtendVote) (
 	return r0, r1
 }
 
+// FinalizeBlock provides a mock function with given fields: _a0, _a1
+func (_m *Client) FinalizeBlock(_a0 context.Context, _a1 types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error) {
+	ret := _m.Called(_a0, _a1)
+
+	var r0 *types.ResponseFinalizeBlock
+	if rf, ok := ret.Get(0).(func(context.Context, types.RequestFinalizeBlock) *types.ResponseFinalizeBlock); ok {
+		r0 = rf(_a0, _a1)
+	} else {
+		if ret.Get(0) != nil {
+			r0 = ret.Get(0).(*types.ResponseFinalizeBlock)
+		}
+	}
+
+	var r1 error
+	if rf, ok := ret.Get(1).(func(context.Context, types.RequestFinalizeBlock) error); ok {
+		r1 = rf(_a0, _a1)
+	} else {
+		r1 = ret.Error(1)
+	}
+
+	return r0, r1
+}
+
 // Flush provides a mock function with given fields: _a0
 func (_m *Client) Flush(_a0 context.Context) error {
 	ret := _m.Called(_a0)
@@ -275,29 +206,6 @@ func (_m *Client) Flush(_a0 context.Context) error {
 	return r0
 }
 
-// FlushAsync provides a mock function with given fields: _a0
-func (_m *Client) FlushAsync(_a0 context.Context) (*abciclient.ReqRes, error) {
-	ret := _m.Called(_a0)
-
-	var r0 *abciclient.ReqRes
-	if rf, ok := ret.Get(0).(func(context.Context) *abciclient.ReqRes); ok {
-		r0 = rf(_a0)
-	} else {
-		if ret.Get(0) != nil {
-			r0 = ret.Get(0).(*abciclient.ReqRes)
-		}
-	}
-
-	var r1 error
-	if rf, ok := ret.Get(1).(func(context.Context) error); ok {
-		r1 = rf(_a0)
-	} else {
-		r1 = ret.Error(1)
-	}
-
-	return r0, r1
-}
-
 // Info provides a mock function with given fields: _a0, _a1
 func (_m *Client) Info(_a0 context.Context, _a1 types.RequestInfo) (*types.ResponseInfo, error) {
 	ret := _m.Called(_a0, _a1)
@@ -450,6 +358,29 @@ func (_m *Client) PrepareProposal(_a0 context.Context, _a1 types.RequestPrepareP
 	return r0, r1
 }
 
+// ProcessProposal provides a mock function with given fields: _a0, _a1
+func (_m *Client) ProcessProposal(_a0 context.Context, _a1 types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
+	ret := _m.Called(_a0, _a1)
+
+	var r0 *types.ResponseProcessProposal
+	if rf, ok := ret.Get(0).(func(context.Context, types.RequestProcessProposal) *types.ResponseProcessProposal); ok {
+		r0 = rf(_a0, _a1)
+	} else {
+		if ret.Get(0) != nil {
+			r0 = ret.Get(0).(*types.ResponseProcessProposal)
+		}
+	}
+
+	var r1 error
+	if rf, ok := ret.Get(1).(func(context.Context, types.RequestProcessProposal) error); ok {
+		r1 = rf(_a0, _a1)
+	} else {
+		r1 = ret.Error(1)
+	}
+
+	return r0, r1
+}
+
 // Query provides a mock function with given fields: _a0, _a1
 func (_m *Client) Query(_a0 context.Context, _a1 types.RequestQuery) (*types.ResponseQuery, error) {
 	ret := _m.Called(_a0, _a1)
@@ -473,11 +404,6 @@ func (_m *Client) Query(_a0 context.Context, _a1 types.RequestQuery) (*types.Res
 	return r0, r1
 }
 
-// SetResponseCallback provides a mock function with given fields: _a0
-func (_m *Client) SetResponseCallback(_a0 abciclient.Callback) {
-	_m.Called(_a0)
-}
-
 // Start provides a mock function with given fields: _a0
 func (_m *Client) Start(_a0 context.Context) error {
 	ret := _m.Called(_a0)
@@ -492,20 +418,6 @@ func (_m *Client) Start(_a0 context.Context) error {
 	return r0
 }
 
-// String provides a mock function with given fields:
-func (_m *Client) String() string {
-	ret := _m.Called()
-
-	var r0 string
-	if rf, ok := ret.Get(0).(func() string); ok {
-		r0 = rf()
-	} else {
-		r0 = ret.Get(0).(string)
-	}
-
-	return r0
-}
-
 // VerifyVoteExtension provides a mock function with given fields: _a0, _a1
 func (_m *Client) VerifyVoteExtension(_a0 context.Context, _a1 types.RequestVerifyVoteExtension) (*types.ResponseVerifyVoteExtension, error) {
 	ret := _m.Called(_a0, _a1)
```
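The mock is regenerated from the new interface, so tests program the added methods through testify's usual On/Return flow. A hedged test sketch (package paths abbreviated; assumes the generated mocks package plus testify's mock and require helpers):

```go
// Sketch: stubbing the regenerated mock in a test.
func TestFinalizeBlockViaMock(t *testing.T) {
	m := &mocks.Client{}
	m.On("FinalizeBlock", mock.Anything, mock.Anything).
		Return(&types.ResponseFinalizeBlock{}, nil)

	res, err := m.FinalizeBlock(context.Background(), types.RequestFinalizeBlock{})
	require.NoError(t, err)
	require.NotNil(t, res)
	m.AssertExpectations(t) // fails the test if FinalizeBlock was never called
}
```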
```diff
@@ -24,11 +24,6 @@ const (
 	reqQueueSize = 256
 )
 
-type reqResWithContext struct {
-	R *ReqRes
-	C context.Context // if context.Err is not nil, reqRes will be thrown away (ignored)
-}
-
 // This is goroutine-safe, but users should beware that the application in
 // general is not meant to be interfaced with concurrent callers.
 type socketClient struct {
@@ -39,7 +34,7 @@ type socketClient struct {
 	mustConnect bool
 	conn        net.Conn
 
-	reqQueue chan *reqResWithContext
+	reqQueue chan *ReqRes
 
 	mtx sync.Mutex
 	err error
@@ -55,7 +50,7 @@ var _ Client = (*socketClient)(nil)
 func NewSocketClient(logger log.Logger, addr string, mustConnect bool) Client {
 	cli := &socketClient{
 		logger:      logger,
-		reqQueue:    make(chan *reqResWithContext, reqQueueSize),
+		reqQueue:    make(chan *ReqRes, reqQueueSize),
 		mustConnect: mustConnect,
 		addr:        addr,
 		reqSent:     list.New(),
@@ -99,7 +94,10 @@ func (cli *socketClient) OnStop() {
 		cli.conn.Close()
 	}
 
-	cli.drainQueue()
+	// this timeout is arbitrary.
+	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+	defer cancel()
+	cli.drainQueue(ctx)
 }
 
 // Error returns an error if the client was stopped abruptly.
```
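OnStop previously waited on drainQueue with no bound; deriving a context with a timeout caps how long shutdown can hang on queued requests. The shape of the pattern in isolation (illustrative):

```go
// Sketch: bounding a cleanup step with a derived context.
// The 5s value mirrors the diff's own comment: the bound is arbitrary,
// it only guarantees OnStop cannot block forever on a stuck queue.
func shutdown(drain func(context.Context)) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel() // always release the timer
	drain(ctx)
}
```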
```diff
@@ -109,16 +107,6 @@ func (cli *socketClient) Error() error {
 	return cli.err
 }
 
-// SetResponseCallback sets a callback, which will be executed for each
-// non-error & non-empty response from the server.
-//
-// NOTE: callback may get internally generated flush responses.
-func (cli *socketClient) SetResponseCallback(resCb Callback) {
-	cli.mtx.Lock()
-	defer cli.mtx.Unlock()
-	cli.resCb = resCb
-}
-
 //----------------------------------------
 
 func (cli *socketClient) sendRequestsRoutine(ctx context.Context, conn io.Writer) {
@@ -132,13 +120,9 @@ func (cli *socketClient) sendRequestsRoutine(ctx context.Context, conn io.Writer
 			return
 		}
 
-		if reqres.C.Err() != nil {
-			cli.logger.Debug("Request's context is done", "req", reqres.R, "err", reqres.C.Err())
-			continue
-		}
-		cli.willSendReq(reqres.R)
+		cli.willSendReq(reqres)
 
-		if err := types.WriteMessage(reqres.R.Request, bw); err != nil {
+		if err := types.WriteMessage(reqres.Request, bw); err != nil {
 			cli.stopForError(fmt.Errorf("write to buffer: %w", err))
 			return
 		}
@@ -203,7 +187,7 @@ func (cli *socketClient) didRecvResponse(res *types.Response) error {
 	}
 
 	reqres.Response = res
-	reqres.Done()            // release waiters
+	reqres.SetDone()         // release waiters
 	cli.reqSent.Remove(next) // pop first item from linked list
 
 	// Notify client listener if set (global callback).
@@ -222,22 +206,8 @@ func (cli *socketClient) didRecvResponse(res *types.Response) error {
 
 //----------------------------------------
 
-func (cli *socketClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
-	return cli.queueRequestAsync(ctx, types.ToRequestFlush())
-}
-
-func (cli *socketClient) DeliverTxAsync(ctx context.Context, req types.RequestDeliverTx) (*ReqRes, error) {
-	return cli.queueRequestAsync(ctx, types.ToRequestDeliverTx(req))
-}
-
-func (cli *socketClient) CheckTxAsync(ctx context.Context, req types.RequestCheckTx) (*ReqRes, error) {
-	return cli.queueRequestAsync(ctx, types.ToRequestCheckTx(req))
-}
-
-//----------------------------------------
-
 func (cli *socketClient) Flush(ctx context.Context) error {
-	reqRes, err := cli.queueRequest(ctx, types.ToRequestFlush(), true)
+	reqRes, err := cli.queueRequest(ctx, types.ToRequestFlush())
 	if err != nil {
 		return queueErr(err)
 	}
@@ -246,15 +216,8 @@ func (cli *socketClient) Flush(ctx context.Context) error {
 		return err
 	}
 
-	gotResp := make(chan struct{})
-	go func() {
-		// NOTE: if we don't flush the queue, its possible to get stuck here
-		reqRes.Wait()
-		close(gotResp)
-	}()
-
 	select {
-	case <-gotResp:
+	case <-reqRes.signal:
 		return cli.Error()
 	case <-ctx.Done():
 		return ctx.Err()
```
@@ -280,18 +243,6 @@ func (cli *socketClient) Info(
|
||||
return reqres.Response.GetInfo(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) DeliverTx(
|
||||
ctx context.Context,
|
||||
req types.RequestDeliverTx,
|
||||
) (*types.ResponseDeliverTx, error) {
|
||||
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestDeliverTx(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetDeliverTx(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) CheckTx(
|
||||
ctx context.Context,
|
||||
req types.RequestCheckTx,
|
||||
@@ -334,30 +285,6 @@ func (cli *socketClient) InitChain(
|
||||
return reqres.Response.GetInitChain(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) BeginBlock(
|
||||
ctx context.Context,
|
||||
req types.RequestBeginBlock,
|
||||
) (*types.ResponseBeginBlock, error) {
|
||||
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestBeginBlock(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetBeginBlock(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) EndBlock(
|
||||
ctx context.Context,
|
||||
req types.RequestEndBlock,
|
||||
) (*types.ResponseEndBlock, error) {
|
||||
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestEndBlock(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetEndBlock(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) ListSnapshots(
|
||||
ctx context.Context,
|
||||
req types.RequestListSnapshots,
|
||||
@@ -415,6 +342,18 @@ func (cli *socketClient) PrepareProposal(
|
||||
return reqres.Response.GetPrepareProposal(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) ProcessProposal(
|
||||
ctx context.Context,
|
||||
req types.RequestProcessProposal,
|
||||
) (*types.ResponseProcessProposal, error) {
|
||||
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestProcessProposal(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetProcessProposal(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) ExtendVote(
|
||||
ctx context.Context,
|
||||
req types.RequestExtendVote) (*types.ResponseExtendVote, error) {
|
||||
@@ -437,55 +376,46 @@ func (cli *socketClient) VerifyVoteExtension(
|
||||
return reqres.Response.GetVerifyVoteExtension(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) FinalizeBlock(
|
||||
ctx context.Context,
|
||||
req types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error) {
|
||||
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestFinalizeBlock(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetFinalizeBlock(), nil
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
// queueRequest enqueues req onto the queue. If the queue is full, it ether
|
||||
// returns an error (sync=false) or blocks (sync=true).
|
||||
//
|
||||
// When sync=true, ctx can be used to break early. When sync=false, ctx will be
|
||||
// used later to determine if request should be dropped (if ctx.Err is
|
||||
// non-nil).
|
||||
// queueRequest enqueues req onto the queue. The request can break early if the
|
||||
// the context is canceled. If the queue is full, this method blocks to allow
|
||||
// the request to be placed onto the queue. This has the effect of creating an
|
||||
// unbounded queue of goroutines waiting to write to this queue which is a bit
|
||||
// antithetical to the purposes of a queue, however, undoing this behavior has
|
||||
// dangerous upstream implications as a result of the usage of this behavior upstream.
|
||||
// Remove at your peril.
|
||||
//
|
||||
// The caller is responsible for checking cli.Error.
|
||||
func (cli *socketClient) queueRequest(ctx context.Context, req *types.Request, sync bool) (*ReqRes, error) {
|
||||
func (cli *socketClient) queueRequest(ctx context.Context, req *types.Request) (*ReqRes, error) {
|
||||
reqres := NewReqRes(req)
|
||||
|
||||
if sync {
|
||||
select {
|
||||
case cli.reqQueue <- &reqResWithContext{R: reqres, C: context.Background()}:
|
||||
case <-ctx.Done():
|
||||
return nil, ctx.Err()
|
||||
}
|
||||
} else {
|
||||
select {
|
||||
case cli.reqQueue <- &reqResWithContext{R: reqres, C: ctx}:
|
||||
default:
|
||||
return nil, errors.New("buffer is full")
|
||||
}
|
||||
select {
|
||||
case cli.reqQueue <- reqres:
|
||||
case <-ctx.Done():
|
||||
return nil, ctx.Err()
|
||||
}
|
||||
|
||||
return reqres, nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) queueRequestAsync(
|
||||
ctx context.Context,
|
||||
req *types.Request,
|
||||
) (*ReqRes, error) {
|
||||
|
||||
reqres, err := cli.queueRequest(ctx, req, false)
|
||||
if err != nil {
|
||||
return nil, queueErr(err)
|
||||
}
|
||||
|
||||
return reqres, cli.Error()
|
||||
}
|
||||
|
||||
func (cli *socketClient) queueRequestAndFlush(
|
||||
ctx context.Context,
|
||||
req *types.Request,
|
||||
) (*ReqRes, error) {
|
||||
|
||||
reqres, err := cli.queueRequest(ctx, req, true)
|
||||
reqres, err := cli.queueRequest(ctx, req)
|
||||
if err != nil {
|
||||
return nil, queueErr(err)
|
||||
}
|
||||
@@ -503,14 +433,14 @@ func queueErr(e error) error {
|
||||
|
||||
// drainQueue marks as complete and discards all remaining pending requests
|
||||
// from the queue.
|
||||
func (cli *socketClient) drainQueue() {
|
||||
func (cli *socketClient) drainQueue(ctx context.Context) {
|
||||
cli.mtx.Lock()
|
||||
defer cli.mtx.Unlock()
|
||||
|
||||
// mark all in-flight messages as resolved (they will get cli.Error())
|
||||
for req := cli.reqSent.Front(); req != nil; req = req.Next() {
|
||||
reqres := req.Value.(*ReqRes)
|
||||
reqres.Done()
|
||||
reqres.SetDone()
|
||||
}
|
||||
|
||||
// Mark all queued messages as resolved.
|
||||
@@ -520,8 +450,10 @@ func (cli *socketClient) drainQueue() {
|
||||
// See https://github.com/tendermint/tendermint/issues/6996.
|
||||
for {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return
|
||||
case reqres := <-cli.reqQueue:
|
||||
reqres.R.Done()
|
||||
reqres.SetDone()
|
||||
default:
|
||||
return
|
||||
}
|
||||
@@ -538,8 +470,6 @@ func resMatchesReq(req *types.Request, res *types.Response) (ok bool) {
|
||||
_, ok = res.Value.(*types.Response_Flush)
|
||||
case *types.Request_Info:
|
||||
_, ok = res.Value.(*types.Response_Info)
|
||||
case *types.Request_DeliverTx:
|
||||
_, ok = res.Value.(*types.Response_DeliverTx)
|
||||
case *types.Request_CheckTx:
|
||||
_, ok = res.Value.(*types.Response_CheckTx)
|
||||
case *types.Request_Commit:
|
||||
@@ -554,10 +484,6 @@ func resMatchesReq(req *types.Request, res *types.Response) (ok bool) {
|
||||
_, ok = res.Value.(*types.Response_ExtendVote)
|
||||
case *types.Request_VerifyVoteExtension:
|
||||
_, ok = res.Value.(*types.Response_VerifyVoteExtension)
|
||||
case *types.Request_BeginBlock:
|
||||
_, ok = res.Value.(*types.Response_BeginBlock)
|
||||
case *types.Request_EndBlock:
|
||||
_, ok = res.Value.(*types.Response_EndBlock)
|
||||
case *types.Request_ApplySnapshotChunk:
|
||||
_, ok = res.Value.(*types.Response_ApplySnapshotChunk)
|
||||
case *types.Request_LoadSnapshotChunk:
|
||||
@@ -566,6 +492,8 @@ func resMatchesReq(req *types.Request, res *types.Response) (ok bool) {
|
||||
_, ok = res.Value.(*types.Response_ListSnapshots)
|
||||
case *types.Request_OfferSnapshot:
|
||||
_, ok = res.Value.(*types.Response_OfferSnapshot)
|
||||
case *types.Request_FinalizeBlock:
|
||||
_, ok = res.Value.(*types.Response_FinalizeBlock)
|
||||
}
|
||||
return ok
|
||||
}
|
||||
@@ -580,7 +508,5 @@ func (cli *socketClient) stopForError(err error) {
|
||||
cli.mtx.Unlock()
|
||||
|
||||
cli.logger.Info("Stopping abci.socketClient", "reason", err)
|
||||
if err := cli.Stop(); err != nil {
|
||||
cli.logger.Error("error stopping abci.socketClient", "err", err)
|
||||
}
|
||||
cli.Stop()
|
||||
}
|
||||
|
||||
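The queueRequest change above is the heart of this branch: instead of an async path that errors out with "buffer is full", every caller now blocks until the bounded channel has room or the caller's context is canceled. A minimal, self-contained Go sketch of that enqueue pattern (names here are illustrative, not the client's actual API):

package main

import (
    "context"
    "fmt"
    "time"
)

type reqRes struct{ id int }

// enqueue blocks while the queue is full, so a burst of callers turns into
// parked goroutines rather than dropped requests; the context bounds the wait.
func enqueue(ctx context.Context, queue chan *reqRes, r *reqRes) (*reqRes, error) {
    select {
    case queue <- r:
        return r, nil
    case <-ctx.Done():
        return nil, ctx.Err()
    }
}

func main() {
    queue := make(chan *reqRes, 1)
    queue <- &reqRes{id: 0} // fill the queue so the next enqueue must wait

    ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
    defer cancel()

    if _, err := enqueue(ctx, queue, &reqRes{id: 1}); err != nil {
        fmt.Println("enqueue failed:", err) // context deadline exceeded
    }
}

This is the trade-off the new doc comment warns about: the queue itself stays bounded, but the set of goroutines waiting to write to it does not.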
@@ -29,7 +29,7 @@ func TestProperSyncCalls(t *testing.T) {
 
     resp := make(chan error, 1)
     go func() {
-        rsp, err := c.BeginBlock(ctx, types.RequestBeginBlock{})
+        rsp, err := c.FinalizeBlock(ctx, types.RequestFinalizeBlock{})
         assert.NoError(t, err)
         assert.NoError(t, c.Flush(ctx))
         assert.NotNil(t, rsp)
@@ -79,7 +79,7 @@ type slowApp struct {
     types.BaseApplication
 }
 
-func (slowApp) BeginBlock(req types.RequestBeginBlock) types.ResponseBeginBlock {
+func (slowApp) FinalizeBlock(req types.RequestFinalizeBlock) types.ResponseFinalizeBlock {
     time.Sleep(200 * time.Millisecond)
-    return types.ResponseBeginBlock{}
+    return types.ResponseFinalizeBlock{}
 }
@@ -28,7 +28,6 @@ import (
 // client is a global variable so it can be reused by the console
 var (
     client abciclient.Client
-    logger log.Logger
 )
 
 // flags
@@ -48,34 +47,32 @@ var (
     flagPersist string
 )
 
-var RootCmd = &cobra.Command{
-    Use:   "abci-cli",
-    Short: "the ABCI CLI tool wraps an ABCI client",
-    Long:  "the ABCI CLI tool wraps an ABCI client and is used for testing ABCI servers",
-    PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
-
-        switch cmd.Use {
-        case "kvstore", "version":
-            return nil
-        }
-
-        if logger == nil {
-            logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
-        }
-
-        if client == nil {
-            var err error
-            client, err = abciclient.NewClient(logger.With("module", "abci-client"), flagAddress, flagAbci, false)
-            if err != nil {
-                return err
-            }
-
-            if err := client.Start(cmd.Context()); err != nil {
-                return err
-            }
-        }
-        return nil
-    },
-}
+func RootCmmand(logger log.Logger) *cobra.Command {
+    return &cobra.Command{
+        Use:   "abci-cli",
+        Short: "the ABCI CLI tool wraps an ABCI client",
+        Long:  "the ABCI CLI tool wraps an ABCI client and is used for testing ABCI servers",
+        PersistentPreRunE: func(cmd *cobra.Command, args []string) (err error) {
+            switch cmd.Use {
+            case "kvstore", "version":
+                return nil
+            }
+
+            if client == nil {
+                client, err = abciclient.NewClient(logger.With("module", "abci-client"), flagAddress, flagAbci, false)
+                if err != nil {
+                    return err
+                }
+
+                if err := client.Start(cmd.Context()); err != nil {
+                    return err
+                }
+            }
+            return nil
+        },
+    }
+}
 
 // Structure for data passed to print response.
@@ -97,56 +94,46 @@ type queryResponse struct {
 }
 
 func Execute() error {
-    addGlobalFlags()
-    addCommands()
-    return RootCmd.Execute()
+    logger, err := log.NewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
+    if err != nil {
+        return err
+    }
+
+    cmd := RootCmmand(logger)
+    addGlobalFlags(cmd)
+    addCommands(cmd, logger)
+    return cmd.Execute()
 }
 
-func addGlobalFlags() {
-    RootCmd.PersistentFlags().StringVarP(&flagAddress,
+func addGlobalFlags(cmd *cobra.Command) {
+    cmd.PersistentFlags().StringVarP(&flagAddress,
         "address",
         "",
         "tcp://0.0.0.0:26658",
         "address of application socket")
-    RootCmd.PersistentFlags().StringVarP(&flagAbci, "abci", "", "socket", "either socket or grpc")
-    RootCmd.PersistentFlags().BoolVarP(&flagVerbose,
+    cmd.PersistentFlags().StringVarP(&flagAbci, "abci", "", "socket", "either socket or grpc")
+    cmd.PersistentFlags().BoolVarP(&flagVerbose,
        "verbose",
        "v",
        false,
        "print the command and results as if it were a console session")
-    RootCmd.PersistentFlags().StringVarP(&flagLogLevel, "log_level", "", "debug", "set the logger level")
+    cmd.PersistentFlags().StringVarP(&flagLogLevel, "log_level", "", "debug", "set the logger level")
 }
 
-func addQueryFlags() {
-    queryCmd.PersistentFlags().StringVarP(&flagPath, "path", "", "/store", "path to prefix query with")
-    queryCmd.PersistentFlags().IntVarP(&flagHeight, "height", "", 0, "height to query the blockchain at")
-    queryCmd.PersistentFlags().BoolVarP(&flagProve,
-        "prove",
-        "",
-        false,
-        "whether or not to return a merkle proof of the query result")
-}
-
-func addKVStoreFlags() {
-    kvstoreCmd.PersistentFlags().StringVarP(&flagPersist, "persist", "", "", "directory to use for a database")
-}
-
-func addCommands() {
-    RootCmd.AddCommand(batchCmd)
-    RootCmd.AddCommand(consoleCmd)
-    RootCmd.AddCommand(echoCmd)
-    RootCmd.AddCommand(infoCmd)
-    RootCmd.AddCommand(deliverTxCmd)
-    RootCmd.AddCommand(checkTxCmd)
-    RootCmd.AddCommand(commitCmd)
-    RootCmd.AddCommand(versionCmd)
-    RootCmd.AddCommand(testCmd)
-    addQueryFlags()
-    RootCmd.AddCommand(queryCmd)
+func addCommands(cmd *cobra.Command, logger log.Logger) {
+    cmd.AddCommand(batchCmd)
+    cmd.AddCommand(consoleCmd)
+    cmd.AddCommand(echoCmd)
+    cmd.AddCommand(infoCmd)
+    cmd.AddCommand(deliverTxCmd)
+    cmd.AddCommand(checkTxCmd)
+    cmd.AddCommand(commitCmd)
+    cmd.AddCommand(versionCmd)
+    cmd.AddCommand(testCmd)
+    cmd.AddCommand(getQueryCmd())
 
     // examples
-    addKVStoreFlags()
-    RootCmd.AddCommand(kvstoreCmd)
+    cmd.AddCommand(getKVStoreCmd(logger))
 }
 
 var batchCmd = &cobra.Command{
@@ -206,7 +193,7 @@ var deliverTxCmd = &cobra.Command{
     Short: "deliver a new transaction to the application",
     Long:  "deliver a new transaction to the application",
     Args:  cobra.ExactArgs(1),
-    RunE:  cmdDeliverTx,
+    RunE:  cmdFinalizeBlock,
 }
 
 var checkTxCmd = &cobra.Command{
@@ -236,20 +223,38 @@ var versionCmd = &cobra.Command{
     },
 }
 
-var queryCmd = &cobra.Command{
-    Use:   "query",
-    Short: "query the application state",
-    Long:  "query the application state",
-    Args:  cobra.ExactArgs(1),
-    RunE:  cmdQuery,
+func getQueryCmd() *cobra.Command {
+    cmd := &cobra.Command{
+        Use:   "query",
+        Short: "query the application state",
+        Long:  "query the application state",
+        Args:  cobra.ExactArgs(1),
+        RunE:  cmdQuery,
+    }
+
+    cmd.PersistentFlags().StringVarP(&flagPath, "path", "", "/store", "path to prefix query with")
+    cmd.PersistentFlags().IntVarP(&flagHeight, "height", "", 0, "height to query the blockchain at")
+    cmd.PersistentFlags().BoolVarP(&flagProve,
+        "prove",
+        "",
+        false,
+        "whether or not to return a merkle proof of the query result")
+
+    return cmd
 }
 
-var kvstoreCmd = &cobra.Command{
-    Use:   "kvstore",
-    Short: "ABCI demo example",
-    Long:  "ABCI demo example",
-    Args:  cobra.ExactArgs(0),
-    RunE:  cmdKVStore,
+func getKVStoreCmd(logger log.Logger) *cobra.Command {
+    cmd := &cobra.Command{
+        Use:   "kvstore",
+        Short: "ABCI demo example",
+        Long:  "ABCI demo example",
+        Args:  cobra.ExactArgs(0),
+        RunE:  makeKVStoreCmd(logger),
+    }
+
+    cmd.PersistentFlags().StringVarP(&flagPersist, "persist", "", "", "directory to use for a database")
+    return cmd
+
 }
 
 var testCmd = &cobra.Command{
@@ -295,17 +300,38 @@ func cmdTest(cmd *cobra.Command, args []string) error {
         []func() error{
             func() error { return servertest.InitChain(ctx, client) },
             func() error { return servertest.Commit(ctx, client, nil) },
-            func() error { return servertest.DeliverTx(ctx, client, []byte("abc"), code.CodeTypeBadNonce, nil) },
+            func() error {
+                return servertest.FinalizeBlock(ctx, client, [][]byte{
+                    []byte("abc"),
+                }, []uint32{
+                    code.CodeTypeBadNonce,
+                }, nil)
+            },
             func() error { return servertest.Commit(ctx, client, nil) },
-            func() error { return servertest.DeliverTx(ctx, client, []byte{0x00}, code.CodeTypeOK, nil) },
+            func() error {
+                return servertest.FinalizeBlock(ctx, client, [][]byte{
+                    {0x00},
+                }, []uint32{
+                    code.CodeTypeOK,
+                }, nil)
+            },
             func() error { return servertest.Commit(ctx, client, []byte{0, 0, 0, 0, 0, 0, 0, 1}) },
-            func() error { return servertest.DeliverTx(ctx, client, []byte{0x00}, code.CodeTypeBadNonce, nil) },
-            func() error { return servertest.DeliverTx(ctx, client, []byte{0x01}, code.CodeTypeOK, nil) },
-            func() error { return servertest.DeliverTx(ctx, client, []byte{0x00, 0x02}, code.CodeTypeOK, nil) },
-            func() error { return servertest.DeliverTx(ctx, client, []byte{0x00, 0x03}, code.CodeTypeOK, nil) },
-            func() error { return servertest.DeliverTx(ctx, client, []byte{0x00, 0x00, 0x04}, code.CodeTypeOK, nil) },
-            func() error {
-                return servertest.DeliverTx(ctx, client, []byte{0x00, 0x00, 0x06}, code.CodeTypeBadNonce, nil)
+            func() error {
+                return servertest.FinalizeBlock(ctx, client, [][]byte{
+                    {0x00},
+                    {0x01},
+                    {0x00, 0x02},
+                    {0x00, 0x03},
+                    {0x00, 0x00, 0x04},
+                    {0x00, 0x00, 0x06},
+                }, []uint32{
+                    code.CodeTypeBadNonce,
+                    code.CodeTypeOK,
+                    code.CodeTypeOK,
+                    code.CodeTypeOK,
+                    code.CodeTypeOK,
+                    code.CodeTypeBadNonce,
+                }, nil)
             },
             func() error { return servertest.Commit(ctx, client, []byte{0, 0, 0, 0, 0, 0, 0, 5}) },
         })
@@ -401,7 +427,7 @@ func muxOnCommands(cmd *cobra.Command, pArgs []string) error {
     case "commit":
         return cmdCommit(cmd, actualArgs)
     case "deliver_tx":
-        return cmdDeliverTx(cmd, actualArgs)
+        return cmdFinalizeBlock(cmd, actualArgs)
     case "echo":
         return cmdEcho(cmd, actualArgs)
     case "info":
@@ -425,12 +451,9 @@ func cmdUnimplemented(cmd *cobra.Command, args []string) error {
     })
 
     fmt.Println("Available commands:")
-    fmt.Printf("%s: %s\n", echoCmd.Use, echoCmd.Short)
-    fmt.Printf("%s: %s\n", infoCmd.Use, infoCmd.Short)
-    fmt.Printf("%s: %s\n", checkTxCmd.Use, checkTxCmd.Short)
-    fmt.Printf("%s: %s\n", deliverTxCmd.Use, deliverTxCmd.Short)
-    fmt.Printf("%s: %s\n", queryCmd.Use, queryCmd.Short)
-    fmt.Printf("%s: %s\n", commitCmd.Use, commitCmd.Short)
+    for _, cmd := range cmd.Commands() {
+        fmt.Printf("%s: %s\n", cmd.Use, cmd.Short)
+    }
    fmt.Println("Use \"[command] --help\" for more information about a command.")
 
     return nil
@@ -473,7 +496,7 @@ func cmdInfo(cmd *cobra.Command, args []string) error {
 const codeBad uint32 = 10
 
 // Append a new tx to application
-func cmdDeliverTx(cmd *cobra.Command, args []string) error {
+func cmdFinalizeBlock(cmd *cobra.Command, args []string) error {
     if len(args) == 0 {
         printResponse(cmd, args, response{
             Code: codeBad,
@@ -485,16 +508,18 @@ func cmdFinalizeBlock(cmd *cobra.Command, args []string) error {
     if err != nil {
         return err
     }
-    res, err := client.DeliverTx(cmd.Context(), types.RequestDeliverTx{Tx: txBytes})
+    res, err := client.FinalizeBlock(cmd.Context(), types.RequestFinalizeBlock{Txs: [][]byte{txBytes}})
     if err != nil {
         return err
     }
-    printResponse(cmd, args, response{
-        Code: res.Code,
-        Data: res.Data,
-        Info: res.Info,
-        Log:  res.Log,
-    })
+    for _, tx := range res.Txs {
+        printResponse(cmd, args, response{
+            Code: tx.Code,
+            Data: tx.Data,
+            Info: tx.Info,
+            Log:  tx.Log,
+        })
+    }
     return nil
 }
 
@@ -574,33 +599,34 @@ func cmdQuery(cmd *cobra.Command, args []string) error {
     return nil
 }
 
-func cmdKVStore(cmd *cobra.Command, args []string) error {
-    logger := log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
+func makeKVStoreCmd(logger log.Logger) func(*cobra.Command, []string) error {
+    return func(cmd *cobra.Command, args []string) error {
+        // Create the application - in memory or persisted to disk
+        var app types.Application
+        if flagPersist == "" {
+            app = kvstore.NewApplication()
+        } else {
+            app = kvstore.NewPersistentKVStoreApplication(logger, flagPersist)
+        }
 
-    // Create the application - in memory or persisted to disk
-    var app types.Application
-    if flagPersist == "" {
-        app = kvstore.NewApplication()
-    } else {
-        app = kvstore.NewPersistentKVStoreApplication(logger, flagPersist)
+        // Start the listener
+        srv, err := server.NewServer(logger.With("module", "abci-server"), flagAddress, flagAbci, app)
+        if err != nil {
+            return err
+        }
+
+        ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
+        defer cancel()
+
+        if err := srv.Start(ctx); err != nil {
+            return err
+        }
+
+        // Run forever.
+        <-ctx.Done()
+        return nil
     }
-
-    // Start the listener
-    srv, err := server.NewServer(logger.With("module", "abci-server"), flagAddress, flagAbci, app)
-    if err != nil {
-        return err
-    }
-
-    ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
-    defer cancel()
-
-    if err := srv.Start(ctx); err != nil {
-        return err
-    }
-
-    // Run forever.
-    <-ctx.Done()
-    return nil
 }
 
 //--------------------------------------------------------------------------------
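The abci-cli rework above applies one pattern throughout: package-level cobra command variables (RootCmd, queryCmd, kvstoreCmd) become constructor functions that receive their dependencies, so the logger is injected rather than read from a global. A small stand-alone sketch of that constructor-injection pattern (the command and logger below are stand-ins, not the abci-cli types):

package main

import (
    "fmt"

    "github.com/spf13/cobra"
)

type logger interface{ Info(msg string) }

type stdoutLogger struct{}

func (stdoutLogger) Info(msg string) { fmt.Println("INFO:", msg) }

// newRootCmd builds the whole command tree from its inputs; tests can call
// it with a fake logger and get an isolated command instance, with no shared
// package state between invocations.
func newRootCmd(log logger) *cobra.Command {
    root := &cobra.Command{Use: "demo"}
    root.AddCommand(&cobra.Command{
        Use: "hello",
        RunE: func(cmd *cobra.Command, args []string) error {
            log.Info("hello called")
            return nil
        },
    })
    return root
}

func main() {
    _ = newRootCmd(stdoutLogger{}).Execute()
}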
@@ -6,7 +6,6 @@ import (
     "math/rand"
     "net"
     "os"
-    "reflect"
     "testing"
     "time"
 
@@ -35,7 +34,7 @@ func TestKVStore(t *testing.T) {
     logger := log.NewTestingLogger(t)
 
     logger.Info("### Testing KVStore")
-    testStream(ctx, t, logger, kvstore.NewApplication())
+    testBulk(ctx, t, logger, kvstore.NewApplication())
 }
 
 func TestBaseApp(t *testing.T) {
@@ -44,7 +43,7 @@ func TestBaseApp(t *testing.T) {
     logger := log.NewTestingLogger(t)
 
     logger.Info("### Testing BaseApp")
-    testStream(ctx, t, logger, types.NewBaseApplication())
+    testBulk(ctx, t, logger, types.NewBaseApplication())
 }
 
 func TestGRPC(t *testing.T) {
@@ -57,10 +56,10 @@ func TestGRPC(t *testing.T) {
     testGRPCSync(ctx, t, logger, types.NewGRPCApplication(types.NewBaseApplication()))
 }
 
-func testStream(ctx context.Context, t *testing.T, logger log.Logger, app types.Application) {
+func testBulk(ctx context.Context, t *testing.T, logger log.Logger, app types.Application) {
     t.Helper()
 
-    const numDeliverTxs = 20000
+    const numDeliverTxs = 700000
     socketFile := fmt.Sprintf("test-%08x.sock", rand.Int31n(1<<30))
     defer os.Remove(socketFile)
     socket := fmt.Sprintf("unix://%v", socketFile)
@@ -77,51 +76,22 @@ func testBulk(ctx context.Context, t *testing.T, logger log.Logger, app types.
     err = client.Start(ctx)
     require.NoError(t, err)
 
-    done := make(chan struct{})
-    counter := 0
-    client.SetResponseCallback(func(req *types.Request, res *types.Response) {
-        // Process response
-        switch r := res.Value.(type) {
-        case *types.Response_DeliverTx:
-            counter++
-            if r.DeliverTx.Code != code.CodeTypeOK {
-                t.Error("DeliverTx failed with ret_code", r.DeliverTx.Code)
-            }
-            if counter > numDeliverTxs {
-                t.Fatalf("Too many DeliverTx responses. Got %d, expected %d", counter, numDeliverTxs)
-            }
-            if counter == numDeliverTxs {
-                go func() {
-                    time.Sleep(time.Second * 1) // Wait for a bit to allow counter overflow
-                    close(done)
-                }()
-                return
-            }
-        case *types.Response_Flush:
-            // ignore
-        default:
-            t.Error("Unexpected response type", reflect.TypeOf(res.Value))
-        }
-    })
-
-    // Write requests
+    // Construct request
+    rfb := types.RequestFinalizeBlock{Txs: make([][]byte, numDeliverTxs)}
     for counter := 0; counter < numDeliverTxs; counter++ {
-        // Send request
-        _, err = client.DeliverTxAsync(ctx, types.RequestDeliverTx{Tx: []byte("test")})
-        require.NoError(t, err)
-
-        // Sometimes send flush messages
-        if counter%128 == 0 {
-            err = client.Flush(ctx)
-            require.NoError(t, err)
-        }
+        rfb.Txs[counter] = []byte("test")
     }
+    // Send bulk request
+    res, err := client.FinalizeBlock(ctx, rfb)
+    require.NoError(t, err)
+    require.Equal(t, numDeliverTxs, len(res.Txs), "Number of txs doesn't match")
+    for _, tx := range res.Txs {
+        require.Equal(t, tx.Code, code.CodeTypeOK, "Tx failed")
+    }
 
     // Send final flush message
-    _, err = client.FlushAsync(ctx)
+    err = client.Flush(ctx)
     require.NoError(t, err)
-
-    <-done
 }
 
 //-------------------------
@@ -133,7 +103,7 @@ func dialerFunc(ctx context.Context, addr string) (net.Conn, error) {
 
 func testGRPCSync(ctx context.Context, t *testing.T, logger log.Logger, app types.ABCIApplicationServer) {
     t.Helper()
-    numDeliverTxs := 2000
+    numDeliverTxs := 680000
     socketFile := fmt.Sprintf("/tmp/test-%08x.sock", rand.Int31n(1<<30))
     defer os.Remove(socketFile)
     socket := fmt.Sprintf("unix://%v", socketFile)
@@ -142,7 +112,7 @@ func testGRPCSync(ctx context.Context, t *testing.T, logger log.Logger, app type
     server := abciserver.NewGRPCServer(logger.With("module", "abci-server"), socket, app)
 
     require.NoError(t, server.Start(ctx))
-    t.Cleanup(func() { server.Wait() })
+    t.Cleanup(server.Wait)
 
     // Connect to the socket
     conn, err := grpc.Dial(socket,
@@ -159,25 +129,17 @@ func testGRPCSync(ctx context.Context, t *testing.T, logger log.Logger, app type
 
     client := types.NewABCIApplicationClient(conn)
 
-    // Write requests
+    // Construct request
+    rfb := types.RequestFinalizeBlock{Txs: make([][]byte, numDeliverTxs)}
     for counter := 0; counter < numDeliverTxs; counter++ {
-        // Send request
-        response, err := client.DeliverTx(ctx, &types.RequestDeliverTx{Tx: []byte("test")})
-        require.NoError(t, err, "Error in GRPC DeliverTx")
-
-        counter++
-        if response.Code != code.CodeTypeOK {
-            t.Error("DeliverTx failed with ret_code", response.Code)
-        }
-        if counter > numDeliverTxs {
-            t.Fatal("Too many DeliverTx responses")
-        }
-        t.Log("response", counter)
-        if counter == numDeliverTxs {
-            go func() {
-                time.Sleep(time.Second * 1) // Wait for a bit to allow counter overflow
-            }()
-        }
+        rfb.Txs[counter] = []byte("test")
     }
+
+    // Send request
+    response, err := client.FinalizeBlock(ctx, &rfb)
+    require.NoError(t, err, "Error in GRPC FinalizeBlock")
+    require.Equal(t, numDeliverTxs, len(response.Txs), "Number of txs returned via GRPC doesn't match")
+    for _, tx := range response.Txs {
+        require.Equal(t, tx.Code, code.CodeTypeOK, "Tx failed")
+    }
 }
@@ -86,14 +86,13 @@ func (app *Application) Info(req types.RequestInfo) (resInfo types.ResponseInfo)
 }
 
 // tx is either "key=value" or just arbitrary bytes
-func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
+func (app *Application) HandleTx(tx []byte) *types.ResponseDeliverTx {
     var key, value string
 
-    parts := bytes.Split(req.Tx, []byte("="))
+    parts := bytes.Split(tx, []byte("="))
     if len(parts) == 2 {
         key, value = string(parts[0]), string(parts[1])
     } else {
-        key, value = string(req.Tx), string(req.Tx)
+        key, value = string(tx), string(tx)
     }
 
     err := app.state.db.Set(prefixKey([]byte(key)), []byte(value))
@@ -114,7 +113,15 @@ func (app *Application) HandleTx(tx []byte) *types.ResponseDeliverTx {
         },
     }
 
-    return types.ResponseDeliverTx{Code: code.CodeTypeOK, Events: events}
+    return &types.ResponseDeliverTx{Code: code.CodeTypeOK, Events: events}
+}
+
+func (app *Application) FinalizeBlock(req types.RequestFinalizeBlock) types.ResponseFinalizeBlock {
+    txs := make([]*types.ResponseDeliverTx, len(req.Txs))
+    for i, tx := range req.Txs {
+        txs[i] = app.HandleTx(tx)
+    }
+    return types.ResponseFinalizeBlock{Txs: txs}
 }
 
 func (app *Application) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
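For reference, the tx format HandleTx parses above is simply "key=value", with bare bytes stored under themselves. A runnable sketch of just that parsing rule (it mirrors the logic in the diff rather than importing the kvstore package):

package main

import (
    "bytes"
    "fmt"
)

func parseTx(tx []byte) (key, value string) {
    parts := bytes.Split(tx, []byte("="))
    if len(parts) == 2 {
        return string(parts[0]), string(parts[1])
    }
    // no "=": the raw bytes serve as both key and value
    return string(tx), string(tx)
}

func main() {
    k, v := parseTx([]byte("name=satoshi"))
    fmt.Println(k, v) // name satoshi

    k, v = parseTx([]byte("abc"))
    fmt.Println(k, v) // abc abc
}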
@@ -3,7 +3,6 @@ package kvstore
 import (
     "context"
     "fmt"
-    "os"
     "sort"
     "testing"
 
@@ -25,12 +24,14 @@ const (
 )
 
 func testKVStore(t *testing.T, app types.Application, tx []byte, key, value string) {
-    req := types.RequestDeliverTx{Tx: tx}
-    ar := app.DeliverTx(req)
-    require.False(t, ar.IsErr(), ar)
+    req := types.RequestFinalizeBlock{Txs: [][]byte{tx}}
+    ar := app.FinalizeBlock(req)
+    require.Equal(t, 1, len(ar.Txs))
+    require.False(t, ar.Txs[0].IsErr())
     // repeating tx doesn't raise error
-    ar = app.DeliverTx(req)
-    require.False(t, ar.IsErr(), ar)
+    ar = app.FinalizeBlock(req)
+    require.Equal(t, 1, len(ar.Txs))
+    require.False(t, ar.Txs[0].IsErr())
     // commit
     app.Commit()
 
@@ -72,10 +73,7 @@ func TestKVStoreKV(t *testing.T) {
 }
 
 func TestPersistentKVStoreKV(t *testing.T) {
-    dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
-    if err != nil {
-        t.Fatal(err)
-    }
+    dir := t.TempDir()
     logger := log.NewTestingLogger(t)
 
     kvstore := NewPersistentKVStoreApplication(logger, dir)
@@ -90,10 +88,7 @@ func TestPersistentKVStoreKV(t *testing.T) {
 }
 
 func TestPersistentKVStoreInfo(t *testing.T) {
-    dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
-    if err != nil {
-        t.Fatal(err)
-    }
+    dir := t.TempDir()
     logger := log.NewTestingLogger(t)
 
     kvstore := NewPersistentKVStoreApplication(logger, dir)
@@ -111,8 +106,7 @@ func TestPersistentKVStoreInfo(t *testing.T) {
     header := tmproto.Header{
         Height: height,
     }
-    kvstore.BeginBlock(types.RequestBeginBlock{Hash: hash, Header: header})
-    kvstore.EndBlock(types.RequestEndBlock{Height: header.Height})
+    kvstore.FinalizeBlock(types.RequestFinalizeBlock{Hash: hash, Header: header, Height: height})
     kvstore.Commit()
 
     resInfo = kvstore.Info(types.RequestInfo{})
@@ -124,10 +118,7 @@ func TestPersistentKVStoreInfo(t *testing.T) {
 
 // add a validator, remove a validator, update a validator
 func TestValUpdates(t *testing.T) {
-    dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
-    if err != nil {
-        t.Fatal(err)
-    }
+    dir := t.TempDir()
     logger := log.NewTestingLogger(t)
 
     kvstore := NewPersistentKVStoreApplication(logger, dir)
@@ -204,16 +195,16 @@ func makeApplyBlock(
         Height: height,
     }
 
-    kvstore.BeginBlock(types.RequestBeginBlock{Hash: hash, Header: header})
-    for _, tx := range txs {
-        if r := kvstore.DeliverTx(types.RequestDeliverTx{Tx: tx}); r.IsErr() {
-            t.Fatal(r)
-        }
-    }
-    resEndBlock := kvstore.EndBlock(types.RequestEndBlock{Height: header.Height})
+    resFinalizeBlock := kvstore.FinalizeBlock(types.RequestFinalizeBlock{
+        Hash:   hash,
+        Header: header,
+        Height: height,
+        Txs:    txs,
+    })
 
     kvstore.Commit()
 
-    valsEqual(t, diff, resEndBlock.ValidatorUpdates)
+    valsEqual(t, diff, resFinalizeBlock.ValidatorUpdates)
 
 }
 
@@ -330,13 +321,15 @@ func runClientTests(ctx context.Context, t *testing.T, client abciclient.Client)
 }
 
 func testClient(ctx context.Context, t *testing.T, app abciclient.Client, tx []byte, key, value string) {
-    ar, err := app.DeliverTx(ctx, types.RequestDeliverTx{Tx: tx})
+    ar, err := app.FinalizeBlock(ctx, types.RequestFinalizeBlock{Txs: [][]byte{tx}})
     require.NoError(t, err)
-    require.False(t, ar.IsErr(), ar)
-    // repeating tx doesn't raise error
-    ar, err = app.DeliverTx(ctx, types.RequestDeliverTx{Tx: tx})
+    require.Equal(t, 1, len(ar.Txs))
+    require.False(t, ar.Txs[0].IsErr())
+    // repeating FinalizeBlock doesn't raise error
+    ar, err = app.FinalizeBlock(ctx, types.RequestFinalizeBlock{Txs: [][]byte{tx}})
     require.NoError(t, err)
-    require.False(t, ar.IsErr(), ar)
+    require.Equal(t, 1, len(ar.Txs))
+    require.False(t, ar.Txs[0].IsErr())
     // commit
     _, err = app.Commit(ctx)
     require.NoError(t, err)
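The directory setup change repeated in these tests is a stock modernization: t.TempDir() allocates a per-test directory and removes it automatically when the test finishes, which is why the manual error check and the "// TODO" disappear. Before and after in isolation (place in a _test.go file to run):

package kvstore_test

import (
    "os"
    "testing"
)

func TestOldStyle(t *testing.T) {
    dir, err := os.MkdirTemp("", "abci-kvstore-test")
    if err != nil {
        t.Fatal(err)
    }
    defer os.RemoveAll(dir) // cleanup is the caller's responsibility

    _ = dir // ... use dir ...
}

func TestNewStyle(t *testing.T) {
    dir := t.TempDir() // created and cleaned up by the testing package

    _ = dir // ... use dir ...
}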
@@ -64,21 +64,21 @@ func (app *PersistentKVStoreApplication) Info(req types.RequestInfo) types.Respo
 }
 
 // tx is either "val:pubkey!power" or "key=value" or just arbitrary bytes
-func (app *PersistentKVStoreApplication) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
+func (app *PersistentKVStoreApplication) HandleTx(tx []byte) *types.ResponseDeliverTx {
     // if it starts with "val:", update the validator set
     // format is "val:pubkey!power"
-    if isValidatorTx(req.Tx) {
+    if isValidatorTx(tx) {
         // update validators in the merkle tree
         // and in app.ValUpdates
-        return app.execValidatorTx(req.Tx)
+        return app.execValidatorTx(tx)
     }
 
-    if isPrepareTx(req.Tx) {
-        return app.execPrepareTx(req.Tx)
+    if isPrepareTx(tx) {
+        return app.execPrepareTx(tx)
     }
 
     // otherwise, update the key-value store
-    return app.app.DeliverTx(req)
+    return app.app.HandleTx(tx)
 }
 
 func (app *PersistentKVStoreApplication) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
@@ -121,7 +121,9 @@ func (app *PersistentKVStoreApplication) InitChain(req types.RequestInitChain) t
 }
 
-// Track the block hash and header information
-func (app *PersistentKVStoreApplication) BeginBlock(req types.RequestBeginBlock) types.ResponseBeginBlock {
+// Execute transactions
+// Update the validator set
+func (app *PersistentKVStoreApplication) FinalizeBlock(req types.RequestFinalizeBlock) types.ResponseFinalizeBlock {
     // reset valset changes
     app.ValUpdates = make([]types.ValidatorUpdate, 0)
 
@@ -143,12 +145,12 @@ func (app *PersistentKVStoreApplication) FinalizeBlock(req types.RequestFinalize
         }
     }
 
-    return types.ResponseBeginBlock{}
-}
+    respTxs := make([]*types.ResponseDeliverTx, len(req.Txs))
+    for i, tx := range req.Txs {
+        respTxs[i] = app.HandleTx(tx)
+    }
 
-// Update the validator set
-func (app *PersistentKVStoreApplication) EndBlock(req types.RequestEndBlock) types.ResponseEndBlock {
-    return types.ResponseEndBlock{ValidatorUpdates: app.ValUpdates}
+    return types.ResponseFinalizeBlock{Txs: respTxs, ValidatorUpdates: app.ValUpdates}
 }
 
 func (app *PersistentKVStoreApplication) ListSnapshots(
@@ -189,6 +191,16 @@ func (app *PersistentKVStoreApplication) PrepareProposal(
     return types.ResponsePrepareProposal{BlockData: app.substPrepareTx(req.BlockData)}
 }
 
+func (app *PersistentKVStoreApplication) ProcessProposal(
+    req types.RequestProcessProposal) types.ResponseProcessProposal {
+    for _, tx := range req.Txs {
+        if len(tx) == 0 {
+            return types.ResponseProcessProposal{Result: types.ResponseProcessProposal_REJECT}
+        }
+    }
+    return types.ResponseProcessProposal{Result: types.ResponseProcessProposal_ACCEPT}
+}
+
 //---------------------------------------------
 // update validators
 
@@ -228,13 +240,13 @@ func isValidatorTx(tx []byte) bool {
 
 // format is "val:pubkey!power"
 // pubkey is a base64-encoded 32-byte ed25519 key
-func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) types.ResponseDeliverTx {
+func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) *types.ResponseDeliverTx {
     tx = tx[len(ValidatorSetChangePrefix):]
 
     // get the pubkey and power
     pubKeyAndPower := strings.Split(string(tx), "!")
     if len(pubKeyAndPower) != 2 {
-        return types.ResponseDeliverTx{
+        return &types.ResponseDeliverTx{
             Code: code.CodeTypeEncodingError,
             Log:  fmt.Sprintf("Expected 'pubkey!power'. Got %v", pubKeyAndPower)}
     }
@@ -243,7 +255,7 @@ func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) *types.Respon
     // decode the pubkey
     pubkey, err := base64.StdEncoding.DecodeString(pubkeyS)
     if err != nil {
-        return types.ResponseDeliverTx{
+        return &types.ResponseDeliverTx{
             Code: code.CodeTypeEncodingError,
             Log:  fmt.Sprintf("Pubkey (%s) is invalid base64", pubkeyS)}
     }
@@ -251,7 +263,7 @@ func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) *types.Respon
     // decode the power
     power, err := strconv.ParseInt(powerS, 10, 64)
     if err != nil {
-        return types.ResponseDeliverTx{
+        return &types.ResponseDeliverTx{
             Code: code.CodeTypeEncodingError,
             Log:  fmt.Sprintf("Power (%s) is not an int", powerS)}
     }
@@ -261,7 +273,7 @@ func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) *types.Respon
 }
 
 // add, update, or remove a validator
-func (app *PersistentKVStoreApplication) updateValidator(v types.ValidatorUpdate) types.ResponseDeliverTx {
+func (app *PersistentKVStoreApplication) updateValidator(v types.ValidatorUpdate) *types.ResponseDeliverTx {
     pubkey, err := encoding.PubKeyFromProto(v.PubKey)
     if err != nil {
         panic(fmt.Errorf("can't decode public key: %w", err))
@@ -276,7 +288,7 @@ func (app *PersistentKVStoreApplication) updateValidator(v types.ValidatorUpdate
     }
     if !hasKey {
         pubStr := base64.StdEncoding.EncodeToString(pubkey.Bytes())
-        return types.ResponseDeliverTx{
+        return &types.ResponseDeliverTx{
             Code: code.CodeTypeUnauthorized,
             Log:  fmt.Sprintf("Cannot remove non-existent validator %s", pubStr)}
     }
@@ -288,7 +300,7 @@ func (app *PersistentKVStoreApplication) updateValidator(v types.ValidatorUpdate
     // add or update validator
     value := bytes.NewBuffer(make([]byte, 0))
     if err := types.WriteMessage(&v, value); err != nil {
-        return types.ResponseDeliverTx{
+        return &types.ResponseDeliverTx{
             Code: code.CodeTypeEncodingError,
             Log:  fmt.Sprintf("error encoding validator: %v", err)}
     }
@@ -301,7 +313,7 @@ func (app *PersistentKVStoreApplication) updateValidator(v types.ValidatorUpdate
     // we only update the changes array if we successfully updated the tree
     app.ValUpdates = append(app.ValUpdates, v)
 
-    return types.ResponseDeliverTx{Code: code.CodeTypeOK}
+    return &types.ResponseDeliverTx{Code: code.CodeTypeOK}
 }
 
 // -----------------------------
@@ -314,14 +326,16 @@ func isPrepareTx(tx []byte) bool {
 
 // execPrepareTx is a noop. tx data is considered a placeholder
 // and is substituted at PrepareProposal.
-func (app *PersistentKVStoreApplication) execPrepareTx(tx []byte) types.ResponseDeliverTx {
+func (app *PersistentKVStoreApplication) execPrepareTx(tx []byte) *types.ResponseDeliverTx {
     // noop
-    return types.ResponseDeliverTx{}
+    return &types.ResponseDeliverTx{}
 }
 
 // substPrepareTx substitutes all the prepare txs in the block data
 // with a null string (could be any arbitrary string).
 func (app *PersistentKVStoreApplication) substPrepareTx(blockData [][]byte) [][]byte {
     // TODO: this mechanism will change with the current spec of PrepareProposal
     // We now have a special type for marking a tx as changed
     for i, tx := range blockData {
         if isPrepareTx(tx) {
             blockData[i] = make([]byte, len(tx))
@@ -213,9 +213,6 @@ func (s *SocketServer) handleRequest(req *types.Request, responses chan<- *types
     case *types.Request_Info:
         res := s.app.Info(*r.Info)
         responses <- types.ToResponseInfo(res)
-    case *types.Request_DeliverTx:
-        res := s.app.DeliverTx(*r.DeliverTx)
-        responses <- types.ToResponseDeliverTx(res)
     case *types.Request_CheckTx:
         res := s.app.CheckTx(*r.CheckTx)
         responses <- types.ToResponseCheckTx(res)
@@ -228,12 +225,6 @@ func (s *SocketServer) handleRequest(req *types.Request, responses chan<- *types
     case *types.Request_InitChain:
         res := s.app.InitChain(*r.InitChain)
         responses <- types.ToResponseInitChain(res)
-    case *types.Request_BeginBlock:
-        res := s.app.BeginBlock(*r.BeginBlock)
-        responses <- types.ToResponseBeginBlock(res)
-    case *types.Request_EndBlock:
-        res := s.app.EndBlock(*r.EndBlock)
-        responses <- types.ToResponseEndBlock(res)
     case *types.Request_ListSnapshots:
         res := s.app.ListSnapshots(*r.ListSnapshots)
         responses <- types.ToResponseListSnapshots(res)
@@ -243,6 +234,9 @@ func (s *SocketServer) handleRequest(req *types.Request, responses chan<- *types
     case *types.Request_PrepareProposal:
        res := s.app.PrepareProposal(*r.PrepareProposal)
        responses <- types.ToResponsePrepareProposal(res)
+    case *types.Request_ProcessProposal:
+        res := s.app.ProcessProposal(*r.ProcessProposal)
+        responses <- types.ToResponseProcessProposal(res)
     case *types.Request_LoadSnapshotChunk:
         res := s.app.LoadSnapshotChunk(*r.LoadSnapshotChunk)
         responses <- types.ToResponseLoadSnapshotChunk(res)
@@ -255,6 +249,9 @@ func (s *SocketServer) handleRequest(req *types.Request, responses chan<- *types
     case *types.Request_VerifyVoteExtension:
         res := s.app.VerifyVoteExtension(*r.VerifyVoteExtension)
         responses <- types.ToResponseVerifyVoteExtension(res)
+    case *types.Request_FinalizeBlock:
+        res := s.app.FinalizeBlock(*r.FinalizeBlock)
+        responses <- types.ToResponseFinalizeBlock(res)
     default:
         responses <- types.ToResponseException("Unknown request")
     }
@@ -49,22 +49,24 @@ func Commit(ctx context.Context, client abciclient.Client, hashExp []byte) error
     return nil
 }
 
-func DeliverTx(ctx context.Context, client abciclient.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
-    res, _ := client.DeliverTx(ctx, types.RequestDeliverTx{Tx: txBytes})
-    code, data, log := res.Code, res.Data, res.Log
-    if code != codeExp {
-        fmt.Println("Failed test: DeliverTx")
-        fmt.Printf("DeliverTx response code was unexpected. Got %v expected %v. Log: %v\n",
-            code, codeExp, log)
-        return errors.New("deliverTx error")
-    }
-    if !bytes.Equal(data, dataExp) {
-        fmt.Println("Failed test: DeliverTx")
-        fmt.Printf("DeliverTx response data was unexpected. Got %X expected %X\n",
-            data, dataExp)
-        return errors.New("deliverTx error")
-    }
-    fmt.Println("Passed test: DeliverTx")
+func FinalizeBlock(ctx context.Context, client abciclient.Client, txBytes [][]byte, codeExp []uint32, dataExp []byte) error {
+    res, _ := client.FinalizeBlock(ctx, types.RequestFinalizeBlock{Txs: txBytes})
+    for i, tx := range res.Txs {
+        code, data, log := tx.Code, tx.Data, tx.Log
+        if code != codeExp[i] {
+            fmt.Println("Failed test: FinalizeBlock")
+            fmt.Printf("FinalizeBlock response code was unexpected. Got %v expected %v. Log: %v\n",
+                code, codeExp, log)
+            return errors.New("FinalizeBlock error")
+        }
+        if !bytes.Equal(data, dataExp) {
+            fmt.Println("Failed test: FinalizeBlock")
+            fmt.Printf("FinalizeBlock response data was unexpected. Got %X expected %X\n",
+                data, dataExp)
+            return errors.New("FinalizeBlock error")
+        }
+    }
+    fmt.Println("Passed test: FinalizeBlock")
     return nil
 }
@@ -19,18 +19,15 @@ type Application interface {
     // Consensus Connection
     InitChain(RequestInitChain) ResponseInitChain // Initialize blockchain w validators/other info from TendermintCore
     PrepareProposal(RequestPrepareProposal) ResponsePrepareProposal
-    // Signals the beginning of a block
-    BeginBlock(RequestBeginBlock) ResponseBeginBlock
-    // Deliver a tx for full processing
-    DeliverTx(RequestDeliverTx) ResponseDeliverTx
-    // Signals the end of a block, returns changes to the validator set
-    EndBlock(RequestEndBlock) ResponseEndBlock
+    ProcessProposal(RequestProcessProposal) ResponseProcessProposal
     // Commit the state and return the application Merkle root hash
     Commit() ResponseCommit
     // Create application specific vote extension
     ExtendVote(RequestExtendVote) ResponseExtendVote
     // Verify application's vote extension data
     VerifyVoteExtension(RequestVerifyVoteExtension) ResponseVerifyVoteExtension
+    // Deliver the decided block with its txs to the Application
+    FinalizeBlock(RequestFinalizeBlock) ResponseFinalizeBlock
 
     // State Sync Connection
     ListSnapshots(RequestListSnapshots) ResponseListSnapshots // List available snapshots
@@ -55,10 +52,6 @@ func (BaseApplication) Info(req RequestInfo) ResponseInfo {
     return ResponseInfo{}
 }
 
-func (BaseApplication) DeliverTx(req RequestDeliverTx) ResponseDeliverTx {
-    return ResponseDeliverTx{Code: CodeTypeOK}
-}
-
 func (BaseApplication) CheckTx(req RequestCheckTx) ResponseCheckTx {
     return ResponseCheckTx{Code: CodeTypeOK}
 }
@@ -72,7 +65,9 @@ func (BaseApplication) ExtendVote(req RequestExtendVote) ResponseExtendVote {
 }
 
 func (BaseApplication) VerifyVoteExtension(req RequestVerifyVoteExtension) ResponseVerifyVoteExtension {
-    return ResponseVerifyVoteExtension{}
+    return ResponseVerifyVoteExtension{
+        Result: ResponseVerifyVoteExtension_ACCEPT,
+    }
 }
 
 func (BaseApplication) Query(req RequestQuery) ResponseQuery {
@@ -83,14 +78,6 @@ func (BaseApplication) InitChain(req RequestInitChain) ResponseInitChain {
     return ResponseInitChain{}
 }
 
-func (BaseApplication) BeginBlock(req RequestBeginBlock) ResponseBeginBlock {
-    return ResponseBeginBlock{}
-}
-
-func (BaseApplication) EndBlock(req RequestEndBlock) ResponseEndBlock {
-    return ResponseEndBlock{}
-}
-
 func (BaseApplication) ListSnapshots(req RequestListSnapshots) ResponseListSnapshots {
     return ResponseListSnapshots{}
 }
@@ -111,6 +98,20 @@ func (BaseApplication) PrepareProposal(req RequestPrepareProposal) ResponsePrepa
     return ResponsePrepareProposal{}
 }
 
+func (BaseApplication) ProcessProposal(req RequestProcessProposal) ResponseProcessProposal {
+    return ResponseProcessProposal{}
+}
+
+func (BaseApplication) FinalizeBlock(req RequestFinalizeBlock) ResponseFinalizeBlock {
+    txs := make([]*ResponseDeliverTx, len(req.Txs))
+    for i := range req.Txs {
+        txs[i] = &ResponseDeliverTx{Code: CodeTypeOK}
+    }
+    return ResponseFinalizeBlock{
+        Txs: txs,
+    }
+}
+
 //-------------------------------------------------------
 
 // GRPCApplication is a GRPC wrapper for Application
@@ -135,11 +136,6 @@ func (app *GRPCApplication) Info(ctx context.Context, req *RequestInfo) (*Respon
     return &res, nil
 }
 
-func (app *GRPCApplication) DeliverTx(ctx context.Context, req *RequestDeliverTx) (*ResponseDeliverTx, error) {
-    res := app.app.DeliverTx(*req)
-    return &res, nil
-}
-
 func (app *GRPCApplication) CheckTx(ctx context.Context, req *RequestCheckTx) (*ResponseCheckTx, error) {
     res := app.app.CheckTx(*req)
     return &res, nil
@@ -160,16 +156,6 @@ func (app *GRPCApplication) InitChain(ctx context.Context, req *RequestInitChain
     return &res, nil
 }
 
-func (app *GRPCApplication) BeginBlock(ctx context.Context, req *RequestBeginBlock) (*ResponseBeginBlock, error) {
-    res := app.app.BeginBlock(*req)
-    return &res, nil
-}
-
-func (app *GRPCApplication) EndBlock(ctx context.Context, req *RequestEndBlock) (*ResponseEndBlock, error) {
-    res := app.app.EndBlock(*req)
-    return &res, nil
-}
-
 func (app *GRPCApplication) ListSnapshots(
     ctx context.Context, req *RequestListSnapshots) (*ResponseListSnapshots, error) {
     res := app.app.ListSnapshots(*req)
@@ -211,3 +197,15 @@ func (app *GRPCApplication) PrepareProposal(
     res := app.app.PrepareProposal(*req)
     return &res, nil
 }
+
+func (app *GRPCApplication) ProcessProposal(
+    ctx context.Context, req *RequestProcessProposal) (*ResponseProcessProposal, error) {
+    res := app.app.ProcessProposal(*req)
+    return &res, nil
+}
+
+func (app *GRPCApplication) FinalizeBlock(
+    ctx context.Context, req *RequestFinalizeBlock) (*ResponseFinalizeBlock, error) {
+    res := app.app.FinalizeBlock(*req)
+    return &res, nil
+}
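With BeginBlock, DeliverTx, and EndBlock gone from the interface, an application now overrides FinalizeBlock alone to observe every tx in the decided block. A hedged sketch of the minimal override, embedding BaseApplication for all other defaults; the import path matches the package shown in this diff, but treat it as an assumption about this branch's layout:

package myapp

import (
    "github.com/tendermint/tendermint/abci/types"
)

type counterApp struct {
    types.BaseApplication // no-op defaults for every other ABCI method

    seen int
}

// FinalizeBlock handles the whole block at once: one response entry per tx,
// in the same order as req.Txs.
func (app *counterApp) FinalizeBlock(req types.RequestFinalizeBlock) types.ResponseFinalizeBlock {
    txs := make([]*types.ResponseDeliverTx, len(req.Txs))
    for i := range req.Txs {
        app.seen++
        txs[i] = &types.ResponseDeliverTx{Code: types.CodeTypeOK}
    }
    return types.ResponseFinalizeBlock{Txs: txs}
}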
@@ -4,6 +4,7 @@ import (
     "io"
 
     "github.com/gogo/protobuf/proto"
+
     "github.com/tendermint/tendermint/internal/libs/protoio"
 )
 
@@ -44,12 +45,6 @@ func ToRequestInfo(req RequestInfo) *Request {
     }
 }
 
-func ToRequestDeliverTx(req RequestDeliverTx) *Request {
-    return &Request{
-        Value: &Request_DeliverTx{&req},
-    }
-}
-
 func ToRequestCheckTx(req RequestCheckTx) *Request {
     return &Request{
         Value: &Request_CheckTx{&req},
@@ -74,18 +69,6 @@ func ToRequestInitChain(req RequestInitChain) *Request {
     }
 }
 
-func ToRequestBeginBlock(req RequestBeginBlock) *Request {
-    return &Request{
-        Value: &Request_BeginBlock{&req},
-    }
-}
-
-func ToRequestEndBlock(req RequestEndBlock) *Request {
-    return &Request{
-        Value: &Request_EndBlock{&req},
-    }
-}
-
 func ToRequestListSnapshots(req RequestListSnapshots) *Request {
     return &Request{
         Value: &Request_ListSnapshots{&req},
@@ -128,6 +111,18 @@ func ToRequestPrepareProposal(req RequestPrepareProposal) *Request {
     }
 }
 
+func ToRequestProcessProposal(req RequestProcessProposal) *Request {
+    return &Request{
+        Value: &Request_ProcessProposal{&req},
+    }
+}
+
+func ToRequestFinalizeBlock(req RequestFinalizeBlock) *Request {
+    return &Request{
+        Value: &Request_FinalizeBlock{&req},
+    }
+}
+
 //----------------------------------------
 
 func ToResponseException(errStr string) *Response {
@@ -153,11 +148,6 @@ func ToResponseInfo(res ResponseInfo) *Response {
         Value: &Response_Info{&res},
     }
 }
-
-func ToResponseDeliverTx(res ResponseDeliverTx) *Response {
-    return &Response{
-        Value: &Response_DeliverTx{&res},
-    }
-}
 
 func ToResponseCheckTx(res ResponseCheckTx) *Response {
     return &Response{
@@ -183,18 +173,6 @@ func ToResponseInitChain(res ResponseInitChain) *Response {
     }
 }
 
-func ToResponseBeginBlock(res ResponseBeginBlock) *Response {
-    return &Response{
-        Value: &Response_BeginBlock{&res},
-    }
-}
-
-func ToResponseEndBlock(res ResponseEndBlock) *Response {
-    return &Response{
-        Value: &Response_EndBlock{&res},
-    }
-}
-
 func ToResponseListSnapshots(res ResponseListSnapshots) *Response {
     return &Response{
         Value: &Response_ListSnapshots{&res},
@@ -236,3 +214,15 @@ func ToResponsePrepareProposal(res ResponsePrepareProposal) *Response {
         Value: &Response_PrepareProposal{&res},
     }
 }
+
+func ToResponseProcessProposal(res ResponseProcessProposal) *Response {
+    return &Response{
+        Value: &Response_ProcessProposal{&res},
+    }
+}
+
+func ToResponseFinalizeBlock(res ResponseFinalizeBlock) *Response {
+    return &Response{
+        Value: &Response_FinalizeBlock{&res},
+    }
+}
@@ -43,14 +43,24 @@ func (r ResponseQuery) IsErr() bool {
     return r.Code != CodeTypeOK
 }
 
+// IsUnknown returns true if Code is Unknown
+func (r ResponseVerifyVoteExtension) IsUnknown() bool {
+    return r.Result == ResponseVerifyVoteExtension_UNKNOWN
+}
+
 // IsOK returns true if Code is OK
 func (r ResponseVerifyVoteExtension) IsOK() bool {
-    return r.Result <= ResponseVerifyVoteExtension_ACCEPT
+    return r.Result == ResponseVerifyVoteExtension_ACCEPT
 }
 
 // IsErr returns true if Code is something other than OK.
 func (r ResponseVerifyVoteExtension) IsErr() bool {
-    return r.Result > ResponseVerifyVoteExtension_ACCEPT
+    return r.Result != ResponseVerifyVoteExtension_ACCEPT
 }
 
+// IsOK returns true if Code is OK
+func (r ResponseProcessProposal) IsOK() bool {
+    return r.Result == ResponseProcessProposal_ACCEPT
+}
+
 //---------------------------------------------------------------------------
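The predicate tightening above is subtle but load-bearing: UNKNOWN sorts below ACCEPT in the result enum, so the old "<=" comparison classified an UNKNOWN verification result as OK. A stand-alone sketch with a stand-in enum (the numeric values UNKNOWN=0, ACCEPT=1, REJECT=2 are assumptions inferred from the ordering the old code relied on):

package main

import "fmt"

type result int32

const (
    resultUnknown result = iota // 0
    resultAccept                // 1
    resultReject                // 2
)

func main() {
    oldIsOK := func(r result) bool { return r <= resultAccept }
    newIsOK := func(r result) bool { return r == resultAccept }

    fmt.Println(oldIsOK(resultUnknown)) // true: UNKNOWN used to pass as OK
    fmt.Println(newIsOK(resultUnknown)) // false: only an explicit ACCEPT passes
}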
(File diff suppressed because it is too large.)

buf.gen.yaml (new file, 14 lines)
@@ -0,0 +1,14 @@
+# The version of the generation template (required).
+# The only currently-valid value is v1beta1.
+version: v1beta1
+
+# The plugins to run.
+plugins:
+  # The name of the plugin.
+  - name: gogofaster
+    # The directory where the generated proto output will be written.
+    # The directory is relative to where the generation tool was run.
+    out: proto
+    # Set options to assign import paths to the well-known types
+    # and to enable service generation.
+    opt: Mgoogle/protobuf/timestamp.proto=github.com/gogo/protobuf/types,Mgoogle/protobuf/duration.proto=github.com/golang/protobuf/ptypes/duration,plugins=grpc,paths=source_relative
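Assuming the standard Buf workflow for this config version (an assumption; the diff only adds the configuration files, not the invocation), code generation would be run from the repository root with `buf generate`, which reads the buf.yaml added just below for build roots and lint rules, and this buf.gen.yaml for the gogofaster plugin and its grpc/source_relative options.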
buf.yaml (new file, 16 lines)
@@ -0,0 +1,16 @@
+version: v1beta1
+
+build:
+  roots:
+    - proto
+    - third_party/proto
+lint:
+  use:
+    - BASIC
+    - FILE_LOWER_SNAKE_CASE
+    - UNARY_RPC
+  ignore:
+    - gogoproto
+breaking:
+  use:
+    - FILE
@@ -45,12 +45,16 @@ func main() {
 		keyFile        = flag.String("keyfile", "", "absolute path to server key")
 		rootCA         = flag.String("rootcafile", "", "absolute path to root CA")
 		prometheusAddr = flag.String("prometheus-addr", "", "address for prometheus endpoint (host:port)")
-
-		logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo).
-			With("module", "priv_val")
 	)
 	flag.Parse()

+	logger, err := log.NewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
+	if err != nil {
+		fmt.Fprintf(os.Stderr, "failed to construct logger: %v", err)
+		os.Exit(1)
+	}
+	logger = logger.With("module", "priv_val")
+
 	ctx, cancel := context.WithCancel(context.Background())
 	defer cancel()
cmd/tendermint/commands/completion.go (new file, +46)
@@ -0,0 +1,46 @@
+package commands
+
+import (
+	"fmt"
+
+	"github.com/spf13/cobra"
+)
+
+// NewCompletionCmd returns a cobra.Command that generates bash and zsh
+// completion scripts for the given root command. If hidden is true, the
+// command will not show up in the root command's list of available commands.
+func NewCompletionCmd(rootCmd *cobra.Command, hidden bool) *cobra.Command {
+	flagZsh := "zsh"
+	cmd := &cobra.Command{
+		Use:   "completion",
+		Short: "Generate shell completion scripts",
+		Long: fmt.Sprintf(`Generate Bash and Zsh completion scripts and print them to STDOUT.
+
+Once saved to file, a completion script can be loaded in the shell's
+current session as shown:
+
+   $ . <(%s completion)
+
+To configure your bash shell to load completions for each session add to
+your $HOME/.bashrc or $HOME/.profile the following instruction:
+
+   . <(%s completion)
+`, rootCmd.Use, rootCmd.Use),
+		RunE: func(cmd *cobra.Command, _ []string) error {
+			zsh, err := cmd.Flags().GetBool(flagZsh)
+			if err != nil {
+				return err
+			}
+			if zsh {
+				return rootCmd.GenZshCompletion(cmd.OutOrStdout())
+			}
+			return rootCmd.GenBashCompletion(cmd.OutOrStdout())
+		},
+		Hidden: hidden,
+		Args:   cobra.NoArgs,
+	}
+
+	cmd.Flags().Bool(flagZsh, false, "Generate Zsh completion script")
+
+	return cmd
+}
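NewCompletionCmd only builds the command; it still has to be attached to a root command to be reachable. A minimal sketch of the wiring, with the surrounding main function assumed rather than taken from the diff:

package main

import (
	"os"

	"github.com/spf13/cobra"

	"github.com/tendermint/tendermint/cmd/tendermint/commands"
)

func main() {
	rootCmd := &cobra.Command{Use: "tendermint"}

	// Attach the completion command; passing hidden=true keeps it
	// out of the root command's help listing.
	rootCmd.AddCommand(commands.NewCompletionCmd(rootCmd, true))

	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}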
@@ -12,30 +12,30 @@ import (
 // GenValidatorCmd allows the generation of a keypair for a
 // validator.
-var GenValidatorCmd = &cobra.Command{
-	Use:   "gen-validator",
-	Short: "Generate new validator keypair",
-	RunE:  genValidator,
-}
+func MakeGenValidatorCommand() *cobra.Command {
+	var keyType string
+	cmd := &cobra.Command{
+		Use:   "gen-validator",
+		Short: "Generate new validator keypair",
+		RunE: func(cmd *cobra.Command, args []string) error {
+			pv, err := privval.GenFilePV("", "", keyType)
+			if err != nil {
+				return err
+			}

-func init() {
-	GenValidatorCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
+			jsbz, err := json.Marshal(pv)
+			if err != nil {
+				return fmt.Errorf("validator -> json: %w", err)
+			}
+
+			fmt.Printf("%v\n", string(jsbz))
+
+			return nil
+		},
+	}
+
+	cmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
 		"Key type to generate privval file with. Options: ed25519, secp256k1")
-}
-
-func genValidator(cmd *cobra.Command, args []string) error {
-	pv, err := privval.GenFilePV("", "", keyType)
-	if err != nil {
-		return err
-	}
-
-	jsbz, err := json.Marshal(pv)
-	if err != nil {
-		return fmt.Errorf("validator -> json: %w", err)
-	}
-
-	fmt.Printf(`%v
-`, string(jsbz))
-
-	return nil
+	return cmd
 }
@@ -7,7 +7,8 @@ import (
 	"github.com/spf13/cobra"

-	cfg "github.com/tendermint/tendermint/config"
+	"github.com/tendermint/tendermint/config"
+	"github.com/tendermint/tendermint/libs/log"
 	tmos "github.com/tendermint/tendermint/libs/os"
 	tmrand "github.com/tendermint/tendermint/libs/rand"
 	tmtime "github.com/tendermint/tendermint/libs/time"
@@ -15,43 +16,40 @@ import (
 	"github.com/tendermint/tendermint/types"
 )

-// InitFilesCmd initializes a fresh Tendermint Core instance.
-var InitFilesCmd = &cobra.Command{
-	Use:       "init [full|validator|seed]",
-	Short:     "Initializes a Tendermint node",
-	ValidArgs: []string{"full", "validator", "seed"},
-	// We allow for zero args so we can throw a more informative error
-	Args: cobra.MaximumNArgs(1),
-	RunE: initFiles,
-}
-
-var (
-	keyType string
-)
-
-func init() {
-	InitFilesCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
-		"Key type to generate privval file with. Options: ed25519, secp256k1")
-}
-
-func initFiles(cmd *cobra.Command, args []string) error {
-	if len(args) == 0 {
-		return errors.New("must specify a node type: tendermint init [validator|full|seed]")
-	}
-	config.Mode = args[0]
-	return initFilesWithConfig(cmd.Context(), config)
-}
+// MakeInitFilesCommand returns the command to initialize a fresh Tendermint Core instance.
+func MakeInitFilesCommand(conf *config.Config, logger log.Logger) *cobra.Command {
+	var keyType string
+	cmd := &cobra.Command{
+		Use:       "init [full|validator|seed]",
+		Short:     "Initializes a Tendermint node",
+		ValidArgs: []string{"full", "validator", "seed"},
+		// We allow for zero args so we can throw a more informative error
+		Args: cobra.MaximumNArgs(1),
+		RunE: func(cmd *cobra.Command, args []string) error {
+			if len(args) == 0 {
+				return errors.New("must specify a node type: tendermint init [validator|full|seed]")
+			}
+			conf.Mode = args[0]
+			return initFilesWithConfig(cmd.Context(), conf, logger, keyType)
+		},
+	}
+
+	cmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
+		"Key type to generate privval file with. Options: ed25519, secp256k1")
+
+	return cmd
+}

-func initFilesWithConfig(ctx context.Context, config *cfg.Config) error {
+func initFilesWithConfig(ctx context.Context, conf *config.Config, logger log.Logger, keyType string) error {
 	var (
 		pv  *privval.FilePV
 		err error
 	)

-	if config.Mode == cfg.ModeValidator {
+	if conf.Mode == config.ModeValidator {
 		// private validator
-		privValKeyFile := config.PrivValidator.KeyFile()
-		privValStateFile := config.PrivValidator.StateFile()
+		privValKeyFile := conf.PrivValidator.KeyFile()
+		privValStateFile := conf.PrivValidator.StateFile()
 		if tmos.FileExists(privValKeyFile) {
 			pv, err = privval.LoadFilePV(privValKeyFile, privValStateFile)
 			if err != nil {
@@ -73,7 +71,7 @@ func initFilesWithConfig(ctx context.Context, config *cfg.Config) error {
 		}
 	}

-	nodeKeyFile := config.NodeKeyFile()
+	nodeKeyFile := conf.NodeKeyFile()
 	if tmos.FileExists(nodeKeyFile) {
 		logger.Info("Found node key", "path", nodeKeyFile)
 	} else {
@@ -84,7 +82,7 @@ func initFilesWithConfig(ctx context.Context, config *cfg.Config) error {
 	}

 	// genesis file
-	genFile := config.GenesisFile()
+	genFile := conf.GenesisFile()
 	if tmos.FileExists(genFile) {
 		logger.Info("Found genesis file", "path", genFile)
 	} else {
@@ -123,10 +121,10 @@ func initFilesWithConfig(ctx context.Context, config *cfg.Config) error {
 	}

 	// write config file
-	if err := cfg.WriteConfigFile(config.RootDir, config); err != nil {
+	if err := config.WriteConfigFile(conf.RootDir, conf); err != nil {
 		return err
 	}
-	logger.Info("Generated config", "mode", config.Mode)
+	logger.Info("Generated config", "mode", conf.Mode)

 	return nil
 }
@@ -6,14 +6,17 @@ import (
 	"github.com/spf13/cobra"

+	"github.com/tendermint/tendermint/config"
 	"github.com/tendermint/tendermint/internal/inspect"
+	"github.com/tendermint/tendermint/libs/log"
 )

-// InspectCmd is the command for starting an inspect server.
-var InspectCmd = &cobra.Command{
-	Use:   "inspect",
-	Short: "Run an inspect server for investigating Tendermint state",
-	Long: `
+// InspectCmd constructs the command to start an inspect server.
+func MakeInspectCommand(conf *config.Config, logger log.Logger) *cobra.Command {
+	cmd := &cobra.Command{
+		Use:   "inspect",
+		Short: "Run an inspect server for investigating Tendermint state",
+		Long: `
 inspect runs a subset of Tendermint's RPC endpoints that are useful for debugging
 issues with Tendermint.
@@ -22,33 +25,27 @@ var InspectCmd = &cobra.Command{
 The inspect command can be used to query the block and state store using Tendermint
 RPC calls to debug issues of inconsistent state.
 `,
-	RunE: runInspect,
-}
+		RunE: func(cmd *cobra.Command, args []string) error {
+			ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM, syscall.SIGINT)
+			defer cancel()

-func init() {
-	InspectCmd.Flags().
-		String("rpc.laddr",
-			config.RPC.ListenAddress, "RPC listenener address. Port required")
-	InspectCmd.Flags().
-		String("db-backend",
-			config.DBBackend, "database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb")
-	InspectCmd.Flags().
-		String("db-dir", config.DBPath, "database directory")
-}
+			ins, err := inspect.NewFromConfig(logger, conf)
+			if err != nil {
+				return err
+			}

-func runInspect(cmd *cobra.Command, args []string) error {
-	ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM, syscall.SIGINT)
-	defer cancel()
-
-	ins, err := inspect.NewFromConfig(logger, config)
-	if err != nil {
-		return err
-	}
-
-	logger.Info("starting inspect server")
-	if err := ins.Run(ctx); err != nil {
-		return err
-	}
-	return nil
-}
+			logger.Info("starting inspect server")
+			if err := ins.Run(ctx); err != nil {
+				return err
+			}
+			return nil
+		},
+	}
+
+	cmd.Flags().String("rpc.laddr",
+		conf.RPC.ListenAddress, "RPC listenener address. Port required")
+	cmd.Flags().String("db-backend",
+		conf.DBBackend, "database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb")
+	cmd.Flags().String("db-dir", conf.DBPath, "database directory")
+
+	return cmd
+}
@@ -5,11 +5,13 @@ import (
 	"fmt"

 	"github.com/spf13/cobra"

 	cfg "github.com/tendermint/tendermint/config"
+	"github.com/tendermint/tendermint/libs/log"
 	"github.com/tendermint/tendermint/scripts/keymigrate"
 )

-func MakeKeyMigrateCommand() *cobra.Command {
+func MakeKeyMigrateCommand(conf *cfg.Config, logger log.Logger) *cobra.Command {
 	cmd := &cobra.Command{
 		Use:   "key-migrate",
 		Short: "Run Database key migration",
@@ -38,7 +40,7 @@ func MakeKeyMigrateCommand() *cobra.Command {

 			db, err := cfg.DefaultDBProvider(&cfg.DBContext{
 				ID:     dbctx,
-				Config: config,
+				Config: conf,
 			})

 			if err != nil {
@@ -58,7 +60,7 @@ func MakeKeyMigrateCommand() *cobra.Command {
 	}

 	// allow database info to be overridden via cli
-	addDBFlags(cmd)
+	addDBFlags(cmd, conf)

 	return cmd
 }
@@ -15,6 +15,7 @@ import (
 	"github.com/spf13/cobra"
 	dbm "github.com/tendermint/tm-db"

+	"github.com/tendermint/tendermint/config"
 	"github.com/tendermint/tendermint/libs/log"
 	tmmath "github.com/tendermint/tendermint/libs/math"
 	"github.com/tendermint/tendermint/light"
@@ -24,20 +25,69 @@ import (
 	rpcserver "github.com/tendermint/tendermint/rpc/jsonrpc/server"
 )

-// LightCmd represents the base command when called without any subcommands
-var LightCmd = &cobra.Command{
-	Use:   "light [chainID]",
-	Short: "Run a light client proxy server, verifying Tendermint rpc",
-	Long: `Run a light client proxy server, verifying Tendermint rpc.
+// LightCmd constructs the base command called when invoked without any subcommands.
+func MakeLightCommand(conf *config.Config, logger log.Logger) *cobra.Command {
+	var (
+		listenAddr         string
+		primaryAddr        string
+		witnessAddrsJoined string
+		chainID            string
+		dir                string
+		maxOpenConnections int
+
+		sequential     bool
+		trustingPeriod time.Duration
+		trustedHeight  int64
+		trustedHash    []byte
+		trustLevelStr  string
+
+		logLevel  string
+		logFormat string
+
+		primaryKey   = []byte("primary")
+		witnessesKey = []byte("witnesses")
+	)
+
+	checkForExistingProviders := func(db dbm.DB) (string, []string, error) {
+		primaryBytes, err := db.Get(primaryKey)
+		if err != nil {
+			return "", []string{""}, err
+		}
+		witnessesBytes, err := db.Get(witnessesKey)
+		if err != nil {
+			return "", []string{""}, err
+		}
+		witnessesAddrs := strings.Split(string(witnessesBytes), ",")
+		return string(primaryBytes), witnessesAddrs, nil
+	}
+
+	saveProviders := func(db dbm.DB, primaryAddr, witnessesAddrs string) error {
+		err := db.Set(primaryKey, []byte(primaryAddr))
+		if err != nil {
+			return fmt.Errorf("failed to save primary provider: %w", err)
+		}
+		err = db.Set(witnessesKey, []byte(witnessesAddrs))
+		if err != nil {
+			return fmt.Errorf("failed to save witness providers: %w", err)
+		}
+		return nil
+	}
+
+	cmd := &cobra.Command{
+		Use:   "light [chainID]",
+		Short: "Run a light client proxy server, verifying Tendermint rpc",
+		Long: `Run a light client proxy server, verifying Tendermint rpc.

 All calls that can be tracked back to a block header by a proof
 will be verified before passing them back to the caller. Other than
 that, it will present the same interface as a full Tendermint node.

 Furthermore to the chainID, a fresh instance of a light client will
-need a primary RPC address, a trusted hash and height and witness RPC addresses
-(if not using sequential verification). To restart the node, thereafter
-only the chainID is required.
+need a primary RPC address and a trusted hash and height. It is also highly
+recommended to provide additional witness RPC addresses, especially if
+not using sequential verification.
+
+To restart the node, thereafter only the chainID is required.

 When /abci_query is called, the Merkle key path format is:
@@ -46,185 +96,138 @@ When /abci_query is called, the Merkle key path format is:
 Please verify with your application that this Merkle key format is used (true
 for applications built w/ Cosmos SDK).
 `,
-	RunE:    runProxy,
-	Args:    cobra.ExactArgs(1),
-	Example: `light cosmoshub-3 -p http://52.57.29.196:26657 -w http://public-seed-node.cosmoshub.certus.one:26657
+		RunE: func(cmd *cobra.Command, args []string) error {
+			chainID = args[0]
+			logger.Info("Creating client...", "chainID", chainID)
+
+			var witnessesAddrs []string
+			if witnessAddrsJoined != "" {
+				witnessesAddrs = strings.Split(witnessAddrsJoined, ",")
+			}
+
+			lightDB, err := dbm.NewGoLevelDB("light-client-db", dir)
+			if err != nil {
+				return fmt.Errorf("can't create a db: %w", err)
+			}
+			// create a prefixed db on the chainID
+			db := dbm.NewPrefixDB(lightDB, []byte(chainID))
+
+			if primaryAddr == "" { // check to see if we can start from an existing state
+				var err error
+				primaryAddr, witnessesAddrs, err = checkForExistingProviders(db)
+				if err != nil {
+					return fmt.Errorf("failed to retrieve primary or witness from db: %w", err)
+				}
+				if primaryAddr == "" {
+					return errors.New("no primary address was provided nor found. Please provide a primary (using -p)." +
+						" Run the command: tendermint light --help for more information")
+				}
+			} else {
+				err := saveProviders(db, primaryAddr, witnessAddrsJoined)
+				if err != nil {
+					logger.Error("Unable to save primary and or witness addresses", "err", err)
+				}
+			}
+
+			if len(witnessesAddrs) < 1 && !sequential {
+				logger.Info("In skipping verification mode it is highly recommended to provide at least one witness")
+			}
+
+			trustLevel, err := tmmath.ParseFraction(trustLevelStr)
+			if err != nil {
+				return fmt.Errorf("can't parse trust level: %w", err)
+			}
+
+			options := []light.Option{light.Logger(logger)}
+
+			vo := light.SkippingVerification(trustLevel)
+			if sequential {
+				vo = light.SequentialVerification()
+			}
+			options = append(options, vo)
+
+			// Initiate the light client. If the trusted store already has blocks in it, this
+			// will be used else we use the trusted options.
+			c, err := light.NewHTTPClient(
+				context.Background(),
+				chainID,
+				light.TrustOptions{
+					Period: trustingPeriod,
+					Height: trustedHeight,
+					Hash:   trustedHash,
+				},
+				primaryAddr,
+				witnessesAddrs,
+				dbs.New(db),
+				options...,
+			)
+			if err != nil {
+				return err
+			}
+
+			cfg := rpcserver.DefaultConfig()
+			cfg.MaxBodyBytes = conf.RPC.MaxBodyBytes
+			cfg.MaxHeaderBytes = conf.RPC.MaxHeaderBytes
+			cfg.MaxOpenConnections = maxOpenConnections
+			// If necessary adjust global WriteTimeout to ensure it's greater than
+			// TimeoutBroadcastTxCommit.
+			// See https://github.com/tendermint/tendermint/issues/3435
+			if cfg.WriteTimeout <= conf.RPC.TimeoutBroadcastTxCommit {
+				cfg.WriteTimeout = conf.RPC.TimeoutBroadcastTxCommit + 1*time.Second
+			}
+
+			p, err := lproxy.NewProxy(c, listenAddr, primaryAddr, cfg, logger, lrpc.KeyPathFn(lrpc.DefaultMerkleKeyPathFn()))
+			if err != nil {
+				return err
+			}
+
+			ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
+			defer cancel()
+
+			go func() {
+				<-ctx.Done()
+				p.Listener.Close()
+			}()
+
+			logger.Info("Starting proxy...", "laddr", listenAddr)
+			if err := p.ListenAndServe(ctx); err != http.ErrServerClosed {
+				// Error starting or closing listener:
+				logger.Error("proxy ListenAndServe", "err", err)
+			}
+
+			return nil
+		},
+		Args: cobra.ExactArgs(1),
+		Example: `light cosmoshub-3 -p http://52.57.29.196:26657 -w http://public-seed-node.cosmoshub.certus.one:26657
 	--height 962118 --hash 28B97BE9F6DE51AC69F70E0B7BFD7E5C9CD1A595B7DC31AFF27C50D4948020CD`,
-}
+	}

-var (
-	listenAddr         string
-	primaryAddr        string
-	witnessAddrsJoined string
-	chainID            string
-	dir                string
-	maxOpenConnections int
-
-	sequential     bool
-	trustingPeriod time.Duration
-	trustedHeight  int64
-	trustedHash    []byte
-	trustLevelStr  string
-
-	logLevel  string
-	logFormat string
-
-	primaryKey   = []byte("primary")
-	witnessesKey = []byte("witnesses")
-)
-
-func init() {
-	LightCmd.Flags().StringVar(&listenAddr, "laddr", "tcp://localhost:8888",
+	cmd.Flags().StringVar(&listenAddr, "laddr", "tcp://localhost:8888",
 		"serve the proxy on the given address")
-	LightCmd.Flags().StringVarP(&primaryAddr, "primary", "p", "",
+	cmd.Flags().StringVarP(&primaryAddr, "primary", "p", "",
 		"connect to a Tendermint node at this address")
-	LightCmd.Flags().StringVarP(&witnessAddrsJoined, "witnesses", "w", "",
+	cmd.Flags().StringVarP(&witnessAddrsJoined, "witnesses", "w", "",
 		"tendermint nodes to cross-check the primary node, comma-separated")
-	LightCmd.Flags().StringVarP(&dir, "dir", "d", os.ExpandEnv(filepath.Join("$HOME", ".tendermint-light")),
+	cmd.Flags().StringVarP(&dir, "dir", "d", os.ExpandEnv(filepath.Join("$HOME", ".tendermint-light")),
 		"specify the directory")
-	LightCmd.Flags().IntVar(
+	cmd.Flags().IntVar(
 		&maxOpenConnections,
 		"max-open-connections",
 		900,
 		"maximum number of simultaneous connections (including WebSocket).")
-	LightCmd.Flags().DurationVar(&trustingPeriod, "trusting-period", 168*time.Hour,
+	cmd.Flags().DurationVar(&trustingPeriod, "trusting-period", 168*time.Hour,
 		"trusting period that headers can be verified within. Should be significantly less than the unbonding period")
-	LightCmd.Flags().Int64Var(&trustedHeight, "height", 1, "Trusted header's height")
-	LightCmd.Flags().BytesHexVar(&trustedHash, "hash", []byte{}, "Trusted header's hash")
-	LightCmd.Flags().StringVar(&logLevel, "log-level", log.LogLevelInfo, "The logging level (debug|info|warn|error|fatal)")
-	LightCmd.Flags().StringVar(&logFormat, "log-format", log.LogFormatPlain, "The logging format (text|json)")
-	LightCmd.Flags().StringVar(&trustLevelStr, "trust-level", "1/3",
+	cmd.Flags().Int64Var(&trustedHeight, "height", 1, "Trusted header's height")
+	cmd.Flags().BytesHexVar(&trustedHash, "hash", []byte{}, "Trusted header's hash")
+	cmd.Flags().StringVar(&logLevel, "log-level", log.LogLevelInfo, "The logging level (debug|info|warn|error|fatal)")
+	cmd.Flags().StringVar(&logFormat, "log-format", log.LogFormatPlain, "The logging format (text|json)")
+	cmd.Flags().StringVar(&trustLevelStr, "trust-level", "1/3",
 		"trust level. Must be between 1/3 and 3/3",
 	)
-	LightCmd.Flags().BoolVar(&sequential, "sequential", false,
+	cmd.Flags().BoolVar(&sequential, "sequential", false,
 		"sequential verification. Verify all headers sequentially as opposed to using skipping verification",
 	)
-}

-func runProxy(cmd *cobra.Command, args []string) error {
-	logger, err := log.NewDefaultLogger(logFormat, logLevel)
-	if err != nil {
-		return err
-	}
-
-	chainID = args[0]
-	logger.Info("Creating client...", "chainID", chainID)
-
-	witnessesAddrs := []string{}
-	if witnessAddrsJoined != "" {
-		witnessesAddrs = strings.Split(witnessAddrsJoined, ",")
-	}
-
-	lightDB, err := dbm.NewGoLevelDB("light-client-db", dir)
-	if err != nil {
-		return fmt.Errorf("can't create a db: %w", err)
-	}
-	// create a prefixed db on the chainID
-	db := dbm.NewPrefixDB(lightDB, []byte(chainID))
-
-	if primaryAddr == "" { // check to see if we can start from an existing state
-		var err error
-		primaryAddr, witnessesAddrs, err = checkForExistingProviders(db)
-		if err != nil {
-			return fmt.Errorf("failed to retrieve primary or witness from db: %w", err)
-		}
-		if primaryAddr == "" {
-			return errors.New("no primary address was provided nor found. Please provide a primary (using -p)." +
-				" Run the command: tendermint light --help for more information")
-		}
-	} else {
-		err := saveProviders(db, primaryAddr, witnessAddrsJoined)
-		if err != nil {
-			logger.Error("Unable to save primary and or witness addresses", "err", err)
-		}
-	}
-
-	trustLevel, err := tmmath.ParseFraction(trustLevelStr)
-	if err != nil {
-		return fmt.Errorf("can't parse trust level: %w", err)
-	}
-
-	options := []light.Option{light.Logger(logger)}
-
-	if sequential {
-		options = append(options, light.SequentialVerification())
-	} else {
-		options = append(options, light.SkippingVerification(trustLevel))
-	}
-
-	// Initiate the light client. If the trusted store already has blocks in it, this
-	// will be used else we use the trusted options.
-	c, err := light.NewHTTPClient(
-		context.Background(),
-		chainID,
-		light.TrustOptions{
-			Period: trustingPeriod,
-			Height: trustedHeight,
-			Hash:   trustedHash,
-		},
-		primaryAddr,
-		witnessesAddrs,
-		dbs.New(db),
-		options...,
-	)
-	if err != nil {
-		return err
-	}
-
-	cfg := rpcserver.DefaultConfig()
-	cfg.MaxBodyBytes = config.RPC.MaxBodyBytes
-	cfg.MaxHeaderBytes = config.RPC.MaxHeaderBytes
-	cfg.MaxOpenConnections = maxOpenConnections
-	// If necessary adjust global WriteTimeout to ensure it's greater than
-	// TimeoutBroadcastTxCommit.
-	// See https://github.com/tendermint/tendermint/issues/3435
-	if cfg.WriteTimeout <= config.RPC.TimeoutBroadcastTxCommit {
-		cfg.WriteTimeout = config.RPC.TimeoutBroadcastTxCommit + 1*time.Second
-	}
-
-	p, err := lproxy.NewProxy(c, listenAddr, primaryAddr, cfg, logger, lrpc.KeyPathFn(lrpc.DefaultMerkleKeyPathFn()))
-	if err != nil {
-		return err
-	}
-
-	ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
-	defer cancel()
-
-	go func() {
-		<-ctx.Done()
-		p.Listener.Close()
-	}()
-
-	logger.Info("Starting proxy...", "laddr", listenAddr)
-	if err := p.ListenAndServe(ctx); err != http.ErrServerClosed {
-		// Error starting or closing listener:
-		logger.Error("proxy ListenAndServe", "err", err)
-	}
-
-	return nil
-}
-
-func checkForExistingProviders(db dbm.DB) (string, []string, error) {
-	primaryBytes, err := db.Get(primaryKey)
-	if err != nil {
-		return "", []string{""}, err
-	}
-	witnessesBytes, err := db.Get(witnessesKey)
-	if err != nil {
-		return "", []string{""}, err
-	}
-	witnessesAddrs := strings.Split(string(witnessesBytes), ",")
-	return string(primaryBytes), witnessesAddrs, nil
-}
-
-func saveProviders(db dbm.DB, primaryAddr, witnessesAddrs string) error {
-	err := db.Set(primaryKey, []byte(primaryAddr))
-	if err != nil {
-		return fmt.Errorf("failed to save primary provider: %w", err)
-	}
-	err = db.Set(witnessesKey, []byte(witnessesAddrs))
-	if err != nil {
-		return fmt.Errorf("failed to save witness providers: %w", err)
-	}
-	return nil
-}
+	return cmd
+}
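Stripped of the cobra plumbing, the heart of the command is the light.NewHTTPClient call. A sketch of that construction under assumed placeholder values; the addresses, height, and hash mirror the command's own Example string, and an in-memory store stands in for the GoLevelDB store the command actually opens:

package main

import (
	"context"
	"time"

	dbm "github.com/tendermint/tm-db"

	tmmath "github.com/tendermint/tendermint/libs/math"
	"github.com/tendermint/tendermint/light"
	dbs "github.com/tendermint/tendermint/light/store/db"
)

func newLightClient(ctx context.Context, trustedHash []byte) (*light.Client, error) {
	// The command uses GoLevelDB prefixed by chain ID; a MemDB keeps
	// this sketch self-contained.
	db := dbm.NewMemDB()

	trustLevel, err := tmmath.ParseFraction("1/3") // default --trust-level
	if err != nil {
		return nil, err
	}

	return light.NewHTTPClient(
		ctx,
		"cosmoshub-3", // chain ID, taken from the positional argument
		light.TrustOptions{
			Period: 168 * time.Hour, // default --trusting-period
			Height: 962118,          // --height
			Hash:   trustedHash,     // --hash, a trusted header hash
		},
		"http://52.57.29.196:26657", // primary (-p)
		[]string{"http://public-seed-node.cosmoshub.certus.one:26657"}, // witnesses (-w)
		dbs.New(db),
		light.SkippingVerification(trustLevel),
	)
}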
@@ -17,6 +17,7 @@ import (
 	"github.com/tendermint/tendermint/internal/state/indexer/sink/kv"
 	"github.com/tendermint/tendermint/internal/state/indexer/sink/psql"
 	"github.com/tendermint/tendermint/internal/store"
+	"github.com/tendermint/tendermint/libs/log"
 	"github.com/tendermint/tendermint/libs/os"
 	"github.com/tendermint/tendermint/rpc/coretypes"
 	"github.com/tendermint/tendermint/types"
@@ -26,59 +27,68 @@ const (
 	reindexFailed = "event re-index failed: "
 )

-// ReIndexEventCmd allows re-index the event by given block height interval
-var ReIndexEventCmd = &cobra.Command{
-	Use:   "reindex-event",
-	Short: "reindex events to the event store backends",
-	Long: `
+// MakeReindexEventCommand constructs a command to re-index events in a block height interval.
+func MakeReindexEventCommand(conf *tmcfg.Config, logger log.Logger) *cobra.Command {
+	var (
+		startHeight int64
+		endHeight   int64
+	)
+
+	cmd := &cobra.Command{
+		Use:   "reindex-event",
+		Short: "reindex events to the event store backends",
+		Long: `
 reindex-event is an offline tooling to re-index block and tx events to the eventsinks,
-you can run this command when the event store backend dropped/disconnected or you want to
-replace the backend. The default start-height is 0, meaning the tooling will start
-reindex from the base block height(inclusive); and the default end-height is 0, meaning
+you can run this command when the event store backend dropped/disconnected or you want to
+replace the backend. The default start-height is 0, meaning the tooling will start
+reindex from the base block height(inclusive); and the default end-height is 0, meaning
 the tooling will reindex until the latest block height(inclusive). User can omit
 either or both arguments.
 `,
-	Example: `
+		Example: `
 	tendermint reindex-event
 	tendermint reindex-event --start-height 2
 	tendermint reindex-event --end-height 10
 	tendermint reindex-event --start-height 2 --end-height 10
 	`,
-	Run: func(cmd *cobra.Command, args []string) {
-		bs, ss, err := loadStateAndBlockStore(config)
-		if err != nil {
-			fmt.Println(reindexFailed, err)
-			return
-		}
+		RunE: func(cmd *cobra.Command, args []string) error {
+			bs, ss, err := loadStateAndBlockStore(conf)
+			if err != nil {
+				return fmt.Errorf("%s: %w", reindexFailed, err)
+			}

-		if err := checkValidHeight(bs); err != nil {
-			fmt.Println(reindexFailed, err)
-			return
-		}
+			cvhArgs := checkValidHeightArgs{
+				startHeight: startHeight,
+				endHeight:   endHeight,
+			}
+			if err := checkValidHeight(bs, cvhArgs); err != nil {
+				return fmt.Errorf("%s: %w", reindexFailed, err)
+			}

-		es, err := loadEventSinks(config)
-		if err != nil {
-			fmt.Println(reindexFailed, err)
-			return
-		}
+			es, err := loadEventSinks(conf)
+			if err != nil {
+				return fmt.Errorf("%s: %w", reindexFailed, err)
+			}

-		if err = eventReIndex(cmd, es, bs, ss); err != nil {
-			fmt.Println(reindexFailed, err)
-			return
-		}
+			riArgs := eventReIndexArgs{
+				startHeight: startHeight,
+				endHeight:   endHeight,
+				sinks:       es,
+				blockStore:  bs,
+				stateStore:  ss,
+			}
+			if err := eventReIndex(cmd, riArgs); err != nil {
+				return fmt.Errorf("%s: %w", reindexFailed, err)
+			}

-		fmt.Println("event re-index finished")
-	},
-}
-
-var (
-	startHeight int64
-	endHeight   int64
-)
-
-func init() {
-	ReIndexEventCmd.Flags().Int64Var(&startHeight, "start-height", 0, "the block height would like to start for re-index")
-	ReIndexEventCmd.Flags().Int64Var(&endHeight, "end-height", 0, "the block height would like to finish for re-index")
+			logger.Info("event re-index finished")
+			return nil
+		},
+	}
+
+	cmd.Flags().Int64Var(&startHeight, "start-height", 0, "the block height would like to start for re-index")
+	cmd.Flags().Int64Var(&endHeight, "end-height", 0, "the block height would like to finish for re-index")
+	return cmd
 }
@@ -109,7 +119,7 @@ func loadEventSinks(cfg *tmcfg.Config) ([]indexer.EventSink, error) {
 		if conn == "" {
 			return nil, errors.New("the psql connection settings cannot be empty")
 		}
-		es, err := psql.NewEventSink(conn, chainID)
+		es, err := psql.NewEventSink(conn, cfg.ChainID())
 		if err != nil {
 			return nil, err
 		}
@@ -159,52 +169,58 @@ func loadStateAndBlockStore(cfg *tmcfg.Config) (*store.BlockStore, state.Store,
 	return blockStore, stateStore, nil
 }

-func eventReIndex(cmd *cobra.Command, es []indexer.EventSink, bs state.BlockStore, ss state.Store) error {
+type eventReIndexArgs struct {
+	startHeight int64
+	endHeight   int64
+	sinks       []indexer.EventSink
+	blockStore  state.BlockStore
+	stateStore  state.Store
+}
+
+func eventReIndex(cmd *cobra.Command, args eventReIndexArgs) error {
 	var bar progressbar.Bar
-	bar.NewOption(startHeight-1, endHeight)
+	bar.NewOption(args.startHeight-1, args.endHeight)

 	fmt.Println("start re-indexing events:")
 	defer bar.Finish()
-	for i := startHeight; i <= endHeight; i++ {
+	for i := args.startHeight; i <= args.endHeight; i++ {
 		select {
 		case <-cmd.Context().Done():
 			return fmt.Errorf("event re-index terminated at height %d: %w", i, cmd.Context().Err())
 		default:
-			b := bs.LoadBlock(i)
+			b := args.blockStore.LoadBlock(i)
 			if b == nil {
 				return fmt.Errorf("not able to load block at height %d from the blockstore", i)
 			}

-			r, err := ss.LoadABCIResponses(i)
+			r, err := args.stateStore.LoadABCIResponses(i)
 			if err != nil {
 				return fmt.Errorf("not able to load ABCI Response at height %d from the statestore", i)
 			}

 			e := types.EventDataNewBlockHeader{
-				Header:           b.Header,
-				NumTxs:           int64(len(b.Txs)),
-				ResultBeginBlock: *r.BeginBlock,
-				ResultEndBlock:   *r.EndBlock,
+				Header:              b.Header,
+				NumTxs:              int64(len(b.Txs)),
+				ResultFinalizeBlock: *r.FinalizeBlock,
 			}

 			var batch *indexer.Batch
 			if e.NumTxs > 0 {
 				batch = indexer.NewBatch(e.NumTxs)

-				for i, tx := range b.Data.Txs {
+				for i := range b.Data.Txs {
 					tr := abcitypes.TxResult{
 						Height: b.Height,
 						Index:  uint32(i),
-						Tx:     tx,
-						Result: *(r.DeliverTxs[i]),
+						Tx:     b.Data.Txs[i],
+						Result: *(r.FinalizeBlock.Txs[i]),
 					}

 					_ = batch.Add(&tr)
 				}
 			}

-			for _, sink := range es {
+			for _, sink := range args.sinks {
 				if err := sink.IndexBlockEvents(e); err != nil {
 					return fmt.Errorf("block event re-index at height %d failed: %w", i, err)
 				}
@@ -223,40 +239,45 @@ func eventReIndex(cmd *cobra.Command, es []indexer.EventSink, bs state.BlockStor
 	return nil
 }

-func checkValidHeight(bs state.BlockStore) error {
+type checkValidHeightArgs struct {
+	startHeight int64
+	endHeight   int64
+}
+
+func checkValidHeight(bs state.BlockStore, args checkValidHeightArgs) error {
 	base := bs.Base()

-	if startHeight == 0 {
-		startHeight = base
+	if args.startHeight == 0 {
+		args.startHeight = base
 		fmt.Printf("set the start block height to the base height of the blockstore %d \n", base)
 	}

-	if startHeight < base {
+	if args.startHeight < base {
 		return fmt.Errorf("%s (requested start height: %d, base height: %d)",
-			coretypes.ErrHeightNotAvailable, startHeight, base)
+			coretypes.ErrHeightNotAvailable, args.startHeight, base)
 	}

 	height := bs.Height()

-	if startHeight > height {
+	if args.startHeight > height {
 		return fmt.Errorf(
-			"%s (requested start height: %d, store height: %d)", coretypes.ErrHeightNotAvailable, startHeight, height)
+			"%s (requested start height: %d, store height: %d)", coretypes.ErrHeightNotAvailable, args.startHeight, height)
 	}

-	if endHeight == 0 || endHeight > height {
-		endHeight = height
+	if args.endHeight == 0 || args.endHeight > height {
+		args.endHeight = height
 		fmt.Printf("set the end block height to the latest height of the blockstore %d \n", height)
 	}

-	if endHeight < base {
+	if args.endHeight < base {
 		return fmt.Errorf(
-			"%s (requested end height: %d, base height: %d)", coretypes.ErrHeightNotAvailable, endHeight, base)
+			"%s (requested end height: %d, base height: %d)", coretypes.ErrHeightNotAvailable, args.endHeight, base)
 	}

-	if endHeight < startHeight {
+	if args.endHeight < args.startHeight {
 		return fmt.Errorf(
 			"%s (requested the end height: %d is less than the start height: %d)",
-			coretypes.ErrInvalidRequest, startHeight, endHeight)
+			coretypes.ErrInvalidRequest, args.startHeight, args.endHeight)
 	}

 	return nil
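Because reindex-event is now a self-contained constructor, it can be driven programmatically as well as from the CLI. A small sketch mirroring the --start-height 2 --end-height 10 example from the help text; the surrounding main function is assumed:

package main

import (
	"context"
	"os"

	"github.com/tendermint/tendermint/cmd/tendermint/commands"
	"github.com/tendermint/tendermint/config"
	"github.com/tendermint/tendermint/libs/log"
)

func main() {
	conf := config.DefaultConfig()
	logger := log.NewNopLogger()

	// Re-index events for heights [2, 10].
	cmd := commands.MakeReindexEventCommand(conf, logger)
	cmd.SetArgs([]string{"--start-height", "2", "--end-height", "10"})

	if err := cmd.ExecuteContext(context.Background()); err != nil {
		os.Exit(1)
	}
}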
@@ -9,13 +9,15 @@ import (
 	"github.com/stretchr/testify/mock"
 	"github.com/stretchr/testify/require"

-	dbm "github.com/tendermint/tm-db"
-
 	abcitypes "github.com/tendermint/tendermint/abci/types"
-	tmcfg "github.com/tendermint/tendermint/config"
+	"github.com/tendermint/tendermint/config"
 	"github.com/tendermint/tendermint/internal/state/indexer"
 	"github.com/tendermint/tendermint/internal/state/mocks"
+	"github.com/tendermint/tendermint/libs/log"
 	prototmstate "github.com/tendermint/tendermint/proto/tendermint/state"
 	"github.com/tendermint/tendermint/types"
+	dbm "github.com/tendermint/tm-db"

 	_ "github.com/lib/pq" // for the psql sink
 )
@@ -25,9 +27,11 @@ const (
 	base int64 = 2
 )

-func setupReIndexEventCmd(ctx context.Context) *cobra.Command {
+func setupReIndexEventCmd(ctx context.Context, conf *config.Config, logger log.Logger) *cobra.Command {
+	cmd := MakeReindexEventCommand(conf, logger)
+
 	reIndexEventCmd := &cobra.Command{
-		Use: ReIndexEventCmd.Use,
+		Use: cmd.Use,
 		Run: func(cmd *cobra.Command, args []string) {},
 	}
@@ -68,10 +72,7 @@ func TestReIndexEventCheckHeight(t *testing.T) {
 	}

 	for _, tc := range testCases {
-		startHeight = tc.startHeight
-		endHeight = tc.endHeight
-
-		err := checkValidHeight(mockBlockStore)
+		err := checkValidHeight(mockBlockStore, checkValidHeightArgs{startHeight: tc.startHeight, endHeight: tc.endHeight})
 		if tc.validHeight {
 			require.NoError(t, err)
 		} else {
@@ -97,7 +98,7 @@ func TestLoadEventSink(t *testing.T) {
 	}

 	for _, tc := range testCases {
-		cfg := tmcfg.TestConfig()
+		cfg := config.TestConfig()
 		cfg.TxIndex.Indexer = tc.sinks
 		cfg.TxIndex.PsqlConn = tc.connURL
 		_, err := loadEventSinks(cfg)
@@ -110,7 +111,7 @@ func TestLoadEventSink(t *testing.T) {
 }

 func TestLoadBlockStore(t *testing.T) {
-	testCfg, err := tmcfg.ResetTestRoot(t.Name())
+	testCfg, err := config.ResetTestRoot(t.TempDir(), t.Name())
 	require.NoError(t, err)
 	testCfg.DBBackend = "goleveldb"
 	_, _, err = loadStateAndBlockStore(testCfg)
@@ -154,9 +155,9 @@ func TestReIndexEvent(t *testing.T) {
 	dtx := abcitypes.ResponseDeliverTx{}
 	abciResp := &prototmstate.ABCIResponses{
-		DeliverTxs: []*abcitypes.ResponseDeliverTx{&dtx},
-		EndBlock:   &abcitypes.ResponseEndBlock{},
-		BeginBlock: &abcitypes.ResponseBeginBlock{},
+		FinalizeBlock: &abcitypes.ResponseFinalizeBlock{
+			Txs: []*abcitypes.ResponseDeliverTx{&dtx},
+		},
 	}

 	mockStateStore.
@@ -179,12 +180,20 @@ func TestReIndexEvent(t *testing.T) {
 	ctx, cancel := context.WithCancel(context.Background())
 	defer cancel()
+	logger := log.NewNopLogger()
+	conf := config.DefaultConfig()

 	for _, tc := range testCases {
-		startHeight = tc.startHeight
-		endHeight = tc.endHeight
-
-		err := eventReIndex(setupReIndexEventCmd(ctx), []indexer.EventSink{mockEventSink}, mockBlockStore, mockStateStore)
+		err := eventReIndex(
+			setupReIndexEventCmd(ctx, conf, logger),
+			eventReIndexArgs{
+				sinks:       []indexer.EventSink{mockEventSink},
+				blockStore:  mockBlockStore,
+				stateStore:  mockStateStore,
+				startHeight: tc.startHeight,
+				endHeight:   tc.endHeight,
+			})
 		if tc.reIndexErr {
 			require.Error(t, err)
 		} else {
@@ -2,24 +2,30 @@ package commands
 import (
 	"github.com/spf13/cobra"

+	"github.com/tendermint/tendermint/config"
 	"github.com/tendermint/tendermint/internal/consensus"
+	"github.com/tendermint/tendermint/libs/log"
 )

-// ReplayCmd allows replaying of messages from the WAL.
-var ReplayCmd = &cobra.Command{
-	Use:   "replay",
-	Short: "Replay messages from WAL",
-	RunE: func(cmd *cobra.Command, args []string) error {
-		return consensus.RunReplayFile(cmd.Context(), logger, config.BaseConfig, config.Consensus, false)
-	},
-}
+// MakeReplayCommand constructs a command to replay messages from the WAL into consensus.
+func MakeReplayCommand(conf *config.Config, logger log.Logger) *cobra.Command {
+	return &cobra.Command{
+		Use:   "replay",
+		Short: "Replay messages from WAL",
+		RunE: func(cmd *cobra.Command, args []string) error {
+			return consensus.RunReplayFile(cmd.Context(), logger, conf.BaseConfig, conf.Consensus, false)
+		},
+	}
+}

-// ReplayConsoleCmd allows replaying of messages from the WAL in a
-// console.
-var ReplayConsoleCmd = &cobra.Command{
-	Use:   "replay-console",
-	Short: "Replay messages from WAL in a console",
-	RunE: func(cmd *cobra.Command, args []string) error {
-		return consensus.RunReplayFile(cmd.Context(), logger, config.BaseConfig, config.Consensus, true)
-	},
-}
+// MakeReplayConsoleCommand constructs a command to replay WAL messages to stdout.
+func MakeReplayConsoleCommand(conf *config.Config, logger log.Logger) *cobra.Command {
+	return &cobra.Command{
+		Use:   "replay-console",
+		Short: "Replay messages from WAL in a console",
+		RunE: func(cmd *cobra.Command, args []string) error {
+			return consensus.RunReplayFile(cmd.Context(), logger, conf.BaseConfig, conf.Consensus, true)
+		},
+	}
+}
@@ -5,51 +5,58 @@ import (
 	"github.com/spf13/cobra"

+	"github.com/tendermint/tendermint/config"
 	"github.com/tendermint/tendermint/libs/log"
 	tmos "github.com/tendermint/tendermint/libs/os"
 	"github.com/tendermint/tendermint/privval"
 	"github.com/tendermint/tendermint/types"
 )

-// ResetAllCmd removes the database of this Tendermint core
-// instance.
-var ResetAllCmd = &cobra.Command{
-	Use:   "unsafe-reset-all",
-	Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state",
-	RunE:  resetAll,
-}
-
-var keepAddrBook bool
-
-func init() {
-	ResetAllCmd.Flags().BoolVar(&keepAddrBook, "keep-addr-book", false, "keep the address book intact")
-	ResetPrivValidatorCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
-		"Key type to generate privval file with. Options: ed25519, secp256k1")
-}
-
-// ResetPrivValidatorCmd resets the private validator files.
-var ResetPrivValidatorCmd = &cobra.Command{
-	Use:   "unsafe-reset-priv-validator",
-	Short: "(unsafe) Reset this node's validator to genesis state",
-	RunE:  resetPrivValidator,
-}
+// MakeResetAllCommand constructs a command that removes the database of
+// the specified Tendermint core instance.
+func MakeResetAllCommand(conf *config.Config, logger log.Logger) *cobra.Command {
+	var keyType string
+
+	cmd := &cobra.Command{
+		Use:   "unsafe-reset-all",
+		Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state",
+		RunE: func(cmd *cobra.Command, args []string) error {
+			return resetAll(conf.DBDir(), conf.PrivValidator.KeyFile(),
+				conf.PrivValidator.StateFile(), logger, keyType)
+		},
+	}
+	cmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
+		"Key type to generate privval file with. Options: ed25519, secp256k1")
+
+	return cmd
+}
+
+func MakeResetPrivateValidatorCommand(conf *config.Config, logger log.Logger) *cobra.Command {
+	var keyType string
+
+	cmd := &cobra.Command{
+		Use:   "unsafe-reset-priv-validator",
+		Short: "(unsafe) Reset this node's validator to genesis state",
+		RunE: func(cmd *cobra.Command, args []string) error {
+			return resetFilePV(conf.PrivValidator.KeyFile(), conf.PrivValidator.StateFile(), logger, keyType)
+		},
+	}
+
+	cmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
+		"Key type to generate privval file with. Options: ed25519, secp256k1")
+	return cmd
+}

-// XXX: this is totally unsafe.
-// it's only suitable for testnets.
-func resetAll(cmd *cobra.Command, args []string) error {
-	return ResetAll(config.DBDir(), config.PrivValidator.KeyFile(),
-		config.PrivValidator.StateFile(), logger)
-}
-
-// XXX: this is totally unsafe.
-// it's only suitable for testnets.
-func resetPrivValidator(cmd *cobra.Command, args []string) error {
-	return resetFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile(), logger)
-}
-
-// ResetAll removes address book files plus all data, and resets the privValdiator data.
-// Exported so other CLI tools can use it.
-func ResetAll(dbDir, privValKeyFile, privValStateFile string, logger log.Logger) error {
+// resetAll removes address book files plus all data, and resets the privValdiator data.
+func resetAll(dbDir, privValKeyFile, privValStateFile string, logger log.Logger, keyType string) error {
 	if err := os.RemoveAll(dbDir); err == nil {
 		logger.Info("Removed all blockchain history", "dir", dbDir)
 	} else {
@@ -59,10 +66,10 @@ func ResetAll(dbDir, privValKeyFile, privValStateFile string, logger log.Logger)
 	if err := tmos.EnsureDir(dbDir, 0700); err != nil {
 		logger.Error("unable to recreate dbDir", "err", err)
 	}
-	return resetFilePV(privValKeyFile, privValStateFile, logger)
+	return resetFilePV(privValKeyFile, privValStateFile, logger, keyType)
 }

-func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger) error {
+func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger, keyType string) error {
 	if _, err := os.Stat(privValKeyFile); err == nil {
 		pv, err := privval.LoadFilePVEmptyState(privValKeyFile, privValStateFile)
 		if err != nil {
@@ -5,14 +5,15 @@ import (
 	"fmt"

 	"github.com/spf13/cobra"

-	cfg "github.com/tendermint/tendermint/config"
+	"github.com/tendermint/tendermint/config"
 	"github.com/tendermint/tendermint/internal/state"
 )

-var RollbackStateCmd = &cobra.Command{
-	Use:   "rollback",
-	Short: "rollback tendermint state by one height",
-	Long: `
+func MakeRollbackStateCommand(conf *config.Config) *cobra.Command {
+	return &cobra.Command{
+		Use:   "rollback",
+		Short: "rollback tendermint state by one height",
+		Long: `
 A state rollback is performed to recover from an incorrect application state transition,
 when Tendermint has persisted an incorrect app hash and is thus unable to make
 progress. Rollback overwrites a state at height n with the state at height n - 1.
@@ -20,21 +21,23 @@ The application should also roll back to height n - 1. No blocks are removed, so
 restarting Tendermint the transactions in block n will be re-executed against the
 application.
 `,
-	RunE: func(cmd *cobra.Command, args []string) error {
-		height, hash, err := RollbackState(config)
-		if err != nil {
-			return fmt.Errorf("failed to rollback state: %w", err)
-		}
+		RunE: func(cmd *cobra.Command, args []string) error {
+			height, hash, err := RollbackState(conf)
+			if err != nil {
+				return fmt.Errorf("failed to rollback state: %w", err)
+			}

-		fmt.Printf("Rolled back state to height %d and hash %X", height, hash)
-		return nil
-	},
-}
+			fmt.Printf("Rolled back state to height %d and hash %X", height, hash)
+			return nil
+		},
+	}
+}

 // RollbackState takes the state at the current height n and overwrites it with the state
 // at height n - 1. Note state here refers to tendermint state not application state.
 // Returns the latest state height and app hash alongside an error if there was one.
-func RollbackState(config *cfg.Config) (int64, []byte, error) {
+func RollbackState(config *config.Config) (int64, []byte, error) {
 	// use the parsed config to load the block and state store
 	blockStore, stateStore, err := loadStateAndBlockStore(config)
 	if err != nil {
@@ -19,10 +19,12 @@ func TestRollbackIntegration(t *testing.T) {
 	dir := t.TempDir()
 	ctx, cancel := context.WithCancel(context.Background())
 	defer cancel()
-	cfg, err := rpctest.CreateConfig(t.Name())
+	cfg, err := rpctest.CreateConfig(t, t.Name())
 	require.NoError(t, err)
+	cfg.BaseConfig.DBBackend = "goleveldb"

 	app, err := e2e.NewApplication(e2e.DefaultConfig(dir))
 	require.NoError(t, err)

 	t.Run("First run", func(t *testing.T) {
 		ctx, cancel := context.WithCancel(ctx)
@@ -30,27 +32,29 @@ func TestRollbackIntegration(t *testing.T) {
 		require.NoError(t, err)
 		node, _, err := rpctest.StartTendermint(ctx, cfg, app, rpctest.SuppressStdout)
 		require.NoError(t, err)
+		require.True(t, node.IsRunning())

 		time.Sleep(3 * time.Second)
 		cancel()
 		node.Wait()
+
+		require.False(t, node.IsRunning())
 	})

 	t.Run("Rollback", func(t *testing.T) {
 		time.Sleep(time.Second)
 		require.NoError(t, app.Rollback())
 		height, _, err = commands.RollbackState(cfg)
-		require.NoError(t, err)
-
+		require.NoError(t, err, "%d", height)
 	})

 	t.Run("Restart", func(t *testing.T) {
+		require.True(t, height > 0, "%d", height)
+
 		ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
 		defer cancel()
 		node2, _, err2 := rpctest.StartTendermint(ctx, cfg, app, rpctest.SuppressStdout)
 		require.NoError(t, err2)

-		logger := log.NewTestingLogger(t)
+		logger := log.NewNopLogger()

 		client, err := local.New(logger, node2.(local.NodeService))
 		require.NoError(t, err)
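RollbackState stays exported, so the same rollback the command performs can be invoked directly, as the integration test above does. A sketch under the assumption that conf points at a node home with existing block and state stores:

package main

import (
	"fmt"
	"os"

	"github.com/tendermint/tendermint/cmd/tendermint/commands"
	"github.com/tendermint/tendermint/config"
)

func main() {
	conf := config.DefaultConfig()
	conf.SetRoot(os.ExpandEnv("$HOME/.tendermint")) // assumed node home

	height, hash, err := commands.RollbackState(conf)
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to rollback state: %v\n", err)
		os.Exit(1)
	}

	// The application must separately roll back to height n - 1.
	fmt.Printf("Rolled back state to height %d and hash %X\n", height, hash)
}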
@@ -2,65 +2,62 @@ package commands
 import (
 	"fmt"
+	"os"
+	"path/filepath"
 	"time"

 	"github.com/spf13/cobra"
 	"github.com/spf13/viper"

-	cfg "github.com/tendermint/tendermint/config"
+	"github.com/tendermint/tendermint/config"
 	"github.com/tendermint/tendermint/libs/cli"
 	"github.com/tendermint/tendermint/libs/log"
 )

-var (
-	config     = cfg.DefaultConfig()
-	logger     = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
-	ctxTimeout = 4 * time.Second
-)
-
-func init() {
-	registerFlagsRootCmd(RootCmd)
-}
-
-func registerFlagsRootCmd(cmd *cobra.Command) {
-	cmd.PersistentFlags().String("log-level", config.LogLevel, "log level")
-}
+const ctxTimeout = 4 * time.Second

 // ParseConfig retrieves the default environment configuration,
 // sets up the Tendermint root and ensures that the root exists
-func ParseConfig() (*cfg.Config, error) {
-	conf := cfg.DefaultConfig()
-	err := viper.Unmarshal(conf)
-	if err != nil {
+func ParseConfig(conf *config.Config) (*config.Config, error) {
+	if err := viper.Unmarshal(conf); err != nil {
 		return nil, err
 	}

 	conf.SetRoot(conf.RootDir)
-	cfg.EnsureRoot(conf.RootDir)

 	if err := conf.ValidateBasic(); err != nil {
 		return nil, fmt.Errorf("error in config file: %w", err)
 	}
 	return conf, nil
 }

-// RootCmd is the root command for Tendermint core.
-var RootCmd = &cobra.Command{
-	Use:   "tendermint",
-	Short: "BFT state machine replication for applications in any programming languages",
-	PersistentPreRunE: func(cmd *cobra.Command, args []string) (err error) {
-		if cmd.Name() == VersionCmd.Name() {
-			return nil
-		}
-
-		config, err = ParseConfig()
-		if err != nil {
-			return err
-		}
-
-		logger, err = log.NewDefaultLogger(config.LogFormat, config.LogLevel)
-		if err != nil {
-			return err
-		}
-
-		logger = logger.With("module", "main")
-		return nil
-	},
-}
+// RootCommand constructs the root command-line entry point for Tendermint core.
+func RootCommand(conf *config.Config, logger log.Logger) *cobra.Command {
+	cmd := &cobra.Command{
+		Use:   "tendermint",
+		Short: "BFT state machine replication for applications in any programming languages",
+		PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
+			if cmd.Name() == VersionCmd.Name() {
+				return nil
+			}
+
+			if err := cli.BindFlagsLoadViper(cmd, args); err != nil {
+				return err
+			}
+
+			pconf, err := ParseConfig(conf)
+			if err != nil {
+				return err
+			}
+			*conf = *pconf
+			config.EnsureRoot(conf.RootDir)
+
+			return nil
+		},
+	}
+	cmd.PersistentFlags().StringP(cli.HomeFlag, "", os.ExpandEnv(filepath.Join("$HOME", config.DefaultTendermintDir)), "directory for config and data")
+	cmd.PersistentFlags().Bool(cli.TraceFlag, false, "print out full stack trace on errors")
+	cmd.PersistentFlags().String("log-level", conf.LogLevel, "log level")
+	cobra.OnInitialize(func() { cli.InitEnv("TM") })
+	return cmd
+}
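With every command now built by a constructor that receives the shared config and logger, assembling the CLI becomes explicit. A sketch of how the pieces from the diffs above fit together; the exact set of subcommands registered in cmd/tendermint/main.go is assumed, not shown in this compare view:

package main

import (
	"os"

	"github.com/tendermint/tendermint/cmd/tendermint/commands"
	"github.com/tendermint/tendermint/config"
	"github.com/tendermint/tendermint/libs/log"
)

func main() {
	conf := config.DefaultConfig()

	logger, err := log.NewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
	if err != nil {
		os.Exit(1)
	}

	// Root command plus the constructor-based subcommands from the diffs.
	root := commands.RootCommand(conf, logger)
	root.AddCommand(
		commands.MakeInitFilesCommand(conf, logger),
		commands.MakeGenValidatorCommand(),
		commands.MakeInspectCommand(conf, logger),
		commands.MakeLightCommand(conf, logger),
		commands.MakeReplayCommand(conf, logger),
		commands.MakeReindexEventCommand(conf, logger),
		commands.MakeRollbackStateCommand(conf),
	)

	if err := root.Execute(); err != nil {
		os.Exit(1)
	}
}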
@@ -1,10 +1,10 @@
 package commands

 import (
+	"context"
 	"fmt"
 	"os"
 	"path/filepath"
-	"strconv"
 	"testing"

 	"github.com/spf13/cobra"
@@ -14,43 +14,54 @@ import (
 	cfg "github.com/tendermint/tendermint/config"
 	"github.com/tendermint/tendermint/libs/cli"
+	"github.com/tendermint/tendermint/libs/log"
 	tmos "github.com/tendermint/tendermint/libs/os"
 )

 // writeConfigVals writes a toml file with the given values.
 // It returns an error if writing was impossible.
 func writeConfigVals(dir string, vals map[string]string) error {
 	data := ""
 	for k, v := range vals {
 		data += fmt.Sprintf("%s = \"%s\"\n", k, v)
 	}
 	cfile := filepath.Join(dir, "config.toml")
 	return os.WriteFile(cfile, []byte(data), 0600)
 }

 // clearConfig clears env vars, the given root dir, and resets viper.
-func clearConfig(t *testing.T, dir string) {
+func clearConfig(t *testing.T, dir string) *cfg.Config {
 	t.Helper()
 	require.NoError(t, os.Unsetenv("TMHOME"))
 	require.NoError(t, os.Unsetenv("TM_HOME"))
 	require.NoError(t, os.RemoveAll(dir))

 	viper.Reset()
-	config = cfg.DefaultConfig()
+	conf := cfg.DefaultConfig()
+	conf.RootDir = dir
+	return conf
 }

 // prepare new rootCmd
-func testRootCmd() *cobra.Command {
-	rootCmd := &cobra.Command{
-		Use:               RootCmd.Use,
-		PersistentPreRunE: RootCmd.PersistentPreRunE,
-		Run:               func(cmd *cobra.Command, args []string) {},
-	}
-	registerFlagsRootCmd(rootCmd)
+func testRootCmd(conf *cfg.Config) *cobra.Command {
+	logger := log.NewNopLogger()
+	cmd := RootCommand(conf, logger)
+	cmd.RunE = func(cmd *cobra.Command, args []string) error { return nil }

 	var l string
-	rootCmd.PersistentFlags().String("log", l, "Log")
-	return rootCmd
+	cmd.PersistentFlags().String("log", l, "Log")
+	return cmd
 }

-func testSetup(t *testing.T, rootDir string, args []string, env map[string]string) error {
+func testSetup(ctx context.Context, t *testing.T, conf *cfg.Config, args []string, env map[string]string) error {
 	t.Helper()
-	clearConfig(t, rootDir)

-	rootCmd := testRootCmd()
-	cmd := cli.PrepareBaseCmd(rootCmd, "TM", rootDir)
+	cmd := testRootCmd(conf)
+	viper.Set(cli.HomeFlag, conf.RootDir)

 	// run with the args and env
-	args = append([]string{rootCmd.Use}, args...)
-	return cli.RunWithArgs(cmd, args, env)
+	args = append([]string{cmd.Use}, args...)
+	return cli.RunWithArgs(ctx, cmd, args, env)
 }

 func TestRootHome(t *testing.T) {
@@ -66,23 +77,29 @@ func TestRootHome(t *testing.T) {
 		{nil, map[string]string{"TMHOME": newRoot}, newRoot},
 	}

+	ctx, cancel := context.WithCancel(context.Background())
+	defer cancel()
+
 	for i, tc := range cases {
-		idxString := strconv.Itoa(i)
-
-		err := testSetup(t, defaultRoot, tc.args, tc.env)
-		require.NoError(t, err, idxString)
-
-		assert.Equal(t, tc.root, config.RootDir, idxString)
-		assert.Equal(t, tc.root, config.P2P.RootDir, idxString)
-		assert.Equal(t, tc.root, config.Consensus.RootDir, idxString)
-		assert.Equal(t, tc.root, config.Mempool.RootDir, idxString)
+		t.Run(fmt.Sprint(i), func(t *testing.T) {
+			conf := clearConfig(t, tc.root)
+
+			err := testSetup(ctx, t, conf, tc.args, tc.env)
+			require.NoError(t, err)
+
+			require.Equal(t, tc.root, conf.RootDir)
+			require.Equal(t, tc.root, conf.P2P.RootDir)
+			require.Equal(t, tc.root, conf.Consensus.RootDir)
+			require.Equal(t, tc.root, conf.Mempool.RootDir)
+		})
 	}
 }

 func TestRootFlagsEnv(t *testing.T) {
-
 	// defaults
 	defaults := cfg.DefaultConfig()
+	defaultDir := t.TempDir()
+
 	defaultLogLvl := defaults.LogLevel

 	cases := []struct {
@@ -97,18 +114,25 @@ func TestRootFlagsEnv(t *testing.T) {
 		{nil, map[string]string{"TM_LOG_LEVEL": "debug"}, "debug"}, // right env
 	}

-	defaultRoot := t.TempDir()
+	ctx, cancel := context.WithCancel(context.Background())
+	defer cancel()
+
 	for i, tc := range cases {
-		idxString := strconv.Itoa(i)
-
-		err := testSetup(t, defaultRoot, tc.args, tc.env)
-		require.NoError(t, err, idxString)
-
-		assert.Equal(t, tc.logLevel, config.LogLevel, idxString)
+		t.Run(fmt.Sprint(i), func(t *testing.T) {
+			conf := clearConfig(t, defaultDir)
+
+			err := testSetup(ctx, t, conf, tc.args, tc.env)
+			require.NoError(t, err)
+
+			assert.Equal(t, tc.logLevel, conf.LogLevel)
+		})
 	}
 }

 func TestRootConfig(t *testing.T) {
+	ctx, cancel := context.WithCancel(context.Background())
+	defer cancel()
+
 	// write non-default config
 	nonDefaultLogLvl := "debug"
@@ -117,9 +141,8 @@ func TestRootConfig(t *testing.T) {
 	}

 	cases := []struct {
-		args []string
-		env  map[string]string
-
+		args   []string
+		env    map[string]string
 		logLvl string
 	}{
 		{nil, nil, nonDefaultLogLvl}, // should load config
@@ -128,29 +151,30 @@
 	}

 	for i, tc := range cases {
 		defaultRoot := t.TempDir()
idxString := strconv.Itoa(i)
|
||||
clearConfig(t, defaultRoot)
|
||||
t.Run(fmt.Sprint(i), func(t *testing.T) {
|
||||
defaultRoot := t.TempDir()
|
||||
conf := clearConfig(t, defaultRoot)
|
||||
conf.LogLevel = tc.logLvl
|
||||
|
||||
// XXX: path must match cfg.defaultConfigPath
|
||||
configFilePath := filepath.Join(defaultRoot, "config")
|
||||
err := tmos.EnsureDir(configFilePath, 0700)
|
||||
require.NoError(t, err)
|
||||
// XXX: path must match cfg.defaultConfigPath
|
||||
configFilePath := filepath.Join(defaultRoot, "config")
|
||||
err := tmos.EnsureDir(configFilePath, 0700)
|
||||
require.NoError(t, err)
|
||||
|
||||
// write the non-defaults to a different path
|
||||
// TODO: support writing sub configs so we can test that too
|
||||
err = WriteConfigVals(configFilePath, cvals)
|
||||
require.NoError(t, err)
|
||||
// write the non-defaults to a different path
|
||||
// TODO: support writing sub configs so we can test that too
|
||||
err = writeConfigVals(configFilePath, cvals)
|
||||
require.NoError(t, err)
|
||||
|
||||
rootCmd := testRootCmd()
|
||||
cmd := cli.PrepareBaseCmd(rootCmd, "TM", defaultRoot)
|
||||
cmd := testRootCmd(conf)
|
||||
|
||||
// run with the args and env
|
||||
tc.args = append([]string{rootCmd.Use}, tc.args...)
|
||||
err = cli.RunWithArgs(cmd, tc.args, tc.env)
|
||||
require.NoError(t, err, idxString)
|
||||
// run with the args and env
|
||||
tc.args = append([]string{cmd.Use}, tc.args...)
|
||||
err = cli.RunWithArgs(ctx, cmd, tc.args, tc.env)
|
||||
require.NoError(t, err)
|
||||
|
||||
assert.Equal(t, tc.logLvl, config.LogLevel, idxString)
|
||||
require.Equal(t, tc.logLvl, conf.LogLevel)
|
||||
})
|
||||
}
|
||||
}
@@ -12,25 +12,26 @@ import (
"github.com/spf13/cobra"

cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/log"
)

var (
genesisHash []byte
)

// AddNodeFlags exposes some common configuration options on the command-line
// These are exposed for convenience of commands embedding a tendermint node
func AddNodeFlags(cmd *cobra.Command) {
// AddNodeFlags exposes some common configuration options from conf in the flag
// set for cmd. This is a convenience for commands embedding a Tendermint node.
func AddNodeFlags(cmd *cobra.Command, conf *cfg.Config) {
// bind flags
cmd.Flags().String("moniker", config.Moniker, "node name")
cmd.Flags().String("moniker", conf.Moniker, "node name")

// mode flags
cmd.Flags().String("mode", config.Mode, "node mode (full | validator | seed)")
cmd.Flags().String("mode", conf.Mode, "node mode (full | validator | seed)")

// priv val flags
cmd.Flags().String(
"priv-validator-laddr",
config.PrivValidator.ListenAddr,
conf.PrivValidator.ListenAddr,
"socket address to listen on for connections from external priv-validator process")

// node flags
@@ -40,74 +41,74 @@ func AddNodeFlags(cmd *cobra.Command) {
"genesis-hash",
[]byte{},
"optional SHA-256 hash of the genesis file")
cmd.Flags().Int64("consensus.double-sign-check-height", config.Consensus.DoubleSignCheckHeight,
cmd.Flags().Int64("consensus.double-sign-check-height", conf.Consensus.DoubleSignCheckHeight,
"how many blocks to look back to check existence of the node's "+
"consensus votes before joining consensus")

// abci flags
cmd.Flags().String(
"proxy-app",
config.ProxyApp,
conf.ProxyApp,
"proxy app address, or one of: 'kvstore',"+
" 'persistent_kvstore', 'e2e' or 'noop' for local testing.")
cmd.Flags().String("abci", config.ABCI, "specify abci transport (socket | grpc)")
cmd.Flags().String("abci", conf.ABCI, "specify abci transport (socket | grpc)")

// rpc flags
cmd.Flags().String("rpc.laddr", config.RPC.ListenAddress, "RPC listen address. Port required")
cmd.Flags().Bool("rpc.unsafe", config.RPC.Unsafe, "enabled unsafe rpc methods")
cmd.Flags().String("rpc.pprof-laddr", config.RPC.PprofListenAddress, "pprof listen address (https://golang.org/pkg/net/http/pprof)")
cmd.Flags().String("rpc.laddr", conf.RPC.ListenAddress, "RPC listen address. Port required")
cmd.Flags().Bool("rpc.unsafe", conf.RPC.Unsafe, "enabled unsafe rpc methods")
cmd.Flags().String("rpc.pprof-laddr", conf.RPC.PprofListenAddress, "pprof listen address (https://golang.org/pkg/net/http/pprof)")

// p2p flags
cmd.Flags().String(
"p2p.laddr",
config.P2P.ListenAddress,
conf.P2P.ListenAddress,
"node listen address. (0.0.0.0:0 means any interface, any port)")
cmd.Flags().String("p2p.seeds", config.P2P.Seeds, "comma-delimited ID@host:port seed nodes") //nolint: staticcheck
cmd.Flags().String("p2p.persistent-peers", config.P2P.PersistentPeers, "comma-delimited ID@host:port persistent peers")
cmd.Flags().Bool("p2p.upnp", config.P2P.UPNP, "enable/disable UPNP port forwarding")
cmd.Flags().Bool("p2p.pex", config.P2P.PexReactor, "enable/disable Peer-Exchange")
cmd.Flags().String("p2p.private-peer-ids", config.P2P.PrivatePeerIDs, "comma-delimited private peer IDs")
cmd.Flags().String("p2p.seeds", conf.P2P.Seeds, "comma-delimited ID@host:port seed nodes") //nolint: staticcheck
cmd.Flags().String("p2p.persistent-peers", conf.P2P.PersistentPeers, "comma-delimited ID@host:port persistent peers")
cmd.Flags().Bool("p2p.upnp", conf.P2P.UPNP, "enable/disable UPNP port forwarding")
cmd.Flags().Bool("p2p.pex", conf.P2P.PexReactor, "enable/disable Peer-Exchange")
cmd.Flags().String("p2p.private-peer-ids", conf.P2P.PrivatePeerIDs, "comma-delimited private peer IDs")

// consensus flags
cmd.Flags().Bool(
"consensus.create-empty-blocks",
config.Consensus.CreateEmptyBlocks,
conf.Consensus.CreateEmptyBlocks,
"set this to false to only produce blocks when there are txs or when the AppHash changes")
cmd.Flags().String(
"consensus.create-empty-blocks-interval",
config.Consensus.CreateEmptyBlocksInterval.String(),
conf.Consensus.CreateEmptyBlocksInterval.String(),
"the possible interval between empty blocks")

addDBFlags(cmd)
addDBFlags(cmd, conf)
}

func addDBFlags(cmd *cobra.Command) {
func addDBFlags(cmd *cobra.Command, conf *cfg.Config) {
cmd.Flags().String(
"db-backend",
config.DBBackend,
conf.DBBackend,
"database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb")
cmd.Flags().String(
"db-dir",
config.DBPath,
conf.DBPath,
"database directory")
}

// NewRunNodeCmd returns the command that allows the CLI to start a node.
// It can be used with a custom PrivValidator and in-process ABCI application.
func NewRunNodeCmd(nodeProvider cfg.ServiceProvider) *cobra.Command {
func NewRunNodeCmd(nodeProvider cfg.ServiceProvider, conf *cfg.Config, logger log.Logger) *cobra.Command {
cmd := &cobra.Command{
Use: "start",
Aliases: []string{"node", "run"},
Short: "Run the tendermint node",
RunE: func(cmd *cobra.Command, args []string) error {
if err := checkGenesisHash(config); err != nil {
if err := checkGenesisHash(conf); err != nil {
return err
}

ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
defer cancel()

n, err := nodeProvider(ctx, config, logger)
n, err := nodeProvider(ctx, conf, logger)
if err != nil {
return fmt.Errorf("failed to create node: %w", err)
}
@@ -116,14 +117,14 @@ func NewRunNodeCmd(nodeProvider cfg.ServiceProvider) *cobra.Command {
return fmt.Errorf("failed to start node: %w", err)
}

logger.Info("started node", "node", n.String())
logger.Info("started node", "chain", conf.ChainID())

<-ctx.Done()
return nil
},
}

AddNodeFlags(cmd)
AddNodeFlags(cmd, conf)
return cmd
}
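For embedders, the new signatures mean the start command no longer reads package globals; the caller supplies the provider, config, and logger explicitly. A rough sketch of composing it into a custom binary (using `node.NewDefault` as the provider, as the `main.go` diff below does):

```go
package main

import (
	"github.com/spf13/cobra"

	"github.com/tendermint/tendermint/cmd/tendermint/commands"
	"github.com/tendermint/tendermint/config"
	"github.com/tendermint/tendermint/libs/log"
	"github.com/tendermint/tendermint/node"
)

// newRootWithStart wires a "start" subcommand against an explicit
// config and logger instead of package-level globals.
func newRootWithStart(conf *config.Config, logger log.Logger) *cobra.Command {
	root := commands.RootCommand(conf, logger)
	root.AddCommand(commands.NewRunNodeCmd(node.NewDefault, conf, logger))
	return root
}
```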
@@ -4,21 +4,23 @@ import (
"fmt"

"github.com/spf13/cobra"

"github.com/tendermint/tendermint/config"
)

// ShowNodeIDCmd dumps node's ID to the standard output.
var ShowNodeIDCmd = &cobra.Command{
Use: "show-node-id",
Short: "Show this node's ID",
RunE: showNodeID,
}
// MakeShowNodeIDCommand constructs a command to dump the node ID to stdout.
func MakeShowNodeIDCommand(conf *config.Config) *cobra.Command {
return &cobra.Command{
Use: "show-node-id",
Short: "Show this node's ID",
RunE: func(cmd *cobra.Command, args []string) error {
nodeKeyID, err := conf.LoadNodeKeyID()
if err != nil {
return err
}

func showNodeID(cmd *cobra.Command, args []string) error {
nodeKeyID, err := config.LoadNodeKeyID()
if err != nil {
return err
fmt.Println(nodeKeyID)
return nil
},
}

fmt.Println(nodeKeyID)
return nil
}
@@ -6,75 +6,78 @@ import (

"github.com/spf13/cobra"

"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/internal/jsontypes"
"github.com/tendermint/tendermint/libs/log"
tmnet "github.com/tendermint/tendermint/libs/net"
tmos "github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/privval"
tmgrpc "github.com/tendermint/tendermint/privval/grpc"
)

// ShowValidatorCmd adds capabilities for showing the validator info.
var ShowValidatorCmd = &cobra.Command{
Use: "show-validator",
Short: "Show this node's validator info",
RunE: showValidator,
}
// MakeShowValidatorCommand constructs a command to show the validator info.
func MakeShowValidatorCommand(conf *config.Config, logger log.Logger) *cobra.Command {
return &cobra.Command{
Use: "show-validator",
Short: "Show this node's validator info",
RunE: func(cmd *cobra.Command, args []string) error {
var (
pubKey crypto.PubKey
err error
bctx = cmd.Context()
)
//TODO: remove once gRPC is the only supported protocol
protocol, _ := tmnet.ProtocolAndAddress(conf.PrivValidator.ListenAddr)
switch protocol {
case "grpc":
pvsc, err := tmgrpc.DialRemoteSigner(
bctx,
conf.PrivValidator,
conf.ChainID(),
logger,
conf.Instrumentation.Prometheus,
)
if err != nil {
return fmt.Errorf("can't connect to remote validator %w", err)
}

func showValidator(cmd *cobra.Command, args []string) error {
var (
pubKey crypto.PubKey
err error
bctx = cmd.Context()
)
//TODO: remove once gRPC is the only supported protocol
protocol, _ := tmnet.ProtocolAndAddress(config.PrivValidator.ListenAddr)
switch protocol {
case "grpc":
pvsc, err := tmgrpc.DialRemoteSigner(
bctx,
config.PrivValidator,
config.ChainID(),
logger,
config.Instrumentation.Prometheus,
)
if err != nil {
return fmt.Errorf("can't connect to remote validator %w", err)
}
ctx, cancel := context.WithTimeout(bctx, ctxTimeout)
defer cancel()

ctx, cancel := context.WithTimeout(bctx, ctxTimeout)
defer cancel()
pubKey, err = pvsc.GetPubKey(ctx)
if err != nil {
return fmt.Errorf("can't get pubkey: %w", err)
}
default:

pubKey, err = pvsc.GetPubKey(ctx)
if err != nil {
return fmt.Errorf("can't get pubkey: %w", err)
}
default:
keyFilePath := conf.PrivValidator.KeyFile()
if !tmos.FileExists(keyFilePath) {
return fmt.Errorf("private validator file %s does not exist", keyFilePath)
}

keyFilePath := config.PrivValidator.KeyFile()
if !tmos.FileExists(keyFilePath) {
return fmt.Errorf("private validator file %s does not exist", keyFilePath)
}
pv, err := privval.LoadFilePV(keyFilePath, conf.PrivValidator.StateFile())
if err != nil {
return err
}

pv, err := privval.LoadFilePV(keyFilePath, config.PrivValidator.StateFile())
if err != nil {
return err
}
ctx, cancel := context.WithTimeout(bctx, ctxTimeout)
defer cancel()

ctx, cancel := context.WithTimeout(bctx, ctxTimeout)
defer cancel()
pubKey, err = pv.GetPubKey(ctx)
if err != nil {
return fmt.Errorf("can't get pubkey: %w", err)
}
}

pubKey, err = pv.GetPubKey(ctx)
if err != nil {
return fmt.Errorf("can't get pubkey: %w", err)
}
bz, err := jsontypes.Marshal(pubKey)
if err != nil {
return fmt.Errorf("failed to marshal private validator pubkey: %w", err)
}

fmt.Println(string(bz))
return nil
},
}

bz, err := jsontypes.Marshal(pubKey)
if err != nil {
return fmt.Errorf("failed to marshal private validator pubkey: %w", err)
}

fmt.Println(string(bz))
return nil
}
@@ -13,76 +13,23 @@ import (

cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/bytes"
"github.com/tendermint/tendermint/libs/log"
tmrand "github.com/tendermint/tendermint/libs/rand"
tmtime "github.com/tendermint/tendermint/libs/time"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/types"
)

var (
nValidators int
nNonValidators int
initialHeight int64
configFile string
outputDir string
nodeDirPrefix string

populatePersistentPeers bool
hostnamePrefix string
hostnameSuffix string
startingIPAddress string
hostnames []string
p2pPort int
randomMonikers bool
)

const (
nodeDirPerm = 0755
)

func init() {
TestnetFilesCmd.Flags().IntVar(&nValidators, "v", 4,
"number of validators to initialize the testnet with")
TestnetFilesCmd.Flags().StringVar(&configFile, "config", "",
"config file to use (note some options may be overwritten)")
TestnetFilesCmd.Flags().IntVar(&nNonValidators, "n", 0,
"number of non-validators to initialize the testnet with")
TestnetFilesCmd.Flags().StringVar(&outputDir, "o", "./mytestnet",
"directory to store initialization data for the testnet")
TestnetFilesCmd.Flags().StringVar(&nodeDirPrefix, "node-dir-prefix", "node",
"prefix the directory name for each node with (node results in node0, node1, ...)")
TestnetFilesCmd.Flags().Int64Var(&initialHeight, "initial-height", 0,
"initial height of the first block")

TestnetFilesCmd.Flags().BoolVar(&populatePersistentPeers, "populate-persistent-peers", true,
"update config of each node with the list of persistent peers build using either"+
" hostname-prefix or"+
" starting-ip-address")
TestnetFilesCmd.Flags().StringVar(&hostnamePrefix, "hostname-prefix", "node",
"hostname prefix (\"node\" results in persistent peers list ID0@node0:26656, ID1@node1:26656, ...)")
TestnetFilesCmd.Flags().StringVar(&hostnameSuffix, "hostname-suffix", "",
"hostname suffix ("+
"\".xyz.com\""+
" results in persistent peers list ID0@node0.xyz.com:26656, ID1@node1.xyz.com:26656, ...)")
TestnetFilesCmd.Flags().StringVar(&startingIPAddress, "starting-ip-address", "",
"starting IP address ("+
"\"192.168.0.1\""+
" results in persistent peers list ID0@192.168.0.1:26656, ID1@192.168.0.2:26656, ...)")
TestnetFilesCmd.Flags().StringArrayVar(&hostnames, "hostname", []string{},
"manually override all hostnames of validators and non-validators (use --hostname multiple times for multiple hosts)")
TestnetFilesCmd.Flags().IntVar(&p2pPort, "p2p-port", 26656,
"P2P Port")
TestnetFilesCmd.Flags().BoolVar(&randomMonikers, "random-monikers", false,
"randomize the moniker for each generated node")
TestnetFilesCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")
}

// TestnetFilesCmd allows initialisation of files for a Tendermint testnet.
var TestnetFilesCmd = &cobra.Command{
Use: "testnet",
Short: "Initialize files for a Tendermint testnet",
Long: `testnet will create "v" + "n" number of directories and populate each with
// MakeTestnetFilesCommand constructs a command to generate testnet config files.
func MakeTestnetFilesCommand(conf *cfg.Config, logger log.Logger) *cobra.Command {
cmd := &cobra.Command{
Use: "testnet",
Short: "Initialize files for a Tendermint testnet",
Long: `testnet will create "v" + "n" number of directories and populate each with
necessary files (private validator, genesis, config, etc.).

Note, strict routability for addresses is turned off in the config file.
@@ -93,205 +40,292 @@ Example:

tendermint testnet --v 4 --o ./output --populate-persistent-peers --starting-ip-address 192.168.10.2
`,
RunE: testnetFiles,
}

func testnetFiles(cmd *cobra.Command, args []string) error {
if len(hostnames) > 0 && len(hostnames) != (nValidators+nNonValidators) {
return fmt.Errorf(
"testnet needs precisely %d hostnames (number of validators plus non-validators) if --hostname parameter is used",
nValidators+nNonValidators,
)
}

// set mode to validator for testnet
config := cfg.DefaultValidatorConfig()

// overwrite default config if set and valid
if configFile != "" {
viper.SetConfigFile(configFile)
if err := viper.ReadInConfig(); err != nil {
return err
}
if err := viper.Unmarshal(config); err != nil {
return err
}
if err := config.ValidateBasic(); err != nil {
return err
}
}

genVals := make([]types.GenesisValidator, nValidators)
ctx := cmd.Context()
for i := 0; i < nValidators; i++ {
nodeDirName := fmt.Sprintf("%s%d", nodeDirPrefix, i)
nodeDir := filepath.Join(outputDir, nodeDirName)
config.SetRoot(nodeDir)

err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}
err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}

if err := initFilesWithConfig(ctx, config); err != nil {
return err
}

pvKeyFile := filepath.Join(nodeDir, config.PrivValidator.Key)
pvStateFile := filepath.Join(nodeDir, config.PrivValidator.State)
pv, err := privval.LoadFilePV(pvKeyFile, pvStateFile)
if err != nil {
return err
}

ctx, cancel := context.WithTimeout(ctx, ctxTimeout)
defer cancel()

pubKey, err := pv.GetPubKey(ctx)
if err != nil {
return fmt.Errorf("can't get pubkey: %w", err)
}
genVals[i] = types.GenesisValidator{
Address: pubKey.Address(),
PubKey: pubKey,
Power: 1,
Name: nodeDirName,
}
}

for i := 0; i < nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i+nValidators))
config.SetRoot(nodeDir)

err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}

err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}

if err := initFilesWithConfig(ctx, config); err != nil {
return err
}
}

// Generate genesis doc from generated validators
genDoc := &types.GenesisDoc{
ChainID: "chain-" + tmrand.Str(6),
GenesisTime: tmtime.Now(),
InitialHeight: initialHeight,
Validators: genVals,
ConsensusParams: types.DefaultConsensusParams(),
}
if keyType == "secp256k1" {
genDoc.ConsensusParams.Validator = types.ValidatorParams{
PubKeyTypes: []string{types.ABCIPubKeyTypeSecp256k1},
}
}

// Write genesis file.
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
if err := genDoc.SaveAs(filepath.Join(nodeDir, config.BaseConfig.Genesis)); err != nil {
_ = os.RemoveAll(outputDir)
return err
}
}

// Gather persistent peer addresses.
var (
persistentPeers = make([]string, 0)
err error
nValidators int
nNonValidators int
initialHeight int64
configFile string
outputDir string
nodeDirPrefix string

populatePersistentPeers bool
hostnamePrefix string
hostnameSuffix string
startingIPAddress string
hostnames []string
p2pPort int
randomMonikers bool
keyType string
)
if populatePersistentPeers {
persistentPeers, err = persistentPeersArray(config)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}
}

// Overwrite default config.
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.AllowDuplicateIP = true
if populatePersistentPeers {
persistentPeersWithoutSelf := make([]string, 0)
for j := 0; j < len(persistentPeers); j++ {
if j == i {
continue
}
persistentPeersWithoutSelf = append(persistentPeersWithoutSelf, persistentPeers[j])
cmd.Flags().IntVar(&nValidators, "v", 4,
"number of validators to initialize the testnet with")
cmd.Flags().StringVar(&configFile, "config", "",
"config file to use (note some options may be overwritten)")
cmd.Flags().IntVar(&nNonValidators, "n", 0,
"number of non-validators to initialize the testnet with")
cmd.Flags().StringVar(&outputDir, "o", "./mytestnet",
"directory to store initialization data for the testnet")
cmd.Flags().StringVar(&nodeDirPrefix, "node-dir-prefix", "node",
"prefix the directory name for each node with (node results in node0, node1, ...)")
cmd.Flags().Int64Var(&initialHeight, "initial-height", 0,
"initial height of the first block")

cmd.Flags().BoolVar(&populatePersistentPeers, "populate-persistent-peers", true,
"update config of each node with the list of persistent peers build using either"+
" hostname-prefix or"+
" starting-ip-address")
cmd.Flags().StringVar(&hostnamePrefix, "hostname-prefix", "node",
"hostname prefix (\"node\" results in persistent peers list ID0@node0:26656, ID1@node1:26656, ...)")
cmd.Flags().StringVar(&hostnameSuffix, "hostname-suffix", "",
"hostname suffix ("+
"\".xyz.com\""+
" results in persistent peers list ID0@node0.xyz.com:26656, ID1@node1.xyz.com:26656, ...)")
cmd.Flags().StringVar(&startingIPAddress, "starting-ip-address", "",
"starting IP address ("+
"\"192.168.0.1\""+
" results in persistent peers list ID0@192.168.0.1:26656, ID1@192.168.0.2:26656, ...)")
cmd.Flags().StringArrayVar(&hostnames, "hostname", []string{},
"manually override all hostnames of validators and non-validators (use --hostname multiple times for multiple hosts)")
cmd.Flags().IntVar(&p2pPort, "p2p-port", 26656,
"P2P Port")
cmd.Flags().BoolVar(&randomMonikers, "random-monikers", false,
"randomize the moniker for each generated node")
cmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")

cmd.RunE = func(cmd *cobra.Command, args []string) error {
if len(hostnames) > 0 && len(hostnames) != (nValidators+nNonValidators) {
return fmt.Errorf(
"testnet needs precisely %d hostnames (number of validators plus non-validators) if --hostname parameter is used",
nValidators+nNonValidators,
)
}

// set mode to validator for testnet
config := cfg.DefaultValidatorConfig()

// overwrite default config if set and valid
if configFile != "" {
viper.SetConfigFile(configFile)
if err := viper.ReadInConfig(); err != nil {
return err
}
if err := viper.Unmarshal(config); err != nil {
return err
}
if err := config.ValidateBasic(); err != nil {
return err
}
config.P2P.PersistentPeers = strings.Join(persistentPeersWithoutSelf, ",")
}
config.Moniker = moniker(i)

if err := cfg.WriteConfigFile(nodeDir, config); err != nil {
return err
genVals := make([]types.GenesisValidator, nValidators)
ctx := cmd.Context()
for i := 0; i < nValidators; i++ {
nodeDirName := fmt.Sprintf("%s%d", nodeDirPrefix, i)
nodeDir := filepath.Join(outputDir, nodeDirName)
config.SetRoot(nodeDir)

err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}
err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}

if err := initFilesWithConfig(ctx, config, logger, keyType); err != nil {
return err
}

pvKeyFile := filepath.Join(nodeDir, config.PrivValidator.Key)
pvStateFile := filepath.Join(nodeDir, config.PrivValidator.State)
pv, err := privval.LoadFilePV(pvKeyFile, pvStateFile)
if err != nil {
return err
}

ctx, cancel := context.WithTimeout(ctx, ctxTimeout)
defer cancel()

pubKey, err := pv.GetPubKey(ctx)
if err != nil {
return fmt.Errorf("can't get pubkey: %w", err)
}
genVals[i] = types.GenesisValidator{
Address: pubKey.Address(),
PubKey: pubKey,
Power: 1,
Name: nodeDirName,
}
}

for i := 0; i < nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i+nValidators))
config.SetRoot(nodeDir)

err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}

err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}

if err := initFilesWithConfig(ctx, conf, logger, keyType); err != nil {
return err
}
}

// Generate genesis doc from generated validators
genDoc := &types.GenesisDoc{
ChainID: "chain-" + tmrand.Str(6),
GenesisTime: tmtime.Now(),
InitialHeight: initialHeight,
Validators: genVals,
ConsensusParams: types.DefaultConsensusParams(),
}
if keyType == "secp256k1" {
genDoc.ConsensusParams.Validator = types.ValidatorParams{
PubKeyTypes: []string{types.ABCIPubKeyTypeSecp256k1},
}
}

// Write genesis file.
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
if err := genDoc.SaveAs(filepath.Join(nodeDir, config.BaseConfig.Genesis)); err != nil {
_ = os.RemoveAll(outputDir)
return err
}
}

// Gather persistent peer addresses.
var (
persistentPeers = make([]string, 0)
err error
)
tpargs := testnetPeerArgs{
numValidators: nValidators,
numNonValidators: nNonValidators,
peerToPeerPort: p2pPort,
nodeDirPrefix: nodeDirPrefix,
outputDir: outputDir,
hostnames: hostnames,
startingIPAddr: startingIPAddress,
hostnamePrefix: hostnamePrefix,
hostnameSuffix: hostnameSuffix,
randomMonikers: randomMonikers,
}

if populatePersistentPeers {

persistentPeers, err = persistentPeersArray(config, tpargs)
if err != nil {
_ = os.RemoveAll(outputDir)
return err
}
}

// Overwrite default config.
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.AllowDuplicateIP = true
if populatePersistentPeers {
persistentPeersWithoutSelf := make([]string, 0)
for j := 0; j < len(persistentPeers); j++ {
if j == i {
continue
}
persistentPeersWithoutSelf = append(persistentPeersWithoutSelf, persistentPeers[j])
}
config.P2P.PersistentPeers = strings.Join(persistentPeersWithoutSelf, ",")
}
config.Moniker = tpargs.moniker(i)

if err := cfg.WriteConfigFile(nodeDir, config); err != nil {
return err
}
}

fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators)
return nil
}

fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators)
return nil
return cmd
}

func hostnameOrIP(i int) string {
if len(hostnames) > 0 && i < len(hostnames) {
return hostnames[i]
type testnetPeerArgs struct {
numValidators int
numNonValidators int
peerToPeerPort int
nodeDirPrefix string
outputDir string
hostnames []string
startingIPAddr string
hostnamePrefix string
hostnameSuffix string
randomMonikers bool
}

func (args *testnetPeerArgs) hostnameOrIP(i int) (string, error) {
if len(args.hostnames) > 0 && i < len(args.hostnames) {
return args.hostnames[i], nil
}
if startingIPAddress == "" {
return fmt.Sprintf("%s%d%s", hostnamePrefix, i, hostnameSuffix)
if args.startingIPAddr == "" {
return fmt.Sprintf("%s%d%s", args.hostnamePrefix, i, args.hostnameSuffix), nil
}
ip := net.ParseIP(startingIPAddress)
ip := net.ParseIP(args.startingIPAddr)
ip = ip.To4()
if ip == nil {
fmt.Printf("%v: non ipv4 address\n", startingIPAddress)
os.Exit(1)
return "", fmt.Errorf("%v is non-ipv4 address", args.startingIPAddr)
}

for j := 0; j < i; j++ {
ip[3]++
}
return ip.String()
return ip.String(), nil

}

// get an array of persistent peers
func persistentPeersArray(config *cfg.Config) ([]string, error) {
peers := make([]string, nValidators+nNonValidators)
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
func persistentPeersArray(config *cfg.Config, args testnetPeerArgs) ([]string, error) {
peers := make([]string, args.numValidators+args.numNonValidators)
for i := 0; i < len(peers); i++ {
nodeDir := filepath.Join(args.outputDir, fmt.Sprintf("%s%d", args.nodeDirPrefix, i))
config.SetRoot(nodeDir)
nodeKey, err := config.LoadNodeKeyID()
if err != nil {
return []string{}, err
return nil, err
}
peers[i] = nodeKey.AddressString(fmt.Sprintf("%s:%d", hostnameOrIP(i), p2pPort))
addr, err := args.hostnameOrIP(i)
if err != nil {
return nil, err
}

peers[i] = nodeKey.AddressString(fmt.Sprintf("%s:%d", addr, args.peerToPeerPort))
}
return peers, nil
}

func moniker(i int) string {
if randomMonikers {
func (args *testnetPeerArgs) moniker(i int) string {
if args.randomMonikers {
return randomMoniker()
}
if len(hostnames) > 0 && i < len(hostnames) {
return hostnames[i]
if len(args.hostnames) > 0 && i < len(args.hostnames) {
return args.hostnames[i]
}
if startingIPAddress == "" {
return fmt.Sprintf("%s%d%s", hostnamePrefix, i, hostnameSuffix)
if args.startingIPAddr == "" {
return fmt.Sprintf("%s%d%s", args.hostnamePrefix, i, args.hostnameSuffix)
}
return randomMoniker()
}
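The refactored `hostnameOrIP` returns an error instead of calling `os.Exit`, and the address arithmetic is unchanged: node `i` gets the starting address plus `i` in the last octet. A standalone sketch of that arithmetic (`nthAddr` is a hypothetical helper name; the logic is copied from the method above):

```go
package main

import (
	"fmt"
	"net"
)

// nthAddr mirrors the loop in hostnameOrIP: increment the last octet
// of an IPv4 starting address once per node index.
func nthAddr(start string, i int) (string, error) {
	ip := net.ParseIP(start).To4()
	if ip == nil {
		return "", fmt.Errorf("%v is non-ipv4 address", start)
	}
	for j := 0; j < i; j++ {
		ip[3]++ // note: wraps past .255 without carrying into the third octet
	}
	return ip.String(), nil
}

func main() {
	for i := 0; i < 4; i++ {
		addr, _ := nthAddr("192.168.10.2", i)
		fmt.Println(addr) // 192.168.10.2, 192.168.10.3, 192.168.10.4, 192.168.10.5
	}
}
```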
@@ -1,37 +1,50 @@
package main

import (
"os"
"path/filepath"
"context"

cmd "github.com/tendermint/tendermint/cmd/tendermint/commands"
"github.com/tendermint/tendermint/cmd/tendermint/commands"
"github.com/tendermint/tendermint/cmd/tendermint/commands/debug"
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/cli"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/node"
)

func main() {
rootCmd := cmd.RootCmd
rootCmd.AddCommand(
cmd.GenValidatorCmd,
cmd.ReIndexEventCmd,
cmd.InitFilesCmd,
cmd.LightCmd,
cmd.ReplayCmd,
cmd.ReplayConsoleCmd,
cmd.ResetAllCmd,
cmd.ResetPrivValidatorCmd,
cmd.ShowValidatorCmd,
cmd.TestnetFilesCmd,
cmd.ShowNodeIDCmd,
cmd.GenNodeKeyCmd,
cmd.VersionCmd,
cmd.InspectCmd,
cmd.RollbackStateCmd,
cmd.MakeKeyMigrateCommand(),
ctx, cancel := context.WithCancel(context.Background())
defer cancel()

conf, err := commands.ParseConfig(config.DefaultConfig())
if err != nil {
panic(err)
}

logger, err := log.NewDefaultLogger(conf.LogFormat, conf.LogLevel)
if err != nil {
panic(err)
}

rcmd := commands.RootCommand(conf, logger)
rcmd.AddCommand(
commands.MakeGenValidatorCommand(),
commands.MakeReindexEventCommand(conf, logger),
commands.MakeInitFilesCommand(conf, logger),
commands.MakeLightCommand(conf, logger),
commands.MakeReplayCommand(conf, logger),
commands.MakeReplayConsoleCommand(conf, logger),
commands.MakeResetAllCommand(conf, logger),
commands.MakeResetPrivateValidatorCommand(conf, logger),
commands.MakeShowValidatorCommand(conf, logger),
commands.MakeTestnetFilesCommand(conf, logger),
commands.MakeShowNodeIDCommand(conf),
commands.GenNodeKeyCmd,
commands.VersionCmd,
commands.MakeInspectCommand(conf, logger),
commands.MakeRollbackStateCommand(conf),
commands.MakeKeyMigrateCommand(conf, logger),
debug.DebugCmd,
cli.NewCompletionCmd(rootCmd, true),
commands.NewCompletionCmd(rcmd, true),
)

// NOTE:
@@ -45,10 +58,9 @@ func main() {
nodeFunc := node.NewDefault

// Create & start node
rootCmd.AddCommand(cmd.NewRunNodeCmd(nodeFunc))
rcmd.AddCommand(commands.NewRunNodeCmd(nodeFunc, conf, logger))

cmd := cli.PrepareBaseCmd(rootCmd, "TM", os.ExpandEnv(filepath.Join("$HOME", config.DefaultTendermintDir)))
if err := cmd.Execute(); err != nil {
if err := cli.RunWithTrace(ctx, rcmd); err != nil {
panic(err)
}
}
@@ -504,13 +504,13 @@ namespace = "{{ .Instrumentation.Namespace }}"

/****** these are for test settings ***********/

func ResetTestRoot(testName string) (*Config, error) {
return ResetTestRootWithChainID(testName, "")
func ResetTestRoot(dir, testName string) (*Config, error) {
return ResetTestRootWithChainID(dir, testName, "")
}

func ResetTestRootWithChainID(testName string, chainID string) (*Config, error) {
func ResetTestRootWithChainID(dir, testName string, chainID string) (*Config, error) {
// create a unique, concurrency-safe test directory under os.TempDir()
rootDir, err := os.MkdirTemp("", fmt.Sprintf("%s-%s_", chainID, testName))
rootDir, err := os.MkdirTemp(dir, fmt.Sprintf("%s-%s_", chainID, testName))
if err != nil {
return nil, err
}

@@ -20,9 +20,7 @@ func ensureFiles(t *testing.T, rootDir string, files ...string) {

func TestEnsureRoot(t *testing.T) {
// setup temp dir for test
tmpDir, err := os.MkdirTemp("", "config-test")
require.NoError(t, err)
defer os.RemoveAll(tmpDir)
tmpDir := t.TempDir()

// create root dir
EnsureRoot(tmpDir)
@@ -42,7 +40,7 @@ func TestEnsureTestRoot(t *testing.T) {
testName := "ensureTestRoot"

// create root dir
cfg, err := ResetTestRoot(testName)
cfg, err := ResetTestRoot(t.TempDir(), testName)
require.NoError(t, err)
defer os.RemoveAll(cfg.RootDir)
rootDir := cfg.RootDir
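The two-argument `ResetTestRoot` lets each test scope its scratch directory; combined with `t.TempDir()`, the test framework handles cleanup automatically. A hedged sketch of a caller inside the config package's tests (the test name and `ensureFiles` arguments here are illustrative only):

```go
func TestSomeFeature(t *testing.T) {
	// t.TempDir() is removed automatically when the test ends, so the
	// explicit "defer os.RemoveAll" dance becomes unnecessary.
	cfg, err := ResetTestRoot(t.TempDir(), "some_feature")
	require.NoError(t, err)

	ensureFiles(t, cfg.RootDir, "config", "data")
}
```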
@@ -6,6 +6,7 @@ import (
"testing"

"github.com/stretchr/testify/require"

"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/internal/benchmarking"
)
@@ -9,11 +9,12 @@ import (
"math/big"

secp256k1 "github.com/btcsuite/btcd/btcec"

"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/internal/jsontypes"

// necessary for Bitcoin address format
"golang.org/x/crypto/ripemd160" // nolint
"golang.org/x/crypto/ripemd160" //nolint:staticcheck
)

//-------------------------------------
@@ -178,3 +179,67 @@ func (pubKey PubKey) Equals(other crypto.PubKey) bool {
func (pubKey PubKey) Type() string {
return KeyType
}

// used to reject malleable signatures
// see:
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/crypto.go#L39
var secp256k1halfN = new(big.Int).Rsh(secp256k1.S256().N, 1)

// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
// The returned signature will be of the form R || S (in lower-S form).
func (privKey PrivKey) Sign(msg []byte) ([]byte, error) {
priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey)

sig, err := priv.Sign(crypto.Sha256(msg))
if err != nil {
return nil, err
}

sigBytes := serializeSig(sig)
return sigBytes, nil
}

// VerifySignature verifies a signature of the form R || S.
// It rejects signatures which are not in lower-S form.
func (pubKey PubKey) VerifySignature(msg []byte, sigStr []byte) bool {
if len(sigStr) != 64 {
return false
}

pub, err := secp256k1.ParsePubKey(pubKey, secp256k1.S256())
if err != nil {
return false
}

// parse the signature:
signature := signatureFromBytes(sigStr)
// Reject malleable signatures. libsecp256k1 does this check but btcec doesn't.
// see: https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
if signature.S.Cmp(secp256k1halfN) > 0 {
return false
}

return signature.Verify(crypto.Sha256(msg), pub)
}

// Read Signature struct from R || S. Caller needs to ensure
// that len(sigStr) == 64.
func signatureFromBytes(sigStr []byte) *secp256k1.Signature {
return &secp256k1.Signature{
R: new(big.Int).SetBytes(sigStr[:32]),
S: new(big.Int).SetBytes(sigStr[32:64]),
}
}

// Serialize signature to R || S.
// R, S are padded to 32 bytes respectively.
func serializeSig(sig *secp256k1.Signature) []byte {
rBytes := sig.R.Bytes()
sBytes := sig.S.Bytes()
sigBytes := make([]byte, 64)
// 0 pad the byte arrays from the left if they aren't big enough.
copy(sigBytes[32-len(rBytes):32], rBytes)
copy(sigBytes[64-len(sBytes):64], sBytes)
return sigBytes
}
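The lower-S rule above makes signatures canonical: for any valid pair (R, S), (R, N−S) would also verify, so rejecting S > N/2 at verification time removes the malleable twin. A round-trip sketch against this package (assuming its `GenPrivKey` key-generation helper):

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/crypto/secp256k1"
)

func main() {
	priv := secp256k1.GenPrivKey() // assumed key-generation helper
	msg := []byte("tendermint")

	// Sign hashes msg with SHA-256 and returns 64 bytes: R || S, with S in lower-S form.
	sig, err := priv.Sign(msg)
	if err != nil {
		panic(err)
	}

	fmt.Println(len(sig))                                // 64
	fmt.Println(priv.PubKey().VerifySignature(msg, sig)) // true
}
```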
@@ -1,76 +0,0 @@
//go:build !libsecp256k1
// +build !libsecp256k1

package secp256k1

import (
"math/big"

secp256k1 "github.com/btcsuite/btcd/btcec"

"github.com/tendermint/tendermint/crypto"
)

// used to reject malleable signatures
// see:
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/crypto.go#L39
var secp256k1halfN = new(big.Int).Rsh(secp256k1.S256().N, 1)

// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
// The returned signature will be of the form R || S (in lower-S form).
func (privKey PrivKey) Sign(msg []byte) ([]byte, error) {
priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey)

sig, err := priv.Sign(crypto.Sha256(msg))
if err != nil {
return nil, err
}

sigBytes := serializeSig(sig)
return sigBytes, nil
}

// VerifySignature verifies a signature of the form R || S.
// It rejects signatures which are not in lower-S form.
func (pubKey PubKey) VerifySignature(msg []byte, sigStr []byte) bool {
if len(sigStr) != 64 {
return false
}

pub, err := secp256k1.ParsePubKey(pubKey, secp256k1.S256())
if err != nil {
return false
}

// parse the signature:
signature := signatureFromBytes(sigStr)
// Reject malleable signatures. libsecp256k1 does this check but btcec doesn't.
// see: https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
if signature.S.Cmp(secp256k1halfN) > 0 {
return false
}

return signature.Verify(crypto.Sha256(msg), pub)
}

// Read Signature struct from R || S. Caller needs to ensure
// that len(sigStr) == 64.
func signatureFromBytes(sigStr []byte) *secp256k1.Signature {
return &secp256k1.Signature{
R: new(big.Int).SetBytes(sigStr[:32]),
S: new(big.Int).SetBytes(sigStr[32:64]),
}
}

// Serialize signature to R || S.
// R, S are padded to 32 bytes respectively.
func serializeSig(sig *secp256k1.Signature) []byte {
rBytes := sig.R.Bytes()
sBytes := sig.S.Bytes()
sigBytes := make([]byte, 64)
// 0 pad the byte arrays from the left if they aren't big enough.
copy(sigBytes[32-len(rBytes):32], rBytes)
copy(sigBytes[64-len(sBytes):64], sBytes)
return sigBytes
}
@@ -6,6 +6,7 @@ import (
"testing"

"github.com/stretchr/testify/require"

"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/internal/benchmarking"
)
@@ -22,10 +22,6 @@ module.exports = {
index: "tendermint"
},
versions: [
{
"label": "v0.32",
"key": "v0.32"
},
{
"label": "v0.33",
"key": "v0.33"
@@ -21,10 +21,10 @@ Tendermint?](introduction/what-is-tendermint.md).

To get started quickly with an example application, see the [quick start guide](introduction/quick-start.md).

To learn about application development on Tendermint, see the [Application Blockchain Interface](https://github.com/tendermint/spec/tree/master/spec/abci).
To learn about application development on Tendermint, see the [Application Blockchain Interface](https://github.com/tendermint/tendermint/tree/master/spec/abci).

For more details on using Tendermint, see the respective documentation for
[Tendermint Core](tendermint-core/), [benchmarking and monitoring](tools/), and [network deployments](networks/).
[Tendermint Core](tendermint-core/), [benchmarking and monitoring](tools/), and [network deployments](nodes/).

To find out about the Tendermint ecosystem you can go [here](https://github.com/tendermint/awesome#ecosystem). If you are a project that is using Tendermint you are welcome to make a PR to add your project to the list.
@@ -57,4 +57,4 @@ See the following for more extensive documentation:
- [Interchain Standard for the Light-Client REST API](https://github.com/cosmos/cosmos-sdk/pull/1028)
- [Tendermint RPC Docs](https://docs.tendermint.com/master/rpc/)
- [Tendermint in Production](../tendermint-core/running-in-production.md)
- [ABCI spec](https://github.com/tendermint/spec/tree/95cf253b6df623066ff7cd4074a94e7a3f147c7a/spec/abci)
- [ABCI spec](https://github.com/tendermint/tendermint/tree/95cf253b6df623066ff7cd4074a94e7a3f147c7a/spec/abci)
@@ -68,7 +68,7 @@ tendermint start
```

If you have used Tendermint, you may want to reset the data for a new
blockchain by running `tendermint unsafe_reset_all`. Then you can run
blockchain by running `tendermint unsafe-reset-all`. Then you can run
`tendermint start` to start Tendermint, and connect to the app. For more
details, see [the guide on using Tendermint](../tendermint-core/using-tendermint.md).
@@ -96,25 +96,21 @@ like:

```json
{
  "jsonrpc": "2.0",
  "id": "",
  "result": {
    "check_tx": {},
    "deliver_tx": {
      "tags": [
        {
          "key": "YXBwLmNyZWF0b3I=",
          "value": "amFl"
        },
        {
          "key": "YXBwLmtleQ==",
          "value": "YWJjZA=="
        }
      ]
    },
    "hash": "9DF66553F98DE3C26E3C3317A3E4CED54F714E39",
    "height": 14
  }
  "check_tx": { ... },
  "deliver_tx": {
    "tags": [
      {
        "key": "YXBwLmNyZWF0b3I=",
        "value": "amFl"
      },
      {
        "key": "YXBwLmtleQ==",
        "value": "YWJjZA=="
      }
    ]
  },
  "hash": "9DF66553F98DE3C26E3C3317A3E4CED54F714E39",
  "height": 14
}
```
@@ -129,15 +125,11 @@ The result should look like:

```json
{
  "jsonrpc": "2.0",
  "id": "",
  "result": {
    "response": {
      "log": "exists",
      "index": "-1",
      "key": "YWJjZA==",
      "value": "YWJjZA=="
    }
  "response": {
    "log": "exists",
    "index": "-1",
    "key": "YWJjZA==",
    "value": "YWJjZA=="
  }
}
```
@@ -190,7 +182,7 @@ node example/counter.js
In another window, reset and start `tendermint`:

```sh
tendermint unsafe_reset_all
tendermint unsafe-reset-all
tendermint start
```
@@ -15,7 +15,7 @@ the block itself is never stored.
Each event contains a type and a list of attributes, which are key-value pairs
denoting something about what happened during the method's execution. For more
details on `Events`, see the
[ABCI](https://github.com/tendermint/spec/blob/master/spec/abci/abci.md#events)
[ABCI](https://github.com/tendermint/tendermint/blob/master/spec/abci/abci.md#events)
documentation.

An `Event` has a composite key associated with it. A `compositeKey` is
@@ -80,6 +80,7 @@ Note the context/background should be written in the present tense.
- [ADR-068: Reverse-Sync](./adr-068-reverse-sync.md)
- [ADR-067: Mempool Refactor](./adr-067-mempool-refactor.md)
- [ADR-075: RPC Event Subscription Interface](./adr-075-rpc-subscription.md)
- [ADR-076: Combine Spec and Tendermint Repositories](./adr-076-combine-spec-repo.md)

### Rejected

@@ -103,3 +104,4 @@ Note the context/background should be written in the present tense.
- [ADR-069: Node Initialization](./adr-069-flexible-node-initialization.md)
- [ADR-071: Proposer-Based Timestamps](adr-071-proposer-based-timestamps.md)
- [ADR-074: Migrate Timeout Parameters to Consensus Parameters](./adr-074-timeout-params.md)
@@ -213,4 +213,4 @@ type Logger interface {
}
```

See [The Hunt for a Logger Interface](https://go-talks.appspot.com/github.com/ChrisHines/talks/structured-logging/structured-logging.slide). The advantage is greater composability (check out how go-kit defines colored logging or log-leveled logging on top of this interface https://github.com/go-kit/kit/tree/master/log).
See [The Hunt for a Logger Interface](https://web.archive.org/web/20210902161539/https://go-talks.appspot.com/github.com/ChrisHines/talks/structured-logging/structured-logging.slide#1). The advantage is greater composability (check out how go-kit defines colored logging or log-leveled logging on top of this interface https://github.com/go-kit/kit/tree/master/log).
@@ -84,7 +84,7 @@ The linear verification algorithm requires downloading all headers
between the `TrustHeight` and the `LatestHeight`. The lite client downloads the
full header for the provided `TrustHeight` and then proceeds to download `N+1`
headers and applies the [Tendermint validation
rules](https://docs.tendermint.com/master/spec/blockchain/blockchain.html#validation)
rules](https://docs.tendermint.com/master/spec/light-client/verification/)
to each block.

### Bisecting Verification
@@ -119,7 +119,7 @@ network usage.
---

Check out the formal specification
[here](https://github.com/tendermint/spec/tree/master/spec/light-client).
[here](https://github.com/tendermint/tendermint/tree/master/spec/light-client).

## Status
@@ -108,7 +108,7 @@ This is done with:
func (c *Client) examineConflictingHeaderAgainstTrace(
trace []*types.LightBlock,
targetBlock *types.LightBlock,
source provider.Provider,
now time.Time,
) ([]*types.LightBlock, *types.LightBlock, error)
```
@@ -123,17 +123,17 @@ as a sanity check. If this fails we have to drop the witness.
intermediary headers of the primary (In the above example this is A, B, C, D, F, H). If bisection fails
or the witness stops responding then we can call the witness faulty and drop it.

3. We eventually reach a verified header by the witness which is not the same as the intermediary header
(In the above example this is E). This is the point of bifurcation (This could also be the last header).

4. There is a unique case where the trace that is being examined against has blocks that have a greater
height than the targetBlock. This can occur as part of a forward lunatic attack where the primary has
provided a light block that has a height greater than the head of the chain (see Appendix B). In this
case, the light client will verify the source's blocks up to the targetBlock and return the block in the
trace that is directly after the targetBlock in height as the `ConflictingBlock`

This function then returns the trace of blocks from the witness node between the common header and the
divergent header of the primary, as it is likely, as seen in the example to the right, that multiple
headers were required in order to verify the divergent one. This trace will
be used later (as is also described later in this document).
@@ -179,7 +179,7 @@ This then ends the process and the verify function that was called at the start
the user.

For a detailed overview of how each of these three attacks can be conducted, please refer to the
-[fork accountability spec](https://github.com/tendermint/spec/blob/master/spec/consensus/light-client/accountability.md).
+[fork accountability spec](https://github.com/tendermint/tendermint/blob/master/spec/consensus/light-client/accountability.md).

## Full Node Verification

@@ -212,7 +212,7 @@ clear from the current information which nodes behaved maliciously.

## References

-* [Fork accountability spec](https://github.com/tendermint/spec/blob/master/spec/consensus/light-client/accountability.md)
+* [Fork accountability spec](https://github.com/tendermint/tendermint/blob/master/spec/consensus/light-client/accountability.md)
* [ADR 056: Light client amnesia attacks](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-056-light-client-amnesia-attacks.md)
* [ADR-059: Evidence Composition and Lifecycle](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-059-evidence-composition-and-lifecycle.md)
* [Informal's Light Client Detector](https://github.com/informalsystems/tendermint-rs/blob/master/docs/spec/lightclient/detection/detection.md)

@@ -238,7 +238,7 @@ a phantom validator. Given this, it was removed.

A unique flavor of lunatic attack is a forward lunatic attack. This is where a malicious
node provides a header with a height greater than the height of the blockchain. Thus there
are no witnesses capable of rebutting the malicious header. Such an attack will also
require an accomplice, i.e. at least one other witness to also return the same forged header.
Although such attacks can be any arbitrary height ahead, they must still remain within the
clock drift of the light client's real time. Therefore, to detect such an attack, a light
@@ -251,4 +251,4 @@ client will wait for a time
for a witness to provide the latest block it has. Given the time constraints, if the witness
is operating at the head of the blockchain, it will have a header with an earlier height but
a later timestamp. This can be used to prove that the primary has submitted a lunatic header
which violates monotonically increasing time.
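
A minimal sketch of that final comparison, with illustrative types (the detector's real code differs):

```go
// WitnessBlock is an illustrative stand-in for a light block header.
type WitnessBlock struct {
	Height int64
	Time   time.Time
}

// provesForwardLunatic reports whether a witness block at the head of
// the chain rebuts the primary's header: a lower height paired with a
// later timestamp violates monotonically increasing block time.
func provesForwardLunatic(primary, witness WitnessBlock) bool {
	return witness.Height < primary.Height && witness.Time.After(primary.Time)
}
```
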
@@ -10,7 +10,7 @@

## Context

-Whilst most created evidence of malicious behavior is self evident such that any individual can verify them independently there are types of evidence, known collectively as global evidence, that require further collaboration from the network in order to accumulate enough information to create evidence that is individually verifiable and can therefore be processed through consensus. [Fork Accountability](https://github.com/tendermint/spec/blob/master/spec/consensus/light-client/accountability.md) has been coined to describe the entire process of detection, proving and punishing of malicious behavior. This ADR addresses specifically what a light client amnesia attack is and how it can be proven and the current decision around handling light client amnesia attacks. For information on evidence handling by the light client, it is recommended to read [ADR 47](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-047-handling-evidence-from-light-client.md).
+Whilst most created evidence of malicious behavior is self evident such that any individual can verify them independently there are types of evidence, known collectively as global evidence, that require further collaboration from the network in order to accumulate enough information to create evidence that is individually verifiable and can therefore be processed through consensus. [Fork Accountability](https://github.com/tendermint/tendermint/blob/master/spec/consensus/light-client/accountability.md) has been coined to describe the entire process of detection, proving and punishing of malicious behavior. This ADR addresses specifically what a light client amnesia attack is and how it can be proven and the current decision around handling light client amnesia attacks. For information on evidence handling by the light client, it is recommended to read [ADR 47](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-047-handling-evidence-from-light-client.md).

### Amnesia Attack

@@ -65,7 +65,7 @@ Light clients where all witnesses are faulty can be subject to an amnesia attack

## References

- [Fork accountability algorithm](https://docs.google.com/document/d/11ZhMsCj3y7zIZz4udO9l25xqb0kl7gmWqNpGVRzOeyY/edit)
-- [Fork accountability spec](https://github.com/tendermint/spec/blob/master/spec/consensus/light-client/accountability.md)
+- [Fork accountability spec](https://github.com/tendermint/tendermint/blob/master/spec/consensus/light-client/accountability.md)

## Appendix A: Detailed Walkthrough of Performing a Light Client Amnesia Attack

@@ -128,7 +128,7 @@ This trial period will be discussed later.

Returning to the event of an amnesia attack, if we were to examine the behavior of the honest nodes, C1 and C2, in the schematic, C2 will not PRECOMMIT an earlier round, but it is likely, if a node in C1 were to receive +2/3 PREVOTE's or PRECOMMIT's for a higher round, that it would remove the lock and PREVOTE and PRECOMMIT for the later round. Therefore, unfortunately it is not a case of simply punishing all nodes that have double voted in the `PotentialAmnesiaEvidence`.

-Instead we use the Proof of Lock Change (PoLC) referred to in the [consensus spec](https://github.com/tendermint/spec/blob/master/spec/consensus/consensus.md#terms). When an honest node votes again for a different block in a later round
+Instead we use the Proof of Lock Change (PoLC) referred to in the [consensus spec](https://github.com/tendermint/tendermint/blob/master/spec/consensus/consensus.md#terms). When an honest node votes again for a different block in a later round
(which will only occur in very rare cases), it will generate the PoLC and store it in the evidence reactor for a time equal to the `MaxEvidenceAge`

```golang
@@ -106,7 +106,7 @@ invasive in the required set of protocol and implementation changes, which
simply extends the existing `CheckTx` ABCI method. The second candidate essentially
involves the introduction of new ABCI method(s) and would require a higher degree
of complexity in protocol and implementation changes, some of which may either
-overlap or conflict with the upcoming introduction of [ABCI++](https://github.com/tendermint/spec/blob/master/rfc/004-abci%2B%2B.md).
+overlap or conflict with the upcoming introduction of [ABCI++](https://github.com/tendermint/tendermint/blob/master/docs/rfc/rfc-013-abci%2B%2B.md).

For more information on the various approaches and proposals, please see the
[mempool discussion](https://github.com/tendermint/tendermint/discussions/6295).
@@ -171,7 +171,7 @@ message ResponseCheckTx {
```

It is entirely up to the application to determine how these fields are populated
and with what values, e.g. the `sender` could be the signer and fee payer
of the transaction, the `priority` could be the cumulative sum of the fee(s).
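
For instance, a sketch of how an application might fill these fields (the types here are illustrative stand-ins for the ABCI messages, not the real definitions):

```go
// CheckTxResponse is an illustrative stand-in for ResponseCheckTx.
type CheckTxResponse struct {
	Sender   string
	Priority int64
}

// DecodedTx is an assumed application-side view of a transaction.
type DecodedTx struct {
	FeePayer string
	Fees     []int64
}

// checkTx populates sender from the fee payer and priority as the
// cumulative sum of the fees, as suggested above.
func checkTx(tx DecodedTx) CheckTxResponse {
	var priority int64
	for _, fee := range tx.Fees {
		priority += fee
	}
	return CheckTxResponse{Sender: tx.FeePayer, Priority: priority}
}
```
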
Only `sender` is required, while `priority` can be omitted, which would result in
@@ -289,7 +289,7 @@ non-contentious and backwards compatible manner.
trying again at a later point in time or by ensuring the "child" priority is
lower than the "parent" priority. In other words, if parents always have
priorities that are higher than their children, then the new mempool design will
maintain causal ordering.

### Neutral

@@ -299,5 +299,5 @@ non-contentious and backwards compatible manner.

## References

-- [ABCI++](https://github.com/tendermint/spec/blob/master/rfc/004-abci%2B%2B.md)
+- [ABCI++](https://github.com/tendermint/tendermint/blob/master/docs/rfc/rfc-013-abci%2B%2B.md)
- [Mempool Discussion](https://github.com/tendermint/tendermint/discussions/6295)

@@ -10,15 +10,15 @@ Accepted

## Context

The advent of state sync and block pruning gave rise to the opportunity for full nodes to participate in consensus without needing complete block history. This also introduced a problem with respect to evidence handling. Nodes that didn't have all the blocks within the evidence age were incapable of validating evidence, thus halting if that evidence was committed on chain.

-[RFC005](https://github.com/tendermint/spec/blob/master/rfc/005-reverse-sync.md) was published in response to this problem and modified the spec to add a minimum block history invariant. This predominantly sought to extend state sync so that it was capable of fetching and storing the `Header`, `Commit` and `ValidatorSet` (essentially a `LightBlock`) of the last `n` heights, where `n` was calculated based from the evidence age.
+[ADR 068](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-068-reverse-sync.md) was published in response to this problem and modified the spec to add a minimum block history invariant. This predominantly sought to extend state sync so that it was capable of fetching and storing the `Header`, `Commit` and `ValidatorSet` (essentially a `LightBlock`) of the last `n` heights, where `n` was calculated based on the evidence age.

This ADR sets out to describe the design of this state sync extension as well as modifications to the light client provider and the merging of tm store.

## Decision

The state sync reactor will be extended by introducing 2 new P2P messages (and a new channel).

```protobuf
message LightBlockRequest {
@@ -26,7 +26,7 @@ message LightBlockRequest {
}

message LightBlockResponse {
  tendermint.types.LightBlock light_block = 1;
}
```

@@ -93,5 +93,5 @@ This ADR tries to remain within the scope of extending state sync, however the c

## References

-- [Reverse Sync RFC](https://github.com/tendermint/spec/blob/master/rfc/005-reverse-sync.md)
+- [Reverse Sync RFC](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-068-reverse-sync.md)
- [Original Issue](https://github.com/tendermint/tendermint/issues/5617)

@@ -252,11 +252,6 @@ N/A

## References

- [this
  branch](https://github.com/tendermint/tendermint/tree/tychoish/scratch-node-minimize)
  contains experimental work in the implementation of the node package
  to unwind some of the hard dependencies between components.

- [the component
  graph](https://peter.bourgon.org/go-for-industrial-programming/#the-component-graph)
  as a framing for internal service construction.
@@ -2,12 +2,13 @@

## Changelog

- July 15 2021: Created by @williambanfield
- Aug 4 2021: Draft completed by @williambanfield
- Aug 5 2021: Draft updated to include data structure changes by @williambanfield
- Aug 20 2021: Language edits completed by @williambanfield
- Oct 25 2021: Update the ADR to match updated spec from @cason by @williambanfield
- Nov 10 2021: Additional language updates by @williambanfield per feedback from @cason
+- Feb 2 2022: Synchronize logic for timely with latest version of the spec by @williambanfield

## Status

@@ -15,14 +16,14 @@

## Context

-Tendermint currently provides a monotonically increasing source of time known as [BFTTime](https://github.com/tendermint/spec/blob/master/spec/consensus/bft-time.md).
+Tendermint currently provides a monotonically increasing source of time known as [BFTTime](https://github.com/tendermint/tendermint/blob/master/spec/consensus/bft-time.md).
This mechanism for producing a source of time is reasonably simple.
Each correct validator adds a timestamp to each `Precommit` message it sends.
The timestamp it sends is either the validator's current known Unix time or one millisecond greater than the previous block time, depending on which value is greater.
When a block is produced, the proposer chooses the block timestamp as the weighted median of the times in all of the `Precommit` messages the proposer received.
The weighting is proportional to the amount of voting power, or stake, a validator has on the network.
This mechanism for producing timestamps is both deterministic and byzantine fault tolerant.
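
A sketch of the per-validator rule described above (illustrative, mirroring the behavior rather than quoting the implementation):

```go
// voteTime returns the validator's own clock reading, unless that
// reading does not exceed the previous block time, in which case one
// millisecond is added to the previous block time to keep block
// timestamps monotonically increasing.
func voteTime(now, previousBlockTime time.Time) time.Time {
	minVoteTime := previousBlockTime.Add(time.Millisecond)
	if now.After(minVoteTime) {
		return now
	}
	return minVoteTime
}
```
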
This current mechanism for producing timestamps has a few drawbacks.
Validators do not have to agree at all on how close the selected block timestamp is to their own currently known Unix time.
Additionally, any amount of voting power `>1/3` may directly control the block timestamp.
@@ -30,17 +31,17 @@ As a result, it is quite possible that the timestamp is not particularly meaning

These drawbacks present issues in the Tendermint protocol.
Timestamps are used by light clients to verify blocks.
Light clients rely on correspondence between their own currently known Unix time and the block timestamp to verify blocks they see;
however, their currently known Unix time may be greatly divergent from the block timestamp as a result of the limitations of `BFTTime`.

The proposer-based timestamps specification suggests an alternative approach for producing block timestamps that remedies these issues.
Proposer-based timestamps alter the current mechanism for producing block timestamps in two main ways:

1. The block proposer is amended to offer up its currently known Unix time as the timestamp for the next block instead of the `BFTTime`.
1. Correct validators only approve the proposed block timestamp if it is close enough to their own currently known Unix time.

The result of these changes is a more meaningful timestamp that cannot be controlled by `<= 2/3` of the validator voting power.
-This document outlines the necessary code changes in Tendermint to implement the corresponding [proposer-based timestamps specification](https://github.com/tendermint/spec/tree/master/spec/consensus/proposer-based-timestamp).
+This document outlines the necessary code changes in Tendermint to implement the corresponding [proposer-based timestamps specification](https://github.com/tendermint/tendermint/tree/master/spec/consensus/proposer-based-timestamp).

## Alternative Approaches

@@ -53,17 +54,17 @@ An alternate approach is to remove timestamps altogether from the block protocol

`BFTTime` is deterministic but may be arbitrarily inaccurate.
However, having a reliable source of time is quite useful for applications and protocols built on top of a blockchain.

We therefore decided not to remove the timestamp.
Applications often wish for some transactions to occur on a certain day, on a regular period, or after some time following a different event.
All of these require some meaningful representation of agreed upon time.
The following protocols and application features require a reliable source of time:
-* Tendermint Light Clients [rely on correspondence between their known time](https://github.com/tendermint/spec/blob/master/spec/light-client/verification/README.md#definitions-1) and the block time for block verification.
-* Tendermint Evidence validity is determined [either in terms of heights or in terms of time](https://github.com/tendermint/spec/blob/8029cf7a0fcc89a5004e173ec065aa48ad5ba3c8/spec/consensus/evidence.md#verification).
+* Tendermint Light Clients [rely on correspondence between their known time](https://github.com/tendermint/tendermint/blob/master/spec/light-client/verification/README.md#definitions-1) and the block time for block verification.
+* Tendermint Evidence validity is determined [either in terms of heights or in terms of time](https://github.com/tendermint/tendermint/blob/8029cf7a0fcc89a5004e173ec065aa48ad5ba3c8/spec/consensus/evidence.md#verification).
* Unbonding of staked assets in the Cosmos Hub [occurs after a period of 21 days](https://github.com/cosmos/governance/blob/ce75de4019b0129f6efcbb0e752cd2cc9e6136d3/params-change/Staking.md#unbondingtime).
-* IBC packets can use either a [timestamp or a height to timeout packet delivery](https://docs.cosmos.network/v0.43/ibc/overview.html#acknowledgements).
+* IBC packets can use either a [timestamp or a height to timeout packet delivery](https://docs.cosmos.network/v0.44/ibc/overview.html#acknowledgements)

Finally, inflation distribution in the Cosmos Hub uses an approximation of time to calculate an annual percentage rate.
This approximation of time is calculated using [block heights with an estimated number of blocks produced in a year](https://github.com/cosmos/governance/blob/master/params-change/Mint.md#blocksperyear).
Proposer-based timestamps will allow this inflation calculation to use a more meaningful and accurate source of time.

@@ -84,7 +85,7 @@ These changes will be to the following components:

### Changes to `CommitSig`

The [CommitSig](https://github.com/tendermint/tendermint/blob/a419f4df76fe4aed668a6c74696deabb9fe73211/types/block.go#L604) struct currently contains a timestamp.
This timestamp is the current Unix time known to the validator when it issued a `Precommit` for the block.
This timestamp is no longer used and will be removed in this change.

@@ -103,11 +104,11 @@ type CommitSig struct {

`Precommit` and `Prevote` messages use a common [Vote struct](https://github.com/tendermint/tendermint/blob/a419f4df76fe4aed668a6c74696deabb9fe73211/types/vote.go#L50).
This struct currently contains a timestamp.
-This timestamp is set using the [voteTime](https://github.com/tendermint/tendermint/blob/e8013281281985e3ada7819f42502b09623d24a0/internal/consensus/state.go#L2241) function and therefore vote times correspond to the current Unix time known to the validator.
+This timestamp is set using the [voteTime](https://github.com/tendermint/tendermint/blob/e8013281281985e3ada7819f42502b09623d24a0/internal/consensus/state.go#L2241) function and therefore vote times correspond to the current Unix time known to the validator, provided this time is greater than the timestamp of the previous block.
For precommits, this timestamp is used to construct the [CommitSig that is included in the block in the LastCommit](https://github.com/tendermint/tendermint/blob/e8013281281985e3ada7819f42502b09623d24a0/types/block.go#L754) field.
For prevotes, this field is currently unused.
Proposer-based timestamps will use the timestamp that the proposer sets into the block and will therefore no longer require that a timestamp be included in the vote messages.
-This timestamp is therefore no longer useful and will be dropped.
+This timestamp is therefore no longer useful as part of consensus and may optionally be dropped from the message.

`Vote` will be updated as follows:

@@ -115,7 +116,7 @@ This timestamp is therefore no longer useful and will be dropped.
type Vote struct {
	Type             tmproto.SignedMsgType `json:"type"`
	Height           int64                 `json:"height"`
	Round            int32                 `json:"round"`
	BlockID          BlockID               `json:"block_id"` // zero if vote is nil.
--	Timestamp        time.Time             `json:"timestamp"`
	ValidatorAddress Address               `json:"validator_address"`
@@ -126,25 +127,17 @@ type Vote struct {

### New consensus parameters

-The proposer-based timestamp specification includes multiple new parameters that must be the same among all validators.
-These parameters are `PRECISION`, `MSGDELAY`, and `ACCURACY`.
+The proposer-based timestamp specification includes a pair of new parameters that must be the same among all validators.
+These parameters are `PRECISION` and `MSGDELAY`.

The `PRECISION` and `MSGDELAY` parameters are used to determine if the proposed timestamp is acceptable.
A validator will only Prevote a proposal if the proposal timestamp is considered `timely`.
A proposal timestamp is considered `timely` if it is within `PRECISION` and `MSGDELAY` of the Unix time known to the validator.
-More specifically, a proposal timestamp is `timely` if `validatorLocalTime - PRECISION < proposalTime < validatorLocalTime + PRECISION + MSGDELAY`.
+More specifically, a proposal timestamp is `timely` if `proposalTimestamp - PRECISION ≤ validatorLocalTime ≤ proposalTimestamp + PRECISION + MSGDELAY`.
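
Expressed as a sketch in Go (a direct transcription of the inequality above, not the actual implementation):

```go
// isTimely reports whether
// proposalTime - PRECISION <= localTime <= proposalTime + PRECISION + MSGDELAY.
func isTimely(proposalTime, localTime time.Time, precision, msgDelay time.Duration) bool {
	lower := proposalTime.Add(-precision)
	upper := proposalTime.Add(precision + msgDelay)
	return !localTime.Before(lower) && !localTime.After(upper)
}
```
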
-Because the `PRECISION` and `MSGDELAY` parameters must be the same across all validators, they will be added to the [consensus parameters](https://github.com/tendermint/tendermint/blob/master/proto/tendermint/types/params.proto#L13) as [durations](https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#google.protobuf.Duration).
+Because the `PRECISION` and `MSGDELAY` parameters must be the same across all validators, they will be added to the [consensus parameters](https://github.com/tendermint/spec/blob/master/proto/tendermint/types/params.proto#L11) as [durations](https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#google.protobuf.Duration).

-The proposer-based timestamp specification also includes a [new ACCURACY parameter](https://github.com/tendermint/spec/blob/master/spec/consensus/proposer-based-timestamp/pbts-sysmodel_001_draft.md#pbts-clocksync-external0).
-Intuitively, `ACCURACY` represents the difference between the ‘real’ time and the currently known time of correct validators.
-The currently known Unix time of any validator is always somewhat different from real time.
-`ACCURACY` is the largest such difference between each validator's time and real time taken as an absolute value.
-This is not something a computer can determine on its own and must be specified as an estimate by the community running a Tendermint-based chain.
-It is used in the new algorithm to [calculate a timeout for the propose step](https://github.com/tendermint/spec/blob/master/spec/consensus/proposer-based-timestamp/pbts-algorithm_001_draft.md#pbts-alg-startround0).
-`ACCURACY` is assumed to be the same across all validators and therefore should be included as a consensus parameter.

-The consensus will be updated to include this `Timestamp` field as follows:
+The consensus parameters will be updated to include this `Synchrony` field as follows:

```diff
type ConsensusParams struct {
@@ -152,15 +145,14 @@ type ConsensusParams struct {
	Evidence  EvidenceParams  `json:"evidence"`
	Validator ValidatorParams `json:"validator"`
	Version   VersionParams   `json:"version"`
-++ Timestamp TimestampParams `json:"timestamp"`
+++ Synchrony SynchronyParams `json:"synchrony"`
}
```

```go
-type TimestampParams struct {
-	Accuracy  time.Duration `json:"accuracy"`
-	Precision time.Duration `json:"precision"`
-	MsgDelay  time.Duration `json:"msg_delay"`
+type SynchronyParams struct {
+	MessageDelay time.Duration `json:"message_delay"`
+	Precision    time.Duration `json:"precision"`
}
```
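
A hedged sketch of basic validation for these parameters (the assumption here, not stated in the ADR, is that both durations must be positive for the timely check to be meaningful):

```go
// validateSynchrony is an illustrative sanity check, not the real
// ValidateBasic logic.
func validateSynchrony(precision, messageDelay time.Duration) error {
	if precision <= 0 || messageDelay <= 0 {
		return errors.New("synchrony params: precision and message_delay must be positive")
	}
	return nil
}
```
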
@@ -176,14 +168,14 @@ The timestamp the proposer sets into the `Header` will change depending on if th

#### Proposal of a block that has not previously received a polka

-If a proposer is proposing a new block, then it will set the Unix time currently known to the proposer into the `Header.Timestamp` field.
+If a proposer is proposing a new block then it will set the Unix time currently known to the proposer into the `Header.Timestamp` field.
The proposer will also set this same timestamp into the `Timestamp` field of the `Proposal` message that it issues.

#### Re-proposal of a block that has previously received a polka

If a proposer is re-proposing a block that has previously received a polka on the network, then the proposer does not update the `Header.Timestamp` of that block.
Instead, the proposer simply re-proposes the exact same block.
This way, the proposed block has the exact same block ID as the previously proposed block and the validators that have already received that block do not need to attempt to receive it again.

The proposer will set the re-proposed block's `Header.Timestamp` as the `Proposal` message's `Timestamp`.

@@ -192,42 +184,23 @@ The proposer will set the re-proposed block's `Header.Timestamp` as the `Proposa

Block timestamps must be monotonically increasing.
In `BFTTime`, if a validator’s clock was behind, the [validator added 1 millisecond to the previous block’s time and used that in its vote messages](https://github.com/tendermint/tendermint/blob/e8013281281985e3ada7819f42502b09623d24a0/internal/consensus/state.go#L2246).
A goal of adding proposer-based timestamps is to enforce some degree of clock synchronization, so having a mechanism that completely ignores the validator's Unix time no longer works.

Validator clocks will not be perfectly in sync.
Therefore, the proposer’s current known Unix time may be less than the previous block's `Header.Time`.
If the proposer’s current known Unix time is less than the previous block's `Header.Time`, the proposer will sleep until its known Unix time exceeds it.

This change will require amending the [defaultDecideProposal](https://github.com/tendermint/tendermint/blob/822893615564cb20b002dd5cf3b42b8d364cb7d9/internal/consensus/state.go#L1180) method.
This method should now schedule a timeout that fires when the proposer’s time is greater than the previous block's `Header.Time`.
When the timeout fires, the proposer will finally issue the `Proposal` message.
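
A sketch of the scheduling described above (illustrative only; the real change amends `defaultDecideProposal` with a timeout rather than a sleep):

```go
// waitForMonotonicTime blocks until the local clock exceeds the
// previous block's Header.Time, so the new proposal's timestamp is
// strictly greater than its predecessor's.
func waitForMonotonicTime(previousBlockTime time.Time) {
	if wait := time.Until(previousBlockTime); wait > 0 {
		time.Sleep(wait + time.Millisecond) // small nudge past the boundary
	}
}
```
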
#### Changes to the propose step timeout

Currently, a validator waiting for a proposal will proceed past the propose step if the configured propose timeout is reached and no proposal is seen.
Proposer-based timestamps require changing this timeout logic.

The proposer will now wait until its current known Unix time exceeds the previous block's `Header.Time` to propose a block.
The validators must now take this and some other factors into account when deciding when to time out the propose step.
Specifically, the propose step timeout must also take into account potential inaccuracy in the validator’s clock and in the clock of the proposer.
Additionally, there may be a delay communicating the proposal message from the proposer to the other validators.

Therefore, validators waiting for a proposal must wait until after the previous block's `Header.Time` before timing out.
To account for possible inaccuracy in its own clock, inaccuracy in the proposer’s clock, and message delay, validators waiting for a proposal will wait until the previous block's `Header.Time + 2*ACCURACY + MSGDELAY`.
The spec defines this as `waitingTime`.

The [propose step’s timeout is set in enterPropose](https://github.com/tendermint/tendermint/blob/822893615564cb20b002dd5cf3b42b8d364cb7d9/internal/consensus/state.go#L1108) in `state.go`.
`enterPropose` will be changed to calculate waiting time using the new consensus parameters.
The timeout in `enterPropose` will then be set as the maximum of `waitingTime` and the [configured proposal step timeout](https://github.com/tendermint/tendermint/blob/dc7c212c41a360bfe6eb38a6dd8c709bbc39aae7/config/config.go#L1013).
When the timeout fires, the validator will proceed past the propose step.
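
A sketch of the resulting timeout computation (names are illustrative; `accuracy` and `msgDelay` stand for the `ACCURACY` and `MSGDELAY` parameters discussed above):

```go
// proposeTimeout returns how long a validator should wait for a
// proposal: the spec's waitingTime (previous Header.Time + 2*ACCURACY
// + MSGDELAY, measured from now) or the locally configured propose
// timeout, whichever is larger.
func proposeTimeout(now, previousBlockTime time.Time, accuracy, msgDelay, configured time.Duration) time.Duration {
	waitingTime := previousBlockTime.Add(2*accuracy + msgDelay).Sub(now)
	if waitingTime > configured {
		return waitingTime
	}
	return configured
}
```
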
### Changes to proposal validation rules

-The rules for validating a proposed block will be modification to implement proposer-based timestamps.
+The rules for validating a proposed block will be modified to implement proposer-based timestamps.
We will change the validation logic to ensure that a proposal is `timely`.

Per the proposer-based timestamps spec, `timely` only needs to be checked if a block has not received a +2/3 majority of `Prevotes` in a round.
If a block previously received a +2/3 majority of prevotes in a previous round, then +2/3 of the voting power considered the block's timestamp near enough to their own currently known Unix time in that round.

The validation logic will be updated to check `timely` for blocks that did not previously receive +2/3 prevotes in a round.
Receiving +2/3 prevotes in a round is frequently referred to as a 'polka' and we will use this term for simplicity.

#### Current timestamp validation logic

@@ -250,17 +223,17 @@ This takes place in the [VerifyCommit function](https://github.com/tendermint/te

`BFTTime` validation is no longer applicable and will be removed.
This means that validators will no longer check that the block timestamp is a weighted median of `LastCommit` timestamps.
Specifically, we will remove the call to [MedianTime in the validateBlock function](https://github.com/tendermint/tendermint/blob/4db71da68e82d5cb732b235eeb2fd69d62114b45/state/validation.go#L117).
The `MedianTime` function can be completely removed.

Since `CommitSig`s will no longer contain a timestamp, the validator authenticating a commit will no longer include the `CommitSig` timestamp in the hash of fields it builds to check against the cryptographic signature.

#### Timestamp validation when a block has not received a polka

The [POLRound](https://github.com/tendermint/tendermint/blob/68ca65f5d79905abd55ea999536b1a3685f9f19d/types/proposal.go#L29) in the `Proposal` message indicates which round the block received a polka.
A negative value in the `POLRound` field indicates that the block has not previously been proposed on the network.
Therefore the validation logic will check for timely when `POLRound < 0`.

-When a validator receives a `Proposal` message, the validator will check that the `Proposal.Timestamp` is at most `PRECISION` greater than the current Unix time known to the validator, and at minimum `PRECISION + MSGDELAY` less than the current Unix time known to the validator.
+When a validator receives a `Proposal` message, the validator will check that the `Proposal.Timestamp` is at most `PRECISION` greater than the current Unix time known to the validator, and at maximum `PRECISION + MSGDELAY` less than the current Unix time known to the validator.
If the timestamp is not within these bounds, the proposed block will not be considered `timely`.

Once a full block matching the `Proposal` message is received, the validator will also check that the timestamp in the `Header.Timestamp` of the block matches this `Proposal.Timestamp`.

@@ -275,13 +248,33 @@ When a block is re-proposed that has already received a +2/3 majority of `Prevot

A validator will not check that the `Proposal` is `timely` if the propose message has a non-negative `POLRound`.
If the `POLRound` is non-negative, each validator will simply ensure that it received the `Prevote` messages for the proposed block in the round indicated by `POLRound`.

-If the validator did not receive `Prevote` messages for the proposed block in `POLRound`, then it will prevote nil.
+If the validator does not receive `Prevote` messages for the proposed block before the proposal timeout, then it will prevote nil.
Validators already check that +2/3 prevotes were seen in `POLRound`, so this does not represent a change to the prevote logic.

A validator will also check that the proposed timestamp is greater than the timestamp of the block for the previous height.
If the timestamp is not greater than the previous block's timestamp, the block will not be considered valid, which is the same as the current logic.

Additionally, this validation logic can be updated to check that the `Proposal.Timestamp` matches the `Header.Timestamp` of the proposed block, but it is less relevant since checking that votes were received is sufficient to ensure the block timestamp is correct.

#### Relaxation of the 'Timely' check

The `Synchrony` parameters, `MessageDelay` and `Precision`, provide a means to bound the timestamp of a proposed block.
Selecting values that are too small presents a possible liveness issue for the network.
If a Tendermint network selects a `MessageDelay` parameter that does not accurately reflect the time to broadcast a proposal message to all of the validators on the network, nodes will begin rejecting proposals from otherwise correct proposers because these proposals will appear to be too far in the past.

`MessageDelay` and `Precision` are planned to be configured as `ConsensusParams`.
A very common way to update `ConsensusParams` is by executing a transaction included in a block that specifies new values for them.
However, if the network is unable to produce blocks because of this liveness issue, no such transaction may be executed.
To prevent this dangerous condition, we will add a relaxation mechanism to the `Timely` predicate.
If consensus takes more than 10 rounds to produce a block for any reason, the `MessageDelay` will be doubled.
This doubling will continue for each subsequent 10 rounds of consensus.
This will enable chains that selected too small a value for the `MessageDelay` parameter to eventually issue a transaction and readjust the parameters to more accurately reflect the broadcast time.

This liveness issue is not as problematic for chains with very small `Precision` values.
Operators can more easily readjust local validator clocks to be more aligned.
Additionally, chains that wish to increase a small `Precision` value can still take advantage of the `MessageDelay` relaxation, waiting for the `MessageDelay` value to grow significantly and issuing proposals with timestamps that are far in the past relative to their peers.

For more discussion of this, see [issue 371](https://github.com/tendermint/spec/issues/371).
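
A sketch of the relaxation (an assumption of this sketch is that rounds are numbered from zero within a height):

```go
// relaxedMessageDelay doubles the base MessageDelay once for every
// ten rounds consensus has taken at the current height, so a stalled
// network eventually accepts otherwise-correct proposals again.
func relaxedMessageDelay(base time.Duration, round int32) time.Duration {
	delay := base
	for i := round / 10; i > 0; i-- {
		delay *= 2
	}
	return delay
}
```
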
### Changes to the prevote step

@@ -289,7 +282,7 @@ Currently, a validator will prevote a proposal in one of three cases:

* Case 1: Validator has no locked block and receives a valid proposal.
* Case 2: Validator has a locked block and receives a valid proposal matching its locked block.
-* Case 3: Validator has a locked block, sees a valid proposal not matching its locked block but sees +⅔ prevotes for the proposal’s block, either in the current round or in a round greater than or equal to the round in which it locked its locked block.
+* Case 3: Validator has a locked block, sees a valid proposal not matching its locked block but sees +2/3 prevotes for the proposal’s block, either in the current round or in a round greater than or equal to the round in which it locked its locked block.

The only change we will make to the prevote step is to what a validator considers a valid proposal as detailed above.

@@ -335,5 +328,6 @@ This skew will be bound by the `PRECISION` value, so it is unlikely to be too la

## References

-* [PBTS Spec](https://github.com/tendermint/spec/tree/master/spec/consensus/proposer-based-timestamp)
+* [PBTS Spec](https://github.com/tendermint/tendermint/tree/master/spec/consensus/proposer-based-timestamp)
* [BFTTime spec](https://github.com/tendermint/spec/blob/master/spec/consensus/bft-time.md)
+* [Issue 371](https://github.com/tendermint/spec/issues/371)

@@ -232,4 +232,4 @@ the implementation timeline.

[adr61]: ./adr-061-p2p-refactor-scope.md
[adr62]: ./adr-062-p2p-architecture.md
-[rfc]: ../rfc/rfc-000-p2p.rst
+[rfc]: ../rfc/rfc-000-p2p-roadmap.rst

@@ -35,7 +35,7 @@ The configurable values are as follows:

    * How much the `TimeoutPrevote` increases with each round.
* `TimeoutPrecommit`
    * How long the consensus algorithm waits after receiving +2/3 precommits that
      do not have a quorum for a value before entering the next round.
      (See the [arXiv paper][arxiv-paper], Algorithm 1, Line 47)
* `TimeoutPrecommitDelta`
    * How much the `TimeoutPrecommit` increases with each round.
@@ -48,7 +48,7 @@ The configurable values are as follows:

### Overview of Change

We will consolidate the timeout parameters and migrate them from the node-local
`config.toml` file into the network-global consensus parameters.

The 8 timeout parameters will be consolidated down to 6. These will be as follows:
@@ -84,7 +84,7 @@ a `config.toml` with Tendermint's default values for these parameters.
### Why this parameter consolidation?

Reducing the number of parameters is good for UX. Fewer superfluous parameters makes
running and operating a Tendermint network less confusing.

The Prevote and Precommit messages are similar sizes and require similar amounts
of processing, so there is no strong need for them to be configured separately.
@@ -125,7 +125,7 @@ would use this exact same set of values.
While Tendermint nodes often run with similar bandwidth and on similar cloud-hosted
machines, there are enough points of variability to make configuring
consensus timeouts meaningful. Namely, Tendermint network topologies are likely to be
very different from chain to chain. Additionally, applications may vary greatly in
how long the `Commit` phase may take. Applications that perform more work during `Commit`
require a longer `TimeoutCommit` to allow the application to complete its work
and be prepared for the next height.
@@ -166,10 +166,10 @@ namely, each value must be non-negative.

### Migration

The new `ConsensusParameters` will be added during an upcoming release. In this
release, the old `config.toml` parameters will cease to control the timeouts and
an error will be logged on nodes that continue to specify these values. The specific
mechanism by which these parameters will be added to a chain is being discussed in
[RFC-009][rfc-009] and will be decided ahead of the next release.

The specific mechanism for adding these parameters depends on work related to
[soft upgrades][soft-upgrades], which is still ongoing.

@@ -181,8 +181,6 @@ well as the server. These include:
---
## Decision

-(pending)

To address the described problems, we will:

1. Introduce a new API for event subscription to the Tendermint RPC service.
@@ -398,8 +396,8 @@ The `oldest_item` and `newest_item` fields of the reply report the cursors of
the oldest and newest items (of any kind) recorded in the event log at the time
of the reply, or are `""` if the log is empty.

-The `data` field contains the type-specific event datum, and `events` contain
-the ABCI events, if any were defined.
+The `data` field contains the type-specific event datum. The datum carries any
+ABCI events that may have been defined.

> **Discussion point**: Based on [issue #7273][i7273], I did not include a
> separate field in the response for the ABCI events, since it duplicates data
@@ -458,7 +456,7 @@ crashes and connectivity issues:
   no `before_item`).

2. If there are more events than the client requested, or if the client needs
-   to to read older events to recover from a stall or crash , clients will
+   to read older events to recover from a stall or crash, clients will
   **page** backward through the event log (by setting `before_item` and
   possibly `after_item`).

docs/architecture/adr-076-combine-spec-repo.md (new file, 112 lines)
@@ -0,0 +1,112 @@

# ADR 076: Combine Spec and Tendermint Repositories

## Changelog

- 2022-02-04: Initial Draft. (@tychoish)

## Status

Accepted.

## Context

While the specification for Tendermint was originally in the same
repository as the Go implementation, at some point the specification
was split from the core repository and maintained separately from the
implementation. While this makes sense in promoting a conceptual
separation of specification and implementation, in practice this
separation was a premature optimization, apparently aimed at supporting
alternate implementations of Tendermint.

The operational and documentary burden of maintaining a separate
spec repo has not returned value to justify its cost. There are no active
projects to develop alternate implementations of Tendermint based on the
common specification, and having separate repositories creates an ongoing
burden to coordinate versions, documentation, and releases.

## Decision

The specification repository will be merged back into the Tendermint
core repository.

Stakeholders including representatives from the maintainers of the
spec, the Go implementation, and the Tendermint Rust library, agreed
to merge the repositories in the Tendermint core dev meeting on 27
January 2022, including @williambanfield @cmwaters @creachadair and
@thanethomson.

## Alternative Approaches

The main alternative we considered was to keep separate repositories,
and to introduce a coordinated versioning scheme between the two, so
that users could figure out which spec versions go with which versions
of the core implementation.

We decided against this on the grounds that it would further complicate
the release process for _both_ repositories, without mitigating any of
the other existing issues.

## Detailed Design

Clone and merge the master branch of the `tendermint/spec` repository
as a branch of `tendermint/tendermint`, to ensure the commit history
of both repositories remains intact.

### Implementation Instructions

1. Within the `tendermint` repository, execute the following commands
   to add a new branch with the history of the master branch of `spec`:

   ```bash
   git remote add spec git@github.com:tendermint/spec.git
   git fetch spec
   git checkout -b spec-master spec/master
   mkdir spec
   git ls-tree -z --name-only HEAD | xargs -0 -I {} git mv {} spec/
   git commit -m "spec: organize specification prior to merge"
   git checkout -b spec-merge-mainline origin/master
   git merge --allow-unrelated-histories spec-master
   ```

   This merges the spec into the `tendermint/tendermint` repository as
   a normal branch. This commit can also be backported to the 0.35
   branch, if needed.

2. Migrate outstanding issues from `tendermint/spec` to the
   `tendermint/tendermint` repository.

3. In the specification repository, add a redirect to the README and mark
   the repository as archived.

## Consequences

### Positive

Easier maintenance for the specification will obviate a number of
complicated and annoying versioning problems, and will help prevent the
possibility of the specification and the implementation drifting apart.

Additionally, co-locating the specification will help encourage
cross-pollination and collaboration between engineers focusing on the
specification and the protocol and engineers focusing on the implementation.

### Negative

Co-locating the spec and Go implementation has the potential effect of
prioritizing the Go implementation with regards to the spec, and
making it difficult to think about alternate implementations of the
Tendermint algorithm. Although we may want to foster additional
Tendermint implementations in the future, this isn't an active goal
in our current roadmap, and *not* merging these repos doesn't
change the fact that the Go implementation of Tendermint is already the
primary implementation.

### Neutral

N/A

## References

- https://github.com/tendermint/spec
- https://github.com/tendermint/tendermint

docs/architecture/adr-077-block-retention.md (new file, 109 lines)
@@ -0,0 +1,109 @@

# ADR 077: Configurable Block Retention

## Changelog

- 2020-03-23: Initial draft (@erikgrinaker)
- 2020-03-25: Use local config for snapshot interval (@erikgrinaker)
- 2020-03-31: Use ABCI commit response for block retention hint
- 2020-04-02: Resolved open questions
- 2021-02-11: Migrate to tendermint repo (Originally [RFC 001](https://github.com/tendermint/spec/pull/84))

## Author(s)

- Erik Grinaker (@erikgrinaker)

## Context

Currently, all Tendermint nodes contain the complete sequence of blocks from genesis up to some height (typically the latest chain height). This will no longer be true when the following features are released:

- [Block pruning](https://github.com/tendermint/tendermint/issues/3652): removes historical blocks and associated data (e.g. validator sets) up to some height, keeping only the most recent blocks.

- [State sync](https://github.com/tendermint/tendermint/issues/828): bootstraps a new node by syncing state machine snapshots at a given height, but not historical blocks and associated data.

To maintain the integrity of the chain, the use of these features must be coordinated such that necessary historical blocks will not become unavailable or lost forever. In particular:

- Some nodes should have complete block histories, for auditability, querying, and bootstrapping.

- The majority of nodes should retain blocks longer than the Cosmos SDK unbonding period, for light client verification.

- Some nodes must take and serve state sync snapshots with snapshot intervals less than the block retention periods, to allow new nodes to state sync and then replay blocks to catch up.

- Applications may not persist their state on commit, and require block replay on restart.

- Only a minority of nodes can be state synced within the unbonding period, for light client verification and to serve block histories for catch-up.

However, it is unclear if and how we should enforce this. It may not be possible to technically enforce all of these without knowing the state of the entire network, but it may also be unrealistic to expect this to be enforced entirely through social coordination. This is especially unfortunate since the consequences of misconfiguration can be permanent chain-wide data loss.

## Proposal

Add a new field `retain_height` to the ABCI `ResponseCommit` message:

```proto
service ABCIApplication {
  rpc Commit(RequestCommit) returns (ResponseCommit);
}

message RequestCommit {}

message ResponseCommit {
  // reserve 1
  bytes data = 2; // the Merkle root hash
  uint64 retain_height = 3; // the oldest block height to retain
}
```

Upon ABCI `Commit`, which finalizes execution of a block in the state machine, Tendermint removes all data for heights lower than `retain_height`. This allows the state machine to control block retention, which is preferable since only it can determine the significance of historical blocks. By default (i.e. with `retain_height=0`) all historical blocks are retained.

Removed data includes not only blocks, but also headers, commit info, consensus params, validator sets, and so on. In the first iteration this will be done synchronously, since the number of heights removed for each run is assumed to be small (often 1) in the typical case. It can be made asynchronous at a later time if this is shown to be necessary.

Since `retain_height` is dynamic, it is possible for it to refer to a height which has already been removed. For example, commit at height 100 may return `retain_height=90` while commit at height 101 may return `retain_height=80`. This is allowed, and will be ignored - it is the application's responsibility to return appropriate values.

State sync will eventually support backfilling heights, via e.g. a snapshot metadata field `backfill_height`, but in the initial version it will have a fully truncated block history.
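
A sketch of the pruning step described above (the `BlockPruner` interface is an illustrative stand-in, not Tendermint's actual block store API):

```go
// BlockPruner is a minimal stand-in for the block store.
type BlockPruner interface {
	Base() int64                    // oldest height still retained
	PruneBlocks(height int64) error // drop all data below height
}

// pruneAfterCommit applies the application's retain_height hint.
// A zero value retains everything, and hints that point at already
// pruned history are ignored, per the proposal.
func pruneAfterCommit(store BlockPruner, retainHeight int64) error {
	if retainHeight == 0 || retainHeight <= store.Base() {
		return nil
	}
	return store.PruneBlocks(retainHeight)
}
```
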
## Cosmos SDK Example

As an example, we'll consider how the Cosmos SDK might make use of this. The specific details should be discussed in a separate SDK proposal.

The returned `retain_height` would be the lowest height that satisfies the following (a sketch follows the list):

- Unbonding time: the time interval in which validators can be economically punished for misbehavior. Blocks in this interval must be auditable e.g. by the light client.

- IAVL snapshot interval: the block interval at which the underlying IAVL database is persisted to disk, e.g. every 10000 heights. Blocks since the last IAVL snapshot must be available for replay on application restart.

- State sync snapshots: blocks since the _oldest_ available snapshot must be available for state sync nodes to catch up (oldest because a node may be restoring an old snapshot while a new snapshot was taken).

- Local config: archive nodes may want to retain more or all blocks, e.g. via a local config option `min-retain-blocks`. There may also be a need to vary retention for other nodes, e.g. sentry nodes which do not need historical blocks.
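
A hedged sketch of that computation (the inputs are assumptions supplied by the application, each expressed as the oldest height that factor still needs):

```go
// computeRetainHeight returns the lowest of the oldest heights still
// required by each factor; returning 0 retains all blocks.
func computeRetainHeight(oldestRequired ...int64) int64 {
	if len(oldestRequired) == 0 {
		return 0
	}
	retain := oldestRequired[0]
	for _, h := range oldestRequired[1:] {
		if h < retain {
			retain = h
		}
	}
	if retain < 0 {
		return 0
	}
	return retain
}
```

For example, an SDK application could pass the oldest height needed for the unbonding period, the last IAVL snapshot height, the oldest state sync snapshot height, and a local `min-retain-blocks` floor.
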


## Status

Accepted

## Consequences

### Positive

- Application-specified block retention allows the application to take all relevant factors into account and prevent necessary blocks from being accidentally removed.

- Node operators can independently decide whether they want to provide complete block histories (if local configuration for this is provided) and snapshots.

### Negative

- Social coordination is required to run archival nodes; failure to do so may lead to permanent loss of historical blocks.

- Social coordination is required to run snapshot nodes; failure to do so may lead to an inability to run state sync, and an inability to bootstrap new nodes at all if no archival nodes are online.

### Neutral

- Reduced block retention requires application changes, and cannot be controlled directly in Tendermint.

- Application-specified block retention may set a lower bound on disk space requirements for all nodes.

## References

- State sync ADR: <https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-053-state-sync-prototype.md>

- State sync issue: <https://github.com/tendermint/tendermint/issues/828>

- Block pruning issue: <https://github.com/tendermint/tendermint/issues/3652>
docs/architecture/adr-078-nonzero-genesis.md (new file, 82 lines)
@@ -0,0 +1,82 @@
# ADR 078: Non-Zero Genesis

## Changelog

- 2020-07-26: Initial draft (@erikgrinaker)
- 2020-07-28: Use weak chain linking, i.e. `predecessor` field (@erikgrinaker)
- 2020-07-31: Drop chain linking (@erikgrinaker)
- 2020-08-03: Add `State.InitialHeight` (@erikgrinaker)
- 2021-02-11: Migrate to tendermint repo (Originally [RFC 002](https://github.com/tendermint/spec/pull/119))

## Author(s)

- Erik Grinaker (@erikgrinaker)

## Context

The recommended upgrade path for block protocol-breaking upgrades is currently to hard fork the
chain (see e.g. the [`cosmoshub-3` upgrade](https://blog.cosmos.network/cosmos-hub-3-upgrade-announcement-39c9da941aee)).
This is done by halting all validators at a predetermined height, exporting the application
state via application-specific tooling, and creating an entirely new chain using the exported
application state.

As far as Tendermint is concerned, the upgraded chain is a completely separate chain, with e.g.
a new chain ID and genesis file. Notably, the new chain starts at height 1, and has none of the
old chain's block history. This causes problems for integrators, e.g. coin exchanges and
wallets, that assume a monotonically increasing height for a given blockchain. Users also find
it confusing that a given height can now refer to distinct states depending on the chain
version.

An ideal solution would be to always retain block backwards compatibility in such a way that chain
history is never lost on upgrades. However, this may require a significant amount of engineering
work that is not viable for the planned Stargate release (Tendermint 0.34), and may prove too
restrictive for future development.

As a first step, allowing the new chain to start from an initial height specified in the genesis
file would at least provide monotonically increasing heights. There was a proposal to include the
last block header of the previous chain as well, but since the genesis file is not verified and
hashed (only specific fields are) this would not be trustworthy.

External tooling will be required to map historical heights onto e.g. archive nodes that contain
blocks from previous chain versions. Tendermint will not include any such functionality.

## Proposal

Tendermint will allow chains to start from an arbitrary initial height:

- A new field `initial_height` is added to the genesis file, defaulting to `1`. It can be set to any
  non-negative integer, and `0` is considered equivalent to `1`.

- A new field `InitialHeight` is added to the ABCI `RequestInitChain` message, with the same value
  and semantics as the genesis field.

- A new field `InitialHeight` is added to the `state.State` struct, where `0` is considered invalid.
  Including the field here simplifies implementation, since the genesis value does not have to be
  propagated throughout the code base separately, but it is not strictly necessary.

ABCI applications may have to be updated to handle arbitrary initial heights; otherwise the initial
block may fail.
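
As an illustration, a minimal sketch of an ABCI application recording the initial height from `InitChain` and validating the first block's height; the `App` type and its error handling are illustrative, not an existing implementation:

```go
package app

import (
	"fmt"

	abci "github.com/tendermint/tendermint/abci/types"
)

type App struct {
	abci.BaseApplication
	initialHeight int64
	lastHeight    int64
}

func (a *App) InitChain(req abci.RequestInitChain) abci.ResponseInitChain {
	a.initialHeight = req.InitialHeight
	if a.initialHeight == 0 { // 0 is equivalent to 1
		a.initialHeight = 1
	}
	return abci.ResponseInitChain{}
}

func (a *App) BeginBlock(req abci.RequestBeginBlock) abci.ResponseBeginBlock {
	// The first block arrives at initial_height, not necessarily at 1.
	expected := a.lastHeight + 1
	if a.lastHeight == 0 {
		expected = a.initialHeight
	}
	if req.Header.Height != expected {
		panic(fmt.Sprintf("unexpected height %d, want %d", req.Header.Height, expected))
	}
	a.lastHeight = req.Header.Height
	return abci.ResponseBeginBlock{}
}
```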

## Status

Accepted

## Consequences

### Positive

- Heights can be unique throughout the history of a "logical" chain, across hard fork upgrades.

### Negative

- Upgrades still cause loss of block history.

- Integrators will have to map height ranges to specific archive nodes/networks to query history.

### Neutral

- There is no explicit link to the last block of the previous chain.

## References

- [#2543: Allow genesis file to start from non-zero height w/ prev block header](https://github.com/tendermint/tendermint/issues/2543)
docs/architecture/adr-079-ed25519-verification.md (new file, 57 lines)
@@ -0,0 +1,57 @@
# ADR 079: Ed25519 Verification

## Changelog

- 2020-08-21: Initial RFC
- 2021-02-11: Migrate RFC to tendermint repo (Originally [RFC 003](https://github.com/tendermint/spec/pull/144))

## Author(s)

- Marko (@marbar3778)

## Context

Ed25519 keys are currently the only supported key type for Tendermint validators. Tendermint-Go wraps the ed25519 key implementation from the Go standard library. As more clients are implemented to communicate with the canonical Tendermint implementation (Tendermint-Go), different implementations of ed25519 will be used. Since [RFC 8032](https://www.rfc-editor.org/rfc/rfc8032.html) does not guarantee implementation compatibility, Tendermint clients must come to an agreement on how to guarantee it. [Zcash](https://z.cash/) has multiple implementations of their client and has identified this problem as well. The team at Zcash has made a proposal to address this issue: [Zcash improvement proposal 215](https://zips.z.cash/zip-0215).

## Proposal

- Tendermint-Go would adopt [hdevalence/ed25519consensus](https://github.com/hdevalence/ed25519consensus).
  - This library implements `ed25519.Verify()` in accordance with ZIP-215 (see the sketch below). Tendermint-Go will continue to use `crypto/ed25519` for signing and key generation.

- Tendermint-rs would adopt [ed25519-zebra](https://github.com/ZcashFoundation/ed25519-zebra)
  - related [issue](https://github.com/informalsystems/tendermint-rs/issues/355)
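
For illustration, a minimal sketch of the resulting split between signing and verification, assuming the `ed25519consensus` package's `Verify(publicKey, message, sig)` function:

```go
package main

import (
	"crypto/ed25519"
	"fmt"

	"github.com/hdevalence/ed25519consensus"
)

func main() {
	// Key generation and signing stay on the standard library.
	pub, priv, _ := ed25519.GenerateKey(nil)
	msg := []byte("vote bytes")
	sig := ed25519.Sign(priv, msg)

	// Verification uses the ZIP-215-compliant implementation, so that all
	// clients accept and reject exactly the same set of signatures.
	fmt.Println(ed25519consensus.Verify(pub, msg, sig)) // true
}
```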

Signature verification is one of the major bottlenecks of Tendermint-Go. Batch verification cannot be used unless all implementations follow the same consensus rules; ZIP 215 makes verification safe in consensus-critical areas.

This change constitutes a breaking change and must therefore be done in a major release. No changes to validator keys or operations will be needed for this change to be enabled.

This change has no impact on signature aggregation. To enable signature aggregation, Tendermint would have to use a different signature scheme (Schnorr, BLS, ...). Secondly, this change will enable safe batch verification for the Tendermint-Go client. Batch verification for the Rust client is already supported in the library being used.

As part of the acceptance of this proposal, it would be best to contract or discuss with a third party the process of conducting a security review of the Go library.

## Status

Proposed

## Consequences

### Positive

- Consistent signature verification across implementations
- Enable safe batch verification

### Negative

#### Tendermint-Go

- Third-party dependency
  - The library has not gone through a security review.
  - Unclear maintenance schedule
- Fragmentation of the ed25519 key handling in the Go implementation: verification is done using a third-party library while the rest uses the Go standard library

### Neutral

## References

[It’s 255:19AM. Do you know what your validation criteria are?](https://hdevalence.ca/blog/2020-10-04-its-25519am)
docs/architecture/adr-080-reverse-sync.md (new file, 203 lines)
@@ -0,0 +1,203 @@
# ADR 080: ReverseSync - fetching historical data

## Changelog

- 2021-02-11: Migrate to tendermint repo (Originally [RFC 005](https://github.com/tendermint/spec/pull/224))
- 2021-04-19: Use P2P to gossip necessary data for reverse sync.
- 2021-03-03: Simplify proposal to the state sync case.
- 2021-02-17: Add notes on asynchronicity of processes.
- 2020-12-10: Rename backfill blocks to reverse sync.
- 2020-11-25: Initial draft.

## Author(s)

- Callum Waters (@cmwaters)

## Context

Two new features: [Block pruning](https://github.com/tendermint/tendermint/issues/3652)
and [State sync](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-042-state-sync.md)
meant nodes no longer needed a complete history of the blockchain. This
introduced some challenges of its own which were covered and subsequently
tackled with [RFC-001](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-077-block-retention.md).
The RFC allowed applications to set a block retention height: an upper bound on
which blocks would be pruned. However, nodes that state sync past this upper bound
(which is necessary, as snapshots must be saved within the trusting period for
the assisting light client to verify) have no means of backfilling the blocks
to meet the retention limit. This could be a problem, as nodes that state sync and
then eventually switch to consensus (or fast sync) may not have the block and
validator history to verify evidence, causing them to panic if they see a 2/3
commit on what the node believes to be an invalid block.

Thus, this RFC sets out to instil a minimum block history invariant amongst
honest nodes.

## Proposal

A backfill mechanism can simply be defined as an algorithm for fetching,
verifying, and storing headers and validator sets of a height prior to the
current base of the node's blockchain. In matching the terminology used for
other data retrieving protocols (i.e. fast sync and state sync), we
call this method **ReverseSync**.

We will define the mechanism in four sections:

- Usage
- Design
- Verification
- Termination

### Usage

For now, we focus purely on the case of a state syncing node, which after
syncing to a height will need to verify historical data in order to be capable
of processing new blocks. We can denote the earliest height that the node will
need to verify and store in order to be able to verify any evidence that might
arise as the `max_historical_height`/`time`. Both height and time are necessary
as this maps to the BFT time used for evidence expiration. After acquiring
`State`, we calculate these parameters as:

```go
max_historical_height = max(state.InitialHeight, state.LastBlockHeight - state.ConsensusParams.EvidenceAgeHeight)
max_historical_time = max(GenesisTime, state.LastBlockTime.Sub(state.ConsensusParams.EvidenceAgeTime))
```

Before starting either fast sync or consensus, we then run the following
synchronous process:

```go
func ReverseSync(max_historical_height int64, max_historical_time time.Time) error
```

Where we fetch and verify blocks until a block `A` where
`A.Height <= max_historical_height` and `A.Time <= max_historical_time`.

Upon successfully reverse syncing, a node can now safely continue. As this
feature is only used as part of state sync, one can think of this as merely an
extension to it.

In the future we may want to extend this functionality to allow nodes to fetch
historical blocks for reasons of accountability or data accessibility.

### Design

This section will provide a high level overview of some of the more important
characteristics of the design, saving the more tedious details for an ADR.

#### P2P

Implementation of this RFC will require the addition of a new channel and two
new messages.

```proto
message LightBlockRequest {
  uint64 height = 1;
}
```

```proto
message LightBlockResponse {
  Header header = 1;
  Commit commit = 2;
  ValidatorSet validator_set = 3;
}
```

The P2P path may also enable P2P networked light clients and a state sync that
also doesn't need to rely on RPC.

### Verification

ReverseSync is used to fetch the following data structures:

- `Header`
- `Commit`
- `ValidatorSet`

Nodes will also need to be able to verify these. This can be achieved by first
retrieving the header at the base height from the block store. From this trusted
header, the node hashes each of the three data structures and checks that they
are correct; a combined sketch follows the three checks below.

1. The trusted header's last block ID matches the hash of the new header

```go
header[height].LastBlockID == hash(header[height-1])
```

2. The trusted header's last commit hash matches the hash of the new commit

```go
header[height].LastCommitHash == hash(commit[height-1])
```

3. Given that the node now trusts the new header, check that the header's validator set
hash matches the hash of the validator set

```go
header[height-1].ValidatorsHash == hash(validatorSet[height-1])
```
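
Putting the three checks together, a schematic backfill loop in the same style as the snippets above; `fetchLightBlock`, `store`, and `hash` are placeholders for the real reactor and block store plumbing:

```go
// reverseSync walks backwards from the node's base header, applying
// checks 1-3 to each fetched light block before trusting and storing it.
func reverseSync(trusted Header, stopHeight int64, stopTime time.Time) error {
	for {
		height := trusted.Height - 1
		lb, err := fetchLightBlock(height) // LightBlockRequest to a peer
		if err != nil {
			return err
		}
		if !bytes.Equal(trusted.LastBlockID.Hash, hash(lb.Header)) {
			return fmt.Errorf("header mismatch at height %d", height)
		}
		if !bytes.Equal(trusted.LastCommitHash, hash(lb.Commit)) {
			return fmt.Errorf("commit mismatch at height %d", height)
		}
		if !bytes.Equal(lb.Header.ValidatorsHash, hash(lb.ValidatorSet)) {
			return fmt.Errorf("validator set mismatch at height %d", height)
		}
		store(lb) // persist header, commit, and validator set
		trusted = lb.Header
		if height <= stopHeight && !lb.Header.Time.After(stopTime) {
			return nil // reached max_historical_height/time: done
		}
	}
}
```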

### Termination

ReverseSync draws a lot of parallels with fast sync. An important consideration
for fast sync that also extends to ReverseSync is termination. ReverseSync will
finish its task when one of the following conditions has been met:

1. It reaches a block `A` where `A.Height <= max_historical_height` and
   `A.Time <= max_historical_time`.
2. None of its peers report having the block at the height below the
   process's current block.
3. A global timeout.

This implies that we can't guarantee adequate history and thus the term
"invariant" can't be used in the strictest sense. In the case that the first
condition isn't met, the node will log an error and optimistically attempt
to continue with either fast sync or consensus.

## Alternative Solutions

The need for a minimum block history invariant stems purely from the need to
validate evidence (although there may be some application-relevant needs as
well). Because of this, an alternative could be to simply trust whatever the
2/3+ majority has agreed upon, and in the case where a node is at the head of the
blockchain, simply abstain from voting.

As it stands, if 2/3+ vote on evidence a node can't verify, or likewise if
2/3+ vote on a header that a node sees as invalid (perhaps due to a different
app hash), the node will halt.

Another alternative is the method with which the relevant data is retrieved.
Instead of introducing new messages to the P2P layer, RPC could have been used
instead.

The aforementioned data is already available via the following RPC endpoints:
`/commit` for `Header`s and `/validators` for `ValidatorSet`s. It was
decided predominantly due to the instability of the current RPC infrastructure
that P2P be used instead.

## Status

Proposed

## Consequences

### Positive

- Ensures a minimum block history invariant for honest nodes. This will allow
  nodes to verify evidence.

### Negative

- State sync will be slower, as more processing is required.

### Neutral

- By having validator sets served through p2p, this would make it easier to
  extend p2p support to light clients and state sync.
- In the future, it may also be possible to extend this feature to allow for
  nodes to freely fetch and verify prior blocks.

## References

- [RFC-001: Block retention](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-077-block-retention.md)
- [Original issue](https://github.com/tendermint/tendermint/issues/4629)
BIN docs/architecture/img/block-retention.png (new file, 52 KiB, binary not shown)
@@ -50,7 +50,7 @@ little overview what they do.
  they are coming from peers or the application.
- `p2p` Provides an abstraction around peer-to-peer communication. For
  more details, please check out the
  [README](https://github.com/tendermint/spec/tree/master/spec/p2p).
  [README](https://github.com/tendermint/tendermint/tree/master/spec/p2p).
- `rpc-server` RPC server. For implementation details, please read the
  [doc.go](https://github.com/tendermint/tendermint/blob/master/rpc/jsonrpc/doc.go).
- `state` Represents the latest state and execution submodule, which
@@ -120,7 +120,7 @@ Next follows a standard block creation cycle, where we enter a new
round, propose a block, receive more than 2/3 of prevotes, then
precommits and finally have a chance to commit a block. For details,
please refer to [Byzantine Consensus
Algorithm](https://github.com/tendermint/spec/blob/master/spec/consensus/consensus.md).
Algorithm](https://github.com/tendermint/tendermint/blob/master/spec/consensus/consensus.md).

```sh
I[10-04|13:54:30.393] enterNewRound(91/0). Current: 91/0/RoundStepNewHeight module=consensus
@@ -83,7 +83,7 @@ for more information.
Rate-limiting and authentication are other key aspects that help protect
against DOS attacks. Validators are supposed to use external tools like
[NGINX](https://www.nginx.com/blog/rate-limiting-nginx/) or
[traefik](https://docs.traefik.io/middlewares/ratelimit/)
[traefik](https://doc.traefik.io/traefik/middlewares/http/ratelimit/)
to achieve the same things.

## Debugging Tendermint
@@ -109,9 +109,9 @@ Currently Tendermint uses [Ed25519](https://ed25519.cr.yp.to/) keys which are wi
> **+2/3 is short for "more than 2/3"**

A block is committed when +2/3 of the validator set sign [precommit
votes](https://github.com/tendermint/spec/blob/953523c3cb99fdb8c8f7a2d21e3a99094279e9de/spec/blockchain/blockchain.md#vote) for that block at the same `round`.
votes](https://github.com/tendermint/tendermint/blob/953523c3cb99fdb8c8f7a2d21e3a99094279e9de/spec/blockchain/blockchain.md#vote) for that block at the same `round`.
The +2/3 set of precommit votes is called a
[_commit_](https://github.com/tendermint/spec/blob/953523c3cb99fdb8c8f7a2d21e3a99094279e9de/spec/blockchain/blockchain.md#commit). While any +2/3 set of
[_commit_](https://github.com/tendermint/tendermint/blob/953523c3cb99fdb8c8f7a2d21e3a99094279e9de/spec/blockchain/blockchain.md#commit). While any +2/3 set of
precommits for the same block at the same height&round can serve as
validation, the canonical commit is included in the next block (see
[LastCommit](https://github.com/tendermint/spec/blob/953523c3cb99fdb8c8f7a2d21e3a99094279e9de/spec/blockchain/blockchain.md#lastcommit)).
[LastCommit](https://github.com/tendermint/tendermint/blob/953523c3cb99fdb8c8f7a2d21e3a99094279e9de/spec/blockchain/blockchain.md#lastcommit)).
docs/package-lock.json (generated, 6 lines changed)
@@ -10271,9 +10271,9 @@
}
},
"url-parse": {
"version": "1.5.4",
"resolved": "https://registry.npmjs.org/url-parse/-/url-parse-1.5.4.tgz",
"integrity": "sha512-ITeAByWWoqutFClc/lRZnFplgXgEZr3WJ6XngMM/N9DMIm4K8zXPCZ1Jdu0rERwO84w1WC5wkle2ubwTA4NTBg==",
"version": "1.5.7",
"resolved": "https://registry.npmjs.org/url-parse/-/url-parse-1.5.7.tgz",
"integrity": "sha512-HxWkieX+STA38EDk7CE9MEryFeHCKzgagxlGvsdS7WBImq9Mk+PGwiT56w82WI3aicwJA8REp42Cxo98c8FZMA==",
"requires": {
"querystringify": "^2.1.1",
"requires-port": "^1.0.0"
@@ -1,4 +1,4 @@
#!/bin/bash

cp -a ../rpc/openapi/ .vuepress/public/rpc/
git clone https://github.com/tendermint/spec.git specRepo && cp -r specRepo/spec . && rm -rf specRepo
cp -r ../spec .
@@ -41,10 +41,16 @@ sections.
- [RFC-001: Storage Engines](./rfc-001-storage-engine.rst)
- [RFC-002: Interprocess Communication](./rfc-002-ipc-ecosystem.md)
- [RFC-003: Performance Taxonomy](./rfc-003-performance-questions.md)
- [RFC-004: E2E Test Framework Enhancements](./rfc-004-e2e-framework.md)
- [RFC-004: E2E Test Framework Enhancements](./rfc-004-e2e-framework.rst)
- [RFC-005: Event System](./rfc-005-event-system.rst)
- [RFC-006: Event Subscription](./rfc-006-event-subscription.md)
- [RFC-007: Deterministic Proto Byte Serialization](./rfc-007-deterministic-proto-bytes.md)
- [RFC-008: Don't Panic](./rfc-008-don't-panic.md)
- [RFC-008: Don't Panic](./rfc-008-do-not-panic.md)
- [RFC-009: Consensus Parameter Upgrades](./rfc-009-consensus-parameter-upgrades.md)
- [RFC-010: P2P Light Client](./rfc-010-p2p-light-client.rst)
- [RFC-011: Delete Gas](./rfc-011-delete-gas.md)
- [RFC-012: Event Indexing Revisited](./rfc-012-custom-indexing.md)
- [RFC-013: ABCI++](./rfc-013-abci++.md)
- [RFC-014: Semantic Versioning](./rfc-014-semantic-versioning.md)

<!-- - [RFC-NNN: Title](./rfc-NNN-title.md) -->
BIN docs/rfc/images/abci++.png (new file, 2.7 MiB, binary not shown)
BIN docs/rfc/images/abci.png (new file, 2.5 MiB, binary not shown)
@@ -407,7 +407,7 @@ discussed.

## References

[abci]: https://github.com/tendermint/spec/tree/95cf253b6df623066ff7cd4074a94e7a3f147c7a/spec/abci
[abci]: https://github.com/tendermint/tendermint/tree/master/spec/abci
[rpc-service]: https://docs.tendermint.com/master/rpc/
[light-client]: https://docs.tendermint.com/master/tendermint-core/light-client.html
[tm-cli]: https://github.com/tendermint/tendermint/tree/master/cmd/tendermint
@@ -416,5 +416,5 @@ discussed.
[socket-server]: https://github.com/tendermint/tendermint/blob/master/abci/server/socket_server.go
[sdk-grpc]: https://pkg.go.dev/github.com/cosmos/cosmos-sdk/types/tx#ServiceServer
[json-rpc]: https://www.jsonrpc.org/specification
[abci-conn]: https://github.com/tendermint/spec/blob/master/spec/abci/apps.md#state
[abci-conn]: https://github.com/tendermint/tendermint/blob/master/spec/abci/apps.md#state
[adr-57]: https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-057-RPC.md
@@ -1,4 +1,4 @@
# RFC 003: Taxonomy of potential performance issues in Tendermint

## Changelog

@@ -35,7 +35,7 @@ This section attempts to delineate the different sections of Tendermint function
that are often cited as having performance issues. It raises questions and suggests
lines of inquiry that may be valuable for better understanding Tendermint's performance issues.

As a note: We should avoid quickly adding many microbenchmarks or package level benchmarks.
These are prone to being worse than useless as they can obscure what _should_ be
focused on: performance of the system from the perspective of a user. We should,
instead, tune performance with an eye towards user needs and actions users make. These users comprise

@@ -116,7 +116,7 @@ the Tendermint node.

ABCI delivers blocks in several methods: `BeginBlock`, `DeliverTx`, `EndBlock`, `Commit`.

Tendermint delivers transactions one-by-one via the `DeliverTx` call. Most of the
transaction delivery in Tendermint occurs asynchronously and therefore appears unlikely to
form a bottleneck in ABCI.

@@ -153,7 +153,7 @@ slow during queries, as ABCI is no longer able to make progress. This is known
to be causing issues in the cosmos-sdk and is being addressed [in the sdk][sdk-query-fix]
but a more robust solution may be required. Adding metrics to each ABCI client connection
and message as described in the Application section of this document would allow us
to further introspect the issue here.

#### Claim: RPC Serialization may cause slowdown

@@ -169,8 +169,8 @@ The other JSON-RPC methods are much less critical to the core functionality of T
While there may be other points of performance consideration within the RPC, methods that do not
receive high volumes of requests should not be prioritized for performance consideration.

NOTE: Previous discussion of the RPC framework was done in [ADR 57][adr-57] and
there is ongoing work to inspect and alter the JSON-RPC framework in [RFC 002][rfc-002].
Much of these RPC-related performance considerations can either wait until the work of RFC 002 is done or be
considered concordantly with the in-flight changes to the JSON-RPC.

@@ -207,7 +207,7 @@ in Tendermint. Namely, it is being considered as part of the [modular hashing pr
It is currently unknown if hashing transactions in the Mempool forms a significant bottleneck.
Although it does not appear to be documented as slow, there are a few open github
issues that indicate a possible user preference for a faster hashing algorithm,
including [issue 2187][issue-2187] and [issue 2186][issue-2186].

It is likely worth investigating what order of magnitude Tx hashing takes in comparison to other
aspects of adding a Tx to the mempool. It is not currently clear if the rate of adding Tx

@@ -272,7 +272,7 @@ event sends. The following metrics would be a good start for tracking this perfo
[rfc-002]: https://github.com/tendermint/tendermint/pull/6913
[adr-57]: https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-057-RPC.md
[issue-1319]: https://github.com/tendermint/tendermint/issues/1319
[abci-commit-description]: https://github.com/tendermint/spec/blob/master/spec/abci/apps.md#commit
[abci-commit-description]: https://github.com/tendermint/tendermint/blob/master/spec/abci/apps.md#commit
[abci-local-client-code]: https://github.com/tendermint/tendermint/blob/511bd3eb7f037855a793a27ff4c53c12f085b570/abci/client/local_client.go#L84
[hub-signature]: https://github.com/cosmos/gaia/blob/0ecb6ed8a244d835807f1ced49217d54a9ca2070/docs/resources/genesis.md#consensus-parameters
[ed25519-bench]: https://github.com/oasisprotocol/curve25519-voi/blob/d2e7fc59fe38c18ca990c84c4186cba2cc45b1f9/PERFORMANCE.md
docs/rfc/rfc-010-p2p-light-client.rst (new file, 145 lines)
@@ -0,0 +1,145 @@
==================================
RFC 010: Peer to Peer Light Client
==================================

Changelog
---------

- 2022-01-21: Initial draft (@tychoish)

Abstract
--------

The dependency on access to the RPC system makes running or using the light
client more complicated than it should be, because in practice node operators
choose to restrict access to these endpoints (often correctly.) There is no
deep dependency for the light client on the RPC system, and there is a
persistent notion that "make a p2p light client" is a solution to this
operational limitation. This document explores the implications and
requirements of implementing a p2p-based light client, as well as the
possibilities afforded by this implementation.

Background
----------

High Level Design
~~~~~~~~~~~~~~~~~

From a high level, the light client P2P implementation is relatively
straightforward, but is orthogonal to the P2P-backed statesync implementation
that took place during the 0.35 cycle. The light client only really needs to
be able to request (and receive) a `LightBlock` at a given height. To support
this, a new Reactor would run on every full node and validator which would be
able to service these requests. The workload would be entirely
request-response, and the implementations of both the reactor and the
provider would likely be relatively simple.
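
As an illustration, a minimal sketch of the request-response handling; the
types and the ``send``/``LightBlock`` helpers are hypothetical stand-ins for
the real reactor plumbing:

.. code-block:: go

   // handleLightBlockRequest services a LightBlockRequest from a peer by
   // reading the block store and replying with a LightBlockResponse.
   func (r *Reactor) handleLightBlockRequest(peerID string, height int64) {
       lb, err := r.store.LightBlock(height) // header, commit, validator set
       if err != nil {
           r.send(peerID, &LightBlockResponse{}) // empty: block not available
           return
       }
       r.send(peerID, &LightBlockResponse{
           Header:       lb.Header,
           Commit:       lb.Commit,
           ValidatorSet: lb.ValidatorSet,
       })
   }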

The complexity of the project centers on peer discovery, handling when
peers disconnect from the light clients, and how to change the current P2P
code to appropriately handle specialized nodes.

I believe it's safe to assume that much of the current functionality of the
current ``light`` mode would *not* need to be maintained: there is no need to
proxy the RPC endpoints over the P2P layer and there may be no need to run a
node/process for the p2p light client (e.g. all use of this will be as a
client.)

The ability to run light clients using the RPC system will continue to be
maintained.

LibP2P
~~~~~~

While some aspects of the P2P light client implementation are orthogonal to
the LibP2P project, it's useful to think about the ways that these efforts may
combine or interact.

We expect to be able to leverage libp2p tools to provide some kind of service
discovery for tendermint-based networks. This means that it will be possible
for the p2p stack to easily identify specialized nodes (e.g. light clients),
thus obviating many of the design challenges with providing this feature in
the context of the current stack.

Similarly, libp2p makes it possible for a project to back its non-Go
light clients, without the major task of first implementing Tendermint's p2p
connection handling. We should identify if there exist users (e.g. the go IBC
relayer, its maintainers, and operators) who would be able to take advantage
of a p2p light client before switching to libp2p. To our knowledge there are
limited implementations of this p2p protocol (a simple implementation without
secret connection support exists in rust but it has not been used in
production), and it seems unlikely that a team would implement this directly
ahead of its impending removal.

Use Cases
~~~~~~~~~

This RFC makes a few assumptions about the use cases and users of light
clients in tendermint.

The most active and delicate use case for light clients is in the
implementation of the IBC relayer. Thus, we expect that providing P2P light
clients might increase the reliability of relayers and reduce the cost of
running a relayer, because relayer operators won't have to decide between
relying on public RPC endpoints (unreliable) or running their own full nodes
(expensive.) This also assumes that there are *no* other uses of the RPC in
the relayer, and unless the relayers have the option of dropping all RPC use,
it's unclear if a P2P light client will actually be able to successfully
remove the dependency on the RPC system.

Given that the primary relayer implementation is Hermes (rust,) it might be
safe to deliver a version of Tendermint that adds a light client reactor in
the full nodes, but that does not provide an implementation of a Go light
client. This either means that the rust implementation would need support for
the legacy P2P connection protocol or wait for the libp2p implementation.

Client side light client (e.g. wallets, etc.) users may always want to use (a
subset of) the RPC rather than connect to the P2P network for an ephemeral
use.

Discussion
----------

Implementation Questions
~~~~~~~~~~~~~~~~~~~~~~~~

Most of the complication here is in how to have a long-lived light client node
that *only* runs the light client reactor, as this raises a few questions:

- would users specify a single P2P node to connect to when creating a light
  client or would they also need/want to discover peers?

  - **answer**: most light client use cases won't care much about selecting
    peers (and those that do can either disable PEX and specify persistent
    peers, *or* use the RPC light client.)

- how do we prevent the peer slots of full nodes and validators, which are
  typically limited, from filling up with light clients? If
  light clients aren't limited, how do we prevent light clients from consuming
  resources on consensus nodes?

  - **answer**: I think we can institute an internal cap on the number of
    light client connections to accept and also elide light client nodes from
    PEX (pre-libp2p, if we implement this.) I believe that libp2p should
    provide us with the kind of service discovery semantics for network
    connectivity that would obviate this issue.

- when a light client disconnects from its peers will it need to reset its
  internal state (cache)? does this change if it connects to the same peers?

  - **answer**: no, the internal state only needs to be reset if the light
    client detects an invalid block or other divergence, and changing
    witnesses--which will be more common with a p2p light client--need not
    invalidate the cache.

These issues are primarily present given that the current peer management
layer does not have a particularly good service discovery mechanism nor does
it have a very sophisticated way of identifying nodes of different types or
modes.

Report Evidence
~~~~~~~~~~~~~~~

The current light client implementation has the ability to report observed
evidence. Either the notional light client reactor needs to be able to handle
these kinds of requests *or* all light client nodes need to also run the
evidence reactor. This could be configured at runtime.
docs/rfc/rfc-011-delete-gas.md (new file, 162 lines)
@@ -0,0 +1,162 @@
# RFC 011: Remove Gas From Tendermint

## Changelog

- 03-Feb-2022: Initial draft (@williambanfield).
- 10-Feb-2022: Update in response to feedback (@williambanfield).
- 11-Feb-2022: Add reflection on MaxGas during consensus (@williambanfield).

## Abstract

In the v0.25.0 release, Tendermint added a mechanism for tracking 'Gas' in the mempool.
At a high level, Gas allows applications to specify how much it will cost the network,
often in compute resources, to execute a given transaction. While such a mechanism is common
in blockchain applications, it is not generalizable enough to be maintained as a part
of Tendermint. This RFC explores the possibility of removing the concept of Gas from
Tendermint while still allowing applications the power to control the contents of
blocks to achieve similar goals.

## Background

The notion of Gas was included in the original Ethereum whitepaper and exists as
an important feature of the Ethereum blockchain.

The [whitepaper describes Gas][eth-whitepaper-messages] as an anti-DoS mechanism. The Ethereum Virtual Machine
provides a Turing-complete execution platform. Without any limitations, malicious
actors could waste computation resources by directing the EVM to perform large
or even infinite computations. Gas serves as a metering mechanism to prevent this.

Gas appears to have been added to Tendermint multiple times, initially as part of
a now defunct `/vm` package, and in its most recent iteration [as part of v0.25.0][gas-add-pr]
as a mechanism to limit the transactions that will be included in the block by an additional
parameter.

Gas has gained adoption within the Cosmos ecosystem [as part of the Cosmos SDK][cosmos-sdk-gas].
The SDK provides facilities for tracking how much 'Gas' a transaction is expected to take
and a mechanism for tracking how much gas a transaction has already taken.

Non-SDK applications also make use of the concept of Gas. Anoma appears to implement
[a gas system][anoma-gas] to meter the transactions it executes.

While the notion of gas is present in projects that make use of Tendermint, it is
not a concern of Tendermint's. Tendermint's value and goal is producing blocks
via a distributed consensus algorithm. Tendermint relies on the application-specific
code to decide how to handle the transactions Tendermint has produced (or if the
application wants to consider them at all). Gas is an application concern.

Our implementation of Gas is not currently enforced by consensus. Our current validation check that
occurs during block propagation does not verify that the block is under the configured `MaxGas`.
Ensuring that the transactions in a proposed block do not exceed `MaxGas` would require
input from the application during propagation. The `ProcessProposal` method introduced
as part of ABCI++ would enable such input but would further entwine Tendermint and
the application. The issue of checking `MaxGas` during block propagation is important
because it demonstrates that the feature as it currently exists is not implemented
as fully as it perhaps should be.

Our implementation of Gas is causing issues for node operators and relayers. At
the moment, transactions that overflow the configured 'MaxGas' can be silently rejected
from the mempool. Overflowing MaxGas is the _only_ way that a transaction can be considered
invalid that is not directly a result of failing `CheckTx`. Operators, and the application,
do not know that a transaction was removed from the mempool for this reason. A stateless check
of this nature is exactly what `CheckTx` exists for, and there is no reason for the mempool
to keep track of this data separately. A special [MempoolError][add-mempool-error] field
was added in v0.35 to communicate to clients that a transaction failed after `CheckTx`.
While this should alleviate the pain for operators wishing to understand if their
transaction was included in the mempool, it highlights that the abstraction of
what is included in the mempool is not currently well defined.

Removing Gas from Tendermint and the mempool would allow the mempool to be a better
abstraction: any transaction that arrived at `CheckTx` and passed the check will either be
a candidate for a later block or evicted after a TTL is reached or to make room for
other, higher-priority transactions. All other transactions are completely invalid and can be discarded forever.

Removing gas will not be completely straightforward. It will mean ensuring that
equivalent functionality can be implemented outside of the mempool using the mempool's API.

## Discussion

This section catalogs the functionality that will need to exist within the Tendermint
mempool to allow Gas to be removed and replaced by application-side bookkeeping.

### Requirement: Provide Mempool Tx Sorting Mechanism

Gas produces a market for inclusion in a block. On many networks, a [gas fee][cosmos-sdk-fees] is
included in pending transactions. This fee indicates how much a user is willing to
pay per unit of execution, and the fees are distributed to validators.

Validators wishing to extract higher gas fees are incentivized to include transactions
with the highest listed gas fees in each block. This produces a natural ordering
of the pending transactions. Applications wishing to implement a gas mechanism need
to be able to order the transactions in the mempool. This can trivially be accomplished
by sorting transactions using the `priority` field available to applications as part of
v0.35's `ResponseCheckTx` message.
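
For example, a minimal sketch of an application mapping its fee market onto the v0.35 `priority` field; the `App` type and the fee decoding are hypothetical, while `ResponseCheckTx.Priority` is the field referred to above:

```go
package app

import (
	"encoding/binary"

	abci "github.com/tendermint/tendermint/abci/types"
)

type App struct{ abci.BaseApplication }

// decodeFeeAndGas stands in for application-specific tx decoding; here the
// fee and gas are read from the first 16 bytes of the raw transaction.
func decodeFeeAndGas(tx []byte) (fee, gas int64, ok bool) {
	if len(tx) < 16 {
		return 0, 0, false
	}
	fee = int64(binary.BigEndian.Uint64(tx[:8]))
	gas = int64(binary.BigEndian.Uint64(tx[8:16]))
	return fee, gas, gas > 0
}

// CheckTx assigns a priority proportional to fee per unit of gas, so the
// mempool's priority ordering reproduces the gas-fee market described above.
func (App) CheckTx(req abci.RequestCheckTx) abci.ResponseCheckTx {
	fee, gas, ok := decodeFeeAndGas(req.Tx)
	if !ok {
		return abci.ResponseCheckTx{Code: 1, Log: "undecodable or zero-gas tx"}
	}
	return abci.ResponseCheckTx{Code: abci.CodeTypeOK, Priority: fee / gas}
}
```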

### Requirement: Allow Application-Defined Block Resizing

When creating a block proposal, Tendermint pulls a set of possible transactions out of
the mempool to include in the next block. Tendermint uses `MaxGas` to limit the set of
transactions it pulls out of the mempool, fetching a set of transactions whose sum is less than `MaxGas`.

By removing gas tracking from Tendermint's mempool, Tendermint will need to provide a way for
applications to determine an acceptable set of transactions to include in the block.

This is what the new ABCI++ `PrepareProposal` method is useful for. Applications
that wish to limit the contents of a block by an application-defined limit may
do so by removing transactions from the proposal they are passed during `PrepareProposal`.
Applications wishing to reach parity with the current Gas implementation may do
so by creating an application-side limit: filtering out transactions from
`PrepareProposal` that cause the proposal to exceed the maximum gas. Additionally,
applications can currently opt to have all transactions in the mempool delivered
during `PrepareProposal` by passing `-1` for `MaxGas` and `MaxBytes` into
[ReapMaxBytesMaxGas][reap-max-bytes-max-gas].
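
A minimal sketch of such an application-side limit; the function shape and the gas-estimate callback are illustrative, not a finalized `PrepareProposal` API:

```go
package app

// prepareProposal sketches the application-side gas cap: keep transactions
// in the order delivered until the application's own gas estimate for the
// block would exceed maxGas. estimateGas stands in for whatever records the
// application kept when each transaction passed CheckTx.
func prepareProposal(txs [][]byte, maxGas int64, estimateGas func([]byte) int64) [][]byte {
	var kept [][]byte
	var gasUsed int64
	for _, tx := range txs {
		g := estimateGas(tx)
		if gasUsed+g > maxGas {
			continue // this tx would push the block over the app's limit
		}
		gasUsed += g
		kept = append(kept, tx)
	}
	return kept
}
```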

### Requirement: Handle Transaction Metadata

Moving the gas mechanism into applications adds an additional piece of complexity
to applications. The application must now track how much gas it expects a transaction
to consume. The mempool currently handles this bookkeeping responsibility and uses the estimated
gas to determine the set of transactions to include in the block. In order to task
the application with keeping track of this metadata, we should make it easier for the
application to do so. In general, we'll want to keep only one copy of this type
of metadata in the program at a time, either in the application or in Tendermint.

The following sections are possible solutions to the problem of storing transaction
metadata without duplication.

#### Metadata Handling: EvictTx Callback

A possible approach to handling transaction metadata is by adding a new `EvictTx`
ABCI method. Whenever the mempool is removing a transaction, either because it has
reached its TTL or because it failed `RecheckTx`, `EvictTx` would be called with
the transaction hash. This would indicate to the application that it could free any
metadata it was storing about the transaction, such as the computed gas fee.

Eviction callbacks are pretty common in caching systems, so this would be very
well-worn territory.
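
A sketch of what such a method's messages might look like; this is entirely hypothetical, as no such ABCI method exists today:

```proto
// Hypothetical ABCI request notifying the application that the mempool has
// evicted a transaction, identified by its hash, so the application can
// free any metadata it was storing for it.
message RequestEvictTx {
  bytes tx_hash = 1;
}

message ResponseEvictTx {}
```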

#### Metadata Handling: Application-Specific Metadata Field(s)

An alternative approach to handling transaction metadata would be the
addition of a new application-metadata field in the `ResponseCheckTx`. This field
would be a protocol buffer message whose contents were entirely opaque to Tendermint.
The application would be responsible for marshalling and unmarshalling whatever data
it stored in this field. During `PrepareProposal`, the application would be passed
this metadata along with the transaction, allowing the application to use it to perform
any necessary filtering.

If either of these proposed metadata handling techniques is selected, it's likely
useful to enable applications to gossip metadata along with the transaction it is
gossiping. This could easily take the form of an opaque proto message that is
gossiped along with the transaction.

## References

[eth-whitepaper-messages]: https://ethereum.org/en/whitepaper/#messages-and-transactions
[gas-add-pr]: https://github.com/tendermint/tendermint/pull/2360
[cosmos-sdk-gas]: https://github.com/cosmos/cosmos-sdk/blob/c00cedb1427240a730d6eb2be6f7cb01f43869d3/docs/basics/gas-fees.md
[cosmos-sdk-fees]: https://github.com/cosmos/cosmos-sdk/blob/c00cedb1427240a730d6eb2be6f7cb01f43869d3/docs/basics/tx-lifecycle.md#gas-and-fees
[anoma-gas]: https://github.com/anoma/anoma/blob/6974fe1532a59db3574fc02e7f7e65d1216c1eb2/docs/src/specs/ledger.md#transaction-execution
[cosmos-sdk-fee]: https://github.com/cosmos/cosmos-sdk/blob/c00cedb1427240a730d6eb2be6f7cb01f43869d3/types/tx/tx.pb.go#L780-L794
[issue-7750]: https://github.com/tendermint/tendermint/issues/7750
[reap-max-bytes-max-gas]: https://github.com/tendermint/tendermint/blob/1ac58469f32a98f1c0e2905ca1773d9eac7b7103/internal/mempool/types.go#L45
[add-mempool-error]: https://github.com/tendermint/tendermint/blob/205bfca66f6da1b2dded381efb9ad3792f9404cf/rpc/coretypes/responses.go#L239
docs/rfc/rfc-012-custom-indexing.md (new file, 352 lines)
@@ -0,0 +1,352 @@
# RFC 012: Event Indexing Revisited

## Changelog

- 11-Feb-2022: Add terminological notes.
- 10-Feb-2022: Updated from review feedback.
- 07-Feb-2022: Initial draft (@creachadair)

## Abstract

A Tendermint node allows ABCI events associated with block and transaction
processing to be "indexed" into persistent storage. The original Tendermint
implementation provided a fixed, built-in [proprietary indexer][kv-index] for
such events.

In response to user requests to customize indexing, [ADR 065][adr065]
introduced an "event sink" interface that allows developers (at least in
theory) to plug in alternative index storage.

Although ADR-065 was a good first step toward customization, its implementation
model does not satisfy all the user requirements. Moreover, this approach
leaves some existing technical issues with indexing unsolved.

This RFC documents these concerns, and discusses some potential approaches to
solving them. This RFC does _not_ propose a specific technical decision. It is
meant to unify and focus some of the disparate discussions of the topic.


## Background

We begin with some important terminological context. The term "event" in
Tendermint can be confusing, as the same word is used for multiple related but
distinct concepts:

1. **ABCI Events** refer to the key-value metadata attached to blocks and
   transactions by the application. These values are represented by the ABCI
   `Event` protobuf message type.

2. **Consensus Events** refer to the data published by the Tendermint node to
   its pubsub bus in response to various consensus state transitions and other
   important activities, such as round updates, votes, transaction delivery,
   and block completion.

This confusion is compounded because some "consensus event" values also have
"ABCI event" metadata attached to them. Notably, block and transaction items
typically have ABCI metadata assigned by the application.

Indexers and RPC clients subscribed to the pubsub bus receive **consensus
events**, but they identify which ones to care about using query expressions
that match against the **ABCI events** associated with them.

In the discussion that follows, we will use the term **event item** to refer to
a datum published to or received from the pubsub bus, and **ABCI event** or
**event metadata** to refer to the key/value annotations.

**Indexing** in this context means recording the association between certain
ABCI metadata and the blocks or transactions they're attached to. The ABCI
metadata typically carry application-specific details like sender and recipient
addresses, category tags, and so forth, that are not part of consensus but are
used by UI tools to find and display transactions of interest.
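
For concreteness, a sketch of the kind of ABCI event metadata an application might attach to a transaction; the event type and attribute names are illustrative, and the field shapes follow recent Tendermint versions where attribute keys and values are strings:

```go
package main

import (
	"fmt"

	abci "github.com/tendermint/tendermint/abci/types"
)

func main() {
	// A hypothetical "transfer" event an application might attach to a
	// transaction result; an indexer records these keys against the tx.
	ev := abci.Event{
		Type: "transfer",
		Attributes: []abci.EventAttribute{
			{Key: "sender", Value: "addr1...", Index: true},
			{Key: "recipient", Value: "addr2...", Index: true},
		},
	}
	fmt.Println(ev.Type, len(ev.Attributes))
}
```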
|
||||
|
||||
The consensus node records the blocks and transactions as part of its block
|
||||
store, but does not persist the application metadata. Metadata persistence is
|
||||
the task of the indexer, which can be (optionally) enabled by the node
|
||||
operator.
|
||||
|
||||
### History
|
||||
|
||||
The [original indexer][kv-index] built in to Tendermint stored index data in an
|
||||
embedded [`tm-db` database][tmdb] with a proprietary key layout.
|
||||
In [ADR 065][adr065], we noted that this implementation has both performance
|
||||
and scaling problems under load. Moreover, the only practical way to query the
|
||||
index data is via the [query filter language][query] used for event
|
||||
subscription. [Issue #1161][i1161] appears to be a motivational context for that ADR.
|
||||
|
||||
To mitigate both of these concerns, we introduced the [`EventSink`][esink]
|
||||
interface, combining the original transaction and block indexer interfaces
|
||||
along with some service plumbing. Using this interface, a developer can plug
|
||||
in an indexer that uses a more efficient storage engine, and provides a more
|
||||
expressive query language. As a proof-of-concept, we built a [PostgreSQL event
|
||||
sink][psql] that exports data to a [PostgreSQL database][postgres].
|
||||
|
||||
Although this approach addressed some of the immediate concerns, there are
|
||||
several issues for custom indexing that have not been fully addressed. Here we
|
||||
will discuss them in more detail.
|
||||
|
||||
For further context, including links to user reports and related work, see also
|
||||
the [Pluggable custom event indexing tracking issue][i7135] issue.
|
||||
|
||||
### Issue 1: Tight Coupling
|
||||
|
||||
The `EventSink` interface supports multiple implementations, but plugging in
|
||||
implementations still requires tight integration with the node. In particular:
|
||||
|
||||
- Any custom indexer must either be written in Go and compiled in to the
|
||||
Tendermint binary, or the developer must write a Go shim to communicate with
|
||||
the implementation and build that into the Tendermint binary.
|
||||
|
||||
- This means to support a custom indexer, it either has to be integrated into
|
||||
the Tendermint core repository, or every installation that uses that indexer
|
||||
must fetch or build a patched version of Tendermint.
|
||||
|
||||
The problem with integrating indexers into Tendermint Core is that every user
|
||||
of Tendermint Core takes a dependency on all supported indexers, including
|
||||
those they never use. Even if the unused code is disabled with build tags,
|
||||
users have to remember to do this or potentially be exposed to security issues
|
||||
that may arise in any of the custom indexers. This is a risk for Tendermint,
|
||||
which is a trust-critical component of all applications built on it.
|
||||
|
||||
The problem with _not_ integrating indexers into Tendermint Core is that any
|
||||
developer who wants to use a particular indexer must now fetch or build a
|
||||
patched version of the core code that includes the custom indexer. Besides
|
||||
being inconvenient, this makes it harder for users to upgrade their node, since
|
||||
they need to either re-apply their patches directly or wait for an intermediary
|
||||
to do it for them.
|
||||
|
||||
Even for developers who have written their applications in Go and link with the
|
||||
consensus node directly (e.g., using the [Cosmos SDK][sdk]), these issues add a
|
||||
potentially significant complication to the build process.
|
||||
|
||||
### Issue 2: Legacy Compatibility
|
||||
|
||||
The `EventSink` interface retains several limitations of the original
|
||||
proprietary indexer. These include:
|
||||
|
||||
- The indexer has no control over which event items are reported. Only the
|
||||
exact block and transaction events that were reported to the original indexer
|
||||
are reported to a custom indexer.
|
||||
|
||||
- The interface requires the implementation to define methods for the legacy
|
||||
search and query API. This requirement comes from the integation with the
|
||||
[event subscription RPC API][event-rpc], but actually supporting these
|
||||
methods is not trivial.
|
||||
|
||||
At present, only the original KV indexer implements the query methods. Even the
|
||||
proof-of-concept PostgreSQL implementation simply reports errors for all calls
|
||||
to these methods.
|
||||
|
||||
Even for a plugin written in Go, implementing these methods "correctly" would
|
||||
require parsing and translating the custom query language over whatever storage
|
||||
platform the indexer uses.
|
||||
|
||||
For a plugin _not_ written in Go, even beyond the cost of integration the
|
||||
developer would have to re-implement the entire query language.
|
||||
|
||||
### Issue 3: Indexing Delays Consensus
|
||||
|
||||
Within the node, indexing hooks in to the same internal pubsub dispatcher that
|
||||
is used to export event items to the [event subscription RPC API][event-rpc].
|
||||
In contrast with RPC subscribers, however, indexing is a "privileged"
|
||||
subscriber: If an RPC subscriber is "too slow", the node may terminate the
|
||||
subscription and disconnect the client. That means that RPC subscribers may
|
||||
lose (miss) event items. The indexer, however, is "unbuffered", and the
|
||||
publisher will never drop or disconnect from it. If the indexer is slow, the
|
||||
publisher will block until it returns, to ensure that no event items are lost.
|
||||
|
||||
In practice, this means that the performance of the indexer has a direct effect
|
||||
on the performance of the consensus node: If the indexer is slow or stalls, it
|
||||
will slow or halt the progress of consensus. Users have already reported this
|
||||
problem even with the built-in indexer (see, for example, [#7247][i7247]).
|
||||
Extending this concern to arbitrary user-defined custom indexers gives that
|
||||
risk a much larger surface area.

## Discussion

It is not possible to simultaneously guarantee that publishing event items will
not delay consensus, and also that all event items of interest are always
completely indexed.

Therefore, our choice is between eliminating delay (and minimizing loss) or
eliminating loss (and minimizing delay). Currently, we take the second
approach, which has led to user complaints about consensus delays due to
indexing and subscription overhead.

- If we agree that consensus performance supersedes index completeness, our
  design choices are to constrain the likelihood and frequency of missing
  event items.

- If we decide that index completeness supersedes consensus performance, our
  option is to minimize overhead on the event delivery path and document that
  indexer plugins constrain the rate of consensus.

Since we have user reports requesting both properties, we have to choose one or
the other. Since the primary job of the consensus engine is to correctly,
robustly, reliably, and efficiently replicate application state across the
network, I believe the correct choice is to favor consensus performance.

An important consideration for this decision is that a node does not index
application metadata separately: If indexing is disabled, there is no built-in
mechanism to go back and replay or reconstruct the data that an indexer would
have stored. The node _does_ store the blockchain itself (i.e., the blocks and
their transactions), so potentially some use cases currently handled by the
indexer could be handled by the node. For example, allowing clients to ask
whether a given transaction ID has been committed to a block could in principle
be done without an indexer, since it does not depend on application metadata.

Inevitably, a question will arise whether we could implement both strategies
and toggle between them with a flag. That would be a worst-case scenario,
requiring us to maintain the complexity of two very different operational
concerns. If our goal is that Tendermint should be as simple, efficient, and
trustworthy as possible, there is not a strong case for making these options
configurable: We should pick a side and commit to it.

### Design Principles

Although there is no unique "best" solution to the issues described above,
there are some specific principles that a solution should include:

1. **A custom indexer should not require integration into Tendermint core.** A
   developer or node operator can create, build, deploy, and use a custom
   indexer with a stock build of the Tendermint consensus node.

2. **Custom indexers cannot stall consensus.** An indexer that is slow or
   stalls cannot slow down or prevent core consensus from making progress.

   The plugin interface must give node operators control over the tolerances
   for acceptable indexer performance, and the means to detect when indexers
   are falling outside those tolerances, but indexer failures should "fail
   safe" with respect to consensus (even if that means the indexer may miss
   some data in sufficiently extreme circumstances). A sketch of this
   fail-safe publishing pattern follows this list.

3. **Custom indexers control which event items they index.** A custom indexer
   is not limited to only the current transaction and block events, but can
   observe any event item published by the node.

4. **Custom indexing is forward-compatible.** Adding new event item types or
   metadata to the consensus node should not require existing custom indexers
   to be rebuilt or modified, unless they want to take advantage of the new
   data.

5. **Indexers are responsible for answering queries.** An indexer plugin is not
   required to support the legacy query filter language, nor to be compatible
   with the legacy RPC endpoints for accessing them. Any APIs for clients to
   query a custom index are the responsibility of the indexer, not the node.
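
As an illustration of principle 2, here is a minimal, hypothetical Go sketch of
a "fail-safe" publisher: event items are handed to the indexer through a
bounded queue, and when the indexer falls behind, the publisher drops the item
and counts the loss instead of blocking consensus. The queue capacity and drop
counter stand in for the operator-controlled tolerances; none of this is
existing Tendermint API.

```go
package main

import "fmt"

// event stands in for a published event item.
type event struct{ id int }

// failSafeSink delivers events to an indexer without ever blocking the
// caller: when the buffer is full, the event is dropped and counted.
type failSafeSink struct {
	queue   chan event // bounded buffer; capacity is an operator tolerance
	dropped int        // observable signal that the indexer is too slow
}

func newFailSafeSink(capacity int) *failSafeSink {
	return &failSafeSink{queue: make(chan event, capacity)}
}

// publish never blocks consensus: it either enqueues or drops.
func (s *failSafeSink) publish(ev event) {
	select {
	case s.queue <- ev:
	default:
		s.dropped++ // fail safe: consensus proceeds, the indexer misses data
	}
}

func main() {
	sink := newFailSafeSink(2)
	for i := 0; i < 5; i++ {
		sink.publish(event{id: i}) // no consumer running: queue fills, rest drop
	}
	fmt.Println("queued:", len(sink.queue), "dropped:", sink.dropped)
}
```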

### Open Questions

Given the constraints outlined above, there are important design questions we
must answer to guide any specific changes:

1. **What is an acceptable probability that, given sufficiently extreme
   operational issues, an indexer might miss some number of events?**

   There are two parts to this question: One is what constitutes an extreme
   operational problem, the other is how likely we are to miss some number of
   event items.

   - If the consensus is that no event item must ever be missed, no matter how
     bad the operational circumstances, then we _must_ accept that indexing can
     slow or halt consensus arbitrarily. It is impossible to guarantee complete
     index coverage without potentially unbounded delays.

   - Otherwise, how much data can we afford to lose, and how often? For
     example, if we can ensure no event item will be lost unless the indexer
     halts for at least five minutes, is that acceptable? What probabilities
     and time ranges are reasonable for real production environments?

2. **What level of operational overhead is acceptable to impose on node
   operators to support indexing?**

   Are node operators willing to configure and run custom indexers as
   sidecar-type processes alongside a node? How much indexer setup, above and
   beyond the work of setting up the underlying node in isolation, is
   tractable in production networks?

   The answer to this question also informs the question of whether we should
   keep an "in-process" indexing option, and to what extent that option needs
   to satisfy the suggested design principles.

   Relatedly, to what extent do we need to be concerned about the cost of
   encoding and sending event items to an external process (e.g., as JSON
   blobs or protobuf wire messages)? Given that the node already encodes event
   items as JSON for subscription purposes, the overhead would be negligible
   for the node itself, but the indexer would have to decode the results
   before processing them.

3. **What (if any) query APIs does the consensus node need to export,
   independent of the indexer implementation?**

   One typical example is whether the node should be able to answer queries
   like "is this transaction ID in a block?" Currently, a node cannot answer
   this query _unless_ it runs the built-in KV indexer. Does the node need to
   continue to support that query even for nodes that disable the KV indexer,
   or which use a custom indexer?

### Informal Design Intent

The design principles described above implicate several components of the
Tendermint node, beyond just the indexer. In the context of [ADR 075][adr075],
we are re-working the RPC event subscription API to improve some of the UX
issues discussed above for RPC clients. It is our expectation that a solution
for pluggable custom indexing will take advantage of some of the same work.

On that basis, the design approach I am considering for custom indexing looks
something like this (subject to refinement):

1. A custom indexer runs as a separate process from the node.

2. The indexer subscribes to event items via the ADR 075 events API (see the
   sketch after this list).

   This means indexers would receive event payloads as JSON rather than
   protobuf, but since we already have to support JSON encoding for the RPC
   interface anyway, that should not increase complexity for the node.

3. The existing PostgreSQL indexer gets reworked to have this form, and is no
   longer built as part of the Tendermint core binary.

   We can retain the code in the core repository as a proof-of-concept, or
   perhaps create a separate repository with contributed indexers and move it
   there.

4. (Possibly) Deprecate and remove the legacy KV indexer, or disable it by
   default. If we decide to remove it, we can also remove the legacy RPC
   endpoints for querying the KV indexer.

   If we plan to do this, we should also investigate providing a way for
   clients to query whether a given transaction ID has landed in a block. That
   serves a common need, and currently _only_ works if the KV indexer is
   enabled, but could be addressed more simply using the other data a node
   already has stored, without having to answer more general queries.
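
Concretely, a minimal Go sketch of such an out-of-process indexer follows. It
assumes a hypothetical ADR-075-style long-polling `/events` endpoint that
returns a JSON batch of items plus a cursor to resume from; the endpoint path,
field names, and `runIndexer` helper are illustrative, not the final API.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"net/url"
)

// eventBatch mirrors the general shape of an ADR-075-style reply:
// a page of event items plus a cursor to resume from.
type eventBatch struct {
	Items  []json.RawMessage `json:"items"`
	Newest string            `json:"newest"`
}

// runIndexer long-polls the node's events endpoint and hands each raw
// event item to index. Being a separate process, a slow index call
// delays only this loop, never the node's consensus.
func runIndexer(base string, index func(json.RawMessage) error) error {
	cursor := ""
	for {
		q := url.Values{"after": {cursor}}
		resp, err := http.Get(base + "/events?" + q.Encode())
		if err != nil {
			return err
		}
		var batch eventBatch
		err = json.NewDecoder(resp.Body).Decode(&batch)
		resp.Body.Close()
		if err != nil {
			return err
		}
		for _, item := range batch.Items {
			if err := index(item); err != nil {
				return err
			}
		}
		cursor = batch.Newest
	}
}

func main() {
	err := runIndexer("http://localhost:26657", func(item json.RawMessage) error {
		log.Printf("indexing %d bytes", len(item))
		return nil // a real indexer would write to its own storage here
	})
	log.Fatal(err)
}
```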

## References

- [ADR 065: Custom Event Indexing][adr065]
- [ADR 075: RPC Event Subscription Interface][adr075]
- [Cosmos SDK][sdk]
- [Event subscription RPC][event-rpc]
- [KV transaction indexer][kv-index]
- [Pluggable custom event indexing][i7135] (#7135)
- [PostgreSQL event sink][psql]
- [PostgreSQL database][postgres]
- [Query filter language][query]
- [Stream events to postgres for indexing][i1161] (#1161)
- [Unbuffered event subscription slow down the consensus][i7247] (#7247)
- [`EventSink` interface][esink]
- [`tm-db` library][tmdb]

[adr065]: https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-065-custom-event-indexing.md
[adr075]: https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-075-rpc-subscription.md
[esink]: https://pkg.go.dev/github.com/tendermint/tendermint/internal/state/indexer#EventSink
[event-rpc]: https://docs.tendermint.com/master/rpc/#/Websocket/subscribe
[i1161]: https://github.com/tendermint/tendermint/issues/1161
[i7135]: https://github.com/tendermint/tendermint/issues/7135
[i7247]: https://github.com/tendermint/tendermint/issues/7247
[kv-index]: https://github.com/tendermint/tendermint/blob/master/internal/state/indexer/tx/kv
[postgres]: https://postgresql.org/
[psql]: https://github.com/tendermint/tendermint/blob/master/internal/state/indexer/sink/psql
[query]: https://pkg.go.dev/github.com/tendermint/tendermint/internal/pubsub/query/syntax
[sdk]: https://github.com/cosmos/cosmos-sdk
[tmdb]: https://pkg.go.dev/github.com/tendermint/tm-db#DB
253 docs/rfc/rfc-013-abci++.md Normal file
@@ -0,0 +1,253 @@
# RFC 013: ABCI++

## Changelog

- 2020-01-11: initialized
- 2021-02-11: Migrate RFC to tendermint repo (Originally [RFC 004](https://github.com/tendermint/spec/pull/254))

## Author(s)

- Dev (@valardragon)
- Sunny (@sunnya97)

## Context

ABCI is the interface between the consensus engine and the application.
It defines when the application can talk to consensus during the execution of a blockchain.
At the moment, the application can only act at one phase in consensus, immediately after a block has been finalized.

This restriction on the application prohibits numerous features for the application, including many scalability improvements that are now better understood than when ABCI was first written.
For example, many of the scalability proposals can be boiled down to "Make the miner / block proposers / validators do work, so the network does not have to".
This includes optimizations such as tx-level signature aggregation, state transition proofs, etc.
Furthermore, many new security properties cannot be achieved in the current paradigm, as the application cannot require validators to do more than just finalize txs.
This includes features such as threshold cryptography, and guaranteed IBC connection attempts.
We propose introducing three new phases to ABCI to enable these new features, and renaming the existing methods for block execution.

#### Prepare Proposal phase

This phase aims to allow the block proposer to perform more computation, to reduce the load on all other full nodes and light clients in the network.
It is intended to enable features such as batch optimizations on the transaction data (e.g., signature aggregation, zk-rollup-style validity proofs), enabling stateless blockchains with validator-provided authentication paths, etc.

This new phase will only be executed by the block proposer. The application will take in the block header and raw transaction data output by the consensus engine's mempool. It will then return block data that is prepared for gossip on the network, and additional fields to include in the block header.

#### Process Proposal Phase

This phase aims to allow applications to determine the validity of a new block proposal, and to execute computation on the block data, prior to the block's finalization.
It is intended to enable applications to reject block proposals with invalid data, and to enable alternate pipelined execution models (such as Ethereum-style immediate execution).

This phase will be executed by all full nodes upon receiving a block, though on the application side it can do more work in the event that the current node is a validator.

#### Vote Extension Phase

This phase aims to allow applications to require their validators to do more than just validate blocks.
Example use cases of this include validator-determined price oracles, validator-guaranteed IBC connection attempts, and validator-based threshold crypto.

This adds an app-determined data field that every validator must include with their vote, and these fields will thus appear in the header.

#### Rename {BeginBlock, [DeliverTx], EndBlock} to FinalizeBlock

The prior phases give the application more flexibility in its execution model for a block, and they obsolete the current methods for how the consensus engine relates the block data to the state machine. Thus we refactor the existing methods to better reflect what is happening in the new ABCI model.

This rename doesn't on its own enable anything new, but instead improves naming to clarify the expectations from the application in this new communication model. The existing ABCI methods `BeginBlock, [DeliverTx], EndBlock` are renamed to a single method called `FinalizeBlock`.

#### Summary

We include, at the end of this document, a more detailed list of features / scaling improvements that are blocked, and which new phases resolve them.

<img src="images/abci.png" style="float: left; width: 40%;" /> <img src="images/abci++.png" style="float: right; width: 40%;" />
On the top is the existing definition of ABCI, and on the bottom is the proposed ABCI++.

## Proposal

Below we suggest an API to add these three new phases.
In this document, the final round of voting is sometimes referred to as "precommit", reflecting how it acts in the Tendermint case.

### Prepare Proposal

*Note: the APIs in this section will change after Vote Extensions; we list the adjusted APIs further on in the proposal.*

The Prepare Proposal phase allows the block proposer to perform application-dependent work in a block, to lower the amount of work the rest of the network must do. This enables batch optimizations to a block, which have been empirically demonstrated to be a key component for scaling. This phase introduces the following ABCI method:

```rust
fn PrepareProposal(Block) -> BlockData
```

where `BlockData` is a type alias for however data is internally stored within the consensus engine. In Tendermint Core today, this is `[]Tx`.

The application may read the entire block proposal, and mutate the block data fields. Mutated transactions will still get removed from the mempool later on, as the mempool rechecks all transactions after a block is executed.

The `PrepareProposal` API will be modified in the vote extensions section, to allow the application to modify the header.
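
To illustrate what an application-side implementation might look like, here is a hypothetical Go sketch of `PrepareProposal` performing a simple proposer-side batch optimization (reordering transactions and enforcing a size budget). The `Tx`, `Block`, and `prepareProposal` names are stand-ins for whatever the real ABCI++ bindings end up being.

```go
package main

import (
	"fmt"
	"sort"
)

// Tx and Block mirror the pseudocode above: BlockData is just a
// list of raw transactions in Tendermint Core today.
type Tx []byte

type Block struct {
	Data []Tx
}

// prepareProposal is a hypothetical application-side PrepareProposal:
// the proposer reorders transactions (here: smallest first) and drops
// any that would exceed a size budget, before the block is gossiped.
func prepareProposal(b Block, maxBytes int) []Tx {
	txs := append([]Tx(nil), b.Data...)
	sort.Slice(txs, func(i, j int) bool { return len(txs[i]) < len(txs[j]) })

	var out []Tx
	used := 0
	for _, tx := range txs {
		if used+len(tx) > maxBytes {
			break
		}
		out = append(out, tx)
		used += len(tx)
	}
	return out
}

func main() {
	block := Block{Data: []Tx{Tx("a-very-long-transaction"), Tx("tiny"), Tx("medium-tx")}}
	for _, tx := range prepareProposal(block, 16) {
		fmt.Println(string(tx))
	}
}
```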

### Process Proposal

The Process Proposal phase sends the block data to the state machine, prior to running the last round of votes on the state machine. This enables features such as allowing validators to reject a block according to whether the state machine deems it valid, and changing the block execution pipeline.

We introduce three new methods,

```rust
fn VerifyHeader(header: Header, isValidator: bool) -> ResponseVerifyHeader {...}
fn ProcessProposal(block: Block) -> ResponseProcessProposal {...}
fn RevertProposal(height: usize, round: usize) {...}
```

where

```rust
struct ResponseVerifyHeader {
    accept_header: bool,
    evidence: Vec<Evidence>
}
struct ResponseProcessProposal {
    accept_block: bool,
    evidence: Vec<Evidence>
}
```

Upon receiving a block header, every validator runs `VerifyHeader(header, isValidator)`. The reason `VerifyHeader` is split from `ProcessProposal` is due to the later sections on Prepare Proposal and Vote Extensions, where there may be application-dependent data in the header that must be verified before accepting the header.
If the returned `ResponseVerifyHeader.accept_header` is false, then the validator must precommit nil on this block, and reject all other precommits on this block. `ResponseVerifyHeader.evidence` is appended to the validator's local `EvidencePool`.

Upon receiving an entire block proposal (in the current implementation, all "block parts"), every validator runs `ProcessProposal(block)`. If the returned `ResponseProcessProposal.accept_block` is false, then the validator must precommit nil on this block, and reject all other precommits on this block. `ResponseProcessProposal.evidence` is appended to the validator's local `EvidencePool`.

Once a validator knows that consensus has failed to be achieved for a given block, it must run `RevertProposal(block.height, block.round)`, in order to signal to the application to revert any potentially mutative state changes it may have made. In Tendermint, this occurs when incrementing rounds.

**RFC**: How do we handle the scenario where honest node A finalized on round x, and honest node B finalized on round x + 1? (e.g., when 2f precommits are publicly known, and a validator precommits themselves but doesn't broadcast, but they increment rounds) Is this a real concern? The state root derived could change if everyone finalizes on round x+1 rather than round x, as the state machine can depend non-uniformly on the timestamp.

The application is expected to cache the block data for later execution.

The `isValidator` flag is set according to whether the current node is a validator or a full node. This is intended to allow for beginning validator-dependent computation that will be included later in vote extensions. (An example of this is threshold decryption of ciphertexts.)
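
Since the application is expected to cache proposed block data until it is finalized or reverted, the bookkeeping might look like the following hypothetical Go sketch, where `ProcessProposal` stores data keyed by height and round and `RevertProposal` discards it; the types are illustrative, not the real ABCI bindings.

```go
package main

import "fmt"

// proposalKey identifies a cached proposal by consensus position.
type proposalKey struct {
	height, round uint64
}

// proposalCache holds block data between ProcessProposal and
// FinalizeBlock, so a reverted proposal can simply be dropped.
type proposalCache struct {
	pending map[proposalKey][]byte
}

func newProposalCache() *proposalCache {
	return &proposalCache{pending: make(map[proposalKey][]byte)}
}

// processProposal caches the block data after the app validates it.
func (c *proposalCache) processProposal(height, round uint64, blockData []byte) {
	c.pending[proposalKey{height, round}] = blockData
}

// revertProposal signals that consensus moved on: discard the cached data.
func (c *proposalCache) revertProposal(height, round uint64) {
	delete(c.pending, proposalKey{height, round})
}

// take retrieves (and clears) the data for the decided proposal,
// for use by FinalizeBlock.
func (c *proposalCache) take(height, round uint64) ([]byte, bool) {
	k := proposalKey{height, round}
	data, ok := c.pending[k]
	delete(c.pending, k)
	return data, ok
}

func main() {
	c := newProposalCache()
	c.processProposal(10, 0, []byte("block@10/0"))
	c.revertProposal(10, 0) // round incremented; proposal abandoned
	c.processProposal(10, 1, []byte("block@10/1"))
	if data, ok := c.take(10, 1); ok {
		fmt.Println("finalizing:", string(data))
	}
}
```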

### DeliverTx rename to FinalizeBlock

After implementing `ProcessProposal`, txs no longer need to be delivered during the block execution phase: they are already in the state machine. Thus `BeginBlock, DeliverTx, EndBlock` can all be replaced with a single ABCI method for executing the block. Internally the application may still structure its method for executing the block as `BeginBlock, DeliverTx, EndBlock`. However, it is overly restrictive to enforce that the block be executed only after it is finalized; there are multiple other, very reasonable pipelined execution models one could adopt. So instead of `ExecuteBlock`, we suggest calling this succession of methods `FinalizeBlock`. We propose the following API:

Replace the `BeginBlock, DeliverTx, EndBlock` ABCI methods with the following method

```rust
fn FinalizeBlock() -> ResponseFinalizeBlock
```

where `ResponseFinalizeBlock` has the following API, in terms of what already exists

```rust
struct ResponseFinalizeBlock {
    updates: ResponseEndBlock,
    tx_results: Vec<ResponseDeliverTx>
}
```

`ResponseEndBlock` should then be renamed to `ConsensusUpdates` and `ResponseDeliverTx` should be renamed to `ResponseTx`.
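
As noted under Negative consequences below, an existing application can recover the old behavior with a thin wrapper: `FinalizeBlock` black-box calls `BeginBlock`, `DeliverTx`, and `EndBlock` over the cached block data. A hypothetical Go sketch, with heavily simplified stand-in types:

```go
package main

import "fmt"

// Simplified stand-ins for the existing ABCI request/response types;
// the real bindings carry much more data.
type (
	ResponseDeliverTx     struct{ Log string }
	ResponseEndBlock      struct{}
	ResponseFinalizeBlock struct {
		Updates   ResponseEndBlock
		TxResults []ResponseDeliverTx
	}
)

// legacyApp is an application written against BeginBlock/DeliverTx/EndBlock.
type legacyApp struct{}

func (legacyApp) BeginBlock() {}
func (legacyApp) DeliverTx(tx []byte) ResponseDeliverTx {
	return ResponseDeliverTx{Log: "delivered " + string(tx)}
}
func (legacyApp) EndBlock() ResponseEndBlock { return ResponseEndBlock{} }

// finalizeBlock wraps the legacy methods: given the block data cached
// during ProcessProposal, it replays the old three-phase execution.
func finalizeBlock(app legacyApp, cachedTxs [][]byte) ResponseFinalizeBlock {
	app.BeginBlock()
	var results []ResponseDeliverTx
	for _, tx := range cachedTxs {
		results = append(results, app.DeliverTx(tx))
	}
	return ResponseFinalizeBlock{Updates: app.EndBlock(), TxResults: results}
}

func main() {
	resp := finalizeBlock(legacyApp{}, [][]byte{[]byte("tx1"), []byte("tx2")})
	for _, r := range resp.TxResults {
		fmt.Println(r.Log)
	}
}
```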

### Vote Extensions

The Vote Extensions phase allows applications to force their validators to do more than just validate within consensus. This is done by allowing the application to add more data to their votes, in the final round of voting (namely the precommit).
This additional application data will then appear in the block header.

First we discuss the API changes to the vote struct directly

```rust
fn ExtendVote(height: u64, round: u64) -> (UnsignedAppVoteData, SelfAuthenticatingAppData)
fn VerifyVoteExtension(signed_app_vote_data: Vec<u8>, self_authenticating_app_vote_data: Vec<u8>) -> bool
```

There are two types of data that the application can enforce validators to include with their vote.
There is data that the app needs the validator to sign over in their vote, and there can be self-authenticating vote data. Self-authenticating here means that the application, upon seeing these bytes, knows they are valid, came from the validator, and are non-malleable. We give an example of each type of vote data here, to make their roles clearer.

- Unsigned app vote data: A use case of this is if you wanted validator-backed oracles, where each validator independently signs some oracle data in their vote, and the median of these values is used on chain. Thus we leverage consensus' signing process for convenience, and use that same key to sign the oracle data.
- Self-authenticating vote data: A use case of this is in threshold random beacons. Every validator produces a threshold beacon share. This threshold beacon share can be verified by any node in the network, given the share and the validator's public key (which is not the same as its consensus public key). However, this decryption share will not make it into the subsequent block's header. The shares will be aggregated by the subsequent block proposer to get a single random beacon value that will appear in the subsequent block's header. Everyone can then verify that this aggregated value came from the requisite threshold of the validator set, without increasing the bandwidth for full nodes or light clients. To achieve this goal, the self-authenticating vote data cannot be signed over by the consensus key along with the rest of the vote, as that would require all full nodes & light clients to know this data in order to verify the vote.

The `CanonicalVote` struct will accommodate the `UnsignedAppVoteData` field by adding another string to its encoding, after the `chain-id`. This should not interfere with existing hardware signing integrations, as it does not affect the constant offset for the `height` and `round`, and the vote size does not have an explicit upper bound. (So adding this unsigned app vote data field is equivalent, from the HSM's perspective, to having a super-long chain ID.)

**RFC**: Please comment if you think it will be fine to elongate the message the HSM signs in this way, or if we need to explore pre-hashing the app vote data.

The flow of these methods is that when a validator has to precommit, Tendermint will first produce a precommit canonical vote without the application vote data. It will then pass it to the application, which will return unsigned application vote data and self-authenticating application vote data. Tendermint will bundle the `unsigned_application_vote_data` into the canonical vote, and pass it to the HSM to sign. Finally it will package the self-authenticating app vote data and the `signed_vote_data` together into one final Vote struct to be passed around the network.
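
To ground the oracle example above, here is a hypothetical Go sketch of the application side of `ExtendVote` / `VerifyVoteExtension` for a validator price oracle. The fixed-width encoding and range check are illustrative assumptions; in the real design, the extension bytes would be folded into the canonical vote and covered by the validator's signature, as described above.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// extendVote is the app-side hook: the validator attaches its locally
// observed price (in micro-units) as unsigned app vote data. Consensus
// will fold these bytes into the canonical vote before the HSM signs.
func extendVote(height, round uint64, observedPrice uint64) []byte {
	ext := make([]byte, 8)
	binary.BigEndian.PutUint64(ext, observedPrice)
	return ext
}

// verifyVoteExtension runs on peers' vote extensions: here we only
// sanity-check the shape and range; the vote signature itself already
// covers these bytes.
func verifyVoteExtension(ext []byte) bool {
	if len(ext) != 8 {
		return false
	}
	price := binary.BigEndian.Uint64(ext)
	return price > 0 && price < 1_000_000_000 // reject absurd quotes
}

func main() {
	ext := extendVote(100, 0, 42_315) // e.g., $0.042315 in micro-units
	fmt.Println("extension valid:", verifyVoteExtension(ext))
}
```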

#### Changes to Prepare Proposal Phase

There are many use cases where the additional data from vote extensions can be batch optimized.
This is mainly of interest when the votes include self-authenticating app vote data that can be batched together, or when the unsigned app vote data is the same across all votes.
To allow for this, we change the PrepareProposal API to the following

```rust
fn PrepareProposal(Block, UnbatchedHeader) -> (BlockData, Header)
```

where `UnbatchedHeader` essentially contains a "RawCommit", and the `Header` contains a batch-optimized `commit` and an additional "Application Data" field in its root. This will involve a number of changes to core data structures, which will be gone over in the ADR.
The `UnbatchedHeader` and "RawCommit" will never be broadcast; they will be completely internal to consensus.

#### Inter-process communication (IPC) effects

For brevity in the exposition above, we did not discuss the trade-offs that may occur in interprocess communication delays that these changes will introduce.
These new ABCI methods add more locations where the application must communicate with the consensus engine.
In most configurations, we expect that the consensus engine and the application will be either statically or dynamically linked, so all communication is a matter of at most adjusting the memory model the data is laid out within.
This memory model conversion is typically considered negligible, as delay here is measured on the order of microseconds at most, whereas we face millisecond delays due to cryptography and network overheads.
Thus we ignore the overhead in the case of linked libraries.

In the case where the consensus engine and the application are run in separate processes, and thus communicate with a form of inter-process communication (IPC), the delays can easily become on the order of milliseconds based upon the data sent. Thus it's important to consider what's happening here.
We go through this phase by phase.

##### Prepare proposal IPC overhead

This requires a round of IPC communication, where both directions are quite large. Namely, the proposer communicates an entire block to the application.
However, this can be mitigated by splitting up `PrepareProposal` into two distinct, async methods, one for the block IPC communication, and one for the Header IPC communication.

Then for chains where the block data does not depend on the header data, the block data IPC communication can proceed in parallel to the prior block's voting phase. (As a node can know whether or not it's the leader in the next round.)

Furthermore, this IPC communication is expected to be quite low relative to the amount of p2p gossip time it takes to send the block data around the network, so this is perhaps a premature concern until more sophisticated block gossip protocols are implemented.

##### Process Proposal IPC overhead

This phase changes the amount of time available for the consensus engine to deliver a block's data to the state machine.
Before, the block data for block N would be delivered to the state machine upon receiving a commit for block N and then be executed.
The state machine would respond after executing the txs and before prevoting.
The time for block delivery from the consensus engine to the state machine after this change is from the time of receiving block proposal N to the time of precommit on proposal N.
It is expected that this difference is unimportant in practice, as this time is in parallel to one round of p2p communication for prevoting, which is expected to be significantly less than the time for the consensus engine to deliver a block to the state machine.

##### Vote Extension IPC overhead

This has a small amount of data, but does incur an IPC round-trip delay. This IPC round-trip delay is pretty negligible as compared to the variance in vote gossip time. (The IPC delay is typically on the order of 10 microseconds.)

## Status

Proposed

## Consequences

### Positive

- Enables a large number of new features for applications
- Supports both immediate and delayed execution models
- Allows application-specific data from each validator
- Allows for batch optimizations across txs, and votes

### Negative

- This is a breaking change to all existing ABCI clients; however, the application should be able to use a thin wrapper to replicate existing ABCI behavior.
  - PrepareProposal - can be a no-op
  - Process Proposal - has to cache the block, but can otherwise be a no-op
  - Vote Extensions - can be a no-op
  - Finalize Block - can black-box call BeginBlock, DeliverTx, EndBlock given the cached block data
- Vote Extensions add more complexity to core Tendermint data structures
- Allowing alternate execution models will lead to a proliferation of new ways for applications to violate expected guarantees.

### Neutral

- IPC overhead considerations change, but mostly for the better

## References

Reference for IPC delay constants: <http://pages.cs.wisc.edu/~adityav/Evaluation_of_Inter_Process_Communication_Mechanisms.pdf>

### Short list of blocked features / scaling improvements with required ABCI++ Phases

| Feature | PrepareProposal | ProcessProposal | Vote Extensions |
| :--- | :---: | :---: | :---: |
| Tx based signature aggregation | X | | |
| SNARK proof of valid state transition | X | | |
| Validator provided authentication paths in stateless blockchains | X | | |
| Immediate Execution | | X | |
| Simple soft forks | | X | |
| Validator guaranteed IBC connection attempts | | | X |
| Validator based price oracles | | | X |
| Immediate Execution with increased time for block execution | X | X | X |
| Threshold Encrypted txs | X | X | X |
94 docs/rfc/rfc-014-semantic-versioning.md Normal file
@@ -0,0 +1,94 @@
# RFC 014: Semantic Versioning

## Changelog

- 2021-11-19: Initial Draft
- 2021-02-11: Migrate RFC to tendermint repo (Originally [RFC 006](https://github.com/tendermint/spec/pull/365))

## Author(s)

- Callum Waters @cmwaters

## Context

We use versioning as an instrument to hold a set of promises to users and signal when such a set changes and how. In the conventional sense of a Go library, major versions signal that the public Go APIs have changed in a breaking way and thus require the users of such libraries to change their usage accordingly. Tendermint is a bit different in that there are multiple users: application developers (both in-process and out-of-process), node operators, and external clients. More importantly, both how these users interact with Tendermint and what they find important differ from the conventional library case.

This document attempts to encapsulate the discussions around versioning in Tendermint and draws upon them to propose a guide to how Tendermint uses versioning to make promises to its users.

For a versioning policy to make sense, we must also address the intended frequency of breaking changes. The strictest guarantees in the world will not help users if we plan to break them with every release.

Finally, I would like to remark that this RFC only addresses the "what", as in what the rules for versioning are. The "how" of Tendermint implementing the versioning rules we choose will be addressed in a later RFC on Soft Upgrades.

## Discussion

We first begin with a round-up of the various users and a set of assumptions on what these users expect from Tendermint in regards to versioning:

1. **Application Developers**, those that use the ABCI to build applications on top of Tendermint, are chiefly concerned with that API. Breaking changes will force developers to modify large portions of their codebase to accommodate the changes. Some ABCI changes, such as introducing priority for the mempool, don't require any effort and can be lazily adopted, whilst changes like ABCI++ may force applications to redesign their entire execution system. It's also worth considering that the APIs for Go developers differ from those for developers of other languages. The former can use the entire Tendermint library, most notably the local RPC methods, and so the team must be wary of all public Go APIs.
2. **Node Operators**, those running node infrastructure, are predominantly concerned with downtime, the complexity and frequency of upgrading, and avoiding data loss. They may also be concerned about changes that may break the scripts and tooling they use to supervise their nodes.
3. **External Clients** are those that perform any of the following:
    - consume the RPC endpoints of nodes like `/block`
    - subscribe to the event stream
    - make queries to the indexer

    This set is concerned with chain upgrades, which will impact their ability to query state and block data as well as broadcast transactions. Examples include wallets and block explorers.

4. **IBC module and relayers**. The developers of IBC and consumers of their software are concerned about changes that may affect a chain's ability to send arbitrary messages to another chain. Specifically, these users are affected by any breaking changes to the light client verification algorithm.

Although we present them here as having different concerns, in a broader sense these user groups share a concern for the end users of applications. A crucial principle guiding this RFC is that **the ability for chains to provide continual service is more important than the actual upgrade burden put on the developers of these chains**. This means some extra burden for application developers is tolerable if it minimizes or substantially reduces downtime for the end user.

### Modes of Interprocess Communication

Tendermint has two primary mechanisms to communicate with other processes: RPC and P2P. The division marks the boundary between the internal and external components of the network:

- The P2P layer is used in all cases that nodes (of any type) need to communicate with one another.
- The RPC interface is for any outside process that wants to communicate with a node.

The design principle here is that **communication via RPC is to a trusted source**, and thus the RPC service prioritizes inspection rather than verification. The P2P interface is the primary medium for verification.

As an example, an in-browser light client would verify headers (and perhaps application state) via the p2p layer, and then pass information along to the client via RPC (or potentially directly via a separate API).

The main exceptions to this are the IBC module and relayers, which are external to the node but also require verifiable data. Breaking changes to the light client verification path mean that all connected neighbouring chains will no longer be able to verify state transitions and thus pass messages back and forth.

## Proposal

Tendermint version labels will follow the syntax of [Semantic Versions 2.0.0](https://semver.org/) with a major, minor, and patch version. The version components will be interpreted according to these rules:

For the entire cycle of a **major version** in Tendermint:

- All blocks and state data in a blockchain can be queried. All headers can be verified even across minor version changes. Nodes can both block sync and state sync from genesis to the head of the chain.
- Nodes in a network are able to communicate and perform BFT state machine replication so long as the agreed network version is the lowest of all nodes in a network. For example, nodes using versions 1.5.x and 1.2.x can operate together so long as the network version is 1.2 or lower (but still within the 1.x range). This rule essentially captures the concept of network backwards compatibility (see the sketch after this list).
- Node RPC endpoints will remain compatible with existing external clients:
    - New endpoints may be added, but old endpoints may not be removed.
    - Old endpoints may be extended to add new request and response fields, but requests not using those fields must function as before the change.
- Migrations should be automatic. Upgrading of one node can happen asynchronously with respect to other nodes (although agreement on a network-wide upgrade must still occur synchronously via consensus).
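
A small Go sketch of the network-version rule from the list above (a hypothetical helper, not actual Tendermint code): nodes must share a major version, and the network then runs at the lowest minor version present.

```go
package main

import "fmt"

// semver is a minimal major.minor pair; patch is irrelevant to the
// network compatibility rule sketched here.
type semver struct{ major, minor int }

// networkVersion applies the rule above: nodes can replicate together
// only if they share a major version, and the network then runs at the
// lowest minor version among them.
func networkVersion(nodes []semver) (semver, bool) {
	if len(nodes) == 0 {
		return semver{}, false
	}
	net := nodes[0]
	for _, v := range nodes[1:] {
		if v.major != net.major {
			return semver{}, false // cross-major replication is not promised
		}
		if v.minor < net.minor {
			net.minor = v.minor
		}
	}
	return net, true
}

func main() {
	nodes := []semver{{1, 5}, {1, 2}, {1, 4}}
	if net, ok := networkVersion(nodes); ok {
		fmt.Printf("network version: %d.%d\n", net.major, net.minor) // 1.2
	}
}
```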

For the entire cycle of a **minor version** in Tendermint:

- Public Go APIs, for example in the `node` or `abci` packages, will not change in a way that requires any consumer (not just application developers) to modify their code.
- No breaking changes to the block protocol. This means that all block-related data structures should not change in a way that breaks any of the hashes, the consensus engine, or light client verification.
- Upgrades between minor versions may not result in any downtime (i.e., no migrations are required), nor require any changes to the config files to continue with the existing behavior. A minor version upgrade will require only stopping the existing process, swapping the binary, and starting the new process.

A new **patch version** of Tendermint will only contain bug fixes and updates that impact the security and stability of Tendermint.

These guarantees will come into effect at release 1.0.

## Status

Proposed

## Consequences

### Positive

- Clearer communication of what versioning means to us and the effect it has on our users.

### Negative

- Can potentially incur greater engineering effort to uphold and follow these guarantees.

### Neutral

## References

- [SemVer](https://semver.org/)
- [Tendermint Tracking Issue](https://github.com/tendermint/tendermint/issues/5680)
Some files were not shown because too many files have changed in this diff.