Compare commits

..

4 Commits

Author | SHA1 | Message | Date
William Banfield | 750f709b42 | wip | 2021-09-28 16:42:43 -04:00
William Banfield | c9b775d2f0 | wip | 2021-09-28 16:10:46 -04:00
William Banfield | a3889ee2cb | consensus: remove logic to unlock block on 2/3 prevote for nil (#6954) | 2021-09-24 11:19:57 -04:00
William Banfield | 87f4beb374 | consensus: remove panics from test helper functions (#6969) | 2021-09-22 08:56:42 -04:00
616 changed files with 27298 additions and 24795 deletions

.github/CODEOWNERS (2 changed lines)

@@ -7,4 +7,4 @@
# global owners are only requested if there isn't a more specific
# codeowner specified below. For this reason, the global codeowners
# are often repeated in package-level definitions.
* @ebuchman @cmwaters @tychoish @williambanfield @creachadair
* @alexanderbez @ebuchman @cmwaters @tessr @tychoish @williambanfield @creachadair


@@ -1,16 +1,16 @@
pullRequestOpened: |
:wave: Thanks for creating a PR!
:wave: Thanks for creating a PR!
Before we can merge this PR, please make sure that all the following items have been
Before we can merge this PR, please make sure that all the following items have been
checked off. If any of the checklist items are not applicable, please leave them but
write a little note why.
write a little note why.
- [ ] Wrote tests
- [ ] Updated CHANGELOG_PENDING.md
- [ ] Linked to Github issue with discussion and accepted design OR link to spec that describes this work.
- [ ] Updated relevant documentation (`docs/`) and code comments
- [ ] Re-reviewed `Files changed` in the Github PR explorer
- [ ] Applied Appropriate Labels
- [ ] Wrote tests
- [ ] Updated CHANGELOG_PENDING.md
- [ ] Linked to Github issue with discussion and accepted design OR link to spec that describes this work.
- [ ] Updated relevant documentation (`docs/`) and code comments
- [ ] Re-reviewed `Files changed` in the Github PR explorer
- [ ] Applied Appropriate Labels
Thank you for your contribution to Tendermint! :rocket:
Thank you for your contribution to Tendermint! :rocket:

.github/dependabot.yml (new file, 27 lines)

@@ -0,0 +1,27 @@
version: 2
updates:
- package-ecosystem: github-actions
directory: "/"
schedule:
interval: daily
time: "11:00"
open-pull-requests-limit: 10
- package-ecosystem: npm
directory: "/docs"
schedule:
interval: daily
time: "11:00"
open-pull-requests-limit: 10
reviewers:
- fadeev
- package-ecosystem: gomod
directory: "/"
schedule:
interval: daily
time: "11:00"
open-pull-requests-limit: 10
reviewers:
- melekes
- tessr
labels:
- T:dependencies

.github/linter/markdownlint.yml (new file, 8 lines)

@@ -0,0 +1,8 @@
default: true,
MD007: { "indent": 4 }
MD013: false
MD024: { siblings_only: true }
MD025: false
MD033: { no-inline-html: false }
no-hard-tabs: false
whitespace: false


@@ -1,8 +0,0 @@
default: true,
MD007: {"indent": 4}
MD013: false
MD024: {siblings_only: true}
MD025: false
MD033: {no-inline-html: false}
no-hard-tabs: false
whitespace: false


@@ -1,9 +0,0 @@
---
# Default rules for YAML linting from super-linter.
# See: See https://yamllint.readthedocs.io/en/stable/rules.html
extends: default
rules:
document-end: disable
document-start: disable
line-length: disable
truthy: disable

.github/mergify.yml (27 changed lines)

@@ -1,19 +1,18 @@
queue_rules:
- name: default
conditions:
- base=v0.35.x
- label=S:automerge
pull_request_rules:
- name: Automerge to v0.35.x
- name: Automerge to master
conditions:
- base=v0.35.x
- base=master
- label=S:automerge
actions:
queue:
merge:
method: squash
name: default
commit_message_template: |
{{ title }} (#{{ number }})
{{ body }}
strict: true
commit_message: title+body
- name: backport patches to v0.34.x branch
conditions:
- base=master
- label=S:backport-to-v0.34.x
actions:
backport:
branches:
- v0.34.x


@@ -1,82 +0,0 @@
name: Build
# Tests runs different tests (test_abci_apps, test_abci_cli, test_apps)
# This workflow runs on every push to master or release branch and every pull requests
# All jobs will pass without running if no *{.go, .mod, .sum} files have been modified
on:
pull_request:
push:
branches:
- master
- release/**
jobs:
build:
name: Build
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
goarch: ["arm", "amd64"]
goos: ["linux"]
timeout-minutes: 5
steps:
- uses: actions/setup-go@v3
with:
go-version: "1.17"
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go
"!test/"
go.mod
go.sum
Makefile
- name: install
run: GOOS=${{ matrix.goos }} GOARCH=${{ matrix.goarch }} make build
if: "env.GIT_DIFF != ''"
test_abci_cli:
runs-on: ubuntu-latest
needs: build
timeout-minutes: 5
steps:
- uses: actions/setup-go@v3
with:
go-version: "1.17"
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go
go.mod
go.sum
- name: install
run: make install_abci
if: "env.GIT_DIFF != ''"
- run: abci/tests/test_cli/test.sh
shell: bash
if: "env.GIT_DIFF != ''"
test_apps:
runs-on: ubuntu-latest
needs: build
timeout-minutes: 5
steps:
- uses: actions/setup-go@v3
with:
go-version: "1.17"
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/**.go
go.mod
go.sum
- name: install
run: make install install_abci
if: "env.GIT_DIFF != ''"
- name: test_apps
run: test/app/test.sh
shell: bash
if: "env.GIT_DIFF != ''"

.github/workflows/coverage.yml (new file, 133 lines)

@@ -0,0 +1,133 @@
name: Test Coverage
on:
pull_request:
push:
paths:
- "**.go"
- "!test/"
branches:
- master
- release/**
jobs:
split-test-files:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2.3.4
- name: Create a file with all the pkgs
run: go list ./... > pkgs.txt
- name: Split pkgs into 4 files
run: split -d -n l/4 pkgs.txt pkgs.txt.part.
# cache multiple
- uses: actions/upload-artifact@v2
with:
name: "${{ github.sha }}-00"
path: ./pkgs.txt.part.00
- uses: actions/upload-artifact@v2
with:
name: "${{ github.sha }}-01"
path: ./pkgs.txt.part.01
- uses: actions/upload-artifact@v2
with:
name: "${{ github.sha }}-02"
path: ./pkgs.txt.part.02
- uses: actions/upload-artifact@v2
with:
name: "${{ github.sha }}-03"
path: ./pkgs.txt.part.03
build-linux:
name: Build
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
goarch: ["arm", "amd64"]
timeout-minutes: 5
steps:
- uses: actions/setup-go@v2
with:
go-version: "1.16"
- uses: actions/checkout@v2.3.4
- uses: technote-space/get-diff-action@v5
with:
PATTERNS: |
**/**.go
"!test/"
go.mod
go.sum
- name: install
run: GOOS=linux GOARCH=${{ matrix.goarch }} make build
if: "env.GIT_DIFF != ''"
tests:
runs-on: ubuntu-latest
needs: split-test-files
strategy:
fail-fast: false
matrix:
part: ["00", "01", "02", "03"]
steps:
- uses: actions/setup-go@v2
with:
go-version: "1.16"
- uses: actions/checkout@v2.3.4
- uses: technote-space/get-diff-action@v5
with:
PATTERNS: |
**/**.go
"!test/"
go.mod
go.sum
- uses: actions/download-artifact@v2
with:
name: "${{ github.sha }}-${{ matrix.part }}"
if: env.GIT_DIFF
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.16
- name: test & coverage report creation
run: |
cat pkgs.txt.part.${{ matrix.part }} | xargs go test -mod=readonly -timeout 8m -race -coverprofile=${{ matrix.part }}profile.out -covermode=atomic
if: env.GIT_DIFF
- uses: actions/upload-artifact@v2
with:
name: "${{ github.sha }}-${{ matrix.part }}-coverage"
path: ./${{ matrix.part }}profile.out
upload-coverage-report:
runs-on: ubuntu-latest
needs: tests
steps:
- uses: actions/checkout@v2.3.4
- uses: technote-space/get-diff-action@v5
with:
PATTERNS: |
**/**.go
"!test/"
go.mod
go.sum
- uses: actions/download-artifact@v2
with:
name: "${{ github.sha }}-00-coverage"
if: env.GIT_DIFF
- uses: actions/download-artifact@v2
with:
name: "${{ github.sha }}-01-coverage"
if: env.GIT_DIFF
- uses: actions/download-artifact@v2
with:
name: "${{ github.sha }}-02-coverage"
if: env.GIT_DIFF
- uses: actions/download-artifact@v2
with:
name: "${{ github.sha }}-03-coverage"
if: env.GIT_DIFF
- run: |
cat ./*profile.out | grep -v "mode: atomic" >> coverage.txt
if: env.GIT_DIFF
- uses: codecov/codecov-action@v2.1.0
with:
file: ./coverage.txt
if: env.GIT_DIFF


@@ -1,19 +1,20 @@
name: Docker
# Build & Push rebuilds the tendermint docker image on every push to master and creation of tags
name: Build & Push
# Build & Push rebuilds the tendermint docker image on every push to master and creation of tags
# and pushes the image to https://hub.docker.com/r/interchainio/simapp/tags
on:
pull_request:
push:
branches:
- master
tags:
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
- "v[0-9]+.[0-9]+.[0-9]+-rc*" # Push events to matching v*, i.e. v1.0-rc1, v20.15.10-rc5
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
- "v[0-9]+.[0-9]+.[0-9]+-rc*" # Push events to matching v*, i.e. v1.0-rc1, v20.15.10-rc5
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v2.3.4
- name: Prepare
id: prep
run: |
@@ -38,18 +39,18 @@ jobs:
with:
platforms: all
- name: Set up Docker Build
uses: docker/setup-buildx-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1.6.0
- name: Login to DockerHub
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@v2
uses: docker/login-action@v1.10.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Publish to Docker Hub
uses: docker/build-push-action@v3
uses: docker/build-push-action@v2.7.0
with:
context: .
file: ./DOCKER/Dockerfile


@@ -1,36 +0,0 @@
# Manually run randomly generated E2E testnets (as nightly).
name: e2e-manual
on:
workflow_dispatch:
jobs:
e2e-nightly-test:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
p2p: ['legacy', 'new', 'hybrid']
group: ['00', '01', '02', '03']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v3
with:
go-version: '1.17'
- uses: actions/checkout@v3
- name: Build
working-directory: test/e2e
# Run make jobs in parallel, since we can't run steps in parallel.
run: make -j2 docker generator runner tests
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 4 -d networks/nightly/${{ matrix.p2p }} -p ${{ matrix.p2p }}
- name: Run ${{ matrix.p2p }} p2p testnets
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/${{ matrix.p2p }}/*-group${{ matrix.group }}-*.toml


@@ -6,7 +6,7 @@
name: e2e-nightly-34x
on:
workflow_dispatch: # allow running workflow manually, in theory
workflow_dispatch: # allow running workflow manually, in theory
schedule:
- cron: '0 2 * * *'
@@ -17,15 +17,15 @@ jobs:
strategy:
fail-fast: false
matrix:
group: ['00', '01']
group: ['00', '01', '02', '03']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: '1.17'
go-version: '1.16'
- uses: actions/checkout@v3
- uses: actions/checkout@v2.3.4
with:
ref: 'v0.34.x'
@@ -37,7 +37,7 @@ jobs:
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 2 -d networks/nightly
run: ./build/generator -g 4 -d networks/nightly
- name: Run testnets in group ${{ matrix.group }}
working-directory: test/e2e
@@ -59,7 +59,7 @@ jobs:
SLACK_MESSAGE: Nightly E2E tests failed on v0.34.x
SLACK_FOOTER: ''
e2e-nightly-success: # may turn this off once they seem to pass consistently
e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test
if: ${{ success() }}
runs-on: ubuntu-latest


@@ -5,7 +5,7 @@
name: e2e-nightly-master
on:
workflow_dispatch: # allow running workflow manually
workflow_dispatch: # allow running workflow manually
schedule:
- cron: '0 2 * * *'
@@ -17,15 +17,15 @@ jobs:
fail-fast: false
matrix:
p2p: ['legacy', 'new', 'hybrid']
group: ['00', '01', '02', '03']
group: ['00', '01']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: '1.17'
go-version: '1.16'
- uses: actions/checkout@v3
- uses: actions/checkout@v2.3.4
- name: Build
working-directory: test/e2e
@@ -35,7 +35,7 @@ jobs:
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 4 -d networks/nightly/${{ matrix.p2p }} -p ${{ matrix.p2p }}
run: ./build/generator -g 2 -d networks/nightly/${{ matrix.p2p }} -p ${{ matrix.p2p }}
- name: Run ${{ matrix.p2p }} p2p testnets in group ${{ matrix.group }}
working-directory: test/e2e
@@ -57,8 +57,8 @@ jobs:
SLACK_MESSAGE: Nightly E2E tests failed on master
SLACK_FOOTER: ''
e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test
e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test-2
if: ${{ success() }}
runs-on: ubuntu-latest
steps:


@@ -2,7 +2,7 @@ name: e2e
# Runs the CI end-to-end test network on all pushes to master or release branches
# and every pull request, but only if any Go files have been changed.
on:
workflow_dispatch: # allow running workflow manually
workflow_dispatch: # allow running workflow manually
pull_request:
push:
branches:
@@ -14,11 +14,11 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: '1.17'
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
go-version: '1.16'
- uses: actions/checkout@v2.3.4
- uses: technote-space/get-diff-action@v5
with:
PATTERNS: |
**/**.go


@@ -1,7 +1,7 @@
# Runs fuzzing nightly.
name: Fuzz Tests
name: fuzz-nightly
on:
workflow_dispatch: # allow running workflow manually
workflow_dispatch: # allow running workflow manually
schedule:
- cron: '0 3 * * *'
pull_request:
@@ -13,11 +13,11 @@ jobs:
fuzz-nightly-test:
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: '1.17'
go-version: '1.16'
- uses: actions/checkout@v3
- uses: actions/checkout@v2.3.4
- name: Install go-fuzz
working-directory: test/fuzz
@@ -54,14 +54,14 @@ jobs:
continue-on-error: true
- name: Archive crashers
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v2
with:
name: crashers
path: test/fuzz/**/crashers
retention-days: 3
- name: Archive suppressions
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v2
with:
name: suppressions
path: test/fuzz/**/suppressions


@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 3
steps:
- uses: styfle/cancel-workflow-action@0.10.0
- uses: styfle/cancel-workflow-action@0.9.1
with:
workflow_id: 1041851,1401230,2837803
access_token: ${{ github.token }}


@@ -46,7 +46,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout the Jepsen repository
uses: actions/checkout@v3
uses: actions/checkout@v2.3.4
with:
repository: 'tendermint/jepsen'
@@ -58,7 +58,7 @@ jobs:
run: docker exec -i jepsen-control bash -c 'source /root/.bashrc; cd /jepsen/tendermint; lein run test --nemesis ${{ github.event.inputs.nemesis }} --workload ${{ github.event.inputs.workload }} --concurrency ${{ github.event.inputs.concurrency }} --tendermint-url ${{ github.event.inputs.tendermintUrl }} --merkleeyes-url ${{ github.event.inputs.merkleeyesUrl }} --time-limit ${{ github.event.inputs.timeLimit }} ${{ github.event.inputs.dupOrSuperByzValidators }}'
- name: Archive results
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v2
with:
name: results
path: tendermint/store/latest


@@ -1,12 +1,12 @@
name: Check Markdown links
on:
on:
schedule:
- cron: '* */24 * * *'
jobs:
markdown-link-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: creachadair/github-action-markdown-link-check@master
- uses: actions/checkout@v2.3.4
- uses: gaurav-nelson/github-action-markdown-link-check@1.0.13
with:
folder-path: "docs"


@@ -1,11 +1,7 @@
name: Golang Linter
# Lint runs golangci-lint over the entire Tendermint repository.
#
# This workflow is run on every pull request and push to master.
#
# The `golangci` job will pass without running if no *.{go, mod, sum}
# files have been modified.
name: Lint
# Lint runs golangci-lint over the entire Tendermint repository
# This workflow is run on every pull request and push to master
# The `golangci` job will pass without running if no *.{go, mod, sum} files have been modified.
on:
pull_request:
push:
@@ -17,22 +13,17 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 8
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: '^1.17'
- uses: technote-space/get-diff-action@v6
- uses: actions/checkout@v2.3.4
- uses: technote-space/get-diff-action@v5
with:
PATTERNS: |
**/**.go
go.mod
go.sum
- uses: golangci/golangci-lint-action@v3
- uses: golangci/golangci-lint-action@v2.5.2
with:
# Required: the version of golangci-lint is required and
# must be specified without patch version: we always use the
# latest patch version.
version: v1.45
# Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
version: v1.38
args: --timeout 10m
github-token: ${{ secrets.github_token }}
if: env.GIT_DIFF


@@ -19,14 +19,14 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v3
uses: actions/checkout@v2.3.4
- name: Lint Code Base
uses: docker://github/super-linter:v4
uses: docker://github/super-linter:v3
env:
LINTER_RULES_PATH: .
VALIDATE_ALL_CODEBASE: true
DEFAULT_BRANCH: master
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
VALIDATE_MD: true
VALIDATE_OPENAPI: true
VALIDATE_YAML: true
YAML_CONFIG_FILE: yaml-lint.yml


@@ -16,7 +16,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v2.3.4
- name: Prepare
id: prep
run: |
@@ -34,16 +34,16 @@ jobs:
echo ::set-output name=tags::${TAGS}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v1.6.0
- name: Login to DockerHub
uses: docker/login-action@v2
uses: docker/login-action@v1.10.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Publish to Docker Hub
uses: docker/build-push-action@v3
uses: docker/build-push-action@v2.7.0
with:
context: ./tools/proto
file: ./tools/proto/Dockerfile


@@ -11,13 +11,13 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 4
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v2.3.4
- name: lint
run: make proto-lint
proto-breakage:
runs-on: ubuntu-latest
timeout-minutes: 4
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v2.3.4
- name: check-breakage
run: make proto-check-breaking-ci


@@ -5,35 +5,33 @@ on:
branches:
- "RC[0-9]/**"
tags:
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
jobs:
goreleaser:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v2.3.4
with:
fetch-depth: 0
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: '1.17'
go-version: '1.16'
- name: Build
uses: goreleaser/goreleaser-action@v3
uses: goreleaser/goreleaser-action@v2
if: ${{ github.event_name == 'pull_request' }}
with:
version: latest
args: build --skip-validate # skip validate skips initial sanity checks in order to be able to fully run
- run: echo https://github.com/tendermint/tendermint/blob/${GITHUB_REF#refs/tags/}/CHANGELOG.md#${GITHUB_REF#refs/tags/} > ../release_notes.md
- name: Release
uses: goreleaser/goreleaser-action@v3
uses: goreleaser/goreleaser-action@v2
if: startsWith(github.ref, 'refs/tags/')
with:
version: latest
args: release --rm-dist --release-notes=../release_notes.md
args: release --rm-dist
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -7,7 +7,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v5
- uses: actions/stale@v4
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-pr-message: "This pull request has been automatically marked as stale because it has not had


@@ -1,75 +1,106 @@
name: Test
name: Tests
# Tests runs different tests (test_abci_apps, test_abci_cli, test_apps)
# This workflow runs on every push to master or release branch and every pull requests
# All jobs will pass without running if no *{.go, .mod, .sum} files have been modified
on:
pull_request:
push:
paths:
- "**.go"
branches:
- master
- release/**
jobs:
tests:
build:
name: Build
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
part: ["00", "01", "02", "03", "04", "05"]
timeout-minutes: 5
steps:
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: "1.17"
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
go-version: "1.16"
- uses: actions/checkout@v2.3.4
- uses: technote-space/get-diff-action@v5
with:
PATTERNS: |
**/**.go
"!test/"
go.mod
go.sum
Makefile
- name: Run Go Tests
run: |
make test-group-${{ matrix.part }} NUM_SPLIT=6
if: env.GIT_DIFF
- uses: actions/upload-artifact@v3
- name: install
run: make install install_abci
if: "env.GIT_DIFF != ''"
- uses: actions/cache@v2.1.6
with:
name: "${{ github.sha }}-${{ matrix.part }}-coverage"
path: ./build/${{ matrix.part }}.profile.out
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
if: env.GIT_DIFF
# Cache binaries for use by other jobs
- uses: actions/cache@v2.1.6
with:
path: ~/go/bin
key: ${{ runner.os }}-${{ github.sha }}-tm-binary
if: env.GIT_DIFF
upload-coverage-report:
test_abci_cli:
runs-on: ubuntu-latest
needs: tests
needs: build
timeout-minutes: 5
steps:
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
- uses: actions/setup-go@v2
with:
go-version: "1.16"
- uses: actions/checkout@v2.3.4
- uses: technote-space/get-diff-action@v5
with:
PATTERNS: |
**/**.go
"!test/"
go.mod
go.sum
Makefile
- uses: actions/download-artifact@v3
- uses: actions/cache@v2.1.6
with:
name: "${{ github.sha }}-00-coverage"
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
if: env.GIT_DIFF
- uses: actions/download-artifact@v3
- uses: actions/cache@v2.1.6
with:
name: "${{ github.sha }}-01-coverage"
path: ~/go/bin
key: ${{ runner.os }}-${{ github.sha }}-tm-binary
if: env.GIT_DIFF
- uses: actions/download-artifact@v3
- run: abci/tests/test_cli/test.sh
shell: bash
if: env.GIT_DIFF
test_apps:
runs-on: ubuntu-latest
needs: build
timeout-minutes: 5
steps:
- uses: actions/setup-go@v2
with:
name: "${{ github.sha }}-02-coverage"
if: env.GIT_DIFF
- uses: actions/download-artifact@v3
go-version: "1.16"
- uses: actions/checkout@v2.3.4
- uses: technote-space/get-diff-action@v5
with:
name: "${{ github.sha }}-03-coverage"
if: env.GIT_DIFF
- run: |
cat ./*profile.out | grep -v "mode: set" >> coverage.txt
if: env.GIT_DIFF
- uses: codecov/codecov-action@v3
PATTERNS: |
**/**.go
go.mod
go.sum
- uses: actions/cache@v2.1.6
with:
file: ./coverage.txt
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
if: env.GIT_DIFF
- uses: actions/cache@v2.1.6
with:
path: ~/go/bin
key: ${{ runner.os }}-${{ github.sha }}-tm-binary
if: env.GIT_DIFF
- name: test_apps
run: test/app/test.sh
shell: bash
if: env.GIT_DIFF

.gitignore (1 changed line)

@@ -25,7 +25,6 @@ docs/_build
docs/dist
docs/node_modules/
docs/spec
docs/.vuepress/public/rpc
index.html.md
libs/pubsub/query/fuzz_test/output
profile\.out


@@ -13,18 +13,18 @@ linters:
# - gochecknoinits
# - gocognit
- goconst
# - gocritic
- gocritic
# - gocyclo
# - godox
- gofmt
- goimports
- revive
- golint
- gosec
- gosimple
- govet
- ineffassign
# - interfacer
# - lll
- lll
# - maligned
- misspell
- nakedret
@@ -33,7 +33,7 @@ linters:
- staticcheck
- structcheck
- stylecheck
# - typecheck
- typecheck
- unconvert
# - unparam
- unused
@@ -46,6 +46,9 @@ issues:
- path: _test\.go
linters:
- gosec
- linters:
- lll
source: "https://"
max-same-issues: 50
linters-settings:


@@ -29,8 +29,8 @@ release:
archives:
- files:
- LICENSE
- README.md
- UPGRADING.md
- SECURITY.md
- CHANGELOG.md
- LICENSE
- README.md
- UPGRADING.md
- SECURITY.md
- CHANGELOG.md


@@ -1,314 +1,161 @@
# Changelog
Friendly reminder: We have a [bug bounty program](https://hackerone.com/cosmos).
Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermint).
## v0.35.7
## v0.35
June 16, 2022
### BUG FIXES
- [p2p] [\#8692](https://github.com/tendermint/tendermint/pull/8692) scale the number of stored peers by the configured maximum connections (#8684)
- [rpc] [\#8715](https://github.com/tendermint/tendermint/pull/8715) always close http bodies (backport #8712)
- [p2p] [\#8760](https://github.com/tendermint/tendermint/pull/8760) accept should not abort on first error (backport #8759)
### BREAKING CHANGES
- P2P Protocol
- [p2p] [\#8737](https://github.com/tendermint/tendermint/pull/8737) Introduce "inactive" peer label to avoid re-dialing incompatible peers. (@tychoish)
- [p2p] [\#8737](https://github.com/tendermint/tendermint/pull/8737) Increase frequency of dialing attempts to reduce latency for peer acquisition. (@tychoish)
- [p2p] [\#8737](https://github.com/tendermint/tendermint/pull/8737) Improvements to peer scoring and sorting to gossip a greater variety of peers during PEX. (@tychoish)
- [p2p] [\#8737](https://github.com/tendermint/tendermint/pull/8737) Track incoming and outgoing peers separately to ensure more peer slots open for incoming connections. (@tychoish)
## v0.35.6
June 3, 2022
### FEATURES
- [migrate] [\#8672](https://github.com/tendermint/tendermint/pull/8672) provide function for database production (backport #8614) (@tychoish)
### BUG FIXES
- [consensus] [\#8651](https://github.com/tendermint/tendermint/pull/8651) restructure peer catchup sleep (@tychoish)
- [pex] [\#8657](https://github.com/tendermint/tendermint/pull/8657) align max address thresholds (@cmwaters)
- [cmd] [\#8668](https://github.com/tendermint/tendermint/pull/8668) don't used global config for reset commands (@cmwaters)
- [p2p] [\#8681](https://github.com/tendermint/tendermint/pull/8681) shed peers from store from other networks (backport #8678) (@tychoish)
## v0.35.5
May 26, 2022
### BUG FIXES
- [p2p] [\#8371](https://github.com/tendermint/tendermint/pull/8371) fix setting in con-tracker (backport #8370) (@tychoish)
- [blocksync] [\#8496](https://github.com/tendermint/tendermint/pull/8496) validate block against state before persisting it to disk (@cmwaters)
- [statesync] [\#8494](https://github.com/tendermint/tendermint/pull/8494) avoid potential race (@tychoish)
- [keymigrate] [\#8467](https://github.com/tendermint/tendermint/pull/8467) improve filtering for legacy transaction hashes (backport #8466) (@creachadair)
- [rpc] [\#8594](https://github.com/tendermint/tendermint/pull/8594) fix encoding of block_results responses (@creachadair)
## v0.35.4
April 18, 2022
Special thanks to external contributors on this release: @firelizzard18
### FEATURES
- [cli] [\#8300](https://github.com/tendermint/tendermint/pull/8300) Add a tool to update old config files to the latest version [backport [\#8281](https://github.com/tendermint/tendermint/pull/8281)]. (@creachadair)
### IMPROVEMENTS
### BUG FIXES
- [cli] [\#8294](https://github.com/tendermint/tendermint/pull/8294) keymigrate: ensure block hash keys are correctly translated. (@creachadair)
- [cli] [\#8352](https://github.com/tendermint/tendermint/pull/8352) keymigrate: ensure transaction hash keys are correctly translated. (@creachadair)
## v0.35.3
April 8, 2022
### FEATURES
- [cli] [\#8081](https://github.com/tendermint/tendermint/pull/8081) add a safer-to-use `reset-state` command. (@marbar3778)
### IMPROVEMENTS
- [consensus] [\#8138](https://github.com/tendermint/tendermint/pull/8138) change lock handling in reactor and handleMsg for RoundState. (@williambanfield)
### BUG FIXES
- [cli] [\#8276](https://github.com/tendermint/tendermint/pull/8276) scmigrate: ensure target key is correctly renamed. (@creachadair)
## v0.35.2
February 28, 2022
Special thanks to external contributors on this release: @ashcherbakov, @yihuang, @waelsy123
### IMPROVEMENTS
- [consensus] [\#7875](https://github.com/tendermint/tendermint/pull/7875) additional timing metrics. (@williambanfield)
### BUG FIXES
- [abci] [\#7990](https://github.com/tendermint/tendermint/pull/7990) revert buffer limit change. (@williambanfield)
- [cli] [#7837](https://github.com/tendermint/tendermint/pull/7837) fix app hash in state rollback. (@yihuang)
- [cli] [\#7869](https://github.com/tendermint/tendermint/pull/7869) Update unsafe-reset-all command to match release v35. (waelsy123)
- [light] [\#7640](https://github.com/tendermint/tendermint/pull/7640) Light Client: fix absence proof verification (@ashcherbakov)
- [light] [\#7641](https://github.com/tendermint/tendermint/pull/7641) Light Client: fix querying against the latest height (@ashcherbakov)
- [mempool] [\#7718](https://github.com/tendermint/tendermint/pull/7718) return duplicate tx errors more consistently. (@tychoish)
- [rpc] [\#7744](https://github.com/tendermint/tendermint/pull/7744) fix layout of endpoint list. (@creachadair)
- [statesync] [\#7886](https://github.com/tendermint/tendermint/pull/7886) assert app version matches. (@cmwaters)
## v0.35.1
January 26, 2022
Special thanks to external contributors on this release: @altergui, @odeke-em,
@thanethomson
Special thanks to external contributors on this release: @JayT106, @bipulprasad, @alessio, @Yawning, @silasdavis,
@cuonglm, @tanyabouman, @JoeKash, @githubsands, @jeebster, @crypto-facs, @liamsi, and @gotjoshua
### BREAKING CHANGES
- CLI/RPC/Config
- [pubsub/events] \#6634 The `ResultEvent.Events` field is now of type `[]abci.Event` preserving event order instead of `map[string][]string`. (@alexanderbez)
- [config] \#5598 The `test_fuzz` and `test_fuzz_config` P2P settings have been removed. (@erikgrinaker)
- [config] \#5728 `fastsync.version = "v1"` is no longer supported (@melekes)
- [cli] \#5772 `gen_node_key` prints JSON-encoded `NodeKey` rather than ID and does not save it to `node_key.json` (@melekes)
- [cli] \#5777 use hyphen-case instead of snake_case for all cli commands and config parameters (@cmwaters)
- [rpc] \#6019 standardise RPC errors and return the correct status code (@bipulprasad & @cmwaters)
- [rpc] \#6168 Change default sorting to desc for `/tx_search` results (@melekes)
- [cli] \#6282 User must specify the node mode when using `tendermint init` (@cmwaters)
- [state/indexer] \#6382 reconstruct indexer, move txindex into the indexer package (@JayT106)
- [cli] \#6372 Introduce `BootstrapPeers` as part of the new p2p stack. Peers to be connected on startup (@cmwaters)
- [config] \#6462 Move `PrivValidator` configuration out of `BaseConfig` into its own section. (@tychoish)
- [rpc] \#6610 Add MaxPeerBlockHeight into /status rpc call (@JayT106)
- [blocksync/rpc] \#6620 Add TotalSyncedTime & RemainingTime to SyncInfo in /status RPC (@JayT106)
- [rpc/grpc] \#6725 Mark gRPC in the RPC layer as deprecated.
- [blocksync/v2] \#6730 Fast Sync v2 is deprecated, please use v0
- [rpc] Add genesis_chunked method to support paginated and parallel fetching of large genesis documents.
- [rpc/jsonrpc/server] \#6785 `Listen` function updated to take an `int` argument, `maxOpenConnections`, instead of an entire config object. (@williambanfield)
- [rpc] \#6820 Update RPC methods to reflect changes in the p2p layer, disabling support for `UnsafeDialPeers` and `UnsafeDialPeers` when used with the new p2p layer, and changing the response format of the peer list in `NetInfo` for all users.
- [cli] \#6854 Remove deprecated snake case commands. (@tychoish)
- [config] [\#7276](https://github.com/tendermint/tendermint/pull/7276) rpc: Add experimental config params to allow for subscription buffer size control (@thanethomson).
- Apps
- [ABCI] \#6408 Change the `key` and `value` fields from `[]byte` to `string` in the `EventAttribute` type. (@alexanderbez)
- [ABCI] \#5447 Remove `SetOption` method from `ABCI.Client` interface
- [ABCI] \#5447 Reset `Oneof` indexes for `Request` and `Response`.
- [ABCI] \#5818 Use protoio for msg length delimitation. Migrates from int64 to uint64 length delimiters.
- [ABCI] \#3546 Add `mempool_error` field to `ResponseCheckTx`. This field will contain an error string if Tendermint encountered an error while adding a transaction to the mempool. (@williambanfield)
- [Version] \#6494 `TMCoreSemVer` has been renamed to `TMVersion`.
- It is not required any longer to set ldflags to set version strings
- [abci/counter] \#6684 Delete counter example app
- P2P Protocol
- Go API
- [pubsub] \#6634 The `Query#Matches` method along with other pubsub methods, now accepts a `[]abci.Event` instead of `map[string][]string`. (@alexanderbez)
- [p2p] \#6618 \#6583 Move `p2p.NodeInfo`, `p2p.NodeID` and `p2p.NetAddress` into `types` to support use in external packages. (@tychoish)
- [node] \#6540 Reduce surface area of the `node` package by making most of the implementation details private. (@tychoish)
- [p2p] \#6547 Move the entire `p2p` package and all reactor implementations into `internal`. (@tychoish)
- [libs/log] \#6534 Remove the existing custom Tendermint logger backed by go-kit. The logging interface, `Logger`, remains. Tendermint still provides a default logger backed by the performant zerolog logger. (@alexanderbez)
- [libs/time] \#6495 Move types/time to libs/time to improve consistency. (@tychoish)
- [mempool] \#6529 The `Context` field has been removed from the `TxInfo` type. `CheckTx` now requires a `Context` argument. (@alexanderbez)
- [abci/client, proxy] \#5673 `Async` funcs return an error, `Sync` and `Async` funcs accept `context.Context` (@melekes)
- [p2p] Remove unused function `MakePoWTarget`. (@erikgrinaker)
- [libs/bits] \#5720 Validate `BitArray` in `FromProto`, which now returns an error (@melekes)
- [proto/p2p] Rename `DefaultNodeInfo` and `DefaultNodeInfoOther` to `NodeInfo` and `NodeInfoOther` (@erikgrinaker)
- [proto/p2p] Rename `NodeInfo.default_node_id` to `node_id` (@erikgrinaker)
- [libs/os] Kill() and {Must,}{Read,Write}File() functions have been removed. (@alessio)
- [store] \#5848 Remove block store state in favor of using the db iterators directly (@cmwaters)
- [state] \#5864 Use an iterator when pruning state (@cmwaters)
- [types] \#6023 Remove `tm2pb.Header`, `tm2pb.BlockID`, `tm2pb.PartSetHeader` and `tm2pb.NewValidatorUpdate`.
- Each of the above types has a `ToProto` and `FromProto` method or function which replaced this logic.
- [light] \#6054 Move `MaxRetryAttempt` option from client to provider.
- `NewWithOptions` now sets the max retry attempts and timeouts (@cmwaters)
- [all] \#6077 Change spelling from British English to American (@cmwaters)
- Rename "Subscription.Cancelled()" to "Subscription.Canceled()" in libs/pubsub
- Rename "behaviour" pkg to "behavior" and internalized it in blocksync v2
- [rpc/client/http] \#6176 Remove `endpoint` arg from `New`, `NewWithTimeout` and `NewWithClient` (@melekes)
- [rpc/client/http] \#6176 Unexpose `WSEvents` (@melekes)
- [rpc/jsonrpc/client/ws_client] \#6176 `NewWS` no longer accepts options (use `NewWSWithOptions` and `OnReconnect` funcs to configure the client) (@melekes)
- [internal/libs] \#6366 Move `autofile`, `clist`,`fail`,`flowrate`, `protoio`, `sync`, `tempfile`, `test` and `timer` lib packages to an internal folder
- [libs/rand] \#6364 Remove most of libs/rand in favour of standard lib's `math/rand` (@liamsi)
- [mempool] \#6466 The original mempool reactor has been versioned as `v0` and moved to a sub-package under the root `mempool` package.
Some core types have been kept in the `mempool` package such as `TxCache` and it's implementations, the `Mempool` interface itself
and `TxInfo`. (@alexanderbez)
- [crypto/sr25519] \#6526 Do not re-execute the Ed25519-style key derivation step when doing signing and verification. The derivation is now done once and only once. This breaks `sr25519.GenPrivKeyFromSecret` output compatibility. (@Yawning)
- [types] \#6627 Move `NodeKey` to types to make the type public.
- [config] \#6627 Extend `config` to contain methods `LoadNodeKeyID` and `LoadorGenNodeKeyID`
- [blocksync] \#6755 Rename `FastSync` and `Blockchain` package to `BlockSync` (@cmwaters)
- [p2p] [\#7265](https://github.com/tendermint/tendermint/pull/7265) Peer manager reduces peer score for each failed dial attempts for peers that have not successfully dialed. (@tychoish)
- [p2p] [\#7594](https://github.com/tendermint/tendermint/pull/7594) always advertise self, to enable mutual address discovery. (@altergui)
- Data Storage
- [store/state/evidence/light] \#5771 Use an order-preserving varint key encoding (@cmwaters)
- [mempool] \#6396 Remove mempool's write ahead log (WAL), (previously unused by the tendermint code). (@tychoish)
- [state] \#6541 Move pruneBlocks from consensus/state to state/execution. (@JayT106)
- Tooling
- [tools] \#6498 Set OS home dir to instead of the hardcoded PATH. (@JayT106)
- [cli/indexer] \#6676 Reindex events command line tooling. (@JayT106)
### FEATURES
- [rpc] [\#7270](https://github.com/tendermint/tendermint/pull/7270) Add `header` and `header_by_hash` RPC Client queries. (@fedekunze) (@cmwaters)
### IMPROVEMENTS
- [internal/protoio] [\#7325](https://github.com/tendermint/tendermint/pull/7325) Optimized `MarshalDelimited` by inlining the common case and using a `sync.Pool` in the worst case. (@odeke-em)
- [\#7338](https://github.com/tendermint/tendermint/pull/7338) pubsub: Performance improvements for the event query API (backport of #7319) (@creachadair)
- [\#7252](https://github.com/tendermint/tendermint/pull/7252) Add basic metrics to the indexer package. (@creachadair)
- [\#7338](https://github.com/tendermint/tendermint/pull/7338) Performance improvements for the event query API. (@creachadair)
### BUG FIXES
- [\#7310](https://github.com/tendermint/tendermint/issues/7310) pubsub: Report a non-nil error when shutting down (fixes #7306).
- [\#7355](https://github.com/tendermint/tendermint/pull/7355) Fix incorrect tests using the PSQL sink. (@creachadair)
- [\#7683](https://github.com/tendermint/tendermint/pull/7683) rpc: check error code for broadcast_tx_commit. (@tychoish)
## v0.35.0
November 4, 2021
Special thanks to external contributors on this release: @JayT106,
@bipulprasad, @alessio, @Yawning, @silasdavis, @cuonglm, @tanyabouman,
@JoeKash, @githubsands, @jeebster, @crypto-facs, @liamsi, and @gotjoshua
### FEATURES
- [cli] [#7033](https://github.com/tendermint/tendermint/pull/7033) Add a `rollback` command to rollback to the previous tendermint state in the event of an incorrect app hash. (@cmwaters)
- [config] [\#7174](https://github.com/tendermint/tendermint/pull/7174) expose ability to write config to arbitrary paths. (@tychoish)
- [mempool, rpc] [\#7065](https://github.com/tendermint/tendermint/pull/7065) add removetx rpc method (backport of #7047) (@tychoish).
- [\#6982](https://github.com/tendermint/tendermint/pull/6982) tendermint binary has built-in suppport for running the e2e application (with state sync support) (@cmwaters).
- [config] Add `--mode` flag and config variable. See [ADR-52](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-052-tendermint-mode.md) @dongsam
- [rpc] [\#6329](https://github.com/tendermint/tendermint/pull/6329) Don't cap page size in unsafe mode (@gotjoshua, @cmwaters)
- [pex] [\#6305](https://github.com/tendermint/tendermint/pull/6305) v2 pex reactor with backwards compatability. Introduces two new pex messages to
- [rpc] \#6329 Don't cap page size in unsafe mode (@gotjoshua, @cmwaters)
- [pex] \#6305 v2 pex reactor with backwards compatability. Introduces two new pex messages to
accomodate for the new p2p stack. Removes the notion of seeds and crawling. All peer
exchange reactors behave the same. (@cmwaters)
- [crypto] [\#6376](https://github.com/tendermint/tendermint/pull/6376) Enable sr25519 as a validator key type
- [mempool] [\#6466](https://github.com/tendermint/tendermint/pull/6466) Introduction of a prioritized mempool. (@alexanderbez)
- [crypto] \#6376 Enable sr25519 as a validator key type
- [mempool] \#6466 Introduction of a prioritized mempool. (@alexanderbez)
- `Priority` and `Sender` have been introduced into the `ResponseCheckTx` type, where the `priority` will determine the prioritization of
the transaction when a proposer reaps transactions for a block proposal. The `sender` field acts as an index.
- Operators may toggle between the legacy mempool reactor, `v0`, and the new prioritized reactor, `v1`, by setting the
`mempool.version` configuration, where `v1` is the default configuration.
- Applications that do not specify a priority, i.e. zero, will have transactions reaped by the order in which they are received by the node.
- Transactions are gossiped in FIFO order as they are in `v0`.
- [config/indexer] [\#6411](https://github.com/tendermint/tendermint/pull/6411) Introduce support for custom event indexing data sources, specifically PostgreSQL. (@JayT106)
- [blocksync/event] [\#6619](https://github.com/tendermint/tendermint/pull/6619) Emit blocksync status event when switching consensus/blocksync (@JayT106)
- [statesync/event] [\#6700](https://github.com/tendermint/tendermint/pull/6700) Emit statesync status start/end event (@JayT106)
- [inspect] [\#6785](https://github.com/tendermint/tendermint/pull/6785) Add a new `inspect` command for introspecting the state and block store of a crashed tendermint node. (@williambanfield)
### BUG FIXES
- [\#7106](https://github.com/tendermint/tendermint/pull/7106) Revert mutex change to ABCI Clients (@tychoish).
- [\#7142](https://github.com/tendermint/tendermint/pull/7142) mempool: remove panic when recheck-tx was not sent to ABCI application (@williambanfield).
- [consensus]: [\#7060](https://github.com/tendermint/tendermint/pull/7060)
wait until peerUpdates channel is closed to close remaining peers (@williambanfield)
- [privval] [\#5638](https://github.com/tendermint/tendermint/pull/5638) Increase read/write timeout to 5s and calculate ping interval based on it (@JoeKash)
- [evidence] [\#6375](https://github.com/tendermint/tendermint/pull/6375) Fix bug with inconsistent LightClientAttackEvidence hashing (cmwaters)
- [rpc] [\#6507](https://github.com/tendermint/tendermint/pull/6507) Ensure RPC client can handle URLs without ports (@JayT106)
- [statesync] [\#6463](https://github.com/tendermint/tendermint/pull/6463) Adds Reverse Sync feature to fetch historical light blocks after state sync in order to verify any evidence (@cmwaters)
- [blocksync] [\#6590](https://github.com/tendermint/tendermint/pull/6590) Update the metrics during blocksync (@JayT106)
### BREAKING CHANGES
- Go API
- [crypto/armor]: [\#6963](https://github.com/tendermint/tendermint/pull/6963) remove package which is unused, and based on
deprecated fundamentals. Downstream users should maintain this
library. (@tychoish)
- [state] [store] [proxy] [rpc/core]: [\#6937](https://github.com/tendermint/tendermint/pull/6937) move packages to
`internal` to prevent consumption of these internal APIs by
external users. (@tychoish)
- [pubsub] [\#6634](https://github.com/tendermint/tendermint/pull/6634) The `Query#Matches` method along with other pubsub methods, now accepts a `[]abci.Event` instead of `map[string][]string`. (@alexanderbez)
- [p2p] [\#6618](https://github.com/tendermint/tendermint/pull/6618) [\#6583](https://github.com/tendermint/tendermint/pull/6583) Move `p2p.NodeInfo`, `p2p.NodeID` and `p2p.NetAddress` into `types` to support use in external packages. (@tychoish)
- [node] [\#6540](https://github.com/tendermint/tendermint/pull/6540) Reduce surface area of the `node` package by making most of the implementation details private. (@tychoish)
- [p2p] [\#6547](https://github.com/tendermint/tendermint/pull/6547) Move the entire `p2p` package and all reactor implementations into `internal`. (@tychoish)
- [libs/log] [\#6534](https://github.com/tendermint/tendermint/pull/6534) Remove the existing custom Tendermint logger backed by go-kit. The logging interface, `Logger`, remains. Tendermint still provides a default logger backed by the performant zerolog logger. (@alexanderbez)
- [libs/time] [\#6495](https://github.com/tendermint/tendermint/pull/6495) Move types/time to libs/time to improve consistency. (@tychoish)
- [mempool] [\#6529](https://github.com/tendermint/tendermint/pull/6529) The `Context` field has been removed from the `TxInfo` type. `CheckTx` now requires a `Context` argument. (@alexanderbez)
- [abci/client, proxy] [\#5673](https://github.com/tendermint/tendermint/pull/5673) `Async` funcs return an error, `Sync` and `Async` funcs accept `context.Context` (@melekes)
- [p2p] Remove unused function `MakePoWTarget`. (@erikgrinaker)
- [libs/bits] [\#5720](https://github.com/tendermint/tendermint/pull/5720) Validate `BitArray` in `FromProto`, which now returns an error (@melekes)
- [proto/p2p] Rename `DefaultNodeInfo` and `DefaultNodeInfoOther` to `NodeInfo` and `NodeInfoOther` (@erikgrinaker)
- [proto/p2p] Rename `NodeInfo.default_node_id` to `node_id` (@erikgrinaker)
- [libs/os] Kill() and {Must,}{Read,Write}File() functions have been removed. (@alessio)
- [store] [\#5848](https://github.com/tendermint/tendermint/pull/5848) Remove block store state in favor of using the db iterators directly (@cmwaters)
- [state] [\#5864](https://github.com/tendermint/tendermint/pull/5864) Use an iterator when pruning state (@cmwaters)
- [types] [\#6023](https://github.com/tendermint/tendermint/pull/6023) Remove `tm2pb.Header`, `tm2pb.BlockID`, `tm2pb.PartSetHeader` and `tm2pb.NewValidatorUpdate`.
- Each of the above types has a `ToProto` and `FromProto` method or function which replaced this logic.
- [light] [\#6054](https://github.com/tendermint/tendermint/pull/6054) Move `MaxRetryAttempt` option from client to provider.
- `NewWithOptions` now sets the max retry attempts and timeouts (@cmwaters)
- [all] [\#6077](https://github.com/tendermint/tendermint/pull/6077) Change spelling from British English to American (@cmwaters)
- Rename "Subscription.Cancelled()" to "Subscription.Canceled()" in libs/pubsub
- Rename "behaviour" pkg to "behavior" and internalized it in blocksync v2
- [rpc/client/http] [\#6176](https://github.com/tendermint/tendermint/pull/6176) Remove `endpoint` arg from `New`, `NewWithTimeout` and `NewWithClient` (@melekes)
- [rpc/client/http] [\#6176](https://github.com/tendermint/tendermint/pull/6176) Unexpose `WSEvents` (@melekes)
- [rpc/jsonrpc/client/ws_client] [\#6176](https://github.com/tendermint/tendermint/pull/6176) `NewWS` no longer accepts options (use `NewWSWithOptions` and `OnReconnect` funcs to configure the client) (@melekes)
- [internal/libs] [\#6366](https://github.com/tendermint/tendermint/pull/6366) Move `autofile`, `clist`,`fail`,`flowrate`, `protoio`, `sync`, `tempfile`, `test` and `timer` lib packages to an internal folder
- [libs/rand] [\#6364](https://github.com/tendermint/tendermint/pull/6364) Remove most of libs/rand in favour of standard lib's `math/rand` (@liamsi)
- [mempool] [\#6466](https://github.com/tendermint/tendermint/pull/6466) The original mempool reactor has been versioned as `v0` and moved to a sub-package under the root `mempool` package.
Some core types have been kept in the `mempool` package such as `TxCache` and it's implementations, the `Mempool` interface itself
and `TxInfo`. (@alexanderbez)
- [crypto/sr25519] [\#6526](https://github.com/tendermint/tendermint/pull/6526) Do not re-execute the Ed25519-style key derivation step when doing signing and verification. The derivation is now done once and only once. This breaks `sr25519.GenPrivKeyFromSecret` output compatibility. (@Yawning)
- [types] [\#6627](https://github.com/tendermint/tendermint/pull/6627) Move `NodeKey` to types to make the type public.
- [config] [\#6627](https://github.com/tendermint/tendermint/pull/6627) Extend `config` to contain methods `LoadNodeKeyID` and `LoadorGenNodeKeyID`
- [blocksync] [\#6755](https://github.com/tendermint/tendermint/pull/6755) Rename `FastSync` and `Blockchain` package to `BlockSync` (@cmwaters)
- CLI/RPC/Config
- [pubsub/events] [\#6634](https://github.com/tendermint/tendermint/pull/6634) The `ResultEvent.Events` field is now of type `[]abci.Event` preserving event order instead of `map[string][]string`. (@alexanderbez)
- [config] [\#5598](https://github.com/tendermint/tendermint/pull/5598) The `test_fuzz` and `test_fuzz_config` P2P settings have been removed. (@erikgrinaker)
- [config] [\#5728](https://github.com/tendermint/tendermint/pull/5728) `fastsync.version = "v1"` is no longer supported (@melekes)
- [cli] [\#5772](https://github.com/tendermint/tendermint/pull/5772) `gen_node_key` prints JSON-encoded `NodeKey` rather than ID and does not save it to `node_key.json` (@melekes)
- [cli] [\#5777](https://github.com/tendermint/tendermint/pull/5777) use hyphen-case instead of snake_case for all cli commands and config parameters (@cmwaters)
- [rpc] [\#6019](https://github.com/tendermint/tendermint/pull/6019) standardise RPC errors and return the correct status code (@bipulprasad & @cmwaters)
- [rpc] [\#6168](https://github.com/tendermint/tendermint/pull/6168) Change default sorting to desc for `/tx_search` results (@melekes)
- [cli] [\#6282](https://github.com/tendermint/tendermint/pull/6282) User must specify the node mode when using `tendermint init` (@cmwaters)
- [state/indexer] [\#6382](https://github.com/tendermint/tendermint/pull/6382) reconstruct indexer, move txindex into the indexer package (@JayT106)
- [cli] [\#6372](https://github.com/tendermint/tendermint/pull/6372) Introduce `BootstrapPeers` as part of the new p2p stack. Peers to be connected on startup (@cmwaters)
- [config] [\#6462](https://github.com/tendermint/tendermint/pull/6462) Move `PrivValidator` configuration out of `BaseConfig` into its own section. (@tychoish)
- [rpc] [\#6610](https://github.com/tendermint/tendermint/pull/6610) Add MaxPeerBlockHeight into /status rpc call (@JayT106)
- [blocksync/rpc] [\#6620](https://github.com/tendermint/tendermint/pull/6620) Add TotalSyncedTime & RemainingTime to SyncInfo in /status RPC (@JayT106)
- [rpc/grpc] [\#6725](https://github.com/tendermint/tendermint/pull/6725) Mark gRPC in the RPC layer as deprecated.
- [blocksync/v2] [\#6730](https://github.com/tendermint/tendermint/pull/6730) Fast Sync v2 is deprecated, please use v0
- [rpc] Add genesis_chunked method to support paginated and parallel fetching of large genesis documents.
- [rpc/jsonrpc/server] [\#6785](https://github.com/tendermint/tendermint/pull/6785) `Listen` function updated to take an `int` argument, `maxOpenConnections`, instead of an entire config object. (@williambanfield)
- [rpc] [\#6820](https://github.com/tendermint/tendermint/pull/6820) Update RPC methods to reflect changes in the p2p layer, disabling support for `UnsafeDialPeers` and `UnsafeDialPeers` when used with the new p2p layer, and changing the response format of the peer list in `NetInfo` for all users.
- [cli] [\#6854](https://github.com/tendermint/tendermint/pull/6854) Remove deprecated snake case commands. (@tychoish)
- [tools] [\#6498](https://github.com/tendermint/tendermint/pull/6498) Set OS home dir to instead of the hardcoded PATH. (@JayT106)
- [cli/indexer] [\#6676](https://github.com/tendermint/tendermint/pull/6676) Reindex events command line tooling. (@JayT106)
- Apps
- [ABCI] [\#6408](https://github.com/tendermint/tendermint/pull/6408) Change the `key` and `value` fields from `[]byte` to `string` in the `EventAttribute` type. (@alexanderbez)
- [ABCI] [\#5447](https://github.com/tendermint/tendermint/pull/5447) Remove `SetOption` method from `ABCI.Client` interface
- [ABCI] [\#5447](https://github.com/tendermint/tendermint/pull/5447) Reset `Oneof` indexes for `Request` and `Response`.
- [ABCI] [\#5818](https://github.com/tendermint/tendermint/pull/5818) Use protoio for msg length delimitation. Migrates from int64 to uint64 length delimiters.
- [ABCI] [\#3546](https://github.com/tendermint/tendermint/pull/3546) Add `mempool_error` field to `ResponseCheckTx`. This field will contain an error string if Tendermint encountered an error while adding a transaction to the mempool. (@williambanfield)
- [Version] [\#6494](https://github.com/tendermint/tendermint/pull/6494) `TMCoreSemVer` has been renamed to `TMVersion`.
- It is not required any longer to set ldflags to set version strings
- [abci/counter] [\#6684](https://github.com/tendermint/tendermint/pull/6684) Delete counter example app
- Data Storage
- [store/state/evidence/light] [\#5771](https://github.com/tendermint/tendermint/pull/5771) Use an order-preserving varint key encoding (@cmwaters)
- [mempool] [\#6396](https://github.com/tendermint/tendermint/pull/6396) Remove mempool's write ahead log (WAL), (previously unused by the tendermint code). (@tychoish)
- [state] [\#6541](https://github.com/tendermint/tendermint/pull/6541) Move pruneBlocks from consensus/state to state/execution. (@JayT106)
- [config/indexer] \#6411 Introduce support for custom event indexing data sources, specifically PostgreSQL. (@JayT106)
- [blocksync/event] \#6619 Emit blocksync status event when switching consensus/blocksync (@JayT106)
- [statesync/event] \#6700 Emit statesync status start/end event (@JayT106)
- [inspect] \#6785 Add a new `inspect` command for introspecting the state and block store of a crashed tendermint node. (@williambanfield)
### IMPROVEMENTS
- [libs/log] Console log formatting changes as a result of [\#6534](https://github.com/tendermint/tendermint/pull/6534) and [\#6589](https://github.com/tendermint/tendermint/pull/6589). (@tychoish)
- [statesync] [\#6566](https://github.com/tendermint/tendermint/pull/6566) Allow state sync fetchers and request timeout to be configurable. (@alexanderbez)
- [types] [\#6478](https://github.com/tendermint/tendermint/pull/6478) Add `block_id` to `newblock` event (@jeebster)
- [crypto/ed25519] [\#5632](https://github.com/tendermint/tendermint/pull/5632) Adopt zip215 `ed25519` verification. (@marbar3778)
- [crypto/ed25519] [\#6526](https://github.com/tendermint/tendermint/pull/6526) Use [curve25519-voi](https://github.com/oasisprotocol/curve25519-voi) for `ed25519` signing and verification. (@Yawning)
- [crypto/sr25519] [\#6526](https://github.com/tendermint/tendermint/pull/6526) Use [curve25519-voi](https://github.com/oasisprotocol/curve25519-voi) for `sr25519` signing and verification. (@Yawning)
- [privval] [\#5603](https://github.com/tendermint/tendermint/pull/5603) Add `--key` to `init`, `gen_validator`, `testnet` & `unsafe_reset_priv_validator` for use in generating `secp256k1` keys.
- [privval] [\#5725](https://github.com/tendermint/tendermint/pull/5725) Add gRPC support to private validator.
- [privval] [\#5876](https://github.com/tendermint/tendermint/pull/5876) `tendermint show-validator` will query the remote signer if gRPC is being used (@marbar3778)
- [abci/client] [\#5673](https://github.com/tendermint/tendermint/pull/5673) `Async` requests return an error if queue is full (@melekes)
- [mempool] [\#5673](https://github.com/tendermint/tendermint/pull/5673) Cancel `CheckTx` requests if RPC client disconnects or times out (@melekes)
- [abci] [\#5706](https://github.com/tendermint/tendermint/pull/5706) Added `AbciVersion` to `RequestInfo` allowing applications to check ABCI version when connecting to Tendermint. (@marbar3778)
- [blocksync/v1] [\#5728](https://github.com/tendermint/tendermint/pull/5728) Remove blocksync v1 (@melekes)
- [blocksync/v0] [\#5741](https://github.com/tendermint/tendermint/pull/5741) Relax termination conditions and increase sync timeout (@melekes)
- [cli] [\#5772](https://github.com/tendermint/tendermint/pull/5772) `gen_node_key` output now contains node ID (`id` field) (@melekes)
- [blocksync/v2] [\#5774](https://github.com/tendermint/tendermint/pull/5774) Send status request when new peer joins (@melekes)
- [store] [\#5888](https://github.com/tendermint/tendermint/pull/5888) store.SaveBlock saves using batches instead of transactions for now to improve ACID properties. This is a quick fix for underlying issues around tm-db and ACID guarantees. (@githubsands)
- [consensus] [\#5987](https://github.com/tendermint/tendermint/pull/5987) and [\#5792](https://github.com/tendermint/tendermint/pull/5792) Remove the `time_iota_ms` consensus parameter. Merge `tmproto.ConsensusParams` and `abci.ConsensusParams`. (@marbar3778, @valardragon)
- [types] [\#5994](https://github.com/tendermint/tendermint/pull/5994) Reduce the use of protobuf types in core logic. (@marbar3778)
- [libs/log] Console log formatting changes as a result of \#6534 and \#6589. (@tychoish)
- [statesync] \#6566 Allow state sync fetchers and request timeout to be configurable. (@alexanderbez)
- [types] \#6478 Add `block_id` to `newblock` event (@jeebster)
- [crypto/ed25519] \#5632 Adopt zip215 `ed25519` verification. (@marbar3778)
- [crypto/ed25519] \#6526 Use [curve25519-voi](https://github.com/oasisprotocol/curve25519-voi) for `ed25519` signing and verification. (@Yawning)
- [crypto/sr25519] \#6526 Use [curve25519-voi](https://github.com/oasisprotocol/curve25519-voi) for `sr25519` signing and verification. (@Yawning)
- [privval] \#5603 Add `--key` to `init`, `gen_validator`, `testnet` & `unsafe_reset_priv_validator` for use in generating `secp256k1` keys.
- [privval] \#5725 Add gRPC support to private validator.
- [privval] \#5876 `tendermint show-validator` will query the remote signer if gRPC is being used (@marbar3778)
- [abci/client] \#5673 `Async` requests return an error if queue is full (@melekes)
- [mempool] \#5673 Cancel `CheckTx` requests if RPC client disconnects or times out (@melekes)
- [abci] \#5706 Added `AbciVersion` to `RequestInfo` allowing applications to check ABCI version when connecting to Tendermint. (@marbar3778)
- [blocksync/v1] \#5728 Remove blocksync v1 (@melekes)
- [blocksync/v0] \#5741 Relax termination conditions and increase sync timeout (@melekes)
- [cli] \#5772 `gen_node_key` output now contains node ID (`id` field) (@melekes)
- [blocksync/v2] \#5774 Send status request when new peer joins (@melekes)
- [store] \#5888 store.SaveBlock saves using batches instead of transactions for now to improve ACID properties. This is a quick fix for underlying issues around tm-db and ACID guarantees. (@githubsands)
- [consensus] \#5987 and \#5792 Remove the `time_iota_ms` consensus parameter. Merge `tmproto.ConsensusParams` and `abci.ConsensusParams`. (@marbar3778, @valardragon)
- [types] \#5994 Reduce the use of protobuf types in core logic. (@marbar3778)
- `ConsensusParams`, `BlockParams`, `ValidatorParams`, `EvidenceParams`, `VersionParams`, `sm.Version` and `version.Consensus` have become native types. They still utilize protobuf when being sent over the wire or written to disk.
- [rpc/client/http] [\#6163](https://github.com/tendermint/tendermint/pull/6163) Do not drop events even if the `out` channel is full (@melekes)
- [node] [\#6059](https://github.com/tendermint/tendermint/pull/6059) Validate and complete genesis doc before saving to state store (@silasdavis)
- [state] [\#6067](https://github.com/tendermint/tendermint/pull/6067) Batch save state data (@githubsands & @cmwaters)
- [crypto] [\#6120](https://github.com/tendermint/tendermint/pull/6120) Implement batch verification interface for ed25519 and sr25519. (@marbar3778)
- [types] [\#6120](https://github.com/tendermint/tendermint/pull/6120) use batch verification for verifying commit signatures.
- [rpc/client/http] \#6163 Do not drop events even if the `out` channel is full (@melekes)
- [node] \#6059 Validate and complete genesis doc before saving to state store (@silasdavis)
- [state] \#6067 Batch save state data (@githubsands & @cmwaters)
- [crypto] \#6120 Implement batch verification interface for ed25519 and sr25519. (@marbar3778)
- [types] \#6120 use batch verification for verifying commit signatures.
- If the key type supports the batch verification API, it will try to batch verify. If batch verification fails, each signature is verified individually (a sketch of this flow appears just after this list).
- [privval/file] [\#6185](https://github.com/tendermint/tendermint/pull/6185) Return error on `LoadFilePV`, `LoadFilePVEmptyState`. Allows for better programmatic control of Tendermint.
- [privval] [\#6240](https://github.com/tendermint/tendermint/pull/6240) Add `context.Context` to privval interface.
- [rpc] [\#6265](https://github.com/tendermint/tendermint/pull/6265) set cache control in http-rpc response header (@JayT106)
- [statesync] [\#6378](https://github.com/tendermint/tendermint/pull/6378) Retry requests for snapshots and add a minimum discovery time (5s) for new snapshots.
- [node/state] [\#6370](https://github.com/tendermint/tendermint/pull/6370) graceful shutdown in the consensus reactor (@JayT106)
- [crypto/merkle] [\#6443](https://github.com/tendermint/tendermint/pull/6443) Improve HashAlternatives performance (@cuonglm)
- [crypto/merkle] [\#6513](https://github.com/tendermint/tendermint/pull/6513) Optimize HashAlternatives (@marbar3778)
- [p2p/pex] [\#6509](https://github.com/tendermint/tendermint/pull/6509) Improve addrBook.hash performance (@cuonglm)
- [consensus/metrics] [\#6549](https://github.com/tendermint/tendermint/pull/6549) Change block_size gauge to a histogram for better observability over time (@marbar3778)
- [statesync] [\#6587](https://github.com/tendermint/tendermint/pull/6587) Increase chunk priority and re-request chunks that don't arrive (@cmwaters)
- [state/privval] [\#6578](https://github.com/tendermint/tendermint/pull/6578) No GetPubKey retry beyond the proposal/voting window (@JayT106)
- [rpc] [\#6615](https://github.com/tendermint/tendermint/pull/6615) Add TotalGasUsed to block_results response (@crypto-facs)
- [cmd/tendermint/commands] [\#6623](https://github.com/tendermint/tendermint/pull/6623) replace `$HOME/.some/test/dir` with `t.TempDir` (@tanyabouman)
- [privval/file] \#6185 Return error on `LoadFilePV`, `LoadFilePVEmptyState`. Allows for better programmatic control of Tendermint.
- [privval] \#6240 Add `context.Context` to privval interface.
- [rpc] \#6265 set cache control in http-rpc response header (@JayT106)
- [statesync] \#6378 Retry requests for snapshots and add a minimum discovery time (5s) for new snapshots.
- [node/state] \#6370 graceful shutdown in the consensus reactor (@JayT106)
- [crypto/merkle] \#6443 Improve HashAlternatives performance (@cuonglm)
- [crypto/merkle] \#6513 Optimize HashAlternatives (@marbar3778)
- [p2p/pex] \#6509 Improve addrBook.hash performance (@cuonglm)
- [consensus/metrics] \#6549 Change block_size gauge to a histogram for better observability over time (@marbar3778)
- [statesync] \#6587 Increase chunk priority and re-request chunks that don't arrive (@cmwaters)
- [state/privval] \#6578 No GetPubKey retry beyond the proposal/voting window (@JayT106)
- [rpc] \#6615 Add TotalGasUsed to block_results response (@crypto-facs)
- [cmd/tendermint/commands] \#6623 replace `$HOME/.some/test/dir` with `t.TempDir` (@tanyabouman)
- [statesync] \#6807 Implement P2P state provider as an alternative to RPC (@cmwaters)
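The fallback noted in the \#6120 entries above can be pictured with the following minimal sketch. It is illustrative only: the `sigItem` type, `batchVerifier` interface, and `verifyCommitSigs` function are assumptions made for this example, not the actual tendermint crypto API.

```go
package verify

import "fmt"

// sigItem is one (public key, message, signature) triple taken from a commit.
type sigItem struct {
	pubKey, msg, sig []byte
}

// batchVerifier is an assumed, minimal batch-verification shape: signatures
// are queued with Add and checked together with Verify.
type batchVerifier interface {
	Add(pubKey, msg, sig []byte) error
	Verify() bool
}

// verifyCommitSigs tries the batch path first; if the batch as a whole fails,
// it falls back to verifying every signature on its own.
func verifyCommitSigs(
	items []sigItem,
	newBatch func() batchVerifier, // returns nil when the key type has no batch support
	verifyOne func(pubKey, msg, sig []byte) bool,
) error {
	if bv := newBatch(); bv != nil {
		added := true
		for _, it := range items {
			if err := bv.Add(it.pubKey, it.msg, it.sig); err != nil {
				added = false
				break
			}
		}
		if added && bv.Verify() {
			return nil // the whole commit verified in one batched pass
		}
	}
	// Fallback: check each signature individually to pinpoint the bad one(s).
	for i, it := range items {
		if !verifyOne(it.pubKey, it.msg, it.sig) {
			return fmt.Errorf("signature %d failed verification", i)
		}
	}
	return nil
}
```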
### BUG FIXES
- [privval] \#5638 Increase read/write timeout to 5s and calculate ping interval based on it (@JoeKash)
- [evidence] \#6375 Fix bug with inconsistent LightClientAttackEvidence hashing (cmwaters)
- [rpc] \#6507 Ensure RPC client can handle URLs without ports (@JayT106)
- [statesync] \#6463 Adds Reverse Sync feature to fetch historical light blocks after state sync in order to verify any evidence (@cmwaters)
- [blocksync] \#6590 Update the metrics during blocksync (@JayT106)
## v0.34.13
*September 6, 2021*
@@ -1987,7 +1834,7 @@ more details.
- [rpc] [\#3269](https://github.com/tendermint/tendermint/issues/2826) Limit number of unique clientIDs with open subscriptions. Configurable via `rpc.max_subscription_clients`
- [rpc] [\#3269](https://github.com/tendermint/tendermint/issues/2826) Limit number of unique queries a given client can subscribe to at once. Configurable via `rpc.max_subscriptions_per_client`.
- [rpc] [\#3435](https://github.com/tendermint/tendermint/issues/3435) Default ReadTimeout and WriteTimeout changed to 10s. WriteTimeout can increased by setting `rpc.timeout_broadcast_tx_commit` in the config.
- [rpc/client] [\#3269](https://github.com/tendermint/tendermint/issues/3269) Update `EventsClient` interface to reflect new pubsub/eventBus API [ADR-33](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-033-pubsub.md). This includes `Subscribe`, `Unsubscribe`, and `UnsubscribeAll` methods.
- [rpc/client] [\#3269](https://github.com/tendermint/tendermint/issues/3269) Update `EventsClient` interface to reflect new pubsub/eventBus API [ADR-33](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-033-pubsub.md). This includes `Subscribe`, `Unsubscribe`, and `UnsubscribeAll` methods.
* Apps
- [abci] [\#3403](https://github.com/tendermint/tendermint/issues/3403) Remove `time_iota_ms` from BlockParams. This is a
@@ -2040,7 +1887,7 @@ more details.
- [blockchain] [\#3358](https://github.com/tendermint/tendermint/pull/3358) Fix timer leak in `BlockPool` (@guagualvcha)
- [cmd] [\#3408](https://github.com/tendermint/tendermint/issues/3408) Fix `testnet` command's panic when creating non-validator configs (using `--n` flag) (@srmo)
- [libs/db/remotedb/grpcdb] [\#3402](https://github.com/tendermint/tendermint/issues/3402) Close Iterator/ReverseIterator after use
- [libs/pubsub] [\#951](https://github.com/tendermint/tendermint/issues/951), [\#1880](https://github.com/tendermint/tendermint/issues/1880) Use non-blocking send when dispatching messages [ADR-33](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-033-pubsub.md)
- [libs/pubsub] [\#951](https://github.com/tendermint/tendermint/issues/951), [\#1880](https://github.com/tendermint/tendermint/issues/1880) Use non-blocking send when dispatching messages [ADR-33](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-033-pubsub.md)
- [lite] [\#3364](https://github.com/tendermint/tendermint/issues/3364) Fix `/validators` and `/abci_query` proxy endpoints
(@guagualvcha)
- [p2p/conn] [\#3347](https://github.com/tendermint/tendermint/issues/3347) Reject all-zero shared secrets in the Diffie-Hellman step of secret-connection
@@ -2744,7 +2591,7 @@ Special thanks to external contributors on this release:
This release is mostly about the ConsensusParams - removing fields and enforcing MaxGas.
It also addresses some issues found via security audit, removes various unused
functions from `libs/common`, and implements
[ADR-012](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-012-peer-transport.md).
[ADR-012](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-012-peer-transport.md).
BREAKING CHANGES:
@@ -2825,7 +2672,7 @@ BREAKING CHANGES:
- [abci] Added address of the original proposer of the block to Header
- [abci] Change ABCI Header to match Tendermint exactly
- [abci] [\#2159](https://github.com/tendermint/tendermint/issues/2159) Update use of `Validator` (see
[ADR-018](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-018-ABCI-Validators.md)):
[ADR-018](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-018-ABCI-Validators.md)):
- Remove PubKey from `Validator` (so it's just Address and Power)
- Introduce `ValidatorUpdate` (with just PubKey and Power)
- InitChain and EndBlock use ValidatorUpdate
@@ -2847,7 +2694,7 @@ BREAKING CHANGES:
- [state] [\#1815](https://github.com/tendermint/tendermint/issues/1815) Validator set changes are now delayed by one block (!)
- Add NextValidatorSet to State, changes on-disk representation of state
- [state] [\#2184](https://github.com/tendermint/tendermint/issues/2184) Enforce ConsensusParams.BlockSize.MaxBytes (See
[ADR-020](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-020-block-size.md)).
[ADR-020](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-020-block-size.md)).
- Remove ConsensusParams.BlockSize.MaxTxs
- Introduce maximum sizes for all components of a block, including ChainID
- [types] Updates to the block Header:
@@ -2858,7 +2705,7 @@ BREAKING CHANGES:
- [consensus] [\#2203](https://github.com/tendermint/tendermint/issues/2203) Implement BFT time
- Timestamp in block must be monotonic and equal the median of timestamps in block's LastCommit
- [crypto] [\#2239](https://github.com/tendermint/tendermint/issues/2239) Secp256k1 signature changes (See
[ADR-014](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-014-secp-malleability.md)):
[ADR-014](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-014-secp-malleability.md)):
- format changed from DER to `r || s`, both little endian encoded as 32 bytes.
- malleability removed by requiring `s` to be in canonical form.
View File
@@ -1,13 +1,11 @@
# Unreleased Changes
Friendly reminder: We have a [bug bounty program](https://hackerone.com/cosmos).
## v0.35.8
Month DD, YYYY
## vX.X
Special thanks to external contributors on this release:
Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermint).
### BREAKING CHANGES
- CLI/RPC/Config
@@ -22,8 +20,7 @@ Special thanks to external contributors on this release:
### FEATURES
- [cli] [\#8675] Add command to force compact goleveldb databases (@cmwaters)
### IMPROVEMENTS
### BUG FIXES
View File
@@ -20,7 +20,7 @@ This code of conduct applies to all projects run by the Tendermint/COSMOS team a
* Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.
* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term “harassment” as including the definition in the [Citizen Code of Conduct](https://github.com/stumpsyn/policies/blob/master/citizen_code_of_conduct.md); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we dont tolerate behavior that excludes people in socially marginalized groups.
* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term “harassment” as including the definition in the [Citizen Code of Conduct](http://citizencodeofconduct.org/); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we dont tolerate behavior that excludes people in socially marginalized groups.
* Private harassment is also unacceptable. No matter who you are, if you feel you have been or are being harassed or made uncomfortable by a community member, please contact one of the channel admins or the person mentioned above immediately. Whether youre a regular contributor or a newcomer, we care about making this community a safe place for you and weve got your back.
View File
@@ -249,7 +249,6 @@ The author of the original pull request is responsible for solving the conflicts
merging the pull request.
#### Creating a backport branch
If this is the first release candidate for a major release, you get to have the honor of creating
the backport branch!
@@ -335,14 +334,6 @@ If there were no release candidates, begin by creating a backport branch, as des
- `git tag -a v0.35.0 -m 'Release v0.35.0'`
- `git push origin v0.35.0`
7. Make sure that `master` is updated with the latest `CHANGELOG.md`, `CHANGELOG_PENDING.md`, and `UPGRADING.md`.
8. Add the release to the documentation site generator config (see
[DOCS_README.md](./docs/DOCS_README.md) for more details). In summary:
- Start on branch `master`.
- Add a new line at the bottom of [`docs/versions`](./docs/versions) to
ensure the newest release is the default for the landing page.
- Add a new entry to `themeConfig.versions` in
[`docs/.vuepress/config.js`](./docs/.vuepress/config.js) to include the
release in the dropdown versions menu.
#### Minor release (point releases)
View File
@@ -1,5 +1,6 @@
#!/usr/bin/make -f
PACKAGES=$(shell go list ./...)
BUILDDIR ?= $(CURDIR)/build
BUILD_TAGS?=tendermint
@@ -14,9 +15,7 @@ endif
LD_FLAGS = -X github.com/tendermint/tendermint/version.TMVersion=$(VERSION)
BUILD_FLAGS = -mod=readonly -ldflags "$(LD_FLAGS)"
HTTPS_GIT := https://github.com/tendermint/tendermint.git
BUILD_IMAGE := ghcr.io/tendermint/docker-build-proto
BASE_BRANCH := v0.35.x
DOCKER_PROTO := docker run -v $(shell pwd):/workspace --workdir /workspace $(BUILD_IMAGE)
DOCKER_BUF := docker run -v $(shell pwd):/workspace --workdir /workspace bufbuild/buf
CGO_ENABLED ?= 0
# handle nostrip
@@ -84,28 +83,26 @@ proto-all: proto-gen proto-lint proto-check-breaking
.PHONY: proto-all
proto-gen:
@echo "Generating Go packages for .proto files"
@$(DOCKER_PROTO) sh ./scripts/protocgen.sh
@docker pull -q tendermintdev/docker-build-proto
@echo "Generating Protobuf files"
@docker run -v $(shell pwd):/workspace --workdir /workspace tendermintdev/docker-build-proto sh ./scripts/protocgen.sh
.PHONY: proto-gen
proto-lint:
@echo "Running lint checks for .proto files"
@$(DOCKER_PROTO) buf lint --error-format=json
@$(DOCKER_BUF) check lint --error-format=json
.PHONY: proto-lint
proto-format:
@echo "Formatting .proto files"
@$(DOCKER_PROTO) find ./ -not -path "./third_party/*" -name '*.proto' -exec clang-format -i {} \;
@echo "Formatting Protobuf files"
docker run -v $(shell pwd):/workspace --workdir /workspace tendermintdev/docker-build-proto find ./ -not -path "./third_party/*" -name *.proto -exec clang-format -i {} \;
.PHONY: proto-format
proto-check-breaking:
@echo "Checking for breaking changes in .proto files"
@$(DOCKER_PROTO) buf breaking --against .git#branch=$(BASE_BRANCH)
@$(DOCKER_BUF) check breaking --against-input .git#branch=master
.PHONY: proto-check-breaking
proto-check-breaking-ci:
@echo "Checking for breaking changes in .proto files"
@$(DOCKER_PROTO) buf breaking --against $(HTTPS_GIT)#branch=$(BASE_BRANCH)
@$(DOCKER_BUF) check breaking --against-input $(HTTPS_GIT)#branch=master
.PHONY: proto-check-breaking-ci
###############################################################################
@@ -121,7 +118,7 @@ install_abci:
.PHONY: install_abci
###############################################################################
### Privval Server ###
### Privval Server ###
###############################################################################
build_privval_server:
@@ -134,11 +131,11 @@ generate_test_cert:
# generate server certificate
@certstrap request-cert -cn server -ip 127.0.0.1
# self-sign server certificate with rootCA
@certstrap sign server --CA "root CA"
@certstrap sign server --CA "root CA"
# generate client certificate
@certstrap request-cert -cn client -ip 127.0.0.1
# self-sign client certificate with rootCA
@certstrap sign client --CA "root CA"
@certstrap sign client --CA "root CA"
.PHONY: generate_test_cert
###############################################################################
@@ -217,7 +214,7 @@ DESTINATION = ./index.html.md
build-docs:
@cd docs && \
while read -r branch path_prefix; do \
(git checkout $${branch} && npm ci && VUEPRESS_BASE="/$${path_prefix}/" npm run build) ; \
(git checkout $${branch} && npm install && VUEPRESS_BASE="/$${path_prefix}/" npm run build) ; \
mkdir -p ~/output/$${path_prefix} ; \
cp -r .vuepress/dist/* ~/output/$${path_prefix}/ ; \
cp ~/output/$${path_prefix}/index.html ~/output ; \
@@ -228,13 +225,15 @@ build-docs:
### Docker image ###
###############################################################################
build-docker:
docker build --label=tendermint --tag="tendermint/tendermint" -f DOCKER/Dockerfile .
build-docker: build-linux
cp $(BUILDDIR)/tendermint DOCKER/tendermint
docker build --label=tendermint --tag="tendermint/tendermint" DOCKER
rm -rf DOCKER/tendermint
.PHONY: build-docker
###############################################################################
### Mocks ###
### Mocks ###
###############################################################################
mockery:
@@ -304,25 +303,3 @@ build-reproducible:
--name latest-build cosmossdk/rbuilder:latest
docker cp -a latest-build:/home/builder/artifacts/ $(CURDIR)/
.PHONY: build-reproducible
# Implements test splitting and running. This is pulled directly from
# the github action workflows for better local reproducibility.
GO_TEST_FILES != find $(CURDIR) -name "*_test.go"
# default to four splits
NUM_SPLIT ?= 4
$(BUILDDIR):
mkdir -p $@
# The format statement filters out all packages that don't have tests.
# Note we need to check for both in-package tests (.TestGoFiles) and
# out-of-package tests (.XTestGoFiles).
$(BUILDDIR)/packages.txt:$(GO_TEST_FILES) $(BUILDDIR)
go list -f "{{ if (or .TestGoFiles .XTestGoFiles) }}{{ .ImportPath }}{{ end }}" ./... | sort > $@
split-test-packages:$(BUILDDIR)/packages.txt
split -d -n l/$(NUM_SPLIT) $< $<.
test-group-%:split-test-packages
cat $(BUILDDIR)/packages.txt.$* | xargs go test -mod=readonly -timeout=15m -race -coverprofile=$(BUILDDIR)/$*.profile.out
View File
@@ -9,7 +9,7 @@ Or [Blockchain](<https://en.wikipedia.org/wiki/Blockchain_(database)>), for shor
[![version](https://img.shields.io/github/tag/tendermint/tendermint.svg)](https://github.com/tendermint/tendermint/releases/latest)
[![API Reference](https://camo.githubusercontent.com/915b7be44ada53c290eb157634330494ebe3e30a/68747470733a2f2f676f646f632e6f72672f6769746875622e636f6d2f676f6c616e672f6764646f3f7374617475732e737667)](https://pkg.go.dev/github.com/tendermint/tendermint)
[![Go version](https://img.shields.io/badge/go-1.16-blue.svg)](https://github.com/moovweb/gvm)
[![Discord chat](https://img.shields.io/discord/669268347736686612.svg)](https://discord.gg/cosmosnetwork)
[![Discord chat](https://img.shields.io/discord/669268347736686612.svg)](https://discord.gg/vcExX9T)
[![license](https://img.shields.io/github/license/tendermint/tendermint.svg)](https://github.com/tendermint/tendermint/blob/master/LICENSE)
[![tendermint/tendermint](https://tokei.rs/b1/github/tendermint/tendermint?category=lines)](https://github.com/tendermint/tendermint)
[![Sourcegraph](https://sourcegraph.com/github.com/tendermint/tendermint/-/badge.svg)](https://sourcegraph.com/github.com/tendermint/tendermint?badge)
@@ -29,15 +29,16 @@ see our recent paper, "[The latest gossip on BFT consensus](https://arxiv.org/ab
Please do not depend on master as your production branch. Use [releases](https://github.com/tendermint/tendermint/releases) instead.
Tendermint has been used in production in private and public environments, most notably the blockchains of the Cosmos Network. We haven't released v1.0 yet since we are making breaking changes to the protocol and the APIs.
Tendermint has been used in production in private and public environments, most notably the blockchains of the Cosmos Network. We haven't released v1.0 yet since we are making breaking changes to the protocol and the APIs.
See below for more details about [versioning](#versioning).
In any case, if you intend to run Tendermint in production, we're happy to help. You can
contact us [over email](mailto:hello@interchain.berlin) or [join the chat](https://discord.gg/cosmosnetwork).
contact us [over email](mailto:hello@interchain.berlin) or [join the chat](https://discord.gg/vcExX9T).
## Security
To report a security vulnerability, see our [bug bounty program](https://hackerone.com/cosmos).
To report a security vulnerability, see our [bug bounty
program](https://hackerone.com/tendermint).
For examples of the kinds of bugs we're looking for, see [our security policy](SECURITY.md).
We also maintain a dedicated mailing list for security updates. We will only ever use this mailing list
@@ -60,8 +61,8 @@ See the [install instructions](/docs/introduction/install.md).
### Quick Start
- [Single node](/docs/introduction/quick-start.md)
- [Local cluster using docker-compose](/docs/tools/docker-compose.md)
- [Remote cluster using Terraform and Ansible](/docs/tools/terraform-and-ansible.md)
- [Local cluster using docker-compose](/docs/networks/docker-compose.md)
- [Remote cluster using Terraform and Ansible](/docs/networks/terraform-and-ansible.md)
- [Join the Cosmos testnet](https://cosmos.network/testnet)
## Contributing
@@ -70,7 +71,7 @@ Please abide by the [Code of Conduct](CODE_OF_CONDUCT.md) in all interactions.
Before contributing to the project, please take a look at the [contributing guidelines](CONTRIBUTING.md)
and the [style guide](STYLE_GUIDE.md). You may also find it helpful to read the
[specifications](https://github.com/tendermint/spec), watch the [Developer Sessions](/docs/DEV_SESSIONS.md),
[specifications](https://github.com/tendermint/spec), watch the [Developer Sessions](/docs/DEV_SESSIONS.md),
and familiarize yourself with our
[Architectural Decision Records](https://github.com/tendermint/tendermint/tree/master/docs/architecture).
@@ -94,7 +95,7 @@ In an effort to avoid accumulating technical debt prior to 1.0.0,
we do not guarantee that breaking changes (ie. bumps in the MINOR version)
will work with existing Tendermint blockchains. In these cases you will
have to start a new blockchain, or write something custom to get the old
data into the new chain. However, any bump in the PATCH version should be
data into the new chain. However, any bump in the PATCH version should be
compatible with existing blockchain histories.
View File
@@ -44,47 +44,26 @@ This guide provides instructions for upgrading to specific versions of Tendermin
* The fast sync process as well as the blockchain package and service have all
been renamed to block sync
* We have added a new, experimental tool to help operators migrate
configuration files created by previous versions of Tendermint.
To try this tool, run:
```shell
# Install the tool.
go install github.com/tendermint/tendermint/scripts/confix@v0.35.x
# Run the tool with the old configuration file as input.
# Replace the -config argument with your path.
confix -config ~/.tendermint/config/config.toml -out updated.toml
```
This tool should be able to update configurations from v0.34 to v0.35. We
plan to extend it to handle older configuration files in the future. For now,
it will report an error (without making any changes) if it does not recognize
the version that created the file.
### Database Key Format Changes
The format of all tendermint on-disk database keys changes in
0.35. Upgrading nodes must either re-sync all data or run a migration
script provided in this release.
The script located in
`github.com/tendermint/tendermint/scripts/keymigrate/migrate.go` provides the
function `Migrate(context.Context, db.DB)` which you can operationalize as
makes sense for your deployment.
script provided in this release. The script located in
`github.com/tendermint/tendermint/scripts/keymigrate/migrate.go`
provides the function `Migrate(context.Context, db.DB)` which you can
operationalize as makes sense for your deployment.
For ease of use the `tendermint` command includes a CLI version of the
migration script, which you can invoke, as in:
tendermint key-migrate
This reads the configuration file as normal and allows the `--db-backend` and
`--db-dir` flags to override the database location as needed.
This reads the configuration file as normal and allows the
`--db-backend` and `--db-dir` flags to change database operations as
needed.
The migration operation is intended to be idempotent, and should be safe to
rerun on the same database multiple times. As a safety measure, however, we
recommend that operators test out the migration on a copy of the database
first, if it is practical to do so, before applying it to the production data.
The migration operation is idempotent and can be run more than once,
if needed.
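For operators who prefer to drive the migration from their own tooling, here is a minimal sketch of calling `Migrate` directly. The database name, backend, and data directory below are placeholders, and the assumption that `Migrate` returns an `error` is ours; adapt all of these to the databases and tm-db backend your node actually uses.

```go
package main

import (
	"context"
	"log"

	dbm "github.com/tendermint/tm-db"

	"github.com/tendermint/tendermint/scripts/keymigrate"
)

func main() {
	ctx := context.Background()

	// Open one of the node's databases (repeat for each database under data/).
	blockDB, err := dbm.NewDB("blockstore", dbm.GoLevelDBBackend, "/home/operator/.tendermint/data")
	if err != nil {
		log.Fatalf("open blockstore: %v", err)
	}
	defer blockDB.Close()

	// Rewrite the keys in place; the operation is intended to be idempotent,
	// but test it on a copy of the data first where practical.
	if err := keymigrate.Migrate(ctx, blockDB); err != nil {
		log.Fatalf("migrate blockstore: %v", err)
	}
}
```

The built-in `tendermint key-migrate` command described above performs the equivalent work using the node's own configuration.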
### CLI Changes
@@ -119,7 +98,7 @@ are:
- `blockchain`
- `evidence`
Accordingly, the `node` package changed to reduce access to
Accordingly, the `node` package was changed to reduce access to
tendermint internals: applications that use tendermint as a library
will need to change to accommodate these changes. Most notably:
@@ -130,20 +109,6 @@ will need to change to accommodate these changes. Most notably:
longer exported and have been replaced with `node.New` and
`node.NewDefault` which provide more functional interfaces.
To access any of the functionality previously available via the
`node.Node` type, use the `*local.Local` "RPC" client, that exposes
the full RPC interface provided as direct function calls. Import the
`github.com/tendermint/tendermint/rpc/client/local` package and pass
the node service as in the following:
```go
node := node.NewDefault() //construct the node object
// start and set up the node service
client := local.New(node.(local.NodeService))
// use client object to interact with the node
```
### gRPC Support
Mark gRPC in the RPC layer as deprecated and to be removed in 0.36.
@@ -165,10 +130,10 @@ both stacks.
The P2P library was reimplemented in this release. The new implementation is
enabled by default in this version of Tendermint. The legacy implementation is still
included in this version of Tendermint as a backstop to work around unforeseen
production issues. The new and legacy version are interoperable. If necessary,
production issues. The new and legacy version are interoperable. If necessary,
you can enable the legacy implementation in the server configuration file.
To make use of the legacy P2P implementation, add or update the following field of
To make use of the legacy P2P implementation, add or update the following field of
your server's configuration file under the `[p2p]` section:
```toml
@@ -193,8 +158,8 @@ in the order in which they were received.
* `priority`: A priority queue of messages.
* `wdrr`: A queue implementing the Weighted Deficit Round Robin algorithm. A
weighted deficit round robin queue is created per peer. Each queue contains a
* `wdrr`: A queue implementing the Weighted Deficit Round Robin algorithm. A
weighted deficit round robin queue is created per peer. Each queue contains a
separate 'flow' for each of the channels of communication that exist between any two
peers. Tendermint maintains a channel per message type between peers. Each WDRR
queue maintains a shared buffer with a fixed capacity through which messages on different
View File
@@ -1,4 +1,4 @@
package abciclient
package abcicli
import (
"context"
@@ -87,15 +87,9 @@ type ReqRes struct {
*sync.WaitGroup
*types.Response // Not set atomically, so be sure to use WaitGroup.
mtx tmsync.Mutex
// callbackInvoked as a variable to track if the callback was already
// invoked during the regular execution of the request. This variable
// allows clients to set the callback simultaneously without potentially
// invoking the callback twice by accident, once when 'SetCallback' is
// called and once during the normal request.
callbackInvoked bool
cb func(*types.Response) // A single callback that may be set.
mtx tmsync.RWMutex
done bool // Gets set to true once *after* WaitGroup.Done().
cb func(*types.Response) // A single callback that may be set.
}
func NewReqRes(req *types.Request) *ReqRes {
@@ -104,8 +98,8 @@ func NewReqRes(req *types.Request) *ReqRes {
WaitGroup: waitGroup1(),
Response: nil,
callbackInvoked: false,
cb: nil,
done: false,
cb: nil,
}
}
@@ -115,7 +109,7 @@ func NewReqRes(req *types.Request) *ReqRes {
func (r *ReqRes) SetCallback(cb func(res *types.Response)) {
r.mtx.Lock()
if r.callbackInvoked {
if r.done {
r.mtx.Unlock()
cb(r.Response)
return
@@ -134,7 +128,6 @@ func (r *ReqRes) InvokeCallback() {
if r.cb != nil {
r.cb(r.Response)
}
r.callbackInvoked = true
}
// GetCallback returns the configured callback of the ReqRes object which may be
@@ -144,9 +137,16 @@ func (r *ReqRes) InvokeCallback() {
//
// ref: https://github.com/tendermint/tendermint/issues/5439
func (r *ReqRes) GetCallback() func(*types.Response) {
r.mtx.RLock()
defer r.mtx.RUnlock()
return r.cb
}
// SetDone marks the ReqRes object as done.
func (r *ReqRes) SetDone() {
r.mtx.Lock()
defer r.mtx.Unlock()
return r.cb
r.done = true
}
func waitGroup1() (wg *sync.WaitGroup) {
View File
@@ -1,35 +0,0 @@
package abciclient
import (
"fmt"
"github.com/tendermint/tendermint/abci/types"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
)
// Creator creates new ABCI clients.
type Creator func() (Client, error)
// NewLocalCreator returns a Creator for the given app,
// which will be running locally.
func NewLocalCreator(app types.Application) Creator {
mtx := new(tmsync.Mutex)
return func() (Client, error) {
return NewLocalClient(mtx, app), nil
}
}
// NewRemoteCreator returns a Creator for the given address (e.g.
// "192.168.0.1") and transport (e.g. "tcp"). Set mustConnect to true if you
// want the client to connect before reporting success.
func NewRemoteCreator(addr, transport string, mustConnect bool) Creator {
return func() (Client, error) {
remoteApp, err := NewClient(addr, transport, mustConnect)
if err != nil {
return nil, fmt.Errorf("failed to connect to proxy: %w", err)
}
return remoteApp, nil
}
}
View File
@@ -1,4 +1,4 @@
// Package abciclient provides an ABCI implementation in Go.
// Package abcicli provides an ABCI implementation in Go.
//
// There are 3 clients available:
// 1. socket (unix or TCP)
@@ -26,4 +26,4 @@
//
// sync: waits for all Async calls to complete (essentially what Flush does in
// the socket client) and calls Sync method.
package abciclient
package abcicli
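For orientation, a rough usage sketch of the socket client described in this package comment, based only on calls that appear elsewhere in this compare (`NewSocketClient`, `Start`, `EchoSync`). The listen address and error handling are illustrative, and the import alias is spelled out explicitly since this diff renames the package between `abcicli` and `abciclient`.

```go
package main

import (
	"context"
	"log"

	abciclient "github.com/tendermint/tendermint/abci/client"
)

func main() {
	// mustConnect=true makes Start return an error if the ABCI server
	// cannot be reached.
	client := abciclient.NewSocketClient("tcp://127.0.0.1:26658", true)
	if err := client.Start(); err != nil {
		log.Fatalf("starting abci client: %v", err)
	}
	defer client.Stop() //nolint:errcheck

	// Sync calls queue the request, flush the connection, and wait for the
	// matching response; Async calls return a ReqRes handle instead.
	res, err := client.EchoSync(context.Background(), "hello")
	if err != nil {
		log.Fatalf("echo: %v", err)
	}
	log.Printf("echo reply: %q", res.Message)
}
```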
View File
@@ -1,4 +1,4 @@
package abciclient
package abcicli
import (
"context"
@@ -8,7 +8,6 @@ import (
"time"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
"github.com/tendermint/tendermint/abci/types"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
@@ -25,7 +24,7 @@ type grpcClient struct {
conn *grpc.ClientConn
chReqRes chan *ReqRes // dispatches "async" responses to callbacks *in order*, needed by mempool
mtx tmsync.Mutex
mtx tmsync.RWMutex
addr string
err error
resCb func(*types.Request, *types.Response) // listens to all callbacks
@@ -72,6 +71,7 @@ func (cli *grpcClient) OnStart() error {
cli.mtx.Lock()
defer cli.mtx.Unlock()
reqres.SetDone()
reqres.Done()
// Notify client listener if set
@@ -80,7 +80,9 @@ func (cli *grpcClient) OnStart() error {
}
// Notify reqRes listener if set
reqres.InvokeCallback()
if cb := reqres.GetCallback(); cb != nil {
cb(reqres.Response)
}
}
for reqres := range cli.chReqRes {
if reqres != nil {
@@ -93,10 +95,7 @@ func (cli *grpcClient) OnStart() error {
RETRY_LOOP:
for {
conn, err := grpc.Dial(cli.addr,
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithContextDialer(dialerFunc),
)
conn, err := grpc.Dial(cli.addr, grpc.WithInsecure(), grpc.WithContextDialer(dialerFunc))
if err != nil {
if cli.mustConnect {
return err
@@ -150,8 +149,8 @@ func (cli *grpcClient) StopForError(err error) {
}
func (cli *grpcClient) Error() error {
cli.mtx.Lock()
defer cli.mtx.Unlock()
cli.mtx.RLock()
defer cli.mtx.RUnlock()
return cli.err
}
@@ -159,8 +158,8 @@ func (cli *grpcClient) Error() error {
// NOTE: callback may get internally generated flush responses.
func (cli *grpcClient) SetResponseCallback(resCb Callback) {
cli.mtx.Lock()
defer cli.mtx.Unlock()
cli.resCb = resCb
cli.mtx.Unlock()
}
//----------------------------------------
View File
@@ -1,4 +1,4 @@
package abciclient
package abcicli
import (
"context"
@@ -15,7 +15,7 @@ import (
type localClient struct {
service.BaseService
mtx *tmsync.Mutex
mtx *tmsync.RWMutex
types.Application
Callback
}
@@ -26,22 +26,24 @@ var _ Client = (*localClient)(nil)
// methods of the given app.
//
// Both Async and Sync methods ignore the given context.Context parameter.
func NewLocalClient(mtx *tmsync.Mutex, app types.Application) Client {
func NewLocalClient(mtx *tmsync.RWMutex, app types.Application) Client {
if mtx == nil {
mtx = new(tmsync.Mutex)
mtx = &tmsync.RWMutex{}
}
cli := &localClient{
mtx: mtx,
Application: app,
}
cli.BaseService = *service.NewBaseService(nil, "localClient", cli)
return cli
}
func (app *localClient) SetResponseCallback(cb Callback) {
app.mtx.Lock()
defer app.mtx.Unlock()
app.Callback = cb
app.mtx.Unlock()
}
// TODO: change types.Application to include Error()?
@@ -65,8 +67,8 @@ func (app *localClient) EchoAsync(ctx context.Context, msg string) (*ReqRes, err
}
func (app *localClient) InfoAsync(ctx context.Context, req types.RequestInfo) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
app.mtx.RLock()
defer app.mtx.RUnlock()
res := app.Application.Info(req)
return app.callback(
@@ -98,8 +100,8 @@ func (app *localClient) CheckTxAsync(ctx context.Context, req types.RequestCheck
}
func (app *localClient) QueryAsync(ctx context.Context, req types.RequestQuery) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
app.mtx.RLock()
defer app.mtx.RUnlock()
res := app.Application.Query(req)
return app.callback(
@@ -213,8 +215,8 @@ func (app *localClient) EchoSync(ctx context.Context, msg string) (*types.Respon
}
func (app *localClient) InfoSync(ctx context.Context, req types.RequestInfo) (*types.ResponseInfo, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
app.mtx.RLock()
defer app.mtx.RUnlock()
res := app.Application.Info(req)
return &res, nil
@@ -247,8 +249,8 @@ func (app *localClient) QuerySync(
ctx context.Context,
req types.RequestQuery,
) (*types.ResponseQuery, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
app.mtx.RLock()
defer app.mtx.RUnlock()
res := app.Application.Query(req)
return &res, nil
@@ -348,13 +350,12 @@ func (app *localClient) ApplySnapshotChunkSync(
func (app *localClient) callback(req *types.Request, res *types.Response) *ReqRes {
app.Callback(req, res)
rr := newLocalReqRes(req, res)
rr.callbackInvoked = true
return rr
return newLocalReqRes(req, res)
}
func newLocalReqRes(req *types.Request, res *types.Response) *ReqRes {
reqRes := NewReqRes(req)
reqRes.Response = res
reqRes.SetDone()
return reqRes
}
View File
@@ -5,7 +5,7 @@ package mocks
import (
context "context"
abciclient "github.com/tendermint/tendermint/abci/client"
abcicli "github.com/tendermint/tendermint/abci/client"
log "github.com/tendermint/tendermint/libs/log"
@@ -20,15 +20,15 @@ type Client struct {
}
// ApplySnapshotChunkAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) ApplySnapshotChunkAsync(_a0 context.Context, _a1 types.RequestApplySnapshotChunk) (*abciclient.ReqRes, error) {
func (_m *Client) ApplySnapshotChunkAsync(_a0 context.Context, _a1 types.RequestApplySnapshotChunk) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestApplySnapshotChunk) *abciclient.ReqRes); ok {
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestApplySnapshotChunk) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
@@ -66,15 +66,15 @@ func (_m *Client) ApplySnapshotChunkSync(_a0 context.Context, _a1 types.RequestA
}
// BeginBlockAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) BeginBlockAsync(_a0 context.Context, _a1 types.RequestBeginBlock) (*abciclient.ReqRes, error) {
func (_m *Client) BeginBlockAsync(_a0 context.Context, _a1 types.RequestBeginBlock) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestBeginBlock) *abciclient.ReqRes); ok {
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestBeginBlock) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
@@ -112,15 +112,15 @@ func (_m *Client) BeginBlockSync(_a0 context.Context, _a1 types.RequestBeginBloc
}
// CheckTxAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) CheckTxAsync(_a0 context.Context, _a1 types.RequestCheckTx) (*abciclient.ReqRes, error) {
func (_m *Client) CheckTxAsync(_a0 context.Context, _a1 types.RequestCheckTx) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestCheckTx) *abciclient.ReqRes); ok {
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestCheckTx) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
@@ -158,15 +158,15 @@ func (_m *Client) CheckTxSync(_a0 context.Context, _a1 types.RequestCheckTx) (*t
}
// CommitAsync provides a mock function with given fields: _a0
func (_m *Client) CommitAsync(_a0 context.Context) (*abciclient.ReqRes, error) {
func (_m *Client) CommitAsync(_a0 context.Context) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context) *abciclient.ReqRes); ok {
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context) *abcicli.ReqRes); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
@@ -204,15 +204,15 @@ func (_m *Client) CommitSync(_a0 context.Context) (*types.ResponseCommit, error)
}
// DeliverTxAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) DeliverTxAsync(_a0 context.Context, _a1 types.RequestDeliverTx) (*abciclient.ReqRes, error) {
func (_m *Client) DeliverTxAsync(_a0 context.Context, _a1 types.RequestDeliverTx) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestDeliverTx) *abciclient.ReqRes); ok {
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestDeliverTx) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
@@ -250,15 +250,15 @@ func (_m *Client) DeliverTxSync(_a0 context.Context, _a1 types.RequestDeliverTx)
}
// EchoAsync provides a mock function with given fields: ctx, msg
func (_m *Client) EchoAsync(ctx context.Context, msg string) (*abciclient.ReqRes, error) {
func (_m *Client) EchoAsync(ctx context.Context, msg string) (*abcicli.ReqRes, error) {
ret := _m.Called(ctx, msg)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, string) *abciclient.ReqRes); ok {
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, string) *abcicli.ReqRes); ok {
r0 = rf(ctx, msg)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
@@ -296,15 +296,15 @@ func (_m *Client) EchoSync(ctx context.Context, msg string) (*types.ResponseEcho
}
// EndBlockAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) EndBlockAsync(_a0 context.Context, _a1 types.RequestEndBlock) (*abciclient.ReqRes, error) {
func (_m *Client) EndBlockAsync(_a0 context.Context, _a1 types.RequestEndBlock) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestEndBlock) *abciclient.ReqRes); ok {
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestEndBlock) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
@@ -356,15 +356,15 @@ func (_m *Client) Error() error {
}
// FlushAsync provides a mock function with given fields: _a0
func (_m *Client) FlushAsync(_a0 context.Context) (*abciclient.ReqRes, error) {
func (_m *Client) FlushAsync(_a0 context.Context) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context) *abciclient.ReqRes); ok {
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context) *abcicli.ReqRes); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
@@ -393,15 +393,15 @@ func (_m *Client) FlushSync(_a0 context.Context) error {
}
// InfoAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) InfoAsync(_a0 context.Context, _a1 types.RequestInfo) (*abciclient.ReqRes, error) {
func (_m *Client) InfoAsync(_a0 context.Context, _a1 types.RequestInfo) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestInfo) *abciclient.ReqRes); ok {
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestInfo) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
@@ -439,15 +439,15 @@ func (_m *Client) InfoSync(_a0 context.Context, _a1 types.RequestInfo) (*types.R
}
// InitChainAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) InitChainAsync(_a0 context.Context, _a1 types.RequestInitChain) (*abciclient.ReqRes, error) {
func (_m *Client) InitChainAsync(_a0 context.Context, _a1 types.RequestInitChain) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestInitChain) *abciclient.ReqRes); ok {
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestInitChain) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
@@ -499,15 +499,15 @@ func (_m *Client) IsRunning() bool {
}
// ListSnapshotsAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) ListSnapshotsAsync(_a0 context.Context, _a1 types.RequestListSnapshots) (*abciclient.ReqRes, error) {
func (_m *Client) ListSnapshotsAsync(_a0 context.Context, _a1 types.RequestListSnapshots) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestListSnapshots) *abciclient.ReqRes); ok {
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestListSnapshots) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
@@ -545,15 +545,15 @@ func (_m *Client) ListSnapshotsSync(_a0 context.Context, _a1 types.RequestListSn
}
// LoadSnapshotChunkAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) LoadSnapshotChunkAsync(_a0 context.Context, _a1 types.RequestLoadSnapshotChunk) (*abciclient.ReqRes, error) {
func (_m *Client) LoadSnapshotChunkAsync(_a0 context.Context, _a1 types.RequestLoadSnapshotChunk) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestLoadSnapshotChunk) *abciclient.ReqRes); ok {
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestLoadSnapshotChunk) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
@@ -591,15 +591,15 @@ func (_m *Client) LoadSnapshotChunkSync(_a0 context.Context, _a1 types.RequestLo
}
// OfferSnapshotAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) OfferSnapshotAsync(_a0 context.Context, _a1 types.RequestOfferSnapshot) (*abciclient.ReqRes, error) {
func (_m *Client) OfferSnapshotAsync(_a0 context.Context, _a1 types.RequestOfferSnapshot) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestOfferSnapshot) *abciclient.ReqRes); ok {
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestOfferSnapshot) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
@@ -670,15 +670,15 @@ func (_m *Client) OnStop() {
}
// QueryAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) QueryAsync(_a0 context.Context, _a1 types.RequestQuery) (*abciclient.ReqRes, error) {
func (_m *Client) QueryAsync(_a0 context.Context, _a1 types.RequestQuery) (*abcicli.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestQuery) *abciclient.ReqRes); ok {
var r0 *abcicli.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestQuery) *abcicli.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
r0 = ret.Get(0).(*abcicli.ReqRes)
}
}
@@ -751,7 +751,7 @@ func (_m *Client) SetLogger(_a0 log.Logger) {
}
// SetResponseCallback provides a mock function with given fields: _a0
func (_m *Client) SetResponseCallback(_a0 abciclient.Callback) {
func (_m *Client) SetResponseCallback(_a0 abcicli.Callback) {
_m.Called(_a0)
}
@@ -801,18 +801,3 @@ func (_m *Client) String() string {
func (_m *Client) Wait() {
_m.Called()
}
type NewClientT interface {
mock.TestingT
Cleanup(func())
}
// NewClient creates a new instance of Client. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewClient(t NewClientT) *Client {
mock := &Client{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
View File
@@ -1,4 +1,4 @@
package abciclient
package abcicli
import (
"bufio"
@@ -13,6 +13,7 @@ import (
"github.com/tendermint/tendermint/abci/types"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/internal/libs/timer"
tmnet "github.com/tendermint/tendermint/libs/net"
"github.com/tendermint/tendermint/libs/service"
)
@@ -21,6 +22,8 @@ const (
// reqQueueSize is the max number of queued async requests.
// (memory: 256MB max assuming 1MB transactions)
reqQueueSize = 256
// Don't wait longer than...
flushThrottleMS = 20
)
type reqResWithContext struct {
@@ -37,9 +40,10 @@ type socketClient struct {
mustConnect bool
conn net.Conn
reqQueue chan *reqResWithContext
reqQueue chan *reqResWithContext
flushTimer *timer.ThrottleTimer
mtx tmsync.Mutex
mtx tmsync.RWMutex
err error
reqSent *list.List // list of requests sent, waiting for response
resCb func(*types.Request, *types.Response) // called on all requests, if set.
@@ -53,6 +57,7 @@ var _ Client = (*socketClient)(nil)
func NewSocketClient(addr string, mustConnect bool) Client {
cli := &socketClient{
reqQueue: make(chan *reqResWithContext, reqQueueSize),
flushTimer: timer.NewThrottleTimer("socketClient", flushThrottleMS),
mustConnect: mustConnect,
addr: addr,
@@ -97,13 +102,14 @@ func (cli *socketClient) OnStop() {
cli.conn.Close()
}
cli.drainQueue()
cli.flushQueue()
cli.flushTimer.Stop()
}
// Error returns an error if the client was stopped abruptly.
func (cli *socketClient) Error() error {
cli.mtx.Lock()
defer cli.mtx.Unlock()
cli.mtx.RLock()
defer cli.mtx.RUnlock()
return cli.err
}
@@ -113,32 +119,45 @@ func (cli *socketClient) Error() error {
// NOTE: callback may get internally generated flush responses.
func (cli *socketClient) SetResponseCallback(resCb Callback) {
cli.mtx.Lock()
defer cli.mtx.Unlock()
cli.resCb = resCb
cli.mtx.Unlock()
}
//----------------------------------------
func (cli *socketClient) sendRequestsRoutine(conn io.Writer) {
bw := bufio.NewWriter(conn)
w := bufio.NewWriter(conn)
for {
select {
case reqres := <-cli.reqQueue:
// cli.Logger.Debug("Sent request", "requestType", reflect.TypeOf(reqres.Request), "request", reqres.Request)
if reqres.C.Err() != nil {
cli.Logger.Debug("Request's context is done", "req", reqres.R, "err", reqres.C.Err())
continue
}
cli.willSendReq(reqres.R)
if err := types.WriteMessage(reqres.R.Request, bw); err != nil {
cli.willSendReq(reqres.R)
err := types.WriteMessage(reqres.R.Request, w)
if err != nil {
cli.stopForError(fmt.Errorf("write to buffer: %w", err))
return
}
if err := bw.Flush(); err != nil {
cli.stopForError(fmt.Errorf("flush buffer: %w", err))
return
}
// If it's a flush request, flush the current buffer.
if _, ok := reqres.R.Request.Value.(*types.Request_Flush); ok {
err = w.Flush()
if err != nil {
cli.stopForError(fmt.Errorf("flush buffer: %w", err))
return
}
}
case <-cli.flushTimer.Ch: // flush queue
select {
case cli.reqQueue <- &reqResWithContext{R: NewReqRes(types.ToRequestFlush()), C: context.Background()}:
default:
// Probably will fill the buffer, or retry later.
}
case <-cli.Quit():
return
}
@@ -279,7 +298,7 @@ func (cli *socketClient) ApplySnapshotChunkAsync(
//----------------------------------------
func (cli *socketClient) FlushSync(ctx context.Context) error {
reqRes, err := cli.queueRequest(ctx, types.ToRequestFlush())
reqRes, err := cli.queueRequest(ctx, types.ToRequestFlush(), true)
if err != nil {
return queueErr(err)
}
@@ -448,22 +467,37 @@ func (cli *socketClient) ApplySnapshotChunkSync(
//----------------------------------------
// queueRequest enqueues req onto the queue. The request can break early if the
// context is canceled. If the queue is full, this method blocks to allow
// the request to be placed onto the queue. This has the effect of creating an
// unbounded queue of goroutines waiting to write to this queue which is a bit
// antithetical to the purposes of a queue, however, undoing this behavior has
// dangerous upstream implications as a result of the usage of this behavior upstream.
// Remove at your peril.
// queueRequest enqueues req onto the queue. If the queue is full, it either
// returns an error (sync=false) or blocks (sync=true).
//
// When sync=true, ctx can be used to break early. When sync=false, ctx will be
// used later to determine if request should be dropped (if ctx.Err is
// non-nil).
//
// The caller is responsible for checking cli.Error.
func (cli *socketClient) queueRequest(ctx context.Context, req *types.Request) (*ReqRes, error) {
func (cli *socketClient) queueRequest(ctx context.Context, req *types.Request, sync bool) (*ReqRes, error) {
reqres := NewReqRes(req)
select {
case cli.reqQueue <- &reqResWithContext{R: reqres, C: ctx}:
case <-ctx.Done():
return nil, ctx.Err()
if sync {
select {
case cli.reqQueue <- &reqResWithContext{R: reqres, C: context.Background()}:
case <-ctx.Done():
return nil, ctx.Err()
}
} else {
select {
case cli.reqQueue <- &reqResWithContext{R: reqres, C: ctx}:
default:
return nil, errors.New("buffer is full")
}
}
// Maybe auto-flush, or unset auto-flush
switch req.Value.(type) {
case *types.Request_Flush:
cli.flushTimer.Unset()
default:
cli.flushTimer.Set()
}
return reqres, nil
@@ -474,7 +508,7 @@ func (cli *socketClient) queueRequestAsync(
req *types.Request,
) (*ReqRes, error) {
reqres, err := cli.queueRequest(ctx, req)
reqres, err := cli.queueRequest(ctx, req, false)
if err != nil {
return nil, queueErr(err)
}
@@ -487,7 +521,7 @@ func (cli *socketClient) queueRequestAndFlushSync(
req *types.Request,
) (*ReqRes, error) {
reqres, err := cli.queueRequest(ctx, req)
reqres, err := cli.queueRequest(ctx, req, true)
if err != nil {
return nil, queueErr(err)
}
@@ -503,9 +537,7 @@ func queueErr(e error) error {
return fmt.Errorf("can't queue req: %w", e)
}
// drainQueue marks as complete and discards all remaining pending requests
// from the queue.
func (cli *socketClient) drainQueue() {
func (cli *socketClient) flushQueue() {
cli.mtx.Lock()
defer cli.mtx.Unlock()
@@ -515,17 +547,14 @@ func (cli *socketClient) drainQueue() {
reqres.Done()
}
// Mark all queued messages as resolved.
//
// TODO(creachadair): We can't simply range the channel, because it is never
// closed, and the writer continues to add work.
// See https://github.com/tendermint/tendermint/issues/6996.
// mark all queued messages as resolved
LOOP:
for {
select {
case reqres := <-cli.reqQueue:
reqres.R.Done()
default:
return
break LOOP
}
}
}
View File
@@ -1,9 +1,8 @@
package abciclient_test
package abcicli_test
import (
"context"
"fmt"
"sync"
"testing"
"time"
@@ -12,7 +11,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
abciclient "github.com/tendermint/tendermint/abci/client"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/server"
"github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/libs/service"
@@ -101,7 +100,7 @@ func TestHangingSyncCalls(t *testing.T) {
}
func setupClientServer(t *testing.T, app types.Application) (
service.Service, abciclient.Client) {
service.Service, abcicli.Client) {
// some port between 20k and 30k
port := 20000 + rand.Int31()%10000
addr := fmt.Sprintf("localhost:%d", port)
@@ -111,7 +110,7 @@ func setupClientServer(t *testing.T, app types.Application) (
err = s.Start()
require.NoError(t, err)
c := abciclient.NewSocketClient(addr, true)
c := abcicli.NewSocketClient(addr, true)
err = c.Start()
require.NoError(t, err)
@@ -126,73 +125,3 @@ func (slowApp) BeginBlock(req types.RequestBeginBlock) types.ResponseBeginBlock
time.Sleep(200 * time.Millisecond)
return types.ResponseBeginBlock{}
}
// TestCallbackInvokedWhenSetLate ensures that the callback is invoked when
// set after the client completes the call into the app. Currently this
// test relies on the callback being allowed to be invoked twice if set multiple
// times, once when set early and once when set late.
func TestCallbackInvokedWhenSetLate(t *testing.T) {
wg := &sync.WaitGroup{}
wg.Add(1)
app := blockedABCIApplication{
wg: wg,
}
_, c := setupClientServer(t, app)
reqRes, err := c.CheckTxAsync(context.Background(), types.RequestCheckTx{})
require.NoError(t, err)
done := make(chan struct{})
cb := func(_ *types.Response) {
close(done)
}
reqRes.SetCallback(cb)
app.wg.Done()
<-done
var called bool
cb = func(_ *types.Response) {
called = true
}
reqRes.SetCallback(cb)
require.True(t, called)
}
type blockedABCIApplication struct {
wg *sync.WaitGroup
types.BaseApplication
}
func (b blockedABCIApplication) CheckTx(r types.RequestCheckTx) types.ResponseCheckTx {
b.wg.Wait()
return b.BaseApplication.CheckTx(r)
}
// TestCallbackInvokedWhenSetEarly ensures that the callback is invoked when
// set before the client completes the call into the app.
func TestCallbackInvokedWhenSetEarly(t *testing.T) {
wg := &sync.WaitGroup{}
wg.Add(1)
app := blockedABCIApplication{
wg: wg,
}
_, c := setupClientServer(t, app)
reqRes, err := c.CheckTxAsync(context.Background(), types.RequestCheckTx{})
require.NoError(t, err)
done := make(chan struct{})
cb := func(_ *types.Response) {
close(done)
}
reqRes.SetCallback(cb)
app.wg.Done()
called := func() bool {
select {
case <-done:
return true
default:
return false
}
}
require.Eventually(t, called, time.Second, time.Millisecond*25)
}
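A simplified, self-contained sketch of the behavior both tests rely on (not the actual ReqRes implementation from abci/client): a callback set after the response has arrived is invoked immediately, while one set early is invoked on completion.

package main

import (
	"fmt"
	"sync"
)

// result is a toy stand-in for ReqRes: it records whether the response has
// arrived and invokes the callback either on completion or immediately if the
// callback is set late.
type result struct {
	mu   sync.Mutex
	done bool
	cb   func()
}

func (r *result) SetCallback(cb func()) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.cb = cb
	if r.done {
		cb() // set late: the response already arrived, fire immediately
	}
}

func (r *result) complete() {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.done = true
	if r.cb != nil {
		r.cb() // set early: fire now that the response has arrived
	}
}

func main() {
	early := &result{}
	early.SetCallback(func() { fmt.Println("early callback fired") })
	early.complete()

	late := &result{}
	late.complete()
	late.SetCallback(func() { fmt.Println("late callback fired") })
}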

View File

@@ -15,7 +15,7 @@ import (
"github.com/tendermint/tendermint/libs/log"
tmos "github.com/tendermint/tendermint/libs/os"
abciclient "github.com/tendermint/tendermint/abci/client"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/code"
"github.com/tendermint/tendermint/abci/example/kvstore"
"github.com/tendermint/tendermint/abci/server"
@@ -27,7 +27,7 @@ import (
// client is a global variable so it can be reused by the console
var (
client abciclient.Client
client abcicli.Client
logger log.Logger
ctx = context.Background()
@@ -67,7 +67,7 @@ var RootCmd = &cobra.Command{
if client == nil {
var err error
client, err = abciclient.NewClient(flagAddress, flagAbci, false)
client, err = abcicli.NewClient(flagAddress, flagAbci, false)
if err != nil {
return err
}

View File

@@ -13,12 +13,11 @@ import (
"github.com/stretchr/testify/require"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
"github.com/tendermint/tendermint/libs/log"
tmnet "github.com/tendermint/tendermint/libs/net"
abciclient "github.com/tendermint/tendermint/abci/client"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/code"
"github.com/tendermint/tendermint/abci/example/kvstore"
abciserver "github.com/tendermint/tendermint/abci/server"
@@ -62,7 +61,7 @@ func testStream(t *testing.T, app types.Application) {
})
// Connect to the socket
client := abciclient.NewSocketClient(socket, false)
client := abcicli.NewSocketClient(socket, false)
client.SetLogger(log.TestingLogger().With("module", "abci-client"))
err = client.Start()
require.NoError(t, err)
@@ -130,7 +129,7 @@ func dialerFunc(ctx context.Context, addr string) (net.Conn, error) {
func testGRPCSync(t *testing.T, app types.ABCIApplicationServer) {
numDeliverTxs := 2000
socketFile := fmt.Sprintf("/tmp/test-%08x.sock", rand.Int31n(1<<30))
socketFile := fmt.Sprintf("test-%08x.sock", rand.Int31n(1<<30))
defer os.Remove(socketFile)
socket := fmt.Sprintf("unix://%v", socketFile)
@@ -148,10 +147,7 @@ func testGRPCSync(t *testing.T, app types.ABCIApplicationServer) {
})
// Connect to the socket
conn, err := grpc.Dial(socket,
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithContextDialer(dialerFunc),
)
conn, err := grpc.Dial(socket, grpc.WithInsecure(), grpc.WithContextDialer(dialerFunc))
if err != nil {
t.Fatalf("Error dialing GRPC server: %v", err.Error())
}

View File

@@ -12,7 +12,7 @@ import (
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
abciclient "github.com/tendermint/tendermint/abci/client"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/code"
abciserver "github.com/tendermint/tendermint/abci/server"
"github.com/tendermint/tendermint/abci/types"
@@ -165,7 +165,7 @@ func TestValUpdates(t *testing.T) {
makeApplyBlock(t, kvstore, 2, diff, tx1, tx2, tx3)
vals1 = append(vals[:nInit-2], vals[nInit+1])
vals1 = append(vals[:nInit-2], vals[nInit+1]) // nolint: gocritic
vals2 = kvstore.Validators()
valsEqual(t, vals1, vals2)
@@ -229,7 +229,7 @@ func valsEqual(t *testing.T, vals1, vals2 []types.ValidatorUpdate) {
}
}
func makeSocketClientServer(app types.Application, name string) (abciclient.Client, service.Service, error) {
func makeSocketClientServer(app types.Application, name string) (abcicli.Client, service.Service, error) {
// Start the listener
socket := fmt.Sprintf("unix://%s.sock", name)
logger := log.TestingLogger()
@@ -241,7 +241,7 @@ func makeSocketClientServer(app types.Application, name string) (abciclient.Clie
}
// Connect to the socket
client := abciclient.NewSocketClient(socket, false)
client := abcicli.NewSocketClient(socket, false)
client.SetLogger(logger.With("module", "abci-client"))
if err := client.Start(); err != nil {
if err = server.Stop(); err != nil {
@@ -253,7 +253,7 @@ func makeSocketClientServer(app types.Application, name string) (abciclient.Clie
return client, server, nil
}
func makeGRPCClientServer(app types.Application, name string) (abciclient.Client, service.Service, error) {
func makeGRPCClientServer(app types.Application, name string) (abcicli.Client, service.Service, error) {
// Start the listener
socket := fmt.Sprintf("unix://%s.sock", name)
logger := log.TestingLogger()
@@ -265,7 +265,7 @@ func makeGRPCClientServer(app types.Application, name string) (abciclient.Client
return nil, nil, err
}
client := abciclient.NewGRPCClient(socket, true)
client := abcicli.NewGRPCClient(socket, true)
client.SetLogger(logger.With("module", "abci-client"))
if err := client.Start(); err != nil {
if err := server.Stop(); err != nil {
@@ -296,7 +296,7 @@ func TestClientServer(t *testing.T) {
// set up grpc app
kvstore = NewApplication()
gclient, gserver, err := makeGRPCClientServer(kvstore, "/tmp/kvstore-grpc")
gclient, gserver, err := makeGRPCClientServer(kvstore, "kvstore-grpc")
require.NoError(t, err)
t.Cleanup(func() {
@@ -313,7 +313,7 @@ func TestClientServer(t *testing.T) {
runClientTests(t, gclient)
}
func runClientTests(t *testing.T, client abciclient.Client) {
func runClientTests(t *testing.T, client abcicli.Client) {
// run some tests....
key := testKey
value := key
@@ -325,7 +325,7 @@ func runClientTests(t *testing.T, client abciclient.Client) {
testClient(t, client, tx, key, value)
}
func testClient(t *testing.T, app abciclient.Client, tx []byte, key, value string) {
func testClient(t *testing.T, app abcicli.Client, tx []byte, key, value string) {
ar, err := app.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: tx})
require.NoError(t, err)
require.False(t, ar.IsErr(), ar)

View File

@@ -11,9 +11,9 @@ import (
"github.com/tendermint/tendermint/abci/example/code"
"github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/crypto/encoding"
cryptoenc "github.com/tendermint/tendermint/crypto/encoding"
"github.com/tendermint/tendermint/libs/log"
cryptoproto "github.com/tendermint/tendermint/proto/tendermint/crypto"
pc "github.com/tendermint/tendermint/proto/tendermint/crypto"
)
const (
@@ -30,7 +30,7 @@ type PersistentKVStoreApplication struct {
// validator set
ValUpdates []types.ValidatorUpdate
valAddrToPubKeyMap map[string]cryptoproto.PublicKey
valAddrToPubKeyMap map[string]pc.PublicKey
logger log.Logger
}
@@ -46,7 +46,7 @@ func NewPersistentKVStoreApplication(dbDir string) *PersistentKVStoreApplication
return &PersistentKVStoreApplication{
app: &Application{state: state},
valAddrToPubKeyMap: make(map[string]cryptoproto.PublicKey),
valAddrToPubKeyMap: make(map[string]pc.PublicKey),
logger: log.NewNopLogger(),
}
}
@@ -194,8 +194,8 @@ func (app *PersistentKVStoreApplication) Validators() (validators []types.Valida
return
}
func MakeValSetChangeTx(pubkey cryptoproto.PublicKey, power int64) []byte {
pk, err := encoding.PubKeyFromProto(pubkey)
func MakeValSetChangeTx(pubkey pc.PublicKey, power int64) []byte {
pk, err := cryptoenc.PubKeyFromProto(pubkey)
if err != nil {
panic(err)
}
@@ -243,7 +243,7 @@ func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) types.Respon
// add, update, or remove a validator
func (app *PersistentKVStoreApplication) updateValidator(v types.ValidatorUpdate) types.ResponseDeliverTx {
pubkey, err := encoding.PubKeyFromProto(v.PubKey)
pubkey, err := cryptoenc.PubKeyFromProto(v.PubKey)
if err != nil {
panic(fmt.Errorf("can't decode public key: %w", err))
}

View File

@@ -240,15 +240,22 @@ func (s *SocketServer) handleRequest(req *types.Request, responses chan<- *types
// Pull responses from 'responses' and write them to conn.
func (s *SocketServer) handleResponses(closeConn chan error, conn io.Writer, responses <-chan *types.Response) {
bw := bufio.NewWriter(conn)
for res := range responses {
if err := types.WriteMessage(res, bw); err != nil {
var count int
var bufWriter = bufio.NewWriter(conn)
for {
var res = <-responses
err := types.WriteMessage(res, bufWriter)
if err != nil {
closeConn <- fmt.Errorf("error writing message: %w", err)
return
}
if err := bw.Flush(); err != nil {
closeConn <- fmt.Errorf("error flushing write buffer: %w", err)
return
if _, ok := res.Value.(*types.Response_Flush); ok {
err = bufWriter.Flush()
if err != nil {
closeConn <- fmt.Errorf("error flushing write buffer: %w", err)
return
}
}
count++
}
}
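The rewritten loop batches responses through a bufio.Writer and only flushes when an explicit Flush response comes through, instead of flushing after every message. The idea in isolation (illustrative sketch with a generic message type standing in for *types.Response):

package main

import (
	"bufio"
	"bytes"
	"fmt"
)

type msg struct {
	payload []byte
	isFlush bool // plays the role of *types.Response_Flush
}

// writeBatched buffers payloads and only pushes them to the underlying writer
// when a flush marker is seen, reducing write syscalls for bursts of small
// messages.
func writeBatched(out *bytes.Buffer, msgs []msg) error {
	bw := bufio.NewWriter(out)
	for _, m := range msgs {
		if _, err := bw.Write(m.payload); err != nil {
			return err
		}
		if m.isFlush {
			if err := bw.Flush(); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	var out bytes.Buffer
	_ = writeBatched(&out, []msg{
		{payload: []byte("a")},
		{payload: []byte("b")},
		{isFlush: true},
	})
	fmt.Printf("%q\n", out.String()) // "ab": written to out only after the flush marker
}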

View File

@@ -5,7 +5,7 @@ import (
"github.com/stretchr/testify/assert"
abciclientent "github.com/tendermint/tendermint/abci/client"
abciclient "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/kvstore"
abciserver "github.com/tendermint/tendermint/abci/server"
)
@@ -20,7 +20,7 @@ func TestClientServerNoAddrPrefix(t *testing.T) {
err = server.Start()
assert.NoError(t, err, "expected no error on server.Start")
client, err := abciclientent.NewClient(addr, transport, true)
client, err := abciclient.NewClient(addr, transport, true)
assert.NoError(t, err, "expected no error on NewClient")
err = client.Start()
assert.NoError(t, err, "expected no error on client.Start")

View File

@@ -7,14 +7,14 @@ import (
"fmt"
mrand "math/rand"
abciclient "github.com/tendermint/tendermint/abci/client"
abcicli "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/types"
tmrand "github.com/tendermint/tendermint/libs/rand"
)
var ctx = context.Background()
func InitChain(client abciclient.Client) error {
func InitChain(client abcicli.Client) error {
total := 10
vals := make([]types.ValidatorUpdate, total)
for i := 0; i < total; i++ {
@@ -34,7 +34,7 @@ func InitChain(client abciclient.Client) error {
return nil
}
func Commit(client abciclient.Client, hashExp []byte) error {
func Commit(client abcicli.Client, hashExp []byte) error {
res, err := client.CommitSync(ctx)
data := res.Data
if err != nil {
@@ -51,7 +51,7 @@ func Commit(client abciclient.Client, hashExp []byte) error {
return nil
}
func DeliverTx(client abciclient.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
func DeliverTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
res, _ := client.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: txBytes})
code, data, log := res.Code, res.Data, res.Log
if code != codeExp {
@@ -70,7 +70,7 @@ func DeliverTx(client abciclient.Client, txBytes []byte, codeExp uint32, dataExp
return nil
}
func CheckTx(client abciclient.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
func CheckTx(client abcicli.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
res, _ := client.CheckTxSync(ctx, types.RequestCheckTx{Tx: txBytes})
code, data, log := res.Code, res.Data, res.Log
if code != codeExp {

View File

@@ -1 +0,0 @@
package types

View File

@@ -4,7 +4,7 @@ import (
fmt "fmt"
"github.com/tendermint/tendermint/crypto/ed25519"
"github.com/tendermint/tendermint/crypto/encoding"
cryptoenc "github.com/tendermint/tendermint/crypto/encoding"
"github.com/tendermint/tendermint/crypto/secp256k1"
"github.com/tendermint/tendermint/crypto/sr25519"
)
@@ -12,7 +12,7 @@ import (
func Ed25519ValidatorUpdate(pk []byte, power int64) ValidatorUpdate {
pke := ed25519.PubKey(pk)
pkp, err := encoding.PubKeyToProto(pke)
pkp, err := cryptoenc.PubKeyToProto(pke)
if err != nil {
panic(err)
}
@@ -29,7 +29,7 @@ func UpdateValidator(pk []byte, power int64, keyType string) ValidatorUpdate {
return Ed25519ValidatorUpdate(pk, power)
case secp256k1.KeyType:
pke := secp256k1.PubKey(pk)
pkp, err := encoding.PubKeyToProto(pke)
pkp, err := cryptoenc.PubKeyToProto(pke)
if err != nil {
panic(err)
}
@@ -39,7 +39,7 @@ func UpdateValidator(pk []byte, power int64, keyType string) ValidatorUpdate {
}
case sr25519.KeyType:
pke := sr25519.PubKey(pk)
pkp, err := encoding.PubKeyToProto(pke)
pkp, err := cryptoenc.PubKeyToProto(pke)
if err != nil {
panic(err)
}

View File

@@ -5,9 +5,6 @@ import (
"encoding/json"
"github.com/gogo/protobuf/jsonpb"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/encoding"
tmjson "github.com/tendermint/tendermint/libs/json"
)
const (
@@ -105,48 +102,6 @@ func (r *EventAttribute) UnmarshalJSON(b []byte) error {
return jsonpbUnmarshaller.Unmarshal(reader, r)
}
// validatorUpdateJSON is the JSON encoding of a validator update.
//
// It handles translation of public keys from the protobuf representation to
// the legacy Amino-compatible format expected by RPC clients.
type validatorUpdateJSON struct {
PubKey json.RawMessage `json:"pub_key,omitempty"`
Power int64 `json:"power,string"`
}
func (v *ValidatorUpdate) MarshalJSON() ([]byte, error) {
key, err := encoding.PubKeyFromProto(v.PubKey)
if err != nil {
return nil, err
}
jkey, err := tmjson.Marshal(key)
if err != nil {
return nil, err
}
return json.Marshal(validatorUpdateJSON{
PubKey: jkey,
Power: v.GetPower(),
})
}
func (v *ValidatorUpdate) UnmarshalJSON(data []byte) error {
var vu validatorUpdateJSON
if err := json.Unmarshal(data, &vu); err != nil {
return err
}
var key crypto.PubKey
if err := tmjson.Unmarshal(vu.PubKey, &key); err != nil {
return err
}
pkey, err := encoding.PubKeyToProto(key)
if err != nil {
return err
}
v.PubKey = pkey
v.Power = vu.Power
return nil
}
// Some compile time assertions to ensure we don't
// have accidental runtime surprises later on.

View File

@@ -1,69 +0,0 @@
package commands
import (
"errors"
"path/filepath"
"sync"
"github.com/spf13/cobra"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/util"
"github.com/tendermint/tendermint/libs/log"
)
func MakeCompactDBCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "experimental-compact-goleveldb",
Short: "force compacts the tendermint storage engine (only GoLevelDB supported)",
Long: `
This is a temporary utility command that performs a force compaction on the state
and blockstores to reduce disk space for a pruning node. This should only be run
once the node has stopped. This command will likely be omitted in the future after
the planned refactor to the storage engine.
Currently, only GoLevelDB is supported.
`,
RunE: func(cmd *cobra.Command, args []string) error {
if config.DBBackend != "goleveldb" {
return errors.New("compaction is currently only supported with goleveldb")
}
compactGoLevelDBs(config.RootDir, logger)
return nil
},
}
return cmd
}
func compactGoLevelDBs(rootDir string, logger log.Logger) {
dbNames := []string{"state", "blockstore"}
o := &opt.Options{
DisableSeeksCompaction: true,
}
wg := sync.WaitGroup{}
for _, dbName := range dbNames {
dbName := dbName
wg.Add(1)
go func() {
defer wg.Done()
dbPath := filepath.Join(rootDir, "data", dbName+".db")
store, err := leveldb.OpenFile(dbPath, o)
if err != nil {
logger.Error("failed to initialize tendermint db", "path", dbPath, "err", err)
return
}
defer store.Close()
logger.Info("starting compaction...", "db", dbPath)
err = store.CompactRange(util.Range{Start: nil, Limit: nil})
if err != nil {
logger.Error("failed to compact tendermint db", "path", dbPath, "err", err)
}
}()
}
wg.Wait()
}
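As a standalone sketch of what the helper above does for a single store (the database path is an assumed example; the command derives real paths from the node's root directory), GoLevelDB can be force-compacted over its full key range:

package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/opt"
	"github.com/syndtr/goleveldb/leveldb/util"
)

func main() {
	// DisableSeeksCompaction avoids seek-triggered compactions competing with
	// the explicit full-range compaction below.
	db, err := leveldb.OpenFile("data/blockstore.db", &opt.Options{
		DisableSeeksCompaction: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// A nil start and limit compacts every key in the store.
	if err := db.CompactRange(util.Range{Start: nil, Limit: nil}); err != nil {
		log.Fatal(err)
	}
}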

View File

@@ -11,7 +11,7 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/tendermint/tendermint/config"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/cli"
rpchttp "github.com/tendermint/tendermint/rpc/client/http"
)
@@ -65,9 +65,9 @@ func dumpCmdHandler(_ *cobra.Command, args []string) error {
}
home := viper.GetString(cli.HomeFlag)
conf := config.DefaultConfig()
conf := cfg.DefaultConfig()
conf = conf.SetRoot(home)
config.EnsureRoot(conf.RootDir)
cfg.EnsureRoot(conf.RootDir)
dumpDebugData(outDir, conf, rpc)
@@ -79,7 +79,7 @@ func dumpCmdHandler(_ *cobra.Command, args []string) error {
return nil
}
func dumpDebugData(outDir string, conf *config.Config, rpc *rpchttp.HTTP) {
func dumpDebugData(outDir string, conf *cfg.Config, rpc *rpchttp.HTTP) {
start := time.Now().UTC()
tmpDir, err := ioutil.TempDir(outDir, "tendermint_debug_tmp")

View File

@@ -14,7 +14,7 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/tendermint/tendermint/config"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/cli"
rpchttp "github.com/tendermint/tendermint/rpc/client/http"
)
@@ -50,9 +50,9 @@ func killCmdHandler(cmd *cobra.Command, args []string) error {
}
home := viper.GetString(cli.HomeFlag)
conf := config.DefaultConfig()
conf := cfg.DefaultConfig()
conf = conf.SetRoot(home)
config.EnsureRoot(conf.RootDir)
cfg.EnsureRoot(conf.RootDir)
// Create a temporary directory which will contain all the state dumps and
// relevant files and directories that will be compressed into a file.

View File

@@ -9,7 +9,7 @@ import (
"path"
"path/filepath"
"github.com/tendermint/tendermint/config"
cfg "github.com/tendermint/tendermint/config"
rpchttp "github.com/tendermint/tendermint/rpc/client/http"
)
@@ -48,7 +48,7 @@ func dumpConsensusState(rpc *rpchttp.HTTP, dir, filename string) error {
// copyWAL copies the Tendermint node's WAL file. It returns an error if the
// WAL file cannot be read or copied.
func copyWAL(conf *config.Config, dir string) error {
func copyWAL(conf *cfg.Config, dir string) error {
walPath := conf.Consensus.WalFile()
walFile := filepath.Base(walPath)

View File

@@ -121,9 +121,7 @@ func initFilesWithConfig(config *cfg.Config) error {
}
// write config file
if err := cfg.WriteConfigFile(config.RootDir, config); err != nil {
return err
}
cfg.WriteConfigFile(config.RootDir, config)
logger.Info("Generated config", "mode", config.Mode)
return nil

View File

@@ -8,7 +8,12 @@ import (
"github.com/spf13/cobra"
"github.com/tendermint/tendermint/internal/inspect"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/inspect"
"github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/state/indexer/sink"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
)
// InspectCmd is the command for starting an inspect server.
@@ -50,10 +55,29 @@ func runInspect(cmd *cobra.Command, args []string) error {
cancel()
}()
ins, err := inspect.NewFromConfig(logger, config)
blockStoreDB, err := cfg.DefaultDBProvider(&cfg.DBContext{ID: "blockstore", Config: config})
if err != nil {
return err
}
blockStore := store.NewBlockStore(blockStoreDB)
stateDB, err := cfg.DefaultDBProvider(&cfg.DBContext{ID: "state", Config: config})
if err != nil {
if err := blockStoreDB.Close(); err != nil {
logger.Error("error closing block store db", "error", err)
}
return err
}
genDoc, err := types.GenesisDocFromFile(config.GenesisFile())
if err != nil {
return err
}
sinks, err := sink.EventSinksFromConfig(config, cfg.DefaultDBProvider, genDoc.ChainID)
if err != nil {
return err
}
stateStore := state.NewStore(stateDB)
ins := inspect.New(config.RPC, blockStore, stateStore, sinks, logger)
logger.Info("starting inspect server")
if err := ins.Run(ctx); err != nil {

View File

@@ -5,11 +5,8 @@ import (
"fmt"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/scripts/keymigrate"
"github.com/tendermint/tendermint/scripts/scmigrate"
)
func MakeKeyMigrateCommand() *cobra.Command {
@@ -17,7 +14,46 @@ func MakeKeyMigrateCommand() *cobra.Command {
Use: "key-migrate",
Short: "Run Database key migration",
RunE: func(cmd *cobra.Command, args []string) error {
return RunDatabaseMigration(cmd.Context(), logger, config)
ctx, cancel := context.WithCancel(cmd.Context())
defer cancel()
contexts := []string{
// this is ordered to put the
// (presumably) biggest/most important
// subsets first.
"blockstore",
"state",
"peerstore",
"tx_index",
"evidence",
"light",
}
for idx, dbctx := range contexts {
logger.Info("beginning a key migration",
"dbctx", dbctx,
"num", idx+1,
"total", len(contexts),
)
db, err := cfg.DefaultDBProvider(&cfg.DBContext{
ID: dbctx,
Config: config,
})
if err != nil {
return fmt.Errorf("constructing database handle: %w", err)
}
if err = keymigrate.Migrate(ctx, db); err != nil {
return fmt.Errorf("running migration for context %q: %w",
dbctx, err)
}
}
logger.Info("completed database migration successfully")
return nil
},
}
@@ -26,51 +62,3 @@ func MakeKeyMigrateCommand() *cobra.Command {
return cmd
}
func RunDatabaseMigration(ctx context.Context, logger log.Logger, conf *cfg.Config) error {
contexts := []string{
// this is ordered to put
// the more ephemeral tables first to
// reduce the possibility of the
// ephemeral data overwriting later data
"tx_index",
"peerstore",
"light",
"blockstore",
"state",
"evidence",
}
for idx, dbctx := range contexts {
logger.Info("beginning a key migration",
"dbctx", dbctx,
"num", idx+1,
"total", len(contexts),
)
db, err := cfg.DefaultDBProvider(&cfg.DBContext{
ID: dbctx,
Config: conf,
})
if err != nil {
return fmt.Errorf("constructing database handle: %w", err)
}
if err = keymigrate.Migrate(ctx, db); err != nil {
return fmt.Errorf("running migration for context %q: %w",
dbctx, err)
}
if dbctx == "blockstore" {
if err := scmigrate.Migrate(ctx, db); err != nil {
return fmt.Errorf("running seen commit migration: %w", err)
}
}
}
logger.Info("completed database migration successfully")
return nil
}

View File

@@ -11,6 +11,7 @@ import (
"time"
"github.com/spf13/cobra"
dbm "github.com/tendermint/tm-db"
"github.com/tendermint/tendermint/libs/log"

View File

@@ -3,22 +3,20 @@ package commands
import (
"errors"
"fmt"
"path/filepath"
"strings"
"github.com/spf13/cobra"
dbm "github.com/tendermint/tm-db"
tmdb "github.com/tendermint/tm-db"
abcitypes "github.com/tendermint/tendermint/abci/types"
tmcfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/libs/progressbar"
"github.com/tendermint/tendermint/internal/state"
"github.com/tendermint/tendermint/internal/state/indexer"
"github.com/tendermint/tendermint/internal/state/indexer/sink/kv"
"github.com/tendermint/tendermint/internal/state/indexer/sink/psql"
"github.com/tendermint/tendermint/internal/store"
"github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/rpc/coretypes"
ctypes "github.com/tendermint/tendermint/rpc/core/types"
"github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/state/indexer"
"github.com/tendermint/tendermint/state/indexer/sink/kv"
"github.com/tendermint/tendermint/state/indexer/sink/psql"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
)
@@ -31,12 +29,11 @@ var ReIndexEventCmd = &cobra.Command{
Use: "reindex-event",
Short: "reindex events to the event store backends",
Long: `
reindex-event is an offline tooling to re-index block and tx events to the eventsinks,
you can run this command when the event store backend dropped/disconnected or you want to
replace the backend. The default start-height is 0, meaning the tooling will start
reindex from the base block height(inclusive); and the default end-height is 0, meaning
the tooling will reindex until the latest block height(inclusive). User can omit
either or both arguments.
reindex-event is an offline tooling to re-index block and tx events to the eventsinks,
you can run this command when the event store backend dropped/disconnected or you want to replace the backend.
The default start-height is 0, meaning the tooling will start reindex from the base block height(inclusive); and the
default end-height is 0, meaning the tooling will reindex until the latest block height (inclusive). User can omit
either or both arguments.
`,
Example: `
tendermint reindex-event
@@ -132,25 +129,17 @@ func loadEventSinks(cfg *tmcfg.Config) ([]indexer.EventSink, error) {
}
func loadStateAndBlockStore(cfg *tmcfg.Config) (*store.BlockStore, state.Store, error) {
dbType := dbm.BackendType(cfg.DBBackend)
if !os.FileExists(filepath.Join(cfg.DBDir(), "blockstore.db")) {
return nil, nil, fmt.Errorf("no blockstore found in %v", cfg.DBDir())
}
dbType := tmdb.BackendType(cfg.DBBackend)
// Get BlockStore
blockStoreDB, err := dbm.NewDB("blockstore", dbType, cfg.DBDir())
blockStoreDB, err := tmdb.NewDB("blockstore", dbType, cfg.DBDir())
if err != nil {
return nil, nil, err
}
blockStore := store.NewBlockStore(blockStoreDB)
if !os.FileExists(filepath.Join(cfg.DBDir(), "state.db")) {
return nil, nil, fmt.Errorf("no blockstore found in %v", cfg.DBDir())
}
// Get StateStore
stateDB, err := dbm.NewDB("state", dbType, cfg.DBDir())
stateDB, err := tmdb.NewDB("state", dbType, cfg.DBDir())
if err != nil {
return nil, nil, err
}
@@ -232,15 +221,14 @@ func checkValidHeight(bs state.BlockStore) error {
}
if startHeight < base {
return fmt.Errorf("%s (requested start height: %d, base height: %d)",
coretypes.ErrHeightNotAvailable, startHeight, base)
return fmt.Errorf("%s (requested start height: %d, base height: %d)", ctypes.ErrHeightNotAvailable, startHeight, base)
}
height := bs.Height()
if startHeight > height {
return fmt.Errorf(
"%s (requested start height: %d, store height: %d)", coretypes.ErrHeightNotAvailable, startHeight, height)
"%s (requested start height: %d, store height: %d)", ctypes.ErrHeightNotAvailable, startHeight, height)
}
if endHeight == 0 || endHeight > height {
@@ -250,13 +238,13 @@ func checkValidHeight(bs state.BlockStore) error {
if endHeight < base {
return fmt.Errorf(
"%s (requested end height: %d, base height: %d)", coretypes.ErrHeightNotAvailable, endHeight, base)
"%s (requested end height: %d, base height: %d)", ctypes.ErrHeightNotAvailable, endHeight, base)
}
if endHeight < startHeight {
return fmt.Errorf(
"%s (requested the end height: %d is less than the start height: %d)",
coretypes.ErrInvalidRequest, startHeight, endHeight)
ctypes.ErrInvalidRequest, startHeight, endHeight)
}
return nil

View File

@@ -11,13 +11,10 @@ import (
abcitypes "github.com/tendermint/tendermint/abci/types"
tmcfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/state/indexer"
"github.com/tendermint/tendermint/internal/state/mocks"
prototmstate "github.com/tendermint/tendermint/proto/tendermint/state"
"github.com/tendermint/tendermint/state/indexer"
"github.com/tendermint/tendermint/state/mocks"
"github.com/tendermint/tendermint/types"
dbm "github.com/tendermint/tm-db"
_ "github.com/lib/pq" // for the psql sink
)
const (
@@ -110,29 +107,12 @@ func TestLoadEventSink(t *testing.T) {
}
func TestLoadBlockStore(t *testing.T) {
testCfg, err := tmcfg.ResetTestRoot(t.Name())
require.NoError(t, err)
testCfg.DBBackend = "goleveldb"
_, _, err = loadStateAndBlockStore(testCfg)
// we should return an error because the state store and block store
// don't yet exist
require.Error(t, err)
dbType := dbm.BackendType(testCfg.DBBackend)
bsdb, err := dbm.NewDB("blockstore", dbType, testCfg.DBDir())
require.NoError(t, err)
bsdb.Close()
ssdb, err := dbm.NewDB("state", dbType, testCfg.DBDir())
require.NoError(t, err)
ssdb.Close()
bs, ss, err := loadStateAndBlockStore(testCfg)
bs, ss, err := loadStateAndBlockStore(tmcfg.TestConfig())
require.NoError(t, err)
require.NotNil(t, bs)
require.NotNil(t, ss)
}
}
func TestReIndexEvent(t *testing.T) {
mockBlockStore := &mocks.BlockStore{}
mockStateStore := &mocks.Store{}

View File

@@ -1,190 +0,0 @@
package commands
import (
"os"
"path/filepath"
"github.com/spf13/cobra"
"github.com/tendermint/tendermint/libs/log"
tmos "github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/types"
)
// ResetAllCmd removes the database of this Tendermint core
// instance.
var ResetAllCmd = &cobra.Command{
Use: "unsafe-reset-all",
Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state",
RunE: resetAllCmd,
}
var keepAddrBook bool
// ResetStateCmd removes the database of the specified Tendermint core instance.
var ResetStateCmd = &cobra.Command{
Use: "reset-state",
Short: "Remove all the data and WAL",
RunE: func(cmd *cobra.Command, args []string) error {
config, err := ParseConfig()
if err != nil {
return err
}
return resetState(config.DBDir(), logger, keyType)
},
}
func init() {
ResetAllCmd.Flags().BoolVar(&keepAddrBook, "keep-addr-book", false, "keep the address book intact")
ResetPrivValidatorCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")
}
// ResetPrivValidatorCmd resets the private validator files.
var ResetPrivValidatorCmd = &cobra.Command{
Use: "unsafe-reset-priv-validator",
Short: "(unsafe) Reset this node's validator to genesis state",
RunE: resetPrivValidator,
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetAllCmd(cmd *cobra.Command, args []string) error {
config, err := ParseConfig()
if err != nil {
return err
}
return resetAll(
config.DBDir(),
config.P2P.AddrBookFile(),
config.PrivValidator.KeyFile(),
config.PrivValidator.StateFile(),
logger,
)
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetPrivValidator(cmd *cobra.Command, args []string) error {
config, err := ParseConfig()
if err != nil {
return err
}
return resetFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile(), logger, keyType)
}
// resetAllCmd removes address book files plus all data, and resets the privValidator data.
func resetAll(dbDir, addrBookFile, privValKeyFile, privValStateFile string, logger log.Logger) error {
if keepAddrBook {
logger.Info("The address book remains intact")
} else {
removeAddrBook(addrBookFile, logger)
}
if err := os.RemoveAll(dbDir); err == nil {
logger.Info("Removed all blockchain history", "dir", dbDir)
} else {
logger.Error("Error removing all blockchain history", "dir", dbDir, "err", err)
}
if err := tmos.EnsureDir(dbDir, 0700); err != nil {
logger.Error("unable to recreate dbDir", "err", err)
}
// recreate the dbDir since the privVal state needs to live there
return resetFilePV(privValKeyFile, privValStateFile, logger, keyType)
}
// resetState removes address book files plus all databases.
func resetState(dbDir string, logger log.Logger, keyType string) error {
blockdb := filepath.Join(dbDir, "blockstore.db")
state := filepath.Join(dbDir, "state.db")
wal := filepath.Join(dbDir, "cs.wal")
evidence := filepath.Join(dbDir, "evidence.db")
txIndex := filepath.Join(dbDir, "tx_index.db")
peerstore := filepath.Join(dbDir, "peerstore.db")
if tmos.FileExists(blockdb) {
if err := os.RemoveAll(blockdb); err == nil {
logger.Info("Removed all blockstore.db", "dir", blockdb)
} else {
logger.Error("error removing all blockstore.db", "dir", blockdb, "err", err)
}
}
if tmos.FileExists(state) {
if err := os.RemoveAll(state); err == nil {
logger.Info("Removed all state.db", "dir", state)
} else {
logger.Error("error removing all state.db", "dir", state, "err", err)
}
}
if tmos.FileExists(wal) {
if err := os.RemoveAll(wal); err == nil {
logger.Info("Removed all cs.wal", "dir", wal)
} else {
logger.Error("error removing all cs.wal", "dir", wal, "err", err)
}
}
if tmos.FileExists(evidence) {
if err := os.RemoveAll(evidence); err == nil {
logger.Info("Removed all evidence.db", "dir", evidence)
} else {
logger.Error("error removing all evidence.db", "dir", evidence, "err", err)
}
}
if tmos.FileExists(txIndex) {
if err := os.RemoveAll(txIndex); err == nil {
logger.Info("Removed tx_index.db", "dir", txIndex)
} else {
logger.Error("error removing tx_index.db", "dir", txIndex, "err", err)
}
}
if tmos.FileExists(peerstore) {
if err := os.RemoveAll(peerstore); err == nil {
logger.Info("Removed peerstore.db", "dir", peerstore)
} else {
logger.Error("error removing peerstore.db", "dir", peerstore, "err", err)
}
}
if err := tmos.EnsureDir(dbDir, 0700); err != nil {
logger.Error("unable to recreate dbDir", "err", err)
}
return nil
}
func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger, keyType string) error {
if _, err := os.Stat(privValKeyFile); err == nil {
pv, err := privval.LoadFilePVEmptyState(privValKeyFile, privValStateFile)
if err != nil {
return err
}
pv.Reset()
logger.Info("Reset private validator file to genesis state", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
} else {
pv, err := privval.GenFilePV(privValKeyFile, privValStateFile, keyType)
if err != nil {
return err
}
pv.Save()
logger.Info("Generated private validator file", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
}
return nil
}
func removeAddrBook(addrBookFile string, logger log.Logger) {
if err := os.Remove(addrBookFile); err == nil {
logger.Info("Removed existing address book", "file", addrBookFile)
} else if !os.IsNotExist(err) {
logger.Info("Error removing address book", "file", addrBookFile, "err", err)
}
}

View File

@@ -0,0 +1,97 @@
package commands
import (
"os"
"github.com/spf13/cobra"
"github.com/tendermint/tendermint/libs/log"
tmos "github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/types"
)
// ResetAllCmd removes the database of this Tendermint core
// instance.
var ResetAllCmd = &cobra.Command{
Use: "unsafe-reset-all",
Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state",
RunE: resetAll,
}
var keepAddrBook bool
func init() {
ResetAllCmd.Flags().BoolVar(&keepAddrBook, "keep-addr-book", false, "keep the address book intact")
ResetPrivValidatorCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")
}
// ResetPrivValidatorCmd resets the private validator files.
var ResetPrivValidatorCmd = &cobra.Command{
Use: "unsafe-reset-priv-validator",
Short: "(unsafe) Reset this node's validator to genesis state",
RunE: resetPrivValidator,
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetAll(cmd *cobra.Command, args []string) error {
return ResetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidator.KeyFile(),
config.PrivValidator.StateFile(), logger)
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetPrivValidator(cmd *cobra.Command, args []string) error {
return resetFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile(), logger)
}
// ResetAll removes address book files plus all data, and resets the privValidator data.
// Exported so other CLI tools can use it.
func ResetAll(dbDir, addrBookFile, privValKeyFile, privValStateFile string, logger log.Logger) error {
if keepAddrBook {
logger.Info("The address book remains intact")
} else {
removeAddrBook(addrBookFile, logger)
}
if err := os.RemoveAll(dbDir); err == nil {
logger.Info("Removed all blockchain history", "dir", dbDir)
} else {
logger.Error("Error removing all blockchain history", "dir", dbDir, "err", err)
}
// recreate the dbDir since the privVal state needs to live there
if err := tmos.EnsureDir(dbDir, 0700); err != nil {
logger.Error("unable to recreate dbDir", "err", err)
}
return resetFilePV(privValKeyFile, privValStateFile, logger)
}
func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger) error {
if _, err := os.Stat(privValKeyFile); err == nil {
pv, err := privval.LoadFilePVEmptyState(privValKeyFile, privValStateFile)
if err != nil {
return err
}
pv.Reset()
logger.Info("Reset private validator file to genesis state", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
} else {
pv, err := privval.GenFilePV(privValKeyFile, privValStateFile, keyType)
if err != nil {
return err
}
pv.Save()
logger.Info("Generated private validator file", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
}
return nil
}
func removeAddrBook(addrBookFile string, logger log.Logger) {
if err := os.Remove(addrBookFile); err == nil {
logger.Info("Removed existing address book", "file", addrBookFile)
} else if !os.IsNotExist(err) {
logger.Info("Error removing address book", "file", addrBookFile, "err", err)
}
}

View File

@@ -1,57 +0,0 @@
package commands
import (
"path/filepath"
"testing"
"github.com/stretchr/testify/require"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/privval"
)
func Test_ResetAll(t *testing.T) {
config := cfg.TestConfig()
dir := t.TempDir()
config.SetRoot(dir)
cfg.EnsureRoot(dir)
require.NoError(t, initFilesWithConfig(config))
pv, err := privval.LoadFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile())
require.NoError(t, err)
pv.LastSignState.Height = 10
pv.Save()
require.NoError(t, resetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidator.KeyFile(),
config.PrivValidator.StateFile(), logger))
require.DirExists(t, config.DBDir())
require.NoFileExists(t, filepath.Join(config.DBDir(), "block.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "state.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "evidence.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "tx_index.db"))
require.FileExists(t, config.PrivValidator.StateFile())
pv, err = privval.LoadFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile())
require.NoError(t, err)
require.Equal(t, int64(0), pv.LastSignState.Height)
}
func Test_ResetState(t *testing.T) {
config := cfg.TestConfig()
dir := t.TempDir()
config.SetRoot(dir)
cfg.EnsureRoot(dir)
require.NoError(t, initFilesWithConfig(config))
pv, err := privval.LoadFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile())
require.NoError(t, err)
pv.LastSignState.Height = 10
pv.Save()
require.NoError(t, resetState(config.DBDir(), logger, keyType))
require.DirExists(t, config.DBDir())
require.NoFileExists(t, filepath.Join(config.DBDir(), "block.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "state.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "evidence.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "tx_index.db"))
require.FileExists(t, config.PrivValidator.StateFile())
pv, err = privval.LoadFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile())
require.NoError(t, err)
// private validator state should still be intact.
require.Equal(t, int64(10), pv.LastSignState.Height)
}

View File

@@ -1,50 +0,0 @@
package commands
import (
"fmt"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/state"
)
var RollbackStateCmd = &cobra.Command{
Use: "rollback",
Short: "rollback tendermint state by one height",
Long: `
A state rollback is performed to recover from an incorrect application state transition,
when Tendermint has persisted an incorrect app hash and is thus unable to make
progress. Rollback overwrites a state at height n with the state at height n - 1.
The application should also roll back to height n - 1. No blocks are removed, so upon
restarting Tendermint the transactions in block n will be re-executed against the
application.
`,
RunE: func(cmd *cobra.Command, args []string) error {
height, hash, err := RollbackState(config)
if err != nil {
return fmt.Errorf("failed to rollback state: %w", err)
}
fmt.Printf("Rolled back state to height %d and hash %X", height, hash)
return nil
},
}
// RollbackState takes the state at the current height n and overwrites it with the state
// at height n - 1. Note state here refers to tendermint state not application state.
// Returns the latest state height and app hash alongside an error if there was one.
func RollbackState(config *cfg.Config) (int64, []byte, error) {
// use the parsed config to load the block and state store
blockStore, stateStore, err := loadStateAndBlockStore(config)
if err != nil {
return -1, nil, err
}
defer func() {
_ = blockStore.Close()
_ = stateStore.Close()
}()
// rollback the last state
return state.Rollback(blockStore, stateStore)
}

View File

@@ -65,7 +65,7 @@ func AddNodeFlags(cmd *cobra.Command) {
"proxy-app",
config.ProxyApp,
"proxy app address, or one of: 'kvstore',"+
" 'persistent_kvstore', 'e2e' or 'noop' for local testing.")
" 'persistent_kvstore' or 'noop' for local testing.")
cmd.Flags().String("abci", config.ABCI, "specify abci transport (socket | grpc)")
// rpc flags
@@ -82,7 +82,7 @@ func AddNodeFlags(cmd *cobra.Command) {
"p2p.laddr",
config.P2P.ListenAddress,
"node listen address. (0.0.0.0:0 means any interface, any port)")
cmd.Flags().String("p2p.seeds", config.P2P.Seeds, "comma-delimited ID@host:port seed nodes") //nolint: staticcheck
cmd.Flags().String("p2p.seeds", config.P2P.Seeds, "comma-delimited ID@host:port seed nodes")
cmd.Flags().String("p2p.persistent-peers", config.P2P.PersistentPeers, "comma-delimited ID@host:port persistent peers")
cmd.Flags().String("p2p.unconditional-peer-ids",
config.P2P.UnconditionalPeerIDs, "comma-delimited IDs of unconditional peers")

View File

@@ -240,9 +240,7 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
}
config.Moniker = moniker(i)
if err := cfg.WriteConfigFile(nodeDir, config); err != nil {
return err
}
cfg.WriteConfigFile(nodeDir, config)
}
fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators)

View File

@@ -6,9 +6,9 @@ import (
cmd "github.com/tendermint/tendermint/cmd/tendermint/commands"
"github.com/tendermint/tendermint/cmd/tendermint/commands/debug"
"github.com/tendermint/tendermint/config"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/cli"
"github.com/tendermint/tendermint/node"
nm "github.com/tendermint/tendermint/node"
)
func main() {
@@ -23,16 +23,13 @@ func main() {
cmd.ReplayConsoleCmd,
cmd.ResetAllCmd,
cmd.ResetPrivValidatorCmd,
cmd.ResetStateCmd,
cmd.ShowValidatorCmd,
cmd.TestnetFilesCmd,
cmd.ShowNodeIDCmd,
cmd.GenNodeKeyCmd,
cmd.VersionCmd,
cmd.InspectCmd,
cmd.RollbackStateCmd,
cmd.MakeKeyMigrateCommand(),
cmd.MakeCompactDBCommand(),
debug.DebugCmd,
cli.NewCompletionCmd(rootCmd, true),
)
@@ -45,12 +42,12 @@ func main() {
// * Provide their own DB implementation
// can copy this file and use something other than the
// node.NewDefault function
nodeFunc := node.NewDefault
nodeFunc := nm.NewDefault
// Create & start node
rootCmd.AddCommand(cmd.NewRunNodeCmd(nodeFunc))
cmd := cli.PrepareBaseCmd(rootCmd, "TM", os.ExpandEnv(filepath.Join("$HOME", config.DefaultTendermintDir)))
cmd := cli.PrepareBaseCmd(rootCmd, "TM", os.ExpandEnv(filepath.Join("$HOME", cfg.DefaultTendermintDir)))
if err := cmd.Execute(); err != nil {
panic(err)
}

View File

@@ -64,9 +64,6 @@ var (
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName)
minSubscriptionBufferSize = 100
defaultSubscriptionBufferSize = 200
)
// Config defines the top level configuration for a Tendermint node
@@ -499,29 +496,6 @@ type RPCConfig struct {
// to the estimated maximum number of broadcast_tx_commit calls per block.
MaxSubscriptionsPerClient int `mapstructure:"max-subscriptions-per-client"`
// The number of events that can be buffered per subscription before
// returning `ErrOutOfCapacity`.
SubscriptionBufferSize int `mapstructure:"experimental-subscription-buffer-size"`
// The maximum number of responses that can be buffered per WebSocket
// client. If clients cannot read from the WebSocket endpoint fast enough,
// they will be disconnected, so increasing this parameter may reduce the
// chances of them being disconnected (but will cause the node to use more
// memory).
//
// Must be at least the same as `SubscriptionBufferSize`, otherwise
// connections may be dropped unnecessarily.
WebSocketWriteBufferSize int `mapstructure:"experimental-websocket-write-buffer-size"`
// If a WebSocket client cannot read fast enough, at present we may
// silently drop events instead of generating an error or disconnecting the
// client.
//
// Enabling this parameter will cause the WebSocket connection to be closed
// instead if it cannot read fast enough, allowing for greater
// predictability in subscription behavior.
CloseOnSlowClient bool `mapstructure:"experimental-close-on-slow-client"`
// How long to wait for a tx to be committed during /broadcast_tx_commit
// WARNING: Using a value larger than 10s will result in increasing the
// global HTTP write timeout, which applies to all connections and endpoints.
@@ -571,9 +545,7 @@ func DefaultRPCConfig() *RPCConfig {
MaxSubscriptionClients: 100,
MaxSubscriptionsPerClient: 5,
SubscriptionBufferSize: defaultSubscriptionBufferSize,
TimeoutBroadcastTxCommit: 10 * time.Second,
WebSocketWriteBufferSize: defaultSubscriptionBufferSize,
MaxBodyBytes: int64(1000000), // 1MB
MaxHeaderBytes: 1 << 20, // same as the net/http default
@@ -607,18 +579,6 @@ func (cfg *RPCConfig) ValidateBasic() error {
if cfg.MaxSubscriptionsPerClient < 0 {
return errors.New("max-subscriptions-per-client can't be negative")
}
if cfg.SubscriptionBufferSize < minSubscriptionBufferSize {
return fmt.Errorf(
"experimental-subscription-buffer-size must be >= %d",
minSubscriptionBufferSize,
)
}
if cfg.WebSocketWriteBufferSize < cfg.SubscriptionBufferSize {
return fmt.Errorf(
"experimental-websocket-write-buffer-size must be >= experimental-subscription-buffer-size (%d)",
cfg.SubscriptionBufferSize,
)
}
if cfg.TimeoutBroadcastTxCommit < 0 {
return errors.New("timeout-broadcast-tx-commit can't be negative")
}
@@ -671,11 +631,9 @@ type P2PConfig struct { //nolint: maligned
// Comma separated list of seed nodes to connect to
// We only use these if we cant connect to peers in the addrbook
//
// Deprecated: This value is not used by the new PEX reactor. Use
// BootstrapPeers instead.
//
// TODO(#5670): Remove once the p2p refactor is complete.
// NOTE: not used by the new PEX reactor. Please use BootstrapPeers instead.
// TODO: Remove once p2p refactor is complete
// ref: https://github.com/tendermint/tendermint/issues/5670
Seeds string `mapstructure:"seeds"`
// Comma separated list of peers to be added to the peer store
@@ -712,10 +670,6 @@ type P2PConfig struct { //nolint: maligned
// outbound).
MaxConnections uint16 `mapstructure:"max-connections"`
// MaxOutgoingConnections defines the maximum number of connected peers (inbound and
// outbound).
MaxOutgoingConnections uint16 `mapstructure:"max-outgoing-connections"`
// MaxIncomingConnectionAttempts rate limits the number of incoming connection
// attempts per IP address.
MaxIncomingConnectionAttempts uint `mapstructure:"max-incoming-connection-attempts"`
@@ -778,7 +732,6 @@ func DefaultP2PConfig() *P2PConfig {
MaxNumInboundPeers: 40,
MaxNumOutboundPeers: 10,
MaxConnections: 64,
MaxOutgoingConnections: 32,
MaxIncomingConnectionAttempts: 100,
PersistentPeersMaxDialPeriod: 0 * time.Second,
FlushThrottleTimeout: 100 * time.Millisecond,
@@ -838,9 +791,6 @@ func (cfg *P2PConfig) ValidateBasic() error {
if cfg.RecvRate < 0 {
return errors.New("recv-rate can't be negative")
}
if cfg.MaxOutgoingConnections > cfg.MaxConnections {
return errors.New("max-outgoing-connections cannot be larger than max-connections")
}
return nil
}

View File

@@ -138,6 +138,7 @@ func TestBlockSyncConfigValidateBasic(t *testing.T) {
}
func TestConsensusConfig_ValidateBasic(t *testing.T) {
// nolint: lll
testcases := map[string]struct {
modify func(*ConsensusConfig)
expectErr bool

View File

@@ -1,10 +1,9 @@
package config
import (
dbm "github.com/tendermint/tm-db"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
db "github.com/tendermint/tm-db"
)
// ServiceProvider takes a config and a logger and returns a ready to go Node.
@@ -17,11 +16,11 @@ type DBContext struct {
}
// DBProvider takes a DBContext and returns an instantiated DB.
type DBProvider func(*DBContext) (dbm.DB, error)
type DBProvider func(*DBContext) (db.DB, error)
// DefaultDBProvider returns a database using the DBBackend and DBDir
// specified in the Config.
func DefaultDBProvider(ctx *DBContext) (dbm.DB, error) {
dbType := dbm.BackendType(ctx.Config.DBBackend)
return dbm.NewDB(ctx.ID, dbType, ctx.Config.DBDir())
func DefaultDBProvider(ctx *DBContext) (db.DB, error) {
dbType := db.BackendType(ctx.Config.DBBackend)
return db.NewDB(ctx.ID, dbType, ctx.Config.DBDir())
}

View File

@@ -45,29 +45,23 @@ func EnsureRoot(rootDir string) {
// WriteConfigFile renders config using the template and writes it to configFilePath.
// This function is called by cmd/tendermint/commands/init.go
func WriteConfigFile(rootDir string, config *Config) error {
return config.WriteToTemplate(filepath.Join(rootDir, defaultConfigFilePath))
}
// WriteToTemplate writes the config to the exact file specified by
// the path, in the default toml template and does not mangle the path
// or filename at all.
func (cfg *Config) WriteToTemplate(path string) error {
func WriteConfigFile(rootDir string, config *Config) {
var buffer bytes.Buffer
if err := configTemplate.Execute(&buffer, cfg); err != nil {
return err
if err := configTemplate.Execute(&buffer, config); err != nil {
panic(err)
}
return writeFile(path, buffer.Bytes(), 0644)
configFilePath := filepath.Join(rootDir, defaultConfigFilePath)
mustWriteFile(configFilePath, buffer.Bytes(), 0644)
}
func writeDefaultConfigFileIfNone(rootDir string) error {
func writeDefaultConfigFileIfNone(rootDir string) {
configFilePath := filepath.Join(rootDir, defaultConfigFilePath)
if !tmos.FileExists(configFilePath) {
return WriteConfigFile(rootDir, DefaultConfig())
WriteConfigFile(rootDir, DefaultConfig())
}
return nil
}
// Note: any changes to the comments/variables/mapstructure
@@ -165,15 +159,15 @@ state-file = "{{ js .PrivValidator.State }}"
# when the listenAddr is prefixed with grpc instead of tcp it will use the gRPC Client
laddr = "{{ .PrivValidator.ListenAddr }}"
# Path to the client certificate generated while creating needed files for secure connection.
# Client certificate generated while creating needed files for secure connection.
# If a remote validator address is provided but no certificate, the connection will be insecure
client-certificate-file = "{{ js .PrivValidator.ClientCertificate }}"
# Client key generated while creating certificates for secure connection
client-key-file = "{{ js .PrivValidator.ClientKey }}"
validator-client-key-file = "{{ js .PrivValidator.ClientKey }}"
# Path to the Root Certificate Authority used to sign both client and server certificates
root-ca-file = "{{ js .PrivValidator.RootCA }}"
# Path Root Certificate Authority used to sign both client and server certificates
certificate-authority = "{{ js .PrivValidator.RootCA }}"
#######################################################################
@@ -236,33 +230,6 @@ max-subscription-clients = {{ .RPC.MaxSubscriptionClients }}
# the estimated # maximum number of broadcast_tx_commit calls per block.
max-subscriptions-per-client = {{ .RPC.MaxSubscriptionsPerClient }}
# Experimental parameter to specify the maximum number of events a node will
# buffer, per subscription, before returning an error and closing the
# subscription. Must be set to at least 100, but higher values will accommodate
# higher event throughput rates (and will use more memory).
experimental-subscription-buffer-size = {{ .RPC.SubscriptionBufferSize }}
# Experimental parameter to specify the maximum number of RPC responses that
# can be buffered per WebSocket client. If clients cannot read from the
# WebSocket endpoint fast enough, they will be disconnected, so increasing this
# parameter may reduce the chances of them being disconnected (but will cause
# the node to use more memory).
#
# Must be at least the same as "experimental-subscription-buffer-size",
# otherwise connections could be dropped unnecessarily. This value should
# ideally be somewhat higher than "experimental-subscription-buffer-size" to
# accommodate non-subscription-related RPC responses.
experimental-websocket-write-buffer-size = {{ .RPC.WebSocketWriteBufferSize }}
# If a WebSocket client cannot read fast enough, at present we may
# silently drop events instead of generating an error or disconnecting the
# client.
#
# Enabling this experimental parameter will cause the WebSocket connection to
# be closed instead if it cannot read fast enough, allowing for greater
# predictability in subscription behavior.
experimental-close-on-slow-client = {{ .RPC.CloseOnSlowClient }}
# How long to wait for a tx to be committed during /broadcast_tx_commit.
# WARNING: Using a value larger than 10s will result in increasing the
# global HTTP write timeout, which applies to all connections and endpoints.
@@ -355,10 +322,6 @@ max-num-outbound-peers = {{ .P2P.MaxNumOutboundPeers }}
# Maximum number of connections (inbound and outbound).
max-connections = {{ .P2P.MaxConnections }}
# Maximum number of connections reserved for outgoing
# connections. Must be less than max-connections
max-outgoing-connections = {{ .P2P.MaxOutgoingConnections }}
# Rate limits the number of incoming connection attempts per IP address.
max-incoming-connection-attempts = {{ .P2P.MaxIncomingConnectionAttempts }}
@@ -368,28 +331,18 @@ max-incoming-connection-attempts = {{ .P2P.MaxIncomingConnectionAttempts }}
unconditional-peer-ids = "{{ .P2P.UnconditionalPeerIDs }}"
# Maximum pause when redialing a persistent peer (if zero, exponential backoff is used)
# TODO: Remove once p2p refactor is complete
# ref: https:#github.com/tendermint/tendermint/issues/5670
persistent-peers-max-dial-period = "{{ .P2P.PersistentPeersMaxDialPeriod }}"
# Time to wait before flushing messages out on the connection
# TODO: Remove once p2p refactor is complete
# ref: https:#github.com/tendermint/tendermint/issues/5670
flush-throttle-timeout = "{{ .P2P.FlushThrottleTimeout }}"
# Maximum size of a message packet payload, in bytes
# TODO: Remove once p2p refactor is complete
# ref: https:#github.com/tendermint/tendermint/issues/5670
max-packet-msg-payload-size = {{ .P2P.MaxPacketMsgPayloadSize }}
# Rate at which packets can be sent, in bytes/second
# TODO: Remove once p2p refactor is complete
# ref: https:#github.com/tendermint/tendermint/issues/5670
send-rate = {{ .P2P.SendRate }}
# Rate at which packets can be received, in bytes/second
# TODO: Remove once p2p refactor is complete
# ref: https:#github.com/tendermint/tendermint/issues/5670
recv-rate = {{ .P2P.RecvRate }}
# Set true to enable the peer-exchange reactor
@@ -484,8 +437,8 @@ rpc-servers = "{{ StringsJoin .StateSync.RPCServers "," }}"
trust-height = {{ .StateSync.TrustHeight }}
trust-hash = "{{ .StateSync.TrustHash }}"
# The trust period should be set so that Tendermint can detect and gossip misbehavior before
# it is considered expired. For chains based on the Cosmos SDK, one day less than the unbonding
# The trust period should be set so that Tendermint can detect and gossip misbehavior before
# it is considered expired. For chains based on the Cosmos SDK, one day less than the unbonding
# period should suffice.
trust-period = "{{ .StateSync.TrustPeriod }}"
@@ -566,7 +519,7 @@ peer-query-maj23-sleep-duration = "{{ .Consensus.PeerQueryMaj23SleepDuration }}"
[tx-index]
# The backend database list to back the indexer.
# If list contains "null" or "", meaning no indexer service will be used.
# If list contains null, meaning no indexer service will be used.
#
# The application will set which txs to index. In some cases a node operator will be able
# to decide which txs to index based on configuration set in the application.
@@ -574,8 +527,8 @@ peer-query-maj23-sleep-duration = "{{ .Consensus.PeerQueryMaj23SleepDuration }}"
# Options:
# 1) "null"
# 2) "kv" (default) - the simplest possible indexer, backed by key-value storage (defaults to levelDB; see DBBackend).
# - When "kv" is chosen "tx.height" and "tx.hash" will always be indexed.
# 3) "psql" - the indexer services backed by PostgreSQL.
# When "kv" or "psql" is chosen "tx.height" and "tx.hash" will always be indexed.
indexer = [{{ range $i, $e := .TxIndex.Indexer }}{{if $i}}, {{end}}{{ printf "%q" $e}}{{end}}]
# The PostgreSQL connection configuration, the connection format:
@@ -607,22 +560,22 @@ namespace = "{{ .Instrumentation.Namespace }}"
/****** these are for test settings ***********/
func ResetTestRoot(testName string) (*Config, error) {
func ResetTestRoot(testName string) *Config {
return ResetTestRootWithChainID(testName, "")
}
func ResetTestRootWithChainID(testName string, chainID string) (*Config, error) {
func ResetTestRootWithChainID(testName string, chainID string) *Config {
// create a unique, concurrency-safe test directory under os.TempDir()
rootDir, err := ioutil.TempDir("", fmt.Sprintf("%s-%s_", chainID, testName))
if err != nil {
return nil, err
panic(err)
}
// ensure config and data subdirs are created
if err := tmos.EnsureDir(filepath.Join(rootDir, defaultConfigDir), DefaultDirPerm); err != nil {
return nil, err
panic(err)
}
if err := tmos.EnsureDir(filepath.Join(rootDir, defaultDataDir), DefaultDirPerm); err != nil {
return nil, err
panic(err)
}
conf := DefaultConfig()
@@ -631,36 +584,26 @@ func ResetTestRootWithChainID(testName string, chainID string) (*Config, error)
privStateFilePath := filepath.Join(rootDir, conf.PrivValidator.State)
// Write default config file if missing.
if err := writeDefaultConfigFileIfNone(rootDir); err != nil {
return nil, err
}
writeDefaultConfigFileIfNone(rootDir)
if !tmos.FileExists(genesisFilePath) {
if chainID == "" {
chainID = "tendermint_test"
}
testGenesis := fmt.Sprintf(testGenesisFmt, chainID)
if err := writeFile(genesisFilePath, []byte(testGenesis), 0644); err != nil {
return nil, err
}
mustWriteFile(genesisFilePath, []byte(testGenesis), 0644)
}
// we always overwrite the priv val
if err := writeFile(privKeyFilePath, []byte(testPrivValidatorKey), 0644); err != nil {
return nil, err
}
if err := writeFile(privStateFilePath, []byte(testPrivValidatorState), 0644); err != nil {
return nil, err
}
mustWriteFile(privKeyFilePath, []byte(testPrivValidatorKey), 0644)
mustWriteFile(privStateFilePath, []byte(testPrivValidatorState), 0644)
config := TestConfig().SetRoot(rootDir)
return config, nil
return config
}
func writeFile(filePath string, contents []byte, mode os.FileMode) error {
func mustWriteFile(filePath string, contents []byte, mode os.FileMode) {
if err := ioutil.WriteFile(filePath, contents, mode); err != nil {
return fmt.Errorf("failed to write file: %w", err)
tmos.Exit(fmt.Sprintf("failed to write file: %v", err))
}
return nil
}
var testGenesisFmt = `{

View File

@@ -24,17 +24,17 @@ func TestEnsureRoot(t *testing.T) {
// setup temp dir for test
tmpDir, err := ioutil.TempDir("", "config-test")
require.NoError(err)
require.Nil(err)
defer os.RemoveAll(tmpDir)
// create root dir
EnsureRoot(tmpDir)
require.NoError(WriteConfigFile(tmpDir, DefaultConfig()))
WriteConfigFile(tmpDir, DefaultConfig())
// make sure config is set properly
data, err := ioutil.ReadFile(filepath.Join(tmpDir, defaultConfigFilePath))
require.NoError(err)
require.Nil(err)
checkConfig(t, string(data))
@@ -47,8 +47,7 @@ func TestEnsureTestRoot(t *testing.T) {
testName := "ensureTestRoot"
// create root dir
cfg, err := ResetTestRoot(testName)
require.NoError(err)
cfg := ResetTestRoot(testName)
defer os.RemoveAll(cfg.RootDir)
rootDir := cfg.RootDir

39
crypto/armor/armor.go Normal file
View File

@@ -0,0 +1,39 @@
package armor
import (
"bytes"
"fmt"
"io/ioutil"
"golang.org/x/crypto/openpgp/armor"
)
func EncodeArmor(blockType string, headers map[string]string, data []byte) string {
buf := new(bytes.Buffer)
w, err := armor.Encode(buf, blockType, headers)
if err != nil {
panic(fmt.Errorf("could not encode ascii armor: %s", err))
}
_, err = w.Write(data)
if err != nil {
panic(fmt.Errorf("could not encode ascii armor: %s", err))
}
err = w.Close()
if err != nil {
panic(fmt.Errorf("could not encode ascii armor: %s", err))
}
return buf.String()
}
func DecodeArmor(armorStr string) (blockType string, headers map[string]string, data []byte, err error) {
buf := bytes.NewBufferString(armorStr)
block, err := armor.Decode(buf)
if err != nil {
return "", nil, nil, err
}
data, err = ioutil.ReadAll(block.Body)
if err != nil {
return "", nil, nil, err
}
return block.Type, block.Header, data, nil
}

View File

@@ -0,0 +1,20 @@
package armor
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestArmor(t *testing.T) {
blockType := "MINT TEST"
data := []byte("somedata")
armorStr := EncodeArmor(blockType, nil, data)
// Decode armorStr and test for equivalence.
blockType2, _, data2, err := DecodeArmor(armorStr)
require.Nil(t, err, "%+v", err)
assert.Equal(t, blockType, blockType2)
assert.Equal(t, data, data2)
}

View File

@@ -8,34 +8,34 @@ import (
"github.com/tendermint/tendermint/crypto/secp256k1"
"github.com/tendermint/tendermint/crypto/sr25519"
"github.com/tendermint/tendermint/libs/json"
cryptoproto "github.com/tendermint/tendermint/proto/tendermint/crypto"
pc "github.com/tendermint/tendermint/proto/tendermint/crypto"
)
func init() {
json.RegisterType((*cryptoproto.PublicKey)(nil), "tendermint.crypto.PublicKey")
json.RegisterType((*cryptoproto.PublicKey_Ed25519)(nil), "tendermint.crypto.PublicKey_Ed25519")
json.RegisterType((*cryptoproto.PublicKey_Secp256K1)(nil), "tendermint.crypto.PublicKey_Secp256K1")
json.RegisterType((*pc.PublicKey)(nil), "tendermint.crypto.PublicKey")
json.RegisterType((*pc.PublicKey_Ed25519)(nil), "tendermint.crypto.PublicKey_Ed25519")
json.RegisterType((*pc.PublicKey_Secp256K1)(nil), "tendermint.crypto.PublicKey_Secp256K1")
}
// PubKeyToProto takes crypto.PubKey and transforms it to a protobuf Pubkey
func PubKeyToProto(k crypto.PubKey) (cryptoproto.PublicKey, error) {
var kp cryptoproto.PublicKey
func PubKeyToProto(k crypto.PubKey) (pc.PublicKey, error) {
var kp pc.PublicKey
switch k := k.(type) {
case ed25519.PubKey:
kp = cryptoproto.PublicKey{
Sum: &cryptoproto.PublicKey_Ed25519{
kp = pc.PublicKey{
Sum: &pc.PublicKey_Ed25519{
Ed25519: k,
},
}
case secp256k1.PubKey:
kp = cryptoproto.PublicKey{
Sum: &cryptoproto.PublicKey_Secp256K1{
kp = pc.PublicKey{
Sum: &pc.PublicKey_Secp256K1{
Secp256K1: k,
},
}
case sr25519.PubKey:
kp = cryptoproto.PublicKey{
Sum: &cryptoproto.PublicKey_Sr25519{
kp = pc.PublicKey{
Sum: &pc.PublicKey_Sr25519{
Sr25519: k,
},
}
@@ -46,9 +46,9 @@ func PubKeyToProto(k crypto.PubKey) (cryptoproto.PublicKey, error) {
}
// PubKeyFromProto takes a protobuf Pubkey and transforms it to a crypto.Pubkey
func PubKeyFromProto(k cryptoproto.PublicKey) (crypto.PubKey, error) {
func PubKeyFromProto(k pc.PublicKey) (crypto.PubKey, error) {
switch k := k.Sum.(type) {
case *cryptoproto.PublicKey_Ed25519:
case *pc.PublicKey_Ed25519:
if len(k.Ed25519) != ed25519.PubKeySize {
return nil, fmt.Errorf("invalid size for PubKeyEd25519. Got %d, expected %d",
len(k.Ed25519), ed25519.PubKeySize)
@@ -56,7 +56,7 @@ func PubKeyFromProto(k cryptoproto.PublicKey) (crypto.PubKey, error) {
pk := make(ed25519.PubKey, ed25519.PubKeySize)
copy(pk, k.Ed25519)
return pk, nil
case *cryptoproto.PublicKey_Secp256K1:
case *pc.PublicKey_Secp256K1:
if len(k.Secp256K1) != secp256k1.PubKeySize {
return nil, fmt.Errorf("invalid size for PubKeySecp256k1. Got %d, expected %d",
len(k.Secp256K1), secp256k1.PubKeySize)
@@ -64,7 +64,7 @@ func PubKeyFromProto(k cryptoproto.PublicKey) (crypto.PubKey, error) {
pk := make(secp256k1.PubKey, secp256k1.PubKeySize)
copy(pk, k.Secp256K1)
return pk, nil
case *cryptoproto.PublicKey_Sr25519:
case *pc.PublicKey_Sr25519:
if len(k.Sr25519) != sr25519.PubKeySize {
return nil, fmt.Errorf("invalid size for PubKeySr25519. Got %d, expected %d",
len(k.Sr25519), sr25519.PubKeySize)
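For reference, a hedged usage sketch of these helpers; the import paths and the ed25519 key generation below are assumptions for illustration, not part of this diff:

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/crypto/ed25519"
	"github.com/tendermint/tendermint/crypto/encoding"
)

func main() {
	// Generate an ed25519 key and round-trip its public key through protobuf.
	pub := ed25519.GenPrivKey().PubKey()

	protoPub, err := encoding.PubKeyToProto(pub)
	if err != nil {
		panic(err)
	}
	pub2, err := encoding.PubKeyFromProto(protoPub)
	if err != nil {
		panic(err)
	}
	fmt.Println("round-trip ok:", pub.Equals(pub2))
}
```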

View File

@@ -172,67 +172,3 @@ func (pubKey PubKey) Equals(other crypto.PubKey) bool {
func (pubKey PubKey) Type() string {
return KeyType
}
// used to reject malleable signatures
// see:
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/crypto.go#L39
var secp256k1halfN = new(big.Int).Rsh(secp256k1.S256().N, 1)
// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
// The returned signature will be of the form R || S (in lower-S form).
func (privKey PrivKey) Sign(msg []byte) ([]byte, error) {
priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey)
sig, err := priv.Sign(crypto.Sha256(msg))
if err != nil {
return nil, err
}
sigBytes := serializeSig(sig)
return sigBytes, nil
}
// VerifySignature verifies a signature of the form R || S.
// It rejects signatures which are not in lower-S form.
func (pubKey PubKey) VerifySignature(msg []byte, sigStr []byte) bool {
if len(sigStr) != 64 {
return false
}
pub, err := secp256k1.ParsePubKey(pubKey, secp256k1.S256())
if err != nil {
return false
}
// parse the signature:
signature := signatureFromBytes(sigStr)
// Reject malleable signatures. libsecp256k1 does this check but btcec doesn't.
// see: https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
if signature.S.Cmp(secp256k1halfN) > 0 {
return false
}
return signature.Verify(crypto.Sha256(msg), pub)
}
// Read Signature struct from R || S. Caller needs to ensure
// that len(sigStr) == 64.
func signatureFromBytes(sigStr []byte) *secp256k1.Signature {
return &secp256k1.Signature{
R: new(big.Int).SetBytes(sigStr[:32]),
S: new(big.Int).SetBytes(sigStr[32:64]),
}
}
// Serialize signature to R || S.
// R, S are padded to 32 bytes respectively.
func serializeSig(sig *secp256k1.Signature) []byte {
rBytes := sig.R.Bytes()
sBytes := sig.S.Bytes()
sigBytes := make([]byte, 64)
// 0 pad the byte arrays from the left if they aren't big enough.
copy(sigBytes[32-len(rBytes):32], rBytes)
copy(sigBytes[64-len(sBytes):64], sBytes)
return sigBytes
}

View File

@@ -0,0 +1,75 @@
// +build !libsecp256k1
package secp256k1
import (
"math/big"
secp256k1 "github.com/btcsuite/btcd/btcec"
"github.com/tendermint/tendermint/crypto"
)
// used to reject malleable signatures
// see:
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/crypto.go#L39
var secp256k1halfN = new(big.Int).Rsh(secp256k1.S256().N, 1)
// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
// The returned signature will be of the form R || S (in lower-S form).
func (privKey PrivKey) Sign(msg []byte) ([]byte, error) {
priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey)
sig, err := priv.Sign(crypto.Sha256(msg))
if err != nil {
return nil, err
}
sigBytes := serializeSig(sig)
return sigBytes, nil
}
// VerifySignature verifies a signature of the form R || S.
// It rejects signatures which are not in lower-S form.
func (pubKey PubKey) VerifySignature(msg []byte, sigStr []byte) bool {
if len(sigStr) != 64 {
return false
}
pub, err := secp256k1.ParsePubKey(pubKey, secp256k1.S256())
if err != nil {
return false
}
// parse the signature:
signature := signatureFromBytes(sigStr)
// Reject malleable signatures. libsecp256k1 does this check but btcec doesn't.
// see: https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
if signature.S.Cmp(secp256k1halfN) > 0 {
return false
}
return signature.Verify(crypto.Sha256(msg), pub)
}
// Read Signature struct from R || S. Caller needs to ensure
// that len(sigStr) == 64.
func signatureFromBytes(sigStr []byte) *secp256k1.Signature {
return &secp256k1.Signature{
R: new(big.Int).SetBytes(sigStr[:32]),
S: new(big.Int).SetBytes(sigStr[32:64]),
}
}
// Serialize signature to R || S.
// R, S are padded to 32 bytes respectively.
func serializeSig(sig *secp256k1.Signature) []byte {
rBytes := sig.R.Bytes()
sBytes := sig.S.Bytes()
sigBytes := make([]byte, 64)
// 0 pad the byte arrays from the left if they aren't big enough.
copy(sigBytes[32-len(rBytes):32], rBytes)
copy(sigBytes[64-len(sBytes):64], sBytes)
return sigBytes
}
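A hedged usage sketch of the Sign/VerifySignature pair above; the `GenPrivKey` constructor is assumed to live elsewhere in the secp256k1 package:

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/crypto/secp256k1"
)

func main() {
	priv := secp256k1.GenPrivKey() // assumed constructor from this package
	pub := priv.PubKey()

	msg := []byte("hello tendermint")
	sig, err := priv.Sign(msg) // R || S, lower-S form
	if err != nil {
		panic(err)
	}

	// VerifySignature rejects signatures that are not in lower-S form.
	fmt.Println("valid:", pub.VerifySignature(msg, sig))
}
```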

View File

@@ -5,13 +5,14 @@ import (
"math/big"
"testing"
underlyingSecp256k1 "github.com/btcsuite/btcd/btcec"
"github.com/btcsuite/btcutil/base58"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/secp256k1"
underlyingSecp256k1 "github.com/btcsuite/btcd/btcec"
)
type keyData struct {

View File

@@ -2,8 +2,8 @@ package xchacha20poly1305
import (
"bytes"
crand "crypto/rand"
mrand "math/rand"
cr "crypto/rand"
mr "math/rand"
"testing"
)
@@ -19,25 +19,25 @@ func TestRandom(t *testing.T) {
var nonce [24]byte
var key [32]byte
al := mrand.Intn(128)
pl := mrand.Intn(16384)
al := mr.Intn(128)
pl := mr.Intn(16384)
ad := make([]byte, al)
plaintext := make([]byte, pl)
_, err := crand.Read(key[:])
_, err := cr.Read(key[:])
if err != nil {
t.Errorf("error on read: %v", err)
t.Errorf("error on read: %w", err)
}
_, err = crand.Read(nonce[:])
_, err = cr.Read(nonce[:])
if err != nil {
t.Errorf("error on read: %v", err)
t.Errorf("error on read: %w", err)
}
_, err = crand.Read(ad)
_, err = cr.Read(ad)
if err != nil {
t.Errorf("error on read: %v", err)
t.Errorf("error on read: %w", err)
}
_, err = crand.Read(plaintext)
_, err = cr.Read(plaintext)
if err != nil {
t.Errorf("error on read: %v", err)
t.Errorf("error on read: %w", err)
}
aead, err := New(key[:])
@@ -59,7 +59,7 @@ func TestRandom(t *testing.T) {
}
if len(ad) > 0 {
alterAdIdx := mrand.Intn(len(ad))
alterAdIdx := mr.Intn(len(ad))
ad[alterAdIdx] ^= 0x80
if _, err := aead.Open(nil, nonce[:], ct, ad); err == nil {
t.Errorf("random #%d: Open was successful after altering additional data", i)
@@ -67,14 +67,14 @@ func TestRandom(t *testing.T) {
ad[alterAdIdx] ^= 0x80
}
alterNonceIdx := mrand.Intn(aead.NonceSize())
alterNonceIdx := mr.Intn(aead.NonceSize())
nonce[alterNonceIdx] ^= 0x80
if _, err := aead.Open(nil, nonce[:], ct, ad); err == nil {
t.Errorf("random #%d: Open was successful after altering nonce", i)
}
nonce[alterNonceIdx] ^= 0x80
alterCtIdx := mrand.Intn(len(ct))
alterCtIdx := mr.Intn(len(ct))
ct[alterCtIdx] ^= 0x80
if _, err := aead.Open(nil, nonce[:], ct, ad); err == nil {
t.Errorf("random #%d: Open was successful after altering ciphertext", i)

View File

@@ -65,5 +65,5 @@ networks:
ipam:
driver: default
config:
-
subnet: 192.167.10.0/16
-
subnet: 192.167.10.0/16

View File

@@ -22,6 +22,10 @@ module.exports = {
index: "tendermint"
},
versions: [
{
"label": "v0.32",
"key": "v0.32"
},
{
"label": "v0.33",
"key": "v0.33"
@@ -31,8 +35,8 @@ module.exports = {
"key": "v0.34"
},
{
"label": "v0.35",
"key": "v0.35"
"label": "master",
"key": "master"
}
],
topbar: {
@@ -45,10 +49,12 @@ module.exports = {
title: 'Resources',
children: [
{
// TODO(creachadair): Figure out how to make this per-branch.
// See: https://github.com/tendermint/tendermint/issues/7908
title: 'Developer Sessions',
path: '/DEV_SESSIONS.html'
},
{
title: 'RPC',
path: 'https://docs.tendermint.com/v0.35/rpc/',
path: 'https://docs.tendermint.com/master/rpc/',
static: true
},
]
@@ -72,7 +78,7 @@ module.exports = {
},
footer: {
question: {
text: 'Chat with Tendermint developers in <a href=\'https://discord.gg/cosmosnetwork\' target=\'_blank\'>Discord</a> or reach out on the <a href=\'https://forum.cosmos.network/c/tendermint\' target=\'_blank\'>Tendermint Forum</a> to learn more.'
text: 'Chat with Tendermint developers in <a href=\'https://discord.gg/vcExX9T\' target=\'_blank\'>Discord</a> or reach out on the <a href=\'https://forum.cosmos.network/c/tendermint\' target=\'_blank\'>Tendermint Forum</a> to learn more.'
},
logo: '/logo-bw.svg',
textLink: {
@@ -160,12 +166,6 @@ module.exports = {
{
ga: 'UA-51029217-11'
}
],
[
'@vuepress/plugin-html-redirect',
{
countdown: 0
}
]
]
};

View File

@@ -1 +0,0 @@
/master/ /v0.35/

View File

@@ -2,27 +2,17 @@
The documentation for Tendermint Core is hosted at:
- <https://docs.tendermint.com/>
- <https://docs.tendermint.com/master/>
built from the files in this [`docs` directory for `master`](https://github.com/tendermint/tendermint/tree/master/docs)
and other supported release branches.
built from the files in this (`/docs`) directory for
[master](https://github.com/tendermint/tendermint/tree/master/docs) respectively.
## How It Works
There is a [GitHub Actions workflow](https://github.com/tendermint/docs/actions/workflows/deployment.yml)
in the `tendermint/docs` repository that clones and builds the documentation
site from the contents of this `docs` directory, for `master` and for each
supported release branch. Under the hood, this workflow runs `make build-docs`
from the [Makefile](../Makefile#L214).
The list of supported versions are defined in [`config.js`](./.vuepress/config.js),
which defines the UI menu on the documentation site, and also in
[`docs/versions`](./versions), which determines which branches are built.
The last entry in the `docs/versions` file determines which version is linked
by default from the generated `index.html`. This should generally be the most
recent release, rather than `master`, so that new users are not confused by
documentation for unreleased features.
There is a CircleCI job listening for changes in the `/docs` directory on the
`master` branch. Any updates to files in this directory on that branch
will automatically trigger a website deployment. Under the hood,
the private website repository has a `make build-docs` target consumed by a CircleCI job in that repo.
## README

View File

@@ -24,7 +24,7 @@ To get started quickly with an example application, see the [quick start guide](
To learn about application development on Tendermint, see the [Application Blockchain Interface](https://github.com/tendermint/spec/tree/master/spec/abci).
For more details on using Tendermint, see the respective documentation for
[Tendermint Core](tendermint-core/), [benchmarking and monitoring](tools/), and [network deployments](nodes/).
[Tendermint Core](tendermint-core/), [benchmarking and monitoring](tools/), and [network deployments](networks/).
To find out about the Tendermint ecosystem you can go [here](https://github.com/tendermint/awesome#ecosystem). If you are a project that is using Tendermint you are welcome to make a PR to add your project to the list.

View File

@@ -62,7 +62,7 @@ as `abci-cli` above. The kvstore just stores transactions in a merkle
tree.
Its code can be found
[here](https://github.com/tendermint/tendermint/blob/v0.35.x/abci/cmd/abci-cli/abci-cli.go)
[here](https://github.com/tendermint/tendermint/blob/master/abci/cmd/abci-cli/abci-cli.go)
and looks like:
```go

View File

@@ -68,7 +68,7 @@ tendermint start
```
If you have used Tendermint, you may want to reset the data for a new
blockchain by running `tendermint unsafe-reset-all`. Then you can run
blockchain by running `tendermint unsafe_reset_all`. Then you can run
`tendermint start` to start Tendermint, and connect to the app. For more
details, see [the guide on using Tendermint](../tendermint-core/using-tendermint.md).
@@ -190,7 +190,7 @@ node example/counter.js
In another window, reset and start `tendermint`:
```sh
tendermint unsafe-reset-all
tendermint unsafe_reset_all
tendermint start
```

102
docs/architecture/README.md Normal file
View File

@@ -0,0 +1,102 @@
---
order: 1
parent:
order: false
---
# Architecture Decision Records (ADR)
This is a location to record all high-level architecture decisions in the Tendermint project.
You can read more about the ADR concept in this [blog post](https://product.reverb.com/documenting-architecture-decisions-the-reverb-way-a3563bb24bd0#.78xhdix6t).
An ADR should provide:
- Context on the relevant goals and the current state
- Proposed changes to achieve the goals
- Summary of pros and cons
- References
- Changelog
Note the distinction between an ADR and a spec. The ADR provides the context, intuition, reasoning, and
justification for a change in architecture, or for the architecture of something
new. The spec is a much more compressed and streamlined summary of everything as
it stands today.
If recorded decisions turned out to be lacking, convene a discussion, record the new decisions here, and then modify the code to match.
Note the context/background should be written in the present tense.
## Table of Contents
### Implemented
- [ADR-001: Logging](./adr-001-logging.md)
- [ADR-002: Event-Subscription](./adr-002-event-subscription.md)
- [ADR-003: ABCI-APP-RPC](./adr-003-abci-app-rpc.md)
- [ADR-004: Historical-Validators](./adr-004-historical-validators.md)
- [ADR-005: Consensus-Params](./adr-005-consensus-params.md)
- [ADR-008: Priv-Validator](./adr-008-priv-validator.md)
- [ADR-009: ABCI-Design](./adr-009-ABCI-design.md)
- [ADR-010: Crypto-Changes](./adr-010-crypto-changes.md)
- [ADR-011: Monitoring](./adr-011-monitoring.md)
- [ADR-014: Secp-Malleability](./adr-014-secp-malleability.md)
- [ADR-015: Crypto-Encoding](./adr-015-crypto-encoding.md)
- [ADR-016: Protocol-Versions](./adr-016-protocol-versions.md)
- [ADR-017: Chain-Versions](./adr-017-chain-versions.md)
- [ADR-018: ABCI-Validators](./adr-018-ABCI-Validators.md)
- [ADR-019: Multisigs](./adr-019-multisigs.md)
- [ADR-020: Block-Size](./adr-020-block-size.md)
- [ADR-021: ABCI-Events](./adr-021-abci-events.md)
- [ADR-025: Commit](./adr-025-commit.md)
- [ADR-026: General-Merkle-Proof](./adr-026-general-merkle-proof.md)
- [ADR-033: Pubsub](./adr-033-pubsub.md)
- [ADR-034: Priv-Validator-File-Structure](./adr-034-priv-validator-file-structure.md)
- [ADR-043: Blockchain-RiRi-Org](./adr-043-blockchain-riri-org.md)
- [ADR-044: Lite-Client-With-Weak-Subjectivity](./adr-044-lite-client-with-weak-subjectivity.md)
- [ADR-046: Light-Client-Implementation](./adr-046-light-client-implementation.md)
- [ADR-047: Handling-Evidence-From-Light-Client](./adr-047-handling-evidence-from-light-client.md)
- [ADR-051: Double-Signing-Risk-Reduction](./adr-051-double-signing-risk-reduction.md)
- [ADR-052: Tendermint-Mode](./adr-052-tendermint-mode.md)
- [ADR-053: State-Sync-Prototype](./adr-053-state-sync-prototype.md)
- [ADR-054: Crypto-Encoding-2](./adr-054-crypto-encoding-2.md)
- [ADR-055: Protobuf-Design](./adr-055-protobuf-design.md)
- [ADR-056: Light-Client-Amnesia-Attacks](./adr-056-light-client-amnesia-attacks.md)
- [ADR-059: Evidence-Composition-and-Lifecycle](./adr-059-evidence-composition-and-lifecycle.md)
- [ADR-062: P2P-Architecture](./adr-062-p2p-architecture.md)
- [ADR-063: Privval-gRPC](./adr-063-privval-grpc.md)
- [ADR-066: E2E-Testing](./adr-066-e2e-testing.md)
### Accepted
- [ADR-006: Trust-Metric](./adr-006-trust-metric.md)
- [ADR-024: Sign-Bytes](./adr-024-sign-bytes.md)
- [ADR-035: Documentation](./adr-035-documentation.md)
- [ADR-039: Peer-Behaviour](./adr-039-peer-behaviour.md)
- [ADR-060: Go-API-Stability](./adr-060-go-api-stability.md)
- [ADR-061: P2P-Refactor-Scope](./adr-061-p2p-refactor-scope.md)
- [ADR-065: Custom Event Indexing](./adr-065-custom-event-indexing.md)
- [ADR-068: Reverse-Sync](./adr-068-reverse-sync.md)
- [ADR-067: Mempool Refactor](./adr-067-mempool-refactor.md)
### Rejected
- [ADR-023: ABCI-Propose-tx](./adr-023-ABCI-propose-tx.md)
- [ADR-029: Check-Tx-Consensus](./adr-029-check-tx-consensus.md)
- [ADR-058: Event-Hashing](./adr-058-event-hashing.md)
### Proposed
- [ADR-007: Trust-Metric-Usage](./adr-007-trust-metric-usage.md)
- [ADR-012: Peer-Transport](./adr-012-peer-transport.md)
- [ADR-013: Symmetric-Crypto](./adr-013-symmetric-crypto.md)
- [ADR-022: ABCI-Errors](./adr-022-abci-errors.md)
- [ADR-030: Consensus-Refactor](./adr-030-consensus-refactor.md)
- [ADR-037: Deliver-Block](./adr-037-deliver-block.md)
- [ADR-038: Non-Zero-Start-Height](./adr-038-non-zero-start-height.md)
- [ADR-041: Proposer-Selection-via-ABCI](./adr-041-proposer-selection-via-abci.md)
- [ADR-045: ABCI-Evidence](./adr-045-abci-evidence.md)
- [ADR-057: RPC](./adr-057-RPC.md)
- [ADR-069: Node Initialization](./adr-069-flexible-node-initialization.md)
- [ADR-071: Proposer-Based Timestamps](adr-071-proposer-based-timestamps.md)
- [ADR-072: Restore Requests for Comments](./adr-072-request-for-comments.md)

View File

@@ -0,0 +1,216 @@
# ADR 1: Logging
## Context
The current logging system in Tendermint is very static and not flexible enough.
Issues: [358](https://github.com/tendermint/tendermint/issues/358), [375](https://github.com/tendermint/tendermint/issues/375).
What we want from the new system:
- per package dynamic log levels
- dynamic logger setting (logger tied to the processing struct)
- conventions
- be more visually appealing
"dynamic" here means the ability to set smth in runtime.
## Decision
### 1) An interface
First, we will need an interface for all of our libraries (`tmlibs`, Tendermint, etc.). My personal preference is the go-kit `Logger` interface (see Appendix A), but that is too big a change. Plus, we will still need levels.
```go
# log.go
type Logger interface {
Debug(msg string, keyvals ...interface{}) error
Info(msg string, keyvals ...interface{}) error
Error(msg string, keyvals ...interface{}) error
With(keyvals ...interface{}) Logger
}
```
On a side note: the difference between `Info` and `Notice` is subtle. We could
probably do without `Notice`. We don't think we need `Panic` or `Fatal` as part of
the interface. These funcs could be implemented as helpers. In fact, we already
have some in `tmlibs/common`.
- `Debug` - extended output for devs
- `Info` - all that is useful for a user
- `Error` - errors
`Notice` should become `Info`, `Warn` either `Error` or `Debug` depending on the message, `Crit` -> `Error`.
This interface should go into `tmlibs/log`. All libraries which are part of the core (tendermint/tendermint) should obey it.
### 2) Logger with our current formatting
On top of this interface, we will need to implement a stdout logger, which will be used when Tendermint is configured to output logs to STDOUT.
Many people say that they like the current output, so let's stick with it.
```
NOTE[2017-04-25|14:45:08] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0
```
Couple of minor changes:
```
I[2017-04-25|14:45:08.322] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0
```
Notice the level is encoded using only one character, and the timestamp now includes milliseconds.
Note: there are many other formats out there like [logfmt](https://brandur.org/logfmt).
This logger could be implemented using any logger - [logrus](https://github.com/sirupsen/logrus), [go-kit/log](https://github.com/go-kit/kit/tree/master/log), [zap](https://github.com/uber-go/zap), log15 - as long as it
a) supports coloring output<br>
b) is moderately fast (buffering) <br>
c) conforms to the new interface or adapter could be written for it <br>
d) is somewhat configurable<br>
go-kit is my favorite so far. Check out how easy it is to color errors in red https://github.com/go-kit/kit/blob/master/log/term/example_test.go#L12. Although coloring can only be applied to the whole string :(
```
go-kit +: flexible, modular
go-kit “-”: logfmt format https://brandur.org/logfmt
logrus +: popular, feature rich (hooks), API and output is more like what we want
logrus -: not so flexible
```
```go
# tm_logger.go
// NewTmLogger returns a logger that encodes keyvals to the Writer in
// tm format.
func NewTmLogger(w io.Writer) Logger {
return &tmLogger{kitlog.NewLogfmtLogger(w)}
}
func (l tmLogger) SetLevel(level string) {
switch level {
case "debug":
l.sourceLogger = level.NewFilter(l.sourceLogger, level.AllowDebug())
}
}
func (l tmLogger) Info(msg string, keyvals ...interface{}) error {
l.sourceLogger.Log("msg", msg, keyvals...)
}
# log.go
func With(logger Logger, keyvals ...interface{}) Logger {
kitlog.With(logger.sourceLogger, keyvals...)
}
```
Usage:
```go
logger := log.NewTmLogger(os.Stdout)
logger.SetLevel(config.GetString("log_level"))
node.SetLogger(log.With(logger, "node", Name))
```
**Other log formatters**
In the future, we may want other formatters like JSONFormatter.
```
{ "level": "notice", "time": "2017-04-25 14:45:08.562471297 -0400 EDT", "module": "consensus", "msg": "ABCI Replay Blocks", "appHeight": 0, "storeHeight": 0, "stateHeight": 0 }
```
### 3) Dynamic logger setting
https://dave.cheney.net/2017/01/23/the-package-level-logger-anti-pattern
This is the hardest part and where the most work will be done. The logger should be tied to the processing struct, or to the context if it adds some fields to the logger.
```go
type BaseService struct {
log log15.Logger
name string
started uint32 // atomic
stopped uint32 // atomic
...
}
```
BaseService already contains a `log` field, so most of the structs embedding it should be fine. We should rename it to `logger`.
The only thing missing is the ability to set logger:
```
func (bs *BaseService) SetLogger(l log.Logger) {
bs.logger = l
}
```
### 4) Conventions
Important keyvals should go first. Example:
```
correct
I[2017-04-25|14:45:08.322] ABCI Replay Blocks module=consensus instance=1 appHeight=0 storeHeight=0 stateHeight=0
```
not
```
wrong
I[2017-04-25|14:45:08.322] ABCI Replay Blocks module=consensus appHeight=0 storeHeight=0 stateHeight=0 instance=1
```
For that, in most cases you'll need to add the `instance` field to a logger upon creation, not when you log a particular message:
```go
colorFn := func(keyvals ...interface{}) term.FgBgColor {
for i := 1; i < len(keyvals); i += 2 {
if keyvals[i] == "instance" && keyvals[i+1] == "1" {
return term.FgBgColor{Fg: term.Blue}
} else if keyvals[i] == "instance" && keyvals[i+1] == "2" {
return term.FgBgColor{Fg: term.Red}
}
}
return term.FgBgColor{}
}
logger := term.NewLogger(os.Stdout, log.NewTmLogger, colorFn)
c1 := NewConsensusReactor(...)
c1.SetLogger(log.With(logger, "instance", 1))
c2 := NewConsensusReactor(...)
c2.SetLogger(log.With(logger, "instance", 2))
```
## Status
Implemented
## Consequences
### Positive
Dynamic logger, which could be turned off for some modules at runtime. Public interface for other projects using Tendermint libraries.
### Negative
We may lose the ability to color keys in key-value pairs. go-kit allows you to easily change foreground / background colors of the whole string, but not its parts.
### Neutral
## Appendix A.
I really like the minimalistic approach go-kit took with its logger https://github.com/go-kit/kit/tree/master/log:
```
type Logger interface {
Log(keyvals ...interface{}) error
}
```
See [The Hunt for a Logger Interface](https://go-talks.appspot.com/github.com/ChrisHines/talks/structured-logging/structured-logging.slide). The advantage is greater composability (check out how go-kit defines colored logging or log-leveled logging on top of this interface https://github.com/go-kit/kit/tree/master/log).

View File

@@ -0,0 +1,88 @@
# ADR 2: Event Subscription
## Context
In the light client (or any other client), the user may want to **subscribe to
a subset of transactions** (rather than all of them) using `/subscribe?event=X`. For
example, I want to subscribe for all transactions associated with a particular
account. Same for fetching. The user may want to **fetch transactions based on
some filter** (rather than fetching all the blocks). For example, I want to get
all transactions for a particular account in the last two weeks (`tx's block time >= '2017-06-05'`).
Now you can't even subscribe to "all txs" in Tendermint.
The goal is a simple and easy to use API for doing that.
![Tx Send Flow Diagram](img/tags1.png)
## Decision
The ABCI app returns tags with a `DeliverTx` response inside the `data` field (_for
now; later we may create a separate field_). Tags are a list of key-value pairs,
protobuf encoded.
Example data:
```json
{
"abci.account.name": "Igor",
"abci.account.address": "0xdeadbeef",
"tx.gas": 7
}
```
### Subscribing for transactions events
If the user wants to receive only a subset of transactions, the ABCI app must
return a list of tags with a `DeliverTx` response. These tags will be parsed and
matched against the current queries (subscribers). If a query matches the tags,
the subscriber will get the transaction event.
```
/subscribe?query="tm.event = Tx AND tx.hash = AB0023433CF0334223212243BDD AND abci.account.invoice.number = 22"
```
A new package must be developed to replace the current `events` package. It
will allow clients to subscribe to different types of events in the future:
```
/subscribe?query="abci.account.invoice.number = 22"
/subscribe?query="abci.account.invoice.owner CONTAINS Igor"
```
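As a rough illustration of the matching step (not the actual pubsub/query implementation), a naive matcher that checks simple equality clauses against a tag map could look like this:

```go
package main

import (
	"fmt"
	"strings"
)

// matches reports whether every "key = value" clause in the query
// (clauses joined by AND) is satisfied by the given tag map.
// This is a toy matcher for illustration only; the real query language
// also supports operators such as CONTAINS, <, >, etc.
func matches(query string, tags map[string]string) bool {
	for _, clause := range strings.Split(query, " AND ") {
		parts := strings.SplitN(clause, "=", 2)
		if len(parts) != 2 {
			return false
		}
		key := strings.TrimSpace(parts[0])
		val := strings.Trim(strings.TrimSpace(parts[1]), "'\"")
		if tags[key] != val {
			return false
		}
	}
	return true
}

func main() {
	tags := map[string]string{
		"abci.account.name":    "Igor",
		"abci.account.address": "0xdeadbeef",
		"tx.gas":               "7",
	}
	fmt.Println(matches("abci.account.name = 'Igor' AND tx.gas = 7", tags)) // true
}
```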
### Fetching transactions
This is a bit tricky because a) we want to support a number of indexers, all of
which have different APIs, and b) we don't know whether tags will be sufficient
for most apps (I guess we'll see).
```
/txs/search?query="tx.hash = AB0023433CF0334223212243BDD AND abci.account.owner CONTAINS Igor"
/txs/search?query="abci.account.owner = Igor"
```
For historic queries we will need an indexing storage backend (Postgres, SQLite, ...).
### Issues
- https://github.com/tendermint/tendermint/issues/376
- https://github.com/tendermint/tendermint/issues/287
- https://github.com/tendermint/tendermint/issues/525 (related)
## Status
Implemented
## Consequences
### Positive
- same format for event notifications and search APIs
- powerful enough query
### Negative
- performance of the `match` function (when we have too many queries / subscribers)
- there may be an issue when there are too many txs in the DB
### Neutral

View File

@@ -0,0 +1,34 @@
# ADR 3: Must an ABCI-app have an RPC server?
## Context
ABCI-server could expose its own RPC-server and act as a proxy to Tendermint.
The idea was for the Tendermint RPC to just be a transparent proxy to the app.
Clients need to talk to Tendermint for proofs, unless we burden all app devs
with exposing Tendermint proof stuff. Also seems less complex to lock down one
server than two, but granted it makes querying a bit more kludgy since it needs
to be passed as a `Query`. Also, **having a very standard rpc interface means
the light-client can work with all apps and handle proofs**. The only
app-specific logic is decoding the binary data to a more readable form (eg.
json). This is a huge advantage for code-reuse and standardization.
## Decision
We don't expose an RPC server on any of our ABCI-apps.
## Status
Implemented
## Consequences
### Positive
- Unified interface for all apps
### Negative
- `Query` interface
### Neutral

View File

@@ -0,0 +1,38 @@
# ADR 004: Historical Validators
## Context
Right now, we can query the present validator set, but there is no history.
If you were offline for a long time, there is no way to reconstruct past validators. This is needed for the light client, and we agreed it needs an API enhancement.
## Decision
For every block, store a new structure that contains either the latest validator set,
or the height of the last block for which the validator set changed. Note this is not
the height of the block which returned the validator set change itself, but the next block,
ie. the first block it comes into effect for.
Storing the validators will be handled by the `state` package.
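A minimal sketch of such a per-height record and lookup, with hypothetical names (the real `state` package types and storage differ):

```go
// Sketch only: names and shapes are illustrative, not the actual state package types.

// Validator is a stand-in for the real validator type.
type Validator struct {
	Address string
	Power   int64
}

// ValidatorsInfo records, for a given height, either the validator set itself
// or the last height at which the effective set changed.
type ValidatorsInfo struct {
	ValidatorSet      []Validator // empty if the set did not change at this height
	LastHeightChanged int64
}

// store maps height -> ValidatorsInfo; a real implementation would write to the DB.
var store = map[int64]ValidatorsInfo{}

// LoadValidators resolves the set effective at a height by following
// LastHeightChanged when the stored record does not carry the full set.
func LoadValidators(height int64) []Validator {
	info := store[height]
	if len(info.ValidatorSet) == 0 && info.LastHeightChanged != height {
		info = store[info.LastHeightChanged]
	}
	return info.ValidatorSet
}
```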
At some point in the future, we may consider more efficient storage in the case where the validators
are updated frequently - for instance by only saving the diffs, rather than the whole set.
An alternative approach suggested keeping the validator set, or diffs of it, in a merkle IAVL tree.
While it might afford cheaper proofs that a validator set has not changed, it would be more complex,
and likely less efficient.
## Status
Implemented
## Consequences
### Positive
- Can query old validator sets, with proof.
### Negative
- Writes an extra structure to disk with every block.
### Neutral

View File

@@ -0,0 +1,85 @@
# ADR 005: Consensus Params
## Context
Consensus critical parameters controlling blockchain capacity have until now been hard coded, loaded from a local config, or neglected.
Since they may need to be different in different networks, and potentially to evolve over time within
networks, we seek to initialize them in a genesis file, and expose them through the ABCI.
While we have some specific parameters now, like maximum block and transaction size, we expect to have more in the future,
such as a period over which evidence is valid, or the frequency of checkpoints.
## Decision
### ConsensusParams
No consensus critical parameters should ever be found in the `config.toml`.
A new `ConsensusParams` is optionally included in the `genesis.json` file,
and loaded into the `State`. Any items not included are set to their default value.
A value of 0 is undefined (see ABCI, below). A value of -1 is used to indicate the parameter does not apply.
The parameters are used to determine the validity of a block (and tx) via the union of all relevant parameters.
```
type ConsensusParams struct {
BlockSize
TxSize
BlockGossip
}
type BlockSize struct {
MaxBytes int
MaxTxs int
MaxGas int
}
type TxSize struct {
MaxBytes int
MaxGas int
}
type BlockGossip struct {
BlockPartSizeBytes int
}
```
The `ConsensusParams` can evolve over time by adding new structs that cover different aspects of the consensus rules.
The `BlockPartSizeBytes` and the `BlockSize.MaxBytes` are enforced to be greater than 0.
The former because we need a part size, the latter so that we always have at least some sanity check over the size of blocks.
### ABCI
#### InitChain
InitChain currently takes the initial validator set. It should be extended to also take parts of the ConsensusParams.
There is some case to be made for it to take the entire Genesis, except there may be things in the genesis,
like the BlockPartSize, that the app shouldn't really know about.
#### EndBlock
The EndBlock response includes a `ConsensusParams`, which includes BlockSize and TxSize, but not BlockGossip.
Other param structs can be added to `ConsensusParams` in the future.
The `0` value is used to denote no change.
Any other value will update that parameter in the `State.ConsensusParams`, to be applied for the next block.
Tendermint should have hard-coded upper limits as sanity checks.
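To make the "0 means no change" rule concrete, here is a minimal sketch that merges an EndBlock update into the current parameters, using the struct shapes from the Decision above (the function name is hypothetical, not the actual implementation):

```go
// applyUpdate returns the params for the next block: any field left at 0 in
// upd means "no change", so the current value is kept.
func applyUpdate(cur, upd ConsensusParams) ConsensusParams {
	next := cur
	if upd.BlockSize.MaxBytes != 0 {
		next.BlockSize.MaxBytes = upd.BlockSize.MaxBytes
	}
	if upd.BlockSize.MaxTxs != 0 {
		next.BlockSize.MaxTxs = upd.BlockSize.MaxTxs
	}
	if upd.BlockSize.MaxGas != 0 {
		next.BlockSize.MaxGas = upd.BlockSize.MaxGas
	}
	if upd.TxSize.MaxBytes != 0 {
		next.TxSize.MaxBytes = upd.TxSize.MaxBytes
	}
	if upd.TxSize.MaxGas != 0 {
		next.TxSize.MaxGas = upd.TxSize.MaxGas
	}
	// BlockGossip is not part of the EndBlock response, so it is never updated here.
	return next
}
```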
## Status
Implemented
## Consequences
### Positive
- Alternative capacity limits and consensus parameters can be specified without re-compiling the software.
- They can also change over time under the control of the application
### Negative
- More exposed parameters means more complexity
- Different rules at different heights in the blockchain complicate fast sync
### Neutral
- The TxSize, which checks validity, may be in conflict with the config's `max_block_size_tx`, which determines proposal sizes

View File

@@ -0,0 +1,229 @@
# ADR 006: Trust Metric Design
## Context
The proposed trust metric will allow Tendermint to maintain local trust rankings for peers it has directly interacted with, which can then be used to implement soft security controls. The calculations were obtained from the [TrustGuard](https://dl.acm.org/citation.cfm?id=1060808) project.
### Background
The Tendermint Core project developers would like to improve Tendermint security and reliability by keeping track of the level of trustworthiness peers have demonstrated within the peer-to-peer network. This way, undesirable outcomes from peers will not immediately result in them being dropped from the network (potentially causing drastic changes to take place). Instead, peers' behavior can be monitored with appropriate metrics, and a peer can be removed from the network once Tendermint Core is certain it is a threat. For example, when the PEXReactor requests peers' network addresses from an already known peer, and the returned network addresses are unreachable, this untrustworthy behavior should be tracked. Returning a few bad network addresses probably shouldn't cause a peer to be dropped, while excessive amounts of this behavior do qualify the peer for removal.
Trust metrics can be circumvented by malicious nodes through the use of strategic oscillation techniques, which adapt the malicious node's behavior pattern in order to maximize its goals. For instance, if the malicious node learns that the time interval of the Tendermint trust metric is _X_ hours, then it could wait _X_ hours in-between malicious activities. We could try to combat this issue by increasing the interval length, yet this will make the system less adaptive to recent events.
Instead, having shorter intervals, but keeping a history of interval values, will give our metric the flexibility needed in order to keep the network stable, while also making it resilient against a strategic malicious node in the Tendermint peer-to-peer network. Also, the metric can access trust data over a rather long period of time while not greatly increasing its history size by aggregating older history values over a larger number of intervals, and at the same time, maintain great precision for the recent intervals. This approach is referred to as fading memories, and closely resembles the way human beings remember their experiences. The trade-off to using history data is that the interval values should be preserved in-between executions of the node.
### References
S. Mudhakar, L. Xiong, and L. Liu, “TrustGuard: Countering Vulnerabilities in Reputation Management for Decentralized Overlay Networks,” in _Proceedings of the 14th international conference on World Wide Web, pp. 422-431_, May 2005.
## Decision
The proposed trust metric will allow a developer to inform the trust metric store of all good and bad events relevant to a peer's behavior, and at any time, the metric can be queried for a peer's current trust ranking.
The three subsections below will cover the process being considered for calculating the trust ranking, the concept of the trust metric store, and the interface for the trust metric.
### Proposed Process
The proposed trust metric will count good and bad events relevant to the object, and calculate the percent of counters that are good over an interval with a predefined duration. This is the procedure that will continue for the life of the trust metric. When the trust metric is queried for the current **trust value**, a resilient equation will be utilized to perform the calculation.
The equation being proposed resembles a Proportional-Integral-Derivative (PID) controller used in control systems. The proportional component allows us to be sensitive to the value of the most recent interval, while the integral component allows us to incorporate trust values stored in the history data, and the derivative component allows us to give weight to sudden changes in the behavior of a peer. We compute the trust value of a peer in interval i based on its current trust ranking, its trust rating history prior to interval _i_ (over the past _maxH_ number of intervals) and its trust ranking fluctuation. We will break up the equation into the three components.
```math
(1) Proportional Value = a * R[i]
```
where _R_[*i*] denotes the raw trust value at time interval _i_ (where _i_ == 0 being current time) and _a_ is the weight applied to the contribution of the current reports. The next component of our equation uses a weighted sum over the last _maxH_ intervals to calculate the history value for time _i_:
`H[i] =` ![formula1](img/formula1.png "Weighted Sum Formula")
The weights can be chosen either optimistically or pessimistically. An optimistic weight creates larger weights for newer history data values, while the pessimistic weight creates larger weights for time intervals with lower scores. The default weights used during the calculation of the history value are optimistic and calculated as _Wk_ = 0.8^_k_, for time interval _k_. With the history value available, we can now finish calculating the integral value:
```math
(2) Integral Value = b * H[i]
```
Where _H_[*i*] denotes the history value at time interval _i_ and _b_ is the weight applied to the contribution of past performance for the object being measured. The derivative component will be calculated as follows:
```math
D[i] = R[i] - H[i]
(3) Derivative Value = c(D[i]) * D[i]
```
Where the value of _c_ is selected based on the _D_[*i*] value relative to zero. The default selection process makes _c_ equal to 0 unless _D_[*i*] is a negative value, in which case c is equal to 1. The result is that the maximum penalty is applied when current behavior is lower than previously experienced behavior. If the current behavior is better than the previously experienced behavior, then the Derivative Value has no impact on the trust value. With the three components brought together, our trust value equation is calculated as follows:
```math
TrustValue[i] = a * R[i] + b * H[i] + c(D[i]) * D[i]
```
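A compact sketch of the calculation above, using the optimistic history weights _Wk_ = 0.8^_k_; the proportional/integral weights `a` and `b` are left as parameters (the defaults are not restated here), and normalizing the history weights to sum to 1 is an assumption, since the exact summation formula is given as an image in the original:

```go
// trustValue is a sketch of TrustValue[i] = a*R[i] + b*H[i] + c(D[i])*D[i].
// r is the raw value for the current interval; history holds the most recent
// past interval values first (history[0] is interval i-1).
func trustValue(r float64, history []float64, a, b float64) float64 {
	// History value: weighted sum with optimistic weights Wk = 0.8^k,
	// normalized so the weights sum to 1 (assumption).
	var h, wsum float64
	w := 1.0
	for _, v := range history {
		w *= 0.8
		h += w * v
		wsum += w
	}
	if wsum > 0 {
		h /= wsum
	}

	// Derivative component: penalize only when current behavior is worse
	// than the historical behavior (c = 1 if D < 0, otherwise c = 0).
	d := r - h
	c := 0.0
	if d < 0 {
		c = 1.0
	}

	return a*r + b*h + c*d
}
```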
As a performance optimization that will keep the amount of raw interval data being saved to a reasonable size of _m_, while allowing us to represent 2^_m_ - 1 history intervals, we can employ the fading memories technique that will trade space and time complexity for the precision of the history data values by summarizing larger quantities of less recent values. While our equation above attempts to access up to _maxH_ (which can be 2^_m_ - 1), we will map those requests down to _m_ values using equation 4 below:
```math
(4) j = index, where index > 0
```
Where _j_ is one of the _(0, 1, 2, … , m - 1)_ indices used to access history interval data. Now we can access the raw intervals using the following calculations:
```math
R[0] = raw data for current time interval
```
`R[j] =` ![formula2](img/formula2.png "Fading Memories Formula")
### Trust Metric Store
Similar to the P2P subsystem AddrBook, the trust metric store will maintain information relevant to Tendermint peers. Additionally, the trust metric store will ensure that trust metrics will only be active for peers that a node is currently and directly engaged with.
Reactors will provide a peer key to the trust metric store in order to retrieve the associated trust metric. The trust metric can then record new positive and negative events experienced by the reactor, as well as provide the current trust score calculated by the metric.
When the node is shutting down, the trust metric store will save history data for trust metrics associated with all known peers. This saved information allows experiences with a peer to be preserved across node executions, which can span a tracking window of days or weeks. The trust history data is loaded automatically during OnStart.
### Interface Detailed Design
Each trust metric allows for the recording of positive/negative events, querying the current trust value/score, and the stopping/pausing of tracking over time intervals. This can be seen below:
```go
// TrustMetric - keeps track of peer reliability
type TrustMetric struct {
// Private elements.
}
// Pause tells the metric to pause recording data over time intervals.
// All method calls that indicate events will unpause the metric
func (tm *TrustMetric) Pause() {}
// Stop tells the metric to stop recording data over time intervals
func (tm *TrustMetric) Stop() {}
// BadEvents indicates that an undesirable event(s) took place
func (tm *TrustMetric) BadEvents(num int) {}
// GoodEvents indicates that a desirable event(s) took place
func (tm *TrustMetric) GoodEvents(num int) {}
// TrustValue gets the dependable trust value; always between 0 and 1
func (tm *TrustMetric) TrustValue() float64 {}
// TrustScore gets a score based on the trust value always between 0 and 100
func (tm *TrustMetric) TrustScore() int {}
// NewMetric returns a trust metric with the default configuration
func NewMetric() *TrustMetric {}
//------------------------------------------------------------------------------------------------
// For example
tm := NewMetric()
tm.BadEvents(1)
score := tm.TrustScore()
tm.Stop()
```
Some of the trust metric parameters can be configured. The weight values should probably be left alone in most cases, yet the time durations for the tracking window and individual time interval should be considered.
```go
// TrustMetricConfig - Configures the weight functions and time intervals for the metric
type TrustMetricConfig struct {
// Determines the percentage given to current behavior
ProportionalWeight float64
// Determines the percentage given to prior behavior
IntegralWeight float64
// The window of time that the trust metric will track events across.
// This can be set to cover many days without issue
TrackingWindow time.Duration
// Each interval should be short for adaptability.
// Less than 30 seconds is too sensitive,
// and greater than 5 minutes will make the metric numb
IntervalLength time.Duration
}
// DefaultConfig returns a config with values that have been tested and produce desirable results
func DefaultConfig() TrustMetricConfig {}
// NewMetricWithConfig returns a trust metric with a custom configuration
func NewMetricWithConfig(tmc TrustMetricConfig) *TrustMetric {}
//------------------------------------------------------------------------------------------------
// For example
config := TrustMetricConfig{
TrackingWindow: time.Minute * 60 * 24, // one day
IntervalLength: time.Minute * 2,
}
tm := NewMetricWithConfig(config)
tm.BadEvents(10)
tm.Pause()
tm.GoodEvents(1) // becomes active again
```
A trust metric store should be created with a DB that has persistent storage so it can save history data across node executions. All trust metrics instantiated by the store will be created with the provided TrustMetricConfig configuration.
When you attempt to fetch the trust metric for a peer, and an entry does not exist in the trust metric store, a new metric is automatically created and the entry made within the store.
In addition to the fetching method, GetPeerTrustMetric, the trust metric store provides a method to call when a peer has disconnected from the node. This is so the metric can be paused (history data will not be saved) for periods of time when the node is not having direct experiences with the peer.
```go
// TrustMetricStore - Manages all trust metrics for peers
type TrustMetricStore struct {
cmn.BaseService
// Private elements
}
// OnStart implements Service
func (tms *TrustMetricStore) OnStart() error {}
// OnStop implements Service
func (tms *TrustMetricStore) OnStop() {}
// NewTrustMetricStore returns a store that saves data to the DB
// and uses the config when creating new trust metrics
func NewTrustMetricStore(db dbm.DB, tmc TrustMetricConfig) *TrustMetricStore {}
// Size returns the number of entries in the trust metric store
func (tms *TrustMetricStore) Size() int {}
// GetPeerTrustMetric returns a trust metric by peer key
func (tms *TrustMetricStore) GetPeerTrustMetric(key string) *TrustMetric {}
// PeerDisconnected pauses the trust metric associated with the peer identified by the key
func (tms *TrustMetricStore) PeerDisconnected(key string) {}
//------------------------------------------------------------------------------------------------
// For example
db := dbm.NewDB("trusthistory", "goleveldb", dirPathStr)
tms := NewTrustMetricStore(db, DefaultConfig())
tm := tms.GetPeerTrustMetric(key)
tm.BadEvents(1)
tms.PeerDisconnected(key)
```
## Status
Approved.
## Consequences
### Positive
- The trust metric will allow Tendermint to make non-binary security and reliability decisions
- Will help Tendermint implement deterrents that provide soft security controls, yet avoids disruption on the network
- Will provide useful profiling information when analyzing performance over time related to peer interaction
### Negative
- Requires saving the trust metric history data across node executions
### Neutral
- Keep in mind that good events need to be recorded just as bad events do with this implementation

View File

@@ -0,0 +1,106 @@
# ADR 007: Trust Metric Usage Guide
## Context
Tendermint is required to monitor peer quality in order to inform its peer dialing and peer exchange strategies.
When a node first connects to the network, it is important that it can quickly find good peers.
Thus, while a node has fewer connections, it should prioritize connecting to higher quality peers.
As the node becomes well connected to the rest of the network, it can dial lesser known or lesser
quality peers and help assess their quality. Similarly, when queried for peers, a node should make
sure it doesn't return low-quality peers.
Peer quality can be tracked using a trust metric that flags certain behaviours as good or bad. When enough
bad behaviour accumulates, we can mark the peer as bad and disconnect.
For example, when the PEXReactor makes a request for peers' network addresses from an already known peer, and the returned network addresses are unreachable, this undesirable behavior should be tracked. Returning a few bad network addresses probably shouldn't cause a peer to be dropped, while excessive amounts of this behavior do qualify the peer for removal. The originally proposed approach and design document for the trust metric can be found in the [ADR 006](adr-006-trust-metric.md) document.
The trust metric implementation allows a developer to obtain a peer's trust metric from a trust metric store, and track good and bad events relevant to a peer's behavior, and at any time, the peer's metric can be queried for a current trust value. The current trust value is calculated with a formula that utilizes current behavior, previous behavior, and change between the two. Current behavior is calculated as the percentage of good behavior within a time interval. The time interval is short; probably set between 30 seconds and 5 minutes. On the other hand, the historic data can estimate a peer's behavior over days' worth of tracking. At the end of a time interval, the current behavior becomes part of the historic data, and a new time interval begins with the good and bad counters reset to zero.
These are some important things to keep in mind regarding how the trust metrics handle time intervals and scoring:
- Each new time interval begins with a perfect score
- Bad events quickly bring the score down and good events cause the score to slowly rise
- When the time interval is over, the percentage of good events becomes historic data.
Some useful information about the inner workings of the trust metric:
- When a trust metric is first instantiated, a timer (ticker) periodically fires in order to handle transitions between trust metric time intervals
- If a peer is disconnected from a node, the timer should be paused, since the node is no longer connected to that peer
- The ability to pause the metric is supported with the store **PeerDisconnected** method and the metric **Pause** method
- After a pause, if a good or bad event method is called on a metric, it automatically becomes unpaused and begins a new time interval.
## Decision
The trust metric capability is now available, yet it still leaves the question of how it should be applied throughout Tendermint in order to properly track the quality of peers.
### Proposed Process
Peers are managed using an address book and a trust metric:
- The address book keeps a record of peers and provides selection methods
- The trust metric tracks the quality of the peers
#### Presence in Address Book
Outbound peers are added to the address book before they are dialed,
and inbound peers are added once the peer connection is set up.
Peers are also added to the address book when they are received in response to
a pexRequestMessage.
While a node has less than `needAddressThreshold`, it will periodically request more,
via pexRequestMessage, from randomly selected peers and from newly dialed outbound peers.
When a new address is added to an address book that has more than `0.5*needAddressThreshold` addresses,
then with some low probability, a randomly chosen low quality peer is removed.
#### Outbound Peers
Peers attempt to maintain a minimum number of outbound connections by
repeatedly querying the address book for peers to connect to.
While a node has few to no outbound connections, the address book is biased to return
higher quality peers. As the node increases the number of outbound connections,
the address book is biased to return less-vetted or lower-quality peers.
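As an illustration only (not the actual address book code), the bias toward vetted addresses might shrink as outbound connections accumulate, for example:

```go
// pickBias is a sketch of the selection bias described above: with few
// outbound connections we strongly prefer vetted (higher quality) peers,
// and as we approach the target number of outbound peers we increasingly
// sample less-vetted addresses to help assess them.
func pickBias(numOutbound, targetOutbound int) (vettedPct int) {
	if targetOutbound <= 0 || numOutbound >= targetOutbound {
		return 50 // fully connected: no strong preference either way
	}
	// Linearly scale from 90% vetted (no outbound peers yet) toward 50%.
	return 90 - (40*numOutbound)/targetOutbound
}
```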
#### Inbound Peers
Peers also maintain a maximum number of total connections, MaxNumPeers.
If a peer has MaxNumPeers, new incoming connections will be accepted with low probability.
When such a new connection is accepted, the peer disconnects from a probabilistically chosen low ranking peer
so it does not exceed MaxNumPeers.
#### Peer Exchange
When a peer receives a pexRequestMessage, it returns a random sample of high quality peers from the address book. Peers with no score or a low score should not be included in a response to pexRequestMessage.
#### Peer Quality
Peer quality is tracked in the connection and across the reactors by storing the TrustMetric in the peer's
thread-safe Data store.
Peer behaviour is then defined as one of the following:
- Fatal - something outright malicious that causes us to disconnect the peer and ban it from the address book for some amount of time
- Bad - Any kind of timeout, messages that fail to unmarshal or fail other validity checks, or messages we didn't ask for or aren't expecting (usually worth one bad event)
- Neutral - Unknown channels/message types/version upgrades (no good or bad events recorded)
- Correct - Normal correct behavior (worth one good event)
- Good - some random majority of peers per reactor sending us useful messages (worth more than one good event).
Note that Fatal behaviour causes us to remove the peer, and neutral behaviour does not affect the score; a sketch of how these categories might feed the trust metric follows.
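As a hedged illustration (the event counts are placeholders, and the disconnect/ban handling for Fatal is elided), a reactor might translate these categories into trust metric events like this:

```go
package p2p

type behaviour int

const (
	behaviourFatal behaviour = iota
	behaviourBad
	behaviourNeutral
	behaviourCorrect
	behaviourGood
)

// eventRecorder mirrors the good/bad event methods on the trust metric.
type eventRecorder interface {
	GoodEvents(num int)
	BadEvents(num int)
}

// applyBehaviour feeds an observed behaviour into the peer's trust metric and
// reports whether the peer should be disconnected (and banned) outright.
func applyBehaviour(m eventRecorder, b behaviour) (disconnect bool) {
	switch b {
	case behaviourFatal:
		return true // remove and ban the peer; no score update needed
	case behaviourBad:
		m.BadEvents(1)
	case behaviourCorrect:
		m.GoodEvents(1)
	case behaviourGood:
		m.GoodEvents(2) // "worth more than one good event" - placeholder count
	case behaviourNeutral:
		// no good or bad events recorded
	}
	return false
}
```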
## Status
Proposed.
## Consequences
### Positive
- Bringing the address book and trust metric store together will cause the network to be built in a way that encourages greater security and reliability.
### Negative
- TBD
### Neutral
- Keep in mind that good events need to be recorded just as bad events do when using this implementation.


@@ -0,0 +1,35 @@
# ADR 008: SocketPV
Tendermint nodes should support only two in-process PrivValidator
implementations:
- FilePV uses an unencrypted private key in a "priv_validator.json" file - no
configuration required (just `tendermint init validator`).
- TCPVal and IPCVal use TCP and Unix sockets respectively to send signing requests
to another process - the user is responsible for starting that process themselves.
Both TCPVal and IPCVal addresses can be provided via flags at the command line
or in the configuration file; TCPVal addresses must be of the form
`tcp://<ip_address>:<port>` and IPCVal addresses `unix:///path/to/file.sock` -
doing so will cause Tendermint to ignore any private validator files.
TCPVal will listen on the given address for incoming connections from an external
private validator process. It will halt any operation until at least one external
process has successfully connected.
The external priv_validator process will dial the address to connect to
Tendermint, and then Tendermint will send requests on the ensuing connection to
sign votes and proposals. Thus the external process initiates the connection,
but the Tendermint process makes all requests. In a later stage we're going to
support multiple validators for fault tolerance. To prevent double signing they
need to be synced, which is deferred to an external solution (see #1185).
Conversely, IPCVal will make an outbound connection to an existing socket opened
by the external validator process.
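From the external signer's point of view, the two connection directions can be sketched as follows (addresses and socket paths are examples only; the signing protocol itself is elided):

```go
package signer

import "net"

// dialTendermintTCP is what an external signer does for TCPVal: Tendermint
// listens on the configured address and the signer dials in.
func dialTendermintTCP(addr string) (net.Conn, error) {
	return net.Dial("tcp", addr) // e.g. "127.0.0.1:26659"
}

// listenUnix is what an external signer does for IPCVal: it opens the unix
// socket and waits for Tendermint to make the outbound connection.
func listenUnix(path string) (net.Conn, error) {
	ln, err := net.Listen("unix", path) // e.g. "/tmp/priv_validator.sock"
	if err != nil {
		return nil, err
	}
	defer ln.Close()
	return ln.Accept()
}
```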
In addition, Tendermint will provide implementations that can be run in that
external process. These include:
- FilePV will encrypt the private key, and the user must enter a password to
decrypt the key when the process is started.
- LedgerPV uses a Ledger Nano S to handle all signing.


@@ -0,0 +1,271 @@
# ADR 009: ABCI UX Improvements
## Changelog
23-06-2018: Some minor fixes from review
07-06-2018: Some updates based on discussion with Jae
07-06-2018: Initial draft to match what was released in ABCI v0.11
## Context
The ABCI was first introduced in late 2015. Its purpose is to be:
- a generic interface between state machines and their replication engines
- agnostic to the language the state machine is written in
- agnostic to the replication engine that drives it
This means ABCI should provide an interface for both pluggable applications and
pluggable consensus engines.
To achieve this, it uses Protocol Buffers (proto3) for message types. The dominant
implementation is in Go.
After some recent discussions with the community on github, the following were
identified as pain points:
- Amino encoded types
- Managing validator sets
- Imports in the protobuf file
See the [references](#references) for more.
### Imports
The native proto library in Go generates inflexible and verbose code.
Many in the Go community have adopted a fork called
[gogoproto](https://github.com/gogo/protobuf) that provides a
variety of features aimed to improve the developer experience.
While `gogoproto` is nice, it creates an additional dependency, and compiling
the protobuf types for other languages has been reported to fail when `gogoproto` is used.
### Amino
Amino is an encoding protocol designed to improve over the insufficiencies of protobuf.
Its goal is to be proto4.
Many people are frustrated by incompatibility with protobuf,
and with the requirement for Amino to be used at all within ABCI.
We intend to make Amino successful enough that we can eventually use it for ABCI
message types directly. By then it should be called proto4. In the meantime,
we want it to be easy to use.
### PubKey
PubKeys are encoded using Amino (and before that, go-wire).
Ideally, PubKeys are an interface type where we don't know all the
implementation types, so it's unfitting to use `oneof` or `enum`.
### Addresses
The address for ED25519 pubkey is the RIPEMD160 of the Amino
encoded pubkey. This introduces an Amino dependency in the address generation,
a functionality that is widely required and should be as easy to compute as
possible.
### Validators
To change the validator set, applications can return a list of validator updates
with ResponseEndBlock. In these updates, the public key _must_ be included,
because Tendermint requires the public key to verify validator signatures. This
means ABCI developers have to work with PubKeys. That said, it would also be
convenient to work with address information, and for it to be simple to do so.
### AbsentValidators
Tendermint also provides a list of validators in BeginBlock who did not sign the
last block. This allows applications to reflect availability behaviour in the
application, for instance by punishing validators for not having votes included
in commits.
### InitChain
Tendermint passes in a list of validators here, and nothing else. It would
benefit the application to be able to control the initial validator set. For
instance the genesis file could include application-based information about the
initial validator set that the application could process to determine the
initial validator set. Additionally, InitChain would benefit from getting all
the genesis information.
### Header
ABCI provides the Header in RequestBeginBlock so the application can have
important information about the latest state of the blockchain.
## Decision
### Imports
Move away from gogoproto. In the short term, we will just maintain a second
protobuf file without the gogoproto annotations. In the medium term, we will
make copies of all the structs in Golang and shuttle back and forth. In the long
term, we will use Amino.
### Amino
To simplify ABCI application development in the short term,
Amino will be completely removed from the ABCI:
- It will not be required for PubKey encoding
- It will not be required for computing PubKey addresses
That said, we are working to make Amino a huge success, and to become proto4.
To facilitate adoption and cross-language compatibility in the near-term, Amino
v1 will:
- be fully compatible with the subset of proto3 that excludes `oneof`
- use the Amino prefix system to provide interface types, as opposed to `oneof`
style union types.
That said, an Amino v2 will be worked on to improve the performance of the
format and its usability in cryptographic applications.
### PubKey
Encoding schemes infect software. As a generic middleware, ABCI aims to have
some cross-scheme compatibility. For this it has no choice but to include opaque
bytes from time to time. While we will not enforce Amino encoding for these
bytes yet, we need to provide a type system. The simplest way to do this is to
use a type string.
PubKey will now look like:
```
message PubKey {
  string type
  bytes data
}
```
where `type` can be:
- "ed225519", with `data = <raw 32-byte pubkey>`
- "secp256k1", with `data = <33-byte OpenSSL compressed pubkey>`
As we want to retain flexibility here, and since ideally, PubKey would be an
interface type, we do not use `enum` or `oneof`.
### Addresses
To simplify and improve computing addresses, we change it to the first 20 bytes of the SHA256
of the raw 32-byte public key.
We continue to use the Bitcoin address scheme for secp256k1 keys.
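The ED25519 case is simple enough to show directly; this Go snippet matches the derivation described above:

```go
package address

import "crypto/sha256"

// ed25519Address returns the first 20 bytes of the SHA256 hash of the raw
// 32-byte ED25519 public key.
func ed25519Address(pubKey []byte) []byte {
	h := sha256.Sum256(pubKey)
	return h[:20]
}
```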
### Validators
Add a `bytes address` field:
```
message Validator {
  bytes address
  PubKey pub_key
  int64 power
}
```
### RequestBeginBlock and AbsentValidators
To simplify this, RequestBeginBlock will include the complete validator set,
including the address and voting power of each validator, along
with a boolean for whether or not they voted:
```
message RequestBeginBlock {
  bytes hash
  Header header
  LastCommitInfo last_commit_info
  repeated Evidence byzantine_validators
}

message LastCommitInfo {
  int32 CommitRound
  repeated SigningValidator validators
}

message SigningValidator {
  Validator validator
  bool signed_last_block
}
```
Note that in Validators in RequestBeginBlock, we DO NOT include public keys. Public keys are
larger than addresses and in the future, with quantum computers, will be much
larger. The overhead of passing them, especially during fast-sync, is
significant.
Additionally, addresses are changing to be simpler to compute, further removing
the need to include pubkeys here.
In short, ABCI developers must be aware of both addresses and public keys.
### ResponseEndBlock
Since ResponseEndBlock includes Validator, it must now include their address.
### InitChain
Change RequestInitChain to give the app all the information from the genesis file:
```
message RequestInitChain {
  int64 time
  string chain_id
  ConsensusParams consensus_params
  repeated Validator validators
  bytes app_state_bytes
}
```
Change ResponseInitChain to allow the app to specify the initial validator set
and consensus parameters.
```
message ResponseInitChain {
  ConsensusParams consensus_params
  repeated Validator validators
}
```
### Header
Now that Tendermint Amino will be compatible with proto3, the Header in ABCI
should exactly match the Tendermint header - they will then be encoded
identically in ABCI and in Tendermint Core.
## Status
Implemented
## Consequences
### Positive
- Easier for developers to build on the ABCI
- ABCI and Tendermint headers are identically serialized
### Negative
- Maintenance overhead of alternative type encoding scheme
- Performance overhead of passing all validator info every block (at least it's
only addresses, and not also pubkeys)
- Maintenance overhead of duplicate types
### Neutral
- ABCI developers must know about validator addresses
## References
- [ABCI v0.10.3 Specification (before this
proposal)](https://github.com/tendermint/abci/blob/v0.10.3/specification.rst)
- [ABCI v0.11.0 Specification (implementing first draft of this
proposal)](https://github.com/tendermint/abci/blob/v0.11.0/specification.md)
- [Ed25519 addresses](https://github.com/tendermint/go-crypto/issues/103)
- [InitChain contains the
Genesis](https://github.com/tendermint/abci/issues/216)
- [PubKeys](https://github.com/tendermint/tendermint/issues/1524)
- [Notes on
Header](https://github.com/tendermint/tendermint/issues/1605)
- [Gogoproto issues](https://github.com/tendermint/abci/issues/256)
- [Absent Validators](https://github.com/tendermint/abci/issues/231)


@@ -0,0 +1,77 @@
# ADR 010: Crypto Changes
## Context
Tendermint is a cryptographic protocol that uses and composes a variety of cryptographic primitives.
After nearly 4 years of development, Tendermint has recently undergone multiple security reviews to search for vulnerabilities and to assess the use and composition of cryptographic primitives.
### Hash Functions
Tendermint uses RIPEMD160 universally as a hash function, most notably in its Merkle tree implementation.
RIPEMD160 was chosen because it provides the shortest fingerprint that is long enough to be considered secure (i.e. a birthday bound of 80 bits).
It was also developed in the open academic community, unlike NSA-designed algorithms like SHA256.
That said, the cryptographic community appears to unanimously agree on the security of SHA256. It has become a universal standard, especially now that SHA1 is broken, being required in TLS connections and having optimized support in hardware.
### Merkle Trees
Tendermint uses a simple Merkle tree to compute digests of large structures like transaction batches
and even blockchain headers. The Merkle tree length prefixes byte arrays before concatenating and hashing them.
It uses RIPEMD160.
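To illustrate only the length-prefixing idea (this is not the exact wire encoding or split rule of the real implementation), a binary Merkle root over byte slices might be sketched as:

```go
package merkle

import (
	"encoding/binary"
	"hash"
)

// lengthPrefix prepends a varint length so different splits of the same
// bytes cannot produce the same preimage.
func lengthPrefix(item []byte) []byte {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, uint64(len(item)))
	return append(buf[:n], item...)
}

// simpleRoot computes a binary Merkle root using the supplied hash
// constructor (RIPEMD160 at the time of this ADR). The split point and exact
// prefixing differ from the real tree; this only shows the shape of the scheme.
func simpleRoot(items [][]byte, newHash func() hash.Hash) []byte {
	hashOf := func(b []byte) []byte {
		h := newHash()
		h.Write(b)
		return h.Sum(nil)
	}
	switch len(items) {
	case 0:
		return nil
	case 1:
		return hashOf(lengthPrefix(items[0]))
	default:
		mid := (len(items) + 1) / 2
		left := simpleRoot(items[:mid], newHash)
		right := simpleRoot(items[mid:], newHash)
		return hashOf(append(lengthPrefix(left), lengthPrefix(right)...))
	}
}
```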
### Addresses
ED25519 addresses are computed using the RIPEMD160 of the Amino encoding of the public key.
RIPEMD160 is generally considered an outdated hash function, and is much slower
than more modern functions like SHA256 or Blake2.
### Authenticated Encryption
Tendermint P2P connections use authenticated encryption to provide privacy and authentication in the communications.
This is done using the simple Station-to-Station protocol with the NaCL Ed25519 library.
While there have been no vulnerabilities found in the implementation, there are some concerns:
- NaCL uses Salsa20, a relatively outdated stream cipher that is not widely used and has been obsoleted by ChaCha20
- Connections use RIPEMD160 to compute a value that is used for the encryption nonce with subtle requirements on how it's used
## Decision
### Hash Functions
Use the first 20 bytes of the SHA256 hash instead of RIPEMD160 for everything
### Merkle Trees
TODO
### Addresses
Compute ED25519 addresses as the first 20 bytes of the SHA256 of the raw 32-byte public key
### Authenticated Encryption
Make the following changes (a sketch using these primitives follows the list):
- Use xChaCha20 instead of xSalsa20 - https://github.com/tendermint/tendermint/issues/1124
- Use an HKDF instead of RIPEMD160 to compute nonces - https://github.com/tendermint/tendermint/issues/1165
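A sketch of the direction these two changes point in, using off-the-shelf primitives from `golang.org/x/crypto` (this illustrates HKDF-derived keys plus an XChaCha20-Poly1305 AEAD, not the actual SecretConnection construction):

```go
package p2pcrypto

import (
	"crypto/sha256"
	"io"

	"golang.org/x/crypto/chacha20poly1305"
	"golang.org/x/crypto/hkdf"
)

// sealWithDerivedKey derives a symmetric key from a shared secret with
// HKDF-SHA256 and encrypts the plaintext with XChaCha20-Poly1305. The info
// string is a placeholder; nonce must be chacha20poly1305.NonceSizeX bytes
// and must never be reused under the same key.
func sealWithDerivedKey(sharedSecret, nonce, plaintext []byte) ([]byte, error) {
	key := make([]byte, chacha20poly1305.KeySize)
	kdf := hkdf.New(sha256.New, sharedSecret, nil, []byte("example info"))
	if _, err := io.ReadFull(kdf, key); err != nil {
		return nil, err
	}
	aead, err := chacha20poly1305.NewX(key)
	if err != nil {
		return nil, err
	}
	return aead.Seal(nil, nonce, plaintext, nil), nil
}
```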
## Status
Implemented
## Consequences
### Positive
- More modern and standard cryptographic functions with wider adoption and hardware acceleration
### Negative
- Exact authenticated encryption construction isn't already provided in a well-used library
### Neutral
## References

Some files were not shown because too many files have changed in this diff.