Compare commits

..

28 Commits

Author SHA1 Message Date
William Banfield  1ab868180c  update switch test to use new receive  2022-10-21 14:04:36 -04:00
William Banfield  27a64acc9a  remove all references to NewBroadcast and change to broadcast  2022-10-21 13:54:04 -04:00
William Banfield  34ae6ae9b6  pex remove decode / encode functions  2022-10-21 13:29:30 -04:00
William Banfield  e3e7e9ec9f  pex updated to wrap  2022-10-21 13:27:52 -04:00
William Banfield  5b86c5562a  statesync reactor users wrapper  2022-10-21 12:59:12 -04:00
William Banfield  36decbb4c8  remove superflous extra funcs in mempool  2022-10-21 12:42:51 -04:00
William Banfield  2f7844b0b9  mempool tests pass  2022-10-21 12:41:33 -04:00
William Banfield  4ae93fb379  remove superflous decode in mempool  2022-10-21 12:22:27 -04:00
William Banfield  ca2dcac7f6  remove superflous decode in evidence code  2022-10-21 12:21:03 -04:00
William Banfield  9e1c7bdfb8  remove superflous functions in consensus  2022-10-21 11:58:57 -04:00
William Banfield  4b5373b2a5  remove wrap from blocksync  2022-10-21 11:54:22 -04:00
William Banfield  9cad0c1b33  re-wrap to mimic v0.35  2022-10-21 11:45:59 -04:00
William Banfield  5c173cbb3d  consensus package uses protos directly  2022-10-20 18:21:28 -04:00
William Banfield  9742dac312  wrapper used in send  2022-10-20 17:27:57 -04:00
William Banfield  42fc1fc1cc  pex implements wrap  2022-10-20 17:25:54 -04:00
William Banfield  3c6e81ad15  consensus implements wrap  2022-10-20 17:20:40 -04:00
William Banfield  3ab55dcffc  statesync implements wrapper  2022-10-20 17:15:26 -04:00
William Banfield  11fe2c206d  blocksync implements unwrap  2022-10-20 17:11:23 -04:00
William Banfield  ade3cee699  assert unwrapper on mempool  2022-10-20 16:59:44 -04:00
William Banfield  8015f6e254  use correct type in wrap  2022-10-20 16:59:06 -04:00
William Banfield  24c1f44085  mempool implements wrap / unwrap  2022-10-20 16:51:12 -04:00
William Banfield  27db33dbda  split wrap into wrap / unwrap types  2022-10-20 16:50:57 -04:00
William Banfield  61834bb5fc  remove validate  2022-10-20 16:47:31 -04:00
William Banfield  2b7b2f2012  buildable  2022-10-20 16:47:22 -04:00
William Banfield  9b2c2ee3af  resurrect wrap types  2022-10-20 16:43:37 -04:00
William Banfield  b11060153e  put wrap into receive  2022-10-20 16:37:03 -04:00
William Banfield  a105d2d0ea  add wrapper  2022-10-20 15:19:32 -04:00
William Banfield  4d78096843  p2p: ressurrect the p2p envelope and use to calculate message metric  2022-10-20 14:19:10 -04:00
302 changed files with 4423 additions and 9402 deletions

.github/CODEOWNERS vendored (4 lines changed)

@@ -7,6 +7,6 @@
# global owners are only requested if there isn't a more specific
# codeowner specified below. For this reason, the global codeowners
# are often repeated in package-level definitions.
* @ebuchman @tendermint/tendermint-engineering @adizere @lasarojc
* @ebuchman @tendermint/tendermint-engineering
/spec @ebuchman @tendermint/tendermint-research @tendermint/tendermint-engineering @adizere @lasarojc
/spec @ebuchman @tendermint/tendermint-research @tendermint/tendermint-engineering

View File

@@ -30,6 +30,12 @@ updates:
- T:dependencies
- S:automerge
- package-ecosystem: npm
directory: "/docs"
schedule:
interval: weekly
open-pull-requests-limit: 10
###################################
##
## Update All Go Dependencies
@@ -47,10 +53,10 @@ updates:
- package-ecosystem: gomod
directory: "/"
schedule:
interval: daily
interval: weekly
target-branch: "v0.37.x"
# Only allow automated security-related dependency updates on release
# branches.
# Only allow automated security-related dependency updates until we cut the
# final v0.37.0 release.
open-pull-requests-limit: 0
labels:
- T:dependencies
@@ -59,11 +65,9 @@ updates:
- package-ecosystem: gomod
directory: "/"
schedule:
interval: daily
interval: weekly
target-branch: "v0.34.x"
# Only allow automated security-related dependency updates on release
# branches.
open-pull-requests-limit: 0
open-pull-requests-limit: 10
labels:
- T:dependencies
- S:automerge

View File

@@ -33,7 +33,7 @@ jobs:
if [[ $VERSION =~ ^v[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
TAGS="$TAGS,${DOCKER_IMAGE}:${VERSION}"
fi
echo "tags=${TAGS}" >> $GITHUB_OUTPUT
echo ::set-output name=tags::${TAGS}
- name: Set up QEMU
uses: docker/setup-qemu-action@master
@@ -41,17 +41,17 @@ jobs:
platforms: all
- name: Set up Docker Build
uses: docker/setup-buildx-action@v2.2.1
uses: docker/setup-buildx-action@v2.0.0
- name: Login to DockerHub
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@v2.1.0
uses: docker/login-action@v2.0.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Publish to Docker Hub
uses: docker/build-push-action@v3.2.0
uses: docker/build-push-action@v3.1.1
with:
context: .
file: ./DOCKER/Dockerfile

View File

@@ -35,8 +35,6 @@ jobs:
# versions will be available for the build.
fetch-depth: 0
- name: Build documentation
env:
NODE_OPTIONS: "--openssl-legacy-provider"
run: |
git config --global --add safe.directory "$PWD"
make build-docs

View File

@@ -1,38 +0,0 @@
# Manually run the nightly E2E tests for a particular branch, but test with
# multiple versions.
name: e2e-manual-multiversion
on:
workflow_dispatch:
jobs:
e2e-manual-multiversion-test:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
group: ['00', '01', '02', '03', '04']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v3
with:
go-version: '1.18'
- uses: actions/checkout@v3
- name: Build
working-directory: test/e2e
# Run make jobs in parallel, since we can't run steps in parallel.
run: make -j2 docker generator runner tests
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
# Generate multi-version tests with double the quantity of E2E nodes
# based on the current branch as compared to the latest version.
run: ./build/generator -g 5 -m "latest:1,local:2" -d networks/nightly/
- name: Run ${{ matrix.p2p }} p2p testnets
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml

View File

@@ -11,7 +11,7 @@ jobs:
strategy:
fail-fast: false
matrix:
group: ['00', '01', '02', '03', '04']
group: ['00', '01', '02', '03']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:

View File

@@ -30,7 +30,8 @@ jobs:
- name: Capture git repo info
id: git-info
run: |
echo "branch=`git branch --show-current`" >> $GITHUB_OUTPUT
echo "::set-output name=branch::`git branch --show-current`"
echo "::set-output name=commit::`git rev-parse HEAD`"
- name: Build
working-directory: test/e2e
@@ -48,6 +49,7 @@ jobs:
outputs:
git-branch: ${{ steps.git-info.outputs.branch }}
git-commit: ${{ steps.git-info.outputs.commit }}
e2e-nightly-fail:
needs: e2e-nightly-test
@@ -55,13 +57,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: slackapi/slack-github-action@v1.23.0
uses: slackapi/slack-github-action@v1.22.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
BRANCH: ${{ needs.e2e-nightly-test.outputs.git-branch }}
RUN_URL: "${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
COMMITS_URL: "${{ github.server_url }}/${{ github.repository }}/commits/${{ needs.e2e-nightly-test.outputs.git-branch }}"
COMMIT_URL: "${{ github.server_url }}/${{ github.repository }}/commit/${{ needs.e2e-nightly-test.outputs.git-commit }}"
with:
payload: |
{
@@ -70,7 +72,7 @@ jobs:
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMITS_URL }}|latest commits> possibly related to the failure."
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMIT_URL }}|commit> that caused the failure."
}
}
]

View File

@@ -30,7 +30,8 @@ jobs:
- name: Capture git repo info
id: git-info
run: |
echo "branch=`git branch --show-current`" >> $GITHUB_OUTPUT
echo "::set-output name=branch::`git branch --show-current`"
echo "::set-output name=commit::`git rev-parse HEAD`"
- name: Build
working-directory: test/e2e
@@ -48,6 +49,7 @@ jobs:
outputs:
git-branch: ${{ steps.git-info.outputs.branch }}
git-commit: ${{ steps.git-info.outputs.commit }}
e2e-nightly-fail:
needs: e2e-nightly-test
@@ -55,13 +57,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: slackapi/slack-github-action@v1.23.0
uses: slackapi/slack-github-action@v1.22.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
BRANCH: ${{ needs.e2e-nightly-test.outputs.git-branch }}
RUN_URL: "${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
COMMITS_URL: "${{ github.server_url }}/${{ github.repository }}/commits/${{ needs.e2e-nightly-test.outputs.git-branch }}"
COMMIT_URL: "${{ github.server_url }}/${{ github.repository }}/commit/${{ needs.e2e-nightly-test.outputs.git-commit }}"
with:
payload: |
{
@@ -70,7 +72,7 @@ jobs:
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMITS_URL }}|latest commits> possibly related to the failure."
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMIT_URL }}|commit> that caused the failure."
}
}
]

View File

@@ -46,13 +46,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: slackapi/slack-github-action@v1.23.0
uses: slackapi/slack-github-action@v1.22.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
BRANCH: ${{ github.ref_name }}
RUN_URL: "${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
COMMITS_URL: "${{ github.server_url }}/${{ github.repository }}/commits/${{ github.ref_name }}"
COMMIT_URL: "${{ github.server_url }}/${{ github.repository }}/commit/${{ github.sha }}"
with:
payload: |
{
@@ -61,7 +61,7 @@ jobs:
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMITS_URL }}|latest commits> possibly related to the failure."
"text": ":skull: Nightly E2E tests for `${{ env.BRANCH }}` failed. See the <${{ env.RUN_URL }}|run details> and the <${{ env.COMMIT_URL }}|commit> that caused the failure."
}
}
]

View File

@@ -64,7 +64,7 @@ jobs:
- name: Set crashers count
working-directory: test/fuzz
run: echo "count=$(find . -type d -name 'crashers' | xargs -I % sh -c 'ls % | wc -l' | awk '{total += $1} END {print total}')" >> $GITHUB_OUTPUT
run: echo "::set-output name=count::$(find . -type d -name 'crashers' | xargs -I % sh -c 'ls % | wc -l' | awk '{total += $1} END {print total}')"
id: set-crashers-count
outputs:
@@ -76,7 +76,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: slackapi/slack-github-action@v1.23.0
uses: slackapi/slack-github-action@v1.22.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK

.github/workflows/gosec.yml vendored (new file, 41 lines)

@@ -0,0 +1,41 @@
name: Run Gosec
on:
pull_request:
paths:
- '**/*.go'
- 'go.mod'
- 'go.sum'
push:
branches:
- main
- 'feature/*'
- 'v0.37.x'
- 'v0.34.x'
paths:
- '**/*.go'
- 'go.mod'
- 'go.sum'
jobs:
Gosec:
permissions:
security-events: write
runs-on: ubuntu-latest
env:
GO111MODULE: on
steps:
- name: Checkout Source
uses: actions/checkout@v3
- name: Run Gosec Security Scanner
uses: cosmos/gosec@master
with:
# Let the report trigger a failure with the Github Security scanner features.
args: "-no-fail -fmt sarif -out results.sarif ./..."
- name: Upload SARIF file
uses: github/codeql-action/upload-sarif@v2
with:
# Path to SARIF file relative to the root of the repository
sarif_file: results.sarif

View File

@@ -1,31 +0,0 @@
name: Check for Go vulnerabilities
# Runs https://pkg.go.dev/golang.org/x/vuln/cmd/govulncheck to proactively
# check for vulnerabilities in code packages if there were any changes made to
# any Go code or dependencies.
#
# Run `make vulncheck` from the root of the repo to run this workflow locally.
on:
pull_request:
push:
branches:
- main
- release/**
jobs:
govulncheck:
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v3
with:
go-version: "1.18"
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
PATTERNS: |
**/*.go
go.mod
go.sum
Makefile
- name: govulncheck
run: make vulncheck
if: "env.GIT_DIFF != ''"

View File

@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 3
steps:
- uses: styfle/cancel-workflow-action@0.11.0
- uses: styfle/cancel-workflow-action@0.10.1
with:
workflow_id: 1041851,1401230,2837803
access_token: ${{ github.token }}

View File

@@ -31,7 +31,10 @@ jobs:
go.sum
- uses: golangci/golangci-lint-action@v3
with:
version: v1.50.1
# Required: the version of golangci-lint is required and
# must be specified without patch version: we always use the
# latest patch version.
version: v1.47.3
args: --timeout 10m
github-token: ${{ secrets.github_token }}
if: env.GIT_DIFF

View File

@@ -1,20 +1,23 @@
name: Check Markdown links
on:
schedule:
# 2am UTC daily
- cron: '0 2 * * *'
push:
branches:
- main
pull_request:
branches: [main]
jobs:
markdown-link-check:
strategy:
matrix:
branch: ['main', 'v0.37.x', 'v0.34.x']
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
with:
ref: ${{ matrix.branch }}
- uses: informalsystems/github-action-markdown-link-check@main
PATTERNS: |
**/**.md
- uses: creachadair/github-action-markdown-link-check@master
with:
check-modified-files-only: 'yes'
config-file: '.md-link-check.json'
if: env.GIT_DIFF

View File

@@ -8,7 +8,7 @@ on:
- "v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+" # e.g. v0.37.0-rc1, v0.38.0-rc10
jobs:
prerelease:
goreleaser:
runs-on: ubuntu-latest
steps:
- name: Checkout
@@ -21,7 +21,7 @@ jobs:
go-version: '1.18'
- name: Build
uses: goreleaser/goreleaser-action@v4
uses: goreleaser/goreleaser-action@v3
if: ${{ github.event_name == 'pull_request' }}
with:
version: latest
@@ -31,35 +31,10 @@ jobs:
- run: echo https://github.com/tendermint/tendermint/blob/${GITHUB_REF#refs/tags/}/CHANGELOG_PENDING.md > ../release_notes.md
- name: Release
uses: goreleaser/goreleaser-action@v4
uses: goreleaser/goreleaser-action@v3
if: startsWith(github.ref, 'refs/tags/')
with:
version: latest
args: release --rm-dist --release-notes=../release_notes.md
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
prerelease-success:
needs: prerelease
if: ${{ success() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack upon pre-release
uses: slackapi/slack-github-action@v1.23.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
RELEASE_URL: "${{ github.server_url }}/${{ github.repository }}/releases/tag/${{ github.ref_name }}"
with:
payload: |
{
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":sparkles: New Tendermint pre-release: <${{ env.RELEASE_URL }}|${{ github.ref_name }}>"
}
}
]
}

View File

@@ -15,7 +15,7 @@ jobs:
timeout-minutes: 5
steps:
- uses: actions/checkout@v3
- uses: bufbuild/buf-setup-action@v1.10.0
- uses: bufbuild/buf-setup-action@v1.8.0
- uses: bufbuild/buf-lint-action@v1
with:
input: 'proto'

View File

@@ -6,7 +6,7 @@ on:
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
jobs:
release:
goreleaser:
runs-on: ubuntu-latest
steps:
- name: Checkout
@@ -19,7 +19,7 @@ jobs:
go-version: '1.18'
- name: Build
uses: goreleaser/goreleaser-action@v4
uses: goreleaser/goreleaser-action@v3
if: ${{ github.event_name == 'pull_request' }}
with:
version: latest
@@ -28,35 +28,10 @@ jobs:
- run: echo https://github.com/tendermint/tendermint/blob/${GITHUB_REF#refs/tags/}/CHANGELOG.md#${GITHUB_REF#refs/tags/} > ../release_notes.md
- name: Release
uses: goreleaser/goreleaser-action@v4
uses: goreleaser/goreleaser-action@v3
if: startsWith(github.ref, 'refs/tags/')
with:
version: latest
args: release --rm-dist --release-notes=../release_notes.md
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
release-success:
needs: release
if: ${{ success() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack upon release
uses: slackapi/slack-github-action@v1.23.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
RELEASE_URL: "${{ github.server_url }}/${{ github.repository }}/releases/tag/${{ github.ref_name }}"
with:
payload: |
{
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": ":rocket: New Tendermint release: <${{ env.RELEASE_URL }}|${{ github.ref_name }}>"
}
}
]
}

View File

@@ -1,60 +0,0 @@
name: Docker E2E Node
# Build & Push rebuilds the e2e Testapp docker image on every push to main and creation of tags
# and pushes the image to https://hub.docker.com/r/tendermint/e2e-node
on:
push:
branches:
- main
tags:
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
- "v[0-9]+.[0-9]+.[0-9]+-alpha.[0-9]+" # e.g. v0.37.0-alpha.1, v0.38.0-alpha.10
- "v[0-9]+.[0-9]+.[0-9]+-beta.[0-9]+" # e.g. v0.37.0-beta.1, v0.38.0-beta.10
- "v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+" # e.g. v0.37.0-rc1, v0.38.0-rc10
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Prepare
id: prep
run: |
DOCKER_IMAGE=tendermint/e2e-node
VERSION=noop
if [[ $GITHUB_REF == refs/tags/* ]]; then
VERSION=${GITHUB_REF#refs/tags/}
elif [[ $GITHUB_REF == refs/heads/* ]]; then
VERSION=$(echo ${GITHUB_REF#refs/heads/} | sed -r 's#/+#-#g')
if [ "${{ github.event.repository.default_branch }}" = "$VERSION" ]; then
VERSION=latest
fi
fi
TAGS="${DOCKER_IMAGE}:${VERSION}"
if [[ $VERSION =~ ^v[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
TAGS="$TAGS,${DOCKER_IMAGE}:${VERSION}"
fi
echo "tags=${TAGS}" >> $GITHUB_OUTPUT
- name: Set up QEMU
uses: docker/setup-qemu-action@master
with:
platforms: all
- name: Set up Docker Build
uses: docker/setup-buildx-action@v2.2.1
- name: Login to DockerHub
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@v2.1.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Publish to Docker Hub
uses: docker/build-push-action@v3.2.0
with:
context: .
file: ./test/e2e/docker/Dockerfile
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'beep_boop' }}
tags: ${{ steps.prep.outputs.tags }}

.gitignore vendored (2 lines changed)

@@ -55,5 +55,3 @@ proto/spec/**/*.pb.go
*.pdf
*.gz
*.dvi
# Python virtual environments
.venv

View File

@@ -2,6 +2,7 @@ linters:
enable:
- asciicheck
- bodyclose
- deadcode
- depguard
- dogsled
- dupl
@@ -25,6 +26,7 @@ linters:
- typecheck
- unconvert
- unused
- varcheck
issues:
exclude-rules:

View File

@@ -2,16 +2,5 @@
"retryOn429": true,
"retryCount": 5,
"fallbackRetryDelay": "30s",
"aliveStatusCodes": [200, 206, 503],
"httpHeaders": [
{
"urls": [
"https://docs.github.com/",
"https://help.github.com/"
],
"headers": {
"Accept-Encoding": "zstd, br, gzip, deflate"
}
}
]
"aliveStatusCodes": [200, 206, 503]
}

View File

@@ -2,94 +2,6 @@
Friendly reminder, we have a [bug bounty program](https://hackerone.com/cosmos).
## v0.34.24
*Nov 22, 2022*
Apart from one minor bug fix, this release aims to optimize the output of the
RPC (both HTTP and WebSocket endpoints). See our [upgrading
guidelines](./UPGRADING.md#v03424) for more details.
### IMPROVEMENTS
- `[rpc]` [\#9724](https://github.com/tendermint/tendermint/issues/9724) Remove
useless whitespace in RPC output (@adizere, @thanethomson)
### BUG FIXES
- `[rpc]` [\#9692](https://github.com/tendermint/tendermint/issues/9692) Remove
`Cache-Control` header response from `/check_tx` endpoint (@JayT106)
## v0.34.23
*Nov 9, 2022*
This release introduces some new Prometheus metrics to help in determining what
kinds of messages are consuming the most P2P bandwidth. This builds towards our
broader goal of optimizing Tendermint bandwidth consumption, and will give us
meaningful insights once we can establish these metrics for a number of chains.
We now also return `Cache-Control` headers for select RPC endpoints to help
facilitate caching.
Special thanks to external contributors on this release: @JayT106
### IMPROVEMENTS
- `[p2p]` [\#9641](https://github.com/tendermint/tendermint/issues/9641) Add new
Envelope type and associated methods for sending and receiving Envelopes
instead of raw bytes. This also adds new metrics,
`tendermint_p2p_message_send_bytes_total` and
`tendermint_p2p_message_receive_bytes_total`, that expose how many bytes of
each message type have been sent.
- `[rpc]` [\#9666](https://github.com/tendermint/tendermint/issues/9666) Enable
caching of RPC responses (@JayT106)
The following RPC endpoints will return `Cache-Control` headers with a maximum
age of 1 day:
- `/abci_info`
- `/block`, if `height` is supplied
- `/block_by_hash`
- `/block_results`, if `height` is supplied
- `/blockchain`
- `/check_tx`
- `/commit`, if `height` is supplied
- `/consensus_params`, if `height` is supplied
- `/genesis`
- `/genesis_chunked`
- `/tx`
- `/validators`, if `height` is supplied
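
This Envelope entry is the change the 28 commits above implement. A hedged sketch of the send path, mirroring the `BroadcastStatusRequest` and `Receive(e p2p.Envelope)` hunks that appear later in this compare (import paths and the channel value are assumptions):

```go
// Hedged sketch, not the exact upstream code: import paths and the channel
// value are assumptions; the Envelope fields mirror hunks later in this diff.
package p2pexample

import (
	"github.com/tendermint/tendermint/p2p"
	bcproto "github.com/tendermint/tendermint/proto/tendermint/blocksync"
)

func broadcastStatus(sw *p2p.Switch) {
	// Wrap the proto message in an Envelope instead of hand-encoding bytes.
	sw.Broadcast(p2p.Envelope{
		ChannelID: byte(0x40), // e.g. BlocksyncChannel
		Message:   &bcproto.StatusRequest{},
	})
	// Each send/receive also increments the new per-message-type counters,
	// tendermint_p2p_message_send_bytes_total and
	// tendermint_p2p_message_receive_bytes_total.
}
```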
## v0.34.22
This release includes several bug fixes, [one of
which](https://github.com/tendermint/tendermint/pull/9518) we discovered while
building up a baseline for v0.34 against which to compare our upcoming v0.37
release during our [QA process](./docs/qa/).
Special thanks to external contributors on this release: @RiccardoM
### FEATURES
- [rpc] [\#9423](https://github.com/tendermint/tendermint/pull/9423) Support
HTTPS URLs from the WebSocket client (@RiccardoM, @cmwaters)
### BUG FIXES
- [config] [\#9483](https://github.com/tendermint/tendermint/issues/9483)
Calling `tendermint init` would incorrectly leave out the new `[storage]`
section delimiter in the generated configuration file - this has now been
fixed
- [p2p] [\#9500](https://github.com/tendermint/tendermint/issues/9500) Prevent
peers who have errored being added to the peer set (@jmalicevic)
- [indexer] [\#9473](https://github.com/tendermint/tendermint/issues/9473) Fix
bug that caused the psql indexer to index empty blocks whenever one of the
transactions returned a non zero code. The relevant deduplication logic has
been moved within the kv indexer only (@cmwaters)
- [blocksync] [\#9518](https://github.com/tendermint/tendermint/issues/9518) A
block sync stall was observed during our QA process whereby the node was
unable to make progress. Retrying block requests after a timeout fixes this.
## v0.34.21
Release highlights include:

View File

@@ -11,41 +11,26 @@
- P2P Protocol
- Go API
- [p2p] \#9625 Remove unused p2p/trust package (@cmwaters)
- [rpc] \#9655 Remove global environment and replace with constructor. (@williambanfield,@tychoish)
- [node] \#9655 Move DBContext and DBProvider from the node package to the config package. (@williambanfield,@tychoish)
- Blockchain Protocol
- Data Storage
- [state] \#6541 Move pruneBlocks from consensus/state to state/execution. (@JayT106)
- [state] \#6541 Move pruneBlocks from consensus/state to state/execution. (@JayT106)
- Tooling
- [tools/tm-signer-harness] \#6498 Set OS home dir to instead of the hardcoded PATH. (@JayT106)
- [metrics] \#9682 move state-syncing and block-syncing metrics to their respective packages (@cmwaters)
labels have moved from block_syncing -> blocksync_syncing and state_syncing -> statesync_syncing
- [inspect] \#9655 Add a new `inspect` command for introspecting the state and block store of a crashed tendermint node. (@williambanfield)
- [tools/tm-signer-harness] \#6498 Set OS home dir to instead of the hardcoded PATH. (@JayT106)
### FEATURES
- [proxy] \#9830 Introduce `NewUnsyncLocalClientCreator`, which allows local
ABCI clients to have the same concurrency model as remote clients (i.e. one
mutex per client "connection", for each of the four ABCI "connections").
- [config] \#9680 Introduce `BootstrapPeers` to the config to allow nodes to list peers to be added to
the addressbook upon start up (@cmwaters)
### IMPROVEMENTS
- [pubsub] \#7319 Performance improvements for the event query API (@creachadair)
- [p2p/pex] \#6509 Improve addrBook.hash performance (@cuonglm)
- [crypto/merkle] \#6443 & \#6513 Improve HashAlternatives performance (@cuonglm, @marbar3778)
- [rpc] \#9650 Enable caching of RPC responses (@JayT106)
- [consensus] \#9760 Save peer LastCommit correctly to achieve 50% reduction in gossiped precommits. (@williambanfield)
### BUG FIXES
- [docker] \#9462 ensure Docker image uses consistent version of Go
- [abci-cli] \#9717 fix broken abci-cli help command
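
The `proxy` FEATURES entry above introduces `NewUnsyncLocalClientCreator`. A minimal wiring sketch, assuming its signature mirrors the existing `proxy.NewLocalClientCreator(app)`:

```go
package main

import (
	"github.com/tendermint/tendermint/abci/example/kvstore"
	"github.com/tendermint/tendermint/proxy"
)

func main() {
	app := kvstore.NewApplication()
	// Assumption: NewUnsyncLocalClientCreator mirrors NewLocalClientCreator's
	// signature. Each of the four ABCI "connections" (consensus, mempool,
	// query, snapshot) then gets its own mutex instead of one shared lock,
	// matching the concurrency model of remote clients.
	cc := proxy.NewUnsyncLocalClientCreator(app)
	_ = cc
}
```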
## v0.37.0
@@ -56,24 +41,24 @@ Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermi
### BREAKING CHANGES
- CLI/RPC/Config
- [config] \#9259 Rename the fastsync section and the fast_sync key blocksync and block_sync respectively
- [config] \#9259 Rename the fastsync section and the fast_sync key blocksync and block_sync respectively
- Apps
- [abci/counter] \#6684 Delete counter example app
- [abci] \#5783 Make length delimiter encoding consistent (`uint64`) between ABCI and P2P wire-level protocols
- [abci] \#9145 Removes unused Response/Request `SetOption` from ABCI (@samricotta)
- [abci/params] \#9287 Deduplicate `ConsensusParams` and `BlockParams` so only `types` proto definitions are used (@cmwaters)
- Remove `TimeIotaMs` and use a hard-coded 1 millisecond value to ensure monotonically increasing block times.
- Rename `AppVersion` to `App` so as to not stutter.
- [types] \#9287 Reduce the use of protobuf types in core logic. (@cmwaters)
- `ConsensusParams`, `BlockParams`, `ValidatorParams`, `EvidenceParams`, `VersionParams` have become native types.
- [abci/counter] \#6684 Delete counter example app
- [abci] \#5783 Make length delimiter encoding consistent (`uint64`) between ABCI and P2P wire-level protocols
- [abci] \#9145 Removes unused Response/Request `SetOption` from ABCI (@samricotta)
- [abci/params] \#9287 Deduplicate `ConsensusParams` and `BlockParams` so only `types` proto definitions are used (@cmwaters)
- Remove `TimeIotaMs` and use a hard-coded 1 millisecond value to ensure monotonically increasing block times.
- Rename `AppVersion` to `App` so as to not stutter.
- [types] \#9287 Reduce the use of protobuf types in core logic. (@cmwaters)
- `ConsensusParams`, `BlockParams`, `ValidatorParams`, `EvidenceParams`, `VersionParams` have become native types.
They still utilize protobuf when being sent over the wire or written to disk.
- Moved `ValidateConsensusParams` inside (now native type) `ConsensusParams`, and renamed it to `ValidateBasic`.
- [abci] \#9301 New ABCI methods `PrepareProposal` and `ProcessProposal` which give the app control over transactions proposed and allows for verification of proposed blocks.
- [abci] \#8216 Renamed `EvidenceType` to `MisbehaviorType` and `Evidence` to `Misbehavior` as a more accurate label of their contents. (@williambanfield, @sergio-mena)
- [abci] \#9122 Renamed `LastCommitInfo` to `CommitInfo` in preparation for vote extensions. (@cmwaters)
- [abci] \#8656, \#8901 Added cli commands for `PrepareProposal` and `ProcessProposal`. (@jmalicevic, @hvanz)
- [abci] \#6403 Change the `key` and `value` fields from `[]byte` to `string` in the `EventAttribute` type. (@alexanderbez)
- Moved `ValidateConsensusParams` inside (now native type) `ConsensusParams`, and renamed it to `ValidateBasic`.
- [abci] \#9301 New ABCI methods `PrepareProposal` and `ProcessProposal` which give the app control over transactions proposed and allows for verification of proposed blocks.
- [abci] \#8216 Renamed `EvidenceType` to `MisbehaviorType` and `Evidence` to `Misbehavior` as a more accurate label of their contents. (@williambanfield, @sergio-mena)
- [abci] \#9122 Renamed `LastCommitInfo` to `CommitInfo` in preparation for vote extensions. (@cmwaters)
- [abci] \#8656, \#8901 Added cli commands for `PrepareProposal` and `ProcessProposal`. (@jmalicevic, @hvanz)
- [abci] \#6403 Change the `key` and `value` fields from `[]byte` to `string` in the `EventAttribute` type. (@alexanderbez)
- P2P Protocol
@@ -89,7 +74,6 @@ Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermi
- [abci] \#9301 New ABCI methods `PrepareProposal` and `ProcessProposal` which give the app control over transactions proposed and allows for verification of proposed blocks.
### IMPROVEMENTS
- [crypto] \#9250 Update to use btcec v2 and the latest btcutil. (@wcsiu)
- [cli] \#9171 add `--hard` flag to rollback command (and a boolean to the `RollbackState` method). This will rollback
@@ -112,4 +96,4 @@ Friendly reminder, we have a [bug bounty program](https://hackerone.com/tendermi
- [consensus] \#9229 fix round number of `enterPropose` when handling `RoundStepNewRound` timeout. (@fatcat22)
- [docker] \#9073 enable cross platform build using docker buildx
- [blocksync] \#9518 handle the case when the sending queue is full: retry block request after a timeout
- [blocksync] \#9518 handle the case when the sending queue is full: retry block request after a timeout
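
One of the Apps entries above (\#6403) changes the `EventAttribute` `key` and `value` fields from `[]byte` to `string`. A small sketch of the resulting call-site shape (the `Index` field is assumed from the existing type):

```go
package events

import abci "github.com/tendermint/tendermint/abci/types"

func transferEvent() abci.Event {
	return abci.Event{
		Type: "transfer",
		Attributes: []abci.EventAttribute{
			// Before #6403 these fields were []byte("sender") / []byte("addr1");
			// Index is assumed from the existing type definition.
			{Key: "sender", Value: "addr1", Index: true},
		},
	}
}
```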

View File

@@ -271,13 +271,9 @@ format:
lint:
@echo "--> Running linter"
@go run github.com/golangci/golangci-lint/cmd/golangci-lint run
@golangci-lint run
.PHONY: lint
vulncheck:
@go run golang.org/x/vuln/cmd/govulncheck@latest ./...
.PHONY: vulncheck
DESTINATION = ./index.html.md
###############################################################################

View File

@@ -113,15 +113,10 @@ For more information on upgrading, see [UPGRADING.md](./UPGRADING.md).
### Supported Versions
Because we are a small core team, we have limited capacity to ship patch
updates, including security updates. Consequently, we strongly recommend keeping
Tendermint up-to-date. Upgrading instructions can be found in
[UPGRADING.md](./UPGRADING.md).
Currently supported versions include:
- v0.34.x
- v0.37.x (release candidate)
Because we are a small core team, we only ship patch updates, including security
updates, to the most recent minor release and the second-most recent minor
release. Consequently, we strongly recommend keeping Tendermint up-to-date.
Upgrading instructions can be found in [UPGRADING.md](./UPGRADING.md).
## Resources
@@ -148,7 +143,7 @@ Currently supported versions include:
- [Tendermint Core Blog](https://medium.com/tendermint/tagged/tendermint-core)
- [Cosmos Blog](https://blog.cosmos.network/tendermint/home)
## Join us
## Join us!
Tendermint Core is maintained by [Interchain GmbH](https://interchain.io).
If you'd like to work full-time on Tendermint Core,

View File

@@ -50,7 +50,6 @@ the 0.38.x line.
in order to do this).
3. Create and push the backport branch:
```sh
git checkout -b v0.38.x
git push origin v0.38.x
@@ -82,7 +81,6 @@ the 0.38.x line.
* `docs.tendermint.com/main` -> `docs.tendermint.com/v0.38`
Once you have updated all of the relevant documentation:
```sh
# Create and push the PR.
git checkout -b update-docs-v038x
@@ -115,7 +113,7 @@ create an alpha or beta version, or release candidate (RC) for our friends and
partners to test out. We use git tags to create pre-releases, and we build them
off of backport branches, for example:
* `v0.38.0-alpha.1` - The first alpha release of `v0.38.0`. Subsequent alpha
- `v0.38.0-alpha.1` - The first alpha release of `v0.38.0`. Subsequent alpha
releases will be numbered `v0.38.0-alpha.2`, `v0.38.0-alpha.3`, etc.
Alpha releases are to be considered the _most_ unstable of pre-releases, and
@@ -123,14 +121,14 @@ off of backport branches, for example:
adopters to start integrating and testing new functionality before we're done
with QA.
* `v0.38.0-beta.1` - The first beta release of `v0.38.0`. Subsequent beta
- `v0.38.0-beta.1` - The first beta release of `v0.38.0`. Subsequent beta
releases will be numbered `v0.38.0-beta.2`, `v0.38.0-beta.3`, etc.
Beta releases can be considered more stable than alpha releases in that we
will have QA'd them better than alpha releases, but there still may be
minor breaking API changes if users have strong demands for such changes.
* `v0.38.0-rc1` - The first release candidate (RC) of `v0.38.0`. Subsequent RCs
- `v0.38.0-rc1` - The first release candidate (RC) of `v0.38.0`. Subsequent RCs
will be numbered `v0.38.0-rc2`, `v0.38.0-rc3`, etc.
RCs are considered more stable than beta releases in that we will have
@@ -148,18 +146,18 @@ backport branch (see above). Otherwise:
1. Start from the backport branch (e.g. `v0.38.x`).
2. Run the integration tests and the E2E nightlies
(which can be triggered from the GitHub UI;
e.g., <https://github.com/tendermint/tendermint/actions/workflows/e2e-nightly-37x.yml>).
e.g., https://github.com/tendermint/tendermint/actions/workflows/e2e-nightly-37x.yml).
3. Prepare the pre-release documentation:
* Ensure that all relevant changes are in the `CHANGELOG_PENDING.md` file.
- Ensure that all relevant changes are in the `CHANGELOG_PENDING.md` file.
This file's contents must only be included in the `CHANGELOG.md` when we
cut final releases.
* Ensure that `UPGRADING.md` is up-to-date and includes notes on any breaking changes
- Ensure that `UPGRADING.md` is up-to-date and includes notes on any breaking changes
or other upgrading flows.
4. Prepare the versioning:
* Bump TMVersionDefault version in `version.go`
* Bump P2P and block protocol versions in `version.go`, if necessary.
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary.
Check the changelog for breaking changes in these components.
* Bump ABCI protocol version in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
5. Open a PR with these changes against the backport branch.
6. Once these changes have landed on the backport branch, be sure to pull them back down locally.
7. Once you have the changes locally, create the new tag, specifying a name and a tag "message":
@@ -181,33 +179,33 @@ Before performing these steps, be sure the
1. Start on the backport branch (e.g. `v0.38.x`)
2. Run integration tests (`make test_integrations`) and the e2e nightlies.
3. Prepare the release:
* "Squash" changes from the changelog entries for the pre-releases into a
- "Squash" changes from the changelog entries for the pre-releases into a
single entry, and add all changes included in `CHANGELOG_PENDING.md`.
(Squashing includes both combining all entries, as well as removing or
simplifying any intra-pre-release changes. It may also help to alphabetize
the entries by package name.)
* Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all PRs
* Ensure that `UPGRADING.md` is up-to-date and includes notes on any breaking changes
- Ensure that `UPGRADING.md` is up-to-date and includes notes on any breaking changes
or other upgrading flows.
* Bump TMVersionDefault version in `version.go`
* Bump P2P and block protocol versions in `version.go`, if necessary
* Bump ABCI protocol version in `version.go`, if necessary
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
4. Open a PR with these changes against the backport branch.
5. Once these changes are on the backport branch, push a tag with prepared release details.
This will trigger the actual release `v0.38.0`.
* `git tag -a v0.38.0 -m 'Release v0.38.0'`
* `git push origin v0.38.0`
- `git tag -a v0.38.0 -m 'Release v0.38.0'`
- `git push origin v0.38.0`
6. Make sure that `main` is updated with the latest `CHANGELOG.md`, `CHANGELOG_PENDING.md`, and `UPGRADING.md`.
7. Add the release to the documentation site generator config (see
[DOCS\_README.md](./docs/DOCS_README.md) for more details). In summary:
* Start on branch `main`.
* Add a new line at the bottom of [`docs/versions`](./docs/versions) to
- Start on branch `main`.
- Add a new line at the bottom of [`docs/versions`](./docs/versions) to
ensure the newest release is the default for the landing page.
* Add a new entry to `themeConfig.versions` in
- Add a new entry to `themeConfig.versions` in
[`docs/.vuepress/config.js`](./docs/.vuepress/config.js) to include the
release in the dropdown versions menu.
* Commit these changes to `main` and backport them into the backport
- Commit these changes to `main` and backport them into the backport
branch for this release.
## Patch release
@@ -224,21 +222,21 @@ To create a patch release:
1. Checkout the long-lived backport branch: `git checkout v0.38.x`
2. Run integration tests (`make test_integrations`) and the nightlies.
3. Check out a new branch and prepare the release:
* Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
* Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for all issues
* Run `bash ./scripts/authors.sh` to get a list of authors since the latest release, and add the GitHub aliases of external contributors to the top of the CHANGELOG. To lookup an alias from an email, try `bash ./scripts/authors.sh <email>`
* Reset the `CHANGELOG_PENDING.md`
* Bump the TMDefaultVersion in `version.go`
* Bump the ABCI version number, if necessary.
- Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for all issues
- Run `bash ./scripts/authors.sh` to get a list of authors since the latest release, and add the GitHub aliases of external contributors to the top of the CHANGELOG. To lookup an alias from an email, try `bash ./scripts/authors.sh <email>`
- Reset the `CHANGELOG_PENDING.md`
- Bump the TMDefaultVersion in `version.go`
- Bump the ABCI version number, if necessary.
(Note that ABCI follows semver, and that ABCI versions are the only versions
which can change during patch releases, and only field additions are valid patch changes.)
4. Open a PR with these changes that will land them back on `v0.38.x`
5. Once this change has landed on the backport branch, make sure to pull it locally, then push a tag.
* `git tag -a v0.38.1 -m 'Release v0.38.1'`
* `git push origin v0.38.1`
- `git tag -a v0.38.1 -m 'Release v0.38.1'`
- `git push origin v0.38.1`
6. Create a pull request back to main with the CHANGELOG & version changes from the latest release.
* Remove all `R:patch` labels from the pull requests that were included in the release.
* Do not merge the backport branch into main.
- Remove all `R:patch` labels from the pull requests that were included in the release.
- Do not merge the backport branch into main.
## Minor Release Checklist

View File

@@ -98,7 +98,7 @@ Sometimes it's necessary to rename libraries to avoid naming collisions or ambig
* Make use of table driven testing where possible and not-cumbersome
* [Inspiration](https://dave.cheney.net/2013/06/09/writing-table-driven-tests-in-go)
* Make use of [assert](https://godoc.org/github.com/stretchr/testify/assert) and [require](https://godoc.org/github.com/stretchr/testify/require)
* When using mocks, it is recommended to use Testify [mock](<https://pkg.go.dev/github.com/stretchr/testify/mock>
* When using mocks, it is recommended to use Testify [mock] (<https://pkg.go.dev/github.com/stretchr/testify/mock>
) along with [Mockery](https://github.com/vektra/mockery) for autogeneration
## Errors

View File

@@ -5,15 +5,6 @@ Tendermint Core.
## Unreleased
## Config Changes
* A new config field, `BootstrapPeers` has been introduced as a means of
adding a list of addresses to the addressbook upon initializing a node. This is an
alternative to `PersistentPeers`. `PersistentPeers` should only be used for
nodes that you want to keep a constant connection with i.e. sentry nodes
----
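
A short sketch of the field this section describes, assuming the stock `config.DefaultConfig()` constructor; the `P2P.BootstrapPeers` field itself appears in the config.go hunks later in this compare:

```go
package main

import "github.com/tendermint/tendermint/config"

func main() {
	cfg := config.DefaultConfig()
	// Bootstrap peers seed the address book once at startup; persistent
	// peers are held open permanently (e.g. sentry nodes).
	cfg.P2P.BootstrapPeers = "nodeid1@10.0.0.1:26656,nodeid2@10.0.0.2:26656"
	cfg.P2P.PersistentPeers = "sentryid@10.0.0.3:26656"
}
```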
### ABCI Changes
* The `ABCIVersion` is now `1.0.0`.
@@ -41,14 +32,6 @@ Tendermint Core.
this should have no effect on the wire-level encoding for UTF8-encoded
strings.
## v0.34.24
Note that in [\#9724](https://github.com/tendermint/tendermint/pull/9724) we
un-prettified the JSON output (i.e. removed all indentation) of the HTTP and
WebSocket RPC for performance and subscription stability reasons. We recommend
using a tool such as [jq](https://github.com/stedolan/jq) to obtain prettified
output if you rely on that prettified output in some way.
## v0.34.20
### Feature: Priority Mempool

View File

@@ -6,6 +6,8 @@ import (
tmsync "github.com/tendermint/tendermint/libs/sync"
)
var _ Client = (*localClient)(nil)
// NOTE: use defer to unlock mutex because Application might panic (e.g., in
// case of malicious tx or query). It only makes sense for publicly exposed
// methods like CheckTx (/broadcast_tx_* RPC endpoint) or Query (/abci_query
@@ -22,6 +24,8 @@ var _ Client = (*localClient)(nil)
// NewLocalClient creates a local client, which will be directly calling the
// methods of the given app.
//
// Both Async and Sync methods ignore the given context.Context parameter.
func NewLocalClient(mtx *tmsync.Mutex, app types.Application) Client {
if mtx == nil {
mtx = new(tmsync.Mutex)
@@ -305,8 +309,7 @@ func (app *localClient) OfferSnapshotSync(req types.RequestOfferSnapshot) (*type
}
func (app *localClient) LoadSnapshotChunkSync(
req types.RequestLoadSnapshotChunk,
) (*types.ResponseLoadSnapshotChunk, error) {
req types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -315,8 +318,7 @@ func (app *localClient) LoadSnapshotChunkSync(
}
func (app *localClient) ApplySnapshotChunkSync(
req types.RequestApplySnapshotChunk,
) (*types.ResponseApplySnapshotChunk, error) {
req types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
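
These hunks restore the shared-mutex local client. A minimal usage sketch based on the `NewLocalClient(mtx, app)` signature shown above (import paths assumed; passing nil lets the client allocate its own mutex):

```go
package main

import (
	abcicli "github.com/tendermint/tendermint/abci/client"
	"github.com/tendermint/tendermint/abci/example/kvstore"
)

func main() {
	app := kvstore.NewApplication()
	// nil mutex: NewLocalClient allocates one, per the hunk above. Every
	// Sync/Async call then serializes on that single lock, and the
	// context.Context parameters are ignored, as the restored comment notes.
	client := abcicli.NewLocalClient(nil, app)
	_ = client
}
```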

View File

@@ -54,7 +54,7 @@ var RootCmd = &cobra.Command{
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
switch cmd.Use {
case "kvstore", "version", "help [command]":
case "kvstore", "version":
return nil
}
@@ -460,14 +460,11 @@ func cmdUnimplemented(cmd *cobra.Command, args []string) error {
fmt.Println("Available commands:")
fmt.Printf("%s: %s\n", echoCmd.Use, echoCmd.Short)
fmt.Printf("%s: %s\n", checkTxCmd.Use, checkTxCmd.Short)
fmt.Printf("%s: %s\n", commitCmd.Use, commitCmd.Short)
fmt.Printf("%s: %s\n", deliverTxCmd.Use, deliverTxCmd.Short)
fmt.Printf("%s: %s\n", infoCmd.Use, infoCmd.Short)
fmt.Printf("%s: %s\n", checkTxCmd.Use, checkTxCmd.Short)
fmt.Printf("%s: %s\n", deliverTxCmd.Use, deliverTxCmd.Short)
fmt.Printf("%s: %s\n", queryCmd.Use, queryCmd.Short)
fmt.Printf("%s: %s\n", prepareProposalCmd.Use, prepareProposalCmd.Short)
fmt.Printf("%s: %s\n", processProposalCmd.Use, processProposalCmd.Short)
fmt.Printf("%s: %s\n", commitCmd.Use, commitCmd.Short)
fmt.Println("Use \"[command] --help\" for more information about a command.")
return nil

View File

@@ -8,5 +8,4 @@ const (
CodeTypeUnauthorized uint32 = 3
CodeTypeUnknownError uint32 = 4
CodeTypeExecuted uint32 = 5
CodeTypeRejected uint32 = 6
)

View File

@@ -4,7 +4,7 @@ There are two app's here: the KVStoreApplication and the PersistentKVStoreApplic
## KVStoreApplication
The KVStoreApplication is a simple merkle key-value store.
The KVStoreApplication is a simple merkle key-value store.
Transactions of the form `key=value` are stored as key-value pairs in the tree.
Transactions without an `=` sign set the value to the key.
The app has no replay protection (other than what the mempool provides).
@@ -27,4 +27,4 @@ Validator set changes are effected using the following transaction format:
where `pubkeyN` is a base64-encoded 32-byte ed25519 key and `powerN` is a new voting power for the validator with `pubkeyN` (possibly a new one).
To remove a validator from the validator set, set power to `0`.
There is no sybil protection against new validators joining.
There is no sybil protection against new validators joining.

View File

@@ -122,10 +122,6 @@ func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeli
}
func (app *Application) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
if len(req.Tx) == 0 {
return types.ResponseCheckTx{Code: code.CodeTypeRejected}
}
if req.Type == types.CheckTxType_Recheck {
if _, ok := app.txToRemove[string(req.Tx)]; ok {
return types.ResponseCheckTx{Code: code.CodeTypeExecuted, GasWanted: 1}
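
This guard pairs with the `TestPersistentKVStoreEmptyTX` test in the next file: zero-length transactions are rejected at CheckTx. In compact form, using only constructors and codes visible in this compare (import paths assumed):

```go
package main

import (
	"fmt"

	"github.com/tendermint/tendermint/abci/example/code"
	"github.com/tendermint/tendermint/abci/example/kvstore"
	"github.com/tendermint/tendermint/abci/types"
)

func main() {
	app := kvstore.NewApplication()
	res := app.CheckTx(types.RequestCheckTx{Tx: []byte("")})
	// CodeTypeRejected (6) while the empty-tx guard is in place.
	fmt.Println(res.Code == code.CodeTypeRejected)
}
```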

View File

@@ -70,24 +70,6 @@ func TestKVStoreKV(t *testing.T) {
testKVStore(t, kvstore, tx, key, value)
}
func TestPersistentKVStoreEmptyTX(t *testing.T) {
dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
if err != nil {
t.Fatal(err)
}
kvstore := NewPersistentKVStoreApplication(dir)
tx := []byte("")
reqCheck := types.RequestCheckTx{Tx: tx}
resCheck := kvstore.CheckTx(reqCheck)
require.Equal(t, resCheck.Code, code.CodeTypeRejected)
txs := make([][]byte, 0, 4)
txs = append(txs, []byte("key=value"), []byte("key"), []byte(""), []byte("kee=value"))
reqPrepare := types.RequestPrepareProposal{Txs: txs, MaxTxBytes: 10 * 1024}
resPrepare := kvstore.PrepareProposal(reqPrepare)
require.Equal(t, len(reqPrepare.Txs), len(resPrepare.Txs)+1, "Empty transaction not properly removed")
}
func TestPersistentKVStoreKV(t *testing.T) {
dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
if err != nil {

View File

@@ -324,15 +324,11 @@ func (app *PersistentKVStoreApplication) execPrepareTx(tx []byte) types.Response
}
// substPrepareTx substitutes all the transactions prefixed with 'prepare' in the
// proposal for transactions with the prefix stripped, while discarding invalid empty transactions.
// proposal for transactions with the prefix stripped.
func (app *PersistentKVStoreApplication) substPrepareTx(blockData [][]byte, maxTxBytes int64) [][]byte {
txs := make([][]byte, 0, len(blockData))
var totalBytes int64
for _, tx := range blockData {
if len(tx) == 0 {
continue
}
txMod := tx
if isPrepareTx(tx) {
txMod = bytes.Replace(tx, []byte(PreparePrefix), []byte(ReplacePrefix), 1)

View File

@@ -38,34 +38,8 @@ function testExample() {
rm "${INPUT}".out.new
}
function testHelp() {
INPUT=$1
APP="$2 $3"
echo "Test: $APP"
$APP &> "${INPUT}.new" &
sleep 2
pre=$(shasum < "${INPUT}")
post=$(shasum < "${INPUT}.new")
if [[ "$pre" != "$post" ]]; then
echo "You broke the tutorial"
echo "Got:"
cat "${INPUT}.new"
echo "Expected:"
cat "${INPUT}"
echo "Diff:"
diff "${INPUT}" "${INPUT}.new"
exit 1
fi
rm "${INPUT}".new
}
testExample 1 tests/test_cli/ex1.abci abci-cli kvstore
testExample 2 tests/test_cli/ex2.abci abci-cli kvstore
testHelp tests/test_cli/testHelp.out abci-cli help
echo ""
echo "PASS"

View File

@@ -1,30 +0,0 @@
the ABCI CLI tool wraps an ABCI client and is used for testing ABCI servers
Usage:
abci-cli [command]
Available Commands:
batch run a batch of abci commands against an application
check_tx validate a transaction
commit commit the application state and return the Merkle root hash
completion Generate the autocompletion script for the specified shell
console start an interactive ABCI console for multiple commands
deliver_tx deliver a new transaction to the application
echo have the application echo a message
help Help about any command
info get some info about the application
kvstore ABCI demo example
prepare_proposal prepare proposal
process_proposal process proposal
query query the application state
test run integration tests
version print ABCI console version
Flags:
--abci string either socket or grpc (default "socket")
--address string address of application socket (default "tcp://0.0.0.0:26658")
-h, --help help for abci-cli
--log_level string set the logger level (default "debug")
-v, --verbose print the command and results as if it were a console session
Use "abci-cli [command] --help" for more information about a command.

View File

@@ -4,7 +4,6 @@ import (
"io"
"github.com/cosmos/gogoproto/proto"
"github.com/tendermint/tendermint/libs/protoio"
)

View File

@@ -1,30 +0,0 @@
// Code generated by metricsgen. DO NOT EDIT.
package blocksync
import (
"github.com/go-kit/kit/metrics/discard"
prometheus "github.com/go-kit/kit/metrics/prometheus"
stdprometheus "github.com/prometheus/client_golang/prometheus"
)
func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
labels := []string{}
for i := 0; i < len(labelsAndValues); i += 2 {
labels = append(labels, labelsAndValues[i])
}
return &Metrics{
Syncing: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "syncing",
Help: "Whether or not a node is block syncing. 1 if yes, 0 if no.",
}, labels).With(labelsAndValues...),
}
}
func NopMetrics() *Metrics {
return &Metrics{
Syncing: discard.NewGauge(),
}
}

View File

@@ -1,19 +0,0 @@
package blocksync
import (
"github.com/go-kit/kit/metrics"
)
const (
// MetricsSubsystem is a subsystem shared by all metrics exposed by this
// package.
MetricsSubsystem = "blocksync"
)
//go:generate go run ../scripts/metricsgen -struct=Metrics
// Metrics contains metrics exposed by this package.
type Metrics struct {
// Whether or not a node is block syncing. 1 if yes, 0 if no.
Syncing metrics.Gauge
}

View File

@@ -58,13 +58,11 @@ type Reactor struct {
requestsCh <-chan BlockRequest
errorsCh <-chan peerError
metrics *Metrics
}
// NewReactor returns new reactor instance.
func NewReactor(state sm.State, blockExec *sm.BlockExecutor, store *store.BlockStore,
blockSync bool, metrics *Metrics) *Reactor {
blockSync bool) *Reactor {
if state.LastBlockHeight != store.Height() {
panic(fmt.Sprintf("state (%v) and store (%v) height mismatch", state.LastBlockHeight,
@@ -90,7 +88,6 @@ func NewReactor(state sm.State, blockExec *sm.BlockExecutor, store *store.BlockS
blockSync: blockSync,
requestsCh: requestsCh,
errorsCh: errorsCh,
metrics: metrics,
}
bcR.BaseReactor = *p2p.NewBaseReactor("Reactor", bcR)
return bcR
@@ -239,8 +236,6 @@ func (bcR *Reactor) Receive(e p2p.Envelope) {
// Handle messages from the poolReactor telling the reactor what to do.
// NOTE: Don't sleep in the FOR_LOOP or otherwise slow it down!
func (bcR *Reactor) poolRoutine(stateSynced bool) {
bcR.metrics.Syncing.Set(1)
defer bcR.metrics.Syncing.Set(0)
trySyncTicker := time.NewTicker(trySyncIntervalMS * time.Millisecond)
defer trySyncTicker.Stop()
@@ -288,7 +283,7 @@ func (bcR *Reactor) poolRoutine(stateSynced bool) {
case <-statusUpdateTicker.C:
// ask for status updates
go bcR.BroadcastStatusRequest()
go bcR.BroadcastStatusRequest() //nolint: errcheck
}
}
@@ -414,9 +409,10 @@ FOR_LOOP:
}
// BroadcastStatusRequest broadcasts `BlockStore` base and height.
func (bcR *Reactor) BroadcastStatusRequest() {
bcR.Switch.Broadcast(p2p.Envelope{
func (bcR *Reactor) BroadcastStatusRequest() error {
bcR.Switch.NewBroadcast(p2p.Envelope{
ChannelID: BlocksyncChannel,
Message: &bcproto.StatusRequest{},
})
return nil
}
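
A simplified skeleton of the receive path matching the `Receive(e p2p.Envelope)` signature in the hunk above; the message cases are illustrative only (imports as in the surrounding file):

```go
// Hedged sketch: handled cases are illustrative, not exhaustive.
func (bcR *Reactor) Receive(e p2p.Envelope) {
	switch msg := e.Message.(type) {
	case *bcproto.StatusRequest:
		_ = msg // reply with the store's base and height
	case *bcproto.BlockRequest:
		_ = msg // look the block up and respond on the same channel
	default:
		bcR.Logger.Error("unknown message type", "msg", e.Message)
	}
}
```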

View File

@@ -145,7 +145,7 @@ func newReactor(
blockStore.SaveBlock(thisBlock, thisParts, lastCommit)
}
bcReactor := NewReactor(state.Copy(), blockExec, blockStore, fastSync, NopMetrics())
bcReactor := NewReactor(state.Copy(), blockExec, blockStore, fastSync)
bcReactor.SetLogger(logger.With("module", "blocksync"))
return ReactorPair{bcReactor, proxyApp}

View File

@@ -67,8 +67,7 @@ func copyConfig(home, dir string) error {
func dumpProfile(dir, addr, profile string, debug int) error {
endpoint := fmt.Sprintf("%s/debug/pprof/%s?debug=%d", addr, profile, debug)
//nolint:gosec,nolintlint
resp, err := http.Get(endpoint)
resp, err := http.Get(endpoint) //nolint: gosec
if err != nil {
return fmt.Errorf("failed to query for %s profile: %w", profile, err)
}

View File

@@ -1,87 +0,0 @@
package commands
import (
"context"
"os"
"os/signal"
"syscall"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/inspect"
"github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/state/indexer/block"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
)
// InspectCmd is the command for starting an inspect server.
var InspectCmd = &cobra.Command{
Use: "inspect",
Short: "Run an inspect server for investigating Tendermint state",
Long: `
inspect runs a subset of Tendermint's RPC endpoints that are useful for debugging
issues with Tendermint.
When the Tendermint consensus engine detects inconsistent state, it will crash the
Tendermint process. Tendermint will not start up while in this inconsistent state.
The inspect command can be used to query the block and state store using Tendermint
RPC calls to debug issues of inconsistent state.
`,
RunE: runInspect,
}
func init() {
InspectCmd.Flags().
String("rpc.laddr",
config.RPC.ListenAddress, "RPC listenener address. Port required")
InspectCmd.Flags().
String("db-backend",
config.DBBackend, "database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb")
InspectCmd.Flags().
String("db-dir", config.DBPath, "database directory")
}
func runInspect(cmd *cobra.Command, args []string) error {
ctx, cancel := context.WithCancel(cmd.Context())
defer cancel()
c := make(chan os.Signal, 1)
signal.Notify(c, syscall.SIGTERM, syscall.SIGINT)
go func() {
<-c
cancel()
}()
blockStoreDB, err := cfg.DefaultDBProvider(&cfg.DBContext{ID: "blockstore", Config: config})
if err != nil {
return err
}
blockStore := store.NewBlockStore(blockStoreDB)
defer blockStore.Close()
stateDB, err := cfg.DefaultDBProvider(&cfg.DBContext{ID: "state", Config: config})
if err != nil {
return err
}
stateStore := state.NewStore(stateDB, state.StoreOptions{DiscardABCIResponses: false})
defer stateStore.Close()
genDoc, err := types.GenesisDocFromFile(config.GenesisFile())
if err != nil {
return err
}
txIndexer, blockIndexer, err := block.IndexerFromConfig(config, cfg.DefaultDBProvider, genDoc.ChainID)
if err != nil {
return err
}
ins := inspect.New(config.RPC, blockStore, stateStore, txIndexer, blockIndexer, logger)
logger.Info("starting inspect server")
if err := ins.Run(ctx); err != nil {
return err
}
return nil
}

View File

@@ -14,7 +14,7 @@ import (
"github.com/tendermint/tendermint/store"
)
var removeBlock = false
var removeBlock bool = false
func init() {
RollbackStateCmd.Flags().BoolVar(&removeBlock, "hard", false, "remove last block as well as state")

View File

@@ -66,7 +66,6 @@ func AddNodeFlags(cmd *cobra.Command) {
cmd.Flags().String("p2p.external-address", config.P2P.ExternalAddress, "ip:port address to advertise to peers for them to dial")
cmd.Flags().String("p2p.seeds", config.P2P.Seeds, "comma-delimited ID@host:port seed nodes")
cmd.Flags().String("p2p.persistent_peers", config.P2P.PersistentPeers, "comma-delimited ID@host:port persistent peers")
cmd.Flags().String("p2p.bootstrap_peers", config.P2P.BootstrapPeers, "comma-delimited ID@host:port peers to be added to the addressbook on startup")
cmd.Flags().String("p2p.unconditional_peer_ids",
config.P2P.UnconditionalPeerIDs, "comma-delimited IDs of unconditional peers")
cmd.Flags().Bool("p2p.upnp", config.P2P.UPNP, "enable/disable UPNP port forwarding")

View File

@@ -30,7 +30,6 @@ func main() {
cmd.VersionCmd,
cmd.RollbackStateCmd,
cmd.CompactGoLevelDBCmd,
cmd.InspectCmd,
debug.DebugCmd,
cli.NewCompletionCmd(rootCmd, true),
)

View File

@@ -423,7 +423,6 @@ type RPCConfig struct {
TLSKeyFile string `mapstructure:"tls_key_file"`
// pprof listen address (https://golang.org/pkg/net/http/pprof)
// FIXME: This should be moved under the instrumentation section
PprofListenAddress string `mapstructure:"pprof_laddr"`
}
@@ -507,10 +506,6 @@ func (cfg *RPCConfig) IsCorsEnabled() bool {
return len(cfg.CORSAllowedOrigins) != 0
}
func (cfg *RPCConfig) IsPprofEnabled() bool {
return len(cfg.PprofListenAddress) != 0
}
func (cfg RPCConfig) KeyFile() string {
path := cfg.TLSKeyFile
if filepath.IsAbs(path) {
@@ -548,11 +543,6 @@ type P2PConfig struct { //nolint: maligned
// We only use these if we cant connect to peers in the addrbook
Seeds string `mapstructure:"seeds"`
// Comma separated list of peers to be added to the peer store
// on startup. Either BootstrapPeers or PersistentPeers are
// needed for peer discovery
BootstrapPeers string `mapstructure:"bootstrap_peers"`
// Comma separated list of nodes to keep persistent connections to
PersistentPeers string `mapstructure:"persistent_peers"`
@@ -713,28 +703,14 @@ type MempoolConfig struct {
// Mempool version to use:
// 1) "v0" - (default) FIFO mempool.
// 2) "v1" - prioritized mempool.
Version string `mapstructure:"version"`
// RootDir is the root directory for all data. This should be configured via
// the $TMHOME env variable or --home cmd flag rather than overriding this
// struct field.
RootDir string `mapstructure:"home"`
// Recheck (default: true) defines whether Tendermint should recheck the
// validity for all remaining transaction in the mempool after a block.
// Since a block affects the application state, some transactions in the
// mempool may become invalid. If this does not apply to your application,
// you can disable rechecking.
Recheck bool `mapstructure:"recheck"`
// Broadcast (default: true) defines whether the mempool should relay
// transactions to other peers. Setting this to false will stop the mempool
// from relaying transactions to other peers until they are included in a
// block. In other words, if Broadcast is disabled, only the peer you send
// the tx to will see it until it is included in a block.
Broadcast bool `mapstructure:"broadcast"`
// WalPath (default: "") configures the location of the Write Ahead Log
// (WAL) for the mempool. The WAL is disabled by default. To enable, set
// WalPath to where you want the WAL to be written (e.g.
// "data/mempool.wal").
WalPath string `mapstructure:"wal_dir"`
// WARNING: There's a known memory leak with the prioritized mempool
// that the team is working on. Read more here:
// https://github.com/tendermint/tendermint/issues/8775
Version string `mapstructure:"version"`
RootDir string `mapstructure:"home"`
Recheck bool `mapstructure:"recheck"`
Broadcast bool `mapstructure:"broadcast"`
WalPath string `mapstructure:"wal_dir"`
// Maximum number of transactions in the mempool
Size int `mapstructure:"size"`
// Limit the total size of all txs in the mempool.
@@ -1228,10 +1204,6 @@ func (cfg *InstrumentationConfig) ValidateBasic() error {
return nil
}
func (cfg *InstrumentationConfig) IsPrometheusEnabled() bool {
return cfg.Prometheus && cfg.PrometheusListenAddr != ""
}
//-----------------------------------------------------------------------------
// Utils
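For reference, a hedged sketch of how the `MempoolConfig` fields above surface in `config.toml`; the keys come from the mapstructure tags, the values are illustrative only:

```toml
[mempool]
version = "v1"    # "v0" (default, FIFO) or "v1" (prioritized)
recheck = true    # revalidate remaining txs after every block
broadcast = true  # relay txs to peers
wal_dir = ""      # empty string disables the mempool WAL
size = 5000       # maximum number of txs in the mempool
```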

View File

@@ -1,30 +0,0 @@
package config
import (
"context"
dbm "github.com/tendermint/tm-db"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
)
// ServiceProvider takes a config and a logger and returns a ready to go Node.
type ServiceProvider func(context.Context, *Config, log.Logger) (service.Service, error)
// DBContext specifies config information for loading a new DB.
type DBContext struct {
ID string
Config *Config
}
// DBProvider takes a DBContext and returns an instantiated DB.
type DBProvider func(*DBContext) (dbm.DB, error)
// DefaultDBProvider returns a database using the DBBackend and DBDir
// specified in the Config.
func DefaultDBProvider(ctx *DBContext) (dbm.DB, error) {
dbType := dbm.BackendType(ctx.Config.DBBackend)
return dbm.NewDB(ctx.ID, dbType, ctx.Config.DBDir())
}
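A minimal usage sketch of the provider being removed here; the `openBlockstore` helper is hypothetical, mirroring how the inspect command opens its "state" DB earlier in this diff:

```go
// openBlockstore resolves the backend type and directory from the node's
// config, exactly as DefaultDBProvider is used by the inspect command above.
func openBlockstore(config *cfg.Config) (dbm.DB, error) {
	return cfg.DefaultDBProvider(&cfg.DBContext{ID: "blockstore", Config: config})
}
```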

View File

@@ -283,11 +283,6 @@ external_address = "{{ .P2P.ExternalAddress }}"
# Comma separated list of seed nodes to connect to
seeds = "{{ .P2P.Seeds }}"
# Comma separated list of peers to be added to the peer store
# on startup. Either BootstrapPeers or PersistentPeers are
# needed for peer discovery
bootstrap_peers = "{{ .P2P.BootstrapPeers }}"
# Comma separated list of nodes to keep persistent connections to
persistent_peers = "{{ .P2P.PersistentPeers }}"
@@ -354,24 +349,8 @@ dial_timeout = "{{ .P2P.DialTimeout }}"
# 2) "v1" - prioritized mempool.
version = "{{ .Mempool.Version }}"
# Recheck (default: true) defines whether Tendermint should recheck the
# validity for all remaining transactions in the mempool after a block.
# Since a block affects the application state, some transactions in the
# mempool may become invalid. If this does not apply to your application,
# you can disable rechecking.
recheck = {{ .Mempool.Recheck }}
# Broadcast (default: true) defines whether the mempool should relay
# transactions to other peers. Setting this to false will stop the mempool
# from relaying transactions to other peers until they are included in a
# block. In other words, if Broadcast is disabled, only the peer you send
# the tx to will see it until it is included in a block.
broadcast = {{ .Mempool.Broadcast }}
# WalPath (default: "") configures the location of the Write Ahead Log
# (WAL) for the mempool. The WAL is disabled by default. To enable, set
# WalPath to where you want the WAL to be written (e.g.
# "data/mempool.wal").
wal_dir = "{{ js .Mempool.WalPath }}"
# Maximum number of transactions in the mempool
@@ -457,7 +436,7 @@ chunk_fetchers = "{{ .StateSync.ChunkFetchers }}"
[blocksync]
# Block Sync version to use:
#
#
# In v0.37, v1 and v2 of the block sync protocols were deprecated.
# Please use v0 instead.
#

View File

@@ -1,3 +1,3 @@
# Consensus
# Consensus
See the [consensus spec](https://github.com/tendermint/tendermint/tree/main/spec/consensus) for more information.
See the [consensus spec](https://github.com/tendermint/tendermint/tree/main/spec/consensus) for more information.

View File

@@ -167,13 +167,13 @@ func TestByzantinePrevoteEquivocation(t *testing.T) {
if i < len(peerList)/2 {
bcs.Logger.Info("Signed and pushed vote", "vote", prevote1, "peer", peer)
peer.Send(p2p.Envelope{
Message: &tmcons.Vote{Vote: prevote1.ToProto()},
Message: &tmcons.Vote{prevote1.ToProto()},
ChannelID: VoteChannel,
})
} else {
bcs.Logger.Info("Signed and pushed vote", "vote", prevote2, "peer", peer)
peer.Send(p2p.Envelope{
Message: &tmcons.Vote{Vote: prevote2.ToProto()},
Message: &tmcons.Vote{prevote2.ToProto()},
ChannelID: VoteChannel,
})
}
@@ -556,11 +556,11 @@ func sendProposalAndParts(
cs.mtx.Unlock()
peer.Send(p2p.Envelope{
ChannelID: VoteChannel,
Message: &tmcons.Vote{Vote: prevote.ToProto()},
Message: &tmcons.Vote{prevote.ToProto()},
})
peer.Send(p2p.Envelope{
ChannelID: VoteChannel,
Message: &tmcons.Vote{Vote: precommit.ToProto()},
Message: &tmcons.Vote{precommit.ToProto()},
})
}

View File

@@ -96,7 +96,7 @@ func invalidDoPrevoteFunc(t *testing.T, height int64, round int32, cs *State, sw
for _, peer := range peers {
cs.Logger.Info("Sending bad vote", "block", blockHash, "peer", peer)
peer.Send(p2p.Envelope{
Message: &tmcons.Vote{Vote: precommit.ToProto()},
Message: &tmcons.Vote{precommit.ToProto()},
ChannelID: VoteChannel,
})
}

View File

@@ -118,6 +118,18 @@ func PrometheusMetrics(namespace string, labelsAndValues ...string) *Metrics {
Name: "latest_block_height",
Help: "The latest block height.",
}, labels).With(labelsAndValues...),
BlockSyncing: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "block_syncing",
Help: "Whether or not a node is block syncing. 1 if yes, 0 if no.",
}, labels).With(labelsAndValues...),
StateSyncing: prometheus.NewGaugeFrom(stdprometheus.GaugeOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
Name: "state_syncing",
Help: "Whether or not a node is state syncing. 1 if yes, 0 if no.",
}, labels).With(labelsAndValues...),
BlockParts: prometheus.NewCounterFrom(stdprometheus.CounterOpts{
Namespace: namespace,
Subsystem: MetricsSubsystem,
@@ -196,6 +208,8 @@ func NopMetrics() *Metrics {
BlockSizeBytes: discard.NewGauge(),
TotalTxs: discard.NewGauge(),
CommittedHeight: discard.NewGauge(),
BlockSyncing: discard.NewGauge(),
StateSyncing: discard.NewGauge(),
BlockParts: discard.NewCounter(),
StepDurationSeconds: discard.NewHistogram(),
BlockGossipPartsReceived: discard.NewCounter(),

View File

@@ -61,6 +61,10 @@ type Metrics struct {
TotalTxs metrics.Gauge
// The latest block height.
CommittedHeight metrics.Gauge `metrics_name:"latest_block_height"`
// Whether or not a node is block syncing. 1 if yes, 0 if no.
BlockSyncing metrics.Gauge
// Whether or not a node is state syncing. 1 if yes, 0 if no.
StateSyncing metrics.Gauge
// Number of block parts transmitted by each peer.
BlockParts metrics.Counter `metrics_labels:"peer_id"`
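A hedged usage sketch of the two new gauges; the label pair is illustrative, and the transitions mirror `SwitchToConsensus` further down in this diff:

```go
// Flip the gauges as the node moves through its sync phases.
m := PrometheusMetrics("tendermint", "chain_id", "test-chain")
m.StateSyncing.Set(1) // state sync in progress
m.StateSyncing.Set(0) // state sync finished
m.BlockSyncing.Set(1) // block sync in progress
m.BlockSyncing.Set(0) // caught up; consensus takes over
```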

View File

@@ -4,8 +4,6 @@ import (
"errors"
"fmt"
"github.com/cosmos/gogoproto/proto"
cstypes "github.com/tendermint/tendermint/consensus/types"
"github.com/tendermint/tendermint/libs/bits"
tmmath "github.com/tendermint/tendermint/libs/math"
@@ -18,144 +16,172 @@ import (
// MsgToProto takes a consensus message type and returns the proto defined consensus message.
//
// TODO: This needs to be removed, but WALToProto depends on this.
func MsgToProto(msg Message) (proto.Message, error) {
func MsgToProto(msg Message) (*tmcons.Message, error) {
if msg == nil {
return nil, errors.New("consensus: message is nil")
}
var pb proto.Message
var pb tmcons.Message
switch msg := msg.(type) {
case *NewRoundStepMessage:
pb = &tmcons.NewRoundStep{
Height: msg.Height,
Round: msg.Round,
Step: uint32(msg.Step),
SecondsSinceStartTime: msg.SecondsSinceStartTime,
LastCommitRound: msg.LastCommitRound,
pb = tmcons.Message{
Sum: &tmcons.Message_NewRoundStep{
NewRoundStep: &tmcons.NewRoundStep{
Height: msg.Height,
Round: msg.Round,
Step: uint32(msg.Step),
SecondsSinceStartTime: msg.SecondsSinceStartTime,
LastCommitRound: msg.LastCommitRound,
},
},
}
case *NewValidBlockMessage:
pbPartSetHeader := msg.BlockPartSetHeader.ToProto()
pbBits := msg.BlockParts.ToProto()
pb = &tmcons.NewValidBlock{
Height: msg.Height,
Round: msg.Round,
BlockPartSetHeader: pbPartSetHeader,
BlockParts: pbBits,
IsCommit: msg.IsCommit,
pb = tmcons.Message{
Sum: &tmcons.Message_NewValidBlock{
NewValidBlock: &tmcons.NewValidBlock{
Height: msg.Height,
Round: msg.Round,
BlockPartSetHeader: pbPartSetHeader,
BlockParts: pbBits,
IsCommit: msg.IsCommit,
},
},
}
case *ProposalMessage:
pbP := msg.Proposal.ToProto()
pb = &tmcons.Proposal{
Proposal: *pbP,
pb = tmcons.Message{
Sum: &tmcons.Message_Proposal{
Proposal: &tmcons.Proposal{
Proposal: *pbP,
},
},
}
case *ProposalPOLMessage:
pbBits := msg.ProposalPOL.ToProto()
pb = &tmcons.ProposalPOL{
Height: msg.Height,
ProposalPolRound: msg.ProposalPOLRound,
ProposalPol: *pbBits,
pb = tmcons.Message{
Sum: &tmcons.Message_ProposalPol{
ProposalPol: &tmcons.ProposalPOL{
Height: msg.Height,
ProposalPolRound: msg.ProposalPOLRound,
ProposalPol: *pbBits,
},
},
}
case *BlockPartMessage:
parts, err := msg.Part.ToProto()
if err != nil {
return nil, fmt.Errorf("msg to proto error: %w", err)
}
pb = &tmcons.BlockPart{
Height: msg.Height,
Round: msg.Round,
Part: *parts,
pb = tmcons.Message{
Sum: &tmcons.Message_BlockPart{
BlockPart: &tmcons.BlockPart{
Height: msg.Height,
Round: msg.Round,
Part: *parts,
},
},
}
case *VoteMessage:
vote := msg.Vote.ToProto()
pb = &tmcons.Vote{
Vote: vote,
pb = tmcons.Message{
Sum: &tmcons.Message_Vote{
Vote: &tmcons.Vote{
Vote: vote,
},
},
}
case *HasVoteMessage:
pb = &tmcons.HasVote{
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
Index: msg.Index,
pb = tmcons.Message{
Sum: &tmcons.Message_HasVote{
HasVote: &tmcons.HasVote{
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
Index: msg.Index,
},
},
}
case *VoteSetMaj23Message:
bi := msg.BlockID.ToProto()
pb = &tmcons.VoteSetMaj23{
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
BlockID: bi,
pb = tmcons.Message{
Sum: &tmcons.Message_VoteSetMaj23{
VoteSetMaj23: &tmcons.VoteSetMaj23{
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
BlockID: bi,
},
},
}
case *VoteSetBitsMessage:
bi := msg.BlockID.ToProto()
bits := msg.Votes.ToProto()
vsb := &tmcons.VoteSetBits{
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
BlockID: bi,
vsb := &tmcons.Message_VoteSetBits{
VoteSetBits: &tmcons.VoteSetBits{
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
BlockID: bi,
},
}
if bits != nil {
vsb.Votes = *bits
vsb.VoteSetBits.Votes = *bits
}
pb = vsb
pb = tmcons.Message{
Sum: vsb,
}
default:
return nil, fmt.Errorf("consensus: message not recognized: %T", msg)
}
return pb, nil
return &pb, nil
}
// MsgFromProto takes a consensus proto message and returns the native go type
func MsgFromProto(p proto.Message) (Message, error) {
if p == nil {
func MsgFromProto(msg *tmcons.Message) (Message, error) {
if msg == nil {
return nil, errors.New("consensus: nil message")
}
var pb Message
switch msg := p.(type) {
case *tmcons.NewRoundStep:
rs, err := tmmath.SafeConvertUint8(int64(msg.Step))
switch msg := msg.Sum.(type) {
case *tmcons.Message_NewRoundStep:
rs, err := tmmath.SafeConvertUint8(int64(msg.NewRoundStep.Step))
// deny message based on possible overflow
if err != nil {
return nil, fmt.Errorf("denying message due to possible overflow: %w", err)
}
pb = &NewRoundStepMessage{
Height: msg.Height,
Round: msg.Round,
Height: msg.NewRoundStep.Height,
Round: msg.NewRoundStep.Round,
Step: cstypes.RoundStepType(rs),
SecondsSinceStartTime: msg.SecondsSinceStartTime,
LastCommitRound: msg.LastCommitRound,
SecondsSinceStartTime: msg.NewRoundStep.SecondsSinceStartTime,
LastCommitRound: msg.NewRoundStep.LastCommitRound,
}
case *tmcons.NewValidBlock:
pbPartSetHeader, err := types.PartSetHeaderFromProto(&msg.BlockPartSetHeader)
case *tmcons.Message_NewValidBlock:
pbPartSetHeader, err := types.PartSetHeaderFromProto(&msg.NewValidBlock.BlockPartSetHeader)
if err != nil {
return nil, fmt.Errorf("parts to proto error: %w", err)
}
pbBits := new(bits.BitArray)
pbBits.FromProto(msg.BlockParts)
pbBits.FromProto(msg.NewValidBlock.BlockParts)
pb = &NewValidBlockMessage{
Height: msg.Height,
Round: msg.Round,
Height: msg.NewValidBlock.Height,
Round: msg.NewValidBlock.Round,
BlockPartSetHeader: *pbPartSetHeader,
BlockParts: pbBits,
IsCommit: msg.IsCommit,
IsCommit: msg.NewValidBlock.IsCommit,
}
case *tmcons.Proposal:
pbP, err := types.ProposalFromProto(&msg.Proposal)
case *tmcons.Message_Proposal:
pbP, err := types.ProposalFromProto(&msg.Proposal.Proposal)
if err != nil {
return nil, fmt.Errorf("proposal msg to proto error: %w", err)
}
@@ -163,26 +189,26 @@ func MsgFromProto(p proto.Message) (Message, error) {
pb = &ProposalMessage{
Proposal: pbP,
}
case *tmcons.ProposalPOL:
case *tmcons.Message_ProposalPol:
pbBits := new(bits.BitArray)
pbBits.FromProto(&msg.ProposalPol)
pbBits.FromProto(&msg.ProposalPol.ProposalPol)
pb = &ProposalPOLMessage{
Height: msg.Height,
ProposalPOLRound: msg.ProposalPolRound,
Height: msg.ProposalPol.Height,
ProposalPOLRound: msg.ProposalPol.ProposalPolRound,
ProposalPOL: pbBits,
}
case *tmcons.BlockPart:
parts, err := types.PartFromProto(&msg.Part)
case *tmcons.Message_BlockPart:
parts, err := types.PartFromProto(&msg.BlockPart.Part)
if err != nil {
return nil, fmt.Errorf("blockpart msg to proto error: %w", err)
}
pb = &BlockPartMessage{
Height: msg.Height,
Round: msg.Round,
Height: msg.BlockPart.Height,
Round: msg.BlockPart.Round,
Part: parts,
}
case *tmcons.Vote:
vote, err := types.VoteFromProto(msg.Vote)
case *tmcons.Message_Vote:
vote, err := types.VoteFromProto(msg.Vote.Vote)
if err != nil {
return nil, fmt.Errorf("vote msg to proto error: %w", err)
}
@@ -190,36 +216,36 @@ func MsgFromProto(p proto.Message) (Message, error) {
pb = &VoteMessage{
Vote: vote,
}
case *tmcons.HasVote:
case *tmcons.Message_HasVote:
pb = &HasVoteMessage{
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
Index: msg.Index,
Height: msg.HasVote.Height,
Round: msg.HasVote.Round,
Type: msg.HasVote.Type,
Index: msg.HasVote.Index,
}
case *tmcons.VoteSetMaj23:
bi, err := types.BlockIDFromProto(&msg.BlockID)
case *tmcons.Message_VoteSetMaj23:
bi, err := types.BlockIDFromProto(&msg.VoteSetMaj23.BlockID)
if err != nil {
return nil, fmt.Errorf("voteSetMaj23 msg to proto error: %w", err)
}
pb = &VoteSetMaj23Message{
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
Height: msg.VoteSetMaj23.Height,
Round: msg.VoteSetMaj23.Round,
Type: msg.VoteSetMaj23.Type,
BlockID: *bi,
}
case *tmcons.VoteSetBits:
bi, err := types.BlockIDFromProto(&msg.BlockID)
case *tmcons.Message_VoteSetBits:
bi, err := types.BlockIDFromProto(&msg.VoteSetBits.BlockID)
if err != nil {
return nil, fmt.Errorf("voteSetBits msg to proto error: %w", err)
}
bits := new(bits.BitArray)
bits.FromProto(&msg.Votes)
bits.FromProto(&msg.VoteSetBits.Votes)
pb = &VoteSetBitsMessage{
Height: msg.Height,
Round: msg.Round,
Type: msg.Type,
Height: msg.VoteSetBits.Height,
Round: msg.VoteSetBits.Round,
Type: msg.VoteSetBits.Type,
BlockID: *bi,
Votes: bits,
}
@@ -254,14 +280,10 @@ func WALToProto(msg WALMessage) (*tmcons.WALMessage, error) {
if err != nil {
return nil, err
}
if w, ok := consMsg.(p2p.Wrapper); ok {
consMsg = w.Wrap()
}
cm := consMsg.(*tmcons.Message)
pb = tmcons.WALMessage{
Sum: &tmcons.WALMessage_MsgInfo{
MsgInfo: &tmcons.MsgInfo{
Msg: *cm,
Msg: *consMsg,
PeerID: string(msg.PeerID),
},
},
@@ -307,11 +329,7 @@ func WALFromProto(msg *tmcons.WALMessage) (WALMessage, error) {
Step: msg.EventDataRoundState.Step,
}
case *tmcons.WALMessage_MsgInfo:
um, err := msg.MsgInfo.Msg.Unwrap()
if err != nil {
return nil, fmt.Errorf("unwrap message: %w", err)
}
walMsg, err := MsgFromProto(um)
walMsg, err := MsgFromProto(&msg.MsgInfo.Msg)
if err != nil {
return nil, fmt.Errorf("msgInfo from proto error: %w", err)
}
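A minimal round-trip sketch (hypothetical helper, not part of this diff) showing the net effect of the signature change: `MsgToProto` now returns `*tmcons.Message` directly, so callers such as `WALToProto` no longer need a `p2p.Wrapper` assertion before encoding:

```go
// roundTrip encodes a native consensus message into its proto envelope and
// decodes it back, exercising both directions of the new API.
func roundTrip(msg Message) (Message, error) {
	pb, err := MsgToProto(msg) // *tmcons.Message with the Sum oneof populated
	if err != nil {
		return nil, err
	}
	return MsgFromProto(pb) // back to the native Go type
}
```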

View File

@@ -71,7 +71,7 @@ func TestMsgToProto(t *testing.T) {
testsCases := []struct {
testName string
msg Message
want proto.Message
want *tmcons.Message
wantErr bool
}{
{"successful NewRoundStepMessage", &NewRoundStepMessage{
@@ -80,15 +80,17 @@ func TestMsgToProto(t *testing.T) {
Step: 1,
SecondsSinceStartTime: 1,
LastCommitRound: 2,
}, &tmcons.NewRoundStep{
Height: 2,
Round: 1,
Step: 1,
SecondsSinceStartTime: 1,
LastCommitRound: 2,
},
false},
}, &tmcons.Message{
Sum: &tmcons.Message_NewRoundStep{
NewRoundStep: &tmcons.NewRoundStep{
Height: 2,
Round: 1,
Step: 1,
SecondsSinceStartTime: 1,
LastCommitRound: 2,
},
},
}, false},
{"successful NewValidBlockMessage", &NewValidBlockMessage{
Height: 1,
@@ -96,78 +98,92 @@ func TestMsgToProto(t *testing.T) {
BlockPartSetHeader: psh,
BlockParts: bits,
IsCommit: false,
}, &tmcons.NewValidBlock{
Height: 1,
Round: 1,
BlockPartSetHeader: pbPsh,
BlockParts: pbBits,
IsCommit: false,
},
false},
}, &tmcons.Message{
Sum: &tmcons.Message_NewValidBlock{
NewValidBlock: &tmcons.NewValidBlock{
Height: 1,
Round: 1,
BlockPartSetHeader: pbPsh,
BlockParts: pbBits,
IsCommit: false,
},
},
}, false},
{"successful BlockPartMessage", &BlockPartMessage{
Height: 100,
Round: 1,
Part: &parts,
}, &tmcons.BlockPart{
Height: 100,
Round: 1,
Part: *pbParts,
},
false},
}, &tmcons.Message{
Sum: &tmcons.Message_BlockPart{
BlockPart: &tmcons.BlockPart{
Height: 100,
Round: 1,
Part: *pbParts,
},
},
}, false},
{"successful ProposalPOLMessage", &ProposalPOLMessage{
Height: 1,
ProposalPOLRound: 1,
ProposalPOL: bits,
}, &tmcons.ProposalPOL{
Height: 1,
ProposalPolRound: 1,
ProposalPol: *pbBits,
},
false},
}, &tmcons.Message{
Sum: &tmcons.Message_ProposalPol{
ProposalPol: &tmcons.ProposalPOL{
Height: 1,
ProposalPolRound: 1,
ProposalPol: *pbBits,
},
}}, false},
{"successful ProposalMessage", &ProposalMessage{
Proposal: &proposal,
}, &tmcons.Proposal{
Proposal: *pbProposal,
},
false},
}, &tmcons.Message{
Sum: &tmcons.Message_Proposal{
Proposal: &tmcons.Proposal{
Proposal: *pbProposal,
},
},
}, false},
{"successful VoteMessage", &VoteMessage{
Vote: vote,
}, &tmcons.Vote{
Vote: pbVote,
},
false},
}, &tmcons.Message{
Sum: &tmcons.Message_Vote{
Vote: &tmcons.Vote{
Vote: pbVote,
},
},
}, false},
{"successful VoteSetMaj23", &VoteSetMaj23Message{
Height: 1,
Round: 1,
Type: 1,
BlockID: bi,
}, &tmcons.VoteSetMaj23{
Height: 1,
Round: 1,
Type: 1,
BlockID: pbBi,
},
false},
}, &tmcons.Message{
Sum: &tmcons.Message_VoteSetMaj23{
VoteSetMaj23: &tmcons.VoteSetMaj23{
Height: 1,
Round: 1,
Type: 1,
BlockID: pbBi,
},
},
}, false},
{"successful VoteSetBits", &VoteSetBitsMessage{
Height: 1,
Round: 1,
Type: 1,
BlockID: bi,
Votes: bits,
}, &tmcons.VoteSetBits{
Height: 1,
Round: 1,
Type: 1,
BlockID: pbBi,
Votes: *pbBits,
},
false},
}, &tmcons.Message{
Sum: &tmcons.Message_VoteSetBits{
VoteSetBits: &tmcons.VoteSetBits{
Height: 1,
Round: 1,
Type: 1,
BlockID: pbBi,
Votes: *pbBits,
},
},
}, false},
{"failure", nil, &tmcons.Message{}, true},
}
for _, tt := range testsCases {

View File

@@ -119,6 +119,8 @@ func (conR *Reactor) SwitchToConsensus(state sm.State, skipWAL bool) {
conR.mtx.Lock()
conR.waitSync = false
conR.mtx.Unlock()
conR.Metrics.BlockSyncing.Set(0)
conR.Metrics.StateSyncing.Set(0)
if skipWAL {
conR.conS.doWALCatchup = false
@@ -228,7 +230,18 @@ func (conR *Reactor) Receive(e p2p.Envelope) {
conR.Logger.Debug("Receive", "src", e.Src, "chId", e.ChannelID)
return
}
msg, err := MsgFromProto(e.Message)
if w, ok := e.Message.(p2p.Wrapper); ok {
var err error
e.Message, err = w.Wrap()
if err != nil {
conR.Logger.Error("Error wrapping message", "src", e.Src, "chId", e.ChannelID, "err", err)
conR.Switch.StopPeerForError(e.Src, err)
return
}
}
msg, err := MsgFromProto(e.Message.(*tmcons.Message))
if err != nil {
conR.Logger.Error("Error decoding message", "src", e.Src, "chId", e.ChannelID, "err", err)
conR.Switch.StopPeerForError(e.Src, err)
@@ -435,7 +448,7 @@ func (conR *Reactor) unsubscribeFromBroadcastEvents() {
func (conR *Reactor) broadcastNewRoundStepMessage(rs *cstypes.RoundState) {
nrsMsg := makeRoundStepMessage(rs)
conR.Switch.Broadcast(p2p.Envelope{
conR.Switch.NewBroadcast(p2p.Envelope{
ChannelID: StateChannel,
Message: nrsMsg,
})
@@ -450,7 +463,7 @@ func (conR *Reactor) broadcastNewValidBlockMessage(rs *cstypes.RoundState) {
BlockParts: rs.ProposalBlockParts.BitArray().ToProto(),
IsCommit: rs.Step == cstypes.RoundStepCommit,
}
conR.Switch.Broadcast(p2p.Envelope{
conR.Switch.NewBroadcast(p2p.Envelope{
ChannelID: StateChannel,
Message: csMsg,
})
@@ -464,7 +477,7 @@ func (conR *Reactor) broadcastHasVoteMessage(vote *types.Vote) {
Type: vote.Type,
Index: vote.ValidatorIndex,
}
conR.Switch.Broadcast(p2p.Envelope{
conR.Switch.NewBroadcast(p2p.Envelope{
ChannelID: StateChannel,
Message: msg,
})
@@ -1351,7 +1364,6 @@ func (ps *PeerState) ApplyNewRoundStepMessage(msg *NewRoundStepMessage) {
psRound := ps.PRS.Round
psCatchupCommitRound := ps.PRS.CatchupCommitRound
psCatchupCommit := ps.PRS.CatchupCommit
lastPrecommits := ps.PRS.Precommits
startTime := tmtime.Now().Add(-1 * time.Duration(msg.SecondsSinceStartTime) * time.Second)
ps.PRS.Height = msg.Height
@@ -1379,7 +1391,7 @@ func (ps *PeerState) ApplyNewRoundStepMessage(msg *NewRoundStepMessage) {
// Shift Precommits to LastCommit.
if psHeight+1 == msg.Height && psRound == msg.LastCommitRound {
ps.PRS.LastCommitRound = msg.LastCommitRound
ps.PRS.LastCommit = lastPrecommits
ps.PRS.LastCommit = ps.PRS.Precommits
} else {
ps.PRS.LastCommitRound = msg.LastCommitRound
ps.PRS.LastCommit = nil
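For context, a hedged sketch of the `p2p.Wrapper` side assumed by the `Receive` hunk above; the interface shape is inferred from the `e.Message, err = w.Wrap()` call, and the `Vote` method is purely illustrative:

```go
// Wrapper is the shape Receive asserts against (inferred, not verbatim).
type Wrapper interface {
	proto.Message
	Wrap() (proto.Message, error)
}

// Illustrative implementation: a channel message wraps itself into the
// consensus envelope so Receive can hand a single type to MsgFromProto.
func (m *Vote) Wrap() (proto.Message, error) {
	return &Message{Sum: &Message_Vote{Vote: m}}, nil
}
```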

View File

@@ -54,7 +54,7 @@ func (cs *State) readReplayMessage(msg *TimedWALMessage, newStepSub types.Subscr
if m.Height != m2.Height || m.Round != m2.Round || m.Step != m2.Step {
return fmt.Errorf("roundState mismatch. Got %v; Expected %v", m2, m)
}
case <-newStepSub.Canceled():
case <-newStepSub.Cancelled():
return fmt.Errorf("failed to read off newStepSub.Out(). newStepSub was canceled")
case <-ticker:
return fmt.Errorf("failed to read off newStepSub.Out()")
@@ -254,7 +254,7 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
h.logger.Info("ABCI Handshake App Info",
"height", blockHeight,
"hash", log.NewLazySprintf("%X", appHash),
"hash", appHash,
"software-version", res.Version,
"protocol-version", res.AppVersion,
)
@@ -271,7 +271,7 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
}
h.logger.Info("Completed ABCI Handshake - Tendermint and App are synced",
"appHeight", blockHeight, "appHash", log.NewLazySprintf("%X", appHash))
"appHeight", blockHeight, "appHash", appHash)
// TODO: (on restart) replay mempool

View File

@@ -97,7 +97,7 @@ func startNewStateAndWaitForBlock(t *testing.T, consensusReplayConfig *cfg.Confi
require.NoError(t, err)
select {
case <-newBlockSub.Out():
case <-newBlockSub.Canceled():
case <-newBlockSub.Cancelled():
t.Fatal("newBlockSub was canceled")
case <-time.After(120 * time.Second):
t.Fatal("Timed out waiting for new block (see trace above)")
@@ -1198,7 +1198,6 @@ func (bs *mockBlockStore) PruneBlocks(height int64, state sm.State) (uint64, int
}
func (bs *mockBlockStore) DeleteLatestBlock() error { return nil }
func (bs *mockBlockStore) Close() error { return nil }
//---------------------------------------
// Test handshake/init chain

View File

@@ -1,4 +1,4 @@
# Merkle Tree
For smaller static data structures that don't require immutable snapshots or mutability;
For smaller static data structures that don't require immutable snapshots or mutability;
for instance the transactions and validation signatures of a block can be hashed using this simple merkle tree logic.

View File

@@ -9,8 +9,8 @@ module.exports = {
editLinks: true,
label: 'core',
algolia: {
id: "QQFROLBNZC",
key: "f1b68b96fb31d8aa4a54412c44917a26",
id: "BH4D9OD16A",
key: "59f0e2deb984aa9cdf2b3a5fd24ac501",
index: "tendermint"
},
versions: [

View File

@@ -99,4 +99,4 @@ configuration file that we can update with PRs.
Because the build processes are identical (as is the information contained
herein), this file should be kept in sync as much as possible with its
[counterpart in the Cosmos SDK
repo](https://github.com/cosmos/cosmos-sdk/blob/main/docs/README.md).
repo](https://github.com/cosmos/cosmos-sdk/blob/master/docs/DOCS_README.md).

View File

@@ -27,28 +27,22 @@ Usage:
abci-cli [command]
Available Commands:
batch run a batch of abci commands against an application
check_tx validate a transaction
commit commit the application state and return the Merkle root hash
completion Generate the autocompletion script for the specified shell
console start an interactive ABCI console for multiple commands
deliver_tx deliver a new transaction to the application
echo have the application echo a message
help Help about any command
info get some info about the application
kvstore ABCI demo example
prepare_proposal prepare proposal
process_proposal process proposal
query query the application state
test run integration tests
version print ABCI console version
batch Run a batch of abci commands against an application
check_tx Validate a tx
commit Commit the application state and return the Merkle root hash
console Start an interactive abci console for multiple commands
deliver_tx Deliver a new tx to the application
kvstore ABCI demo example
echo Have the application echo a message
help Help about any command
info Get some info about the application
query Query the application state
Flags:
--abci string either socket or grpc (default "socket")
--address string address of application socket (default "tcp://0.0.0.0:26658")
-h, --help help for abci-cli
--log_level string set the logger level (default "debug")
-v, --verbose print the command and results as if it were a console session
--abci string socket or grpc (default "socket")
--address string address of application socket (default "tcp://127.0.0.1:26658")
-h, --help help for abci-cli
-v, --verbose print the command and results as if it were a console session
Use "abci-cli [command] --help" for more information about a command.
```
@@ -64,51 +58,47 @@ purposes.
We'll start a kvstore application, which was installed at the same time
as `abci-cli` above. The kvstore just stores transactions in a merkle
tree. Its code can be found
[here](https://github.com/tendermint/tendermint/blob/main/abci/cmd/abci-cli/abci-cli.go)
and looks like the following:
tree.
Its code can be found
[here](https://github.com/tendermint/tendermint/blob/v0.34.x/abci/cmd/abci-cli/abci-cli.go)
and looks like:
```go
func cmdKVStore(cmd *cobra.Command, args []string) error {
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
logger := log.NewTMLogger(log.NewSyncWriter(os.Stdout))
// Create the application - in memory or persisted to disk
var app types.Application
if flagPersist == "" {
var err error
flagPersist, err = os.MkdirTemp("", "persistent_kvstore_tmp")
if err != nil {
return err
}
}
app = kvstore.NewPersistentKVStoreApplication(flagPersist)
app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore"))
// Create the application - in memory or persisted to disk
var app types.Application
if flagPersist == "" {
app = kvstore.NewKVStoreApplication()
} else {
app = kvstore.NewPersistentKVStoreApplication(flagPersist)
app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore"))
}
// Start the listener
srv, err := server.NewServer(flagAddress, flagAbci, app)
if err != nil {
return err
}
srv.SetLogger(logger.With("module", "abci-server"))
if err := srv.Start(); err != nil {
return err
}
// Start the listener
srv, err := server.NewServer(flagAddrD, flagAbci, app)
if err != nil {
return err
}
srv.SetLogger(logger.With("module", "abci-server"))
if err := srv.Start(); err != nil {
return err
}
// Stop upon receiving SIGTERM or CTRL-C.
tmos.TrapSignal(logger, func() {
// Cleanup
if err := srv.Stop(); err != nil {
logger.Error("Error while stopping server", "err", err)
}
})
// Stop upon receiving SIGTERM or CTRL-C.
tmos.TrapSignal(logger, func() {
// Cleanup
srv.Stop()
})
// Run forever.
select {}
// Run forever.
select {}
}
```
Start the application by running:
Start by running:
```sh
abci-cli kvstore
@@ -173,32 +163,32 @@ Try running these commands:
-> data: hello
-> data.hex: 0x68656C6C6F
> info
> info
-> code: OK
-> data: {"size":0}
-> data.hex: 0x7B2273697A65223A307D
> prepare_proposal "abc"
-> code: OK
-> log: Succeeded. Tx: abc
-> log: Succeeded. Tx: abc action: UNMODIFIED
> process_proposal "abc"
-> code: OK
-> status: ACCEPT
> commit
> commit
-> code: OK
-> data.hex: 0x0000000000000000
> deliver_tx "abc"
-> code: OK
> info
> info
-> code: OK
-> data: {"size":1}
-> data.hex: 0x7B2273697A65223A317D
> commit
> commit
-> code: OK
-> data.hex: 0x0200000000000000
@@ -214,7 +204,7 @@ Try running these commands:
> deliver_tx "def=xyz"
-> code: OK
> commit
> commit
-> code: OK
-> data.hex: 0x0400000000000000
@@ -229,9 +219,11 @@ Try running these commands:
> prepare_proposal "preparedef"
-> code: OK
-> log: Succeeded. Tx: replacedef
-> log: Succeeded. Tx: def action: ADDED
-> code: OK
-> log: Succeeded. Tx: preparedef action: REMOVED
> process_proposal "replacedef"
> process_proposal "def"
-> code: OK
-> status: ACCEPT
@@ -253,21 +245,21 @@ Try running these commands:
Note that if we do `deliver_tx "abc"` it will store `(abc, abc)`, but if
we do `deliver_tx "abc=efg"` it will store `(abc, efg)`.
You could put the commands in a file and run
Similarly, you could put the commands in a file and run
`abci-cli --verbose batch < myfile`.
Note that the `abci-cli` is designed strictly for testing and debugging. In a real
deployment, the role of sending messages is taken by Tendermint, which
connects to the app using three separate connections, each with its own
pattern of messages.
For examples of running an ABCI app with Tendermint, see the
[getting started guide](./getting-started.md).
## Bounties
Want to write an app in your favorite language?! We'd be happy
to add you to our [ecosystem](https://github.com/tendermint/awesome#ecosystem)!
See [funding](https://github.com/interchainio/funding) opportunities from the
[Interchain Foundation](https://interchain.io) for implementations in new languages and more.
The `abci-cli` is designed strictly for testing and debugging. In a real
deployment, the role of sending messages is taken by Tendermint, which
connects to the app using three separate connections, each with its own
pattern of messages.
For examples of running an ABCI app with
Tendermint, see the [getting started guide](./getting-started.md).
Next is the ABCI specification.

View File

@@ -11,10 +11,9 @@ application you want to run. So, to run a complete blockchain that does
something useful, you must start two programs: one is Tendermint Core,
the other is your application, which can be written in any programming
language. Recall from [the intro to
ABCI](../introduction/what-is-tendermint.md#abci-overview) that Tendermint Core
handles all the p2p and consensus stuff, and just forwards transactions to the
ABCI](../introduction/what-is-tendermint.md#abci-overview) that Tendermint Core handles all the p2p and consensus stuff, and just forwards transactions to the
application when they need to be validated, or when they're ready to be
executed and committed.
committed to a block.
In this guide, we show you some examples of how to run an application
using Tendermint.
@@ -23,8 +22,7 @@ using Tendermint.
The first apps we will work with are written in Go. To install them, you
need to [install Go](https://golang.org/doc/install), put
`$GOPATH/bin` in your `$PATH` and enable go modules. If you use `bash`,
follow these instructions:
`$GOPATH/bin` in your `$PATH` and enable go modules with these instructions:
```bash
echo export GOPATH=\"\$HOME/go\" >> ~/.bash_profile
@@ -33,48 +31,17 @@ echo export PATH=\"\$PATH:\$GOPATH/bin\" >> ~/.bash_profile
Then run
```bash
```sh
go get github.com/tendermint/tendermint
cd $GOPATH/src/github.com/tendermint/tendermint
make install_abci
```
Now you should have the `abci-cli` installed; run `abci-cli` to see the list of commands:
Now you should have the `abci-cli` installed; you'll notice the `kvstore`
command, an example application written
in Go. See below for an application written in JavaScript.
```
Usage:
abci-cli [command]
Available Commands:
batch run a batch of abci commands against an application
check_tx validate a transaction
commit commit the application state and return the Merkle root hash
completion Generate the autocompletion script for the specified shell
console start an interactive ABCI console for multiple commands
deliver_tx deliver a new transaction to the application
echo have the application echo a message
help Help about any command
info get some info about the application
kvstore ABCI demo example
prepare_proposal prepare proposal
process_proposal process proposal
query query the application state
test run integration tests
version print ABCI console version
Flags:
--abci string either socket or grpc (default "socket")
--address string address of application socket (default "tcp://0.0.0.0:26658")
-h, --help help for abci-cli
--log_level string set the logger level (default "debug")
-v, --verbose print the command and results as if it were a console session
Use "abci-cli [command] --help" for more information about a command.
```
You'll notice the `kvstore` command, an example application written in Go.
Now, let's run an app!
Now, let's run some apps!
## KVStore - A First Example
@@ -101,7 +68,7 @@ tendermint node
```
If you have used Tendermint, you may want to reset the data for a new
blockchain by running `tendermint unsafe-reset-all`. Then you can run
blockchain by running `tendermint unsafe_reset_all`. Then you can run
`tendermint node` to start Tendermint, and connect to the app. For more
details, see [the guide on using Tendermint](../tendermint-core/using-tendermint.md).
@@ -197,3 +164,47 @@ curl -s 'localhost:26657/abci_query?data="name"'
Try some other transactions and queries to make sure everything is
working!
## CounterJS - Example in Another Language
We also want to run applications in another language - in this case,
we'll run a Javascript version of the `counter`. To run it, you'll need
to [install node](https://nodejs.org/en/download/).
You'll also need to fetch the relevant repository, from
[here](https://github.com/tendermint/js-abci), then install it:
```sh
git clone https://github.com/tendermint/js-abci.git
cd js-abci
npm install abci
```
Kill the previous `counter` and `tendermint` processes. Now run the app:
```sh
node example/counter.js
```
In another window, reset and start `tendermint`:
```sh
tendermint unsafe_reset_all
tendermint node
```
Once again, you should see blocks streaming by - but now, our
application is written in Javascript! Try sending some transactions, and
like before - the results should be the same:
```sh
# ok
curl localhost:26657/broadcast_tx_commit?tx=0x00
# invalid nonce
curl localhost:26657/broadcast_tx_commit?tx=0x05
# ok
curl localhost:26657/broadcast_tx_commit?tx=0x01
```
Neat, eh?

View File

@@ -84,7 +84,7 @@ the `psql` indexer type.
Example:
```shell
psql ... -f state/indexer/sink/psql/schema.sql
$ psql ... -f state/indexer/sink/psql/schema.sql
```
## Default Indexes

View File

@@ -78,7 +78,6 @@ Note the context/background should be written in the present tense.
- [ADR-039: Peer-Behaviour](./adr-039-peer-behaviour.md)
- [ADR-063: Privval-gRPC](./adr-063-privval-grpc.md)
- [ADR-067: Mempool Refactor](./adr-067-mempool-refactor.md)
- [ADR-071: Proposer-Based Timestamps](./adr-071-proposer-based-timestamps.md)
- [ADR-075: RPC Event Subscription Interface](./adr-075-rpc-subscription.md)
- [ADR-079: Ed25519 Verification](./adr-079-ed25519-verification.md)
- [ADR-081: Protocol Buffers Management](./adr-081-protobuf-mgmt.md)
@@ -115,6 +114,7 @@ None
- [ADR-064: Batch Verification](./adr-064-batch-verification.md)
- [ADR-068: Reverse-Sync](./adr-068-reverse-sync.md)
- [ADR-069: Node Initialization](./adr-069-flexible-node-initialization.md)
- [ADR-071: Proposer-Based Timestamps](./adr-071-proposer-based-timestamps.md)
- [ADR-073: Adopt LibP2P](./adr-073-libp2p.md)
- [ADR-074: Migrate Timeout Parameters to Consensus Parameters](./adr-074-timeout-params.md)
- [ADR-080: Reverse Sync](./adr-080-reverse-sync.md)

View File

@@ -61,7 +61,7 @@ The following protocols and application features require a reliable source of ti
* Tendermint Light Clients [rely on correspondence between their known time](https://github.com/tendermint/tendermint/blob/main/spec/light-client/verification/README.md#definitions-1) and the block time for block verification.
* Tendermint Evidence validity is determined [either in terms of heights or in terms of time](https://github.com/tendermint/tendermint/blob/8029cf7a0fcc89a5004e173ec065aa48ad5ba3c8/spec/consensus/evidence.md#verification).
* Unbonding of staked assets in the Cosmos Hub [occurs after a period of 21 days](https://github.com/cosmos/governance/blob/ce75de4019b0129f6efcbb0e752cd2cc9e6136d3/params-change/Staking.md#unbondingtime).
* IBC packets can use either a [timestamp or a height to timeout packet delivery](https://docs.cosmos.network/v0.45/ibc/overview.html#acknowledgements)
* IBC packets can use either a [timestamp or a height to timeout packet delivery](https://docs.cosmos.network/v0.44/ibc/overview.html#acknowledgements)
Finally, inflation distribution in the Cosmos Hub uses an approximation of time to calculate an annual percentage rate.
This approximation of time is calculated using [block heights with an estimated number of blocks produced in a year](https://github.com/cosmos/governance/blob/master/params-change/Mint.md#blocksperyear).

View File

@@ -39,7 +39,7 @@ When writing a p2p service, there are two primary responsibilities:
The first responsibility is handled by the Switch:
- Responsible for routing connections between peers
- Notably *only handles TCP connections*; RPC/HTTP is separate
- Notably _only handles TCP connections_; RPC/HTTP is separate
- Is a dependency for every reactor; all reactors expose a function `setSwitch`
- Holds onto channels (channels on the TCP connection--NOT Go channels) and uses them to route
- Is a global object, with a global namespace for messages
@@ -56,7 +56,7 @@ The second responsibility is handled by a combination of the PEX and the Address
Here are some relevant facts about TCP:
1. All TCP connections have a "frame window size" that ties the packet size to the connection's "confidence"; i.e., if you are sending packets along a new connection, you must start out with small packets. As the packets are received successfully, you can start to send larger and larger packets. (This curve is illustrated below.) This means that TCP connections are slow to spin up.
2. The syn/ack process also means that there's a high overhead for small, frequent messages
2. The syn/ack process also means that there's a high overhead for small, frequent messages
3. Sockets are represented by file descriptors.
![tcp](../imgs/tcp-window.png)
@@ -114,7 +114,7 @@ Furthermore, all reactors expose:
The `receive` method can be called many times by the mconnection. It has the same signature across all reactors.
The `addReactor` call does a for loop over all the channels on the reactor and creates a map of channel IDs->reactors. The switch holds onto this map, and passes it to the *transport*, a thin wrapper around TCP connections.
The `addReactor` call does a for loop over all the channels on the reactor and creates a map of channel IDs->reactors. The switch holds onto this map, and passes it to the _transport_, a thin wrapper around TCP connections.
The following is an exhaustive (?) list of reactors:

View File

@@ -6,7 +6,7 @@ order: 4
Tendermint is software for securely and consistently replicating an
application on many machines. By securely, we mean that Tendermint works
as long as less than 1/3 of machines fail in arbitrary ways. By consistently,
even if up to 1/3 of machines fail in arbitrary ways. By consistently,
we mean that every non-faulty machine sees the same transaction log and
computes the same state. Secure and consistent replication is a
fundamental problem in distributed systems; it plays a critical role in
@@ -22,14 +22,15 @@ reformalization of BFT in a more modern setting, with emphasis on
peer-to-peer networking and cryptographic authentication. The name
derives from the way transactions are batched in blocks, where each
block contains a cryptographic hash of the previous one, forming a
chain.
chain. In practice, the blockchain data structure actually optimizes BFT
design.
Tendermint consists of two chief technical components: a blockchain
consensus engine and a generic application interface. The consensus
engine, called Tendermint Core, ensures that the same transactions are
recorded on every machine in the same order. The application interface,
called the Application BlockChain Interface (ABCI), delivers the transactions
to applications for processing. Unlike other
called the Application BlockChain Interface (ABCI), enables the
transactions to be processed in any programming language. Unlike other
blockchain and consensus solutions, which come pre-packaged with built
in state machines (like a fancy key-value store, or a quirky scripting
language), developers can use Tendermint for BFT state machine
@@ -50,13 +51,13 @@ Hyperledger's Burrow.
### Zookeeper, etcd, consul
Zookeeper, etcd, and consul are all implementations of key-value stores
atop a classical, non-BFT consensus algorithm. Zookeeper uses an
algorithm called Zookeeper Atomic Broadcast, while etcd and consul use
the Raft log replication algorithm. A
Zookeeper, etcd, and consul are all implementations of a key-value store
atop a classical, non-BFT consensus algorithm. Zookeeper uses a version
of Paxos called Zookeeper Atomic Broadcast, while etcd and consul use
the Raft consensus algorithm, which is much younger and simpler. A
typical cluster contains 3-5 machines, and can tolerate crash failures
in less than 1/2 of the machines (e.g., 1 out of 3 or 2 out of 5),
but even a single Byzantine fault can jeopardize the whole system.
in up to 1/2 of the machines, but even a single Byzantine fault can
destroy the system.
Each offering provides a slightly different implementation of a
featureful key-value store, but all are generally focused around
@@ -65,8 +66,8 @@ configuration, service discovery, locking, leader-election, and so on.
Tendermint is in essence similar software, but with two key differences:
- It is Byzantine Fault Tolerant, meaning it can only tolerate less than 1/3
of machines failing, but those failures can include arbitrary behavior -
- It is Byzantine Fault Tolerant, meaning it can only tolerate up to a
1/3 of failures, but those failures can include arbitrary behavior -
including hacking and malicious attacks. - It does not specify a
particular application, like a fancy key-value store. Instead, it
focuses on arbitrary state machine replication, so developers can build
@@ -105,8 +106,8 @@ docker containers, modules it calls "chaincode". It uses an
implementation of [PBFT](http://pmg.csail.mit.edu/papers/osdi99.pdf).
from a team at IBM that is [augmented to handle potentially
non-deterministic
chaincode](https://drops.dagstuhl.de/opus/volltexte/2017/7093/pdf/LIPIcs-OPODIS-2016-24.pdf).
It is possible to implement this docker-based behavior as an ABCI app in
chaincode](https://drops.dagstuhl.de/opus/volltexte/2017/7093/pdf/LIPIcs-OPODIS-2016-24.pdf) It is
possible to implement this docker-based behavior as a ABCI app in
Tendermint, though extending Tendermint to handle non-determinism
remains for future work.
@@ -142,22 +143,24 @@ in design and suffers from "spaghetti code".
Another problem with monolithic design is that it limits you to the
language of the blockchain stack (or vice versa). In the case of
Ethereum which supports a Turing-complete bytecode virtual-machine, it
limits you to languages that compile down to that bytecode; while the
[list](https://github.com/pirapira/awesome-ethereum-virtual-machine#programming-languages-that-compile-into-evm)
is growing, it is still very limited.
limits you to languages that compile down to that bytecode; today, those
are Serpent and Solidity.
In contrast, our approach is to decouple the consensus engine and P2P
layers from the details of the state of the particular
layers from the details of the application state of the particular
blockchain application. We do this by abstracting away the details of
the application to an interface, which is implemented as a socket
protocol.
Thus we have an interface, the Application BlockChain Interface (ABCI),
and its primary implementation, the Tendermint Socket Protocol (TSP, or
Teaspoon).
### Intro to ABCI
[Tendermint Core](https://github.com/tendermint/tendermint), the
"consensus engine", communicates with the application via a socket
protocol that satisfies the ABCI, the Tendermint Socket Protocol
(TSP, or Teaspoon).
[Tendermint Core](https://github.com/tendermint/tendermint) (the
"consensus engine") communicates with the application via a socket
protocol that satisfies the ABCI.
To draw an analogy, lets talk about a well-known cryptocurrency,
Bitcoin. Bitcoin is a cryptocurrency blockchain where each node
@@ -177,7 +180,7 @@ The application will be responsible for
- Allowing clients to query the UTXO database.
Tendermint is able to decompose the blockchain design by offering a very
simple API (i.e. the ABCI) between the application process and consensus
simple API (ie. the ABCI) between the application process and consensus
process.
The ABCI consists of 3 primary message types that get delivered from the
@@ -236,7 +239,8 @@ Solidity on Ethereum is a great language of choice for blockchain
applications because, among other reasons, it is a completely
deterministic programming language. However, it's also possible to
create deterministic applications using existing popular languages like
Java, C++, Python, or Go, by avoiding
Java, C++, Python, or Go. Game programmers and blockchain developers are
already familiar with creating deterministic programs by avoiding
sources of non-determinism such as:
- random number generators (without deterministic seeding)
@@ -267,15 +271,14 @@ committed in a chain, with one block at each **height**. A block may
fail to be committed, in which case the protocol moves to the next
**round**, and a new validator gets to propose a block for that height.
Two stages of voting are required to successfully commit a block; we
call them **pre-vote** and **pre-commit**.
call them **pre-vote** and **pre-commit**. A block is committed when
more than 2/3 of validators pre-commit for the same block in the same
round.
There is a picture of a couple doing the polka because validators are
doing something like a polka dance. When more than two-thirds of the
validators pre-vote for the same block, we call that a **polka**. Every
pre-commit must be justified by a polka in the same round.
A block is committed when
more than 2/3 of validators pre-commit for the same block in the same
round.
Validators may fail to commit a block for a number of reasons; the
current proposer may be offline, or the network may be slow. Tendermint

View File

@@ -1,4 +1,3 @@
#!/bin/bash
mkdir -p .vuepress/public/rpc/
cp -a ../rpc/openapi/* .vuepress/public/rpc/
cp -a ../rpc/openapi/ .vuepress/public/rpc/

View File

@@ -1,23 +0,0 @@
---
order: 1
parent:
title: Tendermint Quality Assurance
description: This is a report on the process followed and results obtained when running v0.34.x on testnets
order: 2
---
# Tendermint Quality Assurance
This directory keeps track of the process followed by the Tendermint Core team
for Quality Assurance before cutting a release.
This directory is maintained in multiple branches. On each release branch,
the contents of this directory reflect the status of the process
at the time the Quality Assurance process was applied for that release.
File [method](./method.md) keeps track of the process followed to obtain the results
used to decide if a release is passing the Quality Assurance process.
The results obtained in each release are stored in their own directory.
The following releases have undergone the Quality Assurance process:
* [v0.34.x](./v034/), which was tested just before releasing v0.34.22
* [v0.37.x](./v037/), with v0.34.x acting as a baseline

View File

@@ -1,214 +0,0 @@
---
order: 1
title: Method
---
# Method
This document provides a detailed description of the QA process.
It is intended to be used by engineers reproducing the experimental setup for future tests of Tendermint.
The (first iteration of the) QA process as described [in the RELEASES.md document][releases]
was applied to version v0.34.x in order to have a set of results acting as benchmarking baseline.
This baseline is then compared with results obtained in later versions.
Out of the testnet-based test cases described in [the releases document][releases], we focused on two:
_200 Node Test_ and _Rotating Nodes Test_.
[releases]: https://github.com/tendermint/tendermint/blob/v0.37.x/RELEASES.md#large-scale-testnets
## Software Dependencies
### Infrastructure Requirements to Run the Tests
* An account at Digital Ocean (DO), with a high droplet limit (>202)
* The machine to orchestrate the tests should have the following installed:
* A clone of the [testnet repository][testnet-repo]
* This repository contains all the scripts mentioned in the remainder of this section
* [Digital Ocean CLI][doctl]
* [Terraform CLI][Terraform]
* [Ansible CLI][Ansible]
[testnet-repo]: https://github.com/interchainio/tendermint-testnet
[Ansible]: https://docs.ansible.com/ansible/latest/index.html
[Terraform]: https://www.terraform.io/docs
[doctl]: https://docs.digitalocean.com/reference/doctl/how-to/install/
### Requirements for Result Extraction
* Matlab or Octave
* [Prometheus][prometheus] server installed
* blockstore DB of one of the full nodes in the testnet
* Prometheus DB
[prometheus]: https://prometheus.io/
## 200 Node Testnet
### Running the test
This section explains how the tests were carried out for reproducibility purposes.
1. [If you haven't done it before]
Follow steps 1-4 of the `README.md` at the top of the testnet repository to configure Terraform, and `doctl`.
2. Copy file `testnets/testnet200.toml` onto `testnet.toml` (do NOT commit this change)
3. Set the variable `VERSION_TAG` in the `Makefile` to the git hash that is to be tested.
4. Follow steps 5-10 of the `README.md` to configure and start the 200 node testnet
* WARNING: Do NOT forget to run `make terraform-destroy` as soon as you are done with the tests (see step 9)
5. As a sanity check, connect to the Prometheus node's web interface and check the graph for the `tendermint_consensus_height` metric.
All nodes should be increasing their heights.
6. `ssh` into the `testnet-load-runner`, then copy script `script/200-node-loadscript.sh` and run it from the load runner node.
* Before running it, you need to edit the script to provide the IP address of a full node.
This node will receive all transactions from the load runner node.
* This script will take about 40 mins to run
* It runs 90-second experiments in a loop with different loads
7. Run `make retrieve-data` to gather all relevant data from the testnet into the orchestrating machine
8. Verify that the data was collected without errors
* at least one blockstore DB for a Tendermint validator
* the Prometheus database from the Prometheus node
* for extra care, you can run `zip -T` on the `prometheus.zip` file and (one of) the `blockstore.db.zip` file(s)
9. **Run `make terraform-destroy`**
* Don't forget to type `yes`! Otherwise you're in trouble.
### Result Extraction
The method for extracting the results described here is highly manual (and exploratory) at this stage.
The Core team should improve it at every iteration to increase the amount of automation.
#### Steps
1. Unzip the blockstore into a directory
2. Extract the latency report and the raw latencies for all the experiments. Run these commands from the directory containing the blockstore
* `go run github.com/tendermint/tendermint/test/loadtime/cmd/report@3ec6e424d --database-type goleveldb --data-dir ./ > results/report.txt`
* `go run github.com/tendermint/tendermint/test/loadtime/cmd/report@3ec6e424d --database-type goleveldb --data-dir ./ --csv results/raw.csv`
3. File `report.txt` contains an unordered list of experiments with varying concurrent connections and transaction rate
* Create files `report01.txt`, `report02.txt`, `report04.txt` and, for each experiment in file `report.txt`,
copy its related lines to the filename that matches the number of connections.
* Sort the experiments in `report01.txt` in ascending tx rate order. Likewise for `report02.txt` and `report04.txt`.
4. Generate file `report_tabbed.txt` by showing the contents of `report01.txt`, `report02.txt`, and `report04.txt` side by side
* This effectively creates a table where rows are a particular tx rate and columns are a particular number of websocket connections.
5. Extract the raw latencies from file `raw.csv` using the following bash loop. This creates a `.csv` file and a `.dat` file per experiment.
The format of the `.dat` files is amenable to loading them as matrices in Octave
```bash
# Collect the experiment UUIDs in report order (reports were sorted in step 3).
uuids=($(cat report01.txt report02.txt report04.txt | grep '^Experiment ID: ' | awk '{ print $3 }'))
c=1
# One pass per (connections, rate) pair: pull that experiment's rows out of
# raw.csv, then convert them into a space-separated two-column .dat file
# (timestamp, latency) that Octave can load as a matrix.
for i in 01 02 04; do
for j in 0025 0050 0100 0200; do
echo $i $j $c "${uuids[$c]}"
filename=c${i}_r${j}
grep ${uuids[$c]} raw.csv > ${filename}.csv
cat ${filename}.csv | tr , ' ' | awk '{ print $2, $3 }' > ${filename}.dat
c=$(expr $c + 1)
done
done
```
6. Enter Octave
7. Load all `.dat` files generated in step 5 into matrices using this Octave code snippet
```octave
conns = { "01"; "02"; "04" };
rates = { "0025"; "0050"; "0100"; "0200" };
for i = 1:length(conns)
for j = 1:length(rates)
filename = strcat("c", conns{i}, "_r", rates{j}, ".dat");
load("-ascii", filename);
endfor
endfor
```
8. Set variable release to the current release undergoing QA
```octave
release = "v0.34.x";
```
9. Generate a plot with all (or some) experiments, where the X axis is the experiment time,
and the y axis is the latency of transactions.
The following snippet plots all experiments.
```octave
legends = {};
hold off;
for i = 1:length(conns)
for j = 1:length(rates)
data_name = strcat("c", conns{i}, "_r", rates{j});
l = strcat("c=", conns{i}, " r=", rates{j});
m = eval(data_name); plot((m(:,1) - min(m(:,1))) / 1e+9, m(:,2) / 1e+9, ".");
hold on;
legends(1, end+1) = l;
endfor
endfor
legend(legends, "location", "northeastoutside");
xlabel("experiment time (s)");
ylabel("latency (s)");
t = sprintf("200-node testnet - %s", release);
title(t);
```
10. Consider adjusting the axis, in case you want to compare your results to the baseline, for instance
```octave
axis([0, 100, 0, 30], "tic");
```
11. Use Octave's GUI menu to save the plot (e.g. as `.png`)
12. Repeat steps 9 and 10 to obtain as many plots as deemed necessary.
13. To generate a latency vs throughput plot, using the raw CSV file generated
in step 2, follow the instructions for the [`latency_throughput.py`] script.
[`latency_throughput.py`]: ../../scripts/qa/reporting/README.md
#### Extracting Prometheus Metrics
1. Stop the prometheus server if it is running as a service (e.g. a `systemd` unit).
2. Unzip the prometheus database retrieved from the testnet, and move it to replace the
local prometheus database (see the sketch after this list).
3. Start the prometheus server and make sure no error logs appear at start up.
4. Introduce the metrics you want to gather or plot.
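A hedged shell sketch of steps 1-3; the systemd unit name and data directory are assumptions that depend on how Prometheus was installed:

```bash
# Assumes a systemd-managed Prometheus storing its data in /var/lib/prometheus;
# adjust paths to your installation.
sudo systemctl stop prometheus
unzip prometheus.zip -d /tmp/testnet-prom
sudo mv /var/lib/prometheus/data /var/lib/prometheus/data.bak
sudo mv /tmp/testnet-prom /var/lib/prometheus/data
sudo systemctl start prometheus
journalctl -u prometheus -n 50 --no-pager  # check for startup errors
```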
## Rotating Node Testnet
### Running the test
This section explains how the tests were carried out for reproducibility purposes.
1. [If you haven't done it before]
Follow steps 1-4 of the `README.md` at the top of the testnet repository to configure Terraform, and `doctl`.
2. Copy file `testnet_rotating.toml` onto `testnet.toml` (do NOT commit this change)
3. Set variable `VERSION_TAG` to the git hash that is to be tested.
4. Run `make terraform-apply EPHEMERAL_SIZE=25`
* WARNING: Do NOT forget to run `make terraform-destroy` as soon as you are done with the tests
5. Follow steps 6-10 of the `README.md` to configure and start the "stable" part of the rotating node testnet
6. As a sanity check, connect to the Prometheus node's web interface and check the graph for the `tendermint_consensus_height` metric.
All nodes should be increasing their heights.
7. On a different shell,
* run `make runload ROTATE_CONNECTIONS=X ROTATE_TX_RATE=Y`
* `X` and `Y` should reflect a load below the saturation point (see, e.g.,
[this paragraph](./v034/README.md#finding-the-saturation-point) for further info)
8. Run `make rotate` to start the script that creates the ephemeral nodes, and kills them when they are caught up.
* WARNING: If you run this command from your laptop, the laptop needs to be up and connected for full length
of the experiment.
9. Let the testnet run until the height of the chain reaches 3000.
10. Once the rotate script has completed two further iterations (i.e., all ephemeral nodes have caught up twice)
    after height 3000 was reached, stop `make rotate`
11. Run `make retrieve-data` to gather all relevant data from the testnet into the orchestrating machine
12. Verify that the data was collected without errors
* at least one blockstore DB for a Tendermint validator
* the Prometheus database from the Prometheus node
* for extra care, you can run `zip -T` on the `prometheus.zip` file and (one of) the `blockstore.db.zip` file(s)
13. **Run `make terraform-destroy`**
Steps 8 to 10 are highly manual at the moment and will be improved in future iterations.
### Result Extraction
In order to obtain a latency plot, follow the instructions above for the 200 node experiment, but note that:
* the `results.txt` file contains only one experiment, so
* no `for` loops are needed; the extraction reduces to the sketch below
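
A minimal sketch of the simplified extraction, mirroring step 5 of the 200 node procedure (the `rotating.*` output file names are assumptions):

```bash
# Single experiment: no loops needed
uuid=$(grep '^Experiment ID: ' results.txt | awk '{ print $3 }')
grep "${uuid}" raw.csv > rotating.csv
tr , ' ' < rotating.csv | awk '{ print $2, $3 }' > rotating.dat
```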
As for Prometheus, the same method as for the 200 node experiment can be applied.

View File

@@ -1,278 +0,0 @@
---
order: 1
parent:
title: Tendermint Quality Assurance Results for v0.34.x
description: This is a report on the results obtained when running v0.34.x on testnets
order: 2
---
# v0.34.x
## 200 Node Testnet
### Finding the Saturation Point
The first goal when examining the results of the tests is identifying the saturation point.
The saturation point is a setup with a transaction load big enough to prevent the testnet
from being stable: the load runner tries to produce slightly more transactions than can
be processed by the testnet.
The following table summarizes the results for v0.34.x, for the different experiments
(extracted from file [`v034_report_tabbed.txt`](./img/v034_report_tabbed.txt)).
The X axis of this table is `c`, the number of connections created by the load runner process to the target node.
The Y axis of this table is `r`, the rate or number of transactions issued per second.
| | c=1 | c=2 | c=4 |
| :--- | ----: | ----: | ----: |
| r=25 | 2225 | 4450 | 8900 |
| r=50 | 4450 | 8900 | 17800 |
| r=100 | 8900 | 17800 | 35600 |
| r=200 | 17800 | 35600 | 38660 |
The table shows the number of 1024-byte-long transactions that were produced by the load runner,
and processed by Tendermint, during the 90 seconds of the experiment's duration.
Each cell in the table refers to an experiment with a particular number of websocket connections (`c`)
to a chosen validator, and the number of transactions per second that the load runner
tries to produce (`r`). Note that the overall load that the tool attempts to generate is $c \cdot r$.
We can see that the saturation point is beyond the diagonal that spans cells
* `r=200,c=2`
* `r=100,c=4`
given that the total transactions should be close to the product of the rate, the number of connections,
and the experiment time (89 seconds, since the last batch never gets sent).
The experiments beyond the saturation diagonal (in this case, only `r=200,c=4`) have in common that the total
number of transactions processed is noticeably less than the product $c \cdot r \cdot 89$,
which is the expected number of transactions when the system is able to deal well with the
load.
With `r=200,c=4`, we obtained 38660 whereas the theoretical number of transactions should
have been $200 \cdot 4 \cdot 89 = 71200$.
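As a quick sanity check, the expected-versus-observed comparison can be reproduced in Octave with the numbers from the table above:

```octave
% Values taken from the table above
r = 200; c = 4; t = 89;   % rate, connections, effective duration (s)
expected = r * c * t      % 71200
observed = 38660;
observed / expected       % ~0.54, well below 1: the setup is past saturation
```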
At this point, we chose an experiment at the limit of the saturation diagonal,
in order to further study the performance of this release.
**The chosen experiment is `r=200,c=2`**.
This is a plot of the CPU load (average over 1 minute, as output by `top`) of the load runner for `r=200,c=2`,
where we can see that the load stays close to 0 most of the time.
![load-load-runner](./img/v034_r200c2_load-runner.png)
### Examining latencies
The method described [here](../method.md) allows us to plot the latencies of transactions
for all experiments.
![all-latencies](./img/v034_200node_latencies.png)
As we can see, even the experiments beyond the saturation diagonal managed to keep
transaction latency stable (i.e. not constantly increasing).
Our interpretation for this is that contention within Tendermint was propagated,
via the websockets, to the load runner,
hence the load runner could not produce the target load, but a fraction of it.
Further examination of the Prometheus data (see below) showed that the mempool contained many transactions
at steady state, and that occasional growth was quickly followed by a return to that steady state. This demonstrates
that the Tendermint network was able to process transactions at least as quickly as they
were submitted to the mempool. Moreover, the test script made sure that, at the end of an experiment, the
mempool was empty, so that all transactions submitted to the chain were processed.
Finally, the number of points in the plot appears to be much smaller than expected given the
number of transactions in each experiment, particularly close to or above the saturation diagonal.
This is a visual effect: what appear to be single points are actually potentially huge
clusters of points. To corroborate this, we zoomed into the plot above by setting (carefully chosen)
tiny axis intervals. The cluster shown below looks like a single point in the plot above.
![all-latencies-zoomed](./img/v034_200node_latencies_zoomed.png)
The plot of latencies can be used as a baseline to compare with other releases.
The following plot summarizes average latencies versus overall throughputs
across different numbers of WebSocket connections to the node into which
transactions are being loaded.
![latency-vs-throughput](./img/v034_latency_throughput.png)
### Prometheus Metrics on the Chosen Experiment
As mentioned [above](#finding-the-saturation-point), the chosen experiment is `r=200,c=2`.
This section further examines key metrics for this experiment extracted from Prometheus data.
#### Mempool Size
The mempool size, a count of the number of transactions in the mempool, was shown to be stable and homogeneous
at all full nodes. It did not exhibit any unconstrained growth.
The plot below shows the evolution over time of the cumulative number of transactions inside all full nodes' mempools
at a given time.
The two spikes that can be observed correspond to a period where consensus instances proceeded beyond the initial round
at some nodes.
![mempool-cumulative](./img/v034_r200c2_mempool_size.png)
The plot below shows the evolution of the average over all full nodes, which oscillates between 1500 and 2000
outstanding transactions.
![mempool-avg](./img/v034_r200c2_mempool_size_avg.png)
The peaks observed coincide with the moments when some nodes proceeded beyond the initial round of consensus (see below).
#### Peers
The number of peers was stable at all nodes.
It was higher for the seed nodes (around 140) than for the rest (between 21 and 74).
The fact that non-seed nodes reach more than 50 peers is due to #9548.
![peers](./img/v034_r200c2_peers.png)
#### Consensus Rounds per Height
Most heights took just one round, but some nodes needed to advance to round 1 at some point.
![rounds](./img/v034_r200c2_rounds.png)
#### Blocks Produced per Minute, Transactions Processed per Minute
The blocks produced per minute are the slope of this plot.
![heights](./img/v034_r200c2_heights.png)
Over a period of 2 minutes, the height goes from 530 to 569.
This results in an average of 19.5 blocks produced per minute.
The transactions processed per minute are the slope of this plot.
![total-txs](./img/v034_r200c2_total-txs.png)
Over a period of 2 minutes, the total goes from 64525 to 100125 transactions,
resulting in 17800 transactions per minute. However, we can see in the plot that
all transactions in the load are processed long before the two minutes.
If we adjust the time window when transactions are processed (approx. 105 seconds),
we obtain 20343 transactions per minute.
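These rates follow directly from the plotted values; in Octave:

```octave
% Values read off the plots above
(569 - 530) / 2               % 19.5 blocks per minute
(100125 - 64525) / 2          % 17800 txs per minute over the 2-minute window
(100125 - 64525) / (105/60)   % ~20343 txs per minute over the ~105 s of actual processing
```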
#### Memory Resident Set Size
Resident Set Size of all monitored processes is plotted below.
![rss](./img/v034_r200c2_rss.png)
The average over all processes oscillates around 1.2 GiB and does not demonstrate unconstrained growth.
![rss-avg](./img/v034_r200c2_rss_avg.png)
#### CPU utilization
The best metric from Prometheus to gauge CPU utilization in a Unix machine is `load1`,
as it usually appears in the
[output of `top`](https://www.digitalocean.com/community/tutorials/load-average-in-linux).
![load1](./img/v034_r200c2_load1.png)
It stays below 5 in most cases, which is generally considered an acceptable load.
### Test Result
**Result: N/A** (v0.34.x is the baseline)
Date: 2022-10-14
Version: 3ec6e424d6ae4c96867c2dcf8310572156068bb6
## Rotating Node Testnet
For this testnet, we will use a load that can safely be considered below the saturation
point for the size of this testnet (between 13 and 38 full nodes): `c=4,r=800`.
N.B.: The version of Tendermint used for these tests is affected by #9539.
However, the reduced load that reaches the mempools is orthogonal to the functionality
we are focusing on here.
### Latencies
The plot of all latencies can be seen in the following plot.
![rotating-all-latencies](./img/v034_rotating_latencies.png)
We can observe some very high latencies towards the end of the test.
Suspecting that they were duplicate transactions, we examined the raw latencies
file and discovered more than 100K duplicate transactions.
The following plot shows the latencies file where all duplicate transactions have
been removed, i.e., only the first occurrence of a duplicate transaction is kept.
![rotating-all-latencies-uniq](./img/v034_rotating_latencies_uniq.png)
This problem, existing in `v0.34.x`, will need to be addressed, perhaps in the same way
we addressed it when running the 200 node test with high loads: increasing the `cache_size`
configuration parameter.
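
For reference, such a deduplication can be done with an `awk` one-liner; the column holding the transaction identifier is an assumption here (adjust `TX_ID_COL` to the raw file's actual layout):

```bash
# Keep only the first occurrence of each transaction (column index is an assumption)
TX_ID_COL=4
awk -F, -v col="$TX_ID_COL" '!seen[$col]++' rotating.csv > rotating_uniq.csv
```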
### Prometheus Metrics
The set of metrics shown here is smaller than for the 200 node experiment.
We are only interested in those on which the catch-up process (blocksync) may have an impact.
#### Blocks and Transactions per minute
Just as shown for the 200 node test, the blocks produced per minute are the gradient of this plot.
![rotating-heights](./img/v034_rotating_heights.png)
Over a period of 5229 seconds, the height goes from 2 to 3638.
This results in an average of 41 blocks produced per minute.
The following plot shows only the heights reported by ephemeral nodes
(which are also included in the plot above). Note that the _height_ metric
is only shown _once the node has switched to consensus_, hence the gaps
when nodes are killed, wiped out, started from scratch, and catching up.
![rotating-heights-ephe](./img/v034_rotating_heights_ephe.png)
The transactions processed per minute are the gradient of this plot.
![rotating-total-txs](./img/v034_rotating_total-txs.png)
The small lines we see periodically close to `y=0` are the transactions that
ephemeral nodes start processing when they are caught up.
Over a period of 5229 seconds, the total goes from 0 to 387697 transactions,
resulting in 4449 transactions per minute. We can see some abrupt changes in
the plot's gradient; this will need to be investigated.
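Again, the rates follow from the plotted values; in Octave:

```octave
% Values read off the plots above
(3638 - 2) / (5229/60)   % ~41.7 blocks per minute
387697 / (5229/60)       % ~4449 txs per minute
```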
#### Peers
The plot below shows the evolution in peers throughout the experiment.
The periodic changes observed are due to the ephemeral nodes being stopped,
wiped out, and recreated.
![rotating-peers](./img/v034_rotating_peers.png)
The validators' plots are concentrated at the higher part of the graph, whereas the ephemeral nodes
are mostly at the lower part.
#### Memory Resident Set Size
The average Resident Set Size (RSS) over all processes seems stable, slightly growing toward the end.
This might be related to the increase in transaction load observed above.
![rotating-rss-avg](./img/v034_rotating_rss_avg.png)
The memory taken by the validators and the ephemeral nodes (when they are up) is comparable.
#### CPU utilization
The plot shows metric `load1` for all nodes.
![rotating-load1](./img/v034_rotating_load1.png)
It is contained under 5 most of the time, which is considered normal load.
The purple line, which follows a different pattern, is the validator receiving all
transactions, via RPC, from the load runner process.
### Test Result
**Result: N/A**
Date: 2022-10-10
Version: a28c987f5a604ff66b515dd415270063e6fb069d

View File

@@ -1,52 +0,0 @@
Experiment ID: 3d5cf4ef-1a1a-4b46-aa2d-da5643d2e81e │Experiment ID: 80e472ec-13a1-4772-a827-3b0c907fb51d │Experiment ID: 07aca6cf-c5a4-4696-988f-e3270fc6333b
│ │
Connections: 1 │ Connections: 2 │ Connections: 4
Rate: 25 │ Rate: 25 │ Rate: 25
Size: 1024 │ Size: 1024 │ Size: 1024
│ │
Total Valid Tx: 2225 │ Total Valid Tx: 4450 │ Total Valid Tx: 8900
Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0
Minimum Latency: 599.404362ms │ Minimum Latency: 448.145181ms │ Minimum Latency: 412.485729ms
Maximum Latency: 3.539686885s │ Maximum Latency: 3.237392049s │ Maximum Latency: 12.026665368s
Average Latency: 1.441485349s │ Average Latency: 1.441267946s │ Average Latency: 2.150192457s
Standard Deviation: 541.049869ms │ Standard Deviation: 525.040007ms │ Standard Deviation: 2.233852478s
│ │
Experiment ID: 953dc544-dd40-40e8-8712-20c34c3ce45e │Experiment ID: d31fc258-16e7-45cd-9dc8-13ab87bc0b0a │Experiment ID: 15d90a7e-b941-42f4-b411-2f15f857739e
│ │
Connections: 1 │ Connections: 2 │ Connections: 4
Rate: 50 │ Rate: 50 │ Rate: 50
Size: 1024 │ Size: 1024 │ Size: 1024
│ │
Total Valid Tx: 4450 │ Total Valid Tx: 8900 │ Total Valid Tx: 17800
Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0
Minimum Latency: 482.046942ms │ Minimum Latency: 435.458913ms │ Minimum Latency: 510.746448ms
Maximum Latency: 3.761483455s │ Maximum Latency: 7.175583584s │ Maximum Latency: 6.551497882s
Average Latency: 1.450408183s │ Average Latency: 1.681673116s │ Average Latency: 1.738083875s
Standard Deviation: 587.560056ms │ Standard Deviation: 1.147902047s │ Standard Deviation: 943.46522ms
│ │
Experiment ID: 9a0b9980-9ce6-4db5-a80a-65ca70294b87 │Experiment ID: df8fa4f4-80af-4ded-8a28-356d15018b43 │Experiment ID: d0e41c2c-89c0-4f38-8e34-ca07adae593a
│ │
Connections: 1 │ Connections: 2 │ Connections: 4
Rate: 100 │ Rate: 100 │ Rate: 100
Size: 1024 │ Size: 1024 │ Size: 1024
│ │
Total Valid Tx: 8900 │ Total Valid Tx: 17800 │ Total Valid Tx: 35600
Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0
Minimum Latency: 477.417219ms │ Minimum Latency: 564.29247ms │ Minimum Latency: 840.71089ms
Maximum Latency: 6.63744785s │ Maximum Latency: 6.988553219s │ Maximum Latency: 9.555312398s
Average Latency: 1.561216103s │ Average Latency: 1.76419063s │ Average Latency: 3.200941683s
Standard Deviation: 1.011333552s │ Standard Deviation: 1.068459423s │ Standard Deviation: 1.732346601s
│ │
Experiment ID: 493df3ee-4a36-4bce-80f8-6d65da66beda │Experiment ID: 13060525-f04f-46f6-8ade-286684b2fe50 │Experiment ID: 1777cbd2-8c96-42e4-9ec7-9b21f2225e4d
│ │
Connections: 1 │ Connections: 2 │ Connections: 4
Rate: 200 │ Rate: 200 │ Rate: 200
Size: 1024 │ Size: 1024 │ Size: 1024
│ │
Total Valid Tx: 17800 │ Total Valid Tx: 35600 │ Total Valid Tx: 38660
Total Negative Latencies: 0 │ Total Negative Latencies: 0 │ Total Negative Latencies: 0
Minimum Latency: 493.705261ms │ Minimum Latency: 955.090573ms │ Minimum Latency: 1.9485821s
Maximum Latency: 7.440921872s │ Maximum Latency: 10.086673491s │ Maximum Latency: 17.73103976s
Average Latency: 1.875510582s │ Average Latency: 3.438130099s │ Average Latency: 8.143862237s
Standard Deviation: 1.304336995s │ Standard Deviation: 1.966391574s │ Standard Deviation: 3.943140002s

View File

@@ -1,326 +0,0 @@
---
order: 1
parent:
title: Tendermint Quality Assurance Results for v0.37.x
description: This is a report on the results obtained when running v0.37.x on testnets
order: 2
---
# v0.37.x
## Issues discovered
During this iteration of the QA process, the following issues were found:
* (critical, fixed) [\#9533] - This bug caused full nodes to sometimes get stuck
when blocksyncing, requiring a manual restart to unblock them. Importantly,
this bug was also present in v0.34.x and the fix was also backported in
[\#9534].
* (critical, fixed) [\#9539] - `loadtime` is very likely to include more than
  one "=" character in transactions, which is rejected by the e2e application.
* (critical, fixed) [\#9581] - An absent Prometheus label made Tendermint crash
  when Prometheus metric collection was enabled.
* (non-critical, not fixed) [\#9548] - Full nodes can go over 50 connected
peers, which is not intended by the default configuration.
* (non-critical, not fixed) [\#9537] - With the default mempool cache setting,
  duplicated transactions are not rejected when gossiped and eventually flood
  all mempools. The 200 node testnets were thus run with a cache size of 200000
  (as opposed to the default 10000); see the sketch below.
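
A minimal sketch of how the cache can be raised on each node, assuming `$TMHOME` points at the node's home directory (the exact provisioning mechanism used in the testnets may differ):

```bash
# Raise the mempool cache from the default 10000 to the value used in these tests
sed -i 's/^cache_size = .*/cache_size = 200000/' "$TMHOME/config/config.toml"
```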
## 200 Node Testnet
### Finding the Saturation Point
The first goal is to identify the saturation point and compare it with the baseline (v0.34.x).
For further details, see [this paragraph](../v034/README.md#finding-the-saturation-point)
in the baseline version.
The following table summarizes the results for v0.37.x, for the different experiments
(extracted from file [`v037_report_tabbed.txt`](./img/v037_report_tabbed.txt)).
The X axis of this table is `c`, the number of connections created by the load runner process to the target node.
The Y axis of this table is `r`, the rate or number of transactions issued per second.
| | c=1 | c=2 | c=4 |
| :--- | ----: | ----: | ----: |
| r=25 | 2225 | 4450 | 8900 |
| r=50 | 4450 | 8900 | 17800 |
| r=100 | 8900 | 17800 | 35600 |
| r=200 | 17800 | 35600 | 38660 |
For comparison, this is the table with the baseline version.
| | c=1 | c=2 | c=4 |
| :--- | ----: | ----: | ----: |
| r=25 | 2225 | 4450 | 8900 |
| r=50 | 4450 | 8900 | 17800 |
| r=100 | 8900 | 17800 | 35400 |
| r=200 | 17800 | 35600 | 37358 |
The saturation point is beyond the diagonal:
* `r=200,c=2`
* `r=100,c=4`
which is at the same place as the baseline. For more details on the saturation point, see
[this paragraph](../v034/README.md#finding-the-saturation-point) in the baseline version.
The experiment chosen to examine Prometheus metrics is the same as in the baseline:
**`r=200,c=2`**.
The load runner's CPU load was negligible (near 0) when running `r=200,c=2`.
### Examining latencies
The method described [here](../method.md) allows us to plot the latencies of transactions
for all experiments.
![all-latencies](./img/v037_200node_latencies.png)
The data seen in the plot is similar to that of the baseline.
![all-latencies-bl](../v034/img/v034_200node_latencies.png)
Therefore, for further details on these plots,
see [this paragraph](../v034/README.md#examining-latencies) in the baseline version.
The following plot summarizes average latencies versus overall throughputs
across different numbers of WebSocket connections to the node into which
transactions are being loaded.
![latency-vs-throughput](./img/v037_latency_throughput.png)
This is similar to that of the baseline plot:
![latency-vs-throughput-bl](../v034/img/v034_latency_throughput.png)
### Prometheus Metrics on the Chosen Experiment
As mentioned [above](#finding-the-saturation-point), the chosen experiment is `r=200,c=2`.
This section further examines key metrics for this experiment extracted from Prometheus data.
#### Mempool Size
The mempool size, a count of the number of transactions in the mempool, was shown to be stable and homogeneous
at all full nodes. It did not exhibit any unconstrained growth.
The plot below shows the evolution over time of the cumulative number of transactions inside all full nodes' mempools
at a given time.
![mempool-cumulative](./img/v037_r200c2_mempool_size.png)
The plot below shows the evolution of the average over all full nodes, which oscillates between 1500 and 2000 outstanding transactions.
![mempool-avg](./img/v037_r200c2_mempool_size_avg.png)
The peaks observed coincide with the moments when some nodes reached round 1 of consensus (see below).
**These plots yield similar results to the baseline**:
![mempool-cumulative-bl](../v034/img/v034_r200c2_mempool_size.png)
![mempool-avg-bl](../v034/img/v034_r200c2_mempool_size_avg.png)
#### Peers
The number of peers was stable at all nodes.
It was higher for the seed nodes (around 140) than for the rest (between 16 and 78).
![peers](./img/v037_r200c2_peers.png)
Just as in the baseline, the fact that non-seed nodes reach more than 50 peers is due to #9548.
**This plot yields similar results to the baseline**:
![peers-bl](../v034/img/v034_r200c2_peers.png)
#### Consensus Rounds per Height
Most heights took just one round, but some nodes needed to advance to round 1 at some point.
![rounds](./img/v037_r200c2_rounds.png)
**This plot yields slightly better results than the baseline**:
![rounds-bl](../v034/img/v034_r200c2_rounds.png)
#### Blocks Produced per Minute, Transactions Processed per Minute
The blocks produced per minute are the gradient of this plot.
![heights](./img/v037_r200c2_heights.png)
Over a period of 2 minutes, the height goes from 477 to 524.
This results in an average of 23.5 blocks produced per minute.
The transactions processed per minute are the gradient of this plot.
![total-txs](./img/v037_r200c2_total-txs.png)
Over a period of 2 minutes, the total goes from 64525 to 100125 transactions,
resulting in 17800 transactions per minute. However, we can see in the plot that
all transactions in the load are processed long before the two minutes.
If we adjust the time window when transactions are processed (approx. 90 seconds),
we obtain 23733 transactions per minute.
**These plots yield similar results to the baseline**:
![heights-bl](../v034/img/v034_r200c2_heights.png)
![total-txs](../v034/img/v034_r200c2_total-txs.png)
#### Memory Resident Set Size
Resident Set Size of all monitored processes is plotted below.
![rss](./img/v037_r200c2_rss.png)
The average over all processes oscillates around 380 MiB and does not demonstrate unconstrained growth.
![rss-avg](./img/v037_r200c2_rss_avg.png)
**These plots yield similar results to the baseline**:
![rss-bl](../v034/img/v034_r200c2_rss.png)
![rss-avg-bl](../v034/img/v034_r200c2_rss_avg.png)
#### CPU utilization
The best metric from Prometheus to gauge CPU utilization in a Unix machine is `load1`,
as it usually appears in the
[output of `top`](https://www.digitalocean.com/community/tutorials/load-average-in-linux).
![load1](./img/v037_r200c2_load1.png)
It is contained below 5 on most nodes.
**This plot yields similar results to the baseline**:
![load1](../v034/img/v034_r200c2_load1.png)
### Test Result
**Result: PASS**
Date: 2022-10-14
Version: 1cf9d8e276afe8595cba960b51cd056514965fd1
## Rotating Node Testnet
We use the same load as in the baseline: `c=4,r=800`.
Just as in the baseline tests, the version of Tendermint used for these tests is affected by #9539.
See this paragraph in the [baseline report](../v034/README.md#rotating-node-testnet) for further details.
Finally, note that this setup allows for a fairer comparison between this version and the baseline.
### Latencies
The plot of all latencies can be seen here.
![rotating-all-latencies](./img/v037_rotating_latencies.png)
Which is similar to the baseline.
![rotating-all-latencies-bl](../v034/img/v034_rotating_latencies_uniq.png)
Note that we are comparing against the baseline plot with _unique_
transactions. This is because the problem with duplicate transactions
detected during the baseline experiment did not show up for `v0.37`,
although this is _not_ proof that the problem is absent in `v0.37`.
### Prometheus Metrics
The set of metrics shown here matches those shown for the baseline (`v0.34`) for the same experiment.
We also show the baseline results for comparison.
#### Blocks and Transactions per minute
The blocks produced per minute are the gradient of this plot.
![rotating-heights](./img/v037_rotating_heights.png)
Over a period of 4446 seconds, the height goes from 5 to 3323.
This results in an average of 45 blocks produced per minute,
which is similar to the baseline, shown below.
![rotating-heights-bl](../v034/img/v034_rotating_heights.png)
The following two plots show only the heights reported by ephemeral nodes.
The second plot is the baseline plot for comparison.
![rotating-heights-ephe](./img/v037_rotating_heights_ephe.png)
![rotating-heights-ephe-bl](../v034/img/v034_rotating_heights_ephe.png)
By the length of the segments, we can see that ephemeral nodes in `v0.37`
catch up slightly faster.
The transactions processed per minute are the gradient of this plot.
![rotating-total-txs](./img/v037_rotating_total-txs.png)
Over a period of 3852 seconds, the total goes from 597 to 267298 transactions in one of the validators,
resulting in 4154 transactions per minute, which is slightly lower than the baseline,
although the baseline had to deal with duplicate transactions.
For comparison, this is the baseline plot.
![rotating-total-txs-bl](../v034/img/v034_rotating_total-txs.png)
#### Peers
The plot below shows the evolution of the number of peers throughout the experiment.
![rotating-peers](./img/v037_rotating_peers.png)
This is the baseline plot, for comparison.
![rotating-peers-bl](../v034/img/v034_rotating_peers.png)
The plotted values and their evolution are comparable in both plots.
For further details on these plots, see the baseline report.
#### Memory Resident Set Size
The average Resident Set Size (RSS) over all processes looks slightly more stable
on `v0.37` (first plot) than on the baseline (second plot).
![rotating-rss-avg](./img/v037_rotating_rss_avg.png)
![rotating-rss-avg-bl](../v034/img/v034_rotating_rss_avg.png)
The memory taken by the validators and the ephemeral nodes when they are up is comparable (not shown in the plots),
just as observed in the baseline.
#### CPU utilization
The plot shows metric `load1` for all nodes.
![rotating-load1](./img/v037_rotating_load1.png)
This is the baseline plot.
![rotating-load1-bl](../v034/img/v034_rotating_load1.png)
In both cases, it is contained under 5 most of the time, which is considered normal load.
The green line in the `v0.37` plot and the purple line in the baseline plot (`v0.34`)
correspond to the validators receiving all transactions, via RPC, from the load runner process.
In both cases, they oscillate around 5 (normal load). The main difference is that other
nodes are generally less loaded in `v0.37`.
### Test Result
**Result: PASS**
Date: 2022-10-10
Version: 155110007b9d8b83997a799016c1d0844c8efbaf
[\#9533]: https://github.com/tendermint/tendermint/pull/9533
[\#9534]: https://github.com/tendermint/tendermint/pull/9534
[\#9539]: https://github.com/tendermint/tendermint/issues/9539
[\#9548]: https://github.com/tendermint/tendermint/issues/9548
[\#9537]: https://github.com/tendermint/tendermint/issues/9537
[\#9581]: https://github.com/tendermint/tendermint/issues/9581

Some files were not shown because too many files have changed in this diff.