Compare commits

...

24 Commits

Author SHA1 Message Date
William Banfield
fe94ff8934 rename flags and config fields to use the new blocksync nomenclature 2021-09-08 09:44:17 -04:00
dependabot[bot]
0055f9efcc build(deps): Bump github.com/lib/pq from 1.10.2 to 1.10.3 (#6890)
Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.10.2 to 1.10.3.
- [Release notes](https://github.com/lib/pq/releases)
- [Commits](https://github.com/lib/pq/compare/v1.10.2...v1.10.3)

---
updated-dependencies:
- dependency-name: github.com/lib/pq
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-09-03 09:07:30 -04:00
Callum Waters
bda948e814 statesync: implement p2p state provider (#6807) 2021-09-02 13:19:18 +02:00
Sam Kleinman
511bd3eb7f e2e: weight protocol dimensions (#6884)
This changes the e2e suite to (roughly) focus on configurations that are
more widely used. Most production users of tendermint run the ABCI
application in process, and the GRPC/socket methods cover the vast majority
of the remaining use cases.

Perhaps we should consider dropping support for unix domain sockets in a
future release, but in the meantime it's useful to have the tests *mostly*
focus on the primary use cases.
2021-09-01 19:52:40 +00:00
Sam Kleinman
9a0081f076 e2e: change restart mechanism (#6883) 2021-09-01 12:49:45 -04:00
M. J. Fromberger
7fe3e78a38 Update the psql indexer schema and implementation (#6868)
Update the schema and implementation of the Postgres event indexer to improve 
certain types of queries against the index. These changes address the use cases
raised by #6843, and are partly inspired by the prototype schema in that issue.

In the old schema, events were flattened, making it difficult to find all the events 
associated with a particular block or transaction. In addition, events with no key/value
attributes were entirely lost, since entries were generated only for attributes.

To address these issues, this new schema records blocks, transactions, events,
and attributes in separate tables, and provides views that join these tables to
give a more convenient query surface for block and transaction events.

- All events for a given block can be queried from the `block_events` view.
- All events for a given transaction can be queried from the `tx_events` view.
- Multiple events for the same key can be indexed for both blocks and transactions.

The tests have been reworked, but all of the existing test cases for the old schema
still pass with the new implementation. Various other minor cleanups are included.

ADR-065 is also updated to reflect the updated schema.
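
As a rough sketch of the new query surface (hypothetical values; the view
definitions appear in the ADR-065 diff below):

```sql
-- All events for block 5 on a given chain, via the block_events view.
SELECT type, key, value
  FROM block_events
 WHERE height = 5 AND chain_id = 'test-chain';

-- All events for the transactions in that block, via the tx_events view.
SELECT "index", type, composite_key, value
  FROM tx_events
 WHERE height = 5 AND chain_id = 'test-chain';
```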
2021-08-31 18:35:07 -04:00
Sam Kleinman
7169d26ddf e2e: more reliable method for selecting node to inject evidence (#6880)
In retrospect, my previous implementation could get unlucky and never
find the correct node. This method is more reliable.
2021-08-31 21:56:06 +00:00
Sam Kleinman
c4df8a3840 types: move mempool error for consistency (#6875)
This is a little change just to make things more consistent ahead of
the 0.35 release.
2021-08-30 17:42:58 +00:00
dependabot[bot]
f858ebeb88 build(deps): Bump github.com/rs/zerolog from 1.23.0 to 1.24.0 (#6874)
Bumps [github.com/rs/zerolog](https://github.com/rs/zerolog) from 1.23.0 to 1.24.0.
- [Release notes](https://github.com/rs/zerolog/releases)
- [Commits](https://github.com/rs/zerolog/compare/v1.23.0...v1.24.0)

---
updated-dependencies:
- dependency-name: github.com/rs/zerolog
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-30 09:34:54 -04:00
Jeeyong Um
c9347a0647 docs: remove return code in normal case from go built-in example (#6841)
An explicit exit prevents the deferred cleanup code from running. In this case,
falling off the end of main will achieve the same goal as an explicit exit.
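
For illustration, a minimal sketch of the underlying Go behavior (not the
documentation example itself):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	defer fmt.Println("deferred cleanup") // runs only when main returns normally

	if len(os.Args) > 1 {
		os.Exit(0) // terminates immediately; the deferred call above is skipped
	}
	// Falling off the end of main also exits with status 0, but the
	// deferred cleanup runs first.
}
```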
2021-08-29 12:14:20 -04:00
Sam Kleinman
0df421b37f e2e: add weighted random configuration selector (#6869)
When reviewing #6807 I assumed that `probSetChoice` worked this way.

I think that the coverage of various configuration options should
generally track what we expect actual usage to be, so that most of the
test coverage focuses on the configurations that are the most prevalent.
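
A minimal sketch of the idea (names and weights here are hypothetical, not
the e2e generator's actual code):

```go
package main

import (
	"fmt"
	"math/rand"
)

// weightedChoice returns a key with probability proportional to its weight.
func weightedChoice(r *rand.Rand, weights map[string]int) string {
	total := 0
	for _, w := range weights {
		total += w
	}
	n := r.Intn(total)
	for k, w := range weights {
		if n < w {
			return k
		}
		n -= w
	}
	return "" // unreachable for non-empty weights
}

func main() {
	r := rand.New(rand.NewSource(42))
	// Bias toward the configurations expected to be most common in production.
	fmt.Println(weightedChoice(r, map[string]int{"builtin": 60, "socket": 25, "grpc": 10, "unix": 5}))
}
```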
2021-08-27 16:32:40 +00:00
Sam Kleinman
94e1eb8cfe rfc: fix link style (#6870)
This is super minor, but changing this fixes a broken link and matches
the existing style of the ADRs.
2021-08-27 16:21:12 +00:00
Sam Kleinman
23abb0de8b rfc: p2p next steps (#6866) 2021-08-27 12:14:59 -04:00
Aleksandr Bezobchuk
58a6cfff9a internal/consensus: update error log (#6863)
Addresses issues reported in Osmosis, where the logged message is extremely long. Also, there is absolutely no reason to log the message IMO. If we must, we can log the message at DEBUG.
2021-08-25 22:43:21 +00:00
Sam Kleinman
6e921f6644 p2p: change default to use new stack (#6862)
This is just a configuration change to default to using the new stack
unless it is explicitly disabled (e.g. via `UseLegacy`). It renames the
configuration value and makes the configuration logic clearer.

The legacy option is good to retain as a fallback in case the new stack
has operational issues, but we should make sure that most of the time
we're using the new stack.
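
Concretely, the renamed setting in `config.toml` looks like this (default
shown; the full template change appears in the diff below):

```toml
[p2p]
# Enable the legacy p2p layer.
use-legacy = false
```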
2021-08-25 17:33:38 +00:00
Sam Kleinman
a0a5d45cb1 lint: change deprecated linter (#6861)
This is a super minor change that silences a warning when running the linter locally.
2021-08-25 12:26:11 +00:00
Sam Kleinman
9c8379ef30 e2e: more consistent node selection during tests (#6857)
In the transaction load generator, the e2e test harness previously distributed load randomly to hosts, which was a source of test non-determinism. This change distributes the load generation to the different nodes in the set in a round robin fashion, to produce more reliable results, but does not otherwise change the behavior of the test harness.
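
A minimal sketch of round-robin selection (hypothetical names, not the
harness's actual code):

```go
package main

import "fmt"

// roundRobin returns a function that cycles through nodes in a fixed order,
// so successive transactions are assigned deterministically.
func roundRobin(nodes []string) func() string {
	i := 0
	return func() string {
		n := nodes[i%len(nodes)]
		i++
		return n
	}
}

func main() {
	next := roundRobin([]string{"validator01", "validator02", "full01"})
	for j := 0; j < 5; j++ {
		fmt.Println(next())
	}
}
```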
2021-08-25 12:24:01 +00:00
dependabot[bot]
e053643b95 build(deps): Bump codecov/codecov-action from 2.0.2 to 2.0.3 (#6860)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 2.0.2 to 2.0.3.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v2.0.2...v2.0.3)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-25 08:12:04 -04:00
M. J. Fromberger
41a361ed8d psql: add documentation and simplify constructor API (#6856)
Add documentation comments to the psql event sink package, and simplify the
constructor function so that it does not return the SQL database handle.  The
handle is needed for testing, so expose that via a separate method on the
concrete type.

Update the tests and existing usage for the change. This change does not affect
the behaviour of the sink, so there are no functional changes, only syntactic
updates.
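
In rough terms, usage after this change looks like the following sketch; the
`DB()` accessor name is an assumption for illustration:

```go
// import "github.com/tendermint/tendermint/state/indexer/sink/psql"

// openSink is a sketch. The constructor now returns only the sink; it no
// longer also returns the *sql.DB handle.
func openSink(connStr, chainID string) (*psql.EventSink, error) {
	sink, err := psql.NewEventSink(connStr, chainID)
	if err != nil {
		return nil, err
	}
	_ = sink.DB() // tests can reach the raw handle via the concrete type (name assumed)
	return sink, nil
}
```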
2021-08-24 17:18:27 -04:00
William Banfield
bc2b529b95 inspect: add inspect mode for debugging crashed tendermint node (#6785)
EDIT: Updated, see [comment below](https://github.com/tendermint/tendermint/pull/6785#issuecomment-897793175)

This change adds a sketch of the `Debug` mode. 

This change adds a `Debug` struct to the node package. This `Debug` struct is intended to be created and started by a command in the `cmd` directory. The `Debug` struct runs the RPC server over the data directories: both the state store and the block store.

This change required a good deal of refactoring. Namely, a new `rpc.go` file was added to the `node` package. This file encapsulates functions for starting RPC servers used by nodes. A potential additional change is to further factor this code into shared code _in_ the `rpc` package. 

Minor API tweaks were also made that seemed appropriate such as the mechanism for fetching routes from the `rpc/core` package.

Additional work is required to register the `Debug` service as a command in the `cmd` directory, but I am looking for feedback on whether this direction seems appropriate before diving much further.

closes: #5908
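
Based on the flags registered by the new command (visible in the `cmd` diff
below), invocation would look roughly like:

```sh
tendermint inspect --rpc.laddr tcp://127.0.0.1:26657 --db-backend goleveldb --db-dir data
```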
2021-08-24 18:12:06 +00:00
Tess Rinearson
6d5ff590c3 contributing: remove release_notes.md reference (#6846) 2021-08-24 19:07:53 +02:00
Sam Kleinman
d8642a941e cmd: remove deprecated snakes (#6854)
Just following up on a deprecation.
2021-08-24 16:06:27 +00:00
Sam Kleinman
d7c3a8f682 time: make median time library type private (#6853)
This is a very minor change, but I was looking through the code, and
this seems like it shouldn't be exported or used more broadly, so I've
moved it out.
2021-08-24 15:43:13 +00:00
M. J. Fromberger
ce3c059a0d ADR 072: Re-instate a request-for-comments archive. (#6851)
This ADR restores a variation of the old Request for Comments documentation
that we previously used. The proposal differs from the original formulation,
and does not replace ADRs.
2021-08-23 18:06:01 -04:00
98 changed files with 4744 additions and 1943 deletions

View File

@@ -121,7 +121,7 @@ jobs:
- run: |
cat ./*profile.out | grep -v "mode: atomic" >> coverage.txt
if: env.GIT_DIFF
- uses: codecov/codecov-action@v2.0.2
- uses: codecov/codecov-action@v2.0.3
with:
file: ./coverage.txt
if: env.GIT_DIFF

View File

@@ -2,7 +2,7 @@ name: "Release"
on:
push:
branches:
branches:
- "RC[0-9]/**"
tags:
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
@@ -20,9 +20,6 @@ jobs:
with:
go-version: '1.16'
- run: echo https://github.com/tendermint/tendermint/blob/${GITHUB_REF#refs/tags/}/CHANGELOG.md#${GITHUB_REF#refs/tags/} > ../release_notes.md
if: startsWith(github.ref, 'refs/tags/')
- name: Build
uses: goreleaser/goreleaser-action@v2
if: ${{ github.event_name == 'pull_request' }}
@@ -35,6 +32,6 @@ jobs:
if: startsWith(github.ref, 'refs/tags/')
with:
version: latest
args: release --rm-dist --release-notes=../release_notes.md
args: release --rm-dist
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -26,7 +26,7 @@ linters:
# - maligned
- nakedret
- prealloc
- scopelint
- exportloopref
- staticcheck
- structcheck
- stylecheck

View File

@@ -24,8 +24,10 @@ Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermi
- [fastsync/rpc] \#6620 Add TotalSyncedTime & RemainingTime to SyncInfo in /status RPC (@JayT106)
- [rpc/grpc] \#6725 Mark gRPC in the RPC layer as deprecated.
- [blockchain/v2] \#6730 Fast Sync v2 is deprecated, please use v0
- [rpc] Add genesis_chunked method to support paginated and parallel fetching of large genesis documents.
- [rpc/jsonrpc/server] \#6785 `Listen` function updated to take an `int` argument, `maxOpenConnections`, instead of an entire config object. (@williambanfield)
- [rpc] \#6820 Update RPC methods to reflect changes in the p2p layer, disabling support for `UnsafeDialSeeds` and `UnsafeDialPeers` when used with the new p2p layer, and changing the response format of the peer list in `NetInfo` for all users.
- [cli] \#6854 Remove deprecated snake case commands. (@tychoish)
- Apps
- [ABCI] \#6408 Change the `key` and `value` fields from `[]byte` to `string` in the `EventAttribute` type. (@alexanderbez)
- [ABCI] \#5447 Remove `SetOption` method from `ABCI.Client` interface
@@ -105,6 +107,7 @@ Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermi
- [config/indexer] \#6411 Introduce support for custom event indexing data sources, specifically PostgreSQL. (@JayT106)
- [fastsync/event] \#6619 Emit fastsync status event when switching consensus/fastsync (@JayT106)
- [statesync/event] \#6700 Emit statesync status start/end event (@JayT106)
- [inspect] \#6785 Add a new `inspect` command for introspecting the state and block store of a crashed tendermint node. (@williambanfield)
### IMPROVEMENTS
@@ -148,6 +151,7 @@ Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermi
- [state/privval] \#6578 No GetPubKey retry beyond the proposal/voting window (@JayT106)
- [rpc] \#6615 Add TotalGasUsed to block_results response (@crypto-facs)
- [cmd/tendermint/commands] \#6623 replace `$HOME/.some/test/dir` with `t.TempDir` (@tanyabouman)
- [statesync] \#6807 Implement P2P state provider as an alternative to RPC (@cmwaters)
### BUG FIXES
@@ -158,4 +162,4 @@ Friendly reminder: We have a [bug bounty program](https://hackerone.com/tendermi
- [rpc] \#6507 Ensure RPC client can handle URLs without ports (@JayT106)
- [statesync] \#6463 Adds Reverse Sync feature to fetch historical light blocks after state sync in order to verify any evidence (@cmwaters)
- [fastsync] \#6590 Update the metrics during fast-sync (@JayT106)
- [gitignore] \#6668 Fix gitignore of abci-cli (@tanyabouman)
- [gitignore] \#6668 Fix gitignore of abci-cli (@tanyabouman)

View File

@@ -328,7 +328,6 @@ If there were no release candidates, begin by creating a backport branch, as des
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
- Add any release notes you would like to be added to the body of the release to `release_notes.md`.
4. Open a PR with these changes against the backport branch.
5. Once these changes are on the backport branch, push a tag with prepared release details.
This will trigger the actual release `v0.35.0`.
@@ -355,7 +354,6 @@ To create a minor release:
- Bump the ABCI version number, if necessary.
(Note that ABCI follows semver, and that ABCI versions are the only versions
which can change during minor releases, and only field additions are valid minor changes.)
- Add any release notes you would like to be added to the body of the release to `release_notes.md`.
4. Open a PR with these changes that will land them back on `v0.35.x`
5. Once this change has landed on the backport branch, make sure to pull it locally, then push a tag.
- `git tag -a v0.35.1 -m 'Release v0.35.1'`

View File

@@ -17,7 +17,15 @@ This guide provides instructions for upgrading to specific versions of Tendermin
### Config Changes
* `fast_sync = "v1"` and `fast_sync = "v2"` are no longer supported. Please use `v0` instead.
* The configuration file field `[fastsync]` has been renamed to `[blocksync]`, as sketched below.
* The top level configuration file field `fast-sync` has moved under the new `[blocksync]`
field as `blocksync.enable`.
* `blocksync.version = "v1"` and `blocksync.version = "v2"` (previously `fastsync`)
are no longer supported. Please use `v0` instead. During the v0.35 release cycle, `v0` was
determined to suit the existing needs, and the cost of maintaining the `v1` and `v2` modules
was judged to outweigh their benefit.
* All config parameters are now hyphen-case (also known as kebab-case) instead of snake_case.
Before restarting the node, make sure you have updated all the variables in your `config.toml` file.
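
Concretely, a sketch of the rename (values illustrative):

```toml
# Before (v0.34):
fast-sync = true

[fastsync]
version = "v0"

# After (v0.35):
[blocksync]
enable = true
version = "v0"
```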
@@ -60,6 +68,8 @@ if needed.
* You must now specify the node mode (validator|full|seed) in `tendermint init [mode]`
* The `--fast-sync` command line option has been renamed to `--blocksync.enable`
* If you had previously used `tendermint gen_node_key` to generate a new node
key, keep in mind that it no longer saves the output to a file. You can use
`tendermint init validator` or pipe the output of `tendermint gen_node_key` to

View File

@@ -12,11 +12,9 @@ import (
// GenNodeKeyCmd allows the generation of a node key. It prints JSON-encoded
// NodeKey to the standard output.
var GenNodeKeyCmd = &cobra.Command{
Use: "gen-node-key",
Aliases: []string{"gen_node_key"},
Short: "Generate a new node key",
RunE: genNodeKey,
PreRun: deprecateSnakeCase,
Use: "gen-node-key",
Short: "Generate a new node key",
RunE: genNodeKey,
}
func genNodeKey(cmd *cobra.Command, args []string) error {

View File

@@ -13,11 +13,9 @@ import (
// GenValidatorCmd allows the generation of a keypair for a
// validator.
var GenValidatorCmd = &cobra.Command{
Use: "gen-validator",
Aliases: []string{"gen_validator"},
Short: "Generate new validator keypair",
RunE: genValidator,
PreRun: deprecateSnakeCase,
Use: "gen-validator",
Short: "Generate new validator keypair",
RunE: genValidator,
}
func init() {

View File

@@ -0,0 +1,87 @@
package commands
import (
"context"
"os"
"os/signal"
"syscall"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/inspect"
"github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/state/indexer/sink"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
)
// InspectCmd is the command for starting an inspect server.
var InspectCmd = &cobra.Command{
Use: "inspect",
Short: "Run an inspect server for investigating Tendermint state",
Long: `
inspect runs a subset of Tendermint's RPC endpoints that are useful for debugging
issues with Tendermint.
When the Tendermint consensus engine detects inconsistent state, it will crash the
tendermint process. Tendermint will not start up while in this inconsistent state.
The inspect command can be used to query the block and state store using Tendermint
RPC calls to debug issues of inconsistent state.
`,
RunE: runInspect,
}
func init() {
InspectCmd.Flags().
String("rpc.laddr",
config.RPC.ListenAddress, "RPC listener address. Port required")
InspectCmd.Flags().
String("db-backend",
config.DBBackend, "database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb")
InspectCmd.Flags().
String("db-dir", config.DBPath, "database directory")
}
func runInspect(cmd *cobra.Command, args []string) error {
ctx, cancel := context.WithCancel(cmd.Context())
defer cancel()
c := make(chan os.Signal, 1)
signal.Notify(c, syscall.SIGTERM, syscall.SIGINT)
go func() {
<-c
cancel()
}()
blockStoreDB, err := cfg.DefaultDBProvider(&cfg.DBContext{ID: "blockstore", Config: config})
if err != nil {
return err
}
blockStore := store.NewBlockStore(blockStoreDB)
stateDB, err := cfg.DefaultDBProvider(&cfg.DBContext{ID: "state", Config: config})
if err != nil {
if err := blockStoreDB.Close(); err != nil {
logger.Error("error closing block store db", "error", err)
}
return err
}
genDoc, err := types.GenesisDocFromFile(config.GenesisFile())
if err != nil {
return err
}
sinks, err := sink.EventSinksFromConfig(config, cfg.DefaultDBProvider, genDoc.ChainID)
if err != nil {
return err
}
stateStore := state.NewStore(stateDB)
ins := inspect.New(config.RPC, blockStore, stateStore, sinks, logger)
logger.Info("starting inspect server")
if err := ins.Run(ctx); err != nil {
return err
}
return nil
}

View File

@@ -11,11 +11,9 @@ import (
// ProbeUpnpCmd adds capabilities to test the UPnP functionality.
var ProbeUpnpCmd = &cobra.Command{
Use: "probe-upnp",
Aliases: []string{"probe_upnp"},
Short: "Test UPnP functionality",
RunE: probeUpnp,
PreRun: deprecateSnakeCase,
Use: "probe-upnp",
Short: "Test UPnP functionality",
RunE: probeUpnp,
}
func probeUpnp(cmd *cobra.Command, args []string) error {

View File

@@ -31,7 +31,7 @@ var ReIndexEventCmd = &cobra.Command{
Long: `
reindex-event is an offline tool to re-index block and tx events to the eventsinks.
You can run this command when the event store backend dropped/disconnected or you want to replace the backend.
The default start-height is 0, meaning the tool will start reindexing from the base block height (inclusive); and the
default end-height is 0, meaning the tool will reindex until the latest block height (inclusive). Users can omit
either or both arguments.
`,
@@ -106,7 +106,7 @@ func loadEventSinks(cfg *tmcfg.Config) ([]indexer.EventSink, error) {
if conn == "" {
return nil, errors.New("the psql connection settings cannot be empty")
}
es, _, err := psql.NewEventSink(conn, chainID)
es, err := psql.NewEventSink(conn, chainID)
if err != nil {
return nil, err
}

View File

@@ -17,11 +17,9 @@ var ReplayCmd = &cobra.Command{
// ReplayConsoleCmd allows replaying of messages from the WAL in a
// console.
var ReplayConsoleCmd = &cobra.Command{
Use: "replay-console",
Aliases: []string{"replay_console"},
Short: "Replay messages from WAL in a console",
Use: "replay-console",
Short: "Replay messages from WAL in a console",
Run: func(cmd *cobra.Command, args []string) {
consensus.RunReplayFile(config.BaseConfig, config.Consensus, true)
},
PreRun: deprecateSnakeCase,
}

View File

@@ -14,11 +14,9 @@ import (
// ResetAllCmd removes the database of this Tendermint core
// instance.
var ResetAllCmd = &cobra.Command{
Use: "unsafe-reset-all",
Aliases: []string{"unsafe_reset_all"},
Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state",
RunE: resetAll,
PreRun: deprecateSnakeCase,
Use: "unsafe-reset-all",
Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state",
RunE: resetAll,
}
var keepAddrBook bool
@@ -31,11 +29,9 @@ func init() {
// ResetPrivValidatorCmd resets the private validator files.
var ResetPrivValidatorCmd = &cobra.Command{
Use: "unsafe-reset-priv-validator",
Aliases: []string{"unsafe_reset_priv_validator"},
Short: "(unsafe) Reset this node's validator to genesis state",
RunE: resetPrivValidator,
PreRun: deprecateSnakeCase,
Use: "unsafe-reset-priv-validator",
Short: "(unsafe) Reset this node's validator to genesis state",
RunE: resetPrivValidator,
}
// XXX: this is totally unsafe.

View File

@@ -2,7 +2,6 @@ package commands
import (
"fmt"
"strings"
"time"
"github.com/spf13/cobra"
@@ -65,10 +64,3 @@ var RootCmd = &cobra.Command{
return nil
},
}
// deprecateSnakeCase is a util function for 0.34.1. Should be removed in 0.35
func deprecateSnakeCase(cmd *cobra.Command, args []string) {
if strings.Contains(cmd.CalledAs(), "_") {
fmt.Println("Deprecated: snake_case commands will be replaced by hyphen-case commands in the next major release")
}
}

View File

@@ -3,6 +3,8 @@ package commands
import (
"bytes"
"crypto/sha256"
"errors"
"flag"
"fmt"
"io"
"os"
@@ -33,7 +35,22 @@ func AddNodeFlags(cmd *cobra.Command) {
"socket address to listen on for connections from external priv-validator process")
// node flags
cmd.Flags().Bool("fast-sync", config.FastSyncMode, "fast blockchain syncing")
cmd.Flags().Bool("blocksync.enable", config.BlockSync.Enable, "enable fast blockchain syncing")
// TODO (https://github.com/tendermint/tendermint/issues/6908): remove this check after the v0.35 release cycle
// This check was added to give users an upgrade prompt to use the new flag for syncing.
//
// The pflag package does not have a native way to print a deprecation warning
// and return an error. This logic was added to print a deprecation message to the user
// and then crash if the user attempts to use the old --fast-sync flag.
fs := flag.NewFlagSet("", flag.ExitOnError)
fs.Func("fast-sync", "deprecated",
func(string) error {
return errors.New("--fast-sync has been deprecated, please use --blocksync.enable")
})
cmd.Flags().AddGoFlagSet(fs)
cmd.Flags().MarkHidden("fast-sync") //nolint:errcheck
cmd.Flags().BytesHexVar(
&genesisHash,
"genesis-hash",
@@ -158,7 +175,7 @@ func checkGenesisHash(config *cfg.Config) error {
// Compare with the flag.
if !bytes.Equal(genesisHash, actualHash) {
return fmt.Errorf(
"--genesis_hash=%X does not match %s hash: %X",
"--genesis-hash=%X does not match %s hash: %X",
genesisHash, config.GenesisFile(), actualHash)
}

View File

@@ -8,11 +8,9 @@ import (
// ShowNodeIDCmd dumps node's ID to the standard output.
var ShowNodeIDCmd = &cobra.Command{
Use: "show-node-id",
Aliases: []string{"show_node_id"},
Short: "Show this node's ID",
RunE: showNodeID,
PreRun: deprecateSnakeCase,
Use: "show-node-id",
Short: "Show this node's ID",
RunE: showNodeID,
}
func showNodeID(cmd *cobra.Command, args []string) error {

View File

@@ -16,11 +16,9 @@ import (
// ShowValidatorCmd adds capabilities for showing the validator info.
var ShowValidatorCmd = &cobra.Command{
Use: "show-validator",
Aliases: []string{"show_validator"},
Short: "Show this node's validator info",
RunE: showValidator,
PreRun: deprecateSnakeCase,
Use: "show-validator",
Short: "Show this node's validator info",
RunE: showValidator,
}
func showValidator(cmd *cobra.Command, args []string) error {

View File

@@ -28,6 +28,7 @@ func main() {
cmd.ShowNodeIDCmd,
cmd.GenNodeKeyCmd,
cmd.VersionCmd,
cmd.InspectCmd,
cmd.MakeKeyMigrateCommand(),
debug.DebugCmd,
cli.NewCompletionCmd(rootCmd, true),

View File

@@ -76,7 +76,7 @@ type Config struct {
P2P *P2PConfig `mapstructure:"p2p"`
Mempool *MempoolConfig `mapstructure:"mempool"`
StateSync *StateSyncConfig `mapstructure:"statesync"`
BlockSync *BlockSyncConfig `mapstructure:"fastsync"`
BlockSync *BlockSyncConfig `mapstructure:"blocksync"`
Consensus *ConsensusConfig `mapstructure:"consensus"`
TxIndex *TxIndexConfig `mapstructure:"tx-index"`
Instrumentation *InstrumentationConfig `mapstructure:"instrumentation"`
@@ -152,7 +152,7 @@ func (cfg *Config) ValidateBasic() error {
return fmt.Errorf("error in [statesync] section: %w", err)
}
if err := cfg.BlockSync.ValidateBasic(); err != nil {
return fmt.Errorf("error in [fastsync] section: %w", err)
return fmt.Errorf("error in [blocksync] section: %w", err)
}
if err := cfg.Consensus.ValidateBasic(); err != nil {
return fmt.Errorf("error in [consensus] section: %w", err)
@@ -194,12 +194,6 @@ type BaseConfig struct { //nolint: maligned
// - No priv_validator_key.json, priv_validator_state.json
Mode string `mapstructure:"mode"`
// If this node is many blocks behind the tip of the chain, FastSync
// allows them to catchup quickly by downloading blocks in parallel
// and verifying their commits
// TODO: This should be moved to the blocksync config
FastSyncMode bool `mapstructure:"fast-sync"`
// Database backend: goleveldb | cleveldb | boltdb | rocksdb
// * goleveldb (github.com/syndtr/goleveldb - most popular implementation)
// - pure go
@@ -242,23 +236,24 @@ type BaseConfig struct { //nolint: maligned
// If true, query the ABCI app on connecting to a new peer
// so the app can decide if we should keep the connection or not
FilterPeers bool `mapstructure:"filter-peers"` // false
Other map[string]interface{} `mapstructure:",remain"`
}
// DefaultBaseConfig returns a default base configuration for a Tendermint node
func DefaultBaseConfig() BaseConfig {
return BaseConfig{
Genesis: defaultGenesisJSONPath,
NodeKey: defaultNodeKeyPath,
Mode: defaultMode,
Moniker: defaultMoniker,
ProxyApp: "tcp://127.0.0.1:26658",
ABCI: "socket",
LogLevel: DefaultLogLevel,
LogFormat: log.LogFormatPlain,
FastSyncMode: true,
FilterPeers: false,
DBBackend: "goleveldb",
DBPath: "data",
Genesis: defaultGenesisJSONPath,
NodeKey: defaultNodeKeyPath,
Mode: defaultMode,
Moniker: defaultMoniker,
ProxyApp: "tcp://127.0.0.1:26658",
ABCI: "socket",
LogLevel: DefaultLogLevel,
LogFormat: log.LogFormatPlain,
FilterPeers: false,
DBBackend: "goleveldb",
DBPath: "data",
}
}
@@ -268,7 +263,6 @@ func TestBaseConfig() BaseConfig {
cfg.chainID = "tendermint_test"
cfg.Mode = ModeValidator
cfg.ProxyApp = "kvstore"
cfg.FastSyncMode = false
cfg.DBBackend = "memdb"
return cfg
}
@@ -345,6 +339,28 @@ func (cfg BaseConfig) ValidateBasic() error {
return fmt.Errorf("unknown mode: %v", cfg.Mode)
}
// TODO (https://github.com/tendermint/tendermint/issues/6908) remove this check after the v0.35 release cycle.
// This check was added to give users an upgrade prompt to use the new
// configuration option in v0.35. In future release cycles they should no longer
// be using this configuration parameter so the check can be removed.
// The cfg.Other field can likely be removed at the same time if it is not referenced
// elsewhere as it was added to service this check.
if fs, ok := cfg.Other["fastsync"]; ok {
if _, ok := fs.(map[string]interface{}); ok {
return fmt.Errorf("a configuration section named 'fastsync' was found in the " +
"configuration file. The 'fastsync' section has been renamed to " +
"'blocksync', please update the 'fastsync' field in your configuration file to 'blocksync'")
}
}
if fs, ok := cfg.Other["fast-sync"]; ok {
if fs != "" {
return fmt.Errorf("a parameter named 'fast-sync' was found in the " +
"configuration file. The parameter to enable or disable quickly syncing with a blockchain" +
"has moved to the [blocksync] section of the configuration file as blocksync.enable. " +
"Please move the 'fast-sync' field in your configuration file to 'blocksync.enable'")
}
}
return nil
}
@@ -694,13 +710,14 @@ type P2PConfig struct { //nolint: maligned
// Force dial to fail
TestDialFail bool `mapstructure:"test-dial-fail"`
// DisableLegacy is used mostly for testing to enable or disable the legacy
// P2P stack.
DisableLegacy bool `mapstructure:"disable-legacy"`
// UseLegacy enables the "legacy" P2P implementation and
// disables the newer default implementation. This flag will
// be removed in a future release.
UseLegacy bool `mapstructure:"use-legacy"`
// Makes it possible to configure which queue backend the p2p
// layer uses. Options are: "fifo", "priority" and "wdrr",
// with the default being "fifo".
// with the default being "priority".
QueueType string `mapstructure:"queue-type"`
}
@@ -732,6 +749,7 @@ func DefaultP2PConfig() *P2PConfig {
DialTimeout: 3 * time.Second,
TestDialFail: false,
QueueType: "priority",
UseLegacy: false,
}
}
@@ -882,15 +900,46 @@ func (cfg *MempoolConfig) ValidateBasic() error {
// StateSyncConfig defines the configuration for the Tendermint state sync service
type StateSyncConfig struct {
Enable bool `mapstructure:"enable"`
TempDir string `mapstructure:"temp-dir"`
RPCServers []string `mapstructure:"rpc-servers"`
TrustPeriod time.Duration `mapstructure:"trust-period"`
TrustHeight int64 `mapstructure:"trust-height"`
TrustHash string `mapstructure:"trust-hash"`
DiscoveryTime time.Duration `mapstructure:"discovery-time"`
// State sync rapidly bootstraps a new node by discovering, fetching, and restoring a
// state machine snapshot from peers instead of fetching and replaying historical
// blocks. Requires some peers in the network to take and serve state machine
// snapshots. State sync is not attempted if the node has any local state
// (LastBlockHeight > 0). The node will have a truncated block history, starting from
// the height of the snapshot.
Enable bool `mapstructure:"enable"`
// State sync uses light client verification to verify state. This can be done either
// through the P2P layer or the RPC layer. Set this to true to use the P2P layer. If
// false (default), the RPC layer will be used.
UseP2P bool `mapstructure:"use-p2p"`
// If using RPC, at least two addresses need to be provided. They should be compatible
// with net.Dial, for example: "host.example.com:2125".
RPCServers []string `mapstructure:"rpc-servers"`
// The hash and height of a trusted block. Must be within the trust-period.
TrustHeight int64 `mapstructure:"trust-height"`
TrustHash string `mapstructure:"trust-hash"`
// The trust period should be set so that Tendermint can detect and gossip
// misbehavior before it is considered expired. For chains based on the Cosmos SDK,
// one day less than the unbonding period should suffice.
TrustPeriod time.Duration `mapstructure:"trust-period"`
// Time to spend discovering snapshots before initiating a restore.
DiscoveryTime time.Duration `mapstructure:"discovery-time"`
// Temporary directory for state sync snapshot chunks, defaults to os.TempDir().
// The synchronizer will create a new, randomly named directory within this directory
// and remove it when the sync is complete.
TempDir string `mapstructure:"temp-dir"`
// The timeout duration before re-requesting a chunk, possibly from a different
// peer (default: 15 seconds).
ChunkRequestTimeout time.Duration `mapstructure:"chunk-request-timeout"`
Fetchers int32 `mapstructure:"fetchers"`
// The number of concurrent chunk and block fetchers to run (default: 4).
Fetchers int32 `mapstructure:"fetchers"`
}
func (cfg *StateSyncConfig) TrustHashBytes() []byte {
@@ -919,49 +968,51 @@ func TestStateSyncConfig() *StateSyncConfig {
// ValidateBasic performs basic validation.
func (cfg *StateSyncConfig) ValidateBasic() error {
if cfg.Enable {
if len(cfg.RPCServers) == 0 {
return errors.New("rpc-servers is required")
}
if !cfg.Enable {
return nil
}
// If we're not using the P2P stack then we need to validate the
// RPCServers
if !cfg.UseP2P {
if len(cfg.RPCServers) < 2 {
return errors.New("at least two rpc-servers entries is required")
return errors.New("at least two rpc-servers must be specified")
}
for _, server := range cfg.RPCServers {
if len(server) == 0 {
if server == "" {
return errors.New("found empty rpc-servers entry")
}
}
}
if cfg.DiscoveryTime != 0 && cfg.DiscoveryTime < 5*time.Second {
return errors.New("discovery time must be 0s or greater than five seconds")
}
if cfg.DiscoveryTime != 0 && cfg.DiscoveryTime < 5*time.Second {
return errors.New("discovery time must be 0s or greater than five seconds")
}
if cfg.TrustPeriod <= 0 {
return errors.New("trusted-period is required")
}
if cfg.TrustPeriod <= 0 {
return errors.New("trusted-period is required")
}
if cfg.TrustHeight <= 0 {
return errors.New("trusted-height is required")
}
if cfg.TrustHeight <= 0 {
return errors.New("trusted-height is required")
}
if len(cfg.TrustHash) == 0 {
return errors.New("trusted-hash is required")
}
if len(cfg.TrustHash) == 0 {
return errors.New("trusted-hash is required")
}
_, err := hex.DecodeString(cfg.TrustHash)
if err != nil {
return fmt.Errorf("invalid trusted-hash: %w", err)
}
_, err := hex.DecodeString(cfg.TrustHash)
if err != nil {
return fmt.Errorf("invalid trusted-hash: %w", err)
}
if cfg.ChunkRequestTimeout < 5*time.Second {
return errors.New("chunk-request-timeout must be at least 5 seconds")
}
if cfg.ChunkRequestTimeout < 5*time.Second {
return errors.New("chunk-request-timeout must be at least 5 seconds")
}
if cfg.Fetchers <= 0 {
return errors.New("fetchers is required")
}
if cfg.Fetchers <= 0 {
return errors.New("fetchers is required")
}
return nil
@@ -970,13 +1021,18 @@ func (cfg *StateSyncConfig) ValidateBasic() error {
//-----------------------------------------------------------------------------
// BlockSyncConfig (formerly known as FastSync) defines the configuration for the Tendermint block sync service
// If this node is many blocks behind the tip of the chain, BlockSync
// allows them to catchup quickly by downloading blocks in parallel
// and verifying their commits.
type BlockSyncConfig struct {
Enable bool `mapstructure:"enable"`
Version string `mapstructure:"version"`
}
// DefaultBlockSyncConfig returns a default configuration for the block sync service
func DefaultBlockSyncConfig() *BlockSyncConfig {
return &BlockSyncConfig{
Enable: true,
Version: BlockSyncV0,
}
}

View File

@@ -97,11 +97,6 @@ moniker = "{{ .BaseConfig.Moniker }}"
# - No priv_validator_key.json, priv_validator_state.json
mode = "{{ .BaseConfig.Mode }}"
# If this node is many blocks behind the tip of the chain, FastSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
fast-sync = {{ .BaseConfig.FastSyncMode }}
# Database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb
# * goleveldb (github.com/syndtr/goleveldb - most popular implementation)
# - pure go
@@ -270,8 +265,8 @@ pprof-laddr = "{{ .RPC.PprofListenAddress }}"
#######################################################
[p2p]
# Enable the new p2p layer.
disable-legacy = {{ .P2P.DisableLegacy }}
# Enable the legacy p2p layer.
use-legacy = {{ .P2P.UseLegacy }}
# Select the p2p internal queue
queue-type = "{{ .P2P.QueueType }}"
@@ -305,6 +300,7 @@ persistent-peers = "{{ .P2P.PersistentPeers }}"
upnp = {{ .P2P.UPNP }}
# Path to address book
# TODO: Remove once p2p refactor is complete in favor of peer store.
addr-book-file = "{{ js .P2P.AddrBook }}"
# Set true for strict address routability rules
@@ -330,6 +326,8 @@ max-connections = {{ .P2P.MaxConnections }}
max-incoming-connection-attempts = {{ .P2P.MaxIncomingConnectionAttempts }}
# List of node IDs, to which a connection will be (re)established ignoring any existing limits
# TODO: Remove once p2p refactor is complete.
# ref: https://github.com/tendermint/tendermint/issues/5670
unconditional-peer-ids = "{{ .P2P.UnconditionalPeerIDs }}"
# Maximum pause when redialing a persistent peer (if zero, exponential backoff is used)
@@ -426,22 +424,30 @@ ttl-num-blocks = {{ .Mempool.TTLNumBlocks }}
# starting from the height of the snapshot.
enable = {{ .StateSync.Enable }}
# RPC servers (comma-separated) for light client verification of the synced state machine and
# retrieval of state data for node bootstrapping. Also needs a trusted height and corresponding
# header hash obtained from a trusted source, and a period during which validators can be trusted.
#
# For Cosmos SDK-based chains, trust-period should usually be about 2/3 of the unbonding time (~2
# weeks) during which they can be financially punished (slashed) for misbehavior.
# State sync uses light client verification to verify state. This can be done either through the
# P2P layer or RPC layer. Set this to true to use the P2P layer. If false (default), RPC layer
# will be used.
use-p2p = {{ .StateSync.UseP2P }}
# If using RPC, at least two addresses need to be provided. They should be compatible with net.Dial,
# for example: "host.example.com:2125"
rpc-servers = "{{ StringsJoin .StateSync.RPCServers "," }}"
# The hash and height of a trusted block. Must be within the trust-period.
trust-height = {{ .StateSync.TrustHeight }}
trust-hash = "{{ .StateSync.TrustHash }}"
# The trust period should be set so that Tendermint can detect and gossip misbehavior before
# it is considered expired. For chains based on the Cosmos SDK, one day less than the unbonding
# period should suffice.
trust-period = "{{ .StateSync.TrustPeriod }}"
# Time to spend discovering snapshots before initiating a restore.
discovery-time = "{{ .StateSync.DiscoveryTime }}"
# Temporary directory for state sync snapshot chunks, defaults to the OS tempdir (typically /tmp).
# Will create a new, randomly named directory within, and remove it when done.
# Temporary directory for state sync snapshot chunks, defaults to os.TempDir().
# The synchronizer will create a new, randomly named directory within this directory
# and remove it when the sync is complete.
temp-dir = "{{ .StateSync.TempDir }}"
# The timeout duration before re-requesting a chunk, possibly from a different
@@ -454,10 +460,15 @@ fetchers = "{{ .StateSync.Fetchers }}"
#######################################################
### Block Sync Configuration Connections ###
#######################################################
[fastsync]
[blocksync]
# If this node is many blocks behind the tip of the chain, BlockSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
enable = {{ .BlockSync.Enable }}
# Block Sync version to use:
# 1) "v0" (default) - the legacy block sync implementation
# 1) "v0" (default) - the standard Block Sync implementation
# 2) "v2" - DEPRECATED, please use v0
version = "{{ .BlockSync.Version }}"

View File

@@ -36,9 +36,7 @@ func TestEnsureRoot(t *testing.T) {
data, err := ioutil.ReadFile(filepath.Join(tmpDir, defaultConfigFilePath))
require.Nil(err)
if !checkConfig(string(data)) {
t.Fatalf("config file missing some information")
}
checkConfig(t, string(data))
ensureFiles(t, tmpDir, "data")
}
@@ -57,9 +55,7 @@ func TestEnsureTestRoot(t *testing.T) {
data, err := ioutil.ReadFile(filepath.Join(rootDir, defaultConfigFilePath))
require.Nil(err)
if !checkConfig(string(data)) {
t.Fatalf("config file missing some information")
}
checkConfig(t, string(data))
// TODO: make sure the cfg returned and testconfig are the same!
baseConfig := DefaultBaseConfig()
@@ -67,16 +63,15 @@ func TestEnsureTestRoot(t *testing.T) {
ensureFiles(t, rootDir, defaultDataDir, baseConfig.Genesis, pvConfig.Key, pvConfig.State)
}
func checkConfig(configFile string) bool {
var valid bool
func checkConfig(t *testing.T, configFile string) {
t.Helper()
// list of words we expect in the config
var elems = []string{
"moniker",
"seeds",
"proxy-app",
"fast_sync",
"create_empty_blocks",
"blocksync",
"create-empty-blocks",
"peer",
"timeout",
"broadcast",
@@ -89,10 +84,7 @@ func checkConfig(configFile string) bool {
}
// Previous implementation (removed):
for _, e := range elems {
	if !strings.Contains(configFile, e) {
		valid = false
	} else {
		valid = true
	}
}
return valid
}

// Updated implementation (added):
for _, e := range elems {
	if !strings.Contains(configFile, e) {
		t.Errorf("config file was expected to contain %s but did not", e)
	}
}
}

View File

@@ -36,8 +36,7 @@ func TestPubKeySecp256k1Address(t *testing.T) {
addrBbz, _, _ := base58.CheckDecode(d.addr)
addrB := crypto.Address(addrBbz)
var priv secp256k1.PrivKey = secp256k1.PrivKey(privB)
priv := secp256k1.PrivKey(privB)
pubKey := priv.PubKey()
pubT, _ := pubKey.(secp256k1.PubKey)
pub := pubT

View File

@@ -98,3 +98,5 @@ Note the context/background should be written in the present tense.
- [ADR-045: ABCI-Evidence](./adr-045-abci-evidence.md)
- [ADR-057: RPC](./adr-057-RPC.md)
- [ADR-069: Node Initialization](./adr-069-flexible-node-initialization.md)
- [ADR-071: Proposer-Based Timestamps](adr-071-proposer-based-timestamps.md)
- [ADR-072: Restore Requests for Comments](./adr-072-request-for-comments.md)

View File

@@ -24,6 +24,7 @@
- April 1, 2021: Initial Draft (@alexanderbez)
- April 28, 2021: Specify search capabilities are only supported through the KV indexer (@marbar3778)
- May 19, 2021: Update the SQL schema and the eventsink interface (@jayt106)
- Aug 30, 2021: Update the SQL schema and the psql implementation (@creachadair)
## Status
@@ -145,163 +146,190 @@ The postgres eventsink will not support `tx_search`, `block_search`, `GetTxByHas
```sql
-- Previous schema (removed):

-- Table Definition ----------------------------------------------

CREATE TYPE block_event_type AS ENUM ('begin_block', 'end_block', '');

CREATE TABLE block_events (
    id SERIAL PRIMARY KEY,
    key VARCHAR NOT NULL,
    value VARCHAR NOT NULL,
    height INTEGER NOT NULL,
    type block_event_type,
    created_at TIMESTAMPTZ NOT NULL,
    chain_id VARCHAR NOT NULL
);

CREATE TABLE tx_results (
    id SERIAL PRIMARY KEY,
    tx_result BYTEA NOT NULL,
    created_at TIMESTAMPTZ NOT NULL
);

CREATE TABLE tx_events (
    id SERIAL PRIMARY KEY,
    key VARCHAR NOT NULL,
    value VARCHAR NOT NULL,
    height INTEGER NOT NULL,
    hash VARCHAR NOT NULL,
    tx_result_id SERIAL,
    created_at TIMESTAMPTZ NOT NULL,
    chain_id VARCHAR NOT NULL,
    FOREIGN KEY (tx_result_id)
        REFERENCES tx_results(id)
        ON DELETE CASCADE
);

-- Indices -------------------------------------------------------

CREATE INDEX idx_block_events_key_value ON block_events(key, value);
CREATE INDEX idx_tx_events_key_value ON tx_events(key, value);
CREATE INDEX idx_tx_events_hash ON tx_events(hash);

-- Updated schema (added):

-- The blocks table records metadata about each block.
-- The block record does not include its events or transactions (see tx_results).
CREATE TABLE blocks (
  rowid      BIGSERIAL PRIMARY KEY,
  height     BIGINT NOT NULL,
  chain_id   VARCHAR NOT NULL,
  -- When this block header was logged into the sink, in UTC.
  created_at TIMESTAMPTZ NOT NULL,

  UNIQUE (height, chain_id)
);

-- Index blocks by height and chain, since we need to resolve block IDs when
-- indexing transaction records and transaction events.
CREATE INDEX idx_blocks_height_chain ON blocks(height, chain_id);

-- The tx_results table records metadata about transaction results. Note that
-- the events from a transaction are stored separately.
CREATE TABLE tx_results (
  rowid      BIGSERIAL PRIMARY KEY,
  -- The block to which this transaction belongs.
  block_id   BIGINT NOT NULL REFERENCES blocks(rowid),
  -- The sequential index of the transaction within the block.
  index      INTEGER NOT NULL,
  -- When this result record was logged into the sink, in UTC.
  created_at TIMESTAMPTZ NOT NULL,
  -- The hex-encoded hash of the transaction.
  tx_hash    VARCHAR NOT NULL,
  -- The protobuf wire encoding of the TxResult message.
  tx_result  BYTEA NOT NULL,

  UNIQUE (block_id, index)
);

-- The events table records events. All events (both block and transaction) are
-- associated with a block ID; transaction events also have a transaction ID.
CREATE TABLE events (
  rowid    BIGSERIAL PRIMARY KEY,
  -- The block and transaction this event belongs to.
  -- If tx_id is NULL, this is a block event.
  block_id BIGINT NOT NULL REFERENCES blocks(rowid),
  tx_id    BIGINT NULL REFERENCES tx_results(rowid),
  -- The application-defined type label for the event.
  type     VARCHAR NOT NULL
);

-- The attributes table records event attributes.
CREATE TABLE attributes (
  event_id      BIGINT NOT NULL REFERENCES events(rowid),
  key           VARCHAR NOT NULL, -- bare key
  composite_key VARCHAR NOT NULL, -- composed type.key
  value         VARCHAR NULL,

  UNIQUE (event_id, key)
);

-- A joined view of events and their attributes. Events that do not have any
-- attributes are represented as a single row with empty key and value fields.
CREATE VIEW event_attributes AS
  SELECT block_id, tx_id, type, key, composite_key, value
    FROM events LEFT JOIN attributes ON (events.rowid = attributes.event_id);

-- A joined view of all block events (those having tx_id NULL).
CREATE VIEW block_events AS
  SELECT blocks.rowid as block_id, height, chain_id, type, key, composite_key, value
    FROM blocks JOIN event_attributes ON (blocks.rowid = event_attributes.block_id)
   WHERE event_attributes.tx_id IS NULL;

-- A joined view of all transaction events.
CREATE VIEW tx_events AS
  SELECT height, index, chain_id, type, key, composite_key, value, tx_results.created_at
    FROM blocks JOIN tx_results ON (blocks.rowid = tx_results.block_id)
    JOIN event_attributes ON (tx_results.rowid = event_attributes.tx_id)
   WHERE event_attributes.tx_id IS NOT NULL;
```
The `PSQLEventSink` will implement the `EventSink` interface as follows
(some details omitted for brevity):
```go
func NewPSQLEventSink(connStr string, chainID string) (*PSQLEventSink, error) {
db, err := sql.Open("postgres", connStr)
if err != nil {
return nil, err
}
func NewEventSink(connStr, chainID string) (*EventSink, error) {
db, err := sql.Open(driverName, connStr)
// ...
// ...
return &EventSink{
store: db,
chainID: chainID,
}, nil
}
// Previous implementation (removed):
func (es *PSQLEventSink) IndexBlockEvents(h types.EventDataNewBlockHeader) error {
	sqlStmt := sq.Insert("block_events").Columns("key", "value", "height", "type", "created_at", "chain_id")

	// index the reserved block height index
	ts := time.Now()
	sqlStmt = sqlStmt.Values(types.BlockHeightKey, h.Header.Height, h.Header.Height, "", ts, es.chainID)

	for _, event := range h.ResultBeginBlock.Events {
		// only index events with a non-empty type
		if len(event.Type) == 0 {
			continue
		}

		for _, attr := range event.Attributes {
			if len(attr.Key) == 0 {
				continue
			}

			// index iff the event specified index:true and it's not a reserved event
			compositeKey := fmt.Sprintf("%s.%s", event.Type, string(attr.Key))
			if compositeKey == types.BlockHeightKey {
				return fmt.Errorf("event type and attribute key \"%s\" is reserved; please use a different key", compositeKey)
			}

			if attr.GetIndex() {
				sqlStmt = sqlStmt.Values(compositeKey, string(attr.Value), h.Header.Height, BlockEventTypeBeginBlock, ts, es.chainID)
			}
		}
	}

	// index end_block events...
	// execute sqlStmt db query...
}

// Updated implementation (added):
func (es *EventSink) IndexBlockEvents(h types.EventDataNewBlockHeader) error {
	ts := time.Now().UTC()

	return runInTransaction(es.store, func(tx *sql.Tx) error {
		// Add the block to the blocks table and report back its row ID for use
		// in indexing the events for the block.
		blockID, err := queryWithID(tx, `
INSERT INTO blocks (height, chain_id, created_at)
  VALUES ($1, $2, $3)
  ON CONFLICT DO NOTHING
  RETURNING rowid;
`, h.Header.Height, es.chainID, ts)
		// ...

		// Insert the special block meta-event for height.
		if err := insertEvents(tx, blockID, 0, []abci.Event{
			makeIndexedEvent(types.BlockHeightKey, fmt.Sprint(h.Header.Height)),
		}); err != nil {
			return fmt.Errorf("block meta-events: %w", err)
		}
		// Insert all the block events. Order is important here,
		if err := insertEvents(tx, blockID, 0, h.ResultBeginBlock.Events); err != nil {
			return fmt.Errorf("begin-block events: %w", err)
		}
		if err := insertEvents(tx, blockID, 0, h.ResultEndBlock.Events); err != nil {
			return fmt.Errorf("end-block events: %w", err)
		}
		return nil
	})
}
// Previous implementation (removed):
func (es *PSQLEventSink) IndexTxEvents(txr []*abci.TxResult) error {
	sqlStmtEvents := sq.Insert("tx_events").Columns("key", "value", "height", "hash", "tx_result_id", "created_at", "chain_id")
	sqlStmtTxResult := sq.Insert("tx_results").Columns("tx_result", "created_at")

	ts := time.Now()
	for _, tx := range txr {
		// store the tx result
		txBz, err := proto.Marshal(tx)
		if err != nil {
			return err
		}

		sqlStmtTxResult = sqlStmtTxResult.Values(txBz, ts)

		// execute sqlStmtTxResult db query...
		var txID uint32
		err = sqlStmtTxResult.QueryRow().Scan(&txID)
		if err != nil {
			return err
		}

		// index the reserved height and hash indices
		hash := types.Tx(tx.Tx).Hash()
		sqlStmtEvents = sqlStmtEvents.Values(types.TxHashKey, hash, tx.Height, hash, txID, ts, es.chainID)
		sqlStmtEvents = sqlStmtEvents.Values(types.TxHeightKey, tx.Height, tx.Height, hash, txID, ts, es.chainID)

		for _, event := range tx.Result.Events {
			// only index events with a non-empty type
			if len(event.Type) == 0 {
				continue
			}

			for _, attr := range event.Attributes {
				if len(attr.Key) == 0 {
					continue
				}

				// index if `index: true` is set
				compositeTag := fmt.Sprintf("%s.%s", event.Type, string(attr.Key))
				// ensure event does not conflict with a reserved prefix key
				if compositeTag == types.TxHashKey || compositeTag == types.TxHeightKey {
					return fmt.Errorf("event type and attribute key \"%s\" is reserved; please use a different key", compositeTag)
				}
				if attr.GetIndex() {
					sqlStmtEvents = sqlStmtEvents.Values(compositeTag, string(attr.Value), tx.Height, hash, txID, ts, es.chainID)
				}
			}
		}
	}

	// execute sqlStmtEvents db query...
	return nil
}

// Updated implementation (added):
func (es *EventSink) IndexTxEvents(txrs []*abci.TxResult) error {
	ts := time.Now().UTC()

	for _, txr := range txrs {
		// Encode the result message in protobuf wire format for indexing.
		resultData, err := proto.Marshal(txr)
		// ...

		// Index the hash of the underlying transaction as a hex string.
		txHash := fmt.Sprintf("%X", types.Tx(txr.Tx).Hash())

		if err := runInTransaction(es.store, func(tx *sql.Tx) error {
			// Find the block associated with this transaction.
			blockID, err := queryWithID(tx, `
SELECT rowid FROM blocks WHERE height = $1 AND chain_id = $2;
`, txr.Height, es.chainID)
			// ...

			// Insert a record for this tx_result and capture its ID for indexing events.
			txID, err := queryWithID(tx, `
INSERT INTO tx_results (block_id, index, created_at, tx_hash, tx_result)
  VALUES ($1, $2, $3, $4, $5)
  ON CONFLICT DO NOTHING
  RETURNING rowid;
`, blockID, txr.Index, ts, txHash, resultData)
			// ...

			// Insert the special transaction meta-events for hash and height.
			if err := insertEvents(tx, blockID, txID, []abci.Event{
				makeIndexedEvent(types.TxHashKey, txHash),
				makeIndexedEvent(types.TxHeightKey, fmt.Sprint(txr.Height)),
			}); err != nil {
				return fmt.Errorf("indexing transaction meta-events: %w", err)
			}
			// Index any events packaged with the transaction.
			if err := insertEvents(tx, blockID, txID, txr.Result.Events); err != nil {
				return fmt.Errorf("indexing transaction events: %w", err)
			}
			return nil
		}); err != nil {
			return err
		}
	}
	return nil
}
func (es *PSQLEventSink) SearchBlockEvents(ctx context.Context, q *query.Query) ([]int64, error) {
return nil, errors.New("block search is not supported via the postgres event sink")
}
// SearchBlockEvents is not implemented by this sink, and reports an error for all queries.
func (es *EventSink) SearchBlockEvents(ctx context.Context, q *query.Query) ([]int64, error)
func (es *PSQLEventSink) SearchTxEvents(ctx context.Context, q *query.Query) ([]*abci.TxResult, error) {
return nil, errors.New("tx search is not supported via the postgres event sink")
}
// SearchTxEvents is not implemented by this sink, and reports an error for all queries.
func (es *EventSink) SearchTxEvents(ctx context.Context, q *query.Query) ([]*abci.TxResult, error)
func (es *PSQLEventSink) GetTxByHash(hash []byte) (*abci.TxResult, error) {
return nil, errors.New("getTxByHash is not supported via the postgres event sink")
}
// GetTxByHash is not implemented by this sink, and reports an error for all queries.
func (es *EventSink) GetTxByHash(hash []byte) (*abci.TxResult, error)
func (es *PSQLEventSink) HasBlock(h int64) (bool, error) {
return false, errors.New("hasBlock is not supported via the postgres event sink")
}
// HasBlock is not implemented by this sink, and reports an error for all queries.
func (es *EventSink) HasBlock(h int64) (bool, error)
```
### Configuration

View File

@@ -0,0 +1,105 @@
# ADR 72: Restore Requests for Comments
## Changelog
- 20-Aug-2021: Initial draft (@creachadair)
## Status
Proposed
## Context
In the past, we kept a collection of Request for Comments (RFC) documents in `docs/rfc`.
Prior to the creation of the ADR process, these documents were used to document
design and implementation decisions about Tendermint Core. The RFC directory
was removed in favor of ADRs, in commit 3761aa69 (PR
[\#6345](https://github.com/tendermint/tendermint/pull/6345)).
For issues where an explicit design decision or implementation change is
required, an ADR is generally preferable to an open-ended RFC: An ADR is
relatively narrowly-focused, identifies a specific design or implementation
question, and documents the consensus answer to that question.
Some discussions are more open-ended, however, or don't require a specific
decision to be made (yet). Such conversations are still valuable to document,
and several members of the Tendermint team have been doing so by writing gists
or Google docs to share them around. That works well enough in the moment, but
gists do not support any kind of collaborative editing, and both gists and docs
are hard to discover after the fact. Google docs have much better collaborative
editing, but are worse for discoverability, especially when contributors span
different Google accounts.
Discoverability is important, because these kinds of open-ended discussions are
useful to people who come later -- either as new team members or as outside
contributors seeking to use and understand the thoughts behind our designs and
the architectural decisions that arose from those discussions.
With these in mind, I propose that:
- We re-create a new, initially empty `docs/rfc` directory in the repository,
and use it to capture these kinds of open-ended discussions in supplement to
ADRs.
- Unlike in the previous RFC scheme, documents in this new directory will
_not_ be used directly for decision-making. This is the key difference
between an RFC and an ADR.
Instead, an RFC will exist to document background, articulate general
principles, and serve as a historical record of discussion and motivation.
In this system, an RFC may _only_ result in a decision indirectly, via ADR
documents created in response to the RFC.
**In short:** If a decision is required, write an ADR; otherwise if a
sufficiently broad discussion is needed, write an RFC.
Just so that there is a consistent format, I also propose that:
- RFC files are named `rfc-XXX-title.{md,rst,txt}` and are written in plain
text, Markdown, or ReStructured Text.
- Like an ADR, an RFC should include a high-level change log at the top of the
document, and sections for:
* Abstract: A brief, high-level synopsis of the topic.
* Background: Any background necessary to understand the topic.
* Discussion: Detailed discussion of the issue being considered.
- Unlike an ADR, an RFC does _not_ include sections for Decisions, Detailed
Design, or evaluation of proposed solutions. If an RFC leads to a proposal
for an actual architectural change, that must be recorded in an ADR in the
usual way, and may refer back to the RFC in its References section.
## Alternative Approaches
Leaving aside implementation details, the main alternative to this proposal is
to leave things as they are now, with ADRs as the only log of record and other
discussions being held informally in whatever medium is convenient at the time.
## Decision
(pending)
## Detailed Design
- Create a new `docs/rfc` directory in the `tendermint` repository. Note that
this proposal intentionally does _not_ pull back the previous contents of
that path from Git history, as those documents were appropriately merged into
the ADR process.
- Create a `README.md` for RFCs that explains the rules and their relationship
to ADRs.
- Create an `rfc-template.md` file for RFC files.
## Consequences
### Positive
- We will have a more discoverable place to record open-ended discussions that
do not immediately result in a design change.
### Negative
- Potentially some people could be confused about the RFC/ADR distinction.


@@ -36,10 +36,6 @@ proxy-app = "tcp://127.0.0.1:26658"
# A custom human readable name for this node
moniker = "anonymous"
# If this node is many blocks behind the tip of the chain, BlockSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
fast-sync = true
# Mode of Node: full | validator | seed (default: "validator")
# * validator node (default)
@@ -356,11 +352,16 @@ temp-dir = ""
#######################################################
### BlockSync Configuration Connections ###
#######################################################
[fastsync]
[blocksync]
# If this node is many blocks behind the tip of the chain, BlockSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
enable = true
# Block Sync version to use:
# 1) "v0" (default) - the legacy block sync implementation
# 2) "v2" - complete redesign of v0, optimized for testability & readability
# 1) "v0" (default) - the standard block sync implementation
# 2) "v2" - DEPRECATED, please use v0
version = "v0"
#######################################################

42
docs/rfc/README.md Normal file

@@ -0,0 +1,42 @@
---
order: 1
parent:
order: false
---
# Requests for Comments
A Request for Comments (RFC) is a record of discussion on an open-ended topic
related to the design and implementation of Tendermint Core, for which no
immediate decision is required.
The purpose of an RFC is to serve as a historical record of a high-level
discussion that might otherwise only be recorded in an ad hoc way (for example,
via gists or Google docs) that is difficult for someone to discover after the
fact. An RFC _may_ give rise to more specific architectural _decisions_ for
Tendermint, but those decisions must be recorded separately in [Architecture
Decision Records (ADR)](./../architecture).
As a rule of thumb, if you can articulate a specific question that needs to be
answered, write an ADR. If you need to explore the topic and get input from
others to know what questions need to be answered, an RFC may be appropriate.
## RFC Content
An RFC should provide:
- A **changelog**, documenting when and how the RFC has changed.
- An **abstract**, briefly summarizing the topic so the reader can quickly tell
whether it is relevant to their interest.
- Any **background** a reader will need to understand and participate in the
substance of the discussion (links to other documents are fine here).
- The **discussion**, the primary content of the document.
The [rfc-template.md](./rfc-template.md) file includes placeholders for these
sections.
## Table of Contents
- [RFC-000: P2P Roadmap](./rfc-000-p2p-roadmap.rst)
<!-- - [RFC-NNN: Title](./rfc-NNN-title.md) -->


@@ -0,0 +1,316 @@
====================
RFC 000: P2P Roadmap
====================
Changelog
---------
- 2021-08-20: Completed initial draft and distributed via a gist
- 2021-08-25: Migrated as an RFC and changed format
Abstract
--------
This document discusses the future of peer network management in Tendermint, with
a particular focus on features, semantics, and a proposed roadmap.
Specifically, we consider libp2p as a toolkit for implementing some fundamentals.
Background
----------
For the 0.35 release cycle the switching/routing layer of Tendermint was
replaced. This work was done "in place," and produced a version of Tendermint
that was backward-compatible and interoperable with previous versions of the
software. While there are new p2p/peer management constructs in the new
version (e.g. ``PeerManager`` and ``Router``), the main effect of this change
was to simplify the ways that other components within Tendermint interacted with
the peer management layer, and to make it possible for higher-level components
(specifically the reactors) to be used and tested more independently.
This refactoring, which was a major undertaking, was entirely necessary to
enable future development and iteration on this aspect of
Tendermint. There are also a number of potential user-facing features that
depend heavily on the p2p layer: additional transport protocols, transport
compression, improved resilience to network partitions. These improvements to
modularity, stability, and reliability of the p2p system will also make
ongoing maintenance and feature development easier in the rest of Tendermint.
Critique of Current Peer-to-Peer Infrastructure
-----------------------------------------------
The current (refactored) P2P stack is an improvement on the previous iteration
(legacy), but as of 0.35, there remains room for improvement in the design and
implementation of the P2P layer.
Some limitations of the current stack include:
- heavy reliance on buffering to avoid backups in the flow of messages between
components; this is fragile to maintain, can lead to unexpected memory usage
patterns, and forces the routing layer to decide when messages
should be discarded.
- the current p2p stack relies on convention (rather than the compiler) to
enforce the API boundaries and conventions between reactors and the router,
making it very easy to write "wrong" reactor code or introduce a bad
dependency.
- the current stack is probably more complex and difficult to maintain because
the legacy system must coexist with the new components in 0.35. When the
legacy stack is removed there are some simple changes that will become
possible and could reduce the complexity of the new system. (e.g. `#6598
<https://github.com/tendermint/tendermint/issues/6598>`_.)
- the current stack encapsulates a lot of information about peers, and makes it
difficult to expose that information to monitoring/observability tools. This
general opacity also makes it difficult to interact with the peer system
from other areas of the code base (e.g. tests, reactors).
- the legacy stack provided operators some control to force the system to
dial new peers or seed nodes, or to manipulate the topology of the system *in
situ*. The current stack can't easily provide this, and while the new stack
may have better behavior, it does leave operators' hands tied.
Some of these issues will be resolved early in the 0.36 cycle, with the
removal of the legacy components.
The 0.36 release also provides the opportunity to make changes to the
protocol, as the release will not be compatible with previous releases.
Areas for Development
---------------------
These sections describe features that may make sense to include in a Phase 2 of
a P2P project.
Internal Message Passing
~~~~~~~~~~~~~~~~~~~~~~~~
Currently, there's no provision for intranode communication using the P2P
layer, which means that when two reactors need to interact with each other
they must depend on each other's interfaces and initialization. These
interactions (e.g. transitions between blocksync and consensus) could instead
be changed from procedure calls to message passing.
This is a relatively simple change and could be implemented with the following
components:
- a constant to represent "local" delivery as the ``To`` field on
``p2p.Envelope``.
- a special path for routing local messages that doesn't require message
serialization (protobuf marshaling/unmarshaling).
Adding these semantics, particularly in conjunction with synchronous
semantics, would provide a solution to dependency graph problems currently
present in the Tendermint codebase, simplifying development and making it
possible to isolate components for testing.
Eventually, this will also make it possible to have a logical Tendermint node
running in multiple processes or in a collection of containers, although the
use case for this may be debatable.
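As a rough sketch of the local-delivery idea, here is a self-contained Go
toy; the names are illustrative only and are not Tendermint's actual API.

.. code-block:: go

    package main

    import "fmt"

    type NodeID string

    // LocalNodeID is a sentinel: an Envelope addressed to it is delivered
    // in-process, skipping protobuf serialization entirely.
    const LocalNodeID NodeID = "local"

    type Envelope struct {
        To      NodeID
        Message interface{} // stays a Go value on the local path
    }

    type Router struct {
        local  chan Envelope // intranode path
        remote chan []byte   // stand-in for the wire path
    }

    func (r *Router) Route(e Envelope) {
        if e.To == LocalNodeID {
            r.local <- e // no marshaling round trip
            return
        }
        r.remote <- []byte(fmt.Sprint(e.Message)) // remote path: serialize and send
    }

    func main() {
        r := &Router{local: make(chan Envelope, 1), remote: make(chan []byte, 1)}
        r.Route(Envelope{To: LocalNodeID, Message: "blocksync -> consensus handoff"})
        fmt.Println((<-r.local).Message)
    }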
Synchronous Semantics (Paired Request/Response)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the current system, all messages are sent with fire-and-forget semantics,
and there's no coupling between a request sent via the p2p layer and a
response. Paired request/response semantics would simplify the implementation
of the state and block sync reactors, and make intra-node message passing more
powerful.
For some interactions, like gossiping transactions between the mempools of
different nodes, fire-and-forget semantics make sense, but for other
operations the missing link between requests/responses leads to either
inefficiency when a node fails to respond or becomes unavailable, or code that
is just difficult to follow.
To support this kind of work, the protocol would need to accommodate some kind
of request/response ID to allow identifying out-of-order responses over a
single connection. Additionally, the programming model of the
``p2p.Channel`` would need to be expanded to accommodate some kind of *future*
or similar paradigm, so that reactor code can be written without requiring the
reactor developer to wrestle with lower-level concurrency constructs.
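A minimal sketch of what such a future-style pairing might look like; none of
these names exist in the codebase today.

.. code-block:: go

    package main

    import (
        "context"
        "fmt"
        "sync"
        "time"
    )

    type Response struct {
        ID   uint64
        Body string
    }

    // Channel pairs outbound requests with inbound responses by ID.
    type Channel struct {
        mu      sync.Mutex
        nextID  uint64
        pending map[uint64]chan Response
    }

    // Send allocates a request ID and returns a future for the response.
    func (c *Channel) Send(body string) (uint64, <-chan Response) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.nextID++
        fut := make(chan Response, 1)
        c.pending[c.nextID] = fut
        // (the ID and body would be handed to the router here)
        return c.nextID, fut
    }

    // Deliver resolves the future for a response, even when responses
    // arrive out of order on a single connection.
    func (c *Channel) Deliver(r Response) {
        c.mu.Lock()
        fut, ok := c.pending[r.ID]
        delete(c.pending, r.ID)
        c.mu.Unlock()
        if ok {
            fut <- r
        }
    }

    func main() {
        ch := &Channel{pending: map[uint64]chan Response{}}
        id, fut := ch.Send("give me block 42")
        go ch.Deliver(Response{ID: id, Body: "block 42"})

        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        select {
        case r := <-fut:
            fmt.Println("got:", r.Body)
        case <-ctx.Done():
            fmt.Println("peer did not respond:", ctx.Err())
        }
    }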
Timeout Handling (QoS)
~~~~~~~~~~~~~~~~~~~~~~
Currently, all timeouts, buffering, and QoS features are handled at the router
layer, and the reactors are implemented in ways that assume/require
asynchronous operation. This both increases the required complexity at the
routing layer and means that misbehavior at the reactor level is difficult to
detect or attribute. Additionally, the current system provides three main
parameters to control quality of service:
- buffer sizes for channels and queues.
- priorities for channels.
- queue implementation details for shedding load.
These end up being quite coarse controls, and changing the settings is
difficult: because the queues and channels can buffer large numbers of
messages, it can be hard to see the impact of a given change, particularly
in our existing test environment. In general, we should endeavor to:
- set real timeouts, via contexts, on most message send operations, so that
senders rather than queues can be responsible for timeout
logic. Additionally, this will make it possible to avoid sending messages
during shutdown.
- reduce (to the greatest extent possible) the amount of buffering in
channels and the queues, to more readily surface backpressure and reduce the
potential for buildup of stale messages.
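As an illustration of the first point, a context-scoped send might look
roughly like the following; this is a sketch that assumes nothing about the
real reactor API.

.. code-block:: go

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // send places msg on an unbuffered channel, giving up when the context
    // expires, so the sender (not a queue) owns the timeout logic.
    func send(ctx context.Context, out chan<- string, msg string) error {
        select {
        case out <- msg:
            return nil
        case <-ctx.Done():
            return fmt.Errorf("send %q: %w", msg, ctx.Err())
        }
    }

    func main() {
        out := make(chan string) // deliberately unbuffered: backpressure is immediate
        ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
        defer cancel()
        if err := send(ctx, out, "vote"); err != nil {
            fmt.Println(err) // no receiver, so the send times out instead of queueing
        }
    }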
Stream Based Connection Handling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Currently the transport layer is message based, which makes sense as a
mental model of how the protocol works, but it makes transports and
connection types more difficult to implement: it forces a higher-level view
of the connection and interaction, which is harder to implement for novel
transport types, and makes it more likely that message-based caching and rate
limiting will be implemented at the transport layer rather than at a more
appropriate level.
The transport, then, would be responsible for negotiating the connection and
the handshake, and would otherwise behave like a socket/file descriptor with
``Read`` and ``Write`` methods.
While this was included in the initial design for the new P2P layer, it may be
obviated entirely if the transport and peer layer is replaced with libp2p,
which is primarily stream based.
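One possible shape for such a stream-based interface, sketched with
``net.Pipe`` standing in for a real transport; this is not the actual
``Transport`` API.

.. code-block:: go

    package main

    import (
        "fmt"
        "io"
        "net"
    )

    // Connection behaves like a socket: once the transport has negotiated
    // the connection and handshake, only raw byte streams are exposed.
    type Connection interface {
        io.ReadWriteCloser
    }

    // Transport dials and accepts Connections; framing, caching, and rate
    // limiting would live above this layer rather than inside it.
    type Transport interface {
        Dial(addr string) (Connection, error)
        Accept() (Connection, error)
    }

    func main() {
        a, b := net.Pipe()
        var _ Connection = a // net.Conn already satisfies the sketched interface
        go func() {
            a.Write([]byte("handshake"))
            a.Close()
        }()
        buf, _ := io.ReadAll(b)
        fmt.Printf("received %q over the stream\n", buf)
    }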
Service Discovery
~~~~~~~~~~~~~~~~~
In the current system, Tendermint assumes that all nodes in a network are
largely equivalent, and nodes tend to be "chatty," making many requests of
large numbers of peers and waiting for peers to (hopefully) respond. While
this works and has allowed Tendermint to get to a certain point, it both
produces a theoretical scaling bottleneck and makes it harder to test and
verify components of the system.
In addition to a peer's identity and connection information, peers should be
able to advertise a number of services or capabilities, and node operators or
developers should be able to specify peer capability requirements (e.g. target
at least <x> percent of peers with <y> capability).
These capabilities may be useful in selecting peers to send messages to; it
may also make sense to extend Tendermint's message addressing capability to
allow reactors to send messages to groups of peers based on role, rather than
only allowing addressing to one or all peers.
Having a good service discovery mechanism may pair well with the synchronous
semantics (request/response) work, as it allows reactors to "make a request of
a peer with <x> capability and wait for the response," rather than forcing the
reactors to track the capabilities or state of specific peers.
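A toy illustration of capability-based peer selection; none of these types
exist in Tendermint today.

.. code-block:: go

    package main

    import "fmt"

    // Peer carries identity plus a set of advertised capabilities.
    type Peer struct {
        ID           string
        Capabilities map[string]bool
    }

    // peersWith filters the known peer set by an advertised capability,
    // e.g. to target "at least <x> percent of peers with <y> capability".
    func peersWith(peers []Peer, capability string) []Peer {
        var out []Peer
        for _, p := range peers {
            if p.Capabilities[capability] {
                out = append(out, p)
            }
        }
        return out
    }

    func main() {
        peers := []Peer{
            {ID: "a", Capabilities: map[string]bool{"statesync": true}},
            {ID: "b", Capabilities: map[string]bool{"blocksync": true}},
        }
        for _, p := range peersWith(peers, "statesync") {
            fmt.Println("candidate peer:", p.ID)
        }
    }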
Solutions
---------
Continued Homegrown Implementation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The current peer system is homegrown and is conceptually compatible with the
needs of the project, and while there are limitations to the system, the p2p
layer is not (currently as of 0.35) a major source of bugs or friction during
development.
However, the current implementation makes a number of allowances for
interoperability, and there are a collection of iterative improvements that
should be considered in the next couple of releases. To maintain the current
implementation, upcoming work would include:
- change the ``Transport`` mechanism to facilitate easier implementations.
- implement different ``Transport`` handlers to be able to manage peer
connections using different protocols (e.g. QUIC, etc.)
- entirely remove the constructs and implementations of the legacy peer
implementation.
- establish and enforce clearer chains of responsibility for connection
establishment (e.g. handshaking, setup), which is currently shared between
three components.
- report better metrics regarding the state of peers and network
connectivity, which are opaque outside of the system. This is constrained at
the moment as a side effect of the split responsibility for connection
establishment.
- extend the PEX system to include service information so that nodes in the
network need not be homogeneous.
While maintaining a bespoke peer management layer would seem to distract from
development of core functionality, the truth is that (once the legacy code is
removed) the scope of the peer layer is relatively small from a maintenance
perspective, and having control at this layer might actually afford the
project the ability to more rapidly iterate on some features.
LibP2P
~~~~~~
LibP2P provides components that, approximately, account for the
``PeerManager`` and ``Transport`` components of the current (new) P2P
stack. The Go APIs seem reasonable, and being able to externalize the
implementation details of peer and connection management seems like it could
provide a lot of benefits, particularly in supporting a more active ecosystem.
In general, the API provides the kind of stream-based, multi-protocol,
idiomatic baseline needed for implementing a peer layer. Additionally,
because libp2p handles peer exchange and connection management at a lower
level, adopting it would make it possible to remove a good deal of
code. Having said that, Tendermint's P2P layer covers a
greater scope (e.g. message routing to different peers) and that layer is
something that Tendermint might want to retain.
There are a number of unknowns that require more research, including how much
of a peer database the Tendermint engine itself needs to maintain in order to
support higher-level operations (consensus, statesync); it might be the case
that our internal systems need to know much less about peers than currently
specified. Similarly, the current system has a notion of peer scoring that
cannot be communicated to libp2p. This may be fine, as scoring is only used to
support peer exchange (PEX), which would become a property of libp2p and would
no longer be expressed in its current higher-level form.
In general, the effort to switch to libp2p would involve:
- timing it during an appropriate protocol-breaking window, as it doesn't seem
viable to support both libp2p *and* the current p2p protocol.
- providing some in-memory testing network to support the use case that the
current ``p2p.MemoryNetwork`` provides.
- re-homing the ``p2p.Router`` implementation on top of libp2p components to
be able to maintain the current reactor implementations.
Open questions include:
- how much local buffering should we be doing? We should figure out what
behavior libp2p provides for QoS-type functionality, and whether our
requirements mean we would need to implement this ourselves on top of
libp2p.
- if Tendermint was going to use libp2p, how would libp2p's stability
guarantees (protocol, etc.) impact/constrain Tendermint's stability
guarantees?
- what kind of introspection does libp2p provide, and to what extent would
this change or constrain the kind of observability that Tendermint is able
to provide?
- how do efforts to select "the best" (healthy, close, well-behaving, etc.)
peers work out if Tendermint is not maintaining a local peer database?
- would adding additional higher level semantics (internal message passing,
request/response pairs, service discovery, etc.) facilitate removing some of
the direct linkages between constructs/components in the system and reduce
the need for Tendermint nodes to maintain state about their peers?
References
----------
- `Tracking Ticket for P2P Refactor Project <https://github.com/tendermint/tendermint/issues/5670>`_
- `ADR 61: P2P Refactor Scope <../architecture/adr-061-p2p-refactor-scope.md>`_
- `ADR 62: P2P Architecture and Abstraction <../architecture/adr-062-p2p-architecture.md>`_

35
docs/rfc/rfc-template.md Normal file

@@ -0,0 +1,35 @@
# RFC {RFC-NUMBER}: {TITLE}
## Changelog
- {date}: {changelog}
## Abstract
> A brief high-level synopsis of the topic of discussion for this RFC, ideally
> just a few sentences. This should help the reader quickly decide whether the
> rest of the discussion is relevant to their interest.
## Background
> Any context or orientation needed for a reader to understand and participate
> in the substance of the Discussion. If necessary, this section may include
> links to other documentation or sources rather than restating existing
> material, but should provide enough detail that the reader can tell what they
> need to read to be up-to-date.
### References
> Links to external materials needed to follow the discussion may be added here.
>
> In addition, if the discussion in a request for comments leads to any design
> decisions, it may be helpful to add links to the ADR documents here after the
> discussion has settled.
## Discussion
> This section contains the core of the discussion.
>
> There is no fixed format for this section, but ideally changes to this
> section should be updated before merging to reflect any discussion that took
> place on the PR that made those changes.


@@ -17,9 +17,9 @@ consensus gossip protocol.
## Using Block Sync
To support faster syncing, Tendermint offers a `fast-sync` mode, which
To support faster syncing, Tendermint offers a `blocksync` mode, which
is enabled by default, and can be toggled in the `config.toml` or via
`--fast_sync=false`.
`--blocksync.enable=false`.
In this mode, the Tendermint daemon will sync hundreds of times faster
than if it used the real-time consensus process. Once caught up, the
@@ -29,18 +29,23 @@ has at least one peer and it's height is at least as high as the max
reported peer height. See [the IsCaughtUp
method](https://github.com/tendermint/tendermint/blob/b467515719e686e4678e6da4e102f32a491b85a0/blockchain/pool.go#L128).
Note: There are two versions of Block Sync. We recommend using v0 as v2 is still in beta.
Note: There are multiple versions of Block Sync. Please use v0 as the other versions are no longer supported.
If you would like to use a different version you can do so by changing the version in the `config.toml`:
```toml
#######################################################
### Block Sync Configuration Connections ###
#######################################################
[fastsync]
[blocksync]
# If this node is many blocks behind the tip of the chain, BlockSync
# allows them to catchup quickly by downloading blocks in parallel
# and verifying their commits
enable = true
# Block Sync version to use:
# 1) "v0" (default) - the legacy Block Sync implementation
# 2) "v2" - complete redesign of v0, optimized for testability & readability
# 1) "v0" (default) - the standard Block Sync implementation
# 2) "v2" - DEPRECATED, please use v0
version = "v0"
```
@@ -55,4 +60,4 @@ the network best height, it will switches to the state sync mechanism and then e
another event for exposing the fast-sync `complete` status and the state `height`.
The user can query the events by subscribing `EventQueryBlockSyncStatus`
Please check [types](https://pkg.go.dev/github.com/tendermint/tendermint/types?utm_source=godoc#pkg-constants) for the details.


@@ -62,3 +62,30 @@ given destination directory. Each archive will contain:
Note: goroutine.out and heap.out will only be written if a profile address is
provided and is operational. This command is blocking and will log any error.
## Tendermint Inspect
Tendermint includes an `inspect` command for querying Tendermint's state store and block
store over Tendermint RPC.
When the Tendermint consensus engine detects inconsistent state, it will crash the
entire Tendermint process.
While in this inconsistent state, a node running Tendermint's consensus engine will not start up.
The `inspect` command runs only a subset of Tendermint's RPC endpoints for querying the block store
and state store.
`inspect` allows operators to query a read-only view of the state.
`inspect` does not run the consensus engine at all and can therefore be used to debug
processes that have crashed due to inconsistent state.
To start the `inspect` process, run
```bash
tendermint inspect
```
### RPC endpoints
The list of available RPC endpoints can be found by making a request to the RPC port.
For an `inspect` process running on `127.0.0.1:26657`, navigate your browser to
`http://127.0.0.1:26657/` to retrieve the list of enabled RPC endpoints.
Additional information on the Tendermint RPC endpoints can be found in the [rpc documentation](https://docs.tendermint.com/master/rpc).
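For example, with an `inspect` server listening on the default RPC address, you can
list the endpoints and then call one of them directly (the `height` value below is
illustrative):

```bash
curl http://127.0.0.1:26657/
curl 'http://127.0.0.1:26657/block?height=1'
```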


@@ -64,13 +64,42 @@ It won't kill the node, but it will gather all of the above data and package i
At this point, depending on how severe the degradation is, you may want to restart the process.
## Tendermint Inspect
What if the Tendermint node will not start up due to inconsistent consensus state?
When a node running the Tendermint consensus engine detects an inconsistent state,
it will crash the entire Tendermint process.
The Tendermint consensus engine cannot be run in this inconsistent state, and so the node
will fail to start up as a result.
The Tendermint RPC server can provide valuable information for debugging in this situation.
The Tendermint `inspect` command will run a subset of the Tendermint RPC server
that is useful for debugging inconsistent state.
### Running inspect
Start up the `inspect` tool on the machine where Tendermint crashed using:
```bash
tendermint inspect --home=</path/to/app.d>
```
`inspect` will use the data directory specified in your Tendermint configuration file.
`inspect` will also run the RPC server at the address specified in your Tendermint configuration file.
### Using inspect
With the `inspect` server running, you can access RPC endpoints that are critically important
for debugging.
Calling the `/status`, `/consensus_state` and `/dump_consensus_state` RPC endpoint
will return useful information about the Tendermint consensus state.
## Outro
We're hoping that the `tendermint debug` subcommand will become de facto the first response to any accidents.
We're hoping that these Tendermint tools will become de facto the first response for any accidents.
Let us know what your experience has been so far! Have you had a chance to try `tendermint debug` yet?
Let us know what your experience has been so far! Have you had a chance to try `tendermint debug` or `tendermint inspect` yet?
Join our chat, where we discuss the current issues and future improvements.
Join our [discord chat](https://discord.gg/vcExX9T), where we discuss the current issues and future improvements.


@@ -388,7 +388,6 @@ func main() {
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt, syscall.SIGTERM)
<-c
os.Exit(0)
}
func newTendermint(app abci.Application, configFile string) (*nm.Node, error) {
@@ -564,7 +563,6 @@ defer func() {
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt, syscall.SIGTERM)
<-c
os.Exit(0)
```
## 1.5 Getting Up and Running

8
go.mod

@@ -4,7 +4,6 @@ go 1.16
require (
github.com/BurntSushi/toml v0.4.1
github.com/Masterminds/squirrel v1.5.0
github.com/Workiva/go-datastructures v1.0.53
github.com/adlio/schema v1.1.13
github.com/btcsuite/btcd v0.22.0-beta
@@ -19,24 +18,27 @@ require (
github.com/gorilla/websocket v1.4.2
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0
github.com/lib/pq v1.10.2
github.com/lib/pq v1.10.3
github.com/libp2p/go-buffer-pool v0.0.2
github.com/minio/highwayhash v1.0.2
github.com/mroth/weightedrand v0.4.1
github.com/oasisprotocol/curve25519-voi v0.0.0-20210609091139-0a56a4bca00b
github.com/ory/dockertest v3.3.5+incompatible
github.com/prometheus/client_golang v1.11.0
github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0
github.com/rs/cors v1.8.0
github.com/rs/zerolog v1.23.0
github.com/rs/zerolog v1.24.0
github.com/sasha-s/go-deadlock v0.2.1-0.20190427202633-1595213edefa
github.com/snikch/goodman v0.0.0-20171125024755-10e37e294daa
github.com/spf13/cobra v1.2.1
github.com/spf13/pflag v1.0.5 // indirect
github.com/spf13/viper v1.8.1
github.com/stretchr/testify v1.7.0
github.com/tendermint/tm-db v0.6.4
github.com/vektra/mockery/v2 v2.9.0
golang.org/x/crypto v0.0.0-20210513164829-c07d793c2f9a
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
google.golang.org/grpc v1.40.0
gopkg.in/check.v1 v1.0.0-20200902074654-038fdea0a05b // indirect
pgregory.net/rapid v0.4.7

17
go.sum

@@ -63,8 +63,6 @@ github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3Q
github.com/Masterminds/semver v1.5.0/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y=
github.com/Masterminds/sprig v2.15.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o=
github.com/Masterminds/sprig v2.22.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o=
github.com/Masterminds/squirrel v1.5.0 h1:JukIZisrUXadA9pl3rMkjhiamxiB0cXiu+HGp/Y8cY8=
github.com/Masterminds/squirrel v1.5.0/go.mod h1:NNaOrjSoIDfDA40n7sr2tPNZRfjzjA400rg+riTZj10=
github.com/Microsoft/go-winio v0.4.14 h1:+hMXMk01us9KgxGb7ftKQt2Xpf5hH/yky+TDA+qxleU=
github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5 h1:TngWCqHvy9oXAN6lEVMRuU21PR1EtLVZJmdB18Gu3Rw=
@@ -548,10 +546,6 @@ github.com/kunwardeep/paralleltest v1.0.2/go.mod h1:ZPqNm1fVHPllh5LPVujzbVz1JN2G
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/kyoh86/exportloopref v0.1.8 h1:5Ry/at+eFdkX9Vsdw3qU4YkvGtzuVfzT4X7S77LoN/M=
github.com/kyoh86/exportloopref v0.1.8/go.mod h1:1tUcJeiioIs7VWe5gcOObrux3lb66+sBqGZrRkMwPgg=
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 h1:SOEGU9fKiNWd/HOJuq6+3iTQz8KNCLtVX6idSoTLdUw=
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0/go.mod h1:dXGbAdH5GtBTC4WfIxhKZfyBF/HBFgRZSWwZ9g/He9o=
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 h1:P6pPBnrTSX3DEVR4fDembhRWSsG5rVo6hYhAB/ADZrk=
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0/go.mod h1:vmVJ0l/dxyfGW6FmdpVm2joNMFikkuWg0EoCKLGUMNw=
github.com/ldez/gomoddirectives v0.2.2 h1:p9/sXuNFArS2RLc+UpYZSI4KQwGMEDWC/LbtF5OPFVg=
github.com/ldez/gomoddirectives v0.2.2/go.mod h1:cpgBogWITnCfRq2qGoDkKMEVSaarhdBr6g8G04uz6d0=
github.com/ldez/tagliatelle v0.2.0 h1:693V8Bf1NdShJ8eu/s84QySA0J2VWBanVBa2WwXD/Wk=
@@ -562,8 +556,9 @@ github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.8.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq v1.9.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq v1.10.2 h1:AqzbZs4ZoCBp+GtejcpCpcxM3zlSMx29dXbUSeVtJb8=
github.com/lib/pq v1.10.2/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq v1.10.3 h1:v9QZf2Sn6AmjXtQeFpdoq/eaNtYP6IN+7lcrygsIAtg=
github.com/lib/pq v1.10.3/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/libp2p/go-buffer-pool v0.0.2 h1:QNK2iAFa8gjAe1SPz6mHSMuCcjs+X1wlHzeOSqcmlfs=
github.com/libp2p/go-buffer-pool v0.0.2/go.mod h1:MvaB6xw5vOrDl8rYZGLFdKAuk/hRoRZd1Vi32+RXyFM=
github.com/logrusorgru/aurora v0.0.0-20181002194514-a7b3b318ed4e/go.mod h1:7rIyQOR62GCctdiQpZ/zOJlFyk6y+94wXzv6RNZgaR4=
@@ -636,6 +631,8 @@ github.com/moricho/tparallel v0.2.1 h1:95FytivzT6rYzdJLdtfn6m1bfFJylOJK41+lgv/EH
github.com/moricho/tparallel v0.2.1/go.mod h1:fXEIZxG2vdfl0ZF8b42f5a78EhjjD5mX8qUplsoSU4k=
github.com/mozilla/scribe v0.0.0-20180711195314-fb71baf557c1/go.mod h1:FIczTrinKo8VaLxe6PWTPEXRXDIHz2QAwiaBaP5/4a8=
github.com/mozilla/tls-observatory v0.0.0-20210609171429-7bc42856d2e5/go.mod h1:FUqVoUPHSEdDR0MnFM3Dh8AU0pZHLXUD127SAJGER/s=
github.com/mroth/weightedrand v0.4.1 h1:rHcbUBopmi/3x4nnrvwGJBhX9d0vk+KgoLUZeDP6YyI=
github.com/mroth/weightedrand v0.4.1/go.mod h1:3p2SIcC8al1YMzGhAIoXD+r9olo/g/cdJgAD905gyNE=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-proto-validators v0.0.0-20180403085117-0950a7990007/go.mod h1:m2XC9Qq0AlmmVksL6FktJCdTYyLk7V3fKyp0sl1yWQo=
@@ -768,9 +765,10 @@ github.com/rs/cors v1.7.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU=
github.com/rs/cors v1.8.0 h1:P2KMzcFwrPoSjkF1WLRPsp3UMLyql8L4v9hQpVeK5so=
github.com/rs/cors v1.8.0/go.mod h1:EBwu+T5AvHOcXwvZIkQFjUN6s8Czyqw12GL/Y0tUyRM=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/rs/xid v1.3.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/zerolog v1.18.0/go.mod h1:9nvC1axdVrAHcu/s9taAVfBuIdTZLVQmKQyvrUjF5+I=
github.com/rs/zerolog v1.23.0 h1:UskrK+saS9P9Y789yNNulYKdARjPZuS35B8gJF2x60g=
github.com/rs/zerolog v1.23.0/go.mod h1:6c7hFfxPOy7TacJc4Fcdi24/J0NKYGzjG8FWRI916Qo=
github.com/rs/zerolog v1.24.0 h1:76ivFxmVSRs1u2wUwJVg5VZDYQgeH1JpoS6ndgr9Wy8=
github.com/rs/zerolog v1.24.0/go.mod h1:7KHcEGe0QZPOm2IE4Kpb5rTh6n1h2hIgS5OOnu1rUaI=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryancurrah/gomodguard v1.2.3 h1:ww2fsjqocGCAFamzvv/b8IsRduuHHeK2MHTcTxZTQX8=
@@ -1075,6 +1073,7 @@ golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c h1:5KslGYwFpkhGh+Q16bwMP3cOontH8FOep7tGV86Y7SQ=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=

36
inspect/doc.go Normal file

@@ -0,0 +1,36 @@
/*
Package inspect provides a tool for investigating the state of a
failed Tendermint node.
This package provides the Inspector type. The Inspector type runs a subset of the Tendermint
RPC endpoints that are useful for debugging issues with Tendermint consensus.
When a node running the Tendermint consensus engine detects an inconsistent consensus state,
the entire node will crash. The Tendermint consensus engine cannot run in this
inconsistent state so the node will not be able to start up again.
The RPC endpoints provided by the Inspector type allow for a node operator to inspect
the block store and state store to better understand what may have caused the inconsistent state.
The Inspector type's lifecycle is controlled by a context.Context
ins := inspect.NewFromConfig(rpcConfig)
ctx, cancelFunc := context.WithCancel(context.Background())
// Run blocks until the Inspector server is shut down.
go ins.Run(ctx)
...
// calling the cancel function will stop the running inspect server
cancelFunc()
Inspector serves its RPC endpoints on the address configured in the RPC configuration
rpcConfig.ListenAddress = "tcp://127.0.0.1:26657"
ins := inspect.NewFromConfig(rpcConfig)
go ins.Run(ctx)
The list of available RPC endpoints can then be viewed by navigating to
http://127.0.0.1:26657/ in the web browser.
*/
package inspect

149
inspect/inspect.go Normal file

@@ -0,0 +1,149 @@
package inspect
import (
"context"
"errors"
"fmt"
"net"
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/inspect/rpc"
"github.com/tendermint/tendermint/libs/log"
tmstrings "github.com/tendermint/tendermint/libs/strings"
rpccore "github.com/tendermint/tendermint/rpc/core"
"github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/state/indexer"
"github.com/tendermint/tendermint/state/indexer/sink"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
"golang.org/x/sync/errgroup"
)
// Inspector manages an RPC service that exports methods to debug a failed node.
// After a node shuts down due to a consensus failure, it will no longer start
// up, and its state cannot easily be inspected. An Inspector value provides a similar interface
// to the node, using the underlying Tendermint data stores, without bringing up
// any other components. A caller can query the Inspector service to inspect the
// persisted state and debug the failure.
type Inspector struct {
routes rpccore.RoutesMap
config *config.RPCConfig
indexerService *indexer.Service
eventBus *types.EventBus
logger log.Logger
}
// New returns an Inspector that serves RPC on the specified BlockStore and StateStore.
// The Inspector type does not modify the state or block stores.
// The sinks are used to enable block and transaction querying via the RPC server.
// The caller is responsible for starting and stopping the Inspector service.
//
//nolint:lll
func New(cfg *config.RPCConfig, bs state.BlockStore, ss state.Store, es []indexer.EventSink, logger log.Logger) *Inspector {
routes := rpc.Routes(*cfg, ss, bs, es, logger)
eb := types.NewEventBus()
eb.SetLogger(logger.With("module", "events"))
is := indexer.NewIndexerService(es, eb)
is.SetLogger(logger.With("module", "txindex"))
return &Inspector{
routes: routes,
config: cfg,
logger: logger,
eventBus: eb,
indexerService: is,
}
}
// NewFromConfig constructs an Inspector using the values defined in the passed in config.
func NewFromConfig(cfg *config.Config) (*Inspector, error) {
bsDB, err := config.DefaultDBProvider(&config.DBContext{ID: "blockstore", Config: cfg})
if err != nil {
return nil, err
}
bs := store.NewBlockStore(bsDB)
sDB, err := config.DefaultDBProvider(&config.DBContext{ID: "state", Config: cfg})
if err != nil {
return nil, err
}
genDoc, err := types.GenesisDocFromFile(cfg.GenesisFile())
if err != nil {
return nil, err
}
sinks, err := sink.EventSinksFromConfig(cfg, config.DefaultDBProvider, genDoc.ChainID)
if err != nil {
return nil, err
}
logger := log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo, false)
ss := state.NewStore(sDB)
return New(cfg.RPC, bs, ss, sinks, logger), nil
}
// Run starts the Inspector servers and blocks until the servers shut down. The passed
// in context is used to control the lifecycle of the servers.
func (ins *Inspector) Run(ctx context.Context) error {
err := ins.eventBus.Start()
if err != nil {
return fmt.Errorf("error starting event bus: %s", err)
}
defer func() {
err := ins.eventBus.Stop()
if err != nil {
ins.logger.Error("event bus stopped with error", "err", err)
}
}()
err = ins.indexerService.Start()
if err != nil {
return fmt.Errorf("error starting indexer service: %s", err)
}
defer func() {
err := ins.indexerService.Stop()
if err != nil {
ins.logger.Error("indexer service stopped with error", "err", err)
}
}()
return startRPCServers(ctx, ins.config, ins.logger, ins.routes)
}
func startRPCServers(ctx context.Context, cfg *config.RPCConfig, logger log.Logger, routes rpccore.RoutesMap) error {
g, tctx := errgroup.WithContext(ctx)
listenAddrs := tmstrings.SplitAndTrimEmpty(cfg.ListenAddress, ",", " ")
rh := rpc.Handler(cfg, routes, logger)
for _, listenerAddr := range listenAddrs {
server := rpc.Server{
Logger: logger,
Config: cfg,
Handler: rh,
Addr: listenerAddr,
}
if cfg.IsTLSEnabled() {
keyFile := cfg.KeyFile()
certFile := cfg.CertFile()
listenerAddr := listenerAddr
g.Go(func() error {
logger.Info("RPC HTTPS server starting", "address", listenerAddr,
"certfile", certFile, "keyfile", keyFile)
err := server.ListenAndServeTLS(tctx, certFile, keyFile)
if !errors.Is(err, net.ErrClosed) {
return err
}
logger.Info("RPC HTTPS server stopped", "address", listenerAddr)
return nil
})
} else {
listenerAddr := listenerAddr
g.Go(func() error {
logger.Info("RPC HTTP server starting", "address", listenerAddr)
err := server.ListenAndServe(tctx)
if !errors.Is(err, net.ErrClosed) {
return err
}
logger.Info("RPC HTTP server stopped", "address", listenerAddr)
return nil
})
}
}
return g.Wait()
}

583
inspect/inspect_test.go Normal file

@@ -0,0 +1,583 @@
package inspect_test
import (
"context"
"fmt"
"net"
"os"
"strings"
"sync"
"testing"
"time"
"github.com/fortytw2/leaktest"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
abcitypes "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/inspect"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/pubsub/query"
"github.com/tendermint/tendermint/proto/tendermint/state"
httpclient "github.com/tendermint/tendermint/rpc/client/http"
"github.com/tendermint/tendermint/state/indexer"
indexermocks "github.com/tendermint/tendermint/state/indexer/mocks"
statemocks "github.com/tendermint/tendermint/state/mocks"
"github.com/tendermint/tendermint/types"
)
func TestInspectConstructor(t *testing.T) {
cfg := config.ResetTestRoot("test")
t.Cleanup(leaktest.Check(t))
defer func() { _ = os.RemoveAll(cfg.RootDir) }()
t.Run("from config", func(t *testing.T) {
d, err := inspect.NewFromConfig(cfg)
require.NoError(t, err)
require.NotNil(t, d)
})
}
func TestInspectRun(t *testing.T) {
cfg := config.ResetTestRoot("test")
t.Cleanup(leaktest.Check(t))
defer func() { _ = os.RemoveAll(cfg.RootDir) }()
t.Run("from config", func(t *testing.T) {
d, err := inspect.NewFromConfig(cfg)
require.NoError(t, err)
ctx, cancel := context.WithCancel(context.Background())
stoppedWG := &sync.WaitGroup{}
stoppedWG.Add(1)
go func() {
require.NoError(t, d.Run(ctx))
stoppedWG.Done()
}()
cancel()
stoppedWG.Wait()
})
}
func TestBlock(t *testing.T) {
testHeight := int64(1)
testBlock := new(types.Block)
testBlock.Header.Height = testHeight
testBlock.Header.LastCommitHash = []byte("test hash")
stateStoreMock := &statemocks.Store{}
blockStoreMock := &statemocks.BlockStore{}
blockStoreMock.On("Height").Return(testHeight)
blockStoreMock.On("Base").Return(int64(0))
blockStoreMock.On("LoadBlockMeta", testHeight).Return(&types.BlockMeta{})
blockStoreMock.On("LoadBlock", testHeight).Return(testBlock)
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
resultBlock, err := cli.Block(context.Background(), &testHeight)
require.NoError(t, err)
require.Equal(t, testBlock.Height, resultBlock.Block.Height)
require.Equal(t, testBlock.LastCommitHash, resultBlock.Block.LastCommitHash)
cancel()
wg.Wait()
blockStoreMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
}
func TestTxSearch(t *testing.T) {
testHash := []byte("test")
testTx := []byte("tx")
testQuery := fmt.Sprintf("tx.hash='%s'", string(testHash))
testTxResult := &abcitypes.TxResult{
Height: 1,
Index: 100,
Tx: testTx,
}
stateStoreMock := &statemocks.Store{}
blockStoreMock := &statemocks.BlockStore{}
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
eventSinkMock.On("Type").Return(indexer.KV)
eventSinkMock.On("SearchTxEvents", mock.Anything,
mock.MatchedBy(func(q *query.Query) bool { return testQuery == q.String() })).
Return([]*abcitypes.TxResult{testTxResult}, nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
var page = 1
resultTxSearch, err := cli.TxSearch(context.Background(), testQuery, false, &page, &page, "")
require.NoError(t, err)
require.Len(t, resultTxSearch.Txs, 1)
require.Equal(t, types.Tx(testTx), resultTxSearch.Txs[0].Tx)
cancel()
wg.Wait()
eventSinkMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
blockStoreMock.AssertExpectations(t)
}
func TestTx(t *testing.T) {
testHash := []byte("test")
testTx := []byte("tx")
stateStoreMock := &statemocks.Store{}
blockStoreMock := &statemocks.BlockStore{}
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
eventSinkMock.On("Type").Return(indexer.KV)
eventSinkMock.On("GetTxByHash", testHash).Return(&abcitypes.TxResult{
Tx: testTx,
}, nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
res, err := cli.Tx(context.Background(), testHash, false)
require.NoError(t, err)
require.Equal(t, types.Tx(testTx), res.Tx)
cancel()
wg.Wait()
eventSinkMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
blockStoreMock.AssertExpectations(t)
}
func TestConsensusParams(t *testing.T) {
testHeight := int64(1)
testMaxGas := int64(55)
stateStoreMock := &statemocks.Store{}
blockStoreMock := &statemocks.BlockStore{}
blockStoreMock.On("Height").Return(testHeight)
blockStoreMock.On("Base").Return(int64(0))
stateStoreMock.On("LoadConsensusParams", testHeight).Return(types.ConsensusParams{
Block: types.BlockParams{
MaxGas: testMaxGas,
},
}, nil)
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
params, err := cli.ConsensusParams(context.Background(), &testHeight)
require.NoError(t, err)
require.Equal(t, params.ConsensusParams.Block.MaxGas, testMaxGas)
cancel()
wg.Wait()
blockStoreMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
}
func TestBlockResults(t *testing.T) {
testHeight := int64(1)
testGasUsed := int64(100)
stateStoreMock := &statemocks.Store{}
// tmstate "github.com/tendermint/tendermint/proto/tendermint/state"
stateStoreMock.On("LoadABCIResponses", testHeight).Return(&state.ABCIResponses{
DeliverTxs: []*abcitypes.ResponseDeliverTx{
{
GasUsed: testGasUsed,
},
},
EndBlock: &abcitypes.ResponseEndBlock{},
BeginBlock: &abcitypes.ResponseBeginBlock{},
}, nil)
blockStoreMock := &statemocks.BlockStore{}
blockStoreMock.On("Base").Return(int64(0))
blockStoreMock.On("Height").Return(testHeight)
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
res, err := cli.BlockResults(context.Background(), &testHeight)
require.NoError(t, err)
require.Equal(t, res.TotalGasUsed, testGasUsed)
cancel()
wg.Wait()
blockStoreMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
}
func TestCommit(t *testing.T) {
testHeight := int64(1)
testRound := int32(101)
stateStoreMock := &statemocks.Store{}
blockStoreMock := &statemocks.BlockStore{}
blockStoreMock.On("Base").Return(int64(0))
blockStoreMock.On("Height").Return(testHeight)
blockStoreMock.On("LoadBlockMeta", testHeight).Return(&types.BlockMeta{}, nil)
blockStoreMock.On("LoadSeenCommit").Return(&types.Commit{
Height: testHeight,
Round: testRound,
}, nil)
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
res, err := cli.Commit(context.Background(), &testHeight)
require.NoError(t, err)
require.NotNil(t, res)
require.Equal(t, res.SignedHeader.Commit.Round, testRound)
cancel()
wg.Wait()
blockStoreMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
}
func TestBlockByHash(t *testing.T) {
testHeight := int64(1)
testHash := []byte("test hash")
testBlock := new(types.Block)
testBlock.Header.Height = testHeight
testBlock.Header.LastCommitHash = testHash
stateStoreMock := &statemocks.Store{}
blockStoreMock := &statemocks.BlockStore{}
blockStoreMock.On("LoadBlockMeta", testHeight).Return(&types.BlockMeta{
BlockID: types.BlockID{
Hash: testHash,
},
Header: types.Header{
Height: testHeight,
},
}, nil)
blockStoreMock.On("LoadBlockByHash", testHash).Return(testBlock, nil)
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
res, err := cli.BlockByHash(context.Background(), testHash)
require.NoError(t, err)
require.NotNil(t, res)
require.Equal(t, []byte(res.BlockID.Hash), testHash)
cancel()
wg.Wait()
blockStoreMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
}
func TestBlockchain(t *testing.T) {
testHeight := int64(1)
testBlock := new(types.Block)
testBlockHash := []byte("test hash")
testBlock.Header.Height = testHeight
testBlock.Header.LastCommitHash = testBlockHash
stateStoreMock := &statemocks.Store{}
blockStoreMock := &statemocks.BlockStore{}
blockStoreMock.On("Height").Return(testHeight)
blockStoreMock.On("Base").Return(int64(0))
blockStoreMock.On("LoadBlockMeta", testHeight).Return(&types.BlockMeta{
BlockID: types.BlockID{
Hash: testBlockHash,
},
})
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
res, err := cli.BlockchainInfo(context.Background(), 0, 100)
require.NoError(t, err)
require.NotNil(t, res)
require.Equal(t, testBlockHash, []byte(res.BlockMetas[0].BlockID.Hash))
cancel()
wg.Wait()
blockStoreMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
}
func TestValidators(t *testing.T) {
testHeight := int64(1)
testVotingPower := int64(100)
testValidators := types.ValidatorSet{
Validators: []*types.Validator{
{
VotingPower: testVotingPower,
},
},
}
stateStoreMock := &statemocks.Store{}
stateStoreMock.On("LoadValidators", testHeight).Return(&testValidators, nil)
blockStoreMock := &statemocks.BlockStore{}
blockStoreMock.On("Height").Return(testHeight)
blockStoreMock.On("Base").Return(int64(0))
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
testPage := 1
testPerPage := 100
res, err := cli.Validators(context.Background(), &testHeight, &testPage, &testPerPage)
require.NoError(t, err)
require.NotNil(t, res)
require.Equal(t, testVotingPower, res.Validators[0].VotingPower)
cancel()
wg.Wait()
blockStoreMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
}
func TestBlockSearch(t *testing.T) {
testHeight := int64(1)
testBlockHash := []byte("test hash")
testQuery := "block.height = 1"
stateStoreMock := &statemocks.Store{}
blockStoreMock := &statemocks.BlockStore{}
eventSinkMock := &indexermocks.EventSink{}
eventSinkMock.On("Stop").Return(nil)
eventSinkMock.On("Type").Return(indexer.KV)
blockStoreMock.On("LoadBlock", testHeight).Return(&types.Block{
Header: types.Header{
Height: testHeight,
},
}, nil)
blockStoreMock.On("LoadBlockMeta", testHeight).Return(&types.BlockMeta{
BlockID: types.BlockID{
Hash: testBlockHash,
},
})
eventSinkMock.On("SearchBlockEvents", mock.Anything,
mock.MatchedBy(func(q *query.Query) bool { return testQuery == q.String() })).
Return([]int64{testHeight}, nil)
rpcConfig := config.TestRPCConfig()
l := log.TestingLogger()
d := inspect.New(rpcConfig, blockStoreMock, stateStoreMock, []indexer.EventSink{eventSinkMock}, l)
ctx, cancel := context.WithCancel(context.Background())
wg := &sync.WaitGroup{}
wg.Add(1)
startedWG := &sync.WaitGroup{}
startedWG.Add(1)
go func() {
startedWG.Done()
defer wg.Done()
require.NoError(t, d.Run(ctx))
}()
// FIXME: used to induce context switch.
// Determine more deterministic method for prompting a context switch
startedWG.Wait()
requireConnect(t, rpcConfig.ListenAddress, 20)
cli, err := httpclient.New(rpcConfig.ListenAddress)
require.NoError(t, err)
testPage := 1
testPerPage := 100
testOrderBy := "desc"
res, err := cli.BlockSearch(context.Background(), testQuery, &testPage, &testPerPage, testOrderBy)
require.NoError(t, err)
require.NotNil(t, res)
require.Equal(t, testBlockHash, []byte(res.Blocks[0].BlockID.Hash))
cancel()
wg.Wait()
blockStoreMock.AssertExpectations(t)
stateStoreMock.AssertExpectations(t)
}
func requireConnect(t testing.TB, addr string, retries int) {
parts := strings.SplitN(addr, "://", 2)
if len(parts) != 2 {
t.Fatalf("malformed address to dial: %s", addr)
}
var err error
for i := 0; i < retries; i++ {
var conn net.Conn
conn, err = net.Dial(parts[0], parts[1])
if err == nil {
conn.Close()
return
}
// FIXME attempt to yield and let the other goroutine continue execution.
time.Sleep(time.Microsecond * 100)
}
t.Fatalf("unable to connect to server %s after %d tries: %s", addr, retries, err)
}

143
inspect/rpc/rpc.go Normal file

@@ -0,0 +1,143 @@
package rpc
import (
"context"
"net/http"
"time"
"github.com/rs/cors"
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/consensus"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/pubsub"
"github.com/tendermint/tendermint/rpc/core"
"github.com/tendermint/tendermint/rpc/jsonrpc/server"
"github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/state/indexer"
"github.com/tendermint/tendermint/types"
)
// Server defines parameters for running an Inspector rpc server.
type Server struct {
Addr string // TCP address to listen on, ":http" if empty
Handler http.Handler
Logger log.Logger
Config *config.RPCConfig
}
// Routes returns the set of routes used by the Inspector server.
//
//nolint: lll
func Routes(cfg config.RPCConfig, s state.Store, bs state.BlockStore, es []indexer.EventSink, logger log.Logger) core.RoutesMap {
env := &core.Environment{
Config: cfg,
EventSinks: es,
StateStore: s,
BlockStore: bs,
ConsensusReactor: waitSyncCheckerImpl{},
Logger: logger,
}
return core.RoutesMap{
"blockchain": server.NewRPCFunc(env.BlockchainInfo, "minHeight,maxHeight", true),
"consensus_params": server.NewRPCFunc(env.ConsensusParams, "height", true),
"block": server.NewRPCFunc(env.Block, "height", true),
"block_by_hash": server.NewRPCFunc(env.BlockByHash, "hash", true),
"block_results": server.NewRPCFunc(env.BlockResults, "height", true),
"commit": server.NewRPCFunc(env.Commit, "height", true),
"validators": server.NewRPCFunc(env.Validators, "height,page,per_page", true),
"tx": server.NewRPCFunc(env.Tx, "hash,prove", true),
"tx_search": server.NewRPCFunc(env.TxSearch, "query,prove,page,per_page,order_by", false),
"block_search": server.NewRPCFunc(env.BlockSearch, "query,page,per_page,order_by", false),
}
}
// Handler returns the http.Handler configured for use with an Inspector server. It
// registers the RPC routes on the handler, along with the websocket handler and,
// if enabled by the configuration options, the CORS handler.
func Handler(rpcConfig *config.RPCConfig, routes core.RoutesMap, logger log.Logger) http.Handler {
mux := http.NewServeMux()
wmLogger := logger.With("protocol", "websocket")
var eventBus types.EventBusSubscriber
websocketDisconnectFn := func(remoteAddr string) {
err := eventBus.UnsubscribeAll(context.Background(), remoteAddr)
if err != nil && err != pubsub.ErrSubscriptionNotFound {
wmLogger.Error("Failed to unsubscribe addr from events", "addr", remoteAddr, "err", err)
}
}
wm := server.NewWebsocketManager(routes,
server.OnDisconnect(websocketDisconnectFn),
server.ReadLimit(rpcConfig.MaxBodyBytes))
wm.SetLogger(wmLogger)
mux.HandleFunc("/websocket", wm.WebsocketHandler)
server.RegisterRPCFuncs(mux, routes, logger)
var rootHandler http.Handler = mux
if rpcConfig.IsCorsEnabled() {
rootHandler = addCORSHandler(rpcConfig, mux)
}
return rootHandler
}
func addCORSHandler(rpcConfig *config.RPCConfig, h http.Handler) http.Handler {
corsMiddleware := cors.New(cors.Options{
AllowedOrigins: rpcConfig.CORSAllowedOrigins,
AllowedMethods: rpcConfig.CORSAllowedMethods,
AllowedHeaders: rpcConfig.CORSAllowedHeaders,
})
h = corsMiddleware.Handler(h)
return h
}
type waitSyncCheckerImpl struct{}
func (waitSyncCheckerImpl) WaitSync() bool {
return false
}
func (waitSyncCheckerImpl) GetPeerState(peerID types.NodeID) (*consensus.PeerState, bool) {
return nil, false
}
// ListenAndServe listens on the address specified in srv.Addr and handles any
// incoming requests over HTTP using the Inspector rpc handler specified on the server.
func (srv *Server) ListenAndServe(ctx context.Context) error {
listener, err := server.Listen(srv.Addr, srv.Config.MaxOpenConnections)
if err != nil {
return err
}
go func() {
<-ctx.Done()
listener.Close()
}()
return server.Serve(listener, srv.Handler, srv.Logger, serverRPCConfig(srv.Config))
}
// ListenAndServeTLS listens on the address specified in srv.Addr. ListenAndServeTLS handles
// incoming requests over HTTPS using the Inspector rpc handler specified on the server.
func (srv *Server) ListenAndServeTLS(ctx context.Context, certFile, keyFile string) error {
listener, err := server.Listen(srv.Addr, srv.Config.MaxOpenConnections)
if err != nil {
return err
}
go func() {
<-ctx.Done()
listener.Close()
}()
return server.ServeTLS(listener, srv.Handler, certFile, keyFile, srv.Logger, serverRPCConfig(srv.Config))
}
func serverRPCConfig(r *config.RPCConfig) *server.Config {
cfg := server.DefaultConfig()
cfg.MaxBodyBytes = r.MaxBodyBytes
cfg.MaxHeaderBytes = r.MaxHeaderBytes
// If necessary, adjust the global WriteTimeout to ensure it's greater than
// TimeoutBroadcastTxCommit.
// See https://github.com/tendermint/tendermint/issues/3435
if cfg.WriteTimeout <= r.TimeoutBroadcastTxCommit {
cfg.WriteTimeout = r.TimeoutBroadcastTxCommit + 1*time.Second
}
return cfg
}
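Routes, Handler, and Server compose in a straightforward pipeline: build the route map, wrap it in an http.Handler, and serve until the context is canceled. A minimal sketch of the wiring, assuming code in the same rpc package (runInspectorRPC is illustrative, not part of the diff):

func runInspectorRPC(ctx context.Context, cfg *config.RPCConfig,
	ss state.Store, bs state.BlockStore, es []indexer.EventSink, logger log.Logger) error {
	// build the RPC route map backed by the given stores and event sinks
	routes := Routes(*cfg, ss, bs, es, logger)
	// register the routes, the websocket handler, and (if enabled) CORS
	handler := Handler(cfg, routes, logger)
	srv := &Server{
		Addr:    cfg.ListenAddress,
		Handler: handler,
		Logger:  logger,
		Config:  cfg,
	}
	// the listener is closed when ctx is canceled
	return srv.ListenAndServe(ctx)
}

Note that serverRPCConfig is applied inside ListenAndServe, so the WriteTimeout adjustment above requires no extra wiring from the caller.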

View File

@@ -1096,7 +1096,7 @@ func (r *Reactor) handleDataMessage(envelope p2p.Envelope, msgI Message) error {
}
if r.WaitSync() {
logger.Info("ignoring message received during sync", "msg", msgI)
logger.Info("ignoring message received during sync", "msg", fmt.Sprintf("%T", msgI))
return nil
}

View File

@@ -916,8 +916,8 @@ func (cs *State) handleMsg(mi msgInfo) {
"height", cs.Height,
"round", cs.Round,
"peer", peerID,
"msg_type", fmt.Sprintf("%T", msg),
"err", err,
"msg", msg,
)
}
}

View File

@@ -608,7 +608,7 @@ func prefixToBytes(prefix int64) []byte {
}
func keyCommitted(evidence types.Evidence) []byte {
var height int64 = evidence.Height()
height := evidence.Height()
key, err := orderedcode.Append(nil, prefixCommitted, height, string(evidence.Hash()))
if err != nil {
panic(err)
@@ -617,7 +617,7 @@ func keyCommitted(evidence types.Evidence) []byte {
}
func keyPending(evidence types.Evidence) []byte {
var height int64 = evidence.Height()
height := evidence.Height()
key, err := orderedcode.Append(nil, prefixPending, height, string(evidence.Hash()))
if err != nil {
panic(err)

View File

@@ -14,7 +14,6 @@ import (
"github.com/tendermint/tendermint/internal/mempool"
"github.com/tendermint/tendermint/libs/log"
tmmath "github.com/tendermint/tendermint/libs/math"
pubmempool "github.com/tendermint/tendermint/pkg/mempool"
"github.com/tendermint/tendermint/proxy"
"github.com/tendermint/tendermint/types"
)
@@ -217,7 +216,7 @@ func (mem *CListMempool) CheckTx(
}
if txSize > mem.config.MaxTxBytes {
return pubmempool.ErrTxTooLarge{
return types.ErrTxTooLarge{
Max: mem.config.MaxTxBytes,
Actual: txSize,
}
@@ -225,7 +224,7 @@ func (mem *CListMempool) CheckTx(
if mem.preCheck != nil {
if err := mem.preCheck(tx); err != nil {
return pubmempool.ErrPreCheck{
return types.ErrPreCheck{
Reason: err,
}
}
@@ -248,7 +247,7 @@ func (mem *CListMempool) CheckTx(
// its non-trivial since invalid txs can become valid,
// but they can spam the same tx with little cost to them atm.
if loaded {
return pubmempool.ErrTxInCache
return types.ErrTxInCache
}
}
@@ -364,7 +363,7 @@ func (mem *CListMempool) isFull(txSize int) error {
)
if memSize >= mem.config.Size || int64(txSize)+txsBytes > mem.config.MaxTxsBytes {
return pubmempool.ErrMempoolIsFull{
return types.ErrMempoolIsFull{
NumTxs: memSize,
MaxTxs: mem.config.Size,
TxsBytes: txsBytes,

View File

@@ -23,7 +23,6 @@ import (
"github.com/tendermint/tendermint/libs/log"
tmrand "github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/libs/service"
pubmempool "github.com/tendermint/tendermint/pkg/mempool"
"github.com/tendermint/tendermint/proxy"
"github.com/tendermint/tendermint/types"
)
@@ -82,7 +81,7 @@ func checkTxs(t *testing.T, mp mempool.Mempool, count int, peerID uint16) types.
// Skip invalid txs.
// TestMempoolFilters will fail otherwise. It asserts a number of txs
// returned.
if pubmempool.IsPreCheckError(err) {
if types.IsPreCheckError(err) {
continue
}
t.Fatalf("CheckTx failed: %v while checking #%d tx", err, i)
@@ -455,7 +454,7 @@ func TestMempool_CheckTxChecksTxSize(t *testing.T) {
if !testCase.err {
require.NoError(t, err, caseString)
} else {
require.Equal(t, err, pubmempool.ErrTxTooLarge{
require.Equal(t, err, types.ErrTxTooLarge{
Max: maxTxSize,
Actual: testCase.len,
}, caseString)
@@ -503,7 +502,7 @@ func TestMempoolTxsBytes(t *testing.T) {
err = mp.CheckTx(context.Background(), []byte{0x05}, nil, mempool.TxInfo{})
if assert.Error(t, err) {
assert.IsType(t, pubmempool.ErrMempoolIsFull{}, err)
assert.IsType(t, types.ErrMempoolIsFull{}, err)
}
// 6. zero after tx is rechecked and removed due to not being valid anymore
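For downstream callers, the practical effect of relocating these errors is only an import change: the sentinel and typed mempool errors now live in the types package. A hedged sketch of caller-side classification (the error names are taken from the hunks above; handleCheckTx itself is hypothetical):

func handleCheckTx(ctx context.Context, mp mempool.Mempool, tx types.Tx) error {
	err := mp.CheckTx(ctx, tx, nil, mempool.TxInfo{})
	switch {
	case err == nil:
		return nil
	case errors.Is(err, types.ErrTxInCache):
		return nil // duplicate submission; safe to ignore
	case types.IsPreCheckError(err):
		return nil // rejected by the app's pre-check filter, not a node error
	default:
		return err // e.g. types.ErrTxTooLarge or types.ErrMempoolIsFull
	}
}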

View File

@@ -14,7 +14,6 @@ import (
"github.com/tendermint/tendermint/internal/mempool"
"github.com/tendermint/tendermint/libs/log"
tmmath "github.com/tendermint/tendermint/libs/math"
pubmempool "github.com/tendermint/tendermint/pkg/mempool"
"github.com/tendermint/tendermint/proxy"
"github.com/tendermint/tendermint/types"
)
@@ -239,7 +238,7 @@ func (txmp *TxMempool) CheckTx(
txSize := len(tx)
if txSize > txmp.config.MaxTxBytes {
return pubmempool.ErrTxTooLarge{
return types.ErrTxTooLarge{
Max: txmp.config.MaxTxBytes,
Actual: txSize,
}
@@ -247,7 +246,7 @@ func (txmp *TxMempool) CheckTx(
if txmp.preCheck != nil {
if err := txmp.preCheck(tx); err != nil {
return pubmempool.ErrPreCheck{
return types.ErrPreCheck{
Reason: err,
}
}
@@ -267,7 +266,7 @@ func (txmp *TxMempool) CheckTx(
if wtx != nil && ok {
// We already have the transaction stored and we've already seen this
// transaction from txInfo.SenderID.
return pubmempool.ErrTxInCache
return types.ErrTxInCache
}
txmp.logger.Debug("tx exists already in cache", "tx_hash", tx.Hash())
@@ -728,7 +727,7 @@ func (txmp *TxMempool) canAddTx(wtx *WrappedTx) error {
)
if numTxs >= txmp.config.Size || int64(wtx.Size())+sizeBytes > txmp.config.MaxTxsBytes {
return pubmempool.ErrMempoolIsFull{
return types.ErrMempoolIsFull{
NumTxs: numTxs,
MaxTxs: txmp.config.Size,
TxsBytes: sizeBytes,

View File

@@ -274,8 +274,10 @@ loop:
}
func mockLBResp(t *testing.T, peer types.NodeID, height int64, time time.Time) lightBlockResponse {
vals, pv := factory.RandValidatorSet(3, 10)
_, _, lb := mockLB(t, height, time, factory.MakeBlockID(), vals, pv)
return lightBlockResponse{
block: mockLB(t, height, time, factory.MakeBlockID()),
block: lb,
peer: peer,
}
}

View File

@@ -5,7 +5,6 @@ import (
"errors"
"fmt"
"sync"
"time"
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/light/provider"
@@ -17,169 +16,79 @@ import (
var (
errNoConnectedPeers = errors.New("no available peers to dispatch request to")
errUnsolicitedResponse = errors.New("unsolicited light block response")
errNoResponse = errors.New("peer failed to respond within timeout")
errPeerAlreadyBusy = errors.New("peer is already processing a request")
errDisconnected = errors.New("dispatcher has been disconnected")
errDisconnected = errors.New("dispatcher disconnected")
)
// dispatcher keeps a list of peers and allows concurrent requests for light
// blocks. NOTE: It is not the responsibility of the dispatcher to verify the
// light blocks.
type dispatcher struct {
availablePeers *peerlist
requestCh chan<- p2p.Envelope
timeout time.Duration
// A Dispatcher multiplexes concurrent light block requests across multiple peers.
// Only one request per peer can be in flight at a time; subsequent concurrent
// requests to the same peer will report an error from the LightBlock method.
// NOTE: It is not the responsibility of the dispatcher to verify the light blocks.
type Dispatcher struct {
// the channel on which to send light block requests
requestCh chan<- p2p.Envelope
closeCh chan struct{}
mtx sync.Mutex
calls map[types.NodeID]chan *types.LightBlock
running bool
mtx sync.Mutex
// all pending calls that have been dispatched and are awaiting an answer
calls map[types.NodeID]chan *types.LightBlock
}
func newDispatcher(requestCh chan<- p2p.Envelope, timeout time.Duration) *dispatcher {
return &dispatcher{
availablePeers: newPeerList(),
timeout: timeout,
requestCh: requestCh,
calls: make(map[types.NodeID]chan *types.LightBlock),
running: true,
func NewDispatcher(requestCh chan<- p2p.Envelope) *Dispatcher {
return &Dispatcher{
requestCh: requestCh,
closeCh: make(chan struct{}),
calls: make(map[types.NodeID]chan *types.LightBlock),
}
}
// LightBlock uses the request channel to fetch a light block from the next peer
// in a list, tracks the call and waits for the reactor to pass along the response
func (d *dispatcher) LightBlock(ctx context.Context, height int64) (*types.LightBlock, types.NodeID, error) {
d.mtx.Lock()
// check to see that the dispatcher is connected to at least one peer
if d.availablePeers.Len() == 0 && len(d.calls) == 0 {
d.mtx.Unlock()
return nil, "", errNoConnectedPeers
}
d.mtx.Unlock()
// fetch the next peer id in the list and request a light block from that
// peer
peer := d.availablePeers.Pop(ctx)
lb, err := d.lightBlock(ctx, height, peer)
return lb, peer, err
}
// Providers turns the dispatcher into a set of providers (per peer) which can
// be used by a light client
func (d *dispatcher) Providers(chainID string, timeout time.Duration) []provider.Provider {
d.mtx.Lock()
defer d.mtx.Unlock()
providers := make([]provider.Provider, d.availablePeers.Len())
peers := d.availablePeers.Peers()
for index, peer := range peers {
providers[index] = &blockProvider{
peer: peer,
dispatcher: d,
chainID: chainID,
timeout: timeout,
}
}
return providers
}
func (d *dispatcher) stop() {
d.mtx.Lock()
defer d.mtx.Unlock()
d.running = false
for peer, call := range d.calls {
close(call)
delete(d.calls, peer)
}
}
func (d *dispatcher) start() {
d.mtx.Lock()
defer d.mtx.Unlock()
d.running = true
}
func (d *dispatcher) lightBlock(ctx context.Context, height int64, peer types.NodeID) (*types.LightBlock, error) {
// LightBlock uses the request channel to fetch a light block from a given peer,
// tracking the call and waiting for the reactor to pass back the response. A nil
// LightBlock response is used to signal that the peer doesn't have the requested LightBlock.
func (d *Dispatcher) LightBlock(ctx context.Context, height int64, peer types.NodeID) (*types.LightBlock, error) {
// dispatch the request to the peer
callCh, err := d.dispatch(peer, height)
if err != nil {
return nil, err
}
// clean up the call after a response is returned
defer func() {
d.mtx.Lock()
defer d.mtx.Unlock()
if call, ok := d.calls[peer]; ok {
delete(d.calls, peer)
close(call)
}
}()
// wait for a response, cancel or timeout
select {
case resp := <-callCh:
return resp, nil
case <-ctx.Done():
d.release(peer)
return nil, nil
return nil, ctx.Err()
case <-time.After(d.timeout):
d.release(peer)
return nil, errNoResponse
}
}
// respond allows the underlying process which receives requests on the
// requestCh to respond with the respective light block
func (d *dispatcher) respond(lb *proto.LightBlock, peer types.NodeID) error {
d.mtx.Lock()
defer d.mtx.Unlock()
// check that the response came from a request
answerCh, ok := d.calls[peer]
if !ok {
// this can also happen if the response came in after the timeout
return errUnsolicitedResponse
}
// release the peer after returning the response
defer d.availablePeers.Append(peer)
defer close(answerCh)
defer delete(d.calls, peer)
if lb == nil {
answerCh <- nil
return nil
}
block, err := types.LightBlockFromProto(lb)
if err != nil {
fmt.Println("error with converting light block")
return err
}
answerCh <- block
return nil
}
func (d *dispatcher) addPeer(peer types.NodeID) {
d.availablePeers.Append(peer)
}
func (d *dispatcher) removePeer(peer types.NodeID) {
d.mtx.Lock()
defer d.mtx.Unlock()
if _, ok := d.calls[peer]; ok {
delete(d.calls, peer)
} else {
d.availablePeers.Remove(peer)
case <-d.closeCh:
return nil, errDisconnected
}
}
// dispatch takes a peer and allocates it a channel so long as it's not already
// busy and the dispatcher hasn't been closed. It then dispatches the request.
func (d *dispatcher) dispatch(peer types.NodeID, height int64) (chan *types.LightBlock, error) {
func (d *Dispatcher) dispatch(peer types.NodeID, height int64) (chan *types.LightBlock, error) {
d.mtx.Lock()
defer d.mtx.Unlock()
ch := make(chan *types.LightBlock, 1)
// check if the dispatcher is running or not
if !d.running {
close(ch)
return ch, errDisconnected
select {
case <-d.closeCh:
return nil, errDisconnected
default:
}
// this should happen only if we add the same peer twice (somehow)
ch := make(chan *types.LightBlock, 1)
// check if a request for the same peer has already been made
if _, ok := d.calls[peer]; ok {
close(ch)
return ch, errPeerAlreadyBusy
@@ -193,47 +102,107 @@ func (d *dispatcher) dispatch(peer types.NodeID, height int64) (chan *types.Ligh
Height: uint64(height),
},
}
return ch, nil
}
// release appends the peer back to the list and deletes the allocated call so
// that a new call can be made to that peer
func (d *dispatcher) release(peer types.NodeID) {
// Respond allows the underlying process which receives requests on the
// requestCh to respond with the respective light block. A nil response is used to
// represent that the receiver of the request does not have a light block at that height.
func (d *Dispatcher) Respond(lb *proto.LightBlock, peer types.NodeID) error {
d.mtx.Lock()
defer d.mtx.Unlock()
if call, ok := d.calls[peer]; ok {
close(call)
delete(d.calls, peer)
// check that the response came from a request
answerCh, ok := d.calls[peer]
if !ok {
// this can also happen if the response came in after the timeout
return errUnsolicitedResponse
}
d.availablePeers.Append(peer)
// If lb is nil we take that to mean that the peer didn't have the requested light
// block and thus pass on the nil to the caller.
if lb == nil {
answerCh <- nil
return nil
}
block, err := types.LightBlockFromProto(lb)
if err != nil {
return err
}
answerCh <- block
return nil
}
// Close shuts down the dispatcher and cancels any pending calls awaiting responses.
// Callers awaiting responses that have not yet arrived are delivered a nil block.
func (d *Dispatcher) Close() {
d.mtx.Lock()
defer d.mtx.Unlock()
close(d.closeCh)
for peer, call := range d.calls {
delete(d.calls, peer)
close(call)
}
}
func (d *Dispatcher) Done() <-chan struct{} {
return d.closeCh
}
//----------------------------------------------------------------
// blockProvider is a p2p based light provider which uses a dispatcher connected
// BlockProvider is a p2p-based light provider which uses a dispatcher connected
// to the state sync reactor to serve light blocks to the light client
//
// TODO: This should probably be moved over to the light package but as we're
// not yet officially supporting p2p light clients we'll leave this here for now.
type blockProvider struct {
//
// NOTE: BlockProvider will return an error with concurrent calls. However, we don't
// need a mutex because a light client (and the backfill process) will never call a
// method more than once at the same time.
type BlockProvider struct {
peer types.NodeID
chainID string
timeout time.Duration
dispatcher *dispatcher
dispatcher *Dispatcher
}
func (p *blockProvider) LightBlock(ctx context.Context, height int64) (*types.LightBlock, error) {
// FIXME: The provider doesn't know if the dispatcher is still connected to
// that peer. If the connection is dropped for whatever reason the
// dispatcher needs to be able to relay this back to the provider so it can
// return ErrConnectionClosed instead of ErrNoResponse
ctx, cancel := context.WithTimeout(ctx, p.timeout)
defer cancel()
lb, _ := p.dispatcher.lightBlock(ctx, height, p.peer)
if lb == nil {
return nil, provider.ErrNoResponse
// NewBlockProvider creates a block provider which implements the light client Provider interface.
func NewBlockProvider(peer types.NodeID, chainID string, dispatcher *Dispatcher) *BlockProvider {
return &BlockProvider{
peer: peer,
chainID: chainID,
dispatcher: dispatcher,
}
}
// LightBlock fetches a light block from the peer at a specified height, returning either a
// light block or an appropriate error.
func (p *BlockProvider) LightBlock(ctx context.Context, height int64) (*types.LightBlock, error) {
lb, err := p.dispatcher.LightBlock(ctx, height, p.peer)
switch err {
case nil:
if lb == nil {
return nil, provider.ErrLightBlockNotFound
}
case context.DeadlineExceeded, context.Canceled:
return nil, err
case errPeerAlreadyBusy:
return nil, provider.ErrLightBlockNotFound
default:
return nil, provider.ErrUnreliableProvider{Reason: err.Error()}
}
// check that the height requested is the same one returned
if lb.Height != height {
return nil, provider.ErrBadLightBlock{
Reason: fmt.Errorf("expected height %d, got height %d", height, lb.Height),
}
}
// perform basic validation
if err := lb.ValidateBasic(p.chainID); err != nil {
return nil, provider.ErrBadLightBlock{Reason: err}
}
@@ -245,37 +214,37 @@ func (p *blockProvider) LightBlock(ctx context.Context, height int64) (*types.Li
// attacks. This is a no op as there currently isn't a way to wire this up to
// the evidence reactor (we should endeavor to do this in the future but for now
// it's not critical for backwards verification)
func (p *blockProvider) ReportEvidence(ctx context.Context, ev types.Evidence) error {
func (p *BlockProvider) ReportEvidence(ctx context.Context, ev types.Evidence) error {
return nil
}
// String implements stringer interface
func (p *blockProvider) String() string { return string(p.peer) }
func (p *BlockProvider) String() string { return string(p.peer) }
//----------------------------------------------------------------
// peerList is a rolling list of peers. This is used to distribute the load of
// retrieving blocks over all the peers the reactor is connected to.
type peerlist struct {
type peerList struct {
mtx sync.Mutex
peers []types.NodeID
waiting []chan types.NodeID
}
func newPeerList() *peerlist {
return &peerlist{
func newPeerList() *peerList {
return &peerList{
peers: make([]types.NodeID, 0),
waiting: make([]chan types.NodeID, 0),
}
}
func (l *peerlist) Len() int {
func (l *peerList) Len() int {
l.mtx.Lock()
defer l.mtx.Unlock()
return len(l.peers)
}
func (l *peerlist) Pop(ctx context.Context) types.NodeID {
func (l *peerList) Pop(ctx context.Context) types.NodeID {
l.mtx.Lock()
if len(l.peers) == 0 {
// if we don't have any peers in the list we block until a peer is
@@ -299,7 +268,7 @@ func (l *peerlist) Pop(ctx context.Context) types.NodeID {
return peer
}
func (l *peerlist) Append(peer types.NodeID) {
func (l *peerList) Append(peer types.NodeID) {
l.mtx.Lock()
defer l.mtx.Unlock()
if len(l.waiting) > 0 {
@@ -312,7 +281,7 @@ func (l *peerlist) Append(peer types.NodeID) {
}
}
func (l *peerlist) Remove(peer types.NodeID) {
func (l *peerList) Remove(peer types.NodeID) {
l.mtx.Lock()
defer l.mtx.Unlock()
for i, p := range l.peers {
@@ -323,7 +292,7 @@ func (l *peerlist) Remove(peer types.NodeID) {
}
}
func (l *peerlist) Peers() []types.NodeID {
func (l *peerList) All() []types.NodeID {
l.mtx.Lock()
defer l.mtx.Unlock()
return l.peers
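The dispatcher only writes request envelopes to requestCh; some other component must drain that channel and eventually call Respond, otherwise every LightBlock call blocks until its context ends or Close is called. A rough sketch of that contract, assuming ssproto.LightBlockRequest is the request message the dispatcher sends and that proto is the light block proto alias used in this file (serveDispatcher is hypothetical, mimicking what the state sync reactor does for real peers):

func serveDispatcher(d *Dispatcher, requests <-chan p2p.Envelope, fetch func(height uint64) *proto.LightBlock) {
	for envelope := range requests {
		req, ok := envelope.Message.(*ssproto.LightBlockRequest)
		if !ok {
			continue
		}
		// a nil light block tells the waiting caller that this peer does
		// not have the requested height
		if err := d.Respond(fetch(req.Height), envelope.To); err != nil {
			continue // e.g. errUnsolicitedResponse after a canceled call
		}
	}
}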

View File

@@ -13,145 +13,102 @@ import (
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/internal/test/factory"
ssproto "github.com/tendermint/tendermint/proto/tendermint/statesync"
"github.com/tendermint/tendermint/types"
)
func TestDispatcherBasic(t *testing.T) {
t.Cleanup(leaktest.Check(t))
const numPeers = 5
ch := make(chan p2p.Envelope, 100)
closeCh := make(chan struct{})
defer close(closeCh)
d := newDispatcher(ch, 1*time.Second)
d := NewDispatcher(ch)
go handleRequests(t, d, ch, closeCh)
peers := createPeerSet(5)
for _, peer := range peers {
d.addPeer(peer)
}
peers := createPeerSet(numPeers)
wg := sync.WaitGroup{}
// make a bunch of async requests and require that the correct responses are
// given
for i := 1; i < 10; i++ {
for i := 0; i < numPeers; i++ {
wg.Add(1)
go func(height int64) {
defer wg.Done()
lb, peer, err := d.LightBlock(context.Background(), height)
lb, err := d.LightBlock(context.Background(), height, peers[height-1])
require.NoError(t, err)
require.NotNil(t, lb)
require.Equal(t, lb.Height, height)
require.Contains(t, peers, peer)
}(int64(i))
}(int64(i + 1))
}
wg.Wait()
// assert that all calls were responded to
assert.Empty(t, d.calls)
}
func TestDispatcherReturnsNoBlock(t *testing.T) {
t.Cleanup(leaktest.Check(t))
ch := make(chan p2p.Envelope, 100)
d := newDispatcher(ch, 1*time.Second)
peerFromSet := createPeerSet(1)[0]
d.addPeer(peerFromSet)
d := NewDispatcher(ch)
doneCh := make(chan struct{})
peer := factory.NodeID("a")
go func() {
<-ch
require.NoError(t, d.respond(nil, peerFromSet))
require.NoError(t, d.Respond(nil, peer))
close(doneCh)
}()
lb, peerResult, err := d.LightBlock(context.Background(), 1)
lb, err := d.LightBlock(context.Background(), 1, peer)
<-doneCh
require.Nil(t, lb)
require.Nil(t, err)
require.Equal(t, peerFromSet, peerResult)
}
func TestDispatcherErrorsWhenNoPeers(t *testing.T) {
func TestDispatcherTimeOutWaitingOnLightBlock(t *testing.T) {
t.Cleanup(leaktest.Check(t))
ch := make(chan p2p.Envelope, 100)
d := newDispatcher(ch, 1*time.Second)
d := NewDispatcher(ch)
peer := factory.NodeID("a")
lb, peerResult, err := d.LightBlock(context.Background(), 1)
ctx, cancelFunc := context.WithTimeout(context.Background(), 10*time.Millisecond)
defer cancelFunc()
lb, err := d.LightBlock(ctx, 1, peer)
require.Error(t, err)
require.Equal(t, context.DeadlineExceeded, err)
require.Nil(t, lb)
require.Empty(t, peerResult)
require.Equal(t, errNoConnectedPeers, err)
}
func TestDispatcherReturnsBlockOncePeerAvailable(t *testing.T) {
t.Cleanup(leaktest.Check(t))
dispatcherRequestCh := make(chan p2p.Envelope, 100)
d := newDispatcher(dispatcherRequestCh, 1*time.Second)
peerFromSet := createPeerSet(1)[0]
d.addPeer(peerFromSet)
ctx := context.Background()
wrapped, cancelFunc := context.WithCancel(ctx)
doneCh := make(chan struct{})
go func() {
lb, peerResult, err := d.LightBlock(wrapped, 1)
require.Nil(t, lb)
require.Equal(t, peerFromSet, peerResult)
require.Nil(t, err)
// calls to dispatcher.Lightblock write into the dispatcher's requestCh.
// we read from the requestCh here to unblock the requestCh for future
// calls.
<-dispatcherRequestCh
close(doneCh)
}()
cancelFunc()
<-doneCh
go func() {
<-dispatcherRequestCh
lb := &types.LightBlock{}
asProto, err := lb.ToProto()
require.Nil(t, err)
err = d.respond(asProto, peerFromSet)
require.Nil(t, err)
}()
lb, peerResult, err := d.LightBlock(context.Background(), 1)
require.NotNil(t, lb)
require.Equal(t, peerFromSet, peerResult)
require.Nil(t, err)
}
func TestDispatcherProviders(t *testing.T) {
t.Cleanup(leaktest.Check(t))
ch := make(chan p2p.Envelope, 100)
chainID := "state-sync-test"
chainID := "test-chain"
closeCh := make(chan struct{})
defer close(closeCh)
d := newDispatcher(ch, 1*time.Second)
d := NewDispatcher(ch)
go handleRequests(t, d, ch, closeCh)
peers := createPeerSet(5)
for _, peer := range peers {
d.addPeer(peer)
providers := make([]*BlockProvider, len(peers))
for idx, peer := range peers {
providers[idx] = NewBlockProvider(peer, chainID, d)
}
providers := d.Providers(chainID, 5*time.Second)
require.Len(t, providers, 5)
for i, p := range providers {
bp, ok := p.(*blockProvider)
require.True(t, ok)
assert.Equal(t, bp.String(), string(peers[i]))
assert.Equal(t, string(peers[i]), p.String(), i)
lb, err := p.LightBlock(context.Background(), 10)
assert.Error(t, err)
assert.Nil(t, lb)
assert.NoError(t, err)
assert.NotNil(t, lb)
}
}
@@ -166,7 +123,7 @@ func TestPeerListBasic(t *testing.T) {
peerList.Append(peer)
}
for idx, peer := range peerList.Peers() {
for idx, peer := range peerList.All() {
assert.Equal(t, peer, peerSet[idx])
}
@@ -178,13 +135,22 @@ func TestPeerListBasic(t *testing.T) {
}
assert.Equal(t, half, peerList.Len())
// removing a peer that doesn't exist should not change the list
peerList.Remove(types.NodeID("lp"))
assert.Equal(t, half, peerList.Len())
// removing a peer that exists should decrease the list size by one
peerList.Remove(peerSet[half])
half++
assert.Equal(t, peerSet[half], peerList.Pop(ctx))
assert.Equal(t, numPeers-half-1, peerList.Len())
// popping the next peer should work as expected
assert.Equal(t, peerSet[half+1], peerList.Pop(ctx))
assert.Equal(t, numPeers-half-2, peerList.Len())
// append the two peers back
peerList.Append(peerSet[half])
peerList.Append(peerSet[half+1])
assert.Equal(t, half, peerList.Len())
}
func TestPeerListBlocksWhenEmpty(t *testing.T) {
@@ -277,9 +243,28 @@ func TestPeerListConcurrent(t *testing.T) {
}
}
func TestPeerListRemove(t *testing.T) {
peerList := newPeerList()
numPeers := 10
peerSet := createPeerSet(numPeers)
for _, peer := range peerSet {
peerList.Append(peer)
}
for _, peer := range peerSet {
peerList.Remove(peer)
for _, p := range peerList.All() {
require.NotEqual(t, p, peer)
}
numPeers--
require.Equal(t, numPeers, peerList.Len())
}
}
// handleRequests is a helper function, usually run in a separate goroutine, to
// imitate the expected responses of the reactor wired to the dispatcher.
func handleRequests(t *testing.T, d *dispatcher, ch chan p2p.Envelope, closeCh chan struct{}) {
func handleRequests(t *testing.T, d *Dispatcher, ch chan p2p.Envelope, closeCh chan struct{}) {
t.Helper()
for {
select {
@@ -288,7 +273,7 @@ func handleRequests(t *testing.T, d *dispatcher, ch chan p2p.Envelope, closeCh c
peer := request.To
resp := mockLBResp(t, peer, int64(height), time.Now())
block, _ := resp.block.ToProto()
require.NoError(t, d.respond(block, resp.peer))
require.NoError(t, d.Respond(block, resp.peer))
case <-closeCh:
return
}

View File

@@ -1,50 +0,0 @@
package statesync
import (
"context"
"time"
mock "github.com/stretchr/testify/mock"
state "github.com/tendermint/tendermint/state"
)
// MockSyncReactor is an autogenerated mock type for the SyncReactor type.
// Because Sync() uses the StateProvider type from this package, this mock lives in package statesync instead of the mocks package.
type MockSyncReactor struct {
mock.Mock
}
// Backfill provides a mock function with given fields: _a0
func (_m *MockSyncReactor) Backfill(_a0 state.State) error {
ret := _m.Called(_a0)
var r0 error
if rf, ok := ret.Get(0).(func(state.State) error); ok {
r0 = rf(_a0)
} else {
r0 = ret.Error(0)
}
return r0
}
// Sync provides a mock function with given fields: _a0, _a1, _a2
func (_m *MockSyncReactor) Sync(_a0 context.Context, _a1 StateProvider, _a2 time.Duration) (state.State, error) {
ret := _m.Called(_a0, _a1, _a2)
var r0 state.State
if rf, ok := ret.Get(0).(func(context.Context, StateProvider, time.Duration) state.State); ok {
r0 = rf(_a0, _a1, _a2)
} else {
r0 = ret.Get(0).(state.State)
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, StateProvider, time.Duration) error); ok {
r1 = rf(_a0, _a1, _a2)
} else {
r1 = ret.Error(1)
}
return r0, r1
}

View File

@@ -16,6 +16,8 @@ import (
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/light"
"github.com/tendermint/tendermint/light/provider"
ssproto "github.com/tendermint/tendermint/proto/tendermint/statesync"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
@@ -61,13 +63,24 @@ var (
MsgType: new(ssproto.Message),
Descriptor: &p2p.ChannelDescriptor{
ID: byte(LightBlockChannel),
Priority: 2,
Priority: 5,
SendQueueCapacity: 10,
RecvMessageCapacity: lightBlockMsgSize,
RecvBufferCapacity: 128,
MaxSendBytes: 400,
},
},
ParamsChannel: {
MsgType: new(ssproto.Message),
Descriptor: &p2p.ChannelDescriptor{
ID: byte(ParamsChannel),
Priority: 2,
SendQueueCapacity: 10,
RecvMessageCapacity: paramMsgSize,
RecvBufferCapacity: 128,
MaxSendBytes: 400,
},
},
}
)
@@ -81,6 +94,9 @@ const (
// LightBlockChannel exchanges light blocks
LightBlockChannel = p2p.ChannelID(0x62)
// ParamsChannel exchanges consensus params
ParamsChannel = p2p.ChannelID(0x63)
// recentSnapshots is the number of recent snapshots to send and receive per peer.
recentSnapshots = 10
@@ -91,31 +107,34 @@ const (
chunkMsgSize = int(16e6) // ~16MB
// lightBlockMsgSize is the maximum size of a lightBlockResponseMessage
lightBlockMsgSize = int(1e7) // ~10MB
lightBlockMsgSize = int(1e7) // ~10MB
// paramMsgSize is the maximum size of a paramsResponseMessage
paramMsgSize = int(1e5) // ~100kb
// lightBlockResponseTimeout is how long the dispatcher waits for a peer to
// return a light block
lightBlockResponseTimeout = 30 * time.Second
lightBlockResponseTimeout = 10 * time.Second
// consensusParamsResponseTimeout is the time the p2p state provider waits
// before performing a secondary call
consensusParamsResponseTimeout = 5 * time.Second
// maxLightBlockRequestRetries is the amount of retries acceptable before
// the backfill process aborts
maxLightBlockRequestRetries = 20
)
// SyncReactor defines an interface used for testing the behavior of node.startStateSync.
type SyncReactor interface {
Sync(context.Context, StateProvider, time.Duration) (sm.State, error)
Backfill(sm.State) error
}
// Reactor handles state sync, both restoring snapshots for the local node and
// serving snapshots for other nodes.
type Reactor struct {
service.BaseService
cfg config.StateSyncConfig
stateStore sm.Store
blockStore *store.BlockStore
chainID string
initialHeight int64
cfg config.StateSyncConfig
stateStore sm.Store
blockStore *store.BlockStore
conn proxy.AppConnSnapshot
connQuery proxy.AppConnQuery
@@ -123,15 +142,22 @@ type Reactor struct {
snapshotCh *p2p.Channel
chunkCh *p2p.Channel
blockCh *p2p.Channel
paramsCh *p2p.Channel
peerUpdates *p2p.PeerUpdates
closeCh chan struct{}
dispatcher *dispatcher
// Dispatcher is used to multiplex light block requests and responses over multiple
// peers. It is used by the p2p state provider and during reverse sync (backfill).
dispatcher *Dispatcher
peers *peerList
// This will only be set when a state sync is in progress. It is used to feed
// received snapshots and chunks into the sync.
mtx tmsync.RWMutex
syncer *syncer
// These will only be set when a state sync is in progress. They are used to feed
// received snapshots and chunks into the syncer and to manage incoming and outgoing
// providers.
mtx tmsync.RWMutex
syncer *syncer
providers map[types.NodeID]*BlockProvider
stateProvider StateProvider
}
// NewReactor returns a reference to a new state sync reactor, which implements
@@ -139,29 +165,36 @@ type Reactor struct {
// and querying, references to p2p Channels and a channel to listen for peer
// updates on. Note, the reactor will close all p2p Channels when stopping.
func NewReactor(
chainID string,
initialHeight int64,
cfg config.StateSyncConfig,
logger log.Logger,
conn proxy.AppConnSnapshot,
connQuery proxy.AppConnQuery,
snapshotCh, chunkCh, blockCh *p2p.Channel,
snapshotCh, chunkCh, blockCh, paramsCh *p2p.Channel,
peerUpdates *p2p.PeerUpdates,
stateStore sm.Store,
blockStore *store.BlockStore,
tempDir string,
) *Reactor {
r := &Reactor{
cfg: cfg,
conn: conn,
connQuery: connQuery,
snapshotCh: snapshotCh,
chunkCh: chunkCh,
blockCh: blockCh,
peerUpdates: peerUpdates,
closeCh: make(chan struct{}),
tempDir: tempDir,
dispatcher: newDispatcher(blockCh.Out, lightBlockResponseTimeout),
stateStore: stateStore,
blockStore: blockStore,
chainID: chainID,
initialHeight: initialHeight,
cfg: cfg,
conn: conn,
connQuery: connQuery,
snapshotCh: snapshotCh,
chunkCh: chunkCh,
blockCh: blockCh,
paramsCh: paramsCh,
peerUpdates: peerUpdates,
closeCh: make(chan struct{}),
tempDir: tempDir,
stateStore: stateStore,
blockStore: blockStore,
peers: newPeerList(),
dispatcher: NewDispatcher(blockCh.Out),
providers: make(map[types.NodeID]*BlockProvider),
}
r.BaseService = *service.NewBaseService(logger, "StateSync", r)
@@ -170,26 +203,20 @@ func NewReactor(
// OnStart starts separate goroutines for each p2p Channel and listens for
// envelopes on each. In addition, it also listens for peer updates and handles
// messages on that p2p channel accordingly. The caller must be sure to execute
// OnStop to ensure the outbound p2p Channels are closed. No error is returned.
// messages on that p2p channel accordingly. Note, we do not launch a goroutine to
// handle individual envelopes so as not to have to deal with bounding workers or pools.
// The caller must be sure to execute OnStop to ensure the outbound p2p Channels are
// closed. No error is returned.
func (r *Reactor) OnStart() error {
// Listen for envelopes on the snapshot p2p Channel in a separate go-routine
// as to not block or cause IO contention with the chunk p2p Channel. Note,
// we do not launch a go-routine to handle individual envelopes as to not
// have to deal with bounding workers or pools.
go r.processSnapshotCh()
// Listen for envelopes on the chunk p2p Channel in a separate go-routine
// as to not block or cause IO contention with the snapshot p2p Channel. Note,
// we do not launch a go-routine to handle individual envelopes as to not
// have to deal with bounding workers or pools.
go r.processChunkCh()
go r.processBlockCh()
go r.processPeerUpdates()
go r.processParamsCh()
r.dispatcher.start()
go r.processPeerUpdates()
return nil
}
@@ -198,7 +225,9 @@ func (r *Reactor) OnStart() error {
// blocking until they all exit.
func (r *Reactor) OnStop() {
// tell the dispatcher to stop sending any more requests
r.dispatcher.stop()
r.dispatcher.Close()
// wait for any remaining requests to complete
<-r.dispatcher.Done()
// Close closeCh to signal to all spawned goroutines to gracefully exit. All
// p2p Channels should execute Close().
@@ -210,27 +239,27 @@ func (r *Reactor) OnStop() {
<-r.snapshotCh.Done()
<-r.chunkCh.Done()
<-r.blockCh.Done()
<-r.paramsCh.Done()
<-r.peerUpdates.Done()
}
// Sync runs a state sync, fetching snapshots and providing chunks to the
// application. It also saves tendermint state and runs a backfill process to
// retrieve the necessary amount of headers, commits and validators sets to be
// able to process evidence and participate in consensus.
func (r *Reactor) Sync(
ctx context.Context,
stateProvider StateProvider,
discoveryTime time.Duration,
) (sm.State, error) {
// application. At the close of the operation, Sync will bootstrap the state
// store and persist the commit at that height so that either consensus or
// blocksync can commence. It will then proceed to backfill the necessary number
// of historical blocks before participating in consensus.
func (r *Reactor) Sync(ctx context.Context) (sm.State, error) {
// We need at least two peers (for cross-referencing of light blocks) before we can
// begin state sync.
r.waitForEnoughPeers(ctx, 2)
r.mtx.Lock()
if r.syncer != nil {
r.mtx.Unlock()
return sm.State{}, errors.New("a state sync is already in progress")
}
if stateProvider == nil {
r.mtx.Unlock()
return sm.State{}, errors.New("the stateProvider should not be nil when doing the state sync")
if err := r.initStateProvider(ctx, r.chainID, r.initialHeight); err != nil {
return sm.State{}, err
}
r.syncer = newSyncer(
@@ -238,12 +267,19 @@ func (r *Reactor) Sync(
r.Logger,
r.conn,
r.connQuery,
stateProvider,
r.stateProvider,
r.snapshotCh.Out,
r.chunkCh.Out,
r.tempDir,
)
r.mtx.Unlock()
defer func() {
r.mtx.Lock()
// reset syncing objects at the close of Sync
r.syncer = nil
r.stateProvider = nil
r.mtx.Unlock()
}()
requestSnapshotsHook := func() {
// request snapshots from all currently connected peers
@@ -253,15 +289,11 @@ func (r *Reactor) Sync(
}
}
state, commit, err := r.syncer.SyncAny(ctx, discoveryTime, requestSnapshotsHook)
state, commit, err := r.syncer.SyncAny(ctx, r.cfg.DiscoveryTime, requestSnapshotsHook)
if err != nil {
return sm.State{}, err
}
r.mtx.Lock()
r.syncer = nil
r.mtx.Unlock()
err = r.stateStore.Bootstrap(state)
if err != nil {
return sm.State{}, fmt.Errorf("failed to bootstrap node with new state: %w", err)
@@ -272,6 +304,11 @@ func (r *Reactor) Sync(
return sm.State{}, fmt.Errorf("failed to store last seen commit: %w", err)
}
err = r.Backfill(ctx, state)
if err != nil {
r.Logger.Error("backfill failed. Proceeding optimistically...", "err", err)
}
return state, nil
}
@@ -279,7 +316,7 @@ func (r *Reactor) Sync(
// order. It does not stop verifying blocks until reaching a block with a height
// and time that is less than or equal to the stopHeight and stopTime. The
// trustedBlockID should be of the header at startHeight.
func (r *Reactor) Backfill(state sm.State) error {
func (r *Reactor) Backfill(ctx context.Context, state sm.State) error {
params := state.ConsensusParams.Evidence
stopHeight := state.LastBlockHeight - params.MaxAgeNumBlocks
stopTime := state.LastBlockTime.Add(-params.MaxAgeDuration)
@@ -290,7 +327,7 @@ func (r *Reactor) Backfill(state sm.State) error {
stopTime = state.LastBlockTime
}
return r.backfill(
context.Background(),
ctx,
state.ChainID,
state.LastBlockHeight,
stopHeight,
@@ -308,12 +345,12 @@ func (r *Reactor) backfill(
stopTime time.Time,
) error {
r.Logger.Info("starting backfill process...", "startHeight", startHeight,
"stopHeight", stopHeight, "trustedBlockID", trustedBlockID)
"stopHeight", stopHeight, "stopTime", stopTime, "trustedBlockID", trustedBlockID)
const sleepTime = 1 * time.Second
var (
lastValidatorSet *types.ValidatorSet
lastChangeHeight int64 = startHeight
lastChangeHeight = startHeight
)
queue := newBlockQueue(startHeight, stopHeight, initialHeight, stopTime, maxLightBlockRequestRetries)
@@ -330,8 +367,18 @@ func (r *Reactor) backfill(
for {
select {
case height := <-queue.nextHeight():
r.Logger.Debug("fetching next block", "height", height)
lb, peer, err := r.dispatcher.LightBlock(ctxWithCancel, height)
// pop the next peer off the list to send a request to
peer := r.peers.Pop(ctx)
r.Logger.Debug("fetching next block", "height", height, "peer", peer)
subCtx, cancel := context.WithTimeout(ctxWithCancel, lightBlockResponseTimeout)
lb, err := func() (*types.LightBlock, error) {
// cancel the timeout context as soon as this closure returns; deferring
// it in the enclosing loop as well would accumulate timers until the
// goroutine exits
defer cancel()
// request the light block with a timeout
return r.dispatcher.LightBlock(subCtx, height, peer)
}()
// once the peer has returned a value, add it back to the peer list to be used again
r.peers.Append(peer)
if errors.Is(err, context.Canceled) {
return
}
@@ -353,7 +400,7 @@ func (r *Reactor) backfill(
queue.retry(height)
// As we are fetching blocks backwards, if this node doesn't have the block it likely doesn't
// have any prior ones, so we remove it from the peer list.
r.dispatcher.removePeer(peer)
r.peers.Remove(peer)
continue
}
@@ -450,12 +497,6 @@ func (r *Reactor) backfill(
}
}
// Dispatcher exposes the dispatcher so that a state provider can use it for
// light client verification
func (r *Reactor) Dispatcher() *dispatcher { //nolint:golint
return r.dispatcher
}
// handleSnapshotMessage handles envelopes sent from peers on the
// SnapshotChannel. It returns an error only if the Envelope.Message is unknown
// for this channel. This should never be called outside of handleMessage.
@@ -498,7 +539,7 @@ func (r *Reactor) handleSnapshotMessage(envelope p2p.Envelope) error {
return nil
}
logger.Debug("received snapshot", "height", msg.Height, "format", msg.Format)
logger.Info("received snapshot", "height", msg.Height, "format", msg.Format)
_, err := r.syncer.AddSnapshot(envelope.From, &snapshot{
Height: msg.Height,
Format: msg.Format,
@@ -516,6 +557,7 @@ func (r *Reactor) handleSnapshotMessage(envelope p2p.Envelope) error {
)
return nil
}
logger.Info("added snapshot", "height", msg.Height, "format", msg.Format)
default:
return fmt.Errorf("received unknown message: %T", msg)
@@ -623,6 +665,15 @@ func (r *Reactor) handleLightBlockMessage(envelope p2p.Envelope) error {
r.Logger.Error("failed to retrieve light block", "err", err, "height", msg.Height)
return err
}
if lb == nil {
r.blockCh.Out <- p2p.Envelope{
To: envelope.From,
Message: &ssproto.LightBlockResponse{
LightBlock: nil,
},
}
return nil
}
lbproto, err := lb.ToProto()
if err != nil {
@@ -640,8 +691,55 @@ func (r *Reactor) handleLightBlockMessage(envelope p2p.Envelope) error {
}
case *ssproto.LightBlockResponse:
if err := r.dispatcher.respond(msg.LightBlock, envelope.From); err != nil {
r.Logger.Error("error processing light block response", "err", err)
var height int64
if msg.LightBlock != nil {
height = msg.LightBlock.SignedHeader.Header.Height
}
r.Logger.Info("received light block response", "peer", envelope.From, "height", height)
if err := r.dispatcher.Respond(msg.LightBlock, envelope.From); err != nil {
r.Logger.Error("error processing light block response", "err", err, "height", height)
}
default:
return fmt.Errorf("received unknown message: %T", msg)
}
return nil
}
func (r *Reactor) handleParamsMessage(envelope p2p.Envelope) error {
switch msg := envelope.Message.(type) {
case *ssproto.ParamsRequest:
r.Logger.Debug("received consensus params request", "height", msg.Height)
cp, err := r.stateStore.LoadConsensusParams(int64(msg.Height))
if err != nil {
r.Logger.Error("failed to fetch requested consensus params", "err", err, "height", msg.Height)
return nil
}
cpproto := cp.ToProto()
r.paramsCh.Out <- p2p.Envelope{
To: envelope.From,
Message: &ssproto.ParamsResponse{
Height: msg.Height,
ConsensusParams: cpproto,
},
}
case *ssproto.ParamsResponse:
r.mtx.RLock()
defer r.mtx.RUnlock()
r.Logger.Debug("received consensus params response", "height", msg.Height)
cp := types.ConsensusParamsFromProto(msg.ConsensusParams)
if sp, ok := r.stateProvider.(*stateProviderP2P); ok {
select {
case sp.paramsRecvCh <- cp:
default:
}
} else {
r.Logger.Debug("received unexpected params response; using RPC state provider", "peer", envelope.From)
}
default:
@@ -678,6 +776,9 @@ func (r *Reactor) handleMessage(chID p2p.ChannelID, envelope p2p.Envelope) (err
case LightBlockChannel:
err = r.handleLightBlockMessage(envelope)
case ParamsChannel:
err = r.handleParamsMessage(envelope)
default:
err = fmt.Errorf("unknown channel ID (%d) for envelope (%v)", chID, envelope)
}
@@ -703,6 +804,10 @@ func (r *Reactor) processBlockCh() {
r.processCh(r.blockCh, "light block")
}
func (r *Reactor) processParamsCh() {
r.processCh(r.paramsCh, "consensus params")
}
// processCh routes state sync messages to their respective handlers. Any error
// encountered during message execution will result in a PeerError being sent on
// the respective channel. When the reactor is stopped, we will catch the signal
@@ -732,24 +837,38 @@ func (r *Reactor) processCh(ch *p2p.Channel, chName string) {
// processPeerUpdate processes a PeerUpdate, updating the peer list, the syncer,
// and the block providers according to the peer's new status.
func (r *Reactor) processPeerUpdate(peerUpdate p2p.PeerUpdate) {
r.Logger.Debug("received peer update", "peer", peerUpdate.NodeID, "status", peerUpdate.Status)
r.mtx.RLock()
defer r.mtx.RUnlock()
r.Logger.Info("received peer update", "peer", peerUpdate.NodeID, "status", peerUpdate.Status)
switch peerUpdate.Status {
case p2p.PeerStatusUp:
if r.syncer != nil {
r.syncer.AddPeer(peerUpdate.NodeID)
r.peers.Append(peerUpdate.NodeID)
case p2p.PeerStatusDown:
r.peers.Remove(peerUpdate.NodeID)
}
r.mtx.Lock()
if r.syncer == nil {
r.mtx.Unlock()
return
}
defer r.mtx.Unlock()
switch peerUpdate.Status {
case p2p.PeerStatusUp:
newProvider := NewBlockProvider(peerUpdate.NodeID, r.chainID, r.dispatcher)
r.providers[peerUpdate.NodeID] = newProvider
r.syncer.AddPeer(peerUpdate.NodeID)
if sp, ok := r.stateProvider.(*stateProviderP2P); ok {
// we do this in a separate routine to not block whilst waiting for the light client to finish
// whatever call it's currently executing
go sp.addProvider(newProvider)
}
r.dispatcher.addPeer(peerUpdate.NodeID)
case p2p.PeerStatusDown:
if r.syncer != nil {
r.syncer.RemovePeer(peerUpdate.NodeID)
}
r.dispatcher.removePeer(peerUpdate.NodeID)
delete(r.providers, peerUpdate.NodeID)
r.syncer.RemovePeer(peerUpdate.NodeID)
}
r.Logger.Info("processed peer update", "peer", peerUpdate.NodeID, "status", peerUpdate.Status)
}
// processPeerUpdates initiates a blocking process where we listen for and handle
@@ -839,5 +958,50 @@ func (r *Reactor) fetchLightBlock(height uint64) (*types.LightBlock, error) {
},
ValidatorSet: vals,
}, nil
}
func (r *Reactor) waitForEnoughPeers(ctx context.Context, numPeers int) {
t := time.NewTicker(200 * time.Millisecond)
defer t.Stop()
for {
select {
case <-ctx.Done():
return
case <-t.C:
if r.peers.Len() >= numPeers {
return
}
}
}
}
func (r *Reactor) initStateProvider(ctx context.Context, chainID string, initialHeight int64) error {
var err error
to := light.TrustOptions{
Period: r.cfg.TrustPeriod,
Height: r.cfg.TrustHeight,
Hash: r.cfg.TrustHashBytes(),
}
spLogger := r.Logger.With("module", "stateprovider")
spLogger.Info("initializing state provider", "trustPeriod", to.Period,
"trustHeight", to.Height, "useP2P", r.cfg.UseP2P)
if r.cfg.UseP2P {
peers := r.peers.All()
providers := make([]provider.Provider, len(peers))
for idx, p := range peers {
providers[idx] = NewBlockProvider(p, chainID, r.dispatcher)
}
r.stateProvider, err = NewP2PStateProvider(ctx, chainID, initialHeight, providers, to, r.paramsCh.Out, spLogger)
if err != nil {
return fmt.Errorf("failed to initialize P2P state provider: %w", err)
}
} else {
r.stateProvider, err = NewRPCStateProvider(ctx, chainID, initialHeight, r.cfg.RPCServers, to, spLogger)
if err != nil {
return fmt.Errorf("failed to initialize RPC state provider: %w", err)
}
}
return nil
}
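The backfill loop above couples three pieces: the rolling peerList, the Dispatcher, and a per-request timeout. Condensed into a single hypothetical helper (fetchWithPeer is illustrative, not part of the diff), the pattern is:

func fetchWithPeer(ctx context.Context, peers *peerList, d *Dispatcher, height int64) (*types.LightBlock, types.NodeID, error) {
	// take the next peer off the rolling list; blocks until one is available
	peer := peers.Pop(ctx)
	subCtx, cancel := context.WithTimeout(ctx, lightBlockResponseTimeout)
	defer cancel()
	lb, err := d.LightBlock(subCtx, height, peer)
	// return the peer to the list so it can serve the next request; callers
	// remove it themselves if it turned out not to have the block
	peers.Append(peer)
	return lb, peer, err
}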

View File

@@ -3,6 +3,7 @@ package statesync
import (
"context"
"fmt"
"strings"
"sync"
"testing"
"time"
@@ -21,6 +22,7 @@ import (
"github.com/tendermint/tendermint/light/provider"
ssproto "github.com/tendermint/tendermint/proto/tendermint/statesync"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/proxy"
proxymocks "github.com/tendermint/tendermint/proxy/mocks"
smmocks "github.com/tendermint/tendermint/state/mocks"
"github.com/tendermint/tendermint/store"
@@ -50,6 +52,11 @@ type reactorTestSuite struct {
blockOutCh chan p2p.Envelope
blockPeerErrCh chan p2p.PeerError
paramsChannel *p2p.Channel
paramsInCh chan p2p.Envelope
paramsOutCh chan p2p.Envelope
paramsPeerErrCh chan p2p.PeerError
peerUpdateCh chan p2p.PeerUpdate
peerUpdates *p2p.PeerUpdates
@@ -86,6 +93,9 @@ func setup(
blockInCh: make(chan p2p.Envelope, chBuf),
blockOutCh: make(chan p2p.Envelope, chBuf),
blockPeerErrCh: make(chan p2p.PeerError, chBuf),
paramsInCh: make(chan p2p.Envelope, chBuf),
paramsOutCh: make(chan p2p.Envelope, chBuf),
paramsPeerErrCh: make(chan p2p.PeerError, chBuf),
conn: conn,
connQuery: connQuery,
stateProvider: stateProvider,
@@ -118,12 +128,22 @@ func setup(
rts.blockPeerErrCh,
)
rts.paramsChannel = p2p.NewChannel(
ParamsChannel,
new(ssproto.Message),
rts.paramsInCh,
rts.paramsOutCh,
rts.paramsPeerErrCh,
)
rts.stateStore = &smmocks.Store{}
rts.blockStore = store.NewBlockStore(dbm.NewMemDB())
cfg := config.DefaultStateSyncConfig()
rts.reactor = NewReactor(
factory.DefaultTestChainID,
1,
*cfg,
log.TestingLogger(),
conn,
@@ -131,15 +151,13 @@ func setup(
rts.snapshotChannel,
rts.chunkChannel,
rts.blockChannel,
rts.paramsChannel,
rts.peerUpdates,
rts.stateStore,
rts.blockStore,
"",
)
// override the dispatcher with one with a shorter timeout
rts.reactor.dispatcher = newDispatcher(rts.blockChannel.Out, 1*time.Second)
rts.syncer = newSyncer(
*cfg,
log.NewNopLogger(),
@@ -162,6 +180,58 @@ func setup(
return rts
}
func TestReactor_Sync(t *testing.T) {
const snapshotHeight = 7
rts := setup(t, nil, nil, nil, 2)
chain := buildLightBlockChain(t, 1, 10, time.Now())
// app accepts any snapshot
rts.conn.On("OfferSnapshotSync", ctx, mock.AnythingOfType("types.RequestOfferSnapshot")).
Return(&abci.ResponseOfferSnapshot{Result: abci.ResponseOfferSnapshot_ACCEPT}, nil)
// app accepts every chunk
rts.conn.On("ApplySnapshotChunkSync", ctx, mock.AnythingOfType("types.RequestApplySnapshotChunk")).
Return(&abci.ResponseApplySnapshotChunk{Result: abci.ResponseApplySnapshotChunk_ACCEPT}, nil)
// app query returns valid state app hash
rts.connQuery.On("InfoSync", ctx, proxy.RequestInfo).Return(&abci.ResponseInfo{
AppVersion: 9,
LastBlockHeight: snapshotHeight,
LastBlockAppHash: chain[snapshotHeight+1].AppHash,
}, nil)
// store accepts state and validator sets
rts.stateStore.On("Bootstrap", mock.AnythingOfType("state.State")).Return(nil)
rts.stateStore.On("SaveValidatorSets", mock.AnythingOfType("int64"), mock.AnythingOfType("int64"),
mock.AnythingOfType("*types.ValidatorSet")).Return(nil)
closeCh := make(chan struct{})
defer close(closeCh)
go handleLightBlockRequests(t, chain, rts.blockOutCh,
rts.blockInCh, closeCh, 0)
go graduallyAddPeers(rts.peerUpdateCh, closeCh, 1*time.Second)
go handleSnapshotRequests(t, rts.snapshotOutCh, rts.snapshotInCh, closeCh, []snapshot{
{
Height: uint64(snapshotHeight),
Format: 1,
Chunks: 1,
},
})
go handleChunkRequests(t, rts.chunkOutCh, rts.chunkInCh, closeCh, []byte("abc"))
go handleConsensusParamsRequest(t, rts.paramsOutCh, rts.paramsInCh, closeCh)
// update the config to use the p2p provider
rts.reactor.cfg.UseP2P = true
rts.reactor.cfg.TrustHeight = 1
rts.reactor.cfg.TrustHash = fmt.Sprintf("%X", chain[1].Hash())
rts.reactor.cfg.DiscoveryTime = 1 * time.Second
// Run state sync
_, err := rts.reactor.Sync(context.Background())
require.NoError(t, err)
}
func TestReactor_ChunkRequest_InvalidRequest(t *testing.T) {
rts := setup(t, nil, nil, nil, 2)
@@ -370,7 +440,7 @@ func TestReactor_LightBlockResponse(t *testing.T) {
}
}
func TestReactor_Dispatcher(t *testing.T) {
func TestReactor_BlockProviders(t *testing.T) {
rts := setup(t, nil, nil, nil, 2)
rts.peerUpdateCh <- p2p.PeerUpdate{
NodeID: types.NodeID("aa"),
@@ -387,9 +457,13 @@ func TestReactor_Dispatcher(t *testing.T) {
chain := buildLightBlockChain(t, 1, 10, time.Now())
go handleLightBlockRequests(t, chain, rts.blockOutCh, rts.blockInCh, closeCh, 0)
dispatcher := rts.reactor.Dispatcher()
providers := dispatcher.Providers(factory.DefaultTestChainID, 5*time.Second)
require.Len(t, providers, 2)
peers := rts.reactor.peers.All()
require.Len(t, peers, 2)
providers := make([]provider.Provider, len(peers))
for idx, peer := range peers {
providers[idx] = NewBlockProvider(peer, factory.DefaultTestChainID, rts.reactor.dispatcher)
}
wg := sync.WaitGroup{}
@@ -416,6 +490,59 @@ func TestReactor_Dispatcher(t *testing.T) {
t.Fail()
case <-ctx.Done():
}
}
func TestReactor_StateProviderP2P(t *testing.T) {
rts := setup(t, nil, nil, nil, 2)
// make syncer non-nil, otherwise the test won't think we are state syncing
rts.reactor.syncer = rts.syncer
peerA := types.NodeID(strings.Repeat("a", 2*types.NodeIDByteLength))
peerB := types.NodeID(strings.Repeat("b", 2*types.NodeIDByteLength))
rts.peerUpdateCh <- p2p.PeerUpdate{
NodeID: peerA,
Status: p2p.PeerStatusUp,
}
rts.peerUpdateCh <- p2p.PeerUpdate{
NodeID: peerB,
Status: p2p.PeerStatusUp,
}
closeCh := make(chan struct{})
defer close(closeCh)
chain := buildLightBlockChain(t, 1, 10, time.Now())
go handleLightBlockRequests(t, chain, rts.blockOutCh, rts.blockInCh, closeCh, 0)
go handleConsensusParamsRequest(t, rts.paramsOutCh, rts.paramsInCh, closeCh)
rts.reactor.cfg.UseP2P = true
rts.reactor.cfg.TrustHeight = 1
rts.reactor.cfg.TrustHash = fmt.Sprintf("%X", chain[1].Hash())
ctx := context.Background()
rts.reactor.mtx.Lock()
err := rts.reactor.initStateProvider(ctx, factory.DefaultTestChainID, 1)
rts.reactor.mtx.Unlock()
require.NoError(t, err)
rts.reactor.syncer.stateProvider = rts.reactor.stateProvider
appHash, err := rts.reactor.stateProvider.AppHash(ctx, 5)
require.NoError(t, err)
require.Len(t, appHash, 32)
state, err := rts.reactor.stateProvider.State(ctx, 5)
require.NoError(t, err)
require.Equal(t, appHash, state.AppHash)
require.Equal(t, types.DefaultConsensusParams(), &state.ConsensusParams)
commit, err := rts.reactor.stateProvider.Commit(ctx, 5)
require.NoError(t, err)
require.Equal(t, commit.BlockID, state.LastBlockID)
added, err := rts.reactor.syncer.AddSnapshot(peerA, &snapshot{
Height: 1, Format: 2, Chunks: 7, Hash: []byte{1, 2}, Metadata: []byte{1},
})
require.NoError(t, err)
require.True(t, added)
}
func TestReactor_Backfill(t *testing.T) {
@@ -494,7 +621,6 @@ func retryUntil(t *testing.T, fn func() bool, timeout time.Duration) {
if fn() {
return
}
require.NoError(t, ctx.Err())
}
}
@@ -523,7 +649,9 @@ func handleLightBlockRequests(t *testing.T,
} else {
switch errorCount % 3 {
case 0: // send a different block
differntLB, err := mockLB(t, int64(msg.Height), factory.DefaultTestTime, factory.MakeBlockID()).ToProto()
vals, pv := factory.RandValidatorSet(3, 10)
_, _, lb := mockLB(t, int64(msg.Height), factory.DefaultTestTime, factory.MakeBlockID(), vals, pv)
differntLB, err := lb.ToProto()
require.NoError(t, err)
sending <- p2p.Envelope{
From: envelope.To,
@@ -550,37 +678,147 @@ func handleLightBlockRequests(t *testing.T,
}
}
func handleConsensusParamsRequest(t *testing.T, receiving, sending chan p2p.Envelope, closeCh chan struct{}) {
t.Helper()
params := types.DefaultConsensusParams()
paramsProto := params.ToProto()
for {
select {
case envelope := <-receiving:
t.Log("received consensus params request")
msg, ok := envelope.Message.(*ssproto.ParamsRequest)
require.True(t, ok)
sending <- p2p.Envelope{
From: envelope.To,
Message: &ssproto.ParamsResponse{
Height: msg.Height,
ConsensusParams: paramsProto,
},
}
case <-closeCh:
return
}
}
}
func buildLightBlockChain(t *testing.T, fromHeight, toHeight int64, startTime time.Time) map[int64]*types.LightBlock {
chain := make(map[int64]*types.LightBlock, toHeight-fromHeight)
lastBlockID := factory.MakeBlockID()
blockTime := startTime.Add(-5 * time.Minute)
blockTime := startTime.Add(time.Duration(fromHeight-toHeight) * time.Minute)
vals, pv := factory.RandValidatorSet(3, 10)
for height := fromHeight; height < toHeight; height++ {
chain[height] = mockLB(t, height, blockTime, lastBlockID)
vals, pv, chain[height] = mockLB(t, height, blockTime, lastBlockID, vals, pv)
lastBlockID = factory.MakeBlockIDWithHash(chain[height].Header.Hash())
blockTime = blockTime.Add(1 * time.Minute)
}
return chain
}
func mockLB(t *testing.T, height int64, time time.Time,
lastBlockID types.BlockID) *types.LightBlock {
func mockLB(t *testing.T, height int64, time time.Time, lastBlockID types.BlockID,
currentVals *types.ValidatorSet, currentPrivVals []types.PrivValidator,
) (*types.ValidatorSet, []types.PrivValidator, *types.LightBlock) {
header, err := factory.MakeHeader(&types.Header{
Height: height,
LastBlockID: lastBlockID,
Time: time,
})
require.NoError(t, err)
vals, pv := factory.RandValidatorSet(3, 10)
header.ValidatorsHash = vals.Hash()
nextVals, nextPrivVals := factory.RandValidatorSet(3, 10)
header.ValidatorsHash = currentVals.Hash()
header.NextValidatorsHash = nextVals.Hash()
header.ConsensusHash = types.DefaultConsensusParams().HashConsensusParams()
lastBlockID = factory.MakeBlockIDWithHash(header.Hash())
voteSet := types.NewVoteSet(factory.DefaultTestChainID, height, 0, tmproto.PrecommitType, vals)
commit, err := factory.MakeCommit(lastBlockID, height, 0, voteSet, pv, time)
voteSet := types.NewVoteSet(factory.DefaultTestChainID, height, 0, tmproto.PrecommitType, currentVals)
commit, err := factory.MakeCommit(lastBlockID, height, 0, voteSet, currentPrivVals, time)
require.NoError(t, err)
return &types.LightBlock{
return nextVals, nextPrivVals, &types.LightBlock{
SignedHeader: &types.SignedHeader{
Header: header,
Commit: commit,
},
ValidatorSet: vals,
ValidatorSet: currentVals,
}
}
// graduallyAddPeers delivers a new randomly-generated peer update on peerUpdateCh once
// per interval, until closeCh is closed. Each peer update is assigned a random node ID.
func graduallyAddPeers(
peerUpdateCh chan p2p.PeerUpdate,
closeCh chan struct{},
interval time.Duration,
) {
ticker := time.NewTicker(interval)
for {
select {
case <-ticker.C:
peerUpdateCh <- p2p.PeerUpdate{
NodeID: factory.RandomNodeID(),
Status: p2p.PeerStatusUp,
}
case <-closeCh:
return
}
}
}
func handleSnapshotRequests(
t *testing.T,
receivingCh chan p2p.Envelope,
sendingCh chan p2p.Envelope,
closeCh chan struct{},
snapshots []snapshot,
) {
t.Helper()
for {
select {
case envelope := <-receivingCh:
_, ok := envelope.Message.(*ssproto.SnapshotsRequest)
require.True(t, ok)
for _, snapshot := range snapshots {
sendingCh <- p2p.Envelope{
From: envelope.To,
Message: &ssproto.SnapshotsResponse{
Height: snapshot.Height,
Format: snapshot.Format,
Chunks: snapshot.Chunks,
Hash: snapshot.Hash,
Metadata: snapshot.Metadata,
},
}
}
case <-closeCh:
return
}
}
}
func handleChunkRequests(
t *testing.T,
receivingCh chan p2p.Envelope,
sendingCh chan p2p.Envelope,
closeCh chan struct{},
chunk []byte,
) {
t.Helper()
for {
select {
case envelope := <-receivingCh:
msg, ok := envelope.Message.(*ssproto.ChunkRequest)
require.True(t, ok)
sendingCh <- p2p.Envelope{
From: envelope.To,
Message: &ssproto.ChunkResponse{
Height: msg.Height,
Format: msg.Format,
Index: msg.Index,
Chunk: chunk,
Missing: false,
},
}
case <-closeCh:
return
}
}
}


@@ -1,13 +1,11 @@
package statesync
import (
"context"
"crypto/sha256"
"fmt"
"math/rand"
"sort"
"strings"
"time"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/types"
@@ -43,8 +41,6 @@ func (s *snapshot) Key() snapshotKey {
// snapshotPool discovers and aggregates snapshots across peers.
type snapshotPool struct {
stateProvider StateProvider
tmsync.Mutex
snapshots map[snapshotKey]*snapshot
snapshotPeers map[snapshotKey]map[types.NodeID]types.NodeID
@@ -60,10 +56,9 @@ type snapshotPool struct {
snapshotBlacklist map[snapshotKey]bool
}
// newSnapshotPool creates a new snapshot pool. The state source is used for
func newSnapshotPool(stateProvider StateProvider) *snapshotPool {
// newSnapshotPool creates a new empty snapshot pool.
func newSnapshotPool() *snapshotPool {
return &snapshotPool{
stateProvider: stateProvider,
snapshots: make(map[snapshotKey]*snapshot),
snapshotPeers: make(map[snapshotKey]map[types.NodeID]types.NodeID),
formatIndex: make(map[uint32]map[snapshotKey]bool),
@@ -80,14 +75,6 @@ func newSnapshotPool(stateProvider StateProvider) *snapshotPool {
// snapshot height is verified using the light client, and the expected app hash
// is set for the snapshot.
func (p *snapshotPool) Add(peerID types.NodeID, snapshot *snapshot) (bool, error) {
ctx, cancel := context.WithTimeout(context.TODO(), 30*time.Second)
defer cancel()
appHash, err := p.stateProvider.AppHash(ctx, snapshot.Height)
if err != nil {
return false, fmt.Errorf("failed to get app hash: %w", err)
}
snapshot.trustedAppHash = appHash
key := snapshot.Key()
p.Lock()


@@ -3,10 +3,8 @@ package statesync
import (
"testing"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/internal/statesync/mocks"
"github.com/tendermint/tendermint/types"
)
@@ -39,13 +37,10 @@ func TestSnapshot_Key(t *testing.T) {
}
func TestSnapshotPool_Add(t *testing.T) {
stateProvider := &mocks.StateProvider{}
stateProvider.On("AppHash", mock.Anything, uint64(1)).Return([]byte("app_hash"), nil)
peerID := types.NodeID("aa")
// Adding to the pool should work
pool := newSnapshotPool(stateProvider)
pool := newSnapshotPool()
added, err := pool.Add(peerID, &snapshot{
Height: 1,
Format: 1,
@@ -66,18 +61,12 @@ func TestSnapshotPool_Add(t *testing.T) {
require.NoError(t, err)
require.False(t, added)
// The pool should have populated the snapshot with the trusted app hash
snapshot := pool.Best()
require.NotNil(t, snapshot)
require.Equal(t, []byte("app_hash"), snapshot.trustedAppHash)
stateProvider.AssertExpectations(t)
}
func TestSnapshotPool_GetPeer(t *testing.T) {
stateProvider := &mocks.StateProvider{}
stateProvider.On("AppHash", mock.Anything, mock.Anything).Return([]byte("app_hash"), nil)
pool := newSnapshotPool(stateProvider)
pool := newSnapshotPool()
s := &snapshot{Height: 1, Format: 1, Chunks: 1, Hash: []byte{1}}
@@ -112,9 +101,7 @@ func TestSnapshotPool_GetPeer(t *testing.T) {
}
func TestSnapshotPool_GetPeers(t *testing.T) {
stateProvider := &mocks.StateProvider{}
stateProvider.On("AppHash", mock.Anything, mock.Anything).Return([]byte("app_hash"), nil)
pool := newSnapshotPool(stateProvider)
pool := newSnapshotPool()
s := &snapshot{Height: 1, Format: 1, Chunks: 1, Hash: []byte{1}}
@@ -137,9 +124,7 @@ func TestSnapshotPool_GetPeers(t *testing.T) {
}
func TestSnapshotPool_Ranked_Best(t *testing.T) {
stateProvider := &mocks.StateProvider{}
stateProvider.On("AppHash", mock.Anything, mock.Anything).Return([]byte("app_hash"), nil)
pool := newSnapshotPool(stateProvider)
pool := newSnapshotPool()
// snapshots in expected order (best to worst). Highest height wins, then highest format.
// Snapshots with different chunk hashes are considered different, and the most peers is
@@ -182,9 +167,7 @@ func TestSnapshotPool_Ranked_Best(t *testing.T) {
}
func TestSnapshotPool_Reject(t *testing.T) {
stateProvider := &mocks.StateProvider{}
stateProvider.On("AppHash", mock.Anything, mock.Anything).Return([]byte("app_hash"), nil)
pool := newSnapshotPool(stateProvider)
pool := newSnapshotPool()
peerID := types.NodeID("aa")
@@ -212,9 +195,7 @@ func TestSnapshotPool_Reject(t *testing.T) {
}
func TestSnapshotPool_RejectFormat(t *testing.T) {
stateProvider := &mocks.StateProvider{}
stateProvider.On("AppHash", mock.Anything, mock.Anything).Return([]byte("app_hash"), nil)
pool := newSnapshotPool(stateProvider)
pool := newSnapshotPool()
peerID := types.NodeID("aa")
@@ -243,9 +224,7 @@ func TestSnapshotPool_RejectFormat(t *testing.T) {
}
func TestSnapshotPool_RejectPeer(t *testing.T) {
stateProvider := &mocks.StateProvider{}
stateProvider.On("AppHash", mock.Anything, mock.Anything).Return([]byte("app_hash"), nil)
pool := newSnapshotPool(stateProvider)
pool := newSnapshotPool()
peerAID := types.NodeID("aa")
peerBID := types.NodeID("bb")
@@ -285,9 +264,7 @@ func TestSnapshotPool_RejectPeer(t *testing.T) {
}
func TestSnapshotPool_RemovePeer(t *testing.T) {
stateProvider := &mocks.StateProvider{}
stateProvider.On("AppHash", mock.Anything, mock.Anything).Return([]byte("app_hash"), nil)
pool := newSnapshotPool(stateProvider)
pool := newSnapshotPool()
peerAID := types.NodeID("aa")
peerBID := types.NodeID("bb")


@@ -1,7 +1,9 @@
package statesync
import (
"bytes"
"context"
"errors"
"fmt"
"strings"
"time"
@@ -9,21 +11,25 @@ import (
dbm "github.com/tendermint/tm-db"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/light"
lightprovider "github.com/tendermint/tendermint/light/provider"
lighthttp "github.com/tendermint/tendermint/light/provider/http"
lightrpc "github.com/tendermint/tendermint/light/rpc"
lightdb "github.com/tendermint/tendermint/light/store/db"
ssproto "github.com/tendermint/tendermint/proto/tendermint/statesync"
rpchttp "github.com/tendermint/tendermint/rpc/client/http"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/version"
)
//go:generate ../../scripts/mockery_generate.sh StateProvider
// StateProvider is a provider of trusted state data for bootstrapping a node. This refers
// to the state.State object, not the state machine.
// to the state.State object, not the state machine. There are two implementations. One
// uses the P2P layer and the other uses the RPC layer. Both use light client verification.
type StateProvider interface {
// AppHash returns the app hash after the given height has been committed.
AppHash(ctx context.Context, height uint64) ([]byte, error)
@@ -33,20 +39,17 @@ type StateProvider interface {
State(ctx context.Context, height uint64) (sm.State, error)
}
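For orientation, a minimal sketch of how a consumer drives this interface for a chosen snapshot height; the function name is hypothetical, but the three calls mirror how the syncer (shown later in this diff) uses the provider:

func bootstrapState(ctx context.Context, sp StateProvider, height uint64) (sm.State, *types.Commit, error) {
	// Trusted app hash, used to verify the restored snapshot.
	if _, err := sp.AppHash(ctx, height); err != nil {
		return sm.State{}, nil, err
	}
	// Trusted state at the snapshot height, built via light client verification.
	state, err := sp.State(ctx, height)
	if err != nil {
		return sm.State{}, nil, err
	}
	// Commit sealing the snapshot height, needed to bootstrap the node.
	commit, err := sp.Commit(ctx, height)
	if err != nil {
		return sm.State{}, nil, err
	}
	return state, commit, nil
}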
// lightClientStateProvider is a state provider using the light client.
type lightClientStateProvider struct {
type stateProviderRPC struct {
tmsync.Mutex // light.Client is not concurrency-safe
lc *light.Client
version sm.Version
initialHeight int64
providers map[lightprovider.Provider]string
}
// NewLightClientStateProvider creates a new StateProvider using a light client and RPC clients.
func NewLightClientStateProvider(
// NewRPCStateProvider creates a new StateProvider using a light client and RPC clients.
func NewRPCStateProvider(
ctx context.Context,
chainID string,
version sm.Version,
initialHeight int64,
servers []string,
trustOptions light.TrustOptions,
@@ -75,51 +78,17 @@ func NewLightClientStateProvider(
if err != nil {
return nil, err
}
return &lightClientStateProvider{
return &stateProviderRPC{
lc: lc,
version: version,
initialHeight: initialHeight,
providers: providerRemotes,
}, nil
}
// NewLightClientStateProviderFromDispatcher creates a light client state
// provider but uses a p2p-connected dispatcher instead of RPC endpoints
func NewLightClientStateProviderFromDispatcher(
ctx context.Context,
chainID string,
version sm.Version,
initialHeight int64,
dispatcher *dispatcher,
trustOptions light.TrustOptions,
logger log.Logger,
) (StateProvider, error) {
providers := dispatcher.Providers(chainID, 30*time.Second)
if len(providers) < 2 {
return nil, fmt.Errorf("at least 2 peers are required, got %d", len(providers))
}
providersMap := make(map[lightprovider.Provider]string)
for _, p := range providers {
providersMap[p] = p.(*blockProvider).String()
}
lc, err := light.NewClient(ctx, chainID, trustOptions, providers[0], providers[1:],
lightdb.New(dbm.NewMemDB()), light.Logger(logger))
if err != nil {
return nil, err
}
return &lightClientStateProvider{
lc: lc,
version: version,
initialHeight: initialHeight,
providers: providersMap,
}, nil
}
// AppHash implements StateProvider.
func (s *lightClientStateProvider) AppHash(ctx context.Context, height uint64) ([]byte, error) {
// AppHash implements part of StateProvider. It uses the light client to verify the
// light blocks at heights h+1 and h+2 and, if verification succeeds, reports the app
// hash from the block at height h+1, which corresponds to the state at height h.
func (s *stateProviderRPC) AppHash(ctx context.Context, height uint64) ([]byte, error) {
s.Lock()
defer s.Unlock()
@@ -128,27 +97,19 @@ func (s *lightClientStateProvider) AppHash(ctx context.Context, height uint64) (
if err != nil {
return nil, err
}
// We also try to fetch the blocks at height H and H+2, since we need these
// We also try to fetch the block at H+2, since we need it
// when building the state while restoring the snapshot. This avoids the race
// condition where we try to restore a snapshot before H+2 exists.
//
// FIXME This is a hack, since we can't add new methods to the interface without
// breaking it. We should instead have a Has(ctx, height) method which checks
// that the state provider has access to the necessary data for the height.
// We piggyback on AppHash() since it's called when adding snapshots to the pool.
_, err = s.lc.VerifyLightBlockAtHeight(ctx, int64(height+2), time.Now())
if err != nil {
return nil, err
}
_, err = s.lc.VerifyLightBlockAtHeight(ctx, int64(height), time.Now())
if err != nil {
return nil, err
}
return header.AppHash, nil
}
// Commit implements StateProvider.
func (s *lightClientStateProvider) Commit(ctx context.Context, height uint64) (*types.Commit, error) {
func (s *stateProviderRPC) Commit(ctx context.Context, height uint64) (*types.Commit, error) {
s.Lock()
defer s.Unlock()
header, err := s.lc.VerifyLightBlockAtHeight(ctx, int64(height), time.Now())
@@ -159,13 +120,12 @@ func (s *lightClientStateProvider) Commit(ctx context.Context, height uint64) (*
}
// State implements StateProvider.
func (s *lightClientStateProvider) State(ctx context.Context, height uint64) (sm.State, error) {
func (s *stateProviderRPC) State(ctx context.Context, height uint64) (sm.State, error) {
s.Lock()
defer s.Unlock()
state := sm.State{
ChainID: s.lc.ChainID(),
Version: s.version,
InitialHeight: s.initialHeight,
}
if state.InitialHeight == 0 {
@@ -193,6 +153,10 @@ func (s *lightClientStateProvider) State(ctx context.Context, height uint64) (sm
return sm.State{}, err
}
state.Version = sm.Version{
Consensus: currentLightBlock.Version,
Software: version.TMVersion,
}
state.LastBlockHeight = lastLightBlock.Height
state.LastBlockTime = lastLightBlock.Time
state.LastBlockID = lastLightBlock.Commit.BlockID
@@ -229,9 +193,188 @@ func rpcClient(server string) (*rpchttp.HTTP, error) {
if !strings.Contains(server, "://") {
server = "http://" + server
}
c, err := rpchttp.New(server)
return rpchttp.New(server)
}
type stateProviderP2P struct {
tmsync.Mutex // light.Client is not concurrency-safe
lc *light.Client
initialHeight int64
paramsSendCh chan<- p2p.Envelope
paramsRecvCh chan types.ConsensusParams
}
// NewP2PStateProvider creates a light client state
// provider but uses a dispatcher connected to the P2P layer
func NewP2PStateProvider(
ctx context.Context,
chainID string,
initialHeight int64,
providers []lightprovider.Provider,
trustOptions light.TrustOptions,
paramsSendCh chan<- p2p.Envelope,
logger log.Logger,
) (StateProvider, error) {
if len(providers) < 2 {
return nil, fmt.Errorf("at least 2 peers are required, got %d", len(providers))
}
lc, err := light.NewClient(ctx, chainID, trustOptions, providers[0], providers[1:],
lightdb.New(dbm.NewMemDB()), light.Logger(logger))
if err != nil {
return nil, err
}
return c, nil
return &stateProviderP2P{
lc: lc,
initialHeight: initialHeight,
paramsSendCh: paramsSendCh,
paramsRecvCh: make(chan types.ConsensusParams),
}, nil
}
// AppHash implements StateProvider.
func (s *stateProviderP2P) AppHash(ctx context.Context, height uint64) ([]byte, error) {
s.Lock()
defer s.Unlock()
// We have to fetch the next height, which contains the app hash for the previous height.
header, err := s.lc.VerifyLightBlockAtHeight(ctx, int64(height+1), time.Now())
if err != nil {
return nil, err
}
// We also try to fetch the block at H+2, since we need it
// when building the state while restoring the snapshot. This avoids the race
// condition where we try to restore a snapshot before H+2 exists.
_, err = s.lc.VerifyLightBlockAtHeight(ctx, int64(height+2), time.Now())
if err != nil {
return nil, err
}
return header.AppHash, nil
}
// Commit implements StateProvider.
func (s *stateProviderP2P) Commit(ctx context.Context, height uint64) (*types.Commit, error) {
s.Lock()
defer s.Unlock()
header, err := s.lc.VerifyLightBlockAtHeight(ctx, int64(height), time.Now())
if err != nil {
return nil, err
}
return header.Commit, nil
}
// State implements StateProvider.
func (s *stateProviderP2P) State(ctx context.Context, height uint64) (sm.State, error) {
s.Lock()
defer s.Unlock()
state := sm.State{
ChainID: s.lc.ChainID(),
InitialHeight: s.initialHeight,
}
if state.InitialHeight == 0 {
state.InitialHeight = 1
}
// The snapshot height maps onto the state heights as follows:
//
// height: last block, i.e. the snapshotted height
// height+1: current block, i.e. the first block we'll process after the snapshot
// height+2: next block, i.e. the second block after the snapshot
//
// We need to fetch the NextValidators from height+2 because if the application changed
// the validator set at the snapshot height then this only takes effect at height+2.
lastLightBlock, err := s.lc.VerifyLightBlockAtHeight(ctx, int64(height), time.Now())
if err != nil {
return sm.State{}, err
}
currentLightBlock, err := s.lc.VerifyLightBlockAtHeight(ctx, int64(height+1), time.Now())
if err != nil {
return sm.State{}, err
}
nextLightBlock, err := s.lc.VerifyLightBlockAtHeight(ctx, int64(height+2), time.Now())
if err != nil {
return sm.State{}, err
}
state.Version = sm.Version{
Consensus: currentLightBlock.Version,
Software: version.TMVersion,
}
state.LastBlockHeight = lastLightBlock.Height
state.LastBlockTime = lastLightBlock.Time
state.LastBlockID = lastLightBlock.Commit.BlockID
state.AppHash = currentLightBlock.AppHash
state.LastResultsHash = currentLightBlock.LastResultsHash
state.LastValidators = lastLightBlock.ValidatorSet
state.Validators = currentLightBlock.ValidatorSet
state.NextValidators = nextLightBlock.ValidatorSet
state.LastHeightValidatorsChanged = nextLightBlock.Height
// We'll also need to fetch consensus params via P2P.
state.ConsensusParams, err = s.consensusParams(ctx, currentLightBlock.Height)
if err != nil {
return sm.State{}, err
}
// validate the consensus params
if !bytes.Equal(nextLightBlock.ConsensusHash, state.ConsensusParams.HashConsensusParams()) {
return sm.State{}, fmt.Errorf("consensus params hash mismatch at height %d. Expected %v, got %v",
currentLightBlock.Height, nextLightBlock.ConsensusHash, state.ConsensusParams.HashConsensusParams())
}
// set the last height changed to the current height
state.LastHeightConsensusParamsChanged = currentLightBlock.Height
return state, nil
}
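To make the height mapping above concrete, a worked example with a hypothetical snapshot height of 100:

// block 100 -> state.LastBlockHeight, state.LastBlockTime, state.LastBlockID,
//              state.LastValidators
// block 101 -> state.AppHash, state.LastResultsHash, state.Validators
// block 102 -> state.NextValidators; its ConsensusHash is also checked against
//              the consensus params fetched over P2P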
// addProvider dynamically adds a peer as a new witness. A limit of 6 providers is kept as a
// heuristic. Too many overburden the network and too few compromise the second layer of security.
func (s *stateProviderP2P) addProvider(p lightprovider.Provider) {
if len(s.lc.Witnesses()) < 6 {
s.lc.AddProvider(p)
}
}
// consensusParams sends out a request for consensus params, blocking until one is returned.
// If it fails to get a valid set of consensus params from any of the providers, it returns an error.
func (s *stateProviderP2P) consensusParams(ctx context.Context, height int64) (types.ConsensusParams, error) {
for _, provider := range s.lc.Witnesses() {
p, ok := provider.(*BlockProvider)
if !ok {
panic("expected p2p state provider to use p2p block providers")
}
// extract the nodeID of the provider
peer, err := types.NewNodeID(p.String())
if err != nil {
return types.ConsensusParams{}, fmt.Errorf("invalid provider (%s) node id: %w", p.String(), err)
}
select {
case s.paramsSendCh <- p2p.Envelope{
To: peer,
Message: &ssproto.ParamsRequest{
Height: uint64(height),
},
}:
case <-ctx.Done():
return types.ConsensusParams{}, ctx.Err()
}
select {
// if we get no response from this provider we move on to the next one
case <-time.After(consensusParamsResponseTimeout):
continue
case <-ctx.Done():
return types.ConsensusParams{}, ctx.Err()
case params, ok := <-s.paramsRecvCh:
if !ok {
return types.ConsensusParams{}, errors.New("params channel closed")
}
return params, nil
}
}
return types.ConsensusParams{}, errors.New("unable to fetch consensus params from connected providers")
}
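The receiving half of this request/response pairing is not shown in this hunk. A minimal sketch of how the reactor side might push a decoded ParamsResponse back to the provider (the handler name and the non-blocking send are assumptions, not part of this changeset):

func handleParamsResponse(sp *stateProviderP2P, resp *ssproto.ParamsResponse) error {
	// Convert the proto params into the domain type the provider expects.
	params := types.ConsensusParamsFromProto(resp.ConsensusParams)
	select {
	case sp.paramsRecvCh <- params:
		return nil
	default:
		// No consensusParams call is currently blocked waiting for a response.
		return errors.New("no state provider waiting for consensus params")
	}
}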


@@ -12,6 +12,7 @@ import (
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/internal/p2p"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/light"
ssproto "github.com/tendermint/tendermint/proto/tendermint/statesync"
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
@@ -40,14 +41,11 @@ var (
errRejectSender = errors.New("snapshot sender was rejected")
// errVerifyFailed is returned by Sync() when app hash or last height
// verification fails.
errVerifyFailed = errors.New("verification failed")
errVerifyFailed = errors.New("verification with app failed")
// errTimeout is returned by Sync() when we've waited too long to receive a chunk.
errTimeout = errors.New("timed out waiting for chunk")
// errNoSnapshots is returned by SyncAny() if no snapshots are found and discovery is disabled.
errNoSnapshots = errors.New("no suitable snapshots found")
// errStateCommitTimeout is returned by Sync() when the timeout for retrieving
// tendermint state or the commit is exceeded
errStateCommitTimeout = errors.New("timed out trying to retrieve state and commit")
)
// syncer runs a state sync against an ABCI app. Use either SyncAny() to automatically attempt to
@@ -84,7 +82,7 @@ func newSyncer(
stateProvider: stateProvider,
conn: conn,
connQuery: connQuery,
snapshots: newSnapshotPool(stateProvider),
snapshots: newSnapshotPool(),
snapshotCh: snapshotCh,
chunkCh: chunkCh,
tempDir: tempDir,
@@ -153,7 +151,6 @@ func (s *syncer) SyncAny(
discoveryTime time.Duration,
requestSnapshots func(),
) (sm.State, *types.Commit, error) {
if discoveryTime != 0 && discoveryTime < minimumDiscoveryTime {
discoveryTime = minimumDiscoveryTime
}
@@ -181,7 +178,6 @@ func (s *syncer) SyncAny(
if discoveryTime == 0 {
return sm.State{}, nil, errNoSnapshots
}
requestSnapshots()
s.logger.Info(fmt.Sprintf("Discovering snapshots for %v", discoveryTime))
time.Sleep(discoveryTime)
continue
@@ -230,10 +226,6 @@ func (s *syncer) SyncAny(
s.logger.Info("Snapshot sender rejected", "peer", peer)
}
case errors.Is(err, errStateCommitTimeout):
s.logger.Info("Timed out retrieving state and commit, rejecting and retrying...", "height", snapshot.Height)
s.snapshots.Reject(snapshot)
default:
return sm.State{}, nil, fmt.Errorf("snapshot restoration failed: %w", err)
}
@@ -264,8 +256,29 @@ func (s *syncer) Sync(ctx context.Context, snapshot *snapshot, chunks *chunkQueu
s.mtx.Unlock()
}()
hctx, hcancel := context.WithTimeout(ctx, 30*time.Second)
defer hcancel()
// Fetch the app hash corresponding to the snapshot
appHash, err := s.stateProvider.AppHash(hctx, snapshot.Height)
if err != nil {
// check if the main context was triggered
if ctx.Err() != nil {
return sm.State{}, nil, ctx.Err()
}
// catch the case where all the light client providers have been exhausted
if err == light.ErrNoWitnesses {
return sm.State{}, nil,
fmt.Errorf("failed to get app hash at height %d. No witnesses remaining", snapshot.Height)
}
s.logger.Info("failed to get and verify tendermint state. Dropping snapshot and trying again",
"err", err, "height", snapshot.Height)
return sm.State{}, nil, errRejectSnapshot
}
snapshot.trustedAppHash = appHash
// Offer snapshot to ABCI app.
err := s.offerSnapshot(ctx, snapshot)
err = s.offerSnapshot(ctx, snapshot)
if err != nil {
return sm.State{}, nil, err
}
@@ -277,27 +290,37 @@ func (s *syncer) Sync(ctx context.Context, snapshot *snapshot, chunks *chunkQueu
go s.fetchChunks(fetchCtx, snapshot, chunks)
}
pctx, pcancel := context.WithTimeout(ctx, 30*time.Second)
pctx, pcancel := context.WithTimeout(ctx, 1*time.Minute)
defer pcancel()
// Optimistically build new state, so we don't discover any light client failures at the end.
state, err := s.stateProvider.State(pctx, snapshot.Height)
if err != nil {
// check if the provider context exceeded the 10 second deadline
if err == context.DeadlineExceeded && ctx.Err() == nil {
return sm.State{}, nil, errStateCommitTimeout
// check if the main context was triggered
if ctx.Err() != nil {
return sm.State{}, nil, ctx.Err()
}
return sm.State{}, nil, fmt.Errorf("failed to build new state: %w", err)
if err == light.ErrNoWitnesses {
return sm.State{}, nil,
fmt.Errorf("failed to get tendermint state at height %d. No witnesses remaining", snapshot.Height)
}
s.logger.Info("failed to get and verify tendermint state. Dropping snapshot and trying again",
"err", err, "height", snapshot.Height)
return sm.State{}, nil, errRejectSnapshot
}
commit, err := s.stateProvider.Commit(pctx, snapshot.Height)
if err != nil {
// check if the provider context exceeded the 10 second deadline
if err == context.DeadlineExceeded && ctx.Err() == nil {
return sm.State{}, nil, errStateCommitTimeout
if ctx.Err() != nil {
return sm.State{}, nil, ctx.Err()
}
return sm.State{}, nil, fmt.Errorf("failed to fetch commit: %w", err)
if err == light.ErrNoWitnesses {
return sm.State{}, nil,
fmt.Errorf("failed to get commit at height %d. No witnesses remaining", snapshot.Height)
}
s.logger.Info("failed to get and verify commit. Dropping snapshot and trying again",
"err", err, "height", snapshot.Height)
return sm.State{}, nil, errRejectSnapshot
}
// Restore snapshot


@@ -0,0 +1,6 @@
/*
Package factory provides generation code for common structs in Tendermint.
It is used primarily for testing internal components such as statesync,
consensus, and blocksync.
*/
package factory


@@ -12,3 +12,7 @@ func TestMakeHeader(t *testing.T) {
_, err := MakeHeader(&types.Header{})
assert.NoError(t, err)
}
func TestRandomNodeID(t *testing.T) {
assert.NotPanics(t, func() { RandomNodeID() })
}


@@ -0,0 +1,27 @@
package factory
import (
"encoding/hex"
"strings"
"github.com/tendermint/tendermint/libs/rand"
"github.com/tendermint/tendermint/types"
)
// NodeID returns a valid NodeID derived from the given string
func NodeID(str string) types.NodeID {
id, err := types.NewNodeID(strings.Repeat(str, 2*types.NodeIDByteLength))
if err != nil {
panic(err)
}
return id
}
// RandomNodeID returns a randomly generated valid NodeID
func RandomNodeID() types.NodeID {
id, err := types.NewNodeID(hex.EncodeToString(rand.Bytes(types.NodeIDByteLength)))
if err != nil {
panic(err)
}
return id
}
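Note that NodeID repeats its whole argument 2*types.NodeIDByteLength times, so only a single hex character yields a valid 40-character ID; longer input produces an over-length string and panics inside NewNodeID. Illustrative usage:

id := factory.NodeID("a")     // "aaa...a" (40 hex chars), deterministic across runs
rnd := factory.RandomNodeID() // hex encoding of NodeIDByteLength random bytes, new each call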


@@ -1,7 +1,6 @@
package time
import (
"sort"
"time"
)
@@ -16,43 +15,3 @@ func Now() time.Time {
func Canonical(t time.Time) time.Time {
return t.Round(0).UTC()
}
// WeightedTime for computing a median.
type WeightedTime struct {
Time time.Time
Weight int64
}
// NewWeightedTime with time and weight.
func NewWeightedTime(time time.Time, weight int64) *WeightedTime {
return &WeightedTime{
Time: time,
Weight: weight,
}
}
// WeightedMedian computes weighted median time for a given array of WeightedTime and the total voting power.
func WeightedMedian(weightedTimes []*WeightedTime, totalVotingPower int64) (res time.Time) {
median := totalVotingPower / 2
sort.Slice(weightedTimes, func(i, j int) bool {
if weightedTimes[i] == nil {
return false
}
if weightedTimes[j] == nil {
return true
}
return weightedTimes[i].Time.UnixNano() < weightedTimes[j].Time.UnixNano()
})
for _, weightedTime := range weightedTimes {
if weightedTime != nil {
if median <= weightedTime.Weight {
res = weightedTime.Time
break
}
median -= weightedTime.Weight
}
}
return
}


@@ -52,6 +52,8 @@ const (
// 10s is sufficient for most networks.
defaultMaxBlockLag = 10 * time.Second
defaultProviderTimeout = 10 * time.Second
)
// Option sets a parameter for the light client.
@@ -61,9 +63,7 @@ type Option func(*Client)
// check the blocks (every block, in ascending height order). Note this is
// much slower than SkippingVerification, albeit more secure.
func SequentialVerification() Option {
return func(c *Client) {
c.verificationMode = sequential
}
return func(c *Client) { c.verificationMode = sequential }
}
// SkippingVerification option configures the light client to skip blocks as
@@ -87,24 +87,18 @@ func SkippingVerification(trustLevel tmmath.Fraction) Option {
// h light blocks will be removed from the store.
// Default: 1000. A pruning size of 0 will not prune the light client at all.
func PruningSize(h uint16) Option {
return func(c *Client) {
c.pruningSize = h
}
return func(c *Client) { c.pruningSize = h }
}
// Logger option can be used to set a logger for the client.
func Logger(l log.Logger) Option {
return func(c *Client) {
c.logger = l
}
return func(c *Client) { c.logger = l }
}
// MaxClockDrift defines how much a new header's time can drift into
// the future relative to the light client's local time. Default: 10s.
func MaxClockDrift(d time.Duration) Option {
return func(c *Client) {
c.maxClockDrift = d
}
return func(c *Client) { c.maxClockDrift = d }
}
// MaxBlockLag represents the maximum time difference between the realtime
@@ -116,9 +110,13 @@ func MaxClockDrift(d time.Duration) Option {
// was 12:00. Then the lag here is 5 minutes.
// Default: 10s
func MaxBlockLag(d time.Duration) Option {
return func(c *Client) {
c.maxBlockLag = d
}
return func(c *Client) { c.maxBlockLag = d }
}
// ProviderTimeout is the maximum time that the light client will wait for a
// provider to respond with a light block.
func ProviderTimeout(d time.Duration) Option {
return func(c *Client) { c.providerTimeout = d }
}
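Illustrative use of the new option, mirroring the NewClient call shape used elsewhere in this changeset (the 20s value is arbitrary):

c, err := light.NewClient(ctx, chainID, trustOptions, providers[0], providers[1:],
	lightdb.New(dbm.NewMemDB()),
	light.Logger(logger),
	light.ProviderTimeout(20*time.Second), // overrides the 10s default
)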
// Client represents a light client, connected to a single chain, which gets
@@ -133,6 +131,7 @@ type Client struct {
trustLevel tmmath.Fraction
maxClockDrift time.Duration
maxBlockLag time.Duration
providerTimeout time.Duration
// Mutex for locking during changes of the light clients providers
providerMutex tmsync.Mutex
@@ -197,12 +196,13 @@ func NewClient(
chainID: chainID,
trustingPeriod: trustOptions.Period,
verificationMode: skipping,
trustLevel: DefaultTrustLevel,
maxClockDrift: defaultMaxClockDrift,
maxBlockLag: defaultMaxBlockLag,
primary: primary,
witnesses: witnesses,
trustedStore: trustedStore,
trustLevel: DefaultTrustLevel,
maxClockDrift: defaultMaxClockDrift,
maxBlockLag: defaultMaxBlockLag,
providerTimeout: defaultProviderTimeout,
pruningSize: defaultPruningSize,
logger: log.NewNopLogger(),
}
@@ -693,7 +693,9 @@ func (c *Client) verifySkipping(
if depth == len(blockCache)-1 {
// schedule what the next height we need to fetch is
pivotHeight := c.schedule(verifiedBlock.Height, blockCache[depth].Height)
interimBlock, providerErr := source.LightBlock(ctx, pivotHeight)
subCtx, cancel := context.WithTimeout(ctx, c.providerTimeout)
defer cancel()
interimBlock, providerErr := c.getLightBlock(subCtx, source, pivotHeight)
if providerErr != nil {
return nil, ErrVerificationFailed{From: verifiedBlock.Height, To: pivotHeight, Reason: providerErr}
}
@@ -930,7 +932,7 @@ func (c *Client) backwards(
// any other error, the primary is permanently dropped and is replaced by a witness.
func (c *Client) lightBlockFromPrimary(ctx context.Context, height int64) (*types.LightBlock, error) {
c.providerMutex.Lock()
l, err := c.primary.LightBlock(ctx, height)
l, err := c.getLightBlock(ctx, c.primary, height)
c.providerMutex.Unlock()
switch err {
@@ -957,6 +959,16 @@ func (c *Client) lightBlockFromPrimary(ctx context.Context, height int64) (*type
}
}
func (c *Client) getLightBlock(ctx context.Context, p provider.Provider, height int64) (*types.LightBlock, error) {
subCtx, cancel := context.WithTimeout(ctx, c.providerTimeout)
defer cancel()
l, err := p.LightBlock(subCtx, height)
if err == context.DeadlineExceeded || ctx.Err() != nil {
return nil, provider.ErrNoResponse
}
return l, err
}
// NOTE: requires a providerMutex lock
func (c *Client) removeWitnesses(indexes []int) error {
// check that we will still have witnesses remaining
@@ -989,7 +1001,7 @@ func (c *Client) findNewPrimary(ctx context.Context, height int64, remove bool)
c.providerMutex.Lock()
defer c.providerMutex.Unlock()
if len(c.witnesses) <= 1 {
if len(c.witnesses) < 1 {
return nil, ErrNoWitnesses
}
@@ -1001,7 +1013,7 @@ func (c *Client) findNewPrimary(ctx context.Context, height int64, remove bool)
)
// send out a light block request to all witnesses
subctx, cancel := context.WithCancel(ctx)
subctx, cancel := context.WithTimeout(ctx, c.providerTimeout)
defer cancel()
for index := range c.witnesses {
wg.Add(1)


@@ -724,51 +724,32 @@ func TestClient_BackwardsVerification(t *testing.T) {
}
{
testCases := []struct {
headers map[int64]*types.SignedHeader
vals map[int64]*types.ValidatorSet
}{
{
// 7) provides incorrect height
headers: map[int64]*types.SignedHeader{
2: keys.GenSignedHeader(chainID, 1, bTime.Add(30*time.Minute), nil, vals, vals,
hash("app_hash"), hash("cons_hash"), hash("results_hash"), 0, len(keys)),
3: h3,
},
vals: valSet,
},
{
// 8) provides incorrect hash
headers: map[int64]*types.SignedHeader{
2: keys.GenSignedHeader(chainID, 2, bTime.Add(30*time.Minute), nil, vals, vals,
hash("app_hash2"), hash("cons_hash23"), hash("results_hash30"), 0, len(keys)),
3: h3,
},
vals: valSet,
},
// 8) provides incorrect hash
headers := map[int64]*types.SignedHeader{
2: keys.GenSignedHeader(chainID, 2, bTime.Add(30*time.Minute), nil, vals, vals,
hash("app_hash2"), hash("cons_hash23"), hash("results_hash30"), 0, len(keys)),
3: h3,
}
vals := valSet
mockNode := mockNodeFromHeadersAndVals(headers, vals)
c, err := light.NewClient(
ctx,
chainID,
light.TrustOptions{
Period: 1 * time.Hour,
Height: 3,
Hash: h3.Hash(),
},
mockNode,
[]provider.Provider{mockNode},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
require.NoError(t, err)
for idx, tc := range testCases {
mockNode := mockNodeFromHeadersAndVals(tc.headers, tc.vals)
c, err := light.NewClient(
ctx,
chainID,
light.TrustOptions{
Period: 1 * time.Hour,
Height: 3,
Hash: h3.Hash(),
},
mockNode,
[]provider.Provider{mockNode},
dbs.New(dbm.NewMemDB()),
light.Logger(log.TestingLogger()),
)
require.NoError(t, err, idx)
_, err = c.VerifyLightBlockAtHeight(ctx, 2, bTime.Add(1*time.Hour).Add(1*time.Second))
assert.Error(t, err, idx)
mockNode.AssertExpectations(t)
}
_, err = c.VerifyLightBlockAtHeight(ctx, 2, bTime.Add(1*time.Hour).Add(1*time.Second))
assert.Error(t, err)
mockNode.AssertExpectations(t)
}
}


@@ -110,7 +110,7 @@ func (c *Client) detectDivergence(ctx context.Context, primaryTrace []*types.Lig
func (c *Client) compareNewHeaderWithWitness(ctx context.Context, errc chan error, h *types.SignedHeader,
witness provider.Provider, witnessIndex int) {
lightBlock, err := witness.LightBlock(ctx, h.Height)
lightBlock, err := c.getLightBlock(ctx, witness, h.Height)
switch err {
// no error means we move on to checking the hash of the two headers
case nil:
@@ -331,7 +331,7 @@ func (c *Client) examineConflictingHeaderAgainstTrace(
if traceBlock.Height == targetBlock.Height {
sourceBlock = targetBlock
} else {
sourceBlock, err = source.LightBlock(ctx, traceBlock.Height)
sourceBlock, err = c.getLightBlock(ctx, source, traceBlock.Height)
if err != nil {
return nil, nil, fmt.Errorf("failed to examine trace: %w", err)
}
@@ -379,7 +379,7 @@ func (c *Client) getTargetBlockOrLatest(
height int64,
witness provider.Provider,
) (bool, *types.LightBlock, error) {
lightBlock, err := witness.LightBlock(ctx, 0)
lightBlock, err := c.getLightBlock(ctx, witness, 0)
if err != nil {
return false, nil, err
}
@@ -394,7 +394,7 @@ func (c *Client) getTargetBlockOrLatest(
// the witness has caught up. We recursively call the function again. However in order
// to avoud a wild goose chase where the witness sends us one header below and one header
// above the height we set a timeout to the context
lightBlock, err := witness.LightBlock(ctx, height)
lightBlock, err := c.getLightBlock(ctx, witness, height)
return true, lightBlock, err
}


@@ -113,7 +113,7 @@ func (p *Proxy) listen() (net.Listener, *http.ServeMux, error) {
}
// 4) Start listening for new connections.
listener, err := rpcserver.Listen(p.Addr, p.Config)
listener, err := rpcserver.Listen(p.Addr, p.Config.MaxOpenConnections)
if err != nil {
return nil, mux, err
}


@@ -28,7 +28,6 @@ import (
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/libs/strings"
tmtime "github.com/tendermint/tendermint/libs/time"
"github.com/tendermint/tendermint/light"
"github.com/tendermint/tendermint/privval"
tmgrpc "github.com/tendermint/tendermint/privval/grpc"
"github.com/tendermint/tendermint/proxy"
@@ -221,7 +220,7 @@ func makeNode(config *cfg.Config,
// Determine whether we should do block sync. This must happen after the handshake, since the
// app may modify the validator set, specifying ourself as the only validator.
blockSync := config.FastSyncMode && !onlyValidatorIsUs(state, pubKey)
blockSync := config.BlockSync.Enable && !onlyValidatorIsUs(state, pubKey)
logNodeStartupInfo(state, pubKey, logger, consensusLogger, config.Mode)
@@ -319,15 +318,17 @@ func makeNode(config *cfg.Config,
stateSyncReactorShim = p2p.NewReactorShim(logger.With("module", "statesync"), "StateSyncShim", statesync.ChannelShims)
if config.P2P.DisableLegacy {
channels = makeChannelsFromShims(router, statesync.ChannelShims)
peerUpdates = peerManager.Subscribe()
} else {
if config.P2P.UseLegacy {
channels = getChannelsFromShim(stateSyncReactorShim)
peerUpdates = stateSyncReactorShim.PeerUpdates
} else {
channels = makeChannelsFromShims(router, statesync.ChannelShims)
peerUpdates = peerManager.Subscribe()
}
stateSyncReactor = statesync.NewReactor(
genDoc.ChainID,
genDoc.InitialHeight,
*config.StateSync,
stateSyncReactorShim.Logger,
proxyApp.Snapshot(),
@@ -335,6 +336,7 @@ func makeNode(config *cfg.Config,
channels[statesync.SnapshotChannel],
channels[statesync.ChunkChannel],
channels[statesync.LightBlockChannel],
channels[statesync.ParamsChannel],
peerUpdates,
stateStore,
blockStore,
@@ -373,13 +375,7 @@ func makeNode(config *cfg.Config,
pexCh := pex.ChannelDescriptor()
transport.AddChannelDescriptors([]*p2p.ChannelDescriptor{&pexCh})
if config.P2P.DisableLegacy {
addrBook = nil
pexReactor, err = createPEXReactorV2(config, logger, peerManager, router)
if err != nil {
return nil, err
}
} else {
if config.P2P.UseLegacy {
// setup Transport and Switch
sw = createSwitch(
config, transport, p2pMetrics, mpReactorShim, bcReactorForSwitch,
@@ -402,6 +398,12 @@ func makeNode(config *cfg.Config,
}
pexReactor = createPEXReactorAndAddToSwitch(addrBook, config, sw, logger)
} else {
addrBook = nil
pexReactor, err = createPEXReactorV2(config, logger, peerManager, router)
if err != nil {
return nil, err
}
}
if config.RPC.PprofListenAddress != "" {
@@ -441,25 +443,37 @@ func makeNode(config *cfg.Config,
ProxyAppQuery: proxyApp.Query(),
ProxyAppMempool: proxyApp.Mempool(),
StateStore: stateStore,
BlockStore: blockStore,
EvidencePool: evPool,
ConsensusState: csState,
StateStore: stateStore,
BlockStore: blockStore,
EvidencePool: evPool,
ConsensusState: csState,
ConsensusReactor: csReactor,
BlockSyncReactor: bcReactor.(cs.BlockSyncReactor),
P2PPeers: sw,
PeerManager: peerManager,
GenDoc: genDoc,
EventSinks: eventSinks,
ConsensusReactor: csReactor,
EventBus: eventBus,
Mempool: mp,
Logger: logger.With("module", "rpc"),
Config: *config.RPC,
GenDoc: genDoc,
EventSinks: eventSinks,
EventBus: eventBus,
Mempool: mp,
Logger: logger.With("module", "rpc"),
Config: *config.RPC,
},
}
// this is terrible, because typed nil interfaces are not ==
// nil, so this is just cleanup to avoid having a non-nil
// value in the RPC environment that has the semantic
// properties of nil.
if sw == nil {
node.rpcEnv.P2PPeers = nil
} else if peerManager == nil {
node.rpcEnv.PeerManager = nil
}
// end hack
node.rpcEnv.P2PTransport = node
node.BaseService = *service.NewBaseService(logger, "Node", node)
@@ -518,12 +532,8 @@ func makeSeedNode(config *cfg.Config,
// p2p stack is removed.
pexCh := pex.ChannelDescriptor()
transport.AddChannelDescriptors([]*p2p.ChannelDescriptor{&pexCh})
if config.P2P.DisableLegacy {
pexReactor, err = createPEXReactorV2(config, logger, peerManager, router)
if err != nil {
return nil, err
}
} else {
if config.P2P.UseLegacy {
sw = createSwitch(
config, transport, p2pMetrics, nil, nil,
nil, nil, nil, nil, nodeInfo, nodeKey, p2pLogger,
@@ -545,6 +555,11 @@ func makeSeedNode(config *cfg.Config,
}
pexReactor = createPEXReactorAndAddToSwitch(addrBook, config, sw, logger)
} else {
pexReactor, err = createPEXReactorV2(config, logger, peerManager, router)
if err != nil {
return nil, err
}
}
if config.RPC.PprofListenAddress != "" {
@@ -607,18 +622,16 @@ func (n *nodeImpl) OnStart() error {
}
n.isListening = true
n.Logger.Info("p2p service", "legacy_enabled", !n.config.P2P.DisableLegacy)
n.Logger.Info("p2p service", "legacy_enabled", n.config.P2P.UseLegacy)
if n.config.P2P.DisableLegacy {
if err = n.router.Start(); err != nil {
return err
}
} else {
if n.config.P2P.UseLegacy {
// Add private IDs to addrbook to block those peers being added
n.addrBook.AddPrivateIDs(strings.SplitAndTrimEmpty(n.config.P2P.PrivatePeerIDs, ",", " "))
if err = n.sw.Start(); err != nil {
return err
}
} else if err = n.router.Start(); err != nil {
return err
}
if n.config.Mode != cfg.ModeSeed {
@@ -649,19 +662,19 @@ func (n *nodeImpl) OnStart() error {
}
}
if n.config.P2P.DisableLegacy {
if err := n.pexReactor.Start(); err != nil {
return err
}
} else {
if n.config.P2P.UseLegacy {
// Always connect to persistent peers
err = n.sw.DialPeersAsync(strings.SplitAndTrimEmpty(n.config.P2P.PersistentPeers, ",", " "))
if err != nil {
return fmt.Errorf("could not dial peers from persistent-peers field: %w", err)
}
} else if err := n.pexReactor.Start(); err != nil {
return err
}
// Run state sync
// TODO: We shouldn't run state sync if we already have state that has a
// LastBlockHeight that is not InitialHeight
if n.stateSync {
bcR, ok := n.bcReactor.(cs.BlockSyncReactor)
if !ok {
@@ -674,17 +687,52 @@ func (n *nodeImpl) OnStart() error {
return fmt.Errorf("unable to derive state: %w", err)
}
ssc := n.config.StateSync
sp, err := constructStateProvider(ssc, state, n.Logger.With("module", "light"))
if err != nil {
return fmt.Errorf("failed to set up light client state provider: %w", err)
// TODO: we may want to move these events within the respective
// reactors.
// At the beginning of state sync we use the initial height as the event height
// because state sync doesn't have the concrete state height before the snapshot is fetched.
d := types.EventDataStateSyncStatus{Complete: false, Height: state.InitialHeight}
if err := n.eventBus.PublishEventStateSyncStatus(d); err != nil {
n.eventBus.Logger.Error("failed to emit the statesync start event", "err", err)
}
if err := startStateSync(n.stateSyncReactor, bcR, n.consensusReactor, sp,
ssc, n.config.FastSyncMode, state.InitialHeight, n.eventBus); err != nil {
return fmt.Errorf("failed to start state sync: %w", err)
}
// FIXME: We shouldn't allow state sync to silently error out without
// bubbling up the error and gracefully shutting down the rest of the node
go func() {
n.Logger.Info("starting state sync")
state, err := n.stateSyncReactor.Sync(context.TODO())
if err != nil {
n.Logger.Error("state sync failed", "err", err)
return
}
n.consensusReactor.SetStateSyncingMetrics(0)
d := types.EventDataStateSyncStatus{Complete: true, Height: state.LastBlockHeight}
if err := n.eventBus.PublishEventStateSyncStatus(d); err != nil {
n.eventBus.Logger.Error("failed to emit the statesync start event", "err", err)
}
// TODO: Some form of orchestrator is needed here between the state
// advancing reactors to be able to control which one of the three
// is running
if n.config.BlockSync.Enable {
// FIXME Very ugly to have these metrics bleed through here.
n.consensusReactor.SetBlockSyncingMetrics(1)
if err := bcR.SwitchToBlockSync(state); err != nil {
n.Logger.Error("failed to switch to block sync", "err", err)
return
}
d := types.EventDataBlockSyncStatus{Complete: false, Height: state.LastBlockHeight}
if err := n.eventBus.PublishEventBlockSyncStatus(d); err != nil {
n.eventBus.Logger.Error("failed to emit the block sync starting event", "err", err)
}
} else {
n.consensusReactor.SwitchToConsensus(state, true)
}
}()
}
return nil
@@ -737,14 +785,14 @@ func (n *nodeImpl) OnStop() {
n.Logger.Error("failed to stop the PEX v2 reactor", "err", err)
}
if n.config.P2P.DisableLegacy {
if err := n.router.Stop(); err != nil {
n.Logger.Error("failed to stop router", "err", err)
}
} else {
if n.config.P2P.UseLegacy {
if err := n.sw.Stop(); err != nil {
n.Logger.Error("failed to stop switch", "err", err)
}
} else {
if err := n.router.Stop(); err != nil {
n.Logger.Error("failed to stop router", "err", err)
}
}
if err := n.transport.Close(); err != nil {
@@ -825,7 +873,7 @@ func (n *nodeImpl) startRPC() ([]net.Listener, error) {
rpcserver.RegisterRPCFuncs(mux, routes, rpcLogger)
listener, err := rpcserver.Listen(
listenAddr,
config,
config.MaxOpenConnections,
)
if err != nil {
return nil, err
@@ -883,7 +931,7 @@ func (n *nodeImpl) startRPC() ([]net.Listener, error) {
if config.WriteTimeout <= n.config.RPC.TimeoutBroadcastTxCommit {
config.WriteTimeout = n.config.RPC.TimeoutBroadcastTxCommit + 1*time.Second
}
listener, err := rpcserver.Listen(grpcListenAddr, config)
listener, err := rpcserver.Listen(grpcListenAddr, config.MaxOpenConnections)
if err != nil {
return nil, err
}
@@ -969,67 +1017,6 @@ func (n *nodeImpl) NodeInfo() types.NodeInfo {
return n.nodeInfo
}
// startStateSync starts an asynchronous state sync process, then switches to block sync mode.
func startStateSync(
ssR statesync.SyncReactor,
bcR cs.BlockSyncReactor,
conR cs.ConsSyncReactor,
sp statesync.StateProvider,
config *cfg.StateSyncConfig,
blockSync bool,
stateInitHeight int64,
eb *types.EventBus,
) error {
stateSyncLogger := eb.Logger.With("module", "statesync")
stateSyncLogger.Info("starting state sync...")
// at the beginning of state sync we use the initial height as the event height
// because state sync doesn't have the concrete state height before the snapshot is fetched.
d := types.EventDataStateSyncStatus{Complete: false, Height: stateInitHeight}
if err := eb.PublishEventStateSyncStatus(d); err != nil {
stateSyncLogger.Error("failed to emit the statesync start event", "err", err)
}
go func() {
state, err := ssR.Sync(context.TODO(), sp, config.DiscoveryTime)
if err != nil {
stateSyncLogger.Error("state sync failed", "err", err)
return
}
if err := ssR.Backfill(state); err != nil {
stateSyncLogger.Error("backfill failed; node has insufficient history to verify all evidence;"+
" proceeding optimistically...", "err", err)
}
conR.SetStateSyncingMetrics(0)
d := types.EventDataStateSyncStatus{Complete: true, Height: state.LastBlockHeight}
if err := eb.PublishEventStateSyncStatus(d); err != nil {
stateSyncLogger.Error("failed to emit the statesync start event", "err", err)
}
if blockSync {
// FIXME Very ugly to have these metrics bleed through here.
conR.SetBlockSyncingMetrics(1)
if err := bcR.SwitchToBlockSync(state); err != nil {
stateSyncLogger.Error("failed to switch to block sync", "err", err)
return
}
d := types.EventDataBlockSyncStatus{Complete: false, Height: state.LastBlockHeight}
if err := eb.PublishEventBlockSyncStatus(d); err != nil {
stateSyncLogger.Error("failed to emit the block sync starting event", "err", err)
}
} else {
conR.SwitchToConsensus(state, true)
}
}()
return nil
}
// genesisDocProvider returns a GenesisDoc.
// It allows the GenesisDoc to be pulled from sources other than the
// filesystem, for instance from a distributed key-value store cluster.
@@ -1212,24 +1199,3 @@ func getChannelsFromShim(reactorShim *p2p.ReactorShim) map[p2p.ChannelID]*p2p.Ch
return channels
}
func constructStateProvider(
ssc *cfg.StateSyncConfig,
state sm.State,
logger log.Logger,
) (statesync.StateProvider, error) {
ctx, cancel := context.WithTimeout(context.TODO(), 10*time.Second)
defer cancel()
to := light.TrustOptions{
Period: ssc.TrustPeriod,
Height: ssc.TrustHeight,
Hash: ssc.TrustHashBytes(),
}
return statesync.NewLightClientStateProvider(
ctx,
state.ChainID, state.Version, state.InitialHeight,
ssc.RPCServers, to, logger,
)
}


@@ -21,16 +21,12 @@ import (
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/ed25519"
"github.com/tendermint/tendermint/crypto/tmhash"
consmocks "github.com/tendermint/tendermint/internal/consensus/mocks"
ssmocks "github.com/tendermint/tendermint/internal/statesync/mocks"
"github.com/tendermint/tendermint/internal/evidence"
"github.com/tendermint/tendermint/internal/mempool"
mempoolv0 "github.com/tendermint/tendermint/internal/mempool/v0"
statesync "github.com/tendermint/tendermint/internal/statesync"
"github.com/tendermint/tendermint/internal/test/factory"
"github.com/tendermint/tendermint/libs/log"
tmpubsub "github.com/tendermint/tendermint/libs/pubsub"
tmrand "github.com/tendermint/tendermint/libs/rand"
tmtime "github.com/tendermint/tendermint/libs/time"
"github.com/tendermint/tendermint/privval"
@@ -43,6 +39,7 @@ import (
func TestNodeStartStop(t *testing.T) {
config := cfg.ResetTestRoot("node_node_test")
defer os.RemoveAll(config.RootDir)
// create & start node
@@ -53,8 +50,6 @@ func TestNodeStartStop(t *testing.T) {
n, ok := ns.(*nodeImpl)
require.True(t, ok)
t.Logf("Started node %v", n.sw.NodeInfo())
// wait for the node to produce a block
blocksSub, err := n.EventBus().Subscribe(context.Background(), "node_test", types.EventQueryNewBlock)
require.NoError(t, err)
@@ -670,65 +665,3 @@ func loadStatefromGenesis(t *testing.T) sm.State {
return state
}
func TestNodeStartStateSync(t *testing.T) {
mockSSR := &statesync.MockSyncReactor{}
mockFSR := &consmocks.BlockSyncReactor{}
mockCSR := &consmocks.ConsSyncReactor{}
mockSP := &ssmocks.StateProvider{}
state := loadStatefromGenesis(t)
config := cfg.ResetTestRoot("load_state_from_genesis")
eventBus, err := createAndStartEventBus(log.TestingLogger())
defer func() {
err := eventBus.Stop()
require.NoError(t, err)
}()
require.NoError(t, err)
require.NotNil(t, eventBus)
sub, err := eventBus.Subscribe(context.Background(), "test-client", types.EventQueryStateSyncStatus, 10)
require.NoError(t, err)
require.NotNil(t, sub)
cfgSS := config.StateSync
mockSSR.On("Sync", context.TODO(), mockSP, cfgSS.DiscoveryTime).Return(state, nil).
On("Backfill", state).Return(nil)
mockCSR.On("SetStateSyncingMetrics", float64(0)).Return().
On("SwitchToConsensus", state, true).Return()
require.NoError(t,
startStateSync(mockSSR, mockFSR, mockCSR, mockSP, config.StateSync, false, state.InitialHeight, eventBus))
for cnt := 0; cnt < 2; {
select {
case <-time.After(3 * time.Second):
t.Errorf("StateSyncStatus timeout")
case msg := <-sub.Out():
if cnt == 0 {
ensureStateSyncStatus(t, msg, false, state.InitialHeight)
cnt++
} else {
// the state height = 0 because we don't actually update the state in this test
ensureStateSyncStatus(t, msg, true, 0)
cnt++
}
}
}
mockSSR.AssertNumberOfCalls(t, "Sync", 1)
mockSSR.AssertNumberOfCalls(t, "Backfill", 1)
mockCSR.AssertNumberOfCalls(t, "SetStateSyncingMetrics", 1)
mockCSR.AssertNumberOfCalls(t, "SwitchToConsensus", 1)
}
func ensureStateSyncStatus(t *testing.T, msg tmpubsub.Message, complete bool, height int64) {
t.Helper()
status, ok := msg.Data().(types.EventDataStateSyncStatus)
require.True(t, ok)
require.Equal(t, complete, status.Complete)
require.Equal(t, height, status.Height)
}


@@ -8,7 +8,6 @@ import (
"math"
"net"
_ "net/http/pprof" // nolint: gosec // securely exposed on separate, optional port
"strings"
"time"
dbm "github.com/tendermint/tm-db"
@@ -33,9 +32,7 @@ import (
"github.com/tendermint/tendermint/proxy"
sm "github.com/tendermint/tendermint/state"
"github.com/tendermint/tendermint/state/indexer"
kv "github.com/tendermint/tendermint/state/indexer/sink/kv"
null "github.com/tendermint/tendermint/state/indexer/sink/null"
psql "github.com/tendermint/tendermint/state/indexer/sink/psql"
"github.com/tendermint/tendermint/state/indexer/sink"
"github.com/tendermint/tendermint/store"
"github.com/tendermint/tendermint/types"
"github.com/tendermint/tendermint/version"
@@ -78,56 +75,9 @@ func createAndStartIndexerService(
logger log.Logger,
chainID string,
) (*indexer.Service, []indexer.EventSink, error) {
eventSinks := []indexer.EventSink{}
// check for duplicated sinks
sinks := map[string]bool{}
for _, s := range config.TxIndex.Indexer {
sl := strings.ToLower(s)
if sinks[sl] {
return nil, nil, errors.New("found duplicated sinks, please check the tx-index section in the config.toml")
}
sinks[sl] = true
}
loop:
for k := range sinks {
switch k {
case string(indexer.NULL):
// When we see null in the config, the eventsinks will be reset with the
// nullEventSink.
eventSinks = []indexer.EventSink{null.NewEventSink()}
break loop
case string(indexer.KV):
store, err := dbProvider(&cfg.DBContext{ID: "tx_index", Config: config})
if err != nil {
return nil, nil, err
}
eventSinks = append(eventSinks, kv.NewEventSink(store))
case string(indexer.PSQL):
conn := config.TxIndex.PsqlConn
if conn == "" {
return nil, nil, errors.New("the psql connection settings cannot be empty")
}
es, _, err := psql.NewEventSink(conn, chainID)
if err != nil {
return nil, nil, err
}
eventSinks = append(eventSinks, es)
default:
return nil, nil, errors.New("unsupported event sink type")
}
}
if len(eventSinks) == 0 {
eventSinks = []indexer.EventSink{null.NewEventSink()}
eventSinks, err := sink.EventSinksFromConfig(config, dbProvider, chainID)
if err != nil {
return nil, nil, err
}
indexerService := indexer.NewIndexerService(eventSinks, eventBus)
@@ -216,12 +166,12 @@ func createMempoolReactor(
peerUpdates *p2p.PeerUpdates
)
if config.P2P.DisableLegacy {
channels = makeChannelsFromShims(router, channelShims)
peerUpdates = peerManager.Subscribe()
} else {
if config.P2P.UseLegacy {
channels = getChannelsFromShim(reactorShim)
peerUpdates = reactorShim.PeerUpdates
} else {
channels = makeChannelsFromShims(router, channelShims)
peerUpdates = peerManager.Subscribe()
}
switch config.Mempool.Version {
@@ -310,12 +260,12 @@ func createEvidenceReactor(
peerUpdates *p2p.PeerUpdates
)
if config.P2P.DisableLegacy {
channels = makeChannelsFromShims(router, evidence.ChannelShims)
peerUpdates = peerManager.Subscribe()
} else {
if config.P2P.UseLegacy {
channels = getChannelsFromShim(reactorShim)
peerUpdates = reactorShim.PeerUpdates
} else {
channels = makeChannelsFromShims(router, evidence.ChannelShims)
peerUpdates = peerManager.Subscribe()
}
evidenceReactor := evidence.NewReactor(
@@ -352,12 +302,12 @@ func createBlockchainReactor(
peerUpdates *p2p.PeerUpdates
)
if config.P2P.DisableLegacy {
channels = makeChannelsFromShims(router, bcv0.ChannelShims)
peerUpdates = peerManager.Subscribe()
} else {
if config.P2P.UseLegacy {
channels = getChannelsFromShim(reactorShim)
peerUpdates = reactorShim.PeerUpdates
} else {
channels = makeChannelsFromShims(router, bcv0.ChannelShims)
peerUpdates = peerManager.Subscribe()
}
reactor, err := bcv0.NewReactor(
@@ -416,12 +366,12 @@ func createConsensusReactor(
peerUpdates *p2p.PeerUpdates
)
if config.P2P.DisableLegacy {
channels = makeChannelsFromShims(router, cs.ChannelShims)
peerUpdates = peerManager.Subscribe()
} else {
if config.P2P.UseLegacy {
channels = getChannelsFromShim(reactorShim)
peerUpdates = reactorShim.PeerUpdates
} else {
channels = makeChannelsFromShims(router, cs.ChannelShims)
peerUpdates = peerManager.Subscribe()
}
reactor := cs.NewReactor(
@@ -756,6 +706,7 @@ func makeNodeInfo(
byte(statesync.SnapshotChannel),
byte(statesync.ChunkChannel),
byte(statesync.LightBlockChannel),
byte(statesync.ParamsChannel),
},
Moniker: config.Moniker,
Other: types.NodeInfoOther{


@@ -28,6 +28,12 @@ func (m *Message) Wrap(pb proto.Message) error {
case *LightBlockResponse:
m.Sum = &Message_LightBlockResponse{LightBlockResponse: msg}
case *ParamsRequest:
m.Sum = &Message_ParamsRequest{ParamsRequest: msg}
case *ParamsResponse:
m.Sum = &Message_ParamsResponse{ParamsResponse: msg}
default:
return fmt.Errorf("unknown message: %T", msg)
}
@@ -57,6 +63,12 @@ func (m *Message) Unwrap() (proto.Message, error) {
case *Message_LightBlockResponse:
return m.GetLightBlockResponse(), nil
case *Message_ParamsRequest:
return m.GetParamsRequest(), nil
case *Message_ParamsResponse:
return m.GetParamsResponse(), nil
default:
return nil, fmt.Errorf("unknown message: %T", msg)
}
@@ -106,6 +118,17 @@ func (m *Message) Validate() error {
// light block validation handled by the backfill process
case *Message_LightBlockResponse:
case *Message_ParamsRequest:
if m.GetParamsRequest().Height == 0 {
return errors.New("height cannot be 0")
}
case *Message_ParamsResponse:
resp := m.GetParamsResponse()
if resp.Height == 0 {
return errors.New("height cannot be 0")
}
default:
return fmt.Errorf("unknown message type: %T", msg)
}
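A minimal sketch of the round trip for the new params messages (the helper name is hypothetical):

func wrapParamsRequest(height uint64) (*Message, error) {
	msg := new(Message)
	if err := msg.Wrap(&ParamsRequest{Height: height}); err != nil {
		return nil, err
	}
	if err := msg.Validate(); err != nil { // a zero height is rejected
		return nil, err
	}
	return msg, nil // msg.Unwrap() yields the *ParamsRequest
}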


@@ -9,6 +9,7 @@ import (
ssproto "github.com/tendermint/tendermint/proto/tendermint/statesync"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/types"
)
func TestValidateMsg(t *testing.T) {
@@ -161,6 +162,35 @@ func TestStateSyncVectors(t *testing.T) {
},
"2214080110021803220c697427732061206368756e6b",
},
{
"LightBlockRequest",
&ssproto.LightBlockRequest{
Height: 100,
},
"2a020864",
},
{
"LightBlockResponse",
&ssproto.LightBlockResponse{
LightBlock: nil,
},
"3200",
},
{
"ParamsRequest",
&ssproto.ParamsRequest{
Height: 9001,
},
"3a0308a946",
},
{
"ParamsResponse",
&ssproto.ParamsResponse{
Height: 9001,
ConsensusParams: types.DefaultConsensusParams().ToProto(),
},
"423408a946122f0a10088080c00a10ffffffffffffffffff01120e08a08d0612040880c60a188080401a090a07656432353531392200",
},
}
for _, tc := range testCases {


@@ -5,6 +5,7 @@ package statesync
import (
fmt "fmt"
_ "github.com/gogo/protobuf/gogoproto"
proto "github.com/gogo/protobuf/proto"
types "github.com/tendermint/tendermint/proto/tendermint/types"
io "io"
@@ -31,6 +32,8 @@ type Message struct {
// *Message_ChunkResponse
// *Message_LightBlockRequest
// *Message_LightBlockResponse
// *Message_ParamsRequest
// *Message_ParamsResponse
Sum isMessage_Sum `protobuf_oneof:"sum"`
}
@@ -91,6 +94,12 @@ type Message_LightBlockRequest struct {
type Message_LightBlockResponse struct {
LightBlockResponse *LightBlockResponse `protobuf:"bytes,6,opt,name=light_block_response,json=lightBlockResponse,proto3,oneof" json:"light_block_response,omitempty"`
}
type Message_ParamsRequest struct {
ParamsRequest *ParamsRequest `protobuf:"bytes,7,opt,name=params_request,json=paramsRequest,proto3,oneof" json:"params_request,omitempty"`
}
type Message_ParamsResponse struct {
ParamsResponse *ParamsResponse `protobuf:"bytes,8,opt,name=params_response,json=paramsResponse,proto3,oneof" json:"params_response,omitempty"`
}
func (*Message_SnapshotsRequest) isMessage_Sum() {}
func (*Message_SnapshotsResponse) isMessage_Sum() {}
@@ -98,6 +107,8 @@ func (*Message_ChunkRequest) isMessage_Sum() {}
func (*Message_ChunkResponse) isMessage_Sum() {}
func (*Message_LightBlockRequest) isMessage_Sum() {}
func (*Message_LightBlockResponse) isMessage_Sum() {}
func (*Message_ParamsRequest) isMessage_Sum() {}
func (*Message_ParamsResponse) isMessage_Sum() {}
func (m *Message) GetSum() isMessage_Sum {
if m != nil {
@@ -148,6 +159,20 @@ func (m *Message) GetLightBlockResponse() *LightBlockResponse {
return nil
}
func (m *Message) GetParamsRequest() *ParamsRequest {
if x, ok := m.GetSum().(*Message_ParamsRequest); ok {
return x.ParamsRequest
}
return nil
}
func (m *Message) GetParamsResponse() *ParamsResponse {
if x, ok := m.GetSum().(*Message_ParamsResponse); ok {
return x.ParamsResponse
}
return nil
}
// XXX_OneofWrappers is for the internal use of the proto package.
func (*Message) XXX_OneofWrappers() []interface{} {
return []interface{}{
@@ -157,6 +182,8 @@ func (*Message) XXX_OneofWrappers() []interface{} {
(*Message_ChunkResponse)(nil),
(*Message_LightBlockRequest)(nil),
(*Message_LightBlockResponse)(nil),
(*Message_ParamsRequest)(nil),
(*Message_ParamsResponse)(nil),
}
}
@@ -496,6 +523,102 @@ func (m *LightBlockResponse) GetLightBlock() *types.LightBlock {
return nil
}
type ParamsRequest struct {
Height uint64 `protobuf:"varint,1,opt,name=height,proto3" json:"height,omitempty"`
}
func (m *ParamsRequest) Reset() { *m = ParamsRequest{} }
func (m *ParamsRequest) String() string { return proto.CompactTextString(m) }
func (*ParamsRequest) ProtoMessage() {}
func (*ParamsRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_a1c2869546ca7914, []int{7}
}
func (m *ParamsRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *ParamsRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_ParamsRequest.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *ParamsRequest) XXX_Merge(src proto.Message) {
xxx_messageInfo_ParamsRequest.Merge(m, src)
}
func (m *ParamsRequest) XXX_Size() int {
return m.Size()
}
func (m *ParamsRequest) XXX_DiscardUnknown() {
xxx_messageInfo_ParamsRequest.DiscardUnknown(m)
}
var xxx_messageInfo_ParamsRequest proto.InternalMessageInfo
func (m *ParamsRequest) GetHeight() uint64 {
if m != nil {
return m.Height
}
return 0
}
type ParamsResponse struct {
Height uint64 `protobuf:"varint,1,opt,name=height,proto3" json:"height,omitempty"`
ConsensusParams types.ConsensusParams `protobuf:"bytes,2,opt,name=consensus_params,json=consensusParams,proto3" json:"consensus_params"`
}
func (m *ParamsResponse) Reset() { *m = ParamsResponse{} }
func (m *ParamsResponse) String() string { return proto.CompactTextString(m) }
func (*ParamsResponse) ProtoMessage() {}
func (*ParamsResponse) Descriptor() ([]byte, []int) {
return fileDescriptor_a1c2869546ca7914, []int{8}
}
func (m *ParamsResponse) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *ParamsResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_ParamsResponse.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *ParamsResponse) XXX_Merge(src proto.Message) {
xxx_messageInfo_ParamsResponse.Merge(m, src)
}
func (m *ParamsResponse) XXX_Size() int {
return m.Size()
}
func (m *ParamsResponse) XXX_DiscardUnknown() {
xxx_messageInfo_ParamsResponse.DiscardUnknown(m)
}
var xxx_messageInfo_ParamsResponse proto.InternalMessageInfo
func (m *ParamsResponse) GetHeight() uint64 {
if m != nil {
return m.Height
}
return 0
}
func (m *ParamsResponse) GetConsensusParams() types.ConsensusParams {
if m != nil {
return m.ConsensusParams
}
return types.ConsensusParams{}
}
func init() {
proto.RegisterType((*Message)(nil), "tendermint.statesync.Message")
proto.RegisterType((*SnapshotsRequest)(nil), "tendermint.statesync.SnapshotsRequest")
@@ -504,43 +627,51 @@ func init() {
proto.RegisterType((*ChunkResponse)(nil), "tendermint.statesync.ChunkResponse")
proto.RegisterType((*LightBlockRequest)(nil), "tendermint.statesync.LightBlockRequest")
proto.RegisterType((*LightBlockResponse)(nil), "tendermint.statesync.LightBlockResponse")
proto.RegisterType((*ParamsRequest)(nil), "tendermint.statesync.ParamsRequest")
proto.RegisterType((*ParamsResponse)(nil), "tendermint.statesync.ParamsResponse")
}
func init() { proto.RegisterFile("tendermint/statesync/types.proto", fileDescriptor_a1c2869546ca7914) }
var fileDescriptor_a1c2869546ca7914 = []byte{
// 485 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x54, 0x51, 0x6b, 0xd3, 0x50,
0x14, 0x4e, 0x5c, 0xdb, 0x8d, 0xb3, 0x46, 0x96, 0x63, 0x91, 0x32, 0x46, 0x18, 0x11, 0x74, 0x20,
0xa4, 0xa0, 0x8f, 0xe2, 0x4b, 0x7d, 0x99, 0x30, 0x5f, 0xee, 0x1c, 0xa8, 0x08, 0x23, 0x4d, 0xaf,
0x4d, 0xb0, 0x49, 0x6a, 0xcf, 0x2d, 0xb8, 0x1f, 0xe0, 0x93, 0x2f, 0x82, 0x7f, 0xca, 0xc7, 0x3d,
0xfa, 0x28, 0xed, 0x1f, 0x91, 0x9c, 0xdc, 0x26, 0x77, 0x6d, 0x5d, 0x11, 0xf6, 0x96, 0xef, 0xeb,
0x77, 0x3e, 0xbe, 0x73, 0xcf, 0xe9, 0x81, 0x63, 0x25, 0xb3, 0xa1, 0x9c, 0xa6, 0x49, 0xa6, 0x7a,
0xa4, 0x42, 0x25, 0xe9, 0x2a, 0x8b, 0x7a, 0xea, 0x6a, 0x22, 0x29, 0x98, 0x4c, 0x73, 0x95, 0x63,
0xa7, 0x56, 0x04, 0x95, 0xe2, 0xf0, 0xc8, 0xa8, 0x63, 0xb5, 0x59, 0xe3, 0xff, 0x6c, 0xc0, 0xee,
0x1b, 0x49, 0x14, 0x8e, 0x24, 0x5e, 0x80, 0x4b, 0x59, 0x38, 0xa1, 0x38, 0x57, 0x74, 0x39, 0x95,
0x5f, 0x66, 0x92, 0x54, 0xd7, 0x3e, 0xb6, 0x4f, 0xf6, 0x9f, 0x3d, 0x0e, 0x36, 0x79, 0x07, 0xe7,
0x4b, 0xb9, 0x28, 0xd5, 0xa7, 0x96, 0x38, 0xa0, 0x15, 0x0e, 0xdf, 0x01, 0x9a, 0xb6, 0x34, 0xc9,
0x33, 0x92, 0xdd, 0x7b, 0xec, 0xfb, 0x64, 0xab, 0x6f, 0x29, 0x3f, 0xb5, 0x84, 0x4b, 0xab, 0x24,
0xbe, 0x06, 0x27, 0x8a, 0x67, 0xd9, 0xe7, 0x2a, 0xec, 0x0e, 0x9b, 0xfa, 0x9b, 0x4d, 0x5f, 0x15,
0xd2, 0x3a, 0x68, 0x3b, 0x32, 0x30, 0x9e, 0xc1, 0xfd, 0xa5, 0x95, 0x0e, 0xd8, 0x60, 0xaf, 0x47,
0xb7, 0x7a, 0x55, 0xe1, 0x9c, 0xc8, 0x24, 0xf0, 0x3d, 0x3c, 0x18, 0x27, 0xa3, 0x58, 0x5d, 0x0e,
0xc6, 0x79, 0x54, 0xc7, 0x6b, 0xde, 0xd6, 0xf3, 0x59, 0x51, 0xd0, 0x2f, 0xf4, 0x75, 0x46, 0x77,
0xbc, 0x4a, 0xe2, 0x47, 0xe8, 0xdc, 0xb4, 0xd6, 0x71, 0x5b, 0xec, 0x7d, 0xb2, 0xdd, 0xbb, 0xca,
0x8c, 0xe3, 0x35, 0xb6, 0xdf, 0x84, 0x1d, 0x9a, 0xa5, 0x3e, 0xc2, 0xc1, 0xea, 0x68, 0xfd, 0xef,
0x36, 0xb8, 0x6b, 0x73, 0xc1, 0x87, 0xd0, 0x8a, 0x65, 0xe1, 0xc3, 0x8b, 0xd2, 0x10, 0x1a, 0x15,
0xfc, 0xa7, 0x7c, 0x9a, 0x86, 0x8a, 0x07, 0xed, 0x08, 0x8d, 0x0a, 0x9e, 0x9f, 0x8a, 0x78, 0x56,
0x8e, 0xd0, 0x08, 0x11, 0x1a, 0x71, 0x48, 0x31, 0xbf, 0x7a, 0x5b, 0xf0, 0x37, 0x1e, 0xc2, 0x5e,
0x2a, 0x55, 0x38, 0x0c, 0x55, 0xc8, 0x4f, 0xd7, 0x16, 0x15, 0xf6, 0xdf, 0x42, 0xdb, 0x9c, 0xe7,
0x7f, 0xe7, 0xe8, 0x40, 0x33, 0xc9, 0x86, 0xf2, 0xab, 0x8e, 0x51, 0x02, 0xff, 0x9b, 0x0d, 0xce,
0x8d, 0xd1, 0xde, 0x8d, 0x6f, 0xc1, 0x72, 0x9f, 0xba, 0xbd, 0x12, 0x60, 0x17, 0x76, 0xd3, 0x84,
0x28, 0xc9, 0x46, 0xdc, 0xde, 0x9e, 0x58, 0x42, 0xff, 0x29, 0xb8, 0x6b, 0xeb, 0xf0, 0xaf, 0x28,
0xfe, 0x39, 0xe0, 0xfa, 0x7c, 0xf1, 0x25, 0xec, 0x1b, 0x7b, 0xa2, 0xff, 0xc6, 0x47, 0xe6, 0x7a,
0x94, 0x67, 0xc0, 0x28, 0x85, 0x7a, 0x21, 0xfa, 0x17, 0xbf, 0xe6, 0x9e, 0x7d, 0x3d, 0xf7, 0xec,
0x3f, 0x73, 0xcf, 0xfe, 0xb1, 0xf0, 0xac, 0xeb, 0x85, 0x67, 0xfd, 0x5e, 0x78, 0xd6, 0x87, 0x17,
0xa3, 0x44, 0xc5, 0xb3, 0x41, 0x10, 0xe5, 0x69, 0xcf, 0x3c, 0x2d, 0xf5, 0x27, 0x5f, 0x96, 0xde,
0xa6, 0x73, 0x35, 0x68, 0xf1, 0x6f, 0xcf, 0xff, 0x06, 0x00, 0x00, 0xff, 0xff, 0xc1, 0x45, 0x35,
0xee, 0xcd, 0x04, 0x00, 0x00,
// 589 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x95, 0x4f, 0x8b, 0xd3, 0x40,
0x18, 0xc6, 0x13, 0xb7, 0xdd, 0x96, 0x77, 0x9b, 0x6e, 0x3b, 0x16, 0x29, 0x65, 0x8d, 0x6b, 0x14,
0x77, 0x41, 0x68, 0x41, 0x8f, 0xe2, 0xa5, 0x7b, 0x59, 0x61, 0x45, 0x99, 0x75, 0x41, 0x45, 0x28,
0x69, 0x3a, 0x26, 0xc1, 0xe6, 0x8f, 0x7d, 0xa7, 0xe0, 0x82, 0x57, 0x4f, 0x5e, 0xfc, 0x2c, 0x7e,
0x8a, 0x3d, 0xee, 0xd1, 0x93, 0x48, 0xfb, 0x45, 0x24, 0x93, 0x69, 0x32, 0x6d, 0xda, 0x2e, 0x82,
0xb7, 0xbc, 0xcf, 0x3c, 0xf9, 0xf5, 0x99, 0xc9, 0xc3, 0x14, 0x0e, 0x39, 0x0b, 0x47, 0x6c, 0x12,
0xf8, 0x21, 0xef, 0x21, 0xb7, 0x39, 0xc3, 0xcb, 0xd0, 0xe9, 0xf1, 0xcb, 0x98, 0x61, 0x37, 0x9e,
0x44, 0x3c, 0x22, 0xad, 0xdc, 0xd1, 0xcd, 0x1c, 0x9d, 0x96, 0x1b, 0xb9, 0x91, 0x30, 0xf4, 0x92,
0xa7, 0xd4, 0xdb, 0x39, 0x50, 0x68, 0x82, 0xa1, 0x92, 0x3a, 0x77, 0x0b, 0xab, 0xb1, 0x3d, 0xb1,
0x03, 0xb9, 0x6c, 0xfd, 0x2c, 0x43, 0xe5, 0x25, 0x43, 0xb4, 0x5d, 0x46, 0x2e, 0xa0, 0x89, 0xa1,
0x1d, 0xa3, 0x17, 0x71, 0x1c, 0x4c, 0xd8, 0xe7, 0x29, 0x43, 0xde, 0xd6, 0x0f, 0xf5, 0xe3, 0xbd,
0x27, 0x8f, 0xba, 0xeb, 0x02, 0x75, 0xcf, 0x17, 0x76, 0x9a, 0xba, 0x4f, 0x35, 0xda, 0xc0, 0x15,
0x8d, 0xbc, 0x05, 0xa2, 0x62, 0x31, 0x8e, 0x42, 0x64, 0xed, 0x5b, 0x82, 0x7b, 0x74, 0x23, 0x37,
0xb5, 0x9f, 0x6a, 0xb4, 0x89, 0xab, 0x22, 0x79, 0x01, 0x86, 0xe3, 0x4d, 0xc3, 0x4f, 0x59, 0xd8,
0x1d, 0x01, 0xb5, 0xd6, 0x43, 0x4f, 0x12, 0x6b, 0x1e, 0xb4, 0xe6, 0x28, 0x33, 0x39, 0x83, 0xfa,
0x02, 0x25, 0x03, 0x96, 0x04, 0xeb, 0xc1, 0x56, 0x56, 0x16, 0xce, 0x70, 0x54, 0x81, 0xbc, 0x83,
0xdb, 0x63, 0xdf, 0xf5, 0xf8, 0x60, 0x38, 0x8e, 0x9c, 0x3c, 0x5e, 0x79, 0xdb, 0x9e, 0xcf, 0x92,
0x17, 0xfa, 0x89, 0x3f, 0xcf, 0xd8, 0x1c, 0xaf, 0x8a, 0xe4, 0x03, 0xb4, 0x96, 0xd1, 0x32, 0xee,
0xae, 0x60, 0x1f, 0xdf, 0xcc, 0xce, 0x32, 0x93, 0x71, 0x41, 0x4d, 0x8e, 0x21, 0xad, 0x47, 0x96,
0xb9, 0xb2, 0xed, 0x18, 0x5e, 0x0b, 0x6f, 0x9e, 0xd7, 0x88, 0x55, 0x81, 0xbc, 0x82, 0xfd, 0x8c,
0x26, 0x63, 0x56, 0x05, 0xee, 0xe1, 0x76, 0x5c, 0x16, 0xb1, 0x1e, 0x2f, 0x29, 0xfd, 0x32, 0xec,
0xe0, 0x34, 0xb0, 0x08, 0x34, 0x56, 0x9b, 0x67, 0x7d, 0xd7, 0xa1, 0x59, 0xa8, 0x0d, 0xb9, 0x03,
0xbb, 0x1e, 0x4b, 0xb6, 0x29, 0x7a, 0x5c, 0xa2, 0x72, 0x4a, 0xf4, 0x8f, 0xd1, 0x24, 0xb0, 0xb9,
0xe8, 0xa1, 0x41, 0xe5, 0x94, 0xe8, 0xe2, 0x4b, 0xa2, 0xa8, 0x92, 0x41, 0xe5, 0x44, 0x08, 0x94,
0x3c, 0x1b, 0x3d, 0x51, 0x8a, 0x1a, 0x15, 0xcf, 0xa4, 0x03, 0xd5, 0x80, 0x71, 0x7b, 0x64, 0x73,
0x5b, 0x7c, 0xd9, 0x1a, 0xcd, 0x66, 0xeb, 0x0d, 0xd4, 0xd4, 0xba, 0xfd, 0x73, 0x8e, 0x16, 0x94,
0xfd, 0x70, 0xc4, 0xbe, 0xc8, 0x18, 0xe9, 0x60, 0x7d, 0xd3, 0xc1, 0x58, 0x6a, 0xde, 0xff, 0xe1,
0x26, 0xaa, 0xd8, 0xa7, 0xdc, 0x5e, 0x3a, 0x90, 0x36, 0x54, 0x02, 0x1f, 0xd1, 0x0f, 0x5d, 0xb1,
0xbd, 0x2a, 0x5d, 0x8c, 0xd6, 0x63, 0x68, 0x16, 0xda, 0xba, 0x29, 0x8a, 0x75, 0x0e, 0xa4, 0x58,
0x3f, 0xf2, 0x1c, 0xf6, 0x94, 0x1a, 0xcb, 0x5b, 0xe6, 0x40, 0xad, 0x45, 0x7a, 0x89, 0x29, 0xaf,
0x42, 0xde, 0x57, 0xeb, 0x08, 0x8c, 0xa5, 0xee, 0x6d, 0xfc, 0xf5, 0xaf, 0x50, 0x5f, 0x6e, 0xd5,
0xc6, 0x23, 0xa3, 0xd0, 0x70, 0x12, 0x43, 0x88, 0x53, 0x1c, 0xa4, 0xbd, 0x93, 0x97, 0xd4, 0xfd,
0x62, 0xac, 0x93, 0x85, 0x33, 0x85, 0xf7, 0x4b, 0x57, 0xbf, 0xef, 0x69, 0x74, 0xdf, 0x59, 0x91,
0x2f, 0xae, 0x66, 0xa6, 0x7e, 0x3d, 0x33, 0xf5, 0x3f, 0x33, 0x53, 0xff, 0x31, 0x37, 0xb5, 0xeb,
0xb9, 0xa9, 0xfd, 0x9a, 0x9b, 0xda, 0xfb, 0x67, 0xae, 0xcf, 0xbd, 0xe9, 0xb0, 0xeb, 0x44, 0x41,
0x4f, 0xbd, 0xa1, 0xf3, 0xc7, 0xf4, 0x9e, 0x5f, 0xf7, 0x4f, 0x31, 0xdc, 0x15, 0x6b, 0x4f, 0xff,
0x06, 0x00, 0x00, 0xff, 0xff, 0xa1, 0xb2, 0xfd, 0x65, 0x48, 0x06, 0x00, 0x00,
}
func (m *Message) Marshal() (dAtA []byte, err error) {
@@ -701,6 +832,48 @@ func (m *Message_LightBlockResponse) MarshalToSizedBuffer(dAtA []byte) (int, err
}
return len(dAtA) - i, nil
}
func (m *Message_ParamsRequest) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Message_ParamsRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
if m.ParamsRequest != nil {
{
size, err := m.ParamsRequest.MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintTypes(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x3a
}
return len(dAtA) - i, nil
}
func (m *Message_ParamsResponse) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Message_ParamsResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
if m.ParamsResponse != nil {
{
size, err := m.ParamsResponse.MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintTypes(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x42
}
return len(dAtA) - i, nil
}
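A small aside on the magic bytes above: these hand-written marshalers write the protobuf field key directly, and for a length-delimited field the key byte is (fieldNumber << 3) | wireType. params_request is field 7, giving (7<<3)|2 = 58 = 0x3a, and params_response is field 8, giving (8<<3)|2 = 66 = 0x42, which matches the constants emitted above.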
func (m *SnapshotsRequest) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@@ -932,6 +1105,72 @@ func (m *LightBlockResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
func (m *ParamsRequest) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *ParamsRequest) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *ParamsRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if m.Height != 0 {
i = encodeVarintTypes(dAtA, i, uint64(m.Height))
i--
dAtA[i] = 0x8
}
return len(dAtA) - i, nil
}
func (m *ParamsResponse) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *ParamsResponse) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *ParamsResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
{
size, err := m.ConsensusParams.MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
i -= size
i = encodeVarintTypes(dAtA, i, uint64(size))
}
i--
dAtA[i] = 0x12
if m.Height != 0 {
i = encodeVarintTypes(dAtA, i, uint64(m.Height))
i--
dAtA[i] = 0x8
}
return len(dAtA) - i, nil
}
func encodeVarintTypes(dAtA []byte, offset int, v uint64) int {
offset -= sovTypes(v)
base := offset
@@ -1027,6 +1266,30 @@ func (m *Message_LightBlockResponse) Size() (n int) {
}
return n
}
func (m *Message_ParamsRequest) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.ParamsRequest != nil {
l = m.ParamsRequest.Size()
n += 1 + l + sovTypes(uint64(l))
}
return n
}
func (m *Message_ParamsResponse) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.ParamsResponse != nil {
l = m.ParamsResponse.Size()
n += 1 + l + sovTypes(uint64(l))
}
return n
}
func (m *SnapshotsRequest) Size() (n int) {
if m == nil {
return 0
@@ -1130,6 +1393,32 @@ func (m *LightBlockResponse) Size() (n int) {
return n
}
func (m *ParamsRequest) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.Height != 0 {
n += 1 + sovTypes(uint64(m.Height))
}
return n
}
func (m *ParamsResponse) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.Height != 0 {
n += 1 + sovTypes(uint64(m.Height))
}
l = m.ConsensusParams.Size()
n += 1 + l + sovTypes(uint64(l))
return n
}
func sovTypes(x uint64) (n int) {
return (math_bits.Len64(x|1) + 6) / 7
}
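sovTypes computes the encoded size of a varint without materializing it: a varint carries 7 payload bits per byte, so the byte count is the ceiling of bitlen/7, which (math_bits.Len64(x|1)+6)/7 implements (the x|1 makes zero occupy one byte). For example, 9001 needs 14 bits, so (14+6)/7 = 2 bytes, matching the two-byte varint a9 46 in the test vectors earlier.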
@@ -1375,6 +1664,76 @@ func (m *Message) Unmarshal(dAtA []byte) error {
}
m.Sum = &Message_LightBlockResponse{v}
iNdEx = postIndex
case 7:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field ParamsRequest", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthTypes
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
v := &ParamsRequest{}
if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
m.Sum = &Message_ParamsRequest{v}
iNdEx = postIndex
case 8:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field ParamsResponse", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthTypes
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
v := &ParamsResponse{}
if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
m.Sum = &Message_ParamsResponse{v}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipTypes(dAtA[iNdEx:])
@@ -2044,6 +2403,177 @@ func (m *LightBlockResponse) Unmarshal(dAtA []byte) error {
}
return nil
}
func (m *ParamsRequest) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: ParamsRequest: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: ParamsRequest: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Height", wireType)
}
m.Height = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Height |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipTypes(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func (m *ParamsResponse) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: ParamsResponse: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: ParamsResponse: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Height", wireType)
}
m.Height = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Height |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field ConsensusParams", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowTypes
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthTypes
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTypes
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := m.ConsensusParams.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipTypes(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthTypes
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func skipTypes(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0


@@ -1,7 +1,9 @@
syntax = "proto3";
package tendermint.statesync;
import "gogoproto/gogo.proto";
import "tendermint/types/types.proto";
import "tendermint/types/params.proto";
option go_package = "github.com/tendermint/tendermint/proto/tendermint/statesync";
@@ -13,6 +15,8 @@ message Message {
ChunkResponse chunk_response = 4;
LightBlockRequest light_block_request = 5;
LightBlockResponse light_block_response = 6;
ParamsRequest params_request = 7;
ParamsResponse params_response = 8;
}
}
@@ -46,4 +50,13 @@ message LightBlockRequest {
message LightBlockResponse {
tendermint.types.LightBlock light_block = 1;
}
message ParamsRequest {
uint64 height = 1;
}
message ParamsResponse {
uint64 height = 1;
tendermint.types.ConsensusParams consensus_params = 2 [(gogoproto.nullable) = false];
}
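Since consensus_params is declared with (gogoproto.nullable) = false, the generated Go field is a value rather than a pointer, so a response can be built without nil checks. A minimal sketch, reusing the ssproto and types import aliases from the test file above:

resp := &ssproto.ParamsResponse{
	Height:          9001,
	ConsensusParams: types.DefaultConsensusParams().ToProto(),
}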


@@ -1 +0,0 @@
<!-- TODO: add in release notes prior to release. -->


@@ -58,6 +58,11 @@ type peers interface {
Peers() p2p.IPeerSet
}
type consensusReactor interface {
WaitSync() bool
GetPeerState(peerID types.NodeID) (*consensus.PeerState, bool)
}
type peerManager interface {
Peers() []types.NodeID
Addresses(types.NodeID) []p2p.NodeAddress
@@ -72,11 +77,12 @@ type Environment struct {
ProxyAppMempool proxy.AppConnMempool
// interfaces defined in types and above
-StateStore sm.Store
-BlockStore sm.BlockStore
-EvidencePool sm.EvidencePool
-ConsensusState consensusState
-P2PPeers peers
+StateStore sm.Store
+BlockStore sm.BlockStore
+EvidencePool sm.EvidencePool
+ConsensusState consensusState
+ConsensusReactor consensusReactor
+P2PPeers peers
// Legacy p2p stack
P2PTransport transport
@@ -88,7 +94,6 @@ type Environment struct {
PubKey crypto.PubKey
GenDoc *types.GenesisDoc // cache the genesis structure
EventSinks []indexer.EventSink
-ConsensusReactor *consensus.Reactor
EventBus *types.EventBus // thread safe
Mempool mempl.Mempool
BlockSyncReactor consensus.BlockSyncReactor


@@ -110,7 +110,7 @@ func setup() {
wm.SetLogger(tcpLogger)
mux.HandleFunc(websocketEndpoint, wm.WebsocketHandler)
config := server.DefaultConfig()
-listener1, err := server.Listen(tcpAddr, config)
+listener1, err := server.Listen(tcpAddr, config.MaxOpenConnections)
if err != nil {
panic(err)
}
@@ -126,7 +126,7 @@ func setup() {
wm = server.NewWebsocketManager(Routes)
wm.SetLogger(unixLogger)
mux2.HandleFunc(websocketEndpoint, wm.WebsocketHandler)
-listener2, err := server.Listen(unixAddr, config)
+listener2, err := server.Listen(unixAddr, config.MaxOpenConnections)
if err != nil {
panic(err)
}


@@ -261,7 +261,7 @@ func (h maxBytesHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// Listen starts a new net.Listener on the given address.
// It returns an error if the address is invalid or the call to Listen() fails.
-func Listen(addr string, config *Config) (listener net.Listener, err error) {
+func Listen(addr string, maxOpenConnections int) (listener net.Listener, err error) {
parts := strings.SplitN(addr, "://", 2)
if len(parts) != 2 {
return nil, fmt.Errorf(
@@ -274,8 +274,8 @@ func Listen(addr string, config *Config) (listener net.Listener, err error) {
if err != nil {
return nil, fmt.Errorf("failed to listen on %v: %v", addr, err)
}
-if config.MaxOpenConnections > 0 {
-listener = netutil.LimitListener(listener, config.MaxOpenConnections)
+if maxOpenConnections > 0 {
+listener = netutil.LimitListener(listener, maxOpenConnections)
}
return listener, nil


@@ -39,8 +39,7 @@ func TestMaxOpenConnections(t *testing.T) {
fmt.Fprint(w, "some body")
})
config := DefaultConfig()
-config.MaxOpenConnections = max
-l, err := Listen("tcp://127.0.0.1:0", config)
+l, err := Listen("tcp://127.0.0.1:0", max)
require.NoError(t, err)
defer l.Close()
go Serve(l, mux, log.TestingLogger(), config) //nolint:errcheck // ignore for tests


@@ -33,7 +33,7 @@ func main() {
rpcserver.RegisterRPCFuncs(mux, routes, logger)
config := rpcserver.DefaultConfig()
-listener, err := rpcserver.Listen("tcp://127.0.0.1:8008", config)
+listener, err := rpcserver.Listen("tcp://127.0.0.1:8008", config.MaxOpenConnections)
if err != nil {
tmos.Exit(err.Error())
}


@@ -51,43 +51,47 @@ func (is *Service) OnStart() error {
go func() {
for {
-msg := <-blockHeadersSub.Out()
+select {
+case <-blockHeadersSub.Canceled():
+return
+case msg := <-blockHeadersSub.Out():
eventDataHeader := msg.Data().(types.EventDataNewBlockHeader)
height := eventDataHeader.Header.Height
batch := NewBatch(eventDataHeader.NumTxs)
for i := int64(0); i < eventDataHeader.NumTxs; i++ {
msg2 := <-txsSub.Out()
txResult := msg2.Data().(types.EventDataTx).TxResult
if err = batch.Add(&txResult); err != nil {
is.Logger.Error(
"failed to add tx to batch",
"height", height,
"index", txResult.Index,
"err", err,
)
}
}
if !IndexingEnabled(is.eventSinks) {
continue
}
for _, sink := range is.eventSinks {
if err := sink.IndexBlockEvents(eventDataHeader); err != nil {
is.Logger.Error("failed to index block", "height", height, "err", err)
} else {
is.Logger.Debug("indexed block", "height", height, "sink", sink.Type())
}
if len(batch.Ops) > 0 {
err := sink.IndexTxEvents(batch.Ops)
if err != nil {
is.Logger.Error("failed to index block txs", "height", height, "err", err)
} else {
is.Logger.Debug("indexed txs", "height", height, "sink", sink.Type())
}
}
}
+}
}
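The general shape of the shutdown-aware loop introduced here, as a standalone sketch (sub stands for any pubsub subscription exposing Out and Canceled channels, and handle is a hypothetical callback):

for {
	select {
	case <-sub.Canceled():
		// The subscription was terminated; exit the goroutine rather than
		// blocking on Out forever.
		return
	case msg := <-sub.Out():
		handle(msg)
	}
}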


@@ -139,7 +139,7 @@ func setupDB(t *testing.T) (*dockertest.Pool, error) {
assert.Nil(t, err)
resource, err = pool.RunWithOptions(&dockertest.RunOptions{
-Repository: psql.DriverName,
+Repository: "postgres",
Tag: "13",
Env: []string{
"POSTGRES_USER=" + user,
@@ -164,19 +164,16 @@ func setupDB(t *testing.T) (*dockertest.Pool, error) {
conn := fmt.Sprintf(dsn, user, password, resource.GetPort(port+"/tcp"), dbName)
-if err = pool.Retry(func() error {
-var err error
-pSink, psqldb, err = psql.NewEventSink(conn, "test-chainID")
+assert.NoError(t, pool.Retry(func() error {
+sink, err := psql.NewEventSink(conn, "test-chainID")
if err != nil {
return err
}
+pSink = sink
+psqldb = sink.DB()
return psqldb.Ping()
-}); err != nil {
-assert.Error(t, err)
-}
+}))
resetDB(t)


@@ -1,3 +1,4 @@
// Package psql implements an event sink backed by a PostgreSQL database.
package psql
import (
@@ -5,207 +6,252 @@ import (
"database/sql"
"errors"
"fmt"
"strings"
"time"
sq "github.com/Masterminds/squirrel"
proto "github.com/gogo/protobuf/proto"
"github.com/gogo/protobuf/proto"
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/libs/pubsub/query"
"github.com/tendermint/tendermint/state/indexer"
"github.com/tendermint/tendermint/types"
)
-var _ indexer.EventSink = (*EventSink)(nil)
const (
-TableEventBlock = "block_events"
-TableEventTx = "tx_events"
-TableResultTx = "tx_results"
-DriverName = "postgres"
+tableBlocks = "blocks"
+tableTxResults = "tx_results"
+tableEvents = "events"
+tableAttributes = "attributes"
+driverName = "postgres"
)
-// EventSink is an indexer backend providing the tx/block index services.
+// EventSink is an indexer backend providing the tx/block index services. This
+// implementation stores records in a PostgreSQL database using the schema
+// defined in state/indexer/sink/psql/schema.sql.
type EventSink struct {
store *sql.DB
chainID string
}
-func NewEventSink(connStr string, chainID string) (indexer.EventSink, *sql.DB, error) {
-db, err := sql.Open(DriverName, connStr)
+// NewEventSink constructs an event sink associated with the PostgreSQL
+// database specified by connStr. Events written to the sink are attributed to
+// the specified chainID.
+func NewEventSink(connStr, chainID string) (*EventSink, error) {
+db, err := sql.Open(driverName, connStr)
if err != nil {
-return nil, nil, err
+return nil, err
}
return &EventSink{
store: db,
chainID: chainID,
-}, db, nil
+}, nil
}
-func (es *EventSink) Type() indexer.EventSinkType {
-return indexer.PSQL
-}
+// DB returns the underlying Postgres connection used by the sink.
+// This is exported to support testing.
+func (es *EventSink) DB() *sql.DB { return es.store }
-func (es *EventSink) IndexBlockEvents(h types.EventDataNewBlockHeader) error {
-sqlStmt := sq.
-Insert(TableEventBlock).
-Columns("key", "value", "height", "type", "created_at", "chain_id").
-PlaceholderFormat(sq.Dollar).
-Suffix("ON CONFLICT (key,height)").
-Suffix("DO NOTHING")
+// Type returns the structure type for this sink, which is Postgres.
+func (es *EventSink) Type() indexer.EventSinkType { return indexer.PSQL }
-ts := time.Now()
-// index the reserved block height index
-sqlStmt = sqlStmt.
-Values(types.BlockHeightKey, fmt.Sprint(h.Header.Height), h.Header.Height, "", ts, es.chainID)
-// index begin_block events
-sqlStmt, err := indexBlockEvents(
-sqlStmt, h.ResultBeginBlock.Events, types.EventTypeBeginBlock, h.Header.Height, ts, es.chainID)
+// runInTransaction executes query in a fresh database transaction.
+// If query reports an error, the transaction is rolled back and the
+// error from query is reported to the caller.
+// Otherwise, the result of committing the transaction is returned.
+func runInTransaction(db *sql.DB, query func(*sql.Tx) error) error {
+dbtx, err := db.Begin()
if err != nil {
return err
}
-// index end_block events
-sqlStmt, err = indexBlockEvents(
-sqlStmt, h.ResultEndBlock.Events, types.EventTypeEndBlock, h.Header.Height, ts, es.chainID)
-if err != nil {
+if err := query(dbtx); err != nil {
+_ = dbtx.Rollback() // report the initial error, not the rollback
return err
}
-_, err = sqlStmt.RunWith(es.store).Exec()
-return err
+return dbtx.Commit()
}
-func (es *EventSink) IndexTxEvents(txr []*abci.TxResult) error {
-// index the tx result
-var txid uint32
-sqlStmtTxResult := sq.
-Insert(TableResultTx).
-Columns("tx_result", "created_at").
-PlaceholderFormat(sq.Dollar).
-RunWith(es.store).
-Suffix("ON CONFLICT (tx_result)").
-Suffix("DO NOTHING").
-Suffix("RETURNING \"id\"")
+// queryWithID executes the specified SQL query with the given arguments,
+// expecting a single-row, single-column result containing an ID. If the query
+// succeeds, the ID from the result is returned.
+func queryWithID(tx *sql.Tx, query string, args ...interface{}) (uint32, error) {
+var id uint32
+if err := tx.QueryRow(query, args...).Scan(&id); err != nil {
+return 0, err
+}
+return id, nil
+}
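A hypothetical sketch of how the two helpers compose; the widgets table and its columns are illustrative only and do not appear in the schema:

err := runInTransaction(db, func(tx *sql.Tx) error {
	// queryWithID surfaces the RETURNING value from the insert.
	id, err := queryWithID(tx, `INSERT INTO widgets (name) VALUES ($1) RETURNING rowid;`, "example")
	if err != nil {
		return err // runInTransaction rolls back
	}
	_, err = tx.Exec(`INSERT INTO widget_tags (widget_id, tag) VALUES ($1, $2);`, id, "new")
	return err // nil commits, non-nil rolls back
})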
-sqlStmtEvents := sq.
-Insert(TableEventTx).
-Columns("key", "value", "height", "hash", "tx_result_id", "created_at", "chain_id").
-PlaceholderFormat(sq.Dollar).
-Suffix("ON CONFLICT (key,hash)").
-Suffix("DO NOTHING")
+// insertEvents inserts a slice of events and any indexed attributes of those
+// events into the database associated with dbtx.
+//
+// If txID > 0, the event is attributed to the Tendermint transaction with that
+// ID; otherwise it is recorded as a block event.
+func insertEvents(dbtx *sql.Tx, blockID, txID uint32, evts []abci.Event) error {
+// Populate the transaction ID field iff one is defined (> 0).
+var txIDArg interface{}
+if txID > 0 {
+txIDArg = txID
+}
-ts := time.Now()
-for _, tx := range txr {
-txBz, err := proto.Marshal(tx)
+// Add each event to the events table, and retrieve its row ID to use when
+// adding any attributes the event provides.
+for _, evt := range evts {
+// Skip events with an empty type.
+if evt.Type == "" {
+continue
+}
+eid, err := queryWithID(dbtx, `
+INSERT INTO `+tableEvents+` (block_id, tx_id, type) VALUES ($1, $2, $3)
+RETURNING rowid;
+`, blockID, txIDArg, evt.Type)
if err != nil {
return err
}
-sqlStmtTxResult = sqlStmtTxResult.Values(txBz, ts)
-// execute sqlStmtTxResult db query and retrieve the txid
-r, err := sqlStmtTxResult.Query()
-if err != nil {
-return err
-}
-defer r.Close()
-if !r.Next() {
-return nil
-}
-if err := r.Scan(&txid); err != nil {
-return err
-}
-// index the reserved height and hash indices
-hash := fmt.Sprintf("%X", types.Tx(tx.Tx).Hash())
-sqlStmtEvents = sqlStmtEvents.Values(types.TxHashKey, hash, tx.Height, hash, txid, ts, es.chainID)
-sqlStmtEvents = sqlStmtEvents.Values(types.TxHeightKey, fmt.Sprint(tx.Height), tx.Height, hash, txid, ts, es.chainID)
-for _, event := range tx.Result.Events {
-// only index events with a non-empty type
-if len(event.Type) == 0 {
-continue
-}
+// Add any attributes flagged for indexing.
+for _, attr := range evt.Attributes {
+if !attr.Index {
+continue
+}
-for _, attr := range event.Attributes {
-if len(attr.Key) == 0 {
-continue
-}
-// index if `index: true` is set
-compositeTag := fmt.Sprintf("%s.%s", event.Type, attr.Key)
-// ensure event does not conflict with a reserved prefix key
-if compositeTag == types.TxHashKey || compositeTag == types.TxHeightKey {
-return fmt.Errorf("event type and attribute key \"%s\" is reserved; please use a different key", compositeTag)
-}
-if attr.GetIndex() {
-sqlStmtEvents = sqlStmtEvents.Values(compositeTag, attr.Value, tx.Height, hash, txid, ts, es.chainID)
-}
+compositeKey := evt.Type + "." + attr.Key
+if _, err := dbtx.Exec(`
+INSERT INTO `+tableAttributes+` (event_id, key, composite_key, value)
+VALUES ($1, $2, $3, $4);
+`, eid, attr.Key, compositeKey, attr.Value); err != nil {
+return err
+}
}
}
-// execute sqlStmtEvents db query...
-_, err := sqlStmtEvents.RunWith(es.store).Exec()
-return err
+return nil
}
// makeIndexedEvent constructs an event from the specified composite key and
// value. If the key has the form "type.name", the event will have a single
// attribute with that name and the value; otherwise the event will have only
// a type and no attributes.
func makeIndexedEvent(compositeKey, value string) abci.Event {
i := strings.Index(compositeKey, ".")
if i < 0 {
return abci.Event{Type: compositeKey}
}
return abci.Event{Type: compositeKey[:i], Attributes: []abci.EventAttribute{
{Key: compositeKey[i+1:], Value: value, Index: true},
}}
}
// IndexBlockEvents indexes the specified block header, part of the
// indexer.EventSink interface.
func (es *EventSink) IndexBlockEvents(h types.EventDataNewBlockHeader) error {
ts := time.Now().UTC()
return runInTransaction(es.store, func(dbtx *sql.Tx) error {
// Add the block to the blocks table and report back its row ID for use
// in indexing the events for the block.
blockID, err := queryWithID(dbtx, `
INSERT INTO `+tableBlocks+` (height, chain_id, created_at)
VALUES ($1, $2, $3)
ON CONFLICT DO NOTHING
RETURNING rowid;
`, h.Header.Height, es.chainID, ts)
if err == sql.ErrNoRows {
return nil // we already saw this block; quietly succeed
} else if err != nil {
return fmt.Errorf("indexing block header: %w", err)
}
// Insert the special block meta-event for height.
if err := insertEvents(dbtx, blockID, 0, []abci.Event{
makeIndexedEvent(types.BlockHeightKey, fmt.Sprint(h.Header.Height)),
}); err != nil {
return fmt.Errorf("block meta-events: %w", err)
}
// Insert all the block events. Order is important here: begin_block
// events are inserted before end_block events.
if err := insertEvents(dbtx, blockID, 0, h.ResultBeginBlock.Events); err != nil {
return fmt.Errorf("begin-block events: %w", err)
}
if err := insertEvents(dbtx, blockID, 0, h.ResultEndBlock.Events); err != nil {
return fmt.Errorf("end-block events: %w", err)
}
return nil
})
}
func (es *EventSink) IndexTxEvents(txrs []*abci.TxResult) error {
ts := time.Now().UTC()
for _, txr := range txrs {
// Encode the result message in protobuf wire format for indexing.
resultData, err := proto.Marshal(txr)
if err != nil {
return fmt.Errorf("marshaling tx_result: %w", err)
}
// Index the hash of the underlying transaction as a hex string.
txHash := fmt.Sprintf("%X", types.Tx(txr.Tx).Hash())
if err := runInTransaction(es.store, func(dbtx *sql.Tx) error {
// Find the block associated with this transaction. The block header
// must have been indexed prior to the transactions belonging to it.
blockID, err := queryWithID(dbtx, `
SELECT rowid FROM `+tableBlocks+` WHERE height = $1 AND chain_id = $2;
`, txr.Height, es.chainID)
if err != nil {
return fmt.Errorf("finding block ID: %w", err)
}
// Insert a record for this tx_result and capture its ID for indexing events.
txID, err := queryWithID(dbtx, `
INSERT INTO `+tableTxResults+` (block_id, index, created_at, tx_hash, tx_result)
VALUES ($1, $2, $3, $4, $5)
ON CONFLICT DO NOTHING
RETURNING rowid;
`, blockID, txr.Index, ts, txHash, resultData)
if err == sql.ErrNoRows {
return nil // we already saw this transaction; quietly succeed
} else if err != nil {
return fmt.Errorf("indexing tx_result: %w", err)
}
// Insert the special transaction meta-events for hash and height.
if err := insertEvents(dbtx, blockID, txID, []abci.Event{
makeIndexedEvent(types.TxHashKey, txHash),
makeIndexedEvent(types.TxHeightKey, fmt.Sprint(txr.Height)),
}); err != nil {
return fmt.Errorf("indexing transaction meta-events: %w", err)
}
// Index any events packaged with the transaction.
if err := insertEvents(dbtx, blockID, txID, txr.Result.Events); err != nil {
return fmt.Errorf("indexing transaction events: %w", err)
}
return nil
}); err != nil {
return err
}
}
return nil
}
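Note the ordering contract implied by the block lookup above: IndexBlockEvents must run for a height before IndexTxEvents is called for transactions at that height. A hedged sketch of the expected call sequence, where header and results stand in for values delivered by the indexer service:

if err := sink.IndexBlockEvents(header); err != nil {
	return err // writes the blocks row for header.Header.Height
}
if err := sink.IndexTxEvents(results); err != nil {
	return err // each result resolves its block_id by height and chain_id
}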
// SearchBlockEvents is not implemented by this sink, and reports an error for all queries.
func (es *EventSink) SearchBlockEvents(ctx context.Context, q *query.Query) ([]int64, error) {
return nil, errors.New("block search is not supported via the postgres event sink")
}
// SearchTxEvents is not implemented by this sink, and reports an error for all queries.
func (es *EventSink) SearchTxEvents(ctx context.Context, q *query.Query) ([]*abci.TxResult, error) {
return nil, errors.New("tx search is not supported via the postgres event sink")
}
// GetTxByHash is not implemented by this sink, and reports an error for all queries.
func (es *EventSink) GetTxByHash(hash []byte) (*abci.TxResult, error) {
return nil, errors.New("getTxByHash is not supported via the postgres event sink")
}
// HasBlock is not implemented by this sink, and reports an error for all queries.
func (es *EventSink) HasBlock(h int64) (bool, error) {
return false, errors.New("hasBlock is not supported via the postgres event sink")
}
func indexBlockEvents(
sqlStmt sq.InsertBuilder,
events []abci.Event,
ty string,
height int64,
ts time.Time,
chainID string,
) (sq.InsertBuilder, error) {
for _, event := range events {
// only index events with a non-empty type
if len(event.Type) == 0 {
continue
}
for _, attr := range event.Attributes {
if len(attr.Key) == 0 {
continue
}
// index iff the event specified index:true and it's not a reserved event
compositeKey := fmt.Sprintf("%s.%s", event.Type, attr.Key)
if compositeKey == types.BlockHeightKey {
return sqlStmt, fmt.Errorf(
"event type and attribute key \"%s\" is reserved; please use a different key", compositeKey)
}
if attr.GetIndex() {
sqlStmt = sqlStmt.Values(compositeKey, attr.Value, height, ty, ts, chainID)
}
}
}
return sqlStmt, nil
}
-func (es *EventSink) Stop() error {
-return es.store.Close()
-}
+// Stop closes the underlying PostgreSQL database.
+func (es *EventSink) Stop() error { return es.store.Close() }


@@ -3,320 +3,64 @@ package psql
import (
"context"
"database/sql"
"errors"
"flag"
"fmt"
"io/ioutil"
"log"
"os"
"os/signal"
"testing"
"time"
sq "github.com/Masterminds/squirrel"
schema "github.com/adlio/schema"
proto "github.com/gogo/protobuf/proto"
_ "github.com/lib/pq"
dockertest "github.com/ory/dockertest"
"github.com/adlio/schema"
"github.com/gogo/protobuf/proto"
"github.com/ory/dockertest"
"github.com/ory/dockertest/docker"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
abci "github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/state/indexer"
"github.com/tendermint/tendermint/types"
// Register the Postgres database driver.
_ "github.com/lib/pq"
)
-var db *sql.DB
-var resource *dockertest.Resource
-var chainID = "test-chainID"
+// Verify that the type satisfies the EventSink interface.
+var _ indexer.EventSink = (*EventSink)(nil)
+var (
+doPauseAtExit = flag.Bool("pause-at-exit", false,
+"If true, pause the test until interrupted at shutdown, to allow debugging")
+// A hook that test cases can call to obtain the shared database instance
+// used for testing the sink. This is initialized in TestMain (see below).
+testDB func() *sql.DB
+)
const (
user = "postgres"
password = "secret"
port = "5432"
dsn = "postgres://%s:%s@localhost:%s/%s?sslmode=disable"
dbName = "postgres"
+chainID = "test-chainID"
+viewBlockEvents = "block_events"
+viewTxEvents = "tx_events"
)
-func TestType(t *testing.T) {
-pool, err := setupDB(t)
-require.NoError(t, err)
+func TestMain(m *testing.M) {
+flag.Parse()
-psqlSink := &EventSink{store: db, chainID: chainID}
-assert.Equal(t, indexer.PSQL, psqlSink.Type())
-require.NoError(t, teardown(t, pool))
-}
func TestBlockFuncs(t *testing.T) {
pool, err := setupDB(t)
require.NoError(t, err)
indexer := &EventSink{store: db, chainID: chainID}
require.NoError(t, indexer.IndexBlockEvents(getTestBlockHeader()))
r, err := verifyBlock(1)
assert.True(t, r)
require.NoError(t, err)
r, err = verifyBlock(2)
assert.False(t, r)
require.NoError(t, err)
r, err = indexer.HasBlock(1)
assert.False(t, r)
assert.Equal(t, errors.New("hasBlock is not supported via the postgres event sink"), err)
r, err = indexer.HasBlock(2)
assert.False(t, r)
assert.Equal(t, errors.New("hasBlock is not supported via the postgres event sink"), err)
r2, err := indexer.SearchBlockEvents(context.TODO(), nil)
assert.Nil(t, r2)
assert.Equal(t, errors.New("block search is not supported via the postgres event sink"), err)
require.NoError(t, verifyTimeStamp(TableEventBlock))
// try to insert the duplicate block events.
err = indexer.IndexBlockEvents(getTestBlockHeader())
require.NoError(t, err)
require.NoError(t, teardown(t, pool))
}
func TestTxFuncs(t *testing.T) {
pool, err := setupDB(t)
assert.Nil(t, err)
indexer := &EventSink{store: db, chainID: chainID}
txResult := txResultWithEvents([]abci.Event{
{Type: "account", Attributes: []abci.EventAttribute{{Key: "number", Value: "1", Index: true}}},
{Type: "account", Attributes: []abci.EventAttribute{{Key: "owner", Value: "Ivan", Index: true}}},
{Type: "", Attributes: []abci.EventAttribute{{Key: "not_allowed", Value: "Vlad", Index: true}}},
})
err = indexer.IndexTxEvents([]*abci.TxResult{txResult})
require.NoError(t, err)
tx, err := verifyTx(types.Tx(txResult.Tx).Hash())
require.NoError(t, err)
assert.Equal(t, txResult, tx)
require.NoError(t, verifyTimeStamp(TableEventTx))
require.NoError(t, verifyTimeStamp(TableResultTx))
tx, err = indexer.GetTxByHash(types.Tx(txResult.Tx).Hash())
assert.Nil(t, tx)
assert.Equal(t, errors.New("getTxByHash is not supported via the postgres event sink"), err)
r2, err := indexer.SearchTxEvents(context.TODO(), nil)
assert.Nil(t, r2)
assert.Equal(t, errors.New("tx search is not supported via the postgres event sink"), err)
// try to insert the duplicate tx events.
err = indexer.IndexTxEvents([]*abci.TxResult{txResult})
require.NoError(t, err)
assert.Nil(t, teardown(t, pool))
}
func TestStop(t *testing.T) {
pool, err := setupDB(t)
require.NoError(t, err)
indexer := &EventSink{store: db}
require.NoError(t, indexer.Stop())
defer db.Close()
require.NoError(t, pool.Purge(resource))
}
func getTestBlockHeader() types.EventDataNewBlockHeader {
return types.EventDataNewBlockHeader{
Header: types.Header{Height: 1},
ResultBeginBlock: abci.ResponseBeginBlock{
Events: []abci.Event{
{
Type: "begin_event",
Attributes: []abci.EventAttribute{
{
Key: "proposer",
Value: "FCAA001",
Index: true,
},
},
},
},
},
ResultEndBlock: abci.ResponseEndBlock{
Events: []abci.Event{
{
Type: "end_event",
Attributes: []abci.EventAttribute{
{
Key: "foo",
Value: "100",
Index: true,
},
},
},
},
},
}
}
func readSchema() ([]*schema.Migration, error) {
filename := "schema.sql"
contents, err := ioutil.ReadFile(filename)
if err != nil {
return nil, fmt.Errorf("failed to read sql file from '%s': %w", filename, err)
}
mg := &schema.Migration{}
mg.ID = time.Now().Local().String() + " db schema"
mg.Script = string(contents)
return append([]*schema.Migration{}, mg), nil
}
func resetDB(t *testing.T) {
q := "DROP TABLE IF EXISTS block_events,tx_events,tx_results"
_, err := db.Exec(q)
require.NoError(t, err)
q = "DROP TYPE IF EXISTS block_event_type"
_, err = db.Exec(q)
require.NoError(t, err)
}
func txResultWithEvents(events []abci.Event) *abci.TxResult {
tx := types.Tx("HELLO WORLD")
return &abci.TxResult{
Height: 1,
Index: 0,
Tx: tx,
Result: abci.ResponseDeliverTx{
Data: []byte{0},
Code: abci.CodeTypeOK,
Log: "",
Events: events,
},
}
}
func verifyTx(hash []byte) (*abci.TxResult, error) {
join := fmt.Sprintf("%s ON %s.id = tx_result_id", TableEventTx, TableResultTx)
sqlStmt := sq.
Select("tx_result", fmt.Sprintf("%s.id", TableResultTx), "tx_result_id", "hash", "chain_id").
Distinct().From(TableResultTx).
InnerJoin(join).
Where(fmt.Sprintf("hash = $1 AND chain_id = '%s'", chainID), fmt.Sprintf("%X", hash))
rows, err := sqlStmt.RunWith(db).Query()
if err != nil {
return nil, err
}
defer rows.Close()
if rows.Next() {
var txResult []byte
var txResultID, txid int
var h, cid string
err = rows.Scan(&txResult, &txResultID, &txid, &h, &cid)
if err != nil {
return nil, nil
}
msg := new(abci.TxResult)
err = proto.Unmarshal(txResult, msg)
if err != nil {
return nil, err
}
return msg, err
}
// No result
return nil, nil
}
func verifyTimeStamp(tb string) error {
// We assume the tx indexing time would not exceed 2 second from now
sqlStmt := sq.
Select(fmt.Sprintf("%s.created_at", tb)).
Distinct().From(tb).
Where(fmt.Sprintf("%s.created_at >= $1", tb), time.Now().Add(-2*time.Second))
rows, err := sqlStmt.RunWith(db).Query()
if err != nil {
return err
}
defer rows.Close()
if rows.Next() {
var ts string
return rows.Scan(&ts)
}
return errors.New("no result")
}
func verifyBlock(h int64) (bool, error) {
sqlStmt := sq.
Select("height").
Distinct().
From(TableEventBlock).
Where(fmt.Sprintf("height = %d", h))
rows, err := sqlStmt.RunWith(db).Query()
if err != nil {
return false, err
}
defer rows.Close()
if !rows.Next() {
return false, nil
}
sqlStmt = sq.
Select("type, height", "chain_id").
Distinct().
From(TableEventBlock).
Where(fmt.Sprintf("height = %d AND type = '%s' AND chain_id = '%s'", h, types.EventTypeBeginBlock, chainID))
rows, err = sqlStmt.RunWith(db).Query()
if err != nil {
return false, err
}
defer rows.Close()
if !rows.Next() {
return false, nil
}
sqlStmt = sq.
Select("type, height").
Distinct().
From(TableEventBlock).
Where(fmt.Sprintf("height = %d AND type = '%s'", h, types.EventTypeEndBlock))
rows, err = sqlStmt.RunWith(db).Query()
if err != nil {
return false, err
}
defer rows.Close()
return rows.Next(), nil
}
-func setupDB(t *testing.T) (*dockertest.Pool, error) {
-t.Helper()
+// Set up docker and start a container running PostgreSQL.
pool, err := dockertest.NewPool(os.Getenv("DOCKER_URL"))
-require.NoError(t, err)
+if err != nil {
+log.Fatalf("Creating docker pool: %v", err)
+}
-resource, err = pool.RunWithOptions(&dockertest.RunOptions{
-Repository: DriverName,
+resource, err := pool.RunWithOptions(&dockertest.RunOptions{
+Repository: "postgres",
Tag: "13",
Env: []string{
"POSTGRES_USER=" + user,
@@ -332,40 +76,269 @@ func setupDB(t *testing.T) (*dockertest.Pool, error) {
Name: "no",
}
})
if err != nil {
log.Fatalf("Starting docker pool: %v", err)
}
require.NoError(t, err)
// Set the container to expire in a minute to avoid orphaned containers
// hanging around
_ = resource.Expire(60)
if *doPauseAtExit {
log.Print("Pause at exit is enabled, containers will not expire")
} else {
const expireSeconds = 60
_ = resource.Expire(expireSeconds)
log.Printf("Container expiration set to %d seconds", expireSeconds)
}
// Connect to the database, clear any leftover data, and install the
// indexing schema.
conn := fmt.Sprintf(dsn, user, password, resource.GetPort(port+"/tcp"), dbName)
var db *sql.DB
if err = pool.Retry(func() error {
var err error
_, db, err = NewEventSink(conn, chainID)
if err := pool.Retry(func() error {
sink, err := NewEventSink(conn, chainID)
if err != nil {
return err
}
db = sink.DB() // set global for test use
return db.Ping()
}); err != nil {
require.NoError(t, err)
log.Fatalf("Connecting to database: %v", err)
}
resetDB(t)
if err := resetDatabase(db); err != nil {
log.Fatalf("Flushing database: %v", err)
}
sm, err := readSchema()
assert.Nil(t, err)
assert.Nil(t, schema.NewMigrator().Apply(db, sm))
return pool, nil
if err != nil {
log.Fatalf("Reading schema: %v", err)
} else if err := schema.NewMigrator().Apply(db, sm); err != nil {
log.Fatalf("Applying schema: %v", err)
}
// Set up the hook for tests to get the shared database handle.
testDB = func() *sql.DB { return db }
// Run the selected test cases.
code := m.Run()
// Clean up and shut down the database container.
if *doPauseAtExit {
log.Print("Testing complete, pausing for inspection. Send SIGINT to resume teardown")
waitForInterrupt()
log.Print("(resuming)")
}
log.Print("Shutting down database")
if err := pool.Purge(resource); err != nil {
log.Printf("WARNING: Purging pool failed: %v", err)
}
if err := db.Close(); err != nil {
log.Printf("WARNING: Closing database failed: %v", err)
}
os.Exit(code)
}
func teardown(t *testing.T, pool *dockertest.Pool) error {
t.Helper()
// When you're done, kill and remove the container
assert.Nil(t, pool.Purge(resource))
return db.Close()
func TestType(t *testing.T) {
psqlSink := &EventSink{store: testDB(), chainID: chainID}
assert.Equal(t, indexer.PSQL, psqlSink.Type())
}
func TestIndexing(t *testing.T) {
t.Run("IndexBlockEvents", func(t *testing.T) {
indexer := &EventSink{store: testDB(), chainID: chainID}
require.NoError(t, indexer.IndexBlockEvents(newTestBlockHeader()))
verifyBlock(t, 1)
verifyBlock(t, 2)
verifyNotImplemented(t, "hasBlock", func() (bool, error) { return indexer.HasBlock(1) })
verifyNotImplemented(t, "hasBlock", func() (bool, error) { return indexer.HasBlock(2) })
verifyNotImplemented(t, "block search", func() (bool, error) {
v, err := indexer.SearchBlockEvents(context.Background(), nil)
return v != nil, err
})
require.NoError(t, verifyTimeStamp(tableBlocks))
// Attempting to reindex the same events should gracefully succeed.
require.NoError(t, indexer.IndexBlockEvents(newTestBlockHeader()))
})
t.Run("IndexTxEvents", func(t *testing.T) {
indexer := &EventSink{store: testDB(), chainID: chainID}
txResult := txResultWithEvents([]abci.Event{
makeIndexedEvent("account.number", "1"),
makeIndexedEvent("account.owner", "Ivan"),
makeIndexedEvent("account.owner", "Yulieta"),
{Type: "", Attributes: []abci.EventAttribute{{Key: "not_allowed", Value: "Vlad", Index: true}}},
})
require.NoError(t, indexer.IndexTxEvents([]*abci.TxResult{txResult}))
txr, err := loadTxResult(types.Tx(txResult.Tx).Hash())
require.NoError(t, err)
assert.Equal(t, txResult, txr)
require.NoError(t, verifyTimeStamp(tableTxResults))
require.NoError(t, verifyTimeStamp(viewTxEvents))
verifyNotImplemented(t, "getTxByHash", func() (bool, error) {
txr, err := indexer.GetTxByHash(types.Tx(txResult.Tx).Hash())
return txr != nil, err
})
verifyNotImplemented(t, "tx search", func() (bool, error) {
txr, err := indexer.SearchTxEvents(context.Background(), nil)
return txr != nil, err
})
// try to insert the duplicate tx events.
err = indexer.IndexTxEvents([]*abci.TxResult{txResult})
require.NoError(t, err)
})
}
func TestStop(t *testing.T) {
indexer := &EventSink{store: testDB()}
require.NoError(t, indexer.Stop())
}
// newTestBlockHeader constructs a fresh copy of a block header containing
// known test values to exercise the indexer.
func newTestBlockHeader() types.EventDataNewBlockHeader {
return types.EventDataNewBlockHeader{
Header: types.Header{Height: 1},
ResultBeginBlock: abci.ResponseBeginBlock{
Events: []abci.Event{
makeIndexedEvent("begin_event.proposer", "FCAA001"),
makeIndexedEvent("thingy.whatzit", "O.O"),
},
},
ResultEndBlock: abci.ResponseEndBlock{
Events: []abci.Event{
makeIndexedEvent("end_event.foo", "100"),
makeIndexedEvent("thingy.whatzit", "-.O"),
},
},
}
}
// readSchema loads the indexing database schema file
func readSchema() ([]*schema.Migration, error) {
const filename = "schema.sql"
contents, err := ioutil.ReadFile(filename)
if err != nil {
return nil, fmt.Errorf("failed to read sql file from '%s': %w", filename, err)
}
return []*schema.Migration{{
ID: time.Now().Local().String() + " db schema",
Script: string(contents),
}}, nil
}
// resetDB drops all the data from the test database.
func resetDatabase(db *sql.DB) error {
_, err := db.Exec(`DROP TABLE IF EXISTS blocks,tx_results,events,attributes CASCADE;`)
if err != nil {
return fmt.Errorf("dropping tables: %v", err)
}
_, err = db.Exec(`DROP VIEW IF EXISTS event_attributes,block_events,tx_events CASCADE;`)
if err != nil {
return fmt.Errorf("dropping views: %v", err)
}
return nil
}
// txResultWithEvents constructs a fresh transaction result with fixed values
// for testing, that includes the specified events.
func txResultWithEvents(events []abci.Event) *abci.TxResult {
return &abci.TxResult{
Height: 1,
Index: 0,
Tx: types.Tx("HELLO WORLD"),
Result: abci.ResponseDeliverTx{
Data: []byte{0},
Code: abci.CodeTypeOK,
Log: "",
Events: events,
},
}
}
func loadTxResult(hash []byte) (*abci.TxResult, error) {
hashString := fmt.Sprintf("%X", hash)
var resultData []byte
if err := testDB().QueryRow(`
SELECT tx_result FROM `+tableTxResults+` WHERE tx_hash = $1;
`, hashString).Scan(&resultData); err != nil {
return nil, fmt.Errorf("lookup transaction for hash %q failed: %v", hashString, err)
}
txr := new(abci.TxResult)
if err := proto.Unmarshal(resultData, txr); err != nil {
return nil, fmt.Errorf("unmarshaling txr: %v", err)
}
return txr, nil
}
func verifyTimeStamp(tableName string) error {
return testDB().QueryRow(fmt.Sprintf(`
SELECT DISTINCT %[1]s.created_at
FROM %[1]s
WHERE %[1]s.created_at >= $1;
`, tableName), time.Now().Add(-2*time.Second)).Err()
}
func verifyBlock(t *testing.T, height int64) {
// Check that the blocks table contains an entry for this height.
if err := testDB().QueryRow(`
SELECT height FROM `+tableBlocks+` WHERE height = $1;
`, height).Err(); err == sql.ErrNoRows {
t.Errorf("No block found for height=%d", height)
} else if err != nil {
t.Fatalf("Database query failed: %v", err)
}
// Verify the presence of begin_block and end_block events.
if err := testDB().QueryRow(`
SELECT type, height, chain_id FROM `+viewBlockEvents+`
WHERE height = $1 AND type = $2 AND chain_id = $3;
`, height, types.EventTypeBeginBlock, chainID).Err(); err == sql.ErrNoRows {
t.Errorf("No %q event found for height=%d", types.EventTypeBeginBlock, height)
} else if err != nil {
t.Fatalf("Database query failed: %v", err)
}
if err := testDB().QueryRow(`
SELECT type, height, chain_id FROM `+viewBlockEvents+`
WHERE height = $1 AND type = $2 AND chain_id = $3;
`, height, types.EventTypeEndBlock, chainID).Err(); err == sql.ErrNoRows {
t.Errorf("No %q event found for height=%d", types.EventTypeEndBlock, height)
} else if err != nil {
t.Fatalf("Database query failed: %v", err)
}
}
// verifyNotImplemented calls f and verifies that it returns both a
// false-valued flag and a non-nil error whose string matches the expected
// "not supported" message for the given label.
func verifyNotImplemented(t *testing.T, label string, f func() (bool, error)) {
t.Helper()
t.Logf("Verifying that %q reports it is not implemented", label)
want := label + " is not supported via the postgres event sink"
ok, err := f()
assert.False(t, ok)
require.NotNil(t, err)
assert.Equal(t, want, err.Error())
}
// waitForInterrupt blocks until a SIGINT is received by the process.
func waitForInterrupt() {
ch := make(chan os.Signal, 1)
signal.Notify(ch, os.Interrupt)
<-ch
}


@@ -1,32 +1,85 @@
CREATE TYPE block_event_type AS ENUM ('begin_block', 'end_block', '');
CREATE TABLE block_events (
id SERIAL PRIMARY KEY,
key VARCHAR NOT NULL,
value VARCHAR NOT NULL,
height INTEGER NOT NULL,
type block_event_type,
created_at TIMESTAMPTZ NOT NULL,
chain_id VARCHAR NOT NULL,
UNIQUE (key, height)
/*
This file defines the database schema for the PostgreSQL ("psql") event sink
implementation in Tendermint. The operator must create a database and install
this schema before using the database to index events.
*/
-- The blocks table records metadata about each block.
-- The block record does not include its events or transactions (see tx_results).
CREATE TABLE blocks (
rowid BIGSERIAL PRIMARY KEY,
height BIGINT NOT NULL,
chain_id VARCHAR NOT NULL,
-- When this block header was logged into the sink, in UTC.
created_at TIMESTAMPTZ NOT NULL,
UNIQUE (height, chain_id)
);
-- Index blocks by height and chain, since we need to resolve block IDs when
-- indexing transaction records and transaction events.
CREATE INDEX idx_blocks_height_chain ON blocks(height, chain_id);
-- The tx_results table records metadata about transaction results. Note that
-- the events from a transaction are stored separately.
CREATE TABLE tx_results (
id SERIAL PRIMARY KEY,
tx_result BYTEA NOT NULL,
created_at TIMESTAMPTZ NOT NULL,
UNIQUE (tx_result)
rowid BIGSERIAL PRIMARY KEY,
-- The block to which this transaction belongs.
block_id BIGINT NOT NULL REFERENCES blocks(rowid),
-- The sequential index of the transaction within the block.
index INTEGER NOT NULL,
-- When this result record was logged into the sink, in UTC.
created_at TIMESTAMPTZ NOT NULL,
-- The hex-encoded hash of the transaction.
tx_hash VARCHAR NOT NULL,
-- The protobuf wire encoding of the TxResult message.
tx_result BYTEA NOT NULL,
UNIQUE (block_id, index)
);
CREATE TABLE tx_events (
id SERIAL PRIMARY KEY,
key VARCHAR NOT NULL,
value VARCHAR NOT NULL,
height INTEGER NOT NULL,
hash VARCHAR NOT NULL,
tx_result_id SERIAL,
created_at TIMESTAMPTZ NOT NULL,
chain_id VARCHAR NOT NULL,
UNIQUE (hash, key),
FOREIGN KEY (tx_result_id) REFERENCES tx_results(id) ON DELETE CASCADE
-- The events table records events. All events (both block and transaction) are
-- associated with a block ID; transaction events also have a transaction ID.
CREATE TABLE events (
rowid BIGSERIAL PRIMARY KEY,
-- The block and transaction this event belongs to.
-- If tx_id is NULL, this is a block event.
block_id BIGINT NOT NULL REFERENCES blocks(rowid),
tx_id BIGINT NULL REFERENCES tx_results(rowid),
-- The application-defined type label for the event.
type VARCHAR NOT NULL
);
CREATE INDEX idx_block_events_key_value ON block_events(key, value);
CREATE INDEX idx_tx_events_key_value ON tx_events(key, value);
CREATE INDEX idx_tx_events_hash ON tx_events(hash);
-- The attributes table records event attributes.
CREATE TABLE attributes (
event_id BIGINT NOT NULL REFERENCES events(rowid),
key VARCHAR NOT NULL, -- bare key
composite_key VARCHAR NOT NULL, -- composed type.key
value VARCHAR NULL,
UNIQUE (event_id, key)
);
-- A joined view of events and their attributes. Events that have no
-- attributes are represented as a single row with NULL key and value fields.
CREATE VIEW event_attributes AS
SELECT block_id, tx_id, type, key, composite_key, value
FROM events LEFT JOIN attributes ON (events.rowid = attributes.event_id);
-- A joined view of all block events (those having tx_id NULL).
CREATE VIEW block_events AS
SELECT blocks.rowid as block_id, height, chain_id, type, key, composite_key, value
FROM blocks JOIN event_attributes ON (blocks.rowid = event_attributes.block_id)
WHERE event_attributes.tx_id IS NULL;
-- A joined view of all transaction events.
CREATE VIEW tx_events AS
SELECT height, index, chain_id, type, key, composite_key, value, tx_results.created_at
FROM blocks JOIN tx_results ON (blocks.rowid = tx_results.block_id)
JOIN event_attributes ON (tx_results.rowid = event_attributes.tx_id)
WHERE event_attributes.tx_id IS NOT NULL;
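
To make the view-based query surface concrete, here is a minimal Go sketch using the lib/pq driver already present in this repository's dependencies; the DSN, chain ID, and height below are illustrative assumptions, not values from this changeset.

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // PostgreSQL driver used elsewhere in this repository
)

func main() {
	// Placeholder DSN; point this at a database with the schema installed.
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/tm_events?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// All event attributes recorded for one block on one chain.
	rows, err := db.Query(`
SELECT type, key, value FROM block_events
 WHERE height = $1 AND chain_id = $2;`, 5, "test-chain")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var typ string
		var key, value sql.NullString // NULL for events without attributes
		if err := rows.Scan(&typ, &key, &value); err != nil {
			log.Fatal(err)
		}
		fmt.Println(typ, key.String, value.String)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}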

View File

@@ -0,0 +1,65 @@
package sink
import (
"errors"
"strings"
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/state/indexer"
"github.com/tendermint/tendermint/state/indexer/sink/kv"
"github.com/tendermint/tendermint/state/indexer/sink/null"
"github.com/tendermint/tendermint/state/indexer/sink/psql"
)
// EventSinksFromConfig constructs a slice of indexer.EventSink using the provided
// configuration.
//
//nolint:lll
func EventSinksFromConfig(cfg *config.Config, dbProvider config.DBProvider, chainID string) ([]indexer.EventSink, error) {
if len(cfg.TxIndex.Indexer) == 0 {
return []indexer.EventSink{null.NewEventSink()}, nil
}
// check for duplicate sinks
sinks := map[string]struct{}{}
for _, s := range cfg.TxIndex.Indexer {
sl := strings.ToLower(s)
if _, ok := sinks[sl]; ok {
return nil, errors.New("found duplicated sinks, please check the tx-index section in the config.toml")
}
sinks[sl] = struct{}{}
}
eventSinks := []indexer.EventSink{}
for k := range sinks {
switch indexer.EventSinkType(k) {
case indexer.NULL:
// When null appears anywhere in the config, all other sinks are ignored
// and only the null event sink is used.
return []indexer.EventSink{null.NewEventSink()}, nil
case indexer.KV:
store, err := dbProvider(&config.DBContext{ID: "tx_index", Config: cfg})
if err != nil {
return nil, err
}
eventSinks = append(eventSinks, kv.NewEventSink(store))
case indexer.PSQL:
conn := cfg.TxIndex.PsqlConn
if conn == "" {
return nil, errors.New("the psql connection settings cannot be empty")
}
es, err := psql.NewEventSink(conn, chainID)
if err != nil {
return nil, err
}
eventSinks = append(eventSinks, es)
default:
return nil, errors.New("unsupported event sink type")
}
}
return eventSinks, nil
}
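
A minimal wiring sketch for this constructor; config.DefaultDBProvider and the sink Type accessor are assumptions about the surrounding codebase, and the DSN is a placeholder.

package main

import (
	"fmt"
	"log"

	"github.com/tendermint/tendermint/config"
	"github.com/tendermint/tendermint/state/indexer/sink"
)

func main() {
	cfg := config.DefaultConfig()
	cfg.TxIndex.Indexer = []string{"kv", "psql"}
	cfg.TxIndex.PsqlConn = "postgres://user:pass@localhost/tm_events?sslmode=disable" // placeholder

	// config.DefaultDBProvider (assumed) opens the backing store for the kv sink.
	sinks, err := sink.EventSinksFromConfig(cfg, config.DefaultDBProvider, "test-chain")
	if err != nil {
		log.Fatal(err) // duplicate sinks, an empty psql DSN, or an unsupported type
	}
	for _, s := range sinks {
		fmt.Println("enabled sink:", s.Type())
	}
}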

View File

@@ -9,7 +9,6 @@ import (
"github.com/gogo/protobuf/proto"
tmtime "github.com/tendermint/tendermint/libs/time"
tmstate "github.com/tendermint/tendermint/proto/tendermint/state"
tmversion "github.com/tendermint/tendermint/proto/tendermint/version"
"github.com/tendermint/tendermint/types"
@@ -287,7 +286,7 @@ func (state State) MakeBlock(
// the votes sent by honest processes, i.e., faulty processes cannot arbitrarily increase or decrease the
// computed value.
func MedianTime(commit *types.Commit, validators *types.ValidatorSet) time.Time {
weightedTimes := make([]*tmtime.WeightedTime, len(commit.Signatures))
weightedTimes := make([]*weightedTime, len(commit.Signatures))
totalVotingPower := int64(0)
for i, commitSig := range commit.Signatures {
@@ -298,11 +297,11 @@ func MedianTime(commit *types.Commit, validators *types.ValidatorSet) time.Time
// If this check were absent, TestValidateBlockCommit would panic; it is not needed in normal operation.
if validator != nil {
totalVotingPower += validator.VotingPower
weightedTimes[i] = tmtime.NewWeightedTime(commitSig.Timestamp, validator.VotingPower)
weightedTimes[i] = newWeightedTime(commitSig.Timestamp, validator.VotingPower)
}
}
return tmtime.WeightedMedian(weightedTimes, totalVotingPower)
return weightedMedian(weightedTimes, totalVotingPower)
}
//------------------------------------------------------------------------

state/time.go (new file)
View File

@@ -0,0 +1,46 @@
package state
import (
"sort"
"time"
)
// weightedTime for computing a median.
type weightedTime struct {
Time time.Time
Weight int64
}
// newWeightedTime constructs a weightedTime from the given time and weight.
func newWeightedTime(time time.Time, weight int64) *weightedTime {
return &weightedTime{
Time: time,
Weight: weight,
}
}
// weightedMedian computes the weighted median time for the given slice of weightedTime values and the total voting power.
func weightedMedian(weightedTimes []*weightedTime, totalVotingPower int64) (res time.Time) {
median := totalVotingPower / 2
sort.Slice(weightedTimes, func(i, j int) bool {
if weightedTimes[i] == nil {
return false
}
if weightedTimes[j] == nil {
return true
}
return weightedTimes[i].Time.UnixNano() < weightedTimes[j].Time.UnixNano()
})
for _, weightedTime := range weightedTimes {
if weightedTime != nil {
if median <= weightedTime.Weight {
res = weightedTime.Time
break
}
median -= weightedTime.Weight
}
}
return
}
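
As a quick illustration of the scan (a sketch inside package state, not part of the changeset; the fmt and time imports are assumed):

base := time.Now()
wts := []*weightedTime{
	newWeightedTime(base, 10),
	newWeightedTime(base.Add(1*time.Second), 20),
	newWeightedTime(base.Add(2*time.Second), 30),
}
// Total power is 60, so the target is 30. The scan subtracts 10 (20 left),
// then finds 20 <= 20 at the second entry, so the median is base+1s.
fmt.Println(weightedMedian(wts, 60).Equal(base.Add(1 * time.Second))) // true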

View File

@@ -1,54 +1,55 @@
package time
package state
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
tmtime "github.com/tendermint/tendermint/libs/time"
)
func TestWeightedMedian(t *testing.T) {
m := make([]*WeightedTime, 3)
m := make([]*weightedTime, 3)
t1 := Now()
t1 := tmtime.Now()
t2 := t1.Add(5 * time.Second)
t3 := t1.Add(10 * time.Second)
m[2] = NewWeightedTime(t1, 33) // faulty processes
m[0] = NewWeightedTime(t2, 40) // correct processes
m[1] = NewWeightedTime(t3, 27) // correct processes
m[2] = newWeightedTime(t1, 33) // faulty processes
m[0] = newWeightedTime(t2, 40) // correct processes
m[1] = newWeightedTime(t3, 27) // correct processes
totalVotingPower := int64(100)
median := WeightedMedian(m, totalVotingPower)
median := weightedMedian(m, totalVotingPower)
assert.Equal(t, t2, median)
// the median always lies between the values of the correct processes
assert.Equal(t, true, (median.After(t1) || median.Equal(t1)) &&
(median.Before(t3) || median.Equal(t3)))
m[1] = NewWeightedTime(t1, 40) // correct processes
m[2] = NewWeightedTime(t2, 27) // correct processes
m[0] = NewWeightedTime(t3, 33) // faulty processes
m[1] = newWeightedTime(t1, 40) // correct processes
m[2] = newWeightedTime(t2, 27) // correct processes
m[0] = newWeightedTime(t3, 33) // faulty processes
totalVotingPower = int64(100)
median = WeightedMedian(m, totalVotingPower)
median = weightedMedian(m, totalVotingPower)
assert.Equal(t, t2, median)
// the median always lies between the values of the correct processes
assert.Equal(t, true, (median.After(t1) || median.Equal(t1)) &&
(median.Before(t2) || median.Equal(t2)))
m = make([]*WeightedTime, 8)
m = make([]*weightedTime, 8)
t4 := t1.Add(15 * time.Second)
t5 := t1.Add(60 * time.Second)
m[3] = NewWeightedTime(t1, 10) // correct processes
m[1] = NewWeightedTime(t2, 10) // correct processes
m[5] = NewWeightedTime(t2, 10) // correct processes
m[4] = NewWeightedTime(t3, 23) // faulty processes
m[0] = NewWeightedTime(t4, 20) // correct processes
m[7] = NewWeightedTime(t5, 10) // faulty processes
m[3] = newWeightedTime(t1, 10) // correct processes
m[1] = newWeightedTime(t2, 10) // correct processes
m[5] = newWeightedTime(t2, 10) // correct processes
m[4] = newWeightedTime(t3, 23) // faulty processes
m[0] = newWeightedTime(t4, 20) // correct processes
m[7] = newWeightedTime(t5, 10) // faulty processes
totalVotingPower = int64(83)
median = WeightedMedian(m, totalVotingPower)
median = weightedMedian(m, totalVotingPower)
assert.Equal(t, t3, median)
// the median always lies between the values of the correct processes
assert.Equal(t, true, (median.After(t1) || median.Equal(t1)) &&

View File

@@ -26,13 +26,29 @@ var (
}
// The following specify randomly chosen values for testnet nodes.
nodeDatabases = uniformChoice{"goleveldb", "cleveldb", "rocksdb", "boltdb", "badgerdb"}
nodeABCIProtocols = uniformChoice{"unix", "tcp", "builtin", "grpc"}
nodePrivvalProtocols = uniformChoice{"file", "unix", "tcp", "grpc"}
nodeDatabases = weightedChoice{
"goleveldb": 35,
"badgerdb": 35,
"boltdb": 15,
"rocksdb": 10,
"cleveldb": 5,
}
nodeABCIProtocols = weightedChoice{
"builtin": 50,
"tcp": 20,
"grpc": 20,
"unix": 10,
}
nodePrivvalProtocols = weightedChoice{
"file": 50,
"grpc": 20,
"tcp": 20,
"unix": 10,
}
// FIXME: v2 disabled due to flake
nodeBlockSyncs = uniformChoice{"v0"} // "v2"
nodeMempools = uniformChoice{"v0", "v1"}
nodeStateSyncs = uniformChoice{false, true}
nodeStateSyncs = uniformChoice{e2e.StateSyncDisabled, e2e.StateSyncP2P, e2e.StateSyncRPC}
nodePersistIntervals = uniformChoice{0, 1, 5}
nodeSnapshotIntervals = uniformChoice{0, 3}
nodeRetainBlocks = uniformChoice{0, int(e2e.EvidenceAgeHeight), int(e2e.EvidenceAgeHeight) + 5}
@@ -107,11 +123,11 @@ func generateTestnet(r *rand.Rand, opt map[string]interface{}) (e2e.Manifest, er
switch opt["p2p"].(P2PMode) {
case NewP2PMode:
manifest.DisableLegacyP2P = true
manifest.UseLegacyP2P = false
case LegacyP2PMode:
manifest.DisableLegacyP2P = false
manifest.UseLegacyP2P = true
case HybridP2PMode:
manifest.DisableLegacyP2P = false
manifest.UseLegacyP2P = true
p2pNodeFactor = 2
default:
return manifest, fmt.Errorf("unknown p2p mode %s", opt["p2p"])
@@ -138,9 +154,9 @@ func generateTestnet(r *rand.Rand, opt map[string]interface{}) (e2e.Manifest, er
node := generateNode(r, e2e.ModeSeed, 0, manifest.InitialHeight, false)
if p2pNodeFactor == 0 {
node.DisableLegacyP2P = manifest.DisableLegacyP2P
node.UseLegacyP2P = manifest.UseLegacyP2P
} else if p2pNodeFactor%i == 0 {
node.DisableLegacyP2P = !manifest.DisableLegacyP2P
node.UseLegacyP2P = !manifest.UseLegacyP2P
}
manifest.Nodes[fmt.Sprintf("seed%02d", i)] = node
@@ -162,9 +178,9 @@ func generateTestnet(r *rand.Rand, opt map[string]interface{}) (e2e.Manifest, er
r, e2e.ModeValidator, startAt, manifest.InitialHeight, i <= 2)
if p2pNodeFactor == 0 {
node.DisableLegacyP2P = manifest.DisableLegacyP2P
node.UseLegacyP2P = manifest.UseLegacyP2P
} else if p2pNodeFactor%i == 0 {
node.DisableLegacyP2P = !manifest.DisableLegacyP2P
node.UseLegacyP2P = !manifest.UseLegacyP2P
}
manifest.Nodes[name] = node
@@ -198,9 +214,9 @@ func generateTestnet(r *rand.Rand, opt map[string]interface{}) (e2e.Manifest, er
node := generateNode(r, e2e.ModeFull, startAt, manifest.InitialHeight, false)
if p2pNodeFactor == 0 {
node.DisableLegacyP2P = manifest.DisableLegacyP2P
node.UseLegacyP2P = manifest.UseLegacyP2P
} else if p2pNodeFactor%i == 0 {
node.DisableLegacyP2P = !manifest.DisableLegacyP2P
node.UseLegacyP2P = !manifest.UseLegacyP2P
}
manifest.Nodes[fmt.Sprintf("full%02d", i)] = node
}
@@ -270,18 +286,22 @@ func generateNode(
node := e2e.ManifestNode{
Mode: string(mode),
StartAt: startAt,
Database: nodeDatabases.Choose(r).(string),
ABCIProtocol: nodeABCIProtocols.Choose(r).(string),
PrivvalProtocol: nodePrivvalProtocols.Choose(r).(string),
Database: nodeDatabases.Choose(r),
ABCIProtocol: nodeABCIProtocols.Choose(r),
PrivvalProtocol: nodePrivvalProtocols.Choose(r),
BlockSync: nodeBlockSyncs.Choose(r).(string),
Mempool: nodeMempools.Choose(r).(string),
StateSync: nodeStateSyncs.Choose(r).(bool) && startAt > 0,
StateSync: e2e.StateSyncDisabled,
PersistInterval: ptrUint64(uint64(nodePersistIntervals.Choose(r).(int))),
SnapshotInterval: uint64(nodeSnapshotIntervals.Choose(r).(int)),
RetainBlocks: uint64(nodeRetainBlocks.Choose(r).(int)),
Perturb: nodePerturbations.Choose(r),
}
if startAt > 0 {
node.StateSync = nodeStateSyncs.Choose(r).(string)
}
// If this node is forced to be an archive node, retain all blocks and
// enable state sync snapshotting.
if forceArchive {
@@ -310,7 +330,7 @@ func generateNode(
}
}
if node.StateSync {
if node.StateSync != e2e.StateSyncDisabled {
node.BlockSync = "v0"
}
@@ -321,7 +341,7 @@ func generateLightNode(r *rand.Rand, startAt int64, providers []string) *e2e.Man
return &e2e.ManifestNode{
Mode: string(e2e.ModeLight),
StartAt: startAt,
Database: nodeDatabases.Choose(r).(string),
Database: nodeDatabases.Choose(r),
ABCIProtocol: "builtin",
PersistInterval: ptrUint64(0),
PersistentPeers: providers,

View File

@@ -3,6 +3,8 @@ package main
import (
"math/rand"
"sort"
"github.com/mroth/weightedrand"
)
// combinations takes input in the form of a map of item lists, and returns a
@@ -83,3 +85,19 @@ func (usc uniformSetChoice) Choose(r *rand.Rand) []string {
}
return choices
}
type weightedChoice map[string]uint
func (wc weightedChoice) Choose(r *rand.Rand) string {
choices := make([]weightedrand.Choice, 0, len(wc))
for k, v := range wc {
choices = append(choices, weightedrand.NewChoice(k, v))
}
chooser, err := weightedrand.NewChooser(choices...)
if err != nil {
panic(err)
}
return chooser.PickSource(r).(string)
}
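
An illustrative use of weightedChoice inside the generator package; the fixed seed and iteration count are assumptions for the demonstration (math/rand and fmt imports assumed):

r := rand.New(rand.NewSource(1)) // fixed seed for reproducibility
wc := weightedChoice{"builtin": 50, "tcp": 20, "grpc": 20, "unix": 10}
counts := map[string]int{}
for i := 0; i < 1000; i++ {
	counts[wc.Choose(r)]++
}
fmt.Println(counts) // roughly 500, 200, 200, and 100 picks respectively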

View File

@@ -55,6 +55,7 @@ retain_blocks = 7
[node.validator04]
abci_protocol = "builtin"
snapshot_interval = 5
database = "rocksdb"
persistent_peers = ["validator01"]
perturb = ["pause"]
@@ -62,6 +63,7 @@ perturb = ["pause"]
[node.validator05]
database = "cleveldb"
block_sync = "v0"
state_sync = "p2p"
seeds = ["seed01"]
start_at = 1005 # Becomes part of the validator set at 1010
abci_protocol = "grpc"
@@ -73,10 +75,10 @@ mode = "full"
start_at = 1010
# FIXME: should be v2, disabled due to flake
block_sync = "v0"
persistent_peers = ["validator01", "validator02", "validator03", "validator04", "validator05"]
persistent_peers = ["validator01", "validator02", "validator03", "validator04"]
perturb = ["restart"]
retain_blocks = 7
state_sync = true
state_sync = "rpc"
[node.light01]
mode = "light"

View File

@@ -59,8 +59,8 @@ type Manifest struct {
// by individual nodes.
LogLevel string `toml:"log_level"`
// DisableLegacyP2P enables use of the new p2p layer for all nodes in a test.
DisableLegacyP2P bool `toml:"disable_legacy_p2p"`
// UseLegacyP2P uses the legacy p2p layer for all nodes in a test.
UseLegacyP2P bool `toml:"use_legacy_p2p"`
// QueueType describes the type of queue that the system uses internally
QueueType string `toml:"queue_type"`
@@ -117,7 +117,8 @@ type ManifestNode struct {
// block hashes and RPC servers. At least one node in the network must have
// SnapshotInterval set to non-zero, and the state syncing node must have
// StartAt set to an appropriate height where a snapshot is available.
StateSync bool `toml:"state_sync"`
// StateSync can be either "p2p", "rpc", or an empty string to disable state sync.
StateSync string `toml:"state_sync"`
// PersistInterval specifies the height interval at which the application
// will persist state to disk. Defaults to 1 (every height), setting this to
@@ -147,8 +148,8 @@ type ManifestNode struct {
// level.
LogLevel string `toml:"log_level"`
// UseNewP2P enables use of the new p2p layer for this node.
DisableLegacyP2P bool `toml:"disable_legacy_p2p"`
// UseLegacyP2P enables use of the legacy p2p layer for this node.
UseLegacyP2P bool `toml:"use_legacy_p2p"`
}
// Save saves the testnet manifest to a file.

View File

@@ -50,6 +50,10 @@ const (
EvidenceAgeHeight int64 = 7
EvidenceAgeTime time.Duration = 500 * time.Millisecond
StateSyncP2P = "p2p"
StateSyncRPC = "rpc"
StateSyncDisabled = ""
)
// Testnet represents a single testnet.
@@ -81,7 +85,7 @@ type Node struct {
StartAt int64
BlockSync string
Mempool string
StateSync bool
StateSync string
Database string
ABCIProtocol Protocol
PrivvalProtocol Protocol
@@ -92,7 +96,7 @@ type Node struct {
PersistentPeers []*Node
Perturbations []Perturbation
LogLevel string
DisableLegacyP2P bool
UseLegacyP2P bool
QueueType string
}
@@ -177,7 +181,7 @@ func LoadTestnet(file string) (*Testnet, error) {
Perturbations: []Perturbation{},
LogLevel: manifest.LogLevel,
QueueType: manifest.QueueType,
DisableLegacyP2P: manifest.DisableLegacyP2P || nodeManifest.DisableLegacyP2P,
UseLegacyP2P: manifest.UseLegacyP2P && nodeManifest.UseLegacyP2P,
}
if node.StartAt == testnet.InitialHeight {
@@ -333,6 +337,11 @@ func (n Node) Validate(testnet Testnet) error {
default:
return fmt.Errorf("invalid block sync setting %q", n.BlockSync)
}
switch n.StateSync {
case StateSyncDisabled, StateSyncP2P, StateSyncRPC:
default:
return fmt.Errorf("invalid state sync setting %q", n.StateSync)
}
switch n.Mempool {
case "", "v0", "v1":
default:
@@ -366,7 +375,7 @@ func (n Node) Validate(testnet Testnet) error {
return fmt.Errorf("cannot start at height %v lower than initial height %v",
n.StartAt, n.Testnet.InitialHeight)
}
if n.StateSync && n.StartAt == 0 {
if n.StateSync != StateSyncDisabled && n.StartAt == 0 {
return errors.New("state synced nodes cannot start at the initial height")
}
if n.RetainBlocks != 0 && n.RetainBlocks < uint64(EvidenceAgeHeight) {
@@ -417,16 +426,6 @@ func (t Testnet) ArchiveNodes() []*Node {
return nodes
}
// RandomNode returns a random non-seed node.
func (t Testnet) RandomNode() *Node {
for {
node := t.Nodes[rand.Intn(len(t.Nodes))]
if node.Mode != ModeSeed {
return node
}
}
}
// IPv6 returns true if the testnet is an IPv6 network.
func (t Testnet) IPv6() bool {
return t.IP.IP.To4() == nil

View File

@@ -3,6 +3,7 @@ package main
import (
"bytes"
"context"
"errors"
"fmt"
"io/ioutil"
"math/rand"
@@ -29,7 +30,22 @@ const lightClientEvidenceRatio = 4
// DuplicateVoteEvidence.
func InjectEvidence(testnet *e2e.Testnet, amount int) error {
// select a random non-seed node
targetNode := testnet.RandomNode()
var targetNode *e2e.Node
for _, idx := range rand.Perm(len(testnet.Nodes)) {
targetNode = testnet.Nodes[idx]
if targetNode.Mode == e2e.ModeSeed {
targetNode = nil
continue
}
break
}
if targetNode == nil {
return errors.New("could not find node to inject evidence into")
}
logger.Info(fmt.Sprintf("Injecting evidence through %v (amount: %d)...", targetNode.Name, amount))

View File

@@ -1,6 +1,7 @@
package main
import (
"container/ring"
"context"
"crypto/rand"
"errors"
@@ -93,34 +94,64 @@ func loadGenerate(ctx context.Context, chTx chan<- types.Tx, multiplier int, siz
// loadProcess processes transactions
func loadProcess(ctx context.Context, testnet *e2e.Testnet, chTx <-chan types.Tx, chSuccess chan<- types.Tx) {
// Each worker gets its own client to each node, which allows for some
// concurrency while still bounding it.
clients := map[string]*rpchttp.HTTP{}
// Each worker gets its own client to each usable node, which
// allows for some concurrency while still bounding it.
clients := make([]*rpchttp.HTTP, 0, len(testnet.Nodes))
var err error
for tx := range chTx {
node := testnet.RandomNode()
client, ok := clients[node.Name]
if !ok {
client, err = node.Client()
if err != nil {
continue
}
// check that the node is up
_, err = client.Health(ctx)
if err != nil {
continue
}
clients[node.Name] = client
}
if _, err = client.BroadcastTxSync(ctx, tx); err != nil {
for idx := range testnet.Nodes {
// Construct a list of usable nodes for creating
// load. Don't send load through seed nodes because
// they do not provide the RPC endpoints required to
// broadcast transactions.
if testnet.Nodes[idx].Mode == e2e.ModeSeed {
continue
}
chSuccess <- tx
client, err := testnet.Nodes[idx].Client()
if err != nil {
continue
}
clients = append(clients, client)
}
if len(clients) == 0 {
panic("no clients to process load")
}
// Put the clients in a ring so they can be used in a
// round-robin fashion.
clientRing := ring.New(len(clients))
for idx := range clients {
clientRing.Value = clients[idx]
clientRing = clientRing.Next()
}
var err error
for {
select {
case <-ctx.Done():
return
case tx := <-chTx:
clientRing = clientRing.Next()
client := clientRing.Value.(*rpchttp.HTTP)
if _, err := client.Health(ctx); err != nil {
continue
}
if _, err = client.BroadcastTxSync(ctx, tx); err != nil {
continue
}
select {
case chSuccess <- tx:
continue
case <-ctx.Done():
return
}
}
}
}
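
The round-robin dispatch above, reduced to a standalone container/ring sketch with placeholder node names:

package main

import (
	"container/ring"
	"fmt"
)

func main() {
	names := []string{"validator01", "validator02", "full01"} // placeholders
	rr := ring.New(len(names))
	for _, n := range names {
		rr.Value = n
		rr = rr.Next()
	}
	// Advancing before each use cycles evenly through all entries.
	for i := 0; i < 5; i++ {
		rr = rr.Next()
		fmt.Println("dispatch to", rr.Value.(string))
	}
}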

View File

@@ -168,12 +168,21 @@ func NewCLI() *CLI {
},
})
cli.root.AddCommand(&cobra.Command{
Use: "pause",
Short: "Pauses the Docker testnet",
RunE: func(cmd *cobra.Command, args []string) error {
logger.Info("Pausing testnet")
return execCompose(cli.testnet.Dir, "pause")
},
})
cli.root.AddCommand(&cobra.Command{
Use: "resume",
Short: "Resumes the Docker testnet",
RunE: func(cmd *cobra.Command, args []string) error {
logger.Info("Resuming testnet")
return execCompose(cli.testnet.Dir, "up")
return execCompose(cli.testnet.Dir, "unpause")
},
})

View File

@@ -59,7 +59,11 @@ func PerturbNode(node *e2e.Node, perturbation e2e.Perturbation) (*rpctypes.Resul
case e2e.PerturbationRestart:
logger.Info(fmt.Sprintf("Restarting node %v...", node.Name))
if err := execCompose(testnet.Dir, "restart", node.Name); err != nil {
if err := execCompose(testnet.Dir, "kill", "-s", "SIGTERM", node.Name); err != nil {
return nil, err
}
time.Sleep(10 * time.Second)
if err := execCompose(testnet.Dir, "start", node.Name); err != nil {
return nil, err
}

View File

@@ -238,7 +238,7 @@ func MakeConfig(node *e2e.Node) (*config.Config, error) {
cfg.RPC.PprofListenAddress = ":6060"
cfg.P2P.ExternalAddress = fmt.Sprintf("tcp://%v", node.AddressP2P(false))
cfg.P2P.AddrBookStrict = false
cfg.P2P.DisableLegacy = node.DisableLegacyP2P
cfg.P2P.UseLegacy = node.UseLegacyP2P
cfg.P2P.QueueType = node.QueueType
cfg.DBBackend = node.Database
cfg.StateSync.DiscoveryTime = 5 * time.Second
@@ -297,15 +297,18 @@ func MakeConfig(node *e2e.Node) (*config.Config, error) {
}
if node.BlockSync == "" {
cfg.FastSyncMode = false
cfg.BlockSync.Enable = false
} else {
cfg.BlockSync.Version = node.BlockSync
}
if node.StateSync {
switch node.StateSync {
case e2e.StateSyncP2P:
cfg.StateSync.Enable = true
cfg.StateSync.UseP2P = true
case e2e.StateSyncRPC:
cfg.StateSync.Enable = true
cfg.StateSync.RPCServers = []string{}
for _, peer := range node.Testnet.ArchiveNodes() {
if peer.Name == node.Name {
continue
@@ -342,17 +345,17 @@ func MakeConfig(node *e2e.Node) (*config.Config, error) {
// MakeAppConfig generates an ABCI application config for a node.
func MakeAppConfig(node *e2e.Node) ([]byte, error) {
cfg := map[string]interface{}{
"chain_id": node.Testnet.Name,
"dir": "data/app",
"listen": AppAddressUNIX,
"mode": node.Mode,
"proxy_port": node.ProxyPort,
"protocol": "socket",
"persist_interval": node.PersistInterval,
"snapshot_interval": node.SnapshotInterval,
"retain_blocks": node.RetainBlocks,
"key_type": node.PrivvalKey.Type(),
"disable_legacy_p2p": node.DisableLegacyP2P,
"chain_id": node.Testnet.Name,
"dir": "data/app",
"listen": AppAddressUNIX,
"mode": node.Mode,
"proxy_port": node.ProxyPort,
"protocol": "socket",
"persist_interval": node.PersistInterval,
"snapshot_interval": node.SnapshotInterval,
"retain_blocks": node.RetainBlocks,
"key_type": node.PrivvalKey.Type(),
"use_legacy_p2p": node.UseLegacyP2P,
}
switch node.ABCIProtocol {
case e2e.ProtocolUNIX:

View File

@@ -64,16 +64,6 @@ func Start(testnet *e2e.Testnet) error {
return err
}
// Update any state sync nodes with a trusted height and hash
for _, node := range nodeQueue {
if node.StateSync || node.Mode == e2e.ModeLight {
err = UpdateConfigStateSync(node, block.Height, blockID.Hash.Bytes())
if err != nil {
return err
}
}
}
for _, node := range nodeQueue {
if node.StartAt > networkHeight {
// if we're starting a node that's ahead of
@@ -85,16 +75,19 @@ func Start(testnet *e2e.Testnet) error {
networkHeight = node.StartAt
logger.Info("Waiting for network to advance before starting catch up node",
"node", node.Name,
"height", networkHeight)
if _, _, err := waitForHeight(testnet, networkHeight); err != nil {
block, blockID, err = waitForHeight(testnet, networkHeight)
if err != nil {
return err
}
}
logger.Info("Starting catch up node", "node", node.Name, "height", node.StartAt)
// Update any state sync nodes with a trusted height and hash
if node.StateSync != e2e.StateSyncDisabled || node.Mode == e2e.ModeLight {
err = UpdateConfigStateSync(node, block.Height, blockID.Hash.Bytes())
if err != nil {
return err
}
}
if err := execCompose(testnet.Dir, "up", "-d", node.Name); err != nil {
return err

View File

@@ -34,7 +34,7 @@ func TestBlock_Header(t *testing.T) {
}
// the first blocks after state sync come from the backfill process
// and are therefore not complete
if node.StateSync && block.Header.Height <= first+e2e.EvidenceAgeHeight+1 {
if node.StateSync != e2e.StateSyncDisabled && block.Header.Height <= first+e2e.EvidenceAgeHeight+1 {
continue
}
if block.Header.Height > last {
@@ -70,7 +70,7 @@ func TestBlock_Range(t *testing.T) {
switch {
// if the node state synced, we skip these assertions because it is hard
// to know how far back the node ran reverse sync
case node.StateSync:
case node.StateSync != e2e.StateSyncDisabled:
break
case node.RetainBlocks > 0 && int64(node.RetainBlocks) < (last-node.Testnet.InitialHeight+1):
// Delta handles race conditions in reading first/last heights.
@@ -83,7 +83,7 @@ func TestBlock_Range(t *testing.T) {
}
for h := first; h <= last; h++ {
if node.StateSync && h <= first+e2e.EvidenceAgeHeight+1 {
if node.StateSync != e2e.StateSyncDisabled && h <= first+e2e.EvidenceAgeHeight+1 {
continue
}
resp, err := client.Block(ctx, &(h))

View File

@@ -1,4 +1,4 @@
package mempool
package types
import (
"errors"