Compare commits

..

335 Commits

Author SHA1 Message Date
William Banfield
32cd294d92 add logging 2022-01-21 09:30:49 -05:00
William Banfield
5d0671e95f context as param in pbts test 2022-01-20 18:48:22 -05:00
William Banfield
b853afe2ce Merge remote-tracking branch 'origin/master' into wb/pbts-rebase-master 2022-01-20 18:18:24 -05:00
William Banfield
4d1ad8ec38 re-add helper 2022-01-20 18:01:43 -05:00
William Banfield
c49f8bc596 log fixups 2022-01-20 17:41:37 -05:00
William Banfield
fadd16985e ctx fixup continued 2022-01-20 17:37:34 -05:00
William Banfield
e2ec93350e context position fixup 2022-01-20 17:35:05 -05:00
William Banfield
db7d4abdae consensus: fix height advances in test state (#7648)
The problem with `TestStateFullRound1` is that the state we are observing, `cs`, can advance to the next height before we query its data. Specifically, on line `388`, when we called `validatePrevote`, the `cs` State had already advanced to height 2, so querying that State for the votes of height 1 either yielded nil or an erroneous value. This change adds an `ensurePrevoteMatch` function that checks, in a single step, both that the prevote occurred and that it is for the expected block. If this change looks reasonable, I can apply the same fix to all of the places where we perform `ensurePrevote` followed by `validatePrevote`.
2022-01-20 22:21:41 +00:00
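
As an illustration of the fix described in the commit above, here is a minimal, self-contained sketch of combining the "wait for prevote" and "validate prevote" steps into a single helper, so the observed state cannot advance between the two checks. The `Vote` type, the `voteCh` channel, and this version of `ensurePrevoteMatch` are hypothetical stand-ins for the consensus test plumbing, not the code from #7648.

```go
package consensus_test

import (
	"bytes"
	"testing"
	"time"
)

// Vote is a hypothetical stand-in for the consensus vote type.
type Vote struct {
	Height    int64
	Round     int32
	BlockHash []byte
}

// ensurePrevoteMatch waits for the next prevote on voteCh and, in the same
// step, asserts that it is for the expected height, round, and block. Doing
// both at once avoids querying consensus state that may already have advanced
// to the next height by the time a separate validation step runs.
func ensurePrevoteMatch(t *testing.T, voteCh <-chan Vote, height int64, round int32, hash []byte) {
	t.Helper()
	select {
	case vote := <-voteCh:
		if vote.Height != height || vote.Round != round {
			t.Fatalf("prevote for height/round %d/%d, expected %d/%d",
				vote.Height, vote.Round, height, round)
		}
		if !bytes.Equal(vote.BlockHash, hash) {
			t.Fatalf("prevote for block %X, expected %X", vote.BlockHash, hash)
		}
	case <-time.After(time.Second):
		t.Fatal("timed out waiting for prevote")
	}
}
```
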
William Banfield
b18d06bd42 mutateValidatorSet takes context 2022-01-20 17:19:45 -05:00
William Banfield
07336712ad context add to randLightBlock 2022-01-20 17:07:46 -05:00
Sam Kleinman
78e4c7d379 autofile: avoid shutdown race (#7650) 2022-01-20 17:06:44 -05:00
William Banfield
2a7497fd9f contextify factory functions 2022-01-20 16:57:49 -05:00
William Banfield
d1fe539c2e remove unused tests 2022-01-20 16:28:58 -05:00
William Banfield
c7f6d80148 update propose step log 2022-01-20 15:56:11 -05:00
William Banfield
02f7d0ae6e use more specific require statements 2022-01-20 15:54:26 -05:00
Sam Kleinman
9dd67ad99d tests: update cleanup opportunities (#7647) 2022-01-20 15:48:26 -05:00
kmax.eth
449e127e6c privval: avoid re-signing vote when RHS and signbytes are equal (#7592)
* avoid re-signing vote when RHS and signbytes are equal

* avoid re-signing proposal when RHS and signbytes are equal

Co-authored-by: Callum Waters <cmwaters19@gmail.com>
Co-authored-by: William Banfield <4561443+williambanfield@users.noreply.github.com>
2022-01-20 15:06:13 -05:00
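
The pattern behind this commit can be sketched as follows: if the new request is for the same height/round/step (HRS) as the last signing operation and the sign bytes are byte-for-byte identical, return the saved signature rather than signing again. Below is a minimal, self-contained sketch of that idea with hypothetical, simplified types (`lastSignState`, `maybeReuseSignature`); it is not the actual privval implementation.

```go
package main

import (
	"bytes"
	"fmt"
)

// lastSignState is a hypothetical, simplified record of the last signing
// operation: the height/round/step, the exact bytes that were signed, and the
// signature that was produced.
type lastSignState struct {
	Height    int64
	Round     int32
	Step      int8
	SignBytes []byte
	Signature []byte
}

// maybeReuseSignature returns the previously produced signature when the new
// request is for the same HRS and the sign bytes are identical, avoiding an
// unnecessary second signature over the same payload.
func maybeReuseSignature(last lastSignState, height int64, round int32, step int8, signBytes []byte) ([]byte, bool) {
	if last.Height == height && last.Round == round && last.Step == step &&
		bytes.Equal(last.SignBytes, signBytes) {
		return last.Signature, true
	}
	return nil, false
}

func main() {
	last := lastSignState{Height: 10, Round: 0, Step: 1, SignBytes: []byte("vote-bytes"), Signature: []byte("sig")}
	if sig, ok := maybeReuseSignature(last, 10, 0, 1, []byte("vote-bytes")); ok {
		fmt.Printf("re-using existing signature: %s\n", sig)
	}
}
```
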
Sam Kleinman
9f4f51318c tests: update docker versions to match build version (#7646) 2022-01-20 12:17:32 -05:00
Jasmina Malicevic
d68d25dcd5 light: return light client status on rpc /status (#7536)
* light: rpc /status returns status of light client; code refactoring
* light: moved lightClientInfo into light.go, renamed String to ID
* test/e2e: return light client trusted height instead of SyncInfo trusted height
* test/e2e/start.go: do not wait for the light client to catch up in tests; removed querying of syncInfo in start if the node is a light node

* light: Removed call to primary /status. Added trustedPeriod to light info
* light/provider: added ID function to return IP of primary and witnesses
* light/provider/http/http_test: renamed String() to ID()
2022-01-20 14:53:20 +01:00
Sam Kleinman
4e5c2b5e8f consensus: use delivertxsync (#7616) 2022-01-19 16:58:12 -05:00
Gui
ebbc3f02f5 p2p: always advertise self, to enable mutual address discovery (#7594)
Fixes #7593
2022-01-19 21:39:59 +00:00
M. J. Fromberger
c8e8a62084 abci/client: simplify client interface (#7607)
This change has two main effects:

1. Remove most of the Async methods from the abci.Client interface.
   Remaining are FlushAsync, CommitTxAsync, and DeliverTxAsync.

2. Rename the synchronous methods to remove the "Sync" suffix.

The rest of the change is updating the implementations, subsets, and mocks of
the interface, along with the call sites that point to them.

* Fix stringly-typed mock stubs.
* Rename helper method.
2022-01-19 10:58:56 -08:00
M. J. Fromberger
68d4fed236 consensus/state: avert a data race with state update and tests (#7643) 2022-01-19 10:47:06 -08:00
William Banfield
7bc3a7274a privval: do not use old proposal timestamp (#7621)
After #7592, @cmwaters noticed that the logic for re-using old timestamps for proposals may not work with proposer-based timestamps. This change removes the logic to re-use old proposal timestamps since it is no longer correct. Two proposals with different timestamps can no longer be treated as equivalent. Signing a proposal that only differs by timestamp in the new algorithm can be thought of as roughly equivalent to signing a proposal that only differs by `BlockID` in the old scheme. 

I also investigated the codebase and checked for any place we updated a timestamp using the pattern `(Timestamp = |Timestamp: )` and saw no additional places where we are updating the timestamp of a proposal message. 

Here is the output of that search:

```
privval/file.go:372:			vote.Timestamp = timestamp
privval/file.go:453:	lastVote.Timestamp = now
privval/file.go:454:	newVote.Timestamp = now
internal/test/factory/commit.go:25:			Timestamp:        now,
internal/test/factory/vote.go:34:		Timestamp:        time,
internal/consensus/state.go:2261:		Timestamp:        cs.voteTime(),
internal/consensus/state.go:2286:	vote.Timestamp = v.Timestamp
light/detector.go:414:		ev.Timestamp = common.Time
light/detector.go:418:		ev.Timestamp = trusted.Time
types/block.go:616:		Timestamp:        ts,
types/block.go:725:		Timestamp:        cs.Timestamp,
types/block.go:736:	cs.Timestamp = csp.Timestamp
types/block.go:800:		Timestamp:        commitSig.Timestamp,
types/evidence.go:84:		Timestamp:        blockTime,
types/evidence.go:190:	dve.Timestamp = evidenceTime
types/evidence.go:202:		Timestamp:        dve.Timestamp,
types/evidence.go:228:		Timestamp:        pb.Timestamp,
types/evidence.go:382:		Timestamp: %v}#%X`,
types/evidence.go:491:	l.Timestamp = evidenceTime
types/evidence.go:517:		Timestamp:           l.Timestamp,
types/evidence.go:546:		Timestamp:           lpb.Timestamp,
types/evidence.go:722:		Timestamp:        time,
types/vote.go:80:		Timestamp:        vote.Timestamp,
types/vote.go:216:		Timestamp:        vote.Timestamp,
types/vote.go:240:	vote.Timestamp = pv.Timestamp
types/test_util.go:27:			Timestamp:        now,
types/proposal.go:44:		Timestamp: tmtime.Now(),
types/proposal.go:132:	pb.Timestamp = p.Timestamp
types/proposal.go:157:	p.Timestamp = pp.Timestamp
types/canonical.go:49:		Timestamp: proposal.Timestamp,
types/canonical.go:62:		Timestamp: vote.Timestamp,
test/e2e/runner/evidence.go:186:		Timestamp:        evTime,
```
2022-01-19 17:42:09 +00:00
M. J. Fromberger
a806739375 abci/client: use a no-op logger in the test (#7633)
This averts a log-after-close issue. We should probably also chase the shutdown
issues, but since ABCI clients should generally only shut down once per process
I don't think this is a real priority, and the trace is hairy.
2022-01-19 08:58:35 -08:00
M. J. Fromberger
1af4113033 Increase test splits from 4 to 6. (#7630)
Decrease the likelihood that two flaky tests will hit
the same batch.

Account for the increase in test load from #7608.
2022-01-19 08:24:12 -08:00
dependabot[bot]
e6f6a13f8a build(deps): Bump github.com/prometheus/client_golang (#7636) 2022-01-19 11:50:52 +01:00
M. J. Fromberger
aea428d322 build: Make sure to test packages with external tests (#7608)
The test filter was looking for "TestGoFiles", which does not include tests in
a separate package (e.g., "package foo_test" for "package foo").
This caused several packages not to be tested in CI, including:

  github.com/tendermint/tendermint/abci/client
  github.com/tendermint/tendermint/crypto
  github.com/tendermint/tendermint/crypto/tmhash
  github.com/tendermint/tendermint/internal/eventbus
  github.com/tendermint/tendermint/internal/evidence
  github.com/tendermint/tendermint/internal/inspect
  github.com/tendermint/tendermint/internal/jsontypes
  github.com/tendermint/tendermint/internal/libs/protoio
  github.com/tendermint/tendermint/internal/libs/sync
  github.com/tendermint/tendermint/internal/p2p/pex
  github.com/tendermint/tendermint/internal/pubsub
  github.com/tendermint/tendermint/internal/pubsub/query
  github.com/tendermint/tendermint/internal/pubsub/query/syntax
  github.com/tendermint/tendermint/internal/state/indexer
  github.com/tendermint/tendermint/internal/state/indexer/block/kv
  github.com/tendermint/tendermint/libs/json
  github.com/tendermint/tendermint/libs/log
  github.com/tendermint/tendermint/libs/os
  github.com/tendermint/tendermint/light
  github.com/tendermint/tendermint/light/provider/http
  github.com/tendermint/tendermint/privval/grpc
  github.com/tendermint/tendermint/proto/tendermint/blocksync
  github.com/tendermint/tendermint/proto/tendermint/consensus
  github.com/tendermint/tendermint/proto/tendermint/statesync
  github.com/tendermint/tendermint/rpc/client
  github.com/tendermint/tendermint/rpc/client/mock
  github.com/tendermint/tendermint/test/e2e/tests
  github.com/tendermint/tendermint/test/fuzz/mempool
  github.com/tendermint/tendermint/test/fuzz/p2p/secretconnection
  github.com/tendermint/tendermint/test/fuzz/rpc/jsonrpc/server

Updates #7626 and #7634.
2022-01-18 18:36:46 -08:00
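
For context, Go splits a package's tests into in-package test files ("TestGoFiles", declared as `package foo`) and external test files ("XTestGoFiles", declared as `package foo_test`), and a filter that only matches the former misses packages whose tests all live in the latter. This small standalone snippet, not taken from the repository, shows the same split through the go/build package:

```go
package main

import (
	"fmt"
	"go/build"
	"log"
)

func main() {
	// Import the package in the current directory and report which files hold
	// its in-package tests (TestGoFiles) vs. its external "_test" package
	// tests (XTestGoFiles). A CI filter that only inspects TestGoFiles will
	// report "no tests" for packages whose tests all live in the external
	// test package.
	pkg, err := build.ImportDir(".", 0)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("TestGoFiles: ", pkg.TestGoFiles)  // files in "package foo"
	fmt.Println("XTestGoFiles:", pkg.XTestGoFiles) // files in "package foo_test"
}
```
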
William Banfield
b6307c42e0 consensus: check proposal non-nil in prevote message delay metric (#7625) 2022-01-18 19:57:00 -05:00
William Banfield
dd0d273223 Merge branch 'master' into wb/pbts-rebase-master 2022-01-18 19:51:39 -05:00
M. J. Fromberger
5eae2e62c0 privval: synchronize leak check with shutdown (#7629)
The interaction between defers and t.Cleanup can be delicate.
For this case, which regularly flakes in CI, be explicit:
Defer the closes and waits before making any attempt to leaktest.
2022-01-18 16:36:09 -08:00
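
A minimal sketch of the ordering concern, assuming the github.com/fortytw2/leaktest package commonly used in these tests: because deferred calls run last-in, first-out, registering the leak check first means it fires only after the later-deferred close and wait have completed. The `fakeService` type is hypothetical.

```go
package example_test

import (
	"sync"
	"testing"

	"github.com/fortytw2/leaktest"
)

// fakeService is a hypothetical stand-in for a service with an explicit
// shutdown sequence and a background goroutine.
type fakeService struct {
	done chan struct{}
	wg   sync.WaitGroup
}

func startFakeService() *fakeService {
	s := &fakeService{done: make(chan struct{})}
	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		<-s.done // exits when Close is called
	}()
	return s
}

func (s *fakeService) Close() { close(s.done) }
func (s *fakeService) Wait()  { s.wg.Wait() }

func TestShutdownBeforeLeakCheck(t *testing.T) {
	// Registered first, so it runs *last* (defers are LIFO): by the time the
	// leak check fires, Close and Wait below have already completed.
	defer leaktest.Check(t)()

	s := startFakeService()
	defer s.Wait()
	defer s.Close()

	// ... exercise the service ...
}
```
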
M. J. Fromberger
a7eb95065d autofile: ensure files are not reopened after closing (#7628)
During file rotation and WAL shutdown, there was a race condition between users
of an autofile and its termination. To fix this, ensure operations on an
autofile are properly synchronized, and report errors when attempting to use an
autofile after it was closed.

Notably:

- Simplify the cancellation protocol between signal and Close.
- Exclude writers to an autofile during rotation.
- Add documentation about what is going on.

There is a lot more that could be improved here, but this addresses the more
obvious races that have been panicking unit tests.
2022-01-18 14:57:20 -08:00
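
The underlying pattern can be sketched with a simplified, hypothetical `guardedFile` type (not the actual autofile implementation): every operation takes the same mutex, and once Close has run, subsequent writes return an error instead of racing with, or reopening, the closed file.

```go
package main

import (
	"errors"
	"os"
	"sync"
)

var errAlreadyClosed = errors.New("file has already been closed")

// guardedFile synchronizes writes with Close so a writer can never race with
// termination or touch the file after it has been closed.
type guardedFile struct {
	mu     sync.Mutex
	file   *os.File
	closed bool
}

func (g *guardedFile) Write(p []byte) (int, error) {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.closed {
		return 0, errAlreadyClosed
	}
	return g.file.Write(p)
}

func (g *guardedFile) Close() error {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.closed {
		return nil
	}
	g.closed = true
	return g.file.Close()
}

func main() {
	f, err := os.CreateTemp("", "guarded-*")
	if err != nil {
		panic(err)
	}
	g := &guardedFile{file: f}
	_, _ = g.Write([]byte("hello\n"))
	_ = g.Close()
	if _, err := g.Write([]byte("again")); errors.Is(err, errAlreadyClosed) {
		// Writes after Close now fail cleanly instead of racing.
	}
}
```
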
William Banfield
d33a83f194 Merge branch 'master' into wb/pbts-rebase-master 2022-01-18 17:50:14 -05:00
M. J. Fromberger
5cca45bb45 pex: improve handling of closed channels (#7623)
Reverts and improves on #7622. The problem turns out not to be on the PEX
channel side, but on the pass-through (Go) channel.
2022-01-18 14:32:22 -08:00
William Banfield
d5aaa1af75 fix typo in state_test 2022-01-18 17:16:05 -05:00
William Banfield
82d46fe871 remove nolint: ll directive 2022-01-18 16:26:13 -05:00
William Banfield
e30434ad5a reduce ensureTimeout 2022-01-18 16:24:43 -05:00
William Banfield
40b7093e85 update changelog pending 2022-01-18 16:22:17 -05:00
M. J. Fromberger
417166704a pex: do not send nil envelopes to the reactor (#7622) 2022-01-18 12:01:04 -08:00
M. J. Fromberger
7fd97bf44b pex: avert a data race on map access in the reactor (#7614)
There was a path on which computing the next delivery time did not hold the
lock, defying the admonition on its comment.
2022-01-18 07:22:50 -08:00
William Banfield
0c82ceaa5f consensus: calculate prevote message delay metric (#7551)
## What does this pull request do?
This pull request adds two metrics intended for use in calculating an experimental value for `MessageDelay`.

The metrics are as follows:
```
# HELP tendermint_consensus_complete_prevote_message_delay Difference in seconds between the proposal timestamp and the timestamp of the prevote that achieved 100% of the voting power in the prevote step.
# TYPE tendermint_consensus_complete_prevote_message_delay gauge
tendermint_consensus_complete_prevote_message_delay{chain_id="test-chain-aZbwF1"} 0.013025505

# HELP tendermint_consensus_quorum_prevote_message_delay Difference in seconds between the proposal timestamp and the timestamp of the prevote that achieved a quorum in the prevote step.
# TYPE tendermint_consensus_quorum_prevote_message_delay gauge
tendermint_consensus_quorum_prevote_message_delay{chain_id="test-chain-aZbwF1"} 0.013025505
```

## Why this change?

For more information on what these metrics are calculating, see #7202. The aim is to backport these metrics to v0.34 and run nodes on a few popular chains with them to determine experimental values for `MessageDelay`, which we will then use to select our default `SynchronyParams.MessageDelay` value.

## Why Gauges for the metrics?
Gauges allow us to overwrite the metric on each successive observation. We can then capture these metrics over time to track the highest and lowest observed value.
2022-01-18 14:55:18 +00:00
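
For reference, a minimal standalone sketch of how a gauge like these can be defined and updated with the Prometheus Go client; the metric and label names mirror the HELP output above, but the surrounding code (the `observeQuorumDelay` helper and how the timestamps are obtained) is illustrative rather than the consensus package's actual wiring.

```go
package main

import (
	"fmt"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var quorumPrevoteMessageDelay = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Namespace: "tendermint",
		Subsystem: "consensus",
		Name:      "quorum_prevote_message_delay",
		Help: "Difference in seconds between the proposal timestamp and the " +
			"timestamp of the prevote that achieved a quorum in the prevote step.",
	},
	[]string{"chain_id"},
)

// observeQuorumDelay overwrites the gauge with the latest observed delay.
// Using a gauge (rather than a counter or histogram) means each new
// observation replaces the previous one, which is what the commit describes.
func observeQuorumDelay(chainID string, proposalTime, quorumPrevoteTime time.Time) {
	quorumPrevoteMessageDelay.With(prometheus.Labels{"chain_id": chainID}).
		Set(quorumPrevoteTime.Sub(proposalTime).Seconds())
}

func main() {
	reg := prometheus.NewRegistry()
	reg.MustRegister(quorumPrevoteMessageDelay)

	proposal := time.Now()
	observeQuorumDelay("test-chain-aZbwF1", proposal, proposal.Add(13*time.Millisecond))
	fmt.Println("gauge set; expose it via promhttp to scrape it")
}
```
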
Sam Kleinman
b10c74647f testing: use noop logger with leaktest in more places (#7604) 2022-01-18 09:08:20 -05:00
Sam Kleinman
c0b56e207a consensus: test shutdown to avoid hangs (#7603) 2022-01-18 08:55:13 -05:00
Kene
49153b753c rpc: paginate mempool /unconfirmed_txs endpoint (#7612)
This commit changes the behaviour of the /unconfirmed_txs endpoint by replacing the limit parameter with page and perPage parameters for pagination.
The test case for unconfirmed_txs has been updated to properly test this change, and the API documentation has been updated as well.
2022-01-18 10:58:32 +01:00
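
A small, self-contained sketch of the page/perPage style of pagination described above, showing how a page number and per-page size map onto a slice of queued transactions. The parameter validation and names here are illustrative, not the RPC package's actual implementation.

```go
package main

import "fmt"

// paginate returns the start (inclusive) and end (exclusive) indexes for the
// requested page, clamping both to the total number of items.
func paginate(total, page, perPage int) (start, end int) {
	if page < 1 {
		page = 1
	}
	if perPage < 1 {
		perPage = 30 // illustrative default
	}
	start = (page - 1) * perPage
	if start > total {
		start = total
	}
	end = start + perPage
	if end > total {
		end = total
	}
	return start, end
}

func main() {
	txs := []string{"tx0", "tx1", "tx2", "tx3", "tx4"}
	start, end := paginate(len(txs), 2, 2)
	fmt.Println(txs[start:end]) // [tx2 tx3]
}
```
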
M. J. Fromberger
679b6a65b8 light: fix provider error plumbing (#7610)
The custom error types in the provider package did not propagate their wrapped
underlying reasons, making it difficult for the test to check that the correct
error was observed.

- Fix the custom errors to have a true underlying error (not just a string).
- Add Unwrap methods to support inspection by errors.Is.
- Update usage in a few places.
- Fix the test to check for acceptable variation.

Fixes #7609.
2022-01-16 13:48:21 -08:00
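
A compact illustration of the error-plumbing fix described above: give a custom error type a true underlying error and an Unwrap method so callers can use errors.Is and errors.As. The `ErrNoResponse` type here is hypothetical, not the provider package's actual type.

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// ErrNoResponse wraps the real underlying reason a provider call failed,
// instead of flattening it into a string.
type ErrNoResponse struct {
	Reason error
}

func (e ErrNoResponse) Error() string {
	return fmt.Sprintf("no response from provider: %v", e.Reason)
}

// Unwrap exposes the underlying error so errors.Is and errors.As can see it.
func (e ErrNoResponse) Unwrap() error { return e.Reason }

func main() {
	err := ErrNoResponse{Reason: context.DeadlineExceeded}

	// Tests (and callers) can now check the root cause precisely.
	fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true
}
```
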
M. J. Fromberger
c24f003b55 protoio: fix incorrect test assertion (#7606)
After writing and then reading a bunch of random messages, the test was
checking that it did not read the same number of messages that it wrote.
The sense of this check was inverted; they should match.

Introduced by accident in #7522. I'm not sure why this did not show up in CI.

Edit: I now know why it didn't show up in CI: #7608.
2022-01-16 20:39:37 +00:00
Sebastian Gröbler
701e6026c5 Update README.md (#7602)
Fixed typo in exchange

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2022-01-14 18:33:47 -08:00
M. J. Fromberger
dbe2146d0a rpc: simplify the encoding of interface-typed arguments in JSON (#7600)
Add package jsontypes that implements a subset of the custom libs/json 
package. Specifically it handles encoding and decoding of interface types
wrapped in "tagged" JSON objects. It omits the deep reflection on arbitrary
types, preserving only the handling of type-tag wrapper encoding.

- Register interface types (Evidence, PubKey, PrivKey) for tagged encoding.
- Update the existing implementations to satisfy the type.
- Register those types with the jsontypes registry.
- Add string tags to 64-bit integer fields where needed.
- Add marshalers to structs that export interface-typed fields.
2022-01-14 18:14:09 -08:00
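
A minimal sketch of the "tagged" wrapper encoding the jsontypes package handles: an interface value is serialized as an object carrying a registered type tag alongside the value, so it can be routed back to the right concrete type on decode. The registry and helper names below are illustrative, not the package's actual API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// tagged is the wrapper shape: {"type": "<tag>", "value": <object>}.
type tagged struct {
	Type  string          `json:"type"`
	Value json.RawMessage `json:"value"`
}

type Ed25519PubKey struct {
	Key []byte `json:"key"`
}

// registry maps type tags to factories for the concrete types. Real code
// would register these at init time.
var registry = map[string]func() interface{}{
	"tendermint/PubKeyEd25519": func() interface{} { return &Ed25519PubKey{} },
}

func marshalTagged(tag string, v interface{}) ([]byte, error) {
	raw, err := json.Marshal(v)
	if err != nil {
		return nil, err
	}
	return json.Marshal(tagged{Type: tag, Value: raw})
}

func unmarshalTagged(data []byte) (interface{}, error) {
	var t tagged
	if err := json.Unmarshal(data, &t); err != nil {
		return nil, err
	}
	mk, ok := registry[t.Type]
	if !ok {
		return nil, fmt.Errorf("unknown type tag %q", t.Type)
	}
	v := mk()
	return v, json.Unmarshal(t.Value, v)
}

func main() {
	b, _ := marshalTagged("tendermint/PubKeyEd25519", Ed25519PubKey{Key: []byte{1, 2, 3}})
	fmt.Println(string(b))

	v, err := unmarshalTagged(b)
	fmt.Println(v, err)
}
```
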
William Banfield
8810d038d9 Merge remote-tracking branch 'origin' into wb/pbts-rebase-master 2022-01-14 20:16:05 -05:00
William Banfield
88db4cdf7e remove noisy logger 2022-01-14 19:55:51 -05:00
Anca Zamfir
330eb8ded0 Remove voteTime() (#7563) 2022-01-14 19:52:15 -05:00
William Banfield
9f52d9fbd7 proto: rename timing params to synchrony params (#7554) 2022-01-14 19:52:12 -05:00
Anca Zamfir
8eec8447e8 Prevote nil if proposal is not timely (#7415)
* Prevote nil if not timely

* William's suggestion to get the proposal from the proposer instead of
generating it.

* Don't check rhs for genesis block

* Update IsTimely to match the specification

* Fix proposal tests

* Add more timely tests and check votes

* Mark proposal invalid in SetProposal, fix in the future test

* save proposal time on roundstate

* received -> receive

* always reset proposal time

* Add IsTimely test for genesis proposal

* Check timely before ValidateBlock

* Review comments from Daniel

Co-authored-by: William Banfield <wbanfield@gmail.com>
2022-01-14 19:50:36 -05:00
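
For background, the "timely" predicate from proposer-based timestamps can be sketched roughly as follows: a proposal's timestamp is accepted only if, adjusted for the assumed clock precision and message delay, it is consistent with the local time at which the proposal was received. This is a hedged sketch based on the spec description these commits reference, with simplified names; it is not the exact implementation.

```go
package main

import (
	"fmt"
	"time"
)

// SynchronyParams is a simplified stand-in for the consensus synchrony
// parameters introduced by proposer-based timestamps.
type SynchronyParams struct {
	Precision    time.Duration // assumed bound on clock drift between validators
	MessageDelay time.Duration // assumed bound on proposal propagation time
}

// isTimely reports whether a proposal with timestamp proposalTime, received at
// recvTime on the local clock, falls inside the window allowed by the
// synchrony parameters:
//
//	proposalTime - Precision <= recvTime <= proposalTime + MessageDelay + Precision
func isTimely(proposalTime, recvTime time.Time, sp SynchronyParams) bool {
	lhs := proposalTime.Add(-sp.Precision)
	rhs := proposalTime.Add(sp.MessageDelay + sp.Precision)
	return !recvTime.Before(lhs) && !recvTime.After(rhs)
}

func main() {
	sp := SynchronyParams{Precision: 500 * time.Millisecond, MessageDelay: 2 * time.Second}
	proposal := time.Now()

	fmt.Println(isTimely(proposal, proposal.Add(1*time.Second), sp)) // true: within the window
	fmt.Println(isTimely(proposal, proposal.Add(5*time.Second), sp)) // false: received too late, prevote nil
}
```
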
William Banfield
9e56a95473 consensus: check that proposal is non-nil before voting (#7480) 2022-01-14 19:47:07 -05:00
Anca Zamfir
d444a71ec7 Fix pbts tests (#7413)
* Allow nil block ID check in ensureProposalWithTimout

* William's suggestion to get the proposal from the proposer instead of
generating it.

* Remove error check on service stop
2022-01-14 19:47:03 -05:00
Anca Zamfir
47e0223356 Fix compilation 2022-01-14 19:44:48 -05:00
William Banfield
2ff233d944 internal/consensus: prevote nil if proposal timestamp does not match (#7391)
This change updates the proposal logic to use the block's timestamp in the proposal message. It adds an additional piece of validation logic to the prevote step to check that the block's timestamp matches the proposal message's timestamp.
2022-01-14 19:42:39 -05:00
William Banfield
e17178d70d internal/consensus: remove proposal wait time (#7418) 2022-01-14 19:26:58 -05:00
Anca Zamfir
14f4db7c19 Remove MedianTime, set block time to Now() (#7382)
* Remove MedianTime, set block time to Now()

* Fix goimports

* Fix import ordering
2022-01-14 19:23:42 -05:00
William Banfield
fa1afef125 internal/consensus: proposer waits for previous block time (#7376)
This change introduces the logic to have the proposer wait until the previous block time has passed before attempting to propose the next block.

The change achieves this by adding a new clause to the enterPropose state machine method. The method now checks if the validator is the proposer and if the validator's clock is behind the previous block's time. If it is, it schedules a timeout to re-enter enterPropose after enough time has passed.
2022-01-14 19:23:37 -05:00
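
A rough sketch of the scheduling logic just described: if the proposer's local clock has not yet passed the previous block's timestamp, compute how long to wait and schedule a timeout for that duration before proposing. The function and field names are hypothetical simplifications of the enterPropose change, not the actual code.

```go
package main

import (
	"fmt"
	"time"
)

// proposerWaitTime returns how long the proposer should wait before proposing:
// zero if the local clock is already past the previous block's timestamp,
// otherwise the remaining gap.
func proposerWaitTime(now, previousBlockTime time.Time) time.Duration {
	if now.After(previousBlockTime) {
		return 0
	}
	return previousBlockTime.Sub(now)
}

func main() {
	prev := time.Now().Add(150 * time.Millisecond) // previous block timestamp slightly ahead of this clock

	if wait := proposerWaitTime(time.Now(), prev); wait > 0 {
		// In the consensus state machine this would schedule a timeout that
		// re-enters the propose step; here we just sleep to illustrate.
		fmt.Printf("proposer clock behind previous block time, waiting %v\n", wait)
		time.Sleep(wait)
	}
	fmt.Println("now safe to propose the next block")
}
```
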
William Banfield
3022f9192a types: add new consensus params from proto (#7354)
This change adds the new TimingParams proto messages. These new messages were built using the wb/proposer-based-timestamps branch on the spec repo.
This change also adds validation that these values are positive when parsed and adds the new parameters into the existing tests.
2022-01-14 18:58:16 -05:00
William Banfield
315ee5aac5 internal/consensus: refactor ensure functions to use a common function (#7373)
* internal/consensus: refactor the common_test functions to use a single timeout function

* remove ensurePrecommit

* Update internal/consensus/common_test.go

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

* join lines for fatal messages

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2022-01-14 18:58:12 -05:00
William Banfield
e7f77049eb types: remove accuracy from timestamp params (#7341) 2022-01-14 18:40:04 -05:00
William Banfield
f73c247a0b consensus: ensure proposal receipt waits for maxWaitingTime (#7307)
* consensus: ensure proposal receipt waits for maxWaitingTime

* rebase fixups

* lint++

* lint++

* register result chan separately

* lint++
2022-01-14 18:39:58 -05:00
William Banfield
32008ae1d8 consensus: add calculation for proposal step waits from pbts (#7290)
* initial proposerWaitsUntil implementation

* switch to duration for easier use with timeout scheduling

* add proposal step waiting time with tests

* minor aesthetic change to IsTimely

* minor language fix

* Update internal/consensus/state.go

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

* reword comment

* change accuracy to precision

* move tests to separate pbts test file

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2022-01-14 18:31:11 -05:00
William Banfield
89e3f43cbc consensus: refactor the fake validator to take a clock source (#7300) 2022-01-14 18:31:07 -05:00
Sam Kleinman
7ed57ef5f9 statesync: more orderly dispatcher shutdown (#7601) 2022-01-14 16:34:12 -05:00
Sam Kleinman
2199e0a8a1 light: convert validation panics to errors (#7597) 2022-01-14 16:18:50 -05:00
William Banfield
7e17892650 factory: simplify validator and genesis factory functions (#7305) 2022-01-14 16:13:16 -05:00
Sam Kleinman
887cb219ab light: remove test panic (#7588) 2022-01-14 13:16:43 -05:00
Sam Kleinman
82b65868ce node+autofile: avoid leaks detected during WAL shutdown (#7599) 2022-01-14 13:04:01 -05:00
William Banfield
2f75899320 ADR-74: Migrate Timeout Parameters to Consensus Parameters (#7503)
related to: #7274 and #7275 

Still somewhat uncertain on two things that I'd appreciate more feedback on:
1. The optional temporary local overrides. Perhaps this is superfluous and we can simply make the transition without the override?
2. Whether this set of parameters is large enough to allow application developers to create the chains they want, but not so large as to be needlessly complex.
2022-01-14 16:48:59 +00:00
William Banfield
abb7c8c2b0 RFC-009: Consensus Parameter Upgrades (#7524)
Related to #7503
2022-01-14 16:47:02 +00:00
M. J. Fromberger
c065eeb18a Add changelog entries from release v0.34.15. (#7598)
Fixes #7596.
2022-01-14 16:07:53 +00:00
M. J. Fromberger
1ff69361e8 rpc: remove dependency of URL (GET) requests on tmjson (#7590)
The parameters for RPC GET requests are parsed from query arguments in the
request URL. Rework this code to remove the need for tmjson. The structure of a
call still requires reflection, and still works the same way as before, but the
code structure has been simplified and cleaned up a bit.

Points of note:

- Consolidate handling of pointer types, so we only need to dereference once.
- Reduce the number of allocations of reflective types.
- Report errors for unsupported types rather than returning untyped nil.

Update the tests as well. There was one test case that checked for an error on
a behaviour the OpenAPI docs explicitly demonstrate as supported, so I fixed
that test case, and also added some new ones for cases that weren't checked.

Related:

* Update e2e base Go image to 1.17 (to match config).
2022-01-14 07:53:53 -08:00
Sam Kleinman
159d763422 light: avoid panic for integer underflow (#7589) 2022-01-14 09:58:41 -05:00
M. J. Fromberger
9409cdea55 rpc: update fuzz criteria to match the implementation (#7595)
I missed this during my previous pass.

Requirements for handlers:

- First argument is context.Context.
- Last of two results is error.
2022-01-14 14:41:57 +00:00
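
To make the stated requirements concrete, here is a small standalone sketch (not the repository's fuzz or RPC code) that uses reflection to verify a handler's shape: the first parameter must be a context.Context and the last of two results must be an error.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"reflect"
)

var (
	ctxType = reflect.TypeOf((*context.Context)(nil)).Elem()
	errType = reflect.TypeOf((*error)(nil)).Elem()
)

// checkHandler reports whether fn looks like a valid handler:
// func(ctx context.Context, ...) (T, error)
func checkHandler(fn interface{}) error {
	t := reflect.TypeOf(fn)
	if t == nil || t.Kind() != reflect.Func {
		return errors.New("not a function")
	}
	if t.NumIn() < 1 || !t.In(0).Implements(ctxType) {
		return errors.New("first argument must be context.Context")
	}
	if t.NumOut() != 2 || !t.Out(1).Implements(errType) {
		return errors.New("must return (result, error)")
	}
	return nil
}

func main() {
	good := func(ctx context.Context, height int64) (string, error) { return "", nil }
	bad := func(height int64) string { return "" }

	fmt.Println(checkHandler(good)) // <nil>
	fmt.Println(checkHandler(bad))  // error describing the violation
}
```
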
William Banfield
b20bad14ae state: add an 'IsTimely' method to implement the 'timely' check for proposer-based timestamps (#7170)
* state: add an IsTimely function to implement the check for timely in proposer-based timestamps

* move time checks into block.go and add time source mechanism

* timestamp params comment

* add todo related to pbts spec and timestamp params

* remove old istimely

* switch to using built in before function

* lint++

* wip

* move into proposal and create a default set of params

* defer using default cons params for now
2022-01-13 23:13:10 -05:00
William Banfield
1f3c3c6848 consensus: update state to prevote nil when proposal block does not match locked block. (#6986)
* add failing test

* tweak comments in failing test

* failing test comment

* initial attempt at removing prevote locked block logic

* comment out broken function

* undo reset on prevotes

* fixing TestProposeValidBlock test

* update test for completed POL update

* comment updates

* further unlock testing

* update comments

* Update internal/consensus/state.go

* spacing nit

* comment cleanup

* nil check in addVote

* update unlock description

* update precommit on relock comment

* add ensure new timeout back

* rename IsZero to IsNil and replace uses of block len check with helper

* add testing.T to new assertions

* begin removing unlock condition

* fix TestStateProposerSelection2 to precommit for nil correctly

* remove erroneous sleep

* update TestStatePOL comment

* update relock test to be more clear

* add _ into test names

* rename slashing

* update no relock function to be cleaner

* do not relock on old proposal test cleanup

* con state name update

* remove all references to unlock

* update test comments to include new

* add relock test

* add ensureRelock to common_test

* remove all event unlock

* remove unlock checks

* no lint add space

* lint ++

* add test for nil prevote on different proposal

* fix prevote nil condition

* fix defaultDoPrevote

* state_test.go fixes to accommodate prevoting for nil

* add failing test for POL from previous round case

* update prevote logic to prevote POL from previous round

* state.go comment fixes

* update validatePrevotes to correctly look for nil

* update new test name and comment

* update POLFromPreviousRound test

* fixes post merge

* fix spacing

* make the linter happy

* change prevote log message

* update prevote nil debug line

* update enterPrevote comment

* lint

* Update internal/consensus/state.go

Co-authored-by: Dev Ojha <ValarDragon@users.noreply.github.com>

* Update internal/consensus/state.go

Co-authored-by: Dev Ojha <ValarDragon@users.noreply.github.com>

* add english description of alg rules

* Update internal/consensus/state.go

Co-authored-by: Dev Ojha <ValarDragon@users.noreply.github.com>

* comment fixes from review

* fix comment

* fix comment

Co-authored-by: Dev Ojha <ValarDragon@users.noreply.github.com>
2022-01-13 23:13:06 -05:00
William Banfield
6d2cb8ba8e consensus: remove logic to unlock block on 2/3 prevote for nil (#6954) 2022-01-13 22:49:39 -05:00
William Banfield
76c935e325 consensus: remove panics from test helper functions (#6969) 2022-01-13 22:04:52 -05:00
M. J. Fromberger
ec59b1a1ae rpc: check RPC service functions more carefully (#7587)
Require that RPC functions take a context as their first argument, and return
an error as either their only result, or the second of two results.

This does not change how functions are dispatched, but will make it a little
easier to make more invasive changes in the near future.
2022-01-13 13:27:45 -08:00
Sam Kleinman
7e8fa4ed85 consensus: explicit test timeout (#7585) 2022-01-13 16:11:51 -05:00
M. J. Fromberger
b7c19a5cd4 rpc: clean up the RPCFunc constructor signature (#7586)
Instead of taking a comma-separated string of parameter names, take each
parameter name as a separate argument. Now that we no longer have an extra flag
for caching, this fits nicely into a variadic trailer.

* Update all usage of NewRPCFunc and NewWSRPCFunc.
2022-01-13 12:13:28 -08:00
Sam Kleinman
8ff367ad29 log: avoid use of legacy test logging (#7583) 2022-01-13 14:38:54 -05:00
M. J. Fromberger
81ee41228a rpc: consolidate RPC route map construction (#7582)
Define interfaces for the various methods a service may implement.  This is
basically just the set of things on Environment that are exported as RPCs, but
these are also implemented by the light proxy.

* internal/rpc: use NewRoutesMap to construct routes on service start
* light/proxy: use NewRoutesMap to construct RPC routes
2022-01-13 10:45:36 -08:00
Sam Kleinman
cef17e1c02 node+rpc: rpc environment should own its creation (#7573) 2022-01-13 12:39:48 -05:00
Sam Kleinman
fd2eccbae1 consensus: use noop logger for WAL test (#7580) 2022-01-13 12:05:12 -05:00
Sam Kleinman
ed660bddeb node+privval: refactor privval construction (#7574) 2022-01-13 11:53:05 -05:00
M. J. Fromberger
5a89263dbe rpc: simplify panic recovery in the server middleware (#7578)
Rather than installing two separate panic handlers, defer the bookkeeping
separately from recovery, and lift the delegated handler call out to the top
level of the wrapper.

Also: Regularize the server middleware wrappers.
2022-01-13 07:02:21 -08:00
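
The structure described above can be illustrated with a generic net/http middleware sketch (not the actual RPC server code): one deferred function handles bookkeeping unconditionally, a separate deferred function performs the panic recovery, and the wrapped handler is called once at the top level of the wrapper.

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// recoverAndLog wraps an HTTP handler with two separate deferred concerns:
// bookkeeping (request duration logging) and panic recovery.
func recoverAndLog(next http.Handler, logger *log.Logger) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		begin := time.Now()

		// Bookkeeping runs regardless of whether the handler panicked.
		defer func() {
			logger.Printf("served %s %s in %v", r.Method, r.URL.Path, time.Since(begin))
		}()

		// Recovery is its own deferred function, kept separate from the
		// bookkeeping above.
		defer func() {
			if rec := recover(); rec != nil {
				logger.Printf("panic in handler: %v", rec)
				http.Error(w, "internal server error", http.StatusInternalServerError)
			}
		}()

		// The delegated handler call sits at the top level of the wrapper.
		next.ServeHTTP(w, r)
	})
}

func main() {
	logger := log.Default()
	mux := http.NewServeMux()
	mux.HandleFunc("/boom", func(http.ResponseWriter, *http.Request) { panic("boom") })

	_ = http.ListenAndServe("localhost:8080", recoverAndLog(mux, logger))
}
```
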
M. J. Fromberger
904957aaa9 rpc: rework how responses are written back via HTTP (#7575)
Add writeRPCResponse and writeHTTPResponse helpers, that handle the way RPC
responses are written to HTTP replies. These replace the exported helpers.

Visible effects:

- JSON results are now marshaled without indentation.
- HTTP status codes are now normalized.
- Cache control headers are no longer set.

Details:

- When writing a response to a URL (GET) request, do not marshal the whole
  JSON-RPC object into the body, only encode the result or the error object.
  This is a user-visible change.

- Do not change the HTTP status code for RPC errors. The RPC error already
  reports what went wrong, the HTTP status should only report problems with the
  HTTP transaction itself. This is a user-visible change.

- Encode JSON without indentation in POST response bodies. This is mainly cosmetic
  but saves quite a bit of response data. Indent is still applied to GET responses to make
  life easier for code examples.

- Remove an obsolete TODO about reporting an HTTP error on websocket upgrade.
  Nothing needed to change; the upgrader already reports an error.

- Report an HTTP error when starting the server loop fails.

- Improve logging for encoding errors.

- Log less aggressively.
2022-01-12 17:25:58 -08:00
Sam Kleinman
2a348cc1e9 logging: remove reamining instances of SetLogger interface (#7572) 2022-01-12 16:56:49 -05:00
Sam Kleinman
7a9a38d9d7 service: avoid debug logs before error (#7564) 2022-01-12 16:17:43 -05:00
Sam Kleinman
25e665df17 internal/libs: delete unused functionality (#7569) 2022-01-12 15:55:42 -05:00
Sam Kleinman
e07c4cdcf2 node: collapse initialization internals (#7567) 2022-01-12 15:32:22 -05:00
M. J. Fromberger
5c1399d803 rpc: fix mock test cases (#7571)
In two cases, we check for the content of an error right after asserting that
no error occurs. Fix the sense of those checks.

In one case, we check that there is no error with the diagnostic "expected
error". It's not clear whether this means "an error was expected" (which is
what I believe) or "we got the expected error". However, given the way the mock
plumbing is set up, the first interpretation seems right.
2022-01-12 20:17:53 +00:00
Sam Kleinman
6efdba8aa9 statesync: SyncAny test buffering (#7570) 2022-01-12 13:38:23 -05:00
M. J. Fromberger
1f5e64e5b6 rpc: remove cache control settings from the HTTP server (#7568)
We should not set cache-control headers on RPC responses. HTTP caching
interacts poorly with resources that are expected to change frequently, or
whose rate of change is unpredictable.

More subtly, all calls to the POST endpoint use the same URL, which means a
cacheable response from one call may actually "hide" an uncacheable response
from a subsequent one. This is less of a problem for the GET endpoints, but
that means the behaviour of RPCs varies depending on which HTTP method your
client happens to use. Websocket requests were already marked statically
uncacheable, adding yet a third combination.

To address this:

- Stop setting cache-control headers.
- Update the tests that were checking for those headers.
- Remove the flags to request cache-control.

Apart from affecting the HTTP response headers, this change does not modify the
behaviour of any of the RPC methods.
2022-01-12 18:20:59 +00:00
Sam Kleinman
fb10d1c705 statesync: clarify test cleanup (#7565) 2022-01-12 12:57:23 -05:00
Sam Kleinman
46f56fcea5 node: move seed node implementation to its own file (#7566) 2022-01-12 12:33:17 -05:00
dependabot[bot]
4d11336475 build(deps): Bump github.com/BurntSushi/toml from 0.4.1 to 1.0.0 (#7562)
Bumps [github.com/BurntSushi/toml](https://github.com/BurntSushi/toml) from 0.4.1 to 1.0.0.
- [Release notes](https://github.com/BurntSushi/toml/releases)
- [Commits](https://github.com/BurntSushi/toml/compare/v0.4.1...v1.0.0)

v1.0.0 adds much more detailed errors, support for the toml.Marshaler interface, and several fixes. There is no special meaning in the jump to v1.0; the 0.x releases were always treated as if they were 1.x with regard to compatibility.

New features:
- Error reporting is much improved: Decode always returns a toml.ParseError whose methods can show a concise one-line error, the error with the offending line, or the error with longer usage guidance, and the failing TOML key is now always reported (#299, #332).
- Add the toml.Marshaler interface for full control over how a value is marshalled as TOML, taking precedence over encoding.TextMarshaler (#327).
- Allow TOML integers to be decoded into Go float types as long as no data is lost (#325).

Fixes:
- Key.String() is now quoted when needed (#333).
- Fix decoding of nested structs on 32-bit platforms (#314).
- Empty slices are now always []T{} rather than nil, matching the behaviour of v0.3.1 and before (#339).

2022-01-12 14:25:23 +00:00
Sam Kleinman
cc51bf7587 tests: remove in-test logging (#7558) 2022-01-11 16:39:31 -05:00
Sam Kleinman
841629f5b7 privval: improve client shutdown to prevent resource leak (#7544) 2022-01-11 15:09:19 -05:00
M. J. Fromberger
50ac52e28d rpc: replace custom context-like argument with context.Context (#7559)
* Rename rpctypes.Context to CallInfo.

Add methods to attach and recover this value from a context.Context.

* Rework RPC method handlers to accept "real" contexts.

- Replace *rpctypes.Context arguments with context.Context.
- Update usage of RPC context fields to use CallInfo.
2022-01-11 11:47:56 -08:00
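
The attach-and-recover pattern mentioned above is the standard context.WithValue idiom; a minimal sketch with a hypothetical CallInfo type (not the rpctypes definition) looks like this:

```go
package main

import (
	"context"
	"fmt"
)

// CallInfo carries per-request metadata that used to live on a custom
// context-like argument.
type CallInfo struct {
	RPCRequestID int
	HTTPPath     string
}

// An unexported key type prevents collisions with other packages' context keys.
type callInfoKey struct{}

// WithCallInfo attaches the CallInfo to a context.
func WithCallInfo(ctx context.Context, ci *CallInfo) context.Context {
	return context.WithValue(ctx, callInfoKey{}, ci)
}

// GetCallInfo recovers the CallInfo, or nil if none was attached.
func GetCallInfo(ctx context.Context) *CallInfo {
	ci, _ := ctx.Value(callInfoKey{}).(*CallInfo)
	return ci
}

// A handler now takes a plain context.Context and digs out call metadata only
// if it needs it.
func handler(ctx context.Context) error {
	if ci := GetCallInfo(ctx); ci != nil {
		fmt.Printf("handling request %d for %s\n", ci.RPCRequestID, ci.HTTPPath)
	}
	return nil
}

func main() {
	ctx := WithCallInfo(context.Background(), &CallInfo{RPCRequestID: 1, HTTPPath: "/status"})
	_ = handler(ctx)
}
```
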
M. J. Fromberger
a4d0a43100 rpc: refactor the HTTP POST handler (#7555)
No functional changes.

- Pull out some helper code to simplify the control flow within the body of
  the HTTP request handler.

- Front-load the URL path check so it does not get repeated for each request.
2022-01-11 11:04:55 -08:00
Sam Kleinman
5bf1bdcfb4 reactors: skip log on some routine cancels (#7556) 2022-01-11 12:56:52 -05:00
M. J. Fromberger
7f8b75e1ee rpc: replace anonymous arguments with structured types (#7552)
Instead of using anonymous maps, define tagged struct types for JSON argument
encoding. This allows us to have the encoding rules we want without tmjson.

This commit handles the "easy" cases. BroadcastEvidence is omitted here,
because it depends on the interface encoding rules from tmjson. I will address
that in a forthcoming change.
2022-01-11 06:37:38 -08:00
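
The difference described above, sketched with a hypothetical request type: instead of building a map[string]interface{} at every call site, each RPC's parameters get a small tagged struct, so the JSON field names and the integer-as-string encoding rule live in one place.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// blockParams is a hypothetical example of a structured argument type for an
// RPC call. The ",string" tag on Height mirrors the convention of encoding
// 64-bit integers as JSON strings.
type blockParams struct {
	Height int64 `json:"height,string"`
}

func main() {
	// Before: anonymous map, with encoding decisions repeated at every call site.
	before, _ := json.Marshal(map[string]interface{}{"height": "42"})

	// After: one struct per request type; the tags carry the encoding rules.
	after, _ := json.Marshal(blockParams{Height: 42})

	fmt.Println(string(before)) // {"height":"42"}
	fmt.Println(string(after))  // {"height":"42"}
}
```
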
Kene
2f858f1448 node: new concrete type for seed node implementation (#7521)
Defines a different concrete type that satisfies the service interface for a seed node.
Update the seed node unit test to assert the new type.

Fixes #6775
2022-01-11 14:07:49 +01:00
M. J. Fromberger
6291d22f46 rpc: simplify the JSON-RPC client Caller interface (#7549)
* Update Caller interface and its documentation.
* Rework MapToRequest as ParamsToRequest.

The old interface returned the result as well as populating it.  Nothing was
using this, so drop the duplicated value from the return signature. Clarify the
documentation on the Caller type.

Rework the MapToRequest helper to take an arbitrary value instead of only a
map. This is groundwork for getting rid of the custom marshaling code. For now,
however, the implementation preserves the existing behaviour for the map, until
we can replace those.
2022-01-10 13:56:43 -08:00
Sam Kleinman
692701a551 abci: socket server shutdown response handler (#7547) 2022-01-10 16:26:40 -05:00
Sam Kleinman
d331a08607 statesync: use specific testing.T logger for tests (#7543) 2022-01-10 15:38:20 -05:00
M. J. Fromberger
8e58c564c0 rpc: collapse Caller and HTTPClient interfaces. (#7548)
These two interfaces are identical, and besides HTTPClient being confusingly
named, all but one location uses Caller. Update that one location, and drop the
redundant interface.
2022-01-10 11:52:52 -08:00
M. J. Fromberger
211d755aca rpc: remove positional parameter encoding from clients (#7545)
Apart from the tests for the websocket client, positional parameters are not
used by RPC clients. The server supports both arrays and objects, but the
client only needs to provide one or the other.
2022-01-10 11:20:30 -08:00
Sam Kleinman
0f3f2aa4bc log: remove support for traces (#7542) 2022-01-10 13:56:42 -05:00
Sam Kleinman
f19f84bc8c test: uniquify prom IDs (#7540) 2022-01-10 13:03:53 -05:00
dependabot[bot]
6c669b70a4 build(deps): Bump technote-space/get-diff-action from 5 to 6.0.1 (#7535)
Bumps [technote-space/get-diff-action](https://github.com/technote-space/get-diff-action) from 5 to 6.0.1.
- [Release notes](https://github.com/technote-space/get-diff-action/releases)
- [Changelog](https://github.com/technote-space/get-diff-action/blob/main/.releasegarc)
- [Commits](https://github.com/technote-space/get-diff-action/compare/v5...v6.0.1)

---
updated-dependencies:
- dependency-name: technote-space/get-diff-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-01-10 11:46:41 +01:00
M. J. Fromberger
366ab1947a Replace uses of libs/json with encoding/json. (#7534)
Where possible, replace uses of the custom JSON library with the standard
library. The custom library treats interface and unnamed literal types
differently, so this change avoids those even where it would probably be safe
to switch them.
2022-01-08 08:47:26 -08:00
Sam Kleinman
d5c39f907d test/factory: pass testing.T around rather than errors for test fixtures (#7518) 2022-01-07 15:51:39 -05:00
Sam Kleinman
90cf742065 pex: regularize reactor constructor (#7532) 2022-01-07 13:52:11 -05:00
Sam Kleinman
aa76a367e0 blocksync: standardize construction process (#7531) 2022-01-07 13:40:08 -05:00
Sam Kleinman
ae7a76a175 evidence: reactor constructor (#7533) 2022-01-07 13:28:23 -05:00
Sam Kleinman
059b38afe4 mempool: refactor mempool constructor (#7530) 2022-01-07 12:49:22 -05:00
Sam Kleinman
fc36c7782f statesync: reactor and channel construction (#7529) 2022-01-07 12:04:17 -05:00
Sam Kleinman
10402b728f consensus+p2p: change how consensus reactor is constructed (#7525) 2022-01-07 10:59:10 -05:00
Sam Kleinman
74a8941854 consensus: remove reactor options (#7526) 2022-01-07 09:15:35 -05:00
Sam Kleinman
569629486b tests: remove panics from test fixtures (#7522) 2022-01-06 17:34:32 -05:00
Sam Kleinman
a91d3cb894 e2e: make tx test more stable (#7523) 2022-01-06 15:43:30 -05:00
Sam Kleinman
0ae974e639 libs/os: remove trap signal (#7515) 2022-01-06 08:26:37 -05:00
Sam Kleinman
386c3a0ff7 e2e: constrain test parallelism and reporting (#7516) 2022-01-05 17:45:16 -05:00
Sam Kleinman
752a3a6c24 types: tests should not panic (#7506) 2022-01-05 14:21:40 -05:00
Sam Kleinman
69f0a3b1c0 e2e: avoid global test context (#7512) 2022-01-05 13:35:27 -05:00
Sam Kleinman
332163ede6 testing: remove background contexts (#7509) 2022-01-05 12:42:57 -05:00
Sam Kleinman
8564c5079f e2e: use more simple strings for generated transactions (#7513) 2022-01-05 12:17:23 -05:00
Sam Kleinman
4137c16d3f privval: improve test hygiene (#7511) 2022-01-05 11:58:14 -05:00
Sam Kleinman
f2cc496f09 testing: pass testing.T to assert and require always, assertion cleanup (#7508) 2022-01-05 09:25:08 -05:00
Sam Kleinman
3c8955e4b8 errors: formating cleanup (#7507) 2022-01-04 16:11:28 -05:00
Sam Kleinman
a67e0e6c43 light: remove global context from tests (#7505) 2022-01-04 13:35:49 -05:00
Sam Kleinman
5c0abb5367 testing: use scoped logger for all public packages (#7504) 2022-01-04 12:56:17 -05:00
Sam Kleinman
430817d9e9 types: remove panic from block methods (#7501) 2022-01-03 16:14:26 -05:00
M. J. Fromberger
5f85c8f536 types: fix path handling in node key tests (#7493)
These tests use a deterministic and unseeded random source to generate
non-colliding filenames for testing. When testing locally, this means tests are
not hermetic from one run to the next.

Use proper temp directories, and clean up after they're done.
2022-01-03 09:02:14 -08:00
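
The hermetic-temp-directory approach mentioned above, in a standalone sketch: t.TempDir gives each test its own directory that the testing package removes automatically, so filenames never collide from one run to the next. The file name and contents below are illustrative.

```go
package example_test

import (
	"os"
	"path/filepath"
	"testing"
)

func TestNodeKeyRoundTrip(t *testing.T) {
	// t.TempDir returns a fresh directory unique to this test and cleans it
	// up when the test finishes, so repeated local runs stay hermetic.
	dir := t.TempDir()
	path := filepath.Join(dir, "node_key.json")

	if err := os.WriteFile(path, []byte(`{"id":"example"}`), 0o600); err != nil {
		t.Fatal(err)
	}
	data, err := os.ReadFile(path)
	if err != nil {
		t.Fatal(err)
	}
	if len(data) == 0 {
		t.Fatal("expected non-empty node key file")
	}
}
```
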
M. J. Fromberger
7cdf560173 config: add a Deprecation annotation to P2PConfig.Seeds. (#7496) 2021-12-27 16:50:47 -08:00
dependabot[bot]
41bfcfeb31 build(deps): Bump docker/login-action from 1.10.0 to 1.12.0 (#7494)
Bumps [docker/login-action](https://github.com/docker/login-action) from 1.10.0 to 1.12.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v1.10.0...v1.12.0)

v1.12.0:
- ECR: only set credentials if username and password are specified (#128)
- Refactor to use aws-sdk v3 (#128)

v1.11.0:
- ECR: switch implementation to use the AWS SDK (#126)
- ecr input to specify whether the given registry is ECR (#123)
- Test against Windows runner (#126)
- Documentation and dev-workflow updates, plus various dependency bumps

2021-12-27 17:42:10 +00:00
dependabot[bot]
a86965679d build(deps): Bump github.com/rs/cors from 1.8.0 to 1.8.2 (#7484)
Bumps [github.com/rs/cors](https://github.com/rs/cors) from 1.8.0 to 1.8.2.
- [Release notes](https://github.com/rs/cors/releases)
- [Commits](https://github.com/rs/cors/compare/v1.8.0...v1.8.2)

---
updated-dependencies:
- dependency-name: github.com/rs/cors
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2021-12-22 09:00:50 -08:00
Carlos Rodriguez
2f5ad5f8cc docs: update go ws code snippets (#7486)
* doc: fix typos in /tx_search and /tx.
* docs: update of go snippets for subscribe and unsubscribe operations

Co-authored-by: Carlos Rodriguez <crodveg@gmail.com>
2021-12-22 08:13:53 -08:00
sgmoore
e7afc57a6a Remove extra white space (#7477) 2021-12-19 08:07:27 -08:00
Sam Kleinman
1630d1cf3e privval: remove panics in privval implementation (#7475) 2021-12-17 16:28:32 -05:00
Sam Kleinman
3b6c3890d6 rfc: don't panic (#7472) 2021-12-17 16:15:01 -05:00
M. J. Fromberger
cea8bcc9bb Remove the "URI" RPC client. (#7474)
The JSON-RPC endpoint accepts requests via URL (GET) and JSON (POST).  There is
no real point in having client libraries for both modes.

A search of the SDK and on GitHub suggests that most usage is via the JSON
client (via the New constructor) or websocket (NewWS), and the only uses I
found of the NewURI client constructor are in copies of our own test code.

This does not change the functionality of the server, so curl and other
URL-based clients in other languages will still function as before.
2021-12-17 09:56:41 -08:00
Sam Kleinman
9a0dbdbf13 libs/rand: remove custom seed function (#7473) 2021-12-17 12:15:27 -05:00
dependabot[bot]
ff615dda66 build(deps): Bump github.com/spf13/viper from 1.10.0 to 1.10.1 (#7470)
Bumps [github.com/spf13/viper](https://github.com/spf13/viper) from 1.10.0 to 1.10.1.
- [Release notes](https://github.com/spf13/viper/releases)
- [Commits](https://github.com/spf13/viper/compare/v1.10.0...v1.10.1)

v1.10.1 is a maintenance release upgrading the Consul dependency to fix CVEs.

2021-12-17 16:07:10 +00:00
dependabot[bot]
ead3a21927 build(deps): Bump github.com/rs/zerolog from 1.26.0 to 1.26.1 (#7469)
Bumps [github.com/rs/zerolog](https://github.com/rs/zerolog) from 1.26.0 to 1.26.1.
- [Release notes](https://github.com/rs/zerolog/releases)
- [Commits](https://github.com/rs/zerolog/compare/v1.26.0...v1.26.1)

---
updated-dependencies:
- dependency-name: github.com/rs/zerolog
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-17 08:25:29 -05:00
Sam Kleinman
bef120dadf contexts: remove all TODO instances (#7466) 2021-12-16 15:15:26 -05:00
Callum Waters
184b105509 statesync: assert app version matches (#7463) 2021-12-16 18:25:26 +01:00
William Banfield
20c547a901 rfc: deterministic proto bytes serialization (#7427)
Major thanks to @creachadair for guidance regarding the proto spec and for quick prototyping of a serialized-proto-canonicalizer.
2021-12-15 22:52:01 +00:00
dependabot[bot]
7705c9d086 build(deps): Bump google.golang.org/grpc from 1.42.0 to 1.43.0 (#7455)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.42.0 to 1.43.0.
- [Release notes](https://github.com/grpc/grpc-go/releases): v1.43.0 stabilizes the `WithConnectParams` `DialOption`, supports wrapped errors in `FromContextError`, removes the environment variable that disabled retry support, and fixes several xDS and transport bugs.
- [Commits](https://github.com/grpc/grpc-go/compare/v1.42.0...v1.43.0)
2021-12-15 16:26:20 +00:00
M. J. Fromberger
82738eb016 Move the libs/pubsub package to internal scope (#7451)
No API changes, merely changes the import path.
2021-12-15 07:09:32 -08:00
dependabot[bot]
f3278e8b68 build(deps): Bump github.com/spf13/cobra from 1.2.1 to 1.3.0 (#7456)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.2.1 to 1.3.0.
- [Release notes](https://github.com/spf13/cobra/releases): v1.3.0 brings numerous shell-completion fixes and enhancements, Cobra generator cleanups (no automatic Viper dependency, Go modules support), SPDX license identifiers, the new `MatchAll` helper for composing `PositionalArgs`, and CI coverage for Go 1.16.x and 1.17.x.
- [Commits](https://github.com/spf13/cobra/compare/v1.2.1...v1.3.0)
2021-12-15 13:54:42 +00:00
M. J. Fromberger
ab7da86b06 Internalize libs/sync. (#7450)
Inline the one usage of this library, and remove the lib.
2021-12-14 15:14:30 -08:00
M. J. Fromberger
da697089d0 Move libs/async to internal/libs/async. (#7449) 2021-12-14 14:05:42 -08:00
Sam Kleinman
f56df58fe8 testing,log: add testing.T logger connector (#7447) 2021-12-14 16:30:06 -05:00
Sam Kleinman
e3aaae570d node: minor package cleanups (#7444) 2021-12-14 14:56:28 -05:00
Sam Kleinman
2ff962a63a log: disallow nil loggers (#7445) 2021-12-14 12:45:13 -05:00
M. J. Fromberger
a872dd75b7 Fix broken documentation link. (#7439)
A follow-up to #7416 and #7412.
2021-12-14 08:27:11 +00:00
Sam Kleinman
d0e03f01fc sync: remove special mutexes (#7438) 2021-12-13 13:35:32 -05:00
dependabot[bot]
2b35d8191c build(deps-dev): Bump watchpack from 2.3.0 to 2.3.1 in /docs (#7430)
Bumps [watchpack](https://github.com/webpack/watchpack) from 2.3.0 to 2.3.1.
- [Release notes](https://github.com/webpack/watchpack/releases)
- [Commits](https://github.com/webpack/watchpack/compare/v2.3.0...v2.3.1)

---
updated-dependencies:
- dependency-name: watchpack
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Sam Kleinman <garen@tychoish.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2021-12-13 12:51:47 -05:00
Sam Kleinman
65c0aaee5e p2p: use receive for channel iteration (#7425) 2021-12-13 11:04:44 -05:00
Jacob Gadikian
4e2aa63bb3 Go 1.17 (#7429)
Update tendermint to Go 1.17 because imports are easier to audit.

* Update README.md
* go mod tidy

Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2021-12-13 06:55:57 -08:00
dependabot[bot]
f80c235842 build(deps): Bump github.com/spf13/viper from 1.9.0 to 1.10.0 (#7434)
Bumps [github.com/spf13/viper](https://github.com/spf13/viper) from 1.9.0 to 1.10.0.
- [Release notes](https://github.com/spf13/viper/releases): v1.10.0 is a maintenance release adding an experimental io/fs-based finder, a logger interface that decouples Viper from JWW, and test coverage on Windows and Go 1.17.
- [Commits](https://github.com/spf13/viper/compare/v1.9.0...v1.10.0)
2021-12-13 14:27:38 +00:00
dependabot[bot]
4da0a4b8ed build(deps): Bump github.com/adlio/schema from 1.2.2 to 1.2.3 (#7432)
Bumps [github.com/adlio/schema](https://github.com/adlio/schema) from 1.2.2 to 1.2.3.
- [Release notes](https://github.com/adlio/schema/releases): v1.2.3 restores the ability to chain NewMigrator().Apply().
- [Commits](https://github.com/adlio/schema/compare/v1.2.2...v1.2.3)
2021-12-13 12:36:35 +00:00
dependabot[bot]
f8bf2cb912 build(deps): Bump github.com/adlio/schema from 1.1.15 to 1.2.2 (#7423)
* build(deps): Bump github.com/adlio/schema from 1.1.15 to 1.2.2

Bumps [github.com/adlio/schema](https://github.com/adlio/schema) from 1.1.15 to 1.2.2.
- [Release notes](https://github.com/adlio/schema/releases)
- [Commits](https://github.com/adlio/schema/compare/v1.1.15...v1.2.2)

---
updated-dependencies:
- dependency-name: github.com/adlio/schema
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Work around API changes in the migrator package.

A recent update inadvertently broke the API by changing the receiver types of
the methods without updating the constructor.

See: https://github.com/adlio/schema/issues/13

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2021-12-10 10:30:04 -08:00
M. J. Fromberger
a925f4fa84 Fix a panic in the indexer service test. (#7424)
The test service was starting up without a logger and crashing
while trying to log.
2021-12-10 10:03:42 -08:00
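For illustration of the fix described above, a minimal sketch of a test that constructs its service with an explicit logger; the service type and constructor are hypothetical stand-ins, and `log.NewNopLogger` is assumed to be available from libs/log.

```go
package indexer_test

import (
	"testing"

	"github.com/tendermint/tendermint/libs/log"
)

// fakeService stands in for the indexer service under test; the point is
// only that it receives a non-nil logger at construction time.
type fakeService struct{ logger log.Logger }

func newFakeService(logger log.Logger) *fakeService { return &fakeService{logger: logger} }

func (s *fakeService) Start() error {
	s.logger.Info("starting") // would panic if the logger were nil
	return nil
}

func TestServiceStartsWithLogger(t *testing.T) {
	svc := newFakeService(log.NewNopLogger()) // explicit logger instead of nil
	if err := svc.Start(); err != nil {
		t.Fatalf("start: %v", err)
	}
}
```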
Emmanuel T Odeke
3e92899bd9 internal/libs/protoio: optimize MarshalDelimited by plain byteslice allocations+sync.Pool (#7325)
Profiles showed that invoking *VoteSignBytes always created a
bytes.Buffer, then discarded it inside protoio.MarshalDelimited.
Digging into the call paths revealed that we unconditionally create
the bytes.Buffer, even though proto messages (in the common case)
implement MarshalTo([]byte), before invoking varintWriter. By inlining
this case instead, we skip a number of allocations and CPU cycles,
which is reflected in all calling functions. Here
are the benchmark results:

```shell
$ benchstat before.txt after.txt
name                                        old time/op    new time/op      delta
types.VoteSignBytes-8                       705ns ± 3%     573ns ± 6%       -18.74% (p=0.000 n=18+20)
types.CommitVoteSignBytes-8                 8.15µs ± 9%    6.81µs ± 4%      -16.51% (p=0.000 n=20+19)
protoio.MarshalDelimitedWithMarshalTo-8     788ns ± 8%     772ns ± 3%       -2.01%  (p=0.050 n=20+20)
protoio.MarshalDelimitedNoMarshalTo-8       989ns ± 4%     845ns ± 2%       -14.51% (p=0.000 n=20+18)

name                                        old alloc/op   new alloc/op    delta
types.VoteSignBytes-8                       792B ± 0%      600B ± 0%       -24.24%  (p=0.000 n=20+20)
types.CommitVoteSignBytes-8                 9.52kB ± 0%    7.60kB ± 0%     -20.17%  (p=0.000 n=20+20)
protoio.MarshalDelimitedNoMarshalTo-8       808B ± 0%      440B ± 0%       -45.54%  (p=0.000 n=20+20)

name                                        old allocs/op  new allocs/op   delta
types.VoteSignBytes-8                       13.0 ± 0%      10.0 ± 0%       -23.08%  (p=0.000 n=20+20)
types.CommitVoteSignBytes-8                 140 ± 0%       110 ± 0%        -21.43%  (p=0.000 n=20+20)
protoio.MarshalDelimitedNoMarshalTo-8       10.0 ± 0%      7.0 ± 0%        -30.00%  (p=0.000 n=20+20)
```

Thanks to Tharsis who tasked me to help them increase TPS and who
are keen on improving Tendermint and efficiency.
2021-12-10 09:36:43 -08:00
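For readers unfamiliar with the technique, here is a rough sketch of the MarshalTo fast path described above (the sync.Pool piece is omitted); it is not the code from this commit, and the function and interface names are placeholders.

```go
package protoio

import (
	"encoding/binary"

	"github.com/gogo/protobuf/proto"
)

// sizedMarshaler is satisfied by gogoproto-generated messages, which expose
// Size and MarshalTo in addition to the basic proto.Message interface.
type sizedMarshaler interface {
	proto.Message
	Size() int
	MarshalTo([]byte) (int, error)
}

// marshalDelimited writes a uvarint length prefix followed by the message
// bytes. When the message supports MarshalTo, it serializes directly into a
// single byte slice instead of going through a bytes.Buffer.
func marshalDelimited(msg proto.Message) ([]byte, error) {
	if m, ok := msg.(sizedMarshaler); ok {
		size := m.Size()
		buf := make([]byte, binary.MaxVarintLen64+size)
		n := binary.PutUvarint(buf, uint64(size))
		nw, err := m.MarshalTo(buf[n:])
		if err != nil {
			return nil, err
		}
		return buf[:n+nw], nil
	}
	// Fallback for messages without MarshalTo: marshal, then prefix the
	// result with its length.
	bz, err := proto.Marshal(msg)
	if err != nil {
		return nil, err
	}
	prefix := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(prefix, uint64(len(bz)))
	return append(prefix[:n], bz...), nil
}
```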
Sam Kleinman
bd6dc3ca88 p2p: refactor channel Send/out (#7414) 2021-12-09 14:03:41 -05:00
M. J. Fromberger
f79b77036f Fix link to Terraform/Ansible documentation. (#7416) 2021-12-09 08:15:57 -08:00
dependabot[bot]
358fc5f6c4 build(deps): Bump github.com/adlio/schema from 1.1.14 to 1.1.15 (#7407)
Bumps [github.com/adlio/schema](https://github.com/adlio/schema) from 1.1.14 to 1.1.15.
- [Commits](https://github.com/adlio/schema/compare/v1.1.14...v1.1.15): security patch updating the upstream runc dependency to 1.0.3.
2021-12-08 16:21:45 +00:00
Sam Kleinman
867d406c6c state: pass connected context (#7410) 2021-12-08 11:09:08 -05:00
Sam Kleinman
9c21d4140b mempool: avoid arbitrary background contexts (#7409)
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2021-12-08 14:29:13 +00:00
Sam Kleinman
cb88bd3941 p2p: migrate to use new interface for channel errors (#7403)
* p2p: migrate to use new interface for channel errors

* Update internal/p2p/p2ptest/require.go

Co-authored-by: M. J. Fromberger <michael.j.fromberger@gmail.com>

* rename

* feedback

Co-authored-by: M. J. Fromberger <michael.j.fromberger@gmail.com>
2021-12-08 14:05:01 +00:00
Sam Kleinman
892f5d9524 service: cleanup mempool and peer update shutdown (#7401) 2021-12-08 08:44:32 -05:00
Callum Waters
5ba3c6be42 cmd: cosmetic changes for errors and print statements (#7377)
* cmd: cosmetic changes for errors and print statements

* fix load block store test

* lint
2021-12-08 10:21:58 +00:00
Sam Kleinman
26d421b8f6 Revert "ci: tweak e2e configuration (#7400)" (#7404)
This reverts commit b057740bd3.
2021-12-07 21:17:55 +00:00
Sam Kleinman
587c91132b build: declare packages variable in correct makefile (#7402) 2021-12-07 14:53:22 -05:00
Sam Kleinman
b057740bd3 ci: tweak e2e configuration (#7400) 2021-12-07 11:47:22 -05:00
Sam Kleinman
0ff3d4b89d service: cleanup close channel in reactors (#7399) 2021-12-07 11:40:59 -05:00
Sam Kleinman
0b3e00a6b5 ci: skip docker image builds during PRs (#7397) 2021-12-07 10:54:14 -05:00
Sam Kleinman
6b35cc1a47 p2p: remove unneeded close channels from p2p layer (#7392) 2021-12-07 10:40:07 -05:00
Sam Kleinman
4b8fd28148 ci: fix missing dependency (#7396) 2021-12-07 10:27:49 -05:00
Sam Kleinman
065918d1cd ci: cleanup build/test targets (#7393) 2021-12-07 10:11:23 -05:00
M. J. Fromberger
02d456b8b8 Update Mergify configuration. (#7388)
Per https://docs.mergify.com/actions/merge/#commit-message, the
commit_message option is deprecated and will be removed in 2022.
Replace it with the template suggested here:

https://docs.mergify.com/actions/queue/
2021-12-06 13:27:39 -08:00
M. J. Fromberger
2d4844f97f Update mergify configuration. (#7385)
Per https://blog.mergify.com/strict-mode-deprecation/, the strict mode
has been deprecated and will be turned off on 10-Jan-2022. This updates
the config to use the new, approved thing instead of the old thing.
2021-12-06 12:20:28 -08:00
Sam Kleinman
a62ac27047 service: remove exported logger from base implementation (#7381) 2021-12-06 10:16:42 -05:00
Sam Kleinman
4e355d80c4 p2p: implement interface for p2p.Channel without channels (#7378) 2021-12-03 15:19:04 -05:00
Sam Kleinman
5f9dd2e7f5 p2p/upnp: remove unused functionality (#7379) 2021-12-03 13:44:36 -05:00
M. J. Fromberger
8c52956007 Revert CI test timeout. (#7375)
I increased this while messing around with another PR, and
forgot to remove it before merging.
2021-12-03 14:13:24 +00:00
Sam Kleinman
8a991e288c service: plumb contexts to all (most) threads (#7363)
This continues the push of plumbing contexts through tendermint. I
attempted to find all goroutines in the production code (non-test) and
made sure that these threads would exit when their contexts were
canceled, and I believe this PR does that.
2021-12-02 21:38:38 +00:00
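The pattern being applied is the standard context-aware loop; a minimal, self-contained sketch (hypothetical names, not code from this commit):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// processLoop is a stand-in for the long-lived goroutines this change
// touches: it exits as soon as its context is canceled.
func processLoop(ctx context.Context, msgs <-chan string) {
	for {
		select {
		case <-ctx.Done():
			fmt.Println("loop: context canceled, exiting")
			return
		case m := <-msgs:
			fmt.Println("loop: handled", m)
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	msgs := make(chan string, 1)
	msgs <- "hello"
	go processLoop(ctx, msgs)
	time.Sleep(50 * time.Millisecond)
	cancel() // shuts the goroutine down without a separate Stop call
	time.Sleep(50 * time.Millisecond)
}
```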
Sam Kleinman
b3be1d7d7a ci: move test execution to makefile (#7372) 2021-12-02 16:22:27 -05:00
Sam Kleinman
1e208d66e5 tools: remove tm-signer-harness (#7370) 2021-12-02 13:22:40 -05:00
Callum Waters
bca2080c01 cmd: add integration test and fix bug in rollback command (#7315) 2021-12-02 12:17:16 +01:00
Federico Kunze Küllmer
5f57d84dd3 rpc: implement header and header_by_hash queries (#7270) 2021-12-02 11:16:31 +01:00
William Banfield
f6c39126ed tools/tm-signer-harness: switch to not use hardcoded bytes for configs in test (#7362)
The current testing for this tool relies on hardcoding a set of configs into the tests. This means that when the config structure changes, the tests break. I'm not sure that this makes sense since we are separately testing our ability to read and validate the configuration file format. Hardcoding bytes into a different file duplicates this. Using the structs themselves should therefore be preferred.
2021-12-01 23:26:36 +00:00
Sam Kleinman
a823d167bc service: cleanup base implementation and some caller implementations (#7301) 2021-12-01 09:28:06 -05:00
Sam Kleinman
3749c37847 p2p: remove unused trust package (#7359) 2021-11-30 17:55:08 -05:00
M. J. Fromberger
76dea94a01 Remove now-unused nolint:lll directives. (#7356) 2021-11-30 21:32:21 +00:00
M. J. Fromberger
ab1788b922 Fix incorrect tests using the PSQL sink. (#7349)
Some of our tests were creating a psql event sink and expecting
it to report (or not report) certain kinds of errors. These tests
were ill-founded in a couple of ways:

1. Tests that required the Postgres driver were not loading it.
   This led to spurious successes on tests that wanted "some error"
   from the sink constructor, but didn't exercise the right path.

2. Tests that wanted a Postgres sink to succeed without a database.
   These tests "passed" because they weren't actually establishing a
   connection to the database, but if they had, they would have failed
   for the lack of one.

To fix this:
- Load the postgres driver in tests that need it.
- Verify connectivity before reporting successful creation of a PSQL event sink.
- Remove tests that wanted a psql sink without a database, since that case
  is already tested elsewhere.
2021-11-30 12:57:44 -08:00
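A minimal sketch of the two fixes described above, using hypothetical names rather than the real sink constructor: the Postgres driver has to be linked into the test binary, and the constructor should confirm it can actually reach the database before reporting success.

```go
package psql

import (
	"database/sql"

	_ "github.com/lib/pq" // registers the "postgres" driver with database/sql
)

type eventSink struct{ db *sql.DB }

// newEventSink is a stand-in for the real constructor.
func newEventSink(dsn string) (*eventSink, error) {
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, err
	}
	if err := db.Ping(); err != nil { // verify connectivity before declaring success
		db.Close()
		return nil, err
	}
	return &eventSink{db: db}, nil
}
```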
Sam Kleinman
babd3acb70 e2e: generate keys for more stable load (#7344) 2021-11-30 20:21:48 +00:00
Sam Kleinman
24dcba9230 e2e: clarify apphash reporting (#7348) 2021-11-30 20:05:55 +00:00
Sam Kleinman
c4033f95c1 e2e: stabilize validator update form (#7340)
This might be a source of non-determinism in the e2e test.
2021-11-30 19:35:21 +00:00
Sam Kleinman
070445bc10 pex: improve goroutine lifecycle (#7343)
I saw a race detected in a test here that I think would be better
handled by just wiring up these threads.
2021-11-30 19:22:28 +00:00
Sam Kleinman
d5c18b68c8 e2e: more clear height test (#7347)
This is a clearer version of the stable parts of the apphash tests.
2021-11-30 17:34:16 +00:00
Sam Kleinman
cf5c7be4d8 e2e: control access to state in Info calls (#7345)
This is probably belt and suspenders, but might help the apphash clarity.
2021-11-30 17:19:47 +00:00
Sam Kleinman
502f92bb97 lint: remove lll check (#7346) 2021-11-30 11:55:31 -05:00
M. J. Fromberger
4734f7d806 pubsub: Make the queue unwritable after shutdown. (#7316)
Prior to this change, shutting down the pubsub server could cause
any laggard publishers to race with the shutdown plumbing.

Fix that race condition, and plumb in the service context to the
runner so that it will respect the external signal directly.
Remove now-redundant local shutdown plumbing.
2021-11-30 07:30:35 -08:00
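A minimal sketch of the "unwritable after shutdown" idea, with hypothetical types (not the pubsub package's actual queue): late publishers get an error instead of racing with the shutdown path.

```go
package pubsub

import (
	"errors"
	"sync"
)

var errQueueClosed = errors.New("pubsub: queue is closed")

type queue struct {
	mu     sync.Mutex
	closed bool
	items  []interface{}
}

// add rejects writes once the queue has been closed, so laggard publishers
// cannot race with shutdown.
func (q *queue) add(item interface{}) error {
	q.mu.Lock()
	defer q.mu.Unlock()
	if q.closed {
		return errQueueClosed
	}
	q.items = append(q.items, item)
	return nil
}

// close marks the queue unwritable; any later add returns errQueueClosed.
func (q *queue) close() {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.closed = true
}
```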
Sam Kleinman
4af2dbd03b eventbus: plumb contexts (#7337)
* eventbus: plumb contexts

* fix lint
2021-11-30 14:24:11 +00:00
M. J. Fromberger
c9f90953a2 Add pending change log entry for #7319. (#7339) 2021-11-30 06:08:21 -08:00
M. J. Fromberger
99ee730ee7 Remove the PEG query implementation. (#7336)
A follow-up to #7319.
2021-11-29 15:00:01 -08:00
dependabot[bot]
99c51b354c build(deps-dev): Bump watchpack from 2.2.0 to 2.3.0 in /docs (#7335)
Bumps [watchpack](https://github.com/webpack/watchpack) from 2.2.0 to 2.3.0.
- [Release notes](https://github.com/webpack/watchpack/releases): v2.3.0 allows grabbing separate file and directory time info, accepts functions for the `ignored` option, ignores EACCESS errors during the initial scan, and improves watcher-update performance.
- [Commits](https://github.com/webpack/watchpack/compare/v2.2.0...v2.3.0)
2021-11-29 22:12:35 +00:00
Sam Kleinman
abc697b46c adr: libp2p implementation plan (#7282) 2021-11-29 17:02:27 -05:00
M. J. Fromberger
1dca1a8f97 Performance improvements for the event query API (#7319)
Rework the implementation of event query parsing and execution to
improve performance and reduce memory usage.

Previous memory and CPU profiles of the pubsub service showed query
processing as a significant hotspot. While we don't have evidence that
this is visibly hurting users, fixing it is fairly easy and self-contained.

Updates #6439.

Typical benchmark results comparing the original implementation (PEG) with the reworked implementation (Custom):
```
TEST                        TIME/OP  BYTES/OP  ALLOCS/OP  SPEEDUP   MEM SAVING
BenchmarkParsePEG-12       51716 ns  526832    27
BenchmarkParseCustom-12     2167 ns    4616    17         23.8x     99.1%
BenchmarkMatchPEG-12        3086 ns    1097    22
BenchmarkMatchCustom-12    294.2 ns      64     3         10.5x     94.1%
```

Components:
* Add a basic parsing benchmark.
* Move the original query implementation to a subdirectory.
* Add lexical scanner for Query expressions.
* Add a parser for Query expressions.
* Implement query compiler.
* Add test cases based on OpenAPI examples.
* Add MustCompile to replace the original MustParse, and update usage.
2021-11-29 13:08:48 -08:00
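For context, a hedged sketch of what such a parsing benchmark looks like; the `compile` function here is a stand-in, not the query package's actual API.

```go
package query_test

import "testing"

// compile is a placeholder for the query compiler under test.
func compile(s string) (interface{}, error) { return nil, nil }

func BenchmarkParseCustom(b *testing.B) {
	const q = `tm.event = 'Tx' AND tx.height > 100`
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		if _, err := compile(q); err != nil {
			b.Fatal(err)
		}
	}
}
```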
dependabot[bot]
d1f07047ec build(deps): Bump actions/cache from 2.1.6 to 2.1.7 (#7334)
Bumps [actions/cache](https://github.com/actions/cache) from 2.1.6 to 2.1.7.
- [Release notes](https://github.com/actions/cache/releases): v2.1.7 supports 10GB cache uploads using version 1.0.8 of `@actions/cache`.
- [Commits](https://github.com/actions/cache/compare/v2.1.6...v2.1.7)
2021-11-29 13:04:54 +00:00
Piotr Pędziwiatr
a36dd49eae docs: go tutorial fixed for 0.35.0 version (#7329) (#7330) 2021-11-27 09:16:15 -08:00
M. J. Fromberger
da3449599f Update example code in OpenAPI docs. (#7318)
The event examples for the query filter language were not updated after the
change of key and value types from []byte to string. Also, the attributes need
to be a slice, not a bare value.
2021-11-25 07:45:38 -08:00
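In Go terms, the corrected shape of the example data looks roughly like the following; field names follow the ABCI event types with string keys and values, but treat this as illustrative rather than the exact OpenAPI payload.

```go
package main

import (
	"fmt"

	abci "github.com/tendermint/tendermint/abci/types"
)

func main() {
	// Attributes is a slice of string-keyed attributes, not a bare value.
	ev := abci.Event{
		Type: "transfer",
		Attributes: []abci.EventAttribute{
			{Key: "sender", Value: "addr1", Index: true},
			{Key: "amount", Value: "100", Index: true},
		},
	}
	fmt.Printf("%+v\n", ev)
}
```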
William Banfield
02c7dca945 docs: add abci timing metrics to the metrics docs (#7311) 2021-11-23 19:53:59 +00:00
M. J. Fromberger
26b887b883 build: update location of proto builder image (#7296)
Updates #7272.
2021-11-19 18:45:38 +00:00
Sam Kleinman
7e58f02eb8 service: remove quit method (#7293) 2021-11-19 11:49:51 -05:00
Sam Kleinman
6ab62fe7b6 service: remove stop method and use contexts (#7292) 2021-11-18 17:56:21 -05:00
William Banfield
1c34d17240 proto: rebuild the proto files from the spec repository (#7291)
* proto: rebuild the proto files from the spec repository

* remove proto checks
2021-11-17 00:06:19 +00:00
dependabot[bot]
44b5d330b0 build(deps): Bump github.com/tendermint/tm-db from 0.6.4 to 0.6.6 (#7287)
Bumps [github.com/tendermint/tm-db](https://github.com/tendermint/tm-db) from 0.6.4 to 0.6.6.
- [Release notes](https://github.com/tendermint/tm-db/releases): v0.6.6 exposes the same API as v0.6.4 and bypasses the accidentally tagged v0.6.5 release, which exported a different API.
- [Changelog](https://github.com/tendermint/tm-db/blob/v0.6.6/CHANGELOG.md)
- [Commits](https://github.com/tendermint/tm-db/compare/v0.6.4...v0.6.6)
2021-11-16 16:36:20 +00:00
Sam Kleinman
d7606777cf libs/service: pass logger explicitly (#7288)
This is a very small change, but it removes a method from the
`service.Service` interface (a win!) and forces callers to explicitly
pass loggers to objects during construction rather than injecting them
later. There is no real need for this kind of lazy logger construction,
and mutable loggers carry a real potential for confusion.

The main concern I have is that this changes the constructor API for
ABCI clients. I think this is fine, and I suspect that as we plumb
contexts through and make changes to the RPC services, there will be a
number of similar changes to various (quasi-)public interfaces, which I
think we should welcome.
2021-11-16 16:20:56 +00:00
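The construction pattern this moves toward, as a small sketch with illustrative names (only `log.Logger` is assumed from libs/log):

```go
package example

import "github.com/tendermint/tendermint/libs/log"

// Reactor holds its logger from construction onward; there is no SetLogger.
type Reactor struct {
	logger log.Logger
}

// NewReactor makes the caller choose a logger up front, which keeps the
// field effectively immutable after construction.
func NewReactor(logger log.Logger) *Reactor {
	return &Reactor{logger: logger}
}
```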
M. J. Fromberger
dbac109d01 docs: clarify where doc site config settings must land (#7289)
Since the doc site is built from the backport branches, the config changes for
a new major release also need to be replicated into the backport branch as well
as master. Update the release docs to mention that specifically, since I missed
it during the v0.35 release.
2021-11-16 14:31:10 +00:00
William Banfield
d5df412b26 proto: update the mechanism for generating protos from spec repo (#7269)
This pull request updates the `protocgen.sh` script to insert the `go_package` option into all of the downloaded proto files. A related pull request into the spec repo removes these options from the .proto files: https://github.com/tendermint/spec/pull/358

This pull request, along with the related spec PR, aims to move the creation of the `tendermintdev/docker-build-proto` container into the spec repo. This change also relies on several fixes to that container made in the spec-repo PR.
2021-11-15 20:06:15 +00:00
Sam Kleinman
2a455be46c libs/os: remove arbitrary os.Exit (#7284)
I think calling os.Exit at arbitrary points is _bad_, so deleting
these calls is good. Panics in the case of data corruption have a
chance of providing useful information.
2021-11-15 19:25:29 +00:00
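As a small, hypothetical illustration of the trade-off described in this commit (the call site below is invented, not from the diff): a panic preserves the stack trace that os.Exit(1) would discard.

```go
package main

import (
	"fmt"
	"os"
)

func mustWriteFile(path string, data []byte) {
	if err := os.WriteFile(path, data, 0o600); err != nil {
		// Previously this kind of failure might have called os.Exit(1),
		// which discards the call stack. Panicking instead surfaces a
		// stack trace that can help diagnose suspected data corruption.
		panic(fmt.Sprintf("writing %q: %v", path, err))
	}
}

func main() {
	mustWriteFile("example.txt", []byte("hello"))
}
```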
Sam Kleinman
a15ae5b53a node+consensus: handshaker initialization (#7283)
This mostly just pushes more of initialization out of the node package.
2021-11-15 18:28:52 +00:00
William Banfield
f28d629e28 ADR: Update the proposer-based timestamp spec per discussion with @cason (#7153)
* update the proposer-based timestamps spec per discussion with @cason

* add vote to list of changed structs

* sentence fix

* clarify crypto sig logic update

* language updates per feedback from @cason

* fix POL < 0 wording

* update timely description for non-polka
2021-11-12 15:21:48 -05:00
Sam Kleinman
27560cf7a4 p2p: reduce peer score for dial failures (#7265)
When dialing a peer fails, we should reduce that peer's score, which
puts the peer at (potentially) greater chance of being removed from
the peer manager, and reduces the chance of the peer being gossiped by
the PEX reactor.
2021-11-10 15:52:18 +00:00
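A rough sketch of the idea with invented types and thresholds (the real peer manager's scoring is more involved): each dial failure lowers the peer's score, which makes the peer a better candidate for eviction and less likely to be gossiped.

```go
package main

import "fmt"

// peerScore is an invented representation of a peer's standing.
type peerScore struct {
	id    string
	score int
}

// onDialFailure penalizes a peer whose dial attempt failed. Peers whose
// score drops below evictThreshold become candidates for removal and are
// no longer advertised to others.
func onDialFailure(p *peerScore, evictThreshold int) (evict bool) {
	p.score--
	return p.score < evictThreshold
}

func main() {
	p := &peerScore{id: "node-1", score: 1}
	for i := 0; i < 3; i++ {
		if onDialFailure(p, 0) {
			fmt.Printf("peer %s (score %d) marked for eviction\n", p.id, p.score)
		}
	}
}
```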
William Banfield
4acd117b5e evidence: remove source of non-determinism from test (#7266)
The evidence test produces a set of mock evidence in the evidence pool of the 'Primary' node. The test then fills the evidence pools of the secondaries with half of this mock evidence. Finally, the test waits until each secondary has an evidence pool as full as the primary's.

The assertions that are removed here were checking that the primary and secondaries' evidence channels were empty. However, nothing in the test actually ensures that the channels are empty. The test only waits for the secondaries to have received the complete set of evidence, and the secondaries already received half of the evidence at the beginning. It's more than possible that the secondaries can receive the complete set of evidence and not finish reading the duplicate evidence off the channels.
2021-11-09 20:48:01 +00:00
M. J. Fromberger
9dc3d7f9a2 Set a cap on the length of subscription queries. (#7263)
As a safety measure, don't allow a query string to be unreasonably
long. The query filter is not especially efficient, so a query that
needs more than basic detail should filter coarsely in the subscriber
and refine on the client side.

This affects Subscribe and TxSearch queries.
2021-11-09 08:35:51 -08:00
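A minimal sketch of the kind of guard this describes, with an invented limit (the actual cap and error handling in the RPC layer may differ): reject over-long query strings up front, before the relatively inefficient query filter ever sees them.

```go
package main

import (
	"errors"
	"fmt"
)

// maxQueryLength is an illustrative cap, not the value used in Tendermint.
const maxQueryLength = 512

var errQueryTooLong = errors.New("subscription query is too long")

// checkQueryLength rejects unreasonably long query strings before they
// reach the (relatively inefficient) query filter.
func checkQueryLength(q string) error {
	if len(q) > maxQueryLength {
		return fmt.Errorf("%w: %d > %d bytes", errQueryTooLong, len(q), maxQueryLength)
	}
	return nil
}

func main() {
	if err := checkQueryLength("tm.event = 'NewBlock'"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("query accepted")
}
```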
dependabot[bot]
e4466b7905 build(deps): Bump github.com/lib/pq from 1.10.3 to 1.10.4 (#7261)
Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.10.3 to 1.10.4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/lib/pq/releases">github.com/lib/pq's releases</a>.</em></p>
<blockquote>
<h2>v1.10.4</h2>
<ul>
<li>Keep track of (context cancelled) error on connection.</li>
<li>Fix android build</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="8446d16b89"><code>8446d16</code></a> issue 1062: Keep track of (context cancelled) error on connection, and make r...</li>
<li><a href="6a102c04ac"><code>6a102c0</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/lib/pq/issues/1060">#1060</a> from ian4hu/patch-1</li>
<li><a href="a54251e1b6"><code>a54251e</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/lib/pq/issues/1061">#1061</a> from mjl-/fix-flaky-TestConnPrepareContext</li>
<li><a href="2b4fa17b44"><code>2b4fa17</code></a> Fix flaky TestConnPrepareContext</li>
<li><a href="b33a1b722c"><code>b33a1b7</code></a> Fix android build</li>
<li><a href="16e9cadb5a"><code>16e9cad</code></a> Fix build in android</li>
<li><a href="26399a7687"><code>26399a7</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/lib/pq/issues/1057">#1057</a> from jfcg/master</li>
<li><a href="087077605f"><code>0870776</code></a> fix possible integer truncation</li>
<li><a href="c01ab77091"><code>c01ab77</code></a> Create codeql-analysis.yml</li>
<li>See full diff in <a href="https://github.com/lib/pq/compare/v1.10.3...v1.10.4">compare view</a></li>
</ul>
</details>
2021-11-09 13:22:40 +00:00
Callum Waters
b3b90f820c consensus: add some more checks to vote counting (#7253) 2021-11-09 10:51:19 +01:00
JayT106
1e701ed9b5 fix LGTM warning (#7257) 2021-11-08 12:45:56 -08:00
dependabot[bot]
2af7d8defd build(deps): Bump google.golang.org/grpc from 1.41.0 to 1.42.0 (#7200)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.41.0 to 1.42.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/grpc/grpc-go/releases">google.golang.org/grpc's releases</a>.</em></p>
<blockquote>
<h2>Release 1.42.0</h2>
<h1>Behavior Changes</h1>
<ul>
<li>grpc: Dial(&quot;unix://relative-path&quot;) no longer works (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4817">#4817</a>)
<ul>
<li>use &quot;unix://absolute-path&quot; or &quot;unix:relative-path&quot; instead in accordance with <a href="https://github.com/grpc/grpc/blob/master/doc/naming.md#name-syntax">our documentation</a></li>
</ul>
</li>
<li>xds/csds: use new field <code>GenericXdsConfig</code> instead of <code>PerXdsConfig</code> (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4898">#4898</a>)</li>
</ul>
<h1>New Features</h1>
<ul>
<li>grpc: support <code>grpc.WithAuthority</code> when secure credentials are used (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4817">#4817</a>)</li>
<li>creds/google: add NewDefaultCredentialsWithOptions() to support custom per-RPC creds (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4830">#4830</a>)</li>
<li>authz: create file watcher interceptor for gRPC SDK API (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4760">#4760</a>)</li>
<li>attributes: add <code>Equal</code> method (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4855">#4855</a>)</li>
<li>resolver: add <code>AddressMap</code> and <code>State.BalancerAttributes</code> (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4855">#4855</a>)</li>
<li>resolver: Add <code>URL</code> field to <code>Target</code> to store parsed dial target (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4817">#4817</a>)</li>
<li>grpclb: add a <code>target_name</code> field to lb config to specify target when used as a child policy (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4847">#4847</a>)</li>
<li>grpclog: support formatting log output as JSON (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4854">#4854</a>)</li>
</ul>
<h1>Bug Fixes</h1>
<ul>
<li>server: add missing conn.Close if the connection dies before reading the HTTP/2 preface (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4837">#4837</a>)</li>
<li>grpclb: recover if addresses are received after an empty server list was received previously (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4879">#4879</a>)</li>
<li>authz: support empty principals and fix rbac authenticated matcher (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4883">#4883</a>)</li>
<li>xds/rds: NACK the RDS response if it contains unknown cluster specifier (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4788">#4788</a>)</li>
<li>xds/priority: do not switch to low priority when high priority is in Idle (e.g. ringhash) (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4889">#4889</a>)</li>
</ul>
<h1>Documentation</h1>
<ul>
<li>grpc: stabilize WithDefaultServiceConfig and improve godoc (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4888">#4888</a>)</li>
<li>status: clarify FromError docstring (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4880">#4880</a>)</li>
<li>examples: add example illustrating the use of unix abstract sockets (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4848">#4848</a>)</li>
<li>examples: update load balancing example to use loadBalancingConfig (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4887">#4887</a>)</li>
<li>doc: promote WithDisableRetry to stable; clarify retry is enabled by default (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4901">#4901</a>)</li>
</ul>
<h1>API Changes</h1>
<ul>
<li>credentials: Mark <code>TransportCredentials.OverrideServerName</code> method as deprecated (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4817">#4817</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="aff571cc86"><code>aff571c</code></a> Change version to 1.42.0 (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4910">#4910</a>)</li>
<li><a href="2d7bdf2d23"><code>2d7bdf2</code></a> xds: Set RBAC on by default (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4909">#4909</a>)</li>
<li><a href="d47437c91e"><code>d47437c</code></a> xds: Fix invert functionality for header matcher (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4902">#4902</a>)</li>
<li><a href="9fa2698264"><code>9fa2698</code></a> xds/csds: populate new GenericXdsConfig field (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4898">#4898</a>)</li>
<li><a href="6e8625df63"><code>6e8625d</code></a> doc: promote WithDisableRetry to stable; clarify retry is enabled by default ...</li>
<li><a href="f1d87c14c2"><code>f1d87c1</code></a> client: properly disable retry if GRPC_GO_RETRY=off (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4899">#4899</a>)</li>
<li><a href="03753f593c"><code>03753f5</code></a> creds/google: fix CFE cluster name check (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4893">#4893</a>)</li>
<li><a href="4f21cde702"><code>4f21cde</code></a> authz: support empty principals and fix rbac authenticated matcher (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4883">#4883</a>)</li>
<li><a href="f00baa6c3c"><code>f00baa6</code></a> resolver: replace AddressMap.Range with Keys (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4891">#4891</a>)</li>
<li><a href="2a312458e6"><code>2a31245</code></a> client: don't force passthrough as default resolver (<a href="https://github-redirect.dependabot.com/grpc/grpc-go/issues/4890">#4890</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/grpc/grpc-go/compare/v1.41.0...v1.42.0">compare view</a></li>
</ul>
</details>
2021-11-06 02:37:58 +00:00
M. J. Fromberger
d5865af1f4 Add basic metrics to the indexer package. (#7250)
This follows the same model we used in the p2p package.

Rework the indexer service constructor to take a struct of arguments,
which makes it easier to construct the optional settings.
Deprecate but do not remove the existing constructor.

Clean up node initialization a little bit.
2021-11-05 12:50:53 -07:00
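A sketch of the args-struct constructor pattern described here, using invented names (`IndexerServiceArgs`, `NewIndexerService`) rather than the real indexer types: optional settings such as metrics can be added as new fields without breaking existing callers.

```go
package main

import "fmt"

// Metrics is a stand-in for the indexer metrics added in this change.
type Metrics struct{ enabled bool }

// IndexerServiceArgs groups the constructor's inputs; optional fields can
// be added later without changing the constructor's signature.
type IndexerServiceArgs struct {
	DBPath  string
	Metrics *Metrics // optional; nil means "no metrics"
}

type IndexerService struct {
	dbPath  string
	metrics *Metrics
}

func NewIndexerService(args IndexerServiceArgs) *IndexerService {
	if args.Metrics == nil {
		args.Metrics = &Metrics{} // default no-op metrics
	}
	return &IndexerService{dbPath: args.DBPath, metrics: args.Metrics}
}

func main() {
	svc := NewIndexerService(IndexerServiceArgs{DBPath: "/tmp/index"})
	fmt.Printf("indexer at %s, metrics enabled: %v\n", svc.dbPath, svc.metrics.enabled)
}
```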
M. J. Fromberger
54d7030510 pubsub: Move indexing out of the primary subscription path (#7231)
This is part of the work described by #7156.

Remove "unbuffered subscriptions" from the pubsub service.
Replace them with a dedicated blocking "observer" mechanism.
Use the observer mechanism for indexing.

Add a SubscribeWithArgs method and deprecate the old Subscribe
method. Remove SubscribeUnbuffered entirely (breaking).

Rework the Subscription interface to eliminate exposed channels.
Subscriptions now use a context to manage lifecycle notifications.

Internalize the eventbus package.
2021-11-05 10:25:25 -07:00
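A toy sketch of the context-managed subscription lifecycle this commit describes, with invented types rather than the eventbus API: the subscription exposes no channel, and the caller's context governs how long a receive may block.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Subscription is an invented stand-in: no exposed channel, lifecycle is
// driven by the context passed to Next.
type Subscription struct {
	msgs chan string
}

// Next blocks until a message arrives or the context is done.
func (s *Subscription) Next(ctx context.Context) (string, error) {
	select {
	case m := <-s.msgs:
		return m, nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

func main() {
	sub := &Subscription{msgs: make(chan string, 1)}
	sub.msgs <- "NewBlock"

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	msg, err := sub.Next(ctx)
	fmt.Println(msg, err) // "NewBlock <nil>"

	cancel()
	_, err = sub.Next(ctx) // a canceled context ends the subscription
	fmt.Println(err)
}
```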
M. J. Fromberger
b4055a0753 Fix ToC entry for RFC 006. (#7248)
Oops.
2021-11-05 15:55:48 +00:00
M. J. Fromberger
432067cc0e RFC 006: Event Subscription (#7184)
This is intended to document some ergonomic and reliability issues with the
existing implementation of the event subscription service on the Tendermint
node, and to discuss possible approaches to improving them.
2021-11-05 08:23:17 -07:00
M. J. Fromberger
6c7d6f761b Remove unused libs/cmap. (#7245)
A follow-up to #7197.
2021-11-05 10:31:06 +00:00
M. J. Fromberger
3335cbe8b7 Update master changelog after v0.35.0 tag. (#7243)
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2021-11-04 16:40:43 +00:00
JayT106
641a5ec998 rpc: fix inappropriate http request log (#7244) 2021-11-04 17:38:08 +01:00
M. J. Fromberger
b3b1279d1f Add v0.35 to the configs for building the docs website. (#7055) 2021-11-04 08:29:23 -07:00
Sam Kleinman
797541e742 docs: add upgrading info about node service (#7241)
* docs: add info about RPC local to upgrading docs

* Apply suggestions from code review

Co-authored-by: Callum Waters <cmwaters19@gmail.com>

* cleanup codeblock

Co-authored-by: Callum Waters <cmwaters19@gmail.com>
2021-11-04 13:20:26 +00:00
Callum Waters
ae92b135d8 docs: update bounty links (#7203) 2021-11-03 22:28:59 +01:00
dependabot[bot]
a2f4f3fb48 build(deps): Bump github.com/golangci/golangci-lint from 1.42.1 to 1.43.0 (#7219)
Bumps [github.com/golangci/golangci-lint](https://github.com/golangci/golangci-lint) from 1.42.1 to 1.43.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/golangci/golangci-lint/releases">github.com/golangci/golangci-lint's releases</a>.</em></p>
<blockquote>
<h2>v1.43.0</h2>
<h2>Changelog</h2>
<p>bdc2f96d Add code comments to document source code (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2306">#2306</a>)
861262b7 Add github.com/breml/bidichk linter (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2330">#2330</a>)
32292622 Add nilnil linter (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2236">#2236</a>)
20699a72 Add tenv linter (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2221">#2221</a>)
e612577d Bump gochecknoglobals to v0.1.0 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2317">#2317</a>)
1be9570a Refactor: preallocate slices (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2340">#2340</a>)
813ba7d9 Update index.mdx (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2230">#2230</a>)
f500e4cb add varnamelen linter (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2240">#2240</a>)
e6c56694 build(deps): bump github.com/Antonboom/errname from 0.1.4 to 0.1.5 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2249">#2249</a>)
a37843b1 build(deps): bump github.com/butuzov/ireturn from 0.1.0 to 0.1.1 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2246">#2246</a>)
680f3e6c build(deps): bump github.com/charithe/durationcheck from 0.0.8 to 0.0.9 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2289">#2289</a>)
00e47702 build(deps): bump github.com/esimonov/ifshort from 1.0.2 to 1.0.3 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2303">#2303</a>)
d3fc84bc build(deps): bump github.com/fatih/color from 1.12.0 to 1.13.0 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2259">#2259</a>)
4ce9a19e build(deps): bump github.com/go-critic/go-critic from 0.5.6 to 0.6.0 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2041">#2041</a>)
5adafe52 build(deps): bump github.com/jingyugao/rowserrcheck from 1.1.0 to 1.1.1 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2326">#2326</a>)
3fe324a1 build(deps): bump github.com/kunwardeep/paralleltest from 1.0.2 to 1.0.3 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2244">#2244</a>)
739ccd3f build(deps): bump github.com/mattn/go-colorable from 0.1.10 to 0.1.11 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2277">#2277</a>)
c6c55d25 build(deps): bump github.com/mattn/go-colorable from 0.1.8 to 0.1.9 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2252">#2252</a>)
8f2af022 build(deps): bump github.com/mattn/go-colorable from 0.1.9 to 0.1.10 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2260">#2260</a>)
78d309e7 build(deps): bump github.com/mgechev/revive from 1.1.1 to 1.1.2 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2276">#2276</a>)
1012c10d build(deps): bump github.com/nakabonne/nestif from 0.3.0 to 0.3.1 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2325">#2325</a>)
6edca924 build(deps): bump github.com/securego/gosec/v2 from 2.8.1 to 2.9.1 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2299">#2299</a>)
963257f9 build(deps): bump github.com/shirou/gopsutil/v3 from 3.21.7 to 3.21.8 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2225">#2225</a>)
b9f015cc build(deps): bump github.com/shirou/gopsutil/v3 from 3.21.8 to 3.21.9 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2275">#2275</a>)
9f628531 build(deps): bump github.com/shirou/gopsutil/v3 from 3.21.9 to 3.21.10 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2327">#2327</a>)
f1258315 build(deps): bump github.com/spf13/viper from 1.8.1 to 1.9.0 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2243">#2243</a>)
60a9d16f build(deps): bump github.com/tetafro/godot from 1.4.10 to 1.4.11 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2248">#2248</a>)
8c60147b build(deps): bump github.com/tetafro/godot from 1.4.9 to 1.4.10 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2226">#2226</a>)
2fb65636 build(deps): bump github.com/tomarrell/wrapcheck/v2 from 2.3.0 to 2.3.1 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2278">#2278</a>)
9bb917de build(deps): bump github.com/tomarrell/wrapcheck/v2 from 2.3.1 to 2.4.0 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2324">#2324</a>)
59c7b10b build(deps): bump github.com/valyala/quicktemplate from 1.6.3 to 1.7.0 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2250">#2250</a>)
5d4fe00f build(deps): bump golang.org/x/tools from 0.1.5 to 0.1.6 (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2245">#2245</a>)
91016acc build(deps): bump tmpl from 1.0.4 to 1.0.5 in /.github/peril (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2247">#2247</a>)
f47f4f55 codeql: Remove unneeded steps (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2336">#2336</a>)
413bec6a errcheck: empty selector name. (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2309">#2309</a>)
7fc2fe81 feat: add contextcheck linter (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2216">#2216</a>)
8cb9c769 fix: Add missing space in &quot;disabled by config&quot; warning (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2310">#2310</a>)
a8887d56 fix: don't hide enable-all option (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2338">#2338</a>)
cf9f3f95 fix: go.sum (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2262">#2262</a>)
2c01ea7f gocritic: add support for variable substitution in ruleguard path settings (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2308">#2308</a>)
cc262bba gosec: filter issues according to the severity and confidence (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2295">#2295</a>)
9b577fcb new-from-rev: add support for finding issues in entire files in a diff (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2264">#2264</a>)
2ea496f2 new-linter: ireturn (checks for function return type) (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2219">#2219</a>)
17d24ebd nlreturn: add block-size option (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2237">#2237</a>)</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="861262b71f"><code>861262b</code></a> Add github.com/breml/bidichk linter (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2330">#2330</a>)</li>
<li><a href="77d4115b0f"><code>77d4115</code></a> docs: add documentation for go-critic and ruleguard settings (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2304">#2304</a>)</li>
<li><a href="2c01ea7ff2"><code>2c01ea7</code></a> gocritic: add support for variable substitution in ruleguard path settings (#...</li>
<li><a href="f9f6486e15"><code>f9f6486</code></a> docs: change Github to GitHub in comments and docs (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2341">#2341</a>)</li>
<li><a href="1be9570abf"><code>1be9570</code></a> Refactor: preallocate slices (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2340">#2340</a>)</li>
<li><a href="a8887d5655"><code>a8887d5</code></a> fix: don't hide enable-all option (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2338">#2338</a>)</li>
<li><a href="f47f4f5557"><code>f47f4f5</code></a> codeql: Remove unneeded steps (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2336">#2336</a>)</li>
<li><a href="8cb9c769ff"><code>8cb9c76</code></a> fix: Add missing space in &quot;disabled by config&quot; warning (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2310">#2310</a>)</li>
<li><a href="a2b2968b12"><code>a2b2968</code></a> build(deps): bump gatsby-plugin-robots-txt in /docs (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2332">#2332</a>)</li>
<li><a href="dcaaa894af"><code>dcaaa89</code></a> build(deps): bump gatsby-remark-copy-linked-files in /docs (<a href="https://github-redirect.dependabot.com/golangci/golangci-lint/issues/2335">#2335</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/golangci/golangci-lint/compare/v1.42.1...v1.43.0">compare view</a></li>
</ul>
</details>
2021-11-03 16:40:55 +00:00
M. J. Fromberger
01c43c153b doc: Set up Dependabot on new backport branches. (#7227) 2021-11-03 16:28:46 +00:00
M. J. Fromberger
0de4aa1765 Fix spurious crasher in mempool fuzz test. (#7190)
We were creating a TestLogger without the testing framework being
initialized. Switch to a no-op logger instead.
2021-11-03 08:36:02 -07:00
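For illustration only (invented helper, not the actual fuzz harness or Tendermint's logging package): outside a test there is no `*testing.T`, so the logger has to be one that needs no test context, such as a discard-backed no-op logger.

```go
package main

import (
	"io"
	"log"
)

// newNopLogger returns a logger that discards everything, which is safe to
// construct anywhere, including fuzz targets where no *testing.T exists.
func newNopLogger() *log.Logger {
	return log.New(io.Discard, "", 0)
}

func fuzzTarget(data []byte) {
	logger := newNopLogger()
	logger.Printf("checking %d bytes", len(data)) // silently discarded
	// ... exercise the code under fuzz with data ...
}

func main() {
	fuzzTarget([]byte("example input"))
}
```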
M. J. Fromberger
dc28734dad pubsub: Remove uninformative publisher benchmarks. (#7195)
Prior to #7177, these benchmarks did not provide any useful data about the
performance of the pubsub system (in fact, prior to #7178, half of them did not
work at all).

Specifically, they create a bunch of subscribers with 1 buffer slot on a
default publisher and copy messages to them. But because the publisher is
single-threaded and doesn't block for delivery, all this measured was how long
it takes to receive a single message from a channel and deliver it to another
channel. The resulting stat does not even vary meaningfully with batch size,
since it's testing a serial workload.

Since #7177 the benchmarks do correctly reflect delivery time (good), but still
do not tell us anything useful: The latencies that matter for pubsub are not
internal queuing, but the effects of backpressure on the publisher via the
subscribers. That's an integration problem, and simulating a fake workload does
not provide meaningful results.

On that basis, remove these benchmarks.
2021-11-03 07:39:09 -07:00
Sam Kleinman
6f08c14375 ci: update dependabot configuration (#7204)
I relaxed the update cadence for the GitHub Actions and NPM/docs
dependencies, and added default automerging of the Go dependencies.
2021-11-03 13:30:53 +00:00
Sam Kleinman
ffcd347ef6 pex: allow disabled pex reactor (#7198)
This ensures the implementation respects disabling the pex reactor.
2021-11-03 11:51:42 +00:00
dependabot[bot]
279e8027d3 build(deps): Bump actions/checkout from 2.3.5 to 2.4.0 (#7199)
Bumps [actions/checkout](https://github.com/actions/checkout) from 2.3.5 to 2.4.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/actions/checkout/releases">actions/checkout's releases</a>.</em></p>
<blockquote>
<h2>v2.4.0</h2>
<ul>
<li>Convert SSH URLs like <code>org-&lt;ORG_ID&gt;@github.com:</code> to <code>https://github.com/</code> - <a href="https://github-redirect.dependabot.com/actions/checkout/pull/621">pr</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="ec3a7ce113"><code>ec3a7ce</code></a> set insteadOf url for org-id (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/621">#621</a>)</li>
<li><a href="fd47087372"><code>fd47087</code></a> codeql should analyze lib not dist (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/620">#620</a>)</li>
<li><a href="3d677ac575"><code>3d677ac</code></a> script to generate license info (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/614">#614</a>)</li>
<li><a href="826ba42d6c"><code>826ba42</code></a> npm audit fix (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/612">#612</a>)</li>
<li><a href="eb8a193c1d"><code>eb8a193</code></a> update dev dependencies and react to new linting rules (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/611">#611</a>)</li>
<li><a href="c49af7ca1f"><code>c49af7c</code></a> Create codeql-analysis.yml (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/602">#602</a>)</li>
<li>See full diff in <a href="https://github.com/actions/checkout/compare/v2.3.5...v2.4.0">compare view</a></li>
</ul>
</details>
2021-11-03 11:21:45 +00:00
Sam Kleinman
e2b626fc92 node: cleanup construction (#7191) 2021-11-03 07:16:35 -04:00
Sam Kleinman
63192ac300 consensus: remove stale WAL benchmark (#7194) 2021-11-03 07:01:46 -04:00
dependabot[bot]
5bc9cb6260 build(deps): Bump github.com/rs/zerolog from 1.25.0 to 1.26.0 (#7192)
Bumps [github.com/rs/zerolog](https://github.com/rs/zerolog) from 1.25.0 to 1.26.0.
- [Release notes](https://github.com/rs/zerolog/releases)
- [Commits](https://github.com/rs/zerolog/compare/v1.25.0...v1.26.0)

---
updated-dependencies:
- dependency-name: github.com/rs/zerolog
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-11-02 07:18:12 -07:00
M. J. Fromberger
d32913c889 pubsub: Use a dynamic queue for buffered subscriptions (#7177)
Updates #7156, and a follow-up to #7070.

Event subscriptions in Tendermint currently use a fixed-length Go
channel as a queue. When the channel fills up, the publisher
immediately terminates the subscription. This prevents slow
subscribers from creating memory pressure on the node by not
servicing their queue fast enough.

Replace the buffered channel used to deliver events to buffered
subscribers with an explicit queue. The queue provides a soft
quota and burst credit mechanism: Clients that usually keep up
can survive occasional bursts, without allowing truly slow
clients to hog resources indefinitely.
2021-11-01 10:38:27 -07:00
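A simplified sketch of a soft-quota/burst-credit queue in the spirit of this change, with invented field names and policy (the real pubsub queue is more sophisticated): pushes within the quota always succeed, pushes beyond it spend burst credits, and once the credits run out the subscriber is treated as too slow.

```go
package main

import "fmt"

// creditQueue is an invented illustration of a soft quota with burst credits.
type creditQueue struct {
	items   []string
	quota   int // steady-state queue length a subscriber is entitled to
	credits int // remaining burst allowance beyond the quota
}

// push enqueues an item. Within quota it always succeeds; beyond quota it
// spends a burst credit; once credits are exhausted the subscriber is
// considered too slow and the push fails (the subscription would be dropped).
func (q *creditQueue) push(item string) bool {
	switch {
	case len(q.items) < q.quota:
		q.items = append(q.items, item)
		return true
	case q.credits > 0:
		q.credits--
		q.items = append(q.items, item)
		return true
	default:
		return false
	}
}

// pop dequeues an item; draining back under quota slowly restores credits.
func (q *creditQueue) pop() (string, bool) {
	if len(q.items) == 0 {
		return "", false
	}
	item := q.items[0]
	q.items = q.items[1:]
	if len(q.items) < q.quota && q.credits < 2 {
		q.credits++
	}
	return item, true
}

func main() {
	q := &creditQueue{quota: 2, credits: 2}
	for i := 0; i < 6; i++ {
		fmt.Printf("push %d accepted: %v\n", i, q.push(fmt.Sprintf("event-%d", i)))
	}
}
```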
William Banfield
5599ec37bf fuzz: remove fuzz cases for deleted code (#7187)
fuzz: remove fuzz cases for deleted code
2021-11-01 15:46:35 +01:00
Sam Kleinman
5cc980698a mempool: consolidate implementations (#7171)
* mempool: consolidate implementations

* update changelog

* fix test

* Apply suggestions from code review

Co-authored-by: M. J. Fromberger <michael.j.fromberger@gmail.com>

* cleanup locking comments

* context twiddle

* migrate away from deprecated ioutil APIs (#7175)

Co-authored-by: Callum Waters <cmwaters19@gmail.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>

Co-authored-by: M. J. Fromberger <michael.j.fromberger@gmail.com>
Co-authored-by: Callum Waters <cmwaters19@gmail.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2021-10-29 04:19:06 -04:00
Sharad Chand
8441b3715a migrate away from deprecated ioutil APIs (#7175)
Co-authored-by: Callum Waters <cmwaters19@gmail.com>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2021-10-28 10:34:07 -07:00
Sam Kleinman
6108d0a9b5 config: expose ability to write config to arbitrary paths (#7174) 2021-10-28 05:17:23 -04:00
M. J. Fromberger
1fd7060542 pubsub: Use distinct client IDs for test subscriptions. (#7178)
Fixes #7176. Some of the benchmarks create a bunch of different subscriptions all sharing the same query. These were all using the same client ID, which violates one of the subscriber rules. Ensure each subscriber gets a unique ID.

This has been broken for as long as this library has been in the repo; I tracked it back to bb9aa85d and it was already failing there, so I think this never really worked. I'm not sure these benchmarks test anything useful, but at least now they run.
2021-10-28 07:57:45 +00:00
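The shape of the fix, in a self-contained sketch (the `subscribe` function below is invented; the real pubsub API takes more arguments and returns a subscription): each benchmark subscriber gets its own client ID instead of all of them sharing one.

```go
package main

import "fmt"

// subscribe is a stand-in for the pubsub subscribe call; real code would
// also pass a query and receive a subscription back.
func subscribe(clientID, query string) error {
	fmt.Printf("subscribed %q to %q\n", clientID, query)
	return nil
}

func main() {
	const query = "tm.event = 'Tx'"
	for i := 0; i < 3; i++ {
		// Each subscriber gets a unique client ID; reusing the same ID for
		// every subscriber violates the subscriber rules.
		clientID := fmt.Sprintf("test-client-%d", i)
		if err := subscribe(clientID, query); err != nil {
			panic(err)
		}
	}
}
```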
William Banfield
4d09a6c25d abci: fix readme link (#7173) 2021-10-27 20:27:09 -07:00
Sam Kleinman
e2a103a315 mempool: port reactor tests from legacy implementation (#7162) 2021-10-27 09:45:17 -04:00
Sam Kleinman
93eb940dcd config: WriteConfigFile should return error (#7169) 2021-10-27 08:46:18 -04:00
dependabot[bot]
e5d019d51b build(deps): Bump url-parse from 1.5.1 to 1.5.3 in /docs (#7166)
Bumps [url-parse](https://github.com/unshiftio/url-parse) from 1.5.1 to 1.5.3.
<details>
<summary>Commits</summary>
<ul>
<li><a href="ad44493166"><code>ad44493</code></a> [dist] 1.5.3</li>
<li><a href="c7984617e2"><code>c798461</code></a> [fix] Fix host parsing for file URLs (<a href="https://github-redirect.dependabot.com/unshiftio/url-parse/issues/210">#210</a>)</li>
<li><a href="201034b867"><code>201034b</code></a> [dist] 1.5.2</li>
<li><a href="2d9ac2c940"><code>2d9ac2c</code></a> [fix] Sanitize only special URLs (<a href="https://github-redirect.dependabot.com/unshiftio/url-parse/issues/209">#209</a>)</li>
<li><a href="fb128af4f4"><code>fb128af</code></a> [fix] Use <code>'null'</code> as <code>origin</code> for non special URLs</li>
<li><a href="fed6d9e338"><code>fed6d9e</code></a> [fix] Add a leading slash only if the URL is special</li>
<li><a href="94872e7ab9"><code>94872e7</code></a> [fix] Do not incorrectly set the <code>slashes</code> property to <code>true</code></li>
<li><a href="81ab967889"><code>81ab967</code></a> [fix] Ignore slashes after the protocol for special URLs</li>
<li><a href="ee22050a48"><code>ee22050</code></a> [ci] Use GitHub Actions</li>
<li><a href="d2979b586d"><code>d2979b5</code></a> [fix] Special case the <code>file:</code> protocol (<a href="https://github-redirect.dependabot.com/unshiftio/url-parse/issues/204">#204</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/unshiftio/url-parse/compare/1.5.1...1.5.3">compare view</a></li>
</ul>
</details>
2021-10-27 09:01:14 +00:00
dependabot[bot]
0dd5970894 build(deps): Bump path-parse from 1.0.6 to 1.0.7 in /docs (#7164)
Bumps [path-parse](https://github.com/jbgutierrez/path-parse) from 1.0.6 to 1.0.7.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a href="https://github.com/jbgutierrez/path-parse/commits/v1.0.7">compare view</a></li>
</ul>
</details>
2021-10-27 08:59:48 +00:00
dependabot[bot]
ab237091bb build(deps): Bump ws from 6.2.1 to 6.2.2 in /docs (#7165)
Bumps [ws](https://github.com/websockets/ws) from 6.2.1 to 6.2.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/websockets/ws/releases">ws's releases</a>.</em></p>
<blockquote>
<h2>6.2.2</h2>
<h1>Bug fixes</h1>
<ul>
<li>Backported 00c425ec to the 6.x release line (78c676d2).</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="9bdb58070d"><code>9bdb580</code></a> [dist] 6.2.2</li>
<li><a href="78c676d2a1"><code>78c676d</code></a> [security] Fix ReDoS vulnerability</li>
<li>See full diff in <a href="https://github.com/websockets/ws/compare/6.2.1...6.2.2">compare view</a></li>
</ul>
</details>
2021-10-27 08:57:58 +00:00
dependabot[bot]
2416ff3be9 build(deps): Bump postcss from 7.0.35 to 7.0.39 in /docs (#7167)
Bumps [postcss](https://github.com/postcss/postcss) from 7.0.35 to 7.0.39.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/postcss/postcss/releases">postcss's releases</a>.</em></p>
<blockquote>
<h2>7.0.39</h2>
<ul>
<li>Reduce package size.</li>
<li>Backport <code>nanocolors</code> to <code>picocolors</code> migration.</li>
</ul>
<h2>7.0.38</h2>
<ul>
<li>Update <code>Processor#version</code>.</li>
</ul>
<h2>7.0.37</h2>
<ul>
<li>Backport <code>chalk</code> to <code>nanocolors</code> migration.</li>
</ul>
<h2>7.0.36</h2>
<ul>
<li>Backport ReDoS vulnerabilities from PostCSS 8.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/postcss/postcss/blob/7.0.39/CHANGELOG.md">postcss's changelog</a>.</em></p>
<blockquote>
<h2>7.0.39</h2>
<ul>
<li>Reduce package size.</li>
<li>Backport <code>nanocolors</code> to <code>picocolors</code> migration.</li>
</ul>
<h2>7.0.38</h2>
<ul>
<li>Update <code>Processor#version</code>.</li>
</ul>
<h2>7.0.37</h2>
<ul>
<li>Backport <code>chalk</code> to <code>nanocolors</code> migration.</li>
</ul>
<h2>7.0.36</h2>
<ul>
<li>Backport ReDoS vulnerabilities from PostCSS 8.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="e17c1ef762"><code>e17c1ef</code></a> Release 7.0.39 version</li>
<li><a href="6791bd3d5f"><code>6791bd3</code></a> Reduce npm package</li>
<li><a href="44c581a55a"><code>44c581a</code></a> Replace nanocolors with picocolors</li>
<li><a href="8ba21fd8f4"><code>8ba21fd</code></a> Remove eslint-ci</li>
<li><a href="3994c4aa3c"><code>3994c4a</code></a> Release 7.0.38 version</li>
<li><a href="6944e1dd80"><code>6944e1d</code></a> Remove development keys from package.json</li>
<li><a href="4dd0af024a"><code>4dd0af0</code></a> Release 7.0.37 version</li>
<li><a href="8408eb4105"><code>8408eb4</code></a> Add compilation step</li>
<li><a href="0c680639c3"><code>0c68063</code></a> Move tests to GitHub Actions</li>
<li><a href="98b61ba5b4"><code>98b61ba</code></a> Replace chalk to nanocolors</li>
<li>Additional commits viewable in <a href="https://github.com/postcss/postcss/compare/7.0.35...7.0.39">compare view</a></li>
</ul>
</details>
2021-10-27 08:56:03 +00:00
dependabot[bot]
4725b00af1 build(deps): Bump prismjs from 1.23.0 to 1.25.0 in /docs (#7168)
Bumps [prismjs](https://github.com/PrismJS/prism) from 1.23.0 to 1.25.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/PrismJS/prism/releases">prismjs's releases</a>.</em></p>
<blockquote>
<h2>v1.25.0</h2>
<p>Release 1.25.0</p>
<h2>v1.24.1</h2>
<p>Release 1.24.1</p>
<h2>v1.24.0</h2>
<p>Release 1.24.0</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/PrismJS/prism/blob/master/CHANGELOG.md">prismjs's changelog</a>.</em></p>
<blockquote>
<h2>1.25.0 (2021-09-16)</h2>
<h3>New components</h3>
<ul>
<li><strong>AviSynth</strong> (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3071">#3071</a>) <a href="https://github.com/PrismJS/prism/commit/746a4b1a"><code>746a4b1a</code></a></li>
<li><strong>Avro IDL</strong> (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3051">#3051</a>) <a href="https://github.com/PrismJS/prism/commit/87e5a376"><code>87e5a376</code></a></li>
<li><strong>Bicep</strong> (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3027">#3027</a>) <a href="https://github.com/PrismJS/prism/commit/c1dce998"><code>c1dce998</code></a></li>
<li><strong>GAP (CAS)</strong> (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3054">#3054</a>) <a href="https://github.com/PrismJS/prism/commit/23cd9b65"><code>23cd9b65</code></a></li>
<li><strong>GN</strong> (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3062">#3062</a>) <a href="https://github.com/PrismJS/prism/commit/4f97b82b"><code>4f97b82b</code></a></li>
<li><strong>Hoon</strong> (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/2978">#2978</a>) <a href="https://github.com/PrismJS/prism/commit/ea776756"><code>ea776756</code></a></li>
<li><strong>Kusto</strong> (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3068">#3068</a>) <a href="https://github.com/PrismJS/prism/commit/e008ea05"><code>e008ea05</code></a></li>
<li><strong>Magma (CAS)</strong> (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3055">#3055</a>) <a href="https://github.com/PrismJS/prism/commit/a1b67ce3"><code>a1b67ce3</code></a></li>
<li><strong>MAXScript</strong> (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3060">#3060</a>) <a href="https://github.com/PrismJS/prism/commit/4fbdd2f8"><code>4fbdd2f8</code></a></li>
<li><strong>Mermaid</strong> (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3050">#3050</a>) <a href="https://github.com/PrismJS/prism/commit/148c1eca"><code>148c1eca</code></a></li>
<li><strong>Razor C#</strong> (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3064">#3064</a>) <a href="https://github.com/PrismJS/prism/commit/4433ccfc"><code>4433ccfc</code></a></li>
<li><strong>Systemd configuration file</strong> (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3053">#3053</a>) <a href="https://github.com/PrismJS/prism/commit/8df825e0"><code>8df825e0</code></a></li>
<li><strong>Wren</strong> (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3063">#3063</a>) <a href="https://github.com/PrismJS/prism/commit/6a356d25"><code>6a356d25</code></a></li>
</ul>
<h3>Updated components</h3>
<ul>
<li><strong>Bicep</strong>
<ul>
<li>Added support for multiline and interpolated strings and other improvements (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3028">#3028</a>) <a href="https://github.com/PrismJS/prism/commit/748bb9ac"><code>748bb9ac</code></a></li>
</ul>
</li>
<li><strong>C#</strong>
<ul>
<li>Added <code>with</code> keyword &amp; improved record support (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/2993">#2993</a>) <a href="https://github.com/PrismJS/prism/commit/fdd291c0"><code>fdd291c0</code></a></li>
<li>Added <code>record</code>, <code>init</code>, and <code>nullable</code> keyword (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/2991">#2991</a>) <a href="https://github.com/PrismJS/prism/commit/9b561565"><code>9b561565</code></a></li>
<li>Added context check for <code>from</code> keyword (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/2970">#2970</a>) <a href="https://github.com/PrismJS/prism/commit/158f25d4"><code>158f25d4</code></a></li>
</ul>
</li>
<li><strong>C++</strong>
<ul>
<li>Fixed generic function false positive (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3043">#3043</a>) <a href="https://github.com/PrismJS/prism/commit/5de8947f"><code>5de8947f</code></a></li>
</ul>
</li>
<li><strong>Clojure</strong>
<ul>
<li>Improved tokenization (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3056">#3056</a>) <a href="https://github.com/PrismJS/prism/commit/8d0b74b5"><code>8d0b74b5</code></a></li>
</ul>
</li>
<li><strong>Hoon</strong>
<ul>
<li>Fixed mixed-case aura tokenization (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3002">#3002</a>) <a href="https://github.com/PrismJS/prism/commit/9c8911bd"><code>9c8911bd</code></a></li>
</ul>
</li>
<li><strong>Liquid</strong>
<ul>
<li>Added all objects from Shopify reference (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/2998">#2998</a>) <a href="https://github.com/PrismJS/prism/commit/693b7433"><code>693b7433</code></a></li>
<li>Added <code>empty</code> keyword (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/2997">#2997</a>) <a href="https://github.com/PrismJS/prism/commit/fe3bc526"><code>fe3bc526</code></a></li>
</ul>
</li>
<li><strong>Log file</strong>
<ul>
<li>Added support for Java stack traces (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3003">#3003</a>) <a href="https://github.com/PrismJS/prism/commit/b0365e70"><code>b0365e70</code></a></li>
</ul>
</li>
<li><strong>Markup</strong>
<ul>
<li>Made most patterns greedy (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3065">#3065</a>) <a href="https://github.com/PrismJS/prism/commit/52e8cee9"><code>52e8cee9</code></a></li>
<li>Fixed ReDoS (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3078">#3078</a>) <a href="https://github.com/PrismJS/prism/commit/0ff371bb"><code>0ff371bb</code></a></li>
</ul>
</li>
<li><strong>PureScript</strong>
<ul>
<li>Made <code>∀</code> a keyword (alias for <code>forall</code>) (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3005">#3005</a>) <a href="https://github.com/PrismJS/prism/commit/b38fc89a"><code>b38fc89a</code></a></li>
<li>Improved Haskell and PureScript (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3020">#3020</a>) <a href="https://github.com/PrismJS/prism/commit/679539ec"><code>679539ec</code></a></li>
</ul>
</li>
<li><strong>Python</strong>
<ul>
<li>Support for underscores in numbers (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3039">#3039</a>) <a href="https://github.com/PrismJS/prism/commit/6f5d68f7"><code>6f5d68f7</code></a></li>
</ul>
</li>
<li><strong>Sass</strong>
<ul>
<li>Fixed issues with CSS Extras (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/2994">#2994</a>) <a href="https://github.com/PrismJS/prism/commit/14fdfe32"><code>14fdfe32</code></a></li>
</ul>
</li>
<li><strong>Shell session</strong>
<ul>
<li>Fixed command false positives (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3048">#3048</a>) <a href="https://github.com/PrismJS/prism/commit/35b88fcf"><code>35b88fcf</code></a></li>
<li>Added support for the percent sign as shell symbol (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3010">#3010</a>) <a href="https://github.com/PrismJS/prism/commit/4492b62b"><code>4492b62b</code></a></li>
</ul>
</li>
</ul>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="99d94fa7c3"><code>99d94fa</code></a> 1.25.0</li>
<li><a href="6d8e54703b"><code>6d8e547</code></a> Updated changelog (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3083">#3083</a>)</li>
<li><a href="e008ea056d"><code>e008ea0</code></a> Added support for Kusto (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3068">#3068</a>)</li>
<li><a href="4433ccfc0c"><code>4433ccf</code></a> Added support for ASP.NET Razor (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3064">#3064</a>)</li>
<li><a href="6a356d253a"><code>6a356d2</code></a> Added support for Wren (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3063">#3063</a>)</li>
<li><a href="4fbdd2f8f8"><code>4fbdd2f</code></a> Added support for MAXScript (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3060">#3060</a>)</li>
<li><a href="746a4b1adf"><code>746a4b1</code></a> Added AviSynth language definition (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3071">#3071</a>)</li>
<li><a href="ffb2043909"><code>ffb2043</code></a> Twilight theme: Increase selector specificities of plugin overrides (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3081">#3081</a>)</li>
<li><a href="52e8cee97a"><code>52e8cee</code></a> Markup: Made most patterns greedy (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3065">#3065</a>)</li>
<li><a href="c7b6a7f6a5"><code>c7b6a7f</code></a> Previewers: Ensure popup is visible across themes (<a href="https://github-redirect.dependabot.com/PrismJS/prism/issues/3080">#3080</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/PrismJS/prism/compare/v1.23.0...v1.25.0">compare view</a></li>
</ul>
</details>
<details>
<summary>Maintainer changes</summary>
<p>This version was pushed to npm by <a href="https://www.npmjs.com/~rundevelopment">rundevelopment</a>, a new releaser for prismjs since your current version.</p>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=prismjs&package-manager=npm_and_yarn&previous-version=1.23.0&new-version=1.25.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/tendermint/tendermint/network/alerts).

</details>
2021-10-27 08:54:53 +00:00
Callum Waters
ce89292712 docs: fix broken links and layout (#7154)
This PR does a few minor touch ups to the docs
2021-10-27 08:36:32 +00:00
Sam Kleinman
4bd8c5ab6f p2p: transport should be captive responsibility of router (#7160)
The main (and minor) win of this PR is that the transport is fully the
responsibility of the router and the node doesn't need to be responsible for its lifecycle.
2021-10-26 16:34:44 +00:00
Sam Kleinman
b15b2c1b78 flowrate: cleanup unused files (#7158)
I saw one of these tests fail and it looks like it was using code that
wasn't being called anywhere, so I deleted it, and avoided the package
name aliasing.
2021-10-26 16:11:34 +00:00
Sam Kleinman
cb39e2f917 node,blocksync,config: remove support for running nodes with blocksync disabled (#7159)
We stopped testing these configurations a while ago, and it doesn't
really make sense to allow nodes to run in this configuration. This
drops support for non-blocksync nodes and cleans up the
configuration/tests accordingly.

Closes: #6908
2021-10-26 14:35:14 +00:00
William Banfield
b4bc6bb4e8 p2p: add message type into the send/recv bytes metrics (#7155)
This pull request adds a new "message_type" label to the send/recv bytes metrics calculated in the p2p code.

Below is a snippet of the updated metrics that includes the updated label:
```
tendermint_p2p_peer_receive_bytes_total{chID="32",chain_id="ci",message_type="consensus_HasVote",peer_id="2551a13ed720101b271a5df4816d1e4b3d3bd133"} 652
tendermint_p2p_peer_receive_bytes_total{chID="32",chain_id="ci",message_type="consensus_HasVote",peer_id="4b1068420ef739db63377250553562b9a978708a"} 631
tendermint_p2p_peer_receive_bytes_total{chID="32",chain_id="ci",message_type="consensus_HasVote",peer_id="927c50a5e508c747830ce3ba64a3f70fdda58ef2"} 631
tendermint_p2p_peer_receive_bytes_total{chID="32",chain_id="ci",message_type="consensus_NewRoundStep",peer_id="2551a13ed720101b271a5df4816d1e4b3d3bd133"} 393
tendermint_p2p_peer_receive_bytes_total{chID="32",chain_id="ci",message_type="consensus_NewRoundStep",peer_id="4b1068420ef739db63377250553562b9a978708a"} 357
tendermint_p2p_peer_receive_bytes_total{chID="32",chain_id="ci",message_type="consensus_NewRoundStep",peer_id="927c50a5e508c747830ce3ba64a3f70fdda58ef2"} 386
```
2021-10-26 12:45:33 +00:00
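For illustration, a minimal Go sketch of a byte counter carrying the new label, using the standard Prometheus Go client; the label names mirror the sample output above, while the helper and package names are hypothetical, not the actual Tendermint code:
```go
package p2pmetrics

import "github.com/prometheus/client_golang/prometheus"

// peerReceiveBytes counts bytes received per channel, chain, message type, and peer.
var peerReceiveBytes = prometheus.NewCounterVec(prometheus.CounterOpts{
	Namespace: "tendermint",
	Subsystem: "p2p",
	Name:      "peer_receive_bytes_total",
	Help:      "Bytes received from a peer, labeled by wire message type.",
}, []string{"chID", "chain_id", "message_type", "peer_id"})

func init() {
	prometheus.MustRegister(peerReceiveBytes)
}

// recordReceived is called once per decoded message with its size in bytes.
// msgType would be derived from the payload's proto name, e.g. "consensus_HasVote".
func recordReceived(chID, chainID, msgType, peerID string, size int) {
	peerReceiveBytes.WithLabelValues(chID, chainID, msgType, peerID).Add(float64(size))
}
```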
Sam Kleinman
23be048294 p2p: use correct transport configuration (#7152) 2021-10-25 07:19:20 -04:00
Marko
0d68161cc8 docs: add reactor sections (#6510) 2021-10-22 18:14:43 +02:00
Callum Waters
68ca65f5d7 pex: remove legacy proto messages (#7147)
This PR implements the proto changes made in https://github.com/tendermint/spec/pull/352, removing the legacy messages that were used in the pex reactor.
2021-10-22 12:07:46 +00:00
Sam Kleinman
88bdd328ed e2e: evidence test refactor (#7146)
I've seen this failure a few times and this change seems like it
reduces the number of times that we're waiting (and can therefore hit
a timeout.)
2021-10-21 16:02:26 +00:00
Callum Waters
a8ff617773 state: add height assertion to rollback function (#7143) 2021-10-21 13:40:43 +02:00
Sam Kleinman
a8917040a8 e2e: avoid unset defaults in generated tests (#7145)
I've observed a few cases in tests that are probably wrong, and added
some tests to cover this.
2021-10-20 16:38:59 +00:00
Sam Kleinman
a30860a307 e2e: always enable blocksync (#7144)
I believe it was the case that blocksync was not consistently enabled in master, and this makes it the default in the master tests.
2021-10-20 14:38:29 +00:00
M. J. Fromberger
f7f4067968 pubsub: simplify and improve server concurrency handling (#7070)
Rework the internal plumbing of the server. This change does not modify the
exported interfaces or semantics of the package, and all the existing tests
still pass.

The main changes here are to:

- Simplify the interface for subscription indexing with a typed index rather
  than a single nested map.

- Ensure orderly shutdown of channels, so that there is no longer a dynamic
  race with concurrent publishers & subscribers at shutdown.

- Remove a layer of indirection between publishers and subscribers. This mainly
  helps legibility.

- Remove order dependencies between registration and delivery.

- Add documentation comments where they seemed helpful, and clarified the
  existing comments where it was practical.

Although performance was not a primary goal of this change, the simplifications
did very slightly reduce memory use and increase throughput on the existing
benchmarks, though the delta is not statistically significant.

    BENCHMARK                BEFORE AFTER SPEEDUP (%) B/op (B) B/op (A)
    Benchmark10Clients-12    5947   5566  6.4         2017     1942
    Benchmark100Clients-12   6111   5762  5.7         1992     1910
    Benchmark1000Clients-12  6983   6344  9.2         2046     1959
2021-10-19 15:32:13 -07:00
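A rough Go sketch of what a "typed index rather than a single nested map" could look like for the subscription bookkeeping described above; the type and field names are illustrative assumptions, not the actual internal pubsub types:
```go
package pubsubsketch

// subscription pairs a client with its query and delivery channel.
type subscription struct {
	clientID string
	query    string
	out      chan interface{}
}

// subIndex is a typed index over live subscriptions, in place of a single
// nested map keyed first by query string and then by client ID.
type subIndex struct {
	byClient map[string]map[*subscription]struct{} // lookup for UnsubscribeAll(clientID)
	byQuery  map[string]map[*subscription]struct{} // lookup for publish fan-out
}

func newSubIndex() *subIndex {
	return &subIndex{
		byClient: make(map[string]map[*subscription]struct{}),
		byQuery:  make(map[string]map[*subscription]struct{}),
	}
}

// add registers a subscription in both views of the index.
func (idx *subIndex) add(s *subscription) {
	if idx.byClient[s.clientID] == nil {
		idx.byClient[s.clientID] = make(map[*subscription]struct{})
	}
	idx.byClient[s.clientID][s] = struct{}{}
	if idx.byQuery[s.query] == nil {
		idx.byQuery[s.query] = make(map[*subscription]struct{})
	}
	idx.byQuery[s.query][s] = struct{}{}
}

// removeClient drops every subscription belonging to a client from both views.
func (idx *subIndex) removeClient(clientID string) {
	for s := range idx.byClient[clientID] {
		delete(idx.byQuery[s.query], s)
		close(s.out)
	}
	delete(idx.byClient, clientID)
}
```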
William Banfield
b0130c88fb mempool: remove panic when recheck-tx was not sent to ABCI application (#7134)
This pull request fixes a panic that exists in both mempools. The panic occurs when the ABCI client misses a response from the ABCI application. This happens when the ABCI client drops the request as a result of a full client queue. The fix here was to loop through the ordered list of recheck-tx in the callback until one matches the currently observed recheck request.
2021-10-19 13:30:52 +00:00
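A minimal Go sketch of the recheck-callback fix described above, assuming a simplified stand-in for the mempool's internal types: instead of panicking when a response does not match the next queued recheck transaction, walk the ordered list until the matching transaction is found.
```go
package mempoolsketch

import "bytes"

type mempoolTx struct {
	tx []byte
}

type recheckState struct {
	pending []*mempoolTx // rechecked txs, in the order the CheckTx requests were sent
}

// resolveRecheck finds the pending entry matching the tx echoed back in the
// ABCI response, skipping any earlier entries whose responses were dropped
// because the ABCI client queue was full.
func (r *recheckState) resolveRecheck(responseTx []byte) *mempoolTx {
	for len(r.pending) > 0 {
		head := r.pending[0]
		r.pending = r.pending[1:]
		if bytes.Equal(head.tx, responseTx) {
			return head // matched: the caller applies the CheckTx result to this tx
		}
		// No response was received for this tx; skip it instead of panicking.
	}
	return nil
}
```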
dependabot[bot]
0408888a5e build(deps): Bump actions/checkout from 2.3.4 to 2.3.5 (#7139)
Bumps [actions/checkout](https://github.com/actions/checkout) from 2.3.4 to 2.3.5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/actions/checkout/releases">actions/checkout's releases</a>.</em></p>
<blockquote>
<h2>v2.3.5</h2>
<p>Update dependencies</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="1e204e9a92"><code>1e204e9</code></a> update licensed check (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/606">#606</a>)</li>
<li><a href="0299a0d2b6"><code>0299a0d</code></a> update dist (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/605">#605</a>)</li>
<li><a href="be0f448456"><code>be0f448</code></a> Bump ws from 5.2.2 to 5.2.3 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/604">#604</a>)</li>
<li><a href="56c00a7b1f"><code>56c00a7</code></a> Bump tmpl from 1.0.4 to 1.0.5 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/588">#588</a>)</li>
<li><a href="85e47d1a2b"><code>85e47d1</code></a> Bump path-parse from 1.0.6 to 1.0.7 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/568">#568</a>)</li>
<li><a href="3fc17f8645"><code>3fc17f8</code></a> Bump hosted-git-info from 2.8.5 to 2.8.9 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/500">#500</a>)</li>
<li><a href="e3bc06d986"><code>e3bc06d</code></a> Bump lodash from 4.17.15 to 4.17.21 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/499">#499</a>)</li>
<li><a href="442567ba57"><code>442567b</code></a> Bump handlebars from 4.5.3 to 4.7.7 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/497">#497</a>)</li>
<li><a href="7f00b66d06"><code>7f00b66</code></a> Bump y18n from 4.0.0 to 4.0.1 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/469">#469</a>)</li>
<li><a href="eccf386318"><code>eccf386</code></a> Bump <code>@​actions/core</code> from 1.1.3 to 1.2.6 (<a href="https://github-redirect.dependabot.com/actions/checkout/issues/361">#361</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/actions/checkout/compare/v2.3.4...v2.3.5">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/checkout&package-manager=github_actions&previous-version=2.3.4&new-version=2.3.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


</details>
2021-10-19 07:47:36 +00:00
Marko
19ec4a5322 tools: clone proto files from spec (#6976)
## Description

clone proto files from spec in order to have them in a single location

closes https://github.com/tendermint/spec/issues/343
2021-10-18 08:22:02 +00:00
Sam Kleinman
ca8f004112 p2p: remove final shims from p2p package (#7136)
This is, perhaps, the trivial final piece of #7075 that I've been
working on.

There's more work to be done: 
- push more of the setup into the packages themselves
- move channel-based sending/filtering out of the 
- simplify the buffering throughout the p2p stack.
2021-10-15 20:08:09 +00:00
Sam Kleinman
7143f14a63 p2p: simplify open channel interface (#7133)
A fourth #7075 component patch to simplify the channel creation interface
2021-10-15 18:00:24 +00:00
Sam Kleinman
cbe6ad6cd5 p2p: flatten channel descriptor (#7132) 2021-10-15 13:03:10 -04:00
Sam Kleinman
0900ea8396 p2p: channel shim cleanup (#7129) 2021-10-15 12:31:33 -04:00
Sam Kleinman
f4a56f4034 p2p: refactor channel description (#7130)
This is another small sliver of #7075, with the intention of removing
the legacy shim layer related to channel registration.
2021-10-15 15:45:12 +00:00
Marko
66a11fe527 blocksync: remove v0 folder structure (#7128)
Remove v0 blocksync folder structure.
2021-10-15 13:03:53 +00:00
M. J. Fromberger
006e6108a1 Fix the protobuf generation script. (#7127)
This should have been part of #7121, but I missed it.
2021-10-15 12:18:53 +00:00
Callum Waters
c3dc7d20df docs: add roadmap to repo (#7107) 2021-10-15 10:35:36 +02:00
Jared Zhou
b95c261981 rpc: fix typo in broadcast commit (#7124) 2021-10-14 10:34:25 +02:00
M. J. Fromberger
bc1a20dbb8 Revert temporary patch to buf.yaml. (#7122)
This patch was needed to pass the buf breakage check for the proto file removed
in #7121, but now that master contains the change we no longer need the patch.
2021-10-13 23:37:50 +00:00
M. J. Fromberger
86f00135dd rpc: Remove the deprecated gRPC interface to the RPC service (#7121)
This change removes the partial gRPC interface to the RPC service, which was
deprecated in resolution of #6718.

Details:
- rpc: Remove the client and server interfaces and proto definitions.
- Remove the gRPC settings from the config library.
- Remove gRPC setup for the RPC service in the node startup.
- Fix various test helpers to remove gRPC bits.
- Remove the --rpc.grpc-laddr flag from the CLI.

Note that to satisfy the protobuf interface check, this change also includes a
temporary edit to buf.yaml, that I will revert after this is merged.
2021-10-13 15:01:01 -07:00
William Banfield
ff7b0e638e p2p: fix priority queue bytes pending calculation (#7120)
This metric describes itself as 'pending' but never actually decrements when messages are removed from the queue.

This change fixes that by decrementing the metric when the data is removed from the queue.
2021-10-13 21:18:44 +00:00
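A short Go sketch of the bookkeeping described above, using the standard Prometheus Go client; the metric and helper names here are illustrative, not the actual queue code. The point is that a "pending" gauge must be decremented on dequeue, not only incremented on enqueue.
```go
package queuemetrics

import "github.com/prometheus/client_golang/prometheus"

var pendingSendBytes = prometheus.NewGauge(prometheus.GaugeOpts{
	Namespace: "tendermint",
	Subsystem: "p2p",
	Name:      "pending_send_bytes",
	Help:      "Bytes currently waiting in the peer send queue.",
})

func enqueue(size int) {
	pendingSendBytes.Add(float64(size))
}

func dequeue(size int) {
	// The bug being fixed: without this call the metric only ever grows.
	pendingSendBytes.Sub(float64(size))
}
```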
William Banfield
36a1acff52 internal/proxy: add initial set of abci metrics (#7115)
This PR adds an initial set of metrics for use with ABCI. The initial metrics enable the calculation of timing histograms and call counts for each of the ABCI methods. The metrics are also labeled as either 'sync' or 'async' to indicate whether the method call was performed using ABCI's `*Async` methods.

An example of these metrics is included here for reference:
```
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.0001"} 0
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.0004"} 5
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.002"} 12
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.009"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.02"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.1"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="0.65"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="2"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="6"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="25"} 13
tendermint_abci_connection_method_timing_bucket{chain_id="ci",method="commit",type="sync",le="+Inf"} 13
tendermint_abci_connection_method_timing_sum{chain_id="ci",method="commit",type="sync"} 0.007802058000000001
tendermint_abci_connection_method_timing_count{chain_id="ci",method="commit",type="sync"} 13
```

These metrics can easily be graphed using Prometheus's `histogram_quantile(...)` function to pick out a particular quantile to graph or examine. I chose buckets that roughly estimate the expected range of times for ABCI operations. They start at .0001 seconds and range to 25 seconds. The hope is that this range captures enough possible times to be useful for us and operators.
2021-10-13 20:52:25 +00:00
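A Go sketch of how such a histogram could be defined and recorded with the standard Prometheus Go client; the bucket values and label names are taken from the sample output above, while the wrapper function is a hypothetical illustration:
```go
package abcimetrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var methodTiming = prometheus.NewHistogramVec(prometheus.HistogramOpts{
	Namespace: "tendermint",
	Subsystem: "abci_connection",
	Name:      "method_timing",
	Help:      "Timing of ABCI method calls in seconds.",
	Buckets:   []float64{0.0001, 0.0004, 0.002, 0.009, 0.02, 0.1, 0.65, 2, 6, 25},
}, []string{"chain_id", "method", "type"})

func init() {
	prometheus.MustRegister(methodTiming)
}

// observe wraps an ABCI call and records how long it took.
func observe(chainID, method, callType string, fn func()) {
	start := time.Now()
	fn()
	methodTiming.WithLabelValues(chainID, method, callType).Observe(time.Since(start).Seconds())
}
```
From there, a query such as `histogram_quantile(0.95, sum by (le, method) (rate(tendermint_abci_connection_method_timing_bucket[5m])))` would plot a 95th-percentile latency per method.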
Sam Kleinman
164de91842 rpc: move evidence tests to shared fixtures (#7119)
This is follow on to the work in #7112.
2021-10-13 18:29:03 +00:00
Callum Waters
4fe0f262d4 changelog: add 0.34.14 updates (#7117)
This PR adds the 0.34.14 changes to the changelog in master
2021-10-13 12:28:43 +00:00
M. J. Fromberger
6538776e6a build: Fix build-docker to include the full context. (#7114)
Fixes #7068. The build-docker rule relies on being able to run make
build-linux, but did not pull the Makefile into the build context.
There are various ways to fix this, but this was probably the smallest.
2021-10-12 22:29:06 +00:00
Sam Kleinman
4781d04d18 node: always close database engine (#7113) 2021-10-12 17:40:59 -04:00
Sam Kleinman
52ed994416 test: cleanup rpc/client and node test fixtures (#7112) 2021-10-12 16:49:45 -04:00
lklimek
0524558696 refactor: assignment copies lock value (#7108)
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
2021-10-12 13:22:57 -07:00
Sam Kleinman
d837432681 ci: use run-multiple.sh for e2e pr tests (#7111)
* ci: use run-multiple.sh for e2e pr tests

* fix labeling

* Update .github/workflows/e2e-nightly-35x.yml

Co-authored-by: M. J. Fromberger <michael.j.fromberger@gmail.com>

Co-authored-by: M. J. Fromberger <michael.j.fromberger@gmail.com>
2021-10-12 16:28:10 +00:00
Sam Kleinman
34a3fcd8fc Revert "abci: change client to use multi-reader mutexes (#6306)" (#7106)
This reverts commit 1c4dbe30d4.
2021-10-12 11:29:31 -04:00
M. J. Fromberger
48295955ed light: Update links in package docs. (#7099)
Fixes #7098. The light client documentation moved to the spec repository.

I was not able to figure out what happened to light-client-protocol.md; it was removed in #5252, but no corresponding file exists in the spec repository. Since the spec also discusses the protocol, this change simply links to the spec and removes the non-functional reference.

Alternatively we could link to the top-level [light client doc](https://docs.tendermint.com/master/tendermint-core/light-client.html) if you think that's better.
2021-10-11 23:21:45 +00:00
Sam Kleinman
ded310093e lint: fix collection of stale errors (#7090)
Few things that had been annoying.
2021-10-09 15:33:54 +00:00
Sam Kleinman
befd669794 e2e: light nodes should use builtin abci app (#7095) 2021-10-09 04:20:09 +00:00
Sam Kleinman
3646b635d3 p2p, types: remove legacy NetAddress type (#7084) 2021-10-08 12:29:20 -04:00
Callum Waters
59404003ee p2p: rename pexV2 to pex (#7088) 2021-10-08 16:53:54 +02:00
Sam Kleinman
f2a8f5e054 e2e: abci protocol should be consistent across networks (#7078)
It seems weird in retrospect that we allow networks to contain
applications that use different ABCI protocols.
2021-10-08 13:42:23 +00:00
Sam Kleinman
1b5bb5348f p2p: cleanup unused arguments (#7079)
This is mostly just reading through the output of unparam, after
noticing that there were a few places where we were ignoring some arguments.
2021-10-08 12:49:17 +00:00
Callum Waters
4ca130d226 cli: allow node operator to rollback last state (#7033) 2021-10-08 09:15:13 +02:00
Sam Kleinman
1f438f205a e2e: improve network connectivity (#7077)
This tweaks the connectivity of test configurations, in hopes that more will be viable.

Additionally reduces the prevalence of testing the legacy mempool.
2021-10-07 23:07:35 +00:00
Sam Kleinman
5bf30bb049 p2p: cleanup transport interface (#7071)
This is another batch of things to cleanup in the legacy P2P system.
2021-10-06 19:17:44 +00:00
dependabot[bot]
e53f92ba9c build(deps): Bump github.com/adlio/schema from 1.1.13 to 1.1.14 (#7069)
Bumps [github.com/adlio/schema](https://github.com/adlio/schema) from 1.1.13 to 1.1.14.
- [Release notes](https://github.com/adlio/schema/releases)
- [Commits](https://github.com/adlio/schema/compare/v1.1.13...v1.1.14)

---
updated-dependencies:
- dependency-name: github.com/adlio/schema
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-10-06 09:14:55 -04:00
Callum Waters
e4d6f6df09 docs: create separate releases doc (#7040) 2021-10-06 11:59:21 +02:00
Sam Kleinman
0ef1a12186 ci: fix p2p configuration for e2e tests (#7066)
My earlier p2p cleanup code removed support for the p2p tests from the
e2e generator and runner, but missed removing the CI
configuration. This patch remedies that.
2021-10-06 04:11:19 +00:00
Sam Kleinman
72aee47847 ci: 0.35.x nightly should run from master and checkout the release branch (#7067)
Nightly branches run CI from master branch, and the configuration misses checking out the correct ref.
2021-10-06 04:08:54 +00:00
M. J. Fromberger
109814c85a Clarify decision record for ADR-065. (#7062)
While discussing a question about the indexing interface (#7044), we found some
confusion about the intent of the design decisions in ADR 065.

Based on discussion with the original authors of the ADR, this commit adds some
language to the Decisions section to spell out the intentions more clearly, and
to call out future work that this ADR did not explicitly decide about.
2021-10-05 14:10:11 -07:00
Sam Kleinman
851d2e3bde mempool,rpc: add removetx rpc method (#7047)
Addresses one of the concerns with #7041.

Provides a mechanism (via the RPC interface) to delete a single transaction, described by its hash, from the mempool. The method returns an error if the transaction cannot be found. Once the transaction is removed it remains in the cache and cannot be resubmitted until the cache is cleared or it expires from the cache.
2021-10-05 20:23:15 +00:00
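Purely as an illustration of how such an endpoint might be exercised from Go, here is a sketch that queries a hypothetical `remove_tx` method on the local RPC listener; the endpoint name, parameter name, and hash format are assumptions for the sketch, not a documented contract.
```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Placeholder hash of the transaction to evict from the mempool.
	txHash := "F00DFEEDDEADBEEF"
	u := fmt.Sprintf("http://localhost:26657/remove_tx?txKey=%s", url.QueryEscape(txHash))
	resp, err := http.Get(u)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// An error result is expected if the transaction is not found in the mempool.
	fmt.Println(string(body))
}
```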
Sam Kleinman
3ea81bfaa7 p2p: remove wdrr queue (#7064)
This code hasn't been battle-tested, and seems to have grown
increasingly flaky in tests. Given our general direction of reducing
queue complexity over the next couple of releases I think it makes
sense to remove it.
2021-10-05 20:09:31 +00:00
Callum Waters
5703ae2fb3 e2e: automatically prune old app snapshots (#7034)
This PR tackles the case of using the e2e application in a long-lived testnet. The application continually saves snapshots (usually every 100 blocks), which after a while bloats the application's storage. This PR prunes older snapshots so that only the most recent 10 snapshots remain.
2021-10-05 18:19:12 +00:00
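A simple Go sketch of the pruning behavior described above, assuming each snapshot lives in a directory named by its height; the directory layout and function names are illustrative, not the actual e2e application code.
```go
package appsketch

import (
	"os"
	"path/filepath"
	"sort"
	"strconv"
)

const keepRecent = 10

// pruneSnapshots removes all but the newest keepRecent snapshot directories.
func pruneSnapshots(snapshotDir string) error {
	entries, err := os.ReadDir(snapshotDir)
	if err != nil {
		return err
	}
	var heights []int
	for _, e := range entries {
		if h, err := strconv.Atoi(e.Name()); err == nil && e.IsDir() {
			heights = append(heights, h)
		}
	}
	sort.Sort(sort.Reverse(sort.IntSlice(heights))) // newest first
	for i, h := range heights {
		if i < keepRecent {
			continue
		}
		if err := os.RemoveAll(filepath.Join(snapshotDir, strconv.Itoa(h))); err != nil {
			return err
		}
	}
	return nil
}
```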
Sam Kleinman
03ad7d6f20 p2p: delete legacy stack initial pass (#7035)
A few notes:

- this is not all the deletion that we can do, but this is the most
  "simple" case: it leaves in shims, and there's some trivial
  additional cleanup to the transport that can happen but that
  requires writing more code, and I wanted this to be easy to review
  above all else.
  
- This should land *after* we cut the branch for 0.35, but I'm
  anticipating that to happen soon, and I wanted to run this through
  CI.
2021-10-05 13:40:32 +00:00
William Banfield
f5b9c210ca consensus: wait until peerUpdates channel is closed to close remaining peers (#7058)
The race occurred as a result of a goroutine launched by `processPeerUpdate` racing with the `OnStop` method. The `processPeerUpdates` goroutine deletes from the map as `OnStop` is reading from it. This change updates the `OnStop` method to wait for the peer updates channel to be done before closing the peers. It also copies the map contents to a new map so that closing the peers does not conflict with the view of the map seen by the goroutine launched in `processPeerUpdate`.
2021-10-04 22:37:18 +00:00
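A simplified Go sketch of the shutdown ordering described above: stop only after the peer-update goroutine has finished, then close peers from a copied snapshot of the map so the cleanup never races with concurrent map writes. The types and channel are stand-ins, not the actual reactor code.
```go
package consensussketch

import "sync"

type peerState struct{} // per-peer bookkeeping (simplified)

type reactor struct {
	mtx             sync.RWMutex
	peers           map[string]*peerState
	peerUpdatesDone chan struct{} // closed when the processPeerUpdates goroutine exits
}

func (r *reactor) onStop() {
	// Wait for the goroutine that mutates r.peers to finish before reading it.
	<-r.peerUpdatesDone

	// Copy the map under the lock so closing peers cannot race with map writes.
	r.mtx.RLock()
	snapshot := make(map[string]*peerState, len(r.peers))
	for id, ps := range r.peers {
		snapshot[id] = ps
	}
	r.mtx.RUnlock()

	for _, ps := range snapshot {
		closePeer(ps)
	}
}

func closePeer(ps *peerState) { /* release per-peer resources */ }
```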
Sam Kleinman
cb69ed8135 blocksync/v2: remove unsupported reactor (#7046)
This commit should be one of the first to land as part of the v0.36
cycle *after* cutting the 0.35 branch. 

The blocksync/v2 reactor was originally implemented as an experiment
to produce an implementation of the blocksync protocol that would be
easier to test and validate, but it was never appropriately
operationalized and this implementation was never fully debugged. When
the p2p layer was refactored as part of the 0.35 cycle, the v2
implementation was not refactored and it was left in the codebase but
not removed. This commit just removes all references to it.
2021-10-04 21:12:51 +00:00
William Banfield
c201e3b54d scripts: fix authors script to take a ref (#7051)
This script is referenced from the release documentation, so we should make sure it's functional. This is helpful in generating the "Special Thanks" section of the changelog.
2021-10-04 19:47:50 +00:00
M. J. Fromberger
b30ec89ee9 Add an e2e workflow for the v0.35.x backport branch. (#7048) 2021-10-04 10:35:16 -07:00
Sam Kleinman
6276fdcb5d ci: mergify support for 0.35 backports (#7050) 2021-10-04 13:04:15 -04:00
803 changed files with 42391 additions and 71757 deletions

View File

@@ -1,16 +1,16 @@
pullRequestOpened: |
:wave: Thanks for creating a PR!
:wave: Thanks for creating a PR!
Before we can merge this PR, please make sure that all the following items have been
Before we can merge this PR, please make sure that all the following items have been
checked off. If any of the checklist items are not applicable, please leave them but
write a little note why.
write a little note why.
- [ ] Wrote tests
- [ ] Updated CHANGELOG_PENDING.md
- [ ] Linked to Github issue with discussion and accepted design OR link to spec that describes this work.
- [ ] Updated relevant documentation (`docs/`) and code comments
- [ ] Re-reviewed `Files changed` in the Github PR explorer
- [ ] Applied Appropriate Labels
- [ ] Wrote tests
- [ ] Updated CHANGELOG_PENDING.md
- [ ] Linked to Github issue with discussion and accepted design OR link to spec that describes this work.
- [ ] Updated relevant documentation (`docs/`) and code comments
- [ ] Re-reviewed `Files changed` in the Github PR explorer
- [ ] Applied Appropriate Labels
Thank you for your contribution to Tendermint! :rocket:
Thank you for your contribution to Tendermint! :rocket:

50
.github/dependabot.yml vendored Normal file
View File

@@ -0,0 +1,50 @@
version: 2
updates:
- package-ecosystem: github-actions
directory: "/"
schedule:
interval: weekly
open-pull-requests-limit: 10
labels:
- T:dependencies
- S:automerge
- package-ecosystem: npm
directory: "/docs"
schedule:
interval: weekly
open-pull-requests-limit: 10
###################################
##
## Update All Go Dependencies
- package-ecosystem: gomod
directory: "/"
schedule:
interval: daily
target-branch: "master"
open-pull-requests-limit: 10
labels:
- T:dependencies
- S:automerge
- package-ecosystem: gomod
directory: "/"
schedule:
interval: daily
target-branch: "v0.34.x"
open-pull-requests-limit: 10
labels:
- T:dependencies
- S:automerge
- package-ecosystem: gomod
directory: "/"
schedule:
interval: daily
target-branch: "v0.35.x"
open-pull-requests-limit: 10
labels:
- T:dependencies
- S:automerge

8
.github/linter/markdownlint.yml vendored Normal file
View File

@@ -0,0 +1,8 @@
default: true,
MD007: { "indent": 4 }
MD013: false
MD024: { siblings_only: true }
MD025: false
MD033: { no-inline-html: false }
no-hard-tabs: false
whitespace: false

View File

@@ -1,8 +0,0 @@
default: true,
MD007: {"indent": 4}
MD013: false
MD024: {siblings_only: true}
MD025: false
MD033: {no-inline-html: false}
no-hard-tabs: false
whitespace: false

View File

@@ -1,9 +0,0 @@
---
# Default rules for YAML linting from super-linter.
# See: https://yamllint.readthedocs.io/en/stable/rules.html
extends: default
rules:
document-end: disable
document-start: disable
line-length: disable
truthy: disable

22
.github/mergify.yml vendored
View File

@@ -1,13 +1,13 @@
queue_rules:
- name: default
conditions:
- base=v0.35.x
- base=master
- label=S:automerge
pull_request_rules:
- name: Automerge to v0.35.x
- name: Automerge to master
conditions:
- base=v0.35.x
- base=master
- label=S:automerge
actions:
queue:
@@ -17,3 +17,19 @@ pull_request_rules:
{{ title }} (#{{ number }})
{{ body }}
- name: backport patches to v0.34.x branch
conditions:
- base=master
- label=S:backport-to-v0.34.x
actions:
backport:
branches:
- v0.34.x
- name: backport patches to v0.35.x branch
conditions:
- base=master
- label=S:backport-to-v0.35.x
actions:
backport:
branches:
- v0.35.x

View File

@@ -20,11 +20,11 @@ jobs:
goos: ["linux"]
timeout-minutes: 5
steps:
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: "1.17"
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
- uses: actions/checkout@v2.4.0
- uses: technote-space/get-diff-action@v6.0.1
with:
PATTERNS: |
**/**.go
@@ -41,11 +41,11 @@ jobs:
needs: build
timeout-minutes: 5
steps:
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: "1.17"
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
- uses: actions/checkout@v2.4.0
- uses: technote-space/get-diff-action@v6.0.1
with:
PATTERNS: |
**/**.go
@@ -63,11 +63,11 @@ jobs:
needs: build
timeout-minutes: 5
steps:
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: "1.17"
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
- uses: actions/checkout@v2.4.0
- uses: technote-space/get-diff-action@v6.0.1
with:
PATTERNS: |
**/**.go

View File

@@ -1,19 +1,19 @@
name: Docker
# Build & Push rebuilds the tendermint docker image on every push to master and creation of tags
# Build & Push rebuilds the tendermint docker image on every push to master and creation of tags
# and pushes the image to https://hub.docker.com/r/interchainio/simapp/tags
on:
push:
branches:
- master
tags:
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
- "v[0-9]+.[0-9]+.[0-9]+-rc*" # Push events to matching v*, i.e. v1.0-rc1, v20.15.10-rc5
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
- "v[0-9]+.[0-9]+.[0-9]+-rc*" # Push events to matching v*, i.e. v1.0-rc1, v20.15.10-rc5
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v2.4.0
- name: Prepare
id: prep
run: |
@@ -39,17 +39,17 @@ jobs:
platforms: all
- name: Set up Docker Build
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v1.6.0
- name: Login to DockerHub
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@v2
uses: docker/login-action@v1.12.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Publish to Docker Hub
uses: docker/build-push-action@v3
uses: docker/build-push-action@v2.7.0
with:
context: .
file: ./DOCKER/Dockerfile

View File

@@ -1,36 +0,0 @@
# Manually run randomly generated E2E testnets (as nightly).
name: e2e-manual
on:
workflow_dispatch:
jobs:
e2e-nightly-test:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
p2p: ['legacy', 'new', 'hybrid']
group: ['00', '01', '02', '03']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v3
with:
go-version: '1.17'
- uses: actions/checkout@v3
- name: Build
working-directory: test/e2e
# Run make jobs in parallel, since we can't run steps in parallel.
run: make -j2 docker generator runner tests
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 4 -d networks/nightly/${{ matrix.p2p }} -p ${{ matrix.p2p }}
- name: Run ${{ matrix.p2p }} p2p testnets
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/${{ matrix.p2p }}/*-group${{ matrix.group }}-*.toml

View File

@@ -6,7 +6,7 @@
name: e2e-nightly-34x
on:
workflow_dispatch: # allow running workflow manually, in theory
workflow_dispatch: # allow running workflow manually, in theory
schedule:
- cron: '0 2 * * *'
@@ -21,11 +21,11 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: '1.17'
- uses: actions/checkout@v3
- uses: actions/checkout@v2.4.0
with:
ref: 'v0.34.x'
@@ -59,7 +59,7 @@ jobs:
SLACK_MESSAGE: Nightly E2E tests failed on v0.34.x
SLACK_FOOTER: ''
e2e-nightly-success: # may turn this off once they seem to pass consistently
e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test
if: ${{ success() }}
runs-on: ubuntu-latest

76
.github/workflows/e2e-nightly-35x.yml vendored Normal file
View File

@@ -0,0 +1,76 @@
# Runs randomly generated E2E testnets nightly on v0.35.x.
# !! If you change something in this file, you probably want
# to update the e2e-nightly-master workflow as well!
name: e2e-nightly-35x
on:
workflow_dispatch: # allow running workflow manually
schedule:
- cron: '0 2 * * *'
jobs:
e2e-nightly-test:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
p2p: ['legacy', 'new', 'hybrid']
group: ['00', '01', '02', '03']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v2
with:
go-version: '1.17'
- uses: actions/checkout@v2.4.0
with:
ref: 'v0.35.x'
- name: Build
working-directory: test/e2e
# Run make jobs in parallel, since we can't run steps in parallel.
run: make -j2 docker generator runner tests
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 4 -d networks/nightly/${{ matrix.p2p }} -p ${{ matrix.p2p }}
- name: Run ${{ matrix.p2p }} p2p testnets in group ${{ matrix.group }}
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/${{ matrix.p2p }}/*-group${{ matrix.group }}-*.toml
e2e-nightly-fail-2:
needs: e2e-nightly-test
if: ${{ failure() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on failure
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly E2E Tests
SLACK_ICON_EMOJI: ':skull:'
SLACK_COLOR: danger
SLACK_MESSAGE: Nightly E2E tests failed on v0.35.x
SLACK_FOOTER: ''
e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test
if: ${{ success() }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack on success
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly E2E Tests
SLACK_ICON_EMOJI: ':white_check_mark:'
SLACK_COLOR: good
SLACK_MESSAGE: Nightly E2E tests passed on v0.35.x
SLACK_FOOTER: ''

View File

@@ -5,27 +5,26 @@
name: e2e-nightly-master
on:
workflow_dispatch: # allow running workflow manually
workflow_dispatch: # allow running workflow manually
schedule:
- cron: '0 2 * * *'
jobs:
e2e-nightly-test-2:
e2e-nightly-test:
# Run parallel jobs for the listed testnet groups (must match the
# ./build/generator -g flag)
strategy:
fail-fast: false
matrix:
p2p: ['legacy', 'new', 'hybrid']
group: ['00', '01', '02', '03']
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: '1.17'
- uses: actions/checkout@v3
- uses: actions/checkout@v2.4.0
- name: Build
working-directory: test/e2e
@@ -35,14 +34,14 @@ jobs:
- name: Generate testnets
working-directory: test/e2e
# When changing -g, also change the matrix groups above
run: ./build/generator -g 4 -d networks/nightly/${{ matrix.p2p }} -p ${{ matrix.p2p }}
run: ./build/generator -g 4 -d networks/nightly/
- name: Run ${{ matrix.p2p }} p2p testnets in group ${{ matrix.group }}
- name: Run ${{ matrix.p2p }} p2p testnets
working-directory: test/e2e
run: ./run-multiple.sh networks/nightly/${{ matrix.p2p }}/*-group${{ matrix.group }}-*.toml
run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml
e2e-nightly-fail-2:
needs: e2e-nightly-test-2
needs: e2e-nightly-test
if: ${{ failure() }}
runs-on: ubuntu-latest
steps:
@@ -57,7 +56,7 @@ jobs:
SLACK_MESSAGE: Nightly E2E tests failed on master
SLACK_FOOTER: ''
e2e-nightly-success: # may turn this off once they seem to pass consistently
e2e-nightly-success: # may turn this off once they seem to pass consistently
needs: e2e-nightly-test
if: ${{ success() }}
runs-on: ubuntu-latest

View File

@@ -2,7 +2,7 @@ name: e2e
# Runs the CI end-to-end test network on all pushes to master or release branches
# and every pull request, but only if any Go files have been changed.
on:
workflow_dispatch: # allow running workflow manually
workflow_dispatch: # allow running workflow manually
pull_request:
push:
branches:
@@ -14,11 +14,11 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: '1.17'
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
- uses: actions/checkout@v2.4.0
- uses: technote-space/get-diff-action@v6.0.1
with:
PATTERNS: |
**/**.go
@@ -33,10 +33,6 @@ jobs:
- name: Run CI testnet
working-directory: test/e2e
run: ./build/runner -f networks/ci.toml
run: ./run-multiple.sh networks/ci.toml
if: "env.GIT_DIFF != ''"
- name: Emit logs on failure
if: ${{ failure() }}
working-directory: test/e2e
run: ./build/runner -f networks/ci.toml logs

View File

@@ -1,7 +1,7 @@
# Runs fuzzing nightly.
name: Fuzz Tests
on:
workflow_dispatch: # allow running workflow manually
workflow_dispatch: # allow running workflow manually
schedule:
- cron: '0 3 * * *'
pull_request:
@@ -13,34 +13,19 @@ jobs:
fuzz-nightly-test:
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: '1.17'
- uses: actions/checkout@v3
- uses: actions/checkout@v2.4.0
- name: Install go-fuzz
working-directory: test/fuzz
run: go get -u github.com/dvyukov/go-fuzz/go-fuzz github.com/dvyukov/go-fuzz/go-fuzz-build
- name: Fuzz mempool-v1
- name: Fuzz mempool
working-directory: test/fuzz
run: timeout -s SIGINT --preserve-status 10m make fuzz-mempool-v1
continue-on-error: true
- name: Fuzz mempool-v0
working-directory: test/fuzz
run: timeout -s SIGINT --preserve-status 10m make fuzz-mempool-v0
continue-on-error: true
- name: Fuzz p2p-addrbook
working-directory: test/fuzz
run: timeout -s SIGINT --preserve-status 10m make fuzz-p2p-addrbook
continue-on-error: true
- name: Fuzz p2p-pex
working-directory: test/fuzz
run: timeout -s SIGINT --preserve-status 10m make fuzz-p2p-pex
run: timeout -s SIGINT --preserve-status 10m make fuzz-mempool
continue-on-error: true
- name: Fuzz p2p-sc
@@ -54,14 +39,14 @@ jobs:
continue-on-error: true
- name: Archive crashers
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v2
with:
name: crashers
path: test/fuzz/**/crashers
retention-days: 3
- name: Archive suppressions
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v2
with:
name: suppressions
path: test/fuzz/**/suppressions

View File

@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 3
steps:
- uses: styfle/cancel-workflow-action@0.10.0
- uses: styfle/cancel-workflow-action@0.9.1
with:
workflow_id: 1041851,1401230,2837803
access_token: ${{ github.token }}

View File

@@ -46,7 +46,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout the Jepsen repository
uses: actions/checkout@v3
uses: actions/checkout@v2.4.0
with:
repository: 'tendermint/jepsen'
@@ -58,7 +58,7 @@ jobs:
run: docker exec -i jepsen-control bash -c 'source /root/.bashrc; cd /jepsen/tendermint; lein run test --nemesis ${{ github.event.inputs.nemesis }} --workload ${{ github.event.inputs.workload }} --concurrency ${{ github.event.inputs.concurrency }} --tendermint-url ${{ github.event.inputs.tendermintUrl }} --merkleeyes-url ${{ github.event.inputs.merkleeyesUrl }} --time-limit ${{ github.event.inputs.timeLimit }} ${{ github.event.inputs.dupOrSuperByzValidators }}'
- name: Archive results
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v2
with:
name: results
path: tendermint/store/latest

View File

@@ -1,12 +1,12 @@
name: Check Markdown links
on:
on:
schedule:
- cron: '* */24 * * *'
jobs:
markdown-link-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: creachadair/github-action-markdown-link-check@master
- uses: actions/checkout@v2.4.0
- uses: gaurav-nelson/github-action-markdown-link-check@1.0.13
with:
folder-path: "docs"

View File

@@ -1,11 +1,7 @@
name: Golang Linter
# Lint runs golangci-lint over the entire Tendermint repository.
#
# This workflow is run on every pull request and push to master.
#
# The `golangci` job will pass without running if no *.{go, mod, sum}
# files have been modified.
name: Lint
# Lint runs golangci-lint over the entire Tendermint repository
# This workflow is run on every pull request and push to master
# The `golangci` job will pass without running if no *.{go, mod, sum} files have been modified.
on:
pull_request:
push:
@@ -17,22 +13,17 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 8
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v3
with:
go-version: '^1.17'
- uses: technote-space/get-diff-action@v6
- uses: actions/checkout@v2.4.0
- uses: technote-space/get-diff-action@v6.0.1
with:
PATTERNS: |
**/**.go
go.mod
go.sum
- uses: golangci/golangci-lint-action@v3
- uses: golangci/golangci-lint-action@v2.5.2
with:
# Required: the version of golangci-lint is required and
# must be specified without patch version: we always use the
# latest patch version.
version: v1.45
# Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
version: v1.42.1
args: --timeout 10m
github-token: ${{ secrets.github_token }}
if: env.GIT_DIFF

View File

@@ -19,14 +19,14 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v3
uses: actions/checkout@v2.4.0
- name: Lint Code Base
uses: docker://github/super-linter:v4
env:
LINTER_RULES_PATH: .
VALIDATE_ALL_CODEBASE: true
DEFAULT_BRANCH: master
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
VALIDATE_MD: true
VALIDATE_OPENAPI: true
VALIDATE_YAML: true
YAML_CONFIG_FILE: yaml-lint.yml

View File

@@ -1,51 +0,0 @@
name: Build & Push TM Proto Builder
on:
pull_request:
paths:
- "tools/proto/*"
push:
branches:
- master
paths:
- "tools/proto/*"
schedule:
# run this job once a month to receive any go or buf updates
- cron: "* * 1 * *"
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Prepare
id: prep
run: |
DOCKER_IMAGE=tendermintdev/docker-build-proto
VERSION=noop
if [[ $GITHUB_REF == refs/tags/* ]]; then
VERSION=${GITHUB_REF#refs/tags/}
elif [[ $GITHUB_REF == refs/heads/* ]]; then
VERSION=$(echo ${GITHUB_REF#refs/heads/} | sed -r 's#/+#-#g')
if [ "${{ github.event.repository.default_branch }}" = "$VERSION" ]; then
VERSION=latest
fi
fi
TAGS="${DOCKER_IMAGE}:${VERSION}"
echo ::set-output name=tags::${TAGS}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Publish to Docker Hub
uses: docker/build-push-action@v3
with:
context: ./tools/proto
file: ./tools/proto/Dockerfile
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.prep.outputs.tags }}

View File

@@ -1,23 +0,0 @@
name: Protobuf
# Protobuf runs buf (https://buf.build/) lint and check-breakage
# This workflow is only run when a .proto file has been modified
on:
workflow_dispatch: # allow running workflow manually
pull_request:
paths:
- "**.proto"
jobs:
proto-lint:
runs-on: ubuntu-latest
timeout-minutes: 4
steps:
- uses: actions/checkout@v3
- name: lint
run: make proto-lint
proto-breakage:
runs-on: ubuntu-latest
timeout-minutes: 4
steps:
- uses: actions/checkout@v3
- name: check-breakage
run: make proto-check-breaking-ci

View File

@@ -5,35 +5,33 @@ on:
branches:
- "RC[0-9]/**"
tags:
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
- "v[0-9]+.[0-9]+.[0-9]+" # Push events to matching v*, i.e. v1.0, v20.15.10
jobs:
goreleaser:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v2.4.0
with:
fetch-depth: 0
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: '1.17'
- name: Build
uses: goreleaser/goreleaser-action@v3
uses: goreleaser/goreleaser-action@v2
if: ${{ github.event_name == 'pull_request' }}
with:
version: latest
args: build --skip-validate # skip validate skips initial sanity checks in order to be able to fully run
- run: echo https://github.com/tendermint/tendermint/blob/${GITHUB_REF#refs/tags/}/CHANGELOG.md#${GITHUB_REF#refs/tags/} > ../release_notes.md
- name: Release
uses: goreleaser/goreleaser-action@v3
uses: goreleaser/goreleaser-action@v2
if: startsWith(github.ref, 'refs/tags/')
with:
version: latest
args: release --rm-dist --release-notes=../release_notes.md
args: release --rm-dist
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -7,7 +7,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v5
- uses: actions/stale@v4
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-pr-message: "This pull request has been automatically marked as stale because it has not had

View File

@@ -16,11 +16,11 @@ jobs:
matrix:
part: ["00", "01", "02", "03", "04", "05"]
steps:
- uses: actions/setup-go@v3
- uses: actions/setup-go@v2
with:
go-version: "1.17"
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
- uses: actions/checkout@v2.4.0
- uses: technote-space/get-diff-action@v6.0.1
with:
PATTERNS: |
**/**.go
@@ -32,7 +32,7 @@ jobs:
run: |
make test-group-${{ matrix.part }} NUM_SPLIT=6
if: env.GIT_DIFF
- uses: actions/upload-artifact@v3
- uses: actions/upload-artifact@v2
with:
name: "${{ github.sha }}-${{ matrix.part }}-coverage"
path: ./build/${{ matrix.part }}.profile.out
@@ -41,8 +41,8 @@ jobs:
runs-on: ubuntu-latest
needs: tests
steps:
- uses: actions/checkout@v3
- uses: technote-space/get-diff-action@v6
- uses: actions/checkout@v2.4.0
- uses: technote-space/get-diff-action@v6.0.1
with:
PATTERNS: |
**/**.go
@@ -50,26 +50,26 @@ jobs:
go.mod
go.sum
Makefile
- uses: actions/download-artifact@v3
- uses: actions/download-artifact@v2
with:
name: "${{ github.sha }}-00-coverage"
if: env.GIT_DIFF
- uses: actions/download-artifact@v3
- uses: actions/download-artifact@v2
with:
name: "${{ github.sha }}-01-coverage"
if: env.GIT_DIFF
- uses: actions/download-artifact@v3
- uses: actions/download-artifact@v2
with:
name: "${{ github.sha }}-02-coverage"
if: env.GIT_DIFF
- uses: actions/download-artifact@v3
- uses: actions/download-artifact@v2
with:
name: "${{ github.sha }}-03-coverage"
if: env.GIT_DIFF
- run: |
cat ./*profile.out | grep -v "mode: set" >> coverage.txt
if: env.GIT_DIFF
- uses: codecov/codecov-action@v3
- uses: codecov/codecov-action@v2.1.0
with:
file: ./coverage.txt
if: env.GIT_DIFF

7
.gitignore vendored
View File

@@ -47,3 +47,10 @@ test/fuzz/**/corpus
test/fuzz/**/crashers
test/fuzz/**/suppressions
test/fuzz/**/*.zip
proto/tendermint/blocksync/types.proto
proto/tendermint/consensus/types.proto
proto/tendermint/mempool/*.proto
proto/tendermint/p2p/*.proto
proto/tendermint/statesync/*.proto
proto/tendermint/types/*.proto
proto/tendermint/version/*.proto

View File

@@ -33,7 +33,7 @@ linters:
- staticcheck
- structcheck
- stylecheck
# - typecheck
- typecheck
- unconvert
# - unparam
- unused

View File

@@ -29,8 +29,8 @@ release:
archives:
- files:
- LICENSE
- README.md
- UPGRADING.md
- SECURITY.md
- CHANGELOG.md
- LICENSE
- README.md
- UPGRADING.md
- SECURITY.md
- CHANGELOG.md

View File

@@ -2,141 +2,6 @@
Friendly reminder: We have a [bug bounty program](https://hackerone.com/cosmos).
## v0.35.7
June 16, 2022
### BUG FIXES
- [p2p] [\#8692](https://github.com/tendermint/tendermint/pull/8692) scale the number of stored peers by the configured maximum connections (#8684)
- [rpc] [\#8715](https://github.com/tendermint/tendermint/pull/8715) always close http bodies (backport #8712)
- [p2p] [\#8760](https://github.com/tendermint/tendermint/pull/8760) accept should not abort on first error (backport #8759)
### BREAKING CHANGES
- P2P Protocol
- [p2p] [\#8737](https://github.com/tendermint/tendermint/pull/8737) Introduce "inactive" peer label to avoid re-dialing incompatible peers. (@tychoish)
- [p2p] [\#8737](https://github.com/tendermint/tendermint/pull/8737) Increase frequency of dialing attempts to reduce latency for peer acquisition. (@tychoish)
- [p2p] [\#8737](https://github.com/tendermint/tendermint/pull/8737) Improvements to peer scoring and sorting to gossip a greater variety of peers during PEX. (@tychoish)
- [p2p] [\#8737](https://github.com/tendermint/tendermint/pull/8737) Track incoming and outgoing peers separately to ensure more peer slots open for incoming connections. (@tychoish)
## v0.35.6
June 3, 2022
### FEATURES
- [migrate] [\#8672](https://github.com/tendermint/tendermint/pull/8672) provide function for database production (backport #8614) (@tychoish)
### BUG FIXES
- [consensus] [\#8651](https://github.com/tendermint/tendermint/pull/8651) restructure peer catchup sleep (@tychoish)
- [pex] [\#8657](https://github.com/tendermint/tendermint/pull/8657) align max address thresholds (@cmwaters)
- [cmd] [\#8668](https://github.com/tendermint/tendermint/pull/8668) don't used global config for reset commands (@cmwaters)
- [p2p] [\#8681](https://github.com/tendermint/tendermint/pull/8681) shed peers from store from other networks (backport #8678) (@tychoish)
## v0.35.5
May 26, 2022
### BUG FIXES
- [p2p] [\#8371](https://github.com/tendermint/tendermint/pull/8371) fix setting in con-tracker (backport #8370) (@tychoish)
- [blocksync] [\#8496](https://github.com/tendermint/tendermint/pull/8496) validate block against state before persisting it to disk (@cmwaters)
- [statesync] [\#8494](https://github.com/tendermint/tendermint/pull/8494) avoid potential race (@tychoish)
- [keymigrate] [\#8467](https://github.com/tendermint/tendermint/pull/8467) improve filtering for legacy transaction hashes (backport #8466) (@creachadair)
- [rpc] [\#8594](https://github.com/tendermint/tendermint/pull/8594) fix encoding of block_results responses (@creachadair)
## v0.35.4
April 18, 2022
Special thanks to external contributors on this release: @firelizzard18
### FEATURES
- [cli] [\#8300](https://github.com/tendermint/tendermint/pull/8300) Add a tool to update old config files to the latest version [backport [\#8281](https://github.com/tendermint/tendermint/pull/8281)]. (@creachadair)
### IMPROVEMENTS
### BUG FIXES
- [cli] [\#8294](https://github.com/tendermint/tendermint/pull/8294) keymigrate: ensure block hash keys are correctly translated. (@creachadair)
- [cli] [\#8352](https://github.com/tendermint/tendermint/pull/8352) keymigrate: ensure transaction hash keys are correctly translated. (@creachadair)
## v0.35.3
April 8, 2022
### FEATURES
- [cli] [\#8081](https://github.com/tendermint/tendermint/pull/8081) add a safer-to-use `reset-state` command. (@marbar3778)
### IMPROVEMENTS
- [consensus] [\#8138](https://github.com/tendermint/tendermint/pull/8138) change lock handling in reactor and handleMsg for RoundState. (@williambanfield)
### BUG FIXES
- [cli] [\#8276](https://github.com/tendermint/tendermint/pull/8276) scmigrate: ensure target key is correctly renamed. (@creachadair)
## v0.35.2
February 28, 2022
Special thanks to external contributors on this release: @ashcherbakov, @yihuang, @waelsy123
### IMPROVEMENTS
- [consensus] [\#7875](https://github.com/tendermint/tendermint/pull/7875) additional timing metrics. (@williambanfield)
### BUG FIXES
- [abci] [\#7990](https://github.com/tendermint/tendermint/pull/7990) revert buffer limit change. (@williambanfield)
- [cli] [#7837](https://github.com/tendermint/tendermint/pull/7837) fix app hash in state rollback. (@yihuang)
- [cli] [\#7869](https://github.com/tendermint/tendermint/pull/7869) Update unsafe-reset-all command to match release v35. (waelsy123)
- [light] [\#7640](https://github.com/tendermint/tendermint/pull/7640) Light Client: fix absence proof verification (@ashcherbakov)
- [light] [\#7641](https://github.com/tendermint/tendermint/pull/7641) Light Client: fix querying against the latest height (@ashcherbakov)
- [mempool] [\#7718](https://github.com/tendermint/tendermint/pull/7718) return duplicate tx errors more consistently. (@tychoish)
- [rpc] [\#7744](https://github.com/tendermint/tendermint/pull/7744) fix layout of endpoint list. (@creachadair)
- [statesync] [\#7886](https://github.com/tendermint/tendermint/pull/7886) assert app version matches. (@cmwaters)
## v0.35.1
January 26, 2022
Special thanks to external contributors on this release: @altergui, @odeke-em,
@thanethomson
### BREAKING CHANGES
- CLI/RPC/Config
- [config] [\#7276](https://github.com/tendermint/tendermint/pull/7276) rpc: Add experimental config params to allow for subscription buffer size control (@thanethomson).
- P2P Protocol
- [p2p] [\#7265](https://github.com/tendermint/tendermint/pull/7265) Peer manager reduces peer score for each failed dial attempt for peers that have not successfully dialed. (@tychoish)
- [p2p] [\#7594](https://github.com/tendermint/tendermint/pull/7594) always advertise self, to enable mutual address discovery. (@altergui)
### FEATURES
- [rpc] [\#7270](https://github.com/tendermint/tendermint/pull/7270) Add `header` and `header_by_hash` RPC Client queries. (@fedekunze) (@cmwaters)
### IMPROVEMENTS
- [internal/protoio] [\#7325](https://github.com/tendermint/tendermint/pull/7325) Optimized `MarshalDelimited` by inlining the common case and using a `sync.Pool` in the worst case. (@odeke-em)
- [\#7338](https://github.com/tendermint/tendermint/pull/7338) pubsub: Performance improvements for the event query API (backport of #7319) (@creachadair)
- [\#7252](https://github.com/tendermint/tendermint/pull/7252) Add basic metrics to the indexer package. (@creachadair)
- [\#7338](https://github.com/tendermint/tendermint/pull/7338) Performance improvements for the event query API. (@creachadair)
### BUG FIXES
- [\#7310](https://github.com/tendermint/tendermint/issues/7310) pubsub: Report a non-nil error when shutting down (fixes #7306).
- [\#7355](https://github.com/tendermint/tendermint/pull/7355) Fix incorrect tests using the PSQL sink. (@creachadair)
- [\#7683](https://github.com/tendermint/tendermint/pull/7683) rpc: check error code for broadcast_tx_commit. (@tychoish)
## v0.35.0
November 4, 2021
@@ -309,6 +174,39 @@ Special thanks to external contributors on this release: @JayT106,
- [cmd/tendermint/commands] [\#6623](https://github.com/tendermint/tendermint/pull/6623) replace `$HOME/.some/test/dir` with `t.TempDir` (@tanyabouman)
- [statesync] \#6807 Implement P2P state provider as an alternative to RPC (@cmwaters)
## v0.34.15
Special thanks to external contributors on this release: @thanethomson
### BUG FIXES
- [\#7368](https://github.com/tendermint/tendermint/issues/7368) cmd: add integration test for rollback functionality (@cmwaters).
- [\#7309](https://github.com/tendermint/tendermint/issues/7309) pubsub: Report a non-nil error when shutting down (fixes #7306).
- [\#7057](https://github.com/tendermint/tendermint/pull/7057) Import Postgres driver support for the psql indexer (@creachadair).
- [\#7106](https://github.com/tendermint/tendermint/pull/7106) Revert mutex change to ABCI Clients (@tychoish).
### IMPROVEMENTS
- [config] [\#7230](https://github.com/tendermint/tendermint/issues/7230) rpc: Add experimental config params to allow for subscription buffer size control (@thanethomson).
## v0.34.14
This release backports the `rollback` feature to allow recovery in the event of an incorrect app hash.
### FEATURES
- [\#6982](https://github.com/tendermint/tendermint/pull/6982) The tendermint binary now has built-in support for running the end-to-end test application (with state sync support) (@cmwaters).
- [cli] [#7033](https://github.com/tendermint/tendermint/pull/7033) Add a `rollback` command to rollback to the previous tendermint state. This may be useful in the event of non-deterministic app hash or when reverting an upgrade. @cmwaters
### IMPROVEMENTS
- [\#7103](https://github.com/tendermint/tendermint/pull/7104) Remove IAVL dependency (backport of #6550) (@cmwaters)
### BUG FIXES
- [\#7057](https://github.com/tendermint/tendermint/pull/7057) Import Postgres driver support for the psql indexer (@creachadair).
- [ABCI] [\#7110](https://github.com/tendermint/tendermint/issues/7110) Revert "change client to use multi-reader mutexes (#6873)" (@tychoish).
## v0.34.13
*September 6, 2021*
@@ -1987,7 +1885,7 @@ more details.
- [rpc] [\#3269](https://github.com/tendermint/tendermint/issues/2826) Limit number of unique clientIDs with open subscriptions. Configurable via `rpc.max_subscription_clients`
- [rpc] [\#3269](https://github.com/tendermint/tendermint/issues/2826) Limit number of unique queries a given client can subscribe to at once. Configurable via `rpc.max_subscriptions_per_client`.
- [rpc] [\#3435](https://github.com/tendermint/tendermint/issues/3435) Default ReadTimeout and WriteTimeout changed to 10s. WriteTimeout can increased by setting `rpc.timeout_broadcast_tx_commit` in the config.
- [rpc/client] [\#3269](https://github.com/tendermint/tendermint/issues/3269) Update `EventsClient` interface to reflect new pubsub/eventBus API [ADR-33](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-033-pubsub.md). This includes `Subscribe`, `Unsubscribe`, and `UnsubscribeAll` methods.
- [rpc/client] [\#3269](https://github.com/tendermint/tendermint/issues/3269) Update `EventsClient` interface to reflect new pubsub/eventBus API [ADR-33](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-033-pubsub.md). This includes `Subscribe`, `Unsubscribe`, and `UnsubscribeAll` methods.
* Apps
- [abci] [\#3403](https://github.com/tendermint/tendermint/issues/3403) Remove `time_iota_ms` from BlockParams. This is a
@@ -2040,7 +1938,7 @@ more details.
- [blockchain] [\#3358](https://github.com/tendermint/tendermint/pull/3358) Fix timer leak in `BlockPool` (@guagualvcha)
- [cmd] [\#3408](https://github.com/tendermint/tendermint/issues/3408) Fix `testnet` command's panic when creating non-validator configs (using `--n` flag) (@srmo)
- [libs/db/remotedb/grpcdb] [\#3402](https://github.com/tendermint/tendermint/issues/3402) Close Iterator/ReverseIterator after use
- [libs/pubsub] [\#951](https://github.com/tendermint/tendermint/issues/951), [\#1880](https://github.com/tendermint/tendermint/issues/1880) Use non-blocking send when dispatching messages [ADR-33](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-033-pubsub.md)
- [libs/pubsub] [\#951](https://github.com/tendermint/tendermint/issues/951), [\#1880](https://github.com/tendermint/tendermint/issues/1880) Use non-blocking send when dispatching messages [ADR-33](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-033-pubsub.md)
- [lite] [\#3364](https://github.com/tendermint/tendermint/issues/3364) Fix `/validators` and `/abci_query` proxy endpoints
(@guagualvcha)
- [p2p/conn] [\#3347](https://github.com/tendermint/tendermint/issues/3347) Reject all-zero shared secrets in the Diffie-Hellman step of secret-connection
@@ -2107,7 +2005,7 @@ For more, see issues marked
This release also includes a fix to prevent Tendermint from including the same
piece of evidence in more than one block. This issue was reported by @chengwenxi in our
[bug bounty program](https://hackerone.com/tendermint).
[bug bounty program](https://hackerone.com/cosmos).
### BREAKING CHANGES:
@@ -2600,7 +2498,7 @@ Special thanks to external contributors on this release:
@james-ray, @overbool, @phymbert, @Slamper, @Uzair1995, @yutianwu.
Special thanks to @Slamper for a series of bug reports in our [bug bounty
program](https://hackerone.com/tendermint) which are fixed in this release.
program](https://hackerone.com/cosmos) which are fixed in this release.
This release is primarily about adding Version fields to various data structures,
optimizing consensus messages for signing and verification in
@@ -2744,7 +2642,7 @@ Special thanks to external contributors on this release:
This release is mostly about the ConsensusParams - removing fields and enforcing MaxGas.
It also addresses some issues found via security audit, removes various unused
functions from `libs/common`, and implements
[ADR-012](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-012-peer-transport.md).
[ADR-012](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-012-peer-transport.md).
BREAKING CHANGES:
@@ -2825,7 +2723,7 @@ BREAKING CHANGES:
- [abci] Added address of the original proposer of the block to Header
- [abci] Change ABCI Header to match Tendermint exactly
- [abci] [\#2159](https://github.com/tendermint/tendermint/issues/2159) Update use of `Validator` (see
[ADR-018](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-018-ABCI-Validators.md)):
[ADR-018](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-018-ABCI-Validators.md)):
- Remove PubKey from `Validator` (so it's just Address and Power)
- Introduce `ValidatorUpdate` (with just PubKey and Power)
- InitChain and EndBlock use ValidatorUpdate
@@ -2847,7 +2745,7 @@ BREAKING CHANGES:
- [state] [\#1815](https://github.com/tendermint/tendermint/issues/1815) Validator set changes are now delayed by one block (!)
- Add NextValidatorSet to State, changes on-disk representation of state
- [state] [\#2184](https://github.com/tendermint/tendermint/issues/2184) Enforce ConsensusParams.BlockSize.MaxBytes (See
[ADR-020](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-020-block-size.md)).
[ADR-020](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-020-block-size.md)).
- Remove ConsensusParams.BlockSize.MaxTxs
- Introduce maximum sizes for all components of a block, including ChainID
- [types] Updates to the block Header:
@@ -2858,7 +2756,7 @@ BREAKING CHANGES:
- [consensus] [\#2203](https://github.com/tendermint/tendermint/issues/2203) Implement BFT time
- Timestamp in block must be monotonic and equal the median of timestamps in block's LastCommit
- [crypto] [\#2239](https://github.com/tendermint/tendermint/issues/2239) Secp256k1 signature changes (See
[ADR-014](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-014-secp-malleability.md)):
[ADR-014](https://github.com/tendermint/tendermint/blob/develop/docs/architecture/adr-014-secp-malleability.md)):
- format changed from DER to `r || s`, both little endian encoded as 32 bytes.
- malleability removed by requiring `s` to be in canonical form.

View File

@@ -2,9 +2,9 @@
Friendly reminder: We have a [bug bounty program](https://hackerone.com/cosmos).
## v0.35.8
## vX.X
Month DD, YYYY
Month, DD, YYYY
Special thanks to external contributors on this release:
@@ -12,20 +12,56 @@ Special thanks to external contributors on this release:
- CLI/RPC/Config
- [rpc] \#7575 Rework how RPC responses are written back via HTTP. (@creachadair)
- [rpc] \#7121 Remove the deprecated gRPC interface to the RPC service. (@creachadair)
- [blocksync] \#7159 Remove support for disabling blocksync in any circumstance. (@tychoish)
- [mempool] \#7171 Remove legacy mempool implementation. (@tychoish)
- Apps
- [proto/tendermint] \#6976 Remove core protobuf files in favor of only housing them in the [tendermint/spec](https://github.com/tendermint/spec) repository.
- P2P Protocol
- [p2p] \#7035 Remove legacy P2P routing implementation and associated configuration options. (@tychoish)
- [p2p] \#7265 Peer manager reduces peer score for each failed dial attempt for peers that have not successfully dialed. (@tychoish)
- [p2p] [\#7594](https://github.com/tendermint/tendermint/pull/7594) always advertise self, to enable mutual address discovery. (@altergui)
- Go API
- [rpc] \#7474 Remove the "URI" RPC client. (@creachadair)
- [libs/pubsub] \#7451 Internalize the pubsub packages. (@creachadair)
- [libs/sync] \#7450 Internalize and remove the library. (@creachadair)
- [libs/async] \#7449 Move library to internal. (@creachadair)
- [pubsub] \#7231 Remove unbuffered subscriptions and rework the Subscription interface. (@creachadair)
- [eventbus] \#7231 Move the EventBus type to the internal/eventbus package. (@creachadair)
- [blocksync] \#7046 Remove v2 implementation of the blocksync service and reactor, which was disabled in the previous release. (@tychoish)
- [p2p] \#7064 Remove WDRR queue implementation. (@tychoish)
- [config] \#7169 `WriteConfigFile` now returns an error. (@tychoish)
- [libs/service] \#7288 Remove SetLogger method on `service.Service` interface. (@tychoish)
- Blockchain Protocol
### FEATURES
- [cli] [\#8675](https://github.com/tendermint/tendermint/pull/8675) Add command to force compact goleveldb databases (@cmwaters)
- [rpc] [\#7270](https://github.com/tendermint/tendermint/pull/7270) Add `header` and `header_by_hash` RPC Client queries. (@fedekunze)
- [cli] [#7033](https://github.com/tendermint/tendermint/pull/7033) Add a `rollback` command to rollback to the previous tendermint state in the event of non-deterministic app hash or reverting an upgrade.
- [mempool, rpc] \#7041 Add removeTx operation to the RPC layer. (@tychoish)
- [consensus] \#7354 add a new `synchrony` field to the `ConsensusParameter` struct for controlling the parameters of the proposer-based timestamp algorithm. (@williambanfield)
- [consensus] \#7376 Update the proposal logic per the proposer-based timestamps specification so that the proposer will wait for the previous block time to occur before proposing the next block. (@williambanfield)
- [consensus] \#7391 Use the proposed block timestamp as the proposal timestamp. Update the block validation logic to ensure that the proposed block's timestamp matches the timestamp in the proposal message. (@williambanfield)
- [consensus] \#7415 Update proposal validation logic to Prevote nil if a proposal does not meet the conditions for timeliness per the proposer-based timestamp specification. (@anca)
- [consensus] \#7382 Update block validation to no longer require the block timestamp to be the median of the timestamps of the previous commit. (@anca)
### IMPROVEMENTS
- [internal/protoio] \#7325 Optimized `MarshalDelimited` by inlining the common case and using a `sync.Pool` in the worst case. (@odeke-em)
- [consensus] \#6969 remove logic to 'unlock' a locked block.
- [pubsub] \#7319 Performance improvements for the event query API (@creachadair)
- [node] \#7521 Define concrete type for seed node implementation (@spacech1mp)
- [rpc] \#7612 paginate mempool /unconfirmed_txs rpc endpoint (@spacech1mp)
- [light] [\#7536](https://github.com/tendermint/tendermint/pull/7536) rpc /status call returns info about the light client (@jmalicevic)
### BUG FIXES
- [mempool] \#8944 Fix unbounded heap growth in the priority mempool. (@creachadair)
- fix: assignment copies lock value in `BitArray.UnmarshalJSON()` (@lklimek)

View File

@@ -20,7 +20,7 @@ This code of conduct applies to all projects run by the Tendermint/COSMOS team a
* Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.
* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term “harassment” as including the definition in the [Citizen Code of Conduct](https://github.com/stumpsyn/policies/blob/master/citizen_code_of_conduct.md); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don't tolerate behavior that excludes people in socially marginalized groups.
* We will exclude you from interaction if you insult, demean or harass anyone. That is not welcome behaviour. We interpret the term “harassment” as including the definition in the [Citizen Code of Conduct](http://citizencodeofconduct.org/); if you have any lack of clarity about what might be included in that concept, please read their definition. In particular, we don't tolerate behavior that excludes people in socially marginalized groups.
* Private harassment is also unacceptable. No matter who you are, if you feel you have been or are being harassed or made uncomfortable by a community member, please contact one of the channel admins or the person mentioned above immediately. Whether you're a regular contributor or a newcomer, we care about making this community a safe place for you and we've got your back.

View File

@@ -109,7 +109,7 @@ We use [Protocol Buffers](https://developers.google.com/protocol-buffers) along
For linting, checking breaking changes and generating proto stubs, we use [buf](https://buf.build/). If you would like to run linting and check whether the changes you have made are breaking, you will need to have docker running locally. The linting command is `make proto-lint` and the breaking-changes check is `make proto-check-breaking`.
We use [Docker](https://www.docker.com/) to generate the protobuf stubs. To generate the stubs yourself, make sure docker is running then run `make proto-gen`.
We use [Docker](https://www.docker.com/) to generate the protobuf stubs. To generate the stubs yourself, make sure docker is running, then run `make proto-gen`. This command uses the spec repo to get the necessary protobuf files for generating the go code. If you are modifying the proto files manually for changes in the core data structures, you will need to clone them into the go repo and comment out lines 22-37 of the file `./scripts/protocgen.sh`.
### Visual Studio Code
@@ -227,150 +227,6 @@ Fixes #nnnn
Each PR should have one commit once it lands on `master`; this can be accomplished by using the "squash and merge" button on Github. Be sure to edit your commit message, though!
### Release procedure
#### A note about backport branches
Tendermint's `master` branch is under active development.
Releases are specified using tags and are built from long-lived "backport" branches.
Each release "line" (e.g. 0.34 or 0.33) has its own long-lived backport branch,
and the backport branches have names like `v0.34.x` or `v0.33.x`
(literally, `x`; it is not a placeholder in this case).
As non-breaking changes land on `master`, they should also be backported (cherry-picked)
to these backport branches.
We use Mergify's [backport feature](https://mergify.io/features/backports) to automatically backport
to the needed branch. There should be a label for any backport branch that you'll be targeting.
To notify the bot to backport a pull request, mark the pull request with
the label `S:backport-to-<backport_branch>`.
Once the original pull request is merged, the bot will try to cherry-pick the pull request
to the backport branch. If the bot fails to backport, it will open a pull request.
The author of the original pull request is responsible for solving the conflicts and
merging the pull request.
#### Creating a backport branch
If this is the first release candidate for a major release, you get to have the honor of creating
the backport branch!
Note that, after creating the backport branch, you'll also need to update the tags on `master`
so that `go mod` is able to order the branches correctly. You should tag `master` with a "dev" tag
that is "greater than" the backport branches tags. See #6072 for more context.
In the following example, we'll assume that we're making a backport branch for
the 0.35.x line.
1. Start on `master`
2. Create the backport branch:
`git checkout -b v0.35.x`
3. Go back to master and tag it as the dev branch for the _next_ major release and push it back up:
`git tag -a v0.36.0-dev; git push origin v0.36.0-dev`
4. Create a new workflow to run the e2e nightlies for this backport branch.
(See https://github.com/tendermint/tendermint/blob/master/.github/workflows/e2e-nightly-34x.yml
for an example.)
#### Release candidates
Before creating an official release, especially a major release, we may want to create a
release candidate (RC) for our friends and partners to test out. We use git tags to
create RCs, and we build them off of backport branches.
Tags for RCs should follow the "standard" release naming conventions, with `-rcX` at the end
(for example, `v0.35.0-rc0`).
(Note that branches and tags _cannot_ have the same names, so it's important that these branches
have distinct names from the tags/release names.)
If this is the first RC for a major release, you'll have to make a new backport branch (see above).
Otherwise:
1. Start from the backport branch (e.g. `v0.35.x`).
1. Run the integration tests and the e2e nightlies
(which can be triggered from the Github UI;
e.g., https://github.com/tendermint/tendermint/actions/workflows/e2e-nightly-34x.yml).
1. Prepare the changelog:
- Move the changes included in `CHANGELOG_PENDING.md` into `CHANGELOG.md`.
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all PRs
- Ensure that UPGRADING.md is up-to-date and includes notes on any breaking changes
or other upgrading flows.
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
1. Open a PR with these changes against the backport branch.
1. Once these changes have landed on the backport branch, be sure to pull them back down locally.
2. Once you have the changes locally, create the new tag, specifying a name and a tag "message":
`git tag -a v0.35.0-rc0 -m "Release Candidate v0.35.0-rc0"`
3. Push the tag back up to origin:
`git push origin v0.35.0-rc0`
Now the tag should be available on the repo's releases page.
4. Future RCs will continue to be built off of this branch.
Note that this process should only be used for "true" RCs--
release candidates that, if successful, will be the next release.
For more experimental "RCs," create a new, short-lived branch and tag that instead.
#### Major release
This major release process assumes that this release was preceded by release candidates.
If there were no release candidates, begin by creating a backport branch, as described above.
1. Start on the backport branch (e.g. `v0.35.x`)
2. Run integration tests and the e2e nightlies.
3. Prepare the release:
- "Squash" changes from the changelog entries for the RCs into a single entry,
and add all changes included in `CHANGELOG_PENDING.md`.
(Squashing includes both combining all entries, as well as removing or simplifying
any intra-RC changes. It may also help to alphabetize the entries by package name.)
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all PRs
- Ensure that UPGRADING.md is up-to-date and includes notes on any breaking changes
or other upgrading flows.
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
4. Open a PR with these changes against the backport branch.
5. Once these changes are on the backport branch, push a tag with prepared release details.
This will trigger the actual release `v0.35.0`.
- `git tag -a v0.35.0 -m 'Release v0.35.0'`
- `git push origin v0.35.0`
6. Make sure that `master` is updated with the latest `CHANGELOG.md`, `CHANGELOG_PENDING.md`, and `UPGRADING.md`.
7. Add the release to the documentation site generator config (see
[DOCS_README.md](./docs/DOCS_README.md) for more details). In summary:
- Start on branch `master`.
- Add a new line at the bottom of [`docs/versions`](./docs/versions) to
ensure the newest release is the default for the landing page.
- Add a new entry to `themeConfig.versions` in
[`docs/.vuepress/config.js`](./docs/.vuepress/config.js) to include the
release in the dropdown versions menu.
#### Minor release (point releases)
Minor releases are done differently from major releases: They are built off of long-lived backport branches, rather than from master.
As non-breaking changes land on `master`, they should also be backported (cherry-picked) to these backport branches.
Minor releases don't have release candidates by default, although any tricky changes may merit a release candidate.
To create a minor release:
1. Checkout the long-lived backport branch: `git checkout v0.35.x`
2. Run integration tests (`make test_integrations`) and the nightlies.
3. Check out a new branch and prepare the release:
- Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for all issues
- Run `bash ./scripts/authors.sh` to get a list of authors since the latest release, and add the GitHub aliases of external contributors to the top of the CHANGELOG. To lookup an alias from an email, try `bash ./scripts/authors.sh <email>`
- Reset the `CHANGELOG_PENDING.md`
- Bump the ABCI version number, if necessary.
(Note that ABCI follows semver, and that ABCI versions are the only versions
which can change during minor releases, and only field additions are valid minor changes.)
4. Open a PR with these changes that will land them back on `v0.35.x`
5. Once this change has landed on the backport branch, make sure to pull it locally, then push a tag.
- `git tag -a v0.35.1 -m 'Release v0.35.1'`
- `git push origin v0.35.1`
6. Create a pull request back to master with the CHANGELOG & version changes from the latest release.
- Remove all `R:minor` labels from the pull requests that were included in the release.
- Do not merge the backport branch into master.
## Testing
### Unit tests

View File

@@ -1,5 +1,5 @@
# stage 1 Generate Tendermint Binary
FROM golang:1.16-alpine as builder
FROM golang:1.17-alpine as builder
RUN apk update && \
apk upgrade && \
apk --no-cache add make
@@ -8,7 +8,7 @@ WORKDIR /tendermint
RUN make build-linux
# stage 2
FROM golang:1.15-alpine
FROM golang:1.17-alpine
LABEL maintainer="hello@tendermint.com"
# Tendermint will be looking for the genesis file in /tendermint/config/genesis.json

View File

@@ -13,10 +13,8 @@ endif
LD_FLAGS = -X github.com/tendermint/tendermint/version.TMVersion=$(VERSION)
BUILD_FLAGS = -mod=readonly -ldflags "$(LD_FLAGS)"
HTTPS_GIT := https://github.com/tendermint/tendermint.git
BUILD_IMAGE := ghcr.io/tendermint/docker-build-proto
BASE_BRANCH := v0.35.x
DOCKER_PROTO := docker run -v $(shell pwd):/workspace --workdir /workspace $(BUILD_IMAGE)
DOCKER_PROTO_BUILDER := docker run -v $(shell pwd):/workspace --workdir /workspace $(BUILD_IMAGE)
CGO_ENABLED ?= 0
# handle nostrip
@@ -80,34 +78,17 @@ $(BUILDDIR)/:
### Protobuf ###
###############################################################################
proto-all: proto-gen proto-lint proto-check-breaking
.PHONY: proto-all
proto-gen:
@echo "Generating Go packages for .proto files"
@$(DOCKER_PROTO) sh ./scripts/protocgen.sh
@docker pull -q tendermintdev/docker-build-proto
@echo "Generating Protobuf files"
@$(DOCKER_PROTO_BUILDER) sh ./scripts/protocgen.sh
.PHONY: proto-gen
proto-lint:
@echo "Running lint checks for .proto files"
@$(DOCKER_PROTO) buf lint --error-format=json
.PHONY: proto-lint
proto-format:
@echo "Formatting .proto files"
@$(DOCKER_PROTO) find ./ -not -path "./third_party/*" -name '*.proto' -exec clang-format -i {} \;
@echo "Formatting Protobuf files"
@$(DOCKER_PROTO_BUILDER) find ./ -not -path "./third_party/*" -name *.proto -exec clang-format -i {} \;
.PHONY: proto-format
proto-check-breaking:
@echo "Checking for breaking changes in .proto files"
@$(DOCKER_PROTO) buf breaking --against .git#branch=$(BASE_BRANCH)
.PHONY: proto-check-breaking
proto-check-breaking-ci:
@echo "Checking for breaking changes in .proto files"
@$(DOCKER_PROTO) buf breaking --against $(HTTPS_GIT)#branch=$(BASE_BRANCH)
.PHONY: proto-check-breaking-ci
###############################################################################
### Build ABCI ###
###############################################################################
@@ -228,8 +209,10 @@ build-docs:
### Docker image ###
###############################################################################
build-docker:
build-docker: build-linux
cp $(BUILDDIR)/tendermint DOCKER/tendermint
docker build --label=tendermint --tag="tendermint/tendermint" -f DOCKER/Dockerfile .
rm -rf DOCKER/tendermint
.PHONY: build-docker
@@ -325,4 +308,4 @@ $(BUILDDIR)/packages.txt:$(GO_TEST_FILES) $(BUILDDIR)
split-test-packages:$(BUILDDIR)/packages.txt
split -d -n l/$(NUM_SPLIT) $< $<.
test-group-%:split-test-packages
cat $(BUILDDIR)/packages.txt.$* | xargs go test -mod=readonly -timeout=15m -race -coverprofile=$(BUILDDIR)/$*.profile.out
cat $(BUILDDIR)/packages.txt.$* | xargs go test -mod=readonly -timeout=8m -race -coverprofile=$(BUILDDIR)/$*.profile.out

View File

@@ -35,9 +35,12 @@ See below for more details about [versioning](#versioning).
In any case, if you intend to run Tendermint in production, we're happy to help. You can
contact us [over email](mailto:hello@interchain.berlin) or [join the chat](https://discord.gg/cosmosnetwork).
More on how releases are conducted can be found [here](./RELEASES.md).
## Security
To report a security vulnerability, see our [bug bounty program](https://hackerone.com/cosmos).
To report a security vulnerability, see our [bug bounty
program](https://hackerone.com/cosmos).
For examples of the kinds of bugs we're looking for, see [our security policy](SECURITY.md).
We also maintain a dedicated mailing list for security updates. We will only ever use this mailing list
@@ -47,7 +50,7 @@ to notify you of vulnerabilities and fixes in Tendermint Core. You can subscribe
| Requirement | Notes |
|-------------|------------------|
| Go version | Go1.16 or higher |
| Go version | Go1.17 or higher |
## Documentation
@@ -111,6 +114,8 @@ in [UPGRADING.md](./UPGRADING.md).
### Tendermint Core
We keep a public up-to-date version of our roadmap [here](./docs/roadmap/roadmap.md)
For details about the blockchain data structures and the p2p protocols, see the
[Tendermint specification](https://docs.tendermint.com/master/spec/).

180
RELEASES.md Normal file
View File

@@ -0,0 +1,180 @@
# Releases
Tendermint uses [semantic versioning](https://semver.org/) with each release following
a `vX.Y.Z` format. The `master` branch is used for active development and thus it's
advisable not to build against it.
The latest changes are always initially merged into `master`.
Releases are specified using tags and are built from long-lived "backport" branches
that are cut from `master` when the release process begins.
Each release "line" (e.g. 0.34 or 0.33) has its own long-lived backport branch,
and the backport branches have names like `v0.34.x` or `v0.33.x`
(literally, `x`; it is not a placeholder in this case). Tendermint only
maintains the last two releases at a time (the oldest release is predominantly
just security patches).
## Backporting
As non-breaking changes land on `master`, they should also be backported
to these backport branches.
We use Mergify's [backport feature](https://mergify.io/features/backports) to automatically backport
to the needed branch. There should be a label for any backport branch that you'll be targeting.
To notify the bot to backport a pull request, mark the pull request with the label corresponding
to the correct backport branch. For example, to backport to v0.35.x, add the label `S:backport-to-v0.35.x`.
Once the original pull request is merged, the bot will try to cherry-pick the pull request
to the backport branch. If the bot fails to backport, it will open a pull request.
The author of the original pull request is responsible for solving the conflicts and
merging the pull request.
### Creating a backport branch
If this is the first release candidate for a major release, you get to have the
honor of creating the backport branch!
Note that, after creating the backport branch, you'll also need to update the
tags on `master` so that `go mod` is able to order the branches correctly. You
should tag `master` with a "dev" tag that is "greater than" the backport
branches' tags. See [#6072](https://github.com/tendermint/tendermint/pull/6072)
for more context.
In the following example, we'll assume that we're making a backport branch for
the 0.35.x line.
1. Start on `master`
2. Create and push the backport branch:
```sh
git checkout -b v0.35.x
git push origin v0.35.x
```
After doing these steps, go back to `master` and do the following:
1. Tag `master` as the dev branch for the _next_ major release and push it back up.
For example:
```sh
git tag -a v0.36.0-dev -m "Development base for Tendermint v0.36."
git push origin v0.36.0-dev
```
2. Create a new workflow to run e2e nightlies for the new backport branch.
(See [e2e-nightly-master.yml][e2e] for an example.)
3. Add a new section to the Mergify config (`.github/mergify.yml`) to enable the
backport bot to work on this branch, and add a corresponding `S:backport-to-v0.35.x`
[label](https://github.com/tendermint/tendermint/labels) so the bot can be triggered.
4. Add a new section to the Dependabot config (`.github/dependabot.yml`) to
enable automatic update of Go dependencies on this branch. Copy and edit one
of the existing branch configurations to set the correct `target-branch`.
[e2e]: https://github.com/tendermint/tendermint/blob/master/.github/workflows/e2e-nightly-master.yml
## Release candidates
Before creating an official release, especially a major release, we may want to create a
release candidate (RC) for our friends and partners to test out. We use git tags to
create RCs, and we build them off of backport branches.
Tags for RCs should follow the "standard" release naming conventions, with `-rcX` at the end
(for example, `v0.35.0-rc0`).
(Note that branches and tags _cannot_ have the same names, so it's important that these branches
have distinct names from the tags/release names.)
If this is the first RC for a major release, you'll have to make a new backport branch (see above).
Otherwise:
1. Start from the backport branch (e.g. `v0.35.x`).
2. Run the integration tests and the e2e nightlies
(which can be triggered from the Github UI;
e.g., https://github.com/tendermint/tendermint/actions/workflows/e2e-nightly-34x.yml).
3. Prepare the changelog:
- Move the changes included in `CHANGELOG_PENDING.md` into `CHANGELOG.md`. Each RC should have
its own changelog section. These will be squashed when the final candidate is released.
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all PRs
- Ensure that `UPGRADING.md` is up-to-date and includes notes on any breaking changes
or other upgrading flows.
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary.
Check the changelog for breaking changes in these components.
- Bump ABCI protocol version in `version.go`, if necessary
4. Open a PR with these changes against the backport branch.
5. Once these changes have landed on the backport branch, be sure to pull them back down locally.
6. Once you have the changes locally, create the new tag, specifying a name and a tag "message":
`git tag -a v0.35.0-rc0 -m "Release Candidate v0.35.0-rc0"`
7. Push the tag back up to origin:
`git push origin v0.35.0-rc0`
Now the tag should be available on the repo's releases page.
8. Future RCs will continue to be built off of this branch.
Note that this process should only be used for "true" RCs--
release candidates that, if successful, will be the next release.
For more experimental "RCs," create a new, short-lived branch and tag that instead.
## Major release
This major release process assumes that this release was preceded by release candidates.
If there were no release candidates, begin by creating a backport branch, as described above.
1. Start on the backport branch (e.g. `v0.35.x`)
2. Run integration tests (`make test_integrations`) and the e2e nightlies.
3. Prepare the release:
- "Squash" changes from the changelog entries for the RCs into a single entry,
and add all changes included in `CHANGELOG_PENDING.md`.
(Squashing includes both combining all entries, as well as removing or simplifying
any intra-RC changes. It may also help to alphabetize the entries by package name.)
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for
all PRs
- Ensure that `UPGRADING.md` is up-to-date and includes notes on any breaking changes
or other upgrading flows.
- Bump TMVersionDefault version in `version.go`
- Bump P2P and block protocol versions in `version.go`, if necessary
- Bump ABCI protocol version in `version.go`, if necessary
4. Open a PR with these changes against the backport branch.
5. Once these changes are on the backport branch, push a tag with prepared release details.
This will trigger the actual release `v0.35.0`.
- `git tag -a v0.35.0 -m 'Release v0.35.0'`
- `git push origin v0.35.0`
6. Make sure that `master` is updated with the latest `CHANGELOG.md`, `CHANGELOG_PENDING.md`, and `UPGRADING.md`.
7. Add the release to the documentation site generator config (see
[DOCS_README.md](./docs/DOCS_README.md) for more details). In summary:
- Start on branch `master`.
- Add a new line at the bottom of [`docs/versions`](./docs/versions) to
ensure the newest release is the default for the landing page.
- Add a new entry to `themeConfig.versions` in
[`docs/.vuepress/config.js`](./docs/.vuepress/config.js) to include the
release in the dropdown versions menu.
- Commit these changes to `master` and backport them into the backport
branch for this release.
## Minor release (point releases)
Minor releases are done differently from major releases: They are built off of
long-lived backport branches, rather than from master. As non-breaking changes
land on `master`, they should also be backported into these backport branches.
Minor releases don't have release candidates by default, although any tricky
changes may merit a release candidate.
To create a minor release:
1. Checkout the long-lived backport branch: `git checkout v0.35.x`
2. Run integration tests (`make test_integrations`) and the nightlies.
3. Check out a new branch and prepare the release:
- Copy `CHANGELOG_PENDING.md` to top of `CHANGELOG.md`
- Run `python ./scripts/linkify_changelog.py CHANGELOG.md` to add links for all issues
- Run `bash ./scripts/authors.sh` to get a list of authors since the latest release, and add the GitHub aliases of external contributors to the top of the CHANGELOG. To lookup an alias from an email, try `bash ./scripts/authors.sh <email>`
- Reset the `CHANGELOG_PENDING.md`
- Bump the TMVersionDefault version in `version.go`
- Bump the ABCI version number, if necessary.
(Note that ABCI follows semver, and that ABCI versions are the only versions
which can change during minor releases, and only field additions are valid minor changes.)
4. Open a PR with these changes that will land them back on `v0.35.x`
5. Once this change has landed on the backport branch, make sure to pull it locally, then push a tag.
- `git tag -a v0.35.1 -m 'Release v0.35.1'`
- `git push origin v0.35.1`
6. Create a pull request back to master with the CHANGELOG & version changes from the latest release.
- Remove all `R:minor` labels from the pull requests that were included in the release.
- Do not merge the backport branch into master.

View File

@@ -4,7 +4,7 @@
As part of our [Coordinated Vulnerability Disclosure
Policy](https://tendermint.com/security), we operate a [bug
bounty](https://hackerone.com/tendermint).
bounty](https://hackerone.com/cosmos).
See the policy for more details on submissions and rewards, and see "Example Vulnerabilities" (below) for examples of the kinds of bugs we're most interested in.
### Guidelines
@@ -86,7 +86,7 @@ If you are running older versions of Tendermint Core, we encourage you to upgrad
## Scope
The full scope of our bug bounty program is outlined on our [Hacker One program page](https://hackerone.com/tendermint). Please also note that, in the interest of the safety of our users and staff, a few things are explicitly excluded from scope:
The full scope of our bug bounty program is outlined on our [Hacker One program page](https://hackerone.com/cosmos). Please also note that, in the interest of the safety of our users and staff, a few things are explicitly excluded from scope:
* Any third-party services
* Findings from physical testing, such as office access

View File

@@ -44,47 +44,26 @@ This guide provides instructions for upgrading to specific versions of Tendermin
* The fast sync process as well as the blockchain package and service have all
been renamed to block sync
* We have added a new, experimental tool to help operators migrate
configuration files created by previous versions of Tendermint.
To try this tool, run:
```shell
# Install the tool.
go install github.com/tendermint/tendermint/scripts/confix@v0.35.x
# Run the tool with the old configuration file as input.
# Replace the -config argument with your path.
confix -config ~/.tendermint/config/config.toml -out updated.toml
```
This tool should be able to update configurations from v0.34 to v0.35. We
plan to extend it to handle older configuration files in the future. For now,
it will report an error (without making any changes) if it does not recognize
the version that created the file.
### Database Key Format Changes
The format of all tendermint on-disk database keys changes in
0.35. Upgrading nodes must either re-sync all data or run a migration
script provided in this release.
The script located in
`github.com/tendermint/tendermint/scripts/keymigrate/migrate.go` provides the
function `Migrate(context.Context, db.DB)` which you can operationalize as
makes sense for your deployment.
script provided in this release. The script located in
`github.com/tendermint/tendermint/scripts/keymigrate/migrate.go`
provides the function `Migrate(context.Context, db.DB)` which you can
operationalize as makes sense for your deployment.
For ease of use the `tendermint` command includes a CLI version of the
migration script, which you can invoke, as in:
tendermint key-migrate
This reads the configuration file as normal and allows the `--db-backend` and
`--db-dir` flags to override the database location as needed.
This reads the configuration file as normal and allows the
`--db-backend` and `--db-dir` flags to change database operations as
needed.
The migration operation is intended to be idempotent, and should be safe to
rerun on the same database multiple times. As a safety measure, however, we
recommend that operators test out the migration on a copy of the database
first, if it is practical to do so, before applying it to the production data.
The migration operation is idempotent and can be run more than once,
if needed.
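For illustration only, a minimal sketch of driving the migration from Go (this assumes a goleveldb backend, the `keymigrate` package path noted above, and that `Migrate` returns an error; the database name and directory are placeholders you must adapt to your own `db-backend` and `db-dir` settings):

```go
package main

import (
	"context"
	"log"

	dbm "github.com/tendermint/tm-db"

	"github.com/tendermint/tendermint/scripts/keymigrate"
)

func main() {
	// Open one of the node's databases; the name and path below are
	// illustrative and must match your deployment.
	db, err := dbm.NewGoLevelDB("blockstore", "/path/to/.tendermint/data")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Rewrite legacy keys in place; the operation is intended to be idempotent.
	if err := keymigrate.Migrate(context.Background(), db); err != nil {
		log.Fatal(err)
	}
}
```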
### CLI Changes
@@ -134,11 +113,11 @@ To access any of the functionality previously available via the
`node.Node` type, use the `*local.Local` "RPC" client, that exposes
the full RPC interface provided as direct function calls. Import the
`github.com/tendermint/tendermint/rpc/client/local` package and pass
the node service as in the following:
```go
node := node.NewDefault() //construct the node object
// start and set up the node service
client := local.New(node.(local.NodeService))
// use client object to interact with the node
@@ -165,10 +144,10 @@ both stacks.
The P2P library was reimplemented in this release. The new implementation is
enabled by default in this version of Tendermint. The legacy implementation is still
included in this version of Tendermint as a backstop to work around unforeseen
production issues. The new and legacy versions are interoperable. If necessary,
you can enable the legacy implementation in the server configuration file.
To make use of the legacy P2P implementation, add or update the following field of
your server's configuration file under the `[p2p]` section:
```toml
@@ -193,8 +172,8 @@ in the order in which they were received.
* `priority`: A priority queue of messages.
* `wdrr`: A queue implementing the Weighted Deficit Round Robin algorithm. A
weighted deficit round robin queue is created per peer. Each queue contains a
separate 'flow' for each of the channels of communication that exist between any two
peers. Tendermint maintains a channel per message type between peers. Each WDRR
queue maintains a shared buffer with a fixed capacity through which messages on different

View File

@@ -20,7 +20,7 @@ To get up and running quickly, see the [getting started guide](../docs/app-dev/g
A detailed description of the ABCI methods and message types is contained in:
- [The main spec](https://github.com/tendermint/spec/blob/master/spec/abci/abci.md)
- [A protobuf file](../proto/tendermint/abci/types.proto)
- [A protobuf file](https://github.com/tendermint/spec/blob/master/proto/tendermint/abci/types.proto)
- [A Go interface](./types/application.go)
## Protocol Buffers

View File

@@ -6,7 +6,7 @@ import (
"sync"
"github.com/tendermint/tendermint/abci/types"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
)
@@ -33,47 +33,36 @@ type Client interface {
// Asynchronous requests
FlushAsync(context.Context) (*ReqRes, error)
EchoAsync(ctx context.Context, msg string) (*ReqRes, error)
InfoAsync(context.Context, types.RequestInfo) (*ReqRes, error)
DeliverTxAsync(context.Context, types.RequestDeliverTx) (*ReqRes, error)
CheckTxAsync(context.Context, types.RequestCheckTx) (*ReqRes, error)
QueryAsync(context.Context, types.RequestQuery) (*ReqRes, error)
CommitAsync(context.Context) (*ReqRes, error)
InitChainAsync(context.Context, types.RequestInitChain) (*ReqRes, error)
BeginBlockAsync(context.Context, types.RequestBeginBlock) (*ReqRes, error)
EndBlockAsync(context.Context, types.RequestEndBlock) (*ReqRes, error)
ListSnapshotsAsync(context.Context, types.RequestListSnapshots) (*ReqRes, error)
OfferSnapshotAsync(context.Context, types.RequestOfferSnapshot) (*ReqRes, error)
LoadSnapshotChunkAsync(context.Context, types.RequestLoadSnapshotChunk) (*ReqRes, error)
ApplySnapshotChunkAsync(context.Context, types.RequestApplySnapshotChunk) (*ReqRes, error)
// Synchronous requests
FlushSync(context.Context) error
EchoSync(ctx context.Context, msg string) (*types.ResponseEcho, error)
InfoSync(context.Context, types.RequestInfo) (*types.ResponseInfo, error)
DeliverTxSync(context.Context, types.RequestDeliverTx) (*types.ResponseDeliverTx, error)
CheckTxSync(context.Context, types.RequestCheckTx) (*types.ResponseCheckTx, error)
QuerySync(context.Context, types.RequestQuery) (*types.ResponseQuery, error)
CommitSync(context.Context) (*types.ResponseCommit, error)
InitChainSync(context.Context, types.RequestInitChain) (*types.ResponseInitChain, error)
BeginBlockSync(context.Context, types.RequestBeginBlock) (*types.ResponseBeginBlock, error)
EndBlockSync(context.Context, types.RequestEndBlock) (*types.ResponseEndBlock, error)
ListSnapshotsSync(context.Context, types.RequestListSnapshots) (*types.ResponseListSnapshots, error)
OfferSnapshotSync(context.Context, types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error)
LoadSnapshotChunkSync(context.Context, types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error)
ApplySnapshotChunkSync(context.Context, types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error)
Flush(context.Context) error
Echo(ctx context.Context, msg string) (*types.ResponseEcho, error)
Info(context.Context, types.RequestInfo) (*types.ResponseInfo, error)
DeliverTx(context.Context, types.RequestDeliverTx) (*types.ResponseDeliverTx, error)
CheckTx(context.Context, types.RequestCheckTx) (*types.ResponseCheckTx, error)
Query(context.Context, types.RequestQuery) (*types.ResponseQuery, error)
Commit(context.Context) (*types.ResponseCommit, error)
InitChain(context.Context, types.RequestInitChain) (*types.ResponseInitChain, error)
BeginBlock(context.Context, types.RequestBeginBlock) (*types.ResponseBeginBlock, error)
EndBlock(context.Context, types.RequestEndBlock) (*types.ResponseEndBlock, error)
ListSnapshots(context.Context, types.RequestListSnapshots) (*types.ResponseListSnapshots, error)
OfferSnapshot(context.Context, types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error)
LoadSnapshotChunk(context.Context, types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error)
ApplySnapshotChunk(context.Context, types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error)
}
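// Illustrative sketch (hypothetical helper, not part of the source file):
// with the simplified interface above, a caller issues synchronous requests
// directly, for example:
//
//	func ping(ctx context.Context, c Client) error {
//		if _, err := c.Echo(ctx, "ping"); err != nil {
//			return err
//		}
//		return c.Flush(ctx)
//	}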
//----------------------------------------
// NewClient returns a new ABCI client of the specified transport type.
// It returns an error if the transport is not "socket" or "grpc"
func NewClient(addr, transport string, mustConnect bool) (client Client, err error) {
func NewClient(logger log.Logger, addr, transport string, mustConnect bool) (client Client, err error) {
switch transport {
case "socket":
client = NewSocketClient(addr, mustConnect)
client = NewSocketClient(logger, addr, mustConnect)
case "grpc":
client = NewGRPCClient(addr, mustConnect)
client = NewGRPCClient(logger, addr, mustConnect)
default:
err = fmt.Errorf("unknown abci transport %s", transport)
}
@@ -87,15 +76,9 @@ type ReqRes struct {
*sync.WaitGroup
*types.Response // Not set atomically, so be sure to use WaitGroup.
mtx tmsync.Mutex
// callbackInvoked as a variable to track if the callback was already
// invoked during the regular execution of the request. This variable
// allows clients to set the callback simultaneously without potentially
// invoking the callback twice by accident, once when 'SetCallback' is
// called and once during the normal request.
callbackInvoked bool
cb func(*types.Response) // A single callback that may be set.
mtx sync.Mutex
done bool // Gets set to true once *after* WaitGroup.Done().
cb func(*types.Response) // A single callback that may be set.
}
func NewReqRes(req *types.Request) *ReqRes {
@@ -104,8 +87,8 @@ func NewReqRes(req *types.Request) *ReqRes {
WaitGroup: waitGroup1(),
Response: nil,
callbackInvoked: false,
cb: nil,
done: false,
cb: nil,
}
}
@@ -115,7 +98,7 @@ func NewReqRes(req *types.Request) *ReqRes {
func (r *ReqRes) SetCallback(cb func(res *types.Response)) {
r.mtx.Lock()
if r.callbackInvoked {
if r.done {
r.mtx.Unlock()
cb(r.Response)
return
@@ -134,7 +117,6 @@ func (r *ReqRes) InvokeCallback() {
if r.cb != nil {
r.cb(r.Response)
}
r.callbackInvoked = true
}
// GetCallback returns the configured callback of the ReqRes object which may be
@@ -149,6 +131,13 @@ func (r *ReqRes) GetCallback() func(*types.Response) {
return r.cb
}
// SetDone marks the ReqRes object as done.
func (r *ReqRes) SetDone() {
r.mtx.Lock()
r.done = true
r.mtx.Unlock()
}
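// Illustrative sketch (hypothetical usage, assuming the ReqRes API above): a
// caller can attach a callback that fires once the response has been set:
//
//	reqres := NewReqRes(types.ToRequestEcho("hello"))
//	reqres.SetCallback(func(res *types.Response) {
//		// Runs via InvokeCallback after the response arrives, or immediately
//		// inside SetCallback if the request is already done.
//	})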
func waitGroup1() (wg *sync.WaitGroup) {
wg = &sync.WaitGroup{}
wg.Add(1)

View File

@@ -2,30 +2,31 @@ package abciclient
import (
"fmt"
"sync"
"github.com/tendermint/tendermint/abci/types"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/libs/log"
)
// Creator creates new ABCI clients.
type Creator func() (Client, error)
type Creator func(log.Logger) (Client, error)
// NewLocalCreator returns a Creator for the given app,
// which will be running locally.
func NewLocalCreator(app types.Application) Creator {
mtx := new(tmsync.Mutex)
mtx := new(sync.Mutex)
return func() (Client, error) {
return NewLocalClient(mtx, app), nil
return func(logger log.Logger) (Client, error) {
return NewLocalClient(logger, mtx, app), nil
}
}
// NewRemoteCreator returns a Creator for the given address (e.g.
// "192.168.0.1") and transport (e.g. "tcp"). Set mustConnect to true if you
// want the client to connect before reporting success.
func NewRemoteCreator(addr, transport string, mustConnect bool) Creator {
return func() (Client, error) {
remoteApp, err := NewClient(addr, transport, mustConnect)
func NewRemoteCreator(logger log.Logger, addr, transport string, mustConnect bool) Creator {
return func(log.Logger) (Client, error) {
remoteApp, err := NewClient(logger, addr, transport, mustConnect)
if err != nil {
return nil, fmt.Errorf("failed to connect to proxy: %w", err)
}

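// Illustrative sketch (hypothetical usage, assuming the logger-taking Creator
// signature above and the example application in abci/example/kvstore):
//
//	creator := NewLocalCreator(kvstore.NewApplication())
//	client, err := creator(log.NewNopLogger())
//	if err != nil {
//		// handle construction error
//	}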
View File

@@ -2,6 +2,7 @@ package abciclient
import (
"context"
"errors"
"fmt"
"net"
"sync"
@@ -11,7 +12,7 @@ import (
"google.golang.org/grpc/credentials/insecure"
"github.com/tendermint/tendermint/abci/types"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/libs/log"
tmnet "github.com/tendermint/tendermint/libs/net"
"github.com/tendermint/tendermint/libs/service"
)
@@ -19,13 +20,15 @@ import (
// A gRPC client.
type grpcClient struct {
service.BaseService
logger log.Logger
mustConnect bool
client types.ABCIApplicationClient
conn *grpc.ClientConn
chReqRes chan *ReqRes // dispatches "async" responses to callbacks *in order*, needed by mempool
mtx tmsync.Mutex
mtx sync.Mutex
addr string
err error
resCb func(*types.Request, *types.Response) // listens to all callbacks
@@ -43,8 +46,9 @@ var _ Client = (*grpcClient)(nil)
// which is expensive, but easy - if you want something better, use the socket
// protocol! maybe one day, if people really want it, we use grpc streams, but
// hopefully not :D
func NewGRPCClient(addr string, mustConnect bool) Client {
func NewGRPCClient(logger log.Logger, addr string, mustConnect bool) Client {
cli := &grpcClient{
logger: logger,
addr: addr,
mustConnect: mustConnect,
// Buffering the channel is needed to make calls appear asynchronous,
@@ -55,7 +59,7 @@ func NewGRPCClient(addr string, mustConnect bool) Client {
// gRPC calls while processing a slow callback at the channel head.
chReqRes: make(chan *ReqRes, 64),
}
cli.BaseService = *service.NewBaseService(nil, "grpcClient", cli)
cli.BaseService = *service.NewBaseService(logger, "grpcClient", cli)
return cli
}
@@ -63,7 +67,7 @@ func dialerFunc(ctx context.Context, addr string) (net.Conn, error) {
return tmnet.Connect(addr)
}
func (cli *grpcClient) OnStart() error {
func (cli *grpcClient) OnStart(ctx context.Context) error {
// This processes asynchronous request/response messages and dispatches
// them to callbacks.
go func() {
@@ -72,6 +76,7 @@ func (cli *grpcClient) OnStart() error {
cli.mtx.Lock()
defer cli.mtx.Unlock()
reqres.SetDone()
reqres.Done()
// Notify client listener if set
@@ -80,15 +85,24 @@ func (cli *grpcClient) OnStart() error {
}
// Notify reqRes listener if set
reqres.InvokeCallback()
}
for reqres := range cli.chReqRes {
if reqres != nil {
callCb(reqres)
} else {
cli.Logger.Error("Received nil reqres")
if cb := reqres.GetCallback(); cb != nil {
cb(reqres.Response)
}
}
for {
select {
case reqres := <-cli.chReqRes:
if reqres != nil {
callCb(reqres)
} else {
cli.logger.Error("Received nil reqres")
}
case <-ctx.Done():
return
}
}
}()
RETRY_LOOP:
@@ -101,22 +115,26 @@ RETRY_LOOP:
if cli.mustConnect {
return err
}
cli.Logger.Error(fmt.Sprintf("abci.grpcClient failed to connect to %v. Retrying...\n", cli.addr), "err", err)
cli.logger.Error(fmt.Sprintf("abci.grpcClient failed to connect to %v. Retrying...\n", cli.addr), "err", err)
time.Sleep(time.Second * dialRetryIntervalSeconds)
continue RETRY_LOOP
}
cli.Logger.Info("Dialed server. Waiting for echo.", "addr", cli.addr)
cli.logger.Info("Dialed server. Waiting for echo.", "addr", cli.addr)
client := types.NewABCIApplicationClient(conn)
cli.conn = conn
ENSURE_CONNECTED:
for {
_, err := client.Echo(context.Background(), &types.RequestEcho{Message: "hello"}, grpc.WaitForReady(true))
_, err := client.Echo(ctx, &types.RequestEcho{Message: "hello"}, grpc.WaitForReady(true))
if err == nil {
break ENSURE_CONNECTED
}
cli.Logger.Error("Echo failed", "err", err)
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return err
}
cli.logger.Error("Echo failed", "err", err)
time.Sleep(time.Second * echoRetryIntervalSeconds)
}
@@ -143,9 +161,9 @@ func (cli *grpcClient) StopForError(err error) {
}
cli.mtx.Unlock()
cli.Logger.Error(fmt.Sprintf("Stopping abci.grpcClient for error: %v", err.Error()))
cli.logger.Error("Stopping abci.grpcClient for error", "err", err)
if err := cli.Stop(); err != nil {
cli.Logger.Error("Error stopping abci.grpcClient", "err", err)
cli.logger.Error("error stopping abci.grpcClient", "err", err)
}
}
@@ -165,16 +183,6 @@ func (cli *grpcClient) SetResponseCallback(resCb Callback) {
//----------------------------------------
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) EchoAsync(ctx context.Context, msg string) (*ReqRes, error) {
req := types.ToRequestEcho(msg)
res, err := cli.client.Echo(ctx, req.GetEcho(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Echo{Echo: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
req := types.ToRequestFlush()
@@ -185,16 +193,6 @@ func (cli *grpcClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Flush{Flush: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) InfoAsync(ctx context.Context, params types.RequestInfo) (*ReqRes, error) {
req := types.ToRequestInfo(params)
res, err := cli.client.Info(ctx, req.GetInfo(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Info{Info: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) DeliverTxAsync(ctx context.Context, params types.RequestDeliverTx) (*ReqRes, error) {
req := types.ToRequestDeliverTx(params)
@@ -215,106 +213,6 @@ func (cli *grpcClient) CheckTxAsync(ctx context.Context, params types.RequestChe
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_CheckTx{CheckTx: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) QueryAsync(ctx context.Context, params types.RequestQuery) (*ReqRes, error) {
req := types.ToRequestQuery(params)
res, err := cli.client.Query(ctx, req.GetQuery(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Query{Query: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) CommitAsync(ctx context.Context) (*ReqRes, error) {
req := types.ToRequestCommit()
res, err := cli.client.Commit(ctx, req.GetCommit(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Commit{Commit: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) InitChainAsync(ctx context.Context, params types.RequestInitChain) (*ReqRes, error) {
req := types.ToRequestInitChain(params)
res, err := cli.client.InitChain(ctx, req.GetInitChain(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_InitChain{InitChain: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) BeginBlockAsync(ctx context.Context, params types.RequestBeginBlock) (*ReqRes, error) {
req := types.ToRequestBeginBlock(params)
res, err := cli.client.BeginBlock(ctx, req.GetBeginBlock(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_BeginBlock{BeginBlock: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) EndBlockAsync(ctx context.Context, params types.RequestEndBlock) (*ReqRes, error) {
req := types.ToRequestEndBlock(params)
res, err := cli.client.EndBlock(ctx, req.GetEndBlock(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_EndBlock{EndBlock: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) ListSnapshotsAsync(ctx context.Context, params types.RequestListSnapshots) (*ReqRes, error) {
req := types.ToRequestListSnapshots(params)
res, err := cli.client.ListSnapshots(ctx, req.GetListSnapshots(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_ListSnapshots{ListSnapshots: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) OfferSnapshotAsync(ctx context.Context, params types.RequestOfferSnapshot) (*ReqRes, error) {
req := types.ToRequestOfferSnapshot(params)
res, err := cli.client.OfferSnapshot(ctx, req.GetOfferSnapshot(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_OfferSnapshot{OfferSnapshot: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) LoadSnapshotChunkAsync(
ctx context.Context,
params types.RequestLoadSnapshotChunk,
) (*ReqRes, error) {
req := types.ToRequestLoadSnapshotChunk(params)
res, err := cli.client.LoadSnapshotChunk(ctx, req.GetLoadSnapshotChunk(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_LoadSnapshotChunk{LoadSnapshotChunk: res}})
}
// NOTE: call is synchronous, use ctx to break early if needed
func (cli *grpcClient) ApplySnapshotChunkAsync(
ctx context.Context,
params types.RequestApplySnapshotChunk,
) (*ReqRes, error) {
req := types.ToRequestApplySnapshotChunk(params)
res, err := cli.client.ApplySnapshotChunk(ctx, req.GetApplySnapshotChunk(), grpc.WaitForReady(true))
if err != nil {
return nil, err
}
return cli.finishAsyncCall(
ctx,
req,
&types.Response{Value: &types.Response_ApplySnapshotChunk{ApplySnapshotChunk: res}},
)
}
// finishAsyncCall creates a ReqRes for an async call, and immediately populates it
// with the response. We don't complete it until it's been ordered via the channel.
func (cli *grpcClient) finishAsyncCall(ctx context.Context, req *types.Request, res *types.Response) (*ReqRes, error) {
@@ -358,30 +256,22 @@ func (cli *grpcClient) finishSyncCall(reqres *ReqRes) *types.Response {
//----------------------------------------
func (cli *grpcClient) FlushSync(ctx context.Context) error {
return nil
func (cli *grpcClient) Flush(ctx context.Context) error { return nil }
func (cli *grpcClient) Echo(ctx context.Context, msg string) (*types.ResponseEcho, error) {
req := types.ToRequestEcho(msg)
return cli.client.Echo(ctx, req.GetEcho(), grpc.WaitForReady(true))
}
func (cli *grpcClient) EchoSync(ctx context.Context, msg string) (*types.ResponseEcho, error) {
reqres, err := cli.EchoAsync(ctx, msg)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetEcho(), cli.Error()
}
func (cli *grpcClient) InfoSync(
func (cli *grpcClient) Info(
ctx context.Context,
req types.RequestInfo,
params types.RequestInfo,
) (*types.ResponseInfo, error) {
reqres, err := cli.InfoAsync(ctx, req)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetInfo(), cli.Error()
req := types.ToRequestInfo(params)
return cli.client.Info(ctx, req.GetInfo(), grpc.WaitForReady(true))
}
func (cli *grpcClient) DeliverTxSync(
func (cli *grpcClient) DeliverTx(
ctx context.Context,
params types.RequestDeliverTx,
) (*types.ResponseDeliverTx, error) {
@@ -393,7 +283,7 @@ func (cli *grpcClient) DeliverTxSync(
return cli.finishSyncCall(reqres).GetDeliverTx(), cli.Error()
}
func (cli *grpcClient) CheckTxSync(
func (cli *grpcClient) CheckTx(
ctx context.Context,
params types.RequestCheckTx,
) (*types.ResponseCheckTx, error) {
@@ -405,103 +295,76 @@ func (cli *grpcClient) CheckTxSync(
return cli.finishSyncCall(reqres).GetCheckTx(), cli.Error()
}
func (cli *grpcClient) QuerySync(
func (cli *grpcClient) Query(
ctx context.Context,
req types.RequestQuery,
params types.RequestQuery,
) (*types.ResponseQuery, error) {
reqres, err := cli.QueryAsync(ctx, req)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetQuery(), cli.Error()
req := types.ToRequestQuery(params)
return cli.client.Query(ctx, req.GetQuery(), grpc.WaitForReady(true))
}
func (cli *grpcClient) CommitSync(ctx context.Context) (*types.ResponseCommit, error) {
reqres, err := cli.CommitAsync(ctx)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetCommit(), cli.Error()
func (cli *grpcClient) Commit(ctx context.Context) (*types.ResponseCommit, error) {
req := types.ToRequestCommit()
return cli.client.Commit(ctx, req.GetCommit(), grpc.WaitForReady(true))
}
func (cli *grpcClient) InitChainSync(
func (cli *grpcClient) InitChain(
ctx context.Context,
params types.RequestInitChain,
) (*types.ResponseInitChain, error) {
reqres, err := cli.InitChainAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetInitChain(), cli.Error()
req := types.ToRequestInitChain(params)
return cli.client.InitChain(ctx, req.GetInitChain(), grpc.WaitForReady(true))
}
func (cli *grpcClient) BeginBlockSync(
func (cli *grpcClient) BeginBlock(
ctx context.Context,
params types.RequestBeginBlock,
) (*types.ResponseBeginBlock, error) {
reqres, err := cli.BeginBlockAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetBeginBlock(), cli.Error()
req := types.ToRequestBeginBlock(params)
return cli.client.BeginBlock(ctx, req.GetBeginBlock(), grpc.WaitForReady(true))
}
func (cli *grpcClient) EndBlockSync(
func (cli *grpcClient) EndBlock(
ctx context.Context,
params types.RequestEndBlock,
) (*types.ResponseEndBlock, error) {
reqres, err := cli.EndBlockAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetEndBlock(), cli.Error()
req := types.ToRequestEndBlock(params)
return cli.client.EndBlock(ctx, req.GetEndBlock(), grpc.WaitForReady(true))
}
func (cli *grpcClient) ListSnapshotsSync(
func (cli *grpcClient) ListSnapshots(
ctx context.Context,
params types.RequestListSnapshots,
) (*types.ResponseListSnapshots, error) {
reqres, err := cli.ListSnapshotsAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetListSnapshots(), cli.Error()
req := types.ToRequestListSnapshots(params)
return cli.client.ListSnapshots(ctx, req.GetListSnapshots(), grpc.WaitForReady(true))
}
func (cli *grpcClient) OfferSnapshotSync(
func (cli *grpcClient) OfferSnapshot(
ctx context.Context,
params types.RequestOfferSnapshot,
) (*types.ResponseOfferSnapshot, error) {
reqres, err := cli.OfferSnapshotAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetOfferSnapshot(), cli.Error()
req := types.ToRequestOfferSnapshot(params)
return cli.client.OfferSnapshot(ctx, req.GetOfferSnapshot(), grpc.WaitForReady(true))
}
func (cli *grpcClient) LoadSnapshotChunkSync(
func (cli *grpcClient) LoadSnapshotChunk(
ctx context.Context,
params types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
reqres, err := cli.LoadSnapshotChunkAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetLoadSnapshotChunk(), cli.Error()
req := types.ToRequestLoadSnapshotChunk(params)
return cli.client.LoadSnapshotChunk(ctx, req.GetLoadSnapshotChunk(), grpc.WaitForReady(true))
}
func (cli *grpcClient) ApplySnapshotChunkSync(
func (cli *grpcClient) ApplySnapshotChunk(
ctx context.Context,
params types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
reqres, err := cli.ApplySnapshotChunkAsync(ctx, params)
if err != nil {
return nil, err
}
return cli.finishSyncCall(reqres).GetApplySnapshotChunk(), cli.Error()
req := types.ToRequestApplySnapshotChunk(params)
return cli.client.ApplySnapshotChunk(ctx, req.GetApplySnapshotChunk(), grpc.WaitForReady(true))
}


@@ -2,9 +2,10 @@ package abciclient
import (
"context"
"sync"
types "github.com/tendermint/tendermint/abci/types"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
)
@@ -15,7 +16,7 @@ import (
type localClient struct {
service.BaseService
mtx *tmsync.Mutex
mtx *sync.Mutex
types.Application
Callback
}
@@ -26,22 +27,25 @@ var _ Client = (*localClient)(nil)
// methods of the given app.
//
// Both Async and Sync methods ignore the given context.Context parameter.
func NewLocalClient(mtx *tmsync.Mutex, app types.Application) Client {
func NewLocalClient(logger log.Logger, mtx *sync.Mutex, app types.Application) Client {
if mtx == nil {
mtx = new(tmsync.Mutex)
mtx = new(sync.Mutex)
}
cli := &localClient{
mtx: mtx,
Application: app,
}
cli.BaseService = *service.NewBaseService(nil, "localClient", cli)
cli.BaseService = *service.NewBaseService(logger, "localClient", cli)
return cli
}
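
A similarly hedged sketch for the in-process client: the logger is now a constructor argument and the mutex is a plain *sync.Mutex. The no-op types.BaseApplication stands in for a real application; everything else mirrors the signatures shown above.

package main

import (
	"context"

	abciclient "github.com/tendermint/tendermint/abci/client"
	"github.com/tendermint/tendermint/abci/types"
	"github.com/tendermint/tendermint/libs/log"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Passing a nil mutex lets the constructor allocate one; share a *sync.Mutex
	// only when several clients must serialize access to the same Application.
	client := abciclient.NewLocalClient(log.NewNopLogger(), nil, &types.BaseApplication{})

	// OnStart is a no-op for the local client, so Start only marks the
	// service as running.
	if err := client.Start(ctx); err != nil {
		panic(err)
	}

	// Calls go straight into the Application while holding the shared mutex.
	if _, err := client.Info(ctx, types.RequestInfo{}); err != nil {
		panic(err)
	}
}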
func (*localClient) OnStart(context.Context) error { return nil }
func (*localClient) OnStop() {}
func (app *localClient) SetResponseCallback(cb Callback) {
app.mtx.Lock()
defer app.mtx.Unlock()
app.Callback = cb
app.mtx.Unlock()
}
// TODO: change types.Application to include Error()?
@@ -54,27 +58,6 @@ func (app *localClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
return newLocalReqRes(types.ToRequestFlush(), nil), nil
}
func (app *localClient) EchoAsync(ctx context.Context, msg string) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
return app.callback(
types.ToRequestEcho(msg),
types.ToResponseEcho(msg),
), nil
}
func (app *localClient) InfoAsync(ctx context.Context, req types.RequestInfo) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Info(req)
return app.callback(
types.ToRequestInfo(req),
types.ToResponseInfo(res),
), nil
}
func (app *localClient) DeliverTxAsync(ctx context.Context, params types.RequestDeliverTx) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -97,122 +80,17 @@ func (app *localClient) CheckTxAsync(ctx context.Context, req types.RequestCheck
), nil
}
func (app *localClient) QueryAsync(ctx context.Context, req types.RequestQuery) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Query(req)
return app.callback(
types.ToRequestQuery(req),
types.ToResponseQuery(res),
), nil
}
func (app *localClient) CommitAsync(ctx context.Context) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.Commit()
return app.callback(
types.ToRequestCommit(),
types.ToResponseCommit(res),
), nil
}
func (app *localClient) InitChainAsync(ctx context.Context, req types.RequestInitChain) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.InitChain(req)
return app.callback(
types.ToRequestInitChain(req),
types.ToResponseInitChain(res),
), nil
}
func (app *localClient) BeginBlockAsync(ctx context.Context, req types.RequestBeginBlock) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.BeginBlock(req)
return app.callback(
types.ToRequestBeginBlock(req),
types.ToResponseBeginBlock(res),
), nil
}
func (app *localClient) EndBlockAsync(ctx context.Context, req types.RequestEndBlock) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.EndBlock(req)
return app.callback(
types.ToRequestEndBlock(req),
types.ToResponseEndBlock(res),
), nil
}
func (app *localClient) ListSnapshotsAsync(ctx context.Context, req types.RequestListSnapshots) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.ListSnapshots(req)
return app.callback(
types.ToRequestListSnapshots(req),
types.ToResponseListSnapshots(res),
), nil
}
func (app *localClient) OfferSnapshotAsync(ctx context.Context, req types.RequestOfferSnapshot) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.OfferSnapshot(req)
return app.callback(
types.ToRequestOfferSnapshot(req),
types.ToResponseOfferSnapshot(res),
), nil
}
func (app *localClient) LoadSnapshotChunkAsync(
ctx context.Context,
req types.RequestLoadSnapshotChunk,
) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.LoadSnapshotChunk(req)
return app.callback(
types.ToRequestLoadSnapshotChunk(req),
types.ToResponseLoadSnapshotChunk(res),
), nil
}
func (app *localClient) ApplySnapshotChunkAsync(
ctx context.Context,
req types.RequestApplySnapshotChunk,
) (*ReqRes, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
res := app.Application.ApplySnapshotChunk(req)
return app.callback(
types.ToRequestApplySnapshotChunk(req),
types.ToResponseApplySnapshotChunk(res),
), nil
}
//-------------------------------------------------------
func (app *localClient) FlushSync(ctx context.Context) error {
func (app *localClient) Flush(ctx context.Context) error {
return nil
}
func (app *localClient) EchoSync(ctx context.Context, msg string) (*types.ResponseEcho, error) {
func (app *localClient) Echo(ctx context.Context, msg string) (*types.ResponseEcho, error) {
return &types.ResponseEcho{Message: msg}, nil
}
func (app *localClient) InfoSync(ctx context.Context, req types.RequestInfo) (*types.ResponseInfo, error) {
func (app *localClient) Info(ctx context.Context, req types.RequestInfo) (*types.ResponseInfo, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -220,7 +98,7 @@ func (app *localClient) InfoSync(ctx context.Context, req types.RequestInfo) (*t
return &res, nil
}
func (app *localClient) DeliverTxSync(
func (app *localClient) DeliverTx(
ctx context.Context,
req types.RequestDeliverTx,
) (*types.ResponseDeliverTx, error) {
@@ -232,7 +110,7 @@ func (app *localClient) DeliverTxSync(
return &res, nil
}
func (app *localClient) CheckTxSync(
func (app *localClient) CheckTx(
ctx context.Context,
req types.RequestCheckTx,
) (*types.ResponseCheckTx, error) {
@@ -243,7 +121,7 @@ func (app *localClient) CheckTxSync(
return &res, nil
}
func (app *localClient) QuerySync(
func (app *localClient) Query(
ctx context.Context,
req types.RequestQuery,
) (*types.ResponseQuery, error) {
@@ -254,7 +132,7 @@ func (app *localClient) QuerySync(
return &res, nil
}
func (app *localClient) CommitSync(ctx context.Context) (*types.ResponseCommit, error) {
func (app *localClient) Commit(ctx context.Context) (*types.ResponseCommit, error) {
app.mtx.Lock()
defer app.mtx.Unlock()
@@ -262,7 +140,7 @@ func (app *localClient) CommitSync(ctx context.Context) (*types.ResponseCommit,
return &res, nil
}
func (app *localClient) InitChainSync(
func (app *localClient) InitChain(
ctx context.Context,
req types.RequestInitChain,
) (*types.ResponseInitChain, error) {
@@ -274,7 +152,7 @@ func (app *localClient) InitChainSync(
return &res, nil
}
func (app *localClient) BeginBlockSync(
func (app *localClient) BeginBlock(
ctx context.Context,
req types.RequestBeginBlock,
) (*types.ResponseBeginBlock, error) {
@@ -286,7 +164,7 @@ func (app *localClient) BeginBlockSync(
return &res, nil
}
func (app *localClient) EndBlockSync(
func (app *localClient) EndBlock(
ctx context.Context,
req types.RequestEndBlock,
) (*types.ResponseEndBlock, error) {
@@ -298,7 +176,7 @@ func (app *localClient) EndBlockSync(
return &res, nil
}
func (app *localClient) ListSnapshotsSync(
func (app *localClient) ListSnapshots(
ctx context.Context,
req types.RequestListSnapshots,
) (*types.ResponseListSnapshots, error) {
@@ -310,7 +188,7 @@ func (app *localClient) ListSnapshotsSync(
return &res, nil
}
func (app *localClient) OfferSnapshotSync(
func (app *localClient) OfferSnapshot(
ctx context.Context,
req types.RequestOfferSnapshot,
) (*types.ResponseOfferSnapshot, error) {
@@ -322,7 +200,7 @@ func (app *localClient) OfferSnapshotSync(
return &res, nil
}
func (app *localClient) LoadSnapshotChunkSync(
func (app *localClient) LoadSnapshotChunk(
ctx context.Context,
req types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
@@ -333,7 +211,7 @@ func (app *localClient) LoadSnapshotChunkSync(
return &res, nil
}
func (app *localClient) ApplySnapshotChunkSync(
func (app *localClient) ApplySnapshotChunk(
ctx context.Context,
req types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
@@ -348,13 +226,12 @@ func (app *localClient) ApplySnapshotChunkSync(
func (app *localClient) callback(req *types.Request, res *types.Response) *ReqRes {
app.Callback(req, res)
rr := newLocalReqRes(req, res)
rr.callbackInvoked = true
return rr
return newLocalReqRes(req, res)
}
func newLocalReqRes(req *types.Request, res *types.Response) *ReqRes {
reqRes := NewReqRes(req)
reqRes.Response = res
reqRes.SetDone()
return reqRes
}


@@ -7,8 +7,6 @@ import (
abciclient "github.com/tendermint/tendermint/abci/client"
log "github.com/tendermint/tendermint/libs/log"
mock "github.com/stretchr/testify/mock"
types "github.com/tendermint/tendermint/abci/types"
@@ -19,31 +17,8 @@ type Client struct {
mock.Mock
}
// ApplySnapshotChunkAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) ApplySnapshotChunkAsync(_a0 context.Context, _a1 types.RequestApplySnapshotChunk) (*abciclient.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestApplySnapshotChunk) *abciclient.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestApplySnapshotChunk) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// ApplySnapshotChunkSync provides a mock function with given fields: _a0, _a1
func (_m *Client) ApplySnapshotChunkSync(_a0 context.Context, _a1 types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
// ApplySnapshotChunk provides a mock function with given fields: _a0, _a1
func (_m *Client) ApplySnapshotChunk(_a0 context.Context, _a1 types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseApplySnapshotChunk
@@ -65,31 +40,8 @@ func (_m *Client) ApplySnapshotChunkSync(_a0 context.Context, _a1 types.RequestA
return r0, r1
}
// BeginBlockAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) BeginBlockAsync(_a0 context.Context, _a1 types.RequestBeginBlock) (*abciclient.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestBeginBlock) *abciclient.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestBeginBlock) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// BeginBlockSync provides a mock function with given fields: _a0, _a1
func (_m *Client) BeginBlockSync(_a0 context.Context, _a1 types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
// BeginBlock provides a mock function with given fields: _a0, _a1
func (_m *Client) BeginBlock(_a0 context.Context, _a1 types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseBeginBlock
@@ -111,6 +63,29 @@ func (_m *Client) BeginBlockSync(_a0 context.Context, _a1 types.RequestBeginBloc
return r0, r1
}
// CheckTx provides a mock function with given fields: _a0, _a1
func (_m *Client) CheckTx(_a0 context.Context, _a1 types.RequestCheckTx) (*types.ResponseCheckTx, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseCheckTx
if rf, ok := ret.Get(0).(func(context.Context, types.RequestCheckTx) *types.ResponseCheckTx); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseCheckTx)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestCheckTx) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// CheckTxAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) CheckTxAsync(_a0 context.Context, _a1 types.RequestCheckTx) (*abciclient.ReqRes, error) {
ret := _m.Called(_a0, _a1)
@@ -134,54 +109,8 @@ func (_m *Client) CheckTxAsync(_a0 context.Context, _a1 types.RequestCheckTx) (*
return r0, r1
}
// CheckTxSync provides a mock function with given fields: _a0, _a1
func (_m *Client) CheckTxSync(_a0 context.Context, _a1 types.RequestCheckTx) (*types.ResponseCheckTx, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseCheckTx
if rf, ok := ret.Get(0).(func(context.Context, types.RequestCheckTx) *types.ResponseCheckTx); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseCheckTx)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestCheckTx) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// CommitAsync provides a mock function with given fields: _a0
func (_m *Client) CommitAsync(_a0 context.Context) (*abciclient.ReqRes, error) {
ret := _m.Called(_a0)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context) *abciclient.ReqRes); ok {
r0 = rf(_a0)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context) error); ok {
r1 = rf(_a0)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// CommitSync provides a mock function with given fields: _a0
func (_m *Client) CommitSync(_a0 context.Context) (*types.ResponseCommit, error) {
// Commit provides a mock function with given fields: _a0
func (_m *Client) Commit(_a0 context.Context) (*types.ResponseCommit, error) {
ret := _m.Called(_a0)
var r0 *types.ResponseCommit
@@ -203,6 +132,29 @@ func (_m *Client) CommitSync(_a0 context.Context) (*types.ResponseCommit, error)
return r0, r1
}
// DeliverTx provides a mock function with given fields: _a0, _a1
func (_m *Client) DeliverTx(_a0 context.Context, _a1 types.RequestDeliverTx) (*types.ResponseDeliverTx, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseDeliverTx
if rf, ok := ret.Get(0).(func(context.Context, types.RequestDeliverTx) *types.ResponseDeliverTx); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseDeliverTx)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestDeliverTx) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// DeliverTxAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) DeliverTxAsync(_a0 context.Context, _a1 types.RequestDeliverTx) (*abciclient.ReqRes, error) {
ret := _m.Called(_a0, _a1)
@@ -226,54 +178,8 @@ func (_m *Client) DeliverTxAsync(_a0 context.Context, _a1 types.RequestDeliverTx
return r0, r1
}
// DeliverTxSync provides a mock function with given fields: _a0, _a1
func (_m *Client) DeliverTxSync(_a0 context.Context, _a1 types.RequestDeliverTx) (*types.ResponseDeliverTx, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseDeliverTx
if rf, ok := ret.Get(0).(func(context.Context, types.RequestDeliverTx) *types.ResponseDeliverTx); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*types.ResponseDeliverTx)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestDeliverTx) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// EchoAsync provides a mock function with given fields: ctx, msg
func (_m *Client) EchoAsync(ctx context.Context, msg string) (*abciclient.ReqRes, error) {
ret := _m.Called(ctx, msg)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, string) *abciclient.ReqRes); ok {
r0 = rf(ctx, msg)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, string) error); ok {
r1 = rf(ctx, msg)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// EchoSync provides a mock function with given fields: ctx, msg
func (_m *Client) EchoSync(ctx context.Context, msg string) (*types.ResponseEcho, error) {
// Echo provides a mock function with given fields: ctx, msg
func (_m *Client) Echo(ctx context.Context, msg string) (*types.ResponseEcho, error) {
ret := _m.Called(ctx, msg)
var r0 *types.ResponseEcho
@@ -295,31 +201,8 @@ func (_m *Client) EchoSync(ctx context.Context, msg string) (*types.ResponseEcho
return r0, r1
}
// EndBlockAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) EndBlockAsync(_a0 context.Context, _a1 types.RequestEndBlock) (*abciclient.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestEndBlock) *abciclient.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestEndBlock) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// EndBlockSync provides a mock function with given fields: _a0, _a1
func (_m *Client) EndBlockSync(_a0 context.Context, _a1 types.RequestEndBlock) (*types.ResponseEndBlock, error) {
// EndBlock provides a mock function with given fields: _a0, _a1
func (_m *Client) EndBlock(_a0 context.Context, _a1 types.RequestEndBlock) (*types.ResponseEndBlock, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseEndBlock
@@ -355,6 +238,20 @@ func (_m *Client) Error() error {
return r0
}
// Flush provides a mock function with given fields: _a0
func (_m *Client) Flush(_a0 context.Context) error {
ret := _m.Called(_a0)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context) error); ok {
r0 = rf(_a0)
} else {
r0 = ret.Error(0)
}
return r0
}
// FlushAsync provides a mock function with given fields: _a0
func (_m *Client) FlushAsync(_a0 context.Context) (*abciclient.ReqRes, error) {
ret := _m.Called(_a0)
@@ -378,45 +275,8 @@ func (_m *Client) FlushAsync(_a0 context.Context) (*abciclient.ReqRes, error) {
return r0, r1
}
// FlushSync provides a mock function with given fields: _a0
func (_m *Client) FlushSync(_a0 context.Context) error {
ret := _m.Called(_a0)
var r0 error
if rf, ok := ret.Get(0).(func(context.Context) error); ok {
r0 = rf(_a0)
} else {
r0 = ret.Error(0)
}
return r0
}
// InfoAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) InfoAsync(_a0 context.Context, _a1 types.RequestInfo) (*abciclient.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestInfo) *abciclient.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestInfo) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// InfoSync provides a mock function with given fields: _a0, _a1
func (_m *Client) InfoSync(_a0 context.Context, _a1 types.RequestInfo) (*types.ResponseInfo, error) {
// Info provides a mock function with given fields: _a0, _a1
func (_m *Client) Info(_a0 context.Context, _a1 types.RequestInfo) (*types.ResponseInfo, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseInfo
@@ -438,31 +298,8 @@ func (_m *Client) InfoSync(_a0 context.Context, _a1 types.RequestInfo) (*types.R
return r0, r1
}
// InitChainAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) InitChainAsync(_a0 context.Context, _a1 types.RequestInitChain) (*abciclient.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestInitChain) *abciclient.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestInitChain) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// InitChainSync provides a mock function with given fields: _a0, _a1
func (_m *Client) InitChainSync(_a0 context.Context, _a1 types.RequestInitChain) (*types.ResponseInitChain, error) {
// InitChain provides a mock function with given fields: _a0, _a1
func (_m *Client) InitChain(_a0 context.Context, _a1 types.RequestInitChain) (*types.ResponseInitChain, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseInitChain
@@ -498,31 +335,8 @@ func (_m *Client) IsRunning() bool {
return r0
}
// ListSnapshotsAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) ListSnapshotsAsync(_a0 context.Context, _a1 types.RequestListSnapshots) (*abciclient.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestListSnapshots) *abciclient.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestListSnapshots) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// ListSnapshotsSync provides a mock function with given fields: _a0, _a1
func (_m *Client) ListSnapshotsSync(_a0 context.Context, _a1 types.RequestListSnapshots) (*types.ResponseListSnapshots, error) {
// ListSnapshots provides a mock function with given fields: _a0, _a1
func (_m *Client) ListSnapshots(_a0 context.Context, _a1 types.RequestListSnapshots) (*types.ResponseListSnapshots, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseListSnapshots
@@ -544,31 +358,8 @@ func (_m *Client) ListSnapshotsSync(_a0 context.Context, _a1 types.RequestListSn
return r0, r1
}
// LoadSnapshotChunkAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) LoadSnapshotChunkAsync(_a0 context.Context, _a1 types.RequestLoadSnapshotChunk) (*abciclient.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestLoadSnapshotChunk) *abciclient.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestLoadSnapshotChunk) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// LoadSnapshotChunkSync provides a mock function with given fields: _a0, _a1
func (_m *Client) LoadSnapshotChunkSync(_a0 context.Context, _a1 types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
// LoadSnapshotChunk provides a mock function with given fields: _a0, _a1
func (_m *Client) LoadSnapshotChunk(_a0 context.Context, _a1 types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseLoadSnapshotChunk
@@ -590,31 +381,8 @@ func (_m *Client) LoadSnapshotChunkSync(_a0 context.Context, _a1 types.RequestLo
return r0, r1
}
// OfferSnapshotAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) OfferSnapshotAsync(_a0 context.Context, _a1 types.RequestOfferSnapshot) (*abciclient.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestOfferSnapshot) *abciclient.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestOfferSnapshot) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// OfferSnapshotSync provides a mock function with given fields: _a0, _a1
func (_m *Client) OfferSnapshotSync(_a0 context.Context, _a1 types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error) {
// OfferSnapshot provides a mock function with given fields: _a0, _a1
func (_m *Client) OfferSnapshot(_a0 context.Context, _a1 types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseOfferSnapshot
@@ -636,64 +404,8 @@ func (_m *Client) OfferSnapshotSync(_a0 context.Context, _a1 types.RequestOfferS
return r0, r1
}
// OnReset provides a mock function with given fields:
func (_m *Client) OnReset() error {
ret := _m.Called()
var r0 error
if rf, ok := ret.Get(0).(func() error); ok {
r0 = rf()
} else {
r0 = ret.Error(0)
}
return r0
}
// OnStart provides a mock function with given fields:
func (_m *Client) OnStart() error {
ret := _m.Called()
var r0 error
if rf, ok := ret.Get(0).(func() error); ok {
r0 = rf()
} else {
r0 = ret.Error(0)
}
return r0
}
// OnStop provides a mock function with given fields:
func (_m *Client) OnStop() {
_m.Called()
}
// QueryAsync provides a mock function with given fields: _a0, _a1
func (_m *Client) QueryAsync(_a0 context.Context, _a1 types.RequestQuery) (*abciclient.ReqRes, error) {
ret := _m.Called(_a0, _a1)
var r0 *abciclient.ReqRes
if rf, ok := ret.Get(0).(func(context.Context, types.RequestQuery) *abciclient.ReqRes); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*abciclient.ReqRes)
}
}
var r1 error
if rf, ok := ret.Get(1).(func(context.Context, types.RequestQuery) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// QuerySync provides a mock function with given fields: _a0, _a1
func (_m *Client) QuerySync(_a0 context.Context, _a1 types.RequestQuery) (*types.ResponseQuery, error) {
// Query provides a mock function with given fields: _a0, _a1
func (_m *Client) Query(_a0 context.Context, _a1 types.RequestQuery) (*types.ResponseQuery, error) {
ret := _m.Called(_a0, _a1)
var r0 *types.ResponseQuery
@@ -715,67 +427,18 @@ func (_m *Client) QuerySync(_a0 context.Context, _a1 types.RequestQuery) (*types
return r0, r1
}
// Quit provides a mock function with given fields:
func (_m *Client) Quit() <-chan struct{} {
ret := _m.Called()
var r0 <-chan struct{}
if rf, ok := ret.Get(0).(func() <-chan struct{}); ok {
r0 = rf()
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(<-chan struct{})
}
}
return r0
}
// Reset provides a mock function with given fields:
func (_m *Client) Reset() error {
ret := _m.Called()
var r0 error
if rf, ok := ret.Get(0).(func() error); ok {
r0 = rf()
} else {
r0 = ret.Error(0)
}
return r0
}
// SetLogger provides a mock function with given fields: _a0
func (_m *Client) SetLogger(_a0 log.Logger) {
_m.Called(_a0)
}
// SetResponseCallback provides a mock function with given fields: _a0
func (_m *Client) SetResponseCallback(_a0 abciclient.Callback) {
_m.Called(_a0)
}
// Start provides a mock function with given fields:
func (_m *Client) Start() error {
ret := _m.Called()
// Start provides a mock function with given fields: _a0
func (_m *Client) Start(_a0 context.Context) error {
ret := _m.Called(_a0)
var r0 error
if rf, ok := ret.Get(0).(func() error); ok {
r0 = rf()
} else {
r0 = ret.Error(0)
}
return r0
}
// Stop provides a mock function with given fields:
func (_m *Client) Stop() error {
ret := _m.Called()
var r0 error
if rf, ok := ret.Get(0).(func() error); ok {
r0 = rf()
if rf, ok := ret.Get(0).(func(context.Context) error); ok {
r0 = rf(_a0)
} else {
r0 = ret.Error(0)
}
@@ -801,18 +464,3 @@ func (_m *Client) String() string {
func (_m *Client) Wait() {
_m.Called()
}
type mockConstructorTestingTNewClient interface {
mock.TestingT
Cleanup(func())
}
// NewClient creates a new instance of Client. It also registers a testing interface on the mock and a cleanup function to assert the mock's expectations.
func NewClient(t mockConstructorTestingTNewClient) *Client {
mock := &Client{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}


@@ -9,10 +9,11 @@ import (
"io"
"net"
"reflect"
"sync"
"time"
"github.com/tendermint/tendermint/abci/types"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
"github.com/tendermint/tendermint/libs/log"
tmnet "github.com/tendermint/tendermint/libs/net"
"github.com/tendermint/tendermint/libs/service"
)
@@ -32,6 +33,7 @@ type reqResWithContext struct {
// general is not meant to be interfaced with concurrent callers.
type socketClient struct {
service.BaseService
logger log.Logger
addr string
mustConnect bool
@@ -39,7 +41,7 @@ type socketClient struct {
reqQueue chan *reqResWithContext
mtx tmsync.Mutex
mtx sync.Mutex
err error
reqSent *list.List // list of requests sent, waiting for response
resCb func(*types.Request, *types.Response) // called on all requests, if set.
@@ -50,22 +52,22 @@ var _ Client = (*socketClient)(nil)
// NewSocketClient creates a new socket client, which connects to a given
// address. If mustConnect is true, the client will return an error upon start
// if it fails to connect.
func NewSocketClient(addr string, mustConnect bool) Client {
func NewSocketClient(logger log.Logger, addr string, mustConnect bool) Client {
cli := &socketClient{
logger: logger,
reqQueue: make(chan *reqResWithContext, reqQueueSize),
mustConnect: mustConnect,
addr: addr,
reqSent: list.New(),
resCb: nil,
addr: addr,
reqSent: list.New(),
resCb: nil,
}
cli.BaseService = *service.NewBaseService(nil, "socketClient", cli)
cli.BaseService = *service.NewBaseService(logger, "socketClient", cli)
return cli
}
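
And a hedged sketch for the socket client, assuming an ABCI socket server is already listening on the address (for example one created with server.NewServer, as in the test file further below). The transaction bytes and address are illustrative.

package main

import (
	"context"

	abciclient "github.com/tendermint/tendermint/abci/client"
	"github.com/tendermint/tendermint/abci/types"
	"github.com/tendermint/tendermint/libs/log"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// mustConnect=true makes Start fail instead of retrying when the dial fails.
	client := abciclient.NewSocketClient(log.NewNopLogger(), "127.0.0.1:26658", true)
	if err := client.Start(ctx); err != nil {
		panic(err)
	}

	// CheckTxSync is now CheckTx: it queues the request, flushes the connection,
	// and waits for the matching response.
	res, err := client.CheckTx(ctx, types.RequestCheckTx{Tx: []byte("illustrative-tx")})
	if err != nil {
		panic(err)
	}
	_ = res // inspect res.Code, res.Log, etc. in real code
}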
// OnStart implements Service by connecting to the server and spawning reading
// and writing goroutines.
func (cli *socketClient) OnStart() error {
func (cli *socketClient) OnStart(ctx context.Context) error {
var (
err error
conn net.Conn
@@ -77,15 +79,15 @@ func (cli *socketClient) OnStart() error {
if cli.mustConnect {
return err
}
cli.Logger.Error(fmt.Sprintf("abci.socketClient failed to connect to %v. Retrying after %vs...",
cli.logger.Error(fmt.Sprintf("abci.socketClient failed to connect to %v. Retrying after %vs...",
cli.addr, dialRetryIntervalSeconds), "err", err)
time.Sleep(time.Second * dialRetryIntervalSeconds)
continue
}
cli.conn = conn
go cli.sendRequestsRoutine(conn)
go cli.recvResponseRoutine(conn)
go cli.sendRequestsRoutine(ctx, conn)
go cli.recvResponseRoutine(ctx, conn)
return nil
}
@@ -113,19 +115,25 @@ func (cli *socketClient) Error() error {
// NOTE: callback may get internally generated flush responses.
func (cli *socketClient) SetResponseCallback(resCb Callback) {
cli.mtx.Lock()
defer cli.mtx.Unlock()
cli.resCb = resCb
cli.mtx.Unlock()
}
//----------------------------------------
func (cli *socketClient) sendRequestsRoutine(conn io.Writer) {
func (cli *socketClient) sendRequestsRoutine(ctx context.Context, conn io.Writer) {
bw := bufio.NewWriter(conn)
for {
select {
case <-ctx.Done():
return
case reqres := <-cli.reqQueue:
if ctx.Err() != nil {
return
}
if reqres.C.Err() != nil {
cli.Logger.Debug("Request's context is done", "req", reqres.R, "err", reqres.C.Err())
cli.logger.Debug("Request's context is done", "req", reqres.R, "err", reqres.C.Err())
continue
}
cli.willSendReq(reqres.R)
@@ -138,16 +146,16 @@ func (cli *socketClient) sendRequestsRoutine(conn io.Writer) {
cli.stopForError(fmt.Errorf("flush buffer: %w", err))
return
}
case <-cli.Quit():
return
}
}
}
func (cli *socketClient) recvResponseRoutine(conn io.Reader) {
func (cli *socketClient) recvResponseRoutine(ctx context.Context, conn io.Reader) {
r := bufio.NewReader(conn)
for {
if ctx.Err() != nil {
return
}
var res = &types.Response{}
err := types.ReadMessage(r, res)
if err != nil {
@@ -155,7 +163,7 @@ func (cli *socketClient) recvResponseRoutine(conn io.Reader) {
return
}
// cli.Logger.Debug("Received response", "responseType", reflect.TypeOf(res), "response", res)
// cli.logger.Debug("Received response", "responseType", reflect.TypeOf(res), "response", res)
switch r := res.Value.(type) {
case *types.Response_Exception: // app responded with error
@@ -214,18 +222,10 @@ func (cli *socketClient) didRecvResponse(res *types.Response) error {
//----------------------------------------
func (cli *socketClient) EchoAsync(ctx context.Context, msg string) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestEcho(msg))
}
func (cli *socketClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestFlush())
}
func (cli *socketClient) InfoAsync(ctx context.Context, req types.RequestInfo) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestInfo(req))
}
func (cli *socketClient) DeliverTxAsync(ctx context.Context, req types.RequestDeliverTx) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestDeliverTx(req))
}
@@ -234,52 +234,10 @@ func (cli *socketClient) CheckTxAsync(ctx context.Context, req types.RequestChec
return cli.queueRequestAsync(ctx, types.ToRequestCheckTx(req))
}
func (cli *socketClient) QueryAsync(ctx context.Context, req types.RequestQuery) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestQuery(req))
}
func (cli *socketClient) CommitAsync(ctx context.Context) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestCommit())
}
func (cli *socketClient) InitChainAsync(ctx context.Context, req types.RequestInitChain) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestInitChain(req))
}
func (cli *socketClient) BeginBlockAsync(ctx context.Context, req types.RequestBeginBlock) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestBeginBlock(req))
}
func (cli *socketClient) EndBlockAsync(ctx context.Context, req types.RequestEndBlock) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestEndBlock(req))
}
func (cli *socketClient) ListSnapshotsAsync(ctx context.Context, req types.RequestListSnapshots) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestListSnapshots(req))
}
func (cli *socketClient) OfferSnapshotAsync(ctx context.Context, req types.RequestOfferSnapshot) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestOfferSnapshot(req))
}
func (cli *socketClient) LoadSnapshotChunkAsync(
ctx context.Context,
req types.RequestLoadSnapshotChunk,
) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestLoadSnapshotChunk(req))
}
func (cli *socketClient) ApplySnapshotChunkAsync(
ctx context.Context,
req types.RequestApplySnapshotChunk,
) (*ReqRes, error) {
return cli.queueRequestAsync(ctx, types.ToRequestApplySnapshotChunk(req))
}
//----------------------------------------
func (cli *socketClient) FlushSync(ctx context.Context) error {
reqRes, err := cli.queueRequest(ctx, types.ToRequestFlush())
func (cli *socketClient) Flush(ctx context.Context) error {
reqRes, err := cli.queueRequest(ctx, types.ToRequestFlush(), true)
if err != nil {
return queueErr(err)
}
@@ -303,143 +261,143 @@ func (cli *socketClient) FlushSync(ctx context.Context) error {
}
}
func (cli *socketClient) EchoSync(ctx context.Context, msg string) (*types.ResponseEcho, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestEcho(msg))
func (cli *socketClient) Echo(ctx context.Context, msg string) (*types.ResponseEcho, error) {
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestEcho(msg))
if err != nil {
return nil, err
}
return reqres.Response.GetEcho(), nil
}
func (cli *socketClient) InfoSync(
func (cli *socketClient) Info(
ctx context.Context,
req types.RequestInfo,
) (*types.ResponseInfo, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestInfo(req))
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestInfo(req))
if err != nil {
return nil, err
}
return reqres.Response.GetInfo(), nil
}
func (cli *socketClient) DeliverTxSync(
func (cli *socketClient) DeliverTx(
ctx context.Context,
req types.RequestDeliverTx,
) (*types.ResponseDeliverTx, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestDeliverTx(req))
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestDeliverTx(req))
if err != nil {
return nil, err
}
return reqres.Response.GetDeliverTx(), nil
}
func (cli *socketClient) CheckTxSync(
func (cli *socketClient) CheckTx(
ctx context.Context,
req types.RequestCheckTx,
) (*types.ResponseCheckTx, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestCheckTx(req))
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestCheckTx(req))
if err != nil {
return nil, err
}
return reqres.Response.GetCheckTx(), nil
}
func (cli *socketClient) QuerySync(
func (cli *socketClient) Query(
ctx context.Context,
req types.RequestQuery,
) (*types.ResponseQuery, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestQuery(req))
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestQuery(req))
if err != nil {
return nil, err
}
return reqres.Response.GetQuery(), nil
}
func (cli *socketClient) CommitSync(ctx context.Context) (*types.ResponseCommit, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestCommit())
func (cli *socketClient) Commit(ctx context.Context) (*types.ResponseCommit, error) {
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestCommit())
if err != nil {
return nil, err
}
return reqres.Response.GetCommit(), nil
}
func (cli *socketClient) InitChainSync(
func (cli *socketClient) InitChain(
ctx context.Context,
req types.RequestInitChain,
) (*types.ResponseInitChain, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestInitChain(req))
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestInitChain(req))
if err != nil {
return nil, err
}
return reqres.Response.GetInitChain(), nil
}
func (cli *socketClient) BeginBlockSync(
func (cli *socketClient) BeginBlock(
ctx context.Context,
req types.RequestBeginBlock,
) (*types.ResponseBeginBlock, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestBeginBlock(req))
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestBeginBlock(req))
if err != nil {
return nil, err
}
return reqres.Response.GetBeginBlock(), nil
}
func (cli *socketClient) EndBlockSync(
func (cli *socketClient) EndBlock(
ctx context.Context,
req types.RequestEndBlock,
) (*types.ResponseEndBlock, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestEndBlock(req))
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestEndBlock(req))
if err != nil {
return nil, err
}
return reqres.Response.GetEndBlock(), nil
}
func (cli *socketClient) ListSnapshotsSync(
func (cli *socketClient) ListSnapshots(
ctx context.Context,
req types.RequestListSnapshots,
) (*types.ResponseListSnapshots, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestListSnapshots(req))
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestListSnapshots(req))
if err != nil {
return nil, err
}
return reqres.Response.GetListSnapshots(), nil
}
func (cli *socketClient) OfferSnapshotSync(
func (cli *socketClient) OfferSnapshot(
ctx context.Context,
req types.RequestOfferSnapshot,
) (*types.ResponseOfferSnapshot, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestOfferSnapshot(req))
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestOfferSnapshot(req))
if err != nil {
return nil, err
}
return reqres.Response.GetOfferSnapshot(), nil
}
func (cli *socketClient) LoadSnapshotChunkSync(
func (cli *socketClient) LoadSnapshotChunk(
ctx context.Context,
req types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestLoadSnapshotChunk(req))
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestLoadSnapshotChunk(req))
if err != nil {
return nil, err
}
return reqres.Response.GetLoadSnapshotChunk(), nil
}
func (cli *socketClient) ApplySnapshotChunkSync(
func (cli *socketClient) ApplySnapshotChunk(
ctx context.Context,
req types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
reqres, err := cli.queueRequestAndFlushSync(ctx, types.ToRequestApplySnapshotChunk(req))
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestApplySnapshotChunk(req))
if err != nil {
return nil, err
}
@@ -448,22 +406,29 @@ func (cli *socketClient) ApplySnapshotChunkSync(
//----------------------------------------
// queueRequest enqueues req onto the queue. The request can break early if
// the context is canceled. If the queue is full, this method blocks to allow
// the request to be placed onto the queue. This has the effect of creating an
// unbounded queue of goroutines waiting to write to this queue, which is a bit
// antithetical to the purpose of a queue; however, undoing this behavior has
// dangerous implications for upstream callers that rely on it.
// Remove at your peril.
// queueRequest enqueues req onto the queue. If the queue is full, it either
// returns an error (sync=false) or blocks (sync=true).
//
// When sync=true, ctx can be used to break early. When sync=false, ctx will be
// used later to determine whether the request should be dropped (if ctx.Err is
// non-nil).
//
// The caller is responsible for checking cli.Error.
func (cli *socketClient) queueRequest(ctx context.Context, req *types.Request) (*ReqRes, error) {
func (cli *socketClient) queueRequest(ctx context.Context, req *types.Request, sync bool) (*ReqRes, error) {
reqres := NewReqRes(req)
select {
case cli.reqQueue <- &reqResWithContext{R: reqres, C: ctx}:
case <-ctx.Done():
return nil, ctx.Err()
if sync {
select {
case cli.reqQueue <- &reqResWithContext{R: reqres, C: context.Background()}:
case <-ctx.Done():
return nil, ctx.Err()
}
} else {
select {
case cli.reqQueue <- &reqResWithContext{R: reqres, C: ctx}:
default:
return nil, errors.New("buffer is full")
}
}
return reqres, nil
@@ -474,7 +439,7 @@ func (cli *socketClient) queueRequestAsync(
req *types.Request,
) (*ReqRes, error) {
reqres, err := cli.queueRequest(ctx, req)
reqres, err := cli.queueRequest(ctx, req, false)
if err != nil {
return nil, queueErr(err)
}
@@ -482,17 +447,17 @@ func (cli *socketClient) queueRequestAsync(
return reqres, cli.Error()
}
func (cli *socketClient) queueRequestAndFlushSync(
func (cli *socketClient) queueRequestAndFlush(
ctx context.Context,
req *types.Request,
) (*ReqRes, error) {
reqres, err := cli.queueRequest(ctx, req)
reqres, err := cli.queueRequest(ctx, req, true)
if err != nil {
return nil, queueErr(err)
}
if err := cli.FlushSync(ctx); err != nil {
if err := cli.Flush(ctx); err != nil {
return nil, err
}
@@ -575,8 +540,8 @@ func (cli *socketClient) stopForError(err error) {
cli.err = err
cli.mtx.Unlock()
cli.Logger.Info("Stopping abci.socketClient", "reason", err)
cli.logger.Info("Stopping abci.socketClient", "reason", err)
if err := cli.Stop(); err != nil {
cli.Logger.Error("Error stopping abci.socketClient", "err", err)
cli.logger.Error("error stopping abci.socketClient", "err", err)
}
}


@@ -3,7 +3,6 @@ package abciclient_test
import (
"context"
"fmt"
"sync"
"testing"
"time"
@@ -15,36 +14,29 @@ import (
abciclient "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/server"
"github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
)
var ctx = context.Background()
func TestProperSyncCalls(t *testing.T) {
app := slowApp{}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
s, c := setupClientServer(t, app)
t.Cleanup(func() {
if err := s.Stop(); err != nil {
t.Error(err)
}
})
t.Cleanup(func() {
if err := c.Stop(); err != nil {
t.Error(err)
}
})
app := slowApp{}
logger := log.NewNopLogger()
_, c := setupClientServer(ctx, t, logger, app)
resp := make(chan error, 1)
go func() {
// This is BeginBlockSync unrolled....
reqres, err := c.BeginBlockAsync(ctx, types.RequestBeginBlock{})
rsp, err := c.BeginBlock(ctx, types.RequestBeginBlock{})
assert.NoError(t, err)
err = c.FlushSync(context.Background())
assert.NoError(t, err)
res := reqres.Response.GetBeginBlock()
assert.NotNil(t, res)
resp <- c.Error()
assert.NoError(t, c.Flush(ctx))
assert.NotNil(t, rsp)
select {
case <-ctx.Done():
case resp <- c.Error():
}
}()
select {
@@ -56,64 +48,29 @@ func TestProperSyncCalls(t *testing.T) {
}
}
func TestHangingSyncCalls(t *testing.T) {
app := slowApp{}
func setupClientServer(
ctx context.Context,
t *testing.T,
logger log.Logger,
app types.Application,
) (service.Service, abciclient.Client) {
t.Helper()
s, c := setupClientServer(t, app)
t.Cleanup(func() {
if err := s.Stop(); err != nil {
t.Log(err)
}
})
t.Cleanup(func() {
if err := c.Stop(); err != nil {
t.Log(err)
}
})
resp := make(chan error, 1)
go func() {
// Start BeginBlock and flush it
reqres, err := c.BeginBlockAsync(ctx, types.RequestBeginBlock{})
assert.NoError(t, err)
flush, err := c.FlushAsync(ctx)
assert.NoError(t, err)
// wait 20 ms for all events to travel the socket, but
// no response yet from server
time.Sleep(20 * time.Millisecond)
// kill the server, so the connections break
err = s.Stop()
assert.NoError(t, err)
// wait for the response from BeginBlock
reqres.Wait()
flush.Wait()
resp <- c.Error()
}()
select {
case <-time.After(time.Second):
require.Fail(t, "No response arrived")
case err, ok := <-resp:
require.True(t, ok, "Must not close channel")
assert.Error(t, err, "We should get EOF error")
}
}
func setupClientServer(t *testing.T, app types.Application) (
service.Service, abciclient.Client) {
// some port between 20k and 30k
port := 20000 + rand.Int31()%10000
addr := fmt.Sprintf("localhost:%d", port)
s, err := server.NewServer(addr, "socket", app)
require.NoError(t, err)
err = s.Start()
s, err := server.NewServer(logger, addr, "socket", app)
require.NoError(t, err)
require.NoError(t, s.Start(ctx))
t.Cleanup(s.Wait)
c := abciclient.NewSocketClient(addr, true)
err = c.Start()
require.NoError(t, err)
c := abciclient.NewSocketClient(logger, addr, true)
require.NoError(t, c.Start(ctx))
t.Cleanup(c.Wait)
require.True(t, s.IsRunning())
require.True(t, c.IsRunning())
return s, c
}
@@ -126,73 +83,3 @@ func (slowApp) BeginBlock(req types.RequestBeginBlock) types.ResponseBeginBlock
time.Sleep(200 * time.Millisecond)
return types.ResponseBeginBlock{}
}
// TestCallbackInvokedWhenSetLate ensures that the callback is invoked when
// set after the client completes the call into the app. Currently this
// test relies on the callback being allowed to be invoked twice if set multiple
// times, once when set early and once when set late.
func TestCallbackInvokedWhenSetLate(t *testing.T) {
wg := &sync.WaitGroup{}
wg.Add(1)
app := blockedABCIApplication{
wg: wg,
}
_, c := setupClientServer(t, app)
reqRes, err := c.CheckTxAsync(context.Background(), types.RequestCheckTx{})
require.NoError(t, err)
done := make(chan struct{})
cb := func(_ *types.Response) {
close(done)
}
reqRes.SetCallback(cb)
app.wg.Done()
<-done
var called bool
cb = func(_ *types.Response) {
called = true
}
reqRes.SetCallback(cb)
require.True(t, called)
}
type blockedABCIApplication struct {
wg *sync.WaitGroup
types.BaseApplication
}
func (b blockedABCIApplication) CheckTx(r types.RequestCheckTx) types.ResponseCheckTx {
b.wg.Wait()
return b.BaseApplication.CheckTx(r)
}
// TestCallbackInvokedWhenSetEarly ensures that the callback is invoked when
// set before the client completes the call into the app.
func TestCallbackInvokedWhenSetEarly(t *testing.T) {
wg := &sync.WaitGroup{}
wg.Add(1)
app := blockedABCIApplication{
wg: wg,
}
_, c := setupClientServer(t, app)
reqRes, err := c.CheckTxAsync(context.Background(), types.RequestCheckTx{})
require.NoError(t, err)
done := make(chan struct{})
cb := func(_ *types.Response) {
close(done)
}
reqRes.SetCallback(cb)
app.wg.Done()
called := func() bool {
select {
case <-done:
return true
default:
return false
}
}
require.Eventually(t, called, time.Second, time.Millisecond*25)
}
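The hunks above replace the old BeginBlockAsync/FlushSync unrolling with single blocking, context-aware calls. A minimal sketch of the resulting call pattern, relying only on the method shapes visible in this diff (BeginBlock taking a context and a request, Flush taking a context); the package and helper name below are illustrative, not part of the change:

package example

import (
	"context"

	abciclient "github.com/tendermint/tendermint/abci/client"
	"github.com/tendermint/tendermint/abci/types"
)

// callBeginBlock sketches the new blocking flow: issue the request with a
// context, then flush the connection, with no ReqRes or FlushSync bookkeeping.
func callBeginBlock(ctx context.Context, c abciclient.Client) error {
	if _, err := c.BeginBlock(ctx, types.RequestBeginBlock{}); err != nil {
		return err
	}
	// Flush pushes any buffered requests over the socket before returning.
	return c.Flush(ctx)
}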


@@ -2,18 +2,18 @@ package main
import (
"bufio"
"context"
"encoding/hex"
"errors"
"fmt"
"io"
"os"
"os/signal"
"strings"
"syscall"
"github.com/spf13/cobra"
"github.com/tendermint/tendermint/libs/log"
tmos "github.com/tendermint/tendermint/libs/os"
abciclient "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/code"
@@ -29,8 +29,6 @@ import (
var (
client abciclient.Client
logger log.Logger
ctx = context.Background()
)
// flags
@@ -62,17 +60,17 @@ var RootCmd = &cobra.Command{
}
if logger == nil {
logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo, false)
logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
}
if client == nil {
var err error
client, err = abciclient.NewClient(flagAddress, flagAbci, false)
client, err = abciclient.NewClient(logger.With("module", "abci-client"), flagAddress, flagAbci, false)
if err != nil {
return err
}
client.SetLogger(logger.With("module", "abci-client"))
if err := client.Start(); err != nil {
if err := client.Start(cmd.Context()); err != nil {
return err
}
}
@@ -292,23 +290,24 @@ func compose(fs []func() error) error {
}
func cmdTest(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
return compose(
[]func() error{
func() error { return servertest.InitChain(client) },
func() error { return servertest.Commit(client, nil) },
func() error { return servertest.DeliverTx(client, []byte("abc"), code.CodeTypeBadNonce, nil) },
func() error { return servertest.Commit(client, nil) },
func() error { return servertest.DeliverTx(client, []byte{0x00}, code.CodeTypeOK, nil) },
func() error { return servertest.Commit(client, []byte{0, 0, 0, 0, 0, 0, 0, 1}) },
func() error { return servertest.DeliverTx(client, []byte{0x00}, code.CodeTypeBadNonce, nil) },
func() error { return servertest.DeliverTx(client, []byte{0x01}, code.CodeTypeOK, nil) },
func() error { return servertest.DeliverTx(client, []byte{0x00, 0x02}, code.CodeTypeOK, nil) },
func() error { return servertest.DeliverTx(client, []byte{0x00, 0x03}, code.CodeTypeOK, nil) },
func() error { return servertest.DeliverTx(client, []byte{0x00, 0x00, 0x04}, code.CodeTypeOK, nil) },
func() error { return servertest.InitChain(ctx, client) },
func() error { return servertest.Commit(ctx, client, nil) },
func() error { return servertest.DeliverTx(ctx, client, []byte("abc"), code.CodeTypeBadNonce, nil) },
func() error { return servertest.Commit(ctx, client, nil) },
func() error { return servertest.DeliverTx(ctx, client, []byte{0x00}, code.CodeTypeOK, nil) },
func() error { return servertest.Commit(ctx, client, []byte{0, 0, 0, 0, 0, 0, 0, 1}) },
func() error { return servertest.DeliverTx(ctx, client, []byte{0x00}, code.CodeTypeBadNonce, nil) },
func() error { return servertest.DeliverTx(ctx, client, []byte{0x01}, code.CodeTypeOK, nil) },
func() error { return servertest.DeliverTx(ctx, client, []byte{0x00, 0x02}, code.CodeTypeOK, nil) },
func() error { return servertest.DeliverTx(ctx, client, []byte{0x00, 0x03}, code.CodeTypeOK, nil) },
func() error { return servertest.DeliverTx(ctx, client, []byte{0x00, 0x00, 0x04}, code.CodeTypeOK, nil) },
func() error {
return servertest.DeliverTx(client, []byte{0x00, 0x00, 0x06}, code.CodeTypeBadNonce, nil)
return servertest.DeliverTx(ctx, client, []byte{0x00, 0x00, 0x06}, code.CodeTypeBadNonce, nil)
},
func() error { return servertest.Commit(client, []byte{0, 0, 0, 0, 0, 0, 0, 5}) },
func() error { return servertest.Commit(ctx, client, []byte{0, 0, 0, 0, 0, 0, 0, 5}) },
})
}
@@ -443,13 +442,15 @@ func cmdEcho(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
msg = args[0]
}
res, err := client.EchoSync(ctx, msg)
res, err := client.Echo(cmd.Context(), msg)
if err != nil {
return err
}
printResponse(cmd, args, response{
Data: []byte(res.Message),
})
return nil
}
@@ -459,7 +460,7 @@ func cmdInfo(cmd *cobra.Command, args []string) error {
if len(args) == 1 {
version = args[0]
}
res, err := client.InfoSync(ctx, types.RequestInfo{Version: version})
res, err := client.Info(cmd.Context(), types.RequestInfo{Version: version})
if err != nil {
return err
}
@@ -484,7 +485,7 @@ func cmdDeliverTx(cmd *cobra.Command, args []string) error {
if err != nil {
return err
}
res, err := client.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: txBytes})
res, err := client.DeliverTx(cmd.Context(), types.RequestDeliverTx{Tx: txBytes})
if err != nil {
return err
}
@@ -510,7 +511,7 @@ func cmdCheckTx(cmd *cobra.Command, args []string) error {
if err != nil {
return err
}
res, err := client.CheckTxSync(ctx, types.RequestCheckTx{Tx: txBytes})
res, err := client.CheckTx(cmd.Context(), types.RequestCheckTx{Tx: txBytes})
if err != nil {
return err
}
@@ -525,7 +526,7 @@ func cmdCheckTx(cmd *cobra.Command, args []string) error {
// Get application Merkle root hash
func cmdCommit(cmd *cobra.Command, args []string) error {
res, err := client.CommitSync(ctx)
res, err := client.Commit(cmd.Context())
if err != nil {
return err
}
@@ -550,7 +551,7 @@ func cmdQuery(cmd *cobra.Command, args []string) error {
return err
}
resQuery, err := client.QuerySync(ctx, types.RequestQuery{
resQuery, err := client.Query(cmd.Context(), types.RequestQuery{
Data: queryBytes,
Path: flagPath,
Height: int64(flagHeight),
@@ -574,37 +575,32 @@ func cmdQuery(cmd *cobra.Command, args []string) error {
}
func cmdKVStore(cmd *cobra.Command, args []string) error {
logger := log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo, false)
logger := log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
// Create the application - in memory or persisted to disk
var app types.Application
if flagPersist == "" {
app = kvstore.NewApplication()
} else {
app = kvstore.NewPersistentKVStoreApplication(flagPersist)
app.(*kvstore.PersistentKVStoreApplication).SetLogger(logger.With("module", "kvstore"))
app = kvstore.NewPersistentKVStoreApplication(logger, flagPersist)
}
// Start the listener
srv, err := server.NewServer(flagAddress, flagAbci, app)
srv, err := server.NewServer(logger.With("module", "abci-server"), flagAddress, flagAbci, app)
if err != nil {
return err
}
srv.SetLogger(logger.With("module", "abci-server"))
if err := srv.Start(); err != nil {
ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
defer cancel()
if err := srv.Start(ctx); err != nil {
return err
}
// Stop upon receiving SIGTERM or CTRL-C.
tmos.TrapSignal(logger, func() {
// Cleanup
if err := srv.Stop(); err != nil {
logger.Error("Error while stopping server", "err", err)
}
})
// Run forever.
select {}
<-ctx.Done()
return nil
}
//--------------------------------------------------------------------------------
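cmdKVStore above now ties the server's lifetime to a signal-scoped context instead of tmos.TrapSignal plus a blocking select. The same shutdown idiom in isolation, as a small standard-library-only sketch (the function names are illustrative):

package main

import (
	"context"
	"fmt"
	"os/signal"
	"syscall"
)

// runUntilSignal derives a context that is canceled on SIGTERM, starts the
// service with it, and then blocks until the context is done, replacing the
// old TrapSignal callback and "select {}" run-forever idiom.
func runUntilSignal(parent context.Context, start func(context.Context) error) error {
	ctx, cancel := signal.NotifyContext(parent, syscall.SIGTERM)
	defer cancel()

	if err := start(ctx); err != nil {
		return err
	}

	<-ctx.Done()
	return nil
}

func main() {
	err := runUntilSignal(context.Background(), func(ctx context.Context) error {
		fmt.Println("service started")
		return nil
	})
	if err != nil {
		fmt.Println("error:", err)
	}
}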


@@ -30,47 +30,52 @@ func init() {
}
func TestKVStore(t *testing.T) {
fmt.Println("### Testing KVStore")
testStream(t, kvstore.NewApplication())
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
logger := log.NewTestingLogger(t)
logger.Info("### Testing KVStore")
testStream(ctx, t, logger, kvstore.NewApplication())
}
func TestBaseApp(t *testing.T) {
fmt.Println("### Testing BaseApp")
testStream(t, types.NewBaseApplication())
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
logger := log.NewTestingLogger(t)
logger.Info("### Testing BaseApp")
testStream(ctx, t, logger, types.NewBaseApplication())
}
func TestGRPC(t *testing.T) {
fmt.Println("### Testing GRPC")
testGRPCSync(t, types.NewGRPCApplication(types.NewBaseApplication()))
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
logger := log.NewTestingLogger(t)
logger.Info("### Testing GRPC")
testGRPCSync(ctx, t, logger, types.NewGRPCApplication(types.NewBaseApplication()))
}
func testStream(t *testing.T, app types.Application) {
func testStream(ctx context.Context, t *testing.T, logger log.Logger, app types.Application) {
t.Helper()
const numDeliverTxs = 20000
socketFile := fmt.Sprintf("test-%08x.sock", rand.Int31n(1<<30))
defer os.Remove(socketFile)
socket := fmt.Sprintf("unix://%v", socketFile)
// Start the listener
server := abciserver.NewSocketServer(socket, app)
server.SetLogger(log.TestingLogger().With("module", "abci-server"))
err := server.Start()
server := abciserver.NewSocketServer(logger.With("module", "abci-server"), socket, app)
t.Cleanup(server.Wait)
err := server.Start(ctx)
require.NoError(t, err)
t.Cleanup(func() {
if err := server.Stop(); err != nil {
t.Error(err)
}
})
// Connect to the socket
client := abciclient.NewSocketClient(socket, false)
client.SetLogger(log.TestingLogger().With("module", "abci-client"))
err = client.Start()
client := abciclient.NewSocketClient(logger.With("module", "abci-client"), socket, false)
t.Cleanup(client.Wait)
err = client.Start(ctx)
require.NoError(t, err)
t.Cleanup(func() {
if err := client.Stop(); err != nil {
t.Error(err)
}
})
done := make(chan struct{})
counter := 0
@@ -99,8 +104,6 @@ func testStream(t *testing.T, app types.Application) {
}
})
ctx := context.Background()
// Write requests
for counter := 0; counter < numDeliverTxs; counter++ {
// Send request
@@ -109,7 +112,7 @@ func testStream(t *testing.T, app types.Application) {
// Sometimes send flush messages
if counter%128 == 0 {
err = client.FlushSync(context.Background())
err = client.Flush(ctx)
require.NoError(t, err)
}
}
@@ -128,33 +131,25 @@ func dialerFunc(ctx context.Context, addr string) (net.Conn, error) {
return tmnet.Connect(addr)
}
func testGRPCSync(t *testing.T, app types.ABCIApplicationServer) {
func testGRPCSync(ctx context.Context, t *testing.T, logger log.Logger, app types.ABCIApplicationServer) {
t.Helper()
numDeliverTxs := 2000
socketFile := fmt.Sprintf("/tmp/test-%08x.sock", rand.Int31n(1<<30))
defer os.Remove(socketFile)
socket := fmt.Sprintf("unix://%v", socketFile)
// Start the listener
server := abciserver.NewGRPCServer(socket, app)
server.SetLogger(log.TestingLogger().With("module", "abci-server"))
if err := server.Start(); err != nil {
t.Fatalf("Error starting GRPC server: %v", err.Error())
}
server := abciserver.NewGRPCServer(logger.With("module", "abci-server"), socket, app)
t.Cleanup(func() {
if err := server.Stop(); err != nil {
t.Error(err)
}
})
require.NoError(t, server.Start(ctx))
t.Cleanup(func() { server.Wait() })
// Connect to the socket
conn, err := grpc.Dial(socket,
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithContextDialer(dialerFunc),
)
if err != nil {
t.Fatalf("Error dialing GRPC server: %v", err.Error())
}
require.NoError(t, err, "Error dialing GRPC server")
t.Cleanup(func() {
if err := conn.Close(); err != nil {
@@ -167,10 +162,9 @@ func testGRPCSync(t *testing.T, app types.ABCIApplicationServer) {
// Write requests
for counter := 0; counter < numDeliverTxs; counter++ {
// Send request
response, err := client.DeliverTx(context.Background(), &types.RequestDeliverTx{Tx: []byte("test")})
if err != nil {
t.Fatalf("Error in GRPC DeliverTx: %v", err.Error())
}
response, err := client.DeliverTx(ctx, &types.RequestDeliverTx{Tx: []byte("test")})
require.NoError(t, err, "Error in GRPC DeliverTx")
counter++
if response.Code != code.CodeTypeOK {
t.Error("DeliverTx failed with ret_code", response.Code)


@@ -3,7 +3,7 @@ package kvstore
import (
"context"
"fmt"
"io/ioutil"
"os"
"sort"
"testing"
@@ -24,8 +24,6 @@ const (
testValue = "def"
)
var ctx = context.Background()
func testKVStore(t *testing.T, app types.Application, tx []byte, key, value string) {
req := types.RequestDeliverTx{Tx: tx}
ar := app.DeliverTx(req)
@@ -74,11 +72,13 @@ func TestKVStoreKV(t *testing.T) {
}
func TestPersistentKVStoreKV(t *testing.T) {
dir, err := ioutil.TempDir("/tmp", "abci-kvstore-test") // TODO
dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
if err != nil {
t.Fatal(err)
}
kvstore := NewPersistentKVStoreApplication(dir)
logger := log.NewTestingLogger(t)
kvstore := NewPersistentKVStoreApplication(logger, dir)
key := testKey
value := key
tx := []byte(key)
@@ -90,11 +90,13 @@ func TestPersistentKVStoreKV(t *testing.T) {
}
func TestPersistentKVStoreInfo(t *testing.T) {
dir, err := ioutil.TempDir("/tmp", "abci-kvstore-test") // TODO
dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
if err != nil {
t.Fatal(err)
}
kvstore := NewPersistentKVStoreApplication(dir)
logger := log.NewTestingLogger(t)
kvstore := NewPersistentKVStoreApplication(logger, dir)
InitKVStore(kvstore)
height := int64(0)
@@ -122,11 +124,13 @@ func TestPersistentKVStoreInfo(t *testing.T) {
// add a validator, remove a validator, update a validator
func TestValUpdates(t *testing.T) {
dir, err := ioutil.TempDir("/tmp", "abci-kvstore-test") // TODO
dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
if err != nil {
t.Fatal(err)
}
kvstore := NewPersistentKVStoreApplication(dir)
logger := log.NewTestingLogger(t)
kvstore := NewPersistentKVStoreApplication(logger, dir)
// init with some validators
total := 10
@@ -165,7 +169,7 @@ func TestValUpdates(t *testing.T) {
makeApplyBlock(t, kvstore, 2, diff, tx1, tx2, tx3)
vals1 = append(vals[:nInit-2], vals[nInit+1])
vals1 = append(vals[:nInit-2], vals[nInit+1]) // nolint: gocritic
vals2 = kvstore.Validators()
valsEqual(t, vals1, vals2)
@@ -229,136 +233,136 @@ func valsEqual(t *testing.T, vals1, vals2 []types.ValidatorUpdate) {
}
}
func makeSocketClientServer(app types.Application, name string) (abciclient.Client, service.Service, error) {
func makeSocketClientServer(
ctx context.Context,
t *testing.T,
logger log.Logger,
app types.Application,
name string,
) (abciclient.Client, service.Service, error) {
ctx, cancel := context.WithCancel(ctx)
t.Cleanup(cancel)
// Start the listener
socket := fmt.Sprintf("unix://%s.sock", name)
logger := log.TestingLogger()
server := abciserver.NewSocketServer(socket, app)
server.SetLogger(logger.With("module", "abci-server"))
if err := server.Start(); err != nil {
server := abciserver.NewSocketServer(logger.With("module", "abci-server"), socket, app)
if err := server.Start(ctx); err != nil {
cancel()
return nil, nil, err
}
// Connect to the socket
client := abciclient.NewSocketClient(socket, false)
client.SetLogger(logger.With("module", "abci-client"))
if err := client.Start(); err != nil {
if err = server.Stop(); err != nil {
return nil, nil, err
}
client := abciclient.NewSocketClient(logger.With("module", "abci-client"), socket, false)
if err := client.Start(ctx); err != nil {
cancel()
return nil, nil, err
}
return client, server, nil
}
func makeGRPCClientServer(app types.Application, name string) (abciclient.Client, service.Service, error) {
func makeGRPCClientServer(
ctx context.Context,
t *testing.T,
logger log.Logger,
app types.Application,
name string,
) (abciclient.Client, service.Service, error) {
ctx, cancel := context.WithCancel(ctx)
t.Cleanup(cancel)
// Start the listener
socket := fmt.Sprintf("unix://%s.sock", name)
logger := log.TestingLogger()
gapp := types.NewGRPCApplication(app)
server := abciserver.NewGRPCServer(socket, gapp)
server.SetLogger(logger.With("module", "abci-server"))
if err := server.Start(); err != nil {
server := abciserver.NewGRPCServer(logger.With("module", "abci-server"), socket, gapp)
if err := server.Start(ctx); err != nil {
cancel()
return nil, nil, err
}
client := abciclient.NewGRPCClient(socket, true)
client.SetLogger(logger.With("module", "abci-client"))
if err := client.Start(); err != nil {
if err := server.Stop(); err != nil {
return nil, nil, err
}
client := abciclient.NewGRPCClient(logger.With("module", "abci-client"), socket, true)
if err := client.Start(ctx); err != nil {
cancel()
return nil, nil, err
}
return client, server, nil
}
func TestClientServer(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
logger := log.NewTestingLogger(t)
// set up socket app
kvstore := NewApplication()
client, server, err := makeSocketClientServer(kvstore, "kvstore-socket")
client, server, err := makeSocketClientServer(ctx, t, logger, kvstore, "kvstore-socket")
require.NoError(t, err)
t.Cleanup(func() {
if err := server.Stop(); err != nil {
t.Error(err)
}
})
t.Cleanup(func() {
if err := client.Stop(); err != nil {
t.Error(err)
}
})
t.Cleanup(func() { cancel(); server.Wait() })
t.Cleanup(func() { cancel(); client.Wait() })
runClientTests(t, client)
runClientTests(ctx, t, client)
// set up grpc app
kvstore = NewApplication()
gclient, gserver, err := makeGRPCClientServer(kvstore, "/tmp/kvstore-grpc")
gclient, gserver, err := makeGRPCClientServer(ctx, t, logger, kvstore, "/tmp/kvstore-grpc")
require.NoError(t, err)
t.Cleanup(func() {
if err := gserver.Stop(); err != nil {
t.Error(err)
}
})
t.Cleanup(func() {
if err := gclient.Stop(); err != nil {
t.Error(err)
}
})
t.Cleanup(func() { cancel(); gserver.Wait() })
t.Cleanup(func() { cancel(); gclient.Wait() })
runClientTests(t, gclient)
runClientTests(ctx, t, gclient)
}
func runClientTests(t *testing.T, client abciclient.Client) {
func runClientTests(ctx context.Context, t *testing.T, client abciclient.Client) {
// run some tests....
key := testKey
value := key
tx := []byte(key)
testClient(t, client, tx, key, value)
testClient(ctx, t, client, tx, key, value)
value = testValue
tx = []byte(key + "=" + value)
testClient(t, client, tx, key, value)
testClient(ctx, t, client, tx, key, value)
}
func testClient(t *testing.T, app abciclient.Client, tx []byte, key, value string) {
ar, err := app.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: tx})
func testClient(ctx context.Context, t *testing.T, app abciclient.Client, tx []byte, key, value string) {
ar, err := app.DeliverTx(ctx, types.RequestDeliverTx{Tx: tx})
require.NoError(t, err)
require.False(t, ar.IsErr(), ar)
// repeating tx doesn't raise error
ar, err = app.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: tx})
ar, err = app.DeliverTx(ctx, types.RequestDeliverTx{Tx: tx})
require.NoError(t, err)
require.False(t, ar.IsErr(), ar)
// commit
_, err = app.CommitSync(ctx)
_, err = app.Commit(ctx)
require.NoError(t, err)
info, err := app.InfoSync(ctx, types.RequestInfo{})
info, err := app.Info(ctx, types.RequestInfo{})
require.NoError(t, err)
require.NotZero(t, info.LastBlockHeight)
// make sure query is fine
resQuery, err := app.QuerySync(ctx, types.RequestQuery{
resQuery, err := app.Query(ctx, types.RequestQuery{
Path: "/store",
Data: []byte(key),
})
require.Nil(t, err)
require.NoError(t, err)
require.Equal(t, code.CodeTypeOK, resQuery.Code)
require.Equal(t, key, string(resQuery.Key))
require.Equal(t, value, string(resQuery.Value))
require.EqualValues(t, info.LastBlockHeight, resQuery.Height)
// make sure proof is fine
resQuery, err = app.QuerySync(ctx, types.RequestQuery{
resQuery, err = app.Query(ctx, types.RequestQuery{
Path: "/store",
Data: []byte(key),
Prove: true,
})
require.Nil(t, err)
require.NoError(t, err)
require.Equal(t, code.CodeTypeOK, resQuery.Code)
require.Equal(t, key, string(resQuery.Key))
require.Equal(t, value, string(resQuery.Value))


@@ -35,7 +35,7 @@ type PersistentKVStoreApplication struct {
logger log.Logger
}
func NewPersistentKVStoreApplication(dbDir string) *PersistentKVStoreApplication {
func NewPersistentKVStoreApplication(logger log.Logger, dbDir string) *PersistentKVStoreApplication {
name := "kvstore"
db, err := dbm.NewGoLevelDB(name, dbDir)
if err != nil {
@@ -47,7 +47,7 @@ func NewPersistentKVStoreApplication(dbDir string) *PersistentKVStoreApplication
return &PersistentKVStoreApplication{
app: &Application{state: state},
valAddrToPubKeyMap: make(map[string]cryptoproto.PublicKey),
logger: log.NewNopLogger(),
logger: logger,
}
}
@@ -55,10 +55,6 @@ func (app *PersistentKVStoreApplication) Close() error {
return app.app.state.db.Close()
}
func (app *PersistentKVStoreApplication) SetLogger(l log.Logger) {
app.logger = l
}
func (app *PersistentKVStoreApplication) Info(req types.RequestInfo) types.ResponseInfo {
res := app.app.Info(req)
res.LastBlockHeight = app.app.state.Height
@@ -113,7 +109,7 @@ func (app *PersistentKVStoreApplication) InitChain(req types.RequestInitChain) t
for _, v := range req.Validators {
r := app.updateValidator(v)
if r.IsErr() {
app.logger.Error("Error updating validators", "r", r)
app.logger.Error("error updating validators", "r", r)
}
}
return types.ResponseInitChain{}
@@ -271,7 +267,7 @@ func (app *PersistentKVStoreApplication) updateValidator(v types.ValidatorUpdate
if err := types.WriteMessage(&v, value); err != nil {
return types.ResponseDeliverTx{
Code: code.CodeTypeEncodingError,
Log: fmt.Sprintf("Error encoding validator: %v", err)}
Log: fmt.Sprintf("error encoding validator: %v", err)}
}
if err = app.app.state.db.Set(key, value.Bytes()); err != nil {
panic(err)


@@ -1,17 +1,20 @@
package server
import (
"context"
"net"
"google.golang.org/grpc"
"github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/libs/log"
tmnet "github.com/tendermint/tendermint/libs/net"
"github.com/tendermint/tendermint/libs/service"
)
type GRPCServer struct {
service.BaseService
logger log.Logger
proto string
addr string
@@ -22,20 +25,21 @@ type GRPCServer struct {
}
// NewGRPCServer returns a new gRPC ABCI server
func NewGRPCServer(protoAddr string, app types.ABCIApplicationServer) service.Service {
func NewGRPCServer(logger log.Logger, protoAddr string, app types.ABCIApplicationServer) service.Service {
proto, addr := tmnet.ProtocolAndAddress(protoAddr)
s := &GRPCServer{
logger: logger,
proto: proto,
addr: addr,
listener: nil,
app: app,
}
s.BaseService = *service.NewBaseService(nil, "ABCIServer", s)
s.BaseService = *service.NewBaseService(logger, "ABCIServer", s)
return s
}
// OnStart starts the gRPC service.
func (s *GRPCServer) OnStart() error {
func (s *GRPCServer) OnStart(ctx context.Context) error {
ln, err := net.Listen(s.proto, s.addr)
if err != nil {
@@ -46,10 +50,15 @@ func (s *GRPCServer) OnStart() error {
s.server = grpc.NewServer()
types.RegisterABCIApplicationServer(s.server, s.app)
s.Logger.Info("Listening", "proto", s.proto, "addr", s.addr)
s.logger.Info("Listening", "proto", s.proto, "addr", s.addr)
go func() {
go func() {
<-ctx.Done()
s.server.GracefulStop()
}()
if err := s.server.Serve(s.listener); err != nil {
s.Logger.Error("Error serving gRPC server", "err", err)
s.logger.Error("error serving gRPC server", "err", err)
}
}()
return nil


@@ -12,17 +12,18 @@ import (
"fmt"
"github.com/tendermint/tendermint/abci/types"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
)
func NewServer(protoAddr, transport string, app types.Application) (service.Service, error) {
func NewServer(logger log.Logger, protoAddr, transport string, app types.Application) (service.Service, error) {
var s service.Service
var err error
switch transport {
case "socket":
s = NewSocketServer(protoAddr, app)
s = NewSocketServer(logger, protoAddr, app)
case "grpc":
s = NewGRPCServer(protoAddr, types.NewGRPCApplication(app))
s = NewGRPCServer(logger, protoAddr, types.NewGRPCApplication(app))
default:
err = fmt.Errorf("unknown server type %s", transport)
}


@@ -2,15 +2,15 @@ package server
import (
"bufio"
"context"
"fmt"
"io"
"net"
"os"
"runtime"
"sync"
"github.com/tendermint/tendermint/abci/types"
tmsync "github.com/tendermint/tendermint/internal/libs/sync"
tmlog "github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/log"
tmnet "github.com/tendermint/tendermint/libs/net"
"github.com/tendermint/tendermint/libs/service"
)
@@ -19,61 +19,58 @@ import (
type SocketServer struct {
service.BaseService
isLoggerSet bool
logger log.Logger
proto string
addr string
listener net.Listener
connsMtx tmsync.Mutex
connsMtx sync.Mutex
conns map[int]net.Conn
nextConnID int
appMtx tmsync.Mutex
appMtx sync.Mutex
app types.Application
}
func NewSocketServer(protoAddr string, app types.Application) service.Service {
func NewSocketServer(logger log.Logger, protoAddr string, app types.Application) service.Service {
proto, addr := tmnet.ProtocolAndAddress(protoAddr)
s := &SocketServer{
logger: logger,
proto: proto,
addr: addr,
listener: nil,
app: app,
conns: make(map[int]net.Conn),
}
s.BaseService = *service.NewBaseService(nil, "ABCIServer", s)
s.BaseService = *service.NewBaseService(logger, "ABCIServer", s)
return s
}
func (s *SocketServer) SetLogger(l tmlog.Logger) {
s.BaseService.SetLogger(l)
s.isLoggerSet = true
}
func (s *SocketServer) OnStart() error {
func (s *SocketServer) OnStart(ctx context.Context) error {
ln, err := net.Listen(s.proto, s.addr)
if err != nil {
return err
}
s.listener = ln
go s.acceptConnectionsRoutine()
go s.acceptConnectionsRoutine(ctx)
return nil
}
func (s *SocketServer) OnStop() {
if err := s.listener.Close(); err != nil {
s.Logger.Error("Error closing listener", "err", err)
s.logger.Error("error closing listener", "err", err)
}
s.connsMtx.Lock()
defer s.connsMtx.Unlock()
for id, conn := range s.conns {
delete(s.conns, id)
if err := conn.Close(); err != nil {
s.Logger.Error("Error closing connection", "id", id, "conn", conn, "err", err)
s.logger.Error("error closing connection", "id", id, "conn", conn, "err", err)
}
}
}
@@ -103,20 +100,25 @@ func (s *SocketServer) rmConn(connID int) error {
return conn.Close()
}
func (s *SocketServer) acceptConnectionsRoutine() {
func (s *SocketServer) acceptConnectionsRoutine(ctx context.Context) {
for {
if ctx.Err() != nil {
return
}
// Accept a connection
s.Logger.Info("Waiting for new connection...")
s.logger.Info("Waiting for new connection...")
conn, err := s.listener.Accept()
if err != nil {
if !s.IsRunning() {
return // Ignore error from listener closing.
}
s.Logger.Error("Failed to accept connection", "err", err)
s.logger.Error("Failed to accept connection", "err", err)
continue
}
s.Logger.Info("Accepted a new connection")
s.logger.Info("Accepted a new connection")
connID := s.addConn(conn)
@@ -124,35 +126,46 @@ func (s *SocketServer) acceptConnectionsRoutine() {
responses := make(chan *types.Response, 1000) // A channel to buffer responses
// Read requests from conn and deal with them
go s.handleRequests(closeConn, conn, responses)
go s.handleRequests(ctx, closeConn, conn, responses)
// Pull responses from 'responses' and write them to conn.
go s.handleResponses(closeConn, conn, responses)
go s.handleResponses(ctx, closeConn, conn, responses)
// Wait until signal to close connection
go s.waitForClose(closeConn, connID)
go s.waitForClose(ctx, closeConn, connID)
}
}
func (s *SocketServer) waitForClose(closeConn chan error, connID int) {
err := <-closeConn
switch {
case err == io.EOF:
s.Logger.Error("Connection was closed by client")
case err != nil:
s.Logger.Error("Connection error", "err", err)
default:
// never happens
s.Logger.Error("Connection was closed")
}
func (s *SocketServer) waitForClose(ctx context.Context, closeConn chan error, connID int) {
defer func() {
// Close the connection
if err := s.rmConn(connID); err != nil {
s.logger.Error("error closing connection", "err", err)
}
}()
// Close the connection
if err := s.rmConn(connID); err != nil {
s.Logger.Error("Error closing connection", "err", err)
select {
case <-ctx.Done():
return
case err := <-closeConn:
switch {
case err == io.EOF:
s.logger.Error("Connection was closed by client")
case err != nil:
s.logger.Error("Connection error", "err", err)
default:
// never happens
s.logger.Error("Connection was closed")
}
}
}
// Read requests from conn and deal with them
func (s *SocketServer) handleRequests(closeConn chan error, conn io.Reader, responses chan<- *types.Response) {
func (s *SocketServer) handleRequests(
ctx context.Context,
closeConn chan error,
conn io.Reader,
responses chan<- *types.Response,
) {
var count int
var bufReader = bufio.NewReader(conn)
@@ -164,15 +177,15 @@ func (s *SocketServer) handleRequests(closeConn chan error, conn io.Reader, resp
buf := make([]byte, size)
buf = buf[:runtime.Stack(buf, false)]
err := fmt.Errorf("recovered from panic: %v\n%s", r, buf)
if !s.isLoggerSet {
fmt.Fprintln(os.Stderr, err)
}
closeConn <- err
s.appMtx.Unlock()
}
}()
for {
if ctx.Err() != nil {
return
}
var req = &types.Request{}
err := types.ReadMessage(bufReader, req)
@@ -239,16 +252,33 @@ func (s *SocketServer) handleRequest(req *types.Request, responses chan<- *types
}
// Pull responses from 'responses' and write them to conn.
func (s *SocketServer) handleResponses(closeConn chan error, conn io.Writer, responses <-chan *types.Response) {
func (s *SocketServer) handleResponses(
ctx context.Context,
closeConn chan error,
conn io.Writer,
responses <-chan *types.Response,
) {
bw := bufio.NewWriter(conn)
for res := range responses {
if err := types.WriteMessage(res, bw); err != nil {
closeConn <- fmt.Errorf("error writing message: %w", err)
return
}
if err := bw.Flush(); err != nil {
closeConn <- fmt.Errorf("error flushing write buffer: %w", err)
for {
select {
case <-ctx.Done():
return
case res := <-responses:
if err := types.WriteMessage(res, bw); err != nil {
select {
case <-ctx.Done():
case closeConn <- fmt.Errorf("error writing message: %w", err):
}
return
}
if err := bw.Flush(); err != nil {
select {
case <-ctx.Done():
case closeConn <- fmt.Errorf("error flushing write buffer: %w", err):
}
return
}
}
}
}
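handleResponses and the other per-connection goroutines now race every blocking channel operation against ctx.Done, so a canceled context cannot strand a goroutine on a send that nobody will ever receive. The guarded-send idiom on its own, as a minimal sketch (trySend is an illustrative name, not part of the change):

package server

import "context"

// trySend attempts to deliver err on errCh, giving up if the context is
// canceled first. It reports whether the send actually happened.
func trySend(ctx context.Context, errCh chan<- error, err error) bool {
	select {
	case <-ctx.Done():
		return false
	case errCh <- err:
		return true
	}
}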


@@ -1,27 +1,40 @@
package tests
import (
"context"
"testing"
"github.com/fortytw2/leaktest"
"github.com/stretchr/testify/assert"
abciclientent "github.com/tendermint/tendermint/abci/client"
"github.com/tendermint/tendermint/abci/example/kvstore"
abciserver "github.com/tendermint/tendermint/abci/server"
"github.com/tendermint/tendermint/libs/log"
)
func TestClientServerNoAddrPrefix(t *testing.T) {
addr := "localhost:26658"
transport := "socket"
t.Cleanup(leaktest.Check(t))
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
const (
addr = "localhost:26658"
transport = "socket"
)
app := kvstore.NewApplication()
logger := log.NewTestingLogger(t)
server, err := abciserver.NewServer(addr, transport, app)
server, err := abciserver.NewServer(logger, addr, transport, app)
assert.NoError(t, err, "expected no error on NewServer")
err = server.Start()
err = server.Start(ctx)
assert.NoError(t, err, "expected no error on server.Start")
t.Cleanup(server.Wait)
client, err := abciclientent.NewClient(addr, transport, true)
client, err := abciclientent.NewClient(logger, addr, transport, true)
assert.NoError(t, err, "expected no error on NewClient")
err = client.Start()
err = client.Start(ctx)
assert.NoError(t, err, "expected no error on client.Start")
t.Cleanup(client.Wait)
}


@@ -12,9 +12,7 @@ import (
tmrand "github.com/tendermint/tendermint/libs/rand"
)
var ctx = context.Background()
func InitChain(client abciclient.Client) error {
func InitChain(ctx context.Context, client abciclient.Client) error {
total := 10
vals := make([]types.ValidatorUpdate, total)
for i := 0; i < total; i++ {
@@ -23,7 +21,7 @@ func InitChain(client abciclient.Client) error {
power := mrand.Int()
vals[i] = types.UpdateValidator(pubkey, int64(power), "")
}
_, err := client.InitChainSync(ctx, types.RequestInitChain{
_, err := client.InitChain(ctx, types.RequestInitChain{
Validators: vals,
})
if err != nil {
@@ -34,8 +32,8 @@ func InitChain(client abciclient.Client) error {
return nil
}
func Commit(client abciclient.Client, hashExp []byte) error {
res, err := client.CommitSync(ctx)
func Commit(ctx context.Context, client abciclient.Client, hashExp []byte) error {
res, err := client.Commit(ctx)
data := res.Data
if err != nil {
fmt.Println("Failed test: Commit")
@@ -51,8 +49,8 @@ func Commit(client abciclient.Client, hashExp []byte) error {
return nil
}
func DeliverTx(client abciclient.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
res, _ := client.DeliverTxSync(ctx, types.RequestDeliverTx{Tx: txBytes})
func DeliverTx(ctx context.Context, client abciclient.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
res, _ := client.DeliverTx(ctx, types.RequestDeliverTx{Tx: txBytes})
code, data, log := res.Code, res.Data, res.Log
if code != codeExp {
fmt.Println("Failed test: DeliverTx")
@@ -70,8 +68,8 @@ func DeliverTx(client abciclient.Client, txBytes []byte, codeExp uint32, dataExp
return nil
}
func CheckTx(client abciclient.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
res, _ := client.CheckTxSync(ctx, types.RequestCheckTx{Tx: txBytes})
func CheckTx(ctx context.Context, client abciclient.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
res, _ := client.CheckTx(ctx, types.RequestCheckTx{Tx: txBytes})
code, data, log := res.Code, res.Data, res.Log
if code != codeExp {
fmt.Println("Failed test: CheckTx")


@@ -14,7 +14,7 @@ import (
func TestMarshalJSON(t *testing.T) {
b, err := json.Marshal(&ResponseDeliverTx{})
assert.Nil(t, err)
assert.NoError(t, err)
// include empty fields.
assert.True(t, strings.Contains(string(b), "code"))
r1 := ResponseCheckTx{
@@ -31,11 +31,11 @@ func TestMarshalJSON(t *testing.T) {
},
}
b, err = json.Marshal(&r1)
assert.Nil(t, err)
assert.NoError(t, err)
var r2 ResponseCheckTx
err = json.Unmarshal(b, &r2)
assert.Nil(t, err)
assert.NoError(t, err)
assert.Equal(t, r1, r2)
}
@@ -49,11 +49,11 @@ func TestWriteReadMessageSimple(t *testing.T) {
for _, c := range cases {
buf := new(bytes.Buffer)
err := WriteMessage(c, buf)
assert.Nil(t, err)
assert.NoError(t, err)
msg := new(RequestEcho)
err = ReadMessage(buf, msg)
assert.Nil(t, err)
assert.NoError(t, err)
assert.True(t, proto.Equal(c, msg))
}
@@ -71,11 +71,11 @@ func TestWriteReadMessage(t *testing.T) {
for _, c := range cases {
buf := new(bytes.Buffer)
err := WriteMessage(c, buf)
assert.Nil(t, err)
assert.NoError(t, err)
msg := new(tmproto.Header)
err = ReadMessage(buf, msg)
assert.Nil(t, err)
assert.NoError(t, err)
assert.True(t, proto.Equal(c, msg))
}
@@ -103,11 +103,11 @@ func TestWriteReadMessage2(t *testing.T) {
for _, c := range cases {
buf := new(bytes.Buffer)
err := WriteMessage(c, buf)
assert.Nil(t, err)
assert.NoError(t, err)
msg := new(ResponseCheckTx)
err = ReadMessage(buf, msg)
assert.Nil(t, err)
assert.NoError(t, err)
assert.True(t, proto.Equal(c, msg))
}


@@ -5,9 +5,6 @@ import (
"encoding/json"
"github.com/gogo/protobuf/jsonpb"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/encoding"
tmjson "github.com/tendermint/tendermint/libs/json"
)
const (
@@ -105,48 +102,6 @@ func (r *EventAttribute) UnmarshalJSON(b []byte) error {
return jsonpbUnmarshaller.Unmarshal(reader, r)
}
// validatorUpdateJSON is the JSON encoding of a validator update.
//
// It handles translation of public keys from the protobuf representation to
// the legacy Amino-compatible format expected by RPC clients.
type validatorUpdateJSON struct {
PubKey json.RawMessage `json:"pub_key,omitempty"`
Power int64 `json:"power,string"`
}
func (v *ValidatorUpdate) MarshalJSON() ([]byte, error) {
key, err := encoding.PubKeyFromProto(v.PubKey)
if err != nil {
return nil, err
}
jkey, err := tmjson.Marshal(key)
if err != nil {
return nil, err
}
return json.Marshal(validatorUpdateJSON{
PubKey: jkey,
Power: v.GetPower(),
})
}
func (v *ValidatorUpdate) UnmarshalJSON(data []byte) error {
var vu validatorUpdateJSON
if err := json.Unmarshal(data, &vu); err != nil {
return err
}
var key crypto.PubKey
if err := tmjson.Unmarshal(vu.PubKey, &key); err != nil {
return err
}
pkey, err := encoding.PubKeyToProto(key)
if err != nil {
return err
}
v.PubKey = pkey
v.Power = vu.Power
return nil
}
// Some compile time assertions to ensure we don't
// have accidental runtime surprises later on.


@@ -1838,7 +1838,7 @@ type ResponseCheckTx struct {
Sender string `protobuf:"bytes,9,opt,name=sender,proto3" json:"sender,omitempty"`
Priority int64 `protobuf:"varint,10,opt,name=priority,proto3" json:"priority,omitempty"`
// mempool_error is set by Tendermint.
// ABCI applictions creating a ResponseCheckTX should not set mempool_error.
// ABCI applications creating a ResponseCheckTX should not set mempool_error.
MempoolError string `protobuf:"bytes,11,opt,name=mempool_error,json=mempoolError,proto3" json:"mempool_error,omitempty"`
}


@@ -1,13 +0,0 @@
# The version of the generation template.
# Required.
# The only currently-valid value is v1beta1.
version: v1beta1
# The plugins to run.
plugins:
# The name of the plugin.
- name: gogofaster
# The relative output directory.
out: proto
# Any options to provide to the plugin.
opt: Mgoogle/protobuf/timestamp.proto=github.com/gogo/protobuf/types,Mgoogle/protobuf/duration.proto=github.com/golang/protobuf/ptypes/duration,plugins=grpc,paths=source_relative


@@ -1,16 +0,0 @@
version: v1beta1
build:
roots:
- proto
- third_party/proto
lint:
use:
- BASIC
- FILE_LOWER_SNAKE_CASE
- UNARY_RPC
ignore:
- gogoproto
breaking:
use:
- FILE


@@ -6,10 +6,11 @@ import (
"crypto/x509"
"flag"
"fmt"
"io/ioutil"
"net"
"net/http"
"os"
"os/signal"
"syscall"
"time"
grpc_prometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
@@ -20,7 +21,6 @@ import (
"github.com/tendermint/tendermint/libs/log"
tmnet "github.com/tendermint/tendermint/libs/net"
tmos "github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/privval"
grpcprivval "github.com/tendermint/tendermint/privval/grpc"
privvalproto "github.com/tendermint/tendermint/proto/tendermint/privval"
@@ -46,11 +46,14 @@ func main() {
rootCA = flag.String("rootcafile", "", "absolute path to root CA")
prometheusAddr = flag.String("prometheus-addr", "", "address for prometheus endpoint (host:port)")
logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo, false).
logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo).
With("module", "priv_val")
)
flag.Parse()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
logger.Info(
"Starting private validator",
"addr", *addr,
@@ -78,7 +81,7 @@ func main() {
}
certPool := x509.NewCertPool()
bs, err := ioutil.ReadFile(*rootCA)
bs, err := os.ReadFile(*rootCA)
if err != nil {
fmt.Fprintf(os.Stderr, "failed to read client ca cert: %s", err)
os.Exit(1)
@@ -131,9 +134,10 @@ func main() {
os.Exit(1)
}
// Stop upon receiving SIGTERM or CTRL-C.
tmos.TrapSignal(logger, func() {
logger.Debug("SignerServer: calling Close")
opctx, opcancel := signal.NotifyContext(ctx, os.Interrupt, syscall.SIGTERM)
defer opcancel()
go func() {
<-opctx.Done()
if *prometheusAddr != "" {
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
@@ -143,7 +147,7 @@ func main() {
}
}
s.GracefulStop()
})
}()
// Run forever.
select {}


@@ -1,69 +0,0 @@
package commands
import (
"errors"
"path/filepath"
"sync"
"github.com/spf13/cobra"
"github.com/syndtr/goleveldb/leveldb"
"github.com/syndtr/goleveldb/leveldb/opt"
"github.com/syndtr/goleveldb/leveldb/util"
"github.com/tendermint/tendermint/libs/log"
)
func MakeCompactDBCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "experimental-compact-goleveldb",
Short: "force compacts the tendermint storage engine (only GoLevelDB supported)",
Long: `
This is a temporary utility command that performs a force compaction on the state
and blockstores to reduce disk space for a pruning node. This should only be run
once the node has stopped. This command will likely be omitted in the future after
the planned refactor to the storage engine.
Currently, only GoLevelDB is supported.
`,
RunE: func(cmd *cobra.Command, args []string) error {
if config.DBBackend != "goleveldb" {
return errors.New("compaction is currently only supported with goleveldb")
}
compactGoLevelDBs(config.RootDir, logger)
return nil
},
}
return cmd
}
func compactGoLevelDBs(rootDir string, logger log.Logger) {
dbNames := []string{"state", "blockstore"}
o := &opt.Options{
DisableSeeksCompaction: true,
}
wg := sync.WaitGroup{}
for _, dbName := range dbNames {
dbName := dbName
wg.Add(1)
go func() {
defer wg.Done()
dbPath := filepath.Join(rootDir, "data", dbName+".db")
store, err := leveldb.OpenFile(dbPath, o)
if err != nil {
logger.Error("failed to initialize tendermint db", "path", dbPath, "err", err)
return
}
defer store.Close()
logger.Info("starting compaction...", "db", dbPath)
err = store.CompactRange(util.Range{Start: nil, Limit: nil})
if err != nil {
logger.Error("failed to compact tendermint db", "path", dbPath, "err", err)
}
}()
}
wg.Wait()
}


@@ -15,7 +15,7 @@ var (
flagProfAddr = "pprof-laddr"
flagFrequency = "frequency"
logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo, false)
logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
)
// DebugCmd defines the root command containing subcommands that assist in


@@ -1,9 +1,9 @@
package debug
import (
"context"
"errors"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"time"
@@ -43,7 +43,7 @@ func init() {
)
}
func dumpCmdHandler(_ *cobra.Command, args []string) error {
func dumpCmdHandler(cmd *cobra.Command, args []string) error {
outDir := args[0]
if outDir == "" {
return errors.New("invalid output directory")
@@ -64,25 +64,27 @@ func dumpCmdHandler(_ *cobra.Command, args []string) error {
return fmt.Errorf("failed to create new http client: %w", err)
}
ctx := cmd.Context()
home := viper.GetString(cli.HomeFlag)
conf := config.DefaultConfig()
conf = conf.SetRoot(home)
config.EnsureRoot(conf.RootDir)
dumpDebugData(outDir, conf, rpc)
dumpDebugData(ctx, outDir, conf, rpc)
ticker := time.NewTicker(time.Duration(frequency) * time.Second)
for range ticker.C {
dumpDebugData(outDir, conf, rpc)
dumpDebugData(ctx, outDir, conf, rpc)
}
return nil
}
func dumpDebugData(outDir string, conf *config.Config, rpc *rpchttp.HTTP) {
func dumpDebugData(ctx context.Context, outDir string, conf *config.Config, rpc *rpchttp.HTTP) {
start := time.Now().UTC()
tmpDir, err := ioutil.TempDir(outDir, "tendermint_debug_tmp")
tmpDir, err := os.MkdirTemp(outDir, "tendermint_debug_tmp")
if err != nil {
logger.Error("failed to create temporary directory", "dir", tmpDir, "error", err)
return
@@ -90,19 +92,19 @@ func dumpDebugData(outDir string, conf *config.Config, rpc *rpchttp.HTTP) {
defer os.RemoveAll(tmpDir)
logger.Info("getting node status...")
if err := dumpStatus(rpc, tmpDir, "status.json"); err != nil {
if err := dumpStatus(ctx, rpc, tmpDir, "status.json"); err != nil {
logger.Error("failed to dump node status", "error", err)
return
}
logger.Info("getting node network info...")
if err := dumpNetInfo(rpc, tmpDir, "net_info.json"); err != nil {
if err := dumpNetInfo(ctx, rpc, tmpDir, "net_info.json"); err != nil {
logger.Error("failed to dump node network info", "error", err)
return
}
logger.Info("getting node consensus state...")
if err := dumpConsensusState(rpc, tmpDir, "consensus_state.json"); err != nil {
if err := dumpConsensusState(ctx, rpc, tmpDir, "consensus_state.json"); err != nil {
logger.Error("failed to dump node consensus state", "error", err)
return
}


@@ -5,7 +5,6 @@ import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"path"
"path/filepath"
@@ -111,5 +110,5 @@ func writeStateJSONToFile(state interface{}, dir, filename string) error {
return fmt.Errorf("failed to encode state dump: %w", err)
}
return ioutil.WriteFile(path.Join(dir, filename), stateJSON, os.ModePerm)
return os.WriteFile(path.Join(dir, filename), stateJSON, os.ModePerm)
}


@@ -3,7 +3,6 @@ package debug
import (
"errors"
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
@@ -34,7 +33,8 @@ $ tendermint debug kill 34255 /path/to/tm-debug.zip`,
}
func killCmdHandler(cmd *cobra.Command, args []string) error {
pid, err := strconv.ParseUint(args[0], 10, 64)
ctx := cmd.Context()
pid, err := strconv.ParseInt(args[0], 10, 64)
if err != nil {
return err
}
@@ -56,24 +56,24 @@ func killCmdHandler(cmd *cobra.Command, args []string) error {
// Create a temporary directory which will contain all the state dumps and
// relevant files and directories that will be compressed into a file.
tmpDir, err := ioutil.TempDir(os.TempDir(), "tendermint_debug_tmp")
tmpDir, err := os.MkdirTemp(os.TempDir(), "tendermint_debug_tmp")
if err != nil {
return fmt.Errorf("failed to create temporary directory: %w", err)
}
defer os.RemoveAll(tmpDir)
logger.Info("getting node status...")
if err := dumpStatus(rpc, tmpDir, "status.json"); err != nil {
if err := dumpStatus(ctx, rpc, tmpDir, "status.json"); err != nil {
return err
}
logger.Info("getting node network info...")
if err := dumpNetInfo(rpc, tmpDir, "net_info.json"); err != nil {
if err := dumpNetInfo(ctx, rpc, tmpDir, "net_info.json"); err != nil {
return err
}
logger.Info("getting node consensus state...")
if err := dumpConsensusState(rpc, tmpDir, "consensus_state.json"); err != nil {
if err := dumpConsensusState(ctx, rpc, tmpDir, "consensus_state.json"); err != nil {
return err
}
@@ -92,7 +92,7 @@ func killCmdHandler(cmd *cobra.Command, args []string) error {
}
logger.Info("killing Tendermint process")
if err := killProc(pid, tmpDir); err != nil {
if err := killProc(int(pid), tmpDir); err != nil {
return err
}
@@ -105,7 +105,7 @@ func killCmdHandler(cmd *cobra.Command, args []string) error {
// is tailed and piped to a file under the directory dir. An error is returned
// if the output file cannot be created or the tail command cannot be started.
// An error is not returned if any subsequent syscall fails.
func killProc(pid uint64, dir string) error {
func killProc(pid int, dir string) error {
// pipe STDERR output from tailing the Tendermint process to a file
//
// NOTE: This will only work on UNIX systems.
@@ -128,7 +128,7 @@ func killProc(pid uint64, dir string) error {
go func() {
// Killing the Tendermint process with the '-ABRT|-6' signal will result in
// a goroutine stacktrace.
p, err := os.FindProcess(int(pid))
p, err := os.FindProcess(pid)
if err != nil {
fmt.Fprintf(os.Stderr, "failed to find PID to kill Tendermint process: %s", err)
} else if err = p.Signal(syscall.SIGABRT); err != nil {


@@ -3,7 +3,7 @@ package debug
import (
"context"
"fmt"
"io/ioutil"
"io"
"net/http"
"os"
"path"
@@ -15,8 +15,8 @@ import (
// dumpStatus gets node status state dump from the Tendermint RPC and writes it
// to file. It returns an error upon failure.
func dumpStatus(rpc *rpchttp.HTTP, dir, filename string) error {
status, err := rpc.Status(context.Background())
func dumpStatus(ctx context.Context, rpc *rpchttp.HTTP, dir, filename string) error {
status, err := rpc.Status(ctx)
if err != nil {
return fmt.Errorf("failed to get node status: %w", err)
}
@@ -26,8 +26,8 @@ func dumpStatus(rpc *rpchttp.HTTP, dir, filename string) error {
// dumpNetInfo gets network information state dump from the Tendermint RPC and
// writes it to file. It returns an error upon failure.
func dumpNetInfo(rpc *rpchttp.HTTP, dir, filename string) error {
netInfo, err := rpc.NetInfo(context.Background())
func dumpNetInfo(ctx context.Context, rpc *rpchttp.HTTP, dir, filename string) error {
netInfo, err := rpc.NetInfo(ctx)
if err != nil {
return fmt.Errorf("failed to get node network information: %w", err)
}
@@ -37,8 +37,8 @@ func dumpNetInfo(rpc *rpchttp.HTTP, dir, filename string) error {
// dumpConsensusState gets consensus state dump from the Tendermint RPC and
// writes it to file. It returns an error upon failure.
func dumpConsensusState(rpc *rpchttp.HTTP, dir, filename string) error {
consDump, err := rpc.DumpConsensusState(context.Background())
func dumpConsensusState(ctx context.Context, rpc *rpchttp.HTTP, dir, filename string) error {
consDump, err := rpc.DumpConsensusState(ctx)
if err != nil {
return fmt.Errorf("failed to get node consensus dump: %w", err)
}
@@ -73,10 +73,10 @@ func dumpProfile(dir, addr, profile string, debug int) error {
}
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
body, err := io.ReadAll(resp.Body)
if err != nil {
return fmt.Errorf("failed to read %s profile response body: %w", profile, err)
}
return ioutil.WriteFile(path.Join(dir, fmt.Sprintf("%s.out", profile)), body, os.ModePerm)
return os.WriteFile(path.Join(dir, fmt.Sprintf("%s.out", profile)), body, os.ModePerm)
}
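The debug tooling in this change also drops the deprecated io/ioutil helpers for their Go 1.16 standard-library replacements. A self-contained sketch of the substitutions used here (ioutil.TempDir -> os.MkdirTemp, ioutil.WriteFile -> os.WriteFile, ioutil.ReadFile -> os.ReadFile, ioutil.ReadAll -> io.ReadAll):

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// ioutil.TempDir   -> os.MkdirTemp
	dir, err := os.MkdirTemp("", "tendermint_debug_tmp")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// ioutil.WriteFile -> os.WriteFile
	name := filepath.Join(dir, "status.json")
	if err := os.WriteFile(name, []byte("{}"), 0o644); err != nil {
		panic(err)
	}

	// ioutil.ReadFile  -> os.ReadFile
	b, err := os.ReadFile(name)
	if err != nil {
		panic(err)
	}

	// ioutil.ReadAll   -> io.ReadAll
	body, err := io.ReadAll(strings.NewReader(string(b)))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}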


@@ -39,10 +39,10 @@ func initFiles(cmd *cobra.Command, args []string) error {
return errors.New("must specify a node type: tendermint init [validator|full|seed]")
}
config.Mode = args[0]
return initFilesWithConfig(config)
return initFilesWithConfig(cmd.Context(), config)
}
func initFilesWithConfig(config *cfg.Config) error {
func initFilesWithConfig(ctx context.Context, config *cfg.Config) error {
var (
pv *privval.FilePV
err error
@@ -65,7 +65,9 @@ func initFilesWithConfig(config *cfg.Config) error {
if err != nil {
return err
}
pv.Save()
if err := pv.Save(); err != nil {
return err
}
logger.Info("Generated private validator", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
}
@@ -98,7 +100,7 @@ func initFilesWithConfig(config *cfg.Config) error {
}
}
ctx, cancel := context.WithTimeout(context.TODO(), ctxTimeout)
ctx, cancel := context.WithTimeout(ctx, ctxTimeout)
defer cancel()
// if this is a validator we add it to genesis


@@ -1,8 +1,6 @@
package commands
import (
"context"
"os"
"os/signal"
"syscall"
@@ -40,16 +38,9 @@ func init() {
}
func runInspect(cmd *cobra.Command, args []string) error {
ctx, cancel := context.WithCancel(cmd.Context())
ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM, syscall.SIGINT)
defer cancel()
c := make(chan os.Signal, 1)
signal.Notify(c, syscall.SIGTERM, syscall.SIGINT)
go func() {
<-c
cancel()
}()
ins, err := inspect.NewFromConfig(logger, config)
if err != nil {
return err


@@ -5,11 +5,8 @@ import (
"fmt"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/scripts/keymigrate"
"github.com/tendermint/tendermint/scripts/scmigrate"
)
func MakeKeyMigrateCommand() *cobra.Command {
@@ -17,7 +14,46 @@ func MakeKeyMigrateCommand() *cobra.Command {
Use: "key-migrate",
Short: "Run Database key migration",
RunE: func(cmd *cobra.Command, args []string) error {
return RunDatabaseMigration(cmd.Context(), logger, config)
ctx, cancel := context.WithCancel(cmd.Context())
defer cancel()
contexts := []string{
// this is ordered to put the
// (presumably) biggest/most important
// subsets first.
"blockstore",
"state",
"peerstore",
"tx_index",
"evidence",
"light",
}
for idx, dbctx := range contexts {
logger.Info("beginning a key migration",
"dbctx", dbctx,
"num", idx+1,
"total", len(contexts),
)
db, err := cfg.DefaultDBProvider(&cfg.DBContext{
ID: dbctx,
Config: config,
})
if err != nil {
return fmt.Errorf("constructing database handle: %w", err)
}
if err = keymigrate.Migrate(ctx, db); err != nil {
return fmt.Errorf("running migration for context %q: %w",
dbctx, err)
}
}
logger.Info("completed database migration successfully")
return nil
},
}
@@ -26,51 +62,3 @@ func MakeKeyMigrateCommand() *cobra.Command {
return cmd
}
func RunDatabaseMigration(ctx context.Context, logger log.Logger, conf *cfg.Config) error {
contexts := []string{
// this is ordered to put
// the more ephemeral tables first to
// reduce the possibility of the
// ephemeral data overwriting later data
"tx_index",
"peerstore",
"light",
"blockstore",
"state",
"evidence",
}
for idx, dbctx := range contexts {
logger.Info("beginning a key migration",
"dbctx", dbctx,
"num", idx+1,
"total", len(contexts),
)
db, err := cfg.DefaultDBProvider(&cfg.DBContext{
ID: dbctx,
Config: conf,
})
if err != nil {
return fmt.Errorf("constructing database handle: %w", err)
}
if err = keymigrate.Migrate(ctx, db); err != nil {
return fmt.Errorf("running migration for context %q: %w",
dbctx, err)
}
if dbctx == "blockstore" {
if err := scmigrate.Migrate(ctx, db); err != nil {
return fmt.Errorf("running seen commit migration: %w", err)
}
}
}
logger.Info("completed database migration successfully")
return nil
}

View File

@@ -6,8 +6,10 @@ import (
"fmt"
"net/http"
"os"
"os/signal"
"path/filepath"
"strings"
"syscall"
"time"
"github.com/spf13/cobra"
@@ -15,7 +17,6 @@ import (
"github.com/tendermint/tendermint/libs/log"
tmmath "github.com/tendermint/tendermint/libs/math"
tmos "github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/light"
lproxy "github.com/tendermint/tendermint/light/proxy"
lrpc "github.com/tendermint/tendermint/light/rpc"
@@ -101,7 +102,7 @@ func init() {
}
func runProxy(cmd *cobra.Command, args []string) error {
logger, err := log.NewDefaultLogger(logFormat, logLevel, false)
logger, err := log.NewDefaultLogger(logFormat, logLevel)
if err != nil {
return err
}
@@ -186,13 +187,16 @@ func runProxy(cmd *cobra.Command, args []string) error {
return err
}
// Stop upon receiving SIGTERM or CTRL-C.
tmos.TrapSignal(logger, func() {
ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
defer cancel()
go func() {
<-ctx.Done()
p.Listener.Close()
})
}()
logger.Info("Starting proxy...", "laddr", listenAddr)
if err := p.ListenAndServe(); err != http.ErrServerClosed {
if err := p.ListenAndServe(ctx); err != http.ErrServerClosed {
// Error starting or closing listener:
logger.Error("proxy ListenAndServe", "err", err)
}

View File

@@ -1,32 +0,0 @@
package commands
import (
"fmt"
"github.com/spf13/cobra"
"github.com/tendermint/tendermint/internal/p2p/upnp"
tmjson "github.com/tendermint/tendermint/libs/json"
)
// ProbeUpnpCmd adds capabilities to test the UPnP functionality.
var ProbeUpnpCmd = &cobra.Command{
Use: "probe-upnp",
Short: "Test UPnP functionality",
RunE: probeUpnp,
}
func probeUpnp(cmd *cobra.Command, args []string) error {
capabilities, err := upnp.Probe(logger)
if err != nil {
fmt.Println("Probe failed: ", err)
} else {
fmt.Println("Probe success!")
jsonBytes, err := tmjson.Marshal(capabilities)
if err != nil {
return err
}
fmt.Println(string(jsonBytes))
}
return nil
}


@@ -25,13 +25,13 @@ const (
base int64 = 2
)
func setupReIndexEventCmd() *cobra.Command {
func setupReIndexEventCmd(ctx context.Context) *cobra.Command {
reIndexEventCmd := &cobra.Command{
Use: ReIndexEventCmd.Use,
Run: func(cmd *cobra.Command, args []string) {},
}
_ = reIndexEventCmd.ExecuteContext(context.Background())
_ = reIndexEventCmd.ExecuteContext(ctx)
return reIndexEventCmd
}
@@ -177,11 +177,14 @@ func TestReIndexEvent(t *testing.T) {
{height, height, false},
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
for _, tc := range testCases {
startHeight = tc.startHeight
endHeight = tc.endHeight
err := eventReIndex(setupReIndexEventCmd(), []indexer.EventSink{mockEventSink}, mockBlockStore, mockStateStore)
err := eventReIndex(setupReIndexEventCmd(ctx), []indexer.EventSink{mockEventSink}, mockBlockStore, mockStateStore)
if tc.reIndexErr {
require.Error(t, err)
} else {


@@ -9,8 +9,8 @@ import (
var ReplayCmd = &cobra.Command{
Use: "replay",
Short: "Replay messages from WAL",
Run: func(cmd *cobra.Command, args []string) {
consensus.RunReplayFile(config.BaseConfig, config.Consensus, false)
RunE: func(cmd *cobra.Command, args []string) error {
return consensus.RunReplayFile(cmd.Context(), logger, config.BaseConfig, config.Consensus, false)
},
}
@@ -19,7 +19,7 @@ var ReplayCmd = &cobra.Command{
var ReplayConsoleCmd = &cobra.Command{
Use: "replay-console",
Short: "Replay messages from WAL in a console",
Run: func(cmd *cobra.Command, args []string) {
consensus.RunReplayFile(config.BaseConfig, config.Consensus, true)
RunE: func(cmd *cobra.Command, args []string) error {
return consensus.RunReplayFile(cmd.Context(), logger, config.BaseConfig, config.Consensus, true)
},
}


@@ -1,190 +0,0 @@
package commands
import (
"os"
"path/filepath"
"github.com/spf13/cobra"
"github.com/tendermint/tendermint/libs/log"
tmos "github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/types"
)
// ResetAllCmd removes the database of this Tendermint core
// instance.
var ResetAllCmd = &cobra.Command{
Use: "unsafe-reset-all",
Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state",
RunE: resetAllCmd,
}
var keepAddrBook bool
// ResetStateCmd removes the database of the specified Tendermint core instance.
var ResetStateCmd = &cobra.Command{
Use: "reset-state",
Short: "Remove all the data and WAL",
RunE: func(cmd *cobra.Command, args []string) error {
config, err := ParseConfig()
if err != nil {
return err
}
return resetState(config.DBDir(), logger, keyType)
},
}
func init() {
ResetAllCmd.Flags().BoolVar(&keepAddrBook, "keep-addr-book", false, "keep the address book intact")
ResetPrivValidatorCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")
}
// ResetPrivValidatorCmd resets the private validator files.
var ResetPrivValidatorCmd = &cobra.Command{
Use: "unsafe-reset-priv-validator",
Short: "(unsafe) Reset this node's validator to genesis state",
RunE: resetPrivValidator,
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetAllCmd(cmd *cobra.Command, args []string) error {
config, err := ParseConfig()
if err != nil {
return err
}
return resetAll(
config.DBDir(),
config.P2P.AddrBookFile(),
config.PrivValidator.KeyFile(),
config.PrivValidator.StateFile(),
logger,
)
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetPrivValidator(cmd *cobra.Command, args []string) error {
config, err := ParseConfig()
if err != nil {
return err
}
return resetFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile(), logger, keyType)
}
// resetAll removes address book files plus all data, and resets the privValidator data.
func resetAll(dbDir, addrBookFile, privValKeyFile, privValStateFile string, logger log.Logger) error {
if keepAddrBook {
logger.Info("The address book remains intact")
} else {
removeAddrBook(addrBookFile, logger)
}
if err := os.RemoveAll(dbDir); err == nil {
logger.Info("Removed all blockchain history", "dir", dbDir)
} else {
logger.Error("Error removing all blockchain history", "dir", dbDir, "err", err)
}
if err := tmos.EnsureDir(dbDir, 0700); err != nil {
logger.Error("unable to recreate dbDir", "err", err)
}
// recreate the dbDir since the privVal state needs to live there
return resetFilePV(privValKeyFile, privValStateFile, logger, keyType)
}
// resetState removes the databases and the WAL, but leaves the address book and privval files intact.
func resetState(dbDir string, logger log.Logger, keyType string) error {
blockdb := filepath.Join(dbDir, "blockstore.db")
state := filepath.Join(dbDir, "state.db")
wal := filepath.Join(dbDir, "cs.wal")
evidence := filepath.Join(dbDir, "evidence.db")
txIndex := filepath.Join(dbDir, "tx_index.db")
peerstore := filepath.Join(dbDir, "peerstore.db")
if tmos.FileExists(blockdb) {
if err := os.RemoveAll(blockdb); err == nil {
logger.Info("Removed all blockstore.db", "dir", blockdb)
} else {
logger.Error("error removing all blockstore.db", "dir", blockdb, "err", err)
}
}
if tmos.FileExists(state) {
if err := os.RemoveAll(state); err == nil {
logger.Info("Removed all state.db", "dir", state)
} else {
logger.Error("error removing all state.db", "dir", state, "err", err)
}
}
if tmos.FileExists(wal) {
if err := os.RemoveAll(wal); err == nil {
logger.Info("Removed all cs.wal", "dir", wal)
} else {
logger.Error("error removing all cs.wal", "dir", wal, "err", err)
}
}
if tmos.FileExists(evidence) {
if err := os.RemoveAll(evidence); err == nil {
logger.Info("Removed all evidence.db", "dir", evidence)
} else {
logger.Error("error removing all evidence.db", "dir", evidence, "err", err)
}
}
if tmos.FileExists(txIndex) {
if err := os.RemoveAll(txIndex); err == nil {
logger.Info("Removed tx_index.db", "dir", txIndex)
} else {
logger.Error("error removing tx_index.db", "dir", txIndex, "err", err)
}
}
if tmos.FileExists(peerstore) {
if err := os.RemoveAll(peerstore); err == nil {
logger.Info("Removed peerstore.db", "dir", peerstore)
} else {
logger.Error("error removing peerstore.db", "dir", peerstore, "err", err)
}
}
if err := tmos.EnsureDir(dbDir, 0700); err != nil {
logger.Error("unable to recreate dbDir", "err", err)
}
return nil
}
func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger, keyType string) error {
if _, err := os.Stat(privValKeyFile); err == nil {
pv, err := privval.LoadFilePVEmptyState(privValKeyFile, privValStateFile)
if err != nil {
return err
}
pv.Reset()
logger.Info("Reset private validator file to genesis state", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
} else {
pv, err := privval.GenFilePV(privValKeyFile, privValStateFile, keyType)
if err != nil {
return err
}
pv.Save()
logger.Info("Generated private validator file", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
}
return nil
}
func removeAddrBook(addrBookFile string, logger log.Logger) {
if err := os.Remove(addrBookFile); err == nil {
logger.Info("Removed existing address book", "file", addrBookFile)
} else if !os.IsNotExist(err) {
logger.Info("Error removing address book", "file", addrBookFile, "err", err)
}
}

View File

@@ -0,0 +1,88 @@
package commands
import (
"os"
"github.com/spf13/cobra"
"github.com/tendermint/tendermint/libs/log"
tmos "github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/privval"
"github.com/tendermint/tendermint/types"
)
// ResetAllCmd removes the database of this Tendermint core
// instance.
var ResetAllCmd = &cobra.Command{
Use: "unsafe-reset-all",
Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state",
RunE: resetAll,
}
var keepAddrBook bool
func init() {
ResetAllCmd.Flags().BoolVar(&keepAddrBook, "keep-addr-book", false, "keep the address book intact")
ResetPrivValidatorCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")
}
// ResetPrivValidatorCmd resets the private validator files.
var ResetPrivValidatorCmd = &cobra.Command{
Use: "unsafe-reset-priv-validator",
Short: "(unsafe) Reset this node's validator to genesis state",
RunE: resetPrivValidator,
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetAll(cmd *cobra.Command, args []string) error {
return ResetAll(config.DBDir(), config.PrivValidator.KeyFile(),
config.PrivValidator.StateFile(), logger)
}
// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetPrivValidator(cmd *cobra.Command, args []string) error {
return resetFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile(), logger)
}
// ResetAll removes all data (including the WAL) and resets the privValidator files.
// Exported so other CLI tools can use it.
func ResetAll(dbDir, privValKeyFile, privValStateFile string, logger log.Logger) error {
if err := os.RemoveAll(dbDir); err == nil {
logger.Info("Removed all blockchain history", "dir", dbDir)
} else {
logger.Error("error removing all blockchain history", "dir", dbDir, "err", err)
}
// recreate the dbDir since the privVal state needs to live there
if err := tmos.EnsureDir(dbDir, 0700); err != nil {
logger.Error("unable to recreate dbDir", "err", err)
}
return resetFilePV(privValKeyFile, privValStateFile, logger)
}
func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger) error {
if _, err := os.Stat(privValKeyFile); err == nil {
pv, err := privval.LoadFilePVEmptyState(privValKeyFile, privValStateFile)
if err != nil {
return err
}
if err := pv.Reset(); err != nil {
return err
}
logger.Info("Reset private validator file to genesis state", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
} else {
pv, err := privval.GenFilePV(privValKeyFile, privValStateFile, keyType)
if err != nil {
return err
}
if err := pv.Save(); err != nil {
return err
}
logger.Info("Generated private validator file", "keyFile", privValKeyFile,
"stateFile", privValStateFile)
}
return nil
}
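
ResetAll is exported so other CLI tools can call it directly. A hedged sketch of such a call, using the default config and the two-argument logger constructor shown elsewhere in this compare (the home directory is illustrative, and error handling is minimal):

package main

import (
	"github.com/tendermint/tendermint/cmd/tendermint/commands"
	cfg "github.com/tendermint/tendermint/config"
	"github.com/tendermint/tendermint/libs/log"
)

func main() {
	conf := cfg.DefaultConfig()
	conf.SetRoot("/tmp/tm-home") // illustrative node home directory

	logger := log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)

	// Equivalent to `tendermint unsafe-reset-all`: wipes the data directory
	// and regenerates the private validator files.
	if err := commands.ResetAll(
		conf.DBDir(),
		conf.PrivValidator.KeyFile(),
		conf.PrivValidator.StateFile(),
		logger,
	); err != nil {
		logger.Error("reset failed", "err", err)
	}
}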

View File

@@ -1,57 +0,0 @@
package commands
import (
"path/filepath"
"testing"
"github.com/stretchr/testify/require"
cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/privval"
)
func Test_ResetAll(t *testing.T) {
config := cfg.TestConfig()
dir := t.TempDir()
config.SetRoot(dir)
cfg.EnsureRoot(dir)
require.NoError(t, initFilesWithConfig(config))
pv, err := privval.LoadFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile())
require.NoError(t, err)
pv.LastSignState.Height = 10
pv.Save()
require.NoError(t, resetAll(config.DBDir(), config.P2P.AddrBookFile(), config.PrivValidator.KeyFile(),
config.PrivValidator.StateFile(), logger))
require.DirExists(t, config.DBDir())
require.NoFileExists(t, filepath.Join(config.DBDir(), "block.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "state.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "evidence.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "tx_index.db"))
require.FileExists(t, config.PrivValidator.StateFile())
pv, err = privval.LoadFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile())
require.NoError(t, err)
require.Equal(t, int64(0), pv.LastSignState.Height)
}
func Test_ResetState(t *testing.T) {
config := cfg.TestConfig()
dir := t.TempDir()
config.SetRoot(dir)
cfg.EnsureRoot(dir)
require.NoError(t, initFilesWithConfig(config))
pv, err := privval.LoadFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile())
require.NoError(t, err)
pv.LastSignState.Height = 10
pv.Save()
require.NoError(t, resetState(config.DBDir(), logger, keyType))
require.DirExists(t, config.DBDir())
require.NoFileExists(t, filepath.Join(config.DBDir(), "block.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "state.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "evidence.db"))
require.NoFileExists(t, filepath.Join(config.DBDir(), "tx_index.db"))
require.FileExists(t, config.PrivValidator.StateFile())
pv, err = privval.LoadFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile())
require.NoError(t, err)
// private validator state should still be intact.
require.Equal(t, int64(10), pv.LastSignState.Height)
}

View File

@@ -0,0 +1,74 @@
package commands_test
import (
"context"
"testing"
"time"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/cmd/tendermint/commands"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/rpc/client/local"
rpctest "github.com/tendermint/tendermint/rpc/test"
e2e "github.com/tendermint/tendermint/test/e2e/app"
)
func TestRollbackIntegration(t *testing.T) {
var height int64
dir := t.TempDir()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
cfg, err := rpctest.CreateConfig(t.Name())
require.NoError(t, err)
cfg.BaseConfig.DBBackend = "goleveldb"
app, err := e2e.NewApplication(e2e.DefaultConfig(dir))
t.Run("First run", func(t *testing.T) {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
require.NoError(t, err)
node, _, err := rpctest.StartTendermint(ctx, cfg, app, rpctest.SuppressStdout)
require.NoError(t, err)
time.Sleep(3 * time.Second)
cancel()
node.Wait()
require.False(t, node.IsRunning())
})
t.Run("Rollback", func(t *testing.T) {
require.NoError(t, app.Rollback())
height, _, err = commands.RollbackState(cfg)
require.NoError(t, err)
})
t.Run("Restart", func(t *testing.T) {
ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
node2, _, err2 := rpctest.StartTendermint(ctx, cfg, app, rpctest.SuppressStdout)
require.NoError(t, err2)
logger := log.NewTestingLogger(t)
client, err := local.New(logger, node2.(local.NodeService))
require.NoError(t, err)
ticker := time.NewTicker(200 * time.Millisecond)
for {
select {
case <-ctx.Done():
t.Fatalf("failed to make progress after 20 seconds. Min height: %d", height)
case <-ticker.C:
status, err := client.Status(ctx)
require.NoError(t, err)
if status.SyncInfo.LatestBlockHeight > height {
return
}
}
}
})
}

View File

@@ -13,7 +13,7 @@ import (
var (
config = cfg.DefaultConfig()
logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo, false)
logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
ctxTimeout = 4 * time.Second
)
@@ -36,7 +36,7 @@ func ParseConfig() (*cfg.Config, error) {
conf.SetRoot(conf.RootDir)
cfg.EnsureRoot(conf.RootDir)
if err := conf.ValidateBasic(); err != nil {
return nil, fmt.Errorf("error in config file: %v", err)
return nil, fmt.Errorf("error in config file: %w", err)
}
return conf, nil
}
@@ -55,7 +55,7 @@ var RootCmd = &cobra.Command{
return err
}
logger, err = log.NewDefaultLogger(config.LogFormat, config.LogLevel, false)
logger, err = log.NewDefaultLogger(config.LogFormat, config.LogLevel)
if err != nil {
return err
}

View File

@@ -2,7 +2,6 @@ package commands
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strconv"
@@ -19,17 +18,12 @@ import (
)
// clearConfig clears env vars, the given root dir, and resets viper.
func clearConfig(dir string) {
if err := os.Unsetenv("TMHOME"); err != nil {
panic(err)
}
if err := os.Unsetenv("TM_HOME"); err != nil {
panic(err)
}
func clearConfig(t *testing.T, dir string) {
t.Helper()
require.NoError(t, os.Unsetenv("TMHOME"))
require.NoError(t, os.Unsetenv("TM_HOME"))
require.NoError(t, os.RemoveAll(dir))
if err := os.RemoveAll(dir); err != nil {
panic(err)
}
viper.Reset()
config = cfg.DefaultConfig()
}
@@ -47,8 +41,9 @@ func testRootCmd() *cobra.Command {
return rootCmd
}
func testSetup(rootDir string, args []string, env map[string]string) error {
clearConfig(rootDir)
func testSetup(t *testing.T, rootDir string, args []string, env map[string]string) error {
t.Helper()
clearConfig(t, rootDir)
rootCmd := testRootCmd()
cmd := cli.PrepareBaseCmd(rootCmd, "TM", rootDir)
@@ -74,8 +69,8 @@ func TestRootHome(t *testing.T) {
for i, tc := range cases {
idxString := strconv.Itoa(i)
err := testSetup(defaultRoot, tc.args, tc.env)
require.Nil(t, err, idxString)
err := testSetup(t, defaultRoot, tc.args, tc.env)
require.NoError(t, err, idxString)
assert.Equal(t, tc.root, config.RootDir, idxString)
assert.Equal(t, tc.root, config.P2P.RootDir, idxString)
@@ -106,8 +101,8 @@ func TestRootFlagsEnv(t *testing.T) {
for i, tc := range cases {
idxString := strconv.Itoa(i)
err := testSetup(defaultRoot, tc.args, tc.env)
require.Nil(t, err, idxString)
err := testSetup(t, defaultRoot, tc.args, tc.env)
require.NoError(t, err, idxString)
assert.Equal(t, tc.logLevel, config.LogLevel, idxString)
}
@@ -135,17 +130,17 @@ func TestRootConfig(t *testing.T) {
for i, tc := range cases {
defaultRoot := t.TempDir()
idxString := strconv.Itoa(i)
clearConfig(defaultRoot)
clearConfig(t, defaultRoot)
// XXX: path must match cfg.defaultConfigPath
configFilePath := filepath.Join(defaultRoot, "config")
err := tmos.EnsureDir(configFilePath, 0700)
require.Nil(t, err)
require.NoError(t, err)
// write the non-defaults to a different path
// TODO: support writing sub configs so we can test that too
err = WriteConfigVals(configFilePath, cvals)
require.Nil(t, err)
require.NoError(t, err)
rootCmd := testRootCmd()
cmd := cli.PrepareBaseCmd(rootCmd, "TM", defaultRoot)
@@ -153,7 +148,7 @@ func TestRootConfig(t *testing.T) {
// run with the args and env
tc.args = append([]string{rootCmd.Use}, tc.args...)
err = cli.RunWithArgs(cmd, tc.args, tc.env)
require.Nil(t, err, idxString)
require.NoError(t, err, idxString)
assert.Equal(t, tc.logLvl, config.LogLevel, idxString)
}
@@ -167,5 +162,5 @@ func WriteConfigVals(dir string, vals map[string]string) error {
data += fmt.Sprintf("%s = \"%s\"\n", k, v)
}
cfile := filepath.Join(dir, "config.toml")
return ioutil.WriteFile(cfile, []byte(data), 0600)
return os.WriteFile(cfile, []byte(data), 0600)
}

View File

@@ -3,16 +3,15 @@ package commands
import (
"bytes"
"crypto/sha256"
"errors"
"flag"
"fmt"
"io"
"os"
"os/signal"
"syscall"
"github.com/spf13/cobra"
cfg "github.com/tendermint/tendermint/config"
tmos "github.com/tendermint/tendermint/libs/os"
)
var (
@@ -34,20 +33,8 @@ func AddNodeFlags(cmd *cobra.Command) {
config.PrivValidator.ListenAddr,
"socket address to listen on for connections from external priv-validator process")
// TODO (https://github.com/tendermint/tendermint/issues/6908): remove this check after the v0.35 release cycle
// This check was added to give users an upgrade prompt to use the new flag for syncing.
//
// The pflag package does not have a native way to print a deprecation warning
// and return an error. This logic was added to print a deprecation message to the user
// and then crash if the user attempts to use the old --fast-sync flag.
fs := flag.NewFlagSet("", flag.ExitOnError)
fs.Func("fast-sync", "deprecated",
func(string) error {
return errors.New("--fast-sync has been deprecated, please use --blocksync.enable")
})
cmd.Flags().AddGoFlagSet(fs)
// node flags
cmd.Flags().MarkHidden("fast-sync") //nolint:errcheck
cmd.Flags().BytesHexVar(
&genesisHash,
"genesis-hash",
@@ -67,10 +54,6 @@ func AddNodeFlags(cmd *cobra.Command) {
// rpc flags
cmd.Flags().String("rpc.laddr", config.RPC.ListenAddress, "RPC listen address. Port required")
cmd.Flags().String(
"rpc.grpc-laddr",
config.RPC.GRPCListenAddress,
"GRPC listen address (BroadcastTx only). Port required")
cmd.Flags().Bool("rpc.unsafe", config.RPC.Unsafe, "enabled unsafe rpc methods")
cmd.Flags().String("rpc.pprof-laddr", config.RPC.PprofListenAddress, "pprof listen address (https://golang.org/pkg/net/http/pprof)")
@@ -81,8 +64,6 @@ func AddNodeFlags(cmd *cobra.Command) {
"node listen address. (0.0.0.0:0 means any interface, any port)")
cmd.Flags().String("p2p.seeds", config.P2P.Seeds, "comma-delimited ID@host:port seed nodes") //nolint: staticcheck
cmd.Flags().String("p2p.persistent-peers", config.P2P.PersistentPeers, "comma-delimited ID@host:port persistent peers")
cmd.Flags().String("p2p.unconditional-peer-ids",
config.P2P.UnconditionalPeerIDs, "comma-delimited IDs of unconditional peers")
cmd.Flags().Bool("p2p.upnp", config.P2P.UPNP, "enable/disable UPNP port forwarding")
cmd.Flags().Bool("p2p.pex", config.P2P.PexReactor, "enable/disable Peer-Exchange")
cmd.Flags().String("p2p.private-peer-ids", config.P2P.PrivatePeerIDs, "comma-delimited private peer IDs")
@@ -123,28 +104,22 @@ func NewRunNodeCmd(nodeProvider cfg.ServiceProvider) *cobra.Command {
return err
}
n, err := nodeProvider(config, logger)
ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
defer cancel()
n, err := nodeProvider(ctx, config, logger)
if err != nil {
return fmt.Errorf("failed to create node: %w", err)
}
if err := n.Start(); err != nil {
if err := n.Start(ctx); err != nil {
return fmt.Errorf("failed to start node: %w", err)
}
logger.Info("started node", "node", n.String())
// Stop upon receiving SIGTERM or CTRL-C.
tmos.TrapSignal(logger, func() {
if n.IsRunning() {
if err := n.Stop(); err != nil {
logger.Error("unable to stop the node", "error", err)
}
}
})
// Run forever.
select {}
<-ctx.Done()
return nil
},
}

View File

@@ -25,13 +25,14 @@ func showValidator(cmd *cobra.Command, args []string) error {
var (
pubKey crypto.PubKey
err error
bctx = cmd.Context()
)
//TODO: remove once gRPC is the only supported protocol
protocol, _ := tmnet.ProtocolAndAddress(config.PrivValidator.ListenAddr)
switch protocol {
case "grpc":
pvsc, err := tmgrpc.DialRemoteSigner(
bctx,
config.PrivValidator,
config.ChainID(),
logger,
@@ -41,7 +42,7 @@ func showValidator(cmd *cobra.Command, args []string) error {
return fmt.Errorf("can't connect to remote validator %w", err)
}
ctx, cancel := context.WithTimeout(context.TODO(), ctxTimeout)
ctx, cancel := context.WithTimeout(bctx, ctxTimeout)
defer cancel()
pubKey, err = pvsc.GetPubKey(ctx)
@@ -60,7 +61,7 @@ func showValidator(cmd *cobra.Command, args []string) error {
return err
}
ctx, cancel := context.WithTimeout(context.TODO(), ctxTimeout)
ctx, cancel := context.WithTimeout(bctx, ctxTimeout)
defer cancel()
pubKey, err = pv.GetPubKey(ctx)

View File

@@ -122,7 +122,7 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
}
genVals := make([]types.GenesisValidator, nValidators)
ctx := cmd.Context()
for i := 0; i < nValidators; i++ {
nodeDirName := fmt.Sprintf("%s%d", nodeDirPrefix, i)
nodeDir := filepath.Join(outputDir, nodeDirName)
@@ -139,7 +139,7 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
return err
}
if err := initFilesWithConfig(config); err != nil {
if err := initFilesWithConfig(ctx, config); err != nil {
return err
}
@@ -150,7 +150,7 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
return err
}
ctx, cancel := context.WithTimeout(context.TODO(), ctxTimeout)
ctx, cancel := context.WithTimeout(ctx, ctxTimeout)
defer cancel()
pubKey, err := pv.GetPubKey(ctx)
@@ -181,7 +181,7 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
return err
}
if err := initFilesWithConfig(config); err != nil {
if err := initFilesWithConfig(ctx, config); err != nil {
return err
}
}
@@ -226,7 +226,6 @@ func testnetFiles(cmd *cobra.Command, args []string) error {
for i := 0; i < nValidators+nNonValidators; i++ {
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
config.SetRoot(nodeDir)
config.P2P.AddrBookStrict = false
config.P2P.AllowDuplicateIP = true
if populatePersistentPeers {
persistentPeersWithoutSelf := make([]string, 0)

View File

@@ -17,13 +17,11 @@ func main() {
cmd.GenValidatorCmd,
cmd.ReIndexEventCmd,
cmd.InitFilesCmd,
cmd.ProbeUpnpCmd,
cmd.LightCmd,
cmd.ReplayCmd,
cmd.ReplayConsoleCmd,
cmd.ResetAllCmd,
cmd.ResetPrivValidatorCmd,
cmd.ResetStateCmd,
cmd.ShowValidatorCmd,
cmd.TestnetFilesCmd,
cmd.ShowNodeIDCmd,
@@ -32,7 +30,6 @@ func main() {
cmd.InspectCmd,
cmd.RollbackStateCmd,
cmd.MakeKeyMigrateCommand(),
cmd.MakeCompactDBCommand(),
debug.DebugCmd,
cli.NewCompletionCmd(rootCmd, true),
)

View File

@@ -4,7 +4,6 @@ import (
"encoding/hex"
"errors"
"fmt"
"io/ioutil"
"net/http"
"os"
"path/filepath"
@@ -28,12 +27,6 @@ const (
ModeFull = "full"
ModeValidator = "validator"
ModeSeed = "seed"
BlockSyncV0 = "v0"
BlockSyncV2 = "v2"
MempoolV0 = "v0"
MempoolV1 = "v1"
)
// NOTE: Most of the structs & relevant comments + the
@@ -54,19 +47,14 @@ var (
defaultPrivValKeyName = "priv_validator_key.json"
defaultPrivValStateName = "priv_validator_state.json"
defaultNodeKeyName = "node_key.json"
defaultAddrBookName = "addrbook.json"
defaultNodeKeyName = "node_key.json"
defaultConfigFilePath = filepath.Join(defaultConfigDir, defaultConfigFileName)
defaultGenesisJSONPath = filepath.Join(defaultConfigDir, defaultGenesisJSONName)
defaultPrivValKeyPath = filepath.Join(defaultConfigDir, defaultPrivValKeyName)
defaultPrivValStatePath = filepath.Join(defaultDataDir, defaultPrivValStateName)
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
defaultAddrBookPath = filepath.Join(defaultConfigDir, defaultAddrBookName)
minSubscriptionBufferSize = 100
defaultSubscriptionBufferSize = 200
defaultNodeKeyPath = filepath.Join(defaultConfigDir, defaultNodeKeyName)
)
// Config defines the top level configuration for a Tendermint node
@@ -79,7 +67,6 @@ type Config struct {
P2P *P2PConfig `mapstructure:"p2p"`
Mempool *MempoolConfig `mapstructure:"mempool"`
StateSync *StateSyncConfig `mapstructure:"statesync"`
BlockSync *BlockSyncConfig `mapstructure:"blocksync"`
Consensus *ConsensusConfig `mapstructure:"consensus"`
TxIndex *TxIndexConfig `mapstructure:"tx-index"`
Instrumentation *InstrumentationConfig `mapstructure:"instrumentation"`
@@ -94,7 +81,6 @@ func DefaultConfig() *Config {
P2P: DefaultP2PConfig(),
Mempool: DefaultMempoolConfig(),
StateSync: DefaultStateSyncConfig(),
BlockSync: DefaultBlockSyncConfig(),
Consensus: DefaultConsensusConfig(),
TxIndex: DefaultTxIndexConfig(),
Instrumentation: DefaultInstrumentationConfig(),
@@ -117,7 +103,6 @@ func TestConfig() *Config {
P2P: TestP2PConfig(),
Mempool: TestMempoolConfig(),
StateSync: TestStateSyncConfig(),
BlockSync: TestBlockSyncConfig(),
Consensus: TestConsensusConfig(),
TxIndex: TestTxIndexConfig(),
Instrumentation: TestInstrumentationConfig(),
@@ -145,18 +130,12 @@ func (cfg *Config) ValidateBasic() error {
if err := cfg.RPC.ValidateBasic(); err != nil {
return fmt.Errorf("error in [rpc] section: %w", err)
}
if err := cfg.P2P.ValidateBasic(); err != nil {
return fmt.Errorf("error in [p2p] section: %w", err)
}
if err := cfg.Mempool.ValidateBasic(); err != nil {
return fmt.Errorf("error in [mempool] section: %w", err)
}
if err := cfg.StateSync.ValidateBasic(); err != nil {
return fmt.Errorf("error in [statesync] section: %w", err)
}
if err := cfg.BlockSync.ValidateBasic(); err != nil {
return fmt.Errorf("error in [blocksync] section: %w", err)
}
if err := cfg.Consensus.ValidateBasic(); err != nil {
return fmt.Errorf("error in [consensus] section: %w", err)
}
@@ -286,7 +265,7 @@ func (cfg BaseConfig) NodeKeyFile() string {
// LoadNodeKey loads NodeKey located in filePath.
func (cfg BaseConfig) LoadNodeKeyID() (types.NodeID, error) {
jsonBytes, err := ioutil.ReadFile(cfg.NodeKeyFile())
jsonBytes, err := os.ReadFile(cfg.NodeKeyFile())
if err != nil {
return "", err
}
@@ -342,28 +321,6 @@ func (cfg BaseConfig) ValidateBasic() error {
return fmt.Errorf("unknown mode: %v", cfg.Mode)
}
// TODO (https://github.com/tendermint/tendermint/issues/6908) remove this check after the v0.35 release cycle.
// This check was added to give users an upgrade prompt to use the new
// configuration option in v0.35. In future release cycles they should no longer
// be using this configuration parameter so the check can be removed.
// The cfg.Other field can likely be removed at the same time if it is not referenced
// elsewhere as it was added to service this check.
if fs, ok := cfg.Other["fastsync"]; ok {
if _, ok := fs.(map[string]interface{}); ok {
return fmt.Errorf("a configuration section named 'fastsync' was found in the " +
"configuration file. The 'fastsync' section has been renamed to " +
"'blocksync', please update the 'fastsync' field in your configuration file to 'blocksync'")
}
}
if fs, ok := cfg.Other["fast-sync"]; ok {
if fs != "" {
return fmt.Errorf("a parameter named 'fast-sync' was found in the " +
"configuration file. The parameter to enable or disable quickly syncing with a blockchain" +
"has moved to the [blocksync] section of the configuration file as blocksync.enable. " +
"Please move the 'fast-sync' field in your configuration file to 'blocksync.enable'")
}
}
return nil
}
@@ -464,24 +421,10 @@ type RPCConfig struct {
// A list of non simple headers the client is allowed to use with cross-domain requests.
CORSAllowedHeaders []string `mapstructure:"cors-allowed-headers"`
// TCP or UNIX socket address for the gRPC server to listen on
// NOTE: This server only supports /broadcast_tx_commit
// Deprecated: gRPC in the RPC layer of Tendermint will be removed in 0.36.
GRPCListenAddress string `mapstructure:"grpc-laddr"`
// Maximum number of simultaneous connections.
// Does not include RPC (HTTP&WebSocket) connections. See max-open-connections
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
// Deprecated: gRPC in the RPC layer of Tendermint will be removed in 0.36.
GRPCMaxOpenConnections int `mapstructure:"grpc-max-open-connections"`
// Activate unsafe RPC commands like /dial-persistent-peers and /unsafe-flush-mempool
Unsafe bool `mapstructure:"unsafe"`
// Maximum number of simultaneous connections (including WebSocket).
// Does not include gRPC connections. See grpc-max-open-connections
// If you want to accept a larger number than the default, make sure
// you increase your OS limits.
// 0 - unlimited.
@@ -495,33 +438,10 @@ type RPCConfig struct {
MaxSubscriptionClients int `mapstructure:"max-subscription-clients"`
// Maximum number of unique queries a given client can /subscribe to
// If you're using GRPC (or Local RPC client) and /broadcast_tx_commit, set
// If you're using a Local RPC client and /broadcast_tx_commit, set this
// to the estimated maximum number of broadcast_tx_commit calls per block.
MaxSubscriptionsPerClient int `mapstructure:"max-subscriptions-per-client"`
// The number of events that can be buffered per subscription before
// returning `ErrOutOfCapacity`.
SubscriptionBufferSize int `mapstructure:"experimental-subscription-buffer-size"`
// The maximum number of responses that can be buffered per WebSocket
// client. If clients cannot read from the WebSocket endpoint fast enough,
// they will be disconnected, so increasing this parameter may reduce the
// chances of them being disconnected (but will cause the node to use more
// memory).
//
// Must be at least the same as `SubscriptionBufferSize`, otherwise
// connections may be dropped unnecessarily.
WebSocketWriteBufferSize int `mapstructure:"experimental-websocket-write-buffer-size"`
// If a WebSocket client cannot read fast enough, at present we may
// silently drop events instead of generating an error or disconnecting the
// client.
//
// Enabling this parameter will cause the WebSocket connection to be closed
// instead if it cannot read fast enough, allowing for greater
// predictability in subscription behavior.
CloseOnSlowClient bool `mapstructure:"experimental-close-on-slow-client"`
// How long to wait for a tx to be committed during /broadcast_tx_commit
// WARNING: Using a value larger than 10s will result in increasing the
// global HTTP write timeout, which applies to all connections and endpoints.
@@ -559,21 +479,17 @@ type RPCConfig struct {
// DefaultRPCConfig returns a default configuration for the RPC server
func DefaultRPCConfig() *RPCConfig {
return &RPCConfig{
ListenAddress: "tcp://127.0.0.1:26657",
CORSAllowedOrigins: []string{},
CORSAllowedMethods: []string{http.MethodHead, http.MethodGet, http.MethodPost},
CORSAllowedHeaders: []string{"Origin", "Accept", "Content-Type", "X-Requested-With", "X-Server-Time"},
GRPCListenAddress: "",
GRPCMaxOpenConnections: 900,
ListenAddress: "tcp://127.0.0.1:26657",
CORSAllowedOrigins: []string{},
CORSAllowedMethods: []string{http.MethodHead, http.MethodGet, http.MethodPost},
CORSAllowedHeaders: []string{"Origin", "Accept", "Content-Type", "X-Requested-With", "X-Server-Time"},
Unsafe: false,
MaxOpenConnections: 900,
MaxSubscriptionClients: 100,
MaxSubscriptionsPerClient: 5,
SubscriptionBufferSize: defaultSubscriptionBufferSize,
TimeoutBroadcastTxCommit: 10 * time.Second,
WebSocketWriteBufferSize: defaultSubscriptionBufferSize,
MaxBodyBytes: int64(1000000), // 1MB
MaxHeaderBytes: 1 << 20, // same as the net/http default
@@ -587,7 +503,6 @@ func DefaultRPCConfig() *RPCConfig {
func TestRPCConfig() *RPCConfig {
cfg := DefaultRPCConfig()
cfg.ListenAddress = "tcp://127.0.0.1:36657"
cfg.GRPCListenAddress = "tcp://127.0.0.1:36658"
cfg.Unsafe = true
return cfg
}
@@ -595,9 +510,6 @@ func TestRPCConfig() *RPCConfig {
// ValidateBasic performs basic validation (checking param bounds, etc.) and
// returns an error if any check fails.
func (cfg *RPCConfig) ValidateBasic() error {
if cfg.GRPCMaxOpenConnections < 0 {
return errors.New("grpc-max-open-connections can't be negative")
}
if cfg.MaxOpenConnections < 0 {
return errors.New("max-open-connections can't be negative")
}
@@ -607,18 +519,6 @@ func (cfg *RPCConfig) ValidateBasic() error {
if cfg.MaxSubscriptionsPerClient < 0 {
return errors.New("max-subscriptions-per-client can't be negative")
}
if cfg.SubscriptionBufferSize < minSubscriptionBufferSize {
return fmt.Errorf(
"experimental-subscription-buffer-size must be >= %d",
minSubscriptionBufferSize,
)
}
if cfg.WebSocketWriteBufferSize < cfg.SubscriptionBufferSize {
return fmt.Errorf(
"experimental-websocket-write-buffer-size must be >= experimental-subscription-buffer-size (%d)",
cfg.SubscriptionBufferSize,
)
}
if cfg.TimeoutBroadcastTxCommit < 0 {
return errors.New("timeout-broadcast-tx-commit can't be negative")
}
@@ -689,42 +589,23 @@ type P2PConfig struct { //nolint: maligned
// UPNP port forwarding
UPNP bool `mapstructure:"upnp"`
// Path to address book
AddrBook string `mapstructure:"addr-book-file"`
// Set true for strict address routability rules
// Set false for private or local networks
AddrBookStrict bool `mapstructure:"addr-book-strict"`
// Maximum number of inbound peers
//
// TODO: Remove once p2p refactor is complete in favor of MaxConnections.
// ref: https://github.com/tendermint/tendermint/issues/5670
MaxNumInboundPeers int `mapstructure:"max-num-inbound-peers"`
// Maximum number of outbound peers to connect to, excluding persistent peers.
//
// TODO: Remove once p2p refactor is complete in favor of MaxConnections.
// ref: https://github.com/tendermint/tendermint/issues/5670
MaxNumOutboundPeers int `mapstructure:"max-num-outbound-peers"`
// MaxConnections defines the maximum number of connected peers (inbound and
// outbound).
MaxConnections uint16 `mapstructure:"max-connections"`
// MaxOutgoingConnections defines the maximum number of connected peers (inbound and
// outbound).
MaxOutgoingConnections uint16 `mapstructure:"max-outgoing-connections"`
// MaxIncomingConnectionAttempts rate limits the number of incoming connection
// attempts per IP address.
MaxIncomingConnectionAttempts uint `mapstructure:"max-incoming-connection-attempts"`
// List of node IDs, to which a connection will be (re)established ignoring any existing limits
UnconditionalPeerIDs string `mapstructure:"unconditional-peer-ids"`
// Set true to enable the peer-exchange reactor
PexReactor bool `mapstructure:"pex"`
// Maximum pause when redialing a persistent peer (if zero, exponential backoff is used)
PersistentPeersMaxDialPeriod time.Duration `mapstructure:"persistent-peers-max-dial-period"`
// Comma separated list of peer IDs to keep private (will not be gossiped to
// other peers)
PrivatePeerIDs string `mapstructure:"private-peer-ids"`
// Toggle to disable guard against peers connecting from the same ip.
AllowDuplicateIP bool `mapstructure:"allow-duplicate-ip"`
// Time to wait before flushing messages out on the connection
FlushThrottleTimeout time.Duration `mapstructure:"flush-throttle-timeout"`
@@ -738,16 +619,6 @@ type P2PConfig struct { //nolint: maligned
// Rate at which packets can be received, in bytes/second
RecvRate int64 `mapstructure:"recv-rate"`
// Set true to enable the peer-exchange reactor
PexReactor bool `mapstructure:"pex"`
// Comma separated list of peer IDs to keep private (will not be gossiped to
// other peers)
PrivatePeerIDs string `mapstructure:"private-peer-ids"`
// Toggle to disable guard against peers connecting from the same ip.
AllowDuplicateIP bool `mapstructure:"allow-duplicate-ip"`
// Peer connection configuration.
HandshakeTimeout time.Duration `mapstructure:"handshake-timeout"`
DialTimeout time.Duration `mapstructure:"dial-timeout"`
@@ -756,13 +627,8 @@ type P2PConfig struct { //nolint: maligned
// Force dial to fail
TestDialFail bool `mapstructure:"test-dial-fail"`
// UseLegacy enables the "legacy" P2P implementation and
// disables the newer default implementation. This flag will
// be removed in a future release.
UseLegacy bool `mapstructure:"use-legacy"`
// Makes it possible to configure which queue backend the p2p
// layer uses. Options are: "fifo", "priority" and "wdrr",
// layer uses. Options are: "fifo" and "priority",
// with the default being "priority".
QueueType string `mapstructure:"queue-type"`
}
@@ -773,14 +639,8 @@ func DefaultP2PConfig() *P2PConfig {
ListenAddress: "tcp://0.0.0.0:26656",
ExternalAddress: "",
UPNP: false,
AddrBook: defaultAddrBookPath,
AddrBookStrict: true,
MaxNumInboundPeers: 40,
MaxNumOutboundPeers: 10,
MaxConnections: 64,
MaxOutgoingConnections: 12,
MaxIncomingConnectionAttempts: 100,
PersistentPeersMaxDialPeriod: 0 * time.Second,
FlushThrottleTimeout: 100 * time.Millisecond,
// The MTU (Maximum Transmission Unit) for Ethernet is 1500 bytes.
// The IP header and the TCP header take up 20 bytes each at least (unless
@@ -796,39 +656,15 @@ func DefaultP2PConfig() *P2PConfig {
DialTimeout: 3 * time.Second,
TestDialFail: false,
QueueType: "priority",
UseLegacy: false,
}
}
// TestP2PConfig returns a configuration for testing the peer-to-peer layer
func TestP2PConfig() *P2PConfig {
cfg := DefaultP2PConfig()
cfg.ListenAddress = "tcp://127.0.0.1:36656"
cfg.FlushThrottleTimeout = 10 * time.Millisecond
cfg.AllowDuplicateIP = true
return cfg
}
// AddrBookFile returns the full path to the address book
func (cfg *P2PConfig) AddrBookFile() string {
return rootify(cfg.AddrBook, cfg.RootDir)
}
// ValidateBasic performs basic validation (checking param bounds, etc.) and
// returns an error if any check fails.
func (cfg *P2PConfig) ValidateBasic() error {
if cfg.MaxNumInboundPeers < 0 {
return errors.New("max-num-inbound-peers can't be negative")
}
if cfg.MaxNumOutboundPeers < 0 {
return errors.New("max-num-outbound-peers can't be negative")
}
if cfg.FlushThrottleTimeout < 0 {
return errors.New("flush-throttle-timeout can't be negative")
}
if cfg.PersistentPeersMaxDialPeriod < 0 {
return errors.New("persistent-peers-max-dial-period can't be negative")
}
if cfg.MaxPacketMsgPayloadSize < 0 {
return errors.New("max-packet-msg-payload-size can't be negative")
}
@@ -838,18 +674,23 @@ func (cfg *P2PConfig) ValidateBasic() error {
if cfg.RecvRate < 0 {
return errors.New("recv-rate can't be negative")
}
if cfg.MaxOutgoingConnections > cfg.MaxConnections {
return errors.New("max-outgoing-connections cannot be larger than max-connections")
}
return nil
}
// TestP2PConfig returns a configuration for testing the peer-to-peer layer
func TestP2PConfig() *P2PConfig {
cfg := DefaultP2PConfig()
cfg.ListenAddress = "tcp://127.0.0.1:36656"
cfg.AllowDuplicateIP = true
cfg.FlushThrottleTimeout = 10 * time.Millisecond
return cfg
}
//-----------------------------------------------------------------------------
// MempoolConfig
// MempoolConfig defines the configuration options for the Tendermint mempool.
type MempoolConfig struct {
Version string `mapstructure:"version"`
RootDir string `mapstructure:"home"`
Recheck bool `mapstructure:"recheck"`
Broadcast bool `mapstructure:"broadcast"`
@@ -899,7 +740,6 @@ type MempoolConfig struct {
// DefaultMempoolConfig returns a default configuration for the Tendermint mempool.
func DefaultMempoolConfig() *MempoolConfig {
return &MempoolConfig{
Version: MempoolV1,
Recheck: true,
Broadcast: true,
// Each signature verification takes .5ms, Size reduced until we implement
@@ -1068,42 +908,6 @@ func (cfg *StateSyncConfig) ValidateBasic() error {
return nil
}
//-----------------------------------------------------------------------------
// BlockSyncConfig (formerly known as FastSync) defines the configuration for the Tendermint block sync service
// If this node is many blocks behind the tip of the chain, BlockSync
// allows it to catch up quickly by downloading blocks in parallel
// and verifying their commits.
type BlockSyncConfig struct {
Enable bool `mapstructure:"enable"`
Version string `mapstructure:"version"`
}
// DefaultBlockSyncConfig returns a default configuration for the block sync service
func DefaultBlockSyncConfig() *BlockSyncConfig {
return &BlockSyncConfig{
Enable: true,
Version: BlockSyncV0,
}
}
// TestBlockSyncConfig returns a default configuration for the block sync.
func TestBlockSyncConfig() *BlockSyncConfig {
return DefaultBlockSyncConfig()
}
// ValidateBasic performs basic validation.
func (cfg *BlockSyncConfig) ValidateBasic() error {
switch cfg.Version {
case BlockSyncV0:
return nil
case BlockSyncV2:
return errors.New("blocksync version v2 is no longer supported. Please use v0")
default:
return fmt.Errorf("unknown blocksync version %s", cfg.Version)
}
}
//-----------------------------------------------------------------------------
// ConsensusConfig

View File

@@ -10,21 +10,19 @@ import (
)
func TestDefaultConfig(t *testing.T) {
assert := assert.New(t)
// set up some defaults
cfg := DefaultConfig()
assert.NotNil(cfg.P2P)
assert.NotNil(cfg.Mempool)
assert.NotNil(cfg.Consensus)
assert.NotNil(t, cfg.P2P)
assert.NotNil(t, cfg.Mempool)
assert.NotNil(t, cfg.Consensus)
// check the root dir stuff...
cfg.SetRoot("/foo")
cfg.Genesis = "bar"
cfg.DBPath = "/opt/data"
assert.Equal("/foo/bar", cfg.GenesisFile())
assert.Equal("/opt/data", cfg.DBDir())
assert.Equal(t, "/foo/bar", cfg.GenesisFile())
assert.Equal(t, "/opt/data", cfg.DBDir())
}
func TestConfigValidateBasic(t *testing.T) {
@@ -37,19 +35,18 @@ func TestConfigValidateBasic(t *testing.T) {
}
func TestTLSConfiguration(t *testing.T) {
assert := assert.New(t)
cfg := DefaultConfig()
cfg.SetRoot("/home/user")
cfg.RPC.TLSCertFile = "file.crt"
assert.Equal("/home/user/config/file.crt", cfg.RPC.CertFile())
assert.Equal(t, "/home/user/config/file.crt", cfg.RPC.CertFile())
cfg.RPC.TLSKeyFile = "file.key"
assert.Equal("/home/user/config/file.key", cfg.RPC.KeyFile())
assert.Equal(t, "/home/user/config/file.key", cfg.RPC.KeyFile())
cfg.RPC.TLSCertFile = "/abs/path/to/file.crt"
assert.Equal("/abs/path/to/file.crt", cfg.RPC.CertFile())
assert.Equal(t, "/abs/path/to/file.crt", cfg.RPC.CertFile())
cfg.RPC.TLSKeyFile = "/abs/path/to/file.key"
assert.Equal("/abs/path/to/file.key", cfg.RPC.KeyFile())
assert.Equal(t, "/abs/path/to/file.key", cfg.RPC.KeyFile())
}
func TestBaseConfigValidateBasic(t *testing.T) {
@@ -66,7 +63,6 @@ func TestRPCConfigValidateBasic(t *testing.T) {
assert.NoError(t, cfg.ValidateBasic())
fieldsToTest := []string{
"GRPCMaxOpenConnections",
"MaxOpenConnections",
"MaxSubscriptionClients",
"MaxSubscriptionsPerClient",
@@ -82,26 +78,6 @@ func TestRPCConfigValidateBasic(t *testing.T) {
}
}
func TestP2PConfigValidateBasic(t *testing.T) {
cfg := TestP2PConfig()
assert.NoError(t, cfg.ValidateBasic())
fieldsToTest := []string{
"MaxNumInboundPeers",
"MaxNumOutboundPeers",
"FlushThrottleTimeout",
"MaxPacketMsgPayloadSize",
"SendRate",
"RecvRate",
}
for _, fieldName := range fieldsToTest {
reflect.ValueOf(cfg).Elem().FieldByName(fieldName).SetInt(-1)
assert.Error(t, cfg.ValidateBasic())
reflect.ValueOf(cfg).Elem().FieldByName(fieldName).SetInt(0)
}
}
func TestMempoolConfigValidateBasic(t *testing.T) {
cfg := TestMempoolConfig()
assert.NoError(t, cfg.ValidateBasic())
@@ -125,18 +101,6 @@ func TestStateSyncConfigValidateBasic(t *testing.T) {
require.NoError(t, cfg.ValidateBasic())
}
func TestBlockSyncConfigValidateBasic(t *testing.T) {
cfg := TestBlockSyncConfig()
assert.NoError(t, cfg.ValidateBasic())
// tamper with version
cfg.Version = "v2"
assert.Error(t, cfg.ValidateBasic())
cfg.Version = "invalid"
assert.Error(t, cfg.ValidateBasic())
}
func TestConsensusConfig_ValidateBasic(t *testing.T) {
testcases := map[string]struct {
modify func(*ConsensusConfig)
@@ -186,3 +150,21 @@ func TestInstrumentationConfigValidateBasic(t *testing.T) {
cfg.MaxOpenConnections = -1
assert.Error(t, cfg.ValidateBasic())
}
func TestP2PConfigValidateBasic(t *testing.T) {
cfg := TestP2PConfig()
assert.NoError(t, cfg.ValidateBasic())
fieldsToTest := []string{
"FlushThrottleTimeout",
"MaxPacketMsgPayloadSize",
"SendRate",
"RecvRate",
}
for _, fieldName := range fieldsToTest {
reflect.ValueOf(cfg).Elem().FieldByName(fieldName).SetInt(-1)
assert.Error(t, cfg.ValidateBasic())
reflect.ValueOf(cfg).Elem().FieldByName(fieldName).SetInt(0)
}
}

View File

@@ -1,6 +1,8 @@
package config
import (
"context"
dbm "github.com/tendermint/tm-db"
"github.com/tendermint/tendermint/libs/log"
@@ -8,7 +10,7 @@ import (
)
// ServiceProvider takes a config and a logger and returns a ready to go Node.
type ServiceProvider func(*Config, log.Logger) (service.Service, error)
type ServiceProvider func(context.Context, *Config, log.Logger) (service.Service, error)
// DBContext specifies config information for loading a new DB.
type DBContext struct {

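ServiceProvider now receives the caller's context as its first argument, so node construction can observe shutdown. A sketch of a decorator written against the new signature; the wrapper itself is an illustration, not code from this compare, and it assumes service.Service comes from libs/service as elsewhere in the tree:

package nodeprovider

import (
	"context"

	"github.com/tendermint/tendermint/config"
	"github.com/tendermint/tendermint/libs/log"
	"github.com/tendermint/tendermint/libs/service"
)

// WithCancelCheck wraps an existing provider (for example, the constructor
// handed to NewRunNodeCmd) so that construction is skipped when the caller's
// context has already been canceled.
func WithCancelCheck(makeNode config.ServiceProvider) config.ServiceProvider {
	return func(ctx context.Context, conf *config.Config, logger log.Logger) (service.Service, error) {
		if err := ctx.Err(); err != nil {
			return nil, err
		}
		return makeNode(ctx, conf, logger)
	}
}
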
View File

@@ -3,13 +3,13 @@ package config
import (
"bytes"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
"text/template"
tmos "github.com/tendermint/tendermint/libs/os"
tmrand "github.com/tendermint/tendermint/libs/rand"
)
// DefaultDirPerm is the default permissions used when creating directories.
@@ -199,26 +199,10 @@ cors-allowed-methods = [{{ range .RPC.CORSAllowedMethods }}{{ printf "%q, " . }}
# A list of non simple headers the client is allowed to use with cross-domain requests
cors-allowed-headers = [{{ range .RPC.CORSAllowedHeaders }}{{ printf "%q, " . }}{{end}}]
# TCP or UNIX socket address for the gRPC server to listen on
# NOTE: This server only supports /broadcast_tx_commit
# Deprecated gRPC in the RPC layer of Tendermint will be deprecated in 0.36.
grpc-laddr = "{{ .RPC.GRPCListenAddress }}"
# Maximum number of simultaneous connections.
# Does not include RPC (HTTP&WebSocket) connections. See max-open-connections
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
# Should be < {ulimit -Sn} - {MaxNumInboundPeers} - {MaxNumOutboundPeers} - {N of wal, db and other open files}
# 1024 - 40 - 10 - 50 = 924 = ~900
# Deprecated gRPC in the RPC layer of Tendermint will be deprecated in 0.36.
grpc-max-open-connections = {{ .RPC.GRPCMaxOpenConnections }}
# Activate unsafe RPC commands like /dial-seeds and /unsafe-flush-mempool
unsafe = {{ .RPC.Unsafe }}
# Maximum number of simultaneous connections (including WebSocket).
# Does not include gRPC connections. See grpc-max-open-connections
# If you want to accept a larger number than the default, make sure
# you increase your OS limits.
# 0 - unlimited.
@@ -232,37 +216,10 @@ max-open-connections = {{ .RPC.MaxOpenConnections }}
max-subscription-clients = {{ .RPC.MaxSubscriptionClients }}
# Maximum number of unique queries a given client can /subscribe to
# If you're using GRPC (or Local RPC client) and /broadcast_tx_commit, set to
# the estimated # maximum number of broadcast_tx_commit calls per block.
# If you're using a Local RPC client and /broadcast_tx_commit, set this
# to the estimated maximum number of broadcast_tx_commit calls per block.
max-subscriptions-per-client = {{ .RPC.MaxSubscriptionsPerClient }}
# Experimental parameter to specify the maximum number of events a node will
# buffer, per subscription, before returning an error and closing the
# subscription. Must be set to at least 100, but higher values will accommodate
# higher event throughput rates (and will use more memory).
experimental-subscription-buffer-size = {{ .RPC.SubscriptionBufferSize }}
# Experimental parameter to specify the maximum number of RPC responses that
# can be buffered per WebSocket client. If clients cannot read from the
# WebSocket endpoint fast enough, they will be disconnected, so increasing this
# parameter may reduce the chances of them being disconnected (but will cause
# the node to use more memory).
#
# Must be at least the same as "experimental-subscription-buffer-size",
# otherwise connections could be dropped unnecessarily. This value should
# ideally be somewhat higher than "experimental-subscription-buffer-size" to
# accommodate non-subscription-related RPC responses.
experimental-websocket-write-buffer-size = {{ .RPC.WebSocketWriteBufferSize }}
# If a WebSocket client cannot read fast enough, at present we may
# silently drop events instead of generating an error or disconnecting the
# client.
#
# Enabling this experimental parameter will cause the WebSocket connection to
# be closed instead if it cannot read fast enough, allowing for greater
# predictability in subscription behavior.
experimental-close-on-slow-client = {{ .RPC.CloseOnSlowClient }}
# How long to wait for a tx to be committed during /broadcast_tx_commit.
# WARNING: Using a value larger than 10s will result in increasing the
# global HTTP write timeout, which applies to all connections and endpoints.
@@ -298,9 +255,6 @@ pprof-laddr = "{{ .RPC.PprofListenAddress }}"
#######################################################
[p2p]
# Enable the legacy p2p layer.
use-legacy = {{ .P2P.UseLegacy }}
# Select the p2p internal queue
queue-type = "{{ .P2P.QueueType }}"
@@ -332,66 +286,12 @@ persistent-peers = "{{ .P2P.PersistentPeers }}"
# UPNP port forwarding
upnp = {{ .P2P.UPNP }}
# Path to address book
# TODO: Remove once p2p refactor is complete in favor of peer store.
addr-book-file = "{{ js .P2P.AddrBook }}"
# Set true for strict address routability rules
# Set false for private or local networks
addr-book-strict = {{ .P2P.AddrBookStrict }}
# Maximum number of inbound peers
#
# TODO: Remove once p2p refactor is complete in favor of MaxConnections.
# ref: https://github.com/tendermint/tendermint/issues/5670
max-num-inbound-peers = {{ .P2P.MaxNumInboundPeers }}
# Maximum number of outbound peers to connect to, excluding persistent peers
#
# TODO: Remove once p2p refactor is complete in favor of MaxConnections.
# ref: https://github.com/tendermint/tendermint/issues/5670
max-num-outbound-peers = {{ .P2P.MaxNumOutboundPeers }}
# Maximum number of connections (inbound and outbound).
max-connections = {{ .P2P.MaxConnections }}
# Maximum number of connections reserved for outgoing
# connections. Must be less than max-connections
max-outgoing-connections = {{ .P2P.MaxOutgoingConnections }}
# Rate limits the number of incoming connection attempts per IP address.
max-incoming-connection-attempts = {{ .P2P.MaxIncomingConnectionAttempts }}
# List of node IDs, to which a connection will be (re)established ignoring any existing limits
# TODO: Remove once p2p refactor is complete.
# ref: https://github.com/tendermint/tendermint/issues/5670
unconditional-peer-ids = "{{ .P2P.UnconditionalPeerIDs }}"
# Maximum pause when redialing a persistent peer (if zero, exponential backoff is used)
# TODO: Remove once p2p refactor is complete
# ref: https:#github.com/tendermint/tendermint/issues/5670
persistent-peers-max-dial-period = "{{ .P2P.PersistentPeersMaxDialPeriod }}"
# Time to wait before flushing messages out on the connection
# TODO: Remove once p2p refactor is complete
# ref: https:#github.com/tendermint/tendermint/issues/5670
flush-throttle-timeout = "{{ .P2P.FlushThrottleTimeout }}"
# Maximum size of a message packet payload, in bytes
# TODO: Remove once p2p refactor is complete
# ref: https:#github.com/tendermint/tendermint/issues/5670
max-packet-msg-payload-size = {{ .P2P.MaxPacketMsgPayloadSize }}
# Rate at which packets can be sent, in bytes/second
# TODO: Remove once p2p refactor is complete
# ref: https:#github.com/tendermint/tendermint/issues/5670
send-rate = {{ .P2P.SendRate }}
# Rate at which packets can be received, in bytes/second
# TODO: Remove once p2p refactor is complete
# ref: https:#github.com/tendermint/tendermint/issues/5670
recv-rate = {{ .P2P.RecvRate }}
# Set true to enable the peer-exchange reactor
pex = {{ .P2P.PexReactor }}
@@ -406,16 +306,28 @@ allow-duplicate-ip = {{ .P2P.AllowDuplicateIP }}
handshake-timeout = "{{ .P2P.HandshakeTimeout }}"
dial-timeout = "{{ .P2P.DialTimeout }}"
# Time to wait before flushing messages out on the connection
# TODO: Remove once MConnConnection is removed.
flush-throttle-timeout = "{{ .P2P.FlushThrottleTimeout }}"
# Maximum size of a message packet payload, in bytes
# TODO: Remove once MConnConnection is removed.
max-packet-msg-payload-size = {{ .P2P.MaxPacketMsgPayloadSize }}
# Rate at which packets can be sent, in bytes/second
# TODO: Remove once MConnConnection is removed.
send-rate = {{ .P2P.SendRate }}
# Rate at which packets can be received, in bytes/second
# TODO: Remove once MConnConnection is removed.
recv-rate = {{ .P2P.RecvRate }}
#######################################################
### Mempool Configuration Option ###
#######################################################
[mempool]
# Mempool version to use:
# 1) "v0" - The legacy non-prioritized mempool reactor.
# 2) "v1" (default) - The prioritized mempool reactor.
version = "{{ .Mempool.Version }}"
recheck = {{ .Mempool.Recheck }}
broadcast = {{ .Mempool.Broadcast }}
@@ -504,21 +416,6 @@ chunk-request-timeout = "{{ .StateSync.ChunkRequestTimeout }}"
# The number of concurrent chunk and block fetchers to run (default: 4).
fetchers = "{{ .StateSync.Fetchers }}"
#######################################################
### Block Sync Configuration Connections ###
#######################################################
[blocksync]
# If this node is many blocks behind the tip of the chain, BlockSync
# allows it to catch up quickly by downloading blocks in parallel
# and verifying their commits
enable = {{ .BlockSync.Enable }}
# Block Sync version to use:
# 1) "v0" (default) - the standard Block Sync implementation
# 2) "v2" - DEPRECATED, please use v0
version = "{{ .BlockSync.Version }}"
#######################################################
### Consensus Configuration Options ###
#######################################################
@@ -613,7 +510,7 @@ func ResetTestRoot(testName string) (*Config, error) {
func ResetTestRootWithChainID(testName string, chainID string) (*Config, error) {
// create a unique, concurrency-safe test directory under os.TempDir()
rootDir, err := ioutil.TempDir("", fmt.Sprintf("%s-%s_", chainID, testName))
rootDir, err := os.MkdirTemp("", fmt.Sprintf("%s-%s_", chainID, testName))
if err != nil {
return nil, err
}
@@ -653,11 +550,12 @@ func ResetTestRootWithChainID(testName string, chainID string) (*Config, error)
}
config := TestConfig().SetRoot(rootDir)
config.Instrumentation.Namespace = fmt.Sprintf("%s_%s_%s", testName, chainID, tmrand.Str(16))
return config, nil
}
func writeFile(filePath string, contents []byte, mode os.FileMode) error {
if err := ioutil.WriteFile(filePath, contents, mode); err != nil {
if err := os.WriteFile(filePath, contents, mode); err != nil {
return fmt.Errorf("failed to write file: %w", err)
}
return nil
@@ -673,6 +571,10 @@ var testGenesisFmt = `{
"max_gas": "-1",
"time_iota_ms": "10"
},
"synchrony": {
"message_delay": "500000000",
"precision": "10000000"
},
"evidence": {
"max_age_num_blocks": "100000",
"max_age_duration": "172800000000000",

View File

@@ -1,7 +1,6 @@
package config
import (
"io/ioutil"
"os"
"path/filepath"
"strings"
@@ -15,26 +14,24 @@ func ensureFiles(t *testing.T, rootDir string, files ...string) {
for _, f := range files {
p := rootify(rootDir, f)
_, err := os.Stat(p)
assert.Nil(t, err, p)
assert.NoError(t, err, p)
}
}
func TestEnsureRoot(t *testing.T) {
require := require.New(t)
// setup temp dir for test
tmpDir, err := ioutil.TempDir("", "config-test")
require.NoError(err)
tmpDir, err := os.MkdirTemp("", "config-test")
require.NoError(t, err)
defer os.RemoveAll(tmpDir)
// create root dir
EnsureRoot(tmpDir)
require.NoError(WriteConfigFile(tmpDir, DefaultConfig()))
require.NoError(t, WriteConfigFile(tmpDir, DefaultConfig()))
// make sure config is set properly
data, err := ioutil.ReadFile(filepath.Join(tmpDir, defaultConfigFilePath))
require.NoError(err)
data, err := os.ReadFile(filepath.Join(tmpDir, defaultConfigFilePath))
require.NoError(t, err)
checkConfig(t, string(data))
@@ -42,19 +39,17 @@ func TestEnsureRoot(t *testing.T) {
}
func TestEnsureTestRoot(t *testing.T) {
require := require.New(t)
testName := "ensureTestRoot"
// create root dir
cfg, err := ResetTestRoot(testName)
require.NoError(err)
require.NoError(t, err)
defer os.RemoveAll(cfg.RootDir)
rootDir := cfg.RootDir
// make sure config is set properly
data, err := ioutil.ReadFile(filepath.Join(rootDir, defaultConfigFilePath))
require.Nil(err)
data, err := os.ReadFile(filepath.Join(rootDir, defaultConfigFilePath))
require.NoError(t, err)
checkConfig(t, string(data))
@@ -71,7 +66,6 @@ func checkConfig(t *testing.T, configFile string) {
"moniker",
"seeds",
"proxy-app",
"blocksync",
"create-empty-blocks",
"peer",
"timeout",

View File

@@ -2,6 +2,7 @@ package crypto
import (
"github.com/tendermint/tendermint/crypto/tmhash"
"github.com/tendermint/tendermint/internal/jsontypes"
"github.com/tendermint/tendermint/libs/bytes"
)
@@ -25,6 +26,9 @@ type PubKey interface {
VerifySignature(msg []byte, sig []byte) bool
Equals(PubKey) bool
Type() string
// Implementations must support tagged encoding in JSON.
jsontypes.Tagged
}
type PrivKey interface {
@@ -33,6 +37,9 @@ type PrivKey interface {
PubKey() PubKey
Equals(PrivKey) bool
Type() string
// Implementations must support tagged encoding in JSON.
jsontypes.Tagged
}
type Symmetric interface {

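With jsontypes.Tagged embedded in the PubKey and PrivKey interfaces, every key implementation must expose a TypeTag and register itself, as the ed25519 diff below does. A sketch of the minimum a hypothetical in-tree key package would add (the package, type names, and tag strings are invented; a real key also implements the rest of crypto.PubKey and crypto.PrivKey):

package mykey

import (
	"github.com/tendermint/tendermint/internal/jsontypes"
)

const (
	PubKeyName  = "tendermint/PubKeyMyKey"  // invented tag
	PrivKeyName = "tendermint/PrivKeyMyKey" // invented tag
)

type PubKey []byte
type PrivKey []byte

// TypeTag satisfies the jsontypes.Tagged interface required by crypto.PubKey.
func (PubKey) TypeTag() string { return PubKeyName }

// TypeTag satisfies the jsontypes.Tagged interface required by crypto.PrivKey.
func (PrivKey) TypeTag() string { return PrivKeyName }

func init() {
	// Registration lets jsontypes decode tagged JSON back into these types.
	jsontypes.MustRegister(PubKey{})
	jsontypes.MustRegister(PrivKey{})
}
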
View File

@@ -12,6 +12,7 @@ import (
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/crypto/tmhash"
"github.com/tendermint/tendermint/internal/jsontypes"
tmjson "github.com/tendermint/tendermint/libs/json"
)
@@ -58,11 +59,17 @@ const (
func init() {
tmjson.RegisterType(PubKey{}, PubKeyName)
tmjson.RegisterType(PrivKey{}, PrivKeyName)
jsontypes.MustRegister(PubKey{})
jsontypes.MustRegister(PrivKey{})
}
// PrivKey implements crypto.PrivKey.
type PrivKey []byte
// TypeTag satisfies the jsontypes.Tagged interface.
func (PrivKey) TypeTag() string { return PrivKeyName }
// Bytes returns the privkey byte format.
func (privKey PrivKey) Bytes() []byte {
return []byte(privKey)
@@ -151,6 +158,9 @@ var _ crypto.PubKey = PubKey{}
// PubKeyEd25519 implements crypto.PubKey for the Ed25519 signature scheme.
type PubKey []byte
// TypeTag satisfies the jsontypes.Tagged interface.
func (PubKey) TypeTag() string { return PubKeyName }
// Address is the SHA256-20 of the raw pubkey bytes.
func (pubKey PubKey) Address() crypto.Address {
if len(pubKey) != PubKeySize {


@@ -17,7 +17,7 @@ func TestSignAndValidateEd25519(t *testing.T) {
msg := crypto.CRandBytes(128)
sig, err := privKey.Sign(msg)
require.Nil(t, err)
require.NoError(t, err)
// Test the signature
assert.True(t, pubKey.VerifySignature(msg, sig))


@@ -28,13 +28,13 @@ func TestKeyPath(t *testing.T) {
case KeyEncodingHex:
rand.Read(keys[i])
default:
panic("Unexpected encoding")
require.Fail(t, "Unexpected encoding")
}
path = path.AppendKey(keys[i], enc)
}
res, err := KeyPathToKeys(path.String())
require.Nil(t, err)
require.NoError(t, err)
require.Equal(t, len(keys), len(res))
for i, key := range keys {


@@ -79,58 +79,58 @@ func TestProofOperators(t *testing.T) {
// Good
popz := ProofOperators([]ProofOperator{op1, op2, op3, op4})
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.Nil(t, err)
assert.NoError(t, err)
err = popz.VerifyValue(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", bz("INPUT1"))
assert.Nil(t, err)
assert.NoError(t, err)
// BAD INPUT
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1_WRONG")})
assert.NotNil(t, err)
assert.Error(t, err)
err = popz.VerifyValue(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", bz("INPUT1_WRONG"))
assert.NotNil(t, err)
assert.Error(t, err)
// BAD KEY 1
err = popz.Verify(bz("OUTPUT4"), "/KEY3/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
assert.Error(t, err)
// BAD KEY 2
err = popz.Verify(bz("OUTPUT4"), "KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
assert.Error(t, err)
// BAD KEY 3
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1/", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
assert.Error(t, err)
// BAD KEY 4
err = popz.Verify(bz("OUTPUT4"), "//KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
assert.Error(t, err)
// BAD KEY 5
err = popz.Verify(bz("OUTPUT4"), "/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
assert.Error(t, err)
// BAD OUTPUT 1
err = popz.Verify(bz("OUTPUT4_WRONG"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
assert.Error(t, err)
// BAD OUTPUT 2
err = popz.Verify(bz(""), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
assert.Error(t, err)
// BAD POPZ 1
popz = []ProofOperator{op1, op2, op4}
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
assert.Error(t, err)
// BAD POPZ 2
popz = []ProofOperator{op4, op3, op2, op1}
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
assert.Error(t, err)
// BAD POPZ 3
popz = []ProofOperator{}
err = popz.Verify(bz("OUTPUT4"), "/KEY4/KEY2/KEY1", [][]byte{bz("INPUT1")})
assert.NotNil(t, err)
assert.Error(t, err)
}
func bz(s string) []byte {


@@ -10,6 +10,7 @@ import (
secp256k1 "github.com/btcsuite/btcd/btcec"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/internal/jsontypes"
tmjson "github.com/tendermint/tendermint/libs/json"
// necessary for Bitcoin address format
@@ -28,6 +29,9 @@ const (
func init() {
tmjson.RegisterType(PubKey{}, PubKeyName)
tmjson.RegisterType(PrivKey{}, PrivKeyName)
jsontypes.MustRegister(PubKey{})
jsontypes.MustRegister(PrivKey{})
}
var _ crypto.PrivKey = PrivKey{}
@@ -35,6 +39,9 @@ var _ crypto.PrivKey = PrivKey{}
// PrivKey implements PrivKey.
type PrivKey []byte
// TypeTag satisfies the jsontypes.Tagged interface.
func (PrivKey) TypeTag() string { return PrivKeyName }
// Bytes marshalls the private key using amino encoding.
func (privKey PrivKey) Bytes() []byte {
return []byte(privKey)
@@ -138,6 +145,9 @@ const PubKeySize = 33
// This prefix is followed with the x-coordinate.
type PubKey []byte
// TypeTag satisfies the jsontypes.Tagged interface.
func (PubKey) TypeTag() string { return PubKeyName }
// Address returns a Bitcoin style addresses: RIPEMD160(SHA256(pubkey))
func (pubKey PubKey) Address() crypto.Address {
if len(pubKey) != PubKeySize {
@@ -172,67 +182,3 @@ func (pubKey PubKey) Equals(other crypto.PubKey) bool {
func (pubKey PubKey) Type() string {
return KeyType
}
// used to reject malleable signatures
// see:
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/crypto.go#L39
var secp256k1halfN = new(big.Int).Rsh(secp256k1.S256().N, 1)
// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
// The returned signature will be of the form R || S (in lower-S form).
func (privKey PrivKey) Sign(msg []byte) ([]byte, error) {
priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey)
sig, err := priv.Sign(crypto.Sha256(msg))
if err != nil {
return nil, err
}
sigBytes := serializeSig(sig)
return sigBytes, nil
}
// VerifySignature verifies a signature of the form R || S.
// It rejects signatures which are not in lower-S form.
func (pubKey PubKey) VerifySignature(msg []byte, sigStr []byte) bool {
if len(sigStr) != 64 {
return false
}
pub, err := secp256k1.ParsePubKey(pubKey, secp256k1.S256())
if err != nil {
return false
}
// parse the signature:
signature := signatureFromBytes(sigStr)
// Reject malleable signatures. libsecp256k1 does this check but btcec doesn't.
// see: https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
if signature.S.Cmp(secp256k1halfN) > 0 {
return false
}
return signature.Verify(crypto.Sha256(msg), pub)
}
// Read Signature struct from R || S. Caller needs to ensure
// that len(sigStr) == 64.
func signatureFromBytes(sigStr []byte) *secp256k1.Signature {
return &secp256k1.Signature{
R: new(big.Int).SetBytes(sigStr[:32]),
S: new(big.Int).SetBytes(sigStr[32:64]),
}
}
// Serialize signature to R || S.
// R, S are padded to 32 bytes respectively.
func serializeSig(sig *secp256k1.Signature) []byte {
rBytes := sig.R.Bytes()
sBytes := sig.S.Bytes()
sigBytes := make([]byte, 64)
// 0 pad the byte arrays from the left if they aren't big enough.
copy(sigBytes[32-len(rBytes):32], rBytes)
copy(sigBytes[64-len(sBytes):64], sBytes)
return sigBytes
}


@@ -0,0 +1,76 @@
//go:build !libsecp256k1
// +build !libsecp256k1
package secp256k1
import (
"math/big"
secp256k1 "github.com/btcsuite/btcd/btcec"
"github.com/tendermint/tendermint/crypto"
)
// used to reject malleable signatures
// see:
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
// - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/crypto.go#L39
var secp256k1halfN = new(big.Int).Rsh(secp256k1.S256().N, 1)
// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
// The returned signature will be of the form R || S (in lower-S form).
func (privKey PrivKey) Sign(msg []byte) ([]byte, error) {
priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey)
sig, err := priv.Sign(crypto.Sha256(msg))
if err != nil {
return nil, err
}
sigBytes := serializeSig(sig)
return sigBytes, nil
}
// VerifySignature verifies a signature of the form R || S.
// It rejects signatures which are not in lower-S form.
func (pubKey PubKey) VerifySignature(msg []byte, sigStr []byte) bool {
if len(sigStr) != 64 {
return false
}
pub, err := secp256k1.ParsePubKey(pubKey, secp256k1.S256())
if err != nil {
return false
}
// parse the signature:
signature := signatureFromBytes(sigStr)
// Reject malleable signatures. libsecp256k1 does this check but btcec doesn't.
// see: https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
if signature.S.Cmp(secp256k1halfN) > 0 {
return false
}
return signature.Verify(crypto.Sha256(msg), pub)
}
// Read Signature struct from R || S. Caller needs to ensure
// that len(sigStr) == 64.
func signatureFromBytes(sigStr []byte) *secp256k1.Signature {
return &secp256k1.Signature{
R: new(big.Int).SetBytes(sigStr[:32]),
S: new(big.Int).SetBytes(sigStr[32:64]),
}
}
// Serialize signature to R || S.
// R, S are padded to 32 bytes respectively.
func serializeSig(sig *secp256k1.Signature) []byte {
rBytes := sig.R.Bytes()
sBytes := sig.S.Bytes()
sigBytes := make([]byte, 64)
// 0 pad the byte arrays from the left if they aren't big enough.
copy(sigBytes[32-len(rBytes):32], rBytes)
copy(sigBytes[64-len(sBytes):64], sBytes)
return sigBytes
}

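VerifySignature above rejects any signature whose S component is greater than half the curve order; a signer whose output might land in the upper half can fold S back into lower-S form before serializing, since (R, S) and (R, N-S) both verify against the same message and key. A sketch of that normalization using the same btcec curve parameters (normalizeS is a hypothetical helper, not part of this change):

package secp256k1

import (
	"math/big"

	secp256k1 "github.com/btcsuite/btcd/btcec"
)

// normalizeS maps an ECDSA S value into lower-S form: if S > N/2, it is
// replaced by N - S, which the VerifySignature check above will accept.
func normalizeS(s *big.Int) *big.Int {
	curveN := secp256k1.S256().N
	halfN := new(big.Int).Rsh(curveN, 1)
	if s.Cmp(halfN) > 0 {
		return new(big.Int).Sub(curveN, s)
	}
	return s
}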

@@ -52,7 +52,7 @@ func TestSignAndValidateSecp256k1(t *testing.T) {
msg := crypto.CRandBytes(128)
sig, err := privKey.Sign(msg)
require.Nil(t, err)
require.NoError(t, err)
assert.True(t, pubKey.VerifySignature(msg, sig))


@@ -1,6 +1,9 @@
package sr25519
import tmjson "github.com/tendermint/tendermint/libs/json"
import (
"github.com/tendermint/tendermint/internal/jsontypes"
tmjson "github.com/tendermint/tendermint/libs/json"
)
const (
PrivKeyName = "tendermint/PrivKeySr25519"
@@ -10,4 +13,7 @@ const (
func init() {
tmjson.RegisterType(PubKey{}, PubKeyName)
tmjson.RegisterType(PrivKey{}, PrivKeyName)
jsontypes.MustRegister(PubKey{})
jsontypes.MustRegister(PrivKey{})
}


@@ -29,6 +29,9 @@ type PrivKey struct {
kp *sr25519.KeyPair
}
// TypeTag satisfies the jsontypes.Tagged interface.
func (PrivKey) TypeTag() string { return PrivKeyName }
// Bytes returns the byte-encoded PrivKey.
func (privKey PrivKey) Bytes() []byte {
if privKey.kp == nil {


@@ -23,6 +23,9 @@ const (
// PubKey implements crypto.PubKey.
type PubKey []byte
// TypeTag satisfies the jsontypes.Tagged interface.
func (PubKey) TypeTag() string { return PubKeyName }
// Address is the SHA256-20 of the raw pubkey bytes.
func (pubKey PubKey) Address() crypto.Address {
if len(pubKey) != PubKeySize {


@@ -18,7 +18,7 @@ func TestSignAndValidateSr25519(t *testing.T) {
msg := crypto.CRandBytes(128)
sig, err := privKey.Sign(msg)
require.Nil(t, err)
require.NoError(t, err)
// Test the signature
assert.True(t, pubKey.VerifySignature(msg, sig))

Some files were not shown because too many files have changed in this diff.