* Makefile: always pull image in proto-gen-docker. (#5953)
The `proto-gen-docker` target didn't pull an updated Docker image and would use a local image if present, which could be outdated and produce incorrect results.
* test: fix TestPEXReactorRunning data race (#5955)
Fixes #5941.
Not entirely sure that this will fix the problem (couldn't reproduce), but in any case this is an artifact of a hack in the P2P transport refactor to make it work with the legacy P2P stack, and will be removed when the refactor is done anyway.
* test/fuzz: move fuzz tests into this repo (#5918)
Co-authored-by: Emmanuel T Odeke <emmanuel@orijtech.com>
Closes #5907
- add init-corpus to blockchain reactor
- remove validator-set FromBytes test
now that we have proto, we don't need to test it! bye amino
- simplify mempool test
do we want to test remote ABCI app?
- do not recreate mux on every crash in jsonrpc test
- update p2p pex reactor test
- remove p2p/listener test
the API has changed, and I did not understand what it tested anyway
- update secretconnection test
- add readme and makefile
- list inputs in readme
- add nightly workflow
- remove blockchain fuzz test
EncodeMsg / DecodeMsg no longer exist
* docker: don't log in when in PR (#5961)
* docker: release Linux/ARM64 image (#5925)
Co-authored-by: Marko <marbar3778@yahoo.com>
* p2p: make PeerManager.DialNext() and EvictNext() block (#5947)
See #5936 and #5938 for background.
The plan was initially to have `DialNext()` and `EvictNext()` return a channel. However, implementing this became unnecessarily complicated and error-prone. As an example, the channel would be both consumed and populated (via method calls) by the same driving method (e.g. `Router.dialPeers()`) which could easily cause deadlocks where a method call blocked while sending on the channel that the caller itself was responsible for consuming (but couldn't since it was busy making the method call). It would also require a set of goroutines in the peer manager that would interact with the goroutines in the router in non-obvious ways, and fully populating the channel on startup could cause deadlocks with other startup tasks. Several issues like these made the solution hard to reason about.
I therefore simply made `DialNext()` and `EvictNext()` block until the next peer was available, using internal triggers to wake these methods up in a non-blocking fashion when any relevant state changes occurred. This proved much simpler to reason about, since there are no goroutines in the peer manager (except for trivial retry timers), nor any blocking channel sends, and it instead relies entirely on the existing goroutine structure of the router for concurrency. This also happens to be the same pattern used by the `Transport.Accept()` API, following Go stdlib conventions, so all router goroutines end up using a consistent pattern as well.
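A rough sketch of what the consuming side can look like under this blocking design; the types and signatures below are illustrative stand-ins, not the actual `p2p` package API:
```go
package p2p

import "context"

// peerManager captures just the blocking calls discussed above; the real
// PeerManager has a much richer API.
type peerManager interface {
	DialNext(ctx context.Context) (addr string, err error) // blocks until a peer should be dialed
	Dialed(addr string)
	DialFailed(addr string)
}

// dialPeers sketches the router's dial loop: since DialNext blocks, the loop
// needs no channel plumbing and simply hands each candidate to a goroutine.
func dialPeers(ctx context.Context, pm peerManager, dial func(context.Context, string) error) {
	for {
		addr, err := pm.DialNext(ctx)
		if err != nil {
			return // e.g. the context was cancelled during shutdown
		}
		go func(addr string) {
			if err := dial(ctx, addr); err != nil {
				pm.DialFailed(addr)
				return
			}
			pm.Dialed(addr)
		}(addr)
	}
}
```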
* libs/log: format []byte as hexadecimal string (uppercased) (#5960)
Closes: #5806
Co-authored-by: Lanie Hei <heixx011@umn.edu>
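For reference, the uppercased hexadecimal form corresponds to Go's `%X` formatting verb; a minimal example, not the actual logger code:
```go
package main

import "fmt"

func main() {
	b := []byte{0xde, 0xad, 0xbe, 0xef}
	// %X prints bytes as an uppercase hexadecimal string.
	fmt.Printf("%X\n", b) // prints DEADBEEF
}
```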
* docs: log level docs (#5945)
## Description
add section on configuring log levels
Closes: #XXX
* .github: fix fuzz-nightly job (#5965)
`outputs` is a property of the job, not of an individual step.
* e2e: add control over the log level of nodes (#5958)
* mempool: fix reactor tests (#5967)
## Description
Update the faux router to either drop channel errors or handle them based on an argument. This prevents deadlocks in tests where we try to send an error on the mempool channel but there is no reader.
Closes: #5956
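A hedged sketch of the idea behind the faux router change; the type and field names are illustrative, not the actual test code:
```go
// fauxRouter either delivers peer errors to a test reader or silently drops
// them, so a send never blocks when no reader is attached.
type fauxRouter struct {
	peerErrs chan error
	dropErrs bool // set by the test when there is no error reader
}

func (r *fauxRouter) sendError(err error) {
	if r.dropErrs {
		return // dropping avoids the deadlock described above
	}
	r.peerErrs <- err
}
```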
* p2p: improve peerStore prototype (#5954)
This improves the `peerStore` prototype in several ways (a rough sketch follows the list):
* Using a database with Protobuf for persistence, but also keeping the full peer set in memory for performance.
* Simplifying the API, by taking/returning struct copies for safety, and removing errors for in-memory operations.
* Caching the ranked peer set, as a temporary solution until a better data structure is implemented.
* Adding `PeerManagerOptions.MaxPeers` and pruning the peer store (based on rank) when it's full.
* Rewriting `PeerAddress` to be independent of `url.URL`, normalizing it and tightening semantics.
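A rough sketch of the resulting store shape; all names are illustrative, not the actual `p2p` code:
```go
// kvStore stands in for the persistence layer (values would be
// Protobuf-encoded); the real code uses the project's database interface.
type kvStore interface {
	Get(key []byte) ([]byte, error)
	Set(key, value []byte) error
}

type peerInfo struct {
	ID    string
	Score float64
}

type peerStore struct {
	db     kvStore             // persistence only
	peers  map[string]peerInfo // full peer set kept in memory for performance
	ranked []peerInfo          // cached ranking, invalidated on writes
}

// Get returns a copy and no error: in-memory reads cannot fail, and callers
// cannot mutate the store's internal state through the returned value.
func (s *peerStore) Get(id string) (peerInfo, bool) {
	p, ok := s.peers[id]
	return p, ok
}
```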
* p2p: simplify PeerManager upgrade logic (#5962)
Follow-up from #5947, branched off of #5954.
This simplifies the upgrade logic by adding explicit eviction requests, which can also be useful for other use-cases (e.g. if we need to ban a peer that's misbehaving). Changes:
* Add `evict` map which queues up peers to explicitly evict.
* `upgrading` now only tracks peers that we're upgrading via dialing (`DialNext` → `Dialed`/`DialFailed`).
* `Dialed` will unmark `upgrading`, and queue `evict` if still beyond capacity.
* `Accepted` will pick a random lower-scored peer to upgrade to, if appropriate, and doesn't care about `upgrading` (the dial will fail later, since it's already connected).
* `EvictNext` will return a peer scheduled in `evict` if any, otherwise if beyond capacity just evict the lowest-scored peer.
This limits all of the `upgrading` logic to `DialNext`, `Dialed`, and `DialFailed`, making it much simpler, and it should generally do the right thing in all cases I can think of.
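A non-blocking sketch of just the eviction selection described above (the real `EvictNext` also blocks until a peer is available; all names here are illustrative):
```go
type evictState struct {
	evict        map[string]bool    // peers explicitly queued for eviction
	connected    map[string]float64 // connected peer ID -> score
	maxConnected int
}

// evictNext returns an explicitly queued eviction if there is one, and
// otherwise evicts the lowest-scored peer only when over capacity.
func (s *evictState) evictNext() (string, bool) {
	for id := range s.evict {
		delete(s.evict, id)
		return id, true
	}
	if len(s.connected) <= s.maxConnected {
		return "", false
	}
	first := true
	var lowest string
	var lowestScore float64
	for id, score := range s.connected {
		if first || score < lowestScore {
			lowest, lowestScore, first = id, score, false
		}
	}
	return lowest, true
}
```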
* p2p: add PeerManager.Advertise() (#5957)
Adds a naïve `PeerManager.Advertise()` method that the new PEX reactor can use to fetch addresses to advertise, as well as some other `FIXME`s on address advertisement.
* blockchain v0: fix waitgroup data race (#5970)
## Description
Fixes the data race in usage of `WaitGroup`. Specifically, the case where we invoke `Wait` _before_ the first delta `Add` call when the current waitgroup counter is zero. See https://golang.org/pkg/sync/#WaitGroup.Add.
Still not sure how this manifests itself in a test since the reactor has to be stopped virtually immediately after being started (I think?).
Regardless, this is the appropriate fix.
closes: #5968
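A minimal illustration of the race and the fix, not the reactor code itself:
```go
package main

import "sync"

// racy: Add runs inside the goroutine, so Wait below may run while the
// counter is still zero and race with that first Add, which the
// sync.WaitGroup documentation forbids.
func racy(work func()) {
	var wg sync.WaitGroup
	go func() {
		wg.Add(1)
		defer wg.Done()
		work()
	}()
	wg.Wait()
}

// fixed: perform the Add before starting the goroutine, so the counter is
// already non-zero by the time Wait is called.
func fixed(work func()) {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		work()
	}()
	wg.Wait()
}
```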
* tests: fix `make test` (#5966)
## Description
- bump deadlock dep to master
- fixes `make test` since we now use `deadlock.Once`
Closes: #XXX
* terminate go-fuzz gracefully (w/ SIGINT) (#5973)
and preserve exit code.
```
2021/01/26 03:34:49 workers: 2, corpus: 4 (8m28s ago), crashers: 0, restarts: 1/9976, execs: 11013732 (21596/sec), cover: 121, uptime: 8m30s
make: *** [fuzz-mempool] Terminated
Makefile:5: recipe for target 'fuzz-mempool' failed
Error: Process completed with exit code 124.
```
https://github.com/tendermint/tendermint/runs/1766661614
`continue-on-error` should make GH ignore any error codes.
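A hedged sketch of the approach in plain Go (the actual change lives in the fuzz Makefile/CI scripts): run the fuzzer for a bounded time, stop it with SIGINT so it can shut down cleanly, and exit with the code it reported. The command and duration are illustrative.
```go
package main

import (
	"os"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command(os.Args[1], os.Args[2:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Start(); err != nil {
		os.Exit(1)
	}
	timer := time.AfterFunc(10*time.Minute, func() {
		// SIGINT instead of SIGKILL lets the fuzzer flush state and exit cleanly.
		_ = cmd.Process.Signal(syscall.SIGINT)
	})
	err := cmd.Wait()
	timer.Stop()
	if exitErr, ok := err.(*exec.ExitError); ok {
		os.Exit(exitErr.ExitCode()) // preserve the fuzzer's exit code
	}
	if err != nil {
		os.Exit(1)
	}
}
```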
* p2p: add prototype PEX reactor for new stack (#5971)
This adds a prototype PEX reactor for the new P2P stack.
* proto/p2p: rename PEX messages and fields (#5974)
Fixes #5899 by renaming a bunch of P2P Protobuf entities (while maintaining wire compatibility):
* `Message` to `PexMessage` (as it's only used for PEX messages).
* `PexAddrs` to `PexResponse`.
* `PexResponse.Addrs` to `PexResponse.Addresses`.
* `NetAddress` to `PexAddress` (as it's only used by PEX).
* p2p: resolve PEX addresses in PEX reactor (#5980)
This changes the new prototype PEX reactor to resolve peer address URLs into IP/port PEX addresses itself. Branched off of #5974.
I've spent some time thinking about address handling in the P2P stack. We currently use `PeerAddress` URLs everywhere, except for two places: when dialing a peer, and when exchanging addresses via PEX. We had two options:
1. Resolve addresses to endpoints inside `PeerManager`. This would introduce a lot of added complexity: we would have to track connection statistics per endpoint, have goroutines that asynchronously resolve and refresh these endpoints, deal with resolve scheduling before dialing (which is trickier than it sounds since it involves multiple goroutines in the peer manager and router and messes with peer rating order), handle IP address visibility issues, and so on.
2. Resolve addresses to endpoints (IP/port) only where they're used: when dialing, and in PEX. Everywhere else we use URLs.
I went with option 2, because it significantly simplifies the handling of hostname resolution, and because I really think the PEX reactor should migrate to exchanging URLs instead of IP/port numbers anyway -- this allows operators to use DNS names for validators (and to easily migrate them to new IPs and/or load balance requests), and also allows different protocols (e.g. QUIC and `MemoryTransport`). Happy to discuss this.
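A rough sketch of the resolution step in option 2 (illustrative only, not the actual reactor code): turn a peer address URL with a hostname into concrete IP/port endpoints just before dialing or advertising via PEX. The URL form in the comment is made up.
```go
package p2p

import (
	"context"
	"net"
	"net/url"
	"strconv"
)

// resolveEndpoints resolves a peer address URL, e.g. one of the form
// "tcp://validator.example.com:26656", into concrete IP/port pairs.
func resolveEndpoints(ctx context.Context, rawURL string) ([]net.TCPAddr, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return nil, err
	}
	port, err := strconv.Atoi(u.Port())
	if err != nil {
		return nil, err
	}
	ips, err := net.DefaultResolver.LookupIPAddr(ctx, u.Hostname())
	if err != nil {
		return nil, err
	}
	endpoints := make([]net.TCPAddr, 0, len(ips))
	for _, ip := range ips {
		endpoints = append(endpoints, net.TCPAddr{IP: ip.IP, Port: port})
	}
	return endpoints, nil
}
```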
* test/p2p: close transports to avoid goroutine leak failures (#5982)
* mempool: fix TestReactorNoBroadcastToSender (#5984)
## Description
Looks like I missed a test in the original PR when fixing the tests.
Closes: #5956
* mempool: fix mempool tests timeout (#5988)
* p2p: use stopCtx when dialing peers in Router (#5983)
This ensures we don't leak dial goroutines when shutting down the router.
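Roughly, the pattern looks like this (illustrative sketch, not the actual Router code): the dial uses a context derived from the router's stop signal, so in-flight dials are aborted on shutdown instead of leaking goroutines.
```go
package p2p

import (
	"context"
	"net"
)

// dialWithStop aborts the dial as soon as stopCtx is cancelled,
// e.g. when the router shuts down.
func dialWithStop(stopCtx context.Context, addr string) (net.Conn, error) {
	var d net.Dialer
	return d.DialContext(stopCtx, "tcp", addr)
}
```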
* docs: fix typo in state sync example (#5989)
Co-authored-by: Erik Grinaker <erik@interchain.berlin>
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
Co-authored-by: Marko <marbar3778@yahoo.com>
Co-authored-by: odidev <odidev@puresoftware.com>
Co-authored-by: Lanie Hei <heixx011@umn.edu>
Co-authored-by: Callum Waters <cmwaters19@gmail.com>
Co-authored-by: Aleksandr Bezobchuk <alexanderbez@users.noreply.github.com>
Co-authored-by: Sergey <52304443+c29r3@users.noreply.github.com>
The `NodeInfo` interface does not appear to serve any purpose at all, so I removed it and renamed the `DefaultNodeInfo` struct to `NodeInfo` (including the Protobuf representations). Let me know if this is actually needed for anything.
Only the Protobuf rename is listed in the changelog, since we do not officially support API stability of the `p2p` package (according to `README.md`). The on-wire protocol remains compatible.
This implements a new `Transport` interface and related types for the P2P refactor in #5670. Previously, `conn.MConnection` was very tightly coupled to the `Peer` implementation -- in order to allow alternative non-multiplexed transports (e.g. QUIC), MConnection has now been moved below the `Transport` interface, as `MConnTransport`, and decoupled from the peer. Since the `p2p` package is not covered by our Go API stability, this is not considered a breaking change, and not listed in the changelog.
The initial approach was to implement the new interface in its final form (which also involved possible protocol changes, see https://github.com/tendermint/spec/pull/227). However, it turned out that this would require a large amount of changes to existing P2P code because of the previous tight coupling between `Peer` and `MConnection` and the reliance on subtleties in the MConnection behavior. Instead, I have broadened the `Transport` interface to expose much of the existing MConnection interface, preserved much of the existing MConnection logic and behavior in the transport implementation, and tried to make as few changes to the rest of the P2P stack as possible. We will instead reduce this interface gradually as we refactor other parts of the P2P stack.
The low-level transport code and protocol (e.g. MConnection, SecretConnection and so on) has not been significantly changed, and refactoring this is not a priority until we come up with a plan for QUIC adoption, as we may end up discarding the MConnection code entirely.
There are no tests of the new `MConnTransport`, as this code is likely to evolve as we proceed with the P2P refactor, but tests should be added before a final release. The E2E tests are sufficient for basic validation in the meanwhile.
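For orientation, a heavily cut-down sketch of what such an interface can look like; the real `Transport` in the `p2p` package is broader, as explained above, and these names and signatures are illustrative only:
```go
package p2p

import "context"

// Connection is a single peer connection, multiplexed or not.
type Connection interface {
	ReceiveMessage() (channelID byte, msg []byte, err error)
	SendMessage(channelID byte, msg []byte) error
	Close() error
}

// Transport produces connections, e.g. via MConnTransport or a future QUIC
// implementation.
type Transport interface {
	// Accept waits for and returns the next inbound connection, following
	// the Go stdlib net.Listener convention.
	Accept(ctx context.Context) (Connection, error)
	Dial(ctx context.Context, endpoint string) (Connection, error)
	Close() error
}
```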
closes: #5770, closes: #5769
also, include node ID in the output (#5769) and modify NodeKey to use
value semantics (it makes perfect sense for NodeKey to not be a
pointer).
`abci.Client`:
- Sync and Async methods now accept a context for cancellation
* grpc client uses context to cancel both Sync and Async requests
* local client ignores context parameter
* socket client uses context to cancel Sync requests and to drop Async requests before sending them if context was cancelled prior to that
- Async methods return an error
* socket client returns an error immediately if queue is full for Async requests
* local client always returns nil error
* grpc client returns an error if the context was cancelled before we got a response or before the receiving queue had space for the response (not to be confused with the sending queue of the socket client)
- specify clients semantics in [doc.go](https://raw.githubusercontent.com/tendermint/tendermint/27112fffa62276bc016d56741f686f0f77931748/abci/client/doc.go)
`mempool.TxInfo`
- add optional `Context` to `TxInfo`, which can be used to cancel `CheckTx` request
Closes #5190
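A hedged usage sketch of these semantics; `abciClient` below is a cut-down stand-in rather than the real `abci.Client` interface (see the linked doc.go for the authoritative description):
```go
package main

import (
	"context"
	"time"
)

type abciClient interface {
	// Sync: blocks until the response arrives or ctx is cancelled.
	EchoSync(ctx context.Context, msg string) (string, error)
	// Async: may return an error immediately, e.g. when the socket client's
	// send queue is full or ctx was cancelled before the request was sent.
	EchoAsync(ctx context.Context, msg string) error
}

func ping(c abciClient) error {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	if _, err := c.EchoSync(ctx, "ping"); err != nil {
		return err
	}
	return c.EchoAsync(ctx, "ping")
}
```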
* config: rename prof_laddr to pprof_laddr and move it to rpc
also, remove `/unsafe_start_cpu_profiler`, `/unsafe_stop_cpu_profiler`
and `/unsafe_write_heap_profile` in favor of pprof server functionality.
Closes #5303
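For reference, the pprof server functionality replacing the removed endpoints is the standard `net/http/pprof` handler; a minimal sketch, where the listen address is illustrative (in Tendermint it comes from the new config value):
```go
package main

import (
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// Profiles become available at http://localhost:6060/debug/pprof/
	_ = http.ListenAndServe("localhost:6060", nil)
}
```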
* update changelog
* log start
State sync broke in #5231 since the genesis state is not propagated explicitly from `NewNode()` to `Node.OnStart()` and further into the state sync initialization. This is a hack until we can clean up the node startup process.
## Description
Add chainid to requests to privval. This is a non-breaking change and hardware devices can opt to ignore the field.
Closes: #4503
Took the approach of passing chainID to the client instead of modifying `GetPubKey`, because modifying `GetPubKey` would lead to a larger change throughout the codebase, and in some places it could be tricky to obtain the chainID.
Adds a genesis parameter `initial_height`, which specifies the initial block height, as well as ABCI `RequestInitChain.InitialHeight` to pass it to the ABCI application, and `State.InitialHeight` to keep track of the initial height throughout the code. Fixes #2543, based on [RFC-002](https://github.com/tendermint/spec/pull/119). Spec changes in https://github.com/tendermint/spec/pull/135.
Migrates the `rpc` package to use new JSON encoder in #4955. Branched off of that PR.
Tests pass, but I haven't done any manual testing beyond that. This should be handled as part of broader 0.34 testing.
Closes #4603
Commands used (VIM):
```
:args `rg -l errors.Wrap`
:argdo normal @q | update
```
where q is a macro rewriting `errors.Wrap` calls to `fmt.Errorf`.
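The resulting pattern, shown on an illustrative function (file name and message are made up): `fmt.Errorf` with the `%w` verb (Go 1.13+) replaces `errors.Wrap` while still letting `errors.Is`/`errors.As` unwrap the cause.
```go
package config

import (
	"fmt"
	"os"
)

func loadConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		// previously: return nil, errors.Wrap(err, "failed to read config")
		return nil, fmt.Errorf("failed to read config: %w", err)
	}
	return data, nil
}
```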
Integrates the blockchain v2 reactor with state sync, fixes #4765. This mostly involves deferring fast syncing until after state sync completes. I tried a few different approaches; this was the least effort:
* `Reactor.events` is `nil` if no fast sync is in progress, in which case events are not dispatched - most importantly `AddPeer`.
* Accept status messages from unknown peers in the scheduler and register them as ready. On fast sync startup, broadcast status requests to all existing peers.
* When switching from state sync, first send a `bcResetState` message to the processor and scheduler to update their states - most importantly the initial block height.
* When fast sync completes, shut down event loop, scheduler and processor, and set `events` channel to `nil`.
Fixes #828. Adds state sync, as outlined in [ADR-053](https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-053-state-sync-prototype.md). See related PRs in Cosmos SDK (https://github.com/cosmos/cosmos-sdk/pull/5803) and Gaia (https://github.com/cosmos/gaia/pull/327).
This is split out of the previous PR #4645, and branched off of the ABCI interface in #4704.
* Adds a new P2P reactor which exchanges snapshots with peers, and bootstraps an empty local node from remote snapshots when requested.
* Adds a new configuration section `[statesync]` that enables state sync and configures the light client. Also enables `statesync:info` logging by default.
* Integrates state sync into node startup. Does not support the v2 blockchain reactor, since it needs some reorganization to defer startup.
Closes: #4530
This PR contains logic both for submitting evidence from the light client (lite2 package) and for receiving it on the Tendermint side (the /broadcast_evidence RPC and/or EvidenceReactor#Receive). Upon receiving the ConflictingHeadersEvidence (introduced by this PR), Tendermint validates it, then breaks it down into smaller pieces (DuplicateVoteEvidence, LunaticValidatorEvidence, PhantomValidatorEvidence, PotentialAmnesiaEvidence). Afterwards, each piece of evidence is verified against the state of the full node and added to the pool, from which it's reaped upon block creation.
* rpc/client: do not pass height param if height ptr is nil
* rpc/core: validate incoming evidence!
* only accept ConflictingHeadersEvidence if one
of the headers is committed from this full node's perspective
This simplifies the code. Plus, if there are multiple forks, we'll likely receive multiple ConflictingHeadersEvidence anyway.
* swap CommitSig with Vote in LunaticValidatorEvidence
Vote is needed to validate signature
* no need to embed client
http is a provider and should not be used as a client
* blockchain: enable v2 to be set
- enable v2 to be set via config params
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* replace tab with space
* correctly spell usability
* format: add format cmd & goimport repo
- replaced format command
- added goimports to format command
- ran goimports
Signed-off-by: Marko Baricevic <marbar3778@yahoo.com>
* fix outliers & undo proto file changes