Compare commits


1 Commit

Author SHA1 Message Date
Cyrus Goh
5182ffee25 docs: master → docs-staging (#5990)
* Makefile: always pull image in proto-gen-docker. (#5953)

The `proto-gen-docker` target didn't pull an updated Docker image, and would use a local image if present, which could be outdated and produce wrong results.

* test: fix TestPEXReactorRunning data race (#5955)

Fixes #5941.

Not entirely sure that this will fix the problem (couldn't reproduce), but in any case this is an artifact of a hack in the P2P transport refactor to make it work with the legacy P2P stack, and will be removed when the refactor is done anyway.

* test/fuzz: move fuzz tests into this repo (#5918)

Co-authored-by: Emmanuel T Odeke <emmanuel@orijtech.com>

Closes #5907

- add init-corpus to blockchain reactor
- remove validator-set FromBytes test
now that we have proto, we don't need to test it! bye amino
- simplify mempool test
do we want to test remote ABCI app?
- do not recreate mux on every crash in jsonrpc test
- update p2p pex reactor test
- remove p2p/listener test
the API has changed + I did not understand what it tested anyway
- update secretconnection test
- add readme and makefile
- list inputs in readme
- add nightly workflow
- remove blockchain fuzz test
EncodeMsg / DecodeMsg no longer exist

* docker: don't login when in PR (#5961)

* docker: release Linux/ARM64 image (#5925)

Co-authored-by: Marko <marbar3778@yahoo.com>

* p2p: make PeerManager.DialNext() and EvictNext() block (#5947)

See #5936 and #5938 for background.

The plan was initially to have `DialNext()` and `EvictNext()` return a channel. However, implementing this became unnecessarily complicated and error-prone. As an example, the channel would be both consumed and populated (via method calls) by the same driving method (e.g. `Router.dialPeers()`), which could easily cause deadlocks where a method call blocked while sending on the channel that the caller itself was responsible for consuming (but couldn't, since it was busy making the method call). It would also require a set of goroutines in the peer manager that would interact with the goroutines in the router in non-obvious ways, and fully populating the channel on startup could cause deadlocks with other startup tasks. Several issues like these made the solution hard to reason about.

I therefore simply made `DialNext()` and `EvictNext()` block until the next peer was available, using internal triggers to wake these methods up in a non-blocking fashion when any relevant state changes occurred. This proved much simpler to reason about, since there are no goroutines in the peer manager (except for trivial retry timers), nor any blocking channel sends, and it instead relies entirely on the existing goroutine structure of the router for concurrency. This also happens to be the same pattern used by the `Transport.Accept()` API, following Go stdlib conventions, so all router goroutines end up using a consistent pattern as well.
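To make the shape of this concrete, here is a minimal sketch of the consuming side of such a blocking API; the `peerManager` interface, the `NodeID` type, and the signatures are assumptions for illustration, not the actual Tendermint code:

```go
package p2p

import "context"

type NodeID string

type peerManager interface {
	// DialNext blocks until a peer is ready to be dialed, or until ctx is done.
	DialNext(ctx context.Context) (NodeID, error)
}

// dialPeers is the router-side loop: no channel to keep fed, just a blocking
// call driven by the router's own goroutine.
func dialPeers(ctx context.Context, pm peerManager, dial func(context.Context, NodeID) error) error {
	for {
		peerID, err := pm.DialNext(ctx)
		if err != nil {
			return err // typically ctx.Err() on shutdown
		}
		go func(id NodeID) {
			// Dial failures are reported back to the peer manager elsewhere.
			_ = dial(ctx, id)
		}(peerID)
	}
}
```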

* libs/log: format []byte as hexadecimal string (uppercased) (#5960)

Closes: #5806 

Co-authored-by: Lanie Hei <heixx011@umn.edu>

* docs: log level docs (#5945)

## Description

add section on configuring log levels

Closes: #XXX

* .github: fix fuzz-nightly job (#5965)

`outputs` is a property of the job, not of an individual step.

* e2e: add control over the log level of nodes (#5958)

* mempool: fix reactor tests (#5967)

## Description

Update the faux router to either drop channel errors or handle them based on an argument. This prevents deadlocks in tests where we try to send an error on the mempool channel but there is no reader.
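As a sketch of the "drop channel errors" mode, under assumed names (`PeerError` and `drainPeerErrors` are illustrative, not the real test helpers):

```go
package mempool

// PeerError stands in for whatever error type the reactor sends on the channel.
type PeerError struct{ Msg string }

// drainPeerErrors discards everything sent on errCh, so reactor sends never
// block in tests that attach no real error handler.
func drainPeerErrors(errCh <-chan PeerError) {
	go func() {
		for range errCh {
		}
	}()
}
```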

Closes: #5956

* p2p: improve peerStore prototype (#5954)

This improves the `peerStore` prototype (a rough sketch follows the list), e.g. by:

* Using a database with Protobuf for persistence, but also keeping full peer set in memory for performance.
* Simplifying the API, by taking/returning struct copies for safety, and removing errors for in-memory operations.
* Caching the ranked peer set, as a temporary solution until a better data structure is implemented.
* Adding `PeerManagerOptions.MaxPeers` and pruning the peer store (based on rank) when it's full.
* Rewriting `PeerAddress` to be independent of `url.URL`, normalizing it and tightening semantics.
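A rough sketch of the struct-copy, in-memory shape described above, under assumed names (`peerInfo`, `NodeID`); database persistence of Protobuf records and the ranked-set cache are omitted:

```go
package p2p

type NodeID string

type peerInfo struct {
	ID    NodeID
	Score int
}

type peerStore struct {
	peers map[NodeID]peerInfo // full peer set kept in memory for performance
}

// Get returns a copy of the stored peer, so callers cannot mutate shared state.
func (s *peerStore) Get(id NodeID) (peerInfo, bool) {
	p, ok := s.peers[id]
	return p, ok
}

// Set stores a copy of the given peer; the real prototype would also persist
// a Protobuf record to the database and invalidate the ranked cache.
func (s *peerStore) Set(p peerInfo) {
	if s.peers == nil {
		s.peers = map[NodeID]peerInfo{}
	}
	s.peers[p.ID] = p
}
```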

* p2p: simplify PeerManager upgrade logic (#5962)

Follow-up from #5947, branched off of #5954.

This simplifies the upgrade logic by adding explicit eviction requests, which can also be useful for other use-cases (e.g. if we need to ban a peer that's misbehaving). Changes:

* Add `evict` map which queues up peers to explicitly evict.
* `upgrading` now only tracks peers that we're upgrading via dialing (`DialNext` → `Dialed`/`DialFailed`).
* `Dialed` will unmark `upgrading`, and queue `evict` if still beyond capacity.
* `Accepted` will pick a random lower-scored peer to upgrade to, if appropriate, and doesn't care about `upgrading` (the dial will fail later, since it's already connected).
* `EvictNext` will return a peer scheduled in `evict` if any, otherwise if beyond capacity just evict the lowest-scored peer.

This limits all of the `upgrading` logic to `DialNext`, `Dialed`, and `DialFailed`, making it much simpler, and it should generally do the right thing in all cases I can think of.
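A rough sketch of the eviction decision described above, with illustrative types and names (not the actual implementation): explicit requests queued in `evict` are served first, and the lowest-scored peer is only evicted when over capacity.

```go
package p2p

type NodeID string

type peerManager struct {
	evict     map[NodeID]bool // peers explicitly scheduled for eviction
	connected map[NodeID]int  // connected peers and their scores
	maxConns  int
}

// evictNext returns the next peer to evict, or "" if no eviction is needed.
func (m *peerManager) evictNext() NodeID {
	for id := range m.evict { // explicit eviction requests take priority
		delete(m.evict, id)
		return id
	}
	if len(m.connected) <= m.maxConns {
		return ""
	}
	var worst NodeID
	worstScore := int(^uint(0) >> 1) // max int
	for id, score := range m.connected {
		if score < worstScore {
			worst, worstScore = id, score
		}
	}
	return worst
}
```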

* p2p: add PeerManager.Advertise() (#5957)

Adds a naïve `PeerManager.Advertise()` method that the new PEX reactor can use to fetch addresses to advertise, as well as some other `FIXME`s on address advertisement.
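A naïve sketch of what such a method can look like, with assumed types and signature (`PeerAddress`, the `limit` parameter) rather than the actual API:

```go
package p2p

type PeerAddress string

type PeerManager struct {
	addresses []PeerAddress // all known peer addresses
}

// Advertise returns up to limit addresses suitable for advertising to peers.
// Filtering (e.g. not advertising a peer's own address back to it) is omitted.
func (m *PeerManager) Advertise(limit int) []PeerAddress {
	if limit > len(m.addresses) {
		limit = len(m.addresses)
	}
	out := make([]PeerAddress, limit)
	copy(out, m.addresses[:limit])
	return out
}
```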

* blockchain v0: fix waitgroup data race (#5970)

## Description

Fixes the data race in usage of `WaitGroup`. Specifically, the case where we invoke `Wait` _before_ the first `Add` call while the current WaitGroup counter is zero. See https://golang.org/pkg/sync/#WaitGroup.Add.

Still not sure how this manifests itself in a test since the reactor has to be stopped virtually immediately after being started (I think?).

Regardless, this is the appropriate fix.
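A minimal standalone illustration of the fix (not the reactor code itself): `Add` must happen before the goroutine is launched, and therefore before any concurrent `Wait` can observe a zero counter.

```go
package main

import "sync"

func main() {
	var wg sync.WaitGroup

	// Correct: Add before starting the goroutine, so Wait cannot race with the
	// first Add while the counter is zero.
	wg.Add(1)
	go func() {
		defer wg.Done()
		// ... poolRoutine work ...
	}()

	wg.Wait()
}
```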

closes: #5968

* tests: fix `make test` (#5966)

## Description
 
- bump deadlock dep to master
  - fixes `make test` since we now use `deadlock.Once`

Closes: #XXX

* terminate go-fuzz gracefully (w/ SIGINT) (#5973)

and preserve exit code.

```
2021/01/26 03:34:49 workers: 2, corpus: 4 (8m28s ago), crashers: 0, restarts: 1/9976, execs: 11013732 (21596/sec), cover: 121, uptime: 8m30s
make: *** [fuzz-mempool] Terminated
Makefile:5: recipe for target 'fuzz-mempool' failed
Error: Process completed with exit code 124.
```

https://github.com/tendermint/tendermint/runs/1766661614

`continue-on-error` should make GH ignore any error codes.

* p2p: add prototype PEX reactor for new stack (#5971)

This adds a prototype PEX reactor for the new P2P stack.

* proto/p2p: rename PEX messages and fields (#5974)

Fixes #5899 by renaming a bunch of P2P Protobuf entities (while maintaining wire compatibility):

* `Message` to `PexMessage` (as it's only used for PEX messages).
* `PexAddrs` to `PexResponse`.
* `PexResponse.Addrs` to `PexResponse.Addresses`.
* `NetAddress` to `PexAddress` (as it's only used by PEX).

* p2p: resolve PEX addresses in PEX reactor (#5980)

This changes the new prototype PEX reactor to resolve peer address URLs into IP/port PEX addresses itself. Branched off of #5974.

I've spent some time thinking about address handling in the P2P stack. We currently use `PeerAddress` URLs everywhere, except for two places: when dialing a peer, and when exchanging addresses via PEX. We had two options:

1. Resolve addresses to endpoints inside `PeerManager`. This would introduce a lot of added complexity: we would have to track connection statistics per endpoint, have goroutines that asynchronously resolve and refresh these endpoints, deal with resolve scheduling before dialing (which is trickier than it sounds since it involves multiple goroutines in the peer manager and router and messes with peer rating order), handle IP address visibility issues, and so on.

2. Resolve addresses to endpoints (IP/port) only where they're used: when dialing, and in PEX. Everywhere else we use URLs.

I went with option 2, because it significantly simplifies hostname resolution handling, and because I really think the PEX reactor should migrate to exchanging URLs instead of IP/port numbers anyway -- this lets operators use DNS names for validators (and easily migrate them to new IPs and/or load balance requests), and also allows different protocols (e.g. QUIC and `MemoryTransport`). Happy to discuss this.
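As a sketch of what "resolve only at the point of use" can look like, assuming an illustrative `Endpoint` type and helper (not the actual resolver code):

```go
package p2p

import (
	"context"
	"net"
)

// Endpoint is an illustrative IP/port pair produced from a peer address URL.
type Endpoint struct {
	IP   net.IP
	Port uint16
}

// resolveEndpoints resolves a hostname into one endpoint per returned IP,
// which is roughly what dialing and PEX need from a URL-based PeerAddress.
func resolveEndpoints(ctx context.Context, host string, port uint16) ([]Endpoint, error) {
	ips, err := net.DefaultResolver.LookupIP(ctx, "ip", host)
	if err != nil {
		return nil, err
	}
	endpoints := make([]Endpoint, 0, len(ips))
	for _, ip := range ips {
		endpoints = append(endpoints, Endpoint{IP: ip, Port: port})
	}
	return endpoints, nil
}
```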

* test/p2p: close transports to avoid goroutine leak failures (#5982)

* mempool: fix TestReactorNoBroadcastToSender (#5984)

## Description

Looks like I missed a test in the original PR when fixing the tests.

Closes: #5956

* mempool: fix mempool tests timeout (#5988)

* p2p: use stopCtx when dialing peers in Router (#5983)

This ensures we don't leak dial goroutines when shutting down the router.
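A small sketch of the pattern (illustrative only): dialing with a context means canceling the router's stop context aborts any in-flight dial instead of leaving the goroutine hanging.

```go
package p2p

import (
	"context"
	"net"
)

// dialPeer dials with ctx so that cancellation (e.g. the router's stopCtx
// firing on shutdown) aborts the dial and lets the goroutine return.
func dialPeer(ctx context.Context, addr string) (net.Conn, error) {
	var d net.Dialer
	return d.DialContext(ctx, "tcp", addr)
}
```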

* docs: fix typo in state sync example (#5989)

Co-authored-by: Erik Grinaker <erik@interchain.berlin>
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
Co-authored-by: Marko <marbar3778@yahoo.com>
Co-authored-by: odidev <odidev@puresoftware.com>
Co-authored-by: Lanie Hei <heixx011@umn.edu>
Co-authored-by: Callum Waters <cmwaters19@gmail.com>
Co-authored-by: Aleksandr Bezobchuk <alexanderbez@users.noreply.github.com>
Co-authored-by: Sergey <52304443+c29r3@users.noreply.github.com>
2021-01-26 11:46:21 -08:00
62 changed files with 5045 additions and 1043 deletions

View File

@@ -14,9 +14,6 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v2
with:
go-version: "1.15"
- uses: actions/checkout@master
- name: Prepare
id: prep
@@ -37,23 +34,26 @@ jobs:
fi
echo ::set-output name=tags::${TAGS}
- name: Set up QEMU
uses: docker/setup-qemu-action@master
with:
platforms: all
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Login to DockerHub
if: ${{ github.event_name != 'pull_request' }}
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build Tendermint
run: |
make build-linux && cp build/tendermint DOCKER/tendermint
- name: Publish to Docker Hub
uses: docker/build-push-action@v2
with:
context: ./DOCKER
context: .
file: ./DOCKER/Dockerfile
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.prep.outputs.tags }}

69
.github/workflows/fuzz-nightly.yml vendored Normal file
View File

@@ -0,0 +1,69 @@
# Runs fuzzing nightly.
name: fuzz-nightly
on:
workflow_dispatch: # allow running workflow manually
schedule:
- cron: '0 3 * * *'
jobs:
fuzz-nightly-test:
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v2
with:
go-version: '1.15'
- uses: actions/checkout@v2
- name: Install go-fuzz
working-directory: test/fuzz
run: go get -u github.com/dvyukov/go-fuzz/go-fuzz github.com/dvyukov/go-fuzz/go-fuzz-build
- name: Fuzz mempool
working-directory: test/fuzz
run: timeout -s SIGINT --preserve-status 10m make fuzz-mempool
continue-on-error: true
- name: Fuzz p2p-addrbook
working-directory: test/fuzz
run: timeout -s SIGINT --preserve-status 10m make fuzz-p2p-addrbook
continue-on-error: true
- name: Fuzz p2p-pex
working-directory: test/fuzz
run: timeout -s SIGINT --preserve-status 10m make fuzz-p2p-pex
continue-on-error: true
- name: Fuzz p2p-sc
working-directory: test/fuzz
run: timeout -s SIGINT --preserve-status 10m make fuzz-p2p-sc
continue-on-error: true
- name: Fuzz p2p-rpc-server
working-directory: test/fuzz
run: timeout -s SIGINT --preserve-status 10m make fuzz-rpc-server
continue-on-error: true
- name: Set crashers count
working-directory: test/fuzz
run: echo "::set-output name=crashers-count::$(find . -type d -name "crashers" | xargs -I % sh -c 'ls % | wc -l' | awk '{total += $1} END {print total}')"
id: set-crashers-count
outputs:
crashers_count: ${{ steps.set-crashers-count.outputs.crashers-count }}
fuzz-nightly-fail:
needs: fuzz-nightly-test
if: ${{ needs.set-crashers-count.outputs.crashers-count != 0 }}
runs-on: ubuntu-latest
steps:
- name: Notify Slack if any crashers
uses: rtCamp/action-slack-notify@ae4223259071871559b6e9d08b24a63d71b3f0c0
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACK_CHANNEL: tendermint-internal
SLACK_USERNAME: Nightly Fuzz Tests
SLACK_ICON_EMOJI: ':firecracker:'
SLACK_COLOR: danger
SLACK_MESSAGE: Crashers found in Nightly Fuzz tests
SLACK_FOOTER: ''

6
.gitignore vendored
View File

@@ -40,4 +40,8 @@ test/e2e/networks/*/
test/logs
test/maverick/maverick
test/p2p/data/
vendor
vendor
test/fuzz/**/corpus
test/fuzz/**/crashers
test/fuzz/**/suppressions
test/fuzz/**/*.zip

View File

@@ -1,4 +1,14 @@
FROM alpine:3.9
# stage 1 Generate Tendermint Binary
FROM golang:1.15-alpine as builder
RUN apk update && \
apk upgrade && \
apk --no-cache add make
COPY / /tendermint
WORKDIR /tendermint
RUN make build-linux
# stage 2
FROM golang:1.15-alpine
LABEL maintainer="hello@tendermint.com"
# Tendermint will be looking for the genesis file in /tendermint/config/genesis.json
@@ -29,15 +39,14 @@ EXPOSE 26656 26657 26660
STOPSIGNAL SIGTERM
ARG BINARY=tendermint
COPY $BINARY /usr/bin/tendermint
COPY --from=builder /tendermint/build/tendermint /usr/bin/tendermint
# You can overwrite these before the first run to influence
# config.json and genesis.json. Additionally, you can override
# CMD to add parameters to `tendermint node`.
ENV PROXY_APP=kvstore MONIKER=dockernode CHAIN_ID=dockerchain
COPY ./docker-entrypoint.sh /usr/local/bin/
COPY ./DOCKER/docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["start"]

View File

@@ -94,6 +94,7 @@ proto-gen:
.PHONY: proto-gen
proto-gen-docker:
@docker pull -q tendermintdev/docker-build-proto
@echo "Generating Protobuf files"
@docker run -v $(shell pwd):/workspace --workdir /workspace tendermintdev/docker-build-proto sh ./scripts/protocgen.sh
.PHONY: proto-gen-docker

View File

@@ -150,6 +150,7 @@ func (r *Reactor) OnStart() error {
return err
}
r.poolWG.Add(1)
go r.poolRoutine(false)
}
@@ -354,7 +355,9 @@ func (r *Reactor) SwitchToFastSync(state sm.State) error {
return err
}
r.poolWG.Add(1)
go r.poolRoutine(true)
return nil
}
@@ -426,7 +429,6 @@ func (r *Reactor) poolRoutine(stateSynced bool) {
go r.requestRoutine()
r.poolWG.Add(1)
defer r.poolWG.Done()
FOR_LOOP:

View File

@@ -255,7 +255,7 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
h.logger.Info("ABCI Handshake App Info",
"height", blockHeight,
"hash", fmt.Sprintf("%X", appHash),
"hash", appHash,
"software-version", res.Version,
"protocol-version", res.AppVersion,
)
@@ -272,7 +272,7 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
}
h.logger.Info("Completed ABCI Handshake - Tendermint and App are synced",
"appHeight", blockHeight, "appHash", fmt.Sprintf("%X", appHash))
"appHeight", blockHeight, "appHash", appHash)
// TODO: (on restart) replay mempool

View File

@@ -15,6 +15,7 @@ This section will focus on how to operate full nodes, validators and light clien
- [Light Client guides](./light-client.md)
- [How to sync a light client](./light-client.md#)
- [Metrics](./metrics.md)
- [Logging](./logging.md)
## Node Types

View File

@@ -494,4 +494,4 @@ This section will cover settings within the p2p section of the `config.toml`.
- `unconditional-peer-ids` = is similar to `persistent-peers` except that these peers will be connected to even if you are already connected to the maximum number of peers. This can be a validator node ID on your sentry node.
- `pex` = turns the peer exchange reactor on or off. Validator node will want the `pex` turned off so it would not begin gossiping to unknown peers on the network. PeX can also be turned off for statically configured networks with fixed network connectivity. For full nodes on open, dynamic networks, it should be turned on.
- `seed-mode` = is used for when node operators want to run their node as a seed node. Seed node's run a variation of the PeX protocol that disconnects from peers after sending them a list of peers to connect to. To minimize the servers usage, it is recommended to set the mempool's size to 0.
- `private-peer-ids` = is a comma separated list of node ids that you would not like exposed to other peers (ie. you will not tell other peers about the private-peer-ids). This can be filled with a validators node id.
- `private-peer-ids` = is a comma separated list of node ids that you would not like exposed to other peers (ie. you will not tell other peers about the private-peer-ids). This can be filled with a validators node id.

171
docs/nodes/logging.md Normal file
View File

@@ -0,0 +1,171 @@
---
order: 7
---
## Logging
Logging adds detail and allows the node operator to better identify what they are looking for. Tendermint supports log levels on a global and per-module basis. This allows the node operator to see only the information they need and the developer to hone in on specific changes they are working on.
## Configuring Log Levels
There are three log levels, `info`, `debug` and `error`. These can be configured either through the command line via `tendermint start --log-level ""` or within the `config.toml` file.
- `info` Info represents an informational message. It is used to show that modules have started, stopped and how they are functioning.
- `debug` Debug is used to trace various calls or problems. Debug is used widely throughout a codebase and can lead to overly verbose logging.
- `error` Error represents something that has gone wrong. An error log can represent a potential problem that can lead to a node halt.
The default setting is a global `main:info,state:info,statesync:info,*:error` level. If you would like to set the log level for a specific module, it can be done in the following format:
> We are setting all modules to log level `info` and the mempool to `error`. This will log all errors within the mempool module.
Within the `config.toml`:
```toml
# Output level for logging, including package level options
log-level = "*:info,mempool:error"
```
Via the command line:
```sh
tendermint start --log-level "*:info,mempool:error"
```
## List of modules
Here is the list of modules you may encounter in Tendermint's log and a
little overview what they do.
- `abci-client` As mentioned in [Application Development Guide](../app-dev/app-development.md), Tendermint acts as an ABCI
client with respect to the application and maintains 3 connections:
mempool, consensus and query. The code used by Tendermint Core can
be found [here](https://github.com/tendermint/tendermint/tree/master/abci/client).
- `blockchain` Provides storage, pool (a group of peers), and reactor
for both storing and exchanging blocks between peers.
- `consensus` The heart of Tendermint core, which is the
implementation of the consensus algorithm. Includes two
"submodules": `wal` (write-ahead logging) for ensuring data
integrity and `replay` to replay blocks and messages on recovery
from a crash.
- `events` Simple event notification system. The list of events can be
found
[here](https://github.com/tendermint/tendermint/blob/master/types/events.go).
You can subscribe to them by calling `subscribe` RPC method. Refer
to [RPC docs](./rpc.md) for additional information.
- `mempool` Mempool module handles all incoming transactions, whenever
they are coming from peers or the application.
- `p2p` Provides an abstraction around peer-to-peer communication. For
more details, please check out the
[README](https://github.com/tendermint/tendermint/blob/master/p2p/README.md).
- `rpc-server` RPC server. For implementation details, please read the
[doc.go](https://github.com/tendermint/tendermint/blob/master/rpc/jsonrpc/doc.go).
- `state` Represents the latest state and execution submodule, which
executes blocks against the application.
- `statesync` Provides a way to quickly sync a node with pruned history.
### Walkabout example
We first create three connections (mempool, consensus and query) to the
application (running `kvstore` locally in this case).
```sh
I[10-04|13:54:27.364] Starting multiAppConn module=proxy impl=multiAppConn
I[10-04|13:54:27.366] Starting localClient module=abci-client connection=query impl=localClient
I[10-04|13:54:27.366] Starting localClient module=abci-client connection=mempool impl=localClient
I[10-04|13:54:27.367] Starting localClient module=abci-client connection=consensus impl=localClient
```
Then Tendermint Core and the application perform a handshake.
```sh
I[10-04|13:54:27.367] ABCI Handshake module=consensus appHeight=90 appHash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
I[10-04|13:54:27.368] ABCI Replay Blocks module=consensus appHeight=90 storeHeight=90 stateHeight=90
I[10-04|13:54:27.368] Completed ABCI Handshake - Tendermint and App are synced module=consensus appHeight=90 appHash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
```
After that, we start a few more things like the event switch, reactors,
and perform UPNP discover in order to detect the IP address.
```sh
I[10-04|13:54:27.374] Starting EventSwitch module=types impl=EventSwitch
I[10-04|13:54:27.375] This node is a validator module=consensus
I[10-04|13:54:27.379] Starting Node module=main impl=Node
I[10-04|13:54:27.381] Local listener module=p2p ip=:: port=26656
I[10-04|13:54:27.382] Getting UPNP external address module=p2p
I[10-04|13:54:30.386] Could not perform UPNP discover module=p2p err="write udp4 0.0.0.0:38238->239.255.255.250:1900: i/o timeout"
I[10-04|13:54:30.386] Starting DefaultListener module=p2p impl=Listener(@10.0.2.15:26656)
I[10-04|13:54:30.387] Starting P2P Switch module=p2p impl="P2P Switch"
I[10-04|13:54:30.387] Starting MempoolReactor module=mempool impl=MempoolReactor
I[10-04|13:54:30.387] Starting BlockchainReactor module=blockchain impl=BlockchainReactor
I[10-04|13:54:30.387] Starting ConsensusReactor module=consensus impl=ConsensusReactor
I[10-04|13:54:30.387] ConsensusReactor module=consensus fastSync=false
I[10-04|13:54:30.387] Starting ConsensusState module=consensus impl=ConsensusState
I[10-04|13:54:30.387] Starting WAL module=consensus wal=/home/vagrant/.tendermint/data/cs.wal/wal impl=WAL
I[10-04|13:54:30.388] Starting TimeoutTicker module=consensus impl=TimeoutTicker
```
Notice the second row where Tendermint Core reports that "This node is a
validator". It also could be just an observer (regular node).
Next we replay all the messages from the WAL.
```sh
I[10-04|13:54:30.390] Catchup by replaying consensus messages module=consensus height=91
I[10-04|13:54:30.390] Replay: New Step module=consensus height=91 round=0 step=RoundStepNewHeight
I[10-04|13:54:30.390] Replay: Done module=consensus
```
"Started node" message signals that everything is ready for work.
```sh
I[10-04|13:54:30.391] Starting RPC HTTP server on tcp socket 0.0.0.0:26657 module=rpc-server
I[10-04|13:54:30.392] Started node module=main nodeInfo="NodeInfo{id: DF22D7C92C91082324A1312F092AA1DA197FA598DBBFB6526E, moniker: anonymous, network: test-chain-3MNw2N [remote , listen 10.0.2.15:26656], version: 0.11.0-10f361fc ([wire_version=0.6.2 p2p_version=0.5.0 consensus_version=v1/0.2.2 rpc_version=0.7.0/3 tx_index=on rpc_addr=tcp://0.0.0.0:26657])}"
```
Next follows a standard block creation cycle, where we enter a new
round, propose a block, receive more than 2/3 of prevotes, then
precommits and finally have a chance to commit a block. For details,
please refer to [Byzantine Consensus
Algorithm](https://github.com/tendermint/spec/blob/master/spec/consensus/consensus.md).
```sh
I[10-04|13:54:30.393] enterNewRound(91/0). Current: 91/0/RoundStepNewHeight module=consensus
I[10-04|13:54:30.393] enterPropose(91/0). Current: 91/0/RoundStepNewRound module=consensus
I[10-04|13:54:30.393] enterPropose: Our turn to propose module=consensus proposer=125B0E3C5512F5C2B0E1109E31885C4511570C42 privValidator="PrivValidator{125B0E3C5512F5C2B0E1109E31885C4511570C42 LH:90, LR:0, LS:3}"
I[10-04|13:54:30.394] Signed proposal module=consensus height=91 round=0 proposal="Proposal{91/0 1:21B79872514F (-1,:0:000000000000) {/10EDEDD7C84E.../}}"
I[10-04|13:54:30.397] Received complete proposal block module=consensus height=91 hash=F671D562C7B9242900A286E1882EE64E5556FE9E
I[10-04|13:54:30.397] enterPrevote(91/0). Current: 91/0/RoundStepPropose module=consensus
I[10-04|13:54:30.397] enterPrevote: ProposalBlock is valid module=consensus height=91 round=0
I[10-04|13:54:30.398] Signed and pushed vote module=consensus height=91 round=0 vote="Vote{0:125B0E3C5512 91/00/1(Prevote) F671D562C7B9 {/89047FFC21D8.../}}" err=null
I[10-04|13:54:30.401] Added to prevote module=consensus vote="Vote{0:125B0E3C5512 91/00/1(Prevote) F671D562C7B9 {/89047FFC21D8.../}}" prevotes="VoteSet{H:91 R:0 T:1 +2/3:F671D562C7B9242900A286E1882EE64E5556FE9E:1:21B79872514F BA{1:X} map[]}"
I[10-04|13:54:30.401] enterPrecommit(91/0). Current: 91/0/RoundStepPrevote module=consensus
I[10-04|13:54:30.401] enterPrecommit: +2/3 prevoted proposal block. Locking module=consensus hash=F671D562C7B9242900A286E1882EE64E5556FE9E
I[10-04|13:54:30.402] Signed and pushed vote module=consensus height=91 round=0 vote="Vote{0:125B0E3C5512 91/00/2(Precommit) F671D562C7B9 {/80533478E41A.../}}" err=null
I[10-04|13:54:30.404] Added to precommit module=consensus vote="Vote{0:125B0E3C5512 91/00/2(Precommit) F671D562C7B9 {/80533478E41A.../}}" precommits="VoteSet{H:91 R:0 T:2 +2/3:F671D562C7B9242900A286E1882EE64E5556FE9E:1:21B79872514F BA{1:X} map[]}"
I[10-04|13:54:30.404] enterCommit(91/0). Current: 91/0/RoundStepPrecommit module=consensus
I[10-04|13:54:30.405] Finalizing commit of block with 0 txs module=consensus height=91 hash=F671D562C7B9242900A286E1882EE64E5556FE9E root=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
I[10-04|13:54:30.405] Block{
Header{
ChainID: test-chain-3MNw2N
Height: 91
Time: 2017-10-04 13:54:30.393 +0000 UTC
NumTxs: 0
LastBlockID: F15AB8BEF9A6AAB07E457A6E16BC410546AA4DC6:1:D505DA273544
LastCommit: 56FEF2EFDB8B37E9C6E6D635749DF3169D5F005D
Data:
Validators: CE25FBFF2E10C0D51AA1A07C064A96931BC8B297
App: E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
}#F671D562C7B9242900A286E1882EE64E5556FE9E
Data{
}#
Commit{
BlockID: F15AB8BEF9A6AAB07E457A6E16BC410546AA4DC6:1:D505DA273544
Precommits: Vote{0:125B0E3C5512 90/00/2(Precommit) F15AB8BEF9A6 {/FE98E2B956F0.../}}
}#56FEF2EFDB8B37E9C6E6D635749DF3169D5F005D
}#F671D562C7B9242900A286E1882EE64E5556FE9E module=consensus
I[10-04|13:54:30.408] Executed block module=state height=91 validTxs=0 invalidTxs=0
I[10-04|13:54:30.410] Committed state module=state height=91 txs=0 hash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
I[10-04|13:54:30.410] Recheck txs module=mempool numtxs=0 height=91
```

View File

@@ -29,7 +29,7 @@ If you are relying on publicly exposed RPC's to get the need information, you ca
Example:
```bash
curl -s https://233.123.0.140:26657:26657/commit | jq "{height: .result.signed_header.header.height, hash: .result.signed_header.commit.block_id.hash}"
curl -s https://233.123.0.140:26657/commit | jq "{height: .result.signed_header.header.height, hash: .result.signed_header.commit.block_id.hash}"
```
The response will be:

View File

@@ -1,145 +1,7 @@
---
order: 7
order: false
---
# How to read logs
## Walkabout example
We first create three connections (mempool, consensus and query) to the
application (running `kvstore` locally in this case).
```sh
I[10-04|13:54:27.364] Starting multiAppConn module=proxy impl=multiAppConn
I[10-04|13:54:27.366] Starting localClient module=abci-client connection=query impl=localClient
I[10-04|13:54:27.366] Starting localClient module=abci-client connection=mempool impl=localClient
I[10-04|13:54:27.367] Starting localClient module=abci-client connection=consensus impl=localClient
```
Then Tendermint Core and the application perform a handshake.
```sh
I[10-04|13:54:27.367] ABCI Handshake module=consensus appHeight=90 appHash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
I[10-04|13:54:27.368] ABCI Replay Blocks module=consensus appHeight=90 storeHeight=90 stateHeight=90
I[10-04|13:54:27.368] Completed ABCI Handshake - Tendermint and App are synced module=consensus appHeight=90 appHash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
```
After that, we start a few more things like the event switch, reactors,
and perform UPNP discover in order to detect the IP address.
```sh
I[10-04|13:54:27.374] Starting EventSwitch module=types impl=EventSwitch
I[10-04|13:54:27.375] This node is a validator module=consensus
I[10-04|13:54:27.379] Starting Node module=main impl=Node
I[10-04|13:54:27.381] Local listener module=p2p ip=:: port=26656
I[10-04|13:54:27.382] Getting UPNP external address module=p2p
I[10-04|13:54:30.386] Could not perform UPNP discover module=p2p err="write udp4 0.0.0.0:38238->239.255.255.250:1900: i/o timeout"
I[10-04|13:54:30.386] Starting DefaultListener module=p2p impl=Listener(@10.0.2.15:26656)
I[10-04|13:54:30.387] Starting P2P Switch module=p2p impl="P2P Switch"
I[10-04|13:54:30.387] Starting MempoolReactor module=mempool impl=MempoolReactor
I[10-04|13:54:30.387] Starting BlockchainReactor module=blockchain impl=BlockchainReactor
I[10-04|13:54:30.387] Starting ConsensusReactor module=consensus impl=ConsensusReactor
I[10-04|13:54:30.387] ConsensusReactor module=consensus fastSync=false
I[10-04|13:54:30.387] Starting ConsensusState module=consensus impl=ConsensusState
I[10-04|13:54:30.387] Starting WAL module=consensus wal=/home/vagrant/.tendermint/data/cs.wal/wal impl=WAL
I[10-04|13:54:30.388] Starting TimeoutTicker module=consensus impl=TimeoutTicker
```
Notice the second row where Tendermint Core reports that "This node is a
validator". It also could be just an observer (regular node).
Next we replay all the messages from the WAL.
```sh
I[10-04|13:54:30.390] Catchup by replaying consensus messages module=consensus height=91
I[10-04|13:54:30.390] Replay: New Step module=consensus height=91 round=0 step=RoundStepNewHeight
I[10-04|13:54:30.390] Replay: Done module=consensus
```
"Started node" message signals that everything is ready for work.
```sh
I[10-04|13:54:30.391] Starting RPC HTTP server on tcp socket 0.0.0.0:26657 module=rpc-server
I[10-04|13:54:30.392] Started node module=main nodeInfo="NodeInfo{id: DF22D7C92C91082324A1312F092AA1DA197FA598DBBFB6526E, moniker: anonymous, network: test-chain-3MNw2N [remote , listen 10.0.2.15:26656], version: 0.11.0-10f361fc ([wire_version=0.6.2 p2p_version=0.5.0 consensus_version=v1/0.2.2 rpc_version=0.7.0/3 tx_index=on rpc_addr=tcp://0.0.0.0:26657])}"
```
Next follows a standard block creation cycle, where we enter a new
round, propose a block, receive more than 2/3 of prevotes, then
precommits and finally have a chance to commit a block. For details,
please refer to [Byzantine Consensus
Algorithm](https://github.com/tendermint/spec/blob/master/spec/consensus/consensus.md).
```sh
I[10-04|13:54:30.393] enterNewRound(91/0). Current: 91/0/RoundStepNewHeight module=consensus
I[10-04|13:54:30.393] enterPropose(91/0). Current: 91/0/RoundStepNewRound module=consensus
I[10-04|13:54:30.393] enterPropose: Our turn to propose module=consensus proposer=125B0E3C5512F5C2B0E1109E31885C4511570C42 privValidator="PrivValidator{125B0E3C5512F5C2B0E1109E31885C4511570C42 LH:90, LR:0, LS:3}"
I[10-04|13:54:30.394] Signed proposal module=consensus height=91 round=0 proposal="Proposal{91/0 1:21B79872514F (-1,:0:000000000000) {/10EDEDD7C84E.../}}"
I[10-04|13:54:30.397] Received complete proposal block module=consensus height=91 hash=F671D562C7B9242900A286E1882EE64E5556FE9E
I[10-04|13:54:30.397] enterPrevote(91/0). Current: 91/0/RoundStepPropose module=consensus
I[10-04|13:54:30.397] enterPrevote: ProposalBlock is valid module=consensus height=91 round=0
I[10-04|13:54:30.398] Signed and pushed vote module=consensus height=91 round=0 vote="Vote{0:125B0E3C5512 91/00/1(Prevote) F671D562C7B9 {/89047FFC21D8.../}}" err=null
I[10-04|13:54:30.401] Added to prevote module=consensus vote="Vote{0:125B0E3C5512 91/00/1(Prevote) F671D562C7B9 {/89047FFC21D8.../}}" prevotes="VoteSet{H:91 R:0 T:1 +2/3:F671D562C7B9242900A286E1882EE64E5556FE9E:1:21B79872514F BA{1:X} map[]}"
I[10-04|13:54:30.401] enterPrecommit(91/0). Current: 91/0/RoundStepPrevote module=consensus
I[10-04|13:54:30.401] enterPrecommit: +2/3 prevoted proposal block. Locking module=consensus hash=F671D562C7B9242900A286E1882EE64E5556FE9E
I[10-04|13:54:30.402] Signed and pushed vote module=consensus height=91 round=0 vote="Vote{0:125B0E3C5512 91/00/2(Precommit) F671D562C7B9 {/80533478E41A.../}}" err=null
I[10-04|13:54:30.404] Added to precommit module=consensus vote="Vote{0:125B0E3C5512 91/00/2(Precommit) F671D562C7B9 {/80533478E41A.../}}" precommits="VoteSet{H:91 R:0 T:2 +2/3:F671D562C7B9242900A286E1882EE64E5556FE9E:1:21B79872514F BA{1:X} map[]}"
I[10-04|13:54:30.404] enterCommit(91/0). Current: 91/0/RoundStepPrecommit module=consensus
I[10-04|13:54:30.405] Finalizing commit of block with 0 txs module=consensus height=91 hash=F671D562C7B9242900A286E1882EE64E5556FE9E root=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
I[10-04|13:54:30.405] Block{
Header{
ChainID: test-chain-3MNw2N
Height: 91
Time: 2017-10-04 13:54:30.393 +0000 UTC
NumTxs: 0
LastBlockID: F15AB8BEF9A6AAB07E457A6E16BC410546AA4DC6:1:D505DA273544
LastCommit: 56FEF2EFDB8B37E9C6E6D635749DF3169D5F005D
Data:
Validators: CE25FBFF2E10C0D51AA1A07C064A96931BC8B297
App: E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
}#F671D562C7B9242900A286E1882EE64E5556FE9E
Data{
}#
Commit{
BlockID: F15AB8BEF9A6AAB07E457A6E16BC410546AA4DC6:1:D505DA273544
Precommits: Vote{0:125B0E3C5512 90/00/2(Precommit) F15AB8BEF9A6 {/FE98E2B956F0.../}}
}#56FEF2EFDB8B37E9C6E6D635749DF3169D5F005D
}#F671D562C7B9242900A286E1882EE64E5556FE9E module=consensus
I[10-04|13:54:30.408] Executed block module=state height=91 validTxs=0 invalidTxs=0
I[10-04|13:54:30.410] Committed state module=state height=91 txs=0 hash=E0FBAFBF6FCED8B9786DDFEB1A0D4FA2501BADAD
I[10-04|13:54:30.410] Recheck txs module=mempool numtxs=0 height=91
```
## List of modules
Here is the list of modules you may encounter in Tendermint's log and a
little overview what they do.
- `abci-client` As mentioned in [Application Development Guide](../app-dev/app-development.md), Tendermint acts as an ABCI
client with respect to the application and maintains 3 connections:
mempool, consensus and query. The code used by Tendermint Core can
be found [here](https://github.com/tendermint/tendermint/tree/master/abci/client).
- `blockchain` Provides storage, pool (a group of peers), and reactor
for both storing and exchanging blocks between peers.
- `consensus` The heart of Tendermint core, which is the
implementation of the consensus algorithm. Includes two
"submodules": `wal` (write-ahead logging) for ensuring data
integrity and `replay` to replay blocks and messages on recovery
from a crash.
- `events` Simple event notification system. The list of events can be
found
[here](https://github.com/tendermint/tendermint/blob/master/types/events.go).
You can subscribe to them by calling `subscribe` RPC method. Refer
to [RPC docs](./rpc.md) for additional information.
- `mempool` Mempool module handles all incoming transactions, whenever
they are coming from peers or the application.
- `p2p` Provides an abstraction around peer-to-peer communication. For
more details, please check out the
[README](https://github.com/tendermint/tendermint/blob/master/p2p/README.md).
- `rpc` [Tendermint's RPC](./rpc.md).
- `rpc-server` RPC server. For implementation details, please read the
[doc.go](https://github.com/tendermint/tendermint/blob/master/rpc/jsonrpc/doc.go).
- `state` Represents the latest state and execution submodule, which
executes blocks against the application.
- `types` A collection of the publicly exposed types and methods to
work with them.
This file has moved to the [node section](../nodes/logging.md).

View File

@@ -40,7 +40,7 @@ Default logging level (`log-level = "main:info,state:info,statesync:info,*:error
normal operation mode. Read [this
post](https://blog.cosmos.network/one-of-the-exciting-new-features-in-0-10-0-release-is-smart-log-level-flag-e2506b4ab756)
for details on how to configure `log-level` config variable. Some of the
modules can be found [here](./how-to-read-logs.md#list-of-modules). If
modules can be found [here](../nodes/logging#list-of-modules). If
you're trying to debug Tendermint or asked to provide logs with debug
logging level, you can do so by running Tendermint with
`--log-level="*:debug"`.
@@ -109,7 +109,7 @@ to achieve the same things.
## Debugging Tendermint
If you ever have to debug Tendermint, the first thing you should probably do is
check out the logs. See [How to read logs](./how-to-read-logs.md), where we
check out the logs. See [Logging](../nodes/logging.md), where we
explain what certain log statements mean.
If, after skimming through the logs, things are not clear still, the next thing
@@ -307,7 +307,6 @@ flush throttle timeout and increase other params.
```toml
[p2p]
send-rate=20000000 # 2MB/s
recv-rate=20000000 # 2MB/s
flush-throttle-timeout=10

2
go.mod
View File

@@ -27,7 +27,7 @@ require (
github.com/prometheus/client_golang v1.9.0
github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0
github.com/rs/cors v1.7.0
github.com/sasha-s/go-deadlock v0.2.0
github.com/sasha-s/go-deadlock v0.2.1-0.20190427202633-1595213edefa
github.com/snikch/goodman v0.0.0-20171125024755-10e37e294daa
github.com/spf13/cobra v1.1.1
github.com/spf13/viper v1.7.1

4
go.sum
View File

@@ -484,6 +484,8 @@ github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb
github.com/samuel/go-zookeeper v0.0.0-20190923202752-2cc03de413da/go.mod h1:gi+0XIa01GRL2eRQVjQkKGqKF3SF9vZR/HnPullcV2E=
github.com/sasha-s/go-deadlock v0.2.0 h1:lMqc+fUb7RrFS3gQLtoQsJ7/6TV/pAIFvBsqX73DK8Y=
github.com/sasha-s/go-deadlock v0.2.0/go.mod h1:StQn567HiB1fF2yJ44N9au7wOhrPS3iZqiDbRupzT10=
github.com/sasha-s/go-deadlock v0.2.1-0.20190427202633-1595213edefa h1:0U2s5loxrTy6/VgfVoLuVLFJcURKLH49ie0zSch7gh4=
github.com/sasha-s/go-deadlock v0.2.1-0.20190427202633-1595213edefa/go.mod h1:F73l+cr82YSh10GxyRI6qZiCgK64VaZjwesgfQ1/iLM=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
@@ -627,6 +629,7 @@ golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0 h1:RM4zey1++hCTbCVQfnWeKs9/IEsaBLA8vTkd0WVtmH4=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180719180050-a680a1efc54d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -756,6 +759,7 @@ golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtn
golang.org/x/tools v0.0.0-20200103221440-774c71fcf114/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a h1:CB3a9Nez8M13wwlr/E2YtwoU+qYHKfC+JrDa45RXXoQ=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=

View File

@@ -8,7 +8,7 @@ import (
"strings"
)
// The main purpose of HexBytes is to enable HEX-encoding for json/encoding.
// HexBytes enables HEX-encoding for json/encoding.
type HexBytes []byte
var (
@@ -58,7 +58,7 @@ func (bz *HexBytes) UnmarshalJSON(data []byte) error {
return nil
}
// Allow it to fulfill various interfaces in light-client, etc...
// Bytes fulfils various interfaces in light-client, etc...
func (bz HexBytes) Bytes() []byte {
return bz
}
@@ -67,6 +67,9 @@ func (bz HexBytes) String() string {
return strings.ToUpper(hex.EncodeToString(bz))
}
// Format writes either address of 0th element in a slice in base 16 notation,
// with leading 0x (%p), or casts HexBytes to bytes and writes as hexadecimal
// string to s.
func (bz HexBytes) Format(s fmt.State, verb rune) {
switch verb {
case 'p':

View File

@@ -3,6 +3,7 @@ package bytes
import (
"encoding/json"
"fmt"
"strconv"
"testing"
"github.com/stretchr/testify/assert"
@@ -24,7 +25,6 @@ func TestMarshal(t *testing.T) {
// Test that the hex encoding works.
func TestJSONMarshal(t *testing.T) {
type TestStruct struct {
B1 []byte
B2 HexBytes
@@ -64,3 +64,10 @@ func TestJSONMarshal(t *testing.T) {
})
}
}
func TestHexBytes_String(t *testing.T) {
hs := HexBytes([]byte("test me"))
if _, err := strconv.ParseInt(hs.String(), 16, 64); err != nil {
t.Fatal(err)
}
}

View File

@@ -21,7 +21,7 @@ type tmLogger struct {
// Interface assertions
var _ Logger = (*tmLogger)(nil)
// NewTMTermLogger returns a logger that encodes msg and keyvals to the Writer
// NewTMLogger returns a logger that encodes msg and keyvals to the Writer
// using go-kit's log as an underlying logger and our custom formatter. Note
// that underlying logger could be swapped with something else.
func NewTMLogger(w io.Writer) Logger {
@@ -52,6 +52,7 @@ func NewTMLoggerWithColorFn(w io.Writer, colorFn func(keyvals ...interface{}) te
// Info logs a message at level Info.
func (l *tmLogger) Info(msg string, keyvals ...interface{}) {
lWithLevel := kitlevel.Info(l.srcLogger)
if err := kitlog.With(lWithLevel, msgKey, msg).Log(keyvals...); err != nil {
errLogger := kitlevel.Error(l.srcLogger)
kitlog.With(errLogger, msgKey, msg).Log("err", err) //nolint:errcheck // no need to check error again
@@ -61,6 +62,7 @@ func (l *tmLogger) Info(msg string, keyvals ...interface{}) {
// Debug logs a message at level Debug.
func (l *tmLogger) Debug(msg string, keyvals ...interface{}) {
lWithLevel := kitlevel.Debug(l.srcLogger)
if err := kitlog.With(lWithLevel, msgKey, msg).Log(keyvals...); err != nil {
errLogger := kitlevel.Error(l.srcLogger)
kitlog.With(errLogger, msgKey, msg).Log("err", err) //nolint:errcheck // no need to check error again
@@ -70,6 +72,7 @@ func (l *tmLogger) Debug(msg string, keyvals ...interface{}) {
// Error logs a message at level Error.
func (l *tmLogger) Error(msg string, keyvals ...interface{}) {
lWithLevel := kitlevel.Error(l.srcLogger)
lWithMsg := kitlog.With(lWithLevel, msgKey, msg)
if err := lWithMsg.Log(keyvals...); err != nil {
lWithMsg.Log("err", err) //nolint:errcheck // no need to check error again

View File

@@ -20,6 +20,75 @@ func TestLoggerLogsItsErrors(t *testing.T) {
}
}
func TestInfo(t *testing.T) {
var bufInfo bytes.Buffer
l := log.NewTMLogger(&bufInfo)
l.Info("Client initialized with old header (trusted is more recent)",
"old", 42,
"trustedHeight", "forty two",
"trustedHash", []byte("test me"))
msg := strings.TrimSpace(bufInfo.String())
// Remove the timestamp information to allow
// us to test against the expected message.
receivedmsg := strings.Split(msg, "] ")[1]
const expectedmsg = `Client initialized with old header
(trusted is more recent) old=42 trustedHeight="forty two"
trustedHash=74657374206D65`
if strings.EqualFold(receivedmsg, expectedmsg) {
t.Fatalf("received %s, expected %s", receivedmsg, expectedmsg)
}
}
func TestDebug(t *testing.T) {
var bufDebug bytes.Buffer
ld := log.NewTMLogger(&bufDebug)
ld.Debug("Client initialized with old header (trusted is more recent)",
"old", 42,
"trustedHeight", "forty two",
"trustedHash", []byte("test me"))
msg := strings.TrimSpace(bufDebug.String())
// Remove the timestamp information to allow
// us to test against the expected message.
receivedmsg := strings.Split(msg, "] ")[1]
const expectedmsg = `Client initialized with old header
(trusted is more recent) old=42 trustedHeight="forty two"
trustedHash=74657374206D65`
if strings.EqualFold(receivedmsg, expectedmsg) {
t.Fatalf("received %s, expected %s", receivedmsg, expectedmsg)
}
}
func TestError(t *testing.T) {
var bufErr bytes.Buffer
le := log.NewTMLogger(&bufErr)
le.Error("Client initialized with old header (trusted is more recent)",
"old", 42,
"trustedHeight", "forty two",
"trustedHash", []byte("test me"))
msg := strings.TrimSpace(bufErr.String())
// Remove the timestamp information to allow
// us to test against the expected message.
receivedmsg := strings.Split(msg, "] ")[1]
const expectedmsg = `Client initialized with old header
(trusted is more recent) old=42 trustedHeight="forty two"
trustedHash=74657374206D65`
if strings.EqualFold(receivedmsg, expectedmsg) {
t.Fatalf("received %s, expected %s", receivedmsg, expectedmsg)
}
}
func BenchmarkTMLoggerSimple(b *testing.B) {
benchmarkRunner(b, log.NewTMLogger(ioutil.Discard), baseInfoMessage)
}

View File

@@ -2,8 +2,10 @@ package log
import (
"bytes"
"encoding/hex"
"fmt"
"io"
"strings"
"sync"
"time"
@@ -80,6 +82,11 @@ func (l tmfmtLogger) Log(keyvals ...interface{}) error {
excludeIndexes = append(excludeIndexes, i)
module = keyvals[i+1].(string)
}
// Print []byte as a hexadecimal string (uppercased)
if b, ok := keyvals[i+1].([]byte); ok {
keyvals[i+1] = strings.ToUpper(hex.EncodeToString(b))
}
}
// Form a custom Tendermint line

View File

@@ -53,6 +53,12 @@ func TestTMFmtLogger(t *testing.T) {
t.Fatal(err)
}
assert.Regexp(t, regexp.MustCompile(`N\[.+\] unknown \s+module=wire\s+\n$`), buf.String())
buf.Reset()
if err := logger.Log("hash", []byte("test me")); err != nil {
t.Fatal(err)
}
assert.Regexp(t, regexp.MustCompile(`N\[.+\] unknown \s+ hash=74657374206D65\n$`), buf.String())
}
func BenchmarkTMFmtLoggerSimple(b *testing.B) {

View File

@@ -286,7 +286,7 @@ func (c *Client) checkTrustedHeaderUsingOptions(ctx context.Context, options Tru
c.logger.Info("Client initialized with old header (trusted is more recent)",
"old", options.Height,
"trustedHeight", c.latestTrustedBlock.Height,
"trustedHash", hash2str(c.latestTrustedBlock.Hash()))
"trustedHash", c.latestTrustedBlock.Hash())
action := fmt.Sprintf(
"Rollback to %d (%X)? Note this will remove newer light blocks up to %d (%X)",
@@ -310,7 +310,7 @@ func (c *Client) checkTrustedHeaderUsingOptions(ctx context.Context, options Tru
if !bytes.Equal(primaryHash, c.latestTrustedBlock.Hash()) {
c.logger.Info("Prev. trusted header's hash (h1) doesn't match hash from primary provider (h2)",
"h1", hash2str(c.latestTrustedBlock.Hash()), "h2", hash2str(primaryHash))
"h1", c.latestTrustedBlock.Hash(), "h2", primaryHash)
action := fmt.Sprintf(
"Prev. trusted header's hash %X doesn't match hash %X from primary provider. Remove all the stored light blocks?",
@@ -425,7 +425,7 @@ func (c *Client) Update(ctx context.Context, now time.Time) (*types.LightBlock,
if err != nil {
return nil, err
}
c.logger.Info("Advanced to new state", "height", latestBlock.Height, "hash", hash2str(latestBlock.Hash()))
c.logger.Info("Advanced to new state", "height", latestBlock.Height, "hash", latestBlock.Hash())
return latestBlock, nil
}
@@ -450,7 +450,7 @@ func (c *Client) VerifyLightBlockAtHeight(ctx context.Context, height int64, now
// Check if the light block is already verified.
h, err := c.TrustedLightBlock(height)
if err == nil {
c.logger.Info("Header has already been verified", "height", height, "hash", hash2str(h.Hash()))
c.logger.Info("Header has already been verified", "height", height, "hash", h.Hash())
// Return already trusted light block
return h, nil
}
@@ -508,7 +508,7 @@ func (c *Client) VerifyHeader(ctx context.Context, newHeader *types.Header, now
return fmt.Errorf("existing trusted header %X does not match newHeader %X", l.Hash(), newHeader.Hash())
}
c.logger.Info("Header has already been verified",
"height", newHeader.Height, "hash", hash2str(newHeader.Hash()))
"height", newHeader.Height, "hash", newHeader.Hash())
return nil
}
@@ -526,7 +526,7 @@ func (c *Client) VerifyHeader(ctx context.Context, newHeader *types.Header, now
}
func (c *Client) verifyLightBlock(ctx context.Context, newLightBlock *types.LightBlock, now time.Time) error {
c.logger.Info("VerifyHeader", "height", newLightBlock.Height, "hash", hash2str(newLightBlock.Hash()))
c.logger.Info("VerifyHeader", "height", newLightBlock.Height, "hash", newLightBlock.Hash())
var (
verifyFunc func(ctx context.Context, trusted *types.LightBlock, new *types.LightBlock, now time.Time) error
@@ -607,9 +607,9 @@ func (c *Client) verifySequential(
// 2) Verify them
c.logger.Debug("Verify adjacent newLightBlock against verifiedBlock",
"trustedHeight", verifiedBlock.Height,
"trustedHash", hash2str(verifiedBlock.Hash()),
"trustedHash", verifiedBlock.Hash(),
"newHeight", interimBlock.Height,
"newHash", hash2str(interimBlock.Hash()))
"newHash", interimBlock.Hash())
err = VerifyAdjacent(verifiedBlock.SignedHeader, interimBlock.SignedHeader, interimBlock.ValidatorSet,
c.trustingPeriod, now, c.maxClockDrift)
@@ -698,9 +698,9 @@ func (c *Client) verifySkipping(
for {
c.logger.Debug("Verify non-adjacent newHeader against verifiedBlock",
"trustedHeight", verifiedBlock.Height,
"trustedHash", hash2str(verifiedBlock.Hash()),
"trustedHash", verifiedBlock.Hash(),
"newHeight", blockCache[depth].Height,
"newHash", hash2str(blockCache[depth].Hash()))
"newHash", blockCache[depth].Hash())
err := Verify(verifiedBlock.SignedHeader, verifiedBlock.ValidatorSet, blockCache[depth].SignedHeader,
blockCache[depth].ValidatorSet, c.trustingPeriod, now, c.maxClockDrift, c.trustLevel)
@@ -920,9 +920,9 @@ func (c *Client) backwards(
interimHeader = interimBlock.Header
c.logger.Debug("Verify newHeader against verifiedHeader",
"trustedHeight", verifiedHeader.Height,
"trustedHash", hash2str(verifiedHeader.Hash()),
"trustedHash", verifiedHeader.Hash(),
"newHeight", interimHeader.Height,
"newHash", hash2str(interimHeader.Hash()))
"newHash", interimHeader.Hash())
if err := VerifyBackwards(interimHeader, verifiedHeader); err != nil {
c.logger.Error("primary sent invalid header -> replacing", "err", err, "primary", c.primary)
if replaceErr := c.replacePrimaryProvider(); replaceErr != nil {
@@ -1033,7 +1033,3 @@ and remove witness. Otherwise, use the different primary`, e.WitnessIndex), "wit
return nil
}
func hash2str(hash []byte) string {
return fmt.Sprintf("%X", hash)
}

View File

@@ -776,7 +776,7 @@ func TxKey(tx types.Tx) [TxKeySize]byte {
return sha256.Sum256(tx)
}
// txID is the hex encoded hash of the bytes as a types.Tx.
func txID(tx []byte) string {
return fmt.Sprintf("%X", types.Tx(tx).Hash())
// txID is a hash of the Tx.
func txID(tx []byte) []byte {
return types.Tx(tx).Hash()
}

View File

@@ -39,7 +39,7 @@ const (
// peer information. This should eventually be replaced with a message-oriented
// approach utilizing the p2p stack.
type PeerManager interface {
GetHeight(p2p.NodeID) (int64, error)
GetHeight(p2p.NodeID) int64
}
// Reactor implements a service that contains mempool of txs that are broadcasted
@@ -357,10 +357,8 @@ func (r *Reactor) broadcastTxRoutine(peerID p2p.NodeID, closer *tmsync.Closer) {
memTx := next.Value.(*mempoolTx)
if r.peerMgr != nil {
height, err := r.peerMgr.GetHeight(peerID)
if err != nil {
r.Logger.Error("failed to get peer height", "err", err)
} else if height > 0 && height < memTx.Height()-1 {
height := r.peerMgr.GetHeight(peerID)
if height > 0 && height < memTx.Height()-1 {
// allow for a lag of one block
time.Sleep(peerCatchupSleepIntervalMS * time.Millisecond)
continue

View File

@@ -7,7 +7,6 @@ import (
"testing"
"time"
"github.com/fortytw2/leaktest"
"github.com/stretchr/testify/require"
"github.com/tendermint/tendermint/abci/example/kvstore"
@@ -93,7 +92,13 @@ func setup(t *testing.T, cfg *cfg.MempoolConfig, logger log.Logger, chBuf uint)
return rts
}
func simulateRouter(wg *sync.WaitGroup, primary *reactorTestSuite, suites []*reactorTestSuite, numOut int) {
func simulateRouter(
wg *sync.WaitGroup,
primary *reactorTestSuite,
suites []*reactorTestSuite,
numOut int,
) {
wg.Add(1)
// create a mapping for efficient suite lookup by peer ID
@@ -160,6 +165,15 @@ func TestReactorBroadcastTxs(t *testing.T) {
testSuites[i] = setup(t, config.Mempool, logger, 0)
}
// ignore all peer errors
for _, suite := range testSuites {
go func(s *reactorTestSuite) {
// drop all errors on the mempool channel
for range s.mempoolPeerErrCh {
}
}(suite)
}
primary := testSuites[0]
secondaries := testSuites[1:]
@@ -267,6 +281,15 @@ func TestReactorNoBroadcastToSender(t *testing.T) {
primary := testSuites[0]
secondary := testSuites[1]
// ignore all peer errors
for _, suite := range testSuites {
go func(s *reactorTestSuite) {
// drop all errors on the mempool channel
for range s.mempoolPeerErrCh {
}
}(suite)
}
peerID := uint16(1)
_ = checkTxs(t, primary.reactor.mempool, numTxs, peerID)
@@ -312,6 +335,15 @@ func TestReactor_MaxTxBytes(t *testing.T) {
testSuites[i] = setup(t, config.Mempool, logger, 0)
}
// ignore all peer errors
for _, suite := range testSuites {
go func(s *reactorTestSuite) {
// drop all errors on the mempool channel
for range s.mempoolPeerErrCh {
}
}(suite)
}
primary := testSuites[0]
secondary := testSuites[1]
@@ -356,10 +388,17 @@ func TestDontExhaustMaxActiveIDs(t *testing.T) {
reactor := setup(t, config.Mempool, log.TestingLogger().With("node", 0), 0)
go func() {
// drop all messages on the mempool channel
for range reactor.mempoolOutCh {
}
}()
go func() {
// drop all errors on the mempool channel
for range reactor.mempoolPeerErrCh {
}
}()
peerID, err := p2p.NewNodeID("00ffaa")
require.NoError(t, err)
@@ -407,8 +446,22 @@ func TestBroadcastTxForPeerStopsWhenPeerStops(t *testing.T) {
config := cfg.TestConfig()
primary := setup(t, config.Mempool, log.TestingLogger().With("node", 0), 0)
secondary := setup(t, config.Mempool, log.TestingLogger().With("node", 1), 0)
testSuites := []*reactorTestSuite{
setup(t, config.Mempool, log.TestingLogger().With("node", 0), 0),
setup(t, config.Mempool, log.TestingLogger().With("node", 1), 0),
}
primary := testSuites[0]
secondary := testSuites[1]
// ignore all peer errors
for _, suite := range testSuites {
go func(s *reactorTestSuite) {
// drop all errors on the mempool channel
for range s.mempoolPeerErrCh {
}
}(suite)
}
// connect peer
primary.peerUpdatesCh <- p2p.PeerUpdate{
@@ -421,8 +474,4 @@ func TestBroadcastTxForPeerStopsWhenPeerStops(t *testing.T) {
Status: p2p.PeerStatusDown,
PeerID: secondary.peerID,
}
// check that we are not leaking any go-routines
// i.e. broadcastTxRoutine finishes when peer is stopped
leaktest.CheckTimeout(t, 10*time.Second)()
}

View File

@@ -762,7 +762,11 @@ func NewNode(config *cfg.Config,
logNodeStartupInfo(state, pubKey, logger, consensusLogger)
// TODO: Fetch and provide real options and do proper p2p bootstrapping.
peerMgr := p2p.NewPeerManager(p2p.PeerManagerOptions{})
// TODO: Use a persistent peer database.
peerMgr, err := p2p.NewPeerManager(dbm.NewMemDB(), p2p.PeerManagerOptions{})
if err != nil {
return nil, err
}
csMetrics, p2pMetrics, memplMetrics, smMetrics := metricsProvider(genDoc.ChainID)
mpReactorShim, mpReactor, mempool := createMempoolReactor(config, proxyApp, state, memplMetrics, peerMgr, logger)

View File

@@ -139,8 +139,9 @@ func NewNetAddressIPPort(ip net.IP, port uint16) *NetAddress {
}
}
// NetAddressFromProto converts a Protobuf NetAddress into a native struct.
func NetAddressFromProto(pb tmp2p.NetAddress) (*NetAddress, error) {
// NetAddressFromProto converts a Protobuf PexAddress into a native struct.
// FIXME: Remove this when legacy PEX reactor is removed.
func NetAddressFromProto(pb tmp2p.PexAddress) (*NetAddress, error) {
ip := net.ParseIP(pb.IP)
if ip == nil {
return nil, fmt.Errorf("invalid IP address %v", pb.IP)
@@ -155,8 +156,9 @@ func NetAddressFromProto(pb tmp2p.NetAddress) (*NetAddress, error) {
}, nil
}
// NetAddressesFromProto converts a slice of Protobuf NetAddresses into a native slice.
func NetAddressesFromProto(pbs []tmp2p.NetAddress) ([]*NetAddress, error) {
// NetAddressesFromProto converts a slice of Protobuf PexAddresses into a native slice.
// FIXME: Remove this when legacy PEX reactor is removed.
func NetAddressesFromProto(pbs []tmp2p.PexAddress) ([]*NetAddress, error) {
nas := make([]*NetAddress, 0, len(pbs))
for _, pb := range pbs {
na, err := NetAddressFromProto(pb)
@@ -168,9 +170,10 @@ func NetAddressesFromProto(pbs []tmp2p.NetAddress) ([]*NetAddress, error) {
return nas, nil
}
// NetAddressesToProto converts a slice of NetAddresses into a Protobuf slice.
func NetAddressesToProto(nas []*NetAddress) []tmp2p.NetAddress {
pbs := make([]tmp2p.NetAddress, 0, len(nas))
// NetAddressesToProto converts a slice of NetAddresses into a Protobuf PexAddress slice.
// FIXME: Remove this when legacy PEX reactor is removed.
func NetAddressesToProto(nas []*NetAddress) []tmp2p.PexAddress {
pbs := make([]tmp2p.PexAddress, 0, len(nas))
for _, na := range nas {
if na != nil {
pbs = append(pbs, na.ToProto())
@@ -179,9 +182,10 @@ func NetAddressesToProto(nas []*NetAddress) []tmp2p.NetAddress {
return pbs
}
// ToProto converts a NetAddress to Protobuf.
func (na *NetAddress) ToProto() tmp2p.NetAddress {
return tmp2p.NetAddress{
// ToProto converts a NetAddress to a Protobuf PexAddress.
// FIXME: Remove this when legacy PEX reactor is removed.
func (na *NetAddress) ToProto() tmp2p.PexAddress {
return tmp2p.PexAddress{
ID: string(na.ID),
IP: na.IP.String(),
Port: uint32(na.Port),

File diff suppressed because it is too large.

View File

@@ -285,9 +285,9 @@ func (r *Reactor) Receive(chID byte, src Peer, msgBytes []byte) {
r.SendAddrs(src, r.book.GetSelection())
}
case *tmp2p.PexAddrs:
case *tmp2p.PexResponse:
// If we asked for addresses, add them to the book
addrs, err := p2p.NetAddressesFromProto(msg.Addrs)
addrs, err := p2p.NetAddressesFromProto(msg.Addresses)
if err != nil {
r.Switch.StopPeerForError(src, err)
r.book.MarkBad(src.SocketAddr(), defaultBanTime)
@@ -409,7 +409,7 @@ func (r *Reactor) ReceiveAddrs(addrs []*p2p.NetAddress, src Peer) error {
// SendAddrs sends addrs to the peer.
func (r *Reactor) SendAddrs(p Peer, netAddrs []*p2p.NetAddress) {
p.Send(PexChannel, mustEncode(&tmp2p.PexAddrs{Addrs: p2p.NetAddressesToProto(netAddrs)}))
p.Send(PexChannel, mustEncode(&tmp2p.PexResponse{Addresses: p2p.NetAddressesToProto(netAddrs)}))
}
// SetEnsurePeersPeriod sets period to ensure peers connected.
@@ -773,12 +773,12 @@ func markAddrInBookBasedOnErr(addr *p2p.NetAddress, book AddrBook, err error) {
// mustEncode proto encodes a tmp2p.Message
func mustEncode(pb proto.Message) []byte {
msg := tmp2p.Message{}
msg := tmp2p.PexMessage{}
switch pb := pb.(type) {
case *tmp2p.PexRequest:
msg.Sum = &tmp2p.Message_PexRequest{PexRequest: pb}
case *tmp2p.PexAddrs:
msg.Sum = &tmp2p.Message_PexAddrs{PexAddrs: pb}
msg.Sum = &tmp2p.PexMessage_PexRequest{PexRequest: pb}
case *tmp2p.PexResponse:
msg.Sum = &tmp2p.PexMessage_PexResponse{PexResponse: pb}
default:
panic(fmt.Sprintf("Unknown message type %T", pb))
}
@@ -791,7 +791,7 @@ func mustEncode(pb proto.Message) []byte {
}
func decodeMsg(bz []byte) (proto.Message, error) {
pb := &tmp2p.Message{}
pb := &tmp2p.PexMessage{}
err := pb.Unmarshal(bz)
if err != nil {
@@ -799,10 +799,10 @@ func decodeMsg(bz []byte) (proto.Message, error) {
}
switch msg := pb.Sum.(type) {
case *tmp2p.Message_PexRequest:
case *tmp2p.PexMessage_PexRequest:
return msg.PexRequest, nil
case *tmp2p.Message_PexAddrs:
return msg.PexAddrs, nil
case *tmp2p.PexMessage_PexResponse:
return msg.PexResponse, nil
default:
return nil, fmt.Errorf("unknown message: %T", msg)
}

View File

@@ -94,6 +94,11 @@ func TestPEXReactorRunning(t *testing.T) {
})
}
for _, sw := range switches {
err := sw.Start() // start switch and reactors
require.Nil(t, err)
}
addOtherNodeAddrToAddrBook := func(switchIndex, otherSwitchIndex int) {
addr := switches[otherSwitchIndex].NetAddress()
err := books[switchIndex].AddAddress(addr, addr)
@@ -104,11 +109,6 @@ func TestPEXReactorRunning(t *testing.T) {
addOtherNodeAddrToAddrBook(1, 0)
addOtherNodeAddrToAddrBook(2, 1)
for _, sw := range switches {
err := sw.Start() // start switch and reactors
require.Nil(t, err)
}
assertPeersWithTimeout(t, switches, 10*time.Millisecond, 10*time.Second, N-1)
// stop them
@@ -128,7 +128,7 @@ func TestPEXReactorReceive(t *testing.T) {
size := book.Size()
na, err := peer.NodeInfo().NetAddress()
require.NoError(t, err)
msg := mustEncode(&tmp2p.PexAddrs{Addrs: []tmp2p.NetAddress{na.ToProto()}})
msg := mustEncode(&tmp2p.PexResponse{Addresses: []tmp2p.PexAddress{na.ToProto()}})
r.Receive(PexChannel, peer, msg)
assert.Equal(t, size+1, book.Size())
@@ -185,7 +185,7 @@ func TestPEXReactorAddrsMessageAbuse(t *testing.T) {
assert.True(t, r.requestsSent.Has(id))
assert.True(t, sw.Peers().Has(peer.ID()))
msg := mustEncode(&tmp2p.PexAddrs{Addrs: []tmp2p.NetAddress{peer.SocketAddr().ToProto()}})
msg := mustEncode(&tmp2p.PexResponse{Addresses: []tmp2p.PexAddress{peer.SocketAddr().ToProto()}})
// receive some addrs. should clear the request
r.Receive(PexChannel, peer, msg)
@@ -456,7 +456,7 @@ func TestPEXReactorDoesNotAddPrivatePeersToAddrBook(t *testing.T) {
size := book.Size()
na, err := peer.NodeInfo().NetAddress()
require.NoError(t, err)
msg := mustEncode(&tmp2p.PexAddrs{Addrs: []tmp2p.NetAddress{na.ToProto()}})
msg := mustEncode(&tmp2p.PexResponse{Addresses: []tmp2p.PexAddress{na.ToProto()}})
pexR.Receive(PexChannel, peer, msg)
assert.Equal(t, size, book.Size())
@@ -634,7 +634,7 @@ func createSwitchAndAddReactors(reactors ...p2p.Reactor) *p2p.Switch {
}
func TestPexVectors(t *testing.T) {
addr := tmp2p.NetAddress{
addr := tmp2p.PexAddress{
ID: "1",
IP: "127.0.0.1",
Port: 9090,
@@ -646,7 +646,7 @@ func TestPexVectors(t *testing.T) {
expBytes string
}{
{"PexRequest", &tmp2p.PexRequest{}, "0a00"},
{"PexAddrs", &tmp2p.PexAddrs{Addrs: []tmp2p.NetAddress{addr}}, "12130a110a013112093132372e302e302e31188247"},
{"PexAddrs", &tmp2p.PexResponse{Addresses: []tmp2p.PexAddress{addr}}, "12130a110a013112093132372e302e302e31188247"},
}
for _, tc := range testCases {

p2p/pex/reactor.go (new file, 226 lines)
View File

@@ -0,0 +1,226 @@
package pex
import (
"context"
"fmt"
"time"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/p2p"
protop2p "github.com/tendermint/tendermint/proto/tendermint/p2p"
)
var (
_ service.Service = (*ReactorV2)(nil)
_ p2p.Wrapper = (*protop2p.PexMessage)(nil)
)
const (
maxAddresses uint16 = 100
resolveTimeout = 3 * time.Second
)
// ReactorV2 is a PEX reactor for the new P2P stack. The legacy reactor
// is Reactor.
//
// FIXME: Rename this when Reactor is removed, and consider moving to p2p/.
type ReactorV2 struct {
service.BaseService
peerManager *p2p.PeerManager
pexCh *p2p.Channel
peerUpdates *p2p.PeerUpdatesCh
closeCh chan struct{}
}
// NewReactorV2 returns a reference to a new reactor.
func NewReactorV2(
logger log.Logger,
peerManager *p2p.PeerManager,
pexCh *p2p.Channel,
peerUpdates *p2p.PeerUpdatesCh,
) *ReactorV2 {
r := &ReactorV2{
peerManager: peerManager,
pexCh: pexCh,
peerUpdates: peerUpdates,
closeCh: make(chan struct{}),
}
r.BaseService = *service.NewBaseService(logger, "PEX", r)
return r
}
// OnStart starts separate go routines for each p2p Channel and listens for
// envelopes on each. In addition, it also listens for peer updates and handles
// messages on that p2p channel accordingly. The caller must be sure to execute
// OnStop to ensure the outbound p2p Channels are closed.
func (r *ReactorV2) OnStart() error {
go r.processPexCh()
go r.processPeerUpdates()
return nil
}
// OnStop stops the reactor by signaling to all spawned goroutines to exit and
// blocking until they all exit.
func (r *ReactorV2) OnStop() {
// Close closeCh to signal to all spawned goroutines to gracefully exit. All
// p2p Channels should execute Close().
close(r.closeCh)
// Wait for all p2p Channels to be closed before returning. This ensures we
// can easily reason about synchronization of all p2p Channels and ensure no
// panics will occur.
<-r.pexCh.Done()
<-r.peerUpdates.Done()
}
// handlePexMessage handles envelopes sent from peers on the PexChannel.
func (r *ReactorV2) handlePexMessage(envelope p2p.Envelope) error {
logger := r.Logger.With("peer", envelope.From)
// FIXME: We may want to add DoS protection here, by rate limiting peers and
// only processing addresses we actually requested.
switch msg := envelope.Message.(type) {
case *protop2p.PexRequest:
pexAddresses := r.resolve(r.peerManager.Advertise(envelope.From, maxAddresses), maxAddresses)
r.pexCh.Out() <- p2p.Envelope{
To: envelope.From,
Message: &protop2p.PexResponse{Addresses: pexAddresses},
}
case *protop2p.PexResponse:
for _, pexAddress := range msg.Addresses {
peerAddress, err := p2p.ParsePeerAddress(
fmt.Sprintf("%s@%s:%d", pexAddress.ID, pexAddress.IP, pexAddress.Port))
if err != nil {
logger.Debug("invalid PEX address", "address", pexAddress, "err", err)
continue
}
if err = r.peerManager.Add(peerAddress); err != nil {
logger.Debug("failed to register PEX address", "address", peerAddress, "err", err)
}
}
default:
return fmt.Errorf("received unknown message: %T", msg)
}
return nil
}
// resolve resolves a set of peer addresses into PEX addresses.
//
// FIXME: This is necessary because the current PEX protocol only supports
// IP/port pairs, while the P2P stack uses PeerAddress URLs. The PEX protocol
// should really use URLs too, to exchange DNS names instead of IPs and allow
// different transport protocols (e.g. QUIC and MemoryTransport).
//
// FIXME: We may want to cache and parallelize this, but for now we'll just rely
// on the operating system to cache it for us.
func (r *ReactorV2) resolve(addresses []p2p.PeerAddress, limit uint16) []protop2p.PexAddress {
pexAddresses := make([]protop2p.PexAddress, 0, len(addresses))
for _, address := range addresses {
ctx, cancel := context.WithTimeout(context.Background(), resolveTimeout)
endpoints, err := address.Resolve(ctx)
cancel()
if err != nil {
r.Logger.Debug("failed to resolve address", "address", address, "err", err)
continue
}
for _, endpoint := range endpoints {
if len(pexAddresses) >= int(limit) {
return pexAddresses
} else if endpoint.IP != nil {
// PEX currently only supports IP-networked transports (as
// opposed to e.g. p2p.MemoryTransport).
pexAddresses = append(pexAddresses, protop2p.PexAddress{
ID: string(endpoint.PeerID),
IP: endpoint.IP.String(),
Port: uint32(endpoint.Port),
})
}
}
}
return pexAddresses
}
// handleMessage handles an Envelope sent from a peer on a specific p2p Channel.
// It will handle errors and any possible panics gracefully. A caller can handle
// any error returned by sending a PeerError on the respective channel.
func (r *ReactorV2) handleMessage(chID p2p.ChannelID, envelope p2p.Envelope) (err error) {
defer func() {
if e := recover(); e != nil {
err = fmt.Errorf("panic in processing message: %v", e)
}
}()
r.Logger.Debug("received message", "peer", envelope.From)
switch chID {
case p2p.ChannelID(PexChannel):
err = r.handlePexMessage(envelope)
default:
err = fmt.Errorf("unknown channel ID (%d) for envelope (%v)", chID, envelope)
}
return err
}
// processPexCh implements a blocking event loop where we listen for p2p
// Envelope messages from the pexCh.
func (r *ReactorV2) processPexCh() {
defer r.pexCh.Close()
for {
select {
case envelope := <-r.pexCh.In():
if err := r.handleMessage(r.pexCh.ID(), envelope); err != nil {
r.Logger.Error("failed to process message", "ch_id", r.pexCh.ID(), "envelope", envelope, "err", err)
r.pexCh.Error() <- p2p.PeerError{
PeerID: envelope.From,
Err: err,
Severity: p2p.PeerErrorSeverityLow,
}
}
case <-r.closeCh:
r.Logger.Debug("stopped listening on PEX channel; closing...")
return
}
}
}
// processPeerUpdate processes a PeerUpdate. For added peers, PeerStatusUp, we
// send a request for addresses.
func (r *ReactorV2) processPeerUpdate(peerUpdate p2p.PeerUpdate) {
r.Logger.Debug("received peer update", "peer", peerUpdate.PeerID, "status", peerUpdate.Status)
if peerUpdate.Status == p2p.PeerStatusUp {
r.pexCh.Out() <- p2p.Envelope{
To: peerUpdate.PeerID,
Message: &protop2p.PexRequest{},
}
}
}
// processPeerUpdates initiates a blocking process where we listen for and handle
// PeerUpdate messages. When the reactor is stopped, we will catch the signal and
// close the p2p PeerUpdatesCh gracefully.
func (r *ReactorV2) processPeerUpdates() {
defer r.peerUpdates.Close()
for {
select {
case peerUpdate := <-r.peerUpdates.Updates():
r.processPeerUpdate(peerUpdate)
case <-r.closeCh:
r.Logger.Debug("stopped listening on peer updates channel; closing...")
return
}
}
}

View File

@@ -2,6 +2,7 @@ package p2p
import (
"context"
"errors"
"fmt"
"io"
"sync"
@@ -230,21 +231,16 @@ func (r *Router) routeChannel(channel *Channel) {
// acceptPeers accepts inbound connections from peers on the given transport.
func (r *Router) acceptPeers(transport Transport) {
ctx := r.stopCtx()
for {
select {
case <-r.stopCh:
return
default:
}
// FIXME: We may need transports to enforce some sort of rate limiting
// here (e.g. by IP address), or alternatively have PeerManager.Accepted()
// do it for us.
conn, err := transport.Accept(context.Background())
conn, err := transport.Accept(ctx)
switch err {
case nil:
case ErrTransportClosed{}, io.EOF:
r.logger.Info("transport closed; stopping accept routine", "transport", transport)
case ErrTransportClosed{}, io.EOF, context.Canceled:
r.logger.Debug("stopping accept routine", "transport", transport)
return
default:
r.logger.Error("failed to accept connection", "transport", transport, "err", err)
@@ -285,31 +281,25 @@ func (r *Router) acceptPeers(transport Transport) {
// dialPeers maintains outbound connections to peers.
func (r *Router) dialPeers() {
ctx := r.stopCtx()
for {
select {
case <-r.stopCh:
peerID, address, err := r.peerManager.DialNext(ctx)
switch err {
case nil:
case context.Canceled:
r.logger.Debug("stopping dial routine")
return
default:
}
peerID, address, err := r.peerManager.DialNext()
if err != nil {
r.logger.Error("failed to find next peer to dial", "err", err)
return
} else if peerID == "" {
r.logger.Debug("no eligible peers, sleeping")
select {
case <-time.After(time.Second):
continue
case <-r.stopCh:
return
}
}
go func() {
conn, err := r.dialPeer(address)
if err != nil {
r.logger.Error("failed to dial peer, will retry", "peer", peerID)
conn, err := r.dialPeer(ctx, address)
if errors.Is(err, context.Canceled) {
return
} else if err != nil {
r.logger.Error("failed to dial peer", "peer", peerID)
if err = r.peerManager.DialFailed(peerID, address); err != nil {
r.logger.Error("failed to report dial failure", "peer", peerID, "err", err)
}
@@ -344,9 +334,7 @@ func (r *Router) dialPeers() {
}
// dialPeer attempts to connect to a peer.
func (r *Router) dialPeer(address PeerAddress) (Connection, error) {
ctx := context.Background()
func (r *Router) dialPeer(ctx context.Context, address PeerAddress) (Connection, error) {
resolveCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
@@ -367,11 +355,18 @@ func (r *Router) dialPeer(address PeerAddress) (Connection, error) {
dialCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
// FIXME: When we dial and handshake the peer, we should pass it
// appropriate address(es) it can use to dial us back. It can't use our
// remote endpoint, since TCP uses different port numbers for outbound
// connections than it does for inbound. Also, we may need to vary this
// by the peer's endpoint, since e.g. a peer on 192.168.0.0 can reach us
// on a private address on this endpoint, but a peer on the public
// Internet can't and needs a different public address.
conn, err := t.Dial(dialCtx, endpoint)
if err != nil {
r.logger.Error("failed to dial endpoint", "endpoint", endpoint)
r.logger.Error("failed to dial endpoint", "endpoint", endpoint, "err", err)
} else {
r.logger.Info("connected to peer", "peer", address.NodeID(), "endpoint", endpoint)
r.logger.Info("connected to peer", "peer", address.ID, "endpoint", endpoint)
return conn, nil
}
}
@@ -481,34 +476,25 @@ func (r *Router) sendPeer(peerID NodeID, conn Connection, queue queue) error {
// evictPeers evicts connected peers as requested by the peer manager.
func (r *Router) evictPeers() {
ctx := r.stopCtx()
for {
select {
case <-r.stopCh:
peerID, err := r.peerManager.EvictNext(ctx)
switch err {
case nil:
case context.Canceled:
r.logger.Debug("stopping evict routine")
return
default:
}
peerID, err := r.peerManager.EvictNext()
if err != nil {
r.logger.Error("failed to find next peer to evict", "err", err)
return
} else if peerID == "" {
r.logger.Debug("no evictable peers, sleeping")
select {
case <-time.After(time.Second):
continue
case <-r.stopCh:
return
}
}
r.logger.Info("evicting peer", "peer", peerID)
r.peerMtx.RLock()
queue, ok := r.peerQueues[peerID]
r.peerMtx.RUnlock()
if ok {
if queue, ok := r.peerQueues[peerID]; ok {
queue.close()
}
r.peerMtx.RUnlock()
}
}
@@ -544,3 +530,13 @@ func (r *Router) OnStop() {
<-q.closed()
}
}
// stopCtx returns a context that is cancelled when the router stops.
func (r *Router) stopCtx() context.Context {
ctx, cancel := context.WithCancel(context.Background())
go func() {
<-r.stopCh
cancel()
}()
return ctx
}

View File

@@ -4,9 +4,11 @@ import (
"errors"
"testing"
"github.com/fortytw2/leaktest"
gogotypes "github.com/gogo/protobuf/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
dbm "github.com/tendermint/tm-db"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/p2p"
@@ -29,19 +31,25 @@ func echoReactor(channel *p2p.Channel) {
}
func TestRouter(t *testing.T) {
defer leaktest.Check(t)()
logger := log.TestingLogger()
network := p2p.NewMemoryNetwork(logger)
transport := network.GenerateTransport()
defer transport.Close()
chID := p2p.ChannelID(1)
// Start some other in-memory network nodes to communicate with, running
// a simple echo reactor that returns received messages.
peers := []p2p.PeerAddress{}
for i := 0; i < 3; i++ {
peerManager, err := p2p.NewPeerManager(dbm.NewMemDB(), p2p.PeerManagerOptions{})
require.NoError(t, err)
peerTransport := network.GenerateTransport()
defer peerTransport.Close()
peerRouter := p2p.NewRouter(
logger.With("peerID", i),
p2p.NewPeerManager(p2p.PeerManagerOptions{}),
peerManager,
map[p2p.Protocol]p2p.Transport{
p2p.MemoryProtocol: peerTransport,
},
@@ -59,7 +67,9 @@ func TestRouter(t *testing.T) {
}
// Start the main router and connect it to the peers above.
peerManager := p2p.NewPeerManager(p2p.PeerManagerOptions{})
peerManager, err := p2p.NewPeerManager(dbm.NewMemDB(), p2p.PeerManagerOptions{})
require.NoError(t, err)
defer peerManager.Close()
for _, address := range peers {
err := peerManager.Add(address)
require.NoError(t, err)
@@ -70,11 +80,18 @@ func TestRouter(t *testing.T) {
router := p2p.NewRouter(logger, peerManager, map[p2p.Protocol]p2p.Transport{
p2p.MemoryProtocol: transport,
})
channel, err := router.OpenChannel(chID, &TestMessage{})
require.NoError(t, err)
defer channel.Close()
err = router.Start()
require.NoError(t, err)
defer func() {
// Since the earlier defers run after this one, and we have to make sure
// we close channels and subscriptions before the router, we explicitly
// close them here too.
peerUpdates.Close()
channel.Close()
require.NoError(t, router.Stop())
}()
@@ -97,13 +114,13 @@ func TestRouter(t *testing.T) {
// We then submit an error for a peer, and watch it get disconnected.
channel.Error() <- p2p.PeerError{
PeerID: peers[0].NodeID(),
PeerID: peers[0].ID,
Err: errors.New("test error"),
Severity: p2p.PeerErrorSeverityCritical,
}
peerUpdate := <-peerUpdates.Updates()
require.Equal(t, p2p.PeerUpdate{
PeerID: peers[0].NodeID(),
PeerID: peers[0].ID,
Status: p2p.PeerStatusDown,
}, peerUpdate)
@@ -114,7 +131,7 @@ func TestRouter(t *testing.T) {
}
for i := 0; i < len(peers)-1; i++ {
envelope := <-channel.In()
require.NotEqual(t, peers[0].NodeID(), envelope.From)
require.NotEqual(t, peers[0].ID, envelope.From)
require.Equal(t, &TestMessage{Value: "broadcast"}, envelope.Message)
}
select {

View File

@@ -5,7 +5,6 @@ import (
"errors"
"fmt"
"net"
"net/url"
"github.com/tendermint/tendermint/crypto"
"github.com/tendermint/tendermint/p2p/conn"
@@ -66,27 +65,23 @@ type Endpoint struct {
Port uint16
}
// PeerAddress converts the endpoint into a peer address URL.
// PeerAddress converts the endpoint into a peer address.
func (e Endpoint) PeerAddress() PeerAddress {
u := &url.URL{
Scheme: string(e.Protocol),
User: url.User(string(e.PeerID)),
address := PeerAddress{
ID: e.PeerID,
Protocol: e.Protocol,
Path: e.Path,
}
if e.IP != nil {
u.Host = e.IP.String()
if e.Port > 0 {
u.Host = net.JoinHostPort(u.Host, fmt.Sprintf("%v", e.Port))
}
u.Path = e.Path
} else {
u.Opaque = e.Path
address.Hostname = e.IP.String()
address.Port = e.Port
}
return PeerAddress{URL: u}
return address
}
// String formats an endpoint as a URL string.
func (e Endpoint) String() string {
return e.PeerAddress().URL.String()
return e.PeerAddress().String()
}
// Validate validates an endpoint.
@@ -96,8 +91,6 @@ func (e Endpoint) Validate() error {
return errors.New("endpoint has no peer ID")
case e.Protocol == "":
return errors.New("endpoint has no protocol")
case len(e.IP) == 0 && len(e.Path) == 0:
return errors.New("endpoint must have either IP or path")
case e.Port > 0 && len(e.IP) == 0:
return fmt.Errorf("endpoint has port %v but no IP", e.Port)
default:

View File

@@ -238,7 +238,7 @@ func (m *MConnTransport) Accept(ctx context.Context) (Connection, error) {
case <-m.chClose:
return nil, ErrTransportClosed{}
case <-ctx.Done():
return nil, nil
return nil, ctx.Err()
}
}

View File

@@ -153,17 +153,14 @@ func (t *MemoryTransport) Dial(ctx context.Context, endpoint Endpoint) (Connecti
if endpoint.Protocol != MemoryProtocol {
return nil, fmt.Errorf("invalid protocol %q", endpoint.Protocol)
}
if endpoint.Path == "" {
return nil, errors.New("no path")
}
if endpoint.PeerID == "" {
return nil, errors.New("no peer ID")
}
t.logger.Info("dialing peer", "remote", endpoint)
peerTransport := t.network.GetTransport(NodeID(endpoint.Path))
peerTransport := t.network.GetTransport(endpoint.PeerID)
if peerTransport == nil {
return nil, fmt.Errorf("unknown peer %q", endpoint.Path)
return nil, fmt.Errorf("unknown peer %q", endpoint.PeerID)
}
inCh := make(chan memoryMessage, 1)
outCh := make(chan memoryMessage, 1)
@@ -241,7 +238,6 @@ func (t *MemoryTransport) Endpoints() []Endpoint {
return []Endpoint{{
Protocol: MemoryProtocol,
PeerID: t.nodeInfo.NodeID,
Path: string(t.nodeInfo.NodeID),
}}
}
}
@@ -365,7 +361,6 @@ func (c *MemoryConnection) LocalEndpoint() Endpoint {
return Endpoint{
PeerID: c.local.nodeInfo.NodeID,
Protocol: MemoryProtocol,
Path: string(c.local.nodeInfo.NodeID),
}
}
@@ -374,7 +369,6 @@ func (c *MemoryConnection) RemoteEndpoint() Endpoint {
return Endpoint{
PeerID: c.remote.nodeInfo.NodeID,
Protocol: MemoryProtocol,
Path: string(c.remote.nodeInfo.NodeID),
}
}

View File

@@ -0,0 +1,33 @@
package p2p
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
)
// Wrap implements the p2p Wrapper interface and wraps a PEX message.
func (m *PexMessage) Wrap(pb proto.Message) error {
switch msg := pb.(type) {
case *PexRequest:
m.Sum = &PexMessage_PexRequest{PexRequest: msg}
case *PexResponse:
m.Sum = &PexMessage_PexResponse{PexResponse: msg}
default:
return fmt.Errorf("unknown message: %T", msg)
}
return nil
}
// Unwrap implements the p2p Wrapper interface and unwraps a wrapped PEX
// message.
func (m *PexMessage) Unwrap() (proto.Message, error) {
switch msg := m.Sum.(type) {
case *PexMessage_PexRequest:
return msg.PexRequest, nil
case *PexMessage_PexResponse:
return msg.PexResponse, nil
default:
return nil, fmt.Errorf("unknown message: %T", msg)
}
}
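For context, a rough sketch of how such a wrapper could be exercised in isolation. This is illustrative only: the import path follows the `go_package` option in `pex.proto`, and the round-trip shown is an assumption about how callers use `Wrap`/`Unwrap`, not code from this changeset.

```go
package main

import (
	"fmt"

	tmp2p "github.com/tendermint/tendermint/proto/tendermint/p2p"
)

func main() {
	// Wrap a concrete PEX message into the PexMessage oneof envelope.
	var msg tmp2p.PexMessage
	if err := msg.Wrap(&tmp2p.PexRequest{}); err != nil {
		panic(err)
	}

	// Unwrap recovers the concrete message from the envelope.
	inner, err := msg.Unwrap()
	if err != nil {
		panic(err)
	}
	fmt.Printf("unwrapped %T\n", inner) // *p2p.PexRequest
}
```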

View File

@@ -23,6 +23,66 @@ var _ = math.Inf
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
type PexAddress struct {
ID string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
IP string `protobuf:"bytes,2,opt,name=ip,proto3" json:"ip,omitempty"`
Port uint32 `protobuf:"varint,3,opt,name=port,proto3" json:"port,omitempty"`
}
func (m *PexAddress) Reset() { *m = PexAddress{} }
func (m *PexAddress) String() string { return proto.CompactTextString(m) }
func (*PexAddress) ProtoMessage() {}
func (*PexAddress) Descriptor() ([]byte, []int) {
return fileDescriptor_81c2f011fd13be57, []int{0}
}
func (m *PexAddress) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *PexAddress) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_PexAddress.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *PexAddress) XXX_Merge(src proto.Message) {
xxx_messageInfo_PexAddress.Merge(m, src)
}
func (m *PexAddress) XXX_Size() int {
return m.Size()
}
func (m *PexAddress) XXX_DiscardUnknown() {
xxx_messageInfo_PexAddress.DiscardUnknown(m)
}
var xxx_messageInfo_PexAddress proto.InternalMessageInfo
func (m *PexAddress) GetID() string {
if m != nil {
return m.ID
}
return ""
}
func (m *PexAddress) GetIP() string {
if m != nil {
return m.IP
}
return ""
}
func (m *PexAddress) GetPort() uint32 {
if m != nil {
return m.Port
}
return 0
}
type PexRequest struct {
}
@@ -30,7 +90,7 @@ func (m *PexRequest) Reset() { *m = PexRequest{} }
func (m *PexRequest) String() string { return proto.CompactTextString(m) }
func (*PexRequest) ProtoMessage() {}
func (*PexRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_81c2f011fd13be57, []int{0}
return fileDescriptor_81c2f011fd13be57, []int{1}
}
func (m *PexRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -59,22 +119,22 @@ func (m *PexRequest) XXX_DiscardUnknown() {
var xxx_messageInfo_PexRequest proto.InternalMessageInfo
type PexAddrs struct {
Addrs []NetAddress `protobuf:"bytes,1,rep,name=addrs,proto3" json:"addrs"`
type PexResponse struct {
Addresses []PexAddress `protobuf:"bytes,1,rep,name=addresses,proto3" json:"addresses"`
}
func (m *PexAddrs) Reset() { *m = PexAddrs{} }
func (m *PexAddrs) String() string { return proto.CompactTextString(m) }
func (*PexAddrs) ProtoMessage() {}
func (*PexAddrs) Descriptor() ([]byte, []int) {
return fileDescriptor_81c2f011fd13be57, []int{1}
func (m *PexResponse) Reset() { *m = PexResponse{} }
func (m *PexResponse) String() string { return proto.CompactTextString(m) }
func (*PexResponse) ProtoMessage() {}
func (*PexResponse) Descriptor() ([]byte, []int) {
return fileDescriptor_81c2f011fd13be57, []int{2}
}
func (m *PexAddrs) XXX_Unmarshal(b []byte) error {
func (m *PexResponse) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *PexAddrs) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
func (m *PexResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_PexAddrs.Marshal(b, m, deterministic)
return xxx_messageInfo_PexResponse.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -84,44 +144,44 @@ func (m *PexAddrs) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return b[:n], nil
}
}
func (m *PexAddrs) XXX_Merge(src proto.Message) {
xxx_messageInfo_PexAddrs.Merge(m, src)
func (m *PexResponse) XXX_Merge(src proto.Message) {
xxx_messageInfo_PexResponse.Merge(m, src)
}
func (m *PexAddrs) XXX_Size() int {
func (m *PexResponse) XXX_Size() int {
return m.Size()
}
func (m *PexAddrs) XXX_DiscardUnknown() {
xxx_messageInfo_PexAddrs.DiscardUnknown(m)
func (m *PexResponse) XXX_DiscardUnknown() {
xxx_messageInfo_PexResponse.DiscardUnknown(m)
}
var xxx_messageInfo_PexAddrs proto.InternalMessageInfo
var xxx_messageInfo_PexResponse proto.InternalMessageInfo
func (m *PexAddrs) GetAddrs() []NetAddress {
func (m *PexResponse) GetAddresses() []PexAddress {
if m != nil {
return m.Addrs
return m.Addresses
}
return nil
}
type Message struct {
type PexMessage struct {
// Types that are valid to be assigned to Sum:
// *Message_PexRequest
// *Message_PexAddrs
Sum isMessage_Sum `protobuf_oneof:"sum"`
// *PexMessage_PexRequest
// *PexMessage_PexResponse
Sum isPexMessage_Sum `protobuf_oneof:"sum"`
}
func (m *Message) Reset() { *m = Message{} }
func (m *Message) String() string { return proto.CompactTextString(m) }
func (*Message) ProtoMessage() {}
func (*Message) Descriptor() ([]byte, []int) {
return fileDescriptor_81c2f011fd13be57, []int{2}
func (m *PexMessage) Reset() { *m = PexMessage{} }
func (m *PexMessage) String() string { return proto.CompactTextString(m) }
func (*PexMessage) ProtoMessage() {}
func (*PexMessage) Descriptor() ([]byte, []int) {
return fileDescriptor_81c2f011fd13be57, []int{3}
}
func (m *Message) XXX_Unmarshal(b []byte) error {
func (m *PexMessage) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *Message) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
func (m *PexMessage) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_Message.Marshal(b, m, deterministic)
return xxx_messageInfo_PexMessage.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalToSizedBuffer(b)
@@ -131,90 +191,136 @@ func (m *Message) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return b[:n], nil
}
}
func (m *Message) XXX_Merge(src proto.Message) {
xxx_messageInfo_Message.Merge(m, src)
func (m *PexMessage) XXX_Merge(src proto.Message) {
xxx_messageInfo_PexMessage.Merge(m, src)
}
func (m *Message) XXX_Size() int {
func (m *PexMessage) XXX_Size() int {
return m.Size()
}
func (m *Message) XXX_DiscardUnknown() {
xxx_messageInfo_Message.DiscardUnknown(m)
func (m *PexMessage) XXX_DiscardUnknown() {
xxx_messageInfo_PexMessage.DiscardUnknown(m)
}
var xxx_messageInfo_Message proto.InternalMessageInfo
var xxx_messageInfo_PexMessage proto.InternalMessageInfo
type isMessage_Sum interface {
isMessage_Sum()
type isPexMessage_Sum interface {
isPexMessage_Sum()
MarshalTo([]byte) (int, error)
Size() int
}
type Message_PexRequest struct {
type PexMessage_PexRequest struct {
PexRequest *PexRequest `protobuf:"bytes,1,opt,name=pex_request,json=pexRequest,proto3,oneof" json:"pex_request,omitempty"`
}
type Message_PexAddrs struct {
PexAddrs *PexAddrs `protobuf:"bytes,2,opt,name=pex_addrs,json=pexAddrs,proto3,oneof" json:"pex_addrs,omitempty"`
type PexMessage_PexResponse struct {
PexResponse *PexResponse `protobuf:"bytes,2,opt,name=pex_response,json=pexResponse,proto3,oneof" json:"pex_response,omitempty"`
}
func (*Message_PexRequest) isMessage_Sum() {}
func (*Message_PexAddrs) isMessage_Sum() {}
func (*PexMessage_PexRequest) isPexMessage_Sum() {}
func (*PexMessage_PexResponse) isPexMessage_Sum() {}
func (m *Message) GetSum() isMessage_Sum {
func (m *PexMessage) GetSum() isPexMessage_Sum {
if m != nil {
return m.Sum
}
return nil
}
func (m *Message) GetPexRequest() *PexRequest {
if x, ok := m.GetSum().(*Message_PexRequest); ok {
func (m *PexMessage) GetPexRequest() *PexRequest {
if x, ok := m.GetSum().(*PexMessage_PexRequest); ok {
return x.PexRequest
}
return nil
}
func (m *Message) GetPexAddrs() *PexAddrs {
if x, ok := m.GetSum().(*Message_PexAddrs); ok {
return x.PexAddrs
func (m *PexMessage) GetPexResponse() *PexResponse {
if x, ok := m.GetSum().(*PexMessage_PexResponse); ok {
return x.PexResponse
}
return nil
}
// XXX_OneofWrappers is for the internal use of the proto package.
func (*Message) XXX_OneofWrappers() []interface{} {
func (*PexMessage) XXX_OneofWrappers() []interface{} {
return []interface{}{
(*Message_PexRequest)(nil),
(*Message_PexAddrs)(nil),
(*PexMessage_PexRequest)(nil),
(*PexMessage_PexResponse)(nil),
}
}
func init() {
proto.RegisterType((*PexAddress)(nil), "tendermint.p2p.PexAddress")
proto.RegisterType((*PexRequest)(nil), "tendermint.p2p.PexRequest")
proto.RegisterType((*PexAddrs)(nil), "tendermint.p2p.PexAddrs")
proto.RegisterType((*Message)(nil), "tendermint.p2p.Message")
proto.RegisterType((*PexResponse)(nil), "tendermint.p2p.PexResponse")
proto.RegisterType((*PexMessage)(nil), "tendermint.p2p.PexMessage")
}
func init() { proto.RegisterFile("tendermint/p2p/pex.proto", fileDescriptor_81c2f011fd13be57) }
var fileDescriptor_81c2f011fd13be57 = []byte{
// 268 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x28, 0x49, 0xcd, 0x4b,
0x49, 0x2d, 0xca, 0xcd, 0xcc, 0x2b, 0xd1, 0x2f, 0x30, 0x2a, 0xd0, 0x2f, 0x48, 0xad, 0xd0, 0x2b,
0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x43, 0xc8, 0xe8, 0x15, 0x18, 0x15, 0x48, 0x49, 0xa1, 0xa9,
0x2c, 0xa9, 0x2c, 0x48, 0x2d, 0x86, 0xa8, 0x95, 0x12, 0x49, 0xcf, 0x4f, 0xcf, 0x07, 0x33, 0xf5,
0x41, 0x2c, 0x88, 0xa8, 0x12, 0x0f, 0x17, 0x57, 0x40, 0x6a, 0x45, 0x50, 0x6a, 0x61, 0x69, 0x6a,
0x71, 0x89, 0x92, 0x13, 0x17, 0x47, 0x40, 0x6a, 0x85, 0x63, 0x4a, 0x4a, 0x51, 0xb1, 0x90, 0x19,
0x17, 0x6b, 0x22, 0x88, 0x21, 0xc1, 0xa8, 0xc0, 0xac, 0xc1, 0x6d, 0x24, 0xa5, 0x87, 0x6a, 0x97,
0x9e, 0x5f, 0x6a, 0x09, 0x48, 0x61, 0x6a, 0x71, 0xb1, 0x13, 0xcb, 0x89, 0x7b, 0xf2, 0x0c, 0x41,
0x10, 0xe5, 0x4a, 0x1d, 0x8c, 0x5c, 0xec, 0xbe, 0xa9, 0xc5, 0xc5, 0x89, 0xe9, 0xa9, 0x42, 0xb6,
0x5c, 0xdc, 0x05, 0xa9, 0x15, 0xf1, 0x45, 0x10, 0xe3, 0x25, 0x18, 0x15, 0x18, 0xb1, 0x99, 0x84,
0x70, 0x80, 0x07, 0x43, 0x10, 0x57, 0x01, 0x9c, 0x27, 0x64, 0xce, 0xc5, 0x09, 0xd2, 0x0e, 0x71,
0x06, 0x13, 0x58, 0xb3, 0x04, 0x16, 0xcd, 0x60, 0xf7, 0x7a, 0x30, 0x04, 0x71, 0x14, 0x40, 0xd9,
0x4e, 0xac, 0x5c, 0xcc, 0xc5, 0xa5, 0xb9, 0x4e, 0xfe, 0x27, 0x1e, 0xc9, 0x31, 0x5e, 0x78, 0x24,
0xc7, 0xf8, 0xe0, 0x91, 0x1c, 0xe3, 0x84, 0xc7, 0x72, 0x0c, 0x17, 0x1e, 0xcb, 0x31, 0xdc, 0x78,
0x2c, 0xc7, 0x10, 0x65, 0x9a, 0x9e, 0x59, 0x92, 0x51, 0x9a, 0xa4, 0x97, 0x9c, 0x9f, 0xab, 0x8f,
0x14, 0x66, 0xc8, 0xc1, 0x07, 0x0e, 0x29, 0xd4, 0xf0, 0x4c, 0x62, 0x03, 0x8b, 0x1a, 0x03, 0x02,
0x00, 0x00, 0xff, 0xff, 0x3c, 0x0b, 0xcb, 0x40, 0x92, 0x01, 0x00, 0x00,
// 310 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x6c, 0x51, 0x31, 0x4b, 0xc3, 0x40,
0x18, 0xbd, 0x4b, 0x6b, 0xa1, 0x97, 0xea, 0x70, 0x88, 0x84, 0x0a, 0xd7, 0x92, 0xa9, 0x53, 0x02,
0x11, 0x47, 0x45, 0x83, 0x43, 0x1d, 0x8a, 0xe5, 0x46, 0x17, 0x69, 0xcd, 0x47, 0xcc, 0xd0, 0xde,
0x67, 0xee, 0x0a, 0xfd, 0x19, 0x0e, 0xfe, 0xa8, 0x8e, 0x1d, 0x9d, 0x8a, 0xa4, 0x7f, 0x44, 0xbc,
0x13, 0x93, 0x42, 0xb7, 0x7b, 0xef, 0xfb, 0xde, 0xfb, 0xde, 0xf1, 0x58, 0x60, 0x60, 0x99, 0x41,
0xb9, 0x28, 0x96, 0x26, 0xc6, 0x04, 0x63, 0x84, 0x75, 0x84, 0xa5, 0x32, 0x8a, 0x9f, 0xd5, 0x93,
0x08, 0x13, 0xec, 0x9f, 0xe7, 0x2a, 0x57, 0x76, 0x14, 0xff, 0xbe, 0xdc, 0x56, 0x38, 0x65, 0x6c,
0x0a, 0xeb, 0xfb, 0x2c, 0x2b, 0x41, 0x6b, 0x7e, 0xc1, 0xbc, 0x22, 0x0b, 0xe8, 0x90, 0x8e, 0xba,
0x69, 0xa7, 0xda, 0x0d, 0xbc, 0xc7, 0x07, 0xe9, 0x15, 0x99, 0xe5, 0x31, 0xf0, 0x1a, 0xfc, 0x54,
0x7a, 0x05, 0x72, 0xce, 0xda, 0xa8, 0x4a, 0x13, 0xb4, 0x86, 0x74, 0x74, 0x2a, 0xed, 0x3b, 0xec,
0x59, 0x47, 0x09, 0xef, 0x2b, 0xd0, 0x26, 0x9c, 0x30, 0xdf, 0x22, 0x8d, 0x6a, 0xa9, 0x81, 0xdf,
0xb2, 0xee, 0xcc, 0xdd, 0x02, 0x1d, 0xd0, 0x61, 0x6b, 0xe4, 0x27, 0xfd, 0xe8, 0x30, 0x68, 0x54,
0xe7, 0x49, 0xdb, 0x9b, 0xdd, 0x80, 0xc8, 0x5a, 0x12, 0x7e, 0x52, 0xeb, 0x3e, 0x01, 0xad, 0x67,
0x39, 0xf0, 0x1b, 0xe6, 0x23, 0xac, 0x5f, 0x4a, 0x77, 0xcc, 0x06, 0x3f, 0x6e, 0xf8, 0x17, 0x67,
0x4c, 0x24, 0xc3, 0x7f, 0xc4, 0xef, 0x58, 0xcf, 0xc9, 0x5d, 0x3a, 0xfb, 0x41, 0x3f, 0xb9, 0x3c,
0xaa, 0x77, 0x2b, 0x63, 0x22, 0x7d, 0xac, 0x61, 0x7a, 0xc2, 0x5a, 0x7a, 0xb5, 0x48, 0x9f, 0x36,
0x95, 0xa0, 0xdb, 0x4a, 0xd0, 0xef, 0x4a, 0xd0, 0x8f, 0xbd, 0x20, 0xdb, 0xbd, 0x20, 0x5f, 0x7b,
0x41, 0x9e, 0xaf, 0xf3, 0xc2, 0xbc, 0xad, 0xe6, 0xd1, 0xab, 0x5a, 0xc4, 0x8d, 0xaa, 0x9a, 0xad,
0xd9, 0x4a, 0x0e, 0x6b, 0x9c, 0x77, 0x2c, 0x7b, 0xf5, 0x13, 0x00, 0x00, 0xff, 0xff, 0xa6, 0xa1,
0x59, 0x3c, 0xdf, 0x01, 0x00, 0x00,
}
func (m *PexAddress) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *PexAddress) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *PexAddress) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if m.Port != 0 {
i = encodeVarintPex(dAtA, i, uint64(m.Port))
i--
dAtA[i] = 0x18
}
if len(m.IP) > 0 {
i -= len(m.IP)
copy(dAtA[i:], m.IP)
i = encodeVarintPex(dAtA, i, uint64(len(m.IP)))
i--
dAtA[i] = 0x12
}
if len(m.ID) > 0 {
i -= len(m.ID)
copy(dAtA[i:], m.ID)
i = encodeVarintPex(dAtA, i, uint64(len(m.ID)))
i--
dAtA[i] = 0xa
}
return len(dAtA) - i, nil
}
func (m *PexRequest) Marshal() (dAtA []byte, err error) {
@@ -240,7 +346,7 @@ func (m *PexRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
func (m *PexAddrs) Marshal() (dAtA []byte, err error) {
func (m *PexResponse) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
@@ -250,20 +356,20 @@ func (m *PexAddrs) Marshal() (dAtA []byte, err error) {
return dAtA[:n], nil
}
func (m *PexAddrs) MarshalTo(dAtA []byte) (int, error) {
func (m *PexResponse) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *PexAddrs) MarshalToSizedBuffer(dAtA []byte) (int, error) {
func (m *PexResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
_ = l
if len(m.Addrs) > 0 {
for iNdEx := len(m.Addrs) - 1; iNdEx >= 0; iNdEx-- {
if len(m.Addresses) > 0 {
for iNdEx := len(m.Addresses) - 1; iNdEx >= 0; iNdEx-- {
{
size, err := m.Addrs[iNdEx].MarshalToSizedBuffer(dAtA[:i])
size, err := m.Addresses[iNdEx].MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
@@ -277,7 +383,7 @@ func (m *PexAddrs) MarshalToSizedBuffer(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
func (m *Message) Marshal() (dAtA []byte, err error) {
func (m *PexMessage) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalToSizedBuffer(dAtA[:size])
@@ -287,12 +393,12 @@ func (m *Message) Marshal() (dAtA []byte, err error) {
return dAtA[:n], nil
}
func (m *Message) MarshalTo(dAtA []byte) (int, error) {
func (m *PexMessage) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Message) MarshalToSizedBuffer(dAtA []byte) (int, error) {
func (m *PexMessage) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
_ = i
var l int
@@ -309,12 +415,12 @@ func (m *Message) MarshalToSizedBuffer(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
func (m *Message_PexRequest) MarshalTo(dAtA []byte) (int, error) {
func (m *PexMessage_PexRequest) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Message_PexRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) {
func (m *PexMessage_PexRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
if m.PexRequest != nil {
{
@@ -330,16 +436,16 @@ func (m *Message_PexRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) {
}
return len(dAtA) - i, nil
}
func (m *Message_PexAddrs) MarshalTo(dAtA []byte) (int, error) {
func (m *PexMessage_PexResponse) MarshalTo(dAtA []byte) (int, error) {
size := m.Size()
return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Message_PexAddrs) MarshalToSizedBuffer(dAtA []byte) (int, error) {
func (m *PexMessage_PexResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i := len(dAtA)
if m.PexAddrs != nil {
if m.PexResponse != nil {
{
size, err := m.PexAddrs.MarshalToSizedBuffer(dAtA[:i])
size, err := m.PexResponse.MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
@@ -362,6 +468,26 @@ func encodeVarintPex(dAtA []byte, offset int, v uint64) int {
dAtA[offset] = uint8(v)
return base
}
func (m *PexAddress) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.ID)
if l > 0 {
n += 1 + l + sovPex(uint64(l))
}
l = len(m.IP)
if l > 0 {
n += 1 + l + sovPex(uint64(l))
}
if m.Port != 0 {
n += 1 + sovPex(uint64(m.Port))
}
return n
}
func (m *PexRequest) Size() (n int) {
if m == nil {
return 0
@@ -371,14 +497,14 @@ func (m *PexRequest) Size() (n int) {
return n
}
func (m *PexAddrs) Size() (n int) {
func (m *PexResponse) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.Addrs) > 0 {
for _, e := range m.Addrs {
if len(m.Addresses) > 0 {
for _, e := range m.Addresses {
l = e.Size()
n += 1 + l + sovPex(uint64(l))
}
@@ -386,7 +512,7 @@ func (m *PexAddrs) Size() (n int) {
return n
}
func (m *Message) Size() (n int) {
func (m *PexMessage) Size() (n int) {
if m == nil {
return 0
}
@@ -398,7 +524,7 @@ func (m *Message) Size() (n int) {
return n
}
func (m *Message_PexRequest) Size() (n int) {
func (m *PexMessage_PexRequest) Size() (n int) {
if m == nil {
return 0
}
@@ -410,14 +536,14 @@ func (m *Message_PexRequest) Size() (n int) {
}
return n
}
func (m *Message_PexAddrs) Size() (n int) {
func (m *PexMessage_PexResponse) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.PexAddrs != nil {
l = m.PexAddrs.Size()
if m.PexResponse != nil {
l = m.PexResponse.Size()
n += 1 + l + sovPex(uint64(l))
}
return n
@@ -429,6 +555,139 @@ func sovPex(x uint64) (n int) {
func sozPex(x uint64) (n int) {
return sovPex(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
func (m *PexAddress) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPex
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: PexAddress: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: PexAddress: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPex
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthPex
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthPex
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.ID = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field IP", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPex
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthPex
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthPex
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.IP = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 3:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Port", wireType)
}
m.Port = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPex
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Port |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipPex(dAtA[iNdEx:])
if err != nil {
return err
}
if (skippy < 0) || (iNdEx+skippy) < 0 {
return ErrInvalidLengthPex
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func (m *PexRequest) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
@@ -479,7 +738,7 @@ func (m *PexRequest) Unmarshal(dAtA []byte) error {
}
return nil
}
func (m *PexAddrs) Unmarshal(dAtA []byte) error {
func (m *PexResponse) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -502,15 +761,15 @@ func (m *PexAddrs) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: PexAddrs: wiretype end group for non-group")
return fmt.Errorf("proto: PexResponse: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: PexAddrs: illegal tag %d (wire type %d)", fieldNum, wire)
return fmt.Errorf("proto: PexResponse: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Addrs", wireType)
return fmt.Errorf("proto: wrong wireType = %d for field Addresses", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -537,8 +796,8 @@ func (m *PexAddrs) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Addrs = append(m.Addrs, NetAddress{})
if err := m.Addrs[len(m.Addrs)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
m.Addresses = append(m.Addresses, PexAddress{})
if err := m.Addresses[len(m.Addresses)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -563,7 +822,7 @@ func (m *PexAddrs) Unmarshal(dAtA []byte) error {
}
return nil
}
func (m *Message) Unmarshal(dAtA []byte) error {
func (m *PexMessage) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -586,10 +845,10 @@ func (m *Message) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: Message: wiretype end group for non-group")
return fmt.Errorf("proto: PexMessage: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: Message: illegal tag %d (wire type %d)", fieldNum, wire)
return fmt.Errorf("proto: PexMessage: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
@@ -625,11 +884,11 @@ func (m *Message) Unmarshal(dAtA []byte) error {
if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
m.Sum = &Message_PexRequest{v}
m.Sum = &PexMessage_PexRequest{v}
iNdEx = postIndex
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field PexAddrs", wireType)
return fmt.Errorf("proto: wrong wireType = %d for field PexResponse", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -656,11 +915,11 @@ func (m *Message) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
v := &PexAddrs{}
v := &PexResponse{}
if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
m.Sum = &Message_PexAddrs{v}
m.Sum = &PexMessage_PexResponse{v}
iNdEx = postIndex
default:
iNdEx = preIndex

View File

@@ -3,18 +3,23 @@ package tendermint.p2p;
option go_package = "github.com/tendermint/tendermint/proto/tendermint/p2p";
import "tendermint/p2p/types.proto";
import "gogoproto/gogo.proto";
message PexAddress {
string id = 1 [(gogoproto.customname) = "ID"];
string ip = 2 [(gogoproto.customname) = "IP"];
uint32 port = 3;
}
message PexRequest {}
message PexAddrs {
repeated NetAddress addrs = 1 [(gogoproto.nullable) = false];
message PexResponse {
repeated PexAddress addresses = 1 [(gogoproto.nullable) = false];
}
message Message {
message PexMessage {
oneof sum {
PexRequest pex_request = 1;
PexAddrs pex_addrs = 2;
PexRequest pex_request = 1;
PexResponse pex_response = 2;
}
}

File diff suppressed because it is too large.

View File

@@ -4,12 +4,7 @@ package tendermint.p2p;
option go_package = "github.com/tendermint/tendermint/proto/tendermint/p2p";
import "gogoproto/gogo.proto";
message NetAddress {
string id = 1 [(gogoproto.customname) = "ID"];
string ip = 2 [(gogoproto.customname) = "IP"];
uint32 port = 3;
}
import "google/protobuf/timestamp.proto";
message ProtocolVersion {
uint64 p2p = 1 [(gogoproto.customname) = "P2P"];
@@ -32,3 +27,16 @@ message NodeInfoOther {
string tx_index = 1;
string rpc_address = 2 [(gogoproto.customname) = "RPCAddress"];
}
message PeerInfo {
string id = 1 [(gogoproto.customname) = "ID"];
repeated PeerAddressInfo address_info = 2;
google.protobuf.Timestamp last_connected = 3 [(gogoproto.stdtime) = true];
}
message PeerAddressInfo {
string address = 1;
google.protobuf.Timestamp last_dial_success = 2 [(gogoproto.stdtime) = true];
google.protobuf.Timestamp last_dial_failure = 3 [(gogoproto.stdtime) = true];
uint32 dial_failures = 4;
}

View File

@@ -238,7 +238,7 @@ func (blockExec *BlockExecutor) Commit(
"Committed state",
"height", block.Height,
"txs", len(block.Txs),
"appHash", fmt.Sprintf("%X", res.Data),
"appHash", res.Data,
)
// Update mempool.

View File

@@ -114,7 +114,7 @@ func (s *syncer) AddSnapshot(peerID p2p.NodeID, snapshot *snapshot) (bool, error
}
if added {
s.logger.Info("Discovered new snapshot", "height", snapshot.Height, "format", snapshot.Format,
"hash", fmt.Sprintf("%X", snapshot.Hash))
"hash", snapshot.Hash)
}
return added, nil
}
@@ -184,18 +184,18 @@ func (s *syncer) SyncAny(discoveryTime time.Duration) (sm.State, *types.Commit,
case errors.Is(err, errRetrySnapshot):
chunks.RetryAll()
s.logger.Info("Retrying snapshot", "height", snapshot.Height, "format", snapshot.Format,
"hash", fmt.Sprintf("%X", snapshot.Hash))
"hash", snapshot.Hash)
continue
case errors.Is(err, errTimeout):
s.snapshots.Reject(snapshot)
s.logger.Error("Timed out waiting for snapshot chunks, rejected snapshot",
"height", snapshot.Height, "format", snapshot.Format, "hash", fmt.Sprintf("%X", snapshot.Hash))
"height", snapshot.Height, "format", snapshot.Format, "hash", snapshot.Hash)
case errors.Is(err, errRejectSnapshot):
s.snapshots.Reject(snapshot)
s.logger.Info("Snapshot rejected", "height", snapshot.Height, "format", snapshot.Format,
"hash", fmt.Sprintf("%X", snapshot.Hash))
"hash", snapshot.Hash)
case errors.Is(err, errRejectFormat):
s.snapshots.RejectFormat(snapshot.Format)
@@ -203,7 +203,7 @@ func (s *syncer) SyncAny(discoveryTime time.Duration) (sm.State, *types.Commit,
case errors.Is(err, errRejectSender):
s.logger.Info("Snapshot senders rejected", "height", snapshot.Height, "format", snapshot.Format,
"hash", fmt.Sprintf("%X", snapshot.Hash))
"hash", snapshot.Hash)
for _, peer := range s.snapshots.GetPeers(snapshot) {
s.snapshots.RejectPeer(peer)
s.logger.Info("Snapshot sender rejected", "peer", peer)
@@ -280,7 +280,7 @@ func (s *syncer) Sync(snapshot *snapshot, chunks *chunkQueue) (sm.State, *types.
// Done! 🎉
s.logger.Info("Snapshot restored", "height", snapshot.Height, "format", snapshot.Format,
"hash", fmt.Sprintf("%X", snapshot.Hash))
"hash", snapshot.Hash)
return state, commit, nil
}
@@ -289,7 +289,7 @@ func (s *syncer) Sync(snapshot *snapshot, chunks *chunkQueue) (sm.State, *types.
// response, or nil if the snapshot was accepted.
func (s *syncer) offerSnapshot(snapshot *snapshot) error {
s.logger.Info("Offering snapshot to ABCI app", "height", snapshot.Height,
"format", snapshot.Format, "hash", fmt.Sprintf("%X", snapshot.Hash))
"format", snapshot.Format, "hash", snapshot.Hash)
resp, err := s.conn.OfferSnapshotSync(context.Background(), abci.RequestOfferSnapshot{
Snapshot: &abci.Snapshot{
Height: snapshot.Height,
@@ -306,7 +306,7 @@ func (s *syncer) offerSnapshot(snapshot *snapshot) error {
switch resp.Result {
case abci.ResponseOfferSnapshot_ACCEPT:
s.logger.Info("Snapshot accepted, restoring", "height", snapshot.Height,
"format", snapshot.Format, "hash", fmt.Sprintf("%X", snapshot.Hash))
"format", snapshot.Format, "hash", snapshot.Hash)
return nil
case abci.ResponseOfferSnapshot_ABORT:
return errAbort
@@ -453,8 +453,8 @@ func (s *syncer) verifyApp(snapshot *snapshot) (uint64, error) {
if !bytes.Equal(snapshot.trustedAppHash, resp.LastBlockAppHash) {
s.logger.Error("appHash verification failed",
"expected", fmt.Sprintf("%X", snapshot.trustedAppHash),
"actual", fmt.Sprintf("%X", resp.LastBlockAppHash))
"expected", snapshot.trustedAppHash,
"actual", resp.LastBlockAppHash)
return 0, errVerifyFailed
}
@@ -467,6 +467,6 @@ func (s *syncer) verifyApp(snapshot *snapshot) (uint64, error) {
return 0, errVerifyFailed
}
s.logger.Info("Verified ABCI app", "height", snapshot.Height, "appHash", fmt.Sprintf("%X", snapshot.trustedAppHash))
s.logger.Info("Verified ABCI app", "height", snapshot.Height, "appHash", snapshot.trustedAppHash)
return resp.AppVersion, nil
}

View File

@@ -14,3 +14,9 @@ and run the following tests in docker containers:
- counter app over grpc
- persistence tests
- crash tendermint at each of many predefined points, restart, and ensure it syncs properly with the app
## Fuzzing
[Fuzzing](https://en.wikipedia.org/wiki/Fuzzing) of various system inputs.
See `./fuzz/README.md` for more details.

View File

@@ -50,6 +50,10 @@ type Manifest struct {
// KeyType sets the curve that will be used by validators.
// Options are ed25519 & secp256k1
KeyType string `toml:"key_type"`
// LogLevel sets the log level of the entire testnet. This can be overridden
// by individual nodes.
LogLevel string `toml:"log_level"`
}
// ManifestNode represents a node in a testnet manifest.
@@ -130,6 +134,11 @@ type ManifestNode struct {
// For more information, look at the readme in the maverick folder.
// A list of all behaviors can be found in ../maverick/consensus/behavior.go
Misbehaviors map[string]string `toml:"misbehaviors"`
// LogLevel sets the log level of this specific node, e.g. "consensus:info,*:error".
// This is helpful when debugging a specific problem. This overrides the network
// level.
LogLevel string `toml:"log_level"`
}
// Save saves the testnet manifest to a file.

View File

@@ -60,6 +60,7 @@ type Testnet struct {
ValidatorUpdates map[int64]map[*Node]int64
Nodes []*Node
KeyType string
LogLevel string
}
// Node represents a Tendermint node in a testnet.
@@ -84,6 +85,7 @@ type Node struct {
PersistentPeers []*Node
Perturbations []Perturbation
Misbehaviors map[int64]string
LogLevel string
}
// LoadTestnet loads a testnet from a manifest file, using the filename to
@@ -123,6 +125,7 @@ func LoadTestnet(file string) (*Testnet, error) {
ValidatorUpdates: map[int64]map[*Node]int64{},
Nodes: []*Node{},
KeyType: "ed25519",
LogLevel: manifest.LogLevel,
}
if len(manifest.KeyType) != 0 {
testnet.KeyType = manifest.KeyType
@@ -159,6 +162,7 @@ func LoadTestnet(file string) (*Testnet, error) {
RetainBlocks: nodeManifest.RetainBlocks,
Perturbations: []Perturbation{},
Misbehaviors: make(map[int64]string),
LogLevel: manifest.LogLevel,
}
if node.StartAt == testnet.InitialHeight {
node.StartAt = 0 // normalize to 0 for initial nodes, since code expects this
@@ -188,6 +192,9 @@ func LoadTestnet(file string) (*Testnet, error) {
}
node.Misbehaviors[height] = misbehavior
}
if nodeManifest.LogLevel != "" {
node.LogLevel = nodeManifest.LogLevel
}
testnet.Nodes = append(testnet.Nodes, node)
}

View File

@@ -116,6 +116,11 @@ func NewCLI() *CLI {
cli.root.Flags().BoolVarP(&cli.preserve, "preserve", "p", false,
"Preserves the running of the test net after tests are completed")
cli.root.SetHelpCommand(&cobra.Command{
Use: "no-help",
Hidden: true,
})
cli.root.AddCommand(&cobra.Command{
Use: "setup",
Short: "Generates the testnet directory and configuration",
@@ -189,17 +194,26 @@ func NewCLI() *CLI {
})
cli.root.AddCommand(&cobra.Command{
Use: "logs",
Short: "Shows the testnet logs",
Use: "logs [node]",
Short: "Shows the testnet or a specefic node's logs",
Example: "runner logs valiator03",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) == 1 {
return execComposeVerbose(cli.testnet.Dir, "logs", args[0])
}
return execComposeVerbose(cli.testnet.Dir, "logs")
},
})
cli.root.AddCommand(&cobra.Command{
Use: "tail",
Short: "Tails the testnet logs",
Use: "tail [node]",
Short: "Tails the testnet or a specific node's logs",
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) == 1 {
return execComposeVerbose(cli.testnet.Dir, "logs", "--follow", args[0])
}
return execComposeVerbose(cli.testnet.Dir, "logs", "--follow")
},
})

View File

@@ -122,17 +122,24 @@ func Setup(testnet *e2e.Testnet) error {
func MakeDockerCompose(testnet *e2e.Testnet) ([]byte, error) {
// Must use version 2 Docker Compose format, to support IPv6.
tmpl, err := template.New("docker-compose").Funcs(template.FuncMap{
"misbehaviorsToString": func(misbehaviors map[int64]string) string {
str := ""
"startCommands": func(misbehaviors map[int64]string, logLevel string) string {
command := "start"
misbehaviorString := ""
for height, misbehavior := range misbehaviors {
// after the first behavior set, a comma must be prepended
if str != "" {
str += ","
if misbehaviorString != "" {
misbehaviorString += ","
}
heightString := strconv.Itoa(int(height))
str += misbehavior + "," + heightString
misbehaviorString += misbehavior + "," + heightString
}
return str
if misbehaviorString != "" {
command += " --misbehaviors " + misbehaviorString
}
if logLevel != "" && logLevel != config.DefaultPackageLogLevels() {
command += " --log-level " + logLevel
}
return command
},
}).Parse(`version: '2.4'
@@ -160,7 +167,9 @@ services:
entrypoint: /usr/bin/entrypoint-builtin
{{- else if .Misbehaviors }}
entrypoint: /usr/bin/entrypoint-maverick
command: ["start", "--misbehaviors", "{{ misbehaviorsToString .Misbehaviors }}"]
{{- end }}
{{- if ne .ABCIProtocol "builtin"}}
command: {{ startCommands .Misbehaviors .LogLevel }}
{{- end }}
init: true
ports:
@@ -227,6 +236,9 @@ func MakeConfig(node *e2e.Node) (*config.Config, error) {
cfg := config.DefaultConfig()
cfg.Moniker = node.Name
cfg.ProxyApp = AppAddressTCP
if node.LogLevel != "" {
cfg.LogLevel = node.LogLevel
}
cfg.RPC.ListenAddress = "tcp://0.0.0.0:26657"
cfg.P2P.ExternalAddress = fmt.Sprintf("tcp://%v", node.AddressP2P(false))
cfg.P2P.AddrBookStrict = false

39
test/fuzz/Makefile Normal file
View File

@@ -0,0 +1,39 @@
#!/usr/bin/make -f
.PHONY: fuzz-mempool
fuzz-mempool:
cd mempool && \
rm -f *-fuzz.zip && \
go-fuzz-build && \
go-fuzz
.PHONY: fuzz-p2p-addrbook
fuzz-p2p-addrbook:
cd p2p/addrbook && \
rm -f *-fuzz.zip && \
go run ./init-corpus/main.go && \
go-fuzz-build && \
go-fuzz
.PHONY: fuzz-p2p-pex
fuzz-p2p-pex:
cd p2p/pex && \
rm -f *-fuzz.zip && \
go run ./init-corpus/main.go && \
go-fuzz-build && \
go-fuzz
.PHONY: fuzz-p2p-sc
fuzz-p2p-sc:
cd p2p/secret_connection && \
rm -f *-fuzz.zip && \
go run ./init-corpus/main.go && \
go-fuzz-build && \
go-fuzz
.PHONY: fuzz-rpc-server
fuzz-rpc-server:
cd rpc/jsonrpc/server && \
rm -f *-fuzz.zip && \
go-fuzz-build && \
go-fuzz

72
test/fuzz/README.md Normal file
View File

@@ -0,0 +1,72 @@
# fuzz
Fuzzing for various packages in Tendermint using the [go-fuzz](https://github.com/dvyukov/go-fuzz) library.
Inputs:
- mempool `CheckTx` (using kvstore in-process ABCI app)
- p2p `Addrbook#AddAddress`
- p2p `pex.Reactor#Receive`
- p2p `SecretConnection#Read` and `SecretConnection#Write`
- rpc jsonrpc server
## Directory structure
```
| test
| |- corpus/
| |- crashers/
| |- init-corpus/
| |- suppressions/
| |- testdata/
| |- <testname>.go
```
The `/corpus` directory contains corpus data. The idea is to help the fuzzer
understand which byte sequences are semantically valid (e.g. if we're testing a
PNG decoder, we would put black-and-white PNGs into the corpus directory; for the
blockchain reactor, we would put blockchain messages into the corpus).
`/init-corpus` (if present) contains a script for generating corpus data.
The `/testdata` directory may contain additional data (like `addrbook.json`).
Upon running the fuzzer, the `/crashers` and `/suppressions` dirs will be created,
along with a `<testname>.zip` archive. `/crashers` will contain any inputs that have
led to panics (plus a trace). `/suppressions` will contain any suppressed inputs.
## Running
```sh
make fuzz-mempool
make fuzz-p2p-addrbook
make fuzz-p2p-pex
make fuzz-p2p-sc
make fuzz-rpc-server
```
Each command creates corpus data (if needed), builds a fuzz archive, and
invokes the `go-fuzz` executable.
Watch the fuzzer output for announcements of new crashers; these are written to
the `crashers` directory.
For example, if we find
```sh
ls crashers/
61bde465f47c93254d64d643c3b2480e0a54666e
61bde465f47c93254d64d643c3b2480e0a54666e.output
61bde465f47c93254d64d643c3b2480e0a54666e.quoted
da39a3ee5e6b4b0d3255bfef95601890afd80709
da39a3ee5e6b4b0d3255bfef95601890afd80709.output
da39a3ee5e6b4b0d3255bfef95601890afd80709.quoted
```
then the crashing bytes generated by the fuzzer are in
`61bde465f47c93254d64d643c3b2480e0a54666e`, and the corresponding crash report is in
`61bde465f47c93254d64d643c3b2480e0a54666e.output`.
A bug report can be created by retrieving the bytes from
`61bde465f47c93254d64d643c3b2480e0a54666e` and feeding them back into the
`Fuzz` function.
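One way to replay a crasher is a throwaway `go test` file such as the sketch below (the test name and file location are hypothetical; it is shown here against the mempool `checktx` target as an example):
```go
package checktx

import (
	"io/ioutil"
	"testing"
)

// TestReplayCrasher feeds a saved crasher back into Fuzz so the panic can be
// reproduced and debugged with the usual Go tooling.
func TestReplayCrasher(t *testing.T) {
	data, err := ioutil.ReadFile("crashers/61bde465f47c93254d64d643c3b2480e0a54666e")
	if err != nil {
		t.Skipf("no crasher file present: %v", err)
	}
	Fuzz(data)
}
```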

View File

@@ -0,0 +1,34 @@
package checktx
import (
"github.com/tendermint/tendermint/abci/example/kvstore"
"github.com/tendermint/tendermint/config"
mempl "github.com/tendermint/tendermint/mempool"
"github.com/tendermint/tendermint/proxy"
)
var mempool mempl.Mempool
func init() {
app := kvstore.NewApplication()
cc := proxy.NewLocalClientCreator(app)
appConnMem, _ := cc.NewABCIClient()
err := appConnMem.Start()
if err != nil {
panic(err)
}
cfg := config.DefaultMempoolConfig()
cfg.Broadcast = false
mempool = mempl.NewCListMempool(cfg, appConnMem, 0)
}
func Fuzz(data []byte) int {
err := mempool.CheckTx(data, nil, mempl.TxInfo{})
if err != nil {
return 0
}
return 1
}

View File

@@ -0,0 +1,35 @@
// nolint: gosec
package addr
import (
"encoding/json"
"fmt"
"math/rand"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/p2p/pex"
)
var addrBook = pex.NewAddrBook("./testdata/addrbook.json", true)
func Fuzz(data []byte) int {
addr := new(p2p.NetAddress)
if err := json.Unmarshal(data, addr); err != nil {
return -1
}
// Fuzz AddAddress.
err := addrBook.AddAddress(addr, addr)
if err != nil {
return 0
}
// Also, make sure PickAddress always returns a non-nil address.
bias := rand.Intn(100)
if p := addrBook.PickAddress(bias); p == nil {
panic(fmt.Sprintf("picked a nil address (bias: %d, addrBook size: %v)",
bias, addrBook.Size()))
}
return 1
}

View File

@@ -0,0 +1,58 @@
// nolint: gosec
package main
import (
"encoding/json"
"flag"
"fmt"
"io/ioutil"
"log"
"net"
"os"
"path/filepath"
"github.com/tendermint/tendermint/crypto/ed25519"
"github.com/tendermint/tendermint/p2p"
)
func main() {
baseDir := flag.String("base", ".", `where the "corpus" directory will live`)
flag.Parse()
initCorpus(*baseDir)
}
func initCorpus(baseDir string) {
log.SetFlags(0)
// create "corpus" directory
corpusDir := filepath.Join(baseDir, "corpus")
if err := os.MkdirAll(corpusDir, 0755); err != nil {
log.Fatalf("Creating %q err: %v", corpusDir, err)
}
// create corpus
privKey := ed25519.GenPrivKey()
addrs := []*p2p.NetAddress{
{ID: p2p.NodeIDFromPubKey(privKey.PubKey()), IP: net.IPv4(0, 0, 0, 0), Port: 0},
{ID: p2p.NodeIDFromPubKey(privKey.PubKey()), IP: net.IPv4(127, 0, 0, 0), Port: 80},
{ID: p2p.NodeIDFromPubKey(privKey.PubKey()), IP: net.IPv4(213, 87, 10, 200), Port: 8808},
{ID: p2p.NodeIDFromPubKey(privKey.PubKey()), IP: net.IPv4(111, 111, 111, 111), Port: 26656},
{ID: p2p.NodeIDFromPubKey(privKey.PubKey()), IP: net.ParseIP("2001:db8::68"), Port: 26656},
}
for i, addr := range addrs {
filename := filepath.Join(corpusDir, fmt.Sprintf("%d.json", i))
bz, err := json.Marshal(addr)
if err != nil {
log.Fatalf("can't marshal %v: %v", addr, err)
}
if err := ioutil.WriteFile(filename, bz, 0644); err != nil {
log.Fatalf("can't write %v to %q: %v", addr, filename, err)
}
log.Printf("wrote %q", filename)
}
}

View File

@@ -0,0 +1,82 @@
// nolint: gosec
package main
import (
"flag"
"fmt"
"io/ioutil"
"log"
"math/rand"
"os"
"path/filepath"
"github.com/tendermint/tendermint/crypto/ed25519"
"github.com/tendermint/tendermint/p2p"
tmp2p "github.com/tendermint/tendermint/proto/tendermint/p2p"
)
func main() {
baseDir := flag.String("base", ".", `where the "corpus" directory will live`)
flag.Parse()
initCorpus(*baseDir)
}
func initCorpus(rootDir string) {
log.SetFlags(0)
corpusDir := filepath.Join(rootDir, "corpus")
if err := os.MkdirAll(corpusDir, 0755); err != nil {
log.Fatalf("Creating %q err: %v", corpusDir, err)
}
sizes := []int{0, 1, 2, 17, 5, 31}
// Make the PRNG predictable
rand.Seed(10)
for _, n := range sizes {
var addrs []*p2p.NetAddress
// IPv4 addresses
for i := 0; i < n; i++ {
privKey := ed25519.GenPrivKey()
addr := fmt.Sprintf(
"%s@%v.%v.%v.%v:26656",
p2p.NodeIDFromPubKey(privKey.PubKey()),
rand.Int()%256,
rand.Int()%256,
rand.Int()%256,
rand.Int()%256,
)
netAddr, _ := p2p.NewNetAddressString(addr)
addrs = append(addrs, netAddr)
}
// IPv6 addresses
privKey := ed25519.GenPrivKey()
ipv6a, err := p2p.NewNetAddressString(
fmt.Sprintf("%s@[ff02::1:114]:26656", p2p.NodeIDFromPubKey(privKey.PubKey())))
if err != nil {
log.Fatalf("can't create a new netaddress: %v", err)
}
addrs = append(addrs, ipv6a)
msg := tmp2p.PexMessage{
Sum: &tmp2p.PexMessage_PexResponse{
PexResponse: &tmp2p.PexResponse{Addresses: p2p.NetAddressesToProto(addrs)},
},
}
bz, err := msg.Marshal()
if err != nil {
log.Fatalf("unable to marshal: %v", err)
}
filename := filepath.Join(rootDir, "corpus", fmt.Sprintf("%d", n))
if err := ioutil.WriteFile(filename, bz, 0644); err != nil {
log.Fatalf("can't write %X to %q: %v", bz, filename, err)
}
log.Printf("wrote %q", filename)
}
}

View File

@@ -0,0 +1,86 @@
package pex
import (
"net"
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/crypto/ed25519"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/service"
"github.com/tendermint/tendermint/p2p"
"github.com/tendermint/tendermint/p2p/pex"
"github.com/tendermint/tendermint/version"
)
var (
pexR *pex.Reactor
peer p2p.Peer
)
func init() {
addrB := pex.NewAddrBook("./testdata/addrbook1", false)
pexR = pex.NewReactor(addrB, &pex.ReactorConfig{SeedMode: false}) // assign the package-level var (":=" would shadow it and leave it nil)
if pexR == nil {
panic("NewReactor returned nil")
}
pexR.SetLogger(log.NewNopLogger())
peer = newFuzzPeer() // assign the package-level var (":=" would shadow it)
pexR.AddPeer(peer)
}
func Fuzz(data []byte) int {
// MakeSwitch uses log.TestingLogger which can't be executed in init()
cfg := config.DefaultP2PConfig()
cfg.PexReactor = true
sw := p2p.MakeSwitch(cfg, 0, "127.0.0.1", "123.123.123", func(i int, sw *p2p.Switch) *p2p.Switch {
return sw
})
pexR.SetSwitch(sw)
pexR.Receive(pex.PexChannel, peer, data)
return 1
}
type fuzzPeer struct {
*service.BaseService
m map[string]interface{}
}
var _ p2p.Peer = (*fuzzPeer)(nil)
func newFuzzPeer() *fuzzPeer {
fp := &fuzzPeer{m: make(map[string]interface{})}
fp.BaseService = service.NewBaseService(nil, "fuzzPeer", fp)
return fp
}
var privKey = ed25519.GenPrivKey()
var nodeID = p2p.NodeIDFromPubKey(privKey.PubKey())
var defaultNodeInfo = p2p.NodeInfo{
ProtocolVersion: p2p.NewProtocolVersion(
version.P2PProtocol,
version.BlockProtocol,
0,
),
NodeID: nodeID,
ListenAddr: "0.0.0.0:98992",
Moniker: "foo1",
}
func (fp *fuzzPeer) FlushStop() {}
func (fp *fuzzPeer) ID() p2p.NodeID { return nodeID }
func (fp *fuzzPeer) RemoteIP() net.IP { return net.IPv4(0, 0, 0, 0) }
func (fp *fuzzPeer) RemoteAddr() net.Addr {
return &net.TCPAddr{IP: fp.RemoteIP(), Port: 98991, Zone: ""}
}
func (fp *fuzzPeer) IsOutbound() bool { return false }
func (fp *fuzzPeer) IsPersistent() bool { return false }
func (fp *fuzzPeer) CloseConn() error { return nil }
func (fp *fuzzPeer) NodeInfo() p2p.NodeInfo { return defaultNodeInfo }
func (fp *fuzzPeer) Status() p2p.ConnectionStatus { var cs p2p.ConnectionStatus; return cs }
func (fp *fuzzPeer) SocketAddr() *p2p.NetAddress { return p2p.NewNetAddress(fp.ID(), fp.RemoteAddr()) }
func (fp *fuzzPeer) Send(byte, []byte) bool { return true }
func (fp *fuzzPeer) TrySend(byte, []byte) bool { return true }
func (fp *fuzzPeer) Set(key string, value interface{}) { fp.m[key] = value }
func (fp *fuzzPeer) Get(key string) interface{} { return fp.m[key] }

1705
test/fuzz/p2p/pex/testdata/addrbook1 vendored Normal file

File diff suppressed because it is too large

View File

@@ -0,0 +1,48 @@
// nolint: gosec
package main
import (
"flag"
"fmt"
"io/ioutil"
"log"
"os"
"path/filepath"
)
func main() {
baseDir := flag.String("base", ".", `where the "corpus" directory will live`)
flag.Parse()
initCorpus(*baseDir)
}
func initCorpus(baseDir string) {
log.SetFlags(0)
corpusDir := filepath.Join(baseDir, "corpus")
if err := os.MkdirAll(corpusDir, 0755); err != nil {
log.Fatal(err)
}
data := []string{
"dadc04c2-cfb1-4aa9-a92a-c0bf780ec8b6",
"",
" ",
" a ",
`{"a": 12, "tsp": 999, k: "blue"}`,
`9999.999`,
`""`,
`Tendermint fuzzing`,
}
for i, datum := range data {
filename := filepath.Join(corpusDir, fmt.Sprintf("%d", i))
if err := ioutil.WriteFile(filename, []byte(datum), 0644); err != nil {
log.Fatalf("can't write %v to %q: %v", datum, filename, err)
}
log.Printf("wrote %q", filename)
}
}

View File

@@ -0,0 +1,107 @@
package secretconnection
import (
"bytes"
"fmt"
"io"
"log"
"github.com/tendermint/tendermint/crypto/ed25519"
"github.com/tendermint/tendermint/libs/async"
sc "github.com/tendermint/tendermint/p2p/conn"
)
func Fuzz(data []byte) int {
if len(data) == 0 {
return -1
}
fooConn, barConn := makeSecretConnPair()
n, err := fooConn.Write(data)
if err != nil {
panic(err)
}
dataRead := make([]byte, n)
m, err := barConn.Read(dataRead)
if err != nil {
panic(err)
}
if !bytes.Equal(data[:n], dataRead[:m]) {
panic(fmt.Sprintf("bytes written %X != read %X", data[:n], dataRead[:m]))
}
return 1
}
type kvstoreConn struct {
*io.PipeReader
*io.PipeWriter
}
func (drw kvstoreConn) Close() (err error) {
err2 := drw.PipeWriter.CloseWithError(io.EOF)
err1 := drw.PipeReader.Close()
if err2 != nil {
return err2
}
return err1
}
// Each returned ReadWriteCloser is akin to a net.Connection
func makeKVStoreConnPair() (fooConn, barConn kvstoreConn) {
barReader, fooWriter := io.Pipe()
fooReader, barWriter := io.Pipe()
return kvstoreConn{fooReader, fooWriter}, kvstoreConn{barReader, barWriter}
}
func makeSecretConnPair() (fooSecConn, barSecConn *sc.SecretConnection) {
var (
fooConn, barConn = makeKVStoreConnPair()
fooPrvKey = ed25519.GenPrivKey()
fooPubKey = fooPrvKey.PubKey()
barPrvKey = ed25519.GenPrivKey()
barPubKey = barPrvKey.PubKey()
)
// Make connections from both sides in parallel.
var trs, ok = async.Parallel(
func(_ int) (val interface{}, abort bool, err error) {
fooSecConn, err = sc.MakeSecretConnection(fooConn, fooPrvKey)
if err != nil {
log.Printf("failed to establish SecretConnection for foo: %v", err)
return nil, true, err
}
remotePubBytes := fooSecConn.RemotePubKey()
if !remotePubBytes.Equals(barPubKey) {
err = fmt.Errorf("unexpected fooSecConn.RemotePubKey. Expected %v, got %v",
barPubKey, fooSecConn.RemotePubKey())
log.Print(err)
return nil, true, err
}
return nil, false, nil
},
func(_ int) (val interface{}, abort bool, err error) {
barSecConn, err = sc.MakeSecretConnection(barConn, barPrvKey)
if barSecConn == nil {
log.Printf("failed to establish SecretConnection for bar: %v", err)
return nil, true, err
}
remotePubBytes := barSecConn.RemotePubKey()
if !remotePubBytes.Equals(fooPubKey) {
err = fmt.Errorf("unexpected barSecConn.RemotePubKey. Expected %v, got %v",
fooPubKey, barSecConn.RemotePubKey())
log.Print(err)
return nil, true, err
}
return nil, false, nil
},
)
if trs.FirstError() != nil {
log.Fatalf("unexpected error: %v", trs.FirstError())
}
if !ok {
log.Fatal("Unexpected task abortion")
}
return fooSecConn, barSecConn
}

View File

@@ -0,0 +1,44 @@
package handler
import (
"bytes"
"encoding/json"
"io/ioutil"
"net/http"
"net/http/httptest"
"github.com/tendermint/tendermint/libs/log"
rs "github.com/tendermint/tendermint/rpc/jsonrpc/server"
types "github.com/tendermint/tendermint/rpc/jsonrpc/types"
)
var rpcFuncMap = map[string]*rs.RPCFunc{
"c": rs.NewRPCFunc(func(s string, i int) (string, int) { return "foo", 200 }, "s,i"),
}
var mux *http.ServeMux
func init() {
mux = http.NewServeMux() // assign the package-level mux (":=" would leave the global nil)
buf := new(bytes.Buffer)
lgr := log.NewTMLogger(buf)
rs.RegisterRPCFuncs(mux, rpcFuncMap, lgr)
}
func Fuzz(data []byte) int {
req, _ := http.NewRequest("POST", "http://localhost/", bytes.NewReader(data))
rec := httptest.NewRecorder()
mux.ServeHTTP(rec, req)
res := rec.Result()
blob, err := ioutil.ReadAll(res.Body)
if err != nil {
panic(err)
}
if err := res.Body.Close(); err != nil {
panic(err)
}
recv := new(types.RPCResponse)
if err := json.Unmarshal(blob, recv); err != nil {
panic(err)
}
return 1
}

View File

@@ -255,7 +255,7 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
h.logger.Info("ABCI Handshake App Info",
"height", blockHeight,
"hash", fmt.Sprintf("%X", appHash),
"hash", appHash,
"software-version", res.Version,
"protocol-version", res.AppVersion,
)
@@ -272,7 +272,7 @@ func (h *Handshaker) Handshake(proxyApp proxy.AppConns) error {
}
h.logger.Info("Completed ABCI Handshake - Tendermint and App are synced",
"appHeight", blockHeight, "appHash", fmt.Sprintf("%X", appHash))
"appHeight", blockHeight, "appHash", appHash)
// TODO: (on restart) replay mempool

View File

@@ -35,7 +35,7 @@ func init() {
}
func registerFlagsRootCmd(command *cobra.Command) {
command.PersistentFlags().String("log_level", config.LogLevel, "Log level")
command.PersistentFlags().String("log-level", config.LogLevel, "Log level")
}
func ParseConfig() (*cfg.Config, error) {

View File

@@ -795,8 +795,11 @@ func NewNode(config *cfg.Config,
logNodeStartupInfo(state, pubKey, logger, consensusLogger)
// TODO: Fetch and provide real options and do proper p2p bootstrapping.
peerMgr := p2p.NewPeerManager(p2p.PeerManagerOptions{})
// TODO: Use a persistent peer database.
peerMgr, err := p2p.NewPeerManager(dbm.NewMemDB(), p2p.PeerManagerOptions{})
if err != nil {
return nil, err
}
csMetrics, p2pMetrics, memplMetrics, smMetrics := metricsProvider(genDoc.ChainID)
mpReactorShim, mpReactor, mempool := createMempoolReactor(config, proxyApp, state, memplMetrics, peerMgr, logger)