tendermint/state/execution.go
Cyrus Goh 5182ffee25 docs: master → docs-staging (#5990)
* Makefile: always pull image in proto-gen-docker. (#5953)

The `proto-gen-docker` target didn't pull an updated Docker image, and would use a local image if present, which could be outdated and produce wrong results.

* test: fix TestPEXReactorRunning data race (#5955)

Fixes #5941.

Not entirely sure that this will fix the problem (couldn't reproduce), but in any case this is an artifact of a hack in the P2P transport refactor to make it work with the legacy P2P stack, and will be removed when the refactor is done anyway.

* test/fuzz: move fuzz tests into this repo (#5918)

Co-authored-by: Emmanuel T Odeke <emmanuel@orijtech.com>

Closes #5907

- add init-corpus to blockchain reactor
- remove validator-set FromBytes test
now that we have proto, we don't need to test it! bye amino
- simplify mempool test
do we want to test remote ABCI app?
- do not recreate mux on every crash in jsonrpc test
- update p2p pex reactor test
- remove p2p/listener test
the API has changed + I did not understand what it tested anyway
- update secretconnection test
- add readme and makefile
- list inputs in readme
- add nightly workflow
- remove blockchain fuzz test
EncodeMsg / DecodeMsg no longer exist

* docker: don't login when in PR (#5961)

* docker: release Linux/ARM64 image (#5925)

Co-authored-by: Marko <marbar3778@yahoo.com>

* p2p: make PeerManager.DialNext() and EvictNext() block (#5947)

See #5936 and #5938 for background.

The plan was initially to have `DialNext()` and `EvictNext()` return a channel. However, implementing this became unnecessarily complicated and error-prone. As an example, the channel would be both consumed and populated (via method calls) by the same driving method (e.g. `Router.dialPeers()`) which could easily cause deadlocks where a method call blocked while sending on the channel that the caller itself was responsible for consuming (but couldn't since it was busy making the method call). It would also require a set of goroutines in the peer manager that would interact with the goroutines in the router in non-obvious ways, and fully populating the channel on startup could cause deadlocks with other startup tasks. Several issues like these made the solution hard to reason about.

I therefore simply made `DialNext()` and `EvictNext()` block until the next peer was available, using internal triggers to wake these methods up in a non-blocking fashion when any relevant state changes occurred. This proved much simpler to reason about, since there are no goroutines in the peer manager (except for trivial retry timers), nor any blocking channel sends, and it instead relies entirely on the existing goroutine structure of the router for concurrency. This also happens to be the same pattern used by the `Transport.Accept()` API, following Go stdlib conventions, so all router goroutines end up using a consistent pattern as well.
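
A minimal sketch of that blocking pattern, under assumptions (a toy `manager` type with a single buffered wake-up channel; not the actual `PeerManager` code): callers block in `DialNext()` until a state change wakes them, and the wake-up itself never blocks.

```go
package main

import (
	"fmt"
	"sync"
)

// manager is a hypothetical, simplified stand-in for PeerManager: DialNext
// blocks until an address is available, woken by a non-blocking trigger.
type manager struct {
	mtx      sync.Mutex
	dialable []string      // addresses ready to be dialed
	wakeDial chan struct{} // buffered size 1, so triggers never block
}

func newManager() *manager {
	return &manager{wakeDial: make(chan struct{}, 1)}
}

// Add registers an address and wakes any blocked DialNext call.
func (m *manager) Add(addr string) {
	m.mtx.Lock()
	m.dialable = append(m.dialable, addr)
	m.mtx.Unlock()
	select {
	case m.wakeDial <- struct{}{}: // non-blocking wake-up
	default:
	}
}

// DialNext blocks until the next dialable address is available.
func (m *manager) DialNext() string {
	for {
		m.mtx.Lock()
		if len(m.dialable) > 0 {
			addr := m.dialable[0]
			m.dialable = m.dialable[1:]
			m.mtx.Unlock()
			return addr
		}
		m.mtx.Unlock()
		<-m.wakeDial // sleep until state changes
	}
}

func main() {
	m := newManager()
	go m.Add("1.2.3.4:26656")
	fmt.Println(m.DialNext())
}
```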

* libs/log: format []byte as hexadecimal string (uppercased) (#5960)

Closes: #5806 

Co-authored-by: Lanie Hei <heixx011@umn.edu>
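
For illustration, the gist of the formatting change might look like this (a hedged sketch, not the actual `libs/log` implementation):

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// hexValue renders a []byte the way a logger might: as an uppercased
// hexadecimal string rather than Go's default byte-slice formatting.
func hexValue(b []byte) string {
	return strings.ToUpper(hex.EncodeToString(b))
}

func main() {
	fmt.Println(hexValue([]byte{0xde, 0xad, 0xbe, 0xef})) // DEADBEEF
}
```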

* docs: log level docs (#5945)

## Description

add section on configuring log levels

Closes: #XXX

* .github: fix fuzz-nightly job (#5965)

`outputs` is a property of the job, not of an individual step.

* e2e: add control over the log level of nodes (#5958)

* mempool: fix reactor tests (#5967)

## Description

Update the faux router to either drop channel errors or handle them based on an argument. This prevents deadlocks in tests where we try to send an error on the mempool channel but there is no reader.

Closes: #5956
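
A rough sketch of the idea, with hypothetical names (`fauxRouter` and `onErr` are illustrative, not the real test helper): errors sent on the channel are always consumed, either handled or dropped, so a send never deadlocks for lack of a reader.

```go
package main

import (
	"errors"
	"fmt"
)

// fauxRouter is a hypothetical test double. When handleErrs is false it simply
// drains the error channel, so reactor code can always send without deadlocking.
type fauxRouter struct {
	errCh      chan error
	handleErrs bool
	done       chan struct{}
}

func newFauxRouter(handleErrs bool, onErr func(error)) *fauxRouter {
	r := &fauxRouter{
		errCh:      make(chan error),
		handleErrs: handleErrs,
		done:       make(chan struct{}),
	}
	go func() {
		defer close(r.done)
		for err := range r.errCh {
			if r.handleErrs {
				onErr(err) // hand the error to the test
			}
			// otherwise drop it on the floor
		}
	}()
	return r
}

func main() {
	r := newFauxRouter(true, func(err error) { fmt.Println("handled:", err) })
	r.errCh <- errors.New("peer misbehaved")
	close(r.errCh)
	<-r.done
}
```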

* p2p: improve peerStore prototype (#5954)

This improves the `peerStore` prototype, e.g. by:

* Using a database with Protobuf for persistence, but also keeping full peer set in memory for performance.
* Simplifying the API, by taking/returning struct copies for safety, and removing errors for in-memory operations.
* Caching the ranked peer set, as a temporary solution until a better data structure is implemented.
* Adding `PeerManagerOptions.MaxPeers` and pruning the peer store (based on rank) when it's full.
* Rewriting `PeerAddress` to be independent of `url.URL`, normalizing it and tightening semantics.
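
Roughly the shape described above, as a toy sketch (JSON and plain maps stand in for Protobuf and the database; all names are illustrative): the full peer set stays in memory for reads, every mutation is written through to the persistent backend, and the ranked set is cached until the next change.

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// peerInfo is a simplified stand-in for the stored peer record.
type peerInfo struct {
	ID    string
	Score int
}

// peerStore keeps all peers in memory for fast reads and writes every change
// through to a persistent backend (here a toy map standing in for a database).
type peerStore struct {
	peers  map[string]peerInfo // in-memory working set
	db     map[string][]byte   // stand-in for an on-disk key/value store
	ranked []peerInfo          // cached ranking, invalidated on change
}

func newPeerStore() *peerStore {
	return &peerStore{peers: map[string]peerInfo{}, db: map[string][]byte{}}
}

// Set stores a copy of the peer both in memory and in the backend.
func (s *peerStore) Set(p peerInfo) error {
	bz, err := json.Marshal(p) // the real store would use Protobuf
	if err != nil {
		return err
	}
	s.db[p.ID] = bz
	s.peers[p.ID] = p
	s.ranked = nil // invalidate cached ranking
	return nil
}

// Get returns a copy, so callers can't mutate the store's state.
func (s *peerStore) Get(id string) (peerInfo, bool) {
	p, ok := s.peers[id]
	return p, ok
}

// Ranked returns peers sorted by score, caching the result until the next Set.
func (s *peerStore) Ranked() []peerInfo {
	if s.ranked == nil {
		for _, p := range s.peers {
			s.ranked = append(s.ranked, p)
		}
		sort.Slice(s.ranked, func(i, j int) bool { return s.ranked[i].Score > s.ranked[j].Score })
	}
	return s.ranked
}

func main() {
	s := newPeerStore()
	_ = s.Set(peerInfo{ID: "a", Score: 3})
	_ = s.Set(peerInfo{ID: "b", Score: 7})
	fmt.Println(s.Ranked())
}
```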

* p2p: simplify PeerManager upgrade logic (#5962)

Follow-up from #5947, branched off of #5954.

This simplifies the upgrade logic by adding explicit eviction requests, which can also be useful for other use-cases (e.g. if we need to ban a peer that's misbehaving). Changes:

* Add `evict` map which queues up peers to explicitly evict.
* `upgrading` now only tracks peers that we're upgrading via dialing (`DialNext` → `Dialed`/`DialFailed`).
* `Dialed` will unmark `upgrading`, and queue `evict` if still beyond capacity.
* `Accepted` will pick a random lower-scored peer to upgrade to, if appropriate, and doesn't care about `upgrading` (the dial will fail later, since it's already connected).
* `EvictNext` will return a peer scheduled in `evict` if any, otherwise if beyond capacity just evict the lowest-scored peer.

This limits all of the `upgrading` logic to `DialNext`, `Dialed`, and `DialFailed`, making it much simpler, and it should generally do the right thing in all cases I can think of.
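
A toy illustration of the eviction choice (names and shapes are hypothetical, not the real `PeerManager`): explicitly queued `evict` requests win, otherwise the lowest-scored connected peer is evicted once we're beyond capacity.

```go
package main

import "fmt"

// evictor is an illustrative stand-in: EvictNext prefers explicitly queued
// evictions, and otherwise (only when over capacity) picks the lowest-scored
// connected peer.
type evictor struct {
	maxConnected int
	connected    map[string]int  // peer ID -> score
	evict        map[string]bool // peers explicitly queued for eviction
}

func (e *evictor) EvictNext() (string, bool) {
	// Explicit requests first (e.g. queued by Dialed after an upgrade).
	for id := range e.evict {
		delete(e.evict, id)
		return id, true
	}
	// Otherwise, only evict if we're beyond capacity.
	if len(e.connected) <= e.maxConnected {
		return "", false
	}
	lowest, lowestScore, found := "", 0, false
	for id, score := range e.connected {
		if !found || score < lowestScore {
			lowest, lowestScore, found = id, score, true
		}
	}
	return lowest, found
}

func main() {
	e := &evictor{
		maxConnected: 2,
		connected:    map[string]int{"a": 9, "b": 1, "c": 5},
		evict:        map[string]bool{},
	}
	fmt.Println(e.EvictNext()) // "b" true: over capacity, lowest score
}
```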

* p2p: add PeerManager.Advertise() (#5957)

Adds a naïve `PeerManager.Advertise()` method that the new PEX reactor can use to fetch addresses to advertise, as well as some other `FIXME`s on address advertisement.
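
As a sketch only (the `addrBook` type here is hypothetical, not the real implementation), a naïve `Advertise()` might simply return up to a limit of known addresses, excluding the requesting peer's own:

```go
package main

import "fmt"

// addrBook is an illustrative address registry keyed by peer ID.
type addrBook struct {
	addresses map[string][]string // peer ID -> known addresses
}

// Advertise returns up to limit addresses to send to peerID, skipping the
// peer's own addresses.
func (b *addrBook) Advertise(peerID string, limit int) []string {
	var out []string
	for id, addrs := range b.addresses {
		if id == peerID {
			continue // don't tell a peer about itself
		}
		for _, addr := range addrs {
			if len(out) >= limit {
				return out
			}
			out = append(out, addr)
		}
	}
	return out
}

func main() {
	b := &addrBook{addresses: map[string][]string{
		"a": {"1.2.3.4:26656"},
		"b": {"5.6.7.8:26656"},
	}}
	fmt.Println(b.Advertise("a", 10))
}
```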

* blockchain v0: fix waitgroup data race (#5970)

## Description

Fixes the data race in usage of `WaitGroup`. Specifically, the case where we invoke `Wait` _before_ the first delta `Add` call when the current waitgroup counter is zero. See https://golang.org/pkg/sync/#WaitGroup.Add.

Still not sure how this manifests itself in a test since the reactor has to be stopped virtually immediately after being started (I think?).

Regardless, this is the appropriate fix.

closes: #5968
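
The race in a nutshell, per the `sync` docs: `Wait` must not run concurrently with the first `Add` while the counter is zero, so the `Add` has to happen before the code that can reach `Wait`:

```go
package main

import "sync"

func main() {
	var wg sync.WaitGroup

	// Racy shape: if Wait can run before the goroutine's first Add,
	// Wait may observe a zero counter and return while work is still starting.
	//
	//   go func() { wg.Add(1); defer wg.Done(); work() }()
	//   wg.Wait()
	//
	// Safe shape: Add before starting the goroutine, so Add happens before Wait.
	wg.Add(1)
	go func() {
		defer wg.Done()
		// ... do work ...
	}()
	wg.Wait()
}
```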

* tests: fix `make test` (#5966)

## Description
 
- bump deadlock dep to master
  - fixes `make test` since we now use `deadlock.Once`

Closes: #XXX

* terminate go-fuzz gracefully (w/ SIGINT) (#5973)

and preserve exit code.

```
2021/01/26 03:34:49 workers: 2, corpus: 4 (8m28s ago), crashers: 0, restarts: 1/9976, execs: 11013732 (21596/sec), cover: 121, uptime: 8m30s
make: *** [fuzz-mempool] Terminated
Makefile:5: recipe for target 'fuzz-mempool' failed
Error: Process completed with exit code 124.
```

https://github.com/tendermint/tendermint/runs/1766661614

`continue-on-error` should make GH ignore any error codes.
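
For illustration only (the actual fix lives in the Makefile/CI config, not in Go code): the general pattern is to give the fuzzer a time budget, ask it to stop with SIGINT so it can shut down cleanly, and propagate its own exit code.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
	"time"
)

// Run a long-lived command for a bounded time, ask it to stop with SIGINT
// (so it can exit cleanly), and preserve its exit code. "sleep 60" is a
// stand-in for a go-fuzz invocation.
func main() {
	cmd := exec.Command("sleep", "60")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Start(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// After the time budget, send SIGINT instead of SIGTERM/SIGKILL.
	timer := time.AfterFunc(2*time.Second, func() {
		_ = cmd.Process.Signal(syscall.SIGINT)
	})
	defer timer.Stop()

	err := cmd.Wait()
	if exitErr, ok := err.(*exec.ExitError); ok {
		if code := exitErr.ExitCode(); code >= 0 {
			os.Exit(code) // propagate the child's own exit code
		}
		os.Exit(1) // child was killed by the signal before exiting on its own
	}
	if err != nil {
		os.Exit(1)
	}
}
```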

* p2p: add prototype PEX reactor for new stack (#5971)

This adds a prototype PEX reactor for the new P2P stack.

* proto/p2p: rename PEX messages and fields (#5974)

Fixes #5899 by renaming a bunch of P2P Protobuf entities (while maintaining wire compatibility):

* `Message` to `PexMessage` (as it's only used for PEX messages).
* `PexAddrs` to `PexResponse`.
* `PexResponse.Addrs` to `PexResponse.Addresses`.
* `NetAddress` to `PexAddress` (as it's only used by PEX).

* p2p: resolve PEX addresses in PEX reactor (#5980)

This changes the new prototype PEX reactor to resolve peer address URLs into IP/port PEX addresses itself. Branched off of #5974.

I've spent some time thinking about address handling in the P2P stack. We currently use `PeerAddress` URLs everywhere, except for two places: when dialing a peer, and when exchanging addresses via PEX. We had two options:

1. Resolve addresses to endpoints inside `PeerManager`. This would introduce a lot of added complexity: we would have to track connection statistics per endpoint, have goroutines that asynchronously resolve and refresh these endpoints, deal with resolve scheduling before dialing (which is trickier than it sounds since it involves multiple goroutines in the peer manager and router and messes with peer rating order), handle IP address visibility issues, and so on.

2. Resolve addresses to endpoints (IP/port) only where they're used: when dialing, and in PEX. Everywhere else we use URLs.

I went with 2, because this significantly simplifies the handling of hostname resolution, and because I really think the PEX reactor should migrate to exchanging URLs instead of IP/port numbers anyway -- this allows operators to use DNS names for validators (and can easily migrate them to new IPs and/or load balance requests), and also allows different protocols (e.g. QUIC and `MemoryTransport`). Happy to discuss this.
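
A hedged sketch of option 2, with hypothetical `endpoint`/`resolve` names: a URL-style peer address is resolved into concrete IP/port endpoints only at the point of use (dialing or PEX), one endpoint per resolved IP.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/url"
	"strconv"
)

// endpoint is a resolved IP/port pair, roughly what PEX exchanges and what the
// transport dials. Types and names here are illustrative, not the real ones.
type endpoint struct {
	IP   net.IP
	Port uint16
}

// resolve turns a URL-style peer address (e.g. "mconn://node.example.com:26657")
// into concrete endpoints, one per resolved IP.
func resolve(ctx context.Context, addr string) ([]endpoint, error) {
	u, err := url.Parse(addr)
	if err != nil {
		return nil, err
	}
	port, err := strconv.ParseUint(u.Port(), 10, 16)
	if err != nil {
		return nil, err
	}
	ips, err := net.DefaultResolver.LookupIPAddr(ctx, u.Hostname())
	if err != nil {
		return nil, err
	}
	endpoints := make([]endpoint, 0, len(ips))
	for _, ip := range ips {
		endpoints = append(endpoints, endpoint{IP: ip.IP, Port: uint16(port)})
	}
	return endpoints, nil
}

func main() {
	eps, err := resolve(context.Background(), "mconn://localhost:26657")
	fmt.Println(eps, err)
}
```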

* test/p2p: close transports to avoid goroutine leak failures (#5982)

* mempool: fix TestReactorNoBroadcastToSender (#5984)

## Description

Looks like I missed a test in the original PR when fixing the tests.

Closes: #5956

* mempool: fix mempool tests timeout (#5988)

* p2p: use stopCtx when dialing peers in Router (#5983)

This ensures we don't leak dial goroutines when shutting down the router.
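
A minimal sketch of the idea, assuming a hypothetical `dialPeer` helper: dials use a context that is canceled on shutdown, so in-flight dial goroutines return instead of leaking.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// dialPeer dials with a context that is canceled when the router shuts down,
// so an in-flight dial returns early instead of leaking its goroutine.
func dialPeer(stopCtx context.Context, addr string) (net.Conn, error) {
	dialer := net.Dialer{Timeout: 10 * time.Second}
	return dialer.DialContext(stopCtx, "tcp", addr)
}

func main() {
	stopCtx, cancel := context.WithCancel(context.Background())
	go func() {
		time.Sleep(50 * time.Millisecond)
		cancel() // simulate router shutdown while a dial is in flight
	}()
	_, err := dialPeer(stopCtx, "203.0.113.1:26656") // unroutable test address
	fmt.Println(err)
}
```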

* docs: fix typo in state sync example (#5989)

Co-authored-by: Erik Grinaker <erik@interchain.berlin>
Co-authored-by: Anton Kaliaev <anton.kalyaev@gmail.com>
Co-authored-by: Marko <marbar3778@yahoo.com>
Co-authored-by: odidev <odidev@puresoftware.com>
Co-authored-by: Lanie Hei <heixx011@umn.edu>
Co-authored-by: Callum Waters <cmwaters19@gmail.com>
Co-authored-by: Aleksandr Bezobchuk <alexanderbez@users.noreply.github.com>
Co-authored-by: Sergey <52304443+c29r3@users.noreply.github.com>
2021-01-26 11:46:21 -08:00

553 lines
17 KiB
Go

package state
import (
"context"
"errors"
"fmt"
"time"
abci "github.com/tendermint/tendermint/abci/types"
cryptoenc "github.com/tendermint/tendermint/crypto/encoding"
"github.com/tendermint/tendermint/libs/fail"
"github.com/tendermint/tendermint/libs/log"
mempl "github.com/tendermint/tendermint/mempool"
tmstate "github.com/tendermint/tendermint/proto/tendermint/state"
tmproto "github.com/tendermint/tendermint/proto/tendermint/types"
"github.com/tendermint/tendermint/proxy"
"github.com/tendermint/tendermint/types"
)
//-----------------------------------------------------------------------------
// BlockExecutor handles block execution and state updates.
// It exposes ApplyBlock(), which validates & executes the block, updates state w/ ABCI responses,
// then commits and updates the mempool atomically, then saves state.
// BlockExecutor provides the context and accessories for properly executing a block.
type BlockExecutor struct {
// save state, validators, consensus params, abci responses here
store Store
// execute the app against this
proxyApp proxy.AppConnConsensus
// events
eventBus types.BlockEventPublisher
// manage the mempool lock during commit
// and update both with block results after commit.
mempool mempl.Mempool
evpool EvidencePool
logger log.Logger
metrics *Metrics
}
type BlockExecutorOption func(executor *BlockExecutor)
func BlockExecutorWithMetrics(metrics *Metrics) BlockExecutorOption {
return func(blockExec *BlockExecutor) {
blockExec.metrics = metrics
}
}
// NewBlockExecutor returns a new BlockExecutor with a NopEventBus.
// Call SetEventBus to provide one.
func NewBlockExecutor(
stateStore Store,
logger log.Logger,
proxyApp proxy.AppConnConsensus,
mempool mempl.Mempool,
evpool EvidencePool,
options ...BlockExecutorOption,
) *BlockExecutor {
res := &BlockExecutor{
store: stateStore,
proxyApp: proxyApp,
eventBus: types.NopEventBus{},
mempool: mempool,
evpool: evpool,
logger: logger,
metrics: NopMetrics(),
}
for _, option := range options {
option(res)
}
return res
}
func (blockExec *BlockExecutor) Store() Store {
return blockExec.store
}
// SetEventBus - sets the event bus for publishing block related events.
// If not called, it defaults to types.NopEventBus.
func (blockExec *BlockExecutor) SetEventBus(eventBus types.BlockEventPublisher) {
blockExec.eventBus = eventBus
}
// CreateProposalBlock calls state.MakeBlock with evidence from the evpool
// and txs from the mempool. The max bytes must be big enough to fit the commit.
// Up to 1/10th of the block space is allocated for maximum sized evidence.
// The rest is given to txs, up to the max gas.
func (blockExec *BlockExecutor) CreateProposalBlock(
height int64,
state State, commit *types.Commit,
proposerAddr []byte,
) (*types.Block, *types.PartSet) {
maxBytes := state.ConsensusParams.Block.MaxBytes
maxGas := state.ConsensusParams.Block.MaxGas
evidence, evSize := blockExec.evpool.PendingEvidence(state.ConsensusParams.Evidence.MaxBytes)
// Fetch a limited amount of valid txs
maxDataBytes := types.MaxDataBytes(maxBytes, evSize, state.Validators.Size())
txs := blockExec.mempool.ReapMaxBytesMaxGas(maxDataBytes, maxGas)
return state.MakeBlock(height, txs, commit, evidence, proposerAddr)
}
// ValidateBlock validates the given block against the given state.
// If the block is invalid, it returns an error.
// Validation does not mutate state, but does require historical information from the stateDB,
// ie. to verify evidence from a validator at an old height.
func (blockExec *BlockExecutor) ValidateBlock(state State, block *types.Block) error {
err := validateBlock(state, block)
if err != nil {
return err
}
return blockExec.evpool.CheckEvidence(block.Evidence.Evidence)
}
// ApplyBlock validates the block against the state, executes it against the app,
// fires the relevant events, commits the app, and saves the new state and responses.
// It returns the new state and the block height to retain (pruning older blocks).
// It's the only function that needs to be called
// from outside this package to process and commit an entire block.
// It takes a blockID to avoid recomputing the parts hash.
func (blockExec *BlockExecutor) ApplyBlock(
state State, blockID types.BlockID, block *types.Block,
) (State, int64, error) {
if err := validateBlock(state, block); err != nil {
return state, 0, ErrInvalidBlock(err)
}
startTime := time.Now().UnixNano()
abciResponses, err := execBlockOnProxyApp(blockExec.logger, blockExec.proxyApp, block,
blockExec.store, state.InitialHeight)
endTime := time.Now().UnixNano()
blockExec.metrics.BlockProcessingTime.Observe(float64(endTime-startTime) / 1000000)
if err != nil {
return state, 0, ErrProxyAppConn(err)
}
fail.Fail() // XXX
// Save the results before we commit.
if err := blockExec.store.SaveABCIResponses(block.Height, abciResponses); err != nil {
return state, 0, err
}
fail.Fail() // XXX
// validate the validator updates and convert to tendermint types
abciValUpdates := abciResponses.EndBlock.ValidatorUpdates
err = validateValidatorUpdates(abciValUpdates, state.ConsensusParams.Validator)
if err != nil {
return state, 0, fmt.Errorf("error in validator updates: %v", err)
}
validatorUpdates, err := types.PB2TM.ValidatorUpdates(abciValUpdates)
if err != nil {
return state, 0, err
}
if len(validatorUpdates) > 0 {
blockExec.logger.Info("Updates to validators", "updates", types.ValidatorListString(validatorUpdates))
}
// Update the state with the block and responses.
state, err = updateState(state, blockID, &block.Header, abciResponses, validatorUpdates)
if err != nil {
return state, 0, fmt.Errorf("commit failed for application: %v", err)
}
// Lock mempool, commit app state, update mempool.
appHash, retainHeight, err := blockExec.Commit(state, block, abciResponses.DeliverTxs)
if err != nil {
return state, 0, fmt.Errorf("commit failed for application: %v", err)
}
// Update evpool with the latest state.
blockExec.evpool.Update(state, block.Evidence.Evidence)
fail.Fail() // XXX
// Update the app hash and save the state.
state.AppHash = appHash
if err := blockExec.store.Save(state); err != nil {
return state, 0, err
}
fail.Fail() // XXX
// Events are fired after everything else.
// NOTE: if we crash between Commit and Save, events won't be fired during replay
fireEvents(blockExec.logger, blockExec.eventBus, block, abciResponses, validatorUpdates)
return state, retainHeight, nil
}
// Commit locks the mempool, runs the ABCI Commit message, and updates the
// mempool.
// It returns the result of calling abci.Commit (the AppHash) and the height to retain (if any).
// The Mempool must be locked during commit and update because state is
// typically reset on Commit and old txs must be replayed against committed
// state before new txs are run in the mempool, lest they be invalid.
func (blockExec *BlockExecutor) Commit(
state State,
block *types.Block,
deliverTxResponses []*abci.ResponseDeliverTx,
) ([]byte, int64, error) {
blockExec.mempool.Lock()
defer blockExec.mempool.Unlock()
// while mempool is Locked, flush to ensure all async requests have completed
// in the ABCI app before Commit.
err := blockExec.mempool.FlushAppConn()
if err != nil {
blockExec.logger.Error("Client error during mempool.FlushAppConn", "err", err)
return nil, 0, err
}
// Commit block, get hash back
res, err := blockExec.proxyApp.CommitSync(context.Background())
if err != nil {
blockExec.logger.Error(
"Client error during proxyAppConn.CommitSync",
"err", err,
)
return nil, 0, err
}
// ResponseCommit has no error code - just data
blockExec.logger.Info(
"Committed state",
"height", block.Height,
"txs", len(block.Txs),
"appHash", res.Data,
)
// Update mempool.
err = blockExec.mempool.Update(
block.Height,
block.Txs,
deliverTxResponses,
TxPreCheck(state),
TxPostCheck(state),
)
return res.Data, res.RetainHeight, err
}
//---------------------------------------------------------
// Helper functions for executing blocks and updating state
// Executes block's transactions on proxyAppConn.
// Returns a list of transaction results and updates to the validator set
func execBlockOnProxyApp(
logger log.Logger,
proxyAppConn proxy.AppConnConsensus,
block *types.Block,
store Store,
initialHeight int64,
) (*tmstate.ABCIResponses, error) {
var validTxs, invalidTxs = 0, 0
txIndex := 0
abciResponses := new(tmstate.ABCIResponses)
dtxs := make([]*abci.ResponseDeliverTx, len(block.Txs))
abciResponses.DeliverTxs = dtxs
// Execute transactions and get hash.
proxyCb := func(req *abci.Request, res *abci.Response) {
if r, ok := res.Value.(*abci.Response_DeliverTx); ok {
// TODO: make use of res.Log
// TODO: make use of this info
// Blocks may include invalid txs.
txRes := r.DeliverTx
if txRes.Code == abci.CodeTypeOK {
validTxs++
} else {
logger.Debug("Invalid tx", "code", txRes.Code, "log", txRes.Log)
invalidTxs++
}
abciResponses.DeliverTxs[txIndex] = txRes
txIndex++
}
}
proxyAppConn.SetResponseCallback(proxyCb)
commitInfo := getBeginBlockValidatorInfo(block, store, initialHeight)
byzVals := make([]abci.Evidence, 0)
for _, evidence := range block.Evidence.Evidence {
byzVals = append(byzVals, evidence.ABCI()...)
}
ctx := context.Background()
// Begin block
var err error
pbh := block.Header.ToProto()
if pbh == nil {
return nil, errors.New("nil header")
}
abciResponses.BeginBlock, err = proxyAppConn.BeginBlockSync(
ctx,
abci.RequestBeginBlock{
Hash: block.Hash(),
Header: *pbh,
LastCommitInfo: commitInfo,
ByzantineValidators: byzVals,
})
if err != nil {
logger.Error("Error in proxyAppConn.BeginBlock", "err", err)
return nil, err
}
// Run txs of block.
for _, tx := range block.Txs {
_, err = proxyAppConn.DeliverTxAsync(ctx, abci.RequestDeliverTx{Tx: tx})
if err != nil {
return nil, err
}
}
// End block.
abciResponses.EndBlock, err = proxyAppConn.EndBlockSync(ctx, abci.RequestEndBlock{Height: block.Height})
if err != nil {
logger.Error("Error in proxyAppConn.EndBlock", "err", err)
return nil, err
}
logger.Info("Executed block", "height", block.Height, "validTxs", validTxs, "invalidTxs", invalidTxs)
return abciResponses, nil
}
func getBeginBlockValidatorInfo(block *types.Block, store Store,
initialHeight int64) abci.LastCommitInfo {
voteInfos := make([]abci.VoteInfo, block.LastCommit.Size())
// Initial block -> LastCommitInfo.Votes are empty.
// Remember that the first LastCommit is intentionally empty, so it makes
// sense for LastCommitInfo.Votes to also be empty.
if block.Height > initialHeight {
lastValSet, err := store.LoadValidators(block.Height - 1)
if err != nil {
panic(err)
}
// Sanity check that commit size matches validator set size - only applies
// after first block.
var (
commitSize = block.LastCommit.Size()
valSetLen = len(lastValSet.Validators)
)
if commitSize != valSetLen {
panic(fmt.Sprintf("commit size (%d) doesn't match valset length (%d) at height %d\n\n%v\n\n%v",
commitSize, valSetLen, block.Height, block.LastCommit.Signatures, lastValSet.Validators))
}
for i, val := range lastValSet.Validators {
commitSig := block.LastCommit.Signatures[i]
voteInfos[i] = abci.VoteInfo{
Validator: types.TM2PB.Validator(val),
SignedLastBlock: !commitSig.Absent(),
}
}
}
return abci.LastCommitInfo{
Round: block.LastCommit.Round,
Votes: voteInfos,
}
}
func validateValidatorUpdates(abciUpdates []abci.ValidatorUpdate,
params tmproto.ValidatorParams) error {
for _, valUpdate := range abciUpdates {
if valUpdate.GetPower() < 0 {
return fmt.Errorf("voting power can't be negative %v", valUpdate)
} else if valUpdate.GetPower() == 0 {
// continue, since this is deleting the validator, and thus there is no
// pubkey to check
continue
}
// Check if validator's pubkey matches an ABCI type in the consensus params
pk, err := cryptoenc.PubKeyFromProto(valUpdate.PubKey)
if err != nil {
return err
}
if !types.IsValidPubkeyType(params, pk.Type()) {
return fmt.Errorf("validator %v is using pubkey %s, which is unsupported for consensus",
valUpdate, pk.Type())
}
}
return nil
}
// updateState returns a new State updated according to the header and responses.
func updateState(
state State,
blockID types.BlockID,
header *types.Header,
abciResponses *tmstate.ABCIResponses,
validatorUpdates []*types.Validator,
) (State, error) {
// Copy the valset so we can apply changes from EndBlock
// and update s.LastValidators and s.Validators.
nValSet := state.NextValidators.Copy()
// Update the validator set with the latest abciResponses.
lastHeightValsChanged := state.LastHeightValidatorsChanged
if len(validatorUpdates) > 0 {
err := nValSet.UpdateWithChangeSet(validatorUpdates)
if err != nil {
return state, fmt.Errorf("error changing validator set: %v", err)
}
// Change results from this height but only applies to the next next height.
lastHeightValsChanged = header.Height + 1 + 1
}
// Update validator proposer priority and set state variables.
nValSet.IncrementProposerPriority(1)
// Update the params with the latest abciResponses.
nextParams := state.ConsensusParams
lastHeightParamsChanged := state.LastHeightConsensusParamsChanged
if abciResponses.EndBlock.ConsensusParamUpdates != nil {
// NOTE: must not mutate s.ConsensusParams
nextParams = types.UpdateConsensusParams(state.ConsensusParams, abciResponses.EndBlock.ConsensusParamUpdates)
err := types.ValidateConsensusParams(nextParams)
if err != nil {
return state, fmt.Errorf("error updating consensus params: %v", err)
}
state.Version.Consensus.App = nextParams.Version.AppVersion
// Change results from this height but only applies to the next height.
lastHeightParamsChanged = header.Height + 1
}
nextVersion := state.Version
// NOTE: the AppHash has not been populated.
// It will be filled on state.Save.
return State{
Version: nextVersion,
ChainID: state.ChainID,
InitialHeight: state.InitialHeight,
LastBlockHeight: header.Height,
LastBlockID: blockID,
LastBlockTime: header.Time,
NextValidators: nValSet,
Validators: state.NextValidators.Copy(),
LastValidators: state.Validators.Copy(),
LastHeightValidatorsChanged: lastHeightValsChanged,
ConsensusParams: nextParams,
LastHeightConsensusParamsChanged: lastHeightParamsChanged,
LastResultsHash: ABCIResponsesResultsHash(abciResponses),
AppHash: nil,
}, nil
}
// Fire NewBlock, NewBlockHeader.
// Fire TxEvent for every tx.
// NOTE: if Tendermint crashes before commit, some or all of these events may be published again.
func fireEvents(
logger log.Logger,
eventBus types.BlockEventPublisher,
block *types.Block,
abciResponses *tmstate.ABCIResponses,
validatorUpdates []*types.Validator,
) {
if err := eventBus.PublishEventNewBlock(types.EventDataNewBlock{
Block: block,
ResultBeginBlock: *abciResponses.BeginBlock,
ResultEndBlock: *abciResponses.EndBlock,
}); err != nil {
logger.Error("Error publishing new block", "err", err)
}
if err := eventBus.PublishEventNewBlockHeader(types.EventDataNewBlockHeader{
Header: block.Header,
NumTxs: int64(len(block.Txs)),
ResultBeginBlock: *abciResponses.BeginBlock,
ResultEndBlock: *abciResponses.EndBlock,
}); err != nil {
logger.Error("Error publishing new block header", "err", err)
}
if len(block.Evidence.Evidence) != 0 {
for _, ev := range block.Evidence.Evidence {
if err := eventBus.PublishEventNewEvidence(types.EventDataNewEvidence{
Evidence: ev,
Height: block.Height,
}); err != nil {
logger.Error("Error publishing new evidence", "err", err)
}
}
}
for i, tx := range block.Data.Txs {
if err := eventBus.PublishEventTx(types.EventDataTx{TxResult: abci.TxResult{
Height: block.Height,
Index: uint32(i),
Tx: tx,
Result: *(abciResponses.DeliverTxs[i]),
}}); err != nil {
logger.Error("Error publishing event TX", "err", err)
}
}
if len(validatorUpdates) > 0 {
if err := eventBus.PublishEventValidatorSetUpdates(
types.EventDataValidatorSetUpdates{ValidatorUpdates: validatorUpdates}); err != nil {
logger.Error("Error publishing event", "err", err)
}
}
}
//----------------------------------------------------------------------------------------------------
// Execute block without state. TODO: eliminate
// ExecCommitBlock executes and commits a block on the proxyApp without validating or mutating the state.
// It returns the application root hash (result of abci.Commit).
func ExecCommitBlock(
appConnConsensus proxy.AppConnConsensus,
block *types.Block,
logger log.Logger,
store Store,
initialHeight int64,
) ([]byte, error) {
_, err := execBlockOnProxyApp(logger, appConnConsensus, block, store, initialHeight)
if err != nil {
logger.Error("Error executing block on proxy app", "height", block.Height, "err", err)
return nil, err
}
// Commit block, get hash back
res, err := appConnConsensus.CommitSync(context.Background())
if err != nil {
logger.Error("Client error during proxyAppConn.CommitSync", "err", res)
return nil, err
}
// ResponseCommit has no error or log, just data
return res.Data, nil
}