This change updates the proposal logic to use the block's timestamp in the proposal message. It also adds validation logic to the prevote step to check that the block's timestamp matches the proposal message's timestamp.
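A minimal sketch of that prevote-time check; the helper name and signature are illustrative, not the actual code in internal/consensus:

```go
package main

import (
	"fmt"
	"time"
)

// proposalTimestampMatches sketches the new prevote-step check: a validator
// should only prevote for a block whose header timestamp equals the timestamp
// carried in the Proposal message. The name and signature are illustrative.
func proposalTimestampMatches(blockTime, proposalTime time.Time) bool {
	return blockTime.Equal(proposalTime)
}

func main() {
	now := time.Now()
	fmt.Println(proposalTimestampMatches(now, now))                       // true: prevote the block
	fmt.Println(proposalTimestampMatches(now, now.Add(time.Millisecond))) // false: prevote nil
}
```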
This change introduces the logic to have the proposer wait until the previous block time has passed before attempting to propose the next block.
The change achieves this by adding a new clause to the enterPropose state machine method. The method now checks whether the validator is the proposer and whether its clock is behind the previous block's time; if so, it schedules a timeout that re-enters enterPropose once enough time has passed.
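A rough sketch of the wait calculation, using a hypothetical proposerWaitTime helper rather than the actual code in internal/consensus/state.go:

```go
package main

import (
	"fmt"
	"time"
)

// proposerWaitTime sketches the enterPropose change described above: if the
// proposer's local clock is behind the previous block's timestamp, it returns
// how long to wait before re-entering the propose step. Zero means propose now.
func proposerWaitTime(now, previousBlockTime time.Time) time.Duration {
	if now.Before(previousBlockTime) {
		return previousBlockTime.Sub(now)
	}
	return 0
}

func main() {
	// Previous block timestamp slightly ahead of our local clock.
	prev := time.Now().Add(50 * time.Millisecond)
	if wait := proposerWaitTime(time.Now(), prev); wait > 0 {
		// In the consensus state machine this would be scheduled via the
		// timeout ticker to re-enter enterPropose; a plain sleep stands in here.
		fmt.Printf("waiting %v before proposing\n", wait)
		time.Sleep(wait)
	}
	fmt.Println("proposing block")
}
```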
This change adds the new TimingParams proto messages. These messages were built against the wb/proposer-based-timestamps branch of the spec repo.
This change also adds validation that these values are positive when parsed and adds the new parameters into the existing tests.
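A sketch of the kind of positive-value validation described above; the struct and field names here are assumptions for illustration, not the generated proto types:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// TimingParams mirrors the shape of the new consensus parameters; the field
// names (Precision, MessageDelay) are assumptions for this sketch.
type TimingParams struct {
	Precision    time.Duration
	MessageDelay time.Duration
}

// ValidateBasic shows the kind of "values must be positive" check the change
// adds when the parameters are parsed.
func (p TimingParams) ValidateBasic() error {
	if p.Precision <= 0 {
		return errors.New("timing params: precision must be greater than 0")
	}
	if p.MessageDelay <= 0 {
		return errors.New("timing params: message delay must be greater than 0")
	}
	return nil
}

func main() {
	fmt.Println(TimingParams{Precision: 500 * time.Millisecond, MessageDelay: 2 * time.Second}.ValidateBasic()) // <nil>
	fmt.Println(TimingParams{}.ValidateBasic())                                                                 // error: precision must be greater than 0
}
```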
* internal/consensus: refactor the common_test functions to use a single timeout function
* remove ensurePrecommit
* Update internal/consensus/common_test.go
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
* join lines for fatal messages
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
* initial proposerWaitsUntil implementation
* switch to duration for easier use with timeout scheduling
* add proposal step waiting time with tests
* minor aesthetic change to IsTimely
* minor language fix
* Update internal/consensus/state.go
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
* reword comment
* change accuracy to precision
* move tests to separate pbts test file
Co-authored-by: M. J. Fromberger <fromberger@interchain.io>
* state: add an IsTimely function to implement the check for timely in proposer-based timestamps
* move time checks into block.go and add time source mechanism
* timestamp params comment
* add todo related to pbts spec and timestamp params
* remove old istimely
* switch to using built in before function
* lint++
* wip
* move into proposal and create a default set of params
* defer using default cons params for now
* add failing test
* tweak comments in failing test
* failing test comment
* initial attempt at removing prevote locked block logic
* comment out broken function
* undo reset on prevotes
* fixing TestProposeValidBlock test
* update test for completed POL update
* comment updates
* further unlock testing
* update comments
* Update internal/consensus/state.go
* spacing nit
* comment cleanup
* nil check in addVote
* update unlock description
* update precommit on relock comment
* add ensure new timeout back
* rename IsZero to IsNil and replace uses of block len check with helper
* add testing.T to new assertions
* begin removing unlock condition
* fix TestStateProposerSelection2 to precommit for nil correctly
* remove erroneous sleep
* update TestStatePOL comment
* update relock test to be more clear
* add _ into test names
* rename slashing
* update no relock function to be cleaner
* do not relock on old proposal test cleanup
* con state name update
* remove all references to unlock
* update test comments to include new
* add relock test
* add ensureRelock to common_test
* remove all event unlock
* remove unlock checks
* no lint add space
* lint ++
* add test for nil prevote on different proposal
* fix prevote nil condition
* fix defaultDoPrevote
* state_test.go fixes to accommodate prevoting for nil
* add failing test for POL from previous round case
* update prevote logic to prevote POL from previous round
* state.go comment fixes
* update validatePrevotes to correctly look for nil
* update new test name and comment
* update POLFromPreviousRound test
* fixes post merge
* fix spacing
* make the linter happy
* change prevote log message
* update prevote nil debug line
* update enterPrevote comment
* lint
* Update internal/consensus/state.go
Co-authored-by: Dev Ojha <ValarDragon@users.noreply.github.com>
* Update internal/consensus/state.go
Co-authored-by: Dev Ojha <ValarDragon@users.noreply.github.com>
* add english description of alg rules
* Update internal/consensus/state.go
Co-authored-by: Dev Ojha <ValarDragon@users.noreply.github.com>
* comment fixes from review
* fix comment
* fix comment
Co-authored-by: Dev Ojha <ValarDragon@users.noreply.github.com>
This is a very small change, but it removes a method from the
`service.Service` interface (a win!) and forces callers to explicitly
pass loggers in to objects during construction rather than injecting
them later. There's no real need for this kind of lazy construction of
loggers, and I think mutable loggers create a decent potential for
confusion.
The main concern I have is that this changes the constructor API for
ABCI clients. I think this is fine, and I suspect that as we plumb
contexts through and make changes to the RPC services, there will be a
number of similar changes to various (quasi) public interfaces, which
I think we should welcome.
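For illustration, the constructor pattern this moves toward looks roughly like the following; the types and names are hypothetical, and the standard library logger stands in for Tendermint's own log.Logger interface:

```go
package main

import "log"

// abciClient is a hypothetical stand-in for an ABCI client whose logger is a
// required constructor argument rather than being injected later through a
// SetLogger method on service.Service.
type abciClient struct {
	logger *log.Logger
	addr   string
}

// newABCIClient requires the caller to supply the logger up front, so the
// client never runs with a nil or later-mutated logger.
func newABCIClient(logger *log.Logger, addr string) *abciClient {
	return &abciClient{logger: logger, addr: addr}
}

func main() {
	c := newABCIClient(log.Default(), "tcp://127.0.0.1:26658")
	c.logger.Printf("dialing %s", c.addr)
}
```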
Since the doc site is built from the backport branches, the config changes for
a new major release also need to be replicated into the backport branch as well
as master. Update the release docs to mention that specifically, since I missed
it during the v0.35 release.
This pull request updates the `protocgen.sh` script to insert the `go_package` option to all of the downloaded proto files. A related pull request into the spec repo removes this options from the .proto files: https://github.com/tendermint/spec/pull/358
This pull requests, along with the related spec PR, aim to move the creation of the `tendermintdev/docker-build-proto` container into the spec repo. This change also relies on several fixes to that container that are made in the PR into the spec repo.
I think calling os.Exit at arbitrary points is _bad_ and is good to
delete. I think panics in the case of data corruption have a chance
of providing useful information.
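As a small illustration of that tradeoff (the loadBlock helper is hypothetical): os.Exit discards the call stack, while a panic at the point of corruption carries a message and a stack trace.

```go
package main

import "fmt"

// loadBlock is a hypothetical helper: on corrupted data it panics with context
// instead of calling os.Exit(1), which would terminate with no stack trace.
func loadBlock(height int64, data []byte) []byte {
	if len(data) == 0 {
		panic(fmt.Sprintf("loadBlock: empty block data at height %d; database may be corrupted", height))
	}
	return data
}

func main() {
	_ = loadBlock(10, []byte("..."))
}
```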
* update the proposer-based timestamps spec per discussion with @cason
* add vote to list of changed structs
* sentence fix
* clarify crypto sig logic update
* language updates per feedback from @cason
* fix POL < 0 wording
* update timely description for non-polka
When dialing fails to succeed we should reduce the score of the peer,
which puts the peer at (potentially) greater chances of being removed
from the peer manager, and reduces the chance of the peer being
gossiped by the PEX reactor.
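Roughly, with hypothetical types, the behavior is:

```go
package main

import (
	"errors"
	"fmt"
)

// peerManager is a hypothetical stand-in: when a dial attempt fails, the peer's
// score is lowered, making it more likely to be evicted and less likely to be
// gossiped via PEX.
type peerManager struct {
	scores map[string]int
}

// DialFailed records an unsuccessful dial attempt for the peer.
func (m *peerManager) DialFailed(peerID string) {
	m.scores[peerID]-- // real p2p scoring is more involved than a simple decrement
}

// dial stands in for the real transport dial; it always fails here.
func dial(addr string) error {
	return errors.New("dial " + addr + ": connection refused")
}

func main() {
	m := &peerManager{scores: map[string]int{"peer1": 0}}
	if err := dial("10.0.0.1:26656"); err != nil {
		m.DialFailed("peer1")
	}
	fmt.Println(m.scores["peer1"]) // -1
}
```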
The evidence test produces a set of mock evidence in the evidence pool of the 'Primary' node. The test then fills the evidence pools of the secondaries with half of this mock evidence. Finally, the test waits until each secondary has an evidence pool as full as the primary's.
The assertions removed here were checking that the primary's and the secondaries' evidence channels were empty. However, nothing in the test actually ensures that the channels are empty. The test only waits for the secondaries to have received the complete set of evidence, and the secondaries already received half of the evidence at the beginning. It is entirely possible for the secondaries to receive the complete set of evidence without having finished reading the duplicate evidence off the channels.
As a safety measure, don't allow a query string to be unreasonably
long. The query filter is not especially efficient, so a query that
needs more than basic detail should filter coarsely in the subscriber
and refine on the client side.
This affects Subscribe and TxSearch queries.
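A minimal sketch of such a length check; the limit value and function name are illustrative, not the ones used in the RPC layer:

```go
package main

import (
	"errors"
	"fmt"
)

// maxQueryLength is an illustrative cap, not the value used in the RPC layer.
const maxQueryLength = 512

// checkQueryLength sketches the safety check described above for Subscribe and
// TxSearch query strings.
func checkQueryLength(query string) error {
	if len(query) > maxQueryLength {
		return errors.New("query string is too long; filter coarsely here and refine on the client side")
	}
	return nil
}

func main() {
	fmt.Println(checkQueryLength("tm.event = 'Tx' AND tx.height > 100")) // <nil>
}
```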
Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.10.3 to 1.10.4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/lib/pq/releases">github.com/lib/pq's releases</a>.</em></p>
<blockquote>
<h2>v1.10.4</h2>
<ul>
<li>Keep track of (context cancelled) error on connection.</li>
<li>Fix android build</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="8446d16b89"><code>8446d16</code></a> issue 1062: Keep track of (context cancelled) error on connection, and make r...</li>
<li><a href="6a102c04ac"><code>6a102c0</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/lib/pq/issues/1060">#1060</a> from ian4hu/patch-1</li>
<li><a href="a54251e1b6"><code>a54251e</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/lib/pq/issues/1061">#1061</a> from mjl-/fix-flaky-TestConnPrepareContext</li>
<li><a href="2b4fa17b44"><code>2b4fa17</code></a> Fix flaky TestConnPrepareContext</li>
<li><a href="b33a1b722c"><code>b33a1b7</code></a> Fix android build</li>
<li><a href="16e9cadb5a"><code>16e9cad</code></a> Fix build in android</li>
<li><a href="26399a7687"><code>26399a7</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/lib/pq/issues/1057">#1057</a> from jfcg/master</li>
<li><a href="087077605f"><code>0870776</code></a> fix possible integer truncation</li>
<li><a href="c01ab77091"><code>c01ab77</code></a> Create codeql-analysis.yml</li>
<li>See full diff in <a href="https://github.com/lib/pq/compare/v1.10.3...v1.10.4">compare view</a></li>
</ul>
</details>
<br />
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details>
This follows the same model we used in the p2p package.
Rework the indexer service constructor to take a struct of arguments,
which makes it easier to construct the optional settings.
Deprecate but do not remove the existing constructor.
Clean up node initialization a little bit.
This is part of the work described by #7156.
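For illustration, the arguments-struct constructor pattern looks roughly like this; the names are hypothetical, not the indexer package's actual API:

```go
package main

import "fmt"

// IndexerServiceArgs gathers the constructor inputs; optional settings simply
// take their zero values when omitted.
type IndexerServiceArgs struct {
	DBPath   string
	Metrics  bool
	ChunkLen int
}

type IndexerService struct {
	args IndexerServiceArgs
}

// NewIndexerService takes the struct of arguments and fills in defaults for
// unset optional settings.
func NewIndexerService(args IndexerServiceArgs) *IndexerService {
	if args.ChunkLen == 0 {
		args.ChunkLen = 100
	}
	return &IndexerService{args: args}
}

func main() {
	svc := NewIndexerService(IndexerServiceArgs{DBPath: "/tmp/idx"})
	fmt.Println(svc.args.ChunkLen) // 100: optional setting defaulted
}
```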
Remove "unbuffered subscriptions" from the pubsub service.
Replace them with a dedicated blocking "observer" mechanism.
Use the observer mechanism for indexing.
Add a SubscribeWithArgs method and deprecate the old Subscribe
method. Remove SubscribeUnbuffered entirely (breaking).
Rework the Subscription interface to eliminate exposed channels.
Subscriptions now use a context to manage lifecycle notifications.
Internalize the eventbus package.
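A minimal sketch of the channel-free subscription shape this moves toward; all names here are illustrative, not the eventbus API:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Subscription hides its delivery channel; callers pull the next message and
// learn about termination through a context.
type Subscription struct {
	msgs chan string
}

// Next blocks until a message arrives or the context ends.
func (s *Subscription) Next(ctx context.Context) (string, error) {
	select {
	case m := <-s.msgs:
		return m, nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

func main() {
	sub := &Subscription{msgs: make(chan string, 1)}
	sub.msgs <- "NewBlock"

	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()

	for {
		msg, err := sub.Next(ctx)
		if err != nil {
			fmt.Println("subscription ended:", err) // context deadline exceeded
			return
		}
		fmt.Println("received:", msg)
	}
}
```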
This is intended to document some ergonomic and reliability issues with the
existing implementation of the event subscription service on the Tendermint
node, and to discuss possible approaches to improving them.
Prior to #7177, these benchmarks did not provide any useful data about the
performance of the pubsub system (in fact, prior to #7178, half of them did not
work at all).
Specifically, they create a bunch of subscribers with one buffer slot on a
default publisher and copy messages to them. But because the publisher is
single-threaded and doesn't block for delivery, all this tested is how long it
takes to receive a single message from one channel and deliver it to another.
The resulting stat does not even vary meaningfully with batch size, since it's
testing a serial workload.
Since #7177 the benchmarks do correctly reflect delivery time (good), but still
do not tell us anything useful: The latencies that matter for pubsub are not
internal queuing, but the effects of backpressure on the publisher via the
subscribers. That's an integration problem, and simulating a fake workload does
not provide meaningful results.
On that basis, remove these benchmarks.