mirror of
https://github.com/tendermint/tendermint.git
synced 2026-01-22 12:42:49 +00:00
Compare commits
482 Commits
abci++_reb
...
fix-issue-
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
c9997fd322 | ||
|
|
ef8276a230 | ||
|
|
dede8d3438 | ||
|
|
fb78dce904 | ||
|
|
485c96b0d3 | ||
|
|
9a833a8495 | ||
|
|
0bded371c5 | ||
|
|
12d13cd31d | ||
|
|
bba8367aac | ||
|
|
f1a8f47d4d | ||
|
|
f61e6e4201 | ||
|
|
1db41663c7 | ||
|
|
5e0e05f938 | ||
|
|
5bb51aab03 | ||
|
|
13f7501950 | ||
|
|
4400b0f6d3 | ||
|
|
5b6849ccf7 | ||
|
|
a68e356596 | ||
|
|
7c91b53999 | ||
|
|
02c7199eec | ||
|
|
1dd8807cc3 | ||
|
|
07b46d5a05 | ||
|
|
7a0b05f22d | ||
|
|
bedb68078c | ||
|
|
348c494c99 | ||
|
|
48b1952f18 | ||
|
|
93c4e00e8e | ||
|
|
68c624f5de | ||
|
|
4dce885994 | ||
|
|
faf123bda2 | ||
|
|
da5c09cf6f | ||
|
|
b08dd93d88 | ||
|
|
a5320da5c8 | ||
|
|
8e5dfa55ef | ||
|
|
70df7d9e6e | ||
|
|
98dd0d6c5a | ||
|
|
aff1481682 | ||
|
|
e9bc33d807 | ||
|
|
72bbe64da7 | ||
|
|
658a7661c5 | ||
|
|
89b4321af2 | ||
|
|
c79bb13807 | ||
|
|
d9c9675e2a | ||
|
|
a54bae25b7 | ||
|
|
ddbc93d993 | ||
|
|
6f7427ec7e | ||
|
|
7c03e7dbfb | ||
|
|
c35d6d6e2c | ||
|
|
4edc8c5523 | ||
|
|
f992a7e740 | ||
|
|
691cb52528 | ||
|
|
01266881b8 | ||
|
|
2df5c85a8d | ||
|
|
1f03287f52 | ||
|
|
e7955185b4 | ||
|
|
854add04b0 | ||
|
|
8df7b6103f | ||
|
|
f1659ce329 | ||
|
|
8d0bd1c0ff | ||
|
|
0b8a62c87b | ||
|
|
9accc1a531 | ||
|
|
0167f0d527 | ||
|
|
c8c248d733 | ||
|
|
9d98484845 | ||
|
|
63ff2f052d | ||
|
|
7c4fe5b108 | ||
|
|
a3881f0fb1 | ||
|
|
59eaa4dba0 | ||
|
|
33e6f7af11 | ||
|
|
af96ef2fe4 | ||
|
|
65065e6054 | ||
|
|
c42c6d06d2 | ||
|
|
a22942504c | ||
|
|
ea46a4e9d1 | ||
|
|
21087563eb | ||
|
|
a965f03c15 | ||
|
|
82a2ca4ba5 | ||
|
|
58dc172611 | ||
|
|
9cb01168a6 | ||
|
|
e4dced2437 | ||
|
|
8175b2b26d | ||
|
|
0fcfaa4568 | ||
|
|
b488198d47 | ||
|
|
b848c79971 | ||
|
|
f25b7ceeb2 | ||
|
|
e762dbb603 | ||
|
|
cd0472014a | ||
|
|
ab32f5a9b6 | ||
|
|
a153f82433 | ||
|
|
c80734e5af | ||
|
|
89dbebd1c5 | ||
|
|
af60a9c385 | ||
|
|
c8ae5db50e | ||
|
|
49e3688b79 | ||
|
|
c85e3e4ba8 | ||
|
|
8c5e36159e | ||
|
|
858d57a984 | ||
|
|
0875074ea2 | ||
|
|
3e2d5db289 | ||
|
|
f795d3f360 | ||
|
|
06e6d3f2e9 | ||
|
|
680ebc6f8e | ||
|
|
211b80a484 | ||
|
|
62a1cb8d17 | ||
|
|
a57567ba33 | ||
|
|
5662bd12a8 | ||
|
|
61a81279bd | ||
|
|
f939f962b1 | ||
|
|
9968f53c15 | ||
|
|
80186a9d9c | ||
|
|
21461e55a7 | ||
|
|
2ffb262600 | ||
|
|
912751cf93 | ||
|
|
926c469fcc | ||
|
|
3dc04430c3 | ||
|
|
70ee282d9e | ||
|
|
c88cf0b66c | ||
|
|
50de246a2b | ||
|
|
705f365bcd | ||
|
|
3b20931da3 | ||
|
|
bd6fce13ae | ||
|
|
351adf8ddb | ||
|
|
e80541a251 | ||
|
|
ce898a738c | ||
|
|
a185163c57 | ||
|
|
51b93c8606 | ||
|
|
325740a57c | ||
|
|
abdf717761 | ||
|
|
d65237ff87 | ||
|
|
3401eb2410 | ||
|
|
81bd9ad812 | ||
|
|
4425e62e9e | ||
|
|
c490d3f00a | ||
|
|
8a238fdcb4 | ||
|
|
abfcd08903 | ||
|
|
7f8f1cde8c | ||
|
|
cc18f87000 | ||
|
|
165cc29474 | ||
|
|
bb9fa171d6 | ||
|
|
a7224fd640 | ||
|
|
ac9197ed45 | ||
|
|
c2cce2a696 | ||
|
|
75dafaeacc | ||
|
|
6280f45460 | ||
|
|
be83ec6664 | ||
|
|
cb26c5238a | ||
|
|
e439cf3ba2 | ||
|
|
f9e0f77af3 | ||
|
|
2183d90d05 | ||
|
|
d187962ec0 | ||
|
|
9e69615451 | ||
|
|
f6569b5dcd | ||
|
|
f92289d5e8 | ||
|
|
28d34d635c | ||
|
|
38e29590ff | ||
|
|
c928818db9 | ||
|
|
e0d44a650e | ||
|
|
d72939fe36 | ||
|
|
cbb2c1d3bd | ||
|
|
94b409e407 | ||
|
|
c763f8ef59 | ||
|
|
e81b0e290e | ||
|
|
cdc4c31e88 | ||
|
|
6638db2473 | ||
|
|
56ee72424f | ||
|
|
60f09840dd | ||
|
|
44d9e9917c | ||
|
|
d3548eb706 | ||
|
|
7e09c2ef43 | ||
|
|
824960c565 | ||
|
|
73f605af3f | ||
|
|
9b724f7a6c | ||
|
|
2f9355c579 | ||
|
|
58d8bad99a | ||
|
|
ca6163a3ec | ||
|
|
01262b8ca9 | ||
|
|
0dbd38d4d9 | ||
|
|
dbb7d6ecdd | ||
|
|
9e59fc6924 | ||
|
|
662c0aac9e | ||
|
|
1fe1b6c032 | ||
|
|
7fb4e04b02 | ||
|
|
4ba5a053de | ||
|
|
ef1cc5b516 | ||
|
|
7885839b75 | ||
|
|
81dcc8d1b4 | ||
|
|
eed617c2d9 | ||
|
|
c555226d2b | ||
|
|
860f78f000 | ||
|
|
205bfca66f | ||
|
|
27ccf3b590 | ||
|
|
fd50d90b70 | ||
|
|
27297a447c | ||
|
|
f4e91c2aa8 | ||
|
|
e544709459 | ||
|
|
1fbe56da0c | ||
|
|
cd875c8a2c | ||
|
|
a9fa2ac5f9 | ||
|
|
7d5be3e5da | ||
|
|
4566f1e302 | ||
|
|
1543e4122a | ||
|
|
5e90a98a7c | ||
|
|
854fd07461 | ||
|
|
4fb99af40d | ||
|
|
97d47b5263 | ||
|
|
329da35a84 | ||
|
|
c67ace3433 | ||
|
|
ce61abc038 | ||
|
|
cb98d515dd | ||
|
|
de04f573cf | ||
|
|
91f898cb98 | ||
|
|
9fe1d4eeab | ||
|
|
74864f7fdb | ||
|
|
648f5ffa77 | ||
|
|
f8c4ec38ec | ||
|
|
39ffa80ae7 | ||
|
|
d2afb91e99 | ||
|
|
29f7573762 | ||
|
|
ff498ff333 | ||
|
|
8e5b44d46a | ||
|
|
17a197929c | ||
|
|
8a684c1bb5 | ||
|
|
169099c846 | ||
|
|
75b1b1d6c5 | ||
|
|
2c074e24e6 | ||
|
|
1d3ecf37ee | ||
|
|
2f1e08e948 | ||
|
|
a4e2f05d7a | ||
|
|
95158636cd | ||
|
|
89194a61a4 | ||
|
|
b3c85b795a | ||
|
|
cbae11c8fc | ||
|
|
02c354d62c | ||
|
|
c06c6a9244 | ||
|
|
611cc63a27 | ||
|
|
33f529b06b | ||
|
|
e7136888bb | ||
|
|
eb233d5565 | ||
|
|
f774c09a97 | ||
|
|
5c41de2b85 | ||
|
|
54a1773435 | ||
|
|
e3da1bf94a | ||
|
|
81be4d0d14 | ||
|
|
f4e1039830 | ||
|
|
af8a1a2ce3 | ||
|
|
0d610258f5 | ||
|
|
ddb1eb27c9 | ||
|
|
a75a2c6f00 | ||
|
|
ff2104ec0b | ||
|
|
009c120abb | ||
|
|
7bbbba9acf | ||
|
|
383d6b1117 | ||
|
|
7c1883c692 | ||
|
|
523974167a | ||
|
|
57cc810744 | ||
|
|
438b490fe0 | ||
|
|
243601d02a | ||
|
|
6fb0ca4201 | ||
|
|
9496ea84f8 | ||
|
|
f3275ae608 | ||
|
|
500a4a1419 | ||
|
|
8c2645aa81 | ||
|
|
e9211f5941 | ||
|
|
836a723ca3 | ||
|
|
86408529e0 | ||
|
|
35eaa0b0a8 | ||
|
|
bab9f68689 | ||
|
|
b695d30aae | ||
|
|
98c122f471 | ||
|
|
f4babf9551 | ||
|
|
20b2abb5f9 | ||
|
|
b907d637ca | ||
|
|
eaa2629352 | ||
|
|
339304f87c | ||
|
|
b7e5349e98 | ||
|
|
e92aa56a75 | ||
|
|
48e008ed10 | ||
|
|
be628fce3b | ||
|
|
d783273b05 | ||
|
|
ffe8742e1c | ||
|
|
a00de7199f | ||
|
|
caaafc4449 | ||
|
|
a524e95b19 | ||
|
|
86b66994d4 | ||
|
|
f0914e66e3 | ||
|
|
fa2ccc80da | ||
|
|
60d6856782 | ||
|
|
e2a038e039 | ||
|
|
f793752d07 | ||
|
|
bf71990d2f | ||
|
|
41e681293c | ||
|
|
c939e155a6 | ||
|
|
26ee62aa52 | ||
|
|
e44ab95f2f | ||
|
|
4c3339ab6a | ||
|
|
1e985f6226 | ||
|
|
72adbf9cc9 | ||
|
|
8029cf7a0f | ||
|
|
ed7fa80693 | ||
|
|
8f9cd23016 | ||
|
|
24f22eeb52 | ||
|
|
f790b6f903 | ||
|
|
0ff67d6b1e | ||
|
|
aa8f656573 | ||
|
|
6039594121 | ||
|
|
24222c5855 | ||
|
|
6bd5263515 | ||
|
|
89d381f7cf | ||
|
|
5559e14355 | ||
|
|
a2a9ffbe7e | ||
|
|
8dd91a7ac3 | ||
|
|
f3216e6953 | ||
|
|
90434cb74d | ||
|
|
aba090a69a | ||
|
|
048f6a32f9 | ||
|
|
4a9bcebe2a | ||
|
|
4b79bccc0b | ||
|
|
292828a01b | ||
|
|
5dfaa54350 | ||
|
|
00446bb9f4 | ||
|
|
255942e8c7 | ||
|
|
84ee4249ae | ||
|
|
b39af911ae | ||
|
|
0dc5d4df07 | ||
|
|
ea8238f090 | ||
|
|
640b71038b | ||
|
|
5c32ebcda8 | ||
|
|
b2465e0c3a | ||
|
|
9f6a4bcf23 | ||
|
|
b4a31746dd | ||
|
|
b270ab8d15 | ||
|
|
227e5269ca | ||
|
|
b315f04980 | ||
|
|
abaffef912 | ||
|
|
038f3e025a | ||
|
|
2f590a6392 | ||
|
|
72d15a4b07 | ||
|
|
1b2b24055c | ||
|
|
d260ff3e37 | ||
|
|
a4672048e7 | ||
|
|
fc569173a1 | ||
|
|
ce146d00d7 | ||
|
|
439a5bcacb | ||
|
|
accd7ffe18 | ||
|
|
42751ea4f3 | ||
|
|
acb9a7d734 | ||
|
|
31cfa53082 | ||
|
|
26ef2ccddb | ||
|
|
6abcb13dab | ||
|
|
033608bbf1 | ||
|
|
871d0514cd | ||
|
|
c1ff62fe44 | ||
|
|
66e9106b4d | ||
|
|
d5e0294003 | ||
|
|
32b811a1fb | ||
|
|
819e89ac7a | ||
|
|
cf03759ff5 | ||
|
|
9fce8480b0 | ||
|
|
d31a4a4b34 | ||
|
|
9ad6440bc0 | ||
|
|
97928e190a | ||
|
|
ec8af314cc | ||
|
|
a3fadb7c1a | ||
|
|
01622f81e9 | ||
|
|
792767d1cb | ||
|
|
0794fc8ff2 | ||
|
|
c5576dfa69 | ||
|
|
04fb20e33d | ||
|
|
8391fa0b89 | ||
|
|
3e56eb5fe3 | ||
|
|
733b020899 | ||
|
|
109a73f672 | ||
|
|
80747a0872 | ||
|
|
f3033c5515 | ||
|
|
6c95c3f250 | ||
|
|
a66bb37e32 | ||
|
|
606abc7fc0 | ||
|
|
b74b1c2b68 | ||
|
|
dd325bb191 | ||
|
|
d8a2c8f6f1 | ||
|
|
1075f77cc3 | ||
|
|
45bbbb6317 | ||
|
|
6140847bba | ||
|
|
cda8006569 | ||
|
|
9dbf818055 | ||
|
|
efbbc9462f | ||
|
|
c9d3564634 | ||
|
|
8dd2ed4c6f | ||
|
|
f3207cee52 | ||
|
|
a84c59734f | ||
|
|
430a4d0504 | ||
|
|
89ac8f6e62 | ||
|
|
95acfdead1 | ||
|
|
604923e034 | ||
|
|
713a773c81 | ||
|
|
e96921822d | ||
|
|
29f4e13e05 | ||
|
|
ef1e0ff886 | ||
|
|
c5e45ecb48 | ||
|
|
31b182b7aa | ||
|
|
d46cd7f573 | ||
|
|
0445156ed9 | ||
|
|
3a29521848 | ||
|
|
2bd673c8eb | ||
|
|
8ff136c716 | ||
|
|
b10ff00e1b | ||
|
|
6b570e2111 | ||
|
|
89922df775 | ||
|
|
1bd2aacb56 | ||
|
|
30ef12d0bb | ||
|
|
199124048e | ||
|
|
9c0754e617 | ||
|
|
0d5f212f30 | ||
|
|
5acd1540c0 | ||
|
|
d65205ecad | ||
|
|
3c27335db3 | ||
|
|
c3cd54a8e0 | ||
|
|
90797cef90 | ||
|
|
9842b4b0fb | ||
|
|
f399abd7ac | ||
|
|
7a0cdd53d5 | ||
|
|
ebda9dcac5 | ||
|
|
15b15d2060 | ||
|
|
3ab6026ad7 | ||
|
|
1152120dea | ||
|
|
d45389e2b0 | ||
|
|
9440fc16ce | ||
|
|
3f04e8bbce | ||
|
|
c9a664a2f8 | ||
|
|
4f0fb3325a | ||
|
|
e963deff5a | ||
|
|
ee5c790878 | ||
|
|
327767a1c1 | ||
|
|
edb4928357 | ||
|
|
b0f35a64d9 | ||
|
|
56ffcf709a | ||
|
|
452f0b775a | ||
|
|
cf24489764 | ||
|
|
576e40eabd | ||
|
|
206139f384 | ||
|
|
603364bdaa | ||
|
|
033a0cb53f | ||
|
|
026fddee4f | ||
|
|
dc542068ae | ||
|
|
c35d6e706f | ||
|
|
bd2f41bf79 | ||
|
|
d1bd98d5e0 | ||
|
|
035838901e | ||
|
|
e342c21336 | ||
|
|
eb9e1f961c | ||
|
|
f26eb4ee89 | ||
|
|
146e251892 | ||
|
|
7130c2e68c | ||
|
|
4a9eb1f1ac | ||
|
|
0adde9d415 | ||
|
|
ee0cc537b8 | ||
|
|
4f7c55507c | ||
|
|
8528cdb314 | ||
|
|
9ddfc79813 | ||
|
|
069906a25d | ||
|
|
5c580846bb | ||
|
|
afda2d39b6 | ||
|
|
743a658613 | ||
|
|
dbc8765104 | ||
|
|
2306108d8a | ||
|
|
4ee393c3da | ||
|
|
953523c3cb | ||
|
|
d862fd4ec7 | ||
|
|
7b3138e694 | ||
|
|
a4b68ec2fb | ||
|
|
f618acf2ab | ||
|
|
513c67230f | ||
|
|
fa3430ad16 | ||
|
|
95cf253b6d | ||
|
|
9b3531d7d6 | ||
|
|
81a0198af2 | ||
|
|
87abbf78e6 | ||
|
|
b362894a56 | ||
|
|
2866ba1a2c | ||
|
|
5764a81410 | ||
|
|
4a81d0a02f | ||
|
|
9d864da353 |
5
.github/CODEOWNERS
vendored
5
.github/CODEOWNERS
vendored
@@ -7,4 +7,7 @@
|
||||
# global owners are only requested if there isn't a more specific
|
||||
# codeowner specified below. For this reason, the global codeowners
|
||||
# are often repeated in package-level definitions.
|
||||
* @ebuchman @cmwaters @tychoish @williambanfield @creachadair
|
||||
* @ebuchman @cmwaters @tychoish @williambanfield @creachadair @sergio-mena @jmalicevic @thanethomson @ancazamfir
|
||||
|
||||
# Spec related changes can be approved by the protocol design team
|
||||
/spec @josef-widder @milosevic @cason
|
||||
|
||||
37
.github/ISSUE_TEMPLATE/proposal.md
vendored
Normal file
37
.github/ISSUE_TEMPLATE/proposal.md
vendored
Normal file
@@ -0,0 +1,37 @@
|
||||
---
|
||||
name: Protocol Change Proposal
|
||||
about: Create a proposal to request a change to the protocol
|
||||
|
||||
---
|
||||
|
||||
<!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
|
||||
v ✰ Thanks for opening an issue! ✰
|
||||
v Before smashing the submit button please review the template.
|
||||
v Word of caution: Under-specified proposals may be rejected summarily
|
||||
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
|
||||
|
||||
# Protocol Change Proposal
|
||||
|
||||
## Summary
|
||||
|
||||
<!-- Short, concise description of the proposed change -->
|
||||
|
||||
## Problem Definition
|
||||
|
||||
<!-- Why do we need this change?
|
||||
What problems may be addressed by introducing this change?
|
||||
What benefits does Tendermint stand to gain by including this change?
|
||||
Are there any disadvantages of including this change? -->
|
||||
|
||||
## Proposal
|
||||
|
||||
<!-- Detailed description of requirements of implementation -->
|
||||
|
||||
____
|
||||
|
||||
#### For Admin Use
|
||||
|
||||
- [ ] Not duplicate issue
|
||||
- [ ] Appropriate labels applied
|
||||
- [ ] Appropriate contributors tagged
|
||||
- [ ] Contributor assigned/self-assigned
|
||||
6
.github/workflows/build.yml
vendored
6
.github/workflows/build.yml
vendored
@@ -23,7 +23,7 @@ jobs:
|
||||
- uses: actions/setup-go@v2
|
||||
with:
|
||||
go-version: "1.17"
|
||||
- uses: actions/checkout@v2.4.0
|
||||
- uses: actions/checkout@v3
|
||||
- uses: technote-space/get-diff-action@v6.0.1
|
||||
with:
|
||||
PATTERNS: |
|
||||
@@ -44,7 +44,7 @@ jobs:
|
||||
- uses: actions/setup-go@v2
|
||||
with:
|
||||
go-version: "1.17"
|
||||
- uses: actions/checkout@v2.4.0
|
||||
- uses: actions/checkout@v3
|
||||
- uses: technote-space/get-diff-action@v6.0.1
|
||||
with:
|
||||
PATTERNS: |
|
||||
@@ -66,7 +66,7 @@ jobs:
|
||||
- uses: actions/setup-go@v2
|
||||
with:
|
||||
go-version: "1.17"
|
||||
- uses: actions/checkout@v2.4.0
|
||||
- uses: actions/checkout@v3
|
||||
- uses: technote-space/get-diff-action@v6.0.1
|
||||
with:
|
||||
PATTERNS: |
|
||||
|
||||
6
.github/workflows/docker.yml
vendored
6
.github/workflows/docker.yml
vendored
@@ -13,7 +13,7 @@ jobs:
|
||||
build:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v2.4.0
|
||||
- uses: actions/checkout@v3
|
||||
- name: Prepare
|
||||
id: prep
|
||||
run: |
|
||||
@@ -43,13 +43,13 @@ jobs:
|
||||
|
||||
- name: Login to DockerHub
|
||||
if: ${{ github.event_name != 'pull_request' }}
|
||||
uses: docker/login-action@v1.12.0
|
||||
uses: docker/login-action@v1.14.1
|
||||
with:
|
||||
username: ${{ secrets.DOCKERHUB_USERNAME }}
|
||||
password: ${{ secrets.DOCKERHUB_TOKEN }}
|
||||
|
||||
- name: Publish to Docker Hub
|
||||
uses: docker/build-push-action@v2.8.0
|
||||
uses: docker/build-push-action@v2.9.0
|
||||
with:
|
||||
context: .
|
||||
file: ./DOCKER/Dockerfile
|
||||
|
||||
36
.github/workflows/e2e-manual.yml
vendored
Normal file
36
.github/workflows/e2e-manual.yml
vendored
Normal file
@@ -0,0 +1,36 @@
|
||||
# Runs randomly generated E2E testnets nightly on master
|
||||
# manually run e2e tests
|
||||
name: e2e-manual
|
||||
on:
|
||||
workflow_dispatch:
|
||||
|
||||
jobs:
|
||||
e2e-nightly-test:
|
||||
# Run parallel jobs for the listed testnet groups (must match the
|
||||
# ./build/generator -g flag)
|
||||
strategy:
|
||||
fail-fast: false
|
||||
matrix:
|
||||
group: ['00', '01', '02', '03']
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 60
|
||||
steps:
|
||||
- uses: actions/setup-go@v2
|
||||
with:
|
||||
go-version: '1.17'
|
||||
|
||||
- uses: actions/checkout@v3
|
||||
|
||||
- name: Build
|
||||
working-directory: test/e2e
|
||||
# Run make jobs in parallel, since we can't run steps in parallel.
|
||||
run: make -j2 docker generator runner tests
|
||||
|
||||
- name: Generate testnets
|
||||
working-directory: test/e2e
|
||||
# When changing -g, also change the matrix groups above
|
||||
run: ./build/generator -g 4 -d networks/nightly/
|
||||
|
||||
- name: Run ${{ matrix.p2p }} p2p testnets
|
||||
working-directory: test/e2e
|
||||
run: ./run-multiple.sh networks/nightly/*-group${{ matrix.group }}-*.toml
|
||||
19
.github/workflows/e2e-nightly-34x.yml
vendored
19
.github/workflows/e2e-nightly-34x.yml
vendored
@@ -6,7 +6,6 @@
|
||||
|
||||
name: e2e-nightly-34x
|
||||
on:
|
||||
workflow_dispatch: # allow running workflow manually, in theory
|
||||
schedule:
|
||||
- cron: '0 2 * * *'
|
||||
|
||||
@@ -25,7 +24,7 @@ jobs:
|
||||
with:
|
||||
go-version: '1.17'
|
||||
|
||||
- uses: actions/checkout@v2.4.0
|
||||
- uses: actions/checkout@v3
|
||||
with:
|
||||
ref: 'v0.34.x'
|
||||
|
||||
@@ -58,19 +57,3 @@ jobs:
|
||||
SLACK_COLOR: danger
|
||||
SLACK_MESSAGE: Nightly E2E tests failed on v0.34.x
|
||||
SLACK_FOOTER: ''
|
||||
|
||||
e2e-nightly-success: # may turn this off once they seem to pass consistently
|
||||
needs: e2e-nightly-test
|
||||
if: ${{ success() }}
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Notify Slack on success
|
||||
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
|
||||
env:
|
||||
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
|
||||
SLACK_CHANNEL: tendermint-internal
|
||||
SLACK_USERNAME: Nightly E2E Tests
|
||||
SLACK_ICON_EMOJI: ':white_check_mark:'
|
||||
SLACK_COLOR: good
|
||||
SLACK_MESSAGE: Nightly E2E tests passed on v0.34.x
|
||||
SLACK_FOOTER: ''
|
||||
|
||||
3
.github/workflows/e2e-nightly-35x.yml
vendored
3
.github/workflows/e2e-nightly-35x.yml
vendored
@@ -5,7 +5,6 @@
|
||||
|
||||
name: e2e-nightly-35x
|
||||
on:
|
||||
workflow_dispatch: # allow running workflow manually
|
||||
schedule:
|
||||
- cron: '0 2 * * *'
|
||||
|
||||
@@ -25,7 +24,7 @@ jobs:
|
||||
with:
|
||||
go-version: '1.17'
|
||||
|
||||
- uses: actions/checkout@v2.4.0
|
||||
- uses: actions/checkout@v3
|
||||
with:
|
||||
ref: 'v0.35.x'
|
||||
|
||||
|
||||
3
.github/workflows/e2e-nightly-master.yml
vendored
3
.github/workflows/e2e-nightly-master.yml
vendored
@@ -5,7 +5,6 @@
|
||||
|
||||
name: e2e-nightly-master
|
||||
on:
|
||||
workflow_dispatch: # allow running workflow manually
|
||||
schedule:
|
||||
- cron: '0 2 * * *'
|
||||
|
||||
@@ -24,7 +23,7 @@ jobs:
|
||||
with:
|
||||
go-version: '1.17'
|
||||
|
||||
- uses: actions/checkout@v2.4.0
|
||||
- uses: actions/checkout@v3
|
||||
|
||||
- name: Build
|
||||
working-directory: test/e2e
|
||||
|
||||
2
.github/workflows/e2e.yml
vendored
2
.github/workflows/e2e.yml
vendored
@@ -17,7 +17,7 @@ jobs:
|
||||
- uses: actions/setup-go@v2
|
||||
with:
|
||||
go-version: '1.17'
|
||||
- uses: actions/checkout@v2.4.0
|
||||
- uses: actions/checkout@v3
|
||||
- uses: technote-space/get-diff-action@v6.0.1
|
||||
with:
|
||||
PATTERNS: |
|
||||
|
||||
2
.github/workflows/fuzz-nightly.yml
vendored
2
.github/workflows/fuzz-nightly.yml
vendored
@@ -17,7 +17,7 @@ jobs:
|
||||
with:
|
||||
go-version: '1.17'
|
||||
|
||||
- uses: actions/checkout@v2.4.0
|
||||
- uses: actions/checkout@v3
|
||||
|
||||
- name: Install go-fuzz
|
||||
working-directory: test/fuzz
|
||||
|
||||
2
.github/workflows/jepsen.yml
vendored
2
.github/workflows/jepsen.yml
vendored
@@ -46,7 +46,7 @@ jobs:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Checkout the Jepsen repository
|
||||
uses: actions/checkout@v2.4.0
|
||||
uses: actions/checkout@v3
|
||||
with:
|
||||
repository: 'tendermint/jepsen'
|
||||
|
||||
|
||||
2
.github/workflows/linkchecker.yml
vendored
2
.github/workflows/linkchecker.yml
vendored
@@ -6,7 +6,7 @@ jobs:
|
||||
markdown-link-check:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v2.4.0
|
||||
- uses: actions/checkout@v3
|
||||
- uses: gaurav-nelson/github-action-markdown-link-check@1.0.13
|
||||
with:
|
||||
folder-path: "docs"
|
||||
|
||||
11
.github/workflows/lint.yml
vendored
11
.github/workflows/lint.yml
vendored
@@ -1,4 +1,4 @@
|
||||
name: Lint
|
||||
name: Golang Linter
|
||||
# Lint runs golangci-lint over the entire Tendermint repository
|
||||
# This workflow is run on every pull request and push to master
|
||||
# The `golangci` job will pass without running if no *.{go, mod, sum} files have been modified.
|
||||
@@ -13,17 +13,20 @@ jobs:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 8
|
||||
steps:
|
||||
- uses: actions/checkout@v2.4.0
|
||||
- uses: actions/checkout@v3
|
||||
- uses: actions/setup-go@v2
|
||||
with:
|
||||
go-version: '^1.17'
|
||||
- uses: technote-space/get-diff-action@v6.0.1
|
||||
with:
|
||||
PATTERNS: |
|
||||
**/**.go
|
||||
go.mod
|
||||
go.sum
|
||||
- uses: golangci/golangci-lint-action@v2.5.2
|
||||
- uses: golangci/golangci-lint-action@v3.1.0
|
||||
with:
|
||||
# Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
|
||||
version: v1.42.1
|
||||
version: v1.44
|
||||
args: --timeout 10m
|
||||
github-token: ${{ secrets.github_token }}
|
||||
if: env.GIT_DIFF
|
||||
|
||||
4
.github/workflows/linter.yml
vendored
4
.github/workflows/linter.yml
vendored
@@ -1,4 +1,4 @@
|
||||
name: Lint
|
||||
name: Markdown Linter
|
||||
on:
|
||||
push:
|
||||
branches:
|
||||
@@ -19,7 +19,7 @@ jobs:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Checkout Code
|
||||
uses: actions/checkout@v2.4.0
|
||||
uses: actions/checkout@v3
|
||||
- name: Lint Code Base
|
||||
uses: docker://github/super-linter:v4
|
||||
env:
|
||||
|
||||
19
.github/workflows/markdown-links.yml
vendored
Normal file
19
.github/workflows/markdown-links.yml
vendored
Normal file
@@ -0,0 +1,19 @@
|
||||
# TODO: Re-enable when https://github.com/gaurav-nelson/github-action-markdown-link-check/pull/126 lands.
|
||||
|
||||
#name: Check Markdown links
|
||||
#
|
||||
#on:
|
||||
# push:
|
||||
# branches:
|
||||
# - master
|
||||
# pull_request:
|
||||
# branches: [master]
|
||||
#
|
||||
#jobs:
|
||||
# markdown-link-check:
|
||||
# runs-on: ubuntu-latest
|
||||
# steps:
|
||||
# - uses: actions/checkout@v3
|
||||
# - uses: gaurav-nelson/github-action-markdown-link-check@v1.0.13
|
||||
# with:
|
||||
# check-modified-files-only: 'yes'
|
||||
21
.github/workflows/proto-lint.yml
vendored
Normal file
21
.github/workflows/proto-lint.yml
vendored
Normal file
@@ -0,0 +1,21 @@
|
||||
name: Protobuf Lint
|
||||
on:
|
||||
pull_request:
|
||||
paths:
|
||||
- 'proto/**'
|
||||
push:
|
||||
branches:
|
||||
- master
|
||||
paths:
|
||||
- 'proto/**'
|
||||
|
||||
jobs:
|
||||
lint:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 5
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
- uses: bufbuild/buf-setup-action@v1.1.0
|
||||
- uses: bufbuild/buf-lint-action@v1
|
||||
with:
|
||||
input: 'proto'
|
||||
2
.github/workflows/release.yml
vendored
2
.github/workflows/release.yml
vendored
@@ -12,7 +12,7 @@ jobs:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Checkout
|
||||
uses: actions/checkout@v2.4.0
|
||||
uses: actions/checkout@v3
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
|
||||
4
.github/workflows/tests.yml
vendored
4
.github/workflows/tests.yml
vendored
@@ -19,7 +19,7 @@ jobs:
|
||||
- uses: actions/setup-go@v2
|
||||
with:
|
||||
go-version: "1.17"
|
||||
- uses: actions/checkout@v2.4.0
|
||||
- uses: actions/checkout@v3
|
||||
- uses: technote-space/get-diff-action@v6.0.1
|
||||
with:
|
||||
PATTERNS: |
|
||||
@@ -41,7 +41,7 @@ jobs:
|
||||
runs-on: ubuntu-latest
|
||||
needs: tests
|
||||
steps:
|
||||
- uses: actions/checkout@v2.4.0
|
||||
- uses: actions/checkout@v3
|
||||
- uses: technote-space/get-diff-action@v6.0.1
|
||||
with:
|
||||
PATTERNS: |
|
||||
|
||||
15
.gitignore
vendored
15
.gitignore
vendored
@@ -47,10 +47,11 @@ test/fuzz/**/corpus
|
||||
test/fuzz/**/crashers
|
||||
test/fuzz/**/suppressions
|
||||
test/fuzz/**/*.zip
|
||||
proto/tendermint/blocksync/types.proto
|
||||
proto/tendermint/consensus/types.proto
|
||||
proto/tendermint/mempool/*.proto
|
||||
proto/tendermint/p2p/*.proto
|
||||
proto/tendermint/statesync/*.proto
|
||||
proto/tendermint/types/*.proto
|
||||
proto/tendermint/version/*.proto
|
||||
proto/spec/**/*.pb.go
|
||||
*.aux
|
||||
*.bbl
|
||||
*.blg
|
||||
*.log
|
||||
*.pdf
|
||||
*.gz
|
||||
*.dvi
|
||||
|
||||
11
.markdownlint.yml
Normal file
11
.markdownlint.yml
Normal file
@@ -0,0 +1,11 @@
|
||||
default: true
|
||||
MD001: false
|
||||
MD007: {indent: 4}
|
||||
MD013: false
|
||||
MD024: {siblings_only: true}
|
||||
MD025: false
|
||||
MD033: false
|
||||
MD036: false
|
||||
MD010: false
|
||||
MD012: false
|
||||
MD028: false
|
||||
35
CHANGELOG.md
35
CHANGELOG.md
@@ -2,6 +2,27 @@
|
||||
|
||||
Friendly reminder: We have a [bug bounty program](https://hackerone.com/cosmos).
|
||||
|
||||
## v0.35.2
|
||||
|
||||
February 28, 2022
|
||||
|
||||
Special thanks to external contributors on this release: @ashcherbakov, @yihuang, @waelsy123
|
||||
|
||||
### IMPROVEMENTS
|
||||
|
||||
- [consensus] [\#7875](https://github.com/tendermint/tendermint/pull/7875) additional timing metrics. (@williambanfield)
|
||||
|
||||
### BUG FIXES
|
||||
|
||||
- [abci] [\#7990](https://github.com/tendermint/tendermint/pull/7990) revert buffer limit change. (@williambanfield)
|
||||
- [cli] [#7837](https://github.com/tendermint/tendermint/pull/7837) fix app hash in state rollback. (@yihuang)
|
||||
- [cli] [\#7869](https://github.com/tendermint/tendermint/pull/7869) Update unsafe-reset-all command to match release v35. (waelsy123)
|
||||
- [light] [\#7640](https://github.com/tendermint/tendermint/pull/7640) Light Client: fix absence proof verification (@ashcherbakov)
|
||||
- [light] [\#7641](https://github.com/tendermint/tendermint/pull/7641) Light Client: fix querying against the latest height (@ashcherbakov)
|
||||
- [mempool] [\#7718](https://github.com/tendermint/tendermint/pull/7718) return duplicate tx errors more consistently. (@tychoish)
|
||||
- [rpc] [\#7744](https://github.com/tendermint/tendermint/pull/7744) fix layout of endpoint list. (@creachadair)
|
||||
- [statesync] [\#7886](https://github.com/tendermint/tendermint/pull/7886) assert app version matches. (@cmwaters)
|
||||
|
||||
## v0.35.1
|
||||
|
||||
January 26, 2022
|
||||
@@ -209,6 +230,18 @@ Special thanks to external contributors on this release: @JayT106,
|
||||
- [cmd/tendermint/commands] [\#6623](https://github.com/tendermint/tendermint/pull/6623) replace `$HOME/.some/test/dir` with `t.TempDir` (@tanyabouman)
|
||||
- [statesync] \6807 Implement P2P state provider as an alternative to RPC (@cmwaters)
|
||||
|
||||
## v0.34.16
|
||||
|
||||
Special thanks to external contributors on this release: @yihuang
|
||||
|
||||
### BUG FIXES
|
||||
|
||||
- [consensus] [\#7617](https://github.com/tendermint/tendermint/issues/7617) calculate prevote message delay metric (backport #7551) (@williambanfield).
|
||||
- [consensus] [\#7631](https://github.com/tendermint/tendermint/issues/7631) check proposal non-nil in prevote message delay metric (backport #7625) (@williambanfield).
|
||||
- [statesync] [\#7885](https://github.com/tendermint/tendermint/issues/7885) statesync: assert app version matches (backport #7856) (@cmwaters).
|
||||
- [statesync] [\#7881](https://github.com/tendermint/tendermint/issues/7881) fix app hash in state rollback (backport #7837) (@cmwaters).
|
||||
- [cli] [#7837](https://github.com/tendermint/tendermint/pull/7837) fix app hash in state rollback. (@yihuang).
|
||||
|
||||
## v0.34.15
|
||||
|
||||
Special thanks to external contributors on this release: @thanethomson
|
||||
@@ -980,7 +1013,7 @@ and a validator address plus a timestamp. Note we may remove the validator
|
||||
address & timestamp fields in the future (see ADR-25).
|
||||
|
||||
`lite2` package has been added to solve `lite` issues and introduce weak
|
||||
subjectivity interface. Refer to the [spec](https://github.com/tendermint/spec/blob/master/spec/consensus/light-client.md) for complete details.
|
||||
subjectivity interface. Refer to the [spec](./spec/consensus/light-client/) for complete details.
|
||||
`lite` package is now deprecated and will be removed in v0.34 release.
|
||||
|
||||
### BREAKING CHANGES:
|
||||
|
||||
@@ -17,10 +17,14 @@ Special thanks to external contributors on this release:
|
||||
- [mempool] \#7171 Remove legacy mempool implementation. (@tychoish)
|
||||
- [rpc] \#7575 Rework how RPC responses are written back via HTTP. (@creachadair)
|
||||
- [rpc] \#7713 Remove unused options for websocket clients. (@creachadair)
|
||||
- [config] \#7930 Add new event subscription options and defaults. (@creachadair)
|
||||
- [rpc] \#7982 Add new Events interface and deprecate Subscribe. (@creachadair)
|
||||
- [cli] \#8081 make the reset command safe to use. (@marbar3778)
|
||||
|
||||
- Apps
|
||||
|
||||
- [proto/tendermint] \#6976 Remove core protobuf files in favor of only housing them in the [tendermint/spec](https://github.com/tendermint/spec) repository.
|
||||
- [tendermint/spec] \#7804 Migrate spec from [spec repo](https://github.com/tendermint/spec).
|
||||
- [abci] \#7984 Remove the locks preventing concurrent use of ABCI applications by Tendermint. (@tychoish)
|
||||
|
||||
- P2P Protocol
|
||||
|
||||
@@ -62,13 +66,17 @@ Special thanks to external contributors on this release:
|
||||
|
||||
- [internal/protoio] \#7325 Optimized `MarshalDelimited` by inlining the common case and using a `sync.Pool` in the worst case. (@odeke-em)
|
||||
- [consensus] \#6969 remove logic to 'unlock' a locked block.
|
||||
- [evidence] \#7700 Evidence messages contain single Evidence instead of EvidenceList (@jmalicevic)
|
||||
- [evidence] \#7802 Evidence pool emits events when evidence is validated and updates a metric when the number of evidence in the evidence pool changes. (@jmalicevic)
|
||||
- [pubsub] \#7319 Performance improvements for the event query API (@creachadair)
|
||||
- [node] \#7521 Define concrete type for seed node implementation (@spacech1mp)
|
||||
- [rpc] \#7612 paginate mempool /unconfirmed_txs rpc endpoint (@spacech1mp)
|
||||
- [light] [\#7536](https://github.com/tendermint/tendermint/pull/7536) rpc /status call returns info about the light client (@jmalicevic)
|
||||
- [types] \#7765 Replace EvidenceData with EvidenceList to avoid unnecessary nesting of evidence fields within a block. (@jmalicevic)
|
||||
|
||||
### BUG FIXES
|
||||
|
||||
- fix: assignment copies lock value in `BitArray.UnmarshalJSON()` (@lklimek)
|
||||
- [light] \#7640 Light Client: fix absence proof verification (@ashcherbakov)
|
||||
- [light] \#7641 Light Client: fix querying against the latest height (@ashcherbakov)
|
||||
- [cli] [#7837](https://github.com/tendermint/tendermint/pull/7837) fix app hash in state rollback. (@yihuang)
|
||||
|
||||
@@ -105,11 +105,33 @@ specify exactly the dependency you want to update, eg.
|
||||
|
||||
## Protobuf
|
||||
|
||||
We use [Protocol Buffers](https://developers.google.com/protocol-buffers) along with [gogoproto](https://github.com/gogo/protobuf) to generate code for use across Tendermint Core.
|
||||
We use [Protocol Buffers](https://developers.google.com/protocol-buffers) along
|
||||
with [`gogoproto`](https://github.com/gogo/protobuf) to generate code for use
|
||||
across Tendermint Core.
|
||||
|
||||
For linting, checking breaking changes and generating proto stubs, we use [buf](https://buf.build/). If you would like to run linting and check if the changes you have made are breaking then you will need to have docker running locally. Then the linting cmd will be `make proto-lint` and the breaking changes check will be `make proto-check-breaking`.
|
||||
To generate proto stubs, lint, and check protos for breaking changes, you will
|
||||
need to install [buf](https://buf.build/) and `gogoproto`. Then, from the root
|
||||
of the repository, run:
|
||||
|
||||
We use [Docker](https://www.docker.com/) to generate the protobuf stubs. To generate the stubs yourself, make sure docker is running then run `make proto-gen`. This command uses the spec repo to get the necessary protobuf files for generating the go code. If you are modifying the proto files manually for changes in the core data structures, you will need to clone them into the go repo and comment out lines 22-37 of the file `./scripts/protocgen.sh`.
|
||||
```bash
|
||||
# Lint all of the .proto files in proto/tendermint
|
||||
make proto-lint
|
||||
|
||||
# Check if any of your local changes (prior to committing to the Git repository)
|
||||
# are breaking
|
||||
make proto-check-breaking
|
||||
|
||||
# Generate Go code from the .proto files in proto/tendermint
|
||||
make proto-gen
|
||||
```
|
||||
|
||||
To automatically format `.proto` files, you will need
|
||||
[`clang-format`](https://clang.llvm.org/docs/ClangFormat.html) installed. Once
|
||||
installed, you can run:
|
||||
|
||||
```bash
|
||||
make proto-format
|
||||
```
|
||||
|
||||
### Visual Studio Code
|
||||
|
||||
|
||||
1
DOCKER/.gitignore
vendored
1
DOCKER/.gitignore
vendored
@@ -1 +0,0 @@
|
||||
tendermint
|
||||
@@ -2,7 +2,7 @@
|
||||
FROM golang:1.17-alpine as builder
|
||||
RUN apk update && \
|
||||
apk upgrade && \
|
||||
apk --no-cache add make
|
||||
apk --no-cache add make git
|
||||
COPY / /tendermint
|
||||
WORKDIR /tendermint
|
||||
RUN make build-linux
|
||||
@@ -53,4 +53,3 @@ CMD ["start"]
|
||||
|
||||
# Expose the data directory as a volume since there's mutable state in there
|
||||
VOLUME [ "$TMHOME" ]
|
||||
|
||||
|
||||
@@ -1,28 +0,0 @@
|
||||
FROM amazonlinux:2
|
||||
|
||||
RUN yum -y update && \
|
||||
yum -y install wget
|
||||
|
||||
RUN wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && \
|
||||
rpm -ivh epel-release-latest-7.noarch.rpm
|
||||
|
||||
RUN yum -y groupinstall "Development Tools"
|
||||
RUN yum -y install leveldb-devel which
|
||||
|
||||
ENV GOVERSION=1.16.5
|
||||
|
||||
RUN cd /tmp && \
|
||||
wget https://dl.google.com/go/go${GOVERSION}.linux-amd64.tar.gz && \
|
||||
tar -C /usr/local -xf go${GOVERSION}.linux-amd64.tar.gz && \
|
||||
mkdir -p /go/src && \
|
||||
mkdir -p /go/bin
|
||||
|
||||
ENV PATH=$PATH:/usr/local/go/bin:/go/bin
|
||||
ENV GOBIN=/go/bin
|
||||
ENV GOPATH=/go/src
|
||||
|
||||
RUN mkdir -p /tendermint
|
||||
WORKDIR /tendermint
|
||||
|
||||
CMD ["/usr/bin/make", "build", "TENDERMINT_BUILD_OPTIONS=cleveldb"]
|
||||
|
||||
@@ -1,16 +0,0 @@
|
||||
FROM golang:latest
|
||||
|
||||
# Grab deps (jq, hexdump, xxd, killall)
|
||||
RUN apt-get update && \
|
||||
apt-get install -y --no-install-recommends \
|
||||
jq bsdmainutils vim-common psmisc netcat
|
||||
|
||||
# Add testing deps for curl
|
||||
RUN echo 'deb http://httpredir.debian.org/debian testing main non-free contrib' >> /etc/apt/sources.list && \
|
||||
apt-get update && \
|
||||
apt-get install -y --no-install-recommends curl
|
||||
|
||||
VOLUME /go
|
||||
|
||||
EXPOSE 26656
|
||||
EXPOSE 26657
|
||||
@@ -1,13 +0,0 @@
|
||||
build:
|
||||
@sh -c "'$(CURDIR)/build.sh'"
|
||||
|
||||
push:
|
||||
@sh -c "'$(CURDIR)/push.sh'"
|
||||
|
||||
build_testing:
|
||||
docker build --tag tendermint/testing -f ./Dockerfile.testing .
|
||||
|
||||
build_amazonlinux_buildimage:
|
||||
docker build -t "tendermint/tendermint:build_c-amazonlinux" -f Dockerfile.build_c-amazonlinux .
|
||||
|
||||
.PHONY: build push build_testing build_amazonlinux_buildimage
|
||||
@@ -1,20 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
set -e
|
||||
|
||||
# Get the tag from the version, or try to figure it out.
|
||||
if [ -z "$TAG" ]; then
|
||||
TAG=$(awk -F\" '/TMCoreSemVer =/ { print $2; exit }' < ../version/version.go)
|
||||
fi
|
||||
if [ -z "$TAG" ]; then
|
||||
echo "Please specify a tag."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
TAG_NO_PATCH=${TAG%.*}
|
||||
|
||||
read -p "==> Build 3 docker images with the following tags (latest, $TAG, $TAG_NO_PATCH)? y/n" -n 1 -r
|
||||
echo
|
||||
if [[ $REPLY =~ ^[Yy]$ ]]
|
||||
then
|
||||
docker build -t "tendermint/tendermint" -t "tendermint/tendermint:$TAG" -t "tendermint/tendermint:$TAG_NO_PATCH" .
|
||||
fi
|
||||
@@ -1,22 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
set -e
|
||||
|
||||
# Get the tag from the version, or try to figure it out.
|
||||
if [ -z "$TAG" ]; then
|
||||
TAG=$(awk -F\" '/TMCoreSemVer =/ { print $2; exit }' < ../version/version.go)
|
||||
fi
|
||||
if [ -z "$TAG" ]; then
|
||||
echo "Please specify a tag."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
TAG_NO_PATCH=${TAG%.*}
|
||||
|
||||
read -p "==> Push 3 docker images with the following tags (latest, $TAG, $TAG_NO_PATCH)? y/n" -n 1 -r
|
||||
echo
|
||||
if [[ $REPLY =~ ^[Yy]$ ]]
|
||||
then
|
||||
docker push "tendermint/tendermint:latest"
|
||||
docker push "tendermint/tendermint:$TAG"
|
||||
docker push "tendermint/tendermint:$TAG_NO_PATCH"
|
||||
fi
|
||||
52
Makefile
52
Makefile
@@ -13,8 +13,6 @@ endif
|
||||
|
||||
LD_FLAGS = -X github.com/tendermint/tendermint/version.TMVersion=$(VERSION)
|
||||
BUILD_FLAGS = -mod=readonly -ldflags "$(LD_FLAGS)"
|
||||
BUILD_IMAGE := ghcr.io/tendermint/docker-build-proto
|
||||
DOCKER_PROTO_BUILDER := docker run -v $(shell pwd):/workspace --workdir /workspace $(BUILD_IMAGE)
|
||||
CGO_ENABLED ?= 0
|
||||
|
||||
# handle nostrip
|
||||
@@ -78,17 +76,53 @@ $(BUILDDIR)/:
|
||||
### Protobuf ###
|
||||
###############################################################################
|
||||
|
||||
proto-gen:
|
||||
@docker pull -q tendermintdev/docker-build-proto
|
||||
check-proto-deps:
|
||||
ifeq (,$(shell which buf))
|
||||
$(error "buf is required for Protobuf building, linting and breakage checking. See https://docs.buf.build/installation for installation instructions.")
|
||||
endif
|
||||
ifeq (,$(shell which protoc-gen-gogofaster))
|
||||
$(error "gogofaster plugin for protoc is required. Run 'go install github.com/gogo/protobuf/protoc-gen-gogofaster@latest' to install")
|
||||
endif
|
||||
.PHONY: check-proto-deps
|
||||
|
||||
check-proto-format-deps:
|
||||
ifeq (,$(shell which clang-format))
|
||||
$(error "clang-format is required for Protobuf formatting. See instructions for your platform on how to install it.")
|
||||
endif
|
||||
.PHONY: check-proto-format-deps
|
||||
|
||||
proto-gen: check-proto-deps
|
||||
@echo "Generating Protobuf files"
|
||||
@$(DOCKER_PROTO_BUILDER) sh ./scripts/protocgen.sh
|
||||
@buf generate
|
||||
@mv ./proto/tendermint/abci/types.pb.go ./abci/types/
|
||||
.PHONY: proto-gen
|
||||
|
||||
proto-format:
|
||||
# These targets are provided for convenience and are intended for local
|
||||
# execution only.
|
||||
proto-lint: check-proto-deps
|
||||
@echo "Linting Protobuf files"
|
||||
@buf lint
|
||||
.PHONY: proto-lint
|
||||
|
||||
proto-format: check-proto-format-deps
|
||||
@echo "Formatting Protobuf files"
|
||||
@$(DOCKER_PROTO_BUILDER) find ./ -not -path "./third_party/*" -name *.proto -exec clang-format -i {} \;
|
||||
@find . -name '*.proto' -path "./proto/*" -exec clang-format -i {} \;
|
||||
.PHONY: proto-format
|
||||
|
||||
proto-check-breaking: check-proto-deps
|
||||
@echo "Checking for breaking changes in Protobuf files against local branch"
|
||||
@echo "Note: This is only useful if your changes have not yet been committed."
|
||||
@echo " Otherwise read up on buf's \"breaking\" command usage:"
|
||||
@echo " https://docs.buf.build/breaking/usage"
|
||||
@buf breaking --against ".git"
|
||||
.PHONY: proto-check-breaking
|
||||
|
||||
# TODO: Should be removed when work on ABCI++ is complete.
|
||||
# For more information, see https://github.com/tendermint/tendermint/issues/8066
|
||||
abci-proto-gen:
|
||||
./scripts/abci-gen.sh
|
||||
.PHONY: abci-proto-gen
|
||||
|
||||
###############################################################################
|
||||
### Build ABCI ###
|
||||
###############################################################################
|
||||
@@ -209,10 +243,8 @@ build-docs:
|
||||
### Docker image ###
|
||||
###############################################################################
|
||||
|
||||
build-docker: build-linux
|
||||
cp $(BUILDDIR)/tendermint DOCKER/tendermint
|
||||
build-docker:
|
||||
docker build --label=tendermint --tag="tendermint/tendermint" -f DOCKER/Dockerfile .
|
||||
rm -rf DOCKER/tendermint
|
||||
.PHONY: build-docker
|
||||
|
||||
|
||||
|
||||
54
README.md
54
README.md
@@ -3,7 +3,7 @@
|
||||

|
||||
|
||||
[Byzantine-Fault Tolerant](https://en.wikipedia.org/wiki/Byzantine_fault_tolerance)
|
||||
[State Machines](https://en.wikipedia.org/wiki/State_machine_replication).
|
||||
[State Machine Replication](https://en.wikipedia.org/wiki/State_machine_replication).
|
||||
Or [Blockchain](<https://en.wikipedia.org/wiki/Blockchain_(database)>), for short.
|
||||
|
||||
[](https://github.com/tendermint/tendermint/releases/latest)
|
||||
@@ -20,10 +20,14 @@ Or [Blockchain](<https://en.wikipedia.org/wiki/Blockchain_(database)>), for shor
|
||||
|
||||
Tendermint Core is a Byzantine Fault Tolerant (BFT) middleware that takes a state transition machine - written in any programming language - and securely replicates it on many machines.
|
||||
|
||||
For protocol details, see [the specification](https://github.com/tendermint/spec).
|
||||
For protocol details, refer to the [Tendermint Specification](./spec/README.md).
|
||||
|
||||
For detailed analysis of the consensus protocol, including safety and liveness proofs,
|
||||
see our recent paper, "[The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938)".
|
||||
read our paper, "[The latest gossip on BFT consensus](https://arxiv.org/abs/1807.04938)".
|
||||
|
||||
## Documentation
|
||||
|
||||
Complete documentation can be found on the [website](https://docs.tendermint.com/).
|
||||
|
||||
## Releases
|
||||
|
||||
@@ -33,7 +37,7 @@ Tendermint has been in the production of private and public environments, most n
|
||||
See below for more details about [versioning](#versioning).
|
||||
|
||||
In any case, if you intend to run Tendermint in production, we're happy to help. You can
|
||||
contact us [over email](mailto:hello@interchain.berlin) or [join the chat](https://discord.gg/cosmosnetwork).
|
||||
contact us [over email](mailto:hello@interchain.io) or [join the chat](https://discord.gg/cosmosnetwork).
|
||||
|
||||
More on how releases are conducted can be found [here](./RELEASES.md).
|
||||
|
||||
@@ -52,20 +56,15 @@ to notify you of vulnerabilities and fixes in Tendermint Core. You can subscribe
|
||||
|-------------|------------------|
|
||||
| Go version | Go1.17 or higher |
|
||||
|
||||
## Documentation
|
||||
|
||||
Complete documentation can be found on the [website](https://docs.tendermint.com/master/).
|
||||
|
||||
### Install
|
||||
|
||||
See the [install instructions](/docs/introduction/install.md).
|
||||
See the [install instructions](./docs/introduction/install.md).
|
||||
|
||||
### Quick Start
|
||||
|
||||
- [Single node](/docs/introduction/quick-start.md)
|
||||
- [Local cluster using docker-compose](/docs/tools/docker-compose.md)
|
||||
- [Remote cluster using Terraform and Ansible](/docs/tools/terraform-and-ansible.md)
|
||||
- [Join the Cosmos testnet](https://cosmos.network/testnet)
|
||||
- [Single node](./docs/introduction/quick-start.md)
|
||||
- [Local cluster using docker-compose](./docs/tools/docker-compose.md)
|
||||
- [Remote cluster using Terraform and Ansible](./docs/tools/terraform-and-ansible.md)
|
||||
|
||||
## Contributing
|
||||
|
||||
@@ -73,9 +72,9 @@ Please abide by the [Code of Conduct](CODE_OF_CONDUCT.md) in all interactions.
|
||||
|
||||
Before contributing to the project, please take a look at the [contributing guidelines](CONTRIBUTING.md)
|
||||
and the [style guide](STYLE_GUIDE.md). You may also find it helpful to read the
|
||||
[specifications](https://github.com/tendermint/spec), watch the [Developer Sessions](/docs/DEV_SESSIONS.md),
|
||||
[specifications](./spec/README.md),
|
||||
and familiarize yourself with our
|
||||
[Architectural Decision Records](https://github.com/tendermint/tendermint/tree/master/docs/architecture).
|
||||
[Architectural Decision Records (ADRs)](./docs/architecture/README.md) and [Request For Comments (RFCs)](./docs/rfc/README.md).
|
||||
|
||||
## Versioning
|
||||
|
||||
@@ -112,26 +111,23 @@ in [UPGRADING.md](./UPGRADING.md).
|
||||
|
||||
## Resources
|
||||
|
||||
### Tendermint Core
|
||||
### Roadmap
|
||||
|
||||
We keep a public up-to-date version of our roadmap [here](./docs/roadmap/roadmap.md)
|
||||
|
||||
For details about the blockchain data structures and the p2p protocols, see the
|
||||
[Tendermint specification](https://docs.tendermint.com/master/spec/).
|
||||
### Libraries
|
||||
|
||||
For details on using the software, see the [documentation](/docs/) which is also
|
||||
hosted at: <https://docs.tendermint.com/master/>
|
||||
|
||||
### Tools
|
||||
|
||||
Benchmarking is provided by [`tm-load-test`](https://github.com/informalsystems/tm-load-test).
|
||||
Additional tooling can be found in [/docs/tools](/docs/tools).
|
||||
- [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); A framework for building applications in Golang
|
||||
- [Tendermint in Rust](https://github.com/informalsystems/tendermint-rs)
|
||||
- [ABCI Tower](https://github.com/penumbra-zone/tower-abci)
|
||||
|
||||
### Applications
|
||||
|
||||
- [Cosmos SDK](http://github.com/cosmos/cosmos-sdk); a cryptocurrency application framework
|
||||
- [Ethermint](http://github.com/cosmos/ethermint); Ethereum on Tendermint
|
||||
- [Many more](https://tendermint.com/ecosystem)
|
||||
- [Cosmos Hub](https://hub.cosmos.network/)
|
||||
- [Terra](https://www.terra.money/)
|
||||
- [Celestia](https://celestia.org/)
|
||||
- [Anoma](https://anoma.network/)
|
||||
- [Vocdoni](https://docs.vocdoni.io/)
|
||||
|
||||
### Research
|
||||
|
||||
@@ -144,7 +140,7 @@ Additional tooling can be found in [/docs/tools](/docs/tools).
|
||||
## Join us!
|
||||
|
||||
Tendermint Core is maintained by [Interchain GmbH](https://interchain.berlin).
|
||||
If you'd like to work full-time on Tendermint Core, [we're hiring](https://interchain-gmbh.breezy.hr/p/682fb7e8a6f601-software-engineer-tendermint-core)!
|
||||
If you'd like to work full-time on Tendermint Core, [we're hiring](https://interchain-gmbh.breezy.hr/)!
|
||||
|
||||
Funding for Tendermint Core development comes primarily from the [Interchain Foundation](https://interchain.io),
|
||||
a Swiss non-profit. The Tendermint trademark is owned by [Tendermint Inc.](https://tendermint.com), the for-profit entity
|
||||
|
||||
29
RELEASES.md
29
RELEASES.md
@@ -42,15 +42,42 @@ In the following example, we'll assume that we're making a backport branch for
|
||||
the 0.35.x line.
|
||||
|
||||
1. Start on `master`
|
||||
|
||||
2. Create and push the backport branch:
|
||||
```sh
|
||||
git checkout -b v0.35.x
|
||||
git push origin v0.35.x
|
||||
```
|
||||
|
||||
3. Create a PR to update the documentation directory for the backport branch.
|
||||
|
||||
We only maintain RFC and ADR documents on master, to avoid confusion.
|
||||
In addition, we rewrite Markdown URLs pointing to master to point to the
|
||||
backport branch, so that generated documentation will link to the correct
|
||||
versions of files elsewhere in the repository. For context on the latter,
|
||||
see https://github.com/tendermint/tendermint/issues/7675.
|
||||
|
||||
To prepare the PR:
|
||||
```sh
|
||||
# Remove the RFC and ADR documents from the backport.
|
||||
# We only maintain these on master to avoid confusion.
|
||||
git rm -r docs/rfc docs/architecture
|
||||
|
||||
# Update absolute links to point to the backport.
|
||||
go run ./scripts/linkpatch -recur -target v0.35.x -skip-path docs/DOCS_README.md,docs/README.md docs
|
||||
|
||||
# Create and push the PR.
|
||||
git checkout -b update-docs-v035x
|
||||
git commit -m "Update docs for v0.35.x backport branch." docs
|
||||
git push -u origin update-docs-v035x
|
||||
```
|
||||
|
||||
Be sure to merge this PR before making other changes on the newly-created
|
||||
backport branch.
|
||||
|
||||
After doing these steps, go back to `master` and do the following:
|
||||
|
||||
1. Tag `master` as the dev branch for the _next_ major release and push it back up.
|
||||
1. Tag `master` as the dev branch for the _next_ major release and push it up to GitHub.
|
||||
For example:
|
||||
```sh
|
||||
git tag -a v0.36.0-dev -m "Development base for Tendermint v0.36."
|
||||
|
||||
78
UPGRADING.md
78
UPGRADING.md
@@ -2,6 +2,67 @@
|
||||
|
||||
This guide provides instructions for upgrading to specific versions of Tendermint Core.
|
||||
|
||||
## v0.36
|
||||
|
||||
### ABCI Changes
|
||||
|
||||
#### ABCI++
|
||||
|
||||
Coming soon...
|
||||
|
||||
#### ABCI Mutex
|
||||
|
||||
In previous versions of ABCI, Tendermint was prevented from making
|
||||
concurrent calls to ABCI implementations by virtue of mutexes in the
|
||||
implementation of Tendermint's ABCI infrastructure. These mutexes have
|
||||
been removed from the current implementation and applications will now
|
||||
be responsible for managing their own concurrency control.
|
||||
|
||||
To replicate the prior semantics, ensure that ABCI applications have a
|
||||
single mutex that protects all ABCI method calls from concurrent
|
||||
access. You can relax these requirements if your application can
|
||||
provide safe concurrent access via other means. This safety is an
|
||||
application concern so be very sure to test the application thoroughly
|
||||
using realistic workloads and the race detector to ensure your
|
||||
applications remains correct.
|
||||
|
||||
### RPC Changes
|
||||
|
||||
Tendermint v0.36 adds a new RPC event subscription API. The existing event
|
||||
subscription API based on websockets is now deprecated. It will continue to
|
||||
work throughout the v0.36 release, but the `subscribe`, `unsubscribe`, and
|
||||
`unsubscribe_all` methods, along with websocket support, will be removed in
|
||||
Tendermint v0.37. Callers currently using these features should migrate as
|
||||
soon as is practical to the new API.
|
||||
|
||||
To enable the new API, node operators set a new `event-log-window-size`
|
||||
parameter in the `[rpc]` section of the `config.toml` file. This defines a
|
||||
duration of time during which the node will log all events published to the
|
||||
event bus for use by RPC consumers.
|
||||
|
||||
Consumers use the new `events` JSON-RPC method to poll for events matching
|
||||
their query in the log. Unlike the streaming API, events are not discarded if
|
||||
the caller is slow, loses its connection, or crashes. As long as the client
|
||||
recovers before its events expire from the log window, it will be able to
|
||||
replay and catch up after recovering. Also unlike the streaming API, the client
|
||||
can tell if it has truly missed events because they have expired from the log.
|
||||
|
||||
The `events` method is a normal JSON-RPC method, and does not require any
|
||||
non-standard response processing (in contrast with the old `subscribe`).
|
||||
Clients can modify their query at any time, and no longer need to coordinate
|
||||
subscribe and unsubscribe calls to handle multiple queries.
|
||||
|
||||
The Go client implementations in the Tendermint Core repository have all been
|
||||
updated to add a new `Events` method, including the light client proxy.
|
||||
|
||||
A new `rpc/client/eventstream` package has also been added to make it easier
|
||||
for users to update existing use of the streaming API to use the polling API
|
||||
The `eventstream` package handles polling and delivers matching events to a
|
||||
callback.
|
||||
|
||||
For more detailed information, see [ADR 075](https://tinyurl.com/adr075) which
|
||||
defines and describes the new API in detail.
|
||||
|
||||
## v0.35
|
||||
|
||||
### ABCI Changes
|
||||
@@ -113,11 +174,11 @@ To access any of the functionality previously available via the
|
||||
`node.Node` type, use the `*local.Local` "RPC" client, that exposes
|
||||
the full RPC interface provided as direct function calls. Import the
|
||||
`github.com/tendermint/tendermint/rpc/client/local` package and pass
|
||||
the node service as in the following:
|
||||
the node service as in the following:
|
||||
|
||||
```go
|
||||
node := node.NewDefault() //construct the node object
|
||||
// start and set up the node service
|
||||
// start and set up the node service
|
||||
|
||||
client := local.New(node.(local.NodeService))
|
||||
// use client object to interact with the node
|
||||
@@ -144,10 +205,10 @@ both stacks.
|
||||
The P2P library was reimplemented in this release. The new implementation is
|
||||
enabled by default in this version of Tendermint. The legacy implementation is still
|
||||
included in this version of Tendermint as a backstop to work around unforeseen
|
||||
production issues. The new and legacy version are interoperable. If necessary,
|
||||
production issues. The new and legacy version are interoperable. If necessary,
|
||||
you can enable the legacy implementation in the server configuration file.
|
||||
|
||||
To make use of the legacy P2P implemementation add or update the following field of
|
||||
To make use of the legacy P2P implemementation add or update the following field of
|
||||
your server's configuration file under the `[p2p]` section:
|
||||
|
||||
```toml
|
||||
@@ -172,8 +233,8 @@ in the order in which they were received.
|
||||
|
||||
* `priority`: A priority queue of messages.
|
||||
|
||||
* `wdrr`: A queue implementing the Weighted Deficit Round Robin algorithm. A
|
||||
weighted deficit round robin queue is created per peer. Each queue contains a
|
||||
* `wdrr`: A queue implementing the Weighted Deficit Round Robin algorithm. A
|
||||
weighted deficit round robin queue is created per peer. Each queue contains a
|
||||
separate 'flow' for each of the channels of communication that exist between any two
|
||||
peers. Tendermint maintains a channel per message type between peers. Each WDRR
|
||||
queue maintains a shared buffered with a fixed capacity through which messages on different
|
||||
@@ -217,7 +278,7 @@ Note also that Tendermint 0.34 also requires Go 1.16 or higher.
|
||||
were added to support the new State Sync feature.
|
||||
Previously, syncing a new node to a preexisting network could take days; but with State Sync,
|
||||
new nodes are able to join a network in a matter of seconds.
|
||||
Read [the spec](https://docs.tendermint.com/master/spec/abci/apps.html#state-sync)
|
||||
Read [the spec](https://github.com/tendermint/tendermint/blob/master/spec/abci/apps.md)
|
||||
if you want to learn more about State Sync, or if you'd like your application to use it.
|
||||
(If you don't want to support State Sync in your application, you can just implement these new
|
||||
ABCI methods as no-ops, leaving them empty.)
|
||||
@@ -342,7 +403,6 @@ The `bech32` package has moved to the Cosmos SDK:
|
||||
### CLI
|
||||
|
||||
The `tendermint lite` command has been renamed to `tendermint light` and has a slightly different API.
|
||||
See [the docs](https://docs.tendermint.com/master/tendermint-core/light-client-protocol.html#http-proxy) for details.
|
||||
|
||||
### Light Client
|
||||
|
||||
@@ -617,7 +677,7 @@ the compilation tag:
|
||||
|
||||
Use `cleveldb` tag instead of `gcc` to compile Tendermint with CLevelDB or
|
||||
use `make build_c` / `make install_c` (full instructions can be found at
|
||||
<https://tendermint.com/docs/introduction/install.html#compile-with-cleveldb-support>)
|
||||
<https://docs.tendermint.com/v0.35/introduction/install.html)
|
||||
|
||||
## v0.31.0
|
||||
|
||||
|
||||
@@ -19,8 +19,8 @@ To get up and running quickly, see the [getting started guide](../docs/app-dev/g
|
||||
|
||||
A detailed description of the ABCI methods and message types is contained in:
|
||||
|
||||
- [The main spec](https://github.com/tendermint/spec/blob/master/spec/abci/abci.md)
|
||||
- [A protobuf file](https://github.com/tendermint/spec/blob/master/proto/tendermint/abci/types.proto)
|
||||
- [The main spec](../spec/abci/abci.md)
|
||||
- [A protobuf file](../proto/tendermint/abci/types.proto)
|
||||
- [A Go interface](./types/application.go)
|
||||
|
||||
## Protocol Buffers
|
||||
|
||||
@@ -19,8 +19,8 @@ const (

// Client defines an interface for an ABCI client.
//
// All `Async` methods return a `ReqRes` object and an error.
// All `Sync` methods return the appropriate protobuf ResponseXxx struct and an error.
// All methods return the appropriate protobuf ResponseXxx struct and
// an error.
//
// NOTE these are client errors, eg. ABCI socket connectivity issues.
// Application-related errors are reflected in response via ABCI error codes
@@ -28,25 +28,20 @@ const (
type Client interface {
	service.Service

	SetResponseCallback(Callback)
	Error() error

	// Asynchronous requests
	FlushAsync(context.Context) (*ReqRes, error)
	DeliverTxAsync(context.Context, types.RequestDeliverTx) (*ReqRes, error)
	CheckTxAsync(context.Context, types.RequestCheckTx) (*ReqRes, error)

	// Synchronous requests
	Flush(context.Context) error
	Echo(ctx context.Context, msg string) (*types.ResponseEcho, error)
	Info(context.Context, types.RequestInfo) (*types.ResponseInfo, error)
	DeliverTx(context.Context, types.RequestDeliverTx) (*types.ResponseDeliverTx, error)
	CheckTx(context.Context, types.RequestCheckTx) (*types.ResponseCheckTx, error)
	Query(context.Context, types.RequestQuery) (*types.ResponseQuery, error)
	Commit(context.Context) (*types.ResponseCommit, error)
	InitChain(context.Context, types.RequestInitChain) (*types.ResponseInitChain, error)
	BeginBlock(context.Context, types.RequestBeginBlock) (*types.ResponseBeginBlock, error)
	EndBlock(context.Context, types.RequestEndBlock) (*types.ResponseEndBlock, error)
	PrepareProposal(context.Context, types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error)
	ProcessProposal(context.Context, types.RequestProcessProposal) (*types.ResponseProcessProposal, error)
	ExtendVote(context.Context, types.RequestExtendVote) (*types.ResponseExtendVote, error)
	VerifyVoteExtension(context.Context, types.RequestVerifyVoteExtension) (*types.ResponseVerifyVoteExtension, error)
	FinalizeBlock(context.Context, types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error)
	ListSnapshots(context.Context, types.RequestListSnapshots) (*types.ResponseListSnapshots, error)
	OfferSnapshot(context.Context, types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error)
	LoadSnapshotChunk(context.Context, types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error)
@@ -57,89 +52,37 @@ type Client interface {

// NewClient returns a new ABCI client of the specified transport type.
// It returns an error if the transport is not "socket" or "grpc"
func NewClient(logger log.Logger, addr, transport string, mustConnect bool) (client Client, err error) {
func NewClient(logger log.Logger, addr, transport string, mustConnect bool) (Client, error) {
	switch transport {
	case "socket":
		client = NewSocketClient(logger, addr, mustConnect)
		return NewSocketClient(logger, addr, mustConnect), nil
	case "grpc":
		client = NewGRPCClient(logger, addr, mustConnect)
		return NewGRPCClient(logger, addr, mustConnect), nil
	default:
		err = fmt.Errorf("unknown abci transport %s", transport)
		return nil, fmt.Errorf("unknown abci transport %s", transport)
	}
	return
}

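A rough usage sketch of the consolidated, synchronous-only interface above, based on the `NewClient` signature and method set shown in this hunk. The wiring around it (how the logger and address are produced, the error handling policy) is assumed rather than taken from the diff.

```go
package example

import (
	"context"
	"fmt"

	abciclient "github.com/tendermint/tendermint/abci/client"
	types "github.com/tendermint/tendermint/abci/types"
	"github.com/tendermint/tendermint/libs/log"
)

// queryInfo dials an ABCI server over the socket transport and issues two of
// the synchronous calls from the Client interface above.
func queryInfo(ctx context.Context, logger log.Logger, addr string) error {
	client, err := abciclient.NewClient(logger, addr, "socket", true)
	if err != nil {
		return fmt.Errorf("create client: %w", err)
	}
	if err := client.Start(ctx); err != nil { // Start comes from the embedded service.Service
		return fmt.Errorf("start client: %w", err)
	}

	echo, err := client.Echo(ctx, "ping")
	if err != nil {
		return err
	}
	fmt.Println("echo:", echo.Message)

	info, err := client.Info(ctx, types.RequestInfo{})
	if err != nil {
		return err
	}
	fmt.Println("app version:", info.Version)
	return nil
}
```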
type Callback func(*types.Request, *types.Response)

type ReqRes struct {
type requestAndResponse struct {
	*types.Request
	*sync.WaitGroup
	*types.Response // Not set atomically, so be sure to use WaitGroup.
	*types.Response

	mtx sync.Mutex
	done bool // Gets set to true once *after* WaitGroup.Done().
	cb func(*types.Response) // A single callback that may be set.
	mtx sync.Mutex
	signal chan struct{}
}

func NewReqRes(req *types.Request) *ReqRes {
	return &ReqRes{
		Request: req,
		WaitGroup: waitGroup1(),
		Response: nil,

		done: false,
		cb: nil,
func makeReqRes(req *types.Request) *requestAndResponse {
	return &requestAndResponse{
		Request: req,
		Response: nil,
		signal: make(chan struct{}),
	}
}

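The shift from a `WaitGroup` plus `done` flag to a single `signal` channel is easier to see in isolation. A minimal standard-library sketch of the same completion pattern, with invented names:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// result pairs a request with its eventual response; closing signal marks it done.
type result struct {
	mtx      sync.Mutex
	response string // written once, before signal is closed
	signal   chan struct{}
}

func newResult() *result {
	return &result{signal: make(chan struct{})}
}

// complete records the response and releases every waiter exactly once.
func (r *result) complete(resp string) {
	r.mtx.Lock()
	defer r.mtx.Unlock()
	r.response = resp
	close(r.signal)
}

// wait blocks until complete has been called or the timeout elapses.
func (r *result) wait(timeout time.Duration) (string, bool) {
	select {
	case <-r.signal:
		r.mtx.Lock()
		defer r.mtx.Unlock()
		return r.response, true
	case <-time.After(timeout):
		return "", false
	}
}

func main() {
	res := newResult()
	go func() {
		time.Sleep(10 * time.Millisecond)
		res.complete("pong")
	}()
	if resp, ok := res.wait(time.Second); ok {
		fmt.Println("got:", resp)
	}
}
```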
// Sets sets the callback. If reqRes is already done, it will call the cb
|
||||
// immediately. Note, reqRes.cb should not change if reqRes.done and only one
|
||||
// callback is supported.
|
||||
func (r *ReqRes) SetCallback(cb func(res *types.Response)) {
|
||||
r.mtx.Lock()
|
||||
|
||||
if r.done {
|
||||
r.mtx.Unlock()
|
||||
cb(r.Response)
|
||||
return
|
||||
}
|
||||
|
||||
r.cb = cb
|
||||
r.mtx.Unlock()
|
||||
}
|
||||
|
||||
// InvokeCallback invokes a thread-safe execution of the configured callback
|
||||
// if non-nil.
|
||||
func (r *ReqRes) InvokeCallback() {
|
||||
// markDone marks the ReqRes object as done.
|
||||
func (r *requestAndResponse) markDone() {
|
||||
r.mtx.Lock()
|
||||
defer r.mtx.Unlock()
|
||||
|
||||
if r.cb != nil {
|
||||
r.cb(r.Response)
|
||||
}
|
||||
}
|
||||
|
||||
// GetCallback returns the configured callback of the ReqRes object which may be
|
||||
// nil. Note, it is not safe to concurrently call this in cases where it is
|
||||
// marked done and SetCallback is called before calling GetCallback as that
|
||||
// will invoke the callback twice and create a potential race condition.
|
||||
//
|
||||
// ref: https://github.com/tendermint/tendermint/issues/5439
|
||||
func (r *ReqRes) GetCallback() func(*types.Response) {
|
||||
r.mtx.Lock()
|
||||
defer r.mtx.Unlock()
|
||||
return r.cb
|
||||
}
|
||||
|
||||
// SetDone marks the ReqRes object as done.
|
||||
func (r *ReqRes) SetDone() {
|
||||
r.mtx.Lock()
|
||||
r.done = true
|
||||
r.mtx.Unlock()
|
||||
}
|
||||
|
||||
func waitGroup1() (wg *sync.WaitGroup) {
|
||||
wg = &sync.WaitGroup{}
|
||||
wg.Add(1)
|
||||
return
|
||||
close(r.signal)
|
||||
}
|
||||
|
||||
@@ -1,36 +0,0 @@
package abciclient

import (
	"fmt"
	"sync"

	"github.com/tendermint/tendermint/abci/types"
	"github.com/tendermint/tendermint/libs/log"
)

// Creator creates new ABCI clients.
type Creator func(log.Logger) (Client, error)

// NewLocalCreator returns a Creator for the given app,
// which will be running locally.
func NewLocalCreator(app types.Application) Creator {
	mtx := new(sync.Mutex)

	return func(logger log.Logger) (Client, error) {
		return NewLocalClient(logger, mtx, app), nil
	}
}

// NewRemoteCreator returns a Creator for the given address (e.g.
// "192.168.0.1") and transport (e.g. "tcp"). Set mustConnect to true if you
// want the client to connect before reporting success.
func NewRemoteCreator(logger log.Logger, addr, transport string, mustConnect bool) Creator {
	return func(log.Logger) (Client, error) {
		remoteApp, err := NewClient(logger, addr, transport, mustConnect)
		if err != nil {
			return nil, fmt.Errorf("failed to connect to proxy: %w", err)
		}

		return remoteApp, nil
	}
}
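The notable detail in the deleted `NewLocalCreator` is that the returned closure captures one mutex, so every client it produces contends on the same lock. A stripped-down standard-library sketch of that factory shape (all names invented for illustration):

```go
package main

import (
	"fmt"
	"sync"
)

// worker stands in for a client whose calls are guarded by a shared mutex.
type worker struct {
	mtx  *sync.Mutex
	name string
}

func (w *worker) do() {
	w.mtx.Lock()
	defer w.mtx.Unlock()
	fmt.Println(w.name, "holds the shared lock")
}

// newWorkerFactory captures a single mutex so that every worker it creates
// competes for the same lock, mirroring how NewLocalCreator shared one mutex
// across all local clients it produced.
func newWorkerFactory() func(name string) *worker {
	mtx := new(sync.Mutex)
	return func(name string) *worker {
		return &worker{mtx: mtx, name: name}
	}
}

func main() {
	create := newWorkerFactory()
	a, b := create("a"), create("b")
	a.do()
	b.do() // same mutex as a
}
```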
@@ -7,23 +7,14 @@
//
// ## Socket client
//
// async: the client maintains an internal buffer of a fixed size. when the
// buffer becomes full, all Async calls will return an error immediately.
//
// sync: the client blocks on 1) enqueuing the Sync request 2) enqueuing the
// Flush requests 3) waiting for the Flush response
// The client blocks for enqueuing the request, for enqueuing the
// Flush to send the request, and for the Flush response to return.
//
// ## Local client
//
// async: global mutex is locked during each call (meaning it's not really async!)
// sync: global mutex is locked during each call
// The global mutex is locked during each call
//
// ## gRPC client
//
// async: gRPC is synchronous, but an internal buffer of a fixed size is used
// to store responses and later call callbacks (separate goroutine per
// response).
//
// sync: waits for all Async calls to complete (essentially what Flush does in
// the socket client) and calls Sync method.
// The client waits for all calls to complete.
package abciclient

@@ -24,14 +24,12 @@ type grpcClient struct {

	mustConnect bool

	client types.ABCIApplicationClient
	conn *grpc.ClientConn
	chReqRes chan *ReqRes // dispatches "async" responses to callbacks *in order*, needed by mempool
	client types.ABCIApplicationClient
	conn *grpc.ClientConn

	mtx sync.Mutex
	addr string
	err error
	resCb func(*types.Request, *types.Response) // listens to all callbacks
	mtx sync.Mutex
	addr string
	err error
}

var _ Client = (*grpcClient)(nil)
@@ -39,25 +37,11 @@ var _ Client = (*grpcClient)(nil)
// NewGRPCClient creates a gRPC client, which will connect to addr upon the
// start. Note Client#Start returns an error if connection is unsuccessful and
// mustConnect is true.
//
// GRPC calls are synchronous, but some callbacks expect to be called
// asynchronously (eg. the mempool expects to be able to lock to remove bad txs
// from cache). To accommodate, we finish each call in its own go-routine,
// which is expensive, but easy - if you want something better, use the socket
// protocol! maybe one day, if people really want it, we use grpc streams, but
// hopefully not :D
func NewGRPCClient(logger log.Logger, addr string, mustConnect bool) Client {
	cli := &grpcClient{
		logger: logger,
		addr: addr,
		mustConnect: mustConnect,
		// Buffering the channel is needed to make calls appear asynchronous,
		// which is required when the caller makes multiple async calls before
		// processing callbacks (e.g. due to holding locks). 64 means that a
		// caller can make up to 64 async calls before a callback must be
		// processed (otherwise it deadlocks). It also means that we can make 64
		// gRPC calls while processing a slow callback at the channel head.
		chReqRes: make(chan *ReqRes, 64),
	}
	cli.BaseService = *service.NewBaseService(logger, "grpcClient", cli)
	return cli
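The comment explains why `chReqRes` is buffered at 64: a caller may issue several calls before any callback runs, and callbacks must be dispatched in order. A toy standard-library version of that producer/single-dispatcher arrangement (buffer size and names are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const bufferSize = 4 // stands in for the 64-slot chReqRes buffer

	results := make(chan int, bufferSize)
	var wg sync.WaitGroup
	wg.Add(1)

	// Single dispatcher goroutine: callbacks run one at a time, in the order
	// the results were enqueued, just like the gRPC client's chReqRes reader.
	go func() {
		defer wg.Done()
		for r := range results {
			fmt.Println("callback for result", r)
		}
	}()

	// The caller can enqueue up to bufferSize results without waiting for a
	// single callback to run; one more would block until the dispatcher
	// catches up.
	for i := 0; i < bufferSize; i++ {
		results <- i
	}

	close(results)
	wg.Wait()
}
```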
@@ -68,43 +52,6 @@ func dialerFunc(ctx context.Context, addr string) (net.Conn, error) {
|
||||
}
|
||||
|
||||
func (cli *grpcClient) OnStart(ctx context.Context) error {
|
||||
// This processes asynchronous request/response messages and dispatches
|
||||
// them to callbacks.
|
||||
go func() {
|
||||
// Use a separate function to use defer for mutex unlocks (this handles panics)
|
||||
callCb := func(reqres *ReqRes) {
|
||||
cli.mtx.Lock()
|
||||
defer cli.mtx.Unlock()
|
||||
|
||||
reqres.SetDone()
|
||||
reqres.Done()
|
||||
|
||||
// Notify client listener if set
|
||||
if cli.resCb != nil {
|
||||
cli.resCb(reqres.Request, reqres.Response)
|
||||
}
|
||||
|
||||
// Notify reqRes listener if set
|
||||
if cb := reqres.GetCallback(); cb != nil {
|
||||
cb(reqres.Response)
|
||||
}
|
||||
}
|
||||
|
||||
for {
|
||||
select {
|
||||
case reqres := <-cli.chReqRes:
|
||||
if reqres != nil {
|
||||
callCb(reqres)
|
||||
} else {
|
||||
cli.logger.Error("Received nil reqres")
|
||||
}
|
||||
case <-ctx.Done():
|
||||
return
|
||||
}
|
||||
|
||||
}
|
||||
}()
|
||||
|
||||
RETRY_LOOP:
|
||||
for {
|
||||
conn, err := grpc.Dial(cli.addr,
|
||||
@@ -144,227 +91,81 @@ RETRY_LOOP:
|
||||
}
|
||||
|
||||
func (cli *grpcClient) OnStop() {
|
||||
if cli.conn != nil {
|
||||
cli.conn.Close()
|
||||
}
|
||||
close(cli.chReqRes)
|
||||
}
|
||||
|
||||
func (cli *grpcClient) StopForError(err error) {
|
||||
if !cli.IsRunning() {
|
||||
return
|
||||
}
|
||||
|
||||
cli.mtx.Lock()
|
||||
if cli.err == nil {
|
||||
cli.err = err
|
||||
}
|
||||
cli.mtx.Unlock()
|
||||
defer cli.mtx.Unlock()
|
||||
|
||||
cli.logger.Error("Stopping abci.grpcClient for error", "err", err)
|
||||
if err := cli.Stop(); err != nil {
|
||||
cli.logger.Error("error stopping abci.grpcClient", "err", err)
|
||||
if cli.conn != nil {
|
||||
cli.err = cli.conn.Close()
|
||||
}
|
||||
}
|
||||
|
||||
func (cli *grpcClient) Error() error {
|
||||
cli.mtx.Lock()
|
||||
defer cli.mtx.Unlock()
|
||||
|
||||
return cli.err
|
||||
}
|
||||
|
||||
// Set listener for all responses
|
||||
// NOTE: callback may get internally generated flush responses.
|
||||
func (cli *grpcClient) SetResponseCallback(resCb Callback) {
|
||||
cli.mtx.Lock()
|
||||
cli.resCb = resCb
|
||||
cli.mtx.Unlock()
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
// NOTE: call is synchronous, use ctx to break early if needed
|
||||
func (cli *grpcClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
|
||||
req := types.ToRequestFlush()
|
||||
res, err := cli.client.Flush(ctx, req.GetFlush(), grpc.WaitForReady(true))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_Flush{Flush: res}})
|
||||
}
|
||||
|
||||
// NOTE: call is synchronous, use ctx to break early if needed
|
||||
func (cli *grpcClient) DeliverTxAsync(ctx context.Context, params types.RequestDeliverTx) (*ReqRes, error) {
|
||||
req := types.ToRequestDeliverTx(params)
|
||||
res, err := cli.client.DeliverTx(ctx, req.GetDeliverTx(), grpc.WaitForReady(true))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_DeliverTx{DeliverTx: res}})
|
||||
}
|
||||
|
||||
// NOTE: call is synchronous, use ctx to break early if needed
|
||||
func (cli *grpcClient) CheckTxAsync(ctx context.Context, params types.RequestCheckTx) (*ReqRes, error) {
|
||||
req := types.ToRequestCheckTx(params)
|
||||
res, err := cli.client.CheckTx(ctx, req.GetCheckTx(), grpc.WaitForReady(true))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return cli.finishAsyncCall(ctx, req, &types.Response{Value: &types.Response_CheckTx{CheckTx: res}})
|
||||
}
|
||||
|
||||
// finishAsyncCall creates a ReqRes for an async call, and immediately populates it
|
||||
// with the response. We don't complete it until it's been ordered via the channel.
|
||||
func (cli *grpcClient) finishAsyncCall(ctx context.Context, req *types.Request, res *types.Response) (*ReqRes, error) {
|
||||
reqres := NewReqRes(req)
|
||||
reqres.Response = res
|
||||
select {
|
||||
case cli.chReqRes <- reqres: // use channel for async responses, since they must be ordered
|
||||
return reqres, nil
|
||||
case <-ctx.Done():
|
||||
return nil, ctx.Err()
|
||||
}
|
||||
}
|
||||
|
||||
// finishSyncCall waits for an async call to complete. It is necessary to call all
|
||||
// sync calls asynchronously as well, to maintain call and response ordering via
|
||||
// the channel, and this method will wait until the async call completes.
|
||||
func (cli *grpcClient) finishSyncCall(reqres *ReqRes) *types.Response {
|
||||
// It's possible that the callback is called twice, since the callback can
|
||||
// be called immediately on SetCallback() in addition to after it has been
|
||||
// set. This is because completing the ReqRes happens in a separate critical
|
||||
// section from the one where the callback is called: there is a race where
|
||||
// SetCallback() is called between completing the ReqRes and dispatching the
|
||||
// callback.
|
||||
//
|
||||
// We also buffer the channel with 1 response, since SetCallback() will be
|
||||
// called synchronously if the reqres is already completed, in which case
|
||||
// it will block on sending to the channel since it hasn't gotten around to
|
||||
// receiving from it yet.
|
||||
//
|
||||
// ReqRes should really handle callback dispatch internally, to guarantee
|
||||
// that it's only called once and avoid the above race conditions.
|
||||
var once sync.Once
|
||||
ch := make(chan *types.Response, 1)
|
||||
reqres.SetCallback(func(res *types.Response) {
|
||||
once.Do(func() {
|
||||
ch <- res
|
||||
})
|
||||
})
|
||||
return <-ch
|
||||
}
|
||||
|
||||
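The long comment in `finishSyncCall` above describes delivering a result exactly once even though the callback may fire twice. The `sync.Once` plus buffered-channel idea it relies on can be shown with just the standard library (names invented):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var once sync.Once
	// Buffer of 1 so an immediate (synchronous) callback can deliver its value
	// without blocking before anyone is receiving yet.
	ch := make(chan string, 1)

	callback := func(res string) {
		// The callback may be invoked more than once (e.g. once when it is
		// registered and once when the request completes); only the first
		// invocation is allowed to deliver a value.
		once.Do(func() { ch <- res })
	}

	callback("first")
	callback("second") // ignored: once.Do already ran

	fmt.Println(<-ch) // prints "first"
}
```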
//----------------------------------------
|
||||
|
||||
func (cli *grpcClient) Flush(ctx context.Context) error { return nil }
|
||||
|
||||
func (cli *grpcClient) Echo(ctx context.Context, msg string) (*types.ResponseEcho, error) {
|
||||
req := types.ToRequestEcho(msg)
|
||||
return cli.client.Echo(ctx, req.GetEcho(), grpc.WaitForReady(true))
|
||||
return cli.client.Echo(ctx, types.ToRequestEcho(msg).GetEcho(), grpc.WaitForReady(true))
|
||||
}
|
||||
|
||||
func (cli *grpcClient) Info(
|
||||
ctx context.Context,
|
||||
params types.RequestInfo,
|
||||
) (*types.ResponseInfo, error) {
|
||||
req := types.ToRequestInfo(params)
|
||||
return cli.client.Info(ctx, req.GetInfo(), grpc.WaitForReady(true))
|
||||
func (cli *grpcClient) Info(ctx context.Context, params types.RequestInfo) (*types.ResponseInfo, error) {
|
||||
return cli.client.Info(ctx, types.ToRequestInfo(params).GetInfo(), grpc.WaitForReady(true))
|
||||
}
|
||||
|
||||
func (cli *grpcClient) DeliverTx(
|
||||
ctx context.Context,
|
||||
params types.RequestDeliverTx,
|
||||
) (*types.ResponseDeliverTx, error) {
|
||||
|
||||
reqres, err := cli.DeliverTxAsync(ctx, params)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return cli.finishSyncCall(reqres).GetDeliverTx(), cli.Error()
|
||||
func (cli *grpcClient) CheckTx(ctx context.Context, params types.RequestCheckTx) (*types.ResponseCheckTx, error) {
|
||||
return cli.client.CheckTx(ctx, types.ToRequestCheckTx(params).GetCheckTx(), grpc.WaitForReady(true))
|
||||
}
|
||||
|
||||
func (cli *grpcClient) CheckTx(
|
||||
ctx context.Context,
|
||||
params types.RequestCheckTx,
|
||||
) (*types.ResponseCheckTx, error) {
|
||||
|
||||
reqres, err := cli.CheckTxAsync(ctx, params)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return cli.finishSyncCall(reqres).GetCheckTx(), cli.Error()
|
||||
}
|
||||
|
||||
func (cli *grpcClient) Query(
|
||||
ctx context.Context,
|
||||
params types.RequestQuery,
|
||||
) (*types.ResponseQuery, error) {
|
||||
req := types.ToRequestQuery(params)
|
||||
return cli.client.Query(ctx, req.GetQuery(), grpc.WaitForReady(true))
|
||||
func (cli *grpcClient) Query(ctx context.Context, params types.RequestQuery) (*types.ResponseQuery, error) {
|
||||
return cli.client.Query(ctx, types.ToRequestQuery(params).GetQuery(), grpc.WaitForReady(true))
|
||||
}
|
||||
|
||||
func (cli *grpcClient) Commit(ctx context.Context) (*types.ResponseCommit, error) {
|
||||
req := types.ToRequestCommit()
|
||||
return cli.client.Commit(ctx, req.GetCommit(), grpc.WaitForReady(true))
|
||||
return cli.client.Commit(ctx, types.ToRequestCommit().GetCommit(), grpc.WaitForReady(true))
|
||||
}
|
||||
|
||||
func (cli *grpcClient) InitChain(
|
||||
ctx context.Context,
|
||||
params types.RequestInitChain,
|
||||
) (*types.ResponseInitChain, error) {
|
||||
|
||||
req := types.ToRequestInitChain(params)
|
||||
return cli.client.InitChain(ctx, req.GetInitChain(), grpc.WaitForReady(true))
|
||||
func (cli *grpcClient) InitChain(ctx context.Context, params types.RequestInitChain) (*types.ResponseInitChain, error) {
|
||||
return cli.client.InitChain(ctx, types.ToRequestInitChain(params).GetInitChain(), grpc.WaitForReady(true))
|
||||
}
|
||||
|
||||
func (cli *grpcClient) BeginBlock(
|
||||
ctx context.Context,
|
||||
params types.RequestBeginBlock,
|
||||
) (*types.ResponseBeginBlock, error) {
|
||||
|
||||
req := types.ToRequestBeginBlock(params)
|
||||
return cli.client.BeginBlock(ctx, req.GetBeginBlock(), grpc.WaitForReady(true))
|
||||
func (cli *grpcClient) ListSnapshots(ctx context.Context, params types.RequestListSnapshots) (*types.ResponseListSnapshots, error) {
|
||||
return cli.client.ListSnapshots(ctx, types.ToRequestListSnapshots(params).GetListSnapshots(), grpc.WaitForReady(true))
|
||||
}
|
||||
|
||||
func (cli *grpcClient) EndBlock(
|
||||
ctx context.Context,
|
||||
params types.RequestEndBlock,
|
||||
) (*types.ResponseEndBlock, error) {
|
||||
|
||||
req := types.ToRequestEndBlock(params)
|
||||
return cli.client.EndBlock(ctx, req.GetEndBlock(), grpc.WaitForReady(true))
|
||||
func (cli *grpcClient) OfferSnapshot(ctx context.Context, params types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error) {
|
||||
return cli.client.OfferSnapshot(ctx, types.ToRequestOfferSnapshot(params).GetOfferSnapshot(), grpc.WaitForReady(true))
|
||||
}
|
||||
|
||||
func (cli *grpcClient) ListSnapshots(
|
||||
ctx context.Context,
|
||||
params types.RequestListSnapshots,
|
||||
) (*types.ResponseListSnapshots, error) {
|
||||
|
||||
req := types.ToRequestListSnapshots(params)
|
||||
return cli.client.ListSnapshots(ctx, req.GetListSnapshots(), grpc.WaitForReady(true))
|
||||
func (cli *grpcClient) LoadSnapshotChunk(ctx context.Context, params types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
|
||||
return cli.client.LoadSnapshotChunk(ctx, types.ToRequestLoadSnapshotChunk(params).GetLoadSnapshotChunk(), grpc.WaitForReady(true))
|
||||
}
|
||||
|
||||
func (cli *grpcClient) OfferSnapshot(
|
||||
ctx context.Context,
|
||||
params types.RequestOfferSnapshot,
|
||||
) (*types.ResponseOfferSnapshot, error) {
|
||||
|
||||
req := types.ToRequestOfferSnapshot(params)
|
||||
return cli.client.OfferSnapshot(ctx, req.GetOfferSnapshot(), grpc.WaitForReady(true))
|
||||
func (cli *grpcClient) ApplySnapshotChunk(ctx context.Context, params types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
|
||||
return cli.client.ApplySnapshotChunk(ctx, types.ToRequestApplySnapshotChunk(params).GetApplySnapshotChunk(), grpc.WaitForReady(true))
|
||||
}
|
||||
|
||||
func (cli *grpcClient) LoadSnapshotChunk(
|
||||
ctx context.Context,
|
||||
params types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
|
||||
|
||||
req := types.ToRequestLoadSnapshotChunk(params)
|
||||
return cli.client.LoadSnapshotChunk(ctx, req.GetLoadSnapshotChunk(), grpc.WaitForReady(true))
|
||||
func (cli *grpcClient) PrepareProposal(ctx context.Context, params types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
|
||||
return cli.client.PrepareProposal(ctx, types.ToRequestPrepareProposal(params).GetPrepareProposal(), grpc.WaitForReady(true))
|
||||
}
|
||||
|
||||
func (cli *grpcClient) ApplySnapshotChunk(
|
||||
ctx context.Context,
|
||||
params types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
|
||||
|
||||
req := types.ToRequestApplySnapshotChunk(params)
|
||||
return cli.client.ApplySnapshotChunk(ctx, req.GetApplySnapshotChunk(), grpc.WaitForReady(true))
|
||||
func (cli *grpcClient) ProcessProposal(ctx context.Context, params types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
|
||||
return cli.client.ProcessProposal(ctx, types.ToRequestProcessProposal(params).GetProcessProposal(), grpc.WaitForReady(true))
|
||||
}
|
||||
|
||||
func (cli *grpcClient) ExtendVote(ctx context.Context, params types.RequestExtendVote) (*types.ResponseExtendVote, error) {
|
||||
return cli.client.ExtendVote(ctx, types.ToRequestExtendVote(params).GetExtendVote(), grpc.WaitForReady(true))
|
||||
}
|
||||
|
||||
func (cli *grpcClient) VerifyVoteExtension(ctx context.Context, params types.RequestVerifyVoteExtension) (*types.ResponseVerifyVoteExtension, error) {
|
||||
return cli.client.VerifyVoteExtension(ctx, types.ToRequestVerifyVoteExtension(params).GetVerifyVoteExtension(), grpc.WaitForReady(true))
|
||||
}
|
||||
|
||||
func (cli *grpcClient) FinalizeBlock(ctx context.Context, params types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error) {
|
||||
return cli.client.FinalizeBlock(ctx, types.ToRequestFinalizeBlock(params).GetFinalizeBlock(), grpc.WaitForReady(true))
|
||||
}
|
||||
|
||||
@@ -2,7 +2,6 @@ package abciclient

import (
	"context"
	"sync"

	types "github.com/tendermint/tendermint/abci/types"
	"github.com/tendermint/tendermint/libs/log"
@@ -15,10 +14,7 @@ import (
// RPC endpoint), but defers are used everywhere for the sake of consistency.
type localClient struct {
	service.BaseService

	mtx *sync.Mutex
	types.Application
	Callback
}

var _ Client = (*localClient)(nil)
@@ -26,13 +22,9 @@ var _ Client = (*localClient)(nil)
// NewLocalClient creates a local client, which will be directly calling the
// methods of the given app.
//
// Both Async and Sync methods ignore the given context.Context parameter.
func NewLocalClient(logger log.Logger, mtx *sync.Mutex, app types.Application) Client {
	if mtx == nil {
		mtx = new(sync.Mutex)
	}
// The client methods ignore their context argument.
func NewLocalClient(logger log.Logger, app types.Application) Client {
	cli := &localClient{
		mtx: mtx,
		Application: app,
	}
	cli.BaseService = *service.NewBaseService(logger, "localClient", cli)
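The reworked local client is essentially a thin adapter: no shared mutex, the context is ignored, the application is called directly, and a pointer to the local result copy is returned. A generic standard-library sketch of that adapter shape (interface and names invented for illustration):

```go
package main

import (
	"context"
	"fmt"
	"strings"
)

// app stands in for types.Application: value in, value out, no context.
type app interface {
	Upper(req string) string
}

type myApp struct{}

func (myApp) Upper(req string) string { return strings.ToUpper(req) }

// localAdapter mirrors the new localClient: it ignores ctx and simply forwards
// to the embedded application, returning a pointer to the local result copy.
type localAdapter struct {
	app
}

func (l localAdapter) Upper(_ context.Context, req string) (*string, error) {
	res := l.app.Upper(req)
	return &res, nil
}

func main() {
	client := localAdapter{app: myApp{}}
	out, err := client.Upper(context.Background(), "ping")
	if err != nil {
		panic(err)
	}
	fmt.Println(*out)
}
```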
@@ -41,197 +33,82 @@ func NewLocalClient(logger log.Logger, mtx *sync.Mutex, app types.Application) C
|
||||
|
||||
func (*localClient) OnStart(context.Context) error { return nil }
|
||||
func (*localClient) OnStop() {}
|
||||
|
||||
func (app *localClient) SetResponseCallback(cb Callback) {
|
||||
app.mtx.Lock()
|
||||
defer app.mtx.Unlock()
|
||||
app.Callback = cb
|
||||
}
|
||||
|
||||
// TODO: change types.Application to include Error()?
|
||||
func (app *localClient) Error() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (app *localClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
|
||||
// Do nothing
|
||||
return newLocalReqRes(types.ToRequestFlush(), nil), nil
|
||||
}
|
||||
|
||||
func (app *localClient) DeliverTxAsync(ctx context.Context, params types.RequestDeliverTx) (*ReqRes, error) {
|
||||
app.mtx.Lock()
|
||||
defer app.mtx.Unlock()
|
||||
|
||||
res := app.Application.DeliverTx(params)
|
||||
return app.callback(
|
||||
types.ToRequestDeliverTx(params),
|
||||
types.ToResponseDeliverTx(res),
|
||||
), nil
|
||||
}
|
||||
|
||||
func (app *localClient) CheckTxAsync(ctx context.Context, req types.RequestCheckTx) (*ReqRes, error) {
|
||||
app.mtx.Lock()
|
||||
defer app.mtx.Unlock()
|
||||
|
||||
res := app.Application.CheckTx(req)
|
||||
return app.callback(
|
||||
types.ToRequestCheckTx(req),
|
||||
types.ToResponseCheckTx(res),
|
||||
), nil
|
||||
}
|
||||
func (*localClient) Error() error { return nil }
|
||||
|
||||
//-------------------------------------------------------
|
||||
|
||||
func (app *localClient) Flush(ctx context.Context) error {
|
||||
return nil
|
||||
}
|
||||
func (*localClient) Flush(context.Context) error { return nil }
|
||||
|
||||
func (app *localClient) Echo(ctx context.Context, msg string) (*types.ResponseEcho, error) {
|
||||
func (app *localClient) Echo(_ context.Context, msg string) (*types.ResponseEcho, error) {
|
||||
return &types.ResponseEcho{Message: msg}, nil
|
||||
}
|
||||
|
||||
func (app *localClient) Info(ctx context.Context, req types.RequestInfo) (*types.ResponseInfo, error) {
|
||||
app.mtx.Lock()
|
||||
defer app.mtx.Unlock()
|
||||
|
||||
res := app.Application.Info(req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) DeliverTx(
|
||||
ctx context.Context,
|
||||
req types.RequestDeliverTx,
|
||||
) (*types.ResponseDeliverTx, error) {
|
||||
|
||||
app.mtx.Lock()
|
||||
defer app.mtx.Unlock()
|
||||
|
||||
res := app.Application.DeliverTx(req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) CheckTx(
|
||||
ctx context.Context,
|
||||
req types.RequestCheckTx,
|
||||
) (*types.ResponseCheckTx, error) {
|
||||
app.mtx.Lock()
|
||||
defer app.mtx.Unlock()
|
||||
|
||||
func (app *localClient) CheckTx(_ context.Context, req types.RequestCheckTx) (*types.ResponseCheckTx, error) {
|
||||
res := app.Application.CheckTx(req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) Query(
|
||||
ctx context.Context,
|
||||
req types.RequestQuery,
|
||||
) (*types.ResponseQuery, error) {
|
||||
app.mtx.Lock()
|
||||
defer app.mtx.Unlock()
|
||||
|
||||
func (app *localClient) Query(_ context.Context, req types.RequestQuery) (*types.ResponseQuery, error) {
|
||||
res := app.Application.Query(req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) Commit(ctx context.Context) (*types.ResponseCommit, error) {
|
||||
app.mtx.Lock()
|
||||
defer app.mtx.Unlock()
|
||||
|
||||
res := app.Application.Commit()
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) InitChain(
|
||||
ctx context.Context,
|
||||
req types.RequestInitChain,
|
||||
) (*types.ResponseInitChain, error) {
|
||||
|
||||
app.mtx.Lock()
|
||||
defer app.mtx.Unlock()
|
||||
|
||||
func (app *localClient) InitChain(_ context.Context, req types.RequestInitChain) (*types.ResponseInitChain, error) {
|
||||
res := app.Application.InitChain(req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) BeginBlock(
|
||||
ctx context.Context,
|
||||
req types.RequestBeginBlock,
|
||||
) (*types.ResponseBeginBlock, error) {
|
||||
|
||||
app.mtx.Lock()
|
||||
defer app.mtx.Unlock()
|
||||
|
||||
res := app.Application.BeginBlock(req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) EndBlock(
|
||||
ctx context.Context,
|
||||
req types.RequestEndBlock,
|
||||
) (*types.ResponseEndBlock, error) {
|
||||
|
||||
app.mtx.Lock()
|
||||
defer app.mtx.Unlock()
|
||||
|
||||
res := app.Application.EndBlock(req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) ListSnapshots(
|
||||
ctx context.Context,
|
||||
req types.RequestListSnapshots,
|
||||
) (*types.ResponseListSnapshots, error) {
|
||||
|
||||
app.mtx.Lock()
|
||||
defer app.mtx.Unlock()
|
||||
|
||||
func (app *localClient) ListSnapshots(_ context.Context, req types.RequestListSnapshots) (*types.ResponseListSnapshots, error) {
|
||||
res := app.Application.ListSnapshots(req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) OfferSnapshot(
|
||||
ctx context.Context,
|
||||
req types.RequestOfferSnapshot,
|
||||
) (*types.ResponseOfferSnapshot, error) {
|
||||
|
||||
app.mtx.Lock()
|
||||
defer app.mtx.Unlock()
|
||||
|
||||
func (app *localClient) OfferSnapshot(_ context.Context, req types.RequestOfferSnapshot) (*types.ResponseOfferSnapshot, error) {
|
||||
res := app.Application.OfferSnapshot(req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) LoadSnapshotChunk(
|
||||
ctx context.Context,
|
||||
req types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
|
||||
|
||||
app.mtx.Lock()
|
||||
defer app.mtx.Unlock()
|
||||
|
||||
func (app *localClient) LoadSnapshotChunk(_ context.Context, req types.RequestLoadSnapshotChunk) (*types.ResponseLoadSnapshotChunk, error) {
|
||||
res := app.Application.LoadSnapshotChunk(req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) ApplySnapshotChunk(
|
||||
ctx context.Context,
|
||||
req types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
|
||||
|
||||
app.mtx.Lock()
|
||||
defer app.mtx.Unlock()
|
||||
|
||||
func (app *localClient) ApplySnapshotChunk(_ context.Context, req types.RequestApplySnapshotChunk) (*types.ResponseApplySnapshotChunk, error) {
|
||||
res := app.Application.ApplySnapshotChunk(req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
//-------------------------------------------------------
|
||||
|
||||
func (app *localClient) callback(req *types.Request, res *types.Response) *ReqRes {
|
||||
app.Callback(req, res)
|
||||
return newLocalReqRes(req, res)
|
||||
func (app *localClient) PrepareProposal(_ context.Context, req types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
|
||||
res := app.Application.PrepareProposal(req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func newLocalReqRes(req *types.Request, res *types.Response) *ReqRes {
|
||||
reqRes := NewReqRes(req)
|
||||
reqRes.Response = res
|
||||
reqRes.SetDone()
|
||||
return reqRes
|
||||
func (app *localClient) ProcessProposal(_ context.Context, req types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
|
||||
res := app.Application.ProcessProposal(req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) ExtendVote(_ context.Context, req types.RequestExtendVote) (*types.ResponseExtendVote, error) {
|
||||
res := app.Application.ExtendVote(req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) VerifyVoteExtension(_ context.Context, req types.RequestVerifyVoteExtension) (*types.ResponseVerifyVoteExtension, error) {
|
||||
res := app.Application.VerifyVoteExtension(req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
func (app *localClient) FinalizeBlock(_ context.Context, req types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error) {
|
||||
res := app.Application.FinalizeBlock(req)
|
||||
return &res, nil
|
||||
}
|
||||
|
||||
@@ -5,10 +5,7 @@ package mocks
|
||||
import (
|
||||
context "context"
|
||||
|
||||
abciclient "github.com/tendermint/tendermint/abci/client"
|
||||
|
||||
mock "github.com/stretchr/testify/mock"
|
||||
|
||||
types "github.com/tendermint/tendermint/abci/types"
|
||||
)
|
||||
|
||||
@@ -40,29 +37,6 @@ func (_m *Client) ApplySnapshotChunk(_a0 context.Context, _a1 types.RequestApply
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// BeginBlock provides a mock function with given fields: _a0, _a1
|
||||
func (_m *Client) BeginBlock(_a0 context.Context, _a1 types.RequestBeginBlock) (*types.ResponseBeginBlock, error) {
|
||||
ret := _m.Called(_a0, _a1)
|
||||
|
||||
var r0 *types.ResponseBeginBlock
|
||||
if rf, ok := ret.Get(0).(func(context.Context, types.RequestBeginBlock) *types.ResponseBeginBlock); ok {
|
||||
r0 = rf(_a0, _a1)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(*types.ResponseBeginBlock)
|
||||
}
|
||||
}
|
||||
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(1).(func(context.Context, types.RequestBeginBlock) error); ok {
|
||||
r1 = rf(_a0, _a1)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// CheckTx provides a mock function with given fields: _a0, _a1
|
||||
func (_m *Client) CheckTx(_a0 context.Context, _a1 types.RequestCheckTx) (*types.ResponseCheckTx, error) {
|
||||
ret := _m.Called(_a0, _a1)
|
||||
@@ -86,29 +60,6 @@ func (_m *Client) CheckTx(_a0 context.Context, _a1 types.RequestCheckTx) (*types
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// CheckTxAsync provides a mock function with given fields: _a0, _a1
|
||||
func (_m *Client) CheckTxAsync(_a0 context.Context, _a1 types.RequestCheckTx) (*abciclient.ReqRes, error) {
|
||||
ret := _m.Called(_a0, _a1)
|
||||
|
||||
var r0 *abciclient.ReqRes
|
||||
if rf, ok := ret.Get(0).(func(context.Context, types.RequestCheckTx) *abciclient.ReqRes); ok {
|
||||
r0 = rf(_a0, _a1)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(*abciclient.ReqRes)
|
||||
}
|
||||
}
|
||||
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(1).(func(context.Context, types.RequestCheckTx) error); ok {
|
||||
r1 = rf(_a0, _a1)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// Commit provides a mock function with given fields: _a0
|
||||
func (_m *Client) Commit(_a0 context.Context) (*types.ResponseCommit, error) {
|
||||
ret := _m.Called(_a0)
|
||||
@@ -132,52 +83,6 @@ func (_m *Client) Commit(_a0 context.Context) (*types.ResponseCommit, error) {
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// DeliverTx provides a mock function with given fields: _a0, _a1
|
||||
func (_m *Client) DeliverTx(_a0 context.Context, _a1 types.RequestDeliverTx) (*types.ResponseDeliverTx, error) {
|
||||
ret := _m.Called(_a0, _a1)
|
||||
|
||||
var r0 *types.ResponseDeliverTx
|
||||
if rf, ok := ret.Get(0).(func(context.Context, types.RequestDeliverTx) *types.ResponseDeliverTx); ok {
|
||||
r0 = rf(_a0, _a1)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(*types.ResponseDeliverTx)
|
||||
}
|
||||
}
|
||||
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(1).(func(context.Context, types.RequestDeliverTx) error); ok {
|
||||
r1 = rf(_a0, _a1)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// DeliverTxAsync provides a mock function with given fields: _a0, _a1
|
||||
func (_m *Client) DeliverTxAsync(_a0 context.Context, _a1 types.RequestDeliverTx) (*abciclient.ReqRes, error) {
|
||||
ret := _m.Called(_a0, _a1)
|
||||
|
||||
var r0 *abciclient.ReqRes
|
||||
if rf, ok := ret.Get(0).(func(context.Context, types.RequestDeliverTx) *abciclient.ReqRes); ok {
|
||||
r0 = rf(_a0, _a1)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(*abciclient.ReqRes)
|
||||
}
|
||||
}
|
||||
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(1).(func(context.Context, types.RequestDeliverTx) error); ok {
|
||||
r1 = rf(_a0, _a1)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// Echo provides a mock function with given fields: ctx, msg
|
||||
func (_m *Client) Echo(ctx context.Context, msg string) (*types.ResponseEcho, error) {
|
||||
ret := _m.Called(ctx, msg)
|
||||
@@ -201,29 +106,6 @@ func (_m *Client) Echo(ctx context.Context, msg string) (*types.ResponseEcho, er
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// EndBlock provides a mock function with given fields: _a0, _a1
|
||||
func (_m *Client) EndBlock(_a0 context.Context, _a1 types.RequestEndBlock) (*types.ResponseEndBlock, error) {
|
||||
ret := _m.Called(_a0, _a1)
|
||||
|
||||
var r0 *types.ResponseEndBlock
|
||||
if rf, ok := ret.Get(0).(func(context.Context, types.RequestEndBlock) *types.ResponseEndBlock); ok {
|
||||
r0 = rf(_a0, _a1)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(*types.ResponseEndBlock)
|
||||
}
|
||||
}
|
||||
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(1).(func(context.Context, types.RequestEndBlock) error); ok {
|
||||
r1 = rf(_a0, _a1)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// Error provides a mock function with given fields:
|
||||
func (_m *Client) Error() error {
|
||||
ret := _m.Called()
|
||||
@@ -238,6 +120,52 @@ func (_m *Client) Error() error {
|
||||
return r0
|
||||
}
|
||||
|
||||
// ExtendVote provides a mock function with given fields: _a0, _a1
|
||||
func (_m *Client) ExtendVote(_a0 context.Context, _a1 types.RequestExtendVote) (*types.ResponseExtendVote, error) {
|
||||
ret := _m.Called(_a0, _a1)
|
||||
|
||||
var r0 *types.ResponseExtendVote
|
||||
if rf, ok := ret.Get(0).(func(context.Context, types.RequestExtendVote) *types.ResponseExtendVote); ok {
|
||||
r0 = rf(_a0, _a1)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(*types.ResponseExtendVote)
|
||||
}
|
||||
}
|
||||
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(1).(func(context.Context, types.RequestExtendVote) error); ok {
|
||||
r1 = rf(_a0, _a1)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// FinalizeBlock provides a mock function with given fields: _a0, _a1
|
||||
func (_m *Client) FinalizeBlock(_a0 context.Context, _a1 types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error) {
|
||||
ret := _m.Called(_a0, _a1)
|
||||
|
||||
var r0 *types.ResponseFinalizeBlock
|
||||
if rf, ok := ret.Get(0).(func(context.Context, types.RequestFinalizeBlock) *types.ResponseFinalizeBlock); ok {
|
||||
r0 = rf(_a0, _a1)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(*types.ResponseFinalizeBlock)
|
||||
}
|
||||
}
|
||||
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(1).(func(context.Context, types.RequestFinalizeBlock) error); ok {
|
||||
r1 = rf(_a0, _a1)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// Flush provides a mock function with given fields: _a0
|
||||
func (_m *Client) Flush(_a0 context.Context) error {
|
||||
ret := _m.Called(_a0)
|
||||
@@ -252,29 +180,6 @@ func (_m *Client) Flush(_a0 context.Context) error {
|
||||
return r0
|
||||
}
|
||||
|
||||
// FlushAsync provides a mock function with given fields: _a0
|
||||
func (_m *Client) FlushAsync(_a0 context.Context) (*abciclient.ReqRes, error) {
|
||||
ret := _m.Called(_a0)
|
||||
|
||||
var r0 *abciclient.ReqRes
|
||||
if rf, ok := ret.Get(0).(func(context.Context) *abciclient.ReqRes); ok {
|
||||
r0 = rf(_a0)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(*abciclient.ReqRes)
|
||||
}
|
||||
}
|
||||
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(1).(func(context.Context) error); ok {
|
||||
r1 = rf(_a0)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// Info provides a mock function with given fields: _a0, _a1
|
||||
func (_m *Client) Info(_a0 context.Context, _a1 types.RequestInfo) (*types.ResponseInfo, error) {
|
||||
ret := _m.Called(_a0, _a1)
|
||||
@@ -404,6 +309,52 @@ func (_m *Client) OfferSnapshot(_a0 context.Context, _a1 types.RequestOfferSnaps
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// PrepareProposal provides a mock function with given fields: _a0, _a1
|
||||
func (_m *Client) PrepareProposal(_a0 context.Context, _a1 types.RequestPrepareProposal) (*types.ResponsePrepareProposal, error) {
|
||||
ret := _m.Called(_a0, _a1)
|
||||
|
||||
var r0 *types.ResponsePrepareProposal
|
||||
if rf, ok := ret.Get(0).(func(context.Context, types.RequestPrepareProposal) *types.ResponsePrepareProposal); ok {
|
||||
r0 = rf(_a0, _a1)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(*types.ResponsePrepareProposal)
|
||||
}
|
||||
}
|
||||
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(1).(func(context.Context, types.RequestPrepareProposal) error); ok {
|
||||
r1 = rf(_a0, _a1)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// ProcessProposal provides a mock function with given fields: _a0, _a1
|
||||
func (_m *Client) ProcessProposal(_a0 context.Context, _a1 types.RequestProcessProposal) (*types.ResponseProcessProposal, error) {
|
||||
ret := _m.Called(_a0, _a1)
|
||||
|
||||
var r0 *types.ResponseProcessProposal
|
||||
if rf, ok := ret.Get(0).(func(context.Context, types.RequestProcessProposal) *types.ResponseProcessProposal); ok {
|
||||
r0 = rf(_a0, _a1)
|
||||
} else {
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(*types.ResponseProcessProposal)
|
||||
}
|
||||
}
|
||||
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(1).(func(context.Context, types.RequestProcessProposal) error); ok {
|
||||
r1 = rf(_a0, _a1)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// Query provides a mock function with given fields: _a0, _a1
|
||||
func (_m *Client) Query(_a0 context.Context, _a1 types.RequestQuery) (*types.ResponseQuery, error) {
|
||||
ret := _m.Called(_a0, _a1)
|
||||
@@ -427,11 +378,6 @@ func (_m *Client) Query(_a0 context.Context, _a1 types.RequestQuery) (*types.Res
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// SetResponseCallback provides a mock function with given fields: _a0
|
||||
func (_m *Client) SetResponseCallback(_a0 abciclient.Callback) {
|
||||
_m.Called(_a0)
|
||||
}
|
||||
|
||||
// Start provides a mock function with given fields: _a0
|
||||
func (_m *Client) Start(_a0 context.Context) error {
|
||||
ret := _m.Called(_a0)
|
||||
@@ -446,18 +392,27 @@ func (_m *Client) Start(_a0 context.Context) error {
|
||||
return r0
|
||||
}
|
||||
|
||||
// String provides a mock function with given fields:
|
||||
func (_m *Client) String() string {
|
||||
ret := _m.Called()
|
||||
// VerifyVoteExtension provides a mock function with given fields: _a0, _a1
|
||||
func (_m *Client) VerifyVoteExtension(_a0 context.Context, _a1 types.RequestVerifyVoteExtension) (*types.ResponseVerifyVoteExtension, error) {
|
||||
ret := _m.Called(_a0, _a1)
|
||||
|
||||
var r0 string
|
||||
if rf, ok := ret.Get(0).(func() string); ok {
|
||||
r0 = rf()
|
||||
var r0 *types.ResponseVerifyVoteExtension
|
||||
if rf, ok := ret.Get(0).(func(context.Context, types.RequestVerifyVoteExtension) *types.ResponseVerifyVoteExtension); ok {
|
||||
r0 = rf(_a0, _a1)
|
||||
} else {
|
||||
r0 = ret.Get(0).(string)
|
||||
if ret.Get(0) != nil {
|
||||
r0 = ret.Get(0).(*types.ResponseVerifyVoteExtension)
|
||||
}
|
||||
}
|
||||
|
||||
return r0
|
||||
var r1 error
|
||||
if rf, ok := ret.Get(1).(func(context.Context, types.RequestVerifyVoteExtension) error); ok {
|
||||
r1 = rf(_a0, _a1)
|
||||
} else {
|
||||
r1 = ret.Error(1)
|
||||
}
|
||||
|
||||
return r0, r1
|
||||
}
|
||||
|
||||
// Wait provides a mock function with given fields:
|
||||
|
||||
@@ -8,7 +8,6 @@ import (
|
||||
"fmt"
|
||||
"io"
|
||||
"net"
|
||||
"reflect"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
@@ -24,11 +23,6 @@ const (
|
||||
reqQueueSize = 256
|
||||
)
|
||||
|
||||
type reqResWithContext struct {
|
||||
R *ReqRes
|
||||
C context.Context // if context.Err is not nil, reqRes will be thrown away (ignored)
|
||||
}
|
||||
|
||||
// This is goroutine-safe, but users should beware that the application in
|
||||
// general is not meant to be interfaced with concurrent callers.
|
||||
type socketClient struct {
|
||||
@@ -39,12 +33,11 @@ type socketClient struct {
|
||||
mustConnect bool
|
||||
conn net.Conn
|
||||
|
||||
reqQueue chan *reqResWithContext
|
||||
reqQueue chan *requestAndResponse
|
||||
|
||||
mtx sync.Mutex
|
||||
err error
|
||||
reqSent *list.List // list of requests sent, waiting for response
|
||||
resCb func(*types.Request, *types.Response) // called on all requests, if set.
|
||||
reqSent *list.List // list of requests sent, waiting for response
|
||||
}
|
||||
|
||||
var _ Client = (*socketClient)(nil)
|
||||
@@ -55,11 +48,10 @@ var _ Client = (*socketClient)(nil)
|
||||
func NewSocketClient(logger log.Logger, addr string, mustConnect bool) Client {
|
||||
cli := &socketClient{
|
||||
logger: logger,
|
||||
reqQueue: make(chan *reqResWithContext, reqQueueSize),
|
||||
reqQueue: make(chan *requestAndResponse, reqQueueSize),
|
||||
mustConnect: mustConnect,
|
||||
addr: addr,
|
||||
reqSent: list.New(),
|
||||
resCb: nil,
|
||||
}
|
||||
cli.BaseService = *service.NewBaseService(logger, "socketClient", cli)
|
||||
return cli
|
||||
@@ -99,7 +91,10 @@ func (cli *socketClient) OnStop() {
|
||||
cli.conn.Close()
|
||||
}
|
||||
|
||||
cli.drainQueue()
|
||||
// this timeout is arbitrary.
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
|
||||
defer cancel()
|
||||
cli.drainQueue(ctx)
|
||||
}
|
||||
|
||||
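`OnStop` now bounds the queue drain with a context instead of draining indefinitely. The general shutdown shape, using only the standard library, might look like this (names are illustrative, and the diff itself calls the 5-second timeout arbitrary):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// drainQueue discards pending work until the queue is empty or ctx expires.
func drainQueue(ctx context.Context, queue <-chan string) {
	for {
		select {
		case item, ok := <-queue:
			if !ok {
				return // queue closed and fully drained
			}
			fmt.Println("dropping pending request:", item)
		case <-ctx.Done():
			fmt.Println("gave up draining:", ctx.Err())
			return
		}
	}
}

func main() {
	queue := make(chan string, 3)
	queue <- "echo"
	queue <- "info"
	close(queue)

	// Mirror of OnStop: bound the drain with a timeout so shutdown cannot hang.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	drainQueue(ctx, queue)
}
```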
// Error returns an error if the client was stopped abruptly.
|
||||
@@ -109,16 +104,6 @@ func (cli *socketClient) Error() error {
|
||||
return cli.err
|
||||
}
|
||||
|
||||
// SetResponseCallback sets a callback, which will be executed for each
|
||||
// non-error & non-empty response from the server.
|
||||
//
|
||||
// NOTE: callback may get internally generated flush responses.
|
||||
func (cli *socketClient) SetResponseCallback(resCb Callback) {
|
||||
cli.mtx.Lock()
|
||||
defer cli.mtx.Unlock()
|
||||
cli.resCb = resCb
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
func (cli *socketClient) sendRequestsRoutine(ctx context.Context, conn io.Writer) {
|
||||
@@ -132,16 +117,13 @@ func (cli *socketClient) sendRequestsRoutine(ctx context.Context, conn io.Writer
|
||||
return
|
||||
}
|
||||
|
||||
if reqres.C.Err() != nil {
|
||||
cli.logger.Debug("Request's context is done", "req", reqres.R, "err", reqres.C.Err())
|
||||
continue
|
||||
}
|
||||
cli.willSendReq(reqres.R)
|
||||
cli.willSendReq(reqres)
|
||||
|
||||
if err := types.WriteMessage(reqres.R.Request, bw); err != nil {
|
||||
if err := types.WriteMessage(reqres.Request, bw); err != nil {
|
||||
cli.stopForError(fmt.Errorf("write to buffer: %w", err))
|
||||
return
|
||||
}
|
||||
|
||||
if err := bw.Flush(); err != nil {
|
||||
cli.stopForError(fmt.Errorf("flush buffer: %w", err))
|
||||
return
|
||||
@@ -156,23 +138,20 @@ func (cli *socketClient) recvResponseRoutine(ctx context.Context, conn io.Reader
|
||||
if ctx.Err() != nil {
|
||||
return
|
||||
}
|
||||
var res = &types.Response{}
|
||||
err := types.ReadMessage(r, res)
|
||||
if err != nil {
|
||||
res := &types.Response{}
|
||||
|
||||
if err := types.ReadMessage(r, res); err != nil {
|
||||
cli.stopForError(fmt.Errorf("read message: %w", err))
|
||||
return
|
||||
}
|
||||
|
||||
// cli.logger.Debug("Received response", "responseType", reflect.TypeOf(res), "response", res)
|
||||
|
||||
switch r := res.Value.(type) {
|
||||
case *types.Response_Exception: // app responded with error
|
||||
// XXX After setting cli.err, release waiters (e.g. reqres.Done())
|
||||
cli.stopForError(errors.New(r.Exception.Error))
|
||||
return
|
||||
default:
|
||||
err := cli.didRecvResponse(res)
|
||||
if err != nil {
|
||||
if err := cli.didRecvResponse(res); err != nil {
|
||||
cli.stopForError(err)
|
||||
return
|
||||
}
|
||||
@@ -180,7 +159,7 @@ func (cli *socketClient) recvResponseRoutine(ctx context.Context, conn io.Reader
|
||||
}
|
||||
}
|
||||
|
||||
func (cli *socketClient) willSendReq(reqres *ReqRes) {
|
||||
func (cli *socketClient) willSendReq(reqres *requestAndResponse) {
|
||||
cli.mtx.Lock()
|
||||
defer cli.mtx.Unlock()
|
||||
cli.reqSent.PushBack(reqres)
|
||||
@@ -193,291 +172,184 @@ func (cli *socketClient) didRecvResponse(res *types.Response) error {
|
||||
// Get the first ReqRes.
|
||||
next := cli.reqSent.Front()
|
||||
if next == nil {
|
||||
return fmt.Errorf("unexpected %v when nothing expected", reflect.TypeOf(res.Value))
|
||||
return fmt.Errorf("unexpected %T when nothing expected", res.Value)
|
||||
}
|
||||
|
||||
reqres := next.Value.(*ReqRes)
|
||||
reqres := next.Value.(*requestAndResponse)
|
||||
if !resMatchesReq(reqres.Request, res) {
|
||||
return fmt.Errorf("unexpected %v when response to %v expected",
|
||||
reflect.TypeOf(res.Value), reflect.TypeOf(reqres.Request.Value))
|
||||
return fmt.Errorf("unexpected %T when response to %T expected", res.Value, reqres.Request.Value)
|
||||
}
|
||||
|
||||
reqres.Response = res
|
||||
reqres.Done() // release waiters
|
||||
reqres.markDone() // release waiters
|
||||
cli.reqSent.Remove(next) // pop first item from linked list
|
||||
|
||||
// Notify client listener if set (global callback).
|
||||
if cli.resCb != nil {
|
||||
cli.resCb(reqres.Request, res)
|
||||
}
|
||||
|
||||
// Notify reqRes listener if set (request specific callback).
|
||||
//
|
||||
// NOTE: It is possible this callback isn't set on the reqres object. At this
|
||||
// point, in which case it will be called after, when it is set.
|
||||
reqres.InvokeCallback()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
func (cli *socketClient) FlushAsync(ctx context.Context) (*ReqRes, error) {
|
||||
return cli.queueRequestAsync(ctx, types.ToRequestFlush())
|
||||
}
|
||||
|
||||
func (cli *socketClient) DeliverTxAsync(ctx context.Context, req types.RequestDeliverTx) (*ReqRes, error) {
|
||||
return cli.queueRequestAsync(ctx, types.ToRequestDeliverTx(req))
|
||||
}
|
||||
|
||||
func (cli *socketClient) CheckTxAsync(ctx context.Context, req types.RequestCheckTx) (*ReqRes, error) {
|
||||
return cli.queueRequestAsync(ctx, types.ToRequestCheckTx(req))
|
||||
}
|
||||
|
||||
//----------------------------------------
|
||||
|
||||
func (cli *socketClient) Flush(ctx context.Context) error {
|
||||
reqRes, err := cli.queueRequest(ctx, types.ToRequestFlush(), true)
|
||||
_, err := cli.doRequest(ctx, types.ToRequestFlush())
|
||||
if err != nil {
|
||||
return queueErr(err)
|
||||
}
|
||||
|
||||
if err := cli.Error(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
gotResp := make(chan struct{})
|
||||
go func() {
|
||||
// NOTE: if we don't flush the queue, its possible to get stuck here
|
||||
reqRes.Wait()
|
||||
close(gotResp)
|
||||
}()
|
||||
|
||||
select {
|
||||
case <-gotResp:
|
||||
return cli.Error()
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) Echo(ctx context.Context, msg string) (*types.ResponseEcho, error) {
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestEcho(msg))
|
||||
res, err := cli.doRequest(ctx, types.ToRequestEcho(msg))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetEcho(), nil
|
||||
return res.GetEcho(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) Info(
|
||||
ctx context.Context,
|
||||
req types.RequestInfo,
|
||||
) (*types.ResponseInfo, error) {
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestInfo(req))
|
||||
func (cli *socketClient) Info(ctx context.Context, req types.RequestInfo) (*types.ResponseInfo, error) {
|
||||
res, err := cli.doRequest(ctx, types.ToRequestInfo(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetInfo(), nil
|
||||
return res.GetInfo(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) DeliverTx(
|
||||
ctx context.Context,
|
||||
req types.RequestDeliverTx,
|
||||
) (*types.ResponseDeliverTx, error) {
|
||||
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestDeliverTx(req))
|
||||
func (cli *socketClient) CheckTx(ctx context.Context, req types.RequestCheckTx) (*types.ResponseCheckTx, error) {
|
||||
res, err := cli.doRequest(ctx, types.ToRequestCheckTx(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetDeliverTx(), nil
|
||||
return res.GetCheckTx(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) CheckTx(
|
||||
ctx context.Context,
|
||||
req types.RequestCheckTx,
|
||||
) (*types.ResponseCheckTx, error) {
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestCheckTx(req))
|
||||
func (cli *socketClient) Query(ctx context.Context, req types.RequestQuery) (*types.ResponseQuery, error) {
|
||||
res, err := cli.doRequest(ctx, types.ToRequestQuery(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetCheckTx(), nil
|
||||
}
|
||||
|
||||
func (cli *socketClient) Query(
|
||||
ctx context.Context,
|
||||
req types.RequestQuery,
|
||||
) (*types.ResponseQuery, error) {
|
||||
reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestQuery(req))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return reqres.Response.GetQuery(), nil
|
||||
	return res.GetQuery(), nil
}

func (cli *socketClient) Commit(ctx context.Context) (*types.ResponseCommit, error) {
-	reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestCommit())
+	res, err := cli.doRequest(ctx, types.ToRequestCommit())
	if err != nil {
		return nil, err
	}
-	return reqres.Response.GetCommit(), nil
+	return res.GetCommit(), nil
}

-func (cli *socketClient) InitChain(
-	ctx context.Context,
-	req types.RequestInitChain,
-) (*types.ResponseInitChain, error) {
-	reqres, err := cli.queueRequestAndFlush(ctx, types.ToRequestInitChain(req))
+func (cli *socketClient) InitChain(ctx context.Context, req types.RequestInitChain) (*types.ResponseInitChain, error) {
+	res, err := cli.doRequest(ctx, types.ToRequestInitChain(req))
	if err != nil {
		return nil, err
	}
-	return reqres.Response.GetInitChain(), nil
+	return res.GetInitChain(), nil
}

[The remaining handlers follow the same rewrite: the BeginBlock and EndBlock wrappers are removed, the ListSnapshots, OfferSnapshot, LoadSnapshotChunk and ApplySnapshotChunk wrappers switch from queueRequestAndFlush / reqres.Response.GetX() to doRequest / res.GetX() with single-line signatures, and PrepareProposal and ProcessProposal wrappers are added in the same form.]

+func (cli *socketClient) ExtendVote(ctx context.Context, req types.RequestExtendVote) (*types.ResponseExtendVote, error) {
+	res, err := cli.doRequest(ctx, types.ToRequestExtendVote(req))
+	if err != nil {
+		return nil, err
+	}
+	return res.GetExtendVote(), nil
+}
+
+func (cli *socketClient) VerifyVoteExtension(ctx context.Context, req types.RequestVerifyVoteExtension) (*types.ResponseVerifyVoteExtension, error) {
+	res, err := cli.doRequest(ctx, types.ToRequestVerifyVoteExtension(req))
+	if err != nil {
+		return nil, err
+	}
+	return res.GetVerifyVoteExtension(), nil
+}
+
+func (cli *socketClient) FinalizeBlock(ctx context.Context, req types.RequestFinalizeBlock) (*types.ResponseFinalizeBlock, error) {
+	res, err := cli.doRequest(ctx, types.ToRequestFinalizeBlock(req))
+	if err != nil {
+		return nil, err
+	}
+	return res.GetFinalizeBlock(), nil
+}
//----------------------------------------

-// queueRequest enqueues req onto the queue. If the queue is full, it either
-// returns an error (sync=false) or blocks (sync=true).
-//
-// When sync=true, ctx can be used to break early. When sync=false, ctx will be
-// used later to determine if the request should be dropped (if ctx.Err is
-// non-nil).
-//
-// The caller is responsible for checking cli.Error.
-func (cli *socketClient) queueRequest(ctx context.Context, req *types.Request, sync bool) (*ReqRes, error) {
-	reqres := NewReqRes(req)
-
-	if sync {
-		select {
-		case cli.reqQueue <- &reqResWithContext{R: reqres, C: context.Background()}:
-		case <-ctx.Done():
-			return nil, ctx.Err()
-		}
-	} else {
-		select {
-		case cli.reqQueue <- &reqResWithContext{R: reqres, C: ctx}:
-		default:
-			return nil, errors.New("buffer is full")
-		}
-	}
-
-	return reqres, nil
-}
+// doRequest enqueues req, then blocks until either the matching response has
+// arrived or ctx is canceled. The caller is responsible for checking cli.Error.
+func (cli *socketClient) doRequest(ctx context.Context, req *types.Request) (*types.Response, error) {
+	reqres := makeReqRes(req)
+
+	select {
+	case cli.reqQueue <- reqres:
+	case <-ctx.Done():
+		return nil, fmt.Errorf("can't queue req: %w", ctx.Err())
+	}
+
+	select {
+	case <-reqres.signal:
+		if err := cli.Error(); err != nil {
+			return nil, err
+		}
+		return reqres.Response, nil
+	case <-ctx.Done():
+		return nil, ctx.Err()
+	}
+}
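The shape of doRequest is a generic "enqueue, then wait for completion or cancellation" pattern. Below is a minimal, standard-library-only sketch of that pattern with hypothetical names (pending, queue, do); it is an illustration of the idea, not the client's actual types.

package main

import (
	"context"
	"fmt"
	"time"
)

// pending pairs a request with a channel that is closed once its response is set.
type pending struct {
	req  string
	resp string
	done chan struct{}
}

// do mirrors the doRequest shape: enqueue, then wait for the signal or ctx.Done().
func do(ctx context.Context, queue chan *pending, req string) (string, error) {
	p := &pending{req: req, done: make(chan struct{})}

	select {
	case queue <- p:
	case <-ctx.Done():
		return "", fmt.Errorf("can't queue req: %w", ctx.Err())
	}

	select {
	case <-p.done:
		return p.resp, nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

func main() {
	queue := make(chan *pending, 1)

	// Stand-in for the client's send/receive loops: echo the request back.
	go func() {
		for p := range queue {
			p.resp = "echo: " + p.req
			close(p.done)
		}
	}()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	resp, err := do(ctx, queue, "hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(resp) // echo: hello
}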
-func (cli *socketClient) queueRequestAsync(
-	ctx context.Context,
-	req *types.Request,
-) (*ReqRes, error) {
-	reqres, err := cli.queueRequest(ctx, req, false)
-	if err != nil {
-		return nil, queueErr(err)
-	}
-
-	return reqres, cli.Error()
-}
-
-func (cli *socketClient) queueRequestAndFlush(
-	ctx context.Context,
-	req *types.Request,
-) (*ReqRes, error) {
-	reqres, err := cli.queueRequest(ctx, req, true)
-	if err != nil {
-		return nil, queueErr(err)
-	}
-
-	if err := cli.Flush(ctx); err != nil {
-		return nil, err
-	}
-
-	return reqres, cli.Error()
-}
-
-func queueErr(e error) error {
-	return fmt.Errorf("can't queue req: %w", e)
-}
// drainQueue marks as complete and discards all remaining pending requests
// from the queue.
-func (cli *socketClient) drainQueue() {
+func (cli *socketClient) drainQueue(ctx context.Context) {
	cli.mtx.Lock()
	defer cli.mtx.Unlock()

	// mark all in-flight messages as resolved (they will get cli.Error())
	for req := cli.reqSent.Front(); req != nil; req = req.Next() {
-		reqres := req.Value.(*ReqRes)
-		reqres.Done()
+		reqres := req.Value.(*requestAndResponse)
+		reqres.markDone()
	}

	// Mark all queued messages as resolved.
@@ -487,8 +359,10 @@ func (cli *socketClient) drainQueue() {
	// See https://github.com/tendermint/tendermint/issues/6996.
	for {
		select {
+		case <-ctx.Done():
+			return
		case reqres := <-cli.reqQueue:
-			reqres.R.Done()
+			reqres.markDone()
		default:
			return
		}

@@ -505,8 +379,6 @@ func resMatchesReq(req *types.Request, res *types.Response) (ok bool) {
		_, ok = res.Value.(*types.Response_Flush)
	case *types.Request_Info:
		_, ok = res.Value.(*types.Response_Info)
-	case *types.Request_DeliverTx:
-		_, ok = res.Value.(*types.Response_DeliverTx)
	case *types.Request_CheckTx:
		_, ok = res.Value.(*types.Response_CheckTx)
	case *types.Request_Commit:
@@ -515,10 +387,14 @@ func resMatchesReq(req *types.Request, res *types.Response) (ok bool) {
		_, ok = res.Value.(*types.Response_Query)
	case *types.Request_InitChain:
		_, ok = res.Value.(*types.Response_InitChain)
-	case *types.Request_BeginBlock:
-		_, ok = res.Value.(*types.Response_BeginBlock)
-	case *types.Request_EndBlock:
-		_, ok = res.Value.(*types.Response_EndBlock)
+	case *types.Request_ProcessProposal:
+		_, ok = res.Value.(*types.Response_ProcessProposal)
+	case *types.Request_PrepareProposal:
+		_, ok = res.Value.(*types.Response_PrepareProposal)
+	case *types.Request_ExtendVote:
+		_, ok = res.Value.(*types.Response_ExtendVote)
+	case *types.Request_VerifyVoteExtension:
+		_, ok = res.Value.(*types.Response_VerifyVoteExtension)
	case *types.Request_ApplySnapshotChunk:
		_, ok = res.Value.(*types.Response_ApplySnapshotChunk)
	case *types.Request_LoadSnapshotChunk:
@@ -527,6 +403,8 @@ func resMatchesReq(req *types.Request, res *types.Response) (ok bool) {
		_, ok = res.Value.(*types.Response_ListSnapshots)
	case *types.Request_OfferSnapshot:
		_, ok = res.Value.(*types.Response_OfferSnapshot)
+	case *types.Request_FinalizeBlock:
+		_, ok = res.Value.(*types.Response_FinalizeBlock)
	}
	return ok
}
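resMatchesReq pairs each request kind with its expected response kind via a type switch over the generated oneof wrappers. A minimal, self-contained sketch of that matching idea, with hypothetical Request/Response types standing in for the generated ones:

package main

import "fmt"

type Request interface{ isRequest() }
type Response interface{ isResponse() }

type RequestEcho struct{ Msg string }
type RequestInfo struct{}
type ResponseEcho struct{ Msg string }
type ResponseInfo struct{ Version string }

func (RequestEcho) isRequest()    {}
func (RequestInfo) isRequest()    {}
func (ResponseEcho) isResponse()  {}
func (ResponseInfo) isResponse()  {}

// matches reports whether res is the response kind expected for req.
func matches(req Request, res Response) (ok bool) {
	switch req.(type) {
	case RequestEcho:
		_, ok = res.(ResponseEcho)
	case RequestInfo:
		_, ok = res.(ResponseInfo)
	}
	return ok
}

func main() {
	fmt.Println(matches(RequestEcho{"hi"}, ResponseEcho{"hi"})) // true
	fmt.Println(matches(RequestEcho{"hi"}, ResponseInfo{}))     // false
}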
@@ -541,7 +419,5 @@ func (cli *socketClient) stopForError(err error) {
	cli.mtx.Unlock()

	cli.logger.Info("Stopping abci.socketClient", "reason", err)
-	if err := cli.Stop(); err != nil {
-		cli.logger.Error("error stopping abci.socketClient", "err", err)
-	}
+	cli.Stop()
}
@@ -1,85 +0,0 @@
[The test file for package abciclient_test is deleted in its entirety. It contained TestProperSyncCalls, which started a socket server around a slowApp (a types.BaseApplication whose BeginBlock slept 200ms), called c.BeginBlock followed by c.Flush, and asserted that a non-nil response and a nil c.Error() arrived within one second; plus the setupClientServer helper, which wired abciclient.NewSocketClient to server.NewServer on a random localhost port between 20k and 30k and registered Start/Wait cleanups with require/assert from testify.]
@@ -28,7 +28,6 @@ import (
// client is a global variable so it can be reused by the console
var (
	client abciclient.Client
-	logger log.Logger
)

// flags
@@ -48,34 +47,32 @@ var (
	flagPersist string
)

-var RootCmd = &cobra.Command{
-	Use:   "abci-cli",
-	Short: "the ABCI CLI tool wraps an ABCI client",
-	Long:  "the ABCI CLI tool wraps an ABCI client and is used for testing ABCI servers",
-	PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
+func RootCmmand(logger log.Logger) *cobra.Command {
+	return &cobra.Command{
+		Use:   "abci-cli",
+		Short: "the ABCI CLI tool wraps an ABCI client",
+		Long:  "the ABCI CLI tool wraps an ABCI client and is used for testing ABCI servers",
+		PersistentPreRunE: func(cmd *cobra.Command, args []string) (err error) {
			switch cmd.Use {
			case "kvstore", "version":
				return nil
			}

-			if logger == nil {
-				logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
-			}
-
			if client == nil {
				var err error
				client, err = abciclient.NewClient(logger.With("module", "abci-client"), flagAddress, flagAbci, false)
				if err != nil {
					return err
				}

				if err := client.Start(cmd.Context()); err != nil {
					return err
				}
			}
			return nil
		},
	}
}

// Structure for data passed to print response.
@@ -97,56 +94,46 @@ type queryResponse struct {
}

func Execute() error {
-	addGlobalFlags()
-	addCommands()
-	return RootCmd.Execute()
+	logger, err := log.NewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
+	if err != nil {
+		return err
+	}
+
+	cmd := RootCmmand(logger)
+	addGlobalFlags(cmd)
+	addCommands(cmd, logger)
+	return cmd.Execute()
}

-func addGlobalFlags() {
-	RootCmd.PersistentFlags().StringVarP(&flagAddress,
+func addGlobalFlags(cmd *cobra.Command) {
+	cmd.PersistentFlags().StringVarP(&flagAddress,
		"address",
		"",
		"tcp://0.0.0.0:26658",
		"address of application socket")
-	RootCmd.PersistentFlags().StringVarP(&flagAbci, "abci", "", "socket", "either socket or grpc")
-	RootCmd.PersistentFlags().BoolVarP(&flagVerbose,
+	cmd.PersistentFlags().StringVarP(&flagAbci, "abci", "", "socket", "either socket or grpc")
+	cmd.PersistentFlags().BoolVarP(&flagVerbose,
		"verbose",
		"v",
		false,
		"print the command and results as if it were a console session")
-	RootCmd.PersistentFlags().StringVarP(&flagLogLevel, "log_level", "", "debug", "set the logger level")
+	cmd.PersistentFlags().StringVarP(&flagLogLevel, "log_level", "", "debug", "set the logger level")
}

-func addQueryFlags() {
-	queryCmd.PersistentFlags().StringVarP(&flagPath, "path", "", "/store", "path to prefix query with")
-	queryCmd.PersistentFlags().IntVarP(&flagHeight, "height", "", 0, "height to query the blockchain at")
-	queryCmd.PersistentFlags().BoolVarP(&flagProve, "prove", "", false,
-		"whether or not to return a merkle proof of the query result")
-}
-
-func addKVStoreFlags() {
-	kvstoreCmd.PersistentFlags().StringVarP(&flagPersist, "persist", "", "", "directory to use for a database")
-}
-
-func addCommands() {
-	RootCmd.AddCommand(batchCmd)
-	RootCmd.AddCommand(consoleCmd)
-	RootCmd.AddCommand(echoCmd)
-	RootCmd.AddCommand(infoCmd)
-	RootCmd.AddCommand(deliverTxCmd)
-	RootCmd.AddCommand(checkTxCmd)
-	RootCmd.AddCommand(commitCmd)
-	RootCmd.AddCommand(versionCmd)
-	RootCmd.AddCommand(testCmd)
-	addQueryFlags()
-	RootCmd.AddCommand(queryCmd)
+func addCommands(cmd *cobra.Command, logger log.Logger) {
+	cmd.AddCommand(batchCmd)
+	cmd.AddCommand(consoleCmd)
+	cmd.AddCommand(echoCmd)
+	cmd.AddCommand(infoCmd)
+	cmd.AddCommand(finalizeBlockCmd)
+	cmd.AddCommand(checkTxCmd)
+	cmd.AddCommand(commitCmd)
+	cmd.AddCommand(versionCmd)
+	cmd.AddCommand(testCmd)
+	cmd.AddCommand(getQueryCmd())

	// examples
-	addKVStoreFlags()
-	RootCmd.AddCommand(kvstoreCmd)
+	cmd.AddCommand(getKVStoreCmd(logger))
}
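The refactor above replaces a package-level RootCmd with a constructor that receives its dependencies (a logger) and returns a wired command tree. A small sketch of that constructor pattern using spf13/cobra, with illustrative names (newRootCmd, echo) rather than the abci-cli's:

package main

import (
	"fmt"
	"log"
	"os"

	"github.com/spf13/cobra"
)

var flagAddress string

// newRootCmd mirrors the pattern above: the caller supplies dependencies and
// gets back a fully wired *cobra.Command instead of mutating package globals.
func newRootCmd(logger *log.Logger) *cobra.Command {
	root := &cobra.Command{
		Use:   "demo-cli",
		Short: "a tiny cobra command tree built by a constructor function",
	}
	root.PersistentFlags().StringVarP(&flagAddress, "address", "", "tcp://0.0.0.0:26658", "address of application socket")

	echo := &cobra.Command{
		Use:   "echo",
		Short: "print the arguments back",
		RunE: func(cmd *cobra.Command, args []string) error {
			logger.Println("running echo")
			fmt.Println(args)
			return nil
		},
	}
	root.AddCommand(echo)
	return root
}

func main() {
	logger := log.New(os.Stderr, "demo-cli ", log.LstdFlags)
	if err := newRootCmd(logger).Execute(); err != nil {
		os.Exit(1)
	}
}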
var batchCmd = &cobra.Command{
@@ -163,10 +150,9 @@ where example.file looks something like:

	check_tx 0x00
	check_tx 0xff
-	deliver_tx 0x00
+	finalize_block 0x00
	check_tx 0x00
-	deliver_tx 0x01
-	deliver_tx 0x04
+	finalize_block 0x01 0x04 0xff
	info
`,
	Args: cobra.ExactArgs(0),
@@ -182,7 +168,7 @@ This command opens an interactive console for running any of the other commands
without opening a new connection each time
`,
	Args:      cobra.ExactArgs(0),
-	ValidArgs: []string{"echo", "info", "deliver_tx", "check_tx", "commit", "query"},
+	ValidArgs: []string{"echo", "info", "finalize_block", "check_tx", "commit", "query"},
	RunE:      cmdConsole,
}

@@ -201,12 +187,12 @@ var infoCmd = &cobra.Command{
	RunE:  cmdInfo,
}

-var deliverTxCmd = &cobra.Command{
-	Use:   "deliver_tx",
-	Short: "deliver a new transaction to the application",
-	Long:  "deliver a new transaction to the application",
-	Args:  cobra.ExactArgs(1),
-	RunE:  cmdDeliverTx,
+var finalizeBlockCmd = &cobra.Command{
+	Use:   "finalize_block",
+	Short: "deliver a block of transactions to the application",
+	Long:  "deliver a block of transactions to the application",
+	Args:  cobra.MinimumNArgs(1),
+	RunE:  cmdFinalizeBlock,
}

var checkTxCmd = &cobra.Command{
@@ -236,20 +222,38 @@ var versionCmd = &cobra.Command{
	},
}

-var queryCmd = &cobra.Command{
-	Use:   "query",
-	Short: "query the application state",
-	Long:  "query the application state",
-	Args:  cobra.ExactArgs(1),
-	RunE:  cmdQuery,
+func getQueryCmd() *cobra.Command {
+	cmd := &cobra.Command{
+		Use:   "query",
+		Short: "query the application state",
+		Long:  "query the application state",
+		Args:  cobra.ExactArgs(1),
+		RunE:  cmdQuery,
+	}
+
+	cmd.PersistentFlags().StringVarP(&flagPath, "path", "", "/store", "path to prefix query with")
+	cmd.PersistentFlags().IntVarP(&flagHeight, "height", "", 0, "height to query the blockchain at")
+	cmd.PersistentFlags().BoolVarP(&flagProve,
+		"prove",
+		"",
+		false,
+		"whether or not to return a merkle proof of the query result")
+
+	return cmd
}

-var kvstoreCmd = &cobra.Command{
-	Use:   "kvstore",
-	Short: "ABCI demo example",
-	Long:  "ABCI demo example",
-	Args:  cobra.ExactArgs(0),
-	RunE:  cmdKVStore,
+func getKVStoreCmd(logger log.Logger) *cobra.Command {
+	cmd := &cobra.Command{
+		Use:   "kvstore",
+		Short: "ABCI demo example",
+		Long:  "ABCI demo example",
+		Args:  cobra.ExactArgs(0),
+		RunE:  makeKVStoreCmd(logger),
+	}
+
+	cmd.PersistentFlags().StringVarP(&flagPersist, "persist", "", "", "directory to use for a database")
+	return cmd
}

var testCmd = &cobra.Command{
@@ -295,17 +299,38 @@ func cmdTest(cmd *cobra.Command, args []string) error {
		[]func() error{
			func() error { return servertest.InitChain(ctx, client) },
			func() error { return servertest.Commit(ctx, client, nil) },
[The eight servertest.DeliverTx steps that followed (txs "abc", 0x00, 0x00, 0x01, 0x0002, 0x0003, 0x000004 and 0x000006, each with its expected CodeTypeOK or CodeTypeBadNonce result) are replaced by servertest.FinalizeBlock steps that submit the same transactions in blocks:]
+			func() error {
+				return servertest.FinalizeBlock(ctx, client, [][]byte{
+					[]byte("abc"),
+				}, []uint32{
+					code.CodeTypeBadNonce,
+				}, nil)
+			},
+			func() error { return servertest.Commit(ctx, client, nil) },
+			func() error {
+				return servertest.FinalizeBlock(ctx, client, [][]byte{
+					{0x00},
+				}, []uint32{
+					code.CodeTypeOK,
+				}, nil)
+			},
+			func() error { return servertest.Commit(ctx, client, []byte{0, 0, 0, 0, 0, 0, 0, 1}) },
+			func() error {
+				return servertest.FinalizeBlock(ctx, client, [][]byte{
+					{0x00}, {0x01}, {0x00, 0x02}, {0x00, 0x03}, {0x00, 0x00, 0x04}, {0x00, 0x00, 0x06},
+				}, []uint32{
+					code.CodeTypeBadNonce, code.CodeTypeOK, code.CodeTypeOK, code.CodeTypeOK, code.CodeTypeOK, code.CodeTypeBadNonce,
+				}, nil)
+			},
			func() error { return servertest.Commit(ctx, client, []byte{0, 0, 0, 0, 0, 0, 0, 5}) },
		})
@@ -400,8 +425,8 @@ func muxOnCommands(cmd *cobra.Command, pArgs []string) error {
		return cmdCheckTx(cmd, actualArgs)
	case "commit":
		return cmdCommit(cmd, actualArgs)
-	case "deliver_tx":
-		return cmdDeliverTx(cmd, actualArgs)
+	case "finalize_block":
+		return cmdFinalizeBlock(cmd, actualArgs)
	case "echo":
		return cmdEcho(cmd, actualArgs)
	case "info":
@@ -425,12 +450,9 @@ func cmdUnimplemented(cmd *cobra.Command, args []string) error {
	})

	fmt.Println("Available commands:")
-	fmt.Printf("%s: %s\n", echoCmd.Use, echoCmd.Short)
-	fmt.Printf("%s: %s\n", infoCmd.Use, infoCmd.Short)
-	fmt.Printf("%s: %s\n", checkTxCmd.Use, checkTxCmd.Short)
-	fmt.Printf("%s: %s\n", deliverTxCmd.Use, deliverTxCmd.Short)
-	fmt.Printf("%s: %s\n", queryCmd.Use, queryCmd.Short)
-	fmt.Printf("%s: %s\n", commitCmd.Use, commitCmd.Short)
+	for _, cmd := range cmd.Commands() {
+		fmt.Printf("%s: %s\n", cmd.Use, cmd.Short)
+	}
	fmt.Println("Use \"[command] --help\" for more information about a command.")

	return nil
@@ -473,28 +495,34 @@ func cmdInfo(cmd *cobra.Command, args []string) error {
const codeBad uint32 = 10

// Append a new tx to application
-func cmdDeliverTx(cmd *cobra.Command, args []string) error {
+func cmdFinalizeBlock(cmd *cobra.Command, args []string) error {
	if len(args) == 0 {
		printResponse(cmd, args, response{
			Code: codeBad,
-			Log:  "want the tx",
+			Log:  "Must provide at least one transaction",
		})
		return nil
	}
-	txBytes, err := stringOrHexToBytes(args[0])
-	if err != nil {
-		return err
-	}
-	res, err := client.DeliverTx(cmd.Context(), types.RequestDeliverTx{Tx: txBytes})
+	txs := make([][]byte, len(args))
+	for i, arg := range args {
+		txBytes, err := stringOrHexToBytes(arg)
+		if err != nil {
+			return err
+		}
+		txs[i] = txBytes
+	}
+	res, err := client.FinalizeBlock(cmd.Context(), types.RequestFinalizeBlock{Txs: txs})
	if err != nil {
		return err
	}
-	printResponse(cmd, args, response{
-		Code: res.Code,
-		Data: res.Data,
-		Info: res.Info,
-		Log:  res.Log,
-	})
+	for _, tx := range res.TxResults {
+		printResponse(cmd, args, response{
+			Code: tx.Code,
+			Data: tx.Data,
+			Info: tx.Info,
+			Log:  tx.Log,
+		})
+	}
	return nil
}
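cmdFinalizeBlock turns every CLI argument into one transaction before building the block request. The sketch below is a standard-library stand-in for that loop; argToBytes is a hypothetical replacement for the CLI's stringOrHexToBytes helper, assuming the usual convention that "0x"-prefixed arguments are hex-decoded and everything else is taken verbatim.

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// argToBytes: "0x..." arguments are hex-decoded, everything else is used as raw bytes.
func argToBytes(arg string) ([]byte, error) {
	if strings.HasPrefix(arg, "0x") {
		return hex.DecodeString(strings.TrimPrefix(arg, "0x"))
	}
	return []byte(arg), nil
}

// argsToTxs converts every CLI argument into one transaction, mirroring how
// cmdFinalizeBlock assembles RequestFinalizeBlock.Txs.
func argsToTxs(args []string) ([][]byte, error) {
	txs := make([][]byte, len(args))
	for i, arg := range args {
		tx, err := argToBytes(arg)
		if err != nil {
			return nil, err
		}
		txs[i] = tx
	}
	return txs, nil
}

func main() {
	txs, err := argsToTxs([]string{"0x00", "0x01", "abc"})
	if err != nil {
		panic(err)
	}
	fmt.Println(txs) // [[0] [1] [97 98 99]]
}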
@@ -574,33 +602,34 @@ func cmdQuery(cmd *cobra.Command, args []string) error {
	return nil
}

-func cmdKVStore(cmd *cobra.Command, args []string) error {
-	logger := log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
-
+func makeKVStoreCmd(logger log.Logger) func(*cobra.Command, []string) error {
+	return func(cmd *cobra.Command, args []string) error {
		// Create the application - in memory or persisted to disk
		var app types.Application
		if flagPersist == "" {
			app = kvstore.NewApplication()
		} else {
			app = kvstore.NewPersistentKVStoreApplication(logger, flagPersist)
		}

		// Start the listener
		srv, err := server.NewServer(logger.With("module", "abci-server"), flagAddress, flagAbci, app)
		if err != nil {
			return err
		}

		ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
		defer cancel()

		if err := srv.Start(ctx); err != nil {
			return err
		}

		// Run forever.
		<-ctx.Done()
		return nil
	}
}

//--------------------------------------------------------------------------------
@@ -6,7 +6,6 @@ import (
	"math/rand"
	"net"
	"os"
-	"reflect"
	"testing"
	"time"

@@ -32,35 +31,35 @@ func init() {
func TestKVStore(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
-	logger := log.NewTestingLogger(t)
+	logger := log.NewNopLogger()

-	logger.Info("### Testing KVStore")
-	testStream(ctx, t, logger, kvstore.NewApplication())
+	t.Log("### Testing KVStore")
+	testBulk(ctx, t, logger, kvstore.NewApplication())
}

func TestBaseApp(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
-	logger := log.NewTestingLogger(t)
+	logger := log.NewNopLogger()

-	logger.Info("### Testing BaseApp")
-	testStream(ctx, t, logger, types.NewBaseApplication())
+	t.Log("### Testing BaseApp")
+	testBulk(ctx, t, logger, types.NewBaseApplication())
}

func TestGRPC(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

-	logger := log.NewTestingLogger(t)
+	logger := log.NewNopLogger()

-	logger.Info("### Testing GRPC")
+	t.Log("### Testing GRPC")
	testGRPCSync(ctx, t, logger, types.NewGRPCApplication(types.NewBaseApplication()))
}

-func testStream(ctx context.Context, t *testing.T, logger log.Logger, app types.Application) {
+func testBulk(ctx context.Context, t *testing.T, logger log.Logger, app types.Application) {
	t.Helper()

-	const numDeliverTxs = 20000
+	const numDeliverTxs = 700000
	socketFile := fmt.Sprintf("test-%08x.sock", rand.Int31n(1<<30))
	defer os.Remove(socketFile)
	socket := fmt.Sprintf("unix://%v", socketFile)
@@ -77,51 +76,22 @@ func testStream(ctx context.Context, t *testing.T, logger log.Logger, app types.
	err = client.Start(ctx)
	require.NoError(t, err)

[The SetResponseCallback-based harness is removed: the callback that counted Response_DeliverTx results, checked each code against code.CodeTypeOK, and closed a done channel after numDeliverTxs responses, together with the per-tx client.DeliverTxAsync loop and its periodic client.Flush calls, is replaced by a single bulk request:]

+	// Construct request
+	rfb := types.RequestFinalizeBlock{Txs: make([][]byte, numDeliverTxs)}
+	for counter := 0; counter < numDeliverTxs; counter++ {
+		rfb.Txs[counter] = []byte("test")
+	}
+	// Send bulk request
+	res, err := client.FinalizeBlock(ctx, rfb)
+	require.NoError(t, err)
+	require.Equal(t, numDeliverTxs, len(res.TxResults), "Number of txs doesn't match")
+	for _, tx := range res.TxResults {
+		require.Equal(t, tx.Code, code.CodeTypeOK, "Tx failed")
+	}

	// Send final flush message
-	_, err = client.FlushAsync(ctx)
+	err = client.Flush(ctx)
	require.NoError(t, err)
-
-	<-done
}

//-------------------------
@@ -133,7 +103,7 @@ func dialerFunc(ctx context.Context, addr string) (net.Conn, error) {

func testGRPCSync(ctx context.Context, t *testing.T, logger log.Logger, app types.ABCIApplicationServer) {
	t.Helper()
-	numDeliverTxs := 2000
+	numDeliverTxs := 680000
	socketFile := fmt.Sprintf("/tmp/test-%08x.sock", rand.Int31n(1<<30))
	defer os.Remove(socketFile)
	socket := fmt.Sprintf("unix://%v", socketFile)
@@ -142,7 +112,7 @@ func testGRPCSync(ctx context.Context, t *testing.T, logger log.Logger, app type
	server := abciserver.NewGRPCServer(logger.With("module", "abci-server"), socket, app)

	require.NoError(t, server.Start(ctx))
-	t.Cleanup(func() { server.Wait() })
+	t.Cleanup(server.Wait)

	// Connect to the socket
	conn, err := grpc.Dial(socket,
@@ -159,25 +129,17 @@ func testGRPCSync(ctx context.Context, t *testing.T, logger log.Logger, app type

	client := types.NewABCIApplicationClient(conn)

[The per-transaction gRPC DeliverTx loop and its response checks are likewise replaced with one bulk FinalizeBlock call:]

+	// Construct request
+	rfb := types.RequestFinalizeBlock{Txs: make([][]byte, numDeliverTxs)}
+	for counter := 0; counter < numDeliverTxs; counter++ {
+		rfb.Txs[counter] = []byte("test")
+	}

	// Send request
+	response, err := client.FinalizeBlock(ctx, &rfb)
+	require.NoError(t, err, "Error in GRPC FinalizeBlock")
+	require.Equal(t, numDeliverTxs, len(response.TxResults), "Number of txs returned via GRPC doesn't match")
+	for _, tx := range response.TxResults {
+		require.Equal(t, tx.Code, code.CodeTypeOK, "Tx failed")
+	}
}
@@ -4,7 +4,7 @@ There are two app's here: the KVStoreApplication and the PersistentKVStoreApplic

## KVStoreApplication

The KVStoreApplication is a simple merkle key-value store.
Transactions of the form `key=value` are stored as key-value pairs in the tree.
Transactions without an `=` sign set the value to the key.
The app has no replay protection (other than what the mempool provides).

@@ -12,7 +12,7 @@ The app has no replay protection (other than what the mempool provides).
## PersistentKVStoreApplication

The PersistentKVStoreApplication wraps the KVStoreApplication
and provides three additional features:

1) persistence of state across app restarts (using Tendermint's ABCI-Handshake mechanism)
2) validator set changes
@@ -27,4 +27,4 @@ Validator set changes are effected using the following transaction format:

where `pubkeyN` is a base64-encoded 32-byte ed25519 key and `powerN` is a new voting power for the validator with `pubkeyN` (possibly a new one).
To remove a validator from the validator set, set power to `0`.
There is no sybil protection against new validators joining.
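As a quick illustration of the `key=value` convention the README describes, here is a minimal, standard-library-only sketch of the parsing rule (a simplification of what the app does with bytes.Split):

package main

import (
	"fmt"
	"strings"
)

// parseTx applies the kvstore convention: "key=value" stores value under key;
// a transaction without "=" stores the transaction under itself.
func parseTx(tx string) (key, value string) {
	if k, v, ok := strings.Cut(tx, "="); ok {
		return k, v
	}
	return tx, tx
}

func main() {
	fmt.Println(parseTx("name=satoshi")) // name satoshi
	fmt.Println(parseTx("ping"))         // ping ping
}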
@@ -2,14 +2,21 @@ package kvstore

import (
	"bytes"
+	"encoding/base64"
	"encoding/binary"
	"encoding/json"
	"fmt"
+	"strconv"
+	"strings"
	"sync"

	dbm "github.com/tendermint/tm-db"

	"github.com/tendermint/tendermint/abci/example/code"
	"github.com/tendermint/tendermint/abci/types"
+	"github.com/tendermint/tendermint/crypto/encoding"
+	"github.com/tendermint/tendermint/libs/log"
+	cryptoproto "github.com/tendermint/tendermint/proto/tendermint/crypto"
	"github.com/tendermint/tendermint/version"
)

@@ -65,17 +72,41 @@ var _ types.Application = (*Application)(nil)

type Application struct {
	types.BaseApplication

	mu           sync.Mutex
	state        State
	RetainBlocks int64 // blocks to retain after commit (via ResponseCommit.RetainHeight)
	logger       log.Logger

	// validator set
	ValUpdates         []types.ValidatorUpdate
	valAddrToPubKeyMap map[string]cryptoproto.PublicKey
}

func NewApplication() *Application {
-	state := loadState(dbm.NewMemDB())
-	return &Application{state: state}
+	return &Application{
+		logger:             log.NewNopLogger(),
+		state:              loadState(dbm.NewMemDB()),
+		valAddrToPubKeyMap: make(map[string]cryptoproto.PublicKey),
+	}
}

+func (app *Application) InitChain(req types.RequestInitChain) types.ResponseInitChain {
+	app.mu.Lock()
+	defer app.mu.Unlock()
+
+	for _, v := range req.Validators {
+		r := app.updateValidator(v)
+		if r.IsErr() {
+			app.logger.Error("error updating validators", "r", r)
+			panic("problem updating validators")
+		}
+	}
+	return types.ResponseInitChain{}
+}
+
-func (app *Application) Info(req types.RequestInfo) (resInfo types.ResponseInfo) {
+func (app *Application) Info(req types.RequestInfo) types.ResponseInfo {
+	app.mu.Lock()
+	defer app.mu.Unlock()
+
	return types.ResponseInfo{
		Data:    fmt.Sprintf("{\"size\":%v}", app.state.Size),
		Version: version.ABCIVersion,
@@ -85,15 +116,26 @@ func (app *Application) Info(req types.RequestInfo) (resInfo types.ResponseInfo)
	}
}
-// tx is either "key=value" or just arbitrary bytes
-func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeliverTx {
-	var key, value string
+// tx is either "val:pubkey!power" or "key=value" or just arbitrary bytes
+func (app *Application) handleTx(tx []byte) *types.ExecTxResult {
+	// if it starts with "val:", update the validator set
+	// format is "val:pubkey!power"
+	if isValidatorTx(tx) {
+		// update validators in the merkle tree
+		// and in app.ValUpdates
+		return app.execValidatorTx(tx)
+	}
+
+	if isPrepareTx(tx) {
+		return app.execPrepareTx(tx)
+	}

-	parts := bytes.Split(req.Tx, []byte("="))
+	var key, value string
+	parts := bytes.Split(tx, []byte("="))
	if len(parts) == 2 {
		key, value = string(parts[0]), string(parts[1])
	} else {
-		key, value = string(req.Tx), string(req.Tx)
+		key, value = string(tx), string(tx)
	}

	err := app.state.db.Set(prefixKey([]byte(key)), []byte(value))
@@ -114,14 +156,56 @@ func (app *Application) DeliverTx(req types.RequestDeliverTx) types.ResponseDeli
		},
	}

-	return types.ResponseDeliverTx{Code: code.CodeTypeOK, Events: events}
+	return &types.ExecTxResult{Code: code.CodeTypeOK, Events: events}
}

-func (app *Application) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
+func (app *Application) Close() error {
+	app.mu.Lock()
+	defer app.mu.Unlock()
+
+	return app.state.db.Close()
+}
+
+func (app *Application) FinalizeBlock(req types.RequestFinalizeBlock) types.ResponseFinalizeBlock {
+	app.mu.Lock()
+	defer app.mu.Unlock()
+
+	// reset valset changes
+	app.ValUpdates = make([]types.ValidatorUpdate, 0)
+
+	// Punish validators who committed equivocation.
+	for _, ev := range req.ByzantineValidators {
+		if ev.Type == types.EvidenceType_DUPLICATE_VOTE {
+			addr := string(ev.Validator.Address)
+			if pubKey, ok := app.valAddrToPubKeyMap[addr]; ok {
+				app.updateValidator(types.ValidatorUpdate{
+					PubKey: pubKey,
+					Power:  ev.Validator.Power - 1,
+				})
+				app.logger.Info("Decreased val power by 1 because of the equivocation",
+					"val", addr)
+			} else {
+				panic(fmt.Errorf("wanted to punish val %q but can't find it", addr))
+			}
+		}
+	}
+
+	respTxs := make([]*types.ExecTxResult, len(req.Txs))
+	for i, tx := range req.Txs {
+		respTxs[i] = app.handleTx(tx)
+	}
+
+	return types.ResponseFinalizeBlock{TxResults: respTxs, ValidatorUpdates: app.ValUpdates}
+}
+
+func (*Application) CheckTx(req types.RequestCheckTx) types.ResponseCheckTx {
	return types.ResponseCheckTx{Code: code.CodeTypeOK, GasWanted: 1}
}
func (app *Application) Commit() types.ResponseCommit {
+	app.mu.Lock()
+	defer app.mu.Unlock()
+
	// Using a memdb - just return the big endian size of the db
	appHash := make([]byte, 8)
	binary.PutVarint(appHash, app.state.Size)
@@ -137,37 +221,239 @@ func (app *Application) Commit() types.ResponseCommit {
}

// Returns an associated value or nil if missing.
-func (app *Application) Query(reqQuery types.RequestQuery) (resQuery types.ResponseQuery) {
+func (app *Application) Query(reqQuery types.RequestQuery) types.ResponseQuery {
+	app.mu.Lock()
+	defer app.mu.Unlock()
+
+	if reqQuery.Path == "/val" {
+		key := []byte("val:" + string(reqQuery.Data))
+		value, err := app.state.db.Get(key)
+		if err != nil {
+			panic(err)
+		}
+
+		return types.ResponseQuery{
+			Key:   reqQuery.Data,
+			Value: value,
+		}
+	}
+
	if reqQuery.Prove {
		value, err := app.state.db.Get(prefixKey(reqQuery.Data))
		if err != nil {
			panic(err)
		}

+		resQuery := types.ResponseQuery{
+			Index:  -1,
+			Key:    reqQuery.Data,
+			Value:  value,
+			Height: app.state.Height,
+		}
+
		if value == nil {
			resQuery.Log = "does not exist"
		} else {
			resQuery.Log = "exists"
		}
-		resQuery.Index = -1 // TODO make Proof return index
-		resQuery.Key = reqQuery.Data
-		resQuery.Value = value
-		resQuery.Height = app.state.Height

-		return
+		return resQuery
	}

-	resQuery.Key = reqQuery.Data
	value, err := app.state.db.Get(prefixKey(reqQuery.Data))
	if err != nil {
		panic(err)
	}

+	resQuery := types.ResponseQuery{
+		Key:    reqQuery.Data,
+		Value:  value,
+		Height: app.state.Height,
+	}
+
	if value == nil {
		resQuery.Log = "does not exist"
	} else {
		resQuery.Log = "exists"
	}
-	resQuery.Value = value
-	resQuery.Height = app.state.Height

	return resQuery
}
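Commit derives the demo app's "app hash" from the store size, as shown above. Despite the comment, binary.PutVarint writes a varint rather than a fixed big-endian value; a tiny standalone sketch of exactly that encoding and its round-trip:

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// Varint-encode the store size into a fixed 8-byte buffer, as Commit does.
	size := int64(42)
	appHash := make([]byte, 8)
	n := binary.PutVarint(appHash, size)
	fmt.Println(appHash, "bytes used:", n)

	// Reading it back recovers the size.
	got, _ := binary.Varint(appHash)
	fmt.Println(got) // 42
}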
func (app *Application) PrepareProposal(req types.RequestPrepareProposal) types.ResponsePrepareProposal {
	app.mu.Lock()
	defer app.mu.Unlock()

	return types.ResponsePrepareProposal{TxRecords: app.substPrepareTx(req.Txs)}
}

func (*Application) ProcessProposal(req types.RequestProcessProposal) types.ResponseProcessProposal {
	for _, tx := range req.Txs {
		if len(tx) == 0 {
			return types.ResponseProcessProposal{Accept: false}
		}
	}
	return types.ResponseProcessProposal{Accept: true}
}

//---------------------------------------------
// update validators
func (app *Application) Validators() (validators []types.ValidatorUpdate) {
	app.mu.Lock()
	defer app.mu.Unlock()

	itr, err := app.state.db.Iterator(nil, nil)
	if err != nil {
		panic(err)
	}
	for ; itr.Valid(); itr.Next() {
		if isValidatorTx(itr.Key()) {
			validator := new(types.ValidatorUpdate)
			err := types.ReadMessage(bytes.NewBuffer(itr.Value()), validator)
			if err != nil {
				panic(err)
			}
			validators = append(validators, *validator)
		}
	}
	if err = itr.Error(); err != nil {
		panic(err)
	}
	return
}

func MakeValSetChangeTx(pubkey cryptoproto.PublicKey, power int64) []byte {
	pk, err := encoding.PubKeyFromProto(pubkey)
	if err != nil {
		panic(err)
	}
	pubStr := base64.StdEncoding.EncodeToString(pk.Bytes())
	return []byte(fmt.Sprintf("val:%s!%d", pubStr, power))
}

func isValidatorTx(tx []byte) bool {
	return strings.HasPrefix(string(tx), ValidatorSetChangePrefix)
}

// format is "val:pubkey!power"
// pubkey is a base64-encoded 32-byte ed25519 key
func (app *Application) execValidatorTx(tx []byte) *types.ExecTxResult {
	tx = tx[len(ValidatorSetChangePrefix):]

	// get the pubkey and power
	pubKeyAndPower := strings.Split(string(tx), "!")
	if len(pubKeyAndPower) != 2 {
		return &types.ExecTxResult{
			Code: code.CodeTypeEncodingError,
			Log:  fmt.Sprintf("Expected 'pubkey!power'. Got %v", pubKeyAndPower)}
	}
	pubkeyS, powerS := pubKeyAndPower[0], pubKeyAndPower[1]

	// decode the pubkey
	pubkey, err := base64.StdEncoding.DecodeString(pubkeyS)
	if err != nil {
		return &types.ExecTxResult{
			Code: code.CodeTypeEncodingError,
			Log:  fmt.Sprintf("Pubkey (%s) is invalid base64", pubkeyS)}
	}

	// decode the power
	power, err := strconv.ParseInt(powerS, 10, 64)
	if err != nil {
		return &types.ExecTxResult{
			Code: code.CodeTypeEncodingError,
			Log:  fmt.Sprintf("Power (%s) is not an int", powerS)}
	}

	// update
	return app.updateValidator(types.UpdateValidator(pubkey, power, ""))
}

// add, update, or remove a validator
func (app *Application) updateValidator(v types.ValidatorUpdate) *types.ExecTxResult {
	pubkey, err := encoding.PubKeyFromProto(v.PubKey)
	if err != nil {
		panic(fmt.Errorf("can't decode public key: %w", err))
	}
	key := []byte("val:" + string(pubkey.Bytes()))

	if v.Power == 0 {
		// remove validator
		hasKey, err := app.state.db.Has(key)
		if err != nil {
			panic(err)
		}
		if !hasKey {
			pubStr := base64.StdEncoding.EncodeToString(pubkey.Bytes())
			return &types.ExecTxResult{
				Code: code.CodeTypeUnauthorized,
				Log:  fmt.Sprintf("Cannot remove non-existent validator %s", pubStr)}
		}
		if err = app.state.db.Delete(key); err != nil {
			panic(err)
		}
		delete(app.valAddrToPubKeyMap, string(pubkey.Address()))
	} else {
		// add or update validator
		value := bytes.NewBuffer(make([]byte, 0))
		if err := types.WriteMessage(&v, value); err != nil {
			return &types.ExecTxResult{
				Code: code.CodeTypeEncodingError,
				Log:  fmt.Sprintf("error encoding validator: %v", err)}
		}
		if err = app.state.db.Set(key, value.Bytes()); err != nil {
			panic(err)
		}
		app.valAddrToPubKeyMap[string(pubkey.Address())] = v.PubKey
	}

	// we only update the changes array if we successfully updated the tree
	app.ValUpdates = append(app.ValUpdates, v)

	return &types.ExecTxResult{Code: code.CodeTypeOK}
}
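The "val:pubkey!power" wire format used by MakeValSetChangeTx and execValidatorTx is easy to exercise on its own. The following is a standard-library-only sketch of encoding and decoding that format, independent of the tendermint types (the 32-byte key here is a fake placeholder, not a real ed25519 key):

package main

import (
	"encoding/base64"
	"fmt"
	"strconv"
	"strings"
)

const prefix = "val:"

// encodeValTx builds a validator-change tx in the "val:pubkey!power" format.
func encodeValTx(pubkey []byte, power int64) string {
	return prefix + base64.StdEncoding.EncodeToString(pubkey) + "!" + strconv.FormatInt(power, 10)
}

// decodeValTx reverses it, mirroring execValidatorTx's parsing steps.
func decodeValTx(tx string) (pubkey []byte, power int64, err error) {
	body := strings.TrimPrefix(tx, prefix)
	parts := strings.Split(body, "!")
	if len(parts) != 2 {
		return nil, 0, fmt.Errorf("expected 'pubkey!power', got %q", body)
	}
	if pubkey, err = base64.StdEncoding.DecodeString(parts[0]); err != nil {
		return nil, 0, fmt.Errorf("pubkey is invalid base64: %w", err)
	}
	if power, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
		return nil, 0, fmt.Errorf("power is not an int: %w", err)
	}
	return pubkey, power, nil
}

func main() {
	tx := encodeValTx([]byte("fake-32-byte-ed25519-public-key!"), 10)
	fmt.Println(tx)
	pk, power, err := decodeValTx(tx)
	fmt.Println(len(pk), power, err) // 32 10 <nil>
}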
// -----------------------------
// prepare proposal machinery

const PreparePrefix = "prepare"

func isPrepareTx(tx []byte) bool {
	return bytes.HasPrefix(tx, []byte(PreparePrefix))
}

// execPrepareTx is a noop. The tx data is considered a placeholder
// and is substituted at PrepareProposal.
func (app *Application) execPrepareTx(tx []byte) *types.ExecTxResult {
	// noop
	return &types.ExecTxResult{}
}

// substPrepareTx substitutes all the transactions prefixed with 'prepare' in the
// proposal for transactions with the prefix stripped.
// It marks all of the original transactions as 'REMOVED' so that
// Tendermint will remove them from its mempool.
func (app *Application) substPrepareTx(blockData [][]byte) []*types.TxRecord {
	trs := make([]*types.TxRecord, len(blockData))
	var removed []*types.TxRecord
	for i, tx := range blockData {
		if isPrepareTx(tx) {
			removed = append(removed, &types.TxRecord{
				Tx:     tx,
				Action: types.TxRecord_REMOVED,
			})
			trs[i] = &types.TxRecord{
				Tx:     bytes.TrimPrefix(tx, []byte(PreparePrefix)),
				Action: types.TxRecord_ADDED,
			}
			continue
		}
		trs[i] = &types.TxRecord{
			Tx:     tx,
			Action: types.TxRecord_UNMODIFIED,
		}
	}

	return append(trs, removed...)
}
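The substitution logic above is independent of the ABCI types, so it can be demonstrated with plain structs. A minimal sketch of the same "strip the prefix, re-add the stripped tx, report the original as removed" behavior, using hypothetical local types in place of types.TxRecord:

package main

import (
	"bytes"
	"fmt"
)

const preparePrefix = "prepare"

type action int

const (
	unmodified action = iota
	added
	removed
)

type txRecord struct {
	tx     []byte
	action action
}

// substPrepare mirrors substPrepareTx: every "prepare..."-prefixed tx is
// re-added with the prefix stripped, and the original is reported as removed.
func substPrepare(blockData [][]byte) []txRecord {
	out := make([]txRecord, len(blockData))
	var dropped []txRecord
	for i, tx := range blockData {
		if bytes.HasPrefix(tx, []byte(preparePrefix)) {
			dropped = append(dropped, txRecord{tx: tx, action: removed})
			out[i] = txRecord{tx: bytes.TrimPrefix(tx, []byte(preparePrefix)), action: added}
			continue
		}
		out[i] = txRecord{tx: tx, action: unmodified}
	}
	return append(out, dropped...)
}

func main() {
	for _, r := range substPrepare([][]byte{[]byte("preparek=v"), []byte("a=b")}) {
		fmt.Printf("%s %d\n", r.tx, r.action)
	}
	// Output: k=v 1 / a=b 0 / preparek=v 2
}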
@@ -3,10 +3,10 @@ package kvstore
import (
	"context"
	"fmt"
-	"os"
	"sort"
	"testing"

+	"github.com/fortytw2/leaktest"
	"github.com/stretchr/testify/require"

	"github.com/tendermint/tendermint/libs/log"
@@ -25,12 +25,14 @@ const (
)

func testKVStore(t *testing.T, app types.Application, tx []byte, key, value string) {
-	req := types.RequestDeliverTx{Tx: tx}
-	ar := app.DeliverTx(req)
-	require.False(t, ar.IsErr(), ar)
+	req := types.RequestFinalizeBlock{Txs: [][]byte{tx}}
+	ar := app.FinalizeBlock(req)
+	require.Equal(t, 1, len(ar.TxResults))
+	require.False(t, ar.TxResults[0].IsErr())
	// repeating tx doesn't raise error
-	ar = app.DeliverTx(req)
-	require.False(t, ar.IsErr(), ar)
+	ar = app.FinalizeBlock(req)
+	require.Equal(t, 1, len(ar.TxResults))
+	require.False(t, ar.TxResults[0].IsErr())
	// commit
	app.Commit()

@@ -72,11 +74,8 @@ func TestKVStoreKV(t *testing.T) {
}

func TestPersistentKVStoreKV(t *testing.T) {
-	dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
-	if err != nil {
-		t.Fatal(err)
-	}
-	logger := log.NewTestingLogger(t)
+	dir := t.TempDir()
+	logger := log.NewNopLogger()

	kvstore := NewPersistentKVStoreApplication(logger, dir)
	key := testKey
@@ -90,11 +89,8 @@ func TestPersistentKVStoreKV(t *testing.T) {
}

func TestPersistentKVStoreInfo(t *testing.T) {
-	dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
-	if err != nil {
-		t.Fatal(err)
-	}
-	logger := log.NewTestingLogger(t)
+	dir := t.TempDir()
+	logger := log.NewNopLogger()

	kvstore := NewPersistentKVStoreApplication(logger, dir)
	InitKVStore(kvstore)
@@ -111,8 +107,7 @@ func TestPersistentKVStoreInfo(t *testing.T) {
	header := tmproto.Header{
		Height: height,
	}
-	kvstore.BeginBlock(types.RequestBeginBlock{Hash: hash, Header: header})
-	kvstore.EndBlock(types.RequestEndBlock{Height: header.Height})
+	kvstore.FinalizeBlock(types.RequestFinalizeBlock{Hash: hash, Header: header})
	kvstore.Commit()

	resInfo = kvstore.Info(types.RequestInfo{})
@@ -124,13 +119,7 @@ func TestPersistentKVStoreInfo(t *testing.T) {

// add a validator, remove a validator, update a validator
func TestValUpdates(t *testing.T) {
-	dir, err := os.MkdirTemp("/tmp", "abci-kvstore-test") // TODO
-	if err != nil {
-		t.Fatal(err)
-	}
-	logger := log.NewTestingLogger(t)
-
-	kvstore := NewPersistentKVStoreApplication(logger, dir)
+	kvstore := NewApplication()

	// init with some validators
	total := 10
@@ -204,21 +193,21 @@ func makeApplyBlock(
		Height: height,
	}

-	kvstore.BeginBlock(types.RequestBeginBlock{Hash: hash, Header: header})
-	for _, tx := range txs {
-		if r := kvstore.DeliverTx(types.RequestDeliverTx{Tx: tx}); r.IsErr() {
-			t.Fatal(r)
-		}
-	}
-	resEndBlock := kvstore.EndBlock(types.RequestEndBlock{Height: header.Height})
+	resFinalizeBlock := kvstore.FinalizeBlock(types.RequestFinalizeBlock{
+		Hash:   hash,
+		Header: header,
+		Txs:    txs,
+	})

	kvstore.Commit()

-	valsEqual(t, diff, resEndBlock.ValidatorUpdates)
+	valsEqual(t, diff, resFinalizeBlock.ValidatorUpdates)
}

// order doesn't matter
func valsEqual(t *testing.T, vals1, vals2 []types.ValidatorUpdate) {
+	t.Helper()
	if len(vals1) != len(vals2) {
		t.Fatalf("vals dont match in len. got %d, expected %d", len(vals2), len(vals1))
	}
@@ -240,9 +229,11 @@ func makeSocketClientServer(
	app types.Application,
	name string,
) (abciclient.Client, service.Service, error) {
+	t.Helper()
+
	ctx, cancel := context.WithCancel(ctx)
	t.Cleanup(cancel)
+	t.Cleanup(leaktest.Check(t))

	// Start the listener
	socket := fmt.Sprintf("unix://%s.sock", name)
@@ -272,6 +263,8 @@ func makeGRPCClientServer(
) (abciclient.Client, service.Service, error) {
	ctx, cancel := context.WithCancel(ctx)
	t.Cleanup(cancel)
+	t.Cleanup(leaktest.Check(t))

	// Start the listener
	socket := fmt.Sprintf("unix://%s.sock", name)

@@ -295,7 +288,7 @@ func makeGRPCClientServer(
func TestClientServer(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
-	logger := log.NewTestingLogger(t)
+	logger := log.NewNopLogger()

	// set up socket app
	kvstore := NewApplication()
@@ -330,13 +323,15 @@ func runClientTests(ctx context.Context, t *testing.T, client abciclient.Client)
}

func testClient(ctx context.Context, t *testing.T, app abciclient.Client, tx []byte, key, value string) {
-	ar, err := app.DeliverTx(ctx, types.RequestDeliverTx{Tx: tx})
+	ar, err := app.FinalizeBlock(ctx, types.RequestFinalizeBlock{Txs: [][]byte{tx}})
	require.NoError(t, err)
-	require.False(t, ar.IsErr(), ar)
-	// repeating tx doesn't raise error
-	ar, err = app.DeliverTx(ctx, types.RequestDeliverTx{Tx: tx})
+	require.Equal(t, 1, len(ar.TxResults))
+	require.False(t, ar.TxResults[0].IsErr())
+	// repeating FinalizeBlock doesn't raise error
+	ar, err = app.FinalizeBlock(ctx, types.RequestFinalizeBlock{Txs: [][]byte{tx}})
	require.NoError(t, err)
-	require.False(t, ar.IsErr(), ar)
+	require.Equal(t, 1, len(ar.TxResults))
+	require.False(t, ar.TxResults[0].IsErr())
	// commit
	_, err = app.Commit(ctx)
	require.NoError(t, err)
@@ -2,18 +2,13 @@ package kvstore

import (
	"bytes"
-	"encoding/base64"
-	"fmt"
-	"strconv"
-	"strings"

	dbm "github.com/tendermint/tm-db"

-	"github.com/tendermint/tendermint/abci/example/code"
	"github.com/tendermint/tendermint/abci/types"
-	"github.com/tendermint/tendermint/crypto/encoding"
	"github.com/tendermint/tendermint/libs/log"
	cryptoproto "github.com/tendermint/tendermint/proto/tendermint/crypto"
+	ptypes "github.com/tendermint/tendermint/proto/tendermint/types"
)

const (
@@ -25,258 +20,59 @@ const (
var _ types.Application = (*PersistentKVStoreApplication)(nil)

type PersistentKVStoreApplication struct {
-	app *Application
-
-	// validator set
-	ValUpdates []types.ValidatorUpdate
-
-	valAddrToPubKeyMap map[string]cryptoproto.PublicKey
-
-	logger log.Logger
+	*Application
}

func NewPersistentKVStoreApplication(logger log.Logger, dbDir string) *PersistentKVStoreApplication {
-	name := "kvstore"
-	db, err := dbm.NewGoLevelDB(name, dbDir)
+	db, err := dbm.NewGoLevelDB("kvstore", dbDir)
	if err != nil {
		panic(err)
	}

-	state := loadState(db)
-
	return &PersistentKVStoreApplication{
-		app:                &Application{state: state},
-		valAddrToPubKeyMap: make(map[string]cryptoproto.PublicKey),
-		logger:             logger,
+		Application: &Application{
+			valAddrToPubKeyMap: make(map[string]cryptoproto.PublicKey),
+			state:              loadState(db),
+			logger:             logger,
+		},
	}
}

[Because the Application is now embedded, the thin delegating methods are deleted: Close; the Info wrapper that set LastBlockHeight/LastBlockAppHash; the "val:pubkey!power"-aware DeliverTx; CheckTx; Commit; the /val-aware Query; InitChain; the BeginBlock that reset ValUpdates and punished DUPLICATE_VOTE equivocation; the EndBlock that returned ValidatorUpdates; ListSnapshots; LoadSnapshotChunk; and the Validators, MakeValSetChangeTx and isValidatorTx helpers, all of which now live on Application itself.]

-func (app *PersistentKVStoreApplication) OfferSnapshot(
-	req types.RequestOfferSnapshot) types.ResponseOfferSnapshot {
+func (app *PersistentKVStoreApplication) OfferSnapshot(req types.RequestOfferSnapshot) types.ResponseOfferSnapshot {
	return types.ResponseOfferSnapshot{Result: types.ResponseOfferSnapshot_ABORT}
}

-func (app *PersistentKVStoreApplication) ApplySnapshotChunk(
-	req types.RequestApplySnapshotChunk) types.ResponseApplySnapshotChunk {
+func (app *PersistentKVStoreApplication) ApplySnapshotChunk(req types.RequestApplySnapshotChunk) types.ResponseApplySnapshotChunk {
	return types.ResponseApplySnapshotChunk{Result: types.ResponseApplySnapshotChunk_ABORT}
}

+func (app *PersistentKVStoreApplication) ExtendVote(req types.RequestExtendVote) types.ResponseExtendVote {
+	return types.ResponseExtendVote{VoteExtension: ConstructVoteExtension(req.Vote.ValidatorAddress)}
+}
+
+func (app *PersistentKVStoreApplication) VerifyVoteExtension(req types.RequestVerifyVoteExtension) types.ResponseVerifyVoteExtension {
+	return types.RespondVerifyVoteExtension(app.verifyExtension(req.Vote.ValidatorAddress, req.Vote.VoteExtension))
+}
+
+// -----------------------------
+
+func ConstructVoteExtension(valAddr []byte) *ptypes.VoteExtension {
+	return &ptypes.VoteExtension{
+		AppDataToSign:             valAddr,
+		AppDataSelfAuthenticating: valAddr,
+	}
+}

-// format is "val:pubkey!power"
-// pubkey is a base64-encoded 32-byte ed25519 key
|
||||
func (app *PersistentKVStoreApplication) execValidatorTx(tx []byte) types.ResponseDeliverTx {
|
||||
tx = tx[len(ValidatorSetChangePrefix):]
|
||||
|
||||
// get the pubkey and power
|
||||
pubKeyAndPower := strings.Split(string(tx), "!")
|
||||
if len(pubKeyAndPower) != 2 {
|
||||
return types.ResponseDeliverTx{
|
||||
Code: code.CodeTypeEncodingError,
|
||||
Log: fmt.Sprintf("Expected 'pubkey!power'. Got %v", pubKeyAndPower)}
|
||||
func (app *PersistentKVStoreApplication) verifyExtension(valAddr []byte, ext *ptypes.VoteExtension) bool {
|
||||
if ext == nil {
|
||||
return false
|
||||
}
|
||||
pubkeyS, powerS := pubKeyAndPower[0], pubKeyAndPower[1]
|
||||
|
||||
// decode the pubkey
|
||||
pubkey, err := base64.StdEncoding.DecodeString(pubkeyS)
|
||||
if err != nil {
|
||||
return types.ResponseDeliverTx{
|
||||
Code: code.CodeTypeEncodingError,
|
||||
Log: fmt.Sprintf("Pubkey (%s) is invalid base64", pubkeyS)}
|
||||
canonical := ConstructVoteExtension(valAddr)
|
||||
if !bytes.Equal(canonical.AppDataToSign, ext.AppDataToSign) {
|
||||
return false
|
||||
}
|
||||
|
||||
// decode the power
|
||||
power, err := strconv.ParseInt(powerS, 10, 64)
|
||||
if err != nil {
|
||||
return types.ResponseDeliverTx{
|
||||
Code: code.CodeTypeEncodingError,
|
||||
Log: fmt.Sprintf("Power (%s) is not an int", powerS)}
|
||||
if !bytes.Equal(canonical.AppDataSelfAuthenticating, ext.AppDataSelfAuthenticating) {
|
||||
return false
|
||||
}
|
||||
|
||||
// update
|
||||
return app.updateValidator(types.UpdateValidator(pubkey, power, ""))
|
||||
}
|
||||
|
||||
// add, update, or remove a validator
|
||||
func (app *PersistentKVStoreApplication) updateValidator(v types.ValidatorUpdate) types.ResponseDeliverTx {
|
||||
pubkey, err := encoding.PubKeyFromProto(v.PubKey)
|
||||
if err != nil {
|
||||
panic(fmt.Errorf("can't decode public key: %w", err))
|
||||
}
|
||||
key := []byte("val:" + string(pubkey.Bytes()))
|
||||
|
||||
if v.Power == 0 {
|
||||
// remove validator
|
||||
hasKey, err := app.app.state.db.Has(key)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
if !hasKey {
|
||||
pubStr := base64.StdEncoding.EncodeToString(pubkey.Bytes())
|
||||
return types.ResponseDeliverTx{
|
||||
Code: code.CodeTypeUnauthorized,
|
||||
Log: fmt.Sprintf("Cannot remove non-existent validator %s", pubStr)}
|
||||
}
|
||||
if err = app.app.state.db.Delete(key); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
delete(app.valAddrToPubKeyMap, string(pubkey.Address()))
|
||||
} else {
|
||||
// add or update validator
|
||||
value := bytes.NewBuffer(make([]byte, 0))
|
||||
if err := types.WriteMessage(&v, value); err != nil {
|
||||
return types.ResponseDeliverTx{
|
||||
Code: code.CodeTypeEncodingError,
|
||||
Log: fmt.Sprintf("error encoding validator: %v", err)}
|
||||
}
|
||||
if err = app.app.state.db.Set(key, value.Bytes()); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
app.valAddrToPubKeyMap[string(pubkey.Address())] = v.PubKey
|
||||
}
|
||||
|
||||
// we only update the changes array if we successfully updated the tree
|
||||
app.ValUpdates = append(app.ValUpdates, v)
|
||||
|
||||
return types.ResponseDeliverTx{Code: code.CodeTypeOK}
|
||||
return true
|
||||
}
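
// Illustrative sketch, not part of the diff above: how a caller might build and
// classify a validator-change transaction for this kvstore example, using the
// MakeValSetChangeTx and isValidatorTx helpers shown earlier in this file. The
// pubkey argument is assumed to come from elsewhere; exampleValSetChangeTx is a
// hypothetical name used only for illustration.
func exampleValSetChangeTx(pubkey cryptoproto.PublicKey) {
    tx := MakeValSetChangeTx(pubkey, 10) // encodes "val:<base64-pubkey>!10"
    if isValidatorTx(tx) {
        // DeliverTx routes transactions with the "val:" prefix to execValidatorTx,
        // which parses the pubkey and power and calls updateValidator.
    }
    _ = MakeValSetChangeTx(pubkey, 0) // power 0 removes the validator from the set
}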
@@ -16,10 +16,9 @@ type GRPCServer struct {
    service.BaseService
    logger log.Logger

    proto    string
    addr     string
    listener net.Listener
    server *grpc.Server
    proto  string
    addr   string
    server *grpc.Server

    app types.ABCIApplicationServer
}
@@ -28,11 +27,10 @@ type GRPCServer struct {
func NewGRPCServer(logger log.Logger, protoAddr string, app types.ABCIApplicationServer) service.Service {
    proto, addr := tmnet.ProtocolAndAddress(protoAddr)
    s := &GRPCServer{
        logger:   logger,
        proto:    proto,
        addr:     addr,
        listener: nil,
        app:      app,
        logger: logger,
        proto:  proto,
        addr:   addr,
        app:    app,
    }
    s.BaseService = *service.NewBaseService(logger, "ABCIServer", s)
    return s
@@ -40,13 +38,11 @@ func NewGRPCServer(logger log.Logger, protoAddr string, app types.ABCIApplicatio

// OnStart starts the gRPC service.
func (s *GRPCServer) OnStart(ctx context.Context) error {

    ln, err := net.Listen(s.proto, s.addr)
    if err != nil {
        return err
    }

    s.listener = ln
    s.server = grpc.NewServer()
    types.RegisterABCIApplicationServer(s.server, s.app)

@@ -57,7 +53,7 @@ func (s *GRPCServer) OnStart(ctx context.Context) error {
        s.server.GracefulStop()
    }()

    if err := s.server.Serve(s.listener); err != nil {
    if err := s.server.Serve(ln); err != nil {
        s.logger.Error("error serving gRPC server", "err", err)
    }
    }()
@@ -65,6 +61,4 @@ func (s *GRPCServer) OnStart(ctx context.Context) error {
}

// OnStop stops the gRPC server.
func (s *GRPCServer) OnStop() {
    s.server.Stop()
}
func (s *GRPCServer) OnStop() { s.server.Stop() }
@@ -3,6 +3,7 @@ package server
import (
    "bufio"
    "context"
    "errors"
    "fmt"
    "io"
    "net"
@@ -26,22 +27,21 @@ type SocketServer struct {
    listener net.Listener

    connsMtx   sync.Mutex
    conns      map[int]net.Conn
    connsClose map[int]func()
    nextConnID int

    appMtx sync.Mutex
    app    types.Application
    app types.Application
}

func NewSocketServer(logger log.Logger, protoAddr string, app types.Application) service.Service {
    proto, addr := tmnet.ProtocolAndAddress(protoAddr)
    s := &SocketServer{
        logger:   logger,
        proto:    proto,
        addr:     addr,
        listener: nil,
        app:      app,
        conns:    make(map[int]net.Conn),
        logger:     logger,
        proto:      proto,
        addr:       addr,
        listener:   nil,
        app:        app,
        connsClose: make(map[int]func()),
    }
    s.BaseService = *service.NewBaseService(logger, "ABCIServer", s)
    return s
@@ -67,44 +67,35 @@ func (s *SocketServer) OnStop() {
    s.connsMtx.Lock()
    defer s.connsMtx.Unlock()

    for id, conn := range s.conns {
        delete(s.conns, id)
        if err := conn.Close(); err != nil {
            s.logger.Error("error closing connection", "id", id, "conn", conn, "err", err)
        }
    for _, closer := range s.connsClose {
        closer()
    }
}

func (s *SocketServer) addConn(conn net.Conn) int {
func (s *SocketServer) addConn(closer func()) int {
    s.connsMtx.Lock()
    defer s.connsMtx.Unlock()

    connID := s.nextConnID
    s.nextConnID++
    s.conns[connID] = conn

    s.connsClose[connID] = closer
    return connID
}

// deletes conn even if close errs
func (s *SocketServer) rmConn(connID int) error {
func (s *SocketServer) rmConn(connID int) {
    s.connsMtx.Lock()
    defer s.connsMtx.Unlock()

    conn, ok := s.conns[connID]
    if !ok {
        return fmt.Errorf("connection %d does not exist", connID)
    if closer, ok := s.connsClose[connID]; ok {
        closer()
        delete(s.connsClose, connID)
    }

    delete(s.conns, connID)
    return conn.Close()
}

func (s *SocketServer) acceptConnectionsRoutine(ctx context.Context) {
    for {
        if ctx.Err() != nil {
            return

        }

        // Accept a connection
@@ -118,143 +109,134 @@ func (s *SocketServer) acceptConnectionsRoutine(ctx context.Context) {
            continue
        }

        s.logger.Info("Accepted a new connection")
        cctx, ccancel := context.WithCancel(ctx)
        connID := s.addConn(ccancel)

        connID := s.addConn(conn)
        s.logger.Info("Accepted a new connection", "id", connID)

        closeConn := make(chan error, 2)              // Push to signal connection closed
        responses := make(chan *types.Response, 1000) // A channel to buffer responses

        once := &sync.Once{}
        closer := func(err error) {
            ccancel()
            once.Do(func() {
                if cerr := conn.Close(); err != nil {
                    s.logger.Error("error closing connection",
                        "id", connID,
                        "close_err", cerr,
                        "err", err)
                }
                s.rmConn(connID)

                switch {
                case errors.Is(err, context.Canceled):
                    s.logger.Error("Connection terminated",
                        "id", connID,
                        "err", err)
                case errors.Is(err, context.DeadlineExceeded):
                    s.logger.Error("Connection encountered timeout",
                        "id", connID,
                        "err", err)
                case errors.Is(err, io.EOF):
                    s.logger.Error("Connection was closed by client",
                        "id", connID)
                case err != nil:
                    s.logger.Error("Connection error",
                        "id", connID,
                        "err", err)
                default:
                    s.logger.Error("Connection was closed",
                        "id", connID)
                }
            })
        }

        // Read requests from conn and deal with them
        go s.handleRequests(ctx, closeConn, conn, responses)
        go s.handleRequests(cctx, closer, conn, responses)
        // Pull responses from 'responses' and write them to conn.
        go s.handleResponses(ctx, closeConn, conn, responses)

        // Wait until signal to close connection
        go s.waitForClose(ctx, closeConn, connID)
    }
}

func (s *SocketServer) waitForClose(ctx context.Context, closeConn chan error, connID int) {
    defer func() {
        // Close the connection
        if err := s.rmConn(connID); err != nil {
            s.logger.Error("error closing connection", "err", err)
        }
    }()

    select {
    case <-ctx.Done():
        return
    case err := <-closeConn:
        switch {
        case err == io.EOF:
            s.logger.Error("Connection was closed by client")
        case err != nil:
            s.logger.Error("Connection error", "err", err)
        default:
            // never happens
            s.logger.Error("Connection was closed")
        }
        go s.handleResponses(cctx, closer, conn, responses)
    }
}

// Read requests from conn and deal with them
func (s *SocketServer) handleRequests(
    ctx context.Context,
    closeConn chan error,
    closer func(error),
    conn io.Reader,
    responses chan<- *types.Response,
) {
    var count int
    var bufReader = bufio.NewReader(conn)

    defer func() {
        // make sure to recover from any app-related panics to allow proper socket cleanup
        r := recover()
        if r != nil {
        if r := recover(); r != nil {
            const size = 64 << 10
            buf := make([]byte, size)
            buf = buf[:runtime.Stack(buf, false)]
            err := fmt.Errorf("recovered from panic: %v\n%s", r, buf)
            closeConn <- err
            s.appMtx.Unlock()
            closer(fmt.Errorf("recovered from panic: %v\n%s", r, buf))
        }
    }()

    for {
        if ctx.Err() != nil {
        req := &types.Request{}
        if err := types.ReadMessage(bufReader, req); err != nil {
            closer(fmt.Errorf("error reading message: %w", err))
            return
        }

        var req = &types.Request{}
        err := types.ReadMessage(bufReader, req)
        if err != nil {
            if err == io.EOF {
                closeConn <- err
            } else {
                closeConn <- fmt.Errorf("error reading message: %w", err)
            }
        resp := s.processRequest(req)
        select {
        case <-ctx.Done():
            closer(ctx.Err())
            return
        case responses <- resp:
        }
        s.appMtx.Lock()
        count++
        s.handleRequest(req, responses)
        s.appMtx.Unlock()
    }
}

func (s *SocketServer) handleRequest(req *types.Request, responses chan<- *types.Response) {
func (s *SocketServer) processRequest(req *types.Request) *types.Response {
    switch r := req.Value.(type) {
    case *types.Request_Echo:
        responses <- types.ToResponseEcho(r.Echo.Message)
        return types.ToResponseEcho(r.Echo.Message)
    case *types.Request_Flush:
        responses <- types.ToResponseFlush()
        return types.ToResponseFlush()
    case *types.Request_Info:
        res := s.app.Info(*r.Info)
        responses <- types.ToResponseInfo(res)
    case *types.Request_DeliverTx:
        res := s.app.DeliverTx(*r.DeliverTx)
        responses <- types.ToResponseDeliverTx(res)
        return types.ToResponseInfo(s.app.Info(*r.Info))
    case *types.Request_CheckTx:
        res := s.app.CheckTx(*r.CheckTx)
        responses <- types.ToResponseCheckTx(res)
        return types.ToResponseCheckTx(s.app.CheckTx(*r.CheckTx))
    case *types.Request_Commit:
        res := s.app.Commit()
        responses <- types.ToResponseCommit(res)
        return types.ToResponseCommit(s.app.Commit())
    case *types.Request_Query:
        res := s.app.Query(*r.Query)
        responses <- types.ToResponseQuery(res)
        return types.ToResponseQuery(s.app.Query(*r.Query))
    case *types.Request_InitChain:
        res := s.app.InitChain(*r.InitChain)
        responses <- types.ToResponseInitChain(res)
    case *types.Request_BeginBlock:
        res := s.app.BeginBlock(*r.BeginBlock)
        responses <- types.ToResponseBeginBlock(res)
    case *types.Request_EndBlock:
        res := s.app.EndBlock(*r.EndBlock)
        responses <- types.ToResponseEndBlock(res)
        return types.ToResponseInitChain(s.app.InitChain(*r.InitChain))
    case *types.Request_ListSnapshots:
        res := s.app.ListSnapshots(*r.ListSnapshots)
        responses <- types.ToResponseListSnapshots(res)
        return types.ToResponseListSnapshots(s.app.ListSnapshots(*r.ListSnapshots))
    case *types.Request_OfferSnapshot:
        res := s.app.OfferSnapshot(*r.OfferSnapshot)
        responses <- types.ToResponseOfferSnapshot(res)
        return types.ToResponseOfferSnapshot(s.app.OfferSnapshot(*r.OfferSnapshot))
    case *types.Request_PrepareProposal:
        return types.ToResponsePrepareProposal(s.app.PrepareProposal(*r.PrepareProposal))
    case *types.Request_ProcessProposal:
        return types.ToResponseProcessProposal(s.app.ProcessProposal(*r.ProcessProposal))
    case *types.Request_LoadSnapshotChunk:
        res := s.app.LoadSnapshotChunk(*r.LoadSnapshotChunk)
        responses <- types.ToResponseLoadSnapshotChunk(res)
        return types.ToResponseLoadSnapshotChunk(s.app.LoadSnapshotChunk(*r.LoadSnapshotChunk))
    case *types.Request_ApplySnapshotChunk:
        res := s.app.ApplySnapshotChunk(*r.ApplySnapshotChunk)
        responses <- types.ToResponseApplySnapshotChunk(res)
        return types.ToResponseApplySnapshotChunk(s.app.ApplySnapshotChunk(*r.ApplySnapshotChunk))
    case *types.Request_ExtendVote:
        return types.ToResponseExtendVote(s.app.ExtendVote(*r.ExtendVote))
    case *types.Request_VerifyVoteExtension:
        return types.ToResponseVerifyVoteExtension(s.app.VerifyVoteExtension(*r.VerifyVoteExtension))
    case *types.Request_FinalizeBlock:
        return types.ToResponseFinalizeBlock(s.app.FinalizeBlock(*r.FinalizeBlock))
    default:
        responses <- types.ToResponseException("Unknown request")
        return types.ToResponseException("Unknown request")
    }
}

// Pull responses from 'responses' and write them to conn.
func (s *SocketServer) handleResponses(
    ctx context.Context,
    closeConn chan error,
    closer func(error),
    conn io.Writer,
    responses <-chan *types.Response,
) {
@@ -262,21 +244,15 @@ func (s *SocketServer) handleResponses(
    for {
        select {
        case <-ctx.Done():
            closer(ctx.Err())
            return
        case res := <-responses:
            if err := types.WriteMessage(res, bw); err != nil {
                select {
                case <-ctx.Done():
                case closeConn <- fmt.Errorf("error writing message: %w", err):
                }
                closer(fmt.Errorf("error writing message: %w", err))
                return
            }
            if err := bw.Flush(); err != nil {
                select {
                case <-ctx.Done():
                case closeConn <- fmt.Errorf("error flushing write buffer: %w", err):
                }

                closer(fmt.Errorf("error writing message: %w", err))
                return
            }
        }
@@ -49,22 +49,24 @@ func Commit(ctx context.Context, client abciclient.Client, hashExp []byte) error
    return nil
}

func DeliverTx(ctx context.Context, client abciclient.Client, txBytes []byte, codeExp uint32, dataExp []byte) error {
    res, _ := client.DeliverTx(ctx, types.RequestDeliverTx{Tx: txBytes})
    code, data, log := res.Code, res.Data, res.Log
    if code != codeExp {
        fmt.Println("Failed test: DeliverTx")
        fmt.Printf("DeliverTx response code was unexpected. Got %v expected %v. Log: %v\n",
            code, codeExp, log)
        return errors.New("deliverTx error")
func FinalizeBlock(ctx context.Context, client abciclient.Client, txBytes [][]byte, codeExp []uint32, dataExp []byte) error {
    res, _ := client.FinalizeBlock(ctx, types.RequestFinalizeBlock{Txs: txBytes})
    for i, tx := range res.TxResults {
        code, data, log := tx.Code, tx.Data, tx.Log
        if code != codeExp[i] {
            fmt.Println("Failed test: FinalizeBlock")
            fmt.Printf("FinalizeBlock response code was unexpected. Got %v expected %v. Log: %v\n",
                code, codeExp, log)
            return errors.New("FinalizeBlock error")
        }
        if !bytes.Equal(data, dataExp) {
            fmt.Println("Failed test: FinalizeBlock")
            fmt.Printf("FinalizeBlock response data was unexpected. Got %X expected %X\n",
                data, dataExp)
            return errors.New("FinalizeBlock error")
        }
    }
    if !bytes.Equal(data, dataExp) {
        fmt.Println("Failed test: DeliverTx")
        fmt.Printf("DeliverTx response data was unexpected. Got %X expected %X\n",
            data, dataExp)
        return errors.New("deliverTx error")
    }
    fmt.Println("Passed test: DeliverTx")
    fmt.Println("Passed test: FinalizeBlock")
    return nil
}

@@ -1,10 +1,10 @@
echo hello
info
commit
deliver_tx "abc"
finalize_block "abc"
info
commit
query "abc"
deliver_tx "def=xyz"
finalize_block "def=xyz" "ghi=123"
commit
query "def"

@@ -12,7 +12,7 @@
-> code: OK
-> data.hex: 0x0000000000000000

> deliver_tx "abc"
> finalize_block "abc"
-> code: OK

> info
@@ -33,12 +33,14 @@
-> value: abc
-> value.hex: 616263

> deliver_tx "def=xyz"
> finalize_block "def=xyz" "ghi=123"
-> code: OK
> finalize_block "def=xyz" "ghi=123"
-> code: OK

> commit
-> code: OK
-> data.hex: 0x0400000000000000
-> data.hex: 0x0600000000000000

> query "def"
-> code: OK

@@ -1,7 +1,7 @@
check_tx 0x00
check_tx 0xff
deliver_tx 0x00
finalize_block 0x00
check_tx 0x00
deliver_tx 0x01
deliver_tx 0x04
finalize_block 0x01
finalize_block 0x04
info

@@ -4,20 +4,20 @@
> check_tx 0xff
-> code: OK

> deliver_tx 0x00
> finalize_block 0x00
-> code: OK

> check_tx 0x00
-> code: OK

> deliver_tx 0x01
> finalize_block 0x01
-> code: OK

> deliver_tx 0x04
> finalize_block 0x04
-> code: OK

> info
-> code: OK
-> data: {"hashes":0,"txs":3}
-> data.hex: 0x7B22686173686573223A302C22747873223A337D
-> data: {"size":3}
-> data.hex: 0x7B2273697A65223A337D

@@ -4,6 +4,7 @@ import (
    "context"
)

//go:generate ../../scripts/mockery_generate.sh Application
// Application is an interface that enables any finite, deterministic state machine
// to be driven by a blockchain-based replication engine via the ABCI.
// All methods take a RequestXxx argument and return a ResponseXxx argument,
@@ -17,11 +18,17 @@ type Application interface {
    CheckTx(RequestCheckTx) ResponseCheckTx // Validate a tx for the mempool

    // Consensus Connection
    InitChain(RequestInitChain) ResponseInitChain // Initialize blockchain w validators/other info from TendermintCore
    BeginBlock(RequestBeginBlock) ResponseBeginBlock // Signals the beginning of a block
    DeliverTx(RequestDeliverTx) ResponseDeliverTx // Deliver a tx for full processing
    EndBlock(RequestEndBlock) ResponseEndBlock // Signals the end of a block, returns changes to the validator set
    Commit() ResponseCommit // Commit the state and return the application Merkle root hash
    InitChain(RequestInitChain) ResponseInitChain // Initialize blockchain w validators/other info from TendermintCore
    PrepareProposal(RequestPrepareProposal) ResponsePrepareProposal
    ProcessProposal(RequestProcessProposal) ResponseProcessProposal
    // Commit the state and return the application Merkle root hash
    Commit() ResponseCommit
    // Create application specific vote extension
    ExtendVote(RequestExtendVote) ResponseExtendVote
    // Verify application's vote extension data
    VerifyVoteExtension(RequestVerifyVoteExtension) ResponseVerifyVoteExtension
    // Deliver the decided block with its txs to the Application
    FinalizeBlock(RequestFinalizeBlock) ResponseFinalizeBlock

    // State Sync Connection
    ListSnapshots(RequestListSnapshots) ResponseListSnapshots // List available snapshots
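
// Illustrative sketch, not part of the diff above: the smallest way to satisfy this
// interface is to embed BaseApplication (shown in the next hunk) and override only
// the methods the application cares about. CounterApp is a hypothetical example type.
type CounterApp struct {
    BaseApplication
    txCount int64
}

var _ Application = (*CounterApp)(nil)

func (app *CounterApp) FinalizeBlock(req RequestFinalizeBlock) ResponseFinalizeBlock {
    results := make([]*ExecTxResult, len(req.Txs))
    for i := range req.Txs {
        app.txCount++ // application-specific state transition per delivered tx
        results[i] = &ExecTxResult{Code: CodeTypeOK}
    }
    return ResponseFinalizeBlock{TxResults: results}
}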
@@ -35,8 +42,7 @@ type Application interface {

var _ Application = (*BaseApplication)(nil)

type BaseApplication struct {
}
type BaseApplication struct{}

func NewBaseApplication() *BaseApplication {
    return &BaseApplication{}
@@ -46,10 +52,6 @@ func (BaseApplication) Info(req RequestInfo) ResponseInfo {
    return ResponseInfo{}
}

func (BaseApplication) DeliverTx(req RequestDeliverTx) ResponseDeliverTx {
    return ResponseDeliverTx{Code: CodeTypeOK}
}

func (BaseApplication) CheckTx(req RequestCheckTx) ResponseCheckTx {
    return ResponseCheckTx{Code: CodeTypeOK}
}
@@ -58,6 +60,16 @@ func (BaseApplication) Commit() ResponseCommit {
    return ResponseCommit{}
}

func (BaseApplication) ExtendVote(req RequestExtendVote) ResponseExtendVote {
    return ResponseExtendVote{}
}

func (BaseApplication) VerifyVoteExtension(req RequestVerifyVoteExtension) ResponseVerifyVoteExtension {
    return ResponseVerifyVoteExtension{
        Result: ResponseVerifyVoteExtension_ACCEPT,
    }
}

func (BaseApplication) Query(req RequestQuery) ResponseQuery {
    return ResponseQuery{Code: CodeTypeOK}
}
@@ -66,14 +78,6 @@ func (BaseApplication) InitChain(req RequestInitChain) ResponseInitChain {
    return ResponseInitChain{}
}

func (BaseApplication) BeginBlock(req RequestBeginBlock) ResponseBeginBlock {
    return ResponseBeginBlock{}
}

func (BaseApplication) EndBlock(req RequestEndBlock) ResponseEndBlock {
    return ResponseEndBlock{}
}

func (BaseApplication) ListSnapshots(req RequestListSnapshots) ResponseListSnapshots {
    return ResponseListSnapshots{}
}
@@ -90,6 +94,24 @@ func (BaseApplication) ApplySnapshotChunk(req RequestApplySnapshotChunk) Respons
    return ResponseApplySnapshotChunk{}
}

func (BaseApplication) PrepareProposal(req RequestPrepareProposal) ResponsePrepareProposal {
    return ResponsePrepareProposal{}
}

func (BaseApplication) ProcessProposal(req RequestProcessProposal) ResponseProcessProposal {
    return ResponseProcessProposal{}
}

func (BaseApplication) FinalizeBlock(req RequestFinalizeBlock) ResponseFinalizeBlock {
    txs := make([]*ExecTxResult, len(req.Txs))
    for i := range req.Txs {
        txs[i] = &ExecTxResult{Code: CodeTypeOK}
    }
    return ResponseFinalizeBlock{
        TxResults: txs,
    }
}

//-------------------------------------------------------

// GRPCApplication is a GRPC wrapper for Application
@@ -114,11 +136,6 @@ func (app *GRPCApplication) Info(ctx context.Context, req *RequestInfo) (*Respon
    return &res, nil
}

func (app *GRPCApplication) DeliverTx(ctx context.Context, req *RequestDeliverTx) (*ResponseDeliverTx, error) {
    res := app.app.DeliverTx(*req)
    return &res, nil
}

func (app *GRPCApplication) CheckTx(ctx context.Context, req *RequestCheckTx) (*ResponseCheckTx, error) {
    res := app.app.CheckTx(*req)
    return &res, nil
@@ -139,16 +156,6 @@ func (app *GRPCApplication) InitChain(ctx context.Context, req *RequestInitChain
    return &res, nil
}

func (app *GRPCApplication) BeginBlock(ctx context.Context, req *RequestBeginBlock) (*ResponseBeginBlock, error) {
    res := app.app.BeginBlock(*req)
    return &res, nil
}

func (app *GRPCApplication) EndBlock(ctx context.Context, req *RequestEndBlock) (*ResponseEndBlock, error) {
    res := app.app.EndBlock(*req)
    return &res, nil
}

func (app *GRPCApplication) ListSnapshots(
    ctx context.Context, req *RequestListSnapshots) (*ResponseListSnapshots, error) {
    res := app.app.ListSnapshots(*req)
@@ -172,3 +179,33 @@ func (app *GRPCApplication) ApplySnapshotChunk(
    res := app.app.ApplySnapshotChunk(*req)
    return &res, nil
}

func (app *GRPCApplication) ExtendVote(
    ctx context.Context, req *RequestExtendVote) (*ResponseExtendVote, error) {
    res := app.app.ExtendVote(*req)
    return &res, nil
}

func (app *GRPCApplication) VerifyVoteExtension(
    ctx context.Context, req *RequestVerifyVoteExtension) (*ResponseVerifyVoteExtension, error) {
    res := app.app.VerifyVoteExtension(*req)
    return &res, nil
}

func (app *GRPCApplication) PrepareProposal(
    ctx context.Context, req *RequestPrepareProposal) (*ResponsePrepareProposal, error) {
    res := app.app.PrepareProposal(*req)
    return &res, nil
}

func (app *GRPCApplication) ProcessProposal(
    ctx context.Context, req *RequestProcessProposal) (*ResponseProcessProposal, error) {
    res := app.app.ProcessProposal(*req)
    return &res, nil
}

func (app *GRPCApplication) FinalizeBlock(
    ctx context.Context, req *RequestFinalizeBlock) (*ResponseFinalizeBlock, error) {
    res := app.app.FinalizeBlock(*req)
    return &res, nil
}

@@ -4,6 +4,7 @@ import (
    "io"

    "github.com/gogo/protobuf/proto"

    "github.com/tendermint/tendermint/internal/libs/protoio"
)

@@ -44,12 +45,6 @@ func ToRequestInfo(req RequestInfo) *Request {
    }
}

func ToRequestDeliverTx(req RequestDeliverTx) *Request {
    return &Request{
        Value: &Request_DeliverTx{&req},
    }
}

func ToRequestCheckTx(req RequestCheckTx) *Request {
    return &Request{
        Value: &Request_CheckTx{&req},
@@ -74,18 +69,6 @@ func ToRequestInitChain(req RequestInitChain) *Request {
    }
}

func ToRequestBeginBlock(req RequestBeginBlock) *Request {
    return &Request{
        Value: &Request_BeginBlock{&req},
    }
}

func ToRequestEndBlock(req RequestEndBlock) *Request {
    return &Request{
        Value: &Request_EndBlock{&req},
    }
}

func ToRequestListSnapshots(req RequestListSnapshots) *Request {
    return &Request{
        Value: &Request_ListSnapshots{&req},
@@ -110,6 +93,36 @@ func ToRequestApplySnapshotChunk(req RequestApplySnapshotChunk) *Request {
    }
}

func ToRequestExtendVote(req RequestExtendVote) *Request {
    return &Request{
        Value: &Request_ExtendVote{&req},
    }
}

func ToRequestVerifyVoteExtension(req RequestVerifyVoteExtension) *Request {
    return &Request{
        Value: &Request_VerifyVoteExtension{&req},
    }
}

func ToRequestPrepareProposal(req RequestPrepareProposal) *Request {
    return &Request{
        Value: &Request_PrepareProposal{&req},
    }
}

func ToRequestProcessProposal(req RequestProcessProposal) *Request {
    return &Request{
        Value: &Request_ProcessProposal{&req},
    }
}

func ToRequestFinalizeBlock(req RequestFinalizeBlock) *Request {
    return &Request{
        Value: &Request_FinalizeBlock{&req},
    }
}

//----------------------------------------

func ToResponseException(errStr string) *Response {
@@ -135,11 +148,6 @@ func ToResponseInfo(res ResponseInfo) *Response {
        Value: &Response_Info{&res},
    }
}
func ToResponseDeliverTx(res ResponseDeliverTx) *Response {
    return &Response{
        Value: &Response_DeliverTx{&res},
    }
}

func ToResponseCheckTx(res ResponseCheckTx) *Response {
    return &Response{
@@ -165,18 +173,6 @@ func ToResponseInitChain(res ResponseInitChain) *Response {
    }
}

func ToResponseBeginBlock(res ResponseBeginBlock) *Response {
    return &Response{
        Value: &Response_BeginBlock{&res},
    }
}

func ToResponseEndBlock(res ResponseEndBlock) *Response {
    return &Response{
        Value: &Response_EndBlock{&res},
    }
}

func ToResponseListSnapshots(res ResponseListSnapshots) *Response {
    return &Response{
        Value: &Response_ListSnapshots{&res},
@@ -200,3 +196,33 @@ func ToResponseApplySnapshotChunk(res ResponseApplySnapshotChunk) *Response {
        Value: &Response_ApplySnapshotChunk{&res},
    }
}

func ToResponseExtendVote(res ResponseExtendVote) *Response {
    return &Response{
        Value: &Response_ExtendVote{&res},
    }
}

func ToResponseVerifyVoteExtension(res ResponseVerifyVoteExtension) *Response {
    return &Response{
        Value: &Response_VerifyVoteExtension{&res},
    }
}

func ToResponsePrepareProposal(res ResponsePrepareProposal) *Response {
    return &Response{
        Value: &Response_PrepareProposal{&res},
    }
}

func ToResponseProcessProposal(res ResponseProcessProposal) *Response {
    return &Response{
        Value: &Response_ProcessProposal{&res},
    }
}

func ToResponseFinalizeBlock(res ResponseFinalizeBlock) *Response {
    return &Response{
        Value: &Response_FinalizeBlock{&res},
    }
}

@@ -13,7 +13,7 @@ import (
)

func TestMarshalJSON(t *testing.T) {
    b, err := json.Marshal(&ResponseDeliverTx{})
    b, err := json.Marshal(&ExecTxResult{Code: 1})
    assert.NoError(t, err)
    // include empty fields.
    assert.True(t, strings.Contains(string(b), "code"))

209  abci/types/mocks/application.go  Normal file
@@ -0,0 +1,209 @@
// Code generated by mockery. DO NOT EDIT.

package mocks

import (
    mock "github.com/stretchr/testify/mock"
    types "github.com/tendermint/tendermint/abci/types"
)

// Application is an autogenerated mock type for the Application type
type Application struct {
    mock.Mock
}

// ApplySnapshotChunk provides a mock function with given fields: _a0
func (_m *Application) ApplySnapshotChunk(_a0 types.RequestApplySnapshotChunk) types.ResponseApplySnapshotChunk {
    ret := _m.Called(_a0)

    var r0 types.ResponseApplySnapshotChunk
    if rf, ok := ret.Get(0).(func(types.RequestApplySnapshotChunk) types.ResponseApplySnapshotChunk); ok {
        r0 = rf(_a0)
    } else {
        r0 = ret.Get(0).(types.ResponseApplySnapshotChunk)
    }

    return r0
}

// CheckTx provides a mock function with given fields: _a0
func (_m *Application) CheckTx(_a0 types.RequestCheckTx) types.ResponseCheckTx {
    ret := _m.Called(_a0)

    var r0 types.ResponseCheckTx
    if rf, ok := ret.Get(0).(func(types.RequestCheckTx) types.ResponseCheckTx); ok {
        r0 = rf(_a0)
    } else {
        r0 = ret.Get(0).(types.ResponseCheckTx)
    }

    return r0
}

// Commit provides a mock function with given fields:
func (_m *Application) Commit() types.ResponseCommit {
    ret := _m.Called()

    var r0 types.ResponseCommit
    if rf, ok := ret.Get(0).(func() types.ResponseCommit); ok {
        r0 = rf()
    } else {
        r0 = ret.Get(0).(types.ResponseCommit)
    }

    return r0
}

// ExtendVote provides a mock function with given fields: _a0
func (_m *Application) ExtendVote(_a0 types.RequestExtendVote) types.ResponseExtendVote {
    ret := _m.Called(_a0)

    var r0 types.ResponseExtendVote
    if rf, ok := ret.Get(0).(func(types.RequestExtendVote) types.ResponseExtendVote); ok {
        r0 = rf(_a0)
    } else {
        r0 = ret.Get(0).(types.ResponseExtendVote)
    }

    return r0
}

// FinalizeBlock provides a mock function with given fields: _a0
func (_m *Application) FinalizeBlock(_a0 types.RequestFinalizeBlock) types.ResponseFinalizeBlock {
    ret := _m.Called(_a0)

    var r0 types.ResponseFinalizeBlock
    if rf, ok := ret.Get(0).(func(types.RequestFinalizeBlock) types.ResponseFinalizeBlock); ok {
        r0 = rf(_a0)
    } else {
        r0 = ret.Get(0).(types.ResponseFinalizeBlock)
    }

    return r0
}

// Info provides a mock function with given fields: _a0
func (_m *Application) Info(_a0 types.RequestInfo) types.ResponseInfo {
    ret := _m.Called(_a0)

    var r0 types.ResponseInfo
    if rf, ok := ret.Get(0).(func(types.RequestInfo) types.ResponseInfo); ok {
        r0 = rf(_a0)
    } else {
        r0 = ret.Get(0).(types.ResponseInfo)
    }

    return r0
}

// InitChain provides a mock function with given fields: _a0
func (_m *Application) InitChain(_a0 types.RequestInitChain) types.ResponseInitChain {
    ret := _m.Called(_a0)

    var r0 types.ResponseInitChain
    if rf, ok := ret.Get(0).(func(types.RequestInitChain) types.ResponseInitChain); ok {
        r0 = rf(_a0)
    } else {
        r0 = ret.Get(0).(types.ResponseInitChain)
    }

    return r0
}

// ListSnapshots provides a mock function with given fields: _a0
func (_m *Application) ListSnapshots(_a0 types.RequestListSnapshots) types.ResponseListSnapshots {
    ret := _m.Called(_a0)

    var r0 types.ResponseListSnapshots
    if rf, ok := ret.Get(0).(func(types.RequestListSnapshots) types.ResponseListSnapshots); ok {
        r0 = rf(_a0)
    } else {
        r0 = ret.Get(0).(types.ResponseListSnapshots)
    }

    return r0
}

// LoadSnapshotChunk provides a mock function with given fields: _a0
func (_m *Application) LoadSnapshotChunk(_a0 types.RequestLoadSnapshotChunk) types.ResponseLoadSnapshotChunk {
    ret := _m.Called(_a0)

    var r0 types.ResponseLoadSnapshotChunk
    if rf, ok := ret.Get(0).(func(types.RequestLoadSnapshotChunk) types.ResponseLoadSnapshotChunk); ok {
        r0 = rf(_a0)
    } else {
        r0 = ret.Get(0).(types.ResponseLoadSnapshotChunk)
    }

    return r0
}

// OfferSnapshot provides a mock function with given fields: _a0
func (_m *Application) OfferSnapshot(_a0 types.RequestOfferSnapshot) types.ResponseOfferSnapshot {
    ret := _m.Called(_a0)

    var r0 types.ResponseOfferSnapshot
    if rf, ok := ret.Get(0).(func(types.RequestOfferSnapshot) types.ResponseOfferSnapshot); ok {
        r0 = rf(_a0)
    } else {
        r0 = ret.Get(0).(types.ResponseOfferSnapshot)
    }

    return r0
}

// PrepareProposal provides a mock function with given fields: _a0
func (_m *Application) PrepareProposal(_a0 types.RequestPrepareProposal) types.ResponsePrepareProposal {
    ret := _m.Called(_a0)

    var r0 types.ResponsePrepareProposal
    if rf, ok := ret.Get(0).(func(types.RequestPrepareProposal) types.ResponsePrepareProposal); ok {
        r0 = rf(_a0)
    } else {
        r0 = ret.Get(0).(types.ResponsePrepareProposal)
    }

    return r0
}

// ProcessProposal provides a mock function with given fields: _a0
func (_m *Application) ProcessProposal(_a0 types.RequestProcessProposal) types.ResponseProcessProposal {
    ret := _m.Called(_a0)

    var r0 types.ResponseProcessProposal
    if rf, ok := ret.Get(0).(func(types.RequestProcessProposal) types.ResponseProcessProposal); ok {
        r0 = rf(_a0)
    } else {
        r0 = ret.Get(0).(types.ResponseProcessProposal)
    }

    return r0
}

// Query provides a mock function with given fields: _a0
func (_m *Application) Query(_a0 types.RequestQuery) types.ResponseQuery {
    ret := _m.Called(_a0)

    var r0 types.ResponseQuery
    if rf, ok := ret.Get(0).(func(types.RequestQuery) types.ResponseQuery); ok {
        r0 = rf(_a0)
    } else {
        r0 = ret.Get(0).(types.ResponseQuery)
    }

    return r0
}

// VerifyVoteExtension provides a mock function with given fields: _a0
func (_m *Application) VerifyVoteExtension(_a0 types.RequestVerifyVoteExtension) types.ResponseVerifyVoteExtension {
    ret := _m.Called(_a0)

    var r0 types.ResponseVerifyVoteExtension
    if rf, ok := ret.Get(0).(func(types.RequestVerifyVoteExtension) types.ResponseVerifyVoteExtension); ok {
        r0 = rf(_a0)
    } else {
        r0 = ret.Get(0).(types.ResponseVerifyVoteExtension)
    }

    return r0
}
189  abci/types/mocks/base.go  Normal file
@@ -0,0 +1,189 @@
package mocks

import (
    types "github.com/tendermint/tendermint/abci/types"
)

// BaseMock provides a wrapper around the generated Application mock and a BaseApplication.
// BaseMock first tries to use the mock's implementation of the method.
// If no functionality was provided for the mock by the user, BaseMock dispatches
// to the BaseApplication and uses its functionality.
// BaseMock allows users to provide mocked functionality for only the methods that matter
// for their test while avoiding a panic if the code calls Application methods that are
// not relevant to the test.
type BaseMock struct {
    base *types.BaseApplication
    *Application
}

func NewBaseMock() BaseMock {
    return BaseMock{
        base:        types.NewBaseApplication(),
        Application: new(Application),
    }
}
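
// Illustrative sketch, not part of the generated file: a test can stub only the
// methods it cares about and let everything else fall back to BaseApplication.
// It assumes `mock "github.com/stretchr/testify/mock"` is imported; the function
// name and response values are arbitrary examples.
func exampleBaseMockUsage() {
    app := NewBaseMock()

    // Only Info is stubbed; the testify expectation drives the generated mock.
    app.Application.On("Info", mock.Anything).Return(types.ResponseInfo{Data: "demo"})

    _ = app.Info(types.RequestInfo{}) // returns the stubbed ResponseInfo
    _ = app.Commit()                  // no stub: the mock panics, BaseMock recovers
                                      // and dispatches to BaseApplication.Commit
}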

// Info/Query Connection
// Return application info
func (m BaseMock) Info(input types.RequestInfo) types.ResponseInfo {
    var ret types.ResponseInfo
    defer func() {
        if r := recover(); r != nil {
            ret = m.base.Info(input)
        }
    }()
    ret = m.Application.Info(input)
    return ret
}

func (m BaseMock) Query(input types.RequestQuery) types.ResponseQuery {
    var ret types.ResponseQuery
    defer func() {
        if r := recover(); r != nil {
            ret = m.base.Query(input)
        }
    }()
    ret = m.Application.Query(input)
    return ret
}

// Mempool Connection
// Validate a tx for the mempool
func (m BaseMock) CheckTx(input types.RequestCheckTx) types.ResponseCheckTx {
    var ret types.ResponseCheckTx
    defer func() {
        if r := recover(); r != nil {
            ret = m.base.CheckTx(input)
        }
    }()
    ret = m.Application.CheckTx(input)
    return ret
}

// Consensus Connection
// Initialize blockchain w validators/other info from TendermintCore
func (m BaseMock) InitChain(input types.RequestInitChain) types.ResponseInitChain {
    var ret types.ResponseInitChain
    defer func() {
        if r := recover(); r != nil {
            ret = m.base.InitChain(input)
        }
    }()
    ret = m.Application.InitChain(input)
    return ret
}

func (m BaseMock) PrepareProposal(input types.RequestPrepareProposal) types.ResponsePrepareProposal {
    var ret types.ResponsePrepareProposal
    defer func() {
        if r := recover(); r != nil {
            ret = m.base.PrepareProposal(input)
        }
    }()
    ret = m.Application.PrepareProposal(input)
    return ret
}

func (m BaseMock) ProcessProposal(input types.RequestProcessProposal) types.ResponseProcessProposal {
    var ret types.ResponseProcessProposal
    defer func() {
        if r := recover(); r != nil {
            ret = m.base.ProcessProposal(input)
        }
    }()
    ret = m.Application.ProcessProposal(input)
    return ret
}

// Commit the state and return the application Merkle root hash
func (m BaseMock) Commit() types.ResponseCommit {
    var ret types.ResponseCommit
    defer func() {
        if r := recover(); r != nil {
            ret = m.base.Commit()
        }
    }()
    ret = m.Application.Commit()
    return ret
}

// Create application specific vote extension
func (m BaseMock) ExtendVote(input types.RequestExtendVote) types.ResponseExtendVote {
    var ret types.ResponseExtendVote
    defer func() {
        if r := recover(); r != nil {
            ret = m.base.ExtendVote(input)
        }
    }()
    ret = m.Application.ExtendVote(input)
    return ret
}

// Verify application's vote extension data
func (m BaseMock) VerifyVoteExtension(input types.RequestVerifyVoteExtension) types.ResponseVerifyVoteExtension {
    var ret types.ResponseVerifyVoteExtension
    defer func() {
        if r := recover(); r != nil {
            ret = m.base.VerifyVoteExtension(input)
        }
    }()
    ret = m.Application.VerifyVoteExtension(input)
    return ret
}

// State Sync Connection
// List available snapshots
func (m BaseMock) ListSnapshots(input types.RequestListSnapshots) types.ResponseListSnapshots {
    var ret types.ResponseListSnapshots
    defer func() {
        if r := recover(); r != nil {
            ret = m.base.ListSnapshots(input)
        }
    }()
    ret = m.Application.ListSnapshots(input)
    return ret
}

func (m BaseMock) OfferSnapshot(input types.RequestOfferSnapshot) types.ResponseOfferSnapshot {
    var ret types.ResponseOfferSnapshot
    defer func() {
        if r := recover(); r != nil {
            ret = m.base.OfferSnapshot(input)
        }
    }()
    ret = m.Application.OfferSnapshot(input)
    return ret
}

func (m BaseMock) LoadSnapshotChunk(input types.RequestLoadSnapshotChunk) types.ResponseLoadSnapshotChunk {
    var ret types.ResponseLoadSnapshotChunk
    defer func() {
        if r := recover(); r != nil {
            ret = m.base.LoadSnapshotChunk(input)
        }
    }()
    ret = m.Application.LoadSnapshotChunk(input)
    return ret
}

func (m BaseMock) ApplySnapshotChunk(input types.RequestApplySnapshotChunk) types.ResponseApplySnapshotChunk {
    var ret types.ResponseApplySnapshotChunk
    defer func() {
        if r := recover(); r != nil {
            ret = m.base.ApplySnapshotChunk(input)
        }
    }()
    ret = m.Application.ApplySnapshotChunk(input)
    return ret
}

func (m BaseMock) FinalizeBlock(input types.RequestFinalizeBlock) types.ResponseFinalizeBlock {
    var ret types.ResponseFinalizeBlock
    defer func() {
        if r := recover(); r != nil {
            ret = m.base.FinalizeBlock(input)
        }
    }()
    ret = m.Application.FinalizeBlock(input)
    return ret
}
@@ -5,6 +5,8 @@ import (
    "encoding/json"

    "github.com/gogo/protobuf/jsonpb"

    types "github.com/tendermint/tendermint/proto/tendermint/types"
)

const (
@@ -31,6 +33,16 @@ func (r ResponseDeliverTx) IsErr() bool {
    return r.Code != CodeTypeOK
}

// IsOK returns true if Code is OK.
func (r ExecTxResult) IsOK() bool {
    return r.Code == CodeTypeOK
}

// IsErr returns true if Code is something other than OK.
func (r ExecTxResult) IsErr() bool {
    return r.Code != CodeTypeOK
}

// IsOK returns true if Code is OK.
func (r ResponseQuery) IsOK() bool {
    return r.Code == CodeTypeOK
@@ -41,6 +53,21 @@ func (r ResponseQuery) IsErr() bool {
    return r.Code != CodeTypeOK
}

// IsUnknown returns true if Code is Unknown
func (r ResponseVerifyVoteExtension) IsUnknown() bool {
    return r.Result == ResponseVerifyVoteExtension_UNKNOWN
}

// IsOK returns true if Code is OK
func (r ResponseVerifyVoteExtension) IsOK() bool {
    return r.Result == ResponseVerifyVoteExtension_ACCEPT
}

// IsErr returns true if Code is something other than OK.
func (r ResponseVerifyVoteExtension) IsErr() bool {
    return r.Result != ResponseVerifyVoteExtension_ACCEPT
}

//---------------------------------------------------------------------------
// override JSON marshaling so we emit defaults (ie. disable omitempty)

@@ -118,3 +145,53 @@ var _ jsonRoundTripper = (*ResponseDeliverTx)(nil)
var _ jsonRoundTripper = (*ResponseCheckTx)(nil)

var _ jsonRoundTripper = (*EventAttribute)(nil)

// -----------------------------------------------
// construct Result data

func RespondExtendVote(appDataToSign, appDataSelfAuthenticating []byte) ResponseExtendVote {
    return ResponseExtendVote{
        VoteExtension: &types.VoteExtension{
            AppDataToSign:             appDataToSign,
            AppDataSelfAuthenticating: appDataSelfAuthenticating,
        },
    }
}

func RespondVerifyVoteExtension(ok bool) ResponseVerifyVoteExtension {
    result := ResponseVerifyVoteExtension_REJECT
    if ok {
        result = ResponseVerifyVoteExtension_ACCEPT
    }
    return ResponseVerifyVoteExtension{
        Result: result,
    }
}

// deterministicExecTxResult constructs a copy of response that omits
// non-deterministic fields. The input response is not modified.
func deterministicExecTxResult(response *ExecTxResult) *ExecTxResult {
    return &ExecTxResult{
        Code:      response.Code,
        Data:      response.Data,
        GasWanted: response.GasWanted,
        GasUsed:   response.GasUsed,
    }
}

// MarshalTxResults encodes the TxResults as a list of byte
// slices. It strips off the non-deterministic pieces of the TxResults
// so that the resulting data can be used for hash comparisons and used
// in Merkle proofs.
func MarshalTxResults(r []*ExecTxResult) ([][]byte, error) {
    s := make([][]byte, len(r))
    for i, e := range r {
        d := deterministicExecTxResult(e)
        b, err := d.Marshal()
        if err != nil {
            return nil, err
        }
        s[i] = b
    }
    return s, nil
}
File diff suppressed because it is too large
74  abci/types/types_test.go  Normal file
@@ -0,0 +1,74 @@
package types_test

import (
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"

    abci "github.com/tendermint/tendermint/abci/types"
    "github.com/tendermint/tendermint/crypto/merkle"
)

func TestHashAndProveResults(t *testing.T) {
    trs := []*abci.ExecTxResult{
        // Note, these tests rely on the first two entries being in this order.
        {Code: 0, Data: nil},
        {Code: 0, Data: []byte{}},

        {Code: 0, Data: []byte("one")},
        {Code: 14, Data: nil},
        {Code: 14, Data: []byte("foo")},
        {Code: 14, Data: []byte("bar")},
    }

    // Nil and []byte{} should produce the same bytes
    bz0, err := trs[0].Marshal()
    require.NoError(t, err)
    bz1, err := trs[1].Marshal()
    require.NoError(t, err)
    require.Equal(t, bz0, bz1)

    // Make sure that we can get a root hash from results and verify proofs.
    rs, err := abci.MarshalTxResults(trs)
    require.NoError(t, err)
    root := merkle.HashFromByteSlices(rs)
    assert.NotEmpty(t, root)

    _, proofs := merkle.ProofsFromByteSlices(rs)
    for i, tr := range trs {
        bz, err := tr.Marshal()
        require.NoError(t, err)

        valid := proofs[i].Verify(root, bz)
        assert.NoError(t, valid, "%d", i)
    }
}

func TestHashDeterministicFieldsOnly(t *testing.T) {
    tr1 := abci.ExecTxResult{
        Code:      1,
        Data:      []byte("transaction"),
        Log:       "nondeterministic data: abc",
        Info:      "nondeterministic data: abc",
        GasWanted: 1000,
        GasUsed:   1000,
        Events:    []abci.Event{},
        Codespace: "nondeterministic.data.abc",
    }
    tr2 := abci.ExecTxResult{
        Code:      1,
        Data:      []byte("transaction"),
        Log:       "nondeterministic data: def",
        Info:      "nondeterministic data: def",
        GasWanted: 1000,
        GasUsed:   1000,
        Events:    []abci.Event{},
        Codespace: "nondeterministic.data.def",
    }
    r1, err := abci.MarshalTxResults([]*abci.ExecTxResult{&tr1})
    require.NoError(t, err)
    r2, err := abci.MarshalTxResults([]*abci.ExecTxResult{&tr2})
    require.NoError(t, err)
    require.Equal(t, merkle.HashFromByteSlices(r1), merkle.HashFromByteSlices(r2))
}
9  buf.gen.yaml  Normal file
@@ -0,0 +1,9 @@
version: v1
plugins:
  - name: gogofaster
    out: ./proto/
    opt:
      - Mgoogle/protobuf/timestamp.proto=github.com/gogo/protobuf/types
      - Mgoogle/protobuf/duration.proto=github.com/golang/protobuf/ptypes/duration
      - plugins=grpc
      - paths=source_relative
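# Illustrative note, not part of the diff: these options are consumed by the buf
# code generator, so regenerating the protobuf stubs is typically a matter of
# running `buf generate` from the repository root, with buf.work.yaml (below)
# telling buf where the proto files live.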
3  buf.work.yaml  Normal file
@@ -0,0 +1,3 @@
version: v1
directories:
  - proto
@@ -45,12 +45,16 @@ func main() {
        keyFile        = flag.String("keyfile", "", "absolute path to server key")
        rootCA         = flag.String("rootcafile", "", "absolute path to root CA")
        prometheusAddr = flag.String("prometheus-addr", "", "address for prometheus endpoint (host:port)")

        logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo).
            With("module", "priv_val")
    )
    flag.Parse()

    logger, err := log.NewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
    if err != nil {
        fmt.Fprintf(os.Stderr, "failed to construct logger: %v", err)
        os.Exit(1)
    }
    logger = logger.With("module", "priv_val")

    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

cmd/tendermint/commands/completion.go (new file, +46)
@@ -0,0 +1,46 @@
package commands

import (
	"fmt"

	"github.com/spf13/cobra"
)

// NewCompletionCmd returns a cobra.Command that generates bash and zsh
// completion scripts for the given root command. If hidden is true, the
// command will not show up in the root command's list of available commands.
func NewCompletionCmd(rootCmd *cobra.Command, hidden bool) *cobra.Command {
	flagZsh := "zsh"
	cmd := &cobra.Command{
		Use:   "completion",
		Short: "Generate shell completion scripts",
		Long: fmt.Sprintf(`Generate Bash and Zsh completion scripts and print them to STDOUT.

Once saved to file, a completion script can be loaded in the shell's
current session as shown:

   $ . <(%s completion)

To configure your bash shell to load completions for each session add to
your $HOME/.bashrc or $HOME/.profile the following instruction:

   . <(%s completion)
`, rootCmd.Use, rootCmd.Use),
		RunE: func(cmd *cobra.Command, _ []string) error {
			zsh, err := cmd.Flags().GetBool(flagZsh)
			if err != nil {
				return err
			}
			if zsh {
				return rootCmd.GenZshCompletion(cmd.OutOrStdout())
			}
			return rootCmd.GenBashCompletion(cmd.OutOrStdout())
		},
		Hidden: hidden,
		Args:   cobra.NoArgs,
	}

	cmd.Flags().Bool(flagZsh, false, "Generate Zsh completion script")

	return cmd
}
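Because NewCompletionCmd is a constructor rather than a package-level variable registered in init, a caller has to attach it to the root command explicitly. A minimal wiring sketch is shown below; the registerCompletion helper is hypothetical and not part of this diff, and hidden=false simply keeps the command visible in the help output.

package main

import (
	"github.com/spf13/cobra"

	"github.com/tendermint/tendermint/cmd/tendermint/commands"
)

// registerCompletion attaches the completion command to an existing root command.
func registerCompletion(rootCmd *cobra.Command) {
	rootCmd.AddCommand(commands.NewCompletionCmd(rootCmd, false))
}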
@@ -2,38 +2,29 @@ package debug

import (
"github.com/spf13/cobra"

"github.com/tendermint/tendermint/libs/log"
)

var (
nodeRPCAddr string
profAddr string
frequency uint

const (
flagNodeRPCAddr = "rpc-laddr"
flagProfAddr = "pprof-laddr"
flagFrequency = "frequency"

logger = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
)

// DebugCmd defines the root command containing subcommands that assist in
// debugging running Tendermint processes.
var DebugCmd = &cobra.Command{
Use: "debug",
Short: "A utility to kill or watch a Tendermint process while aggregating debugging data",
}

func init() {
DebugCmd.PersistentFlags().SortFlags = true
DebugCmd.PersistentFlags().StringVar(
&nodeRPCAddr,
func GetDebugCommand(logger log.Logger) *cobra.Command {
cmd := &cobra.Command{
Use: "debug",
Short: "A utility to kill or watch a Tendermint process while aggregating debugging data",
}
cmd.PersistentFlags().SortFlags = true
cmd.PersistentFlags().String(
flagNodeRPCAddr,
"tcp://localhost:26657",
"the Tendermint node's RPC address (<host>:<port>)",
"the Tendermint node's RPC address <host>:<port>)",
)

DebugCmd.AddCommand(killCmd)
DebugCmd.AddCommand(dumpCmd)
cmd.AddCommand(getKillCmd(logger))
cmd.AddCommand(getDumpCmd(logger))
return cmd

}

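This hunk replaces the package-level DebugCmd and its init registration with a constructor that takes a logger. Call sites therefore move from referencing a global to building the command with an explicitly constructed logger. A hedged sketch of what such a caller might look like follows; the attachDebug helper and the bare root command are placeholders, not taken from this diff.

package main

import (
	"os"

	"github.com/spf13/cobra"

	"github.com/tendermint/tendermint/cmd/tendermint/commands/debug"
	"github.com/tendermint/tendermint/libs/log"
)

// attachDebug builds the logger once and passes it down, instead of relying
// on a package-level logger inside the debug package.
func attachDebug(rootCmd *cobra.Command) error {
	logger, err := log.NewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
	if err != nil {
		return err
	}
	rootCmd.AddCommand(debug.GetDebugCommand(logger))
	return nil
}

func main() {
	root := &cobra.Command{Use: "tendermint"}
	if err := attachDebug(root); err != nil {
		os.Exit(1)
	}
	_ = root.Execute()
}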
@@ -13,78 +13,102 @@ import (

"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/cli"
"github.com/tendermint/tendermint/libs/log"
rpchttp "github.com/tendermint/tendermint/rpc/client/http"
)

var dumpCmd = &cobra.Command{
Use: "dump [output-directory]",
Short: "Continuously poll a Tendermint process and dump debugging data into a single location",
Long: `Continuously poll a Tendermint process and dump debugging data into a single
func getDumpCmd(logger log.Logger) *cobra.Command {
cmd := &cobra.Command{
Use: "dump [output-directory]",
Short: "Continuously poll a Tendermint process and dump debugging data into a single location",
Long: `Continuously poll a Tendermint process and dump debugging data into a single
location at a specified frequency. At each frequency interval, an archived and compressed
file will contain node debugging information including the goroutine and heap profiles
if enabled.`,
Args: cobra.ExactArgs(1),
RunE: dumpCmdHandler,
}
Args: cobra.ExactArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
outDir := args[0]
if outDir == "" {
return errors.New("invalid output directory")
}
frequency, err := cmd.Flags().GetUint(flagFrequency)
if err != nil {
return fmt.Errorf("flag %q not defined: %w", flagFrequency, err)
}

func init() {
dumpCmd.Flags().UintVar(
&frequency,
if frequency == 0 {
return errors.New("frequency must be positive")
}

nodeRPCAddr, err := cmd.Flags().GetString(flagNodeRPCAddr)
if err != nil {
return fmt.Errorf("flag %q not defined: %w", flagNodeRPCAddr, err)
}

profAddr, err := cmd.Flags().GetString(flagProfAddr)
if err != nil {
return fmt.Errorf("flag %q not defined: %w", flagProfAddr, err)
}

if _, err := os.Stat(outDir); os.IsNotExist(err) {
if err := os.Mkdir(outDir, os.ModePerm); err != nil {
return fmt.Errorf("failed to create output directory: %w", err)
}
}

rpc, err := rpchttp.New(nodeRPCAddr)
if err != nil {
return fmt.Errorf("failed to create new http client: %w", err)
}

ctx := cmd.Context()

home := viper.GetString(cli.HomeFlag)
conf := config.DefaultConfig()
conf = conf.SetRoot(home)
config.EnsureRoot(conf.RootDir)

dumpArgs := dumpDebugDataArgs{
conf: conf,
outDir: outDir,
profAddr: profAddr,
}
dumpDebugData(ctx, logger, rpc, dumpArgs)

ticker := time.NewTicker(time.Duration(frequency) * time.Second)
for range ticker.C {
dumpDebugData(ctx, logger, rpc, dumpArgs)
}

return nil
},
}
cmd.Flags().Uint(
flagFrequency,
30,
"the frequency (seconds) in which to poll, aggregate and dump Tendermint debug data",
)

dumpCmd.Flags().StringVar(
&profAddr,
cmd.Flags().String(
flagProfAddr,
"",
"the profiling server address (<host>:<port>)",
)

return cmd

}

func dumpCmdHandler(cmd *cobra.Command, args []string) error {
outDir := args[0]
if outDir == "" {
return errors.New("invalid output directory")
}

if frequency == 0 {
return errors.New("frequency must be positive")
}

if _, err := os.Stat(outDir); os.IsNotExist(err) {
if err := os.Mkdir(outDir, os.ModePerm); err != nil {
return fmt.Errorf("failed to create output directory: %w", err)
}
}

rpc, err := rpchttp.New(nodeRPCAddr)
if err != nil {
return fmt.Errorf("failed to create new http client: %w", err)
}

ctx := cmd.Context()

home := viper.GetString(cli.HomeFlag)
conf := config.DefaultConfig()
conf = conf.SetRoot(home)
config.EnsureRoot(conf.RootDir)

dumpDebugData(ctx, outDir, conf, rpc)

ticker := time.NewTicker(time.Duration(frequency) * time.Second)
for range ticker.C {
dumpDebugData(ctx, outDir, conf, rpc)
}

return nil
type dumpDebugDataArgs struct {
conf *config.Config
outDir string
profAddr string
}

func dumpDebugData(ctx context.Context, outDir string, conf *config.Config, rpc *rpchttp.HTTP) {
func dumpDebugData(ctx context.Context, logger log.Logger, rpc *rpchttp.HTTP, args dumpDebugDataArgs) {
start := time.Now().UTC()

tmpDir, err := os.MkdirTemp(outDir, "tendermint_debug_tmp")
tmpDir, err := os.MkdirTemp(args.outDir, "tendermint_debug_tmp")
if err != nil {
logger.Error("failed to create temporary directory", "dir", tmpDir, "error", err)
return
@@ -110,26 +134,26 @@ func dumpDebugData(ctx context.Context, outDir string, conf *config.Config, rpc
}

logger.Info("copying node WAL...")
if err := copyWAL(conf, tmpDir); err != nil {
if err := copyWAL(args.conf, tmpDir); err != nil {
logger.Error("failed to copy node WAL", "error", err)
return
}

if profAddr != "" {
if args.profAddr != "" {
logger.Info("getting node goroutine profile...")
if err := dumpProfile(tmpDir, profAddr, "goroutine", 2); err != nil {
if err := dumpProfile(tmpDir, args.profAddr, "goroutine", 2); err != nil {
logger.Error("failed to dump goroutine profile", "error", err)
return
}

logger.Info("getting node heap profile...")
if err := dumpProfile(tmpDir, profAddr, "heap", 2); err != nil {
if err := dumpProfile(tmpDir, args.profAddr, "heap", 2); err != nil {
logger.Error("failed to dump heap profile", "error", err)
return
}
}

outFile := filepath.Join(outDir, fmt.Sprintf("%s.zip", start.Format(time.RFC3339)))
outFile := filepath.Join(args.outDir, fmt.Sprintf("%s.zip", start.Format(time.RFC3339)))
if err := zipDir(tmpDir, outFile); err != nil {
logger.Error("failed to create and compress archive", "file", outFile, "error", err)
}

@@ -15,89 +15,96 @@ import (

"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/cli"
"github.com/tendermint/tendermint/libs/log"
rpchttp "github.com/tendermint/tendermint/rpc/client/http"
)

var killCmd = &cobra.Command{
Use: "kill [pid] [compressed-output-file]",
Short: "Kill a Tendermint process while aggregating and packaging debugging data",
Long: `Kill a Tendermint process while also aggregating Tendermint process data
func getKillCmd(logger log.Logger) *cobra.Command {
cmd := &cobra.Command{
Use: "kill [pid] [compressed-output-file]",
Short: "Kill a Tendermint process while aggregating and packaging debugging data",
Long: `Kill a Tendermint process while also aggregating Tendermint process data
such as the latest node state, including consensus and networking state,
go-routine state, and the node's WAL and config information. This aggregated data
is packaged into a compressed archive.

Example:
$ tendermint debug kill 34255 /path/to/tm-debug.zip`,
Args: cobra.ExactArgs(2),
RunE: killCmdHandler,
}
Args: cobra.ExactArgs(2),
RunE: func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
pid, err := strconv.ParseInt(args[0], 10, 64)
if err != nil {
return err
}

func killCmdHandler(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
pid, err := strconv.ParseInt(args[0], 10, 64)
if err != nil {
return err
outFile := args[1]
if outFile == "" {
return errors.New("invalid output file")
}
nodeRPCAddr, err := cmd.Flags().GetString(flagNodeRPCAddr)
if err != nil {
return fmt.Errorf("flag %q not defined: %w", flagNodeRPCAddr, err)
}

rpc, err := rpchttp.New(nodeRPCAddr)
if err != nil {
return fmt.Errorf("failed to create new http client: %w", err)
}

home := viper.GetString(cli.HomeFlag)
conf := config.DefaultConfig()
conf = conf.SetRoot(home)
config.EnsureRoot(conf.RootDir)

// Create a temporary directory which will contain all the state dumps and
// relevant files and directories that will be compressed into a file.
tmpDir, err := os.MkdirTemp(os.TempDir(), "tendermint_debug_tmp")
if err != nil {
return fmt.Errorf("failed to create temporary directory: %w", err)
}
defer os.RemoveAll(tmpDir)

logger.Info("getting node status...")
if err := dumpStatus(ctx, rpc, tmpDir, "status.json"); err != nil {
return err
}

logger.Info("getting node network info...")
if err := dumpNetInfo(ctx, rpc, tmpDir, "net_info.json"); err != nil {
return err
}

logger.Info("getting node consensus state...")
if err := dumpConsensusState(ctx, rpc, tmpDir, "consensus_state.json"); err != nil {
return err
}

logger.Info("copying node WAL...")
if err := copyWAL(conf, tmpDir); err != nil {
if !os.IsNotExist(err) {
return err
}

logger.Info("node WAL does not exist; continuing...")
}

logger.Info("copying node configuration...")
if err := copyConfig(home, tmpDir); err != nil {
return err
}

logger.Info("killing Tendermint process")
if err := killProc(int(pid), tmpDir); err != nil {
return err
}

logger.Info("archiving and compressing debug directory...")
return zipDir(tmpDir, outFile)
},
}

outFile := args[1]
if outFile == "" {
return errors.New("invalid output file")
}

rpc, err := rpchttp.New(nodeRPCAddr)
if err != nil {
return fmt.Errorf("failed to create new http client: %w", err)
}

home := viper.GetString(cli.HomeFlag)
conf := config.DefaultConfig()
conf = conf.SetRoot(home)
config.EnsureRoot(conf.RootDir)

// Create a temporary directory which will contain all the state dumps and
// relevant files and directories that will be compressed into a file.
tmpDir, err := os.MkdirTemp(os.TempDir(), "tendermint_debug_tmp")
if err != nil {
return fmt.Errorf("failed to create temporary directory: %w", err)
}
defer os.RemoveAll(tmpDir)

logger.Info("getting node status...")
if err := dumpStatus(ctx, rpc, tmpDir, "status.json"); err != nil {
return err
}

logger.Info("getting node network info...")
if err := dumpNetInfo(ctx, rpc, tmpDir, "net_info.json"); err != nil {
return err
}

logger.Info("getting node consensus state...")
if err := dumpConsensusState(ctx, rpc, tmpDir, "consensus_state.json"); err != nil {
return err
}

logger.Info("copying node WAL...")
if err := copyWAL(conf, tmpDir); err != nil {
if !os.IsNotExist(err) {
return err
}

logger.Info("node WAL does not exist; continuing...")
}

logger.Info("copying node configuration...")
if err := copyConfig(home, tmpDir); err != nil {
return err
}

logger.Info("killing Tendermint process")
if err := killProc(int(pid), tmpDir); err != nil {
return err
}

logger.Info("archiving and compressing debug directory...")
return zipDir(tmpDir, outFile)
return cmd
}

// killProc attempts to kill the Tendermint process with a given PID with an

@@ -12,30 +12,30 @@ import (

// GenValidatorCmd allows the generation of a keypair for a
// validator.
var GenValidatorCmd = &cobra.Command{
Use: "gen-validator",
Short: "Generate new validator keypair",
RunE: genValidator,
}
func MakeGenValidatorCommand() *cobra.Command {
var keyType string
cmd := &cobra.Command{
Use: "gen-validator",
Short: "Generate new validator keypair",
RunE: func(cmd *cobra.Command, args []string) error {
pv, err := privval.GenFilePV("", "", keyType)
if err != nil {
return err
}

func init() {
GenValidatorCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
jsbz, err := json.Marshal(pv)
if err != nil {
return fmt.Errorf("validator -> json: %w", err)
}

fmt.Printf("%v\n", string(jsbz))

return nil
},
}

cmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")
}

func genValidator(cmd *cobra.Command, args []string) error {
pv, err := privval.GenFilePV("", "", keyType)
if err != nil {
return err
}

jsbz, err := json.Marshal(pv)
if err != nil {
return fmt.Errorf("validator -> json: %w", err)
}

fmt.Printf(`%v
`, string(jsbz))

return nil

return cmd
}

@@ -7,7 +7,8 @@ import (

"github.com/spf13/cobra"

cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/log"
tmos "github.com/tendermint/tendermint/libs/os"
tmrand "github.com/tendermint/tendermint/libs/rand"
tmtime "github.com/tendermint/tendermint/libs/time"
@@ -15,43 +16,40 @@ import (
"github.com/tendermint/tendermint/types"
)

// InitFilesCmd initializes a fresh Tendermint Core instance.
var InitFilesCmd = &cobra.Command{
Use: "init [full|validator|seed]",
Short: "Initializes a Tendermint node",
ValidArgs: []string{"full", "validator", "seed"},
// We allow for zero args so we can throw a more informative error
Args: cobra.MaximumNArgs(1),
RunE: initFiles,
}

var (
keyType string
)

func init() {
InitFilesCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")
}

func initFiles(cmd *cobra.Command, args []string) error {
if len(args) == 0 {
return errors.New("must specify a node type: tendermint init [validator|full|seed]")
// MakeInitFilesCommand returns the command to initialize a fresh Tendermint Core instance.
func MakeInitFilesCommand(conf *config.Config, logger log.Logger) *cobra.Command {
var keyType string
cmd := &cobra.Command{
Use: "init [full|validator|seed]",
Short: "Initializes a Tendermint node",
ValidArgs: []string{"full", "validator", "seed"},
// We allow for zero args so we can throw a more informative error
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) == 0 {
return errors.New("must specify a node type: tendermint init [validator|full|seed]")
}
conf.Mode = args[0]
return initFilesWithConfig(cmd.Context(), conf, logger, keyType)
},
}
config.Mode = args[0]
return initFilesWithConfig(cmd.Context(), config)

cmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
"Key type to generate privval file with. Options: ed25519, secp256k1")

return cmd
}

func initFilesWithConfig(ctx context.Context, config *cfg.Config) error {
func initFilesWithConfig(ctx context.Context, conf *config.Config, logger log.Logger, keyType string) error {
var (
pv *privval.FilePV
err error
)

if config.Mode == cfg.ModeValidator {
if conf.Mode == config.ModeValidator {
// private validator
privValKeyFile := config.PrivValidator.KeyFile()
privValStateFile := config.PrivValidator.StateFile()
privValKeyFile := conf.PrivValidator.KeyFile()
privValStateFile := conf.PrivValidator.StateFile()
if tmos.FileExists(privValKeyFile) {
pv, err = privval.LoadFilePV(privValKeyFile, privValStateFile)
if err != nil {
@@ -73,7 +71,7 @@ func initFilesWithConfig(ctx context.Context, config *cfg.Config) error {
}
}

nodeKeyFile := config.NodeKeyFile()
nodeKeyFile := conf.NodeKeyFile()
if tmos.FileExists(nodeKeyFile) {
logger.Info("Found node key", "path", nodeKeyFile)
} else {
@@ -84,7 +82,7 @@ func initFilesWithConfig(ctx context.Context, config *cfg.Config) error {
}

// genesis file
genFile := config.GenesisFile()
genFile := conf.GenesisFile()
if tmos.FileExists(genFile) {
logger.Info("Found genesis file", "path", genFile)
} else {
@@ -123,10 +121,10 @@ func initFilesWithConfig(ctx context.Context, config *cfg.Config) error {
}

// write config file
if err := cfg.WriteConfigFile(config.RootDir, config); err != nil {
if err := config.WriteConfigFile(conf.RootDir, conf); err != nil {
return err
}
logger.Info("Generated config", "mode", config.Mode)
logger.Info("Generated config", "mode", conf.Mode)

return nil
}

@@ -6,14 +6,17 @@ import (

"github.com/spf13/cobra"

"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/inspect"
"github.com/tendermint/tendermint/libs/log"
)

// InspectCmd is the command for starting an inspect server.
var InspectCmd = &cobra.Command{
Use: "inspect",
Short: "Run an inspect server for investigating Tendermint state",
Long: `
// InspectCmd constructs the command to start an inspect server.
func MakeInspectCommand(conf *config.Config, logger log.Logger) *cobra.Command {
cmd := &cobra.Command{
Use: "inspect",
Short: "Run an inspect server for investigating Tendermint state",
Long: `
inspect runs a subset of Tendermint's RPC endpoints that are useful for debugging
issues with Tendermint.

@@ -22,33 +25,27 @@ var InspectCmd = &cobra.Command{
The inspect command can be used to query the block and state store using Tendermint
RPC calls to debug issues of inconsistent state.
`,
RunE: func(cmd *cobra.Command, args []string) error {
ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM, syscall.SIGINT)
defer cancel()

RunE: runInspect,
}
ins, err := inspect.NewFromConfig(logger, conf)
if err != nil {
return err
}

func init() {
InspectCmd.Flags().
String("rpc.laddr",
config.RPC.ListenAddress, "RPC listenener address. Port required")
InspectCmd.Flags().
String("db-backend",
config.DBBackend, "database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb")
InspectCmd.Flags().
String("db-dir", config.DBPath, "database directory")
}

func runInspect(cmd *cobra.Command, args []string) error {
ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM, syscall.SIGINT)
defer cancel()

ins, err := inspect.NewFromConfig(logger, config)
if err != nil {
return err
logger.Info("starting inspect server")
if err := ins.Run(ctx); err != nil {
return err
}
return nil
},
}
cmd.Flags().String("rpc.laddr",
conf.RPC.ListenAddress, "RPC listenener address. Port required")
cmd.Flags().String("db-backend",
conf.DBBackend, "database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb")
cmd.Flags().String("db-dir", conf.DBPath, "database directory")

logger.Info("starting inspect server")
if err := ins.Run(ctx); err != nil {
return err
}
return nil
return cmd
}

@@ -5,11 +5,13 @@ import (
"fmt"

"github.com/spf13/cobra"

cfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/scripts/keymigrate"
)

func MakeKeyMigrateCommand() *cobra.Command {
func MakeKeyMigrateCommand(conf *cfg.Config, logger log.Logger) *cobra.Command {
cmd := &cobra.Command{
Use: "key-migrate",
Short: "Run Database key migration",
@@ -38,7 +40,7 @@ func MakeKeyMigrateCommand() *cobra.Command {

db, err := cfg.DefaultDBProvider(&cfg.DBContext{
ID: dbctx,
Config: config,
Config: conf,
})

if err != nil {
@@ -58,7 +60,7 @@ func MakeKeyMigrateCommand() *cobra.Command {
}

// allow database info to be overridden via cli
addDBFlags(cmd)
addDBFlags(cmd, conf)

return cmd
}

@@ -1,7 +1,6 @@
package commands

import (
"context"
"errors"
"fmt"
"net/http"
@@ -15,6 +14,7 @@ import (
"github.com/spf13/cobra"
dbm "github.com/tendermint/tm-db"

"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/libs/log"
tmmath "github.com/tendermint/tendermint/libs/math"
"github.com/tendermint/tendermint/light"
@@ -24,20 +24,69 @@ import (
rpcserver "github.com/tendermint/tendermint/rpc/jsonrpc/server"
)

// LightCmd represents the base command when called without any subcommands
var LightCmd = &cobra.Command{
Use: "light [chainID]",
Short: "Run a light client proxy server, verifying Tendermint rpc",
Long: `Run a light client proxy server, verifying Tendermint rpc.
// LightCmd constructs the base command called when invoked without any subcommands.
func MakeLightCommand(conf *config.Config, logger log.Logger) *cobra.Command {
var (
listenAddr string
primaryAddr string
witnessAddrsJoined string
chainID string
dir string
maxOpenConnections int

sequential bool
trustingPeriod time.Duration
trustedHeight int64
trustedHash []byte
trustLevelStr string

logLevel string
logFormat string

primaryKey = []byte("primary")
witnessesKey = []byte("witnesses")
)

checkForExistingProviders := func(db dbm.DB) (string, []string, error) {
primaryBytes, err := db.Get(primaryKey)
if err != nil {
return "", []string{""}, err
}
witnessesBytes, err := db.Get(witnessesKey)
if err != nil {
return "", []string{""}, err
}
witnessesAddrs := strings.Split(string(witnessesBytes), ",")
return string(primaryBytes), witnessesAddrs, nil
}

saveProviders := func(db dbm.DB, primaryAddr, witnessesAddrs string) error {
err := db.Set(primaryKey, []byte(primaryAddr))
if err != nil {
return fmt.Errorf("failed to save primary provider: %w", err)
}
err = db.Set(witnessesKey, []byte(witnessesAddrs))
if err != nil {
return fmt.Errorf("failed to save witness providers: %w", err)
}
return nil
}

cmd := &cobra.Command{
Use: "light [chainID]",
Short: "Run a light client proxy server, verifying Tendermint rpc",
Long: `Run a light client proxy server, verifying Tendermint rpc.

All calls that can be tracked back to a block header by a proof
will be verified before passing them back to the caller. Other than
that, it will present the same interface as a full Tendermint node.

Furthermore to the chainID, a fresh instance of a light client will
need a primary RPC address, a trusted hash and height and witness RPC addresses
(if not using sequential verification). To restart the node, thereafter
only the chainID is required.
need a primary RPC address and a trusted hash and height. It is also highly
recommended to provide additional witness RPC addresses, especially if
not using sequential verification.

To restart the node, thereafter only the chainID is required.

When /abci_query is called, the Merkle key path format is:

@@ -46,185 +95,138 @@ When /abci_query is called, the Merkle key path format is:
Please verify with your application that this Merkle key format is used (true
for applications built w/ Cosmos SDK).
`,
RunE: runProxy,
Args: cobra.ExactArgs(1),
Example: `light cosmoshub-3 -p http://52.57.29.196:26657 -w http://public-seed-node.cosmoshub.certus.one:26657
RunE: func(cmd *cobra.Command, args []string) error {
chainID = args[0]
logger.Info("Creating client...", "chainID", chainID)

var witnessesAddrs []string
if witnessAddrsJoined != "" {
witnessesAddrs = strings.Split(witnessAddrsJoined, ",")
}

lightDB, err := dbm.NewGoLevelDB("light-client-db", dir)
if err != nil {
return fmt.Errorf("can't create a db: %w", err)
}
// create a prefixed db on the chainID
db := dbm.NewPrefixDB(lightDB, []byte(chainID))

if primaryAddr == "" { // check to see if we can start from an existing state
var err error
primaryAddr, witnessesAddrs, err = checkForExistingProviders(db)
if err != nil {
return fmt.Errorf("failed to retrieve primary or witness from db: %w", err)
}
if primaryAddr == "" {
return errors.New("no primary address was provided nor found. Please provide a primary (using -p)." +
" Run the command: tendermint light --help for more information")
}
} else {
err := saveProviders(db, primaryAddr, witnessAddrsJoined)
if err != nil {
logger.Error("Unable to save primary and or witness addresses", "err", err)
}
}

if len(witnessesAddrs) < 1 && !sequential {
logger.Info("In skipping verification mode it is highly recommended to provide at least one witness")
}

trustLevel, err := tmmath.ParseFraction(trustLevelStr)
if err != nil {
return fmt.Errorf("can't parse trust level: %w", err)
}

options := []light.Option{light.Logger(logger)}

vo := light.SkippingVerification(trustLevel)
if sequential {
vo = light.SequentialVerification()
}
options = append(options, vo)

// Initiate the light client. If the trusted store already has blocks in it, this
// will be used else we use the trusted options.
c, err := light.NewHTTPClient(
cmd.Context(),
chainID,
light.TrustOptions{
Period: trustingPeriod,
Height: trustedHeight,
Hash: trustedHash,
},
primaryAddr,
witnessesAddrs,
dbs.New(db),
options...,
)
if err != nil {
return err
}

cfg := rpcserver.DefaultConfig()
cfg.MaxBodyBytes = conf.RPC.MaxBodyBytes
cfg.MaxHeaderBytes = conf.RPC.MaxHeaderBytes
cfg.MaxOpenConnections = maxOpenConnections
// If necessary adjust global WriteTimeout to ensure it's greater than
// TimeoutBroadcastTxCommit.
// See https://github.com/tendermint/tendermint/issues/3435
if cfg.WriteTimeout <= conf.RPC.TimeoutBroadcastTxCommit {
cfg.WriteTimeout = conf.RPC.TimeoutBroadcastTxCommit + 1*time.Second
}

p, err := lproxy.NewProxy(c, listenAddr, primaryAddr, cfg, logger, lrpc.KeyPathFn(lrpc.DefaultMerkleKeyPathFn()))
if err != nil {
return err
}

ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
defer cancel()

go func() {
<-ctx.Done()
p.Listener.Close()
}()

logger.Info("Starting proxy...", "laddr", listenAddr)
if err := p.ListenAndServe(ctx); err != http.ErrServerClosed {
// Error starting or closing listener:
logger.Error("proxy ListenAndServe", "err", err)
}

return nil
},
Args: cobra.ExactArgs(1),
Example: `light cosmoshub-3 -p http://52.57.29.196:26657 -w http://public-seed-node.cosmoshub.certus.one:26657
--height 962118 --hash 28B97BE9F6DE51AC69F70E0B7BFD7E5C9CD1A595B7DC31AFF27C50D4948020CD`,
}
}

var (
listenAddr string
primaryAddr string
witnessAddrsJoined string
chainID string
dir string
maxOpenConnections int

sequential bool
trustingPeriod time.Duration
trustedHeight int64
trustedHash []byte
trustLevelStr string

logLevel string
logFormat string

primaryKey = []byte("primary")
witnessesKey = []byte("witnesses")
)

func init() {
LightCmd.Flags().StringVar(&listenAddr, "laddr", "tcp://localhost:8888",
cmd.Flags().StringVar(&listenAddr, "laddr", "tcp://localhost:8888",
"serve the proxy on the given address")
LightCmd.Flags().StringVarP(&primaryAddr, "primary", "p", "",
cmd.Flags().StringVarP(&primaryAddr, "primary", "p", "",
"connect to a Tendermint node at this address")
LightCmd.Flags().StringVarP(&witnessAddrsJoined, "witnesses", "w", "",
cmd.Flags().StringVarP(&witnessAddrsJoined, "witnesses", "w", "",
"tendermint nodes to cross-check the primary node, comma-separated")
LightCmd.Flags().StringVarP(&dir, "dir", "d", os.ExpandEnv(filepath.Join("$HOME", ".tendermint-light")),
cmd.Flags().StringVarP(&dir, "dir", "d", os.ExpandEnv(filepath.Join("$HOME", ".tendermint-light")),
"specify the directory")
LightCmd.Flags().IntVar(
cmd.Flags().IntVar(
&maxOpenConnections,
"max-open-connections",
900,
"maximum number of simultaneous connections (including WebSocket).")
LightCmd.Flags().DurationVar(&trustingPeriod, "trusting-period", 168*time.Hour,
cmd.Flags().DurationVar(&trustingPeriod, "trusting-period", 168*time.Hour,
"trusting period that headers can be verified within. Should be significantly less than the unbonding period")
LightCmd.Flags().Int64Var(&trustedHeight, "height", 1, "Trusted header's height")
LightCmd.Flags().BytesHexVar(&trustedHash, "hash", []byte{}, "Trusted header's hash")
LightCmd.Flags().StringVar(&logLevel, "log-level", log.LogLevelInfo, "The logging level (debug|info|warn|error|fatal)")
LightCmd.Flags().StringVar(&logFormat, "log-format", log.LogFormatPlain, "The logging format (text|json)")
LightCmd.Flags().StringVar(&trustLevelStr, "trust-level", "1/3",
cmd.Flags().Int64Var(&trustedHeight, "height", 1, "Trusted header's height")
cmd.Flags().BytesHexVar(&trustedHash, "hash", []byte{}, "Trusted header's hash")
cmd.Flags().StringVar(&logLevel, "log-level", log.LogLevelInfo, "The logging level (debug|info|warn|error|fatal)")
cmd.Flags().StringVar(&logFormat, "log-format", log.LogFormatPlain, "The logging format (text|json)")
cmd.Flags().StringVar(&trustLevelStr, "trust-level", "1/3",
"trust level. Must be between 1/3 and 3/3",
)
LightCmd.Flags().BoolVar(&sequential, "sequential", false,
cmd.Flags().BoolVar(&sequential, "sequential", false,
"sequential verification. Verify all headers sequentially as opposed to using skipping verification",
)
}

func runProxy(cmd *cobra.Command, args []string) error {
logger, err := log.NewDefaultLogger(logFormat, logLevel)
if err != nil {
return err
}

chainID = args[0]
logger.Info("Creating client...", "chainID", chainID)

witnessesAddrs := []string{}
if witnessAddrsJoined != "" {
witnessesAddrs = strings.Split(witnessAddrsJoined, ",")
}

lightDB, err := dbm.NewGoLevelDB("light-client-db", dir)
if err != nil {
return fmt.Errorf("can't create a db: %w", err)
}
// create a prefixed db on the chainID
db := dbm.NewPrefixDB(lightDB, []byte(chainID))

if primaryAddr == "" { // check to see if we can start from an existing state
var err error
primaryAddr, witnessesAddrs, err = checkForExistingProviders(db)
if err != nil {
return fmt.Errorf("failed to retrieve primary or witness from db: %w", err)
}
if primaryAddr == "" {
return errors.New("no primary address was provided nor found. Please provide a primary (using -p)." +
" Run the command: tendermint light --help for more information")
}
} else {
err := saveProviders(db, primaryAddr, witnessAddrsJoined)
if err != nil {
logger.Error("Unable to save primary and or witness addresses", "err", err)
}
}

trustLevel, err := tmmath.ParseFraction(trustLevelStr)
if err != nil {
return fmt.Errorf("can't parse trust level: %w", err)
}

options := []light.Option{light.Logger(logger)}

if sequential {
options = append(options, light.SequentialVerification())
} else {
options = append(options, light.SkippingVerification(trustLevel))
}

// Initiate the light client. If the trusted store already has blocks in it, this
// will be used else we use the trusted options.
c, err := light.NewHTTPClient(
context.Background(),
chainID,
light.TrustOptions{
Period: trustingPeriod,
Height: trustedHeight,
Hash: trustedHash,
},
primaryAddr,
witnessesAddrs,
dbs.New(db),
options...,
)
if err != nil {
return err
}

cfg := rpcserver.DefaultConfig()
cfg.MaxBodyBytes = config.RPC.MaxBodyBytes
cfg.MaxHeaderBytes = config.RPC.MaxHeaderBytes
cfg.MaxOpenConnections = maxOpenConnections
// If necessary adjust global WriteTimeout to ensure it's greater than
// TimeoutBroadcastTxCommit.
// See https://github.com/tendermint/tendermint/issues/3435
if cfg.WriteTimeout <= config.RPC.TimeoutBroadcastTxCommit {
cfg.WriteTimeout = config.RPC.TimeoutBroadcastTxCommit + 1*time.Second
}

p, err := lproxy.NewProxy(c, listenAddr, primaryAddr, cfg, logger, lrpc.KeyPathFn(lrpc.DefaultMerkleKeyPathFn()))
if err != nil {
return err
}

ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
defer cancel()

go func() {
<-ctx.Done()
p.Listener.Close()
}()

logger.Info("Starting proxy...", "laddr", listenAddr)
if err := p.ListenAndServe(ctx); err != http.ErrServerClosed {
// Error starting or closing listener:
logger.Error("proxy ListenAndServe", "err", err)
}

return nil
}

func checkForExistingProviders(db dbm.DB) (string, []string, error) {
primaryBytes, err := db.Get(primaryKey)
if err != nil {
return "", []string{""}, err
}
witnessesBytes, err := db.Get(witnessesKey)
if err != nil {
return "", []string{""}, err
}
witnessesAddrs := strings.Split(string(witnessesBytes), ",")
return string(primaryBytes), witnessesAddrs, nil
}

func saveProviders(db dbm.DB, primaryAddr, witnessesAddrs string) error {
err := db.Set(primaryKey, []byte(primaryAddr))
if err != nil {
return fmt.Errorf("failed to save primary provider: %w", err)
}
err = db.Set(witnessesKey, []byte(witnessesAddrs))
if err != nil {
return fmt.Errorf("failed to save witness providers: %w", err)
}
return nil

return cmd

}

@@ -17,6 +17,7 @@ import (
"github.com/tendermint/tendermint/internal/state/indexer/sink/kv"
"github.com/tendermint/tendermint/internal/state/indexer/sink/psql"
"github.com/tendermint/tendermint/internal/store"
"github.com/tendermint/tendermint/libs/log"
"github.com/tendermint/tendermint/libs/os"
"github.com/tendermint/tendermint/rpc/coretypes"
"github.com/tendermint/tendermint/types"
@@ -26,59 +27,68 @@ const (
reindexFailed = "event re-index failed: "
)

// ReIndexEventCmd allows re-index the event by given block height interval
var ReIndexEventCmd = &cobra.Command{
Use: "reindex-event",
Short: "reindex events to the event store backends",
Long: `
// MakeReindexEventCommand constructs a command to re-index events in a block height interval.
func MakeReindexEventCommand(conf *tmcfg.Config, logger log.Logger) *cobra.Command {
var (
startHeight int64
endHeight int64
)

cmd := &cobra.Command{
Use: "reindex-event",
Short: "reindex events to the event store backends",
Long: `
reindex-event is an offline tooling to re-index block and tx events to the eventsinks,
you can run this command when the event store backend dropped/disconnected or you want to
replace the backend. The default start-height is 0, meaning the tooling will start
reindex from the base block height(inclusive); and the default end-height is 0, meaning
you can run this command when the event store backend dropped/disconnected or you want to
replace the backend. The default start-height is 0, meaning the tooling will start
reindex from the base block height(inclusive); and the default end-height is 0, meaning
the tooling will reindex until the latest block height(inclusive). User can omit
either or both arguments.
`,
Example: `
Example: `
tendermint reindex-event
tendermint reindex-event --start-height 2
tendermint reindex-event --end-height 10
tendermint reindex-event --start-height 2 --end-height 10
`,
Run: func(cmd *cobra.Command, args []string) {
bs, ss, err := loadStateAndBlockStore(config)
if err != nil {
fmt.Println(reindexFailed, err)
return
}
RunE: func(cmd *cobra.Command, args []string) error {
bs, ss, err := loadStateAndBlockStore(conf)
if err != nil {
return fmt.Errorf("%s: %w", reindexFailed, err)
}

if err := checkValidHeight(bs); err != nil {
fmt.Println(reindexFailed, err)
return
}
cvhArgs := checkValidHeightArgs{
startHeight: startHeight,
endHeight: endHeight,
}
if err := checkValidHeight(bs, cvhArgs); err != nil {
return fmt.Errorf("%s: %w", reindexFailed, err)
}

es, err := loadEventSinks(config)
if err != nil {
fmt.Println(reindexFailed, err)
return
}
es, err := loadEventSinks(conf)
if err != nil {
return fmt.Errorf("%s: %w", reindexFailed, err)
}

if err = eventReIndex(cmd, es, bs, ss); err != nil {
fmt.Println(reindexFailed, err)
return
}
riArgs := eventReIndexArgs{
startHeight: startHeight,
endHeight: endHeight,
sinks: es,
blockStore: bs,
stateStore: ss,
}
if err := eventReIndex(cmd, riArgs); err != nil {
return fmt.Errorf("%s: %w", reindexFailed, err)
}

fmt.Println("event re-index finished")
},
}
logger.Info("event re-index finished")
return nil
},
}

var (
startHeight int64
endHeight int64
)

func init() {
ReIndexEventCmd.Flags().Int64Var(&startHeight, "start-height", 0, "the block height would like to start for re-index")
ReIndexEventCmd.Flags().Int64Var(&endHeight, "end-height", 0, "the block height would like to finish for re-index")
cmd.Flags().Int64Var(&startHeight, "start-height", 0, "the block height would like to start for re-index")
cmd.Flags().Int64Var(&endHeight, "end-height", 0, "the block height would like to finish for re-index")
return cmd
}

func loadEventSinks(cfg *tmcfg.Config) ([]indexer.EventSink, error) {
@@ -109,7 +119,7 @@ func loadEventSinks(cfg *tmcfg.Config) ([]indexer.EventSink, error) {
if conn == "" {
return nil, errors.New("the psql connection settings cannot be empty")
}
es, err := psql.NewEventSink(conn, chainID)
es, err := psql.NewEventSink(conn, cfg.ChainID())
if err != nil {
return nil, err
}
@@ -159,52 +169,58 @@ func loadStateAndBlockStore(cfg *tmcfg.Config) (*store.BlockStore, state.Store,
return blockStore, stateStore, nil
}

func eventReIndex(cmd *cobra.Command, es []indexer.EventSink, bs state.BlockStore, ss state.Store) error {
type eventReIndexArgs struct {
startHeight int64
endHeight int64
sinks []indexer.EventSink
blockStore state.BlockStore
stateStore state.Store
}

func eventReIndex(cmd *cobra.Command, args eventReIndexArgs) error {
var bar progressbar.Bar
bar.NewOption(startHeight-1, endHeight)
bar.NewOption(args.startHeight-1, args.endHeight)

fmt.Println("start re-indexing events:")
defer bar.Finish()
for i := startHeight; i <= endHeight; i++ {
for i := args.startHeight; i <= args.endHeight; i++ {
select {
case <-cmd.Context().Done():
return fmt.Errorf("event re-index terminated at height %d: %w", i, cmd.Context().Err())
default:
b := bs.LoadBlock(i)
b := args.blockStore.LoadBlock(i)
if b == nil {
return fmt.Errorf("not able to load block at height %d from the blockstore", i)
}

r, err := ss.LoadABCIResponses(i)
r, err := args.stateStore.LoadABCIResponses(i)
if err != nil {
return fmt.Errorf("not able to load ABCI Response at height %d from the statestore", i)
}

e := types.EventDataNewBlockHeader{
Header: b.Header,
NumTxs: int64(len(b.Txs)),
ResultBeginBlock: *r.BeginBlock,
ResultEndBlock: *r.EndBlock,
Header: b.Header,
NumTxs: int64(len(b.Txs)),
ResultFinalizeBlock: *r.FinalizeBlock,
}

var batch *indexer.Batch
if e.NumTxs > 0 {
batch = indexer.NewBatch(e.NumTxs)

for i, tx := range b.Data.Txs {
for i := range b.Data.Txs {
tr := abcitypes.TxResult{
Height: b.Height,
Index: uint32(i),
Tx: tx,
Result: *(r.DeliverTxs[i]),
Tx: b.Data.Txs[i],
Result: *(r.FinalizeBlock.TxResults[i]),
}

_ = batch.Add(&tr)
}
}

for _, sink := range es {
for _, sink := range args.sinks {
if err := sink.IndexBlockEvents(e); err != nil {
return fmt.Errorf("block event re-index at height %d failed: %w", i, err)
}
@@ -223,40 +239,45 @@ func eventReIndex(cmd *cobra.Command, es []indexer.EventSink, bs state.BlockStor
return nil
}

func checkValidHeight(bs state.BlockStore) error {
type checkValidHeightArgs struct {
startHeight int64
endHeight int64
}

func checkValidHeight(bs state.BlockStore, args checkValidHeightArgs) error {
base := bs.Base()

if startHeight == 0 {
startHeight = base
if args.startHeight == 0 {
args.startHeight = base
fmt.Printf("set the start block height to the base height of the blockstore %d \n", base)
}

if startHeight < base {
if args.startHeight < base {
return fmt.Errorf("%s (requested start height: %d, base height: %d)",
coretypes.ErrHeightNotAvailable, startHeight, base)
coretypes.ErrHeightNotAvailable, args.startHeight, base)
}

height := bs.Height()

if startHeight > height {
if args.startHeight > height {
return fmt.Errorf(
"%s (requested start height: %d, store height: %d)", coretypes.ErrHeightNotAvailable, startHeight, height)
"%s (requested start height: %d, store height: %d)", coretypes.ErrHeightNotAvailable, args.startHeight, height)
}

if endHeight == 0 || endHeight > height {
endHeight = height
if args.endHeight == 0 || args.endHeight > height {
args.endHeight = height
fmt.Printf("set the end block height to the latest height of the blockstore %d \n", height)
}

if endHeight < base {
if args.endHeight < base {
return fmt.Errorf(
"%s (requested end height: %d, base height: %d)", coretypes.ErrHeightNotAvailable, endHeight, base)
"%s (requested end height: %d, base height: %d)", coretypes.ErrHeightNotAvailable, args.endHeight, base)
}

if endHeight < startHeight {
if args.endHeight < args.startHeight {
return fmt.Errorf(
"%s (requested the end height: %d is less than the start height: %d)",
coretypes.ErrInvalidRequest, startHeight, endHeight)
coretypes.ErrInvalidRequest, args.startHeight, args.endHeight)
}

return nil

@@ -9,13 +9,15 @@ import (
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"

dbm "github.com/tendermint/tm-db"

abcitypes "github.com/tendermint/tendermint/abci/types"
tmcfg "github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/state/indexer"
"github.com/tendermint/tendermint/internal/state/mocks"
"github.com/tendermint/tendermint/libs/log"
prototmstate "github.com/tendermint/tendermint/proto/tendermint/state"
"github.com/tendermint/tendermint/types"
dbm "github.com/tendermint/tm-db"

_ "github.com/lib/pq" // for the psql sink
)
@@ -25,9 +27,11 @@ const (
base int64 = 2
)

func setupReIndexEventCmd(ctx context.Context) *cobra.Command {
func setupReIndexEventCmd(ctx context.Context, conf *config.Config, logger log.Logger) *cobra.Command {
cmd := MakeReindexEventCommand(conf, logger)

reIndexEventCmd := &cobra.Command{
Use: ReIndexEventCmd.Use,
Use: cmd.Use,
Run: func(cmd *cobra.Command, args []string) {},
}

@@ -68,10 +72,7 @@ func TestReIndexEventCheckHeight(t *testing.T) {
}

for _, tc := range testCases {
startHeight = tc.startHeight
endHeight = tc.endHeight

err := checkValidHeight(mockBlockStore)
err := checkValidHeight(mockBlockStore, checkValidHeightArgs{startHeight: tc.startHeight, endHeight: tc.endHeight})
if tc.validHeight {
require.NoError(t, err)
} else {
@@ -97,7 +98,7 @@ func TestLoadEventSink(t *testing.T) {
}

for _, tc := range testCases {
cfg := tmcfg.TestConfig()
cfg := config.TestConfig()
cfg.TxIndex.Indexer = tc.sinks
cfg.TxIndex.PsqlConn = tc.connURL
_, err := loadEventSinks(cfg)
@@ -110,7 +111,7 @@ func TestLoadEventSink(t *testing.T) {
}

func TestLoadBlockStore(t *testing.T) {
testCfg, err := tmcfg.ResetTestRoot(t.Name())
testCfg, err := config.ResetTestRoot(t.TempDir(), t.Name())
require.NoError(t, err)
testCfg.DBBackend = "goleveldb"
_, _, err = loadStateAndBlockStore(testCfg)
@@ -152,11 +153,11 @@ func TestReIndexEvent(t *testing.T) {
On("IndexTxEvents", mock.AnythingOfType("[]*types.TxResult")).Return(errors.New("")).Once().
On("IndexTxEvents", mock.AnythingOfType("[]*types.TxResult")).Return(nil)

dtx := abcitypes.ResponseDeliverTx{}
dtx := abcitypes.ExecTxResult{}
abciResp := &prototmstate.ABCIResponses{
DeliverTxs: []*abcitypes.ResponseDeliverTx{&dtx},
EndBlock: &abcitypes.ResponseEndBlock{},
BeginBlock: &abcitypes.ResponseBeginBlock{},
FinalizeBlock: &abcitypes.ResponseFinalizeBlock{
TxResults: []*abcitypes.ExecTxResult{&dtx},
},
}

mockStateStore.
@@ -179,12 +180,20 @@ func TestReIndexEvent(t *testing.T) {

ctx, cancel := context.WithCancel(context.Background())
defer cancel()
logger := log.NewNopLogger()
conf := config.DefaultConfig()

for _, tc := range testCases {
startHeight = tc.startHeight
endHeight = tc.endHeight
err := eventReIndex(
setupReIndexEventCmd(ctx, conf, logger),
eventReIndexArgs{
sinks: []indexer.EventSink{mockEventSink},
blockStore: mockBlockStore,
stateStore: mockStateStore,
startHeight: tc.startHeight,
endHeight: tc.endHeight,
})

err := eventReIndex(setupReIndexEventCmd(ctx), []indexer.EventSink{mockEventSink}, mockBlockStore, mockStateStore)
if tc.reIndexErr {
require.Error(t, err)
} else {

@@ -2,24 +2,30 @@ package commands

import (
"github.com/spf13/cobra"

"github.com/tendermint/tendermint/config"
"github.com/tendermint/tendermint/internal/consensus"
"github.com/tendermint/tendermint/libs/log"
)

// ReplayCmd allows replaying of messages from the WAL.
var ReplayCmd = &cobra.Command{
Use: "replay",
Short: "Replay messages from WAL",
RunE: func(cmd *cobra.Command, args []string) error {
return consensus.RunReplayFile(cmd.Context(), logger, config.BaseConfig, config.Consensus, false)
},
// MakeReplayCommand constructs a command to replay messages from the WAL into consensus.
func MakeReplayCommand(conf *config.Config, logger log.Logger) *cobra.Command {
return &cobra.Command{
Use: "replay",
Short: "Replay messages from WAL",
RunE: func(cmd *cobra.Command, args []string) error {
return consensus.RunReplayFile(cmd.Context(), logger, conf.BaseConfig, conf.Consensus, false)
},
}
}

// ReplayConsoleCmd allows replaying of messages from the WAL in a
// console.
var ReplayConsoleCmd = &cobra.Command{
Use: "replay-console",
Short: "Replay messages from WAL in a console",
RunE: func(cmd *cobra.Command, args []string) error {
return consensus.RunReplayFile(cmd.Context(), logger, config.BaseConfig, config.Consensus, true)
},
// MakeReplayConsoleCommand constructs a command to replay WAL messages to stdout.
func MakeReplayConsoleCommand(conf *config.Config, logger log.Logger) *cobra.Command {
return &cobra.Command{
Use: "replay-console",
Short: "Replay messages from WAL in a console",
RunE: func(cmd *cobra.Command, args []string) error {
return consensus.RunReplayFile(cmd.Context(), logger, conf.BaseConfig, conf.Consensus, true)
},
}
}

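As with the other commands in this changeset, the replay commands become constructors, so they are registered by the caller rather than exported as package-level variables. A hedged wiring sketch follows; the registerReplay helper, root command, and the way conf and logger are obtained are placeholders, not part of this diff.

package main

import (
	"github.com/spf13/cobra"

	"github.com/tendermint/tendermint/cmd/tendermint/commands"
	"github.com/tendermint/tendermint/config"
	"github.com/tendermint/tendermint/libs/log"
)

// registerReplay attaches both replay commands, passing the same config and
// logger the rest of the CLI uses instead of reading package-level globals.
func registerReplay(root *cobra.Command, conf *config.Config, logger log.Logger) {
	root.AddCommand(
		commands.MakeReplayCommand(conf, logger),
		commands.MakeReplayConsoleCommand(conf, logger),
	)
}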
@@ -2,67 +2,147 @@ package commands

import (
    "os"
    "path/filepath"

    "github.com/spf13/cobra"

    "github.com/tendermint/tendermint/config"
    "github.com/tendermint/tendermint/libs/log"
    tmos "github.com/tendermint/tendermint/libs/os"
    "github.com/tendermint/tendermint/privval"
    "github.com/tendermint/tendermint/types"
)

// ResetAllCmd removes the database of this Tendermint core
// instance.
var ResetAllCmd = &cobra.Command{
    Use:   "unsafe-reset-all",
    Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state",
    RunE:  resetAll,
}

// MakeResetAllCommand constructs a command that removes the database of
// the specified Tendermint core instance.
func MakeResetAllCommand(conf *config.Config, logger log.Logger) *cobra.Command {
    var keyType string

var keepAddrBook bool

func init() {
    ResetAllCmd.Flags().BoolVar(&keepAddrBook, "keep-addr-book", false, "keep the address book intact")
    ResetPrivValidatorCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,

    cmd := &cobra.Command{
        Use:   "unsafe-reset-all",
        Short: "(unsafe) Remove all the data and WAL, reset this node's validator to genesis state",
        RunE: func(cmd *cobra.Command, args []string) error {
            return resetAll(conf.DBDir(), conf.PrivValidator.KeyFile(),
                conf.PrivValidator.StateFile(), logger, keyType)
        },
    }

    cmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
        "Key type to generate privval file with. Options: ed25519, secp256k1")

    return cmd
}

// ResetPrivValidatorCmd resets the private validator files.
var ResetPrivValidatorCmd = &cobra.Command{
    Use:   "unsafe-reset-priv-validator",
    Short: "(unsafe) Reset this node's validator to genesis state",
    RunE:  resetPrivValidator,

// MakeResetStateCommand constructs a command that removes the database of
// the specified Tendermint core instance.
func MakeResetStateCommand(conf *config.Config, logger log.Logger) *cobra.Command {
    var keyType string

    return &cobra.Command{
        Use:   "reset-state",
        Short: "Remove all the data and WAL",
        RunE: func(cmd *cobra.Command, args []string) error {
            return resetState(conf.DBDir(), logger, keyType)
        },
    }
}

func MakeResetPrivateValidatorCommand(conf *config.Config, logger log.Logger) *cobra.Command {
    var keyType string

    cmd := &cobra.Command{
        Use:   "unsafe-reset-priv-validator",
        Short: "(unsafe) Reset this node's validator to genesis state",
        RunE: func(cmd *cobra.Command, args []string) error {
            return resetFilePV(conf.PrivValidator.KeyFile(), conf.PrivValidator.StateFile(), logger, keyType)
        },
    }

    cmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
        "Key type to generate privval file with. Options: ed25519, secp256k1")
    return cmd
}

// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetAll(cmd *cobra.Command, args []string) error {
    return ResetAll(config.DBDir(), config.PrivValidator.KeyFile(),
        config.PrivValidator.StateFile(), logger)
}

// XXX: this is totally unsafe.
// it's only suitable for testnets.
func resetPrivValidator(cmd *cobra.Command, args []string) error {
    return resetFilePV(config.PrivValidator.KeyFile(), config.PrivValidator.StateFile(), logger)
}

// ResetAll removes address book files plus all data, and resets the privValidator data.
// Exported so other CLI tools can use it.
func ResetAll(dbDir, privValKeyFile, privValStateFile string, logger log.Logger) error {

// resetAll removes address book files plus all data, and resets the privValidator data.
func resetAll(dbDir, privValKeyFile, privValStateFile string, logger log.Logger, keyType string) error {
    if err := os.RemoveAll(dbDir); err == nil {
        logger.Info("Removed all blockchain history", "dir", dbDir)
    } else {
        logger.Error("error removing all blockchain history", "dir", dbDir, "err", err)
    }
    // recreate the dbDir since the privVal state needs to live there

    return resetFilePV(privValKeyFile, privValStateFile, logger, keyType)
}

// resetState removes address book files plus all databases.
func resetState(dbDir string, logger log.Logger, keyType string) error {
    blockdb := filepath.Join(dbDir, "blockstore.db")
    state := filepath.Join(dbDir, "state.db")
    wal := filepath.Join(dbDir, "cs.wal")
    evidence := filepath.Join(dbDir, "evidence.db")
    txIndex := filepath.Join(dbDir, "tx_index.db")
    peerstore := filepath.Join(dbDir, "peerstore.db")

    if tmos.FileExists(blockdb) {
        if err := os.RemoveAll(blockdb); err == nil {
            logger.Info("Removed all blockstore.db", "dir", blockdb)
        } else {
            logger.Error("error removing all blockstore.db", "dir", blockdb, "err", err)
        }
    }

    if tmos.FileExists(state) {
        if err := os.RemoveAll(state); err == nil {
            logger.Info("Removed all state.db", "dir", state)
        } else {
            logger.Error("error removing all state.db", "dir", state, "err", err)
        }
    }

    if tmos.FileExists(wal) {
        if err := os.RemoveAll(wal); err == nil {
            logger.Info("Removed all cs.wal", "dir", wal)
        } else {
            logger.Error("error removing all cs.wal", "dir", wal, "err", err)
        }
    }

    if tmos.FileExists(evidence) {
        if err := os.RemoveAll(evidence); err == nil {
            logger.Info("Removed all evidence.db", "dir", evidence)
        } else {
            logger.Error("error removing all evidence.db", "dir", evidence, "err", err)
        }
    }

    if tmos.FileExists(txIndex) {
        if err := os.RemoveAll(txIndex); err == nil {
            logger.Info("Removed tx_index.db", "dir", txIndex)
        } else {
            logger.Error("error removing tx_index.db", "dir", txIndex, "err", err)
        }
    }

    if tmos.FileExists(peerstore) {
        if err := os.RemoveAll(peerstore); err == nil {
            logger.Info("Removed peerstore.db", "dir", peerstore)
        } else {
            logger.Error("error removing peerstore.db", "dir", peerstore, "err", err)
        }
    }

    if err := tmos.EnsureDir(dbDir, 0700); err != nil {
        logger.Error("unable to recreate dbDir", "err", err)
    }
    return resetFilePV(privValKeyFile, privValStateFile, logger)
    return nil
}

func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger) error {
func resetFilePV(privValKeyFile, privValStateFile string, logger log.Logger, keyType string) error {
    if _, err := os.Stat(privValKeyFile); err == nil {
        pv, err := privval.LoadFilePVEmptyState(privValKeyFile, privValStateFile)
        if err != nil {

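With ResetAll no longer exported and the helpers taking explicit paths, external tooling is expected to go through the constructed commands instead. A small sketch of driving unsafe-reset-all programmatically (conf, logger, and ctx are assumed to exist in the caller; SetArgs and ExecuteContext are plain cobra calls, not part of this diff):

    cmd := commands.MakeResetAllCommand(conf, logger)
    cmd.SetArgs([]string{"--key", types.ABCIPubKeyTypeEd25519})
    if err := cmd.ExecuteContext(ctx); err != nil {
        logger.Error("reset failed", "err", err)
    }
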
@@ -5,14 +5,15 @@ import (

    "github.com/spf13/cobra"

    cfg "github.com/tendermint/tendermint/config"
    "github.com/tendermint/tendermint/config"
    "github.com/tendermint/tendermint/internal/state"
)

var RollbackStateCmd = &cobra.Command{
    Use:   "rollback",
    Short: "rollback tendermint state by one height",
    Long: `

func MakeRollbackStateCommand(conf *config.Config) *cobra.Command {
    return &cobra.Command{
        Use:   "rollback",
        Short: "rollback tendermint state by one height",
        Long: `
A state rollback is performed to recover from an incorrect application state transition,
when Tendermint has persisted an incorrect app hash and is thus unable to make
progress. Rollback overwrites a state at height n with the state at height n - 1.
@@ -20,21 +21,23 @@ The application should also roll back to height n - 1. No blocks are removed, so
restarting Tendermint the transactions in block n will be re-executed against the
application.
`,
    RunE: func(cmd *cobra.Command, args []string) error {
        height, hash, err := RollbackState(config)
        if err != nil {
            return fmt.Errorf("failed to rollback state: %w", err)
        }

        RunE: func(cmd *cobra.Command, args []string) error {
            height, hash, err := RollbackState(conf)
            if err != nil {
                return fmt.Errorf("failed to rollback state: %w", err)
            }

        fmt.Printf("Rolled back state to height %d and hash %X", height, hash)
        return nil
    },
}

            fmt.Printf("Rolled back state to height %d and hash %X", height, hash)
            return nil
        },
    }

// RollbackState takes the state at the current height n and overwrites it with the state
// at height n - 1. Note state here refers to tendermint state not application state.
// Returns the latest state height and app hash alongside an error if there was one.
func RollbackState(config *cfg.Config) (int64, []byte, error) {
func RollbackState(config *config.Config) (int64, []byte, error) {
    // use the parsed config to load the block and state store
    blockStore, stateStore, err := loadStateAndBlockStore(config)
    if err != nil {

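Since RollbackState remains exported, recovery tooling can call it directly once a config has been parsed; a sketch assuming the caller already holds a parsed *config.Config:

    height, appHash, err := commands.RollbackState(conf)
    if err != nil {
        return fmt.Errorf("failed to rollback state: %w", err)
    }
    fmt.Printf("rolled back state to height %d and hash %X\n", height, appHash)

As the help text above notes, the application must be rolled back to height n - 1 separately; no blocks are removed by this call.
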
@@ -19,10 +19,12 @@ func TestRollbackIntegration(t *testing.T) {
    dir := t.TempDir()
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()
    cfg, err := rpctest.CreateConfig(t.Name())
    cfg, err := rpctest.CreateConfig(t, t.Name())
    require.NoError(t, err)
    cfg.BaseConfig.DBBackend = "goleveldb"

    app, err := e2e.NewApplication(e2e.DefaultConfig(dir))
    require.NoError(t, err)

    t.Run("First run", func(t *testing.T) {
        ctx, cancel := context.WithCancel(ctx)
@@ -30,27 +32,29 @@ func TestRollbackIntegration(t *testing.T) {
        require.NoError(t, err)
        node, _, err := rpctest.StartTendermint(ctx, cfg, app, rpctest.SuppressStdout)
        require.NoError(t, err)
        require.True(t, node.IsRunning())

        time.Sleep(3 * time.Second)
        cancel()
        node.Wait()

        require.False(t, node.IsRunning())
    })

    t.Run("Rollback", func(t *testing.T) {
        time.Sleep(time.Second)
        require.NoError(t, app.Rollback())
        height, _, err = commands.RollbackState(cfg)
        require.NoError(t, err)

        require.NoError(t, err, "%d", height)
    })

    t.Run("Restart", func(t *testing.T) {
        require.True(t, height > 0, "%d", height)

        ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
        defer cancel()
        node2, _, err2 := rpctest.StartTendermint(ctx, cfg, app, rpctest.SuppressStdout)
        require.NoError(t, err2)

        logger := log.NewTestingLogger(t)
        logger := log.NewNopLogger()

        client, err := local.New(logger, node2.(local.NodeService))
        require.NoError(t, err)

@@ -2,65 +2,66 @@ package commands

import (
    "fmt"
    "os"
    "path/filepath"
    "time"

    "github.com/spf13/cobra"
    "github.com/spf13/viper"

    cfg "github.com/tendermint/tendermint/config"
    "github.com/tendermint/tendermint/config"
    "github.com/tendermint/tendermint/libs/cli"
    "github.com/tendermint/tendermint/libs/log"
)

var (
    config     = cfg.DefaultConfig()
    logger     = log.MustNewDefaultLogger(log.LogFormatPlain, log.LogLevelInfo)
    ctxTimeout = 4 * time.Second
)

func init() {
    registerFlagsRootCmd(RootCmd)
}

func registerFlagsRootCmd(cmd *cobra.Command) {
    cmd.PersistentFlags().String("log-level", config.LogLevel, "log level")
}

const ctxTimeout = 4 * time.Second

// ParseConfig retrieves the default environment configuration,
// sets up the Tendermint root and ensures that the root exists
func ParseConfig() (*cfg.Config, error) {
    conf := cfg.DefaultConfig()
    err := viper.Unmarshal(conf)
    if err != nil {
func ParseConfig(conf *config.Config) (*config.Config, error) {
    if err := viper.Unmarshal(conf); err != nil {
        return nil, err
    }

    conf.SetRoot(conf.RootDir)
    cfg.EnsureRoot(conf.RootDir)

    if err := conf.ValidateBasic(); err != nil {
        return nil, fmt.Errorf("error in config file: %w", err)
    }
    return conf, nil
}

// RootCmd is the root command for Tendermint core.
var RootCmd = &cobra.Command{
    Use:   "tendermint",
    Short: "BFT state machine replication for applications in any programming languages",
    PersistentPreRunE: func(cmd *cobra.Command, args []string) (err error) {
        if cmd.Name() == VersionCmd.Name() {

// RootCommand constructs the root command-line entry point for Tendermint core.
func RootCommand(conf *config.Config, logger log.Logger) *cobra.Command {
    cmd := &cobra.Command{
        Use:   "tendermint",
        Short: "BFT state machine replication for applications in any programming languages",
        PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
            if cmd.Name() == VersionCmd.Name() {
                return nil
            }

            if err := cli.BindFlagsLoadViper(cmd, args); err != nil {
                return err
            }

            pconf, err := ParseConfig(conf)
            if err != nil {
                return err
            }
            *conf = *pconf
            config.EnsureRoot(conf.RootDir)

            if err := log.OverrideWithNewLogger(logger, conf.LogFormat, conf.LogLevel); err != nil {
                return err
            }

            return nil
        }

        config, err = ParseConfig()
        if err != nil {
            return err
        }

        logger, err = log.NewDefaultLogger(config.LogFormat, config.LogLevel)
        if err != nil {
            return err
        }

        logger = logger.With("module", "main")
        return nil
        },
    },
    }
    cmd.PersistentFlags().StringP(cli.HomeFlag, "", os.ExpandEnv(filepath.Join("$HOME", config.DefaultTendermintDir)), "directory for config and data")
    cmd.PersistentFlags().Bool(cli.TraceFlag, false, "print out full stack trace on errors")
    cmd.PersistentFlags().String("log-level", conf.LogLevel, "log level")
    cobra.OnInitialize(func() { cli.InitEnv("TM") })
    return cmd
}

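A design note on the new RootCommand: PersistentPreRunE writes the parsed config back through the shared pointer (*conf = *pconf) and reconfigures the logger in place via log.OverrideWithNewLogger, so subcommands built earlier from the same conf and logger see the flag- and file-derived values by the time their RunE fires. A minimal sketch of what that means for callers:

    conf := config.DefaultConfig()
    logger := log.NewNopLogger() // reconfigured in place by PersistentPreRunE
    root := commands.RootCommand(conf, logger)
    root.AddCommand(commands.MakeShowNodeIDCommand(conf)) // reads conf only at execution time
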
@@ -1,10 +1,10 @@
|
||||
package commands
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strconv"
|
||||
"testing"
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
@@ -14,43 +14,54 @@ import (
|
||||
|
||||
cfg "github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/libs/cli"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
tmos "github.com/tendermint/tendermint/libs/os"
|
||||
)
|
||||
|
||||
// writeConfigVals writes a toml file with the given values.
|
||||
// It returns an error if writing was impossible.
|
||||
func writeConfigVals(dir string, vals map[string]string) error {
|
||||
data := ""
|
||||
for k, v := range vals {
|
||||
data += fmt.Sprintf("%s = \"%s\"\n", k, v)
|
||||
}
|
||||
cfile := filepath.Join(dir, "config.toml")
|
||||
return os.WriteFile(cfile, []byte(data), 0600)
|
||||
}
|
||||
|
||||
// clearConfig clears env vars, the given root dir, and resets viper.
|
||||
func clearConfig(t *testing.T, dir string) {
|
||||
func clearConfig(t *testing.T, dir string) *cfg.Config {
|
||||
t.Helper()
|
||||
require.NoError(t, os.Unsetenv("TMHOME"))
|
||||
require.NoError(t, os.Unsetenv("TM_HOME"))
|
||||
require.NoError(t, os.RemoveAll(dir))
|
||||
|
||||
viper.Reset()
|
||||
config = cfg.DefaultConfig()
|
||||
conf := cfg.DefaultConfig()
|
||||
conf.RootDir = dir
|
||||
return conf
|
||||
}
|
||||
|
||||
// prepare new rootCmd
|
||||
func testRootCmd() *cobra.Command {
|
||||
rootCmd := &cobra.Command{
|
||||
Use: RootCmd.Use,
|
||||
PersistentPreRunE: RootCmd.PersistentPreRunE,
|
||||
Run: func(cmd *cobra.Command, args []string) {},
|
||||
}
|
||||
registerFlagsRootCmd(rootCmd)
|
||||
func testRootCmd(conf *cfg.Config) *cobra.Command {
|
||||
logger := log.NewNopLogger()
|
||||
cmd := RootCommand(conf, logger)
|
||||
cmd.RunE = func(cmd *cobra.Command, args []string) error { return nil }
|
||||
|
||||
var l string
|
||||
rootCmd.PersistentFlags().String("log", l, "Log")
|
||||
return rootCmd
|
||||
cmd.PersistentFlags().String("log", l, "Log")
|
||||
return cmd
|
||||
}
|
||||
|
||||
func testSetup(t *testing.T, rootDir string, args []string, env map[string]string) error {
|
||||
func testSetup(ctx context.Context, t *testing.T, conf *cfg.Config, args []string, env map[string]string) error {
|
||||
t.Helper()
|
||||
clearConfig(t, rootDir)
|
||||
|
||||
rootCmd := testRootCmd()
|
||||
cmd := cli.PrepareBaseCmd(rootCmd, "TM", rootDir)
|
||||
cmd := testRootCmd(conf)
|
||||
viper.Set(cli.HomeFlag, conf.RootDir)
|
||||
|
||||
// run with the args and env
|
||||
args = append([]string{rootCmd.Use}, args...)
|
||||
return cli.RunWithArgs(cmd, args, env)
|
||||
args = append([]string{cmd.Use}, args...)
|
||||
return cli.RunWithArgs(ctx, cmd, args, env)
|
||||
}
|
||||
|
||||
func TestRootHome(t *testing.T) {
|
||||
@@ -66,23 +77,29 @@ func TestRootHome(t *testing.T) {
|
||||
{nil, map[string]string{"TMHOME": newRoot}, newRoot},
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
|
||||
for i, tc := range cases {
|
||||
idxString := strconv.Itoa(i)
|
||||
t.Run(fmt.Sprint(i), func(t *testing.T) {
|
||||
conf := clearConfig(t, tc.root)
|
||||
|
||||
err := testSetup(t, defaultRoot, tc.args, tc.env)
|
||||
require.NoError(t, err, idxString)
|
||||
err := testSetup(ctx, t, conf, tc.args, tc.env)
|
||||
require.NoError(t, err)
|
||||
|
||||
assert.Equal(t, tc.root, config.RootDir, idxString)
|
||||
assert.Equal(t, tc.root, config.P2P.RootDir, idxString)
|
||||
assert.Equal(t, tc.root, config.Consensus.RootDir, idxString)
|
||||
assert.Equal(t, tc.root, config.Mempool.RootDir, idxString)
|
||||
require.Equal(t, tc.root, conf.RootDir)
|
||||
require.Equal(t, tc.root, conf.P2P.RootDir)
|
||||
require.Equal(t, tc.root, conf.Consensus.RootDir)
|
||||
require.Equal(t, tc.root, conf.Mempool.RootDir)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestRootFlagsEnv(t *testing.T) {
|
||||
|
||||
// defaults
|
||||
defaults := cfg.DefaultConfig()
|
||||
defaultDir := t.TempDir()
|
||||
|
||||
defaultLogLvl := defaults.LogLevel
|
||||
|
||||
cases := []struct {
|
||||
@@ -97,18 +114,25 @@ func TestRootFlagsEnv(t *testing.T) {
|
||||
{nil, map[string]string{"TM_LOG_LEVEL": "debug"}, "debug"}, // right env
|
||||
}
|
||||
|
||||
defaultRoot := t.TempDir()
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
|
||||
for i, tc := range cases {
|
||||
idxString := strconv.Itoa(i)
|
||||
t.Run(fmt.Sprint(i), func(t *testing.T) {
|
||||
conf := clearConfig(t, defaultDir)
|
||||
|
||||
err := testSetup(t, defaultRoot, tc.args, tc.env)
|
||||
require.NoError(t, err, idxString)
|
||||
err := testSetup(ctx, t, conf, tc.args, tc.env)
|
||||
require.NoError(t, err)
|
||||
|
||||
assert.Equal(t, tc.logLevel, conf.LogLevel)
|
||||
})
|
||||
|
||||
assert.Equal(t, tc.logLevel, config.LogLevel, idxString)
|
||||
}
|
||||
}
|
||||
|
||||
func TestRootConfig(t *testing.T) {
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
|
||||
// write non-default config
|
||||
nonDefaultLogLvl := "debug"
|
||||
@@ -117,9 +141,8 @@ func TestRootConfig(t *testing.T) {
|
||||
}
|
||||
|
||||
cases := []struct {
|
||||
args []string
|
||||
env map[string]string
|
||||
|
||||
args []string
|
||||
env map[string]string
|
||||
logLvl string
|
||||
}{
|
||||
{nil, nil, nonDefaultLogLvl}, // should load config
|
||||
@@ -128,29 +151,30 @@ func TestRootConfig(t *testing.T) {
|
||||
}
|
||||
|
||||
for i, tc := range cases {
|
||||
defaultRoot := t.TempDir()
|
||||
idxString := strconv.Itoa(i)
|
||||
clearConfig(t, defaultRoot)
|
||||
t.Run(fmt.Sprint(i), func(t *testing.T) {
|
||||
defaultRoot := t.TempDir()
|
||||
conf := clearConfig(t, defaultRoot)
|
||||
conf.LogLevel = tc.logLvl
|
||||
|
||||
// XXX: path must match cfg.defaultConfigPath
|
||||
configFilePath := filepath.Join(defaultRoot, "config")
|
||||
err := tmos.EnsureDir(configFilePath, 0700)
|
||||
require.NoError(t, err)
|
||||
// XXX: path must match cfg.defaultConfigPath
|
||||
configFilePath := filepath.Join(defaultRoot, "config")
|
||||
err := tmos.EnsureDir(configFilePath, 0700)
|
||||
require.NoError(t, err)
|
||||
|
||||
// write the non-defaults to a different path
|
||||
// TODO: support writing sub configs so we can test that too
|
||||
err = WriteConfigVals(configFilePath, cvals)
|
||||
require.NoError(t, err)
|
||||
// write the non-defaults to a different path
|
||||
// TODO: support writing sub configs so we can test that too
|
||||
err = writeConfigVals(configFilePath, cvals)
|
||||
require.NoError(t, err)
|
||||
|
||||
rootCmd := testRootCmd()
|
||||
cmd := cli.PrepareBaseCmd(rootCmd, "TM", defaultRoot)
|
||||
cmd := testRootCmd(conf)
|
||||
|
||||
// run with the args and env
|
||||
tc.args = append([]string{rootCmd.Use}, tc.args...)
|
||||
err = cli.RunWithArgs(cmd, tc.args, tc.env)
|
||||
require.NoError(t, err, idxString)
|
||||
// run with the args and env
|
||||
tc.args = append([]string{cmd.Use}, tc.args...)
|
||||
err = cli.RunWithArgs(ctx, cmd, tc.args, tc.env)
|
||||
require.NoError(t, err)
|
||||
|
||||
assert.Equal(t, tc.logLvl, config.LogLevel, idxString)
|
||||
require.Equal(t, tc.logLvl, conf.LogLevel)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -12,25 +12,26 @@ import (
|
||||
"github.com/spf13/cobra"
|
||||
|
||||
cfg "github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
)
|
||||
|
||||
var (
|
||||
genesisHash []byte
|
||||
)
|
||||
|
||||
// AddNodeFlags exposes some common configuration options on the command-line
|
||||
// These are exposed for convenience of commands embedding a tendermint node
|
||||
func AddNodeFlags(cmd *cobra.Command) {
|
||||
// AddNodeFlags exposes some common configuration options from conf in the flag
|
||||
// set for cmd. This is a convenience for commands embedding a Tendermint node.
|
||||
func AddNodeFlags(cmd *cobra.Command, conf *cfg.Config) {
|
||||
// bind flags
|
||||
cmd.Flags().String("moniker", config.Moniker, "node name")
|
||||
cmd.Flags().String("moniker", conf.Moniker, "node name")
|
||||
|
||||
// mode flags
|
||||
cmd.Flags().String("mode", config.Mode, "node mode (full | validator | seed)")
|
||||
cmd.Flags().String("mode", conf.Mode, "node mode (full | validator | seed)")
|
||||
|
||||
// priv val flags
|
||||
cmd.Flags().String(
|
||||
"priv-validator-laddr",
|
||||
config.PrivValidator.ListenAddr,
|
||||
conf.PrivValidator.ListenAddr,
|
||||
"socket address to listen on for connections from external priv-validator process")
|
||||
|
||||
// node flags
|
||||
@@ -40,74 +41,74 @@ func AddNodeFlags(cmd *cobra.Command) {
|
||||
"genesis-hash",
|
||||
[]byte{},
|
||||
"optional SHA-256 hash of the genesis file")
|
||||
cmd.Flags().Int64("consensus.double-sign-check-height", config.Consensus.DoubleSignCheckHeight,
|
||||
cmd.Flags().Int64("consensus.double-sign-check-height", conf.Consensus.DoubleSignCheckHeight,
|
||||
"how many blocks to look back to check existence of the node's "+
|
||||
"consensus votes before joining consensus")
|
||||
|
||||
// abci flags
|
||||
cmd.Flags().String(
|
||||
"proxy-app",
|
||||
config.ProxyApp,
|
||||
conf.ProxyApp,
|
||||
"proxy app address, or one of: 'kvstore',"+
|
||||
" 'persistent_kvstore', 'e2e' or 'noop' for local testing.")
|
||||
cmd.Flags().String("abci", config.ABCI, "specify abci transport (socket | grpc)")
|
||||
cmd.Flags().String("abci", conf.ABCI, "specify abci transport (socket | grpc)")
|
||||
|
||||
// rpc flags
|
||||
cmd.Flags().String("rpc.laddr", config.RPC.ListenAddress, "RPC listen address. Port required")
|
||||
cmd.Flags().Bool("rpc.unsafe", config.RPC.Unsafe, "enabled unsafe rpc methods")
|
||||
cmd.Flags().String("rpc.pprof-laddr", config.RPC.PprofListenAddress, "pprof listen address (https://golang.org/pkg/net/http/pprof)")
|
||||
cmd.Flags().String("rpc.laddr", conf.RPC.ListenAddress, "RPC listen address. Port required")
|
||||
cmd.Flags().Bool("rpc.unsafe", conf.RPC.Unsafe, "enabled unsafe rpc methods")
|
||||
cmd.Flags().String("rpc.pprof-laddr", conf.RPC.PprofListenAddress, "pprof listen address (https://golang.org/pkg/net/http/pprof)")
|
||||
|
||||
// p2p flags
|
||||
cmd.Flags().String(
|
||||
"p2p.laddr",
|
||||
config.P2P.ListenAddress,
|
||||
conf.P2P.ListenAddress,
|
||||
"node listen address. (0.0.0.0:0 means any interface, any port)")
|
||||
cmd.Flags().String("p2p.seeds", config.P2P.Seeds, "comma-delimited ID@host:port seed nodes") //nolint: staticcheck
|
||||
cmd.Flags().String("p2p.persistent-peers", config.P2P.PersistentPeers, "comma-delimited ID@host:port persistent peers")
|
||||
cmd.Flags().Bool("p2p.upnp", config.P2P.UPNP, "enable/disable UPNP port forwarding")
|
||||
cmd.Flags().Bool("p2p.pex", config.P2P.PexReactor, "enable/disable Peer-Exchange")
|
||||
cmd.Flags().String("p2p.private-peer-ids", config.P2P.PrivatePeerIDs, "comma-delimited private peer IDs")
|
||||
cmd.Flags().String("p2p.seeds", conf.P2P.Seeds, "comma-delimited ID@host:port seed nodes") //nolint: staticcheck
|
||||
cmd.Flags().String("p2p.persistent-peers", conf.P2P.PersistentPeers, "comma-delimited ID@host:port persistent peers")
|
||||
cmd.Flags().Bool("p2p.upnp", conf.P2P.UPNP, "enable/disable UPNP port forwarding")
|
||||
cmd.Flags().Bool("p2p.pex", conf.P2P.PexReactor, "enable/disable Peer-Exchange")
|
||||
cmd.Flags().String("p2p.private-peer-ids", conf.P2P.PrivatePeerIDs, "comma-delimited private peer IDs")
|
||||
|
||||
// consensus flags
|
||||
cmd.Flags().Bool(
|
||||
"consensus.create-empty-blocks",
|
||||
config.Consensus.CreateEmptyBlocks,
|
||||
conf.Consensus.CreateEmptyBlocks,
|
||||
"set this to false to only produce blocks when there are txs or when the AppHash changes")
|
||||
cmd.Flags().String(
|
||||
"consensus.create-empty-blocks-interval",
|
||||
config.Consensus.CreateEmptyBlocksInterval.String(),
|
||||
conf.Consensus.CreateEmptyBlocksInterval.String(),
|
||||
"the possible interval between empty blocks")
|
||||
|
||||
addDBFlags(cmd)
|
||||
addDBFlags(cmd, conf)
|
||||
}
|
||||
|
||||
func addDBFlags(cmd *cobra.Command) {
|
||||
func addDBFlags(cmd *cobra.Command, conf *cfg.Config) {
|
||||
cmd.Flags().String(
|
||||
"db-backend",
|
||||
config.DBBackend,
|
||||
conf.DBBackend,
|
||||
"database backend: goleveldb | cleveldb | boltdb | rocksdb | badgerdb")
|
||||
cmd.Flags().String(
|
||||
"db-dir",
|
||||
config.DBPath,
|
||||
conf.DBPath,
|
||||
"database directory")
|
||||
}
|
||||
|
||||
// NewRunNodeCmd returns the command that allows the CLI to start a node.
|
||||
// It can be used with a custom PrivValidator and in-process ABCI application.
|
||||
func NewRunNodeCmd(nodeProvider cfg.ServiceProvider) *cobra.Command {
|
||||
func NewRunNodeCmd(nodeProvider cfg.ServiceProvider, conf *cfg.Config, logger log.Logger) *cobra.Command {
|
||||
cmd := &cobra.Command{
|
||||
Use: "start",
|
||||
Aliases: []string{"node", "run"},
|
||||
Short: "Run the tendermint node",
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
if err := checkGenesisHash(config); err != nil {
|
||||
if err := checkGenesisHash(conf); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ctx, cancel := signal.NotifyContext(cmd.Context(), syscall.SIGTERM)
|
||||
defer cancel()
|
||||
|
||||
n, err := nodeProvider(ctx, config, logger)
|
||||
n, err := nodeProvider(ctx, conf, logger)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create node: %w", err)
|
||||
}
|
||||
@@ -116,14 +117,14 @@ func NewRunNodeCmd(nodeProvider cfg.ServiceProvider) *cobra.Command {
|
||||
return fmt.Errorf("failed to start node: %w", err)
|
||||
}
|
||||
|
||||
logger.Info("started node", "node", n.String())
|
||||
logger.Info("started node", "chain", conf.ChainID())
|
||||
|
||||
<-ctx.Done()
|
||||
return nil
|
||||
},
|
||||
}
|
||||
|
||||
AddNodeFlags(cmd)
|
||||
AddNodeFlags(cmd, conf)
|
||||
return cmd
|
||||
}
|
||||
|
||||
|
||||
@@ -4,21 +4,23 @@ import (
    "fmt"

    "github.com/spf13/cobra"

    "github.com/tendermint/tendermint/config"
)

// ShowNodeIDCmd dumps node's ID to the standard output.
var ShowNodeIDCmd = &cobra.Command{
    Use:   "show-node-id",
    Short: "Show this node's ID",
    RunE:  showNodeID,
}

// MakeShowNodeIDCommand constructs a command to dump the node ID to stdout.
func MakeShowNodeIDCommand(conf *config.Config) *cobra.Command {
    return &cobra.Command{
        Use:   "show-node-id",
        Short: "Show this node's ID",
        RunE: func(cmd *cobra.Command, args []string) error {
            nodeKeyID, err := conf.LoadNodeKeyID()
            if err != nil {
                return err
            }

func showNodeID(cmd *cobra.Command, args []string) error {
    nodeKeyID, err := config.LoadNodeKeyID()
    if err != nil {
        return err

            fmt.Println(nodeKeyID)
            return nil
        },
    }

    fmt.Println(nodeKeyID)
    return nil
}

@@ -6,75 +6,78 @@ import (
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
|
||||
"github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/crypto"
|
||||
"github.com/tendermint/tendermint/internal/jsontypes"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
tmnet "github.com/tendermint/tendermint/libs/net"
|
||||
tmos "github.com/tendermint/tendermint/libs/os"
|
||||
"github.com/tendermint/tendermint/privval"
|
||||
tmgrpc "github.com/tendermint/tendermint/privval/grpc"
|
||||
)
|
||||
|
||||
// ShowValidatorCmd adds capabilities for showing the validator info.
|
||||
var ShowValidatorCmd = &cobra.Command{
|
||||
Use: "show-validator",
|
||||
Short: "Show this node's validator info",
|
||||
RunE: showValidator,
|
||||
}
|
||||
// MakeShowValidatorCommand constructs a command to show the validator info.
|
||||
func MakeShowValidatorCommand(conf *config.Config, logger log.Logger) *cobra.Command {
|
||||
return &cobra.Command{
|
||||
Use: "show-validator",
|
||||
Short: "Show this node's validator info",
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
var (
|
||||
pubKey crypto.PubKey
|
||||
err error
|
||||
bctx = cmd.Context()
|
||||
)
|
||||
//TODO: remove once gRPC is the only supported protocol
|
||||
protocol, _ := tmnet.ProtocolAndAddress(conf.PrivValidator.ListenAddr)
|
||||
switch protocol {
|
||||
case "grpc":
|
||||
pvsc, err := tmgrpc.DialRemoteSigner(
|
||||
bctx,
|
||||
conf.PrivValidator,
|
||||
conf.ChainID(),
|
||||
logger,
|
||||
conf.Instrumentation.Prometheus,
|
||||
)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't connect to remote validator %w", err)
|
||||
}
|
||||
|
||||
func showValidator(cmd *cobra.Command, args []string) error {
|
||||
var (
|
||||
pubKey crypto.PubKey
|
||||
err error
|
||||
bctx = cmd.Context()
|
||||
)
|
||||
//TODO: remove once gRPC is the only supported protocol
|
||||
protocol, _ := tmnet.ProtocolAndAddress(config.PrivValidator.ListenAddr)
|
||||
switch protocol {
|
||||
case "grpc":
|
||||
pvsc, err := tmgrpc.DialRemoteSigner(
|
||||
bctx,
|
||||
config.PrivValidator,
|
||||
config.ChainID(),
|
||||
logger,
|
||||
config.Instrumentation.Prometheus,
|
||||
)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't connect to remote validator %w", err)
|
||||
}
|
||||
ctx, cancel := context.WithTimeout(bctx, ctxTimeout)
|
||||
defer cancel()
|
||||
|
||||
ctx, cancel := context.WithTimeout(bctx, ctxTimeout)
|
||||
defer cancel()
|
||||
pubKey, err = pvsc.GetPubKey(ctx)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't get pubkey: %w", err)
|
||||
}
|
||||
default:
|
||||
|
||||
pubKey, err = pvsc.GetPubKey(ctx)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't get pubkey: %w", err)
|
||||
}
|
||||
default:
|
||||
keyFilePath := conf.PrivValidator.KeyFile()
|
||||
if !tmos.FileExists(keyFilePath) {
|
||||
return fmt.Errorf("private validator file %s does not exist", keyFilePath)
|
||||
}
|
||||
|
||||
keyFilePath := config.PrivValidator.KeyFile()
|
||||
if !tmos.FileExists(keyFilePath) {
|
||||
return fmt.Errorf("private validator file %s does not exist", keyFilePath)
|
||||
}
|
||||
pv, err := privval.LoadFilePV(keyFilePath, conf.PrivValidator.StateFile())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
pv, err := privval.LoadFilePV(keyFilePath, config.PrivValidator.StateFile())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
ctx, cancel := context.WithTimeout(bctx, ctxTimeout)
|
||||
defer cancel()
|
||||
|
||||
ctx, cancel := context.WithTimeout(bctx, ctxTimeout)
|
||||
defer cancel()
|
||||
pubKey, err = pv.GetPubKey(ctx)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't get pubkey: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
pubKey, err = pv.GetPubKey(ctx)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't get pubkey: %w", err)
|
||||
}
|
||||
bz, err := jsontypes.Marshal(pubKey)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to marshal private validator pubkey: %w", err)
|
||||
}
|
||||
|
||||
fmt.Println(string(bz))
|
||||
return nil
|
||||
},
|
||||
}
|
||||
|
||||
bz, err := jsontypes.Marshal(pubKey)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to marshal private validator pubkey: %w", err)
|
||||
}
|
||||
|
||||
fmt.Println(string(bz))
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -13,76 +13,23 @@ import (
|
||||
|
||||
cfg "github.com/tendermint/tendermint/config"
|
||||
"github.com/tendermint/tendermint/libs/bytes"
|
||||
"github.com/tendermint/tendermint/libs/log"
|
||||
tmrand "github.com/tendermint/tendermint/libs/rand"
|
||||
tmtime "github.com/tendermint/tendermint/libs/time"
|
||||
"github.com/tendermint/tendermint/privval"
|
||||
"github.com/tendermint/tendermint/types"
|
||||
)
|
||||
|
||||
var (
|
||||
nValidators int
|
||||
nNonValidators int
|
||||
initialHeight int64
|
||||
configFile string
|
||||
outputDir string
|
||||
nodeDirPrefix string
|
||||
|
||||
populatePersistentPeers bool
|
||||
hostnamePrefix string
|
||||
hostnameSuffix string
|
||||
startingIPAddress string
|
||||
hostnames []string
|
||||
p2pPort int
|
||||
randomMonikers bool
|
||||
)
|
||||
|
||||
const (
|
||||
nodeDirPerm = 0755
|
||||
)
|
||||
|
||||
func init() {
|
||||
TestnetFilesCmd.Flags().IntVar(&nValidators, "v", 4,
|
||||
"number of validators to initialize the testnet with")
|
||||
TestnetFilesCmd.Flags().StringVar(&configFile, "config", "",
|
||||
"config file to use (note some options may be overwritten)")
|
||||
TestnetFilesCmd.Flags().IntVar(&nNonValidators, "n", 0,
|
||||
"number of non-validators to initialize the testnet with")
|
||||
TestnetFilesCmd.Flags().StringVar(&outputDir, "o", "./mytestnet",
|
||||
"directory to store initialization data for the testnet")
|
||||
TestnetFilesCmd.Flags().StringVar(&nodeDirPrefix, "node-dir-prefix", "node",
|
||||
"prefix the directory name for each node with (node results in node0, node1, ...)")
|
||||
TestnetFilesCmd.Flags().Int64Var(&initialHeight, "initial-height", 0,
|
||||
"initial height of the first block")
|
||||
|
||||
TestnetFilesCmd.Flags().BoolVar(&populatePersistentPeers, "populate-persistent-peers", true,
|
||||
"update config of each node with the list of persistent peers build using either"+
|
||||
" hostname-prefix or"+
|
||||
" starting-ip-address")
|
||||
TestnetFilesCmd.Flags().StringVar(&hostnamePrefix, "hostname-prefix", "node",
|
||||
"hostname prefix (\"node\" results in persistent peers list ID0@node0:26656, ID1@node1:26656, ...)")
|
||||
TestnetFilesCmd.Flags().StringVar(&hostnameSuffix, "hostname-suffix", "",
|
||||
"hostname suffix ("+
|
||||
"\".xyz.com\""+
|
||||
" results in persistent peers list ID0@node0.xyz.com:26656, ID1@node1.xyz.com:26656, ...)")
|
||||
TestnetFilesCmd.Flags().StringVar(&startingIPAddress, "starting-ip-address", "",
|
||||
"starting IP address ("+
|
||||
"\"192.168.0.1\""+
|
||||
" results in persistent peers list ID0@192.168.0.1:26656, ID1@192.168.0.2:26656, ...)")
|
||||
TestnetFilesCmd.Flags().StringArrayVar(&hostnames, "hostname", []string{},
|
||||
"manually override all hostnames of validators and non-validators (use --hostname multiple times for multiple hosts)")
|
||||
TestnetFilesCmd.Flags().IntVar(&p2pPort, "p2p-port", 26656,
|
||||
"P2P Port")
|
||||
TestnetFilesCmd.Flags().BoolVar(&randomMonikers, "random-monikers", false,
|
||||
"randomize the moniker for each generated node")
|
||||
TestnetFilesCmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
|
||||
"Key type to generate privval file with. Options: ed25519, secp256k1")
|
||||
}
|
||||
|
||||
// TestnetFilesCmd allows initialisation of files for a Tendermint testnet.
|
||||
var TestnetFilesCmd = &cobra.Command{
|
||||
Use: "testnet",
|
||||
Short: "Initialize files for a Tendermint testnet",
|
||||
Long: `testnet will create "v" + "n" number of directories and populate each with
|
||||
// MakeTestnetFilesCommand constructs a command to generate testnet config files.
|
||||
func MakeTestnetFilesCommand(conf *cfg.Config, logger log.Logger) *cobra.Command {
|
||||
cmd := &cobra.Command{
|
||||
Use: "testnet",
|
||||
Short: "Initialize files for a Tendermint testnet",
|
||||
Long: `testnet will create "v" + "n" number of directories and populate each with
|
||||
necessary files (private validator, genesis, config, etc.).
|
||||
|
||||
Note, strict routability for addresses is turned off in the config file.
|
||||
@@ -93,205 +40,292 @@ Example:
|
||||
|
||||
tendermint testnet --v 4 --o ./output --populate-persistent-peers --starting-ip-address 192.168.10.2
|
||||
`,
|
||||
RunE: testnetFiles,
|
||||
}
|
||||
|
||||
func testnetFiles(cmd *cobra.Command, args []string) error {
|
||||
if len(hostnames) > 0 && len(hostnames) != (nValidators+nNonValidators) {
|
||||
return fmt.Errorf(
|
||||
"testnet needs precisely %d hostnames (number of validators plus non-validators) if --hostname parameter is used",
|
||||
nValidators+nNonValidators,
|
||||
)
|
||||
}
|
||||
|
||||
// set mode to validator for testnet
|
||||
config := cfg.DefaultValidatorConfig()
|
||||
|
||||
// overwrite default config if set and valid
|
||||
if configFile != "" {
|
||||
viper.SetConfigFile(configFile)
|
||||
if err := viper.ReadInConfig(); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := viper.Unmarshal(config); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := config.ValidateBasic(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
genVals := make([]types.GenesisValidator, nValidators)
|
||||
ctx := cmd.Context()
|
||||
for i := 0; i < nValidators; i++ {
|
||||
nodeDirName := fmt.Sprintf("%s%d", nodeDirPrefix, i)
|
||||
nodeDir := filepath.Join(outputDir, nodeDirName)
|
||||
config.SetRoot(nodeDir)
|
||||
|
||||
err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
|
||||
if err := initFilesWithConfig(ctx, config); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
pvKeyFile := filepath.Join(nodeDir, config.PrivValidator.Key)
|
||||
pvStateFile := filepath.Join(nodeDir, config.PrivValidator.State)
|
||||
pv, err := privval.LoadFilePV(pvKeyFile, pvStateFile)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(ctx, ctxTimeout)
|
||||
defer cancel()
|
||||
|
||||
pubKey, err := pv.GetPubKey(ctx)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't get pubkey: %w", err)
|
||||
}
|
||||
genVals[i] = types.GenesisValidator{
|
||||
Address: pubKey.Address(),
|
||||
PubKey: pubKey,
|
||||
Power: 1,
|
||||
Name: nodeDirName,
|
||||
}
|
||||
}
|
||||
|
||||
for i := 0; i < nNonValidators; i++ {
|
||||
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i+nValidators))
|
||||
config.SetRoot(nodeDir)
|
||||
|
||||
err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
|
||||
err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
|
||||
if err := initFilesWithConfig(ctx, config); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Generate genesis doc from generated validators
|
||||
genDoc := &types.GenesisDoc{
|
||||
ChainID: "chain-" + tmrand.Str(6),
|
||||
GenesisTime: tmtime.Now(),
|
||||
InitialHeight: initialHeight,
|
||||
Validators: genVals,
|
||||
ConsensusParams: types.DefaultConsensusParams(),
|
||||
}
|
||||
if keyType == "secp256k1" {
|
||||
genDoc.ConsensusParams.Validator = types.ValidatorParams{
|
||||
PubKeyTypes: []string{types.ABCIPubKeyTypeSecp256k1},
|
||||
}
|
||||
}
|
||||
|
||||
// Write genesis file.
|
||||
for i := 0; i < nValidators+nNonValidators; i++ {
|
||||
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
|
||||
if err := genDoc.SaveAs(filepath.Join(nodeDir, config.BaseConfig.Genesis)); err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Gather persistent peer addresses.
|
||||
var (
|
||||
persistentPeers = make([]string, 0)
|
||||
err error
|
||||
nValidators int
|
||||
nNonValidators int
|
||||
initialHeight int64
|
||||
configFile string
|
||||
outputDir string
|
||||
nodeDirPrefix string
|
||||
|
||||
populatePersistentPeers bool
|
||||
hostnamePrefix string
|
||||
hostnameSuffix string
|
||||
startingIPAddress string
|
||||
hostnames []string
|
||||
p2pPort int
|
||||
randomMonikers bool
|
||||
keyType string
|
||||
)
|
||||
if populatePersistentPeers {
|
||||
persistentPeers, err = persistentPeersArray(config)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Overwrite default config.
|
||||
for i := 0; i < nValidators+nNonValidators; i++ {
|
||||
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
|
||||
config.SetRoot(nodeDir)
|
||||
config.P2P.AllowDuplicateIP = true
|
||||
if populatePersistentPeers {
|
||||
persistentPeersWithoutSelf := make([]string, 0)
|
||||
for j := 0; j < len(persistentPeers); j++ {
|
||||
if j == i {
|
||||
continue
|
||||
}
|
||||
persistentPeersWithoutSelf = append(persistentPeersWithoutSelf, persistentPeers[j])
|
||||
cmd.Flags().IntVar(&nValidators, "v", 4,
|
||||
"number of validators to initialize the testnet with")
|
||||
cmd.Flags().StringVar(&configFile, "config", "",
|
||||
"config file to use (note some options may be overwritten)")
|
||||
cmd.Flags().IntVar(&nNonValidators, "n", 0,
|
||||
"number of non-validators to initialize the testnet with")
|
||||
cmd.Flags().StringVar(&outputDir, "o", "./mytestnet",
|
||||
"directory to store initialization data for the testnet")
|
||||
cmd.Flags().StringVar(&nodeDirPrefix, "node-dir-prefix", "node",
|
||||
"prefix the directory name for each node with (node results in node0, node1, ...)")
|
||||
cmd.Flags().Int64Var(&initialHeight, "initial-height", 0,
|
||||
"initial height of the first block")
|
||||
|
||||
cmd.Flags().BoolVar(&populatePersistentPeers, "populate-persistent-peers", true,
|
||||
"update config of each node with the list of persistent peers build using either"+
|
||||
" hostname-prefix or"+
|
||||
" starting-ip-address")
|
||||
cmd.Flags().StringVar(&hostnamePrefix, "hostname-prefix", "node",
|
||||
"hostname prefix (\"node\" results in persistent peers list ID0@node0:26656, ID1@node1:26656, ...)")
|
||||
cmd.Flags().StringVar(&hostnameSuffix, "hostname-suffix", "",
|
||||
"hostname suffix ("+
|
||||
"\".xyz.com\""+
|
||||
" results in persistent peers list ID0@node0.xyz.com:26656, ID1@node1.xyz.com:26656, ...)")
|
||||
cmd.Flags().StringVar(&startingIPAddress, "starting-ip-address", "",
|
||||
"starting IP address ("+
|
||||
"\"192.168.0.1\""+
|
||||
" results in persistent peers list ID0@192.168.0.1:26656, ID1@192.168.0.2:26656, ...)")
|
||||
cmd.Flags().StringArrayVar(&hostnames, "hostname", []string{},
|
||||
"manually override all hostnames of validators and non-validators (use --hostname multiple times for multiple hosts)")
|
||||
cmd.Flags().IntVar(&p2pPort, "p2p-port", 26656,
|
||||
"P2P Port")
|
||||
cmd.Flags().BoolVar(&randomMonikers, "random-monikers", false,
|
||||
"randomize the moniker for each generated node")
|
||||
cmd.Flags().StringVar(&keyType, "key", types.ABCIPubKeyTypeEd25519,
|
||||
"Key type to generate privval file with. Options: ed25519, secp256k1")
|
||||
|
||||
cmd.RunE = func(cmd *cobra.Command, args []string) error {
|
||||
if len(hostnames) > 0 && len(hostnames) != (nValidators+nNonValidators) {
|
||||
return fmt.Errorf(
|
||||
"testnet needs precisely %d hostnames (number of validators plus non-validators) if --hostname parameter is used",
|
||||
nValidators+nNonValidators,
|
||||
)
|
||||
}
|
||||
|
||||
// set mode to validator for testnet
|
||||
config := cfg.DefaultValidatorConfig()
|
||||
|
||||
// overwrite default config if set and valid
|
||||
if configFile != "" {
|
||||
viper.SetConfigFile(configFile)
|
||||
if err := viper.ReadInConfig(); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := viper.Unmarshal(config); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := config.ValidateBasic(); err != nil {
|
||||
return err
|
||||
}
|
||||
config.P2P.PersistentPeers = strings.Join(persistentPeersWithoutSelf, ",")
|
||||
}
|
||||
config.Moniker = moniker(i)
|
||||
|
||||
if err := cfg.WriteConfigFile(nodeDir, config); err != nil {
|
||||
return err
|
||||
genVals := make([]types.GenesisValidator, nValidators)
|
||||
ctx := cmd.Context()
|
||||
for i := 0; i < nValidators; i++ {
|
||||
nodeDirName := fmt.Sprintf("%s%d", nodeDirPrefix, i)
|
||||
nodeDir := filepath.Join(outputDir, nodeDirName)
|
||||
config.SetRoot(nodeDir)
|
||||
|
||||
err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
|
||||
if err := initFilesWithConfig(ctx, config, logger, keyType); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
pvKeyFile := filepath.Join(nodeDir, config.PrivValidator.Key)
|
||||
pvStateFile := filepath.Join(nodeDir, config.PrivValidator.State)
|
||||
pv, err := privval.LoadFilePV(pvKeyFile, pvStateFile)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(ctx, ctxTimeout)
|
||||
defer cancel()
|
||||
|
||||
pubKey, err := pv.GetPubKey(ctx)
|
||||
if err != nil {
|
||||
return fmt.Errorf("can't get pubkey: %w", err)
|
||||
}
|
||||
genVals[i] = types.GenesisValidator{
|
||||
Address: pubKey.Address(),
|
||||
PubKey: pubKey,
|
||||
Power: 1,
|
||||
Name: nodeDirName,
|
||||
}
|
||||
}
|
||||
|
||||
for i := 0; i < nNonValidators; i++ {
|
||||
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i+nValidators))
|
||||
config.SetRoot(nodeDir)
|
||||
|
||||
err := os.MkdirAll(filepath.Join(nodeDir, "config"), nodeDirPerm)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
|
||||
err = os.MkdirAll(filepath.Join(nodeDir, "data"), nodeDirPerm)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
|
||||
if err := initFilesWithConfig(ctx, conf, logger, keyType); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Generate genesis doc from generated validators
|
||||
genDoc := &types.GenesisDoc{
|
||||
ChainID: "chain-" + tmrand.Str(6),
|
||||
GenesisTime: tmtime.Now(),
|
||||
InitialHeight: initialHeight,
|
||||
Validators: genVals,
|
||||
ConsensusParams: types.DefaultConsensusParams(),
|
||||
}
|
||||
if keyType == "secp256k1" {
|
||||
genDoc.ConsensusParams.Validator = types.ValidatorParams{
|
||||
PubKeyTypes: []string{types.ABCIPubKeyTypeSecp256k1},
|
||||
}
|
||||
}
|
||||
|
||||
// Write genesis file.
|
||||
for i := 0; i < nValidators+nNonValidators; i++ {
|
||||
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
|
||||
if err := genDoc.SaveAs(filepath.Join(nodeDir, config.BaseConfig.Genesis)); err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Gather persistent peer addresses.
|
||||
var (
|
||||
persistentPeers = make([]string, 0)
|
||||
err error
|
||||
)
|
||||
tpargs := testnetPeerArgs{
|
||||
numValidators: nValidators,
|
||||
numNonValidators: nNonValidators,
|
||||
peerToPeerPort: p2pPort,
|
||||
nodeDirPrefix: nodeDirPrefix,
|
||||
outputDir: outputDir,
|
||||
hostnames: hostnames,
|
||||
startingIPAddr: startingIPAddress,
|
||||
hostnamePrefix: hostnamePrefix,
|
||||
hostnameSuffix: hostnameSuffix,
|
||||
randomMonikers: randomMonikers,
|
||||
}
|
||||
|
||||
if populatePersistentPeers {
|
||||
|
||||
persistentPeers, err = persistentPeersArray(config, tpargs)
|
||||
if err != nil {
|
||||
_ = os.RemoveAll(outputDir)
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Overwrite default config.
|
||||
for i := 0; i < nValidators+nNonValidators; i++ {
|
||||
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
|
||||
config.SetRoot(nodeDir)
|
||||
config.P2P.AllowDuplicateIP = true
|
||||
if populatePersistentPeers {
|
||||
persistentPeersWithoutSelf := make([]string, 0)
|
||||
for j := 0; j < len(persistentPeers); j++ {
|
||||
if j == i {
|
||||
continue
|
||||
}
|
||||
persistentPeersWithoutSelf = append(persistentPeersWithoutSelf, persistentPeers[j])
|
||||
}
|
||||
config.P2P.PersistentPeers = strings.Join(persistentPeersWithoutSelf, ",")
|
||||
}
|
||||
config.Moniker = tpargs.moniker(i)
|
||||
|
||||
if err := cfg.WriteConfigFile(nodeDir, config); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators)
|
||||
return nil
|
||||
}
|
||||
|
||||
fmt.Printf("Successfully initialized %v node directories\n", nValidators+nNonValidators)
|
||||
return nil
|
||||
return cmd
|
||||
}
|
||||
|
||||
func hostnameOrIP(i int) string {
|
||||
if len(hostnames) > 0 && i < len(hostnames) {
|
||||
return hostnames[i]
|
||||
type testnetPeerArgs struct {
|
||||
numValidators int
|
||||
numNonValidators int
|
||||
peerToPeerPort int
|
||||
nodeDirPrefix string
|
||||
outputDir string
|
||||
hostnames []string
|
||||
startingIPAddr string
|
||||
hostnamePrefix string
|
||||
hostnameSuffix string
|
||||
randomMonikers bool
|
||||
}
|
||||
|
||||
func (args *testnetPeerArgs) hostnameOrIP(i int) (string, error) {
|
||||
if len(args.hostnames) > 0 && i < len(args.hostnames) {
|
||||
return args.hostnames[i], nil
|
||||
}
|
||||
if startingIPAddress == "" {
|
||||
return fmt.Sprintf("%s%d%s", hostnamePrefix, i, hostnameSuffix)
|
||||
if args.startingIPAddr == "" {
|
||||
return fmt.Sprintf("%s%d%s", args.hostnamePrefix, i, args.hostnameSuffix), nil
|
||||
}
|
||||
ip := net.ParseIP(startingIPAddress)
|
||||
ip := net.ParseIP(args.startingIPAddr)
|
||||
ip = ip.To4()
|
||||
if ip == nil {
|
||||
fmt.Printf("%v: non ipv4 address\n", startingIPAddress)
|
||||
os.Exit(1)
|
||||
return "", fmt.Errorf("%v is non-ipv4 address", args.startingIPAddr)
|
||||
}
|
||||
|
||||
for j := 0; j < i; j++ {
|
||||
ip[3]++
|
||||
}
|
||||
return ip.String()
|
||||
return ip.String(), nil
|
||||
|
||||
}
|
||||
|
||||
// get an array of persistent peers
|
||||
func persistentPeersArray(config *cfg.Config) ([]string, error) {
|
||||
peers := make([]string, nValidators+nNonValidators)
|
||||
for i := 0; i < nValidators+nNonValidators; i++ {
|
||||
nodeDir := filepath.Join(outputDir, fmt.Sprintf("%s%d", nodeDirPrefix, i))
|
||||
func persistentPeersArray(config *cfg.Config, args testnetPeerArgs) ([]string, error) {
|
||||
peers := make([]string, args.numValidators+args.numNonValidators)
|
||||
for i := 0; i < len(peers); i++ {
|
||||
nodeDir := filepath.Join(args.outputDir, fmt.Sprintf("%s%d", args.nodeDirPrefix, i))
|
||||
config.SetRoot(nodeDir)
|
||||
nodeKey, err := config.LoadNodeKeyID()
|
||||
if err != nil {
|
||||
return []string{}, err
|
||||
return nil, err
|
||||
}
|
||||
peers[i] = nodeKey.AddressString(fmt.Sprintf("%s:%d", hostnameOrIP(i), p2pPort))
|
||||
addr, err := args.hostnameOrIP(i)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
peers[i] = nodeKey.AddressString(fmt.Sprintf("%s:%d", addr, args.peerToPeerPort))
|
||||
}
|
||||
return peers, nil
|
||||
}
|
||||
|
||||
func moniker(i int) string {
|
||||
if randomMonikers {
|
||||
func (args *testnetPeerArgs) moniker(i int) string {
|
||||
if args.randomMonikers {
|
||||
return randomMoniker()
|
||||
}
|
||||
if len(hostnames) > 0 && i < len(hostnames) {
|
||||
return hostnames[i]
|
||||
if len(args.hostnames) > 0 && i < len(args.hostnames) {
|
||||
return args.hostnames[i]
|
||||
}
|
||||
if startingIPAddress == "" {
|
||||
return fmt.Sprintf("%s%d%s", hostnamePrefix, i, hostnameSuffix)
|
||||
if args.startingIPAddr == "" {
|
||||
return fmt.Sprintf("%s%d%s", args.hostnamePrefix, i, args.hostnameSuffix)
|
||||
}
|
||||
return randomMoniker()
|
||||
}
|
||||
|
||||
@@ -1,37 +1,51 @@
package main

import (
    "os"
    "path/filepath"
    "context"

    cmd "github.com/tendermint/tendermint/cmd/tendermint/commands"
    "github.com/tendermint/tendermint/cmd/tendermint/commands"
    "github.com/tendermint/tendermint/cmd/tendermint/commands/debug"
    "github.com/tendermint/tendermint/config"
    "github.com/tendermint/tendermint/libs/cli"
    "github.com/tendermint/tendermint/libs/log"
    "github.com/tendermint/tendermint/node"
)

func main() {
    rootCmd := cmd.RootCmd
    rootCmd.AddCommand(
        cmd.GenValidatorCmd,
        cmd.ReIndexEventCmd,
        cmd.InitFilesCmd,
        cmd.LightCmd,
        cmd.ReplayCmd,
        cmd.ReplayConsoleCmd,
        cmd.ResetAllCmd,
        cmd.ResetPrivValidatorCmd,
        cmd.ShowValidatorCmd,
        cmd.TestnetFilesCmd,
        cmd.ShowNodeIDCmd,
        cmd.GenNodeKeyCmd,
        cmd.VersionCmd,
        cmd.InspectCmd,
        cmd.RollbackStateCmd,
        cmd.MakeKeyMigrateCommand(),
        debug.DebugCmd,
        cli.NewCompletionCmd(rootCmd, true),

    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    conf, err := commands.ParseConfig(config.DefaultConfig())
    if err != nil {
        panic(err)
    }

    logger, err := log.NewDefaultLogger(conf.LogFormat, conf.LogLevel)
    if err != nil {
        panic(err)
    }

    rcmd := commands.RootCommand(conf, logger)
    rcmd.AddCommand(
        commands.MakeGenValidatorCommand(),
        commands.MakeReindexEventCommand(conf, logger),
        commands.MakeInitFilesCommand(conf, logger),
        commands.MakeLightCommand(conf, logger),
        commands.MakeReplayCommand(conf, logger),
        commands.MakeReplayConsoleCommand(conf, logger),
        commands.MakeResetAllCommand(conf, logger),
        commands.MakeResetStateCommand(conf, logger),
        commands.MakeResetPrivateValidatorCommand(conf, logger),
        commands.MakeShowValidatorCommand(conf, logger),
        commands.MakeTestnetFilesCommand(conf, logger),
        commands.MakeShowNodeIDCommand(conf),
        commands.GenNodeKeyCmd,
        commands.VersionCmd,
        commands.MakeInspectCommand(conf, logger),
        commands.MakeRollbackStateCommand(conf),
        commands.MakeKeyMigrateCommand(conf, logger),
        debug.GetDebugCommand(logger),
        commands.NewCompletionCmd(rcmd, true),
    )

    // NOTE:
@@ -45,10 +59,9 @@ func main() {
    nodeFunc := node.NewDefault

    // Create & start node
    rootCmd.AddCommand(cmd.NewRunNodeCmd(nodeFunc))
    rcmd.AddCommand(commands.NewRunNodeCmd(nodeFunc, conf, logger))

    cmd := cli.PrepareBaseCmd(rootCmd, "TM", os.ExpandEnv(filepath.Join("$HOME", config.DefaultTendermintDir)))
    if err := cmd.Execute(); err != nil {
    if err := cli.RunWithTrace(ctx, rcmd); err != nil {
        panic(err)
    }
}

@@ -442,6 +442,33 @@ type RPCConfig struct {
 	// to the estimated maximum number of broadcast_tx_commit calls per block.
 	MaxSubscriptionsPerClient int `mapstructure:"max-subscriptions-per-client"`
 
+	// If true, disable the websocket interface to the RPC service. This has
+	// the effect of disabling the /subscribe, /unsubscribe, and /unsubscribe_all
+	// methods for event subscription.
+	//
+	// EXPERIMENTAL: This setting will be removed in Tendermint v0.37.
+	ExperimentalDisableWebsocket bool `mapstructure:"experimental-disable-websocket"`
+
+	// The time window size for the event log. All events up to this long before
+	// the latest (up to EventLogMaxItems) will be available for subscribers to
+	// fetch via the /events method. If 0 (the default) the event log and the
+	// /events RPC method are disabled.
+	EventLogWindowSize time.Duration `mapstructure:"event-log-window-size"`
+
+	// The maximum number of events that may be retained by the event log. If
+	// this value is 0, no upper limit is set. Otherwise, items in excess of
+	// this number will be discarded from the event log.
+	//
+	// Warning: This setting is a safety valve. Setting it too low may cause
+	// subscribers to miss events. Try to choose a value higher than the
+	// maximum worst-case expected event load within the chosen window size in
+	// ordinary operation.
+	//
+	// For example, if the window size is 10 minutes and the node typically
+	// averages 1000 events per ten minutes, but with occasional known spikes of
+	// up to 2000, choose a value > 2000.
+	EventLogMaxItems int `mapstructure:"event-log-max-items"`
+
 	// How long to wait for a tx to be committed during /broadcast_tx_commit
 	// WARNING: Using a value larger than 10s will result in increasing the
 	// global HTTP write timeout, which applies to all connections and endpoints.
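The comments above give the sizing guidance for the new event log. As a minimal operator-side sketch (not part of this diff, and assuming only the exported config package and the field names shown above), a node with a 10-minute window, ~1000 events per window in normal operation, and known spikes near 2000 might be configured with a cap above 2000:

```go
package main

import (
    "fmt"
    "time"

    "github.com/tendermint/tendermint/config"
)

func main() {
    // Start from the library defaults, where the event log is disabled.
    cfg := config.DefaultRPCConfig()

    // Keep a 10-minute window of events available via the /events method.
    cfg.EventLogWindowSize = 10 * time.Minute

    // Typical load is ~1000 events per window with spikes near 2000,
    // so retain up to 2500 items as a safety margin (hypothetical value).
    cfg.EventLogMaxItems = 2500

    if err := cfg.ValidateBasic(); err != nil {
        panic(err)
    }
    fmt.Printf("event log: window=%s max-items=%d\n",
        cfg.EventLogWindowSize, cfg.EventLogMaxItems)
}
```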
@@ -487,9 +514,14 @@ func DefaultRPCConfig() *RPCConfig {
 		Unsafe:             false,
 		MaxOpenConnections: 900,
 
-		MaxSubscriptionClients:    100,
-		MaxSubscriptionsPerClient: 5,
-		TimeoutBroadcastTxCommit:  10 * time.Second,
+		// Settings for event subscription.
+		MaxSubscriptionClients:       100,
+		MaxSubscriptionsPerClient:    5,
+		ExperimentalDisableWebsocket: false, // compatible with TM v0.35 and earlier
+		EventLogWindowSize:           0,     // disables /events RPC by default
+		EventLogMaxItems:             0,
+
+		TimeoutBroadcastTxCommit: 10 * time.Second,
 
 		MaxBodyBytes:   int64(1000000), // 1MB
 		MaxHeaderBytes: 1 << 20, // same as the net/http default
@@ -519,6 +551,12 @@ func (cfg *RPCConfig) ValidateBasic() error {
 	if cfg.MaxSubscriptionsPerClient < 0 {
 		return errors.New("max-subscriptions-per-client can't be negative")
 	}
+	if cfg.EventLogWindowSize < 0 {
+		return errors.New("event-log-window-size must not be negative")
+	}
+	if cfg.EventLogMaxItems < 0 {
+		return errors.New("event-log-max-items must not be negative")
+	}
 	if cfg.TimeoutBroadcastTxCommit < 0 {
 		return errors.New("timeout-broadcast-tx-commit can't be negative")
 	}
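As a quick illustration of the checks added above (a hypothetical test, not taken from this diff), negative values for either new setting should now fail validation while the zero defaults still pass:

```go
package config_test

import (
    "testing"

    "github.com/tendermint/tendermint/config"
)

func TestEventLogSettingsValidation(t *testing.T) {
    cfg := config.DefaultRPCConfig()

    // The zero defaults (event log disabled) must remain valid.
    if err := cfg.ValidateBasic(); err != nil {
        t.Fatalf("default RPC config should validate: %v", err)
    }

    // A negative window size is rejected.
    cfg.EventLogWindowSize = -1
    if err := cfg.ValidateBasic(); err == nil {
        t.Fatal("expected error for negative event-log-window-size")
    }

    // A negative item cap is rejected.
    cfg.EventLogWindowSize = 0
    cfg.EventLogMaxItems = -1
    if err := cfg.ValidateBasic(); err == nil {
        t.Fatal("expected error for negative event-log-max-items")
    }
}
```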
@@ -220,6 +220,33 @@ max-subscription-clients = {{ .RPC.MaxSubscriptionClients }}
 # to the estimated maximum number of broadcast_tx_commit calls per block.
 max-subscriptions-per-client = {{ .RPC.MaxSubscriptionsPerClient }}
 
+# If true, disable the websocket interface to the RPC service. This has
+# the effect of disabling the /subscribe, /unsubscribe, and /unsubscribe_all
+# methods for event subscription.
+#
+# EXPERIMENTAL: This setting will be removed in Tendermint v0.37.
+experimental-disable-websocket = {{ .RPC.ExperimentalDisableWebsocket }}
+
+# The time window size for the event log. All events up to this long before
+# the latest (up to EventLogMaxItems) will be available for subscribers to
+# fetch via the /events method. If 0 (the default) the event log and the
+# /events RPC method are disabled.
+event-log-window-size = "{{ .RPC.EventLogWindowSize }}"
+
+# The maximum number of events that may be retained by the event log. If
+# this value is 0, no upper limit is set. Otherwise, items in excess of
+# this number will be discarded from the event log.
+#
+# Warning: This setting is a safety valve. Setting it too low may cause
+# subscribers to miss events. Try to choose a value higher than the
+# maximum worst-case expected event load within the chosen window size in
+# ordinary operation.
+#
+# For example, if the window size is 10 minutes and the node typically
+# averages 1000 events per ten minutes, but with occasional known spikes of
+# up to 2000, choose a value > 2000.
+event-log-max-items = {{ .RPC.EventLogMaxItems }}
+
 # How long to wait for a tx to be committed during /broadcast_tx_commit.
 # WARNING: Using a value larger than 10s will result in increasing the
 # global HTTP write timeout, which applies to all connections and endpoints.
@@ -504,13 +531,13 @@ namespace = "{{ .Instrumentation.Namespace }}"
 
 /****** these are for test settings ***********/
 
-func ResetTestRoot(testName string) (*Config, error) {
-	return ResetTestRootWithChainID(testName, "")
+func ResetTestRoot(dir, testName string) (*Config, error) {
+	return ResetTestRootWithChainID(dir, testName, "")
 }
 
-func ResetTestRootWithChainID(testName string, chainID string) (*Config, error) {
+func ResetTestRootWithChainID(dir, testName string, chainID string) (*Config, error) {
 	// create a unique, concurrency-safe test directory under os.TempDir()
-	rootDir, err := os.MkdirTemp("", fmt.Sprintf("%s-%s_", chainID, testName))
+	rootDir, err := os.MkdirTemp(dir, fmt.Sprintf("%s-%s_", chainID, testName))
 	if err != nil {
 		return nil, err
 	}
@@ -20,9 +20,7 @@ func ensureFiles(t *testing.T, rootDir string, files ...string) {
 
 func TestEnsureRoot(t *testing.T) {
 	// setup temp dir for test
-	tmpDir, err := os.MkdirTemp("", "config-test")
-	require.NoError(t, err)
-	defer os.RemoveAll(tmpDir)
+	tmpDir := t.TempDir()
 
 	// create root dir
 	EnsureRoot(tmpDir)
@@ -42,7 +40,7 @@ func TestEnsureTestRoot(t *testing.T) {
 	testName := "ensureTestRoot"
 
 	// create root dir
-	cfg, err := ResetTestRoot(testName)
+	cfg, err := ResetTestRoot(t.TempDir(), testName)
 	require.NoError(t, err)
 	defer os.RemoveAll(cfg.RootDir)
 	rootDir := cfg.RootDir
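With the new signature, callers pass the parent directory explicitly. A hypothetical external test (not part of this diff) would typically hand it t.TempDir() so each test gets an isolated root that the testing framework removes automatically:

```go
package myapp_test

import (
    "testing"

    "github.com/tendermint/tendermint/config"
)

func TestWithTendermintTestRoot(t *testing.T) {
    // t.TempDir() is cleaned up when the test finishes, so no manual
    // defer os.RemoveAll(...) is needed for the parent directory.
    cfg, err := config.ResetTestRoot(t.TempDir(), t.Name())
    if err != nil {
        t.Fatal(err)
    }
    t.Logf("test root: %s", cfg.RootDir)
}
```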
@@ -6,6 +6,7 @@ import (
 	"testing"
 
+	"github.com/stretchr/testify/require"
 
 	"github.com/tendermint/tendermint/crypto"
 	"github.com/tendermint/tendermint/crypto/internal/benchmarking"
 )
@@ -9,11 +9,12 @@ import (
 	"math/big"
 
 	secp256k1 "github.com/btcsuite/btcd/btcec"
 
 	"github.com/tendermint/tendermint/crypto"
+	"github.com/tendermint/tendermint/internal/jsontypes"
 
 	// necessary for Bitcoin address format
-	"golang.org/x/crypto/ripemd160" // nolint
+	"golang.org/x/crypto/ripemd160" //nolint:staticcheck
 )
 
 //-------------------------------------
@@ -178,3 +179,67 @@ func (pubKey PubKey) Equals(other crypto.PubKey) bool {
 func (pubKey PubKey) Type() string {
 	return KeyType
 }
+
+// used to reject malleable signatures
+// see:
+//  - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
+//  - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/crypto.go#L39
+var secp256k1halfN = new(big.Int).Rsh(secp256k1.S256().N, 1)
+
+// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
+// The returned signature will be of the form R || S (in lower-S form).
+func (privKey PrivKey) Sign(msg []byte) ([]byte, error) {
+	priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey)
+
+	sig, err := priv.Sign(crypto.Sha256(msg))
+	if err != nil {
+		return nil, err
+	}
+
+	sigBytes := serializeSig(sig)
+	return sigBytes, nil
+}
+
+// VerifySignature verifies a signature of the form R || S.
+// It rejects signatures which are not in lower-S form.
+func (pubKey PubKey) VerifySignature(msg []byte, sigStr []byte) bool {
+	if len(sigStr) != 64 {
+		return false
+	}
+
+	pub, err := secp256k1.ParsePubKey(pubKey, secp256k1.S256())
+	if err != nil {
+		return false
+	}
+
+	// parse the signature:
+	signature := signatureFromBytes(sigStr)
+	// Reject malleable signatures. libsecp256k1 does this check but btcec doesn't.
+	// see: https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
+	if signature.S.Cmp(secp256k1halfN) > 0 {
+		return false
+	}
+
+	return signature.Verify(crypto.Sha256(msg), pub)
+}
+
+// Read Signature struct from R || S. Caller needs to ensure
+// that len(sigStr) == 64.
+func signatureFromBytes(sigStr []byte) *secp256k1.Signature {
+	return &secp256k1.Signature{
+		R: new(big.Int).SetBytes(sigStr[:32]),
+		S: new(big.Int).SetBytes(sigStr[32:64]),
+	}
+}
+
+// Serialize signature to R || S.
+// R, S are padded to 32 bytes respectively.
+func serializeSig(sig *secp256k1.Signature) []byte {
+	rBytes := sig.R.Bytes()
+	sBytes := sig.S.Bytes()
+	sigBytes := make([]byte, 64)
+	// 0 pad the byte arrays from the left if they aren't big enough.
+	copy(sigBytes[32-len(rBytes):32], rBytes)
+	copy(sigBytes[64-len(sBytes):64], sBytes)
+	return sigBytes
+}
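A small usage sketch for the code above (assuming the package's existing GenPrivKey helper, which is not shown in this diff): signatures come back as 64-byte R || S in lower-S form, and a mutated message fails verification.

```go
package main

import (
    "fmt"

    "github.com/tendermint/tendermint/crypto/secp256k1"
)

func main() {
    priv := secp256k1.GenPrivKey() // assumed key-generation helper from the same package
    msg := []byte("hello, abci")

    sig, err := priv.Sign(msg)
    if err != nil {
        panic(err)
    }

    pub := priv.PubKey()
    fmt.Println(len(sig))                                      // 64 (R || S)
    fmt.Println(pub.VerifySignature(msg, sig))                 // true
    fmt.Println(pub.VerifySignature([]byte("tampered"), sig))  // false
}
```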
@@ -1,76 +0,0 @@
-//go:build !libsecp256k1
-// +build !libsecp256k1
-
-package secp256k1
-
-import (
-	"math/big"
-
-	secp256k1 "github.com/btcsuite/btcd/btcec"
-
-	"github.com/tendermint/tendermint/crypto"
-)
-
-// used to reject malleable signatures
-// see:
-//  - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
-//  - https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/crypto.go#L39
-var secp256k1halfN = new(big.Int).Rsh(secp256k1.S256().N, 1)
-
-// Sign creates an ECDSA signature on curve Secp256k1, using SHA256 on the msg.
-// The returned signature will be of the form R || S (in lower-S form).
-func (privKey PrivKey) Sign(msg []byte) ([]byte, error) {
-	priv, _ := secp256k1.PrivKeyFromBytes(secp256k1.S256(), privKey)
-
-	sig, err := priv.Sign(crypto.Sha256(msg))
-	if err != nil {
-		return nil, err
-	}
-
-	sigBytes := serializeSig(sig)
-	return sigBytes, nil
-}
-
-// VerifySignature verifies a signature of the form R || S.
-// It rejects signatures which are not in lower-S form.
-func (pubKey PubKey) VerifySignature(msg []byte, sigStr []byte) bool {
-	if len(sigStr) != 64 {
-		return false
-	}
-
-	pub, err := secp256k1.ParsePubKey(pubKey, secp256k1.S256())
-	if err != nil {
-		return false
-	}
-
-	// parse the signature:
-	signature := signatureFromBytes(sigStr)
-	// Reject malleable signatures. libsecp256k1 does this check but btcec doesn't.
-	// see: https://github.com/ethereum/go-ethereum/blob/f9401ae011ddf7f8d2d95020b7446c17f8d98dc1/crypto/signature_nocgo.go#L90-L93
-	if signature.S.Cmp(secp256k1halfN) > 0 {
-		return false
-	}
-
-	return signature.Verify(crypto.Sha256(msg), pub)
-}
-
-// Read Signature struct from R || S. Caller needs to ensure
-// that len(sigStr) == 64.
-func signatureFromBytes(sigStr []byte) *secp256k1.Signature {
-	return &secp256k1.Signature{
-		R: new(big.Int).SetBytes(sigStr[:32]),
-		S: new(big.Int).SetBytes(sigStr[32:64]),
-	}
-}
-
-// Serialize signature to R || S.
-// R, S are padded to 32 bytes respectively.
-func serializeSig(sig *secp256k1.Signature) []byte {
-	rBytes := sig.R.Bytes()
-	sBytes := sig.S.Bytes()
-	sigBytes := make([]byte, 64)
-	// 0 pad the byte arrays from the left if they aren't big enough.
-	copy(sigBytes[32-len(rBytes):32], rBytes)
-	copy(sigBytes[64-len(sBytes):64], sBytes)
-	return sigBytes
-}
@@ -6,6 +6,7 @@ import (
 	"testing"
 
+	"github.com/stretchr/testify/require"
 
 	"github.com/tendermint/tendermint/crypto"
 	"github.com/tendermint/tendermint/crypto/internal/benchmarking"
 )
@@ -22,10 +22,6 @@ module.exports = {
     index: "tendermint"
   },
   versions: [
-    {
-      "label": "v0.32",
-      "key": "v0.32"
-    },
     {
       "label": "v0.33",
       "key": "v0.33"
@@ -37,10 +33,6 @@ module.exports = {
     {
       "label": "v0.35",
       "key": "v0.35"
-    },
-    {
-      "label": "master",
-      "key": "master"
     }
   ],
   topbar: {
@@ -53,8 +45,10 @@ module.exports = {
       title: 'Resources',
       children: [
         {
+          // TODO(creachadair): Figure out how to make this per-branch.
+          // See: https://github.com/tendermint/tendermint/issues/7908
           title: 'RPC',
-          path: 'https://docs.tendermint.com/master/rpc/',
+          path: 'https://docs.tendermint.com/v0.35/rpc/',
           static: true
         },
       ]
@@ -166,6 +160,12 @@ module.exports = {
       {
         ga: 'UA-51029217-11'
       }
     ],
+    [
+      '@vuepress/plugin-html-redirect',
+      {
+        countdown: 0
+      }
+    ]
   ]
 };
docs/.vuepress/redirects (new file, 1 line)
@@ -0,0 +1 @@
+/master/ /v0.35/
@@ -21,10 +21,10 @@ Tendermint?](introduction/what-is-tendermint.md).
 
 To get started quickly with an example application, see the [quick start guide](introduction/quick-start.md).
 
-To learn about application development on Tendermint, see the [Application Blockchain Interface](https://github.com/tendermint/spec/tree/master/spec/abci).
+To learn about application development on Tendermint, see the [Application Blockchain Interface](../spec/abci).
 
 For more details on using Tendermint, see the respective documentation for
-[Tendermint Core](tendermint-core/), [benchmarking and monitoring](tools/), and [network deployments](networks/).
+[Tendermint Core](tendermint-core/), [benchmarking and monitoring](tools/), and [network deployments](nodes/).
 
 To find out about the Tendermint ecosystem you can go [here](https://github.com/tendermint/awesome#ecosystem). If you are a project that is using Tendermint you are welcome to make a PR to add your project to the list.
 
@@ -27,17 +27,17 @@ Usage:
   abci-cli [command]
 
 Available Commands:
-  batch       Run a batch of abci commands against an application
-  check_tx    Validate a tx
-  commit      Commit the application state and return the Merkle root hash
-  console     Start an interactive abci console for multiple commands
-  deliver_tx  Deliver a new tx to the application
-  kvstore     ABCI demo example
-  echo        Have the application echo a message
-  help        Help about any command
-  info        Get some info about the application
-  query       Query the application state
-  set_option  Set an options on the application
+  batch           Run a batch of abci commands against an application
+  check_tx        Validate a tx
+  commit          Commit the application state and return the Merkle root hash
+  console         Start an interactive abci console for multiple commands
+  finalize_block  Send a set of transactions to the application
+  kvstore         ABCI demo example
+  echo            Have the application echo a message
+  help            Help about any command
+  info            Get some info about the application
+  query           Query the application state
+  set_option      Set an options on the application
 
 Flags:
       --abci string   socket or grpc (default "socket")
@@ -53,7 +53,7 @@ Use "abci-cli [command] --help" for more information about a command.
 The `abci-cli` tool lets us send ABCI messages to our application, to
 help build and debug them.
 
-The most important messages are `deliver_tx`, `check_tx`, and `commit`,
+The most important messages are `finalize_block`, `check_tx`, and `commit`,
 but there are others for convenience, configuration, and information
 purposes.
 
@@ -173,7 +173,7 @@ Try running these commands:
 -> code: OK
 -> data.hex: 0x0000000000000000
 
-> deliver_tx "abc"
+> finalize_block "abc"
 -> code: OK
 
 > info
@@ -192,7 +192,7 @@ Try running these commands:
 -> value: abc
 -> value.hex: 616263
 
-> deliver_tx "def=xyz"
+> finalize_block "def=xyz"
 -> code: OK
 
 > commit
@@ -207,8 +207,8 @@ Try running these commands:
 -> value.hex: 78797A
 ```
 
-Note that if we do `deliver_tx "abc"` it will store `(abc, abc)`, but if
-we do `deliver_tx "abc=efg"` it will store `(abc, efg)`.
+Note that if we do `finalize_block "abc"` it will store `(abc, abc)`, but if
+we do `finalize_block "abc=efg"` it will store `(abc, efg)`.
 
 Similarly, you could put the commands in a file and run
 `abci-cli --verbose batch < myfile`.
Some files were not shown because too many files have changed in this diff.