---
order: 1
title: Method
---

# Method
This document provides a detailed description of the QA process. It is intended to be used by engineers reproducing the experimental setup for future tests of Tendermint.
The first iteration of the QA process, as described in the RELEASES.md document, was applied to version v0.34.x in order to obtain a set of results that act as a benchmarking baseline. This baseline is then compared with results obtained in later versions.

Out of the testnet-based test cases described in that document, we focused on two: the 200 Node Test and the Rotating Nodes Test.
## Software Dependencies

### Infrastructure Requirements to Run the Tests
- An account at Digital Ocean (DO), with a high droplet limit (>202)
- The machine to orchestrate the tests should have the following installed:
  - A clone of the testnet repository
    - This repository contains all the scripts mentioned in the remainder of this section
  - Digital Ocean CLI
  - Terraform CLI
  - Ansible CLI
### Requirements for Result Extraction
- Matlab or Octave
- Prometheus server installed
- blockstore DB of one of the full nodes in the testnet
- Prometheus DB
## 200 Node Testnet

### Running the test
This section explains how the tests were carried out for reproducibility purposes.
1. [If you haven't done it before]
   Follow steps 1-4 of the `README.md` at the top of the testnet repository to configure Terraform and `doctl`.
2. Copy file `testnets/testnet200.toml` onto `testnet.toml` (do NOT commit this change).
3. Set the variable `VERSION_TAG` in the `Makefile` to the git hash that is to be tested.
4. Follow steps 5-10 of the `README.md` to configure and start the 200 node testnet.
   - WARNING: Do NOT forget to run `make terraform-destroy` as soon as you are done with the tests (see step 9).
5. As a sanity check, connect to the Prometheus node's web interface and check the graph for the `tendermint_consensus_height` metric. All nodes should be increasing their heights.
6. `ssh` into the `testnet-load-runner`, then copy script `script/200-node-loadscript.sh` and run it from the load runner node.
   - Before running it, you need to edit the script to provide the IP address of a full node. This node will receive all transactions from the load runner node.
   - This script will take about 40 minutes to run.
   - It runs 90-second-long experiments in a loop with different loads.
7. Run `make retrieve-data` to gather all relevant data from the testnet into the orchestrating machine.
8. Verify that the data was collected without errors:
   - at least one blockstore DB for a Tendermint validator
   - the Prometheus database from the Prometheus node
   - for extra care, you can run `zip -T` on the `prometheus.zip` file and (one of) the `blockstore.db.zip` file(s)
9. Run `make terraform-destroy`.
   - Don't forget to type `yes`! Otherwise you're in trouble.
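The sanity check in step 5 can also be scripted against Prometheus's HTTP API. The sketch below is not part of the testnet scripts: the `fetch_heights` helper, its URL, the `instance` label, and the query window are assumptions to adapt to your deployment. The core check, that every node's `tendermint_consensus_height` keeps increasing, works on any mapping of node to sampled heights.

```python
# Sketch: detect nodes whose consensus height is not increasing.
# fetch_heights() is a hypothetical helper against Prometheus's HTTP API
# (adjust URL, time range, and labels to your deployment).
import json
import urllib.request


def stuck_nodes(heights_by_node):
    """Return the nodes whose sampled heights never increase."""
    return sorted(
        node for node, hs in heights_by_node.items()
        if len(hs) < 2 or hs[-1] <= hs[0]
    )


def fetch_heights(prom_url, start, end):
    # Range query for tendermint_consensus_height at 15s resolution.
    q = (f"{prom_url}/api/v1/query_range"
         f"?query=tendermint_consensus_height"
         f"&start={start}&end={end}&step=15")
    with urllib.request.urlopen(q) as resp:
        data = json.load(resp)
    return {
        r["metric"]["instance"]: [float(v) for _, v in r["values"]]
        for r in data["data"]["result"]
    }


if __name__ == "__main__":
    sample = {"node0": [100, 120, 140], "node1": [100, 100, 100]}
    print(stuck_nodes(sample))  # -> ['node1']
```

Any node reported by `stuck_nodes` warrants a look at its logs before proceeding with the load runs.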
### Result Extraction
The method for extracting the results described here is highly manual (and exploratory) at this stage. The Core team should improve it at every iteration to increase the amount of automation.
#### Steps
1. Unzip the blockstore into a directory.
2. Extract the latency report and the raw latencies for all the experiments. Run these commands from the directory containing the blockstore:

   ```bash
   go run github.com/tendermint/tendermint/test/loadtime/cmd/report@3ec6e424d --database-type goleveldb --data-dir ./ > results/report.txt
   go run github.com/tendermint/tendermint/test/loadtime/cmd/report@3ec6e424d --database-type goleveldb --data-dir ./ --csv results/raw.csv
   ```

3. File `report.txt` contains an unordered list of experiments with varying concurrent connections and transaction rate.
   - Create files `report01.txt`, `report02.txt`, `report04.txt` and, for each experiment in file `report.txt`, copy its related lines to the file whose name matches the number of connections.
   - Sort the experiments in `report01.txt` in ascending tx rate order. Likewise for `report02.txt` and `report04.txt`.
4. Generate file `report_tabbed.txt` by showing the contents of `report01.txt`, `report02.txt`, `report04.txt` side by side.
   - This effectively creates a table where rows are a particular tx rate and columns are a particular number of websocket connections.
5. Extract the raw latencies from file `raw.csv` using the following bash loop. This creates a `.csv` file and a `.dat` file per experiment. The format of the `.dat` files is amenable to loading them as matrices in Octave:

   ```bash
   uuids=($(cat report01.txt report02.txt report04.txt | grep '^Experiment ID: ' | awk '{ print $3 }'))
   c=1
   for i in 01 02 04; do
     for j in 0025 0050 0100 0200; do
       echo $i $j $c "${uuids[$c]}"
       filename=c${i}_r${j}
       grep ${uuids[$c]} raw.csv > ${filename}.csv
       cat ${filename}.csv | tr , ' ' | awk '{ print $2, $3 }' > ${filename}.dat
       c=$(expr $c + 1)
     done
   done
   ```

6. Enter Octave.
7. Load all `.dat` files generated in step 5 into matrices using this Octave code snippet:

   ```octave
   conns = { "01"; "02"; "04" };
   rates = { "0025"; "0050"; "0100"; "0200" };
   for i = 1:length(conns)
     for j = 1:length(rates)
       filename = strcat("c", conns{i}, "_r", rates{j}, ".dat");
       load("-ascii", filename);
     endfor
   endfor
   ```

8. Set variable `release` to the current release undergoing QA:

   ```octave
   release = "v0.34.x";
   ```

9. Generate a plot with all (or some) experiments, where the X axis is the experiment time and the Y axis is the latency of transactions. The following snippet plots all experiments:

   ```octave
   legends = {};
   hold off;
   for i = 1:length(conns)
     for j = 1:length(rates)
       data_name = strcat("c", conns{i}, "_r", rates{j});
       l = strcat("c=", conns{i}, " r=", rates{j});
       m = eval(data_name);
       plot((m(:,1) - min(m(:,1))) / 1e+9, m(:,2) / 1e+9, ".");
       hold on;
       legends{1, end+1} = l;
     endfor
   endfor
   legend(legends, "location", "northeastoutside");
   xlabel("experiment time (s)");
   ylabel("latency (s)");
   t = sprintf("200-node testnet - %s", release);
   title(t);
   ```

10. Consider adjusting the axis, in case you want to compare your results to the baseline, for instance:

    ```octave
    axis([0, 100, 0, 30], "tic");
    ```

11. Use Octave's GUI menu to save the plot (e.g., as `.png`).
12. Repeat steps 9 and 10 to obtain as many plots as deemed necessary.
13. To generate a latency vs throughput plot using the raw CSV file generated in step 2, follow the instructions for the `latency_throughput.py` script.
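If bash and Octave are not at hand, the per-experiment splitting of step 5 can be mirrored in Python. This is a sketch, not one of the repository's scripts; it assumes the raw CSV columns are experiment UUID, start time, and latency (the fields kept by the `awk` command above), and skips a header row if one is present.

```python
# Sketch: split loadtime's raw CSV into per-experiment (time, latency) series,
# mirroring the bash loop in step 5. Column order (uuid, time, latency) is an
# assumption based on the awk command that keeps fields 2 and 3.
import csv
from collections import defaultdict


def split_by_experiment(rows):
    """Group CSV rows into {uuid: [(time, latency), ...]}."""
    out = defaultdict(list)
    for row in rows:
        uuid, t, lat = row[0], row[1], row[2]
        if not t.strip().isdigit():  # skip a header row, if present
            continue
        out[uuid].append((int(t), int(lat)))
    return dict(out)


def write_dat_files(raw_csv_path):
    # One whitespace-separated .dat file per experiment, loadable in Octave.
    with open(raw_csv_path, newline="") as f:
        grouped = split_by_experiment(csv.reader(f))
    for uuid, points in grouped.items():
        with open(f"{uuid}.dat", "w") as out:
            for t, lat in points:
                out.write(f"{t} {lat}\n")
```

Unlike the bash loop, this names the output files by experiment UUID rather than by connection/rate; rename as needed when loading them into Octave.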
#### Extracting Prometheus Metrics
1. Stop the Prometheus server if it is running as a service (e.g., a `systemd` unit).
2. Unzip the Prometheus database retrieved from the testnet, and move it to replace the local Prometheus database.
3. Start the Prometheus server and make sure no error logs appear at start up.
4. Introduce the metrics you want to gather or plot.
## Rotating Node Testnet

### Running the test
This section explains how the tests were carried out for reproducibility purposes.
1. [If you haven't done it before]
   Follow steps 1-4 of the `README.md` at the top of the testnet repository to configure Terraform and `doctl`.
2. Copy file `testnet_rotating.toml` onto `testnet.toml` (do NOT commit this change).
3. Set the variable `VERSION_TAG` to the git hash that is to be tested.
4. Run `make terraform-apply EPHEMERAL_SIZE=25`.
   - WARNING: Do NOT forget to run `make terraform-destroy` as soon as you are done with the tests.
5. Follow steps 6-10 of the `README.md` to configure and start the "stable" part of the rotating node testnet.
6. As a sanity check, connect to the Prometheus node's web interface and check the graph for the `tendermint_consensus_height` metric. All nodes should be increasing their heights.
7. On a different shell,
   - run `make runload ROTATE_CONNECTIONS=X ROTATE_TX_RATE=Y`
   - `X` and `Y` should reflect a load below the saturation point (see, e.g., this paragraph for further info)
8. Run `make rotate` to start the script that creates the ephemeral nodes, and kills them when they are caught up.
   - WARNING: If you run this command from your laptop, the laptop needs to be up and connected for the full length of the experiment.
9. When the height of the chain reaches 3000, stop the `make runload` script.
10. When the rotate script has made two iterations (i.e., all ephemeral nodes have caught up twice) after height 3000 was reached, stop `make rotate`.
11. Run `make retrieve-data` to gather all relevant data from the testnet into the orchestrating machine.
12. Verify that the data was collected without errors:
    - at least one blockstore DB for a Tendermint validator
    - the Prometheus database from the Prometheus node
    - for extra care, you can run `zip -T` on the `prometheus.zip` file and (one of) the `blockstore.db.zip` file(s)
13. Run `make terraform-destroy`.
Steps 8 to 10 are highly manual at the moment and will be improved in future iterations.
### Result Extraction
In order to obtain a latency plot, follow the instructions above for the 200 node experiment, but:

- The `results.txt` file contains only one experiment
- Therefore, there is no need for any `for` loops

As for Prometheus, the same method as for the 200 node experiment can be applied.
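Because the rotating-node run yields a single series of points, even a few lines of Python can stand in for the Octave session. The sketch below is an illustration, not part of the QA tooling; it assumes the two-column `.dat` layout (timestamp and latency, both in nanoseconds) produced during 200-node result extraction.

```python
# Sketch: summary statistics for a single experiment's latencies.
# Assumes (timestamp_ns, latency_ns) points, as in the ".dat" files
# produced during 200-node result extraction.
def latency_stats(points):
    """Return min/avg/p95/max latency in seconds."""
    lats = sorted(lat / 1e9 for _, lat in points)
    n = len(lats)
    return {
        "min": lats[0],
        "avg": sum(lats) / n,
        "p95": lats[min(n - 1, int(0.95 * n))],
        "max": lats[-1],
    }
```

These figures (minimum, average, 95th percentile, maximum) are the kind of numbers quoted alongside the latency plots in the reports.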