Daphne is a simulation of the Sonic blockchain network and its main components,
greatly simplified in comparison to the sonic repository. The simulation is local,
with different nodes being simulated within the same process.
The goal of project Daphne is to establish an evaluation framework for candidate consensus algorithms for the Sonic networks. Additionally, the repository is intended to provide a reference for the overall operation of the Sonic network, enabling swift prototyping of various solutions free of the complexities of the sometimes convoluted, production-level code bases.
The simulation is not strictly deterministic, as it is multi-threaded.
The main utility provided by Daphne is its chain simulation environment and its associated analysis. Currently, Daphne offers two modes for running simulations:
- `eval` ... running a single scenario
- `study` ... running a range of scenarios systematically and repeatedly
The eval mode is intended for the in-depth evaluation of specific aspects of
a scenario. In particular, it is used to investigate identified issues or to
debug protocol behavior.
The study mode is intended for collecting data for parameter studies, enabling
the derivation of empirical data for scalability analysis and side-by-side
comparison of different protocols.
To run an evaluation, use the following command:
```
go run ./daphne eval <desired flags>
```

A few example options offered by the evaluation tool are:

- `--sim-time` or `-s` to enable simulation time instead of real time
- `--num-nodes` or `-n` to determine the number of nodes on the network to be evaluated
- `--duration` or `-d` to set the time span to be evaluated
- `--tx-per-seconds` or `-t` to set the network load to be simulated
For more parameters and options, see the command's help page using

```
go run ./daphne eval help
```

The evaluation command produces an event file (by default `output.parquet`). This
file can be loaded into one of the evaluation analysis Jupyter notebooks
provided in the analysis directory for
further investigation.
To run a study, use the following command:
```
go run ./daphne study <study-type> <desired flags>
```

Among the available studies are:

- `load` ... runs a range of configurations varying the network size and the number of transactions per second
- `broadcast` ... runs a range of configurations varying the network size and the utilized broadcasting protocols
- `consensus` ... runs a range of configurations varying the network size and the utilized consensus protocols
See

```
go run ./daphne study help
```

for more study types.
Besides the study types, a range of flags is offered to customize the study execution:
- `--sim-time` or `-s` to enable simulation time instead of real time
- `--repetitions` or `-r` to determine the number of repetitions for each configuration
- `--duration` or `-d` to set the time span to be evaluated for each configuration
For more parameters and options, see the command's help page using

```
go run ./daphne study help
```

The study command produces an event file (by default `data.parquet`). This
file can be loaded into dedicated Jupyter notebooks -- at least one for each
study type -- provided in the analysis directory for further investigation.
To build the project, run:
```
go build ./...
```
If the build is successful, nothing will be output by the command.
To run all tests, run:
```
go test ./...
```
All tests are expected to pass.
Useful flags include:

- `-count 1` asks to run all tests once, disregarding cached test results
- `-v` sets the test run to verbose; it will output how long each test takes to run
- `-race` runs the tests with the race detector on, meaning it will detect race conditions if they exist
- `-cover` shows coverage for each package tested
- `-run ^TestMyTest$` will run only the tests matching the regex
- `-cpuprofile cpu.prof` will generate a CPU profile that can be reviewed with pprof
- `-memprofile mem.prof` will generate a memory profile that can be reviewed with pprof
To open those profiles run:
```
go tool pprof -http "localhost:8000" ./cpu.prof
```
When testing, we frequently need to mock certain interfaces. For this we use gomock.
It is installed by running

```
go install go.uber.org/mock/mockgen@latest
```
(Re)generating a mock is done via a command specific to that interface, given in the .go file that contains it. The commands are of the following form:

```
mockgen -source <source file> -destination=<mock file> -package=<package>
```

However, their usage is facilitated by `//go:generate` comments in the source files, enabling mocks to be generated via `go generate <path to file>` for a particular mock, or `go generate ./...` for all mocks.
Regenerating mocks should be done when there is a change to an interface being mocked.
We use golangci-lint for static linting. To run it, use

```
golangci-lint run ./...
```

To install it, run

```
go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.1.6
```
Notably, the project currently lacks a main function, meaning the only way to interact with its code is via testing. As future work, a proper entry point will be developed, with the facility to specify a network scenario that is to be simulated, along with its parameters.
Currently, the project lacks implementations of important consensus protocols we are interested in comparing, such as Lachesis and Tendermint. Providing these, among other things, is the focus of ongoing work on the project.
This section lists known issues and bugs in the project. Currently, there are no known unaddressed issues.