Research project to explore consensus design options

0xsoniclabs/daphne

Daphne

Daphne is a simulation of the Sonic blockchain network and its main components, greatly simplified in comparison to the sonic repository. The simulation is local, with different nodes being simulated within the same process.

The goal of project Daphne is to establish an evaluation framework for candidate consensus algorithms for the Sonic networks. Additionally, the repository is intended to serve as a reference for the overall operation of the Sonic network, free of the sometimes convoluted code base of production-level implementations. The aim is to enable swift prototyping of solutions for the network without the complexities of real-world, production-ready code.

The simulation is multi-threaded and therefore not strictly deterministic.

Using Daphne

The main utility provided by Daphne is its chain simulation environment and its associated analysis. Currently, Daphne offers two modes for running simulations:

  • eval ... runs a single scenario
  • study ... runs a range of scenarios systematically and repeatedly

The eval mode is intended for the in-depth evaluation of specific aspects of a scenario. In particular, it is used to investigate or debug identified protocol issues.

The study mode is intended for collecting data for parameter studies, enabling the derivation of empirical data for scalability analysis and side-by-side comparison of different protocols.

Running an Evaluation

To run an evaluation, use the following command:

go run ./daphne eval <desired flags>

A few example options offered by the evaluation tool are

  • --sim-time or -s to enable simulation time instead of real time
  • --num-nodes or -n to determine the number of nodes on the network to be evaluated
  • --duration or -d to set the time span to be evaluated
  • --tx-per-seconds or -t to set the network load to be simulated
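
For example, a single evaluation combining these flags might be launched as follows (the values and the duration format are illustrative, not taken from the repository; consult the help page for the exact syntax):

go run ./daphne eval --sim-time --num-nodes 10 --duration 60s --tx-per-seconds 100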

For more parameters and options, see the command's help page using

go run ./daphne eval help

Analyzing Results

The evaluation command produces an event file (by default output.parquet). This file can be loaded into one of the evaluation analysis Jupyter notebooks provided in the analysis directory for further investigation.
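
For a quick look at the event file outside a notebook, a pandas one-liner can be used (this assumes a Python environment with pandas and parquet support installed; the notebooks in the analysis directory remain the intended workflow):

python -c "import pandas as pd; df = pd.read_parquet('output.parquet'); print(df.columns.tolist(), len(df))"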

Running a Study

To run a study, use the following command:

go run ./daphne study <study-type> <desired flags>

Among the available studies are

  • load ... runs a range of configurations varying the network size and the number of transactions per second
  • broadcast ... runs a range of configurations varying the network size and utilized broadcasting protocols
  • consensus ... runs a range of configurations varying the network size and utilized consensus protocols

See

go run ./daphne study help

for more study types.

Besides the study type, a range of flags is offered to customize the study execution:

  • --sim-time or -s to enable simulation time instead of real time
  • --repetitions or -r to determine the number of repetitions for each configuration
  • --duration or -d to set the time span to be evaluated for each configuration
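
As an illustrative example, a load study in simulation time with three repetitions per configuration might be started as follows (the values and the duration format are hypothetical; consult the help page for the exact syntax):

go run ./daphne study load --sim-time --repetitions 3 --duration 60s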

For more parameters and options, see the command's help page using

go run ./daphne study help

Analyzing Results

The study command produces an event file (by default data.parquet). This file can be loaded into dedicated Jupyter notebooks -- at least one for each study type -- provided in the analysis directory for further investigation.

Useful commands

Build

To build the project, run:

go build ./...

If the build is successful, the command produces no output.
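
To get explicit confirmation, the build can be chained with an echo that only fires on success:

go build ./... && echo "build OK"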

Run Tests

To run all tests, run:

go test ./...

All tests are expected to pass.

Optional test flags

  • -count 1 runs all tests once, ignoring cached test results
  • -v sets the test run to verbose, outputting how long each test takes to run
  • -race runs the program with a race detector on, meaning it will detect race conditions if they exist
  • -cover shows coverage for each package tested
  • -run ^TestMyTest$ will run only the tests fitting the regex
  • -cpuprofile cpu.prof will generate a CPU profile that can be reviewed with pprof
  • -memprofile mem.prof will generate a memory profile that can be reviewed with pprof
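
These flags can be combined in a single invocation; for example, to run all tests once with the race detector and coverage reporting enabled:

go test ./... -count 1 -race -cover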

To open such a profile, run:

go tool pprof -http "localhost:8000" ./cpu.prof

Generating mocks

When testing, we frequently need to mock certain interfaces. For this we use gomock. It is installed by running

go install go.uber.org/mock/mockgen@latest

(Re)generating a mock is done via a command specific to that interface, given in the .go file that contains it. The commands are of the following form:

mockgen -source=<source file> -destination=<mock file> -package=<package>

However, their usage is facilitated by //go:generate comments in source files, enabling mocks to be generated via go generate <path to file> for a particular mock, or go generate ./... for all mocks.

Regenerating mocks should be done when there is a change to an interface being mocked.
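
As a sketch, such a go:generate directive might look like this (the file, package, and mock names are hypothetical, not taken from the repository):

//go:generate mockgen -source=node.go -destination=node_mock.go -package=network

Placed at the top of node.go, this directive lets go generate ./... regenerate node_mock.go alongside all other mocks.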

Lint

We use golangci-lint for static analysis. To run it, use

golangci-lint run ./...

To install it run

go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.1.6

Future work

The main function

Notably, the project currently lacks a main function, meaning the only way to interact with its code is via testing. As future work, a proper entry point will be developed, with the facility to specify a network scenario that is to be simulated, along with its parameters.

Various consensus protocols

Currently, the project lacks implementations of important consensus protocols we are interested in comparing, such as Lachesis and Tendermint. Providing these, among other things, is the focus of ongoing work on the project.

Known issues

This section lists known issues or bugs in the project. Currently, there are no known unaddressed issues.
