
MoeaBench - Multi-objective Evolutionary Algorithm Benchmark


Copyright (c) 2025 Monaco F. J. monaco@usp.br
Copyright (c) 2025 Silva F. F. fernandoferreira.silva42@usp.br

This project is distributed under the GNU General Public License v3.0 or later. See the file COPYING for more information. Some third-party components or specific files may be licensed under different terms. Please, consult the SPDX identifiers in each file's header and the LICENSES/ directory for precise details.

Introduction

Version 0.15.0 (note: this is an alpha pre-release)

MoeaBench is an extensible analytical toolkit for Multi-objective Evolutionary Optimization research that adds a layer of data interpretation and visualization over standard benchmark engines. The framework establishes an intuitive abstraction layer for configuring and executing sophisticated quantitative analysis, transparently handling normalization, numerical reproducibility, and statistical validation. By transforming raw performance metrics into descriptive, narrative-driven results, it facilitates rigorous algorithmic auditing and promotes systematic, reproducible experimental comparisons.

To support this workflow, the package offers high-level facilities for programmatically establishing benchmark protocols and extracting standardized metrics. These features are augmented by advanced graphical capabilities that produce convergence time-series and interactive 3D Pareto front visualizations, bridging the gap between raw numerical data and actionable scientific insight.

Key Features

  • Built-in Benchmark Suite: Includes state-of-the-art implementations of foundational benchmarks (DTLZ and DPF), rigorously validated against the original literature and audited as the project's analytical "ground truth".
  • Built-in Algorithms: Provides built-in implementations of well-known, literature-referenced MOEAs (e.g., NSGA-III, MOEA/D, SPEA2).
  • Plugin Architecture: Seamlessly plug in your own algorithms (MOEAs) and problems (MOPs) without modifying the core library. Your custom code is the guest, MoeaBench is the host.
  • Many-Objective Readiness: Full support for Many-Objective Optimization (MaOO) with no artificial limits on the number of objectives ($M$) or variables ($N$).
  • Performance & Scalability: Built-in specialized evaluators that automatically switch between exact metrics and efficient approximations (e.g., Monte Carlo) to ensure computability of costly calculations as complexity increases.
  • Rigor & Reproducibility: Transparent handling of calibration and statistical validation to ensure robust and reproducible results.
  • One-Click Calibration: Programmatically validate custom MOPs via mop.calibrate(), generating portable sidecar JSON files for full clinical diagnostic support.
  • Replication & Statistical Confidence: Natively aggregates repeated runs and supports formal comparison through hypothesis tests, effect sizes, and distribution-aware analysis.
  • Interpretative Summaries: Automatically generates interpretative summaries that complement numerical metrics with narrative insights.
  • Smart Arguments: Core functions accept experiments, runs, populations, and compatible result objects directly, handling data extraction and generation slicing for you.
  • Cloud-Centric Aggregation: Multi-run experiments are treated as unified analytical clouds, enabling collective structural and temporal analysis without manual merging.
  • Open-Science Metadata: Experiments carry authorship, licensing, and reproducibility metadata, with CC0-1.0 fallback when author metadata is unspecified.
  • Zero-Config Reporting: Reporting interfaces use metadata introspection to surface context automatically with minimal user setup.
  • Rich Visualizations: Produces rich spatial (3D fronts), temporal (convergence performance), and stratification (ranking) visualizations.
  • Persistence & Portability: Native save() and load() support preserves full experiment state and scientific metadata in portable archives.
  • FAIR Metrics Framework: Provides resolution-aware physical diagnostics and clinical Q-scores for scale-invariant structural assessment.
  • Comparability & Normalization: Performance indicators can be evaluated in raw or normalized form against contextual references and analytical ideals.
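The exact-versus-approximate switch mentioned under "Performance & Scalability" can be illustrated with a self-contained sketch. The code below is plain Python and is not MoeaBench's actual evaluator implementation: it computes a 2D hypervolume exactly by sweeping the sorted front, next to a Monte Carlo estimate of the same quantity, the kind of approximation that stays computable as the number of objectives grows.

```python
import random

def hypervolume_2d(front, ref):
    """Exact 2D hypervolume (minimization): the area dominated by
    the front and bounded above by the reference point."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):      # ascending f1; f2 descends along a Pareto front
        if f2 < prev_f2:              # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def hypervolume_mc(front, ref, n_samples=200_000, seed=0):
    """Monte Carlo estimate: the fraction of uniform samples in the
    reference box dominated by at least one front member, scaled by
    the box volume. Generalizes directly to many objectives."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        s1, s2 = rng.uniform(0, ref[0]), rng.uniform(0, ref[1])
        if any(f1 <= s1 and f2 <= s2 for f1, f2 in front):
            hits += 1
    return hits / n_samples * ref[0] * ref[1]

front = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]
ref = (1.0, 1.0)
exact = hypervolume_2d(front, ref)    # 0.37
approx = hypervolume_mc(front, ref)   # close to 0.37, within sampling error
```

The exact sweep is O(n log n) but only works in 2D; the sampling estimate trades a controllable error for dimension-independent cost, which is the trade-off the built-in evaluators automate.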

Quick Start

import moeabench as mb                          # Canonical import

exp = mb.experiment()                           # Create an experiment instance

exp.mop  = mb.mops.DTLZ2()                      # Select which MOP to run
exp.moea = mb.moeas.NSGA3()                     # Select which MOEA to run

exp.run()                                       # Run the optimization process

mb.view.topology(exp)                           # Plot the 3D Pareto front
mb.view.history(exp)                            # Plot hypervolume convergence
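The Quick Start uses a built-in MOP; under the plugin architecture, a custom problem is guest code that the framework hosts. The sketch below shows what such a guest might look like as a plain Python class, using the classic Schaffer N.1 problem. The attribute and method names (`n_var`, `n_obj`, `bounds`, `evaluate`) are illustrative assumptions, not MoeaBench's documented interface; consult the User Guide for the actual extension hooks.

```python
class SchafferMop:
    """A classic bi-objective test problem (Schaffer N.1), written as
    a plain class so it could be handed to a host framework without
    modifying the framework itself."""

    def __init__(self):
        self.n_var = 1                  # one decision variable
        self.n_obj = 2                  # two conflicting objectives
        self.bounds = [(-10.0, 10.0)]   # box constraints on x

    def evaluate(self, x):
        """Map a decision vector to its objective vector (minimization)."""
        f1 = x[0] ** 2
        f2 = (x[0] - 2.0) ** 2
        return [f1, f2]

mop = SchafferMop()
f = mop.evaluate([1.0])   # -> [1.0, 1.0], Pareto-optimal for 0 <= x <= 2
```

For this problem the Pareto set is the interval 0 ≤ x ≤ 2: decreasing f1 below that range necessarily increases f2, which is the trade-off structure a MOEA must recover.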

Documentation

  • User Guide: A comprehensive "How-to" guide covering installation, basic usage, advanced configuration, and custom extensions.
  • API Reference: Exhaustive technical specification of the Data Model, Classes, and Functions.
  • API Sheet: A concise thematic map of the canonical API and common workflows.
  • Examples Directory: Access all .py and .ipynb files. example_01.py to example_10.py cover the essential analytical and diagnostic journey.

Research & Citation

If you use MoeaBench in your research, please cite the framework using the following metadata:

Contributing

MoeaBench authors warmly welcome community contributions to the project. If you find any bugs or have suggestions for new features, please refer to the CONTRIBUTING.md file for more information.

Contact

To contact the authors, see the AUTHORS.md file.
