Code for the paper "Interpretable and Efficient Data-Driven Discovery and Control of Distributed Systems" by Florian Wolf, Nicolò Botteghi, Urban Fasel, and Andrea Manzoni.
The preprint is available on arXiv; if you use our code, please cite:
```bibtex
@misc{wolf2024interpretableefficientdatadrivendiscovery,
      title={Interpretable and Efficient Data-driven Discovery and Control of Distributed Systems},
      author={Florian Wolf and Nicolò Botteghi and Urban Fasel and Andrea Manzoni},
      year={2024},
      eprint={2411.04098},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2411.04098},
}
```

To reproduce all experiments in the paper, execute

```shell
cd src
sh run_experiments.sh
```

in your console.
- `/SindyRL-AE`: This sub-folder is based on the `sindy-rl` code accompanying the paper "SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning" by Zolman et al. (see the corresponding arXiv preprint and `/SindyRL-AE/documents/LICENSE.pdf`). We extended the code with the following new functionality to support PDE experiments and our autoencoder framework:
  - `dynamics.py`, containing the `AutoEncoderDynamicsModel(BaseDynamicsModel)`,
  - `burgers.py`, wrapping the `controlgym` environment for the first experiment of the paper,
  - `navier_stokes.py`, wrapping the `PDEControlGym` environment for the second experiment of the paper.
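To illustrate the role of the `AutoEncoderDynamicsModel`, here is a hypothetical, heavily simplified sketch (not the repo's actual implementation): the high-dimensional PDE observation is encoded into a low-dimensional latent state, a surrogate dynamics model is stepped in that latent space, and the decoder maps the prediction back to observation space. The `BaseDynamicsModel` stub, the linear encoder/decoder, and the additive latent step below are stand-ins for the learned networks and the SINDy surrogate.

```python
from typing import Callable, List

Vector = List[float]


def matvec(A: List[Vector], x: Vector) -> Vector:
    """Plain matrix-vector product, standing in for a learned network layer."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]


class BaseDynamicsModel:
    """Minimal stand-in for the dynamics-model interface used in sindy-rl."""

    def predict(self, obs: Vector, action: Vector) -> Vector:
        raise NotImplementedError


class AutoEncoderDynamicsModel(BaseDynamicsModel):
    """Predict the next observation by stepping dynamics in latent space."""

    def __init__(
        self,
        encoder: Callable[[Vector], Vector],
        decoder: Callable[[Vector], Vector],
        latent_step: Callable[[Vector, Vector], Vector],
    ):
        self.encoder = encoder          # observation -> latent state
        self.decoder = decoder          # latent state -> observation
        self.latent_step = latent_step  # (latent, action) -> next latent

    def predict(self, obs: Vector, action: Vector) -> Vector:
        z = self.encoder(obs)
        z_next = self.latent_step(z, action)
        return self.decoder(z_next)


# Toy example: 4-dimensional observation compressed to a 2-dimensional latent.
E = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0]]            # encoder matrix (projection)
D = [[1.0, 0.0], [0.0, 1.0],
     [0.0, 0.0], [0.0, 0.0]]          # decoder matrix (embedding)

model = AutoEncoderDynamicsModel(
    encoder=lambda x: matvec(E, x),
    decoder=lambda z: matvec(D, z),
    latent_step=lambda z, u: [zi + ui for zi, ui in zip(z, u)],
)
print(model.predict([1.0, 2.0, 3.0, 4.0], [0.5, -0.5]))  # -> [1.5, 1.5, 0.0, 0.0]
```

The key design point this sketch tries to convey is that the surrogate model never sees the full PDE state: only the latent coordinates are propagated, which is what makes the approach efficient for distributed systems.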
- `/src`: This sub-folder contains the actual configuration files, training and evaluation scripts.
  - `run_experiments.sh` runs all experiments presented in the paper.
  - `/config_templates` contains the configuration files for the experiments in the paper; each file corresponds to exactly one of the experiments and models shown in the paper.
  - `/autoencoder` provides the implementation of the autoencoder model, the loss function, and a custom logger.
  - `/analysis` contains helper methods to read logs, count parameters, and create the heatmaps presented in the paper.
  - `sindyrl_<>.py` are the main scripts to train, load, and evaluate the models presented in the paper.
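As a rough illustration of the template-driven workflow (file name, keys, and JSON format are invented for this sketch; see `/config_templates` for the real files), each template pins one experiment/model pair and is read by the training scripts:

```python
import json
import tempfile
from pathlib import Path

# Invented template contents, for illustration only.
template = {"env": "burgers", "dynamics": "autoencoder", "seed": 0}

with tempfile.TemporaryDirectory() as tmp:
    # Write one hypothetical config template to disk...
    cfg_path = Path(tmp) / "burgers_autoencoder.json"
    cfg_path.write_text(json.dumps(template))

    # ...and load it back, as a training entry point would.
    config = json.loads(cfg_path.read_text())
    print(config["env"], config["dynamics"])  # -> burgers autoencoder
```

Keeping one template per experiment, as the repo does, makes each run reproducible from a single file.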
The easiest solution is to directly use the provided conda environment:

```shell
conda env create --name sindyrl --file=env_sindyrl.yml
conda activate sindyrl
```

or to follow the docker installation instructions provided in the sindy-rl repo.
Alternatively, one can manually install the dependencies:

- Create a virtual environment with Python 3.10.14.
- Install the requirements of `sindyrl` via

  ```shell
  cd sindy_rl
  pip install -r requirements.txt
  ```

- Install `controlgym` from https://github.com/Flo-Wo/controlgym via

  ```shell
  git clone git@github.com:Flo-Wo/controlgym.git
  cd controlgym
  pip install -e .
  ```

- Install `PDEControlGym` from https://github.com/lukebhan/PDEControlGym.
- Install the additional requirements via `pip install -r requirements.txt`.