Every call to `ChemPropModel.predict()` silently creates a new `lightning_logs/version_N/` directory in the current working directory. In a typical active learning run — where `predict()` is called once per committee member per AL iteration — this produces hundreds of stray directories.
In `openadmet/models/architecture/chemprop.py`, the ephemeral `pl.Trainer` constructed inside `predict()` is initialised with `logger=None`:
```python
# openadmet/models/architecture/chemprop.py ~line 583
trainer = pl.Trainer(
    logger=None,  # <-- bug: should be logger=False
    enable_progress_bar=False,
    accelerator=accelerator,
    devices=devices,
)
```
In PyTorch Lightning 2.x, `logger=None` is not equivalent to disabling logging. It causes Lightning to fall back to its default logger — `TensorBoardLogger` when TensorBoard is installed — which writes a `version_N/events.out.tfevents.*` directory tree rooted at `{cwd}/lightning_logs/`. The correct value to disable logging entirely is `logger=False`.
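The distinction can be illustrated with a small sketch of how the `logger` argument resolves in Lightning 2.x (a simplified model, not Lightning's actual implementation; the `resolve_logger` helper is hypothetical):

```python
# Simplified model of pl.Trainer's `logger` argument resolution in
# Lightning 2.x. The real logic lives inside Lightning's trainer
# connector code; this hypothetical helper only captures the observable
# behavior described above.
def resolve_logger(logger):
    if logger is False:
        return []  # logging fully disabled: nothing is written to disk
    if logger is None or logger is True:
        # Fall back to the default logger, which creates
        # {cwd}/lightning_logs/version_N/ on every run
        return ["TensorBoardLogger"]
    return [logger]  # an explicit Logger instance passed by the caller

# logger=None still yields a default logger; only logger=False disables it.
assert resolve_logger(None) == ["TensorBoardLogger"]
assert resolve_logger(False) == []
```

This is why the ephemeral prediction `Trainer` must pass `logger=False` rather than `logger=None`.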
This can be confirmed by the contents of the stray directories:
```
lightning_logs/version_0/
├── events.out.tfevents.…
└── hparams.yaml
```