- Introduction
- Architecture Overview
- Model Comparison
- Getting Started
- Check the Running Environment
- Installation and Dependencies
- Download Datasets
- Configuration
- Training the Model
This repository implements a multi-head architecture for bankruptcy prediction on financial time series. It uses recurrent models—LTC, CfC, LSTM, and GRU—to process multiple financial indicators in parallel, and includes various preprocessing techniques, undersampling methods to address class imbalance, and comprehensive evaluation metrics.
The architecture employs a multi-head design where each financial variable is processed through its own dedicated network branch, with outputs subsequently combined for final classification.
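As an illustration, a minimal multi-head model might look like the sketch below. This is not the repository's code — the branch type (GRU), class name, and default sizes are hypothetical, chosen only to show one-branch-per-variable processing followed by a shared classifier:

```python
import torch
import torch.nn as nn

class MultiHeadRNN(nn.Module):
    """Illustrative multi-head model: one recurrent branch per financial
    indicator; branch outputs are concatenated for final classification.
    Sizes and names are hypothetical, not the repository's defaults."""

    def __init__(self, n_heads: int, rnn_hidden: int = 16, fc_hidden: int = 128):
        super().__init__()
        # One independent recurrent branch per input variable
        self.heads = nn.ModuleList([
            nn.GRU(input_size=1, hidden_size=rnn_hidden, batch_first=True)
            for _ in range(n_heads)
        ])
        self.classifier = nn.Sequential(
            nn.Linear(n_heads * rnn_hidden, fc_hidden),
            nn.ReLU(),
            nn.Linear(fc_hidden, 1),  # single bankruptcy logit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_heads) -- one channel per indicator
        feats = []
        for i, head in enumerate(self.heads):
            _, h = head(x[:, :, i : i + 1])  # final hidden state of branch i
            feats.append(h[-1])              # (batch, rnn_hidden)
        return self.classifier(torch.cat(feats, dim=-1))
```

In the actual repository, the GRU branch would be swapped for the configured cell type (LTC, CfC, LSTM, or GRU).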
Note: Papers are here.
- **LTC**: Continuous-time recurrent network with liquid time constants. Implemented via the `ncps` library.
- **CfC**: Efficient continuous-time network with a closed-form solution. Implemented via the `ncps` library with the `lecun_tanh` activation.
- **RNN**: Vanilla recurrent network capturing sequential dependencies via hidden-state recurrence.
- **LSTM**: Traditional recurrent network with memory cells.
- **GRU**: Simplified recurrent network with gating mechanisms.
[Multi Head]
| Model Type | Total Parameters |
|---|---|
| LTC | 24055 |
| CfC | 193075 |
| RNN | 8467 |
| LSTM | 24883 |
| GRU | 19411 |
| Classifier | 2946 |
[Single Head]
| Model Type | Total Parameters |
|---|---|
| LTC | 5544 |
| CfC | 15988 |
| RNN | 3828 |
| LSTM | 5556 |
| GRU | 4980 |
| Classifier | 2946 |
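The totals in the tables above can be checked with a standard PyTorch idiom (the helper name is illustrative):

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Total number of trainable parameters, as reported in the tables above."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```

For example, `count_parameters(nn.Linear(4, 2))` returns 10 (a 4x2 weight matrix plus 2 biases).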
```
rnn_hidden_size = 16
fc_hidden_size = 128
```

Verify your PyTorch installation:

```shell
python -c "import torch; print(torch.__version__); print('CUDA available:', torch.cuda.is_available())"
```

Clone the repository and install dependencies:
```shell
git clone https://github.com/gyb357/MultiHeadLNN
cd MultiHeadLNN
pip install -r requirements.txt
```

- torch
- pandas
- matplotlib
- scikit-learn
- ncps
- rich
- pyyaml
https://github.com/sowide/multi-head_LSTM_for_bankruptcy-prediction

Note: The dataset is under a CC-BY-4.0 license. Please refer to each repository's README.
Modify ./config/configs.yaml to customize your experiment.
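For reference, an excerpt of the configuration might look like the sketch below. Only `rnn_hidden_size` and `fc_hidden_size` (with values 16 and 128) appear elsewhere in this README; the remaining keys are hypothetical, so check the shipped `configs.yaml` for the real schema:

```yaml
# Hypothetical excerpt of ./config/configs.yaml -- key names are
# illustrative, not the repository's actual schema.
model: LTC            # one of LTC, CfC, RNN, LSTM, GRU
rnn_hidden_size: 16   # per-head recurrent hidden size
fc_hidden_size: 128   # classifier hidden size
```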
Ensure your dataset is structured as:

```
dataset/{window}_train.csv
dataset/{window}_valid.csv
dataset/{window}_test.csv
```
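A small stdlib-only helper (hypothetical, not part of the repository) can verify that the expected files are in place before training:

```python
from pathlib import Path

def expected_files(window: int, root: str = "dataset") -> list[Path]:
    """Paths the training script expects for a given window size."""
    base = Path(root)
    return [base / f"{window}_{split}.csv" for split in ("train", "valid", "test")]

def missing_files(window: int, root: str = "dataset") -> list[Path]:
    """Return any expected CSVs that are not present on disk."""
    return [p for p in expected_files(window, root) if not p.exists()]
```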
```shell
python main.py
```

- Model checkpoints: `result/best_model.pth`
- Experimental results: `result/[Model Name]_[Scaler Name]_[Threshold]_[RNN Hidden Size]_[FC Hidden Size].csv`
- Show plots:

```shell
python plot.py
```
