gyb357/MultiHead-NN
Table of Contents

  1. Introduction
  2. Architecture Overview
  3. Model Comparison
  4. Getting Started
    • Check the Running Environment
    • Installation and Dependencies
    • Download Datasets
    • Configuration
    • Training the Model

📑Introduction

Multi-Head Neural Networks for Financial Time Series Classification

This repository implements a multi-head architecture for bankruptcy prediction on financial time series. It uses recurrent models (LTC, CfC, RNN, LSTM, and GRU) to process multiple financial indicators in parallel, and includes various preprocessing techniques, undersampling methods to address class imbalance, and comprehensive evaluation metrics.
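To make the class-imbalance handling concrete, here is a minimal sketch of random undersampling: majority-class samples are dropped until all classes have equal counts. This is an illustrative stand-in only; the repository's actual resampling strategy may differ.

```python
import random

def undersample(X, y, seed=0):
    """Randomly drop majority-class samples until classes are balanced.
    Illustrative sketch; not the repository's exact implementation."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    n = min(len(items) for items in by_class.values())  # minority-class size
    X_bal, y_bal = [], []
    for label, items in by_class.items():
        for xi in rng.sample(items, n):  # keep n samples per class
            X_bal.append(xi)
            y_bal.append(label)
    return X_bal, y_bal

# Toy data: 8 solvent (0) vs. 2 bankrupt (1) firms
X = list(range(10))
y = [0] * 8 + [1] * 2
X_bal, y_bal = undersample(X, y)
print(sorted(y_bal))  # [0, 0, 1, 1]
```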


🔍Architecture Overview

Multi-Head Architecture

Multi-head architecture schematic

The architecture employs a multi-head design where each financial variable is processed through its own dedicated network branch, with outputs subsequently combined for final classification.
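The design above can be sketched in PyTorch as one recurrent branch per financial indicator, with the branch outputs concatenated and fed to a shared classifier. This is a simplified illustration (GRU heads only); the hyperparameter names `rnn_hidden_size` and `fc_hidden_size` follow the repository's configuration, but the exact wiring is assumed.

```python
import torch
import torch.nn as nn

class MultiHeadRNN(nn.Module):
    """Illustrative multi-head model: one recurrent head per indicator,
    outputs concatenated and passed to a shared classifier."""
    def __init__(self, n_features: int, rnn_hidden_size: int = 16,
                 fc_hidden_size: int = 128, n_classes: int = 2):
        super().__init__()
        # One dedicated GRU head per input variable (each sees a univariate series)
        self.heads = nn.ModuleList([
            nn.GRU(input_size=1, hidden_size=rnn_hidden_size, batch_first=True)
            for _ in range(n_features)
        ])
        self.classifier = nn.Sequential(
            nn.Linear(n_features * rnn_hidden_size, fc_hidden_size),
            nn.ReLU(),
            nn.Linear(fc_hidden_size, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features) -> split into univariate branches
        outs = []
        for i, head in enumerate(self.heads):
            _, h_n = head(x[:, :, i:i + 1])  # h_n: (1, batch, hidden)
            outs.append(h_n[-1])             # last hidden state of each head
        return self.classifier(torch.cat(outs, dim=-1))

model = MultiHeadRNN(n_features=4)
logits = model(torch.randn(8, 5, 4))  # batch of 8, window of 5 time steps
print(logits.shape)  # torch.Size([8, 2])
```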

Note: Papers are here.

1. Liquid Time-Constant Networks (LTC)

Continuous-time recurrent network with liquid time constants. Uses the ncps library for implementation.

2. Closed-form Continuous-time Networks (CfC)

Efficient continuous-time networks with closed-form solutions. Uses the ncps library with lecun_tanh activation.

3. Recurrent Neural Network (RNN)

Recurrent network capturing sequential dependencies via hidden-state recurrence.

4. Long Short-Term Memory (LSTM)

Traditional recurrent network with memory cells and input, forget, and output gates that mitigate vanishing gradients.

5. Gated Recurrent Unit (GRU)

Simplified gated recurrent network that merges the LSTM's gates into update and reset gates, reducing parameter count.

📋Model Comparison

Model Parameters

[Multi Head]

| Model Type | Total Parameters |
|------------|------------------|
| LTC        | 24055            |
| CfC        | 193075           |
| RNN        | 8467             |
| LSTM       | 24883            |
| GRU        | 19411            |
| Classifier | 2946             |

[Single Head]

| Model Type | Total Parameters |
|------------|------------------|
| LTC        | 5544             |
| CfC        | 15988            |
| RNN        | 3828             |
| LSTM       | 5556             |
| GRU        | 4980             |
| Classifier | 2946             |

Parameter counts in both tables use `rnn_hidden_size = 16` and `fc_hidden_size = 128`.

🔨Getting Started

1. Check the Running Environment

Verify your PyTorch installation:

python -c "import torch; print(torch.__version__); print('CUDA available:', torch.cuda.is_available())"

2. Installation and Dependencies

Clone the repository and install dependencies:

git clone https://github.com/gyb357/MultiHeadLNN
cd MultiHeadLNN
pip install -r requirements.txt

Required Dependencies

  • torch
  • pandas
  • matplotlib
  • scikit-learn
  • ncps
  • rich
  • pyyaml

3. Download Datasets

Download the dataset from the companion repository: https://github.com/sowide/multi-head_LSTM_for_bankruptcy-prediction

Note: The dataset is under a CC-BY-4.0 license. Please refer to the dataset repository's README for details.

4. Configuration

Modify ./config/configs.yaml to customize your experiment.

5. Training the Model

Data Preparation

Ensure your dataset is structured as:

dataset/{window}_train.csv
dataset/{window}_valid.csv
dataset/{window}_test.csv
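The `{window}` placeholder is the sliding-window length used to cut the time series. A small sketch of how the split paths are composed (the window value here is hypothetical; the real one comes from `./config/configs.yaml`):

```python
# Hypothetical window length; the actual value is set in ./config/configs.yaml
window = 3
paths = {split: f"dataset/{window}_{split}.csv"
         for split in ("train", "valid", "test")}
print(paths["train"])  # dataset/3_train.csv
```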

Running Training

python main.py

Results

  • Model checkpoints: result/best_model.pth
  • Experimental results: result/[Model Name]_[Scaler Name]_[Threshold]_[RNN Hidden Size]_[FC Hidden Size].csv
  • Show plots: python plot.py
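The results filename encodes the run settings in order. A sketch of how such a name is composed; every value below is hypothetical, the real ones are taken from the experiment configuration:

```python
# Hypothetical run settings, for illustration only
model_name = "GRU"
scaler_name = "standard"
threshold = 0.5
rnn_hidden_size = 16
fc_hidden_size = 128

result_csv = (f"result/{model_name}_{scaler_name}_{threshold}"
              f"_{rnn_hidden_size}_{fc_hidden_size}.csv")
print(result_csv)  # result/GRU_standard_0.5_16_128.csv
```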
