This repository was archived by the owner on Feb 23, 2026. It is now read-only.
Releases · willxxy/ECG-Bench
0.0.6
Updates since 0.0.5:
- We released preprocessed `.npy` files to Google Drive, so that people can download them without preprocessing themselves.
- We implemented the Signal2Vec algorithm and created an ELM using Signal2Vec.
- We have developed a naive analysis of the Platonic Representation Hypothesis for ELMs.
- We have created a new baseline with the new OpenTSLM model.
0.0.5
Updates since 0.0.4:
We have made some big refactors since 0.0.4. We list some of them here:
- We separated many processes into their own respective files; we expand on some of them below.
- The notion of defining `train` or `inference` modes is gone. We separated these processes into their own respective `.py` files (e.g., `ecg_bench/evaluate_elm.py`, `ecg_bench/train_elm.py`).
- We also simplified and separated the ELM and its components. Simply define the `--llm` or `--encoder` flag for the model you want to run a given process on.
- Previously, many configurations were hardcoded arbitrarily throughout the code, making these values difficult to find. We still hardcode most of them, but we unify them in `ecg_bench/configs/constants.py`, which we found much easier to work with. We will continue abstracting the code to remove these values where necessary.
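As an illustration of this unification pattern, a centralized constants module might look like the sketch below. The class, field names, and values here are hypothetical assumptions, not the actual contents of `ecg_bench/configs/constants.py`:

```python
# Hypothetical sketch of a centralized constants module; the actual
# names and values in ecg_bench/configs/constants.py may differ.
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: constants cannot be mutated at runtime
class ECGConstants:
    sampling_rate: int = 500      # Hz (assumed default)
    num_leads: int = 12           # standard 12-lead ECG
    segment_seconds: float = 10.0 # length of one ECG segment


# Single shared instance imported everywhere instead of scattered literals.
CONSTANTS = ECGConstants()
```

Keeping these values in one frozen dataclass gives a single place to change them and prevents accidental mutation during training runs.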
0.0.4
Updates since 0.0.3:
- Released stratified data splits for many of our ECG datasets on Hugging Face!
- Added a naive encoder-free method.
- Code cleanups (removed percentile-based sampling and unused configs).
- Automatic learning-rate scaling for effective batch size and distributed training.
- Fixed a tensor broadcasting bug (we `arange` over the batch to properly allocate the corresponding signal latent for each batch element).
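The broadcasting fix above can be sketched as follows. The shapes and variable names are illustrative, not the actual ECG-Bench code:

```python
import numpy as np

# Hypothetical shapes: one bank of candidate latents per batch element,
# plus an index selecting which latent belongs to each element.
batch, num_latents, dim = 4, 8, 16
signal_latents = np.random.randn(batch, num_latents, dim)
latent_ids = np.array([0, 3, 1, 7])  # one id per batch element

# Buggy version: indexing only the latent axis broadcasts the id array
# against the batch axis, pairing every batch element with every id.
wrong = signal_latents[:, latent_ids]   # shape (4, 4, 16)

# Fix: also index the batch axis with arange, so element i gets
# exactly its own latent latent_ids[i].
right = signal_latents[np.arange(batch), latent_ids]  # shape (4, 16)
```

The same fancy-indexing behavior applies to `torch.Tensor`, so the buggy version silently produces a `(batch, batch, dim)` tensor rather than raising an error.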
0.0.3
Some updates:
- Documented some more known bugs around seeding and non-deterministic CUDA behavior.
- Fixed bugs in padding for the second stage.
- Differentiated the scheduler between training transformers (LLMs) in the second or end-to-end stage vs. training encoders in the first stage.
- A bit of reorganization of the configs/args structure.
- Preprocessing pipeline cleanups.
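The stage-dependent scheduling can be illustrated with a minimal sketch; the function name, stage labels, and schedule shapes below are assumptions, not the actual ECG-Bench implementation:

```python
import math


def lr_factor(step, stage, total_steps, warmup_steps=100):
    """Hypothetical stage-dependent LR multiplier (sketch only).

    stage == "encoder": first-stage encoder training, simple step decay.
    otherwise: second-stage / end-to-end LLM training, linear warmup
    followed by cosine decay.
    """
    if stage == "encoder":
        # decay by 10x after each third of training
        return 0.1 ** (step // max(total_steps // 3, 1))
    if step < warmup_steps:
        return step / warmup_steps  # linear warmup from 0 to 1
    progress = (step - warmup_steps) / max(total_steps - warmup_steps, 1)
    return 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay to 0
```

Returning a multiplicative factor (rather than an absolute LR) keeps the schedule independent of the base learning rate chosen per model.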