
Conversation

@vhfsantos
Contributor

No description provided.

vhfsantos and others added 13 commits July 7, 2025 15:26
We have significantly reworked PARM and are introducing **version 0.1.0**.

Key changes include transitioning from individual model weight files to a directory-based approach for managing model folds (each PARM model is now trained as five folds), adding support for batch processing, and introducing new parameters such as `filter_size` and `type_loss` for user-trained models.

### Updates to Model Loading and Management:
* Replaced the `model_weights` parameter with `model_directory` in `PARM_mutagenesis` and `PARM_predict`, so that all model folds are loaded dynamically from the specified directory. Both functions now average across the folds to produce the final predictions and mutagenesis matrix.
* Integrated support for the `filter_size` parameter in `load_PARM` and related functions, allowing customization of convolutional filter sizes.
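
A minimal sketch of how the new directory-based loading might look in practice. Only `model_directory` and `filter_size` are named in this PR; the import path and the remaining arguments (input FASTA, output paths) are assumptions for illustration, not the package's confirmed signature.

```python
# Sketch only: assumes PARM_predict / PARM_mutagenesis accept a model_directory
# holding one weight file per fold, plus the new filter_size option.
from PARM import PARM_predict, PARM_mutagenesis  # import path is an assumption

# Directory containing the five trained folds of a single PARM model
model_directory = "models/K562_PARM/"            # hypothetical path

# Predictions are averaged over all folds found in model_directory
PARM_predict(
    input="promoters.fasta",                     # hypothetical input argument
    model_directory=model_directory,
    output="predictions.tsv",                    # hypothetical output argument
    filter_size=7,                               # relevant for user-trained models
)

# The mutagenesis matrix is likewise averaged across folds
PARM_mutagenesis(
    input="promoters.fasta",
    model_directory=model_directory,
    output="mutagenesis/",                       # hypothetical output argument
    filter_size=7,
)
```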

### Improvements to `PARM_predict`:
* Introduced `n_seqs_per_batch` and `store_sequence` parameters in `PARM_predict` to enable batch processing and control output verbosity. Predictions are now averaged across all folds for each sequence.
* Enhanced the `get_prediction` function to support batch predictions.
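
Illustrative use of the new batching controls. The parameter names `n_seqs_per_batch` and `store_sequence` come from this PR; the import path and the input/output arguments are assumed for the sake of the example.

```python
from PARM import PARM_predict  # import path is an assumption

# Process sequences in batches of 128 and omit the raw sequence column from
# the output to keep it compact; per-sequence predictions are the average
# over all folds found in model_directory.
PARM_predict(
    input="library.fasta",               # hypothetical input argument
    model_directory="models/K562_PARM/", # hypothetical path
    output="library_predictions.tsv",    # hypothetical output argument
    n_seqs_per_batch=128,
    store_sequence=False,
)
```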

### Code Quality and Consistency:
* Standardized parameter formatting and fixed minor inconsistencies in function signatures and docstrings.
* Improved error messages for better debugging, such as mismatched model prefixes and invalid motif scanning types.

### Minor Fixes:
* Corrected string formatting in logging statements and ensured consistent use of single quotes.
Improve support for user-trained PARM models
add training data
Update README.md to use an env, remove Anaconda
@vhfsantos vhfsantos merged commit cd1e1e9 into lazy_imports_dev Oct 31, 2025
3 checks passed