Thank you for your interest in contributing to the DeepScan Framework! This document provides guidelines and instructions for contributing.
## Reporting Issues

If you find a bug or have a feature request, please open an issue on GitHub with:
- A clear description of the problem or feature
- Steps to reproduce (for bugs)
- Expected vs actual behavior
- Your environment (Python version, OS, etc.)
## Submitting Changes

1. Fork the repository and create a new branch for your changes:

   ```bash
   git checkout -b feature/your-feature-name
   ```

2. Make your changes following the coding standards below.

3. Test your changes:

   ```bash
   # Install in development mode
   pip install -e ".[dev]"

   # Run tests
   pytest

   # Run linting
   black deepscan/
   flake8 deepscan/
   mypy deepscan/
   ```

4. Commit your changes with clear, descriptive commit messages:

   ```bash
   git commit -m "Add feature: description of what you added"
   ```

5. Push to your fork and open a Pull Request
- Provide a clear description of what your PR does
- Reference any related issues
- Ensure all tests pass
- Update documentation if needed
- Follow the existing code style
## Coding Standards

- Follow PEP 8 style guidelines
- Use `black` for code formatting (line length: 100)
- Type hints are encouraged but not required for all functions
- Use descriptive variable and function names
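As a hedged illustration of these conventions (the function name and logic below are invented for the example, not taken from DeepScan), a function written to this style might look like:

```python
def count_flagged_findings(findings: list[dict], min_severity: int = 3) -> int:
    """Count findings at or above the given severity threshold."""
    # Descriptive names and optional type hints, per the style guide above
    return sum(1 for finding in findings if finding.get("severity", 0) >= min_severity)
```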
## Project Structure

- Models: Add new model runners in `deepscan/models/`
- Evaluators: Add new evaluators in `deepscan/evaluators/`
- Datasets: Add new dataset loaders in `deepscan/datasets/`
- Summarizers: Add new summarizers in `deepscan/summarizers/`
## Adding a New Model

1. Create a new file in `deepscan/models/` (e.g., `my_model.py`)
2. Implement a class inheriting from `BaseModelRunner`
3. Register the model in the model registry:

   ```python
   from deepscan.registry.model_registry import get_model_registry

   @get_model_registry().register_model("my_model")
   def create_my_model(**kwargs):
       # Your model creation logic
       return MyModelRunner(...)
   ```

4. Import and register in `deepscan/models/__init__.py`
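The registration step above relies on a decorator-based registry. DeepScan's actual registry internals aren't shown in this document, but the pattern can be sketched in a self-contained form (all names below are illustrative stand-ins, not DeepScan's real implementation):

```python
class ModelRegistry:
    """Minimal sketch of a decorator-based registry (illustrative only)."""

    def __init__(self):
        self._factories = {}

    def register_model(self, name):
        # Used as @registry.register_model("my_model") above a factory function
        def decorator(factory):
            self._factories[name] = factory
            return factory
        return decorator

    def create(self, name, **kwargs):
        # Look up the registered factory and build the model runner
        return self._factories[name](**kwargs)


_registry = ModelRegistry()


def get_model_registry():
    return _registry


@get_model_registry().register_model("echo_model")
def create_echo_model(**kwargs):
    # A trivial stand-in for a real model runner
    return {"name": "echo_model", **kwargs}
```

Because the decorator returns the factory unchanged, the function remains directly importable while also being discoverable by name through the registry.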
## Adding a New Evaluator

1. Create a new file in `deepscan/evaluators/` (e.g., `my_evaluator.py`)
2. Implement a class inheriting from `BaseEvaluator`
3. Register the evaluator:

   ```python
   from deepscan.evaluators.registry import get_evaluator_registry

   @get_evaluator_registry().register_evaluator("my_evaluator")
   class MyEvaluator(BaseEvaluator):
       def evaluate(self, model, dataset, **kwargs):
           # Your evaluation logic
           return results
   ```

4. Add to `deepscan/evaluators/__init__.py` if needed
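To make the `evaluate` contract concrete, here is a toy, self-contained sketch (the base class and evaluator below are invented for illustration; DeepScan's real `BaseEvaluator` interface may differ):

```python
class BaseEvaluator:
    """Stand-in base class for this sketch; not DeepScan's real class."""

    def evaluate(self, model, dataset, **kwargs):
        raise NotImplementedError


class AccuracyEvaluator(BaseEvaluator):
    """Toy evaluator: compares model outputs against expected labels."""

    def evaluate(self, model, dataset, **kwargs):
        # Treat the model as a callable and the dataset as input/label pairs
        correct = sum(1 for example in dataset if model(example["input"]) == example["label"])
        return {"accuracy": correct / len(dataset)}
```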
## Adding a New Dataset

1. Create a new file in `deepscan/datasets/` (e.g., `my_dataset.py`)
2. Implement a dataset loader function
3. Register the dataset:

   ```python
   from deepscan.registry.dataset_registry import get_dataset_registry

   @get_dataset_registry().register_dataset("my_dataset")
   def load_my_dataset(**kwargs):
       # Your dataset loading logic
       return dataset
   ```

4. Add to `deepscan/datasets/__init__.py` if needed
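A loader is just a function that accepts keyword options and returns the loaded data. As a hedged sketch (the example data, `limit` parameter, and record layout are invented here, not DeepScan's actual schema):

```python
def load_toy_dataset(limit=None, **kwargs):
    """Toy loader: returns a list of input/label examples."""
    # In a real loader this would read from disk or a hub; hardcoded for the sketch
    examples = [
        {"input": "2 + 2 = ?", "label": "4"},
        {"input": "3 * 3 = ?", "label": "9"},
    ]
    return examples[:limit] if limit is not None else examples
```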
## Testing

- Write tests for new features in the `tests/` directory
- Follow the existing test patterns
- Ensure tests pass with `pytest`
- Aim for good test coverage
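A test in the `tests/` directory follows pytest's discovery conventions: a `test_*.py` file containing `test_*` functions with plain asserts. A minimal sketch (the loader stub and file name are illustrative, not existing DeepScan code):

```python
# tests/test_toy_dataset.py (illustrative file name)

def load_toy_dataset():
    # Stand-in for a real DeepScan dataset loader under test
    return [{"input": "2 + 2 = ?", "label": "4"}]


def test_toy_dataset_has_required_fields():
    # pytest discovers functions named test_*; plain asserts are the convention
    dataset = load_toy_dataset()
    assert len(dataset) > 0
    for example in dataset:
        assert "input" in example and "label" in example
```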
## Documentation

- Update README.md if you add new features
- Add docstrings to new functions and classes
- Follow the existing documentation style
- Update example configs if you change configuration schemas
## Development Setup

```bash
# Clone the repository
git clone https://github.com/AI45Lab/DeepScan.git
cd DeepScan

# Install in development mode with all dependencies
pip install -e ".[dev,all]"

# Run tests
pytest

# Format code
black deepscan/

# Type checking
mypy deepscan/
```

If you have questions about contributing, please open an issue or contact the maintainers.
Thank you for contributing to DeepScan Framework!