Python Package to collect simulators for Sequential Sampling Models.
Find the package documentation here.
The ssms package serves two purposes.
- Easy access to fast simulators of sequential sampling models
- Support infrastructure to construct training data for various approaches to likelihood / posterior amortization
A number of tutorial notebooks are available under the /notebooks directory.
```bash
pip install ssm-simulators
```

**Note:** Building from source or developing this package requires a C compiler (such as GCC). On Linux, you can install GCC with:

```bash
sudo apt-get install build-essential
```

Most users installing from PyPI wheels do not need to install GCC.
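Once installed, drawing simulated choices and reaction times from a model in Python looks roughly like the minimal sketch below. It assumes the `simulator` entry point in `ssms.basic_simulators.simulator` and the standard DDM parameters (`v`, `a`, `z`, `t`); check the documentation and tutorial notebooks for the exact signature in your installed version.

```python
from ssms.basic_simulators.simulator import simulator

# Simulate 1,000 trials from a drift diffusion model (DDM).
# The parameter names and return format here are assumptions; see the docs.
out = simulator(
    model="ddm",
    theta={"v": 0.5, "a": 1.5, "z": 0.5, "t": 0.3},
    n_samples=1000,
)

# The returned dictionary typically contains per-trial reaction times and choices.
print(out["rts"][:5], out["choices"][:5])
```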
The package exposes a command-line tool, `generate`, for creating training data from a YAML configuration file:

```bash
generate --config-path <path/to/config.yaml> --output <output/directory> [--log-level INFO]
```

- `--config-path`: Path to your YAML configuration file (optional; a default configuration is used if not provided).
- `--output`: Directory where generated data will be saved (required).
- `--n-files`: (Optional) Number of data files to generate. Default is `1` file.
- `--estimator-type`: (Optional) Likelihood estimator type (`kde` or `pyddm`). Overrides the YAML config if specified.
- `--log-level`: (Optional) Set the logging level (`DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL`). Default is `WARNING`.
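For example, a full invocation might look like the following; the config path and output directory are placeholders, and all flags are the options listed above:

```bash
generate --config-path configs/ddm_config.yaml \
         --output data/training \
         --n-files 10 \
         --estimator-type kde \
         --log-level INFO
```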
Below is a sample YAML configuration you can use with the `generate` command:

```yaml
MODEL: 'ddm'
GENERATOR_APPROACH: 'lan'
PIPELINE:
  N_PARAMETER_SETS: 100
  N_SUBRUNS: 20
SIMULATOR:
  N_SAMPLES: 2000
  DELTA_T: 0.001
TRAINING:
  N_SAMPLES_PER_PARAM: 200
ESTIMATOR:
  TYPE: 'kde' # Options: 'kde' (default) or 'pyddm'
```

Configuration file parameter details follow.
Top-Level Parameters:

| Option | Definition |
|---|---|
| `MODEL` | The type of model you want to simulate (e.g., `ddm`, `angle`, `levy`) |
| `GENERATOR_APPROACH` | Type of generator used to generate the data (`lan` or `cpn`) |
PIPELINE Section:

| Option | Definition |
|---|---|
| `N_PARAMETER_SETS` | Number of parameter vectors that are used for training |
| `N_SUBRUNS` | Number of repetitions of each call to generate data |
SIMULATOR Section:

| Option | Definition |
|---|---|
| `N_SAMPLES` | Number of samples a simulation run should produce for a given parameter set |
| `DELTA_T` | Time-discretization step used in the numerical simulation of the model, i.e., the interval between updates of the evidence-accumulation process |
TRAINING Section:

| Option | Definition |
|---|---|
| `N_SAMPLES_PER_PARAM` | Number of times the kernel density estimate (KDE) is evaluated after creating the KDE from simulations of each set of model parameters |
ESTIMATOR Section:

| Option | Definition |
|---|---|
| `TYPE` | Likelihood estimator type: `kde` (default) or `pyddm` |
To create your own configuration file, copy the example above into a new `.yaml` file and modify it to suit your needs.
If you are using uv (see below), you can use the `uv run` command to run `generate` from the command line:
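For example, using the default configuration and a placeholder output directory:

```bash
uv run generate --output data/training
```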
This will generate training data according to your configuration and save it in the specified output directory.
Register custom transformations to apply model-specific modifications to sampled parameters:
```python
from ssms import register_transform_function
import numpy as np

# Register a custom transform
def exponential_drift(theta: dict) -> dict:
    if 'v' in theta:
        theta['v'] = np.exp(theta['v'])
    return theta

register_transform_function("exp_v", exponential_drift)

# Use in model configuration
model_config = {
    "name": "my_model",
    "params": ["v", "a", "z", "t"],
    "param_bounds": [...],
    "parameter_transforms": [
        {"type": "exp_v"}  # Your custom transform
    ]
}
```

Check the basic tutorial in our documentation.
We use `uv` for fast and efficient dependency management. To get started:

- Install `uv`:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

- Install dependencies (including development):

```bash
uv sync --all-groups  # Installs all dependency groups
```

We welcome contributions from the community! Whether you want to add a new model, improve documentation, or fix bugs, your help is appreciated.
Want to add your own sequential sampling model to the package? Check out our comprehensive guide:
📖 Contributing New Models Tutorial
This guide walks you through three levels of contribution:
- Level 1: Add boundary/drift variants (~15 min)
- Level 2: Implement Python simulators (~20 min)
- Level 3: Create high-performance Cython implementations (~30 min)
For bug reports, feature requests, or general questions:
- Open an issue on GitHub Issues
- Check existing issues to avoid duplicates
- Provide clear descriptions and reproducible examples
Please use this DOI to cite ssm-simulators: https://doi.org/10.5281/zenodo.17156205