This repository contains the refactored training and evaluation codebase for the Airbus/BMW Hybrid CycleGAN-Turbo project. It integrates quantum-enhanced image-to-image translation using Boson samplers with the classical CycleGAN-Turbo framework. The classical component of this hybrid model is a diffusion-based image translation network (CycleGAN-Turbo) trained with LoRA adapters, from https://github.com/GaParmar/img2img-turbo.
The model couples a CycleGAN-Turbo style diffusion backbone (UNet + VAE with LoRA adapters) with a Boson sampler encoder. The quantum encoder maps image embeddings from the VAE decoder into a photonic circuit representation, and its outputs are fused back into the translation pipeline. Training can run in full quantum mode, classical-only LoRA mode, or ablation mode with a random MLP in place of the sampler.
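The data flow above can be sketched in a few lines. All module names below are illustrative placeholders (not the repository's actual classes): a backbone produces an embedding, a quantum encoder (or stand-in) transforms it, and the result is fused back into the pipeline.

```python
import torch
import torch.nn as nn

class HybridTranslator(nn.Module):
    """Illustrative sketch of the hybrid pipeline: a classical backbone
    produces an embedding, a quantum (or stand-in) encoder transforms it,
    and the result is fused back before decoding."""

    def __init__(self, embed_dim=1024, quantum_encoder=None):
        super().__init__()
        # Stand-in for the CycleGAN-Turbo UNet/VAE stack.
        self.backbone = nn.Linear(embed_dim, embed_dim)
        # Boson sampler, random-MLP ablation, or identity (classical-only mode).
        self.quantum_encoder = quantum_encoder or nn.Identity()
        self.fuse = nn.Linear(embed_dim * 2, embed_dim)

    def forward(self, emb):
        feat = self.backbone(emb)
        q = self.quantum_encoder(feat)                  # photonic-circuit features
        return self.fuse(torch.cat([feat, q], dim=-1))  # fuse back into pipeline

x = torch.randn(2, 1024)
out = HybridTranslator()(x)
print(out.shape)  # torch.Size([2, 1024])
```

Swapping `quantum_encoder` is what the `quantum` / `random_ablation` flags described below effectively do.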
The simplest way to begin is to run the default configuration:
```bash
python src/train.py --tracker_project_name my_wandb_project --output_dir ./outputs/run001
```

This will:
- Load the default configuration from `src/config/experiments/default.yaml`
- Instantiate a quantum-enhanced CycleGAN (Boson sampler + LoRA fine-tuning)
- Train on the specified dataset for up to 25,000 steps
- Log metrics to Weights & Biases
To disable the Boson sampler and run pure classical LoRA fine-tuning:
```bash
python src/train.py --quantum false --tracker_project_name my_wandb_project
```

To verify your environment and GPU setup with a tiny dataset:

```bash
python src/train.py --experiment_config src/config/experiments/gpu_smoke.yaml \
    --tracker_project_name smoke_test --output_dir ./outputs/smoke
```

This runs 5 training steps on a minimal dataset, perfect for CI/CD pipelines or debugging.
```
application_qCycleGAN/
|-- README.md
|-- src/
|   |-- train.py                        # Main entry point for training
|   |-- get_metrics.py                  # Evaluation & metric computation (FID, DINO)
|   |-- verify_quantum_training.py      # macOS environment sanity check
|   |-- config/                         # Configuration system (dataclasses + YAML)
|   |   |-- defaults.py                 # QuantumConfig, DatasetConfig, TrainingConfig
|   |   |-- loader.py                   # load_configs() helper for YAML parsing
|   |   `-- experiments/                # Pre-built experiment bundles
|   |       |-- default.yaml            # Full quantum pipeline baseline
|   |       |-- classical_run.yaml      # Pure classical LoRA (Day<->Night)
|   |       `-- gpu_smoke.yaml          # Tiny dataset for quick tests
|   |-- models/                         # Neural network modules & builders
|   |   |-- cyclegan_turbo.py           # CycleGAN-Turbo architecture (UNet + LoRA)
|   |   |-- quantum_encoder.py          # Sorted Boson sampler (default, post-selected)
|   |   |-- quantum_encoder_unsorted.py # Unsorted variant (LexGrouping output aggregation)
|   |   |-- model_factory.py            # Factory for assembling UNet/VAE/Boson sampler stacks
|   |   `-- legacy/                     # Archived implementations for reference
|   |-- data/                           # Dataset utilities
|   |-- training/                       # Training infrastructure
|   |-- my_utils/                       # CLI helpers, device utils, argparse, etc.
|   |-- tests/                          # Test suite
|   |   |-- test_quantum_encoder.py
|   |   |-- test_config.py
|   |   |-- test_dataset.py
|   |   |-- test_model_factory.py
|   |   `-- test_losses.py
|   `-- qCycleGAN.ipynb                 # Interactive tutorial notebook
`-- requirements.txt
```
All training parameters are managed through three config dataclasses. You can specify them via:
- YAML files (`src/config/experiments/*.yaml`) - recommended for reproducibility
- CLI arguments - useful for quick iterations
- Python dataclass defaults (`src/config/defaults.py`)
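The three sources above merge with a simple precedence: dataclass defaults are overridden by YAML values, which are overridden by CLI flags. A conceptual sketch of that layering (not the repository's actual `load_configs()` implementation):

```python
from dataclasses import dataclass, replace

@dataclass
class TrainingConfig:
    learning_rate: float = 1.0e-5
    max_train_steps: int = 25000

# 1. Start from dataclass defaults.
cfg = TrainingConfig()
# 2. Overlay values parsed from a YAML experiment file.
yaml_overrides = {"max_train_steps": 50000}
cfg = replace(cfg, **yaml_overrides)
# 3. Overlay CLI arguments last, so they always win.
cli_overrides = {"learning_rate": 5.0e-5}
cfg = replace(cfg, **cli_overrides)
print(cfg)  # TrainingConfig(learning_rate=5e-05, max_train_steps=50000)
```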
```yaml
quantum:
  quantum: true                  # Enable/disable Boson sampler
  sort_encoding: true            # true = sorted (default), false = unsorted (LexGrouping)
  num_modes: 20                  # M in photonic circuit
  num_photons: 3                 # N photons to use
  epsilon: 1.0e-5                # Numerical stability parameter
  trainable_parameters: ["phi"]  # Which circuit params are trainable

classical:
  cl_comp: false                    # Classical comparison mode (freeze VAE encoders)
  random_ablation: false            # Use random 2-layer MLP instead of Boson sampler
  random_trained: false             # Train the random layer when ablation is enabled
  random_ablation_hidden_dim: 2048  # Hidden size for the random ablation MLP
```

Set `classical.random_ablation=true` (or pass `--random_ablation true`) to swap the Boson sampler with a simple random MLP producing a 1024-d embedding for ablation studies. Toggle `classical.random_trained` to decide whether that layer updates during training.
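A minimal sketch of what such an ablation head could look like (the actual module lives in the repository; the layer sizes here just mirror the config values above, and the helper name is hypothetical):

```python
import torch
import torch.nn as nn

def make_random_ablation_head(in_dim, hidden_dim=2048, out_dim=1024, trained=False):
    """Random 2-layer MLP standing in for the Boson sampler.

    With trained=False its weights stay fixed (random_trained=false);
    with trained=True it updates alongside the rest of the model.
    """
    head = nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.ReLU(),
        nn.Linear(hidden_dim, out_dim),  # 1024-d embedding, as in the ablation
    )
    for p in head.parameters():
        p.requires_grad = trained
    return head

head = make_random_ablation_head(512)
out = head(torch.randn(3, 512))
print(out.shape)  # torch.Size([3, 1024])
```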
```yaml
dataset:
  dataset_folder: ../data/dataset_full_scale/
  train_img_prep: no_resize         # Image preprocessing strategy
  train_batch_size: 1               # Batch size for training
  validation_num_images: 4          # Images to generate per validation

training:
  learning_rate: 1.0e-5
  max_train_steps: 25000
  checkpointing_steps: 5000         # Save model checkpoint every N steps
  validation_steps: 2000            # Validate every N steps
  tracker_project_name: my_project  # W&B project name
```

Training behavior is controlled primarily by `quantum.quantum`, `classical.cl_comp`, and `classical.random_ablation` (note: `classical.random_ablation=true` is incompatible with `quantum.quantum=true`).
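The incompatibility noted above can be guarded with a small validation step before training starts; a sketch (the real codebase may enforce this differently):

```python
def validate_modes(quantum: bool, random_ablation: bool) -> None:
    """Reject impossible mode combinations before training starts."""
    if quantum and random_ablation:
        raise ValueError(
            "classical.random_ablation=true is incompatible with quantum.quantum=true"
        )

validate_modes(quantum=True, random_ablation=False)  # full quantum pipeline: ok
validate_modes(quantum=False, random_ablation=True)  # random-MLP ablation: ok
try:
    validate_modes(quantum=True, random_ablation=True)
except ValueError as e:
    print(e)  # impossible combination is rejected
```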
- VAE encoder: frozen (including `quant_conv` when present).
- VAE decoder pipeline: kept the same; trained via LoRA adapters (`vae_skip`).
- Fully-trained VAE components: `decoder.conv_in`, `decoder.conv_out`, `decoder.skip_conv_{1..4}`, `post_quant_conv`.
- UNet:
  - If `training.unet_trained=true`: all UNet parameters train.
  - Else: UNet trains via LoRA adapters + `unet.conv_in`.
- Sampler head:
  - If `quantum.quantum=true`: Boson sampler trains if `quantum.quantum_trained=true`.
  - If `classical.random_ablation=true`: random MLP trains if `classical.random_trained=true`.

- VAE: encoder + decoder are trainable through the configured LoRA adapter (`vae_skip`) and the added decoder skip-convs.
- UNet: same rule as above for `training.unet_trained` (full UNet vs LoRA + `unet.conv_in`).
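The freezing rules above amount to toggling `requires_grad` by parameter-name prefix. An illustrative sketch (the prefixes mirror the lists above, but the helper itself is hypothetical):

```python
import torch.nn as nn

# Decoder components listed above as fully trained.
FULLY_TRAINED_PREFIXES = (
    "decoder.conv_in", "decoder.conv_out", "decoder.skip_conv_", "post_quant_conv",
)

def apply_freezing(vae: nn.Module, unet: nn.Module, unet_trained: bool) -> None:
    # VAE: freeze everything, then re-enable the fully-trained decoder parts.
    for name, p in vae.named_parameters():
        p.requires_grad = name.startswith(FULLY_TRAINED_PREFIXES)
    # UNet: either train everything, or only LoRA adapters + conv_in.
    for name, p in unet.named_parameters():
        p.requires_grad = unet_trained or "lora" in name or name.startswith("conv_in")
```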
Any config field can be overridden from the command line:
```bash
python src/train.py \
    --experiment_config src/config/experiments/default.yaml \
    --quantum false \
    --train_batch_size 2 \
    --learning_rate 5.0e-5 \
    --max_train_steps 50000 \
    --output_dir ./outputs/custom_run
```

Here the flags override the quantum setting, batch size, learning rate, and max steps from the experiment file.

The project provides two Boson sampler variants, selectable via `QuantumConfig.sort_encoding`:
File: `src/models/quantum_encoder.py` (class `_SortedBosonSampler`)
Architecture:
- Input amplitudes are split into positive/negative pairs
- Each spatial coordinate is post-selected as a quantum measurement outcome
- Conditional probabilities are computed via Fock state projections
When to use:
- Default choice for all new experiments
- Compatible with all existing checkpoints
- More sophisticated quantum encoding scheme
Hyperparameters:
- `num_modes` (M): Total modes in the interferometer
- `num_photons` (N): Photon number constraint
- `sort_encoding=true`: Activates this variant, where the amplitude encoding is not done with respect to each selected Fock state (less memory needed)
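The output dimensionality of a Boson sampler grows combinatorially: N indistinguishable photons over M modes give C(M+N-1, N) possible Fock states (the standard stars-and-bars count; this is general quantum-optics arithmetic, not code from the repository).

```python
from math import comb

def num_fock_states(num_modes: int, num_photons: int) -> int:
    """Number of ways to distribute N indistinguishable photons over M modes."""
    return comb(num_modes + num_photons - 1, num_photons)

print(num_fock_states(20, 3))  # 1540 states for the default M=20, N=3
print(num_fock_states(16, 2))  # 136 states for a smaller M=16, N=2 circuit
```

This is why mode/photon counts dominate the memory footprint of the sorted variant.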
File: `src/models/quantum_encoder_unsorted.py` (class `BosonSampler`)
Architecture:
- Input amplitudes are directly encoded into the quantum circuit
- Output Fock state probabilities are aggregated via Merlin's `LexGrouping` strategy
- Simpler, parameter-efficient alternative
When to use:
- Faster experiments (fewer quantum evaluations)
- Research exploring different output mapping strategies
- Parameter-constrained scenarios
Hyperparameters:
- `num_modes` (M): Must be provided in config (not computed from dims)
- `num_photons` (N): Must be provided in config
- `sort_encoding=false`: Activates this variant
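A rough sketch of the idea behind lexicographic-grouping aggregation: the Fock-state probability vector (states in lexicographic order) is split into contiguous buckets that are summed down to a fixed output size. This only illustrates the concept; Merlin's actual `LexGrouping` may differ in detail.

```python
import torch

def lex_grouping(probs: torch.Tensor, output_size: int) -> torch.Tensor:
    """Aggregate a (batch, n_states) probability vector into output_size
    buckets of contiguous (lexicographically ordered) Fock states."""
    batch, n_states = probs.shape
    pad = (-n_states) % output_size            # zero-pad so states split evenly
    padded = torch.nn.functional.pad(probs, (0, pad))
    return padded.view(batch, output_size, -1).sum(dim=-1)

p = torch.full((2, 1540), 1.0 / 1540)          # uniform over C(22,3) Fock states
out = lex_grouping(p, 64)
print(out.shape)  # torch.Size([2, 64])
```

Summing within buckets keeps the output a valid probability vector, just coarser.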
Switching Between Variants:
In your YAML config:
```yaml
quantum:
  sort_encoding: true   # Use sorted (default, high memory demand)
  # or
  sort_encoding: false  # Use unsorted (LexGrouping)
```

Both variants expose the same PyTorch `nn.Module` interface, so switching is seamless.
```bash
# Verify your environment
python src/verify_quantum_training.py

# Explore the available configs
ls src/config/experiments/
```

```bash
python src/train.py \
    --experiment_config src/config/experiments/default.yaml \
    --tracker_project_name my_project \
    --output_dir ./outputs/experiment_001
```

- Weights & Biases: view real-time metrics at https://wandb.ai/your-user/my_project
- Local checkpoints: saved to `./outputs/experiment_001/` every `checkpointing_steps`
```bash
python src/get_metrics.py \
    --checkpoint_dir ./outputs/experiment_001/checkpoint-25000 \
    --quantum true \
    --output_dir ./outputs/experiment_001/metrics
```

This computes FID and DINO metrics on validation images.
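For reference, FID is the Fréchet distance between Gaussians fitted to real and generated feature embeddings: ||mu1 - mu2||^2 + Tr(S1 + S2 - 2(S1 S2)^(1/2)). A small self-contained sketch of that formula (the repository's `get_metrics.py` presumably relies on a standard library implementation instead):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two Gaussians N(mu1, sigma1) and N(mu2, sigma2)."""
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):   # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

# Identical distributions have distance 0.
mu, sigma = np.zeros(4), np.eye(4)
print(round(float(frechet_distance(mu, sigma, mu, sigma)), 6))
```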
```bash
python src/train.py \
    --experiment_config src/config/experiments/classical_run.yaml \
    --tracker_project_name classical_baseline \
    --quantum false
```

```bash
python src/train.py \
    --quantum true \
    --num_modes 16 \
    --num_photons 2 \
    --sort_encoding true \
    --train_batch_size 4 \
    --learning_rate 2.0e-5 \
    --max_train_steps 50000 \
    --tracker_project_name quantum_custom \
    --output_dir ./outputs/quantum_v2
```

```bash
python src/train.py \
    --experiment_config src/config/experiments/gpu_smoke.yaml \
    --tracker_project_name smoke_test
```

The project includes a comprehensive test suite (23 tests) covering:
- Quantum encoder implementations (sorted & unsorted)
- Configuration loading and validation
- Dataset utilities
- Model factory instantiation
Run all tests:

```bash
pytest src/tests/ -v
```

Run a specific test file:

```bash
pytest src/tests/test_quantum_encoder.py -v
```

- Create a new YAML file in `src/config/experiments/my_new_experiment.yaml`
- Copy the structure from an existing config (e.g., `default.yaml`)
- Customize the `quantum`, `dataset`, and `training` sections
- Run: `python src/train.py --experiment_config src/config/experiments/my_new_experiment.yaml`
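A minimal `my_new_experiment.yaml` might look like this (field names follow the config examples earlier in this README; the values are illustrative):

```yaml
quantum:
  quantum: true
  sort_encoding: true
  num_modes: 16
  num_photons: 2

dataset:
  dataset_folder: ../data/my_dataset/
  train_batch_size: 2

training:
  learning_rate: 2.0e-5
  max_train_steps: 10000
  tracker_project_name: my_new_experiment
```

Any field you omit falls back to the dataclass defaults in `src/config/defaults.py`.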
Edit `src/config/defaults.py` to change base values for all configs:

```python
# src/config/defaults.py
@dataclass
class QuantumConfig:
    quantum: bool = True
    num_modes: int = 20   # Change this default
    num_photons: int = 3  # or this
    # ...
```

- Implement your module as a PyTorch `nn.Module`
- Update `src/models/model_factory.py` to instantiate it
- Add unit tests in `src/tests/` (following existing patterns)
- Document the new component in this README
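Implementing the module (the first step above) might start from a skeleton like this; the class name and dimensions are hypothetical, so match whatever interface `model_factory.py` actually expects:

```python
import torch
import torch.nn as nn

class MyCustomEncoder(nn.Module):
    """Drop-in replacement for the Boson sampler head: takes an embedding,
    returns a transformed embedding through the same nn.Module interface."""

    def __init__(self, in_dim: int, out_dim: int = 1024):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.proj(x))

enc = MyCustomEncoder(in_dim=512)
print(enc(torch.randn(1, 512)).shape)  # torch.Size([1, 1024])
```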
| File | Purpose |
|---|---|
| `src/train.py` | Main training entry point |
| `src/config/defaults.py` | Configuration dataclass definitions |
| `src/config/loader.py` | YAML config parsing |
| `src/models/quantum_encoder.py` | Sorted Boson sampler (default) |
| `src/models/quantum_encoder_unsorted.py` | Unsorted Boson sampler (LexGrouping) |
| `src/models/model_factory.py` | Model instantiation & component factory |
| `src/get_metrics.py` | FID & DINO metric computation |
| `src/qCycleGAN.ipynb` | Interactive tutorial & demo |
| `src/tests/test_quantum_encoder.py` | Quantum encoder test suite (23 tests) |
- Paper: see https://arxiv.org/abs/2403.12036 for the full classical CycleGAN-Turbo approach to unpaired image generation (code: https://github.com/GaParmar/img2img-turbo)
- Tutorial: `src/qCycleGAN.ipynb` provides step-by-step walkthroughs
