
feat: Add Quantum-Enhanced Low-Rank Adaptation (QuantumLoRA) #144

Open
mosh3eb wants to merge 7 commits into merlinquantum:main from mosh3eb:main

Conversation

mosh3eb commented Feb 14, 2026

Summary

Implements Quantum-Enhanced Low-Rank Adaptation (LoRA) using photonic quantum circuits for fine-tuning neural networks. Provides a drop-in replacement for nn.Linear layers with an added quantum adaptation path.
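Conceptually (a minimal sketch, not this PR's actual implementation): the frozen base layer's output is summed with a trainable low-rank path whose middle stage would be the photonic circuit. The `QuantumLoRASketch` name and the `nn.Tanh` stand-in for the quantum feature map are illustrative assumptions only:

```python
import torch.nn as nn

class QuantumLoRASketch(nn.Module):
    """Illustrative only: frozen base output plus a low-rank adapted path."""
    def __init__(self, base: nn.Linear, r: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        self.down = nn.Linear(base.in_features, r, bias=False)  # project d -> r
        self.quantum = nn.Tanh()  # placeholder for the photonic circuit
        self.up = nn.Linear(r, base.out_features, bias=False)   # project r -> d'
        nn.init.zeros_(self.up.weight)  # adapted layer starts equal to the base

    def forward(self, x):
        return self.base(x) + self.up(self.quantum(self.down(x)))
```

Zero-initializing the up-projection keeps the adapted layer numerically identical to the base layer at the start of fine-tuning, matching standard LoRA practice.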

Related Issue

Related to #34

Type of change

  • Bug fix
  • New feature
  • Documentation update
  • Refactor / Cleanup
  • Performance improvement
  • CI / Build / Tooling
  • Breaking change (requires migration notes)

Proposed changes

  • Add QuantumLoRALayer class with photonic quantum circuit integration
  • Implement convert_to_quantum_lora() utility with regex pattern matching and exclusion support (see the sketch after this list)
  • Add QuantumAnsatz enum for circuit architecture selection (Simple, Universal, Hardware-Efficient)
  • Export components in merlin and merlin.algorithms public APIs
  • Fix NumPy version constraint (numpy<2) for binary compatibility with PyTorch and Perceval
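To make the auto-injection and ansatz-selection items above concrete, here is an illustrative sketch. The `select_target_linears` helper, the `exclude_modules` parameter name, and the enum member values are assumptions for illustration, not the PR's confirmed API:

```python
import re
from enum import Enum
import torch.nn as nn

class QuantumAnsatz(Enum):  # mirrors the three variants named above;
    SIMPLE = "simple"       # member values are assumed, not confirmed
    UNIVERSAL = "universal"
    HARDWARE_EFFICIENT = "hardware_efficient"

def select_target_linears(model: nn.Module, target_modules, exclude_modules=()):
    """Yield (name, module) for nn.Linear submodules whose qualified name
    matches any target pattern and no exclusion pattern."""
    for name, module in model.named_modules():
        if not isinstance(module, nn.Linear):
            continue
        if any(re.search(p, name) for p in exclude_modules):
            continue
        if any(re.search(p, name) for p in target_modules):
            yield name, module
```

Matching on qualified module names (e.g. "0" for the first child of an nn.Sequential) is what lets target_modules=[r"^0$"] in the usage example below pick out only the first linear layer.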

How to test / How to run

  1. Run the unit tests:

```bash
pytest tests/algorithms/test_lora.py -v
```

  2. Run the example script:

```bash
python3 examples/quantum_lora_finetuning.py
```

  3. Run the benchmark:

```bash
python3 benchmarks/benchmark_quantum_lora.py
```

  4. Basic usage:

```python
from merlin import convert_to_quantum_lora, QuantumAnsatz
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 64),
    nn.ReLU(),
    nn.Linear(64, 2)
)

convert_to_quantum_lora(
    model,
    r=4,
    n_photons=2,
    target_modules=[r"^0$"],  # match only the first Linear (named "0" inside Sequential)
    ansatz=QuantumAnsatz.SIMPLE
)
```
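As a quick sanity check after the conversion above (assuming, as in standard LoRA, that converted layers freeze their base weights), trainable and total parameter counts can be compared:

```python
# Hypothetical sanity check (assumes conversion freezes base weights):
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / total: {total:,}")
```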

Screenshots / Logs (optional)

Test output:

====================== 7 passed in 6.40s =======================

Benchmark output:

=== Quantum LoRA Efficiency Benchmark ===

Original trainable params: 131,712
Standard LoRA added params (rank 8): ~10,240
Quantum LoRA added params (rank 8): 50,144

Parameter reduction vs Standard LoRA: -389.69%

Running dummy forward pass...
Avg forward pass time: 10.72ms
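The negative "reduction" follows directly from the counts above: (10,240 − 50,144) / 10,240 ≈ −389.7%, i.e., quantum LoRA adds roughly 4.9× the parameters of classical rank-8 LoRA in this configuration.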

Performance considerations (optional)

Quantum LoRA's parameter count scales with the dimension of the photonic Hilbert space. While it uses more parameters than classical rank-8 LoRA in this configuration, it provides non-linear expressive power in the low-rank subspace. Forward-pass timing is ~10-15 ms for typical circuits.
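For intuition on that scaling: the Fock space of n indistinguishable photons in m modes has dimension C(n+m−1, n), so any parameter set sized by this space grows combinatorially with modes and photons. A small illustration (mode counts chosen arbitrarily; how the PR maps r and n_photons to a mode count is not shown here):

```python
from math import comb

def fock_dim(n_photons: int, n_modes: int) -> int:
    """Dimension of the n-photon, m-mode photonic Fock space: C(n+m-1, n)."""
    return comb(n_photons + n_modes - 1, n_photons)

for m in (4, 8, 12, 16):
    print(f"{m:2d} modes, 2 photons -> Fock dimension {fock_dim(2, m)}")
```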

Documentation

  • User docs updated (Sphinx) - Added docs/QUANTUM_LORA.md
  • Examples / notebooks updated - Added examples/quantum_lora_finetuning.py
  • Docstrings updated - All classes and functions documented

Checklist

  • PR title includes Jira issue key (e.g., PML-126) - N/A (external contributor)
  • "Related Jira ticket" section includes the Jira issue key (no URL) - N/A (external contributor)
  • Code formatted (ruff format)
  • Lint passes (ruff)
  • Static typing passes (mypy) if applicable
  • Unit tests added/updated (pytest)
  • Tests pass locally (pytest) - 7/7 passing
  • Tests pass on GPU (pytest) - No GPU available for testing
  • Test coverage not decreased significantly
  • Docs build locally if affected (sphinx)
  • Dependencies updated (if needed) and pinned appropriately - numpy<2 constraint added
  • PR description explains what changed and how to validate it

From the commit messages:

Implements photonic quantum circuit adaptation as a drop-in replacement for nn.Linear layers during fine-tuning. Supports multiple circuit architectures and regex-based auto-injection. Addresses merlinquantum#34

Tests initialization, forward/backward pass, gradient flow, ansatz variants, and auto-injection patterns.
ben9871 self-assigned this Feb 16, 2026