feat: Add Quantum-Enhanced Low-Rank Adaptation (QuantumLoRA) #144
Open
mosh3eb wants to merge 7 commits into merlinquantum:main
Conversation
Implements photonic quantum circuit adaptation as a drop-in replacement for nn.Linear layers during fine-tuning. Supports multiple circuit architectures and regex-based auto-injection. Addresses merlinquantum#34
Tests initialization, forward/backward pass, gradient flow, ansatz variants, and auto-injection patterns.
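A minimal pytest-style sketch of the gradient-flow check described above, assuming the constructor signature `QuantumLoRALayer(in_features, out_features)` and the import path shown (neither is confirmed by this PR):

```python
import torch

from merlin.algorithms import QuantumLoRALayer  # name introduced in this PR


def test_gradient_flow():
    # Hypothetical constructor arguments; the actual signature may differ.
    layer = QuantumLoRALayer(in_features=16, out_features=16)
    x = torch.randn(4, 16)
    out = layer(x)
    out.sum().backward()
    # Every trainable parameter of the quantum adaptation path should receive a gradient.
    assert all(p.grad is not None for p in layer.parameters() if p.requires_grad)
```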
Summary
Implements Quantum-Enhanced Low-Rank Adaptation (LoRA) using photonic quantum circuits for fine-tuning neural networks. Provides a drop-in replacement for nn.Linear layers with a quantum adaptation path.
Related Issue
Related to #34
Type of change
Proposed changes
- `QuantumLoRALayer` class with photonic quantum circuit integration
- `convert_to_quantum_lora()` utility with regex pattern matching and exclusion support
- `QuantumAnsatz` enum for circuit architecture selection (Simple, Universal, Hardware-Efficient)
- Exported through the `merlin` and `merlin.algorithms` public APIs
- NumPy pinned (`numpy<2`) for binary compatibility with PyTorch and Perceval

How to test / How to run
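A minimal sketch of running the conversion end to end; the keyword arguments `target_pattern` and `ansatz` and the enum member `QuantumAnsatz.SIMPLE` are assumptions, not the confirmed API:

```python
import torch
import torch.nn as nn

from merlin import QuantumAnsatz, convert_to_quantum_lora  # exported per this PR

# Toy model whose nn.Linear layers we want to adapt.
model = nn.Sequential(
    nn.Linear(32, 32),
    nn.ReLU(),
    nn.Linear(32, 8),
)

# Auto-inject quantum LoRA into submodules whose names match the regex
# (keyword names here are illustrative, not the confirmed signature).
model = convert_to_quantum_lora(
    model,
    target_pattern=r".*",          # hypothetical: adapt every nn.Linear
    ansatz=QuantumAnsatz.SIMPLE,   # hypothetical enum member
)

# Forward/backward pass through the adapted model.
x = torch.randn(2, 32)
model(x).sum().backward()
```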
Screenshots / Logs (optional)
Test output:
Benchmark output:
Performance considerations (optional)
Quantum LoRA parameter count scales with the photonic Hilbert-space dimension. While it uses more parameters than classical rank-8 LoRA in this configuration, it provides non-linear expressive power in the low-rank subspace. Forward-pass timing is ~10-15 ms for typical circuits.
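A hedged sketch of how the ~10-15 ms forward-pass figure could be reproduced; the constructor arguments are assumptions, not the confirmed API:

```python
import time

import torch

from merlin.algorithms import QuantumLoRALayer  # name introduced in this PR

layer = QuantumLoRALayer(in_features=64, out_features=64)  # hypothetical args
x = torch.randn(32, 64)

with torch.no_grad():
    # Warm-up iterations so one-time initialization does not skew the measurement.
    for _ in range(5):
        layer(x)

    start = time.perf_counter()
    for _ in range(100):
        layer(x)
    elapsed_ms = (time.perf_counter() - start) / 100 * 1e3

print(f"mean forward pass: {elapsed_ms:.2f} ms")
```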
Documentation
- docs/QUANTUM_LORA.md
- examples/quantum_lora_finetuning.py

Checklist
- `numpy<2` constraint added