diff --git a/README.md b/README.md
index c3034f9..fd0bcda 100644
--- a/README.md
+++ b/README.md
@@ -83,6 +83,33 @@ tesseract-code-stim/
- **`notebooks/`**: Interactive Jupyter notebooks for experiments and visualization
- **`tests/`**: Comprehensive test suite covering all major functionality
+## Results
+
+### Error Correction Demonstration
+
+First, we show the acceptance rates across different noise levels and numbers of error correction rounds:
+
+![Acceptance rates](plots/acceptance_rates_ec_experiment.png)
+
+The plots below demonstrate the error correction procedure by comparing fidelity with and without Pauli frame correction applied. Both experiments run the full error correction rounds, but only the experiment shown in the left plot applies the corrections based on the accumulated Pauli frame.
+
+
+| With Pauli Frame Correction | Without Pauli Frame Correction |
+| --- | --- |
+| ![With correction](plots/fidelity_rates_ec_experiment_with_correction.png) | ![Without correction](plots/fidelity_rates_ec_experiment_without_correction.png) |
+
+This comparison shows that the error correction procedure corrects some of the errors, achieving higher fidelity across the tested noise levels and numbers of error correction rounds.
+
## Quick Start
### Installation
@@ -174,13 +201,15 @@ The `plotting/plot_acceptance_rates.py` script generates acceptance and logical
python tesseract_sim/plotting/plot_acceptance_rates.py --apply_pauli_frame true --encoding-mode 9a --out-dir ./custom_plots
```
-The script generates two types of plots:
-- **Acceptance Rate Plots**: Show how well the error correction accepts states across different noise levels and rounds
-- **Logical Success Rate Plots**: Show the conditional probability of logical success given acceptance
-
-**Example Results:**
+* **Customize rounds and noise levels:**
+ ```bash
+ python tesseract_sim/plotting/plot_acceptance_rates.py --rounds 1 5 10 20 --noise-levels 0.01 0.05 0.1 --shots 1000
+ ```
-
+The script generates three types of plots:
+- **Acceptance Rate Plots**: Show how well the error correction accepts states across different noise levels and rounds
+- **Logical Success Rate Plots**: Show the conditional probability of logical success given acceptance. Logical success is defined here as all qubits being measured in the correct state.
+- **Fidelity Rate Plots**: Show the average fidelity of the measured logical state among the shots that were not rejected.
## References
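The three plot types described in the README map directly onto the per-data-point tuples collected by the sweep. A minimal sketch with hypothetical counts, assuming each data point is an `(accepted, logical_pass, avg_fidelity)` tuple as produced by `run_manual_error_correction`:

```python
# Each sweep data point is an (accepted, logical_pass, avg_fidelity) tuple,
# keyed by noise level, one tuple per number of EC rounds (values hypothetical).
raw_results = {
    0.005: [(950, 855, 0.97), (800, 700, 0.91)],
}
shots = 1000

# Acceptance rate: fraction of shots that survive every EC round.
acceptance = {n: [t[0] / shots for t in ts] for n, ts in raw_results.items()}

# Logical success rate, conditioned on acceptance (0.0 when nothing accepted).
logical = {n: [t[1] / t[0] if t[0] > 0 else 0.0 for t in ts]
           for n, ts in raw_results.items()}

# Average fidelity among accepted shots is carried through unchanged.
fidelity = {n: [t[2] for t in ts] for n, ts in raw_results.items()}
```

This mirrors the shapes consumed by `compute_logical_success_rate` and `compute_average_fidelity` in the plotting script.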
diff --git a/plots/acceptance_rates_ec_experiment.png b/plots/acceptance_rates_ec_experiment.png
new file mode 100644
index 0000000..dee27b2
Binary files /dev/null and b/plots/acceptance_rates_ec_experiment.png differ
diff --git a/plots/acceptance_rates_ec_noise_ec_experiment.png b/plots/acceptance_rates_ec_noise_ec_experiment.png
deleted file mode 100644
index 1867533..0000000
Binary files a/plots/acceptance_rates_ec_noise_ec_experiment.png and /dev/null differ
diff --git a/plots/ec_experiment_20250916_161024_with_correction/acceptance_rates_ec_experiment.png b/plots/ec_experiment_20250916_161024_with_correction/acceptance_rates_ec_experiment.png
new file mode 100644
index 0000000..dee27b2
Binary files /dev/null and b/plots/ec_experiment_20250916_161024_with_correction/acceptance_rates_ec_experiment.png differ
diff --git a/plots/ec_experiment_20250916_161024_with_correction/experiment_metadata.txt b/plots/ec_experiment_20250916_161024_with_correction/experiment_metadata.txt
new file mode 100644
index 0000000..469fddb
--- /dev/null
+++ b/plots/ec_experiment_20250916_161024_with_correction/experiment_metadata.txt
@@ -0,0 +1,20 @@
+Tesseract EC Experiment Metadata
+===================================
+
+Timestamp: 2025-09-16 16:54:26
+Total runtime: 00:44:02.614 (2642.614 seconds)
+
+Experiment Parameters:
+--------------------
+Rounds: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20]
+Noise levels: [np.float64(0.0), np.float64(0.0011111111111111111), np.float64(0.0022222222222222222), np.float64(0.003333333333333333), np.float64(0.0044444444444444444), np.float64(0.005555555555555556), np.float64(0.006666666666666666), np.float64(0.0077777777777777776), np.float64(0.008888888888888889), np.float64(0.01)]
+Shots per data point: 1000000
+Apply Pauli frame correction: True
+Encoding mode: 9a
+Sweep channel noise: False
+Noise configuration: Sweeping EC/decoding noise
+ - EC noise applied: During error correction rounds and decoding
+ - EC 1Q rate: Swept parameter
+ - EC 2Q rate: Swept parameter (same as 1Q)
+ - Channel noise: None (0.0)
+ - Encoding: Noiseless
diff --git a/plots/ec_experiment_20250916_161024_with_correction/fidelity_rates_ec_experiment.png b/plots/ec_experiment_20250916_161024_with_correction/fidelity_rates_ec_experiment.png
new file mode 100644
index 0000000..d41b6cf
Binary files /dev/null and b/plots/ec_experiment_20250916_161024_with_correction/fidelity_rates_ec_experiment.png differ
diff --git a/plots/ec_experiment_20250916_161024_with_correction/logical_rates_ec_experiment.png b/plots/ec_experiment_20250916_161024_with_correction/logical_rates_ec_experiment.png
new file mode 100644
index 0000000..c036d22
Binary files /dev/null and b/plots/ec_experiment_20250916_161024_with_correction/logical_rates_ec_experiment.png differ
diff --git a/plots/ec_experiment_20250916_165427_without_correction/acceptance_rates_ec_experiment.png b/plots/ec_experiment_20250916_165427_without_correction/acceptance_rates_ec_experiment.png
new file mode 100644
index 0000000..1a51b42
Binary files /dev/null and b/plots/ec_experiment_20250916_165427_without_correction/acceptance_rates_ec_experiment.png differ
diff --git a/plots/ec_experiment_20250916_165427_without_correction/experiment_metadata.txt b/plots/ec_experiment_20250916_165427_without_correction/experiment_metadata.txt
new file mode 100644
index 0000000..5a0cf80
--- /dev/null
+++ b/plots/ec_experiment_20250916_165427_without_correction/experiment_metadata.txt
@@ -0,0 +1,20 @@
+Tesseract EC Experiment Metadata
+===================================
+
+Timestamp: 2025-09-16 17:17:49
+Total runtime: 00:23:22.613 (1402.613 seconds)
+
+Experiment Parameters:
+--------------------
+Rounds: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20]
+Noise levels: [np.float64(0.0), np.float64(0.0011111111111111111), np.float64(0.0022222222222222222), np.float64(0.003333333333333333), np.float64(0.0044444444444444444), np.float64(0.005555555555555556), np.float64(0.006666666666666666), np.float64(0.0077777777777777776), np.float64(0.008888888888888889), np.float64(0.01)]
+Shots per data point: 1000000
+Apply Pauli frame correction: False
+Encoding mode: 9a
+Sweep channel noise: False
+Noise configuration: Sweeping EC/decoding noise
+ - EC noise applied: During error correction rounds and decoding
+ - EC 1Q rate: Swept parameter
+ - EC 2Q rate: Swept parameter (same as 1Q)
+ - Channel noise: None (0.0)
+ - Encoding: Noiseless
diff --git a/plots/ec_experiment_20250916_165427_without_correction/fidelity_rates_ec_experiment.png b/plots/ec_experiment_20250916_165427_without_correction/fidelity_rates_ec_experiment.png
new file mode 100644
index 0000000..4d9b3bb
Binary files /dev/null and b/plots/ec_experiment_20250916_165427_without_correction/fidelity_rates_ec_experiment.png differ
diff --git a/plots/ec_experiment_20250916_165427_without_correction/logical_rates_ec_experiment.png b/plots/ec_experiment_20250916_165427_without_correction/logical_rates_ec_experiment.png
new file mode 100644
index 0000000..851fb3d
Binary files /dev/null and b/plots/ec_experiment_20250916_165427_without_correction/logical_rates_ec_experiment.png differ
diff --git a/plots/fidelity_rates_ec_experiment_with_correction.png b/plots/fidelity_rates_ec_experiment_with_correction.png
new file mode 100644
index 0000000..d41b6cf
Binary files /dev/null and b/plots/fidelity_rates_ec_experiment_with_correction.png differ
diff --git a/plots/fidelity_rates_ec_experiment_without_correction.png b/plots/fidelity_rates_ec_experiment_without_correction.png
new file mode 100644
index 0000000..4d9b3bb
Binary files /dev/null and b/plots/fidelity_rates_ec_experiment_without_correction.png differ
diff --git a/tesseract_sim/error_correction/decoder_manual.py b/tesseract_sim/error_correction/decoder_manual.py
index f35fe07..b13a3b4 100644
--- a/tesseract_sim/error_correction/decoder_manual.py
+++ b/tesseract_sim/error_correction/decoder_manual.py
@@ -107,13 +107,30 @@ def verify_final_state(shot_tail, frameX=None, frameZ=None, apply_pauli_frame =
if apply_pauli_frame:
if frameX is not None and frameZ is not None:
- # Top half measured in X basis - apply X frame corrections (only LSB matters)
+ # Top half measured in X basis - apply Z frame corrections (phase errors affect X measurements)
for i in range(8):
- corrected[i] ^= (frameX[i] & 1)
+ corrected[i] ^= (frameZ[i] & 1)
- # Bottom half measured in Z basis - apply Z frame corrections (only LSB matters)
+ # Bottom half measured in Z basis - apply X frame corrections (bit flips affect Z measurements)
+ # Account for CNOT propagation: X errors on qubits 0-3 propagate to 12-15, and 4-7 propagate to 8-11
for i in range(8, 16):
- corrected[i] ^= (frameZ[i] & 1)
+ # Direct X frame corrections for qubits 8-15
+ corrected[i] ^= (frameX[i] & 1)
+
+ """ These workarounds are needed because we actually need to apply the correction before measurement.
+ A more complete and correct approach would be to branch insert the python error correction code before the final measurements
+ See https://quantumcomputing.stackexchange.com/questions/22281/simulating-flag-qubits-and-conditional-branches-using-stim
+ For more information.
+ """
+ # CNOT propagation: X errors from row 1 (0-3) propagate to row 4 (12-15)
+ if 12 <= i <= 15:
+ source_qubit = i - 12 # qubit 12→0, 13→1, 14→2, 15→3
+ corrected[i] ^= (frameX[source_qubit] & 1)
+
+ # CNOT propagation: X errors from row 2 (4-7) propagate to row 3 (8-11)
+ if 8 <= i <= 11:
+ source_qubit = i - 4 # qubit 8→4, 9→5, 10→6, 11→7
+ corrected[i] ^= (frameX[source_qubit] & 1)
# Calculate all operator parities for both 8-3-2 color codes
# X measurements (top half, qubits 0-7)
@@ -147,7 +164,6 @@ def verify_final_state(shot_tail, frameX=None, frameZ=None, apply_pauli_frame =
def run_manual_error_correction(circuit, shots, rounds, apply_pauli_frame = True, encoding_mode ='9b'):
"""
Runs the full manual error correction simulation with final logical state verification.
- Returns counts of shots that pass error correction and total successful parity checks.
Args:
circuit: The quantum circuit to simulate
@@ -155,10 +171,17 @@ def run_manual_error_correction(circuit, shots, rounds, apply_pauli_frame = True
rounds: Number of error correction rounds
apply_pauli_frame: Whether to apply Pauli frame corrections
encoding_mode: '9a' or '9b' - determines measurement offset and which parity checks to perform
+
+ Returns:
+ tuple: (ec_accept, logical_shots_passed, average_percentage)
+ - ec_accept: number of shots in which every error correction round accepts
+ - logical_shots_passed: number of accepted shots whose final logical measurement passes all parity checks
+ - average_percentage: average fraction of parity checks passed across accepted shots (None if no shots are accepted)
"""
# Calculate parameters based on encoding mode
only_z_checks = (encoding_mode == '9a')
measurement_offset = 0 if encoding_mode == '9a' else 2
+ max_checks = 2 if only_z_checks else 4
sampler = circuit.compile_sampler()
shot_data_all = sampler.sample(shots=shots)
@@ -166,6 +189,7 @@ def run_manual_error_correction(circuit, shots, rounds, apply_pauli_frame = True
ec_accept = 0
logical_shots_passed = 0
total_successful_checks = 0
+ fractional_logical_passed = 0.0
for shot_data in shot_data_all:
# Process error correction rounds with appropriate measurement offset
@@ -178,19 +202,23 @@ def run_manual_error_correction(circuit, shots, rounds, apply_pauli_frame = True
total_successful_checks += successful_checks
# Count shots where all parity checks pass
- max_checks = 2 if only_z_checks else 4
if successful_checks == max_checks:
logical_shots_passed += 1
+
+ # Add fractional contribution for average percentage calculation
+ fractional_logical_passed += successful_checks / max_checks
- # Calculate normalized logical success rate (total successful checks / total possible checks)
- max_checks = 2 if only_z_checks else 4
- normalized_logical_rate = total_successful_checks / (shots * max_checks) if shots > 0 else 0
+ # Average fraction of parity checks passed across accepted shots
+ average_percentage = fractional_logical_passed / ec_accept if ec_accept > 0 else None
print(f"Correcting by Pauli frame → {apply_pauli_frame}")
print(f"After EC rounds → {ec_accept}/{shots} accepted")
checks_desc = "Z3,Z5" if only_z_checks else "X4,X6,Z3,Z5"
print(f"Total successful parity checks ({checks_desc}) → {total_successful_checks}/{shots * max_checks}")
- print(f"Normalized logical success rate → {normalized_logical_rate:.2%}")
+ if average_percentage is not None:
+ print(f"Average percentage of checks passed → {average_percentage:.2%}")
+ else:
+ print(f"Average percentage of checks passed → N/A (no accepted shots)")
print(f"Logical shots passed (all checks) → {logical_shots_passed}/{shots}")
- return ec_accept, logical_shots_passed, shots - logical_shots_passed
\ No newline at end of file
+ return ec_accept, logical_shots_passed, average_percentage
\ No newline at end of file
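The frame-application rules above can be exercised in isolation. A minimal sketch (not the project function itself), assuming 16 final measurement bits with the top half measured in the X basis and the bottom half in the Z basis, and the same CNOT-propagation rules as `verify_final_state`:

```python
import numpy as np

def apply_frame(shot_tail, frameX, frameZ):
    """Apply Pauli-frame corrections to 16 final measurement bits.

    Sketch mirroring the propagation rules in verify_final_state:
    Z-frame bits flip X-basis outcomes, X-frame bits flip Z-basis
    outcomes, and X errors propagate through the decoding CNOTs.
    """
    corrected = np.array(shot_tail, dtype=np.uint8)
    # Top half measured in X basis: only phase (Z-frame) errors matter.
    for i in range(8):
        corrected[i] ^= frameZ[i] & 1
    # Bottom half measured in Z basis: bit flips (X-frame) matter.
    for i in range(8, 16):
        corrected[i] ^= frameX[i] & 1
        if 12 <= i <= 15:            # row 1 (0-3) propagates to row 4 (12-15)
            corrected[i] ^= frameX[i - 12] & 1
        if 8 <= i <= 11:             # row 2 (4-7) propagates to row 3 (8-11)
            corrected[i] ^= frameX[i - 4] & 1
    return corrected

# An X-frame error on qubit 0 flips only qubit 12's Z-basis outcome:
frameX = np.zeros(16, dtype=np.uint8)
frameX[0] = 1
frameZ = np.zeros(16, dtype=np.uint8)
out = apply_frame(np.zeros(16, dtype=np.uint8), frameX, frameZ)
```

Note that `frameX[0]` does not flip `corrected[0]` (qubit 0 is measured in the X basis, which X errors commute with); only its propagated copy on qubit 12 shows up.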
diff --git a/tesseract_sim/plotting/plot_acceptance_rates.py b/tesseract_sim/plotting/plot_acceptance_rates.py
index dac708d..b562016 100644
--- a/tesseract_sim/plotting/plot_acceptance_rates.py
+++ b/tesseract_sim/plotting/plot_acceptance_rates.py
@@ -5,6 +5,8 @@
import os
from typing import Callable, Dict, List, TypeVar, Tuple, Literal
import argparse
+from datetime import datetime
+import time
T = TypeVar('T') # Type of experiment result
@@ -45,12 +47,45 @@ def sweep_results(
return results
+def compute_logical_success_rate(raw_results: Dict[float, List[Tuple[int, int, float]]]) -> Dict[float, List[float]]:
+ """Extract logical success rates from raw results. Logical success == all qubits are measured with the correct results.
+
+ Args:
+ raw_results: Dict mapping noise levels to lists of (accepted, logical_pass, avg_fidelity) tuples
+
+ Returns:
+ Dict mapping noise levels to lists of logical success rates (logical_pass/accepted)
+ """
+ return {
+ noise: [
+ t[1]/t[0] if t[0] > 0 else 0.0 # logical_pass/accepted (conditional probability)
+ for t in tuples
+ ]
+ for noise, tuples in raw_results.items()
+ }
+
+def compute_average_fidelity(raw_results: Dict[float, List[Tuple[int, int, float]]]) -> Dict[float, List[float]]:
+ """Extract average fidelity values from raw results.
+
+ Args:
+ raw_results: Dict mapping noise levels to lists of (accepted, logical_pass, avg_fidelity) tuples
+
+ Returns:
+ Dict mapping noise levels to lists of average fidelity values
+ """
+ return {
+ noise: [t[2] for t in tuples] # avg_fidelity
+ for noise, tuples in raw_results.items()
+ }
+
def plot_curve(
rounds: List[int],
data: Dict[float, List[float]],
title: str,
ylabel: str,
- out_path: str
+ out_path: str,
+ xlim: Tuple[float, float] = None,
+ ylim: Tuple[float, float] = None
) -> None:
"""Plots and saves a single curve from sweep data."""
plt.figure(figsize=(12, 8))
@@ -64,21 +99,83 @@ def plot_curve(
plt.grid(True)
plt.legend()
+ # Set axis limits if provided
+ if xlim is not None:
+ plt.xlim(xlim)
+ if ylim is not None:
+ plt.ylim(ylim)
+
plt.savefig(out_path)
print(f"Plot saved to {out_path}")
plt.close()
+def write_experiment_metadata(
+ out_dir: str,
+ rounds: List[int],
+ noise_levels: List[float],
+ shots: int,
+ apply_pauli_frame: bool,
+ encoding_mode: str,
+ sweep_channel_noise: bool,
+ runtime_seconds: float = None
+) -> None:
+ """Write experiment metadata to a text file."""
+ metadata_path = os.path.join(out_dir, "experiment_metadata.txt")
+
+ with open(metadata_path, 'w') as f:
+ f.write("Tesseract EC Experiment Metadata\n")
+ f.write("=" * 35 + "\n\n")
+ f.write(f"Timestamp: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n")
+ if runtime_seconds is not None:
+ hours = int(runtime_seconds // 3600)
+ minutes = int((runtime_seconds % 3600) // 60)
+ seconds = runtime_seconds % 60
+ f.write(f"Total runtime: {hours:02d}:{minutes:02d}:{seconds:06.3f} ({runtime_seconds:.3f} seconds)\n")
+ f.write("\n")
+
+ f.write("Experiment Parameters:\n")
+ f.write("-" * 20 + "\n")
+ f.write(f"Rounds: {rounds}\n")
+ f.write(f"Noise levels: {list(noise_levels)}\n")
+ f.write(f"Shots per data point: {shots}\n")
+ f.write(f"Apply Pauli frame correction: {apply_pauli_frame}\n")
+ f.write(f"Encoding mode: {encoding_mode}\n")
+ f.write(f"Sweep channel noise: {sweep_channel_noise}\n")
+
+ if sweep_channel_noise:
+ f.write(f"Noise configuration: Sweeping channel noise only\n")
+ f.write(f" - Channel noise type: DEPOLARIZE1\n")
+ f.write(f" - Channel noise applied: After encoding, before EC rounds\n")
+ f.write(f" - EC procedures: Noiseless\n")
+ f.write(f" - Encoding/Decoding: Noiseless\n")
+ else:
+ f.write(f"Noise configuration: Sweeping EC/decoding noise\n")
+ f.write(f" - EC noise applied: During error correction rounds and decoding\n")
+ f.write(f" - EC 1Q rate: Swept parameter\n")
+ f.write(f" - EC 2Q rate: Swept parameter (same as 1Q)\n")
+ f.write(f" - Channel noise: None (0.0)\n")
+ f.write(f" - Encoding: Noiseless\n")
+
+ print(f"Metadata saved to {metadata_path}")
+
def plot_ec_experiment(
rounds: List[int],
noise_levels: List[float],
shots: int,
- out_dir: str,
+ base_out_dir: str,
apply_pauli_frame: bool = True,
encoding_mode: Literal['9a', '9b'] = '9b',
sweep_channel_noise: bool = False
) -> None:
"""Plots both EC acceptance and logical check rates for the EC experiment."""
+ start_time = time.time()
+
+ # Create timestamped output directory
+ timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+ out_dir = os.path.join(base_out_dir, f"ec_experiment_{timestamp}")
+ os.makedirs(out_dir, exist_ok=True)
+
# One sweep collecting full results
if sweep_channel_noise:
cfg_builder = lambda noise: NoiseCfg(ec_active=False, channel_noise_level=noise, channel_noise_type="DEPOLARIZE1")
@@ -99,30 +196,58 @@ def plot_ec_experiment(
for noise, tuples in raw_results.items()
}
+ # Set fixed axis ranges
+ max_rounds = max(rounds)
+ x_range = (0, max_rounds)
+
noise_type = "Channel" if sweep_channel_noise else "EC"
plot_curve(
rounds, ec_data,
title=f"{noise_type} Acceptance vs Rounds (EC Experiment)",
ylabel="EC Acceptance Rate",
- out_path=os.path.join(out_dir, f'acceptance_rates_{"channel" if sweep_channel_noise else "ec"}_noise_ec_experiment.png')
+ out_path=os.path.join(out_dir, 'acceptance_rates_ec_experiment.png'),
+ xlim=x_range,
+ ylim=(-0.01, 1.01)
)
# Derive logical check rate from same raw results - normalized by acceptance
- logical_data = {
- noise: [
- t[1]/t[0] if t[0] > 0 else 0.0 # logical_pass/accepted (conditional probability)
- for t in tuples
- ]
- for noise, tuples in raw_results.items()
- }
+ logical_data = compute_logical_success_rate(raw_results)
plot_curve(
rounds, logical_data,
title=f"Logical Check Success vs Rounds (EC Experiment) - {noise_type} Noise",
ylabel="Logical Success Rate | Accepted",
- out_path=os.path.join(out_dir, f'logical_rates_{"channel" if sweep_channel_noise else "ec"}_noise_ec_experiment.png')
+ out_path=os.path.join(out_dir, 'logical_rates_ec_experiment.png'),
+ xlim=x_range,
+ ylim=(-0.01, 1.01)
)
+ # Derive average fidelity from same raw results
+ fidelity_data = compute_average_fidelity(raw_results)
+
+ plot_curve(
+ rounds, fidelity_data,
+ title=f"Average Fidelity vs Rounds (EC Experiment) - {noise_type} Noise",
+ ylabel="Average Fidelity",
+ out_path=os.path.join(out_dir, 'fidelity_rates_ec_experiment.png'),
+ xlim=x_range,
+ ylim=(0.45, 1.01)
+ )
+
+ # Calculate total runtime and update metadata
+ end_time = time.time()
+ runtime_seconds = end_time - start_time
+
+ # Write final metadata with runtime
+ write_experiment_metadata(
+ out_dir, rounds, noise_levels, shots,
+ apply_pauli_frame, encoding_mode, sweep_channel_noise,
+ runtime_seconds=runtime_seconds
+ )
+
+ print(f"All experiment files saved to: {out_dir}")
+ print(f"Total experiment runtime: {runtime_seconds:.1f} seconds")
+
def str_to_bool(v):
"""Convert string to boolean for argparse."""
if isinstance(v, bool):
@@ -135,21 +260,29 @@ def str_to_bool(v):
raise argparse.ArgumentTypeError('Boolean value expected.')
def main():
+ # Define defaults
+ default_rounds = list(range(1, 11)) + [15, 20]
+ default_noise_levels = list(np.linspace(0.0000, 0.01, 10))
+
parser = argparse.ArgumentParser(description="Generate acceptance rate plots for tesseract experiments")
parser.add_argument('--experiments', type=int, nargs='+', choices=[2], default=[2],
help='Which experiments to plot (currently only 2 is supported)')
parser.add_argument('--shots', type=int, default=10000,
help='Number of shots per data point')
- parser.add_argument('--out-dir', type=str, default='../plots',
- help='Output directory for plots')
+ parser.add_argument('--out-dir', type=str, default='./plots',
+ help='Base output directory for plots (timestamped subdirectory will be created)')
parser.add_argument('--apply_pauli_frame', type=str_to_bool, default=False, help='Perform final correction - apply the measured Pauli frame. The error correction rounds and measurements (besides the actual correction at the end) happen regardless, based on the number of rounds.')
parser.add_argument('--encoding-mode', type=str, choices=['9a', '9b'], default='9a', help='Encoding mode')
parser.add_argument('--sweep-channel-noise', action='store_true', help='Sweep channel noise instead of EC noise. Channel noise acts once after encoding and before the error correction rounds.')
+ parser.add_argument('--rounds', type=int, nargs='+', default=default_rounds,
+ help=f'List of EC rounds to sweep (e.g. 1 10 20 30). Default: {default_rounds}')
+ parser.add_argument('--noise-levels', type=float, nargs='+', default=default_noise_levels,
+ help=f'List of noise rates to sweep (e.g. 0.05 0.1 0.2). Default: 10 points from 0.0 to 0.01')
args = parser.parse_args()
- # Combine detailed lower rounds with higher rounds
- rounds = list(range(1, 11)) + [20, 30, 40, 50]
- noise_levels = np.linspace(0.0000, 0.01, 30) # 30 points between 0 and 1%
+ # Use configurable values
+ rounds = args.rounds
+ noise_levels = args.noise_levels
os.makedirs(args.out_dir, exist_ok=True)
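The runtime line in `experiment_metadata.txt` uses the HH:MM:SS.mmm decomposition from `write_experiment_metadata`. A standalone sketch of the same formatting:

```python
def format_runtime(runtime_seconds: float) -> str:
    """Format seconds as HH:MM:SS.mmm, matching write_experiment_metadata."""
    hours = int(runtime_seconds // 3600)
    minutes = int((runtime_seconds % 3600) // 60)
    seconds = runtime_seconds % 60
    return f"{hours:02d}:{minutes:02d}:{seconds:06.3f} ({runtime_seconds:.3f} seconds)"

# e.g. the 2642.614 s run recorded in the with-correction metadata file
line = format_runtime(2642.614)
```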
diff --git a/tesseract_sim/run.py b/tesseract_sim/run.py
index fef6841..6120a96 100644
--- a/tesseract_sim/run.py
+++ b/tesseract_sim/run.py
@@ -13,27 +13,34 @@ def build_circuit_ec_experiment(rounds: int, cfg: NoiseCfg = NO_NOISE, encoding_
# Here we can use either Fig 9a encoding (|++0000>) or Fig 9b encoding (|+0+0+0>)
# depending on the encoding_mode parameter.
- # We start with a fresh circuit
- circuit = init_circuit(qubits=16, ancillas=2)
-
- # First, prepare a valid encoded state based on encoding mode
- if encoding_mode == '9a':
- encode_manual_fig9a(circuit, cfg=cfg)
- elif encoding_mode == '9b':
- encode_manual_fig9b(circuit, cfg=cfg)
- else:
- raise ValueError(f"Invalid encoding_mode: {encoding_mode}. Must be '9a' or '9b'")
+ circuit = build_encoding_circuit(cfg, encoding_mode)
# -----------------------------
if cfg.channel_noise_level > 0:
# Now, apply noise to the encoded state
channel(circuit, cfg.channel_noise_level, noise_type=cfg.channel_noise_type)
+ build_error_correction_circuit(cfg, circuit, rounds)
+
+ return circuit
+
+
+def build_error_correction_circuit(cfg, circuit, rounds):
# Append the error correction rounds to the circuit
error_correct_manual(circuit, rounds=rounds, cfg=cfg)
-
measure_logical_operators_tesseract(circuit, cfg=cfg)
+
+def build_encoding_circuit(cfg, encoding_mode):
+ # We start with a fresh circuit
+ circuit = init_circuit(qubits=16, ancillas=2)
+ # First, prepare a valid encoded state based on encoding mode
+ if encoding_mode == '9a':
+ encode_manual_fig9a(circuit, cfg=cfg)
+ elif encoding_mode == '9b':
+ encode_manual_fig9b(circuit, cfg=cfg)
+ else:
+ raise ValueError(f"Invalid encoding_mode: {encoding_mode}. Must be '9a' or '9b'")
return circuit
diff --git a/tests/encoding/test_9a_encoding.py b/tests/encoding/test_9a_encoding.py
index 9ef4d81..5894ab4 100644
--- a/tests/encoding/test_9a_encoding.py
+++ b/tests/encoding/test_9a_encoding.py
@@ -8,7 +8,7 @@ def test_9a_encoding_no_noise_perfect_state():
circuit = build_circuit_ec_experiment(rounds=1, cfg=NO_NOISE, encoding_mode='9a')
# Run simulation with 9a encoding mode (appropriate for 9a encoding)
- ec_accept, logical_pass, logical_fail = run_manual_error_correction(
+ ec_accept, logical_pass, average_percentage = run_manual_error_correction(
circuit, shots=100, rounds=1, encoding_mode='9a'
)
@@ -16,7 +16,7 @@ def test_9a_encoding_no_noise_perfect_state():
assert ec_accept == 100, f"Expected all shots accepted, got {ec_accept}"
# All shots should pass logical check (only Z3, Z5 are checked)
assert logical_pass == 100, f"Expected all shots to pass logical check, got {logical_pass}"
- assert logical_fail == 0, f"Expected no logical failures, got {logical_fail}"
+ assert average_percentage == 1.0, f"Expected 100% average success rate, got {average_percentage:.2%}"
def test_9a_encoding_no_error_correction():
"""Test that 9a encoding works even without error correction rounds."""
@@ -24,7 +24,7 @@ def test_9a_encoding_no_error_correction():
circuit = build_circuit_ec_experiment(rounds=0, cfg=NO_NOISE, encoding_mode='9a')
# Run simulation with 9a encoding mode
- ec_accept, logical_pass, logical_fail = run_manual_error_correction(
+ ec_accept, logical_pass, average_percentage = run_manual_error_correction(
circuit, shots=100, rounds=0, encoding_mode='9a'
)
@@ -32,7 +32,7 @@ def test_9a_encoding_no_error_correction():
assert ec_accept == 100, f"Expected all shots accepted, got {ec_accept}"
# All shots should pass logical check
assert logical_pass == 100, f"Expected all shots to pass logical check, got {logical_pass}"
- assert logical_fail == 0, f"Expected no logical failures, got {logical_fail}"
+ assert average_percentage == 1.0, f"Expected 100% average success rate, got {average_percentage:.2%}"
def test_9a_encoding_without_pauli_correction():
"""Test that 9a encoding works without Pauli frame correction."""
@@ -40,7 +40,7 @@ def test_9a_encoding_without_pauli_correction():
circuit = build_circuit_ec_experiment(rounds=1, cfg=NO_NOISE, encoding_mode='9a')
# Run simulation without Pauli correction
- ec_accept, logical_pass, logical_fail = run_manual_error_correction(
+ ec_accept, logical_pass, average_percentage = run_manual_error_correction(
circuit, shots=100, rounds=1, apply_pauli_frame=False, encoding_mode='9a'
)
@@ -48,7 +48,7 @@ def test_9a_encoding_without_pauli_correction():
assert ec_accept == 100, f"Expected all shots accepted, got {ec_accept}"
# All shots should pass logical check
assert logical_pass == 100, f"Expected all shots to pass logical check, got {logical_pass}"
- assert logical_fail == 0, f"Expected no logical failures, got {logical_fail}"
+ assert average_percentage == 1.0, f"Expected 100% average success rate, got {average_percentage:.2%}"
def test_9a_encoding_multiple_rounds():
"""Test that 9a encoding works with multiple error correction rounds."""
@@ -56,7 +56,7 @@ def test_9a_encoding_multiple_rounds():
circuit = build_circuit_ec_experiment(rounds=3, cfg=NO_NOISE, encoding_mode='9a')
# Run simulation with 9a encoding mode
- ec_accept, logical_pass, logical_fail = run_manual_error_correction(
+ ec_accept, logical_pass, average_percentage = run_manual_error_correction(
circuit, shots=100, rounds=3, encoding_mode='9a'
)
@@ -64,4 +64,4 @@ def test_9a_encoding_multiple_rounds():
assert ec_accept == 100, f"Expected all shots accepted, got {ec_accept}"
# All shots should pass logical check
assert logical_pass == 100, f"Expected all shots to pass logical check, got {logical_pass}"
- assert logical_fail == 0, f"Expected no logical failures, got {logical_fail}"
+ assert average_percentage == 1.0, f"Expected 100% average success rate, got {average_percentage:.2%}"
diff --git a/tests/noise/test_rejects_some_noise.py b/tests/noise/test_rejects_some_noise.py
index 7eae491..9b83fa2 100644
--- a/tests/noise/test_rejects_some_noise.py
+++ b/tests/noise/test_rejects_some_noise.py
@@ -8,7 +8,7 @@ def test_rejects_some_noise():
it's virtually guaranteed to have rejections.
"""
noise_config = NoiseCfg(ec_active=True, ec_rate_1q=0.01)
- ec_accept, logical_pass, logical_fail = run_simulation_ec_experiment(
+ ec_accept, logical_pass, average_percentage = run_simulation_ec_experiment(
rounds=2,
shots=1000,
cfg=noise_config,
diff --git a/tests/test_correction_rules.py b/tests/test_correction_rules.py
new file mode 100644
index 0000000..e39a926
--- /dev/null
+++ b/tests/test_correction_rules.py
@@ -0,0 +1,413 @@
+import pytest
+import numpy as np
+from tesseract_sim.error_correction.correction_rules import (
+ correct_column_Z,
+ correct_column_X,
+ correct_row_Z,
+ correct_row_X,
+)
+
+
+class TestUnflaggedTwoErrorRejection:
+ """Test that all correction functions reject when flag=-1 and sum(meas)==2"""
+
+ @pytest.mark.parametrize("meas_pattern", [
+ [1, 1, 0, 0],
+ [0, 0, 1, 1],
+ [1, 0, 1, 0],
+ [0, 1, 0, 1],
+ [1, 0, 0, 1],
+ [0, 1, 1, 0]
+ ])
+ def test_correct_column_Z_unflagged_two_errors_reject(self, meas_pattern):
+ """Test correct_column_Z rejects unflagged two-error patterns"""
+ frameZ = np.zeros(16, dtype=np.uint8)
+ result = correct_column_Z(-1, meas_pattern, frameZ)
+ assert result == "reject"
+ # Frame should remain unchanged
+ assert np.array_equal(frameZ, np.zeros(16, dtype=np.uint8))
+
+ @pytest.mark.parametrize("meas_pattern", [
+ [1, 1, 0, 0],
+ [0, 0, 1, 1],
+ [1, 0, 1, 0],
+ [0, 1, 0, 1],
+ [1, 0, 0, 1],
+ [0, 1, 1, 0]
+ ])
+ def test_correct_column_X_unflagged_two_errors_reject(self, meas_pattern):
+ """Test correct_column_X rejects unflagged two-error patterns"""
+ frameX = np.zeros(16, dtype=np.uint8)
+ result = correct_column_X(-1, meas_pattern, frameX)
+ assert result == "reject"
+ # Frame should remain unchanged
+ assert np.array_equal(frameX, np.zeros(16, dtype=np.uint8))
+
+ @pytest.mark.parametrize("meas_pattern", [
+ [1, 1, 0, 0],
+ [0, 0, 1, 1],
+ [1, 0, 1, 0],
+ [0, 1, 0, 1],
+ [1, 0, 0, 1],
+ [0, 1, 1, 0]
+ ])
+ def test_correct_row_Z_unflagged_two_errors_reject(self, meas_pattern):
+ """Test correct_row_Z rejects unflagged two-error patterns"""
+ frameZ = np.zeros(16, dtype=np.uint8)
+ result = correct_row_Z(-1, meas_pattern, frameZ)
+ assert result == "reject"
+ # Frame should remain unchanged
+ assert np.array_equal(frameZ, np.zeros(16, dtype=np.uint8))
+
+ @pytest.mark.parametrize("meas_pattern", [
+ [1, 1, 0, 0],
+ [0, 0, 1, 1],
+ [1, 0, 1, 0],
+ [0, 1, 0, 1],
+ [1, 0, 0, 1],
+ [0, 1, 1, 0]
+ ])
+ def test_correct_row_X_unflagged_two_errors_reject(self, meas_pattern):
+ """Test correct_row_X rejects unflagged two-error patterns"""
+ frameX = np.zeros(16, dtype=np.uint8)
+ result = correct_row_X(-1, meas_pattern, frameX)
+ assert result == "reject"
+ # Frame should remain unchanged
+ assert np.array_equal(frameX, np.zeros(16, dtype=np.uint8))
+
+
+class TestUnflaggedSingleErrorFlagging:
+ """Test that single errors (sum==1 or 3) properly set flags when unflagged"""
+
+ @pytest.mark.parametrize("error_pos", [0, 1, 2, 3])
+ def test_correct_column_Z_flags_single_error(self, error_pos):
+ """Test correct_column_Z flags the disagreeing position for single errors"""
+ frameZ = np.zeros(16, dtype=np.uint8)
+ meas_pattern = [0] * 4
+ meas_pattern[error_pos] = 1 # Single disagreeing measurement
+
+ flagX, measX, frameZ_out = correct_column_Z(-1, meas_pattern, frameZ)
+
+ assert flagX == error_pos
+ assert measX == meas_pattern
+ assert np.array_equal(frameZ_out, frameZ) # No frame correction yet
+
+ @pytest.mark.parametrize("error_pos", [0, 1, 2, 3])
+ def test_correct_column_Z_flags_triple_error(self, error_pos):
+ """Test correct_column_Z flags the single non-disagreeing position for triple errors"""
+ frameZ = np.zeros(16, dtype=np.uint8)
+ meas_pattern = [1] * 4
+ meas_pattern[error_pos] = 0 # Single non-disagreeing measurement
+
+ flagX, measX, frameZ_out = correct_column_Z(-1, meas_pattern, frameZ)
+
+ assert flagX == error_pos
+ assert measX == meas_pattern
+ assert np.array_equal(frameZ_out, frameZ) # No frame correction yet
+
+ @pytest.mark.parametrize("error_pos", [0, 1, 2, 3])
+ def test_correct_column_X_flags_single_error(self, error_pos):
+ """Test correct_column_X flags the disagreeing position for single errors"""
+ frameX = np.zeros(16, dtype=np.uint8)
+ meas_pattern = [0] * 4
+ meas_pattern[error_pos] = 1
+
+ flagZ, measZ, frameX_out = correct_column_X(-1, meas_pattern, frameX)
+
+ assert flagZ == error_pos
+ assert measZ == meas_pattern
+ assert np.array_equal(frameX_out, frameX)
+
+ @pytest.mark.parametrize("error_pos", [0, 1, 2, 3])
+ def test_correct_row_Z_flags_single_error(self, error_pos):
+ """Test correct_row_Z flags the disagreeing position for single errors"""
+ frameZ = np.zeros(16, dtype=np.uint8)
+ meas_pattern = [0] * 4
+ meas_pattern[error_pos] = 1
+
+ flagX, measX, frameZ_out = correct_row_Z(-1, meas_pattern, frameZ)
+
+ assert flagX == error_pos
+ assert measX == meas_pattern
+ assert np.array_equal(frameZ_out, frameZ)
+
+ @pytest.mark.parametrize("error_pos", [0, 1, 2, 3])
+ def test_correct_row_X_flags_single_error(self, error_pos):
+ """Test correct_row_X flags the disagreeing position for single errors"""
+ frameX = np.zeros(16, dtype=np.uint8)
+ meas_pattern = [0] * 4
+ meas_pattern[error_pos] = 1
+
+ flagZ, measZ, frameX_out = correct_row_X(-1, meas_pattern, frameX)
+
+ assert flagZ == error_pos
+ assert measZ == meas_pattern
+ assert np.array_equal(frameX_out, frameX)
+
+
+class TestFlaggedSingleErrorCorrection:
+ """Test frame corrections when flag is already set and single error occurs"""
+
+ @pytest.mark.parametrize("flag_row", [0, 1, 2, 3])
+ @pytest.mark.parametrize("error_col", [0, 1, 2, 3])
+ def test_correct_column_Z_flagged_single_correction(self, flag_row, error_col):
+ """Test correct_column_Z applies Z correction when flagged"""
+ frameZ = np.zeros(16, dtype=np.uint8)
+ meas_pattern = [0] * 4
+ meas_pattern[error_col] = 1
+
+ flagX, measX, frameZ_out = correct_column_Z(flag_row, meas_pattern, frameZ)
+
+ assert flagX == -1 # Flag should be cleared
+ assert measX == meas_pattern
+ expected_frame = np.zeros(16, dtype=np.uint8)
+ expected_frame[4 * flag_row + error_col] = 1 # Z correction at flagged position
+ assert np.array_equal(frameZ_out, expected_frame)
+
+ @pytest.mark.parametrize("flag_row", [0, 1, 2, 3])
+ @pytest.mark.parametrize("error_col", [0, 1, 2, 3])
+ def test_correct_column_X_flagged_single_correction(self, flag_row, error_col):
+ """Test correct_column_X applies X correction when flagged"""
+ frameX = np.zeros(16, dtype=np.uint8)
+ meas_pattern = [0] * 4
+ meas_pattern[error_col] = 1
+
+ flagZ, measZ, frameX_out = correct_column_X(flag_row, meas_pattern, frameX)
+
+ assert flagZ == -1 # Flag should be cleared
+ assert measZ == meas_pattern
+ expected_frame = np.zeros(16, dtype=np.uint8)
+ expected_frame[4 * flag_row + error_col] = 1 # X correction at flagged position
+ assert np.array_equal(frameX_out, expected_frame)
+
+ @pytest.mark.parametrize("flag_col", [0, 1, 2, 3])
+ @pytest.mark.parametrize("error_row", [0, 1, 2, 3])
+ def test_correct_row_Z_flagged_single_correction(self, flag_col, error_row):
+ """Test correct_row_Z applies Z correction when flagged"""
+ frameZ = np.zeros(16, dtype=np.uint8)
+ meas_pattern = [0] * 4
+ meas_pattern[error_row] = 1
+
+ flagX, measX, frameZ_out = correct_row_Z(flag_col, meas_pattern, frameZ)
+
+ assert flagX == -1 # Flag should be cleared
+ assert measX == meas_pattern
+ expected_frame = np.zeros(16, dtype=np.uint8)
+ expected_frame[4 * error_row + flag_col] = 1 # Z correction at flagged position
+ assert np.array_equal(frameZ_out, expected_frame)
+
+ @pytest.mark.parametrize("flag_col", [0, 1, 2, 3])
+ @pytest.mark.parametrize("error_row", [0, 1, 2, 3])
+ def test_correct_row_X_flagged_single_correction(self, flag_col, error_row):
+ """Test correct_row_X applies X correction when flagged"""
+ frameX = np.zeros(16, dtype=np.uint8)
+ meas_pattern = [0] * 4
+ meas_pattern[error_row] = 1
+
+ flagZ, measZ, frameX_out = correct_row_X(flag_col, meas_pattern, frameX)
+
+ assert flagZ == -1 # Flag should be cleared
+ assert measZ == meas_pattern
+ expected_frame = np.zeros(16, dtype=np.uint8)
+ expected_frame[4 * error_row + flag_col] = 1 # X correction at flagged position
+ assert np.array_equal(frameX_out, expected_frame)
+
+
+class TestFlaggedSpecialTwoErrorPatterns:
+ """Test handling of special two-error patterns when flagged"""
+
+ @pytest.mark.parametrize("flag_row", [0, 1, 2, 3])
+ @pytest.mark.parametrize("pattern", [[0, 0, 1, 1], [1, 1, 0, 0]])
+ def test_correct_column_Z_flagged_special_patterns(self, flag_row, pattern):
+ """Test correct_column_Z handles special two-error patterns when flagged"""
+ frameZ = np.zeros(16, dtype=np.uint8)
+
+ flagX, measX, frameZ_out = correct_column_Z(flag_row, pattern, frameZ)
+
+ assert flagX == -1 # Flag should be cleared
+ assert measX == pattern
+ expected_frame = np.zeros(16, dtype=np.uint8)
+ # ZZII pattern on flagged row
+ expected_frame[4 * flag_row] = 1
+ expected_frame[4 * flag_row + 1] = 1
+ assert np.array_equal(frameZ_out, expected_frame)
+
+ @pytest.mark.parametrize("flag_row", [0, 1, 2, 3])
+ @pytest.mark.parametrize("pattern", [[0, 0, 1, 1], [1, 1, 0, 0]])
+ def test_correct_column_X_flagged_special_patterns(self, flag_row, pattern):
+ """Test correct_column_X handles special two-error patterns when flagged"""
+ frameX = np.zeros(16, dtype=np.uint8)
+
+ flagZ, measZ, frameX_out = correct_column_X(flag_row, pattern, frameX)
+
+ assert flagZ == -1 # Flag should be cleared
+ assert measZ == pattern
+ expected_frame = np.zeros(16, dtype=np.uint8)
+ # XXII pattern on flagged row
+ expected_frame[4 * flag_row] = 1
+ expected_frame[4 * flag_row + 1] = 1
+ assert np.array_equal(frameX_out, expected_frame)
+
+ @pytest.mark.parametrize("flag_col", [0, 1, 2, 3])
+ @pytest.mark.parametrize("pattern", [[0, 0, 1, 1], [1, 1, 0, 0]])
+ def test_correct_row_Z_flagged_special_patterns(self, flag_col, pattern):
+ """Test correct_row_Z handles special two-error patterns when flagged"""
+ frameZ = np.zeros(16, dtype=np.uint8)
+
+ flagX, measX, frameZ_out = correct_row_Z(flag_col, pattern, frameZ)
+
+ assert flagX == -1 # Flag should be cleared
+ assert measX == pattern
+ expected_frame = np.zeros(16, dtype=np.uint8)
+ # ZZII pattern on flagged column
+ expected_frame[4 * 0 + flag_col] = 1 # Row 0
+ expected_frame[4 * 1 + flag_col] = 1 # Row 1
+ assert np.array_equal(frameZ_out, expected_frame)
+
+ @pytest.mark.parametrize("flag_col", [0, 1, 2, 3])
+ @pytest.mark.parametrize("pattern", [[0, 0, 1, 1], [1, 1, 0, 0]])
+ def test_correct_row_X_flagged_special_patterns(self, flag_col, pattern):
+ """Test correct_row_X handles special two-error patterns when flagged"""
+ frameX = np.zeros(16, dtype=np.uint8)
+
+ flagZ, measZ, frameX_out = correct_row_X(flag_col, pattern, frameX)
+
+ assert flagZ == -1 # Flag should be cleared
+ assert measZ == pattern
+ expected_frame = np.zeros(16, dtype=np.uint8)
+ # XXII pattern on flagged column
+ expected_frame[4 * 0 + flag_col] = 1 # Row 0
+ expected_frame[4 * 1 + flag_col] = 1 # Row 1
+ assert np.array_equal(frameX_out, expected_frame)
+
+
+class TestFlaggedInvalidTwoErrorRejection:
+ """Test rejection of invalid two-error patterns when flagged"""
+
+ @pytest.mark.parametrize("flag_row", [0, 1, 2, 3])
+ @pytest.mark.parametrize("pattern", [
+ [1, 0, 1, 0],
+ [0, 1, 0, 1],
+ [1, 0, 0, 1],
+ [0, 1, 1, 0]
+ ])
+ def test_correct_column_Z_flagged_invalid_patterns_reject(self, flag_row, pattern):
+ """Test correct_column_Z rejects invalid two-error patterns when flagged"""
+ frameZ = np.zeros(16, dtype=np.uint8)
+ result = correct_column_Z(flag_row, pattern, frameZ)
+ assert result == "reject"
+ assert np.array_equal(frameZ, np.zeros(16, dtype=np.uint8)) # Frame should remain unchanged
+
+ @pytest.mark.parametrize("flag_row", [0, 1, 2, 3])
+ @pytest.mark.parametrize("pattern", [
+ [1, 0, 1, 0],
+ [0, 1, 0, 1],
+ [1, 0, 0, 1],
+ [0, 1, 1, 0]
+ ])
+ def test_correct_column_X_flagged_invalid_patterns_reject(self, flag_row, pattern):
+ """Test correct_column_X rejects invalid two-error patterns when flagged"""
+ frameX = np.zeros(16, dtype=np.uint8)
+ result = correct_column_X(flag_row, pattern, frameX)
+ assert result == "reject"
+ assert np.array_equal(frameX, np.zeros(16, dtype=np.uint8)) # Frame should remain unchanged
+
+ @pytest.mark.parametrize("flag_col", [0, 1, 2, 3])
+ @pytest.mark.parametrize("pattern", [
+ [1, 0, 1, 0],
+ [0, 1, 0, 1],
+ [1, 0, 0, 1],
+ [0, 1, 1, 0]
+ ])
+ def test_correct_row_Z_flagged_invalid_patterns_reject(self, flag_col, pattern):
+ """Test correct_row_Z rejects invalid two-error patterns when flagged"""
+ frameZ = np.zeros(16, dtype=np.uint8)
+ result = correct_row_Z(flag_col, pattern, frameZ)
+ assert result == "reject"
+ assert np.array_equal(frameZ, np.zeros(16, dtype=np.uint8)) # Frame should remain unchanged
+
+ @pytest.mark.parametrize("flag_col", [0, 1, 2, 3])
+ @pytest.mark.parametrize("pattern", [
+ [1, 0, 1, 0],
+ [0, 1, 0, 1],
+ [1, 0, 0, 1],
+ [0, 1, 1, 0]
+ ])
+ def test_correct_row_X_flagged_invalid_patterns_reject(self, flag_col, pattern):
+ """Test correct_row_X rejects invalid two-error patterns when flagged"""
+ frameX = np.zeros(16, dtype=np.uint8)
+ result = correct_row_X(flag_col, pattern, frameX)
+ assert result == "reject"
+ assert np.array_equal(frameX, np.zeros(16, dtype=np.uint8)) # Frame should remain unchanged
+
+
+class TestEdgeCases:
+ """Test edge cases and boundary conditions"""
+
+ def test_all_zeros_no_correction_needed(self):
+ """Test that all-zero measurements don't trigger corrections"""
+ frameZ = np.zeros(16, dtype=np.uint8)
+ frameX = np.zeros(16, dtype=np.uint8)
+ pattern = [0, 0, 0, 0]
+
+ # All functions should return unchanged state for all-zero pattern
+ result_col_z = correct_column_Z(-1, pattern, frameZ.copy())
+ result_col_x = correct_column_X(-1, pattern, frameX.copy())
+ result_row_z = correct_row_Z(-1, pattern, frameZ.copy())
+ result_row_x = correct_row_X(-1, pattern, frameX.copy())
+
+ # Should return tuple with unchanged flag and frame for no errors
+ assert result_col_z[0] == -1 and result_col_z[1] == pattern and np.array_equal(result_col_z[2], frameZ)
+ assert result_col_x[0] == -1 and result_col_x[1] == pattern and np.array_equal(result_col_x[2], frameX)
+ assert result_row_z[0] == -1 and result_row_z[1] == pattern and np.array_equal(result_row_z[2], frameZ)
+ assert result_row_x[0] == -1 and result_row_x[1] == pattern and np.array_equal(result_row_x[2], frameX)
+
+ def test_all_ones_no_correction_needed(self):
+ """Test that all-one measurements don't trigger corrections"""
+ frameZ = np.zeros(16, dtype=np.uint8)
+ frameX = np.zeros(16, dtype=np.uint8)
+ pattern = [1, 1, 1, 1]
+
+ # All functions should return unchanged state for all-one pattern
+ result_col_z = correct_column_Z(-1, pattern, frameZ.copy())
+ result_col_x = correct_column_X(-1, pattern, frameX.copy())
+ result_row_z = correct_row_Z(-1, pattern, frameZ.copy())
+ result_row_x = correct_row_X(-1, pattern, frameX.copy())
+
+ # Should return tuple with unchanged flag and frame for no disagreement
+ assert result_col_z[0] == -1 and result_col_z[1] == pattern and np.array_equal(result_col_z[2], frameZ)
+ assert result_col_x[0] == -1 and result_col_x[1] == pattern and np.array_equal(result_col_x[2], frameX)
+ assert result_row_z[0] == -1 and result_row_z[1] == pattern and np.array_equal(result_row_z[2], frameZ)
+ assert result_row_x[0] == -1 and result_row_x[1] == pattern and np.array_equal(result_row_x[2], frameX)
+
+ @pytest.mark.parametrize("flag_val", [0, 1, 2, 3])
+ def test_flagged_all_zeros_no_correction(self, flag_val):
+ """Test flagged state with all-zero measurements clears flag"""
+ frameZ = np.zeros(16, dtype=np.uint8)
+ frameX = np.zeros(16, dtype=np.uint8)
+ pattern = [0, 0, 0, 0]
+
+ result_col_z = correct_column_Z(flag_val, pattern, frameZ.copy())
+ result_col_x = correct_column_X(flag_val, pattern, frameX.copy())
+ result_row_z = correct_row_Z(flag_val, pattern, frameZ.copy())
+ result_row_x = correct_row_X(flag_val, pattern, frameX.copy())
+
+ # Flag should be cleared, frame unchanged
+ assert result_col_z[0] == -1 and result_col_z[1] == pattern and np.array_equal(result_col_z[2], frameZ)
+ assert result_col_x[0] == -1 and result_col_x[1] == pattern and np.array_equal(result_col_x[2], frameX)
+ assert result_row_z[0] == -1 and result_row_z[1] == pattern and np.array_equal(result_row_z[2], frameZ)
+ assert result_row_x[0] == -1 and result_row_x[1] == pattern and np.array_equal(result_row_x[2], frameX)
+
+ def test_frame_accumulation(self):
+ """Test that corrections accumulate in the frame"""
+ frameZ = np.zeros(16, dtype=np.uint8)
+        frameZ[5] = 2  # Pre-existing (non-binary) value, so accumulation rather than overwrite is observable
+
+ pattern = [0, 1, 0, 0] # Single error at position 1
+ flagX, measX, frameZ_out = correct_column_Z(1, pattern, frameZ) # Flag row 1
+
+ expected_frame = np.zeros(16, dtype=np.uint8)
+ expected_frame[5] = 3 # 2 + 1 = 3 (accumulated correction at position 4*1+1=5)
+
+ assert flagX == -1
+ assert np.array_equal(frameZ_out, expected_frame)
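The tests above pin down a contract for the `correct_column_Z`-family decoders: unflagged two-error patterns reject, single (or triple) events only set a flag, and a flagged round either applies a frame correction or rejects. A minimal sketch of that contract, assuming the `qubit = 4*row + col` indexing the tests use (`correct_column_Z_sketch` is hypothetical, not the project's implementation):

```python
import numpy as np

def correct_column_Z_sketch(flag, meas, frame):
    """Flag-based column decoder sketch for a 4x4 grid (qubit = 4*row + col)."""
    s = sum(meas)
    if flag == -1:                       # no flag from a previous step
        if s == 2:                       # two stabilizers fire: uncorrectable
            return "reject"
        if s in (1, 3):                  # single/triple event: set a flag only
            return meas.index(1 if s == 1 else 0), meas, frame
        return -1, meas, frame           # s == 0 or 4: nothing to do
    # a flag was set in a previous step
    if s == 1:                           # Z correction at (flagged row, firing column)
        out = frame.copy()
        out[4 * flag + meas.index(1)] += 1
        return -1, meas, out
    if list(meas) in ([0, 0, 1, 1], [1, 1, 0, 0]):  # special pattern: ZZII on flagged row
        out = frame.copy()
        out[4 * flag] += 1
        out[4 * flag + 1] += 1
        return -1, meas, out
    if s == 2:                           # any other two-error pattern
        return "reject"
    return -1, meas, frame               # clear the flag; no correction
```

The row variants follow the same shape with the index transposed (`4 * firing_row + flag_col`).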
diff --git a/tests/test_ec_experiment_no_noise.py b/tests/test_ec_experiment_no_noise.py
index af058c5..a0c2049 100644
--- a/tests/test_ec_experiment_no_noise.py
+++ b/tests/test_ec_experiment_no_noise.py
@@ -8,11 +8,11 @@ def test_ec_experiment_no_noise_accepts_all():
shots = 100
rounds = 3
- ec_accept, logical_pass, logical_fail = run_simulation_ec_experiment(rounds=rounds, shots=shots, cfg=NO_NOISE, encoding_mode='9a')
+ ec_accept, logical_pass, average_percentage = run_simulation_ec_experiment(rounds=rounds, shots=shots, cfg=NO_NOISE, encoding_mode='9a')
assert ec_accept == shots, "All shots should pass error correction with no noise"
assert logical_pass == shots, "All shots should pass logical verification with no noise"
- assert logical_fail == 0, "No shots should fail logical verification with no noise"
+ assert average_percentage == 1.0, "Average percentage should be 100% with no noise"
# TODO: Enable and fix this test to ensure it works correctly. currently the 9b encoding/measurement is incorrect and we don't get correct results even without noise.
def disabled_test_ec_experiment_no_noise_encoding_9b():
@@ -22,8 +22,8 @@ def disabled_test_ec_experiment_no_noise_encoding_9b():
shots = 100
rounds = 3
- ec_accept, logical_pass, logical_fail = run_simulation_ec_experiment(rounds=rounds, shots=shots, cfg=NO_NOISE, encoding_mode='9b')
+ ec_accept, logical_pass, average_percentage = run_simulation_ec_experiment(rounds=rounds, shots=shots, cfg=NO_NOISE, encoding_mode='9b')
assert ec_accept == shots, "All shots should pass error correction with no noise"
assert logical_pass == shots, "All shots should pass logical verification with no noise"
- assert logical_fail == 0, "No shots should fail logical verification with no noise"
\ No newline at end of file
+ assert average_percentage == 1.0, "Average percentage should be 100% with no noise"
\ No newline at end of file
diff --git a/tests/test_ec_experiment_random_noise.py b/tests/test_ec_experiment_random_noise.py
index 4f6d44f..6aeb1ee 100644
--- a/tests/test_ec_experiment_random_noise.py
+++ b/tests/test_ec_experiment_random_noise.py
@@ -15,9 +15,37 @@ def test_ec_experiment_noise_rejects_some():
ec_rate_2q=0.02 # 2% error rate on 2-qubit gates
)
- ec_accept, logical_pass, logical_fail = run_simulation_ec_experiment(rounds=rounds, shots=shots, cfg=cfg, encoding_mode='9a')
+ ec_accept, logical_pass, average_percentage = run_simulation_ec_experiment(rounds=rounds, shots=shots, cfg=cfg, encoding_mode='9a')
# With noise, we expect some shots to fail, but not all
assert 0 < logical_pass < shots, "Some shots should pass with noise, but not all"
- assert logical_fail > 0, "Some shots should fail logical verification with noise"
- assert ec_accept > logical_pass, "Some shots that pass EC should fail logical verification"
\ No newline at end of file
+ assert average_percentage < 1.0, "Average percentage should be less than 100% with noise"
+ assert ec_accept > logical_pass, "Some shots that pass EC should fail logical verification"
+
+
+# TODO: this test should pass. Running many rounds with noise should converge to the completely mixed state (verify), which would yield fidelity = 0.5, i.e. 50% logical similarity.
+def disabled_test_ec_experiment_noise_converges_to_half():
+ """
+ Test that with many rounds and shots, the average percentage converges to around 0.5.
+    Once the state is fully mixed, each qubit measurement agrees with the target roughly half the time, so the per-shot similarity averages to about 0.5.
+ """
+ shots = 100
+ rounds = 10
+
+ # Configure moderate noise during error correction
+ cfg = NoiseCfg(
+ ec_active=True,
+ ec_rate_1q=0.002, # 0.2% error rate on 1-qubit gates
+ ec_rate_2q=0.004 # 0.4% error rate on 2-qubit gates
+ )
+
+ ec_accept, logical_pass, average_percentage = run_simulation_ec_experiment(
+ rounds=rounds, shots=shots, cfg=cfg, encoding_mode='9a'
+ )
+
+ print("average_percentage", average_percentage)
+ # With many shots and rounds, average should converge to around 0.5
+ # Allow for statistical variation with a reasonable tolerance
+ assert 0.4 <= average_percentage <= 0.6, f"Average percentage {average_percentage} should be close to 0.5 with many shots"
+ assert logical_pass > 0, "Some shots should pass logical verification"
+ assert logical_pass < shots, "Not all shots should pass with noise"
\ No newline at end of file
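The 0.5 target in the disabled test above rests on the fact that the completely mixed single-qubit state has overlap 1/2 with any pure state. A quick numerical check of that fact (standalone, not project code):

```python
import numpy as np

# Completely mixed single-qubit state rho = I/2
rho_mixed = np.eye(2) / 2

# Overlap with an arbitrary pure state |psi><psi| is Tr(rho |psi><psi|) = 1/2,
# since Tr(P) = 1 for any rank-1 projector P. Using |+> as the example:
psi = np.array([[1.0], [1.0]]) / np.sqrt(2)
overlap = float(np.trace(rho_mixed @ (psi @ psi.conj().T)))
assert abs(overlap - 0.5) < 1e-12
```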
diff --git a/tests/test_no_noise_accepts_all.py b/tests/test_no_noise_accepts_all.py
index a879f74..075eab3 100644
--- a/tests/test_no_noise_accepts_all.py
+++ b/tests/test_no_noise_accepts_all.py
@@ -5,7 +5,7 @@ def test_no_noise_accepts_all():
"""
Given no noise, the simulation should accept all shots.
"""
- ec_accept, logical_pass, logical_fail = run_simulation_ec_experiment(
+ ec_accept, logical_pass, average_percentage = run_simulation_ec_experiment(
rounds=1,
shots=100,
cfg=NO_NOISE,
@@ -15,4 +15,4 @@ def test_no_noise_accepts_all():
assert ec_accept == 100
# With no noise, all accepted shots should pass logical checks
assert logical_pass == 100
- assert logical_fail == 0
\ No newline at end of file
+ assert average_percentage == 1.0 # 100% success rate
\ No newline at end of file
diff --git a/tests/test_pauli_frame_correction.py b/tests/test_pauli_frame_correction.py
index 26b5cd2..2d34a74 100644
--- a/tests/test_pauli_frame_correction.py
+++ b/tests/test_pauli_frame_correction.py
@@ -1,5 +1,5 @@
import pytest
-from tesseract_sim.run import build_circuit_ec_experiment
+from tesseract_sim.run import build_encoding_circuit, build_error_correction_circuit
from tesseract_sim.error_correction.decoder_manual import run_manual_error_correction
from tesseract_sim.noise.noise_cfg import NO_NOISE
@@ -25,83 +25,164 @@ def test_single_pauli_error_correction(qubit_index, pauli_gate):
2. The logical state is recovered after Pauli frame correction (logical_pass = 100%)
3. No logical failures occur (logical_fail = 0%)
"""
- # Build circuit with no noise during encoding/EC
- circuit = build_circuit_ec_experiment(rounds=1, cfg=NO_NOISE, encoding_mode='9a')
+ # Build encoding circuit with no noise
+ circuit = build_encoding_circuit(NO_NOISE, '9a')
# Inject the specified Pauli error on the specified qubit
circuit.append(pauli_gate, [qubit_index])
+ # Build error correction circuit
+ build_error_correction_circuit(NO_NOISE, circuit, rounds=3)
+
# Run simulation with 9a encoding mode (appropriate for |++0000>)
- ec_accept, logical_pass, logical_fail = run_manual_error_correction(
- circuit, shots=100, rounds=1, encoding_mode='9a'
+ ec_accept, logical_pass, average_percentage = run_manual_error_correction(
+ circuit, shots=5, rounds=3, encoding_mode='9a'
)
# All shots should be accepted since single errors are correctable
- assert ec_accept == 100, (
+ assert ec_accept == 5, (
f"Expected all shots accepted for {pauli_gate} error on qubit {qubit_index}, "
- f"got {ec_accept}/100 accepted"
+ f"got {ec_accept}/5 accepted"
)
# All shots should pass logical check after Pauli frame correction
- assert logical_pass == 100, (
+ assert logical_pass == 5, (
f"Expected all shots to pass logical check for {pauli_gate} error on qubit {qubit_index}, "
- f"got {logical_pass}/100 passed"
+ f"got {logical_pass}/5 passed"
)
- # No logical failures should occur
- assert logical_fail == 0, (
- f"Expected no logical failures for {pauli_gate} error on qubit {qubit_index}, "
- f"got {logical_fail}/100 failed"
+ # Average percentage should be 100% for perfect correction
+ assert average_percentage == 1.0, (
+ f"Expected 100% average success rate for {pauli_gate} error on qubit {qubit_index}, "
+ f"got {average_percentage:.2%}"
)
def test_no_noise_perfect_state():
"""Test that with no noise at all, we get perfect acceptance and logical pass rates."""
- # Build circuit with no noise and no injected errors
- circuit = build_circuit_ec_experiment(rounds=1, cfg=NO_NOISE, encoding_mode='9a')
+ # Build encoding circuit with no noise
+ circuit = build_encoding_circuit(NO_NOISE, '9a')
+
+ # Build error correction circuit (no injected errors)
+ build_error_correction_circuit(NO_NOISE, circuit, rounds=1)
# Run simulation with 9a encoding mode (appropriate for |++0000>)
- ec_accept, logical_pass, logical_fail = run_manual_error_correction(circuit, shots=100, rounds=1, encoding_mode='9a')
+ ec_accept, logical_pass, average_percentage = run_manual_error_correction(circuit, shots=100, rounds=1, encoding_mode='9a')
# All shots should be accepted
assert ec_accept == 100, f"Expected all shots accepted, got {ec_accept}"
# All shots should pass logical check
assert logical_pass == 100, f"Expected all shots to pass logical check, got {logical_pass}"
- assert logical_fail == 0, f"Expected no logical failures, got {logical_fail}"
+ assert average_percentage == 1.0, f"Expected 100% average success rate, got {average_percentage:.2%}"
# Legacy individual tests for backward compatibility and specific debugging
def test_single_x_error_correction():
"""Test that a single X error on a Z-basis measurement qubit gets corrected by the Pauli frame."""
- # Build circuit with no noise during encoding/EC
- circuit = build_circuit_ec_experiment(rounds=1, cfg=NO_NOISE, encoding_mode='9a')
+ # Build encoding circuit with no noise
+ circuit = build_encoding_circuit(NO_NOISE, '9a')
# Inject a single X error on qubit 8 (first qubit measured in Z basis)
circuit.append("X", [8])
+ # Build error correction circuit
+ build_error_correction_circuit(NO_NOISE, circuit, rounds=1)
+
# Run simulation with 9a encoding mode (appropriate for |++0000>)
- ec_accept, logical_pass, logical_fail = run_manual_error_correction(circuit, shots=100, rounds=1, encoding_mode='9a')
+ ec_accept, logical_pass, average_percentage = run_manual_error_correction(circuit, shots=100, rounds=1, encoding_mode='9a')
# All shots should be accepted since we only have a correctable error
assert ec_accept == 100, f"Expected all shots accepted, got {ec_accept}"
# All shots should pass logical check after Pauli frame correction
assert logical_pass == 100, f"Expected all shots to pass logical check, got {logical_pass}"
- assert logical_fail == 0, f"Expected no logical failures, got {logical_fail}"
+ assert average_percentage == 1.0, f"Expected 100% average success rate, got {average_percentage:.2%}"
def test_single_z_error_correction():
"""Test that a single Z error on an X-basis measurement qubit gets corrected by the Pauli frame."""
- # Build circuit with no noise during encoding/EC
- circuit = build_circuit_ec_experiment(rounds=1, cfg=NO_NOISE, encoding_mode='9a')
+ # Build encoding circuit with no noise
+ circuit = build_encoding_circuit(NO_NOISE, '9a')
# Inject a single Z error on qubit 0 (first qubit measured in X basis)
circuit.append("Z", [0])
+ # Build error correction circuit
+ build_error_correction_circuit(NO_NOISE, circuit, rounds=1)
+
# Run simulation with 9a encoding mode (appropriate for |++0000>)
- ec_accept, logical_pass, logical_fail = run_manual_error_correction(circuit, shots=100, rounds=1, encoding_mode='9a')
+ ec_accept, logical_pass, average_percentage = run_manual_error_correction(circuit, shots=100, rounds=1, encoding_mode='9a')
# All shots should be accepted since we only have a correctable error
assert ec_accept == 100, f"Expected all shots accepted, got {ec_accept}"
# All shots should pass logical check after Pauli frame correction
assert logical_pass == 100, f"Expected all shots to pass logical check, got {logical_pass}"
- assert logical_fail == 0, f"Expected no logical failures, got {logical_fail}"
\ No newline at end of file
+ assert average_percentage == 1.0, f"Expected 100% average success rate, got {average_percentage:.2%}"
+
+
+# Qubit pairs for testing double error rejection
+# Two-row pairs: same column, different rows (causes row stabilizers to fire)
+TWO_ROW_PAIRS = [(0, 4), (5, 9), (10, 14), (3, 7)]
+# Two-column pairs: same row, different columns (causes column stabilizers to fire)
+TWO_COL_PAIRS = [(0, 1), (5, 6), (10, 11), (12, 13)]
+
+
+@pytest.mark.parametrize("q1,q2", TWO_ROW_PAIRS)
+@pytest.mark.parametrize("pauli_gate", ["Z", "X"])
+def test_two_row_errors_rejection(q1, q2, pauli_gate):
+ """
+ Test that two Pauli errors in the same column but different rows get rejected.
+
+ This causes two different row stabilizers to fire, which should be rejected
+ when no flag was previously set (sum==2 with flag=-1).
+ """
+ # Build encoding circuit with no noise
+ circuit = build_encoding_circuit(NO_NOISE, '9a')
+
+ # Inject two identical Pauli errors in same column, different rows
+ circuit.append(pauli_gate, [q1])
+ circuit.append(pauli_gate, [q2])
+
+ # Build error correction circuit
+ build_error_correction_circuit(NO_NOISE, circuit, rounds=1)
+
+ # Run simulation with 9a encoding mode
+ ec_accept, logical_pass, average_percentage = run_manual_error_correction(
+ circuit, shots=5, rounds=1, encoding_mode='9a'
+ )
+
+ # Should be rejected since two row stabilizers fire without prior flag
+ assert ec_accept == 0, (
+ f"Expected rejection for {pauli_gate} errors on qubits {q1},{q2} "
+ f"(same column, different rows), got {ec_accept}/5 accepted"
+ )
+
+
+@pytest.mark.parametrize("q1,q2", TWO_COL_PAIRS)
+@pytest.mark.parametrize("pauli_gate", ["Z", "X"])
+def test_two_column_errors_rejection(q1, q2, pauli_gate):
+ """
+ Test that two Pauli errors in the same row but different columns get rejected.
+
+ This causes two different column stabilizers to fire, which should be rejected
+ when no flag was previously set (sum==2 with flag=-1).
+ """
+ # Build encoding circuit with no noise
+ circuit = build_encoding_circuit(NO_NOISE, '9a')
+
+ # Inject two identical Pauli errors in same row, different columns
+ circuit.append(pauli_gate, [q1])
+ circuit.append(pauli_gate, [q2])
+
+ # Build error correction circuit
+ build_error_correction_circuit(NO_NOISE, circuit, rounds=1)
+
+ # Run simulation with 9a encoding mode
+ ec_accept, logical_pass, average_percentage = run_manual_error_correction(
+ circuit, shots=5, rounds=1, encoding_mode='9a'
+ )
+
+ # Should be rejected since two column stabilizers fire without prior flag
+ assert ec_accept == 0, (
+ f"Expected rejection for {pauli_gate} errors on qubits {q1},{q2} "
+ f"(same row, different columns), got {ec_accept}/5 accepted"
+ )
diff --git a/tests/test_plot_acceptance_rates.py b/tests/test_plot_acceptance_rates.py
new file mode 100644
index 0000000..de8bee6
--- /dev/null
+++ b/tests/test_plot_acceptance_rates.py
@@ -0,0 +1,78 @@
+import pytest
+from tesseract_sim.plotting.plot_acceptance_rates import compute_logical_success_rate, compute_average_fidelity
+
+
+class TestPlottingHelpers:
+ """Test the helper functions for extracting data from raw results."""
+
+ def test_compute_logical_success_rate_basic(self):
+ """Test logical success rate computation with basic data."""
+ raw_results = {
+ 0.1: [(10, 5, 0.6), (20, 20, 1.0)],
+ 0.2: [(8, 4, 0.5), (16, 12, 0.75)]
+ }
+
+ result = compute_logical_success_rate(raw_results)
+
+ # For noise 0.1: 5/10=0.5, 20/20=1.0
+ assert result[0.1] == [0.5, 1.0]
+ # For noise 0.2: 4/8=0.5, 12/16=0.75
+ assert result[0.2] == [0.5, 0.75]
+
+ def test_compute_logical_success_rate_zero_accepted(self):
+ """Test logical success rate when no experiments were accepted."""
+ raw_results = {
+ 0.5: [(0, 0, 0.0), (10, 5, 0.8)]
+ }
+
+ result = compute_logical_success_rate(raw_results)
+
+ # When accepted=0, should return 0.0, not division by zero
+ assert result[0.5] == [0.0, 0.5]
+
+ def test_compute_average_fidelity_basic(self):
+ """Test average fidelity extraction with basic data."""
+ raw_results = {
+ 0.1: [(10, 5, 0.6), (20, 20, 1.0)],
+ 0.2: [(8, 4, 0.5), (16, 12, 0.75)]
+ }
+
+ result = compute_average_fidelity(raw_results)
+
+ # Should extract t[2] values directly
+ assert result[0.1] == [0.6, 1.0]
+ assert result[0.2] == [0.5, 0.75]
+
+ def test_compute_average_fidelity_edge_values(self):
+ """Test average fidelity with edge case values."""
+ raw_results = {
+ 0.0: [(100, 100, 1.0), (50, 25, 0.0)],
+ 1.0: [(1, 0, 0.123), (0, 0, 0.456)]
+ }
+
+ result = compute_average_fidelity(raw_results)
+
+ assert result[0.0] == [1.0, 0.0]
+ assert result[1.0] == [0.123, 0.456]
+
+ def test_empty_results(self):
+ """Test both functions with empty input."""
+ raw_results = {}
+
+ logical_result = compute_logical_success_rate(raw_results)
+ fidelity_result = compute_average_fidelity(raw_results)
+
+ assert logical_result == {}
+ assert fidelity_result == {}
+
+ def test_single_noise_level(self):
+ """Test both functions with single noise level and single data point."""
+ raw_results = {
+ 0.05: [(100, 80, 0.95)]
+ }
+
+ logical_result = compute_logical_success_rate(raw_results)
+ fidelity_result = compute_average_fidelity(raw_results)
+
+ assert logical_result[0.05] == [0.8] # 80/100
+ assert fidelity_result[0.05] == [0.95]
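The behavior these tests pin down can be summarized in a short sketch of the two helpers (hypothetical reimplementations; the real ones live in `tesseract_sim.plotting.plot_acceptance_rates`). `raw_results` maps a noise level to a list of `(accepted, logical_pass, avg_fidelity)` tuples:

```python
def compute_logical_success_rate(raw_results):
    # P(logical success | accepted) per data point, guarding accepted == 0
    return {
        noise: [(passed / accepted) if accepted else 0.0
                for accepted, passed, _ in runs]
        for noise, runs in raw_results.items()
    }

def compute_average_fidelity(raw_results):
    # The average fidelity is stored as the third tuple element
    return {noise: [fid for _, _, fid in runs]
            for noise, runs in raw_results.items()}
```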