## Investigation Results

### Overlap Analysis

**`get_model_uncertainty` (simulation_utils.py):**
- Computes input power spectrum via `scipy.signal.welch(u)`
- Computes output power spectrum via `scipy.signal.welch(y)`
- Computes cross-spectral density via `scipy.signal.csd(u, y)`
- Estimates frequency response: `G = Pxy / Pxx`
- Applies Hamming window smoothing (bidirectional convolution)
- Computes SNR as `Pyy / Pyy_err`
- Computes 95% and 68% confidence intervals
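The spectral steps above can be sketched roughly as follows (a minimal illustration of the Welch/CSD route, not the actual `get_model_uncertainty` code; the function and parameter names here are assumptions):

```python
import numpy as np
from scipy import signal

def estimate_frf(u, y, fs=1.0, nperseg=256):
    """Illustrative sketch: Welch spectra, cross-spectrum, G = Pxy / Pxx."""
    f, Puu = signal.welch(u, fs=fs, nperseg=nperseg)   # input power spectrum
    _, Pyy = signal.welch(y, fs=fs, nperseg=nperseg)   # output power spectrum
    _, Puy = signal.csd(u, y, fs=fs, nperseg=nperseg)  # cross-spectral density
    G = Puy / Puu                                      # frequency response estimate
    # Bidirectional Hamming smoothing of the magnitude: a forward pass, then a
    # reverse pass so the two convolutions' phase lags cancel
    w = np.hamming(5)
    w /= w.sum()
    mag = np.convolve(np.abs(G), w, mode="same")
    mag = np.convolve(mag[::-1], w, mode="same")[::-1]
    return f, G, mag
```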

**`FrequencyDomainIdentification` (frequency_domain.py):**
- Computes correlations manually via FFT (autocorr, cross-corr)
- Computes power spectra from correlations: `Phi_u`, `Phi_y`
- Computes cross-spectrum from correlations: `Phi_uy`
- Estimates frequency response: `G = Phi_uy / Phi_u`
- Applies Hamming window smoothing (convolution)
- Computes coherence: `γ² = |Phi_uy|² / (Phi_u * Phi_y)`
- Computes quality metrics (mean, min, max, fraction_reliable)

### Common Operations
1. ✅ Power spectrum computation (Welch's method)
2. ✅ Cross-spectrum computation
3. ✅ Frequency response estimation from spectra
4. ✅ Hamming window creation & normalization
5. ✅ Spectral smoothing via Hamming window convolution
6. ✅ Quality/confidence assessment

### Unification Plan

**Phase 1: Create `spectral_utils.py`**
- Extract shared spectral analysis functions
- No breaking changes to existing code

**Phase 2: Common Functions**
1. `compute_power_spectrum(signal, dt=1.0, method='welch', nperseg=1024)` - Wrapper around `scipy.signal.welch`
2. `compute_cross_spectrum(u, y, dt=1.0, method='welch', nperseg=1024)` - Wrapper around `scipy.signal.csd`
3. `compute_correlations(u, y, max_lag=None)` - FFT-based correlation (from frequency_domain)
4. `compute_frequency_response(cross_spectrum, input_spectrum)` - Frequency response estimation
5. `create_hamming_window(window_size, normalize=True)` - Shared window creation
6. `smooth_frequency_response(G, window)` - Shared smoothing (magnitude/phase separate)
7. `compute_coherence(cross_spectrum, input_spectrum, output_spectrum)` - Coherence computation
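Two of the proposed helpers might look like this (a sketch under the signatures listed above; the separate magnitude/phase treatment in item 6 is made explicit):

```python
import numpy as np

def create_hamming_window(window_size, normalize=True):
    """Proposed helper 5: Hamming window, optionally normalized to unit sum
    so that convolution preserves the overall signal level."""
    w = np.hamming(window_size)
    return w / w.sum() if normalize else w

def smooth_frequency_response(G, window):
    """Proposed helper 6: smooth magnitude and unwrapped phase separately,
    then recombine - averaging complex values directly can cancel across
    phase wraps."""
    mag = np.convolve(np.abs(G), window, mode="same")
    phase = np.convolve(np.unwrap(np.angle(G)), window, mode="same")
    return mag * np.exp(1j * phase)
```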

**Phase 3: Refactor Implementations**
- `get_model_uncertainty`: Use `compute_power_spectrum`, `compute_cross_spectrum`, `compute_frequency_response`, `smooth_frequency_response`
- `FrequencyDomainIdentification`: Use shared functions, keep FFT-based correlation approach for accuracy
- Both use shared window/smoothing utilities

**Phase 4: Optional Numba Optimization**
- Add compiled versions in `compiled_utils.py` for:
  - `compute_correlations` (FFT-based)
  - `smooth_frequency_response` (convolution)
  - `compute_coherence` (vectorized operations)
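The smoothing convolution is the kind of explicit loop that benefits most from `@njit`; a sketch with a fallback so the code still runs when Numba is absent (the function name is illustrative):

```python
import numpy as np

try:
    from numba import njit
except ImportError:                      # Numba is optional: fall back to pure Python
    def njit(func=None, **kwargs):
        return func if func is not None else (lambda f: f)

@njit(cache=True)
def smooth_spectrum(x, w):
    """Same-length smoothing with edge renormalization, written as explicit
    loops so Numba can compile it to machine code."""
    n, m = x.shape[0], w.shape[0]
    half = m // 2
    out = np.empty(n)
    for i in range(n):
        acc = 0.0
        norm = 0.0
        for j in range(m):
            k = i + j - half
            if 0 <= k < n:               # skip taps that fall outside the array
                acc += x[k] * w[j]
                norm += w[j]
        out[i] = acc / norm              # renormalize by the taps actually used
    return out
```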

### Expected Benefits
- ✅ DRY principle: Single source of truth for spectral analysis
- ✅ Consistency: Same algorithms across both functions
- ✅ Maintainability: Bug fixes/improvements benefit both
- ✅ Testability: Shared utilities can be independently tested
- ✅ Extensibility: Other algorithms can reuse utilities
- ✅ Performance: Centralized Numba optimization opportunity
- ✅ No breaking changes: Existing APIs unchanged

### Implementation Strategy
1. Create `spectral_utils.py` with new functions
2. Refactor `get_model_uncertainty` to use utilities (backward compatible)
3. Refactor `FrequencyDomainIdentification` to use utilities
4. Add comprehensive tests for spectral utilities
5. Optional: Add Numba compilation for bottlenecks
## Implementation Plan: Frequency Domain System Identification

### Phase 1: Test Suite (TDD)
Create comprehensive test file `test_frequency_domain.py` with:
- **Initialization tests**: Parameter validation, factory registration, default/custom parameters
- **Basic identification tests**: NumPy arrays, IDData objects, StateSpaceModel return validation
- **Quality assessment tests**: Coherence metrics, quality labels, reliability thresholds
- **Welch's method tests**: Variance reduction, segment handling, error cases
- **Edge cases**: Short data, mismatched lengths, NaN/inf values, constant signals
- **Spectral properties**: Real/complex spectra, correlation storage, frequency vectors
- **Data preprocessing**: Mean removal, windowing options, max_lag parameter
- **Factory integration**: Algorithm registration, case-insensitivity, listing
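The edge-case tests could drive a validation helper along these lines (`validate_identification_data` is a hypothetical stand-in for whatever checks the implementation ends up with):

```python
import numpy as np

def validate_identification_data(u, y, min_samples=8):
    """Hypothetical stand-in for the input checks the edge-case tests target."""
    u = np.asarray(u, dtype=float)
    y = np.asarray(y, dtype=float)
    if u.ndim != 1 or y.ndim != 1:
        raise ValueError("u and y must be 1-D signals")
    if len(u) != len(y):
        raise ValueError("u and y must have the same length")
    if len(u) < min_samples:
        raise ValueError("too few samples for spectral estimation")
    if not (np.isfinite(u).all() and np.isfinite(y).all()):
        raise ValueError("signals contain NaN or inf")
    if np.ptp(u) == 0.0:
        raise ValueError("constant input provides no excitation")
    return u, y
```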

### Phase 2: Algorithm Implementation
Create `frequency_domain.py` with:
- `FrequencyDomainIdentification(IdentificationAlgorithm)` class
- Implement 5-step algorithm: correlations → spectra → frequency response → smoothing → quality assessment
- Support Welch's method for variance reduction
- Return `StateSpaceModel` with frequency response in `identification_info`
- Comprehensive input validation and error handling
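The final quality-assessment step could reduce the coherence array to scalar metrics like this (metric names follow the plan; the 0.8 reliability threshold is an assumed default):

```python
import numpy as np

def assess_quality(coherence, threshold=0.8):
    """Sketch of the step-5 quality assessment over the coherence spectrum."""
    coherence = np.asarray(coherence, dtype=float)
    return {
        "mean": float(coherence.mean()),
        "min": float(coherence.min()),
        "max": float(coherence.max()),
        # Fraction of frequency bins whose coherence clears the threshold
        "fraction_reliable": float(np.mean(coherence >= threshold)),
    }
```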

### Phase 3: Factory Registration
Update `algorithms/__init__.py` to:
- Import `FrequencyDomainIdentification`
- Register with AlgorithmFactory under three aliases: "FREQUENCY_DOMAIN", "FREQ_DOMAIN", "NONPARAMETRIC_FREQ"
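The registration could follow a pattern like this (a minimal sketch: the real `AlgorithmFactory` API in `algorithms/__init__.py` is assumed, not shown here, and the stand-in class below is empty):

```python
class AlgorithmFactory:
    """Minimal stand-in registry with case-insensitive alias lookup."""
    _registry = {}

    @classmethod
    def register(cls, name, algorithm_cls):
        cls._registry[name.upper()] = algorithm_cls

    @classmethod
    def create(cls, name, **kwargs):
        return cls._registry[name.upper()](**kwargs)

class FrequencyDomainIdentification:
    """Stand-in for the real algorithm class."""

# Register under all three aliases named in the plan
for alias in ("FREQUENCY_DOMAIN", "FREQ_DOMAIN", "NONPARAMETRIC_FREQ"):
    AlgorithmFactory.register(alias, FrequencyDomainIdentification)
```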

### Phase 4: Verification
- Run all 40+ frequency domain tests
- Verify no regressions in existing tests
- Confirm factory integration works correctly