feat: explicit variant declarations with complexity metadata #102
Conversation
The variants list now shows all values (e.g., MIS/SimpleGraph/i32) instead of hiding default values. This makes it clear what graph and weight types each variant uses. The diff-from-default slash notation is still used for reduction edges to keep them concise. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…bgraph)
- Use parse_problem_spec + resolve_variant instead of resolve_alias so `pred create MIS/KingsSubgraph --graph ...` works
- Resolved variant from reduction graph is used in output JSON instead of hardcoded variant maps
- Simplify KColoring and KSatisfiability branches by removing redundant local variant variables

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Codecov Report

❌ Patch coverage is … Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #102      +/-  ##
==========================================
- Coverage   96.90%   96.89%   -0.01%
==========================================
  Files         196      198       +2
  Lines       26811    26904      +93
==========================================
+ Hits        25980    26068      +88
- Misses        831      836       +5
```

☔ View full report in Codecov by Sentry.
Adds variant declarations for optimization (QUBO, SpinGlass, ILP), satisfiability (Satisfiability, KSatisfiability), set (MaximumSetPacking, MinimumSetCovering), and specialized (CircuitSAT, Factoring) models. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Phase 1 now builds variant nodes from VariantEntry inventory (with complexity metadata), then Phase 2 creates edges from ReductionEntry. Adds variant_complexity() getter for querying a variant's time complexity. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Remove unused `mut` from ensure_node closure
- Sort variants_for() deterministically (default variant first)
- Change declare_variants! doctest from `ignore` to `text`

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pull request overview
This pull request introduces explicit variant declarations with complexity metadata for all problem types. It implements a compile-time enforcement system where variants must be explicitly declared using a new declare_variants! macro, and the #[reduction] proc macro verifies that all reduction source/target types are declared.
Changes:
- Adds infrastructure for explicit variant registration with a `DeclaredVariant` trait, `VariantEntry` inventory, and `declare_variants!` macro
- Updates all 18 model files to declare their concrete variants with time complexity expressions
- Modifies `ReductionGraph` to use two-phase construction: nodes from `VariantEntry`, edges from `ReductionEntry`
- Enhances CLI and MCP outputs to display per-variant complexity and reduction overhead information
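To make the declaration mechanism concrete, here is a minimal, self-contained sketch of how a `declare_variants!`-style macro can attach a complexity string to each concrete variant and let a generic bound reject undeclared variants at compile time. All type bodies here are toy stand-ins, and the real macro additionally submits a `VariantEntry` to the `inventory` registry, which this sketch omits:

```rust
/// Marker trait for a concrete (problem, graph, weight) combination that has
/// been explicitly declared. Name taken from the PR; body is a sketch.
trait DeclaredVariant {
    /// Time-complexity expression for this variant, e.g. "2^num_vertices".
    fn complexity() -> &'static str;
}

// Toy types standing in for the real model and graph types.
struct SimpleGraph;
struct KingsSubgraph;
struct MaximumIndependentSet<G, W>(std::marker::PhantomData<(G, W)>);

/// Simplified declare_variants!: implements DeclaredVariant for each listed
/// concrete type (the inventory registration part is omitted).
macro_rules! declare_variants {
    ($($ty:ty => $complexity:literal),+ $(,)?) => {
        $(impl DeclaredVariant for $ty {
            fn complexity() -> &'static str { $complexity }
        })+
    };
}

declare_variants! {
    MaximumIndependentSet<SimpleGraph, i32> => "2^num_vertices",
    MaximumIndependentSet<KingsSubgraph, i32> => "2^num_vertices",
}

/// A bound like this is how a #[reduction]-style macro can reject undeclared
/// variants: instantiating it with an undeclared type fails to compile.
fn require_declared<T: DeclaredVariant>() -> &'static str {
    T::complexity()
}

fn main() {
    println!("{}", require_declared::<MaximumIndependentSet<SimpleGraph, i32>>());
}
```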
Reviewed changes
Copilot reviewed 33 out of 34 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| src/traits.rs | Adds DeclaredVariant marker trait for compile-time checking |
| src/registry/variant.rs | New module with VariantEntry struct and inventory collection |
| src/registry/mod.rs | Exports VariantEntry from registry module |
| src/variant.rs | Implements declare_variants! macro for variant registration |
| src/lib.rs | Re-exports inventory crate for macro usage |
| problemreductions-macros/src/lib.rs | Adds compile-time DeclaredVariant assertions to #[reduction] macro |
| src/rules/graph.rs | Two-phase graph construction, adds complexity field to nodes, variant ordering |
| src/unit_tests/rules/graph.rs | Tests for VariantEntry inventory and complexity retrieval |
| src/models/graph/*.rs | Declares variants with complexity for 9 graph problems |
| src/models/optimization/*.rs | Declares variants with complexity for QUBO, SpinGlass, ILP |
| src/models/satisfiability/*.rs | Declares variants with complexity for SAT and KSAT |
| src/models/set/*.rs | Declares variants with complexity for set covering and packing |
| src/models/specialized/*.rs | Declares variants with complexity for CircuitSAT and Factoring |
| problemreductions-cli/src/commands/graph.rs | Displays complexity and overhead in show command |
| problemreductions-cli/src/commands/create.rs | Supports geometry graph variants with positions parameter |
| problemreductions-cli/src/mcp/tools.rs | MCP support for geometry variants and complexity display |
| problemreductions-cli/src/cli.rs | Updated help text for geometry variants |
| problemreductions-cli/tests/cli_tests.rs | Tests for geometry graph creation via CLI |
| .gitignore | Adds wildcard pattern for JSON files |
| docs/plans/*.md | Design and implementation plan documents |
Comments suppressed due to low confidence (1)
problemreductions-cli/src/mcp/tools.rs:205
- The MCP `show_problem_inner` output does not include the overhead information in the `reduces_to` and `reduces_from` edges (lines 194-205), while the CLI version in `commands/graph.rs` does include it (via the `edge_to_json` closure). This creates an inconsistency: the same information is presented differently depending on whether it is accessed via CLI or MCP. Consider using the same edge serialization logic in both places for consistency.
```rust
"reduces_to": outgoing.iter().map(|e| {
    serde_json::json!({
        "source": {"name": e.source_name, "variant": e.source_variant},
        "target": {"name": e.target_name, "variant": e.target_variant},
    })
}).collect::<Vec<_>>(),
"reduces_from": incoming.iter().map(|e| {
    serde_json::json!({
        "source": {"name": e.source_name, "variant": e.source_variant},
        "target": {"name": e.target_name, "variant": e.target_variant},
    })
}).collect::<Vec<_>>(),
```
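A minimal sketch of the consolidation the reviewer suggests, assuming a simplified `ReductionEdgeInfo` (overhead reduced to a plain string) and hand-rolled JSON instead of `serde_json` to stay self-contained: a single `edge_to_json` function that both the CLI and MCP paths would call, so the `overhead` field cannot drift out of sync between the two outputs.

```rust
use std::collections::BTreeMap;

// Simplified stand-in for the real ReductionEdgeInfo; overhead is a plain
// string here instead of the ReductionOverhead type.
struct ReductionEdgeInfo {
    source_name: &'static str,
    source_variant: BTreeMap<String, String>,
    target_name: &'static str,
    target_variant: BTreeMap<String, String>,
    overhead: String,
}

// Serialize a variant map as a JSON object in key-sorted order.
fn variant_json(v: &BTreeMap<String, String>) -> String {
    let fields: Vec<String> = v
        .iter()
        .map(|(k, val)| format!("\"{k}\":\"{val}\""))
        .collect();
    format!("{{{}}}", fields.join(","))
}

/// Single serialization point shared by both output paths, including
/// the overhead field the MCP output was missing.
fn edge_to_json(e: &ReductionEdgeInfo) -> String {
    format!(
        "{{\"source\":{{\"name\":\"{}\",\"variant\":{}}},\"target\":{{\"name\":\"{}\",\"variant\":{}}},\"overhead\":\"{}\"}}",
        e.source_name,
        variant_json(&e.source_variant),
        e.target_name,
        variant_json(&e.target_variant),
        e.overhead,
    )
}

fn main() {
    let edge = ReductionEdgeInfo {
        source_name: "MaximumIndependentSet",
        source_variant: BTreeMap::new(),
        target_name: "VertexCover",
        target_variant: BTreeMap::new(),
        overhead: "linear".to_string(),
    };
    println!("{}", edge_to_json(&edge));
}
```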
```rust
/// LCG PRNG step — returns next state and a uniform f64 in [0, 1).
fn lcg_step(state: &mut u64) -> f64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    (*state >> 33) as f64 / (1u64 << 31) as f64
}

/// Initialize LCG state from seed or system time.
fn lcg_init(seed: Option<u64>) -> u64 {
    seed.unwrap_or_else(|| {
        std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_nanos() as u64
    })
}

/// Generate random unique integer positions on a grid for KingsSubgraph/TriangularSubgraph.
fn create_random_int_positions(num_vertices: usize, seed: Option<u64>) -> Vec<(i32, i32)> {
    let mut state = lcg_init(seed);
    let grid_size = (num_vertices as f64).sqrt().ceil() as i32 + 1;
    let mut positions = std::collections::BTreeSet::new();
    while positions.len() < num_vertices {
        let x = (lcg_step(&mut state) * grid_size as f64) as i32;
        let y = (lcg_step(&mut state) * grid_size as f64) as i32;
        positions.insert((x, y));
    }
    positions.into_iter().collect()
}

/// Generate random float positions in [0, sqrt(N)] x [0, sqrt(N)] for UnitDiskGraph.
fn create_random_float_positions(num_vertices: usize, seed: Option<u64>) -> Vec<(f64, f64)> {
    let mut state = lcg_init(seed);
    let side = (num_vertices as f64).sqrt();
    (0..num_vertices)
        .map(|_| {
            let x = lcg_step(&mut state) * side;
            let y = lcg_step(&mut state) * side;
            (x, y)
        })
        .collect()
}
```
There is significant code duplication between problemreductions-cli/src/commands/create.rs and problemreductions-cli/src/mcp/tools.rs. Both files contain nearly identical implementations of: lcg_step, lcg_init, create_random_int_positions, create_random_float_positions, parse_positions, and ser_vertex_weight_problem_generic. Consider extracting these shared functions into a common module (e.g., problemreductions-cli/src/utils/random.rs and problemreductions-cli/src/utils/geometry.rs) to reduce duplication and ensure consistency.
```rust
*state = state
    .wrapping_mul(6364136223846793005)
    .wrapping_add(1442695040888963407);
(*state >> 33) as f64 / (1u64 << 31) as f64
```
The LCG step function has a normalization issue. The right shift by 33 bits produces a 31-bit value, but dividing by (1u64 << 31) gives values in the range [0, 2), not [0, 1). This should be (1u64 << 31) as f64 to ensure the divisor matches the range of the numerator, producing values correctly in [0, 1).
```rust
/// Convert a variant BTreeMap to slash notation showing ALL values.
/// E.g., {graph: "SimpleGraph", weight: "i32"} → "/SimpleGraph/i32".
fn variant_to_full_slash(variant: &BTreeMap<String, String>) -> String {
    if variant.is_empty() {
        String::new()
    } else {
        let vals: Vec<&str> = variant.values().map(|v| v.as_str()).collect();
        format!("/{}", vals.join("/"))
    }
}
```
The variant_to_full_slash function uses BTreeMap::values() which returns values in key-sorted order. While BTreeMap guarantees this ordering is deterministic, it means the slash notation depends on the alphabetical order of variant parameter names (e.g., "graph", "k", "weight"). This could produce unintuitive results - for example, a variant with keys {graph: "SimpleGraph", k: "K3", weight: "i32"} would become "/SimpleGraph/K3/i32" (alphabetical: g, k, w). Consider documenting this ordering behavior or using a more explicit parameter ordering scheme.
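To make the ordering behavior concrete, here is a self-contained reproduction of the case the comment describes (the function body is copied from the snippet above; the three-key map is a hypothetical example):

```rust
use std::collections::BTreeMap;

/// Same logic as variant_to_full_slash: values joined in key-sorted order.
fn variant_to_full_slash(variant: &BTreeMap<String, String>) -> String {
    if variant.is_empty() {
        String::new()
    } else {
        let vals: Vec<&str> = variant.values().map(|v| v.as_str()).collect();
        format!("/{}", vals.join("/"))
    }
}

fn main() {
    let mut v = BTreeMap::new();
    // Inserted deliberately out of alphabetical order.
    v.insert("weight".to_string(), "i32".to_string());
    v.insert("graph".to_string(), "SimpleGraph".to_string());
    v.insert("k".to_string(), "K3".to_string());
    // BTreeMap iterates keys alphabetically (graph, k, weight) regardless of
    // insertion order, so the notation is deterministic:
    println!("{}", variant_to_full_slash(&v)); // "/SimpleGraph/K3/i32"
}
```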
```rust
/// ```text
/// declare_variants! {
///     MaximumIndependentSet<SimpleGraph, i32> => "2^num_vertices",
///     MaximumIndependentSet<KingsSubgraph, i32> => "2^num_vertices",
/// }
/// ```
```
The doc comment example uses a `text` code fence instead of `ignore` or `rust,ignore`. While this is functionally equivalent, `ignore` is more conventional for Rust documentation examples that should not be tested. This is a minor style consistency issue.
```rust
declare_variants! {
    MaximumIndependentSet<SimpleGraph, i32> => "2^num_vertices",
    MaximumIndependentSet<KingsSubgraph, i32> => "2^num_vertices",
    MaximumIndependentSet<UnitDiskGraph, f64> => "2^num_vertices",
```
The design document at line 59 shows UnitDiskGraph, f64 as an example variant, but in the actual implementation in maximum_independent_set.rs line 166, only UnitDiskGraph, i32 is declared. This is a minor inconsistency between the design doc and implementation. While this doesn't affect correctness, it would be clearer if the design document matched the actual implementation or explained why the example differs.
```diff
- MaximumIndependentSet<UnitDiskGraph, f64> => "2^num_vertices",
+ MaximumIndependentSet<UnitDiskGraph, i32> => "2^num_vertices",
```
```rust
*state = state
    .wrapping_mul(6364136223846793005)
    .wrapping_add(1442695040888963407);
(*state >> 33) as f64 / (1u64 << 31) as f64
```
The LCG step function has a normalization issue. The right shift by 33 bits produces a 31-bit value, but dividing by (1u64 << 31) gives values in the range [0, 2), not [0, 1). This should be (1u64 << 31) as f64 to ensure the divisor matches the range of the numerator, producing values correctly in [0, 1). The same issue exists in the inline implementation at line 590 of create_random_graph.
```rust
variants.sort_by(|a, b| {
    fn default_rank(v: &BTreeMap<String, String>) -> usize {
        v.values()
            .filter(|val| !["SimpleGraph", "i32", "KN"].contains(&val.as_str()))
            .count()
    }
    default_rank(a).cmp(&default_rank(b)).then_with(|| a.cmp(b))
});
```
The deterministic sorting logic for variants has a potential issue. The default_rank function counts non-default values to rank variants, but this approach may not be stable across different variant maps with the same number of non-default values. Consider two variants: {graph: "KingsSubgraph", weight: "i32"} and {graph: "SimpleGraph", weight: "f64"} — both have one non-default value and would be ranked equally, falling back to lexicographic comparison. This is probably fine, but the comment claims "variants[0] is the base variant" which may not always be true if no variant is fully default. Consider adding a check that the first variant actually has rank 0 (all default values), or clarify the documentation.
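The tie case the comment describes can be reproduced in a few lines; this self-contained sketch copies the `default_rank` logic from the snippet above and checks how the reviewer's two example variants are ordered (the `mk` helper is just for brevity):

```rust
use std::collections::BTreeMap;

// Copied from the sorting closure above: counts non-default values.
fn default_rank(v: &BTreeMap<String, String>) -> usize {
    v.values()
        .filter(|val| !["SimpleGraph", "i32", "KN"].contains(&val.as_str()))
        .count()
}

// Convenience constructor for a variant map (illustration-only helper).
fn mk(pairs: &[(&str, &str)]) -> BTreeMap<String, String> {
    pairs.iter().map(|(k, v)| (k.to_string(), v.to_string())).collect()
}

fn main() {
    let a = mk(&[("graph", "KingsSubgraph"), ("weight", "i32")]);
    let b = mk(&[("graph", "SimpleGraph"), ("weight", "f64")]);
    // Both have exactly one non-default value, so rank alone cannot order them.
    assert_eq!(default_rank(&a), 1);
    assert_eq!(default_rank(&b), 1);
    // The tie falls back to lexicographic BTreeMap comparison: the first key
    // is "graph" in both, and "KingsSubgraph" < "SimpleGraph", so `a` sorts first.
    let mut variants = vec![b.clone(), a.clone()];
    variants.sort_by(|x, y| default_rank(x).cmp(&default_rank(y)).then_with(|| x.cmp(y)));
    assert_eq!(variants[0], a);
    println!("first: {:?}", variants[0]);
}
```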
Adds test coverage for the two uncovered patch areas:
- VariantEntry Debug formatting
- variant_complexity returning None for unknown problems

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add validate_k_param() to detect contradictions between variant suffix (e.g., /K2) and --k flag (e.g., --k 3) for KColoring/KSatisfiability
- Extract shared helpers (ser, variant_map, parse_positions, LCG, random graph generation) into util.rs to eliminate CLI/MCP duplication
- Apply consistent K-param validation in both CLI create and MCP create_problem paths (including random generation)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
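The check described in the first bullet could look roughly like the following. This is a hypothetical sketch, not the PR's actual implementation: the suffix format ("K" followed by a number) and the function signature are assumptions based on the commit message.

```rust
/// Hypothetical sketch of a validate_k_param-style check: reject a variant
/// suffix like "K2" combined with a contradictory --k flag like 3.
fn validate_k_param(variant_suffix: Option<&str>, k_flag: Option<u32>) -> Result<(), String> {
    if let (Some(suffix), Some(k)) = (variant_suffix, k_flag) {
        // Parse "K2" -> 2; the suffix format is an assumption from the PR text.
        if let Some(n) = suffix.strip_prefix('K').and_then(|s| s.parse::<u32>().ok()) {
            if n != k {
                return Err(format!("variant suffix K{n} contradicts --k {k}"));
            }
        }
    }
    Ok(())
}

fn main() {
    // Agreement, contradiction, and "no suffix given" cases.
    assert!(validate_k_param(Some("K2"), Some(2)).is_ok());
    assert!(validate_k_param(Some("K2"), Some(3)).is_err());
    assert!(validate_k_param(None, Some(3)).is_ok());
    println!("k-param validation sketch ok");
}
```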
Split review-implementation into two parallel subagents (structural + quality) dispatched with fresh context. Integrates with executing-plans after each batch and standalone /review-implementation. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Self-contained prompt templates for review subagents:
- structural-reviewer-prompt.md: model/rule checklists + semantic review
- quality-reviewer-prompt.md: DRY, KISS, HC/LC, HCI, test quality

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
SKILL.md now dispatches two parallel subagents (structural + quality) with fresh context instead of running checks inline. Detection uses git diff --diff-filter=A to find new model/rule files. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Reflects new subagent-based dispatch and executing-plans integration. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Remove thin wrapper functions in create.rs, use util:: directly
- Add overhead info to MCP show_problem edge JSON (matching CLI output)
- Fix UnitDiskGraph variant type in design doc (f64 → i32)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
These were implementation plans consumed during development and are no longer needed in the tree. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Instructs the agent to run `make copilot-review` after creating a PR to get GitHub Copilot code review feedback. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pull request overview
Copilot reviewed 40 out of 41 changed files in this pull request and generated 1 comment.
```rust
/// A source/target pair from the reduction graph, returned by
/// [`ReductionGraph::outgoing_reductions`] and [`ReductionGraph::incoming_reductions`].
#[derive(Debug, Clone)]
pub struct ReductionEdgeInfo {
    pub source_name: &'static str,
    pub source_variant: BTreeMap<String, String>,
    pub target_name: &'static str,
    pub target_variant: BTreeMap<String, String>,
    pub overhead: ReductionOverhead,
}
```
ReductionEdgeInfo is publicly exported (via rules/mod.rs) and has public fields. Adding the new overhead field is a breaking change for downstream crates that construct the struct via literals or pattern-match it. Consider either (a) treating this as a semver-major change, or (b) avoiding public fields (provide accessors / constructor) and/or switching the type to #[non_exhaustive] going forward so future additions don't keep breaking downstream code.
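Option (b) from the comment could be sketched as follows. This is an illustration of the general `#[non_exhaustive]`-plus-accessors pattern, not the crate's actual API; the struct here is simplified (overhead reduced to a string) and the constructor name is hypothetical:

```rust
/// With #[non_exhaustive], downstream crates can neither construct the struct
/// with a literal nor pattern-match it exhaustively, so adding fields later
/// is no longer a breaking change for them.
#[non_exhaustive]
#[derive(Debug, Clone)]
pub struct ReductionEdgeInfo {
    source_name: &'static str,
    target_name: &'static str,
    overhead: String, // simplified stand-in for ReductionOverhead
}

impl ReductionEdgeInfo {
    /// Constructor keeps field layout private to the defining crate.
    pub fn new(source_name: &'static str, target_name: &'static str, overhead: String) -> Self {
        Self { source_name, target_name, overhead }
    }
    pub fn source_name(&self) -> &'static str { self.source_name }
    pub fn target_name(&self) -> &'static str { self.target_name }
    pub fn overhead(&self) -> &str { &self.overhead }
}

fn main() {
    let e = ReductionEdgeInfo::new("MaximumIndependentSet", "VertexCover", "linear".to_string());
    println!("{} -> {} ({})", e.source_name(), e.target_name(), e.overhead());
}
```

Note that `#[non_exhaustive]` only restricts foreign crates; code in the defining crate can still use literals and exhaustive matches.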
New skill that handles:
- Fetching and triaging PR review comments (user + Copilot)
- Checking CI status via gh api
- Fixing codecov coverage gaps using gh api (not local cargo-llvm-cov)
- Structured workflow: gather -> triage -> fix -> verify -> report

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
- All problem variants are now declared explicitly via the `declare_variants!` macro, carrying per-variant complexity metadata (e.g., `"2^num_vertices"`)
- The `#[reduction]` macro enforces `DeclaredVariant` at compile time, so undeclared variants cause build errors
- Reduction graph nodes carry a `complexity` field
- `pred show` displays complexity per variant in both text and JSON output
- `variants_for()` sorts variants so the default (SimpleGraph/i32) always comes first

Test plan
- `make check` passes (fmt + clippy + all tests including `--include-ignored`)
- `pred show MIS` displays complexity for all 4 variants

🤖 Generated with Claude Code