Add parameter support to NLP diff engine#151

Open
Transurgeon wants to merge 143 commits into master from nlp-param-support

Conversation

@Transurgeon
Member

Enable CVXPY Parameters as live nodes in the C expression tree, so parameter values can be updated via memcpy without rebuilding the tree. This is critical for parametric NLP problems (MPC, portfolio optimization).

Changes:

  • inverse_data.py: add get_param_offsets() static method
  • converters.py: add build_parameter_dict(), _normalize_shape() helper, Parameter branches in matmul/multiply converters, param_dict threading through convert_expr/convert_expressions (now returns 4-tuple)
  • c_problem.py: use convert_expressions(), register params with C problem, add update_params() method for fast parameter value refresh
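The offset/buffer scheme behind update_params() can be sketched roughly as follows. This is a hypothetical illustration, not the actual c_problem.py code: the class and method names mirror the ones listed above, but the flat NumPy buffer merely stands in for the C-owned parameter memory that the real memcpy targets.

```python
import numpy as np

class CParamStore:
    """Hypothetical sketch of the update_params() idea: each Parameter gets
    a fixed offset into one flat buffer, so a value refresh is a plain
    memcpy-style copy instead of an expression-tree rebuild."""

    def __init__(self, param_shapes):
        # param_shapes: {name: shape}; offsets play the role of
        # get_param_offsets() in inverse_data.py
        self.offsets = {}
        total = 0
        for name, shape in param_shapes.items():
            size = int(np.prod(shape)) if shape else 1
            self.offsets[name] = (total, size)
            total += size
        self.buf = np.zeros(total)  # stands in for the C-owned memory

    def update_params(self, values):
        for name, value in values.items():
            start, size = self.offsets[name]
            # column-major flatten, matching CVXPY's Fortran convention
            flat = np.asarray(value, dtype=float).reshape(size, order="F")
            self.buf[start:start + size] = flat  # the "memcpy"

store = CParamStore({"gamma": (), "A": (2, 2)})
store.update_params({"gamma": 3.0, "A": np.array([[1., 2.], [3., 4.]])})
```

Because only the data buffer changes, repeated solves in an MPC loop skip conversion and tree construction entirely.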

Description

Please include a short summary of the change.
Issue link (if applicable):

Type of change

  • New feature (backwards compatible)
  • New feature (breaking API changes)
  • Bug fix
  • Other (Documentation, CI, ...)

Contribution checklist

  • Add our license to new files.
  • Check that your code adheres to our coding style.
  • Write unittests.
  • Run the unittests and check that they’re passing.
  • Run the benchmarks to make sure your change doesn’t introduce a regression.

Transurgeon and others added 30 commits June 16, 2025 23:53
initial attempts at adding a smooth canon for maximum
* adds oracles and bounds class to ipopt interface

* adds some settings and solver lists changes for IPOPT

* adds nlp solver option and can call ipopt

* adds more experiments for integrating ipopt as a solver interface

* passing the problem through the inversion

* add some more extra changes

* adding nlmatrixstuffing

---------

Co-authored-by: William Zijie Zhang <william@gridmatic.com>
Co-authored-by: William Zijie Zhang <william@gridmatic.com>
* adding many tests, new smoothcanon for min, and improvements to ipopt_nlpif

* fixing last two tests

* add another example, qcp

* adding example for acopf

* add control of a car example done

---------

Co-authored-by: William Zijie Zhang <william@gridmatic.com>
* update solution statuses thanks to odow

* removes unused solver information

---------

Co-authored-by: William Zijie Zhang <william@gridmatic.com>
* getting rocket landing example to work

* add changes to the jacobian computation

---------

Co-authored-by: William Zijie Zhang <william@gridmatic.com>
* adding many more examples of non-convex functions

* making lots of progress on understanding good canonicalizations

---------

Co-authored-by: William Zijie Zhang <william@gridmatic.com>
Co-authored-by: William Zijie Zhang <william@gridmatic.com>
Transurgeon and others added 23 commits December 16, 2025 00:22
* delete sandbox

* also removes unused requirements file

* rename nlp tests folder

* rename nlp tests folder

* remove end of line

* fix ci setup
* adds first changes for interface

* some more progress on uno interface

* adds unopy to CI workflows

* remove windows tests

* remove unnecessary nlp workflow

* cleanup test_nlp_solvers
…n of NumPy (#120)

* Adds DNLP cp.prod

* Adds hstack/vstack

* Fixes failing tests

* Fixes hallucination

* Addresses Daniel's comments

* Removes unneeded code

* Addresses Daniel's comments

* Fixes constraints in broken test problem
…riables (#127)

Parameters should raise ValueError when assigned NaN values, but Variables
need to allow NaN for NLP structural Jacobian/Hessian computation. The
previous code allowed NaN for all Leaf types, breaking upstream test
test_nan_in_parameter_raises.

The fix checks self.variables() to distinguish Variables (returns [self])
from Parameters (returns []).

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
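The variables()-based check described above can be sketched with stand-in classes. These are minimal mocks, not CVXPY's actual Leaf hierarchy; the point is only the dispatch trick: a Variable's variables() returns [self] while a Parameter's returns [], so one call distinguishes them without isinstance checks.

```python
import math

class Leaf:
    def variables(self):
        raise NotImplementedError

class Variable(Leaf):
    def variables(self):
        return [self]  # a Variable is itself a variable

class Parameter(Leaf):
    def variables(self):
        return []  # Parameters contain no variables

def validate_value(leaf, value):
    """Sketch of the fix: allow NaN only for Variables (needed for
    structural Jacobian/Hessian probes), reject it for Parameters."""
    if math.isnan(value) and not leaf.variables():
        raise ValueError("Parameter values must not be NaN.")
    return value
```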
* Add COPT support

* Add copt to pytest

* Fix code style and copt interface

* Fix copt tests

* Restore original test_interfaces.py

The matrix interface tests were accidentally overwritten with NLP solver
tests. This restores the original unit tests for NumPy and SciPy sparse
matrix interfaces.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: wujian <wujian@shanshu.ai>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* Implement diag_vec Jacobian for NLP solving

- Add _jacobian() and _hess_vec() methods to diag_vec atom for NLP support
- Fix diagonal variable value propagation in CvxAttr2Constr reduction
- Add tests for diagonal variables in NLP problems

The diag_vec Jacobian maps input vector positions to diagonal matrix
positions using the formula i*(n+1) for Fortran-order flattening.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
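The i*(n+1) formula quoted above can be checked with a small dense sketch (the real implementation is sparse; this is just a correctness illustration): with Fortran-order flattening of an n-by-n matrix, diagonal entry (i, i) lands at flat position i*n + i = i*(n+1).

```python
import numpy as np

def diag_vec_jacobian(n):
    """Sketch: Jacobian of x -> vec(diag(x)) under Fortran-order
    flattening. Input position i maps to output position i*(n+1)."""
    J = np.zeros((n * n, n))
    for i in range(n):
        J[i * (n + 1), i] = 1.0
    return J

x = np.array([1., 2., 3.])
J = diag_vec_jacobian(3)
# J @ x reproduces vec(diag(x)) in column-major order
assert np.array_equal(J @ x, np.diag(x).flatten(order="F"))
```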

* Handle sparse value initialization for diag variables in NLP

When a diag variable has its value stored as a sparse matrix (e.g., after
solving), np.diag() fails on it. This adds a check for sparse values and
uses .diagonal() to extract the diagonal elements correctly.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
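The sparse-value guard described above amounts to a two-branch helper; a minimal sketch (hypothetical function name, using SciPy's public API):

```python
import numpy as np
import scipy.sparse as sp

def extract_diagonal(value):
    """Sketch of the fix: np.diag() rejects sparse matrices, so detect
    sparse values and use .diagonal() instead."""
    if sp.issparse(value):
        return value.diagonal()
    return np.diag(value)

# e.g. a diag variable whose value is stored sparse after solving
D = sp.diags([1.0, 2.0, 3.0]).tocsc()
assert np.array_equal(extract_diagonal(D), [1.0, 2.0, 3.0])
```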

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Resolves merge conflicts:
- README.md: Keep DNLP-specific README
- cvxpy/atoms/pnorm.py: Keep DNLP _hess_vec methods, add upstream PnormApprox class
- cvxpy/utilities/citations.py: Keep IPOPT/UNO citations, add MOREAU citation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…or everything) (#131)

* Integrate DNLP diff engine into NLP solver Oracles class

Replace the pure-Python Oracles class with a new implementation that
wraps the C-based C_problem class from dnlp_diff_engine. This provides
automatic differentiation via the compiled C library instead of the
Python-based jacobian/hessian computation.

Key changes:
- Replace Oracles class implementation in nlp_solver.py
- Use lazy import to avoid circular dependency at module load time
- Convert CSR sparse matrices from diff engine to COO format for solvers
- Cache sparsity structures for jacobian and hessian

Currently supported atoms: log, exp, sum, AddExpression, NegExpression,
Promote. Other atoms (multiply, power, matmul, etc.) need to be added
to the diff engine before all NLP tests pass.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
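The CSR-to-COO step mentioned above is a one-line SciPy conversion; the useful observation is that for a fixed sparsity pattern the row/col arrays never change, so they can be cached once and only the data array handed to the solver each iteration. A sketch with a hypothetical helper name:

```python
import numpy as np
import scipy.sparse as sp

def csr_to_coo_triplets(J_csr):
    """Sketch: NLP solvers such as IPOPT consume triplet (COO) Jacobians,
    while the diff engine returns CSR. The (row, col) structure can be
    cached; only data needs refreshing per iteration."""
    J_coo = J_csr.tocoo()
    return J_coo.row, J_coo.col, J_coo.data

J = sp.csr_matrix(np.array([[0., 1.], [2., 0.]]))
rows, cols, data = csr_to_coo_triplets(J)
```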

* Update TODO tracking for diff engine atom status

- Document all newly added Python bindings (power, trig, hyperbolic, matmul)
- Update test status: 2/15 IPOPT tests now pass
- Clarify blocking issues for remaining tests (index, norm, C bugs)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* added converter to multiply

* Optimize Oracles class: remove redundant forward calls and simplify value extraction

- Remove redundant objective_forward/constraint_forward calls from gradient(),
  jacobian(), and hessian() since NLP solvers guarantee call ordering
- Replace matrix densification + zip iteration with direct COO data return
  for jacobian and hessian value extraction

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Update TODO tracking for diff engine atom status

- Updated implemented atoms list (index, reshape, sqrt, quad_over_lin, rel_entr)
- Updated test results summary with current pass/fail status
- Added known issues section (segfaults, bivariate matmul, rel_entr scalar)
- Added missing atoms by priority

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Update TODO tracking: 14/15 test_nlp_solvers tests now passing

- reshape atom now fully implemented with Python binding (commit 81c5a12)
- All 3 circle packing tests now PASS
- test_clnlbeam now PASSES (previously segfaulted)
- Memory issues with many constraints fixed
- Only test_localization still failing (needs broadcast_to)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Update TODO with full NLP test suite results

Full nlp_tests results (2025-01-14):
- 259 passed, 20 failed, 64 skipped, 1 xfailed

Remaining failures by missing atom:
- broadcast_to: 5 tests (localization, broadcast, best_of)
- Prod: 9 tests
- MulExpression (bivariate matmul): 5 tests
- rel_entr scalar variants: 1 test

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Clarify MulExpression: matrix @ matrix where both depend on variables

Examples: X @ Y (both Variables), cos(X) @ sin(Y)
Currently supported: A @ f(x) and f(x) @ A where A is constant

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Update TODO: Prod atom now implemented (13/14 tests pass)

- Added Prod to implemented atoms list
- Updated test_prod.py results: 13 passing, 1 failing (axis param)
- Updated summary: 267 passed, 12 failed (down from 20)

Remaining failures: broadcast_to (5), Prod axis (1), MulExpression (5), rel_entr scalar (1)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Add diff_engine integration layer for CVXPY-to-C expression conversion

- Create cvxpy/reductions/solvers/nlp_solvers/diff_engine/ package
- Move C_problem wrapper and ATOM_CONVERTERS from dnlp-diff-engine
- converters.py: Expression tree conversion from CVXPY atoms to C nodes
- c_problem.py: C_problem class wrapping the C problem struct
- Update nlp_solver.py to import from new location

This separates CVXPY-specific glue code from the pure C autodiff library,
eliminating circular dependencies and preparing dnlp-diff-engine for
standalone PyPI distribution.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Fix E402 lint errors: combine docstrings

Merge license header and module docstring into single docstring
to avoid "module level import not at top of file" errors.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* debugging

* super nasty bug

* equivalent treatment of (n,) shapes in numpy and cvxpy. Very subtle

* multiply

* converters

* relative entropy converter

* clean up oracle jacobian

* cleaned up jacobian oracle a bit

* minor

* cleaned up oracle class

* best of fix and added derivative checker class

* removed hacky logic with many reshapes to handle numpy's weird broadcasting rule. This is now internally done in the diff engine

* removed skipping of tests (fixed with new matmul convention)

* prod converter

* prod with axis one

* matmul

* stress_tests_diff_engine/

* random initial points

* test for sum

* added power flow as a test and started on transpose converter

* cleaned up convert_matmul so we take advantage of sparsity

* added test for sparse matrix vector

* small edit to test

* cleaned up converter of multiply

* clean up quad form converter

* changed name of test

* added derivative checker to all tests

* added hstack in converter and as test

* trace converter

* transpose converter

* test_affine_matrix_atoms.py

* diag_vec converter and tests

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Add diff_engine_core as git submodule

Add DNLP-diff-engine as a git submodule for building the _diffengine
C extension. The submodule includes:
- Pure C library (pyproject.toml removed)
- Apache 2.0 LICENSE
- Module renamed from _core to _diffengine
- diag_vec atom implementation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Add diff_engine build infrastructure and CI submodule checkout

- Add _diffengine Extension to setup/extensions.py (optional, only built
  if diff_engine_core submodule is initialized)
- Update setup.py to include diffengine in extensions list
- Update imports from dnlp_diff_engine to _diffengine
- Add submodules: recursive to all CI workflows that build CVXPY:
  - test_nlp_solvers.yml
  - test_optional_solvers.yml
  - test_backends.yml
  - cvxpygen.yml
  - docs.yml
  - gcsopt.yml
  - build.yml

Cherry-picked from submodule-setup branch.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Remove submodules: recursive from workflows that don't need NLP

Only keep submodules for workflows that run NLP tests or build wheels:
- build.yml (builds wheels with all features)
- test_backends.yml (runs all tests including NLP)
- test_nlp_solvers.yml (specifically for NLP tests)

Removed from workflows that don't need NLP support:
- cvxpygen.yml (only runs test_cvxpygen.py)
- docs.yml (just builds documentation)
- gcsopt.yml (only runs gcsopt tests)
- test_optional_solvers.yml (only runs conic/QP solver tests)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Fix PowerApprox support after upstream Power/PowerApprox split

The upstream CVXPY split Power into exact (Power) and approximate
(PowerApprox) subclasses. This broke NLP solving because:

1. dnlp2smooth used `power` (function) as dict key but lookup uses
   `type(expr)` which returns the class (Power or PowerApprox)
2. power_canon.py used `p_rational` which doesn't exist on PowerApprox
3. diff_engine converters used "power" (lowercase) but class name is "Power"

Fixes:
- dnlp2smooth/__init__.py: Import Power and PowerApprox classes,
  register both with power_canon
- power_canon.py: Use `p_used` instead of `p_rational`
- converters.py: Use "Power" and "PowerApprox" as keys

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Fix Linux build: define _POSIX_C_SOURCE for clock_gettime

The diff_engine_core uses clock_gettime() and struct timespec for
timing, which require _POSIX_C_SOURCE=200809L on Linux when compiling
with -std=c99 (which disables GNU extensions).

The CMakeLists.txt already had this define, but it wasn't being applied
when building through setuptools. Added define_macros to the diffengine
Extension for Linux builds.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Fix p_rational -> p_used and add Approx atom support

1. power.py: Replace all p_rational references with p_used
   - _verify_hess_vec_args, _hess_vec, _verify_jacobian_args, _jacobian
     all used p_rational which doesn't exist; p_used is correct
   - This fixes AttributeError in hess/jacobian tests

2. dnlp2smooth/__init__.py: Add support for Approx atom classes
   - Import GeoMean, GeoMeanApprox (was geo_mean function)
   - Import PnormApprox (was only Pnorm)
   - Register GeoMean, GeoMeanApprox, Pnorm, PnormApprox in
     SMOOTH_CANON_METHODS so they get canonicalized

The upstream split of atoms into exact/approx subclasses changed
type(expr).__name__ which broke the dictionary lookups.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
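The lookup breakage described above is worth a tiny illustration (stand-in classes, not CVXPY's real atoms): a registry keyed by a function or a lowercase name silently misses subclasses once an atom is split, whereas keying by the classes themselves and registering each variant keeps type(expr) dispatch working.

```python
# Minimal sketch of class-keyed canonicalization dispatch.
class Power:
    pass

class PowerApprox(Power):
    pass

def power_canon(expr):
    # stand-in for the real smooth canonicalizer
    return f"canonicalized {type(expr).__name__}"

# Register BOTH the exact and approximate classes, as the fix does.
SMOOTH_CANON_METHODS = {Power: power_canon, PowerApprox: power_canon}

def canonicalize(expr):
    # lookup uses type(expr), so every subclass must appear as a key
    return SMOOTH_CANON_METHODS[type(expr)](expr)
```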

* Add DIFF_ENGINE_VERSION macro for pip builds

The diff_engine_core uses DIFF_ENGINE_VERSION macro (defined in CMakeLists.txt
for CMake builds). Added -DDIFF_ENGINE_VERSION="0.0.1" to compile args for
setuptools/pip builds.

Cherry-picked from submodule-setup branch.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* add debug output test-nlp-solvers

* Thread verbose parameter to C diff engine and remove debug prints

- Add verbose parameter to C_problem and Oracles classes
- Defer Oracles creation from apply() to solve_via_data() so verbose
  flag is available when constructing the C diff engine
- Remove debug print statements from Oracles.__init__
- Remove timing print statements from IPOPT solve_via_data
- Delete obsolete TODO_diff_engine_atoms.md file

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Fix import formatting and update diff_engine_core submodule

- Add blank line between third-party and local imports (pre-commit)
- Update diff_engine_core submodule to version with verbose parameter support

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Remove UNO installation from CI to fix Ubuntu stall

UNO solver was causing the Ubuntu CI job to hang for over an hour.
Tests will be automatically skipped when UNO is not installed.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Remove debug artifacts from NLP tests

Remove debug print statements from test_nlp_solvers.py and test_matmul.py,
remove commented-out test code, and change verbose=True to verbose=False
in stress tests for cleaner test output.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Daniel <danielcederberg1@gmail.com>
* Add diff_engine_core release workflow and documentation

- Add release.yml workflow for automated GitHub releases on tag push
- Add DIFFENGINE_RELEASE.md with release and submodule update procedures
- Bump diff_engine_core version to 0.1.0

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Add quasi-Newton support and relocate Python bindings to DNLP

- Split init_derivatives() into init_jacobian() and init_hessian() in
  C_problem to allow lazy Hessian initialization
- Add use_hessian parameter to Oracles class to skip Hessian setup
  when using quasi-Newton methods (L-BFGS, BFGS, SR1)
- Update IPOPT interface to detect hessian_approximation='limited-memory'
- Update Knitro interface to detect hessopt != 1 (BFGS/SR1/L-BFGS)
- Update COPT and UNO interfaces with explicit use_hessian=True
- Move Python bindings from diff_engine_core/python/ to
  cvxpy/.../diff_engine/_bindings/ for better maintainability
- Update setup/extensions.py to use new bindings location
- Add test_quasi_newton.py with tests for L-BFGS mode

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Fix line too long in ipopt_nlpif.py

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Update diff_engine_core submodule to v0.1.0

Update to commit ee7fed2 which includes problem_init_jacobian() and
problem_init_hessian() functions needed by the new bindings.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Update DIFFENGINE_RELEASE.md with current state

Add section documenting current version (v0.1.0) and note that Python
bindings are now maintained in DNLP, not diff_engine_core.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Merges PR #140 from cvxpy:master, which adds:
- Support for zero-sized expressions
- Parametric (Expression) variable bounds
- Infinite value support in Parameters
- Bounds propagation through expression trees

Resolved merge conflicts by combining DNLP's Variable imports
(for jacobian/hessian methods) with upstream's bounds_utils imports.

Additional fixes for DNLP compatibility:
- nlp_solver.py: Use var.get_bounds() for proper scalar bound handling
- problem.py: Broadcast scalar bounds to variable shape in set_NLP_initial_point()

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
These methods are now provided by the C-based diff_engine_core library.
The NLP solver Oracles class uses c_problem from diff_engine, not the
Python methods on atoms.

Removed from base classes:
- atom.py: jacobian(), hess_vec(), _jacobian(), _hess_vec(),
  _verify_jacobian_args(), _verify_hess_vec_args()
- variable.py: jacobian(), hess_vec()
- constant.py: jacobian(), hess_vec()

Removed from 13 affine atoms, 10 elementwise atoms, and 4 non-elementwise
atoms (pnorm, prod, quad_form, quad_over_lin).

Deleted test files:
- cvxpy/tests/nlp_tests/jacobian_tests/ (10 files)
- cvxpy/tests/nlp_tests/hess_tests/ (10 files)

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Includes:
- Fix Windows build (#41)
- Remove Python bindings (moved to DNLP repository) (#40)

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* Extract diff engine bindings into standalone SparseDiffPy package

Move C bindings (_bindings/) to the new SparseDiffPy package at
SparseDifferentiation/SparseDiffPy and import via
`from sparsediffpy import _sparsediffengine as _diffengine`.
Remove the diff_engine_core submodule and its build configuration.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* make sparsediffpy a dependency

* ci: trigger CI to verify sparsediffpy PyPI install

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
# Conflicts:
#	README.md
#	cvxpy/atoms/__init__.py
#	cvxpy/problems/problem.py
#	cvxpy/reductions/solvers/defines.py
#	cvxpy/reductions/solvers/solving_chain.py
#	cvxpy/settings.py
* clarified 0 iteration termination

* add tests

* removed print statements

* trigger CI

---------

Co-authored-by: William Zijie Zhang <william@gridmatic.com>
…ing cvxpy#3146)

# Conflicts:
#	cvxpy/reductions/cvx_attr2constr.py
Enable CVXPY Parameters as live nodes in the C expression tree, so
parameter values can be updated via memcpy without rebuilding the tree.
This is critical for parametric NLP problems (MPC, portfolio optimization).

Changes:
- inverse_data.py: add get_param_offsets() static method
- converters.py: add build_parameter_dict(), _normalize_shape() helper,
  Parameter branches in matmul/multiply converters, param_dict threading
  through convert_expr/convert_expressions (now returns 4-tuple)
- c_problem.py: use convert_expressions(), register params with C problem,
  add update_params() method for fast parameter value refresh

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The refactor in 33fe669 lost the inline code that copied initial
values from dimension-reduced variables (diag, sparse, symmetric) to
their reduced counterparts. This caused NLP solvers to find value=None
on the reduced variable, breaking initialization. Mirror the existing
parameter value-propagation pattern by calling lower_value(var) after
creating the reduced variable.

Also fix lower_value to handle sparse diagonal matrices by using
.diagonal() instead of np.diag(), which does not accept sparse inputs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@github-actions

github-actions bot commented Feb 17, 2026

Benchmarks that have improved:

   before           after         ratio
 [0c4f10e2]       [6bac4ca2]
  •     548±0ms          493±0ms     0.90  semidefinite_programming.SemidefiniteProgramming.time_compile_problem
    
  •     1.22±0s          1.02±0s     0.84  gini_portfolio.Cajas.time_compile_problem
    

Benchmarks that have stayed the same:

   before           after         ratio
 [0c4f10e2]       [6bac4ca2]
     39.1±0ms         39.9±0ms     1.02  matrix_stuffing.SmallMatrixStuffing.time_compile_problem
      12.1±0s          12.3±0s     1.02  finance.CVaRBenchmark.time_compile_problem
     14.5±0ms         14.7±0ms     1.02  simple_LP_benchmarks.SimpleFullyParametrizedLPBenchmark.time_compile_problem
      1.56±0s          1.58±0s     1.01  tv_inpainting.TvInpainting.time_compile_problem
      986±0ms          999±0ms     1.01  finance.FactorCovarianceModel.time_compile_problem
      886±0ms          893±0ms     1.01  simple_LP_benchmarks.SimpleScalarParametrizedLPBenchmark.time_compile_problem
      671±0ms          675±0ms     1.01  matrix_stuffing.ConeMatrixStuffingBench.time_compile_problem
      20.5±0s          20.6±0s     1.01  sdp_segfault_1132_benchmark.SDPSegfault1132Benchmark.time_compile_problem
      737±0ms          740±0ms     1.00  simple_QP_benchmarks.LeastSquares.time_compile_problem
     15.1±0ms         15.2±0ms     1.00  simple_QP_benchmarks.ParametrizedQPBenchmark.time_compile_problem
      1.78±0s          1.78±0s     1.00  simple_QP_benchmarks.UnconstrainedQP.time_compile_problem
      226±0ms          226±0ms     1.00  gini_portfolio.Murray.time_compile_problem
      134±0ms          133±0ms     1.00  high_dim_convex_plasticity.ConvexPlasticity.time_compile_problem
      278±0ms          277±0ms     1.00  slow_pruning_1668_benchmark.SlowPruningBenchmark.time_compile_problem
      282±0ms          281±0ms     1.00  matrix_stuffing.ParamSmallMatrixStuffing.time_compile_problem
      3.92±0s          3.90±0s     1.00  huber_regression.HuberRegression.time_compile_problem
      1.41±0s          1.40±0s     1.00  matrix_stuffing.ParamConeMatrixStuffing.time_compile_problem
      4.46±0s          4.44±0s     1.00  svm_l1_regularization.SVMWithL1Regularization.time_compile_problem
      10.1±0s          10.1±0s     0.99  simple_LP_benchmarks.SimpleLPBenchmark.time_compile_problem
      239±0ms          238±0ms     0.99  simple_QP_benchmarks.SimpleQPBenchmark.time_compile_problem
      5.06±0s          5.02±0s     0.99  optimal_advertising.OptimalAdvertising.time_compile_problem
      317±0ms          313±0ms     0.98  gini_portfolio.Yitzhaki.time_compile_problem
      2.98±0s          2.79±0s     0.94  quantum_hilbert_matrix.QuantumHilbertMatrix.time_compile_problem

Transurgeon and others added 4 commits February 17, 2026 20:45
Detect the reshape(sparse_coeff @ reduced_param, shape, 'F') @ x pattern
produced by CvxAttr2Constr for sparse parameters and fuse it into a single
make_left_matmul/make_right_matmul with native CSR sparsity, instead of
two separate matmuls. Values in update_params are converted to CSR order
via COO→CSR for fused sparse parameters.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
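The COO-to-CSR reordering mentioned above can be sketched with SciPy (hypothetical helper name; the real code writes into the C-side buffer rather than returning an array): new parameter values arrive in the COO order of the original pattern and must be permuted into the fused matmul's CSR data layout.

```python
import numpy as np
import scipy.sparse as sp

def coo_values_to_csr_order(rows, cols, values, shape):
    """Sketch: permute parameter values given in COO order into the CSR
    data layout of the fused sparse-parameter matmul."""
    coo = sp.coo_matrix((values, (rows, cols)), shape=shape)
    return coo.tocsr().data

# entries supplied out of row-major order
rows = np.array([1, 0])
cols = np.array([0, 1])
vals = np.array([20.0, 10.0])
data = coo_values_to_csr_order(rows, cols, vals, (2, 2))
```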
Test scalar, vector, dense matrix, and sparse matrix parameter
multiplication with derivative checking and parameter value updates.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Parameters are always registered via build_parameter_dict, so the
fallthrough path that created them as make_constant was dead code.
Replace with an explicit error if a parameter is missing from param_dict.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
# Recursive case: atoms
atom_name = type(expr).__name__

# Try to fuse sparse-parameter reconstruction matmuls before normal dispatch
Collaborator

is it possible to move this logic into convert_matmul?


4 participants