
Add Uno solver (unopy) to NLP solver CI workflow #165

Draft
Transurgeon wants to merge 152 commits into master from retry-uno-CI

Conversation

@Transurgeon (Member) commented Mar 2, 2026

Description

Testing the CI with the new unopy release.
See issue here: cvanaret/Uno#485.
Let's wait until they fix it, since it still doesn't seem to work.
Also, I had to change the solver to use the IPM method for the geo mean tests, and Uno still takes over 1000 iterations :(.
Issue link (if applicable):

Type of change

  • New feature (backwards compatible)
  • New feature (breaking API changes)
  • Bug fix
  • Other (Documentation, CI, ...)

Contribution checklist

  • Add our license to new files.
  • Check that your code adheres to our coding style.
  • Write unit tests.
  • Run the unit tests and check that they're passing.
  • Run the benchmarks to make sure your change doesn’t introduce a regression.

Transurgeon and others added 30 commits June 16, 2025 23:53
initial attempts at adding a smooth canon for maximum
* adds oracles and bounds class to ipopt interface

* adds some settings and solver lists changes for IPOPT

* adds nlp solver option and can call ipopt

* adds more experiments for integrating ipopt as a solver interface

* passing the problem through the inversion

* add some more extra changes

* adding nlmatrixstuffing

---------

Co-authored-by: William Zijie Zhang <william@gridmatic.com>
* adding many tests, new smoothcanon for min, and improvements to ipopt_nlpif

* fixing last two tests

* add another example, qcp

* adding example for acopf

* add control of a car example done

---------

Co-authored-by: William Zijie Zhang <william@gridmatic.com>
* update solution statuses thanks to odow

* removes unused solver information

---------

Co-authored-by: William Zijie Zhang <william@gridmatic.com>
* getting rocket landing example to work

* add changes to the jacobian computation

---------

Co-authored-by: William Zijie Zhang <william@gridmatic.com>
* adding many more examples of non-convex functions

* making lots of progress on understanding good canonicalizations

---------

Co-authored-by: William Zijie Zhang <william@gridmatic.com>
Transurgeon and others added 18 commits February 12, 2026 12:35
# Conflicts:
#	README.md
#	cvxpy/atoms/__init__.py
#	cvxpy/problems/problem.py
#	cvxpy/reductions/solvers/defines.py
#	cvxpy/reductions/solvers/solving_chain.py
#	cvxpy/settings.py
* clarified 0 iteration termination

* add tests

* removed print statements

* trigger CI

---------

Co-authored-by: William Zijie Zhang <william@gridmatic.com>
…ing cvxpy#3146)

# Conflicts:
#	cvxpy/reductions/cvx_attr2constr.py
* Rename ESR/HSR to linearizable_convex/linearizable_concave

Spell out opaque acronyms for clarity per PR review feedback:
is_atom_esr → is_atom_linearizable_convex,
is_atom_hsr → is_atom_linearizable_concave,
is_esr → is_linearizable_convex, is_hsr → is_linearizable_concave,
is_smooth → is_linearizable.

Docstrings clarify that "linearizable convex" means the expression is
convex after linearizing all smooth subexpressions.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Adopt three-way atom classification: smooth, nonsmooth-convex, nonsmooth-concave

Replace the two-axis is_atom_linearizable_convex/concave overrides across all
atoms with a single method from the paper's three categories: is_atom_smooth,
is_atom_nonsmooth_convex, or is_atom_nonsmooth_concave. The base class derives
the old linearizable methods for backward compatibility.
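The derivation described in this commit can be sketched as follows (the method names follow the commit message; the example atoms and everything else are illustrative, not cvxpy's actual class hierarchy):

```python
# Sketch of the backward-compatibility shim described above. The three-way
# flags (smooth / nonsmooth-convex / nonsmooth-concave) are the primitives;
# the older "linearizable" queries are derived from them on the base class.

class Atom:
    def is_atom_smooth(self) -> bool:
        return False

    def is_atom_nonsmooth_convex(self) -> bool:
        return False

    def is_atom_nonsmooth_concave(self) -> bool:
        return False

    # Derived for backward compatibility: an atom is "linearizable convex"
    # if it is either smooth or nonsmooth-convex, and similarly for concave.
    def is_atom_linearizable_convex(self) -> bool:
        return self.is_atom_smooth() or self.is_atom_nonsmooth_convex()

    def is_atom_linearizable_concave(self) -> bool:
        return self.is_atom_smooth() or self.is_atom_nonsmooth_concave()


class Exp(Atom):
    """e.g. exp(x): smooth everywhere, so linearizable in both directions."""
    def is_atom_smooth(self) -> bool:
        return True


class MaxElemwise(Atom):
    """e.g. maximum(x, y): convex but not differentiable at ties."""
    def is_atom_nonsmooth_convex(self) -> bool:
        return True
```

Each subclass overrides exactly one of the three primitives, and the base class recovers the older two-axis API without any per-atom overrides.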

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Inline atom-level linearizable checks into expression-level composition rules

Remove the intermediate is_atom_linearizable_convex/concave methods and
_has_dnlp_classification helper, which were only used in the two
expression-level composition rules in atom.py. The three-way classification
(smooth, nonsmooth-convex, nonsmooth-concave) is now used directly.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Simplify is_atom_smooth docstring

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Propagate initial values to reduced variables in CvxAttr2Constr

When CvxAttr2Constr creates reduced variables for dim-reducing attributes
(e.g., diag), the original variable's initial value was not being lowered
and assigned to the reduced variable. This caused NLP solvers to fail with
"Variable has no value" during initial point construction.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Handle sparse values when propagating diag variable initials

When a diag variable has its value set as a sparse matrix, extract the
diagonal directly via scipy rather than passing it through np.diag which
does not support sparse inputs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Revert "Handle sparse values when propagating diag variable initials"

This reverts commit e422565.

* Revert "Propagate initial values to reduced variables in CvxAttr2Constr"

This reverts commit d02a758.

* fix pnorm nonsmooth convex

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
When CvxAttr2Constr creates reduced variables for dimension-reducing
attributes (e.g. diag=True), the initial value was not propagated to
the reduced variable, causing NLP solvers to fail. This mirrors the
existing value propagation already done for parameters.

Also handle sparse diagonal matrices in lower_value() by using
.diagonal() instead of np.diag() which doesn't accept sparse input.
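The sparse-vs-dense branch described in this commit can be sketched like so (the helper name and exact placement inside cvxpy's lower_value() are assumptions; only the np.diag-vs-.diagonal() distinction comes from the commit message):

```python
import numpy as np
import scipy.sparse as sp


def lower_diag_value(value):
    """Extract the diagonal of a variable's value, accepting either a
    dense ndarray or a scipy sparse matrix.

    np.diag() does not accept sparse inputs, so sparse values go through
    the sparse .diagonal() method instead. (Illustrative helper; the
    name and signature are assumptions.)
    """
    if sp.issparse(value):
        return value.diagonal()
    return np.diag(value)


dense = np.diag([1.0, 2.0, 3.0])          # dense 3x3 diagonal matrix
sparse = sp.diags([1.0, 2.0, 3.0]).tocsr()  # same matrix, CSR sparse

assert np.allclose(lower_diag_value(dense), [1.0, 2.0, 3.0])
assert np.allclose(lower_diag_value(sparse), [1.0, 2.0, 3.0])
```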

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
#153)

* Make is_linearizable_convex/is_linearizable_concave abstract on Expression

Enforce implementation in all Expression subclasses by uncommenting
@abc.abstractmethod decorators. Add missing implementations to indicator
(convex, not concave) and PartialProblem (delegates to is_convex/is_concave).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* remove is_smooth from max

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Separates test-only utility from production code by moving it to
cvxpy/tests/nlp_tests/derivative_checker.py and updating all 19
test file imports.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* Address review comments: remove redundant nonsmooth methods, add types, docs

- Remove is_atom_nonsmooth_convex/is_atom_nonsmooth_concave from base Atom
  class and 10 atom subclasses; use existing is_atom_convex/is_atom_concave
  in DNLP composition rules instead
- Add type annotations and docstrings to NLPsolver, Bounds, and Oracles
  in nlp_solver.py
- Document Variable.sample_bounds with class-level type annotation and
  docstring
- Revert GENERAL_PROJECTION_TOL back to 1e-10 (was loosened for removed
  IPOPT derivative checker)
- Update CLAUDE.md atom classification docs to reflect simplified API

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Move sample_bounds docs from #: comments into Variable class docstring

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…156)

* Extract NLP solving logic from problem.py into nlp_solving_chain.py

Move the ~85-line NLP block from Problem._solve() and the two initial
point methods into a dedicated module. This addresses PR review feedback:
- NLP chain building, initial point logic, and solve orchestration now
  live in cvxpy/reductions/solvers/nlp_solving_chain.py
- Use var.get_bounds() instead of var.bounds so sign attributes (nonneg,
  nonpos) are incorporated into bounds automatically
- Initial point helpers are now private module-level functions instead of
  public Problem methods

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
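The bounds behavior the second bullet relies on can be illustrated with a standalone sketch (the real var.get_bounds() operates on cvxpy Variable objects; this free function and its signature are assumptions made for illustration):

```python
import numpy as np


def get_bounds(lower, upper, nonneg=False, nonpos=False):
    """Merge explicit bounds with sign attributes: a nonneg variable's
    lower bound is clipped up to 0, a nonpos variable's upper bound is
    clipped down to 0. Missing bounds default to +/- infinity.
    """
    lower = -np.inf if lower is None else lower
    upper = np.inf if upper is None else upper
    if nonneg:
        lower = np.maximum(lower, 0.0)
    if nonpos:
        upper = np.minimum(upper, 0.0)
    return lower, upper
```

For example, `get_bounds(-1.0, 5.0, nonneg=True)` tightens the lower bound to `(0.0, 5.0)`, which is why reading bounds through such a method spares the NLP chain from handling sign attributes separately.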

* Update NLP initial point tests to use _set_nlp_initial_point

Tests now import and call the module-level helper directly instead of
the removed Problem.set_NLP_initial_point() method.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Remove debug print and expand comment in best_of loop

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Gate best_of print on verbose flag

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Address PR review comments on nlp_solving_chain extraction

- Add circular import comment in problem.py explaining the deferred import
- Move NLP_SOLVER_VARIANTS from nlp_solving_chain.py to defines.py
- Set BOUNDED_VARIABLES = True on NLPsolver base class and use it in
  _build_nlp_chain (matching the conic solver pattern in solving_chain.py)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* adds changes to test_problem with parametrize

* Revert verbose unpack comment to original single-line version

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…160)

* uses new coo handling

* added comment

* fix pre-commit

* fix error message if sample bounds are not set (#161)
* Remove conda from CI, use uv + system IPOPT

Replace conda-based test_nlp_solvers workflow with uv, installing IPOPT
via system packages (apt on Ubuntu, brew on macOS) instead of conda-forge.
Uncomment IPOPT optional dependency in pyproject.toml so uv sync --extra
IPOPT works. Update installation docs in CLAUDE.md and README.md.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Keep IPOPT extra commented out to avoid --all-extras breakage

The test_optional_solvers workflow uses uv sync --all-extras, which would
try to build cyipopt without system IPOPT installed. Instead, install
cyipopt directly via uv pip in test_nlp_solvers where the system library
is available.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Remove UV_SYSTEM_PYTHON to fix externally-managed env error

uv pip install fails on Ubuntu when UV_SYSTEM_PYTHON=1 because the
system Python is externally managed. Removing it lets uv use its own
venv created by uv sync.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Install LAPACK and BLAS dev libraries for cyipopt build on Ubuntu

cyipopt links against LAPACK and BLAS which are not installed by default
on Ubuntu runners.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Use uv venv + uv pip install instead of uv sync for NLP workflow

uv sync manages a locked environment that doesn't play well with
uv pip install for additional packages. Switch to uv venv + uv pip
install to manage the environment directly.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Includes upstream fixes for quad_form canonicalization, gradient handling,
LDL factorization optimization, CI updates, and documentation improvements.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Same pattern as test_qcp — Uno's default FilterSQP preset struggles
with these problems, but the IPM preset converges fine.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The macOS wheel has an ABI mismatch (cvanaret/Uno#485) that causes
a silent import failure. This step makes it visible in CI logs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
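A diagnostic step of the kind described could look like this (an illustrative sketch; only the module name "unopy" comes from the PR, the helper itself is an assumption):

```python
# Attempt the import and print the full error, so a broken wheel shows up
# in CI logs instead of failing silently later in the test suite.
import importlib
import traceback


def check_import(module_name: str) -> bool:
    """Return True if the module imports cleanly; otherwise print the
    traceback (e.g. a dlopen/ABI error) and return False."""
    try:
        importlib.import_module(module_name)
        print(f"{module_name}: import OK")
        return True
    except ImportError:
        print(f"{module_name}: import FAILED")
        traceback.print_exc()
        return False


if __name__ == "__main__":
    check_import("unopy")
```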
github-actions bot commented Mar 2, 2026

Benchmarks that have stayed the same:

   before           after         ratio
 [b75b7cbd]       [65f8dc99]
      231±0ms          241±0ms     1.04  gini_portfolio.Murray.time_compile_problem
      136±0ms          142±0ms     1.04  high_dim_convex_plasticity.ConvexPlasticity.time_compile_problem
      496±0ms          516±0ms     1.04  semidefinite_programming.SemidefiniteProgramming.time_compile_problem
      316±0ms          323±0ms     1.02  gini_portfolio.Yitzhaki.time_compile_problem
      1.02±0s          1.04±0s     1.02  finance.FactorCovarianceModel.time_compile_problem
      4.00±0s          4.06±0s     1.02  huber_regression.HuberRegression.time_compile_problem
      12.7±0s          12.8±0s     1.02  finance.CVaRBenchmark.time_compile_problem
      4.64±0s          4.70±0s     1.01  svm_l1_regularization.SVMWithL1Regularization.time_compile_problem
      21.5±0s          21.7±0s     1.01  sdp_segfault_1132_benchmark.SDPSegfault1132Benchmark.time_compile_problem
      281±0ms          284±0ms     1.01  matrix_stuffing.ParamSmallMatrixStuffing.time_compile_problem
      5.03±0s          5.07±0s     1.01  optimal_advertising.OptimalAdvertising.time_compile_problem
      1.60±0s          1.61±0s     1.00  tv_inpainting.TvInpainting.time_compile_problem
      3.03±0s          3.04±0s     1.00  quantum_hilbert_matrix.QuantumHilbertMatrix.time_compile_problem
      1.40±0s          1.41±0s     1.00  matrix_stuffing.ParamConeMatrixStuffing.time_compile_problem
      689±0ms          689±0ms     1.00  matrix_stuffing.ConeMatrixStuffingBench.time_compile_problem
      283±0ms          282±0ms     1.00  slow_pruning_1668_benchmark.SlowPruningBenchmark.time_compile_problem
     14.6±0ms         14.6±0ms     1.00  simple_LP_benchmarks.SimpleFullyParametrizedLPBenchmark.time_compile_problem
      1.02±0s          1.02±0s     1.00  gini_portfolio.Cajas.time_compile_problem
      240±0ms          239±0ms     1.00  simple_QP_benchmarks.SimpleQPBenchmark.time_compile_problem
      1.83±0s          1.82±0s     0.99  simple_QP_benchmarks.UnconstrainedQP.time_compile_problem
      10.7±0s          10.6±0s     0.99  simple_LP_benchmarks.SimpleLPBenchmark.time_compile_problem
     39.5±0ms         39.0±0ms     0.99  matrix_stuffing.SmallMatrixStuffing.time_compile_problem
     15.4±0ms         15.2±0ms     0.99  simple_QP_benchmarks.ParametrizedQPBenchmark.time_compile_problem
      912±0ms          901±0ms     0.99  simple_LP_benchmarks.SimpleScalarParametrizedLPBenchmark.time_compile_problem
      780±0ms          767±0ms     0.98  simple_QP_benchmarks.LeastSquares.time_compile_problem

@Transurgeon Transurgeon marked this pull request as draft March 2, 2026 00:33
cvanaret commented Mar 2, 2026

Hi @Transurgeon,
Could you try to install unopy with pip instead of uv pip, so that we can rule out uv as a culprit?

Transurgeon (Member, Author) replied:

> Hi @Transurgeon, Could you try to install unopy with pip instead of uv pip, so that we can rule out uv as a culprit?

I tried locally and it still doesn't work, unfortunately.
The AI says it has something to do with being unable to link to HiGHS, since there is an ABI mismatch.
I tried uninstalling the Homebrew HiGHS, but that didn't help either.

cvanaret commented Mar 2, 2026

@amontoison do you think it's trying to link some other HiGHS library that exists somewhere else?

@amontoison commented:

With the latest release of unopy, I do static linking with all libraries on Mac and Linux.
Are you sure you are using the precompiled artifacts?
What version of unopy are you using?

The AI is saying something totally wrong.
I am worried that someone is using -lstdc++ instead of -lc++, but that should only happen with a local build.

Transurgeon (Member, Author) commented Mar 2, 2026

> With the latest release of unopy, I do a static linking with all libraries on Mac and Linux. Are you sure to use the precompiled artifacts?

Yes, I believe so. I think pip automatically tries to fetch precompiled artifacts.

> What is the version of unopy that you are using?

It is installing the latest, which is 0.2.1.

> The AI is saying something totally wrong. I am scared that someone is using -lstdc++ instead of -lc++ but it should only happen with a local build.

I totally trust you, but it would be nice if you could double-check that. Maybe the HiGHS binaries were built with -lstdc++ by accident?

(DNLP) ➜  DNLP git:(retry-uno-CI) ✗ python -c "import unopy; print('OK')"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
    import unopy; print('OK')
    ^^^^^^^^^^^^
ImportError: dlopen(/Users/willizz/Documents/DNLP/.venv/lib/python3.13/site-packages/unopy.cpython-313-darwin.so, 0x0002): symbol not found in flat namespace '__ZN5Highs14setOptionValueERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPKc'

It seems like part of the symbol contains "cxx", which is potentially related to a libstdc++ installation?
Sorry for using the AI to investigate this; I am not familiar with the different ways of building wheels.
It does seem to be a weird error, though, since on Linux there are no issues at all.
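For reference, the "__cxx11" fragment in that unresolved symbol is indeed the libstdc++ tell: it is the inline namespace libstdc++'s dual ABI uses for std::basic_string, and libc++ never emits it. A minimal check (the symbol string is copied from the traceback above; on macOS one would typically also inspect the wheel's linked libraries with otool -L and demangle the name with c++filt):

```python
# "__cxx11" in an Itanium-mangled name is the inline namespace that
# libstdc++ uses for its post-C++11 std::string/std::list ABI; libc++
# (Clang's runtime on macOS) never emits it. Spotting it in an unresolved
# symbol is a quick sign that a libstdc++-built object is being mixed
# with libc++ code.
symbol = ("__ZN5Highs14setOptionValueERKNSt7__cxx1112basic_string"
          "IcSt11char_traitsIcESaIcEEEPKc")

uses_libstdcxx_string_abi = "__cxx11" in symbol
print("built against the libstdc++ string ABI:", uses_libstdcxx_string_abi)
```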

cvanaret commented Mar 3, 2026

@Transurgeon we did have an issue with the wheels workflow for macOS, it compiled with full GNU instead of Clang+gfortran.
It is now fixed in unopy 0.2.2. Can you re-run your workflow?

Transurgeon (Member, Author) replied:

> @Transurgeon we did have an issue with the wheels workflow for macOS, it compiled with full GNU instead of Clang+gfortran. It is now fixed in unopy 0.2.2. Can you re-run your workflow?

It seems to work now, thanks a lot @cvanaret @amontoison!

cvanaret commented Mar 3, 2026

@Transurgeon great :) I'm happy to address the issues with the ipopt preset (many iterations required) in a separate PR.

amontoison commented Mar 3, 2026

Thanks @cvanaret !
I cross-compiled HiGHS and the other dependencies with clang / clang++, so they expect Uno to be linked with libc++ and not libstdc++.
By using gcc / g++ you created a hybrid beast 🙂
