diff --git a/.github/workflows/webui_tests.yaml b/.github/workflows/webui_tests.yaml new file mode 100644 index 000000000..622b0e55d --- /dev/null +++ b/.github/workflows/webui_tests.yaml @@ -0,0 +1,36 @@ +name: "WebUI tests" + +on: + pull_request: + branches: [master] + paths: + - "webui/**" + push: + branches: [master] + paths: + - "webui/**" + +jobs: + test: + runs-on: ubuntu-latest + + defaults: + run: + working-directory: webui + + steps: + - name: Checkout repository + uses: actions/checkout@v6 + + - name: Setup Node.js + uses: actions/setup-node@v4 + with: + node-version: 24 + cache: "npm" + cache-dependency-path: webui/package-lock.json + + - name: Install dependencies + run: npm install + + - name: Run unit tests + run: npx vitest run --config vitest.config.ts diff --git a/CHANGELOG.md b/CHANGELOG.md index 63d96aa7f..5cc9f1957 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -3,6 +3,7 @@ All notable changes to **DESDEO** are documented in this file. This project follows **Keep a Changelog** and **Semantic Versioning**: + - Keep a Changelog: https://keepachangelog.com/en/1.1.0/ - SemVer: https://semver.org/ - Changelogs are generated using an LLM and curated by a human @@ -10,35 +11,237 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: ## [Version template] ### Core logic + #### Added + #### Changed + #### Deprecated + #### Removed + #### Fixed + #### Security ### Web API + #### Added + #### Changed + #### Deprecated + #### Removed + #### Fixed + #### Security ### Web GUI + #### Added + #### Changed + #### Deprecated + #### Removed + #### Fixed + #### Security --- +## [2.4.0] - 14.4.2026 + +### Core logic + +#### Added + +- Added **CVXPY solver** support with full integration into the problem/solver + stack (#466). +- Updated `guess_best_solver` to include CVXPY and to check for a valid Gurobi + license (#468). + +#### Changed + +- Updated **gurobipy** implementation: tensor constants can now be used without + errors (#464). 
+ +#### Fixed + +- Fixed infinite loop in `dmitry_forest_problem_disc()` (use relative path + instead of walking up to the `DESDEO` folder), which also caused + `conftest.py` to hang. + +--- + +### Web API + +#### Added + +- Added **NAUTILUS Navigator** endpoints `/initialize` and `/navigate`, with + `NautilusNavigatorInitializationState` and `NautilusNavigatorNavigationState` + states and unit tests. +- Added **constrained variant** endpoint + `POST /problem/{problem_id}/constrained_variant` with `VariableFixing`, + `ConstrainedVariantRequest`, and `ConstrainedVariantResponse` models. +- Added `is_temporary` and `parent_problem_id` fields to `ProblemDB` for + variant tracking. +- Added **site-selection** endpoints `POST /site-selection/load_metadata` and + `POST /site-selection/map`, backed by a new `SiteSelectionMetaData` DB model + (follows the `ForestProblemMetaData` pattern). +- Added **E-NAUTILUS what-if simulation** endpoint + `POST /method/enautilus/simulate` (ephemeral, no DB writes) with + `ENautilusSimulateRequest`/`Response`/`StepResult` models and a + `deprioritize` flag for worst-case selection. +- Added endpoint returning the list of **DM users**. +- Added `/add_new_dm` endpoint (restricted to analyst/admin) and enabled + analysts/admins to create and manage problems and interactive sessions on + behalf of DMs. +- Added `fetch_problem_with_role_check` and + `fetch_interactive_session_with_role_check` helpers in `utils.py` for + role-aware ownership bypass. +- Added `?target_user_id=` parameter on `POST /session/new` for analysts to + create sessions on behalf of DMs. +- Added endpoint returning an HTTP exception based on an error code. +- Added tests for the constrained-variant endpoint, the site-selection + endpoint, and DM-user/problem-ownership behavior. 
+ +#### Changed + +- Refactored **NAUTILUS Navigator** to follow the E-NAUTILUS patterns: replaced + legacy manual auth/problem loading with `SessionContextGuard`, moved + request/response models from `schemas/` to `models/` as `SQLModel`, used + `Problem.from_problemdb()`, and walked `StateDB.parent` for session-scoped + parent-chain traversal. +- Deleted `desdeo/api/schemas/` (no longer needed after the model migration). +- Session/problem listing endpoints now return all sessions/problems for + analysts (own only for DMs); empty lists return `200 + []` instead of `404`. +- Problem action endpoints (delete, solver, JSON, representative sets) now + allow analyst access to any problem. +- Simplified `DELETE /problem/{problem_id}`: owners can delete any of their + problems. +- Restricted `POST /add_new_dm` to analyst/admin (was previously public). +- Removed the legacy `/clinic/` router, `ClinicMapRequest`/`Response` models, + and `clinic_map.py` in favor of the metadata-driven site-selection map. +- Added `401` and `500` login responses to the API/OpenAPI generator. + +#### Fixed + +- Fixed `SessionContextGuard` HTTP status codes: `404` for missing resources, + `403` for ownership failures (previously `400` for both). +- Fixed `InteractiveSessionBase` missing `from_attributes=True`, which caused + `POST /session/new` to return `{}`. +- Fixed swapped arguments in `fetch_interactive_session` (utils.py). +- Fixed `test_delete_problem_unauthorized` and `test_constraint_variant` to + expect the correct `403`/`404` status codes. + +--- + +### Web GUI + +#### Added + +- Added rudimentary **Manage Users** page (`/manage-users`) for analysts and + admins to create DM and analyst accounts; route is server-side guarded and nav + visibility is role-based (auth store populated on topbar mount). 
+- Added analyst/admin management of other users' interactive sessions and + problems: user filter dropdown (default "Myself"), Owner column/field, and + "Create for" / DM selectors on session and problem creation. +- Added **map metadata upload** dialog on the Problems page (JSON file + upload); renamed `clinic-map.svelte` to `site-selection-map.svelte` which + now accepts a `problem_id` prop. +- Added **map interaction** in E-NAUTILUS: click site markers to cycle + free → restricted → forced; multiple sites per city node are constrained + together; re-solve with RPM using E-NAUTILUS final objectives as aspiration + levels; constrained variant inherits the parent problem's solver metadata. +- Added comparison bottom panel with tabbed **Parallel Coordinates / + Comparison Table** views; solution tab shows the constrained solution with + the original as a dashed comparison line; "Use this solution" accepts the + constrained result, "Reset" reverts to the original. +- Added **what-if scenario simulation** in the Decision Journey view: "best" + (blue) and "worst" (red) buttons per objective, dashed-line overlays with + hollow-circle markers, and an actual-vs-what-if comparison table with + colored deltas. +- Added catch-all proxy route `/api/[...path]` that forwards browser API + calls through `event.fetch` so `handleFetch` intercepts for cookie + injection and `401`/token-refresh handling. +- Added generated Orval endpoints for NAUTILUS Navigator and enabled Orval + Zod body-schema generation. +- Added login error display after unsuccessful login attempts. + +#### Changed + +- Replaced the hardcoded clinic map system with the new metadata-driven + site-selection map. +- Unrolled tensor variables in RPM results for frontend compatibility + (e.g. `sv` → `sv_1..sv_60`). 
+- Updated API client to use correct URLs for debug/deploy; `getUrl` now + returns a relative `/api/` on the browser side, with a + `http://localhost:8000` fallback detected via `typeof window === 'undefined'` + on the server side. +- Removed client-side `401` retry logic from `customFetch` — now handled + server-side by `handleFetch` via the proxy; removed the dead Vite `/api` + proxy config from `vite.config.ts`. +- Replaced `$effect` with an explicit `oninput` handler for clearing login + errors and removed a debug `console.log` from the login server action. +- Default Decision Journey visualization pane height raised to 65% for more + chart room; selected iteration highlighted with a vertical band. +- Regenerated Orval clients multiple times to pick up new endpoints. + +#### Fixed + +- Fixed various URL-resolving and access-token cookie issues affecting + deployment. +- Fixed Orval import mismatches after regeneration + (`getProblemProblemGetPost`, `addRepresentativeSolutionSet`). + +--- + +### Documentation + +#### Added + +- Added a guide on how to deploy the fullstack on **OpenShift** using the + `oc` tool from a terminal. + +#### Changed + +- Updated deployment documentation and polished deployment files. +- Updated the `webui` README (including the `API_BASE_URL` fix). + +--- + +### Tooling, CI, and deployment + +#### Added + +- Added a **`justfile`** and the `rust-just` dependency — `just` is a + drop-in cross-platform replacement for `make` and is available on PyPI. +- Added a Python version of `run_fullstack.sh` so it runs on all + Python-compatible platforms. +- Added a GitHub workflow for running the Web GUI tests with Vite. +- Added `secrets` to `.gitignore`. + +#### Changed + +- Updated docs so that references to `make`/`Makefile` now point to + `just`/`justfile`. +- Updated build and Dockerfiles; switched `npm ci` to `npm install` in + `webui/Dockerfile`. +- General Rahti deployment testing and configuration updates. 
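The Python port of `run_fullstack.sh` mentioned above could, in its simplest form, spawn the web-API and the web-GUI dev server as subprocesses. The exact commands and flags below are assumptions based on the deployment docs, not the actual script.

```python
# Minimal sketch of a cross-platform fullstack runner: one subprocess for
# the web-API (uvicorn) and one for the web-GUI dev server (npm).
# Command lines are illustrative assumptions.
import subprocess
import sys


def fullstack_commands() -> list[list[str]]:
    """Build the two command lines; kept separate so they can be tested."""
    return [
        [sys.executable, "-m", "uvicorn", "--app-dir", "./desdeo/api",
         "app:app", "--reload"],
        ["npm", "--prefix", "webui", "run", "dev"],
    ]


def run_fullstack() -> list[subprocess.Popen]:
    """Start both processes and return their handles."""
    return [subprocess.Popen(cmd) for cmd in fullstack_commands()]
```

Running `run_fullstack()` from a terminal would then serve both halves of the stack, mirroring what `just fullstack` does.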
+ +--- + ## [2.3.0] - 31.3.2026 ### Core logic + #### Added + - Added `lagrange_multipliers` field to `SolverResults`; extract duals from Ipopt, Bonmin, and Gurobi solvers. - Added `desdeo/explanations/lagrange.py` with filter, tradeoff, and @@ -46,6 +249,7 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: - Added unit tests for Lagrange utilities and solver integration tests. #### Fixed + - Fixed a bug where `ScipyMinimizeSolver` and `ScipyDeSolver` returned incorrect solutions for problems with `maximize=True` objectives. - Fixed infinite loop in the Dmitry forest problem (use @@ -54,7 +258,9 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: --- ### Web API + #### Added + - Added `get-multipliers-info` endpoint to the NIMBUS router. - Added XNIMBUS (Explainable NIMBUS) `StateKinds`, state fields, and API models (`NIMBUSMultiplierRequest`/`Response`). @@ -62,6 +268,7 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: - Added XNIMBUS API endpoint tests. #### Changed + - Removed singleton request model classes (`ProblemGetRequest`, `GetSessionRequest`, `RepresentativeSolutionSetRequest`) and refactored endpoints to use query parameters instead. @@ -72,6 +279,7 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: called with request/session reversed). #### Fixed + - Fixed cookies being set to `Secure` in dev mode, which broke the UI over HTTP connections (e.g., WSL). The `Secure` flag is now derived from the SvelteKit `dev` environment setting. @@ -79,7 +287,9 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: --- ### Web GUI + #### Added + - Added XNIMBUS (Explainable NIMBUS) frontend route with explainable layout, advanced sidebar, explanation bar charts, and help dialog. - Added Explainable NIMBUS to the method selection/initialize view. 
@@ -90,6 +300,7 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: - Added page titles and meta descriptions to all frontend routes. #### Changed + - Fully migrated to Orval-generated endpoints: replaced all `api.GET`/`api.POST` calls with Orval-generated endpoint functions across 19 files (problems, groups, NIMBUS, XNIMBUS, SCORE-bands, GDM-SCORE-bands, GNIMBUS, EMO routes). @@ -107,6 +318,7 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: - Renamed "Worsen until" to "Impair until" in NIMBUS constants. #### Fixed + - Fixed Orval GET-with-body error (pass `undefined` instead of `null` for optional body params). - Fixed stale Orval imports in E-NAUTILUS, problems/handler, and @@ -117,10 +329,13 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: --- ### Documentation + #### Added + - Added guide on how to implement method interfaces in the GUI. #### Changed + - Updated contributing guide to include instructions for running tests without `make`. - Updated installation documentation for DESDEO on JYU Windows. @@ -137,12 +352,15 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: ## [2.2.2] - 17.2.2026 ### Core logic -*(No core-logic changes in this release.)* + +_(No core-logic changes in this release.)_ --- ### Web API + #### Added + - Added **E-NAUTILUS finalize** state and endpoint. - Added `ENautilusTreeNodeResponse`, `ENautilusDecisionEventResponse`, and `ENautilusSessionTreeResponse` models. - Added `GET /method/enautilus/session_tree/{session_id}` endpoint returning all nodes, edges, root IDs, and decision events. @@ -152,12 +370,14 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: - Added unit test for problem deletion. #### Changed + - Introduced **`SessionContextGuard`** class, replacing `get_session_context` and `get_session_context_without_request` across all endpoint routers. - Enabled `PRAGMA foreign_keys=ON` for SQLite connections. 
- Added `back_populates` for `StateDB→ProblemDB` relationship. - Added defensive cleanup in `_attach_substate` for existing orphaned rows. #### Fixed + - Fixed E-NAUTILUS endpoints to work with the new `SessionContextGuard`. - Fixed orphaned substate rows causing `UNIQUE` constraint errors after session deletion. - Added missing docstrings and fixed typos in API models and router utilities. @@ -165,7 +385,9 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: --- ### Web GUI + #### Added + - Added experimental new features for E-NAUTILUS related to tree and decision visualization. - Added generic **`EndStateView`** component (reuses `FinalResultTable` + CSV download). - Added **representative solution set selector** to E-NAUTILUS initialization panel; auto-selects when only one set exists, shows warning when none available. @@ -175,6 +397,7 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: - Added `syncProblem()` to `methodSelection` store (updates `problem_id` without clearing session/method). #### Changed + - Replaced E-NAUTILUS final JSON view with `EndStateView`; show only the selected solution. - Replaced final view toggle buttons with proper **Tabs** component (experimental). - Replaced dialog-based formulation views with inline expandable rows in Objectives, Constraints, and Extra Functions tabs. @@ -183,6 +406,7 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: - Regenerated API client types for new E-NAUTILUS and delete endpoints. #### Fixed + - Fixed problem context mismatch when resuming E-NAUTILUS state from a different problem. - Fixed all `state_referenced_locally` Svelte 5 warnings across 11 components. - Fixed Svelte 5 `$state` warnings for `bind:this` element references. 
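The `PRAGMA foreign_keys=ON` change noted above matters because SQLite leaves foreign-key enforcement off per connection by default, which is exactly how orphaned substate rows can appear silently. A plain-`sqlite3` illustration (schema is hypothetical, not DESDEO's):

```python
# SQLite does not enforce foreign keys unless the pragma is enabled on the
# connection, so an orphan insert succeeds silently by default.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript(
    """
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child (
        id INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES parent(id)
    );
    """
)

# Without the pragma, this orphan row is accepted without complaint.
con.execute("INSERT INTO child (parent_id) VALUES (999)")
con.commit()  # the pragma is a no-op inside an open transaction

con.execute("PRAGMA foreign_keys=ON")
try:
    con.execute("INSERT INTO child (parent_id) VALUES (1000)")
    enforced = False
except sqlite3.IntegrityError:
    enforced = True  # now the orphan insert is rejected
```

Enabling the pragma on every connection (e.g., via a SQLAlchemy connect event) is the usual way to get this enforcement consistently.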
@@ -194,14 +418,18 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: ## [2.2.1] - 10.2.2026 ### Core logic + #### Fixed + - Fixed an issue in `PyomoEvaluator` where **equality-type (EQ) constraints were not added correctly** to the Pyomo model. - Fixed a small bug in **GDM SCORE band computation** introduced in a previous change. --- ### Web API + #### Added + - Added support for **Representative Solution Sets**: - Introduced `RepresentativeSolutionSetRequest`. - Added endpoints to create representative solution sets. @@ -210,6 +438,7 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: - Added corresponding **unit tests** for representative solution set endpoints. #### Changed + - Refactored representative solution set endpoints to use `get_session_context`. - Removed return payload from delete endpoints where it was unnecessary. - Minor refactoring and cleanup of API router structure. @@ -217,7 +446,9 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: --- ### Documentation + #### Added + - Added **`llms.txt`** and **`llms-full.txt`** documentation files and integrated them into the documentation build. - Added Read the Docs-specific configuration to ensure **notebooks are executed during RTD builds**. - Added new Makefile rules for documentation: @@ -225,6 +456,7 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: - `docs-rtd`: build docs with notebook execution (RTD parity). #### Changed + - Stripped outputs from all notebooks using `nbstripout`. - Fixed issues with notebooks not executing correctly during documentation builds. - Polished and reorganized documentation structure (WIP). @@ -237,10 +469,13 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: --- ### Tooling, CI, and linting + #### Added + - Added `nbstripout` as a **pre-commit hook**. 
#### Changed + - Updated Read the Docs configuration to **install solver binaries** so that optimization notebooks can be executed during documentation builds. - Refined pre-commit and ruff configuration: - Ignored selected boolean-argument warnings. @@ -250,6 +485,7 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: --- ### Notes + - This iteration is **documentation- and infrastructure-heavy**, focusing on reproducibility, documentation execution fidelity, and API extensibility. - Representative Solution Sets introduce new **decision-support abstractions** that are expected to evolve as research workflows mature. - Several changes are **WIP and research-driven**, and interfaces may still change. @@ -259,19 +495,24 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: ## [2.2.0] - 2026-02-03 ### Core logic + #### Added + - Added a **seeded population generator** for evolutionary multiobjective optimization: - Supports generating populations by perturbing a provided seed solution. - Allows controlling the ratio of seeded vs. random solutions. - Supports all variable types. #### Fixed + - Fixed issues in the seeded generator implementation discovered during integration and testing. --- ### Web API + #### Changed + - Refactored session context handling: - `get_session_context` is now used consistently in the endpoints. - Removed redundant checks around interactive state retrieval; invalid states now raise immediately. @@ -281,13 +522,16 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: --- ### Web GUI + #### Added + - Added a **problem definition page** (WIP), including: - Initial problem definition form. - Iterative refinements to structure and validation. - Added initial support for **selecting interactive sessions** in the method-selection view, analogous to problem selection. #### Changed + - Improved E-NAUTILUS session startup and validation logic. 
- Updated session and startup inputs for E-NAUTILUS. - Fixed missing session information in the E-NAUTILUS `selection` state. @@ -295,12 +539,15 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: - Improved authentication flow by refreshing access tokens on `401` responses and updating cookies accordingly. #### Known issues + - Deleting sessions can currently break NIMBUS if it attempts to access states from a deleted session. --- ### Tests + #### Changed + - Improved `Makefile` test rules: - Better handling of skipped tests. - Added a dedicated `make test-api` rule. @@ -310,7 +557,9 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: --- ### Tooling, CI, and linting + #### Changed + - Applied extensive **ruff-based linting fixes** across the codebase. - Cleaned up code to satisfy pre-commit hooks. - Minor workflow and maintenance-related refinements. @@ -318,7 +567,9 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: --- ### Documentation + #### Changed + - Updated `README.md`: - Clarified the development status of the Web GUI. - Updated installation instructions. @@ -329,6 +580,7 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: --- ### Notes + - Several components (notably the Web GUI and session management) remain **actively evolving research prototypes**. --- @@ -336,25 +588,32 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: ## [2.1.1] - 2026-01-27 ### Core logic + #### Changed + - Updated `CBCOptions` parameter name from `sec` to `seconds` to better reflect semantics. - Removed obsolete pytest tags (`nogithub`) as part of test cleanup. --- ### Web API -*(No functional API changes in this release.)* + +_(No functional API changes in this release.)_ --- ### Web GUI + #### Added + - Added a **barebones page for adding and inspecting interactive sessions** (WIP). 
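The seeded population generator introduced under 2.2.0 above can be sketched as: perturb a provided seed solution for a controllable fraction of the population and sample the remainder at random within the box constraints. Parameter names and the Gaussian perturbation are illustrative assumptions, not DESDEO's actual API.

```python
# Illustrative seeded population generator: `seeded_ratio` controls the
# fraction of solutions derived from the seed; the rest are uniform random.
import random


def seeded_population(seed, bounds, size, seeded_ratio=0.5, spread=0.1,
                      rng=None):
    """Return `size` solutions; about `seeded_ratio` of them near `seed`."""
    rng = rng or random.Random()
    n_seeded = round(size * seeded_ratio)
    population = []
    for i in range(size):
        if i < n_seeded:
            # Perturb the seed within a small neighbourhood, clipped to bounds.
            sol = [
                min(max(x + rng.gauss(0.0, spread * (hi - lo)), lo), hi)
                for x, (lo, hi) in zip(seed, bounds)
            ]
        else:
            # Sample uniformly from the box constraints.
            sol = [rng.uniform(lo, hi) for lo, hi in bounds]
        population.append(sol)
    return population
```

The real generator also supports non-continuous variable types; this sketch covers only the continuous case.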
--- ### Tests + #### Changed + - Cleaned up pytest markers: - Re-added `githubskip` where appropriate. - Removed excessive pytest tags that were unintentionally skipping tests. @@ -364,13 +623,16 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: --- ### Workflows / CI + #### Added + - Added a **pre-commit hook GitHub workflow** that runs checks only on changed files in pull requests. - Integrated **ruff-based linting and formatting** via pre-commit hooks. - For details about pre-commit hooks (how to install and activate), see [the documentation](https://desdeo.readthedocs.io/en/latest/tutorials/contributing/#pre-commit-hooks). #### Changed + - Updated unit test workflow: - Now runs on `master` branch commits and pull requests. - Switched dependency management from `pip` to **`uv`**. @@ -379,7 +641,9 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: --- ### Tooling and linting + #### Changed + - Removed `isort` in favor of **ruff-only** linting and formatting. - Refined ruff configuration in `pyproject.toml` to be more sensible in practice. - Fixed minor linting issues revealed by enabling pre-commit hooks. @@ -387,13 +651,16 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: --- ### Documentation + #### Changed + - Updated `README.md`. - Updated contributing documentation with instructions and expectations for using pre-commit hooks. --- ### Notes + - This release focuses on **developer experience, CI correctness, and test hygiene**. - No user-facing web-API changes are expected. - GUI additions are **early-stage and exploratory**. @@ -403,48 +670,60 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: ## [2.1.0] - 2026-01-22 ### Core logic + #### Added + - Added **Group NIMBUS (GNIMBUS)** method to the core optimization framework. - Added new ADM variants (ADM2, ADMAfsar) and refactored the ADM codebase with clearer base abstractions. 
- Started work on a **resolver–provider–schema–based mechanism** for connecting to external test problem libraries without relying on HTTP (initial support for Pymoo-based problems). - Added preliminary infrastructure for improved solver metadata handling. #### Changed + - Reworked the computation of intermediate solutions (WIP) to improve numerical robustness and alignment with theoretical formulations. - Refactored solver and reference vector handling as part of ongoing structural cleanup. #### Fixed + - Fixed division-by-zero issues in the GUESS scalarization (implementation previously deviated from the paper). - Fixed circular imports, mismatched parentheses, and multiple small numerical and structural issues discovered during refactoring. #### Notes + - Several core-logic changes in this release are **experimental** and may evolve as methodological work continues. --- ### Web API + #### Added + - Added new endpoints and request models for stepping, reverting, and inspecting **E-NAUTILUS** iterations (WIP). - Added endpoints for retrieving all iterations, votes, confirmations, and intermediate solutions in group decision-making workflows. - Added support for defining optimization problems via **JSON file uploads**. - Added new example problems to database initialization scripts. #### Changed + - Refactored database session handling to propagate active sessions explicitly instead of relying on implicit generators. - Improved authentication handling: access and refresh tokens are now supported via cookies and explicit `Authorization` headers. - Continued work on multi-valued constraint handling and intermediate-result computation. #### Fixed + - Fixed multiple issues in Group NIMBUS–related endpoints, including incorrect group ID usage and broken tests. - Fixed API inconsistencies in `get_or_initialize`, utopia, and intermediate-solution endpoints. 
#### Notes + - Several endpoints and request/response models should be considered **unstable** and subject to change as research workflows mature. --- ### Web GUI + #### Added + - Added initial **GNIMBUS UI** (WIP), including: - Group and method selection views. - Voting-based decision and compromise phases. @@ -454,17 +733,20 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: - Added new visualizations, including vote bar charts and extended parallel coordinate plot features. #### Changed + - Migrated OpenAPI client generation to **Orval 8.x** and regenerated client models. - Refactored preference sidebars, solution tables, and layout components for clarity and experimentation. - Continued work on authentication and session handling in the GUI (cookies + token-based access). - Iteratively refined dashboards, top bars, and phase-specific views. #### Fixed + - Fixed WebSocket reconnection and shutdown handling. - Fixed phase update propagation, layout resizing issues, tooltip rendering, and visualization inconsistencies. - Fixed UI behavior during compromise and voting phases (e.g., disabled invalid actions, improved feedback). #### Notes + - Large parts of the GUI remain **research prototypes**; UI structure, naming, and interaction patterns may change rapidly. --- @@ -472,13 +754,19 @@ This project follows **Keep a Changelog** and **Semantic Versioning**: ## [2.0.0] - 2025-05-16 ### Core logic + #### Added + - Initial 2.0 release. ### Web API + #### Added + - Initial 2.0 release. ### Web GUI + #### Added + - Initial 2.0 release. diff --git a/Makefile b/Makefile index 55d67d82d..b8b66d528 100644 --- a/Makefile +++ b/Makefile @@ -15,6 +15,8 @@ # # test-failures: rerun the last failures only. # +# test-webui: run the web UI unit tests (vitest). +# # fullstack: run the web-API and web-GUI for local development. 
# Pytest conf (defined with `?=` can be overridden, e.g., `make test @@ -41,6 +43,9 @@ test-changes: test-failures: $(PYTEST) --lf $(PYTEST_SKIP) $(PYTEST_OPTS) +test-webui: + cd webui && npx vitest run --config vitest.config.ts + fullstack: ./run_fullstack.sh diff --git a/README.md b/README.md index 13b0f0698..2246cf1ac 100644 --- a/README.md +++ b/README.md @@ -1,7 +1,8 @@ ![Latest release](https://img.shields.io/github/v/release/industrial-optimization-group/DESDEO?label=Latest%20release) [![PyPI version](https://img.shields.io/pypi/v/desdeo?label=PyPI)](https://pypi.org/project/desdeo/) [![Documentation Status](https://img.shields.io/readthedocs/desdeo.svg?version=desdeo2&label=Documentation)](https://desdeo.readthedocs.io/en/latest/) -![Tests](https://img.shields.io/github/actions/workflow/status/industrial-optimization-group/DESDEO/unit_tests.yaml?branch=master&label=Tests) +![Core-logic / Web-API Tests](https://img.shields.io/github/actions/workflow/status/industrial-optimization-group/DESDEO/unit_tests.yaml?branch=master&label=Core-logic%20and%20Web-API%20Tests) +![Web-UI Tests](https://img.shields.io/github/actions/workflow/status/industrial-optimization-group/DESDEO/webui_tests.yaml?branch=master&label=WebUI%20Tests) [![Discord](https://img.shields.io/discord/1382614276409266206?style=flat&label=Join%20our%20Discord&labelColor=%237289da)](https://discord.gg/uGCEgQTJyY) diff --git a/desdeo/api/routers/user_authentication.py b/desdeo/api/routers/user_authentication.py index dcaa7941b..fc4515ac2 100644 --- a/desdeo/api/routers/user_authentication.py +++ b/desdeo/api/routers/user_authentication.py @@ -374,7 +374,11 @@ def get_current_user_info(user: Annotated[User, Depends(get_current_user)]) -> U return user -@router.post("/login", response_model=Tokens) +@router.post("/login", response_model=Tokens, responses={ + 401: {"description": "Incorrect username or password"}, + 500: {"description": "Server unavailable"} +}) + def login( form_data: 
Annotated[OAuth2PasswordRequestForm, Depends()], session: Annotated[Session, Depends(get_session)], diff --git a/docs/howtoguides/api_and_gui.md b/docs/howtoguides/api_and_gui.md index 797357f2d..c25f556e4 100644 --- a/docs/howtoguides/api_and_gui.md +++ b/docs/howtoguides/api_and_gui.md @@ -17,7 +17,9 @@ dependencies to be installed. Below, instructions for setting up and running bot the web-API and web-GUI are provided. ## Web-API + ### Web-API prerequisites + Install the `web` dependencies with the following command if you have not already done so: === "Poetry" @@ -38,6 +40,7 @@ files for the web-API are located in the `desdeo/api` directory. ### Setting up the database + The api needs a database to run, and the database connection is configured in the `desdeo/api/db.py` file. The default database connection is to a SQLite database (but can be changed to a more robust solution for @@ -56,9 +59,10 @@ uvicorn --app-dir=./desdeo/api/ app:app --reload ``` ## Web-GUI + ### Web-GUI prerequisites -The source files for the web-GUI are located in the `webui` folder at the __root-level__ +The source files for the web-GUI are located in the `webui` folder at the **root-level** of the project. Unlike the core-logic and the web-API, the web-GUI is developed in TypeScript. It is assumed that the node package manager `npm` ([link](https://www.npmjs.com/)) is available on the system. Use `npm` to install the web-GUI's dependencies @@ -99,15 +103,15 @@ After a successful setup, the web-API and web-GUI can be readily executed with one command: ```bash -./run_fullstack.sh +python run_fullstack.py ``` -There is also an equivalent make-rule: +There is also an equivalent `just` recipe: ```bash -make fullstack +just fullstack ``` If everything works as expected, we should now see debug output in our terminal for both the web-API and web-GUI, and the web-GUI should open in a new tab in your -default browser. \ No newline at end of file +default browser. 
diff --git a/docs/tutorials/contributing.md b/docs/tutorials/contributing.md index e81bcf0b2..5ed273258 100644 --- a/docs/tutorials/contributing.md +++ b/docs/tutorials/contributing.md @@ -1,6 +1,6 @@ --- hide: -- navigation + - navigation --- # Contributing to DESDEO @@ -9,12 +9,12 @@ In this tutorial, step-by-step instructions are given on how to begin contributing to DESDEO, and what one should consider when developing DESDEO, such as coding practices and typical workflows. We first cover the required software to be installed in the section [Installing required -software](#installing-required-software). Then, we discuss how to download +software](#installing-required-software). Then, we discuss how to download DESDEO's source code and setup a virtual environment in the section [Setting up a virtual environment and installing -DESDEO](#setting-up-a-virtual-environment-and-installing-desdeo). A typical Git +DESDEO](#setting-up-a-virtual-environment-and-installing-desdeo). A typical Git workflow is then described in the section [Typical Git workflow for contributing -to DESDEO](#typical-git-workflow-for-contributing-to-desdeo). Development +to DESDEO](#typical-git-workflow-for-contributing-to-desdeo). Development practices and utilized tools are discussed in the section [Development practices](#development-practices), and how to integrate some of these into an integrated development environment are discussed in the section [Integrated @@ -46,7 +46,7 @@ we can visit [GitHub](https://github.com/) to setup a new account. ### Windows -Here, we assume to be operating in a __powershell__ environment. The first step +Here, we assume to be operating in a **powershell** environment. The first step is to install Python on the system, unless it is already installed. To check which version of Python are supported, check the section [Requirements](../index.md#installation-instructions-core-logic). If utilizing the `.exe` @@ -56,8 +56,8 @@ during the installation. 
Python binaries for Windows platforms can be found [on the Python website](https://www.python.org/downloads/). !!! Note "Ensuring Path variables are updated" - To ensure changes in `Path` variables are in effect, it is advisable to logout - of the current Windows session, and then log back in. +To ensure changes in `Path` variables are in effect, it is advisable to log out +of the current Windows session, and then log back in. To check that Python has been installed correctly on your system, we can open powershell and run the command @@ -163,13 +163,13 @@ We should remember to replace _ourusername_ with our actual GitHub username. !!! Note "On SSH and keys" - It is recommended to utilize the SSH (secure shell) url when - cloning DESDEO. This, however, requires that an SSH key-pairs has been - generated, and - that a public key - has been added to one's GitHub user account. - For instructions on how to setup and SSH key, - see [Adding a new SSH key to your GitHub account](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account?platform=windows). +It is recommended to utilize the SSH (secure shell) URL when +cloning DESDEO. This, however, requires that an SSH key-pair has been +generated, and +that a public key +has been added to one's GitHub user account. +For instructions on how to set up an SSH key, +see [Adding a new SSH key to your GitHub account](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account?platform=windows). Lastly, we should change to the newly cloned directory with the source code @@ -178,8 +178,8 @@ $ cd DESDEO ``` !!!
Note "Ensuring we start working on the latest version" - We should make sure to be checked in the `master` (the main) - branch of the project with the command +We should make sure to have checked out the `master` (the main) +branch of the project with the command ```shell $ git checkout master @@ -218,9 +218,9 @@ $ poetry env use /usr/bin/python ``` !!! Note "Managing multiple Python versions" - For managing multiple Python versions, a tool, such - as [pyenv](https://github.com/pyenv/pyenv?tab=readme-ov-file) - is recommended. +For managing multiple Python versions, a tool such +as [pyenv](https://github.com/pyenv/pyenv?tab=readme-ov-file) +is recommended. Before proceeding, it is useful to set the poetry configuration `virtualenvs.in-project` to `true`. This will ensure that our @@ -248,12 +248,13 @@ $ poetry env activate It will not re-create it, if it already exists. This will print the command that needs to be executed to activate the virtual -environment. __Please note that this command does not activate the environment, -it only shows the command that needs to be executed to do so.__ +environment. **Please note that this command does not activate the environment, +it only shows the command that needs to be executed to do so.** Once the virtual environment is activated, we can install + ```shell $ poetry install --with all-dev # (1)! ``` @@ -265,7 +266,7 @@ This might take a while. After poetry is done installing, and there are no error messages, we should be able to run ```shell -$ make test +$ just test ``` which runs all the tests present in DESDEO. Not all of them @@ -274,7 +275,10 @@ to us now that DESDEO has been correctly installed, and our virtual environment is now setup correctly. Pytest and tests are discussed in more detail in the section [Testing](#testing). -If `make` is not available on our system (e.g., on Windows), we can run the +!!! tip +Run `just --list` to see all available recipes.
+ +If `just` is not available on our system, we can run the equivalent pytest command directly: ```shell @@ -322,10 +326,10 @@ without affecting the original project. To fork a repository: - Visit the [DESDEO GitHub](https://github.com/industrial-optimization-group/DESDEO) repository. -- In the top-right corner of the page, click the __Fork__ button. +- In the top-right corner of the page, click the **Fork** button. - This action creates a copy of the DESDEO repository in your GitHub account. - To ensure we are working on a fork, the url to our repository should be of the form _https://github.com/ourusername/DESDEO_. We might -have a different name for the forked repository if we chose one when making the fork. + have a different name for the forked repository if we chose one when making the fork. ### Cloning the fork @@ -337,7 +341,7 @@ To clone our fork: - On GitHub, we navigate to our fork of the DESDEO repository. - Above the file list, there should be a green button labeled "Code". Clicking the button should -reveal a smaller window. We should select the "SSH" tab and copy the given url. + reveal a smaller window. We should select the "SSH" tab and copy the given url. - In a terminal, we then navigate to where we want to place the local repository. - Then we clone the fork using the command: @@ -429,12 +433,12 @@ $ git add . # (1)! with changes with the command `git add -A`. !!! Note "What files to commit?" - In general, we should commit only source files, not - compiled files. This means that no Python byte-code - should ever be committed (i.e., `__pycache__` directories and their - contents). Luckily, rules for Git to ignore the most common - types of files that should not be committed have been defined in the - `.gitignore` file defined at the root level of the DESDEO project. +In general, we should commit only source files, not +compiled files. 
This means that no Python byte-code +should ever be committed (i.e., `__pycache__` directories and their +contents). Luckily, rules for Git to ignore the most common +types of files that should not be committed have been defined in the +`.gitignore` file defined at the root level of the DESDEO project. We can always check which files are staged, and which are not, with the command @@ -443,9 +447,9 @@ $ git status ``` !!! Note "On Git status" - The command `git status` will 99% of the time tell us exactly - what we should in case of errors related to Git. Carefully reading the - output of the command is important and can save us a lot of troubles. +The command `git status` will 99% of the time tell us exactly +what we should do in case of errors related to Git. Carefully reading the +output of the command is important and can save us a lot of trouble. Once we have staged all our changes, we can add then to the branch by _committing_ them @@ -471,15 +475,15 @@ $ git commit -m "Our commit message" ``` !!! Note "On commit messages" - In a good commit message, we should give enough information for another - developer to understand what was changed. Usually the first line - of the commit should be a short summary, e.g., "Added a few new tests.", - which is then followed (separated by a blank new line) with more details, - e.g., "A test was added to test the correct functioning of the NIMBUS method. - A similar test was also added for the E-NAUTILUS methods. Both of these - tests should be passing." __There is no such thing as a "too long" - commit message!__ We can also refer to any issues our commit addressed by - simply including the commits number preceded by a hashtag, e.g., #123. +In a good commit message, we should give enough information for another +developer to understand what was changed.
Usually the first line +of the commit should be a short summary, e.g., "Added a few new tests.", +which is then followed (separated by a blank new line) with more details, +e.g., "A test was added to test the correct functioning of the NIMBUS method. +A similar test was also added for the E-NAUTILUS methods. Both of these +tests should be passing." **There is no such thing as a "too long" +commit message!** We can also refer to any issues our commit addressed by +simply including the commits number preceded by a hashtag, e.g., #123. We can make as many commits as we like. We do not need to have anything "ready" when making a commit. We should not be afraid of committing _too often_; there @@ -529,7 +533,7 @@ In practice, making a pull request consists of the following steps: 2. Switch to the branch with our new feature, e.g., `feature-x`. 3. There should be a green "Pull request" button next to our branch. Click it. 4. We can then review the changes in the pull request against the upstream. We can also -provide additional information about the contents of the pull request. + provide additional information about the contents of the pull request. 5. Once we are done creating the pull request and describing it, we can then create it. We may still continue working on our local branch and pushing commits to our fork. @@ -557,8 +561,8 @@ $ git push origin master # (4)! 1. The `fetch` command downloads all the changes made to the upstream but does not apply them, unlike the `pull` command would. 2. Remember, this is the main branch of the upstream, which we forked and which we want to keep up with. -3. This adds all the changes made in the upstream to our local fork. The `master` branch of our __local__ fork is now up to date. -4. Lastly, we want to update the fork in our repository, or remote as well, __which is on GitHub__. This command pushes the updated +3. This adds all the changes made in the upstream to our local fork. 
The `master` branch of our **local** fork is now up to date. +4. Lastly, we want to update the fork in our repository, or remote as well, **which is on GitHub**. This command pushes the updated version of the main working branch to our fork on GitHub as well. If we have work in progress in our feature branch (e.g., `feature-x`), we can then change back to it and attempt to merge the most recent @@ -607,14 +611,14 @@ There are at least three important aspects we should keep in mind when it comes to the development practices of DESDEO: - First, we should adhere to common coding practices so that the codebase -of DESDEO can be kept coherent and similar across its different modules and -files. + of DESDEO can be kept coherent and similar across its different modules and + files. - Second, we should ensure to test as much as possible the code -have written, ensuring that it works as expected and that we do not -break any existing code in DESDEO with out additions. + we have written, ensuring that it works as expected and that we do not + break any existing code in DESDEO with our additions. - Three, we should document our additions carefully to ensure -their usability and reusability, and to support other users in utilizing -our additions. + their usability and reusability, and to support other users in utilizing + our additions. These topics are discussed in their respective sections [Code style, linting](#code-style-and-linting); [Typechecking](#typechecking); @@ -680,7 +684,7 @@ documentation](https://docs.astral.sh/ruff/). ### Pre-commit hooks -DESDEO uses *pre-commit hooks* to automatically run a small set of code-quality +DESDEO uses _pre-commit hooks_ to automatically run a small set of code-quality checks on staged files before a Git commit is created. The goal is to catch common issues early (e.g., formatting, imports, whitespace) and to keep pull requests focused on the substance of the changes.
@@ -704,8 +708,8 @@ $ pre-commit install After this, the hooks will run automatically whenever `git commit` is executed. !!! Note "On virtual environments" - If the command `pre-commit` is not found, make sure your virtual environment - is activated and that development dependencies have been installed. +If the command `pre-commit` is not found, make sure your virtual environment +is activated and that development dependencies have been installed. #### What happens on commit @@ -833,7 +837,7 @@ Tests should be located in the `tests/` directory found at the root of the directory. Tests are written inside `.py` files with the `test_` prefix, e.g., `test_feature.py`. Tests themselves should be defined as test cases, each in its own Python function with its name starting with the prefix `test_`, e.g., `def -test_feature():`. It is important that these naming conventions are followed +test_feature():`. It is important that these naming conventions are followed because pytest relies on them during test discovery (i.e., when it tries to figure out where tests have been defined). @@ -856,16 +860,16 @@ def test_feature_correct_output(): it resolves to be `False`. !!! Note "What should a test test?" - Because DESDEO is currently mainly developed by researchers, we do not - have the time or resources to implement tests in a systematic fashion. - I.e., unit tests for every single feature, and then more comprehensive - tests. Instead, we have taken a code-coverage approach, where - out goal is to write at least _some_ tests for all the code - found in the code base that visits each line in the code at least once. - It is better to write a test that at least checks that a piece of code - executes without error with some input. Even better if the output can be - checked to be _logically correct_. And even even better, - if multiple input and outputs of a piece of code and be checked. 
+Because DESDEO is currently mainly developed by researchers, we do not +have the time or resources to implement tests in a systematic fashion. +I.e., unit tests for every single feature, and then more comprehensive +tests. Instead, we have taken a code-coverage approach, where +our goal is to write at least _some_ tests for all the code +found in the code base that visits each line in the code at least once. +It is better to write a test that at least checks that a piece of code +executes without error with some input. Even better if the output can be +checked to be _logically correct_. And even better, +if multiple inputs and outputs of a piece of code can be checked. A bare minimum test would look something like the following @@ -918,8 +922,8 @@ $ pytest -m feature # (1)! ``` 1. The option `-m feature` tells pytest to only run tests with the mark `feature`. To exclude tests with mark, and run every other test instead, -we can, for instance, issue the command `pytest -m "not feature"`, which would -run all tests that are _not_ marked with the mark `feature`. + we can, for instance, issue the command `pytest -m "not feature"`, which would + run all tests that are _not_ marked with the mark `feature`. The output of pytest will be very explicit whether the test is passing or not. We can also run all the tests (that have not been explicitly marked to be skipped), with the @@ -960,24 +964,24 @@ check the directory `tests` at the root of the project and the tests therein. [The official documentation](https://docs.pytest.org/en/8.0.x/) for pytest is also a valuable resource to check out. -#### Running tests with and without Make +#### Running tests with just -The project includes a `Makefile` with convenient targets for running tests. -On systems where `make` is not available (e.g., Windows without WSL), the -equivalent `pytest` commands can be run directly instead. +The project includes a `justfile` with convenient recipes for running tests.
+`just` is included as a dev dependency (`rust-just`) and is available after +running `uv sync`. Run `just --list` to see all available recipes. -| Make target | Pytest equivalent | Description | -|---|---|---| -| `make test` | `pytest -n auto -m "not fixme" --disable-warnings` | Run standard tests in parallel, skipping `fixme`-marked tests | -| `make test-api` | `pytest -n auto -m "not fixme" --disable-warnings ./desdeo/api/tests` | Run only the web-API tests | -| `make test-all` | `pytest` | Run all tests, including slow and skipped ones | -| `make test-changes` | `pytest --testmon` | Run only tests affected by recent code changes (requires `pytest-testmon`) | -| `make test-failures` | `pytest --lf -m "not fixme" --disable-warnings` | Re-run only the tests that failed in the last run | +| Recipe | Pytest equivalent | Description | +| -------------------- | --------------------------------------------------------------------- | -------------------------------------------------------------------------- | +| `just test` | `pytest -n auto -m "not fixme" --disable-warnings` | Run standard tests in parallel, skipping `fixme`-marked tests | +| `just test-api` | `pytest -n auto -m "not fixme" --disable-warnings ./desdeo/api/tests` | Run only the web-API tests | +| `just test-all` | `pytest` | Run all tests, including slow and skipped ones | +| `just test-changes` | `pytest --testmon` | Run only tests affected by recent code changes (requires `pytest-testmon`) | +| `just test-failures` | `pytest --lf -m "not fixme" --disable-warnings` | Re-run only the tests that failed in the last run | !!! tip - The `-n auto` flag runs tests in parallel using all available CPU cores - (requires `pytest-xdist`, which is included in the dev dependencies). This - significantly speeds up the test suite. +The `-n auto` flag runs tests in parallel using all available CPU cores +(requires `pytest-xdist`, which is included in the dev dependencies). This +significantly speeds up the test suite. 
### Documenting @@ -1018,7 +1022,7 @@ As we can see, comments start with a hash `#`. Taken out of their context, the comments do not make much sense, but in the code, they provide valuable information. !!! Note "Write informative comments" - When writing comments, avoid redundant comments, such as: +When writing comments, avoid redundant comments, such as: ```python # the value '5' is stored in the variable 'a' (1) @@ -1269,6 +1273,7 @@ to DESDEO in this last section. 1. Fork the DESDEO repository. This is done on GitHub. 2. Clone the fork (`$ git clone `) 3. Make sure the fork is up to date with the upstream, or main repository, of DESDEO. + ```bash $ git remote add upstream \ # (1)! git@github.com:industrial-optimization-group/DESDEO.git @@ -1276,25 +1281,25 @@ to DESDEO in this last section. 1. This just indicates that the line continues on a new row. -3. If no virtual environment exists, create and activate a virtual environment. (`$ poetry shell`) -4. Else activate an existing virtual environment. (`$ poetry shell`) -5. Run tests and make sure at least most of them are passing. (`$ pytest`) -6. Create or switch to a local branch. +4. If no virtual environment exists, create and activate a virtual environment. (`$ poetry shell`) +5. Else activate an existing virtual environment. (`$ poetry shell`) +6. Run tests and make sure at least most of them are passing. (`$ pytest`) +7. Create or switch to a local branch. ```bash $ git branch feature-x $ git checkout feature-x ``` -7. Make changes to the code. Stage them, and keep committing the changes and running tests now and then. +8. Make changes to the code. Stage them, and keep committing the changes and running tests now and then. ```bash $ git add -A $ git commit ``` -8. Remember to write tests and documentation, including comments, docstrings, and external documentation, +9. Remember to write tests and documentation, including comments, docstrings, and external documentation, when relevant. -9. 
Be mindful of the outputs of Ruff and mypy. Fix errors and warnings whenever possible. -10. Push your commits to your fork on GitHub. (`$ git push origin feature-x`) -11. Make a pull request on GitHub. -12. Goto 3. +10. Be mindful of the outputs of Ruff and mypy. Fix errors and warnings whenever possible. +11. Push your commits to your fork on GitHub. (`$ git push origin feature-x`) +12. Make a pull request on GitHub. +13. Go to step 3. ## Conclusions, where to go next, and our Discord server diff --git a/justfile b/justfile new file mode 100644 index 000000000..c05ebfb20 --- /dev/null +++ b/justfile @@ -0,0 +1,65 @@ +# This justfile defines several recipes to run tests and other useful scripts. +# +# To execute a recipe, issue the command "just <recipe>". Requires `just`, +# which is installed automatically as a dev dependency via `uv sync` +# (PyPI package: rust-just). +# +# Run "just --list" to see all available recipes. + +# Default recipe: list available recipes +default: + @just --list + +# Pytest configuration (can be overridden, e.g., `just test PYTEST_SKIP=""`) + +PYTEST := "pytest -n auto" +PYTEST_SKIP := '-m "not fixme"' +PYTEST_OPTS := "--disable-warnings" +TEST_API_PATH := "./desdeo/api/tests" + +# Run the typical tests, skipping tests marked to be skipped. +test: + {{ PYTEST }} {{ PYTEST_SKIP }} {{ PYTEST_OPTS }} + +# Run only the API tests. +test-api: + {{ PYTEST }} {{ PYTEST_SKIP }} {{ PYTEST_OPTS }} {{ TEST_API_PATH }} + +# Run all tests regardless of marks. This can be very slow. +test-all: + {{ PYTEST }} + +# Run only necessary tests given the changes in the code (pytest-testmon). +test-changes: + {{ PYTEST }} --testmon + +# Rerun the last failures only. +test-failures: + {{ PYTEST }} --lf {{ PYTEST_SKIP }} {{ PYTEST_OPTS }} + +# Run the web UI unit tests (vitest). +test-webui: + cd webui && npx vitest run --config vitest.config.ts + +# Run all tests (Python + WebUI). +test-everything: test test-webui + +# Run the web-API and web-GUI for local development.
+fullstack: + python run_fullstack.py + +# Serve docs locally (fast rebuild). +docs-fast: + mkdocs serve -f mkdocs.yml + +# Serve docs locally (ReadTheDocs config). +docs-rtd: + mkdocs serve -f mkdocs.rtd.yml + +# Run pre-commit hooks on staged files. +lint: + pre-commit run + +# Run pre-commit hooks on all files. +lint-all: + pre-commit run --all-files diff --git a/pyproject.toml b/pyproject.toml index dab912ca7..e6ce7fd31 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [project] name = "desdeo" -version = "2.3.0" +version = "2.4.0" description = "DESDEO is a modular and open source framework for interactive multiobjective optimization." authors = [ { name = "Giovanni Misitano", email = "giovanni.a.misitano@jyu.fi" }, @@ -59,6 +59,7 @@ dev = [ "pytest-asyncio>=0.26.0", "snakeviz>=2.2.0", "pre-commit>=0.14.14", + "rust-just>=1.49.0", ] docs = [ diff --git a/run_fullstack.py b/run_fullstack.py new file mode 100644 index 000000000..d35555daa --- /dev/null +++ b/run_fullstack.py @@ -0,0 +1,98 @@ +#!/usr/bin/env python3 +"""Run the DESDEO backend (FastAPI) and frontend (SvelteKit) for local development. + +Both processes run concurrently. Their output is prefixed with colored labels +so they can be distinguished. Press Ctrl+C to shut down both.
+""" + +import signal +import subprocess +import sys +import threading +from pathlib import Path + +# ANSI color codes (work on Windows 10+ with VT support, macOS, Linux) +BLUE = "\033[0;34m" +YELLOW = "\033[0;33m" +RED = "\033[0;31m" +RESET = "\033[0m" + +ROOT = Path(__file__).resolve().parent + + +def stream_output(process: subprocess.Popen, label: str, color: str) -> None: + """Read lines from a process's stdout and print them with a colored prefix.""" + assert process.stdout is not None # noqa: S101 + for line in process.stdout: + print(f"{color}({label}){RESET} {line}", end="", flush=True) # noqa: T201 + + +def main() -> int: + """Run the backend and frontend as subprocesses.""" + backend_cmd = [ + sys.executable, + "-m", + "uvicorn", + "app:app", + "--reload", + "--log-level", + "debug", + "--host", + "127.0.0.1", + "--port", + "8000", + ] + frontend_cmd = ["npm", "run", "dev", "--", "--open"] + + procs: list[subprocess.Popen] = [] + + try: + backend = subprocess.Popen( # noqa: S603 + backend_cmd, + cwd=ROOT / "desdeo" / "api", + stdout=subprocess.PIPE, + stderr=subprocess.STDOUT, + text=True, + ) + procs.append(backend) + + frontend = subprocess.Popen( # noqa: S603 + frontend_cmd, + cwd=ROOT / "webui", + stdout=subprocess.PIPE, + stderr=subprocess.STDOUT, + text=True, + ) + procs.append(frontend) + + # Stream output in background threads so both are printed concurrently. + threads = [ + threading.Thread(target=stream_output, args=(backend, "Backend", BLUE), daemon=True), + threading.Thread(target=stream_output, args=(frontend, "Frontend", YELLOW), daemon=True), + ] + for t in threads: + t.start() + + # Wait for either process to exit. 
+ while all(p.poll() is None for p in procs): + for t in threads: + t.join(timeout=0.5) + + except KeyboardInterrupt: + print(f"\n{RED}Shutting down...{RESET}") # noqa: T201 + finally: + for p in procs: + p.terminate() + for p in procs: + try: + p.wait(timeout=5) + except subprocess.TimeoutExpired: + p.kill() + + return 0 + + +if __name__ == "__main__": + # Allow Ctrl+C to propagate cleanly on Windows. + signal.signal(signal.SIGINT, signal.SIG_DFL) + raise SystemExit(main()) diff --git a/uv.lock b/uv.lock index c7b8be916..dd2f9c4a7 100644 --- a/uv.lock +++ b/uv.lock @@ -752,6 +752,7 @@ all-dev = [ { name = "python-markdown-math" }, { name = "python-multipart" }, { name = "ruff" }, + { name = "rust-just" }, { name = "scienceplots" }, { name = "seaborn" }, { name = "snakeviz" }, @@ -768,6 +769,7 @@ dev = [ { name = "pytest-testmon" }, { name = "pytest-xdist" }, { name = "ruff" }, + { name = "rust-just" }, { name = "snakeviz" }, ] docs = [ @@ -869,6 +871,7 @@ all-dev = [ { name = "python-markdown-math", specifier = ">=0.9" }, { name = "python-multipart", specifier = ">=0.0.20" }, { name = "ruff", specifier = ">=0.11" }, + { name = "rust-just", specifier = ">=1.49.0" }, { name = "scienceplots", specifier = ">=2.1.1" }, { name = "seaborn", specifier = ">=0.13.0" }, { name = "snakeviz", specifier = ">=2.2.0" }, @@ -885,6 +888,7 @@ dev = [ { name = "pytest-testmon", specifier = ">=2.1.1" }, { name = "pytest-xdist", specifier = ">=3.5.0" }, { name = "ruff", specifier = ">=0.11" }, + { name = "rust-just", specifier = ">=1.49.0" }, { name = "snakeviz", specifier = ">=2.2.0" }, ] docs = [ @@ -3400,6 +3404,28 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/c4/1c/1dbe51782c0e1e9cfce1d1004752672d2d4629ea46945d19d731ad772b3b/ruff-0.14.11-py3-none-win_arm64.whl", hash = "sha256:649fb6c9edd7f751db276ef42df1f3df41c38d67d199570ae2a7bd6cbc3590f0", size = 12938644, upload-time = "2026-01-08T19:11:50.027Z" }, ] +[[package]] +name = "rust-just" +version = "1.49.0" +source 
= { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/97/f2/c1612952bb4de508e89a1a10ff8bc7fbf18160bf75bcf502264e04e0c3ba/rust_just-1.49.0.tar.gz", hash = "sha256:a3b5f16f5e131b7ea86720510798688c146ae4859a92410e1ed5d13e79ddcd0f", size = 1912238, upload-time = "2026-04-09T05:38:11.349Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/14/d9/5f7fa48d2903345a885a4d93a5c3a75d14acad8211a6e41bc24c6221f479/rust_just-1.49.0-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:c91cad7bbb9280b74f0f6761b4b9f91ab8b30e6fd29885d463fcee011c1c28a8", size = 1976290, upload-time = "2026-04-09T05:38:09.916Z" }, + { url = "https://files.pythonhosted.org/packages/2e/2a/bcac6a5091dbbdac5536a28f5c5747a7ce4d251e2273a2586a03b6e15fd1/rust_just-1.49.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:655b50c40a6a43464a3b217874d81623143c95fecfcaca476d5552d3cdb035b3", size = 1839665, upload-time = "2026-04-09T05:38:20.522Z" }, + { url = "https://files.pythonhosted.org/packages/e3/ec/8dff9bcc9327cea9990e0c88932e554131afcf4b593e5163de1c994bd325/rust_just-1.49.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4cd8b7a1d7a84f95ebba73a2038eed1d89e2ae397f80e89c99a25e34808206f8", size = 1927171, upload-time = "2026-04-09T05:38:15.779Z" }, + { url = "https://files.pythonhosted.org/packages/93/a1/d7cc40ec13a702f1fecc4f14d5d6e34f31e557a8cf4e14fd4de0e364d9eb/rust_just-1.49.0-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:505f7d9c132f8ba747b35c7ba92c5bca13950875b4040a4c0c1a5aa1b94bc3f6", size = 1904953, upload-time = "2026-04-09T05:38:12.779Z" }, + { url = "https://files.pythonhosted.org/packages/fb/0f/19be2e9f6682ae4941b4ea98cefa9c6f3f41ca32f134407651fdc52c38c9/rust_just-1.49.0-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e82381559658f83c5888a36f420b4bec6e4abdecef7ad40d4c74a90d80551e96", size = 2108508, upload-time = "2026-04-09T05:37:59.591Z" }, + { url = 
"https://files.pythonhosted.org/packages/58/7f/d073eaf6729e4c239c9935bd1a94ea19e9dd9504242e077d651e41fdb386/rust_just-1.49.0-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b3dd978725e4ed8d34d0518acf34e6cd9e215bb24737b62eed87fec3d4747507", size = 2177355, upload-time = "2026-04-09T05:38:01.573Z" }, + { url = "https://files.pythonhosted.org/packages/5e/69/d0331a382c8c6a5390164c591dba4ca74e31c903389fc92dea895654ee67/rust_just-1.49.0-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ad50c7d05fedc4d87abd6cc44127e1ddc246be56f5f11fad05ac00e0b66c33e4", size = 2120914, upload-time = "2026-04-09T05:38:19.023Z" }, + { url = "https://files.pythonhosted.org/packages/25/f4/997605402b8502f7637875c1159a6dd9c0dd74d0493d49cf858ea5463b6a/rust_just-1.49.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b2e0f03694f030f3f39f7bba53545d6038eddb87fa0a3383cd9f3a55dcf4ed26", size = 2090756, upload-time = "2026-04-09T05:37:57.931Z" }, + { url = "https://files.pythonhosted.org/packages/c0/bc/bd90073c7f2725eeef93a9816c8c2d49410339d1863d385071b7960677e5/rust_just-1.49.0-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:d105edbcc02e278fa3eb6730a8be2c4576e5ce4f52f43706ae5e7a07af112278", size = 1949099, upload-time = "2026-04-09T05:38:17.346Z" }, + { url = "https://files.pythonhosted.org/packages/a8/54/cf57ec444c1fa851525d651f67f7eb4c1b52e701568638d2784fe690fd62/rust_just-1.49.0-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:6bc0c84f1c1bc5c13b66f6947e21e7828f0df0048cff40565ed7b574e04cfd16", size = 1931798, upload-time = "2026-04-09T05:38:14.114Z" }, + { url = "https://files.pythonhosted.org/packages/ef/a3/7a5ec8e401f4b069d6577c8cc72592264ec220eabd0e767db7aad22a6586/rust_just-1.49.0-py3-none-musllinux_1_2_i686.whl", hash = "sha256:abe84305fc4591c95f4dc0a02f61228a4bf64a4a5d5558cbf4777401147f9681", size = 2080900, upload-time = "2026-04-09T05:38:03.31Z" }, + { url = 
"https://files.pythonhosted.org/packages/98/65/817e047a93e0d9c4faa5eb9de05db4f1631f419f5dccc94a4e16dda79309/rust_just-1.49.0-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:a814fccd210727d587ba058efb2c54e772a102a73bc729759cf96667d1d4f599", size = 2155307, upload-time = "2026-04-09T05:38:05.214Z" }, + { url = "https://files.pythonhosted.org/packages/ef/e0/e7678714597383cf9ccd76ed7819b1cf6e3d079c80bdf476496634be36de/rust_just-1.49.0-py3-none-win32.whl", hash = "sha256:9ace425355867b3249b9c61ed7c23a2f482259345de3e649c262deb1263c321d", size = 1855815, upload-time = "2026-04-09T05:38:06.595Z" }, + { url = "https://files.pythonhosted.org/packages/08/85/ce1299f161ea7094c175953d1cb17aaf19900e9ec23edfbe728b27cd2cbd/rust_just-1.49.0-py3-none-win_amd64.whl", hash = "sha256:e34a1845394d947f2f7e47ba8c9fe9d323bb6c21af4aa5dca5168507ecdb7eba", size = 2064187, upload-time = "2026-04-09T05:38:08.316Z" }, +] + [[package]] name = "scienceplots" version = "2.2.0" diff --git a/webui/package-lock.json b/webui/package-lock.json index 7ab0334c9..db5860a27 100644 --- a/webui/package-lock.json +++ b/webui/package-lock.json @@ -371,7 +371,6 @@ } ], "license": "MIT", - "peer": true, "engines": { "node": ">=18" }, @@ -415,7 +414,6 @@ } ], "license": "MIT", - "peer": true, "engines": { "node": ">=18" } @@ -1068,7 +1066,6 @@ } ], "license": "MIT", - "peer": true, "engines": { "node": "^20.19.0 || ^22.13.0 || ^23.5.0 || >=24.0.0", "npm": ">=10" @@ -1269,7 +1266,6 @@ "resolved": "https://registry.npmjs.org/@internationalized/date/-/date-3.10.1.tgz", "integrity": "sha512-oJrXtQiAXLvT9clCf1K4kxp3eKsQhIaZqxEyowkBcsvZDdZkbWrVmnGknxs5flTD0VGsxrxKgBCZty1EzoiMzA==", "license": "Apache-2.0", - "peer": true, "dependencies": { "@swc/helpers": "^0.5.0" } @@ -2255,7 +2251,6 @@ "resolved": "https://registry.npmjs.org/ajv/-/ajv-8.18.0.tgz", "integrity": "sha512-PlXPeEWMXMZ7sPYOHqmDyCJzcfNrUr3fGNKtezX14ykXOEIvyK81d+qydx89KY5O71FKMPaQ2vBfBFI5NHR63A==", "license": "MIT", - "peer": true, "dependencies": { 
"fast-deep-equal": "^3.1.3", "fast-uri": "^3.0.1", @@ -2566,7 +2561,6 @@ "integrity": "sha512-G2ASV/Q16YkJf/3+4NXX1MdxFTO5qztePwGR12U8yKyyV2NjKtgnCA3jkvSakBLhRbO1nbD8+H13FgQrRfxdbQ==", "dev": true, "license": "MIT", - "peer": true, "dependencies": { "ts-dedent": "^2.0.0", "type-fest": "~2.19" @@ -2664,7 +2658,6 @@ "resolved": "https://registry.npmjs.org/@sveltejs/kit/-/kit-2.50.0.tgz", "integrity": "sha512-Hj8sR8O27p2zshFEIJzsvfhLzxga/hWw6tRLnBjMYw70m1aS9BSYCqAUtzDBjRREtX1EvLMYgaC0mYE3Hz4KWA==", "license": "MIT", - "peer": true, "dependencies": { "@standard-schema/spec": "^1.0.0", "@sveltejs/acorn-typescript": "^1.0.5", @@ -2707,7 +2700,6 @@ "resolved": "https://registry.npmjs.org/@sveltejs/vite-plugin-svelte/-/vite-plugin-svelte-6.2.4.tgz", "integrity": "sha512-ou/d51QSdTyN26D7h6dSpusAKaZkAiGM55/AKYi+9AGZw7q85hElbjK3kEyzXHhLSnRISHOYzVge6x0jRZ7DXA==", "license": "MIT", - "peer": true, "dependencies": { "@sveltejs/vite-plugin-svelte-inspector": "^5.0.0", "deepmerge": "^4.3.1", @@ -3553,8 +3545,7 @@ "resolved": "https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.15.tgz", "integrity": "sha512-5+fP8P8MFNC+AyZCDxrB2pkZFPGzqQWUzpSeuuVLvm8VMcorNYavBqoFcxK8bQz4Qsbn4oUEEem4wDLfcysGHA==", "devOptional": true, - "license": "MIT", - "peer": true + "license": "MIT" }, "node_modules/@types/leaflet": { "version": "1.9.21", @@ -3589,7 +3580,6 @@ "integrity": "sha512-ne4A0IpG3+2ETuREInjPNhUGis1SFjv1d5asp8MzEAGtOZeTeHVDOYqOgqfhvseqg/iXty2hjBf1zAOb7RNiNw==", "devOptional": true, "license": "MIT", - "peer": true, "dependencies": { "undici-types": "~7.16.0" } @@ -3708,7 +3698,6 @@ "integrity": "sha512-nm3cvFN9SqZGXjmw5bZ6cGmvJSyJPn0wU9gHAZZHDnZl2wF9PhHv78Xf06E0MaNk4zLVHL8hb2/c32XvyJOLQg==", "dev": true, "license": "MIT", - "peer": true, "dependencies": { "@typescript-eslint/scope-manager": "8.53.1", "@typescript-eslint/types": "8.53.1", @@ -3966,7 +3955,6 @@ "integrity": "sha512-tJxiPrWmzH8a+w9nLKlQMzAKX/7VjFs50MWgcAj7p9XQ7AQ9/35fByFYptgPELyLw+0aixTnC4pUWV+APcZ/kw==", 
"dev": true, "license": "MIT", - "peer": true, "dependencies": { "@testing-library/dom": "^10.4.0", "@testing-library/user-event": "^14.6.1", @@ -4161,7 +4149,6 @@ "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.15.0.tgz", "integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==", "license": "MIT", - "peer": true, "bin": { "acorn": "bin/acorn" }, @@ -4579,6 +4566,18 @@ } } }, + "node_modules/class-validator": { + "version": "0.14.4", + "resolved": "https://registry.npmjs.org/class-validator/-/class-validator-0.14.4.tgz", + "integrity": "sha512-AwNusCCam51q703dW82x95tOqQp6oC9HNUl724KxJJOfnKscI8dOloXFgyez7LbTTKWuRBA37FScqVbJEoq8Yw==", + "license": "MIT", + "optional": true, + "dependencies": { + "@types/validator": "^13.15.3", + "libphonenumber-js": "^1.11.1", + "validator": "^13.15.22" + } + }, "node_modules/cli-width": { "version": "4.1.0", "resolved": "https://registry.npmjs.org/cli-width/-/cli-width-4.1.0.tgz", @@ -4801,7 +4800,8 @@ "resolved": "https://registry.npmjs.org/csstype/-/csstype-3.2.3.tgz", "integrity": "sha512-z1HGKcYy2xA8AGQfwrn0PAy+PB7X/GSj3UVJW9qKyn43xWa+gl5nXmU4qqLMRzWVLFC8KusUX8T/0kCiOYpAIQ==", "dev": true, - "license": "MIT" + "license": "MIT", + "peer": true }, "node_modules/d3": { "version": "7.9.0", @@ -5181,7 +5181,6 @@ "resolved": "https://registry.npmjs.org/d3-selection/-/d3-selection-3.0.0.tgz", "integrity": "sha512-fmTRWbNMmsmWq6xJV8D19U/gw/bwrHfNXxrIN+HfZgnzqTHp9jOmKMhsTUjXOJnZOdZY9Q28y4yebKzqDKlxlQ==", "license": "ISC", - "peer": true, "engines": { "node": ">=12" } @@ -5527,8 +5526,7 @@ "version": "8.6.0", "resolved": "https://registry.npmjs.org/embla-carousel/-/embla-carousel-8.6.0.tgz", "integrity": "sha512-SjWyZBHJPbqxHOzckOfo8lHisEaJWmwd23XppYFYVh10bU66/Pn5tkVkbkCMZVdbUE5eTCI2nD8OyIP4Z+uwkA==", - "license": "MIT", - "peer": true + "license": "MIT" }, "node_modules/embla-carousel-reactive-utils": { "version": "8.6.0", @@ -5697,7 +5695,6 @@ "integrity": 
"sha512-LEyamqS7W5HB3ujJyvi0HQK/dtVINZvd5mAAp9eT5S/ujByGjiZLCzPcHVzuXbpJDJF/cxwHlfceVUDZ2lnSTw==", "dev": true, "license": "MIT", - "peer": true, "dependencies": { "@eslint-community/eslint-utils": "^4.8.0", "@eslint-community/regexpp": "^4.12.1", @@ -6904,7 +6901,6 @@ "integrity": "sha512-mjzqwWRD9Y1J1KUi7W97Gja1bwOOM5Ug0EZ6UDK3xS7j7mndrkwozHtSblfomlzyB4NepioNt+B2sOSzczVgtQ==", "dev": true, "license": "MIT", - "peer": true, "dependencies": { "@acemir/cssom": "^0.9.28", "@asamuzakjp/dom-selector": "^6.7.6", @@ -7132,6 +7128,13 @@ "node": ">= 0.8.0" } }, + "node_modules/libphonenumber-js": { + "version": "1.12.41", + "resolved": "https://registry.npmjs.org/libphonenumber-js/-/libphonenumber-js-1.12.41.tgz", + "integrity": "sha512-lsmMmGXBxXIK/VMLEj0kL6MtUs1kBGj1nTCzi6zgQoG1DEwqwt2DQyHxcLykceIxAnfE3hya7NuIh6PpC6S3fA==", + "license": "MIT", + "optional": true + }, "node_modules/lightningcss": { "version": "1.30.2", "resolved": "https://registry.npmjs.org/lightningcss/-/lightningcss-1.30.2.tgz", @@ -8093,7 +8096,6 @@ "resolved": "https://registry.npmjs.org/commander/-/commander-14.0.2.tgz", "integrity": "sha512-TywoWNNRbhoD0BXs1P3ZEScW8W5iKrnbithIl0YH+uCmBd0QpPOA8yc82DS3BIE5Ma6FnBVUsJ7wVUDz4dvOWQ==", "license": "MIT", - "peer": true, "engines": { "node": ">=20" } @@ -8515,7 +8517,6 @@ } ], "license": "MIT", - "peer": true, "dependencies": { "nanoid": "^3.3.11", "picocolors": "^1.1.1", @@ -8649,7 +8650,6 @@ "integrity": "sha512-yEPsovQfpxYfgWNhCfECjG5AQaO+K3dp6XERmOepyPDVqcJm+bjyCVO3pmU+nAPe0N5dDvekfGezt/EIiRe1TA==", "devOptional": true, "license": "MIT", - "peer": true, "bin": { "prettier": "bin/prettier.cjs" }, @@ -8666,7 +8666,6 @@ "integrity": "sha512-xL49LCloMoZRvSwa6IEdN2GV6cq2IqpYGstYtMT+5wmml1/dClEoI0MZR78MiVPpu6BdQFfN0/y73yO6+br5Pg==", "dev": true, "license": "MIT", - "peer": true, "peerDependencies": { "prettier": "^3.0.0", "svelte": "^3.2.0 || ^4.0.0-next.0 || ^5.0.0-next.0" @@ -8888,7 +8887,6 @@ "integrity": 
"sha512-Ku/hhYbVjOQnXDZFv2+RibmLFGwFdeeKHFcOTlrt7xplBnya5OGn/hIRDsqDiSUcfORsDC7MPxwork8jBwsIWA==", "dev": true, "license": "MIT", - "peer": true, "engines": { "node": ">=0.10.0" } @@ -8899,7 +8897,6 @@ "integrity": "sha512-yELu4WmLPw5Mr/lmeEpox5rw3RETacE++JgHqQzd2dg+YbJuat3jH4ingc+WPZhxaoFzdv9y33G+F7Nl5O0GBg==", "dev": true, "license": "MIT", - "peer": true, "dependencies": { "scheduler": "^0.27.0" }, @@ -9042,7 +9039,6 @@ "resolved": "https://registry.npmjs.org/rollup/-/rollup-4.55.3.tgz", "integrity": "sha512-y9yUpfQvetAjiDLtNMf1hL9NXchIJgWt6zIKeoB+tCd3npX08Eqfzg60V9DhIGVMtQ0AlMkFw5xa+AQ37zxnAA==", "license": "MIT", - "peer": true, "dependencies": { "@types/estree": "1.0.8" }, @@ -9326,7 +9322,6 @@ "integrity": "sha512-pKP5jXJYM4OjvNklGuHKO53wOCAwfx79KvZyOWHoi9zXUH5WVMFUe/ZfWyxXG/GTcj0maRgHGUjq/0I43r0dDQ==", "dev": true, "license": "MIT", - "peer": true, "dependencies": { "@storybook/global": "^5.0.0", "@storybook/icons": "^2.0.0", @@ -9576,7 +9571,6 @@ "resolved": "https://registry.npmjs.org/svelte/-/svelte-5.47.1.tgz", "integrity": "sha512-MhSWfWEpG5T57z0Oyfk9D1GhAz/KTZKZZlWtGEsy9zNk2fafpuU7sJQlXNSA8HtvwKxVC9XlDyl5YovXUXjjHA==", "license": "MIT", - "peer": true, "dependencies": { "@jridgewell/remapping": "^2.3.4", "@jridgewell/sourcemap-codec": "^1.5.0", @@ -9841,7 +9835,6 @@ } ], "license": "MIT", - "peer": true, "dependencies": { "devalue": "^5.6.1", "memoize-weak": "^1.0.2", @@ -9977,8 +9970,7 @@ "version": "4.1.18", "resolved": "https://registry.npmjs.org/tailwindcss/-/tailwindcss-4.1.18.tgz", "integrity": "sha512-4+Z+0yiYyEtUVCScyfHCxOYP06L5Ne+JiHhY2IjR2KWMIWhJOYZKLSGZaP5HkZ8+bY0cxfzwDE5uOmzFXyIwxw==", - "license": "MIT", - "peer": true + "license": "MIT" }, "node_modules/tapable": { "version": "2.3.0", @@ -10291,7 +10283,6 @@ "resolved": "https://registry.npmjs.org/typedoc/-/typedoc-0.28.18.tgz", "integrity": "sha512-NTWTUOFRQ9+SGKKTuWKUioUkjxNwtS3JDRPVKZAXGHZy2wCA8bdv2iJiyeePn0xkmK+TCCqZFT0X7+2+FLjngA==", "license": "Apache-2.0", - "peer": true, 
"dependencies": { "@gerrit0/mini-shiki": "^3.23.0", "lunr": "^2.3.9", @@ -10375,7 +10366,6 @@ "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.9.3.tgz", "integrity": "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==", "license": "Apache-2.0", - "peer": true, "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" @@ -10555,7 +10545,6 @@ "resolved": "https://registry.npmjs.org/valibot/-/valibot-1.2.0.tgz", "integrity": "sha512-mm1rxUsmOxzrwnX5arGS+U4T25RdvpPjPN4yR0u9pUBov9+zGVtO84tif1eY4r6zWxVxu3KzIyknJy3rxfRZZg==", "license": "MIT", - "peer": true, "peerDependencies": { "typescript": ">=5" }, @@ -10647,7 +10636,6 @@ "resolved": "https://registry.npmjs.org/vite/-/vite-7.3.1.tgz", "integrity": "sha512-w+N7Hifpc3gRjZ63vYBXA56dvvRlNWRczTdmCBBa+CotUzAPf5b7YMdMR/8CQoeYE5LX3W4wj6RYTgonm1b9DA==", "license": "MIT", - "peer": true, "dependencies": { "esbuild": "^0.27.0", "fdir": "^6.5.0", @@ -10779,7 +10767,6 @@ "integrity": "sha512-LUCP5ev3GURDysTWiP47wRRUpLKMOfPh+yKTx3kVIEiu5KOMeqzpnYNsKyOoVrULivR8tLcks4+lga33Whn90A==", "dev": true, "license": "MIT", - "peer": true, "dependencies": { "@types/chai": "^5.2.2", "@vitest/expect": "3.2.4", @@ -11249,7 +11236,6 @@ "resolved": "https://registry.npmjs.org/zod/-/zod-4.3.5.tgz", "integrity": "sha512-k7Nwx6vuWx1IJ9Bjuf4Zt1PEllcwe7cls3VNzm4CQ1/hgtFUK2bRNG3rvnpPUhFjmqJKAKtjV576KnUkHocg/g==", "license": "MIT", - "peer": true, "funding": { "url": "https://github.com/sponsors/colinhacks" } diff --git a/webui/src/hooks.server.test.ts b/webui/src/hooks.server.test.ts index 27b970984..a066ea820 100644 --- a/webui/src/hooks.server.test.ts +++ b/webui/src/hooks.server.test.ts @@ -3,78 +3,372 @@ import { handleFetch } from './hooks.server'; import { refreshAccessTokenRefreshPost } from '$lib/gen/endpoints/DESDEOFastAPI'; vi.mock('$lib/gen/endpoints/DESDEOFastAPI', () => ({ - refreshAccessTokenRefreshPost: vi.fn() + refreshAccessTokenRefreshPost: vi.fn() +})); + 
+vi.mock('$app/environment', () => ({ + dev: true })); const createCookieStore = (initial: Record<string, string>) => { - const store = new Map(Object.entries(initial)); - - return { - get: (name: string) => store.get(name), - getAll: () => Array.from(store.entries()).map(([name, value]) => ({ name, value })), - set: (name: string, value: string) => { - store.set(name, value); - } - }; + const store = new Map(Object.entries(initial)); + + return { + get: (name: string) => store.get(name), + getAll: () => Array.from(store.entries()).map(([name, value]) => ({ name, value })), + set: vi.fn((name: string, value: string) => { + store.set(name, value); + }), + /** Snapshot of the store for assertions. */ + _dump: () => Object.fromEntries(store) + }; }; +type HookArgs = Parameters<typeof handleFetch>[0]; + +const callHook = (opts: { + request: Request; + cookies: ReturnType<typeof createCookieStore>; + fetchMock: ReturnType<typeof vi.fn>; +}) => + handleFetch({ + event: { cookies: opts.cookies } as unknown as HookArgs['event'], + request: opts.request, + fetch: opts.fetchMock + } as HookArgs); + +const mockRefreshSuccess = (newToken = 'new-access-token') => { + vi.mocked(refreshAccessTokenRefreshPost).mockResolvedValue({ + status: 200, + data: { access_token: newToken }, + headers: new Headers() + }); +}; + +const mockRefreshFailure = (status = 401) => { + vi.mocked(refreshAccessTokenRefreshPost).mockResolvedValue({ + status, + data: null, + headers: new Headers() + } as unknown as Awaited<ReturnType<typeof refreshAccessTokenRefreshPost>>); +}; + +const ok200 = (body = '{"ok":true}') => + new Response(body, { + status: 200, + headers: { 'content-type': 'application/json' } + }); + +const unauthorized401 = () => new Response(null, { status: 401 }); + +// Tests + describe('handleFetch', () => { - beforeEach(() => { - vi.clearAllMocks(); - }); - - it('refreshes the access token and retries with updated cookies', async () => { - const cookies = createCookieStore({ refresh_token: 'refresh-123' }); - const fetchMock = vi - .fn() - .mockResolvedValueOnce(new Response(null, { status: 401 })) -
.mockResolvedValueOnce( - new Response(JSON.stringify({ ok: true }), { - status: 200, - headers: { 'content-type': 'application/json' } - }) - ); - - vi.mocked(refreshAccessTokenRefreshPost).mockResolvedValue({ - status: 200, - data: { access_token: 'new-access-token' }, - headers: new Headers() - }); - - const request = new Request('http://example.test/api/resource'); - const response = await handleFetch({ - event: { cookies } as Parameters<typeof handleFetch>[0]['event'], - request, - fetch: fetchMock - } as Parameters<typeof handleFetch>[0]); - - expect(response.status).toBe(200); - expect(refreshAccessTokenRefreshPost).toHaveBeenCalledWith( - expect.objectContaining({ - headers: { cookie: 'refresh_token=refresh-123' } - }) - ); - expect(fetchMock).toHaveBeenCalledTimes(2); - - const retryRequest = fetchMock.mock.calls[1][0] as Request; - const retryCookieHeader = retryRequest.headers.get('cookie'); - expect(retryCookieHeader).toContain('refresh_token=refresh-123'); - expect(retryCookieHeader).toContain('access_token=new-access-token'); - }); - - it('returns the original response when refresh token is missing', async () => { - const cookies = createCookieStore({}); - const fetchMock = vi.fn().mockResolvedValueOnce(new Response(null, { status: 401 })); - - const request = new Request('http://example.test/api/resource'); - const response = await handleFetch({ - event: { cookies } as Parameters<typeof handleFetch>[0]['event'], - request, - fetch: fetchMock - } as Parameters<typeof handleFetch>[0]); - - expect(response.status).toBe(401); - expect(refreshAccessTokenRefreshPost).not.toHaveBeenCalled(); - expect(fetchMock).toHaveBeenCalledTimes(1); - }); + beforeEach(() => { + vi.clearAllMocks(); + }); + + // Basic pass-through (no 401) + + describe('when the initial request succeeds', () => { + it('forwards access_token cookie on a GET request', async () => { + const cookies = createCookieStore({ access_token: 'tok-123' }); + const fetchMock = vi.fn().mockResolvedValueOnce(ok200()); + + const request = new
Request('http://example.test/api/resource'); + const response = await callHook({ request, cookies, fetchMock }); + + expect(response.status).toBe(200); + expect(fetchMock).toHaveBeenCalledTimes(1); + + const sentRequest = fetchMock.mock.calls[0][0] as Request; + expect(sentRequest.headers.get('cookie')).toBe('access_token=tok-123'); + }); + + it('forwards access_token cookie on a POST request and preserves the body', async () => { + const body = JSON.stringify({ problem_id: 42 }); + const cookies = createCookieStore({ access_token: 'tok-456' }); + const fetchMock = vi.fn().mockResolvedValueOnce(ok200()); + + const request = new Request('http://example.test/api/method/nimbus/solve', { + method: 'POST', + headers: { 'content-type': 'application/json' }, + body + }); + const response = await callHook({ request, cookies, fetchMock }); + + expect(response.status).toBe(200); + expect(fetchMock).toHaveBeenCalledTimes(1); + + const sentRequest = fetchMock.mock.calls[0][0] as Request; + expect(sentRequest.headers.get('cookie')).toBe('access_token=tok-456'); + const sentBody = await sentRequest.text(); + expect(sentBody).toBe(body); + }); + + it('sends request with original cookie header when no access_token exists', async () => { + const cookies = createCookieStore({}); + const fetchMock = vi.fn().mockResolvedValueOnce(ok200()); + + const request = new Request('http://example.test/api/resource'); + const response = await callHook({ request, cookies, fetchMock }); + + expect(response.status).toBe(200); + expect(fetchMock).toHaveBeenCalledTimes(1); + }); + }); + + // 401 without refresh token + + describe('when the initial request returns 401 and no refresh token exists', () => { + it('returns the 401 response without attempting refresh', async () => { + const cookies = createCookieStore({}); + const fetchMock = vi.fn().mockResolvedValueOnce(unauthorized401()); + + const request = new Request('http://example.test/api/resource'); + const response = await callHook({ request, 
cookies, fetchMock }); + + expect(response.status).toBe(401); + expect(refreshAccessTokenRefreshPost).not.toHaveBeenCalled(); + expect(fetchMock).toHaveBeenCalledTimes(1); + }); + }); + + // 401 -> refresh -> retry (GET) + + describe('when a GET request returns 401 and refresh succeeds', () => { + it('retries with the new access token and returns 200', async () => { + const cookies = createCookieStore({ refresh_token: 'refresh-abc' }); + const fetchMock = vi + .fn() + .mockResolvedValueOnce(unauthorized401()) + .mockResolvedValueOnce(ok200()); + + mockRefreshSuccess('fresh-token'); + + const request = new Request('http://example.test/api/resource'); + const response = await callHook({ request, cookies, fetchMock }); + + expect(response.status).toBe(200); + expect(fetchMock).toHaveBeenCalledTimes(2); + + // Refresh was called with the refresh_token cookie + expect(refreshAccessTokenRefreshPost).toHaveBeenCalledWith( + expect.objectContaining({ + headers: { cookie: 'refresh_token=refresh-abc' } + }) + ); + }); + + it('updates the access_token in the cookie store', async () => { + const cookies = createCookieStore({ refresh_token: 'refresh-abc' }); + const fetchMock = vi + .fn() + .mockResolvedValueOnce(unauthorized401()) + .mockResolvedValueOnce(ok200()); + + mockRefreshSuccess('fresh-token'); + + const request = new Request('http://example.test/api/resource'); + await callHook({ request, cookies, fetchMock }); + + expect(cookies.set).toHaveBeenCalledWith( + 'access_token', + 'fresh-token', + expect.objectContaining({ + httpOnly: true, + path: '/', + sameSite: 'lax' + }) + ); + }); + + it('includes all cookies in the retry request', async () => { + const cookies = createCookieStore({ refresh_token: 'refresh-abc' }); + const fetchMock = vi + .fn() + .mockResolvedValueOnce(unauthorized401()) + .mockResolvedValueOnce(ok200()); + + mockRefreshSuccess('fresh-token'); + + const request = new Request('http://example.test/api/resource'); + await callHook({ request, cookies, 
fetchMock }); + + const retryRequest = fetchMock.mock.calls[1][0] as Request; + const retryCookie = retryRequest.headers.get('cookie'); + expect(retryCookie).toContain('access_token=fresh-token'); + expect(retryCookie).toContain('refresh_token=refresh-abc'); + }); + }); + + // 401 -> refresh -> retry (POST with body) + + describe('when a POST request returns 401 and refresh succeeds', () => { + it('replays the original body on the retry request', async () => { + const body = JSON.stringify({ problem_id: 7, num_desired: 3 }); + const cookies = createCookieStore({ refresh_token: 'refresh-xyz' }); + const fetchMock = vi + .fn() + .mockResolvedValueOnce(unauthorized401()) + .mockResolvedValueOnce(ok200('{"solutions":[]}')); + + mockRefreshSuccess(); + + const request = new Request('http://example.test/api/method/nimbus/solve', { + method: 'POST', + headers: { 'content-type': 'application/json' }, + body + }); + const response = await callHook({ request, cookies, fetchMock }); + + expect(response.status).toBe(200); + expect(fetchMock).toHaveBeenCalledTimes(2); + + // Verify the retry is also a POST with the same body + const retryRequest = fetchMock.mock.calls[1][0] as Request; + expect(retryRequest.method).toBe('POST'); + const retryBody = await retryRequest.text(); + expect(retryBody).toBe(body); + }); + + it('preserves content-type header on the retry', async () => { + const cookies = createCookieStore({ refresh_token: 'refresh-xyz' }); + const fetchMock = vi + .fn() + .mockResolvedValueOnce(unauthorized401()) + .mockResolvedValueOnce(ok200()); + + mockRefreshSuccess(); + + const request = new Request('http://example.test/api/method/nimbus/solve', { + method: 'POST', + headers: { 'content-type': 'application/json' }, + body: '{}' + }); + await callHook({ request, cookies, fetchMock }); + + const retryRequest = fetchMock.mock.calls[1][0] as Request; + expect(retryRequest.headers.get('content-type')).toBe('application/json'); + }); + + it('preserves the URL and method 
across the retry', async () => { + const cookies = createCookieStore({ refresh_token: 'r' }); + const fetchMock = vi + .fn() + .mockResolvedValueOnce(unauthorized401()) + .mockResolvedValueOnce(ok200()); + + mockRefreshSuccess(); + + const url = 'http://example.test/api/method/e-nautilus/iterate?step=2'; + const request = new Request(url, { + method: 'POST', + headers: { 'content-type': 'application/json' }, + body: '{"ref_point":[1,2,3]}' + }); + await callHook({ request, cookies, fetchMock }); + + const initialRequest = fetchMock.mock.calls[0][0] as Request; + const retryRequest = fetchMock.mock.calls[1][0] as Request; + + expect(initialRequest.url).toBe(url); + expect(retryRequest.url).toBe(url); + expect(initialRequest.method).toBe('POST'); + expect(retryRequest.method).toBe('POST'); + }); + }); + + // Refresh failure cases + + describe('when the refresh call fails', () => { + it('returns the original 401 if refresh returns non-200', async () => { + const cookies = createCookieStore({ refresh_token: 'refresh-abc' }); + const fetchMock = vi.fn().mockResolvedValueOnce(unauthorized401()); + + mockRefreshFailure(403); + + const request = new Request('http://example.test/api/resource'); + const response = await callHook({ request, cookies, fetchMock }); + + expect(response.status).toBe(401); + expect(fetchMock).toHaveBeenCalledTimes(1); // no retry + }); + + it('returns the original 401 if refresh returns 200 but no access_token', async () => { + const cookies = createCookieStore({ refresh_token: 'refresh-abc' }); + const fetchMock = vi.fn().mockResolvedValueOnce(unauthorized401()); + + vi.mocked(refreshAccessTokenRefreshPost).mockResolvedValue({ + status: 200, + data: {}, // missing access_token + headers: new Headers() + }); + + const request = new Request('http://example.test/api/resource'); + const response = await callHook({ request, cookies, fetchMock }); + + expect(response.status).toBe(401); + expect(fetchMock).toHaveBeenCalledTimes(1); + }); + + it('returns 
the original 401 (not 500) if refresh throws', async () => { + const cookies = createCookieStore({ refresh_token: 'refresh-abc' }); + const fetchMock = vi.fn().mockResolvedValueOnce(unauthorized401()); + + vi.mocked(refreshAccessTokenRefreshPost).mockRejectedValue(new TypeError('fetch failed')); + + const request = new Request('http://example.test/api/resource'); + const response = await callHook({ request, cookies, fetchMock }); + + // Must degrade to 401, not propagate as an uncaught 500 + expect(response.status).toBe(401); + expect(fetchMock).toHaveBeenCalledTimes(1); + }); + + it('does not update the cookie store when refresh fails', async () => { + const cookies = createCookieStore({ refresh_token: 'refresh-abc' }); + const fetchMock = vi.fn().mockResolvedValueOnce(unauthorized401()); + + mockRefreshFailure(401); + + const request = new Request('http://example.test/api/resource'); + await callHook({ request, cookies, fetchMock }); + + expect(cookies.set).not.toHaveBeenCalled(); + }); + }); + + // Non-401 error codes are not intercepted + + describe('non-401 error responses', () => { + it('passes through 403 without attempting refresh', async () => { + const cookies = createCookieStore({ + access_token: 'tok', + refresh_token: 'ref' + }); + const fetchMock = vi.fn().mockResolvedValueOnce(new Response(null, { status: 403 })); + + const request = new Request('http://example.test/api/resource'); + const response = await callHook({ request, cookies, fetchMock }); + + expect(response.status).toBe(403); + expect(refreshAccessTokenRefreshPost).not.toHaveBeenCalled(); + expect(fetchMock).toHaveBeenCalledTimes(1); + }); + + it('passes through 500 without attempting refresh', async () => { + const cookies = createCookieStore({ + access_token: 'tok', + refresh_token: 'ref' + }); + const fetchMock = vi.fn().mockResolvedValueOnce(new Response(null, { status: 500 })); + + const request = new Request('http://example.test/api/resource'); + const response = await callHook({ 
request, cookies, fetchMock }); + + expect(response.status).toBe(500); + expect(refreshAccessTokenRefreshPost).not.toHaveBeenCalled(); + }); + }); }); diff --git a/webui/src/hooks.server.ts b/webui/src/hooks.server.ts index 1f5b957b8..8c61afec4 100644 --- a/webui/src/hooks.server.ts +++ b/webui/src/hooks.server.ts @@ -5,59 +5,72 @@ import { dev } from '$app/environment'; // const API = process.env.API_BASE_URL ?? '/'; export const handleFetch: HandleFetch = async ({ event, request, fetch }) => { - // Forward access_token cookie to all API requests - const accessToken = event.cookies.get("access_token"); - if (accessToken) { - request = new Request(request, { - headers: new Headers({ - ...Object.fromEntries(request.headers.entries()), - cookie: `access_token=${accessToken}`, - }), - }); - } - - const originalRequest = request.clone(); - let res = await fetch(request); - - // No auth errors, assuming either ok or other errors, pass the response back. - if (res.status !== 401) return res; - - // 401, try refreshing the access token ONCE and then try again with the original request. - - const refreshToken = event.cookies.get("refresh_token"); - - if (!refreshToken) return res; - - const response_with_new_cookies = await refreshAccessTokenRefreshPost({ - fetchImpl: fetch, - headers: { - cookie: `refresh_token=${refreshToken}`, - }, - } as RequestInit); - - if (response_with_new_cookies.status != 200 || !response_with_new_cookies.data?.access_token) return res; - - // access ok! 
- event.cookies.set("access_token", response_with_new_cookies.data.access_token, { - httpOnly: true, - secure: !dev, - sameSite: "lax", - path: "/", - }); - - const cookieHeader = event.cookies - .getAll() - .map(({ name, value }) => `${name}=${value}`) - .join("; "); - - // try again with new access cookie - const retryRequest = new Request(originalRequest, { - headers: new Headers({ - ...Object.fromEntries(request.headers.entries()), - cookie: cookieHeader, - }), - }); - res = await fetch(retryRequest); - - return res; + // Buffer the body upfront so we have concrete data instead of a + // ReadableStream. This avoids Node/undici "expected non-null body source" + // errors when reconstructing or cloning Requests with streaming bodies. + const hasBody = !['GET', 'HEAD'].includes(request.method); + const bodyBuffer = hasBody ? await request.arrayBuffer() : undefined; + const originalHeaders = Object.fromEntries(request.headers.entries()); + + // Build a fresh Request with the buffered body and a given cookie header. + // ArrayBuffers are reusable (unlike ReadableStreams), so this can be + // called multiple times for retries. + const makeRequest = (cookieHeader: string) => + new Request(request.url, { + method: request.method, + headers: new Headers({ + ...originalHeaders, + cookie: cookieHeader + }), + body: bodyBuffer, + // @ts-expect-error — duplex is required for streaming bodies in Node 18+ + duplex: 'half' + }); + + // Forward access_token cookie to all API requests + const accessToken = event.cookies.get('access_token'); + const originalCookie = originalHeaders['cookie'] ?? ''; + + let res = await fetch(makeRequest(accessToken ? `access_token=${accessToken}` : originalCookie)); + + // No auth errors, pass through. + if (res.status !== 401) return res; + + // 401 — try refreshing the access token ONCE, then retry. 
+ const refreshToken = event.cookies.get('refresh_token'); + if (!refreshToken) return res; + + try { + const response_with_new_cookies = await refreshAccessTokenRefreshPost({ + fetchImpl: fetch, + headers: { + cookie: `refresh_token=${refreshToken}` + } + } as RequestInit); + + if (response_with_new_cookies.status != 200 || !response_with_new_cookies.data?.access_token) + return res; + + // Update the access_token cookie for the browser. + event.cookies.set('access_token', response_with_new_cookies.data.access_token, { + httpOnly: true, + secure: !dev, + sameSite: 'lax', + path: '/' + }); + + const cookieHeader = event.cookies + .getAll() + .map(({ name, value }) => `${name}=${value}`) + .join('; '); + + // Retry with the new access token, reuses the buffered body. + res = await fetch(makeRequest(cookieHeader)); + + return res; + } catch (err) { + console.error('[handleFetch] token refresh failed:', err); + console.error('[handleFetch] cause:', (err as any)?.cause); + return res; // degrade to 401, not 500 + } }; diff --git a/webui/src/lib/components/custom/solution-comparison/solution-comparison.svelte b/webui/src/lib/components/custom/solution-comparison/solution-comparison.svelte new file mode 100644 index 000000000..a5aa04c3e --- /dev/null +++ b/webui/src/lib/components/custom/solution-comparison/solution-comparison.svelte @@ -0,0 +1,93 @@ + + + + + + Objective + + {#each solutions as s} + {s.name ?? `Solution ${s.solution_index + 1}`} + {/each} + + + + + {#each problem.objectives as objective} + + + + {objective.name} ({objective.maximize ? 'max' : 'min'}) + + + + {#each solutions as solution, i} + {@const value = solution.objective_values?.[objective.symbol]} + {@const refValue = reference.objective_values?.[objective.symbol]} + {@const change = i === referenceIndex ? null : getChange(value, refValue)} + {@const better = i === referenceIndex ? 
false : isBetter(objective, value, refValue)} + + + {#if value != null} + {formatNumber(value)} + {:else} + - + {/if} + + {#if change} +
+ {change.diff > 0 ? '+' : ''} + {formatNumber(change.diff, 2)} ( + {formatNumber(change.percent, 1)}%) +
+ {/if} +
+ {/each} +
+ {/each} +
+
\ No newline at end of file diff --git a/webui/src/lib/components/ui/topbar-breadcrumbs/index.ts b/webui/src/lib/components/ui/topbar-breadcrumbs/index.ts new file mode 100644 index 000000000..1562bdfdc --- /dev/null +++ b/webui/src/lib/components/ui/topbar-breadcrumbs/index.ts @@ -0,0 +1 @@ +export {default as TopbarBreadcrumbs} from "./topbar-breadcrumbs.svelte"; \ No newline at end of file diff --git a/webui/src/lib/components/ui/topbar-breadcrumbs/topbar-breadcrumbs.svelte b/webui/src/lib/components/ui/topbar-breadcrumbs/topbar-breadcrumbs.svelte new file mode 100644 index 000000000..98d661e56 --- /dev/null +++ b/webui/src/lib/components/ui/topbar-breadcrumbs/topbar-breadcrumbs.svelte @@ -0,0 +1,104 @@ + + + + + + + Dashboard + + + + + {#each visibleBreadcrumbs as crumb, i} + {#if 'ellipsis' in crumb} + + + + + {:else if i < visibleBreadcrumbs.length - 1} + + {#if nonLinkablePaths.has(crumb.path)} + {crumb.label} + {:else} + + {crumb.label} + + {/if} + + + {:else} + + + {crumb.label} + + + {/if} + {/each} + + diff --git a/webui/src/lib/components/ui/topbar/topbar.svelte b/webui/src/lib/components/ui/topbar/topbar.svelte index 9a6c9a432..6cdac46d3 100644 --- a/webui/src/lib/components/ui/topbar/topbar.svelte +++ b/webui/src/lib/components/ui/topbar/topbar.svelte @@ -78,6 +78,8 @@ DESDEO + +