security: [T1-06] fix division by zero in CalculateSystemHealth() (DGB-SEC-003) #375
Conversation
In the overflow-protection branch of DCA::CalculateSystemHealth(), totalDD is scaled down by dividing by 1000. When totalDD is between 1 and 999, this produces 0, causing a division-by-zero crash. Fix: check for scaledDD == 0 and return max health (30000), since a tiny DD supply with large collateral is maximally healthy. Also harden SystemHealthMonitor::CalculateSystemHealth() with overflow protection for the collateral * price multiplication and the health numerator calculation, matching the pattern in dca.cpp.
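A minimal C++ sketch of the guard, assuming the 1000 scale factor and the 30000 cap from the description above; the function shape, parameter names, and the arithmetic around the guard are illustrative, not the actual dca.cpp code:

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative stand-in for the overflow-protection branch of
// DCA::CalculateSystemHealth(); only the scaledDD == 0 guard and the
// 30000 cap are taken from the PR description above.
int64_t CalculateSystemHealthSketch(int64_t totalCollateralValue, int64_t totalDD)
{
    constexpr int64_t MAX_HEALTH = 30000;

    if (totalDD == 0) return MAX_HEALTH;  // no DD supply: maximally healthy

    // Overflow-protection branch: scale both operands down before dividing.
    const int64_t scaledDD = totalDD / 1000;
    if (scaledDD == 0) {
        // totalDD is between 1 and 999, so integer division produced 0.
        // A tiny DD supply backed by large collateral is maximally healthy,
        // so return the cap instead of dividing by zero.
        return MAX_HEALTH;
    }

    const int64_t scaledCollateral = totalCollateralValue / 1000;
    return std::min((scaledCollateral * 100) / scaledDD, MAX_HEALTH);
}
```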
Pull request overview
This PR addresses a critical security vulnerability (DGB-SEC-003) in the DigiDollar system's health calculation functions. The vulnerability involves a division by zero that occurs when overflow protection logic scales down values before division, potentially causing the denominator to become zero through integer division.
Changes:
- Adds a division-by-zero guard in the `DCA::CalculateSystemHealth()` overflow branch when `totalDD` is scaled down
- Adds overflow protection for the collateral value calculation and the health numerator in `SystemHealthMonitor::CalculateSystemHealth()`
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| src/consensus/dca.cpp | Fixes division by zero in overflow branch by guarding scaledDD == 0 case, returning max health (30000) when DD supply is tiny compared to collateral |
| src/digidollar/health.cpp | Adds overflow protection for collateral * price calculation and collateralValue * 100 to prevent integer overflow in health calculation |
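The table describes the health.cpp change only as "overflow protection"; below is a hedged sketch of one common checked-multiplication pattern under that assumption. The helper and function names and the clamp-to-max fallback are hypothetical, not the actual SystemHealthMonitor code:

```cpp
#include <cstdint>
#include <limits>

// Hypothetical helper: true if a * b would exceed int64_t for non-negative inputs.
static bool MultiplyWouldOverflow(int64_t a, int64_t b)
{
    return a > 0 && b > std::numeric_limits<int64_t>::max() / a;
}

// Illustrative collateral valuation with the guard applied before multiplying;
// the real code may scale operands down or clamp differently.
int64_t CollateralValueSketch(int64_t collateral, int64_t priceCents)
{
    if (MultiplyWouldOverflow(collateral, priceCents)) {
        // Value would wrap around; clamp instead of overflowing.
        return std::numeric_limits<int64_t>::max();
    }
    return collateral * priceCents;
}
```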
- Fix division by zero in health.cpp overflow branch (`ddSupply/100 == 0`)
- Add early return for invalid price in SystemHealthMonitor
- Fix comment accuracy: "between 1 and 999 cents", not "< 1000 cents"
- Remove ternary fallback; use explicit guard pattern matching dca.cpp
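A hedged sketch tying these follow-up guards together in one hypothetical ratio helper; the names ddSupply and collateralValue and the 30000 cap come from the commits above, while the function itself and its parameter list are assumptions:

```cpp
#include <cstdint>
#include <limits>

// Hypothetical helper illustrating the follow-up fixes: an early return on an
// invalid price, and an explicit guard when ddSupply / 100 truncates to zero.
int64_t HealthRatioSketch(int64_t collateralValue, int64_t ddSupply, int64_t priceCents)
{
    constexpr int64_t MAX_HEALTH = 30000;

    if (priceCents <= 0) return MAX_HEALTH;  // invalid price: bail out early
    if (ddSupply <= 0) return MAX_HEALTH;    // nothing minted: maximally healthy

    // Normal path: health = collateralValue * 100 / ddSupply.
    if (collateralValue <= std::numeric_limits<int64_t>::max() / 100) {
        const int64_t health = (collateralValue * 100) / ddSupply;
        return health > MAX_HEALTH ? MAX_HEALTH : health;
    }

    // Overflow branch: divide ddSupply by 100 instead of multiplying the numerator.
    // Explicit guard (matching the dca.cpp pattern) rather than a ternary fallback:
    // when ddSupply is between 1 and 99, the scaled divisor is 0.
    const int64_t scaledDD = ddSupply / 100;
    if (scaledDD == 0) return MAX_HEALTH;

    const int64_t health = collateralValue / scaledDD;
    return health > MAX_HEALTH ? MAX_HEALTH : health;
}
```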
OpenSSL 1.1.1w's Configure misparses multi-word flags like "-arch arm64" when they are passed as positional arguments: the shell splits them and Configure treats the second word as a target name, causing a "target already defined" error on ARM64 macOS. Pass CFLAGS/CPPFLAGS as VAR=value assignments, per OpenSSL's official INSTALL docs.

libcurl 8.5.0's fopen.c uses fileno/fdopen (POSIX), which are hidden by the depends system's strict -std=c11. Add -D_GNU_SOURCE for Linux.
Use $$ deferred evaluation for CFLAGS/CPPFLAGS assignments so OS-specific flags (cflags_linux, cppflags_linux) appended by funcs.mk after set_vars are included. Route -fPIC and -D_GNU_SOURCE through proper cflags/cppflags variables instead of bare config_opts to avoid OpenSSL's "Mixing make variables" rejection.
The depends system never defines `$(package)_arflags`, so `ARFLAGS=$($(package)_arflags)` in config_env exported an empty `ARFLAGS=""` to the environment. When CFLAGS/CPPFLAGS are passed as VAR=value assignments (not positional flags), OpenSSL 1.1.1w's Configure sets `$anyuseradd=false` and falls back to reading environment variables. The empty ARFLAGS overrides the target default "r" from Configurations/00-base-templates.conf, producing a Makefile with `ARFLAGS=` (empty). This causes "ar: two different operation options specified" on Linux and "ar: illegal option -- /" on macOS during build_libs, as ar interprets the archive path as flags.

Fix: remove ARFLAGS from config_env entirely, letting Configure use its target default `ARFLAGS="r"`.

Verified locally:
- Linux x86_64: `ar r apps/libapps.a ...` (`ARFLAGS=r` in Makefile)
- macOS ARM64: `ar r apps/libapps.a ...` (`ARFLAGS=r` in Makefile)
- Full build_libs completes successfully on darwin64-arm64-cc
Replace BOOST_CHECK with BOOST_CHECK_MESSAGE in 5 mint validation tests that fail on CI but pass locally. Diagnostic output captures the exact reject reason, script sizes, collateral amounts, and script type identification to help debug the CI-specific failure.
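A sketch of the diagnostic pattern only, not the actual test code: the module and test names and the stand-in variables below are invented, and in the real tests the values come from the validation state and transaction outputs:

```cpp
#define BOOST_TEST_MODULE digidollar_mint_diag_sketch
#include <boost/test/included/unit_test.hpp>

#include <cstddef>
#include <cstdint>
#include <string>

BOOST_AUTO_TEST_CASE(mint_diagnostics_sketch)
{
    const bool accepted = false;                        // stand-in validation result (fails on purpose)
    const std::string rejectReason = "example-reason";  // stand-in for the reject reason
    const std::size_t scriptSize = 34;                  // stand-in for the collateral script size
    const int64_t collateral = 1000000;                 // stand-in for the collateral amount

    // BOOST_CHECK(accepted) would only report that the check failed;
    // BOOST_CHECK_MESSAGE attaches the data needed to debug a CI-only failure.
    BOOST_CHECK_MESSAGE(accepted,
        "mint rejected: reason=" << rejectReason
            << " scriptSize=" << scriptSize
            << " collateral=" << collateral);
}
```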
Replace ClearFreeze() with ClearHistory() in the validation test fixture. ClearFreeze() only clears freeze flags, but UpdateState() (called during ValidateDigiDollarTransaction) recalculates volatility from stale price history left by earlier test suites (e.g., digidollar_health_tests) and re-sets the freeze. This caused all "valid mint" tests to fail with "minting-frozen-volatility" on CI where the test suite ordering (linker-dependent) places health_tests before validation_tests. ClearHistory() resets price history, volatility state, and all freeze flags, ensuring each test starts from a clean state.
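A toy, self-contained monitor to illustrate the reasoning; the class and its volatility rule are invented, and only the ClearFreeze()/ClearHistory()/UpdateState() distinction comes from the commit message. Clearing the freeze flag alone leaves the price history in place, so the next UpdateState() re-freezes, while clearing the history gives each suite a clean start:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Invented, simplified monitor; not the real SystemHealthMonitor.
class MiniHealthMonitorSketch {
public:
    void AddPrice(int64_t priceCents) { m_history.push_back(priceCents); }
    void UpdateState() { m_frozen = IsVolatile(); }              // recomputed from history
    void ClearFreeze() { m_frozen = false; }                     // flag only; history survives
    void ClearHistory() { m_history.clear(); m_frozen = false; } // full reset
    bool IsFrozen() const { return m_frozen; }

private:
    bool IsVolatile() const {
        // Toy rule: any 10%+ move between consecutive samples counts as volatile.
        for (std::size_t i = 1; i < m_history.size(); ++i) {
            const int64_t prev = m_history[i - 1];
            const int64_t diff = m_history[i] > prev ? m_history[i] - prev : prev - m_history[i];
            if (prev > 0 && 10 * diff >= prev) return true;
        }
        return false;
    }

    std::vector<int64_t> m_history;
    bool m_frozen = false;
};
```

With stale prices from an earlier suite loaded, ClearFreeze() followed by UpdateState() reports frozen again, while ClearHistory() followed by UpdateState() does not, mirroring the "minting-frozen-volatility" failure described above.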
The merge with feature/digidollar-v1 silently dropped critical code:

1. DD amount double-counting prevention in ValidateMintTransaction() (sketched below). The OP_RETURN and DD token output both encode ddAmount; without the consistency check, totalDD was counted twice (e.g. 10000 + 10000 = 20000), requiring 2x the collateral and failing all valid mint tests.
2. Missing `#include <digidollar/health.h>`
3. Improved log format for the insufficient-collateral diagnostic

Also adopts the base branch's test file, which includes:
- DD OP_RETURN outputs required for T1-04b NUMS verification
- Proper NUMS key usage via GetCollateralNUMSKey()
- ClearHistory() fix for cross-suite volatility state pollution
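A minimal sketch of the restored consistency check, assuming both outputs decode to an integer ddAmount; the struct and function names are hypothetical, and only the "reject a mismatch and count the amount once" rule comes from the description above:

```cpp
#include <cstdint>
#include <stdexcept>
#include <vector>

// Hypothetical decoded view of one mint: the same ddAmount appears in both
// the OP_RETURN payload and the DD token output.
struct MintOutputsSketch {
    int64_t opReturnDD;  // ddAmount encoded in the OP_RETURN output
    int64_t tokenDD;     // ddAmount encoded in the DD token output
};

int64_t TotalMintedDDSketch(const std::vector<MintOutputsSketch>& mints)
{
    int64_t totalDD = 0;
    for (const auto& m : mints) {
        // Reject a mismatch instead of summing both encodings, which would
        // double the total (e.g. 10000 + 10000 = 20000) and demand 2x collateral.
        if (m.opReturnDD != m.tokenDD) {
            throw std::runtime_error("dd-amount-mismatch");
        }
        totalDD += m.opReturnDD;  // count the amount once
    }
    return totalDD;
}
```

Rejecting the mismatch, rather than trusting one encoding over the other, keeps the OP_RETURN payload and the DD token output authoritative together.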
Reviewed and merged manually. What was applied:
What was NOT applied:
Verification:
Closing as manually merged. Thanks @gto90 for catching this.
Summary
- The `DCA::CalculateSystemHealth()` overflow-protection branch divides `totalDD` by 1000 before using it as a divisor. When `totalDD` is 1-999, the result is 0, crashing the node with a division by zero
- The fix checks for `scaledDD == 0` and returns max health (30000) — tiny DD supply with large collateral is maximally healthy
- Also hardens `SystemHealthMonitor::CalculateSystemHealth()` with overflow protection for `collateral * price` and the health numerator

Test plan
- `totalDD=500` (triggers the overflow branch with large collateral) — should return 30000 instead of crashing
- `totalDD=0` — still returns 30000 (existing behavior)
- `test_digibyte --run_test=digidollar_dca_tests`