
ConstantGuard

ConstantGuard is a static analysis research prototype for detecting timing side-channel risks in cryptographic C code. The repository is intended for research, benchmarking, and reproducible evaluation. It is not production-ready security tooling.

Analysis Pipeline

ConstantGuard combines five analysis stages:

  1. Semantic analysis over CFG and AST-derived metadata.
  2. Forward taint propagation with a three-level lattice (PUBLIC, SECRET, UNKNOWN).
  3. Pattern-based detection for known timing-leak idioms.
  4. Optional SMT-based cross-checking with Z3.
  5. Deduplication plus conservative false-positive filtering when stronger evidence is available.
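
The forward taint propagation in stage 2 can be illustrated with a minimal sketch of the three-level lattice. The ordering (PUBLIC below UNKNOWN below SECRET) and the `join` function below are illustrative assumptions about how such a lattice is typically modeled, not ConstantGuard's actual implementation:

```python
from enum import Enum

class Taint(Enum):
    """Three-level taint lattice; the ordering PUBLIC < UNKNOWN < SECRET
    is an assumed convention for this sketch."""
    PUBLIC = 0
    UNKNOWN = 1
    SECRET = 2

def join(a: Taint, b: Taint) -> Taint:
    """Least upper bound: combining two values keeps the more sensitive label."""
    return a if a.value >= b.value else b

# Forward propagation over a toy assignment sequence:
# the label of `x = a + b` is join(label(a), label(b)).
env = {"key": Taint.SECRET, "msg": Taint.PUBLIC, "tmp": Taint.UNKNOWN}
env["out"] = join(env["key"], env["msg"])  # SECRET: any secret input taints the result
env["idx"] = join(env["msg"], env["tmp"])  # UNKNOWN: public joined with unknown
```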

Detected vulnerability classes:

  • secret_dependent_branch
  • secret_dependent_loop_bound
  • cache_timing_leak
  • secret_dependent_memory_access
  • variable_time_operation
  • potential_spectre_gadget
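
To make the first class concrete, the toy checker below flags `if` conditions that mention an identifier assumed to be secret, applied to a classic early-exit comparison loop. This is a deliberately naive regex sketch for illustration only; ConstantGuard's pattern engine works over CFG and AST metadata, not raw text, and the `SECRET_VARS` set is an assumption:

```python
import re

# Hypothetical set of secret identifiers; ConstantGuard infers these heuristically.
SECRET_VARS = {"key", "secret"}

def secret_dependent_branches(c_source: str) -> list:
    """Naive line-based scan: report (lineno, line) for every `if` whose
    condition references an assumed-secret identifier."""
    findings = []
    for lineno, line in enumerate(c_source.splitlines(), start=1):
        m = re.search(r"\bif\s*\((.*)\)", line)
        if m and any(re.search(rf"\b{v}\b", m.group(1)) for v in SECRET_VARS):
            findings.append((lineno, line.strip()))
    return findings

vulnerable = """
int check(const unsigned char *key, const unsigned char *guess, int n) {
    for (int i = 0; i < n; i++)
        if (key[i] != guess[i])   /* early exit leaks match length */
            return 0;
    return 1;
}
"""
findings = secret_dependent_branches(vulnerable)  # flags the early-exit comparison
```

The flagged comparison terminates early on the first mismatch, so execution time reveals how many leading bytes of the guess are correct.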

Evaluation Snapshot

Latest benchmark run reproduced on March 10, 2026:

  • Detection-level precision: 66.67%
  • Detection-level recall: 69.66%
  • Detection-level F1: 68.13%
  • Detection-level counts: TP=62, FP=31, FN=27
  • Function-level accuracy: 92.86%
  • Function-level false-positive rate: 12.12%
  • Function-level counts: TP=49, FP=4, TN=29, FN=2
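
The detection-level and function-level figures above follow directly from the raw counts; recomputing them is a quick consistency check:

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Standard detection metrics from raw true/false positive and false negative counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)
    return p, r, f1

# Detection level: TP=62, FP=31, FN=27
p, r, f1 = precision_recall_f1(62, 31, 27)  # 66.67%, 69.66%, 68.13%

# Function level: TP=49, FP=4, TN=29, FN=2
tp, fp, tn, fn = 49, 4, 29, 2
accuracy = (tp + tn) / (tp + fp + tn + fn)  # 92.86%
fpr = fp / (fp + tn)                        # 12.12%
```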

Generalization checks:

  • Naming holdout delta: 0.0 pp F1
  • External holdout precision: 100.00%
  • External holdout recall: 100.00%
  • External holdout F1: 100.00%

Baseline comparison:

  • Full ConstantGuard F1: 68.13%
  • Rule baseline F1: 56.14%
  • Improvement: +11.99 pp

Performance snapshot from the February 27, 2026 reference run:

  • Total LOC analyzed: 81,542
  • Total analysis time: 31.99s
  • Average throughput: 2,869 LOC/s
  • Peak RSS observed: 131.66 MB

See RESULTS.md for the detailed metric breakdown.

Repository Layout

ConstantGuard/
  analyzer.py
  src/
  benchmarks/
  experiments/
  scripts/
  README.md
  REPRODUCIBILITY.md
  RESULTS.md
  CONTRIBUTING.md

Notes:

  • benchmarks/generated/ is intentionally excluded from version control; performance experiments regenerate the synthetic inputs on demand.
  • experiments/results/ keeps curated canonical reports.
  • experiments/plots/ keeps the curated public plots generated from those reports.

Installation

Full environment:

pip install -r requirements.txt

Minimal runtime environment:

pip install -r requirements-min.txt

Recommended Python: 3.10+

Quick Start

Analyze one file:

python analyzer.py benchmarks/vulnerable_examples.c

Export JSON:

python analyzer.py benchmarks/vulnerable_examples.c --format json --output report.json

Enable SMT mode:

python analyzer.py benchmarks/vulnerable_examples.c --smt

Generate SARIF:

python analyzer.py benchmarks/vulnerable_examples.c --format sarif --output report.sarif

CLI Exit Codes

  • 0: analysis completed and no vulnerabilities were reported
  • 1: vulnerabilities were reported, or the analysis failed

For CI usage, treat 1 as "findings or failure" and inspect JSON or SARIF output to disambiguate.

Reproducing Experiments

Run the full evaluation pipeline:

python experiments/run_all_evaluations.py

Or run steps individually:

python experiments/evaluate_benchmark.py
python experiments/evaluate_holdout_naming.py
python experiments/evaluate_external_holdout.py
python experiments/evaluate_rule_baseline.py
python experiments/analyze_error_attribution.py
python experiments/performance_benchmark.py
python experiments/generate_plots.py

See REPRODUCIBILITY.md for expected outputs and metrics.

Current Coverage Notes

  • CVE-2013-0169 (Lucky Thirteen): missed
  • CVE-2016-2107 (AES-NI padding oracle): detected
  • CVE-2016-0702 (RSA CRT timing): partial detection
  • CVE-2017-5715 (Spectre v1): partial detection
  • CVE-2011-1945 (ECDSA timing): detected

Known Limitations

  • Intra-procedural analysis only
  • No industrial-strength C front-end
  • Secret parameter inference is heuristic
  • Pattern components can still generate false positives
  • SMT mode is experimental

Intended Use

Recommended:

  • research prototypes and empirical studies
  • educational use for timing-side-channel patterns
  • early-stage security review support with manual validation

Not recommended:

  • fully automated production CI gating
  • compliance-only evidence without manual review

Contributing

See CONTRIBUTING.md.

License

MIT. See LICENSE.
