Healthcare AI builder focused on workflow automation, interpretable ML, and evaluation safety.
I build healthcare AI projects that are tightly scoped, explainable, and honest about their limits. My portfolio centers on three layers of the stack:
- Workflow support
- Interpretable prediction
- LLM evaluation and safety
Safety-first administrative decision support for prior authorization readiness.
What it shows
- Healthcare workflow realism
- Deterministic scope boundaries
- Requirement-level evidence mapping
- Governance and policy-drift awareness
Interpretable ICU deterioration-risk modeling built as a reproducible scientific artifact.
What it shows
- Transparent ML in a healthcare setting
- Reproducibility and maintenance discipline
- Honest evaluation and artifact governance
- Controlled use of AI coding tools around sensitive ML logic
A safety-focused evaluation harness for clinical-style LLM outputs.
What it shows
- Faithfulness and citation-aware evaluation
- Uncertainty and refusal analysis
- Failure interpretation over model hype
- Benchmark discipline and clear non-claims
Across all three projects, the same themes recur:
- Narrow, defensible scope
- Explainable system behavior
- Strong documentation and reviewer clarity
- Reproducibility and maintenance boundaries
- Healthcare AI judgment over generic demos
Areas of focus:
- Healthcare AI product management
- Admin workflow automation
- LLM evaluation and safety
- Interpretable ML in clinical settings
- Building portfolio artifacts that are useful in real workflows, not just technically interesting