Prompt engineering framework + evaluation harness for LLM workflows (classification, summarization, extraction).
Updated Nov 20, 2025 · Python
BioReasoner: Training LLMs for grounded scientific reasoning. 0% hallucination rate on citations, 100% format adherence. Cross-domain polymathic insights via Scientific Tribunal evaluation.
A pipeline that gives probabilistic guarantees for reducing contextual hallucinations in LLMs.
Runtime patch that stops LLM loops, drift & hallucinations in real time – works with any model (GPT, Grok, Claude, Llama, Mistral…).
Policy-constrained LoRA fine-tuning to reduce hallucinations in a billing-focused LLM, using a PayFlow (fictional SaaS) use case with before–after evaluation.
A conceptual AI architecture for reducing hallucinations by enforcing invariant, source-anchored knowledge constraints during generation.