TrustScoreEval: Trust Scores for AI/LLM Responses — Detect hallucinations, flag misinformation & validate outputs. Build trustworthy AI.
Updated Oct 13, 2025 · Python
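As a rough illustration of what a trust-scoring check can look like, here is a minimal sketch that scores a response by lexical overlap with supporting sources and flags low scores for review. The `trust_score` function and the 0.5 threshold are illustrative assumptions, not TrustScoreEval's actual API.

```python
# Illustrative only: a naive lexical-overlap trust score, not TrustScoreEval's API.
from typing import List


def trust_score(response: str, sources: List[str]) -> float:
    """Fraction of response tokens that appear in any supporting source (0.0-1.0)."""
    resp_tokens = set(response.lower().split())
    if not resp_tokens:
        return 0.0
    source_tokens = set()
    for src in sources:
        source_tokens.update(src.lower().split())
    supported = resp_tokens & source_tokens
    return len(supported) / len(resp_tokens)


answer = "The Eiffel Tower was completed in 1889 in Paris."
evidence = ["The Eiffel Tower, completed in 1889, stands in Paris, France."]

score = trust_score(answer, evidence)
if score < 0.5:  # threshold is arbitrary for this sketch
    print(f"Low trust ({score:.2f}): flag for review")
else:
    print(f"Trust score {score:.2f}: looks grounded in the sources")
```

Real trust scorers typically rely on entailment models or retrieval-grounded checks rather than token overlap; the sketch only shows the shape of the flag-and-review flow.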
A community hub for developers mastering AI collaboration—taming hallucinations, sharpening prompts, and turning chaos into clean code.
Multi-step framework for detecting, correcting, and removing hallucinations in LLM-generated texts through question-based verification, factual correction, and source traceability.
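A hedged sketch of how such a multi-step loop could be wired up is shown below. The `ask_llm` callable and the prompts are hypothetical stand-ins for whatever model client and templates the framework actually uses; only the step order (verification questions, answering against sources, factual correction, source traceability) follows the description above.

```python
# Sketch of a question-based verification pipeline; ask_llm is a hypothetical
# stand-in for a real model client, and the prompts are illustrative only.
from typing import Callable, Dict, List


def verify_and_correct(
    draft: str,
    sources: List[str],
    ask_llm: Callable[[str], str],
) -> Dict[str, str]:
    context = "\n".join(sources)

    # Step 1: question-based verification -- turn the draft's claims into check questions.
    questions = ask_llm(
        f"List one yes/no verification question per factual claim in:\n{draft}"
    )

    # Step 2: answer each question strictly from the provided sources.
    findings = ask_llm(
        "Answer these questions using ONLY the sources below and mark unsupported claims.\n"
        f"Questions:\n{questions}\nSources:\n{context}"
    )

    # Step 3: factual correction -- rewrite the draft, dropping or fixing unsupported claims.
    corrected = ask_llm(
        "Rewrite the draft so every claim is supported by the sources; remove anything unsupported.\n"
        f"Draft:\n{draft}\nFindings:\n{findings}\nSources:\n{context}"
    )

    # Step 4: source traceability -- attach the supporting source snippet to each claim.
    traced = ask_llm(
        "Annotate each sentence with the source snippet that supports it.\n"
        f"Text:\n{corrected}\nSources:\n{context}"
    )

    return {
        "questions": questions,
        "findings": findings,
        "corrected": corrected,
        "traced": traced,
    }
```

Keeping each step as a separate call makes the intermediate artifacts (questions, findings) inspectable, which is what lets unsupported claims be corrected and traced back to a source.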
🪄 Cursed but effective AI prompt collection for wranglers, troubleshooters, and language sorcerers. A playground for expressive chaos, refactoring drama, and goose-level explanations.
🤖 Master AI collaboration by sharing tools, prompts, and experiences to improve workflows and reduce errors in code development.