# ai-interpretability

Here are 6 public repositories matching this topic...

WFGY 2.0: a semantic reasoning engine for LLMs (MIT). Fixes RAG/OCR drift, collapse, and “ghost matches” via symbolic overlays and logic patches. Autoboot; OneLine & Flagship. ⭐ Star if you explore semantic RAG or hallucination mitigation.

  • Updated Oct 14, 2025
  • Python

A NeuroAI project that uses a Bernoulli-inspired fluid-flow analogy to explore how information moves through neural networks: signal strength in the network is treated as the "pressure" from Bernoulli's equation, the speed of information propagation as the "flow speed" of the fluid, and the activation level as the "opening and closing of valves" (a toy sketch of this mapping follows below).

  • Updated Nov 15, 2025
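The repository's code isn't shown here, so the following is only a minimal sketch of the analogy its description states. All names (`bernoulli_layer`, `RHO`, `P_TOTAL`) are invented for illustration, and it assumes Bernoulli's equation in the incompressible, level-flow form P + ½ρv² = const, with a sigmoid gate standing in for the "valve":

```python
import numpy as np

# Hypothetical constants for the analogy; these names do not come from the repo.
RHO = 1.0      # "fluid density" of the signal medium
P_TOTAL = 1.0  # conserved total "head": P + 0.5 * RHO * v**2 = P_TOTAL

def bernoulli_layer(x, W, b):
    """One feed-forward layer read through the fluid-flow analogy.

    pre-activation magnitude -> "pressure" P (normalized to [0, P_TOTAL])
    derived flow speed       -> v = sqrt(2 * (P_TOTAL - P) / RHO)
    sigmoid gate             -> "valve opening" in [0, 1]
    """
    pre = W @ x + b
    # Treat normalized signal strength as pressure.
    pressure = np.abs(pre) / (np.abs(pre).max() + 1e-9) * P_TOTAL
    # Bernoulli trade-off: higher pressure leaves less head for flow speed.
    flow_speed = np.sqrt(2.0 * np.maximum(P_TOTAL - pressure, 0.0) / RHO)
    # Activation level as how far the valve is open.
    valve = 1.0 / (1.0 + np.exp(-pre))
    return valve * flow_speed  # signal that actually propagates onward

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(3, 4))
b = np.zeros(3)
print(bernoulli_layer(x, W, b))
```

Note the design consequence of the conserved head: a strong signal (high "pressure") leaves less energy for "flow speed", so the sketch encodes a trade-off between signal strength and propagation speed, which is one possible reading of the description's analogy.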

A Recursive Lab for Visual Intelligence: a framework that applies structural pressure, interpretability reasoning, quality assessment, and constraint-based analysis to expose image collapse and machine-vision failure modes, and rebuilds images outside of engine defaults. No AI training, dataset scraping, or derivative generation permitted.

  • Updated Oct 25, 2025
  • Jupyter Notebook
