⚖️ AI Fairness Training Gym — Detect, Measure, Fix & Explain bias in AI models using RL (PPO) + Gemini AI | Google Solution Challenge 2026
Updated Apr 28, 2026 · HTML
RAD-XAI is an explainable-AI framework for chest X‑ray pneumonia detection that benchmarks CNNs and ViTs using Grad‑CAM-based saliency maps and quantitative XAI metrics (concentration, faithfulness, agreement) to audit model reasoning and surface clinically unsafe failure modes beyond standard AUC/accuracy.
Decision Copilot is a CLI-first, LLM-powered decision analysis tool that runs structured multi-agent reasoning pipelines with full persistence, explainability, and reproducibility.
AI-powered customer churn prediction system with explainable AI (SHAP), interactive dashboard, and conversational insights using LLMs.
CNN-based image classifier with XAI explainability techniques: Saliency Maps, Grad-CAM, LIME, RISE, and uncertainty-based rejection. Built with PyTorch and Optuna.
Explainable AI system for CV evaluation, field detection, and personalized feedback.
Heart failure detection model using explainable AI (decision trees).